WorldWideScience

Sample records for metaheuristics methods applied

  1. Nonparametric Comparison of Two Dynamic Parameter Setting Methods in a Meta-Heuristic Approach

    Directory of Open Access Journals (Sweden)

    Seyhun HEPDOGAN

    2007-10-01

    Full Text Available Meta-heuristics are commonly used to solve combinatorial problems in practice. Many approaches provide very good quality solutions in a short amount of computational time; however, most meta-heuristics rely on parameters to tune their performance for particular problems, and selecting these parameters before solving the problem can require considerable time. This paper investigates the problem of setting parameters in a typical meta-heuristic called Meta-RaPS (Metaheuristic for Randomized Priority Search). Meta-RaPS is a promising meta-heuristic optimization method that has been applied to different types of combinatorial optimization problems and has achieved very good performance compared with other meta-heuristic techniques. To solve a combinatorial problem, Meta-RaPS uses two well-defined stages at each iteration: construction and local search. After a number of iterations, the best solution is reported. Meta-RaPS performance depends on the fine-tuning of two main parameters, the priority percentage and the restriction percentage, which are used during the construction stage. This paper presents two different dynamic parameter setting methods for Meta-RaPS, which tune the parameters while a solution is being found. To compare the two approaches, nonparametric statistical tests are used, since the solutions are not normally distributed. Results from both dynamic parameter setting methods are reported.
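
    The abstract does not spell out the construction rule, but in the Meta-RaPS/GRASP family the two parameters steer a greedy randomized construction. A minimal sketch, assuming a generic priority function and expressing both percentages as fractions in [0, 1]; all names are illustrative:

```python
import random

def meta_raps_construct(elements, priority, feasible,
                        priority_pct=0.6, restriction_pct=0.2):
    """One Meta-RaPS-style construction pass (sketch).

    priority_pct:    probability of taking the single best-priority element.
    restriction_pct: width of the restricted candidate list otherwise.
    """
    solution = []
    candidates = [e for e in elements if feasible(solution, e)]
    while candidates:
        ranked = sorted(candidates, key=priority)   # lower value = better
        if random.random() < priority_pct:
            chosen = ranked[0]                      # pure greedy move
        else:
            best, worst = priority(ranked[0]), priority(ranked[-1])
            limit = best + restriction_pct * (worst - best)
            rcl = [e for e in ranked if priority(e) <= limit]
            chosen = random.choice(rcl)             # randomized move
        solution.append(chosen)
        candidates = [e for e in elements
                      if e not in solution and feasible(solution, e)]
    return solution
```

    A local search pass would follow each construction, and the dynamic parameter setting methods discussed in the paper adjust priority_pct and restriction_pct between iterations instead of fixing them in advance.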

  2. METAHEURISTIC OPTIMIZATION METHODS FOR PARAMETERS ESTIMATION OF DYNAMIC SYSTEMS

    Directory of Open Access Journals (Sweden)

    Andrei V. Panteleev

    2017-01-01

    Full Text Available The article considers the use of metaheuristic methods of constrained global optimization (“Big Bang - Big Crunch”, “Fireworks Algorithm”, “Grenade Explosion Method”) for estimating the parameters of dynamic systems described by algebraic-differential equations. Parameter estimation is based on observations of the mathematical model's behavior: parameter values are obtained by minimizing a criterion that measures the total squared deviation of the state vector coordinates from the values observed at different instants of time. Parallelepiped-type (box) constraints are imposed on the parameter values. The metaheuristic methods of constrained global optimization used to solve such problems do not guarantee the optimum, but they allow a solution of rather good quality to be obtained in an acceptable amount of time. An algorithm for applying the metaheuristic methods is given. Alongside explicit methods for solving systems of algebraic-differential equations, it is convenient to use implicit methods for solving systems of ordinary differential equations. Two parameter estimation problems with different mathematical models are considered: in the first example, a linear mathematical model describes the changing parameters of a chemical reaction, and in the second, a nonlinear mathematical model describes predator-prey dynamics, characterizing the changes in both populations. For each example, computational results are given for all three optimization methods, together with recommendations on how to choose the methods' parameters. The numerical results demonstrate the efficiency of the proposed approach: the estimated parameter points differ only slightly from the best known solutions, which were obtained by other means. To refine the results, one can apply hybrid schemes that combine classical zero-, first-, and second-order optimization methods with metaheuristics.
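
    The criterion described here is an ordinary least-squares fit of the model trajectory to observations under box constraints. A minimal sketch for the predator-prey example, using SciPy's differential evolution as a stand-in global optimizer (the article's "Big Bang - Big Crunch", fireworks, and grenade explosion methods are not in standard libraries); all numerical values are illustrative:

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import differential_evolution

def lotka_volterra(t, x, a, b, c, d):
    """Predator-prey dynamics (the article's second, nonlinear example)."""
    prey, pred = x
    return [a * prey - b * prey * pred, c * prey * pred - d * pred]

# Synthetic "observations" generated from known parameters, for illustration.
theta_true = (1.0, 0.1, 0.075, 1.5)
t_obs = np.linspace(0.0, 10.0, 25)
x0 = [10.0, 5.0]
obs = solve_ivp(lotka_volterra, (0.0, 10.0), x0,
                t_eval=t_obs, args=theta_true).y

def criterion(theta):
    """Total squared deviation of model states from observed states."""
    sol = solve_ivp(lotka_volterra, (0.0, 10.0), x0,
                    t_eval=t_obs, args=tuple(theta))
    return float(np.sum((sol.y - obs) ** 2))

# Parallelepiped (box) constraints on the parameters.
bounds = [(0.5, 2.0), (0.01, 0.5), (0.01, 0.5), (0.5, 2.0)]
result = differential_evolution(criterion, bounds, seed=1)
print(result.x, result.fun)
```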

  3. Trajectory and Population Metaheuristics applied to Combinatorial Optimization Problems

    Directory of Open Access Journals (Sweden)

    Natalia Alancay

    2016-04-01

    Full Text Available Everyday life presents a multitude of problems that require a solution meeting a set of requirements in the most appropriate way, maximizing or minimizing a certain value. However, finding an optimal solution for certain optimization problems can be an extremely difficult or even impossible task, because when a problem becomes large enough we would have to look through a huge number of possible solutions to find the most efficient one, that is, the one with the lowest cost. There are various ways to obtain feasible solutions for practical use. One strategy that has gained wide acceptance, and that has been acquiring an important formal body of theory, is the metaheuristics: established strategies for traversing and exploring the solution space of a problem, usually in a randomized and iterative way. The main advantage of these techniques is their flexibility and robustness, which allows them to be applied to a wide range of problems. In this work we focus on a trajectory-based metaheuristic, Simulated Annealing, and a population-based Cellular Genetic Algorithm, with the objective of studying and comparing the results obtained when applying them to the resolution of a set of academic combinatorial optimization problems.
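
    Since Simulated Annealing recurs throughout these records, a minimal trajectory skeleton with the standard Metropolis acceptance rule may be useful for orientation (generic, not the paper's exact configuration):

```python
import math
import random

def simulated_annealing(init, neighbor, cost, t0=1.0, alpha=0.95, steps=10_000):
    """Trajectory search with Metropolis acceptance (sketch)."""
    current = best = init
    t = t0
    for _ in range(steps):
        cand = neighbor(current)
        delta = cost(cand) - cost(current)
        # Always accept improvements; accept worsening moves with
        # probability exp(-delta / t), which shrinks as t cools.
        if delta <= 0 or random.random() < math.exp(-delta / t):
            current = cand
            if cost(current) < cost(best):
                best = current
        t *= alpha  # geometric cooling schedule
    return best
```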

  4. Theory and principled methods for the design of metaheuristics

    CERN Document Server

    Borenstein, Yossi

    2013-01-01

    Metaheuristics, and evolutionary algorithms in particular, are known to provide efficient, adaptable solutions for many real-world problems, but the often informal way in which they are defined and applied has led to misconceptions, and even successful applications are sometimes the outcome of trial and error. Ideally, theoretical studies should explain when and why metaheuristics work, but the challenge is huge: mathematical analysis requires significant effort even for simple scenarios, and real-life problems are usually quite complex. In this book the editors establish a bridge between theory and practice.

  5. A hybrid approach for efficient anomaly detection using metaheuristic methods.

    Science.gov (United States)

    Ghanem, Tamer F; Elkilani, Wail S; Abdul-Kader, Hatem M

    2015-07-01

    Network intrusion detection based on anomaly detection techniques has a significant role in protecting networks and systems against harmful activities. Different metaheuristic techniques have been used to generate anomaly detectors; yet the reported literature has not studied the use of multi-start metaheuristic methods for detector generation. This paper proposes a hybrid approach for anomaly detection in large-scale datasets, using detectors generated by a multi-start metaheuristic method and genetic algorithms. The approach takes some inspiration from negative-selection-based detector generation. It is evaluated on the NSL-KDD dataset, a modified version of the widely used KDD CUP 99 dataset. The results show its effectiveness in generating a suitable number of detectors, with an accuracy of 96.1% compared with competing machine learning algorithms.
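
    The abstract does not detail the detector-generation loop, so the following is only a generic multi-start skeleton of the kind the paper builds on: refine many independent starting points and keep the best result (all names hypothetical):

```python
def multi_start(random_solution, local_search, quality, starts=50):
    """Generic multi-start skeleton (sketch)."""
    best = None
    for _ in range(starts):
        candidate = local_search(random_solution())  # independent restart
        if best is None or quality(candidate) > quality(best):
            best = candidate
    return best
```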

  6. A hybrid approach for efficient anomaly detection using metaheuristic methods

    Directory of Open Access Journals (Sweden)

    Tamer F. Ghanem

    2015-07-01

    Full Text Available Network intrusion detection based on anomaly detection techniques has a significant role in protecting networks and systems against harmful activities. Different metaheuristic techniques have been used to generate anomaly detectors; yet the reported literature has not studied the use of multi-start metaheuristic methods for detector generation. This paper proposes a hybrid approach for anomaly detection in large-scale datasets, using detectors generated by a multi-start metaheuristic method and genetic algorithms. The approach takes some inspiration from negative-selection-based detector generation. It is evaluated on the NSL-KDD dataset, a modified version of the widely used KDD CUP 99 dataset. The results show its effectiveness in generating a suitable number of detectors, with an accuracy of 96.1% compared with competing machine learning algorithms.

  7. Hybrid Metaheuristics

    CERN Document Server

    2013-01-01

    The main goal of this book is to provide a state of the art of hybrid metaheuristics. The book provides a complete background that enables readers to design and implement hybrid metaheuristics to solve complex optimization problems (continuous/discrete, mono-objective/multi-objective, optimization under uncertainty) in a diverse range of application domains. Readers learn to solve large-scale problems quickly and efficiently by combining metaheuristics with complementary metaheuristics, mathematical programming, constraint programming, and machine learning. Numerous real-world examples of problems and solutions demonstrate how hybrid metaheuristics are applied in fields such as networks, logistics and transportation, biomedicine, engineering design, and scheduling.

  8. Metaheuristic Algorithms Applied to Bioenergy Supply Chain Problems: Theory, Review, Challenges, and Future

    Directory of Open Access Journals (Sweden)

    Krystel K. Castillo-Villar

    2014-11-01

    Full Text Available Bioenergy is a new source of energy that accounts for a substantial portion of renewable energy production in many countries. The production of bioenergy is expected to increase due to its unique advantages, such as the absence of harmful emissions and its abundance. Supply-related problems are the main obstacles precluding increased use of biomass (which is bulky and has low energy density) to produce bioenergy. To overcome this challenge, large-scale optimization models need to be solved to enable decision makers to plan, design, and manage bioenergy supply chains; the use of effective optimization approaches is therefore of great importance. Traditional mathematical methods (such as linear, integer, and mixed-integer programming) frequently fail to find optimal solutions for non-convex and/or large-scale models, whereas metaheuristics are efficient approaches for finding near-optimal solutions using fewer computational resources. This paper presents a comprehensive review studying and analyzing the application of metaheuristics to bioenergy supply chain models, as well as the particular challenges of the mathematical problems arising in the bioenergy supply chain field. The reviewed metaheuristics include: (1) population approaches, such as ant colony optimization (ACO), the genetic algorithm (GA), particle swarm optimization (PSO), and the bee colony algorithm (BCA); and (2) trajectory approaches, such as tabu search (TS) and simulated annealing (SA). Based on the outcomes of this literature review, the integrated design and planning of bioenergy supply chains has been addressed primarily with the GA, production process optimization primarily with the GA and PSO, and the supply chain network design problem with the GA and ACO. The truck and task scheduling problem has been solved using SA and TS, where the trajectory-based methods proved to outperform the population-based ones.

  9. Optimization in engineering sciences approximate and metaheuristic methods

    CERN Document Server

    Stefanoiu, Dan; Popescu, Dumitru; Filip, Florin Gheorghe; El Kamel, Abdelkader

    2014-01-01

    The purpose of this book is to present the main metaheuristic, approximate, and stochastic methods for the optimization of complex systems in the engineering sciences. It has been written within the framework of the European Union project ERRIC (Empowering Romanian Research on Intelligent Information Technologies), funded by the EU's FP7 Research Potential program and developed in co-operation between French and Romanian teaching researchers. Through the principles of various proposed algorithms (with additional references), this book allows the reader to explore various methods of optimization.

  10. A novel hybrid meta-heuristic technique applied to the well-known benchmark optimization problems

    Science.gov (United States)

    Abtahi, Amir-Reza; Bijari, Afsane

    2017-09-01

    In this paper, a hybrid meta-heuristic algorithm based on the imperialistic competition algorithm (ICA), harmony search (HS), and simulated annealing (SA) is presented. The body of the proposed hybrid algorithm is based on ICA. The hybrid algorithm inherits the advantages of the harmony creation process of the HS algorithm to improve the exploitation phase of ICA, and uses SA to strike a balance between the exploration and exploitation phases. The proposed hybrid algorithm is compared with several meta-heuristic methods, including the genetic algorithm (GA), HS, and ICA, on several well-known benchmark instances. Comprehensive experiments and statistical analysis on standard benchmark functions confirm the superiority of the proposed method over the other algorithms. The proposed hybrid algorithm is promising and can be applied to several real-life engineering and management problems.
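
    The harmony-creation step borrowed from HS can be sketched as follows; the hmcr, par, and bw defaults are common illustrative values, not the paper's settings:

```python
import random

def improvise(memory, bounds, hmcr=0.9, par=0.3, bw=0.05):
    """One harmony search improvisation step (sketch).

    hmcr: probability of drawing a value from harmony memory.
    par:  probability of pitch-adjusting a memorized value.
    bw:   bandwidth of the pitch adjustment.
    """
    new = []
    for i, (lo, hi) in enumerate(bounds):
        if random.random() < hmcr:
            value = random.choice(memory)[i]        # memory consideration
            if random.random() < par:
                value += random.uniform(-bw, bw)    # pitch adjustment
                value = min(max(value, lo), hi)
        else:
            value = random.uniform(lo, hi)          # random selection
        new.append(value)
    return new
```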

  11. The gravitational attraction algorithm: a new metaheuristic applied to a nuclear reactor core design optimization problem

    International Nuclear Information System (INIS)

    Sacco, Wagner F.; Oliveira, Cassiano R.E. de

    2005-01-01

    A new metaheuristic called the 'Gravitational Attraction Algorithm' (GAA) is introduced in this article. It is built on an analogy with the gravitational force field, where a body attracts another in proportion to both masses and inversely to their distance. GAA is a population-based algorithm in which, first of all, the solutions are clustered using the Fuzzy Clustering Means (FCM) algorithm. The gravitational force of each individual in relation to each cluster is then evaluated, and the individual (solution) is displaced toward the cluster exerting the greatest attractive force. Once inside this cluster, the solution receives small stochastic variations, performing a local exploration. The solutions are then crossed over and the process starts all over again. The parameters required by GAA are the 'diversity factor', which is used to create random diversity in a fashion similar to the genetic algorithm's mutation, and the number of clusters for FCM. GAA is applied to the reactor core design optimization problem, which consists in adjusting several reactor cell parameters in order to minimize the average peak factor in a 3-enrichment-zone reactor under operational restrictions. This problem was previously attacked using the canonical genetic algorithm (GA) and a niching genetic algorithm (NGA), so the new metaheuristic is compared to those two algorithms. Given the same computational effort, GAA reaches the best results, showing its potential for other applications in the nuclear engineering field such as, for instance, the nuclear core reload optimization problem. (author)
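
    A hedged sketch of the cluster-selection step described above. The abstract does not say how a cluster's 'mass' is defined, so treating it as a cluster quality score is an assumption; the force follows the stated analogy (proportional to mass, inversely proportional to distance):

```python
import numpy as np

def strongest_cluster(solution, centroids, masses, eps=1e-9):
    """Index of the cluster pulling a solution hardest (GAA step, sketch)."""
    forces = []
    for centroid, mass in zip(centroids, masses):
        d = np.linalg.norm(solution - centroid) + eps  # avoid division by zero
        forces.append(mass / d)
    return int(np.argmax(forces))
```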

  12. A Meta-Heuristic Applying for the Transportation of Wood Raw Material

    Directory of Open Access Journals (Sweden)

    Erhan Çalışkan

    2009-04-01

    Full Text Available The primary product of Turkish forestry is wood raw material. An operational organization is thus necessary to transport this main product to depots and then to consumers without quality or volume loss. This organization starts from the harvesting area in the stand and continues to roadside depots or ramps, to main depots, and even to manufacturers. Computer-assisted models, which aim to find the optimum transport routes, can be used to solve this quite complex problem. This study evaluates the importance and current status of wood transportation, the classification of wood transportation, computer-assisted heuristic and meta-heuristic methods, and the possibilities of using these methods in the transportation of wood raw material.

  13. The gravitational attraction algorithm: a new metaheuristic applied to a nuclear reactor core design optimization problem

    Energy Technology Data Exchange (ETDEWEB)

    Sacco, Wagner F.; Oliveira, Cassiano R.E. de [Georgia Institute of Technology, Atlanta, GA (United States). George W. Woodruff School of Mechanical Engineering. Nuclear and Radiological Engineering Program]. E-mail: wagner.sacco@me.gatech.edu; cassiano.oliveira@nre.gatech.edu; Pereira, Claudio M.N.A. [Instituto de Engenharia Nuclear (IEN), Rio de Janeiro, RJ (Brazil)]. E-mail: cmnap@ien.gov.br

    2005-07-01

    A new metaheuristic called the 'Gravitational Attraction Algorithm' (GAA) is introduced in this article. It is built on an analogy with the gravitational force field, where a body attracts another in proportion to both masses and inversely to their distance. GAA is a population-based algorithm in which, first of all, the solutions are clustered using the Fuzzy Clustering Means (FCM) algorithm. The gravitational force of each individual in relation to each cluster is then evaluated, and the individual (solution) is displaced toward the cluster exerting the greatest attractive force. Once inside this cluster, the solution receives small stochastic variations, performing a local exploration. The solutions are then crossed over and the process starts all over again. The parameters required by GAA are the 'diversity factor', which is used to create random diversity in a fashion similar to the genetic algorithm's mutation, and the number of clusters for FCM. GAA is applied to the reactor core design optimization problem, which consists in adjusting several reactor cell parameters in order to minimize the average peak factor in a 3-enrichment-zone reactor under operational restrictions. This problem was previously attacked using the canonical genetic algorithm (GA) and a niching genetic algorithm (NGA), so the new metaheuristic is compared to those two algorithms. Given the same computational effort, GAA reaches the best results, showing its potential for other applications in the nuclear engineering field such as, for instance, the nuclear core reload optimization problem. (author)

  14. Metaheuristics and optimization in civil engineering

    CERN Document Server

    Bekdaş, Gebrail; Nigdeli, Sinan

    2016-01-01

    This timely book deals with a current topic, i.e. the applications of metaheuristic algorithms, with a primary focus on optimization problems in civil engineering. The first chapter offers a concise overview of different kinds of metaheuristic algorithms, explaining their advantages in solving complex engineering problems that cannot be effectively tackled by traditional methods, and citing the most important works for further reading. The remaining chapters report on advanced studies of the applications of certain metaheuristic algorithms to specific engineering problems. Genetic algorithm, bat algorithm, cuckoo search, harmony search and simulated annealing are just some of the methods presented and discussed step by step in real-application contexts, in which they are often used in combination with each other. Thanks to its synthetic yet meticulous and practice-oriented approach, the book is a perfect guide for graduate students, researchers and professionals willing to apply metaheuristic algorithms in...

  15. A New Improved Hybrid Meta-Heuristics Method for Unit Commitment with Nonlinear Fuel Cost Function

    Science.gov (United States)

    Okawa, Kenta; Mori, Hiroyuki

    In this paper, a new improved hybrid meta-heuristic method is proposed to solve the unit commitment problem effectively. The objective is to minimize operation cost while satisfying the power balance and other constraints. The problem may be formulated as a nonlinear mixed-integer program; in other words, the unit commitment problem is hard to solve. Therefore, this paper makes use of a hybrid meta-heuristic method with two layers: Layer 1 determines the on/off status of generators with tabu search (TS), while Layer 2 evaluates generator outputs with evolutionary particle swarm optimization (EPSO). The construction phase of the Greedy Randomized Adaptive Search Procedure (GRASP) is used to create initial feasible solutions efficiently. Three kinds of meta-heuristic methods, TS, EPSO and GRASP, are thus combined to solve the problem. In addition, a parallel scheme of EPSO is developed to improve computational efficiency as well as accuracy. The effectiveness of the proposed method is tested on sample systems.

  16. Applying a multiobjective metaheuristic inspired by honey bees to phylogenetic inference.

    Science.gov (United States)

    Santander-Jiménez, Sergio; Vega-Rodríguez, Miguel A

    2013-10-01

    The development of increasingly popular multiobjective metaheuristics has allowed bioinformaticians to deal with optimization problems in computational biology where multiple objective functions must be taken into account. One of the most relevant research topics that can benefit from these techniques is phylogenetic inference. Throughout the years, different researchers have proposed their own views on the reconstruction of ancestral evolutionary relationships among species. As a result, biologists often report different phylogenetic trees from the same dataset when considering distinct optimality principles. In this work, we detail a multiobjective swarm intelligence approach based on the novel Artificial Bee Colony algorithm for inferring phylogenies. The aim of this paper is to propose a complementary view of phylogenetics according to the maximum parsimony and maximum likelihood criteria, in order to generate a set of phylogenetic trees that represent a compromise between these principles. Experimental results on a variety of nucleotide datasets and statistical studies highlight the relevance of the proposal with regard to other multiobjective algorithms and state-of-the-art biological methods. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.

  17. A comparative analysis of meta-heuristic methods for power management of a dual energy storage system for electric vehicles

    International Nuclear Information System (INIS)

    Trovão, João P.; Antunes, Carlos Henggeler

    2015-01-01

    Highlights:
    • Two meta-heuristic approaches are evaluated for multi-ESS management in electric vehicles.
    • An online global energy management strategy with two different layers is studied.
    • Meta-heuristic techniques are used to define optimized energy sharing mechanisms.
    • A comparative analysis for the ARTEMIS driving cycle is addressed.
    • The effectiveness of the double-layer management with meta-heuristics is presented.

    Abstract: This work focuses on the performance evaluation of two meta-heuristic approaches, simulated annealing and particle swarm optimization, for the power management of a dual energy storage system for electric vehicles. The proposed strategy is based on a global energy management system with two layers: long-term (energy) and short-term (power) management. A rule-based system handles the long-term (strategic) layer, while for the short-term (action) layer meta-heuristic techniques are developed to define optimized online energy sharing mechanisms. Simulations have been run for several driving cycles to validate the proposed strategy. A comparative analysis for the ARTEMIS driving cycle is presented, evaluating three performance indicators (computation time, final battery state of charge, and minimum supercapacitor state of charge) as a function of the input parameters. The results show the effectiveness of a double-layer management system using meta-heuristic methods for online power management, supported by a rule set that restricts the search space.
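
    For reference, the canonical particle swarm update that such a short-term layer could employ is sketched below (a generic PSO step, not the paper's power-sharing formulation; w, c1, and c2 are common illustrative settings):

```python
import numpy as np

def pso_step(x, v, pbest, gbest, w=0.7, c1=1.5, c2=1.5):
    """One particle swarm update (sketch): inertia plus random pulls toward
    the particle's own best point and the swarm's best point."""
    r1 = np.random.rand(*x.shape)
    r2 = np.random.rand(*x.shape)
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    return x + v, v
```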

  18. Metaheuristics for bi-level optimization

    CERN Document Server

    2013-01-01

    This book provides a complete background on metaheuristics to solve complex bi-level optimization problems (continuous/discrete, mono-objective/multi-objective) in a diverse range of application domains. Readers learn to solve large scale bi-level optimization problems by efficiently combining metaheuristics with complementary metaheuristics and mathematical programming approaches. Numerous real-world examples of problems demonstrate how metaheuristics are applied in such fields as networks, logistics and transportation, engineering design, finance and security.

  19. Trajectory metaheuristic algorithms to optimize combinatorial problems

    Directory of Open Access Journals (Sweden)

    Natalia Alancay

    2016-12-01

    Full Text Available The application of metaheuristic algorithms to optimization problems has been very important during the last decades. The main advantage of these techniques is their flexibility and robustness, which allows them to be applied to a wide range of problems. In this work we concentrate on trajectory metaheuristics based on Simulated Annealing, Tabu Search and Variable Neighborhood Search, whose main characteristic is that they start from a point and, by exploring the neighborhood, vary the current solution, forming a trajectory. Using instances of selected combinatorial problems, a computational experiment is carried out to illustrate the behavior of these algorithmic methods. The main objective of this work is to study and compare the results obtained by the selected trajectory metaheuristics when applied to the resolution of a set of academic combinatorial optimization problems.
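
    A minimal sketch of the basic Variable Neighborhood Search loop described above (the local-search step that normally follows shaking is elided; all names illustrative):

```python
def vns(initial, neighborhoods, cost, max_iters=1000):
    """Basic VNS (sketch). `neighborhoods` is a list of functions, each
    returning a random neighbor in a progressively larger neighborhood."""
    current = initial
    for _ in range(max_iters):
        k = 0
        while k < len(neighborhoods):
            candidate = neighborhoods[k](current)   # shaking
            # (a local search on `candidate` would normally go here)
            if cost(candidate) < cost(current):
                current, k = candidate, 0           # move and restart at k = 0
            else:
                k += 1                              # try a larger neighborhood
    return current
```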

  20. Applying Stochastic Metaheuristics to the Problem of Data Management in a Multi-Tenant Database Cluster

    Directory of Open Access Journals (Sweden)

    E. A. Boytsov

    2014-01-01

    Full Text Available A multi-tenant database cluster is a data-storage subsystem for cloud applications with a multi-tenant architecture. The cluster is a set of relational database servers with a single entry point, combined into one unit by a cluster controller. The system is intended for applications developed according to the Software as a Service (SaaS) paradigm, and it places tenants on database servers so as to provide isolation, data backup, and the most effective use of the available computational power. One of the most important problems in such a system is the effective distribution of data across servers, which affects the load on individual cluster nodes and fault tolerance. This paper considers a data-management approach based on a load-balancing quality measure function, used both during the initial placement of new tenants and during placement optimization steps. Standard metaheuristic optimization schemes such as simulated annealing and tabu search are used to find a better tenant placement.
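
    The paper's exact quality measure is not given in the abstract; a hypothetical load-balancing measure of the kind described, which both the initial placement and the optimization steps could maximize, might look like this:

```python
import statistics

def balance_quality(placement, load, servers):
    """Hypothetical load-balancing quality measure (sketch): lower variance
    of per-server load means a better-balanced placement. Assumes at
    least two servers."""
    per_server = {s: 0.0 for s in servers}
    for tenant, server in placement.items():
        per_server[server] += load[tenant]
    return -statistics.variance(per_server.values())  # higher is better
```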

  1. A hybrid metaheuristic method to optimize the order of the sequences in continuous-casting

    Directory of Open Access Journals (Sweden)

    Achraf Touil

    2016-06-01

    Full Text Available In this paper, we propose a hybrid metaheuristic algorithm to maximize production and minimize processing time in steel-making and continuous casting (SCC) by optimizing the order of the sequences, where a sequence is a group of jobs with the same chemical characteristics. Based on the work of Bellabdaoui and Teghem (2006) [Bellabdaoui, A., & Teghem, J. (2006). A mixed-integer linear programming model for the continuous casting planning. International Journal of Production Economics, 104(2), 260-270.], a mixed-integer linear program for scheduling steelmaking continuous casting production is presented to minimize the makespan, with the order of the sequences in continuous casting assumed to be fixed. The main contribution is to analyze an additional way to determine the optimal order of sequences. A hybrid method based on simulated annealing and a genetic algorithm restricted by a tabu list (SA-GA-TL) is used to obtain the optimal order. After parameter tuning, the proposed algorithm is tested on different instances using a .NET application and the commercial solver CPLEX v12.5. The results are compared with those obtained by SA-TL (simulated annealing restricted by a tabu list).

  2. A surrogate-based metaheuristic global search method for beam angle selection in radiation treatment planning.

    Science.gov (United States)

    Zhang, H H; Gao, S; Chen, W; Shi, L; D'Souza, W D; Meyer, R R

    2013-03-21

    An important element of radiation treatment planning for cancer therapy is the selection of beam angles (out of all possible coplanar and non-coplanar angles in relation to the patient) in order to maximize the delivery of radiation to the tumor site and minimize radiation damage to nearby organs-at-risk. This category of combinatorial optimization problem is particularly difficult because direct evaluation of the quality of treatment corresponding to any proposed selection of beams requires the solution of a large-scale dose optimization problem involving many thousands of variables that represent doses delivered to volume elements (voxels) in the patient. However, if the quality of angle sets can be accurately estimated without expensive computation, a large number of angle sets can be considered, increasing the likelihood of identifying a very high quality set. Using a computationally efficient surrogate beam set evaluation procedure based on single-beam data extracted from plans employing equally-spaced beams (eplans), we have developed a global search metaheuristic process based on the nested partitions framework for this combinatorial optimization problem. The surrogate scoring mechanism allows us to assess thousands of beam set samples within a clinically acceptable time frame. Tests on difficult clinical cases demonstrate that the beam sets obtained via our method are of superior quality.
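
    A hedged illustration of why the surrogate makes the search affordable: if a candidate beam set can be scored from precomputed single-beam data, evaluation becomes a cheap lookup-and-sum rather than a large-scale dose optimization. The additive aggregation below is an assumption for illustration only, not the paper's actual scoring mechanism (which is embedded in a nested partitions search):

```python
def surrogate_score(beam_set, single_beam_score):
    """Approximate a beam set's quality from per-angle scores extracted
    from equally-spaced-beam plans (sketch; additive model assumed)."""
    return sum(single_beam_score[angle] for angle in beam_set)
```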

  3. A surrogate-based metaheuristic global search method for beam angle selection in radiation treatment planning

    International Nuclear Information System (INIS)

    Zhang, H H; D’Souza, W D; Gao, S; Shi, L; Chen, W; Meyer, R R

    2013-01-01

    An important element of radiation treatment planning for cancer therapy is the selection of beam angles (out of all possible coplanar and non-coplanar angles in relation to the patient) in order to maximize the delivery of radiation to the tumor site and minimize radiation damage to nearby organs-at-risk. This category of combinatorial optimization problem is particularly difficult because direct evaluation of the quality of treatment corresponding to any proposed selection of beams requires the solution of a large-scale dose optimization problem involving many thousands of variables that represent doses delivered to volume elements (voxels) in the patient. However, if the quality of angle sets can be accurately estimated without expensive computation, a large number of angle sets can be considered, increasing the likelihood of identifying a very high quality set. Using a computationally efficient surrogate beam set evaluation procedure based on single-beam data extracted from plans employing equally-spaced beams (eplans), we have developed a global search metaheuristic process based on the nested partitions framework for this combinatorial optimization problem. The surrogate scoring mechanism allows us to assess thousands of beam set samples within a clinically acceptable time frame. Tests on difficult clinical cases demonstrate that the beam sets obtained via our method are of superior quality. (paper)

  4. Metaheuristics for medicine and biology

    CERN Document Server

    Talbi, El-Ghazali

    2017-01-01

    This book highlights recent research on metaheuristics for biomedical engineering, addressing both theoretical and applied aspects. Given the multidisciplinary nature of biomedical image analysis, the topic has become one of the most central in computer science, computer engineering, and electrical and electronic engineering, and has attracted the interest of many researchers. To deal with these problems, many traditional and recent methods, algorithms, and techniques have been proposed; among them, metaheuristics are the most common choice. This book provides essential content for senior and young researchers interested in methodologies for implementing metaheuristics to help solve biomedical engineering problems.

  5. Stable and accurate methods for identification of water bodies from Landsat series imagery using meta-heuristic algorithms

    Science.gov (United States)

    Gamshadzaei, Mohammad Hossein; Rahimzadegan, Majid

    2017-10-01

    Identification of water extents in Landsat images is challenging due to surfaces with reflectance similar to that of water. The objective of this study is to provide stable and accurate methods for identifying water extents in Landsat images based on meta-heuristic algorithms. To this end, seven Landsat images were selected from various environmental regions of Iran. The algorithms were trained using 40 water pixels and 40 non-water pixels in Operational Land Imager images of Chitgar Lake (one of the study regions), and high-resolution images from Google Earth were digitized to evaluate the results. Two approaches were considered: index-based methods and artificial intelligence (AI) algorithms. In the first approach, nine common water spectral indices were investigated. In the second, AI algorithms were used to acquire the coefficients of optimal band combinations for extracting water extents: the artificial neural network algorithm and the ant colony optimization, genetic algorithm, and particle swarm optimization (PSO) meta-heuristic algorithms were implemented. The index-based methods showed varying performance across regions. Among the AI methods, PSO had the best performance, with an average overall accuracy of 93% and a kappa coefficient of 98%. The results indicate that the acquired band combinations can extract water extents accurately and stably in Landsat imagery.

  6. Improving the Fine-Tuning of Metaheuristics: An Approach Combining Design of Experiments and Racing Algorithms

    Directory of Open Access Journals (Sweden)

    Eduardo Batista de Moraes Barbosa

    2017-01-01

    Full Text Available Usually, metaheuristic algorithms are adapted to a large set of problems by applying a few modifications to their parameters for each specific case. However, this flexibility demands a huge effort to tune such parameters correctly, so the tuning of metaheuristics arises as one of the most important challenges in research on these algorithms. This paper presents a methodology combining statistical and artificial intelligence methods for the fine-tuning of metaheuristics. The key idea is a heuristic method, called the Heuristic Oriented Racing Algorithm (HORA), which explores a search space of parameters looking for candidate configurations close to a promising alternative. To confirm the validity of this approach, we present a case study of fine-tuning two distinct metaheuristics, Simulated Annealing (SA) and a Genetic Algorithm (GA), to solve the classical traveling salesman problem. The results are compared with the same metaheuristics tuned through a standard racing method. Broadly, the proposed approach proved to be effective in terms of the overall tuning time. Our results reveal that metaheuristics tuned by means of HORA achieve, with much less computational effort, results similar to those obtained by the other fine-tuning approach.

  7. Kriging with Meta-Heuristic Methods for Optimal Design to Reduce the Noise of the Engine Cooling Fan

    Science.gov (United States)

    Sim, Hyoun-Jin; Cha, Kyung-Joon; Oh, Jae-Eung; Ryu, Je-Seon

    This paper proposes an optimal design scheme to reduce the noise of an engine cooling fan by combining Kriging with two meta-heuristic techniques. An engineering model has been developed for predicting the noise spectrum of the engine cooling fan: according to the noise generation mechanisms, the fan noise is expressed as discrete frequency peaks at the blade passing frequency (BPF) and its harmonics, together with a broadband spectrum. The objective of this paper is to find the optimal design for noise reduction of the engine cooling fan. We first compare the measured and calculated noise spectra of the fan to validate the noise prediction program. An L18 orthogonal array is then applied as the design of experiments because it is suitable for Kriging. With the simulated data, we estimate the correlation parameter of Kriging by solving the associated nonlinear problem with a genetic algorithm, and we find an optimal level for noise reduction of the cooling fan by optimizing the Kriging estimates with simulated annealing. This optimal design scheme gives noticeable results, and an optimal design that reduces the noise of the cooling fan system is proposed.

  8. Handbook of metaheuristics

    CERN Document Server

    Kochenberger, Gary

    2003-01-01

    Metaheuristics, in their original definition, are solution methods that orchestrate an interaction between local improvement procedures and higher level strategies to create a process capable of escaping from local optima and performing a robust search of a solution space. Over time, these methods have also come to include any procedures that employ strategies for overcoming the trap of local optimality in complex solution spaces, especially those procedures that utilize one or more neighborhood structures as a means of defining admissible moves to transition from one solution to another, or to build or destroy solutions in constructive and destructive processes. The degree to which neighborhoods are exploited varies according to the type of procedure. In the case of certain population-based procedures, such as genetic algorithms, neighborhoods are implicitly (and somewhat restrictively) defined by reference to replacing components of one solution with those of another, by variously chosen rules of exchange p...

  9. Meta-heuristic methods for optimization and application to material flow control and scheduling in manufacturing; Butsuryu scheduling no tame no system saitekika meta senryaku

    Energy Technology Data Exchange (ETDEWEB)

    Konishi, M. [Kobe Steel, Ltd., Kobe (Japan)]

    1996-09-01

    This paper introduces meta-heuristic methods for system optimization in material flow scheduling, together with their applications. The systems optimize combinatorial decisions such as the selection of transport routes into and out of a factory, the allocation of transport vehicles, and the assignment of product orders to facilities. The meta-heuristic methods include the simulated annealing (SA) algorithm and the genetic algorithm (GA). The SA method searches for an optimal solution by exploiting an analogy between combinatorial optimization and statistical mechanics. Although its scope of application has limits, it is characterized by the fact that neighborhood structures can be designed effectively from experience. The GA method is a population-based search method modeled on the evolutionary mechanisms of living organisms, characterized by parallel search over a plurality of search points. Applications of the SA method include a system to optimize the limits for accepting product orders (shipment plans) in an expanded copper plate manufacturing factory. Applications of the GA method include the optimization of a problem allotting a plurality of orders to a plurality of slabs. The expert system is a method comparable to the GA and SA methods. 11 refs., 8 figs., 1 tab.
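
    A minimal GA skeleton for the order-to-slab allotment problem mentioned above, assuming a simple assignment encoding (one slab index per order); the paper's actual encoding and operators are not specified in this abstract:

```python
import random

def ga(num_orders, num_slabs, fitness, pop_size=40, gens=200, pm=0.05):
    """Minimal GA sketch: each gene is the slab an order is assigned to;
    `fitness` maps a chromosome to a score (higher is better)."""
    pop = [[random.randrange(num_slabs) for _ in range(num_orders)]
           for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness, reverse=True)
        next_pop = pop[:2]                                  # elitism
        while len(next_pop) < pop_size:
            p1, p2 = random.sample(pop[:pop_size // 2], 2)  # truncation selection
            cut = random.randrange(1, num_orders)
            child = p1[:cut] + p2[cut:]                     # one-point crossover
            child = [random.randrange(num_slabs) if random.random() < pm else g
                     for g in child]                        # random-reset mutation
            next_pop.append(child)
        pop = next_pop
    return max(pop, key=fitness)
```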

  10. Advances in metaheuristic algorithms for optimal design of structures

    CERN Document Server

    Kaveh, A

    2014-01-01

    This book presents efficient metaheuristic algorithms for optimal design of structures. Many of these algorithms are developed by the author and his colleagues, consisting of Democratic Particle Swarm Optimization, Charged System Search, Magnetic Charged System Search, Field of Forces Optimization, Dolphin Echolocation Optimization, Colliding Bodies Optimization, Ray Optimization. These are presented together with algorithms which were developed by other authors and have been successfully applied to various optimization problems. These consist of Particle Swarm Optimization, Big Bang-Big Crunch Algorithm, Cuckoo Search Optimization, Imperialist Competitive Algorithm, and Chaos Embedded Metaheuristic Algorithms. Finally a multi-objective optimization method is presented to solve large-scale structural problems based on the Charged System Search algorithm. The concepts and algorithms presented in this book are not only applicable to optimization of skeletal structures and finite element models, but can equally ...

  11. Advances in metaheuristic algorithms for optimal design of structures

    CERN Document Server

    Kaveh, A

    2017-01-01

    This book presents efficient metaheuristic algorithms for optimal design of structures. Many of these algorithms are developed by the author and his colleagues, consisting of Democratic Particle Swarm Optimization, Charged System Search, Magnetic Charged System Search, Field of Forces Optimization, Dolphin Echolocation Optimization, Colliding Bodies Optimization, Ray Optimization. These are presented together with algorithms which were developed by other authors and have been successfully applied to various optimization problems. These consist of Particle Swarm Optimization, Big Bang-Big Crunch Algorithm, Cuckoo Search Optimization, Imperialist Competitive Algorithm, and Chaos Embedded Metaheuristic Algorithms. Finally a multi-objective optimization method is presented to solve large-scale structural problems based on the Charged System Search algorithm. The concepts and algorithms presented in this book are not only applicable to optimization of skeletal structures and finite element models, but can equally ...

  12. Applied nonparametric statistical methods

    CERN Document Server

    Sprent, Peter

    2007-01-01

    While preserving the clear, accessible style of previous editions, Applied Nonparametric Statistical Methods, Fourth Edition reflects the latest developments in computer-intensive methods that deal with intractable analytical problems and unwieldy data sets. Reorganized and with additional material, this edition begins with a brief summary of some relevant general statistical concepts and an introduction to basic ideas of nonparametric or distribution-free methods. Designed experiments, including those with factorial treatment structures, are now the focus of an entire chapter. The text also e

  13. A Meta-heuristic Approach for Variants of VRP in Terms of Generalized Saving Method

    Science.gov (United States)

    Shimizu, Yoshiaki

    Global logistics design is attracting keen interest as an essential infrastructure for modern society: examples include green and/or robust logistics in transportation systems, smart grids in electricity utilization systems, and qualified service in delivery systems. As a key technology for such deployments, we have engaged in the practical vehicle routing problem on the basis of the conventional saving method. This paper extends that idea and gives a general framework applicable to various real-world problems. It covers not only delivery problems but also two kinds of pick-up problems, i.e., straight and drop-by routings. Moreover, the multi-depot problem is considered through a hybrid approach with a graph algorithm, and its solution method is realized in a hierarchical manner. Numerical experiments were carried out to validate the effectiveness of the proposed method.
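
    The conventional saving method this framework generalizes is the Clarke-Wright procedure: serving customers i and j on one route instead of two saves s(i, j) = d(0, i) + d(0, j) - d(i, j), where node 0 is the depot. A simplified sketch (capacity checks are omitted, and merges that would require reversing a route are skipped):

```python
def clarke_wright(dist, n):
    """Classical savings construction (sketch). dist is symmetric, the depot
    is node 0, customers are 1..n."""
    routes = {i: [i] for i in range(1, n + 1)}       # one route per customer
    savings = sorted(((dist[0][i] + dist[0][j] - dist[i][j], i, j)
                      for i in range(1, n + 1)
                      for j in range(i + 1, n + 1)), reverse=True)
    for s, i, j in savings:
        ri, rj = routes.get(i), routes.get(j)
        if ri is None or rj is None or ri is rj:
            continue             # i or j is no longer an endpoint, or same route
        if ri[-1] == i and rj[0] == j:
            merged = ri + rj     # ...-i joined to j-...
        elif rj[-1] == j and ri[0] == i:
            merged = rj + ri     # ...-j joined to i-...
        else:
            continue             # would require reversing a route; skipped here
        for k in merged[1:-1]:
            routes.pop(k, None)  # interior customers leave the endpoint map
        routes[merged[0]] = routes[merged[-1]] = merged
    return list({id(r): r for r in routes.values()}.values())
```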

  14. Metaheuristics progress in complex systems optimization

    CERN Document Server

    Doerner, Karl F; Greistorfer, Peter; Gutjahr, Walter; Hartl, Richard F; Reimann, Marc

    2007-01-01

    The aim of "Metaheuristics: Progress in Complex Systems Optimization" is to provide several different kinds of information: a delineation of general metaheuristic methods, a number of state-of-the-art articles from a variety of well-known classical application areas, and an outlook on modern computational methods in promising new areas. Therefore, this book may equally serve as a textbook in graduate courses for students, as a reference book for people interested in engineering or social sciences, and as a collection of new and promising avenues for researchers working in this field.

  15. Novel metaheuristic for parameter estimation in nonlinear dynamic biological systems

    Directory of Open Access Journals (Sweden)

    Banga Julio R

    2006-11-01

    Full Text Available Abstract
    Background: We consider the problem of parameter estimation (model calibration) in nonlinear dynamic models of biological systems. Due to the frequent ill-conditioning and multi-modality of many of these problems, traditional local methods usually fail (unless initialized with very good guesses of the parameter vector). In order to surmount these difficulties, global optimization (GO) methods have been suggested as robust alternatives. Currently, deterministic GO methods cannot solve problems of realistic size within this class in reasonable computation times. In contrast, certain types of stochastic GO methods have shown promising results, although the computational cost remains large. Rodriguez-Fernandez and coworkers have presented hybrid stochastic-deterministic GO methods which could reduce computation time by one order of magnitude while guaranteeing robustness. Our goal here was to further reduce the computational effort without losing robustness.
    Results: We have developed a new procedure based on the scatter search methodology for nonlinear optimization of dynamic models of arbitrary (or even unknown) structure (i.e. black-box models). In this contribution, we describe and apply this novel metaheuristic, inspired by recent developments in the field of operations research, to a set of complex identification problems, and we make a critical comparison with respect to the previous (above mentioned) successful methods.
    Conclusion: Robust and efficient methods for parameter estimation are of key importance in systems biology and related areas. The new metaheuristic presented in this paper aims to ensure the proper solution of these problems by adopting a global optimization approach, while keeping the computational effort under reasonable values. This new metaheuristic was applied to a set of three challenging parameter estimation problems of nonlinear dynamic biological systems, outperforming very significantly all the methods previously used for these benchmark problems.

  16. Novel metaheuristic for parameter estimation in nonlinear dynamic biological systems.

    Science.gov (United States)

    Rodriguez-Fernandez, Maria; Egea, Jose A; Banga, Julio R

    2006-11-02

    We consider the problem of parameter estimation (model calibration) in nonlinear dynamic models of biological systems. Due to the frequent ill-conditioning and multi-modality of many of these problems, traditional local methods usually fail (unless initialized with very good guesses of the parameter vector). In order to surmount these difficulties, global optimization (GO) methods have been suggested as robust alternatives. Currently, deterministic GO methods cannot solve problems of realistic size within this class in reasonable computation times. In contrast, certain types of stochastic GO methods have shown promising results, although the computational cost remains large. Rodriguez-Fernandez and coworkers have presented hybrid stochastic-deterministic GO methods which could reduce computation time by one order of magnitude while guaranteeing robustness. Our goal here was to further reduce the computational effort without losing robustness. We have developed a new procedure based on the scatter search methodology for nonlinear optimization of dynamic models of arbitrary (or even unknown) structure (i.e. black-box models). In this contribution, we describe and apply this novel metaheuristic, inspired by recent developments in the field of operations research, to a set of complex identification problems and we make a critical comparison with respect to the previous (above mentioned) successful methods. Robust and efficient methods for parameter estimation are of key importance in systems biology and related areas. The new metaheuristic presented in this paper aims to ensure the proper solution of these problems by adopting a global optimization approach, while keeping the computational effort under reasonable values. This new metaheuristic was applied to a set of three challenging parameter estimation problems of nonlinear dynamic biological systems, outperforming very significantly all the methods previously used for these benchmark problems.

  17. Metaheuristic Algorithms for Convolution Neural Network.

    Science.gov (United States)

    Rere, L M Rasdi; Fanany, Mohamad Ivan; Arymurthy, Aniati Murni

    2016-01-01

    A typical modern optimization technique is usually either heuristic or metaheuristic. Such techniques have been used to solve optimization problems across science, engineering, and industry. However, implementation strategies of metaheuristics for improving the accuracy of convolutional neural networks (CNNs), a well-known deep learning method, are still rarely investigated. Deep learning is a type of machine learning whose aim is to move closer to the artificial-intelligence goal of creating a machine that can successfully perform any intellectual task a human can carry out. In this paper, we propose implementation strategies for three popular metaheuristic approaches, namely simulated annealing, differential evolution, and harmony search, to optimize CNNs. The performances of these metaheuristic methods in optimizing CNNs on classifying the MNIST and CIFAR datasets were evaluated and compared, and the proposed methods were also compared with the original CNN. Although the proposed methods show an increase in computation time, their accuracy is also improved (by up to 7.14 percent).

  18. Metaheuristic Algorithms for Convolution Neural Network

    Directory of Open Access Journals (Sweden)

    L. M. Rasdi Rere

    2016-01-01

    Full Text Available A typical modern optimization technique is usually either heuristic or metaheuristic. Such techniques have been used to solve optimization problems across science, engineering, and industry. However, implementation strategies of metaheuristics for improving the accuracy of convolutional neural networks (CNNs), a well-known deep learning method, are still rarely investigated. Deep learning is a type of machine learning whose aim is to move closer to the artificial-intelligence goal of creating a machine that can successfully perform any intellectual task a human can carry out. In this paper, we propose implementation strategies for three popular metaheuristic approaches, namely simulated annealing, differential evolution, and harmony search, to optimize CNNs. The performances of these metaheuristic methods in optimizing CNNs on classifying the MNIST and CIFAR datasets were evaluated and compared, and the proposed methods were also compared with the original CNN. Although the proposed methods show an increase in computation time, their accuracy is also improved (by up to 7.14 percent).

  19. Metaheuristics for Dynamic Optimization

    CERN Document Server

    Nakib, Amir; Siarry, Patrick

    2013-01-01

    This book is an updated effort to summarize the trending topics and new hot research lines in solving dynamic problems using metaheuristics. An analysis of the present state of solving complex problems quickly draws a clear picture: problems that change in time, having noise and uncertainties in their definition, are becoming very important. The tools to face these problems are still to be built, since existing techniques are either slow or inefficient in tracking the many global optima that such problems present to the solver. Thus, this book is devoted to several of the most important advances in solving dynamic problems. Metaheuristics are the most popular tools to this end, and the book shows how best to use genetic algorithms, particle swarm, ant colonies, immune systems, variable neighborhood search, and many other bioinspired techniques. Neural network solutions are also considered. Both theory and practice are addressed in the chapters of t...

  20. Handbook of metaheuristics

    CERN Document Server

    Potvin, Jean-Yves

    2010-01-01

    “… an excellent book if you want to learn about a number of individual metaheuristics." (U. Aickelin, Journal of the Operational Research Society, Issue 56, 2005, on the First Edition) The first edition of the Handbook of Metaheuristics was published in 2003 under the editorship of Fred Glover and Gary A. Kochenberger. Given the numerous developments observed in the field of metaheuristics in recent years, it appeared that the time was ripe for a second edition of the Handbook. When Glover and Kochenberger were unable to prepare this second edition, they suggested that Michel Gendreau and Jean-Yves Potvin should take over the editorship, and so this important new edition is now available. Through its 21 chapters, this second edition is designed to provide a broad coverage of the concepts, implementations and applications in this important field of optimization. Original contributors either revised or updated their work, or provided entirely new chapters. The Handbook now includes updated chapters on the b...

  1. Applied Bayesian hierarchical methods

    National Research Council Canada - National Science Library

    Congdon, P

    2010-01-01

    Contents include: 1.2 Posterior Inference from Bayes Formula; 1.3 Markov Chain Monte Carlo Sampling in Relation to Monte Carlo Methods: Obtaining Posterior...

  2. Methods of applied mathematics

    CERN Document Server

    Hildebrand, Francis B

    1992-01-01

    This invaluable book offers engineers and physicists working knowledge of a number of mathematical facts and techniques not commonly treated in courses in advanced calculus, but nevertheless extremely useful when applied to typical problems in many different fields. It deals principally with linear algebraic equations, quadratic and Hermitian forms, operations with vectors and matrices, the calculus of variations, and the formulations and theory of linear integral equations. Annotated problems and exercises accompany each chapter.

  3. A preliminary study to metaheuristic approach in multilayer radiation shielding optimization

    Science.gov (United States)

    Arif Sazali, Muhammad; Rashid, Nahrul Khair Alang Md; Hamzah, Khaidzir

    2018-01-01

    Metaheuristics are high-level algorithmic concepts that can be used to develop heuristic optimization algorithms. One of their applications is finding optimal or near-optimal solutions to combinatorial optimization problems (COPs) such as scheduling, vehicle routing, and timetabling. Combinatorial optimization deals with finding optimal combinations or permutations of a given set of problem components when exhaustive search is not feasible. A radiation shield made of several layers of different materials can be regarded as a COP. The time taken to optimize the shield may be too high when several parameters are involved, such as the number of materials, the thickness of the layers, and the arrangement of the materials. Metaheuristics can be applied to reduce the optimization time, trading guaranteed optimal solutions for near-optimal solutions obtained in a comparably short amount of time. Applications of metaheuristics to radiation shield optimization are, however, lacking. In this paper, we review the suitability of metaheuristics for multilayer shielding design, specifically the genetic algorithm and the ant colony optimization (ACO) algorithm, and we propose an optimization model based on the ACO method.
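
    A hedged sketch of how ACO could be applied to the layer-ordering problem (a generic permutation ACO, since the paper only proposes the approach): pheromone on (material, successor) pairs biases the order in which ants place the layers, and the best-so-far ordering is reinforced each iteration.

```python
import random

def aco_order(n_layers, score, n_ants=20, iters=100, rho=0.1, q=1.0):
    """Generic ACO for orderings (sketch). tau[i][j] is the pheromone for
    placing layer j immediately after layer i; `score` rates an ordering
    (higher is better, e.g. a shielding figure of merit)."""
    tau = [[1.0] * n_layers for _ in range(n_layers)]
    best, best_score = None, float("-inf")
    for _ in range(iters):
        for _ in range(n_ants):
            order = [random.randrange(n_layers)]
            while len(order) < n_layers:
                rest = [j for j in range(n_layers) if j not in order]
                weights = [tau[order[-1]][j] for j in rest]
                order.append(random.choices(rest, weights=weights)[0])
            s = score(order)
            if s > best_score:
                best, best_score = order, s
        tau = [[(1 - rho) * t for t in row] for row in tau]  # evaporation
        for a, b in zip(best, best[1:]):
            tau[a][b] += q                                   # reinforcement
    return best
```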

  4. Effective heuristics and meta-heuristics for the quadratic assignment problem with tuned parameters and analytical comparisons

    Science.gov (United States)

    Bashiri, Mahdi; Karimi, Hossein

    2012-07-01

    The quadratic assignment problem (QAP) is a well-known problem in facility location and layout. It belongs to the NP-complete class. Many heuristic and meta-heuristic methods for the QAP have been presented in the literature. In this paper, we applied 2-opt, greedy 2-opt, 3-opt, greedy 3-opt, and VNZ as heuristic methods and tabu search (TS), simulated annealing, and particle swarm optimization as meta-heuristic methods for the QAP. This research is dedicated to comparing the relative percentage deviation of these solution qualities from the best known solution introduced in QAPLIB. Furthermore, a tuning method is applied for the meta-heuristic parameters. Results indicate that TS is the best in 31% of QAPs, and the IFLS method from the literature is the best in 58% of QAPs; these two methods are the same in 11% of test problems. Also, TS has a better computational time among the heuristic and meta-heuristic methods.
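
    For concreteness, the relative percentage deviation used above is RPD = 100 · (z − z_best) / z_best, where z is the heuristic's objective value and z_best the best-known QAPLIB value. The Python sketch below shows a minimal 2-opt local search for the QAP on an invented random instance; the instance data and first-improvement scheme are illustrative assumptions, not the paper's exact setup.

```python
import itertools, random

def qap_cost(perm, flow, dist):
    # QAP objective: sum of flow(i,j) * dist(perm[i], perm[j]) over all facility pairs.
    n = len(perm)
    return sum(flow[i][j] * dist[perm[i]][perm[j]] for i in range(n) for j in range(n))

def two_opt(perm, flow, dist):
    # First-improvement 2-opt: swap two facilities' locations while the cost decreases.
    best = qap_cost(perm, flow, dist)
    improved = True
    while improved:
        improved = False
        for i, j in itertools.combinations(range(len(perm)), 2):
            perm[i], perm[j] = perm[j], perm[i]
            c = qap_cost(perm, flow, dist)
            if c < best:
                best, improved = c, True
            else:
                perm[i], perm[j] = perm[j], perm[i]  # undo a non-improving swap
    return perm, best

def rpd(cost, best_known):
    # Relative percentage deviation from the best-known (e.g., QAPLIB) solution.
    return 100.0 * (cost - best_known) / best_known

# Tiny random instance; real studies use QAPLIB instances with published optima.
n = 6
random.seed(1)
flow = [[0 if i == j else random.randint(1, 9) for j in range(n)] for i in range(n)]
dist = [[0 if i == j else random.randint(1, 9) for j in range(n)] for i in range(n)]
perm, cost = two_opt(list(range(n)), flow, dist)
print(perm, cost)
```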

  5. Metaheuristic applications to speech enhancement

    CERN Document Server

    Kunche, Prajna

    2016-01-01

    This book serves as a basic reference for those interested in the application of metaheuristics to speech enhancement. The major goal of the book is to explain the basic concepts of optimization methods and their use in heuristic optimization in speech enhancement to scientists, practicing engineers, and academic researchers in speech processing. The authors discuss why it has been a challenging problem for researchers to develop new enhancement algorithms that aid in the quality and intelligibility of degraded speech. They present powerful optimization methods to speech enhancement that can help to solve the noise reduction problems. Readers will be able to understand the fundamentals of speech processing as well as the optimization techniques, how the speech enhancement algorithms are implemented by utilizing optimization methods, and will be given the tools to develop new algorithms. The authors also provide a comprehensive literature survey regarding the topic.

  6. TOWARDS A UNIFIED VIEW OF METAHEURISTICS

    Directory of Open Access Journals (Sweden)

    El-Ghazali Talbi

    2013-02-01

    Full Text Available This talk provides a complete background on metaheuristics and presents in a unified view the main design questions for all families of metaheuristics and clearly illustrates how to implement the algorithms under a software framework to reuse both the design and code. The key search components of metaheuristics are considered as a toolbox for: - Designing efficient metaheuristics (e.g. local search, tabu search, simulated annealing, evolutionary algorithms, particle swarm optimization, scatter search, ant colonies, bee colonies, artificial immune systems) for optimization problems. - Designing efficient metaheuristics for multi-objective optimization problems. - Designing hybrid, parallel and distributed metaheuristics. - Implementing metaheuristics on sequential and parallel machines.

  7. Innovative Meta-Heuristic Approach Application for Parameter Estimation of Probability Distribution Model

    Science.gov (United States)

    Lee, T. S.; Yoon, S.; Jeong, C.

    2012-12-01

    The primary purpose of frequency analysis in hydrology is to estimate the magnitude of an event with a given frequency of occurrence. The precision of frequency analysis depends on the selection of an appropriate probability distribution model (PDM) and parameter estimation techniques. A number of PDMs have been developed to describe the probability distribution of the hydrological variables. For each of the developed PDMs, estimated parameters are provided based on alternative estimation techniques, such as the method of moments (MOM), probability weighted moments (PWM), linear function of ranked observations (L-moments), and maximum likelihood (ML). Generally, the results using ML are more reliable than the other methods. However, the ML technique is more laborious than the other methods because an iterative numerical solution, such as the Newton-Raphson method, must be used for the parameter estimation of PDMs. In the meantime, meta-heuristic approaches have been developed to solve various engineering optimization problems (e.g., linear and stochastic, dynamic, nonlinear). These approaches include genetic algorithms, ant colony optimization, simulated annealing, tabu searches, and evolutionary computation methods. Meta-heuristic approaches use a stochastic random search instead of a gradient search so that intricate derivative information is unnecessary. Therefore, the meta-heuristic approaches have been shown to be a useful strategy to solve optimization problems in hydrology. A number of studies focus on using meta-heuristic approaches for estimation of hydrological variables with parameter estimation of PDMs. Applied meta-heuristic approaches offer reliable solutions but use more computation time than derivative-based methods. Therefore, the purpose of this study is to enhance the meta-heuristic approach for the parameter estimation of PDMs by using a recently developed algorithm known as a harmony search (HS). The performance of the HS is compared to the
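
    To make the contrast with Newton-Raphson-style ML fitting concrete, the sketch below uses a plain harmony search to minimize the negative log-likelihood of a Gumbel distribution, a common PDM in frequency analysis. The sample data, parameter bounds, and HS settings (HMCR, PAR, bandwidth) are invented for illustration and do not reproduce the study's setup.

```python
import math, random

DATA = [42.1, 55.3, 38.7, 61.2, 47.9, 52.4, 44.0, 58.6, 49.3, 40.8]  # made-up annual maxima

def neg_log_lik(params):
    # Negative log-likelihood of a Gumbel distribution with location mu and scale beta.
    mu, beta = params
    if beta <= 0:
        return float("inf")
    z = [(x - mu) / beta for x in DATA]
    return len(DATA) * math.log(beta) + sum(zi + math.exp(-zi) for zi in z)

def harmony_search(bounds, hms=20, hmcr=0.9, par=0.3, bw=0.5, n_iter=5000):
    dim = len(bounds)
    memory = [[random.uniform(*bounds[d]) for d in range(dim)] for _ in range(hms)]
    costs = [neg_log_lik(h) for h in memory]
    for _ in range(n_iter):
        new = []
        for d in range(dim):
            if random.random() < hmcr:          # draw from harmony memory ...
                v = random.choice(memory)[d]
                if random.random() < par:       # ... with optional pitch adjustment
                    v += random.uniform(-bw, bw)
            else:                               # or sample the range at random
                v = random.uniform(*bounds[d])
            new.append(min(max(v, bounds[d][0]), bounds[d][1]))
        c = neg_log_lik(new)
        worst = max(range(hms), key=lambda i: costs[i])
        if c < costs[worst]:                    # replace the worst harmony in memory
            memory[worst], costs[worst] = new, c
    best = min(range(hms), key=lambda i: costs[i])
    return memory[best], costs[best]

print(harmony_search([(20.0, 80.0), (0.1, 30.0)]))
```

    No derivatives of the likelihood are needed, which is the practical appeal the abstract describes.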

  8. Parameter estimation with bio-inspired meta-heuristic optimization: modeling the dynamics of endocytosis

    Directory of Open Access Journals (Sweden)

    Tashkova Katerina

    2011-10-01

    Full Text Available Abstract Background We address the task of parameter estimation in models of the dynamics of biological systems based on ordinary differential equations (ODEs) from measured data, where the models are typically non-linear and have many parameters, the measurements are imperfect due to noise, and the studied system can often be only partially observed. A representative task is to estimate the parameters in a model of the dynamics of endocytosis, i.e., endosome maturation, reflected in a cut-out switch transition between the Rab5 and Rab7 domain protein concentrations, from experimental measurements of these concentrations. The general parameter estimation task and the specific instance considered here are challenging optimization problems, calling for the use of advanced meta-heuristic optimization methods, such as evolutionary or swarm-based methods. Results We apply three global-search meta-heuristic algorithms for numerical optimization, i.e., differential ant-stigmergy algorithm (DASA), particle-swarm optimization (PSO), and differential evolution (DE), as well as a local-search derivative-based algorithm 717 (A717) to the task of estimating parameters in ODEs. We evaluate their performance on the considered representative task along a number of metrics, including the quality of reconstructing the system output and the complete dynamics, as well as the speed of convergence, both on real-experimental data and on artificial pseudo-experimental data with varying amounts of noise. We compare the four optimization methods under a range of observation scenarios, where data of different completeness and accuracy of interpretation are given as input. Conclusions Overall, the global meta-heuristic methods (DASA, PSO, and DE) clearly and significantly outperform the local derivative-based method (A717). Among the three meta-heuristics, differential evolution (DE) performs best in terms of the objective function, i.e., reconstructing the output, and in terms of

  9. Parameter estimation with bio-inspired meta-heuristic optimization: modeling the dynamics of endocytosis

    Science.gov (United States)

    2011-01-01

    Background We address the task of parameter estimation in models of the dynamics of biological systems based on ordinary differential equations (ODEs) from measured data, where the models are typically non-linear and have many parameters, the measurements are imperfect due to noise, and the studied system can often be only partially observed. A representative task is to estimate the parameters in a model of the dynamics of endocytosis, i.e., endosome maturation, reflected in a cut-out switch transition between the Rab5 and Rab7 domain protein concentrations, from experimental measurements of these concentrations. The general parameter estimation task and the specific instance considered here are challenging optimization problems, calling for the use of advanced meta-heuristic optimization methods, such as evolutionary or swarm-based methods. Results We apply three global-search meta-heuristic algorithms for numerical optimization, i.e., differential ant-stigmergy algorithm (DASA), particle-swarm optimization (PSO), and differential evolution (DE), as well as a local-search derivative-based algorithm 717 (A717) to the task of estimating parameters in ODEs. We evaluate their performance on the considered representative task along a number of metrics, including the quality of reconstructing the system output and the complete dynamics, as well as the speed of convergence, both on real-experimental data and on artificial pseudo-experimental data with varying amounts of noise. We compare the four optimization methods under a range of observation scenarios, where data of different completeness and accuracy of interpretation are given as input. Conclusions Overall, the global meta-heuristic methods (DASA, PSO, and DE) clearly and significantly outperform the local derivative-based method (A717). Among the three meta-heuristics, differential evolution (DE) performs best in terms of the objective function, i.e., reconstructing the output, and in terms of convergence. These
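
    The sketch below illustrates the general recipe these two records describe: simulate a candidate ODE model, score it by the squared error against noisy pseudo-experimental data, and let DE (here the classic DE/rand/1/bin variant) search the parameter space. The toy two-state decay model, noise level, and DE settings are stand-ins chosen for the example, not the paper's Rab5/Rab7 model.

```python
import random

def simulate(k1, k2, dt=0.1, steps=100):
    # Toy two-state system standing in for the Rab5/Rab7 switch: r5 decays into r7.
    r5, r7, traj = 1.0, 0.0, []
    for _ in range(steps):
        d5 = -k1 * r5
        d7 = k1 * r5 - k2 * r7
        r5, r7 = r5 + dt * d5, r7 + dt * d7   # explicit Euler step
        traj.append((r5, r7))
    return traj

random.seed(0)
TRUE = (0.8, 0.3)
DATA = [(a + random.gauss(0, 0.01), b + random.gauss(0, 0.01))
        for a, b in simulate(*TRUE)]          # pseudo-experimental data with noise

def sse(params):
    # Sum of squared errors between simulated and "measured" concentrations.
    return sum((sa - ma) ** 2 + (sb - mb) ** 2
               for (sa, sb), (ma, mb) in zip(simulate(*params), DATA))

def de(bounds, np_=20, f=0.7, cr=0.9, n_gen=200):
    # Classic DE/rand/1/bin over the parameter vector.
    dim = len(bounds)
    pop = [[random.uniform(*bounds[d]) for d in range(dim)] for _ in range(np_)]
    cost = [sse(p) for p in pop]
    for _ in range(n_gen):
        for i in range(np_):
            a, b, c = random.sample([p for j, p in enumerate(pop) if j != i], 3)
            jr = random.randrange(dim)        # forced crossover dimension
            trial = [a[d] + f * (b[d] - c[d]) if (random.random() < cr or d == jr)
                     else pop[i][d] for d in range(dim)]
            trial = [min(max(v, bounds[d][0]), bounds[d][1]) for d, v in enumerate(trial)]
            if (tc := sse(trial)) < cost[i]:  # greedy one-to-one selection
                pop[i], cost[i] = trial, tc
    best = min(range(np_), key=lambda i: cost[i])
    return pop[best], cost[best]

print(de([(0.0, 2.0), (0.0, 2.0)]))
```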

  10. Metaheuristics in the service industry

    CERN Document Server

    Geiger, Martin Josef; Sevaux, Marc; Sörensen, Kenneth

    2009-01-01

    This book presents novel methodological approaches and improved results of metaheuristics for modern services. It examines applications in the area of transportation and logistics, while other areas include production and financial services.

  11. Parameter meta-optimization of metaheuristics of solving specific NP-hard facility location problem

    Science.gov (United States)

    Skakov, E. S.; Malysh, V. N.

    2018-03-01

    The aim of the work is to create an evolutionary method for optimizing the values of the control parameters of metaheuristics for solving the NP-hard facility location problem. A system analysis of the process of tuning optimization algorithm parameters is carried out. The problem of finding the parameters of a metaheuristic algorithm is formulated as a meta-optimization problem. An evolutionary metaheuristic has been chosen to perform the task of meta-optimization. Thus, the approach proposed in this work can be called a “meta-metaheuristic”. A computational experiment proving the effectiveness of the procedure for tuning the control parameters of metaheuristics has been performed.

  12. Metaheuristic Approaches for Hydropower System Scheduling

    Directory of Open Access Journals (Sweden)

    Ieda G. Hidalgo

    2015-01-01

    Full Text Available This paper deals with the short-term scheduling problem of hydropower systems. The objective is to meet the daily energy demand in an economic and safe way. The individuality of the generating units and the nonlinearity of their efficiency curves are taken into account. The mathematical model is formulated as a dynamic, mixed integer, nonlinear, nonconvex, combinatorial, and multiobjective optimization problem. We propose two solution methods using metaheuristic approaches. They combine a Genetic Algorithm with the Strength Pareto Evolutionary Algorithm and Ant Colony Optimization. Both approaches are divided into two phases. In the first one, to maximize the plant's net generation, the problem is solved for each hour of the day (static dispatch). In the second phase, to minimize the units' on-off switching, the day is considered as a whole (dynamic dispatch). The proposed methodology is applied to two Brazilian hydroelectric plants, in cascade, that belong to the national interconnected system. The nondominated solutions from both approaches are presented. All of them meet demand while respecting the physical, electrical, and hydraulic constraints.

  13. Advanced metaheuristic algorithms for laser optimization

    International Nuclear Information System (INIS)

    Tomizawa, H.

    2010-01-01

    A laser is one of the most important experimental tools. In the synchrotron radiation field, lasers are widely used for experiments with pump-probe techniques. Especially for X-ray FELs, a laser has important roles as a seed light source or a photocathode-illuminating light source to generate a high-brightness electron bunch. Control of laser pulse characteristics is required for many kinds of experiments. However, the laser must currently be tuned and customized for each requirement by laser experts. Automatic laser tuning needs to be realized with sophisticated algorithms, and metaheuristic algorithms are useful candidates for finding one of the best acceptable solutions. A metaheuristic laser tuning system is expected to save human resources and time during laser preparation. I have shown successful results on a metaheuristic algorithm based on a genetic algorithm to optimize spatial (transverse) laser profiles, and a hill climbing method extended with fuzzy set theory to choose one of the best laser alignments automatically for each experimental requirement. (author)

  14. Comparison of metaheuristic techniques to determine optimal placement of biomass power plants

    International Nuclear Information System (INIS)

    Reche-Lopez, P.; Ruiz-Reyes, N.; Garcia Galan, S.; Jurado, F.

    2009-01-01

    This paper deals with the application and comparison of several metaheuristic techniques to optimize the placement and supply area of biomass-fueled power plants. Both trajectory-based and population-based methods are applied for our goal. In particular, two well-known trajectory methods, Simulated Annealing (SA) and Tabu Search (TS), and two commonly used population-based methods, Genetic Algorithms (GA) and Particle Swarm Optimization (PSO), are hereby considered. In addition, a new binary PSO algorithm has been proposed, which incorporates an inertia weight factor, like the classical continuous approach. The fitness function for the metaheuristics is the profitability index, defined as the ratio between the net present value and the initial investment. In this work, forest residues are considered as the biomass source, and the problem constraints are: the generation system must be located inside the supply area, and its maximum electric power is 5 MW. The comparative results obtained by all considered metaheuristics are discussed. Random walk has also been assessed for the problem we deal with.

  15. Comparison of metaheuristic techniques to determine optimal placement of biomass power plants

    Energy Technology Data Exchange (ETDEWEB)

    Reche-Lopez, P.; Ruiz-Reyes, N.; Garcia Galan, S. [Telecommunication Engineering Department, University of Jaen Polytechnic School, C/ Alfonso X el Sabio 28, 23700 Linares, Jaen (Spain); Jurado, F. [Electrical Engineering Department, University of Jaen Polytechnic School, C/ Alfonso X el Sabio 28, 23700 Linares, Jaen (Spain)

    2009-08-15

    This paper deals with the application and comparison of several metaheuristic techniques to optimize the placement and supply area of biomass-fueled power plants. Both trajectory-based and population-based methods are applied for our goal. In particular, two well-known trajectory methods, Simulated Annealing (SA) and Tabu Search (TS), and two commonly used population-based methods, Genetic Algorithms (GA) and Particle Swarm Optimization (PSO), are hereby considered. In addition, a new binary PSO algorithm has been proposed, which incorporates an inertia weight factor, like the classical continuous approach. The fitness function for the metaheuristics is the profitability index, defined as the ratio between the net present value and the initial investment. In this work, forest residues are considered as the biomass source, and the problem constraints are: the generation system must be located inside the supply area, and its maximum electric power is 5 MW. The comparative results obtained by all considered metaheuristics are discussed. Random walk has also been assessed for the problem we deal with. (author)
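
    A minimal sketch of the binary PSO idea mentioned in both records follows: velocities carry an inertia weight term, and a sigmoid transfer function converts each velocity into the probability that the corresponding bit (here, whether a supply-area cell is included) is set. The per-cell revenue and cost data, the toy net-present-value calculation, and all parameter values are invented for the example; the papers' actual profitability-index evaluation is more involved.

```python
import math, random

random.seed(3)
N_CELLS = 30
# Hypothetical per-cell data: biomass revenue potential and collection cost.
REVENUE = [random.uniform(0.5, 2.0) for _ in range(N_CELLS)]
COST = [random.uniform(0.2, 1.0) for _ in range(N_CELLS)]
BASE_INVESTMENT = 5.0

def profitability_index(bits):
    # Fitness from the papers: PI = net present value / initial investment (toy NPV here).
    npv = sum(r - c for b, r, c in zip(bits, REVENUE, COST) if b) - BASE_INVESTMENT
    investment = BASE_INVESTMENT + 0.1 * sum(bits)
    return npv / investment

def binary_pso(n_particles=25, n_iter=200, w=0.7, c1=1.5, c2=1.5):
    # Binary PSO: real-valued velocities, sigmoid-mapped to bit probabilities.
    pos = [[random.randint(0, 1) for _ in range(N_CELLS)] for _ in range(n_particles)]
    vel = [[0.0] * N_CELLS for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_f = [profitability_index(p) for p in pos]
    g = max(range(n_particles), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[g][:], pbest_f[g]
    for _ in range(n_iter):
        for i in range(n_particles):
            for d in range(N_CELLS):
                vel[i][d] = (w * vel[i][d]                      # inertia weight term
                             + c1 * random.random() * (pbest[i][d] - pos[i][d])
                             + c2 * random.random() * (gbest[d] - pos[i][d]))
                prob = 1.0 / (1.0 + math.exp(-vel[i][d]))       # sigmoid transfer
                pos[i][d] = 1 if random.random() < prob else 0
            f = profitability_index(pos[i])
            if f > pbest_f[i]:
                pbest[i], pbest_f[i] = pos[i][:], f
                if f > gbest_f:
                    gbest, gbest_f = pos[i][:], f
    return gbest, gbest_f

print(binary_pso()[1])
```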

  16. Meta-heuristic algorithms as tools for hydrological science

    Science.gov (United States)

    Yoo, Do Guen; Kim, Joong Hoon

    2014-12-01

    In this paper, meta-heuristic optimization techniques and their applications to water resources engineering, particularly in hydrological science, are introduced. In recent years, meta-heuristic optimization techniques have been introduced that can overcome the problems inherent in iterative simulations. These methods are able to find good solutions and require limited computation time and memory use without requiring complex derivatives. Simulation-based meta-heuristic methods such as Genetic Algorithms (GAs) and Harmony Search (HS) have powerful searching abilities, which can occasionally overcome the several drawbacks of traditional mathematical methods. For example, HS algorithms can be conceptualized from a musical performance process and used to achieve better harmony; such optimization algorithms seek a near-global optimum determined by the value of an objective function, providing a more robust determination of musical performance than can be achieved through typical aesthetic estimation. In this paper, meta-heuristic algorithms and their applications (focusing on GAs and HS) in hydrological science are discussed by subject, including a review of the existing literature in the field. Then, recent trends in optimization are presented, and a relatively new technique, the Smallest Small World Cellular Harmony Search (SSWCHS), is briefly introduced, with a summary of promising results obtained in previous studies. As a result, previous studies have demonstrated that meta-heuristic algorithms are effective tools for the development of hydrological models and the management of water resources.

  17. Teaching metaheuristics in business schools

    OpenAIRE

    Ramalhinho-Lourenço, Helena

    2005-01-01

    In this work we discuss some ideas and opinions related to teaching Metaheuristics in Business Schools. The main purpose of the work is to initiate a discussion and collaboration on this topic, with the final objective of improving the teaching and publicity of the area. The main topics to be discussed are the environment and focus of this teaching. We also present a SWOT analysis which leads us to the conclusion that the area of Metaheuristics can only win with the presentation and discussi...

  18. Fast solutions for UC problems by a new metaheuristic approach

    Energy Technology Data Exchange (ETDEWEB)

    Viana, Ana; de Sousa, J. Pinho; Matos, Manuel A. [INESC Porto, Campus da FEUP, Rua Dr. Roberto Frias 378, 4200-465 Porto (Portugal)

    2008-08-15

    Due to its combinatorial nature, the Unit Commitment problem has long been an important research challenge, with several optimization techniques, from exact to heuristic methods, having been proposed to deal with it. In line with one current trend of research, metaheuristic approaches have been studied and some interesting results have already been achieved and published. However, a successful utilization of these methodologies in practice, when embedded in Energy Management Systems, is still constrained by the reluctance of industrial partners to use techniques whose performance highly depends on correct parameter tuning. Therefore, the application of metaheuristics to the Unit Commitment problem still justifies further research. In this paper we propose a new search strategy for Local Search based metaheuristics that tries to overcome this issue. The approach has been tested on a set of instances, leading to very good results in terms of solution cost when compared either to classical Lagrangian Relaxation or to other metaheuristics. It also drastically reduced the computation times. Furthermore, the approach proved to be robust, always leading to good results independently of the metaheuristic parameters used. (author)

  19. A Hybrid Metaheuristic DE/CS Algorithm for UCAV Three-Dimension Path Planning

    Directory of Open Access Journals (Sweden)

    Gaige Wang

    2012-01-01

    Full Text Available Three-dimension path planning for an uninhabited combat air vehicle (UCAV) is a complicated high-dimension optimization problem, which primarily centers on optimizing the flight route considering the different kinds of constraints under complicated battlefield environments. A new hybrid metaheuristic differential evolution (DE) and cuckoo search (CS) algorithm is proposed to solve the UCAV three-dimension path planning problem. DE is applied to optimize the process of selecting cuckoos of the improved CS model during the process of cuckoo updating in the nest. The cuckoos can act as an agent in searching for the optimal UCAV path. The UCAV can then find the safe path by connecting the chosen coordinate nodes while avoiding the threat areas and consuming minimum fuel. This new approach can accelerate the global convergence speed while preserving the strong robustness of the basic CS. The realization procedure for this hybrid metaheuristic DE/CS approach is also presented. In order to make the optimized UCAV path more feasible, the B-Spline curve is adopted for smoothing the path. To prove the performance of this proposed hybrid metaheuristic method, it is compared with the basic CS algorithm. The experiment shows that the proposed approach is more effective and feasible in UCAV three-dimension path planning than the basic CS model.

  20. A hybrid metaheuristic DE/CS algorithm for UCAV three-dimension path planning.

    Science.gov (United States)

    Wang, Gaige; Guo, Lihong; Duan, Hong; Wang, Heqi; Liu, Luo; Shao, Mingzhen

    2012-01-01

    Three-dimension path planning for an uninhabited combat air vehicle (UCAV) is a complicated high-dimension optimization problem, which primarily centers on optimizing the flight route considering the different kinds of constraints under complicated battlefield environments. A new hybrid metaheuristic differential evolution (DE) and cuckoo search (CS) algorithm is proposed to solve the UCAV three-dimension path planning problem. DE is applied to optimize the process of selecting cuckoos of the improved CS model during the process of cuckoo updating in the nest. The cuckoos can act as an agent in searching for the optimal UCAV path. The UCAV can then find the safe path by connecting the chosen coordinate nodes while avoiding the threat areas and consuming minimum fuel. This new approach can accelerate the global convergence speed while preserving the strong robustness of the basic CS. The realization procedure for this hybrid metaheuristic DE/CS approach is also presented. In order to make the optimized UCAV path more feasible, the B-Spline curve is adopted for smoothing the path. To prove the performance of this proposed hybrid metaheuristic method, it is compared with the basic CS algorithm. The experiment shows that the proposed approach is more effective and feasible in UCAV three-dimension path planning than the basic CS model.
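
    As background for the hybrid, the sketch below implements a bare cuckoo search with Mantegna-style Lévy flights on a placeholder sphere objective; a comment marks the update step where the paper's DE mutation/crossover would be spliced in. The objective, bounds, and parameters are illustrative assumptions; a path planner would instead score waypoint sequences by threat exposure and fuel.

```python
import math, random

def levy_step(beta=1.5):
    # Mantegna's algorithm for a Levy-distributed step length.
    sigma = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
             / (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = random.gauss(0, sigma)
    v = random.gauss(0, 1)
    return u / abs(v) ** (1 / beta)

def cost(x):  # placeholder objective; a planner would score threat exposure and fuel
    return sum(xi * xi for xi in x)

def cuckoo_search(dim=10, n_nests=15, pa=0.25, n_iter=500, lo=-5.0, hi=5.0):
    nests = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_nests)]
    costs = [cost(n) for n in nests]
    for _ in range(n_iter):
        i = min(range(n_nests), key=lambda k: costs[k])  # best nest guides the flights
        for k in range(n_nests):
            # Levy flight toward the best nest; the DE/CS hybrid would apply a DE
            # mutation/crossover step here when selecting the cuckoo to update.
            new = [min(max(x + 0.01 * levy_step() * (x - nests[i][d]), lo), hi)
                   for d, x in enumerate(nests[k])]
            if (c := cost(new)) < costs[k]:
                nests[k], costs[k] = new, c
        for k in range(n_nests):                         # abandon a fraction pa of nests
            if random.random() < pa and k != i:
                nests[k] = [random.uniform(lo, hi) for _ in range(dim)]
                costs[k] = cost(nests[k])
    b = min(range(n_nests), key=lambda k: costs[k])
    return nests[b], costs[b]

print(cuckoo_search()[1])
```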

  1. Deterministic oscillatory search: a new meta-heuristic optimization ...

    Indian Academy of Sciences (India)

    The paper proposes a new optimization algorithm that is extremely robust in solving mathematical and engineering problems. The algorithm combines the deterministic nature of classical methods of optimization and global converging characteristics of meta-heuristic algorithms. Common traits of nature-inspired algorithms ...

  2. A Fast Evolutionary Metaheuristic for VRP with Time Windows

    NARCIS (Netherlands)

    Bräysy, Olli; Dullaert, Wout

    2003-01-01

    This paper presents a new evolutionary metaheuristic for the vehicle routing problem with time windows. Ideas on multi-start local search, ejection chains, simulated annealing and evolutionary computation are combined in a heuristic that is both robust and efficient. The proposed method produces

  3. Water distribution systems design optimisation using metaheuristics and hyperheuristics

    Directory of Open Access Journals (Sweden)

    DN Raad

    2011-06-01

    Full Text Available The topic of multi-objective water distribution systems (WDS) design optimisation using metaheuristics is investigated, comparing numerous modern metaheuristics, including several multi-objective evolutionary algorithms, an estimation of distribution algorithm and a recent hyperheuristic named AMALGAM (an evolutionary framework for the simultaneous incorporation of multiple metaheuristics), in order to determine which approach is most capable with respect to WDS design optimisation. Novel metaheuristics and variants of existing algorithms are developed, for a total of twenty-three algorithms examined. Testing with respect to eight small-to-large-sized WDS benchmarks from the literature reveals that the four top-performing algorithms are mutually non-dominated with respect to the various performance metrics used. These algorithms are NSGA-II, TAMALGAMJndu, TAMALGAMndu and AMALGAMSndp (the last three being novel variants of AMALGAM). However, when these four algorithms are applied to the design of a very large real-world benchmark, the AMALGAM paradigm outperforms NSGA-II convincingly, with AMALGAMSndp exhibiting the best performance overall.

  4. MEIGO: an open-source software suite based on metaheuristics for global optimization in systems biology and bioinformatics.

    Science.gov (United States)

    Egea, Jose A; Henriques, David; Cokelaer, Thomas; Villaverde, Alejandro F; MacNamara, Aidan; Danciu, Diana-Patricia; Banga, Julio R; Saez-Rodriguez, Julio

    2014-05-10

    Optimization is the key to solving many problems in computational biology. Global optimization methods, which provide a robust methodology, and metaheuristics in particular have proven to be the most efficient methods for many applications. Despite their utility, there is a limited availability of metaheuristic tools. We present MEIGO, an R and Matlab optimization toolbox (also available in Python via a wrapper of the R version), that implements metaheuristics capable of solving diverse problems arising in systems biology and bioinformatics. The toolbox includes the enhanced scatter search method (eSS) for continuous nonlinear programming (cNLP) and mixed-integer programming (MINLP) problems, and variable neighborhood search (VNS) for Integer Programming (IP) problems. Additionally, the R version includes BayesFit for parameter estimation by Bayesian inference. The eSS and VNS methods can be run on a single thread or in parallel using a cooperative strategy. The code is supplied under GPLv3 and is available at http://www.iim.csic.es/~gingproc/meigo.html. Documentation and examples are included. The R package has been submitted to BioConductor. We evaluate MEIGO against optimization benchmarks, and illustrate its applicability to a series of case studies in bioinformatics and systems biology where it outperforms other state-of-the-art methods. MEIGO provides a free, open-source platform for optimization that can be applied to multiple domains of systems biology and bioinformatics. It includes efficient state-of-the-art metaheuristics, and its open and modular structure allows the addition of further methods.

  5. A study on the performance comparison of metaheuristic algorithms on the learning of neural networks

    Science.gov (United States)

    Lai, Kee Huong; Zainuddin, Zarita; Ong, Pauline

    2017-08-01

    The learning or training process of neural networks entails the task of finding the most optimal set of parameters, which includes translation vectors, the dilation parameter, synaptic weights, and bias terms. Apart from the traditional gradient descent-based methods, metaheuristic methods can also be used for this learning purpose. Since the inception of the genetic algorithm half a century ago, the last decade has witnessed an explosion of a variety of novel metaheuristic algorithms, such as the harmony search algorithm, bat algorithm, and whale optimization algorithm. Despite the proof of the no free lunch theorem in the discipline of optimization, surveys of the machine learning literature give contrasting results: some researchers report that certain metaheuristic algorithms are superior to others, whereas others argue that different metaheuristic algorithms give comparable performance. As such, this paper aims to investigate whether a certain metaheuristic algorithm will outperform the other algorithms. In this work, three metaheuristic algorithms, namely genetic algorithms, particle swarm optimization, and the harmony search algorithm, are considered. The algorithms are incorporated in the learning of neural networks and their classification results on the benchmark UCI machine learning data sets are compared. It is found that all three metaheuristic algorithms give similar and comparable performance, as captured in the average overall classification accuracy. The results corroborate the findings reported in the works of previous researchers. Several recommendations are given, including the need for statistical analysis to verify the results and further theoretical work to support the obtained empirical results.

  6. Parameter estimation using meta-heuristics in systems biology: a comprehensive review.

    Science.gov (United States)

    Sun, Jianyong; Garibaldi, Jonathan M; Hodgman, Charlie

    2012-01-01

    This paper gives a comprehensive review of the application of meta-heuristics to optimization problems in systems biology, mainly focussing on the parameter estimation problem (also called the inverse problem or model calibration). It is intended both for the systems biologist who wishes to learn more about the various optimization techniques available and for the meta-heuristic optimizer who is interested in applying such techniques to problems in systems biology. First, the parameter estimation problems emerging from different areas of systems biology are described from the point of view of machine learning. Brief descriptions of various meta-heuristics developed for these problems follow, along with outlines of their advantages and disadvantages. Several important issues in applying meta-heuristics to the systems biology modelling problem are addressed, including the reliability and identifiability of model parameters, optimal design of experiments, and so on. Finally, we highlight some possible future research directions in this field.

  7. A well-scalable metaheuristic for the fleet size and mix vehicle routing problem with time windows

    NARCIS (Netherlands)

    Bräysy, Olli; Porkka, Pasi P.; Dullaert, Wout; Repoussis, Panagiotis P.; Tarantilis, Christos D.

    This paper presents an efficient and well-scalable metaheuristic for fleet size and mix vehicle routing with time windows. The suggested solution method combines the strengths of well-known threshold accepting and guided local search metaheuristics to guide a set of four local search heuristics. The

  8. Water distribution systems design optimisation using metaheuristics ...

    African Journals Online (AJOL)

    The topic of multi-objective water distribution systems (WDS) design optimisation using metaheuristics is investigated, comparing numerous modern metaheuristics, including several multi-objective evolutionary algorithms, an estimation of distribution algorithm and a recent hyperheuristic named AMALGAM (an evolutionary ...

  9. Applied Formal Methods for Elections

    DEFF Research Database (Denmark)

    Wang, Jian

    ... development time, or second dynamically, i.e. monitoring while an implementation is used during an election, or after the election is over, for forensic analysis. This thesis contains two chapters on this subject: the chapter Analyzing Implementations of Election Technologies describes a technique ... process. The chapter Measuring Voter Lines describes an automated data collection method for measuring voters' waiting time, and discusses statistical models designed to provide an understanding of the voter behavior in polling stations.

  10. Applied Formal Methods for Elections

    DEFF Research Database (Denmark)

    Wang, Jian

    Information technology is changing the way elections are organized. Technology renders the electoral process more efficient, but things could also go wrong: Voting software is complex, it consists of over thousands of lines of code, which makes it error-prone. Technical problems may cause delays at polling stations, or even delay the announcement of the final result. This thesis describes a set of methods to be used, for example, by system developers, administrators, or decision makers to examine election technologies, social choice algorithms and voter experience. Technology: Verifiability refers ... development time, or second dynamically, i.e. monitoring while an implementation is used during an election, or after the election is over, for forensic analysis. This thesis contains two chapters on this subject: the chapter Analyzing Implementations of Election Technologies describes a technique ... The chapter Measuring Voter Lines describes an automated data collection method for measuring voters' waiting time, and discusses statistical models designed to provide an understanding of the voter behavior in polling stations.

  11. Bayesian methods applied to GWAS.

    Science.gov (United States)

    Fernando, Rohan L; Garrick, Dorian

    2013-01-01

    Bayesian multiple-regression methods are being successfully used for genomic prediction and selection. These regression models simultaneously fit many more markers than the number of observations available for the analysis. Thus, the Bayes theorem is used to combine prior beliefs of marker effects, which are expressed in terms of prior distributions, with information from data for inference. Often, the analyses are too complex for closed-form solutions and Markov chain Monte Carlo (MCMC) sampling is used to draw inferences from posterior distributions. This chapter describes how these Bayesian multiple-regression analyses can be used for GWAS. In most GWAS, false positives are controlled by limiting the genome-wise error rate, which is the probability of one or more false-positive results, to a small value. As the number of tests in GWAS is very large, this results in very low power. Here we show how in Bayesian GWAS false positives can be controlled by limiting the proportion of false-positive results among all positives to some small value. The advantage of this approach is that the power of detecting associations is not inversely related to the number of markers.

  12. Metaheuristics progress as real problem solvers

    CERN Document Server

    Nonobe, Koji; Yagiura, Mutsunori

    2005-01-01

    Metaheuristics: Progress as Real Problem Solvers is a peer-reviewed volume of eighteen current, cutting-edge papers by leading researchers in the field. Included are an invited paper by F. Glover and G. Kochenberger, which discusses the concept of Metaheuristic agent processes, and a tutorial paper by M.G.C. Resende and C.C. Ribeiro discussing GRASP with path-relinking. Other papers discuss problem-solving approaches to timetabling, automated planograms, elevators, space allocation, shift design, cutting stock, flexible shop scheduling, colorectal cancer and cartography. A final group of methodology papers clarify various aspects of Metaheuristics from the computational viewpoint.

  13. A Comparison between Different Meta-Heuristic Techniques in Power Allocation for Physical Layer Security

    Directory of Open Access Journals (Sweden)

    N. Okati

    2017-12-01

    Full Text Available Node cooperation can protect wireless networks from eavesdropping by using the physical characteristics of wireless channels rather than cryptographic methods. Allocating the proper amount of power to cooperative nodes is a challenging task. In this paper, we use three cooperative nodes, one as a relay to increase throughput at the destination and two friendly jammers to degrade the eavesdropper's link. For this scenario, the secrecy rate function is a non-linear, non-convex problem, so exact optimization methods can only achieve a suboptimal solution. In this paper, we applied different meta-heuristic optimization techniques: Genetic Algorithm (GA), Particle Swarm Optimization (PSO), Bee Algorithm (BA), Tabu Search (TS), Simulated Annealing (SA) and Teaching-Learning-Based Optimization (TLBO). They are compared with each other to obtain a solution for power allocation in a wiretap wireless network. Although all these techniques find suboptimal solutions, they appear superior to exact optimization methods. Finally, we define a Figure of Merit (FOM) as a rule of thumb to determine the best meta-heuristic algorithm. This FOM considers the quality of the solution, the number of iterations required to converge, and the CPU time.

  14. Metaheuristics for engineering and architectural design of hospitals

    DEFF Research Database (Denmark)

    Holst, Malene Kirstine; Kirkegaard, Poul Henning

    2014-01-01

    This paper presents an approach for optimized hospital layout design based on metaheuristics. Through the use of metaheuristics the hospital functionalities are decomposed into geometric units. The units define the baseline for the design of the hospital, as the units are based on correlations of the functionalities within the units and across the units. For the study, presented in this paper, a model for hospital design is developed, formulating the design requirements of the hospital functionalities as correlations of the health facility. The requirements of the hospital define the constraints for layout design as correlations of the functionalities as well as sizes. The main contribution of this study is an investigation of a soft computing method for optimization of the hospital layout design, where design requirements are met while the design quality in terms of design preferences is maximized...

  15. Metaheuristic algorithms for building Covering Arrays: A review

    Directory of Open Access Journals (Sweden)

    Jimena Adriana Timaná-Peña

    2016-09-01

    Full Text Available Covering Arrays (CAs) are mathematical objects used in the functional testing of software components. They enable the testing of all interactions of a given size among the input parameters of a procedure, function, or logical unit in general, using the minimum number of test cases. Building CAs is a complex task (an NP-complete problem) that involves lengthy execution times and high computational loads. The most effective methods for building CAs are algebraic, greedy, and metaheuristic-based. The latter have reported the best results to date. This paper presents a description of the major contributions made by a selection of different metaheuristics, including simulated annealing, tabu search, genetic algorithms, ant colony algorithms, particle swarm algorithms, and harmony search algorithms. It is worth noting that simulated annealing-based algorithms have evolved as the most competitive, and currently form the state of the art.
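
    To make the definition concrete: a covering array CA(N; t, k, v) is an N x k array over v symbols in which every selection of t columns exhibits all v^t value combinations at least once. The short sketch below (an illustration, not taken from the paper) verifies this property for the classic 4-row strength-2 array over three binary factors.

```python
from itertools import combinations

def is_covering_array(rows, t, v):
    # A CA(N; t, k, v) must exhibit every one of the v^t value combinations
    # in every selection of t columns at least once.
    k = len(rows[0])
    for cols in combinations(range(k), t):
        seen = {tuple(r[c] for c in cols) for r in rows}
        if len(seen) < v ** t:
            return False
    return True

# Classic strength-2 example: CA(4; 2, 3, 2) - 4 tests cover all pairs of 3 binary factors.
ca = [
    [0, 0, 0],
    [0, 1, 1],
    [1, 0, 1],
    [1, 1, 0],
]
print(is_covering_array(ca, t=2, v=2))  # True: 4 rows instead of 2^3 = 8 exhaustive tests
```

    The optimization problem the metaheuristics attack is finding such an array with the smallest possible N.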

  16. On the Effectiveness of Nature-Inspired Metaheuristic Algorithms for Performing Phase Equilibrium Thermodynamic Calculations

    Directory of Open Access Journals (Sweden)

    Seif-Eddeen K. Fateen

    2014-01-01

    Full Text Available The search for reliable and efficient global optimization algorithms for solving phase stability and phase equilibrium problems in applied thermodynamics is an ongoing area of research. In this study, we evaluated and compared the reliability and efficiency of eight selected nature-inspired metaheuristic algorithms for solving difficult phase stability and phase equilibrium problems. These algorithms are the cuckoo search (CS), intelligent firefly (IFA), bat (BA), artificial bee colony (ABC), MAKHA (a hybrid between the monkey algorithm and the krill herd algorithm), covariance matrix adaptation evolution strategy (CMAES), magnetic charged system search (MCSS), and bare bones particle swarm optimization (BBPSO). The results clearly showed that CS is the most reliable of all methods as it successfully solved all thermodynamic problems tested in this study. CS proved to be a promising nature-inspired optimization method to perform applied thermodynamic calculations for process design.

  17. On the effectiveness of nature-inspired metaheuristic algorithms for performing phase equilibrium thermodynamic calculations.

    Science.gov (United States)

    Fateen, Seif-Eddeen K; Bonilla-Petriciolet, Adrian

    2014-01-01

    The search for reliable and efficient global optimization algorithms for solving phase stability and phase equilibrium problems in applied thermodynamics is an ongoing area of research. In this study, we evaluated and compared the reliability and efficiency of eight selected nature-inspired metaheuristic algorithms for solving difficult phase stability and phase equilibrium problems. These algorithms are the cuckoo search (CS), intelligent firefly (IFA), bat (BA), artificial bee colony (ABC), MAKHA, a hybrid between monkey algorithm and krill herd algorithm, covariance matrix adaptation evolution strategy (CMAES), magnetic charged system search (MCSS), and bare bones particle swarm optimization (BBPSO). The results clearly showed that CS is the most reliable of all methods as it successfully solved all thermodynamic problems tested in this study. CS proved to be a promising nature-inspired optimization method to perform applied thermodynamic calculations for process design.

  18. Advanced metaheuristic algorithms for laser optimization in optical accelerator technologies

    Energy Technology Data Exchange (ETDEWEB)

    Tomizawa, Hiromitsu, E-mail: hiro@spring8.or.jp [Japan Synchrotron Radiation Research Institute (JASRI), XFEL Joint Project/SPring-8, 1-1-1 Kouto, Sayo-cho, Sayo-gun, Hyogo (Japan)

    2011-10-15

    Lasers are among the most important experimental tools for user facilities, including synchrotron radiation and free electron lasers (FEL). In the synchrotron radiation field, lasers are widely used for experiments with Pump-Probe techniques. Especially for X-ray-FELs, lasers play important roles as seed light sources or photocathode-illuminating light sources to generate a high-brightness electron bunch. For future accelerators, laser-based technologies such as electro-optic (EO) sampling to measure ultra-short electron bunches and optical-fiber-based femtosecond timing systems have been intensively developed in the last decade. Therefore, controls and optimizations of laser pulse characteristics are strongly required for many kinds of experiments and improvement of accelerator systems. However, people believe that lasers should be tuned and customized for each requirement manually by experts. This makes it difficult for laser systems to be part of the common accelerator infrastructure. Automatic laser tuning requires sophisticated algorithms, and the metaheuristic algorithm is one of the best solutions. The metaheuristic laser tuning system is expected to reduce the human effort and time required for laser preparations. I have shown some successful results on a metaheuristic algorithm based on a genetic algorithm to optimize spatial (transverse) laser profiles, and a hill-climbing method extended with a fuzzy set theory to choose one of the best laser alignments automatically for each machine requirement.

  19. Advanced metaheuristic algorithms for laser optimization in optical accelerator technologies

    International Nuclear Information System (INIS)

    Tomizawa, Hiromitsu

    2011-01-01

    Lasers are among the most important experimental tools for user facilities, including synchrotron radiation and free electron lasers (FEL). In the synchrotron radiation field, lasers are widely used for experiments with Pump-Probe techniques. Especially for X-ray-FELs, lasers play important roles as seed light sources or photocathode-illuminating light sources to generate a high-brightness electron bunch. For future accelerators, laser-based technologies such as electro-optic (EO) sampling to measure ultra-short electron bunches and optical-fiber-based femtosecond timing systems have been intensively developed in the last decade. Therefore, controls and optimizations of laser pulse characteristics are strongly required for many kinds of experiments and improvement of accelerator systems. However, people believe that lasers should be tuned and customized for each requirement manually by experts. This makes it difficult for laser systems to be part of the common accelerator infrastructure. Automatic laser tuning requires sophisticated algorithms, and the metaheuristic algorithm is one of the best solutions. The metaheuristic laser tuning system is expected to reduce the human effort and time required for laser preparations. I have shown some successful results on a metaheuristic algorithm based on a genetic algorithm to optimize spatial (transverse) laser profiles, and a hill-climbing method extended with a fuzzy set theory to choose one of the best laser alignments automatically for each machine requirement.

  20. Solving Large Clustering Problems with Meta-Heuristic Search

    DEFF Research Database (Denmark)

    Turkensteen, Marcel; Andersen, Kim Allan; Bang-Jensen, Jørgen

    In Clustering Problems, groups of similar subjects are to be retrieved from data sets. In this paper, Clustering Problems with the frequently used Minimum Sum-of-Squares Criterion are solved using meta-heuristic search. Tabu search has proved to be a successful methodology for solving optimization problems, but applications to large clustering problems are rare. The simulated annealing heuristic has mainly been applied to relatively small instances. In this paper, we implement tabu search and simulated annealing approaches and compare them to the commonly used k-means approach. We find that the meta...
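
    A minimal illustration of the criterion and one of the meta-heuristics: the sketch below applies simulated annealing to minimum sum-of-squares clustering of an invented 2-D data set, moving one point between clusters per step and accepting uphill moves with the usual Boltzmann probability. The data, cooling schedule, and move type are all assumptions for the example, not the paper's implementation.

```python
import math, random

random.seed(7)
# Toy 2-D data in three separated groups.
DATA = ([(random.gauss(0, .4), random.gauss(0, .4)) for _ in range(20)]
        + [(random.gauss(4, .4), random.gauss(0, .4)) for _ in range(20)]
        + [(random.gauss(2, .4), random.gauss(3, .4)) for _ in range(20)])
K = 3

def sum_of_squares(labels):
    # Minimum sum-of-squares criterion: squared distance of each point to its centroid.
    total = 0.0
    for c in range(K):
        pts = [p for p, l in zip(DATA, labels) if l == c]
        if not pts:
            continue
        cx = sum(p[0] for p in pts) / len(pts)
        cy = sum(p[1] for p in pts) / len(pts)
        total += sum((p[0] - cx) ** 2 + (p[1] - cy) ** 2 for p in pts)
    return total

def simulated_annealing(t0=5.0, cooling=0.995, n_iter=20000):
    labels = [random.randrange(K) for _ in DATA]
    cost, t = sum_of_squares(labels), t0
    for _ in range(n_iter):
        i = random.randrange(len(DATA))
        old = labels[i]
        labels[i] = random.randrange(K)          # move one point to another cluster
        new_cost = sum_of_squares(labels)
        if new_cost < cost or random.random() < math.exp((cost - new_cost) / t):
            cost = new_cost                      # accept improving or uphill move
        else:
            labels[i] = old                      # reject: restore the previous label
        t *= cooling                             # geometric cooling schedule
    return labels, cost

print(simulated_annealing()[1])
```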

  1. Metaheuristic analysis in reverse logistics of waste

    Energy Technology Data Exchange (ETDEWEB)

    Serrano Elena, A.

    2016-07-01

    This paper focuses on the use of search metaheuristic techniques on a dynamic and deterministic model to analyze and solve cost optimization and location problems in reverse logistics, within the field of municipal waste management in Málaga (Spain). In this work we have selected two metaheuristic techniques of current research relevance to test the validity of the proposed approach: the Genetic Algorithm (GA), an important technique with a strong international presence, and Particle Swarm Optimization (PSO), an interesting technique that works with swarm intelligence. These metaheuristic techniques will be used to solve cost optimization and location problems for MSW recovery facilities (transfer centers and treatment plants). (Author)

  2. H-methods in applied sciences

    DEFF Research Database (Denmark)

    Høskuldsson, Agnar

    2008-01-01

    The author has developed a framework for mathematical modelling within applied sciences. It is characteristic for data from 'nature and industry' that they have reduced rank for inference. It means that full rank solutions normally do not give satisfactory solutions. The basic idea of H-methods is concerned with finding a balance between the estimation task and the prediction task. The name H-methods has been chosen because of the close analogy with the Heisenberg uncertainty inequality. A similar situation is present in modelling data. The mathematical modelling stops when the prediction aspect of the model cannot be improved. H-methods have been applied to a wide range of fields within applied sciences. In each case, the H-methods provide superior solutions compared to the traditional ones. A background for the H-methods is presented. The H-principle of mathematical modelling is explained. It is shown how...

  3. [Montessori method applied to dementia - literature review].

    Science.gov (United States)

    Brandão, Daniela Filipa Soares; Martín, José Ignacio

    2012-06-01

    The Montessori method was initially applied to children, but now it has also been applied to people with dementia. The purpose of this study is to systematically review the research on the effectiveness of this method using the Medical Literature Analysis and Retrieval System Online (Medline) with the keywords dementia and Montessori method. We selected 10 studies, in which there were significant improvements in participation and constructive engagement, and reduction of negative affect and passive engagement. Nevertheless, systematic reviews about this non-pharmacological intervention in dementia rate this method as weak in terms of effectiveness. This apparent discrepancy can be explained because the Montessori method may have, in fact, a small influence on dimensions such as behavioral problems, or because there is no research about this method with high levels of control, such as the presence of several control groups or a double-blind study.

  4. Engineering applications of metaheuristics: an introduction

    Science.gov (United States)

    Oliva, Diego; Hinojosa, Salvador; Demeshko, M. V.

    2017-01-01

    Metaheuristic algorithms are important tools that in recent years have been used extensively in several fields. In engineering, a large number of problems can be solved from an optimization point of view. This paper is an introduction to how metaheuristics can be used to solve complex engineering problems. Their use produces accurate results in problems that are computationally expensive. Experimental results support the performance obtained by the selected algorithms in such specific problems as digital filter design, image processing and solar cell design.

  5. Multicompare tests of the performance of different metaheuristics in EEG dipole source localization.

    Science.gov (United States)

    Escalona-Vargas, Diana Irazú; Lopez-Arevalo, Ivan; Gutiérrez, David

    2014-01-01

    We study the use of nonparametric multicompare statistical tests on the performance of simulated annealing (SA), genetic algorithm (GA), particle swarm optimization (PSO), and differential evolution (DE), when used for electroencephalographic (EEG) source localization. Such a task can be posed as an optimization problem for which the referred metaheuristic methods are well suited. Hence, we evaluate the localization performance in terms of the metaheuristics' operational parameters and for a fixed number of evaluations of the objective function. In this way, we are able to link the efficiency of the metaheuristics with a common measure of computational cost. Our results did not show significant differences in the metaheuristics' performance for the case of single source localization. In the case of localizing two correlated sources, we found that PSO (ring and tree topologies) and DE performed the worst, so they should not be considered in large-scale EEG source localization problems. Overall, the multicompare tests allowed us to demonstrate the little effect that the selection of a particular metaheuristic and the variations in their operational parameters have on this optimization problem.
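
    A nonparametric multicompare workflow of the kind described typically starts with a Friedman test over matched problem instances. The sketch below runs scipy's Friedman test on hypothetical localization errors for the four metaheuristics; all numbers are invented, and a post-hoc test would still be needed to locate pairwise differences.

```python
from scipy.stats import friedmanchisquare

# Hypothetical localization errors (mm) of four metaheuristics on six EEG test cases;
# each list is one algorithm, columns are matched problem instances.
sa  = [7.1, 6.8, 8.0, 7.5, 6.9, 7.7]
ga  = [7.0, 6.9, 7.8, 7.6, 7.1, 7.5]
pso = [7.2, 7.0, 8.1, 7.4, 7.0, 7.9]
de  = [6.8, 6.7, 7.7, 7.3, 6.8, 7.4]

# The Friedman test ranks the algorithms within each instance and checks whether the
# mean ranks differ; being nonparametric, it needs no normality assumption, which
# suits non-normally distributed solution qualities.
stat, p = friedmanchisquare(sa, ga, pso, de)
print(f"Friedman chi-square = {stat:.3f}, p = {p:.3f}")
# A small p-value would justify a post-hoc multicompare procedure (e.g., Nemenyi)
# to identify which algorithm pairs actually differ.
```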

  6. A Threshold Accepting Metaheuristic for the Vehicle Routing Problem with Time Windows

    NARCIS (Netherlands)

    Bräysy, Olli; Berger, Jean; Barkaoui, Mohamed; Dullaert, Wout

    2003-01-01

    Threshold Accepting, a variant of Simulated Annealing, is applied for the first time to a set of 356 benchmark instances for the Vehicle Routing Problem with Time Windows. The Threshold Accepting metaheuristic is used to improve upon results obtained with a recent parallel genetic algorithm and a

  7. A Threshold Accepting Metaheuristic for the Vehicle Routing Problem with Time Windows.

    NARCIS (Netherlands)

    Bräysy, Olli; Berger, Jean; Barkaoui, Mohamed; Dullaert, Wout

    2003-01-01

    Threshold Accepting, a variant of Simulated Annealing, is applied for the first time to a set of 356 benchmark instances for the Vehicle Routing Problem with Time Windows. The Threshold Accepting metaheuristic is used to improve upon results obtained with a recent parallel genetic algorithm and a

  8. Generalized Response Surface Methodology : A New Metaheuristic

    NARCIS (Netherlands)

    Kleijnen, J.P.C.

    2006-01-01

    Generalized Response Surface Methodology (GRSM) is a novel general-purpose metaheuristic based on Box and Wilson's Response Surface Methodology (RSM). Both GRSM and RSM estimate local gradients to search for the optimal solution. These gradients use local first-order polynomials. GRSM, however, uses
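
    To illustrate the gradient-estimation step that GRSM inherits from RSM, the sketch below fits a local first-order polynomial on a 2^2 factorial design around the current point (where the slopes reduce to averaged differences) and follows the estimated gradient uphill. The noisy response function, design spread, and step sizes are invented for the example.

```python
import random

def noisy_response(x1, x2):
    # Hypothetical simulation output observed with noise; its optimum is at (3, -1).
    return -((x1 - 3) ** 2 + (x2 + 1) ** 2) + random.gauss(0, 0.05)

def estimate_gradient(center, h=0.5, reps=3):
    # Fit a local first-order polynomial y ~ b0 + b1*x1 + b2*x2 on a 2^2 factorial
    # design around the center; with this design the least-squares slopes reduce
    # to averaged differences, which estimate the local gradient.
    cx, cy = center
    pts = [(cx + sx * h, cy + sy * h) for sx in (-1, 1) for sy in (-1, 1)]
    ys = [sum(noisy_response(*p) for _ in range(reps)) / reps for p in pts]
    b1 = ((ys[2] + ys[3]) - (ys[0] + ys[1])) / (4 * h)  # slope along x1
    b2 = ((ys[1] + ys[3]) - (ys[0] + ys[2])) / (4 * h)  # slope along x2
    return b1, b2

# Steepest-ascent iterations, as in classical RSM; GRSM generalizes this local idea.
x = (0.0, 0.0)
for _ in range(20):
    g1, g2 = estimate_gradient(x)
    x = (x[0] + 0.5 * g1, x[1] + 0.5 * g2)
print(x)  # should approach (3, -1)
```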

  9. Metaheuristic optimization of acoustic inverse problems.

    NARCIS (Netherlands)

    van Leijen, A.V.; Rothkrantz, L.; Groen, F.

    2011-01-01

    Swift solving of geoacoustic inverse problems strongly depends on the application of a global optimization scheme. Given a particular inverse problem, this work aims to answer the questions how to select an appropriate metaheuristic search strategy, and how to configure it for optimal performance.

  10. Swarm intelligence metaheuristics for enhanced data analysis and optimization.

    Science.gov (United States)

    Hanrahan, Grady

    2011-09-21

    The swarm intelligence (SI) computing paradigm has proven itself as a comprehensive means of solving complicated analytical chemistry problems by emulating biologically-inspired processes. As global optimum search metaheuristics, associated algorithms have been widely used in training neural networks, function optimization, prediction and classification, and in a variety of process-based analytical applications. The goal of this review is to provide readers with critical insight into the utility of swarm intelligence tools as methods for solving complex chemical problems. Consideration will be given to algorithm development, ease of implementation and model performance, detailing subsequent influences on a number of application areas in the analytical, bioanalytical and detection sciences.

  11. Exact and Metaheuristic Approaches for a Bi-Objective School Bus Scheduling Problem.

    Science.gov (United States)

    Chen, Xiaopan; Kong, Yunfeng; Dang, Lanxue; Hou, Yane; Ye, Xinyue

    2015-01-01

    As a class of hard combinatorial optimization problems, the school bus routing problem has received considerable attention in the last decades. For a multi-school system, given the bus trips for each school, the school bus scheduling problem aims at optimizing bus schedules to serve all the trips within the school time windows. In this paper, we propose two approaches for solving the bi-objective school bus scheduling problem: an exact method of mixed integer programming (MIP) and a metaheuristic method which combines simulated annealing with local search. We develop MIP formulations for homogenous and heterogeneous fleet problems respectively and solve the models by MIP solver CPLEX. The bus type-based formulation for heterogeneous fleet problem reduces the model complexity in terms of the number of decision variables and constraints. The metaheuristic method is a two-stage framework for minimizing the number of buses to be used as well as the total travel distance of buses. We evaluate the proposed MIP and the metaheuristic method on two benchmark datasets, showing that on both instances, our metaheuristic method significantly outperforms the respective state-of-the-art methods.

  12. Exact and Metaheuristic Approaches for a Bi-Objective School Bus Scheduling Problem.

    Directory of Open Access Journals (Sweden)

    Xiaopan Chen

    Full Text Available As a class of hard combinatorial optimization problems, the school bus routing problem has received considerable attention in the last decades. For a multi-school system, given the bus trips for each school, the school bus scheduling problem aims at optimizing bus schedules to serve all the trips within the school time windows. In this paper, we propose two approaches for solving the bi-objective school bus scheduling problem: an exact method of mixed integer programming (MIP) and a metaheuristic method which combines simulated annealing with local search. We develop MIP formulations for homogenous and heterogeneous fleet problems respectively and solve the models by the MIP solver CPLEX. The bus type-based formulation for the heterogeneous fleet problem reduces the model complexity in terms of the number of decision variables and constraints. The metaheuristic method is a two-stage framework for minimizing the number of buses to be used as well as the total travel distance of buses. We evaluate the proposed MIP and the metaheuristic method on two benchmark datasets, showing that on both instances, our metaheuristic method significantly outperforms the respective state-of-the-art methods.

  13. Applied mathematical methods in nuclear thermal hydraulics

    International Nuclear Information System (INIS)

    Ransom, V.H.; Trapp, J.A.

    1983-01-01

    Applied mathematical methods are used extensively in modeling of nuclear reactor thermal-hydraulic behavior. This application has required significant extension to the state-of-the-art. The problems encountered in modeling of two-phase fluid transients and the development of associated numerical solution methods are reviewed and quantified using results from a numerical study of an analogous linear system of differential equations. In particular, some possible approaches for formulating a well-posed numerical problem for an ill-posed differential model are investigated and discussed. The need for closer attention to numerical fidelity is indicated

  14. Entropy viscosity method applied to Euler equations

    International Nuclear Information System (INIS)

    Delchini, M. O.; Ragusa, J. C.; Berry, R. A.

    2013-01-01

    The entropy viscosity method [4] has been successfully applied to hyperbolic systems of equations such as the Burgers equation and the Euler equations. The method consists in adding dissipative terms to the governing equations, where a viscosity coefficient modulates the amount of dissipation. The entropy viscosity method has been applied to the 1-D Euler equations with variable area using a continuous finite element discretization in the MOOSE framework, and our results show that it has the ability to efficiently smooth out oscillations and accurately resolve shocks. Two equations of state are considered: the ideal gas and stiffened gas equations of state. Results are provided for a second-order implicit time scheme (BDF2). Some typical Riemann problems are run with the entropy viscosity method to demonstrate some of its features. Then, a 1-D convergent-divergent nozzle is considered with open boundary conditions. The correct steady state is reached for the liquid and gas phases with a time implicit scheme. The entropy viscosity method behaves correctly in every problem run. For each test problem, results are shown for both equations of state considered here. (authors)

  15. A Fast Evolutionary Metaheuristic for the Vehicle Routing Problem with Time Windows

    NARCIS (Netherlands)

    Bräysy, Olli; Dullaert, W.

    2003-01-01

    This paper presents a new evolutionary metaheuristic for the vehicle routing problem with time windows. Ideas on multi-start local search, ejection chains, simulated annealing and evolutionary computation are combined in a heuristic that is both robust and efficient. The proposed method produces

  16. A novel metaheuristic for continuous optimization problems: Virus optimization algorithm

    Science.gov (United States)

    Liang, Yun-Chia; Rodolfo Cuevas Juarez, Josue

    2016-01-01

    A novel metaheuristic for continuous optimization problems, named the virus optimization algorithm (VOA), is introduced and investigated. VOA is an iterative, population-based method that imitates the behaviour of viruses attacking a living cell. The number of viruses grows at each replication and is controlled by an immune system (a so-called 'antivirus') to prevent the explosive growth of the virus population. The viruses are divided into two classes (strong and common) to balance the exploitation and exploration effects. The performance of the VOA is validated through a set of eight benchmark functions, which are also subject to rotation and shifting effects to test its robustness. Extensive comparisons were conducted with over 40 well-known metaheuristic algorithms and their variations, such as artificial bee colony, artificial immune system, differential evolution, evolutionary programming, evolutionary strategy, genetic algorithm, harmony search, invasive weed optimization, memetic algorithm, particle swarm optimization and simulated annealing. The results showed that the VOA is a viable solution for continuous optimization.
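
    A minimal sketch of the mechanisms described above is given below; the population sizes, perturbation scales and antivirus trigger are guesses chosen for illustration, not the settings from the paper.

      import random

      def sphere(x):  # toy benchmark to minimize
          return sum(v * v for v in x)

      def perturb(x, scale, lo=-5.0, hi=5.0):
          return [min(hi, max(lo, v + random.gauss(0, scale))) for v in x]

      def voa(dim=5, pop=10, n_strong=3, max_pop=60, iters=200):
          viruses = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(pop)]
          for _ in range(iters):
              viruses.sort(key=sphere)
              offspring = []
              for i, v in enumerate(viruses):
                  if i < n_strong:   # strong viruses: more, finer replicas (exploitation)
                      offspring += [perturb(v, 0.1) for _ in range(3)]
                  else:              # common viruses: fewer, coarser replicas (exploration)
                      offspring += [perturb(v, 1.0)]
              viruses += offspring
              if len(viruses) > max_pop:   # "antivirus": the immune system culls the worst
                  viruses = sorted(viruses, key=sphere)[:pop]
          return min(viruses, key=sphere)

      best = voa()
      print(best, sphere(best))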

  17. Geostatistical methods applied to field model residuals

    DEFF Research Database (Denmark)

    Maule, Fox; Mosegaard, K.; Olsen, Nils

    The geomagnetic field varies on a variety of time- and length scales, which are only rudimentarily considered in most present field models. The part of the observed field that cannot be explained by a given model, the model residuals, is often considered as an estimate of the data uncertainty (which consists of measurement errors and unmodelled signal), and is typically assumed to be uncorrelated and Gaussian distributed. We have applied geostatistical methods to analyse the residuals of the Oersted(09d/04) field model [http://www.dsri.dk/Oersted/Field_models/IGRF_2005_candidates/], which is based on 5 years of Ørsted and CHAMP data, and includes secular variation and acceleration, as well as low-degree external (magnetospheric) and induced fields. The analysis is done in order to find the statistical behaviour of the space-time structure of the residuals, as a proxy for the data covariances...

  18. Gradient gravitational search: An efficient metaheuristic algorithm for global optimization.

    Science.gov (United States)

    Dash, Tirtharaj; Sahu, Prabhat K

    2015-05-30

    The adaptation of novel techniques developed in the field of computational chemistry to solve the concerned problems for large and flexible molecules is taking center stage with regard to efficient algorithms, computational cost and accuracy. In this article, the gradient-based gravitational search (GGS) algorithm, which uses analytical gradients for a fast minimization to the next local minimum, is reported. Its efficiency as a metaheuristic approach has also been compared with Gradient Tabu Search and others such as the Gravitational Search, Cuckoo Search, and Backtracking Search algorithms for global optimization. Moreover, the GGS approach has also been applied to computational chemistry problems for finding the minimal-value potential energy of two-dimensional and three-dimensional off-lattice protein models. The simulation results reveal the relative stability and physical accuracy of the protein models with efficient computational cost. © 2015 Wiley Periodicals, Inc.
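
    One plausible reading of this hybrid, sketched below under stated assumptions, is a standard gravitational search over a population of agents, with the current best agent additionally slid downhill by an analytical gradient step each iteration. The objective, its gradient and all constants are placeholders; the paper's exact coupling may differ.

      import math, random

      def f(x):       # toy objective with an analytical gradient
          return sum(v * v for v in x)

      def grad(x):
          return [2 * v for v in x]

      def gsa_with_gradient(dim=3, n=8, iters=100, g0=10.0, lr=0.05):
          X = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(n)]
          V = [[0.0] * dim for _ in range(n)]
          for t in range(iters):
              fit = [f(x) for x in X]
              best, worst = min(fit), max(fit)
              m = [(fi - worst) / (best - worst - 1e-12) for fi in fit]  # raw masses
              M = [mi / (sum(m) + 1e-12) for mi in m]                    # normalized
              G = g0 * math.exp(-5 * t / iters)                          # decaying constant
              for i in range(n):
                  acc = [0.0] * dim
                  for j in range(n):
                      if i == j:
                          continue
                      dist = math.dist(X[i], X[j]) + 1e-12
                      for d in range(dim):
                          acc[d] += random.random() * G * M[j] * (X[j][d] - X[i][d]) / dist
                  for d in range(dim):
                      V[i][d] = random.random() * V[i][d] + acc[d]
                      X[i][d] += V[i][d]
              # gradient step: pull the current best agent towards its local minimum
              i_best = min(range(n), key=lambda k: f(X[k]))
              g = grad(X[i_best])
              X[i_best] = [v - lr * gv for v, gv in zip(X[i_best], g)]
          return min(X, key=f)

      x = gsa_with_gradient()
      print(x, f(x))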

  19. METAHEURISTICS EVALUATION: A PROPOSAL FOR A MULTICRITERIA METHODOLOGY

    Directory of Open Access Journals (Sweden)

    Valdir Agustinho de Melo

    2015-12-01

    Full Text Available In this work we propose a multicriteria evaluation scheme for heuristic algorithms based on the classic Condorcet ranking technique. Weights are associated with the ranking of an algorithm within a set being compared. We used five criteria and a function on the set of natural numbers to create a ranking. The discussed comparison involves three well-known problems of combinatorial optimization: the Traveling Salesperson Problem (TSP), the Capacitated Vehicle Routing Problem (CVRP) and the Quadratic Assignment Problem (QAP). The tested instances came from public libraries. Each algorithm was used with essentially the same structure, the same local search was applied and the initial solutions were similarly built. It is important to note that the work does not make proposals involving algorithms: the results for the three problems are shown only to illustrate the operation of the evaluation technique. Four metaheuristics - GRASP, Tabu Search, ILS and VNS - are therefore only used for the comparisons.
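
    The core of a Condorcet-style comparison is easy to sketch: each criterion "votes" by ranking the algorithms, and algorithm A beats B head to head if A ranks better on more criteria. The rank numbers below are invented to illustrate the counting; they are not results from the paper.

      from itertools import combinations

      # Hypothetical criterion ranks (1 = best) for four algorithms on five criteria.
      ranks = {
          "GRASP": [2, 1, 3, 2, 4],
          "Tabu":  [1, 3, 2, 1, 2],
          "ILS":   [3, 2, 1, 3, 1],
          "VNS":   [4, 4, 4, 4, 3],
      }

      def condorcet(ranks):
          """Count pairwise wins: A beats B if A ranks better on more criteria."""
          wins = {a: 0 for a in ranks}
          for a, b in combinations(ranks, 2):
              a_better = sum(ra < rb for ra, rb in zip(ranks[a], ranks[b]))
              b_better = sum(rb < ra for ra, rb in zip(ranks[a], ranks[b]))
              if a_better > b_better:
                  wins[a] += 1
              elif b_better > a_better:
                  wins[b] += 1
          return sorted(wins.items(), key=lambda kv: -kv[1])

      # A Condorcet winner beats every other algorithm in head-to-head comparison.
      print(condorcet(ranks))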

  20. Computational methods applied to wind tunnel optimization

    Science.gov (United States)

    Lindsay, David

    methods, coordinate transformation theorems and techniques including the Method of Jacobians, and a derivation of the fluid flow fundamentals required for the model. It applies the methods to study the effect of cross-section and fillet variation, and to obtain a sample design of a high-uniformity nozzle.

  1. Putting Continuous Metaheuristics to Work in Binary Search Spaces

    Directory of Open Access Journals (Sweden)

    Broderick Crawford

    2017-01-01

    Full Text Available In the real world, there are a number of optimization problems whose search space is restricted to take binary values; however, there are many continuous metaheuristics with good results in continuous search spaces. These algorithms must be adapted to solve binary problems. This paper surveys articles focused on the binarization of metaheuristics designed for continuous optimization.
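
    A common binarization scheme from this literature is the transfer-function approach: the continuous velocity of a particle is mapped through an S-shaped (sigmoid) function to a bit-flip probability. The sketch below applies it inside a binary PSO on the OneMax toy objective; the coefficients are placeholders chosen for illustration.

      import math, random

      def onemax(bits):                 # toy binary objective to maximize
          return sum(bits)

      def s_shaped(v):                  # S-shaped transfer function (sigmoid)
          return 1.0 / (1.0 + math.exp(-v))

      def binary_pso(n_bits=20, n_particles=10, iters=100):
          pos = [[random.randint(0, 1) for _ in range(n_bits)] for _ in range(n_particles)]
          vel = [[0.0] * n_bits for _ in range(n_particles)]
          pbest = [p[:] for p in pos]
          gbest = max(pos, key=onemax)[:]
          for _ in range(iters):
              for i in range(n_particles):
                  for d in range(n_bits):
                      r1, r2 = random.random(), random.random()
                      vel[i][d] = (0.7 * vel[i][d]
                                   + 1.5 * r1 * (pbest[i][d] - pos[i][d])
                                   + 1.5 * r2 * (gbest[d] - pos[i][d]))
                      # binarization step: velocity -> probability -> bit
                      pos[i][d] = 1 if random.random() < s_shaped(vel[i][d]) else 0
                  if onemax(pos[i]) > onemax(pbest[i]):
                      pbest[i] = pos[i][:]
                  if onemax(pos[i]) > onemax(gbest):
                      gbest = pos[i][:]
          return gbest

      print(binary_pso())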

  2. Metaheuristics in water, geotechnical and transport engineering

    CERN Document Server

    Yang, Xin-She; Talatahari, Siamak; Alavi, Amir Hossein

    2013-01-01

    Due to an ever-decreasing supply in raw materials and stringent constraints on conventional energy sources, demand for lightweight, efficient and low cost structures has become crucially important in modern engineering design. This requires engineers to search for optimal and robust design options to address design problems that are often large in scale and highly nonlinear, making finding solutions challenging. In the past two decades, metaheuristic algorithms have shown promising power, efficiency and versatility in solving these difficult optimization problems. This book examines the la

  3. Stellarator optimization under several criteria using metaheuristics

    International Nuclear Information System (INIS)

    Castejón, F; Gómez-Iglesias, A; Vega-Rodríguez, M A; Jiménez, J A; Velasco, J L; Romero, J A

    2013-01-01

    A new algorithm based on metaheuristics has been developed to perform stellarator optimization. This algorithm, which is inspired by the behaviour of bees and is called distributed asynchronous bees, has been used for the optimization under three criteria: minimization of B × grad(B) drift, Mercier and ballooning stability. This algorithm is tested by partially optimizing TJ-II and, afterwards, a three-period optimized configuration is found by performing a full optimization that starts from a three-period heliac. (paper)

  4. A hybrid metaheuristic for closest string problem.

    Science.gov (United States)

    Mousavi, Sayyed Rasoul

    2011-01-01

    The Closest String Problem (CSP) is an optimisation problem, which is to obtain a string with the minimum distance from a number of given strings. In this paper, a new metaheuristic algorithm is investigated for the problem, whose main feature is relatively high speed in obtaining good solutions, which is essential when the input size is large. The proposed algorithm is compared with four recent algorithms suggested for the problem, outperforming them in more than 98% of the cases. It is also remarkably faster than all of them, running within 1 s in most of the experimental cases.

  5. A Hybrid Metaheuristic DE/CS Algorithm for UCAV Three-Dimension Path Planning

    OpenAIRE

    Wang, Gaige; Guo, Lihong; Duan, Hong; Wang, Heqi; Liu, Luo; Shao, Mingzhen

    2012-01-01

    Three-dimension path planning for uninhabited combat air vehicle (UCAV) is a complicated high-dimension optimization problem, which primarily centralizes on optimizing the flight route considering the different kinds of constrains under complicated battle field environments. A new hybrid metaheuristic differential evolution (DE) and cuckoo search (CS) algorithm is proposed to solve the UCAV three-dimension path planning problem. DE is applied to optimize the process of selecting cuckoos of th...

  6. Metaheuristic simulation optimisation for the stochastic multi-retailer supply chain

    Science.gov (United States)

    Omar, Marina; Mustaffa, Noorfa Haszlinna H.; Othman, Siti Norsyahida

    2013-04-01

    Supply Chain Management (SCM) is an important activity in all producing facilities and in many organizations to enable vendors, manufacturers and suppliers to interact gainfully and plan optimally their flow of goods and services. A simulation optimization approach is now widely used in research on finding the best solution for the decision-making process in Supply Chain Management (SCM), which generally faces complexity with large sources of uncertainty and various decision factors. The metaheuristic method is the most popular simulation optimization approach. However, very few studies have applied this approach to optimizing simulation models for supply chains. Thus, this paper is interested in evaluating the performance of the metaheuristic method for stochastic supply chains in determining the best flexible inventory replenishment parameters that minimize the total operating cost. The simulation optimization model is proposed based on the Bees algorithm (BA), which has been widely applied in engineering applications such as training neural networks for pattern recognition. BA is a relatively new member of the meta-heuristics family. BA tries to model the natural behaviour of honey bees in food foraging. Honey bees use several mechanisms, like the waggle dance, to optimally locate food sources and to search for new ones. This makes them a good candidate for developing new algorithms for solving optimization problems. This model considers an outbound centralised distribution system consisting of one supplier and three identical retailers; demand is assumed to be independent and identically distributed, with unlimited supply capacity at the supplier.
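
    The canonical Bees algorithm structure, scouts plus recruited foragers around elite sites, can be sketched as follows. The cost function is a stand-in (in the paper it would be the simulated total operating cost of the supply chain), and all site and recruit counts are illustrative guesses.

      import random

      def cost(x):                       # stand-in objective, e.g. a total-cost surrogate
          return sum(v * v for v in x)

      def neighbour(x, radius):
          return [v + random.uniform(-radius, radius) for v in x]

      def bees(dim=4, n_scouts=20, n_elite=3, n_best=6,
               elite_recruits=10, best_recruits=5, iters=100, radius=1.0):
          sites = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(n_scouts)]
          for _ in range(iters):
              sites.sort(key=cost)
              new_sites = []
              for rank, site in enumerate(sites[:n_best]):
                  recruits = elite_recruits if rank < n_elite else best_recruits
                  # recruited foragers search the neighbourhood of promising sites
                  cands = [neighbour(site, radius) for _ in range(recruits)] + [site]
                  new_sites.append(min(cands, key=cost))
              # the remaining scouts keep exploring the space at random
              new_sites += [[random.uniform(-5, 5) for _ in range(dim)]
                            for _ in range(n_scouts - n_best)]
              sites = new_sites
              radius *= 0.98             # gradually shrink the neighbourhoods
          return min(sites, key=cost)

      best = bees()
      print(best, cost(best))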

  7. Advances in metaheuristics for gene selection and classification of microarray data.

    Science.gov (United States)

    Duval, Béatrice; Hao, Jin-Kao

    2010-01-01

    Gene selection aims at identifying a (small) subset of informative genes from the initial data in order to obtain high predictive accuracy for classification. Gene selection can be considered as a combinatorial search problem and thus be conveniently handled with optimization methods. In this article, we summarize some recent developments of using metaheuristic-based methods within an embedded approach for gene selection. In particular, we put forward the importance and usefulness of integrating problem-specific knowledge into the search operators of such a method. To illustrate the point, we explain how ranking coefficients of a linear classifier such as support vector machine (SVM) can be profitably used to reinforce the search efficiency of Local Search and Evolutionary Search metaheuristic algorithms for gene selection and classification.

  8. A study of metaheuristic algorithms for high dimensional feature selection on microarray data

    Science.gov (United States)

    Dankolo, Muhammad Nasiru; Radzi, Nor Haizan Mohamed; Sallehuddin, Roselina; Mustaffa, Noorfa Haszlinna

    2017-11-01

    Microarray systems enable experts to examine gene profiles at the molecular level using machine learning algorithms. They increase the potential for classification and diagnosis of many diseases at the gene expression level. However, numerous difficulties may affect the efficiency of machine learning algorithms, including the vast number of gene features comprised in the original data. Many of these features may be unrelated to the intended analysis. Therefore, feature selection is necessary in the data pre-processing stage. Many feature selection algorithms have been developed and applied to microarrays, including the metaheuristic optimization algorithms. This paper discusses the application of metaheuristic algorithms for feature selection in microarray datasets. This study reveals that the algorithms have yielded interesting results with limited resources, thereby saving the computational expense of machine learning algorithms.

  9. Metaheuristic ILS with path relinking for the number partitioning problem

    Directory of Open Access Journals (Sweden)

    Cesar Augusto Souza de Oliveira

    2017-07-01

    Full Text Available This study presents an implementation of a metaheuristic procedure to solve the Number Partitioning Problem (NPP), a classic NP-hard combinatorial optimization problem. The problem has applications in different areas, such as logistics and production and operations management, besides important relationships with other combinatorial problems. This paper performs a comparative analysis between the proposed algorithm and other metaheuristics using a group of instances available in the literature. Constructive heuristics, local search and the metaheuristic ILS with path relinking as a mechanism of intensification and diversification were implemented in order to improve solutions, surpassing the other algorithms.
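
    The NPP lends itself to a compact illustration of the ILS-with-path-relinking idea: a solution is a sign assignment, local search flips single items, perturbation flips several at random, and path relinking walks from one elite solution towards another, keeping the best intermediate point. Everything below (instance, perturbation strength, elite handling) is a simplified sketch, not the paper's implementation.

      import random

      NUMS = [random.randint(1, 1000) for _ in range(40)]   # a random NPP instance

      def gap(signs):
          """Absolute difference between the two subset sums."""
          return abs(sum(n if s else -n for n, s in zip(NUMS, signs)))

      def local_search(signs):
          """Flip single items while any flip reduces the gap."""
          improved = True
          while improved:
              improved = False
              for i in range(len(signs)):
                  cand = signs[:]
                  cand[i] = not cand[i]
                  if gap(cand) < gap(signs):
                      signs, improved = cand, True
          return signs

      def perturb(signs, k=4):
          cand = signs[:]
          for i in random.sample(range(len(cand)), k):
              cand[i] = not cand[i]
          return cand

      def path_relink(src, dst):
          """Walk from src towards dst one flip at a time, keeping the best point."""
          cur, best = src[:], src[:]
          for i in [i for i in range(len(src)) if src[i] != dst[i]]:
              cur[i] = dst[i]
              if gap(cur) < gap(best):
                  best = cur[:]
          return best

      def ils(iters=200):
          best = local_search([random.random() < 0.5 for _ in NUMS])
          elite = best[:]
          for _ in range(iters):
              cand = local_search(perturb(best))
              cand = min(cand, path_relink(cand, elite), key=gap)
              if gap(cand) < gap(best):
                  best, elite = cand[:], best[:]
          return best

      print(gap(ils()))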

  10. Generalized reciprocal method applied in processing seismic ...

    African Journals Online (AJOL)

    A geophysical investigation was carried out at Shika, near Zaria, using seismic refraction method; with the aim of analyzing the data obtained using the generalized reciprocal method (GRM). The technique is for delineating undulating refractors at any depth from in-line seismic refraction data consisting of forward and ...

  11. Search and optimization by metaheuristics techniques and algorithms inspired by nature

    CERN Document Server

    Du, Ke-Lin

    2016-01-01

    This textbook provides a comprehensive introduction to nature-inspired metaheuristic methods for search and optimization, including the latest trends in evolutionary algorithms and other forms of natural computing. Over 100 different types of these methods are discussed in detail. The authors emphasize non-standard optimization problems and utilize a natural approach to the topic, moving from basic notions to more complex ones. An introductory chapter covers the necessary biological and mathematical backgrounds for understanding the main material. Subsequent chapters then explore almost all of the major metaheuristics for search and optimization created based on natural phenomena, including simulated annealing, recurrent neural networks, genetic algorithms and genetic programming, differential evolution, memetic algorithms, particle swarm optimization, artificial immune systems, ant colony optimization, tabu search and scatter search, bee and bacteria foraging algorithms, harmony search, biomolecular computin...

  12. Applying scrum methods to ITS projects.

    Science.gov (United States)

    2017-08-01

    The introduction of new technology generally brings new challenges and new methods to help with deployments. Agile methodologies have been introduced in the information technology industry to potentially speed up development. The Federal Highway Admi...

  13. Statistical classification methods applied to seismic discrimination

    Energy Technology Data Exchange (ETDEWEB)

    Ryan, F.M. [ed.; Anderson, D.N.; Anderson, K.K.; Hagedorn, D.N.; Higbee, K.T.; Miller, N.E.; Redgate, T.; Rohay, A.C.

    1996-06-11

    To verify compliance with a Comprehensive Test Ban Treaty (CTBT), low energy seismic activity must be detected and discriminated. Monitoring small-scale activity will require regional (within ≈2000 km) monitoring capabilities. This report provides background information on various statistical classification methods and discusses the relevance of each method in the CTBT seismic discrimination setting. Criteria for classification method selection are explained and examples are given to illustrate several key issues. This report describes in more detail the issues and analyses that were initially outlined in a poster presentation at a recent American Geophysical Union (AGU) meeting. Section 2 of this report describes both the CTBT seismic discrimination setting and the general statistical classification approach to this setting. Seismic data examples illustrate the importance of synergistically using multivariate data as well as the difficulties due to missing observations. Classification method selection criteria are presented and discussed in Section 3. These criteria are grouped into the broad classes of simplicity, robustness, applicability, and performance. Section 4 follows with a description of several statistical classification methods: linear discriminant analysis, quadratic discriminant analysis, variably regularized discriminant analysis, flexible discriminant analysis, logistic discriminant analysis, K-th Nearest Neighbor discrimination, kernel discrimination, and classification and regression tree discrimination. The advantages and disadvantages of these methods are summarized in Section 5.

  14. Informative Gene Selection for Cancer Classification with Microarray Data Using a Metaheuristic Framework

    Science.gov (United States)

    M, Pyingkodi; R, Thangarajan

    2018-02-26

    Objective: Cancer diagnosis is one of the most vital emerging clinical applications of microarray data. Due to the high dimensionality, gene selection is an important step for improving expression data classification performance. There is therefore a need for effective methods to select informative genes for the prediction and diagnosis of cancer. The main objective of this research was to derive a heuristic approach to select highly informative genes. Methods: A metaheuristic approach with a Genetic Algorithm with Levy Flight (GA-LV) was applied for the classification of cancer genes in microarrays. The experimental results were analyzed on five major cancer gene expression benchmark datasets. Result: GA-LV proved superior to GA and statistical approaches, with 100% accuracy on the Leukemia, Lung and Lymphoma datasets. On the Prostate and Colon datasets, GA-LV was 99.5% and 99.2% accurate, respectively. Conclusion: The experimental results show that the proposed approach is suitable for effective gene selection with all benchmark datasets, removing irrelevant and redundant genes to improve classification accuracy.
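
    The paper does not spell out its operator here, but one plausible way to graft Levy flights onto a GA over binary gene masks, sketched below, is to draw a heavy-tailed step length (Mantegna's algorithm) and use it as the number of bits to flip, so most mutations are small while occasional long jumps escape local optima. The fitness function is a stand-in for classifier accuracy on the selected genes.

      import math, random

      def levy_step(beta=1.5):
          """Mantegna's algorithm for a Levy-stable step length."""
          sigma = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
                   / (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
          u, v = random.gauss(0, sigma), random.gauss(0, 1)
          return u / abs(v) ** (1 / beta)

      def fitness(mask, target):   # stand-in for classifier accuracy on selected genes
          return -sum(m != t for m, t in zip(mask, target))

      def ga_levy(n_genes=30, pop_size=20, gens=100):
          target = [random.randint(0, 1) for _ in range(n_genes)]  # hypothetical "informative" genes
          pop = [[random.randint(0, 1) for _ in range(n_genes)] for _ in range(pop_size)]
          for _ in range(gens):
              pop.sort(key=lambda m: fitness(m, target), reverse=True)
              nxt = pop[:2]                                        # elitism
              while len(nxt) < pop_size:
                  p1, p2 = random.sample(pop[:10], 2)              # truncation selection
                  cut = random.randrange(1, n_genes)
                  child = p1[:cut] + p2[cut:]                      # one-point crossover
                  # Levy-flight mutation: heavy-tailed number of bit flips
                  k = min(n_genes, max(1, int(abs(levy_step()))))
                  for i in random.sample(range(n_genes), k):
                      child[i] ^= 1
                  nxt.append(child)
              pop = nxt
          return max(pop, key=lambda m: fitness(m, target))

      print(ga_levy())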

  15. Applying Fuzzy Possibilistic Methods on Critical Objects

    DEFF Research Database (Denmark)

    Yazdani, Hossein; Ortiz-Arroyo, Daniel; Choros, Kazimierz

    2016-01-01

    Providing a flexible environment to process data objects is a desirable goal of machine learning algorithms. In fuzzy and possibilistic methods, the relevance of data objects is evaluated and a membership degree is assigned. However, some critical objects have the potential to affect the performance of the clustering algorithms if they remain in a specific cluster or are moved into another. In this paper we analyze and compare how critical objects affect the behaviour of fuzzy possibilistic methods in several data sets. The comparison is based on the accuracy and ability of learning...

  16. Tutte's barycenter method applied to isotopies

    NARCIS (Netherlands)

    de Verdiere, EC; Pocchiola, M; Vegter, G

    This paper is concerned with applications of Tutte's barycentric embedding theorem (Proc. London Math. Soc. 13 (1963) 743-768). It presents a method for building isotopies of triangulations in the plane, based on Tutte's theorem and the computation of equilibrium stresses of graphs by

  17. Spectral methods applied to Ising models

    International Nuclear Information System (INIS)

    DeFacio, B.; Hammer, C.L.; Shrauner, J.E.

    1980-01-01

    Several applications of Ising models are reviewed. A 2-d Ising model is studied, and the problem of describing an interface boundary in a 2-d Ising model is addressed. Spectral methods are used to formulate a soluble model for the surface tension of a many-Fermion system

  18. Applying Human Computation Methods to Information Science

    Science.gov (United States)

    Harris, Christopher Glenn

    2013-01-01

    Human Computation methods such as crowdsourcing and games with a purpose (GWAP) have each recently drawn considerable attention for their ability to synergize the strengths of people and technology to accomplish tasks that are challenging for either to do well alone. Despite this increased attention, much of this transformation has been focused on…

  19. [The diagnostic methods applied in mycology].

    Science.gov (United States)

    Kurnatowska, Alicja; Kurnatowski, Piotr

    2008-01-01

    Systemic fungal invasions are recognized with increasing frequency and constitute a primary cause of morbidity and mortality, especially in immunocompromised patients. Early diagnosis improves prognosis, but remains a problem because there is a lack of sensitive tests to aid in the diagnosis of systemic mycoses on the one hand, while on the other patients present only unspecific signs and symptoms, thus delaying early diagnosis. The diagnosis depends upon a combination of clinical observation and laboratory investigation. Successful laboratory diagnosis of fungal infection depends in major part on the collection of appropriate clinical specimens for investigation and on the selection of appropriate microbiological test procedures. These problems (collection of specimens, direct techniques, staining methods, cultures on different media and non-culture-based methods) are presented in the article.

  20. Conceptual Comparison of Population Based Metaheuristics for Engineering Problems

    Directory of Open Access Journals (Sweden)

    Oluwole Adekanmbi

    2015-01-01

    Full Text Available Metaheuristic algorithms are well-known optimization tools which have been employed for solving a wide range of optimization problems. Several extensions of differential evolution have been adopted in solving constrained and nonconstrained multiobjective optimization problems, but in this study, the third version of generalized differential evolution (GDE) is used for solving practical engineering problems. The GDE3 metaheuristic modifies the selection process of the basic differential evolution and extends the DE/rand/1/bin strategy in solving practical applications. The performance of the metaheuristic is investigated through engineering design optimization problems and the results are reported. The comparison of the numerical results with those of other metaheuristic techniques demonstrates the promising performance of the algorithm as a robust optimization tool for practical purposes.

  1. Conceptual comparison of population based metaheuristics for engineering problems.

    Science.gov (United States)

    Adekanmbi, Oluwole; Green, Paul

    2015-01-01

    Metaheuristic algorithms are well-known optimization tools which have been employed for solving a wide range of optimization problems. Several extensions of differential evolution have been adopted in solving constrained and nonconstrained multiobjective optimization problems, but in this study, the third version of generalized differential evolution (GDE) is used for solving practical engineering problems. GDE3 metaheuristic modifies the selection process of the basic differential evolution and extends DE/rand/1/bin strategy in solving practical applications. The performance of the metaheuristic is investigated through engineering design optimization problems and the results are reported. The comparison of the numerical results with those of other metaheuristic techniques demonstrates the promising performance of the algorithm as a robust optimization tool for practical purposes.

  2. Applications of metaheuristic optimization algorithms in civil engineering

    CERN Document Server

    Kaveh, A

    2017-01-01

    The book presents recently developed efficient metaheuristic optimization algorithms and their applications for solving various optimization problems in civil engineering. The concepts can also be used for optimizing problems in mechanical and electrical engineering.

  3. Proteomics methods applied to malaria: Plasmodium falciparum

    International Nuclear Information System (INIS)

    Cuesta Astroz, Yesid; Segura Latorre, Cesar

    2012-01-01

    Malaria is a parasitic disease that has a high impact on public health in developing countries. The sequencing of the Plasmodium falciparum genome and the development of proteomics have enabled a breakthrough in understanding the biology of the parasite. Proteomics has allowed the qualitative and quantitative characterization of the parasite's protein expression and has provided information on protein expression under conditions of stress induced by antimalarials. Given the complexity of the parasite's life cycle, which takes place in the vertebrate host and the mosquito vector, it has proven difficult to characterize protein expression during each stage of the infection process in order to determine the proteome that mediates several metabolic, physiological and energetic processes. Two-dimensional electrophoresis, liquid chromatography and mass spectrometry have been useful for assessing the effects of antimalarials on parasite protein expression and for characterizing the proteomic profile of different P. falciparum stages and organelles. The purpose of this review is to present state-of-the-art tools and advances in proteomics applied to the study of malaria, and to present the different experimental strategies used to study the parasite's proteome, showing the advantages and disadvantages of each one.

  4. Neural model of gene regulatory network: a survey on supportive meta-heuristics.

    Science.gov (United States)

    Biswas, Surama; Acharyya, Sriyankar

    2016-06-01

    Gene regulatory network (GRN) is produced as a result of regulatory interactions between different genes through their coded proteins in a cellular context. Having immense importance in disease detection and drug finding, GRN has been modelled through various mathematical and computational schemes and reported in survey articles. Neural and neuro-fuzzy models have been the focus of attraction in bioinformatics. The predominant use of meta-heuristic algorithms in training neural models has proved their effectiveness. Considering these facts, this paper is organized to survey neural modelling schemes of GRN and the efficacy of meta-heuristic algorithms towards parameter learning (i.e. weighting connections) within the model. This survey paper presents two different structure-related approaches to infer GRN, namely the global structure approach and the substructure approach. It also describes two neural modelling schemes, such as artificial neural network/recurrent neural network based modelling and neuro-fuzzy modelling. The meta-heuristic algorithms applied so far to learn the structure and parameters of neurally modelled GRN are reviewed here.

  5. METHOD OF APPLYING NICKEL COATINGS ON URANIUM

    Science.gov (United States)

    Gray, A.G.

    1959-07-14

    A method is presented for protectively coating uranium which comprises etching the uranium in an aqueous etching solution containing chloride ions, electroplating a coating of nickel on the etched uranium and heating the nickel plated uranium by immersion thereof in a molten bath composed of a material selected from the group consisting of sodium chloride, potassium chloride, lithium chloride, and mixtures thereof, maintained at a temperature of between 700 and 800 deg C, for a time sufficient to alloy the nickel and uranium and form an integral protective coating of corrosion-resistant uranium-nickel alloy.

  6. Versatile Formal Methods Applied to Quantum Information.

    Energy Technology Data Exchange (ETDEWEB)

    Witzel, Wayne [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States); Rudinger, Kenneth Michael [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States); Sarovar, Mohan [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States)

    2015-11-01

    Using a novel formal methods approach, we have generated computer-verified proofs of major theorems pertinent to the quantum phase estimation algorithm. This was accomplished using our Prove-It software package in Python. While many formal methods tools are available, their practical utility is limited. Translating a problem of interest into these systems and working through the steps of a proof is an art form that requires much expertise. One must surrender to the preferences and restrictions of the tool regarding how mathematical notions are expressed and what deductions are allowed. Automation is a major driver that forces restrictions. Our focus, on the other hand, is to produce a tool that allows users the ability to confirm proofs that are essentially known already. This goal is valuable in itself. We demonstrate the viability of our approach that allows the user great flexibility in expressing statements and composing derivations. There were no major obstacles in following a textbook proof of the quantum phase estimation algorithm. There were tedious details of algebraic manipulations that we needed to implement (and a few that we did not have time to enter into our system) and some basic components that we needed to rethink, but there were no serious roadblocks. In the process, we made a number of convenient additions to our Prove-It package that will make certain algebraic manipulations easier to perform in the future. In fact, our intent is for our system to build upon itself in this manner.

  7. Multicompare Tests of the Performance of Different Metaheuristics in EEG Dipole Source Localization

    Directory of Open Access Journals (Sweden)

    Diana Irazú Escalona-Vargas

    2014-01-01

    Full Text Available We study the use of nonparametric multicompare statistical tests on the performance of simulated annealing (SA), genetic algorithm (GA), particle swarm optimization (PSO), and differential evolution (DE) when used for electroencephalographic (EEG) source localization. Such a task can be posed as an optimization problem for which the referred metaheuristic methods are well suited. Hence, we evaluate the localization's performance in terms of the metaheuristics' operational parameters and for a fixed number of evaluations of the objective function. In this way, we are able to link the efficiency of the metaheuristics with a common measure of computational cost. Our results did not show significant differences in the metaheuristics' performance for the case of single source localization. In the case of localizing two correlated sources, we found that PSO (ring and tree topologies) and DE performed the worst, so they should not be considered in large-scale EEG source localization problems. Overall, the multicompare tests allowed us to demonstrate the little effect that the selection of a particular metaheuristic and the variations in their operational parameters have in this optimization problem.
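
    A typical nonparametric multicompare workflow of this kind is a Friedman test followed by pairwise post-hoc tests with a multiplicity correction. The sketch below assumes NumPy and SciPy are available and uses invented localization errors purely to show the mechanics; it is not the paper's data or exact protocol.

      import numpy as np
      from scipy import stats

      # Hypothetical localization errors (mm) for 4 metaheuristics on 6 problem
      # instances; rows are instances, columns are algorithms (SA, GA, PSO, DE).
      errors = np.array([
          [2.1, 2.4, 3.0, 2.2],
          [1.8, 2.0, 2.9, 1.9],
          [2.5, 2.6, 3.4, 2.4],
          [2.0, 2.3, 3.1, 2.1],
          [1.9, 2.2, 2.8, 2.0],
          [2.3, 2.5, 3.3, 2.2],
      ])

      # Friedman test: do the algorithms differ at all across instances?
      stat, p = stats.friedmanchisquare(*errors.T)
      print(f"Friedman chi2={stat:.2f}, p={p:.4f}")

      # If so, pairwise Wilcoxon signed-rank tests with a Holm step-down correction.
      names = ["SA", "GA", "PSO", "DE"]
      pairs, pvals = [], []
      for i in range(4):
          for j in range(i + 1, 4):
              _, pw = stats.wilcoxon(errors[:, i], errors[:, j])
              pairs.append((names[i], names[j]))
              pvals.append(pw)
      order = np.argsort(pvals)
      m, adj_prev = len(pvals), 0.0
      for rank, k in enumerate(order):
          adj = min(1.0, max(adj_prev, (m - rank) * pvals[k]))  # Holm adjustment
          adj_prev = adj
          print(pairs[k], f"adjusted p={adj:.4f}")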

  8. jMetalCpp: optimizing molecular docking problems with a C++ metaheuristic framework.

    Science.gov (United States)

    López-Camacho, Esteban; García Godoy, María Jesús; Nebro, Antonio J; Aldana-Montes, José F

    2014-02-01

    Molecular docking is a method for structure-based drug design and structural molecular biology, which attempts to predict the position and orientation of a small molecule (ligand) in relation to a protein (receptor) to produce a stable complex with a minimum binding energy. One of the most widely used software packages for this purpose is AutoDock, which incorporates three metaheuristic techniques. We propose the integration of AutoDock with jMetalCpp, an optimization framework, thereby providing both single- and multi-objective algorithms that can be used to effectively solve docking problems. The resulting combination of AutoDock + jMetalCpp allows users of the former to easily use the metaheuristics provided by the latter. In this way, biologists have at their disposal a richer set of optimization techniques than those already provided in AutoDock. Moreover, designers of metaheuristic techniques can use molecular docking for case studies, which can lead to more efficient algorithms oriented to solving the target problems.  jMetalCpp software adapted to AutoDock is freely available as a C++ source code at http://khaos.uma.es/AutodockjMetal/.

  9. Reflections on Mixing Methods in Applied Linguistics Research

    Science.gov (United States)

    Hashemi, Mohammad R.

    2012-01-01

    This commentary advocates the use of mixed methods research--that is the integration of qualitative and quantitative methods in a single study--in applied linguistics. Based on preliminary findings from a research project in progress, some reflections on the current practice of mixing methods as a new trend in applied linguistics are put forward.…

  10. METAHEURISTICS FOR OPTIMIZING SAFETY STOCK IN MULTI STAGE INVENTORY SYSTEM

    Directory of Open Access Journals (Sweden)

    Gordan Badurina

    2013-02-01

    Full Text Available Managing the right level of inventory is critical in order to achieve the targeted level of customer service, but it also carries significant cost in the supply chain. In the majority of cases companies define safety stock at the most downstream level, i.e. the finished product level, using different analytical methods. Safety stock at an upstream level, however, usually covers only those problems which companies face at that particular level (uncertainty of delivery, issues in production, etc.). This paper looks into optimizing safety stock in a pharmaceutical supply chain, considering a three-stage inventory system. The problem is defined as a single-criterion mixed integer programming problem. The objective is to minimize the inventory cost while the service level is predetermined. In order to coordinate inventories at all echelons, a variable representing the so-called service time is introduced. Because of the problem dimensions, metaheuristics based on genetic algorithms and simulated annealing are constructed and compared, using real data from a Croatian pharmaceutical company. The computational results are presented, evidencing improvements in minimizing inventory costs.

  11. Multi-objective optimization in computer networks using metaheuristics

    CERN Document Server

    Donoso, Yezid

    2007-01-01

    Metaheuristics are widely used to solve important practical combinatorial optimization problems. Many new multicast applications emerging from the Internet - such as TV over the Internet, radio over the Internet, and multipoint video streaming - require reduced bandwidth consumption, end-to-end delay, and packet loss ratio. It is necessary to design and provide for these kinds of applications, as well as for the resources necessary for their functionality. Multi-Objective Optimization in Computer Networks Using Metaheuristics provides a solution to the multi-objective problem of routing in computer networks. It analyzes layer 3 (IP), layer 2 (MPLS), and layer 1 (GMPLS and wireless functions). In particular, it assesses basic optimization concepts, as well as several techniques and algorithms for the search for minima; examines the basic multi-objective optimization concepts and the way to solve them through traditional techniques and through several metaheuristics; and demonstrates how to analytically model the compu...

  12. A new approach for visual identification of orange varieties using neural networks and metaheuristic algorithms

    Directory of Open Access Journals (Sweden)

    Sajad Sabzi

    2018-03-01

    Full Text Available Accurate classification of fruit varieties in processing factories and during post-harvesting applications is a challenge that has been widely studied. This paper presents a novel approach to automatic fruit identification applied to three common varieties of oranges (Citrus sinensis L.), namely Bam, Payvandi and Thomson. A total of 300 color images were used for the experiments, 100 samples for each orange variety, which are publicly available. After segmentation, 263 parameters, including texture, color and shape features, were extracted from each sample using image processing. Among them, the 6 most effective features were automatically selected by using a hybrid approach consisting of an artificial neural network and particle swarm optimization algorithm (ANN-PSO). Then, three different classifiers were applied and compared: hybrid artificial neural network - artificial bee colony (ANN-ABC); hybrid artificial neural network - harmony search (ANN-HS); and k-nearest neighbors (kNN). The experimental results show that the hybrid approaches outperform the results of kNN. The average correct classification rate of ANN-HS was 94.28%, while ANN-ABC achieved 96.70% accuracy with the available data, contrasting with the 70.9% baseline accuracy of kNN. Thus, this new proposed methodology provides a fast and accurate way to classify multiple fruit varieties, which can be easily implemented in processing factories. The main contribution of this work is that the method can be directly adapted to other use cases, since the selection of the optimal features and the configuration of the neural network are performed automatically using metaheuristic algorithms.

  13. The use of meta-heuristics for airport gate assignment

    DEFF Research Database (Denmark)

    Cheng, Chun-Hung; Ho, Sin C.; Kwan, Cheuk-Lam

    2012-01-01

    Improper assignment of gates may result in flight delays, inefficient use of resources, and customer dissatisfaction. A typical metropolitan airport handles hundreds of flights a day, and solving the gate assignment problem (GAP) to optimality is often impractical. Meta-heuristics have recently been proposed to generate good solutions within a reasonable timeframe. In this work, we attempt to assess the performance of three meta-heuristics, namely, genetic algorithm (GA), tabu search (TS) and simulated annealing (SA), as well as a hybrid approach based on SA and TS. Flight data from Incheon International Airport...

  14. Metaheuristics for a tectonic development of hospital design footprints

    DEFF Research Database (Denmark)

    Holst, Malene Kirstine; Kirkegaard, Poul Henning

    2014-01-01

    The present paper presents a model for footprint generation of hospital design based on functionalities. The model takes its point of departure in layout generation for hospitals by use of a metaheuristic approach. The hospital functionalities with respective constraints lead to a decomposition of the h...

  15. Solving Molecular Docking Problems with Multi-Objective Metaheuristics

    Directory of Open Access Journals (Sweden)

    María Jesús García-Godoy

    2015-06-01

    Full Text Available Molecular docking is a hard optimization problem that has been tackled in the past with metaheuristics, demonstrating new and challenging results when looking for one objective: the minimum binding energy. However, only a few papers can be found in the literature that deal with this problem by means of a multi-objective approach, and no experimental comparisons have been made in order to clarify which of them has the best overall performance. In this paper, we use and compare, for the first time, a set of representative multi-objective optimization algorithms applied to solve complex molecular docking problems. The approach followed is focused on optimizing the intermolecular and intramolecular energies as the two main objectives to minimize. Specifically, these algorithms are: two variants of the non-dominated sorting genetic algorithm II (NSGA-II), speed modulation multi-objective particle swarm optimization (SMPSO), the third evolution step of generalized differential evolution (GDE3), the multi-objective evolutionary algorithm based on decomposition (MOEA/D) and S-metric evolutionary multi-objective optimization (SMS-EMOA). We assess the performance of the algorithms by applying quality indicators intended to measure convergence and the diversity of the generated Pareto front approximations. We carry out a comparison with another reference mono-objective algorithm in the problem domain, the Lamarckian genetic algorithm (LGA) provided by the AutoDock tool. Furthermore, the ligand binding site and molecular interactions of computed solutions are analyzed, showing promising results for the multi-objective approaches. In addition, a case study of application to aeroplysinin-1 is performed, showing the effectiveness of our multi-objective approach in drug discovery.
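
    The notion common to all the algorithms listed above is Pareto dominance over the two energy objectives. As a minimal sketch, assuming random placeholder energies rather than real docking scores, the filter below extracts the non-dominated front from a set of candidate poses.

      import random

      def dominates(a, b):
          """a dominates b if it is no worse in both energies and better in one."""
          return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

      def pareto_front(points):
          return [p for p in points
                  if not any(dominates(q, p) for q in points if q is not p)]

      # Hypothetical poses scored by (intermolecular, intramolecular) energy
      poses = [(random.uniform(-12, -2), random.uniform(-5, 0)) for _ in range(30)]
      for e_inter, e_intra in sorted(pareto_front(poses)):
          print(f"inter={e_inter:6.2f}  intra={e_intra:6.2f}")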

  16. Convergence of Iterative Methods applied to Boussinesq equation

    Directory of Open Access Journals (Sweden)

    Sh. S. Behzadi

    2013-11-01

    Full Text Available In this paper, a Boussinesq equation is solved by using Adomian's decomposition method, the modified Adomian's decomposition method, the variational iteration method, the modified variational iteration method, the homotopy perturbation method, the modified homotopy perturbation method and the homotopy analysis method. The approximate solution of this equation is calculated in the form of a series whose components are computed by applying a recursive relation. The existence and uniqueness of the solution and the convergence of the proposed methods are proved. A numerical example is studied to demonstrate the accuracy of the presented methods.

  17. A PSO-based hybrid metaheuristic for permutation flowshop scheduling problems.

    Science.gov (United States)

    Zhang, Le; Wu, Jinnan

    2014-01-01

    This paper investigates the permutation flowshop scheduling problem (PFSP) with the objectives of minimizing the makespan and the total flowtime and proposes a hybrid metaheuristic based on particle swarm optimization (PSO). To enhance the exploration ability of the hybrid metaheuristic, simulated annealing hybridized with a stochastic variable neighborhood search is incorporated. To improve the search diversification of the hybrid metaheuristic, a solution replacement strategy based on path relinking is presented to replace particles that have been trapped in local optima. Computational results on benchmark instances show that the proposed PSO-based hybrid metaheuristic is competitive with other powerful metaheuristics in the literature.
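
    The paper's method is PSO-based; the sketch below shows only two building blocks that any such PFSP metaheuristic needs, the standard flowshop makespan recurrence and an annealing-style improvement over job permutations, with invented processing times and an insertion neighbourhood chosen for illustration.

      import math, random

      # Hypothetical processing times: PT[j][m] = time of job j on machine m
      random.seed(1)
      PT = [[random.randint(1, 20) for _ in range(5)] for _ in range(12)]

      def makespan(perm):
          """Completion time of the last job on the last machine."""
          n_machines = len(PT[0])
          done = [0] * n_machines   # done[m]: completion of the previous job on machine m
          for j in perm:
              for m in range(n_machines):
                  done[m] = max(done[m], done[m - 1] if m else 0) + PT[j][m]
          return done[-1]

      def sa_pfsp(iters=5000, t0=30.0, alpha=0.999):
          perm = list(range(len(PT)))
          random.shuffle(perm)
          best, temp = perm[:], t0
          for _ in range(iters):
              cand = perm[:]
              i, j = random.sample(range(len(cand)), 2)
              cand.insert(j, cand.pop(i))           # insertion neighbourhood
              delta = makespan(cand) - makespan(perm)
              if delta < 0 or random.random() < math.exp(-delta / temp):
                  perm = cand
                  if makespan(perm) < makespan(best):
                      best = perm[:]
              temp *= alpha
          return best, makespan(best)

      print(sa_pfsp())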

  18. Discrimination symbol applying method for sintered nuclear fuel product

    International Nuclear Information System (INIS)

    Ishizaki, Jin

    1998-01-01

    The present invention provides a symbol applying method for applying discrimination information such as an enrichment degree on the end face of a sintered nuclear product. Namely, discrimination symbols of information of powders are applied by a sintering aid to the end face of a molded member formed by molding nuclear fuel powders under pressure. Then, the molded product is sintered. The sintering aid comprises aluminum oxide, a mixture of aluminum oxide and silicon dioxide, aluminum hydride or aluminum stearate alone or in admixture. As an applying means of the sintering aid, discrimination symbols of information of powders are drawn by an isostearic acid on the end face of the molded product, and the sintering aid is sprayed thereto, or the sintering aid is applied directly, or the sintering aid is suspended in isostearic acid, and the suspension is applied with a brush. As a result, visible discrimination information can be applied to the sintered member easily. (N.H.)

  19. Method of applying a mirror reflecting layer to instrument parts

    Science.gov (United States)

    Alkhanov, L. G.; Danilova, I. A.; Delektorskiy, G. V.

    1974-01-01

    A method follows for applying a mirror reflecting layer to the surfaces of parts, instruments, apparatus, and so on. A brief analysis is presented of the existing methods of obtaining the mirror surface and the advantages of the new method of obtaining the mirror surface by polymer casting mold are indicated.

  20. Building "Applied Linguistic Historiography": Rationale, Scope, and Methods

    Science.gov (United States)

    Smith, Richard

    2016-01-01

    In this article I argue for the establishment of "Applied Linguistic Historiography" (ALH), that is, a new domain of enquiry within applied linguistics involving a rigorous, scholarly, and self-reflexive approach to historical research. Considering issues of rationale, scope, and methods in turn, I provide reasons why ALH is needed and…

  1. The implementation frameworks of meta-heuristics hybridization with ...

    African Journals Online (AJOL)

    The hybridization of meta-heuristics algorithms has achieved a remarkable improvement from the adaptation of dynamic parameterization. This paper proposes a variety of implementation frameworks for the hybridization of Particle Swarm Optimization (PSO) and Genetic Algorithm (GA) and the dynamic parameterization.

  2. Deterministic oscillatory search: a new meta-heuristic optimization ...

    Indian Academy of Sciences (India)

    N Archana

    and UPSEB 75 bus system. Results show better performance over other standard algorithms in terms of voltage stability, real power loss and sizing and cost of FACTS devices. Keywords. Artificial intelligence; global optimization; oscillatory search; meta-heuristic optimization; power system problem.

  3. Application of Heuristic and Metaheuristic Algorithms in Solving Constrained Weber Problem with Feasible Region Bounded by Arcs

    Directory of Open Access Journals (Sweden)

    Igor Stojanović

    2017-01-01

    Full Text Available The continuous planar facility location problem with the connected region of feasible solutions bounded by arcs is a particular case of the constrained Weber problem. This problem is a continuous optimization problem which has a nonconvex feasible set of constraints. This paper suggests appropriate modifications of four metaheuristic algorithms which are defined with the aim of solving this type of nonconvex optimization problem. Also, a comparison of these algorithms to each other as well as to the heuristic algorithm is presented. The artificial bee colony algorithm, firefly algorithm, and their recently proposed improved versions for constrained optimization are appropriately modified and applied to the case study. The heuristic algorithm based on the modified Weiszfeld procedure is also implemented for the purpose of comparison with the metaheuristic approaches. Obtained numerical results show that metaheuristic algorithms can be successfully applied to solve instances of this problem with up to 500 constraints. Among these four algorithms, the improved version of the artificial bee colony algorithm is the most efficient with respect to the quality of the solution, robustness, and computational efficiency.

  4. Comparison of metaheuristic optimization techniques for BWR fuel reloads pattern design

    International Nuclear Information System (INIS)

    François, Juan-Luis; Ortiz-Servin, Juan José; Martín-del-Campo, Cecilia; Castillo, Alejandro; Esquivel-Estrada, Jaime

    2013-01-01

    Highlights: ► This paper shows a performance comparison of several optimization techniques for fuel reloads in BWRs. ► Genetic Algorithms, Neural Networks, Tabu Search and several Ant Algorithms were used. ► All optimization techniques were executed under the same conditions: objective function and an equilibrium cycle. ► Fuel bundles with minor actinides were loaded into the core. ► Tabu Search and the Ant System were the best optimization techniques for the studied problem. -- Abstract: Fuel reload pattern optimization is a crucial fuel management activity in nuclear power reactors. Over the years, a lot of work has been done in this area. In particular, several metaheuristic optimization techniques have been applied with good results for boiling water reactors (BWRs). In this paper, different metaheuristics - genetic algorithms, tabu search, recurrent neural networks and several ant colony optimization techniques - were applied in order to evaluate their performance. The optimization of an equilibrium core of a BWR, loaded with mixed oxide fuel composed of plutonium and minor actinides, was selected as the test case. Results show that the best average values are obtained with the recurrent neural networks technique, while the best fuel reload was obtained with tabu search. However, according to the number of objective functions evaluated, the two fastest optimization techniques are tabu search and the Ant System.

  5. Quantitative EEG Applying the Statistical Recognition Pattern Method

    DEFF Research Database (Denmark)

    Engedal, Knut; Snaedal, Jon; Hoegh, Peter

    2015-01-01

    BACKGROUND/AIM: The aim of this study was to examine the discriminatory power of quantitative EEG (qEEG) applying the statistical pattern recognition (SPR) method to separate Alzheimer's disease (AD) patients from elderly individuals without dementia and from other dementia patients. METHODS...

  6. The harmonics detection method based on neural network applied ...

    African Journals Online (AJOL)

    The harmonics detection method based on neural network applied to harmonics compensation. R Dehini, A Bassou, B Ferdi. Abstract. Several different methods have been used to sense load currents and extract its harmonic component in order to produce a reference current in shunt active power filters (SAPF), and to ...

  7. Solving the time dependent vehicle routing problem by metaheuristic algorithms

    Science.gov (United States)

    Johar, Farhana; Potts, Chris; Bennell, Julia

    2015-02-01

    The problem we consider in this study is the Time Dependent Vehicle Routing Problem (TDVRP), which has been categorized as a non-classical VRP. It is motivated by the fact that multinational companies currently not only manufacture the demanded products but also distribute them to the customer locations. This implies an efficient synchronization of production and distribution activities. Hence, this study looks into the routing of vehicles which depart from the depot at various times due to variation in the manufacturing process. We consider a single production line where demanded products are processed one at a time once orders have been received from the customers. It is assumed that an order released from the production line will be loaded into a scheduled vehicle ready to be delivered. However, the delivery can only be done once all orders scheduled in the vehicle have been released from the production line. Therefore, there could be lateness in the delivery process from awaiting the release of all customers' orders on the route. Our objective is to determine a schedule for vehicle routing that minimizes the solution cost, including the travelling and tardiness costs. A mathematical formulation is developed to represent the problem, which is then solved by two metaheuristics: Variable Neighborhood Search (VNS) and Tabu Search (TS). These algorithms are coded in C++ and run using Solomon's 56 instances with some modification. The outcome of this experiment can be interpreted as a quality criterion for the different approximation methods. The comparison shows that VNS gave the better results while consuming reasonable computational effort.
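
    A bare-bones VNS of the kind used here can be sketched as below, on a simplified routing cost that omits the time-dependent departure and tardiness components; the coordinates, neighbourhood structure (segment reversals of increasing strength) and 2-opt local search are illustrative choices, not the paper's exact configuration.

      import math, random

      random.seed(7)
      N = 12
      PTS = [(random.uniform(0, 100), random.uniform(0, 100)) for _ in range(N)]

      def dist(a, b):
          return math.dist(PTS[a], PTS[b])

      def route_cost(route):
          """Depot (index 0) -> customers -> depot."""
          tour = [0] + route + [0]
          return sum(dist(a, b) for a, b in zip(tour, tour[1:]))

      def two_opt(route):
          improved = True
          while improved:
              improved = False
              for i in range(len(route) - 1):
                  for j in range(i + 1, len(route)):
                      cand = route[:i] + route[i:j + 1][::-1] + route[j + 1:]
                      if route_cost(cand) < route_cost(route):
                          route, improved = cand, True
          return route

      def shake(route, k):
          """k-th neighbourhood: k random segment reversals."""
          cand = route[:]
          for _ in range(k):
              i, j = sorted(random.sample(range(len(cand)), 2))
              cand[i:j + 1] = cand[i:j + 1][::-1]
          return cand

      def vns(k_max=4, iters=60):
          route = two_opt(list(range(1, N)))
          for _ in range(iters):
              k = 1
              while k <= k_max:
                  cand = two_opt(shake(route, k))
                  if route_cost(cand) < route_cost(route):
                      route, k = cand, 1   # improvement: restart from the first neighbourhood
                  else:
                      k += 1               # otherwise widen the shaking
          return route, route_cost(route)

      print(vns())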

  8. Application of genetic programming in shape optimization of concrete gravity dams by metaheuristics

    Directory of Open Access Journals (Sweden)

    Abdolhossein Baghlani

    2014-12-01

    Full Text Available A gravity dam maintains its stability against external loads by its massive size. Hence, minimization of the weight of the dam can remarkably reduce construction costs. In this paper, a procedure for finding the optimal shape of concrete gravity dams with a computationally efficient approach is introduced. Genetic programming (GP) in conjunction with metaheuristics is used for this purpose. As a case study, shape optimization of the Bluestone dam is presented. Pseudo-dynamic analysis is carried out on a total of 322 models in order to establish a database of results. This database is then used to find appropriate GP-based relations for the design criteria of the dam. This procedure eliminates the necessity of the time-consuming process of structural analyses in evolutionary optimization methods. The method is hybridized with three different metaheuristics, including particle swarm optimization, the firefly algorithm (FA), and teaching-learning-based optimization, and a comparison is made. The results show that although all algorithms are very suitable, FA is slightly superior to the other two algorithms in finding a lighter structure in a smaller number of iterations. The proposed method reduces the weight of the dam by up to 14.6% with very low computational effort.

  9. Focusing on the golden ball metaheuristic: an extended study on a wider set of problems.

    Science.gov (United States)

    Osaba, E; Diaz, F; Carballedo, R; Onieva, E; Perallos, A

    2014-01-01

    Nowadays, the development of new metaheuristics for solving optimization problems is a topic of interest in the scientific community. In the literature, a large number of techniques of this kind can be found, and many more have been proposed recently, such as the artificial bee colony and imperialist competitive algorithm. This paper is focused on one recently published technique, the one called Golden Ball (GB). The GB is a multiple-population metaheuristic based on soccer concepts. Although it was designed to solve combinatorial optimization problems, until now it has only been tested with two simple routing problems: the traveling salesman problem and the capacitated vehicle routing problem. In this paper, the GB is applied to four different combinatorial optimization problems. Two of them are routing problems which are more complex than the previously used ones: the asymmetric traveling salesman problem and the vehicle routing problem with backhauls. Additionally, one constraint satisfaction problem (the n-queens problem) and one combinatorial design problem (the one-dimensional bin packing problem) have also been used. The outcomes obtained by GB are compared with those obtained by two different genetic algorithms and two distributed genetic algorithms. Additionally, two statistical tests are conducted to compare these results.

  10. Population-based metaheuristic optimization in neutron optics and shielding design

    Energy Technology Data Exchange (ETDEWEB)

    DiJulio, D.D., E-mail: Douglas.DiJulio@esss.se [European Spallation Source ERIC, P.O. Box 176, SE-221 00 Lund (Sweden); Division of Nuclear Physics, Lund University, SE-221 00 Lund (Sweden); Björgvinsdóttir, H. [European Spallation Source ERIC, P.O. Box 176, SE-221 00 Lund (Sweden); Department of Physics and Astronomy, Uppsala University, SE-751 20 Uppsala (Sweden); Zendler, C. [European Spallation Source ERIC, P.O. Box 176, SE-221 00 Lund (Sweden); Bentley, P.M. [European Spallation Source ERIC, P.O. Box 176, SE-221 00 Lund (Sweden); Department of Physics and Astronomy, Uppsala University, SE-751 20 Uppsala (Sweden)

    2016-11-01

    Population-based metaheuristic algorithms are powerful tools in the design of neutron scattering instruments, and the use of these types of algorithms for this purpose is becoming more and more commonplace. Today there exists a wide range of algorithms to choose from when designing an instrument, and it is not always initially clear which may provide the best performance. Furthermore, due to the nature of these algorithms, the final solution found for a specific design scenario cannot always be guaranteed to be the global optimum. Therefore, to explore the potential benefits of and differences between the variety of available algorithms when applied to such design scenarios, we have carried out a detailed study of some commonly used algorithms. For this purpose, we have developed a new general optimization software package which combines a number of common metaheuristic algorithms within a single user interface and is designed specifically with neutronic calculations in mind. The algorithms included in the software are implementations of Particle-Swarm Optimization (PSO), Differential Evolution (DE), Artificial Bee Colony (ABC), and a Genetic Algorithm (GA). The software has been used to optimize the design of several problems in neutron optics and shielding, coupled with Monte-Carlo simulations, in order to evaluate the performance of the various algorithms. Generally, the performance of the algorithms depended on the specific scenario; however, DE provided the best average solutions in all scenarios investigated in this work.
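
    DE, the best average performer in this study, is compact enough to sketch in full. Below is a standard DE/rand/1/bin loop assuming box bounds and a scalar objective; the parameter names and defaults are generic choices, not those of the authors' software package:

        import numpy as np

        def de_rand_1_bin(f, lo, hi, pop_size=30, F=0.8, CR=0.9, gens=200, seed=0):
            rng = np.random.default_rng(seed)
            lo, hi = np.asarray(lo, float), np.asarray(hi, float)
            dim = lo.size
            pop = rng.uniform(lo, hi, (pop_size, dim))
            fit = np.array([f(x) for x in pop])
            for _ in range(gens):
                for i in range(pop_size):
                    idx = [j for j in range(pop_size) if j != i]
                    a, b, c = pop[rng.choice(idx, 3, replace=False)]
                    mutant = np.clip(a + F * (b - c), lo, hi)   # rand/1 mutation
                    cross = rng.random(dim) < CR                # binomial crossover
                    cross[rng.integers(dim)] = True             # keep at least one gene
                    trial = np.where(cross, mutant, pop[i])
                    ft = f(trial)
                    if ft <= fit[i]:                            # greedy selection
                        pop[i], fit[i] = trial, ft
            return pop[fit.argmin()], fit.min()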

  11. Focusing on the Golden Ball Metaheuristic: An Extended Study on a Wider Set of Problems

    Directory of Open Access Journals (Sweden)

    E. Osaba

    2014-01-01

    Nowadays, the development of new metaheuristics for solving optimization problems is a topic of interest in the scientific community, and a large number of techniques of this kind can be found in the literature, including many recently proposed ones such as the artificial bee colony and the imperialist competitive algorithm. This paper focuses on one recently published technique, called Golden Ball (GB). The GB is a multiple-population metaheuristic based on soccer concepts. Although it was designed to solve combinatorial optimization problems, until now it has only been tested on two simple routing problems: the traveling salesman problem and the capacitated vehicle routing problem. In this paper, the GB is applied to four different combinatorial optimization problems. Two of them are routing problems that are more complex than the previously used ones: the asymmetric traveling salesman problem and the vehicle routing problem with backhauls. Additionally, one constraint satisfaction problem (the n-queen problem) and one combinatorial design problem (the one-dimensional bin packing problem) have also been used. The outcomes obtained by GB are compared with those obtained by two different genetic algorithms and two distributed genetic algorithms. Additionally, two statistical tests are conducted to compare these results.

  12. Population-based metaheuristic optimization in neutron optics and shielding design

    International Nuclear Information System (INIS)

    DiJulio, D.D.; Björgvinsdóttir, H.; Zendler, C.; Bentley, P.M.

    2016-01-01

    Population-based metaheuristic algorithms are powerful tools in the design of neutron scattering instruments, and the use of these types of algorithms for this purpose is becoming more and more commonplace. Today there exists a wide range of algorithms to choose from when designing an instrument, and it is not always initially clear which may provide the best performance. Furthermore, due to the nature of these algorithms, the final solution found for a specific design scenario cannot always be guaranteed to be the global optimum. Therefore, to explore the potential benefits of and differences between the variety of available algorithms when applied to such design scenarios, we have carried out a detailed study of some commonly used algorithms. For this purpose, we have developed a new general optimization software package which combines a number of common metaheuristic algorithms within a single user interface and is designed specifically with neutronic calculations in mind. The algorithms included in the software are implementations of Particle-Swarm Optimization (PSO), Differential Evolution (DE), Artificial Bee Colony (ABC), and a Genetic Algorithm (GA). The software has been used to optimize the design of several problems in neutron optics and shielding, coupled with Monte-Carlo simulations, in order to evaluate the performance of the various algorithms. Generally, the performance of the algorithms depended on the specific scenario; however, DE provided the best average solutions in all scenarios investigated in this work.

  13. Focusing on the Golden Ball Metaheuristic: An Extended Study on a Wider Set of Problems

    Science.gov (United States)

    Osaba, E.; Diaz, F.; Carballedo, R.; Onieva, E.; Perallos, A.

    2014-01-01

    Nowadays, the development of new metaheuristics for solving optimization problems is a topic of interest in the scientific community, and a large number of techniques of this kind can be found in the literature, including many recently proposed ones such as the artificial bee colony and the imperialist competitive algorithm. This paper focuses on one recently published technique, called Golden Ball (GB). The GB is a multiple-population metaheuristic based on soccer concepts. Although it was designed to solve combinatorial optimization problems, until now it has only been tested on two simple routing problems: the traveling salesman problem and the capacitated vehicle routing problem. In this paper, the GB is applied to four different combinatorial optimization problems. Two of them are routing problems that are more complex than the previously used ones: the asymmetric traveling salesman problem and the vehicle routing problem with backhauls. Additionally, one constraint satisfaction problem (the n-queen problem) and one combinatorial design problem (the one-dimensional bin packing problem) have also been used. The outcomes obtained by GB are compared with those obtained by two different genetic algorithms and two distributed genetic algorithms. Additionally, two statistical tests are conducted to compare these results. PMID:25165742

  14. Applying the Taguchi method for optimized fabrication of bovine ...

    African Journals Online (AJOL)

    The objective of the present study was to optimize the fabrication of bovine serum albumin (BSA) nanoparticles by applying the Taguchi method, with characterization of the nanoparticle bioproducts. BSA nanoparticles have been extensively studied in our previous works as a suitable carrier for drug delivery, since they are ...

  15. A Metaheuristic Approach to the Multi-Objective Unit Commitment Problem Combining Economic and Environmental Criteria

    Directory of Open Access Journals (Sweden)

    Luís A. C. Roque

    2017-12-01

    We consider a Unit Commitment Problem (UCP) addressing not only the economic objective of minimizing the total production costs, as is done in the standard UCP, but also environmental concerns. Our approach utilizes a multi-objective formulation and includes in the objective function a criterion to minimize the emission of pollutants. Environmental concerns are having a significant impact on the operation of power systems because of the emissions from fossil-fuelled power plants, and the standard UCP, which minimizes just the total production costs, is inadequate to address them. We propose to address the UCP with environmental concerns as a multi-objective problem and use a metaheuristic approach combined with a non-dominated sorting procedure to solve it. The metaheuristic developed is a variant of an evolutionary algorithm known as the Biased Random Key Genetic Algorithm. Computational experiments have been carried out on benchmark problems with up to 100 generation units for a 24 h scheduling horizon. The performance of the method, as well as the quality, diversity and distribution characteristics of the solutions obtained, are analysed. It is shown that the proposed method compares favourably against alternative approaches in most cases analysed.
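
    A minimal sketch of the non-dominated sorting step mentioned above, for bi-objective minimization of (cost, emissions); the schedule objective values in the usage line are hypothetical placeholders, not results from the paper:

        def non_dominated(points):
            # return indices of points not dominated by any other (minimize all objectives)
            front = []
            for i, p in enumerate(points):
                dominated = any(
                    all(qk <= pk for qk, pk in zip(q, p)) and q != p
                    for j, q in enumerate(points) if j != i
                )
                if not dominated:
                    front.append(i)
            return front

        # hypothetical (production cost, emissions) pairs for candidate schedules
        print(non_dominated([(560e3, 22.4e3), (565e3, 21.8e3), (570e3, 22.5e3)]))  # [0, 1]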

  16. Using and comparing metaheuristic algorithms for optimizing bidding strategy viewpoint of profit maximization of generators

    Science.gov (United States)

    Mousavi, Seyed Hosein; Nazemi, Ali; Hafezalkotob, Ashkan

    2015-03-01

    With the formation of competitive electricity markets around the world, the optimization of bidding strategies has become one of the main topics in studies related to market design. Market design is challenged by multiple objectives that need to be satisfied. The solution of such multi-objective problems is often searched over the combined strategy space, and thus requires the simultaneous optimization of multiple parameters. The problem is formulated analytically using the Nash equilibrium concept for games composed of large numbers of players having discrete and large strategy spaces. The solution methodology is based on a characterization of Nash equilibria in terms of the minima of a function and relies on a metaheuristic optimization approach to find these minima. This paper presents several metaheuristic algorithms, namely the genetic algorithm (GA), simulated annealing (SA) and a hybrid simulated annealing genetic algorithm (HSAGA), to simulate how generators bid in the spot electricity market so as to maximize their profit given the other generators' strategies, and compares their results. As both GA and SA are generic search methods, HSAGA is also a generic search method. The model, based on actual data, is applied to a peak hour of Tehran's wholesale spot market in 2012. The simulations show that GA outperforms SA and HSAGA in computing time, number of function evaluations and computing stability, and that the Nash equilibria calculated by GA vary less between runs than those of the other algorithms.
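
    A generic simulated annealing loop of the kind compared above, assuming a user-supplied neighbor move and cost function; the cooling schedule and parameters are illustrative defaults, not the paper's settings:

        import math
        import random

        def simulated_annealing(x0, cost, neighbor, t0=1.0, alpha=0.95, iters=5000):
            x, fx = x0, cost(x0)
            best, fb = x, fx
            t = t0
            for _ in range(iters):
                y = neighbor(x)
                fy = cost(y)
                # accept improvements always, worsenings with Boltzmann probability
                if fy < fx or random.random() < math.exp((fx - fy) / t):
                    x, fx = y, fy
                    if fx < fb:
                        best, fb = x, fx
                t = max(t * alpha, 1e-12)        # geometric cooling with a floor
            return best, fb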

  17. A difference-matrix metaheuristic for intensity map segmentation in step-and-shoot IMRT delivery.

    Science.gov (United States)

    Gunawardena, Athula D A; D'Souza, Warren D; Goadrich, Laura D; Meyer, Robert R; Sorensen, Kelly J; Naqvi, Shahid A; Shi, Leyuan

    2006-05-21

    At an intermediate stage of radiation treatment planning for IMRT, most commercial treatment planning systems for IMRT generate intensity maps that describe the grid of beamlet intensities for each beam angle. Intensity map segmentation of the matrix of individual beamlet intensities into a set of MLC apertures and corresponding intensities is then required in order to produce an actual radiation delivery plan for clinical use. Mathematically, this is a very difficult combinatorial optimization problem, especially when mechanical limitations of the MLC lead to many constraints on aperture shape, and setup times for apertures make the number of apertures an important factor in overall treatment time. We have developed, implemented and tested on clinical cases a metaheuristic (that is, a method that provides a framework to guide the repeated application of another heuristic) that efficiently generates very high-quality (low aperture number) segmentations. Our computational results demonstrate that the number of beam apertures and monitor units in the treatment plans resulting from our approach is significantly smaller than the corresponding values for treatment plans generated by the heuristics embedded in a widely used commercial system. We also contrast the excellent results of our fast and robust metaheuristic with results from an 'exact' method, branch-and-cut, which attempts to construct optimal solutions, but, within clinically acceptable time limits, generally fails to produce good solutions, especially for intensity maps with more than five intensity levels. Finally, we show that in no instance is there a clinically significant change of quality associated with our more efficient plans.
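
    Not the authors' difference-matrix metaheuristic, but a minimal greedy baseline for the same segmentation task: peel apertures (one open interval per row, a crude stand-in for MLC leaf-pair constraints) off an integer intensity map until it is exhausted:

        import numpy as np

        def greedy_segments(intensity):
            m = np.array(intensity, dtype=int)
            apertures = []
            while m.max() > 0:
                level = m[m > 0].min()           # lowest remaining intensity level
                shape = np.zeros_like(m)
                for r in range(m.shape[0]):
                    cols = np.flatnonzero(m[r] >= level)
                    if cols.size:                # open only the first contiguous run
                        end = cols[0]
                        while end + 1 < m.shape[1] and m[r, end + 1] >= level:
                            end += 1
                        shape[r, cols[0]:end + 1] = 1
                apertures.append((level, shape))
                m -= level * shape
            return apertures                     # list of (monitor units, aperture) pairs

        # e.g. greedy_segments([[1, 2, 2, 0], [0, 3, 1, 1]]) yields three apertures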

  18. A difference-matrix metaheuristic for intensity map segmentation in step-and-shoot IMRT delivery

    International Nuclear Information System (INIS)

    Gunawardena, Athula D A; D'Souza, Warren D; Goadrich, Laura D; Meyer, Robert R; Sorensen, Kelly J; Naqvi, Shahid A; Shi, Leyuan

    2006-01-01

    At an intermediate stage of radiation treatment planning for IMRT, most commercial treatment planning systems for IMRT generate intensity maps that describe the grid of beamlet intensities for each beam angle. Intensity map segmentation of the matrix of individual beamlet intensities into a set of MLC apertures and corresponding intensities is then required in order to produce an actual radiation delivery plan for clinical use. Mathematically, this is a very difficult combinatorial optimization problem, especially when mechanical limitations of the MLC lead to many constraints on aperture shape, and setup times for apertures make the number of apertures an important factor in overall treatment time. We have developed, implemented and tested on clinical cases a metaheuristic (that is, a method that provides a framework to guide the repeated application of another heuristic) that efficiently generates very high-quality (low aperture number) segmentations. Our computational results demonstrate that the number of beam apertures and monitor units in the treatment plans resulting from our approach is significantly smaller than the corresponding values for treatment plans generated by the heuristics embedded in a widely used commercial system. We also contrast the excellent results of our fast and robust metaheuristic with results from an 'exact' method, branch-and-cut, which attempts to construct optimal solutions, but, within clinically acceptable time limits, generally fails to produce good solutions, especially for intensity maps with more than five intensity levels. Finally, we show that in no instance is there a clinically significant change of quality associated with our more efficient plans.

  19. An Ad-Hoc Initial Solution Heuristic for Metaheuristic Optimization of Energy Market Participation Portfolios

    Directory of Open Access Journals (Sweden)

    Ricardo Faia

    2017-06-01

    The deregulation of the electricity sector has culminated in the introduction of competitive markets. In addition, the emergence of new forms of electric energy production, namely renewable energy, has brought additional changes in electricity market operation. Renewable energy has significant advantages, but at the cost of an intermittent character. The generation variability adds new challenges for negotiating players, as they have to deal with a new level of uncertainty. Decision support tools that assist players in their negotiations are therefore crucial. Artificial intelligence techniques play an important role in this decision support, as they can provide valuable results in rather small execution times, namely for the problem of optimizing the electricity market participation portfolio. This paper proposes a heuristic method that provides an initial solution allowing metaheuristic techniques to improve their results through a good initialization of the optimization process. Results show that by using the proposed heuristic, multiple metaheuristic optimization methods are able to improve their solutions in a faster execution time, thus providing a valuable contribution to player support in energy market negotiations.

  20. Linear algebraic methods applied to intensity modulated radiation therapy.

    Science.gov (United States)

    Crooks, S M; Xing, L

    2001-10-01

    Methods of linear algebra are applied to the choice of beam weights for intensity modulated radiation therapy (IMRT). It is shown that the physical interpretation of the beam weights, target homogeneity and ratios of deposited energy can be given in terms of matrix equations and quadratic forms. The methodology of fitting using linear algebra as applied to IMRT is examined. Results are compared with IMRT plans that had been prepared using a commercially available IMRT treatment planning system and previously delivered to cancer patients.

  1. Calibration of microscopic traffic simulation models using metaheuristic algorithms

    Directory of Open Access Journals (Sweden)

    Miao Yu

    2017-06-01

    This paper presents several metaheuristic algorithms to calibrate a microscopic traffic simulation model. The genetic algorithm (GA), Tabu Search (TS), and combinations of the two (i.e., warmed GA and warmed TS) are implemented and compared. A set of traffic data collected from the I-5 Freeway, Los Angeles, California, is used. Objective functions, built on flow and speed, are defined to minimize the difference between simulated and field traffic data. Several car-following parameters in VISSIM, which can significantly affect the simulation outputs, are selected for calibration. A better match to the field measurements is reached with the GA, TS, and warmed GA and TS than with the default parameters in VISSIM. Overall, TS performs very well and can be used to calibrate parameters. Combining metaheuristic algorithms clearly performs better and is therefore highly recommended for calibrating microscopic traffic simulation models.
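
    A sketch of the calibration loop with tabu search, assuming parameters discretized to tuples, a neighbors() function enumerating one-step parameter changes, and a cost() that runs the simulator and returns, for example, the RMSE between simulated and field flow/speed (all names are hypothetical; VISSIM itself is not driven here):

        def tabu_search(x0, cost, neighbors, tenure=10, iters=200):
            x, fx = x0, cost(x0)
            best, fb = x, fx
            tabu = [x]                           # short-term memory of visited points
            for _ in range(iters):
                cands = [y for y in neighbors(x) if y not in tabu]
                if not cands:
                    break
                x = min(cands, key=cost)         # best admissible neighbor, even if worse
                fx = cost(x)
                tabu.append(x)
                if len(tabu) > tenure:
                    tabu.pop(0)                  # expire the oldest tabu entry
                if fx < fb:
                    best, fb = x, fx
            return best, fb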

  2. Hybrid Metaheuristics for Solving a Fuzzy Single Batch-Processing Machine Scheduling Problem

    Science.gov (United States)

    Molla-Alizadeh-Zavardehi, S.; Tavakkoli-Moghaddam, R.; Lotfi, F. Hosseinzadeh

    2014-01-01

    This paper deals with the problem of minimizing the total weighted tardiness of jobs in real-world single batch-processing machine (SBPM) scheduling in the presence of fuzzy due dates. First, a fuzzy mixed integer linear programming model is developed. Then, due to the complexity of the problem, which is NP-hard, we design two hybrid metaheuristics, called GA-VNS and VNS-SA, combining the advantages of the genetic algorithm (GA), variable neighborhood search (VNS) and simulated annealing (SA) frameworks. Besides, we propose three fuzzy earliest-due-date heuristics to solve the given problem. Through computational experiments with several random test problems, a robust calibration is applied to the parameters. Finally, computational results on different-scale test problems are presented to compare the proposed algorithms. PMID:24883359

  3. Methods of applied mathematics with a software overview

    CERN Document Server

    Davis, Jon H

    2016-01-01

    This textbook, now in its second edition, provides students with a firm grasp of the fundamental notions and techniques of applied mathematics as well as the software skills to implement them. The text emphasizes the computational aspects of problem solving as well as the limitations and implicit assumptions inherent in the formal methods. Readers are also given a sense of the wide variety of problems in which the presented techniques are useful. Broadly organized around the theme of applied Fourier analysis, the treatment covers classical applications in partial differential equations and boundary value problems, and a substantial number of topics associated with Laplace, Fourier, and discrete transform theories. Some advanced topics are explored in the final chapters such as short-time Fourier analysis and geometrically based transforms applicable to boundary value problems. The topics covered are useful in a variety of applied fields such as continuum mechanics, mathematical physics, control theory, and si...

  4. Metaheuristic Algorithm for Photovoltaic Parameters: Comparative Study and Prediction with a Firefly Algorithm

    Directory of Open Access Journals (Sweden)

    Mohamed Louzazni

    2018-02-01

    In this paper, a Firefly algorithm is proposed for the identification and comparative study of five, seven and eight parameters of single and double diode solar cells and a photovoltaic module under different solar irradiation and temperature conditions. Further, the metaheuristic algorithm is used to predict the electrical parameters of three different solar cell technologies. The first is a commercial RTC mono-crystalline silicon solar cell with single and double diodes at 33 °C and 1000 W/m2. The second is a flexible hydrogenated amorphous silicon (a-Si:H) solar cell with a single diode. The third is a commercial photovoltaic module (Photowatt-PWP 201), in which 36 polycrystalline silicon cells are connected in series, with a single diode, at 25 °C and 1000 W/m2, from experimental current-voltage data. The proposed constrained objective function is adapted to minimize the absolute errors between experimental and predicted values of voltage and current in two zones. Finally, for performance validation, the parameters obtained through the Firefly algorithm are compared with those from recent research papers reporting metaheuristic optimization algorithms and analytical methods. The presented results confirm the validity and reliability of the Firefly algorithm in extracting the optimal parameters of the photovoltaic solar cell.
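
    The standard firefly update (move each firefly toward every brighter one, with distance-decayed attractiveness plus a random step) is short enough to sketch; the bounds, coefficients and objective below are generic placeholders, not the paper's fitted parameter-extraction setup:

        import numpy as np

        def firefly(f, lo, hi, n=25, beta0=1.0, gamma=1.0, alpha=0.2, gens=100, seed=1):
            rng = np.random.default_rng(seed)
            lo, hi = np.asarray(lo, float), np.asarray(hi, float)
            x = rng.uniform(lo, hi, (n, lo.size))
            bright = np.array([f(p) for p in x])     # lower objective = brighter firefly
            for _ in range(gens):
                for i in range(n):
                    for j in range(n):
                        if bright[j] < bright[i]:    # i is attracted to brighter j
                            beta = beta0 * np.exp(-gamma * np.sum((x[i] - x[j]) ** 2))
                            x[i] += beta * (x[j] - x[i]) + alpha * (rng.random(lo.size) - 0.5)
                            x[i] = np.clip(x[i], lo, hi)
                            bright[i] = f(x[i])
            return x[bright.argmin()], bright.min()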

  5. Using VPython to Apply Mathematics to Physics in Mathematical Methods

    Science.gov (United States)

    Demaree, Dedra; Eagan, J.; Finn, P.; Knight, B.; Singleton, J.; Therrien, A.

    2006-12-01

    At the College of the Holy Cross, the sophomore mathematical methods of physics students completed VPython programming projects. This is the first time VPython has been used in a physics course at this college. These projects were aimed at applying some methods learned to actual physical situations. Students first completed worksheets from North Carolina State University to learn the programming environment. They then used VPython to apply the mathematics of vectors and differential equations learned in class to solve physics situations which appear simple but are not easy to solve analytically. For most of these students it was their first programming experience. It was also one of the only chances we had to do actual physics applications during the semester due to the large amount of mathematical content covered. In addition to showcasing the students’ final programs, this poster will share their view of including VPython in this course.

  6. Synthetic data. A proposed method for applied risk management

    OpenAIRE

    Carbajal De Nova, Carolina

    2017-01-01

    The proposed method attempts to contribute to the econometric and simulation applied risk management literature. It consists of an algorithm to construct synthetic data and risk simulation econometric models, supported by a set of behavioral assumptions. This algorithm has the advantage of replicating natural phenomena and uncertainty events in a short period of time. These features convey economically low costs besides computational efficiency. An application for wheat farmers is developed...

  7. A parallel metaheuristic for large mixed-integer dynamic optimization problems, with applications in computational biology

    Science.gov (United States)

    Henriques, David; González, Patricia; Doallo, Ramón; Saez-Rodriguez, Julio; Banga, Julio R.

    2017-01-01

    Background: We consider a general class of global optimization problems dealing with nonlinear dynamic models. Although this class is relevant to many areas of science and engineering, here we are interested in applying this framework to the reverse engineering problem in computational systems biology, which yields very large mixed-integer dynamic optimization (MIDO) problems. In particular, we consider the framework of logic-based ordinary differential equations (ODEs). Methods: We present saCeSS2, a parallel method for the solution of this class of problems. This method is based on a parallel cooperative scatter search metaheuristic, with new mechanisms of self-adaptation and specific extensions to handle large mixed-integer problems. We have paid special attention to the avoidance of convergence stagnation using adaptive cooperation strategies tailored to this class of problems. Results: We illustrate its performance with a set of three very challenging case studies from the domain of dynamic modelling of cell signaling. The simplest case study considers a synthetic signaling pathway and has 84 continuous and 34 binary decision variables. A second case study considers the dynamic modeling of signaling in liver cancer using high-throughput data, and has 135 continuous and 109 binary decision variables. The third case study is an extremely difficult problem related to breast cancer, involving 690 continuous and 138 binary decision variables. We report computational results obtained in different infrastructures, including a local cluster, a large supercomputer and a public cloud platform. Interestingly, the results show how the cooperation of individual parallel searches modifies the systemic properties of the sequential algorithm, achieving superlinear speedups compared to an individual search (e.g. speedups of 15 with 10 cores), and significantly improving (by above 60%) the performance with respect to a non-cooperative parallel scheme. The scalability of the

  8. Newton-Krylov methods applied to nonequilibrium radiation diffusion

    International Nuclear Information System (INIS)

    Knoll, D.A.; Rider, W.J.; Olsen, G.L.

    1998-01-01

    The authors present results of applying a matrix-free Newton-Krylov method to a nonequilibrium radiation diffusion problem. Here, there is no use of operator splitting, and Newton's method is used to converge the nonlinearities within a time step. Since the nonlinear residual is formed, it is used to monitor convergence. It is demonstrated that a simple Picard-based linearization produces a sufficient preconditioning matrix for the Krylov method, thus obviating the need to form or store a Jacobian matrix for Newton's method. They discuss the possibility that the Newton-Krylov approach may allow larger time steps, without loss of accuracy, as compared to an operator split approach where nonlinearities are not converged within a time step.
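
    A minimal sketch of the matrix-free idea: approximate Jacobian-vector products by finite differences so that GMRES never needs an assembled Jacobian. This uses SciPy's generic GMRES without the paper's Picard-based preconditioner; the toy residual in the comment is an assumption for illustration:

        import numpy as np
        from scipy.sparse.linalg import LinearOperator, gmres

        def newton_gmres(F, u0, tol=1e-8, max_newton=20, eps=1e-7):
            u = u0.astype(float).copy()
            for _ in range(max_newton):
                r = F(u)
                if np.linalg.norm(r) < tol:
                    break
                # matrix-free Jacobian action: J v ~ (F(u + eps v) - F(u)) / eps
                Jv = LinearOperator((u.size, u.size),
                                    matvec=lambda v, u=u, r=r: (F(u + eps * v) - r) / eps)
                du, _ = gmres(Jv, -r)            # inner Krylov solve of J du = -F(u)
                u += du
            return u

        # e.g. newton_gmres(lambda u: u**3 - 1.0, np.full(4, 2.0)) converges to ones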

  9. Methods for model selection in applied science and engineering.

    Energy Technology Data Exchange (ETDEWEB)

    Field, Richard V., Jr.

    2004-10-01

    Mathematical models are developed and used to study the properties of complex systems and/or modify these systems to satisfy some performance requirements in just about every area of applied science and engineering. A particular reason for developing a model, e.g., performance assessment or design, is referred to as the model use. Our objective is the development of a methodology for selecting a model that is sufficiently accurate for an intended use. Information on the system being modeled is, in general, incomplete, so that there may be two or more models consistent with the available information. The collection of these models is called the class of candidate models. Methods are developed for selecting the optimal member from a class of candidate models for the system. The optimal model depends on the available information, the selected class of candidate models, and the model use. Classical methods for model selection, including the method of maximum likelihood and Bayesian methods, as well as a method employing a decision-theoretic approach, are formulated to select the optimal model for numerous applications. There is no requirement that the candidate models be random. Classical methods for model selection ignore model use and require data to be available. Examples are used to show that these methods can be unreliable when data is limited. The decision-theoretic approach to model selection does not have these limitations, and model use is included through an appropriate utility function. This is especially important when modeling high risk systems, where the consequences of using an inappropriate model for the system can be disastrous. The decision-theoretic method for model selection is developed and applied for a series of complex and diverse applications. These include the selection of the: (1) optimal order of the polynomial chaos approximation for non-Gaussian random variables and stationary stochastic processes, (2) optimal pressure load model to be
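
    For the classical (likelihood-based) side of model selection discussed above, a small sketch: score candidate polynomial orders with the Gaussian AIC and keep the minimizer. The data are toy values, and the report's decision-theoretic machinery is not reproduced:

        import numpy as np

        def gaussian_aic(x, y, order):
            coef = np.polyfit(x, y, order)
            resid = y - np.polyval(coef, x)
            k = order + 2                        # coefficients + noise variance
            return len(y) * np.log(np.mean(resid ** 2)) + 2 * k

        rng = np.random.default_rng(0)
        x = np.linspace(0.0, 1.0, 50)
        y = 1.0 + 2.0 * x - 3.0 * x**2 + 0.05 * rng.standard_normal(50)   # toy data
        best_order = min(range(1, 6), key=lambda p: gaussian_aic(x, y, p))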

  10. A parallel metaheuristic for large mixed-integer dynamic optimization problems, with applications in computational biology.

    Science.gov (United States)

    Penas, David R; Henriques, David; González, Patricia; Doallo, Ramón; Saez-Rodriguez, Julio; Banga, Julio R

    2017-01-01

    We consider a general class of global optimization problems dealing with nonlinear dynamic models. Although this class is relevant to many areas of science and engineering, here we are interested in applying this framework to the reverse engineering problem in computational systems biology, which yields very large mixed-integer dynamic optimization (MIDO) problems. In particular, we consider the framework of logic-based ordinary differential equations (ODEs). We present saCeSS2, a parallel method for the solution of this class of problems. This method is based on a parallel cooperative scatter search metaheuristic, with new mechanisms of self-adaptation and specific extensions to handle large mixed-integer problems. We have paid special attention to the avoidance of convergence stagnation using adaptive cooperation strategies tailored to this class of problems. We illustrate its performance with a set of three very challenging case studies from the domain of dynamic modelling of cell signaling. The simplest case study considers a synthetic signaling pathway and has 84 continuous and 34 binary decision variables. A second case study considers the dynamic modeling of signaling in liver cancer using high-throughput data, and has 135 continuous and 109 binary decision variables. The third case study is an extremely difficult problem related to breast cancer, involving 690 continuous and 138 binary decision variables. We report computational results obtained in different infrastructures, including a local cluster, a large supercomputer and a public cloud platform. Interestingly, the results show how the cooperation of individual parallel searches modifies the systemic properties of the sequential algorithm, achieving superlinear speedups compared to an individual search (e.g. speedups of 15 with 10 cores), and significantly improving (by above 60%) the performance with respect to a non-cooperative parallel scheme. The scalability of the method is also good (tests

  11. Analysis of concrete beams using applied element method

    Science.gov (United States)

    Lincy Christy, D.; Madhavan Pillai, T. M.; Nagarajan, Praveen

    2018-03-01

    The Applied Element Method (AEM) is a displacement-based method of structural analysis. Some of its features are similar to those of the Finite Element Method (FEM). In AEM, the structure is analysed by dividing it into several elements, as in FEM. But in AEM, elements are connected by springs instead of nodes, as is the case in FEM. In this paper, the background to AEM is discussed and the necessary equations are derived. To illustrate the application of AEM, it has been used to analyse a plain concrete beam with fixed supports. The analysis is limited to 2-dimensional structures. It was found that the number of springs does not have much influence on the results. AEM could predict deflections and reactions with a reasonable degree of accuracy.

  12. Applying sample survey methods to clinical trials data.

    Science.gov (United States)

    LaVange, L M; Koch, G G; Schwartz, T A

    This paper outlines the utility of statistical methods for sample surveys in analysing clinical trials data. Sample survey statisticians face a variety of complex data analysis issues deriving from the use of multi-stage probability sampling from finite populations. One such issue is that of clustering of observations at the various stages of sampling. Survey data analysis approaches developed to accommodate clustering in the sample design have more general application to clinical studies in which repeated measures structures are encountered. Situations where these methods are of interest include multi-visit studies where responses are observed at two or more time points for each patient, multi-period cross-over studies, and epidemiological studies for repeated occurrences of adverse events or illnesses. We describe statistical procedures for fitting multiple regression models to sample survey data that are more effective for repeated measures studies with complicated data structures than the more traditional approaches of multivariate repeated measures analysis. In this setting, one can specify a primary sampling unit within which repeated measures have intraclass correlation. This intraclass correlation is taken into account by sample survey regression methods through robust estimates of the standard errors of the regression coefficients. Regression estimates are obtained from model fitting estimation equations which ignore the correlation structure of the data (that is, computing procedures which assume that all observational units are independent or are from simple random samples). The analytic approach is straightforward to apply with logistic models for dichotomous data, proportional odds models for ordinal data, and linear models for continuously scaled data, and results are interpretable in terms of population average parameters. Through the features summarized here, the sample survey regression methods have many similarities to the broader family of
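
    A numpy sketch of the central computation described here: fit a working-independence regression, then correct the standard errors with a cluster-robust sandwich estimator, where the cluster is the primary sampling unit (e.g. the patient in a multi-visit trial). Function and variable names are assumptions for illustration:

        import numpy as np

        def ols_cluster_robust(X, y, cluster):
            beta, *_ = np.linalg.lstsq(X, y, rcond=None)  # ignores correlation, as in the text
            resid = y - X @ beta
            bread = np.linalg.inv(X.T @ X)
            meat = np.zeros((X.shape[1], X.shape[1]))
            for g in np.unique(cluster):                  # score outer-products per cluster
                s = X[cluster == g].T @ resid[cluster == g]
                meat += np.outer(s, s)
            cov = bread @ meat @ bread                    # sandwich covariance
            return beta, np.sqrt(np.diag(cov))            # robust standard errors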

  13. Classification of Specialized Farms Applying Multivariate Statistical Methods

    Directory of Open Access Journals (Sweden)

    Zuzana Hloušková

    2017-01-01

    The paper is aimed at the application of advanced multivariate statistical methods to classifying cattle breeding farming enterprises by their economic size. The advantage of the model is its ability to use a few selected indicators, compared to the complex methodology of the current classification model, which requires knowledge of the detailed structure of the herd turnover and the structure of cultivated crops. The output of the paper is intended to be applied within farm structure research focused on the future development of Czech agriculture. As the data source, the farming enterprises database for 2014 from the FADN CZ system has been used. The predictive model proposed exploits knowledge of the actual size classes of the farms tested. Outcomes of the linear discriminant analysis multifactor classification method support the classification of farming enterprises in the group of Small farms (98% classified correctly) and the Large and Very Large enterprises (100% classified correctly). The Medium Size farms have been classified correctly in only 58.11% of cases.
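
    A toy sketch of the discriminant-analysis step with scikit-learn; the indicators and size-class labels below are synthetic placeholders, not FADN data:

        import numpy as np
        from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

        rng = np.random.default_rng(0)
        X = rng.normal(size=(100, 4))                    # hypothetical economic indicators
        score = X @ np.array([1.0, 0.5, -0.2, 0.1])      # latent "economic size"
        y = np.digitize(score, np.quantile(score, [0.4, 0.75]))  # Small/Medium/Large

        lda = LinearDiscriminantAnalysis().fit(X, y)
        print((lda.predict(X) == y).mean())              # resubstitution accuracy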

  14. Enhanced Molecular Dynamics Methods Applied to Drug Design Projects.

    Science.gov (United States)

    Ziada, Sonia; Braka, Abdennour; Diharce, Julien; Aci-Sèche, Samia; Bonnet, Pascal

    2018-01-01

    Nobel Laureate Richard P. Feynman stated: "[…] everything that living things do can be understood in terms of jiggling and wiggling of atoms […]." The importance of computer simulations of macromolecules, which use classical mechanics principles to describe atom behavior, is widely acknowledged and nowadays, they are applied in many fields such as material sciences and drug discovery. With the increase of computing power, molecular dynamics simulations can be applied to understand biological mechanisms at realistic timescales. In this chapter, we share our computational experience providing a global view of two of the widely used enhanced molecular dynamics methods to study protein structure and dynamics through the description of their characteristics, limits and we provide some examples of their applications in drug design. We also discuss the appropriate choice of software and hardware. In a detailed practical procedure, we describe how to set up, run, and analyze two main molecular dynamics methods, the umbrella sampling (US) and the accelerated molecular dynamics (aMD) methods.

  15. Multi-Objective Analysis Applied to Mixed-Model Assembly Line Sequencing Problem through Elite Induced Evolutionary Method

    Science.gov (United States)

    Shimizu, Yoshiaki; Sakaguchi, Tatsuhiko; Pralomkarn, Theerayoth

    To meet higher customer satisfaction and shorter production lead times, assembly lines are shifting to mixed-model assembly lines. Accordingly, sequencing is becoming an increasingly important scheduling operation that directly affects the efficiency of the entire process. In this study, the sequencing problem at the mixed-model assembly line has been formulated as a bi-objective integer programming problem so that decision making through trade-off analysis can bring about significant production improvements. We have then developed a multi-objective analysis method by hybridizing conventional and recent meta-heuristic methods. After presenting its generic idea, the car mixed-model assembly line sequencing problem is considered as a case study. Certain measures are also introduced to quantitatively evaluate the performance of the method through comparison.

  16. A nuclear heuristic for application to metaheuristics in-core fuel management optimization

    International Nuclear Information System (INIS)

    Meneses, Anderson Alvarenga de Moura; Gambardella, Luca Maria; Schirru, Roberto

    2009-01-01

    The In-Core Fuel Management Optimization (ICFMO) is a well-known problem of nuclear engineering characterized by complexity, a high number of feasible solutions, and a complex evaluation process with high computational cost, so that a great number of evaluations during an optimization process is prohibitive. Heuristics are criteria or principles for deciding which among several alternative courses of action are more effective with respect to some goal. In this paper, we propose a new approach for the use of relational heuristics in the search for the ICFMO. The heuristic is based on the reactivity of the fuel assemblies and their positions in the reactor core. It was applied to random search, resulting in less computational effort in terms of the number of loading pattern evaluations during the search. The experiments demonstrate that it is possible to achieve results comparable to those in the literature, for future application to metaheuristics in the ICFMO. (author)

  17. Metrological evaluation of characterization methods applied to nuclear fuels

    International Nuclear Information System (INIS)

    Faeda, Kelly Cristina Martins; Lameiras, Fernando Soares; Camarano, Denise das Merces; Ferreira, Ricardo Alberto Neto; Migliorini, Fabricio Lima; Carneiro, Luciana Capanema Silva; Silva, Egonn Hendrigo Carvalho

    2010-01-01

    In manufacturing nuclear fuel, characterizations are performed in order to assure the minimization of harmful effects. Uranium dioxide is the substance most used as nuclear reactor fuel because of its many advantages, such as high stability even in contact with water at high temperatures, a high melting point, and a high capacity to retain fission products. Several methods are used for the characterization of nuclear fuels, such as thermogravimetric analysis for the O/U ratio, the penetration-immersion method, helium pycnometry and mercury porosimetry for density and porosity, the BET method for the specific surface, chemical analyses for relevant impurities, and the laser flash method for thermophysical properties. Specific tools are needed to control the diameter and sphericity of the microspheres and the properties of the coating layers (thickness, density, and degree of anisotropy). Other methods can also give information, such as scanning and transmission electron microscopy, X-ray diffraction, microanalysis, and secondary ion mass spectroscopy for chemical analysis. The accuracy of measurement and the level of uncertainty of the resulting data are important. This work describes a general metrological characterization of some techniques applied to the characterization of nuclear fuel. Sources of measurement uncertainty were analyzed. The purpose is to summarize selected properties of UO 2 that have been studied by CDTN in a program of fuel development for Pressurized Water Reactors (PWR). The selected properties are crucial for thermalhydraulic codes used to study design basis accidents. The thermal characterization (thermal diffusivity and thermal conductivity) and the penetration-immersion method (density and open porosity) applied to UO 2 samples were the focus. The thermal characterization of UO 2 samples was carried out by the laser flash method between room temperature and 448 K. The adaptive Monte Carlo method was used to obtain the endpoints of the
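
    A sketch of Monte Carlo uncertainty propagation in the spirit of the adaptive method mentioned (a fixed sample size here, not the adaptive stopping rule), applied to the standard laser-flash relation a = 0.1388 L^2 / t_1/2; the input values and uncertainties are hypothetical:

        import numpy as np

        rng = np.random.default_rng(0)
        n = 200_000
        L = rng.normal(2.0e-3, 5e-6, n)          # sample thickness [m] (hypothetical)
        t_half = rng.normal(50.0e-3, 1.0e-3, n)  # half-rise time [s] (hypothetical)

        a = 0.1388 * L**2 / t_half               # thermal diffusivity [m^2/s]
        lo, hi = np.quantile(a, [0.025, 0.975])  # endpoints of a 95 % coverage interval
        print(a.mean(), a.std(ddof=1), (lo, hi))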

  18. Nuclear and nuclear related analytical methods applied in environmental research

    International Nuclear Information System (INIS)

    Popescu, Ion V.; Gheboianu, Anca; Bancuta, Iulian; Cimpoca, G. V; Stihi, Claudia; Radulescu, Cristiana; Oros Calin; Frontasyeva, Marina; Petre, Marian; Dulama, Ioana; Vlaicu, G.

    2010-01-01

    Nuclear analytical methods can be used in environmental research activities such as water quality assessment, pesticide residues, global climatic change (transboundary), pollution and remediation. Heavy metal pollution is a problem associated with areas of intensive industrial activity. In this work, the moss biomonitoring technique was employed to study atmospheric deposition in Dambovita County, Romania. Complementary nuclear and atomic analytical methods were also used: Neutron Activation Analysis (NAA), Atomic Absorption Spectrometry (AAS) and Inductively Coupled Plasma Atomic Emission Spectrometry (ICP-AES). These high-sensitivity analytical methods were used to determine the chemical composition of samples of mosses placed in areas with different industrial pollution sources. The concentrations of Cr, Fe, Mn, Ni and Zn were determined. The concentration of Fe in the same samples was determined using all these methods, with very good agreement within statistical limits, which demonstrates the capability of these analytical methods to be applied to a large spectrum of environmental samples with consistent results. (authors)

  19. Analysis of Brick Masonry Wall using Applied Element Method

    Science.gov (United States)

    Lincy Christy, D.; Madhavan Pillai, T. M.; Nagarajan, Praveen

    2018-03-01

    The Applied Element Method (AEM) is a versatile tool for structural analysis. Analysis is done by discretising the structure, as in the case of the Finite Element Method (FEM). In AEM, elements are connected by a set of normal and shear springs instead of nodes. AEM is extensively used for the analysis of brittle materials. Brick masonry walls can be effectively analyzed in the framework of AEM. The composite nature of the masonry wall can be easily modelled using springs: the brick springs and mortar springs are assumed to be connected in series. The brick masonry wall is analyzed and the failure load is determined for different loading cases. The results were used to find the best aspect ratio of brick for strengthening the brick masonry wall.
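
    The series connection of brick and mortar springs reduces to textbook spring algebra. A minimal sketch, assuming the commonly quoted AEM interface-spring stiffness k = E d t / a; the material and geometry values are hypothetical placeholders:

        def series_stiffness(k_brick, k_mortar):
            # springs in series: compliances add
            return 1.0 / (1.0 / k_brick + 1.0 / k_mortar)

        def aem_spring(E, d, t, a):
            # one interface spring (E: modulus, d: spring spacing, t: wall
            # thickness, a: length of material crossed by the spring)
            return E * d * t / a

        k_b = aem_spring(E=16e9, d=0.01, t=0.10, a=0.05)  # brick segment (hypothetical)
        k_m = aem_spring(E=4e9, d=0.01, t=0.10, a=0.01)   # mortar joint (hypothetical)
        print(series_stiffness(k_b, k_m))                 # equivalent joint stiffness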

  20. Applied systems ecology: models, data, and statistical methods

    Energy Technology Data Exchange (ETDEWEB)

    Eberhardt, L L

    1976-01-01

    In this report, systems ecology is largely equated to mathematical or computer simulation modelling. The need for models in ecology stems from the necessity to have an integrative device for the diversity of ecological data, much of which is observational, rather than experimental, as well as from the present lack of a theoretical structure for ecology. Different objectives in applied studies require specialized methods. The best predictive devices may be regression equations, often non-linear in form, extracted from much more detailed models. A variety of statistical aspects of modelling, including sampling, are discussed. Several aspects of population dynamics and food-chain kinetics are described, and it is suggested that the two presently separated approaches should be combined into a single theoretical framework. It is concluded that future efforts in systems ecology should emphasize actual data and statistical methods, as well as modelling.

  1. A simple metaheuristic for the fleetsize and mix problem with TimeWindows

    NARCIS (Netherlands)

    Bräysy, Olli; Dullaert, Wout; Porkka, Pasi P.

    2018-01-01

    This paper presents a powerful new single-parameter metaheuristic to solve the Fleet Size and Mix Vehicle Routing Problem with Time Windows. The key idea of the new metaheuristic is to perform a random number of random-sized jumps in random order through four well-known local search operators.

  2. The Dynamical Recollection of Interconnected Neural Networks Using Meta-heuristics

    Science.gov (United States)

    Kuremoto, Takashi; Watanabe, Shun; Kobayashi, Kunikazu; Feng, Laing-Bing; Obayashi, Masanao

    Interconnected recurrent neural networks are well known for their associative memory of characteristic patterns. For example, the traditional Hopfield network (HN) can recall stored patterns stably, while Aihara's chaotic neural network (CNN) is able to realize dynamical recollection of a sequence of patterns. In this paper, we propose to use meta-heuristic (MH) methods such as particle swarm optimization (PSO) and the genetic algorithm (GA) to improve traditional associative memory systems. Using PSO or GA, optimal parameters are found for the CNN to accelerate the recollection process and raise the rate of successful recollection, and an optimized bias current is calculated for the HN to improve the network with dynamical association of a series of patterns. Simulation results on binary pattern association show the effectiveness of the proposed methods.
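
    A minimal Hopfield recall sketch: Hebbian storage of +/-1 patterns and asynchronous threshold updates, with a bias term as the hook where a PSO/GA-optimized bias current could enter (the optimization itself is not shown):

        import numpy as np

        def hebbian_weights(patterns):
            P = np.array(patterns, dtype=float)  # rows are +/-1 patterns
            W = P.T @ P / len(P)
            np.fill_diagonal(W, 0.0)             # no self-connections
            return W

        def recall(W, x, bias=0.0, steps=500, seed=0):
            rng = np.random.default_rng(seed)
            x = np.array(x, dtype=float)
            for _ in range(steps):               # asynchronous unit updates
                i = rng.integers(len(x))
                x[i] = 1.0 if W[i] @ x + bias >= 0.0 else -1.0
            return x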

  3. Analytical methods applied to diverse types of Brazilian propolis

    Directory of Open Access Journals (Sweden)

    Marcucci Maria

    2011-06-01

    Propolis is a bee product composed mainly of plant resins and beeswax; its chemical composition therefore varies with the geographic and plant origins of these resins, as well as with the species of bee. Brazil is an important supplier of propolis on the world market and, although green propolis from the southeast is the best known and most studied, several other types of propolis from Apis mellifera and native stingless bees (also called cerumen) can be found. Propolis is usually consumed as an extract, so the type of solvent and the extraction procedures employed further affect its composition. Methods used for extraction; analysis of the percentages of resin, wax and insoluble material in crude propolis; and determination of phenolic, flavonoid, amino acid and heavy metal contents are reviewed herein. Different chromatographic methods applied to the separation, identification and quantification of Brazilian propolis components, and their relative strengths, are discussed, as well as direct insertion mass spectrometry fingerprinting. Propolis has been used as a popular remedy for several centuries for a wide array of ailments. Its antimicrobial properties, present in propolis from different origins, have been extensively studied. More recently, the anti-parasitic, anti-viral/immune stimulating, healing, anti-tumor, anti-inflammatory, antioxidant and analgesic activities of diverse types of Brazilian propolis have been evaluated.

  4. Method of applying a coating onto a steel plate

    International Nuclear Information System (INIS)

    Masuda, Hiromasa; Murakami, Shozo; Chihara, Yoshihi.

    1970-01-01

    A method of applying a protective coating onto a steel plate to protect it from corrosion is given, using an irradiation process and a vehicle consisting of a radically polymerizable high-molecular compound, a radically polymerizable less-volatile monomer and/or a functional intermediate agent, and a volatile solvent. The radiation may be electron beams at an energy level ranging from 100 to 1,000 keV. An advantage of this invention is that the ratio of the prepolymer to the monomer can be kept constant without difficulty during the irradiation operation, so that the variation in thickness is very small. Another advantage is that the addition of a monomer is not necessary for viscosity reduction, so that the optimum cross-linking density can be obtained. The molecular weight is so high that application by spraying is possible. The solvent remaining after the irradiation operation has substantially no influence on the polymerization hardening and gel content. In one example, 62 parts of a prepolymer produced by reacting the epoxy resin Epikote No.1001 with an equal equivalent of acrylic acid were mixed with 17 parts of hydroxyl ethyl acrylate, 77.5 parts of methyl ethyl ketone and 5.5 parts of isopropyl alcohol to produce a vehicle composition. This composition was applied onto the surface of a glass plate to a thickness of 20 microns. The monomer remaining in the mixture showed very little change over time. (Iwakiri, K.)

  5. On interval methods applied to robot reliability quantification

    International Nuclear Information System (INIS)

    Carreras, C.; Walker, I.D.

    2000-01-01

    Interval methods have recently been successfully applied to obtain significantly improved robot reliability estimates via fault trees for the case of uncertain and time-varying input reliability data. These initial studies generated output distributions of failure probabilities by extending standard interval arithmetic with new abstractions called interval grids which can be parameterized to control the complexity and accuracy of the estimation process. In this paper different parameterization strategies are evaluated in order to gain a more complete understanding of the potential benefits of the approach. A canonical example of a robot manipulator system is used to show that an appropriate selection of parameters is a key issue for the successful application of such novel interval-based methodologies
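
    A minimal interval-arithmetic sketch of the underlying idea (plain intervals only; the paper's parameterized interval grids are a further refinement), propagating uncertain component failure probabilities through a two-component series system:

        class Interval:
            def __init__(self, lo, hi):
                self.lo, self.hi = lo, hi
            def __sub__(self, o):
                return Interval(self.lo - o.hi, self.hi - o.lo)
            def __mul__(self, o):
                p = (self.lo * o.lo, self.lo * o.hi, self.hi * o.lo, self.hi * o.hi)
                return Interval(min(p), max(p))

        # series system: P(system fails) = 1 - (1 - p1)(1 - p2), with interval inputs
        one = Interval(1.0, 1.0)
        p1, p2 = Interval(0.01, 0.03), Interval(0.02, 0.05)   # uncertain inputs
        p_sys = one - (one - p1) * (one - p2)
        print(p_sys.lo, p_sys.hi)                             # ~0.0298 .. 0.0785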

  6. Parallel fast multipole boundary element method applied to computational homogenization

    Science.gov (United States)

    Ptaszny, Jacek

    2018-01-01

    In the present work, a fast multipole boundary element method (FMBEM) and a parallel computer code for the 3D elasticity problem are developed and applied to the computational homogenization of a solid containing spherical voids. The system of equations is solved by using the GMRES iterative solver. The boundary of the body is discretized by using quadrilateral serendipity elements with adaptive numerical integration. Operations related to a single GMRES iteration, performed by traversing the corresponding tree structure upwards and downwards, are parallelized by using the OpenMP standard. The assignment of tasks to threads is based on the assumption that the tree nodes at which the moment transformations are initialized can be partitioned into disjoint sets of equal or approximately equal size and assigned to the threads. The achieved speedup as a function of the number of threads is examined.

  7. The virtual fields method applied to spalling tests on concrete

    Directory of Open Access Journals (Sweden)

    Forquin P.

    2012-08-01

    For a decade, spalling techniques based on a metallic Hopkinson bar in contact with a concrete sample have been widely employed to characterize the dynamic tensile strength of concrete at strain rates ranging from a few tens to two hundred s−1. However, the processing method, mainly based on the velocity profile measured on the rear free surface of the sample (the Novikov formula), remains quite basic, and identification of the whole softening behaviour of the concrete is out of reach. In the present paper, a new processing method is proposed based on the Virtual Fields Method (VFM). First, a digital high-speed camera is used to record pictures of a grid glued on the specimen. Next, full-field measurements are used to obtain the axial displacement field at the surface of the specimen. Finally, a specific virtual field has been defined in the VFM equation to use the acceleration map as an alternative 'load cell'. This method, applied to three spalling tests, allowed Young's modulus to be identified during the test. It was shown that this modulus is constant during the initial compressive part of the test and decreases in the tensile part when micro-damage exists. It was also shown that in such a simple inertial test it is possible to reconstruct average axial stress profiles using only the acceleration data. It was then possible to construct local stress-strain curves and derive a tensile strength value.

  8. Metaheuristic based scheduling meta-tasks in distributed heterogeneous computing systems.

    Science.gov (United States)

    Izakian, Hesam; Abraham, Ajith; Snášel, Václav

    2009-01-01

    Scheduling is a key problem in distributed heterogeneous computing systems in order to benefit from the large computing capacity of such systems, and is an NP-complete problem. In this paper, we present a metaheuristic technique, namely the Particle Swarm Optimization (PSO) algorithm, for this problem. PSO is a population-based search algorithm based on the simulation of the social behavior of bird flocking and fish schooling. Particles fly through the problem search space to find optimal or near-optimal solutions. The scheduler aims at minimizing the makespan, which is the time at which the latest task finishes. Experimental studies show that the proposed method is more efficient and surpasses the reported PSO and GA approaches for this problem.
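
    A compact sketch of PSO for this scheduling task, mapping continuous particle positions to task-to-machine assignments by truncation and scoring each assignment by its makespan; the encoding and parameters are one plausible choice, not necessarily the paper's:

        import numpy as np

        def pso_makespan(times, n_machines, n_particles=30, gens=200,
                         w=0.7, c1=1.5, c2=1.5, seed=0):
            rng = np.random.default_rng(seed)
            times = np.asarray(times, float)
            n_tasks = times.size
            x = rng.uniform(0, n_machines, (n_particles, n_tasks))
            v = np.zeros_like(x)

            def makespan(pos):
                assign = np.clip(pos.astype(int), 0, n_machines - 1)
                return max(times[assign == m].sum() for m in range(n_machines))

            pbest, pf = x.copy(), np.array([makespan(p) for p in x])
            gbest = pbest[pf.argmin()].copy()
            for _ in range(gens):
                r1, r2 = rng.random(x.shape), rng.random(x.shape)
                v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
                x = np.clip(x + v, 0.0, n_machines - 1e-9)
                f = np.array([makespan(p) for p in x])
                better = f < pf
                pbest[better], pf[better] = x[better], f[better]
                gbest = pbest[pf.argmin()].copy()
            return np.clip(gbest.astype(int), 0, n_machines - 1), pf.min()

        # e.g. pso_makespan([4, 2, 7, 3, 5, 1], n_machines=3)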

  9. Metaheuristic Based Scheduling Meta-Tasks in Distributed Heterogeneous Computing Systems

    Directory of Open Access Journals (Sweden)

    Hesam Izakian

    2009-07-01

    Scheduling is a key problem in distributed heterogeneous computing systems in order to benefit from the large computing capacity of such systems, and is an NP-complete problem. In this paper, we present a metaheuristic technique, namely the Particle Swarm Optimization (PSO) algorithm, for this problem. PSO is a population-based search algorithm based on the simulation of the social behavior of bird flocking and fish schooling. Particles fly through the problem search space to find optimal or near-optimal solutions. The scheduler aims at minimizing the makespan, which is the time at which the latest task finishes. Experimental studies show that the proposed method is more efficient and surpasses the reported PSO and GA approaches for this problem.

  10. Potassium fertilizer applied by different methods in the zucchini crop

    Directory of Open Access Journals (Sweden)

    Carlos N. V. Fernandes

    Aiming to evaluate the effect of potassium (K) doses applied by the conventional method and by fertigation in zucchini (Cucurbita pepo L.), a field experiment was conducted in Fortaleza, CE, Brazil. The statistical design was a randomized block with four replicates, in a 4 x 2 factorial scheme corresponding to four doses of K (0, 75, 150 and 300 kg K2O ha-1) and two fertilization methods (conventional and fertigation). The analyzed variables were: fruit mass (FM), number of fruits (NF), fruit length (FL), fruit diameter (FD), pulp thickness (PT), soluble solids (SS), yield (Y), water use efficiency (WUE) and potassium use efficiency (KUE), besides an economic analysis using the net present value (NPV), internal rate of return (IRR) and payback period (PP). K doses influenced FM, FD, PT and Y, which increased linearly, with the highest value estimated at 36,828 kg ha-1 for the highest K dose (300 kg K2O ha-1). This dose was also responsible for the largest WUE, 92 kg ha-1 mm-1. KUE showed quadratic behavior, and the dose of 174 kg K2O ha-1 led to its maximum value of 87.41 kg ha-1 (kg K2O ha-1)-1. All treatments were economically viable, and the most profitable months were May, April, December and November.
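
    The economic screening described uses standard discounted-cashflow formulas. A minimal sketch with hypothetical monthly cashflows (the study's actual crop budgets are not reproduced):

        import numpy as np

        def npv(rate, cashflows):
            # cashflows[0] falls at time zero, later entries one period apart
            return sum(cf / (1.0 + rate) ** t for t, cf in enumerate(cashflows))

        def payback_period(cashflows):
            # first period at which cumulative cashflow turns non-negative
            cum = np.cumsum(cashflows)
            hits = np.flatnonzero(cum >= 0.0)
            return int(hits[0]) if hits.size else None    # None: never recovered

        flows = [-5000.0] + [900.0] * 8                   # hypothetical crop budget
        print(npv(0.01, flows), payback_period(flows))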

  11. Application of Meta-Heuristic Hybrid Artificial Intelligence Techniques for Modeling of Bonding Strength of Plywood Panels

    Directory of Open Access Journals (Sweden)

    Cenk Demirkır

    2014-04-01

    Full Text Available Plywood, one of the most important wood-based panels, has many uses in many countries, from traffic signs to building construction. It is known that high-quality plywood manufacturing is achieved with good bonding under optimum pressing conditions, depending on the adhesive type. This study investigates the possibilities of using modern meta-heuristic hybrid artificial intelligence techniques, such as the IKE and AANN methods, for predicting the bonding strength of plywood panels. The study consists of two main parts, experimental and analytical. Scots pine, maritime pine and European black pine logs were used as wood species. The pine veneers, peeled at 32°C and 50°C, were dried at 110°C, 140°C and 160°C. Phenol formaldehyde and melamine urea formaldehyde resins were used as adhesives. The EN 314-1 standard was used to determine the bonding shear strength values of the plywood panels in the experimental part of the study. The intuitive k-nearest neighbor estimator (IKE) and an adaptive artificial neural network (AANN) were then used to estimate the bonding strength of the panels. The best estimation performance was obtained with the MA metric for k = 10. The most influential factor on bonding strength was found to be the adhesive type. Error rates were below 5% for both IKE and AANN. The proposed methods can therefore be recommended for estimating the bonding strength of plywood panels.
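
    A minimal k-nearest-neighbour regression sketch in the spirit of the IKE estimator mentioned above; the actual IKE method and its MA metric are more elaborate, and the feature and target values below are placeholders rather than the paper's plywood data.

    ```python
    import numpy as np

    def knn_predict(X_train, y_train, x_query, k=10):
        """Predict a bonding-strength value as the mean target of the k
        training samples closest to x_query (Euclidean distance on
        standardized features)."""
        mu, sigma = X_train.mean(axis=0), X_train.std(axis=0) + 1e-12
        Z = (X_train - mu) / sigma
        zq = (x_query - mu) / sigma
        d = np.linalg.norm(Z - zq, axis=1)
        nearest = np.argsort(d)[:k]
        return y_train[nearest].mean()

    # Toy data: [peeling temp degC, drying temp degC, adhesive code]
    X = np.array([[32, 110, 0], [32, 140, 0], [32, 160, 1],
                  [50, 110, 1], [50, 140, 0], [50, 160, 1]], dtype=float)
    y = np.array([1.8, 2.1, 1.6, 1.9, 2.3, 1.7])  # shear strength [N/mm^2]

    print(knn_predict(X, y, np.array([40, 150, 1]), k=3))
    ```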

  12. A difference-matrix metaheuristic for intensity map segmentation in step-and-shoot IMRT delivery

    Energy Technology Data Exchange (ETDEWEB)

    Gunawardena, Athula D A [Department of Mathematics and Computer Sciences, University of Wisconsin-Whitewater, 800 West Main Street, Whitewater, WI (United States); D' Souza, Warren D [Department of Radiation Oncology, School of Medicine, University of Maryland, 22 South Greene Street, Baltimore, MD (United States); Goadrich, Laura D [Department of Industrial and Systems Engineering, University of Wisconsin-Madison, Madison, WI (United States); Meyer, Robert R [Department of Computer Sciences, University of Wisconsin-Madison, Madison, WI (United States); Sorensen, Kelly J [Department of Computer Sciences, University of Wisconsin-Madison, Madison, WI (United States); Naqvi, Shahid A [Department of Radiation Oncology, School of Medicine, University of Maryland, 22 South Greene Street, Baltimore, MD (United States); Shi, Leyuan [Department of Industrial and Systems Engineering, University of Wisconsin-Madison, Madison, WI (United States)

    2006-05-21

    At an intermediate stage of radiation treatment planning for IMRT, most commercial treatment planning systems generate intensity maps that describe the grid of beamlet intensities for each beam angle. Intensity map segmentation of the matrix of individual beamlet intensities into a set of MLC apertures and corresponding intensities is then required in order to produce an actual radiation delivery plan for clinical use. Mathematically, this is a very difficult combinatorial optimization problem, especially when mechanical limitations of the MLC lead to many constraints on aperture shape, and setup times for apertures make the number of apertures an important factor in overall treatment time. We have developed, implemented and tested on clinical cases a metaheuristic (that is, a method that provides a framework to guide the repeated application of another heuristic) that efficiently generates very high-quality (low aperture number) segmentations. Our computational results demonstrate that the number of beam apertures and monitor units in the treatment plans resulting from our approach is significantly smaller than the corresponding values for treatment plans generated by the heuristics embedded in a widely used commercial system. We also contrast the excellent results of our fast and robust metaheuristic with results from an 'exact' method, branch-and-cut, which attempts to construct optimal solutions but, within clinically acceptable time limits, generally fails to produce good solutions, especially for intensity maps with more than five intensity levels. Finally, we show that in no instance is there a clinically significant change of quality associated with our more efficient plans.
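
    For illustration only, the sketch below performs a naive greedy segmentation of an integer intensity map into apertures with one open interval of consecutive leaves per row. It shows the problem being solved, not the paper's difference-matrix metaheuristic, and it ignores most mechanical MLC constraints.

    ```python
    import numpy as np

    def segment(intensity):
        """Decompose an integer beamlet matrix into weighted binary
        apertures; each aperture opens at most one run per row."""
        M = intensity.copy()
        apertures = []                 # list of (weight, aperture matrix)
        while M.max() > 0:
            w = M[M > 0].min()         # weight: smallest remaining intensity
            A = np.zeros_like(M)
            for r in range(M.shape[0]):
                cols = np.flatnonzero(M[r] > 0)
                if cols.size:          # open the first run of positive cells
                    c0 = c1 = cols[0]
                    while c1 + 1 < M.shape[1] and M[r, c1 + 1] > 0:
                        c1 += 1
                    A[r, c0:c1 + 1] = 1
            M -= w * A                 # stays non-negative: w <= all opened cells
            apertures.append((w, A))
        return apertures

    imap = np.array([[1, 3, 2, 0],
                     [2, 2, 1, 1],
                     [0, 1, 3, 2]])
    segs = segment(imap)
    print("apertures:", len(segs), "total MU:", sum(w for w, _ in segs))
    assert (sum(w * A for w, A in segs) == imap).all()
    ```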

  13. M2Align: parallel multiple sequence alignment with a multi-objective metaheuristic.

    Science.gov (United States)

    Zambrano-Vega, Cristian; Nebro, Antonio J; García-Nieto, José; Aldana-Montes, José F

    2017-10-01

    Multiple sequence alignment (MSA) is an NP-complete optimization problem found in computational biology, where the time complexity of finding an optimal alignment rises exponentially with the number of sequences and their lengths. Additionally, to assess the quality of an MSA, a number of objectives can be taken into account, such as maximizing the sum-of-pairs score, maximizing the totally conserved columns, minimizing the number of gaps, or maximizing structural-information-based scores such as STRIKE. An approach to dealing with MSA problems is to use multi-objective metaheuristics, which are non-exact stochastic optimization methods that can produce high-quality solutions to complex problems having two or more objectives to be optimized at the same time. Our motivation is to provide a multi-objective metaheuristic for MSA that can run in parallel, taking advantage of multi-core-based computers. The software tool we propose, called M2Align (Multi-objective Multiple Sequence Alignment), is a parallel and more efficient version of the three-objective optimizer for sequence alignments MO-SAStrE, able to reduce the algorithm's computing time by exploiting the computing capabilities of common multi-core CPU clusters. Our performance evaluation over datasets of the benchmark BAliBASE (v3.0) shows that significant time reductions can be achieved by using up to 20 cores. Even in sequential executions, M2Align is faster than MO-SAStrE, thanks to the encoding method used for the alignments. M2Align is an open source project hosted on GitHub, where the source code and sample datasets can be freely obtained: https://github.com/KhaosResearch/M2Align. Contact: antonio@lcc.uma.es. Supplementary data are available at Bioinformatics online.
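
    Two of the alignment objectives named above can be computed in a few lines; the toy scoring scheme (match/mismatch/gap values) is an assumption, since M2Align uses richer scores such as STRIKE.

    ```python
    def column_sp(col, match=1, mismatch=-1, gap=-2):
        """Sum-of-pairs score of one alignment column (gap-gap pairs score 0)."""
        score = 0
        for i in range(len(col)):
            for j in range(i + 1, len(col)):
                a, b = col[i], col[j]
                if a == '-' or b == '-':
                    score += gap if a != b else 0
                else:
                    score += match if a == b else mismatch
        return score

    def msa_objectives(alignment):
        cols = list(zip(*alignment))
        sp = sum(column_sp(c) for c in cols)              # sum-of-pairs
        tc = sum(1 for c in cols                          # totally conserved
                 if len(set(c)) == 1 and c[0] != '-')     # columns
        return sp, tc

    aln = ["ACG-T",
           "ACGGT",
           "AC--T"]
    print(msa_objectives(aln))
    ```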

  14. A Parallel Multiobjective Metaheuristic for Multiple Sequence Alignment.

    Science.gov (United States)

    Rubio-Largo, Álvaro; Castelli, Mauro; Vanneschi, Leonardo; Vega-Rodríguez, Miguel A

    2018-04-19

    The alignment of three or more nucleotide/amino-acid sequences at the same time is known as multiple sequence alignment (MSA), a nondeterministic polynomial time (NP)-hard optimization problem. The time complexity of finding an optimal alignment rises exponentially as the number of sequences to align increases. In this work, we deal with a multiobjective version of the MSA problem wherein the goal is to simultaneously optimize the accuracy and conservation of the alignment. A parallel version of the hybrid multiobjective memetic metaheuristic for MSA is proposed. To evaluate the parallel performance of our proposal, we have selected a pool of data sets with different numbers of sequences (up to 1000 sequences) and studied its performance against other well-known parallel aligners published in the literature, such as MSAProbs, tree-based consistency objective function for alignment evaluation (T-Coffee), Clustal Omega, and multiple alignment using fast Fourier transform (MAFFT). The comparative study reveals that our parallel aligner obtains better results than MSAProbs, T-Coffee, Clustal Omega, and MAFFT. In addition, the parallel version is around 25 times faster than the sequential version with 32 cores, obtaining an efficiency of around 80%.
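
    The reported figures are mutually consistent under the usual definition of parallel efficiency (speedup divided by core count):

    ```python
    # Parallel efficiency = speedup / number of cores.
    speedup, cores = 25, 32
    print(f"efficiency = {speedup}/{cores} = {speedup/cores:.0%}")  # 78%, i.e. 'around 80%'
    ```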

  15. Flood Hazard Mapping by Applying Fuzzy TOPSIS Method

    Science.gov (United States)

    Han, K. Y.; Lee, J. Y.; Keum, H.; Kim, B. J.; Kim, T. H.

    2017-12-01

    There are many technical methods for integrating various factors in flood hazard mapping. The purpose of this study is to suggest a methodology for integrated flood hazard mapping using MCDM (Multi-Criteria Decision Making). MCDM problems involve a set of alternatives that are evaluated on the basis of conflicting and incommensurate criteria. In this study, to apply MCDM to assessing flood risk, maximum flood depth, maximum velocity, and maximum travel time are considered as criteria, and the mapped element units are considered as alternatives. A scheme that finds the efficient alternative closest to an ideal value is an appropriate way to assess the flood risk of a large number of element units (alternatives) based on various flood indices. Therefore TOPSIS, the most commonly used MCDM scheme, is adopted to create the flood hazard map. The indices for flood hazard mapping (maximum flood depth, maximum velocity, and maximum travel time) carry uncertainty, since the simulation results vary with the flood scenario and topographical conditions. This kind of ambiguity in the indices can cause uncertainty in the flood hazard map. To account for the ambiguity and uncertainty of the criteria, fuzzy logic, which is able to handle ambiguous expressions, is introduced. In this paper, we produced a flood hazard map for levee-breach overflow using the fuzzy TOPSIS technique. We identified the areas with the highest hazard grade on the resulting integrated flood hazard map and compared them with those indicated in the existing flood risk maps. We also expect that applying the flood hazard mapping methodology suggested in this paper to the production of current flood risk maps would yield new maps that take into account the priorities among hazard areas and carry more varied and important information than before. Keywords: Flood hazard map; levee break analysis; 2D analysis; MCDM; Fuzzy TOPSIS
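
    A crisp TOPSIS sketch for ranking flood-prone cells by the three criteria named above; the paper's fuzzy extension replaces crisp entries with fuzzy numbers, and the data, weights and criterion directions here are invented for illustration.

    ```python
    import numpy as np

    def topsis(X, weights, benefit):
        """X: alternatives x criteria; benefit[j] True if larger means riskier."""
        R = X / np.linalg.norm(X, axis=0)           # vector normalization
        V = R * weights                             # weighted normalized matrix
        ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
        anti = np.where(benefit, V.min(axis=0), V.max(axis=0))
        d_pos = np.linalg.norm(V - ideal, axis=1)
        d_neg = np.linalg.norm(V - anti, axis=1)
        return d_neg / (d_pos + d_neg)              # closeness to the ideal

    # rows: grid cells; cols: [depth m, velocity m/s, travel time min]
    X = np.array([[2.1, 1.5, 12.0],
                  [0.4, 0.3, 45.0],
                  [1.2, 0.9, 20.0]])
    w = np.array([0.5, 0.3, 0.2])
    # depth and velocity increase hazard; a longer travel time decreases it
    hazard = topsis(X, w, benefit=np.array([True, True, False]))
    print(np.argsort(-hazard))  # cells ranked from most to least hazardous
    ```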

  16. Applying multi-resolution numerical methods to geodynamics

    Science.gov (United States)

    Davies, David Rhodri

    Computational models yield inaccurate results if the underlying numerical grid fails to provide the necessary resolution to capture a simulation's important features. For the large-scale problems regularly encountered in geodynamics, inadequate grid resolution is a major concern. The majority of models involve multi-scale dynamics, being characterized by fine-scale upwelling and downwelling activity in a more passive, large-scale background flow. Such configurations, when coupled to the complex geometries involved, present a serious challenge for computational methods. Current techniques are unable to resolve localized features and, hence, such models cannot be solved efficiently. This thesis demonstrates, through a series of papers and closely-coupled appendices, how multi-resolution finite-element methods from the forefront of computational engineering can provide a means to address these issues. The problems examined achieve multi-resolution through one of two methods. In two-dimensions (2-D), automatic, unstructured mesh refinement procedures are utilized. Such methods improve the solution quality of convection dominated problems by adapting the grid automatically around regions of high solution gradient, yielding enhanced resolution of the associated flow features. Thermal and thermo-chemical validation tests illustrate that the technique is robust and highly successful, improving solution accuracy whilst increasing computational efficiency. These points are reinforced when the technique is applied to geophysical simulations of mid-ocean ridge and subduction zone magmatism. To date, successful goal-orientated/error-guided grid adaptation techniques have not been utilized within the field of geodynamics. The work included herein is therefore the first geodynamical application of such methods. In view of the existing three-dimensional (3-D) spherical mantle dynamics codes, which are built upon a quasi-uniform discretization of the sphere and closely coupled

  17. Analytic methods in applied probability in memory of Fridrikh Karpelevich

    CERN Document Server

    Suhov, Yu M

    2002-01-01

    This volume is dedicated to F. I. Karpelevich, an outstanding Russian mathematician who made important contributions to applied probability theory. The book contains original papers focusing on several areas of applied probability and its uses in modern industrial processes, telecommunications, computing, mathematical economics, and finance. It opens with a review of Karpelevich's contributions to applied probability theory and includes a bibliography of his works. Other articles discuss queueing network theory, in particular, in heavy traffic approximation (fluid models). The book is suitable

  18. Review on finite element method | Erhunmwun | Journal of Applied ...

    African Journals Online (AJOL)


  19. Acti-Glide: a simple method of applying compression hosiery.

    Science.gov (United States)

    Hampton, Sylvie

    2005-05-01

    Compression hosiery is often worn to help prevent aching legs and swollen ankles, to prevent ulceration, to treat venous ulceration or to treat varicose veins. However, patients and nurses may experience problems applying hosiery and this can lead to non-concordance in patients and possibly reluctance from nurses to use compression hosiery. A simple solution to applying firm hosiery is Acti-Glide from Activa Healthcare.

  20. Dose rate reduction method for NMCA applied BWR plants

    International Nuclear Information System (INIS)

    Nagase, Makoto; Aizawa, Motohiro; Ito, Tsuyoshi; Hosokawa, Hideyuki; Varela, Juan; Caine, Thomas

    2012-09-01

    BRAC (BWR Radiation Assessment and Control) dose rate is used as an indicator of the incorporation of activated corrosion by-products into BWR recirculation piping, which is known to be a significant contributor to the dose rate received by workers during refueling outages. In order to reduce the radiation exposure of workers during an outage, it is desirable to keep BRAC dose rates as low as possible. After HWC was adopted to reduce IGSCC, a BRAC dose rate increase was observed in many plants. As a countermeasure to these rapid dose rate increases under HWC conditions, Zn injection was widely adopted in the United States and Europe, resulting in a reduction of BRAC dose rates. However, BRAC dose rates in several plants remain high, prompting the industry to continue to investigate methods to achieve further reductions. In recent years a large portion of the BWR fleet has adopted NMCA (NobleChem™) to enhance the hydrogen injection effect that suppresses SCC. After NMCA, especially OLNC (On-Line NobleChem™), BRAC dose rates were observed to decrease, and in some OLNC-applied BWR plants this reduction was observed year after year, reaching a new, lower equilibrium level. These dose-rate reduction trends suggest that a further dose reduction might be obtained by combining Pt and Zn injection. Laboratory experiments and in-plant tests were therefore carried out to evaluate the effect of Pt and Zn on Co-60 deposition behaviour. First, laboratory experiments were conducted to study the effect of noble metal deposition on Co deposition on stainless steel surfaces. Polished type 316 stainless steel coupons were prepared, and some of them were OLNC-treated in the test loop before the Co deposition test. Water chemistry conditions to simulate HWC were as follows: dissolved oxygen, hydrogen and hydrogen peroxide were below 5 ppb, 100 ppb and 0 ppb (no addition), respectively. Zn was injected to target a concentration of 5 ppb. The test was conducted up to 1500 hours at 553 K.

  1. Parallel Processing and Applied Mathematics. 10th International Conference, PPAM 2013. Revised Selected Papers

    DEFF Research Database (Denmark)

    The following topics are dealt with: parallel scientific computing; numerical algorithms; parallel nonnumerical algorithms; cloud computing; evolutionary computing; metaheuristics; applied mathematics; GPU computing; multicore systems; hybrid architectures; hierarchical parallelism; HPC systems...

  2. Metaheuristic approaches to order sequencing on a unidirectional picking line

    Directory of Open Access Journals (Sweden)

    AP de Villiers

    2013-06-01

    Full Text Available In this paper the sequencing of orders on a unidirectional picking line is considered. The aim of the order sequencing is to minimise the number of cycles travelled by a picker within the picking line to complete all orders. A tabu search, simulated annealing, a genetic algorithm, generalised extremal optimisation and a random local search are presented as possible solution approaches. Computational results based on real-life data instances are presented for these metaheuristics and compared to the performance of a lower bound and the solutions used in practice. The random local search exhibits the best overall solution quality; however, the generalised extremal optimisation approach delivers comparable results in considerably shorter computational times.

  3. On metaheuristic "failure modes": a case study in Tabu search for job-shop scheduling.

    Energy Technology Data Exchange (ETDEWEB)

    Watson, Jean-Paul

    2005-06-01

    In this paper, we analyze the relationship between pool maintenance schemes, long-term memory mechanisms, and search space structure, with the goal of placing metaheuristic design on a more concrete foundation.

  4. Applied Research of Decision Tree Method on Football Training

    Directory of Open Access Journals (Sweden)

    Liu Jinhui

    2015-01-01

    Full Text Available This paper first analyses the decision tree method and then offers a further analysis of CLS. As CLS embodies the most essential and most primitive decision-making idea, it provides the basis for establishing decision trees. Because CLS leaves certain details unspecified, the ID3 decision tree algorithm is introduced to supply them: it applies information gain as the attribute-selection metric, providing a reference for finding the optimal split point. Finally, the ID3 algorithm is applied to football training, where it is verified and shown to be effective and reasonable.
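
    Information gain, the attribute-selection metric that ID3 applies, reduces to a few lines of code; the toy training-load data below are invented for illustration.

    ```python
    import math
    from collections import Counter

    def entropy(labels):
        n = len(labels)
        return -sum((c / n) * math.log2(c / n)
                    for c in Counter(labels).values())

    def information_gain(rows, labels, attr):
        """Expected entropy reduction from splitting on attribute `attr`."""
        total = entropy(labels)
        by_value = {}
        for row, lab in zip(rows, labels):
            by_value.setdefault(row[attr], []).append(lab)
        remainder = sum(len(subset) / len(labels) * entropy(subset)
                        for subset in by_value.values())
        return total - remainder

    # attributes: training intensity, weather; label: training outcome
    rows = [{'intensity': 'high', 'weather': 'dry'},
            {'intensity': 'high', 'weather': 'wet'},
            {'intensity': 'low',  'weather': 'dry'},
            {'intensity': 'low',  'weather': 'wet'}]
    labels = ['good', 'good', 'poor', 'poor']

    for a in ('intensity', 'weather'):
        print(a, round(information_gain(rows, labels, a), 3))
    # ID3 would split on 'intensity' here (gain 1.0 vs 0.0).
    ```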

  5. Muon radiography method for fundamental and applied research

    Science.gov (United States)

    Alexandrov, A. B.; Vladymyrov, M. S.; Galkin, V. I.; Goncharova, L. A.; Grachev, V. M.; Vasina, S. G.; Konovalova, N. S.; Malovichko, A. A.; Managadze, A. K.; Okat'eva, N. M.; Polukhina, N. G.; Roganova, T. M.; Starkov, N. I.; Tioukov, V. E.; Chernyavsky, M. M.; Shchedrina, T. V.

    2017-12-01

    This paper focuses on the basic principles of the muon radiography method, reviews the major muon radiography experiments, and presents the first results in Russia obtained by the authors using this method based on emulsion track detectors.

  6. New Method for Tuning Robust Controllers Applied to Robot Manipulators

    Directory of Open Access Journals (Sweden)

    Gerardo Romero

    2012-11-01

    Full Text Available This paper presents a methodology for selecting the parameters of a nonlinear controller using Linear Matrix Inequalities (LMI). The controller is applied to a robotic manipulator to improve its robustness. For this type of dynamic system a robust control law is appropriate, because such laws depend largely on the mathematical model of the system, which in most cases cannot be completely precise. The discrepancy between the dynamic behaviour of the robot and its mathematical model is taken into account by including a nonlinear term that represents the model's uncertainty. The controller's parameters are selected with two purposes: to guarantee the asymptotic stability of the closed-loop system while taking the uncertainty into account, and to increase its robustness margin. The results are validated with numerical simulations for a particular case study and compared with previously published results to demonstrate improved controller performance.
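
    A sketch of the kind of LMI feasibility problem underlying such parameter selection: quadratic (Lyapunov) stability of a closed-loop matrix A, checked here with cvxpy. The matrix is an arbitrary stable example, not the robot manipulator model from the paper.

    ```python
    import numpy as np
    import cvxpy as cp

    # A is asymptotically stable iff there exists P > 0 with A'P + PA < 0.
    A = np.array([[0.0, 1.0],
                  [-2.0, -3.0]])
    n = A.shape[0]

    P = cp.Variable((n, n), symmetric=True)
    eps = 1e-6
    constraints = [P >> eps * np.eye(n),
                   A.T @ P + P @ A << -eps * np.eye(n)]
    prob = cp.Problem(cp.Minimize(0), constraints)
    prob.solve()

    print(prob.status)   # 'optimal' here means the LMIs are feasible
    print(P.value)       # P is a certificate of asymptotic stability
    ```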

  7. Waste classification and methods applied to specific disposal sites

    International Nuclear Information System (INIS)

    Rogers, V.C.

    1979-01-01

    An adequate definition of the classes of radioactive wastes is necessary for regulating the disposal of radioactive wastes. A classification system is proposed in which wastes are classified according to characteristics relating to their disposal. Several specific sites are analyzed with this methodology in order to gain insights into the classification of radioactive wastes. An analysis of ocean dumping as it applies to waste classification is also presented. 5 refs

  8. Solving the flexible job shop problem by hybrid metaheuristics-based multiagent model

    Science.gov (United States)

    Nouri, Houssem Eddine; Belkahla Driss, Olfa; Ghédira, Khaled

    2018-03-01

    The flexible job shop scheduling problem (FJSP) is a generalization of the classical job shop scheduling problem that allows an operation to be processed on one machine out of a set of alternative machines. The FJSP is an NP-hard problem consisting of two sub-problems: assignment and scheduling. In this paper, we propose to solve the FJSP with a hybrid metaheuristics-based clustered holonic multiagent model. First, a neighborhood-based genetic algorithm (NGA) is applied by a scheduler agent for a global exploration of the search space. Second, a local search technique is used by a set of cluster agents to guide the search into promising regions of the search space and to improve the quality of the final NGA population. The efficiency of our approach comes from the flexible selection of the promising parts of the search space by the clustering operator after the genetic algorithm process, and from applying the intensification technique of tabu search, which restarts the search from a set of elite solutions to reach new dominant scheduling solutions. Computational results are presented using four sets of well-known benchmark instances from the literature. New upper bounds are found, showing the effectiveness of the presented approach.

  9. Review on finite element method | Erhunmwun | Journal of Applied ...

    African Journals Online (AJOL)

    ... finite elements, so that it is possible to systematically construct the approximation functions needed in a variational or weighted-residual approximation of the solution of a problem over each element. Keywords: Weak Formulation, Discretisation, Numerical methods, Finite element method, Global equations, Nodal solution ...

  10. The flow curvature method applied to canard explosion

    Energy Technology Data Exchange (ETDEWEB)

    Ginoux, Jean-Marc [Laboratoire Protee, IUT de Toulon, Universite du Sud, BP 20132, F-83957 La Garde cedex (France); Llibre, Jaume, E-mail: ginoux@univ-tln.fr, E-mail: jllibre@mat.uab.cat [Departament de Matematiques, Universitat Autonoma de Barcelona, 08193 Bellaterra, Barcelona (Spain)

    2011-11-18

    The aim of this work is to establish that the bifurcation parameter value leading to a canard explosion in dimension 2, obtained by the so-called geometric singular perturbation method, can also be found according to the flow curvature method. This result is then exemplified with the classical Van der Pol oscillator. (paper)

  11. The flow curvature method applied to canard explosion

    Science.gov (United States)

    Ginoux, Jean-Marc; Llibre, Jaume

    2011-11-01

    The aim of this work is to establish that the bifurcation parameter value leading to a canard explosion in dimension 2, obtained by the so-called geometric singular perturbation method, can also be found according to the flow curvature method. This result is then exemplified with the classical Van der Pol oscillator.

  12. Literature Review of Applying Visual Method to Understand Mathematics

    Directory of Open Access Journals (Sweden)

    Yu Xiaojuan

    2015-01-01

    Full Text Available As a new method for understanding mathematics, visualization offers a new way of grasping mathematical principles and phenomena via image thinking and geometric explanation. It aims to deepen the understanding of the nature of concepts or phenomena and enhance the cognitive ability of learners. This paper collates and summarizes the application of this visual method to the understanding of mathematics. It also reviews the existing research, in particular a visual demonstration of Euler's formula, introduces the application of the method to solving relevant mathematical problems, and points out the differences and similarities between the visualization method and the numerical-graphic combination method, as well as matters needing attention in its application.

  13. Methodical Aspects of Applying Strategy Map in an Organization

    Directory of Open Access Journals (Sweden)

    Piotr Markiewicz

    2013-06-01

    Full Text Available One of the important aspects of strategic management is the instrumental aspect, embodied in a rich set of methods and techniques used at particular stages of the strategic management process. The object of interest in this study is the development of views on the implementation of strategy as an element of strategic management, together with instruments in the form of methods and techniques. A commonly used method for strategy implementation and measuring progress is the Balanced Scorecard (BSC). The method was created as a result of the project “Measuring performance in the Organization of the future” of 1990, completed by a team under the supervision of David Norton (Kaplan, Norton 2002). The method was used first of all to evaluate performance by decomposing a strategy into four perspectives and identifying measures of achievement. In the mid-1990s the method was improved by enriching it, above all, with a strategy map, which reflects the process of transforming intangible assets into tangible financial effects (Kaplan, Norton 2001). A strategy map enables the illustration of cause-and-effect relationships between processes in all four perspectives and performance indicators at the level of the organization. The purpose of this study is to present the methodical conditions of using strategy maps in the strategy implementation process in organizations of different natures.

  14. Diagrammatic Monte Carlo method as applied to the polaron problem

    International Nuclear Information System (INIS)

    Mishchenko, A.S.

    2005-01-01

    Exact numerical solution methods for the problem of a few particles interacting with one another and with several bosonic excitation modes are presented. The diagrammatic Monte Carlo method allows the exact calculation of the Green function, and the stochastic optimization technique provides an analytic continuation. Results unobtainable by conventional methods are discussed, including the properties of excited states in the self-trapping phenomenon, the optical spectra of polarons in all coupling regimes, the validity analysis of the exciton models, and the photoemission spectra of a phonon-coupled hole.

  15. Applying a life cycle approach to project management methods

    OpenAIRE

    Biggins, David; Trollsund, F.; Høiby, A.L.

    2016-01-01

    Project management is increasingly important to organisations because projects are the method by which organisations respond to their environment. A key element within project management is the standards and methods that are used to control and conduct projects, collectively known as project management methods (PMMs) and exemplified by PRINCE2, the Project Management Institute's and the Association for Project Management's Bodies of Knowledge (PMBOK and APMBOK). The purpose of t...

  16. Method for curing alkyd resin compositions by applying ionizing radiation

    International Nuclear Information System (INIS)

    Watanabe, T.; Murata, K.; Maruyama, T.

    1975-01-01

    An alkyd resin composition is prepared by dissolving a polymerizable alkyd resin having an oil length of from 10 to 50 percent in a vinyl monomer. The polymerizable alkyd resin is obtained by a half-esterification reaction between an acid anhydride having a polymerizable unsaturated group and an alkyd resin modified with conjugated unsaturated oil having at least one reactive hydroxyl group per molecule. The alkyd resin composition thus obtained is coated on an article, and ionizing radiation is applied to the article to cure the coated film thereon. (U.S.)

  17. Spectral methods applied to fluidized bed combustors. Final report

    Energy Technology Data Exchange (ETDEWEB)

    Brown, R.C.; Christofides, N.J.; Junk, K.W.; Raines, T.S.; Thiede, T.D.

    1996-08-01

    The objective of this project was to develop methods for characterizing fuels and sorbents from time-series data obtained during transient operation of fluidized bed boilers. These methods aimed at determining time constants for devolatilization and char burnout using carbon dioxide (CO{sub 2}) profiles, and time constants for the calcination and sulfation processes using CO{sub 2} and sulfur dioxide (SO{sub 2}) profiles.

  18. Apply of torque method at rationalization of work

    Directory of Open Access Journals (Sweden)

    Bandurová Miriam

    2001-03-01

    Full Text Available The aim of the study was to analyse the time consumption of the cylinder-grinder profession by the torque method. The torque observation method is used to detect the types and extent of time losses, the share of individual types of time consumption, and the causes of time losses. In this way it is possible to determine the coefficient of employment and recovery of workers in an organizational unit. The advantages of a torque survey are the low cost of obtaining information and the low demands placed on the worker and on the observer, who is easily trained. It is a mentally acceptable method for the subjects of the survey. The torque surveys identified reserves in the cylinder grinders' activity: time losses represent up to 8% of working time. With a 5-shift service and an average shift staffing of 4.4 grinders (from the statistical information of the service), the losses at cylinder grinding amount to 1.48 workers for the whole centre. Based on this information it was recommended to eliminate one job position (cylinder grinder) and reduce the staff by one grinder. Further positions cannot be eliminated, because the cylinder grindery must adapt to the grinding line in terms of the number of ground cylinders per shift, and the stock of semi-finished ground cylinders cannot be kept high owing to frequent changes in the grinding area and assortment. This contribution confirms the usefulness of the torque method as one of the methods to be used in job rationalization.

  19. Thermoluminescence as a dating method applied to the Morocco Neolithic

    International Nuclear Information System (INIS)

    Ousmoi, M.

    1989-09-01

    Thermoluminescence is an absolute dating method well adapted to the study of burnt clays, and hence of the prehistoric ceramics belonging to the Neolithic period. The purpose of this study is to establish a first absolute chronology of the northern Morocco Neolithic between 3000 and 7000 years before present, together with some improvements to TL dating. The first part of the thesis contains some hypotheses about the Moroccan Neolithic and some problems to be solved. We then study the TL dating method along with new procedures to improve the quality of the results, such as the shift of quartz TL peaks or the crushing of samples. The methods employed on 24 samples belonging to various civilisations were the quartz inclusion method and the fine grain technique. For the dosimetry, several methods were used: determination of the K2O content, alpha counting, site dosimetry using TL dosimeters and a scintillation counter. The results bring some interesting answers to the archaeological questions and improve the chronological schema of the northern Morocco Neolithic: development of the old Cardial Neolithic in the north, and perhaps in the centre of Morocco (the region of Rabat), between 5500 and 7000 before present; development of the recent middle Neolithic around 4000-5000 before present, with a protocampaniform (Skhirat) slightly older than the campaniform recognized in the south of Spain; and development of the Bronze Age around 2000-4000 before present.

  20. Evaluation of Controller Tuning Methods Applied to Distillation Column Control

    DEFF Research Database (Denmark)

    Nielsen, Kim; W. Andersen, Henrik; Kümmel, Professor Mogens

    A frequency domain approach is used to compare the nominal performance and robustness of dual composition distillation column control tuned according to Ziegler-Nichols (ZN) and Biggest Log Modulus Tuning (BLT) for three binary distillation columns, WOBE, LUVI and TOFA. The scope of this is to examine whether ZN and BLT designs yield satisfactory control of distillation columns. Further, PI controllers are tuned according to a proposed multivariable frequency domain method. A major conclusion is that the ZN-tuned controllers yield undesired overshoot and oscillation and poor stability robustness properties. BLT tuning removes the overshoot and oscillation, however at the expense of a more sluggish response. We conclude that if a simple control design is to be used, the BLT method should be preferred to the ZN method. The frequency domain design approach presented yields a more proper trade-off.

  1. Modal method for crack identification applied to reactor recirculation pump

    International Nuclear Information System (INIS)

    Miller, W.H.; Brook, R.

    1991-01-01

    Nuclear reactors have been operating and producing useful electricity for many years. Within the last few years, several plants have found cracks in the reactor coolant pump shaft near the thermal barrier. The modal method and results described herein show the analytical results of using a Modal Analysis test method to determine the presence, size, and location of a shaft crack. The authors have previously demonstrated that the test method can analytically and experimentally identify shaft cracks as small as five percent (5%) of the shaft diameter. Due to small differences in material property distribution, the attempt to identify cracks smaller than 3% of the shaft diameter has been shown to be impractical. The rotor dynamics model includes a detailed motor rotor, external weights and inertias, and realistic total support stiffness. Results of the rotor dynamics model have been verified through a comparison with on-site vibration test data

  2. Boron autoradiography method applied to the study of steels

    International Nuclear Information System (INIS)

    Gugelmeier, R.; Barcelo, G.N.; Boado, J.H.; Fernandez, C.

    1986-01-01

    The state of the boron contained in the steel microstructure is determined. Neutron autoradiography is used, permitting boron distribution images to be obtained and providing additional information that is difficult to acquire by other methods. The application of the method is described: it is based on the neutron irradiation of a polished steel sample, over which a cellulose nitrate sheet or another appropriate material is fixed to constitute the detector. The particles generated by the neutron-boron interaction affect the detector sheet, which is subsequently revealed with a chemical treatment and can be observed under the optical microscope. In the case of materials used for the construction of nuclear reactors, special attention must be given to the presence of boron since, owing to its exceptionally high capacity for neutron absorption, even the smallest quantities of boron become important. The adaptation of the method to metallurgical problems allows a correlation to be obtained between the boron distribution images and the material's microstructure. (M.E.L.)

  3. Diagrammatic Monte Carlo method as applied to the polaron problems

    International Nuclear Information System (INIS)

    Mishchenko, Andrei S

    2005-01-01

    Numerical methods are presented whereby exact solutions can be obtained to the problem of a few particles interacting with one another and with several bosonic excitation branches. The diagrammatic Monte Carlo method allows the exact calculation of the Matsubara Green function, and the stochastic optimization technique provides an approximation-free analytic continuation. In this review, results unobtainable by conventional methods are discussed, including the properties of excited states in the self-trapping phenomenon, the optical spectra of polarons in all coupling regimes, the validity range analysis of the Frenkel and Wannier approximations relevant to the exciton, and the peculiarities of the photoemission spectra of a lattice-coupled hole in a Mott insulator. (reviews of topical problems)

  4. DAKOTA reliability methods applied to RAVEN/RELAP-7.

    Energy Technology Data Exchange (ETDEWEB)

    Swiler, Laura Painton; Mandelli, Diego; Rabiti, Cristian; Alfonsi, Andrea

    2013-09-01

    This report summarizes the results of a NEAMS project focused on the use of reliability methods within the RAVEN and RELAP-7 software framework for assessing failure probabilities as part of probabilistic risk assessment for nuclear power plants. RAVEN is a software tool under development at the Idaho National Laboratory that acts as the control logic driver and post-processing tool for the newly developed thermal-hydraulic code RELAP-7. Dakota is a software tool developed at Sandia National Laboratories containing optimization, sensitivity analysis, and uncertainty quantification algorithms. Reliability methods are algorithms which transform the uncertainty problem into an optimization problem to solve for the failure probability, given uncertainty on problem inputs and a failure threshold on an output response. The goal of this work is to demonstrate the use of reliability methods in Dakota with RAVEN/RELAP-7. These capabilities are demonstrated on a Station Blackout analysis of a simplified Pressurized Water Reactor (PWR).

  5. Nonstandard Finite Difference Method Applied to a Linear Pharmacokinetics Model

    Directory of Open Access Journals (Sweden)

    Oluwaseun Egbelowo

    2017-05-01

    Full Text Available We extend the nonstandard finite difference method of solution to the study of pharmacokinetic-pharmacodynamic models. Pharmacokinetic (PK) models are commonly used to predict drug concentrations that drive controlled intravenous (I.V.) transfers (or infusions) and oral transfers, while pharmacokinetic-pharmacodynamic (PD) interaction models are used to predict the drug concentrations affecting the response to these clinical drugs. We construct a nonstandard finite difference (NSFD) scheme for the relevant system of equations that models this pharmacokinetic process and compare the results obtained to standard methods. The scheme is dynamically consistent and reliable in replicating complex dynamic properties of the relevant continuous models for varying step sizes. This study aids understanding of the long-term behavior of the drug in the system and validates the efficiency of the nonstandard finite difference scheme as the method of choice.
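
    A minimal NSFD illustration on a one-compartment elimination model dC/dt = -kC, much simpler than the PK/PD systems treated in the paper: replacing the step size h by the denominator function phi(h) = (1 - e^(-kh))/k reproduces the exact decay for any step size, while forward Euler loses positivity once kh > 1. All parameter values are invented.

    ```python
    import math

    k, C0, h, n = 0.8, 10.0, 1.5, 10   # elimination rate, dose, step, steps

    def step_euler(C):
        return C - h * k * C            # standard forward Euler

    def step_nsfd(C):
        phi = (1.0 - math.exp(-k * h)) / k   # nonstandard denominator
        return C - phi * k * C               # exact for this linear model

    Ce, Cn = C0, C0
    for i in range(1, n + 1):
        Ce, Cn = step_euler(Ce), step_nsfd(Cn)
        exact = C0 * math.exp(-k * h * i)
        print(f"t={h*i:4.1f}  euler={Ce:9.4f}  nsfd={Cn:7.4f}  exact={exact:7.4f}")
    # With k*h = 1.2, Euler oscillates in sign; NSFD tracks the exact decay.
    ```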

  6. Variance reduction methods applied to deep-penetration problems

    International Nuclear Information System (INIS)

    Cramer, S.N.

    1984-01-01

    All deep-penetration Monte Carlo calculations require variance reduction methods. Before beginning with a detailed approach to these methods, several general comments concerning deep-penetration calculations by Monte Carlo, the associated variance reduction, and the similarities and differences of these with regard to non-deep-penetration problems will be addressed. The experienced practitioner of Monte Carlo methods will easily find exceptions to any of these generalities, but it is felt that these comments will aid the novice in understanding some of the basic ideas and nomenclature. Also, from a practical point of view, the discussions and developments presented are oriented toward use of the computer codes which are presented in segments of this Monte Carlo course

  7. Robustness of Modal Parameter Estimation Methods Applied to Lightweight Structures

    DEFF Research Database (Denmark)

    Dickow, Kristoffer Ahrens; Kirkegaard, Poul Henning; Andersen, Lars Vabbersgaard

    2013-01-01

    of two parameter estimation methods built into the commercial modal testing software B&K Pulse Reflex Advanced Modal Analysis. The investigations are done by means of frequency response functions generated from a finite-element model and subjected to artificial noise before being analyzed with Pulse Reflex. The ability to handle closely spaced modes and broad frequency ranges is investigated for a numerical model of a lightweight junction under different signal-to-noise ratios. The selection of both excitation points and response points is discussed. It is found that both the Rational Fraction Polynomial-Z method

  8. Efficient electronic structure methods applied to metal nanoparticles

    DEFF Research Database (Denmark)

    Larsen, Ask Hjorth

    of efficient approaches to density functional theory and the application of these methods to metal nanoparticles. We describe the formalism and implementation of localized atom-centered basis sets within the projector augmented wave method. Basis sets allow for a dramatic increase in performance compared ... and jumps in Fermi level near magic numbers can lead to alkali-like or halogen-like behaviour when main-group atoms adsorb onto gold clusters. A non-self-consistent Newns-Anderson model is used to more closely study the chemisorption of main-group atoms on magic-number Au clusters. The behaviour at magic

  9. Applying the Priority Distribution Method for Employee Motivation

    Directory of Open Access Journals (Sweden)

    Jonas Žaptorius

    2013-09-01

    Full Text Available In an age of increasing healthcare expenditure, the efficiency of healthcare services is a burning issue. This paper deals with the creation of a performance-related remuneration system that would meet requirements for efficiency and sustainable quality. In real-world scenarios, it is difficult to create an objective and transparent employee performance evaluation model dealing with both qualitative and quantitative criteria. To achieve these goals, the use of decision support methods is suggested and analysed. A systematic approach to the practical application of the Priority Distribution Method to healthcare provider organisations is created and described.

  10. Non-perturbative methods applied to multiphoton ionization

    International Nuclear Information System (INIS)

    Brandi, H.S.; Davidovich, L.; Zagury, N.

    1982-09-01

    The use of non-perturbative methods in the treatment of atomic ionization is discussed. Particular attention is given to schemes of the type proposed by Keldysh, where multiphoton ionization and tunnel auto-ionization occur for high-intensity fields. These methods are shown to correspond to a certain type of expansion of the T-matrix in the intra-atomic potential; in this manner a criterion concerning the range of application of these non-perturbative schemes is suggested. A brief comparison between the ionization rates of atoms in the presence of linearly and circularly polarized light is presented. (Author)

  11. Design of a biomass-to-biorefinery logistics system through bio-inspired metaheuristic optimization considering multiple types of feedstocks

    Science.gov (United States)

    Trueba, Isidoro

    fossil fuels to biofuels. In many ways biomass is a unique renewable resource. It can be stored and transported relatively easily, in contrast to renewable options such as wind and solar, which create intermittent electrical power that requires immediate consumption and a connection to the grid. This thesis presents two different models for the design optimization of a biomass-to-biorefinery logistics system through bio-inspired metaheuristic optimization considering multiple types of feedstocks. This work compares the performance and solutions obtained by two types of metaheuristic approaches: genetic algorithms and ant colony optimization. Compared to rigorous mathematical optimization methods or iterative algorithms, metaheuristics do not guarantee that a global optimal solution can be found for some classes of problems. Problems with characteristics similar to the one presented in this thesis have previously been solved using linear programming, integer programming and mixed integer programming methods. However, depending on the type of problem, these mathematical or complete methods might need exponential computation time in the worst case, which often leads to computation times too high for practical purposes. This thesis therefore develops two types of metaheuristic approaches for the design optimization of a biomass-to-biorefinery logistics system considering multiple types of feedstocks and shows that metaheuristics are highly suitable for solving hard combinatorial optimization problems such as the one addressed in this research work.

  12. Tutte’s barycenter method applied to isotopies

    NARCIS (Netherlands)

    Colin de Verdière, Éric; Pocchiola, Michel; Vegter, Gert

    2003-01-01

    This paper is concerned with applications of Tutte’s barycentric embedding theorem. It presents a method for building isotopies of triangulations in the plane, based on Tutte’s theorem and the computation of equilibrium stresses of graphs by Maxwell–Cremona’s theorem; it also provides a
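
    The building block of such isotopy constructions, Tutte's barycentric embedding, reduces to a linear solve: pin the boundary cycle to a convex polygon and place every interior vertex at the average of its neighbours. A dense-matrix sketch on a toy wheel graph (the graph, and the choice of a circle for the boundary, are illustrative assumptions):

    ```python
    import numpy as np

    def tutte_embedding(n, edges, boundary):
        """n vertices, undirected edges; `boundary` is the vertex cycle
        pinned to a convex polygon (here, a unit circle)."""
        pos = np.zeros((n, 2))
        b = len(boundary)
        for i, v in enumerate(boundary):            # pin boundary vertices
            ang = 2 * np.pi * i / b
            pos[v] = np.cos(ang), np.sin(ang)
        inner = [v for v in range(n) if v not in boundary]
        idx = {v: i for i, v in enumerate(inner)}
        nbrs = {v: [] for v in range(n)}
        for u, v in edges:
            nbrs[u].append(v); nbrs[v].append(u)
        A = np.zeros((len(inner), len(inner)))
        rhs = np.zeros((len(inner), 2))
        for v in inner:                             # v at average of neighbours
            A[idx[v], idx[v]] = len(nbrs[v])
            for u in nbrs[v]:
                if u in idx:
                    A[idx[v], idx[u]] -= 1.0
                else:
                    rhs[idx[v]] += pos[u]
        pos[inner] = np.linalg.solve(A, rhs)        # barycentric equilibrium
        return pos

    # Wheel graph: vertex 4 inside the 4-cycle 0-1-2-3.
    edges = [(0, 1), (1, 2), (2, 3), (3, 0),
             (4, 0), (4, 1), (4, 2), (4, 3)]
    print(tutte_embedding(5, edges, boundary=[0, 1, 2, 3]))
    ```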

  13. Inversion method applied to the rotation curves of galaxies

    Science.gov (United States)

    Márquez-Caicedo, L. A.; Lora-Clavijo, F. D.; Sanabria-Gómez, J. D.

    2017-07-01

    We used simulated annealing, Monte Carlo and genetic algorithm methods to match numerical data of density and velocity profiles in some low surface brightness galaxies with the theoretical Boehmer-Harko, Navarro-Frenk-White and Pseudo Isothermal profile models for galaxies with dark matter halos. We found that the Navarro-Frenk-White model does not fit at all, in contrast with the other two models, which fit very well. Inversion methods have been widely used in various branches of science including astrophysics (Charbonneau 1995, ApJS, 101, 309). In this work we have used three different parametric inversion methods (Monte Carlo, genetic algorithm and simulated annealing) in order to determine the best fit of the observed density and velocity profiles of a set of low surface brightness galaxies (De Block et al. 2001, ApJ, 122, 2396) with three models of galaxies containing dark matter. The parameters adjusted by the inversion methods were the central density and a characteristic distance in the Boehmer-Harko BH (Boehmer & Harko 2007, JCAP, 6, 25), Navarro-Frenk-White NFW (Navarro et al. 2007, ApJ, 490, 493) and Pseudo Isothermal Profile PI (Robles & Matos 2012, MNRAS, 422, 282) models. The results obtained showed that the BH and PI profile dark matter galaxies fit very well both the density and the velocity profiles; in contrast, the NFW model did not produce good fits to the profiles in any analysed galaxy.

  14. E-LEARNING METHOD APPLIED TO TECHNICAL GRAPHICS SUBJECTS

    Directory of Open Access Journals (Sweden)

    GOANTA Adrian Mihai

    2011-11-01

    Full Text Available The paper presents some of the author's endeavors in creating video courses for the students of the Faculty of Engineering in Braila on subjects involving technical graphics. The steps taken in completing the method, and the way feedback is obtained on the rate of access to these types of courses by the students, are also mentioned.

  15. Some methods of computational geometry applied to computer graphics

    NARCIS (Netherlands)

    Overmars, M.H.; Edelsbrunner, H.; Seidel, R.

    1984-01-01

    Windowing a two-dimensional picture means determining those line segments of the picture that are visible through an axis-parallel window. A study of some algorithmic problems involved in windowing a picture is offered. Some methods from computational geometry are exploited to store the

  16. [Synchrotron-based characterization methods applied to ancient materials (I)].

    Science.gov (United States)

    Anheim, Étienne; Thoury, Mathieu; Bertrand, Loïc

    2015-12-01

    This article aims at presenting the first results of a transdisciplinary research programme in heritage sciences. Based on the growing use and on the potentialities of micro- and nano-characterization synchrotron-based methods to study ancient materials (archaeology, palaeontology, cultural heritage, past environments), this contribution will identify and test conceptual and methodological elements of convergence between physicochemical and historical sciences.

  17. About the Finite Element Method Applied to Thick Plates

    Directory of Open Access Journals (Sweden)

    Mihaela Ibănescu

    2006-01-01

    Full Text Available The present paper addresses plates subjected to transverse loads, when the shear force and the actual boundary conditions are considered, by using the Finite Element Method. Isoparametric finite elements offer real facilities in formulating the problems and great possibilities in creating adequate computer programs.

  18. The harmonics detection method based on neural network applied ...

    African Journals Online (AJOL)


    with MATLAB Simulink Power System Toolbox. The simulation results of this novel technique, compared to other similar methods, are found quite satisfactory, assuring good filtering characteristics and high system stability. Keywords: Artificial Neural Networks (ANN), p-q theory, (SAPF), Harmonics, Total Harmonic ...

  19. Data Mining Methods Applied to Flight Operations Quality Assurance Data: A Comparison to Standard Statistical Methods

    Science.gov (United States)

    Stolzer, Alan J.; Halford, Carl

    2007-01-01

    In a previous study, multiple regression techniques were applied to Flight Operations Quality Assurance-derived data to develop parsimonious models for fuel consumption on the Boeing 757 airplane. The present study examined several data mining algorithms, including neural networks, on the fuel consumption problem and compared them to the multiple regression results obtained earlier. Using regression methods, parsimonious models were obtained that explained approximately 85% of the variation in fuel flow. In general, data mining methods were more effective in predicting fuel consumption. Classification and Regression Tree methods reported correlation coefficients of .91 to .92, and General Linear Models and Multilayer Perceptron neural networks reported correlation coefficients of about .99. These data mining models show great promise for use in further examining large FOQA databases for operational and safety improvements.

  20. Theoretical and applied aerodynamics and related numerical methods

    CERN Document Server

    Chattot, J J

    2015-01-01

    This book covers classical and modern aerodynamics, theories and related numerical methods, for senior and first-year graduate engineering students, including: -The classical potential (incompressible) flow theories for low speed aerodynamics of thin airfoils and high and low aspect ratio wings. - The linearized theories for compressible subsonic and supersonic aerodynamics. - The nonlinear transonic small disturbance potential flow theory, including supercritical wing sections, the extended transonic area rule with lift effect, transonic lifting line and swept or oblique wings to minimize wave drag. Unsteady flow is also briefly discussed. Numerical simulations based on relaxation mixed-finite difference methods are presented and explained. - Boundary layer theory for all Mach number regimes and viscous/inviscid interaction procedures used in practical aerodynamics calculations. There are also four chapters covering special topics, including wind turbines and propellers, airplane design, flow analogies and h...

  1. Generic Methods for Formalising Sequent Calculi Applied to Provability Logic

    Science.gov (United States)

    Dawson, Jeremy E.; Goré, Rajeev

    We describe generic methods for reasoning about multiset-based sequent calculi which allow us to combine shallow and deep embeddings as desired. Our methods are modular, permit explicit structural rules, and are widely applicable to many sequent systems, even to other styles of calculi like natural deduction and term rewriting systems. We describe new axiomatic type classes which enable simplification of multiset or sequent expressions using existing algebraic manipulation facilities. We demonstrate the benefits of our combined approach by formalising in Isabelle/HOL a variant of a recent, non-trivial, pen-and-paper proof of cut-admissibility for the provability logic GL, where we abstract a large part of the proof in a way which is immediately applicable to other calculi. Our work also provides a machine-checked proof to settle the controversy surrounding the proof of cut-admissibility for GL.

  2. Applying probabilistic methods for assessments and calculations for accident prevention

    International Nuclear Information System (INIS)

    Anon.

    1984-01-01

    The guidelines for the prevention of accidents require plant-design-specific and radioecological calculations to be made in order to show that maximum acceptable exposure values will not be exceeded in case of an accident. For this purpose, the main parameters affecting the accident scenario have to be determined by probabilistic methods. This offers the advantage that parameters can be quantified on the basis of unambiguous and realistic criteria, and final results can be defined in terms of their conservativity. (DG)

  3. The transfer matrix method applied to steel sheet pile walls

    Science.gov (United States)

    Kort, D. A.

    2003-05-01

    This paper proposes two subgrade reaction models for the analysis of steel sheet pile walls based on the transfer matrix method. In the first model a plastic hinge is generated when the maximum moment in the retaining structure is exceeded. The second model deals with a beam with an asymmetrical cross-section that can bend in two directions. In the first part of this paper the transfer matrix method is explained on the basis of a simple example, and the development of two computer models, Plaswall and Skewwall, is described. The second part of this paper deals with applications of both models. In the application of Plaswall, the effect of four current earth pressure theories on the subgrade reaction method is compared to a finite element calculation. It is shown that the choice of earth pressure theory has a major influence on the calculated result for a sheet pile wall, both with and without a plastic hinge. In the application of Skewwall, the effectiveness of structural measures to reduce oblique bending is investigated, and the results are compared to a 3D finite element calculation. It is shown that with simple structural measures the loss of structural resistance due to oblique bending can be reduced.
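
    The core of the transfer matrix method is chaining segment ('field') matrices that propagate a beam state vector along the wall; soil springs and plastic hinges enter as additional point matrices. The sketch below shows one common Euler-Bernoulli field matrix, for the state [w, theta, M, V] with M = EI*w'' and V = M' (sign conventions vary between texts), and checks it against the textbook cantilever tip deflection; all numbers are illustrative.

    ```python
    import numpy as np

    def field_matrix(L, EI):
        """Propagate [w, theta, M, V] across an unloaded beam segment."""
        return np.array([[1, L, L**2 / (2*EI), L**3 / (6*EI)],
                         [0, 1, L / EI,        L**2 / (2*EI)],
                         [0, 0, 1,             L],
                         [0, 0, 0,             1]])

    # Sanity check: cantilever of length L with tip force P.
    L, EI, P = 3.0, 5.0e4, 10.0e3
    U = field_matrix(L, EI)
    # Fixed end: w = 0, theta = 0; statics give M0 = -P*L and V0 = P.
    s0 = np.array([0.0, 0.0, -P * L, P])
    sL = U @ s0
    print("tip deflection:", sL[0], " reference -PL^3/3EI:",
          -P * L**3 / (3 * EI))   # the two values agree
    ```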

  4. Applying Hierarchical Task Analysis Method to Discovery Layer Evaluation

    Directory of Open Access Journals (Sweden)

    Marlen Promann

    2015-03-01

    Full Text Available Libraries are implementing discovery layers to offer better user experiences. While usability tests have been helpful in evaluating the success or failure of implementing discovery layers in the library context, the focus has remained on their relative interface benefits over traditional federated search. The informal, site- and context-specific usability tests have offered little to test the rigor of discovery layers against the user goals, motivations and workflows they have been designed to support. This study proposes hierarchical task analysis (HTA) as an important complementary evaluation method to usability testing of discovery layers. Relevant literature is reviewed for discovery layers and the HTA method. As no previous application of HTA to the evaluation of discovery layers was found, this paper presents the application of HTA as an expert-based and workflow-centered (e.g. retrieving a relevant book or journal article) method for evaluating discovery layers. Purdue University's Primo by Ex Libris was used to map eleven use cases as HTA charts. Nielsen's Goal Composition theory was used as an analytical framework to evaluate the goal charts from two perspectives: (a) users' physical interactions (i.e. clicks), and (b) users' cognitive steps (i.e. decision points for what to do next). A brief comparison of HTA and usability test findings is offered as a way of conclusion.

  5. System and method of applying energetic ions for sterilization

    Science.gov (United States)

    Schmidt, John A.

    2003-12-23

    A method of sterilization of a container is provided whereby a cold plasma is caused to be disposed near a surface to be sterilized, and the cold plasma is then subjected to a pulsed voltage differential for producing energized ions in the plasma. Those energized ions then operate to achieve spore destruction on the surface to be sterilized. Further, a system for sterilization of a container which includes a conductive or non-conductive container, a cold plasma in proximity to the container, and a high voltage source for delivering a pulsed voltage differential between an electrode and the container and across the cold plasma, is provided.

  6. Optimizing the warranty period by cuckoo meta-heuristic algorithm in heterogeneous customers' population

    Science.gov (United States)

    Roozitalab, Ali; Asgharizadeh, Ezzatollah

    2013-12-01

    Warranty is now an integral part of each product. Since its length is directly related to the cost of production, it should be set in such a way as to maximize revenue generation and customers' satisfaction. Furthermore, based on customer behavior, it is assumed that increasing the warranty period to earn the trust of more customers leads to more sales until the market is saturated. We should bear in mind that different groups of consumers have different consumption behaviors and that the performance of the product has a direct impact on the failure rate over its life. Therefore, the optimum duration for every group is different. In practice, however, we cannot offer different warranty periods to different customer groups. Consequently, using the cuckoo meta-heuristic optimization algorithm, we try to find a common period for the entire population. Results with high convergence offer a term length that maximizes the aforementioned goals simultaneously. The approach was tested using real data from an appliance company. The results indicate a significant increase in sales when the optimization approach was applied; offering a longer warranty increased revenue from sales, not reducing profit margins but increasing them.
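
    For readers unfamiliar with the algorithm, a minimal cuckoo search sketch follows. The revenue function is a hypothetical stand-in for the paper's sales/cost model, and the step size, nest count and abandonment rate are assumed values:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def revenue(w):
        """Hypothetical objective: revenue as a function of warranty length w.
        Stands in for the paper's sales/cost model, which is not given here."""
        return -((w - 2.4) ** 2) + 5.0   # toy concave profile, optimum near 2.4 years

    def levy_step(beta=1.5):
        """Mantegna's algorithm for a Levy-distributed step."""
        from math import gamma, sin, pi
        sigma = (gamma(1 + beta) * sin(pi * beta / 2) /
                 (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
        u, v = rng.normal(0, sigma), rng.normal(0, 1)
        return u / abs(v) ** (1 / beta)

    def cuckoo_search(n_nests=15, n_iter=200, pa=0.25, lo=0.5, hi=5.0):
        nests = rng.uniform(lo, hi, n_nests)            # candidate warranty lengths
        fit = np.array([revenue(w) for w in nests])
        for _ in range(n_iter):
            # Generate a new solution by a Levy flight from a random nest.
            i = rng.integers(n_nests)
            new = np.clip(nests[i] + 0.1 * levy_step(), lo, hi)
            j = rng.integers(n_nests)
            if revenue(new) > fit[j]:                   # replace a worse nest
                nests[j], fit[j] = new, revenue(new)
            # Abandon a fraction pa of the worst nests.
            worst = fit.argsort()[: int(pa * n_nests)]
            nests[worst] = rng.uniform(lo, hi, worst.size)
            fit[worst] = [revenue(w) for w in nests[worst]]
        best = fit.argmax()
        return nests[best], fit[best]

    print(cuckoo_search())   # common warranty period for the whole population
    ```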

  7. Interesting Developments in Testing Methods Applied to Foundation Piles

    Science.gov (United States)

    Sobala, Dariusz; Tkaczyński, Grzegorz

    2017-10-01

    Both piling technologies and pile testing methods are subjects of ongoing development. New technologies, providing larger diameters or using in-situ materials, are very demanding in terms of the quality of execution of the works. That concerns the material quality and continuity, which define the integral strength of the pile. On the other side there is the capacity of the ground around the pile and its ability to carry the loads transferred by the shaft and the pile base. The inhomogeneous nature of soils and the relatively small number of tested piles demand a very good understanding of a small number of results. In some special cases the capacity test itself forms an important cost in the piling contract. This work presents a brief description of selected testing methods and the authors' remarks based on cooperation with universities constantly developing new ideas. The paper presents some experience-based remarks on integrity testing by means of low-energy impact (low strain) and introduces selected (Polish) developments in the field of testing closed-end pipe piles based on bi-directional loading, similar to Osterberg's idea but without a sacrificial hydraulic jack. Such a test is suitable especially when steel piles are used for temporary support in rivers, where constructing a conventional testing appliance with anchor piles or kentledge meets technical problems. According to the authors' experience, such tests have not yet been used on a building site, but they offer real potential, especially when displacement control can be provided from the river bank using surveying techniques.

  8. Benchmark Comparison of Cloud Analytics Methods Applied to Earth Observations

    Science.gov (United States)

    Lynnes, Chris; Little, Mike; Huang, Thomas; Jacob, Joseph; Yang, Phil; Kuo, Kwo-Sen

    2016-01-01

    Cloud computing has the potential to bring high performance computing capabilities to the average science researcher. However, in order to take full advantage of cloud capabilities, the science data used in the analysis must often be reorganized. This typically involves sharding the data across multiple nodes to enable relatively fine-grained parallelism. This can be either via cloud-based file systems or cloud-enabled databases such as Cassandra, Rasdaman or SciDB. Since storing an extra copy of data leads to increased cost and data management complexity, NASA is interested in determining the benefits and costs of various cloud analytics methods for real Earth Observation cases. Accordingly, NASA's Earth Science Technology Office and Earth Science Data and Information Systems project have teamed with cloud analytics practitioners to run a benchmark comparison on cloud analytics methods using the same input data and analysis algorithms. We have particularly looked at analysis algorithms that work over long time series, because these are particularly intractable for many Earth Observation datasets which typically store data with one or just a few time steps per file. This post will present side-by-side cost and performance results for several common Earth observation analysis operations.

  9. Artificial Intelligence Methods Applied to Parameter Detection of Atrial Fibrillation

    Science.gov (United States)

    Arotaritei, D.; Rotariu, C.

    2015-09-01

    In this paper we present a novel method to detect atrial fibrillation (AF) based on statistical descriptors and a hybrid neuro-fuzzy and crisp system. The inference of the system produces if-then-else rules that are extracted to construct a binary decision system: normal or atrial fibrillation. We use TPR (Turning Point Ratio), SE (Shannon Entropy) and RMSSD (Root Mean Square of Successive Differences), along with a new descriptor, Teager-Kaiser energy, in order to improve the accuracy of detection. The descriptors are calculated over a sliding window, which produces a very large number of vectors (a massive dataset) used by the classifier. The window length is a crisp descriptor, while the remaining descriptors are interval-valued. The parameters of the hybrid system are adapted using a genetic algorithm (GA) with a single-objective fitness target: the highest values of sensitivity and specificity. The rules are extracted and form part of the decision system. The proposed method was tested using the PhysioNet MIT-BIH Atrial Fibrillation Database, and the experimental results revealed a good accuracy of AF detection in terms of sensitivity and specificity (above 90%).
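
    Two of the descriptors named above are straightforward to compute over a sliding window of RR intervals. A minimal sketch, with the window length and histogram bin count as assumed values (TPR and the Teager-Kaiser energy are omitted):

    ```python
    import numpy as np

    def rmssd(rr):
        """Root mean square of successive RR-interval differences."""
        d = np.diff(rr)
        return np.sqrt(np.mean(d ** 2))

    def shannon_entropy(rr, bins=16):
        """Shannon entropy of the RR-interval histogram."""
        p, _ = np.histogram(rr, bins=bins)
        p = p[p > 0] / p.sum()
        return -np.sum(p * np.log2(p))

    def sliding_descriptors(rr, win=64, step=1):
        """Descriptor vectors over a sliding window of RR intervals."""
        out = []
        for i in range(0, len(rr) - win + 1, step):
            w = rr[i:i + win]
            out.append((rmssd(w), shannon_entropy(w)))
        return np.array(out)

    rr = np.random.default_rng(1).normal(0.8, 0.05, 500)  # synthetic RR intervals [s]
    print(sliding_descriptors(rr)[:3])
    ```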

  10. Modern analytic methods applied to the art and archaeology

    International Nuclear Information System (INIS)

    Tenorio C, M. D.; Longoria G, L. C.

    2010-01-01

    The interaction of diverse areas such as analytical chemistry, art history and archaeology has allowed the development of a variety of techniques used in archaeology and in conservation and restoration. These methods have been used to date objects, to determine the origin of old materials, to reconstruct their use, and to identify the degradation processes that affect the integrity of works of art. The objective of this chapter is to offer a general vision of the research that has been carried out at the Instituto Nacional de Investigaciones Nucleares (ININ) in the field of cultural goods. A series of research projects carried out in collaboration with national and foreign investigators is briefly described, together with the great support of undergraduate and master's students in archaeology from the National School of Anthropology and History, since one of the goals is to spread knowledge of the existence of these techniques among young archaeologists, so that they have a wider vision of what they could use in the immediate future and can test hypotheses with scientific methods. (Author)

  11. Benchmark Comparison of Cloud Analytics Methods Applied to Earth Observations

    Science.gov (United States)

    Lynnes, C.; Little, M. M.; Huang, T.; Jacob, J. C.; Yang, C. P.; Kuo, K. S.

    2016-12-01

    Cloud computing has the potential to bring high performance computing capabilities to the average science researcher. However, in order to take full advantage of cloud capabilities, the science data used in the analysis must often be reorganized. This typically involves sharding the data across multiple nodes to enable relatively fine-grained parallelism. This can be either via cloud-based filesystems or cloud-enabled databases such as Cassandra, Rasdaman or SciDB. Since storing an extra copy of data leads to increased cost and data management complexity, NASA is interested in determining the benefits and costs of various cloud analytics methods for real Earth Observation cases. Accordingly, NASA's Earth Science Technology Office and Earth Science Data and Information Systems project have teamed with cloud analytics practitioners to run a benchmark comparison on cloud analytics methods using the same input data and analysis algorithms. We have particularly looked at analysis algorithms that work over long time series, because these are particularly intractable for many Earth Observation datasets which typically store data with one or just a few time steps per file. This post will present side-by-side cost and performance results for several common Earth observation analysis operations.

  12. The Movable Type Method Applied to Protein-Ligand Binding.

    Science.gov (United States)

    Zheng, Zheng; Ucisik, Melek N; Merz, Kenneth M

    2013-12-10

    Accurately computing the free energy for biological processes like protein folding or protein-ligand association remains a challenging problem. Both describing the complex intermolecular forces involved and sampling the requisite configuration space make understanding these processes innately difficult. Herein, we address the sampling problem using a novel methodology we term "movable type". Conceptually it can be understood by analogy with the evolution of printing and, hence, the name movable type. For example, a common approach to the study of protein-ligand complexation involves taking a database of intact drug-like molecules and exhaustively docking them into a binding pocket. This is reminiscent of early woodblock printing, where each page had to be laboriously created prior to printing a book. However, printing evolved to an approach where a database of symbols (letters, numerals, etc.) was created and then assembled using a movable type system, which allowed for the creation of all possible combinations of symbols on a given page, thereby revolutionizing the dissemination of knowledge. Our movable type (MT) method involves the identification of all atom pairs seen in protein-ligand complexes and then creating two databases: one with their associated pairwise distance-dependent energies and another associated with the probability of how these pairs can combine in terms of bonds, angles, dihedrals and non-bonded interactions. Combining these two databases coupled with the principles of statistical mechanics allows us to accurately estimate binding free energies as well as the pose of a ligand in a receptor. This method, by its mathematical construction, samples all of the configuration space of a selected region (the protein active site here) in one shot without resorting to brute-force sampling schemes involving Monte Carlo, genetic algorithms or molecular dynamics simulations, making the methodology extremely efficient. Importantly, this method explores the free
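
    The core idea of combining a pairwise energy database with pair-geometry probabilities under Boltzmann statistics can be illustrated on a single toy atom-pair type. The energy curve and probability profile below are invented for illustration and are not the MT method's actual databases:

    ```python
    import numpy as np

    kT = 0.593  # kcal/mol at ~298 K

    # Hypothetical databases: for one atom-pair type, energy as a function of
    # distance, and the probability that geometry sampling visits each distance.
    r = np.linspace(2.5, 6.0, 36)                      # distance grid [Angstrom]
    energy = 4.0 * ((3.4 / r) ** 12 - (3.4 / r) ** 6)  # toy Lennard-Jones energies
    geom_prob = np.exp(-((r - 4.0) ** 2) / 0.5)        # toy geometry probabilities
    geom_prob /= geom_prob.sum()

    # Combining the two databases: a Boltzmann-weighted sum over all tabulated
    # distances gives a partition-function-like quantity, hence a free energy.
    Z = np.sum(geom_prob * np.exp(-energy / kT))
    F = -kT * np.log(Z)
    print(f"pair contribution to the free energy: {F:.3f} kcal/mol")
    ```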

  13. Applied statistical methods in agriculture, health and life sciences

    CERN Document Server

    Lawal, Bayo

    2014-01-01

    This textbook teaches crucial statistical methods to answer research questions using a unique range of statistical software programs, including MINITAB and R. The textbook is developed for undergraduate students in agriculture, nursing, biology and biomedical research. Graduate students will also find it to be a useful way to refresh their statistics skills and to reference software options. The unique combination of examples is approached using MINITAB and R for their individual strengths. Subjects covered include, among others, data description, probability distributions, experimental design, regression analysis, randomized design and biological assay. Unlike other biostatistics textbooks, this text also covers outliers, influential observations in regression and an introduction to survival analysis. Material is taken from the author's extensive teaching and research in Africa, the USA and the UK. Sample problems, references and electronic supplementary material accompany each chapter.

  14. Applying Simulation Method in Formulation of Gluten-Free Cookies

    Directory of Open Access Journals (Sweden)

    Nikitina Marina

    2017-01-01

    Full Text Available At present, a priority direction in the development of new food products is the development of technologies for special-purpose products. Such products include gluten-free confectionery, intended for people with celiac disease. Gluten-free products are in demand among consumers; the assortment needs to be expanded and quality indicators improved. This article presents the results of studies on the development of pastry products based on amaranth flour, which does not contain gluten. The study is based on a method of simulating recipes for functional gluten-free confectionery in order to optimize their chemical composition. The resulting products will diversify the diet of people with gluten intolerance, as well as of those who follow a gluten-free diet, and supplement it with the necessary nutrients.

  15. Applying Human-Centered Design Methods to Scientific Communication Products

    Science.gov (United States)

    Burkett, E. R.; Jayanty, N. K.; DeGroot, R. M.

    2016-12-01

    Knowing your users is a critical part of developing anything to be used or experienced by a human being. User interviews, journey maps, and personas are all techniques commonly employed in human-centered design practices because they have proven effective for informing the design of products and services that meet the needs of users. Many non-designers are unaware of the usefulness of personas and journey maps. Scientists who are interested in developing more effective products and communication can adopt and employ user-centered design approaches to better reach intended audiences. Journey mapping is a qualitative data-collection method that captures the story of a user's experience over time as related to the situation or product that requires development or improvement. Journey maps help define user expectations, where they are coming from, what they want to achieve, what questions they have, their challenges, and the gaps and opportunities that can be addressed by designing for them. A persona is a tool used to describe the goals and behavioral patterns of a subset of potential users or customers. The persona is a qualitative data model that takes the form of a character profile, built upon data about the behaviors and needs of multiple users. Gathering data directly from users avoids the risk of basing models on assumptions, which are often limited by misconceptions or gaps in understanding. Journey maps and user interviews together provide the data necessary to build the composite character that is the persona. Because a persona models the behaviors and needs of the target audience, it can then be used to make informed product design decisions. We share the methods and advantages of developing and using personas and journey maps to create more effective science communication products.

  16. Hybrid Metaheuristic Approach for Nonlocal Optimization of Molecular Systems.

    Science.gov (United States)

    Dresselhaus, Thomas; Yang, Jack; Kumbhar, Sadhana; Waller, Mark P

    2013-04-09

    Accurate modeling of molecular systems requires a good knowledge of the structure; therefore, conformation searching/optimization is a routine necessity in computational chemistry. Here we present a hybrid metaheuristic optimization (HMO) algorithm, which combines ant colony optimization (ACO) and particle swarm optimization (PSO) for the optimization of molecular systems. The HMO implementation meta-optimizes the parameters of the ACO algorithm on-the-fly via the coupled PSO algorithm. The ACO parameters were optimized on a set of small difluorinated polyenes, where the parameters exhibited small variance as the size of the molecule increased. The HMO algorithm was validated by searching for the closed form of around 100 molecular balances. Compared to the gradient-based optimized molecular balance structures, the HMO algorithm was able to find low-energy conformations with an 87% success rate. Finally, the computational effort for generating low-energy conformation(s) for the phenylalanyl-glycyl-glycine tripeptide was approximately 60 CPU hours with the ACO algorithm, in comparison to 4 CPU years required for an exhaustive brute-force calculation.
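
    A minimal sketch of the coupling described above follows: PSO particles encode ACO parameters, and each particle is scored by running a short ACO trial. A toy travelling-salesman instance stands in for the conformational search, and all population sizes and coefficients are assumed values:

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    n = 12
    pts = rng.random((n, 2))
    D = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1) + np.eye(n)

    def aco_tour_length(alpha, beta, rho, n_ants=8, n_iter=30):
        """Tiny ant colony optimizer for a toy TSP; returns best tour length."""
        tau, eta, best = np.ones((n, n)), 1.0 / D, np.inf
        for _ in range(n_iter):
            for _ in range(n_ants):
                tour = [0]
                while len(tour) < n:
                    w = (tau[tour[-1]] ** alpha) * (eta[tour[-1]] ** beta)
                    w[tour] = 0.0                       # exclude visited cities
                    tour.append(int(rng.choice(n, p=w / w.sum())))
                length = sum(D[tour[k], tour[(k + 1) % n]] for k in range(n))
                best = min(best, length)
                tau[tour[:-1], tour[1:]] += 1.0 / length   # pheromone deposit
            tau *= (1.0 - rho)                             # evaporation
        return best

    # PSO meta-optimizes (alpha, beta, rho) by running short ACO trials.
    n_part = 6
    lo, hi = np.array([0.1, 0.1, 0.01]), np.array([3.0, 5.0, 0.9])
    x = rng.uniform(lo, hi, (n_part, 3))
    v = np.zeros_like(x)
    pbest, pval = x.copy(), np.array([aco_tour_length(*p) for p in x])
    g = pbest[pval.argmin()]
    for _ in range(10):
        v = (0.7 * v + 1.5 * rng.random((n_part, 3)) * (pbest - x)
                     + 1.5 * rng.random((n_part, 3)) * (g - x))
        x = np.clip(x + v, lo, hi)
        val = np.array([aco_tour_length(*p) for p in x])
        improved = val < pval
        pbest[improved], pval[improved] = x[improved], val[improved]
        g = pbest[pval.argmin()]
    print("best ACO parameters (alpha, beta, rho):", g)
    ```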

  17. A new hybrid metaheuristic algorithm for wind farm micrositing

    International Nuclear Information System (INIS)

    Massan, S.U.R.; Wagan, A.I.; Shaikh, M.M.

    2017-01-01

    This work proposes a new algorithm, referred to as HMA (Hybrid Metaheuristic Algorithm), for the solution of the WTO (Wind Turbine Optimization) problem. It is well documented that turbines located behind one another suffer a power loss due to the obstruction of the wind, known as wake loss. This wake loss should be reduced through the effective placement of turbines, here using the new HMA. The HMA is derived from two basic algorithms, the DEA (Differential Evolution Algorithm) and the FA (Firefly Algorithm). Optimization is carried out on the N.O. Jensen wake model. The blending of DEA and FA into HMA is discussed, and the new algorithm is implemented to maximize power and minimize cost in a WTO problem. The results obtained by HMA have been compared with the GA (Genetic Algorithm) used in some previous studies. The total power produced and the cost per unit turbine calculated for a wind farm using HMA, and their comparison with past approaches using single algorithms, show a significant advantage of the HMA over single algorithms. The first-time implementation of a new algorithm obtained by blending two single algorithms is a significant step towards understanding the behavior of such algorithms and the added advantages of using them together. (author)
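
    The N.O. Jensen (Park) wake model on which the optimization operates gives the fractional velocity deficit downstream of a turbine in closed form. A minimal sketch, with the thrust coefficient and wake decay constant as assumed typical values:

    ```python
    import numpy as np

    def jensen_deficit(x, r0, Ct=0.8, k=0.075):
        """Fractional velocity deficit a distance x downstream of a turbine
        with rotor radius r0 (N.O. Jensen / Park wake model).

        deficit = (1 - sqrt(1 - Ct)) / (1 + k*x/r0)^2
        Ct: thrust coefficient, k: wake decay constant (typical onshore ~0.075).
        """
        return (1.0 - np.sqrt(1.0 - Ct)) / (1.0 + k * x / r0) ** 2

    u0, r0 = 12.0, 40.0                  # free-stream wind [m/s], rotor radius [m]
    for x in (200.0, 500.0, 1000.0):     # downstream spacings [m], assumed
        u = u0 * (1.0 - jensen_deficit(x, r0))
        print(f"x = {x:6.0f} m -> wind speed in wake = {u:.2f} m/s")
    ```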

  18. Simplified Methods Applied to Nonlinear Motion of Spar Platforms

    Energy Technology Data Exchange (ETDEWEB)

    Haslum, Herbjoern Alf

    2000-07-01

    Simplified methods for predicting the motion response of spar platforms are presented. The methods are based on first- and second-order potential theory. Nonlinear drag loads and the effect of the pumping motion in a moon-pool are also considered. Large-amplitude pitch motions coupled to extreme-amplitude heave motions may arise when spar platforms are exposed to long-period swell. The phenomenon is investigated theoretically and explained as a Mathieu instability. It is caused by nonlinear coupling effects between heave, surge, and pitch. It is shown that for a critical wave period, the envelope of the heave motion makes the pitch motion unstable. For the same wave period, a higher-order pitch/heave coupling excites resonant heave response. This mutual interaction largely amplifies both the pitch and the heave response. As a result, the pitch/heave instability revealed in this work is more critical than the previously well-known Mathieu instability in pitch, which occurs if the wave period (or the natural heave period) is half the natural pitch period. The Mathieu instability is demonstrated both by numerical simulations with a newly developed calculation tool and in model experiments. In order to learn more about the conditions for this instability to occur, and also how it may be controlled, different damping configurations (heave damping disks and pitch/surge damping fins) are evaluated both in model experiments and by numerical simulations. With increased drag damping, larger wave amplitudes and more time are needed to trigger the instability. The pitch/heave instability is a low-probability-of-occurrence phenomenon. Extreme wave periods are needed for the instability to be triggered, about 20 seconds for a typical 200 m draft spar. However, it may be important to consider the phenomenon in design since the pitch/heave instability is very critical. It is also seen that when classical spar platforms (constant cylindrical cross section and about 200m draft

  19. Perturbation Method of Analysis Applied to Substitution Measurements of Buckling

    Energy Technology Data Exchange (ETDEWEB)

    Persson, Rolf

    1966-11-15

    Calculations with two-group perturbation theory on substitution experiments with homogenized regions show that a condensation of the results into a one-group formula is possible, provided that a transition region is introduced in a proper way. In heterogeneous cores the transition region comes in as a consequence of a new cell concept. By making use of progressive substitutions the properties of the transition region can be regarded as fitting parameters in the evaluation procedure. The thickness of the region is approximately equal to the sum of 1/(1/τ + 1/L²)^(1/2) for the test and reference regions. Consequently a region where L² >> τ, e.g. D₂O, contributes with √τ to the thickness. In cores where τ >> L², e.g. H₂O assemblies, the thickness of the transition region is determined by L. Experiments on rod lattices in D₂O and on test regions of D₂O alone (where B² = -1/L²) are analysed. The lattice measurements, where the pitches differed by a factor of √2, gave excellent results, whereas the determination of the diffusion length in D₂O by this method was not quite successful. Even regions containing only one test element can be used in a meaningful way in the analysis.

  20. Variational methods applied to problems of diffusion and reaction

    CERN Document Server

    Strieder, William

    1973-01-01

    This monograph is an account of some problems involving diffusion or diffusion with simultaneous reaction that can be illuminated by the use of variational principles. It was written during a period that included sabbatical leaves of one of us (W.S.) at the University of Minnesota and the other (R.A.) at the University of Cambridge, and we are grateful to the Petroleum Research Fund for helping to support the former and the Guggenheim Foundation for making possible the latter. We would also like to thank Stephen Prager for getting us together in the first place and for showing how interesting and useful these methods can be. We have also benefitted from correspondence with Dr. A. M. Arthurs of the University of York and from the counsel of Dr. B. D. Coleman, the general editor of this series. Table of contents (excerpt): Chapter 1, Introduction and Preliminaries: 1.1 General Survey; 1.2 Phenomenological Descriptions of Diffusion and Reaction; 1.3 Correlation Functions for Random Suspensions; 1.4 Mean Free ...

  1. Nondestructive methods of analysis applied to oriental swords

    Directory of Open Access Journals (Sweden)

    Edge, David

    2015-12-01

    Full Text Available Various neutron techniques were employed at the Budapest Nuclear Centre in an attempt to find the most useful method for analysing the high-carbon steels found in Oriental arms and armour, such as those in the Wallace Collection, London. Neutron diffraction was found to be the most useful in terms of identifying such steels and also indicating the presence of hidden patterns.

  2. Flood susceptibility mapping using novel ensembles of adaptive neuro fuzzy inference system and metaheuristic algorithms.

    Science.gov (United States)

    Razavi Termeh, Seyed Vahid; Kornejady, Aiding; Pourghasemi, Hamid Reza; Keesstra, Saskia

    2018-02-15

    Flood is one of the most destructive natural disasters, causing great financial and life losses each year. Therefore, producing susceptibility maps for flood management is necessary in order to reduce its harmful effects. The aim of the present study is to map flood hazard over the Jahrom Township in Fars Province using a combination of adaptive neuro-fuzzy inference systems (ANFIS) with different metaheuristic algorithms, namely ant colony optimization (ACO), genetic algorithm (GA), and particle swarm optimization (PSO), and to compare their accuracy. A total of 53 flood locations were identified, 35 of which were randomly selected to model flood susceptibility, while the remaining 16 locations were used to validate the models. Learning vector quantization (LVQ), one of the supervised neural network methods, was employed to estimate factor importance. Nine flood conditioning factors, namely slope degree, plan curvature, altitude, topographic wetness index (TWI), stream power index (SPI), distance from river, land use/land cover, rainfall, and lithology, were selected and the corresponding maps were prepared in ArcGIS. The frequency ratio (FR) model was used to assign weights to each class within each controlling factor; the weights were then transferred into MATLAB software for further analyses and for combination with the metaheuristic models. The ANFIS-PSO was found to be the most practical model in terms of producing a highly focused flood susceptibility map, with a smaller spatial spread of the highly susceptible classes. The chi-square result attests to the same, as ANFIS-PSO had the highest spatial differentiation between flood susceptibility classes over the study area. The area under the curve (AUC) obtained from the ROC curve indicated accuracies of 91.4%, 91.8%, 92.6% and 94.5% for the respective models of FR, ANFIS-ACO, ANFIS-GA, and ANFIS-PSO ensembles. So, the ensemble of ANFIS-PSO was introduced as the
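
    The frequency ratio weighting mentioned above is simple to reproduce: for each class of a conditioning factor, FR is the share of flood pixels falling in the class divided by the share of all pixels falling in the class. A minimal sketch on a toy raster (the class raster and flood inventory are synthetic):

    ```python
    import numpy as np

    def frequency_ratio(factor_class, flood_mask):
        """Frequency ratio per class of a conditioning factor.

        FR(class) = (% of flood pixels in the class) / (% of all pixels in the class)
        """
        fr = {}
        n_pix = factor_class.size
        n_flood = flood_mask.sum()
        for c in np.unique(factor_class):
            in_class = factor_class == c
            pct_flood = flood_mask[in_class].sum() / n_flood
            pct_area = in_class.sum() / n_pix
            fr[int(c)] = pct_flood / pct_area
        return fr

    # Toy raster: 3 slope classes and a random flood inventory mask.
    rng = np.random.default_rng(3)
    slope_class = rng.integers(1, 4, size=(100, 100))
    flood = rng.random((100, 100)) < 0.02
    print(frequency_ratio(slope_class.ravel(), flood.ravel()))
    ```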

  3. Event based neutron activation spectroscopy and analysis algorithm using MLE and meta-heuristics

    International Nuclear Information System (INIS)

    Wallace, B.

    2014-01-01

    Techniques used in neutron activation analysis are often dependent on the experimental setup. In the context of developing a portable and high efficiency detection array, good energy resolution and half-life discrimination are difficult to obtain with traditional methods given the logistic and financial constraints. An approach different from that of spectrum addition and standard spectroscopy analysis was needed. The use of multiple detectors prompts the need for a flexible storage of acquisition data to enable sophisticated post processing of information. Analogously to what is done in heavy ion physics, gamma detection counts are stored as two-dimensional events. This enables post-selection of energies and time frames without the need to modify the experimental setup. This method of storage also permits the use of more complex analysis tools. Given the nature of the problem at hand, a light and efficient analysis code had to be devised. A thorough understanding of the physical and statistical processes involved was used to create a statistical model. Maximum likelihood estimation was combined with meta-heuristics to produce a sophisticated curve-fitting algorithm. Simulated and experimental data were fed into the analysis code prompting positive results in terms of half-life discrimination, peak identification and noise reduction. The code was also adapted to other fields of research such as heavy ion identification of the quasi-target (QT) and quasi-particle (QP). The approach used seems to be able to translate well into other fields of research. (author)
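
    A minimal sketch of the statistical core, maximum likelihood fitting of a decay curve to counting data, follows. The isotope parameters are invented, and scipy's L-BFGS-B optimizer stands in for the authors' metaheuristic-assisted fit:

    ```python
    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(4)

    # Synthetic activation-decay data: counts per time bin from one isotope
    # plus a constant background (a stand-in for real event-mode data).
    t = np.arange(0.0, 600.0, 10.0)            # bin start times [s]
    true_A, true_half_life, true_bg = 500.0, 120.0, 5.0
    expected = true_A * np.exp(-np.log(2) / true_half_life * t) + true_bg
    counts = rng.poisson(expected)

    def neg_log_likelihood(p):
        """Poisson negative log-likelihood for amplitude, half-life, background
        (the constant log(counts!) term is dropped)."""
        A, t_half, bg = p
        mu = A * np.exp(-np.log(2) / t_half * t) + bg
        return np.sum(mu - counts * np.log(mu))

    fit = minimize(neg_log_likelihood, x0=[300.0, 80.0, 1.0],
                   bounds=[(1, 1e4), (1, 1e4), (0.01, 100)], method="L-BFGS-B")
    A, t_half, bg = fit.x
    print(f"estimated half-life: {t_half:.1f} s (true {true_half_life} s)")
    ```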

  4. An Automatic Multilevel Image Thresholding Using Relative Entropy and Meta-Heuristic Algorithms

    Directory of Open Access Journals (Sweden)

    Josue R. Cuevas

    2013-06-01

    Full Text Available Multilevel thresholding has long been considered one of the most popular techniques for image segmentation. Multilevel thresholding outputs a gray-scale image in which more details from the original picture can be kept, while binary thresholding can only analyze the image in two colors, usually black and white. However, two major problems exist with the multilevel thresholding technique: it is a time-consuming approach, i.e., finding appropriate threshold values could take an exceptionally long computation time; and defining a proper number of thresholds or levels that will keep most of the relevant details from the original image is a difficult task. In this study a new evaluation function based on the Kullback-Leibler information distance, also known as relative entropy, is proposed. The property of this new function can help determine the number of thresholds automatically. To offset the expensive computational effort of traditional exhaustive search methods, this study establishes a procedure that combines the relative entropy and meta-heuristics. From the experiments performed in this study, the proposed procedure not only provides good segmentation results when compared with a well-known technique such as Otsu’s method, but also constitutes a very efficient approach.
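
    One way to make the relative entropy idea concrete (a simplified reading, not the paper's exact evaluation function): approximate the gray-level histogram as piecewise-constant within the bands defined by a candidate threshold set, and score the candidate by the Kullback-Leibler divergence between the true and approximated histograms. Random search stands in for the paper's meta-heuristic, and the bimodal test histogram is synthetic:

    ```python
    import numpy as np

    rng = np.random.default_rng(5)

    def kl_divergence(p, q, eps=1e-12):
        """Relative entropy D(p||q) between two normalized histograms."""
        p = p / p.sum()
        q = q / q.sum()
        return float(np.sum(p * np.log((p + eps) / (q + eps))))

    def banded_hist(hist, thresholds):
        """Approximate the histogram as piecewise-constant within each band."""
        edges = [0, *sorted(thresholds), len(hist)]
        q = np.empty_like(hist, dtype=float)
        for a, b in zip(edges[:-1], edges[1:]):
            q[a:b] = hist[a:b].mean()
        return q

    # Synthetic bimodal gray-level histogram (two pixel populations).
    vals = np.concatenate([rng.normal(80, 10, 25_000), rng.normal(170, 12, 25_000)])
    hist = np.bincount(vals.clip(0, 255).astype(int), minlength=256).astype(float)

    best, best_d = None, np.inf
    for _ in range(3000):
        cand = sorted(rng.choice(np.arange(1, 255), size=2, replace=False).tolist())
        d = kl_divergence(hist, banded_hist(hist, cand))
        if d < best_d:
            best, best_d = cand, d
    print("thresholds:", best, "relative entropy:", round(best_d, 4))
    ```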

  5. SLIDER: a generic metaheuristic for the discovery of correlated motifs in protein-protein interaction networks.

    Science.gov (United States)

    Boyen, Peter; Van Dyck, Dries; Neven, Frank; van Ham, Roeland C H J; van Dijk, Aalt D J

    2011-01-01

    Correlated motif mining (CMM) is the problem of finding overrepresented pairs of patterns, called motifs, in sequences of interacting proteins. Algorithmic solutions for CMM thereby provide a computational method for predicting binding sites for protein interaction. In this paper, we adopt a motif-driven approach where the support of candidate motif pairs is evaluated in the network. We experimentally establish the superiority of the Chi-square-based support measure over other support measures. Furthermore, we show that CMM is an NP-hard problem for a large class of support measures (including Chi-square) and reformulate the search for correlated motifs as a combinatorial optimization problem. We then present the generic metaheuristic SLIDER, which uses steepest ascent with a neighborhood function based on sliding motifs and employs the Chi-square-based support measure. We show that SLIDER outperforms existing motif-driven CMM methods and scales to large protein-protein interaction networks. The SLIDER implementation and the data used in the experiments are available on http://bioinformatics.uhasselt.be.

  6. Event based neutron activation spectroscopy and analysis algorithm using MLE and metaheuristics

    Science.gov (United States)

    Wallace, Barton

    2014-03-01

    Techniques used in neutron activation analysis are often dependent on the experimental setup. In the context of developing a portable and high efficiency detection array, good energy resolution and half-life discrimination are difficult to obtain with traditional methods [1] given the logistic and financial constraints. An approach different from that of spectrum addition and standard spectroscopy analysis [2] was needed. The use of multiple detectors prompts the need for a flexible storage of acquisition data to enable sophisticated post processing of information. Analogously to what is done in heavy ion physics, gamma detection counts are stored as two-dimensional events. This enables post-selection of energies and time frames without the need to modify the experimental setup. This method of storage also permits the use of more complex analysis tools. Given the nature of the problem at hand, a light and efficient analysis code had to be devised. A thorough understanding of the physical and statistical processes [3] involved was used to create a statistical model. Maximum likelihood estimation was combined with metaheuristics to produce a sophisticated curve-fitting algorithm. Simulated and experimental data were fed into the analysis code prompting positive results in terms of half-life discrimination, peak identification and noise reduction. The code was also adapted to other fields of research such as heavy ion identification of the quasi-target (QT) and quasi-particle (QP). The approach used seems to be able to translate well into other fields of research.

  7. A review of simheuristics: Extending metaheuristics to deal with stochastic combinatorial optimization problems

    Directory of Open Access Journals (Sweden)

    Angel A. Juan

    2015-12-01

    Full Text Available Many combinatorial optimization problems (COPs) encountered in real-world logistics, transportation, production, healthcare, financial, telecommunication, and computing applications are NP-hard in nature. These real-life COPs are frequently characterized by their large-scale sizes and the need for obtaining high-quality solutions in short computing times, thus requiring the use of metaheuristic algorithms. Metaheuristics benefit from different random-search and parallelization paradigms, but they frequently assume that the problem inputs, the underlying objective function, and the set of optimization constraints are deterministic. However, uncertainty is all around us, which often makes deterministic models oversimplified versions of real-life systems. After completing an extensive review of related work, this paper describes a general methodology that allows for extending metaheuristics through simulation to solve stochastic COPs. 'Simheuristics' allow modelers to deal with real-life uncertainty in a natural way by integrating simulation (in any of its variants) into a metaheuristic-driven framework. These optimization-driven algorithms rely on the fact that efficient metaheuristics already exist for the deterministic version of the corresponding COP. Simheuristics also facilitate the introduction of risk and/or reliability analysis criteria during the assessment of alternative high-quality solutions to stochastic COPs. Several examples of applications in different fields illustrate the potential of the proposed methodology.
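
    A minimal simheuristic skeleton along the lines described above: a metaheuristic (here, plain local search) explores using a cheap deterministic proxy and caches elite solutions, after which a Monte Carlo simulation stage re-ranks only those elites under uncertainty, also yielding a risk measure. The objective, noise model, and sample sizes are all assumed:

    ```python
    import random

    def deterministic_objective(x):
        """Deterministic proxy cost used inside the metaheuristic."""
        return sum((xi - 3.0) ** 2 for xi in x)

    def simulate(x, n_runs=200):
        """Monte Carlo estimate of the stochastic cost (mean, spread)."""
        samples = [deterministic_objective([xi + random.gauss(0, 0.5) for xi in x])
                   for _ in range(n_runs)]
        mean = sum(samples) / n_runs
        var = sum((s - mean) ** 2 for s in samples) / n_runs
        return mean, var ** 0.5

    random.seed(6)
    x = [random.uniform(0, 6) for _ in range(4)]
    elites = [list(x)]
    for _ in range(500):
        # Metaheuristic move on the deterministic proxy (simple local search).
        cand = [xi + random.gauss(0, 0.2) for xi in x]
        if deterministic_objective(cand) < deterministic_objective(x):
            x = cand
            elites.append(list(x))          # cache promising solutions

    # Simulation stage: re-rank only the elite set under uncertainty; the
    # spread of the simulated cost doubles as a risk criterion.
    ranked = sorted(elites[-10:], key=lambda e: simulate(e)[0])
    best = ranked[0]
    print("best solution:", [round(v, 2) for v in best],
          "simulated cost (mean, std):", tuple(round(v, 2) for v in simulate(best)))
    ```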

  8. Complexity methods applied to turbulence in plasma astrophysics

    Science.gov (United States)

    Vlahos, L.; Isliker, H.

    2016-09-01

    In this review many of the well-known tools for the analysis of complex systems are used in order to study the global coupling of the turbulent convection zone with the solar atmosphere, where the magnetic energy is dissipated explosively. Several well-documented observations are not easy to interpret with the use of magnetohydrodynamic (MHD) and/or kinetic numerical codes. Such observations are: (1) the size distribution of the active regions (AR) on the solar surface, (2) the fractal and multifractal characteristics of the observed magnetograms, (3) the self-organized characteristics of the explosive magnetic energy release, and (4) the very efficient acceleration of particles during the flaring periods in the solar corona. We review briefly the work published over the last twenty-five years on the above issues and propose solutions by using methods borrowed from the analysis of complex systems. The scenario which emerged is as follows: (a) The fully developed turbulence in the convection zone generates and transports magnetic flux tubes to the solar surface. Using probabilistic percolation models we were able to reproduce the size distribution and the fractal properties of the emerged and randomly moving magnetic flux tubes. (b) Using a Non-Linear Force-Free (NLFF) magnetic extrapolation numerical code we can explore how the emerged magnetic flux tubes interact nonlinearly and form thin and unstable current sheets (UCS) inside the coronal part of the AR. (c) The fragmentation of the UCS and the local redistribution of the magnetic field, when the local current exceeds a critical threshold, is a key process which drives avalanches and forms coherent structures. This local reorganization of the magnetic field enhances the energy dissipation and influences the global evolution of the complex magnetic topology. Using a cellular automaton and following the simple rules of Self-Organized Criticality (SOC), we were able to reproduce the statistical characteristics of the

  9. Harmony Search Method: Theory and Applications

    Directory of Open Access Journals (Sweden)

    X. Z. Gao

    2015-01-01

    Full Text Available The Harmony Search (HS) method is an emerging metaheuristic optimization algorithm, which has been employed to cope with numerous challenging tasks during the past decade. In this paper, the essential theory and applications of the HS algorithm are first described and reviewed. Several typical variants of the original HS are next briefly explained. As an example case study, a modified HS method inspired by the idea of Pareto-dominance-based ranking is also presented. It is further applied to handle a practical wind generator optimal design problem.
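
    A basic Harmony Search sketch follows, showing the three improvisation rules (memory consideration, pitch adjustment, random selection); the objective and all parameter values are assumed for illustration:

    ```python
    import random

    random.seed(7)

    def objective(x):
        """Toy objective to minimize (a stand-in for a design problem)."""
        return sum(xi ** 2 for xi in x)

    def harmony_search(dim=3, lo=-5.0, hi=5.0, hms=10, hmcr=0.9, par=0.3,
                       bw=0.2, n_iter=2000):
        """Basic Harmony Search.

        hms: harmony memory size, hmcr: memory considering rate,
        par: pitch adjusting rate, bw: pitch bandwidth.
        """
        memory = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(hms)]
        for _ in range(n_iter):
            new = []
            for d in range(dim):
                if random.random() < hmcr:             # take from memory
                    value = random.choice(memory)[d]
                    if random.random() < par:          # pitch adjustment
                        value += random.uniform(-bw, bw)
                else:                                  # random improvisation
                    value = random.uniform(lo, hi)
                new.append(min(hi, max(lo, value)))
            worst = max(memory, key=objective)
            if objective(new) < objective(worst):      # replace worst harmony
                memory[memory.index(worst)] = new
        return min(memory, key=objective)

    best = harmony_search()
    print([round(v, 4) for v in best], round(objective(best), 6))
    ```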

  10. Time series analytics using sliding window metaheuristic optimization-based machine learning system for identifying building energy consumption patterns

    International Nuclear Information System (INIS)

    Chou, Jui-Sheng; Ngo, Ngoc-Tri

    2016-01-01

    Highlights: • This study develops a novel time-series sliding window forecast system. • The system integrates metaheuristics, machine learning and time-series models. • Site experiment of smart grid infrastructure is installed to retrieve real-time data. • The proposed system accurately predicts energy consumption in residential buildings. • The forecasting system can help users minimize their electricity usage. - Abstract: Smart grids are a promising solution to the rapidly growing power demand because they can considerably increase building energy efficiency. This study developed a novel time-series sliding window metaheuristic optimization-based machine learning system for predicting real-time building energy consumption data collected by a smart grid. The proposed system integrates a seasonal autoregressive integrated moving average (SARIMA) model and metaheuristic firefly algorithm-based least squares support vector regression (MetaFA-LSSVR) model. Specifically, the proposed system fits the SARIMA model to linear data components in the first stage, and the MetaFA-LSSVR model captures nonlinear data components in the second stage. Real-time data retrieved from an experimental smart grid installed in a building were used to evaluate the efficacy and effectiveness of the proposed system. A k-week sliding window approach is proposed for employing historical data as input for the novel time-series forecasting system. The prediction system yielded high and reliable accuracy rates in 1-day-ahead predictions of building energy consumption, with a total error rate of 1.181% and mean absolute error of 0.026 kW h. Notably, the system demonstrates an improved accuracy rate in the range of 36.8–113.2% relative to those of the linear forecasting model (i.e., SARIMA) and nonlinear forecasting models (i.e., LSSVR and MetaFA-LSSVR). Therefore, end users can further apply the forecasted information to enhance efficiency of energy usage in their buildings, especially
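
    A two-stage hybrid in the spirit described above can be sketched as follows: a SARIMA model captures the linear/seasonal component, and a support vector regressor fitted to lagged residuals captures the nonlinear component. Plain SVR with fixed hyperparameters stands in for the paper's firefly-tuned LSSVR, and the hourly series is synthetic:

    ```python
    import numpy as np
    from statsmodels.tsa.statespace.sarimax import SARIMAX
    from sklearn.svm import SVR

    rng = np.random.default_rng(8)

    # Synthetic hourly energy series: daily seasonality plus a nonlinearity.
    t = np.arange(24 * 60)
    y = (2.0 + np.sin(2 * np.pi * t / 24)
         + 0.3 * np.sin(2 * np.pi * t / 24) ** 3 + rng.normal(0, 0.1, t.size))
    train, test = y[:-24], y[-24:]

    # Stage 1: SARIMA captures the linear / seasonal component.
    sarima = SARIMAX(train, order=(1, 0, 1),
                     seasonal_order=(1, 0, 1, 24)).fit(disp=False)
    residuals = train - sarima.fittedvalues

    # Stage 2: SVR on lagged residuals captures the nonlinear component.
    lags = 24
    X = np.column_stack([residuals[i:i - lags] for i in range(lags)])
    svr = SVR(C=10.0, gamma="scale").fit(X, residuals[lags:])

    # 1-day-ahead forecast: SARIMA forecast plus predicted residual.
    sarima_fc = sarima.forecast(steps=24)
    resid_window = list(residuals[-lags:])
    hybrid_fc = []
    for k in range(24):
        r_hat = svr.predict(np.array(resid_window[-lags:])[None, :])[0]
        hybrid_fc.append(sarima_fc[k] + r_hat)
        resid_window.append(r_hat)
    mae = np.mean(np.abs(np.array(hybrid_fc) - test))
    print(f"hybrid 1-day-ahead MAE: {mae:.3f}")
    ```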

  11. Using Metaheuristic Algorithms for Solving a Hub Location Problem: Application in Passive Optical Network Planning

    Directory of Open Access Journals (Sweden)

    Masoud Rabbani

    2017-02-01

    Full Text Available Nowadays, fiber optics, having greater bandwidth and being more efficient than other similar technologies, is counted as one of the most important tools for data transfer. In this article, an integrated mathematical model for a three-level fiber-optic distribution network that considers the backbone and local access networks simultaneously is presented, in which the backbone network is a ring and the access networks have a star-star topology. The aim of the model is to determine the location of the central offices and splitters, how connections are made between central offices, and the allocation of each demand node to a splitter or central office in such a way that the cost of fiber-optic wiring and concentrator installation is minimized. Moreover, each user’s desired bandwidth should be provided efficiently. The proposed model is validated with GAMS software on small-sized problems; the model is then solved by two meta-heuristic methods, differential evolution (DE) and a genetic algorithm (GA), on large-scale problems, and the results of the two algorithms are compared with respect to computational time and objective function value. Finally, a sensitivity analysis is provided. Keywords: fiber-optic, telecommunication network, hub location, passive splitter, three-level network.

  12. Modelling of Hydrothermal Unit Commitment Coordination Using Efficient Metaheuristic Algorithm: A Hybridized Approach

    Directory of Open Access Journals (Sweden)

    Suman Sutradhar

    2016-01-01

    Full Text Available In this paper, a novel hybridization of two efficient metaheuristic algorithms is proposed for energy system analysis and modelling, based on a hydro- and thermal-based power system in both single- and multi-objective environments. The scheduling of hydro and thermal power is modelled descriptively, including the handling of various practical nonlinear constraints. The main goal of the proposed modelling is to minimize the total production cost (a highly nonlinear and nonconvex problem) and emission while satisfying the hydro and thermal unit commitment limitations involved. The cascaded hydro reservoirs of the hydro subsystem and the intertemporal constraints on the thermal units, along with the nonlinear, nonconvex, mixed-integer, mixed-binary objective function, make the search space highly complex. To solve such a complicated system, a hybridization of Gray Wolf Optimization and the Artificial Bee Colony algorithm, h-ABC/GWO, is used for better exploration and exploitation in the multidimensional search space. Two different test systems are used for modelling and analysis. Experimental results demonstrate the superior performance of the proposed algorithm compared to other recently reported ones in terms of convergence and quality of solutions.

  13. Meta-heuristics in cellular manufacturing: A state-of-the-art review

    Directory of Open Access Journals (Sweden)

    Tamal Ghosh

    2011-01-01

    Full Text Available Meta-heuristic approaches are general algorithmic frameworks, often nature-inspired, designed to solve NP-complete optimization problems in cellular manufacturing systems, and they have been a growing research area for the past two decades. This paper discusses various meta-heuristic techniques such as evolutionary approaches, ant colony optimization, simulated annealing, tabu search and other recent approaches, and their applications in the vicinity of the group technology/cell formation (GT/CF) problem in cellular manufacturing. The novelty of this paper is to incorporate the various prevailing issues and open problems of meta-heuristic approaches, their usage, comparison and hybridization, and the scope of future research in the aforesaid area.

  14. Further Insight and Additional Inference Methods for Polynomial Regression Applied to the Analysis of Congruence

    Science.gov (United States)

    Cohen, Ayala; Nahum-Shani, Inbal; Doveh, Etti

    2010-01-01

    In their seminal paper, Edwards and Parry (1993) presented the polynomial regression as a better alternative to applying difference score in the study of congruence. Although this method is increasingly applied in congruence research, its complexity relative to other methods for assessing congruence (e.g., difference score methods) was one of the…

  15. Comparing the performance of different meta-heuristics for unweighted parallel machine scheduling

    Directory of Open Access Journals (Sweden)

    Adamu, Mumuni Osumah

    2015-08-01

    Full Text Available This article considers the due window scheduling problem to minimise the number of early and tardy jobs on identical parallel machines. This problem is known to be NP-complete, and thus finding an optimal solution is unlikely. Three meta-heuristics and their hybrids are proposed, and extensive computational experiments are conducted. The purpose of this paper is to compare the performance of these meta-heuristics and their hybrids and to determine the best among them. Detailed comparative tests have also been conducted to analyse the different heuristics, with the simulated annealing hybrid giving the best result.

  16. A Metaheuristic Scheduler for Time Division Multiplexed Network-on-Chip

    DEFF Research Database (Denmark)

    Sørensen, Rasmus Bo; Sparsø, Jens; Pedersen, Mark Ruvald

    This report presents a metaheuristic scheduler for inter-processor communication in multi-core platforms using time division multiplexed (TDM) networks on chip (NOC). Input to the scheduler is a specification of the target multi-core platform and a specification of the application. Compared...... that this is possible with only negligible impact on the schedule period. We evaluate the scheduler with seven different applications from the MCSL NOC benchmark suite. We observe that the metaheuristics perform better than the greedy solution. In the special case of all-to-all communication with equal bandwidths...

  17. Soft computing and metaheuristics: using knowledge and reasoning to control search and vice-versa

    Science.gov (United States)

    Bonissone, Piero P.

    2004-01-01

    Meta-heuristics are heuristic procedures used to tune, control, guide, allocate computational resources or reason about object-level problem solvers in order to improve their quality, performance, or efficiency. Offline meta-heuristics define the best structural and/or parametric configurations for the object-level model, while on-line heuristics generate run-time corrections for the behavior of the same object-level solvers. Soft Computing is a framework in which we encode domain knowledge to develop such meta-heuristics. We explore the use of meta-heuristics in three application areas: a) control; b) optimization; and c) classification. In the context of control problems, we describe the use of evolutionary algorithms to perform offline parametric tuning of fuzzy controllers, and the use of fuzzy supervisory controllers to perform on-line mode-selection and output interpolation. In the area of optimization, we illustrate the application of fuzzy controllers to manage the transition from exploration to exploitation of evolutionary algorithms that solve the optimization problem. In the context of discrete classification problems, we have leveraged evolutionary algorithms to tune knowledge-based classifiers and maximize their coverage and accuracy.

  18. A Metaheuristic Scheduler for Time Division Multiplexed Network-on-Chip

    DEFF Research Database (Denmark)

    Sørensen, Rasmus Bo; Sparsø, Jens; Pedersen, Mark Ruvald

    2014-01-01

    This paper presents a metaheuristic scheduler for inter-processor communication in multi-processor platforms using time division multiplexed (TDM) networks on chip (NOC). Compared to previous works, the scheduler handles a broader and more general class of platforms. Another contribution, which has...

  19. Accurate Simulation of MPPT Methods Performance When Applied to Commercial Photovoltaic Panels

    OpenAIRE

    Cubas, Javier; Pindado, Santiago; Sanz-Andrés, Ángel

    2015-01-01

    A new, simple, and quick-calculation methodology to obtain a solar panel model, based on the manufacturers' datasheet, to perform MPPT simulations, is described. The method takes into account variations on the ambient conditions (sun irradiation and solar cells temperature) and allows fast MPPT methods comparison or their performance prediction when applied to a particular solar panel. The feasibility of the described methodology is checked with four different MPPT methods applied to a commer...

  20. Applying the Mixed Methods Instrument Development and Construct Validation Process: the Transformative Experience Questionnaire

    Science.gov (United States)

    Koskey, Kristin L. K.; Sondergeld, Toni A.; Stewart, Victoria C.; Pugh, Kevin J.

    2018-01-01

    Onwuegbuzie and colleagues proposed the Instrument Development and Construct Validation (IDCV) process as a mixed methods framework for creating and validating measures. Examples applying IDCV are lacking. We provide an illustrative case integrating the Rasch model and cognitive interviews applied to the development of the Transformative…

  1. An Aural Learning Project: Assimilating Jazz Education Methods for Traditional Applied Pedagogy

    Science.gov (United States)

    Gamso, Nancy M.

    2011-01-01

    The Aural Learning Project (ALP) was developed to incorporate jazz method components into the author's classical practice and her applied woodwind lesson curriculum. The primary objective was to place a more focused pedagogical emphasis on listening and hearing than is traditionally used in the classical applied curriculum. The components of the…

  2. Protein structure prediction using bee colony optimization metaheuristic

    DEFF Research Database (Denmark)

    Fonseca, Rasmus; Paluszewski, Martin; Winter, Pawel

    2010-01-01

    Predicting the native structure of proteins is one of the most challenging problems in molecular biology. The goal is to determine the three-dimensional structure from the one-dimensional amino acid sequence. De novo prediction algorithms seek to do this by developing a representation of the protein's structure, an energy potential and some optimization algorithm that finds the structure with minimal energy. Bee Colony Optimization (BCO) is a relatively new approach to solving optimization problems based on the foraging behaviour of bees. Several variants of BCO have been suggested in the literature. We have devised a new variant that unifies the existing ones and is much more flexible with respect to replacing the various elements of the BCO. In particular this applies to the choice of the local search as well as the method for generating scout locations and performing the waggle dance. We apply

  3. Applying ant colony optimization metaheuristic to solve forest transportation planning problems with side constraints

    Science.gov (United States)

    Marco A. Contreras; Woodam Chung; Greg Jones

    2008-01-01

    Forest transportation planning problems (FTPP) have evolved from considering only the financial aspects of timber management to more holistic problems that also consider the environmental impacts of roads. These additional requirements have introduced side constraints, making FTPP larger and more complex. Mixed-integer programming (MIP) has been used to solve FTPP, but...

  4. Metaheuristics applied to vehicle routing. A case study. Part 1: formulating the problem

    Directory of Open Access Journals (Sweden)

    Guillermo González Vargas

    2006-09-01

    Full Text Available This paper deals with the VRP (vehicle routing problem) mathematical formulation and presents some methodologies used by different authors to solve VRP variations. This paper is presented as a springboard for introducing future papers about a manufacturing company’s location decisions based on the total distance traveled to distribute its product.

  5. Wielandt method applied to the diffusion equations discretized by finite element nodal methods

    International Nuclear Information System (INIS)

    Mugica R, A.; Valle G, E. del

    2003-01-01

    Nowadays the numerical methods for solving the diffusion equation, by means of algorithms and computer programs, turn out to be so extensive, due to the great number of routines and calculations that must be carried out, that this directly affects the execution times of these programs, with results obtained in relatively long times. This work shows the application of a method that accelerates the convergence of the classic power method, notably reducing the number of iterations necessary to obtain reliable results, which means that computing times are greatly reduced. This method is known in the literature as the Wielandt method, and it has been incorporated into a computer program based on the discretization of the steady-state neutron diffusion equations in plate geometry by polynomial nodal methods. In this work the neutron diffusion equations are described for several energy groups, together with their discretization by means of the so-called physical nodal methods, the quadratic case being illustrated in particular. A model problem widely described in the literature is solved for the physical nodal schemes of degrees 1, 2, 3 and 4 in three different ways: a) with the classic power method, b) the power method with Wielandt acceleration, and c) the power method with modified Wielandt acceleration. The results for the model problem as well as for two additional problems known as benchmark problems are reported. This acceleration method can also be implemented for problems with geometries other than the one proposed in this work, and its application can be extended to problems in 2 or 3 dimensions. (Author)
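
    The effect of the Wielandt acceleration can be illustrated on a generic matrix eigenproblem: shifting the operator by an estimate close to the dominant eigenvalue increases the separation of the leading eigenvalues of the shifted inverse, so power iteration converges in far fewer steps. A minimal sketch on a dense toy matrix, not the nodal diffusion equations (production codes solve the shifted system rather than forming an explicit inverse):

    ```python
    import numpy as np

    rng = np.random.default_rng(9)
    A = rng.random((50, 50))
    A = A + A.T + 50 * np.eye(50)     # symmetric toy operator for the demo

    def power_method(M, tol=1e-10, max_iter=10000):
        """Classic power iteration for the dominant eigenvalue."""
        x = np.ones(M.shape[0])
        lam = 0.0
        for it in range(max_iter):
            y = M @ x
            lam_new = x @ y / (x @ x)
            x = y / np.linalg.norm(y)
            if abs(lam_new - lam) < tol:
                return lam_new, it
            lam = lam_new
        return lam, max_iter

    def wielandt_power_method(M, shift, tol=1e-10, max_iter=10000):
        """Power iteration on (M - shift*I)^-1: a Wielandt-type shift.

        With the shift close to the dominant eigenvalue, the eigenvalues of
        the shifted inverse are well separated, so far fewer iterations are
        needed; the eigenvalue of M is recovered as shift + 1/mu.
        """
        B = np.linalg.inv(M - shift * np.eye(M.shape[0]))  # small demo only
        mu, it = power_method(B, tol, max_iter)
        return shift + 1.0 / mu, it

    lam, it_plain = power_method(A)
    lam_w, it_shift = wielandt_power_method(A, shift=lam * 1.05)
    print(f"plain power method: {lam:.6f} in {it_plain} iterations")
    print(f"Wielandt-shifted:   {lam_w:.6f} in {it_shift} iterations")
    ```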

  6. What is the method in applying formal methods to PLC applications?

    NARCIS (Netherlands)

    Mader, Angelika H.; Engel, S.; Wupper, Hanno; Kowalewski, S.; Zaytoon, J.

    2000-01-01

    The question we investigate is how to obtain PLC applications with confidence in their proper functioning. Especially, we are interested in the contribution that formal methods can provide for their development. Our maxim is that the place of a particular formal method in the total picture of system

  7. Formal methods applied to industrial complex systems implementation of the B method

    CERN Document Server

    Boulanger, Jean-Louis

    2014-01-01

    This book presents real-world examples of formal techniques in an industrial context. It covers formal methods such as SCADE and/or the B Method, in various fields such as railways, aeronautics, and the automotive industry. The purpose of this book is to present a summary of experience on the use of "formal methods" (based on formal techniques such as proof, abstract interpretation and model-checking) in industrial examples of complex systems, based on the experience of people currently involved in the creation and assessment of safety critical system software. The involvement of people from

  8. The Wigner method applied to the photodissociation of CH3I

    DEFF Research Database (Denmark)

    Henriksen, Niels Engholm

    1985-01-01

    The Wigner method is applied to the Shapiro-Bersohn model of the photodissociation of CH3I. The partial cross sections obtained by this semiclassical method are in very good agreement with results of exact quantum calculations. It is also shown that a harmonic approximation to the vibrational...

  9. A new clamp method for firing bricks | Obeng | Journal of Applied ...

    African Journals Online (AJOL)

    A new clamp method for firing bricks. ... Journal of Applied Science and Technology ... To overcome these operational deficiencies, a new method of firing bricks that uses a brick clamp technique incorporating a clamp wall of 60 cm thickness, a six-tier approach to sealing the top of the clamp (by a combination of green bricks) ...

  10. Determination methods for plutonium as applied in the field of reprocessing

    International Nuclear Information System (INIS)

    1983-07-01

    The papers presented report on Pu-determination methods which are routinely applied in process control, as well as on new developments which could supersede current methods, either because they are more accurate or because they are simpler and faster. (orig./DG) [de

  11. A method to evaluate performance reliability of individual subjects in laboratory research applied to work settings.

    Science.gov (United States)

    1978-10-01

    This report presents a method that may be used to evaluate the reliability of performance of individual subjects, particularly in applied laboratory research. The method is based on analysis of variance of a tasks-by-subjects data matrix, with all sc...

  12. Water Permeability of Pervious Concrete Is Dependent on the Applied Pressure and Testing Methods

    Directory of Open Access Journals (Sweden)

    Yinghong Qin

    2015-01-01

    Full Text Available The falling head method (FHM) and the constant head method (CHM) are used to test the water permeability of permeable concrete, applying different water heads to the testing samples. The results indicate that the apparent permeability of pervious concrete decreases with the applied water head. The results also demonstrate that the permeability measured with the FHM is lower than that measured with the CHM. The fundamental difference between the CHM and FHM is examined from the theory of fluid flow through porous media. The testing results suggest that the water permeability of permeable concrete should be reported together with the applied pressure and the associated testing method.
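
    For reference, the two laboratory formulas behind the CHM and FHM follow directly from Darcy's law. The sketch below encodes them with invented test values, not the paper's measurements.

        import math

        def k_constant_head(Q, L, A, h):
            # Steady test: Darcy's law gives k = Q*L / (A*h), with flow rate Q
            # (m^3/s), sample length L (m), sample cross-section A (m^2) and
            # constant head difference h (m); k comes out in m/s.
            return Q * L / (A * h)

        def k_falling_head(a, L, A, t, h1, h2):
            # Integrating Darcy's law while the head falls from h1 to h2 over
            # time t (s) gives k = (a*L / (A*t)) * ln(h1/h2), with standpipe
            # cross-section a (m^2).
            return (a * L) / (A * t) * math.log(h1 / h2)

        # illustrative numbers only: a 150 mm long, 100 cm^2 sample
        print(f"CHM: k = {k_constant_head(2.0e-5, 0.15, 0.01, 0.30):.3e} m/s")
        print(f"FHM: k = {k_falling_head(2.0e-4, 0.15, 0.01, 60.0, 0.50, 0.20):.3e} m/s")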

  13. Accurate Simulation of MPPT Methods Performance When Applied to Commercial Photovoltaic Panels

    Directory of Open Access Journals (Sweden)

    Javier Cubas

    2015-01-01

    Full Text Available A new, simple, and quick-calculation methodology to obtain a solar panel model, based on the manufacturers’ datasheet, to perform MPPT simulations is described. The method takes into account variations in the ambient conditions (sun irradiation and solar cell temperature) and allows fast comparison of MPPT methods or prediction of their performance when applied to a particular solar panel. The feasibility of the described methodology is checked with four different MPPT methods applied to a commercial solar panel, within a day, and under realistic ambient conditions.

  14. Accurate simulation of MPPT methods performance when applied to commercial photovoltaic panels.

    Science.gov (United States)

    Cubas, Javier; Pindado, Santiago; Sanz-Andrés, Ángel

    2015-01-01

    A new, simple, and quick-calculation methodology to obtain a solar panel model, based on the manufacturers' datasheet, to perform MPPT simulations is described. The method takes into account variations in the ambient conditions (sun irradiation and solar cell temperature) and allows fast comparison of MPPT methods or prediction of their performance when applied to a particular solar panel. The feasibility of the described methodology is checked with four different MPPT methods applied to a commercial solar panel, within a day, and under realistic ambient conditions.
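
    The record describes building a single-diode panel model from datasheet values to drive MPPT simulations. As a hedged illustration (not the authors' extraction procedure), the sketch below evaluates the standard single-diode equation explicitly through the Lambert W function and locates the maximum power point by a voltage sweep; all parameter values are invented.

        import numpy as np
        from scipy.special import lambertw

        def pv_current(V, I_ph, I_0, a, Rs, Rsh):
            # Explicit I(V) of the single-diode model via the Lambert W
            # function; a = n*Ns*Vt is the modified ideality factor of the
            # whole panel (Ns cells in series, thermal voltage Vt).
            arg = (Rs * I_0 * Rsh / (a * (Rs + Rsh))
                   * np.exp(Rsh * (Rs * (I_ph + I_0) + V) / (a * (Rs + Rsh))))
            return ((Rsh * (I_ph + I_0) - V) / (Rs + Rsh)
                    - (a / Rs) * lambertw(arg).real)

        # invented parameters roughly shaped like a 60-cell panel
        V = np.linspace(0.0, 35.0, 701)
        I = pv_current(V, I_ph=8.2, I_0=1e-6, a=1.3 * 60 * 0.02585,
                       Rs=0.3, Rsh=300.0)
        P = V * np.clip(I, 0.0, None)
        k = int(np.argmax(P))
        print(f"MPP ~ {P[k]:.1f} W at V = {V[k]:.2f} V, I = {I[k]:.2f} A")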

  15. Efficient Integration of Highly Eccentric Orbits by Scaling Methods Applied to Kustaanheimo-Stiefel Regularization

    Science.gov (United States)

    Fukushima, Toshio

    2004-12-01

    We apply our single scaling method to the numerical integration of perturbed two-body problems regularized by the Kustaanheimo-Stiefel (K-S) transformation. The scaling is done by multiplying a single scaling factor with the four-dimensional position and velocity vectors of an associated harmonic oscillator in order to maintain the Kepler energy relation in terms of the K-S variables. As with the so-called energy rectification of Aarseth, the extra cost for the scaling is negligible, since the integration of the Kepler energy itself is already incorporated in the original K-S formulation. On the other hand, the single scaling method can be applied at every integration step without facing numerical instabilities. For unperturbed cases, the single scaling applied at every step gives a better result than either the original K-S formulation, the energy rectification applied at every apocenter, or the single scaling method applied at every apocenter. For the perturbed cases, however, the single scaling method applied at every apocenter provides the best performance for all perturbation types, whether the main source of error is truncation or round-off.

  16. Fuzzy logic augmentation of nature-inspired optimization metaheuristics theory and applications

    CERN Document Server

    Melin, Patricia

    2015-01-01

    This book describes recent advances in the fuzzy logic augmentation of nature-inspired optimization metaheuristics and their application in areas such as intelligent control and robotics, pattern recognition, time series prediction, and the optimization of complex problems. The book is organized in two main parts, each of which contains a group of papers on a similar subject. The first part consists of papers whose main theme is theoretical aspects of the fuzzy logic augmentation of nature-inspired optimization metaheuristics, essentially papers that propose new optimization algorithms enhanced using fuzzy systems. The second part contains papers whose main theme is the application of optimization algorithms, essentially papers using nature-inspired techniques to solve complex optimization problems in diverse areas of application.

  17. Active Problem Solving and Applied Research Methods in a Graduate Course on Numerical Methods

    Science.gov (United States)

    Maase, Eric L.; High, Karen A.

    2008-01-01

    "Chemical Engineering Modeling" is a first-semester graduate course traditionally taught in a lecture format at Oklahoma State University. The course as taught by the author for the past seven years focuses on numerical and mathematical methods as necessary skills for incoming graduate students. Recent changes to the course have included Visual…

  18. Viewpoint: An Alternative Teaching Method. The WFTU Applies Active Methods to Educate Workers.

    Science.gov (United States)

    Courbe, Jean-Francois

    1989-01-01

    Develops a set of ideas and practices acquired from experience in organizing trade union education sessions. The method is based on observations that lecturing has not proved highly efficient, although traditional approaches--lecture, reading, discussion--are not totally rejected. (JOW)

  19. Proposal and Evaluation of Management Method for College Mechatronics Education Applying the Project Management

    Science.gov (United States)

    Ando, Yoshinobu; Eguchi, Yuya; Mizukawa, Makoto

    In this research, we proposed and evaluated a management method for college mechatronics education. We applied project management to college mechatronics education, practicing our management method in the seminar “Microcomputer Seminar” for 3rd-grade students of the Department of Electrical Engineering, Shibaura Institute of Technology. We succeeded in the management of the Microcomputer Seminar in 2006 and obtained a good evaluation of our management method by means of a questionnaire.

  20. Forensic chemistry: perspective of new analytical methods applied to documentoscopy, ballistic and drugs of abuse

    OpenAIRE

    Romão, Wanderson; Schwab, Nicolas V; Bueno, Maria Izabel M. S; Sparrapan, Regina; Eberlin, Marcos N; Martiny, Andrea; Sabino, Bruno D; Maldaner, Adriano O

    2011-01-01

    In this review, recent methods developed and applied to solve criminal occurrences related to documentoscopy, ballistics and drugs of abuse are discussed. In documentoscopy, the aging of ink writings, the sequence of line crossings and the counterfeiting of documents are aspects to be solved with reproducible, fast and non-destructive methods. In ballistics, the industries are currently producing "lead-free" or "nontoxic" handgun ammunition, so new methods of gunshot residue characterization are be...

  1. Apparatus and method for applying an end plug to a fuel rod tube end

    International Nuclear Information System (INIS)

    Rieben, S.L.; Wylie, M.E.

    1987-01-01

    An apparatus is described for applying an end plug to a hollow end of a nuclear fuel rod tube, comprising: support means mounted for reciprocal movement between remote and adjacent positions relative to a nuclear fuel rod tube end to which an end plug is to be applied; guide means supported on the support means for movement; and drive means coupled to the support means and being actuatable for movement between retracted and extended positions for reciprocally moving the support means between its respective remote and adjacent positions. A method for applying an end plug to a hollow end of a nuclear fuel rod tube is also described

  2. Optimizing a multi-product closed-loop supply chain using NSGA-II, MOSA, and MOPSO meta-heuristic algorithms

    Science.gov (United States)

    Babaveisi, Vahid; Paydar, Mohammad Mahdi; Safaei, Abdul Sattar

    2017-07-01

    This study discusses the solution methodology for a closed-loop supply chain (CLSC) network that includes the collection of used products as well as the distribution of new products. This supply chain is representative of the class of problems that can be solved by the proposed meta-heuristic algorithms. A mathematical model is designed for a CLSC involving three objective functions: maximizing the profit and minimizing the total risk and the shortages of products. Since three objective functions are considered, a multi-objective solution methodology can be advantageous. Therefore, several approaches have been studied: an NSGA-II algorithm is first utilized, and then the results are validated using MOSA and MOPSO algorithms. Priority-based encoding, which is used in all the algorithms, is the core of the solution computations. To compare the performance of the meta-heuristics, random numerical instances are evaluated by four criteria: mean ideal distance, spread of non-dominated solutions, number of Pareto solutions, and CPU time. In order to enhance the performance of the algorithms, the Taguchi method is used for parameter tuning. Finally, sensitivity analyses are performed and the computational results are presented based on the sensitivity analyses in parameter tuning.
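
    For intuition about the comparison criteria named in this abstract, the sketch below extracts a non-dominated front and computes two of the measures. Definitions of "mean ideal distance" and "spread" vary between papers, so these are common textbook forms, not necessarily the authors' exact formulas.

        import numpy as np

        def pareto_front(F):
            # Keep the non-dominated rows of the objective matrix F
            # (all objectives to be minimized).
            keep = np.ones(len(F), dtype=bool)
            for i in range(len(F)):
                dominators = np.all(F <= F[i], axis=1) & np.any(F < F[i], axis=1)
                if dominators.any():
                    keep[i] = False
            return F[keep]

        def mean_ideal_distance(front):
            # Mean Euclidean distance of front members to the ideal point
            # (component-wise minimum); lower is better.
            return np.linalg.norm(front - front.min(axis=0), axis=1).mean()

        def spread_of_front(front):
            # One common "spread" measure: the diagonal of the front's
            # bounding box; wider coverage scores higher.
            return np.linalg.norm(front.max(axis=0) - front.min(axis=0))

        F = np.random.default_rng(0).random((100, 3))  # stand-in objectives
        front = pareto_front(F)
        print(len(front), mean_ideal_distance(front), spread_of_front(front))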

  3. Meta-Heuristics in Short Scale Construction: Ant Colony Optimization and Genetic Algorithm.

    Science.gov (United States)

    Schroeders, Ulrich; Wilhelm, Oliver; Olaru, Gabriel

    2016-01-01

    The advent of large-scale assessment, but also the more frequent use of longitudinal and multivariate approaches to measurement in psychological, educational, and sociological research, caused an increased demand for psychometrically sound short scales. Shortening scales economizes on valuable administration time, but might result in inadequate measures because reducing an item set could: a) change the internal structure of the measure, b) result in poorer reliability and measurement precision, c) deliver measures that cannot effectively discriminate between persons on the intended ability spectrum, and d) reduce test-criterion relations. Different approaches to abbreviate measures fare differently with respect to the above-mentioned problems. Therefore, we compare the quality and efficiency of three item selection strategies to derive short scales from an existing long version: a Stepwise COnfirmatory Factor Analytical approach (SCOFA) that maximizes factor loadings and two metaheuristics, specifically an Ant Colony Optimization (ACO) with a tailored user-defined optimization function and a Genetic Algorithm (GA) with an unspecific cost-reduction function. SCOFA compiled short versions were highly reliable, but had poor validity. In contrast, both metaheuristics outperformed SCOFA and produced efficient and psychometrically sound short versions (unidimensional, reliable, sensitive, and valid). We discuss under which circumstances ACO and GA produce equivalent results and provide recommendations for conditions in which it is advisable to use a metaheuristic with an unspecific out-of-the-box optimization function.

  4. Method of levelized discounted costs applied in economic evaluation of nuclear power plant project

    International Nuclear Information System (INIS)

    Tian Li; Wang Yongqing; Liu Jingquan; Guo Jilin; Liu Wei

    2000-01-01

    The main methods of economic evaluation of bids in common use are introduced. The characteristics of the levelized discounted cost method and its application are presented. The method of levelized discounted costs is applied to the cost calculation in the economic evaluation of a 200 MW nuclear heating reactor. The results indicate that the method of levelized discounted costs is simple and feasible, and it is considered most suitable for the economic evaluation of various cases. It is suggested that the method be used in national economic evaluations

  5. Particle swarm optimization with random keys applied to the nuclear reactor reload problem

    International Nuclear Information System (INIS)

    Meneses, Anderson Alvarenga de Moura; Fundacao Educacional de Macae; Machado, Marcelo Dornellas; Medeiros, Jose Antonio Carlos Canedo; Schirru, Roberto

    2007-01-01

    In 1995, Kennedy and Eberhart presented Particle Swarm Optimization (PSO), an Artificial Intelligence metaheuristic technique to optimize non-linear continuous functions. The concept of Swarm Intelligence is based on the social aspects of intelligence, that is, the ability of individuals to learn from their own experience in a group as well as to take advantage of the performance of other individuals. Some PSO models for discrete search spaces have been developed for combinatorial optimization, although none of them presented satisfactory results for a combinatorial problem such as the nuclear reactor fuel reloading problem (NRFRP). In this sense, we developed Particle Swarm Optimization with Random Keys (PSORK) in previous research to solve combinatorial problems. Experience demonstrated that PSORK performed comparably to or better than other techniques. Thus, the PSORK metaheuristic is being applied in optimization studies of the NRFRP for the Angra 1 Nuclear Power Plant. Results will be compared with Genetic Algorithms and the manual method provided by a specialist. In this experience, the problem is modeled for an eighth-core symmetry and three-dimensional geometry, aiming at the minimization of the Nuclear Enthalpy Power Peaking Factor as well as the maximization of the cycle length. (author)
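
    The random-keys idea at the heart of PSORK fits in two lines: each particle keeps a continuous vector, and ranking its entries yields a permutation, so a continuous optimizer can search an ordering space. The sketch below is a generic illustration of that decoding step, not the Angra 1 reload model.

        import numpy as np

        def decode_random_keys(position):
            # Sorting the continuous position vector produces a permutation.
            return np.argsort(position)

        rng = np.random.default_rng(1)
        particle = rng.random(8)  # one particle encoding an order of 8 items
        print(particle.round(2), "->", decode_random_keys(particle))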

  6. Particle swarm optimization with random keys applied to the nuclear reactor reload problem

    Energy Technology Data Exchange (ETDEWEB)

    Meneses, Anderson Alvarenga de Moura [Universidade Federal do Rio de Janeiro (UFRJ), RJ (Brazil). Coordenacao dos Programas de Pos-graduacao de Engenharia (COPPE). Programa de Engenharia Nuclear; Fundacao Educacional de Macae (FUNEMAC), RJ (Brazil). Faculdade Professor Miguel Angelo da Silva Santos; Machado, Marcelo Dornellas; Medeiros, Jose Antonio Carlos Canedo; Schirru, Roberto [Universidade Federal do Rio de Janeiro (UFRJ), RJ (Brazil). Coordenacao dos Programas de Pos-graduacao de Engenharia (COPPE). Programa de Engenharia Nuclear]. E-mails: ameneses@con.ufrj.br; marcelo@lmp.ufrj.br; canedo@lmp.ufrj.br; schirru@lmp.ufrj.br

    2007-07-01

    In 1995, Kennedy and Eberhart presented Particle Swarm Optimization (PSO), an Artificial Intelligence metaheuristic technique to optimize non-linear continuous functions. The concept of Swarm Intelligence is based on the social aspects of intelligence, that is, the ability of individuals to learn from their own experience in a group as well as to take advantage of the performance of other individuals. Some PSO models for discrete search spaces have been developed for combinatorial optimization, although none of them presented satisfactory results for a combinatorial problem such as the nuclear reactor fuel reloading problem (NRFRP). In this sense, we developed Particle Swarm Optimization with Random Keys (PSORK) in previous research to solve combinatorial problems. Experience demonstrated that PSORK performed comparably to or better than other techniques. Thus, the PSORK metaheuristic is being applied in optimization studies of the NRFRP for the Angra 1 Nuclear Power Plant. Results will be compared with Genetic Algorithms and the manual method provided by a specialist. In this experience, the problem is modeled for an eighth-core symmetry and three-dimensional geometry, aiming at the minimization of the Nuclear Enthalpy Power Peaking Factor as well as the maximization of the cycle length. (author)

  7. Method to detect substances in a body and device to apply the method

    International Nuclear Information System (INIS)

    Voigt, H.

    1978-01-01

    The method and the measuring arrangement serve to localize pellets doped with Gd2O3 lying between UO2 pellets within a reactor fuel rod. The fuel rod penetrates a homogeneous magnetic field generated between two pole shoes. The magnetic stray field caused by the doping substance is then measured by means of Hall probes (e.g. InAs) for quantitative discrimination from UO2. The position of the Gd2O3-doped pellets is determined by moving the fuel rod through the magnetic field in a direction perpendicular to the homogeneous field. The measuring signal arises from the different susceptibility of Gd2O3 with respect to UO2. (DG) [de

  8. Performance of Some Metaheuristic Algorithms for Multiuser Detection in TTCM-Assisted Rank-Deficient SDMA-OFDM System

    Directory of Open Access Journals (Sweden)

    Haris PA

    2010-01-01

    Full Text Available We propose two novel and computationally efficient metaheuristic algorithms based on Artificial Bee Colony (ABC) and Particle Swarm Optimization (PSO) principles for Multiuser Detection (MUD) in a Turbo Trellis Coded Modulation- (TTCM-) based Space Division Multiple Access (SDMA) Orthogonal Frequency Division Multiplexing (OFDM) system. Unlike gradient descent methods, both the ABC and PSO methods ensure minimization of the objective function without the solution being trapped in local optima. These techniques are capable of achieving excellent performance in the so-called overloaded system, where the number of transmit antennas is higher than the number of receiver antennas, in which the known classic MUDs fail. The performance of the proposed algorithms is compared with each other and also against Genetic Algorithm- (GA-) based MUD. Simulation results establish better performance, computational efficiency, and convergence characteristics for the ABC and PSO methods. It is seen that the proposed detectors achieve performance similar to that of the well-known optimum Maximum Likelihood Detector (MLD) at a significantly lower computational complexity and outperform the traditional MMSE MUD.
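
    As a reference point for the PSO principle mentioned here (not the proposed MUD detector itself), a minimal global-best PSO in NumPy looks as follows; the swarm size, coefficients, and test function are arbitrary choices.

        import numpy as np

        def pso_minimize(f, dim, n=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
            # Velocities blend inertia, a pull toward each particle's own best
            # point, and a pull toward the swarm's global best.
            rng = np.random.default_rng(seed)
            x = rng.uniform(-5.0, 5.0, (n, dim))
            v = np.zeros((n, dim))
            pbest = x.copy()
            pval = np.apply_along_axis(f, 1, x)
            g = pbest[pval.argmin()].copy()
            for _ in range(iters):
                r1, r2 = rng.random((2, n, dim))
                v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
                x = x + v
                val = np.apply_along_axis(f, 1, x)
                better = val < pval
                pbest[better], pval[better] = x[better], val[better]
                g = pbest[pval.argmin()].copy()
            return g, pval.min()

        sphere = lambda z: float(np.sum(z * z))
        best, fbest = pso_minimize(sphere, dim=10)
        print(f"best objective ~ {fbest:.2e}")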

  9. Several methods applied to measuring residual stress in a known specimen

    International Nuclear Information System (INIS)

    Prime, M.B.; Rangaswamy, P.; Daymond, M.R.; Abelin, T.G.

    1998-01-01

    In this study, a beam with a precisely known residual stress distribution provided a unique experimental opportunity. A plastically bent beam was carefully prepared in order to provide a specimen with a known residual stress profile. 21Cr-6Ni-9Mn austenitic stainless steel was obtained as 43 mm square forged stock. Several methods were used to determine the residual stresses, and the results were compared to the known values. Some subtleties of applying the various methods were exposed

  10. Method of applying single higher order polynomial basis function over multiple domains

    CSIR Research Space (South Africa)

    Lysko, AA

    2010-03-01

    Full Text Available The method of moments (MoM) with higher-order polynomial basis functions is applied to a surface form of the electric field integral equation, under the thin-wire approximation. The main advantage of the proposed method is that it permits a reduction of the required number of unknowns when...

  11. Method of applying single higher order polynomial basis function over multiple domains

    CSIR Research Space (South Africa)

    Lysko, AA

    2010-03-01

    Full Text Available A novel method has been devised whereby one set of higher-order polynomial-based basis functions can be applied over several wire segments, thus permitting the number of unknowns to be decoupled from the number of segments, and so from the geometrical...

  12. 21 CFR 111.320 - What requirements apply to laboratory methods for testing and examination?

    Science.gov (United States)

    2010-04-01

    21 CFR 111.320 (2010): What requirements apply to laboratory methods for testing and examination? Title 21, Food and Drugs; Food and Drug Administration, Department of Health and Human Services; Food for Human Consumption; Current Good Manufacturing...

  13. Critical path method applied to research project planning: Fire Economics Evaluation System (FEES)

    Science.gov (United States)

    Earl B. Anderson; R. Stanton Hales

    1986-01-01

    The critical path method (CPM) of network analysis (a) depicts precedence among the many activities in a project by a network diagram; (b) identifies critical activities by calculating their starting, finishing, and float times; and (c) displays possible schedules by constructing time charts. CPM was applied to the development of the Forest Service's Fire...
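
    The forward/backward pass at the core of CPM is compact enough to sketch; the toy activity network below is invented, and activities with zero total float form the critical path.

        from collections import defaultdict

        def critical_path(duration, preds):
            # Forward pass: earliest start/finish; backward pass: latest
            # start/finish; total float = LS - ES per activity.
            order = list(duration)  # keys assumed in topological order
            ES, EF = {}, {}
            for a in order:
                ES[a] = max((EF[p] for p in preds[a]), default=0.0)
                EF[a] = ES[a] + duration[a]
            horizon = max(EF.values())
            succs = defaultdict(list)
            for a in order:
                for p in preds[a]:
                    succs[p].append(a)
            LS, LF = {}, {}
            for a in reversed(order):
                LF[a] = min((LS[s] for s in succs[a]), default=horizon)
                LS[a] = LF[a] - duration[a]
            return {a: LS[a] - ES[a] for a in order}

        dur = {"A": 3, "B": 2, "C": 4, "D": 1}
        pre = {"A": [], "B": ["A"], "C": ["A"], "D": ["B", "C"]}
        floats = critical_path(dur, pre)
        print([a for a, fl in floats.items() if fl == 0])  # -> ['A', 'C', 'D']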

  14. Applying terminological methods and description logic for creating and implementing and ontology on inhibition

    DEFF Research Database (Denmark)

    Zambach, Sine; Madsen, Bodil Nistrup

    2009-01-01

    By applying formal terminological methods to model an ontology within the domain of enzyme inhibition, we aim to clarify concepts and to obtain consistency. Additionally, we propose a procedure for implementing this ontology in OWL with the aim of obtaining a strict structure which can form...

  15. [Alternative medicine methods applied in patients before surgical treatment of lumbar discopathy].

    Science.gov (United States)

    Rutkowska, E; Kamiński, S; Kucharczyk, A

    2001-01-01

    Case records of 200 patients operated on in 1998/99 for herniated lumbar disc in Neurosurgery Dept. showed that 95 patients (47.5%) had been treated previously by 148 alternative medical or non-medical procedures. The authors discuss the problem of non-conventional treatment methods applied for herniated lumbar disc by professionals or non professionals. The procedures are often dangerous.

  16. The Effect Of The Applied Performance Methods On The Objective Of The Managers

    Directory of Open Access Journals (Sweden)

    Derya Kara

    2009-09-01

    Full Text Available Within the changing concept of management, employees and employers constantly feel the need to keep up with the changing environment. In this regard, performance evaluation activities are regarded as an indispensable element. Data obtained from performance evaluation activities shed light on the development of employees and enable enterprises to stand firm in a fiercely competitive environment. This study sets out to determine the effect of the applied performance evaluation methods on the objectives pursued by managers. The population of the study comprises 2184 managers of 182 five-star hotel enterprises operating in Antalya, İzmir and Muğla; the sample comprised 578 managers. The results of the study suggest that the applied performance evaluation method does affect the managers' objective: the objective of managers applying the 360-degree performance evaluation method is found to be “finding out the training and development needs”, while the objective of managers applying conventional performance evaluation methods is found to be “enhancing the existing performance”.

  17. Applying Activity Based Costing (ABC) Method to Calculate Cost Price in Hospital and Remedy Services.

    Science.gov (United States)

    Rajabi, A; Dabiri, A

    2012-01-01

    Activity Based Costing (ABC) is one of the new methods that began appearing as a costing methodology in the 1990s. It calculates cost price by determining the usage of resources. In this study, the ABC method was used for calculating the cost price of remedial services in hospitals. To apply the ABC method, Shahid Faghihi Hospital was selected. First, hospital units were divided into three main departments: administrative, diagnostic, and hospitalization. Second, activity centers were defined by the activity analysis method. Third, the costs of administrative activity centers were allocated to the diagnostic and operational departments based on cost drivers. Finally, with regard to the usage of cost objectives from the services of activity centers, the cost price of medical services was calculated. The cost price from the ABC method differs significantly from the tariff method. In addition, the high proportion of indirect costs in the hospital indicates that the capacities of resources are not used properly. The cost price of remedial services is not properly calculated by the tariff method when compared with the ABC method: ABC calculates the cost price through suitable allocation mechanisms, whereas the tariff method is based on a fixed price. In addition, ABC provides useful information about the amount and composition of the cost price of services.
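
    The core ABC allocation step, the cost of each activity center assigned to services in proportion to their consumption of that center's cost driver, can be sketched in a few lines; all figures below are invented and unrelated to the hospital study.

        # toy activity-based costing allocation
        activities = {"admission": (12000.0, "patients"), "lab": (30000.0, "tests")}
        usage = {"service_A": {"patients": 300, "tests": 1200},
                 "service_B": {"patients": 700, "tests": 800}}
        driver_totals = {drv: sum(u[drv] for u in usage.values())
                         for _, drv in activities.values()}
        cost_price = {s: sum(amount * u[drv] / driver_totals[drv]
                             for amount, drv in activities.values())
                      for s, u in usage.items()}
        print(cost_price)  # {'service_A': 21600.0, 'service_B': 20400.0}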

  18. Iterated local search and record-to-record travel applied to the fixed charge transportation problem

    DEFF Research Database (Denmark)

    Andersen, Jeanne; Klose, Andreas

    , transportation costs do, however, include a fixed charge. Iterated local search and record-to-record travel are both simple local-search-based meta-heuristics that, to our knowledge, have not yet been applied to the FCTP. In this paper, we apply both types of search strategies and combine them into a single...

  19. Polar Bear Optimization Algorithm: Meta-Heuristic with Fast Population Movement and Dynamic Birth and Death Mechanism

    Directory of Open Access Journals (Sweden)

    Dawid Połap

    2017-09-01

    Full Text Available In the proposed article, we present a nature-inspired optimization algorithm, which we call the Polar Bear Optimization Algorithm (PBO). The inspiration for the algorithm comes from the way polar bears hunt to survive in harsh arctic conditions. These carnivorous mammals are active all year round. A frosty climate, unfavorable to other animals, has made polar bears adapt to a specific mode of exploration and hunting over large areas, not only over ice but also in water. The proposed novel mathematical model of the way polar bears move in the search for food and hunt can be a valuable optimization method for various theoretical and practical problems. Optimization is very similar to nature: just as optimization searches for optimal solutions to mathematical models, animals search for optimal conditions to develop in their natural environments. In this method, we have used a model of polar bear behaviors as a search engine for optimal solutions. The proposed simulated adaptation to harsh winter conditions is an advantage for local and global search, while a birth and death mechanism controls the population. The proposed PBO was evaluated and compared to other meta-heuristic algorithms using sample test functions and some classical engineering problems. Experimental research results were compared to other algorithms and analyzed using various parameters. The analysis allowed us to identify the leading advantages, which are rapid recognition of the area by the relevant population and an efficient birth and death mechanism that improves global and local search within the solution space.
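
    As a loose, hedged sketch of the mechanisms this abstract names, shrinking local perturbations plus a dynamic birth-and-death step, and explicitly not the authors' exact PBO update rules:

        import numpy as np

        def birth_death_search(f, dim, pop=30, iters=300, seed=0):
            # Local exploitation with a shrinking step, plus a birth/death
            # step that replaces the worst individual with a mutated copy of
            # the best one to keep the population concentrated.
            rng = np.random.default_rng(seed)
            X = rng.uniform(-5.0, 5.0, (pop, dim))
            fx = np.apply_along_axis(f, 1, X)
            for t in range(iters):
                step = 0.5 * (1.0 - t / iters) + 1e-3
                trial = X + rng.normal(0.0, step, X.shape)
                ft = np.apply_along_axis(f, 1, trial)
                better = ft < fx
                X[better], fx[better] = trial[better], ft[better]
                worst, best = fx.argmax(), fx.argmin()
                X[worst] = X[best] + rng.normal(0.0, step, dim)  # birth/death
                fx[worst] = f(X[worst])
            return X[fx.argmin()], fx.min()

        best, val = birth_death_search(lambda z: float(np.sum(z * z)), dim=5)
        print(f"best objective ~ {val:.2e}")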

  20. An applied study using systems engineering methods to prioritize green systems options

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Sonya M [Los Alamos National Laboratory; Macdonald, John M [Los Alamos National Laboratory

    2009-01-01

    For many years, there have been questions about the effectiveness of applying different green solutions. If you're building a home and wish to use green technologies, where do you start? While all technologies sound promising, which will perform the best over time? All this has to be considered within the cost and schedule of the project. The amount of information available on the topic can be overwhelming. We seek to examine if Systems Engineering methods can be used to help people choose and prioritize technologies that fit within their project and budget. Several methods are used to gain perspective into how to select the green technologies, such as the Analytic Hierarchy Process (AHP) and Kepner-Tregoe. In our study, subjects applied these methods to analyze cost, schedule, and trade-offs. Results will document whether the experimental approach is applicable to defining system priorities for green technologies.

  1. Water demand forecasting: review of soft computing methods.

    Science.gov (United States)

    Ghalehkhondabi, Iman; Ardjmand, Ehsan; Young, William A; Weckman, Gary R

    2017-07-01

    Demand forecasting plays a vital role in resource management for governments and private companies. Considering the scarcity of water and its inherent constraints, demand management and forecasting in this domain are critically important. Several soft computing techniques have been developed over the last few decades for water demand forecasting. This study focuses on soft computing methods of water consumption forecasting published between 2005 and 2015. These methods include artificial neural networks (ANNs), fuzzy and neuro-fuzzy models, support vector machines, metaheuristics, and system dynamics. Furthermore, it is discussed that while ANNs have been superior in many short-term forecasting cases, it is still very difficult to pick a single method as the overall best. According to the literature, various methods and their hybrids are applied to water demand forecasting. However, it seems soft computing has a lot more to contribute to water demand forecasting. These contribution areas include, but are not limited to, various ANN architectures, unsupervised methods, deep learning, various metaheuristics, and ensemble methods. Moreover, it is found that soft computing methods are mainly used for short-term demand forecasting.

  2. Economic consequences assessment for scenarios and actual accidents do the same methods apply

    International Nuclear Information System (INIS)

    Brenot, J.

    1991-01-01

    Methods for estimating the economic consequences of major technological accidents, and their corresponding computer codes, are briefly presented with emphasis on the basic choices. When applied to hypothetical scenarios, those methods give results that are of interest to risk managers from a decision-aiding perspective. Simultaneously, the various costs and the procedures for their estimation are reviewed for some actual accidents (Three Mile Island, Chernobyl, ...). These costs are used in a perspective of litigation and compensation. The comparison of the methods used and the cost estimates obtained for scenarios and actual accidents shows the points of convergence and the discrepancies, which are discussed

  3. Applying the Support Vector Machine Method to Matching IRAS and SDSS Catalogues

    Directory of Open Access Journals (Sweden)

    Chen Cao

    2007-10-01

    Full Text Available This paper presents results of applying a machine learning technique, the Support Vector Machine (SVM), to the astronomical problem of matching the Infra-Red Astronomical Satellite (IRAS) and Sloan Digital Sky Survey (SDSS) object catalogues. In this study, the IRAS catalogue has much larger positional uncertainties than those of the SDSS. A model was constructed by applying the supervised learning algorithm (SVM) to a set of training data. Validation of the model shows a good identification performance (∼ 90% correct), better than that derived from classical cross-matching algorithms, such as the likelihood-ratio method used in previous studies.
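
    A toy stand-in for the supervised matching task, using scikit-learn's SVC: each row holds features of a candidate IRAS-SDSS pair, and the label marks true matches. The features (positional offset, magnitude difference) and all data here are assumptions for illustration, not the paper's training set.

        import numpy as np
        from sklearn.model_selection import cross_val_score
        from sklearn.svm import SVC

        rng = np.random.default_rng(0)
        X_true = rng.normal([0.5, 0.2], 0.3, (200, 2))   # true matches
        X_false = rng.normal([2.0, 1.5], 0.6, (200, 2))  # chance alignments
        X = np.vstack([X_true, X_false])
        y = np.array([1] * 200 + [0] * 200)
        clf = SVC(kernel="rbf", C=1.0, gamma="scale")
        print("CV accuracy:", cross_val_score(clf, X, y, cv=5).mean().round(3))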

  4. A stochastic root finding approach: the homotopy analysis method applied to Dyson-Schwinger equations

    Science.gov (United States)

    Pfeffer, Tobias; Pollet, Lode

    2017-04-01

    We present the construction and stochastic summation of rooted-tree diagrams, based on the expansion of a root finding algorithm applied to the Dyson-Schwinger equations. The mathematical formulation shows superior convergence properties compared to the bold diagrammatic Monte Carlo approach and the developed algorithm allows one to tackle generic high-dimensional integral equations, to avoid the curse of dealing explicitly with high-dimensional objects and to access non-perturbative regimes. The sign problem remains the limiting factor, but it is not found to be worse than in other approaches. We illustrate the method for φ^4 theory but note that it applies in principle to any model.

  5. Control Method for Electromagnetic Unmanned Robot Applied to Automotive Test Based on Improved Smith Predictor Compensator

    Directory of Open Access Journals (Sweden)

    Gang Chen

    2015-07-01

    Full Text Available A new control method for an electromagnetic unmanned robot applied to automotive testing (URAT), based on an improved Smith predictor compensator and considering a time delay, is proposed. The mechanical system structure and the control system structure are presented. The electromagnetic URAT adopts pulse width modulation (PWM) control, with displacement and current as a double closed-loop control strategy. A coordinated control method for the multiple manipulators of the electromagnetic URAT, emulating a skilled human driver with intelligent decision-making ability, is provided, and an improved Smith predictor compensator controller for the electromagnetic URAT considering a time delay is designed. Experiments are conducted using a Ford FOCUS automobile. Comparisons between the PID control method and the proposed method are conducted. Experimental results show that the proposed method can achieve accurate tracking of the target vehicle speed and reduce the mileage deviation of autonomous driving, which meets the requirements of national test standards.
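
    The Smith predictor structure referred to here can be demonstrated on a first-order-plus-dead-time plant. The sketch below is a generic discrete-time simulation with invented plant and PI gains, not the URAT controller of the paper: the PI loop is closed around the delay-free internal model, while the (plant minus delayed-model) term corrects for mismatch.

        import numpy as np

        def smith_demo(K=1.0, tau=2.0, L=1.0, dt=0.01, T=20.0, kp=2.0, ki=1.0):
            # Plant: y' = (-y + K*u(t - L)) / tau, controlled by PI + Smith
            # predictor toward a unit step setpoint.
            n, d = int(T / dt), int(L / dt)
            y = ym = ymd = integ = 0.0
            ubuf = np.zeros(d + 1)  # transport-delay buffer
            r = 1.0
            for _ in range(n):
                e = r - (ym + (y - ymd))           # Smith-compensated error
                integ += e * dt
                u = kp * e + ki * integ
                ubuf = np.roll(ubuf, 1)
                ubuf[0] = u
                ud = ubuf[-1]                      # u delayed by L
                y += dt * (-y + K * ud) / tau      # true plant
                ym += dt * (-ym + K * u) / tau     # model without delay
                ymd += dt * (-ymd + K * ud) / tau  # model with delay
            return y

        print(f"output after 20 s: {smith_demo():.3f} (setpoint 1.0)")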

  6. Optimization methods of the net emission computation applied to cylindrical sodium vapor plasma

    International Nuclear Information System (INIS)

    Hadj Salah, S.; Hajji, S.; Ben Hamida, M. B.; Charrada, K.

    2015-01-01

    An optimization method based on a physical analysis of the temperature profile and the different terms in the radiative transfer equation is developed to reduce the computation time of the net emission. This method has been applied to a cylindrical discharge in sodium vapor. Numerical results show a relative error in spectral flux density values lower than 5% with respect to the exact solution, whereas the computation time is about 10 orders of magnitude less. This method is followed by a spectral method based on the rearrangement of the line profiles. Results are shown for a Lorentzian profile, and they demonstrate a relative error lower than 10% with respect to the reference method and a gain in computation time of about 20 orders of magnitude

  7. Multigrid method applied to the solution of an elliptic, generalized eigenvalue problem

    Energy Technology Data Exchange (ETDEWEB)

    Alchalabi, R.M. [BOC Group, Murray Hill, NJ (United States); Turinsky, P.J. [North Carolina State Univ., Raleigh, NC (United States)

    1996-12-31

    The work presented in this paper is concerned with the development of an efficient MG algorithm for the solution of an elliptic, generalized eigenvalue problem. The application is specifically to the multigroup neutron diffusion equation, which is discretized by utilizing the Nodal Expansion Method (NEM). The underlying relaxation method is the Power Method, also known as the Outer-Inner Method. The inner iterations are completed using Multi-color Line SOR, and the outer iterations are accelerated using the Chebyshev Semi-iterative Method. Furthermore, the MG algorithm utilizes the consistent homogenization concept to construct the restriction operator, and a form function as a prolongation operator. The MG algorithm was integrated into the reactor neutronic analysis code NESTLE, and numerical results were obtained from solving production-type benchmark problems.

  8. Least Square NUFFT Methods Applied to 2D and 3D Radially Encoded MR Image Reconstruction

    Science.gov (United States)

    Song, Jiayu; Liu, Qing H.; Gewalt, Sally L.; Cofer, Gary; Johnson, G. Allan

    2009-01-01

    Radially encoded MR imaging (MRI) has gained increasing attention in applications such as hyperpolarized gas imaging, contrast-enhanced MR angiography, and dynamic imaging, due to its motion insensitivity and improved artifact properties. However, since the technique collects k-space samples nonuniformly, multidimensional (especially 3D) radially sampled MRI image reconstruction is challenging. The balance between reconstruction accuracy and speed becomes critical when a large data set is processed. Kaiser-Bessel gridding reconstruction has been widely used for non-Cartesian reconstruction. The objective of this work is to provide an alternative reconstruction option in high dimensions with on-the-fly kernels calculation. The work develops general multi-dimensional least square nonuniform fast Fourier transform (LS-NUFFT) algorithms and incorporates them into a k-space simulation and image reconstruction framework. The method is then applied to reconstruct the radially encoded k-space, although the method addresses general nonuniformity and is applicable to any non-Cartesian patterns. Performance assessments are made by comparing the LS-NUFFT based method with the conventional Kaiser-Bessel gridding method for 2D and 3D radially encoded computer simulated phantoms and physically scanned phantoms. The results show that the LS-NUFFT reconstruction method has better accuracy-speed efficiency than the Kaiser-Bessel gridding method when the kernel weights are calculated on the fly. The accuracy of the LS-NUFFT method depends on the choice of scaling factor, and it is found that for a particular conventional kernel function, using its corresponding deapodization function as scaling factor and utilizing it into the LS-NUFFT framework has the potential to improve accuracy. When a cosine scaling factor is used, in particular, the LS-NUFFT method is faster than Kaiser-Bessel gridding method because of a quasi closed-form solution. The method is successfully applied to 2D and

  9. Agglomeration multigrid methods with implicit Runge-Kutta smoothers applied to aerodynamic simulations on unstructured grids

    Science.gov (United States)

    Langer, Stefan

    2014-11-01

    For unstructured finite volume methods an agglomeration multigrid with an implicit multistage Runge-Kutta method as a smoother is developed for solving the compressible Reynolds averaged Navier-Stokes (RANS) equations. The implicit Runge-Kutta method is interpreted as a preconditioned explicit Runge-Kutta method. The construction of the preconditioner is based on an approximate derivative. The linear systems are solved approximately with a symmetric Gauss-Seidel method. To significantly improve this solution method grid anisotropy is treated within the Gauss-Seidel iteration in such a way that the strong couplings in the linear system are resolved by tridiagonal systems constructed along these directions of strong coupling. The agglomeration strategy is adapted to this procedure by taking into account exactly these anisotropies in such a way that a directional coarsening is applied along these directions of strong coupling. Turbulence effects are included by a Spalart-Allmaras model, and the additional transport-type equation is approximately solved in a loosely coupled manner with the same method. For two-dimensional and three-dimensional numerical examples and a variety of differently generated meshes we show the wide range of applicability of the solution method. Finally, we exploit the GMRES method to determine approximate spectral information of the linearized RANS equations. This approximate spectral information is used to discuss and compare characteristics of multistage Runge-Kutta methods.

  10. Applying Mathematical Optimization Methods to an ACT-R Instance-Based Learning Model.

    Science.gov (United States)

    Said, Nadia; Engelhart, Michael; Kirches, Christian; Körkel, Stefan; Holt, Daniel V

    2016-01-01

    Computational models of cognition provide an interface to connect advanced mathematical tools and methods to empirically supported theories of behavior in psychology, cognitive science, and neuroscience. In this article, we consider a computational model of instance-based learning, implemented in the ACT-R cognitive architecture. We propose an approach for obtaining mathematical reformulations of such cognitive models that improve their computational tractability. For the well-established Sugar Factory dynamic decision making task, we conduct a simulation study to analyze central model parameters. We show how mathematical optimization techniques can be applied to efficiently identify optimal parameter values with respect to different optimization goals. Beyond these methodological contributions, our analysis reveals the sensitivity of this particular task with respect to initial settings and yields new insights into how average human performance deviates from potential optimal performance. We conclude by discussing possible extensions of our approach as well as future steps towards applying more powerful derivative-based optimization methods.

  11. Meta-heuristic algorithms for parallel identical machines scheduling problem with weighted late work criterion and common due date.

    Science.gov (United States)

    Xu, Zhenzhen; Zou, Yongxing; Kong, Xiangjie

    2015-01-01

    To our knowledge, this paper investigates the first application of meta-heuristic algorithms to tackle the parallel machines scheduling problem with weighted late work criterion and common due date (P|d_j = d|∑ w_j Y_j). Late work is one of the performance measures of scheduling problems that considers the length of the late parts of particular jobs when evaluating the quality of a schedule. Since this problem is known to be NP-hard, three meta-heuristic algorithms, namely ant colony system, genetic algorithm, and simulated annealing, are designed and implemented. We also propose a novel algorithm named LDF (largest density first), which improves on LPT (longest processing time first). The computational experiments compared these meta-heuristic algorithms with LDF, LPT and LS (list scheduling), and the experimental results show that SA performs the best in most cases. However, LDF is better than SA under some conditions; moreover, the running time of LDF is much shorter than that of SA.
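
    The baseline rules compared in this abstract amount to greedy list scheduling on identical machines. In the sketch below the "density" key (weight divided by processing time) is our guess for illustration; the paper defines its own LDF rule.

        import heapq

        def list_schedule(jobs, m, key):
            # Sort jobs by `key`, then always give the next job to the
            # currently least-loaded machine.
            loads = [(0.0, i) for i in range(m)]
            heapq.heapify(loads)
            assign = [[] for _ in range(m)]
            for job in sorted(jobs, key=key, reverse=True):
                load, i = heapq.heappop(loads)
                assign[i].append(job)
                heapq.heappush(loads, (load + job[0], i))
            return assign

        jobs = [(4, 2), (3, 9), (2, 2), (6, 3), (1, 5)]  # (time, weight)
        print(list_schedule(jobs, m=2, key=lambda j: j[0]))        # LPT
        print(list_schedule(jobs, m=2, key=lambda j: j[1] / j[0])) # density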

  12. The 2D Spectral Intrinsic Decomposition Method Applied to Image Analysis

    Directory of Open Access Journals (Sweden)

    Samba Sidibe

    2017-01-01

    Full Text Available We propose a new method for auto-adaptive image decomposition and recomposition based on the two-dimensional version of the Spectral Intrinsic Decomposition (SID). We introduce a faster diffusivity function for the computation of the mean envelope operator, which provides the components of the SID algorithm for any signal. The 2D version of the SID algorithm is implemented and applied to some well-known test images. We extracted relevant components and obtained promising results in image analysis applications.

  13. Accuracy of the Adomian decomposition method applied to the Lorenz system

    International Nuclear Information System (INIS)

    Hashim, I.; Noorani, M.S.M.; Ahmad, R.; Bakar, S.A.; Ismail, E.S.; Zakaria, A.M.

    2006-01-01

    In this paper, the Adomian decomposition method (ADM) is applied to the famous Lorenz system. The ADM yields an analytical solution in terms of a rapidly convergent infinite power series with easily computable terms. Comparisons between the decomposition solutions and the fourth-order Runge-Kutta (RK4) numerical solutions are made for various time steps. In particular we look at the accuracy of the ADM as the Lorenz system changes from a non-chaotic system to a chaotic one
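
    For the Lorenz system the nonlinearities are quadratic, so the Adomian polynomials reduce to Cauchy products and the decomposition series coincides with the power series in t. A short sketch of the recursive coefficient computation follows, with the usual caveat that the series converges only on a short interval and is therefore stepped and restarted in practice.

        import numpy as np

        def adm_lorenz(x0, y0, z0, sigma=10.0, rho=28.0, beta=8.0 / 3.0, N=20):
            # Coefficients a_n, b_n, c_n of x, y, z as series in t, from
            # x' = sigma*(y - x), y' = rho*x - y - x*z, z' = x*y - beta*z.
            a, b, c = (np.zeros(N + 1) for _ in range(3))
            a[0], b[0], c[0] = x0, y0, z0
            for n in range(N):
                xz = sum(a[i] * c[n - i] for i in range(n + 1))
                xy = sum(a[i] * b[n - i] for i in range(n + 1))
                a[n + 1] = sigma * (b[n] - a[n]) / (n + 1)
                b[n + 1] = (rho * a[n] - b[n] - xz) / (n + 1)
                c[n + 1] = (xy - beta * c[n]) / (n + 1)
            return a, b, c

        def eval_series(coef, t):
            return sum(cn * t ** n for n, cn in enumerate(coef))

        a, b, c = adm_lorenz(1.0, 1.0, 1.0)
        t = 0.05  # keep t small: truncated series, short convergence interval
        print(eval_series(a, t), eval_series(b, t), eval_series(c, t))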

  14. Applying the Goal-Question-Indicator-Metric (GQIM) Method to Perform Military Situational Analysis

    Science.gov (United States)

    2016-05-11

    When developing situational awareness in support of military operations, the U.S. armed forces use a mnemonic, or memory aide, to... The subject matter covered in this technical note evolved from a question from Capt. Tomomi Ogasawara, Japan Ground Self...

  15. Applied Ecosystem Analysis - - a Primer : EDT the Ecosystem Diagnosis and Treatment Method.

    Energy Technology Data Exchange (ETDEWEB)

    Lestelle, Lawrence C.; Mobrand, Lars E.

    1996-05-01

    The aim of this document is to inform and instruct the reader about an approach to ecosystem management that is based upon salmon as an indicator species. It is intended to provide natural resource management professionals with the background information needed to answer questions about why and how to apply the approach. The methods and tools the authors describe are continually updated and refined, so this primer should be treated as a first iteration of a sequentially revised manual.

  16. Applied ecosystem analysis - a primer; the ecosystem diagnosis and treatment method

    International Nuclear Information System (INIS)

    Lestelle, L.C.; Mobrand, L.E.; Lichatowich, J.A.; Vogel, T.S.

    1996-05-01

    The aim of this document is to inform and instruct the reader about an approach to ecosystem management that is based upon salmon as an indicator species. It is intended to provide natural resource management professionals with the background information needed to answer questions about why and how to apply the approach. The methods and tools the authors describe are continually updated and refined, so this primer should be treated as a first iteration of a sequentially revised manual

  17. A Comparison of Parametric and Non-Parametric Methods Applied to a Likert Scale

    OpenAIRE

    Mircioiu, Constantin; Atkinson, Jeffrey

    2017-01-01

    A trenchant and passionate dispute over the use of parametric versus non-parametric methods for the analysis of Likert scale ordinal data has raged for the past eight decades. The answer is not a simple “yes” or “no” but is related to hypotheses, objectives, risks, and paradigms. In this paper, we took a pragmatic approach. We applied both types of methods to the analysis of actual Likert data on responses from different professional subgroups of European pharmacists regarding competencies fo...

  18. A method for finding the ridge between saddle points applied to rare event rate estimates

    DEFF Research Database (Denmark)

    Maronsson, Jon Bergmann; Jónsson, Hannes; Vegge, Tejs

    2012-01-01

    A method is presented for finding the ridge between first order saddle points on a multidimensional surface. For atomic scale systems, such saddle points on the energy surface correspond to atomic rearrangement mechanisms. Information about the ridge can be used to test the validity of the harmonic...... to the path. The method is applied to Al adatom diffusion on the Al(100) surface to find the ridge between 2-, 3- and 4-atom concerted displacements and hop mechanisms. A correction to the harmonic approximation of transition state theory was estimated by direct evaluation of the configuration integral along...

  19. Development of a tracking method for augmented reality applied to nuclear plant maintenance work

    International Nuclear Information System (INIS)

    Shimoda, Hiroshi; Maeshima, Masayuki; Nakai, Toshinori; Bian, Zhiqiang; Ishii, Hirotake; Yoshikawa, Hidekazu

    2005-01-01

    In this paper, a plant maintenance support method is described which employs a state-of-the-art information technology, Augmented Reality (AR), in order to improve the efficiency of NPP maintenance work and to prevent human error. Although AR has great potential to support various kinds of work in the real world, it is difficult to apply it to actual work support because the tracking method is the bottleneck for practical use. In this study, a bar code marker tracking method is proposed to apply an AR system to maintenance work support in the NPP field. The proposed method calculates the user's position and orientation in real time from two long markers captured by the user-mounted camera. The markers can be easily pasted on the pipes in the plant field, and they can be easily recognized at long distances, which reduces the number of markers to be pasted in the work field. Experiments were conducted in a laboratory and in the plant field to evaluate the proposed method. The results show that (1) fast and stable tracking can be realized, (2) the position error in the camera view is less than 1%, which is almost perfect under the limitation of camera resolution, and (3) it is relatively difficult to catch two markers in one camera view, especially at short distances

  20. Solving a Production Scheduling Problem by Means of Two Biobjective Metaheuristic Procedures

    Science.gov (United States)

    Toncovich, Adrián; Oliveros Colay, María José; Moreno, José María; Corral, Jiménez; Corral, Rafael

    2009-11-01

    Production planning and scheduling problems emphasize the need for management tools that help assure proper service levels to customers while maintaining production costs at acceptable levels and maximizing the utilization of the production facilities. Here, a production scheduling problem is addressed that arises in the context of the activities of a company dedicated to the manufacturing of furniture for children and teenagers. Two bicriteria metaheuristic procedures are proposed to solve the sequencing problem on the production equipment that constitutes the bottleneck of the company's production process. The production scheduling problem can be characterized as a general flow shop with sequence-dependent setup times and additional inventory constraints. Two objectives are simultaneously taken into account when the quality of candidate solutions is evaluated: the minimization of the completion time of all jobs, or makespan, and the minimization of the total flow time of all jobs. Both procedures are based on a local search strategy that follows the structure of the simulated annealing metaheuristic. In this case, both metaheuristic approaches generate a set of solutions that provides an approximation to the optimal Pareto front. In order to evaluate the performance of the proposed techniques, a series of experiments was conducted. After analyzing the results, it can be said that the solutions provided by both approaches are adequate from the viewpoint of quality as well as the computational effort involved in their generation. Nevertheless, a further refinement of the proposed procedures should be implemented with the aim of facilitating a quasi-automatic definition of the solution parameters.
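
    A building block behind such biobjective procedures is the non-dominated archive that accumulates the approximation to the Pareto front; a generic update function (not the authors' code) might look like this:

        def update_archive(archive, sol, fs):
            # Bi-objective minimization: the candidate enters only if no
            # archived point dominates it, and any archived points it
            # dominates are purged.
            def dominates(p, q):
                return all(a <= b for a, b in zip(p, q)) and p != q
            if any(dominates(fa, fs) for _, fa in archive):
                return archive
            archive = [(s, fa) for s, fa in archive if not dominates(fs, fa)]
            archive.append((sol, fs))
            return archive

        arc = []
        for sol, fs in [("s1", (5, 9)), ("s2", (6, 7)), ("s3", (4, 8)), ("s4", (5, 8))]:
            arc = update_archive(arc, sol, fs)
        print([fs for _, fs in arc])  # -> [(6, 7), (4, 8)]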

  1. Lessons learned applying CASE methods/tools to Ada software development projects

    Science.gov (United States)

    Blumberg, Maurice H.; Randall, Richard L.

    1993-01-01

    This paper describes the lessons learned from introducing CASE methods/tools into organizations and applying them to actual Ada software development projects. This paper will be useful to any organization planning to introduce a software engineering environment (SEE) or evolving an existing one. It contains management level lessons learned, as well as lessons learned in using specific SEE tools/methods. The experiences presented are from Alpha Test projects established under the STARS (Software Technology for Adaptable and Reliable Systems) project. They reflect the front end efforts by those projects to understand the tools/methods, initial experiences in their introduction and use, and later experiences in the use of specific tools/methods and the introduction of new ones.

  2. Method to integrate clinical guidelines into the electronic health record (EHR) by applying the archetypes approach.

    Science.gov (United States)

    Garcia, Diego; Moro, Claudia Maria Cabral; Cicogna, Paulo Eduardo; Carvalho, Deborah Ribeiro

    2013-01-01

    Clinical guidelines are documents that assist healthcare professionals, facilitating and standardizing diagnosis, management, and treatment in specific areas. Computerized guidelines as decision support systems (DSS) attempt to increase the performance of tasks and facilitate the use of guidelines. Most DSS are not integrated into the electronic health record (EHR), requiring some degree of rework, especially related to data collection. This study's objective was to present a method for integrating clinical guidelines into the EHR. The study first developed a way to identify the data and rules contained in the guidelines, and then incorporated the rules into an archetype-based EHR. The proposed method was tested on anemia treatment in the Chronic Kidney Disease Guideline. The phases of the method are: data and rules identification; archetype elaboration; rules definition and inclusion in an inference engine; and DSS-EHR integration and validation. The main feature of the proposed method is that it is generic and can be applied to any type of guideline.

  3. Applying the response matrix method for solving coupled neutron diffusion and transport problems

    International Nuclear Information System (INIS)

    Sibiya, G.S.

    1980-01-01

    The numerical determination of the flux and power distribution in the design of large power reactors is quite a time-consuming procedure if the space under consideration is to be subdivided into very fine meshes. Many computing methods applied in reactor physics (such as the finite-difference method) require considerable computing time. In this thesis it is shown that the response matrix method can be successfully used as an alternative approach to solving the two-dimensional diffusion equation. Furthermore, it is shown that sufficient accuracy of the method is achieved by assuming a linear space dependence of the neutron currents on the boundaries of the geometries defined for the given space. (orig.) [de

  4. Seven-Spot Ladybird Optimization: A Novel and Efficient Metaheuristic Algorithm for Numerical Optimization

    Directory of Open Access Journals (Sweden)

    Peng Wang

    2013-01-01

    Full Text Available This paper presents a novel biologically inspired metaheuristic algorithm called seven-spot ladybird optimization (SLO. The SLO is inspired by recent discoveries on the foraging behavior of a seven-spot ladybird. In this paper, the performance of the SLO is compared with that of the genetic algorithm, particle swarm optimization, and artificial bee colony algorithms by using five numerical benchmark functions with multimodality. The results show that SLO has the ability to find the best solution with a comparatively small population size and is suitable for solving optimization problems with lower dimensions.

  5. Seven-spot ladybird optimization: a novel and efficient metaheuristic algorithm for numerical optimization.

    Science.gov (United States)

    Wang, Peng; Zhu, Zhouquan; Huang, Shuai

    2013-01-01

    This paper presents a novel biologically inspired metaheuristic algorithm called seven-spot ladybird optimization (SLO). The SLO is inspired by recent discoveries on the foraging behavior of a seven-spot ladybird. In this paper, the performance of the SLO is compared with that of the genetic algorithm, particle swarm optimization, and artificial bee colony algorithms by using five numerical benchmark functions with multimodality. The results show that SLO has the ability to find the best solution with a comparatively small population size and is suitable for solving optimization problems with lower dimensions.

  6. Models and Tabu Search Metaheuristics for Service Network Design with Asset-Balance Requirements

    DEFF Research Database (Denmark)

    Pedersen, Michael Berliner; Crainic, T.G.; Madsen, Oli B.G.

    2009-01-01

    This paper focuses on a generic model for service network design, which includes asset positioning and utilization through constraints on asset availability at terminals. We denote these relations as "design-balance constraints" and focus on the design-balanced capacitated multicommodity network design model, a generalization of the capacitated multicommodity network design model generally used in service network design applications. Both arc- and cycle-based formulations for the new model are presented. The paper also proposes a tabu search metaheuristic framework for the arc-based formulation...

  7. Applying some methods to process the data coming from the nuclear reactions

    International Nuclear Information System (INIS)

    Suleymanov, M.K.; Abdinov, O.B.; Belashev, B.Z.

    2010-01-01

    Full text: Methods for a posteriori enhancement of spectral-line resolution are proposed for processing data coming from nuclear reactions. The methods have been applied to data from nuclear reactions at high energies and make it possible to extract more detailed information on the structure of the spectra of particles emitted in these reactions. Nuclear reactions are the main source of information on the structure and physics of atomic nuclei, but the fragment spectra are typically complex, so extracting the information needed for an investigation is not simple. In the talk we discuss two methods for a posteriori enhancement of spectral-line resolution that can be useful for processing such complex data: the Fourier transformation method and the maximum entropy method. Complex structures were identified by these methods; at least two distinct points are indicated. We recently presented a talk showing the results of analyzing the structure of the pseudorapidity spectra of charged relativistic particles with ≥ 0.7 measured in Au+Em and Pb+Em reactions at AGS and SPS energies using the Fourier transformation and maximum entropy methods. The dependence of these spectra on the number of fast target protons was studied. The distributions visually show a plateau and a shoulder, i.e., at least three distinct points, and the plateau becomes wider in Pb+Em reactions. The existence of a plateau is required by parton models, and the maximum entropy method could confirm the existence of the plateau and the shoulder on the distributions. The figure shows the results of applying the maximum entropy method: the method indicates several clear distinct points, some of which coincide with those observed visually. We would like to note that the Fourier transformation method could not

  8. Should methods of correction for multiple comparisons be applied in pharmacovigilance?

    Directory of Open Access Journals (Sweden)

    Lorenza Scotti

    2015-12-01

    Full Text Available Purpose. In pharmacovigilance, spontaneous reporting databases are devoted to the early detection of adverse event ‘signals’ of marketed drugs. A common limitation of these systems is the wide number of concurrently investigated associations, implying a high probability of generating positive signals simply by chance. However, it is not clear whether methods aimed at adjusting for multiple testing are needed when at least some of the drug-outcome relationships under study are known. To this aim we applied a robust estimation method for the FDR (rFDR) particularly suitable in the pharmacovigilance context. Methods. We exploited the data available for the SAFEGUARD project to apply the rFDR estimation method to detect potential false positive signals of adverse reactions attributable to the use of non-insulin blood glucose lowering drugs. Specifically, the number of signals generated from the conventional disproportionality measures was compared with the number remaining after application of the rFDR adjustment method. Results. Among the 311 evaluable pairs (i.e., drug-event pairs with at least one adverse event report), 106 (34%) signals were considered significant in the conventional analysis. Among them, one was classified as a false positive signal according to the rFDR method. Conclusions. The results of this study suggest that when a restricted number of drug-outcome pairs is considered and warnings about some of them are known, multiple comparison methods for recognizing false positive signals are not as useful as theoretical considerations would suggest.

  9. Solution of the neutron point kinetics equations with temperature feedback effects applying the polynomial approach method

    Energy Technology Data Exchange (ETDEWEB)

    Tumelero, Fernanda, E-mail: fernanda.tumelero@yahoo.com.br [Universidade Federal do Rio Grande do Sul (UFRGS), Porto Alegre, RS (Brazil). Programa de Pos-Graduacao em Engenharia Mecanica; Petersen, Claudio Z.; Goncalves, Glenio A.; Lazzari, Luana, E-mail: claudiopeteren@yahoo.com.br, E-mail: gleniogoncalves@yahoo.com.br, E-mail: luana-lazzari@hotmail.com [Universidade Federal de Pelotas (DME/UFPEL), Capao do Leao, RS (Brazil). Instituto de Fisica e Matematica

    2015-07-01

    In this work, we present a solution of the Neutron Point Kinetics Equations with temperature feedback effects applying the Polynomial Approach Method. For the solution, we consider one and six groups of delayed neutron precursors with temperature feedback effects and constant reactivity. The main idea is to expand the neutron density, delayed neutron precursors and temperature as a power series, considering the reactivity as an arbitrary function of time in a relatively short time interval around an ordinary point. In the first interval one applies the initial conditions of the problem, and analytic continuation is used to determine the solutions of the next intervals. With the application of the Polynomial Approach Method it is possible to overcome the stiffness problem of the equations. In this way, we vary the time step size of the Polynomial Approach Method and analyze the precision and computational time. Moreover, we compare the method with different types of approximations (linear, quadratic and cubic) of the power series. The neutron density and temperature obtained by numerical simulation with the linear approximation are compared with results in the literature. (author)
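
    The essence of the Polynomial Approach Method is a power-series (Taylor) expansion advanced by analytic continuation from one short interval to the next. The sketch below applies that idea to one delayed neutron group with constant reactivity and no temperature feedback; the kinetics parameters are illustrative assumptions, and the recurrence is a plain Taylor-coefficient recursion rather than the authors' full scheme.

        import numpy as np

        # One-group point kinetics with constant reactivity (illustrative parameters):
        rho, beta, Lam, lam = 0.003, 0.0065, 1e-4, 0.08
        alpha = (rho - beta) / Lam

        def taylor_step(n0, c0, h, order=8):
            # Build Taylor coefficients of n(t) and C(t) about the current point,
            # then evaluate the truncated series at t = h (analytic continuation).
            a, c = [n0], [c0]
            for k in range(order):
                a.append((alpha * a[k] + lam * c[k]) / (k + 1))
                c.append((beta / Lam * a[k] - lam * c[k]) / (k + 1))
            powers = h ** np.arange(order + 1)
            return np.dot(a, powers), np.dot(c, powers)

        # Start from equilibrium precursor concentration C = beta*n/(Lam*lam).
        n, C, t, h = 1.0, beta / (Lam * lam), 0.0, 1e-3
        while t < 1.0:
            n, C = taylor_step(n, C, h)
            t += h
        print("n(1 s) =", n)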

  10. A methodological framework applied to the choice of the best method in replacement of nuclear systems

    International Nuclear Information System (INIS)

    Vianna Filho, Alfredo Marques

    2009-01-01

    The economic equipment replacement problem is a central question in Nuclear Engineering. On the one hand, new equipment is more attractive given its better performance, higher reliability, lower maintenance cost, etc.; new equipment, however, requires a higher initial investment. On the other hand, old equipment represents the other side of the trade-off, with lower performance, lower reliability and especially higher maintenance costs, but also lower financial and insurance costs. The weighting of all these costs can be made with deterministic and probabilistic methods applied to the study of equipment replacement. Two distinct types of problems are examined: substitution imposed by wear and substitution imposed by failures. To solve the problem of nuclear system substitution imposed by wear, deterministic methods are discussed; to solve the problem of substitution imposed by failures, probabilistic methods are discussed. The aim of this paper is to present a methodological framework for choosing the most useful method to apply to the problem of nuclear system substitution. (author)

  11. Power secant method applied to natural frequency extraction of Timoshenko beam structures

    Directory of Open Access Journals (Sweden)

    C.A.N. Dias

    Full Text Available This work deals with an improved plane frame formulation whose exact dynamic stiffness matrix (DSM) presents a null determinant only at the natural frequencies. In comparison with the classical DSM, the formulation presented here has some major advantages: local mode shapes are preserved in the formulation, so that for any positive frequency the DSM will never be ill-conditioned; and, in the absence of poles, it is possible to employ the secant method to obtain a more computationally efficient eigenvalue extraction procedure. Applying the procedure to the more general case of Timoshenko beams, we introduce a new technique, named "power deflation", that makes the secant method suitable for the transcendental nonlinear eigenvalue problems based on the improved DSM. In order to avoid overflow occurrences that can hinder the secant method iterations, limiting frequencies are formulated, with scaling also applied to the eigenvalue problem. Comparisons with results available in the literature demonstrate the strength of the proposed method. Computational efficiency is compared with solutions obtained both by FEM and by the Wittrick-Williams algorithm.
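
    The Timoshenko DSM itself is not reproduced in this record, but the role of the secant method is easy to illustrate on a simpler transcendental frequency equation. The sketch below finds the first root of cos(x)cosh(x) + 1 = 0, the clamped-free Euler-Bernoulli characteristic equation (x ≈ 1.8751); the starting points are arbitrary assumptions.

        import math

        def f(x):
            # Characteristic equation of a clamped-free Euler-Bernoulli beam.
            return math.cos(x) * math.cosh(x) + 1.0

        def secant(f, x0, x1, tol=1e-12, max_iter=50):
            for _ in range(max_iter):
                f0, f1 = f(x0), f(x1)
                x2 = x1 - f1 * (x1 - x0) / (f1 - f0)   # secant update
                if abs(x2 - x1) < tol:
                    return x2
                x0, x1 = x1, x2
            return x1

        root = secant(f, 1.5, 2.0)
        print(root)   # ~1.87510, the first nondimensional frequency parameter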

  12. A Metropolis algorithm combined with Nelder-Mead Simplex applied to nuclear reactor core design

    Energy Technology Data Exchange (ETDEWEB)

    Sacco, Wagner F. [Depto. de Modelagem Computacional, Instituto Politecnico, Universidade do Estado do Rio de Janeiro, R. Alberto Rangel, s/n, P.O. Box 972285, Nova Friburgo, RJ 28601-970 (Brazil)], E-mail: wfsacco@iprj.uerj.br; Filho, Hermes Alves; Henderson, Nelio [Depto. de Modelagem Computacional, Instituto Politecnico, Universidade do Estado do Rio de Janeiro, R. Alberto Rangel, s/n, P.O. Box 972285, Nova Friburgo, RJ 28601-970 (Brazil); Oliveira, Cassiano R.E. de [Nuclear and Radiological Engineering Program, George W. Woodruff School of Mechanical Engineering, Georgia Institute of Technology, Atlanta, GA 30332-0405 (United States)

    2008-05-15

    A hybridization of the recently introduced Particle Collision Algorithm (PCA) and the Nelder-Mead Simplex algorithm is introduced and applied to a core design optimization problem which was previously attacked by other metaheuristics. The optimization problem consists in adjusting several reactor cell parameters, such as dimensions, enrichment and materials, in order to minimize the average peak-factor in a three-enrichment-zone reactor, considering restrictions on the average thermal flux, criticality and sub-moderation. The new metaheuristic performs better than the genetic algorithm, particle swarm optimization, and the Metropolis algorithms PCA and the Great Deluge Algorithm, thus demonstrating its potential for other applications.

  13. A Metropolis algorithm combined with Nelder-Mead Simplex applied to nuclear reactor core design

    International Nuclear Information System (INIS)

    Sacco, Wagner F.; Filho, Hermes Alves; Henderson, Nelio; Oliveira, Cassiano R.E. de

    2008-01-01

    A hybridization of the recently introduced Particle Collision Algorithm (PCA) and the Nelder-Mead Simplex algorithm is introduced and applied to a core design optimization problem which was previously attacked by other metaheuristics. The optimization problem consists in adjusting several reactor cell parameters, such as dimensions, enrichment and materials, in order to minimize the average peak-factor in a three-enrichment-zone reactor, considering restrictions on the average thermal flux, criticality and sub-moderation. The new metaheuristic performs better than the genetic algorithm, particle swarm optimization, and the Metropolis algorithms PCA and the Great Deluge Algorithm, thus demonstrating its potential for other applications
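
    A minimal sketch of the hybridization pattern described above, assuming a toy multimodal objective in place of the reactor core model: a Metropolis random walk provides the global moves, and scipy's Nelder-Mead simplex periodically polishes the current point. All step sizes, temperatures and schedules are illustrative assumptions.

        import numpy as np
        from scipy.optimize import minimize

        def objective(x):
            # Toy multimodal stand-in for the core design objective.
            return np.sum(x**2) + 3 * np.sum(1 - np.cos(2 * np.pi * x))

        def metropolis_nm(f, x0, iters=2000, step=0.3, T=1.0, polish_every=200, seed=1):
            rng = np.random.default_rng(seed)
            x = np.asarray(x0, float)
            fx = f(x)
            best, best_f = x.copy(), fx
            for k in range(1, iters + 1):
                cand = x + rng.normal(scale=step, size=x.size)
                fc = f(cand)
                # Metropolis acceptance: always accept improvements,
                # sometimes accept worse points to escape local minima.
                if fc < fx or rng.random() < np.exp((fx - fc) / T):
                    x, fx = cand, fc
                if k % polish_every == 0:
                    res = minimize(f, x, method="Nelder-Mead")   # local simplex polish
                    if res.fun < fx:
                        x, fx = res.x, res.fun
                if fx < best_f:
                    best, best_f = x.copy(), fx
                T *= 0.999   # slow cooling
            return best, best_f

        print(metropolis_nm(objective, [2.5, -1.7]))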

  14. Error diffusion method applied to design combined CSG-BSG element used in ICF driver

    Science.gov (United States)

    Zhang, Yixiao; Yao, Xin; Gao, Fuhua; Guo, Yongkang; Wang, Lei; Hou, Xi

    2006-08-01

    In the final optics assembly of an Inertial Confinement Fusion (ICF) driver, Diffractive Optical Elements (DOEs) are applied to achieve some important functions, such as harmonic wave separation, beam sampling, beam smoothing and pulse compression. However, in order to optimize the system structure, decrease the energy loss and avoid damage from laser induction or self-focusing effects, the number of elements used in the ICF system, especially in the final optics assembly, should be minimized. The multiple exposure method has been proposed, for this purpose, to fabricate the BSG and CSG on one surface of a silica plate, but the multiple etch processes utilized in this method are complex and introduce large alignment errors. The error diffusion method, which is based on pulse-density modulation, has been widely used in signal processing and computer generated holography (CGH). In this paper, based on the error diffusion method in CGH and partial coherent imaging theory, we present a new method for designing the coding mask of a combined CSG-BSG element using error diffusion. With the designed mask, only one exposure process is needed to fabricate the combined element, which greatly reduces the fabrication difficulty and avoids the alignment error introduced by multiple etch processes. We illustrate the coding mask designed with this method for a CSG-BSG element and compare the intensity distribution of the spatial image in a partial coherent imaging system with the desired relief.
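
    The mask-design specifics of the CSG-BSG element are not reproduced here, but the underlying idea, error diffusion as pulse-density modulation, is the classic Floyd-Steinberg algorithm sketched below: each pixel is quantized to a binary value and the quantization error is pushed onto unprocessed neighbors, so the local pulse density tracks the gray level. The test pattern is an illustrative assumption.

        import numpy as np

        def floyd_steinberg(img):
            # Binarize a grayscale array in [0, 1], diffusing the quantization
            # error to unprocessed neighbors (pulse-density modulation).
            out = img.astype(float).copy()
            h, w = out.shape
            for y in range(h):
                for x in range(w):
                    old = out[y, x]
                    new = 1.0 if old >= 0.5 else 0.0
                    out[y, x] = new
                    err = old - new
                    if x + 1 < w:
                        out[y, x + 1] += err * 7 / 16
                    if y + 1 < h and x > 0:
                        out[y + 1, x - 1] += err * 3 / 16
                    if y + 1 < h:
                        out[y + 1, x] += err * 5 / 16
                    if y + 1 < h and x + 1 < w:
                        out[y + 1, x + 1] += err * 1 / 16
            return out

        ramp = np.tile(np.linspace(0, 1, 64), (16, 1))   # toy grayscale target
        mask = floyd_steinberg(ramp)                     # binary coding pattern
        print(mask.mean(axis=0)[:8])                     # local duty cycle tracks the ramp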

  15. Solution and study of nodal neutron transport equation applying the LTSN-DiagExp method

    International Nuclear Information System (INIS)

    Hauser, Eliete Biasotto; Pazos, Ruben Panta; Vilhena, Marco Tullio de; Barros, Ricardo Carvalho de

    2003-01-01

    In this paper we report advances in the three-dimensional nodal discrete-ordinates approximations of the neutron transport equation for Cartesian geometry. We use the combined collocation method for the angular variables and a nodal approach for the spatial variables. By nodal approach we mean the iterated transverse integration of the SN equations. This procedure leads to a set of one-dimensional averaged angular fluxes in each spatial variable. The resulting system of equations is solved with the LTSN method, first applying the Laplace transform to the set of nodal SN equations and then obtaining the solution by symbolic computation. We include the LTSN method by diagonalization to solve the nodal neutron transport equation and then outline the convergence of these nodal-LTSN approximations with the help of a norm associated with the quadrature formula used to approximate the integral term of the neutron transport equation. (author)

  16. Single trial EEG classification applied to a face recognition experiment using different feature extraction methods.

    Science.gov (United States)

    Li, Yudu; Ma, Sen; Hu, Zhongze; Chen, Jiansheng; Su, Guangda; Dou, Weibei

    2015-01-01

    Research on brain machine interfaces (BMI) has developed very fast in recent years. Numerous feature extraction methods have successfully been applied to electroencephalogram (EEG) classification in various experiments. However, little effort has been spent on EEG-based BMI systems regarding the cognition of familiarity of human faces. In this work, we have implemented and compared the classification performances of four common feature extraction methods, namely, common spatial patterns, principal component analysis, wavelet transform and interval features. High resolution EEG signals were collected from fifteen healthy subjects stimulated by equal numbers of familiar and novel faces. Principal component analysis outperforms the other methods, with average classification accuracy reaching 94.2%, leading to possible real life applications. Our findings thereby may contribute to BMI systems for face recognition.
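
    A minimal sketch of the PCA-based pipeline on synthetic stand-in data (the EEG recordings themselves are not available here): principal components are extracted and fed to a linear classifier, with accuracy estimated by cross-validation. The array shapes and class structure are illustrative assumptions.

        import numpy as np
        from sklearn.pipeline import make_pipeline
        from sklearn.decomposition import PCA
        from sklearn.svm import SVC
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(0)
        # Synthetic stand-in: 200 "trials" x 64 "channels", two classes
        # separated along a few latent directions (familiar vs novel faces).
        X = rng.normal(size=(200, 64))
        y = np.repeat([0, 1], 100)
        X[y == 1, :5] += 0.8   # class-dependent shift in a low-dimensional subspace

        clf = make_pipeline(PCA(n_components=10), SVC(kernel="linear"))
        scores = cross_val_score(clf, X, y, cv=5)
        print(scores.mean())   # cross-validated classification accuracy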

  17. A reflective lens: applying critical systems thinking and visual methods to ecohealth research.

    Science.gov (United States)

    Cleland, Deborah; Wyborn, Carina

    2010-12-01

    Critical systems methodology has been advocated as an effective and ethical way to engage with the uncertainty and conflicting values common to ecohealth problems. We use two contrasting case studies, coral reef management in the Philippines and national park management in Australia, to illustrate the value of critical systems approaches in exploring how people respond to environmental threats to their physical and spiritual well-being. In both cases, we used visual methods--participatory modeling and rich picturing, respectively. The critical systems methodology, with its emphasis on reflection, guided an appraisal of the research process. A discussion of these two case studies suggests that visual methods can be usefully applied within a critical systems framework to offer new insights into ecohealth issues across a diverse range of socio-political contexts. With this article, we hope to open up a conversation with other practitioners to expand the use of visual methods in integrated research.

  18. A note on the accuracy of spectral method applied to nonlinear conservation laws

    Science.gov (United States)

    Shu, Chi-Wang; Wong, Peter S.

    1994-01-01

    Fourier spectral method can achieve exponential accuracy both on the approximation level and for solving partial differential equations if the solutions are analytic. For a linear partial differential equation with a discontinuous solution, Fourier spectral method produces poor point-wise accuracy without post-processing, but still maintains exponential accuracy for all moments against analytic functions. In this note we assess the accuracy of Fourier spectral method applied to nonlinear conservation laws through a numerical case study. We find that the moments with respect to analytic functions are no longer very accurate. However the numerical solution does contain accurate information which can be extracted by a post-processing based on Gegenbauer polynomials.

  19. Artificial intelligence methods applied for quantitative analysis of natural radioactive sources

    International Nuclear Information System (INIS)

    Medhat, M.E.

    2012-01-01

    Highlights: ► Basic description of artificial neural networks. ► Natural gamma ray sources and the problem of their detection. ► Application of a neural network for peak detection and activity determination. - Abstract: The artificial neural network (ANN) represents one of the artificial intelligence methods used for modeling and handling uncertainty in different applications. The objective of the proposed work was to apply ANN to identify isotopes and to predict the uncertainties of their activities for some natural radioactive sources. The method was tested by analyzing gamma-ray spectra emitted from natural radionuclides in soil samples, detected by high-resolution gamma-ray spectrometry based on HPGe (high purity germanium). The principle of the suggested method is described, including the definition of the relevant input parameters, input data scaling and network training. It is clear that there is satisfactory agreement between the obtained and predicted results using the neural network.

  20. Finite volume and finite element methods applied to 3D laminar and turbulent channel flows

    Science.gov (United States)

    Louda, Petr; Sváček, Petr; Kozel, Karel; Příhoda, Jaromír

    2014-12-01

    The work deals with numerical simulations of incompressible flow in channels with rectangular cross section. The rectangular cross section itself leads to the development of various secondary flow patterns, where the accuracy of the simulation is influenced by the numerical viscosity of the scheme and by turbulence modeling. In this work some developments of a stabilized finite element method are presented, and its results are compared with those of an implicit finite volume method, also described, in laminar and turbulent flows. It is shown that numerical viscosity can cause errors of the same magnitude as different turbulence models. The finite volume method is also applied to 3D turbulent flow around a backward facing step, and good agreement with 3D experimental results is obtained.

  1. Finite volume and finite element methods applied to 3D laminar and turbulent channel flows

    Energy Technology Data Exchange (ETDEWEB)

    Louda, Petr; Příhoda, Jaromír [Institute of Thermomechanics, Czech Academy of Sciences, Prague (Czech Republic); Sváček, Petr; Kozel, Karel [Czech Technical University in Prague, Fac. of Mechanical Engineering (Czech Republic)

    2014-12-10

    The work deals with numerical simulations of incompressible flow in channels with rectangular cross section. The rectangular cross section itself leads to the development of various secondary flow patterns, where the accuracy of the simulation is influenced by the numerical viscosity of the scheme and by turbulence modeling. In this work some developments of a stabilized finite element method are presented, and its results are compared with those of an implicit finite volume method, also described, in laminar and turbulent flows. It is shown that numerical viscosity can cause errors of the same magnitude as different turbulence models. The finite volume method is also applied to 3D turbulent flow around a backward facing step, and good agreement with 3D experimental results is obtained.

  2. The reduction method of statistic scale applied to study of climatic change

    International Nuclear Information System (INIS)

    Bernal Suarez, Nestor Ricardo; Molina Lizcano, Alicia; Martinez Collantes, Jorge; Pabon Jose Daniel

    2000-01-01

    In climate change studies the global circulation models of the atmosphere (GCMAs) enable one to simulate the global climate, with the field variables being represented on grid points 300 km apart. Of particular interest is the simulation of possible changes in rainfall and surface air temperature due to an assumed increase of greenhouse gases. However, the models yield the climatic projections on grid points that in most cases do not correspond to the sites of major interest. To achieve local estimates of the climatological variables, methods like the one known as statistical downscaling are applied. In this article we show a case in point by applying canonical correlation analysis (CCA) to the Guajira Region in the northeast of Colombia
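
    A minimal sketch of statistical downscaling with CCA, assuming synthetic stand-ins for the GCM grid-point fields and the station observations: the leading canonical pair captures the shared climate signal, and the fitted model maps grid fields to local estimates.

        import numpy as np
        from sklearn.cross_decomposition import CCA

        rng = np.random.default_rng(0)
        # Synthetic stand-in: 120 months of GCM grid-point fields (X) and
        # station observations (Y) sharing one latent climate signal.
        signal = rng.normal(size=(120, 1))
        X = signal @ rng.normal(size=(1, 30)) + 0.5 * rng.normal(size=(120, 30))
        Y = signal @ rng.normal(size=(1, 4)) + 0.5 * rng.normal(size=(120, 4))

        cca = CCA(n_components=1).fit(X, Y)
        Xc, Yc = cca.transform(X, Y)
        print(np.corrcoef(Xc[:, 0], Yc[:, 0])[0, 1])   # leading canonical correlation
        Y_hat = cca.predict(X)                         # local estimates from grid fields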

  3. An Effective Method on Applying Feedback Error Learning Scheme to Functional Electrical Stimulation Controller

    Science.gov (United States)

    Watanabe, Takashi; Kurosawa, Kenji; Yoshizawa, Makoto

    A Feedback Error Learning (FEL) scheme was found to be applicable to joint angle control by Functional Electrical Stimulation (FES) in our previous study. However, the FEL-FES controller had a problem in learning the inverse dynamics model (IDM) in some cases. In this paper, methods of applying FEL to FES control were examined in controlling 1-DOF movement of the wrist joint by stimulating 2 muscles, through computer simulation under several control conditions with several subject models. The problems in applying FEL to the FES controller were found to lie in restricting stimulation intensity to positive values between the minimum and the maximum intensities, and in cases of very small output values of the IDM. Learning of the IDM was greatly improved by taking the IDM output range into account and setting a minimum ANN output value when calculating the ANN connection weight changes.

  4. Parallel Implicit Runge-Kutta Methods Applied to Coupled Orbit/Attitude Propagation

    Science.gov (United States)

    Hatten, Noble; Russell, Ryan P.

    2017-12-01

    A variable-step Gauss-Legendre implicit Runge-Kutta (GLIRK) propagator is applied to coupled orbit/attitude propagation. Concepts previously shown to improve efficiency in 3DOF propagation are modified and extended to the 6DOF problem, including the use of variable-fidelity dynamics models. The impact of computing the stage dynamics of a single step in parallel is examined using up to 23 threads and 22 associated GLIRK stages; one thread is reserved for an extra dynamics function evaluation used in the estimation of the local truncation error. Efficiency is found to peak for typical examples when using approximately 8 to 12 stages for both serial and parallel implementations. Accuracy and efficiency compare favorably to explicit Runge-Kutta and linear-multistep solvers for representative scenarios. However, linear-multistep methods are found to be more efficient for some applications, particularly in a serial computing environment, or when parallelism can be applied across multiple trajectories.
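
    A minimal serial sketch of a two-stage Gauss-Legendre IRK step on a scalar test problem; the implicit stage equations are solved here by plain fixed-point iteration (a production propagator would use Newton iteration, adaptive step control and, as in the paper, parallel stage evaluations). The coefficients are the standard order-4 Gauss-Legendre tableau; the test problem and step size are illustrative.

        import numpy as np

        # 2-stage Gauss-Legendre coefficients (order 4).
        s3 = np.sqrt(3.0)
        A = np.array([[0.25, 0.25 - s3 / 6], [0.25 + s3 / 6, 0.25]])
        b = np.array([0.5, 0.5])
        c = np.array([0.5 - s3 / 6, 0.5 + s3 / 6])

        def glirk_step(f, t, y, h, iters=10):
            # Solve the stage equations K_i = f(t + c_i h, y + h * sum_j A_ij K_j)
            # by fixed-point iteration; the stage evaluations inside the loop are
            # the part that a parallel implementation distributes across threads.
            K = np.array([f(t, y), f(t, y)])
            for _ in range(iters):
                K = np.array([f(t + c[i] * h, y + h * (A[i] @ K)) for i in range(2)])
            return y + h * (b @ K)

        f = lambda t, y: -y              # test problem y' = -y, y(0) = 1
        t, y, h = 0.0, np.array([1.0]), 0.1
        while t < 1.0 - 1e-12:
            y = glirk_step(f, t, y, h)
            t += h
        print(y, np.exp(-1.0))           # ~0.3679 for both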

  5. ADVANTAGES AND DISADVANTAGES OF APPLYING EVOLVED METHODS IN MANAGEMENT ACCOUNTING PRACTICE

    Directory of Open Access Journals (Sweden)

    SABOU FELICIA

    2014-05-01

    Full Text Available The evolved methods of management accounting have been developed with the purpose of removing the disadvantages of the classical methods; they are methods adapted to the new market conditions, which provide much more useful cost-related information so that the management of the company is able to take certain strategic decisions. Out of the category of evolved methods, the most used is the standard-cost method, due to the advantages it presents; it is widely used for calculating production costs in several developed countries. The main advantages of the standard-cost method are: in-advance knowledge of the production costs and of the measures that ensure compliance with these; systematic control over costs, managed with the help of the deviations calculated from the standard costs, which allows decisions to be made in due time as far as eliminating the deviations and improving the activity are concerned; and its use as a method of analysis, control and cost forecasting. Although the advantages of using standards are significant, there are a few disadvantages to the employment of the standard-cost method: difficulties can sometimes appear in establishing the deviations from the standard costs, and the method does not allow an accurate calculation of the fixed costs. As a result of the study, we can observe that the evolved methods of management accounting, as compared to the classical ones, present a series of advantages linked to better analysis, control, and forecasting of costs, whereas the main disadvantage is related to the large amount of work necessary for these methods to be applied.

  6. ADVANTAGES AND DISADVANTAGES OF APPLYING EVOLVED METHODS IN MANAGEMENT ACCOUNTING PRACTICE

    Directory of Open Access Journals (Sweden)

    SABOU FELICIA

    2014-05-01

    Full Text Available The evolved methods of management accounting have been developed with the purpose of removing the disadvantages of the classical methods; they are methods adapted to the new market conditions, which provide much more useful cost-related information so that the management of the company is able to take certain strategic decisions. Out of the category of evolved methods, the most used is the standard-cost method, due to the advantages it presents; it is widely used for calculating production costs in several developed countries. The main advantages of the standard-cost method are: in-advance knowledge of the production costs and of the measures that ensure compliance with these; systematic control over costs, managed with the help of the deviations calculated from the standard costs, which allows decisions to be made in due time as far as eliminating the deviations and improving the activity are concerned; and its use as a method of analysis, control and cost forecasting. Although the advantages of using standards are significant, there are a few disadvantages to the employment of the standard-cost method: difficulties can sometimes appear in establishing the deviations from the standard costs, and the method does not allow an accurate calculation of the fixed costs. As a result of the study, we can observe that the evolved methods of management accounting, as compared to the classical ones, present a series of advantages linked to better analysis, control, and forecasting of costs, whereas the main disadvantage is related to the large amount of work necessary for these methods to be applied.

  7. PID controller tuning using metaheuristic optimization algorithms for benchmark problems

    Science.gov (United States)

    Gholap, Vishal; Naik Dessai, Chaitali; Bagyaveereswaran, V.

    2017-11-01

    This paper contributes to finding the optimal PID controller parameters using particle swarm optimization (PSO), the Genetic Algorithm (GA) and the Simulated Annealing (SA) algorithm. The algorithms were developed and tested through simulation of a chemical process and an electrical system, for which the PID controller is tuned. Here, two different fitness functions, Integral Time Absolute Error (ITAE) and time domain specifications, were chosen and applied with PSO, GA and SA while tuning the controller. The proposed algorithms are implemented on two benchmark problems: a coupled tank system and a DC motor. Finally, a comparative study has been done of the different algorithms based on best cost, number of iterations and the different objective functions. The closed loop process response for each set of tuned parameters is plotted for each system with each fitness function.
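
    A minimal sketch of the tuning loop for one of the three algorithms named above, simulated annealing, minimizing ITAE for a PID loop around an illustrative second-order plant integrated by forward Euler; the plant, initial gains and annealing schedule are all assumptions, not the paper's benchmark models.

        import numpy as np

        def itae(params, T=10.0, dt=0.01):
            # Closed-loop unit-step response of the plant y'' + 2y' + y = u
            # under PID control, integrated by forward Euler; returns ITAE.
            Kp, Ki, Kd = params
            y = yd = integ = 0.0
            e_prev, cost = 1.0, 0.0
            for k in range(int(T / dt)):
                e = 1.0 - y
                integ += e * dt
                deriv = (e - e_prev) / dt
                u = Kp * e + Ki * integ + Kd * deriv
                ydd = u - 2.0 * yd - y          # plant dynamics
                yd += ydd * dt
                y += yd * dt
                cost += (k * dt) * abs(e) * dt  # integral of t*|e|
                e_prev = e
            return cost

        rng = np.random.default_rng(0)
        x = np.array([1.0, 1.0, 0.1])            # initial (Kp, Ki, Kd)
        fx, T_anneal = itae(x), 1.0
        for _ in range(500):                     # simulated annealing loop
            cand = np.clip(x + rng.normal(scale=0.2, size=3), 0.0, None)
            fc = itae(cand)
            if fc < fx or rng.random() < np.exp((fx - fc) / T_anneal):
                x, fx = cand, fc
            T_anneal *= 0.99                     # geometric cooling
        print(x, fx)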

  8. A Rapid Coordinate Transformation Method Applied in Industrial Robot Calibration Based on Characteristic Line Coincidence.

    Science.gov (United States)

    Liu, Bailing; Zhang, Fumin; Qu, Xinghua; Shi, Xiaojia

    2016-02-18

    Coordinate transformation plays an indispensable role in industrial measurements, including photogrammetry, geodesy, laser 3-D measurement and robotics. The widely applied methods of coordinate transformation are generally based on solving the equations of point clouds. Despite their high accuracy, these might yield no solution due to the use of ill-conditioned matrices. In this paper, a novel coordinate transformation method is proposed, based not on equation solving but on geometric transformation. We construct characteristic lines to represent the coordinate systems. According to the spatial geometric relations, the characteristic lines can be made to coincide through a series of rotations and translations, and the transformation matrix can be obtained using matrix transformation theory. Experiments are designed to compare the proposed method with other methods. The results show that the proposed method has the same high accuracy, but the operation is more convenient and flexible. A multi-sensor combined measurement system is also presented to improve the position accuracy of a robot through calibration of the robot kinematic parameters. Experimental verification shows that the position accuracy of the robot manipulator is improved by 45.8% with the proposed method and robot calibration.

  9. A Rapid Coordinate Transformation Method Applied in Industrial Robot Calibration Based on Characteristic Line Coincidence

    Directory of Open Access Journals (Sweden)

    Bailing Liu

    2016-02-01

    Full Text Available Coordinate transformation plays an indispensable role in industrial measurements, including photogrammetry, geodesy, laser 3-D measurement and robotics. The widely applied methods of coordinate transformation are generally based on solving the equations of point clouds. Despite their high accuracy, these might yield no solution due to the use of ill-conditioned matrices. In this paper, a novel coordinate transformation method is proposed, based not on equation solving but on geometric transformation. We construct characteristic lines to represent the coordinate systems. According to the spatial geometric relations, the characteristic lines can be made to coincide through a series of rotations and translations, and the transformation matrix can be obtained using matrix transformation theory. Experiments are designed to compare the proposed method with other methods. The results show that the proposed method has the same high accuracy, but the operation is more convenient and flexible. A multi-sensor combined measurement system is also presented to improve the position accuracy of a robot through calibration of the robot kinematic parameters. Experimental verification shows that the position accuracy of the robot manipulator is improved by 45.8% with the proposed method and robot calibration.

  10. A Rapid Coordinate Transformation Method Applied in Industrial Robot Calibration Based on Characteristic Line Coincidence

    Science.gov (United States)

    Liu, Bailing; Zhang, Fumin; Qu, Xinghua; Shi, Xiaojia

    2016-01-01

    Coordinate transformation plays an indispensable role in industrial measurements, including photogrammetry, geodesy, laser 3-D measurement and robotics. The widely applied methods of coordinate transformation are generally based on solving the equations of point clouds. Despite their high accuracy, these might yield no solution due to the use of ill-conditioned matrices. In this paper, a novel coordinate transformation method is proposed, based not on equation solving but on geometric transformation. We construct characteristic lines to represent the coordinate systems. According to the spatial geometric relations, the characteristic lines can be made to coincide through a series of rotations and translations, and the transformation matrix can be obtained using matrix transformation theory. Experiments are designed to compare the proposed method with other methods. The results show that the proposed method has the same high accuracy, but the operation is more convenient and flexible. A multi-sensor combined measurement system is also presented to improve the position accuracy of a robot through calibration of the robot kinematic parameters. Experimental verification shows that the position accuracy of the robot manipulator is improved by 45.8% with the proposed method and robot calibration. PMID:26901203

  11. Projector methods applied to numerical integration of the SN transport equation

    International Nuclear Information System (INIS)

    Hristea, V.; Covaci, St.

    2003-01-01

    We are developing two methods of integration for the SN transport equation in x-y geometry, based on the projector technique. By cellularization of the phase space and by choosing a finite basis of orthogonal functions to characterize the angular flux, the non-selfadjoint transport equation is reduced to a cellular automaton. This automaton is completely described by the transition matrix T. Within this paper two distinct methods of projection are described. One of them uses the transversal integration technique; as an alternative, we applied the projector method to the integral SN transport equation. We show that the constant spatial approximation of the integral SN transport equation does not lead to negative fluxes. One of the problems with the projector method, namely the appearance of numerical instability for small intervals, is solved by the Padé representation of the elements of the matrix T. The numerical tests presented here compare the numerical performance of the algorithms obtained by the two projection methods; the Padé representation was taken into account for both algorithm types. (authors)

  12. Efficient combination of acceleration techniques applied to high frequency methods for solving radiation and scattering problems

    Science.gov (United States)

    Lozano, Lorena; Algar, Ma Jesús; García, Eliseo; González, Iván; Cátedra, Felipe

    2017-12-01

    An improved ray-tracing method applied to high-frequency techniques such as the Uniform Theory of Diffraction (UTD) is presented. The main goal is to increase the speed of the analysis of complex structures while considering a vast number of observation directions and taking into account multiple bounces. The method is based on a combination of the Angular Z-Buffer (AZB), the Space Volumetric Partitioning (SVP) algorithm and the A* heuristic search method to treat multiple bounces. In addition, a Master Point strategy was developed to efficiently analyze a large number of near-field points or far-field directions. This technique can be applied to electromagnetic radiation problems, scattering analysis, propagation in urban or indoor environments and the mutual coupling between antennas. Due to its efficiency, it is suitable for studying the radiation patterns of large antennas and even their interactions with complex environments, including satellites, ships, aircraft, cities or other electrically large complex bodies. The new technique proves extremely efficient in these applications even when considering multiple bounces.

  13. Methodical basis of training of cadets for the military applied heptathlon competitions

    Directory of Open Access Journals (Sweden)

    R.V. Anatskyi

    2017-12-01

    Full Text Available The purpose of the research is to develop the methodical basis for training cadets for military applied heptathlon competitions. Material and methods: cadets of the 2nd-3rd year of study, aged 19-20 (n=20), participated in the research. Cadets were selected on the basis of their best results in the exercises included in the program of military applied heptathlon competitions (100 m run, 50 m freestyle swimming, Kalashnikov rifle shooting, pull-ups, obstacle course, grenade throwing, 3000 m run). Preparation took place on the basis of a training center. All training sessions were organized and carried out according to the methodical basis: in a weekly preparation microcycle, on five days cadets had two training sessions a day (one training session on Saturday, rest on Sunday), performing the selected exercises with individual loads. Results: sport scores demonstrated top results in the 100 m run, 3000 m run and pull-ups. The indices for the exercise "obstacle course" were much lower than expected, and rather low results were demonstrated in swimming and shooting. Conclusions: the results of the research indicate the necessity of improving the quality of cadets' weapons proficiency and their physical readiness to perform the exercises requiring complex demonstration of all physical qualities.

  14. Symanzik's method applied to fractional quantum Hall edge states

    Energy Technology Data Exchange (ETDEWEB)

    Blasi, A.; Ferraro, D.; Maggiore, N.; Magnoli, N. [Dipartimento di Fisica, Universita di Genova (Italy); LAMIA-INFM-CNR, Genova (Italy); Sassetti, M.

    2008-11-15

    The method of separability, introduced by Symanzik, is applied in order to describe the effect of a boundary for a fractional quantum Hall liquid in the Laughlin series. An Abelian Chern-Simons theory with plane boundary is considered and the Green functions both in the bulk and on the edge are constructed, following a rigorous, perturbative, quantum field theory treatment. We show that the conserved boundary currents find an explicit interpretation in terms of the continuity equation with the electron density satisfying the Tomonaga-Luttinger commutation relation. (Abstract Copyright [2008], Wiley Periodicals, Inc.)

  15. Method for applying a photoresist layer to a substrate having a preexisting topology

    Science.gov (United States)

    Morales, Alfredo M.; Gonzales, Marcela

    2004-01-20

    The present invention describes a method for preventing a photoresist layer from delaminating, or peeling away, from the surface of a substrate that already contains an etched three-dimensional structure such as a hole or a trench. The process comprises establishing a saturated vapor phase of the solvent medium used to formulate the photoresist layer above the surface of the coated substrate, as the applied photoresist is heated in order to "cure" or drive off the retained solvent constituent within the layer. By controlling the rate and manner in which solvent is removed from the photoresist layer, the layer is stabilized and kept from differentially shrinking and peeling away from the substrate.

  16. The Trojan Horse Method Applied to the Astrophysically Relevant Proton Capture Reactions on Li Isotopes

    Science.gov (United States)

    Tumino, A.; Spitaleri, C.; Musumarra, A.; Pellegriti, M. G.; Pizzone, R. G.; Rinollo, A.; Romano, S.; Pappalardo, L.; Bonomo, C.; Del Zoppo, A.; Di Pietro, A.; Figuera, P.; La Cognata, M.; Lamia, L.; Cherubini, S.; Rolfs, C.; Typel, S.

    2005-12-01

    The 7Li(p,α)4He, 6Li(d,α)4He and 6Li(p,α)3He reactions were studied in the framework of the Trojan Horse Method applied to the d(7Li,αα)n, 6Li(6Li,αα)4He and d(6Li,α3He)n three-body reactions, respectively. Their bare astrophysical S-factors were extracted, and from comparison with the behavior of the screened direct data, an independent estimate of the screening potential was obtained.

  17. Making Design Decisions Visible: Applying the Case-Based Method in Designing Online Instruction

    Directory of Open Access Journals (Sweden)

    Heng Luo

    2011-01-01

    Full Text Available The instructional intervention in this design case is a self-directed online tutorial that applies the case-based method to teach educators how to design and conduct entrepreneurship programs for elementary school students. In this article, the authors describe the major decisions made in each phase of the design and development process, explicate the rationales behind them, and demonstrate their effect on the production of the tutorial. Based on such analysis, the guidelines for designing case-based online instruction are summarized for the design case.

  18. Equation Solution Figures of Merit, Metaheuristic Search, and the Schrodinger Equation

    Science.gov (United States)

    MacNeil, Paul

    2014-03-01

    This presentation deals with: a definition of "equation error"; a consideration of equation solution figures of merit based on equation error and on other measures; and the use of metaheuristic techniques in the search for approximate solutions. These considerations are illustrated by application to the Schrodinger equation for a simple system. Models suitable for computation are produced, and computation results are used to compare the consequences of selecting different figures of merit. "Equation error" is defined to be the quantity by which an approximate solution fails to satisfy an equation. "Equation error variance" is defined to be the squared modulus of the equation error summed/integrated over the domain of interest. (Generalization to sets of equations is straightforward.) In the example, equation error variance is a functional of the Schrodinger wave function. Possible figures of merit include: ground state energy, system geometry, and equation solution variance. The (derivative-free) metaheuristic used to solve the Schrodinger equation has been changed from a genetic algorithm, used in earlier versions of this research, to evolution strategy with covariance matrix adaptation.
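
    A minimal sketch of the equation-error-variance figure of merit for a simple system, assuming a one-parameter Gaussian trial wavefunction for the 1D harmonic oscillator (exact ground state at alpha = 0.5), minimized by a bare-bones (1+1) evolution strategy rather than full CMA-ES; the grid and strategy parameters are illustrative.

        import numpy as np

        x = np.linspace(-8, 8, 801)
        dx = x[1] - x[0]

        def eq_error_variance(alpha):
            # Trial wavefunction psi = exp(-alpha x^2) for H = -1/2 d2/dx2 + 1/2 x^2.
            psi = np.exp(-alpha * x**2)
            d2 = (np.roll(psi, -1) - 2 * psi + np.roll(psi, 1)) / dx**2
            d2[0] = d2[-1] = 0.0                   # psi is ~0 at the box edges
            Hpsi = -0.5 * d2 + 0.5 * x**2 * psi
            E = np.sum(psi * Hpsi) / np.sum(psi**2)  # Rayleigh quotient
            r = Hpsi - E * psi                     # equation error (residual)
            return np.sum(r**2) * dx               # equation error variance

        # (1+1) evolution strategy: a minimal derivative-free metaheuristic.
        rng = np.random.default_rng(0)
        alpha, f, sigma = 0.2, eq_error_variance(0.2), 0.1
        for _ in range(200):
            cand = abs(alpha + sigma * rng.normal())
            fc = eq_error_variance(cand)
            if fc < f:
                alpha, f, sigma = cand, fc, sigma * 1.2
            else:
                sigma *= 0.9
        print(alpha)   # converges near 0.5, the exact ground state exp(-x^2/2)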

  19. Vendor managed inventory control system for deteriorating items using metaheuristic algorithms

    Directory of Open Access Journals (Sweden)

    Masoud Rabbani

    2018-01-01

    Full Text Available Inventory control of deteriorating items constitutes a large part of the world's economy and covers various goods, including any commodity which loses its worth over time because of deterioration and/or obsolescence. Vendor managed inventory (VMI), a win-win strategy for both suppliers and buyers, achieves better results than the traditional supply chain. In this research, we study an economic order quantity (EOQ) model with shortage in the form of partial backorder under a VMI policy. The model is concerned with multiple items subject to multiple constraints, including storage space, time period and budget constraints. Two metaheuristic algorithms, namely Simulated Annealing and Tabu Search, are used to find a near optimal solution for the proposed fuzzy nonlinear integer-programming problem, with the objective of minimizing the total cost of the supply chain. Furthermore, a sensitivity analysis of the metaheuristic parameters is performed, and five numerical examples containing different numbers of items are conducted in order to evaluate the performance of the algorithms.
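
    A minimal sketch of one of the two metaheuristics named above, simulated annealing, applied to a multi-item EOQ cost with a budget constraint handled by a penalty term; the demand, cost and budget figures are illustrative assumptions, and the fuzzy and partial-backorder features of the actual model are omitted.

        import numpy as np

        # Illustrative data: 3 items with demand D, ordering cost K, holding cost h,
        # unit cost c, and a shared budget cap on average inventory investment.
        D = np.array([1200.0, 800.0, 500.0])
        K = np.array([50.0, 80.0, 30.0])
        h = np.array([2.0, 1.5, 3.0])
        c = np.array([10.0, 25.0, 8.0])
        BUDGET = 2000.0

        def total_cost(Q):
            cost = np.sum(K * D / Q + h * Q / 2)        # ordering + holding cost
            violation = max(0.0, np.sum(c * Q / 2) - BUDGET)
            return cost + 1e3 * violation               # penalty for budget breach

        rng = np.random.default_rng(0)
        Q = np.full(3, 100.0)                           # initial order quantities
        f, T = total_cost(Q), 100.0
        for _ in range(5000):                           # simulated annealing loop
            cand = np.clip(Q * np.exp(rng.normal(scale=0.1, size=3)), 1.0, None)
            fc = total_cost(cand)
            if fc < f or rng.random() < np.exp((f - fc) / T):
                Q, f = cand, fc
            T *= 0.999                                  # geometric cooling
        print(Q.round(1), round(f, 1))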

  20. A cellular automata based FPGA realization of a new metaheuristic bat-inspired algorithm

    Science.gov (United States)

    Progias, Pavlos; Amanatiadis, Angelos A.; Spataro, William; Trunfio, Giuseppe A.; Sirakoulis, Georgios Ch.

    2016-10-01

    Optimization algorithms are often inspired by processes occurring in nature, such as animal behavioral patterns. The main concern with implementing such algorithms in software is the large amount of processing power they require. In contrast to software code, which can only perform calculations in a serial manner, an implementation in hardware, exploiting the inherent parallelism of single-purpose processors, can prove to be much more efficient both in speed and in energy consumption. Furthermore, the use of Cellular Automata (CA) in such an implementation is efficient both as a model for natural processes and as a computational paradigm that maps well onto hardware. In this paper, we propose a VHDL implementation of a metaheuristic algorithm inspired by the echolocation behavior of bats. More specifically, the CA model is inspired by the metaheuristic algorithm proposed earlier in the literature, which can be considered at least as efficient as other existing optimization algorithms. The function of the FPGA implementation of our algorithm is explained in full detail, and results of our simulations are also demonstrated.

  1. An investigation of generalized differential evolution metaheuristic for multiobjective optimal crop-mix planning decision.

    Science.gov (United States)

    Adekanmbi, Oluwole; Olugbara, Oludayo; Adeyemo, Josiah

    2014-01-01

    This paper presents annual multiobjective crop-mix planning as a problem of concurrent maximization of net profit and maximization of crop production to determine an optimal cropping pattern. The optimal crop production in a particular planting season is a crucial decision making task from the perspectives of economic management and sustainable agriculture. A multiobjective optimal crop-mix problem is formulated and solved using the generalized differential evolution 3 (GDE3) metaheuristic to generate a globally optimal solution. The performance of the GDE3 metaheuristic is investigated by comparing its results with those obtained using the epsilon-constrained and nondominated sorting genetic algorithms, two representatives of the state of the art in evolutionary optimization. The performance metrics of additive epsilon, generational distance, inverted generational distance, and spacing are considered to establish comparability. In addition, a graphical comparison with respect to the true Pareto front for the multiobjective optimal crop-mix planning problem is presented. Empirical results generally show GDE3 to be a viable alternative tool for solving a multiobjective optimal crop-mix planning problem.

  2. Qualitative classification of milled rice grains using computer vision and metaheuristic techniques.

    Science.gov (United States)

    Zareiforoush, Hemad; Minaei, Saeid; Alizadeh, Mohammad Reza; Banakar, Ahmad

    2016-01-01

    Qualitative grading of milled rice grains was carried out in this study using a machine vision system combined with several metaheuristic classification approaches. Images of four different classes of milled rice, Low-processed sound grains (LPS), Low-processed broken grains (LPB), High-processed sound grains (HPS), and High-processed broken grains (HPB), representing quality grades of the product, were acquired using a computer vision system. Four different metaheuristic classification techniques, namely artificial neural networks, support vector machines, decision trees and Bayesian networks, were utilized to classify the milled rice samples. Results of the validation process indicated that the artificial neural network with 12-5*4 topology had the highest classification accuracy (98.72%). The support vector machine with Universal Pearson VII kernel function (98.48%), the decision tree with the REP algorithm (97.50%), and the Bayesian network with the Hill Climber search algorithm (96.89%) had the next highest accuracies, respectively. The results presented in this paper can be utilized for developing an efficient system for fully automated classification and sorting of milled rice grains.

  3. Generator maintenance scheduling in power systems using metaheuristic-based hybrid approaches

    Energy Technology Data Exchange (ETDEWEB)

    Dahal, Keshav P. [School of Informatics, University of Bradford, Bradford (United Kingdom); Chakpitak, Nopasit [College of Arts, Media and Technology, Chiang Mai University, Chiang Mai (Thailand)

    2007-05-15

    The effective maintenance scheduling of power system generators is very important for the economical and reliable operation of a power system. This represents a tough scheduling problem which continues to present a challenge for efficient optimization solution techniques. This paper presents the application of metaheuristic approaches, such as a genetic algorithm (GA), simulated annealing (SA) and their hybrid for generator maintenance scheduling (GMS) in power systems using an integer representation. This paper mainly focuses on the application of GA/SA and GA/SA/heuristic hybrid approaches. GA/SA hybrid uses the probabilistic acceptance criterion of SA within the GA framework. GA/SA/heuristic hybrid combines heuristic approaches within the GA/SA hybrid to seed the initial population. A case study is formulated in this paper as an integer programming problem using a reliability-based objective function and typical problem constraints. The implementation and performance of the metaheuristic approaches and their hybrid for the test case study are discussed. The results obtained are promising and show that the hybrid approaches are less sensitive to the variations of technique parameters and offer an effective alternative for solving the generator maintenance scheduling problem. (author)

  4. A Review of Auditing Methods Applied to the Content of Controlled Biomedical Terminologies

    Science.gov (United States)

    Zhu, Xinxin; Fan, Jung-Wei; Baorto, David M.; Weng, Chunhua; Cimino, James J.

    2012-01-01

    Although controlled biomedical terminologies have been with us for centuries, it is only in the last couple of decades that close attention has been paid to the quality of these terminologies. The result of this attention has been the development of auditing methods that apply formal methods to assessing whether terminologies are complete and accurate. We have performed an extensive literature review to identify published descriptions of these methods and have created a framework for characterizing them. The framework considers manual, systematic and heuristic methods that use knowledge (within or external to the terminology) to measure quality factors of different aspects of the terminology content (terms, semantic classification, and semantic relationships). The quality factors examined included concept orientation, consistency, non-redundancy, soundness and comprehensive coverage. We reviewed 130 studies that were retrieved based on keyword search on publications in PubMed, and present our assessment of how they fit into our framework. We also identify which terminologies have been audited with the methods and provide examples to illustrate each part of the framework. PMID:19285571

  5. A Comparison of Parametric and Non-Parametric Methods Applied to a Likert Scale.

    Science.gov (United States)

    Mircioiu, Constantin; Atkinson, Jeffrey

    2017-05-10

    A trenchant and passionate dispute over the use of parametric versus non-parametric methods for the analysis of Likert scale ordinal data has raged for the past eight decades. The answer is not a simple "yes" or "no" but is related to hypotheses, objectives, risks, and paradigms. In this paper, we took a pragmatic approach. We applied both types of methods to the analysis of actual Likert data on responses from different professional subgroups of European pharmacists regarding competencies for practice. Results obtained show that with "large" (>15) numbers of responses and similar (but clearly not normal) distributions from different subgroups, parametric and non-parametric analyses give in almost all cases the same significant or non-significant results for inter-subgroup comparisons. Parametric methods were more discriminant in the cases of non-similar conclusions. Considering that the largest differences in opinions occurred in the upper part of the 4-point Likert scale (ranks 3 "very important" and 4 "essential"), a "score analysis" based on this part of the data was undertaken. This transformation of the ordinal Likert data into binary scores produced a graphical representation that was visually easier to understand as differences were accentuated. In conclusion, in this case of Likert ordinal data with high response rates, restraining the analysis to non-parametric methods leads to a loss of information. The addition of parametric methods, graphical analysis, analysis of subsets, and transformation of data leads to more in-depth analyses.
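
    A minimal sketch of the parametric/non-parametric contrast on synthetic Likert responses, together with the "score analysis" collapse to the top two ranks; the response distributions are assumptions, not the survey data.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(0)
        # Two professional subgroups rating a competency on a 1-4 Likert scale
        # (synthetic responses; the real survey data are not reproduced here).
        g1 = rng.choice([1, 2, 3, 4], size=60, p=[0.05, 0.15, 0.40, 0.40])
        g2 = rng.choice([1, 2, 3, 4], size=60, p=[0.10, 0.30, 0.40, 0.20])

        t, p_t = stats.ttest_ind(g1, g2, equal_var=False)   # parametric
        u, p_u = stats.mannwhitneyu(g1, g2)                 # non-parametric
        # "Score analysis": collapse to the top of the scale (ranks 3 and 4).
        top1, top2 = np.mean(g1 >= 3), np.mean(g2 >= 3)
        print(f"t-test p={p_t:.4f}, Mann-Whitney p={p_u:.4f}, "
              f"top-2 share: {top1:.2f} vs {top2:.2f}")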

  6. Geometric methods for estimating representative sidewalk widths applied to Vienna's streetscape surfaces database

    Science.gov (United States)

    Brezina, Tadej; Graser, Anita; Leth, Ulrich

    2017-04-01

    Space, and in particular public space for movement and leisure, is a valuable and scarce resource, especially in today's growing urban centres. The distribution and absolute amount of urban space—especially the provision of sufficient pedestrian areas, such as sidewalks—is considered crucial for shaping living and mobility options as well as transport choices. Ubiquitous urban data collection and today's IT capabilities offer new possibilities for providing a relation-preserving overview and for keeping track of infrastructure changes. This paper presents three novel methods for estimating representative sidewalk widths and applies them to the official Viennese streetscape surface database. The first two methods use individual pedestrian area polygons and their geometrical representations of minimum circumscribing and maximum inscribing circles to derive a representative width of these individual surfaces. The third method utilizes aggregated pedestrian areas within the buffered street axis and results in a representative width for the corresponding road axis segment. Results are displayed as city-wide means in a 500 by 500 m grid and spatial autocorrelation based on Moran's I is studied. We also compare the results between methods as well as to previous research, existing databases and guideline requirements on sidewalk widths. Finally, we discuss possible applications of these methods for monitoring and regression analysis and suggest future methodological improvements for increased accuracy.

  7. Complex Method Mixed with PSO Applying to Optimization Design of Bridge Crane Girder

    Directory of Open Access Journals (Sweden)

    He Yan

    2017-01-01

    Full Text Available In engineering design, the basic complex method does not have enough global search ability for nonlinear optimization problems, so a complex method mixed with particle swarm optimization (PSO) is presented in this paper: the optimal particle, evaluated from the fitness function of the particle swarm, displaces the worst complex vertex in order to realize the optimality principle of the largest distance from the complex centroid. This method is applied to the constrained optimization design of the box girder of a bridge crane. First, a mathematical model of the girder optimization is set up, in which the cross-section area of the bridge crane box girder is taken as the objective function, its four size parameters as design variables, and girder mechanical performance, manufacturing process, boundary sizes and other requirements as constraint conditions. Then the complex method mixed with PSO is used to solve the optimization design problem of the crane box girder as a constrained optimization problem, and the optimal results achieve the goal of lightweight design and reduced crane manufacturing cost. Practical engineering calculation and comparative analysis with the basic complex method show the method to be reliable, practical and efficient.

  8. Knowledge-Based Trajectory Error Pattern Method Applied to an Active Force Control Scheme

    Directory of Open Access Journals (Sweden)

    Endra Pitowarno, Musa Mailah, Hishamuddin Jamaluddin

    2012-08-01

    Full Text Available The active force control (AFC) method is known as a robust control scheme that dramatically enhances the performance of a robot arm, particularly in compensating for disturbance effects. The main task of the AFC method is to estimate the inertia matrix in the feedback loop to provide the correct (motor) torque required to cancel out these disturbances. Several intelligent control schemes have already been introduced to enhance the estimation of the inertia matrix, such as those using neural networks, iterative learning and fuzzy logic. In this paper, we propose an alternative scheme called the Knowledge-Based Trajectory Error Pattern Method (KBTEPM) to suppress the trajectory tracking error of the AFC scheme. The knowledge is developed from the trajectory tracking error characteristic based on previous experimental results of the crude approximation method. It produces a unique, new and desirable error pattern when a trajectory command is applied. A simulation study was performed on the AFC scheme with KBTEPM applied to a two-link planar manipulator, for which a set of rule-based algorithms is derived. A number of previous AFC schemes are also reviewed as benchmarks. The simulation results show that the AFC-KBTEPM scheme successfully reduces the trajectory tracking error significantly, even in the presence of the introduced disturbances. Key Words: active force control, estimated inertia matrix, robot arm, trajectory error pattern, knowledge-based.
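
    To make the AFC principle concrete, here is a one-joint toy with assumed numbers throughout (the paper's scheme, with its knowledge-based error patterns, is far richer): the disturbance torque is estimated as estimated inertia times measured acceleration minus the applied torque, and is subtracted from the command:

      import numpy as np

      I_true, I_est = 0.8, 0.75      # true vs estimated joint inertia (kg m^2)
      kp, kd, dt = 40.0, 8.0, 0.001
      theta, omega, target = 0.0, 0.0, 1.0
      tau_dist_est = 0.0             # AFC estimate from the previous step

      for k in range(5000):
          disturbance = 0.5 * np.sin(0.01 * k)                     # unknown torque
          u = kp * (target - theta) - kd * omega - tau_dist_est    # PD + AFC
          alpha = (u + disturbance) / I_true                       # plant response
          # AFC step: disturbance estimate = I_est * measured accel - applied torque
          tau_dist_est = I_est * alpha - u
          omega += alpha * dt
          theta += omega * dt

      print(f"final angle: {theta:.3f} rad (target {target} rad)")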

  9. Efficient alpha particle detection by CR-39 applying 50 Hz-HV electrochemical etching method

    International Nuclear Information System (INIS)

    Sohrabi, M.; Soltani, Z.

    2016-01-01

    Alpha particles can be detected in CR-39 by applying either chemical etching (CE), electrochemical etching (ECE), or combined pre-etching and ECE, usually through a multi-step HF-HV ECE process at temperatures much higher than room temperature. By applying pre-etching, characteristic responses of fast-neutron-induced recoil tracks in CR-39 by HF-HV ECE versus KOH normality (N) have shown two high-sensitivity peaks around 5–6 and 15–16 N and a large-diameter peak with a minimum sensitivity around 10–11 N at 25°C. On the other hand, the 50 Hz-HV ECE method recently advanced in our laboratory detects alpha particles with high efficiency and a broad registration energy range with small ECE tracks in polycarbonate (PC) detectors. By taking advantage of the CR-39 sensitivity to alpha particles, the efficacy of the 50 Hz-HV ECE method and the exotic responses of CR-39 under different KOH normalities, the detection characteristics of 0.8 MeV alpha particle tracks were studied in 500 μm CR-39 for different fluences, ECE durations and KOH normalities. Alpha registration efficiency increased with ECE duration, reaching 90 ± 2% after 6–8 h, beyond which a plateau is reached. Alpha track density versus fluence is linear up to 10⁶ tracks cm⁻². The efficiency and mean track diameter versus alpha fluence up to 10⁶ alphas cm⁻² decrease as the fluence increases. Background track density and minimum detection limit are linear functions of ECE duration and increase as normality increases. The CR-39 processed for the first time in this study by the 50 Hz-HV ECE method proved to provide a simple, efficient and practical alpha detection method at room temperature. - Highlights: • Alpha particles of 0.8 MeV were detected in CR-39 by the 50 Hz-HV ECE method. • Efficiency/track diameter was studied vs fluence and time for 3 KOH normalities. • Background track density and minimum detection limit vs duration were studied. • A new simple, efficient and low-cost alpha detection method

  10. A METHOD FOR PREPARING A SUBSTRATE BY APPLYING A SAMPLE TO BE ANALYSED

    DEFF Research Database (Denmark)

    2017-01-01

    The invention relates to a method for preparing a substrate (105a) comprising a sample reception area (110) and a sensing area (111). The method comprises the steps of: 1) applying a sample on the sample reception area; 2) rotating the substrate around a predetermined axis; 3) during rotation, at least part of the liquid travels from the sample reception area to the sensing area due to capillary forces acting between the liquid and the substrate; and 4) removing the wave of particles and liquid formed at one end of the substrate. The sensing area is closer to the predetermined axis than the sample reception area. The sample comprises a liquid part and particles suspended therein.

  11. Super-convergence of Discontinuous Galerkin Method Applied to the Navier-Stokes Equations

    Science.gov (United States)

    Atkins, Harold L.

    2009-01-01

    The practical benefits of the hyper-accuracy properties of the discontinuous Galerkin method are examined. In particular, we demonstrate that some flow attributes exhibit super-convergence even in the absence of any post-processing technique. Theoretical analysis suggests that flow features dominated by global propagation speeds and decay or growth rates should be super-convergent. Several discrete forms of the discontinuous Galerkin method are applied to the simulation of unsteady viscous flow over a two-dimensional cylinder. The period of the naturally occurring oscillation is examined and shown to converge at a rate of 2p+1, where p is the polynomial degree of the discontinuous Galerkin basis. Comparisons are made between the different discretizations and with theoretical analysis.
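
    The reported rate can be checked with a standard two-grid estimate; the sketch below (hypothetical error values, Python assumed) shows how an observed order such as 2p+1 is extracted from errors on two successively refined meshes:

      import math

      def observed_order(err_coarse, err_fine, refinement=2.0):
          """Richardson-style estimate: err ~ C * h^q  =>  q = log(e1/e2)/log(r)."""
          return math.log(err_coarse / err_fine) / math.log(refinement)

      # Hypothetical period errors for a p = 2 discontinuous Galerkin run; with
      # super-convergence at 2p+1 = 5, halving h should cut the error by ~2^5 = 32
      e_h, e_h2 = 3.2e-4, 1.0e-5
      print(f"observed order = {observed_order(e_h, e_h2):.2f}")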

  12. SANS contrast variation method applied in experiments on ferrofluids at MURN instrument of IBR-2 reactor

    Science.gov (United States)

    Balasoiu, Maria; Kuklin, Alexander

    2012-03-01

    The separate determination of the nuclear and magnetic contributions to the scattering intensity by means of a contrast variation method, applied in small angle neutron scattering experiments with nonpolarized neutrons on ferrofluids in the early 1990s at the MURN instrument, is reviewed. The nuclear scattering contribution gives the features of the colloidal particle dimensions, the surfactant shell structure and the degree of solvent penetration into the macromolecular layer. The magnetic scattering part is compatible with models in which the particle surface is assumed to have a nonmagnetic layer. Details of the experimental "Grabcev method" for obtaining separate nuclear and magnetic contributions to the small angle neutron scattering intensity of unpolarized neutrons are emphasized for the case of a high-quality, ultrastable benzene-based ferrofluid with magnetite nanoparticles.

  13. SANS contrast variation method applied in experiments on ferrofluids at MURN instrument of IBR-2 reactor

    International Nuclear Information System (INIS)

    Balasoiu, Maria; Kuklin, Alexander

    2012-01-01

    The separate determination of the nuclear and magnetic contributions to the scattering intensity by means of a contrast variation method, applied in small angle neutron scattering experiments with nonpolarized neutrons on ferrofluids in the early 1990s at the MURN instrument, is reviewed. The nuclear scattering contribution gives the features of the colloidal particle dimensions, the surfactant shell structure and the degree of solvent penetration into the macromolecular layer. The magnetic scattering part is compatible with models in which the particle surface is assumed to have a nonmagnetic layer. Details of the experimental 'Grabcev method' for obtaining separate nuclear and magnetic contributions to the small angle neutron scattering intensity of unpolarized neutrons are emphasized for the case of a high-quality, ultrastable benzene-based ferrofluid with magnetite nanoparticles.

  14. Infrared thermography inspection methods applied to the target elements of W7-X divertor

    Energy Technology Data Exchange (ETDEWEB)

    Missirlian, M. [Association Euratom-CEA, CEA/DSM/DRFC, CEA/Cadarache, F-13108 Saint Paul Lez Durance (France)], E-mail: marc.missirlian@cea.fr; Traxler, H. [PLANSEE SE, Technology Center, A-6600 Reutte (Austria); Boscary, J. [Max-Planck-Institut fuer Plasmaphysik, Euratom Association, Boltzmannstr. 2, D-85748 Garching (Germany); Durocher, A.; Escourbiac, F.; Schlosser, J. [Association Euratom-CEA, CEA/DSM/DRFC, CEA/Cadarache, F-13108 Saint Paul Lez Durance (France); Schedler, B.; Schuler, P. [PLANSEE SE, Technology Center, A-6600 Reutte (Austria)

    2007-10-15

    Non-destructive examination (NDE) methods are one of the key issues in developing highly loaded plasma-facing components (PFCs) for next-generation fusion devices such as W7-X and ITER. The most critical step is certainly the fabrication and the examination of the bond between the armour and the heat sink. Two inspection systems based on infrared thermography methods, namely transient thermography (SATIR-CEA) and pulsed thermography (ARGUS-PLANSEE), are being developed and have been applied to the pre-series of target elements of the W7-X divertor. Results obtained from qualification experiments performed on target elements with artificial calibrated defects demonstrated the capability of the two techniques and raised the efficiency of inspection to a level appropriate for industrial application.

  15. Infrared thermography inspection methods applied to the target elements of W7-X divertor

    International Nuclear Information System (INIS)

    Missirlian, M.; Traxler, H.; Boscary, J.; Durocher, A.; Escourbiac, F.; Schlosser, J.; Schedler, B.; Schuler, P.

    2007-01-01

    Non-destructive examination (NDE) methods are one of the key issues in developing highly loaded plasma-facing components (PFCs) for next-generation fusion devices such as W7-X and ITER. The most critical step is certainly the fabrication and the examination of the bond between the armour and the heat sink. Two inspection systems based on infrared thermography methods, namely transient thermography (SATIR-CEA) and pulsed thermography (ARGUS-PLANSEE), are being developed and have been applied to the pre-series of target elements of the W7-X divertor. Results obtained from qualification experiments performed on target elements with artificial calibrated defects demonstrated the capability of the two techniques and raised the efficiency of inspection to a level appropriate for industrial application.

  16. Data Analytics of Mobile Serious Games: Applying Bayesian Data Analysis Methods

    Directory of Open Access Journals (Sweden)

    Heide Lukosch

    2018-03-01

    Full Text Available Traditional teaching methods in the field of resuscitation training show some limitations, while teaching the right actions in critical situations could increase the number of people saved after a cardiac arrest. For our study, we developed a mobile game to support the transfer of theoretical knowledge on resuscitation. The game was tested at three schools of further education. Data were collected from 171 players. To analyze this large data set, which varied in source and quality, different types of data modeling and analysis had to be applied. This approach showed its usefulness in analyzing the large set of data from different sources. It revealed some interesting findings, such as that female players outperformed male ones, and that the game, fostering informal, self-directed learning, is as effective as the traditional formal learning method.
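
    As a flavour of Bayesian analysis on such game data, the sketch below compares two groups with Beta-Binomial posteriors (Python with NumPy and SciPy assumed; the counts are invented, and the study's actual models are not reproduced):

      import numpy as np
      from scipy import stats

      # Hypothetical counts: correct resuscitation actions out of attempts per group
      success_f, n_f = 62, 90   # female players
      success_m, n_m = 48, 81   # male players

      # Beta(1, 1) prior; the posterior is Beta(1 + successes, 1 + failures)
      post_f = stats.beta(1 + success_f, 1 + n_f - success_f)
      post_m = stats.beta(1 + success_m, 1 + n_m - success_m)

      # Monte Carlo estimate of P(female rate > male rate)
      rng = np.random.default_rng(2)
      samples_f = post_f.rvs(100_000, random_state=rng)
      samples_m = post_m.rvs(100_000, random_state=rng)
      print(f"P(female > male) = {(samples_f > samples_m).mean():.3f}")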

  17. Performance comparison of two efficient genomic selection methods (gsbay & MixP) applied in aquacultural organisms

    Science.gov (United States)

    Su, Hailin; Li, Hengde; Wang, Shi; Wang, Yangfan; Bao, Zhenmin

    2017-02-01

    Genomic selection is more and more popular in animal and plant breeding industries all around the world, as it can be applied early in life without impacting selection candidates. The objective of this study was to bring the advantages of genomic selection to scallop breeding. Two different genomic selection tools, MixP and gsbay, were applied to the genomic evaluation of simulated data and Zhikong scallop (Chlamys farreri) field data. The results were compared with the genomic best linear unbiased prediction (GBLUP) method, which has been applied widely. Our results showed that both MixP and gsbay could accurately estimate single-nucleotide polymorphism (SNP) marker effects, and thereby could be applied for the analysis of genomic estimated breeding values (GEBV). In simulated data from different scenarios, the accuracy of the GEBV obtained ranged from 0.20 to 0.78 with MixP, from 0.21 to 0.67 with gsbay, and from 0.21 to 0.61 with GBLUP. Estimations made by MixP and gsbay are expected to be more reliable than those made by GBLUP. Predictions made by gsbay were more robust, while with MixP the computation is much faster, especially when dealing with large-scale data. These results suggest that both algorithms, as implemented in MixP and gsbay, are feasible for carrying out genomic selection in scallop breeding, and more genotype data will be necessary to produce genomic estimated breeding values with higher accuracy for the industry.
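
    The GBLUP baseline can be imitated in miniature with ridge regression on a simulated genotype matrix, as sketched below (all sizes and variances are assumed; this stands in for the role GBLUP plays in the comparison, not for gsbay or MixP themselves):

      import numpy as np

      rng = np.random.default_rng(3)
      n_animals, n_snps = 200, 1000

      Z = rng.integers(0, 3, size=(n_animals, n_snps)).astype(float)  # 0/1/2 genotypes
      true_effects = rng.normal(0, 0.05, n_snps)
      phenotype = Z @ true_effects + rng.normal(0, 1.0, n_animals)

      lam = 100.0  # shrinkage parameter (ratio of residual to marker variance)
      # Ridge solution: effects = (Z'Z + lam*I)^-1 Z'y, all markers shrunk equally
      effects = np.linalg.solve(Z.T @ Z + lam * np.eye(n_snps), Z.T @ phenotype)
      gebv = Z @ effects

      accuracy = np.corrcoef(gebv, Z @ true_effects)[0, 1]
      print(f"accuracy of GEBV on training animals: {accuracy:.2f}")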

  18. Estimating the Impacts of Local Policy Innovation: The Synthetic Control Method Applied to Tropical Deforestation

    Science.gov (United States)

    Sills, Erin O.; Herrera, Diego; Kirkpatrick, A. Justin; Brandão, Amintas; Dickson, Rebecca; Hall, Simon; Pattanayak, Subhrendu; Shoch, David; Vedoveto, Mariana; Young, Luisa; Pfaff, Alexander

    2015-01-01

    Quasi-experimental methods increasingly are used to evaluate the impacts of conservation interventions by generating credible estimates of counterfactual baselines. These methods generally require large samples for statistical comparisons, presenting a challenge for evaluating innovative policies implemented within a few pioneering jurisdictions. Single jurisdictions often are studied using comparative methods, which rely on analysts’ selection of best case comparisons. The synthetic control method (SCM) offers one systematic and transparent way to select cases for comparison, from a sizeable pool, by focusing upon similarity in outcomes before the intervention. We explain SCM, then apply it to one local initiative to limit deforestation in the Brazilian Amazon. The municipality of Paragominas launched a multi-pronged local initiative in 2008 to maintain low deforestation while restoring economic production. This was a response to having been placed, due to high deforestation, on a federal “blacklist” that increased enforcement of forest regulations and restricted access to credit and output markets. The local initiative included mapping and monitoring of rural land plus promotion of economic alternatives compatible with low deforestation. The key motivation for the program may have been to reduce the costs of blacklisting. However its stated purpose was to limit deforestation, and thus we apply SCM to estimate what deforestation would have been in a (counterfactual) scenario of no local initiative. We obtain a plausible estimate, in that deforestation patterns before the intervention were similar in Paragominas and the synthetic control, which suggests that after several years, the initiative did lower deforestation (significantly below the synthetic control in 2012). This demonstrates that SCM can yield helpful land-use counterfactuals for single units, with opportunities to integrate local and expert knowledge and to test innovations and permutations on

  19. Estimating the Impacts of Local Policy Innovation: The Synthetic Control Method Applied to Tropical Deforestation.

    Science.gov (United States)

    Sills, Erin O; Herrera, Diego; Kirkpatrick, A Justin; Brandão, Amintas; Dickson, Rebecca; Hall, Simon; Pattanayak, Subhrendu; Shoch, David; Vedoveto, Mariana; Young, Luisa; Pfaff, Alexander

    2015-01-01

    Quasi-experimental methods increasingly are used to evaluate the impacts of conservation interventions by generating credible estimates of counterfactual baselines. These methods generally require large samples for statistical comparisons, presenting a challenge for evaluating innovative policies implemented within a few pioneering jurisdictions. Single jurisdictions often are studied using comparative methods, which rely on analysts' selection of best case comparisons. The synthetic control method (SCM) offers one systematic and transparent way to select cases for comparison, from a sizeable pool, by focusing upon similarity in outcomes before the intervention. We explain SCM, then apply it to one local initiative to limit deforestation in the Brazilian Amazon. The municipality of Paragominas launched a multi-pronged local initiative in 2008 to maintain low deforestation while restoring economic production. This was a response to having been placed, due to high deforestation, on a federal "blacklist" that increased enforcement of forest regulations and restricted access to credit and output markets. The local initiative included mapping and monitoring of rural land plus promotion of economic alternatives compatible with low deforestation. The key motivation for the program may have been to reduce the costs of blacklisting. However its stated purpose was to limit deforestation, and thus we apply SCM to estimate what deforestation would have been in a (counterfactual) scenario of no local initiative. We obtain a plausible estimate, in that deforestation patterns before the intervention were similar in Paragominas and the synthetic control, which suggests that after several years, the initiative did lower deforestation (significantly below the synthetic control in 2012). This demonstrates that SCM can yield helpful land-use counterfactuals for single units, with opportunities to integrate local and expert knowledge and to test innovations and permutations on policies
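
    The core SCM computation, choosing nonnegative donor weights that sum to one and best reproduce the treated unit's pre-intervention series, can be sketched as a small constrained least-squares problem (Python with NumPy and SciPy assumed; the data below are invented, not the Paragominas series):

      import numpy as np
      from scipy.optimize import minimize

      rng = np.random.default_rng(4)
      T_pre, n_donors = 8, 20
      X0 = rng.uniform(0.5, 3.0, size=(T_pre, n_donors))   # donor municipalities
      x1 = X0[:, :3] @ np.array([0.5, 0.3, 0.2]) + rng.normal(0, 0.02, T_pre)

      def loss(w):
          # Squared mismatch of pre-intervention deforestation trajectories
          return np.sum((x1 - X0 @ w) ** 2)

      w0 = np.full(n_donors, 1.0 / n_donors)
      res = minimize(
          loss, w0, method="SLSQP",
          bounds=[(0.0, 1.0)] * n_donors,
          constraints=[{"type": "eq", "fun": lambda w: w.sum() - 1.0}],
      )
      weights = res.x
      print("largest donor weights:", np.round(np.sort(weights)[-3:][::-1], 2))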

  20. Applying Mathematical Optimization Methods to an ACT-R Instance-Based Learning Model.

    Directory of Open Access Journals (Sweden)

    Nadia Said

    Full Text Available Computational models of cognition provide an interface to connect advanced mathematical tools and methods to empirically supported theories of behavior in psychology, cognitive science, and neuroscience. In this article, we consider a computational model of instance-based learning, implemented in the ACT-R cognitive architecture. We propose an approach for obtaining mathematical reformulations of such cognitive models that improve their computational tractability. For the well-established Sugar Factory dynamic decision making task, we conduct a simulation study to analyze central model parameters. We show how mathematical optimization techniques can be applied to efficiently identify optimal parameter values with respect to different optimization goals. Beyond these methodological contributions, our analysis reveals the sensitivity of this particular task with respect to initial settings and yields new insights into how average human performance deviates from potential optimal performance. We conclude by discussing possible extensions of our approach as well as future steps towards applying more powerful derivative-based optimization methods.
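
    In the same spirit, a derivative-free optimizer can search a model's parameter space; the sketch below uses Nelder-Mead on a fabricated stand-in objective (the actual ACT-R instance-based learning model and the Sugar Factory task are far richer than this):

      import numpy as np
      from scipy.optimize import minimize

      def model_performance(params):
          """Stand-in objective: mean error of a hypothetical simulated model run."""
          noise, decay = params
          # Fabricated smooth landscape with an optimum near (0.25, 0.5)
          return (noise - 0.25) ** 2 + (decay - 0.5) ** 2 + 0.1 * np.sin(8 * noise) ** 2

      result = minimize(model_performance, x0=[1.0, 1.0], method="Nelder-Mead")
      print("optimal parameters:", np.round(result.x, 3), "objective:", round(result.fun, 4))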

  1. Labile soil phosphorus as influenced by methods of applying radioactive phosphorus

    International Nuclear Information System (INIS)

    Selvaratnam, V.V.; Andersen, A.J.; Thomsen, J.D.; Gissel-Nielsen, G.

    1980-03-01

    The influence of different methods of applying radioactive phosphorus on the E- and L-values was studied in four soil types using barley, buckwheat, and rye grass for the L-value determination. The four soils differed greatly in their E- and L-values. The experiment was carried out both with and without carrier-P. The presence of carrier-P had no influence on the E-values, while carrier-P in some cases gave a lower L-value. Both E- and L-values depended on the method of application. When the ³²P was applied to a small soil or sand sample and dried before mixing with the total amount of soil, the E-values were higher than with direct application, most likely because of a stronger fixation to the soil/sand particles. This was not the case for the L-values, which are based on a much longer equilibration time. On the contrary, direct application of the ³²P solution to the whole amount of soil gave higher L-values because of a non-homogeneous distribution of the ³²P in the soil. (author)

  2. Analysis of coupled neutron-gamma radiations, applied to shieldings in multigroup albedo method

    International Nuclear Information System (INIS)

    Dunley, Leonardo Souza

    2002-01-01

    The principal mathematical tools usually available for calculations in Nuclear Engineering, including coupled neutron-gamma radiation shielding problems, involve the full Transport Theory or Monte Carlo techniques. The Multigroup Albedo Method applied to shieldings is characterized by following the radiations through distinct layers of materials, allowing the determination of the neutron and gamma fractions reflected from, transmitted through and absorbed in the irradiated media when a neutron stream hits the first layer of material, independently of flux calculations. The method is thus a complementary tool of great didactic value, owing to its clarity and simplicity in solving neutron and/or gamma shielding problems. The outstanding results achieved in previous works motivated the elaboration and development of the study presented in this dissertation. The radiation balance resulting from the incidence of a neutron stream on a shielding composed of 'm' slab layers, non-multiplying for neutrons, was determined by the Albedo method, considering 'n' energy groups for neutrons and 'g' energy groups for gammas. It was assumed that there is no upscattering of neutrons or gammas; however, neutrons from any energy group are able to produce gammas of all energy groups. The ANISN code, for an angular quadrature order S₂, was used as a standard for comparison of the results obtained by the Albedo method, so it was necessary to choose an identical system configuration for both the ANISN and Albedo methods: six neutron energy groups, eight gamma energy groups, and three slab layers (iron - aluminum - manganese). The excellent results expressed in comparative tables show great agreement between the values determined by the deterministic code adopted as standard and the values determined by the computational program created using the Albedo method and the algorithm developed for coupled neutron
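
    A one-group toy version of the layer-composition idea is sketched below (plain Python; the reflected/transmitted fractions are invented, and the dissertation's treatment is multigroup and coupled neutron-gamma rather than this scalar simplification):

      def combine(layer_a, layer_b):
          """Compose two layers, each given as (reflected, transmitted) fractions.

          Multiple reflections between the layers form a geometric series:
          R = Ra + Ta*Rb*Ta / (1 - Ra*Rb),  T = Ta*Tb / (1 - Ra*Rb).
          """
          Ra, Ta = layer_a
          Rb, Tb = layer_b
          denom = 1.0 - Ra * Rb
          R = Ra + Ta * Rb * Ta / denom
          T = Ta * Tb / denom
          return (R, T)

      # Hypothetical (reflected, transmitted) fractions for three slab layers;
      # whatever is neither reflected nor transmitted is absorbed
      layers = [(0.30, 0.50), (0.20, 0.60), (0.25, 0.55)]
      R, T = layers[0]
      for layer in layers[1:]:
          R, T = combine((R, T), layer)
      print(f"reflected: {R:.3f}, transmitted: {T:.3f}, absorbed: {1 - R - T:.3f}")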

  3. In silico toxicology: comprehensive benchmarking of multi-label classification methods applied to chemical toxicity data

    KAUST Repository

    Raies, Arwa B.

    2017-12-05

    One goal of toxicity testing, among others, is identifying harmful effects of chemicals. Given the high demand for toxicity tests, it is necessary to conduct these tests for multiple toxicity endpoints for the same compound. Current computational toxicology methods aim at developing models mainly to predict a single toxicity endpoint. When chemicals cause several toxicity effects, one model is generated to predict toxicity for each endpoint, which can be labor- and computation-intensive when the number of toxicity endpoints is large. Additionally, this approach does not take into consideration possible correlations between the endpoints. Therefore, there has been a recent shift in computational toxicity studies toward generating predictive models able to predict several toxicity endpoints by utilizing correlations between these endpoints. Applying such correlations jointly with compounds' features may improve models' performance and reduce the number of required models. This can be achieved through multi-label classification methods. These methods have not undergone comprehensive benchmarking in the domain of predictive toxicology. Therefore, we performed extensive benchmarking and analysis of over 19,000 multi-label classification models generated using combinations of the state-of-the-art methods. The methods have been evaluated from different perspectives using various metrics to assess their effectiveness. We were able to illustrate variability in the performance of the methods under several conditions. This review will help researchers to select the most suitable method for the problem at hand and provide a baseline for evaluating new approaches. Based on this analysis, we provide recommendations for potential future directions in this area.
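
    A single point in such a benchmark, a binary-relevance baseline on synthetic data, can be set up as below (Python with scikit-learn assumed; the features and endpoint labels are synthetic stand-ins for chemical descriptors and toxicity endpoints):

      import numpy as np
      from sklearn.datasets import make_multilabel_classification
      from sklearn.ensemble import RandomForestClassifier
      from sklearn.metrics import f1_score
      from sklearn.model_selection import train_test_split
      from sklearn.multioutput import MultiOutputClassifier

      # Synthetic stand-in for compound features and correlated toxicity endpoints
      X, Y = make_multilabel_classification(
          n_samples=500, n_features=50, n_classes=5, n_labels=2, random_state=0
      )
      X_tr, X_te, Y_tr, Y_te = train_test_split(X, Y, test_size=0.25, random_state=0)

      # Binary relevance: one random forest per toxicity endpoint
      model = MultiOutputClassifier(RandomForestClassifier(n_estimators=200, random_state=0))
      model.fit(X_tr, Y_tr)
      Y_hat = model.predict(X_te)

      # Micro-averaged F1 aggregates performance across all endpoint labels
      print(f"micro-F1: {f1_score(Y_te, Y_hat, average='micro'):.3f}")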

  4. An effective multirestart deterministic annealing metaheuristic for the fleet size and mix vehicle-routing problem with time windows

    NARCIS (Netherlands)

    Bräysy, Olli; Dullaert, Wout; Hasle, Geir; Mester, David; Gendreau, Michel

    This paper presents a new deterministic annealing metaheuristic for the fleet size and mix vehicle-routing problem with time windows. The objective is to service, at minimal total cost, a set of customers within their time windows by a heterogeneous capacitated vehicle fleet. First, we motivate and

  5. The Cn method applied to problems with an anisotropic diffusion law

    International Nuclear Information System (INIS)

    Grandjean, P.M.

    A two-dimensional Cn calculation has been applied to homogeneous media subjected to the Rayleigh impact law. Results obtained with collision probability and Chandrasekhar calculations are compared to those from the Cn method. Introducing into the expression of the transport equation a truncated expansion on a polynomial basis for the outgoing angular flux (or possibly the entrance flux) gives two Cn systems of algebraic linear equations for the expansion coefficients. The matrix elements of these equations are the moments of the Green function in an infinite medium. The Green function is found through the Fourier transformation of the integrodifferential equation, and its moments are derived from their Fourier transforms through a numerical integration in the complex plane. The method has been used for calculating the albedo of semi-infinite media, the extrapolation length of the Milne problem, and the albedo and transmission factor of a slab (a concise study of convergence is presented). For the collision probability method, a system of integro-differential equations bearing on the moments of the angular flux inside the medium has been derived; it is solved numerically by approximating the bulk flux by step functions. The albedo of a semi-infinite medium has also been computed through the semi-analytical Chandrasekhar method, in which the outgoing flux is expressed as a function of the entrance flux by means of an integral whose kernel is numerically derived.

  6. Study on Feasibility of Applying Function Approximation Moment Method to Achieve Reliability-Based Design Optimization

    International Nuclear Information System (INIS)

    Huh, Jae Sung; Kwak, Byung Man

    2011-01-01

    Robust optimization and reliability-based design optimization are methodologies employed to take into account the uncertainties of a system at the design stage. To apply such methodologies to industrial problems, accurate and efficient methods for estimating statistical moments and failure probability are required; furthermore, the results of the sensitivity analysis, which is needed for determining the search direction during the optimization process, should also be accurate. The aim of this study is to employ the function approximation moment method in the sensitivity analysis formulation, which is expressed in integral form, to verify the accuracy of the sensitivity results, and to solve a typical reliability-based design optimization problem. These results are compared with those of other moment methods, and the feasibility of the function approximation moment method is verified. The integral-form sensitivity analysis formula is efficient for evaluating sensitivity because no additional function evaluations are needed once the failure probability or statistical moments have been calculated.

  7. LOGICAL CONDITIONS ANALYSIS METHOD FOR DIAGNOSTIC TEST RESULTS DECODING APPLIED TO COMPETENCE ELEMENTS PROFICIENCY

    Directory of Open Access Journals (Sweden)

    V. I. Freyman

    2015-11-01

    Full Text Available Subject of Research. Representation features of education results for competence-based educational programs are analyzed. The importance of decoding and proficiency estimation for elements and components of discipline parts of competences is shown. The purpose and objectives of the research are formulated. Methods. The paper deals with methods of mathematical logic, Boolean algebra, and parametric analysis of complex diagnostic test results that control the proficiency of certain discipline competence elements. Results. A method of logical conditions analysis is created. It makes it possible to formulate a logical condition for the proficiency determination of each discipline competence element controlled by a complex diagnostic test. The normalized test result is divided into non-overlapping zones, and a logical condition about the proficiency of the controlled elements is formulated for each of them. Summarized characteristics for the test result zones are given. An example of forming logical conditions for a diagnostic test with preset features is provided. Practical Relevance. The proposed method of logical conditions analysis is applied in the decoding algorithm of proficiency test diagnosis for discipline competence elements. It makes it possible to automate the search procedure for elements with insufficient proficiency, and it is also usable for the estimation of education results of a discipline or a component of a competence-based educational program.

  8. An IMU-to-Body Alignment Method Applied to Human Gait Analysis

    Directory of Open Access Journals (Sweden)

    Laura Susana Vargas-Valencia

    2016-12-01

    Full Text Available This paper presents a novel calibration procedure as a simple, yet powerful, method to place and align inertial sensors with body segments. The calibration can be easily replicated without the need of any additional tools. The proposed method is validated in three different applications: a computer mathematical simulation; a simplified joint composed of two semi-spheres interconnected by a universal goniometer; and a real gait test with five able-bodied subjects. Simulation results demonstrate that, after the calibration method is applied, the joint angles are correctly measured independently of previous sensor placement on the joint, thus validating the proposed procedure. In the cases of a simplified joint and a real gait test with human volunteers, the method also performs correctly, although secondary plane errors appear when compared with the simulation results. We believe that such errors are caused by limitations of the current inertial measurement unit (IMU technology and fusion algorithms. In conclusion, the presented calibration procedure is an interesting option to solve the alignment problem when using IMUs for gait analysis.

  9. An IMU-to-Body Alignment Method Applied to Human Gait Analysis.

    Science.gov (United States)

    Vargas-Valencia, Laura Susana; Elias, Arlindo; Rocon, Eduardo; Bastos-Filho, Teodiano; Frizera, Anselmo

    2016-12-10

    This paper presents a novel calibration procedure as a simple, yet powerful, method to place and align inertial sensors with body segments. The calibration can be easily replicated without the need of any additional tools. The proposed method is validated in three different applications: a computer mathematical simulation; a simplified joint composed of two semi-spheres interconnected by a universal goniometer; and a real gait test with five able-bodied subjects. Simulation results demonstrate that, after the calibration method is applied, the joint angles are correctly measured independently of previous sensor placement on the joint, thus validating the proposed procedure. In the cases of a simplified joint and a real gait test with human volunteers, the method also performs correctly, although secondary plane errors appear when compared with the simulation results. We believe that such errors are caused by limitations of the current inertial measurement unit (IMU) technology and fusion algorithms. In conclusion, the presented calibration procedure is an interesting option to solve the alignment problem when using IMUs for gait analysis.

  10. Applying of whole-tree harvesting method; Kokopuujuontomenetelmaen soveltaminen aines- ja energiapuun hankintaan

    Energy Technology Data Exchange (ETDEWEB)

    Vesisenaho, T. [VTT Energy, Jyvaeskylae (Finland); Liukkonen, S. [VTT Manufacturing Technology, Espoo (Finland)

    1997-12-01

    The objective of this project is to apply the whole-tree harvesting method to Finnish timber harvesting conditions in order to lower the harvesting costs of energy wood and timber in spruce-dominant final cuttings. In Finnish conditions timber harvesting is normally based on the log-length method. Because of small landings and the high share of thinning cuttings, whole-tree skidding methods cannot be utilised extensively. The share of stands that could be harvested with the whole-tree skidding method turned out to be about 10% of the total harvesting amount of 50 mill. m³. The corresponding harvesting potential of energy wood is 0.25 Mtoe. The aim of the structural measurements made in this project was to get information about the effect of different hauling methods on the structural response of the tractor, and thus reveal the possible special requirements that the new whole-tree skidding places on forest tractor design. Altogether 7 strain-gauge-based sensors were mounted on the rear frame structures and drive shafts of the forest tractor. Five strain gauges measured local strains in critical details, and two sensors measured the torque moments of the front and rear bogie drive shafts. The revolution speed of the rear drive shaft was also recorded. Signal time histories, maximum peaks, Time at Level distributions and Rainflow distributions were gathered in different hauling modes. From these, maximum values, average stress levels and fatigue life estimates were calculated for each mode, and a comparison of the different methods from the structural point of view was performed.

  11. Brucellosis Prevention Program: Applying “Child to Family Health Education” Method

    Directory of Open Access Journals (Sweden)

    H. Allahverdipour

    2010-04-01

    Full Text Available Introduction & Objective: Pupils have great potential to increase community awareness and promote community health through participating in health education programs. The child-to-family health education program is one of the communicative strategies that was applied in this field trial study. Because of the high prevalence of brucellosis in Hamadan province, Iran, the aim of this study was to promote families' knowledge and preventive behaviors regarding brucellosis in rural areas by using the child-to-family health education method. Materials & Methods: In this nonequivalent control group design study, three rural schools were chosen (one as intervention and two as controls). At first, the knowledge and behavior of families about brucellosis were determined using a designed questionnaire. Then the families were educated through the "child to family" procedure: the students gained information and were then instructed to teach their parents what they had learned. Three months after the last education session, the level of knowledge and behavior changes of the families about brucellosis was determined and analyzed by paired t-test. Results: The results showed significant improvement in the knowledge of the mothers. The knowledge of the mothers about the signs of brucellosis in humans increased from 1.81 to 3.79 (t = -21.64, sig. = 0.000), and their knowledge of the signs of brucellosis in animals increased from 1.48 to 2.82 (t = -10.60, sig. = 0.000). Conclusion: The child-to-family health education program is an effective and accessible method that would be useful in most communities, and the students' potential could be employed in health promotion programs.

  12. Evaluation of cleaning methods applied in home environments after renovation and remodeling activities

    International Nuclear Information System (INIS)

    Yiin, L.-M.; Lu, S.-E.; Sannoh, Sulaiman; Lim, B.S.; Rhoads, G.G.

    2004-01-01

    We conducted a cleaning trial in 40 northern New Jersey homes where home renovation and remodeling (R and R) activities were undertaken. Two cleaning protocols were used in the study: a specific method recommended by the US Department of Housing and Urban Development (HUD), in the 1995 'Guidelines for the Evaluation and Control of Lead-Based Paint Hazards in Housing', using a high-efficiency particulate air (HEPA)-filtered vacuum cleaner and a tri-sodium phosphate solution (TSP); and an alternative method using a household vacuum cleaner and a household detergent. Eligible homes were built before the 1970s with potential lead-based paint and had recent R and R activities without thorough cleaning. The two cleaning protocols were randomly assigned to the participants' homes and followed the HUD-recommended three-step procedure: vacuuming, wet washing, and repeat vacuuming. Wipe sampling was conducted on floor surfaces or windowsills before and after cleaning to evaluate the efficacy. All floor and windowsill data indicated that both methods (TSP/HEPA and non-TSP/non-HEPA) were effective in reducing lead loading on the surfaces (P<0.001). When cleaning was applied to surfaces with initial lead loading above the clearance standards, the reductions were even greater, above 95% for either cleaning method. The mixed-effect model analysis showed no significant difference between the two methods. Baseline lead loading was found to be associated with lead loading reduction significantly on floors (P<0.001) and marginally on windowsills (P=0.077). Such relations were different between the two cleaning methods significantly on floors (P<0.001) and marginally on windowsills (P=0.066), with the TSP/HEPA method being favored for higher baseline levels and the non-TSP/non-HEPA method for lower baseline levels. For the 10 homes with lead abatement, almost all post-cleaning lead loadings were below the standards using either cleaning method. Based on our results, we recommend that

  13. Method for pulse to pulse dose reproducibility applied to electron linear accelerators

    International Nuclear Information System (INIS)

    Ighigeanu, D.; Martin, D.; Oproiu, C.; Cirstea, E.; Craciun, G.

    2002-01-01

    An original method for obtaining programmed single beam shots and pulse trains with programmed pulse number, pulse repetition frequency, pulse duration and pulse dose is presented. It is particularly useful for automatic control of the absorbed dose rate level and of the irradiation process, as well as in pulse radiolysis studies, single-pulse dose measurement, and research experiments where pulse-to-pulse dose reproducibility is required. The method is applied to the electron linear accelerators ALIN-10 (6.23 MeV, 82 W) and ALID-7 (5.5 MeV, 670 W), built at NILPRP. To implement this method, the accelerator triggering system (ATS) consists of two branches: the gun branch and the magnetron branch. The ATS, which synchronizes all the system units, delivers trigger pulses at a programmed repetition rate (up to 250 pulses/s) to the gun (80 kV, 10 A, 4 ms) and the magnetron (45 kV, 100 A, 4 ms). The existence of the accelerated electron beam is determined by the overlap of the electron gun and magnetron pulses. The method consists in controlling this overlap in order to deliver the beam in the desired sequence, implemented by a discrete pulse-position modulation of the gun and/or magnetron pulses. The instabilities of the gun and magnetron transient regimes are avoided by operating the accelerator with no accelerated beam for a certain time. At the operator's 'beam start' command, the ATS controls the overlap of the electron gun and magnetron pulses and the linac beam is generated. The pulse-to-pulse absorbed dose variation is thus considerably reduced. Programmed absorbed dose, irradiation time, beam pulse number or other external events may interrupt the coincidence between the gun and magnetron pulses. Slow absorbed dose variation is compensated by control of the pulse duration and repetition frequency. Two methods are reported in the electron linear accelerators' development for obtaining the pulse to pulse dose reproducibility: the method

  14. Postgraduate Education in Quality Improvement Methods: Initial Results of the Fellows' Applied Quality Training (FAQT) Curriculum.

    Science.gov (United States)

    Winchester, David E; Burkart, Thomas A; Choi, Calvin Y; McKillop, Matthew S; Beyth, Rebecca J; Dahm, Phillipp

    2016-06-01

    Training in quality improvement (QI) is a pillar of the next accreditation system of the Accreditation Council for Graduate Medical Education and a growing expectation of physicians for maintenance of certification. Despite this, many postgraduate medical trainees are not receiving training in QI methods. We created the Fellows' Applied Quality Training (FAQT) curriculum for cardiology fellows, using both didactic and applied components, with the goal of increasing confidence to participate in future QI projects. Fellows completed didactic training from the Institute for Healthcare Improvement's Open School and then designed and completed a project to improve quality of care or patient safety. Self-assessments were completed by the fellows before, during, and after the first year of the curriculum. The primary outcome for our curriculum was the median score reported by the fellows regarding their self-confidence to complete QI activities. Self-assessments were completed by 23 fellows. The majority of fellows (15 of 23, 65.2%) reported no prior formal QI training. The median score on the baseline self-assessment was 3.0 (range, 1.85-4), which increased significantly to 3.27 (range, 2.23-4; P = 0.004) on the final assessment. The distribution of scores reported by the fellows indicates that 30% were only slightly confident at conducting QI activities on their own, which was reduced to 5% after completing the FAQT curriculum. An interim assessment conducted after the fellows completed only the didactic training showed median scores no different from baseline (median, 3.0; P = 0.51). After completion of the FAQT, cardiology fellows reported higher self-confidence to complete QI activities. The increase in self-confidence seemed to be limited to the applied component of the curriculum, with no significant change after the didactic component.

  15. High-Resolution Seismic Methods Applied to Till-Covered Hard Rock Environments

    International Nuclear Information System (INIS)

    Bergman, Bjoern

    2005-01-01

    Reflection seismic and seismic tomography methods can be used to image the upper kilometer of hard bedrock and the loose unconsolidated sediments covering it. Developments of these two methods and their application, as well as the identification of issues concerning their usage, are the main focus of the thesis. The data used for this development were acquired at three different sites in Sweden: in Forsmark, 140 km north of Stockholm; in the Oskarshamn area in southern Sweden; and in the northern part of the Siljan Ring impact crater area. The reflection seismic data were acquired with long source-receiver offsets relative to some of the targeted depths to be imaged. In the initial processing, standard steps were applied, but the uppermost parts of the sections were not always clearly imaged. The longer offsets imply that pre-stack migration is necessary in order to image the uppermost bedrock as clearly as possible. Careful choice of filters and velocity functions improves the pre-stack migrated image, allowing better correlation with near-surface geological information. The seismic tomography method has been enhanced to calculate, simultaneously with the velocity inversion, optimal corrections to the picked first-break travel times in order to compensate for the delays caused by the seismic waves passing through the loose sediments covering the bedrock. The reflection seismic processing used in this thesis has produced high-quality images of the upper kilometers and, in one example from the Forsmark site, has improved the image of the uppermost 250 meters of the bedrock. The three-dimensional orientation of reflections has been determined at the Oskarshamn site. Correlation with borehole data shows that many of these reflections originate from fracture zones. The developed seismic tomography method produces highly detailed velocity models for the site in the Siljan impact area and for the Forsmark site. In Forsmark, detailed estimates of the bedrock topography were calculated with the use of

  16. Applying system engineering methods to site characterization research for nuclear waste repositories

    International Nuclear Information System (INIS)

    Woods, T.W.

    1985-01-01

    Nuclear research and engineering projects can benefit from the use of system engineering methods. This paper is a brief overview illustrating how system engineering methods could be applied in structuring a site characterization effort for a candidate nuclear waste repository. System engineering is simply an orderly process that has been widely used to transform a recognized need into a fully defined system. Such a system may be physical or abstract, natural or man-made, hardware or procedural, as is appropriate to the system's need or objective. It is a way of mentally visualizing all the constituent elements and their relationships necessary to fulfill a need, and doing so in compliance with all constraining requirements attendant to that need. Such a systems approach provides completeness, order, clarity, and direction. Admittedly, system engineering can be burdensome and inappropriate for those project objectives having simple and familiar solutions that are easily held and controlled mentally. However, some type of documented and structured approach is needed for those objectives that dictate extensive, unique, or complex programs, and/or the creation of state-of-the-art machines and facilities. System engineering methods have been used extensively and successfully in these cases. The scientific method has served well in ordering countless technical undertakings that address a specific question. Similarly, conventional construction and engineering job methods will continue to be quite adequate for organizing routine building projects. Nuclear waste repository site characterization projects involve multiple complex research questions and regulatory requirements that interface with each other and with advanced engineering and subsurface construction techniques. There is little doubt that system engineering is an appropriate orchestrating process to structure such diverse elements into a cohesive, well defined project.

  17. Resampling method for applying density-dependent habitat selection theory to wildlife surveys.

    Science.gov (United States)

    Tardy, Olivia; Massé, Ariane; Pelletier, Fanie; Fortin, Daniel

    2015-01-01

    Isodar theory can be used to evaluate fitness consequences of density-dependent habitat selection by animals. A typical habitat isodar is a regression curve plotting competitor densities in two adjacent habitats when individual fitness is equal. Despite the increasing use of habitat isodars, their application remains largely limited to areas composed of pairs of adjacent habitats that are defined a priori. We developed a resampling method that uses data from wildlife surveys to build isodars in heterogeneous landscapes without having to predefine habitat types. The method consists in randomly placing blocks over the survey area and dividing those blocks in two adjacent sub-blocks of the same size. Animal abundance is then estimated within the two sub-blocks. This process is done 100 times. Different functional forms of isodars can be investigated by relating animal abundance and differences in habitat features between sub-blocks. We applied this method to abundance data of raccoons and striped skunks, two of the main hosts of rabies virus in North America. Habitat selection by raccoons and striped skunks depended on both conspecific abundance and the difference in landscape composition and structure between sub-blocks. When conspecific abundance was low, raccoons and striped skunks favored areas with relatively high proportions of forests and anthropogenic features, respectively. Under high conspecific abundance, however, both species preferred areas with rather large corn-forest edge densities and corn field proportions. Based on random sampling techniques, we provide a robust method that is applicable to a broad range of species, including medium- to large-sized mammals with high mobility. The method is sufficiently flexible to incorporate multiple environmental covariates that can reflect key requirements of the focal species. We thus illustrate how isodar theory can be used with wildlife surveys to assess density-dependent habitat selection over large
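
    The block-resampling step can be prototyped on a raster of counts as below (Python with NumPy assumed; the survey grid, block size and counts are invented):

      import numpy as np

      rng = np.random.default_rng(5)
      counts = rng.poisson(0.3, size=(200, 200))   # hypothetical survey grid

      def sample_block_pairs(counts, block=20, n_iter=100):
          """Return (abundance_left, abundance_right) for randomly placed blocks,
          each block split into two adjacent sub-blocks of the same size."""
          rows, cols = counts.shape
          pairs = []
          for _ in range(n_iter):
              r = rng.integers(0, rows - block)
              c = rng.integers(0, cols - block)
              half = block // 2
              left = counts[r:r + block, c:c + half].sum()
              right = counts[r:r + block, c + half:c + block].sum()
              pairs.append((left, right))
          return np.array(pairs)

      pairs = sample_block_pairs(counts)
      print("first five sub-block abundance pairs:\n", pairs[:5])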

  18. Photonic simulation method applied to the study of structural color in Myxomycetes.

    Science.gov (United States)

    Dolinko, Andrés; Skigin, Diana; Inchaussandague, Marina; Carmaran, Cecilia

    2012-07-02

    We present a novel simulation method to investigate the multicolored effect of Diachea leucopoda (Physarales order, Myxomycetes class), a microorganism that has a characteristic pointillistic iridescent appearance. It was shown previously that this appearance is of structural origin and is produced within the peridium (the protective layer that encloses the mass of spores), which is basically a corrugated sheet of a transparent material. The main characteristics of the observed color were explained in terms of interference effects using a simple model of a homogeneous planar slab. In this paper we apply a novel simulation method to investigate the electromagnetic response of such a structure in more detail, i.e., taking into account the inhomogeneities of the biological material within the peridium and its curvature. We show that both features, which could not be considered within the simplified model, affect the observed color. The proposed method has great potential for the study of biological structures, which present a high degree of complexity in their geometrical shapes as well as in the materials involved.

  19. Impact of gene patents on diagnostic testing: a new patent landscaping method applied to spinocerebellar ataxia.

    Science.gov (United States)

    Berthels, Nele; Matthijs, Gert; Van Overwalle, Geertrui

    2011-11-01

    Recent reports in Europe and the United States raise concern about the potential negative impact of gene patents on the freedom to operate of diagnosticians and on the access of patients to genetic diagnostic services. Patents, historically seen as legal instruments to trigger innovation, could cause undesired side effects in the public health domain. Clear empirical evidence on the alleged hindering effect of gene patents is still scarce. We therefore developed a patent categorization method to determine which gene patents could indeed be problematic. The method is applied to patents relevant for genetic testing of spinocerebellar ataxia (SCA). The SCA test is probably the most widely used DNA test in (adult) neurology, as well as one of the most challenging due to the heterogeneity of the disease. Typically tested as a gene panel covering the five common SCA subtypes, we show that the patenting of SCA genes and testing methods and the associated licensing conditions could have far-reaching consequences on legitimate access to this gene panel. Moreover, with genetic testing being increasingly standardized, simply ignoring patents is unlikely to hold out indefinitely. This paper aims to differentiate among so-called 'gene patents' by lifting out the truly problematic ones. In doing so, awareness is raised among all stakeholders in the genetic diagnostics field who are not necessarily familiar with the ins and outs of patenting and licensing.

  20. IAEA-ASSET's root cause analysis method applied to sodium leakage incident at Monju

    International Nuclear Information System (INIS)

    Watanabe, Norio; Hirano, Masashi

    1997-08-01

    The present study applied the ASSET (Analysis and Screening of Safety Events Team) methodology (this method identifies occurrences such as component failures and operator errors, identifies their respective direct/root causes, and determines corrective actions) to the analysis of the sodium leakage incident at Monju, based on reports published mainly by the Science and Technology Agency, aiming at the systematic identification of direct/root causes and corrective actions, and discussed the effectiveness and problems of the ASSET methodology. The results revealed the following seven occurrences and showed the direct/root causes and contributing factors for each: failure of the thermometer well tube, delayed reactor manual trip, inadequate continuous monitoring of leakage, misjudgment of the leak rate, a non-required operator action (turbine trip), retarded emergency sodium drainage, and retarded securing of the ventilation system. Most of the occurrences stemmed from deficiencies in the emergency operating procedures (EOPs), which were mainly caused by defects in the EOP preparation process and operator training programs. The corrective actions already proposed in the published reports were reviewed, and issues to be studied further were identified. Possible corrective actions were discussed for these issues. The present study also demonstrated the effectiveness of the ASSET methodology and pointed out some problems, for example in delineating causal relations among occurrences, in applying it to the detailed and systematic analysis of event direct/root causes, and in the determination of concrete measures. (J.P.N.)

  1. The Application of Intensive Longitudinal Methods to Investigate Change: Stimulating the Field of Applied Family Research.

    Science.gov (United States)

    Bamberger, Katharine T

    2016-03-01

    The use of intensive longitudinal methods (ILM), rapid in situ assessment at micro timescales, can be overlaid on RCTs and other study designs in applied family research. Particularly when done as part of a multiple-timescale design, in bursts over macro timescales, ILM can advance the study of the mechanisms and effects of family interventions and processes of family change. ILM confers measurement benefits in accurately assessing momentary and variable experiences and captures fine-grained dynamic pictures of time-ordered processes. Thus, ILM affords opportunities to investigate new research questions about intervention effects on within-subject (i.e., within-person, within-family) variability (i.e., dynamic constructs) and about the time-ordered change process that interventions induce in families and family members, beginning with the first intervention session. This paper discusses the need and rationale for applying ILM to family intervention evaluation, new research questions that can be addressed with ILM, and example research using ILM in the related fields of basic family research and the evaluation of individual-based interventions. Finally, the paper touches on practical challenges and considerations associated with ILM and points readers to resources for the application of ILM.

  2. A method of applying two-pump system in automatic transmissions for energy conservation

    Directory of Open Access Journals (Sweden)

    Peng Dong

    2015-06-01

    Full Text Available In order to improve hydraulic efficiency, modern automatic transmissions tend to apply an electric oil pump in their hydraulic system. The electric oil pump can support the mechanical oil pump for cooling and lubrication and maintain the line pressure at low engine speeds. In addition, the start-stop function can be realized by means of the electric oil pump, so fuel consumption can be further reduced. This article proposes a method of applying a two-pump system (one electric oil pump and one mechanical oil pump) in automatic transmissions based on forward driving simulation. A mathematical model for calculating the transmission power loss is developed. The power loss is converted to heat, which requires oil flow for cooling and lubrication. A leakage model is developed to calculate the leakage of the hydraulic system. In order to satisfy the flow requirement, a flow-based control strategy for the electric oil pump is developed. Simulation results for different driving cycles show that there is a best combination of electric oil pump size and mechanical oil pump size with respect to optimal energy conservation. Besides, the two-pump system can also satisfy the requirement of the start-stop function. This research is extremely valuable for the forward design of a two-pump system in automatic transmissions with respect to energy conservation and the start-stop function.
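
    A toy version of such a flow-based rule is sketched below (all pump displacements, efficiencies and flow coefficients are invented, not the article's calibrated model): the electric pump covers whatever flow demand the engine-driven mechanical pump cannot meet:

      def required_flow(power_loss_kw, leakage_lpm):
          """Cooling flow proportional to heat load, plus leakage make-up (L/min)."""
          return 2.5 * power_loss_kw + leakage_lpm   # 2.5 L/min per kW is illustrative

      def mechanical_pump_flow(engine_rpm, cc_per_rev=16.0, vol_eff=0.9):
          """Fixed-displacement pump: flow scales linearly with engine speed."""
          return engine_rpm * cc_per_rev * vol_eff / 1000.0   # L/min

      def electric_pump_command(engine_rpm, power_loss_kw, leakage_lpm):
          demand = required_flow(power_loss_kw, leakage_lpm)
          supply = mechanical_pump_flow(engine_rpm)   # zero during start-stop events
          return max(0.0, demand - supply)

      for rpm in (0, 800, 2000):   # stopped engine, idle, cruising
          cmd = electric_pump_command(rpm, power_loss_kw=3.0, leakage_lpm=4.0)
          print(f"engine {rpm:4d} rpm -> electric pump flow {cmd:5.1f} L/min")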

  3. Improvement in the DTVG detection method as applied to cast austeno-ferritic steels

    International Nuclear Information System (INIS)

    Francois, D.

    1996-05-01

    Initially, the so-called DTVG method was developed to improve the detection and (lengthwise) dimensioning of cracks in austenitic steel assembly welds. The results obtained during that study and the structural similarity between austenitic and austeno-ferritic steels led us to carry out research into adapting the method on a sample whose material is representative of the cast steels used in PWR primary circuit bends. The method was first adapted for use on thick-wall cast austeno-ferritic steel structures and was validated for zero ultrasonic beam incidence and for a flat sample with machine-finished reflectors. A second study was carried out, notably to allow for non-zero ultrasonic beam incidence and to examine the method's validity when applied to a non-flat geometry. There were three principal goals to the research: adapting the process to take into account the special case of oblique ultrasonic beam incidence (B-image handling), examining the effect of non-flat geometry on the detection method, and evaluating the performance of the method on actual defects (shrinkage cavities). We began by focusing on solving the problem of oblique incidence. Having opted for automatic determination of the refracted angle, the problem could only be solved by locking the algorithm on a representative image of the suspect material comprising an indicator. We then used a simple geometric model to quantify the deformation of the indicators on a B-scan image due to a non-flat translator/part interface. Finally, tests were carried out on measurements acquired from flat samples containing artificial and real defects so that the overall performance of the method after development could be assessed. This work has allowed the DTVG detection method to be adapted for use with B-scan images acquired with a non-zero ultrasonic beam incidence angle. Moreover, we have been able to show that for geometries similar to those of the cast bends and for deep defects the deformation of the indicators due

  4. Applying RP-FDM Technology to Produce Prototype Castings Using the Investment Casting Method

    Directory of Open Access Journals (Sweden)

    M. Macků

    2012-09-01

    Full Text Available The research focused on the production of prototype castings, which is mapped out starting from the drawing documentation up to the production of the casting itself. The FDM method was applied for the production of the 3D pattern. Its main objective was to find out what dimensional changes happened during individual production stages, starting from the 3D pattern printing through a silicon mould production, wax patterns casting, making shells, melting out wax from shells and drying, up to the production of the final casting itself. Five measurements of determined dimensions were made during the production, which were processed and evaluated mathematically. A determination of shrinkage and a proposal of measures to maintain the dimensional stability of the final casting so as to meet requirements specified by a customer were the results.

  5. Comparison of gradient methods for gain tuning of a PD controller applied on a quadrotor system

    Science.gov (United States)

    Kim, Jinho; Wilkerson, Stephen A.; Gadsden, S. Andrew

    2016-05-01

    Many mechanical and electrical systems utilize the proportional-integral-derivative (PID) control strategy. The concept of PID control is a classical approach, but it is easy to implement and yields very good tracking performance. Unmanned aerial vehicles (UAVs) are currently experiencing a significant growth in popularity, and due to the advantages of PID controllers, UAVs implement them for improved stability and performance. An important consideration for the system is the selection of PID gain values in order to achieve a safe flight and a successful mission. There are a number of different algorithms that can be used for real-time tuning of gains. This paper presents two algorithms for gain tuning, based on the method of steepest descent and on Newton's method for minimizing an objective function, and compares the results of applying these two gain tuning algorithms in conjunction with a PD controller on a quadrotor system.
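
    A minimal sketch of the steepest-descent variant is given below, assuming a scalar objective J(Kp, Kd) such as the integrated squared tracking error evaluated by simulation; the cost function, gains and step size are illustrative stand-ins, not the paper's quadrotor model. A Newton variant would instead step along the direction given by the inverse Hessian times the gradient.

```python
# Minimal sketch of steepest-descent gain tuning for a PD controller. The
# objective here is a toy quadratic stand-in for a simulated tracking cost.
import numpy as np

def evaluate_cost(gains):
    """Hypothetical objective J(Kp, Kd), e.g. integrated squared error."""
    kp, kd = gains
    return (kp - 4.0) ** 2 + 2.0 * (kd - 1.5) ** 2   # toy quadratic stand-in

def numeric_gradient(f, x, eps=1e-4):
    """Central-difference gradient, since dJ/dK is rarely available in closed form."""
    grad = np.zeros_like(x)
    for i in range(len(x)):
        step = np.zeros_like(x)
        step[i] = eps
        grad[i] = (f(x + step) - f(x - step)) / (2 * eps)
    return grad

def steepest_descent(f, x0, lr=0.1, iters=100):
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        x -= lr * numeric_gradient(f, x)   # move against the gradient
    return x
    # Newton's method would use the Hessian H: x -= inv(H) @ grad.

print(steepest_descent(evaluate_cost, [1.0, 0.5]))   # converges near [4.0, 1.5]
```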

  6. Study of different ultrasonic focusing methods applied to non destructive testing

    International Nuclear Information System (INIS)

    El Amrani, M.

    1995-01-01

    The work presented in this thesis concerns the study of different ultrasonic focusing techniques applied to nondestructive testing (mechanical focusing and electronic focusing) and compares their capabilities. We have developed a model to predict the ultrasonic field radiated into a solid by water-coupled transducers. The model is based upon the Rayleigh integral formulation, modified to take into account the refraction at the liquid-solid interface. The model has been validated by numerous experiments in various configurations. Using this model and the associated software, we have developed new methods to optimize focused transducers and studied the characteristics of the beams generated by transducers using various focusing techniques. (author). 120 refs., 95 figs., 4 appends

  7. Adding randomness controlling parameters in GRASP method applied in school timetabling problem

    Directory of Open Access Journals (Sweden)

    Renato Santos Pereira

    2017-09-01

    Full Text Available This paper studies the influence of randomness controlling parameters (RCP) in the first-stage GRASP method applied to a graph coloring problem, specifically school timetabling problems in a public high school; a sketch of the construction phase follows below. The algorithm (with the inclusion of RCP) was based on critical variables identified through focus groups, whose weights can be adjusted by the user in order to meet the institutional needs. The results of the computational experiment, with 11 years of data (66 observations) processed at the same high school, show that the inclusion of RCP significantly lowers the distance between initial solutions and local minima. The acceptance and use of the solutions found allow us to conclude that the modified GRASP, as constructed, can make a positive contribution to the timetabling problem of the school in question.
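
    A minimal sketch of a GRASP construction phase with a randomness controlling parameter is shown below. Here the RCP plays the role of the classical alpha of the restricted candidate list (alpha = 0 is purely greedy, alpha = 1 purely random); the candidates and greedy cost are generic stand-ins, not the paper's timetabling model.

```python
# Minimal sketch of GRASP construction with a randomness controlling parameter
# (alpha) governing the width of the restricted candidate list (RCL).
import random

def grasp_construct(candidates, greedy_cost, alpha=0.3):
    """Build a solution by repeatedly picking a random element of the RCL."""
    solution, remaining = [], list(candidates)
    while remaining:
        costs = {c: greedy_cost(c, solution) for c in remaining}
        c_min, c_max = min(costs.values()), max(costs.values())
        threshold = c_min + alpha * (c_max - c_min)   # RCP widens/narrows the RCL
        rcl = [c for c in remaining if costs[c] <= threshold]
        choice = random.choice(rcl)
        solution.append(choice)
        remaining.remove(choice)
    return solution

# Toy usage: order items so consecutive values are close (stand-in greedy cost).
items = [7, 3, 9, 1, 5]
cost = lambda c, sol: abs(c - sol[-1]) if sol else c
print(grasp_construct(items, cost, alpha=0.3))
```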

  8. Applying RP-FDM Technology to Produce Prototype Castings Using the Investment Casting Method

    Directory of Open Access Journals (Sweden)

    Macků M.

    2012-09-01

    Full Text Available The research focused on the production of prototype castings, which is mapped out starting from the drawing documentation up to the production of the casting itself. The FDM method was applied for the production of the 3D pattern. Its main objective was to find out what dimensional changes happened during individual production stages, starting from the 3D pattern printing through a silicon mould production, wax patterns casting, making shells, melting out wax from shells and drying, up to the production of the final casting itself. Five measurements of determined dimensions were made during the production, which were processed and evaluated mathematically. A determination of shrinkage and a proposal of measures to maintain the dimensional stability of the final casting so as to meet requirements specified by a customer were the results.

  9. Applied methods for mitigation of damage by stress corrosion in BWR type reactors

    International Nuclear Information System (INIS)

    Hernandez C, R.; Diaz S, A.; Gachuz M, M.; Arganis J, C.

    1998-01-01

    Boiling Water Reactors (BWRs) have presented stress corrosion problems, mainly in components and pipes of the primary system, with negative impacts on the performance of power plants as well as increased radiation exposure of the personnel involved. This problem has driven the development of research programs aimed at finding alternative solutions for controlling the phenomenon. Among the most relevant results, the control of the reactor water chemistry stands out, particularly regarding the concentration of impurities and the oxidation products of radiolysis, together with care in materials selection and the reduction of stress levels. The present work presents the methods that can be applied to diminish stress corrosion problems in BWR reactors. (Author)

  10. Applied methods and techniques for mechatronic systems modelling, identification and control

    CERN Document Server

    Zhu, Quanmin; Cheng, Lei; Wang, Yongji; Zhao, Dongya

    2014-01-01

    Applied Methods and Techniques for Mechatronic Systems brings together the relevant studies in mechatronic systems with the latest research from interdisciplinary theoretical studies, computational algorithm development and exemplary applications. Readers can easily tailor the techniques in this book to accommodate their ad hoc applications. The clear structure of each paper (background, motivation, quantitative development with equations, and case studies/illustrations/tutorials with curves, tables, etc.) is also helpful. It is mainly aimed at graduate students, professors and academic researchers in related fields, but it will also be helpful to engineers and scientists from industry. Lei Liu is a lecturer at Huazhong University of Science and Technology (HUST), China; Quanmin Zhu is a professor at University of the West of England, UK; Lei Cheng is an associate professor at Wuhan University of Science and Technology, China; Yongji Wang is a professor at HUST; Dongya Zhao is an associate professor at China University o...

  11. Borehole-to-borehole geophysical methods applied to investigations of high level waste repository sites

    International Nuclear Information System (INIS)

    Ramirez, A.L.

    1983-01-01

    This discussion focuses on the use of borehole to borehole geophysical measurements to detect geological discontinuities in High Level Waste (HLW) repository sites. The need for these techniques arises from: (a) the requirement that a HLW repository's characteristics and projected performance be known with a high degree of confidence; and (b) the inadequacy of other geophysical methods in mapping fractures. Probing configurations which can be used to characterize HLW sites are described. Results from experiments in which these techniques were applied to problems similar to those expected at repository sites are briefly discussed. The use of a procedure designed to reduce uncertainty associated with all geophysical exploration techniques is proposed; key components of the procedure are defined

  12. Applied mechanics of the Puricelli osteotomy: a linear elastic analysis with the finite element method

    Directory of Open Access Journals (Sweden)

    de Paris Marcel

    2007-11-01

    Full Text Available Abstract. Background: Surgical orthopedic treatment of the mandible depends on the development of techniques resulting in adequate healing processes. In a new technical and conceptual alternative recently introduced by Puricelli, osteotomy is performed in a more distal region, next to the mental foramen. The method results in an increased area of bone contact, resulting in larger sliding rates among bone segments. This work aimed to investigate the mechanical stability of the Puricelli osteotomy design. Methods: Laboratory tests complied with an Applied Mechanics protocol, in which results from the Control group (without osteotomy) were compared with those from Test I (Obwegeser-Dal Pont osteotomy) and Test II (Puricelli osteotomy) groups. Edentulous mandible prototypes were scanned using computerized tomography, and the digitalized images were used to build voxel-based finite element models. A new code was developed for solving the voxel-based finite element equations, using a reconditioned conjugate gradients iterative solver. The Magnitude of Displacement and von Mises equivalent stress fields were compared among the three groups. Results: In Test Group I, maximum stress was seen in the region of the rigid internal fixation plate, with a value greater than those of the Test II and Control groups. In Test Group II, maximum stress was in the same region as in the Control group, but was lower. The results of this comparative study using Finite Element Analysis suggest that the Puricelli osteotomy presents better mechanical stability than the original Obwegeser-Dal Pont technique. The increased area of the proximal segment and the consequent decrease of the lever arm applied to the mandible in the modified technique yielded lower stress values, and consequently greater stability of the bone segments. Conclusion: This work showed that Puricelli osteotomy of the mandible results in greater mechanical stability when compared to the original technique introduced by
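
    A minimal sketch of the kind of preconditioned conjugate gradient iteration used for the voxel-based system K u = f is shown below; a Jacobi (diagonal) preconditioner stands in for the paper's solver, and the tiny symmetric positive-definite matrix is a toy stand-in for a stiffness matrix.

```python
# Minimal sketch of a Jacobi-preconditioned conjugate gradient solver for K u = f.
import numpy as np

def conjugate_gradient(K, f, tol=1e-8, max_iter=1000):
    u = np.zeros_like(f)
    M_inv = 1.0 / np.diag(K)            # Jacobi preconditioner
    r = f - K @ u                       # initial residual
    z = M_inv * r
    p = z.copy()
    rz = r @ z
    for _ in range(max_iter):
        Kp = K @ p
        alpha = rz / (p @ Kp)
        u += alpha * p
        r -= alpha * Kp
        if np.linalg.norm(r) < tol:
            break
        z = M_inv * r
        rz_new = r @ z
        p = z + (rz_new / rz) * p       # update the search direction
        rz = rz_new
    return u

K = np.array([[4.0, 1.0], [1.0, 3.0]])  # small SPD stand-in for a stiffness matrix
f = np.array([1.0, 2.0])
print(conjugate_gradient(K, f))          # ~ [0.0909, 0.6364]
```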

  13. Infrared thermography inspection methods applied to the target elements of W7-X Divertor

    International Nuclear Information System (INIS)

    Missirlian, M.; Durocher, A.; Schlosser, J.; Farjon, J.-L.; Vignal, N.; Traxler, H.; Schedler, B.; Boscary, J.

    2006-01-01

    As the heat exhaust capability and lifetime of a plasma-facing component (PFC) during in-situ operation are linked to manufacturing quality, a set of non-destructive tests must be performed during the R-and-D and manufacturing phases. Within this framework, advanced non-destructive examination (NDE) methods are one of the key issues in achieving a high level of quality and reliability of joining techniques in the production of high heat flux components, and also in successfully developing and building PFCs for the next generation of fusion devices. In this frame, two NDE infrared thermographic approaches, which have recently been applied to the qualification of the CFC target elements of the W7-X divertor during the first series production, are discussed in this paper. The first one, developed by CEA (SATIR facility) and used successfully for the control of the mass-produced actively cooled PFCs on Tore Supra, is based on transient thermography, where the testing protocol consists in inducing a thermal transient within the heat sink structure by an alternating hot/cold water flow. The second one, recently developed by PLANSEE (ARGUS facility), is based on pulsed thermography, where the component is heated externally by a single powerful flash of light. Results obtained in qualification experiments performed during the first series production of W7-X divertor components, representing about thirty mock-ups with artificial and manufacturing defects, demonstrated the capabilities of these two methods and raised the efficiency of inspection to a level appropriate for industrial application. This comparative study, together with a cross-checking analysis between the high heat flux performance tests and these infrared thermography inspection methods, showed good reproducibility and allowed a detection limit specific to each method to be set. Finally, the detectability of relevant defects showed excellent coincidence with thermal images obtained from high heat flux

  14. Oil Spill Trajectories from HF Radars: Applied Dynamical Systems Methods vs. a Lagrangian Stochastic Model

    Science.gov (United States)

    Emery, B. M.; Washburn, L.; Mezic, I.; Loire, S.; Arbabi, H.; Ohlmann, C.; Harlan, J.

    2016-02-01

    We apply several analysis methods to HF radar ocean surface current maps to investigate improvements in trajectory modeling. Results from a Lagrangian Stochastic Model (LSM) are compared with methods based on dynamical systems theory: hypergraphs and Koopman mode analysis. The LSM produces trajectories by integrating Eulerian fields from the HF radar, and accounts for sub-grid-scale velocity variability by including a random component based on the Lagrangian decorrelation time. Hypergraphs also integrate the HF radar maps in time, showing areas of strain, strain-rotation, and mixing, by plotting the relative strengths of the eigenvalues of the gradient of the time-averaged Lagrangian velocity. Koopman mode analysis decomposes the velocity field into modes of variability, similarly to EOF or Fourier analysis, though each Koopman mode varies in time with a distinct frequency. Each method simulates oil drift from the oil spill of May 2015 that occurred within the coverage area of the HF radars, in the Santa Barbara Channel near Refugio Beach, CA. Preliminary results indicate some skill in determining the transport of oil when compared to publicly available observations of oil in the Santa Barbara Channel. These simulations have not shown a connection between the Refugio spill site and oil observations in Santa Monica Bay, near Los Angeles, CA, though accumulation zones shown by the hypergraphs correlate in time and space with these observations. Improvements in HF radar coverage and accuracy were observed during the spill owing to the deployment of an additional HF radar site near Gaviota, CA. Presently we are collecting observations of oil on beaches and in the ocean, determining the role of winds in the oil movement, and refining the methods. Some HF radar data are being post-processed to incorporate recent antenna calibrations for sites in Santa Monica Bay. We will evaluate the effects of the newly processed data on the analysis results.

  15. Balancing a U-Shaped Assembly Line by Applying Nested Partitions Method

    Energy Technology Data Exchange (ETDEWEB)

    Bhagwat, Nikhil V. [Iowa State Univ., Ames, IA (United States)

    2005-01-01

    In this study, we applied the Nested Partitions method to a U-line balancing problem and conducted experiments to evaluate the application; a generic sketch of the method follows below. From the results, it is quite evident that the Nested Partitions method provided near-optimal solutions (optimal in some cases). Besides, the execution time is quite short compared to the Branch and Bound algorithm. However, for larger data sets, the algorithm took significantly longer to execute. One of the reasons could be the way in which the random samples are generated. In the present study, a random sample is a solution in itself, which requires assignment of tasks to various stations. The time taken to assign tasks to stations is directly proportional to the number of tasks. Thus, if the number of tasks increases, the time taken to generate random samples for the different regions also increases. The performance index for the Nested Partitions method in the present study was the number of stations in the random solutions (samples) generated. The total idle time for the samples could be used as another performance index. The ULINO method is known to have used a combination of bounds to come up with good solutions. This approach of combining different performance indices could be used to evaluate the random samples and obtain even better solutions. Here, we used deterministic time values for the tasks. In industries where the majority of tasks are performed manually, the stochastic version of the problem could be of vital importance. Experimenting with different objective functions (the number of stations was used in this study) could be of significance to industries where the cost associated with the creation of a new station is not the same; for such industries, the results obtained using the present approach will not be of much value. Labor costs, task incompletion costs, or a combination of these can be effectively used as alternative objective functions.
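
    The sketch below illustrates the generic Nested Partitions loop (partition the most promising region, sample each subregion and the surrounding region, compare promising indices, then move in or backtrack) on a toy one-dimensional minimization; the assembly-line specifics of the study are abstracted into a generic objective.

```python
# Minimal sketch of the Nested Partitions method: partition / sample /
# promising-index / backtrack, on a toy 1-D objective.
import random

def nested_partitions(f, lo, hi, iters=40, samples=10):
    region = (lo, hi)                       # current most promising region
    best_x, best_val = None, float("inf")
    for _ in range(iters):
        a, b = region
        mid = (a + b) / 2.0
        subregions = [(a, mid), (mid, b)]
        surrounding = [s for s in [(lo, a), (b, hi)] if s[0] < s[1]]
        scores = []
        for (ra, rb) in subregions + surrounding:
            xs = [random.uniform(ra, rb) for _ in range(samples)]
            vals = [f(x) for x in xs]
            i = min(range(len(vals)), key=vals.__getitem__)
            scores.append((vals[i], xs[i], (ra, rb)))
            if vals[i] < best_val:
                best_val, best_x = vals[i], xs[i]
        _, _, winner = min(scores)
        # Move into the best subregion, or backtrack to the whole space when a
        # surrounding region wins the promising-index comparison.
        region = winner if winner in subregions else (lo, hi)
    return best_x, best_val

print(nested_partitions(lambda x: (x - 2.7) ** 2, 0.0, 10.0))
```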

  16. Design and fabrication of facial prostheses for cancer patient applying computer aided method and manufacturing (CADCAM)

    Science.gov (United States)

    Din, Tengku Noor Daimah Tengku; Jamayet, Nafij; Rajion, Zainul Ahmad; Luddin, Norhayati; Abdullah, Johari Yap; Abdullah, Abdul Manaf; Yahya, Suzana

    2016-12-01

    Facial defects are either congenital or caused by trauma or cancer, and most of them affect the person's appearance. Emotional pressure and low self-esteem are problems commonly related to patients with a facial defect. To overcome these problems, a silicone prosthesis is designed to cover the defective part. This study describes the techniques used in designing and fabricating a facial prosthesis applying the computer aided method and manufacturing (CADCAM). The steps of fabricating the facial prosthesis were based on a patient case. The patient was diagnosed with Gorlin-Goltz syndrome and came to Hospital Universiti Sains Malaysia (HUSM) for a prosthesis. A 3D image of the patient was reconstructed from CT data using the MIMICS software. Based on the 3D image, the intercanthal and zygomatic measurements of the patient were compared with available data in the database to find a suitable nose shape. A normal nose shape for the patient was retrieved from the nasal digital library. A mirror-imaging technique was used to mirror the facial part. The final design of the facial prosthesis, including eye, nose and cheek, was superimposed to view the result virtually. After the final design was confirmed, the mould was designed. The mould of the nasal prosthesis was printed using an Objet 3D printer. Silicone casting was done using the 3D-printed mould. The final prosthesis produced with this computer aided method was acceptable for use in facial rehabilitation, providing a better quality of life.

  17. Goal oriented soil mapping: applying modern methods supported by local knowledge: A review

    Science.gov (United States)

    Pereira, Paulo; Brevik, Eric; Oliva, Marc; Estebaranz, Ferran; Depellegrin, Daniel; Novara, Agata; Cerda, Artemi; Menshov, Oleksandr

    2017-04-01

    In recent years the amount of available soil data has increased substantially. This has facilitated the production of better and more accurate maps, important for sustainable land management (Pereira et al., 2017). Despite these advances, human knowledge is extremely important to understand the natural characteristics of the landscape. The knowledge accumulated and transmitted generation after generation is priceless and should be considered as a valuable data source for soil mapping and modelling. Local knowledge and wisdom can complement the new advances in soil analysis. In addition, farmers are the most interested in the participation and incorporation of their knowledge in the models, since they are the end-users of the studies that soil scientists produce. Integration of local communities' vision and understanding of nature is assumed to be an important step in the implementation of decision makers' policies. Despite this, many challenges remain regarding the integration of local and scientific knowledge, since in some cases there is no spatial correlation between folk and scientific classifications, which may be attributed to the different cultural variables that influence local soil classification. The objective of this work is to review how modern soil mapping methods have incorporated local knowledge in their models. References: Pereira, P., Brevik, E., Oliva, M., Estebaranz, F., Depellegrin, D., Novara, A., Cerda, A., Menshov, O. (2017) Goal oriented soil mapping: applying modern methods supported by local knowledge. In: Pereira, P., Brevik, E., Munoz-Rojas, M., Miller, B. (Eds.) Soil Mapping and Process Modelling for Sustainable Land Use Management (Elsevier Publishing House) ISBN: 9780128052006

  18. [An experimental assessment of methods for applying intestinal sutures in intestinal obstruction].

    Science.gov (United States)

    Akhmadudinov, M G

    1992-04-01

    The results of various methods used in applying intestinal sutures in obturation were studied. Three series of experiments were conducted on 30 dogs: resection of the intestine after obstruction with the formation of anastomoses by means of a double-row suture (Albert-Schmieden-Lambert) in the first series (10 dogs), by a single-row suture after V. M. Mateshchuk in the second series, and by a single-row stretching suture suggested by the author in the third series. The postoperative complications and the parameters of physical airtightness of the intestinal anastomosis were studied over time in the experimental animals. The results of the study: incompetence of the anastomosis sutures occurred in 6 animals in the first series, 4 in the second, and one in the third. Adhesions occurred in all animals of the first and second series and in 2 of the third series. Six dogs of the first series died, 4 of the second, and one of the third. Study of the dynamics of the results showed a direct connection between the complications and the parameters of the physical airtightness of the anastomosis, and between the latter and the method of intestinal suture. Relatively better results were noted with formation of the anastomosis by means of the suggested continuous stretching suture passed through the serous, muscular, and submucous coats of the intestine.

  19. Metaheuristics for multi products inventory routing problem with time varying demand

    Science.gov (United States)

    Moin, Noor Hasnah; Ab Halim, Huda Zuhrah; Yuliana, Titi

    2014-07-01

    This paper addresses the inventory routing problem (IRP) with a many-to-one distribution network, consisting of a single depot, an assembly plant, and geographically dispersed suppliers, where a capacitated homogeneous vehicle delivers a distinct product from the suppliers to fulfill the demand specified by the assembly plant over the planning horizon. The inventory holding cost is assumed to be product specific and is only incurred at the assembly plant. Two metaheuristics, an artificial bee colony (ABC) algorithm and a scatter search (SS) algorithm, are proposed to solve the problem. Computational testing on instances representing small, medium, and large data sets shows that the ABC algorithm performs slightly better than SS overall, except for the fifty-supplier problems.

  20. Integer programming formulation and variable neighborhood search metaheuristic for the multiproduct pipeline scheduling problem

    Energy Technology Data Exchange (ETDEWEB)

    Souza Filho, Erito M.; Bahiense, Laura; Ferreira Filho, Virgilio J.M. [Universidade Federal do Rio de Janeiro (UFRJ), RJ (Brazil). Coordenacao dos Programas de Pos-graduacao de Engenharia (COPPE); Lima, Leonardo [Centro Federal de Educacao Tecnologica Celso Sukow da Fonseca (CEFET-RJ), Rio de Janeiro, RJ (Brazil)

    2008-07-01

    Pipelines are known as the most reliable and economical mode of transportation for petroleum and its derivatives, especially when large amounts of products have to be pumped over large distances. In this work we address the short-term scheduling of a pipeline system comprising the distribution of several petroleum derivatives from a single oil refinery to several depots, connected to local consumer markets, through a single multi-product pipeline. We propose an integer linear programming formulation and a variable neighborhood search metaheuristic (sketched below) in order to compare the performance of the exact and heuristic approaches to the problem. Computational tests in the C language and the MOSEL/XPRESS-MP language are performed on a real Brazilian pipeline system. (author)
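
    A minimal sketch of the variable neighborhood search skeleton is given below; the neighborhoods are generic k-swap moves on a permutation (standing in for, say, an ordering of pumping operations), and the real pipeline constraints are abstracted into the cost function.

```python
# Minimal sketch of variable neighborhood search (VNS): shake in the k-th
# neighborhood, descend with local search, restart or enlarge k.
import random

def vns(cost, x0, k_max=3, iters=50):
    def shake(x, k):
        """Random jump in the k-th neighborhood: k random swaps."""
        y = x[:]
        for _ in range(k):
            i, j = random.sample(range(len(y)), 2)
            y[i], y[j] = y[j], y[i]
        return y

    def local_search(x):
        """First-improvement descent over single swaps."""
        improved = True
        while improved:
            improved = False
            for i in range(len(x)):
                for j in range(i + 1, len(x)):
                    y = x[:]
                    y[i], y[j] = y[j], y[i]
                    if cost(y) < cost(x):
                        x, improved = y, True
        return x

    best = local_search(x0)
    for _ in range(iters):
        k = 1
        while k <= k_max:
            candidate = local_search(shake(best, k))
            if cost(candidate) < cost(best):
                best, k = candidate, 1     # success: back to the first neighborhood
            else:
                k += 1                     # failure: try a larger neighborhood
    return best

# Toy usage: order batches into ascending position (stand-in objective).
batches = list(range(8)); random.shuffle(batches)
print(vns(lambda s: sum(abs(s[i] - i) for i in range(len(s))), batches))
```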

  1. The coral reefs optimization algorithm: a novel metaheuristic for efficiently solving optimization problems.

    Science.gov (United States)

    Salcedo-Sanz, S; Del Ser, J; Landa-Torres, I; Gil-López, S; Portilla-Figueras, J A

    2014-01-01

    This paper presents a novel bioinspired algorithm to tackle complex optimization problems: the coral reefs optimization (CRO) algorithm. The CRO algorithm artificially simulates a coral reef, where different corals (namely, solutions to the optimization problem considered) grow and reproduce in coral colonies, fighting by choking out other corals for space in the reef. This fight for space, along with the specific characteristics of the corals' reproduction, produces a robust metaheuristic algorithm shown to be powerful for solving hard optimization problems. In this research the CRO algorithm is tested in several continuous and discrete benchmark problems, as well as in practical application scenarios (i.e., optimum mobile network deployment and off-shore wind farm design). The obtained results confirm the excellent performance of the proposed algorithm and open a line of research for further application of the algorithm to real-world problems.

  2. Data classification using metaheuristic Cuckoo Search technique for Levenberg Marquardt back propagation (CSLM) algorithm

    Science.gov (United States)

    Nawi, Nazri Mohd.; Khan, Abdullah; Rehman, M. Z.

    2015-05-01

    Nature-inspired metaheuristic techniques provide derivative-free solutions to complex problems. One of the latest additions to this group of nature-inspired optimization procedures is the Cuckoo Search (CS) algorithm. Artificial Neural Network (ANN) training is an optimization task, since it is desired to find an optimal weight set for the neural network in the training process. Traditional training algorithms have limitations such as getting trapped in local minima and a slow convergence rate. This study proposes a new technique, CSLM, combining the best features of two known algorithms, back-propagation (BP) and the Levenberg-Marquardt (LM) algorithm, to improve the convergence speed of ANN training and to avoid the local minima problem. Some selected benchmark classification datasets are used for simulation. The experimental results show that the proposed cuckoo search with Levenberg-Marquardt algorithm performs better than the other algorithms used in this study.
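
    The cuckoo search core that CSLM builds on can be sketched as below. In the paper the nests would encode ANN weight vectors scored by training error (here a generic test function stands in), the Levenberg-Marquardt refinement step is omitted, and all parameter values are illustrative.

```python
# Minimal sketch of the core Cuckoo Search loop: Levy flights plus abandonment
# of a fraction of the worst nests.
import numpy as np

def levy_step(dim, beta=1.5):
    """Mantegna's algorithm for Levy-distributed step lengths."""
    from math import gamma, sin, pi
    sigma = (gamma(1 + beta) * sin(pi * beta / 2)
             / (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = np.random.normal(0, sigma, dim)
    v = np.random.normal(0, 1, dim)
    return u / np.abs(v) ** (1 / beta)

def cuckoo_search(f, dim=5, n_nests=15, pa=0.25, iters=500, alpha=0.01):
    nests = np.random.uniform(-5, 5, (n_nests, dim))
    fitness = np.apply_along_axis(f, 1, nests)
    for _ in range(iters):
        best = nests[np.argmin(fitness)]
        for i in range(n_nests):
            # New candidate solution via a Levy flight biased toward the best nest.
            new = nests[i] + alpha * levy_step(dim) * (nests[i] - best)
            if f(new) < fitness[i]:
                nests[i], fitness[i] = new, f(new)
        # Abandon a fraction pa of the worst nests and rebuild them at random.
        n_drop = int(pa * n_nests)
        worst = np.argsort(fitness)[-n_drop:]
        nests[worst] = np.random.uniform(-5, 5, (n_drop, dim))
        fitness[worst] = np.apply_along_axis(f, 1, nests[worst])
    return nests[np.argmin(fitness)]

print(cuckoo_search(lambda x: float(np.sum(x ** 2))))   # sphere test function
```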

  3. The Coral Reefs Optimization Algorithm: A Novel Metaheuristic for Efficiently Solving Optimization Problems

    Science.gov (United States)

    Salcedo-Sanz, S.; Del Ser, J.; Landa-Torres, I.; Gil-López, S.; Portilla-Figueras, J. A.

    2014-01-01

    This paper presents a novel bioinspired algorithm to tackle complex optimization problems: the coral reefs optimization (CRO) algorithm. The CRO algorithm artificially simulates a coral reef, where different corals (namely, solutions to the optimization problem considered) grow and reproduce in coral colonies, fighting by choking out other corals for space in the reef. This fight for space, along with the specific characteristics of the corals' reproduction, produces a robust metaheuristic algorithm shown to be powerful for solving hard optimization problems. In this research the CRO algorithm is tested in several continuous and discrete benchmark problems, as well as in practical application scenarios (i.e., optimum mobile network deployment and off-shore wind farm design). The obtained results confirm the excellent performance of the proposed algorithm and open a line of research for further application of the algorithm to real-world problems. PMID:25147860

  4. Modelling and Metaheuristic for Gantry Crane Scheduling and Storage Space Allocation Problem in Railway Container Terminals

    Directory of Open Access Journals (Sweden)

    Ming Zeng

    2017-01-01

    Full Text Available The gantry crane scheduling and storage space allocation problem in the main container yard of a railway container terminal is studied. A mixed integer programming model is formulated which comprehensively considers the handling procedures, non-crossing constraints, the safety margin and traveling time of gantry cranes, and the storage modes in the main area. A metaheuristic named the backtracking search algorithm (BSA) is then improved to solve this intractable problem. A series of computational experiments is carried out to evaluate the performance of the proposed algorithm on randomly generated cases based on practical operating conditions. The results show that the proposed algorithm can obtain near-optimal solutions within a reasonable computation time.

  5. Meta-heuristic and Constraint-Based Approaches for Single-Line Railway Timetabling

    Science.gov (United States)

    Barber, Federico; Ingolotti, Laura; Lova, Antonio; Tormos, Pilar; Salido, Miguel A.

    This chapter is devoted to recent advances in heuristic and metaheuristic procedures, arising from the areas of Computer Science and Artificial Intelligence, which are able to cope with large scale problems as those in single-line railway timetable optimization. Timetable design is a central problem in railway planning. In the basic timetabling problem, we are given a line plan as well as demand and infrastructure information. The goal is to compute timetables for passengers and cargo trains that satisfy infrastructure capacity and achieve multicriteria objectives: minimal passenger waiting time (both at changeovers and onboard), efficient use of trains, etc. Due to its central role in the planning process of railway scheduling, timetable design has many interfaces with other classical problems: line planning, vehicle scheduling, and delay management.

  6. Artificial Intelligence, Evolutionary Computing and Metaheuristics In the Footsteps of Alan Turing

    CERN Document Server

    2013-01-01

    Alan Turing pioneered many research areas such as artificial intelligence, computability, heuristics and pattern formation.  Nowadays at the information age, it is hard to imagine how the world would be without computers and the Internet. Without Turing's work, especially the core concept of Turing Machine at the heart of every computer, mobile phone and microchip today, so many things on which we are so dependent would be impossible. 2012 is the Alan Turing year -- a centenary celebration of the life and work of Alan Turing. To celebrate Turing's legacy and follow the footsteps of this brilliant mind, we take this golden opportunity to review the latest developments in areas of artificial intelligence, evolutionary computation and metaheuristics, and all these areas can be traced back to Turing's pioneer work. Topics include Turing test, Turing machine, artificial intelligence, cryptography, software testing, image processing, neural networks, nature-inspired algorithms such as bat algorithm and cuckoo sear...

  7. A Meta-Heuristic Regression-Based Feature Selection for Predictive Analytics

    Directory of Open Access Journals (Sweden)

    Bharat Singh

    2014-11-01

    Full Text Available High-dimensional feature selection, in which an optimal feature subset must be chosen from a very large number of features, is an NP-complete problem. Because conventional optimization techniques are unable to tackle large-scale feature selection problems, meta-heuristic algorithms are widely used. In this paper, we propose a particle swarm optimization technique that utilizes regression techniques for feature selection; a generic sketch of the wrapper scheme is given below. We then use the selected features to classify the data. Classification accuracy is used as a criterion to evaluate classifier performance, and classification is accomplished through the use of k-nearest neighbour (KNN) and Bayesian techniques. Various high-dimensional data sets are used to evaluate the usefulness of the proposed approach. Results show that our approach gives better results when compared with other conventional feature selection algorithms.
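
    A minimal sketch of the general wrapper scheme is shown below, using binary PSO with a cross-validated KNN fitness; the paper's regression-based scoring is replaced here by plain classification accuracy, and the dataset and parameters are illustrative (requires scikit-learn).

```python
# Minimal sketch of binary PSO feature selection with a KNN wrapper fitness.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

X, y = load_breast_cancer(return_X_y=True)
n_particles, n_feats, iters = 12, X.shape[1], 20
rng = np.random.default_rng(0)

def fitness(mask):
    """Cross-validated KNN accuracy on the selected feature subset."""
    sel = mask.astype(bool)
    if not sel.any():
        return 0.0
    return cross_val_score(KNeighborsClassifier(n_neighbors=5),
                           X[:, sel], y, cv=3).mean()

pos = (rng.random((n_particles, n_feats)) < 0.5).astype(float)  # binary positions
vel = rng.normal(0.0, 1.0, (n_particles, n_feats))
pbest = pos.copy()
pbest_fit = np.array([fitness(p) for p in pos])
gbest = pbest[pbest_fit.argmax()].copy()

for _ in range(iters):
    r1, r2 = rng.random(vel.shape), rng.random(vel.shape)
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = (rng.random(vel.shape) < 1.0 / (1.0 + np.exp(-vel))).astype(float)
    fit = np.array([fitness(p) for p in pos])
    improved = fit > pbest_fit
    pbest[improved] = pos[improved]
    pbest_fit[improved] = fit[improved]
    gbest = pbest[pbest_fit.argmax()].copy()

print(int(gbest.sum()), "features selected; CV accuracy", round(pbest_fit.max(), 3))
```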

  8. A new sub-equation method applied to obtain exact travelling wave solutions of some complex nonlinear equations

    International Nuclear Information System (INIS)

    Zhang Huiqun

    2009-01-01

    By using a new coupled Riccati equations, a direct algebraic method, which was applied to obtain exact travelling wave solutions of some complex nonlinear equations, is improved. And the exact travelling wave solutions of the complex KdV equation, Boussinesq equation and Klein-Gordon equation are investigated using the improved method. The method presented in this paper can also be applied to construct exact travelling wave solutions for other nonlinear complex equations.

  9. Comparison of virological methods applied on african swine fever diagnosis in Brazil, 1978

    Directory of Open Access Journals (Sweden)

    Tânia Rosária Pereira Freitas

    2015-10-01

    Full Text Available ABSTRACT. Freitas T.R.P., Souza A.C., Esteves E.G. & Lyra T.M.P. [Comparison of virological methods applied on African swine fever diagnosis in Brazil, 1978.] Revista Brasileira de Medicina Veterinária, 37(3):255-263, 2015. Laboratório Nacional Agropecuário, Ministério da Agricultura, Pecuária e Abastecimento, Avenida Rômulo Joviano, s/n, Caixa postal 35/50, Pedro Leopoldo, MG 33600-000, Brasil. taniafrei@hotmail.com. The techniques of leucocyte haemadsorption (HAD) for African Swine Fever (ASF) virus isolation and of fluorescent antigen tissue samples (FATS) for virus antigen detection were implemented in the ASF eradication campaign in the country. The complementarity of the techniques was studied by considering the results obtained when HAD and FATS were concomitantly applied to the same pig tissue samples. The results for 22, 56 and 30 pig samples from the States of Rio de Janeiro (RJ), São Paulo (SP) and Paraná (PR), respectively, showed that in RJ 11 (50%), in SP 28 (50%) and in PR 15 (50%) samples were positive by HAD, while in RJ 18 (82%), in SP 33 (58%) and in PR 17 (57%) were positive by FATS. In the universe of 108 samples submitted to both tests, 83 (76.85%) were positive in at least one of the tests, which characterized ASF positivity. Among the positive samples, 28 (34%) presented HAD-negative results and 15 (18%) presented FATS-negative results. The achievement of applying both tests simultaneously was the reduction of false-negative results, conferring a more accurate laboratory diagnosis of ASF, besides showing the complementarity of the tests. This aspect is of fundamental importance for a disease eradication program, which must avoid false-negative results. Evidence of low-virulence ASFV strains in Brazilian ASF outbreaks and the distribution of ASF outbreaks across the mesoregions of each State are also discussed.

  10. A mixed methods evaluation of team-based learning for applied pathophysiology in undergraduate nursing education.

    Science.gov (United States)

    Branney, Jonathan; Priego-Hernández, Jacqueline

    2018-02-01

    It is important for nurses to have a thorough understanding of the biosciences, such as pathophysiology, that underpin nursing care. These courses include content that can be difficult to learn. Team-based learning is emerging as a strategy for enhancing learning in nurse education due to its promotion of individual learning as well as learning in teams. In this study we sought to evaluate the use of team-based learning in the teaching of applied pathophysiology to undergraduate student nurses. The design was a mixed methods observational study. In a year-two undergraduate nursing applied pathophysiology module, circulatory shock was taught using Team-based Learning while all remaining topics were taught using traditional lectures. After the Team-based Learning intervention the students were invited to complete the Team-based Learning Student Assessment Instrument, which measures accountability, preference and satisfaction with Team-based Learning. Students were also invited to focus group discussions to gain a more thorough understanding of their experience with Team-based Learning. Exam scores for answers to questions based on Team-based Learning-taught material were compared with those from lecture-taught material. Of the 197 students enrolled on the module, 167 (85% response rate) returned the instrument, the results from which indicated a favourable experience with Team-based Learning. Most students reported higher accountability (93%) and satisfaction (92%) with Team-based Learning. Lectures that promoted active learning were viewed as an important feature of the university experience, which may explain the 76% exhibiting a preference for Team-based Learning. Most students wanted to make a meaningful contribution so as not to let down their team, and they saw a clear relevance between the Team-based Learning activities and their own experiences of teamwork in clinical practice. Exam scores on the question related to Team-based Learning-taught material were comparable to those

  11. Method developments approaches in supercritical fluid chromatography applied to the analysis of cosmetics.

    Science.gov (United States)

    Lesellier, E; Mith, D; Dubrulle, I

    2015-12-04

    necessary, two-step gradient elution. The developed methods were then applied to real cosmetic samples to assess the method specificity, with regards to matrix interferences, and calibration curves were plotted to evaluate quantification. Besides, depending on the matrix and on the studied compounds, the importance of the detector type, UV or ELSD (evaporative light-scattering detection), and of the particle size of the stationary phase is discussed. Copyright © 2015 Elsevier B.V. All rights reserved.

  12. Non-parametric order statistics method applied to uncertainty propagation in fuel rod calculations

    International Nuclear Information System (INIS)

    Arimescu, V.E.; Heins, L.

    2001-01-01

    method, which is computationally efficient, is presented for the evaluation of the global statement. It is proved that the expected fraction r of fuel rods exceeding a certain limit is equal to the (1-r)-quantile of the overall distribution of all possible values from all fuel rods. In this way, the problem is reduced to that of estimating a certain quantile of the overall distribution, and the same techniques used for a single-rod distribution can be applied again. A simplified test case was devised to verify and validate the methodology. The fuel code was replaced by a transfer function dependent on two input parameters. The function was chosen so that analytic results could be obtained for the distribution of the output. This offers a direct validation of the statistical procedure. Also, a sensitivity study was performed to analyze the effect of the sampling procedure, simple Monte Carlo versus Latin Hypercube Sampling, on the final outcome. The effect of the sample size on the accuracy and bias of the statistical results was studied as well, and the conclusion was reached that the results of the statistical methodology are typically conservative. In the end, an example of applying these statistical techniques to a PWR reload is presented, together with the improvements and new insights the statistical methodology brings to fuel rod design calculations. (author)

  13. Analytical Methods INAA and PIXE Applied to Characterization of Airborne Particulate Matter in Bandung, Indonesia

    Directory of Open Access Journals (Sweden)

    D.D. Lestiani

    2011-08-01

    Full Text Available Urbanization and industrial growth have deteriorated air quality and are major causes of air pollution. Air pollution through fine and ultra-fine particles is a serious threat to human health. The sources of air pollution must be known quantitatively through elemental characterization in order to design appropriate air quality management. Suitable methods for the analysis of airborne particulate matter, such as nuclear analytical techniques, are badly needed to solve the air pollution problem. The objectives of this study are to apply nuclear analytical techniques to airborne particulate samples collected in Bandung, to assess the accuracy, and to ensure the reliability of the analytical results through a comparison of instrumental neutron activation analysis (INAA) and particle-induced X-ray emission (PIXE). Particle samples in the PM2.5 and PM2.5-10 ranges were collected in Bandung twice a week for 24 hours using a Gent stacked filter unit. The results showed that generally there was a systematic difference between the INAA and PIXE results, in which the values obtained by PIXE were lower than the values determined by INAA. INAA is generally more sensitive and reliable than PIXE for Na, Al, Cl, V, Mn, Fe, Br and I, and therefore the INAA data are preferred, while PIXE usually gives better precision than INAA for Mg, K, Ca, Ti and Zn. Nevertheless, both techniques provide reliable results and complement each other. INAA is still a prospective method, while PIXE, with its special capabilities, is a promising tool that could complement NAA in the determination of lead, sulphur and silicon. The combination of INAA and PIXE can advantageously be used in air pollution studies to extend the number of important elements measured as key elements in source apportionment.

  14. Stochastic Methods Applied to Power System Operations with Renewable Energy: A Review

    Energy Technology Data Exchange (ETDEWEB)

    Zhou, Z. [Argonne National Lab. (ANL), Argonne, IL (United States); Liu, C. [Argonne National Lab. (ANL), Argonne, IL (United States); Electric Reliability Council of Texas (ERCOT), Austin, TX (United States); Botterud, A. [Argonne National Lab. (ANL), Argonne, IL (United States)

    2016-08-01

    Renewable energy resources have been rapidly integrated into power systems in many parts of the world, contributing to a cleaner and more sustainable supply of electricity. Wind and solar resources also introduce new challenges for system operations and planning in terms of economics and reliability because of their variability and uncertainty. Operational strategies based on stochastic optimization have been developed recently to address these challenges. In general terms, these stochastic strategies either embed uncertainties into the scheduling formulations (e.g., the unit commitment [UC] problem) in probabilistic forms or develop more appropriate operating reserve strategies to take advantage of advanced forecasting techniques. Other approaches to address uncertainty are also proposed, where operational feasibility is ensured within an uncertainty set of forecasting intervals. In this report, a comprehensive review is conducted to present the state of the art through Spring 2015 in the area of stochastic methods applied to power system operations with high penetration of renewable energy. Chapters 1 and 2 give a brief introduction and overview of power system and electricity market operations, as well as the impact of renewable energy and how this impact is typically considered in modeling tools. Chapter 3 reviews relevant literature on operating reserves and specifically probabilistic methods to estimate the need for system reserve requirements. Chapter 4 looks at stochastic programming formulations of the UC and economic dispatch (ED) problems, highlighting benefits reported in the literature as well as recent industry developments. Chapter 5 briefly introduces alternative formulations of UC under uncertainty, such as robust, chance-constrained, and interval programming. Finally, in Chapter 6, we conclude with the main observations from our review and important directions for future work.
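
    As a concrete illustration of the scenario-based formulations surveyed in Chapter 4, a minimal two-stage stochastic unit commitment can be written as below. The notation is generic (an illustration, not the report's): commitment decisions u are first-stage and scenario-independent, while dispatch p adapts to each wind scenario s of probability \pi_s.

```latex
\min_{u,\,p}\; \sum_{t}\sum_{g}\Big( c^{\mathrm{fix}}_{g}\, u_{g,t}
      + \sum_{s}\pi_{s}\, c^{\mathrm{var}}_{g}\, p_{g,t,s}\Big)
\quad\text{s.t.}\quad
\sum_{g} p_{g,t,s} + w_{t,s} = d_{t}\;\;\forall t,s,
\qquad
u_{g,t}\,\underline{P}_{g} \,\le\, p_{g,t,s} \,\le\, u_{g,t}\,\overline{P}_{g}\;\;\forall g,t,s,
\qquad
u_{g,t}\in\{0,1\},
```

    where w_{t,s} is the wind realization and d_t the demand; a robust variant of the kind discussed in Chapter 5 would replace the probability-weighted sum by a worst case over an uncertainty set of forecast intervals.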

  15. The Global Survey Method Applied to Ground-level Cosmic Ray Measurements

    Science.gov (United States)

    Belov, A.; Eroshenko, E.; Yanke, V.; Oleneva, V.; Abunin, A.; Abunina, M.; Papaioannou, A.; Mavromichalaki, H.

    2018-04-01

    The global survey method (GSM) technique unites simultaneous ground-level observations of cosmic rays in different locations and allows us to obtain the main characteristics of cosmic-ray variations outside of the atmosphere and magnetosphere of Earth. This technique has been developed and applied in numerous studies over many years by the Institute of Terrestrial Magnetism, Ionosphere and Radiowave Propagation (IZMIRAN). We here describe the IZMIRAN version of the GSM in detail. With this technique, the hourly data of the world-wide neutron-monitor network from July 1957 until December 2016 were processed, and further processing is enabled upon the receipt of new data. The result is a database of homogeneous and continuous hourly characteristics of the density variations (an isotropic part of the intensity) and the 3D vector of the cosmic-ray anisotropy. It includes all of the effects that could be identified in galactic cosmic-ray variations caused by large-scale disturbances of the interplanetary medium over more than 50 years. These results in turn became the basis for a database on Forbush effects and interplanetary disturbances. This database allows correlating various space-environment parameters (the characteristics of the Sun, the solar wind, et cetera) with cosmic-ray parameters and studying their interrelations. We also present features of the coupling coefficients for different neutron monitors that enable us to make a connection from ground-level measurements to primary cosmic-ray variations outside the atmosphere and the magnetosphere. We discuss the strengths and weaknesses of the current version of the GSM as well as further possible developments and improvements. The method developed allows us to minimize the problems of the neutron-monitor network, which are typical for experimental physics, and to considerably enhance its advantages.

  16. Multicriterial Hierarchy Methods Applied in Consumption Demand Analysis. The Case of Romania

    Directory of Open Access Journals (Sweden)

    Constantin Bob

    2008-03-01

    Full Text Available The basic information for computing the quantitative statistical indicators that characterize the demand for industrial products and services is collected by the national statistics organizations through a series of statistical surveys (most of them periodical and partial). The source of the data used in the present paper is a statistical investigation organized by the National Institute of Statistics, the "Family budgets survey", which collects information regarding household composition, income, expenditure, consumption and other aspects of the population's living standard. In 2005, in Romania, a person spent monthly on average 391.2 RON, meaning about 115.1 Euros, for purchasing consumed food products and beverages, as well as non-food products, services, investments and other taxes. 23% of this sum was spent on food products and beverages, 21.6% on non-food goods and 18.1% on payment for different services. There is a discrepancy between the different development regions in Romania regarding the composition of total household expenditure. For this reason, in the present paper we applied statistical methods for ranking the various development regions in Romania, using the shares of households' expenditure on categories of products and services as ranking criteria.

  17. Applying the Communicative Methodic in Learning Lithuanian as a Second Language

    Directory of Open Access Journals (Sweden)

    Vaida Buivydienė

    2011-04-01

    Full Text Available One of the strengths of the European countries is their multilingual nature, as has been stressed by the European Council in various international projects. Every citizen of Europe should be given the opportunity to learn languages lifelong, as languages open new perspectives in the modern world. Besides, learning languages brings tolerance and understanding between people from different cultures. The article presents the idea, based on the experience of foreign language teaching, that the communicative method of language learning should also be applied to the teaching of Lithuanian as a foreign language. Under the international SOCRATES exchange programme, many students and teachers from abroad come to Lithuanian higher schools (VGTU included) every year. They should also be provided with opportunities to gain the best language learning, cultural and educational experience. Most of the students who came to VGTU pointed out Lithuanian language learning as one of the subjects they would choose. That leads to organizing interesting and useful short-term Lithuanian language courses. The survey carried out at VGTU and the analysis of the materials gathered lead to the conclusion that the communicative approach to language teaching is best suited to cater to the needs and interests of learners aiming to master survival Lithuanian.

  18. A new method of identifying target groups for pronatalist policy applied to Australia.

    Directory of Open Access Journals (Sweden)

    Mengni Chen

    Full Text Available A country's total fertility rate (TFR) depends on many factors. Attributing changes in TFR to changes of policy is difficult, as they could easily be correlated with changes in the unmeasured drivers of TFR. A case in point is Australia where both pronatalist effort and TFR increased in lock step from 2001 to 2008 and then decreased. The global financial crisis or other unobserved confounders might explain both the reducing TFR and pronatalist incentives after 2008. Therefore, it is difficult to estimate causal effects of policy using econometric techniques. The aim of this study is to instead look at the structure of the population to identify which subgroups most influence TFR. Specifically, we build a stochastic model relating TFR to the fertility rates of various subgroups and calculate elasticity of TFR with respect to each rate. For each subgroup, the ratio of its elasticity to its group size is used to evaluate the subgroup's potential cost effectiveness as a pronatalist target. In addition, we measure the historical stability of group fertility rates, which measures propensity to change. Groups with a high effectiveness ratio and also high propensity to change are natural policy targets. We applied this new method to Australian data on fertility rates broken down by parity, age and marital status. The results show that targeting parity 3+ is more cost-effective than lower parities. This study contributes to the literature on pronatalist policies by investigating the targeting of policies, and generates important implications for formulating cost-effective policies.

  19. A new method of identifying target groups for pronatalist policy applied to Australia.

    Science.gov (United States)

    Chen, Mengni; Lloyd, Chris J; Yip, Paul S F

    2018-01-01

    A country's total fertility rate (TFR) depends on many factors. Attributing changes in TFR to changes of policy is difficult, as they could easily be correlated with changes in the unmeasured drivers of TFR. A case in point is Australia where both pronatalist effort and TFR increased in lock step from 2001 to 2008 and then decreased. The global financial crisis or other unobserved confounders might explain both the reducing TFR and pronatalist incentives after 2008. Therefore, it is difficult to estimate causal effects of policy using econometric techniques. The aim of this study is to instead look at the structure of the population to identify which subgroups most influence TFR. Specifically, we build a stochastic model relating TFR to the fertility rates of various subgroups and calculate elasticity of TFR with respect to each rate. For each subgroup, the ratio of its elasticity to its group size is used to evaluate the subgroup's potential cost effectiveness as a pronatalist target. In addition, we measure the historical stability of group fertility rates, which measures propensity to change. Groups with a high effectiveness ratio and also high propensity to change are natural policy targets. We applied this new method to Australian data on fertility rates broken down by parity, age and marital status. The results show that targeting parity 3+ is more cost-effective than lower parities. This study contributes to the literature on pronatalist policies by investigating the targeting of policies, and generates important implications for formulating cost-effective policies.
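
    The elasticity computation at the core of this approach can be made concrete with a small worked equation. Under a generic decomposition (an illustration, not necessarily the paper's exact stochastic model) in which TFR is a population-share-weighted sum of subgroup fertility rates f_i with shares w_i, the elasticity and the effectiveness ratio reduce to:

```latex
\mathrm{TFR} = \sum_{i} w_i f_i,
\qquad
\varepsilon_i = \frac{\partial\,\mathrm{TFR}}{\partial f_i}\cdot\frac{f_i}{\mathrm{TFR}}
              = \frac{w_i f_i}{\mathrm{TFR}},
\qquad
\frac{\varepsilon_i}{w_i} = \frac{f_i}{\mathrm{TFR}}.
```

    Under this simplification the elasticity-to-size ratio is highest for the subgroups with the highest fertility rates, consistent with the finding that targeting parity 3+ is more cost-effective than lower parities.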

  20. Imperialist Competitive Algorithm with Dynamic Parameter Adaptation Using Fuzzy Logic Applied to the Optimization of Mathematical Functions

    Directory of Open Access Journals (Sweden)

    Emer Bernal

    2017-01-01

    Full Text Available In this paper we present a method that uses fuzzy logic for dynamic parameter adaptation in the imperialist competitive algorithm, usually known by its acronym ICA. The ICA algorithm was first studied in its original form to find out how it works and which parameters have the most effect upon its results. Based on this study, several designs of fuzzy systems for dynamic adjustment of the ICA parameters are proposed; a minimal sketch of the idea follows below. The experiments were performed on the basis of solving complex optimization problems, particularly benchmark mathematical functions. A comparison of the original imperialist competitive algorithm and our proposed fuzzy imperialist competitive algorithm was performed. In addition, the fuzzy ICA was compared with another metaheuristic using a statistical test to measure the advantage of the proposed fuzzy approach to dynamic parameter adaptation.
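
    A minimal sketch of fuzzy dynamic parameter adaptation is shown below: a single-input Mamdani-style system maps the normalized iteration count to a parameter value, of the kind used to shrink an assimilation-type coefficient over the run. The membership shapes and output levels are illustrative assumptions, not the paper's rule base.

```python
# Minimal sketch of fuzzy dynamic parameter adaptation: iteration progress in
# [0, 1] is fuzzified as "low"/"medium"/"high" and defuzzified to a parameter.
def tri(x, a, b, c):
    """Triangular membership function with support [a, c] and peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def adapt_beta(progress):
    """progress in [0, 1] -> beta, by weighted-average defuzzification."""
    mu = {
        "low":    tri(progress, -0.5, 0.0, 0.5),
        "medium": tri(progress,  0.0, 0.5, 1.0),
        "high":   tri(progress,  0.5, 1.0, 1.5),
    }
    # Rules: early run -> explore (large beta), late run -> exploit (small beta).
    beta_level = {"low": 2.0, "medium": 1.5, "high": 1.0}
    num = sum(mu[k] * beta_level[k] for k in mu)
    den = sum(mu.values())
    return num / den

for it in (0, 25, 50, 75, 100):
    print(it, round(adapt_beta(it / 100), 3))   # beta decays from 2.0 to 1.0
```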

  1. A Study of the Efficiency of Spatial Indexing Methods Applied to Large Astronomical Databases

    Science.gov (United States)

    Donaldson, Tom; Berriman, G. Bruce; Good, John; Shiao, Bernie

    2018-01-01

    Spatial indexing of astronomical databases generally uses quadrature methods, which partition the sky into cells used to create an index (usually a B-tree) written as a database column. We report the results of a study to compare the performance of two common indexing methods, HTM and HEALPix, on Solaris and Windows database servers installed with a PostgreSQL database, and a Windows server installed with MS SQL Server. The indexing was applied to the 2MASS All-Sky Catalog and to the Hubble Source Catalog. On each server, the study compared indexing performance by submitting 1 million queries at each index level with random sky positions and random cone-search radii, computed on a logarithmic scale between 1 arcsec and 1 degree, and measuring the time to complete the query and write the output. These simulated queries, intended to model realistic use patterns, were run in a uniform way on many combinations of indexing method and indexing level. The query times in all simulations are strongly I/O-bound and are linear with the number of records returned for large numbers of sources. There are, however, considerable differences between simulations, which reveal that hardware I/O throughput is a more important factor in managing the performance of a DBMS than the choice of indexing scheme. The choice of index itself is relatively unimportant: for comparable index levels, the performance is consistent within the scatter of the timings. At small index levels (large cells; e.g. level 4; cell size 3.7 deg), there is large scatter in the timings because of wide variations in the number of sources found in the cells. At larger index levels, performance improves and scatter decreases, but the improvement at level 8 (cell size 14 arcmin) and higher is masked to some extent by the timing scatter caused by the range of query sizes. At very high levels (20; 0.0004 arcsec), the granularity of the cells becomes so high that a large number of extraneous empty cells begin to degrade
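
    As an illustration of how such an index is used, the sketch below maps a cone search onto a lookup against an indexed HEALPix column; the table and column names are hypothetical and the healpy package is assumed (this is not the study's code).

        # Sketch: a cone search becomes an IN-list scan on an indexed HEALPix
        # column; an exact angular-distance test must still be applied to the
        # candidate rows afterwards.
        import numpy as np
        import healpy as hp

        def cone_search_sql(ra_deg, dec_deg, radius_deg, level=8):
            nside = 2 ** level                        # HEALPix level -> nside
            theta = np.radians(90.0 - dec_deg)        # colatitude
            phi = np.radians(ra_deg)
            vec = hp.ang2vec(theta, phi)
            # inclusive=True returns every cell overlapping the cone.
            cells = hp.query_disc(nside, vec, np.radians(radius_deg), inclusive=True)
            ids = ",".join(str(c) for c in cells)
            return f"SELECT * FROM source_catalog WHERE healpix_l{level} IN ({ids})"

        print(cone_search_sql(180.0, 0.0, 0.01))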

  2. Modern structure of methods and techniques of marketing research, applied by the world and Ukrainian research companies

    Directory of Open Access Journals (Sweden)

    Bezkrovnaya Yulia

    2015-08-01

    Full Text Available The article presents the results of an empirical justification of the structure of methods and techniques of marketing research into consumer decisions, applied by global and Ukrainian research companies.

  3. A comparative study on using meta-heuristic algorithms for road maintenance planning: Insights from field study in a developing country

    Directory of Open Access Journals (Sweden)

    Ali Gerami Matin

    2017-10-01

    Full Text Available Optimized road maintenance planning seeks solutions that can minimize the life-cycle cost of a road network and concurrently maximize pavement condition. Aiming at proposing an optimal set of road maintenance solutions, robust meta-heuristic algorithms are used in this research. Two main optimization techniques are applied: single-objective and multi-objective optimization. Genetic algorithms (GA), particle swarm optimization (PSO), and a combination of the two (GAPSO) are used as single-objective techniques, while the non-dominated sorting genetic algorithm II (NSGAII) and multi-objective particle swarm optimization (MOPSO), which are suited to solving computationally complex large-size optimization problems, are applied and compared as multi-objective techniques. A real case study from the rural transportation network of Iran is employed to illustrate the performance of the algorithms. The optimization model is formulated in such a way that a cost-effective maintenance strategy is reached while preserving the performance level of the road network at a desirable level. The objective functions are pavement performance maximization and maintenance cost minimization. It is concluded that the multi-objective algorithms, NSGAII and MOPSO, performed better than the single-objective algorithms due to their capability to balance both objectives. Among the multi-objective algorithms, NSGAII provides the best solution for road maintenance planning.
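
    The ranking step that distinguishes the multi-objective algorithms can be illustrated with a sketch of Pareto non-dominated sorting, the core of NSGAII. This is a simplified O(n^2) version with invented (cost, performance) pairs, not the study's implementation.

        # Sort maintenance plans into Pareto fronts: minimize cost, maximize
        # pavement performance.

        def dominates(a, b):
            """a dominates b: no worse in both objectives, strictly better in one."""
            return (a[0] <= b[0] and a[1] >= b[1]) and (a[0] < b[0] or a[1] > b[1])

        def non_dominated_fronts(solutions):
            fronts, remaining = [], list(solutions)
            while remaining:
                front = [s for s in remaining
                         if not any(dominates(t, s) for t in remaining if t is not s)]
                fronts.append(front)
                remaining = [s for s in remaining if s not in front]
            return fronts

        plans = [(100, 0.90), (80, 0.70), (120, 0.95), (80, 0.90), (90, 0.60)]
        for i, front in enumerate(non_dominated_fronts(plans)):
            print("front", i, front)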

  4. Analysis of flow boiling heat transfer in narrow annular gaps applying the design of experiments method

    Directory of Open Access Journals (Sweden)

    Gunar Boye

    2015-06-01

    Full Text Available The axial heat transfer coefficient during flow boiling of n-hexane was measured using infrared thermography to determine the axial wall temperature in three geometrically similar annular gaps with different widths (s = 1.5 mm, s = 1 mm, s = 0.5 mm). During the design and evaluation process, the methods of statistical experimental design were applied. The following factors/parameters were varied: the heat flux q̇ = 30–190 kW/m², the mass flux ṁ = 30–700 kg/(m²s), the vapor quality ẋ = 0.2–0.7, and the inlet subcooling T_U = 20–60 K. The test sections with gap widths of s = 1.5 mm and s = 1 mm had very similar heat transfer characteristics. The heat transfer coefficient increases significantly in the range of subcooled boiling, and after reaching a maximum at the transition to saturated flow boiling, it drops almost monotonically with increasing vapor quality. With a gap width of 0.5 mm, however, the heat transfer coefficient in the range of saturated flow boiling first has a downward trend and then increases at higher vapor qualities. For each test section, two correlations between the heat transfer coefficient and the operating parameters were created. The comparison also shows a clear trend of an increasing heat transfer coefficient with increasing heat flux for the test sections s = 1.5 mm and s = 1.0 mm, but with increasing vapor quality this trend is reversed for the 0.5 mm test section.

  5. A new method of identifying target groups for pronatalist policy applied to Australia

    Science.gov (United States)

    Chen, Mengni; Lloyd, Chris J.

    2018-01-01

    A country’s total fertility rate (TFR) depends on many factors. Attributing changes in TFR to changes of policy is difficult, as they could easily be correlated with changes in the unmeasured drivers of TFR. A case in point is Australia where both pronatalist effort and TFR increased in lock step from 2001 to 2008 and then decreased. The global financial crisis or other unobserved confounders might explain both the reducing TFR and pronatalist incentives after 2008. Therefore, it is difficult to estimate causal effects of policy using econometric techniques. The aim of this study is to instead look at the structure of the population to identify which subgroups most influence TFR. Specifically, we build a stochastic model relating TFR to the fertility rates of various subgroups and calculate elasticity of TFR with respect to each rate. For each subgroup, the ratio of its elasticity to its group size is used to evaluate the subgroup’s potential cost effectiveness as a pronatalist target. In addition, we measure the historical stability of group fertility rates, which measures propensity to change. Groups with a high effectiveness ratio and also high propensity to change are natural policy targets. We applied this new method to Australian data on fertility rates broken down by parity, age and marital status. The results show that targeting parity 3+ is more cost-effective than lower parities. This study contributes to the literature on pronatalist policies by investigating the targeting of policies, and generates important implications for formulating cost-effective policies. PMID:29425220

  6. A Novel Pareto-Based Meta-Heuristic Algorithm to Optimize Multi-Facility Location-Allocation Problem

    OpenAIRE

    Vahid Hajipour; Samira V. Noshafagh; Reza Tavakkoli-Moghaddam

    2013-01-01

    This article proposes a novel Pareto-based multiobjective meta-heuristic algorithm named non-dominated ranking genetic algorithm (NRGA) to solve the multi-facility location-allocation problem. In NRGA, a fitness value representing rank is assigned to each individual of the population. Moreover, two features of rank-based roulette wheel selection are utilized: selecting the fronts and choosing solutions from within the fronts. The proposed solving methodology is validated using s...
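
    The rank-based roulette wheel selection mentioned in the abstract can be sketched as follows; the linear rank weighting is an assumption for illustration, not necessarily the weighting used in NRGA.

        # Selection probability grows with rank (not raw fitness), which keeps
        # selection pressure stable even when fitness values differ wildly.
        import random

        def rank_roulette_select(population, fitness, k=2):
            ordered = sorted(population, key=fitness)   # worst first, best last
            ranks = list(range(1, len(ordered) + 1))    # worst -> 1, best -> n
            return random.choices(ordered, weights=ranks, k=k)

        pop = ["A", "B", "C", "D"]
        fit = {"A": 3.0, "B": 9.0, "C": 5.0, "D": 1.0}.get
        print(rank_roulette_select(pop, fit))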

  7. Non-destructive scanning for applied stress by the continuous magnetic Barkhausen noise method

    Science.gov (United States)

    Franco Grijalba, Freddy A.; Padovese, L. R.

    2018-01-01

    This paper reports the use of a non-destructive continuous magnetic Barkhausen noise technique to detect applied stress on steel surfaces. The stress profile generated in a sample of 1070 steel subjected to a three-point bending test is analyzed. The influence of different parameters, such as pickup coil type, scanning speed, applied magnetic field and analyzed frequency band, on the effectiveness of the technique is investigated. A moving smoothing window based on a second-order statistical moment is used to analyze the time signal. The findings show that the technique can be used to detect applied stress profiles.
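
    The moving smoothing window based on a second-order statistical moment can be sketched as below; the window length and the synthetic signal are illustrative assumptions, not the paper's settings.

        # Second moment (mean square) of the signal in a sliding window: a
        # simple envelope feature for Barkhausen-noise time signals.
        import numpy as np

        def moving_second_moment(signal, window=256):
            sq = np.asarray(signal, dtype=float) ** 2
            kernel = np.ones(window) / window
            return np.convolve(sq, kernel, mode="valid")

        rng = np.random.default_rng(0)
        # Noise whose amplitude grows along the scan, mimicking a stress profile.
        x = rng.standard_normal(5000) * np.linspace(0.5, 2.0, 5000)
        profile = moving_second_moment(x)
        print(profile[:3], profile[-3:])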

  8. Krylov Subspace and Multigrid Methods Applied to the Incompressible Navier-Stokes Equations

    Science.gov (United States)

    Vuik, C.; Wesseling, P.; Zeng, S.

    1996-01-01

    We consider numerical solution methods for the incompressible Navier-Stokes equations discretized by a finite volume method on staggered grids in general coordinates. We use Krylov subspace and multigrid methods as well as their combinations. Numerical experiments are carried out on a scalar and a vector computer. Robustness and efficiency of these methods are studied. It appears that good methods result from suitable combinations of GCR and multigrid methods.
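
    As an illustration of the Krylov component, the sketch below implements a minimal unpreconditioned GCR iteration; in the paper GCR is combined with multigrid, which is omitted here, and the test matrix is invented.

        # Minimal GCR for a nonsymmetric system A x = b (no preconditioning).
        import numpy as np

        def gcr(A, b, x0, tol=1e-8, max_iter=200):
            x, r = x0.copy(), b - A @ x0
            P, AP = [], []                      # search directions and A * directions
            for _ in range(max_iter):
                if np.linalg.norm(r) < tol:
                    break
                p, Ap = r.copy(), A @ r
                for pj, Apj in zip(P, AP):      # keep the vectors A p_k mutually orthogonal
                    beta = (Ap @ Apj) / (Apj @ Apj)
                    p, Ap = p - beta * pj, Ap - beta * Apj
                P.append(p); AP.append(Ap)
                alpha = (r @ Ap) / (Ap @ Ap)
                x, r = x + alpha * p, r - alpha * Ap
            return x

        A = np.array([[4.0, 1.0], [2.0, 3.0]])             # nonsymmetric test matrix
        print(gcr(A, np.array([1.0, 2.0]), np.zeros(2)))   # exact solution: [0.1, 0.6]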

  9. Research on applying neutron transport Monte Carlo method in materials with continuously varying cross sections

    International Nuclear Information System (INIS)

    Li, Zeguang; Wang, Kan; Zhang, Xisi

    2011-01-01

    In the traditional Monte Carlo method, the material properties in a certain cell are assumed to be constant, but this is no longer applicable in continuously varying materials, where the material's nuclear cross sections vary over the particle's flight path. Three Monte Carlo methods, including the sub-stepping method, the delta-tracking method and the direct sampling method, are discussed in this paper to solve problems with continuously varying materials. After the verification and comparison of these methods in 1-D models, their basic characteristics are discussed, and the delta-tracking method is chosen as the main method for solving problems with continuously varying materials, especially 3-D problems. To overcome the drawbacks of the original delta-tracking method, an improved delta-tracking method is proposed in this paper to make the method more efficient in solving problems where the material's cross sections vary sharply over the particle's flight path. To use this method in practical calculations, we implemented the improved delta-tracking method in the 3-D Monte Carlo code RMC developed by the Department of Engineering Physics, Tsinghua University. Two problems based on the Godiva system were constructed and calculations were made using both the improved delta-tracking method and the sub-stepping method, and the results demonstrated the effectiveness of the improved delta-tracking method. (author)
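
    The delta-tracking idea the paper builds on can be sketched in one dimension as follows; the cross-section profile and majorant are illustrative assumptions, not taken from the RMC code.

        # Delta-tracking (Woodcock) sampling of the distance to a real collision
        # when the total cross section sigma(x) varies continuously along the path.
        import math, random

        def sigma(x):
            return 0.5 + 0.4 * math.sin(x)     # cm^-1, illustrative profile

        SIGMA_MAJ = 0.9                        # majorant: sigma(x) <= SIGMA_MAJ

        def distance_to_collision(x0, rng=random):
            x = x0
            while True:
                # Fly against the constant majorant cross section.
                x += -math.log(rng.random()) / SIGMA_MAJ
                # Accept as real with probability sigma(x)/majorant; otherwise
                # the collision is virtual and the flight continues unchanged.
                if rng.random() < sigma(x) / SIGMA_MAJ:
                    return x - x0

        samples = [distance_to_collision(0.0) for _ in range(100000)]
        print(sum(samples) / len(samples))     # mean distance to a real collision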

  10. The Density-Enthalpy Method Applied to Model Two–phase Darcy Flow

    NARCIS (Netherlands)

    Ibrahim, D.

    2012-01-01

    In this thesis, we use a more recent method to numerically solve two-phase fluid flow problems. The method was developed at TNO and presented by Arendsen et al. in [1] for spatially homogeneous systems. We will refer to this method as the density-enthalpy method (DEM) because the

  11. Non-regularized inversion method from light scattering applied to ferrofluid magnetization curves for magnetic size distribution analysis

    International Nuclear Information System (INIS)

    Rijssel, Jos van; Kuipers, Bonny W.M.; Erné, Ben H.

    2014-01-01

    A numerical inversion method known from the analysis of light scattering by colloidal dispersions is now applied to magnetization curves of ferrofluids. The distribution of magnetic particle sizes or dipole moments is determined without assuming that the distribution is unimodal or of a particular shape. The inversion method enforces positive number densities via a non-negative least squares procedure. It is tested successfully on experimental and simulated data for ferrofluid samples with known multimodal size distributions. The created computer program MINORIM is made available on the web. - Highlights: • A method from light scattering is applied to analyze ferrofluid magnetization curves. • A magnetic size distribution is obtained without prior assumption of its shape. • The method is tested successfully on ferrofluids with a known size distribution. • The practical limits of the method are explored with simulated data including noise. • This method is implemented in the program MINORIM, freely available online
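
    The core of such an inversion can be sketched as follows: expand the measured magnetization in Langevin curves of a grid of candidate dipole moments and solve for non-negative number densities. The grids, constants and synthetic data below are illustrative assumptions; this is not the MINORIM program.

        # Non-negative least-squares inversion of a synthetic magnetization curve.
        import numpy as np
        from scipy.optimize import nnls

        kB, T, mu0 = 1.380649e-23, 298.0, 4e-7 * np.pi

        def langevin(xi):
            return 1.0 / np.tanh(xi) - 1.0 / xi

        H = np.linspace(1e3, 1e6, 60)            # applied field (A/m)
        m = np.logspace(-20, -18, 40)            # candidate dipole moments (A m^2)
        A = langevin(np.outer(H, m) * mu0 / (kB * T)) * m   # kernel matrix

        truth = np.zeros(m.size); truth[[10, 30]] = 1e20    # bimodal test sample
        M = A @ truth
        M += np.random.default_rng(1).normal(0.0, 1e-3 * M.max(), M.size)

        densities, residual = nnls(A, M)         # enforces positive densities
        print(residual, densities.nonzero()[0])  # peaks near indices 10 and 30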

  12. Material point methods applied to one-dimensional shock waves and dual domain material point method with sub-points

    Science.gov (United States)

    Dhakal, Tilak R.; Zhang, Duan Z.

    2016-11-01

    Using a simple one-dimensional shock problem as an example, the present paper investigates numerical properties of the original material point method (MPM), the generalized interpolation material point (GIMP) method, the convected particle domain interpolation (CPDI) method, and the dual domain material point (DDMP) method. For a weak isothermal shock of ideal gas, the MPM cannot be used with accuracy. With a small number of particles per cell, GIMP and CPDI produce reasonable results. However, as the number of particles increases the methods fail to converge and produce pressure spikes. The DDMP method behaves in an opposite way. With a small number of particles per cell, DDMP results are unsatisfactory. As the number of particles increases, the DDMP results converge to correct solutions, but the large number of particles needed for convergence makes the method very expensive to use in these types of shock wave problems in two- or three-dimensional cases. The cause for producing the unsatisfactory DDMP results is identified. A simple improvement to the method is introduced by using sub-points. With this improvement, the DDMP method produces high quality numerical solutions with a very small number of particles. Although in the present paper, the numerical examples are one-dimensional, all derivations are for multidimensional problems. With the technique of approximately tracking particle domains of CPDI, the extension of this sub-point method to multidimensional problems is straightforward. This new method preserves the conservation properties of the DDMP method, which conserves mass and momentum exactly and conserves energy to the second order in both spatial and temporal discretizations.

  13. FOUR SQUARE WRITING METHOD APPLIED IN PRODUCT AND PROCESS BASED APPROACHES COMBINATION TO TEACHING WRITING DISCUSSION TEXT

    Directory of Open Access Journals (Sweden)

    Vina Agustiana

    2017-12-01

    Full Text Available The Four Square Writing Method (FSWM) is a writing method that helps students organize concepts for writing by using a graphic organizer. This study aims to examine the influence of applying FSWM, in a combination of product- and process-based approaches to teaching writing discussion texts, on students' writing skill, the teaching-learning writing process and the students' attitude toward the implementation of the writing method. This study applies a mixed method with an embedded design. 26 EFL university students of a private university in West Java, Indonesia, are involved in the study. Three kinds of instruments are used, namely tests (pre- and post-test), field notes, and questionnaires. Data taken from the students' writing tests are analyzed statistically to identify the influence of applying the writing method on students' writing skill; data taken from field notes are analyzed qualitatively to examine the learning writing activities at the time the writing method is implemented; and data taken from questionnaires are analyzed with descriptive statistics to explore students' attitude toward the implementation of the writing method. Regarding the result of the paired t-test, the writing method is effective in improving students' writing skill, since the two-tailed significance level is less than alpha (0.000 < 0.05). Furthermore, the result taken from field notes shows that each step applied and the graphic organizer used in the writing method lead students to compose discussion texts that meet the demands of the genre. In addition, with regard to the result taken from the questionnaire, the students show a highly positive attitude toward the treatment, since the mean score is 4.32.

  14. A global method for calculating plant CSR ecological strategies applied across biomes world-wide

    NARCIS (Netherlands)

    Pierce, S.; Negreiros, D.; Cerabolini, B.E.L.; Kattge, J.; Díaz, S.; Kleyer, M.; Shipley, B.; Wright, S.J.; Soudzilovskaia, N.A.; Onipchenko, V.G.; van Bodegom, P.M.; Frenette-Dussault, C.; Weiher, E.; Pinho, B.X.; Cornelissen, J.H.C.; Grime, J.P.; Thompson, K.; Hunt, R.; Wilson, P.J.; Buffa, G.; Nyakunga, O.C.; Reich, P.B.; Caccianiga, M.; Mangili, F.; Ceriani, R.M.; Luzzaro, A.; Brusa, G.; Siefert, A.; Barbosa, N.P.U.; Chapin III, F.S.; Cornwell, W.K.; Fang, Jingyun; Wilson Fernandez, G.; Garnier, E.; Le Stradic, S.; Peñuelas, J.; Melo, F.P.L.; Slaviero, A.; Tabarrelli, M.; Tampucci, D.

    2017-01-01

    Competitor, stress-tolerator, ruderal (CSR) theory is a prominent plant functional strategy scheme previously applied to local floras. Globally, the wide geographic and phylogenetic coverage of available values of leaf area (LA), leaf dry matter content (LDMC) and specific leaf area (SLA)

  15. Structure analysis of interstellar clouds - II. Applying the Delta-variance method to interstellar turbulence

    NARCIS (Netherlands)

    Ossenkopf, V.; Krips, M.; Stutzki, J.

    Context. The Delta-variance analysis is an efficient tool for measuring the structural scaling behaviour of interstellar turbulence in astronomical maps. It has been applied both to simulations of interstellar turbulence and to observed molecular cloud maps. In Paper I we proposed essential

  16. Evaluation of Two Fitting Methods Applied for Thin-Layer Drying of Cape Gooseberry Fruits

    Directory of Open Access Journals (Sweden)

    Erkan Karacabey

    Full Text Available Drying data of cape gooseberry were used to compare two fitting methods, namely the 2-step and 1-step methods. Literature data were also used to confirm the results. To demonstrate the applicability of these methods, two primary models (Page, two-term exponential) were selected. A linear equation was used as the secondary model. As is well known from previous modelling studies on drying, the 2-step method requires at least two regressions: one for the primary model and one for the secondary model (if there is only one environmental condition, such as temperature). On the other hand, one regression is enough for the 1-step method. Although previous studies on kinetic modelling of the drying of foods were based on the 2-step method, this study indicated that the 1-step method may also be a good alternative, with some advantages such as producing an informative figure and reducing calculation time.
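
    The difference between the two methods can be made concrete with a sketch of the 1-step approach: the secondary model is substituted into the primary model and all parameters are estimated in a single regression across every drying curve. The Page model and linear secondary model follow the abstract; the data are synthetic, invented for illustration.

        # 1-step fit: primary model MR = exp(-k t^n) with the secondary model
        # k = a + b*T embedded, estimated in one regression over all temperatures.
        import numpy as np
        from scipy.optimize import curve_fit

        def page_one_step(X, a, b, n):
            t, T = X
            return np.exp(-(a + b * T) * t ** n)

        t = np.tile(np.linspace(0.1, 8.0, 25), 3)        # drying time (h)
        T = np.repeat([50.0, 60.0, 70.0], 25)            # air temperature (deg C)
        MR = np.exp(-(0.02 + 0.004 * T) * t ** 1.1)      # synthetic "measurements"
        MR += np.random.default_rng(2).normal(0.0, 0.01, t.size)

        params, _ = curve_fit(page_one_step, (t, T), MR, p0=(0.01, 0.001, 1.0))
        print(params)    # a, b and n recovered simultaneously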

  17. A Hybrid Metaheuristic-Based Approach for the Aerodynamic Optimization of Small Hybrid Wind Turbine Rotors

    Directory of Open Access Journals (Sweden)

    José F. Herbert-Acero

    2014-01-01

    Full Text Available This work presents a novel framework for the aerodynamic design and optimization of blades for small horizontal axis wind turbines (WT). The framework is based on a state-of-the-art blade element momentum model, which is complemented with the XFOIL 6.96 software in order to provide an estimate of the sectional blade aerodynamics. The framework considers an innovative nested-hybrid solution procedure based on two metaheuristics, the virtual gene genetic algorithm and the simulated annealing algorithm, to provide a near-optimal solution to the problem. The objective of the study is to maximize the aerodynamic efficiency of small WT (SWT) rotors for a wide range of operational conditions. The design variables are (1) the airfoil shape at the different blade span positions and the radial variation of the geometrical variables of (2) chord length, (3) twist angle, and (4) thickness along the blade span. A wind tunnel validation study of optimized rotors based on the NACA 4-digit airfoil series is presented. Based on the experimental data, improvements in terms of the aerodynamic efficiency, the cut-in wind speed, and the amount of material used during the manufacturing process were achieved. Recommendations for the aerodynamic design of SWT rotors are provided based on field experience.

  18. Mathematical and Metaheuristic Applications in Design Optimization of Steel Frame Structures: An Extensive Review

    Directory of Open Access Journals (Sweden)

    Mehmet Polat Saka

    2013-01-01

    Full Text Available The type of mathematical modeling selected for the optimum design problems of steel skeletal frames affects the size and mathematical complexity of the programming problem obtained. A survey of the structural optimization literature reveals that there are basically two types of design optimization formulation. In the first type, only the cross-sectional properties of frame members are taken as design variables. In such a formulation, when the values of the design variables change during design cycles, it becomes necessary to analyze the structure and update the response of the steel frame to the external loading. Structural analysis in this type is a complementary part of the design process. In the second type, joint coordinates are also treated as design variables in addition to the cross-sectional properties of members. Such a formulation eliminates the necessity of carrying out structural analysis in every design cycle. The values of the joint displacements are determined by the optimization techniques in addition to the cross-sectional properties. The structural optimization literature contains structural design algorithms that make use of both types of formulation. In this study, a review of mathematical and metaheuristic algorithms is carried out, and the effect of the mathematical modeling on the efficiency of these algorithms is discussed.

  19. An Efficient Combined Meta-Heuristic Algorithm for Solving the Traveling Salesman Problem

    Directory of Open Access Journals (Sweden)

    Majid Yousefikhoshbakht

    2016-08-01

    Full Text Available The traveling salesman problem (TSP) is one of the most important NP-hard problems and probably the most famous and extensively studied problem in the field of combinatorial optimization. In this problem, a salesman is required to visit each of n given nodes once and only once, starting from any node and returning to the original place of departure. This paper presents an efficient evolutionary optimization algorithm, developed by combining the imperialist competitive algorithm and the Lin-Kernighan algorithm, called MICALK, in order to solve the TSP. MICALK is tested on 44 TSP instances from the literature, involving 24 to 1655 nodes, and it finds the best known solutions of 26 of the benchmark problems. Furthermore, the performance of MICALK is compared with several metaheuristic algorithms, including GA, BA, IBA, ICA, GSAP, ABO, PSO and BCO, on 32 instances from TSPLIB. The results indicate that MICALK performs well and is quite competitive with the above algorithms.
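
    The Lin-Kernighan component is too involved to reproduce here; the sketch below shows 2-opt, its simplest special case, as the kind of local search such hybrids use to polish tours produced by the population-based (ICA) stage. The instance is invented for illustration.

        # 2-opt: repeatedly reverse a tour segment whenever that shortens the tour.
        import math

        def tour_length(tour, pts):
            return sum(math.dist(pts[tour[i]], pts[tour[(i + 1) % len(tour)]])
                       for i in range(len(tour)))

        def two_opt(tour, pts):
            improved = True
            while improved:
                improved = False
                for i in range(1, len(tour) - 1):
                    for j in range(i + 1, len(tour)):
                        cand = tour[:i] + tour[i:j][::-1] + tour[j:]
                        if tour_length(cand, pts) < tour_length(tour, pts):
                            tour, improved = cand, True
            return tour

        pts = [(0, 0), (0, 1), (1, 0), (1, 1), (2, 0)]
        print(two_opt(list(range(5)), pts))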

  20. Wisdom. A metaheuristic (pragmatic) to orchestrate mind and virtue toward excellence.

    Science.gov (United States)

    Baltes, P B; Staudinger, U M

    2000-01-01

    The primary focus of this article is on the presentation of wisdom research conducted under the heading of the Berlin wisdom paradigm. Informed by a cultural-historical analysis, wisdom in this paradigm is defined as an expert knowledge system concerning the fundamental pragmatics of life. These include knowledge and judgment about the meaning and conduct of life and the orchestration of human development toward excellence while attending conjointly to personal and collective well-being. Measurement includes think-aloud protocols concerning various problems of life associated with life planning, life management, and life review. Responses are evaluated with reference to a family of 5 criteria: rich factual and procedural knowledge, lifespan contextualism, relativism of values and life priorities, and recognition and management of uncertainty. A series of studies is reported that aim to describe, explain, and optimize wisdom. The authors conclude with a new theoretical perspective that characterizes wisdom as a cognitive and motivational metaheuristic (pragmatic) that organizes and orchestrates knowledge toward human excellence in mind and virtue, both individually and collectively.

  1. Optimum gradient material for a functionally graded dental implant using metaheuristic algorithms.

    Science.gov (United States)

    Sadollah, Ali; Bahreininejad, Ardeshir

    2011-10-01

    Despite dental implantation being a great success, one of the key issues facing it is a mismatch of mechanical properties between engineered and native biomaterials, which makes osseointegration and bone remodeling problematical. Functionally graded material (FGM) has been proposed as a potential upgrade to some conventional implant materials such as titanium for selection in prosthetic dentistry. The idea of an FGM dental implant is that the property would vary in a certain pattern to match the biomechanical characteristics required at different regions in the hosting bone. However, matching the properties does not necessarily guarantee the best osseointegration and bone remodeling. Little existing research has been reported on developing an optimal design of an FGM dental implant for promoting long-term success. Based upon remodeling results, metaheuristic algorithms such as the genetic algorithms (GAs) and simulated annealing (SA) have been adopted to develop a multi-objective optimal design for FGM implantation design. The results are compared with those in literature. Copyright © 2011 Elsevier Ltd. All rights reserved.

  2. A hybrid metaheuristic for the time-dependent vehicle routing problem with hard time windows

    Directory of Open Access Journals (Sweden)

    N. Rincon-Garcia

    2017-01-01

    Full Text Available This article presents a hybrid metaheuristic algorithm to solve the time-dependent vehicle routing problem with hard time windows. Time-dependent travel times are influenced by the different congestion levels experienced throughout the day. Vehicle scheduling without consideration of congestion might lead to underestimation of travel times and consequently missed deliveries. The algorithm presented in this paper makes use of Large Neighbourhood Search approaches and Variable Neighbourhood Search techniques to guide the search. A first stage is specifically designed to reduce the number of vehicles required, by reducing the penalties generated by time-window violations with Large Neighbourhood Search procedures. A second stage minimises the travel distance and travel time in an 'always feasible' search space. Comparison of results on available test instances shows that the proposed algorithm is capable of obtaining reductions in the number of vehicles (4.15%), travel distance (10.88%) and travel time (12.00%) compared to previous implementations, in reasonable time.
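
    The ruin-and-recreate loop at the heart of Large Neighbourhood Search can be sketched on a toy single-route instance (one vehicle, no time windows); the random-removal and cheapest-insertion operators below are deliberately naive placeholders for the paper's operators.

        # Minimal LNS: remove a few customers, greedily reinsert them, keep
        # the candidate route if it is cheaper.
        import math, random

        def route_cost(route, pts):
            depot = (0.0, 0.0)
            path = [depot] + [pts[c] for c in route] + [depot]
            return sum(math.dist(a, b) for a, b in zip(path, path[1:]))

        def destroy(route, k=2, rng=random):
            removed = rng.sample(route, k)
            return [c for c in route if c not in removed], removed

        def repair(route, removed, pts):
            for c in removed:                    # cheapest-insertion repair
                best = min(range(len(route) + 1),
                           key=lambda i: route_cost(route[:i] + [c] + route[i:], pts))
                route = route[:best] + [c] + route[best:]
            return route

        rng = random.Random(3)
        pts = {i: (rng.uniform(0, 10), rng.uniform(0, 10)) for i in range(8)}
        route = list(range(8))
        for _ in range(200):                     # ruin-and-recreate iterations
            cand = repair(*destroy(route, rng=rng), pts)
            if route_cost(cand, pts) < route_cost(route, pts):
                route = cand
        print(route, round(route_cost(route, pts), 2))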

  3. Geochronology and geochemistry by the nuclear tracks method: some utilization examples in applied geology

    International Nuclear Information System (INIS)

    Poupeau, G.; Soliani Junior, E.

    1988-01-01

    This article discusses some applications of the 'nuclear tracks method' in geochronology, geochemistry and geophysics. In geochronology, after a rapid presentation of the principles of 'fission track' dating and the kinds of geological events measurable by this method, some applications in metallogeny and in petroleum geology are shown. In geochemistry, the uses of the 'fission tracks' method are related to mining prospecting and uranium prospecting. In geophysics, an important application is earthquake prediction, through continuous monitoring of Ra-222 emanations. (author) [pt

  4. The development of a curved beam element model applied to finite elements method

    International Nuclear Information System (INIS)

    Bento Filho, A.

    1980-01-01

    A procedure for the evaluation of the stiffness matrix of a thick curved beam element is developed by means of the minimum potential energy principle, applied to finite elements. The displacement field is prescribed through polynomial expansions, and the interpolation model is determined by comparing results obtained with a sample of different expansions. As a limiting case of the curved beam, three cases of straight beams with different dimensional ratios are analysed employing the proposed approach. Finally, an interpolation model is proposed and applied to a curved beam with large curvature. Displacements and internal stresses are determined and the results are compared with those found in the literature. (Author) [pt

  5. Applying formal method to design of nuclear power plant embedded protection system

    International Nuclear Information System (INIS)

    Kim, Jin Hyun; Kim, Il Gon; Sung, Chang Hoon; Choi, Jin Young; Lee, Na Young

    2001-01-01

    A nuclear power embedded protection system is a typical safety-critical system, which detects failures and shuts down the operation of the nuclear reactor. Failures of these systems are very dangerous, so they absolutely require safety and reliability. Therefore, nuclear power embedded protection systems should undergo complete verification and validation from the design stage. Various V&V methods have been provided for developing embedded systems, and in advanced countries their design using formal methods in particular is being studied. In this paper, we introduce design methods for nuclear power embedded protection systems using various formal methods, in various respects, following nuclear power plant software development guidelines.

  6. A robust moving mesh finite volume method applied to 1D hyperbolic conservation laws from magnetohydrodynamics

    NARCIS (Netherlands)

    Dam, A. van; Zegeling, P.A.

    2006-01-01

    In this paper we describe a one-dimensional adaptive moving mesh method and its application to hyperbolic conservation laws from magnetohydrodynamics (MHD). The method is robust, because it employs automatic control of mesh adaptation when a new model is considered, without manually-set

  7. Statistical methods applied to gamma-ray spectroscopy algorithms in nuclear security missions.

    Science.gov (United States)

    Fagan, Deborah K; Robinson, Sean M; Runkle, Robert C

    2012-10-01

    Gamma-ray spectroscopy is a critical research and development priority for a range of nuclear security missions, specifically the interdiction of special nuclear material involving the detection and identification of gamma-ray sources. We categorize existing methods by the statistical methods on which they rely and identify methods that have yet to be considered. Current methods estimate the effect of counting uncertainty but in many cases do not address larger sources of decision uncertainty, which may be significantly more complex. Thus, significantly improving algorithm performance may require greater coupling between the problem physics that drives data acquisition and the statistical methods that analyze such data. Untapped statistical methods, such as Bayesian model averaging and hierarchical and empirical Bayes methods, could reduce decision uncertainty by rigorously and comprehensively incorporating all sources of uncertainty. Application of such methods should further meet the needs of nuclear security missions by improving upon the existing numerical infrastructure for which these analyses have not been conducted. Copyright © 2012 Elsevier Ltd. All rights reserved.

  8. Adjoint Weighting Methods Applied to Monte Carlo Simulations of Applications and Experiments in Nuclear Criticality

    Energy Technology Data Exchange (ETDEWEB)

    Kiedrowski, Brian C. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2014-03-11

    The goals of this project are to develop Monte Carlo radiation transport methods and simulation software for engineering analysis that are robust, efficient and easy to use; and provide computational resources to assess and improve the predictive capability of radiation transport methods and nuclear data.

  9. T2-01 A Method for Prioritizing Chemical Hazards in Food applied to Antibiotics

    NARCIS (Netherlands)

    Asselt, van E.D.; Spiegel, van der M.; Noordam, M.Y.; Pikkemaat, M.G.; Fels, van der H.J.

    2014-01-01

    Introduction: Part of risk-based control is the prioritization of hazard-food combinations for monitoring food safety. There are currently many methods for ranking microbial hazards, ranging from quantitative to qualitative, but there is hardly any information available for prioritizing

  10. A New Machine Classification Method Applied to Human Peripheral Blood Leukocytes.

    Science.gov (United States)

    Rorvig, Mark E.; And Others

    1993-01-01

    Discusses pattern classification of images by computer and describes the Two Domain Method in which expert knowledge is acquired using multidimensional scaling of judgments of dissimilarities and linear mapping. An application of the Two Domain Method that tested its power to discriminate two patterns of human blood leukocyte distribution is…

  11. Studying the properties of Variational Data Assimilation Methods by Applying a Set of Test-Examples

    DEFF Research Database (Denmark)

    Thomsen, Per Grove; Zlatev, Zahari

    2007-01-01

    data assimilation methods are used. The main idea, on which the variational data assimilation methods are based, is quite general. A functional is formed by using a weighted inner product of differences of model results and measurements. The value of this functional is to be minimized. Forward...
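
    The record breaks off before giving the functional, but the weighted inner product it describes has the standard variational data assimilation form, written here in common 4D-Var notation as an assumption (the record itself supplies no formula):

        J(x_0) = \frac{1}{2}\,(x_0 - x_b)^{\mathsf T} B^{-1} (x_0 - x_b)
               + \frac{1}{2}\sum_{i=0}^{N} \left(H_i x_i - y_i\right)^{\mathsf T} R_i^{-1} \left(H_i x_i - y_i\right),
        \qquad x_i = M_{0\to i}(x_0)

    Here x_b is a background (first-guess) state, y_i are the measurements at time t_i, H_i maps model states to observed quantities, B and R_i are the weighting (error covariance) matrices, and M_{0->i} is the forward model; the minimization is over the initial state x_0.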

  12. The quasi-exactly solvable potentials method applied to the three-body problem

    International Nuclear Information System (INIS)

    Chafa, F.; Chouchaoui, A.; Hachemane, M.; Ighezou, F.Z.

    2007-01-01

    The quasi-exactly solvable potentials method is used to determine the energies and the corresponding exact eigenfunctions for three families of potentials playing an important role in the description of interactions occurring between three particles of equal mass. The obtained results may also be used as a test in evaluating the performance of numerical methods.

  13. A linear perturbation computation method applied to hydrodynamic instability growth predictions in ICF targets

    International Nuclear Information System (INIS)

    Clarisse, J.M.; Boudesocque-Dubois, C.; Leidinger, J.P.; Willien, J.L.

    2006-01-01

    A linear perturbation computation method is used to compute hydrodynamic instability growth in model implosions of inertial confinement fusion direct-drive and indirect-drive designed targets. Accurate descriptions of linear perturbation evolutions for Legendre mode numbers up to several hundreds have thus been obtained in a systematic way, motivating further improvements of the physical modeling currently handled by the method. (authors)

  14. Heterogeneity among violence-exposed women: applying person-oriented research methods.

    Science.gov (United States)

    Nurius, Paula S; Macy, Rebecca J

    2008-03-01

    Variability of experience and outcomes among violence-exposed people poses considerable challenges for developing effective prevention and treatment protocols. To address these needs, the authors present an approach to research and a class of methodologies referred to as person-oriented. Person-oriented tools support assessment of meaningful patterns among people that distinguish one group from another, subgroups for whom different interventions are indicated. The authors review the conceptual base of person-oriented methods, outline their distinction from the more familiar variable-oriented methods, present descriptions of selected methods as well as empirical applications of person-oriented methods germane to violence exposure, and conclude with a discussion of implications for future research and translation between research and practice. The authors focus on violence against women as a population, drawing on stress and coping theory as a theoretical framework. However, person-oriented methods hold utility for investigating diversity among violence-exposed people's experiences and needs across populations and theoretical foundations.

  15. Applying cognitive developmental psychology to middle school physics learning: The rule assessment method

    Science.gov (United States)

    Hallinen, Nicole R.; Chi, Min; Chin, Doris B.; Prempeh, Joe; Blair, Kristen P.; Schwartz, Daniel L.

    2013-01-01

    Cognitive developmental psychology often describes children's growing qualitative understanding of the physical world. Physics educators may be able to use the relevant methods to advantage for characterizing changes in students' qualitative reasoning. Siegler developed the "rule assessment" method for characterizing levels of qualitative understanding for two factor situations (e.g., volume and mass for density). The method assigns children to rule levels that correspond to the degree they notice and coordinate the two factors. Here, we provide a brief tutorial plus a demonstration of how we have used this method to evaluate instructional outcomes with middle-school students who learned about torque, projectile motion, and collisions using different instructional methods with simulations.

  16. Hybrid metaheuristics for solving the prize collecting traveling salesman problem

    Directory of Open Access Journals (Sweden)

    Antonio Augusto Chaves

    2007-08-01

    Full Text Available The Prize Collecting Traveling Salesman Problem (PCTSP) can be associated with a salesman who collects a prize in each city visited and pays a penalty for each city not visited, with travel costs among the cities. The objective is to minimize the sum of the travel costs and penalties, while including in the tour a number of cities sufficient to collect a pre-established minimum prize. This work contributes the development of hybrid metaheuristics for the PCTSP, based on GRASP and variable neighborhood search methods (VNS/VND), to solve the PCTSP approximately. In order to validate the solutions obtained, a mathematical formulation is proposed and solved by a commercial solver, with the aim of finding the optimal solution for the problem; this solver is applied to small instances. Computational results demonstrate the efficiency of the proposed hybrid approach, both in terms of the quality of the final solution obtained and in terms of execution time.
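
    The GRASP construction phase for the PCTSP can be sketched as follows: extend the tour from a restricted candidate list (RCL) until the pre-established minimum prize is collected. The prize-per-distance scoring rule and the instance are illustrative assumptions; penalties for unvisited cities enter the full objective during the VNS/VND improvement phase, which is omitted here.

        # Greedy randomized construction for the prize-collecting TSP.
        import math, random

        def grasp_construct(pts, prizes, min_prize, alpha=0.3, rng=random):
            tour, collected = [0], prizes[0]          # start at city 0
            unvisited = set(range(1, len(pts)))
            while collected < min_prize and unvisited:
                last = tour[-1]
                # Score candidates by prize gained per unit travel cost.
                score = {c: prizes[c] / (math.dist(pts[last], pts[c]) + 1e-9)
                         for c in unvisited}
                hi, lo = max(score.values()), min(score.values())
                rcl = [c for c, s in score.items() if s >= hi - alpha * (hi - lo)]
                nxt = rng.choice(rcl)                 # randomized greedy choice
                tour.append(nxt); collected += prizes[nxt]; unvisited.remove(nxt)
            return tour

        pts = [(0, 0), (1, 2), (3, 1), (4, 4), (2, 3)]
        prizes = [0, 4, 3, 6, 2]
        print(grasp_construct(pts, prizes, min_prize=10))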

  17. A multiparametric method of interpolation using WOA05 applied to anthropogenic CO2 in the Atlantic

    Directory of Open Access Journals (Sweden)

    Anton Velo

    2010-11-01

    Full Text Available This paper describes the development of a multiparametric interpolation method and its application to anthropogenic carbon (CANT) in the Atlantic, calculated by two estimation methods using the CARINA database. The multiparametric interpolation proposed uses potential temperature (θ), salinity, conservative 'NO' and 'PO' as conservative parameters for the gridding, and the World Ocean Atlas (WOA05) as a reference for the grid structure and the indicated parameters. We thus complement the CARINA data with the WOA05 database in an attempt to obtain better gridded values by keeping the physical-biogeochemical sea structures. The algorithms developed here also have the prerequisite of being simple and easy to implement. To test the improvements achieved, a comparison between the proposed multiparametric method and a pure spatial interpolation for an independent parameter (O2) was made. As an application case study, CANT estimations by two methods (φCTº and TrOCA) were performed on the CARINA database and then gridded by both interpolation methods (spatial and multiparametric). Finally, a calculation of CANT inventories for the whole Atlantic Ocean was performed with the gridded values and using ETOPO2v2 as the sea bottom. Thus, the inventories were between 55.1 and 55.2 Pg-C with the φCTº method and between 57.9 and 57.6 Pg-C with the TrOCA method.

  18. THE COST MANAGEMENT BY APPLYING THE STANDARD COSTING METHOD IN THE FURNITURE INDUSTRY-Case study

    Directory of Open Access Journals (Sweden)

    Radu Mărginean

    2013-06-01

    Full Text Available Among the modern calculation methods used in managerial accounting, with large applicability in the industrial production field, is the standard costing method. This managerial approach to cost calculation has real value in the managerial accounting field, due to its usefulness in forecasting production costs and helping managers in the decision-making process. The standard costing method is among the modern managerial accounting methods used in many enterprises with production activity. As research objectives for this paper, we propose studying the possibility of implementing this modern method of cost calculation in a company from the Romanian furniture industry, using real financial data. In order to achieve this aim, we used specialized literature in the field of managerial accounting, showing the strengths and weaknesses of this method. The case study demonstrates that the standard costing method is fully applicable in our case and, in conclusion, has real value in the cost management process for enterprises in the Romanian furniture industry.

  19. Applying the Taguchi Method to River Water Pollution Remediation Strategy Optimization

    Directory of Open Access Journals (Sweden)

    Tsung-Ming Yang

    2014-04-01

    Full Text Available Optimization methods usually obtain the travel direction of the solution by substituting the solutions into the objective function. However, if the solution space is too large, this search method may be time consuming. In order to address this problem, this study incorporated the Taguchi method into the solution space search process of the optimization method, and used the characteristics of the Taguchi method to sequence the effects of the variation of decision variables on the system. Based on the level of effect, this study determined the impact factor of decision variables and the optimal solution for the model. The integration of the Taguchi method and the solution optimization method successfully obtained the optimal solution of the optimization problem, while significantly reducing the solution computing time and enhancing the river water quality. The results suggested that the basin with the greatest water quality improvement effectiveness is the Dahan River. Under the optimal strategy of this study, the severe pollution length was reduced from 18 km to 5 km.
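
    The factor-ranking step of the Taguchi method can be sketched with a small orthogonal array: run the experiments, average the response per factor level, and rank the decision variables by the spread of their level means. The L4 array below is the standard two-level array for three factors; the responses are invented for illustration.

        # Main-effects ranking from a Taguchi L4 orthogonal array.
        import numpy as np

        L4 = np.array([[0, 0, 0],     # each row: one experiment
                       [0, 1, 1],     # each column: one factor at level 0/1
                       [1, 0, 1],
                       [1, 1, 0]])
        response = np.array([12.0, 15.0, 20.0, 9.0])   # e.g. water-quality score

        for f in range(L4.shape[1]):
            means = [response[L4[:, f] == lvl].mean() for lvl in (0, 1)]
            print(f"factor {f}: level means {means}, effect {abs(means[0] - means[1]):.1f}")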

  20. Applying the Taguchi method to river water pollution remediation strategy optimization.

    Science.gov (United States)

    Yang, Tsung-Ming; Hsu, Nien-Sheng; Chiu, Chih-Chiang; Wang, Hsin-Ju

    2014-04-15

    Optimization methods usually obtain the travel direction of the solution by substituting the solutions into the objective function. However, if the solution space is too large, this search method may be time consuming. In order to address this problem, this study incorporated the Taguchi method into the solution space search process of the optimization method, and used the characteristics of the Taguchi method to sequence the effects of the variation of decision variables on the system. Based on the level of effect, this study determined the impact factor of decision variables and the optimal solution for the model. The integration of the Taguchi method and the solution optimization method successfully obtained the optimal solution of the optimization problem, while significantly reducing the solution computing time and enhancing the river water quality. The results suggested that the basin with the greatest water quality improvement effectiveness is the Dahan River. Under the optimal strategy of this study, the severe pollution length was reduced from 18 km to 5 km.