A comparative analysis of three metaheuristic methods applied to fuzzy cognitive maps learning
Directory of Open Access Journals (Sweden)
Bruno A. Angélico
2013-12-01
Full Text Available This work analyses the performance of three different population-based metaheuristic approaches applied to fuzzy cognitive map (FCM) learning in qualitative process control. Fuzzy cognitive maps make it possible to incorporate prior specialist knowledge into the control rule. In particular, Particle Swarm Optimization (PSO), a Genetic Algorithm (GA) and Ant Colony Optimization (ACO) are considered for obtaining appropriate weight matrices when learning the FCM. A statistical convergence analysis over 10,000 simulations of each algorithm is presented. In order to validate the proposed approach, two industrial process-control problems previously described in the literature are considered in this work.
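A minimal sketch of the kind of PSO-based FCM weight learning this record describes (not the authors' implementation; the concept count, target activations and PSO constants below are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

def fcm_state(W, a0, steps=50):
    """Iterate the FCM update A <- sigmoid(W @ A) toward a steady state."""
    a = a0.copy()
    for _ in range(steps):
        a = 1.0 / (1.0 + np.exp(-(W @ a)))
    return a

n = 4                                        # number of concepts (invented)
target = np.array([0.7, 0.6, 0.8, 0.5])     # desired steady-state activations
a0 = np.full(n, 0.5)

def cost(flat_w):
    """Squared error between the map's converged state and the target."""
    return np.sum((fcm_state(flat_w.reshape(n, n), a0) - target) ** 2)

# Plain global-best PSO over the flattened n x n weight matrix, bounded to [-1, 1].
swarm, dim = 30, n * n
x = rng.uniform(-1, 1, (swarm, dim))
v = np.zeros((swarm, dim))
pbest = x.copy()
pbest_cost = np.array([cost(p) for p in x])
gbest = pbest[pbest_cost.argmin()].copy()

for _ in range(200):
    r1, r2 = rng.random((swarm, dim)), rng.random((swarm, dim))
    v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (gbest - x)
    x = np.clip(x + v, -1, 1)
    c = np.array([cost(p) for p in x])
    better = c < pbest_cost
    pbest[better], pbest_cost[better] = x[better], c[better]
    gbest = pbest[pbest_cost.argmin()].copy()

print(round(float(cost(gbest)), 4))   # near zero: the learned weights reproduce the target
```

Each particle encodes a candidate weight matrix; the swarm minimizes the squared distance between the map's converged state and the desired one.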
METAHEURISTIC OPTIMIZATION METHODS FOR PARAMETERS ESTIMATION OF DYNAMIC SYSTEMS
Directory of Open Access Journals (Sweden)
V. Panteleev Andrei
2017-01-01
Full Text Available The article considers the use of metaheuristic methods of constrained global optimization ("Big Bang - Big Crunch", "Fireworks Algorithm", "Grenade Explosion Method") for parameter estimation of dynamic systems described by algebraic-differential equations. Parameter estimation is based on observations of the mathematical model's behavior. The parameter values are obtained by minimizing a criterion describing the total squared error between the state-vector coordinates of the model and the precisely observed values at different moments in time. A parallelepiped-type restriction is imposed on the parameter values. The metaheuristic methods of constrained global optimization used to solve the problems do not guarantee the optimum, but allow a solution of rather good quality to be obtained in an acceptable amount of time. An algorithm for applying the metaheuristic methods is given. Alongside explicit methods for solving algebraic-differential equation systems, it is convenient to use implicit methods for solving ordinary differential equation systems. Two examples of the parameter-estimation problem, differing in their mathematical models, are given. In the first example, a linear mathematical model describes changes in the parameters of a chemical reaction, and in the second, a nonlinear mathematical model describes predator-prey dynamics, characterizing the changes in both populations. For each of the examples, calculation results from all three optimization methods are given, together with recommendations on how to choose the methods' parameters. The obtained numerical results demonstrate the efficiency of the proposed approach. The deduced parameter approximations differ only slightly from the best known solutions, which were obtained by other means. To refine the results, one could apply hybrid schemes that combine classical optimization methods of zero, first and second order and...
Theory and principled methods for the design of metaheuristics
Borenstein, Yossi
2013-01-01
Metaheuristics, and evolutionary algorithms in particular, are known to provide efficient, adaptable solutions for many real-world problems, but the often informal way in which they are defined and applied has led to misconceptions, and even successful applications are sometimes the outcome of trial and error. Ideally, theoretical studies should explain when and why metaheuristics work, but the challenge is huge: mathematical analysis requires significant effort even for simple scenarios and real-life problems are usually quite complex. In this book the editors establish a bridge between theo
A hybrid approach for efficient anomaly detection using metaheuristic methods
Directory of Open Access Journals (Sweden)
Tamer F. Ghanem
2015-07-01
Full Text Available Network intrusion detection based on anomaly detection techniques has a significant role in protecting networks and systems against harmful activities. Different metaheuristic techniques have been used for anomaly detector generation. Yet, the reported literature has not studied the use of the multi-start metaheuristic method for detector generation. This paper proposes a hybrid approach for anomaly detection in large-scale datasets using detectors generated with a multi-start metaheuristic method and genetic algorithms. The proposed approach takes some inspiration from negative-selection-based detector generation. The approach is evaluated on the NSL-KDD dataset, a modified version of the widely used KDD CUP 99 dataset. The results show its effectiveness in generating a suitable number of detectors with an accuracy of 96.1% compared to competing machine learning algorithms.
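The negative-selection idea the record borrows can be sketched in a few lines (a toy 2-D version with made-up self data and detector radius, not the paper's multi-start/GA generator):

```python
import random

random.seed(5)

# "Self" set: normal traffic mapped to 2-D features in [0, 0.5]^2 (invented).
self_set = [(random.random() * 0.5, random.random() * 0.5) for _ in range(50)]
R = 0.15   # detector radius (an assumption of the sketch)

def matches(d, p):
    """A detector d covers point p if p lies inside d's radius."""
    return (d[0] - p[0]) ** 2 + (d[1] - p[1]) ** 2 < R * R

# Negative selection: keep only random candidate detectors covering no self sample.
detectors = []
while len(detectors) < 20:
    cand = (random.random(), random.random())
    if not any(matches(cand, s) for s in self_set):
        detectors.append(cand)

def is_anomaly(p):
    """A point is flagged anomalous if any surviving detector covers it."""
    return any(matches(d, p) for d in detectors)

print(len(detectors))   # 20 detectors, none of which covers a self sample
```

At detection time, `is_anomaly` flags points that fall inside any detector; because detectors were censored against the self set, normal samples are unlikely to be flagged.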
Directory of Open Access Journals (Sweden)
Krystel K. Castillo-Villar
2014-11-01
Full Text Available Bioenergy is a new source of energy that accounts for a substantial portion of renewable energy production in many countries. The production of bioenergy is expected to increase due to its unique advantages, such as the absence of harmful emissions and its abundance. Supply-related problems are the main obstacles precluding increased use of biomass (which is bulky and has low energy density) to produce bioenergy. To overcome this challenge, large-scale optimization models need to be solved to enable decision makers to plan, design, and manage bioenergy supply chains. Therefore, the use of effective optimization approaches is of great importance. Traditional mathematical methods (such as linear, integer, and mixed-integer programming) frequently fail to find optimal solutions for non-convex and/or large-scale models, whereas metaheuristics are efficient approaches for finding near-optimal solutions using fewer computational resources. This paper presents a comprehensive review studying and analyzing the application of metaheuristics to bioenergy supply chain models, as well as the particular challenges of the mathematical problems arising in the bioenergy supply chain field. The reviewed metaheuristics include: (1) population approaches, such as ant colony optimization (ACO), the genetic algorithm (GA), particle swarm optimization (PSO), and the bee colony algorithm (BCA); and (2) trajectory approaches, such as tabu search (TS) and simulated annealing (SA). Based on the outcomes of this literature review, the integrated design and planning of bioenergy supply chains has been addressed primarily with the GA. Production process optimization was addressed primarily with both the GA and PSO. The supply chain network design problem was treated with the GA and ACO. The truck and task scheduling problem was solved using SA and TS, where the trajectory-based methods proved to outperform the population...
Optimization in engineering sciences approximate and metaheuristic methods
Stefanoiu, Dan; Popescu, Dumitru; Filip, Florin Gheorghe; El Kamel, Abdelkader
2014-01-01
The purpose of this book is to present the main metaheuristics and approximate and stochastic methods for optimization of complex systems in Engineering Sciences. It has been written within the framework of the European Union project ERRIC (Empowering Romanian Research on Intelligent Information Technologies), which is funded by the EU's FP7 Research Potential program and has been developed in co-operation between French and Romanian teaching researchers. Through the principles of various proposed algorithms (with additional references) this book allows the reader to explore various methods o
International Nuclear Information System (INIS)
Sacco, Wagner F.; Oliveira, Cassiano R.E. de
2005-01-01
A new metaheuristic called the 'Gravitational Attraction Algorithm' (GAA) is introduced in this article. It is based on an analogy with the gravitational force field, in which a body attracts another in proportion to both masses and in inverse proportion to their distance. GAA is a population-based algorithm in which, first of all, the solutions are clustered using the Fuzzy C-Means (FCM) algorithm. Following that, the gravitational force of each individual with respect to each cluster is evaluated, and the individual (solution) is displaced toward the cluster with the greatest attractive force. Once inside this cluster, the solution receives small stochastic variations, performing a local exploration. The solutions are then crossed over and the process starts all over again. The parameters required by GAA are the 'diversity factor', which is used to create random diversity in a fashion similar to the genetic algorithm's mutation, and the number of clusters for FCM. GAA is applied to the reactor core design optimization problem, which consists of adjusting several reactor cell parameters in order to minimize the average peak factor in a 3-enrichment-zone reactor, subject to operational restrictions. This problem was previously attacked using the canonical genetic algorithm (GA) and a niching genetic algorithm (NGA). The new metaheuristic is compared to these two algorithms. The three algorithms are allotted the same computational effort, and GAA reaches the best results, showing its potential for other applications in the nuclear engineering field, for instance, the nuclear core reload optimization problem. (author)
A Meta-Heuristic Applying for the Transportation of Wood Raw Material
Directory of Open Access Journals (Sweden)
Erhan Çalışkan
2009-04-01
Full Text Available The primary product of Turkish forestry is wood raw material. Thus, an operational organization is necessary to transport these main products to depots and then to consumers without quality and volume loss. This organization starts from the harvesting area in the stand and continues to roadside depots or ramps, to main depots, and even to manufacturers from there. Computer-assisted models, which aim to determine the optimum transportation path, can be utilized in solving this quite complex problem. In this study, an evaluation is presented of the importance and current status of wood material transport, the classification of wood transportation, computer-assisted heuristic and meta-heuristic methods, and the possibilities of using these methods in the transportation of wood materials.
Metaheuristics and optimization in civil engineering
Bekdaş, Gebrail; Nigdeli, Sinan
2016-01-01
This timely book deals with a current topic, i.e. the applications of metaheuristic algorithms, with a primary focus on optimization problems in civil engineering. The first chapter offers a concise overview of different kinds of metaheuristic algorithms, explaining their advantages in solving complex engineering problems that cannot be effectively tackled by traditional methods, and citing the most important works for further reading. The remaining chapters report on advanced studies on the applications of certain metaheuristic algorithms to specific engineering problems. Genetic algorithm, bat algorithm, cuckoo search, harmony search and simulated annealing are just some of the methods presented and discussed step by step in real-application contexts, in which they are often used in combination with each other. Thanks to its synthetic yet meticulous and practice-oriented approach, the book is a perfect guide for graduate students, researchers and professionals wishing to apply metaheuristic algorithms in...
Applying a multiobjective metaheuristic inspired by honey bees to phylogenetic inference.
Santander-Jiménez, Sergio; Vega-Rodríguez, Miguel A
2013-10-01
The development of increasingly popular multiobjective metaheuristics has allowed bioinformaticians to deal with optimization problems in computational biology where multiple objective functions must be taken into account. One of the most relevant research topics that can benefit from these techniques is phylogenetic inference. Throughout the years, different researchers have proposed their own views on the reconstruction of ancestral evolutionary relationships among species. As a result, biologists often report different phylogenetic trees from the same dataset when considering distinct optimality principles. In this work, we detail a multiobjective swarm intelligence approach based on the novel Artificial Bee Colony algorithm for inferring phylogenies. The aim of this paper is to propose a complementary view of phylogenetics according to the maximum parsimony and maximum likelihood criteria, in order to generate a set of phylogenetic trees that represent a compromise between these principles. Experimental results on a variety of nucleotide data sets and statistical studies highlight the relevance of the proposal with regard to other multiobjective algorithms and state-of-the-art biological methods. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
International Nuclear Information System (INIS)
Trovão, João P.; Antunes, Carlos Henggeler
2015-01-01
Highlights: • Two meta-heuristic approaches are evaluated for multi-ESS management in electric vehicles. • An online global energy management strategy with two different layers is studied. • Meta-heuristic techniques are used to define optimized energy sharing mechanisms. • A comparative analysis for the ARTEMIS driving cycle is addressed. • The effectiveness of the double-layer management with meta-heuristics is presented. - Abstract: This work is focused on the performance evaluation of two meta-heuristic approaches, simulated annealing and particle swarm optimization, to deal with power management of a dual energy storage system for electric vehicles. The proposed strategy is based on a global energy management system with two layers: long-term (energy) and short-term (power) management. A rule-based system deals with the long-term (strategic) layer, and for the short-term (action) layer meta-heuristic techniques are developed to define optimized online energy sharing mechanisms. Simulations have been made for several driving cycles to validate the proposed strategy. A comparative analysis for the ARTEMIS driving cycle is presented, evaluating three performance indicators (computation time, final value of battery state of charge, and minimum value of supercapacitor state of charge) as a function of input parameters. The results show the effectiveness of an implementation based on a double-layer management system using meta-heuristic methods for online power management, supported by a rule set that restricts the search space.
A meta-heuristic method for solving scheduling problem: crow search algorithm
Adhi, Antono; Santosa, Budi; Siswanto, Nurhadi
2018-04-01
Scheduling is one of the most important processes in industry, both in manufacturing and in services. The scheduling process is the process of selecting resources to perform operations on tasks. Resources can be machines, people, tasks, jobs or operations. The selection of the optimum sequence of jobs from a permutation is an essential issue in every piece of research on scheduling problems. The optimum sequence becomes the optimum solution of the scheduling problem. The scheduling problem becomes NP-hard once the number of jobs in the sequence exceeds what exact algorithms can process in reasonable time. In order to obtain optimum results, a method is needed with the capability to solve complex scheduling problems in an acceptable time. Meta-heuristics are the methods usually used to solve scheduling problems. The recently published method called the Crow Search Algorithm (CSA) is adopted in this research to solve a scheduling problem. CSA is an evolutionary meta-heuristic method based on the behavior of flocks of crows. The results of CSA on the scheduling problem are compared with those of other algorithms. From the comparison, it is found that CSA has better performance in terms of solution quality and computation time than the other algorithms.
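The CSA move the record refers to (Askarzadeh's follow-or-flee rule) can be sketched on a toy scheduling instance; the random-key decoding, flock size and awareness probability below are illustrative assumptions, not the paper's settings:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy single-machine problem: order six jobs to minimize total completion time
# (processing times are invented; SPT order is known optimal for this objective).
proc = np.array([4.0, 2.0, 7.0, 1.0, 5.0, 3.0])
n_jobs = len(proc)

def total_completion(keys):
    """Random-key decoding: sorting the keys yields the job sequence."""
    order = np.argsort(keys)
    return float(np.cumsum(proc[order]).sum())

# Crow Search Algorithm (Askarzadeh, 2016): crow i chases crow j's memory;
# with "awareness probability" ap, crow j notices and i ends up at a random spot.
flock, fl, ap = 20, 2.0, 0.1
x = rng.random((flock, n_jobs))
mem = x.copy()                                   # each crow's best-known position
mem_cost = np.array([total_completion(m) for m in mem])

for _ in range(300):
    for i in range(flock):
        j = rng.integers(flock)
        if rng.random() >= ap:                   # j unaware: move toward its memory
            x[i] = x[i] + rng.random() * fl * (mem[j] - x[i])
        else:                                    # j aware: i is fooled, random move
            x[i] = rng.random(n_jobs)
        c = total_completion(x[i])
        if c < mem_cost[i]:                      # update crow i's memory
            mem[i], mem_cost[i] = x[i].copy(), c

print(mem_cost.min())   # best total completion time found (57.0 is optimal here)
```

The random-key encoding lets the continuous CSA moves operate on permutations: any real vector decodes to a valid job sequence via `argsort`.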
Metaheuristics for bi-level optimization
2013-01-01
This book provides a complete background on metaheuristics to solve complex bi-level optimization problems (continuous/discrete, mono-objective/multi-objective) in a diverse range of application domains. Readers learn to solve large scale bi-level optimization problems by efficiently combining metaheuristics with complementary metaheuristics and mathematical programming approaches. Numerous real-world examples of problems demonstrate how metaheuristics are applied in such fields as networks, logistics and transportation, engineering design, finance and security.
Directory of Open Access Journals (Sweden)
E. A. Boytsov
2014-01-01
Full Text Available A multi-tenant database cluster is a concept of a data-storage subsystem for cloud applications with a multi-tenant architecture. The cluster is a set of relational database servers with a single entry point, combined into one unit by a cluster controller. The system is aimed at applications developed according to the Software as a Service (SaaS) paradigm, and it places tenants on database servers so as to provide their isolation, data backup and the most effective use of the available computational power. One of the most important problems with such a system is the effective distribution of data across servers, which affects the load on individual cluster nodes and fault tolerance. This paper considers a data-management approach based on a load-balancing quality measure function. This function is used during the initial placement of new tenants and also during placement optimization steps. Standard metaheuristic optimization schemes such as simulated annealing and tabu search are used to find a better tenant placement.
A hybrid metaheuristic method to optimize the order of the sequences in continuous-casting
Directory of Open Access Journals (Sweden)
Achraf Touil
2016-06-01
Full Text Available In this paper, we propose a hybrid metaheuristic algorithm to maximize production and minimize processing time in steel-making and continuous casting (SCC) by optimizing the order of the sequences, where a sequence is a group of jobs with the same chemical characteristics. Based on the work of Bellabdaoui and Teghem (2006) [Bellabdaoui, A., & Teghem, J. (2006). A mixed-integer linear programming model for the continuous casting planning. International Journal of Production Economics, 104(2), 260-270.], a mixed-integer linear program for scheduling steelmaking continuous casting production is presented to minimize the makespan. The order of the sequences in continuous casting is assumed to be fixed. The main contribution is to analyze an additional way to determine the optimal order of sequences. A hybrid method based on simulated annealing and a genetic algorithm restricted by a tabu list (SA-GA-TL) is proposed to obtain the optimal order. After parameter tuning, the proposed algorithm is tested on different instances using a .NET application and the commercial solver Cplex v12.5. These results are compared with those obtained by SA-TL (simulated annealing restricted by a tabu list).
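A stripped-down version of the simulated-annealing-restricted-by-a-tabu-list idea (on a made-up sequence-ordering objective, not the paper's SCC model) might look like:

```python
import math
import random
from collections import deque

random.seed(42)

# Toy stand-in for the SCC objective: the cost of an order of n sequences is
# the sum of invented "setup costs" between consecutive sequences.
n = 8
setup = [[abs(i - j) + random.random() for j in range(n)] for i in range(n)]

def cost(perm):
    return sum(setup[perm[k]][perm[k + 1]] for k in range(n - 1))

cur = list(range(n))
cur_c = cost(cur)
best, best_c = cur[:], cur_c
tabu = deque(maxlen=15)          # tabu tenure = list length
T = 5.0
for _ in range(3000):
    i, j = sorted(random.sample(range(n), 2))
    if (i, j) in tabu:
        continue                 # the tabu list keeps SA from redoing recent swaps
    cand = cur[:]
    cand[i], cand[j] = cand[j], cand[i]
    c = cost(cand)
    # Metropolis rule: accept improvements, and worse moves with prob exp(-dC/T)
    if c < cur_c or random.random() < math.exp((cur_c - c) / T):
        cur, cur_c = cand, c
        tabu.append((i, j))
        if cur_c < best_c:
            best, best_c = cur[:], cur_c
    T *= 0.998                   # geometric cooling

print(round(best_c, 2))   # no worse than the initial identity order
```

The tabu list adds short-term memory to plain SA: a swap that was just accepted cannot be applied again for a while, which discourages cycling.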
Metaheuristics for medicine and biology
Talbi, El-Ghazali
2017-01-01
This book highlights recent research on metaheuristics for biomedical engineering, addressing both theoretical and application aspects. Given the multidisciplinary nature of biomedical image analysis, it has become one of the most central topics in computer science, computer engineering and electrical and electronic engineering, and has attracted the interest of many researchers. To deal with these problems, many traditional and recent methods, algorithms and techniques have been proposed; among them, metaheuristics are the most common choice. This book provides essential content for senior and young researchers interested in methodologies for implementing metaheuristics to help solve biomedical engineering problems.
Metaheuristic optimization in power engineering
Radosavljević, Jordan
2018-01-01
This book describes the principles of solving various problems in power engineering via the application of selected metaheuristic optimization methods including genetic algorithms, particle swarm optimization, and the gravitational search algorithm.
Penas, David R.; González, Patricia; Egea, José A.; Banga, Julio R.; Doallo, Ramón
2015-01-01
Metaheuristics are gaining increased attention as efficient solvers for hard global optimization problems arising in bioinformatics and computational systems biology. Scatter Search (SS) is one of the recent outstanding algorithms in that class. However, its application to very hard problems, like those considering parameter estimation in dynamic models of systems biology, still results in excessive computation times. In order to reduce the computational cost of the SS and improve its success...
Gamshadzaei, Mohammad Hossein; Rahimzadegan, Majid
2017-10-01
Identification of water extents in Landsat images is challenging due to surfaces with reflectance similar to that of water. The objective of this study is to provide stable and accurate methods for identifying water extents in Landsat images based on meta-heuristic algorithms. To this end, seven Landsat images were selected from various environmental regions of Iran. Training of the algorithms was performed using 40 water pixels and 40 non-water pixels in Operational Land Imager images of Chitgar Lake (one of the study regions). Moreover, high-resolution images from Google Earth were digitized to evaluate the results. Two approaches were considered: index-based methods and artificial intelligence (AI) algorithms. In the first approach, nine common water spectral indices were investigated. AI algorithms were utilized to acquire coefficients of optimal band combinations for extracting water extents. Among the AI algorithms, the artificial neural network and the ant colony optimization, genetic algorithm, and particle swarm optimization (PSO) meta-heuristic algorithms were implemented. Index-based methods showed varying performance across regions. Among the AI methods, PSO had the best performance, with average overall accuracy and kappa coefficient of 93% and 98%, respectively. The results indicate that the acquired band combinations can extract water extents accurately and stably in Landsat imagery.
Directory of Open Access Journals (Sweden)
Eduardo Batista de Moraes Barbosa
2017-01-01
Full Text Available Usually, metaheuristic algorithms are adapted to a large set of problems by applying a few modifications to their parameters for each specific case. However, this flexibility demands a huge effort to tune such parameters correctly. The tuning of metaheuristics therefore arises as one of the most important challenges in research on these algorithms. Thus, this paper presents a methodology combining statistical and artificial intelligence methods for the fine-tuning of metaheuristics. The key idea is a heuristic method, called the Heuristic Oriented Racing Algorithm (HORA), which explores a search space of parameters looking for candidate configurations close to a promising alternative. To confirm the validity of this approach, we present a case study of fine-tuning two distinct metaheuristics, Simulated Annealing (SA) and a Genetic Algorithm (GA), in order to solve the classical traveling salesman problem. The results are compared against the same metaheuristics tuned through a racing method. Broadly, the proposed approach proved effective in terms of the overall time of the tuning process. Our results reveal that metaheuristics tuned by means of HORA achieve, with much less computational effort, results similar to those obtained when they are tuned by the other fine-tuning approach.
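The racing principle behind HORA, namely evaluating candidate parameter configurations on shared instances and discarding clearly inferior ones early, can be sketched as follows (the SA test function, elimination margin and candidate set are invented; this is not HORA itself):

```python
import math
import random

def run_sa(cooling, seed):
    """One SA run on a toy 1-D multimodal function; returns the best value found."""
    rnd = random.Random(seed)
    f = lambda x: x * x + 10 * math.sin(3 * x)
    x, t = rnd.uniform(-5, 5), 2.0
    best = f(x)
    for _ in range(400):
        y = min(5.0, max(-5.0, x + rnd.gauss(0, 0.5)))
        if f(y) < f(x) or rnd.random() < math.exp((f(x) - f(y)) / t):
            x = y
        best = min(best, f(x))
        t *= cooling
    return best

# Racing: all surviving candidate cooling rates are run on the same seeds; after
# each round, candidates whose mean is clearly worse than the leader are dropped.
candidates = {c: [] for c in (0.80, 0.90, 0.95, 0.99)}
for round_ in range(10):
    for c in candidates:
        candidates[c].append(run_sa(c, seed=round_))
    means = {c: sum(v) / len(v) for c, v in candidates.items()}
    leader = min(means, key=means.get)
    for c in list(candidates):
        if len(candidates) > 1 and means[c] > means[leader] + 2.0:
            del candidates[c]    # crude margin (a stand-in for a statistical test)

print(sorted(candidates))   # surviving cooling-rate configurations
```

Real racing methods replace the fixed margin with a statistical test, but the budget saving is the same: poor configurations stop consuming runs as soon as the evidence against them is strong.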
Kochenberger, Gary
2003-01-01
Metaheuristics, in their original definition, are solution methods that orchestrate an interaction between local improvement procedures and higher level strategies to create a process capable of escaping from local optima and performing a robust search of a solution space. Over time, these methods have also come to include any procedures that employ strategies for overcoming the trap of local optimality in complex solution spaces, especially those procedures that utilize one or more neighborhood structures as a means of defining admissible moves to transition from one solution to another, or to build or destroy solutions in constructive and destructive processes. The degree to which neighborhoods are exploited varies according to the type of procedure. In the case of certain population-based procedures, such as genetic algorithms, neighborhoods are implicitly (and somewhat restrictively) defined by reference to replacing components of one solution with those of another, by variously chosen rules of exchange p...
Directory of Open Access Journals (Sweden)
Hossein Karimi
2011-04-01
Full Text Available The permutation method of multiple attribute decision making has two significant deficiencies: high computational time and wrong priority output in some problem instances. In this paper, a novel permutation method called the adjusted permutation method (APM) is proposed to compensate for the deficiencies of the conventional permutation method. We propose tabu search (TS) and particle swarm optimization (PSO) to find suitable solutions at a reasonable computational time for large problem instances. The proposed method is examined on some numerical examples to evaluate its performance. The preliminary results show that both approaches provide competent solutions in relatively reasonable amounts of time, while TS performs better for solving APM.
Advances in metaheuristic algorithms for optimal design of structures
Kaveh, A
2017-01-01
This book presents efficient metaheuristic algorithms for optimal design of structures. Many of these algorithms are developed by the author and his colleagues, consisting of Democratic Particle Swarm Optimization, Charged System Search, Magnetic Charged System Search, Field of Forces Optimization, Dolphin Echolocation Optimization, Colliding Bodies Optimization, Ray Optimization. These are presented together with algorithms which were developed by other authors and have been successfully applied to various optimization problems. These consist of Particle Swarm Optimization, Big Bang-Big Crunch Algorithm, Cuckoo Search Optimization, Imperialist Competitive Algorithm, and Chaos Embedded Metaheuristic Algorithms. Finally a multi-objective optimization method is presented to solve large-scale structural problems based on the Charged System Search algorithm. The concepts and algorithms presented in this book are not only applicable to optimization of skeletal structures and finite element models, but can equally ...
Advances in metaheuristic algorithms for optimal design of structures
Kaveh, A
2014-01-01
This book presents efficient metaheuristic algorithms for optimal design of structures. Many of these algorithms are developed by the author and his colleagues, consisting of Democratic Particle Swarm Optimization, Charged System Search, Magnetic Charged System Search, Field of Forces Optimization, Dolphin Echolocation Optimization, Colliding Bodies Optimization, Ray Optimization. These are presented together with algorithms which were developed by other authors and have been successfully applied to various optimization problems. These consist of Particle Swarm Optimization, Big Bang-Big Crunch Algorithm, Cuckoo Search Optimization, Imperialist Competitive Algorithm, and Chaos Embedded Metaheuristic Algorithms. Finally a multi-objective optimization method is presented to solve large-scale structural problems based on the Charged System Search algorithm. The concepts and algorithms presented in this book are not only applicable to optimization of skeletal structures and finite element models, but can equally ...
A Meta-heuristic Approach for Variants of VRP in Terms of Generalized Saving Method
Shimizu, Yoshiaki
Global logistics design is attracting keen interest as an essential infrastructure for modern societal provision. For example, we can point to green and/or robust logistics in transportation systems, smart grids in electricity utilization systems, qualified service in delivery systems, and so on. As a key technology for such deployments, we have engaged in the practical vehicle routing problem on the basis of the conventional saving method. This paper extends that idea and gives a general framework applicable to various real-world applications. It covers not only delivery problems but also two kinds of pick-up problems, i.e., straight and drop-by routings. Moreover, the multi-depot problem is considered via a hybrid approach with a graph algorithm, and its solution method is realized in a hierarchical manner. Numerical experiments have been carried out to validate the effectiveness of the proposed method.
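The conventional saving method this entry builds on is the Clarke-Wright heuristic; a minimal single-depot sketch (toy coordinates, unit demands and an assumed capacity, not the paper's generalized variant) looks like:

```python
import math

# Toy instance: depot 0 plus customers 1..5 (coordinates and the capacity
# limit are invented for the sketch; demands are all 1).
pts = {0: (0, 0), 1: (2, 3), 2: (4, 1), 3: (-3, 2), 4: (-2, -4), 5: (3, -3)}
demand = {1: 1, 2: 1, 3: 1, 4: 1, 5: 1}
cap = 3

def d(i, j):
    (xa, ya), (xb, yb) = pts[i], pts[j]
    return math.hypot(xa - xb, ya - yb)

# Clarke-Wright saving: s(i, j) = d(0, i) + d(0, j) - d(i, j) is the distance
# saved by serving j right after i instead of returning to the depot between them.
routes = {c: [c] for c in demand}            # start with one route per customer
savings = sorted(((d(0, i) + d(0, j) - d(i, j), i, j)
                  for i in demand for j in demand if i < j), reverse=True)

for s, i, j in savings:                      # greedily apply the best savings
    ri, rj = routes[i], routes[j]
    if ri is rj or sum(demand[c] for c in ri + rj) > cap:
        continue                             # already in one route, or overloaded
    if ri[-1] == i and rj[0] == j:           # merge only at route endpoints
        merged = ri + rj
    elif rj[-1] == j and ri[0] == i:
        merged = rj + ri
    else:
        continue
    for c in merged:
        routes[c] = merged

routes_list = list({id(r): r for r in routes.values()}.values())
print(routes_list)   # capacity-feasible routes covering all customers
```

The generalized framework of the paper reworks how savings are defined and merged (pick-up variants, multiple depots); the greedy merge loop above is the common core.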
A novel method for retinal optic disc detection using bat meta-heuristic algorithm.
Abdullah, Ahmad S; Özok, Yasa Ekşioğlu; Rahebi, Javad
2018-05-09
Optic disc detection in retinal images is useful in the treatment of glaucoma and diabetic retinopathy. In this paper, a novel preprocessing of the retinal image with bat algorithm (BA) optimization is proposed to detect the optic disc. As the optic disc is a bright area and the vessels that emerge from it are dark, the selected segments are regions with great diversity of intensity, which does not usually happen in pathological regions. First, in the preprocessing stage, the image is converted into a gray image using gray-scale conversion, and then morphological operations are applied to remove dark elements, such as blood vessels, from the image. In the next stage, the bat algorithm is used to find the optimum threshold value for the optic disc location. In order to improve accuracy and obtain the best result for the segmented optic disc, an ellipse-fitting approach is used in the last stage to enhance and smooth the segmented optic disc boundary. The ellipse fitting is carried out using the least-squares distance approach. The efficiency of the proposed method was tested on six publicly available datasets: MESSIDOR, DRIVE, DIARETDB1, DIARETDB0, STARE, and DRIONS-DB. The optic disc segmentation average overlap and accuracy were in the ranges of 78.5-88.2% and 96.6-99.91% across these six databases. The optic disc of a retinal image is segmented in less than 2.1 s. The proposed method improves optic disc segmentation results for healthy and pathological retinal images at a low computation time.
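The thresholding stage of such a pipeline can be imitated on a synthetic two-mode histogram (Otsu's between-class variance is assumed as the fitness, and the loudness and pulse-rate schedules of the full bat algorithm are omitted; this is a sketch, not the paper's implementation):

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic gray-level sample: a dark background mode plus a small bright
# "disc" mode (the mixture parameters are invented for the sketch).
pixels = np.clip(np.concatenate([rng.normal(60, 12, 9000),
                                 rng.normal(200, 10, 1000)]), 0, 255)

def fitness(t):
    """Otsu's between-class variance; the BA searches for the t maximizing it."""
    lo, hi = pixels[pixels < t], pixels[pixels >= t]
    if len(lo) == 0 or len(hi) == 0:
        return 0.0
    w0, w1 = len(lo) / len(pixels), len(hi) / len(pixels)
    return w0 * w1 * (lo.mean() - hi.mean()) ** 2

bats, iters = 15, 100
x = rng.uniform(0, 255, bats)                # candidate thresholds
v = np.zeros(bats)
best = x[np.argmax([fitness(t) for t in x])]
for _ in range(iters):
    freq = rng.uniform(0, 1, bats)           # frequency-tuned pull toward best
    v = v + (best - x) * freq
    x = np.clip(x + v, 0, 255)
    walk = rng.random(bats) < 0.5            # local random walk around best
    x[walk] = np.clip(best + rng.normal(0, 2, walk.sum()), 0, 255)
    f = np.array([fitness(t) for t in x])
    if f.max() > fitness(best):
        best = x[f.argmax()]

print(round(float(best)))   # a threshold between the two intensity modes
```

In the paper's setting, the histogram comes from the morphologically cleaned retinal image and the chosen threshold localizes the bright disc region before ellipse fitting.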
Metaheuristics progress in complex systems optimization
Doerner, Karl F; Greistorfer, Peter; Gutjahr, Walter; Hartl, Richard F; Reimann, Marc
2007-01-01
The aim of ""Metaheuristics: Progress in Complex Systems Optimization"" is to provide several different kinds of information: a delineation of general metaheuristics methods, a number of state-of-the-art articles from a variety of well-known classical application areas as well as an outlook to modern computational methods in promising new areas. Therefore, this book may equally serve as a textbook in graduate courses for students, as a reference book for people interested in engineering or social sciences, and as a collection of new and promising avenues for researchers working in this field.
Metaheuristic Algorithms for Convolution Neural Network.
Rere, L M Rasdi; Fanany, Mohamad Ivan; Arymurthy, Aniati Murni
2016-01-01
A typical modern optimization technique is usually either heuristic or metaheuristic. Such techniques have managed to solve some optimization problems in science, engineering, and industry. However, implementation strategies of metaheuristics for accuracy improvement of convolution neural networks (CNNs), a famous deep learning method, are still rarely investigated. Deep learning is a type of machine learning whose aim is to move closer to the artificial intelligence goal of creating a machine that can successfully perform any intellectual task a human can. In this paper, we propose implementation strategies for three popular metaheuristic approaches, namely simulated annealing, differential evolution, and harmony search, to optimize CNNs. The performance of these metaheuristic methods in optimizing CNNs on the MNIST and CIFAR classification datasets was evaluated and compared, and the proposed methods were also compared with the original CNN. Although the proposed methods increase the computation time, their accuracy is also improved (by up to 7.14 percent).
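To make the idea concrete at toy scale, here is a hedged sketch of simulated annealing training a network without gradients (a 2-2-1 MLP on XOR standing in for a CNN on MNIST/CIFAR; all constants are invented for the sketch):

```python
import math
import random

random.seed(0)

# XOR toy data stands in for MNIST/CIFAR, and a tiny 2-2-1 MLP for the CNN:
# the point is only to show SA driving a network's loss down without gradients.
X = [(0, 0), (0, 1), (1, 0), (1, 1)]
Y = [0, 1, 1, 0]

def forward(w, a, b):
    h1 = math.tanh(w[0] * a + w[1] * b + w[2])
    h2 = math.tanh(w[3] * a + w[4] * b + w[5])
    return 1 / (1 + math.exp(-(w[6] * h1 + w[7] * h2 + w[8])))

def loss(w):
    return sum((forward(w, a, b) - y) ** 2 for (a, b), y in zip(X, Y))

w = [random.uniform(-1, 1) for _ in range(9)]
cur = loss(w)
best_w, best_loss = w[:], cur
T = 1.0
for _ in range(20000):
    cand = [wi + random.gauss(0, 0.2) for wi in w]   # perturb all 9 weights
    c = loss(cand)
    # Metropolis acceptance: always take improvements, sometimes worse moves
    if c < cur or random.random() < math.exp((cur - c) / T):
        w, cur = cand, c
        if cur < best_loss:
            best_w, best_loss = w[:], cur
    T = max(1e-3, T * 0.9995)                        # geometric cooling with a floor

print(round(best_loss, 3))   # well below the 1.0 loss of always predicting 0.5
```

The paper's setting differs in scale (CNN weights or hyperparameters, real datasets) but the accept-worse-moves-then-cool loop is the same mechanism.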
Metaheuristic Algorithms for Convolution Neural Network
Directory of Open Access Journals (Sweden)
L. M. Rasdi Rere
2016-01-01
Full Text Available A typical modern optimization technique is usually either heuristic or metaheuristic. Such techniques have managed to solve some optimization problems in the research areas of science, engineering, and industry. However, implementation strategies of metaheuristics for improving the accuracy of convolutional neural networks (CNN), a famous deep learning method, are still rarely investigated. Deep learning relates to a type of machine learning technique whose aim is to move closer to the goal of artificial intelligence: creating a machine that can successfully perform any intellectual task that a human can carry out. In this paper, we propose implementation strategies for three popular metaheuristic approaches, namely simulated annealing, differential evolution, and harmony search, to optimize CNN. The performance of these metaheuristic methods in optimizing CNN on classifying the MNIST and CIFAR datasets was evaluated and compared. Furthermore, the proposed methods are also compared with the original CNN. Although the proposed methods show an increase in computation time, their accuracy has also been improved (by up to 7.14 percent).
Handbook of metaheuristics
Potvin, Jean-Yves
2010-01-01
“… an excellent book if you want to learn about a number of individual metaheuristics." (U. Aickelin, Journal of the Operational Research Society, Issue 56, 2005, on the First Edition) The first edition of the Handbook of Metaheuristics was published in 2003 under the editorship of Fred Glover and Gary A. Kochenberger. Given the numerous developments observed in the field of metaheuristics in recent years, it appeared that the time was ripe for a second edition of the Handbook. When Glover and Kochenberger were unable to prepare this second edition, they suggested that Michel Gendreau and Jean-Yves Potvin should take over the editorship, and so this important new edition is now available. Through its 21 chapters, this second edition is designed to provide a broad coverage of the concepts, implementations and applications in this important field of optimization. Original contributors either revised or updated their work, or provided entirely new chapters. The Handbook now includes updated chapters on the b...
Metaheuristic applications to speech enhancement
Kunche, Prajna
2016-01-01
This book serves as a basic reference for those interested in the application of metaheuristics to speech enhancement. The major goal of the book is to explain the basic concepts of optimization methods and their use in heuristic optimization in speech enhancement to scientists, practicing engineers, and academic researchers in speech processing. The authors discuss why it has been a challenging problem for researchers to develop new enhancement algorithms that aid in the quality and intelligibility of degraded speech. They present powerful optimization methods to speech enhancement that can help to solve the noise reduction problems. Readers will be able to understand the fundamentals of speech processing as well as the optimization techniques, how the speech enhancement algorithms are implemented by utilizing optimization methods, and will be given the tools to develop new algorithms. The authors also provide a comprehensive literature survey regarding the topic.
TOWARDS A UNIFIED VIEW OF METAHEURISTICS
Directory of Open Access Journals (Sweden)
El-Ghazali Talbi
2013-02-01
Full Text Available This talk provides a complete background on metaheuristics and presents in a unified view the main design questions for all families of metaheuristics, and clearly illustrates how to implement the algorithms under a software framework to reuse both the design and the code. The key search components of metaheuristics are considered as a toolbox for: - Designing efficient metaheuristics (e.g. local search, tabu search, simulated annealing, evolutionary algorithms, particle swarm optimization, scatter search, ant colonies, bee colonies, artificial immune systems) for optimization problems. - Designing efficient metaheuristics for multi-objective optimization problems. - Designing hybrid, parallel and distributed metaheuristics. - Implementing metaheuristics on sequential and parallel machines.
Metaheuristics in the service industry
Geiger, Martin Josef; Sevaux, Marc; Sörensen, Kenneth
2009-01-01
This book presents novel methodological approaches and improved results of metaheuristics for modern services. It examines applications in the area of transportation and logistics, while other areas include production and financial services.
Applied Bayesian hierarchical methods
National Research Council Canada - National Science Library
Congdon, P
2010-01-01
Contents include: 1.2 Posterior Inference from Bayes Formula; 1.3 Markov Chain Monte Carlo Sampling in Relation to Monte Carlo Methods: Obtaining Posterior...
Methods of applied mathematics
Hildebrand, Francis B
1992-01-01
This invaluable book offers engineers and physicists working knowledge of a number of mathematical facts and techniques not commonly treated in courses in advanced calculus, but nevertheless extremely useful when applied to typical problems in many different fields. It deals principally with linear algebraic equations, quadratic and Hermitian forms, operations with vectors and matrices, the calculus of variations, and the formulations and theory of linear integral equations. Annotated problems and exercises accompany each chapter.
Directory of Open Access Journals (Sweden)
Tashkova Katerina
2011-10-01
Full Text Available Background: We address the task of parameter estimation in models of the dynamics of biological systems based on ordinary differential equations (ODEs) from measured data, where the models are typically non-linear and have many parameters, the measurements are imperfect due to noise, and the studied system can often be only partially observed. A representative task is to estimate the parameters in a model of the dynamics of endocytosis, i.e., endosome maturation, reflected in a cut-out switch transition between the Rab5 and Rab7 domain protein concentrations, from experimental measurements of these concentrations. The general parameter estimation task and the specific instance considered here are challenging optimization problems, calling for the use of advanced meta-heuristic optimization methods, such as evolutionary or swarm-based methods. Results: We apply three global-search meta-heuristic algorithms for numerical optimization, i.e., the differential ant-stigmergy algorithm (DASA), particle-swarm optimization (PSO), and differential evolution (DE), as well as a local-search derivative-based algorithm 717 (A717) to the task of estimating parameters in ODEs. We evaluate their performance on the considered representative task along a number of metrics, including the quality of reconstructing the system output and the complete dynamics, as well as the speed of convergence, both on real-experimental data and on artificial pseudo-experimental data with varying amounts of noise. We compare the four optimization methods under a range of observation scenarios, where data of different completeness and accuracy of interpretation are given as input. Conclusions: Overall, the global meta-heuristic methods (DASA, PSO, and DE) clearly and significantly outperform the local derivative-based method (A717). Among the three meta-heuristics, differential evolution (DE) performs best in terms of the objective function, i.e., reconstructing the output, and in terms of
Tashkova, Katerina; Korošec, Peter; Silc, Jurij; Todorovski, Ljupčo; Džeroski, Sašo
2011-10-11
We address the task of parameter estimation in models of the dynamics of biological systems based on ordinary differential equations (ODEs) from measured data, where the models are typically non-linear and have many parameters, the measurements are imperfect due to noise, and the studied system can often be only partially observed. A representative task is to estimate the parameters in a model of the dynamics of endocytosis, i.e., endosome maturation, reflected in a cut-out switch transition between the Rab5 and Rab7 domain protein concentrations, from experimental measurements of these concentrations. The general parameter estimation task and the specific instance considered here are challenging optimization problems, calling for the use of advanced meta-heuristic optimization methods, such as evolutionary or swarm-based methods. We apply three global-search meta-heuristic algorithms for numerical optimization, i.e., differential ant-stigmergy algorithm (DASA), particle-swarm optimization (PSO), and differential evolution (DE), as well as a local-search derivative-based algorithm 717 (A717) to the task of estimating parameters in ODEs. We evaluate their performance on the considered representative task along a number of metrics, including the quality of reconstructing the system output and the complete dynamics, as well as the speed of convergence, both on real-experimental data and on artificial pseudo-experimental data with varying amounts of noise. We compare the four optimization methods under a range of observation scenarios, where data of different completeness and accuracy of interpretation are given as input. Overall, the global meta-heuristic methods (DASA, PSO, and DE) clearly and significantly outperform the local derivative-based method (A717). Among the three meta-heuristics, differential evolution (DE) performs best in terms of the objective function, i.e., reconstructing the output, and in terms of convergence. These results hold for both real and
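A minimal sketch of how differential evolution, one of the global-search methods compared above, can fit ODE parameters to an observed trajectory. The logistic model, its "true" parameters, and the forward-Euler integrator below are illustrative assumptions, not the endocytosis model from the paper:

```python
import random

def euler(f, x0, ts):
    """Forward-Euler integration of dx/dt = f(x) over time points ts."""
    xs = [x0]
    for i in range(1, len(ts)):
        dt = ts[i] - ts[i - 1]
        xs.append(xs[-1] + dt * f(xs[-1]))
    return xs

def simulate(params, x0, ts):
    r, k = params  # logistic growth rate and carrying capacity
    return euler(lambda x: r * x * (1 - x / k), x0, ts)

TS = [0.1 * i for i in range(51)]
TRUE = (0.9, 10.0)                 # hypothetical "true" parameters
DATA = simulate(TRUE, 0.5, TS)     # synthetic observations (noise-free here)

def sse(params):
    """Total squared error between model trajectory and observations."""
    model = simulate(params, 0.5, TS)
    return sum((m - d) ** 2 for m, d in zip(model, DATA))

def differential_evolution(cost, bounds, np_=20, f=0.7, cr=0.9, gens=150, seed=1):
    """Classic DE/rand/1/bin with clamping to box bounds."""
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(np_)]
    fit = [cost(p) for p in pop]
    for _ in range(gens):
        for i in range(np_):
            a, b, c = rng.sample([j for j in range(np_) if j != i], 3)
            jr = rng.randrange(dim)
            trial = []
            for j in range(dim):
                if j == jr or rng.random() < cr:
                    v = pop[a][j] + f * (pop[b][j] - pop[c][j])
                else:
                    v = pop[i][j]
                lo, hi = bounds[j]
                trial.append(min(max(v, lo), hi))
            ft = cost(trial)
            if ft <= fit[i]:
                pop[i], fit[i] = trial, ft
    ibest = min(range(np_), key=fit.__getitem__)
    return pop[ibest], fit[ibest]

best, err = differential_evolution(sse, [(0.1, 2.0), (1.0, 20.0)])
```

With noisy or partially observed data, as in the paper, the same loop applies; only the cost function changes.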
Metaheuristic Approaches for Hydropower System Scheduling
Directory of Open Access Journals (Sweden)
Ieda G. Hidalgo
2015-01-01
Full Text Available This paper deals with the short-term scheduling problem of hydropower systems. The objective is to meet the daily energy demand in an economic and safe way. The individuality of the generating units and the nonlinearity of their efficiency curves are taken into account. The mathematical model is formulated as a dynamic, mixed-integer, nonlinear, nonconvex, combinatorial, and multiobjective optimization problem. We propose two solution methods using metaheuristic approaches. They combine a Genetic Algorithm with the Strength Pareto Evolutionary Algorithm and Ant Colony Optimization. Both approaches are divided into two phases. In the first, to maximize the plant's net generation, the problem is solved for each hour of the day (static dispatch). In the second phase, to minimize switching the units on and off, the day is considered as a whole (dynamic dispatch). The proposed methodology is applied to two Brazilian hydroelectric plants, in cascade, that belong to the national interconnected system. The nondominated solutions from both approaches are presented. All of them meet demand while respecting the physical, electrical, and hydraulic constraints.
Advanced metaheuristic algorithms for laser optimization
International Nuclear Information System (INIS)
Tomizawa, H.
2010-01-01
A laser is one of the most important experimental tools. In the synchrotron radiation field, lasers are widely used for experiments with pump-probe techniques. Especially for X-ray FELs, a laser plays important roles as a seed light source or as the photo-cathode-illuminating light source that generates a high-brightness electron bunch. Control of laser pulse characteristics is required for many kinds of experiments. However, the laser must be tuned and customized for each requirement by laser experts. Automatic laser tuning therefore needs to be realized with sophisticated algorithms. Metaheuristic algorithms are useful candidates for finding solutions as close to the best as acceptable. A metaheuristic laser tuning system is expected to save human resources and time in laser preparation. I have shown successful results with a metaheuristic algorithm based on a genetic algorithm for optimizing spatial (transverse) laser profiles, and with a hill-climbing method extended with fuzzy set theory for automatically choosing one of the best laser alignments for each experimental requirement. (author)
Comparison of metaheuristic techniques to determine optimal placement of biomass power plants
International Nuclear Information System (INIS)
Reche-Lopez, P.; Ruiz-Reyes, N.; Garcia Galan, S.; Jurado, F.
2009-01-01
This paper deals with the application and comparison of several metaheuristic techniques to optimize the placement and supply area of biomass-fueled power plants. Both trajectory-based and population-based methods are applied to this goal. In particular, two well-known trajectory methods, Simulated Annealing (SA) and Tabu Search (TS), and two commonly used population-based methods, Genetic Algorithms (GA) and Particle Swarm Optimization (PSO), are considered here. In addition, a new binary PSO algorithm is proposed, which incorporates an inertia weight factor, like the classical continuous approach. The fitness function for the metaheuristics is the profitability index, defined as the ratio between the net present value and the initial investment. In this work, forest residues are considered as the biomass source, and the problem constraints are: the generation system must be located inside the supply area, and its maximum electric power is 5 MW. The comparative results obtained by all considered metaheuristics are discussed. A random walk has also been assessed for the problem we deal with.
Comparison of metaheuristic techniques to determine optimal placement of biomass power plants
Energy Technology Data Exchange (ETDEWEB)
Reche-Lopez, P.; Ruiz-Reyes, N.; Garcia Galan, S. [Telecommunication Engineering Department, University of Jaen Polytechnic School, C/ Alfonso X el Sabio 28, 23700 Linares, Jaen (Spain); Jurado, F. [Electrical Engineering Department, University of Jaen Polytechnic School, C/ Alfonso X el Sabio 28, 23700 Linares, Jaen (Spain)
2009-08-15
This paper deals with the application and comparison of several metaheuristic techniques to optimize the placement and supply area of biomass-fueled power plants. Both trajectory-based and population-based methods are applied to this goal. In particular, two well-known trajectory methods, Simulated Annealing (SA) and Tabu Search (TS), and two commonly used population-based methods, Genetic Algorithms (GA) and Particle Swarm Optimization (PSO), are considered here. In addition, a new binary PSO algorithm is proposed, which incorporates an inertia weight factor, like the classical continuous approach. The fitness function for the metaheuristics is the profitability index, defined as the ratio between the net present value and the initial investment. In this work, forest residues are considered as the biomass source, and the problem constraints are: the generation system must be located inside the supply area, and its maximum electric power is 5 MW. The comparative results obtained by all considered metaheuristics are discussed. A random walk has also been assessed for the problem we deal with. (author)
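A binary PSO of the kind proposed above (real-valued velocities with an inertia weight, mapped through a sigmoid to bit-flip probabilities) can be sketched as follows. The candidate-site profits and fixed cost per open plant are invented for illustration; the paper's actual fitness is the profitability index:

```python
import math
import random

def binary_pso(cost, dim, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5, seed=3):
    """Binary PSO: velocity update as in continuous PSO (with inertia weight w),
    then each bit is set to 1 with probability sigmoid(velocity)."""
    rng = random.Random(seed)
    xs = [[rng.randrange(2) for _ in range(dim)] for _ in range(n_particles)]
    vs = [[rng.uniform(-1, 1) for _ in range(dim)] for _ in range(n_particles)]
    pbest = [x[:] for x in xs]
    pcost = [cost(x) for x in xs]
    g = min(range(n_particles), key=pcost.__getitem__)
    gbest, gcost = pbest[g][:], pcost[g]
    for _ in range(iters):
        for i in range(n_particles):
            for j in range(dim):
                r1, r2 = rng.random(), rng.random()
                vs[i][j] = (w * vs[i][j]
                            + c1 * r1 * (pbest[i][j] - xs[i][j])
                            + c2 * r2 * (gbest[j] - xs[i][j]))
                # Sigmoid transfer function turns the velocity into a bit probability.
                xs[i][j] = 1 if rng.random() < 1 / (1 + math.exp(-vs[i][j])) else 0
            c = cost(xs[i])
            if c < pcost[i]:
                pbest[i], pcost[i] = xs[i][:], c
                if c < gcost:
                    gbest, gcost = xs[i][:], c
    return gbest, gcost

# Hypothetical siting toy: each bit opens a candidate plant location;
# the profit values and the 0.5 fixed cost per open plant are made up.
PROFIT = [4, -2, 7, 1, -5, 3, 6, -1]
def neg_profit(bits):
    return -sum(p for p, b in zip(PROFIT, bits) if b) + 0.5 * sum(bits)

sol, val = binary_pso(neg_profit, len(PROFIT))
```

The sign convention follows the usual minimization setup, so the profit is negated in the cost function.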
Solving Large Clustering Problems with Meta-Heuristic Search
DEFF Research Database (Denmark)
Turkensteen, Marcel; Andersen, Kim Allan; Bang-Jensen, Jørgen
In Clustering Problems, groups of similar subjects are to be retrieved from data sets. In this paper, Clustering Problems with the frequently used Minimum Sum-of-Squares Criterion are solved using meta-heuristic search. Tabu search has proved to be a successful methodology for solving optimization problems, but applications to large clustering problems are rare. The simulated annealing heuristic has mainly been applied to relatively small instances. In this paper, we implement tabu search and simulated annealing approaches and compare them to the commonly used k-means approach. We find that the meta-heuristic...
A hybrid metaheuristic DE/CS algorithm for UCAV three-dimension path planning.
Wang, Gaige; Guo, Lihong; Duan, Hong; Wang, Heqi; Liu, Luo; Shao, Mingzhen
2012-01-01
Three-dimension path planning for an uninhabited combat air vehicle (UCAV) is a complicated high-dimension optimization problem, which primarily centers on optimizing the flight route considering different kinds of constraints under complicated battlefield environments. A new hybrid metaheuristic differential evolution (DE) and cuckoo search (CS) algorithm is proposed to solve the UCAV three-dimension path planning problem. DE is applied to optimize the process of selecting cuckoos of the improved CS model during cuckoo updating in the nest. The cuckoos act as agents in searching for the optimal UCAV path; the UCAV can then find a safe path by connecting the chosen coordinate nodes while avoiding the threat areas and minimizing fuel cost. This new approach accelerates the global convergence speed while preserving the strong robustness of the basic CS. The realization procedure for this hybrid metaheuristic DE/CS approach is also presented. To make the optimized UCAV path more feasible, the B-Spline curve is adopted to smooth the path. To prove the performance of the proposed hybrid metaheuristic method, it is compared with the basic CS algorithm. The experiment shows that the proposed approach is more effective and feasible in UCAV three-dimension path planning than the basic CS model.
Water distribution systems design optimisation using metaheuristics and hyperheuristics
Directory of Open Access Journals (Sweden)
DN Raad
2011-06-01
Full Text Available The topic of multi-objective water distribution systems (WDS) design optimisation using metaheuristics is investigated, comparing numerous modern metaheuristics, including several multi-objective evolutionary algorithms, an estimation of distribution algorithm and a recent hyperheuristic named AMALGAM (an evolutionary framework for the simultaneous incorporation of multiple metaheuristics), in order to determine which approach is most capable with respect to WDS design optimisation. Novel metaheuristics and variants of existing algorithms are developed, for a total of twenty-three algorithms examined. Testing with respect to eight small-to-large-sized WDS benchmarks from the literature reveals that the four top-performing algorithms are mutually non-dominated with respect to the various performance metrics used. These algorithms are NSGA-II, TAMALGAMJndu, TAMALGAMndu and AMALGAMSndp (the last three being novel variants of AMALGAM). However, when these four algorithms are applied to the design of a very large real-world benchmark, the AMALGAM paradigm outperforms NSGA-II convincingly, with AMALGAMSndp exhibiting the best performance overall.
Water distribution systems design optimisation using metaheuristics ...
African Journals Online (AJOL)
The topic of multi-objective water distribution systems (WDS) design optimisation using metaheuristics is investigated, comparing numerous modern metaheuristics, including several multi-objective evolutionary algorithms, an estimation of distribution algorithm and a recent hyperheuristic named AMALGAM (an evolutionary ...
A well-scalable metaheuristic for the fleet size and mix vehicle routing problem with time windows
Bräysy, Olli; Porkka, Pasi P.; Dullaert, Wout; Repoussis, Panagiotis P.; Tarantilis, Christos D.
This paper presents an efficient and well-scalable metaheuristic for fleet size and mix vehicle routing with time windows. The suggested solution method combines the strengths of well-known threshold accepting and guided local search metaheuristics to guide a set of four local search heuristics. The
Metaheuristics progress as real problem solvers
Nonobe, Koji; Yagiura, Mutsunori
2005-01-01
Metaheuristics: Progress as Real Problem Solvers is a peer-reviewed volume of eighteen current, cutting-edge papers by leading researchers in the field. Included are an invited paper by F. Glover and G. Kochenberger, which discusses the concept of Metaheuristic agent processes, and a tutorial paper by M.G.C. Resende and C.C. Ribeiro discussing GRASP with path-relinking. Other papers discuss problem-solving approaches to timetabling, automated planograms, elevators, space allocation, shift design, cutting stock, flexible shop scheduling, colorectal cancer and cartography. A final group of methodology papers clarify various aspects of Metaheuristics from the computational view point.
Directory of Open Access Journals (Sweden)
N. Okati
2017-12-01
Full Text Available Node cooperation can protect wireless networks from eavesdropping by using the physical characteristics of wireless channels rather than cryptographic methods. Allocating the proper amount of power to cooperative nodes is a challenging task. In this paper, we use three cooperative nodes: one as a relay to increase throughput at the destination, and two friendly jammers to degrade the eavesdropper's link. For this scenario, the secrecy rate function is a non-linear, non-convex problem, so exact optimization methods can only achieve a suboptimal solution. We apply different meta-heuristic optimization techniques: Genetic Algorithm (GA), Particle Swarm Optimization (PSO), Bee Algorithm (BA), Tabu Search (TS), Simulated Annealing (SA) and Teaching-Learning-Based Optimization (TLBO). They are compared with each other to obtain a solution for power allocation in a wiretap wireless network. Although all these techniques find suboptimal solutions, they prove superior to exact optimization methods for this problem. Finally, we define a Figure of Merit (FOM) as a rule of thumb to determine the best meta-heuristic algorithm. This FOM considers the quality of the solution, the number of iterations required to converge, and the CPU time.
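Of the methods compared above, TLBO is simple enough to sketch compactly: a teacher phase pulls learners toward the best solution and away from the class mean, and a learner phase lets pairs of learners teach each other. The toy below minimizes a sphere function rather than the paper's problem-specific secrecy-rate objective:

```python
import random

def tlbo(cost, bounds, pop_size=20, gens=100, seed=5):
    """Basic Teaching-Learning-Based Optimization for box-constrained minimization."""
    rng = random.Random(seed)
    dim = len(bounds)
    clamp = lambda v, j: min(max(v, bounds[j][0]), bounds[j][1])
    pop = [[rng.uniform(*bounds[j]) for j in range(dim)] for _ in range(pop_size)]
    fit = [cost(p) for p in pop]
    for _ in range(gens):
        teacher = pop[min(range(pop_size), key=fit.__getitem__)]
        mean = [sum(p[j] for p in pop) / pop_size for j in range(dim)]
        for i in range(pop_size):
            # Teacher phase: move toward the teacher, away from the class mean.
            tf = rng.choice((1, 2))  # teaching factor
            cand = [clamp(pop[i][j] + rng.random() * (teacher[j] - tf * mean[j]), j)
                    for j in range(dim)]
            fc = cost(cand)
            if fc < fit[i]:
                pop[i], fit[i] = cand, fc
            # Learner phase: learn from a random classmate.
            k = rng.randrange(pop_size)
            while k == i:
                k = rng.randrange(pop_size)
            sign = 1 if fit[k] < fit[i] else -1
            cand = [clamp(pop[i][j] + rng.random() * sign * (pop[k][j] - pop[i][j]), j)
                    for j in range(dim)]
            fc = cost(cand)
            if fc < fit[i]:
                pop[i], fit[i] = cand, fc
    b = min(range(pop_size), key=fit.__getitem__)
    return pop[b], fit[b]

sphere = lambda x: sum(v * v for v in x)
best, val = tlbo(sphere, [(-5.0, 5.0)] * 4)
```

Unlike GA or PSO, TLBO has no algorithm-specific parameters beyond population size and generations, which is one reason it is often included in such comparisons.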
Metaheuristic algorithms for building Covering Arrays: A review
Directory of Open Access Journals (Sweden)
Jimena Adriana Timaná-Peña
2016-09-01
Full Text Available Covering Arrays (CA) are mathematical objects used in the functional testing of software components. They enable the testing of all interactions of a given size among the input parameters of a procedure, function, or logical unit in general, using the minimum number of test cases. Building CAs is a complex task (an NP-complete problem) that involves lengthy execution times and high computational loads. The most effective methods for building CAs are algebraic, greedy, and metaheuristic-based; the latter have reported the best results to date. This paper presents a description of the major contributions made by a selection of different metaheuristics, including simulated annealing, tabu search, genetic algorithms, ant colony algorithms, particle swarm algorithms, and harmony search algorithms. It is worth noting that simulated annealing-based algorithms have evolved as the most competitive, and currently form the state of the art.
Applied Formal Methods for Elections
DEFF Research Database (Denmark)
Wang, Jian
development time, or second dynamically, i.e. monitoring while an implementation is used during an election, or after the election is over, for forensic analysis. This thesis contains two chapters on this subject: the chapter Analyzing Implementations of Election Technologies describes a technique... process. The chapter Measuring Voter Lines describes an automated data collection method for measuring voters' waiting time, and discusses statistical models designed to provide an understanding of voter behavior in polling stations.
Directory of Open Access Journals (Sweden)
Hiwa Farughi
2017-01-01
Full Text Available Nowadays, enterprises should treat reducing supply chain risks as a crucial part of their activities in order to improve their competitiveness in the international context. Choosing a suitable strategy for assigning parts of the production process to outside organizations is a complex multi-criteria decision-making problem, and it becomes more complicated when supply chain risk factors are used as the selection criteria and the dependence and close ties between these criteria are also considered. In this paper, after identifying the risks in the supply chain of a medical equipment manufacturer, the dependence and ties between the criteria for choosing the best strategy among existing alternatives are examined with a combined ANP-ELECTRE method. This combined model performs well on the problem considered in this paper. However, given the complex and time-consuming nature of ANP and ELECTRE, a meta-heuristic algorithm called SIMANP is developed in this study that, despite its computational simplicity and high speed, performs well in terms of precision and efficiency. The results of comparing the SIMANP algorithm and the proposed ANP-ELECTRE method are presented at the end.
Applied Formal Methods for Elections
DEFF Research Database (Denmark)
Wang, Jian
Information technology is changing the way elections are organized. Technology renders the electoral process more efficient, but things could also go wrong: voting software is complex, consisting of thousands of lines of code, which makes it error-prone. Technical problems may cause delays... bounded model-checking and satisfiability modulo theories (SMT) solvers can be used to check these criteria. Voter Experience: Technology profoundly affects the voter experience. These effects need to be measured and the data should be used to make decisions regarding the implementation of the electoral... at polling stations, or even delay the announcement of the final result. This thesis describes a set of methods to be used, for example, by system developers, administrators, or decision makers to examine election technologies, social choice algorithms and voter experience. Technology: Verifiability refers...
Metaheuristic analysis in reverse logistics of waste
Energy Technology Data Exchange (ETDEWEB)
Serrano Elena, A.
2016-07-01
This paper focuses on the use of metaheuristic search techniques on a dynamic, deterministic model to analyze and solve cost-optimization and location problems in reverse logistics, within the field of municipal waste management in Málaga (Spain). In this work we have selected two metaheuristic techniques of relevance in current research to test the validity of the proposed approach: the Genetic Algorithm (GA), important for its international presence, and Particle Swarm Optimization (PSO), an interesting technique that works with swarm intelligence. These metaheuristic techniques are used to solve cost-optimization and location problems for MSW recovery facilities (transfer centers and treatment plants). (Author)
Generalized Response Surface Methodology: A New Metaheuristic
Kleijnen, J.P.C.
2006-01-01
Generalized Response Surface Methodology (GRSM) is a novel general-purpose metaheuristic based on Box and Wilson's Response Surface Methodology (RSM). Both GRSM and RSM estimate local gradients to search for the optimal solution. These gradients use local first-order polynomials. GRSM, however, uses
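The core RSM step described above, fitting a local first-order polynomial and following its gradient, can be sketched as follows. The quadratic test function, the design radius `delta`, and the step size are illustrative choices, not values from GRSM:

```python
import itertools

def local_gradient(f, center, delta):
    """Estimate the gradient from a two-level full factorial design around `center`.
    With coded +/-1 levels, the least-squares slope of a first-order polynomial
    for factor j reduces to a simple contrast of the responses."""
    dim = len(center)
    pts = list(itertools.product((-1.0, 1.0), repeat=dim))
    ys = [f([c + s * delta for c, s in zip(center, pt)]) for pt in pts]
    grad = []
    for j in range(dim):
        contrast = sum(s[j] * y for s, y in zip(pts, ys)) / len(pts)
        grad.append(contrast / delta)  # de-code: slope per unit of the raw variable
    return grad

def rsm_descent(f, x0, delta=0.2, step=0.1, iters=60):
    """Repeatedly fit a local first-order model and take a steepest-descent step."""
    x = list(x0)
    for _ in range(iters):
        g = local_gradient(f, x, delta)
        x = [xi - step * gi for xi, gi in zip(x, g)]
    return x

quad = lambda x: (x[0] - 1.0) ** 2 + 2.0 * (x[1] + 0.5) ** 2
xopt = rsm_descent(quad, [3.0, 3.0])
```

In practice, RSM responses come from noisy experiments or simulations, so the design would be replicated and a line search used along the estimated steepest-descent direction rather than a fixed step.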
Directory of Open Access Journals (Sweden)
Vimal J. Savsani
2017-04-01
The static and dynamic responses in truss topology optimization (TTO) problems are challenging due to the search space, which is implicit, non-convex, non-linear, and often leads to divergence. Modified meta-heuristics are effective optimization methods for handling such problems in practice. In this paper, modified versions of Teaching–Learning-Based Optimization (TLBO), Heat Transfer Search (HTS), Water Wave Optimization (WWO), and Passing Vehicle Search (PVS) are proposed by integrating a random mutation-based search technique with them. This paper compares the performance of the four modified and four basic meta-heuristics in solving discrete TTO problems.
[Montessori method applied to dementia - literature review].
Brandão, Daniela Filipa Soares; Martín, José Ignacio
2012-06-01
The Montessori method was initially applied to children, but it has now also been applied to people with dementia. The purpose of this study is to systematically review the research on the effectiveness of this method using the Medical Literature Analysis and Retrieval System Online (Medline) with the keywords dementia and Montessori method. We selected 10 studies, in which there were significant improvements in participation and constructive engagement, and reductions in negative affect and passive engagement. Nevertheless, systematic reviews of this non-pharmacological intervention in dementia rate the method as weak in terms of effectiveness. This apparent discrepancy can be explained because the Montessori method may, in fact, have only a small influence on dimensions such as behavioral problems, or because there is no research on this method with high levels of control, such as the presence of several control groups or a double-blind design.
Geostatistical methods applied to field model residuals
DEFF Research Database (Denmark)
Maule, Fox; Mosegaard, K.; Olsen, Nils
consists of measurement errors and unmodelled signal), and is typically assumed to be uncorrelated and Gaussian distributed. We have applied geostatistical methods to analyse the residuals of the Oersted(09d/04) field model [http://www.dsri.dk/Oersted/Field_models/IGRF_2005_candidates/], which is based...
Directory of Open Access Journals (Sweden)
Ghanshyam G. Tejani
2018-04-01
Full Text Available In this study, simultaneous size, shape, and topology optimization of planar and space trusses is investigated. Moreover, the trusses are subjected to constraints on element stresses, nodal displacements, and kinematic stability conditions. Truss Topology Optimization (TTO) removes the superfluous elements and nodes from the ground structure. In this method, difficulties arise due to unacceptable and singular topologies; therefore, Grubler's criterion and positive definiteness are used to handle such issues. Moreover, TTO is challenging due to its search space, which is implicit, non-convex, non-linear, and often leads to divergence. Therefore, mutation-based metaheuristics are proposed to investigate them. This study compares the performance of four improved metaheuristics (viz. Improved Teaching–Learning-Based Optimization (ITLBO), Improved Heat Transfer Search (IHTS), Improved Water Wave Optimization (IWWO), and Improved Passing Vehicle Search (IPVS)) and four basic metaheuristics (viz. TLBO, HTS, WWO, and PVS) in order to solve structural optimization problems. Keywords: Structural optimization, Mutation operator, Improved metaheuristics, Modified algorithms, Truss topology optimization
METAHEURISTICS EVALUATION: A PROPOSAL FOR A MULTICRITERIA METHODOLOGY
Directory of Open Access Journals (Sweden)
Valdir Agustinho de Melo
2015-12-01
Full Text Available In this work we propose a multicriteria evaluation scheme for heuristic algorithms based on the classic Condorcet ranking technique. Weights are associated with the ranking of an algorithm within a set being compared. We used five criteria and a function on the set of natural numbers to create a ranking. The comparison discussed involves three well-known problems of combinatorial optimization: the Traveling Salesperson Problem (TSP), the Capacitated Vehicle Routing Problem (CVRP) and the Quadratic Assignment Problem (QAP). The tested instances came from public libraries. Each algorithm was used with essentially the same structure, the same local search was applied, and the initial solutions were built similarly. It is important to note that the work does not make proposals involving the algorithms themselves: the results for the three problems are shown only to illustrate the operation of the evaluation technique. Four metaheuristics - GRASP, Tabu Search, ILS and VNS - are therefore used only for the comparisons.
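A Condorcet-style tally of the kind described above can be sketched as a Copeland count (pairwise majority wins minus losses) over per-criterion rankings. The rankings below are invented for illustration and do not reproduce the paper's results or its exact weighting function:

```python
from itertools import combinations

def condorcet_ranking(rankings):
    """rankings: one list per criterion, each listing algorithm names from best
    to worst. Returns the algorithms sorted by Copeland score, a common
    Condorcet-style aggregate."""
    algs = rankings[0]
    wins = {a: 0 for a in algs}
    for a, b in combinations(algs, 2):
        # Count on how many criteria a is ranked above b.
        a_pref = sum(1 for r in rankings if r.index(a) < r.index(b))
        b_pref = len(rankings) - a_pref
        if a_pref > b_pref:
            wins[a] += 1
            wins[b] -= 1
        elif b_pref > a_pref:
            wins[b] += 1
            wins[a] -= 1
    return sorted(algs, key=lambda a: -wins[a])

# Made-up rankings on five criteria for four metaheuristics.
criteria = [
    ["GRASP", "VNS", "ILS", "Tabu"],
    ["VNS", "GRASP", "Tabu", "ILS"],
    ["GRASP", "ILS", "VNS", "Tabu"],
    ["ILS", "GRASP", "VNS", "Tabu"],
    ["GRASP", "Tabu", "VNS", "ILS"],
]
order = condorcet_ranking(criteria)  # → ["GRASP", "VNS", "ILS", "Tabu"]
```

An algorithm that beats every other on a majority of criteria (a Condorcet winner, GRASP here) always ends up first under this tally.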
Metaheuristics for Engineering and Architectural Design of Hospitals
DEFF Research Database (Denmark)
Holst, Malene Kirstine Østergaard; Kirkegaard, Poul Henning
2014-01-01
This paper presents an approach for optimized hospital layout design based on metaheuristics. Through the use of metaheuristics the hospital functionalities are decomposed into geometric units. The units define the baseline for the design of the hospital, as the units are based on correlations of...
Putting Continuous Metaheuristics to Work in Binary Search Spaces
Directory of Open Access Journals (Sweden)
Broderick Crawford
2017-01-01
Full Text Available In the real world, there are a number of optimization problems whose search space is restricted to take binary values; however, there are many continuous metaheuristics with good results in continuous search spaces. These algorithms must be adapted to solve binary problems. This paper surveys articles focused on the binarization of metaheuristics designed for continuous optimization.
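A minimal sketch of the binarization scheme most commonly covered in such surveys: an S-shaped (sigmoid) transfer function maps each continuous component of a solution to a probability, from which a bit is sampled. Names and values are illustrative only.

```python
import math
import random

def sigmoid(x):
    """S-shaped transfer function (the 'S1' family in the
    binarization literature)."""
    return 1.0 / (1.0 + math.exp(-x))

def binarize(continuous_solution, rng=random.random):
    """Map each continuous component to {0, 1}: the larger the
    component, the more likely the corresponding bit is 1."""
    return [1 if rng() < sigmoid(x) else 0 for x in continuous_solution]

random.seed(0)
bits = binarize([-6.0, 0.0, 6.0])
# Large negative components are almost surely mapped to 0;
# large positive components, almost surely to 1.
```

V-shaped transfer functions and discretization by rounding are the other common families; the choice of transfer function is itself a tunable design decision.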
Metaheuristics in water, geotechnical and transport engineering
Yang, Xin-She; Talatahari, Siamak; Alavi, Amir Hossein
2013-01-01
Due to an ever-decreasing supply in raw materials and stringent constraints on conventional energy sources, demand for lightweight, efficient and low cost structures has become crucially important in modern engineering design. This requires engineers to search for optimal and robust design options to address design problems that are often large in scale and highly nonlinear, making finding solutions challenging. In the past two decades, metaheuristic algorithms have shown promising power, efficiency and versatility in solving these difficult optimization problems. This book examines the la
Applied mathematical methods in nuclear thermal hydraulics
International Nuclear Information System (INIS)
Ransom, V.H.; Trapp, J.A.
1983-01-01
Applied mathematical methods are used extensively in modeling of nuclear reactor thermal-hydraulic behavior. This application has required significant extension to the state-of-the-art. The problems encountered in modeling of two-phase fluid transients and the development of associated numerical solution methods are reviewed and quantified using results from a numerical study of an analogous linear system of differential equations. In particular, some possible approaches for formulating a well-posed numerical problem for an ill-posed differential model are investigated and discussed. The need for closer attention to numerical fidelity is indicated
Entropy viscosity method applied to Euler equations
International Nuclear Information System (INIS)
Delchini, M. O.; Ragusa, J. C.; Berry, R. A.
2013-01-01
The entropy viscosity method [4] has been successfully applied to hyperbolic systems of equations such as the Burgers equation and the Euler equations. The method consists of adding dissipative terms to the governing equations, where a viscosity coefficient modulates the amount of dissipation. The entropy viscosity method has been applied to the 1-D Euler equations with variable area using a continuous finite element discretization in the MOOSE framework, and our results show that it can efficiently smooth out oscillations and accurately resolve shocks. Two equations of state are considered: the Ideal Gas and Stiffened Gas equations of state. Results are provided for a second-order implicit time scheme (BDF2). Some typical Riemann problems are run with the entropy viscosity method to demonstrate some of its features. Then, a 1-D convergent-divergent nozzle is considered with open boundary conditions. The correct steady state is reached for the liquid and gas phases with a time implicit scheme. The entropy viscosity method behaves correctly in every problem run. For each test problem, results are shown for both equations of state considered here. (authors)
Analytical methods applied to water pollution
International Nuclear Information System (INIS)
Baudin, G.
1977-01-01
A comparison of different methods applied to water analysis is given. The discussion is limited to the problems presented by inorganic elements, accessible to nuclear activation analysis methods. The following methods were compared: activation analysis: with gamma-ray spectrometry, atomic absorption spectrometry, fluorimetry, emission spectrometry, colorimetry or spectrophotometry, X-ray fluorescence, mass spectrometry, voltametry, polarography or other electrochemical methods, activation analysis-beta measurements. Drinking-water, irrigation waters, sea waters, industrial wastes and very pure waters are the subjects of the investigations. The comparative evaluation is made on the basis of storage of samples, in situ analysis, treatment and concentration, specificity and interference, monoelement or multielement analysis, analysis time and accuracy. The significance of the neutron analysis is shown. (T.G.)
Protein structure prediction using bee colony optimization metaheuristic
DEFF Research Database (Denmark)
Fonseca, Rasmus; Paluszewski, Martin; Winter, Pawel
2010-01-01
of the protein's structure, an energy potential and some optimization algorithm that finds the structure with minimal energy. Bee Colony Optimization (BCO) is a relatively new approach to solving optimization problems based on the foraging behaviour of bees. Several variants of BCO have been suggested......Predicting the native structure of proteins is one of the most challenging problems in molecular biology. The goal is to determine the three-dimensional structure from the one-dimensional amino acid sequence. De novo prediction algorithms seek to do this by developing a representation...... our BCO method to generate good solutions to the protein structure prediction problem. The results show that BCO generally finds better solutions than simulated annealing, which so far has been the metaheuristic of choice for this problem....
Metaheuristic simulation optimisation for the stochastic multi-retailer supply chain
Omar, Marina; Mustaffa, Noorfa Haszlinna H.; Othman, Siti Norsyahida
2013-04-01
Supply Chain Management (SCM) is an important activity in all producing facilities and in many organizations, enabling vendors, manufacturers and suppliers to interact gainfully and to plan the flow of goods and services optimally. Simulation optimization is now widely used in research for finding the best solutions in SCM decision-making processes, which generally face considerable complexity from large sources of uncertainty and various decision factors. Metaheuristic methods are the most popular simulation optimization approach. However, very few studies have applied this approach to optimizing simulation models of supply chains. Thus, this paper evaluates the performance of a metaheuristic method for stochastic supply chains in determining the flexible inventory replenishment parameters that minimize the total operating cost. The simulation optimization model is based on the Bees Algorithm (BA), which has been widely applied in engineering applications such as training neural networks for pattern recognition. BA is a recent member of the metaheuristics family that models the natural food-foraging behaviour of honey bees. Honey bees use several mechanisms, such as the waggle dance, to locate food sources optimally and to search for new ones, which makes them a good candidate metaphor for new optimization algorithms. The model considers an outbound centralised distribution system consisting of one supplier and three identical retailers; demand is assumed to be independent and identically distributed, with unlimited supply capacity at the supplier.
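The scout/recruit structure of the Bees Algorithm referred to above can be sketched in a few lines. This is a bare-bones version on a toy one-dimensional cost function; all parameter values are illustrative, not those of the paper's supply-chain model.

```python
import random

def bees_algorithm(cost, lo, hi, n_scouts=20, n_best=5, n_recruits=10,
                   patch=0.1, iters=50, seed=1):
    """Minimize cost over [lo, hi] with a basic Bees Algorithm loop."""
    rng = random.Random(seed)
    scouts = [rng.uniform(lo, hi) for _ in range(n_scouts)]
    for _ in range(iters):
        scouts.sort(key=cost)
        new_population = []
        # recruit foragers around the best sites (local search)
        for site in scouts[:n_best]:
            foragers = [min(max(site + rng.uniform(-patch, patch), lo), hi)
                        for _ in range(n_recruits)]
            new_population.append(min(foragers + [site], key=cost))
        # the remaining bees keep scouting at random (global search)
        new_population += [rng.uniform(lo, hi)
                           for _ in range(n_scouts - n_best)]
        scouts = new_population
    return min(scouts, key=cost)

# Toy usage: minimize a quadratic with optimum at x = 2.
best = bees_algorithm(lambda x: (x - 2.0) ** 2, -10.0, 10.0)
```

Real applications such as the inventory model above replace the toy cost with a stochastic simulation run, and typically shrink the patch size over iterations.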
A study of metaheuristic algorithms for high dimensional feature selection on microarray data
Dankolo, Muhammad Nasiru; Radzi, Nor Haizan Mohamed; Sallehuddin, Roselina; Mustaffa, Noorfa Haszlinna
2017-11-01
Microarray systems enable experts to examine gene profiles at the molecular level using machine learning algorithms, increasing the potential for classification and diagnosis of many diseases at the gene expression level. However, numerous difficulties may affect the efficiency of machine learning algorithms, including the vast number of gene features contained in the original data, many of which may be unrelated to the intended analysis. Therefore, feature selection is necessary during data pre-processing. Many feature selection algorithms have been developed and applied to microarray data, including metaheuristic optimization algorithms. This paper discusses the application of metaheuristic algorithms for feature selection in microarray datasets. This study reveals that the algorithms yield interesting results with limited resources, thereby saving the computational expense of machine learning algorithms.
a new meta-heuristic optimization algorithm
Indian Academy of Sciences (India)
N Archana
programming obtain optimal solution to the problem by rigorous methods supplemented by gradient information. Classical methods are good for solving problems with only ... ronment for their survival and apply the concepts in finding.
Search and optimization by metaheuristics techniques and algorithms inspired by nature
Du, Ke-Lin
2016-01-01
This textbook provides a comprehensive introduction to nature-inspired metaheuristic methods for search and optimization, including the latest trends in evolutionary algorithms and other forms of natural computing. Over 100 different types of these methods are discussed in detail. The authors emphasize non-standard optimization problems and utilize a natural approach to the topic, moving from basic notions to more complex ones. An introductory chapter covers the necessary biological and mathematical backgrounds for understanding the main material. Subsequent chapters then explore almost all of the major metaheuristics for search and optimization created based on natural phenomena, including simulated annealing, recurrent neural networks, genetic algorithms and genetic programming, differential evolution, memetic algorithms, particle swarm optimization, artificial immune systems, ant colony optimization, tabu search and scatter search, bee and bacteria foraging algorithms, harmony search, biomolecular computin...
Applications of metaheuristic optimization algorithms in civil engineering
Kaveh, A
2017-01-01
The book presents recently developed efficient metaheuristic optimization algorithms and their applications for solving various optimization problems in civil engineering. The concepts can also be used for optimizing problems in mechanical and electrical engineering.
Neural model of gene regulatory network: a survey on supportive meta-heuristics.
Biswas, Surama; Acharyya, Sriyankar
2016-06-01
Gene regulatory network (GRN) is produced as a result of regulatory interactions between different genes through their coded proteins in a cellular context. Being of immense importance in disease detection and drug discovery, GRNs have been modelled through various mathematical and computational schemes and reported in survey articles. Neural and neuro-fuzzy models have been a focus of attention in bioinformatics, and the predominant use of meta-heuristic algorithms in training neural models has proved its excellence. Considering these facts, this paper surveys neural modelling schemes of GRNs and the efficacy of meta-heuristic algorithms for parameter learning (i.e. weighting connections) within the model. The survey covers two different structure-related approaches to inferring GRNs, the global structure approach and the substructure approach. It also describes two neural modelling schemes: artificial neural network/recurrent neural network based modelling and neuro-fuzzy modelling. The meta-heuristic algorithms applied so far to learn the structure and parameters of neurally modelled GRNs are reviewed here.
Two parameter-tuned metaheuristic algorithms for the multi-level lot sizing and scheduling problem
Directory of Open Access Journals (Sweden)
S.M.T. Fatemi Ghomi
2012-10-01
Full Text Available This paper addresses the lot sizing and scheduling problem for n products and m machines in a flow shop environment where setups between machines are sequence-dependent and can be carried over. Many products must be produced under capacity constraints, with backorders allowed. Since lot sizing and scheduling problems are well known to be strongly NP-hard, much attention has been given to heuristic and metaheuristic methods. This paper presents two metaheuristic algorithms, namely a Genetic Algorithm (GA) and an Imperialist Competitive Algorithm (ICA). Moreover, the Taguchi robust design methodology is employed to calibrate the parameters of the algorithms for problems of different sizes. In addition, the parameter-tuned algorithms are compared against a presented lower bound on randomly generated problems. Comprehensive numerical examples demonstrate the effectiveness of the proposed algorithms. The results show that the performance of both GA and ICA is very promising and that ICA statistically outperforms GA.
Applied Mathematical Methods in Theoretical Physics
Masujima, Michio
2005-04-01
All there is to know about functional analysis, integral equations and calculus of variations in a single volume. This advanced textbook is divided into two parts: the first on integral equations and the second on the calculus of variations. It begins with a short introduction to functional analysis, including a short review of complex analysis, before continuing with a systematic discussion of different types of equations, such as Volterra integral equations, singular integral equations of Cauchy type, and integral equations of the Fredholm type, with a special emphasis on Wiener-Hopf integral equations and Wiener-Hopf sum equations. After a few remarks on the historical development, the second part starts with an introduction to the calculus of variations and the relationship between integral equations and applications of the calculus of variations. It further covers applications of the calculus of variations developed in the second half of the 20th century in the fields of quantum mechanics, quantum statistical mechanics and quantum field theory. Throughout the book, the author presents over 150 problems and exercises -- many from such branches of physics as quantum mechanics, quantum statistical mechanics, and quantum field theory -- together with outlines of the solutions in each case. Detailed solutions are given, supplementing the materials discussed in the main text, allowing problems to be solved making direct use of the method illustrated. The original references are given for difficult problems. The result is complete coverage of the mathematical tools and techniques used by physicists and applied mathematicians. Intended for senior undergraduates and first-year graduates in science and engineering, this is equally useful as a reference and self-study guide.
Applying scrum methods to ITS projects.
2017-08-01
The introduction of new technology generally brings new challenges and new methods to help with deployments. Agile methodologies have been introduced in the information technology industry to potentially speed up development. The Federal Highway Admi...
Applying Fuzzy Possibilistic Methods on Critical Objects
DEFF Research Database (Denmark)
Yazdani, Hossein; Ortiz-Arroyo, Daniel; Choros, Kazimierz
2016-01-01
Providing a flexible environment to process data objects is a desirable goal of machine learning algorithms. In fuzzy and possibilistic methods, the relevance of data objects is evaluated and a membership degree is assigned. However, some critical objects have the potential to affect...... the performance of the clustering algorithms if they remain in a specific cluster or are moved into another. In this paper we analyze and compare how critical objects affect the behaviour of fuzzy possibilistic methods in several data sets. The comparison is based on the accuracy and ability of learning...... methods to provide a proper searching space for data objects. The membership function used by each method when dealing with critical objects is also evaluated. Our results show that relaxing the conditions of participation for data objects in as many partitions as they can is beneficial....
Quality assurance and applied statistics. Method 3
International Nuclear Information System (INIS)
1992-01-01
This German-Industry-Standards-paperback contains the International Standards from the Series ISO 9000 (or, as the case may be, the European Standards from the Series EN 29000) concerning quality assurance and including the already completed supplementary guidelines with ISO 9000- and ISO 9004-section numbers, which have been adopted as German Industry Standards and which are observed and applied world-wide to a great extent. It also includes the German-Industry-Standards ISO 10011 parts 1, 2 and 3 concerning the auditing of quality-assurance systems and the German-Industry-Standard ISO 10012 part 1 concerning quality-assurance demands (confirmation system) for measuring devices. The standards also include English and French versions. They are applicable independent of the user's line of industry and thus constitute basic standards. (orig.) [de
Multi-objective optimization in computer networks using metaheuristics
Donoso, Yezid
2007-01-01
Metaheuristics are widely used to solve important practical combinatorial optimization problems. Many new multicast applications emerging from the Internet-such as TV over the Internet, radio over the Internet, and multipoint video streaming-require reduced bandwidth consumption, end-to-end delay, and packet loss ratio. It is necessary to design and to provide for these kinds of applications as well as for those resources necessary for functionality. Multi-Objective Optimization in Computer Networks Using Metaheuristics provides a solution to the multi-objective problem in routing computer networks. It analyzes layer 3 (IP), layer 2 (MPLS), and layer 1 (GMPLS and wireless functions). In particular, it assesses basic optimization concepts, as well as several techniques and algorithms for the search of minimals; examines the basic multi-objective optimization concepts and the way to solve them through traditional techniques and through several metaheuristics; and demonstrates how to analytically model the compu...
METAHEURISTICS FOR OPTIMIZING SAFETY STOCK IN MULTI STAGE INVENTORY SYSTEM
Directory of Open Access Journals (Sweden)
Gordan Badurina
2013-02-01
Full Text Available Managing the right level of inventory is critical to achieving the targeted level of customer service, but inventory also carries significant cost in the supply chain. In most cases companies define safety stock at the most downstream level, i.e. the finished product level, using various analytical methods. Safety stock at upstream levels, however, usually covers only those problems which companies face at that particular level (uncertainty of delivery, issues in production, etc.). This paper looks into optimizing safety stock in a pharmaceutical supply chain, considering a three-stage inventory system. The problem is defined as a single-criterion mixed integer programming problem. The objective is to minimize inventory cost while the service level is predetermined. In order to coordinate inventories at all echelons, a variable representing the so-called service time is introduced. Because of the problem dimensions, metaheuristics based on a genetic algorithm and simulated annealing are constructed and compared, using real data from a Croatian pharmaceutical company. The computational results are presented, evidencing improvements in minimizing inventory costs.
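For orientation, the simulated annealing component compared in such studies follows a standard accept/reject loop with a cooling schedule. The sketch below uses a toy one-dimensional cost; the real model optimizes integer service times under service-level constraints, which is not reproduced here.

```python
import math
import random

def simulated_annealing(cost, x0, neighbor, t0=10.0, cooling=0.95,
                        iters=500, seed=7):
    """Minimize cost starting from x0, accepting worse moves with
    probability exp(-delta/t) under a geometric cooling schedule."""
    rng = random.Random(seed)
    x, t = x0, t0
    best = x
    for _ in range(iters):
        y = neighbor(x, rng)
        delta = cost(y) - cost(x)
        # always accept improvements; accept worsening moves with
        # a probability that shrinks as the temperature t decays
        if delta <= 0 or rng.random() < math.exp(-delta / t):
            x = y
        if cost(x) < cost(best):
            best = x
        t *= cooling
    return best

# Toy usage: minimize a quadratic with optimum at x = 3.
best = simulated_annealing(
    cost=lambda x: (x - 3.0) ** 2,
    x0=0.0,
    neighbor=lambda x, rng: x + rng.uniform(-0.5, 0.5),
)
```

In an inventory setting the neighbor move would perturb one echelon's service time, and infeasible moves (violated service levels) would be rejected or penalized in the cost.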
Lavine method applied to three body problems
International Nuclear Information System (INIS)
Mourre, Eric.
1975-09-01
The methods presently proposed for the three-body problem in quantum mechanics, using the Faddeev approach for proving asymptotic completeness, come up against new singularities when the two-particle interaction potentials v(α)(x(α)) decay less rapidly than |x(α)|^(-2), and also when one attempts to solve the problem in a representation space whose dimension per particle is lower than three. A method is given that allows the mathematical approach to be extended to the three-body problem in spite of these singularities. Applications are given [fr
Applying Human Computation Methods to Information Science
Harris, Christopher Glenn
2013-01-01
Human Computation methods such as crowdsourcing and games with a purpose (GWAP) have each recently drawn considerable attention for their ability to synergize the strengths of people and technology to accomplish tasks that are challenging for either to do well alone. Despite this increased attention, much of this transformation has been focused on…
Applying Mixed Methods Techniques in Strategic Planning
Voorhees, Richard A.
2008-01-01
In its most basic form, strategic planning is a process of anticipating change, identifying new opportunities, and executing strategy. The use of mixed methods, blending quantitative and qualitative analytical techniques and data, in the process of assembling a strategic plan can help to ensure a successful outcome. In this article, the author…
[The diagnostic methods applied in mycology].
Kurnatowska, Alicja; Kurnatowski, Piotr
2008-01-01
The systemic fungal invasions are recognized with increasing frequency and constitute a primary cause of morbidity and mortality, especially in immunocompromised patients. Early diagnosis improves prognosis, but remains a problem because there is lack of sensitive tests to aid in the diagnosis of systemic mycoses on the one hand, and on the other the patients only present unspecific signs and symptoms, thus delaying early diagnosis. The diagnosis depends upon a combination of clinical observation and laboratory investigation. The successful laboratory diagnosis of fungal infection depends in major part on the collection of appropriate clinical specimens for investigations and on the selection of appropriate microbiological test procedures. So these problems (collection of specimens, direct techniques, staining methods, cultures on different media and non-culture-based methods) are presented in article.
Monte Carlo method applied to medical physics
International Nuclear Information System (INIS)
Oliveira, C.; Goncalves, I.F.; Chaves, A.; Lopes, M.C.; Teixeira, N.; Matos, B.; Goncalves, I.C.; Ramalho, A.; Salgado, J.
2000-01-01
The main application of the Monte Carlo method to medical physics is dose calculation. This paper shows some results of two dose calculation studies and two other applications: optimisation of a neutron field for Boron Neutron Capture Therapy and optimisation of a filter for a beam tube for several purposes. The computation time necessary for Monte Carlo calculations - the main barrier to their intensive use - is being overcome by faster and cheaper computers. (author)
Proteomics methods applied to malaria: Plasmodium falciparum
International Nuclear Information System (INIS)
Cuesta Astroz, Yesid; Segura Latorre, Cesar
2012-01-01
Malaria is a parasitic disease with a high impact on public health in developing countries. The sequencing of the Plasmodium falciparum genome and the development of proteomics have enabled a breakthrough in understanding the biology of the parasite. Proteomics has made it possible to characterize the parasite's protein expression qualitatively and quantitatively, and has provided information on protein expression under conditions of stress induced by antimalarials. Given the complexity of the parasite's life cycle, which takes place in both the vertebrate host and the mosquito vector, it has proven difficult to characterize protein expression during each stage of the infection process in order to determine the proteome that mediates its metabolic, physiological and energetic processes. Two-dimensional electrophoresis, liquid chromatography and mass spectrometry have been useful to assess the effects of antimalarials on parasite protein expression and to characterize the proteomic profiles of different P. falciparum stages and organelles. The purpose of this review is to present state-of-the-art tools and advances in proteomics applied to the study of malaria, and to present the different experimental strategies used to study the parasite's proteome, showing the advantages and disadvantages of each one.
METHOD OF APPLYING NICKEL COATINGS ON URANIUM
Gray, A.G.
1959-07-14
A method is presented for protectively coating uranium which comprises etching the uranium in an aqueous etching solution containing chloride ions, electroplating a coating of nickel on the etched uranium and heating the nickel plated uranium by immersion thereof in a molten bath composed of a material selected from the group consisting of sodium chloride, potassium chloride, lithium chloride, and mixtures thereof, maintained at a temperature of between 700 and 800 deg C, for a time sufficient to alloy the nickel and uranium and form an integral protective coating of corrosion-resistant uranium-nickel alloy.
Versatile Formal Methods Applied to Quantum Information.
Energy Technology Data Exchange (ETDEWEB)
Witzel, Wayne [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States); Rudinger, Kenneth Michael [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States); Sarovar, Mohan [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States)
2015-11-01
Using a novel formal methods approach, we have generated computer-verified proofs of major theorems pertinent to the quantum phase estimation algorithm. This was accomplished using our Prove-It software package in Python. While many formal methods tools are available, their practical utility is limited. Translating a problem of interest into these systems and working through the steps of a proof is an art form that requires much expertise. One must surrender to the preferences and restrictions of the tool regarding how mathematical notions are expressed and what deductions are allowed. Automation is a major driver that forces restrictions. Our focus, on the other hand, is to produce a tool that allows users the ability to confirm proofs that are essentially known already. This goal is valuable in itself. We demonstrate the viability of our approach that allows the user great flexibility in expressing statements and composing derivations. There were no major obstacles in following a textbook proof of the quantum phase estimation algorithm. There were tedious details of algebraic manipulations that we needed to implement (and a few that we did not have time to enter into our system) and some basic components that we needed to rethink, but there were no serious roadblocks. In the process, we made a number of convenient additions to our Prove-It package that will make certain algebraic manipulations easier to perform in the future. In fact, our intent is for our system to build upon itself in this manner.
Optimization methods applied to hybrid vehicle design
Donoghue, J. F.; Burghart, J. H.
1983-01-01
The use of optimization methods as an effective design tool in the design of hybrid vehicle propulsion systems is demonstrated. Optimization techniques were used to select values for three design parameters (battery weight, heat engine power rating and power split between the two on-board energy sources) such that various measures of vehicle performance (acquisition cost, life cycle cost and petroleum consumption) were optimized. The approach produced designs which were often significant improvements over hybrid designs already reported on in the literature. The principal conclusions are as follows. First, it was found that the strategy used to split the required power between the two on-board energy sources can have a significant effect on life cycle cost and petroleum consumption. Second, the optimization program should be constructed so that performance measures and design variables can be easily changed. Third, the vehicle simulation program has a significant effect on the computer run time of the overall optimization program; run time can be significantly reduced by proper design of the types of trips the vehicle takes in a one year period. Fourth, care must be taken in designing the cost and constraint expressions which are used in the optimization so that they are relatively smooth functions of the design variables. Fifth, proper handling of constraints on battery weight and heat engine rating, variables which must be large enough to meet power demands, is particularly important for the success of an optimization study. Finally, the principal conclusion is that optimization methods provide a practical tool for carrying out the design of a hybrid vehicle propulsion system.
Directory of Open Access Journals (Sweden)
Sajad Sabzi
2018-03-01
Full Text Available Accurate classification of fruit varieties in processing factories and during post-harvest applications is a challenge that has been widely studied. This paper presents a novel approach to automatic fruit identification applied to three common varieties of oranges (Citrus sinensis L.), namely Bam, Payvandi and Thomson. A total of 300 color images were used for the experiments, 100 samples for each orange variety, which are publicly available. After segmentation, 263 parameters, including texture, color and shape features, were extracted from each sample using image processing. Among them, the 6 most effective features were automatically selected using a hybrid approach consisting of an artificial neural network and particle swarm optimization algorithm (ANN-PSO). Then, three different classifiers were applied and compared: hybrid artificial neural network - artificial bee colony (ANN-ABC); hybrid artificial neural network - harmony search (ANN-HS); and k-nearest neighbors (kNN). The experimental results show that the hybrid approaches outperform kNN. The average correct classification rate of ANN-HS was 94.28%, while ANN-ABC achieved 96.70% accuracy with the available data, contrasting with the 70.9% baseline accuracy of kNN. Thus, this new methodology provides a fast and accurate way to classify multiple fruit varieties, which can be easily implemented in processing factories. The main contribution of this work is that the method can be directly adapted to other use cases, since the selection of the optimal features and the configuration of the neural network are performed automatically using metaheuristic algorithms.
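Metaheuristic feature selection of the kind used in the ANN-PSO step can be sketched as a binary particle swarm searching over feature subsets, scored by a user-supplied evaluation function. In the paper the fitness is a trained neural network; here a hypothetical stub stands in, and all parameter values are illustrative.

```python
import math
import random

def pso_feature_select(n_features, fitness, n_particles=15, iters=40, seed=3):
    """Binary PSO: each particle is a 0/1 mask over features; an
    S-shaped (sigmoid) transfer turns velocities into bit probabilities."""
    rng = random.Random(seed)
    sig = lambda v: 1.0 / (1.0 + math.exp(-v))
    pos = [[rng.random() < 0.5 for _ in range(n_features)]
           for _ in range(n_particles)]
    vel = [[0.0] * n_features for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    gbest = max(pos, key=fitness)[:]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(n_features):
                r1, r2 = rng.random(), rng.random()
                # inertia + cognitive pull (pbest) + social pull (gbest)
                vel[i][d] = (0.7 * vel[i][d]
                             + 1.5 * r1 * (pbest[i][d] - pos[i][d])
                             + 1.5 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] = rng.random() < sig(vel[i][d])
            if fitness(pos[i]) > fitness(pbest[i]):
                pbest[i] = pos[i][:]
                if fitness(pbest[i]) > fitness(gbest):
                    gbest = pbest[i][:]
    return gbest

# Stub fitness standing in for the trained classifier: features 0 and 3
# are "informative", and every selected feature pays a small penalty.
def toy_fitness(mask):
    return 2.0 * mask[0] + 2.0 * mask[3] - 0.1 * sum(mask)

best_mask = pso_feature_select(6, toy_fitness)
```

The penalty term mirrors the practical goal above: keep the subset small (6 of 263 features) while preserving classification accuracy.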
Applying the Socratic Method to Physics Education
Corcoran, Ed
2005-04-01
We have restructured University Physics I and II in accordance with methods that PER has shown to be effective, including a more interactive discussion- and activity-based curriculum based on the premise that developing understanding requires an interactive process in which students have the opportunity to talk through and think through ideas with both other students and the teacher. Studies have shown that in classes implementing this approach, as compared to classes using a traditional approach, students achieve significantly higher gains on the Force Concept Inventory (FCI). This has been true in UPI. However, UPI FCI results seem to suggest that there is a significant conceptual hole in students' understanding of Newton's Second Law. Two labs in UPI which teach Newton's Second Law will be redesigned, replacing more of the activity with students as a group talking through, thinking through, and answering conceptual questions asked by the TA. The results will be measured by comparing FCI results to those from previous semesters, coupled with interviews. The results will be analyzed, and we will attempt to understand why gains were or were not made.
Scanning probe methods applied to molecular electronics
Energy Technology Data Exchange (ETDEWEB)
Pavlicek, Niko
2013-08-01
Scanning probe methods on insulating films offer a rich toolbox to study the electronic, structural and spin properties of individual molecules. This work discusses three issues in the field of molecular and organic electronics. An STM head to be operated in high magnetic fields has been designed and built. The STM head is very compact and rigid, relying on a robust coarse-approach mechanism. This will facilitate investigations of the spin properties of individual molecules in the future. Combined STM/AFM studies revealed a reversible molecular switch based on two stable configurations of DBTH molecules on ultrathin NaCl films. AFM experiments visualize the molecular structure in both states, and our experiments allowed us to determine the pathway of the switch unambiguously. Finally, tunneling into and out of the frontier molecular orbitals of pentacene molecules has been investigated on different insulating films. These experiments show that the local symmetries of the initial and final electron wave functions are decisive for the ratio between elastic and vibration-assisted tunneling. The results can be generalized to electron transport in organic materials.
Reflections on Mixing Methods in Applied Linguistics Research
Hashemi, Mohammad R.
2012-01-01
This commentary advocates the use of mixed methods research--that is the integration of qualitative and quantitative methods in a single study--in applied linguistics. Based on preliminary findings from a research project in progress, some reflections on the current practice of mixing methods as a new trend in applied linguistics are put forward.…
Directory of Open Access Journals (Sweden)
Igor Stojanović
2017-01-01
Full Text Available The continuous planar facility location problem with the connected region of feasible solutions bounded by arcs is a particular case of the constrained Weber problem. It is a continuous optimization problem with a nonconvex feasible set. This paper suggests appropriate modifications of four metaheuristic algorithms designed to solve this type of nonconvex optimization problem, and compares these algorithms to each other as well as to a heuristic algorithm. The artificial bee colony algorithm, the firefly algorithm, and their recently proposed improved versions for constrained optimization are appropriately modified and applied to the case study. A heuristic algorithm based on a modified Weiszfeld procedure is also implemented for comparison with the metaheuristic approaches. The numerical results show that metaheuristic algorithms can successfully solve instances of this problem with up to 500 constraints. Among the four algorithms, the improved version of the artificial bee colony algorithm is the most efficient with respect to solution quality, robustness, and computational efficiency.
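The heuristic baseline builds on the classic Weiszfeld iteration for the unconstrained Weber point; a minimal sketch of that iteration (without the paper's arc-constraint modifications, which are not reproduced here):

```python
def weiszfeld(points, weights=None, iters=200, eps=1e-9):
    """Classic Weiszfeld iteration for the unconstrained Weber point.

    points: list of (x, y) demand locations; weights: optional demands.
    Each step re-weights the points by the inverse distance to the iterate.
    """
    if weights is None:
        weights = [1.0] * len(points)
    # Start from the weighted centroid.
    x = sum(w * p[0] for w, p in zip(weights, points)) / sum(weights)
    y = sum(w * p[1] for w, p in zip(weights, points)) / sum(weights)
    for _ in range(iters):
        num_x = num_y = den = 0.0
        for w, (px, py) in zip(weights, points):
            d = ((x - px) ** 2 + (y - py) ** 2) ** 0.5
            if d < eps:          # iterate landed on a demand point
                return (px, py)
            num_x += w * px / d
            num_y += w * py / d
            den += w / d
        x, y = num_x / den, num_y / den
    return (x, y)

# Four symmetric demand points: the Weber point is the center of the square.
center = weiszfeld([(0, 0), (2, 0), (0, 2), (2, 2)])
```

The constrained variant in the paper additionally projects the iterate back onto the arc-bounded feasible region.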
Applying homotopy analysis method for solving differential-difference equation
International Nuclear Information System (INIS)
Wang Zhen; Zou Li; Zhang Hongqing
2007-01-01
In this Letter, we apply the homotopy analysis method to solve differential-difference equations. A simple but typical example is used to illustrate the validity and the great potential of the generalized homotopy analysis method for solving differential-difference equations. Comparisons are made between the results of the proposed method and exact solutions. The results show that the homotopy analysis method is an attractive method for solving differential-difference equations.
Directory of Open Access Journals (Sweden)
Nader Ghaffari-Nasab
2010-07-01
Full Text Available During the past two decades, there has been increasing interest in the permutation flow shop problem with different types of objective functions, such as minimizing the makespan or the weighted mean flow time. The permutation flow shop is formulated as a mixed integer program and is NP-hard. Therefore, a direct solution is not available and metaheuristic approaches need to be used to find near-optimal solutions. In this paper, we present a new discrete firefly metaheuristic to minimize the makespan for the permutation flow shop scheduling problem. The results of the proposed method are compared with those of an existing ant colony optimization technique. The preliminary results indicate that the new method performs better than the ant colony approach on some well-known benchmark problems.
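A metaheuristic for this problem only needs the objective evaluation; the standard completion-time recurrence for the permutation flow shop makespan can be sketched as follows (a generic illustration, not the paper's code):

```python
def makespan(sequence, proc):
    """Completion-time recurrence for a permutation flow shop.

    proc[j][m] is the processing time of job j on machine m; sequence is
    the single job order followed by every machine.
    """
    n_machines = len(proc[0])
    finish = [0] * n_machines            # running completion time per machine
    for j in sequence:
        finish[0] += proc[j][0]
        for m in range(1, n_machines):
            # A job starts on machine m only when the machine is free AND
            # the job has finished on machine m-1.
            finish[m] = max(finish[m], finish[m - 1]) + proc[j][m]
    return finish[-1]

# Two jobs, two machines: for these times, sequence (0, 1) is the better order.
times = [[1, 3], [3, 1]]
```

A discrete firefly (or any other) metaheuristic then searches over permutations, calling `makespan` as its fitness.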
Application of genetic programming in shape optimization of concrete gravity dams by metaheuristics
Directory of Open Access Journals (Sweden)
Abdolhossein Baghlani
2014-12-01
Full Text Available A gravity dam maintains its stability against external loads by its massive size; hence, minimizing the weight of the dam can remarkably reduce construction costs. In this paper, a procedure for finding the optimal shape of concrete gravity dams with a computationally efficient approach is introduced. Genetic programming (GP) in conjunction with metaheuristics is used for this purpose. As a case study, shape optimization of the Bluestone dam is presented. Pseudo-dynamic analysis is carried out on a total of 322 models in order to establish a database of results. This database is then used to find appropriate GP-based relations for the design criteria of the dam. This procedure eliminates the time-consuming process of structural analyses in evolutionary optimization methods. The method is hybridized with three different metaheuristics, namely particle swarm optimization, the firefly algorithm (FA), and teaching–learning-based optimization, and a comparison is made. The results show that although all three algorithms are very suitable, FA is slightly superior to the other two in finding a lighter structure in fewer iterations. The proposed method reduces the weight of the dam by up to 14.6% with very low computational effort.
Population-based metaheuristic optimization in neutron optics and shielding design
Energy Technology Data Exchange (ETDEWEB)
DiJulio, D.D., E-mail: Douglas.DiJulio@esss.se [European Spallation Source ERIC, P.O. Box 176, SE-221 00 Lund (Sweden); Division of Nuclear Physics, Lund University, SE-221 00 Lund (Sweden); Björgvinsdóttir, H. [European Spallation Source ERIC, P.O. Box 176, SE-221 00 Lund (Sweden); Department of Physics and Astronomy, Uppsala University, SE-751 20 Uppsala (Sweden); Zendler, C. [European Spallation Source ERIC, P.O. Box 176, SE-221 00 Lund (Sweden); Bentley, P.M. [European Spallation Source ERIC, P.O. Box 176, SE-221 00 Lund (Sweden); Department of Physics and Astronomy, Uppsala University, SE-751 20 Uppsala (Sweden)
2016-11-01
Population-based metaheuristic algorithms are powerful tools in the design of neutron scattering instruments, and the use of these types of algorithms for this purpose is becoming more and more commonplace. Today there exists a wide range of algorithms to choose from when designing an instrument, and it is not always initially clear which may provide the best performance. Furthermore, due to the nature of these algorithms, the final solution found for a specific design scenario cannot always be guaranteed to be the global optimum. Therefore, to explore the potential benefits of and differences among the variety of available algorithms when applied to such design scenarios, we have carried out a detailed study of some commonly used algorithms. For this purpose, we have developed a new general optimization software package which combines a number of common metaheuristic algorithms within a single user interface and is designed specifically with neutronic calculations in mind. The algorithms included in the software are implementations of Particle-Swarm Optimization (PSO), Differential Evolution (DE), Artificial Bee Colony (ABC), and a Genetic Algorithm (GA). The software has been used to optimize the design of several problems in neutron optics and shielding, coupled with Monte-Carlo simulations, in order to evaluate the performance of the various algorithms. In general the performance of the algorithms depended on the specific scenario; however, DE provided the best average solutions in all scenarios investigated in this work.
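As a concrete reference for the best-performing algorithm in the study, here is a minimal DE/rand/1/bin sketch for box-constrained minimization. It is the generic textbook form, not the ESS software package; population size, F and CR are common default choices.

```python
import random

def differential_evolution(f, bounds, np_=20, F=0.8, CR=0.9, gens=100, seed=3):
    """Minimal DE/rand/1/bin sketch for box-constrained minimization."""
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(np_)]
    cost = [f(x) for x in pop]
    for _ in range(gens):
        for i in range(np_):
            # Three mutually distinct donors, all different from i.
            a, b, c = rng.sample([k for k in range(np_) if k != i], 3)
            jrand = rng.randrange(dim)
            trial = []
            for j, (lo, hi) in enumerate(bounds):
                if rng.random() < CR or j == jrand:   # binomial crossover
                    v = pop[a][j] + F * (pop[b][j] - pop[c][j])
                    trial.append(min(max(v, lo), hi))  # clip to bounds
                else:
                    trial.append(pop[i][j])
            fc = f(trial)
            if fc <= cost[i]:                          # greedy selection
                pop[i], cost[i] = trial, fc
    best = min(range(np_), key=cost.__getitem__)
    return pop[best], cost[best]

sphere = lambda x: sum(v * v for v in x)
x_best, f_best = differential_evolution(sphere, [(-5, 5)] * 3)
```

In the instrument-design setting, `f` would be a Monte-Carlo figure of merit rather than an analytic test function.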
Focusing on the Golden Ball Metaheuristic: An Extended Study on a Wider Set of Problems
Directory of Open Access Journals (Sweden)
E. Osaba
2014-01-01
Full Text Available Nowadays, the development of new metaheuristics for solving optimization problems is a topic of interest in the scientific community. A large number of techniques of this kind can be found in the literature, including many recently proposed ones such as the artificial bee colony and the imperialist competitive algorithm. This paper focuses on one recently published technique, called Golden Ball (GB). The GB is a multiple-population metaheuristic based on soccer concepts. Although it was designed to solve combinatorial optimization problems, until now it has only been tested with two simple routing problems: the traveling salesman problem and the capacitated vehicle routing problem. In this paper, the GB is applied to four different combinatorial optimization problems. Two of them are routing problems that are more complex than the previously used ones: the asymmetric traveling salesman problem and the vehicle routing problem with backhauls. Additionally, one constraint satisfaction problem (the n-queen problem) and one combinatorial design problem (the one-dimensional bin packing problem) have also been used. The outcomes obtained by GB are compared with those obtained by two different genetic algorithms and two distributed genetic algorithms. Additionally, two statistical tests are conducted to compare these results.
Printing method and printer used for applying this method
2006-01-01
The invention pertains to a method for transferring ink to a receiving material using an inkjet printer having an ink chamber (10) with a nozzle (8) and an electromechanical transducer (16) in cooperative connection with the ink chamber, comprising actuating the transducer to generate a pressure
Directory of Open Access Journals (Sweden)
Ricardo Faia
2017-06-01
Full Text Available The deregulation of the electricity sector has culminated in the introduction of competitive markets. In addition, the emergence of new forms of electric energy production, namely renewable energy production, has brought additional changes in electricity market operation. Renewable energy has significant advantages, but at the cost of an intermittent character. The generation variability adds new challenges for negotiating players, as they have to deal with a new level of uncertainty. In order to support players in their decisions, decision support tools that assist them in their negotiations are crucial. Artificial intelligence techniques play an important role in this decision support, as they can provide valuable results in rather small execution times, namely regarding the problem of optimizing the electricity market participation portfolio. This paper proposes a heuristic method that provides an initial solution allowing metaheuristic techniques to improve their results through a good initialization of the optimization process. Results show that, by using the proposed heuristic, multiple metaheuristic optimization methods are able to improve their solutions in a faster execution time, thus providing a valuable contribution for player support in energy market negotiations.
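The value of a heuristic initial solution can be sketched with a toy warm start: a simple improvement loop (standing in for the metaheuristics in the paper) is run once from a random cold start and once from an assumed heuristic solution near the optimum. All numbers here are invented for illustration.

```python
import random

def local_refine(f, start, step=0.5, iters=200, seed=11):
    """Toy improvement loop standing in for a metaheuristic's refinement phase."""
    rng = random.Random(seed)
    x, fx = list(start), f(start)
    for _ in range(iters):
        cand = [v + rng.uniform(-step, step) for v in x]
        fc = f(cand)
        if fc < fx:              # keep only improving moves
            x, fx = cand, fc
    return x, fx

sphere = lambda x: sum(v * v for v in x)

random_start = [8.0, -7.0, 6.0]      # cold start, f = 149
heuristic_start = [0.5, -0.5, 0.5]   # assumed heuristic solution, f = 0.75

_, f_cold = local_refine(sphere, random_start)
_, f_warm = local_refine(sphere, heuristic_start)
```

With the same iteration budget, the warm-started run begins two orders of magnitude closer to the optimum, which is the effect the paper exploits to cut execution time.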
Mousavi, Seyed Hosein; Nazemi, Ali; Hafezalkotob, Ashkan
2015-03-01
With the formation of competitive electricity markets around the world, the optimization of bidding strategies has become one of the main topics in studies of market design. Market design is challenged by multiple objectives that need to be satisfied. The solution of such multi-objective problems is often searched over the combined strategy space, and thus requires the simultaneous optimization of multiple parameters. The problem is formulated analytically using the Nash equilibrium concept for games composed of large numbers of players having discrete and large strategy spaces. The solution methodology is based on a characterization of Nash equilibrium in terms of minima of a function and relies on a metaheuristic optimization approach to find these minima. This paper presents several metaheuristic algorithms, namely a genetic algorithm (GA), simulated annealing (SA) and a hybrid simulated annealing genetic algorithm (HSAGA), to simulate how generators bid in the spot electricity market so as to maximize their profit given the other generators' strategies, and compares their results. Since both GA and SA are generic search methods, HSAGA is a generic search method as well. The model, based on actual data, is applied to a peak hour of Tehran's wholesale spot market in 2012. The simulations show that GA outperforms SA and HSAGA in computing time, number of function evaluations and computing stability, and that the Nash equilibria calculated by GA vary less from one another than those of the other algorithms.
A Meta-Heuristic Load Balancer for Cloud Computing Systems
Sliwko, L.; Getov, Vladimir
2015-01-01
This paper introduces a strategy to allocate services on a cloud system without overloading the nodes and maintaining the system stability with minimum cost. We specify an abstract model of cloud resources utilization, including multiple types of resources as well as considerations for the service migration costs. A prototype meta-heuristic load balancer is demonstrated and experimental results are presented and discussed. We also propose a novel genetic algorithm, where population is seeded ...
The use of meta-heuristics for airport gate assignment
DEFF Research Database (Denmark)
Cheng, Chun-Hung; Ho, Sin C.; Kwan, Cheuk-Lam
2012-01-01
proposed to generate good solutions within a reasonable timeframe. In this work, we attempt to assess the performance of three meta-heuristics, namely genetic algorithm (GA), tabu search (TS) and simulated annealing (SA), together with a hybrid approach based on SA and TS. Flight data from Incheon International Airport...... are collected to carry out the computational comparison. Although the literature has documented these algorithms, this work may be a first attempt to evaluate their performance using a set of realistic flight data....
Discrimination symbol applying method for sintered nuclear fuel product
International Nuclear Information System (INIS)
Ishizaki, Jin
1998-01-01
The present invention provides a method for applying discrimination information, such as the enrichment degree, to the end face of a sintered nuclear fuel product. Namely, discrimination symbols encoding information about the powders are applied with a sintering aid to the end face of a member formed by molding nuclear fuel powders under pressure. Then, the molded product is sintered. The sintering aid comprises aluminum oxide, a mixture of aluminum oxide and silicon dioxide, aluminum hydride or aluminum stearate, alone or in admixture. As means of applying the sintering aid, the discrimination symbols are drawn with isostearic acid on the end face of the molded product and the sintering aid is sprayed onto them, or the sintering aid is applied directly, or the sintering aid is suspended in isostearic acid and the suspension is applied with a brush. As a result, visible discrimination information can be applied to the sintered member easily. (N.H.)
Hybrid Metaheuristics for Solving a Fuzzy Single Batch-Processing Machine Scheduling Problem
Directory of Open Access Journals (Sweden)
S. Molla-Alizadeh-Zavardehi
2014-01-01
Full Text Available This paper deals with the problem of minimizing the total weighted tardiness of jobs on a real-world single batch-processing machine (SBPM) in the presence of fuzzy due dates. First, a fuzzy mixed integer linear programming model is developed. Then, due to the complexity of the problem, which is NP-hard, we design two hybrid metaheuristics, GA-VNS and VNS-SA, combining the advantages of the genetic algorithm (GA), variable neighborhood search (VNS) and simulated annealing (SA) frameworks. In addition, we propose three fuzzy earliest-due-date heuristics for the given problem. Through computational experiments with several random test problems, a robust calibration is applied to the parameters. Finally, computational results on test problems of different scales are presented to compare the proposed algorithms.
Building "Applied Linguistic Historiography": Rationale, Scope, and Methods
Smith, Richard
2016-01-01
In this article I argue for the establishment of "Applied Linguistic Historiography" (ALH), that is, a new domain of enquiry within applied linguistics involving a rigorous, scholarly, and self-reflexive approach to historical research. Considering issues of rationale, scope, and methods in turn, I provide reasons why ALH is needed and…
Non-uniform cosine modulated filter banks using meta-heuristic algorithms in CSD space
Directory of Open Access Journals (Sweden)
Shaeen Kalathil
2015-11-01
Full Text Available This paper presents an efficient design of non-uniform cosine modulated filter banks (CMFBs) using canonic signed digit (CSD) coefficients. CMFBs admit a simple and efficient design procedure: a non-uniform decomposition can be obtained by merging the appropriate filters of a uniform filter bank, so only the prototype filter needs to be designed and optimized. In this paper, the prototype filter is designed using the window method, weighted Chebyshev approximation and weighted constrained least-squares approximation. The coefficients are quantized into CSD using a look-up table. Finite-precision CSD rounding deteriorates the filter bank performance, which is then improved using suitably modified meta-heuristic algorithms. The meta-heuristic algorithms modified and used in this paper are the Artificial Bee Colony algorithm, the Gravitational Search algorithm, the Harmony Search algorithm and the Genetic algorithm; they result in filter banks with lower implementation complexity, power consumption and area requirements than conventional continuous-coefficient non-uniform CMFBs.
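CSD quantization itself is easy to sketch: the canonic signed digit form of an integer coefficient is its non-adjacent form, with digits in {-1, 0, 1} and no two adjacent non-zeros, which minimizes the adders/subtractors needed per coefficient. The paper's look-up-table step is replaced here by a direct conversion loop.

```python
def to_csd(n):
    """Canonic signed digit (non-adjacent form) of a non-negative integer.

    Returns digits in {-1, 0, 1}, least significant first; no two adjacent
    digits are non-zero.
    """
    digits = []
    while n > 0:
        if n % 2:
            d = 2 - (n % 4)      # 1 if n ≡ 1 (mod 4), else -1
            n -= d
        else:
            d = 0
        digits.append(d)
        n //= 2
    return digits

def from_csd(digits):
    """Reconstruct the integer value from its CSD digits."""
    return sum(d * (1 << i) for i, d in enumerate(digits))

# 7 = 8 - 1: two non-zero digits instead of the three in binary 111.
csd7 = to_csd(7)
```

For fixed-point filter coefficients, the fractional value is scaled to an integer before conversion; the meta-heuristics in the paper then search over such CSD-constrained coefficient sets.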
Applying Mixed Methods Research at the Synthesis Level: An Overview
Heyvaert, Mieke; Maes, Bea; Onghena, Patrick
2011-01-01
Historically, qualitative and quantitative approaches have been applied relatively separately in synthesizing qualitative and quantitative evidence, respectively, in several research domains. However, mixed methods approaches are becoming increasingly popular nowadays, and practices of combining qualitative and quantitative research components at…
Quantitative EEG Applying the Statistical Recognition Pattern Method
DEFF Research Database (Denmark)
Engedal, Knut; Snaedal, Jon; Hoegh, Peter
2015-01-01
BACKGROUND/AIM: The aim of this study was to examine the discriminatory power of quantitative EEG (qEEG) applying the statistical pattern recognition (SPR) method to separate Alzheimer's disease (AD) patients from elderly individuals without dementia and from other dementia patients. METHODS...
Energy Technology Data Exchange (ETDEWEB)
Fesanghary, M. [Department of Mechanical Engineering, Louisiana State University, 2508 Patrick Taylor Hall, Baton Rouge, LA 70808 (United States); Ardehali, M.M. [Energy Research Center, Department of Electrical Engineering, Amirkabir University of Technology (Tehran Polytechnic), 424-Hafez Avenue, 15875-4413 Tehran (Iran)
2009-06-15
The increasing costs of fuel and operation of thermal power generating units warrant the development of optimization methodologies for economic dispatch (ED) problems. Optimization methodologies based on meta-heuristic procedures could assist power generation policy analysts in achieving the goal of minimizing generation costs. In this context, the objective of this study is to present a novel approach based on the harmony search (HS) algorithm for solving ED problems, aiming to provide a practical alternative to conventional methods. To demonstrate the efficiency and applicability of the proposed method, and for the purposes of comparison, various types of ED problems are examined. The results of this study show that the proposed approach is able to find more economical loads than those determined by other methods. (author)
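A bare-bones harmony search applied to a toy two-unit economic dispatch illustrates the approach. The cost coefficients, generation bounds and the penalty weight for the demand balance are invented for illustration and are not taken from the study.

```python
import random

def harmony_search(f, bounds, hms=10, hmcr=0.9, par=0.3, bw=0.5,
                   iters=2000, seed=5):
    """Bare-bones harmony search for box-constrained minimization."""
    rng = random.Random(seed)
    mem = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(hms)]
    cost = [f(x) for x in mem]
    for _ in range(iters):
        new = []
        for j, (lo, hi) in enumerate(bounds):
            if rng.random() < hmcr:                 # memory consideration
                v = mem[rng.randrange(hms)][j]
                if rng.random() < par:              # pitch adjustment
                    v += rng.uniform(-bw, bw)
            else:                                   # random selection
                v = rng.uniform(lo, hi)
            new.append(min(max(v, lo), hi))
        fc = f(new)
        worst = max(range(hms), key=cost.__getitem__)
        if fc < cost[worst]:                        # replace worst harmony
            mem[worst], cost[worst] = new, fc
    best = min(range(hms), key=cost.__getitem__)
    return mem[best], cost[best]

# Two-unit dispatch: quadratic fuel costs, demand balance enforced by penalty.
def dispatch_cost(p, demand=400.0):
    c = (0.004 * p[0] ** 2 + 5.3 * p[0] + 500) \
        + (0.006 * p[1] ** 2 + 5.5 * p[1] + 400)
    return c + 1000.0 * abs(p[0] + p[1] - demand)   # demand-balance penalty

p_best, c_best = harmony_search(dispatch_cost, [(100, 300), (100, 300)])
```

For this toy instance the balanced optimum is p = (250, 150) MW at a fuel cost of 3435; real ED formulations add valve-point effects, ramp limits and prohibited zones.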
Electronic-projecting Moire method applying CBR-technology
Kuzyakov, O. N.; Lapteva, U. V.; Andreeva, M. A.
2018-01-01
An electronic-projecting method based on the Moiré effect for examining surface topology is suggested. The conditions for forming Moiré fringes and the dependence of their parameters on the reference parameters of the object and virtual grids are analyzed. The control system structure and decision-making subsystem are elaborated. The subsystem employs CBR technology, based on applying a case base. The approach of analysing and forming a decision for each separate local area, with subsequent formation of a common topology map, is applied.
A nuclear heuristic for application to metaheuristics in-core fuel management optimization
Energy Technology Data Exchange (ETDEWEB)
Meneses, Anderson Alvarenga de Moura, E-mail: ameneses@lmp.ufrj.b [COPPE/Federal University of Rio de Janeiro, RJ (Brazil). Nuclear Engineering Program; Dalle Molle Institute for Artificial Intelligence (IDSIA), Manno-Lugano, TI (Switzerland); Gambardella, Luca Maria, E-mail: luca@idsia.c [Dalle Molle Institute for Artificial Intelligence (IDSIA), Manno-Lugano, TI (Switzerland); Schirru, Roberto, E-mail: schirru@lmp.ufrj.b [COPPE/Federal University of Rio de Janeiro, RJ (Brazil). Nuclear Engineering Program
2009-07-01
The In-Core Fuel Management Optimization (ICFMO) is a well-known problem of nuclear engineering characterized by complexity, a high number of feasible solutions, and a complex evaluation process with high computational cost, which makes a great number of evaluations during an optimization process prohibitive. Heuristics are criteria or principles for deciding which among several alternative courses of action is more effective with respect to some goal. In this paper, we propose a new approach for the use of relational heuristics in the ICFMO search. The heuristic is based on the reactivity of the fuel assemblies and their position in the reactor core. It was applied to random search, reducing the number of loading-pattern evaluations required during the search. The experiments demonstrate that it is possible to achieve results comparable to those in the literature, for future application to metaheuristics in the ICFMO. (author)
A Lagrangian meshfree method applied to linear and nonlinear elasticity.
Walker, Wade A
2017-01-01
The repeated replacement method (RRM) is a Lagrangian meshfree method which we have previously applied to the Euler equations for compressible fluid flow. In this paper we present new enhancements to RRM, and we apply the enhanced method to both linear and nonlinear elasticity. We compare the results of ten test problems to those of analytic solvers, to demonstrate that RRM can successfully simulate these elastic systems without many of the requirements of traditional numerical methods such as numerical derivatives, equation system solvers, or Riemann solvers. We also show the relationship between error and computational effort for RRM on these systems, and compare RRM to other methods to highlight its strengths and weaknesses. To further explain the two elastic equations used in the paper, we demonstrate the mathematical procedure used to create Riemann and Sedov-Taylor solvers for them, and detail the numerical techniques needed to embody those solvers in code.
Applying the Taguchi method for optimized fabrication of bovine ...
African Journals Online (AJOL)
2008-02-19
Feb 19, 2008 ... Nanobiotechnology Research Lab., School of Chemical Engineering, Babol University of Technology, P.O. Box: 484, ... nanoparticle by applying the Taguchi method with characterization of the ... of BSA/ethanol and organic solvent adding rate. ... Sodium azide and all other chemicals were purchased from.
International Nuclear Information System (INIS)
Wong, Ka In; Wong, Pak Kin
2017-01-01
Highlights: • A new calibration method is proposed for dual-injection engines under biofuel blends. • Sparse Bayesian extreme learning machine and the flower pollination algorithm are employed in the proposed method. • An SI engine is retrofitted for operating under a dual-injection strategy. • The proposed method is verified experimentally under two idle-speed conditions. • A comparison with other machine learning methods and optimization algorithms is conducted. - Abstract: Although many combinations of biofuel blends are available in the market, it is more beneficial to vary the ratio of biofuel blends at different engine operating conditions for optimal engine performance. Dual-injection engines have the potential to implement such a function. However, while optimal engine calibration is critical for achieving high performance, the use of two injection systems, together with other modern engine technologies, makes the calibration of dual-injection engines a very complicated task. The traditional trial-and-error calibration approach can no longer be adopted, as it would be time-, fuel- and labor-consuming. Therefore, a new and fast calibration method based on the sparse Bayesian extreme learning machine (SBELM) and metaheuristic optimization is proposed to optimize dual-injection engines operating with biofuels. A dual-injection spark-ignition engine fueled with ethanol and gasoline is employed for demonstration purposes. The engine response for various parameters is first acquired, and an engine model is then constructed using SBELM. With the engine model, the optimal engine settings are determined using recently proposed metaheuristic optimization methods. Experimental results validate the optimal settings obtained with the proposed methodology, indicating that the use of machine learning and metaheuristic optimization for dual-injection engine calibration is effective and promising.
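The workflow above (measure responses, fit a surrogate model, optimize over the surrogate) can be miniaturized. This sketch is not SBELM or flower pollination: an exact quadratic through three samples stands in for the learned engine model, the parabola's vertex stands in for the metaheuristic search, and the "engine response" function is hypothetical.

```python
def quadratic_surrogate(samples):
    """Exact quadratic y = a*x^2 + b*x + c through three (setting, response) pairs.

    A stand-in for the SBELM engine model: any cheap surrogate fitted to
    measured engine responses plays the same role in the workflow.
    """
    (x0, y0), (x1, y1), (x2, y2) = samples
    f01 = (y1 - y0) / (x1 - x0)                    # first divided difference
    a = ((y2 - y0) / (x2 - x0) - f01) / (x2 - x1)  # second divided difference
    b = f01 - a * (x0 + x1)
    c = y0 - a * x0 ** 2 - b * x0
    return a, b, c

def surrogate_optimum(a, b, c):
    """Minimizer of the surrogate (vertex of the parabola), standing in for
    the metaheuristic search over the model."""
    return -b / (2 * a)

# Hypothetical "engine response": a cost curve vs. one calibration setting.
engine = lambda x: (x - 2.0) ** 2 + 3.0
samples = [(0.0, engine(0.0)), (1.0, engine(1.0)), (4.0, engine(4.0))]
a, b, c = quadratic_surrogate(samples)
x_opt = surrogate_optimum(a, b, c)
```

The point of the pattern is that the expensive system (the engine) is queried only to build the model; the optimizer then runs against the cheap surrogate.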
International Nuclear Information System (INIS)
Meneses, Anderson Alvarenga de Moura; Araujo, Lenilson Moreira; Nast, Fernando Nogueira; Da Silva, Patrick Vasconcelos; Schirru, Roberto
2018-01-01
Highlights: •Metaheuristics were applied to Loading Pattern Optimization problems and compared. •The problems are based on data of the benchmarks IAEA and BIBLIS. •The metaheuristics compared were PSO, Cross-Entropy, PBIL and Artificial Bee Colony. •Angra 1 NPP data were also used for further comparison of the algorithms. -- Abstract: The Loading Pattern Optimization (LPO) of a Nuclear Power Plant (NPP), or in-core fuel management optimization, is a real-world and prominent problem in Nuclear Engineering, whose goal is to find an optimal (or near-optimal) Loading Pattern (LP), in terms of energy production, within adequate safety margins. Most of the reactor models used in LPO studies are particular cases, such as research or power reactors whose technical data cannot be made available for several reasons, which makes the reproducibility of tests unattainable. In the present article we report the results of LPO problems based upon reactor physics benchmarks. Since such data are well known and widely available in the literature, it is possible to reproduce tests for comparison of techniques. We performed the LPO with the data of the benchmarks IAEA-3D and BIBLIS-2D. The reactor physics code RECNOD, which was used in previous works for the optimization of the Angra 1 NPP in Brazil, was also used for further comparison. Four Optimization Metaheuristics (OMHs) were applied to those problems: Particle Swarm Optimization (PSO), the Cross-Entropy algorithm (CE), Artificial Bee Colony (ABC) and Population-Based Incremental Learning (PBIL). For IAEA-3D, the best algorithm was ABC. For BIBLIS-2D, PBIL was the best OMH. For the Angra 1/RECNOD optimization problem, PBIL, ABC and CE were the best OMHs.
Aircraft operability methods applied to space launch vehicles
Young, Douglas
1997-01-01
The commercial space launch market requirement for low vehicle operations costs necessitates the application of methods and technologies developed and proven for complex aircraft systems. The "building in" of reliability and maintainability, which is applied extensively in the aircraft industry, has yet to be applied to the maximum extent possible on launch vehicles. Use of vehicle system and structural health monitoring, automated ground systems and diagnostic design methods derived from aircraft applications support the goal of achieving low cost launch vehicle operations. Transforming these operability techniques to space applications where diagnostic effectiveness has significantly different metrics is critical to the success of future launch systems. These concepts will be discussed with reference to broad launch vehicle applicability. Lessons learned and techniques used in the adaptation of these methods will be outlined drawing from recent aircraft programs and implementation on phase 1 of the X-33/RLV technology development program.
Magnetic stirring welding method applied to nuclear power plant
International Nuclear Information System (INIS)
Hirano, Kenji; Watando, Masayuki; Morishige, Norio; Enoo, Kazuhide; Yasuda, Yuuji
2002-01-01
In construction of a new nuclear power plant, carbon steel and stainless steel are used as base materials for the bottom liner plate of the Reinforced Concrete Containment Vessel (RCCV) to achieve maintenance-free requirements while securing sufficient structural strength. However, welding such dissimilar metals is difficult by ordinary methods. To overcome the difficulty, the automated Magnetic Stirring Welding (MSW) method, which can demonstrate good welding performance, was studied for practical use, and weldability tests showed good results. Based on the study, a new welding device for the MSW method was developed to apply it to weld joints of dissimilar materials, and it was put to practical use in part of a nuclear power plant. (author)
Linear algebraic methods applied to intensity modulated radiation therapy.
Crooks, S M; Xing, L
2001-10-01
Methods of linear algebra are applied to the choice of beam weights for intensity modulated radiation therapy (IMRT). It is shown that the physical interpretation of the beam weights, target homogeneity and ratios of deposited energy can be given in terms of matrix equations and quadratic forms. The methodology of fitting using linear algebra as applied to IMRT is examined. Results are compared with IMRT plans that had been prepared using a commercially available IMRT treatment planning system and previously delivered to cancer patients.
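As a toy illustration of the normal-equations view described in the abstract, the following sketch fits beam weights w by minimising ||Dw - d||² for an invented 3-voxel, 2-beam dose-deposition matrix. All numbers are made up; clinical IMRT matrices are far larger and the paper's actual formulation is not reproduced here.

```python
def transpose(A):
    return [list(col) for col in zip(*A)]

def matmul(A, B):
    Bt = list(zip(*B))
    return [[sum(a * b for a, b in zip(row, col)) for col in Bt] for row in A]

def solve2x2(M, v):
    # Cramer's rule for a 2x2 linear system M x = v.
    det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
    return [(v[0] * M[1][1] - M[0][1] * v[1]) / det,
            (M[0][0] * v[1] - v[0] * M[1][0]) / det]

# Invented dose-deposition matrix: 3 voxels x 2 beams, unit prescriptions.
D = [[1.0, 0.2],
     [0.5, 0.8],
     [0.1, 1.0]]
d = [1.0, 1.0, 1.0]

# Normal equations (D^T D) w = D^T d -- the quadratic-form view of the fit.
Dt = transpose(D)
DtD = matmul(Dt, D)
Dtd = [sum(r * s for r, s in zip(row, d)) for row in Dt]
w = solve2x2(DtD, Dtd)
dose = [sum(Dij * wj for Dij, wj in zip(row, w)) for row in D]
print([round(x, 3) for x in w])
```

The resulting weights minimise the quadratic form (Dw - d)ᵀ(Dw - d); a clinical solver would additionally enforce non-negativity and organ-at-risk constraints.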
Meta-heuristic cuckoo search algorithm for the correction of faulty array antenna
International Nuclear Information System (INIS)
Khan, S.U.; Qureshi, I.M.
2015-01-01
In this article, we introduce a Cuckoo Search Algorithm (CSA) for the compensation of a faulty array antenna. It is assumed that the location of the faulty element is known. When a sensor fails, it disturbs the power pattern: the Sidelobe Level (SLL) rises and nulls are shifted from their required positions. In this approach, the CSA optimizes the weights of the active elements to reduce the SLL and to place nulls in the desired directions. The meta-heuristic CSA is used to control the SLL and to steer nulls to their required positions. The CSA is inspired by the obligate brood parasitism of some cuckoo species, combined with Lévy flight behaviour. The fitness function minimizes the error between the desired and obtained patterns, subject to null constraints. Simulation results for various scenarios are given to demonstrate the validity and performance of the proposed method. (author)
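The brood-parasitism-plus-Lévy-flight scheme the abstract describes can be sketched generically as follows; this minimises a stand-in sphere fitness, not the paper's array-factor cost (the antenna model, element weights and null constraints are not reproduced).

```python
import math, random

random.seed(1)

def levy_step(beta=1.5):
    # Mantegna's algorithm for a heavy-tailed Levy step length.
    sigma = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
             / (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = random.gauss(0, sigma)
    v = random.gauss(0, 1)
    return u / abs(v) ** (1 / beta)

def sphere(x):
    return sum(v * v for v in x)

def cuckoo_search(fitness, dim=4, n_nests=15, pa=0.25, iters=300):
    nests = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(n_nests)]
    best = min(nests, key=fitness)
    for _ in range(iters):
        # New solution via a Levy flight biased toward the best nest.
        i = random.randrange(n_nests)
        trial = [x + 0.01 * levy_step() * (x - b)
                 for x, b in zip(nests[i], best)]
        if fitness(trial) < fitness(nests[i]):
            nests[i] = trial
        # A fraction pa of the worst nests is abandoned and rebuilt
        # (the host bird discovers the alien egg).
        nests.sort(key=fitness)
        for k in range(int(pa * n_nests)):
            nests[-1 - k] = [random.uniform(-5, 5) for _ in range(dim)]
        best = min(nests + [best], key=fitness)
    return best

best = cuckoo_search(sphere)
print(round(sphere(best), 4))
```

For the antenna application, the fitness would instead measure the deviation of the array pattern (computed from the active-element weights) from the desired pattern, plus penalties at the null directions.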
Models and Tabu Search Metaheuristics for Service Network Design with Asset-Balance Requirements
DEFF Research Database (Denmark)
Pedersen, Michael Berliner; Crainic, T.G.; Madsen, Oli B.G.
2009-01-01
This paper focuses on a generic model for service network design, which includes asset positioning and utilization through constraints on asset availability at terminals. We denote these relations as "design-balance constraints" and focus on the design-balanced capacitated multicommodity network design model, a generalization of the capacitated multicommodity network design model generally used in service network design applications. Both arc- and cycle-based formulations for the new model are presented. The paper also proposes a tabu search metaheuristic framework for the arc-based formulation. Results on a wide range of network design problem instances from the literature indicate the proposed method behaves very well in terms of computational efficiency and solution quality.
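A minimal tabu search of the kind the paper builds on can be sketched on a toy 0/1 design problem. The cost below (a fixed charge per open arc plus a penalty when the open-arc count misses a target) is invented and only loosely echoes design-balance constraints; the paper's actual model and neighbourhoods are not reproduced.

```python
import random

random.seed(7)

def tabu_search(cost, n_bits, iters=200, tenure=7):
    # Minimise cost over 0/1 design vectors (think open/closed arcs) using
    # single-bit-flip moves and a short-term tabu list on flipped positions.
    current = [random.randint(0, 1) for _ in range(n_bits)]
    best, best_cost = current[:], cost(current)
    tabu = {}   # position -> first iteration at which flipping it is allowed again
    for it in range(iters):
        candidates = []
        for j in range(n_bits):
            neighbour = current[:]
            neighbour[j] ^= 1
            c = cost(neighbour)
            # Aspiration criterion: a tabu move is allowed if it beats the best.
            if tabu.get(j, -1) <= it or c < best_cost:
                candidates.append((c, j, neighbour))
        c, j, neighbour = min(candidates)
        current = neighbour
        tabu[j] = it + tenure
        if c < best_cost:
            best, best_cost = current[:], c
    return best, best_cost

# Toy design-balance-flavoured cost: a fixed charge per open arc plus a heavy
# penalty when the number of open arcs misses an assumed balance target of 4.
cost = lambda x: sum(x) + 10 * abs(sum(x) - 4)
best, best_cost = tabu_search(cost, n_bits=10)
print(best_cost)   # -> 4
```

The tabu list forces the search away from recently visited configurations, which is what lets it escape local optima that a plain greedy descent would stop at.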
Metaheuristic Based Scheduling Meta-Tasks in Distributed Heterogeneous Computing Systems
Directory of Open Access Journals (Sweden)
Hesam Izakian
2009-07-01
Full Text Available Scheduling is a key problem in distributed heterogeneous computing systems: it is NP-complete, yet solving it well is necessary to benefit from the large computing capacity of such systems. In this paper, we present a metaheuristic technique, namely the Particle Swarm Optimization (PSO) algorithm, for this problem. PSO is a population-based search algorithm inspired by the social behavior of bird flocking and fish schooling. Particles fly through the problem search space to find optimal or near-optimal solutions. The scheduler aims at minimizing makespan, i.e., the completion time of the latest task. Experimental studies show that the proposed method is more efficient than, and surpasses, reported PSO and GA approaches for this problem.
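The PSO-for-makespan idea can be sketched as follows. The task runtimes, swarm parameters and the rounding-based decoding of continuous positions to machine indices are all assumptions for illustration, not taken from the paper.

```python
import random

random.seed(3)

T, M = 8, 3
runtimes = [4, 7, 2, 5, 9, 3, 6, 1]   # invented task lengths

def makespan(position):
    # Decode continuous coordinates to machine indices by rounding.
    loads = [0.0] * M
    for t, x in enumerate(position):
        loads[int(round(x)) % M] += runtimes[t]
    return max(loads)

n_particles, iters = 20, 150
X = [[random.uniform(0, M - 1) for _ in range(T)] for _ in range(n_particles)]
V = [[0.0] * T for _ in range(n_particles)]
pbest = [x[:] for x in X]
gbest = min(X, key=makespan)[:]

for _ in range(iters):
    for i in range(n_particles):
        for d in range(T):
            r1, r2 = random.random(), random.random()
            # Inertia + cognitive pull (pbest) + social pull (gbest).
            V[i][d] = (0.7 * V[i][d]
                       + 1.5 * r1 * (pbest[i][d] - X[i][d])
                       + 1.5 * r2 * (gbest[d] - X[i][d]))
            X[i][d] += V[i][d]
        if makespan(X[i]) < makespan(pbest[i]):
            pbest[i] = X[i][:]
    gbest = min(pbest, key=makespan)[:]

print(makespan(gbest))
```

With a total workload of 37 time units over 3 machines, no schedule can beat a makespan of 13 here, which gives a quick sanity bound on the swarm's result.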
A review of metaheuristic scheduling techniques in cloud computing
Directory of Open Access Journals (Sweden)
Mala Kalra
2015-11-01
Full Text Available Cloud computing has become a buzzword in the area of high-performance distributed computing, as it provides on-demand access to a shared pool of resources over the Internet in a self-service, dynamically scalable and metered manner. Cloud computing is still in its infancy, so to reap its full benefits, much research is required across a broad array of topics. One of the important research issues that needs attention for efficient performance is scheduling. The goal of scheduling is to map tasks to appropriate resources so as to optimize one or more objectives. Scheduling in cloud computing belongs to the category of NP-hard problems, due to the large solution space, and thus it takes a long time to find an optimal solution; no known algorithm can produce an optimal solution in polynomial time for such problems. In a cloud environment, it is therefore preferable to find a suboptimal solution in a short period of time. Metaheuristic-based techniques have been proved to achieve near-optimal solutions within reasonable time for such problems. In this paper, we provide an extensive survey and comparative analysis of various scheduling algorithms for cloud and grid environments based on three popular metaheuristic techniques: Ant Colony Optimization (ACO), Genetic Algorithm (GA) and Particle Swarm Optimization (PSO), and two novel techniques: League Championship Algorithm (LCA) and the BAT algorithm.
A metaheuristic optimization framework for informative gene selection
Directory of Open Access Journals (Sweden)
Kaberi Das
Full Text Available This paper presents a metaheuristic framework using Harmony Search (HS) with a Genetic Algorithm (GA) for gene selection. The internal architecture of the proposed model broadly works in two phases. In the first phase, the model hybridizes HS with GA to compute and evaluate the fitness of randomly selected solutions encoded as binary strings, and HS ranks the solutions in descending order of fitness. In the second phase, offspring are generated using the crossover and mutation operations of GA, and those offspring are selected for the next generation whose fitness, evaluated by an SVM classifier, exceeds that of their parents. The accuracy of the final gene subsets obtained from this model has been evaluated using SVM classifiers. The merit of this approach is analyzed by experimental results on five benchmark datasets, and the results show an impressive accuracy over existing feature selection approaches. The occurrence frequency of the gene subsets selected by this model has also been computed, and the most often selected gene subsets, with probability in [0.1–0.9], have been chosen as optimal sets of informative genes. Finally, the performance of those selected informative gene subsets has been measured and established through probabilistic measures. Keywords: Gene Selection, Metaheuristic, Harmony Search Algorithm, Genetic Algorithm, SVM
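The two-phase HS + GA flow described above can be sketched on binary strings. The SVM-accuracy fitness is replaced here by an invented stand-in that rewards selecting a hidden set of "informative" genes; the memory size, iteration counts and acceptance rule are illustrative assumptions.

```python
import random

random.seed(5)

n_genes = 20
informative = {1, 4, 7, 11, 15}   # hidden "informative" genes (invented)

def fitness(s):
    hits = sum(s[i] for i in informative)
    extras = sum(s) - hits
    return hits - 0.2 * extras    # stand-in for SVM classification accuracy

def random_string():
    return [random.randint(0, 1) for _ in range(n_genes)]

# Phase 1: harmony memory of candidate gene subsets, ranked by fitness.
memory = sorted((random_string() for _ in range(10)), key=fitness, reverse=True)

# Phase 2: GA crossover and mutation; a child replaces the worst member
# whenever it beats the weaker of its two parents.
for _ in range(200):
    p1, p2 = random.sample(memory, 2)
    cut = random.randrange(1, n_genes)
    child = p1[:cut] + p2[cut:]           # single-point crossover
    child[random.randrange(n_genes)] ^= 1  # point mutation
    if fitness(child) > min(fitness(p1), fitness(p2)):
        memory[-1] = child
        memory.sort(key=fitness, reverse=True)

best = memory[0]
print(round(fitness(best), 2))
```

Because a child only ever replaces the worst memory member, and only when it beats its weaker parent, the worst fitness in the memory improves monotonically, mirroring the ranked harmony memory of the paper.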
Methods of applied mathematics with a software overview
Davis, Jon H
2016-01-01
This textbook, now in its second edition, provides students with a firm grasp of the fundamental notions and techniques of applied mathematics as well as the software skills to implement them. The text emphasizes the computational aspects of problem solving as well as the limitations and implicit assumptions inherent in the formal methods. Readers are also given a sense of the wide variety of problems in which the presented techniques are useful. Broadly organized around the theme of applied Fourier analysis, the treatment covers classical applications in partial differential equations and boundary value problems, and a substantial number of topics associated with Laplace, Fourier, and discrete transform theories. Some advanced topics are explored in the final chapters such as short-time Fourier analysis and geometrically based transforms applicable to boundary value problems. The topics covered are useful in a variety of applied fields such as continuum mechanics, mathematical physics, control theory, and si...
Directory of Open Access Journals (Sweden)
Cenk Demirkır
2014-04-01
Full Text Available Plywood, one of the most important wood-based panels, has many usage areas, from traffic signs to building construction, in many countries. It is known that high-quality plywood panel manufacturing is achieved with good bonding under optimum pressing conditions, depending on the adhesive type. This study investigates the possibilities of using modern meta-heuristic hybrid artificial intelligence techniques, such as the IKE and AANN methods, for predicting the bonding strength of plywood panels. The study is composed of two main parts, experimental and analytical. Scots pine, maritime pine and European black pine logs were used as wood species. The pine veneers, peeled at 32°C and 50°C, were dried at 110°C, 140°C and 160°C. Phenol formaldehyde and melamine urea formaldehyde resins were used as adhesives. The EN 314-1 standard was used to determine the bonding shear strength values of the plywood panels in the experimental part of this study. Then the intuitive k-nearest neighbor estimator (IKE) and adaptive artificial neural network (AANN) were used to estimate the bonding strength of the panels. The best estimation performance was obtained from the MA metric for k-value = 10. The factor with the greatest effect on bonding strength was the adhesive type. Error rates were below 5% for both IKE and AANN. The proposed methods may therefore be recommended for estimating the bonding strength values of plywood panels.
Which DTW Method Applied to Marine Univariate Time Series Imputation
Phan , Thi-Thu-Hong; Caillault , Émilie; Lefebvre , Alain; Bigand , André
2017-01-01
International audience; Missing data are ubiquitous in all domains of applied science. Processing datasets containing missing values can lead to a loss of efficiency and unreliable results, especially for large missing sub-sequence(s). Therefore, the aim of this paper is to build a framework for filling missing values in univariate time series and to perform a comparison of different similarity metrics used for the imputation task. This allows us to suggest the most suitable methods for the imp...
Applying Qualitative Research Methods to Narrative Knowledge Engineering
O'Neill, Brian; Riedl, Mark
2014-01-01
We propose a methodology for knowledge engineering for narrative intelligence systems, based on techniques used to elicit themes in qualitative methods research. Our methodology uses coding techniques to identify actions in natural language corpora, and uses these actions to create planning operators and procedural knowledge, such as scripts. In an iterative process, coders create a taxonomy of codes relevant to the corpus, and apply those codes to each element of that corpus. These codes can...
APPLYING SPECTROSCOPIC METHODS ON ANALYSES OF HAZARDOUS WASTE
Dobrinić, Julijan; Kunić, Marija; Ciganj, Zlatko
2000-01-01
Abstract The paper presents results of measuring the content of heavy and other metals in waste samples from the hazardous waste disposal site of Sovjak near Rijeka. The preliminary design elaboration and the choice of the waste disposal sanification technology were preceded by the sampling and physico-chemical analyses of disposed waste, enabling its categorization. The following spectroscopic methods were applied on metal content analysis: Atomic absorption spectroscopy (AAS) and plas...
A new method of AHP applied to personal credit evaluation
Institute of Scientific and Technical Information of China (English)
JIANG Ming-hui; XIONG Qi; CAO Jing
2006-01-01
This paper presents a new negative judgment matrix that combines the advantages of the reciprocal judgment matrix and the fuzzy complementary judgment matrix, and then puts forth the properties of this new matrix. In view of these properties, this paper derives a clear sequencing formula for the new negative judgment matrix, which improves the sequencing principle of AHP. Finally, this new method is applied to personal credit evaluation to show its advantages of conciseness and swiftness.
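The paper's negative judgment matrix and its new sequencing formula are not reproduced in the abstract. For context, the sketch below computes AHP priority weights for a classical reciprocal judgment matrix using the standard geometric-mean row approximation; the pairwise comparison values are invented.

```python
import math

# Invented 3x3 reciprocal judgment matrix: A[i][j] is how much more
# important criterion i is than criterion j, with A[j][i] = 1 / A[i][j].
A = [[1,     3,   5],
     [1 / 3, 1,   2],
     [1 / 5, 1 / 2, 1]]

# Geometric-mean approximation of the principal eigenvector.
gm = [math.prod(row) ** (1 / len(row)) for row in A]
total = sum(gm)
weights = [g / total for g in gm]
print([round(w, 3) for w in weights])
```

The normalised row geometric means approximate the principal eigenvector of the reciprocal matrix, which is the conventional AHP priority ranking that the paper's new matrix and sequencing formula aim to improve upon.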
Novel biodosimetry methods applied to victims of the Goiania accident
International Nuclear Information System (INIS)
Straume, T.; Langlois, R.G.; Lucas, J.; Jensen, R.H.; Bigbee, W.L.; Ramalho, A.T.; Brandao-Mello, C.E.
1991-01-01
Two biodosimetric methods under development at the Lawrence Livermore National Laboratory were applied to five persons accidentally exposed to a 137Cs source in Goiania, Brazil. The methods used were somatic null mutations at the glycophorin A locus detected as missing proteins on the surface of blood erythrocytes and chromosome translocations in blood lymphocytes detected using fluorescence in-situ hybridization. Biodosimetric results obtained approximately 1 y after the accident using these new and largely unvalidated methods are in general agreement with results obtained immediately after the accident using dicentric chromosome aberrations. Additional follow-up of Goiania accident victims will (1) help provide the information needed to validate these new methods for use in biodosimetry and (2) provide independent estimates of dose
Newton-Krylov methods applied to nonequilibrium radiation diffusion
International Nuclear Information System (INIS)
Knoll, D.A.; Rider, W.J.; Olsen, G.L.
1998-01-01
The authors present results of applying a matrix-free Newton-Krylov method to a nonequilibrium radiation diffusion problem. Here, there is no use of operator splitting, and Newton's method is used to converge the nonlinearities within a time step. Since the nonlinear residual is formed, it is used to monitor convergence. It is demonstrated that a simple Picard-based linearization produces a sufficient preconditioning matrix for the Krylov method, thus obviating the need to form or store a Jacobian matrix for Newton's method. They discuss the possibility that the Newton-Krylov approach may allow larger time steps, without loss of accuracy, as compared to an operator-split approach where nonlinearities are not converged within a time step
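The matrix-free idea, Jacobian-vector products approximated by finite differences of the residual so that no Jacobian is ever formed or stored, can be sketched on a tiny invented system. A plain Richardson iteration stands in for the Krylov solver here, and no preconditioning is included; the paper's radiation diffusion problem is not reproduced.

```python
def F(x):
    # Invented nonlinear residual with a root at (1, 2).
    return [x[0] ** 2 + x[1] - 3.0,
            x[0] + x[1] ** 2 - 5.0]

def jv(x, v, eps=1e-7):
    # Matrix-free Jacobian-vector product: J(x) v ~ (F(x + eps v) - F(x)) / eps
    fx = F(x)
    fxe = F([xi + eps * vi for xi, vi in zip(x, v)])
    return [(a - b) / eps for a, b in zip(fxe, fx)]

def newton_krylov(x, outer=20, inner=60, alpha=0.25):
    for _ in range(outer):
        fx = F(x)
        if sum(f * f for f in fx) ** 0.5 < 1e-10:   # residual monitors convergence
            break
        # Inner matrix-free solve of J dx = -F by Richardson iteration,
        # a minimal stand-in for a Krylov solver such as GMRES.
        dx = [0.0] * len(x)
        for _ in range(inner):
            jdx = jv(x, dx)
            dx = [d + alpha * (-f - jd) for d, f, jd in zip(dx, fx, jdx)]
        x = [xi + di for xi, di in zip(x, dx)]
    return x

root = newton_krylov([2.0, 3.0])
print([round(v, 6) for v in root])
```

Note that only residual evaluations F(x) are needed: the directional difference in `jv` supplies exactly the product J v that the inner linear solver consumes, which is what makes the outer Newton iteration Jacobian-free.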
GPS surveying method applied to terminal area navigation flight experiments
Energy Technology Data Exchange (ETDEWEB)
Murata, M; Shingu, H; Satsushima, K; Tsuji, T; Ishikawa, K; Miyazawa, Y; Uchida, T [National Aerospace Laboratory, Tokyo (Japan)
1993-03-01
With the objective of evaluating the accuracy of new landing and navigation systems, such as the microwave landing guidance system and the Global Positioning System (GPS), flight experiments are being carried out using an experimental aircraft. The aircraft carries a GPS receiver, and its accuracy is evaluated by comparing the navigation results with reference trajectories estimated by a Kalman filter from laser tracking data of the aircraft. The GPS outputs position and velocity information in an earth-centered, earth-fixed frame called the World Geodetic System 1984 (WGS84). However, in order to compare the navigation results with the output of a reference trajectory sensor or another navigation sensor, it is necessary to construct a high-precision reference coordinate system based on WGS84. A method that applies GPS carrier-phase interferometric measurement to this problem was proposed and actually used in analyzing flight experiment data. In a case where the method was applied to evaluating stand-alone navigation accuracy, it was verified to be sufficiently effective and reliable not only for navigation analysis, but also in the aspect of navigational operations. 12 refs., 10 figs., 5 tabs.
Emami Niri, Mohammad; Amiri Kolajoobi, Rasool; Khodaiy Arbat, Mohammad; Shahbazi Raz, Mahdi
2018-06-01
Seismic wave velocities, along with petrophysical data, provide valuable information during the exploration and development stages of oil and gas fields. The compressional-wave velocity (VP) is acquired using conventional acoustic logging tools in many drilled wells, but the shear-wave velocity (VS) is recorded using advanced logging tools in only a limited number of wells, mainly because of the high operational costs. In addition, laboratory measurements of seismic velocities on core samples are expensive and time consuming, so alternative methods are often used to estimate VS. Heretofore, several empirical correlations have been proposed that predict VS from well logging measurements and petrophysical data such as VP, porosity and density. However, these empirical relations can be used only in limited cases. Intelligent systems and optimization algorithms are inexpensive, fast and efficient approaches for predicting VS. In this study, in addition to the widely used Greenberg–Castagna empirical method, we implement three relatively recently developed metaheuristic algorithms to construct linear and nonlinear models for predicting VS: teaching–learning based optimization, imperialist competitive and artificial bee colony algorithms. We demonstrate the applicability and performance of these algorithms for predicting VS using conventional well logs in two field data examples, a sandstone formation from an offshore oil field and a carbonate formation from an onshore oil field. We compared the VS estimated by each of the employed metaheuristic approaches with observed VS and also with values predicted by the Greenberg–Castagna relations. The results indicate that, for both sandstone and carbonate case studies, all three implemented metaheuristic algorithms are more efficient and reliable than the empirical correlation for predicting VS. The results also demonstrate that in both sandstone and carbonate case studies, the performance of an artificial bee
Methods for model selection in applied science and engineering.
Energy Technology Data Exchange (ETDEWEB)
Field, Richard V., Jr.
2004-10-01
Mathematical models are developed and used to study the properties of complex systems and/or modify these systems to satisfy some performance requirements in just about every area of applied science and engineering. A particular reason for developing a model, e.g., performance assessment or design, is referred to as the model use. Our objective is the development of a methodology for selecting a model that is sufficiently accurate for an intended use. Information on the system being modeled is, in general, incomplete, so that there may be two or more models consistent with the available information. The collection of these models is called the class of candidate models. Methods are developed for selecting the optimal member from a class of candidate models for the system. The optimal model depends on the available information, the selected class of candidate models, and the model use. Classical methods for model selection, including the method of maximum likelihood and Bayesian methods, as well as a method employing a decision-theoretic approach, are formulated to select the optimal model for numerous applications. There is no requirement that the candidate models be random. Classical methods for model selection ignore model use and require data to be available. Examples are used to show that these methods can be unreliable when data is limited. The decision-theoretic approach to model selection does not have these limitations, and model use is included through an appropriate utility function. This is especially important when modeling high risk systems, where the consequences of using an inappropriate model for the system can be disastrous. The decision-theoretic method for model selection is developed and applied for a series of complex and diverse applications. These include the selection of the: (1) optimal order of the polynomial chaos approximation for non-Gaussian random variables and stationary stochastic processes, (2) optimal pressure load model to be
Analysis of concrete beams using applied element method
Lincy Christy, D.; Madhavan Pillai, T. M.; Nagarajan, Praveen
2018-03-01
The Applied Element Method (AEM) is a displacement-based method of structural analysis. Some of its features are similar to those of the Finite Element Method (FEM). In AEM, the structure is analysed by dividing it into several elements, similar to FEM; but in AEM, elements are connected by springs instead of nodes as in the case of FEM. In this paper, the background to AEM is discussed and the necessary equations are derived. To illustrate the application of AEM, it has been used to analyse a plain concrete beam with fixed support conditions. The analysis is limited to 2-dimensional structures. It was found that the number of springs does not have much influence on the results. AEM could predict deflections and reactions with a reasonable degree of accuracy.
The Lattice Boltzmann Method applied to neutron transport
International Nuclear Information System (INIS)
Erasmus, B.; Van Heerden, F. A.
2013-01-01
In this paper the applicability of the Lattice Boltzmann Method to neutron transport is investigated. One of the main features of the Lattice Boltzmann method is the simultaneous discretization of the phase space of the problem, whereby particles are restricted to move on a lattice. An iterative solution of the operator form of the neutron transport equation is presented here, with the first collision source as the starting point of the iteration scheme. A full description of the discretization scheme is given, along with the quadrature set used for the angular discretization. An angular refinement scheme is introduced to increase the angular coverage of the problem phase space and to mitigate lattice ray effects. The method is applied to a model problem to investigate its applicability to neutron transport and the results are compared to a reference solution calculated, using MCNP. (authors)
Advanced methods for image registration applied to JET videos
Energy Technology Data Exchange (ETDEWEB)
Craciunescu, Teddy, E-mail: teddy.craciunescu@jet.uk [EURATOM-MEdC Association, NILPRP, Bucharest (Romania); Murari, Andrea [Consorzio RFX, Associazione EURATOM-ENEA per la Fusione, Padova (Italy); Gelfusa, Michela [Associazione EURATOM-ENEA – University of Rome “Tor Vergata”, Roma (Italy); Tiseanu, Ion; Zoita, Vasile [EURATOM-MEdC Association, NILPRP, Bucharest (Romania); Arnoux, Gilles [EURATOM/CCFE Fusion Association, Culham Science Centre, Abingdon, Oxon (United Kingdom)
2015-10-15
Graphical abstract: - Highlights: • Development of an image registration method for JET IR and fast visible cameras. • Method based on SIFT descriptors and coherent point drift points set registration technique. • Method able to deal with extremely noisy images and very low luminosity images. • Computation time compatible with the inter-shot analysis. - Abstract: The last years have witnessed a significant increase in the use of digital cameras on JET. They are routinely applied for imaging in the IR and visible spectral regions. One of the main technical difficulties in interpreting the data of camera based diagnostics is the presence of movements of the field of view. Small movements occur due to machine shaking during normal pulses while large ones may arise during disruptions. Some cameras show a correlation of image movement with change of magnetic field strength. For deriving unaltered information from the videos and for allowing correct interpretation an image registration method, based on highly distinctive scale invariant feature transform (SIFT) descriptors and on the coherent point drift (CPD) points set registration technique, has been developed. The algorithm incorporates a complex procedure for rejecting outliers. The method has been applied for vibrations correction to videos collected by the JET wide angle infrared camera and for the correction of spurious rotations in the case of the JET fast visible camera (which is equipped with an image intensifier). The method has proved to be able to deal with the images provided by this camera frequently characterized by low contrast and a high level of blurring and noise.
Jafari, Hamed; Salmasi, Nasser
2015-09-01
The nurse scheduling problem (NSP) has received a great amount of attention in recent years. In the NSP, the goal is to assign shifts to the nurses in order to satisfy the hospital's demand during the planning horizon by considering different objective functions. In this research, we focus on maximizing the nurses' preferences for working shifts and weekends off by considering several important factors, such as the hospital's policies, labor laws, governmental regulations, and the status of nurses at the end of the previous planning horizon, in one of the largest hospitals in Iran, i.e., Milad Hospital. Due to the shortage of available nurses, the minimum total number of required nurses is determined first. Then, a mathematical programming model is proposed to solve the problem optimally. Since the proposed research problem is NP-hard, a meta-heuristic algorithm based on simulated annealing (SA) is applied to heuristically solve the problem in a reasonable time. An initial feasible solution generator and several novel neighborhood structures are applied to enhance the performance of the SA algorithm. Inspired by our observations in Milad Hospital, random test problems are generated to evaluate the performance of the SA algorithm. The results of computational experiments indicate that the applied SA algorithm provides solutions with an average percentage gap of 5.49 % compared to the upper bounds obtained from the mathematical model. Moreover, the applied SA algorithm provides significantly better solutions in a reasonable time than the schedules provided by the head nurses.
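A minimal SA loop in the spirit of the paper can be sketched as follows. The instance (6 nurses, 7 days, 3 shifts, coverage demand of 2 per shift), the preference scores, the penalty weight and the cooling schedule are all invented; none of Milad Hospital's actual constraints or the paper's neighborhood structures are encoded.

```python
import math, random

random.seed(11)

n_nurses, n_days, n_shifts, demand = 6, 7, 3, 2
# Invented per-nurse preference score for each (day, shift).
pref = [[[random.random() for _ in range(n_shifts)]
         for _ in range(n_days)] for _ in range(n_nurses)]

def score(sched):
    total = sum(pref[n][d][sched[n][d]]
                for n in range(n_nurses) for d in range(n_days))
    for d in range(n_days):
        counts = [0] * n_shifts
        for n in range(n_nurses):
            counts[sched[n][d]] += 1
        # Heavy penalty for each nurse short of the per-shift demand.
        total -= 5 * sum(max(0, demand - c) for c in counts)
    return total

sched = [[random.randrange(n_shifts) for _ in range(n_days)]
         for _ in range(n_nurses)]
current = score(sched)
best, best_score = [row[:] for row in sched], current
T = 2.0
for _ in range(3000):
    n, d = random.randrange(n_nurses), random.randrange(n_days)
    old = sched[n][d]
    sched[n][d] = random.randrange(n_shifts)   # move: reassign one shift
    new = score(sched)
    # Metropolis acceptance: always take improvements, sometimes worse moves.
    if new >= current or random.random() < math.exp((new - current) / T):
        current = new
        if new > best_score:
            best, best_score = [row[:] for row in sched], new
    else:
        sched[n][d] = old
    T *= 0.999   # geometric cooling

print(round(best_score, 2))
```

Encoding coverage as a penalty rather than a hard constraint keeps every move feasible to evaluate, which is a common simplification; the paper instead uses a feasible-solution generator and tailored neighborhoods.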
Classification of Specialized Farms Applying Multivariate Statistical Methods
Directory of Open Access Journals (Sweden)
Zuzana Hloušková
2017-01-01
Full Text Available The paper is aimed at the application of advanced multivariate statistical methods to classify cattle-breeding farming enterprises by their economic size. The advantage of the model is its ability to use a few selected indicators, compared to the complex methodology of the current classification model, which requires knowledge of the detailed structure of herd turnover and of cultivated crops. The output of the paper is intended to be applied within farm structure research focused on the future development of Czech agriculture. As the data source, the farming enterprise database for 2014 from the FADN CZ system has been used. The predictive model proposed exploits knowledge of the actual size classes of the farms tested. Outcomes of the linear discriminant analysis multifactor classification method correctly classified 98 % of the Small farms and 100 % of the Large and Very Large enterprises. Only 58.11 % of the Medium-size farms were classified correctly. Some shortcomings of the presented process were found when discriminating between Medium and Small farms.
Metaheuristic approaches to order sequencing on a unidirectional picking line
Directory of Open Access Journals (Sweden)
AP de Villiers
2013-06-01
Full Text Available In this paper the sequencing of orders on a unidirectional picking line is considered. The aim of the order sequencing is to minimise the number of cycles travelled by a picker within the picking line to complete all orders. A tabu search, simulated annealing, a genetic algorithm, generalised extremal optimisation and a random local search are presented as possible solution approaches. Computational results based on real-life data instances are presented for these metaheuristics and compared to a lower bound and to the solutions used in practice. The random local search exhibits the best overall solution quality; however, the generalised extremal optimisation approach delivers comparable results in considerably shorter computational times.
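A random local search of the kind compared above can be sketched on a toy unidirectional picking line. The walking model and instance below are invented, and total distance walked stands in for the paper's cycle count; the real-life instances and the other four metaheuristics are not reproduced.

```python
import random

random.seed(2)

n_locs = 10
# Invented orders: each is a set of pick locations on a circular line.
orders = [sorted(random.sample(range(n_locs), random.randint(2, 4)))
          for _ in range(8)]

def distance(seq):
    # Walk forward only on a circular line of n_locs positions, collecting
    # each order's locations in the order they are encountered.
    pos, walked = 0, 0
    for o in seq:
        for loc in sorted(orders[o], key=lambda l: (l - pos) % n_locs):
            walked += (loc - pos) % n_locs
            pos = loc
    return walked

seq = list(range(len(orders)))
start = distance(seq)
best = start
for _ in range(2000):
    # Random move: swap two orders; keep the swap unless it is worse.
    i, j = random.sample(range(len(seq)), 2)
    seq[i], seq[j] = seq[j], seq[i]
    d = distance(seq)
    if d <= best:
        best = d
    else:
        seq[i], seq[j] = seq[j], seq[i]

print(start, best)
```

Accepting equal-cost swaps (the `<=`) lets the search drift across plateaus, which is often what makes a plain random local search surprisingly competitive, as the paper's results suggest.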
CASTING IMPROVEMENT BASED ON METAHEURISTIC OPTIMIZATION AND NUMERICAL SIMULATION
Directory of Open Access Journals (Sweden)
Radomir Radiša
2017-12-01
Full Text Available This paper presents the use of metaheuristic optimization techniques to support the improvement of the casting process. Genetic algorithm (GA), Ant Colony Optimization (ACO), Simulated Annealing (SA) and Particle Swarm Optimization (PSO) have been considered as optimization tools to define the geometry of the casting part's feeder. The proposed methodology has been demonstrated in the design of the feeder for casting a Pelton turbine bucket. The results of the optimization are the dimensional characteristics of the feeder, and the best result from all the implemented optimization processes has been adopted. Numerical simulation has been used to verify the validity of the presented design methodology and of the feeding-system optimization in the casting of the Pelton turbine bucket.
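A GA sketch in the spirit of the feeder-geometry optimization might look like this. The variable bounds, the casting modulus value and the volume-versus-modulus cost are invented surrogates standing in for the paper's simulation-based objective, which is not reproduced here.

```python
import math, random

random.seed(4)

bounds = [(4.0, 12.0), (6.0, 20.0)]   # feeder diameter, height in cm (assumed)
casting_modulus = 1.1                  # assumed casting modulus in cm

def cost(ind):
    d, h = ind
    r = d / 2
    volume = math.pi * r * r * h
    area = 2 * math.pi * r * h + 2 * math.pi * r * r
    modulus = volume / area            # volume-to-surface-area ratio
    # Rule-of-thumb surrogate: the feeder modulus should exceed ~1.2x the
    # casting modulus (so the feeder solidifies last); violations are
    # penalised heavily, otherwise minimise feeder volume (metal usage).
    return volume + 1000.0 * max(0.0, 1.2 * casting_modulus - modulus)

def clip(v, lo, hi):
    return max(lo, min(hi, v))

pop = [[random.uniform(*b) for b in bounds] for _ in range(30)]
for _ in range(100):
    pop.sort(key=cost)
    nxt = pop[:5]                      # elitism: keep the 5 best feeders
    while len(nxt) < 30:
        p1, p2 = random.sample(pop[:15], 2)
        child = [(a + b) / 2 for a, b in zip(p1, p2)]   # arithmetic crossover
        k = random.randrange(2)
        child[k] = clip(child[k] + random.gauss(0, 0.5), *bounds[k])
        nxt.append(child)
    pop = nxt

best = min(pop, key=cost)
print([round(v, 2) for v in best], round(cost(best), 1))
```

In the paper's workflow, the analytic surrogate above would be replaced by a casting-simulation evaluation of each candidate feeder, with the GA (or ACO, SA, PSO) supplying the candidates.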
Metrological evaluation of characterization methods applied to nuclear fuels
International Nuclear Information System (INIS)
Faeda, Kelly Cristina Martins; Lameiras, Fernando Soares; Camarano, Denise das Merces; Ferreira, Ricardo Alberto Neto; Migliorini, Fabricio Lima; Carneiro, Luciana Capanema Silva; Silva, Egonn Hendrigo Carvalho
2010-01-01
In manufacturing nuclear fuel, characterizations are performed in order to assure the minimization of harmful effects. Uranium dioxide is the substance most used as nuclear reactor fuel because of its many advantages, such as high stability even in contact with water at high temperatures, a high melting point, and a high capacity to retain fission products. Several methods are used for the characterization of nuclear fuels, such as thermogravimetric analysis for the O/U ratio; the penetration-immersion method, helium pycnometry and mercury porosimetry for density and porosity; the BET method for the specific surface; chemical analyses for relevant impurities; and the laser flash method for thermophysical properties. Specific tools are needed to control the diameter and sphericity of the microspheres and the properties of the coating layers (thickness, density, and degree of anisotropy). Other methods can also give information, such as scanning and transmission electron microscopy, X-ray diffraction, microanalysis, and secondary ion mass spectrometry for chemical analysis. The accuracy of measurement and the level of uncertainty of the resulting data are important. This work describes a general metrological characterization of some techniques applied to the characterization of nuclear fuel. Sources of measurement uncertainty were analyzed. The purpose is to summarize selected properties of UO2 that have been studied by CDTN in a program of fuel development for Pressurized Water Reactors (PWR). The selected properties are crucial for thermal-hydraulic codes used to study design-basis accidents. The work focused on the thermal characterization (thermal diffusivity and thermal conductivity) and the penetration-immersion method (density and open porosity) for UO2 samples. The thermal characterization of UO2 samples was determined by the laser flash method between room temperature and 448 K. The adaptive Monte Carlo Method was used to obtain the endpoints of the
Nuclear and nuclear related analytical methods applied in environmental research
International Nuclear Information System (INIS)
Popescu, Ion V.; Gheboianu, Anca; Bancuta, Iulian; Cimpoca, G. V; Stihi, Claudia; Radulescu, Cristiana; Oros Calin; Frontasyeva, Marina; Petre, Marian; Dulama, Ioana; Vlaicu, G.
2010-01-01
Nuclear analytical methods can be used for research activities in environmental studies such as water quality assessment, pesticide residues, global climatic change (transboundary), pollution and remediation. Heavy metal pollution is a problem associated with areas of intensive industrial activity. In this work the moss biomonitoring technique was employed to study atmospheric deposition in Dambovita County, Romania. Complementary nuclear and atomic analytical methods were also used: Neutron Activation Analysis (NAA), Atomic Absorption Spectrometry (AAS) and Inductively Coupled Plasma Atomic Emission Spectrometry (ICP-AES). These high-sensitivity analytical methods were used to determine the chemical composition of moss samples placed in areas with different industrial pollution sources. The concentrations of Cr, Fe, Mn, Ni and Zn were determined. The concentration of Fe in the same samples was determined using all of these methods, and very good agreement was obtained within statistical limits, which demonstrates that these analytical methods can be applied to a large spectrum of environmental samples with consistent results. (authors)
Applied systems ecology: models, data, and statistical methods
Energy Technology Data Exchange (ETDEWEB)
Eberhardt, L L
1976-01-01
In this report, systems ecology is largely equated to mathematical or computer simulation modelling. The need for models in ecology stems from the necessity to have an integrative device for the diversity of ecological data, much of which is observational, rather than experimental, as well as from the present lack of a theoretical structure for ecology. Different objectives in applied studies require specialized methods. The best predictive devices may be regression equations, often non-linear in form, extracted from much more detailed models. A variety of statistical aspects of modelling, including sampling, are discussed. Several aspects of population dynamics and food-chain kinetics are described, and it is suggested that the two presently separated approaches should be combined into a single theoretical framework. It is concluded that future efforts in systems ecology should emphasize actual data and statistical methods, as well as modelling.
Analysis of Brick Masonry Wall using Applied Element Method
Lincy Christy, D.; Madhavan Pillai, T. M.; Nagarajan, Praveen
2018-03-01
The Applied Element Method (AEM) is a versatile tool for structural analysis. Analysis is done by discretising the structure, as in the Finite Element Method (FEM). In AEM, however, elements are connected by a set of normal and shear springs instead of nodes. AEM is extensively used for the analysis of brittle materials. A brick masonry wall can be effectively analyzed in the framework of AEM, since the composite nature of masonry can be easily modelled using springs: the brick springs and mortar springs are assumed to be connected in series. The brick masonry wall is analyzed and the failure load is determined for different loading cases. The results were used to find the aspect ratio of brick that best strengthens a brick masonry wall.
Thermally stimulated current method applied to highly irradiated silicon diodes
Pintilie, I; Pintilie, I; Moll, Michael; Fretwurst, E; Lindström, G
2002-01-01
We propose an improved method for the analysis of Thermally Stimulated Currents (TSC) measured on highly irradiated silicon diodes. The proposed TSC formula for the evaluation of a set of TSC spectra obtained with different reverse biases leads not only to the concentrations of the electron and hole traps visible in the spectra but also gives an estimate of the concentration of defects which do not give rise to a peak in the 30-220 K TSC temperature range (very shallow or very deep levels). The method is applied to a diode irradiated with a neutron fluence of φn = 1.82×10^13 n/cm^2.
Li, Zixiang; Janardhanan, Mukund Nilakantan; Tang, Qiuhua; Nielsen, Peter
2018-05-01
This article presents the first method to simultaneously balance and sequence robotic mixed-model assembly lines (RMALB/S), which involves three sub-problems: task assignment, model sequencing and robot allocation. A new mixed-integer programming model is developed to minimize makespan and, using the CPLEX solver, small-size problems are solved to optimality. Two metaheuristics, a restarted simulated annealing algorithm and a co-evolutionary algorithm, are developed and improved to address this NP-hard problem. The restarted simulated annealing method replaces the current temperature with a new temperature to restart the search process. The co-evolutionary method uses a restart mechanism to generate a new population by modifying several vectors simultaneously. The proposed algorithms are tested on a set of benchmark problems and compared with five other high-performing metaheuristics. The proposed algorithms outperform their original editions and the benchmarked methods, and are able to solve the balancing and sequencing problem of a robotic mixed-model assembly line effectively and efficiently.
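The restart idea described in this abstract, replacing a frozen temperature with a fresh one and continuing from the best solution found so far, can be sketched with a toy simulated-annealing minimiser. This is an illustrative sketch, not the authors' RMALB/S implementation; the function and parameter names are invented for the example.

```python
import math
import random

def restarted_sa(cost, neighbor, x0, t0=1.0, cooling=0.95,
                 t_min=1e-3, restarts=3, seed=0):
    """Minimise `cost` by simulated annealing; when the temperature
    freezes, replace it with t0 and restart from the incumbent best."""
    rng = random.Random(seed)
    best = cur = x0
    for _ in range(restarts):
        t = t0                      # restart: reset the frozen temperature
        cur = best                  # continue from the best solution so far
        while t > t_min:
            cand = neighbor(cur, rng)
            delta = cost(cand) - cost(cur)
            # accept downhill moves always, uphill moves with Boltzmann prob.
            if delta < 0 or rng.random() < math.exp(-delta / t):
                cur = cand
            if cost(cur) < cost(best):
                best = cur
            t *= cooling            # geometric cooling schedule
    return best

# toy usage: minimise a 1-D quadratic starting far from the optimum
f = lambda x: (x - 2.0) ** 2
step = lambda x, rng: x + rng.uniform(-0.5, 0.5)
x_best = restarted_sa(f, step, x0=10.0)
```

In a scheduling context the real solution would be a permutation and `neighbor` a swap or insertion move; the restart logic is unchanged.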
Hybrid electrokinetic method applied to mix contaminated soil
Energy Technology Data Exchange (ETDEWEB)
Mansour, H.; Maria, E. [Dept. of Building Civil and Environmental Engineering, Concordia Univ., Montreal (Canada)
2001-07-01
Several industrial and municipal areas in North America are contaminated with heavy metals and petroleum products. This mixed contamination presents a particularly difficult remediation task when it occurs in clayey soil. The objective of this research was to find a method to clean up mixed-contaminated clayey soils; to this end, a multifunctional hybrid electrokinetic method was investigated. Clayey soil was contaminated with lead and nickel (heavy metals) at a level of 1000 ppm and with phenanthrene (a PAH) at 600 ppm. An electrokinetic surfactant supply system was applied to mobilize, transport and remove the phenanthrene. A chelating agent (EDTA) was also supplied electrokinetically to mobilize the heavy metals. The studies were performed on 8 lab-scale electrokinetic cells. The mixed-contaminated clayey soil was subjected to a DC total voltage gradient of 0.3 V/cm. The supplied liquids (surfactant and EDTA) were introduced over different periods of time (22 days, 42 days) in order to maximize the removal of contaminants. The pH, electrical parameters, volume supplied, and volume discharged were monitored continuously during each experiment. At the end of these tests the soil and catholyte were subjected to physico-chemical analysis. The paper discusses the results of the experiments, including the optimal energy use and removal efficiency of phenanthrene, as well as the transport and removal of the heavy metals. The results of this study can be applied to in-situ hybrid electrokinetic technology to remediate clayey sites contaminated with petroleum products mixed with heavy metals (e.g. manufactured gas plant sites). (orig.)
FC-TLBO: fully constrained meta-heuristic algorithm for abundance ...
Indian Academy of Sciences (India)
Omprakash Tembhurne
Keywords: hyperspectral unmixing; meta-heuristic approach; teaching-learning-based optimisation (TLBO).
A Multifactorial Analysis of Reconstruction Methods Applied After Total Gastrectomy
Directory of Open Access Journals (Sweden)
Oktay Büyükaşık
2010-12-01
Full Text Available Aim: The aim of this study was to evaluate the reconstruction methods applied after total gastrectomy in terms of postoperative symptomatology and nutrition. Methods: This retrospective study was conducted on 31 patients who underwent total gastrectomy due to gastric cancer in the 2nd Clinic of General Surgery, SSK Ankara Training Hospital. Six different reconstruction methods were used and analyzed in terms of age, sex and postoperative complications. One biopsy specimen from the esophagus and two from the jejunum were taken through upper gastrointestinal endoscopy in all cases, and late-period morphological and microbiological changes were examined. Postoperative weight change, dumping symptoms, reflux esophagitis, solid/liquid dysphagia, early satiety, postprandial pain, diarrhea and anorexia were assessed. Results: Of the 31 patients, 18 were male and 13 female; the youngest was 33 years old and the oldest 69. Reconstruction without a pouch was performed in 22 cases and with a pouch in 9 cases. Early satiety, postprandial pain, dumping symptoms, diarrhea and anemia were found most commonly in cases with reconstruction without a pouch. The rate of bacterial colonization of the jejunal mucosa was identical in both groups. Reflux esophagitis was seen most commonly with omega esophagojejunostomy (EJ), and least with Roux-en-Y, Tooley and Tanner 19 EJ. Conclusion: Reconstruction with a pouch performed after total gastrectomy is still a preferable method. (The Medical Bulletin of Haseki 2010; 48:126-31)
Single-Case Designs and Qualitative Methods: Applying a Mixed Methods Research Perspective
Hitchcock, John H.; Nastasi, Bonnie K.; Summerville, Meredith
2010-01-01
The purpose of this conceptual paper is to describe a design that mixes single-case (sometimes referred to as single-subject) and qualitative methods, hereafter referred to as a single-case mixed methods design (SCD-MM). Minimal attention has been given to the topic of applying qualitative methods to SCD work in the literature. These two…
Analytical methods applied to diverse types of Brazilian propolis
Directory of Open Access Journals (Sweden)
Marcucci Maria
2011-06-01
Full Text Available Abstract Propolis is a bee product, composed mainly of plant resins and beeswax; its chemical composition therefore varies with the geographic and plant origins of these resins, as well as with the species of bee. Brazil is an important supplier of propolis on the world market and, although the green-colored propolis from the southeast is the best known and most studied, several other types of propolis from Apis mellifera and native stingless bees (also called cerumen) can be found. Propolis is usually consumed as an extract, so the type of solvent and the extractive procedures employed further affect its composition. Methods used for extraction; analysis of the percentages of resins, wax and insoluble material in crude propolis; and determination of phenolic, flavonoid, amino acid and heavy metal contents are reviewed herein. Different chromatographic methods applied to the separation, identification and quantification of Brazilian propolis components, and their relative strengths, are discussed, as well as direct-insertion mass spectrometry fingerprinting. Propolis has been used as a popular remedy for several centuries for a wide array of ailments. Its antimicrobial properties, present in propolis from different origins, have been extensively studied. More recently, the anti-parasitic, anti-viral/immune-stimulating, healing, anti-tumor, anti-inflammatory, antioxidant and analgesic activities of diverse types of Brazilian propolis have been evaluated. The most common methods employed and overviews of their relative results are presented.
Teaching organization theory for healthcare management: three applied learning methods.
Olden, Peter C
2006-01-01
Organization theory (OT) provides a way of seeing, describing, analyzing, understanding, and improving organizations based on patterns of organizational design and behavior (Daft 2004). It gives managers models, principles, and methods with which to diagnose and fix organization structure, design, and process problems. Health care organizations (HCOs) face serious problems such as fatal medical errors, harmful treatment delays, misuse of scarce nurses, costly inefficiency, and service failures. Some of health care managers' most critical work involves designing and structuring their organizations so their missions, visions, and goals can be achieved, and in some cases so their organizations can survive. Thus, it is imperative that graduate healthcare management programs develop effective approaches for teaching OT to students who will manage HCOs. Guided by principles of education, three applied teaching/learning activities/assignments were created to teach OT in a graduate healthcare management program. These educational methods develop students' competency with OT applied to HCOs. The teaching techniques in this article may be useful to faculty teaching graduate courses in organization theory and related subjects such as leadership, quality, and operations management.
Six Sigma methods applied to cryogenic coolers assembly line
Ventre, Jean-Marc; Germain-Lacour, Michel; Martin, Jean-Yves; Cauquil, Jean-Marc; Benschop, Tonny; Griot, René
2009-05-01
Six Sigma methods have been applied to the manufacturing process of a rotary Stirling cooler, the RM2. The project is named NoVa, as the main goal of the Six Sigma approach is to reduce variability (No Variability). The project has been based on the DMAIC guideline, following five stages: Define, Measure, Analyse, Improve, Control. The objective was set on the rate of coolers passing the performance test at the first attempt, with a goal value of 95%. A team was gathered involving the people and skills acting on the RM2 manufacturing line. Measurement System Analysis (MSA) was applied to the test bench, and results after an R&R gage study showed that measurement is one of the root causes of variability in the RM2 process. Two more root causes were identified by the team after a process mapping analysis: the regenerator filling factor and the cleaning procedure. Causes of measurement variability were identified and eradicated, as shown by new results from the R&R gage. Experimental results show that the regenerator filling factor impacts process variability and affects yield. An improved process has been set up after a new calibration process for the test bench, a new filling procedure for the regenerator, and an additional cleaning stage were implemented. The objective of 95% of coolers passing the performance test at the first attempt has been reached and maintained for a significant period. The RM2 manufacturing process is now managed according to Statistical Process Control based on control charts. The improvement in process capability has enabled the introduction of a sample testing procedure before delivery.
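The process-capability language used in this abstract refers to the standard SPC indices Cp and Cpk. A minimal sketch of how they are computed (the sample data and specification limits below are invented for illustration, not the RM2 figures):

```python
import statistics

def capability_indices(samples, lsl, usl):
    """Compute the standard process-capability indices, assuming
    approximately normal data:
        Cp  = (USL - LSL) / (6*sigma)            -- potential capability
        Cpk = min(USL - mu, mu - LSL) / (3*sigma) -- accounts for centring
    """
    mu = statistics.mean(samples)
    sigma = statistics.stdev(samples)   # sample standard deviation
    cp = (usl - lsl) / (6.0 * sigma)
    cpk = min(usl - mu, mu - lsl) / (3.0 * sigma)
    return cp, cpk

# illustrative measurements centred on 10.0 with spec limits [7, 13]
cp, cpk = capability_indices([9.0, 10.0, 11.0], lsl=7.0, usl=13.0)
```

A Cpk comfortably above 1.33 is the usual threshold at which sample testing (rather than 100% testing) before delivery becomes defensible.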
Metrological evaluation of characterization methods applied to nuclear fuels
Energy Technology Data Exchange (ETDEWEB)
Faeda, Kelly Cristina Martins; Lameiras, Fernando Soares; Camarano, Denise das Merces; Ferreira, Ricardo Alberto Neto; Migliorini, Fabricio Lima; Carneiro, Luciana Capanema Silva; Silva, Egonn Hendrigo Carvalho, E-mail: kellyfisica@gmail.co, E-mail: fernando.lameiras@pq.cnpq.b, E-mail: dmc@cdtn.b, E-mail: ranf@cdtn.b, E-mail: flmigliorini@hotmail.co, E-mail: lucsc@hotmail.co, E-mail: egonn@ufmg.b [Centro de Desenvolvimento da Tecnologia Nuclear (CDTN/CNEN-MG), Belo Horizonte, MG (Brazil)
2010-07-01
In manufacturing the nuclear fuel, characterizations are performed in order to assure the minimization of harmful effects. The uranium dioxide is the most used substance as nuclear reactor fuel because of many advantages, such as: high stability even when it is in contact with water at high temperatures, high fusion point, and high capacity to retain fission products. Several methods are used for characterization of nuclear fuels, such as thermogravimetric analysis for the O/U ratio, the penetration-immersion method, helium pycnometry and mercury porosimetry for the density and porosity, the BET method for the specific surface, chemical analyses for relevant impurities, and the laser flash method for thermophysical properties. Specific tools are needed to control the diameter and the sphericity of the microspheres and the properties of the coating layers (thickness, density, and degree of anisotropy). Other methods can also give information, such as scanning and transmission electron microscopy, X-ray diffraction, microanalysis, and mass spectroscopy of secondary ions for chemical analysis. The accuracy of measurement and level of uncertainty of the resulting data are important. This work describes a general metrological characterization of some techniques applied to the characterization of nuclear fuel. Sources of measurement uncertainty were analyzed. The purpose is to summarize selected properties of UO2 that have been studied by CDTN in a program of fuel development for Pressurized Water Reactors (PWR). The selected properties are crucial for thermalhydraulic codes to study basic design accidents. The thermal characterization (thermal diffusivity and thermal conductivity) and the penetration-immersion method (density and open porosity) of UO2 samples were focused on. The thermal characterization of UO2 samples was determined by the laser flash method between room temperature and 448 K. The adaptive Monte Carlo Method was used to obtain the endpoints of
Applying systems ergonomics methods in sport: A systematic review.
Hulme, Adam; Thompson, Jason; Plant, Katherine L; Read, Gemma J M; Mclean, Scott; Clacy, Amanda; Salmon, Paul M
2018-04-16
As sports systems become increasingly more complex, competitive, and technology-centric, there is a greater need for systems ergonomics methods to consider the performance, health, and safety of athletes in context with the wider settings in which they operate. Therefore, the purpose of this systematic review was to identify and critically evaluate studies which have applied a systems ergonomics research approach in the context of sports performance and injury management. Five databases (PubMed, Scopus, ScienceDirect, Web of Science, and SPORTDiscus) were searched for the dates 01 January 1990 to 01 August 2017, inclusive, for original peer-reviewed journal articles and conference papers. Reported analyses were underpinned by a recognised systems ergonomics method, and study aims were related to the optimisation of sports performance (e.g. communication, playing style, technique, tactics, or equipment), and/or the management of sports injury (i.e. identification, prevention, or treatment). A total of seven articles were identified. Two articles were focussed on understanding and optimising sports performance, whereas five examined sports injury management. The methods used were the Event Analysis of Systemic Teamwork, Cognitive Work Analysis (the Work Domain Analysis Abstraction Hierarchy), Rasmussen's Risk Management Framework, and the Systems Theoretic Accident Model and Processes method. The individual sport application was distance running, whereas the team sports contexts examined were cycling, football, Australian Football League, and rugby union. The included systems ergonomics applications were highly flexible, covering both amateur and elite sports contexts. The studies were rated as valuable, providing descriptions of injury controls and causation, the factors influencing injury management, the allocation of responsibilities for injury prevention, as well as the factors and their interactions underpinning sports performance. Implications and future
The virtual fields method applied to spalling tests on concrete
Directory of Open Access Journals (Sweden)
Forquin P.
2012-08-01
Full Text Available For a decade, spalling techniques based on the use of a metallic Hopkinson bar put in contact with a concrete sample have been widely employed to characterize the dynamic tensile strength of concrete at strain rates ranging from a few tens to two hundred s−1. However, the processing method, mainly based on the velocity profile measured on the rear free surface of the sample (the Novikov formula), remains quite basic, and identification of the whole softening behaviour of the concrete is out of reach. In the present paper a new processing method is proposed based on the Virtual Fields Method (VFM). First, a digital high-speed camera is used to record pictures of a grid glued on the specimen. Next, full-field measurements are used to obtain the axial displacement field at the surface of the specimen. Finally, a specific virtual field is defined in the VFM equation to use the acceleration map as an alternative 'load cell'. This method, applied to three spalling tests, made it possible to identify Young's modulus during the test. It was shown that this modulus is constant during the initial compressive part of the test and decreases in the tensile part when micro-damage exists. It was also shown that in such a simple inertial test it is possible to reconstruct average axial stress profiles using only the acceleration data. It was then possible to construct local stress-strain curves and derive a tensile strength value.
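The Novikov formula mentioned in the abstract estimates the spall (dynamic tensile) strength from the velocity "pullback" on the rear free surface under the acoustic approximation. A minimal sketch; the numerical values are typical order-of-magnitude figures for concrete, assumed for illustration and not taken from the paper:

```python
def novikov_spall_strength(rho, c0, delta_v_pullback):
    """Novikov estimate of spall strength from the rear-free-surface
    velocity pullback: sigma_spall = 0.5 * rho * c0 * delta_v."""
    return 0.5 * rho * c0 * delta_v_pullback

rho = 2400.0   # kg/m^3, density of ordinary concrete (illustrative)
c0 = 4000.0    # m/s, 1-D elastic wave speed (illustrative)
dv = 1.0       # m/s, measured pullback velocity (illustrative)
sigma = novikov_spall_strength(rho, c0, dv)   # -> 4.8e6 Pa = 4.8 MPa
```

The VFM approach in the paper goes further by reconstructing full stress profiles from acceleration maps, but this one-line estimate is the baseline it improves upon.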
Solving the flexible job shop problem by hybrid metaheuristics-based multiagent model
Nouri, Houssem Eddine; Belkahla Driss, Olfa; Ghédira, Khaled
2018-03-01
The flexible job shop scheduling problem (FJSP) is a generalization of the classical job shop scheduling problem that allows an operation to be processed on one machine out of a set of alternative machines. The FJSP is an NP-hard problem consisting of two sub-problems: assignment and scheduling. In this paper, we propose solving the FJSP with a hybrid metaheuristics-based clustered holonic multiagent model. First, a neighborhood-based genetic algorithm (NGA) is applied by a scheduler agent for a global exploration of the search space. Second, a local search technique is used by a set of cluster agents to guide the search into promising regions of the search space and to improve the quality of the final NGA population. The efficiency of our approach stems from the flexible selection of the promising parts of the search space by the clustering operator after the genetic algorithm process, and from applying the intensification technique of tabu search, which restarts the search from a set of elite solutions to attain new dominant scheduling solutions. Computational results are presented using four sets of well-known benchmark instances from the literature. New upper bounds are found, showing the effectiveness of the presented approach.
Flood Hazard Mapping by Applying Fuzzy TOPSIS Method
Han, K. Y.; Lee, J. Y.; Keum, H.; Kim, B. J.; Kim, T. H.
2017-12-01
There are many technical methods to integrate various factors for flood hazard mapping. The purpose of this study is to suggest a methodology for integrated flood hazard mapping using MCDM (Multi-Criteria Decision Making). MCDM problems involve a set of alternatives that are evaluated on the basis of conflicting and incommensurate criteria. In this study, to apply MCDM to assessing flood risk, maximum flood depth, maximum velocity, and maximum travel time are considered as criteria, and each applied element is considered as an alternative. A scheme that finds the efficient alternative closest to an ideal value is an appropriate way to assess the flood risk of many element units (alternatives) based on various flood indices. Therefore TOPSIS, the most commonly used MCDM scheme, is adopted to create the flood hazard map. The indices for flood hazard mapping (maximum flood depth, maximum velocity, and maximum travel time) carry uncertainty, since the simulation results take various values depending on the flood scenario and topographical conditions. This kind of ambiguity in the indices can cause uncertainty in the flood hazard map. To account for the ambiguity and uncertainty of the criteria, fuzzy logic, which is able to handle ambiguous expressions, is introduced. In this paper, we produced a flood hazard map for levee-breach overflow using the fuzzy TOPSIS technique. We identified the areas where the highest grade of hazard was recorded through the integrated flood hazard map, and compared the produced flood hazard map with existing flood risk maps. We also expect that if the flood hazard mapping methodology suggested in this paper is applied to producing current flood risk maps, it will be possible to make new flood hazard maps that consider priorities for hazard areas and include more varied and important information than before. Keywords: flood hazard map; levee breach analysis; 2D analysis; MCDM; fuzzy TOPSIS
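The TOPSIS ranking described in this abstract can be sketched in its crisp form; the paper's fuzzy extension replaces the crisp scores with triangular fuzzy numbers, but the closeness-to-ideal logic is the same. The grid-cell scores and weights below are invented for illustration.

```python
import numpy as np

def topsis(matrix, weights, benefit):
    """Rank alternatives by closeness to the ideal solution (crisp TOPSIS).
    matrix: alternatives x criteria; benefit[j] is True when a larger value
    of criterion j means a higher rank."""
    m = np.asarray(matrix, dtype=float)
    norm = m / np.linalg.norm(m, axis=0)          # vector normalisation
    v = norm * np.asarray(weights, dtype=float)   # weighted normalised matrix
    ideal = np.where(benefit, v.max(axis=0), v.min(axis=0))
    anti = np.where(benefit, v.min(axis=0), v.max(axis=0))
    d_pos = np.linalg.norm(v - ideal, axis=1)     # distance to ideal
    d_neg = np.linalg.norm(v - anti, axis=1)      # distance to anti-ideal
    return d_neg / (d_pos + d_neg)                # closeness in [0, 1]

# three grid cells scored on max depth (m), max velocity (m/s), travel time
# (min); deeper/faster flow raises hazard, longer travel time lowers it
scores = topsis([[2.0, 1.5, 30.0],
                 [0.5, 0.3, 120.0],
                 [3.0, 2.0, 10.0]],
                weights=[0.4, 0.3, 0.3],
                benefit=[True, True, False])
```

A closeness score near 1 marks a cell that dominates on every hazard criterion; mapping these scores over all cells yields the integrated hazard map.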
DEFF Research Database (Denmark)
The following topics are dealt with: parallel scientific computing; numerical algorithms; parallel nonnumerical algorithms; cloud computing; evolutionary computing; metaheuristics; applied mathematics; GPU computing; multicore systems; hybrid architectures; hierarchical parallelism; HPC systems; power monitoring; energy monitoring; and distributed computing.
Trueba, Isidoro
fossil fuels to biofuels. In many ways biomass is a unique renewable resource. It can be stored and transported relatively easily, in contrast to renewable options such as wind and solar, which create intermittent electrical power that requires immediate consumption and a connection to the grid. This thesis presents two different models for the design optimization of a biomass-to-biorefinery logistics system through bio-inspired metaheuristic optimization considering multiple types of feedstocks. This work compares the performance and solutions obtained by two types of metaheuristic approaches: genetic algorithm and ant colony optimization. Compared to rigorous mathematical optimization methods or iterative algorithms, metaheuristics do not guarantee that a globally optimal solution will be found on some classes of problems. Problems with characteristics similar to the one presented in this thesis have previously been solved using linear programming, integer programming and mixed-integer programming methods. However, depending on the type of problem, these mathematical or complete methods may need exponential computation time in the worst case. This often leads to computation times too high for practical purposes. Therefore, this thesis develops two types of metaheuristic approaches for the design optimization of a biomass-to-biorefinery logistics system considering multiple types of feedstocks and shows that metaheuristics are highly suitable for solving hard combinatorial optimization problems such as the one addressed in this research work.
Applying sociodramatic methods in teaching transition to palliative care.
Baile, Walter F; Walters, Rebecca
2013-03-01
We introduce the technique of sociodrama, describe its key components, and illustrate how this simulation method was applied in a workshop format to address the challenge of discussing transition to palliative care. We describe how warm-up exercises prepared 15 learners who provide direct clinical care to patients with cancer for a dramatic portrayal of this dilemma. We then show how small-group brainstorming led to the creation of a challenging scenario wherein highly optimistic family members of a 20-year-old young man with terminal acute lymphocytic leukemia responded to information about the lack of further anticancer treatment with anger and blame toward the staff. We illustrate how the facilitators, using sociodramatic techniques of doubling and role reversal, helped learners to understand and articulate the hidden feelings of fear and loss behind the family's emotional reactions. By modeling effective communication skills, the facilitators demonstrated how key communication skills, such as empathic responses to anger and blame and using "wish" statements, could transform the conversation from one of conflict to one of problem solving with the family. We also describe how we set up practice dyads to give the learners an opportunity to try out new skills with each other. An evaluation of the workshop and similar workshops we conducted is presented. Copyright © 2013 U.S. Cancer Pain Relief Committee. Published by Elsevier Inc. All rights reserved.
Applying multi-resolution numerical methods to geodynamics
Davies, David Rhodri
Computational models yield inaccurate results if the underlying numerical grid fails to provide the necessary resolution to capture a simulation's important features. For the large-scale problems regularly encountered in geodynamics, inadequate grid resolution is a major concern. The majority of models involve multi-scale dynamics, being characterized by fine-scale upwelling and downwelling activity in a more passive, large-scale background flow. Such configurations, when coupled to the complex geometries involved, present a serious challenge for computational methods. Current techniques are unable to resolve localized features and, hence, such models cannot be solved efficiently. This thesis demonstrates, through a series of papers and closely-coupled appendices, how multi-resolution finite-element methods from the forefront of computational engineering can provide a means to address these issues. The problems examined achieve multi-resolution through one of two methods. In two-dimensions (2-D), automatic, unstructured mesh refinement procedures are utilized. Such methods improve the solution quality of convection dominated problems by adapting the grid automatically around regions of high solution gradient, yielding enhanced resolution of the associated flow features. Thermal and thermo-chemical validation tests illustrate that the technique is robust and highly successful, improving solution accuracy whilst increasing computational efficiency. These points are reinforced when the technique is applied to geophysical simulations of mid-ocean ridge and subduction zone magmatism. To date, successful goal-orientated/error-guided grid adaptation techniques have not been utilized within the field of geodynamics. The work included herein is therefore the first geodynamical application of such methods. In view of the existing three-dimensional (3-D) spherical mantle dynamics codes, which are built upon a quasi-uniform discretization of the sphere and closely coupled
Analytic methods in applied probability in memory of Fridrikh Karpelevich
Suhov, Yu M
2002-01-01
This volume is dedicated to F. I. Karpelevich, an outstanding Russian mathematician who made important contributions to applied probability theory. The book contains original papers focusing on several areas of applied probability and its uses in modern industrial processes, telecommunications, computing, mathematical economics, and finance. It opens with a review of Karpelevich's contributions to applied probability theory and includes a bibliography of his works. Other articles discuss queueing network theory, in particular, in heavy traffic approximation (fluid models). The book is suitable
Reactor calculation in coarse mesh by finite element method applied to matrix response method
International Nuclear Information System (INIS)
Nakata, H.
1982-01-01
The finite element method is applied to the solution of the modified formulation of the matrix-response method, aiming at reactor calculations on a coarse mesh. Good results are obtained with a short running time. The method is applicable to problems where heterogeneity is predominant and to depletion problems on coarse meshes where the burnup varies within a single coarse mesh, making the cross sections vary spatially as the depletion evolves. (E.G.) [pt
A new hybrid metaheuristic algorithm for wind farm micrositing
International Nuclear Information System (INIS)
Massan, S.U.R.; Wagan, A.I.; Shaikh, M.M.
2017-01-01
This work focuses on proposing a new algorithm, referred to as HMA (Hybrid Metaheuristic Algorithm), for the solution of the WTO (Wind Turbine Optimization) problem. It is well documented that turbines located behind one another suffer a power loss because the wind is obstructed by wake effects. It is required to reduce this wake loss through the effective placement of turbines using the new HMA. The HMA is derived from two basic algorithms, the DEA (Differential Evolution Algorithm) and the FA (Firefly Algorithm). The optimization is carried out on the N.O. Jensen model. The blending of DEA and FA into HMA is discussed, and the new algorithm is implemented to maximize power and minimize cost in a WTO problem. The results of HMA have been compared with the GA (Genetic Algorithm) used in some previous studies. The total power produced and the cost per unit turbine calculated for a wind farm using HMA, and their comparison with past approaches using single algorithms, show that there is a significant advantage in using the HMA compared to single algorithms. The first implementation of a new algorithm obtained by blending two single algorithms is a significant step towards learning the behavior of algorithms and their added advantages when used together. (author)
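The N.O. Jensen model on which the HMA's objective is evaluated treats each wake as a linearly expanding "top-hat" velocity deficit. A minimal sketch of the deficit calculation; the induction factor, wake decay constant, and wind speed below are common textbook choices, assumed for illustration and not taken from the article:

```python
import math

def jensen_deficit(x, r0, a=1.0 / 3.0, k=0.075):
    """Fractional velocity deficit a distance x downstream of a turbine of
    rotor radius r0 under the N.O. Jensen top-hat wake model:
        du/u0 = 2a / (1 + k*x/r0)**2
    a is the axial induction factor; k is the wake decay constant
    (~0.075 is a common onshore value)."""
    return 2.0 * a / (1.0 + k * x / r0) ** 2

def waked_speed(u0, deficits):
    """Combine overlapping wake deficits by root-sum-square, as is common
    in Jensen-model wind farm layout studies."""
    return u0 * (1.0 - math.sqrt(sum(d * d for d in deficits)))

u0 = 12.0                                   # free-stream wind speed, m/s
d1 = jensen_deficit(x=500.0, r0=20.0)       # deficit 500 m behind a turbine
u = waked_speed(u0, [d1])                   # effective speed at that point
```

A layout optimizer such as the HMA evaluates this deficit for every turbine pair and sums the resulting power (proportional to u cubed) over the farm.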
Hybrid Metaheuristic Approach for Nonlocal Optimization of Molecular Systems.
Dresselhaus, Thomas; Yang, Jack; Kumbhar, Sadhana; Waller, Mark P
2013-04-09
Accurate modeling of molecular systems requires a good knowledge of the structure; therefore, conformation searching/optimization is a routine necessity in computational chemistry. Here we present a hybrid metaheuristic optimization (HMO) algorithm, which combines ant colony optimization (ACO) and particle swarm optimization (PSO) for the optimization of molecular systems. The HMO implementation meta-optimizes the parameters of the ACO algorithm on-the-fly via the coupled PSO algorithm. The ACO parameters were optimized on a set of small difluorinated polyenes, where the parameters exhibited small variance as the size of the molecule increased. The HMO algorithm was validated by searching for the closed form of around 100 molecular balances. Compared to the gradient-based optimized molecular balance structures, the HMO algorithm was able to find low-energy conformations with an 87% success rate. Finally, the computational effort for generating low-energy conformation(s) for the phenylalanyl-glycyl-glycine tripeptide was approximately 60 CPU hours with the ACO algorithm, compared to 4 CPU years required for an exhaustive brute-force calculation.
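The core idea, an outer PSO tuning the parameters of an inner search on the fly, can be sketched in a few lines. This is an illustrative toy, not the authors' HMO code: the inner ACO is replaced by a simple (1+1) hill climber on a sphere function, and `inner_search`/`pso_meta` are hypothetical names.

```python
import random

def inner_search(step, dim=3, iters=200, seed=0):
    """Inner optimizer: a (1+1) hill climber whose step size is the
    parameter being meta-optimized (a stand-in for ACO parameters)."""
    rng = random.Random(seed)
    x = [rng.uniform(-5, 5) for _ in range(dim)]
    best = sum(v * v for v in x)          # sphere objective
    for _ in range(iters):
        cand = [v + rng.gauss(0, step) for v in x]
        f = sum(v * v for v in cand)
        if f < best:
            x, best = cand, f
    return best

def pso_meta(n_particles=8, iters=30, seed=1):
    """Outer PSO over the inner step size in [0.01, 2]."""
    rng = random.Random(seed)
    pos = [rng.uniform(0.01, 2.0) for _ in range(n_particles)]
    vel = [0.0] * n_particles
    pbest = pos[:]
    pbest_f = [inner_search(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[g], pbest_f[g]
    for _ in range(iters):
        for i in range(n_particles):
            r1, r2 = rng.random(), rng.random()
            vel[i] = (0.7 * vel[i] + 1.4 * r1 * (pbest[i] - pos[i])
                      + 1.4 * r2 * (gbest - pos[i]))
            pos[i] = min(2.0, max(0.01, pos[i] + vel[i]))
            f = inner_search(pos[i])
            if f < pbest_f[i]:
                pbest[i], pbest_f[i] = pos[i], f
                if f < gbest_f:
                    gbest, gbest_f = pos[i], f
    return gbest, gbest_f

step, value = pso_meta()
print(step, value)
```

The outer swarm never sees the inner objective directly; it only observes how well the inner search performs for a given parameter setting, which is the essence of on-the-fly meta-optimization.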
A New Hybrid Metaheuristic Algorithm for Wind Farm Micrositing
Directory of Open Access Journals (Sweden)
SHAFIQ-UR-REHMAN MASSAN
2017-07-01
Full Text Available This work focuses on proposing a new algorithm, referred to as HMA (Hybrid Metaheuristic Algorithm), for the solution of the WTO (Wind Turbine Optimization) problem. It is well documented that turbines located behind one another suffer a power loss because upstream turbines obstruct the wind (wake loss). This wake loss can be reduced by the effective placement of turbines using the new HMA, which is derived from two basic algorithms, the DEA (Differential Evolution Algorithm) and the FA (Firefly Algorithm). The optimization is carried out on the N.O. Jensen model. The blending of DEA and FA into HMA is discussed, and the new algorithm is implemented to maximize power and minimize cost in a WTO problem. The results obtained by HMA have been compared with the GA (Genetic Algorithm) used in some previous studies. The total power produced and the cost per unit turbine calculated for a wind farm using HMA, and their comparison with past approaches using single algorithms, show a significant advantage of the HMA over single algorithms. This first implementation of a new algorithm blending two single algorithms is a significant step towards understanding the behavior of algorithms and the added advantages of using them together.
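The N.O. Jensen wake model on which the optimization runs has a simple closed form: the wake radius grows linearly with downstream distance, so the velocity deficit decays quadratically. A minimal sketch follows; the function name and the default decay constant `k` and thrust coefficient `ct` are illustrative choices, not values taken from the paper.

```python
import math

def jensen_deficit(u0, x, rotor_r, k=0.075, ct=0.88):
    """Wind speed at downstream distance x behind a turbine (N.O. Jensen model).

    u0      : free-stream wind speed
    rotor_r : rotor radius
    k       : wake decay constant (typical onshore value)
    ct      : thrust coefficient; a = axial induction factor
    """
    a = 0.5 * (1 - math.sqrt(1 - ct))
    return u0 * (1 - 2 * a / (1 + k * x / rotor_r) ** 2)

u0, r = 12.0, 20.0
for x in (100, 200, 400, 800):
    print(x, round(jensen_deficit(u0, x, r), 2))
```

A micrositing optimizer such as the proposed HMA evaluates this deficit (summed over all upstream turbines) for every candidate layout, which is why cheap closed-form wake models are preferred inside metaheuristic loops.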
Meta-heuristic CRPS minimization for the calibration of short-range probabilistic forecasts
Mohammadi, Seyedeh Atefeh; Rahmani, Morteza; Azadi, Majid
2016-08-01
This paper deals with probabilistic short-range temperature forecasts over synoptic meteorological stations across Iran using non-homogeneous Gaussian regression (NGR). NGR creates a Gaussian forecast probability density function (PDF) from the ensemble output. The mean of the normal predictive PDF is a bias-corrected weighted average of the ensemble members, and its variance is a linear function of the raw ensemble variance. The coefficients for the mean and variance are estimated by minimizing the continuous ranked probability score (CRPS) over a training period. CRPS is a scoring rule for distributional forecasts. In the paper of Gneiting et al. (Mon Weather Rev 133:1098-1118, 2005), the Broyden-Fletcher-Goldfarb-Shanno (BFGS) method is used to minimize the CRPS. Since BFGS is a conventional optimization method with its own limitations, we suggest using particle swarm optimization (PSO), a robust meta-heuristic method, to minimize the CRPS. The ensemble prediction system used in this study consists of nine different configurations of the Weather Research and Forecasting (WRF) model for 48-h forecasts of temperature during autumn and winter of 2011 and 2012. The probabilistic forecasts were evaluated using several common verification scores, including the Brier score, attribute diagram and rank histogram. Results show that both BFGS and PSO find the optimal solution and show the same evaluation scores, but PSO can do this with a feasible random first guess and much lower computational complexity.
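What makes the NGR coefficient fit tractable is that the CRPS of a Gaussian predictive PDF has a closed form. The sketch below fits an NGR-style mean model `a + b*m` (with constant spread `s`) on synthetic data by minimizing the mean CRPS; plain random search stands in for the PSO advocated in the paper, and the data and variable names are illustrative.

```python
import math
import random

def crps_gaussian(mu, sigma, y):
    """Closed-form CRPS of a Gaussian predictive PDF N(mu, sigma^2)
    for an observation y (as in Gneiting et al. 2005)."""
    z = (y - mu) / sigma
    pdf = math.exp(-0.5 * z * z) / math.sqrt(2 * math.pi)
    cdf = 0.5 * (1 + math.erf(z / math.sqrt(2)))
    return sigma * (z * (2 * cdf - 1) + 2 * pdf - 1 / math.sqrt(math.pi))

# Synthetic "training period": ensemble mean m, observation y = 1 + 0.8 m + noise
rng = random.Random(0)
data = []
for _ in range(150):
    m = rng.uniform(-5, 5)
    data.append((m, 1.0 + 0.8 * m + rng.gauss(0, 0.5)))

def mean_crps(a, b, s):
    return sum(crps_gaussian(a + b * m, s, y) for m, y in data) / len(data)

# Plain random search as a stand-in for the PSO used in the paper
best, best_f = (0.0, 1.0, 1.0), mean_crps(0.0, 1.0, 1.0)
for _ in range(2000):
    cand = (rng.uniform(-2, 2), rng.uniform(0, 2), rng.uniform(0.05, 2))
    f = mean_crps(*cand)
    if f < best_f:
        best, best_f = cand, f
print(best, round(best_f, 3))
```

Because every CRPS evaluation is a cheap closed-form expression, a population-based optimizer can afford thousands of objective evaluations per training window.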
Directory of Open Access Journals (Sweden)
Ali Akbar Hasani
2016-11-01
Full Text Available In this paper, a comprehensive model is proposed to design a network for a multi-period, multi-echelon, multi-product, inventory-controlled supply chain. Various marketing strategies and guerrilla marketing approaches are considered in the design process under a static competition condition. The goal of the proposed model is to respond efficiently to customers’ demands in the presence of pre-existing competitors and price inelasticity of demands. The proposed optimization model considers multiple objectives that incorporate both the market share and the total profit of the considered supply chain network simultaneously. To tackle the proposed multi-objective mixed-integer nonlinear programming model, an efficient hybrid meta-heuristic algorithm is developed that incorporates a Taguchi-based non-dominated sorting genetic algorithm-II and particle swarm optimization. A variable neighborhood decomposition search is applied to enhance the local search process of the proposed hybrid solution algorithm. Computational results illustrate that the proposed model and solution algorithm are notably efficient in dealing with competitive pressure by adopting proper marketing strategies.
Roozitalab, Ali; Asgharizadeh, Ezzatollah
2013-12-01
Warranty is now an integral part of each product. Since its length is directly related to the cost of production, it should be set so as to maximize revenue generation and customer satisfaction. Furthermore, based on the behavior of customers, it is assumed that increasing the warranty period earns the trust of more customers and leads to more sales until the market is saturated. We should bear in mind that different groups of consumers have different consumption behaviors and that the performance of the product has a direct impact on the failure rate over its life. Therefore, the optimum duration differs for every group; in practice, however, we cannot offer different warranty periods to different customer groups. Consequently, using the cuckoo meta-heuristic optimization algorithm, we search for a common period for the entire population. The results, with high convergence, offer a term length that maximizes the aforementioned goals simultaneously. The approach was tested using real data from an appliance company. The results indicate a significant increase in sales when the optimization approach was applied: offering a longer warranty increased revenue from sales and, rather than reducing profit margins, increased them.
Razavi Termeh, Seyed Vahid; Kornejady, Aiding; Pourghasemi, Hamid Reza; Keesstra, Saskia
2018-02-15
Flood is one of the most destructive natural disasters, causing great financial and life losses every year. Therefore, producing susceptibility maps for flood management is necessary in order to reduce its harmful effects. The aim of the present study is to map flood hazard over the Jahrom Township in Fars Province using a combination of adaptive neuro-fuzzy inference systems (ANFIS) with different metaheuristic algorithms, namely ant colony optimization (ACO), genetic algorithm (GA), and particle swarm optimization (PSO), and to compare their accuracy. A total of 53 flood locations were identified, 35 of which were randomly selected to model flood susceptibility, while the remaining 16 locations were used to validate the models. Learning vector quantization (LVQ), one of the supervised neural network methods, was employed to estimate factor importance. Nine flood conditioning factors, namely slope degree, plan curvature, altitude, topographic wetness index (TWI), stream power index (SPI), distance from river, land use/land cover, rainfall, and lithology, were selected, and the corresponding maps were prepared in ArcGIS. The frequency ratio (FR) model was used to assign weights to each class within each controlling factor; the weights were then transferred into MATLAB software for further analyses and for combination with the metaheuristic models. The ANFIS-PSO was found to be the most practical model in terms of producing a highly focused flood susceptibility map, with a smaller spatial spread of the highly susceptible classes. The chi-square result attests to the same, as ANFIS-PSO had the highest spatial differentiation among flood susceptibility classes over the study area. The area under the curve (AUC) obtained from the ROC curve indicated accuracies of 91.4%, 91.8%, 92.6% and 94.5% for the respective models of FR, ANFIS-ACO, ANFIS-GA, and ANFIS-PSO ensembles. So, the ANFIS-PSO ensemble was introduced as the most accurate model for flood susceptibility mapping in the study area.
Directory of Open Access Journals (Sweden)
Angel A. Juan
2015-12-01
Full Text Available Many combinatorial optimization problems (COPs) encountered in real-world logistics, transportation, production, healthcare, financial, telecommunication, and computing applications are NP-hard in nature. These real-life COPs are frequently characterized by their large-scale sizes and the need for obtaining high-quality solutions in short computing times, thus requiring the use of metaheuristic algorithms. Metaheuristics benefit from different random-search and parallelization paradigms, but they frequently assume that the problem inputs, the underlying objective function, and the set of optimization constraints are deterministic. However, uncertainty is all around us, which often makes deterministic models oversimplified versions of real-life systems. After completing an extensive review of related work, this paper describes a general methodology that allows for extending metaheuristics through simulation to solve stochastic COPs. ‘Simheuristics’ allow modelers to deal with real-life uncertainty in a natural way by integrating simulation (in any of its variants) into a metaheuristic-driven framework. These optimization-driven algorithms rely on the fact that efficient metaheuristics already exist for the deterministic version of the corresponding COP. Simheuristics also facilitate the introduction of risk and/or reliability analysis criteria during the assessment of alternative high-quality solutions to stochastic COPs. Several examples of applications in different fields illustrate the potential of the proposed methodology.
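The simheuristic recipe, run an efficient metaheuristic on the deterministic COP and then assess the resulting solution by Monte Carlo simulation, including a reliability criterion, can be sketched on a small stochastic travelling-salesman instance. Everything here (2-opt as the metaheuristic, lognormal travel-time noise, the instance itself) is an illustrative assumption, not the paper's setup.

```python
import random

rng = random.Random(42)
n = 7
# Mean travel times between n sites; realized times are lognormal around the mean
mean = [[0 if i == j else rng.uniform(5, 20) for j in range(n)] for i in range(n)]

def det_cost(tour):
    """Deterministic objective: tour length under mean travel times."""
    return sum(mean[tour[i]][tour[(i + 1) % n]] for i in range(n))

def two_opt(tour):
    """Deterministic metaheuristic stage: 2-opt local search on mean times."""
    improved = True
    while improved:
        improved = False
        for i in range(1, n - 1):
            for j in range(i + 1, n):
                cand = tour[:i] + tour[i:j + 1][::-1] + tour[j + 1:]
                if det_cost(cand) < det_cost(tour):
                    tour, improved = cand, True
    return tour

def simulate(tour, reps=500, threshold=None):
    """Simulation stage: expected cost plus a reliability criterion."""
    totals = []
    for _ in range(reps):
        t = sum(mean[tour[i]][tour[(i + 1) % n]] * rng.lognormvariate(0, 0.3)
                for i in range(n))
        totals.append(t)
    exp = sum(totals) / reps
    rel = sum(1 for t in totals if t <= threshold) / reps if threshold else None
    return exp, rel

tour = two_opt(list(range(n)))
exp_cost, reliability = simulate(tour, threshold=1.2 * det_cost(tour))
print(det_cost(tour), exp_cost, reliability)
```

The key design choice is that the expensive simulation is applied only to the promising solutions the deterministic metaheuristic produces, not to every candidate it visits.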
Event based neutron activation spectroscopy and analysis algorithm using MLE and meta-heuristics
International Nuclear Information System (INIS)
Wallace, B.
2014-01-01
Techniques used in neutron activation analysis are often dependent on the experimental setup. In the context of developing a portable and high efficiency detection array, good energy resolution and half-life discrimination are difficult to obtain with traditional methods given the logistic and financial constraints. An approach different from that of spectrum addition and standard spectroscopy analysis was needed. The use of multiple detectors prompts the need for a flexible storage of acquisition data to enable sophisticated post processing of information. Analogously to what is done in heavy ion physics, gamma detection counts are stored as two-dimensional events. This enables post-selection of energies and time frames without the need to modify the experimental setup. This method of storage also permits the use of more complex analysis tools. Given the nature of the problem at hand, a light and efficient analysis code had to be devised. A thorough understanding of the physical and statistical processes involved was used to create a statistical model. Maximum likelihood estimation was combined with meta-heuristics to produce a sophisticated curve-fitting algorithm. Simulated and experimental data were fed into the analysis code prompting positive results in terms of half-life discrimination, peak identification and noise reduction. The code was also adapted to other fields of research such as heavy ion identification of the quasi-target (QT) and quasi-particle (QP). The approach used seems to be able to translate well into other fields of research. (author)
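The combination of maximum likelihood estimation with a metaheuristic can be illustrated on a toy activation-decay problem: Poisson counts from an exponentially decaying source plus a constant background, with the Poisson negative log-likelihood minimized by a minimal differential-evolution loop. The model, rates, and half-life below are invented for the sketch and are not the author's data.

```python
import math
import random

rng = random.Random(7)
TRUE_A, TRUE_T12, TRUE_B = 4.0, 30.0, 0.2   # rate (1/s), half-life (s), background

def poisson(lam):
    """Knuth's method; adequate for the modest rates used here."""
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

bins = [(t, t + 5.0) for t in range(0, 150, 5)]

def expected(A, t12, B, t0, t1):
    """Expected counts in [t0, t1] for rate A*exp(-lambda t) + B."""
    lam = math.log(2) / t12
    return A / lam * (math.exp(-lam * t0) - math.exp(-lam * t1)) + B * (t1 - t0)

counts = [poisson(expected(TRUE_A, TRUE_T12, TRUE_B, t0, t1)) for t0, t1 in bins]

def nll(p):
    """Poisson negative log-likelihood (constant terms dropped)."""
    A, t12, B = p
    if A <= 0 or t12 <= 0 or B < 0:
        return float("inf")
    s = 0.0
    for (t0, t1), c in zip(bins, counts):
        mu = expected(A, t12, B, t0, t1)
        if mu <= 0:
            return float("inf")
        s += mu - c * math.log(mu)
    return s

# Minimal differential-evolution loop as the metaheuristic layer
pop = [[rng.uniform(10, 300) / 50, rng.uniform(5, 100), rng.uniform(0, 2)]
       for _ in range(20)]
fit = [nll(p) for p in pop]
for _ in range(200):
    for i in range(20):
        a, b, c = rng.sample([p for j, p in enumerate(pop) if j != i], 3)
        trial = [a[k] + 0.7 * (b[k] - c[k]) if rng.random() < 0.9 else pop[i][k]
                 for k in range(3)]
        f = nll(trial)
        if f < fit[i]:
            pop[i], fit[i] = trial, f
best = pop[min(range(20), key=lambda i: fit[i])]
print(best)
```

Fitting the likelihood of the raw counts, rather than least-squares fitting a smoothed spectrum, is what lets the half-life remain identifiable even at low statistics.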
An Automatic Multilevel Image Thresholding Using Relative Entropy and Meta-Heuristic Algorithms
Directory of Open Access Journals (Sweden)
Josue R. Cuevas
2013-06-01
Full Text Available Multilevel thresholding has long been considered one of the most popular techniques for image segmentation. Multilevel thresholding outputs a gray-scale image in which more details from the original picture are kept, while binary thresholding can only analyze the image in two colors, usually black and white. However, the multilevel thresholding technique has two major problems: it is time consuming, i.e., finding appropriate threshold values can take an exceptionally long computation time; and defining a proper number of thresholds or levels that will keep most of the relevant details of the original image is a difficult task. In this study a new evaluation function based on the Kullback-Leibler information distance, also known as relative entropy, is proposed. A property of this new function is that it can help determine the number of thresholds automatically. To offset the expensive computational effort of traditional exhaustive search methods, this study establishes a procedure that combines the relative entropy and meta-heuristics. The experiments performed in this study show that the proposed procedure not only provides good segmentation results when compared with a well-known technique such as Otsu’s method, but also constitutes a very efficient approach.
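The search problem can be made concrete with a small sketch: two thresholds on a trimodal intensity histogram, scored by within-class variance (an Otsu-style stand-in for the paper's relative-entropy criterion), with random search replacing the exhaustive scan over all threshold pairs. The synthetic pixel populations are illustrative.

```python
import random

rng = random.Random(3)
# Synthetic gray levels: three pixel populations, as in a trimodal histogram
pixels = ([int(rng.gauss(50, 8)) for _ in range(300)] +
          [int(rng.gauss(120, 10)) for _ in range(300)] +
          [int(rng.gauss(200, 8)) for _ in range(300)])
pixels = [min(255, max(0, p)) for p in pixels]

def objective(t1, t2):
    """Within-class variance for two thresholds; a simple stand-in for the
    relative-entropy (Kullback-Leibler) criterion proposed in the paper."""
    classes = [[p for p in pixels if p <= t1],
               [p for p in pixels if t1 < p <= t2],
               [p for p in pixels if p > t2]]
    total = 0.0
    for c in classes:
        if not c:
            return float("inf")
        m = sum(c) / len(c)
        total += sum((p - m) ** 2 for p in c)
    return total / len(pixels)

# Random search replaces the exhaustive scan over all ~32k threshold pairs
best, best_f = None, float("inf")
for _ in range(1000):
    t1 = rng.randrange(1, 254)
    t2 = rng.randrange(t1 + 1, 255)
    f = objective(t1, t2)
    if f < best_f:
        best, best_f = (t1, t2), f
print(best, round(best_f, 1))
```

With L gray levels and k thresholds the exhaustive search costs O(L^k) objective evaluations, which is exactly the growth a meta-heuristic sidesteps.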
Event based neutron activation spectroscopy and analysis algorithm using MLE and metaheuristics
Wallace, Barton
2014-03-01
Techniques used in neutron activation analysis are often dependent on the experimental setup. In the context of developing a portable and high efficiency detection array, good energy resolution and half-life discrimination are difficult to obtain with traditional methods [1] given the logistic and financial constraints. An approach different from that of spectrum addition and standard spectroscopy analysis [2] was needed. The use of multiple detectors prompts the need for a flexible storage of acquisition data to enable sophisticated post processing of information. Analogously to what is done in heavy ion physics, gamma detection counts are stored as two-dimensional events. This enables post-selection of energies and time frames without the need to modify the experimental setup. This method of storage also permits the use of more complex analysis tools. Given the nature of the problem at hand, a light and efficient analysis code had to be devised. A thorough understanding of the physical and statistical processes [3] involved was used to create a statistical model. Maximum likelihood estimation was combined with metaheuristics to produce a sophisticated curve-fitting algorithm. Simulated and experimental data were fed into the analysis code prompting positive results in terms of half-life discrimination, peak identification and noise reduction. The code was also adapted to other fields of research such as heavy ion identification of the quasi-target (QT) and quasi-particle (QP). The approach used seems to be able to translate well into other fields of research.
A new hybrid meta-heuristic algorithm for optimal design of large-scale dome structures
Kaveh, A.; Ilchi Ghazaan, M.
2018-02-01
In this article a hybrid algorithm based on a vibrating particles system (VPS) algorithm, multi-design variable configuration (Multi-DVC) cascade optimization, and an upper bound strategy (UBS) is presented for global optimization of large-scale dome truss structures. The new algorithm is called MDVC-UVPS in which the VPS algorithm acts as the main engine of the algorithm. The VPS algorithm is one of the most recent multi-agent meta-heuristic algorithms mimicking the mechanisms of damped free vibration of single degree of freedom systems. In order to handle a large number of variables, cascade sizing optimization utilizing a series of DVCs is used. Moreover, the UBS is utilized to reduce the computational time. Various dome truss examples are studied to demonstrate the effectiveness and robustness of the proposed method, as compared to some existing structural optimization techniques. The results indicate that the MDVC-UVPS technique is a powerful search and optimization method for optimizing structural engineering problems.
Valuing national effects of digital health investments: an applied method.
Hagens, Simon; Zelmer, Jennifer; Frazer, Cassandra; Gheorghiu, Bobby; Leaver, Chad
2015-01-01
This paper describes an approach that has been applied to value national outcomes of investments by federal, provincial and territorial governments, clinicians and healthcare organizations in digital health. Hypotheses are used to develop a model, which is revised and populated based upon the available evidence. Quantitative national estimates and qualitative findings are produced and validated through structured peer review processes. This methodology has been applied in four studies since 2008.
Dose rate reduction method for NMCA applied BWR plants
International Nuclear Information System (INIS)
Nagase, Makoto; Aizawa, Motohiro; Ito, Tsuyoshi; Hosokawa, Hideyuki; Varela, Juan; Caine, Thomas
2012-09-01
BRAC (BWR Radiation Assessment and Control) dose rate is used as an indicator of the incorporation of activated corrosion by-products into BWR recirculation piping, which is known to be a significant contributor to the dose rate received by workers during refueling outages. In order to reduce radiation exposure of the workers during the outage, it is desirable to keep BRAC dose rates as low as possible. After HWC was adopted to reduce IGSCC, a BRAC dose rate increase was observed in many plants. As a countermeasure to these rapid dose rate increases under HWC conditions, Zn injection was widely adopted in the United States and Europe, resulting in a reduction of BRAC dose rates. However, BRAC dose rates in several plants remain high, prompting the industry to continue to investigate methods to achieve further reductions. In recent years a large portion of the BWR fleet has adopted NMCA (NobleChem TM ) to enhance the hydrogen injection effect to suppress SCC. After NMCA, especially OLNC (On-Line NobleChem TM ), BRAC dose rates were observed to decrease. In some OLNC-applied BWR plants this reduction was observed year after year, reaching a new, lower equilibrium level. These dose rate reduction trends suggest that further dose reduction might be obtained by combining Pt and Zn injection. Therefore, laboratory experiments and in-plant tests were carried out to evaluate the effect of Pt and Zn on Co-60 deposition behaviour. Firstly, laboratory experiments were conducted to study the effect of noble metal deposition on Co deposition on stainless steel surfaces. Polished type 316 stainless steel coupons were prepared, and some of them were OLNC treated in the test loop before the Co deposition test. Water chemistry conditions to simulate HWC were as follows: dissolved oxygen, hydrogen and hydrogen peroxide were below 5 ppb, 100 ppb and 0 ppb (no addition), respectively. Zn was injected to target a concentration of 5 ppb. The test was conducted up to 1500 hours at 553 K. Test
International Nuclear Information System (INIS)
Chou, Jui-Sheng; Ngo, Ngoc-Tri
2016-01-01
Highlights: • This study develops a novel time-series sliding window forecast system. • The system integrates metaheuristics, machine learning and time-series models. • Site experiment of smart grid infrastructure is installed to retrieve real-time data. • The proposed system accurately predicts energy consumption in residential buildings. • The forecasting system can help users minimize their electricity usage. - Abstract: Smart grids are a promising solution to the rapidly growing power demand because they can considerably increase building energy efficiency. This study developed a novel time-series sliding window metaheuristic optimization-based machine learning system for predicting real-time building energy consumption data collected by a smart grid. The proposed system integrates a seasonal autoregressive integrated moving average (SARIMA) model and metaheuristic firefly algorithm-based least squares support vector regression (MetaFA-LSSVR) model. Specifically, the proposed system fits the SARIMA model to linear data components in the first stage, and the MetaFA-LSSVR model captures nonlinear data components in the second stage. Real-time data retrieved from an experimental smart grid installed in a building were used to evaluate the efficacy and effectiveness of the proposed system. A k-week sliding window approach is proposed for employing historical data as input for the novel time-series forecasting system. The prediction system yielded high and reliable accuracy rates in 1-day-ahead predictions of building energy consumption, with a total error rate of 1.181% and mean absolute error of 0.026 kW h. Notably, the system demonstrates an improved accuracy rate in the range of 36.8–113.2% relative to those of the linear forecasting model (i.e., SARIMA) and nonlinear forecasting models (i.e., LSSVR and MetaFA-LSSVR). Therefore, end users can further apply the forecasted information to enhance efficiency of energy usage in their buildings, especially
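The two-stage decomposition, a linear time-series model for the linear component and a machine-learning model for the nonlinear residual, can be sketched with simple stand-ins: ordinary least squares on trend and daily-harmonic features in place of SARIMA, and an hour-of-day residual average in place of MetaFA-LSSVR. The synthetic load series and all names below are illustrative, not the paper's data or code.

```python
import math
import random

rng = random.Random(1)
T = 24 * 40
def true_signal(t):
    base = 10 + 0.01 * t + 3 * math.sin(2 * math.pi * t / 24)
    spike = 4.0 if t % 24 in (18, 19) else 0.0   # evening peak a sinusoid misses
    return base + spike
y = [true_signal(t) + rng.gauss(0, 0.4) for t in range(T)]
train, test = list(range(0, 24 * 35)), list(range(24 * 35, T))

# Stage 1: linear model (stand-in for SARIMA) fit by normal equations
def feats(t):
    return [1.0, t, math.sin(2 * math.pi * t / 24), math.cos(2 * math.pi * t / 24)]

def solve(A, b):
    """Gaussian elimination with partial pivoting for the normal equations."""
    n = len(A)
    for i in range(n):
        p = max(range(i, n), key=lambda r: abs(A[r][i]))
        A[i], A[p] = A[p], A[i]; b[i], b[p] = b[p], b[i]
        for r in range(i + 1, n):
            f = A[r][i] / A[i][i]
            A[r] = [a - f * c for a, c in zip(A[r], A[i])]
            b[r] -= f * b[i]
    x = [0.0] * n
    for i in reversed(range(n)):
        x[i] = (b[i] - sum(A[i][j] * x[j] for j in range(i + 1, n))) / A[i][i]
    return x

k = 4
X = [feats(t) for t in train]
A = [[sum(X[r][i] * X[r][j] for r in range(len(train))) for j in range(k)]
     for i in range(k)]
b = [sum(X[r][i] * y[t] for r, t in enumerate(train)) for i in range(k)]
w = solve(A, b)
lin = lambda t: sum(wi * fi for wi, fi in zip(w, feats(t)))

# Stage 2: nonlinear residual model (stand-in for MetaFA-LSSVR):
# the mean training residual for each hour of the day
resid = {}
for t in train:
    resid.setdefault(t % 24, []).append(y[t] - lin(t))
corr = {h: sum(v) / len(v) for h, v in resid.items()}

mae_lin = sum(abs(y[t] - lin(t)) for t in test) / len(test)
mae_hyb = sum(abs(y[t] - (lin(t) + corr[t % 24])) for t in test) / len(test)
print(round(mae_lin, 3), round(mae_hyb, 3))
```

The residual stage only has to learn what the linear stage systematically misses (here, the evening peak), which is why such hybrids tend to beat either stage alone.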
Directory of Open Access Journals (Sweden)
Masoud Rabbani
2017-02-01
Full Text Available Nowadays, fiber optics, having greater bandwidth and being more efficient than similar technologies, is counted among the most important tools for data transfer. In this article, an integrated mathematical model for a three-level fiber-optic distribution network is presented that considers the backbone and local access networks simultaneously, in which the backbone network is a ring and the access networks have a star-star topology. The aim of the model is to determine the location of the central offices and splitters, how connections are made between central offices, and the allocation of each demand node to a splitter or central office such that the costs of fiber-optic wiring and concentrator installation are minimized. Moreover, each user’s desired bandwidth should be provided efficiently. The proposed model is validated with GAMS software on small-sized problems; the model is then solved by two meta-heuristic methods, differential evolution (DE) and a genetic algorithm (GA), on large-scale problems, and the results of the two algorithms are compared with respect to computational time and the objective function value obtained. Finally, a sensitivity analysis is provided. Keywords: Fiber-optic, telecommunication network, hub-location, passive splitter, three-level network.
The harmonics detection method based on neural network applied ...
African Journals Online (AJOL)
Several different methods have been used to sense load currents and extract its ... in order to produce a reference current in shunt active power filters (SAPF), and ... technique compared to other similar methods are found quite satisfactory by ...
Muon radiography method for fundamental and applied research
Alexandrov, A. B.; Vladymyrov, M. S.; Galkin, V. I.; Goncharova, L. A.; Grachev, V. M.; Vasina, S. G.; Konovalova, N. S.; Malovichko, A. A.; Managadze, A. K.; Okat'eva, N. M.; Polukhina, N. G.; Roganova, T. M.; Starkov, N. I.; Tioukov, V. E.; Chernyavsky, M. M.; Shchedrina, T. V.
2017-12-01
This paper focuses on the basic principles of the muon radiography method, reviews the major muon radiography experiments, and presents the first results in Russia obtained by the authors using this method based on emulsion track detectors.
Methodical Aspects of Applying Strategy Map in an Organization
Piotr Markiewicz
2013-01-01
One of important aspects of strategic management is the instrumental aspect included in a rich set of methods and techniques used at particular stages of strategic management process. The object of interest in this study is the development of views and the implementation of strategy as an element of strategic management and instruments in the form of methods and techniques. The commonly used method in strategy implementation and measuring progress is Balanced Scorecard (BSC). The method was c...
Classical and modular methods applied to Diophantine equations
Dahmen, S.R.
2008-01-01
Deep methods from the theory of elliptic curves and modular forms have been used to prove Fermat's last theorem and solve other Diophantine equations. These so-called modular methods can often benefit from information obtained by other, classical, methods from number theory; and vice versa. In our
The pseudo-harmonics method applied to depletion calculation
International Nuclear Information System (INIS)
Silva, F.C. da; Amaral, J.A.C.; Thome, Z.D.
1989-01-01
In this paper, a new method for performing depletion calculations, based on the Pseudo-Harmonics perturbation method, was developed. The fuel burnup was considered as a global perturbation, and the multigroup diffusion equations were rewritten so as to treat the soluble boron concentration as the eigenvalue. By doing this, the critical boron concentration can be obtained by a perturbation method. A test of the new method was performed for an H2O-cooled, D2O-moderated reactor. Comparison with direct calculation showed that this method is very accurate and efficient. (author) [pt
Comparing the performance of different meta-heuristics for unweighted parallel machine scheduling
Directory of Open Access Journals (Sweden)
Adamu, Mumuni Osumah
2015-08-01
Full Text Available This article considers the due window scheduling problem of minimising the number of early and tardy jobs on identical parallel machines. This problem is known to be NP-complete, and thus finding an optimal solution is unlikely. Three meta-heuristics and their hybrids are proposed, and extensive computational experiments are conducted. The purpose of this paper is to compare the performance of these meta-heuristics and their hybrids and to determine the best among them. Detailed comparative tests have also been conducted to analyse the different heuristics, with the simulated annealing hybrid giving the best result.
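A minimal version of one of the compared meta-heuristics, simulated annealing, applied to a toy instance of the problem (a common due window, identical parallel machines, jobs dispatched to the least-loaded machine in permutation order) might look as follows. The instance data and the list-scheduling dispatch rule are illustrative assumptions, not the article's experimental setup.

```python
import math
import random

rng = random.Random(5)
M, N = 3, 18
proc = [rng.randint(1, 10) for _ in range(N)]
window = (5, 40)   # common due window [e, d]

def n_early_tardy(perm):
    """Dispatch jobs in permutation order to the least-loaded machine and
    count jobs completing outside the due window (early or tardy)."""
    load = [0] * M
    bad = 0
    for j in perm:
        m = min(range(M), key=lambda i: load[i])
        load[m] += proc[j]
        if not (window[0] <= load[m] <= window[1]):
            bad += 1
    return bad

def anneal(iters=4000, t0=3.0):
    cur = list(range(N))
    rng.shuffle(cur)
    cur_f = n_early_tardy(cur)
    best, best_f = cur[:], cur_f
    for k in range(iters):
        temp = t0 * (1 - k / iters) + 1e-9          # linear cooling schedule
        i, j = rng.randrange(N), rng.randrange(N)
        cand = cur[:]
        cand[i], cand[j] = cand[j], cand[i]          # swap neighborhood
        f = n_early_tardy(cand)
        if f <= cur_f or rng.random() < math.exp((cur_f - f) / temp):
            cur, cur_f = cand, f
            if f < best_f:
                best, best_f = cand[:], f
    return best, best_f

perm, bad = anneal()
print(bad)
```

A hybrid in the article's sense would seed or refine this annealing run with another heuristic; the annealing core itself stays unchanged.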
Waste classification and methods applied to specific disposal sites
International Nuclear Information System (INIS)
Rogers, V.C.
1979-01-01
An adequate definition of the classes of radioactive wastes is necessary for regulating their disposal. A classification system is proposed in which wastes are classified according to characteristics relating to their disposal. Several specific sites are analyzed with the methodology in order to gain insight into the classification of radioactive wastes. Also presented is an analysis of ocean dumping as it applies to waste classification. 5 refs
nuclear and atomic methods applied in the determination of some
African Journals Online (AJOL)
NAA is a quantitative and qualitative method for the precise determination of a number of major, minor and trace elements in different types of geological, environmental and biological samples. It is based on nuclear reaction between neutron and target nuclei of a sample material. It is a useful method for the simultaneous.
Instructions for applying inverse method for reactivity measurement
International Nuclear Information System (INIS)
Milosevic, M.
1988-11-01
This report is a brief description of the completed method for reactivity measurement. It contains a description of the experimental procedure, the needed instrumentation, and the computer code IM for determining reactivity. The objective of this instruction manual is to enable experiments and reactivity measurements on any critical system according to the methods adopted at the RB reactor.
The spectral volume method as applied to transport problems
International Nuclear Information System (INIS)
McClarren, Ryan G.
2011-01-01
We present a new spatial discretization for transport problems: the spectral volume method. This method, first developed by Wang for computational fluid dynamics, divides each computational cell into several sub-cells and enforces particle balance on each of these sub-cells. These sub-cells are also used to build a polynomial reconstruction in the cell. The idea of dividing cells into sub-cells is a generalization of the simple corner balance and other similar schemes. The spectral volume method preserves particle conservation and the asymptotic diffusion limit. We present results from the method on two transport problems in slab geometry using discrete ordinates and second- through sixth-order spectral volume schemes. The numerical results demonstrate the accuracy and the preservation of the diffusion limit of the spectral volume method. Future work will explore possible benefits of the scheme for high-performance computing and for resolving diffusive boundary layers. (author)
Literature Review of Applying Visual Method to Understand Mathematics
Directory of Open Access Journals (Sweden)
Yu Xiaojuan
2015-01-01
Full Text Available As a new method of understanding mathematics, visualization offers a new way of understanding mathematical principles and phenomena via image thinking and geometric explanation. It aims to deepen the understanding of the nature of concepts or phenomena and to enhance the cognitive ability of learners. This paper collates and summarizes the application of this visual method to the understanding of mathematics. It also reviews the existing research, gives a visual demonstration of Euler’s formula, introduces the application of the method to solving relevant mathematical problems, and points out the differences and similarities between the visualization method and the numerical-graphic combination method, as well as matters needing attention in its application.
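The visual demonstration of Euler's formula mentioned above has a direct numerical counterpart: e^{iθ} coincides with cos θ + i·sin θ, and the classic picture of (1 + iθ/n)^n spiraling onto the unit circle can be checked for a large n. A short sketch:

```python
import cmath
import math

# Euler's formula: e^{iθ} = cos θ + i·sin θ.  The usual visualization shows
# (1 + iθ/n)^n -- n small rotations-and-stretches -- converging to the
# point on the unit circle at angle θ as n grows.
theta = math.pi / 3
lhs = cmath.exp(1j * theta)
rhs = complex(math.cos(theta), math.sin(theta))
approx = (1 + 1j * theta / 1000) ** 1000
print(abs(lhs - rhs), abs(approx - lhs))
```

The first printed value is numerically zero (the identity holds exactly), while the second shrinks as n grows, which is precisely what the spiral picture conveys geometrically.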
Methodical Aspects of Applying Strategy Map in an Organization
Directory of Open Access Journals (Sweden)
Piotr Markiewicz
2013-06-01
Full Text Available One of the important aspects of strategic management is the instrumental aspect, comprising a rich set of methods and techniques used at particular stages of the strategic management process. The object of interest in this study is the development of views on, and the implementation of, strategy as an element of strategic management, and instruments in the form of methods and techniques. The commonly used method in strategy implementation and measuring progress is the Balanced Scorecard (BSC). The method was created as a result of implementing the project “Measuring performance in the Organization of the future” of 1990, completed by a team under the supervision of David Norton (Kaplan, Norton 2002). The developed method was used first of all to evaluate performance by decomposing a strategy into four perspectives and identifying measures of achievement. In the middle of the 1990s the method was improved by enriching it, first of all, with a strategy map, in which the process of transition of intangible assets into tangible financial effects is reflected (Kaplan, Norton 2001). A strategy map enables illustration of the cause-and-effect relationships between processes in all four perspectives and performance indicators at the level of the organization. The purpose of the study being prepared is to present the methodical conditions of using strategy maps in the strategy implementation process in organizations of different natures.
Applying a life cycle approach to project management methods
Biggins, David; Trollsund, F.; Høiby, A.L.
2016-01-01
Project management is increasingly important to organisations because projects are the method by which organisations respond to their environment. A key element within project management is the standards and methods that are used to control and conduct projects, collectively known as project management methods (PMMs) and exemplified by PRINCE2 and the Project Management Institute’s and the Association for Project Management’s Bodies of Knowledge (PMBOK and APMBOK). The purpose of t...
Method for curing alkyd resin compositions by applying ionizing radiation
International Nuclear Information System (INIS)
Watanabe, T.; Murata, K.; Maruyama, T.
1975-01-01
An alkyd resin composition is prepared by dissolving a polymerizable alkyd resin having an oil length of from 10 to 50 percent in a vinyl monomer. The polymerizable alkyd resin is obtained by a half-esterification reaction of an acid anhydride having a polymerizable unsaturated group with an alkyd resin modified with conjugated unsaturated oil and having at least one reactive hydroxyl group per molecule. The alkyd resin composition thus obtained is coated on an article, and ionizing radiation is applied to the article to cure the coated film thereon. (U.S.)
The integral equation method applied to eddy currents
International Nuclear Information System (INIS)
Biddlecombe, C.S.; Collie, C.J.; Simkin, J.; Trowbridge, C.W.
1976-04-01
An algorithm for the numerical solution of eddy current problems is described, based on the direct solution of the integral equation for the potentials. In this method only the conducting and iron regions need to be divided into elements, and there are no boundary conditions. Results from two computer programs using this method on iron-free problems for various two-dimensional geometries are presented and compared with analytic solutions. (author)
Apply of torque method at rationalization of work
Directory of Open Access Journals (Sweden)
Bandurová Miriam
2001-03-01
Full Text Available The aim of the study was to analyse the time consumption of the cylinder-grinder profession by the torque method. The method of torque observation is used to detect the kinds and magnitudes of time losses, the share of the individual kinds of time consumption, and the causes of the losses. In this way it is possible to determine the coefficient of employment and recovery of workers in an organizational unit. The advantages of a torque survey are the low cost of acquiring the information and the small burden it places on the worker and on the observer, who is easily trained; it is a mentally acceptable method for the subjects of the survey. The torque surveys found and quantified reserves in the activity of the cylinder grinders: time losses amount to as much as 8% of working time. With 5-shift service and an average shift staffing of 4.4 grinders (from the statistical information of the service), the losses at cylinder grinding amount to 1.48 workers for the whole centre. On the basis of this information it was recommended to cancel one job position (cylinder grinder) and to reduce the staff by one grinder. A further position cannot be cancelled, because the cylinder grindery must adapt to the grinding line in the number of polished cylinders per shift, and the stock of semi-finished polished cylinders cannot be high owing to frequent changes in the grinding area and in the assortment. This contribution confirms the usefulness of the torque method as one of the methods to be used during job rationalization.
Thermoluminescence as a dating method applied to the Morocco Neolithic
International Nuclear Information System (INIS)
Ousmoi, M.
1989-09-01
Thermoluminescence is an absolute dating method which is well adapted to the study of burnt clays and thus of the prehistoric ceramics belonging to the Neolithic period. The purpose of this study is to establish a first absolute chronology of the northern Moroccan Neolithic between 3000 and 7000 years before present, together with some improvements of TL dating. The first part of the thesis contains some hypotheses about the Moroccan Neolithic and some problems to solve. We then study the TL dating method along with new procedures to improve the quality of the results, such as the shift of quartz TL peaks or the crushing of samples. The methods employed, using 24 samples belonging to various civilisations, are the quartz inclusion method and the fine grain technique. For the dosimetry, several methods were used: determination of the K2O content, alpha counting, and site dosimetry using TL dosimeters and a scintillation counter. The results bring some interesting answers to the archaeological questions and improve the chronological scheme of the northern Moroccan Neolithic: development of the old cardial Neolithic in the north, and perhaps in the centre of Morocco (the region of Rabat), between 5500 and 7000 before present; development of the recent middle Neolithic around 4000-5000 before present, with a protocampaniforme (Skhirat) slightly older than the campaniforme recognized in the south of Spain; and development of the Bronze Age around 2000-4000 before present [fr
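At the core of the dating procedure sketched above is the TL age equation: the accumulated (equivalent) dose recorded by the ceramic divided by the annual dose rate from the dosimetry measurements. A minimal sketch with purely illustrative numbers (not values from the thesis):

```python
def tl_age_ka(paleodose_gy, annual_dose_gy_per_ka):
    """Thermoluminescence age in ka: accumulated dose / annual dose rate.

    paleodose_gy          : equivalent dose accumulated since firing (Gy)
    annual_dose_gy_per_ka : dose rate from K, U, Th and cosmic rays (Gy/ka)
    """
    return paleodose_gy / annual_dose_gy_per_ka

# Illustrative only: an 18 Gy equivalent dose and a 3 Gy/ka annual dose
# rate give an age of 6 ka, inside the 3000-7000 year window studied here.
age = tl_age_ka(18.0, 3.0)
```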
Modal method for crack identification applied to reactor recirculation pump
International Nuclear Information System (INIS)
Miller, W.H.; Brook, R.
1991-01-01
Nuclear reactors have been operating and producing useful electricity for many years. Within the last few years, several plants have found cracks in the reactor coolant pump shaft near the thermal barrier. The method and results described herein show the analytical results of using a modal analysis test method to determine the presence, size, and location of a shaft crack. The authors have previously demonstrated that the test method can analytically and experimentally identify shaft cracks as small as five percent (5%) of the shaft diameter. Due to small differences in material property distribution, the attempt to identify cracks smaller than 3% of the shaft diameter has been shown to be impractical. The rotor dynamics model includes a detailed motor rotor, external weights and inertias, and realistic total support stiffness. Results of the rotor dynamics model have been verified through a comparison with on-site vibration test data
Boron autoradiography method applied to the study of steels
International Nuclear Information System (INIS)
Gugelmeier, R.; Barcelo, G.N.; Boado, J.H.; Fernandez, C.
1986-01-01
The state of the boron contained in the steel microstructure is determined. Neutron autoradiography is used, permitting boron distribution images to be obtained and giving additional information which is difficult to acquire by other methods. The application of the method is described: it is based on the neutron irradiation of a polished steel sample, over which a cellulose nitrate sheet or other appropriate material is fixed to constitute the detector. The particles generated by the neutron-boron interaction affect the detector sheet, which is subsequently developed with a chemical treatment and can be observed under the optical microscope. In the case of materials used for the construction of nuclear reactors, special attention must be given to the presence of boron, since, owing to its exceptionally high neutron absorption capacity, even the smallest quantities of boron are important. The adaptation of the method to metallurgical problems allows a correlation to be obtained between the boron distribution images and the material's microstructure. (M.E.L.) [es
Nonstandard Finite Difference Method Applied to a Linear Pharmacokinetics Model
Directory of Open Access Journals (Sweden)
Oluwaseun Egbelowo
2017-05-01
Full Text Available We extend the nonstandard finite difference method of solution to the study of pharmacokinetic-pharmacodynamic models. Pharmacokinetic (PK) models are commonly used to predict drug concentrations that drive controlled intravenous (I.V.) transfers (or infusions) and oral transfers, while pharmacokinetic and pharmacodynamic (PD) interaction models are used to provide predictions of drug concentrations affecting the response to these clinical drugs. We structure a nonstandard finite difference (NSFD) scheme for the relevant system of equations which models this pharmacokinetic process. We compare the results obtained to standard methods. The scheme is dynamically consistent and reliable in replicating complex dynamic properties of the relevant continuous models for varying step sizes. This study provides assistance in understanding the long-term behavior of the drug in the system, and validation of the efficiency of the nonstandard finite difference scheme as the method of choice.
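The abstract does not spell out the scheme itself; for the linear one-compartment elimination model dC/dt = -kC, a standard NSFD construction in the Mickens style replaces the step size h in the forward difference by the denominator function phi(h) = (1 - exp(-k h))/k, which makes the discrete solution exact at the grid points for any step size. A sketch under that assumption:

```python
import math

def nsfd_decay(c0, k, h, steps):
    """Nonstandard finite difference scheme for dC/dt = -k*C.

    The forward-Euler step h is replaced by the denominator function
    phi(h) = (1 - exp(-k*h)) / k, so each update is
    C_{n+1} = C_n - k*phi(h)*C_n = C_n * exp(-k*h),
    i.e. the scheme reproduces the exact decay at every grid point.
    """
    phi = (1.0 - math.exp(-k * h)) / k
    c = c0
    out = [c]
    for _ in range(steps):
        c = c - k * phi * c
        out.append(c)
    return out

# Even a coarse step h = 2 reproduces C(t) = C0*exp(-k*t) exactly,
# where forward Euler (k*h = 1) would be badly inaccurate.
traj = nsfd_decay(c0=10.0, k=0.5, h=2.0, steps=5)
```

The dynamic consistency claimed in the abstract shows up here in miniature: the discrete trajectory stays positive and monotonically decaying for every h, which forward Euler guarantees only for k*h < 1.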
Applying Nyquist's method for stability determination to solar wind observations
Klein, Kristopher G.; Kasper, Justin C.; Korreck, K. E.; Stevens, Michael L.
2017-10-01
The role instabilities play in governing the evolution of solar and astrophysical plasmas is a matter of considerable scientific interest. The large number of sources of free energy accessible to such nearly collisionless plasmas makes general modeling of unstable behavior, accounting for the temperatures, densities, anisotropies, and relative drifts of a large number of populations, analytically difficult. We therefore seek a general method of stability determination that may be automated for future analysis of solar wind observations. This work describes an efficient application of the Nyquist instability method to the Vlasov dispersion relation appropriate for hot, collisionless, magnetized plasmas, including the solar wind. The algorithm recovers the familiar proton temperature anisotropy instabilities, as well as instabilities that had been previously identified using fits extracted from in situ observations in Gary et al. (2016). Future proposed applications of this method are discussed.
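The machinery behind the Nyquist method is the argument principle: as the frequency traverses a closed contour, the number of times the image of the dispersion function winds around the origin equals the number of enclosed zeros, i.e. unstable roots. A toy sketch of that counting step, using a polynomial stand-in for the Vlasov dispersion relation (the real calculation involves plasma dispersion functions and is far more involved):

```python
import cmath

def winding_number(f, center, radius, n=20000):
    """Count zeros of an analytic function f inside a circular contour.

    By the argument principle, the winding number of f(z) about the
    origin as z traverses the contour equals the number of enclosed
    zeros (for f with no poles). The phase is accumulated step by
    step with unwrapping of the +/- pi jumps.
    """
    total = 0.0
    prev = cmath.phase(f(center + radius))
    for k in range(1, n + 1):
        z = center + radius * cmath.exp(2j * cmath.pi * k / n)
        ph = cmath.phase(f(z))
        d = ph - prev
        if d > cmath.pi:        # unwrap downward jump
            d -= 2 * cmath.pi
        elif d < -cmath.pi:     # unwrap upward jump
            d += 2 * cmath.pi
        total += d
        prev = ph
    return round(total / (2 * cmath.pi))

# Stand-in "dispersion relation" with known roots at 1j, 2, and -3j.
D = lambda w: (w - 1j) * (w - 2) * (w + 3j)

# A contour around 0.5j with radius 1 encloses only the root at w = 1j,
# so exactly one "unstable" root is reported.
unstable = winding_number(D, center=0.5j, radius=1.0)
```

The automation discussed in the paper amounts to evaluating such winding numbers over contours enclosing the unstable half-plane, for dispersion relations built from observed plasma parameters.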
Efficient electronic structure methods applied to metal nanoparticles
DEFF Research Database (Denmark)
Larsen, Ask Hjorth
of efficient approaches to density functional theory and the application of these methods to metal nanoparticles. We describe the formalism and implementation of localized atom-centered basis sets within the projector augmented wave method. Basis sets allow for a dramatic increase in performance compared… The basis set method is used to study the electronic effects for the contiguous range of clusters up to several hundred atoms. The s-electrons hybridize to form electronic shells consistent with the jellium model, leading to electronic magic numbers for clusters with full shells. Large electronic gaps… and jumps in Fermi level near magic numbers can lead to alkali-like or halogen-like behaviour when main-group atoms adsorb onto gold clusters. A non-self-consistent Newns-Anderson model is used to more closely study the chemisorption of main-group atoms on magic-number Au clusters. The behaviour at magic…
Variance reduction methods applied to deep-penetration problems
International Nuclear Information System (INIS)
Cramer, S.N.
1984-01-01
All deep-penetration Monte Carlo calculations require variance reduction methods. Before beginning with a detailed approach to these methods, several general comments concerning deep-penetration calculations by Monte Carlo, the associated variance reduction, and the similarities and differences of these with regard to non-deep-penetration problems will be addressed. The experienced practitioner of Monte Carlo methods will easily find exceptions to any of these generalities, but it is felt that these comments will aid the novice in understanding some of the basic ideas and nomenclature. Also, from a practical point of view, the discussions and developments presented are oriented toward use of the computer codes which are presented in segments of this Monte Carlo course
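As a concrete illustration of why deep-penetration problems force variance reduction, consider estimating the uncollided transmission exp(-tau) through a purely absorbing slab of optical thickness tau = 20: an analog estimator essentially never scores, while path-length stretching (a simple importance-sampling transform, not any particular code's implementation) recovers the answer with modest sample counts. All numbers below are illustrative:

```python
import math, random

def transmission_analog(tau, n, rng):
    """Analog Monte Carlo: score 1 whenever the sampled free path
    (unit-mean exponential) exceeds the slab's optical thickness."""
    hits = sum(1 for _ in range(n) if rng.expovariate(1.0) > tau)
    return hits / n

def transmission_biased(tau, n, rng, lam=0.1):
    """Path-length stretching: sample free paths from Exp(lam) with
    lam < 1 so deep penetrations occur often, and correct each score
    with the likelihood ratio w = exp(-s) / (lam * exp(-lam*s))."""
    total = 0.0
    for _ in range(n):
        s = rng.expovariate(lam)
        if s > tau:
            total += math.exp(-s) / (lam * math.exp(-lam * s))
    return total / n

rng = random.Random(42)
tau = 20.0                                     # exact answer: exp(-20) ~ 2.1e-9
analog = transmission_analog(tau, 100_000, rng)   # almost certainly 0.0
biased = transmission_biased(tau, 100_000, rng)   # close to exp(-20)
```

With the analog estimator the expected number of scores in 100,000 histories is about 2e-4; the biased estimator scores on roughly e^-2 of histories and its weights carry the correct mean, which is the essence of the exponential-transform techniques developed in this course.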
Non-perturbative methods applied to multiphoton ionization
International Nuclear Information System (INIS)
Brandi, H.S.; Davidovich, L.; Zagury, N.
1982-09-01
The use of non-perturbative methods in the treatment of atomic ionization is discussed. Particular attention is given to schemes of the type proposed by Keldysh, where multiphoton ionization and tunnel auto-ionization occur for high-intensity fields. These methods are shown to correspond to a certain type of expansion of the T-matrix in the intra-atomic potential; in this manner a criterion concerning the range of application of these non-perturbative schemes is suggested. A brief comparison between the ionization rates of atoms in the presence of linearly and circularly polarized light is presented. (Author) [pt
On second quantization methods applied to classical statistical mechanics
International Nuclear Information System (INIS)
Matos Neto, A.; Vianna, J.D.M.
1984-01-01
A method of expressing classical statistical results in terms of mathematical entities usually associated with the quantum field theoretical treatment of many-particle systems (Fock space, commutators, field operators, state vectors) is discussed. A linear response theory is developed using the 'second quantized' Liouville equation introduced by Schonberg. The relationship of this method to that of Prigogine et al. is briefly analyzed. The chain of equations and the spectral representations for the new classical Green's functions are presented. Generalized operators defined on Fock space are discussed. It is shown that the correlation functions can be obtained from Green's functions defined with generalized operators. (Author) [pt
Review of PCMS and heat transfer enhancement methods applied ...
African Journals Online (AJOL)
Most available PCMs have low thermal conductivity, making heat transfer enhancement necessary for power applications. The various methods of heat transfer enhancement in latent heat storage systems were also reviewed systematically. The review showed that three commercially available PCMs are suitable in the ...
E-LEARNING METHOD APPLIED TO TECHNICAL GRAPHICS SUBJECTS
Directory of Open Access Journals (Sweden)
GOANTA Adrian Mihai
2011-11-01
Full Text Available The paper presents some of the author's endeavours in creating video courses on technical graphics subjects for the students of the Faculty of Engineering in Braila. It also mentions the steps taken in completing the method and how feedback was obtained on the rate at which students access these types of courses.
Harmony Search Method: Theory and Applications
Directory of Open Access Journals (Sweden)
X. Z. Gao
2015-01-01
Full Text Available The Harmony Search (HS method is an emerging metaheuristic optimization algorithm, which has been employed to cope with numerous challenging tasks during the past decade. In this paper, the essential theory and applications of the HS algorithm are first described and reviewed. Several typical variants of the original HS are next briefly explained. As an example of case study, a modified HS method inspired by the idea of Pareto-dominance-based ranking is also presented. It is further applied to handle a practical wind generator optimal design problem.
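The basic HS loop reviewed in the paper — improvise a new harmony component-wise from memory with probability hmcr, pitch-adjust it with probability par, otherwise draw it at random, then replace the worst stored harmony if the new one is better — can be sketched as follows. Parameter values are illustrative defaults, not taken from the paper:

```python
import random

def harmony_search(obj, dim, bounds, hms=20, hmcr=0.9, par=0.3,
                   bw=0.05, iters=5000, seed=1):
    """Minimal Harmony Search sketch for box-constrained minimisation.

    hms  : harmony memory size
    hmcr : harmony memory considering rate
    par  : pitch adjusting rate
    bw   : pitch-adjustment bandwidth, as a fraction of the range
    """
    rng = random.Random(seed)
    lo, hi = bounds
    memory = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(hms)]
    scores = [obj(h) for h in memory]
    for _ in range(iters):
        new = []
        for d in range(dim):
            if rng.random() < hmcr:                  # draw from memory
                x = memory[rng.randrange(hms)][d]
                if rng.random() < par:               # pitch adjustment
                    x += (rng.random() * 2 - 1) * bw * (hi - lo)
                    x = min(hi, max(lo, x))
            else:                                    # random consideration
                x = rng.uniform(lo, hi)
            new.append(x)
        s = obj(new)
        worst = max(range(hms), key=scores.__getitem__)
        if s < scores[worst]:                        # replace worst harmony
            memory[worst], scores[worst] = new, s
    best = min(range(hms), key=scores.__getitem__)
    return memory[best], scores[best]

# Toy objective: the 3-dimensional sphere function, minimum 0 at the origin.
sphere = lambda v: sum(x * x for x in v)
best, fbest = harmony_search(sphere, dim=3, bounds=(-5.0, 5.0))
```

Variants such as the Pareto-ranking modification discussed in the paper change how the replacement step compares harmonies, but leave this improvisation loop intact.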
Current Human Reliability Analysis Methods Applied to Computerized Procedures
Energy Technology Data Exchange (ETDEWEB)
Ronald L. Boring
2012-06-01
Computerized procedures (CPs) are an emerging technology within nuclear power plant control rooms. While CPs have been implemented internationally in advanced control rooms, to date no US nuclear power plant has implemented CPs in its main control room (Fink et al., 2009). Yet, CPs are a reality of new plant builds and are an area of considerable interest to existing plants, which see advantages in terms of enhanced ease of use and easier records management by omitting the need for updating hardcopy procedures. The overall intent of this paper is to provide a characterization of human reliability analysis (HRA) issues for computerized procedures. It is beyond the scope of this document to propose a new HRA approach or to recommend specific methods or refinements to those methods. Rather, this paper serves as a review of current HRA as it may be used for the analysis and review of computerized procedures.
Probabilist methods applied to electric source problems in nuclear safety
International Nuclear Information System (INIS)
Carnino, A.; Llory, M.
1979-01-01
Nuclear Safety has frequently been asked to quantify safety margins and evaluate hazards. In order to do so, probabilistic methods have proved to be the most promising. Without completely replacing deterministic safety, they are now commonly used at the reliability or availability stages of systems as well as for determining the likely accidental sequences. In this paper an application linked to the problem of electric sources is described, whilst at the same time indicating the methods used. This is the calculation of the probable loss of all the electric sources of a pressurized water nuclear power station, the evaluation of the reliability of diesels by event trees of failures, and the determination of accidental sequences which could be brought about by the 'total electric source loss' initiator and affect the installation or the environment [fr
Theoretical and applied aerodynamics and related numerical methods
Chattot, J J
2015-01-01
This book covers classical and modern aerodynamics, theories and related numerical methods, for senior and first-year graduate engineering students, including: -The classical potential (incompressible) flow theories for low speed aerodynamics of thin airfoils and high and low aspect ratio wings. - The linearized theories for compressible subsonic and supersonic aerodynamics. - The nonlinear transonic small disturbance potential flow theory, including supercritical wing sections, the extended transonic area rule with lift effect, transonic lifting line and swept or oblique wings to minimize wave drag. Unsteady flow is also briefly discussed. Numerical simulations based on relaxation mixed-finite difference methods are presented and explained. - Boundary layer theory for all Mach number regimes and viscous/inviscid interaction procedures used in practical aerodynamics calculations. There are also four chapters covering special topics, including wind turbines and propellers, airplane design, flow analogies and h...
Applying probabilistic methods for assessments and calculations for accident prevention
International Nuclear Information System (INIS)
Anon.
1984-01-01
The guidelines for the prevention of accidents require plant design-specific and radioecological calculations to be made in order to show that maximum acceptable exposure values will not be exceeded in case of an accident. For this purpose, the main parameters affecting the accident scenario have to be determined by probabilistic methods. This offers the advantage that parameters can be quantified on the basis of unambiguous and realistic criteria, and final results can be defined in terms of conservativity. (DG) [de
Applying flow chemistry: methods, materials, and multistep synthesis.
McQuade, D Tyler; Seeberger, Peter H
2013-07-05
The synthesis of complex molecules requires control over both chemical reactivity and reaction conditions. While reactivity drives the majority of chemical discovery, advances in reaction condition control have accelerated method development/discovery. Recent tools include automated synthesizers and flow reactors. In this Synopsis, we describe how flow reactors have enabled chemical advances in our groups in the areas of single-stage reactions, materials synthesis, and multistep reactions. In each section, we detail the lessons learned and propose future directions.
Stolzer, Alan J.; Halford, Carl
2007-01-01
In a previous study, multiple regression techniques were applied to Flight Operations Quality Assurance-derived data to develop parsimonious model(s) for fuel consumption on the Boeing 757 airplane. The present study examined several data mining algorithms, including neural networks, on the fuel consumption problem and compared them to the multiple regression results obtained earlier. Using regression methods, parsimonious models were obtained that explained approximately 85% of the variation in fuel flow. In general, data mining methods were more effective in predicting fuel consumption. Classification and Regression Tree methods reported correlation coefficients of .91 to .92, and General Linear Models and Multilayer Perceptron neural networks reported correlation coefficients of about .99. These data mining models show great promise for use in further examining large FOQA databases for operational and safety improvements.
The colour analysis method applied to homogeneous rocks
Directory of Open Access Journals (Sweden)
Halász Amadé
2015-12-01
Full Text Available Computer-aided colour analysis can facilitate cyclostratigraphic studies. Here we report on a case study involving the development of a digital colour analysis method for examination of the Boda Claystone Formation which is the most suitable in Hungary for the disposal of high-level radioactive waste. Rock type colours are reddish brown or brownish red, or any shade between brown and red. The method presented here could be used to differentiate similar colours and to identify gradual transitions between these; the latter are of great importance in a cyclostratigraphic analysis of the succession. Geophysical well-logging has demonstrated the existence of characteristic cyclic units, as detected by colour and natural gamma. Based on our research, colour, natural gamma and lithology correlate well. For core Ib-4, these features reveal the presence of orderly cycles with thicknesses of roughly 0.64 to 13 metres. Once the core has been scanned, this is a time- and cost-effective method.
Comparison Study of Subspace Identification Methods Applied to Flexible Structures
Abdelghani, M.; Verhaegen, M.; Van Overschee, P.; De Moor, B.
1998-09-01
In the past few years, various time domain methods for identifying dynamic models of mechanical structures from modal experimental data have appeared. Much attention has been given recently to so-called subspace methods for identifying state space models. This paper presents a detailed comparison study of these subspace identification methods: the eigensystem realisation algorithm with observer/Kalman filter Markov parameters computed from input/output data (ERA/OM), the robust version of the numerical algorithm for subspace system identification (N4SID), and a refined version of the past outputs scheme of the multiple-output error state space (MOESP) family of algorithms. The comparison is performed by simulating experimental data using the five mode reduced model of the NASA Mini-Mast structure. The general conclusion is that for the case of white noise excitations as well as coloured noise excitations, the N4SID/MOESP algorithms perform equally well but give better results (improved transfer function estimates, improved estimates of the output) compared to the ERA/OM algorithm. The key computational step in the three algorithms is the approximation of the extended observability matrix of the system to be identified, for N4SID/MOESP, or of the observer for the system to be identified, for the ERA/OM. Furthermore, the three algorithms only require the specification of one dimensioning parameter.
Applying Hierarchical Task Analysis Method to Discovery Layer Evaluation
Directory of Open Access Journals (Sweden)
Marlen Promann
2015-03-01
Full Text Available Libraries are implementing discovery layers to offer better user experiences. While usability tests have been helpful in evaluating the success or failure of implementing discovery layers in the library context, the focus has remained on their relative interface benefits over the traditional federated search. The informal, site- and context-specific usability tests have offered little to test the rigor of the discovery layers against the user goals, motivations and workflows they have been designed to support. This study proposes hierarchical task analysis (HTA) as an important complementary evaluation method to usability testing of discovery layers. Relevant literature is reviewed for discovery layers and the HTA method. As no previous application of HTA to the evaluation of discovery layers was found, this paper presents the application of HTA as an expert-based and workflow-centered (e.g. retrieving a relevant book or a journal article) method for evaluating discovery layers. Purdue University’s Primo by Ex Libris was used to map eleven use cases as HTA charts. Nielsen’s Goal Composition theory was used as an analytical framework to evaluate the goal charts from two perspectives: (a) users’ physical interactions (i.e. clicks), and (b) users’ cognitive steps (i.e. decision points for what to do next). A brief comparison of HTA and usability test findings is offered as a way of conclusion.
Evaluation of Slow Release Fertilizer Applying Chemical and Spectroscopic methods
International Nuclear Information System (INIS)
AbdEl-Kader, A.A.; Al-Ashkar, E.A.
2005-01-01
Controlled-release fertilizer offers a number of advantages for crop production in newly reclaimed soils. Butadiene styrene latex emulsion is one of the promising polymers for different purposes. In this work, a laboratory evaluation of a butadiene styrene latex emulsion 24/76 polymer loaded with a mixed fertilizer was carried out. Macro-nutrients (N, P and K) and micro-nutrients (Zn, Fe and Cu) were extracted from the polymer-fertilizer mixtures by basic extraction. A micro-sampling technique was investigated and applied to measure Zn, Fe and Cu using flame atomic absorption spectrometry, in order to overcome the nebulization difficulties caused by the high salt content of the samples. The cumulative releases of macro- and micro-nutrients were assessed. From the obtained results, it is clear that the release depends on both the nutrient and the polymer concentration in the mixture. Macro-nutrients are released more efficiently than micro-nutrients as a fraction of the total added. The polymer can therefore be used for minimizing micro-nutrient hazards in soils
Meta-Heuristics for Dynamic Lot Sizing: a review and comparison of solution approaches
R.F. Jans (Raf); Z. Degraeve (Zeger)
2004-01-01
textabstractProofs from complexity theory as well as computational experiments indicate that most lot sizing problems are hard to solve. Because these problems are so difficult, various solution techniques have been proposed to solve them. In the past decade, meta-heuristics such as tabu search,
DEFF Research Database (Denmark)
Herbert-Acero, José F.; Martínez-Lauranchet, Jaime; Probst, Oliver
2014-01-01
of the sectional blade aerodynamics. The framework considers an innovative nested-hybrid solution procedure based on two metaheuristics, the virtual gene genetic algorithm and the simulated annealing algorithm, to provide a near-optimal solution to the problem. The objective of the study is to maximize...
A Metaheuristic Scheduler for Time Division Multiplexed Network-on-Chip
DEFF Research Database (Denmark)
Sørensen, Rasmus Bo; Sparsø, Jens; Pedersen, Mark Ruvald
that this is possible with only negligible impact on the schedule period. We evaluate the scheduler with seven different applications from the MCSL NOC benchmark suite. We observe that the metaheuristics perform better than the greedy solution. In the special case of all-to-all communication with equal bandwidths...
The lumped heat capacity method applied to target heating
Rickards, J.
2013-01-01
The temperature of metal samples was measured while they were bombarded by the ion beam of the Pelletron accelerator at the Instituto de Física. The evolution of the temperature with time can be explained using the lumped heat capacity method of heat transfer. A strong dependence on the type of mounting was found.
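A sketch of the model behind the abstract: when internal temperature gradients are negligible (small Biot number), the target obeys the lumped balance m*c*dT/dt = P - h*A*(T - T_inf), whose solution relaxes exponentially toward the beam-on steady state T_inf + P/(h*A). The dependence on mounting enters through the effective h*A. All numerical values below are illustrative, not the measured ones:

```python
import math

def target_temperature(t, T_inf, T0, P, hA, mc):
    """Lumped-capacity temperature of a beam-heated target.

    Solves m*c*dT/dt = P - h*A*(T - T_inf) exactly:
    T(t) = T_ss + (T0 - T_ss) * exp(-hA*t/mc), with the
    steady state T_ss = T_inf + P/hA. Valid when internal
    gradients are negligible (small Biot number).
    """
    T_ss = T_inf + P / hA
    return T_ss + (T0 - T_ss) * math.exp(-hA * t / mc)

# Illustrative values: a 1 W beam on a small target starting at ambient.
T_inf, T0 = 300.0, 300.0        # K
P, hA, mc = 1.0, 0.02, 0.4      # W, W/K, J/K
temps = [target_temperature(t, T_inf, T0, P, hA, mc) for t in (0, 10, 1000)]
```

A poorly conducting mounting corresponds to a small hA, which raises the steady-state temperature P/hA above ambient and lengthens the time constant mc/hA — the qualitative behaviour the abstract reports.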
Modern analytic methods applied to the art and archaeology
International Nuclear Information System (INIS)
Tenorio C, M. D.; Longoria G, L. C.
2010-01-01
The interaction of diverse areas such as analytical chemistry, art history and archaeology has allowed the development of a variety of techniques used in archaeology, conservation and restoration. These methods have been used to date objects, to determine the origin of old materials, to reconstruct their use, and to identify the degradation processes that affect the integrity of works of art. The objective of this chapter is to offer a general overview of the research that has been carried out at the Instituto Nacional de Investigaciones Nucleares (ININ) in the field of cultural heritage. A series of investigations conducted in collaboration with national and foreign researchers is briefly described, carried out with the substantial support of bachelor's and master's students in archaeology from the National School of Anthropology and History, since one of the goals is to spread knowledge of these techniques among young archaeologists, so that they have a wider vision of what they could use in the immediate future and can test hypotheses with scientific methods. (Author)
Benchmark Comparison of Cloud Analytics Methods Applied to Earth Observations
Lynnes, Chris; Little, Mike; Huang, Thomas; Jacob, Joseph; Yang, Phil; Kuo, Kwo-Sen
2016-01-01
Cloud computing has the potential to bring high performance computing capabilities to the average science researcher. However, in order to take full advantage of cloud capabilities, the science data used in the analysis must often be reorganized. This typically involves sharding the data across multiple nodes to enable relatively fine-grained parallelism. This can be either via cloud-based file systems or cloud-enabled databases such as Cassandra, Rasdaman or SciDB. Since storing an extra copy of data leads to increased cost and data management complexity, NASA is interested in determining the benefits and costs of various cloud analytics methods for real Earth Observation cases. Accordingly, NASA's Earth Science Technology Office and Earth Science Data and Information Systems project have teamed with cloud analytics practitioners to run a benchmark comparison on cloud analytics methods using the same input data and analysis algorithms. We have particularly looked at analysis algorithms that work over long time series, because these are particularly intractable for many Earth Observation datasets which typically store data with one or just a few time steps per file. This post will present side-by-side cost and performance results for several common Earth observation analysis operations.
Benchmark Comparison of Cloud Analytics Methods Applied to Earth Observations
Lynnes, C.; Little, M. M.; Huang, T.; Jacob, J. C.; Yang, C. P.; Kuo, K. S.
2016-12-01
Cloud computing has the potential to bring high performance computing capabilities to the average science researcher. However, in order to take full advantage of cloud capabilities, the science data used in the analysis must often be reorganized. This typically involves sharding the data across multiple nodes to enable relatively fine-grained parallelism. This can be either via cloud-based filesystems or cloud-enabled databases such as Cassandra, Rasdaman or SciDB. Since storing an extra copy of data leads to increased cost and data management complexity, NASA is interested in determining the benefits and costs of various cloud analytics methods for real Earth Observation cases. Accordingly, NASA's Earth Science Technology Office and Earth Science Data and Information Systems project have teamed with cloud analytics practitioners to run a benchmark comparison on cloud analytics methods using the same input data and analysis algorithms. We have particularly looked at analysis algorithms that work over long time series, because these are particularly intractable for many Earth Observation datasets which typically store data with one or just a few time steps per file. This post will present side-by-side cost and performance results for several common Earth observation analysis operations.
Artificial Intelligence Methods Applied to Parameter Detection of Atrial Fibrillation
Arotaritei, D.; Rotariu, C.
2015-09-01
In this paper we present a novel method for detecting atrial fibrillation (AF) based on statistical descriptors and a hybrid neuro-fuzzy and crisp system. The inference system produces if-then-else rules that are extracted to construct a binary decision system: normal or atrial fibrillation. We use TPR (Turning Point Ratio), SE (Shannon Entropy) and RMSSD (Root Mean Square of Successive Differences), along with a new descriptor, Teager-Kaiser energy, in order to improve the accuracy of detection. The descriptors are calculated over a sliding window that produces a very large number of vectors (a massive dataset) used by the classifier. The length of the window is a crisp descriptor, while the remaining descriptors are interval-valued. The parameters of the hybrid system are adapted using a Genetic Algorithm (GA) with a single-objective fitness target: the highest values of sensitivity and specificity. The rules are extracted and form part of the decision system. The proposed method was tested using the Physionet MIT-BIH Atrial Fibrillation Database, and the experimental results revealed a good accuracy of AF detection in terms of sensitivity and specificity (above 90%).
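The three statistical descriptors named above have standard definitions; a minimal sketch of computing them over a sliding window of RR intervals might look as follows. Window length, step, and bin count are illustrative choices, and the Teager-Kaiser descriptor and the neuro-fuzzy classifier are not reproduced here.

```python
import math

def rmssd(rr):
    # Root Mean Square of Successive Differences of RR intervals
    diffs = [b - a for a, b in zip(rr, rr[1:])]
    return math.sqrt(sum(d * d for d in diffs) / len(diffs))

def shannon_entropy(rr, bins=8):
    # Histogram-based Shannon entropy of RR intervals
    lo, hi = min(rr), max(rr)
    width = (hi - lo) / bins or 1.0
    counts = [0] * bins
    for x in rr:
        counts[min(int((x - lo) / width), bins - 1)] += 1
    n = len(rr)
    return -sum(c / n * math.log(c / n) for c in counts if c)

def turning_point_ratio(rr):
    # Fraction of interior points that are local extrema
    tp = sum(1 for a, b, c in zip(rr, rr[1:], rr[2:])
             if (b > a and b > c) or (b < a and b < c))
    return tp / (len(rr) - 2)

def sliding_descriptors(rr, window=16, step=8):
    # One (TPR, SE, RMSSD) vector per window position
    return [(turning_point_ratio(w), shannon_entropy(w), rmssd(w))
            for i in range(0, len(rr) - window + 1, step)
            for w in [rr[i:i + window]]]
```

Each resulting vector would then be fed to the classifier described in the abstract.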
Frequency domain methods applied to forecasting electricity markets
International Nuclear Information System (INIS)
Trapero, Juan R.; Pedregal, Diego J.
2009-01-01
The changes taking place in electricity markets during the last two decades have produced an increased interest in the problem of forecasting, either load demand or prices. Many forecasting methodologies are available in the literature nowadays with mixed conclusions about which method is most convenient. This paper focuses on the modeling of electricity market time series sampled hourly in order to produce short-term (1 to 24 h ahead) forecasts. The main features of the system are that (1) models are of an Unobserved Component class that allow for signal extraction of trend, diurnal, weekly and irregular components; (2) its application is automatic, in the sense that there is no need for human intervention via any sort of identification stage; (3) the models are estimated in the frequency domain; and (4) the robustness of the method makes possible its direct use on both load demand and price time series. The approach is thoroughly tested on the PJM interconnection market and the results improve on classical ARIMA models. (author)
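As a toy illustration of frequency-domain estimation of a periodic component, the sketch below projects an hourly series onto a single diurnal harmonic. The paper's Unobserved Components models are far richer (trend, diurnal, weekly and irregular components), so this only sketches the underlying idea.

```python
import math

def harmonic_fit(y, period):
    """Project a series onto one harmonic of the given period (e.g. 24
    for a diurnal cycle in hourly data) plus a mean level. Exact when
    len(y) is a whole number of periods, so the cos/sin basis is
    orthogonal to the constant term."""
    n = len(y)
    mean = sum(y) / n
    w = 2 * math.pi / period
    a = 2 / n * sum((v - mean) * math.cos(w * t) for t, v in enumerate(y))
    b = 2 / n * sum((v - mean) * math.sin(w * t) for t, v in enumerate(y))
    return mean, a, b

def forecast(mean, a, b, period, t):
    # Extrapolate the fitted level-plus-harmonic to time t
    w = 2 * math.pi / period
    return mean + a * math.cos(w * t) + b * math.sin(w * t)
```

Fitting ten days of synthetic hourly data recovers the diurnal amplitude, and the same expression evaluated at future t gives 1-to-24-hour-ahead forecasts.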
Interesting Developments in Testing Methods Applied to Foundation Piles
Sobala, Dariusz; Tkaczyński, Grzegorz
2017-10-01
Both piling technologies and pile testing methods are subjects of ongoing development. New technologies, providing larger diameters or using in-situ materials, are very demanding in terms of the quality of execution of the works. That concerns the material quality and continuity, which define the integral strength of the pile. On the other side there is the capacity of the ground around the pile and its ability to carry the loads transferred by the shaft and the pile base. The inhomogeneous nature of soils and the relatively small number of tested piles demand a very good understanding of a small number of results. In some special cases the capacity test itself forms an important cost in the piling contract. This work presents a brief description of selected testing methods and the authors' remarks based on cooperation with universities constantly developing new ideas. The paper presents some experience-based remarks on integrity testing by means of low-energy impact (low strain) and introduces selected (Polish) developments in the field of testing closed-end pipe piles based on bi-directional loading, similar to the Osterberg idea but without a sacrificial hydraulic jack. Such a test is especially suitable when steel piles are used for temporary support in rivers, where constructing a conventional testing appliance with anchor piles or kentledge meets technical problems. According to the authors' experience, such tests have not yet been used on a building site, but they show real potential, especially when displacement control can be provided from the river bank using surveying techniques.
Applying Simulation Method in Formulation of Gluten-Free Cookies
Directory of Open Access Journals (Sweden)
Nikitina Marina
2017-01-01
Full Text Available At present, a priority direction in the development of new food products is the development of technologies for special-purpose products. Such products include gluten-free confectionery, intended for people with celiac disease. Gluten-free products are in demand among consumers, and there is a need to expand the assortment and improve their quality indicators. This article presents the results of studies on the development of pastry products based on amaranth flour that do not contain gluten. The study is based on a method of simulating recipes of gluten-free confectionery with a functional orientation in order to optimize their chemical composition. The resulting products will make it possible to diversify the diet of people with gluten intolerance, as well as of those who follow a gluten-free diet, and to supplement it with necessary nutrients.
Nuclear method applied in archaeological sites at the Amazon basin
International Nuclear Information System (INIS)
Nicoli, Ieda Gomes; Bernedo, Alfredo Victor Bellido; Latini, Rose Mary
2002-01-01
The aim of this work was to use nuclear methodology to characterize pottery discovered at archaeological sites with circular earth structures in Acre State, Brazil, which may contribute to research on the reconstruction of part of the pre-history of the Amazon Basin. The sites are located mainly in the hydrographic basin of the Upper Purus River. Three of them were strategically chosen for collecting the ceramics: Lobao, in Sena Madureira County in the north; Alto Alegre, in Rio Branco County in the east; and Xipamanu I, in Xapuri County in the south. Neutron Activation Analysis in conjunction with multivariate statistical methods was used for ceramic characterization and classification. A homogeneous group was formed by all the sherds collected from Alto Alegre, distinct from the other two groups analyzed. Some of the sherds collected from Xipamanu I appeared in Lobao's urns, probably because they had the same fabrication process. (author)
Applying Multi-Criteria Analysis Methods for Fire Risk Assessment
Directory of Open Access Journals (Sweden)
Pushkina Julia
2015-11-01
Full Text Available The aim of this paper is to demonstrate the application of multi-criteria analysis methods for optimising the fire risk identification and assessment process. The object of this research is fire risk and risk assessment. The subject of the research is the application of the analytic hierarchy process for modelling and assessing the influence of various fire risk factors. The results of the research conducted by the authors can be used by insurance companies to perform a detailed assessment of fire risks on an object and to calculate a risk extra charge to an insurance premium; by state supervisory institutions to determine the compliance of the condition of an object with the requirements of regulations; and by real estate owners and investors to carry out actions to decrease the degree of fire risk and minimise possible losses.
Applied statistical methods in agriculture, health and life sciences
Lawal, Bayo
2014-01-01
This textbook teaches crucial statistical methods to answer research questions using a unique range of statistical software programs, including MINITAB and R. This textbook is developed for undergraduate students in agriculture, nursing, biology and biomedical research. Graduate students will also find it to be a useful way to refresh their statistics skills and to reference software options. The unique combination of examples is approached using MINITAB and R for their individual strengths. Subjects covered include among others data description, probability distributions, experimental design, regression analysis, randomized design and biological assay. Unlike other biostatistics textbooks, this text also includes outliers, influential observations in regression and an introduction to survival analysis. Material is taken from the author's extensive teaching and research in Africa, USA and the UK. Sample problems, references and electronic supplementary material accompany each chapter.
A new deconvolution method applied to ultrasonic images
International Nuclear Information System (INIS)
Sallard, J.
1999-01-01
This dissertation presents the development of a new method for the restoration of ultrasonic signals. Our goal is to remove the perturbations induced by the ultrasonic probe and to help characterize defects due to a strong local discontinuity of the acoustic impedance. The point of view adopted consists in taking the physical properties into account in the signal processing in order to develop an algorithm which gives good results even on experimental data. The received ultrasonic signal is modeled as a convolution between a function that represents the waveform emitted by the transducer and a function that is loosely called the 'defect impulse response'. It is established that, in numerous cases, the ultrasonic signal can be expressed as a sum of weighted, phase-shifted replicas of a reference signal. Deconvolution is an ill-posed problem, so a priori information must be taken into account to solve it. The a priori information translates the physical properties of the ultrasonic signals. The defect impulse response is modeled as a double Bernoulli-Gaussian sequence. Deconvolution becomes the problem of detecting the optimal Bernoulli sequence and estimating the associated complex amplitudes. The optimal parameters of the sequence are those which maximize a likelihood function. We develop a new estimation procedure based on an optimization process. An adapted initialization procedure and an iterative algorithm enable a huge amount of data to be processed quickly. Many experimental ultrasonic data sets that reflect usual inspection configurations have been processed, and the results demonstrate the robustness of the method. Our algorithm not only removes the waveform emitted by the transducer but also estimates the phase, a parameter that is useful for defect characterization. Finally, the algorithm makes data interpretation easier by concentrating the information, so automatic characterization should be possible in the future. (author)
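A heavily simplified sketch of recovering shifted, weighted replicas of a reference pulse is shown below, using a greedy matching-pursuit-style loop in place of the Bernoulli-Gaussian likelihood maximization described in the abstract (and ignoring the phase estimation).

```python
def deconvolve_greedy(signal, pulse, n_echoes):
    """Greedy estimation of echo positions and amplitudes, assuming the
    signal is a sum of shifted, scaled copies of `pulse` -- a crude
    matching-pursuit stand-in for the Bernoulli-Gaussian detection
    described in the abstract."""
    residual = list(signal)
    energy = sum(p * p for p in pulse)
    echoes = []
    for _ in range(n_echoes):
        # correlate the residual with the pulse at every lag
        best_lag, best_amp = 0, 0.0
        for lag in range(len(signal) - len(pulse) + 1):
            amp = sum(residual[lag + i] * p
                      for i, p in enumerate(pulse)) / energy
            if abs(amp) > abs(best_amp):
                best_lag, best_amp = lag, amp
        echoes.append((best_lag, best_amp))
        for i, p in enumerate(pulse):  # subtract the detected replica
            residual[best_lag + i] -= best_amp * p
    return echoes
```

On a noiseless two-echo signal this recovers both lags and amplitudes exactly; real data would need the statistical machinery the dissertation develops.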
Applying Human-Centered Design Methods to Scientific Communication Products
Burkett, E. R.; Jayanty, N. K.; DeGroot, R. M.
2016-12-01
Knowing your users is a critical part of developing anything to be used or experienced by a human being. User interviews, journey maps, and personas are all techniques commonly employed in human-centered design practices because they have proven effective for informing the design of products and services that meet the needs of users. Many non-designers are unaware of the usefulness of personas and journey maps. Scientists who are interested in developing more effective products and communication can adopt and employ user-centered design approaches to better reach intended audiences. Journey mapping is a qualitative data-collection method that captures the story of a user's experience over time as related to the situation or product that requires development or improvement. Journey maps help define user expectations, where they are coming from, what they want to achieve, what questions they have, their challenges, and the gaps and opportunities that can be addressed by designing for them. A persona is a tool used to describe the goals and behavioral patterns of a subset of potential users or customers. The persona is a qualitative data model that takes the form of a character profile, built upon data about the behaviors and needs of multiple users. Gathering data directly from users avoids the risk of basing models on assumptions, which are often limited by misconceptions or gaps in understanding. Journey maps and user interviews together provide the data necessary to build the composite character that is the persona. Because a persona models the behaviors and needs of the target audience, it can then be used to make informed product design decisions. We share the methods and advantages of developing and using personas and journey maps to create more effective science communication products.
Applying the partitioned multiobjective risk method (PMRM) to portfolio selection.
Reyes Santos, Joost; Haimes, Yacov Y
2004-06-01
The analysis of risk-return tradeoffs and their practical applications to portfolio analysis paved the way for Modern Portfolio Theory (MPT), which won Harry Markowitz a 1992 Nobel Prize in Economics. A typical approach in measuring a portfolio's expected return is based on the historical returns of the assets included in a portfolio. On the other hand, portfolio risk is usually measured using volatility, which is derived from the historical variance-covariance relationships among the portfolio assets. This article focuses on assessing portfolio risk, with emphasis on extreme risks. To date, volatility is a major measure of risk owing to its simplicity and validity for relatively small asset price fluctuations. Volatility is a justified measure for stable market performance, but it is weak in addressing portfolio risk under aberrant market fluctuations. Extreme market crashes such as that on October 19, 1987 ("Black Monday") and catastrophic events such as the terrorist attack of September 11, 2001 that led to a four-day suspension of trading on the New York Stock Exchange (NYSE) are a few examples where measuring risk via volatility can lead to inaccurate predictions. Thus, there is a need for a more robust metric of risk. By invoking the principles of the extreme-risk-analysis method through the partitioned multiobjective risk method (PMRM), this article contributes to the modeling of extreme risks in portfolio performance. A measure of an extreme portfolio risk, denoted by f(4), is defined as the conditional expectation for a lower-tail region of the distribution of the possible portfolio returns. This article presents a multiobjective problem formulation consisting of optimizing expected return and f(4), whose solution is determined using Evolver-a software that implements a genetic algorithm. Under business-as-usual market scenarios, the results of the proposed PMRM portfolio selection model are found to be compatible with those of the volatility-based model
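The f(4) metric is described as a conditional expectation over a lower-tail region of the return distribution; an empirical sketch on sampled returns might look like this (the 5% partitioning point is an illustrative choice, not the article's).

```python
def extreme_risk_f4(returns, alpha=0.05):
    """Conditional expectation of portfolio return over the lower tail
    (the worst alpha fraction of outcomes) -- a sketch of the f(4)
    extreme-risk measure described in the abstract."""
    ordered = sorted(returns)
    k = max(1, int(len(ordered) * alpha))
    tail = ordered[:k]
    return sum(tail) / len(tail)
```

A multiobjective formulation would then trade off the sample mean of `returns` against this tail expectation.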
Simplified Methods Applied to Nonlinear Motion of Spar Platforms
Energy Technology Data Exchange (ETDEWEB)
Haslum, Herbjoern Alf
2000-07-01
Simplified methods for prediction of motion response of spar platforms are presented. The methods are based on first and second order potential theory. Nonlinear drag loads and the effect of the pumping motion in a moon-pool are also considered. Large amplitude pitch motions coupled to extreme amplitude heave motions may arise when spar platforms are exposed to long period swell. The phenomenon is investigated theoretically and explained as a Mathieu instability. It is caused by nonlinear coupling effects between heave, surge, and pitch. It is shown that for a critical wave period, the envelope of the heave motion makes the pitch motion unstable. For the same wave period, a higher order pitch/heave coupling excites resonant heave response. This mutual interaction largely amplifies both the pitch and the heave response. As a result, the pitch/heave instability revealed in this work is more critical than the previously well known Mathieu's instability in pitch which occurs if the wave period (or the natural heave period) is half the natural pitch period. The Mathieu instability is demonstrated both by numerical simulations with a newly developed calculation tool and in model experiments. In order to learn more about the conditions for this instability to occur and also how it may be controlled, different damping configurations (heave damping disks and pitch/surge damping fins) are evaluated both in model experiments and by numerical simulations. With increased drag damping, larger wave amplitudes and more time are needed to trigger the instability. The pitch/heave instability is a low probability of occurrence phenomenon. Extreme wave periods are needed for the instability to be triggered, about 20 seconds for a typical 200m draft spar. However, it may be important to consider the phenomenon in design since the pitch/heave instability is very critical. It is also seen that when classical spar platforms (constant cylindrical cross section and about 200m draft
Variational methods applied to problems of diffusion and reaction
Strieder, William
1973-01-01
This monograph is an account of some problems involving diffusion or diffusion with simultaneous reaction that can be illuminated by the use of variational principles. It was written during a period that included sabbatical leaves of one of us (W.S.) at the University of Minnesota and the other (R.A.) at the University of Cambridge, and we are grateful to the Petroleum Research Fund for helping to support the former and the Guggenheim Foundation for making possible the latter. We would also like to thank Stephen Prager for getting us together in the first place and for showing how interesting and useful these methods can be. We have also benefitted from correspondence with Dr. A. M. Arthurs of the University of York and from the counsel of Dr. B. D. Coleman, the general editor of this series. Table of Contents: Chapter 1. Introduction and Preliminaries; 1.1 General Survey; 1.2 Phenomenological Descriptions of Diffusion and Reaction; 1.3 Correlation Functions for Random Suspensions; 1.4 Mean Free ...
Nondestructive methods of analysis applied to oriental swords
Directory of Open Access Journals (Sweden)
Edge, David
2015-12-01
Full Text Available Various neutron techniques were employed at the Budapest Nuclear Centre in an attempt to find the most useful method for analysing the high-carbon steels found in Oriental arms and armour, such as those in the Wallace Collection, London. Neutron diffraction was found to be the most useful in terms of identifying such steels and also indicating the presence of hidden patterns.
Perturbation Method of Analysis Applied to Substitution Measurements of Buckling
Energy Technology Data Exchange (ETDEWEB)
Persson, Rolf
1966-11-15
Calculations with two-group perturbation theory on substitution experiments with homogenized regions show that a condensation of the results into a one-group formula is possible, provided that a transition region is introduced in a proper way. In heterogeneous cores the transition region comes in as a consequence of a new cell concept. By making use of progressive substitutions the properties of the transition region can be regarded as fitting parameters in the evaluation procedure. The thickness of the region is approximately equal to the sum of 1/(1/τ + 1/L²)^{1/2} for the test and reference regions. Consequently a region where L² >> τ, e.g. D₂O, contributes with √τ to the thickness. In cores where τ >> L², e.g. H₂O assemblies, the thickness of the transition region is determined by L. Experiments on rod lattices in D₂O and on test regions of D₂O alone (where B² = -1/L²) are analysed. The lattice measurements, where the pitches differed by a factor of √2, gave excellent results, whereas the determination of the diffusion length in D₂O by this method was not quite successful. Even regions containing only one test element can be used in a meaningful way in the analysis.
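The thickness formula condenses to simple limits in the two cases mentioned; a small numerical check, assuming τ and L² are given in consistent units (e.g. cm², with illustrative magnitudes):

```python
import math

def transition_thickness(tau_test, L2_test, tau_ref, L2_ref):
    """Thickness of the transition region as the sum over the test and
    reference regions of 1/sqrt(1/tau + 1/L^2), following the formula
    in the abstract."""
    def term(tau, L2):
        return 1.0 / math.sqrt(1.0 / tau + 1.0 / L2)
    return term(tau_test, L2_test) + term(tau_ref, L2_ref)
```

When L² >> τ each term tends to √τ (the D₂O case), and when τ >> L² it tends to L (the H₂O case), matching the limits stated above.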
Complexity methods applied to turbulence in plasma astrophysics
Vlahos, L.; Isliker, H.
2016-09-01
In this review many of the well-known tools for the analysis of complex systems are used in order to study the global coupling of the turbulent convection zone with the solar atmosphere, where the magnetic energy is dissipated explosively. Several well-documented observations are not easy to interpret with the use of magnetohydrodynamic (MHD) and/or kinetic numerical codes. Such observations are: (1) the size distribution of the Active Regions (AR) on the solar surface, (2) the fractal and multifractal characteristics of the observed magnetograms, (3) the self-organised characteristics of the explosive magnetic energy release, and (4) the very efficient acceleration of particles during the flaring periods in the solar corona. We briefly review the work published over the last twenty-five years on the above issues and propose solutions by using methods borrowed from the analysis of complex systems. The scenario which emerged is as follows: (a) The fully developed turbulence in the convection zone generates and transports magnetic flux tubes to the solar surface. Using probabilistic percolation models we were able to reproduce the size distribution and the fractal properties of the emerged and randomly moving magnetic flux tubes. (b) Using a Non-Linear Force-Free (NLFF) magnetic extrapolation numerical code we can explore how the emerged magnetic flux tubes interact nonlinearly and form thin and Unstable Current Sheets (UCS) inside the coronal part of the AR. (c) The fragmentation of the UCS and the local redistribution of the magnetic field, when the local current exceeds a critical threshold, is a key process which drives avalanches and forms coherent structures. This local reorganization of the magnetic field enhances the energy dissipation and influences the global evolution of the complex magnetic topology. Using a cellular automaton and following the simple rules of Self-Organized Criticality (SOC), we were able to reproduce the statistical characteristics of the
Directory of Open Access Journals (Sweden)
Rodrigo Rabello Golfeto
2008-12-01
Full Text Available This study presents a new mathematical model and a Greedy Randomized Adaptive Search Procedure (GRASP) meta-heuristic to solve the ordered cutting stock problem. The ordered cutting stock problem was recently introduced in the literature. It is appropriate for minimizing the raw material used by industries that deal with reduced product inventories, such as industries that produce on a just-in-time basis. In such cases, classic models for solving the cutting stock problem are of no use. Results obtained from computational experiments on a set of random instances demonstrate that the proposed method can be applied to large industries that process cuts on their production lines and do not stock their products.
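A minimal GRASP sketch for the plain one-dimensional cutting stock problem is shown below: each iteration builds a randomized first-fit solution from a restricted candidate list (RCL), and the best solution found is kept. It ignores the ordering constraints that distinguish the ordered problem, and alpha and the iteration count are illustrative.

```python
import random

def grasp_cutting(stock_len, orders, iters=200, alpha=0.3, seed=1):
    """GRASP sketch for 1D cutting stock: greedy randomized
    construction (RCL of the longest remaining pieces) plus first-fit
    packing; returns the fewest stock rods found over all iterations.
    Assumes every order fits on one rod."""
    rng = random.Random(seed)
    best = None
    for _ in range(iters):
        remaining = sorted(orders, reverse=True)
        rods = []  # leftover length per opened rod
        while remaining:
            # RCL: pieces within alpha of the longest remaining piece
            limit = remaining[0] - alpha * (remaining[0] - remaining[-1])
            piece = rng.choice([p for p in remaining if p >= limit])
            remaining.remove(piece)
            for i, free in enumerate(rods):  # first fit
                if free >= piece:
                    rods[i] = free - piece
                    break
            else:
                rods.append(stock_len - piece)
        if best is None or len(rods) < len(best):
            best = rods
    return len(best)
```

A local-search phase (the usual second half of GRASP) could further repack the rods of each constructed solution.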
Near-infrared radiation curable multilayer coating systems and methods for applying same
Bowman, Mark P; Verdun, Shelley D; Post, Gordon L
2015-04-28
Multilayer coating systems, methods of applying them, and related substrates are disclosed. The coating system may comprise a first coating comprising a near-IR absorber and a second coating deposited on at least a portion of the first coating. Methods of applying a multilayer coating composition to a substrate may comprise applying a first coating comprising a near-IR absorber, applying a second coating over at least a portion of the first coating, and curing the coating with near-infrared radiation.
Cohen, Ayala; Nahum-Shani, Inbal; Doveh, Etti
2010-01-01
In their seminal paper, Edwards and Parry (1993) presented the polynomial regression as a better alternative to applying difference score in the study of congruence. Although this method is increasingly applied in congruence research, its complexity relative to other methods for assessing congruence (e.g., difference score methods) was one of the…
Methods for optimum and near-optimum disassembly sequencing
Lambert, A.J.D.; Gupta, S.M.
2008-01-01
This paper considers disassembly sequencing problems subjected to sequence dependent disassembly costs. In practice, the methods for dealing with such problems rely mainly on metaheuristic and heuristic methods, which intrinsically generate suboptimum solutions. Exact methods are NP-hard and
Marco A. Contreras; Woodam Chung; Greg Jones
2008-01-01
Forest transportation planning problems (FTPP) have evolved from considering only the financial aspects of timber management to more holistic problems that also consider the environmental impacts of roads. These additional requirements have introduced side constraints, making FTPP larger and more complex. Mixed-integer programming (MIP) has been used to solve FTPP, but...
Metaheuristics applied to vehicle routing. A case study. Part 1: formulating the problem
Directory of Open Access Journals (Sweden)
Guillermo González Vargas
2006-09-01
Full Text Available This paper deals with the mathematical formulation of the VRP (vehicle routing problem) and presents some methodologies used by different authors to solve VRP variations. This paper is presented as the springboard for introducing future papers about a manufacturing company's location decisions based on the total distance traveled to distribute its product.
A metaheuristic for a numerical approximation to the mass transfer problem
Directory of Open Access Journals (Sweden)
Avendaño-Garrido Martha L.
2016-12-01
Full Text Available This work presents an improvement of the approximation scheme for the Monge-Kantorovich (MK) mass transfer problem on compact spaces studied by Gabriel et al. (2010), whose scheme discretizes the MK problem, reducing it to solving a sequence of finite transport problems. The improvement presented in this work uses a metaheuristic algorithm inspired by scatter search in order to reduce the dimensionality of each transport problem. The new scheme solves a sequence of linear programming problems similar to the transport ones but with a lower dimension. The proposed metaheuristic is supported by a convergence theorem. Finally, examples with an exact solution are used to illustrate the performance of our proposal.
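For orientation, each member of the sequence is a finite transport problem; the sketch below solves one with a naive cheapest-route greedy heuristic. It illustrates only the transport subproblem, not the scatter-search dimensionality reduction that is the paper's contribution.

```python
def greedy_transport(supply, demand, cost):
    """Greedy allocation for a finite transport problem: repeatedly
    ship as much as possible along the cheapest remaining route.
    A simple stand-in for one member of the sequence of finite
    transport problems in the approximation scheme; exact LP solvers
    would be used in practice."""
    s, d = list(supply), list(demand)
    routes = sorted((cost[i][j], i, j)
                    for i in range(len(s)) for j in range(len(d)))
    plan, total = {}, 0.0
    for c, i, j in routes:
        q = min(s[i], d[j])  # ship the feasible maximum on this route
        if q > 0:
            plan[(i, j)] = q
            s[i] -= q
            d[j] -= q
            total += c * q
    return plan, total
```

The greedy plan is feasible (all supply and demand met when totals balance) though not generally optimal, which is why a convergence-backed scheme matters.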
Directory of Open Access Journals (Sweden)
Afshin Mehrsai
2013-01-01
Full Text Available Alternative material flow strategies in logistics networks have crucial influences on the overall performance of the networks. Material flows can follow push, pull, or hybrid systems. To get the advantages of both push and pull flows in networks, the decoupling-point strategy is used as a coordination mechanism. At this point, the material pull has to be optimized with respect to customer orders against pushed replenishment rates. To compensate for the ambiguity and uncertainty of both dynamic flows, fuzzy set theory can practically be applied. This paper has conceptual and mathematical parts that explain the performance of the push-pull flow strategy in a supply network and give a novel solution for optimizing the pull side employing a ConWIP system. Alternative numbers of pallets and their lot sizes circulating in the assembly system are optimized for a multi-objective problem, employing a hybrid approach combining meta-heuristics (genetic algorithm and simulated annealing) and a fuzzy system. Two main fuzzy set types, triangular and trapezoidal, are applied in this technique for estimating ill-defined waiting times. The configured technique leads to smoother flows between push and pull sides in complex networks. A discrete-event simulation model is developed to analyze this thesis in an exemplary logistics network with dynamics.
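Triangular and trapezoidal membership functions, with a discretized centroid defuzzification such as might be applied to the ill-defined waiting times, can be sketched as follows (ranges and step counts are illustrative):

```python
def tri(x, a, b, c):
    # Triangular membership: 0 at a, peak 1 at b, 0 at c
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def trap(x, a, b, c, d):
    # Trapezoidal membership: flat top of 1 between b and c
    if x <= a or x >= d:
        return 0.0
    if x < b:
        return (x - a) / (b - a)
    if x <= c:
        return 1.0
    return (d - x) / (d - c)

def centroid(mu, lo, hi, steps=1000):
    # Discretized centroid defuzzification of membership function mu
    xs = [lo + (hi - lo) * k / steps for k in range(steps + 1)]
    num = sum(x * mu(x) for x in xs)
    den = sum(mu(x) for x in xs)
    return num / den
```

A fuzzy waiting-time estimate such as tri(x, 2, 5, 8) minutes defuzzifies to a crisp value the GA/SA optimizer can consume.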
A Group Theoretic Approach to Metaheuristic Local Search for Partitioning Problems
2005-05-01
Kinney, Gary W., Jr., B.G.S., M.S. Dissertation presented to The University of Texas at Austin, May 2005.
Meta-Heuristics in Short Scale Construction: Ant Colony Optimization and Genetic Algorithm.
Schroeders, Ulrich; Wilhelm, Oliver; Olaru, Gabriel
2016-01-01
The advent of large-scale assessment, but also the more frequent use of longitudinal and multivariate approaches to measurement in psychological, educational, and sociological research, caused an increased demand for psychometrically sound short scales. Shortening scales economizes on valuable administration time, but might result in inadequate measures because reducing an item set could: a) change the internal structure of the measure, b) result in poorer reliability and measurement precision, c) deliver measures that cannot effectively discriminate between persons on the intended ability spectrum, and d) reduce test-criterion relations. Different approaches to abbreviate measures fare differently with respect to the above-mentioned problems. Therefore, we compare the quality and efficiency of three item selection strategies to derive short scales from an existing long version: a Stepwise COnfirmatory Factor Analytical approach (SCOFA) that maximizes factor loadings and two metaheuristics, specifically an Ant Colony Optimization (ACO) with a tailored user-defined optimization function and a Genetic Algorithm (GA) with an unspecific cost-reduction function. SCOFA compiled short versions were highly reliable, but had poor validity. In contrast, both metaheuristics outperformed SCOFA and produced efficient and psychometrically sound short versions (unidimensional, reliable, sensitive, and valid). We discuss under which circumstances ACO and GA produce equivalent results and provide recommendations for conditions in which it is advisable to use a metaheuristic with an unspecific out-of-the-box optimization function.
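A toy ACO for item selection conveys the flavor of the approach: here the optimization function is just the summed item score, whereas the article's tailored function scores whole scales psychometrically (reliability, structure, validity). All parameters and names below are illustrative.

```python
import random

def aco_select(scores, k, ants=20, iters=30, rho=0.1, seed=0):
    """Toy Ant Colony Optimization for picking k of n items that
    maximize the summed (non-negative) item score -- a stand-in for
    the user-defined psychometric optimization function in the
    article."""
    rng = random.Random(seed)
    n = len(scores)
    tau = [1.0] * n  # pheromone per item
    best, best_val = None, float("-inf")
    for _ in range(iters):
        for _ in range(ants):
            pick = set()
            while len(pick) < k:
                # probabilistic choice: pheromone times heuristic score
                weights = [0 if i in pick else tau[i] * (scores[i] + 1e-9)
                           for i in range(n)]
                pick.add(rng.choices(range(n), weights=weights)[0])
            val = sum(scores[i] for i in pick)
            if val > best_val:
                best, best_val = pick, val
        tau = [t * (1 - rho) for t in tau]  # evaporation
        for i in best:  # reinforce the best selection so far
            tau[i] += rho * best_val
    return sorted(best), best_val
```

Swapping the `val` computation for a scale-level fit statistic would turn this into the kind of tailored objective the article describes.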
A Hybrid Metaheuristic for Multi-Objective Scientific Workflow Scheduling in a Cloud Environment
Directory of Open Access Journals (Sweden)
Nazia Anwar
2018-03-01
Full Text Available Cloud computing has emerged as a high-performance computing environment with a large pool of abstracted, virtualized, flexible, and on-demand resources and services. Scheduling of scientific workflows in a distributed environment is a well-known NP-complete problem and therefore intractable with exact solutions. It becomes even more challenging in the cloud computing platform due to its dynamic and heterogeneous nature. The aim of this study is to optimize multi-objective scheduling of scientific workflows in a cloud computing environment based on the proposed metaheuristic-based algorithm, Hybrid Bio-inspired Metaheuristic for Multi-objective Optimization (HBMMO. The strong global exploration ability of the nature-inspired metaheuristic Symbiotic Organisms Search (SOS is enhanced by involving an efficient list-scheduling heuristic, Predict Earliest Finish Time (PEFT, in the proposed algorithm to obtain better convergence and diversity of the approximate Pareto front in terms of reduced makespan, minimized cost, and efficient load balance of the Virtual Machines (VMs. The experiments using different scientific workflow applications highlight the effectiveness, practicality, and better performance of the proposed algorithm.
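List scheduling by earliest finish time, the heuristic family PEFT belongs to, can be sketched for the much simpler case of independent tasks (real PEFT additionally handles DAG precedence constraints via an optimistic cost table):

```python
def list_schedule(task_times, vm_speeds):
    """Greedy earliest-finish-time scheduling of independent tasks
    onto VMs -- a much-simplified cousin of the PEFT list heuristic
    cited in the abstract. Task times are at unit VM speed."""
    ready = [0.0] * len(vm_speeds)  # time each VM becomes free
    assignment = []
    for t in sorted(task_times, reverse=True):  # longest tasks first
        finishes = [ready[v] + t / vm_speeds[v]
                    for v in range(len(vm_speeds))]
        v = min(range(len(vm_speeds)), key=lambda i: finishes[i])
        ready[v] = finishes[v]
        assignment.append((t, v))
    return assignment, max(ready)  # schedule and makespan
```

A multi-objective metaheuristic like the proposed HBMMO would search over many such schedules, trading makespan against cost and load balance.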
Babaveisi, Vahid; Paydar, Mohammad Mahdi; Safaei, Abdul Sattar
2017-07-01
This study discusses the solution methodology for a closed-loop supply chain (CLSC) network that includes the collection of used products as well as the distribution of new products. This supply chain is representative of the class of problems that can be solved by the proposed meta-heuristic algorithms. A mathematical model is designed for a CLSC with three objective functions: maximizing profit, minimizing total risk, and minimizing product shortages. Since three objective functions are considered, a multi-objective solution methodology is advantageous. Therefore, several approaches have been studied: an NSGA-II algorithm is first utilized, and the results are then validated using MOSA and MOPSO algorithms. Priority-based encoding, which is used in all the algorithms, is the core of the solution computations. To compare the performance of the meta-heuristics, random numerical instances are evaluated by four criteria: mean ideal distance, spread of non-dominated solutions, number of Pareto solutions, and CPU time. To enhance the performance of the algorithms, the Taguchi method is used for parameter tuning. Finally, sensitivity analyses are performed and the computational results are presented based on the sensitivity analyses in parameter tuning.
Directory of Open Access Journals (Sweden)
Taylor Mac Intyer Fonseca Junior
2013-12-01
Full Text Available This work evaluates seven methods for estimating fatigue properties as applied to stainless steels and aluminum alloys. Experimental strain-life curves are compared to the estimates obtained by each method. After applying the seven estimation methods to 14 material conditions, it was found that fatigue life can be estimated with good accuracy only by the Bäumel-Seeger method, and only for the martensitic stainless steel tempered between 300°C and 500°C. The differences between mechanical behavior under monotonic and cyclic loading are probably the reason for the absence of a reliable method for estimating fatigue behavior from monotonic properties for a group of materials.
Koskey, Kristin L. K.; Sondergeld, Toni A.; Stewart, Victoria C.; Pugh, Kevin J.
2018-01-01
Onwuegbuzie and colleagues proposed the Instrument Development and Construct Validation (IDCV) process as a mixed methods framework for creating and validating measures. Examples applying IDCV are lacking. We provide an illustrative case integrating the Rasch model and cognitive interviews applied to the development of the Transformative…
An Aural Learning Project: Assimilating Jazz Education Methods for Traditional Applied Pedagogy
Gamso, Nancy M.
2011-01-01
The Aural Learning Project (ALP) was developed to incorporate jazz method components into the author's classical practice and her applied woodwind lesson curriculum. The primary objective was to place a more focused pedagogical emphasis on listening and hearing than is traditionally used in the classical applied curriculum. The components of the…
Directory of Open Access Journals (Sweden)
Dawid Połap
2017-09-01
Full Text Available In this article, we present a nature-inspired optimization algorithm that we call the Polar Bear Optimization Algorithm (PBO). The inspiration for the algorithm comes from the way polar bears hunt to survive in harsh arctic conditions. These carnivorous mammals are active all year round. The frosty climate, unfavorable to other animals, has made polar bears adapt to a specific mode of exploration and hunting over large areas, not only on ice but also in water. The proposed novel mathematical model of the way polar bears move in the search for food and hunt can be a valuable optimization method for various theoretical and practical problems, since optimization resembles nature: just as we search for optimal solutions to mathematical models, animals search for optimal conditions in their natural environments. In this method, we use a model of polar bear behavior as a search engine for optimal solutions. The simulated adaptation to harsh winter conditions is an advantage for local and global search, while a birth and death mechanism controls the population. The proposed PBO was evaluated and compared to other meta-heuristic algorithms using sample test functions and some classical engineering problems. Experimental results were compared to other algorithms and analyzed using various parameters. The analysis allowed us to identify the leading advantages: rapid recognition of the search area by the relevant population, and an efficient birth and death mechanism that improves global and local search within the solution space.
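The hunting-plus-birth-and-death pattern described above can be sketched generically. The following is a minimal Python illustration of that population dynamic on a benchmark function, not the published PBO update equations; the step sizes and the `polar_bear_style_search` name are our assumptions:

```python
import random

random.seed(1)

def sphere(x):
    # benchmark objective: sum of squares, global minimum 0 at the origin
    return sum(v * v for v in x)

def polar_bear_style_search(f, dim=2, pop=20, iters=200, lo=-5.0, hi=5.0):
    swarm = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(pop)]
    for _ in range(iters):
        best = min(swarm, key=f)[:]
        for i in range(pop):
            # local "hunting" step: move toward the current best, with noise
            trial = [min(hi, max(lo, b + 0.3 * (g - b) + random.gauss(0, 0.1)))
                     for b, g in zip(swarm[i], best)]
            if f(trial) < f(swarm[i]):
                swarm[i] = trial
        # birth-and-death mechanism: replace the worst individual with a
        # perturbed copy of the best, keeping the population size fixed
        worst = max(range(pop), key=lambda i: f(swarm[i]))
        swarm[worst] = [min(hi, max(lo, v + random.gauss(0, 0.5))) for v in best]
    return min(swarm, key=f)

best = polar_bear_style_search(sphere)
```

Greedy acceptance of the local step gives the local search; the birth-and-death replacement reseeds stagnant individuals near the current best for the global search.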
Energy Technology Data Exchange (ETDEWEB)
Sehgal, A K; Gupta, S C [Punjabi Univ., Patiala (India). Dept. of Physics
1982-12-14
The complementary variational principles method (CVP) is applied to the thermal conductivities of a plasma in a uniform magnetic field. The results of computations show that the CVP-derived results are very useful.
Particle swarm optimization with random keys applied to the nuclear reactor reload problem
Energy Technology Data Exchange (ETDEWEB)
Meneses, Anderson Alvarenga de Moura [Universidade Federal do Rio de Janeiro (UFRJ), RJ (Brazil). Coordenacao dos Programas de Pos-graduacao de Engenharia (COPPE). Programa de Engenharia Nuclear; Fundacao Educacional de Macae (FUNEMAC), RJ (Brazil). Faculdade Professor Miguel Angelo da Silva Santos; Machado, Marcelo Dornellas; Medeiros, Jose Antonio Carlos Canedo; Schirru, Roberto [Universidade Federal do Rio de Janeiro (UFRJ), RJ (Brazil). Coordenacao dos Programas de Pos-graduacao de Engenharia (COPPE). Programa de Engenharia Nuclear]. E-mails: ameneses@con.ufrj.br; marcelo@lmp.ufrj.br; canedo@lmp.ufrj.br; schirru@lmp.ufrj.br
2007-07-01
In 1995, Kennedy and Eberhart presented Particle Swarm Optimization (PSO), an Artificial Intelligence metaheuristic technique for optimizing non-linear continuous functions. The concept of Swarm Intelligence is based on the social aspects of intelligence, that is, the ability of individuals to learn from their own experience in a group as well as to take advantage of the performance of other individuals. Some PSO models for discrete search spaces have been developed for combinatorial optimization, although none of them presented satisfactory results for a combinatorial problem such as the nuclear reactor fuel reloading problem (NRFRP). In this sense, we developed Particle Swarm Optimization with Random Keys (PSORK) in previous research to solve combinatorial problems. Experiments demonstrated that PSORK performed comparably to or better than other techniques. Thus, the PSORK metaheuristic is being applied in optimization studies of the NRFRP for the Angra 1 Nuclear Power Plant. Results will be compared with Genetic Algorithms and the manual method provided by a specialist. In this work, the problem is modeled with eighth-core symmetry and three-dimensional geometry, aiming at the minimization of the Nuclear Enthalpy Power Peaking Factor as well as the maximization of the cycle length. (author)
Particle swarm optimization with random keys applied to the nuclear reactor reload problem
International Nuclear Information System (INIS)
Meneses, Anderson Alvarenga de Moura; Fundacao Educacional de Macae; Machado, Marcelo Dornellas; Medeiros, Jose Antonio Carlos Canedo; Schirru, Roberto
2007-01-01
In 1995, Kennedy and Eberhart presented Particle Swarm Optimization (PSO), an Artificial Intelligence metaheuristic technique for optimizing non-linear continuous functions. The concept of Swarm Intelligence is based on the social aspects of intelligence, that is, the ability of individuals to learn from their own experience in a group as well as to take advantage of the performance of other individuals. Some PSO models for discrete search spaces have been developed for combinatorial optimization, although none of them presented satisfactory results for a combinatorial problem such as the nuclear reactor fuel reloading problem (NRFRP). In this sense, we developed Particle Swarm Optimization with Random Keys (PSORK) in previous research to solve combinatorial problems. Experiments demonstrated that PSORK performed comparably to or better than other techniques. Thus, the PSORK metaheuristic is being applied in optimization studies of the NRFRP for the Angra 1 Nuclear Power Plant. Results will be compared with Genetic Algorithms and the manual method provided by a specialist. In this work, the problem is modeled with eighth-core symmetry and three-dimensional geometry, aiming at the minimization of the Nuclear Enthalpy Power Peaking Factor as well as the maximization of the cycle length. (author)
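The random-keys idea behind PSORK can be illustrated compactly: particles keep continuous positions ("keys"), and ranking the keys decodes each position into a permutation, so an ordinary continuous PSO can drive a combinatorial search. The sketch below applies it to a toy four-node routing instance, not to the reload problem itself; all parameter values are illustrative:

```python
import random

def decode(keys):
    # random-keys decoding: ranking the continuous keys yields a permutation
    return sorted(range(len(keys)), key=lambda i: keys[i])

def tour_cost(perm, d):
    return sum(d[perm[i]][perm[(i + 1) % len(perm)]] for i in range(len(perm)))

def psork(d, particles=15, iters=100, w=0.7, c1=1.5, c2=1.5):
    random.seed(0)
    n = len(d)
    fit = lambda k: tour_cost(decode(k), d)
    xs = [[random.random() for _ in range(n)] for _ in range(particles)]
    vs = [[0.0] * n for _ in range(particles)]
    pbest = [x[:] for x in xs]
    gbest = min(pbest, key=fit)[:]
    for _ in range(iters):
        for p in range(particles):
            for j in range(n):
                # standard continuous PSO velocity/position update on the keys
                vs[p][j] = (w * vs[p][j]
                            + c1 * random.random() * (pbest[p][j] - xs[p][j])
                            + c2 * random.random() * (gbest[j] - xs[p][j]))
                xs[p][j] += vs[p][j]
            if fit(xs[p]) < fit(pbest[p]):
                pbest[p] = xs[p][:]
        gbest = min(pbest + [gbest], key=fit)[:]
    return decode(gbest)

# four "positions" on a unit square; the optimal cycle is the perimeter, cost 4
sq2 = 2 ** 0.5
d = [[0, 1, sq2, 1], [1, 0, 1, sq2], [sq2, 1, 0, 1], [1, sq2, 1, 0]]
perm = psork(d)
```

Because the continuous update never leaves the space of valid keys, every decoded candidate is a feasible permutation, which is the main appeal of random keys for problems like fuel reloading.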
Wielandt method applied to the diffusion equations discretized by finite element nodal methods
International Nuclear Information System (INIS)
Mugica R, A.; Valle G, E. del
2003-01-01
Numerical solution of the diffusion equation by computer programs has become extensive owing to the large number of routines and calculations involved, which directly affects execution times: reliable results are often obtained only after relatively long runs. This work presents a method for accelerating the convergence of the classical power method that notably reduces the number of iterations required to obtain reliable results, and hence the computing time. The method, known in the literature as the Wielandt method, has been incorporated into a computer program based on the discretization of the neutron diffusion equations in slab geometry and steady state by polynomial nodal methods. The multigroup neutron diffusion equations are described and discretized by the so-called physical nodal methods, with the quadratic case illustrated in particular. A model problem widely described in the literature is solved with the physical nodal schemes of degree 1, 2, 3, and 4 in three different ways: a) with the classical power method, b) with the power method with Wielandt acceleration, and c) with the power method with modified Wielandt acceleration. Results are reported for the model problem as well as for two additional problems known as benchmark problems. The acceleration method can also be implemented for geometries other than the one proposed in this work, and its application can be extended to problems in two or three dimensions. (Author)
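The acceleration described above can be shown on a small matrix eigenproblem: the plain power method iterates with A, while the Wielandt variant iterates with the inverse of (A - shift*I), which sharpens the dominance ratio and cuts the iteration count. This is a generic numerical sketch, not the nodal-diffusion code of the paper; the matrix and shift are illustrative:

```python
def matvec(A, x):
    return [sum(a * b for a, b in zip(row, x)) for row in A]

def solve(A, b):
    # Gaussian elimination with partial pivoting (small systems only)
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

def power_iteration(A, shift=None, tol=1e-10, max_it=10000):
    n = len(A)
    x = [1.0] + [0.0] * (n - 1)
    lam = 0.0
    for it in range(1, max_it + 1):
        if shift is None:
            y = matvec(A, x)
        else:
            # Wielandt shift: iterate with the inverse of (A - shift*I); the
            # closer the shift is to the dominant eigenvalue, the faster the
            # convergence of the iteration
            B = [[A[i][j] - (shift if i == j else 0.0) for j in range(n)]
                 for i in range(n)]
            y = solve(B, x)
        new_lam = max(abs(v) for v in y)
        x = [v / new_lam for v in y]
        if abs(new_lam - lam) < tol:
            break
        lam = new_lam
    return (new_lam if shift is None else shift + 1.0 / new_lam), it

A = [[2.0, 1.0], [1.0, 2.0]]          # dominant eigenvalue is 3
e_plain, n_plain = power_iteration(A)
e_shift, n_shift = power_iteration(A, shift=2.9)
```

Both runs recover the same eigenvalue, but the shifted iteration needs far fewer steps, which is exactly the effect exploited for the multiplication factor in reactor codes.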
What is the method in applying formal methods to PLC applications?
Mader, Angelika H.; Engel, S.; Wupper, Hanno; Kowalewski, S.; Zaytoon, J.
2000-01-01
The question we investigate is how to obtain PLC applications with confidence in their proper functioning. Especially, we are interested in the contribution that formal methods can provide for their development. Our maxim is that the place of a particular formal method in the total picture of system
Formal methods applied to industrial complex systems implementation of the B method
Boulanger, Jean-Louis
2014-01-01
This book presents real-world examples of formal techniques in an industrial context. It covers formal methods such as SCADE and/or the B Method, in various fields such as railways, aeronautics, and the automotive industry. The purpose of this book is to present a summary of experience on the use of "formal methods" (based on formal techniques such as proof, abstract interpretation and model-checking) in industrial examples of complex systems, based on the experience of people currently involved in the creation and assessment of safety critical system software. The involvement of people from
Xu, Zhenzhen; Zou, Yongxing; Kong, Xiangjie
2015-01-01
To our knowledge, this paper investigates the first application of meta-heuristic algorithms to the parallel machine scheduling problem with weighted late work criterion and common due date ([Formula: see text]). The late work criterion is a performance measure for scheduling problems that considers the length of the late parts of particular jobs when evaluating the quality of a schedule. Since this problem is known to be NP-hard, three meta-heuristic algorithms, namely ant colony system, genetic algorithm, and simulated annealing, are designed and implemented. We also propose a novel algorithm named LDF (largest density first), which improves on LPT (longest processing time first). Computational experiments compared these meta-heuristic algorithms with LDF, LPT, and LS (list scheduling); the results show that SA performs best in most cases. However, LDF is better than SA under some conditions, and moreover the running time of LDF is much shorter than that of SA.
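The difference between LPT and a density-ordered rule is easy to demonstrate. The sketch below assumes "density" means the weight-to-processing-time ratio (the abstract does not define it) and evaluates weighted late work against a common due date on a toy instance of our own:

```python
def weighted_late_work(schedule, jobs, due):
    # late work of a job = the part of its processing done after the due date
    total = 0.0
    for machine in schedule:
        t = 0.0
        for j in machine:
            w, p = jobs[j]
            t += p
            total += w * min(p, max(0.0, t - due))
    return total

def list_schedule(jobs, m, order):
    # greedy list scheduling: each job goes to the least-loaded machine
    loads = [0.0] * m
    schedule = [[] for _ in range(m)]
    for j in order:
        k = loads.index(min(loads))
        schedule[k].append(j)
        loads[k] += jobs[j][1]
    return schedule

jobs = {0: (5, 4.0), 1: (1, 6.0), 2: (4, 2.0), 3: (2, 5.0), 4: (3, 3.0)}
m, due = 2, 6.0
lpt = list_schedule(jobs, m, sorted(jobs, key=lambda j: -jobs[j][1]))
ldf = list_schedule(jobs, m, sorted(jobs, key=lambda j: -jobs[j][0] / jobs[j][1]))
```

On this instance the density ordering schedules heavy, short jobs early, so the late parts fall on low-weight jobs and the weighted late work drops sharply compared with LPT.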
Directory of Open Access Journals (Sweden)
Peng Wang
2013-01-01
Full Text Available This paper presents a novel biologically inspired metaheuristic algorithm called seven-spot ladybird optimization (SLO). The SLO is inspired by recent discoveries on the foraging behavior of a seven-spot ladybird. In this paper, the performance of the SLO is compared with that of the genetic algorithm, particle swarm optimization, and artificial bee colony algorithms by using five numerical benchmark functions with multimodality. The results show that SLO has the ability to find the best solution with a comparatively small population size and is suitable for solving optimization problems with lower dimensions.
A new clamp method for firing bricks | Obeng | Journal of Applied ...
African Journals Online (AJOL)
A new clamp method for firing bricks. ... Journal of Applied Science and Technology ... To overcome these operational deficiencies, a new method of firing bricks that uses a brick clamp technique that incorporates a clamp wall of 60 cm thickness, a six tier approach of sealing the top of the clamp (by combination of green bricks) ...
1978-10-01
This report presents a method that may be used to evaluate the reliability of performance of individual subjects, particularly in applied laboratory research. The method is based on analysis of variance of a tasks-by-subjects data matrix, with all sc...
Determination methods for plutonium as applied in the field of reprocessing
International Nuclear Information System (INIS)
1983-07-01
The papers presented report on Pu-determination methods, which are routinely applied in process control, and also on new developments which could supercede current methods either because they are more accurate or because they are simpler and faster. (orig./DG) [de
Water Permeability of Pervious Concrete Is Dependent on the Applied Pressure and Testing Methods
Directory of Open Access Journals (Sweden)
Yinghong Qin
2015-01-01
Full Text Available The falling head method (FHM) and the constant head method (CHM) are used, with different water heads applied to the test samples, to measure the water permeability of pervious concrete. The results indicate that the apparent permeability of pervious concrete decreases with the applied water head. The results also demonstrate that the permeability measured with the FHM is lower than that measured with the CHM. The fundamental difference between the CHM and FHM is examined from the theory of fluid flow through porous media. The testing results suggest that the water permeability of pervious concrete should be reported together with the applied pressure and the associated testing method.
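Both test procedures reduce to Darcy's law, and the two standard formulas can be sketched as follows; the variable names and sample numbers are illustrative, not taken from the paper:

```python
import math

def k_constant_head(Q, L, A, h):
    # Darcy's law for the CHM: steady discharge Q through a sample of
    # length L and cross-section A under a constant head h
    return Q * L / (A * h)

def k_falling_head(a, L, A, t, h1, h2):
    # FHM: the head in a standpipe of area a falls from h1 to h2 in time t
    return (a * L) / (A * t) * math.log(h1 / h2)

kc = k_constant_head(Q=2e-5, L=0.15, A=0.018, h=0.3)                  # m/s
kf = k_falling_head(a=1e-4, L=0.15, A=0.018, t=60.0, h1=0.3, h2=0.1)  # m/s
```

The CHM assumes a steady head, while the FHM integrates Darcy's law over a decaying head, which is one reason the two procedures can report different apparent permeabilities for the same sample.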
Accurate simulation of MPPT methods performance when applied to commercial photovoltaic panels.
Cubas, Javier; Pindado, Santiago; Sanz-Andrés, Ángel
2015-01-01
A new, simple, and quick-calculation methodology to obtain a solar panel model, based on the manufacturers' datasheet, to perform MPPT simulations, is described. The method takes into account variations on the ambient conditions (sun irradiation and solar cells temperature) and allows fast MPPT methods comparison or their performance prediction when applied to a particular solar panel. The feasibility of the described methodology is checked with four different MPPT methods applied to a commercial solar panel, within a day, and under realistic ambient conditions.
Accurate Simulation of MPPT Methods Performance When Applied to Commercial Photovoltaic Panels
Directory of Open Access Journals (Sweden)
Javier Cubas
2015-01-01
Full Text Available A new, simple, and quick-calculation methodology to obtain a solar panel model, based on the manufacturers’ datasheet, to perform MPPT simulations, is described. The method takes into account variations on the ambient conditions (sun irradiation and solar cells temperature) and allows fast MPPT methods comparison or their performance prediction when applied to a particular solar panel. The feasibility of the described methodology is checked with four different MPPT methods applied to a commercial solar panel, within a day, and under realistic ambient conditions.
PID controller tuning using metaheuristic optimization algorithms for benchmark problems
Gholap, Vishal; Naik Dessai, Chaitali; Bagyaveereswaran, V.
2017-11-01
This paper addresses finding optimal PID controller parameters using Particle Swarm Optimization (PSO), a Genetic Algorithm (GA), and Simulated Annealing (SA). The algorithms were applied through simulation of a chemical process and an electrical system, and the PID controller was tuned. Two different fitness functions, Integral Time Absolute Error (ITAE) and time-domain specifications, were chosen and applied with PSO, GA, and SA while tuning the controller. The proposed algorithms are implemented on two benchmark problems: a coupled tank system and a DC motor. Finally, a comparative study of the algorithms is presented based on best cost, number of iterations, and the different objective functions. The closed-loop process response for each set of tuned parameters is plotted for each system with each fitness function.
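A minimal version of the PSO-plus-ITAE loop can be sketched for a first-order plant. The plant model, gain bounds, and PSO coefficients below are our assumptions for illustration, not the benchmark systems of the paper:

```python
import random

def itae(gains, tau=1.0, dt=0.01, horizon=5.0):
    # unit-step response of a PID loop around a first-order plant,
    # scored by the integral of time-weighted absolute error (ITAE)
    kp, ki, kd = gains
    y = integ = cost = t = 0.0
    prev_e = 1.0
    while t < horizon:
        e = 1.0 - y
        integ += e * dt
        deriv = (e - prev_e) / dt
        u = kp * e + ki * integ + kd * deriv
        y += (-y + u) / tau * dt        # explicit Euler step of the plant
        if abs(y) > 1e6:
            return 1e9                   # unstable gains: heavy penalty
        cost += t * abs(e) * dt
        prev_e = e
        t += dt
    return cost

def pso(f, lo, hi, n=20, iters=60, w=0.72, c1=1.49, c2=1.49):
    random.seed(3)
    dim = len(lo)
    xs = [[random.uniform(lo[d], hi[d]) for d in range(dim)] for _ in range(n)]
    vs = [[0.0] * dim for _ in range(n)]
    pb = [x[:] for x in xs]
    gb = min(pb, key=f)[:]
    for _ in range(iters):
        for i in range(n):
            for d in range(dim):
                vs[i][d] = (w * vs[i][d]
                            + c1 * random.random() * (pb[i][d] - xs[i][d])
                            + c2 * random.random() * (gb[d] - xs[i][d]))
                xs[i][d] = min(hi[d], max(lo[d], xs[i][d] + vs[i][d]))
            if f(xs[i]) < f(pb[i]):
                pb[i] = xs[i][:]
        gb = min(pb + [gb], key=f)[:]
    return gb

best = pso(itae, lo=[0.0, 0.0, 0.0], hi=[10.0, 10.0, 0.8])
```

Swapping in GA or SA only changes the search loop; the ITAE evaluation of a candidate gain triple stays identical, which is what makes the comparison in the paper well posed.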
Diamond difference method with hybrid angular quadrature applied to neutron transport problems
International Nuclear Information System (INIS)
Zani, Jose H.; Barros, Ricardo C.; Alves Filho, Hermes
2005-01-01
In this work we present results for calculations of the disadvantage factor in thermal nuclear reactor physics. We use the one-group discrete ordinates (SN) equations to mathematically model the flux distributions in slab lattices. We apply the diamond difference method with a source iteration scheme to numerically solve the discretized system of equations. Special interface conditions are used to describe the method with hybrid angular quadrature. We show numerical results to illustrate the accuracy of the hybrid method. (author)
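A stripped-down version of such a slab solver, with an ordinary (non-hybrid) S2 quadrature, shows the diamond-difference closure and source iteration at work; all problem data are illustrative, and the hybrid interface conditions of the paper are not reproduced:

```python
def diamond_difference_slab(nx=200, width=20.0, sig_t=1.0, sig_s=0.5, src=1.0):
    # one-group slab transport, S2 quadrature, vacuum boundaries,
    # diamond-difference closure, source iteration on the scattering term
    dx = width / nx
    mu = 0.5773502691896257                       # S2 Gauss point, weight 1
    a = mu / dx
    phi = [0.0] * nx
    for _ in range(500):
        new_phi = [0.0] * nx
        q = [0.5 * (sig_s * phi[i] + src) for i in range(nx)]   # isotropic
        for direction in (+1, -1):
            psi_in = 0.0                          # vacuum boundary
            cells = range(nx) if direction > 0 else range(nx - 1, -1, -1)
            for i in cells:
                # diamond difference: the cell flux is the edge average
                psi_out = (q[i] + (a - 0.5 * sig_t) * psi_in) / (a + 0.5 * sig_t)
                new_phi[i] += 0.5 * (psi_in + psi_out)
                psi_in = psi_out
        if max(abs(p - o) for p, o in zip(new_phi, phi)) < 1e-8:
            phi = new_phi
            break
        phi = new_phi
    return phi

phi = diamond_difference_slab()
center = phi[len(phi) // 2]
```

Deep inside a thick slab the scalar flux approaches the infinite-medium value src/(sig_t - sig_s), which gives a quick correctness check on the sweep.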
Ando, Yoshinobu; Eguchi, Yuya; Mizukawa, Makoto
In this research, we proposed and evaluated a management method for college mechatronics education, applying project management techniques. We practiced our management method in the seminar "Microcomputer Seminar" for 3rd grade students of the Department of Electrical Engineering, Shibaura Institute of Technology. We succeeded in the management of the Microcomputer Seminar in 2006 and obtained good evaluations of our management method by means of a questionnaire.
Bendinskaitė, Irmina
2015-01-01
Bendinskaitė I. Perspective for applying traditional and innovative teaching and learning methods to nurse’s continuing education, magister thesis / supervisor Assoc. Prof. O. Riklikienė; Departament of Nursing and Care, Faculty of Nursing, Lithuanian University of Health Sciences. – Kaunas, 2015, – p. 92 The purpose of this study was to investigate traditional and innovative teaching and learning methods perspective to nurse’s continuing education. Material and methods. In a period fro...
Cluster detection methods applied to the Upper Cape Cod cancer data
Directory of Open Access Journals (Sweden)
Ozonoff David
2005-09-01
Full Text Available Abstract Background A variety of statistical methods have been suggested to assess the degree and/or the location of spatial clustering of disease cases. However, there is relatively little in the literature devoted to comparison and critique of different methods. Most of the available comparative studies rely on simulated data rather than real data sets. Methods We have chosen three methods currently used for examining spatial disease patterns: the M-statistic of Bonetti and Pagano; the Generalized Additive Model (GAM) method as applied by Webster; and Kulldorff's spatial scan statistic. We apply these statistics to analyze breast cancer data from the Upper Cape Cancer Incidence Study using three different latency assumptions. Results The three different latency assumptions produced three different spatial patterns of cases and controls. For 20-year latency, all three methods generally concur. However, for 15-year latency and no latency assumptions, the methods produce different results when testing for global clustering. Conclusion The comparative analyses of real data sets by different statistical methods provide insight into directions for further research. We suggest a research program designed around examining real data sets to guide focused investigation of relevant features using simulated data, for the purpose of understanding how to interpret statistical methods applied to epidemiological data with a spatial component.
Apparatus and method for applying an end plug to a fuel rod tube end
International Nuclear Information System (INIS)
Rieben, S.L.; Wylie, M.E.
1987-01-01
An apparatus is described for applying an end plug to a hollow end of a nuclear fuel rod tube, comprising: support means mounted for reciprocal movement between remote and adjacent positions relative to a nuclear fuel rod tube end to which an end plug is to be applied; guide means supported on the support means for movement; and drive means coupled to the support means and being actuatable for movement between retracted and extended positions for reciprocally moving the support means between its respective remote and adjacent positions. A method for applying an end plug to a hollow end of a nuclear fuel rod tube is also described
Method of levelized discounted costs applied in economic evaluation of nuclear power plant project
International Nuclear Information System (INIS)
Tian Li; Wang Yongqing; Liu Jingquan; Guo Jilin; Liu Wei
2000-01-01
The main methods in common use for the economic evaluation of bids are introduced, and the characteristics of the levelized discounted cost method and its application are presented. The method of levelized discounted costs is applied to the cost calculation in the economic evaluation of a 200 MW nuclear heating reactor. The results indicate that the method is simple and feasible, and it is considered well suited to the economic evaluation of a variety of cases; its use in national economic evaluation is suggested.
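The levelized discounted cost is the present value of the cost stream divided by the present value of the output stream, both discounted at the same rate; a short sketch (function name and numbers are illustrative):

```python
def levelized_cost(costs, outputs, rate):
    # present value of yearly costs divided by present value of yearly
    # output, discounting both streams at the same rate
    pv_cost = sum(c / (1.0 + rate) ** t for t, c in enumerate(costs, start=1))
    pv_out = sum(q / (1.0 + rate) ** t for t, q in enumerate(outputs, start=1))
    return pv_cost / pv_out

flat = levelized_cost([100.0, 100.0], [50.0, 50.0], 0.0)   # 2.0 per unit
front = levelized_cost([200.0, 50.0], [50.0, 50.0], 0.10)
```

With a zero rate the figure is just total cost over total output; a positive rate raises the levelized cost of a front-loaded cost stream, which is why discounting matters when comparing bids with different capital profiles.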
STOCK MARKET PREDICTION USING CLUSTERING WITH META-HEURISTIC APPROACHES
Prasanna, S.; Ezhilmaran, D.
2015-01-01
Various examinations are performed to predict the stock values, yet not many points at assessing the predictability of the direction of stock index movement. Stock market prediction with data mining method is a standout amongst the most paramount issues to be researched and it is one of the interesting issues of stock market research over several decades. The approach of advanced data mining tools and refined database innovations has empowered specialists to handle the immense measure of data...
STOCK MARKET PREDICTION USING CLUSTERING WITH META-HEURISTIC APPROACHES
Prasanna, S.; Ezhilmaran, D.
2014-01-01
Various examinations are performed to predict the stock values, yet not many points at assessing the predictability of the direction of stock index movement. Stock market prediction with data mining method is a standout amongst the most paramount issues to be researched and it is one of the interesting issues of stock market research over several decades. The approach of advanced data mining tools and refined database innovations has empowered specialists to handle the immense measure of data...
Local regression type methods applied to the study of geophysics and high frequency financial data
Mariani, M. C.; Basu, K.
2014-09-01
In this work we applied locally weighted scatterplot smoothing techniques (Lowess/Loess) to geophysical and high-frequency financial data. We first analyze and apply this technique to California earthquake geological data. A spatial analysis was performed to show that the estimation of the earthquake magnitude at a fixed location is accurate to within a relative error of 0.01%. We also applied the same method to a high-frequency data set arising in the financial sector and obtained similarly satisfactory results. The application of this approach to the two different data sets demonstrates that the overall method is accurate and efficient, and that the Lowess approach is preferable to the Loess method. Previous works studied time series; in this paper our local regression models perform a spatial analysis of the geophysics data, providing different information. For the high-frequency data, our models estimate the curve of best fit where the data are dependent on time.
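The core of Lowess at a single evaluation point is a tricube-weighted linear least-squares fit over the nearest neighbors; a self-contained sketch (the `frac` smoothing parameter and the data are illustrative):

```python
def lowess_point(xs, ys, x0, frac=0.5):
    # locally weighted linear regression at a single point x0,
    # using the tricube weight function of Lowess/Loess
    n = len(xs)
    k = max(2, int(frac * n))
    h = sorted(abs(x - x0) for x in xs)[k - 1] or 1e-12   # local bandwidth
    w = [(1.0 - min(1.0, abs(x - x0) / h) ** 3) ** 3 for x in xs]
    # closed-form weighted least squares for the line a + b*x
    sw = sum(w)
    swx = sum(wi * x for wi, x in zip(w, xs))
    swy = sum(wi * y for wi, y in zip(w, ys))
    swxx = sum(wi * x * x for wi, x in zip(w, xs))
    swxy = sum(wi * x * y for wi, x, y in zip(w, xs, ys))
    b = (sw * swxy - swx * swy) / (sw * swxx - swx * swx)
    a = (swy - b * swx) / sw
    return a + b * x0

# on exactly linear data the local fit reproduces the line
xs = [float(i) for i in range(10)]
ys = [2.0 * x + 1.0 for x in xs]
y_hat = lowess_point(xs, ys, 4.5)
```

Repeating this fit over a grid of evaluation points yields the smoothed curve; for the spatial analysis described above, the same weighting idea is applied with distances in space instead of time.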
Water demand forecasting: review of soft computing methods.
Ghalehkhondabi, Iman; Ardjmand, Ehsan; Young, William A; Weckman, Gary R
2017-07-01
Demand forecasting plays a vital role in resource management for governments and private companies. Considering the scarcity of water and its inherent constraints, demand management and forecasting in this domain are critically important. Several soft computing techniques have been developed over the last few decades for water demand forecasting. This study focuses on soft computing methods of water consumption forecasting published between 2005 and 2015. These methods include artificial neural networks (ANNs), fuzzy and neuro-fuzzy models, support vector machines, metaheuristics, and system dynamics. While ANNs have been superior in many short-term forecasting cases, it is still very difficult to pick a single method as the overall best. According to the literature, various methods and their hybrids are applied to water demand forecasting. However, it seems soft computing has much more to contribute to water demand forecasting. These contribution areas include, but are not limited to, various ANN architectures, unsupervised methods, deep learning, various metaheuristics, and ensemble methods. Moreover, it is found that soft computing methods are mainly used for short-term demand forecasting.
A cellular automata based FPGA realization of a new metaheuristic bat-inspired algorithm
Progias, Pavlos; Amanatiadis, Angelos A.; Spataro, William; Trunfio, Giuseppe A.; Sirakoulis, Georgios Ch.
2016-10-01
Optimization algorithms are often inspired by processes occurring in nature, such as animal behavioral patterns. The main concern with implementing such algorithms in software is the large amount of processing power they require. In contrast to software code, which can only perform calculations in a serial manner, an implementation in hardware, exploiting the inherent parallelism of single-purpose processors, can prove much more efficient in both speed and energy consumption. Furthermore, the use of Cellular Automata (CA) in such an implementation is attractive both as a model for natural processes and as a computational paradigm that maps well onto hardware. In this paper, we propose a VHDL implementation of a metaheuristic algorithm inspired by the echolocation behavior of bats. More specifically, the CA model is inspired by the metaheuristic algorithm proposed earlier in the literature, which can be considered at least as efficient as other existing optimization algorithms. The function of the FPGA implementation of our algorithm is explained in full detail, and results of our simulations are also demonstrated.
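The bat metaheuristic that the CA model draws on combines a frequency-driven velocity update with a loudness-gated local walk around the current best solution. Below is a software sketch of those standard update rules, written in Python rather than the paper's VHDL, with illustrative coefficients:

```python
import random

def sphere(x):
    # benchmark objective: sum of squares, global minimum 0 at the origin
    return sum(v * v for v in x)

def bat_algorithm(f, dim=2, n=15, iters=200, lo=-5.0, hi=5.0,
                  fmin=0.0, fmax=2.0, loud=0.9, pulse=0.3):
    random.seed(7)
    xs = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n)]
    vs = [[0.0] * dim for _ in range(n)]
    best = min(xs, key=f)[:]
    for _ in range(iters):
        for i in range(n):
            # frequency-driven velocity update toward the global best
            freq = fmin + (fmax - fmin) * random.random()
            cand = []
            for d in range(dim):
                vs[i][d] += (xs[i][d] - best[d]) * freq
                cand.append(min(hi, max(lo, xs[i][d] + vs[i][d])))
            if random.random() > pulse:
                # loudness-scaled random walk around the current best
                cand = [min(hi, max(lo, best[d] + 0.05 * loud * random.gauss(0, 1)))
                        for d in range(dim)]
            if f(cand) < f(xs[i]) and random.random() < loud:
                xs[i] = cand
            if f(xs[i]) < f(best):
                best = xs[i][:]
    return best

best = bat_algorithm(sphere)
```

Each bat's inner loop touches only its own state plus the shared best, which is exactly the locality that makes a cellular-automaton mapping onto FPGA fabric attractive.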
Method to detect substances in a body and device to apply the method
International Nuclear Information System (INIS)
Voigt, H.
1978-01-01
The method and the measuring arrangement serve to localize pellets doped with Gd2O3 lying between UO2 pellets within a reactor fuel rod. The fuel rod penetrates a homogeneous magnetic field generated between two pole shoes. The magnetic stray field caused by the doping substances is then measured by means of Hall probes (e.g. InAs) for quantitative discrimination from UO2. The position of the Gd2O3-doped pellets is determined by moving the fuel rod through the magnetic field in a direction perpendicular to the homogeneous field. The measuring signal is caused by the different susceptibility of Gd2O3 with respect to UO2. (DG) [de
International Nuclear Information System (INIS)
Terra, Andre Miguel Barge Pontes Torres
2005-01-01
The Albedo method applied to criticality calculations of nuclear reactors is characterized by following the neutron currents, allowing detailed analyses of the physical phenomena involved in the interaction of neutrons with the core-reflector set through determination of the probabilities of reflection, absorption, and transmission, and hence detailed assessment of the variation of the effective neutron multiplication factor, keff. In the present work, motivated by the excellent results reported in dissertations on thermal reactors and shielding, the Albedo methodology is described for the criticality analysis of thermal reactors using two energy groups, admitting variable core coefficients for each re-entrant current. Using the Monte Carlo code KENO IV, the relation between the total fraction of neutrons absorbed in the reactor core and the fraction of neutrons absorbed in the core without ever having entered the reflector was analyzed. As references for comparison and analysis of the results obtained by the Albedo method, the one-dimensional deterministic code ANISN (ANIsotropic SN transport code) and the Diffusion method were used. The keff results determined by the Albedo method for the type of reactor analyzed showed excellent agreement: relative errors in keff smaller than 0.78% with respect to ANISN and smaller than 0.35% with respect to the Diffusion method, demonstrating the effectiveness of the Albedo method for criticality analysis. The ease of application, simplicity, and clarity of the Albedo method make it a valuable instrument for neutronic calculations in both nonmultiplying and multiplying media. (author)
Takae, Kyohei; Onuki, Akira
2013-09-28
We develop an efficient Ewald method for molecular dynamics simulation to calculate the electrostatic interactions among charged and polar particles between parallel metallic plates, where we may apply an electric field of arbitrary size. We use the fact that the potential from the surface charges is equivalent to the sum of those from image charges and dipoles located outside the cell. We present simulation results on boundary effects in charged and polar fluids, formation of ionic crystals, and formation of dipole chains, where the applied field and the image interaction are crucial. For polar fluids, we find a large deviation from the classical Lorentz-field relation between the local field and the applied field due to pair correlations along the applied field. As general aspects, we clarify the difference between the potential-fixed and the charge-fixed boundary conditions and examine the relationship between the discrete particle description and continuum electrostatics.
Non-invasive imaging methods applied to neo- and paleo-ontological cephalopod research
Hoffmann, R.; Schultz, J. A.; Schellhorn, R.; Rybacki, E.; Keupp, H.; Gerden, S. R.; Lemanis, R.; Zachow, S.
2014-05-01
Several non-invasive methods are common practice in the natural sciences today. Here we present how they can be applied to and contribute to current topics in cephalopod (paleo-)biology. The different methods are compared in terms of the time necessary to acquire the data, the amount of data, accuracy/resolution, the minimum/maximum size of objects that can be studied, the degree of post-processing needed, and availability. The main application of the methods is seen in the morphometry and volumetry of cephalopod shells. In particular, we present a method for precise buoyancy calculation. To this end, cephalopod shells were scanned together with different reference bodies, an approach developed in the medical sciences. It is necessary to know the volume of the reference bodies, which should have absorption properties similar to those of the object of interest. Exact volumes can be obtained from surface scanning. Depending on the dimensions of the study object, different computed tomography techniques were applied.
DEFF Research Database (Denmark)
Zambach, Sine; Madsen, Bodil Nistrup
2009-01-01
By applying formal terminological methods to model an ontology within the domain of enzyme inhibition, we aim to clarify concepts and to obtain consistency. Additionally, we propose a procedure for implementing this ontology in OWL with the aim of obtaining a strict structure which can form...
Method of applying single higher order polynomial basis function over multiple domains
CSIR Research Space (South Africa)
Lysko, AA
2010-03-01
Full Text Available A novel method has been devised where one set of higher order polynomial-based basis functions can be applied over several wire segments, thus permitting to decouple the number of unknowns from the number of segments, and so from the geometrical...
Applied probabilistic methods in the field of reactor safety in Germany
International Nuclear Information System (INIS)
Heuser, F.W.
1982-01-01
Some aspects of applied reliability and risk analysis methods in nuclear safety, and the present role of both in Germany, are discussed. First, some comments on the status and applications of reliability analysis are given. Second, some conclusions that can be drawn from previous work on the German Risk Study are summarized. (orig.)
21 CFR 111.320 - What requirements apply to laboratory methods for testing and examination?
2010-04-01
... 21 Food and Drugs 2 2010-04-01 2010-04-01 false What requirements apply to laboratory methods for testing and examination? 111.320 Section 111.320 Food and Drugs FOOD AND DRUG ADMINISTRATION, DEPARTMENT OF HEALTH AND HUMAN SERVICES (CONTINUED) FOOD FOR HUMAN CONSUMPTION CURRENT GOOD MANUFACTURING...
Splendor and misery of the distorted wave method applied to heavy ions transfer reactions
International Nuclear Information System (INIS)
Mermaz, M.C.
1979-01-01
The successes and failures of the Distorted Wave Method (DWM) applied to heavy-ion transfer reactions are illustrated by a few examples: one- and multi-nucleon transfer reactions induced by 15N and 18O on a 28Si target nucleus, performed in the vicinity of the Coulomb barrier at 44 and 56 MeV incident energy, respectively
A nodal method applied to a diffusion problem with generalized coefficients
International Nuclear Information System (INIS)
Laazizi, A.; Guessous, N.
1999-01-01
In this paper, we consider a second-order neutron diffusion problem with coefficients in L∞(Ω). The nodal method of lowest order is applied to approximate the problem's solution. The approximation uses special basis functions in which the coefficients appear. The rate of convergence obtained is O(h²) in L²(Ω), with a free rectangular triangulation. (authors)
Trends in Research Methods in Applied Linguistics: China and the West.
Yihong, Gao; Lichun, Li; Jun, Lu
2001-01-01
Examines and compares current trends in applied linguistics (AL) research methods in China and the West. Reviews AL articles in four Chinese journals, from 1978-1997, and four English journals from 1985 to 1997. Articles are categorized and subcategorized. Results show that in China, AL research is heading from non-empirical toward empirical, with…
Critical path method applied to research project planning: Fire Economics Evaluation System (FEES)
Earl B. Anderson; R. Stanton Hales
1986-01-01
The critical path method (CPM) of network analysis (a) depicts precedence among the many activities in a project by a network diagram; (b) identifies critical activities by calculating their starting, finishing, and float times; and (c) displays possible schedules by constructing time charts. CPM was applied to the development of the Forest Service's Fire...
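The forward/backward pass that CPM performs can be sketched in a few lines. The four-activity network below is a hypothetical illustration, not the FEES project data: earliest starts come from a forward pass, latest starts from a backward pass, and activities with zero float form the critical path.

```python
# Critical path method: forward and backward pass over an activity network.
# Activities, durations, and precedence relations are illustrative only.
from collections import defaultdict

def cpm(durations, preds):
    """Return (earliest_start, latest_start, critical_activities)."""
    order, seen = [], set()
    def visit(a):                      # topological order via DFS
        if a in seen:
            return
        seen.add(a)
        for p in preds.get(a, []):
            visit(p)
        order.append(a)
    for a in durations:
        visit(a)
    es = {}                            # earliest start = max predecessor finish
    for a in order:
        es[a] = max((es[p] + durations[p] for p in preds.get(a, [])), default=0)
    project_end = max(es[a] + durations[a] for a in durations)
    succs = defaultdict(list)
    for a, ps in preds.items():
        for p in ps:
            succs[p].append(a)
    ls = {}                            # latest start = min successor latest start
    for a in reversed(order):          #                minus own duration
        lf = min((ls[s] for s in succs[a]), default=project_end)
        ls[a] = lf - durations[a]
    critical = [a for a in durations if es[a] == ls[a]]   # zero float
    return es, ls, critical

durations = {"A": 3, "B": 2, "C": 4, "D": 2}
preds = {"C": ["A", "B"], "D": ["C"]}
es, ls, critical = cpm(durations, preds)
```

Here B has one unit of float (it can slip without delaying the project), so the critical path is A → C → D.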
Rajabi, A; Dabiri, A
2012-01-01
Activity Based Costing (ABC) is one of the new methods that began appearing as a costing methodology in the 1990s. It calculates cost price by determining the usage of resources. In this study, the ABC method was used for calculating the cost price of remedial services in hospitals. To apply the ABC method, Shahid Faghihi Hospital was selected. First, hospital units were divided into three main departments: administrative, diagnostic, and hospitalized. Second, activity centers were defined by the activity analysis method. Third, costs of administrative activity centers were allocated to diagnostic and operational departments based on the cost driver. Finally, with regard to the usage of cost objectives from services of activity centers, the cost price of medical services was calculated. The cost price from the ABC method differs significantly from the tariff method. In addition, the high amount of indirect costs in the hospital indicates that the capacities of resources are not used properly. The cost price of remedial services is not properly calculated with the tariff method when compared with the ABC method: ABC calculates cost price by applying suitable mechanisms, whereas the tariff method is based on fixed prices. In addition, ABC provides useful information about the amount and composition of the cost price of services.
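The two-stage allocation described above can be sketched as follows. All cost figures, driver shares, and service volumes are invented for illustration and are not from the Shahid Faghihi study.

```python
# Two-stage activity-based costing sketch (hypothetical figures):
# stage 1 allocates administrative (support) costs to operating activity
# centres via a cost driver; stage 2 divides each centre's total cost by
# its activity volume to obtain a unit cost price.
admin_cost = 90_000.0
# driver shares, e.g. fraction of staff hours consumed by each centre (assumed)
driver_share = {"radiology": 0.4, "laboratory": 0.6}
direct_cost = {"radiology": 200_000.0, "laboratory": 150_000.0}
volume = {"radiology": 8_000, "laboratory": 30_000}   # services per year

unit_cost = {}
for centre in direct_cost:
    total = direct_cost[centre] + admin_cost * driver_share[centre]
    unit_cost[centre] = total / volume[centre]
```

A tariff system would instead assign a fixed price per service; comparing it against `unit_cost` is what exposes the cross-subsidies the abstract mentions.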
Enhanced Multi-Objective Energy Optimization by a Signaling Method
Soares, João; Borges, Nuno; Vale, Zita; Oliveira, P.B.
2016-01-01
In this paper three metaheuristics are used to solve a smart grid multi-objective energy management problem with conflictive design: how to maximize profits and minimize carbon dioxide (CO2) emissions, and the results compared. The metaheuristics implemented are: weighted particle swarm optimization (W-PSO), multi-objective particle swarm optimization (MOPSO) and non-dominated sorting genetic algorithm II (NSGA-II). The performance of these methods with the use of multi-dimensi...
A new effective Monte Carlo Midway coupling method in MCNP applied to a well logging problem
Energy Technology Data Exchange (ETDEWEB)
Serov, I.V.; John, T.M.; Hoogenboom, J.E
1998-12-01
The background of the Midway forward-adjoint coupling method including the black absorber technique for efficient Monte Carlo determination of radiation detector responses is described. The method is implemented in the general purpose MCNP Monte Carlo code. The utilization of the method is fairly straightforward and does not require any substantial extra expertise. The method was applied to a standard neutron well logging porosity tool problem. The results exhibit reliability and high efficiency of the Midway method. For the studied problem the efficiency gain is considerably higher than for a normal forward calculation, which is already strongly optimized by weight-windows. No additional effort is required to adjust the Midway model if the position of the detector or the porosity of the formation is changed. Additionally, the Midway method can be used with other variance reduction techniques if extra gain in efficiency is desired.
Determination of activity of I-125 applying sum-peak methods
International Nuclear Information System (INIS)
Arbelo Penna, Y.; Hernandez Rivero, A.T.; Oropesa Verdecia, P.; Serra Aguila, R.; Moreno Leon, Y.
2011-01-01
The determination of the activity of I-125 in radioactive solutions by sum-peak methods, using an n-type HPGe detector of extended range, is described. Two procedures were used for obtaining the I-125 specific activity in solution: a) an absolute method, which is independent of nuclear parameters and detector efficiency, and b) an option which considers the efficiency constant in the region of interest and involves calculations using nuclear parameters. The measurement geometries studied are solid point sources. The relative deviations between the specific activities obtained by these different procedures are not higher than 1%. Moreover, the activity of the radioactive solution was obtained by measuring it in a NIST ampoule using a CAPINTEC CRC 35R dose calibrator. The consistency of the results obtained confirms the feasibility of applying direct methods of measurement for I-125 activity determinations, which allows lower uncertainties to be achieved in comparison with relative methods of measurement. These methods are intended to be applied to the calibration of equipment and radionuclide dose calibrators currently used in clinical RIA/IRMA assays and nuclear medicine practice, respectively. (Author)
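The absolute sum-peak relation underlying procedure a) can be sketched for the I-125 case, where the two coincident photons fall in a single peak. Assuming equal detection efficiency e for both photons gives A = T + Ns²/(4·Nss), independent of e and of nuclear parameters. The count rates below are synthetic, and a real measurement would also need dead-time, background, and coincidence-geometry corrections.

```python
# Absolute sum-peak activity estimate for I-125 (Eldridge-Crowther type
# relation). With two coincident photons of equal detection efficiency e:
#   single peak  Ns  = 2*A*e*(1-e)
#   sum peak     Nss = A*e**2
#   total rate   T   = A*(1 - (1-e)**2)
# so A = T + Ns**2 / (4*Nss), with e and nuclear parameters cancelling out.

def sum_peak_activity(total_rate, single_peak, sum_peak):
    return total_rate + single_peak**2 / (4.0 * sum_peak)

A_true, e = 5000.0, 0.2          # assumed activity (Bq) and efficiency
Ns = 2 * A_true * e * (1 - e)    # single-peak rate
Nss = A_true * e**2              # sum-peak rate
T = A_true * (1 - (1 - e)**2)    # total detected rate
A_est = sum_peak_activity(T, Ns, Nss)
```

Running the relation on rates generated from a known activity recovers that activity exactly, which is the point of the nuclear-parameter-free method.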
An applied study using systems engineering methods to prioritize green systems options
Energy Technology Data Exchange (ETDEWEB)
Lee, Sonya M [Los Alamos National Laboratory; Macdonald, John M [Los Alamos National Laboratory
2009-01-01
For many years, there have been questions about the effectiveness of applying different green solutions. If you're building a home and wish to use green technologies, where do you start? While all technologies sound promising, which will perform best over time? All this has to be considered within the cost and schedule of the project. The amount of information available on the topic can be overwhelming. We seek to examine whether Systems Engineering methods can be used to help people choose and prioritize technologies that fit within their project and budget. Several methods are used to gain perspective into how to select green technologies, such as the Analytic Hierarchy Process (AHP) and Kepner-Tregoe. In our study, subjects applied these methods to analyze cost, schedule, and trade-offs. Results will document whether the experimental approach is applicable to defining system priorities for green technologies.
Economic consequences assessment for scenarios and actual accidents: do the same methods apply?
International Nuclear Information System (INIS)
Brenot, J.
1991-01-01
Methods for estimating the economic consequences of major technological accidents, and their corresponding computer codes, are briefly presented with emphasis on the basic choices. When applied to hypothetical scenarios, these methods give results that are of interest to risk managers from a decision-aiding perspective. In parallel, the various costs, and the procedures for their estimation, are reviewed for some actual accidents (Three Mile Island, Chernobyl, ...). These costs are used in a perspective of litigation and compensation. The comparison of the methods used and the cost estimates obtained for scenarios and actual accidents shows points of convergence and discrepancies, which are discussed
Non-invasive imaging methods applied to neo- and paleontological cephalopod research
Hoffmann, R.; Schultz, J. A.; Schellhorn, R.; Rybacki, E.; Keupp, H.; Gerden, S. R.; Lemanis, R.; Zachow, S.
2013-11-01
Several non-invasive methods are common practice in the natural sciences today. Here we present how they can be applied to and contribute to current topics in cephalopod (paleo-)biology. The different methods are compared in terms of the time necessary to acquire the data, the amount of data, accuracy/resolution, the minimum/maximum size of objects that can be studied, the degree of post-processing needed, and availability. The main application of the methods is seen in the morphometry and volumetry of cephalopod shells, in order to improve our understanding of the diversity and disparity, functional morphology, and biology of extinct and extant cephalopods.
Covariance methodology applied to 35S disintegration rate measurements by the CIEMAT/NIST method
International Nuclear Information System (INIS)
Koskinas, M.F.; Nascimento, T.S.; Yamazaki, I.M.; Dias, M.S.
2014-01-01
The Nuclear Metrology Laboratory (LMN) at IPEN is carrying out measurements in an LSC (Liquid Scintillation Counting) system, applying the CIEMAT/NIST method. In this context 35S is an important radionuclide for medical applications, and it is difficult to standardize by other primary methods due to its low beta-ray energy. CIEMAT/NIST is a standard technique used by most metrology laboratories to improve accuracy and speed up beta-emitter standardization. The focus of the present work was to apply the covariance methodology for determining the overall uncertainty in the 35S disintegration rate. All partial uncertainties involved in the measurements were considered, taking into account all possible correlations between each pair of them. - Highlights: ► 35S disintegration rate measured in a liquid scintillation system using the CIEMAT/NIST method. ► Covariance methodology applied to the overall uncertainty in the 35S disintegration rate. ► Monte Carlo simulation was applied to determine 35S activity in the 4πβ(PC)-γ coincidence system
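The covariance ("sandwich") propagation the abstract applies can be sketched generically: for y = f(x) with input covariance matrix C, var(y) ≈ J C Jᵀ with J the vector of partial derivatives. The measurement model and every number below are illustrative, not the LMN data.

```python
# Covariance propagation sketch for a disintegration-rate-style model
# N0 = N / (eps * m), with a correlated pair of inputs. All values invented.
import numpy as np

N, eps, m = 1.0e4, 0.85, 2.0                 # counts, efficiency, mass (assumed)
N0 = N / (eps * m)
# partial derivatives dN0/dN, dN0/deps, dN0/dm
J = np.array([1 / (eps * m), -N / (eps**2 * m), -N / (eps * m**2)])
C = np.array([[1.0e2, 0.0,    0.0   ],
              [0.0,   4.0e-4, 1.0e-4],       # eps-m covariance term included
              [0.0,   1.0e-4, 1.0e-4]])
var = J @ C @ J                              # sandwich rule: J C J^T
sigma = var**0.5
```

Dropping the off-diagonal term would understate the uncertainty here, which is exactly why the paper tracks all pairwise correlations.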
Power System Oscillation Modes Identifications: Guidelines for Applying TLS-ESPRIT Method
Gajjar, Gopal R.; Soman, Shreevardhan
2013-05-01
Fast measurements of power system quantities available through wide-area measurement systems enable direct observation of power system electromechanical oscillations. But the raw observation data need to be processed to obtain the quantitative measures required to make any inference regarding the power system state. A detailed discussion is presented of the theory behind the general problem of oscillatory mode identification. This paper presents some results on oscillation mode identification applied to a wide-area frequency measurement system. Guidelines for the selection of parameters for obtaining the most reliable results from the applied method are provided. Finally, some results on real measurements are presented with our inference on them.
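A minimal ESPRIT sketch conveys the subspace idea (this uses the plain least-squares variant; the paper's TLS-ESPRIT replaces the subspace-rotation solve with a total-least-squares step). The signal parameters are synthetic, chosen to mimic a poorly damped low-frequency mode in PMU-rate data.

```python
# Least-squares ESPRIT sketch for oscillation-mode identification.
import numpy as np

def esprit_modes(x, dt, n_modes, L=None):
    """Estimate (freq_hz, damping_per_s) pairs from a ringdown signal."""
    N = len(x)
    L = L or N // 2
    # Hankel data matrix of windowed snapshots
    H = np.array([x[i:i + L] for i in range(N - L + 1)])
    U, s, Vt = np.linalg.svd(H, full_matrices=False)
    Us = U[:, :2 * n_modes]                  # signal subspace (conjugate pairs)
    # rotational invariance: Us[:-1] @ Phi ~= Us[1:]
    Phi, *_ = np.linalg.lstsq(Us[:-1], Us[1:], rcond=None)
    poles = np.linalg.eigvals(Phi)           # z_k = exp(s_k * dt)
    freqs = np.abs(np.angle(poles)) / (2 * np.pi * dt)
    damps = np.log(np.abs(poles)) / dt
    return sorted(zip(freqs, damps))

dt = 0.02                                    # 50 samples/s, PMU-style (assumed)
t = np.arange(0, 10, dt)
x = np.exp(-0.1 * t) * np.cos(2 * np.pi * 0.5 * t)   # 0.5 Hz mode, sigma = -0.1
modes = esprit_modes(x, dt, n_modes=1)
```

On this noise-free record the pole pair is recovered essentially exactly; the parameter-selection guidelines in the paper concern how window length and model order behave once measurement noise is present.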
Multigrid method applied to the solution of an elliptic, generalized eigenvalue problem
Energy Technology Data Exchange (ETDEWEB)
Alchalabi, R.M. [BOC Group, Murray Hill, NJ (United States); Turinsky, P.J. [North Carolina State Univ., Raleigh, NC (United States)
1996-12-31
The work presented in this paper is concerned with the development of an efficient MG algorithm for the solution of an elliptic, generalized eigenvalue problem. The method is specifically applied to the multigroup neutron diffusion equation, which is discretized by utilizing the Nodal Expansion Method (NEM). The underlying relaxation method is the Power Method, also known as the Outer-Inner Method. The inner iterations are completed using multi-color line SOR, and the outer iterations are accelerated using the Chebyshev semi-iterative method. Furthermore, the MG algorithm utilizes the consistent homogenization concept to construct the restriction operator, and a form function as a prolongation operator. The MG algorithm was integrated into the reactor neutronic analysis code NESTLE, and numerical results were obtained from solving production-type benchmark problems.
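The outer "power method" iteration that the multigrid algorithm accelerates can be sketched on a toy generalized eigenproblem M·φ = (1/k)·F·φ. The matrices below are tiny hypothetical stand-ins for a discretized two-group diffusion operator, and neither the line-SOR inner iterations nor Chebyshev/multigrid acceleration are included.

```python
# Outer (power) iteration for M*phi = (1/k)*F*phi, with M the loss operator
# and F the fission source. A direct solve stands in for the inner iterations.
import numpy as np

def power_method(M, F, tol=1e-10, max_outer=500):
    phi = np.ones(M.shape[0])
    k = 1.0
    for _ in range(max_outer):
        phi_new = np.linalg.solve(M, F @ phi / k)          # "inner" solve
        k_new = k * np.sum(F @ phi_new) / np.sum(F @ phi)  # fission-source ratio
        if abs(k_new - k) < tol:
            return k_new, phi_new / np.linalg.norm(phi_new)
        k, phi = k_new, phi_new
    return k, phi / np.linalg.norm(phi)

M = np.array([[ 2.0, -0.5],
              [-0.5,  1.5]])      # hypothetical loss operator
F = np.array([[ 1.2,  0.4],
              [ 0.3,  0.8]])      # hypothetical fission operator
k_eff, phi = power_method(M, F)
```

The iteration converges at the dominance ratio of M⁻¹F, which is exactly the slow convergence the Chebyshev and multigrid machinery in the paper is designed to accelerate.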
Least Square NUFFT Methods Applied to 2D and 3D Radially Encoded MR Image Reconstruction
Song, Jiayu; Liu, Qing H.; Gewalt, Sally L.; Cofer, Gary; Johnson, G. Allan
2009-01-01
Radially encoded MR imaging (MRI) has gained increasing attention in applications such as hyperpolarized gas imaging, contrast-enhanced MR angiography, and dynamic imaging, due to its motion insensitivity and improved artifact properties. However, since the technique collects k-space samples nonuniformly, multidimensional (especially 3D) radially sampled MRI image reconstruction is challenging. The balance between reconstruction accuracy and speed becomes critical when a large data set is processed. Kaiser-Bessel gridding reconstruction has been widely used for non-Cartesian reconstruction. The objective of this work is to provide an alternative reconstruction option in high dimensions with on-the-fly kernel calculation. The work develops general multi-dimensional least-squares nonuniform fast Fourier transform (LS-NUFFT) algorithms and incorporates them into a k-space simulation and image reconstruction framework. The method is then applied to reconstruct the radially encoded k-space, although the method addresses general nonuniformity and is applicable to any non-Cartesian patterns. Performance assessments are made by comparing the LS-NUFFT based method with the conventional Kaiser-Bessel gridding method for 2D and 3D radially encoded computer simulated phantoms and physically scanned phantoms. The results show that the LS-NUFFT reconstruction method has better accuracy-speed efficiency than the Kaiser-Bessel gridding method when the kernel weights are calculated on the fly. The accuracy of the LS-NUFFT method depends on the choice of scaling factor, and it is found that for a particular conventional kernel function, using its corresponding deapodization function as the scaling factor within the LS-NUFFT framework has the potential to improve accuracy. When a cosine scaling factor is used, in particular, the LS-NUFFT method is faster than the Kaiser-Bessel gridding method because of a quasi closed-form solution. The method is successfully applied to 2D and
Applying Mathematical Optimization Methods to an ACT-R Instance-Based Learning Model.
Said, Nadia; Engelhart, Michael; Kirches, Christian; Körkel, Stefan; Holt, Daniel V
2016-01-01
Computational models of cognition provide an interface to connect advanced mathematical tools and methods to empirically supported theories of behavior in psychology, cognitive science, and neuroscience. In this article, we consider a computational model of instance-based learning, implemented in the ACT-R cognitive architecture. We propose an approach for obtaining mathematical reformulations of such cognitive models that improve their computational tractability. For the well-established Sugar Factory dynamic decision making task, we conduct a simulation study to analyze central model parameters. We show how mathematical optimization techniques can be applied to efficiently identify optimal parameter values with respect to different optimization goals. Beyond these methodological contributions, our analysis reveals the sensitivity of this particular task with respect to initial settings and yields new insights into how average human performance deviates from potential optimal performance. We conclude by discussing possible extensions of our approach as well as future steps towards applying more powerful derivative-based optimization methods.
Directory of Open Access Journals (Sweden)
Ming Zeng
2017-01-01
Full Text Available The gantry crane scheduling and storage space allocation problem in the main container yard of a railway container terminal is studied. A mixed integer programming model is formulated which comprehensively considers the handling procedures, non-crossing constraints, the safety margin and traveling time of gantry cranes, and the storage modes in the main area. A metaheuristic, the backtracking search algorithm (BSA), is then improved to solve this intractable problem. A series of computational experiments is carried out to evaluate the performance of the proposed algorithm on randomly generated cases based on practical operating conditions. The results show that the proposed algorithm can obtain near-optimal solutions within a reasonable computation time.
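A compact BSA sketch on a plain sphere function shows the algorithm's characteristic ingredients: a historical population, a randomly scaled mutation toward it, a partial crossover map, and greedy selection. The crane-scheduling encoding of the paper is omitted; all parameter choices here are generic illustration values.

```python
# Backtracking search algorithm (BSA) sketch minimizing a sphere function.
import numpy as np

rng = np.random.default_rng(1)

def bsa_minimize(f, dim, lo, hi, pop=20, iters=200):
    P = rng.uniform(lo, hi, (pop, dim))          # current population
    oldP = rng.uniform(lo, hi, (pop, dim))       # historical population
    fit = np.apply_along_axis(f, 1, P)
    for _ in range(iters):
        if rng.random() < rng.random():          # BSA's "redefine oldP" rule
            oldP = P.copy()
        oldP = oldP[rng.permutation(pop)]        # shuffle history
        F = 3.0 * rng.standard_normal()          # random mutation scale
        mutant = P + F * (oldP - P)
        # crossover map: each individual changes a random subset of dimensions
        mask = rng.random((pop, dim)) < rng.random()
        trial = np.clip(np.where(mask, mutant, P), lo, hi)
        tfit = np.apply_along_axis(f, 1, trial)
        improved = tfit < fit                    # greedy selection
        P[improved], fit[improved] = trial[improved], tfit[improved]
    best = fit.argmin()
    return P[best], fit[best]

sphere = lambda x: float(np.sum(x**2))
x_best, f_best = bsa_minimize(sphere, dim=3, lo=-5.0, hi=5.0)
```

Greedy selection makes the best fitness monotonically non-increasing, which is the property the paper's improved variant builds on for the scheduling objective.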
Nawi, Nazri Mohd.; Khan, Abdullah; Rehman, M. Z.
2015-05-01
Nature-inspired metaheuristic techniques provide derivative-free solutions to complex problems. One of the latest additions to this group of nature-inspired optimization procedures is the Cuckoo Search (CS) algorithm. Artificial Neural Network (ANN) training is an optimization task, since the goal is to find an optimal weight set for the neural network during the training process. Traditional training algorithms have limitations such as getting trapped in local minima and slow convergence. This study proposes a new technique, CSLM, which combines the best features of two known algorithms, back-propagation (BP) and the Levenberg-Marquardt (LM) algorithm, to improve the convergence speed of ANN training and avoid the local minima problem. Selected benchmark classification datasets are used for simulation. The experimental results show that the proposed cuckoo search with Levenberg-Marquardt algorithm performs better than the other algorithms used in this study.
Artificial Intelligence, Evolutionary Computing and Metaheuristics In the Footsteps of Alan Turing
2013-01-01
Alan Turing pioneered many research areas such as artificial intelligence, computability, heuristics and pattern formation. Nowadays at the information age, it is hard to imagine how the world would be without computers and the Internet. Without Turing's work, especially the core concept of Turing Machine at the heart of every computer, mobile phone and microchip today, so many things on which we are so dependent would be impossible. 2012 is the Alan Turing year -- a centenary celebration of the life and work of Alan Turing. To celebrate Turing's legacy and follow the footsteps of this brilliant mind, we take this golden opportunity to review the latest developments in areas of artificial intelligence, evolutionary computation and metaheuristics, and all these areas can be traced back to Turing's pioneer work. Topics include Turing test, Turing machine, artificial intelligence, cryptography, software testing, image processing, neural networks, nature-inspired algorithms such as bat algorithm and cuckoo sear...
Salcedo-Sanz, S.; Del Ser, J.; Landa-Torres, I.; Gil-López, S.; Portilla-Figueras, J. A.
2014-01-01
This paper presents a novel bioinspired algorithm to tackle complex optimization problems: the coral reefs optimization (CRO) algorithm. The CRO algorithm artificially simulates a coral reef, where different corals (namely, solutions to the optimization problem considered) grow and reproduce in coral colonies, fighting for space in the reef by choking out other corals. This fight for space, along with the specific characteristics of the corals' reproduction, produces a robust metaheuristic algorithm shown to be powerful for solving hard optimization problems. In this research the CRO algorithm is tested on several continuous and discrete benchmark problems, as well as in practical application scenarios (i.e., optimum mobile network deployment and off-shore wind farm design). The obtained results confirm the excellent performance of the proposed algorithm and open lines of research for further application of the algorithm to real-world problems. PMID:25147860
A Meta-Heuristic Regression-Based Feature Selection for Predictive Analytics
Directory of Open Access Journals (Sweden)
Bharat Singh
2014-11-01
Full Text Available A high-dimensional feature selection having a very large number of features with an optimal feature subset is an NP-complete problem. Because conventional optimization techniques are unable to tackle large-scale feature selection problems, meta-heuristic algorithms are widely used. In this paper, we propose a particle swarm optimization technique while utilizing regression techniques for feature selection. We then use the selected features to classify the data. Classification accuracy is used as a criterion to evaluate classifier performance, and classification is accomplished through the use of k-nearest neighbour (KNN and Bayesian techniques. Various high dimensional data sets are used to evaluate the usefulness of the proposed approach. Results show that our approach gives better results when compared with other conventional feature selection algorithms.
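The PSO-plus-KNN wrapper idea can be sketched as follows. The synthetic dataset, swarm parameters, and the stochastic binarization of particle positions are all illustrative choices, not the authors' exact formulation: features 0-1 carry the class signal while 2-4 are noise, and fitness is leave-one-out 1-NN accuracy on the selected subset.

```python
# Binary-PSO feature selection wrapped around a 1-NN classifier (sketch).
import numpy as np

rng = np.random.default_rng(0)

def knn_accuracy(X, y, mask):
    """Leave-one-out 1-NN accuracy on the selected feature subset."""
    if not mask.any():
        return 0.0
    Xs = X[:, mask]
    d = np.linalg.norm(Xs[:, None] - Xs[None, :], axis=2)
    np.fill_diagonal(d, np.inf)            # exclude the sample itself
    return float(np.mean(y[d.argmin(axis=1)] == y))

# synthetic data: 40 samples, 5 features, first two informative
y = np.repeat([0, 1], 20)
X = rng.normal(size=(40, 5))
X[:, :2] += 3.0 * y[:, None]

n_particles, n_iter, n_feat = 12, 30, X.shape[1]
pos = rng.random((n_particles, n_feat))    # selection probability per feature
vel = np.zeros_like(pos)
pbest, pbest_fit = pos.copy(), np.full(n_particles, -1.0)
gbest, gbest_fit = pos[0].copy(), -1.0

for _ in range(n_iter):
    masks = rng.random(pos.shape) < pos    # stochastic binarization
    fits = np.array([knn_accuracy(X, y, m) for m in masks])
    better = fits > pbest_fit
    pbest[better], pbest_fit[better] = pos[better], fits[better]
    if fits.max() > gbest_fit:
        gbest_fit, gbest = fits.max(), pos[fits.argmax()].copy()
        gbest_mask = masks[fits.argmax()].copy()
    r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, 0.0, 1.0)
```

The best mask found should concentrate on the informative features, illustrating why wrapper selection beats classifying on the full noisy feature set.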
Energy Technology Data Exchange (ETDEWEB)
Souza Filho, Erito M.; Bahiense, Laura; Ferreira Filho, Virgilio J.M. [Universidade Federal do Rio de Janeiro (UFRJ), RJ (Brazil). Coordenacao dos Programas de Pos-graduacao de Engenharia (COPPE); Lima, Leonardo [Centro Federal de Educacao Tecnologica Celso Sukow da Fonseca (CEFET-RJ), Rio de Janeiro, RJ (Brazil)
2008-07-01
Pipelines are known as the most reliable and economical mode of transportation for petroleum and its derivatives, especially when large amounts of products have to be pumped over large distances. In this work we address the short-term scheduling of a pipeline system comprising the distribution of several petroleum derivatives from a single oil refinery to several depots, connected to local consumer markets, through a single multi-product pipeline. We propose an integer linear programming formulation and a variable neighborhood search metaheuristic in order to compare the performance of the exact and heuristic approaches to the problem. Computational tests in C and in the MOSEL/XPRESS-MP language are performed on a real Brazilian pipeline system. (author)
Applied ecosystem analysis - a primer; the ecosystem diagnosis and treatment method
International Nuclear Information System (INIS)
Lestelle, L.C.; Mobrand, L.E.; Lichatowich, J.A.; Vogel, T.S.
1996-05-01
The aim of this document is to inform and instruct the reader about an approach to ecosystem management that is based upon salmon as an indicator species. It is intended to provide natural resource management professionals with the background information needed to answer questions about why and how to apply the approach. The methods and tools the authors describe are continually updated and refined, so this primer should be treated as a first iteration of a sequentially revised manual
An Ultrasonic Guided Wave Method to Estimate Applied Biaxial Loads (Preprint)
2011-11-01
VALIDATION A fatigue test was performed with an array of six surface-bonded PZT transducers on a 6061 aluminum plate as shown in Figure 4. The specimen...direct paths of propagation are oriented at different angles. This method is applied to experimental sparse array data recorded during a fatigue test...and the additional complication of the resulting fatigue cracks interfering with some of the direct arrivals is addressed via proper selection of
Accuracy of the Adomian decomposition method applied to the Lorenz system
International Nuclear Information System (INIS)
Hashim, I.; Noorani, M.S.M.; Ahmad, R.; Bakar, S.A.; Ismail, E.S.; Zakaria, A.M.
2006-01-01
In this paper, the Adomian decomposition method (ADM) is applied to the famous Lorenz system. The ADM yields an analytical solution in terms of a rapidly convergent infinite power series with easily computable terms. Comparisons between the decomposition solutions and fourth-order Runge-Kutta (RK4) numerical solutions are made for various time steps. In particular, we look at the accuracy of the ADM as the Lorenz system changes from a non-chaotic system to a chaotic one
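For the Lorenz system's bilinear nonlinearities x·z and x·y, the Adomian polynomials reduce to Cauchy products, so the decomposition can be sketched as a series recursion and compared against RK4 over a small time interval. The initial condition, parameters, and evaluation time below are illustrative choices.

```python
# ADM series for the Lorenz system: x' = s(y-x), y' = x(r-z)-y, z' = xy-bz.
# For these bilinear terms the Adomian polynomials are Cauchy products, so
# the decomposition generates the Taylor coefficients term by term.
import numpy as np

def adm_lorenz(x0, y0, z0, s, r, b, n_terms):
    """Series coefficients X, Y, Z with x(t) = sum_k X[k] * t**k, etc."""
    X, Y, Z = [x0], [y0], [z0]
    for k in range(n_terms - 1):
        A = sum(X[i] * Z[k - i] for i in range(k + 1))   # Adomian poly for x*z
        B = sum(X[i] * Y[k - i] for i in range(k + 1))   # Adomian poly for x*y
        X.append(s * (Y[k] - X[k]) / (k + 1))
        Y.append((r * X[k] - A - Y[k]) / (k + 1))
        Z.append((B - b * Z[k]) / (k + 1))
    return X, Y, Z

def rk4(f, u, t, dt):
    for _ in range(int(round(t / dt))):
        k1 = f(u); k2 = f(u + dt/2*k1); k3 = f(u + dt/2*k2); k4 = f(u + dt*k3)
        u = u + dt/6 * (k1 + 2*k2 + 2*k3 + k4)
    return u

s, r, b = 10.0, 28.0, 8.0/3.0        # classic chaotic parameter set
f = lambda u: np.array([s*(u[1]-u[0]), u[0]*(r-u[2])-u[1], u[0]*u[1]-b*u[2]])
X, Y, Z = adm_lorenz(1.0, 1.0, 1.0, s, r, b, n_terms=25)
t = 0.05                              # small enough for the truncated series
x_adm = sum(c * t**k for k, c in enumerate(X))
x_rk4 = rk4(f, np.array([1.0, 1.0, 1.0]), t, dt=1e-4)[0]
```

At small t the truncated series matches RK4 closely; as in the paper, accuracy degrades once t approaches the series' useful radius, which is why the ADM is typically restarted over subintervals in chaotic regimes.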
DEFF Research Database (Denmark)
Filyushkina, Anna; Strange, Niels; Löf, Magnus
2018-01-01
This study applied a structured expert elicitation technique, the Delphi method, to identify the impacts of five forest management alternatives and several forest characteristics on the preservation of biodiversity and habitats in the boreal zone of the Nordic countries. The panel of experts...... as a valuable addition to on-going empirical and modeling efforts. The findings could assist forest managers in developing forest management strategies that generate benefits from timber production while taking into account the trade-offs with biodiversity goals....
Modified Method of Simplest Equation Applied to the Nonlinear Schrödinger Equation
Directory of Open Access Journals (Sweden)
Vitanov Nikolay K.
2018-03-01
Full Text Available We consider an extension of the methodology of the modified method of simplest equation to the case of use of two simplest equations. The extended methodology is applied for obtaining exact solutions of model nonlinear partial differential equations for deep water waves: the nonlinear Schrödinger equation. It is shown that the methodology works also for other equations of the nonlinear Schrödinger kind.
Modified Method of Simplest Equation Applied to the Nonlinear Schrödinger Equation
Vitanov, Nikolay K.; Dimitrova, Zlatinka I.
2018-03-01
We consider an extension of the methodology of the modified method of simplest equation to the case of use of two simplest equations. The extended methodology is applied for obtaining exact solutions of model nonlinear partial differential equations for deep water waves: the nonlinear Schrödinger equation. It is shown that the methodology works also for other equations of the nonlinear Schrödinger kind.
Applied Ecosystem Analysis - - a Primer : EDT the Ecosystem Diagnosis and Treatment Method.
Energy Technology Data Exchange (ETDEWEB)
Lestelle, Lawrence C.; Mobrand, Lars E.
1996-05-01
The aim of this document is to inform and instruct the reader about an approach to ecosystem management that is based upon salmon as an indicator species. It is intended to provide natural resource management professionals with the background information needed to answer questions about why and how to apply the approach. The methods and tools the authors describe are continually updated and refined, so this primer should be treated as a first iteration of a sequentially revised manual.
The LTSN method used in transport equation, applied in nuclear engineering problems
International Nuclear Information System (INIS)
Borges, Volnei; Vilhena, Marco Tulio de
2002-01-01
The LTSN method solves the SN equations analytically by applying the Laplace transform in the spatial variable. This methodology is used to determine the scalar flux for neutrons and photons, the absorbed dose rate, buildup factors, and the power for a heterogeneous planar slab. The procedure leads to transcendental equations for the effective multiplication factor, the critical thickness, and the atomic density. In this work numerical results are reported for a multigroup problem in a heterogeneous slab. (author)
Machine Learning Method Applied in Readout System of Superheated Droplet Detector
Liu, Yi; Sullivan, Clair Julia; d'Errico, Francesco
2017-07-01
Direct readability is one advantage of superheated droplet detectors in neutron dosimetry. Utilizing this distinct characteristic, an imaging readout system analyzes images of the detector for neutron dose readout. To improve the accuracy and precision of the algorithms in the imaging readout system, machine learning algorithms were developed. Deep learning neural network and support vector machine algorithms were applied and compared with the generally used Hough transform and curvature analysis methods. The machine learning methods showed much higher accuracy and better precision in recognizing circular gas bubbles.
Translation Methods Applied in Translating Quotations in “the Secret” by Rhonda
FEBRIANTI, VICKY
2014-01-01
Keywords: Translation Methods, The Secret, Quotations. Translation helps humans get information written in any language, even when it is written in foreign languages; therefore translation happens in printed media. Books have been popular printed media. The Secret, written by Rhonda Byrne, is a popular self-help book which has been translated into 50 languages including Indonesian (“The Secret”, n.d., para. 5-6). This study is meant to find out the translation methods applied in The Secret. The wr...
Development of a tracking method for augmented reality applied to nuclear plant maintenance work
International Nuclear Information System (INIS)
Shimoda, Hiroshi; Maeshima, Masayuki; Nakai, Toshinori; Bian, Zhiqiang; Ishii, Hirotake; Yoshikawa, Hidekazu
2005-01-01
In this paper, a plant maintenance support method is described which employs a state-of-the-art information technology, Augmented Reality (AR), in order to improve the efficiency of NPP maintenance work and to prevent human error. Although AR has great potential to support various kinds of work in the real world, it is difficult to apply it to actual work support because the tracking method is the bottleneck for practical use. In this study, a bar-code marker tracking method is proposed to apply an AR system to maintenance work support in the NPP field. The proposed method calculates the user's position and orientation in real time from two long markers captured by the user-mounted camera. The markers can be easily pasted on the pipes in the plant field, and they can be easily recognized at long distance in order to reduce the number of markers pasted in the work field. Experiments were conducted in a laboratory and in the plant field to evaluate the proposed method. The results show that (1) fast and stable tracking can be realized, (2) the position error in the camera view is less than 1%, which is almost perfect given the limitation of camera resolution, and (3) it is relatively difficult to capture two markers in one camera view, especially at short distance
Applying the response matrix method for solving coupled neutron diffusion and transport problems
International Nuclear Information System (INIS)
Sibiya, G.S.
1980-01-01
The numerical determination of the flux and power distribution in the design of large power reactors is quite a time-consuming procedure if the space under consideration is to be subdivided into very fine meshes. Many computing methods applied in reactor physics (such as the finite-difference method) require considerable computing time. In this thesis it is shown that the response matrix method can be successfully used as an alternative approach to solving the two-dimensional diffusion equation. Furthermore, it is shown that sufficient accuracy is achieved by assuming a linear space dependence of the neutron currents on the boundaries of the geometries defined for the given space. (orig.) [de
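For context, the "fine mesh" baseline the response matrix method is compared against can be sketched in one dimension: a fixed-source diffusion solve by central finite differences, checked against the analytic solution. The cross sections, source and slab width below are made-up values, and the response matrix method itself is not reproduced here.

```python
import numpy as np

# solve -D phi'' + Sa phi = S on (0, L) with phi(0) = phi(L) = 0
D, Sa, S, L, N = 1.0, 0.5, 1.0, 10.0, 400      # illustrative constants
h = L / N
x = np.linspace(0.0, L, N + 1)

# tridiagonal finite-difference operator on the interior nodes
A = np.zeros((N - 1, N - 1))
for i in range(N - 1):
    A[i, i] = 2 * D / h**2 + Sa
    if i > 0:
        A[i, i - 1] = -D / h**2
    if i < N - 2:
        A[i, i + 1] = -D / h**2
phi = np.zeros(N + 1)
phi[1:-1] = np.linalg.solve(A, np.full(N - 1, S))

# analytic solution for a constant source in a homogeneous slab
k = np.sqrt(Sa / D)
exact = (S / Sa) * (1 - np.cosh(k * (x - L / 2)) / np.cosh(k * L / 2))
err = float(np.max(np.abs(phi - exact)))
```

The dense solve of an (N-1)x(N-1) system is exactly the cost that grows quickly as the mesh is refined, which motivates coarse-mesh alternatives such as the response matrix method.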
Garcia, Diego; Moro, Claudia Maria Cabral; Cicogna, Paulo Eduardo; Carvalho, Deborah Ribeiro
2013-01-01
Clinical guidelines are documents that assist healthcare professionals, facilitating and standardizing diagnosis, management, and treatment in specific areas. Computerized guidelines, as decision support systems (DSS), attempt to increase the performance of tasks and facilitate the use of guidelines. Most DSS are not integrated into the electronic health record (EHR), requiring some degree of rework, especially related to data collection. This study's objective was to present a method for integrating clinical guidelines into the EHR. The study first developed a way to identify the data and rules contained in the guidelines, and then incorporated the rules into an archetype-based EHR. The proposed method was tested on anemia treatment in the Chronic Kidney Disease Guideline. The phases of the method are: data and rule identification; archetype elaboration; rule definition and inclusion in an inference engine; and DSS-EHR integration and validation. The main feature of the proposed method is that it is generic and can be applied to any type of guideline.
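The "rules in an inference engine" phase can be sketched with a single made-up rule. The thresholds and recommendations below are illustrative only, NOT taken from the actual CKD guideline, and the records stand in for the fields an archetype-based EHR might expose.

```python
# Hypothetical anemia rule applied to EHR-like records (illustrative values).

def anemia_rule(record):
    """Return a recommendation string for a made-up Hb/ferritin rule."""
    hb, ferritin = record["hb_g_dl"], record["ferritin_ng_ml"]
    if hb >= 11.0:
        return "no action"
    if ferritin < 100.0:
        return "evaluate iron supplementation"
    return "consider ESA therapy"

patients = [
    {"id": 1, "hb_g_dl": 12.1, "ferritin_ng_ml": 250.0},
    {"id": 2, "hb_g_dl": 9.8,  "ferritin_ng_ml": 40.0},
    {"id": 3, "hb_g_dl": 10.2, "ferritin_ng_ml": 310.0},
]
advice = {p["id"]: anemia_rule(p) for p in patients}
```

The point of the integration method is that the rule reads the same structured fields the EHR already stores, so no separate data collection step is needed.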
Lessons learned applying CASE methods/tools to Ada software development projects
Blumberg, Maurice H.; Randall, Richard L.
1993-01-01
This paper describes the lessons learned from introducing CASE methods/tools into organizations and applying them to actual Ada software development projects. This paper will be useful to any organization planning to introduce a software engineering environment (SEE) or evolving an existing one. It contains management level lessons learned, as well as lessons learned in using specific SEE tools/methods. The experiences presented are from Alpha Test projects established under the STARS (Software Technology for Adaptable and Reliable Systems) project. They reflect the front end efforts by those projects to understand the tools/methods, initial experiences in their introduction and use, and later experiences in the use of specific tools/methods and the introduction of new ones.
Applying some methods to process the data coming from the nuclear reactions
International Nuclear Information System (INIS)
Suleymanov, M.K.; Abdinov, O.B.; Belashev, B.Z.
2010-01-01
Full text: Methods for a posteriori enhancement of spectral-line resolution are proposed to process data coming from nuclear reactions. The methods have been applied to data from nuclear reactions at high energies and make it possible to obtain more detailed information on the structure of the spectra of particles emitted in these reactions. Nuclear reactions are the main source of information on the structure and physics of atomic nuclei. Usually the spectra of the reaction fragments are complex, so it is not simple to extract the information needed for an investigation. In the talk we discuss methods for a posteriori enhancement of spectral-line resolution, which could be useful for processing complex data from nuclear reactions. We consider the Fourier transformation method and the maximum entropy method. Complex structures were identified by these methods: at least two selected points are indicated. Recently we presented a talk showing the results of analyzing the structure of the pseudorapidity spectra of charged relativistic particles with ≥ 0.7 measured in Au+Em and Pb+Em collisions at AGS and SPS energies, using the Fourier transformation and maximum entropy methods. The dependences of these spectra on the number of fast target protons were studied. The distributions visually showed a plateau and a shoulder, that is, at least three selected points. The plateaus become wider in Pb+Em reactions. The existence of a plateau is necessary for parton models. The maximum entropy method could confirm the existence of the plateau and the shoulder in the distributions. The figure shows the results of applying the maximum entropy method: the method indicates several clearly selected points, some of which coincide with those observed visually. We would like to note that the Fourier transformation method could not
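The Fourier route to a posteriori resolution enhancement can be sketched on synthetic data: two narrow peaks that merge under a broad instrumental response are separated again by regularized (Wiener-style) deconvolution. All numbers below are invented, and the maximum entropy variant is not shown.

```python
import numpy as np

n = 256
x = np.arange(n)
# two narrow "lines" that the broad response will merge into one bump
true = (np.exp(-0.5 * ((x - 110) / 1.5) ** 2)
        + np.exp(-0.5 * ((x - 130) / 1.5) ** 2))
kernel = np.exp(-0.5 * (np.minimum(x, n - x) / 12.0) ** 2)  # wrap-around response
kernel /= kernel.sum()
Kf = np.fft.fft(kernel)
observed = np.fft.ifft(np.fft.fft(true) * Kf).real          # blurred spectrum

eps = 1e-6                                                  # regularisation strength
est = np.fft.ifft(np.fft.fft(observed) * np.conj(Kf)
                  / (np.abs(Kf) ** 2 + eps)).real           # Wiener deconvolution

def count_peaks(y, frac=0.5):
    """Count strict local maxima above frac * max(y)."""
    t = frac * y.max()
    return int(np.sum((y[1:-1] > y[:-2]) & (y[1:-1] > y[2:]) & (y[1:-1] > t)))

merged, resolved = count_peaks(observed), count_peaks(est)
```

With measurement noise the regularisation constant must grow, and the recoverable resolution shrinks accordingly; that trade-off is where maximum entropy approaches become attractive.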
Intestinal colic in newborn babies: incidence and methods of proceeding applied by parents
Directory of Open Access Journals (Sweden)
Anna Lewandowska
2017-06-01
Full Text Available Introduction: Intestinal colic is one of the more frequent complaints that general practitioners and paediatricians deal with in their work. 10-40% of formula-fed babies and 10-20% of breast-fed babies are stricken by this complaint. A colic attack appears suddenly and very quickly causes an energetic, squeaky cry or even a scream. Colic attacks last for a few minutes and appear every 2-3 hours, usually in the evenings. Specialist literature provides numerous definitions of intestinal colic. The concept was introduced into paediatric textbooks for the first time over 250 years ago. One of the most accurate definitions describes colic as recurring attacks of intensive cry and anxiety lasting for more than 3 hours a day, 3 days a week, over a period of 3 weeks. Care of a baby suffering from intestinal colic causes numerous problems and anxiety among parents; therefore knowledge of effective methods to combat this complaint is a challenge for contemporary neonatology and paediatrics. The aim of the study is to estimate the incidence of intestinal colic in formula-fed and breast-fed newborn babies, as well as to assess the methods of proceeding applied by parents and analyze their effectiveness. Material and methods: The research involved 100 breast-fed and 100 formula-fed newborn babies, and their parents. The research method applied in the study was a diagnostic survey conducted by means of a questionnaire. Results: Among the examined newborn babies that were breast fed, 43% have experienced intestinal colic, while among those formula fed, 30% have suffered from it. The study involved 44% newborn female babies and 56% male babies. 52% of mothers were 30-34 years old, 30% 35-59 years old, and 17% 25-59 years old. When it comes to families, the most numerous was the group in a good financial situation (60%). The second most numerous group was that in an average financial situation (40%). All the respondents claimed that they had knowledge of intestinal colic, and the main source of knowledge
Should methods of correction for multiple comparisons be applied in pharmacovigilance?
Directory of Open Access Journals (Sweden)
Lorenza Scotti
2015-12-01
Full Text Available Purpose. In pharmacovigilance, spontaneous reporting databases are devoted to the early detection of adverse event 'signals' of marketed drugs. A common limitation of these systems is the wide number of concurrently investigated associations, implying a high probability of generating positive signals simply by chance. However, it is not clear whether methods that adjust for the multiple testing problem are needed when at least some of the drug-outcome relationships under study are already known. To this aim we applied a robust estimation method for the FDR (rFDR), particularly suitable in the pharmacovigilance context. Methods. We exploited the data available for the SAFEGUARD project to apply the rFDR estimation method to detect potential false positive signals of adverse reactions attributable to the use of non-insulin blood glucose lowering drugs. Specifically, the number of signals generated from the conventional disproportionality measures was compared before and after the application of the rFDR adjustment method. Results. Among the 311 evaluable pairs (i.e., drug-event pairs with at least one adverse event report), 106 (34%) signals were considered significant in the conventional analysis. Among them, one was classified as a false positive signal by the rFDR method. Conclusions. The results of this study suggest that when a restricted number of drug-outcome pairs is considered and warnings about some of them are already known, multiple comparison methods for recognizing false positive signals are not as useful as theoretical considerations would suggest.
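The multiplicity problem the paper addresses can be sketched with the classic Benjamini-Hochberg step-up procedure, of which the paper's robust FDR estimator is a refinement. The p-values below are simulated (the number of true signals is an arbitrary assumption), not SAFEGUARD data; only the pair count 311 is taken from the abstract.

```python
import random

random.seed(7)
m = 311                                    # number of drug-event pairs, as in the paper
true_signals = set(range(30))              # assumed pairs with a real effect
pvals = [random.uniform(0.0, 0.001) if i in true_signals else random.random()
         for i in range(m)]

def benjamini_hochberg(pvals, q=0.05):
    """Return the index set rejected at FDR level q (step-up procedure)."""
    order = sorted(range(len(pvals)), key=lambda i: pvals[i])
    k = 0
    for rank, i in enumerate(order, start=1):
        if pvals[i] <= q * rank / len(pvals):
            k = rank                        # largest rank satisfying the bound
    return set(order[:k])

rejected = benjamini_hochberg(pvals)
```

Without the correction, testing 311 pairs at the 0.05 level would be expected to flag roughly 15 null pairs by chance alone; the step-up bound keeps the expected proportion of false discoveries near q instead.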
Energy Technology Data Exchange (ETDEWEB)
Tumelero, Fernanda, E-mail: fernanda.tumelero@yahoo.com.br [Universidade Federal do Rio Grande do Sul (UFRGS), Porto Alegre, RS (Brazil). Programa de Pos-Graduacao em Engenharia Mecanica; Petersen, Claudio Z.; Goncalves, Glenio A.; Lazzari, Luana, E-mail: claudiopeteren@yahoo.com.br, E-mail: gleniogoncalves@yahoo.com.br, E-mail: luana-lazzari@hotmail.com [Universidade Federal de Pelotas (DME/UFPEL), Capao do Leao, RS (Brazil). Instituto de Fisica e Matematica
2015-07-01
In this work, we present a solution of the Neutron Point Kinetics Equations with temperature feedback effects applying the Polynomial Approach Method. For the solution, we consider one and six groups of delayed neutron precursors with temperature feedback effects and constant reactivity. The main idea is to expand the neutron density, the delayed neutron precursors and the temperature as a power series, considering the reactivity as an arbitrary function of time in a relatively short time interval around an ordinary point. In the first interval one applies the initial conditions of the problem, and analytical continuation is used to determine the solutions of the next intervals. With the application of the Polynomial Approximation Method it is possible to overcome the stiffness problem of the equations. In such a way, one varies the time step size of the Polynomial Approach Method and performs an analysis of the precision and computational time. Moreover, we compare the method with different types of approximations (linear, quadratic and cubic) of the power series. The neutron density and temperature obtained by numerical simulations with the linear approximation are compared with results in the literature. (author)
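The power-series stepping idea can be shown in miniature for one delayed group, constant reactivity and no temperature feedback: the Taylor coefficients of the neutron density and precursor concentration follow from a simple recursion, and each step restarts the series (the "analytical continuation"). The kinetics parameters below are illustrative, not those of the paper.

```python
beta, lam, Lam = 0.0065, 0.08, 1e-4   # delayed fraction, decay const, generation time

def step(n, c, rho, h, terms=12):
    """Advance (n, C) by h using recursively generated Taylor coefficients:
    n' = ((rho - beta)/Lam) n + lam C,  C' = (beta/Lam) n - lam C."""
    a, b = [n], [c]
    for k in range(terms):
        a.append(((rho - beta) / Lam * a[k] + lam * b[k]) / (k + 1))
        b.append((beta / Lam * a[k] - lam * b[k]) / (k + 1))
    n1 = sum(ak * h**k for k, ak in enumerate(a))
    c1 = sum(bk * h**k for k, bk in enumerate(b))
    return n1, c1

# critical reactor: the steady state (n, C) = (1, beta/(lam*Lam)) must persist
n, c = 1.0, beta / (lam * Lam)
for _ in range(100):
    n, c = step(n, c, rho=0.0, h=0.01)
steady_drift = abs(n - 1.0)

# a small positive step reactivity must make the density grow
n2, c2 = 1.0, beta / (lam * Lam)
for _ in range(100):
    n2, c2 = step(n2, c2, rho=0.001, h=0.01)
```

The stiffness shows up in the prompt term (rho - beta)/Lam: the series converges only while h times that rate stays modest, which is why the step size analysis in the paper matters.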
Power secant method applied to natural frequency extraction of Timoshenko beam structures
Directory of Open Access Journals (Sweden)
C.A.N. Dias
Full Text Available This work deals with an improved plane frame formulation whose exact dynamic stiffness matrix (DSM) has a null determinant only at the natural frequencies. In comparison with the classical DSM, the formulation presented here has some major advantages: local mode shapes are preserved in the formulation, so that, for any positive frequency, the DSM will never be ill-conditioned; and, in the absence of poles, it is possible to employ the secant method for a more computationally efficient eigenvalue extraction procedure. Applying the procedure to the more general case of Timoshenko beams, we introduce a new technique, named "power deflation", that makes the secant method suitable for the transcendental nonlinear eigenvalue problems based on the improved DSM. In order to avoid overflow occurrences that can hinder the secant method iterations, limiting frequencies are formulated, with scaling also applied to the eigenvalue problem. Comparisons with results available in the literature demonstrate the strength of the proposed method. Computational efficiency is compared with solutions obtained both by FEM and by the Wittrick-Williams algorithm.
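The "secant method on a transcendental frequency function" idea can be shown in miniature. The classic clamped-free Euler-Bernoulli characteristic equation cos(x)cosh(x) + 1 = 0 stands in here for the (much more involved) Timoshenko DSM determinant; note that cosh illustrates exactly the overflow risk the paper's limiting frequencies guard against.

```python
import math

def f(x):
    """Clamped-free beam characteristic equation; roots give the frequencies."""
    return math.cos(x) * math.cosh(x) + 1.0

def secant(f, x0, x1, tol=1e-12, itmax=60):
    """Secant iteration; no derivative of f is needed."""
    for _ in range(itmax):
        fx0, fx1 = f(x0), f(x1)
        if abs(fx1) < tol or fx1 == fx0:
            return x1
        x0, x1 = x1, x1 - fx1 * (x1 - x0) / (fx1 - fx0)
    return x1

root = secant(f, 1.5, 2.0)     # first nondimensional frequency, known ~= 1.8751
```

The known first root is 1.8751040687...; because the improved DSM of the paper has no poles, the same bracket-free iteration can be applied directly to its determinant.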
International Nuclear Information System (INIS)
Tumelero, Fernanda; Petersen, Claudio Z.; Goncalves, Glenio A.; Lazzari, Luana
2015-01-01
In this work, we present a solution of the Neutron Point Kinetics Equations with temperature feedback effects applying the Polynomial Approach Method. For the solution, we consider one and six groups of delayed neutron precursors with temperature feedback effects and constant reactivity. The main idea is to expand the neutron density, the delayed neutron precursors and the temperature as a power series, considering the reactivity as an arbitrary function of time in a relatively short time interval around an ordinary point. In the first interval one applies the initial conditions of the problem, and analytical continuation is used to determine the solutions of the next intervals. With the application of the Polynomial Approximation Method it is possible to overcome the stiffness problem of the equations. In such a way, one varies the time step size of the Polynomial Approach Method and performs an analysis of the precision and computational time. Moreover, we compare the method with different types of approximations (linear, quadratic and cubic) of the power series. The neutron density and temperature obtained by numerical simulations with the linear approximation are compared with results in the literature. (author)
International Nuclear Information System (INIS)
Vianna Filho, Alfredo Marques
2009-01-01
The economic equipment replacement problem is a central question in nuclear engineering. On the one hand, new equipment is more attractive given its better performance, better reliability, lower maintenance cost, etc. New equipment, however, requires a higher initial investment. On the other hand, old equipment represents the opposite, with lower performance, lower reliability and especially higher maintenance costs, but in contrast having lower financial and insurance costs. The weighting of all these costs can be made with deterministic and probabilistic methods applied to the study of equipment replacement. Two distinct types of problems will be examined: substitution imposed by wear and substitution imposed by failures. To solve the problem of nuclear system substitution imposed by wear, deterministic methods are discussed; to solve the problem of substitution imposed by failures, probabilistic methods are discussed. The aim of this paper is to present a methodological framework for choosing the most useful method applied to the problem of nuclear system substitution. (author)
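A standard deterministic treatment of replacement-by-wear is to choose the age that minimises the average annual cost: purchase price minus salvage value plus cumulative maintenance, divided by years in service. The sketch below uses made-up cost figures, not data from the paper.

```python
# Illustrative costs: purchase price, resale value after n years, and the
# maintenance cost incurred during year n (all monetary units are arbitrary).
purchase = 100.0
salvage = {1: 60.0, 2: 40.0, 3: 25.0, 4: 15.0, 5: 10.0}
maintenance = {1: 5.0, 2: 10.0, 3: 20.0, 4: 35.0, 5: 55.0}

def avg_annual_cost(n):
    """Average cost per year if the equipment is replaced after n years."""
    total = purchase - salvage[n] + sum(maintenance[k] for k in range(1, n + 1))
    return total / n

costs = {n: avg_annual_cost(n) for n in salvage}
best_age = min(costs, key=costs.get)
```

With these numbers the average annual cost falls until year 3 (capital cost is being spread) and rises afterwards (maintenance dominates), so the economic life is 3 years; the failure-driven case replaces the fixed maintenance schedule with expected failure costs.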
Solution and study of nodal neutron transport equation applying the LTSN-DiagExp method
International Nuclear Information System (INIS)
Hauser, Eliete Biasotto; Pazos, Ruben Panta; Vilhena, Marco Tullio de; Barros, Ricardo Carvalho de
2003-01-01
In this paper we report advances in the three-dimensional nodal discrete-ordinates approximations of the neutron transport equation for Cartesian geometry. We use the combined collocation method for the angular variables and a nodal approach for the spatial variables. By nodal approach we mean the iterated transverse integration of the SN equations. This procedure leads to a set of one-dimensional averaged angular fluxes in each spatial variable. The resulting system of equations is solved with the LTSN method, first applying the Laplace transform to the set of nodal SN equations and then obtaining the solution by symbolic computation. We include the LTSN method by diagonalization to solve the nodal neutron transport equation and then we outline the convergence of these nodal-LTSN approximations with the help of a norm associated with the quadrature formula used to approximate the integral term of the neutron transport equation. (author)
Artificial intelligence methods applied for quantitative analysis of natural radioactive sources
International Nuclear Information System (INIS)
Medhat, M.E.
2012-01-01
Highlights: ► Basic description of artificial neural networks. ► Natural gamma-ray sources and the problem of their detection. ► Application of a neural network for peak detection and activity determination. - Abstract: The artificial neural network (ANN) is one of the artificial intelligence methods used for modeling and handling uncertainty in different applications. The objective of the proposed work was to apply ANNs to identify isotopes and to predict the uncertainties of their activities for some natural radioactive sources. The method was tested by analyzing gamma-ray spectra emitted from natural radionuclides in soil samples, detected by high-resolution gamma-ray spectrometry based on HPGe (high-purity germanium). The principle of the suggested method is described, including the definition of the relevant input parameters, input data scaling and network training. There is satisfactory agreement between the obtained and predicted results using the neural network.
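The spectrum-in, quantity-out idea can be sketched with a toy one-hidden-layer network in plain NumPy, trained to read the centroid of a synthetic Gaussian "photopeak" out of a 32-channel spectrum. This is a drastically reduced stand-in for the HPGe identification task: the architecture, channel count and training data are all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
channels = np.arange(32)

def spectrum(center):
    """Synthetic single-peak spectrum (unit-height Gaussian, width 2 channels)."""
    return np.exp(-0.5 * ((channels - center) / 2.0) ** 2)

centers = rng.uniform(5, 27, size=200)
X = np.stack([spectrum(c) for c in centers])          # (200, 32) training spectra
y = (centers - 16.0) / 16.0                           # normalised centroid target

W1 = rng.normal(0, 0.3, (32, 16)); b1 = np.zeros(16)  # hidden layer
W2 = rng.normal(0, 0.3, (16, 1));  b2 = np.zeros(1)   # output layer
lr, losses = 0.05, []
for epoch in range(300):
    H = np.tanh(X @ W1 + b1)                          # forward pass
    pred = (H @ W2 + b2).ravel()
    err = pred - y
    losses.append(float(np.mean(err ** 2)))
    g_pred = 2 * err[:, None] / len(y)                # backpropagation
    gW2, gb2 = H.T @ g_pred, g_pred.sum(0)
    gH = g_pred @ W2.T * (1 - H ** 2)
    gW1, gb1 = X.T @ gH, gH.sum(0)
    W2 -= lr * gW2; b2 -= lr * gb2; W1 -= lr * gW1; b1 -= lr * gb1
```

A real application would feed thousands of channels, several peaks and counting noise, and train separate outputs for isotope identity and activity uncertainty, as the abstract describes.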
Scalable Methods for Eulerian-Lagrangian Simulation Applied to Compressible Multiphase Flows
Zwick, David; Hackl, Jason; Balachandar, S.
2017-11-01
Multiphase flows can be found in countless areas of physics and engineering. Many of these flows can be classified as dispersed two-phase flows, meaning that there are solid particles dispersed in a continuous fluid phase. A common technique for simulating such flow is the Eulerian-Lagrangian method. While useful, this method can suffer from scaling issues on larger problem sizes that are typical of many realistic geometries. Here we present scalable techniques for Eulerian-Lagrangian simulations and apply it to the simulation of a particle bed subjected to expansion waves in a shock tube. The results show that the methods presented here are viable for simulation of larger problems on modern supercomputers. This material is based upon work supported by the National Science Foundation Graduate Research Fellowship under Grant No. DGE-1315138. This work was supported in part by the U.S. Department of Energy under Contract No. DE-NA0002378.
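The per-particle kernel of an Eulerian-Lagrangian step is grid-to-particle interpolation followed by a particle advance. The sketch below uses bilinear interpolation on a uniform grid and forward Euler, with a linear velocity field u = (x, y) chosen so the interpolation is exact; the grid size, field and time step are illustrative, and none of the paper's scalability machinery is shown.

```python
import numpy as np

nx = ny = 17
h = 1.0 / (nx - 1)
grid = np.linspace(0.0, 1.0, nx)
Ux = np.tile(grid, (ny, 1))            # u(x, y) = x, stored as U[j, i] with j ~ y
Uy = np.tile(grid[:, None], (1, nx))   # v(x, y) = y

def interp(F, px, py):
    """Bilinear interpolation of grid field F at particle positions (px, py)."""
    i = np.clip((px / h).astype(int), 0, nx - 2)
    j = np.clip((py / h).astype(int), 0, ny - 2)
    fx, fy = px / h - i, py / h - j
    return ((1 - fx) * (1 - fy) * F[j, i] + fx * (1 - fy) * F[j, i + 1]
            + (1 - fx) * fy * F[j + 1, i] + fx * fy * F[j + 1, i + 1])

rng = np.random.default_rng(1)
px = rng.uniform(0.1, 0.8, 1000)       # Lagrangian particle cloud
py = rng.uniform(0.1, 0.8, 1000)
px0, py0 = px.copy(), py.copy()
interp_err = float(np.max(np.abs(interp(Ux, px, py) - px)))  # exact for linear field

dt = 0.01
for _ in range(10):                    # explicit particle advance
    ux, uy = interp(Ux, px, py), interp(Uy, px, py)
    px, py = px + dt * ux, py + dt * uy
```

The scaling difficulty the paper targets arises precisely here: particles migrate between the subdomains that own the grid, so the interpolation stencil and the particle data must be exchanged across processor boundaries.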
Relativistic convergent close-coupling method applied to electron scattering from mercury
International Nuclear Information System (INIS)
Bostock, Christopher J.; Fursa, Dmitry V.; Bray, Igor
2010-01-01
We report on the extension of the recently formulated relativistic convergent close-coupling (RCCC) method to accommodate two-electron and quasi-two-electron targets. We apply the theory to electron scattering from mercury and obtain differential and integrated cross sections for elastic and inelastic scattering. We compare with previous nonrelativistic convergent close-coupling (CCC) calculations and, for a number of transitions, obtain significantly better agreement with experiment. The RCCC method is able to resolve structure in the integrated cross sections for the energy regime in the vicinity of the excitation thresholds for the (6s6p) 3P0,1,2 states. These cross sections are associated with the formation of negative-ion (Hg-) resonances that could not be resolved with the nonrelativistic CCC method. The RCCC results are compared with experiment and other relativistic theories.
A reflective lens: applying critical systems thinking and visual methods to ecohealth research.
Cleland, Deborah; Wyborn, Carina
2010-12-01
Critical systems methodology has been advocated as an effective and ethical way to engage with the uncertainty and conflicting values common to ecohealth problems. We use two contrasting case studies, coral reef management in the Philippines and national park management in Australia, to illustrate the value of critical systems approaches in exploring how people respond to environmental threats to their physical and spiritual well-being. In both cases, we used visual methods--participatory modeling and rich picturing, respectively. The critical systems methodology, with its emphasis on reflection, guided an appraisal of the research process. A discussion of these two case studies suggests that visual methods can be usefully applied within a critical systems framework to offer new insights into ecohealth issues across a diverse range of socio-political contexts. With this article, we hope to open up a conversation with other practitioners to expand the use of visual methods in integrated research.
International Nuclear Information System (INIS)
Suzuki, Mitsutoshi; Hori, Masato; Asou, Ryoji; Usuda, Shigekazu
2006-01-01
The multiscale statistical process control (MSSPC) method is applied to clarify the elements of material unaccounted for (MUF) in large-scale reprocessing plants using numerical calculations. Continuous wavelet functions are used to decompose the process data, which simulate batch operation superimposed with various types of disturbance, and the disturbance components included in the data are separated in time and frequency. The MSSPC diagnosis is applied to distinguish abnormal events in the process data and shows how to detect abrupt and protracted diversions using principal component analysis. The quantitative performance of MSSPC for the time-series data is shown with average run lengths given by Monte Carlo simulation, for comparison with the non-detection probability β. Recent discussion about bias corrections in material balances is introduced, and another approach is presented to evaluate MUF without assuming the measurement error model. (author)
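The wavelet side of the idea fits in a few lines: one level of a Haar decomposition splits a batch signal into a trend and a detail channel, and an abrupt shift (a stand-in for an "abrupt diversion") appears as a single large detail coefficient. The signal below is synthetic, not plant data, and the full MSSPC adds PCA-based monitoring on top of many such scales.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 128
signal = np.sin(2 * np.pi * np.arange(n) / n)   # smooth batch profile (made up)
signal[65:] += 0.8                               # abrupt shift inside pair (64, 65)
signal += rng.normal(0, 0.01, n)                 # measurement noise

pairs = signal.reshape(-1, 2)                    # one level of the Haar transform
approx = (pairs[:, 0] + pairs[:, 1]) / np.sqrt(2)   # low-pass: the trend
detail = (pairs[:, 0] - pairs[:, 1]) / np.sqrt(2)   # high-pass: the disturbance
flag = int(np.argmax(np.abs(detail)))            # pair index of the jump
```

A protracted diversion, by contrast, leaks into the coarse (approximation) channels of deeper decomposition levels, which is why monitoring several scales at once pays off.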
The reduction method of statistic scale applied to study of climatic change
International Nuclear Information System (INIS)
Bernal Suarez, Nestor Ricardo; Molina Lizcano, Alicia; Martinez Collantes, Jorge; Pabon Jose Daniel
2000-01-01
In climate change studies, global circulation models of the atmosphere (GCMAs) enable one to simulate the global climate, with the field variables represented on grid points 300 km apart. One particular interest concerns the simulation of possible changes in rainfall and surface air temperature due to an assumed increase of greenhouse gases. However, the models yield climatic projections on grid points that in most cases do not correspond to the sites of major interest. To achieve local estimates of the climatological variables, methods like the one known as statistical downscaling are applied. In this article we show a case in point by applying canonical correlation analysis (CCA) to the Guajira region in the northeast of Colombia
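The CCA step can be sketched on synthetic data: a "large-scale field" X and "local observations" Y that share one latent signal, with the canonical correlations obtained by SVD of the whitened cross-covariance. The dimensions and noise level are invented; the real study pairs GCM grid values with station records.

```python
import numpy as np

rng = np.random.default_rng(5)
n, p, q = 500, 6, 4
latent = rng.normal(size=n)                               # shared climate signal
X = np.outer(latent, rng.normal(size=p)) + 0.3 * rng.normal(size=(n, p))
Y = np.outer(latent, rng.normal(size=q)) + 0.3 * rng.normal(size=(n, q))

def cca_correlations(X, Y, reg=1e-8):
    """Canonical correlations via SVD of the whitened cross-covariance."""
    Xc, Yc = X - X.mean(0), Y - Y.mean(0)
    Sxx = Xc.T @ Xc / n + reg * np.eye(p)
    Syy = Yc.T @ Yc / n + reg * np.eye(q)
    Sxy = Xc.T @ Yc / n
    def inv_sqrt(S):
        w, V = np.linalg.eigh(S)
        return V @ np.diag(1.0 / np.sqrt(w)) @ V.T
    M = inv_sqrt(Sxx) @ Sxy @ inv_sqrt(Syy)
    return np.clip(np.linalg.svd(M, compute_uv=False), 0.0, 1.0)

rho = cca_correlations(X, Y)
```

In downscaling, the leading canonical pair supplies the regression link: the large-scale pattern projected onto its canonical direction predicts the local variable along the matching direction.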
Parallel Implicit Runge-Kutta Methods Applied to Coupled Orbit/Attitude Propagation
Hatten, Noble; Russell, Ryan P.
2017-12-01
A variable-step Gauss-Legendre implicit Runge-Kutta (GLIRK) propagator is applied to coupled orbit/attitude propagation. Concepts previously shown to improve efficiency in 3DOF propagation are modified and extended to the 6DOF problem, including the use of variable-fidelity dynamics models. The impact of computing the stage dynamics of a single step in parallel is examined using up to 23 threads and 22 associated GLIRK stages; one thread is reserved for an extra dynamics function evaluation used in the estimation of the local truncation error. Efficiency is found to peak for typical examples when using approximately 8 to 12 stages for both serial and parallel implementations. Accuracy and efficiency compare favorably to explicit Runge-Kutta and linear-multistep solvers for representative scenarios. However, linear-multistep methods are found to be more efficient for some applications, particularly in a serial computing environment, or when parallelism can be applied across multiple trajectories.
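A fixed-step, scalar version of a GLIRK step shows the structure: for a linear test problem y' = lam*y the implicit stage equations reduce to a small linear solve, so no Newton iteration is needed. The 2-stage tableau below is the standard Gauss-Legendre one (order 4); the variable-step control, 6DOF dynamics and parallel stage evaluation of the paper are not reproduced.

```python
import numpy as np

s3 = np.sqrt(3.0)
A = np.array([[0.25, 0.25 - s3 / 6.0],          # 2-stage Gauss-Legendre tableau
              [0.25 + s3 / 6.0, 0.25]])
b = np.array([0.5, 0.5])

lam, h, y = -1.0, 0.05, 1.0                      # test problem y' = -y, y(0) = 1
for _ in range(20):                              # integrate to t = 1
    # stage equations K = lam*(y + h*A@K)  =>  (I - h*lam*A) K = lam*y*1
    K = np.linalg.solve(np.eye(2) - h * lam * A, lam * y * np.ones(2))
    y += h * b @ K
err = abs(y - np.exp(-1.0))
```

For nonlinear dynamics each step needs an iterative stage solve, and it is exactly those s independent dynamics evaluations per iteration that the paper distributes across threads.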
ADVANTAGES AND DISADVANTAGES OF APPLYING EVOLVED METHODS IN MANAGEMENT ACCOUNTING PRACTICE
Directory of Open Access Journals (Sweden)
SABOU FELICIA
2014-05-01
Full Text Available The evolved methods of management accounting have been developed to remove the disadvantages of the classical methods; they are adapted to the new market conditions and provide much more useful cost-related information, so that the management of the company is able to take strategic decisions. Out of the category of evolved methods, the most widely used is the standard-cost method, owing to the advantages it presents; it is used extensively in calculating production costs in some developed countries. The main advantages of the standard-cost method are: knowledge of the production costs in advance, together with the measures that ensure compliance with them; systematic control over costs through the deviations calculated from the standard costs, which allows decisions to be taken in due time regarding the elimination of deviations and the improvement of activity; and its use as a method of analysis, control and cost forecasting. Although the advantages of using standards are significant, there are a few disadvantages to the standard-cost method: difficulties can sometimes appear in establishing the deviations from the standard costs, and the method does not allow an accurate calculation of fixed costs. As a result of the study, we observe that the evolved methods of management accounting, compared to the classical ones, present a series of advantages linked to better analysis, control and forecasting of costs, whereas the main disadvantage is the large amount of work necessary for these methods to be applied.
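The deviations the standard-cost method relies on are simple arithmetic: each cost element is split into a price (or rate) variance and a quantity (or efficiency) variance against the standard. The standards and actuals below are made-up figures, with positive values read as unfavourable.

```python
# Illustrative standards per unit and actual results for a 1000-unit batch.
std_price, std_qty = 4.0, 2.0            # material: $/kg, kg per unit
std_rate, std_hours = 15.0, 0.5          # labour: $/h, hours per unit
units = 1000

act_price, act_qty_total = 4.3, 2150.0   # $/kg paid, kg actually used
act_rate, act_hours_total = 14.0, 560.0  # $/h paid, hours actually worked

material_price_var = (act_price - std_price) * act_qty_total
material_qty_var = (act_qty_total - std_qty * units) * std_price
labour_rate_var = (act_rate - std_rate) * act_hours_total
labour_eff_var = (act_hours_total - std_hours * units) * std_rate
total_var = (material_price_var + material_qty_var
             + labour_rate_var + labour_eff_var)
```

Here materials overrun on both price and usage, labour is cheaper than standard but less efficient, and the signed decomposition tells management which deviation to act on first.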
The Fractional Step Method Applied to Simulations of Natural Convective Flows
Westra, Douglas G.; Heinrich, Juan C.; Saxon, Jeff (Technical Monitor)
2002-01-01
This paper describes research done to apply the Fractional Step Method to finite-element simulations of natural convective flows in pure liquids, permeable media, and in a directionally solidified metal alloy casting. The Fractional Step Method has commonly been applied to high Reynolds number flow simulations, but is less common for low Reynolds number flows, such as natural convection in liquids and in permeable media. The Fractional Step Method offers increased speed and reduced memory requirements by allowing a non-coupled solution of the pressure and the velocity components. It has particular benefits for predicting flows in a directionally solidified alloy, since other methods presently employed are not very efficient. Previously, the most suitable method for predicting flows in a directionally solidified binary alloy was the penalty method, which requires direct matrix solvers due to the penalty term. The Fractional Step Method allows iterative solution of the finite-element stiffness matrices, and therefore a more efficient solution of the matrices. It also lends itself to parallel processing, since the velocity-component stiffness matrices can be built and solved independently of each other. The finite-element simulations of a directionally solidified casting are used to predict macrosegregation in directionally solidified castings. In particular, the simulations predict the existence of 'channels' within the processing mushy zone and subsequently 'freckles' within the fully processed solid, which are known to result from macrosegregation, or what is often referred to as thermo-solutal convection. These freckles cause material property non-uniformities in directionally solidified castings; therefore many of these castings are scrapped. The phenomenon of natural convection in an alloy undergoing directional solidification, or thermo-solutal convection, will be explained. The
Liu, Bailing; Zhang, Fumin; Qu, Xinghua; Shi, Xiaojia
2016-01-01
Coordinate transformation plays an indispensable role in industrial measurements, including photogrammetry, geodesy, laser 3-D measurement and robotics. The widely applied methods of coordinate transformation are generally based on solving the equations of point clouds. Despite the high accuracy, this might result in no solution due to the use of ill-conditioned matrices. In this paper, a novel coordinate transformation method is proposed, based not on the equation solution but on the geometric transformation. We construct characteristic lines to represent the coordinate systems. According to the space geometry relation, the characteristic lines can be made to coincide by a series of rotations and translations. The transformation matrix can be obtained using matrix transformation theory. Experiments are designed to compare the proposed method with other methods. The results show that the proposed method has the same high accuracy, but the operation is more convenient and flexible. A multi-sensor combined measurement system is also presented to improve the position accuracy of a robot with the calibration of the robot kinematic parameters. Experimental verification shows that the position accuracy of the robot manipulator is improved by 45.8% with the proposed method and robot calibration. PMID:26901203
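For reference, the "widely applied" point-cloud approach the abstract contrasts itself with is the least-squares rigid transform from matched points via SVD (the Kabsch solution). The sketch below recovers a known rotation and translation from noiseless synthetic points; the paper's characteristic-line construction itself is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(4)
P = rng.uniform(-1, 1, (10, 3))                      # points in frame A (made up)
theta = 0.7
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([0.5, -0.2, 1.0])
Q = P @ R_true.T + t_true                            # same points in frame B

def kabsch(P, Q):
    """Least-squares R, t with Q ~= P @ R.T + t (rows are points)."""
    Pc, Qc = P - P.mean(0), Q - Q.mean(0)
    U, _, Vt = np.linalg.svd(Pc.T @ Qc)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # no reflections
    R = Vt.T @ D @ U.T
    t = Q.mean(0) - P.mean(0) @ R.T
    return R, t

R_est, t_est = kabsch(P, Q)
```

The SVD solution is well behaved for well-spread points; the ill-conditioning the abstract mentions arises for degenerate clouds (nearly collinear or coplanar points), which is the case the geometric method targets.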
Resonating group method as applied to the spectroscopy of α-transfer reactions
Subbotin, V. B.; Semjonov, V. M.; Gridnev, K. A.; Hefter, E. F.
1983-10-01
In the conventional approach to α-transfer reactions the finite- and/or zero-range distorted-wave Born approximation is used in liaison with a macroscopic description of the captured α particle in the residual nucleus. Here the specific example of 16O(6Li,d)20Ne reactions at different projectile energies is taken to present a microscopic resonating group method analysis of the α particle in the final nucleus (for the reaction part the simple zero-range distorted-wave Born approximation is employed). In the discussion of suitable nucleon-nucleon interactions, force number one of the effective interactions presented by Volkov is shown to be most appropriate for the system considered. Application of the continuous analog of Newton's method to the evaluation of the resonating group method equations yields increased accuracy with respect to traditional methods. The resonating group method description induces only minor changes in the structures of the angular distributions, but it does serve its purpose in yielding reliable and consistent spectroscopic information. NUCLEAR STRUCTURE 16O(6Li,d)20Ne; E=20 to 32 MeV; calculated B(E2), reduced widths, dσ/dΩ; extracted α-spectroscopic factors. ZRDWBA with microscopic RGM description of residual α particle in 20Ne; application of continuous analog of Newton's method; tested and applied Volkov force No. 1; direct mechanism.
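The "continuous analog of Newton's method" can be illustrated in one dimension: the Newton correction is treated as a flow dx/dt = -f(x)/f'(x) and integrated with small Euler steps, giving a damped, more robust iteration. The sketch solves f(x) = x**2 - 2 rather than the RGM equations, which in the paper form a nonlinear integro-differential system.

```python
import math

def continuous_newton(f, fp, x0, dt=0.2, steps=200):
    """Euler integration of dx/dt = -f(x)/f'(x); dt = 1 recovers plain Newton."""
    x = x0
    for _ in range(steps):
        x -= dt * f(x) / fp(x)
    return x

root = continuous_newton(lambda x: x * x - 2.0, lambda x: 2.0 * x, x0=5.0)
```

The damping (dt < 1) trades speed for a larger basin of convergence, which is the practical advantage when the starting wave function is far from the self-consistent solution.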
Directory of Open Access Journals (Sweden)
Bailing Liu
2016-02-01
Full Text Available Coordinate transformation plays an indispensable role in industrial measurements, including photogrammetry, geodesy, laser 3-D measurement and robotics. The widely applied methods of coordinate transformation are generally based on solving equations of point clouds. Despite their high accuracy, they may yield no solution due to ill-conditioned matrices. In this paper, a novel coordinate transformation method is proposed, based not on equation solving but on geometric transformation. We construct characteristic lines to represent the coordinate systems. According to the spatial geometric relation, the characteristic lines are made to coincide by a series of rotations and translations. The transformation matrix can be obtained using matrix transformation theory. Experiments are designed to compare the proposed method with other methods. The results show that the proposed method has the same high accuracy, but its operation is more convenient and flexible. A multi-sensor combined measurement system is also presented to improve the position accuracy of a robot via calibration of the robot kinematic parameters. Experimental verification shows that the position accuracy of the robot manipulator is improved by 45.8% with the proposed method and robot calibration.
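The geometric idea of the paper, making characteristic lines coincide through rotations, can be sketched with Rodrigues' rotation formula, which builds the rotation matrix that takes one line direction onto another. This is an illustrative reconstruction under stated assumptions, not the authors' implementation; all names are hypothetical, and the anti-parallel case is deliberately left unhandled.

```python
import math

def cross(a, b):
    return [a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0]]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def normalize(v):
    n = math.sqrt(dot(v, v))
    return [x / n for x in v]

def rotation_aligning(u, v):
    """Rotation matrix (Rodrigues' formula) taking direction u onto direction v."""
    u, v = normalize(u), normalize(v)
    k = cross(u, v)               # rotation axis (unnormalized, |k| = sin(theta))
    c = dot(u, v)                 # cos(theta)
    s = math.sqrt(dot(k, k))      # sin(theta)
    if s < 1e-12:                 # already parallel (anti-parallel case not handled here)
        return [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
    kx, ky, kz = (x / s for x in k)
    K = [[0, -kz, ky], [kz, 0, -kx], [-ky, kx, 0]]
    # R = I + sin(theta) K + (1 - cos(theta)) K^2
    return [[(1 if i == j else 0) + s * K[i][j]
             + (1 - c) * sum(K[i][m] * K[m][j] for m in range(3))
             for j in range(3)] for i in range(3)]

def rotate(R, p):
    return [dot(row, p) for row in R]
```

Translating one line's anchor point onto the other's then completes the rigid transformation described in the abstract.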
The Inverse System Method Applied to the Derivation of Power System Non—linear Control Laws
Institute of Scientific and Technical Information of China (English)
Donghai Li; Xuezhi Jiang; et al.
1997-01-01
The differential geometric method has been applied effectively to a series of power system non-linear control problems. However, a set of differential equations must be solved to obtain the required diffeomorphic transformation, so the derivation of control laws is very complicated. In fact, because of the specific structure of power system models, the required diffeomorphic transformation may be obtained directly, making it unnecessary to solve a set of differential equations. In addition, the inverse system method is in reality equivalent to the differential geometric method and is not limited to affine nonlinear systems; its physical meaning can be seen directly, and its deduction requires only algebraic operations and differentiation, so control laws can be obtained easily and applied conveniently in engineering. The authors take steam valving control of a power system as a typical case study. It is demonstrated that the control law deduced by the inverse system method is exactly the same as that obtained by the differential geometric method. This conclusion simplifies the derivation of control laws for steam valving, excitation, converters and static var compensators by the differential geometric method and may suit similar control problems in other areas.
Comparison of Heuristic Methods Applied for Optimal Operation of Water Resources
Directory of Open Access Journals (Sweden)
Alireza Borhani Dariane
2009-01-01
Full Text Available Water resources optimization problems are usually complex and hard to solve with ordinary optimization methods, or at least not economically efficient to solve that way. A great number of studies have been conducted in quest of suitable methods capable of handling such problems. In recent years, new heuristic methods such as genetic and ant algorithms have been introduced in systems engineering. Preliminary applications of these methods to water resources problems have shown that some of them are powerful tools, capable of solving complex problems. In this paper, the application of heuristic methods such as the Genetic Algorithm (GA) and Ant Colony Optimization (ACO) to optimizing reservoir operation is studied. The Dez Dam reservoir in Iran was chosen for a case study. The methods were applied and compared using short-term (one year) and long-term models. Comparison of the results showed that GA outperforms both dynamic programming (DP) and ACO in finding true global optimum solutions and operating rules.
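A minimal sketch of how a genetic algorithm can search for a monthly reservoir release schedule of the kind compared in this study: a chromosome is a vector of releases, fitness is a release benefit minus a penalty for violating storage bounds. The inflows, bounds and benefit function are invented toy values, not data from the Dez Dam case study.

```python
import random

random.seed(42)
INFLOW = [80, 90, 120, 150, 140, 100, 60, 40, 30, 35, 50, 70]  # hypothetical monthly inflows
S0, S_MIN, S_MAX, R_MAX = 500.0, 100.0, 1000.0, 200.0

def fitness(releases):
    """Benefit of releases minus penalty for violating storage bounds (toy model)."""
    s, benefit, penalty = S0, 0.0, 0.0
    for q, r in zip(INFLOW, releases):
        s += q - r                               # mass balance: storage + inflow - release
        benefit += r ** 0.5                      # diminishing-returns benefit of released water
        if s < S_MIN: penalty += (S_MIN - s) ** 2
        if s > S_MAX: penalty += (s - S_MAX) ** 2
    return benefit - 0.01 * penalty

def ga(pop_size=40, gens=200, pm=0.2):
    pop = [[random.uniform(0, R_MAX) for _ in range(12)] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness, reverse=True)
        elite = pop[:pop_size // 2]              # truncation selection
        children = []
        while len(children) < pop_size - len(elite):
            a, b = random.sample(elite, 2)
            cut = random.randrange(1, 12)        # one-point crossover
            child = a[:cut] + b[cut:]
            if random.random() < pm:             # mutation: perturb one month's release
                i = random.randrange(12)
                child[i] = min(R_MAX, max(0.0, child[i] + random.gauss(0, 20)))
            children.append(child)
        pop = elite + children
    return max(pop, key=fitness)

best = ga()
```

A real reservoir model would add evaporation, demand targets and spill terms, but the chromosome/fitness structure stays the same.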
Boundary element methods applied to two-dimensional neutron diffusion problems
International Nuclear Information System (INIS)
Itagaki, Masafumi
1985-01-01
The Boundary element method (BEM) has been applied to two-dimensional neutron diffusion problems. The boundary integral equation and its discretized form have been derived. Some numerical techniques have been developed, which can be applied to critical and fixed-source problems including multi-region ones. Two types of test programs have been developed according to whether the 'zero-determinant search' or the 'source iteration' technique is adopted for criticality search. Both programs require only the fluxes and currents on boundaries as the unknown variables. The former allows a reduction in computing time and memory in comparison with the finite element method (FEM). The latter is not always efficient in terms of computing time due to the domain integral related to the inhomogeneous source term; however, this domain integral can be replaced by the equivalent boundary integral for a region with a non-multiplying medium or with a uniform source, resulting in a significant reduction in computing time. The BEM, as well as the FEM, is well suited for solving irregular geometrical problems for which the finite difference method (FDM) is unsuited. The BEM also solves problems with infinite domains, which cannot be solved by the ordinary FEM and FDM. Some simple test calculations are made to compare the BEM with the FEM and FDM, and discussions are made concerning the relative merits of the BEM and problems requiring future solution. (author)
Methodical basis of training of cadets for the military applied heptathlon competitions
Directory of Open Access Journals (Sweden)
R.V. Anatskyi
2017-12-01
Full Text Available The purpose of the research is to develop the methodical basis of training cadets for military applied heptathlon competitions. Material and methods: Cadets in their 2nd-3rd years of study, aged 19-20 (n=20), participated in the research. Cadets were selected by the best results in the exercises included in the program of military applied heptathlon competitions (100 m run, 50 m freestyle swimming, Kalashnikov rifle shooting, pull-ups, obstacle course, grenade throwing, 3000 m run). Preparation took place at a training center. All trainings were organized and carried out according to the following methodical basis: in a weekly preparation microcycle, cadets trained twice a day on five days, once on Saturday, and rested on Sunday. The selected exercises were performed with individual loads. Results: Sport scores demonstrated top results in the 100 m run, 3000 m run and pull-ups. The indices for the obstacle course were much lower than expected. Rather low results were demonstrated in swimming and shooting. Conclusions: The results indicate the need to improve the quality of cadets' weapons proficiency and their physical readiness to perform the exercises requiring the complex demonstration of all physical qualities.
Nutrient Runoff Losses from Liquid Dairy Manure Applied with Low-Disturbance Methods.
Jokela, William; Sherman, Jessica; Cavadini, Jason
2016-09-01
Manure applied to cropland is a source of phosphorus (P) and nitrogen (N) in surface runoff and can contribute to impairment of surface waters. Tillage immediately after application incorporates manure into the soil, which may reduce nutrient loss in runoff as well as N loss via NH₃ volatilization. However, tillage also incorporates crop residue, which reduces surface cover and may increase erosion potential. We applied liquid dairy manure in a silage corn (Zea mays L.)-cereal rye (Secale cereale L.) cover crop system in late October using methods designed to incorporate manure with minimal soil and residue disturbance. These include strip-till injection and tine aerator-band manure application, which were compared with standard broadcast application, either incorporated with a disk or left on the surface. Runoff was generated with a portable rainfall simulator (42 mm h⁻¹ for 30 min) three separate times: (i) 2 to 5 d after the October manure application, (ii) in early spring, and (iii) after tillage and planting. In the postmanure application runoff, the highest losses of total P and dissolved reactive P were from surface-applied manure. Dissolved P loss was reduced 98% by strip-till injection; this result was not statistically different from the no-manure control. Reductions from the aerator-band method and disk incorporation were 53 and 80%, respectively. Total P losses followed a similar pattern, with 87% reduction from injected manure. Runoff losses of N followed generally similar patterns to those of P. Losses of P and N were, in most cases, lower in the spring rain simulations with fewer significant treatment effects. Overall, results show that low-disturbance manure application methods can significantly reduce nutrient runoff losses compared with surface application while maintaining residue cover better than incorporation by tillage. Copyright © by the American Society of Agronomy, Crop Science Society of America, and Soil Science Society of America, Inc.
International Nuclear Information System (INIS)
Aly, Omar Fernandes; Andrade, Arnaldo Paes de; Mattar Neto, Miguel; Aoki, Idalina Vieira
2002-01-01
This paper aims to collect information on and discuss electrochemical noise measurements and the reversing DC potential drop method, applied to stress corrosion tests that can be used to evaluate the nucleation and growth of stress corrosion cracking in Alloy 600 and/or Alloy 182 specimens from the Angra I Nuclear Power Plant. We thereby intend to establish a standard procedure for tests to be performed on the new autoclave equipment at the Laboratorio de Eletroquimica e Corrosao do Departamento de Engenharia Quimica da Escola Politecnica da Universidade de Sao Paulo (Electrochemical and Corrosion Laboratory of the Chemical Engineering Department, Polytechnic School, University of Sao Paulo, Brazil). (author)
Making Design Decisions Visible: Applying the Case-Based Method in Designing Online Instruction
Directory of Open Access Journals (Sweden)
Heng Luo
2011-01-01
Full Text Available The instructional intervention in this design case is a self-directed online tutorial that applies the case-based method to teach educators how to design and conduct entrepreneurship programs for elementary school students. In this article, the authors describe the major decisions made in each phase of the design and development process, explicate the rationales behind them, and demonstrate their effect on the production of the tutorial. Based on such analysis, the guidelines for designing case-based online instruction are summarized for the design case.
International Nuclear Information System (INIS)
Walker, R.S.; Thompson, D.A.; Poehlman, S.W.
1977-01-01
The application of single, plural or multiple scattering theories to the determination of defect dechanneling in channeling-backscattering disorder measurements is re-examined. A semiempirical modification to the method is described that makes the extracted disorder and disorder distribution relatively insensitive to the scattering model employed. The various models and modifications have been applied to the 1 to 2 MeV He⁺ channeling-backscatter data obtained from 20 to 80 keV H⁺- to Ne⁺-bombarded Si, GaP and GaAs at 50 K and 300 K. (author)
Zoltàn Dörnyei, Research Methods in Applied Linguistics
Marie-Françoise Narcy-Combes
2012-01-01
Research Methods in Applied Linguistics is a practical and accessible work aimed primarily at beginning researchers and doctoral students in applied linguistics and language teaching, for whom it is a very useful companion. Its clear style and straightforward organization make it an easy, pleasant read and render the various concepts readily understandable to all. It presents an overview of research methodology in applied linguistics,...
Cork-resin ablative insulation for complex surfaces and method for applying the same
Walker, H. M.; Sharpe, M. H.; Simpson, W. G. (Inventor)
1980-01-01
A method of applying cork-resin ablative insulation material to complex curved surfaces is disclosed. The material is prepared by mixing finely divided cork with a B-stage curable thermosetting resin, forming the resulting mixture into a block, B-stage curing the resin-containing block, and slicing the block into sheets. The B-stage cured sheet is shaped to conform to the surface being insulated, and further curing is then performed. Curing of the resins only to B-stage before shaping enables application of sheet material to complex curved surfaces and avoids limitations and disadvantages presented in handling of fully cured sheet material.
Perturbative methods applied for sensitive coefficients calculations in thermal-hydraulic systems
International Nuclear Information System (INIS)
Andrade Lima, F.R. de
1993-01-01
The differential formalism and the Generalized Perturbation Theory (GPT) are applied to the sensitivity analysis of thermal-hydraulic problems related to pressurized water reactor cores. The equations describing the thermal-hydraulic behavior of these reactor cores, used in the COBRA-IV-I code, are conveniently written. The importance function related to the response of interest and the sensitivity coefficients of this response with respect to various selected parameters are obtained by using Differential and Generalized Perturbation Theory. The comparison between the results obtained with these perturbative methods and those obtained directly with the model developed in the COBRA-IV-I code shows very good agreement. (author)
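The direct approach that the perturbative results are compared against, recomputing the response for perturbed parameter values, amounts to a finite-difference sensitivity. A minimal sketch of that baseline, with a toy response function standing in for the COBRA-IV-I model (the function and names here are hypothetical):

```python
def relative_sensitivity(response, p0, h=1e-6):
    """Central-difference relative sensitivity S = (p/R) dR/dp at p0.

    This is the direct (recomputation) method, shown only as the baseline that
    perturbative approaches such as GPT avoid repeating for every parameter.
    """
    dRdp = (response(p0 * (1 + h)) - response(p0 * (1 - h))) / (2 * p0 * h)
    return p0 / response(p0) * dRdp

# Toy response R(p) = p**2, whose relative sensitivity is exactly 2 for any p > 0.
S = relative_sensitivity(lambda p: p * p, 3.0)
```

The cost of this baseline grows with the number of parameters (two model runs each), which is precisely the motivation for the importance-function approach in the abstract.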
Brezina, Tadej; Graser, Anita; Leth, Ulrich
2017-04-01
Space, and in particular public space for movement and leisure, is a valuable and scarce resource, especially in today's growing urban centres. The distribution and absolute amount of urban space—especially the provision of sufficient pedestrian areas, such as sidewalks—is considered crucial for shaping living and mobility options as well as transport choices. Ubiquitous urban data collection and today's IT capabilities offer new possibilities for providing a relation-preserving overview and for keeping track of infrastructure changes. This paper presents three novel methods for estimating representative sidewalk widths and applies them to the official Viennese streetscape surface database. The first two methods use individual pedestrian area polygons and their geometrical representations of minimum circumscribing and maximum inscribing circles to derive a representative width of these individual surfaces. The third method utilizes aggregated pedestrian areas within the buffered street axis and results in a representative width for the corresponding road axis segment. Results are displayed as city-wide means in a 500 by 500 m grid and spatial autocorrelation based on Moran's I is studied. We also compare the results between methods as well as to previous research, existing databases and guideline requirements on sidewalk widths. Finally, we discuss possible applications of these methods for monitoring and regression analysis and suggest future methodological improvements for increased accuracy.
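The paper's third method, dividing the aggregated pedestrian area within the buffered street axis by the length of the axis segment, reduces to an area-over-length computation. A minimal sketch with a hypothetical rectangular sidewalk polygon (the polygon, names and numbers are illustrative, not Viennese data):

```python
def polygon_area(pts):
    """Shoelace area of a simple polygon given as [(x, y), ...]."""
    n = len(pts)
    return abs(sum(pts[i][0] * pts[(i + 1) % n][1] - pts[(i + 1) % n][0] * pts[i][1]
                   for i in range(n))) / 2.0

def representative_width(pedestrian_polygon, axis_length):
    """Aggregated pedestrian area within the buffered street axis, divided by
    the axis segment length (sketch of the paper's third method)."""
    return polygon_area(pedestrian_polygon) / axis_length

# Hypothetical sidewalk: a 100 m x 3 m rectangle along a 100 m street axis.
w = representative_width([(0, 0), (100, 0), (100, 3), (0, 3)], 100.0)
```

The first two methods (minimum circumscribing and maximum inscribing circles of individual polygons) need a geometry library and are not sketched here.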
Directory of Open Access Journals (Sweden)
Ismael de Moura Costa
2017-04-01
Full Text Available Introduction: This paper presents the evolution of the MAIA Method for Applied Information Architecture, its structure, the results obtained and three practical applications. Objective: To propose a methodological construct for the treatment of complex information, distinguishing information spaces and revealing the configurations inherent in those spaces. Methodology: The argument is elaborated from theoretical research of an analytical character, using distinction as a way of expressing concepts. Phenomenology is adopted as the philosophical position, considering the correlation between Subject and Object. The research also considers the notion of interpretation as an integrating element for the definition of concepts. With these postulates, the steps to transform information spaces are formulated. Results: The article shows how the method is structured to process information in its contexts, starting from a succession of evolutionary cycles, divided into moments, which in turn evolve into transformation acts. Conclusions: Besides the structure of the method, the article presents its possible applications as a scientific method, as a configuration tool for information spaces, and as a generator of ontologies. Finally, it gives a brief summary of the analyses made by researchers who have already evaluated the method from these three perspectives.
A Review of Auditing Methods Applied to the Content of Controlled Biomedical Terminologies
Zhu, Xinxin; Fan, Jung-Wei; Baorto, David M.; Weng, Chunhua; Cimino, James J.
2012-01-01
Although controlled biomedical terminologies have been with us for centuries, it is only in the last couple of decades that close attention has been paid to the quality of these terminologies. The result of this attention has been the development of auditing methods that apply formal methods to assessing whether terminologies are complete and accurate. We have performed an extensive literature review to identify published descriptions of these methods and have created a framework for characterizing them. The framework considers manual, systematic and heuristic methods that use knowledge (within or external to the terminology) to measure quality factors of different aspects of the terminology content (terms, semantic classification, and semantic relationships). The quality factors examined included concept orientation, consistency, non-redundancy, soundness and comprehensive coverage. We reviewed 130 studies that were retrieved based on keyword search on publications in PubMed, and present our assessment of how they fit into our framework. We also identify which terminologies have been audited with the methods and provide examples to illustrate each part of the framework. PMID:19285571
Knowledge-Based Trajectory Error Pattern Method Applied to an Active Force Control Scheme
Directory of Open Access Journals (Sweden)
Endra Pitowarno, Musa Mailah, Hishamuddin Jamaluddin
2012-08-01
Full Text Available The active force control (AFC) method is known as a robust control scheme that dramatically enhances the performance of a robot arm, particularly in compensating for disturbance effects. The main task of the AFC method is to estimate the inertia matrix in the feedback loop to provide the correct (motor) torque required to cancel out these disturbances. Several intelligent control schemes have already been introduced to enhance the estimation of the inertia matrix, such as those using neural networks, iterative learning and fuzzy logic. In this paper, we propose an alternative scheme called the Knowledge-Based Trajectory Error Pattern Method (KBTEPM) to suppress the trajectory tracking error of the AFC scheme. The knowledge is developed from the trajectory tracking error characteristic based on previous experimental results of the crude approximation method. It produces a unique, new and desirable error pattern when a trajectory command is forced. A simulation study was performed on the AFC scheme with KBTEPM applied to a two-link planar manipulator, for which a set of rule-based algorithms is derived. A number of previous AFC schemes are also reviewed as benchmarks. The simulation results show that the AFC-KBTEPM scheme successfully reduces the trajectory tracking error significantly, even in the presence of the introduced disturbances. Key Words: active force control, estimated inertia matrix, robot arm, trajectory error pattern, knowledge-based.
Complex Method Mixed with PSO Applying to Optimization Design of Bridge Crane Girder
Directory of Open Access Journals (Sweden)
He Yan
2017-01-01
Full Text Available In engineering design, the basic complex method does not have sufficient global search ability for nonlinear optimization problems, so a complex method mixed with particle swarm optimization (PSO) is presented in this paper: the optimal particle, evaluated from the fitness function of the particle swarm, displaces a complex vertex so as to realize the optimality principle of the largest distance from the complex centre. The method is applied to the constrained optimization design of the box girder of a bridge crane. First, a mathematical model of the girder optimization is set up, in which the cross-section area of the box girder is taken as the objective function, its four size parameters as design variables, and requirements on girder mechanical performance, manufacturing process, boundary sizes and so on as constraint conditions. Then the complex method mixed with PSO is used to solve the optimization design problem of the crane box girder as a constrained optimization problem, and the optimal results achieve the goals of lightweight design and reduced crane manufacturing cost. Practical engineering calculations and comparative analysis with the basic complex method show the method to be reliable, practical and efficient.
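A minimal particle swarm optimization sketch of the component hybridized here, for a box-constrained minimization problem. The objective is a toy quadratic standing in for the girder cross-section function, and all parameter values and names are illustrative, not the paper's formulation.

```python
import random

random.seed(1)

def pso(f, bounds, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5):
    """Minimal PSO: inertia w, cognitive pull c1 toward personal bests,
    social pull c2 toward the global best, with box-constraint clamping."""
    dim = len(bounds)
    X = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    V = [[0.0] * dim for _ in range(n_particles)]
    P = [x[:] for x in X]                      # personal bests
    g = min(P, key=f)[:]                       # global best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                V[i][d] = (w * V[i][d]
                           + c1 * random.random() * (P[i][d] - X[i][d])
                           + c2 * random.random() * (g[d] - X[i][d]))
                X[i][d] = min(bounds[d][1], max(bounds[d][0], X[i][d] + V[i][d]))
            if f(X[i]) < f(P[i]):
                P[i] = X[i][:]
                if f(P[i]) < f(g):
                    g = P[i][:]
    return g

# Toy stand-in for the girder cross-section objective: a quadratic in two
# size parameters, each constrained to [1, 10]; the minimum is at (3, 4).
best = pso(lambda x: (x[0] - 3) ** 2 + (x[1] - 4) ** 2 + 5, [(1, 10), (1, 10)])
```

In the hybrid scheme of the abstract, the best particle found this way would replace the worst complex vertex rather than be returned directly.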
Efficient alpha particle detection by CR-39 applying 50 Hz-HV electrochemical etching method
International Nuclear Information System (INIS)
Sohrabi, M.; Soltani, Z.
2016-01-01
Alpha particles can be detected with CR-39 by applying either chemical etching (CE), electrochemical etching (ECE), or combined pre-etching and ECE, usually through a multi-step HF-HV ECE process at temperatures much higher than room temperature. With pre-etching, the characteristic responses of fast-neutron-induced recoil tracks in CR-39 under HF-HV ECE versus KOH normality (N) show two high-sensitivity peaks around 5–6 and 15–16 N and a large-diameter peak with a minimum sensitivity around 10–11 N at 25°C. On the other hand, the 50 Hz-HV ECE method recently advanced in our laboratory detects alpha particles with high efficiency and a broad registration energy range with small ECE tracks in polycarbonate (PC) detectors. By taking advantage of the sensitivity of CR-39 to alpha particles, the efficacy of the 50 Hz-HV ECE method and the exotic responses of CR-39 under different KOH normalities, the detection characteristics of 0.8 MeV alpha particle tracks were studied in 500 μm CR-39 for different fluences, ECE durations and KOH normalities. Alpha registration efficiency increased with ECE duration up to 90 ± 2% after 6–8 h, beyond which plateaus are reached. Alpha track density versus fluence is linear up to 10⁶ tracks cm⁻². The efficiency and mean track diameter versus alpha fluence up to 10⁶ alphas cm⁻² decrease as the fluence increases. Background track density and minimum detection limit are linear functions of ECE duration and increase as normality increases. CR-39 processed for the first time in this study by the 50 Hz-HV ECE method proved to provide a simple, efficient and practical alpha detection method at room temperature. - Highlights: • Alpha particles of 0.8 MeV were detected in CR-39 by the 50 Hz-HV ECE method. • Efficiency/track diameter was studied vs fluence and time for 3 KOH normalities. • Background track density and minimum detection limit vs duration were studied. • A new simple, efficient and low-cost alpha detection method
Directory of Open Access Journals (Sweden)
Ali Gerami Matin
2017-10-01
Full Text Available Optimized road maintenance planning seeks solutions that can minimize the life-cycle cost of a road network and concurrently maximize pavement condition. Aiming to propose an optimal set of road maintenance solutions, robust meta-heuristic algorithms are used in this research. Two main optimization techniques are applied: single-objective and multi-objective optimization. Genetic algorithms (GA), particle swarm optimization (PSO), and a combination of genetic algorithm and particle swarm optimization (GAPSO) are used as single-objective techniques, while the non-domination sorting genetic algorithm II (NSGAII) and multi-objective particle swarm optimization (MOPSO), which are sufficient for solving computationally complex large-size optimization problems, are applied and compared as multi-objective techniques. A real case study from the rural transportation network of Iran is employed to illustrate the sufficiency of the optimum algorithm. The optimization model is formulated in such a way that a cost-effective maintenance strategy is reached while preserving the performance level of the road network at a desirable level. The objective functions are thus pavement performance maximization and maintenance cost minimization. It is concluded that the multi-objective algorithms, NSGAII and multi-objective particle swarm optimization, performed better than the single-objective algorithms due to their capability to balance both objectives, and that between the multi-objective algorithms, NSGAII provides the optimum solution for road maintenance planning.
A METHOD FOR PREPARING A SUBSTRATE BY APPLYING A SAMPLE TO BE ANALYSED
DEFF Research Database (Denmark)
2017-01-01
The invention relates to a method for preparing a substrate (105a) comprising a sample reception area (110) and a sensing area (111). The method comprises the steps of: 1) applying a sample on the sample reception area; 2) rotating the substrate around a predetermined axis; 3) during rotation, at least part of the liquid travels from the sample reception area to the sensing area due to capillary forces acting between the liquid and the substrate; and 4) removing the wave of particles and liquid formed at one end of the substrate. The sensing area is closer to the predetermined axis than the sample reception area. The sample comprises a liquid part and particles suspended therein.
Simplified inelastic analysis methods applied to fast breeder reactor core design
International Nuclear Information System (INIS)
Abo-El-Ata, M.M.
1978-01-01
The paper starts with a review of some currently available simplified inelastic analysis methods used in elevated temperature design for evaluating plastic and thermal creep strains. The primary purpose of the paper is to investigate how these simplified methods may be applied to fast breeder reactor core design where neutron irradiation effects are significant. One of the problems discussed is irradiation-induced creep and its effect on shakedown, ratcheting, and plastic cycling. Another problem is the development of swelling-induced stress which is an additional loading mechanism and must be taken into account. In this respect an expression for swelling-induced stress in the presence of irradiation creep is derived and a model for simplifying the stress analysis under these conditions is proposed. As an example, the effects of irradiation creep and swelling induced stress on the analysis of a thin walled tube under constant internal pressure and intermittent heat fluxes, simulating a fuel pin, is presented
Analytical Plug-In Method for Kernel Density Estimator Applied to Genetic Neutrality Study
Troudi, Molka; Alimi, Adel M.; Saoudi, Samir
2008-12-01
The plug-in method enables optimization of the bandwidth of the kernel density estimator in order to estimate probability density functions (pdfs). Here, a faster procedure than that of the common plug-in method is proposed. The mean integrated square error (MISE) depends directly upon J(f), which is linked to the second-order derivative of the pdf. As we intend to introduce an analytical approximation of J(f), the pdf is estimated only once, at the end of the iterations. These two kinds of algorithm are tested on different random variables having distributions known for their difficult estimation. Finally, they are applied to genetic data in order to provide a better characterisation in the mean of the neutrality of Tunisian Berber populations.
Analytical Plug-In Method for Kernel Density Estimator Applied to Genetic Neutrality Study
Directory of Open Access Journals (Sweden)
Samir Saoudi
2008-07-01
Full Text Available The plug-in method enables optimization of the bandwidth of the kernel density estimator in order to estimate probability density functions (pdfs). Here, a faster procedure than that of the common plug-in method is proposed. The mean integrated square error (MISE) depends directly upon J(f), which is linked to the second-order derivative of the pdf. As we intend to introduce an analytical approximation of J(f), the pdf is estimated only once, at the end of the iterations. These two kinds of algorithm are tested on different random variables having distributions known for their difficult estimation. Finally, they are applied to genetic data in order to provide a better characterisation in the mean of the neutrality of Tunisian Berber populations.
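The normal-reference version of the plug-in rule, in which J(f) is replaced by its value for a Gaussian density, can be sketched directly. This is the common zeroth step of plug-in bandwidth selection, not the iterative procedure the paper accelerates; the function name is illustrative.

```python
import math

def plugin_bandwidth(sample):
    """Zeroth-order plug-in bandwidth for a Gaussian kernel.

    J(f) is replaced by its value for a normal density with the sample's
    standard deviation, J(f) = 3 / (8 sqrt(pi) sigma^5), which collapses the
    MISE-optimal h = (R(K) / (n J(f)))^(1/5), with R(K) = 1 / (2 sqrt(pi)),
    into Silverman's normal-reference rule h = (4 / (3 n))^(1/5) * sigma.
    """
    n = len(sample)
    mean = sum(sample) / n
    sigma = math.sqrt(sum((x - mean) ** 2 for x in sample) / (n - 1))
    J = 3.0 / (8.0 * math.sqrt(math.pi) * sigma ** 5)
    return (1.0 / (2.0 * math.sqrt(math.pi) * n * J)) ** 0.2
```

A full plug-in estimator would instead estimate J(f) from the data (iteratively, or analytically as the paper proposes) rather than assume normality.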
Infrared thermography inspection methods applied to the target elements of W7-X divertor
Energy Technology Data Exchange (ETDEWEB)
Missirlian, M. [Association Euratom-CEA, CEA/DSM/DRFC, CEA/Cadarache, F-13108 Saint Paul Lez Durance (France)], E-mail: marc.missirlian@cea.fr; Traxler, H. [PLANSEE SE, Technology Center, A-6600 Reutte (Austria); Boscary, J. [Max-Planck-Institut fuer Plasmaphysik, Euratom Association, Boltzmannstr. 2, D-85748 Garching (Germany); Durocher, A.; Escourbiac, F.; Schlosser, J. [Association Euratom-CEA, CEA/DSM/DRFC, CEA/Cadarache, F-13108 Saint Paul Lez Durance (France); Schedler, B.; Schuler, P. [PLANSEE SE, Technology Center, A-6600 Reutte (Austria)
2007-10-15
The non-destructive examination (NDE) method is one of the key issues in developing highly loaded plasma-facing components (PFCs) for next-generation fusion devices such as W7-X and ITER. The most critical step is certainly the fabrication and the examination of the bond between the armour and the heat sink. Two inspection systems based on infrared thermography methods, namely, transient thermography (SATIR-CEA) and pulsed thermography (ARGUS-PLANSEE), are being developed and have been applied to the pre-series of target elements of the W7-X divertor. Results obtained from qualification experiments performed on target elements with artificial calibrated defects demonstrated the capability of the two techniques and raised the efficiency of inspection to a level appropriate for industrial application.
Infrared thermography inspection methods applied to the target elements of W7-X divertor
International Nuclear Information System (INIS)
Missirlian, M.; Traxler, H.; Boscary, J.; Durocher, A.; Escourbiac, F.; Schlosser, J.; Schedler, B.; Schuler, P.
2007-01-01
The non-destructive examination (NDE) method is one of the key issues in developing highly loaded plasma-facing components (PFCs) for next-generation fusion devices such as W7-X and ITER. The most critical step is certainly the fabrication and the examination of the bond between the armour and the heat sink. Two inspection systems based on infrared thermography methods, namely, transient thermography (SATIR-CEA) and pulsed thermography (ARGUS-PLANSEE), are being developed and have been applied to the pre-series of target elements of the W7-X divertor. Results obtained from qualification experiments performed on target elements with artificial calibrated defects demonstrated the capability of the two techniques and raised the efficiency of inspection to a level appropriate for industrial application
The fundamental parameter method applied to X-ray fluorescence analysis with synchrotron radiation
Pantenburg, F. J.; Beier, T.; Hennrich, F.; Mommsen, H.
1992-05-01
Quantitative X-ray fluorescence analysis applying the fundamental parameter method is usually restricted to monochromatic excitation sources. It is shown here that such analyses can be performed as well with a white synchrotron radiation spectrum. To determine absolute elemental concentration values it is necessary to know the spectral distribution of this spectrum. A newly designed and tested experimental setup, which uses the synchrotron radiation emitted from electrons in a bending magnet of ELSA (the electron stretcher accelerator of the University of Bonn), is presented. The determination of the exciting spectrum, described by the given electron beam parameters, is limited by uncertainties in the vertical electron beam size and divergence. We describe a method which allows us to determine the relative and absolute spectral distributions needed for accurate analysis. First test measurements of different alloys and standards of known composition demonstrate that it is possible to determine exact concentration values in bulk and trace element analysis.
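The first-order quantification idea behind the fundamental parameter method can be sketched as follows; the element symbols, intensities, and sensitivity factors are hypothetical, and the sketch ignores the matrix absorption/enhancement corrections that the full method iterates on.

```python
def fp_concentrations(intensities, sensitivities):
    """First-order fundamental-parameter sketch: divide each element's
    fluorescence intensity by its (spectrum-dependent) sensitivity factor
    and normalize the result to 100 %. Real analyses iterate this with
    matrix-effect corrections derived from the excitation spectrum."""
    raw = {el: i / sensitivities[el] for el, i in intensities.items()}
    total = sum(raw.values())
    return {el: 100.0 * v / total for el, v in raw.items()}

# Hypothetical measured line intensities and sensitivity factors:
print(fp_concentrations({"Cu": 1200.0, "Zn": 500.0},
                        {"Cu": 20.0, "Zn": 12.5}))
```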
Super-convergence of Discontinuous Galerkin Method Applied to the Navier-Stokes Equations
Atkins, Harold L.
2009-01-01
The practical benefits of the hyper-accuracy properties of the discontinuous Galerkin method are examined. In particular, we demonstrate that some flow attributes exhibit super-convergence even in the absence of any post-processing technique. Theoretical analysis suggests that flow features that are dominated by global propagation speeds and decay or growth rates should be super-convergent. Several discrete forms of the discontinuous Galerkin method are applied to the simulation of unsteady viscous flow over a two-dimensional cylinder. Convergence of the period of the naturally occurring oscillation is examined and shown to converge at a rate of 2p+1, where p is the polynomial degree of the discontinuous Galerkin basis. Comparisons are made between the different discretizations and with theoretical analysis.
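The reported rate can be checked with the standard observed-order computation on successively refined grids; this is general verification practice, not code from the paper.

```python
import math

def observed_order(errors, refinement=2.0):
    """Estimate the observed order of accuracy from errors measured on a
    sequence of grids, each refined by the given factor:
    p_obs = log(e_coarse / e_fine) / log(refinement)."""
    return [math.log(e0 / e1) / math.log(refinement)
            for e0, e1 in zip(errors, errors[1:])]

# Errors decaying at a hypothetical super-convergent rate 2p+1 with p = 2,
# i.e. fifth order, as the grid spacing is halved each level:
errors = [1.0 / (2 ** (5 * k)) for k in range(4)]
print(observed_order(errors))  # each entry close to 5.0
```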
Data Analytics of Mobile Serious Games: Applying Bayesian Data Analysis Methods
Directory of Open Access Journals (Sweden)
Heide Lukosch
2018-03-01
Full Text Available Traditional teaching methods in the field of resuscitation training show some limitations, while teaching the right actions in critical situations could increase the number of people saved after a cardiac arrest. For our study, we developed a mobile game to support the transfer of theoretical knowledge on resuscitation. The game has been tested at three schools of further education. Data were collected from 171 players. To analyze this large data set, drawn from different sources and of varying quality, different types of data modeling and analysis had to be applied. This approach showed its usefulness in analyzing the large set of data from different sources. It revealed some interesting findings, such as that female players outperformed the male ones, and that the game, by fostering informal, self-directed learning, is as efficient as the traditional formal learning method.
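A minimal example of the kind of Bayesian group comparison the title refers to, using a beta-binomial model; the success counts below are invented for illustration, not the study's data.

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical in-game success counts for two player groups:
female_succ, female_n = 70, 90
male_succ, male_n = 55, 81

# With a Beta(1, 1) prior, each group's success rate has a Beta posterior;
# sample both posteriors and compare them directly:
f = rng.beta(1 + female_succ, 1 + female_n - female_succ, size=100_000)
m = rng.beta(1 + male_succ, 1 + male_n - male_succ, size=100_000)
prob = (f > m).mean()  # posterior probability that females outperform males
print(round(prob, 2))
```

Unlike a p-value, the quantity printed is a direct posterior probability of the claim of interest, which is what makes the Bayesian framing attractive for small, messy game-log samples.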
An input feature selection method applied to fuzzy neural networks for signal estimation
International Nuclear Information System (INIS)
Na, Man Gyun; Sim, Young Rok
2001-01-01
It is well known that the performance of a fuzzy neural network strongly depends on the input features selected for its training. In its applications to sensor signal estimation, there are a large number of input variables related with an output. As the number of input variables increases, the training time required by fuzzy neural networks increases exponentially. Thus, it is essential to reduce the number of inputs to a fuzzy neural network and to select the optimum number of mutually independent inputs that are able to clearly define the input-output mapping. In this work, principal component analysis (PCA), genetic algorithms (GA) and probability theory are combined to select new important input features. The proposed feature selection method is applied to the signal estimation of the steam generator water level, the hot-leg flowrate, the pressurizer water level and the pressurizer pressure sensors in pressurized water reactors and compared with other input feature selection methods.
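The PCA stage of such a feature-selection pipeline can be sketched as ranking inputs by their loadings on the leading principal components; this omits the GA and probability-theory stages the paper combines with it, and the data below are synthetic.

```python
import numpy as np

def pca_feature_ranking(X, n_components=2):
    """Rank input features by the sum of their absolute loadings on the
    leading principal components of the data covariance (a rough
    stand-in for the PCA stage of an input-selection pipeline)."""
    Xc = X - X.mean(axis=0)
    cov = np.cov(Xc, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)        # eigenvalues ascending
    leading = eigvecs[:, ::-1][:, :n_components]  # top components first
    scores = np.abs(leading).sum(axis=1)
    return np.argsort(scores)[::-1]               # best feature first

rng = np.random.default_rng(0)
informative = rng.normal(size=(200, 1))
X = np.hstack([informative,
               0.9 * informative + 0.1 * rng.normal(size=(200, 1)),
               0.01 * rng.normal(size=(200, 2))])  # two near-noise inputs
print(pca_feature_ranking(X))  # informative features 0 and 1 rank first
```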
Su, Hailin; Li, Hengde; Wang, Shi; Wang, Yangfan; Bao, Zhenmin
2017-02-01
Genomic selection is more and more popular in animal and plant breeding industries all around the world, as it can be applied early in life without impacting selection candidates. The objective of this study was to bring the advantages of genomic selection to scallop breeding. Two genomic selection tools, MixP and gsbay, were applied to the genomic evaluation of simulated data and Zhikong scallop (Chlamys farreri) field data. The results were compared with the genomic best linear unbiased prediction (GBLUP) method, which has been applied widely. Our results showed that both MixP and gsbay could accurately estimate single-nucleotide polymorphism (SNP) marker effects, and thereby could be applied for the analysis of genomic estimated breeding values (GEBV). In simulated data from different scenarios, the accuracy of the GEBV ranged from 0.20 to 0.78 with MixP, from 0.21 to 0.67 with gsbay, and from 0.21 to 0.61 with GBLUP. Estimations made by MixP and gsbay were expected to be more reliable than those estimated by GBLUP. Predictions made by gsbay were more robust, while with MixP the computation is much faster, especially in dealing with large-scale data. These results suggested that both algorithms implemented by MixP and gsbay are feasible for carrying out genomic selection in scallop breeding, and more genotype data will be necessary to produce genomic estimated breeding values with a higher accuracy for the industry.
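The core computation such tools perform can be sketched with a ridge-regression (RR-BLUP-style) estimate of SNP effects on simulated genotypes; this stands in for, and is not, the MixP/gsbay algorithms, and all data below are simulated.

```python
import numpy as np

rng = np.random.default_rng(1)
n_animals, n_snps = 300, 1000
X = rng.integers(0, 3, size=(n_animals, n_snps)).astype(float)  # 0/1/2 genotypes
true_effects = rng.normal(0.0, 0.05, size=n_snps)
tbv = X @ true_effects                               # true breeding values
y = tbv + rng.normal(0.0, tbv.std(), size=n_animals)  # phenotypes, h2 ~ 0.5

# Ridge-regression estimate of all SNP effects jointly:
lam = 10.0
Xc = X - X.mean(axis=0)
beta = np.linalg.solve(Xc.T @ Xc + lam * np.eye(n_snps),
                       Xc.T @ (y - y.mean()))
gebv = Xc @ beta                                     # genomic EBVs
accuracy = np.corrcoef(gebv, tbv)[0, 1]              # cor(GEBV, true BV)
print(round(accuracy, 2))
```

Accuracy here is measured, as in the paper, by the correlation between estimated and true breeding values on the simulated population.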
Sills, Erin O; Herrera, Diego; Kirkpatrick, A Justin; Brandão, Amintas; Dickson, Rebecca; Hall, Simon; Pattanayak, Subhrendu; Shoch, David; Vedoveto, Mariana; Young, Luisa; Pfaff, Alexander
2015-01-01
Quasi-experimental methods increasingly are used to evaluate the impacts of conservation interventions by generating credible estimates of counterfactual baselines. These methods generally require large samples for statistical comparisons, presenting a challenge for evaluating innovative policies implemented within a few pioneering jurisdictions. Single jurisdictions often are studied using comparative methods, which rely on analysts' selection of best case comparisons. The synthetic control method (SCM) offers one systematic and transparent way to select cases for comparison, from a sizeable pool, by focusing upon similarity in outcomes before the intervention. We explain SCM, then apply it to one local initiative to limit deforestation in the Brazilian Amazon. The municipality of Paragominas launched a multi-pronged local initiative in 2008 to maintain low deforestation while restoring economic production. This was a response to having been placed, due to high deforestation, on a federal "blacklist" that increased enforcement of forest regulations and restricted access to credit and output markets. The local initiative included mapping and monitoring of rural land plus promotion of economic alternatives compatible with low deforestation. The key motivation for the program may have been to reduce the costs of blacklisting. However, its stated purpose was to limit deforestation, and thus we apply SCM to estimate what deforestation would have been in a (counterfactual) scenario of no local initiative. We obtain a plausible estimate, in that deforestation patterns before the intervention were similar in Paragominas and the synthetic control, which suggests that after several years, the initiative did lower deforestation (significantly below the synthetic control in 2012). This demonstrates that SCM can yield helpful land-use counterfactuals for single units, with opportunities to integrate local and expert knowledge and to test innovations and permutations on policies.
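The constrained fit at the heart of SCM, non-negative donor weights summing to one that minimize the pre-intervention mismatch, can be sketched with a tiny grid search; real applications use a proper quadratic-programming solver and many donor units, and the "deforestation paths" below are invented.

```python
import numpy as np

def synthetic_control_weights(treated, donors, step=0.01):
    """Brute-force search over the 3-donor simplex for non-negative,
    sum-to-one weights minimizing the squared mismatch with the treated
    unit's pre-intervention outcome path (a tiny stand-in for the
    constrained quadratic program SCM actually solves)."""
    best, best_err = None, float("inf")
    grid = np.arange(0.0, 1.0 + step / 2, step)
    for w0 in grid:
        for w1 in grid:
            w2 = 1.0 - w0 - w1
            if w2 < -1e-9:
                continue  # outside the simplex
            w = np.array([w0, w1, max(w2, 0.0)])
            err = np.sum((treated - donors @ w) ** 2)
            if err < best_err:
                best, best_err = w, err
    return best

# Hypothetical pre-2008 outcome paths; the treated unit mixes donors 0 and 1:
years = np.arange(8, dtype=float)
donors = np.column_stack([years, 10.0 - years, years ** 2])
treated = 0.6 * donors[:, 0] + 0.4 * donors[:, 1]
print(synthetic_control_weights(treated, donors))  # close to [0.6, 0.4, 0.0]
```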
Sills, Erin O.; Herrera, Diego; Kirkpatrick, A. Justin; Brandão, Amintas; Dickson, Rebecca; Hall, Simon; Pattanayak, Subhrendu; Shoch, David; Vedoveto, Mariana; Young, Luisa; Pfaff, Alexander
2015-01-01
Quasi-experimental methods increasingly are used to evaluate the impacts of conservation interventions by generating credible estimates of counterfactual baselines. These methods generally require large samples for statistical comparisons, presenting a challenge for evaluating innovative policies implemented within a few pioneering jurisdictions. Single jurisdictions often are studied using comparative methods, which rely on analysts’ selection of best case comparisons. The synthetic control method (SCM) offers one systematic and transparent way to select cases for comparison, from a sizeable pool, by focusing upon similarity in outcomes before the intervention. We explain SCM, then apply it to one local initiative to limit deforestation in the Brazilian Amazon. The municipality of Paragominas launched a multi-pronged local initiative in 2008 to maintain low deforestation while restoring economic production. This was a response to having been placed, due to high deforestation, on a federal “blacklist” that increased enforcement of forest regulations and restricted access to credit and output markets. The local initiative included mapping and monitoring of rural land plus promotion of economic alternatives compatible with low deforestation. The key motivation for the program may have been to reduce the costs of blacklisting. However, its stated purpose was to limit deforestation, and thus we apply SCM to estimate what deforestation would have been in a (counterfactual) scenario of no local initiative. We obtain a plausible estimate, in that deforestation patterns before the intervention were similar in Paragominas and the synthetic control, which suggests that after several years, the initiative did lower deforestation (significantly below the synthetic control in 2012). This demonstrates that SCM can yield helpful land-use counterfactuals for single units, with opportunities to integrate local and expert knowledge and to test innovations and permutations on policies.
Directory of Open Access Journals (Sweden)
Erin O Sills
Full Text Available Quasi-experimental methods increasingly are used to evaluate the impacts of conservation interventions by generating credible estimates of counterfactual baselines. These methods generally require large samples for statistical comparisons, presenting a challenge for evaluating innovative policies implemented within a few pioneering jurisdictions. Single jurisdictions often are studied using comparative methods, which rely on analysts' selection of best case comparisons. The synthetic control method (SCM) offers one systematic and transparent way to select cases for comparison, from a sizeable pool, by focusing upon similarity in outcomes before the intervention. We explain SCM, then apply it to one local initiative to limit deforestation in the Brazilian Amazon. The municipality of Paragominas launched a multi-pronged local initiative in 2008 to maintain low deforestation while restoring economic production. This was a response to having been placed, due to high deforestation, on a federal "blacklist" that increased enforcement of forest regulations and restricted access to credit and output markets. The local initiative included mapping and monitoring of rural land plus promotion of economic alternatives compatible with low deforestation. The key motivation for the program may have been to reduce the costs of blacklisting. However, its stated purpose was to limit deforestation, and thus we apply SCM to estimate what deforestation would have been in a (counterfactual) scenario of no local initiative. We obtain a plausible estimate, in that deforestation patterns before the intervention were similar in Paragominas and the synthetic control, which suggests that after several years, the initiative did lower deforestation (significantly below the synthetic control in 2012). This demonstrates that SCM can yield helpful land-use counterfactuals for single units, with opportunities to integrate local and expert knowledge and to test innovations and permutations on policies.
Applying Mathematical Optimization Methods to an ACT-R Instance-Based Learning Model.
Directory of Open Access Journals (Sweden)
Nadia Said
Full Text Available Computational models of cognition provide an interface to connect advanced mathematical tools and methods to empirically supported theories of behavior in psychology, cognitive science, and neuroscience. In this article, we consider a computational model of instance-based learning, implemented in the ACT-R cognitive architecture. We propose an approach for obtaining mathematical reformulations of such cognitive models that improve their computational tractability. For the well-established Sugar Factory dynamic decision making task, we conduct a simulation study to analyze central model parameters. We show how mathematical optimization techniques can be applied to efficiently identify optimal parameter values with respect to different optimization goals. Beyond these methodological contributions, our analysis reveals the sensitivity of this particular task with respect to initial settings and yields new insights into how average human performance deviates from potential optimal performance. We conclude by discussing possible extensions of our approach as well as future steps towards applying more powerful derivative-based optimization methods.
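The parameter search described can be illustrated with a simple grid search over a toy, hypothetical performance curve; this is not the ACT-R model, the Sugar Factory task, or the derivative-based methods the article discusses, only the shape of the optimization problem.

```python
import math

def predicted_score(noise):
    """Hypothetical smooth performance curve of a toy instance-based
    learner as a function of its noise parameter; a stand-in for the
    expensive simulation-based objective an ACT-R model would define."""
    return math.exp(-(noise - 0.35) ** 2 / 0.02)

# Dense grid search for the parameter value maximizing predicted performance;
# a reformulated, tractable objective admits faster derivative-based methods:
grid = [i / 1000 for i in range(1001)]
best = max(grid, key=predicted_score)
print(best)  # → 0.35
```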
Labile soil phosphorus as influenced by methods of applying radioactive phosphorus
International Nuclear Information System (INIS)
Selvaratnam, V.V.; Andersen, A.J.; Thomsen, J.D.; Gissel-Nielsen, G.
1980-03-01
The influence of different methods of applying radioactive phosphorus on the E- and L-values was studied in four soil types using barley, buckwheat, and rye grass for the L-value determination. The four soils differed greatly in their E- and L-values. The experiment was carried out both with and without carrier-P. The presence of carrier-P had no influence on the E-values, while carrier-P in some cases gave a lower L-value. Both E- and L-values depended on the method of application. When the 32P was applied to a small soil or sand sample and dried before mixing with the total amount of soil, the E-values were higher than at direct application, most likely because of a stronger fixation to the soil/sand particles. This was not the case for the L-values, which are based on a much longer equilibrium time. On the contrary, the direct application of the 32P solution to the whole amount of soil gave higher L-values because of a non-homogeneous distribution of the 32P in the soil. (author)
Analysis of coupled neutron-gamma radiations, applied to shieldings in multigroup albedo method
International Nuclear Information System (INIS)
Dunley, Leonardo Souza
2002-01-01
The principal mathematical tools frequently available for calculations in Nuclear Engineering, including coupled neutron-gamma radiation shielding problems, involve the full Transport Theory or the Monte Carlo techniques. The Multigroup Albedo Method applied to shieldings is characterized by following the radiations through distinct layers of materials, allowing the determination of the neutron and gamma fractions reflected from, transmitted through and absorbed in the irradiated media when a neutronic stream hits the first layer of material, independently of flux calculations. Thus, the method is a complementary tool of great didactic value due to its clarity and simplicity in solving neutron and/or gamma shielding problems. The outstanding results achieved in previous works motivated the elaboration and the development of the study presented in this dissertation. The radiation balance resulting from the incidence of a neutronic stream on a shielding composed of 'm' slab layers, non-multiplying for neutrons, was determined by the Albedo method, considering 'n' energy groups for neutrons and 'g' energy groups for gammas. It was assumed that there is no upscattering of neutrons or gammas; however, neutrons from any energy group are able to produce gammas of all energy groups. The ANISN code, for an angular quadrature order S2, was used as a standard for comparison of the results obtained by the Albedo method. It was therefore necessary to choose an identical system configuration for both the ANISN and Albedo methods: six neutron energy groups, eight gamma energy groups, and three slab layers (iron - aluminum - manganese). The excellent results expressed in comparative tables show great agreement between the values determined by the deterministic code adopted as standard and the values determined by the computational program created using the Albedo method and the algorithm developed for coupled neutron-gamma radiations.
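The layer-by-layer bookkeeping of the albedo method reduces, in a one-group sketch, to summing the multiple reflections between adjacent slabs; the reflection/transmission numbers below are illustrative, and the dissertation's multigroup coupled neutron-gamma treatment is far richer.

```python
def compose_layers(r1, t1, r2, t2):
    """Combine the reflection (albedo) and transmission fractions of two
    slab layers by summing the geometric series of inter-layer
    reflections: R = r1 + t1^2 * r2 / (1 - r1*r2), T = t1*t2 / (1 - r1*r2).
    One energy group only; the multigroup case replaces each scalar with
    a matrix over the n neutron and g gamma groups."""
    denom = 1.0 - r1 * r2
    R = r1 + t1 * t1 * r2 / denom   # reflected fraction
    T = t1 * t2 / denom             # transmitted fraction
    return R, T

# Illustrative per-layer albedo data (not from the dissertation):
R, T = compose_layers(0.30, 0.50, 0.20, 0.60)
absorbed = 1.0 - R - T              # balance closes: absorbed in the stack
print(round(R, 4), round(T, 4), round(absorbed, 4))
```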
Raies, Arwa B.
2017-12-05
One goal of toxicity testing, among others, is identifying harmful effects of chemicals. Given the high demand for toxicity tests, it is necessary to conduct these tests for multiple toxicity endpoints for the same compound. Current computational toxicology methods aim at developing models mainly to predict a single toxicity endpoint. When chemicals cause several toxicity effects, one model is generated to predict toxicity for each endpoint, which can be labor and computationally intensive when the number of toxicity endpoints is large. Additionally, this approach does not take into consideration possible correlation between the endpoints. Therefore, there has been a recent shift in computational toxicity studies toward generating predictive models able to predict several toxicity endpoints by utilizing correlations between these endpoints. Applying such correlations jointly with compounds' features may improve model's performance and reduce the number of required models. This can be achieved through multi-label classification methods. These methods have not undergone comprehensive benchmarking in the domain of predictive toxicology. Therefore, we performed extensive benchmarking and analysis of over 19,000 multi-label classification models generated using combinations of the state-of-the-art methods. The methods have been evaluated from different perspectives using various metrics to assess their effectiveness. We were able to illustrate variability in the performance of the methods under several conditions. This review will help researchers to select the most suitable method for the problem at hand and provide a baseline for evaluating new approaches. Based on this analysis, we provided recommendations for potential future directions in this area.
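A minimal binary-relevance sketch, one of the baseline multi-label families such benchmarks include: one independent classifier per toxicity endpoint. The nearest-centroid base classifier and the synthetic data are stand-ins, not the benchmarked methods.

```python
import numpy as np

class BinaryRelevance:
    """Binary relevance multi-label scheme: fit one independent classifier
    per label (endpoint), ignoring inter-label correlations. The base
    classifier here is a simple nearest-centroid rule."""

    def fit(self, X, Y):
        # per-label centroids of the negative (0) and positive (1) classes
        self.centroids = [(X[Y[:, j] == 0].mean(axis=0),
                           X[Y[:, j] == 1].mean(axis=0))
                          for j in range(Y.shape[1])]
        return self

    def predict(self, X):
        cols = []
        for c0, c1 in self.centroids:
            d0 = np.linalg.norm(X - c0, axis=1)
            d1 = np.linalg.norm(X - c1, axis=1)
            cols.append((d1 < d0).astype(int))  # closer positive centroid -> 1
        return np.column_stack(cols)

# Synthetic "compound features" with two separable endpoints:
rng = np.random.default_rng(2)
X = rng.normal(size=(200, 5))
Y = np.column_stack([(X[:, 0] > 0).astype(int), (X[:, 1] > 0).astype(int)])
acc = (BinaryRelevance().fit(X, Y).predict(X) == Y).mean()
print(round(acc, 2))
```

Methods that exploit label correlations (classifier chains, label powerset, and others covered by the benchmark) extend exactly this scheme.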
Raies, Arwa B.; Bajic, Vladimir B.
2017-01-01
One goal of toxicity testing, among others, is identifying harmful effects of chemicals. Given the high demand for toxicity tests, it is necessary to conduct these tests for multiple toxicity endpoints for the same compound. Current computational toxicology methods aim at developing models mainly to predict a single toxicity endpoint. When chemicals cause several toxicity effects, one model is generated to predict toxicity for each endpoint, which can be labor and computationally intensive when the number of toxicity endpoints is large. Additionally, this approach does not take into consideration possible correlation between the endpoints. Therefore, there has been a recent shift in computational toxicity studies toward generating predictive models able to predict several toxicity endpoints by utilizing correlations between these endpoints. Applying such correlations jointly with compounds' features may improve model's performance and reduce the number of required models. This can be achieved through multi-label classification methods. These methods have not undergone comprehensive benchmarking in the domain of predictive toxicology. Therefore, we performed extensive benchmarking and analysis of over 19,000 multi-label classification models generated using combinations of the state-of-the-art methods. The methods have been evaluated from different perspectives using various metrics to assess their effectiveness. We were able to illustrate variability in the performance of the methods under several conditions. This review will help researchers to select the most suitable method for the problem at hand and provide a baseline for evaluating new approaches. Based on this analysis, we provided recommendations for potential future directions in this area.
International Nuclear Information System (INIS)
Huh, Jae Sung; Kwak, Byung Man
2011-01-01
Robust optimization and reliability-based design optimization are methodologies employed to take into account the uncertainties of a system at the design stage. For applying such methodologies to solve industrial problems, accurate and efficient methods for estimating statistical moments and failure probability are required; further, the results of the sensitivity analysis, which is needed for determining the search direction during the optimization process, should also be accurate. The aim of this study is to employ the function approximation moment method in the sensitivity analysis formulation, which is expressed in integral form, to verify the accuracy of the sensitivity results, and to solve a typical reliability-based design optimization problem. These results are compared with those of other moment methods, and the feasibility of the function approximation moment method is verified. The sensitivity analysis formula in integral form is an efficient formulation for evaluating sensitivity, because no additional function calculations are needed provided the failure probability or statistical moments have already been calculated.
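For orientation, the quantities the moment methods approximate (statistical moments and the failure probability of a limit state) can be estimated by crude Monte Carlo; the limit-state function below is hypothetical, and the paper's function approximation moment method exists precisely to avoid this sampling cost.

```python
import numpy as np

rng = np.random.default_rng(3)

def performance(x):
    """Hypothetical limit-state function g(x); failure when g(x) < 0."""
    return 6.0 - x[:, 0] - x[:, 1]

# Two random design variables, each N(2, 1), so g ~ N(2, sqrt(2)):
samples = rng.normal(loc=2.0, scale=1.0, size=(200_000, 2))
g = performance(samples)
mean, std = g.mean(), g.std()   # first two statistical moments of g
pf = (g < 0).mean()             # failure probability estimate
print(round(mean, 2), round(std, 2), round(pf, 4))
```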
Directory of Open Access Journals (Sweden)
V. I. Freyman
2015-11-01
Full Text Available Subject of Research. Representation features of education results for competence-based educational programs are analyzed. The importance of decoding and estimating proficiency for the elements and components of the discipline parts of competences is shown. The purpose and objectives of the research are formulated. Methods. The paper applies methods of mathematical logic, Boolean algebra, and parametric analysis of complex diagnostic test results that control the proficiency of certain discipline competence elements. Results. A method of logical condition analysis is created. It makes it possible to formulate logical conditions for determining the proficiency of each discipline competence element controlled by a complex diagnostic test. The normalized test result is divided into non-overlapping zones, and a logical condition about the proficiency of the controlled elements is formulated for each of them. Summary characteristics for the test result zones are defined. An example of forming logical conditions for a diagnostic test with preset features is provided. Practical Relevance. The proposed method of logical condition analysis is applied in the decoding algorithm of proficiency test diagnosis for discipline competence elements. It makes it possible to automate the search procedure for elements with insufficient proficiency, and it is also usable for the estimation of education results of a discipline or a component of a competence-based educational program.
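The zone-decoding step can be sketched as a lookup from a normalized test score to a logical condition on element proficiency; the zone boundaries and verdicts below are illustrative, not the paper's.

```python
def decode_zones(score, zones):
    """Map a normalized test result in [0, 1] to the logical condition on
    the proficiency of the controlled competence elements associated
    with the zone the score falls into (first matching zone wins)."""
    for low, high, verdict in zones:
        if low <= score <= high:
            return verdict
    raise ValueError("score outside the normalized range [0, 1]")

# Illustrative zones for a test controlling two elements, A and B:
zones = [
    (0.0, 0.5, "neither element proficient"),
    (0.5, 0.8, "element A proficient AND element B not proficient"),
    (0.8, 1.0, "element A proficient AND element B proficient"),
]
print(decode_zones(0.9, zones))
```

Automating the "search for elements with insufficient proficiency" then amounts to scanning decoded verdicts for negated elements.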
An IMU-to-Body Alignment Method Applied to Human Gait Analysis
Directory of Open Access Journals (Sweden)
Laura Susana Vargas-Valencia
2016-12-01
Full Text Available This paper presents a novel calibration procedure as a simple, yet powerful, method to place and align inertial sensors with body segments. The calibration can be easily replicated without the need for any additional tools. The proposed method is validated in three different applications: a computer mathematical simulation; a simplified joint composed of two semi-spheres interconnected by a universal goniometer; and a real gait test with five able-bodied subjects. Simulation results demonstrate that, after the calibration method is applied, the joint angles are correctly measured independently of previous sensor placement on the joint, thus validating the proposed procedure. In the cases of the simplified joint and the real gait test with human volunteers, the method also performs correctly, although secondary-plane errors appear when compared with the simulation results. We believe that such errors are caused by limitations of the current inertial measurement unit (IMU) technology and fusion algorithms. In conclusion, the presented calibration procedure is an interesting option to solve the alignment problem when using IMUs for gait analysis.
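The simulation check described (joint angles recovered independently of sensor placement) rests on extracting the angle of the relative rotation between two calibrated segment frames, for example via the trace identity; this is a generic sketch, not the paper's full calibration pipeline.

```python
import numpy as np

def rot_z(theta):
    """Rotation matrix about the z axis by theta radians."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def joint_angle(R_proximal, R_distal):
    """Angle (degrees) of the relative rotation between two already
    calibrated segment frames, using trace(R) = 1 + 2*cos(angle)."""
    R_rel = R_proximal.T @ R_distal
    cos_angle = np.clip((np.trace(R_rel) - 1.0) / 2.0, -1.0, 1.0)
    return np.degrees(np.arccos(cos_angle))

# A hypothetical knee flexed 30 degrees about the shared z axis:
print(round(joint_angle(rot_z(0.0), rot_z(np.radians(30.0))), 1))  # → 30.0
```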
The Cn method applied to problems with an anisotropic diffusion law
International Nuclear Information System (INIS)
Grandjean, P.M.
A two-dimensional Cn calculation has been applied to homogeneous media subjected to the Rayleigh impact law. Results obtained with collision probability and Chandrasekhar calculations are compared to those from the Cn method. Introducing into the expression of the transport equation an expansion truncated on a polynomial basis for the outgoing angular flux (or possibly the entrance flux) gives two Cn systems of algebraic linear equations for the expansion coefficients. The matrix elements of these equations are the moments of the Green function in an infinite medium. The search for the Green function is effected through the Fourier transformation of the integro-differential equation, and its moments are derived from their Fourier transforms through a numerical integration in the complex plane. The method has been used for calculating the albedo in semi-infinite media, the extrapolation length of the Milne problem, and the albedo and transmission factor of a slab (a concise study of convergence is presented). For the collision probability method, a system of integro-differential equations bearing on the moments of the angular flux inside the medium has been derived; it is solved numerically by approximating the bulk flux with step functions. The albedo in a semi-infinite medium has also been computed through the semi-analytical Chandrasekhar method, in which the outgoing flux is expressed as a function of the entrance flux by means of an integral whose kernel is numerically derived. [fr]
International Nuclear Information System (INIS)
Brodsky, A.
1979-01-01
Some recent reports of Mancuso, Stewart and Kneale claim findings of radiation-produced cancer in the Hanford worker population. These claims are based on statistical computations that use small differences in accumulated exposures between groups dying of cancer and groups dying of other causes; actual mortality and longevity were not reported. This paper presents a statistical method for the evaluation of actual mortality and longevity longitudinally over time, as applied in a primary analysis of the mortality experience of the Hanford worker population. Although available, this method was not utilized in the Mancuso-Stewart-Kneale paper. The author's preliminary longitudinal analysis shows that the gross mortality experience of persons employed at Hanford during the 1943-70 interval did not differ significantly from that of certain controls, when both employees and controls were selected from families with two or more offspring and comparisons were matched by age, sex, race and year of entry into employment. This result is consistent with findings reported by Sanders (Health Phys. vol. 35, 521-538, 1978). The method utilizes an approximate chi-square (1 D.F.) statistic for testing population subgroup comparisons, as well as the cumulation of chi-squares (1 D.F.) for testing the overall result of a particular type of comparison. The method is available for computer testing of the Hanford mortality data, and could also be adapted to morbidity or other population studies. (author)
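The chi-square (1 d.f.) subgroup comparison can be sketched on a 2x2 died/survived table; the counts below are illustrative, not the Hanford data.

```python
import numpy as np

def chi_square_1df(observed):
    """Pearson chi-square statistic for a 2x2 contingency table (1 degree
    of freedom): sum over cells of (O - E)^2 / E, with expected counts
    from the row and column margins. Per-comparison statistics like this
    can then be cumulated across matched subgroup comparisons."""
    observed = np.asarray(observed, dtype=float)
    row = observed.sum(axis=1, keepdims=True)
    col = observed.sum(axis=0, keepdims=True)
    expected = row @ col / observed.sum()
    return ((observed - expected) ** 2 / expected).sum()

# Hypothetical matched comparison: workers vs. controls, died vs. survived
table = [[90, 910],
         [100, 900]]
print(round(chi_square_1df(table), 3))
```

A value this far below the 3.84 critical value (1 d.f., 5 % level) would indicate no significant mortality difference, in line with the paper's finding.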
Directory of Open Access Journals (Sweden)
Koivistoinen Teemu
2007-01-01
Full Text Available As we know, singular value decomposition (SVD) is designed for computing singular values (SVs) of a matrix. Then, if it is used for finding SVs of an m-by-1 or 1-by-m array with elements representing samples of a signal, it will return only one singular value that is not enough to express the whole signal. To overcome this problem, we designed a new kind of feature extraction method which we call ''time-frequency moments singular value decomposition (TFM-SVD).'' In this new method, we use statistical features of the time series as well as the frequency series (the Fourier transform of the signal). This information is then extracted into a certain matrix with a fixed structure and the SVs of that matrix are sought. This transform can be used as a preprocessing stage in pattern clustering methods. The results in using it indicate that the performance of a combined system including this transform and classifiers is comparable with the performance of using other feature extraction methods such as wavelet transforms. To evaluate TFM-SVD, we applied this new method and artificial neural networks (ANNs) for ballistocardiogram (BCG) data clustering to look for probable heart disease of six test subjects. BCG from the test subjects was recorded using a chair-like ballistocardiograph, developed in our project. This kind of device combined with automated recording and analysis would be suitable for use in many places, such as home, office, and so forth. The results show that the method has high performance and it is almost insensitive to BCG waveform latency or nonlinear disturbance.
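A sketch of a TFM-SVD-style descriptor: statistical moments of the time series and of its magnitude spectrum are stacked into a small fixed matrix whose singular values form the feature vector. The exact moment set and matrix layout used in the paper may differ.

```python
import numpy as np

def tfm_svd_features(signal):
    """Stack mean, standard deviation, and third/fourth central moments of
    the time series and of its magnitude spectrum into a fixed 2x4
    matrix, then return that matrix's singular values as the feature
    vector (a TFM-SVD-style sketch)."""
    spectrum = np.abs(np.fft.rfft(signal))
    moments = []
    for s in (signal, spectrum):
        d = s - s.mean()
        moments.append([s.mean(), s.std(), (d ** 3).mean(), (d ** 4).mean()])
    return np.linalg.svd(np.array(moments), compute_uv=False)

# Example: features of a pure 5 Hz tone sampled over one second
t = np.linspace(0.0, 1.0, 512, endpoint=False)
sv = tfm_svd_features(np.sin(2.0 * np.pi * 5.0 * t))
print(np.round(sv, 3))
```

Unlike the single SV of the raw 1-by-m sample array, this yields a small multi-component descriptor usable as a clustering front end.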
Directory of Open Access Journals (Sweden)
Alpo Värri
2007-01-01
Full Text Available As we know, singular value decomposition (SVD) is designed for computing singular values (SVs) of a matrix. Then, if it is used for finding SVs of an m-by-1 or 1-by-m array with elements representing samples of a signal, it will return only one singular value that is not enough to express the whole signal. To overcome this problem, we designed a new kind of feature extraction method which we call ‘‘time-frequency moments singular value decomposition (TFM-SVD).’’ In this new method, we use statistical features of the time series as well as the frequency series (the Fourier transform of the signal). This information is then extracted into a certain matrix with a fixed structure and the SVs of that matrix are sought. This transform can be used as a preprocessing stage in pattern clustering methods. The results in using it indicate that the performance of a combined system including this transform and classifiers is comparable with the performance of using other feature extraction methods such as wavelet transforms. To evaluate TFM-SVD, we applied this new method and artificial neural networks (ANNs) for ballistocardiogram (BCG) data clustering to look for probable heart disease of six test subjects. BCG from the test subjects was recorded using a chair-like ballistocardiograph, developed in our project. This kind of device combined with automated recording and analysis would be suitable for use in many places, such as home, office, and so forth. The results show that the method has high performance and it is almost insensitive to BCG waveform latency or nonlinear disturbance.
Akhbardeh, Alireza; Junnila, Sakari; Koivuluoma, Mikko; Koivistoinen, Teemu; Värri, Alpo
2006-12-01
As we know, singular value decomposition (SVD) is designed for computing singular values (SVs) of a matrix. Then, if it is used for finding SVs of an m-by-1 or 1-by-m array with elements representing samples of a signal, it will return only one singular value that is not enough to express the whole signal. To overcome this problem, we designed a new kind of the feature extraction method which we call ''time-frequency moments singular value decomposition (TFM-SVD).'' In this new method, we use statistical features of time series as well as frequency series (Fourier transform of the signal). This information is then extracted into a certain matrix with a fixed structure and the SVs of that matrix are sought. This transform can be used as a preprocessing stage in pattern clustering methods. The results in using it indicate that the performance of a combined system including this transform and classifiers is comparable with the performance of using other feature extraction methods such as wavelet transforms. To evaluate TFM-SVD, we applied this new method and artificial neural networks (ANNs) for ballistocardiogram (BCG) data clustering to look for probable heart disease of six test subjects. BCG from the test subjects was recorded using a chair-like ballistocardiograph, developed in our project. This kind of device combined with automated recording and analysis would be suitable for use in many places, such as home, office, and so forth. The results show that the method has high performance and it is almost insensitive to BCG waveform latency or nonlinear disturbance.
Energy Technology Data Exchange (ETDEWEB)
Vesisenaho, T [VTT Energy, Jyvaeskylae (Finland); Liukkonen, S [VTT Manufacturing Technology, Espoo (Finland)
1997-12-01
The objective of this project is to apply the whole-tree harvesting method to Finnish timber-harvesting conditions in order to lower the harvesting costs of energy wood and timber in spruce-dominant final cuttings. In Finnish conditions timber harvesting is normally based on the log-length method. Because of small landings and the high share of thinning cuttings, whole-tree skidding methods cannot be utilised extensively. The share of stands that could be harvested with the whole-tree skidding method turned out to be about 10% of the total harvesting volume of 50 million m³. The corresponding harvesting potential of energy wood is 0.25 Mtoe. The aim of the structural measurements made in this project was to obtain information about the effect of different hauling methods on the structural response of the tractor, and thus reveal the possible special requirements that whole-tree skidding places on forest tractor design. Altogether 7 strain-gauge-based sensors were mounted in the rear frame structures and drive shafts of the forest tractor. Five strain gauges measured local strains in critical details, and two sensors measured the torque moments of the front and rear bogie drive shafts. The revolution speed of the rear drive shaft was also recorded. Signal time histories, maximum peaks, time-at-level distributions and rainflow distributions were gathered in different hauling modes. From these, maximum values, average stress levels and fatigue-life estimates were calculated for each mode, and the different methods were compared from the structural point of view.
Brucellosis Prevention Program: Applying “Child to Family Health Education” Method
Directory of Open Access Journals (Sweden)
H. Allahverdipour
2010-04-01
Full Text Available Introduction & Objective: Pupils have an efficient potential to increase community awareness and promote community health by participating in health education programs. The child-to-family health education program is one of the communicative strategies applied in this field trial study. Because of the high prevalence of Brucellosis in Hamadan province, Iran, the aim of this study was to promote families' knowledge and preventive behaviors regarding Brucellosis in rural areas by using the child-to-family health education method. Materials & Methods: In this nonequivalent control group design study, three rural schools were chosen (one as intervention and two as control). At first, the knowledge and behavior of families about Brucellosis were determined using a designed questionnaire. The families were then educated through the "child to family" procedure: the students first gained the information and were then instructed to teach their parents what they had learned. Three months after the last session of education, the changes in the families' knowledge and behavior regarding Brucellosis were determined and analyzed by paired t-test. Results: The results showed significant improvement in the mothers' knowledge. The mothers' knowledge about the signs of Brucellosis in humans increased from 1.81 to 3.79 (t = -21.64, p < 0.001), and their knowledge of the signs of Brucellosis in animals increased from 1.48 to 2.82 (t = -10.60, p < 0.001). Conclusion: The child-to-family health education program is an effective and available method that would be useful in most communities, and students' potential can be drawn on in health promotion programs.
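The pre/post comparison above rests on the paired t-statistic. As a small illustration of that computation, the sketch below evaluates it on made-up knowledge scores (the study's raw data are not given in the abstract), using only the standard library.

```python
from statistics import mean, stdev
from math import sqrt

def paired_t(pre, post):
    """Paired t-statistic t = mean(d) / (sd(d) / sqrt(n)) on the
    per-subject differences d = post - pre, as used to compare family
    knowledge scores before and after the programme."""
    d = [b - a for a, b in zip(pre, post)]
    n = len(d)
    return mean(d) / (stdev(d) / sqrt(n))

pre  = [1, 2, 2, 1, 2, 3, 1, 2]   # hypothetical pre-test knowledge scores
post = [4, 3, 4, 3, 4, 4, 3, 4]   # hypothetical post-test scores
t_stat = paired_t(pre, post)
print(round(t_stat, 2))
```

A large positive (or, with the study's sign convention, negative) t on 7 degrees of freedom corresponds to the very small p-values reported.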
International Nuclear Information System (INIS)
Yiin, L.-M.; Lu, S.-E.; Sannoh, Sulaiman; Lim, B.S.; Rhoads, G.G.
2004-01-01
We conducted a cleaning trial in 40 northern New Jersey homes where home renovation and remodeling (R and R) activities were undertaken. Two cleaning protocols were used in the study: a specific method recommended by the US Department of Housing and Urban Development (HUD), in the 1995 'Guidelines for the Evaluation and Control of Lead-Based Paint Hazards in Housing', using a high-efficiency particulate air (HEPA)-filtered vacuum cleaner and a tri-sodium phosphate solution (TSP); and an alternative method using a household vacuum cleaner and a household detergent. Eligible homes were built before the 1970s with potential lead-based paint and had recent R and R activities without thorough cleaning. The two cleaning protocols were randomly assigned to the participants' homes and followed the HUD-recommended three-step procedure: vacuuming, wet washing, and repeat vacuuming. Wipe sampling was conducted on floor surfaces or windowsills before and after cleaning to evaluate the efficacy. All floor and windowsill data indicated that both methods (TSP/HEPA and non-TSP/non-HEPA) were effective in reducing lead loading on the surfaces (P<0.001). When cleaning was applied to surfaces with initial lead loading above the clearance standards, the reductions were even greater, above 95% for either cleaning method. The mixed-effect model analysis showed no significant difference between the two methods. Baseline lead loading was found to be associated with lead loading reduction significantly on floors (P<0.001) and marginally on windowsills (P=0.077). Such relations were different between the two cleaning methods significantly on floors (P<0.001) and marginally on windowsills (P=0.066), with the TSP/HEPA method being favored for higher baseline levels and the non-TSP/non-HEPA method for lower baseline levels. For the 10 homes with lead abatement, almost all post-cleaning lead loadings were below the standards using either cleaning method. Based on our results, we recommend that
Method for pulse to pulse dose reproducibility applied to electron linear accelerators
International Nuclear Information System (INIS)
Ighigeanu, D.; Martin, D.; Oproiu, C.; Cirstea, E.; Craciun, G.
2002-01-01
An original method for obtaining programmed single beam shots and pulse trains with a programmed pulse number, pulse repetition frequency, pulse duration and pulse dose is presented. It is particularly useful for automatic control of the absorbed dose rate level and of the irradiation process, as well as in pulse radiolysis studies, single-pulse dose measurement, or research experiments where pulse-to-pulse dose reproducibility is required. The method is applied to the electron linear accelerators ALIN-10 (6.23 MeV, 82 W) and ALID-7 (5.5 MeV, 670 W), built at NILPRP. To implement this method, the accelerator triggering system (ATS) consists of two branches: the gun branch and the magnetron branch. The ATS, which synchronizes all the system units, delivers trigger pulses at a programmed repetition rate (up to 250 pulses/s) to the gun (80 kV, 10 A, 4 ms) and the magnetron (45 kV, 100 A, 4 ms). The existence of the accelerated electron beam is determined by the overlapping of the electron gun and magnetron pulses. The method consists in controlling this overlapping so as to deliver the beam in the desired sequence; the control is implemented by a discrete pulse-position modulation of the gun and/or magnetron pulses. The instabilities of the gun and magnetron transient regimes are avoided by operating the accelerator with no accelerated beam for a certain time. At the operator's 'beam start' command, the ATS makes the electron gun and magnetron pulses overlap and the linac beam is generated. The pulse-to-pulse variation of the absorbed dose is thus considerably reduced. A programmed absorbed dose, irradiation time, beam pulse number or other external events may interrupt the coincidence between the gun and magnetron pulses. Slow absorbed dose variation is compensated by control of the pulse duration and repetition frequency. Two methods are reported in the electron linear accelerators' development for obtaining the pulse to pulse dose reproducibility: the method
Winchester, David E; Burkart, Thomas A; Choi, Calvin Y; McKillop, Matthew S; Beyth, Rebecca J; Dahm, Phillipp
2016-06-01
Training in quality improvement (QI) is a pillar of the next accreditation system of the Accreditation Council for Graduate Medical Education and a growing expectation of physicians for maintenance of certification. Despite this, many postgraduate medical trainees are not receiving training in QI methods. We created the Fellows Applied Quality Training (FAQT) curriculum for cardiology fellows, using both didactic and applied components, with the goal of increasing confidence to participate in future QI projects. Fellows completed didactic training from the Institute for Healthcare Improvement's Open School and then designed and completed a project to improve quality of care or patient safety. Self-assessments were completed by the fellows before, during, and after the first year of the curriculum. The primary outcome for our curriculum was the median score reported by the fellows regarding their self-confidence to complete QI activities. Self-assessments were completed by 23 fellows. The majority of fellows (15 of 23, 65.2%) reported no prior formal QI training. The median score on the baseline self-assessment was 3.0 (range, 1.85-4), which increased significantly to 3.27 (range, 2.23-4; P = 0.004) on the final assessment. The distribution of scores reported by the fellows indicates that 30% were slightly confident at conducting QI activities on their own, which was reduced to 5% after completing the FAQT curriculum. An interim assessment conducted after the fellows completed only the didactic training showed median scores not different from the baseline (mean, 3.0; P = 0.51). After completion of the FAQT, cardiology fellows reported higher self-confidence to complete QI activities. The increase in self-confidence seemed to be limited to the applied component of the curriculum, with no significant change after the didactic component.
Applying system engineering methods to site characterization research for nuclear waste repositories
International Nuclear Information System (INIS)
Woods, T.W.
1985-01-01
Nuclear research and engineering projects can benefit from the use of system engineering methods. This paper is a brief overview illustrating how system engineering methods could be applied in structuring a site characterization effort for a candidate nuclear waste repository. System engineering is simply an orderly process that has been widely used to transform a recognized need into a fully defined system. Such a system may be physical or abstract, natural or man-made, hardware or procedural, as appropriate to the system's need or objective. It is a way of mentally visualizing all the constituent elements and their relationships necessary to fulfill a need, and of doing so in compliance with all the constraining requirements attendant to that need. Such a system approach provides completeness, order, clarity, and direction. Admittedly, system engineering can be burdensome and inappropriate for project objectives with simple and familiar solutions that are easily held and controlled mentally. However, some type of documented and structured approach is needed for those objectives that dictate extensive, unique, or complex programs, and/or the creation of state-of-the-art machines and facilities. System engineering methods have been used extensively and successfully in such cases. The scientific method has served well in ordering countless technical undertakings that address a specific question. Similarly, conventional construction and engineering job methods will continue to be quite adequate for organizing routine building projects. Nuclear waste repository site characterization projects, however, involve multiple complex research questions and regulatory requirements that interface with each other and with advanced engineering and subsurface construction techniques. There is little doubt that system engineering is an appropriate orchestrating process for structuring such diverse elements into a cohesive, well-defined project.
A Precise Method for Cloth Configuration Parsing Applied to Single-Arm Flattening
Directory of Open Access Journals (Sweden)
Li Sun
2016-04-01
Full Text Available In this paper, we investigate the contribution that visual perception affords to a robotic manipulation task in which a crumpled garment is flattened by eliminating visually detected wrinkles. In order to explore and validate visually guided clothing manipulation in a repeatable and controlled environment, we have developed a hand-eye interactive virtual robot manipulation system that incorporates a clothing simulator to close the effector-garment-visual sensing interaction loop. We present the technical details and compare the performance of two different methods for detecting, representing and interpreting wrinkles within clothing surfaces captured in high-resolution depth maps. The first method relies upon a clustering-based approach for localizing and parametrizing wrinkles, while the second adopts a more advanced geometry-based approach in which shape-topology analysis underpins the identification of the cloth configuration (i.e., wrinkle maps). Having interpreted the state of the cloth configuration by means of either of these methods, a heuristic-based flattening strategy is then executed to infer the appropriate forces, their directions and the gripper contact locations that must be applied to the cloth in order to flatten the perceived wrinkles. A greedy approach, which attempts to flatten the largest detected wrinkle in each perception-iteration cycle, has been successfully adopted in this work. We present the results of our heuristic-based flattening methodology relying upon clustering-based and geometry-based features, respectively. Our experiments indicate that geometry-based features have the potential to provide a greater degree of clothing-configuration understanding and, as a consequence, improve flattening performance. The results of experiments using a real robot (as opposed to the simulated robot) also confirm our proposition that a more effective visual perception system can advance the performance of cloth
Specific algorithm method of scoring the Clock Drawing Test applied in cognitively normal elderly
Directory of Open Access Journals (Sweden)
Liana Chaves Mendes-Santos
Full Text Available The Clock Drawing Test (CDT) is an inexpensive, fast and easily administered measure of cognitive function, especially in the elderly. This instrument is a popular clinical tool widely used in screening for cognitive disorders and dementia. The CDT can be applied in different ways, and scoring procedures also vary. OBJECTIVE: The aims of this study were to analyze the performance of the elderly on the CDT and to evaluate the inter-rater reliability of the CDT scored by a specific algorithm method adapted from Sunderland et al. (1989). METHODS: We analyzed the CDT of 100 cognitively normal elderly aged 60 years or older. The CDT ("free-drawn") and the Mini-Mental State Examination (MMSE) were administered to all participants. Six independent examiners scored the CDT of 30 participants to evaluate inter-rater reliability. RESULTS AND CONCLUSION: A score of 5 on the proposed algorithm ("Numbers in reverse order or concentrated"), equivalent to 5 points on the original Sunderland scale, was the most frequent (53.5%). The specific CDT algorithm method used had high inter-rater reliability (p<0.01), and the mean score ranged from 5.06 to 5.96. The high frequency of an overall score of 5 points may suggest the need to create more nuanced evaluation criteria that are sensitive to differences in levels of impairment in visuoconstructive and executive abilities during aging.
An acceleration technique for the Gauss-Seidel method applied to symmetric linear systems
Directory of Open Access Journals (Sweden)
Jesús Cajigas
2014-06-01
Full Text Available A preconditioning technique is proposed to improve the convergence of the Gauss-Seidel method applied to symmetric linear systems while preserving symmetry. The preconditioner is of the form I + K and can be applied an arbitrary number of times. It is shown that, under certain conditions, applying the preconditioner a finite number of times reduces the matrix to a diagonal. A series of numerical experiments using matrices from spatial discretizations of partial differential equations demonstrates that both versions of the preconditioner, the point and the block version, exhibit lower iteration counts than their non-symmetric counterparts.
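For context, the baseline iteration that the I + K preconditioner accelerates is the classical Gauss-Seidel sweep. The sketch below implements plain Gauss-Seidel on a small symmetric positive definite system; the paper's specific K is not given in the abstract, so the preconditioning step itself is not reproduced here.

```python
import numpy as np

def gauss_seidel(A, b, x0=None, tol=1e-10, max_iter=500):
    """Plain Gauss-Seidel sweeps for Ax = b. The paper's I + K
    preconditioner (K unspecified in the abstract) would be applied to
    the system beforehand to lower the iteration count reported here."""
    n = len(b)
    x = np.zeros(n) if x0 is None else np.asarray(x0, dtype=float)
    for it in range(1, max_iter + 1):
        for i in range(n):
            # use already-updated entries x[:i] and old entries x[i+1:]
            x[i] = (b[i] - A[i, :i] @ x[:i] - A[i, i+1:] @ x[i+1:]) / A[i, i]
        if np.linalg.norm(A @ x - b) < tol:
            return x, it
    return x, max_iter

# symmetric positive definite system from a 1-D Laplacian discretization
A = np.array([[ 2., -1.,  0.],
              [-1.,  2., -1.],
              [ 0., -1.,  2.]])
b = np.array([1., 0., 1.])
x, iters = gauss_seidel(A, b)
print(x, iters)
```

For this system the exact solution is (1, 1, 1); the iteration count returned by `gauss_seidel` is the quantity the preconditioner is designed to reduce.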
A method of applying two-pump system in automatic transmissions for energy conservation
Directory of Open Access Journals (Sweden)
Peng Dong
2015-06-01
Full Text Available In order to improve hydraulic efficiency, modern automatic transmissions tend to employ an electric oil pump in their hydraulic system. The electric oil pump can support the mechanical oil pump for cooling, lubrication, and maintaining the line pressure at low engine speeds. In addition, the start-stop function can be realized by means of the electric oil pump, so fuel consumption can be further reduced. This article proposes a method of applying a two-pump system (one electric oil pump and one mechanical oil pump) in automatic transmissions based on forward driving simulation. A mathematical model for calculating the transmission power loss is developed. The power loss is converted into heat, which requires oil flow for cooling and lubrication. A leakage model is developed to calculate the leakage of the hydraulic system. In order to satisfy the flow requirement, a flow-based control strategy for the electric oil pump is developed. Simulation results for different driving cycles show that there is a best combination of the sizes of the electric oil pump and the mechanical oil pump with respect to optimal energy conservation. Moreover, the two-pump system can also satisfy the requirements of the start-stop function. This research is extremely valuable for the forward design of a two-pump system in automatic transmissions with respect to energy conservation and the start-stop function.
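A flow-based control strategy of the kind described can be sketched very simply: the mechanical pump's delivery scales with engine speed, and the electric pump is commanded to cover any remaining demand. All quantities and the linear pump model below are hypothetical, not taken from the article.

```python
def eop_flow_command(required_flow, engine_speed, mop_disp, eop_max_flow):
    """Flow-based control sketch (hypothetical units, e.g. L/min and rpm):
    the mechanical oil pump (MOP) delivers flow proportional to engine
    speed; the electric oil pump (EOP) covers the deficit, including the
    full demand during start-stop when the engine is off."""
    mop_flow = mop_disp * engine_speed           # ideal MOP delivery
    deficit = max(0.0, required_flow - mop_flow) # demand not met by the MOP
    return min(deficit, eop_max_flow)            # saturate at EOP capacity

print(eop_flow_command(10.0, 0, 0.01, 8.0))     # engine stopped: EOP saturates
print(eop_flow_command(10.0, 2000, 0.01, 8.0))  # MOP alone covers the demand
```

This captures the sizing trade-off the simulations explore: a larger mechanical pump shrinks the deficit at speed but wastes power, while a larger electric pump must still cover the whole demand during start-stop.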
IAEA-ASSET's root cause analysis method applied to sodium leakage incident at Monju
International Nuclear Information System (INIS)
Watanabe, Norio; Hirano, Masashi
1997-08-01
The present study applied the ASSET (Analysis and Screening of Safety Events Team) methodology, which identifies occurrences such as component failures and operator errors, identifies their respective direct/root causes and determines corrective actions, to the analysis of the sodium leakage incident at Monju, based on the reports published mainly by the Science and Technology Agency. The aims were the systematic identification of direct/root causes and corrective actions and a discussion of the effectiveness and problems of the ASSET methodology. The results revealed the following seven occurrences and showed the direct/root causes and contributing factors for each: failure of the thermometer well tube, delayed reactor manual trip, inadequate continuous monitoring of the leakage, misjudgment of the leak rate, a non-required operator action (turbine trip), retarded emergency sodium drainage, and retarded securing of the ventilation system. Most of the occurrences stemmed from deficiencies in the emergency operating procedures (EOPs), which were mainly caused by defects in the EOP preparation process and operator training programs. The corrective actions already proposed in the published reports were reviewed, identifying issues to be studied further, and possible corrective actions were discussed for these issues. The present study also demonstrated the effectiveness of the ASSET methodology and pointed out some of its problems, for example in delineating causal relations among occurrences, when applying it to the detailed and systematic analysis of event direct/root causes and the determination of concrete measures. (J.P.N.)
Bamberger, Katharine T
2016-03-01
The use of intensive longitudinal methods (ILM)-rapid in situ assessment at micro timescales-can be overlaid on RCTs and other study designs in applied family research. Particularly, when done as part of a multiple timescale design-in bursts over macro timescales-ILM can advance the study of the mechanisms and effects of family interventions and processes of family change. ILM confers measurement benefits in accurately assessing momentary and variable experiences and captures fine-grained dynamic pictures of time-ordered processes. Thus, ILM allows opportunities to investigate new research questions about intervention effects on within-subject (i.e., within-person, within-family) variability (i.e., dynamic constructs) and about the time-ordered change process that interventions induce in families and family members beginning with the first intervention session. This paper discusses the need and rationale for applying ILM to family intervention evaluation, new research questions that can be addressed with ILM, example research using ILM in the related fields of basic family research and the evaluation of individual-based interventions. Finally, the paper touches on practical challenges and considerations associated with ILM and points readers to resources for the application of ILM.
IAEA-ASSET's root cause analysis method applied to sodium leakage incident at Monju
Energy Technology Data Exchange (ETDEWEB)
Watanabe, Norio; Hirano, Masashi [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment
1997-08-01
The present study applied the ASSET (Analysis and Screening of Safety Events Team) methodology, which identifies occurrences such as component failures and operator errors, identifies their respective direct/root causes and determines corrective actions, to the analysis of the sodium leakage incident at Monju, based on the reports published mainly by the Science and Technology Agency. The aims were the systematic identification of direct/root causes and corrective actions and a discussion of the effectiveness and problems of the ASSET methodology. The results revealed the following seven occurrences and showed the direct/root causes and contributing factors for each: failure of the thermometer well tube, delayed reactor manual trip, inadequate continuous monitoring of the leakage, misjudgment of the leak rate, a non-required operator action (turbine trip), retarded emergency sodium drainage, and retarded securing of the ventilation system. Most of the occurrences stemmed from deficiencies in the emergency operating procedures (EOPs), which were mainly caused by defects in the EOP preparation process and operator training programs. The corrective actions already proposed in the published reports were reviewed, identifying issues to be studied further, and possible corrective actions were discussed for these issues. The present study also demonstrated the effectiveness of the ASSET methodology and pointed out some of its problems, for example in delineating causal relations among occurrences, when applying it to the detailed and systematic analysis of event direct/root causes and the determination of concrete measures. (J.P.N.)
[Influence of Sex and Age on Contrast Sensitivity Subject to the Applied Method].
Darius, Sabine; Bergmann, Lisa; Blaschke, Saskia; Böckelmann, Irina
2018-02-01
The aim of the study was to detect gender and age differences in both photopic and mesopic contrast sensitivity with different methods, in relation to the German driver's license regulations (Fahrerlaubnisverordnung; FeV). We examined 134 healthy volunteers (53 men, 81 women) aged between 18 and 76 years, who had been divided into two groups (AG I Mars charts under standardized illumination were applied for photopic contrast sensitivity. We could not find any gender differences. When evaluating age, there were no differences between the two groups for the Mars charts or in the Rodatest; in all other tests, the younger volunteers achieved significantly better results. For contrast vision, age-adapted cut-off values exist. Concerning the driving safety of traffic participants, sufficient photopic and mesopic contrast vision should be the focus, independent of age. Therefore, there is a need to reconsider the age-adapted cut-off values. Georg Thieme Verlag KG Stuttgart · New York.
Study of different ultrasonic focusing methods applied to non destructive testing
International Nuclear Information System (INIS)
El Amrani, M.
1995-01-01
The work presented in this thesis concerns the study of different ultrasonic focusing techniques applied to nondestructive testing (mechanical focusing and electronic focusing) and compares their capabilities. We have developed a model to predict the ultrasonic field radiated into a solid by water-coupled transducers. The model is based upon the Rayleigh integral formulation, modified to take into account the refraction at the liquid-solid interface. The model has been validated by numerous experiments in various configurations. Running this model and the associated software, we have developed new methods to optimize focused transducers and studied the characteristics of the beam generated by transducers using various focusing techniques. (author). 120 refs., 95 figs., 4 appends
Wakabayashi, Hideaki; Asai, Masamitsu; Matsumoto, Keiji; Yamakita, Jiro
2016-11-01
Nakayama's shadow theory first discussed the diffraction by a perfectly conducting grating in a planar mounting. In the theory, a new formulation by use of a scattering factor was proposed. This paper focuses on the middle regions of a multilayered dielectric grating placed in conical mounting. Applying the shadow theory to the matrix eigenvalues method, we compose new transformation and improved propagation matrices of the shadow theory for conical mounting. Using these matrices and scattering factors, being the basic quantity of diffraction amplitudes, we formulate a new description of three-dimensional scattering fields which is available even for cases where the eigenvalues are degenerate in any region. Some numerical examples are given for cases where the eigenvalues are degenerate in the middle regions.
Applying RP-FDM Technology to Produce Prototype Castings Using the Investment Casting Method
Directory of Open Access Journals (Sweden)
M. Macků
2012-09-01
Full Text Available The research focused on the production of prototype castings, which is mapped out starting from the drawing documentation up to the production of the casting itself. The FDM method was applied for the production of the 3D pattern. Its main objective was to find out what dimensional changes happened during the individual production stages, starting from the 3D pattern printing through the silicone mould production, wax pattern casting, shell making, melting the wax out of the shells and drying, up to the production of the final casting itself. Five measurements of the determined dimensions were made during production, and these were processed and evaluated mathematically. The results were a determination of shrinkage and a proposal of measures to maintain the dimensional stability of the final casting so as to meet the requirements specified by the customer.
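The shrinkage determination described above amounts to tracking the relative change of a reference dimension from stage to stage of the chain. The sketch below computes per-stage and total shrinkage percentages; the stage names and dimension values are illustrative, not the paper's measured data.

```python
def shrinkage_chain(dims):
    """Per-stage and total dimensional shrinkage (%) along an RP-FDM
    investment-casting chain, from a list of (stage, dimension) pairs."""
    stages, values = zip(*dims)
    per_stage = [100 * (a - b) / a for a, b in zip(values, values[1:])]
    total = 100 * (values[0] - values[-1]) / values[0]
    return per_stage, total

dims = [("3D FDM pattern", 50.00),   # mm, hypothetical reference dimension
        ("wax pattern",    49.60),
        ("final casting",  48.90)]
per_stage, total = shrinkage_chain(dims)
print([round(s, 2) for s in per_stage], round(total, 2))
```

Repeating this for each measured dimension gives the shrinkage allowances needed to keep the final casting within the customer's tolerances.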
Directory of Open Access Journals (Sweden)
J. Szymszal
2009-01-01
Full Text Available The study discusses the application of computer simulation based on the inverse cumulative distribution function method. The simulation refers to an elementary static case, which can also be solved by physical experiment, consisting mainly in observations of foundry production in a selected foundry plant. For the simulation and forecasting of foundry production quality in a selected cast iron grade, the random number generator of an Excel calculation sheet was chosen. The very wide potential of this type of simulation when applied to the evaluation of foundry production quality was demonstrated, using a uniform-distribution number generator to generate a variable of an arbitrary distribution, especially of a preset empirical distribution, without any need to fit smooth theoretical distributions to this variable.
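The inverse-CDF technique described above can be sketched in a few lines: a uniform generator (here `random.random`, standing in for the spreadsheet's generator) is passed through the inverse of a preset empirical step CDF, with no smooth theoretical distribution fitted. The quality grades and probabilities below are hypothetical.

```python
import random
from bisect import bisect_left

def empirical_inverse_cdf_sampler(values, probs):
    """Inverse-CDF sampling from a preset empirical distribution:
    a uniform u in [0, 1) is mapped to the first value whose cumulative
    probability reaches u."""
    cum, total = [], 0.0
    for p in probs:
        total += p
        cum.append(total)
    def sample():
        u = random.random()                 # uniform on [0, 1)
        return values[bisect_left(cum, u)]  # invert the step CDF
    return sample

# hypothetical empirical distribution of a casting quality outcome
grades = ["accept", "rework", "scrap"]
sample = empirical_inverse_cdf_sampler(grades, [0.7, 0.2, 0.1])
random.seed(1)
counts = {g: 0 for g in grades}
for _ in range(10000):
    counts[sample()] += 1
print(counts)
```

With enough draws, the empirical frequencies of the simulated outcomes reproduce the preset probabilities, which is what makes the approach usable for forecasting production quality.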
Comparison of gradient methods for gain tuning of a PD controller applied on a quadrotor system
Kim, Jinho; Wilkerson, Stephen A.; Gadsden, S. Andrew
2016-05-01
Many mechanical and electrical systems utilize the proportional-integral-derivative (PID) control strategy. The concept of PID control is a classical approach, but it is easy to implement and yields very good tracking performance. Unmanned aerial vehicles (UAVs) are currently experiencing significant growth in popularity, and due to the advantages of PID controllers, UAVs are implementing them for improved stability and performance. An important consideration for such a system is the selection of the PID gain values in order to achieve a safe flight and a successful mission. There are a number of different algorithms that can be used for real-time tuning of the gains. This paper presents two algorithms for gain tuning, based on the method of steepest descent and on Newton's minimization of an objective function, and compares the results of applying these two gain-tuning algorithms in conjunction with a PD controller on a quadrotor system.
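The steepest-descent variant of such gain tuning can be sketched as follows. The plant below is a generic double integrator standing in for one quadrotor axis (not the paper's model), the objective is a sum-of-squared-error tracking cost, and the gradient is approximated by finite differences; step size and iteration count are arbitrary choices.

```python
import numpy as np

def tracking_cost(gains, target=1.0, dt=0.01, steps=300):
    """Sum-of-squared-error cost of a PD controller on a double-integrator
    plant (a crude stand-in for one quadrotor axis)."""
    kp, kd = gains
    x = v = 0.0
    cost = 0.0
    for _ in range(steps):
        e = target - x
        u = kp * e - kd * v       # PD law with derivative taken on the state
        v += u * dt               # semi-implicit Euler integration
        x += v * dt
        cost += e * e * dt
    return cost

def steepest_descent(f, g0, lr=0.2, iters=60, h=1e-4):
    """Steepest-descent gain tuning with a central finite-difference gradient."""
    g = np.array(g0, dtype=float)
    for _ in range(iters):
        grad = np.array([(f(g + h * e) - f(g - h * e)) / (2 * h)
                         for e in np.eye(len(g))])
        g -= lr * grad
    return g

g0 = [2.0, 1.0]                          # initial (kp, kd) guess
g_opt = steepest_descent(tracking_cost, g0)
print(tracking_cost(g0), tracking_cost(g_opt))
```

Newton's method would replace the fixed-step gradient update with a step scaled by the inverse Hessian, trading extra cost evaluations for faster convergence near the minimum.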
Adding randomness controlling parameters in GRASP method applied in school timetabling problem
Directory of Open Access Journals (Sweden)
Renato Santos Pereira
2017-09-01
Full Text Available This paper studies the influence of randomness controlling parameters (RCP) in the first stage of the GRASP method applied to a graph coloring problem, specifically the school timetabling problem of a public high school. The algorithm (with the inclusion of RCP) was based on critical variables identified through focus groups, whose weights can be adjusted by the user in order to meet institutional needs. The results of the computational experiment, with 11 years of data (66 observations) processed for the same high school, show that the inclusion of RCP significantly lowers the distance between initial solutions and local minima. The acceptance and use of the solutions found allow us to conclude that the modified GRASP, as constructed, can make a positive contribution to the timetabling problem of the school in question.
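A first-stage GRASP construction for graph coloring, with a single randomness-controlling parameter, can be sketched as below. Here `alpha` controls the size of the restricted candidate list (alpha = 0 is purely greedy, alpha = 1 purely random); the paper's school-specific weighted criteria are not reproduced, and the tiny conflict graph is invented for illustration.

```python
import random

def grasp_coloring(adj, alpha=0.3, seed=0):
    """First-stage GRASP construction for graph colouring: at each step a
    vertex is drawn from a restricted candidate list (RCL) built from a
    greedy criterion (saturation degree), then given the lowest feasible
    colour. alpha is the randomness-controlling parameter."""
    random.seed(seed)
    colors = {}
    uncolored = set(adj)
    while uncolored:
        # greedy value: number of distinct colours among coloured neighbours
        sat = {v: len({colors[u] for u in adj[v] if u in colors})
               for v in uncolored}
        best, worst = max(sat.values()), min(sat.values())
        threshold = best - alpha * (best - worst)
        rcl = [v for v in uncolored if sat[v] >= threshold]
        v = random.choice(rcl)
        used = {colors[u] for u in adj[v] if u in colors}
        colors[v] = min(c for c in range(len(adj)) if c not in used)
        uncolored.remove(v)
    return colors

# tiny timetable conflict graph: edges join lessons sharing a teacher/class
adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2]}
colors = grasp_coloring(adj)
print(colors)
```

Colours correspond to timetable slots; running the construction repeatedly with different seeds yields the diverse initial solutions that GRASP's local-search stage then refines.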
Applied methods and techniques for mechatronic systems modelling, identification and control
Zhu, Quanmin; Cheng, Lei; Wang, Yongji; Zhao, Dongya
2014-01-01
Applied Methods and Techniques for Mechatronic Systems brings together the relevant studies in mechatronic systems with the latest research from interdisciplinary theoretical studies, computational algorithm development and exemplary applications. Readers can easily tailor the techniques in this book to accommodate their ad hoc applications. The clear structure of each paper (background, motivation, quantitative development with equations, and case studies/illustrations/tutorials with curves, tables, etc.) is also helpful. It is mainly aimed at graduate students, professors and academic researchers in related fields, but it will also be helpful to engineers and scientists from industry. Lei Liu is a lecturer at Huazhong University of Science and Technology (HUST), China; Quanmin Zhu is a professor at University of the West of England, UK; Lei Cheng is an associate professor at Wuhan University of Science and Technology, China; Yongji Wang is a professor at HUST; Dongya Zhao is an associate professor at China University o...
Applied methods for mitigation of damage by stress corrosion in BWR type reactors
International Nuclear Information System (INIS)
Hernandez C, R.; Diaz S, A.; Gachuz M, M.; Arganis J, C.
1998-01-01
The Boiling Water nuclear Reactors (BWR) have presented stress corrosion problems, mainly in components and pipes of the primary system, with negative impacts on the performance of power generation plants as well as increased radiation exposure of the personnel involved. This problem has driven the development of research programs aimed at finding alternative solutions for controlling the phenomenon. Among the most relevant results, control of the reactor water chemistry stands out, particularly of the impurity concentration and the oxidation of radiolysis products, as well as supervision of materials selection and the reduction of stress levels. The present work presents the methods which can be applied to diminish stress corrosion problems in BWR reactors. (Author)
An implicit LU scheme for the Euler equations applied to arbitrary cascades. [new method of factoring]
Buratynski, E. K.; Caughey, D. A.
1984-01-01
An implicit scheme for solving the Euler equations is derived and demonstrated. The alternating-direction implicit (ADI) technique is modified, using two implicit-operator factors corresponding to lower-block-diagonal (L) or upper-block-diagonal (U) algebraic systems which can be easily inverted. The resulting LU scheme is implemented in finite-volume mode and applied to 2D subsonic and transonic cascade flows with differing degrees of geometric complexity. The results are presented graphically and found to be in good agreement with those of other numerical and analytical approaches. The LU method is also 2.0-3.4 times faster than ADI, suggesting its value in calculating 3D problems.
International Nuclear Information System (INIS)
Klose, G.
1999-01-01
Lyotropic mesophases possess lattice dimensions of the order of magnitude of the length of their molecules. Consequently, the first Bragg reflections of such systems appear at small scattering angles (small-angle scattering). A combination of scattering and NMR methods was applied to study structural properties of POPC/C12En mixtures. In general, the ranges of existence of the liquid crystalline lamellar phase, the dimension of the unit cell of the lamellae, and important structural parameters of the lipid and surfactant molecules in the mixed bilayers were determined. The POPC/C12E4 bilayer thus represents one of the best structurally characterized mixed model membranes. It is a good starting system for studying the interrelation with other properties, e.g. dynamic or thermodynamic ones. (K.A.)
Applying RP-FDM Technology to Produce Prototype Castings Using the Investment Casting Method
Directory of Open Access Journals (Sweden)
Macků M.
2012-09-01
Full Text Available The research focused on the production of prototype castings, which is mapped out from the drawing documentation up to the production of the casting itself. The FDM method was applied for the production of the 3D pattern. The main objective was to find out what dimensional changes happened during the individual production stages: from printing the 3D pattern through silicone mould production, wax pattern casting, shell making, melting the wax out of the shells and drying, up to the production of the final casting itself. Five measurements of selected dimensions were made during production and were processed and evaluated mathematically. The results were a determination of shrinkage and a proposal of measures to maintain the dimensional stability of the final casting so as to meet the requirements specified by the customer.
Directory of Open Access Journals (Sweden)
Emer Bernal
2017-01-01
Full Text Available In this paper we present a method using fuzzy logic for dynamic parameter adaptation in the imperialist competitive algorithm, usually known by its acronym ICA. The ICA algorithm was first studied in its original form to find out how it works and which parameters have the most effect upon its results. Based on this study, several designs of fuzzy systems for dynamic adjustment of the ICA parameters are proposed. The experiments were performed on the basis of solving complex optimization problems, particularly benchmark mathematical functions. A comparison of the original imperialist competitive algorithm and our proposed fuzzy imperialist competitive algorithm was performed. In addition, the fuzzy ICA was compared with another metaheuristic using a statistical test to measure the advantage of the proposed fuzzy approach for dynamic parameter adaptation.
International Nuclear Information System (INIS)
Swiderska-Kowalczyk, M.; Gomez, F.J.; Martin, M.
1997-01-01
In aerosol research, aerosols of known size, shape, and density are highly desirable because most aerosol properties depend strongly on particle size. However, constant and reproducible generation of aerosol particles whose size and concentration can be easily controlled can be achieved only in laboratory-scale tests. In large-scale experiments, different generation methods for various elements and compounds have been applied. This work presents, in brief form, a review of applications of these methods in large-scale experiments on aerosol behaviour and source term. A description of the generation method and the transport conditions of the generated aerosol is followed by the properties of the obtained aerosol, the aerosol instrumentation used, and the scheme of the aerosol generation system, wherever available. Information concerning the particular purposes of aerosol generation and reference number(s) is given at the end of each case. The methods reviewed are: evaporation-condensation, using furnace heating or a plasma torch; atomization of liquid, using compressed-air nebulizers, ultrasonic nebulizers, and atomization of liquid suspensions; and dispersion of powders. Among the projects included in this work are: ACE, LACE, GE Experiments, EPRI Experiments, LACE-Spain, UKAEA Experiments, BNWL Experiments, ORNL Experiments, MARVIKEN, SPARTA and DEMONA. The main chemical compounds studied are: Ba, Cs, CsOH, CsI, Ni, Cr, NaI, TeO2, UO2, Al2O3, Al2SiO5, B2O3, Cd, CdO, Fe2O3, MnO, SiO2, AgO, SnO2, Te, U3O8, BaO, CsCl, CsNO3, urania, RuO2, TiO2, Al(OH)3, BaSO4, Eu2O3 and Sn. (Author)
Non-Invasive Seismic Methods for Earthquake Site Classification Applied to Ontario Bridge Sites
Bilson Darko, A.; Molnar, S.; Sadrekarimi, A.
2017-12-01
How a site responds to earthquake shaking, and the corresponding damage, is largely influenced by the underlying ground conditions through which the shaking propagates. The effects of site conditions on propagating seismic waves can be predicted from measurements of the shear-wave velocity (Vs) of the soil layer(s) and the impedance ratio between bedrock and soil. Currently the seismic design of new buildings and bridges (2015 Canadian building and bridge codes) requires determination of the time-averaged shear-wave velocity of the upper 30 metres (Vs30) of a given site. In this study, two in situ Vs profiling methods, Multichannel Analysis of Surface Waves (MASW) and Ambient Vibration Array (AVA), are used to determine Vs30 at chosen bridge sites in Ontario, Canada. Both active-source (MASW) and passive-source (AVA) surface wave methods are used at each bridge site to obtain Rayleigh-wave phase velocities over a wide frequency bandwidth. The dispersion curve is jointly inverted with each site's amplification function (microtremor horizontal-to-vertical spectral ratio) to obtain shear-wave velocity profile(s). We apply our non-invasive testing at three major infrastructure projects, e.g., five bridge sites along the Rt. Hon. Herb Gray Parkway in Windsor, Ontario. Our non-invasive testing is co-located with previous invasive testing, including Standard Penetration Test (SPT), Cone Penetration Test and downhole Vs data. Correlations between SPT blowcount and Vs are developed for the different soil types sampled at our Ontario bridge sites. A robust earthquake site classification procedure (reliable Vs30 estimates) for bridge sites across Ontario is evaluated from available combinations of invasive and non-invasive site characterization methods.
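The Vs30 quantity used in the 2015 Canadian codes is the time-averaged velocity: 30 m divided by the vertical shear-wave travel time through the upper 30 m of the profile. A minimal sketch (the layer values below are illustrative, not the paper's data):

```python
def vs30(thicknesses_m, vs_mps):
    """Time-averaged shear-wave velocity of the upper 30 m.

    Vs30 = 30 / sum(d_i / Vs_i), with the deepest layer truncated so that
    exactly 30 m of profile contributes to the travel time.
    """
    depth, travel_time = 0.0, 0.0
    for d, v in zip(thicknesses_m, vs_mps):
        d_used = min(d, 30.0 - depth)   # truncate the layer crossing 30 m
        travel_time += d_used / v
        depth += d_used
        if depth >= 30.0:
            break
    if depth < 30.0:
        raise ValueError("profile shallower than 30 m")
    return 30.0 / travel_time

# hypothetical profile: 10 m soft soil over stiffer layers
v = vs30([10, 10, 20], [180, 360, 760])  # ~311 m/s
```

Note how the slow surface layer dominates: the harmonic-style average lands near 311 m/s even though two thirds of the profile is much stiffer, which is why Vs30 is sensitive to thin soft layers.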
Infrared thermography inspection methods applied to the target elements of W7-X Divertor
International Nuclear Information System (INIS)
Missirlian, M.; Durocher, A.; Schlosser, J.; Farjon, J.-L.; Vignal, N.; Traxler, H.; Schedler, B.; Boscary, J.
2006-01-01
As the heat exhaust capability and lifetime of plasma-facing components (PFC) during in-situ operation are linked to manufacturing quality, a set of non-destructive tests must be performed during the R&D and manufacturing phases. Within this framework, advanced non-destructive examination (NDE) methods are one of the key issues, both to achieve a high level of quality and reliability of the joining techniques in the production of high-heat-flux components and to develop and build successfully the PFCs for the next generation of fusion devices. Two NDE infrared thermographic approaches, recently applied to the qualification of the CFC target elements of the W7-X divertor during the first series production, are discussed in this paper. The first one, developed by CEA (SATIR facility) and used successfully for the control of the mass-produced actively cooled PFCs on Tore Supra, is based on transient thermography, where the testing protocol consists of inducing a thermal transient within the heat sink structure by an alternating hot/cold water flow. The second one, recently developed by PLANSEE (ARGUS facility), is based on pulsed thermography, where the component is heated externally by a single powerful flash of light. Results obtained in qualification experiments performed during the first series production of W7-X divertor components, representing about thirty mock-ups with artificial and manufacturing defects, demonstrated the capabilities of these two methods and raised the efficiency of inspection to a level appropriate for industrial application. This comparative study, together with a cross-checking analysis between the high heat flux performance tests and these infrared thermography inspection methods, showed good reproducibility and allowed a detection limit specific to each method to be set. Finally, the detectability of relevant defects showed excellent coincidence with thermal images obtained from high heat flux
Directory of Open Access Journals (Sweden)
Vitor Souza Martins
2017-03-01
Full Text Available Satellite data provide the only viable means for extensive monitoring of remote and large freshwater systems, such as the Amazon floodplain lakes. However, an accurate atmospheric correction is required to retrieve water constituents based on surface water reflectance (RW). In this paper, we assessed three atmospheric correction methods (Second Simulation of a Satellite Signal in the Solar Spectrum (6SV), ACOLITE and Sen2Cor) applied to an image acquired by the MultiSpectral Instrument (MSI) on board the European Space Agency's Sentinel-2A platform, using concurrent in-situ measurements over four Amazon floodplain lakes in Brazil. In addition, we evaluated the correction of forest adjacency effects based on the linear spectral unmixing model, and performed a temporal evaluation of atmospheric constituents from Multi-Angle Implementation of Atmospheric Correction (MAIAC) products. The validation of MAIAC aerosol optical depth (AOD) indicated satisfactory retrievals over the Amazon region, with correlation coefficients (R) of ~0.7 and 0.85 for Terra and Aqua products, respectively. The seasonal distribution of cloud cover and AOD revealed a contrast between the first and second half of the year in the study area. Furthermore, simulation of top-of-atmosphere (TOA) reflectance showed a critical contribution of atmospheric effects (>50%) to all spectral bands, especially the deep blue (92%-96%) and blue (84%-92%) bands. The atmospheric correction results for the visible bands illustrate the limitation of the methods over dark lakes (RW < 1%), and a better match of the RW shape with in-situ measurements over turbid lakes, although the accuracy varied depending on the spectral bands and methods. Particularly above 705 nm, RW was highly affected by Amazon forest adjacency, and the proposed adjacency effect correction minimized the spectral distortions in RW (RMSE < 0.006). Finally, an extensive validation of the methods is required for
Simulation methods to estimate design power: an overview for applied research.
Arnold, Benjamin F; Hogan, Daniel R; Colford, John M; Hubbard, Alan E
2011-06-20
Estimating the required sample size and statistical power for a study is an integral part of study design. For standard designs, power equations provide an efficient solution to the problem, but they are unavailable for many complex study designs that arise in practice. For such complex study designs, computer simulation is a useful alternative for estimating study power. Although this approach is well known among statisticians, in our experience many epidemiologists and social scientists are unfamiliar with the technique. This article aims to address this knowledge gap. We review an approach to estimate study power for individual- or cluster-randomized designs using computer simulation. This flexible approach arises naturally from the model used to derive conventional power equations, but extends those methods to accommodate arbitrarily complex designs. The method is universally applicable to a broad range of designs and outcomes, and we present the material in a way that is approachable for quantitative, applied researchers. We illustrate the method using two examples (one simple, one complex) based on sanitation and nutritional interventions to improve child growth. We first show how simulation reproduces conventional power estimates for simple randomized designs over a broad range of sample scenarios to familiarize the reader with the approach. We then demonstrate how to extend the simulation approach to more complex designs. Finally, we discuss extensions to the examples in the article, and provide computer code to efficiently run the example simulations in both R and Stata. Simulation methods offer a flexible option to estimate statistical power for standard and non-traditional study designs and parameters of interest. The approach we have described is universally applicable for evaluating study designs used in epidemiologic and social science research.
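The article's own R and Stata code is not reproduced here, but the core simulation loop can be sketched for the simplest case it describes, a two-arm individually randomized design (a large-sample z-test stands in for whatever analysis model a given study would actually use):

```python
import random
import statistics

def simulated_power(n_per_arm, effect, sd, n_sims=2000, seed=1):
    """Estimate power of a two-arm randomized design by simulation.

    For each simulated trial, draw outcomes for the control and treatment
    arms, run a two-sample z-test (large-sample approximation), and record
    whether the null is rejected; power is the rejection fraction.
    """
    rng = random.Random(seed)
    rejections = 0
    for _ in range(n_sims):
        ctrl = [rng.gauss(0.0, sd) for _ in range(n_per_arm)]
        trt = [rng.gauss(effect, sd) for _ in range(n_per_arm)]
        se = (statistics.variance(ctrl) / n_per_arm
              + statistics.variance(trt) / n_per_arm) ** 0.5
        z = (statistics.mean(trt) - statistics.mean(ctrl)) / se
        if abs(z) > 1.96:  # two-sided test at alpha = 0.05
            rejections += 1
    return rejections / n_sims

# standardized effect 0.5 with 64 per arm: power should be near 0.80
power = simulated_power(n_per_arm=64, effect=0.5, sd=1.0)
```

The appeal of this approach, as the abstract notes, is that the data-generating step and the analysis step can each be replaced with arbitrarily complex versions (clustering, covariates, non-normal outcomes) while the outer loop stays the same.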
An Efficient Combined Meta-Heuristic Algorithm for Solving the Traveling Salesman Problem
Directory of Open Access Journals (Sweden)
Majid Yousefikhoshbakht
2016-08-01
Full Text Available The traveling salesman problem (TSP) is one of the most important NP-hard problems and probably the most famous and extensively studied problem in the field of combinatorial optimization. In this problem, a salesman is required to visit each of n given nodes once and only once, starting from any node and returning to the original place of departure. This paper presents an efficient evolutionary optimization algorithm, developed by combining the imperialist competitive algorithm and the Lin-Kernighan algorithm, called MICALK, in order to solve the TSP. The MICALK is tested on 44 TSP instances with 24 to 1655 nodes from the literature, and 26 best known solutions of the benchmark problems are also found by our algorithm. Furthermore, the performance of MICALK is compared with several metaheuristic algorithms, including GA, BA, IBA, ICA, GSAP, ABO, PSO and BCO, on 32 instances from TSPLIB. The results indicate that MICALK performs well and is quite competitive with the above algorithms.
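The Lin-Kernighan component is too involved for a short sketch, but the 2-opt move it generalizes (reverse a tour segment whenever that shortens the tour) is easy to show. This is an illustrative stand-in for the local-search half of such hybrids, not the MICALK implementation:

```python
import math

def tour_length(tour, pts):
    # total length including the closing edge back to the start
    return sum(math.dist(pts[tour[i - 1]], pts[tour[i]]) for i in range(len(tour)))

def two_opt(tour, pts):
    """Repeatedly reverse tour segments while that strictly shortens the tour."""
    improved = True
    while improved:
        improved = False
        for i in range(1, len(tour) - 1):
            for j in range(i + 1, len(tour) + 1):
                cand = tour[:i] + tour[i:j][::-1] + tour[j:]
                if tour_length(cand, pts) < tour_length(tour, pts) - 1e-12:
                    tour, improved = cand, True
    return tour

pts = [(0, 0), (0, 1), (1, 0), (1, 1)]      # unit square
best = two_opt([0, 1, 2, 3], pts)           # crossing tour -> perimeter tour
```

In an ICA/LK hybrid, a population-level metaheuristic proposes tours and a move-based local search like this polishes each one; the division of labor is what the abstract credits for the method's competitiveness.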
Directory of Open Access Journals (Sweden)
Mehmet Polat Saka
2013-01-01
Full Text Available The type of mathematical modeling selected for the optimum design problems of steel skeletal frames affects the size and mathematical complexity of the programming problem obtained. A survey of the structural optimization literature reveals that there are basically two types of design optimization formulation. In the first type, only the cross-sectional properties of frame members are taken as design variables. In such a formulation, when the values of the design variables change during design cycles, it becomes necessary to analyze the structure and update the response of the steel frame to the external loading. Structural analysis in this type is a complementary part of the design process. In the second type, joint coordinates are also treated as design variables in addition to the cross-sectional properties of members. Such a formulation eliminates the necessity of carrying out structural analysis in every design cycle. The values of the joint displacements are determined by the optimization techniques in addition to the cross-sectional properties. The structural optimization literature contains structural design algorithms that make use of both types of formulation. In this study a review is carried out of mathematical and metaheuristic algorithms, and the effect of the mathematical modeling on the efficiency of these algorithms is discussed.
A hybrid metaheuristic for the time-dependent vehicle routing problem with hard time windows
Directory of Open Access Journals (Sweden)
N. Rincon-Garcia
2017-01-01
Full Text Available This paper presents a hybrid metaheuristic algorithm to solve the time-dependent vehicle routing problem with hard time windows. Time-dependent travel times are influenced by the different congestion levels experienced throughout the day. Vehicle scheduling without consideration of congestion might lead to underestimation of travel times and consequently missed deliveries. The algorithm presented in this paper makes use of Large Neighbourhood Search approaches and Variable Neighbourhood Search techniques to guide the search. A first stage is specifically designed to reduce the number of vehicles required, searching in a space where time-window violations are penalized and reducing those penalties with Large Neighbourhood Search procedures. A second stage minimises travel distance and travel time in an 'always feasible' search space. Comparison of results with available test instances shows that the proposed algorithm obtains reductions in the number of vehicles (4.15%), travel distance (10.88%) and travel time (12.00%) compared to previous implementations, in reasonable time.
Efficient Metaheuristics for the Mixed Team Orienteering Problem with Time Windows
Directory of Open Access Journals (Sweden)
Damianos Gavalas
2016-01-01
Full Text Available Given a graph whose nodes and edges are associated with a profit, a visiting (or traversing) time and an admittance time window, the Mixed Team Orienteering Problem with Time Windows (MTOPTW) seeks a specific number of walks spanning a subset of nodes and edges of the graph so as to maximize the overall collected profit. The visit to the included nodes and edges should take place within their respective time windows, and the overall duration of each walk should be below a certain threshold. In this paper we introduce the MTOPTW, which can be used for modeling a realistic variant of the Tourist Trip Design Problem where the objective is the derivation of near-optimal multiple-day itineraries for tourists visiting a destination that features several points of interest (POIs) and scenic routes. Since the MTOPTW is an NP-hard problem, we propose the first metaheuristic approaches to tackle it. The effectiveness of our algorithms is validated through a number of experiments on POI and scenic route sets compiled from the city of Athens (Greece).
Directory of Open Access Journals (Sweden)
José F. Herbert-Acero
2014-01-01
Full Text Available This work presents a novel framework for the aerodynamic design and optimization of blades for small horizontal-axis wind turbines (WTs). The framework is based on a state-of-the-art blade element momentum model, which is complemented with the XFOIL 6.96 software in order to provide an estimate of the sectional blade aerodynamics. The framework considers an innovative nested-hybrid solution procedure based on two metaheuristics, the virtual gene genetic algorithm and the simulated annealing algorithm, to provide a near-optimal solution to the problem. The objective of the study is to maximize the aerodynamic efficiency of small WT (SWT) rotors for a wide range of operational conditions. The design variables are (1) the airfoil shape at the different blade span positions and the radial variation of the geometrical variables of (2) chord length, (3) twist angle, and (4) thickness along the blade span. A wind tunnel validation study of optimized rotors based on the NACA 4-digit airfoil series is presented. Based on the experimental data, improvements in terms of the aerodynamic efficiency, the cut-in wind speed, and the amount of material used during the manufacturing process were achieved. Recommendations for the aerodynamic design of SWT rotors are provided based on field experience.
International Nuclear Information System (INIS)
Haaland, D.M.; Easterling, R.G.; Vopicka, D.A.
1985-01-01
In an extension of earlier work, weighted multivariate least-squares methods of quantitative FT-IR analysis have been developed. A linear least-squares approximation to nonlinearities in the Beer-Lambert law is made by allowing the reference spectra to be a set of known mixtures. The incorporation of nonzero intercepts in the relation between absorbance and concentration further improves the approximation of nonlinearities while simultaneously accounting for nonzero spectral baselines. Pathlength variations are also accommodated in the analysis, and under certain conditions, unknown sample pathlengths can be determined. All spectral data are used to improve the precision and accuracy of the estimated concentrations. During the calibration phase of the analysis, pure-component spectra are estimated from the standard mixture spectra. These can be compared with the measured pure-component spectra to determine which vibrations exhibit nonlinear behavior. In the predictive phase of the analysis, the calculated spectra are used in our previous least-squares analysis to estimate sample component concentrations. These methods were applied to the analysis of the IR spectra of binary mixtures of esters. Even with severely overlapping spectral bands and nonlinearities in the Beer-Lambert law, the average relative error in the estimated concentrations was <1%.
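The calibration-then-prediction structure described here, least squares with nonzero intercepts on known-mixture spectra, can be sketched with synthetic data (the spectra and concentrations below are illustrative numbers, not the esters measured in the paper):

```python
import numpy as np

# Hypothetical calibration set: absorbance spectra of known binary mixtures.
# Rows = standards, columns = wavelengths. Beer-Lambert: A ≈ C @ K + baseline.
C = np.array([[0.2, 0.8], [0.5, 0.5], [0.8, 0.3], [0.3, 0.6]])  # concentrations
K_true = np.array([[1.0, 0.3, 0.1], [0.2, 0.9, 0.4]])           # pure spectra
A = C @ K_true + 0.05                                           # constant baseline

# Calibration: estimate pure-component spectra plus intercept by least squares.
C1 = np.hstack([C, np.ones((len(C), 1))])       # augment with intercept column
K_hat, *_ = np.linalg.lstsq(C1, A, rcond=None)  # rows: comp 1, comp 2, baseline

# Prediction: given a new spectrum, recover concentrations (and intercept weight).
a_new = np.array([0.6, 0.4]) @ K_true + 0.05
x_hat, *_ = np.linalg.lstsq(K_hat.T, a_new, rcond=None)
conc = x_hat[:2]  # estimated component concentrations, here ≈ [0.6, 0.4]
```

The intercept column plays the role of the nonzero baseline term in the abstract; weighting and pathlength handling would enter as row scalings of `C1` and `A` before the fit.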
Din, Tengku Noor Daimah Tengku; Jamayet, Nafij; Rajion, Zainul Ahmad; Luddin, Norhayati; Abdullah, Johari Yap; Abdullah, Abdul Manaf; Yahya, Suzana
2016-12-01
Facial defects are either congenital or caused by trauma or cancer, and most of them affect the person's appearance. Emotional pressure and low self-esteem are problems commonly related to patients with facial defects. To overcome these problems, a silicone prosthesis is designed to cover the defective part. This study describes the techniques used in designing and fabricating a facial prosthesis applying computer-aided design and manufacturing (CAD/CAM). The steps of fabricating the facial prosthesis were based on a patient case. The patient was diagnosed with Gorlin-Goltz syndrome and came to Hospital Universiti Sains Malaysia (HUSM) for a prosthesis. The 3D image of the patient was reconstructed from CT data using MIMICS software. Based on the 3D image, the intercanthal and zygomatic measurements of the patient were compared with available data in the database to find a suitable nose shape. The normal nose shape for the patient was retrieved from the nasal digital library. A mirror-imaging technique was used to mirror the facial part. The final design of the facial prosthesis, including eye, nose and cheek, was superimposed to view the result virtually. After the final design was confirmed, the mould design was created. The mould of the nasal prosthesis was printed using an Objet 3D printer. Silicone casting was done using the 3D-printed mould. The final prosthesis produced by the computer-aided method was acceptable for use in facial rehabilitation to provide better quality of life.
Applying the Weighted Horizontal Magnetic Gradient Method to a Simulated Flaring Active Region
Korsós, M. B.; Chatterjee, P.; Erdélyi, R.
2018-04-01
Here, we test the weighted horizontal magnetic gradient (WGM) as a flare precursor, introduced by Korsós et al., by applying it to a magnetohydrodynamic (MHD) simulation of solar-like flares. The preflare evolution of the WGM and the behavior of the distance parameter between the area-weighted barycenters of opposite-polarity sunspots at various heights are investigated in the simulated δ-type sunspot. Four flares emanated from this sunspot. We found the optimum heights above the photosphere where the flare precursors of the WGM method are identifiable prior to each flare. These optimum heights agree reasonably well with the heights of the occurrence of flares identified from the analysis of their thermal and ohmic heating signatures in the simulation. We also estimated the expected time of the flare onsets from the duration of the approaching-receding motion of the barycenters of opposite polarities before each single flare. The estimated onset time and the actual time of occurrence of each flare are in good agreement at the corresponding optimum heights. This numerical experiment further supports the use of flare precursors based on the WGM method.
International Nuclear Information System (INIS)
Okuno, Hiroshi; Fujine, Yukio; Asakura, Toshihide; Murazaki, Minoru; Koyama, Tomozo; Sakakibara, Tetsuro; Shibata, Atsuhiro
1999-03-01
The crystallization method is proposed for recovery of uranium from the dissolution liquid, making it possible to reduce the amount of material handled in the later stages of reprocessing used fast breeder reactor (FBR) fuels. This report studies possible safety problems accompanying the proposed method. The crystallization process was first defined within the whole reprocessing process, and the quantity and kind of treated fuel were specified. Possible problems, such as criticality, shielding, fire/explosion, and confinement, were then investigated, and the events that might induce accidental incidents were discussed. Criticality, above all other incidents, was further studied by considering an example of criticality control of the crystallization process. For the crystallization equipment in particular, evaluation models were set up for normal and accidental operating conditions. Related data were selected from the nuclear criticality safety handbooks. The theoretical densities of plutonium nitrates, which give basic and important information, were estimated in this report based on crystal structure data. The criticality limit of the crystallization equipment was calculated based on the above information. (author)
Method of moments as applied to arbitrarily shaped bounded nonlinear scatterers
Caorsi, Salvatore; Massa, Andrea; Pastorino, Matteo
1994-01-01
In this paper, we explore the possibility of applying the moment method to determine the electromagnetic field distributions inside three-dimensional bounded nonlinear dielectric objects of arbitrary shapes. The moment method has usually been employed to solve linear scattering problems. We start with an integral equation formulation, and derive a nonlinear system of algebraic equations that allows us to obtain an approximate solution for the harmonic vector components of the electric field. Preliminary results of some numerical simulations are reported.
[An experimental assessment of methods for applying intestinal sutures in intestinal obstruction].
Akhmadudinov, M G
1992-04-01
The results of various methods used in applying intestinal sutures in obturation were studied. Three series of experiments were conducted on 30 dogs: resection of the intestine after obstruction with the formation of anastomoses by means of a double-row suture (Albert-Schmieden-Lambert) in the first series (10 dogs), by a single-row suture after V. M. Mateshuk in the second series, and by a single-row stretching suture suggested by the author in the third series. The postoperative complications and the parameters of physical airtightness of the intestinal anastomosis were studied over time in the experimental animals. The results of the study: incompetence of the anastomosis sutures occurred in 6 animals in the first series, 4 in the second, and 1 in the third. Adhesions occurred in all animals of the first and second series and in 2 of the third series. Six dogs of the first series died, 4 of the second, and 1 of the third. Study of the dynamics of the results showed a direct connection between the complications and the parameters of physical airtightness of the anastomosis, and between the latter and the method of intestinal suture. Relatively better results were noted when the anastomosis was formed by means of our suggested continuous stretching suture passed through the serous, muscular, and submucous coats of the intestine.
Chemometric methods and near-infrared spectroscopy applied to bioenergy production
International Nuclear Information System (INIS)
Liebmann, B.
2010-01-01
data analysis (i) successfully determine the concentrations of moisture, protein, and starch in the feedstock material as well as glucose, ethanol, glycerol, lactic acid, and acetic acid in the processed bioethanol broths; and (ii) allow quantifying a complex biofuel property such as the heating value. At the third stage, this thesis focuses on new chemometric methods that improve the mathematical analysis of multivariate data such as NIR spectra. The newly developed method 'repeated double cross validation' (rdCV) separates the optimization of regression models from tests of model performance; furthermore, rdCV estimates the variability of the model performance based on a large number of prediction errors from test samples. The rdCV procedure has been applied to both classical PLS regression and the robust 'partial robust M' regression method, which can handle erroneous data. The peculiar and relatively unknown 'random projection' method is tested for its potential for dimensionality reduction of data from chemometrics and chemoinformatics. The main findings are: (i) rdCV fosters a realistic assessment of model performance, (ii) robust regression has outstanding performance for data containing outliers and is thus strongly recommendable, and (iii) random projection is a useful niche application for high-dimensional data combined with possible restrictions on data storage and computing time. The three chemometric methods described are available as functions for the free software R. (author) [de]
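The rdCV idea, an inner loop for model tuning nested inside an outer loop for error estimation, repeated over reshuffled splits to expose the variability of the error estimate, can be sketched with scikit-learn's nested cross-validation in place of the authors' R functions (illustrative data and parameter grid; not the thesis's exact procedure):

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import Ridge
from sklearn.model_selection import GridSearchCV, KFold, cross_val_score

# synthetic spectra-like regression data (stand-in for NIR calibration sets)
X, y = make_regression(n_samples=120, n_features=30, noise=5.0, random_state=0)

rmse_per_rep = []
for rep in range(5):  # repetitions with reshuffled splits
    inner = KFold(5, shuffle=True, random_state=rep)        # tunes the model
    outer = KFold(4, shuffle=True, random_state=100 + rep)  # estimates error
    model = GridSearchCV(Ridge(), {"alpha": [0.01, 0.1, 1.0, 10.0]},
                         cv=inner, scoring="neg_root_mean_squared_error")
    scores = cross_val_score(model, X, y, cv=outer,
                             scoring="neg_root_mean_squared_error")
    rmse_per_rep.append(-scores.mean())

# the spread of rmse_per_rep is the variability rdCV is designed to expose
```

Because tuning only ever sees inner-loop folds, the outer-loop errors are honest test-set errors, which is the "realistic assessment of model performance" the abstract credits to rdCV.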
Applying a weighted random forests method to extract karst sinkholes from LiDAR data
Zhu, Junfeng; Pierskalla, William P.
2016-02-01
Detailed mapping of sinkholes provides critical information for mitigating sinkhole hazards and understanding groundwater and surface water interactions in karst terrains. LiDAR (Light Detection and Ranging) measures the earth's surface at high resolution and high density and has shown great potential to drastically improve the locating and delineating of sinkholes. However, processing LiDAR data to extract sinkholes requires separating sinkholes from other depressions, which can be laborious because of the sheer number of depressions commonly generated from LiDAR data. In this study, we applied random forests, a machine learning method, to automatically separate sinkholes from other depressions in a karst region in central Kentucky. The sinkhole-extraction random forest was grown on a training dataset built from an area where LiDAR-derived depressions were manually classified through a visual inspection and field verification process. Based on the geometry of depressions, as well as natural and human factors related to sinkholes, 11 parameters were selected as predictive variables to form the dataset. Because the training dataset was imbalanced, with the majority of depressions being non-sinkholes, a weighted random forests method was used to improve the accuracy of predicting sinkholes. The weighted random forest achieved an average accuracy of 89.95% on the training dataset, demonstrating that the random forest can be an effective sinkhole classifier. Testing of the random forest in another area, however, resulted in moderate success, with an average accuracy rate of 73.96%. This study suggests that an automatic sinkhole extraction procedure like the random forest classifier can significantly reduce time and labor costs and make it more tractable to map sinkholes from LiDAR data over large areas. However, the random forests method cannot totally replace manual procedures such as visual inspection and field verification.
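One common way to realize a weighted random forest for imbalanced classes is the `class_weight` option of scikit-learn's random forest. The sketch below uses synthetic data with roughly the paper's setup (11 predictors, a minority "sinkhole" class); it illustrates the weighting idea, not the authors' exact procedure or data:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Synthetic imbalanced depression dataset: 11 geometric/contextual features,
# ~10% minority (sinkhole) class as a stand-in for LiDAR-derived depressions.
X, y = make_classification(n_samples=2000, n_features=11, n_informative=6,
                           weights=[0.9, 0.1], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# class_weight="balanced" reweights samples inversely to class frequency,
# one standard form of a "weighted random forest" for imbalanced data.
rf = RandomForestClassifier(n_estimators=200, class_weight="balanced",
                            random_state=0)
rf.fit(X_tr, y_tr)
acc = rf.score(X_te, y_te)
```

With heavy imbalance, overall accuracy alone can be misleading (a majority-class predictor already scores ~90% here), so in practice per-class recall or a confusion matrix should accompany the accuracy figures quoted in the abstract.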
International Nuclear Information System (INIS)
Zhang Huiqun
2009-01-01
Using a new system of coupled Riccati equations, a direct algebraic method that had been applied to obtain exact travelling wave solutions of some complex nonlinear equations is improved. The exact travelling wave solutions of the complex KdV equation, the Boussinesq equation, and the Klein-Gordon equation are then investigated using the improved method. The method presented in this paper can also be applied to construct exact travelling wave solutions for other nonlinear complex equations.
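As a numerical sanity check of the travelling-wave machinery (using the standard real KdV equation u_t + 6uu_x + u_xxx = 0 and its classic sech-squared soliton, not the complex variants treated in the paper), finite differences confirm that the exact solution satisfies the PDE:

```python
import math

def u(x, t, k=1.0):
    """Classic one-soliton solution u = 2k^2 sech^2(k(x - 4k^2 t))
    of the real KdV equation u_t + 6*u*u_x + u_xxx = 0."""
    s = 1.0 / math.cosh(k * (x - 4.0 * k * k * t))
    return 2.0 * k * k * s * s

def kdv_residual(x, t, h=5e-3):
    """PDE residual evaluated with central finite differences."""
    u_t = (u(x, t + h) - u(x, t - h)) / (2.0 * h)
    u_x = (u(x + h, t) - u(x - h, t)) / (2.0 * h)
    u_xxx = (-u(x - 2 * h, t) + 2 * u(x - h, t)
             - 2 * u(x + h, t) + u(x + 2 * h, t)) / (2.0 * h ** 3)
    return u_t + 6.0 * u(x, t) * u_x + u_xxx

# the residual vanishes everywhere up to O(h^2) discretisation error
residuals = [abs(kdv_residual(x, t))
             for x, t in [(0.0, 0.0), (0.5, 0.2), (-1.0, 0.3)]]
```

The same check, applied to a candidate solution produced by a direct algebraic method, is a cheap way to catch sign or coefficient errors before attempting a symbolic verification.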
Branney, Jonathan; Priego-Hernández, Jacqueline
2018-02-01
It is important for nurses to have a thorough understanding of the biosciences, such as pathophysiology, that underpin nursing care. These courses include content that can be difficult to learn. Team-based learning is emerging as a strategy for enhancing learning in nurse education because it promotes individual learning as well as learning in teams. In this study we sought to evaluate the use of team-based learning in the teaching of applied pathophysiology to undergraduate student nurses. A mixed-methods observational study was conducted. In a year-two undergraduate nursing applied pathophysiology module, circulatory shock was taught using Team-based Learning while all remaining topics were taught using traditional lectures. After the Team-based Learning intervention, the students were invited to complete the Team-based Learning Student Assessment Instrument, which measures accountability, preference and satisfaction with Team-based Learning. Students were also invited to focus group discussions to gain a more thorough understanding of their experience with Team-based Learning. Exam scores for answers to questions based on Team-based Learning-taught material were compared with those from lecture-taught material. Of the 197 students enrolled on the module, 167 (85% response rate) returned the instrument, the results from which indicated a favourable experience with Team-based Learning. Most students reported higher accountability (93%) and satisfaction (92%) with Team-based Learning. Lectures that promoted active learning were viewed as an important feature of the university experience, which may explain why 76% exhibited a preference for Team-based Learning. Most students wanted to make a meaningful contribution so as not to let down their team, and they saw a clear relevance between the Team-based Learning activities and their own experiences of teamwork in clinical practice. Exam scores on the question related to Team-based Learning-taught material were comparable to those
Lesellier, E; Mith, D; Dubrulle, I
2015-12-04
necessary, two-step gradient elution. The developed methods were then applied to real cosmetic samples to assess the method specificity, with regards to matrix interferences, and calibration curves were plotted to evaluate quantification. Besides, depending on the matrix and on the studied compounds, the importance of the detector type, UV or ELSD (evaporative light-scattering detection), and of the particle size of the stationary phase is discussed. Copyright © 2015 Elsevier B.V. All rights reserved.
Non-parametric order statistics method applied to uncertainty propagation in fuel rod calculations
International Nuclear Information System (INIS)
Arimescu, V.E.; Heins, L.
2001-01-01
method, which is computationally efficient, is presented for the evaluation of the global statement. It is proved that the expected fraction r of fuel rods exceeding a certain limit is equal to the (1-r)-quantile of the overall distribution of all possible values from all fuel rods. In this way, the problem is reduced to estimating a certain quantile of the overall distribution, and the same techniques used for a single-rod distribution can be applied again. A simplified test case was devised to verify and validate the methodology. The fuel code was replaced by a transfer function dependent on two input parameters. The function was chosen so that analytic results could be obtained for the distribution of the output. This offers a direct validation of the statistical procedure. A sensitivity study was also performed to analyze the effect of the sampling procedure, simple Monte Carlo versus Latin Hypercube Sampling, on the final outcome. The effect of the sample size on the accuracy and bias of the statistical results was studied as well, and the conclusion was reached that the results of the statistical methodology are typically conservative. Finally, an example of applying these statistical techniques to a PWR reload is presented, together with the improvements and new insights the statistical methodology brings to fuel rod design calculations. (author)
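The quantile-estimation step described above is usually carried out with Wilks-type order-statistics formulas; a minimal first-order sketch follows, with the customary 95%/95% licensing values used only for illustration.

```python
import math

def wilks_sample_size(gamma, beta):
    """First-order Wilks formula: smallest N with 1 - gamma**N >= beta,
    i.e. the probability that the maximum of N independent code runs
    exceeds the gamma-quantile of the output is at least beta."""
    return math.ceil(math.log(1.0 - beta) / math.log(gamma))

n_95_95 = wilks_sample_size(0.95, 0.95)   # 59 code runs
n_99_95 = wilks_sample_size(0.99, 0.95)   # 299 code runs
```

The attraction for fuel rod calculations is that N is independent of the number of uncertain inputs: 59 runs bound the 95th percentile with 95% confidence regardless of how many parameters are sampled.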
Sattarvand, Javad; Niemann-Delius, Christian
2013-03-01
The paper describes a new metaheuristic algorithm developed on the basis of Ant Colony Optimisation (ACO) and discusses its efficiency. To apply the ACO process to the mine planning problem, a series of variables is considered for each block as the pheromone trails that represent the desirability of the block for being the deepest point of the mine in that column for the given mining period. During implementation, several mine schedules are constructed in each iteration. The pheromone values of all blocks are then reduced by a certain percentage, and the pheromone values of the blocks used in defining the constructed schedules are additionally increased according to the quality of the generated solutions. Over repeated iterations, the pheromone values of the blocks that define the shape of the optimum solution are increased, whereas those of the others are significantly evaporated.
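The evaporation-and-deposit cycle described above can be sketched as follows; the block keys, evaporation rate, and quality measure are illustrative assumptions, not the paper's actual parameterisation.

```python
def update_pheromones(tau, solutions, rho=0.1):
    """One ACO iteration: evaporate all trails, then reinforce the
    blocks used by the constructed schedules, proportionally to the
    quality of each schedule."""
    for block in tau:
        tau[block] *= (1.0 - rho)        # evaporation
    for schedule, quality in solutions:
        for block in schedule:
            tau[block] += quality        # deposit
    return tau

tau = {"b1": 1.0, "b2": 1.0, "b3": 1.0}
# two schedules constructed this iteration, with their (scaled) qualities
update_pheromones(tau, [({"b1", "b2"}, 0.5), ({"b1"}, 0.2)])
# b1 is reinforced twice, b2 once, b3 only evaporates
```

Repeating this cycle is what concentrates pheromone on the blocks that recur in good schedules while the trails of unused blocks decay geometrically.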
Directory of Open Access Journals (Sweden)
D.D. Lestiani
2011-08-01
Full Text Available Urbanization and industrial growth have deteriorated air quality and are major causes of air pollution. Air pollution through fine and ultra-fine particles is a serious threat to human health. The sources of air pollution must be known quantitatively, by elemental characterization, in order to design appropriate air quality management. Suitable methods for analysing airborne particulate matter, such as nuclear analytical techniques, are greatly needed to solve the air pollution problem. The objectives of this study are to apply nuclear analytical techniques to airborne particulate samples collected in Bandung, to assess the accuracy, and to ensure the reliability of the analytical results through a comparison of instrumental neutron activation analysis (INAA) and particle induced X-ray emission (PIXE). Particle samples in the PM2.5 and PM2.5-10 ranges were collected in Bandung twice a week for 24 hours using a Gent stacked filter unit. The results showed that there was generally a systematic difference between the INAA and PIXE results, in which the values obtained by PIXE were lower than those determined by INAA. INAA is generally more sensitive and reliable than PIXE for Na, Al, Cl, V, Mn, Fe, Br and I, therefore the INAA data are preferred, while PIXE usually gives better precision than INAA for Mg, K, Ca, Ti and Zn. Nevertheless, both techniques provide reliable results and complement each other. INAA is still a prospective method, while PIXE, with its special capabilities, is a promising tool that could contribute and compensate for the limitations of NAA in the determination of lead, sulphur and silicon. The combination of INAA and PIXE can advantageously be used in air pollution studies to extend the number of important elements measured as key elements in source apportionment.
New methods applied to the analysis and treatment of ovarian cancer
International Nuclear Information System (INIS)
Order, S.E.; Rosenshein, N.B.; Klein, J.L.; Lichter, A.S.; Ettinger, D.S.; Dillon, M.B.; Leibel, S.A.
1979-01-01
The development of rigorous staging methods, appreciation of new knowledge concerning ovarian cancer dissemination, and administration of new treatment techniques have been applied to ovarian cancer. The staging method consists of peritoneal cytology, total abdominal hysterectomy-bilateral salpingo-oophorectomy (TAH-BSO), omentectomy, nodal biopsy, and diaphragmatic inspection, and is coupled with maximal surgical resection. An additional examination being evaluated for usefulness in future staging is intraperitoneal 99mTc sulfur colloid scanning. Nineteen patients have entered the pilot studies. Sixteen patients (5 Stage 2, 10 Stage 3 micrometastatic, and 1 Stage 4) have been treated with colloidal 32P, i.p., followed 2 weeks later by split abdominal irradiation (200 rad fractions pelvis-2 hr rest-150 rad upper abdomen) to a total abdominal dose of 3000 rad with a pelvic cone down to 4000 rad. Five of these patients received phenylalanine mustard (L-PAM) (7 mg/m2) maintenance therapy. The 3-year actuarial survival was 78% and the 3-year disease-free actuarial survival 68%. Seven patients were treated with intraperitoneal tumor antisera and 4/7 remain in complete remission as of this writing. The specificity of the antiserum has been demonstrated by immunoelectrophoresis in 4/4 patients, and by live-cell fluorescence in 1 patient. Rabbit IgG levels revealed significantly increasing titers in 4/6 patients following i.p. antiovarian antiserum. Radiolabeled IgG derived from the antiserum demonstrated tumor localization and correlation with conventional radiography and computerized axial tomography (CAT) scans in the 2 patients studied to date. Biomarker analysis reveals that free secretory protein (6/6), alpha globulin (5/6), and CEA (carcinoembryonic antigen) (3/6) were elevated in the 6 patients studied. Two patients whose disease progressed demonstrated elevated levels of all three biomarkers
The Global Survey Method Applied to Ground-level Cosmic Ray Measurements
Belov, A.; Eroshenko, E.; Yanke, V.; Oleneva, V.; Abunin, A.; Abunina, M.; Papaioannou, A.; Mavromichalaki, H.
2018-04-01
The global survey method (GSM) technique unites simultaneous ground-level observations of cosmic rays in different locations and allows us to obtain the main characteristics of cosmic-ray variations outside of the atmosphere and magnetosphere of Earth. This technique has been developed and applied in numerous studies over many years by the Institute of Terrestrial Magnetism, Ionosphere and Radiowave Propagation (IZMIRAN). We here describe the IZMIRAN version of the GSM in detail. With this technique, the hourly data of the world-wide neutron-monitor network from July 1957 until December 2016 were processed, and further processing is enabled upon the receipt of new data. The result is a database of homogeneous and continuous hourly characteristics of the density variations (an isotropic part of the intensity) and the 3D vector of the cosmic-ray anisotropy. It includes all of the effects that could be identified in galactic cosmic-ray variations that were caused by large-scale disturbances of the interplanetary medium in more than 50 years. These results in turn became the basis for a database on Forbush effects and interplanetary disturbances. This database allows correlating various space-environment parameters (the characteristics of the Sun, the solar wind, et cetera) with cosmic-ray parameters and studying their interrelations. We also present features of the coupling coefficients for different neutron monitors that enable us to make a connection from ground-level measurements to primary cosmic-ray variations outside the atmosphere and the magnetosphere. We discuss the strengths and weaknesses of the current version of the GSM as well as further possible developments and improvements. The method developed allows us to minimize the problems of the neutron-monitor network, which are typical for experimental physics, and to considerably enhance its advantages.
International Nuclear Information System (INIS)
Lestiani, D.D.; Santoso, M.
2011-01-01
Urbanization and industrial growth have deteriorated air quality and are major causes of air pollution. Air pollution through fine and ultra-fine particles is a serious threat to human health. The sources of air pollution must be known quantitatively, by elemental characterization, in order to design appropriate air quality management. Suitable methods for analysing airborne particulate matter, such as nuclear analytical techniques, are greatly needed to solve the air pollution problem. The objectives of this study are to apply nuclear analytical techniques to airborne particulate samples collected in Bandung, to assess the accuracy, and to ensure the reliability of the analytical results through a comparison of instrumental neutron activation analysis (INAA) and particle induced X-ray emission (PIXE). Particle samples in the PM2.5 and PM2.5-10 ranges were collected in Bandung twice a week for 24 hours using a Gent stacked filter unit. The results showed that there was generally a systematic difference between the INAA and PIXE results, in which the values obtained by PIXE were lower than those determined by INAA. INAA is generally more sensitive and reliable than PIXE for Na, Al, Cl, V, Mn, Fe, Br and I, therefore the INAA data are preferred, while PIXE usually gives better precision than INAA for Mg, K, Ca, Ti and Zn. Nevertheless, both techniques provide reliable results and complement each other. INAA is still a prospective method, while PIXE, with its special capabilities, is a promising tool that could contribute and compensate for the limitations of NAA in the determination of lead, sulphur and silicon. The combination of INAA and PIXE can advantageously be used in air pollution studies to extend the number of important elements measured as key elements in source apportionment. (author)
Stochastic Methods Applied to Power System Operations with Renewable Energy: A Review
Energy Technology Data Exchange (ETDEWEB)
Zhou, Z. [Argonne National Lab. (ANL), Argonne, IL (United States); Liu, C. [Argonne National Lab. (ANL), Argonne, IL (United States); Electric Reliability Council of Texas (ERCOT), Austin, TX (United States); Botterud, A. [Argonne National Lab. (ANL), Argonne, IL (United States)
2016-08-01
Renewable energy resources have been rapidly integrated into power systems in many parts of the world, contributing to a cleaner and more sustainable supply of electricity. Wind and solar resources also introduce new challenges for system operations and planning in terms of economics and reliability because of their variability and uncertainty. Operational strategies based on stochastic optimization have been developed recently to address these challenges. In general terms, these stochastic strategies either embed uncertainties into the scheduling formulations (e.g., the unit commitment [UC] problem) in probabilistic forms or develop more appropriate operating reserve strategies to take advantage of advanced forecasting techniques. Other approaches to address uncertainty are also proposed, where operational feasibility is ensured within an uncertainty set of forecasting intervals. In this report, a comprehensive review is conducted to present the state of the art through Spring 2015 in the area of stochastic methods applied to power system operations with high penetration of renewable energy. Chapters 1 and 2 give a brief introduction and overview of power system and electricity market operations, as well as the impact of renewable energy and how this impact is typically considered in modeling tools. Chapter 3 reviews relevant literature on operating reserves and specifically probabilistic methods to estimate the need for system reserve requirements. Chapter 4 looks at stochastic programming formulations of the UC and economic dispatch (ED) problems, highlighting benefits reported in the literature as well as recent industry developments. Chapter 5 briefly introduces alternative formulations of UC under uncertainty, such as robust, chance-constrained, and interval programming. Finally, in Chapter 6, we conclude with the main observations from our review and important directions for future work.
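The core idea summarised above, embedding wind uncertainty in probabilistic form when scheduling, can be illustrated with a toy two-stage problem: commit conventional capacity now, buy expensive balancing energy after the wind is realised. All numbers below are invented for illustration.

```python
# Toy two-stage stochastic dispatch under wind uncertainty.
scenarios = [(0.3, 20.0), (0.5, 35.0), (0.2, 50.0)]  # (probability, wind MW)
demand, c_commit, c_balance = 100.0, 30.0, 80.0      # MW, $/MW, $/MW

def expected_cost(commit):
    """First-stage commitment cost plus expected second-stage recourse."""
    cost = c_commit * commit
    for prob, wind in scenarios:
        shortfall = max(0.0, demand - wind - commit)
        cost += prob * c_balance * shortfall         # balancing purchases
    return cost

# brute-force the first-stage decision over integer MW levels
best_cost, best_commit = min((expected_cost(c), c) for c in range(0, 101))
```

The optimum (65 MW here) hedges between scenarios: it covers the shortfall in all but the low-wind case, for which occasionally buying balancing energy is cheaper than committing more capacity up front. A real unit commitment problem adds binary on/off decisions and network constraints, which is where the stochastic programming formulations reviewed in Chapter 4 come in.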
International Nuclear Information System (INIS)
Nicolau-Rebigan, S.; Sporea, D.; Niculescu, V.I.R.
2000-01-01
The paper presents a holographic method applied in ionizing radiation dosimetry. Two types of holographic interferometry can be used: double-exposure holographic interferometry, or fast real-time holographic interferometry. The applications of holographic interferometry to ionizing radiation dosimetry are presented. The determination of the accurate value of the dose delivered by an ionizing radiation source (released energy per mass unit) is a complex problem which imposes different solutions depending on the experimental parameters; it is solved here with a double-exposure holographic interferometric method associated with an optoelectronic interface and a Z80 microprocessor. The method can determine the absorbed integral dose as well as the three-dimensional distribution of dose in a given volume. The paper presents some results obtained in radiation dosimetry. Original mathematical relations for the integral absorbed dose in irreversibly radiolyzing liquids were derived. Irradiation effects can be estimated from the displacement and density of the holographic fringes. To measure these parameters, the obtained holographic interferograms were picked up by a closed-circuit TV system in such a way that a selected TV line explores the picture along the direction of interest; using a specially designed interface, our Z80 microprocessor system captures data along the selected TV line. When the integral dose is to be measured, the microprocessor computes it from the information contained in the fringe distribution, according to the proposed formulae. The integral absorbed dose and the spatial dose distribution can be estimated with an accuracy better than 4%. Some advantages of this method are outlined in comparison with conventional methods in radiation dosimetry. The paper presents an original holographic set-up with an electronic interface, assisted by a Z80 microprocessor and used for nondestructive testing of transparent objects at the laser wavelength
A nuclear-medical method applied for determining the choledochus diameter after cholecystectomy
International Nuclear Information System (INIS)
Wolf, M.
1980-01-01
54 patients (46 females, 8 males) who had undergone cholecystectomy at least 4 years earlier were followed up roentgenologically by infusion cholangiography and nuclear-medically by quantitative hepatobiliary functional scintiscanning (HBFS). The ROI method applied for HBFS permits recording time/activity curves above the liver parenchyma (A) and the porta of the liver (B). By subtracting curve A from curve B, with the scale in which A is incorporated in B, a curve B' results, indicating the flow volume through the porta of the liver. The quotient Q=maximum pulse A to B/maximum pulse B to B indicates the portion of the liver parenchyma in the porta curve. The quotient represents a measure for the total volume of the large bile ducts included in the region of the porta of the liver. The quantity 1-Q/Q was put in relation to the roentgenologically determined common bile duct diameters. Both quantities correlated well, with a correlation coefficient of r=-0.860. Thus, the choledochus diameter can be determined in a primarily functional examination with a precision of 2 mm, a degree which permits the detection of clinically relevant discharge malfunctions. It was not possible to detect peristalsis-dependent phenomena with a dosage of 4-5 mCi 99mTc-diethyl-IDA, an irradiation dose which was sufficient for answering the clinical questions and could be justified for the patients. (orig.) [de
A new method of identifying target groups for pronatalist policy applied to Australia.
Directory of Open Access Journals (Sweden)
Mengni Chen
Full Text Available A country's total fertility rate (TFR) depends on many factors. Attributing changes in TFR to changes of policy is difficult, as they could easily be correlated with changes in the unmeasured drivers of TFR. A case in point is Australia, where both pronatalist effort and TFR increased in lockstep from 2001 to 2008 and then decreased. The global financial crisis or other unobserved confounders might explain both the declining TFR and the reduced pronatalist incentives after 2008. Therefore, it is difficult to estimate causal effects of policy using econometric techniques. The aim of this study is to instead look at the structure of the population to identify which subgroups most influence TFR. Specifically, we build a stochastic model relating TFR to the fertility rates of various subgroups and calculate the elasticity of TFR with respect to each rate. For each subgroup, the ratio of its elasticity to its group size is used to evaluate the subgroup's potential cost effectiveness as a pronatalist target. In addition, we measure the historical stability of each group's fertility rate as a gauge of its propensity to change. Groups with a high effectiveness ratio and also a high propensity to change are natural policy targets. We applied this new method to Australian data on fertility rates broken down by parity, age and marital status. The results show that targeting parity 3+ is more cost-effective than targeting lower parities. This study contributes to the literature on pronatalist policies by investigating the targeting of policies, and generates important implications for formulating cost-effective policies.
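The elasticity-to-size ratio can be sketched under the simplifying assumption that TFR decomposes as a share-weighted sum of subgroup fertility rates; the shares and rates below are hypothetical, not the Australian data used in the study.

```python
# Hypothetical subgroups: (share of women, group fertility rate).
groups = {
    "parity 0":  (0.40, 0.8),
    "parity 1":  (0.25, 1.9),
    "parity 2":  (0.20, 2.3),
    "parity 3+": (0.15, 3.1),
}

tfr = sum(share * rate for share, rate in groups.values())

def effectiveness(name):
    """Elasticity of TFR w.r.t. the group's rate, per unit group size."""
    share, rate = groups[name]
    elasticity = share * rate / tfr     # d log(TFR) / d log(rate)
    return elasticity / share           # cost-effectiveness proxy

ranked = sorted(groups, key=effectiveness, reverse=True)
```

Under this linear decomposition the elasticities sum to one and the effectiveness ratio reduces to the group rate divided by TFR, so small, high-fertility groups such as parity 3+ rank first, in line with the study's conclusion.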
Multicriterial Hierarchy Methods Applied in Consumption Demand Analysis. The Case of Romania
Directory of Open Access Journals (Sweden)
Constantin Bob
2008-03-01
Full Text Available The basic information for computing the quantitative statistical indicators that characterize the demand for industrial products and services is collected by the national statistics organizations through a series of statistical surveys (most of them periodical and partial). The data source we used in the present paper is a statistical investigation organized by the National Institute of Statistics, the "Family budgets survey", which collects information regarding household composition, income, expenditure, consumption and other aspects of the population's living standard. In 2005, in Romania, a person spent on average 391.2 RON monthly, meaning about 115.1 euros, for purchasing consumed food products and beverages, as well as non-food products, services, investments and other taxes. 23% of this sum was spent on food products and beverages, 21.6% on non-food goods, and 18.1% on payment of different services. There is a discrepancy between the different development regions of Romania regarding the composition of total household expenditure. For this reason, in the present paper we applied statistical methods for ranking the various development regions of Romania, using the share of households' expenditure on categories of products and services as ranking criteria.
Bending stress modeling of dismountable furniture joints applied with a use of finite element method
Directory of Open Access Journals (Sweden)
Milan Šimek
2009-01-01
Full Text Available The presented work focuses on bending moment stress modeling of dismountable furniture joints using the Finite Element Method. The joints are created from Minifix and Rondorfix cams combined with non-glued wooden dowels. Laminated particleboard 18 mm in thickness is used as the connected material. These connectors are among the most widely applied kinds in the furniture industry for case furniture. All obtained results were compared with one another and with experimental testing by means of stiffness. The non-linear numerical model of the chosen joints was successfully created using the software Ansys Workbench. A detailed analysis of stress distribution in the joint was achieved with non-linear numerical simulation. The relationship between numerical simulation and experimental testing was shown by comparing stiffness tangents. The numerical simulation of RTA joint loads also demonstrated the important role of the non-glued dowels in the tested joints. The low strength of particleboard in tension parallel to the surface (internal bond) is most likely the cause of joint failure. The results are applicable to the strength design of furniture with the aid of Computer Aided Engineering.
Commissioning methods applied to the Hunterston 'B' AGR operator training simulator
International Nuclear Information System (INIS)
Hacking, D.
1985-01-01
The Hunterston 'B' full scope AGR Simulator, built for the South of Scotland Electricity Board by Marconi Instruments, encompasses all systems under direct and indirect control of the Hunterston central control room operators. The resulting breadth and depth of simulation together with the specification for the real time implementation of a large number of highly interactive detailed plant models leads to the classic problem of identifying acceptance and acceptability criteria. For example, whilst the ultimate criterion for acceptability must clearly be that within the context of the training requirement the simulator should be indistinguishable from the actual plant, far more measurable (i.e. less subjective) statements are required if a formal contractual acceptance condition is to be achieved. Within the framework, individual models and processes can have radically different acceptance requirements which therefore reflect on the commissioning approach applied. This paper discusses the application of a combination of quality assurance methods, design code results, plant data, theoretical analysis and operator 'feel' in the commissioning of the Hunterston 'B' AGR Operator Training Simulator. (author)
N. Brouard; J.-M. Robine; E. Cambois
1999-01-01
Cambois (Emmanuelle), Robine (Jean-Marie), Brouard (Nicolas).- Life Expectancies Applied to Specific Statuses: A History of the Indicators and the Methods of Calculation. Indicators of life expectancy applied to specific statuses, such as the state of health or professional status, were introduced at the end of the 1930s and are currently the object of renewed interest. Because they relate mortality to different domains (health, professional activity), applied life expectancies reflect simultan...
Directory of Open Access Journals (Sweden)
Antonio Augusto Chaves
2007-08-01
Full Text Available The Prize Collecting Traveling Salesman Problem (PCTSP) can be associated with a salesman who collects a prize in each city visited and pays a penalty for each city not visited, with travel costs among the cities. The objective is to minimize the sum of the travel costs and penalties, while including in the tour a sufficient number of cities to collect a preestablished minimum prize. This paper contributes the development of hybrid metaheuristics for the PCTSP, based on GRASP and variable neighborhood search methods (VNS/VND), to solve the PCTSP approximately. In order to validate the obtained solutions, a mathematical formulation is proposed to be solved by a commercial solver, aiming to find the optimal solution to the problem, this solver being applied to small instances. Computational results demonstrate the efficiency of the proposed approach, as much in relation to the quality of the final solution obtained as in relation to the execution time.
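A minimal GRASP sketch on a toy prize-collecting instance follows; the distances, prizes, penalties, and the simple swap local search (standing in for the full VNS/VND of the paper) are all illustrative assumptions.

```python
import random

# Toy instance: city 0 is the depot; visit enough cities to collect
# MIN_PRIZE, and pay a penalty for every skipped city.
N = 5
dist = {(i, j): abs(i - j) + 1 for i in range(N) for j in range(N) if i != j}
prize = [0, 4, 3, 6, 2]
penalty = [0, 2, 1, 5, 1]
MIN_PRIZE = 8

def cost(tour):
    travel = sum(dist[a, b] for a, b in zip(tour, tour[1:] + tour[:1]))
    return travel + sum(penalty[c] for c in range(N) if c not in tour)

def construct(rng, alpha=2):
    """Greedy randomized construction with a restricted candidate list."""
    tour, collected = [0], 0
    remaining = list(range(1, N))
    while collected < MIN_PRIZE:
        remaining.sort(key=lambda c: dist[tour[-1], c])
        pick = rng.choice(remaining[:alpha])     # RCL of the alpha nearest
        remaining.remove(pick)
        tour.append(pick)
        collected += prize[pick]
    return tour

def local_search(tour):
    """First-improvement swaps between visited and skipped cities."""
    improved = True
    while improved:
        improved = False
        for i in range(1, len(tour)):
            for c in [c for c in range(1, N) if c not in tour]:
                cand = tour[:i] + [c] + tour[i + 1:]
                if (sum(prize[x] for x in cand) >= MIN_PRIZE
                        and cost(cand) < cost(tour)):
                    tour, improved = cand, True
    return tour

rng = random.Random(42)
best = min((local_search(construct(rng)) for _ in range(20)), key=cost)
```

Each GRASP iteration pairs a randomized construction with local descent, and the best tour over all restarts is kept; replacing the swap neighborhood with systematic neighborhood changes gives the VNS/VND hybrid of the paper.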
A Study of the Efficiency of Spatial Indexing Methods Applied to Large Astronomical Databases
Donaldson, Tom; Berriman, G. Bruce; Good, John; Shiao, Bernie
2018-01-01
Spatial indexing of astronomical databases generally uses quadrature methods, which partition the sky into cells used to create an index (usually a B-tree) written as a database column. We report the results of a study to compare the performance of two common indexing methods, HTM and HEALPix, on Solaris and Windows database servers installed with a PostgreSQL database, and a Windows Server installed with MS SQL Server. The indexing was applied to the 2MASS All-Sky Catalog and to the Hubble Source Catalog. On each server, the study compared indexing performance by submitting 1 million queries at each index level with random sky positions and a random cone-search radius, computed on a logarithmic scale between 1 arcsec and 1 degree, and measuring the time to complete the query and write the output. These simulated queries, intended to model realistic use patterns, were run in a uniform way on many combinations of indexing method and indexing level. The query times in all simulations are strongly I/O-bound and are linear with the number of records returned for large numbers of sources. There are, however, considerable differences between simulations, which reveal that hardware I/O throughput is a more important factor in managing the performance of a DBMS than the choice of indexing scheme. The choice of index itself is relatively unimportant: for comparable index levels, the performance is consistent within the scatter of the timings. At small index levels (large cells; e.g. level 4, cell size 3.7 deg), there is large scatter in the timings because of wide variations in the number of sources found in the cells. At larger index levels, performance improves and scatter decreases, but the improvement at level 8 (cell size 14 arcmin) and higher is masked to some extent in the timing scatter caused by the range of query sizes. At very high levels (20; 0.0004 arcsec), the granularity of the cells becomes so high that a large number of extraneous empty cells begin to degrade
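The cell-index-plus-cone-search pattern can be sketched with a naive equirectangular grid as a stand-in for HTM/HEALPix cells; this toy grid mishandles the RA wrap-around and the poles, which is one reason real surveys use proper quadrature schemes, and the catalog below is invented.

```python
import math
from collections import defaultdict

CELL_DEG = 1.0  # toy cell size; index level controls this in HTM/HEALPix

def cell_id(ra, dec):
    return (int(ra // CELL_DEG), int((dec + 90.0) // CELL_DEG))

def build_index(sources):
    """Bucket (ra, dec) sources by cell id, mimicking the indexed column."""
    index = defaultdict(list)
    for ra, dec in sources:
        index[cell_id(ra, dec)].append((ra, dec))
    return index

def ang_sep(ra1, dec1, ra2, dec2):
    """Angular separation in degrees (spherical law of cosines)."""
    r1, d1, r2, d2 = map(math.radians, (ra1, dec1, ra2, dec2))
    c = (math.sin(d1) * math.sin(d2)
         + math.cos(d1) * math.cos(d2) * math.cos(r1 - r2))
    return math.degrees(math.acos(max(-1.0, min(1.0, c))))

def cone_search(index, ra, dec, radius):
    """Scan only the cells that can intersect the cone, then filter."""
    span = int(radius // CELL_DEG) + 1
    ci, cj = cell_id(ra, dec)
    hits = []
    for di in range(-span, span + 1):
        for dj in range(-span, span + 1):
            for src in index.get((ci + di, cj + dj), []):
                if ang_sep(ra, dec, *src) <= radius:
                    hits.append(src)
    return hits

catalog = [(10.0, 0.0), (10.4, 0.2), (50.0, 0.0)]
hits = cone_search(build_index(catalog), 10.0, 0.0, 0.5)
```

The candidate-cell scan is what makes the query I/O-bound rather than CPU-bound: the cost is dominated by reading the rows in the touched cells, which is why the study found hardware throughput mattering more than the choice between HTM and HEALPix.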
Energy Technology Data Exchange (ETDEWEB)
Silva Neto, C.A. da [Universidade Federal Fluminense (UFF), Niteroi, RJ (Brazil). Inst. de Computacao], e-mail: cneto@ic.uff.br; Schilling, M.T. [Universidade Federal Fluminense (UFF), Niteroi, RJ (Brazil)], E-mail: schilling@ic.uff.br; Souza, J.C.S. [Universidade Federal Fluminense (UFF), Niteroi, RJ (Brazil). Programa de Pos-Graduacao em Computacao], E-mail: julio@ic.uff.br
2009-07-01
The paper presents aspects of the combined use of complete electromechanical simulations and metaheuristics in order to increase the operational security of electric power systems. The index that measures the level of security, and consequently the fitness of each candidate solution, is the damping level of the voltage oscillations. Complete electromechanical simulations allow a more accurate representation of the elements of the grid, resulting in a more reliable diagnosis. Metaheuristics possess a high degree of generality, enabling their application to highly complex optimization problems such as the maximization of the attenuation level of the voltage oscillations that occur in a power system due to a defect in the net. Given the unprecedented nature of this methodology, two different metaheuristics are investigated: one based on an evolutionary algorithm and the other on particle swarm.
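The particle swarm variant can be sketched minimally; the quadratic surrogate objective stands in for the (expensive, simulation-based) damping criterion, and the swarm parameters are standard textbook values, not those of the paper.

```python
import random

def pso(f, dim, n=20, iters=100, w=0.7, c1=1.5, c2=1.5, seed=1):
    """Minimise f over R^dim with a textbook particle swarm."""
    rng = random.Random(seed)
    pos = [[rng.uniform(-5.0, 5.0) for _ in range(dim)] for _ in range(n)]
    vel = [[0.0] * dim for _ in range(n)]
    pbest = [p[:] for p in pos]                 # personal bests
    pbest_f = [f(p) for p in pos]
    gi = min(range(n), key=lambda i: pbest_f[i])
    g, g_f = pbest[gi][:], pbest_f[gi]          # global best
    for _ in range(iters):
        for i in range(n):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (g[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            fi = f(pos[i])
            if fi < pbest_f[i]:
                pbest[i], pbest_f[i] = pos[i][:], fi
                if fi < g_f:
                    g, g_f = pos[i][:], fi
    return g, g_f

# surrogate objective standing in for "negative damping level"
best, val = pso(lambda x: sum(v * v for v in x), dim=2)
```

In the paper's setting each evaluation of f would run a complete electromechanical simulation, so the swarm size and iteration budget, cheap here, become the dominant cost.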
Directory of Open Access Journals (Sweden)
M. Omidvari
2015-09-01
Full Text Available Introduction: Occupational accidents are among the main issues in industry, and identifying their main root causes is necessary for their control. Several models have been proposed for determining the root causes of accidents. FTA is one of the most widely used models, as it can establish the root causes of accidents graphically. Handling the non-linear functions involved is one of the main challenges in applying FTA, and meta-heuristic algorithms can be used to obtain exact values. Material and Method: The present research was done in the power plant industry during the construction phase. In this study, a pattern for the analysis of human error in work-related accidents was provided by combining neural network algorithms with the FTA analytical model. Finally, using this pattern, the potential rate of all causes was determined. Result: The results showed that training, age, and non-compliance with safety principles in the workplace were the most important factors influencing human error in occupational accidents. Conclusion: According to the obtained results, it can be concluded that human errors can be greatly reduced by training, the right choice of workers with regard to the type of occupation, and the provision of appropriate safety conditions in the workplace.
Ayvaz, M. Tamer
2007-11-01
This study proposes an inverse solution algorithm through which both the aquifer parameters and the zone structure of these parameters can be determined based on a given set of observations on piezometric heads. In the zone structure identification problem, the fuzzy c-means (FCM) clustering method is used. The association of the zone structure with the transmissivity distribution is accomplished through an optimization model. The meta-heuristic harmony search (HS) algorithm, which is conceptualized using the musical process of searching for a perfect state of harmony, is used as an optimization technique. The optimum parameter zone structure is identified based on three criteria: the residual error, parameter uncertainty, and structure discrimination. A numerical example given in the literature is solved to demonstrate the performance of the proposed algorithm. Also, a sensitivity analysis is performed to test the performance of the HS algorithm for different sets of solution parameters. Results indicate that the proposed solution algorithm is effective for the simultaneous identification of aquifer parameters and their corresponding zone structures.
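The harmony search loop itself is compact. Below is a minimal, generic HS sketch in Python (illustrative only: a toy quadratic stands in for the paper's residual-error criterion, and all parameter values are assumptions, not the authors' settings):

```python
import random

def harmony_search(f, bounds, hms=10, hmcr=0.9, par=0.3, bw=0.1,
                   iters=5000, seed=1):
    """Minimal harmony search for box-constrained minimization (a sketch).

    f      : objective to minimize
    bounds : list of (lo, hi) per dimension
    hms    : harmony memory size; hmcr/par/bw are the usual HS parameters
    """
    rng = random.Random(seed)
    memory = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(hms)]
    memory.sort(key=f)
    for _ in range(iters):
        new = []
        for d, (lo, hi) in enumerate(bounds):
            if rng.random() < hmcr:                 # draw from memory...
                x = rng.choice(memory)[d]
                if rng.random() < par:              # ...and maybe adjust pitch
                    x += rng.uniform(-bw, bw)
            else:                                   # or improvise randomly
                x = rng.uniform(lo, hi)
            new.append(min(hi, max(lo, x)))
        if f(new) < f(memory[-1]):                  # replace the worst harmony
            memory[-1] = new
            memory.sort(key=f)
    return memory[0]

# Toy quadratic standing in for the residual-error criterion:
best = harmony_search(lambda v: (v[0] - 3) ** 2 + (v[1] + 1) ** 2,
                      [(-10, 10), (-10, 10)])
print(best)   # should approach [3, -1]
```

In the paper's setting, the decision vector would encode zone transmissivities, and `f` would run the groundwater model and return the residual error against observed heads.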
Metaheuristics for the dynamic stochastic dial-a-ride problem with expected return transports.
Schilde, M; Doerner, K F; Hartl, R F
2011-12-01
The problem of transporting patients or elderly people has been widely studied in the literature and is usually modeled as a dial-a-ride problem (DARP). In this paper we analyze the corresponding problem arising in the daily operation of the Austrian Red Cross. This nongovernmental organization is the largest organization performing patient transportation in Austria. The aim is to design vehicle routes to serve partially dynamic transportation requests using a fixed vehicle fleet. Each request requires transportation from a patient's home location to a hospital (outbound request) or back home from the hospital (inbound request). Some of these requests are known in advance. Some requests are dynamic in the sense that they appear during the day without any prior information. Finally, some inbound requests are stochastic. More precisely, with a certain probability each outbound request causes a corresponding inbound request on the same day. Some stochastic information about these return transports is available from historical data. The purpose of this study is to investigate whether using this information in designing the routes has a significant positive effect on the solution quality. The problem is modeled as a dynamic stochastic dial-a-ride problem with expected return transports. We propose four different modifications of metaheuristic solution approaches for this problem. In detail, we test dynamic versions of variable neighborhood search (VNS) and stochastic VNS (S-VNS) as well as modified versions of the multiple plan approach (MPA) and the multiple scenario approach (MSA). Tests are performed using 12 sets of test instances based on a real road network. Various demand scenarios are generated based on the available real data. Results show that using the stochastic information on return transports leads to average improvements of around 15%. Moreover, improvements of up to 41% can be achieved for some test instances.
Directory of Open Access Journals (Sweden)
Gunar Boye
2015-06-01
Full Text Available The axial heat transfer coefficient during flow boiling of n-hexane was measured using infrared thermography to determine the axial wall temperature in three geometrically similar annular gaps with different widths (s = 1.5 mm, s = 1 mm, s = 0.5 mm). During the design and evaluation process, the methods of statistical experimental design were applied. The following factors/parameters were varied: the heat flux q̇ = 30–190 kW/m², the mass flux ṁ = 30–700 kg/(m²·s), the vapor quality ẋ = 0.2–0.7, and the subcooled inlet temperature T_U = 20–60 K. The test sections with gap widths of s = 1.5 mm and s = 1 mm had very similar heat transfer characteristics. The heat transfer coefficient increases significantly in the range of subcooled boiling, and after reaching a maximum at the transition to saturated flow boiling, it drops almost monotonically with increasing vapor quality. With a gap width of 0.5 mm, however, the heat transfer coefficient in the range of saturated flow boiling first has a downward trend and then increases at higher vapor qualities. For each test section, two correlations between the heat transfer coefficient and the operating parameters have been created. The comparison also shows a clear trend of an increasing heat transfer coefficient with increasing heat flux for the test sections with s = 1.5 mm and s = 1.0 mm, but with increasing vapor quality, this trend is reversed for the 0.5 mm test section.
A new method of identifying target groups for pronatalist policy applied to Australia
Chen, Mengni; Lloyd, Chris J.
2018-01-01
A country’s total fertility rate (TFR) depends on many factors. Attributing changes in TFR to changes of policy is difficult, as they could easily be correlated with changes in the unmeasured drivers of TFR. A case in point is Australia where both pronatalist effort and TFR increased in lock step from 2001 to 2008 and then decreased. The global financial crisis or other unobserved confounders might explain both the reducing TFR and pronatalist incentives after 2008. Therefore, it is difficult to estimate causal effects of policy using econometric techniques. The aim of this study is to instead look at the structure of the population to identify which subgroups most influence TFR. Specifically, we build a stochastic model relating TFR to the fertility rates of various subgroups and calculate elasticity of TFR with respect to each rate. For each subgroup, the ratio of its elasticity to its group size is used to evaluate the subgroup’s potential cost effectiveness as a pronatalist target. In addition, we measure the historical stability of group fertility rates, which measures propensity to change. Groups with a high effectiveness ratio and also high propensity to change are natural policy targets. We applied this new method to Australian data on fertility rates broken down by parity, age and marital status. The results show that targeting parity 3+ is more cost-effective than lower parities. This study contributes to the literature on pronatalist policies by investigating the targeting of policies, and generates important implications for formulating cost-effective policies. PMID:29425220
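The elasticity-to-size ratio driving the targeting argument can be made concrete under a simplified linear model of TFR (an illustrative assumption; the paper's stochastic model is richer, and the shares and rates below are invented, not the Australian data):

```python
def tfr_elasticities(groups):
    """Elasticity of TFR with respect to each subgroup's fertility rate.

    groups: dict name -> (population share w_g, fertility rate f_g).
    Under the illustrative model TFR = sum_g w_g * f_g, the elasticity is
    e_g = w_g * f_g / TFR, and e_g / w_g = f_g / TFR is the per-capita
    effectiveness ratio used to rank potential policy targets.
    """
    tfr = sum(w * f for w, f in groups.values())
    out = {}
    for name, (w, f) in groups.items():
        e = w * f / tfr
        out[name] = {"elasticity": e, "effectiveness": e / w}
    return out

# Hypothetical subgroup shares and rates, for illustration only:
res = tfr_elasticities({"parity 0-1": (0.55, 1.2),
                        "parity 2":   (0.30, 2.1),
                        "parity 3+":  (0.15, 3.4)})
for name, r in res.items():
    print(name, round(r["elasticity"], 3), round(r["effectiveness"], 3))
```

With these made-up numbers the smallest group (parity 3+) has the highest effectiveness ratio, which is the shape of the argument the paper makes for targeting higher parities.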
An implementation of the diagnosis method DYANA, applied to a combined heat-power device
Energy Technology Data Exchange (ETDEWEB)
Van der Neut, F.
1993-10-01
The development and implementation of the monitor-and-diagnosis method DYANA is presented. This implementation is applied to and tested on a combined heat and power generating device (CHP). The steps taken in realizing this implementation are evaluated in detail. In chapter two the theory behind DYANA is recapitulated. Attention is paid to the basic theory of diagnosis, and the steps of the path from this theory to the algorithm DYANA are revealed. These steps include the hierarchical approach, and explain the following features of DYANA: a) the use of best-first dynamic model zooming based on heuristics with respect to parsimony of the number of components within the diagnoses, b) the use of consistency of fault models with observations to focus on the most likely diagnoses, and c) the use of online diagnosis: the current set of diagnoses is incrementally updated after a new observation of the system is made. In chapter three the relevant aspects of the system to be diagnosed, the CHP, are dealt with in detail. An explanation is given of the broad working of the CHP, its hierarchical structure and mathematical representation are given, comments are made on CHP observation, and some possible forms of fault models are stated. In chapter four the pseudocode of the implementation developed for DYANA is presented. The pseudocode consists of two parts: the monitoring process (using numerical simulation) and the diagnostic process. The differences between the pseudocode and the actual implementation are mentioned. The CHP is then monitored and diagnosed with this algorithm, and the results of this test are given in chapter five. An actual implementation of DYANA can be found in a separately supplied appendix, the Programme Appendix. The implementation of the monitoring process is meant only for this example of the CHP. The code for the diagnostic process can be easily adjusted for diagnosing other devices, such as electronic circuits. The language is Pascal.
Proposal of inspection method of radiation protection applied to nuclear medicine establishments
International Nuclear Information System (INIS)
Mendes, Leopoldino da Cruz Gouveia
2003-01-01
The principal objective of this paper is to implement an impartial and efficient inspection method, ensuring correct and safe use of ionizing radiation in the field of Nuclear Medicine. The Radiological Protection Model was tested in 113 Nuclear Medicine Services (NMS) all over the country, on a biannual analysis frequency (1996, 1998, 2000 and 2002). The data sheet comprised general information about the structure of the NMS and a technical approach. In the analytical process, a methodology of assigning different importance levels to each of the 82 features was adopted, based on the risk factors stated in the CNEN NE standards and in the IAEA recommendations as well. From this point of view, whenever a feature fails to meet one of the rules above, it corresponds to a radioprotection fault and is assigned a grade. The sum of those grades classified the NMS into one of three ranges, as follows: operating without restriction (100 points and below); operating with restriction (between 100 and 300 points); temporary shutdown (300 points and above). Permission for the second group to carry on operating is tied to a defined and restricted period of time (six to twelve months), assumed long enough for the NMS to solve the problems, a new evaluation being proceeded then. The NMSs classified in the third group may go back into operation only when they fulfill all the pending radioprotection requirements. Meanwhile, until the next regular evaluation, a multiplication factor 2^n was applied to the recalcitrant NMSs, where n is the number of unresolved occurrences. The previous establishment of those radioprotection items, with their respective grades, excluded subjective and personal values from the judgement and technical evaluation of the institutions. (author)
Directory of Open Access Journals (Sweden)
Eneko Osaba
2016-12-01
Full Text Available This paper aims to give a presentation of the PhD defended by Eneko Osaba on November 16th, 2015, at the University of Deusto. The thesis can be placed in the field of artificial intelligence. Specifically, it is related to multi-population meta-heuristics for solving vehicle routing problems. The dissertation was held in the main auditorium of the University, in a publicly open presentation. After the presentation, Eneko was awarded the highest grade (cum laude). Additionally, Eneko obtained the PhD award granted by the Basque Government through.
Estimation Methods for Infinite-Dimensional Systems Applied to the Hemodynamic Response in the Brain
Belkhatir, Zehor
2018-05-01
Infinite-Dimensional Systems (IDSs), made tractable by recent advances in mathematical and computational tools, can be used to model complex real phenomena. However, due to physical, economic, or stringent non-invasive constraints on real systems, the underlying characteristics for mathematical models in general (and IDSs in particular) are often missing or subject to uncertainty. Therefore, developing efficient estimation techniques to extract missing pieces of information from available measurements is essential. The human brain is an example of an IDS with severe constraints on information collection from controlled experiments and invasive sensors. Investigating the intriguing modeling potential of the brain is, in fact, the main motivation for this work. Here, we will characterize the hemodynamic behavior of the brain using functional magnetic resonance imaging data. In this regard, we propose efficient estimation methods for two classes of IDSs, namely Partial Differential Equations (PDEs) and Fractional Differential Equations (FDEs). This work is divided into two parts. The first part addresses the joint estimation problem of the state, parameters, and input for a coupled second-order hyperbolic PDE and an infinite-dimensional ordinary differential equation using sampled-in-space measurements. Two estimation techniques are proposed: a Kalman-based algorithm that relies on a reduced finite-dimensional model of the IDS, and an infinite-dimensional adaptive estimator whose convergence proof is based on the Lyapunov approach. We study and discuss the identifiability of the unknown variables for both cases. The second part contributes to the development of estimation methods for FDEs where major challenges arise in estimating fractional differentiation orders and non-smooth pointwise inputs. First, we propose a fractional high-order sliding mode observer to jointly estimate the pseudo-state and input of commensurate FDEs. Second, we propose a
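The reduced-model Kalman idea can be illustrated on a deliberately tiny analogue (all values hypothetical, far simpler than the PDE setting above): a scalar discrete-time system with an unknown constant input, where augmenting the state lets a standard linear Kalman filter estimate state and input jointly:

```python
import math

def kf_joint(measurements, a=0.9, q=1e-6, r=0.1):
    """Joint state/input estimation for x[k+1] = a*x[k] + u, with u unknown
    but constant, via the augmented state z = [x, u] and a linear Kalman
    filter. Measurements are y[k] = x[k] + noise (H = [1, 0])."""
    z = [0.0, 0.0]                       # estimate of [x, u]
    P = [[1.0, 0.0], [0.0, 1.0]]         # estimate covariance
    for y in measurements:
        # Predict: z <- F z, P <- F P F^T + Q, with F = [[a, 1], [0, 1]]
        z = [a * z[0] + z[1], z[1]]
        P = [[a * a * P[0][0] + a * (P[0][1] + P[1][0]) + P[1][1] + q,
              a * P[0][1] + P[1][1]],
             [a * P[1][0] + P[1][1],
              P[1][1] + q]]
        # Update: standard Kalman gain for H = [1, 0]
        s = P[0][0] + r
        k0, k1 = P[0][0] / s, P[1][0] / s
        innov = y - z[0]
        z = [z[0] + k0 * innov, z[1] + k1 * innov]
        P = [[(1 - k0) * P[0][0], (1 - k0) * P[0][1]],
             [P[1][0] - k1 * P[0][0], P[1][1] - k1 * P[0][1]]]
    return z

# Simulated data: true input u = 2.0; a deterministic ripple replaces
# random noise so the example is reproducible.
ys, x = [], 0.0
for k in range(200):
    x = 0.9 * x + 2.0
    ys.append(x + 0.05 * math.sin(3.7 * k))
x_hat, u_hat = kf_joint(ys)
print(round(x_hat, 2), round(u_hat, 2))   # near the true x = 20, u = 2
```

The augmented pair (state, unknown input) is observable here, which is what makes joint estimation possible; the identifiability analysis in the thesis plays the analogous role for the PDE model.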
Higher Order, Hybrid BEM/FEM Methods Applied to Antenna Modeling
Fink, P. W.; Wilton, D. R.; Dobbins, J. A.
2002-01-01
In this presentation, the authors address topics relevant to higher order modeling using hybrid BEM/FEM formulations. The first of these is the limitation on convergence rates imposed by geometric modeling errors in the analysis of scattering by a dielectric sphere. The second topic is the application of an Incomplete LU Threshold (ILUT) preconditioner to solve the linear system resulting from the BEM/FEM formulation. The final topic is the application of the higher order BEM/FEM formulation to antenna modeling problems. The authors have previously presented work on the benefits of higher order modeling. To achieve these benefits, special attention is required in the integration of singular and near-singular terms arising in the surface integral equation. Several methods for handling these terms have been presented. It is also well known that achieving the high rates of convergence afforded by higher order bases may also require the employment of higher order geometry models. A number of publications have described the use of quadratic elements to model curved surfaces. The authors have shown in an EFIE formulation, applied to scattering by a PEC sphere, that quadratic order elements may be insufficient to prevent the domination of modeling errors. In fact, on a PEC sphere with radius r = 0.58 Lambda(sub 0), a quartic order geometry representation was required to obtain a convergence benefit from quadratic bases when compared to the convergence rate achieved with linear bases. Initial trials indicate that, for a dielectric sphere of the same radius, requirements on the geometry model are not as severe as for the PEC sphere. The authors will present convergence results for higher order bases as a function of the geometry model order in the hybrid BEM/FEM formulation applied to dielectric spheres. It is well known that the system matrix resulting from the hybrid BEM/FEM formulation is ill-conditioned. For many real applications, a good preconditioner is required
Storberg-Walker, Julia; Chermack, Thomas J.
2007-01-01
The purpose of this article is to describe four methods for completing the conceptual development phase of theory building research for single or multiparadigm research. The four methods selected for this review are (1) Weick's method of "theorizing as disciplined imagination" (1989); (2) Whetten's method of "modeling as theorizing" (2002); (3)…
Directory of Open Access Journals (Sweden)
Antonio Costa
2014-07-01
Full Text Available Production processes in Cellular Manufacturing Systems (CMS) often involve groups of parts sharing the same technological requirements in terms of tooling and setup. The issue of scheduling such parts through a flow-shop production layout is known as the Flow-Shop Group Scheduling (FSGS) problem or, when setup times are sequence-dependent, the Flow-Shop Sequence-Dependent Group Scheduling (FSDGS) problem. This paper addresses the FSDGS issue, proposing a hybrid metaheuristic procedure integrating features from Genetic Algorithms (GAs) and Biased Random Sampling (BRS) search techniques with the aim of minimizing the total flow time, i.e., the sum of completion times of all jobs. A well-known benchmark of test cases, entailing problems with two, three, and six machines, is employed both for tuning the relevant parameters of the developed procedure and for assessing its performance against two metaheuristic algorithms recently presented in the literature. The obtained results and a properly arranged ANOVA analysis highlight the superiority of the proposed approach in tackling the scheduling problem under investigation.
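For readers unfamiliar with the setting, the objective and a bare-bones metaheuristic can be sketched as follows. This is a plain permutation GA with order crossover, not the paper's GA+BRS hybrid; group setup times are omitted and the instance data are invented:

```python
import random

def total_flow_time(seq, proc):
    """Sum of completion times on the last machine of a permutation flow
    shop. proc[m][j] = processing time of job j on machine m."""
    done = [0.0] * len(proc)
    total = 0.0
    for j in seq:
        for m in range(len(proc)):
            prev = done[m - 1] if m else 0.0     # this job's finish upstream
            done[m] = max(done[m], prev) + proc[m][j]
        total += done[-1]
    return total

def ga_flowshop(proc, pop_size=30, gens=200, seed=7):
    """Plain permutation GA: truncation selection, order crossover, swap
    mutation. The paper's hybrid instead biases parent sampling toward
    better solutions (BRS)."""
    rng = random.Random(seed)
    n = len(proc[0])
    fit = lambda s: total_flow_time(s, proc)
    pop = [rng.sample(range(n), n) for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fit)
        parents = pop[:pop_size // 2]
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            c1, c2 = sorted(rng.sample(range(n), 2))
            mid = a[c1:c2]
            child = [g for g in b if g not in mid]
            child[c1:c1] = mid                       # order crossover (OX)
            if rng.random() < 0.2:                   # swap mutation
                i, j = rng.sample(range(n), 2)
                child[i], child[j] = child[j], child[i]
            children.append(child)
        pop = parents + children
    return min(pop, key=fit)

proc = [[3, 1, 4, 2],        # machine 1 times for jobs 0..3 (made up)
        [2, 5, 1, 3]]        # machine 2 times
best = ga_flowshop(proc)
print(best, total_flow_time(best, proc))
```

On this tiny instance the GA recovers the brute-force optimum; real FSDGS instances add sequence-dependent family setups to the completion-time recursion.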
Directory of Open Access Journals (Sweden)
Maryam Ashouri
2017-07-01
Full Text Available The vehicle routing problem (VRP) is an NP-hard combinatorial optimization problem in which given vehicles serve customers from central depots and return to their originating depots. Furthermore, two of the most important extensions of the VRP are the open vehicle routing problem (OVRP) and the VRP with simultaneous pickup and delivery (VRPSPD). In the OVRP, vehicles do not return to the depot after the last visit, and in the VRPSPD, customers require simultaneous delivery and pick-up service. The aim of this paper is to present a combined effective ant colony optimization (CEACO), which includes the sweep algorithm and several local search algorithms and thus differs from common ant colony optimization (ACO). An extensive numerical experiment is performed on benchmark problem instances addressed in the literature. The computational results show that the suggested CEACO approach not only presents very satisfying scalability but is also competitive with other meta-heuristic algorithms in the literature for solving VRP, OVRP and VRPSPD problems. Keywords: Meta-heuristic algorithms, Vehicle Routing Problem, Open Vehicle Routing Problem, Simultaneous Pickup and Delivery, Ant Colony Optimization.
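A stripped-down colony illustrates the pheromone/visibility loop at the core of any ACO router. The sweep decomposition and local searches that distinguish CEACO are omitted, and all parameters and coordinates below are illustrative:

```python
import math
import random

def aco_route(points, depot=0, ants=20, iters=100, rho=0.5, seed=3):
    """Tiny ant colony optimization for a single closed route (the TSP core
    of a VRP). Each ant builds a tour with probabilities proportional to
    pheromone * visibility^2; pheromone evaporates, then the best-so-far
    tour is reinforced."""
    rng = random.Random(seed)
    n = len(points)
    dist = [[math.dist(p, q) or 1e-9 for q in points] for p in points]
    tau = [[1.0] * n for _ in range(n)]
    best, best_len = None, float("inf")
    for _ in range(iters):
        for _ in range(ants):
            route, free = [depot], set(range(n)) - {depot}
            while free:
                cur = route[-1]
                weights = [tau[cur][j] / dist[cur][j] ** 2 for j in free]
                route.append(rng.choices(list(free), weights)[0])
                free.discard(route[-1])
            length = sum(dist[route[i]][route[i + 1]] for i in range(n - 1))
            length += dist[route[-1]][depot]        # close the tour
            if length < best_len:
                best, best_len = route, length
        tau = [[(1 - rho) * t for t in row] for row in tau]   # evaporation
        for i in range(n):
            a, b = best[i], best[(i + 1) % n]
            tau[a][b] += 1.0 / best_len
            tau[b][a] += 1.0 / best_len
    return best, best_len

pts = [(0, 0), (1, 0), (1, 1), (0, 1), (2, 2)]   # depot plus 4 customers
route, length = aco_route(pts)
print(route, round(length, 3))
```

A full VRP layer would, as in the sweep step of CEACO, first cluster customers by polar angle around the depot into capacity-feasible groups and then run a colony like this per cluster.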
International Nuclear Information System (INIS)
Rijssel, Jos van; Kuipers, Bonny W.M.; Erné, Ben H.
2014-01-01
A numerical inversion method known from the analysis of light scattering by colloidal dispersions is now applied to magnetization curves of ferrofluids. The distribution of magnetic particle sizes or dipole moments is determined without assuming that the distribution is unimodal or of a particular shape. The inversion method enforces positive number densities via a non-negative least squares procedure. It is tested successfully on experimental and simulated data for ferrofluid samples with known multimodal size distributions. The created computer program MINORIM is made available on the web. - Highlights: • A method from light scattering is applied to analyze ferrofluid magnetization curves. • A magnetic size distribution is obtained without prior assumption of its shape. • The method is tested successfully on ferrofluids with a known size distribution. • The practical limits of the method are explored with simulated data including noise. • This method is implemented in the program MINORIM, freely available online
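The non-negativity constraint central to such inversions can be illustrated with a tiny projected-gradient NNLS solver (a sketch only; the cited program's actual solver and the magnetization kernel are not reproduced here, and Lawson–Hanson active-set methods are the usual production choice):

```python
def nnls_pg(A, y, iters=5000):
    """Non-negative least squares: min ||Ax - y||^2 subject to x >= 0,
    by projected gradient descent."""
    m, n = len(A), len(A[0])
    # Squared Frobenius norm bounds the Lipschitz constant of the gradient,
    # giving a safe (if conservative) step size.
    L = sum(A[i][j] ** 2 for i in range(m) for j in range(n)) or 1.0
    x = [0.0] * n
    for _ in range(iters):
        r = [sum(A[i][j] * x[j] for j in range(n)) - y[i] for i in range(m)]
        g = [sum(A[i][j] * r[i] for i in range(m)) for j in range(n)]
        x = [max(0.0, x[j] - g[j] / L) for j in range(n)]   # project to x >= 0
    return x

# Overdetermined toy system whose exact solution [1, 2] is non-negative:
A = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
y = [1.0, 2.0, 3.0]
x_hat = nnls_pg(A, y)
print([round(v, 3) for v in x_hat])   # -> [1.0, 2.0]
```

In the ferrofluid application, the columns of `A` would hold Langevin magnetization curves for candidate dipole moments, and `x` the non-negative number densities forming the size distribution.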
A global method for calculating plant CSR ecological strategies applied across biomes world-wide
Pierce, S.; Negreiros, D.; Cerabolini, B.E.L.; Kattge, J.; Díaz, S.; Kleyer, M.; Shipley, B.; Wright, S.J.; Soudzilovskaia, N.A.; Onipchenko, V.G.; van Bodegom, P.M.; Frenette-Dussault, C.; Weiher, E.; Pinho, B.X.; Cornelissen, J.H.C.; Grime, J.P.; Thompson, K.; Hunt, R.; Wilson, P.J.; Buffa, G.; Nyakunga, O.C.; Reich, P.B.; Caccianiga, M.; Mangili, F.; Ceriani, R.M.; Luzzaro, A.; Brusa, G.; Siefert, A.; Barbosa, N.P.U.; Chapin III, F.S.; Cornwell, W.K.; Fang, Jingyun; Wilson Fernandez, G.; Garnier, E.; Le Stradic, S.; Peñuelas, J.; Melo, F.P.L.; Slaviero, A.; Tabarrelli, M.; Tampucci, D.
2017-01-01
Competitor, stress-tolerator, ruderal (CSR) theory is a prominent plant functional strategy scheme previously applied to local floras. Globally, the wide geographic and phylogenetic coverage of available values of leaf area (LA), leaf dry matter content (LDMC) and specific leaf area (SLA)
Benthic microalgal production in the Arctic: Applied methods and status of the current database
DEFF Research Database (Denmark)
Glud, Ronnie Nøhr; Woelfel, Jana; Karsten, Ulf
2009-01-01
The current database on benthic microalgal production in Arctic waters comprises 10 peer-reviewed and three unpublished studies. Here, we compile and discuss these datasets, along with the measurement approaches applied. The latter is essential for robust comparative analysis and to clarify ...
Applying Item Response Theory Methods to Examine the Impact of Different Response Formats
Hohensinn, Christine; Kubinger, Klaus D.
2011-01-01
In aptitude and achievement tests, different response formats are usually used. A fundamental distinction must be made between the class of multiple-choice formats and the constructed response formats. Previous studies have examined the impact of different response formats applying traditional statistical approaches, but these influences can also…
Sica, R. J.; Haefele, A.; Jalali, A.; Gamage, S.; Farhani, G.
2018-04-01
The optimal estimation method (OEM) has a long history of use in passive remote sensing, but has only recently been applied to active instruments like lidar. The OEM's advantages over traditional techniques include obtaining a full systematic and random uncertainty budget, plus the ability to work with the raw measurements without first applying instrument corrections. In our meeting presentation we will show how to use the OEM for temperature and composition retrievals for Rayleigh-scatter, Raman-scatter and DIAL lidars.
An Integrated Start-Up Method for Pumped Storage Units Based on a Novel Artificial Sheep Algorithm
Directory of Open Access Journals (Sweden)
Zanbin Wang
2018-01-01
Full Text Available Pumped storage units (PSUs) are an important storage tool for power systems containing large-scale renewable energy, and the merit of rapid start-up enables PSUs to modulate and stabilize the power system. In this paper, PSU start-up strategies have been studied and a new integrated start-up method is proposed for the purpose of achieving swift and smooth start-up. A two-phase closed-loop start-up strategy, composed of switching Proportional-Integral (PI) and Proportional-Integral-Derivative (PID) controllers, is designed, and an integrated optimization scheme is proposed for synchronous optimization of the parameters in the strategy. To enhance the optimization performance, a novel meta-heuristic called the Artificial Sheep Algorithm (ASA) is proposed and applied to solve the optimization task, after sufficient verification against seven popular meta-heuristic algorithms on 13 typical benchmark functions. A simulation model was built for a Chinese PSU, and comparative experiments were conducted to evaluate the proposed integrated method. Results show that start-up performance is significantly improved on both the overshoot and start-up time indices, with time consumption reduced by up to 34% under different working conditions. The significant improvement in PSU start-up is promising for further application on real units.
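The two-phase switching idea can be sketched on a toy plant (everything below is hypothetical: a crude first-order model and hand-picked gains stand in for the PSU model and the ASA-tuned parameters):

```python
def startup_sim(kp1, ki1, kp2, ki2, kd2, switch_at=0.9, steps=600, dt=0.05):
    """Two-phase closed-loop start-up on a toy plant: a PI controller
    accelerates the unit toward rated speed (1.0 p.u.), then control
    switches to PID near the setpoint. Returns the speed trajectory."""
    speed, integ, prev_err = 0.0, 0.0, 1.0
    traj = []
    for _ in range(steps):
        err = 1.0 - speed
        integ += err * dt
        if speed < switch_at:                         # phase 1: PI
            u = kp1 * err + ki1 * integ
        else:                                         # phase 2: PID
            u = kp2 * err + ki2 * integ + kd2 * (err - prev_err) / dt
        prev_err = err
        speed += dt * (u - 0.5 * speed)               # crude first-order plant
        traj.append(speed)
    return traj

# Gains chosen by hand for this toy plant; the paper tunes both phases
# jointly with ASA against overshoot and start-up time.
traj = startup_sim(3.0, 1.0, 2.0, 1.0, 0.1)
print(round(traj[-1], 3), round(max(traj), 3))
```

An optimizer like ASA would treat the five gains as the decision vector and score each candidate by simulating exactly such a trajectory and measuring overshoot and settling time.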
Applying Item Response Theory methods to design a learning progression-based science assessment
Chen, Jing
Learning progressions are used to describe how students' understanding of a topic progresses over time and to classify the progress of students into steps or levels. This study applies Item Response Theory (IRT) based methods to investigate how to design learning progression-based science assessments. The research questions of this study are: (1) how to use items in different formats to classify students into levels on the learning progression, (2) how to design a test to give good information about students' progress through the learning progression of a particular construct and (3) what characteristics of test items support their use for assessing students' levels. Data used for this study were collected from 1500 elementary and secondary school students during 2009--2010. The written assessment was developed in several formats such as the Constructed Response (CR) items, Ordered Multiple Choice (OMC) and Multiple True or False (MTF) items. The following are the main findings from this study. The OMC, MTF and CR items might measure different components of the construct. A single construct explained most of the variance in students' performances. However, additional dimensions in terms of item format can explain a certain amount of the variance in student performance. So additional dimensions need to be considered when we want to capture the differences in students' performances on different types of items targeting the understanding of the same underlying progression. Items in each item format need to be improved in certain ways to classify students more accurately into the learning progression levels. This study establishes some general steps that can be followed to design other learning progression-based tests as well. For example, first, the boundaries between levels on the IRT scale can be defined by using the means of the item thresholds across a set of good items. Second, items in multiple formats can be selected to achieve the information criterion at all
Evaluation of Two Fitting Methods Applied for Thin-Layer Drying of Cape Gooseberry Fruits
Directory of Open Access Journals (Sweden)
Erkan Karacabey
Full Text Available ABSTRACT Drying data for cape gooseberry were used to compare two fitting methods, namely the 2-step and 1-step methods. Literature data were also used to confirm the results. To demonstrate the applicability of these methods, two primary models (Page, Two-term-exponential) were selected. A linear equation was used as the secondary model. As is well known from previous modelling studies on drying, the 2-step method requires at least two regressions: one for the primary model and one for the secondary model (if only one environmental condition, such as temperature, is varied). On the other hand, a single regression is enough for the 1-step method. Although previous studies on kinetic modelling of the drying of foods were based on the 2-step method, this study indicated that the 1-step method may also be a good alternative, with advantages such as producing an informative figure and reducing calculation time.
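To make the 1-step idea concrete: with the Page primary model and a linear secondary model, a log transform turns the whole problem into one multiple linear regression across all temperatures at once (a sketch on synthetic data; the parameter values are invented, not the cape gooseberry results):

```python
import math

def solve3(a, b):
    """Gauss-Jordan elimination with partial pivoting for a 3x3 system."""
    m = [row[:] + [bv] for row, bv in zip(a, b)]
    for c in range(3):
        p = max(range(c, 3), key=lambda r: abs(m[r][c]))
        m[c], m[p] = m[p], m[c]
        for r in range(3):
            if r != c:
                f = m[r][c] / m[c][c]
                m[r] = [v - f * w for v, w in zip(m[r], m[c])]
    return [m[i][3] / m[i][i] for i in range(3)]

def one_step_page_fit(data):
    """1-step fit of the Page model MR = exp(-k t^n) with the linear
    secondary model ln k = alpha + beta*T. The transform
    ln(-ln MR) = alpha + beta*T + n*ln t makes ALL temperatures a single
    multiple linear regression. data: (time, temperature, MR) tuples."""
    rows = [([1.0, T, math.log(t)], math.log(-math.log(mr)))
            for t, T, mr in data]
    xtx = [[sum(x[i] * x[j] for x, _ in rows) for j in range(3)]
           for i in range(3)]
    xty = [sum(x[i] * y for x, y in rows) for i in range(3)]
    return solve3(xtx, xty)   # -> [alpha, beta, n]

# Noise-free synthetic data generated with alpha=-4, beta=0.05, n=0.8:
data = [(t, T, math.exp(-math.exp(-4.0 + 0.05 * T) * t ** 0.8))
        for T in (50.0, 60.0, 70.0) for t in (10.0, 20.0, 40.0, 80.0)]
alpha, beta, n = one_step_page_fit(data)
print(round(alpha, 3), round(beta, 3), round(n, 3))   # -> -4.0 0.05 0.8
```

The 2-step alternative would first regress k (and n) per temperature, then regress those k values against T, compounding the estimation error of the first stage into the second.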
Linear, Transfinite and Weighted Method for Interpolation from Grid Lines Applied to OCT Images
DEFF Research Database (Denmark)
Lindberg, Anne-Sofie Wessel; Jørgensen, Thomas Martini; Dahl, Vedrana Andersen
2018-01-01
of a square grid, but are unknown inside each square. To view these values as an image, intensities need to be interpolated at regularly spaced pixel positions. In this paper we evaluate three methods for interpolation from grid lines: linear, transfinite and weighted. The linear method does not preserve...... and the stability of the linear method further away. An important parameter influencing the performance of the interpolation methods is the upsampling rate. We perform an extensive evaluation of the three interpolation methods across a range of upsampling rates. Our statistical analysis shows significant difference...... in the performance of the three methods. We find that the transfinite interpolation works well for small upsampling rates and the proposed weighted interpolation method performs very well for all upsampling rates typically used in practice. On the basis of these findings we propose an approach for combining two OCT...
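The transfinite method can be illustrated with a Coons patch on a unit square: values are known along the four edges (the grid lines) and interpolated inside. This is a generic sketch of the technique, not the paper's implementation:

```python
def coons(f_bottom, f_top, f_left, f_right, x, y):
    """Transfinite (Coons patch) interpolation on the unit square from
    values known only on the four edges. f_* are functions on [0, 1]
    giving edge values; they must agree at the shared corners."""
    blend_y = (1 - y) * f_bottom(x) + y * f_top(x)
    blend_x = (1 - x) * f_left(y) + x * f_right(y)
    corners = ((1 - x) * (1 - y) * f_bottom(0.0) + x * (1 - y) * f_bottom(1.0)
               + (1 - x) * y * f_top(0.0) + x * y * f_top(1.0))
    return blend_y + blend_x - corners

# Edges sampled from g(x, y) = x*y + x + y; a Coons patch reproduces any
# function of the form a + b*x + c*y + d*x*y exactly.
g = lambda x, y: x * y + x + y
val = coons(lambda x: g(x, 0.0), lambda x: g(x, 1.0),
            lambda y: g(0.0, y), lambda y: g(1.0, y), 0.3, 0.7)
print(round(val, 6))   # -> 1.21 (= g(0.3, 0.7))
```

Unlike plain bilinear interpolation from the four corners, the patch matches the measured data along entire grid lines, which is exactly the constraint in the OCT setting.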
International Nuclear Information System (INIS)
Poupeau, G.; Soliani Junior, E.
1988-01-01
This article discusses some applications of the 'nuclear tracks method' in geochronology, geochemistry and geophysics. In geochronology, after a brief presentation of the principles of 'fission track' dating and the kinds of geological events measurable by this method, some applications in metallogeny and petroleum geology are shown. In geochemistry, the applications of the 'fission tracks' method relate to mining prospecting and uranium prospecting. In geophysics, an important application is earthquake prediction, through continuous monitoring of Ra 222 emanations. (author) [pt
Purists need not apply: the case for pragmatism in mixed methods research.
Florczak, Kristine L
2014-10-01
The purpose of this column is to describe several different ways of conducting mixed methods research. The paradigms that underpin both qualitative and quantitative research are also considered, along with a cursory review of classical pragmatism as it relates to conducting mixed methods studies. Finally, the idea of loosely coupled systems is proposed as a means to support mixed methods studies, along with several caveats to researchers who desire to use this new way of obtaining knowledge. © The Author(s) 2014.
The development of a curved beam element model applied to finite elements method
International Nuclear Information System (INIS)
Bento Filho, A.
1980-01-01
A procedure for the evaluation of the stiffness matrix of a thick curved beam element is developed by means of the minimum potential energy principle, applied to finite elements. The displacement field is prescribed through polynomial expansions, and the interpolation model is determined by comparing results obtained with a sample of different expansions. As a limiting case of the curved beam, three cases of straight beams with different dimensional ratios are analysed, employing the proposed approach. Finally, an interpolation model is proposed and applied to a curved beam with large curvature. Displacements and internal stresses are determined and the results are compared with those found in the literature. (Author) [pt
Applying formal method to design of nuclear power plant embedded protection system
International Nuclear Information System (INIS)
Kim, Jin Hyun; Kim, Il Gon; Sung, Chang Hoon; Choi, Jin Young; Lee, Na Young
2001-01-01
A nuclear power embedded protection system is a typical safety-critical system, which detects failures and shuts down the operation of the nuclear reactor. Failures of these systems are so dangerous that safety and reliability are absolute requirements. Therefore, a nuclear power embedded protection system should fulfill verification and validation completely from the design stage. Various V&V methods have been provided for embedded system development, and design using formal methods in particular is being studied in advanced countries. In this paper, we introduce a design method for nuclear power embedded protection systems using various formal methods in various respects, following the nuclear power plant software development guideline
Researching and applying the MRSS method in fuel assembly mechanical design
International Nuclear Information System (INIS)
Li Jiwei; Zhou Yunqing; Liu Jiazheng; Tong Xing; Zheng Yixiong
2014-01-01
Tolerance analysis is an important part of the mechanical design of fuel assemblies. After introducing the MRSS method and the concept of process capability, the relation between the two is discussed. The applicability conditions of the MRSS method were established by calculating the protrusion of the outer strap spring of the grid. The results show that the MRSS method should be preferred for linear tolerance analysis of fuel assemblies with many dimensions, provided process capability is controlled and sensitivities and a modified factor are considered. With the MRSS method, the results can be accepted by both designers and manufacturers. (authors)
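The abstract does not give the paper's exact MRSS formulation; a common variant applies a correction factor (Bender's 1.5 is a frequent choice) to the root-sum-square stack, capped by the worst-case bound. A minimal sketch with hypothetical tolerances:

```python
import math

def stackup(tols, sens=None, cf=1.5):
    """Compare worst-case, RSS, and a modified-RSS (MRSS) linear stack.

    tols : symmetric tolerances t_i of each dimension in the chain
    sens : sensitivities s_i of each dimension, default all 1
    cf   : MRSS correction factor (Bender's 1.5, one common convention)
    """
    sens = sens or [1.0] * len(tols)
    wc = sum(abs(s) * t for s, t in zip(sens, tols))              # worst case
    rss = math.sqrt(sum((s * t) ** 2 for s, t in zip(sens, tols)))
    mrss = min(cf * rss, wc)  # MRSS never exceeds the worst-case bound
    return wc, rss, mrss

# Hypothetical 4-dimension chain (e.g., a spring-protrusion stack)
wc, rss, mrss = stackup([0.1, 0.1, 0.2, 0.05])
print(f"WC={wc:.3f}  RSS={rss:.3f}  MRSS={mrss:.3f}")
# WC=0.450  RSS=0.250  MRSS=0.375
```

MRSS sits between the optimistic RSS and the pessimistic worst case, which is why its acceptance depends on demonstrated process capability, as the abstract notes.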
Energy Technology Data Exchange (ETDEWEB)
Milivojevic, S [Institute of Nuclear Sciences Boris Kidric, Vinca, Beograd (Serbia and Montenegro)
1974-12-15
A probability method was chosen for analysing reactor system reliability; it is considered realistic since it is based on verified experimental data, and it is in essence a statistical method. The method developed takes into account the probability distributions of the permitted levels of the relevant parameters and their particular influence on the reliability of the system as a whole. The proposed method is rather general and was applied to the problem of thermal safety analysis of a reactor system. The analysis yields the basic properties of the system under different operating conditions, expressed in the form of probabilities, which show the reliability of the system as a whole as well as the reliability of each component.
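The core idea — each parameter has a permitted band and a probability of staying inside it, and the bands combine into a system reliability — can be sketched as follows. The parameter values, spreads, and the independence/normality assumptions here are hypothetical illustrations, not the paper's data:

```python
from math import erf, sqrt

def within_limit_prob(mean, sigma, low, high):
    """P(parameter stays inside its permitted band), assuming a normal spread."""
    phi = lambda z: 0.5 * (1 + erf(z / sqrt(2)))
    return phi((high - mean) / sigma) - phi((low - mean) / sigma)

# Hypothetical thermal parameters: (mean, sigma, permitted low, permitted high)
params = [
    (300.0, 5.0, 280.0, 320.0),   # e.g., a coolant temperature
    (12.0, 0.4, 10.5, 13.5),      # e.g., a mass flow rate
]

# Assuming independent parameters, system reliability is the product
# of the per-parameter probabilities.
reliability = 1.0
for p in params:
    reliability *= within_limit_prob(*p)
print(round(reliability, 4))
```

The per-factor terms also expose each component's individual contribution, matching the abstract's point that the method reports both whole-system and per-component reliability.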
An Analysis of Methods Section of Research Reports in Applied Linguistics
Patrícia Marcuzzo
2011-01-01
This work aims at identifying the analytical categories and research procedures adopted in the analysis of research articles in Applied Linguistics/EAP, in order to propose a systematization of research procedures in Genre Analysis. For that purpose, 12 research reports and interviews with four authors were analyzed. The analysis showed that the studies concentrate on investigating either the macrostructure or the microstructure of research articles in different fields. Studies about th...
Directory of Open Access Journals (Sweden)
Maliar E.I.
2010-11-01
Full Text Available The directions of professionally-applied physical preparation of students, with the prevailing use of football exercises, are considered, and the corresponding methods of professionally-applied physical preparation are presented. It is indicated that application of the circuit training method assists the development of discipline, honesty, and rational use of time. It is underlined that teaching should provide a short path to mastering the planned knowledge, abilities, and skills, and to improving physical qualities.
Statistical methods applied to gamma-ray spectroscopy algorithms in nuclear security missions.
Fagan, Deborah K; Robinson, Sean M; Runkle, Robert C
2012-10-01
Gamma-ray spectroscopy is a critical research and development priority for a range of nuclear security missions, specifically the interdiction of special nuclear material through the detection and identification of gamma-ray sources. We categorize existing methods by the statistical methods on which they rely and identify methods that have yet to be considered. Current methods estimate the effect of counting uncertainty but in many cases do not address larger sources of decision uncertainty, which may be significantly more complex. Thus, significantly improving algorithm performance may require greater coupling between the problem physics that drives data acquisition and the statistical methods that analyze such data. Untapped statistical methods, such as Bayesian model averaging and hierarchical and empirical Bayes methods, could reduce decision uncertainty by rigorously and comprehensively incorporating all sources of uncertainty. Application of such methods should further meet the needs of nuclear security missions by improving upon the existing numerical infrastructure for which these analyses have not been conducted. Copyright © 2012 Elsevier Ltd. All rights reserved.
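Bayesian model averaging, one of the "untapped" methods named above, can be sketched with the standard large-sample BIC approximation to posterior model probabilities. The BIC values below are hypothetical scores for three candidate source models of a spectrum, not results from the paper:

```python
import math

def bma_weights(bics):
    """Approximate posterior model probabilities from BIC scores
    (standard large-sample approximation; uniform model priors assumed)."""
    best = min(bics)
    raw = [math.exp(-0.5 * (b - best)) for b in bics]
    z = sum(raw)
    return [r / z for r in raw]

# Hypothetical BICs for three candidate source models of a gamma-ray spectrum
w = bma_weights([1012.3, 1010.1, 1018.7])
print([round(x, 3) for x in w])
```

Instead of committing to the single best-fitting model, downstream decisions (source present / absent, isotope identity) can then be averaged over models with these weights, which is exactly how BMA folds model uncertainty into decision uncertainty.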
International Nuclear Information System (INIS)
Zhang Huiqun
2009-01-01
By using some exact solutions of an auxiliary ordinary differential equation, a direct algebraic method is described for constructing exact complex solutions of nonlinear partial differential equations. The method is implemented for the NLS equation, a new Hamiltonian amplitude equation, the coupled Schrödinger-KdV equations, and the Hirota-Maccari equations. New exact complex solutions are obtained.
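The abstract's solutions are not reproduced here, but the final step of any such method — verifying that a candidate complex solution exactly satisfies the PDE — can be automated symbolically. As an illustration (using the standard focusing NLS normalization i u_t + u_xx/2 + |u|^2 u = 0 and its classic one-soliton solution, not the paper's new solutions):

```python
import sympy as sp

x, t = sp.symbols('x t', real=True)

# Candidate exact complex solution of i*u_t + u_xx/2 + |u|^2 u = 0:
# the classic one-soliton u = sech(x) * exp(i t / 2).
u = sp.sech(x) * sp.exp(sp.I * t / 2)

# Substitute into the PDE; |u|^2 u is written as u * conj(u) * u.
residual = (sp.I * sp.diff(u, t)
            + sp.diff(u, x, 2) / 2
            + u * sp.conjugate(u) * u)
print(sp.simplify(sp.expand(residual)))
```

A zero residual confirms the ansatz exactly, with the hyperbolic identity tanh² = 1 − sech² doing the cancellation that the auxiliary-ODE construction exploits.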
DEFF Research Database (Denmark)
Ommen, Torben Schmidt; Sigthorsson, Oskar; Elmegaard, Brian
2017-01-01
In order to investigate options for improving the maintenance protocol of commercial refrigeration plants, two thermoeconomic diagnosis methods were evaluated on a state-of-the-art refrigeration plant. A common relative indicator was proposed for the two methods in order to directly compare the q...
Heterogeneity among violence-exposed women: applying person-oriented research methods.
Nurius, Paula S; Macy, Rebecca J
2008-03-01
Variability of experience and outcomes among violence-exposed people poses considerable challenges to developing effective prevention and treatment protocols. To address these needs, the authors present an approach to research and a class of methodologies referred to as person oriented. Person-oriented tools support the assessment of meaningful patterns among people that distinguish one group from another, identifying subgroups for whom different interventions are indicated. The authors review the conceptual base of person-oriented methods, outline their distinction from more familiar variable-oriented methods, present descriptions of selected methods as well as empirical applications of person-oriented methods germane to violence exposure, and conclude with a discussion of implications for future research and translation between research and practice. The authors focus on violence against women as a population, drawing on stress and coping theory as a theoretical framework. However, person-oriented methods hold utility for investigating diversity among violence-exposed people's experiences and needs across populations and theoretical foundations.
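Person-oriented methods typically group whole profiles of people rather than correlating variables. As a minimal stand-in for that family (the article names the method class, not any specific algorithm, and the data below are hypothetical), a tiny k-means clustering over (stress, coping) profiles:

```python
def kmeans(points, k, iters=20):
    """Minimal k-means with deterministic init (first k points as seeds),
    illustrating how person-oriented analysis groups similar profiles."""
    centers = [tuple(p) for p in points[:k]]
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for p in points:
            j = min(range(k),
                    key=lambda i: sum((a - b) ** 2 for a, b in zip(p, centers[i])))
            groups[j].append(p)
        centers = [tuple(sum(c) / len(g) for c in zip(*g)) if g else centers[i]
                   for i, g in enumerate(groups)]
    return centers, groups

# Hypothetical (stress, coping) scores for eight respondents
pts = [(1, 1), (1, 2), (2, 1), (8, 8), (8, 9), (9, 8), (2, 2), (9, 9)]
centers, groups = kmeans(pts, 2)
print(sorted(len(g) for g in groups))  # [4, 4]
```

The recovered subgroups — not the marginal variables — become the units for which different interventions might be indicated, which is the contrast with variable-oriented analysis the abstract draws.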
Hallinen, Nicole R.; Chi, Min; Chin, Doris B.; Prempeh, Joe; Blair, Kristen P.; Schwartz, Daniel L.
2013-01-01
Cognitive developmental psychology often describes children's growing qualitative understanding of the physical world. Physics educators may be able to use the relevant methods to advantage for characterizing changes in students' qualitative reasoning. Siegler developed the "rule assessment" method for characterizing levels of qualitative understanding for two factor situations (e.g., volume and mass for density). The method assigns children to rule levels that correspond to the degree they notice and coordinate the two factors. Here, we provide a brief tutorial plus a demonstration of how we have used this method to evaluate instructional outcomes with middle-school students who learned about torque, projectile motion, and collisions using different instructional methods with simulations.
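The rule-assessment logic — assign a learner to a level according to which factors their answer pattern shows they notice and coordinate — can be sketched as a toy scorer. This is an illustrative simplification, not Siegler's actual scoring procedure, and the thresholds and item-type names are hypothetical:

```python
def assess_rule(answers):
    """Toy rule-assessment scorer for a two-factor situation.

    answers: fraction correct on three diagnostic item types:
      'dominant'    - only the dominant factor varies
      'subordinate' - only the subordinate factor varies
      'conflict'    - the two factors point to different answers
    """
    if answers['dominant'] < 0.8:
        return 0   # no consistent rule yet
    if answers['subordinate'] < 0.8:
        return 1   # Rule I: attends to the dominant factor only
    if answers['conflict'] < 0.8:
        return 2   # Rule II/III: notices both factors, fails to coordinate them
    return 4       # Rule IV: factors fully coordinated

print(assess_rule({'dominant': 0.9, 'subordinate': 0.9, 'conflict': 0.3}))  # 2
```

Comparing pre- and post-instruction rule levels across item types is what lets the method characterize qualitative (not just score-based) change, as the tutorial describes for torque, projectile motion, and collisions.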
Applying the Taguchi method to river water pollution remediation strategy optimization.
Yang, Tsung-Ming; Hsu, Nien-Sheng; Chiu, Chih-Chiang; Wang, Hsin-Ju
2014-04-15
Optimization methods usually obtain the travel direction of the solution by substituting the solutions into the objective function. However, if the solution space is too large, this search method may be time consuming. In order to address this problem, this study incorporated the Taguchi method into the solution space search process of the optimization method, and used the characteristics of the Taguchi method to sequence the effects of the variation of decision variables on the system. Based on the level of effect, this study determined the impact factor of decision variables and the optimal solution for the model. The integration of the Taguchi method and the solution optimization method successfully obtained the optimal solution of the optimization problem, while significantly reducing the solution computing time and enhancing the river water quality. The results suggested that the basin with the greatest water quality improvement effectiveness is the Dahan River. Under the optimal strategy of this study, the severe pollution length was reduced from 18 km to 5 km.
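The Taguchi step the study leans on — ranking how strongly each decision variable affects the system from a small orthogonal-array design rather than a full search — can be sketched with a standard L4 array. The response values are hypothetical water-quality scores, not the study's data:

```python
# L4 orthogonal array for three two-level decision variables (levels coded 0/1),
# a standard Taguchi design covering 3 factors in only 4 runs.
L4 = [(0, 0, 0), (0, 1, 1), (1, 0, 1), (1, 1, 0)]
response = [62.0, 71.0, 75.0, 80.0]  # hypothetical simulation outputs

def main_effects(array, y):
    """Effect of each factor: mean response at level 1 minus at level 0."""
    effects = []
    for j in range(len(array[0])):
        lo = [yi for row, yi in zip(array, y) if row[j] == 0]
        hi = [yi for row, yi in zip(array, y) if row[j] == 1]
        effects.append(sum(hi) / len(hi) - sum(lo) / len(lo))
    return effects

eff = main_effects(L4, response)
print(eff)  # [11.0, 7.0, 2.0] -> factor 1 is the dominant impact factor
```

Sequencing factors by effect size is what lets the optimization concentrate its search on the high-impact decision variables, which is the source of the computing-time reduction the abstract reports.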
Study on State Transition Method Applied to Motion Planning for a Humanoid Robot
Directory of Open Access Journals (Sweden)
Xuyang Wang
2008-11-01
Full Text Available This paper presents an approach to motion planning for a humanoid robot using a state transition method. In this method, motion planning is simplified by introducing a state space that describes the whole motion series, where each state corresponds to a contact state specified during the motion. The continuous motion is thus represented by a sequence of discrete states, and the transition between two neighboring states (the state transition) can be realized using traditional path planning methods. Considering the dynamic stability of the robot, a state transition method based on a search strategy is proposed. Different sets of trajectories are generated using a variable 5th-order polynomial interpolation method; after quantifying the stability of these trajectories, those with the largest stability margin are selected as the final state transition trajectories. A rising motion is used as an example to validate the method, and the simulation results show the proposed method to be feasible and effective.
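The 5th-order polynomial interpolation used to generate candidate transition trajectories is a standard quintic fit: six coefficients matched to position, velocity, and acceleration at both endpoints. A minimal sketch (boundary values here are hypothetical, not from the paper):

```python
import numpy as np

def quintic_coeffs(q0, qf, T, v0=0.0, vf=0.0, a0=0.0, af=0.0):
    """Coefficients of q(t) = c0 + c1*t + ... + c5*t**5 matching position,
    velocity, and acceleration at t=0 and t=T."""
    A = np.array([
        [1, 0, 0,    0,       0,        0],
        [0, 1, 0,    0,       0,        0],
        [0, 0, 2,    0,       0,        0],
        [1, T, T**2, T**3,    T**4,     T**5],
        [0, 1, 2*T,  3*T**2,  4*T**3,   5*T**4],
        [0, 0, 2,    6*T,     12*T**2,  20*T**3],
    ], dtype=float)
    b = np.array([q0, v0, a0, qf, vf, af], dtype=float)
    return np.linalg.solve(A, b)

c = quintic_coeffs(0.0, 1.0, 2.0)  # joint moves 0 -> 1 rad in 2 s, at rest at both ends
q = lambda t: sum(ci * t**i for i, ci in enumerate(c))
print(round(q(2.0), 6))  # 1.0
```

Varying the duration T or the boundary rates produces the "different sets of trajectories" among which the search strategy then selects by stability margin.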
CO2 (carbon dioxide) fixation by applying new chemical absorption-precipitation methods
International Nuclear Information System (INIS)
Park, Sangwon; Lee, Min-Gu; Park, Jinwon
2013-01-01
CO2 (carbon dioxide) is the most common greenhouse gas, and most of it is emitted from human activities. Methods for CO2 emission reduction can be divided into physical, chemical, and biochemical methods. Among the physical and chemical methods, CCS (carbon capture and storage) is a well-known reduction technology. However, this method has many disadvantages, including the required storage area. In general, CCS requires both capture and storage processes. In this study, we propose a method for reusing the absorbed CO2 either in nature or in industry. The emitted CO2 was converted into CO3^2- using a conversion solution, and then made into a carbonate by combining the conversion solution with metal ions at normal temperature and pressure. The resulting carbonate was analyzed using FT-IR (Fourier transform infrared spectroscopy) and XRD (X-ray diffraction). We verified the formation of a solid consisting of calcite and vaterite. In addition, the used conversion solution could be reused in the same CCS process. Our study demonstrates a successful method of reducing and reusing emitted CO2, thereby making CO2 a potential future resource. - Highlights: • This study focused on a new CO2 fixation process. • In CCS technology, the desorption process requires high thermal energy consumption. • The new method does not require a desorption process because CO2 fixation is accomplished through CaCO3 crystallization. • A new absorption method is possible instead of the conventional absorption-desorption process. • The reaction not only fixes CO2 rapidly, but is also economically feasible
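Since the fixation product is CaCO3 (calcite/vaterite), the mass balance is simple stoichiometry: one mole of absorbed CO2 yields at most one mole of carbonate. A back-of-the-envelope sketch (the 1:1 conversion and any efficiency factor are illustrative assumptions, not the paper's measured yields):

```python
# Molar masses in g/mol
M_CO2, M_CACO3 = 44.01, 100.09

def caco3_yield_kg(co2_kg, conversion=1.0):
    """Mass of CaCO3 produced from a given mass of fixed CO2,
    assuming 1:1 stoichiometry; conversion < 1 models incomplete fixation."""
    return co2_kg / M_CO2 * M_CACO3 * conversion

print(round(caco3_yield_kg(1.0), 3))  # 2.274 kg of carbonate per kg of CO2
```

The roughly 2.3x mass gain is one reason mineral carbonation is attractive as a storage-free sink: the product is a stable, potentially saleable solid rather than a gas needing containment.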
International Nuclear Information System (INIS)
Schmitt, K.W.
1974-01-01
The Compton backscattering method is applied to determine bone decalcification. Calcanei and vertebral bodies excised post mortem from 50 people serve as the investigation objects; they are examined for their calcium salt content and then ashed for control measurements. The results show that the method would be better suited to the early diagnosis of calcipenic osteopathy than the densitometric method used today on extremity bones. (ORU/LH) [de