Deterministic mean-variance-optimal consumption and investment
DEFF Research Database (Denmark)
Christiansen, Marcus; Steffensen, Mogens
2013-01-01
In dynamic optimal consumption–investment problems one typically aims to find an optimal control from the set of adapted processes. This is also the natural starting point in case of a mean-variance objective. In contrast, we solve the optimization problem with the special feature that the consumption rate and the investment proportion are constrained to be deterministic processes. As a result we get rid of a series of unwanted features of the stochastic solution, including diffusive consumption, satisfaction points and consistency problems. Deterministic strategies typically appear in unit-linked life insurance contracts, where the life-cycle investment strategy is age dependent but wealth independent. We explain how optimal deterministic strategies can be found numerically and present an example from life insurance where we compare the optimal solution with suboptimal deterministic strategies.
Directory of Open Access Journals (Sweden)
Fushing Hsieh
2016-11-01
Discrete combinatorial optimization problems in the real world are typically defined via an ensemble of potentially high dimensional measurements pertaining to all subjects of a system under study. We point out that such a data ensemble in fact embeds the system's information content, which is not directly used in defining the combinatorial optimization problems. Can machine learning algorithms extract this information content and make combinatorial optimization tasks more efficient? Would such algorithmic computations bring new perspectives into this classic topic of Applied Mathematics and Theoretical Computer Science? We show that the answers to both questions are positive. One key reason is permutation invariance: the data ensemble of subjects' measurement vectors is permutation invariant when it is represented through a subject-vs-measurement matrix. An unsupervised machine learning algorithm, called Data Mechanics (DM), is applied to find optimal permutations on the row and column axes such that the permuted matrix reveals coupled deterministic and stochastic structures as the system's information content. The deterministic structures are shown to facilitate a geometry-based divide-and-conquer scheme that helps the optimizing task, while the stochastic structures are used to generate an ensemble of mimicries retaining the deterministic structures, which then reveals the robustness of the original optimal solution. Two simulated systems, the Assignment problem and the Traveling Salesman problem, are considered. Beyond demonstrating computational advantages and intrinsic robustness in the two systems, we propose new robust optimal solutions. We believe such robust versions of optimal solutions are potentially more realistic and practical in real-world settings.
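As a minimal illustration of the permutation invariance this abstract exploits, the sketch below permutes the rows and columns of a subject-vs-measurement matrix. Sorting by marginal sums is a crude stand-in (our assumption) for the actual Data Mechanics algorithm, which iterates a coupled clustering step on both axes.

```python
import numpy as np

def seriate(matrix):
    """Crude stand-in for Data Mechanics: permute rows and columns so
    that similar rows/columns become adjacent (here, ordered by
    marginal sums; the real algorithm iterates a clustering step)."""
    row_order = np.argsort(matrix.sum(axis=1))
    col_order = np.argsort(matrix.sum(axis=0))
    return matrix[np.ix_(row_order, col_order)], row_order, col_order

# Hypothetical 3-subject, 3-measurement binary data ensemble.
M = np.array([[1, 0, 1],
              [0, 1, 0],
              [1, 1, 1]])
P, rows, cols = seriate(M)
```

Any such permutation leaves the data ensemble itself unchanged; only the revealed block structure differs.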
Optimal Deterministic Investment Strategies for Insurers
Directory of Open Access Journals (Sweden)
Ulrich Rieder
2013-11-01
We consider an insurance company whose risk reserve is given by a Brownian motion with drift and which is able to invest the money into a Black–Scholes financial market. As optimization criteria, we treat mean-variance problems, problems with other risk measures, exponential utility and the probability of ruin. Following recent research, we assume that investment strategies have to be deterministic. This leads to deterministic control problems, which are quite easy to solve. Moreover, it turns out that there are some interesting links between the optimal investment strategies of these problems. Finally, we also show that this approach works in the Lévy process framework.
Dynamic optimization deterministic and stochastic models
Hinderer, Karl; Stieglitz, Michael
2016-01-01
This book explores discrete-time dynamic optimization and provides a detailed introduction to both deterministic and stochastic models. Covering problems with finite and infinite horizon, as well as Markov renewal programs, Bayesian control models and partially observable processes, the book focuses on the precise modelling of applications in a variety of areas, including operations research, computer science, mathematics, statistics, engineering, economics and finance. Dynamic Optimization is a carefully presented textbook which starts with discrete-time deterministic dynamic optimization problems, providing readers with the tools for sequential decision-making, before proceeding to the more complicated stochastic models. The authors present complete and simple proofs and illustrate the main results with numerous examples and exercises (without solutions). With relevant material covered in four appendices, this book is completely self-contained.
Advances in stochastic and deterministic global optimization
Zhigljavsky, Anatoly; Žilinskas, Julius
2016-01-01
Current research results in stochastic and deterministic global optimization including single and multiple objectives are explored and presented in this book by leading specialists from various fields. Contributions include applications to multidimensional data visualization, regression, survey calibration, inventory management, timetabling, chemical engineering, energy systems, and competitive facility location. Graduate students, researchers, and scientists in computer science, numerical analysis, optimization, and applied mathematics will be fascinated by the theoretical, computational, and application-oriented aspects of stochastic and deterministic global optimization explored in this book. This volume is dedicated to the 70th birthday of Antanas Žilinskas who is a leading world expert in global optimization. Professor Žilinskas's research has concentrated on studying models for the objective function, the development and implementation of efficient algorithms for global optimization with single and mu...
Machine learning a Bayesian and optimization perspective
Theodoridis, Sergios
2015-01-01
This tutorial text gives a unifying perspective on machine learning by covering both probabilistic and deterministic approaches, which rely on optimization techniques, as well as Bayesian inference, which is based on a hierarchy of probabilistic models. The book presents the major machine learning methods as they have been developed in different disciplines, such as statistics, statistical and adaptive signal processing and computer science. Focusing on the physical reasoning behind the mathematics, all the various methods and techniques are explained in depth, supported by examples and problems, giving an invaluable resource to the student and researcher for understanding and applying machine learning concepts. The book builds carefully from the basic classical methods to the most recent trends, with chapters written to be as self-contained as possible, making the text suitable for different courses: pattern recognition, statistical/adaptive signal processing, statistical/Bayesian learning, as well as shor...
Joint optimization of production scheduling and machine group preventive maintenance
International Nuclear Information System (INIS)
Xiao, Lei; Song, Sanling; Chen, Xiaohui; Coit, David W.
2016-01-01
Joint optimization models are developed that combine group preventive maintenance of a series system with production scheduling. In this paper, we propose a joint optimization model to minimize the total cost, including production cost, preventive maintenance cost, minimal repair cost for unexpected failures and tardiness cost. The total cost depends on both the production process and the machine maintenance plan associated with reliability. For the problems addressed in this research, any machine unavailability leads to system downtime. Therefore, it is important to optimize the preventive maintenance of the machines, because their performance impacts the collective production processing associated with all machines. Too lengthy preventive maintenance intervals may keep scheduled machine maintenance cost low, but incur expensive costs for unplanned failures due to low machine reliability. Alternatively, too frequent preventive maintenance activities may achieve the desired high machine reliability, but at unacceptably high scheduled maintenance cost. Additionally, production scheduling plans affect tardiness and maintenance cost. Two results are obtained when solving the problem: the optimal group preventive maintenance interval for the machines, and the assignment of each job, including the corresponding start time and completion time. To solve this NP-hard problem, random keys genetic algorithms are used, and a numerical example is solved to illustrate the proposed model. - Highlights: • Group preventive maintenance (PM) planning and production scheduling are jointly optimized. • The maintenance interval and the assignment of jobs are decided by minimizing total cost. • Relationships among system age, PM and job processing time are quantified. • Random keys genetic algorithms (GA) are used to solve the NP-hard problem. • Random keys GA and Particle Swarm Optimization (PSO) are compared.
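The random-keys encoding that makes genetic algorithms convenient for such scheduling problems can be sketched as follows. The job data are hypothetical and the maintenance component of the joint model is omitted; only the representation trick is shown.

```python
import numpy as np

def decode_random_keys(keys):
    """Random-keys decoding: any real vector in [0,1) maps to a job
    permutation via argsort, so every chromosome a GA produces is a
    feasible schedule -- no repair step is needed."""
    return list(np.argsort(keys))

def total_tardiness(order, proc, due):
    # Evaluate a decoded schedule: accumulate completion times and
    # sum the lateness beyond each job's due date.
    t, tard = 0, 0
    for j in order:
        t += proc[j]
        tard += max(0, t - due[j])
    return tard

keys = [0.7, 0.1, 0.4]            # one GA chromosome (hypothetical)
order = decode_random_keys(keys)  # jobs sorted by key value
cost = total_tardiness(order, proc=[3, 1, 2], due=[3, 2, 6])
```

A GA then evolves the key vectors with standard real-valued crossover and mutation, re-decoding after every operation.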
Deterministic operations research models and methods in linear optimization
Rader, David J
2013-01-01
Uniquely blends mathematical theory and algorithm design for understanding and modeling real-world problems. Optimization modeling and algorithms are key components of problem-solving across various fields of research, from operations research and mathematics to computer science and engineering. Addressing the importance of the algorithm design process, Deterministic Operations Research focuses on the design of solution methods for both continuous and discrete linear optimization problems. The result is a clear-cut resource for understanding three cornerstones of deterministic operations research...
Ordinal optimization and its application to complex deterministic problems
Yang, Mike Shang-Yu
1998-10-01
We present in this thesis a new perspective for approaching a general class of optimization problems characterized by large deterministic complexities. Many problems of real-world concern today lack analyzable structures and almost always involve a high level of difficulty and complexity in the evaluation process. Advances in computer technology allow us to build computer models to simulate the evaluation process through numerical means, but the burden of high complexity remains, taxing the simulation with an exorbitant computing cost for each evaluation. Such a resource requirement makes local fine-tuning of a known design difficult under most circumstances, let alone global optimization. The Kolmogorov equivalence of complexity and randomness in computation theory is introduced to resolve this difficulty by converting the complex deterministic model into a stochastic pseudo-model composed of a simple deterministic component and a white-noise-like stochastic term. The resulting randomness is then dealt with by a noise-robust approach called Ordinal Optimization. Ordinal Optimization utilizes Goal Softening and Ordinal Comparison to achieve an efficient and quantifiable selection of designs in the initial search process. The approach is substantiated by a case study in the turbine blade manufacturing process. The problem involves the optimization of the manufacturing process of the integrally bladed rotor in the turbine engines of U.S. Air Force fighter jets. The intertwining interactions among the material, thermomechanical, and geometrical changes make the current FEM approach prohibitively uneconomical in the optimization process. The generalized OO approach to complex deterministic problems is applied here with great success. Empirical results indicate a saving of nearly 95% in computing cost.
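The selection step of Ordinal Optimization can be illustrated with a toy experiment: rank designs by a cheap, noisy evaluation and keep a softened top set. Order is far more robust to noise than value, so the selected set contains truly good designs with high probability. The linear cost model and Gaussian noise level below are assumptions for illustration only.

```python
import random

def ordinal_select(designs, noisy_eval, s):
    """Ordinal Optimization in miniature: rank designs by a cheap,
    noisy evaluation and keep the top-s set ('goal softening')."""
    ranked = sorted(designs, key=noisy_eval)
    return ranked[:s]

random.seed(0)
# Hypothetical problem: design d has true cost d, but we can only
# afford a noisy surrogate evaluation of each design.
noisy = lambda d: d + random.gauss(0, 10)
selected = ordinal_select(list(range(100)), noisy, s=10)
```

Despite noise comparable to the spacing of the best designs, the softened top-10 set is dominated by genuinely good designs.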
Optimization of structures subjected to dynamic load: deterministic and probabilistic methods
Directory of Open Access Journals (Sweden)
Élcio Cassimiro Alves
This paper deals with the deterministic and probabilistic optimization of structures against bending when subjected to dynamic loads. The deterministic optimization problem considers the plate submitted to a time-varying load, while the probabilistic one takes into account a random loading defined by a power spectral density function. The link between the two problems is established via the Fourier transform. The finite element method is used to model the structures. The sensitivity analysis is performed through the analytical method, and the optimization problem is solved by an interior point method. A comparison between the deterministic optimization and the probabilistic one with a power spectral density function compatible with the time-varying load shows very good results.
Optimal power flow: a bibliographic survey I. Formulations and deterministic methods
Energy Technology Data Exchange (ETDEWEB)
Frank, Stephen [Colorado School of Mines, Department of Electrical Engineering and Computer Science, Golden, CO (United States); Steponavice, Ingrida [University of Jyvaskyla, Department of Mathematical Information Technology, Agora (Finland); Rebennack, Steffen [Colorado School of Mines, Division of Economics and Business, Golden, CO (United States)
2012-09-15
Over the past half-century, optimal power flow (OPF) has become one of the most important and widely studied nonlinear optimization problems. In general, OPF seeks to optimize the operation of electric power generation, transmission, and distribution networks subject to system constraints and control limits. Within this framework, however, there is an extremely wide variety of OPF formulations and solution methods. Moreover, the nature of OPF continues to evolve due to modern electricity markets and renewable resource integration. In this two-part survey, we review both the classical and recent OPF literature in order to provide a sound context for the state of the art in OPF formulation and solution methods. The survey contributes a comprehensive discussion of specific optimization techniques that have been applied to OPF, with an emphasis on the advantages, disadvantages, and computational characteristics of each. Part I of the survey (this article) provides an introduction and surveys the deterministic optimization methods that have been applied to OPF. Part II of the survey examines the recent trend towards stochastic, or non-deterministic, search techniques and hybrid methods for OPF. (orig.)
Coil Optimization for HTS Machines
DEFF Research Database (Denmark)
Mijatovic, Nenad; Jensen, Bogi Bech; Abrahamsen, Asger Bech
An optimization approach for HTS coils in HTS synchronous machines (SM) is presented. The optimization is aimed at high power SMs suitable for direct-driven wind turbine applications. The optimization process was applied to a general radial flux machine with a peak air gap flux density of ~3 T. ... Which tape is suitable for which coil segment is presented. Thus, the performed study gives valuable input for the coil design of HTS machines, ensuring optimal usage of HTS tapes.
Beeping a Deterministic Time-Optimal Leader Election
Dufoulon , Fabien; Burman , Janna; Beauquier , Joffroy
2018-01-01
The beeping model is an extremely restrictive broadcast communication model that relies only on carrier sensing. In this model, we solve the leader election problem with an asymptotically optimal round complexity of O(D + log n), for a network of unknown size n and unknown diameter D (but with unique identifiers). Contrary to the best previously known algorithms in the same setting, the proposed one is deterministic. The techniques we introduce give a new insight as to how local constraints o...
International Nuclear Information System (INIS)
Azadeh, A.; Ghaderi, S.F.; Omrani, H.
2009-01-01
This paper presents a deterministic approach for performance assessment and optimization of power distribution units in Iran. The deterministic approach is composed of data envelopment analysis (DEA), principal component analysis (PCA) and correlation techniques. Seventeen electricity distribution units have been considered for the purpose of this study. Previous studies have generally used input-output DEA models for benchmarking and evaluation of electricity distribution units. However, this study considers an integrated deterministic DEA-PCA approach, since the DEA model should be verified and validated by a robust multivariate methodology such as PCA. Moreover, the DEA models are verified and validated by PCA, Spearman and Kendall's Tau correlation techniques, whereas previous studies lack such verification and validation. Also, both input- and output-oriented DEA models are used for sensitivity analysis of the input and output variables. Finally, this is the first study to present an integrated deterministic approach for the assessment and optimization of power distribution units in Iran.
Optimal power flow: a bibliographic survey II. Non-deterministic and hybrid methods
Energy Technology Data Exchange (ETDEWEB)
Frank, Stephen [Colorado School of Mines, Department of Electrical Engineering and Computer Science, Golden, CO (United States); Steponavice, Ingrida [Univ. of Jyvaskyla, Dept. of Mathematical Information Technology, Agora (Finland); Rebennack, Steffen [Colorado School of Mines, Division of Economics and Business, Golden, CO (United States)
2012-09-15
Over the past half-century, optimal power flow (OPF) has become one of the most important and widely studied nonlinear optimization problems. In general, OPF seeks to optimize the operation of electric power generation, transmission, and distribution networks subject to system constraints and control limits. Within this framework, however, there is an extremely wide variety of OPF formulations and solution methods. Moreover, the nature of OPF continues to evolve due to modern electricity markets and renewable resource integration. In this two-part survey, we review both the classical and recent OPF literature in order to provide a sound context for the state of the art in OPF formulation and solution methods. The survey contributes a comprehensive discussion of specific optimization techniques that have been applied to OPF, with an emphasis on the advantages, disadvantages, and computational characteristics of each. Part I of the survey provides an introduction and surveys the deterministic optimization methods that have been applied to OPF. Part II of the survey (this article) examines the recent trend towards stochastic, or non-deterministic, search techniques and hybrid methods for OPF. (orig.)
Deterministic methods for multi-control fuel loading optimization
Rahman, Fariz B. Abdul
We have developed a multi-control fuel loading optimization code for pressurized water reactors based on deterministic methods. The objective is to flatten the fuel burnup profile, which maximizes overall energy production. The optimal control problem is formulated using the method of Lagrange multipliers and the direct adjoining approach for treatment of the inequality power peaking constraint. The optimality conditions are derived for a multi-dimensional multi-group optimal control problem via the calculus of variations. Because the Hamiltonian is linear in the control, our optimal control problem is solved using the gradient method to minimize the Hamiltonian and a Newton step formulation to obtain the optimal control. We are able to satisfy the power peaking constraint during depletion with the control at beginning of cycle (BOC) by building the proper burnup path forward in time and utilizing the adjoint burnup to propagate the information back to the BOC. Our test results show that we are able to achieve our objective and satisfy the power peaking constraint during depletion using either the fissile enrichment or the burnable poison as the control. Our fuel loading designs show an increase of 7.8 equivalent full power days (EFPDs) in cycle length compared with 517.4 EFPDs for the AP600 first cycle.
Coil Optimization for High Temperature Superconductor Machines
DEFF Research Database (Denmark)
Mijatovic, Nenad; Jensen, Bogi Bech; Abrahamsen, Asger Bech
2011-01-01
This paper presents topology optimization of HTS racetrack coils for large HTS synchronous machines. The topology optimization is used to acquire optimal coil designs for the excitation system of 3 T HTS machines. Several tapes are evaluated and the optimization results are discussed. The optimiz...
Combinatorial optimization on a Boltzmann machine
Korst, J.H.M.; Aarts, E.H.L.
1989-01-01
We discuss the problem of approximately solving combinatorial optimization problems on a Boltzmann machine. It is shown for a number of combinatorial optimization problems how they can be mapped directly onto a Boltzmann machine by choosing appropriate connection patterns and connection strengths.
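To make the mapping concrete, here is a small hypothetical sketch in the spirit of a Boltzmann machine solving Max-Cut: unit states encode which side of the cut each vertex is on, connection strengths come from the edge weights, and stochastic unit updates at a falling temperature favor low-energy (large-cut) states. The annealing schedule and the example graph are illustrative assumptions, not the authors' construction.

```python
import math
import random

def anneal_maxcut(weights, steps=5000, t0=2.0, seed=1):
    """Boltzmann-machine-style stochastic search for Max-Cut."""
    rng = random.Random(seed)
    n = len(weights)
    state = [rng.choice([0, 1]) for _ in range(n)]

    def cut_value(s):
        return sum(weights[i][j] for i in range(n) for j in range(i + 1, n)
                   if s[i] != s[j])

    for k in range(steps):
        t = t0 * (1 - k / steps) + 1e-3       # linear cooling schedule
        i = rng.randrange(n)
        # Cut-value change of flipping unit i (minus the energy change).
        delta = cut_value(state[:i] + [1 - state[i]] + state[i + 1:]) \
            - cut_value(state)
        if delta > 0 or rng.random() < math.exp(delta / t):
            state[i] = 1 - state[i]           # stochastic acceptance
    return state, cut_value(state)

# Triangle with one heavy edge: the best cut separates vertices 0 and 1.
W = [[0, 5, 1],
     [5, 0, 1],
     [1, 1, 0]]
state, value = anneal_maxcut(W, steps=2000)
```

On this tiny instance every local optimum is globally optimal, so the annealed machine reliably reaches the maximum cut of 6.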
Miró, Anton; Pozo, Carlos; Guillén-Gosálbez, Gonzalo; Egea, Jose A; Jiménez, Laureano
2012-05-10
The estimation of parameter values for mathematical models of biological systems is an optimization problem that is particularly challenging due to the nonlinearities involved. One major difficulty is the existence of multiple minima in which standard optimization methods may fall during the search. Deterministic global optimization methods overcome this limitation, ensuring convergence to the global optimum within a desired tolerance. Global optimization techniques are usually classified into stochastic and deterministic. The former typically lead to lower CPU times but offer no guarantee of convergence to the global minimum in a finite number of iterations. In contrast, deterministic methods provide solutions of a given quality (i.e., optimality gap), but tend to lead to large computational burdens. This work presents a deterministic outer approximation-based algorithm for the global optimization of dynamic problems arising in the parameter estimation of models of biological systems. Our approach, which offers a theoretical guarantee of convergence to global minimum, is based on reformulating the set of ordinary differential equations into an equivalent set of algebraic equations through the use of orthogonal collocation methods, giving rise to a nonconvex nonlinear programming (NLP) problem. This nonconvex NLP is decomposed into two hierarchical levels: a master mixed-integer linear programming problem (MILP) that provides a rigorous lower bound on the optimal solution, and a reduced-space slave NLP that yields an upper bound. The algorithm iterates between these two levels until a termination criterion is satisfied. The capabilities of our approach were tested in two benchmark problems, in which the performance of our algorithm was compared with that of the commercial global optimization package BARON. The proposed strategy produced near optimal solutions (i.e., within a desired tolerance) in a fraction of the CPU time required by BARON.
Deterministic global optimization an introduction to the diagonal approach
Sergeyev, Yaroslav D
2017-01-01
This book begins with a concentrated introduction into deterministic global optimization and moves forward to present new original results from the authors who are well known experts in the field. Multiextremal continuous problems that have an unknown structure with Lipschitz objective functions and functions having the first Lipschitz derivatives defined over hyperintervals are examined. A class of algorithms using several Lipschitz constants is introduced which has its origins in the DIRECT (DIviding RECTangles) method. This new class is based on an efficient strategy that is applied for the search domain partitioning. In addition a survey on derivative free methods and methods using the first derivatives is given for both one-dimensional and multi-dimensional cases. Non-smooth and smooth minorants and acceleration techniques that can speed up several classes of global optimization methods with examples of applications and problems arising in numerical testing of global optimization algorithms are discussed...
Directory of Open Access Journals (Sweden)
Wenying Yue
2014-01-01
Cloud computing has become a significant commercial infrastructure offering utility-oriented IT services to users worldwide. However, data centers hosting cloud applications consume huge amounts of energy, leading to high operational cost and greenhouse gas emission. Therefore, green cloud computing solutions are needed not only to achieve high-level service performance but also to minimize energy consumption. This paper studies the dynamic placement of virtual machines (VMs) with deterministic and stochastic demands. In order to ensure a quick response to VM requests and improve energy efficiency, a two-phase optimization strategy is proposed, in which VMs are deployed at runtime and consolidated into servers periodically. Based on an improved multidimensional space partition model, a modified energy efficient algorithm with balanced resource utilization (MEAGLE) and a live migration algorithm based on the basic set (LMABBS) are, respectively, developed for each phase. Experimental results have shown that under different stochastic demand variations of the VMs, MEAGLE guarantees the availability of stochastic resources with a defined probability and reduces the number of required servers by 2.49% to 20.40% compared with the benchmark algorithms. Also, the difference between the LMABBS solution and the Gurobi solution is fairly small, but LMABBS significantly excels in computational efficiency.
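As a baseline for the placement idea (not the MEAGLE algorithm itself, which handles multidimensional and stochastic demands), a first-fit-decreasing placement of VMs with deterministic one-dimensional demands might look like this; the capacities and demands are hypothetical.

```python
def first_fit_decreasing(demands, capacity):
    """Classic bin-packing heuristic as a VM-placement baseline:
    place each VM, largest demand first, into the first server with
    room, opening a new server when none fits."""
    servers = []
    for d in sorted(demands, reverse=True):
        for s in servers:
            if sum(s) + d <= capacity:
                s.append(d)
                break
        else:
            servers.append([d])   # no existing server fits: open one
    return servers

# Hypothetical normalized CPU demands on unit-capacity servers.
placement = first_fit_decreasing([0.5, 0.7, 0.3, 0.2, 0.4], capacity=1.0)
```

Minimizing the number of opened servers is exactly what reduces the energy cost that the two-phase strategy targets.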
International Nuclear Information System (INIS)
Milickovic, N.; Lahanas, M.; Papagiannopoulou, M.; Zamboglou, N.; Baltas, D.
2002-01-01
In high dose rate (HDR) brachytherapy, conventional dose optimization algorithms consider multiple objectives in the form of an aggregate function that transforms the multiobjective problem into a single-objective problem. As a result, there is a loss of information on the available alternative possible solutions. This method assumes that the treatment planner exactly understands the correlation between competing objectives and knows the physical constraints. This knowledge is provided by the Pareto trade-off set obtained by single-objective optimization algorithms with a repeated optimization with different importance vectors. A mapping technique avoids non-feasible solutions with negative dwell weights and allows the use of constraint free gradient-based deterministic algorithms. We compare various such algorithms and methods which could improve their performance. This finally allows us to generate a large number of solutions in a few minutes. We use objectives expressed in terms of dose variances obtained from a few hundred sampling points in the planning target volume (PTV) and in organs at risk (OAR). We compare two- to four-dimensional Pareto fronts obtained with the deterministic algorithms and with a fast-simulated annealing algorithm. For PTV-based objectives, due to the convex objective functions, the obtained solutions are global optimal. If OARs are included, then the solutions found are also global optimal, although local minima may be present as suggested. (author)
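The repeated single-objective optimization with different importance vectors that this abstract describes can be sketched generically. The quadratic objectives and grid "optimizer" below are toy assumptions, not the dose-variance objectives of the paper; the point is only the scalarization loop.

```python
import numpy as np

def weighted_sum_front(f1, f2, solve, n_weights=11):
    """Pareto trade-off set via repeated scalarization: sweep the
    importance vector (w, 1-w), minimize each aggregate objective,
    and collect the resulting trade-off points. `solve` stands in
    for any single-objective deterministic optimizer."""
    front = []
    for w in np.linspace(0.0, 1.0, n_weights):
        x = solve(lambda z: w * f1(z) + (1 - w) * f2(z))
        front.append((f1(x), f2(x)))
    return front

# Toy competing objectives with conflicting optima at 0 and 1.
f1 = lambda x: x ** 2
f2 = lambda x: (x - 1.0) ** 2
grid = np.linspace(-1.0, 2.0, 301)
solve = lambda g: min(grid, key=g)   # brute-force 'optimizer'
front = weighted_sum_front(f1, f2, solve)
```

For convex objectives, as in the PTV-based case, this sweep traces the whole Pareto front; with non-convexity it can miss parts of it.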
Hu, Xiao-Bing; Wang, Ming; Di Paolo, Ezequiel
2013-06-01
Searching the Pareto front for multiobjective optimization problems usually involves the use of a population-based search algorithm or of a deterministic method with a set of different single aggregate objective functions. The results are, in fact, only approximations of the real Pareto front. In this paper, we propose a new deterministic approach capable of fully determining the real Pareto front for those discrete problems for which it is possible to construct optimization algorithms to find the k best solutions to each of the single-objective problems. To this end, two theoretical conditions are given to guarantee the finding of the actual Pareto front rather than its approximation. Then, a general methodology for designing a deterministic search procedure is proposed. A case study is conducted, where by following the general methodology, a ripple-spreading algorithm is designed to calculate the complete exact Pareto front for multiobjective route optimization. When compared with traditional Pareto front search methods, the obvious advantage of the proposed approach is its unique capability of finding the complete Pareto front. This is illustrated by the simulation results in terms of both solution quality and computational efficiency.
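The core of the construction, pooling the k best solutions per single objective and filtering for nondominance, can be sketched as follows. The route set and objectives are invented for illustration, and the paper's theoretical conditions guaranteeing completeness for large enough k are not checked here.

```python
def dominates(q, p):
    """q dominates p (minimization): no worse everywhere, strictly
    better somewhere."""
    return all(a <= b for a, b in zip(q, p)) and \
        any(a < b for a, b in zip(q, p))

def pareto_front(points):
    # Keep exactly the nondominated points of a finite set.
    return [p for p in points if not any(dominates(q, p) for q in points)]

def kbest_pareto(solutions, objectives, k):
    """Pool the k best solutions under each single objective, then
    keep the nondominated ones."""
    pool = set()
    for f in objectives:
        pool.update(sorted(solutions, key=f)[:k])
    pts = {s: tuple(f(s) for f in objectives) for s in pool}
    front = set(pareto_front(list(pts.values())))
    return sorted(s for s in pool if pts[s] in front)

# Hypothetical routes scored as (cost, risk) pairs.
routes = [(1, 9), (2, 7), (3, 3), (7, 2), (9, 1), (5, 5)]
front = kbest_pareto(routes, [lambda s: s[0], lambda s: s[1]], k=3)
```

Here k = 3 already recovers the complete front of five routes; (5, 5) is correctly excluded because (3, 3) dominates it.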
DEFF Research Database (Denmark)
Fischetti, Martina; Fraccaro, Marco
2018-01-01
In this paper we propose a combination of Mathematical Optimization and Machine Learning to estimate the value of optimized solutions. In particular, we investigate if a machine, trained on a large number of optimized solutions, could accurately estimate the value of the optimized solution for new... in production between optimized/non-optimized solutions, it is not trivial to understand the potential value of a new site without running a complete optimization. This could be too time consuming if a lot of sites need to be evaluated; therefore we propose to use Machine Learning to quickly estimate the potential of new sites (i.e., to estimate the optimized production of a site without explicitly running the optimization). To do so, we trained and tested different Machine Learning models on a dataset of 3000+ optimized layouts found by the optimizer. Thanks to the close collaboration with a leading...
Machine Learning Optimization of Evolvable Artificial Cells
DEFF Research Database (Denmark)
Caschera, F.; Rasmussen, S.; Hanczyc, M.
2011-01-01
... can be explored. A machine learning approach (Evo-DoE) could be applied to explore this experimental space and define optimal interactions according to a specific fitness function. Herein, an implementation of an evolutionary design of experiments to optimize chemical and biochemical systems based on a machine learning process is presented. The optimization proceeds over generations of experiments in an iterative loop until optimal compositions are discovered. The fitness function is experimentally measured every time the loop is closed. Two examples of complex systems, namely a liposomal drug formulation...
Deterministic extraction from weak random sources
Gabizon, Ariel
2011-01-01
In this research monograph, the author constructs deterministic extractors for several types of sources, using a methodology of recycling randomness which enables increasing the output length of deterministic extractors to near optimal length.
A Review of Design Optimization Methods for Electrical Machines
Directory of Open Access Journals (Sweden)
Gang Lei
2017-11-01
Electrical machines are the hearts of many appliances, industrial equipment and systems. In the context of global sustainability, they must fulfill various requirements, not only physical and technological but also environmental. Therefore, their design optimization process becomes more and more complex as more engineering disciplines/domains and constraints are involved, such as electromagnetics, structural mechanics and heat transfer. This paper aims to present a review of the design optimization methods for electrical machines, including design analysis methods and models, optimization models, algorithms and methods/strategies. Several efficient optimization methods/strategies are highlighted with comments, including surrogate-model-based and multi-level optimization methods. In addition, two promising and challenging topics in both the academic and industrial communities are discussed, and two novel optimization methods are introduced for advanced design optimization of electrical machines. First, a system-level design optimization method is introduced for the development of advanced electric drive systems. Second, a robust design optimization method based on the design-for-six-sigma technique is introduced for high-quality manufacturing of electrical machines in production. Meanwhile, a proposal is presented for the development of a robust design optimization service based on industrial big data and cloud computing services. Finally, five future directions are proposed, including a smart design optimization method for the future intelligent design and production of electrical machines.
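A single iteration of the surrogate-model-based strategy the review highlights might be sketched as below. The "expensive" loss curve, the quadratic surrogate, and the candidate grid are illustrative assumptions, not a method from the paper.

```python
import numpy as np

def surrogate_step(expensive_f, xs, candidates):
    """One iteration of surrogate-assisted design optimization: fit a
    cheap model (here a quadratic) to the designs evaluated so far,
    then run the expensive model only at the surrogate's most
    promising candidate."""
    ys = [expensive_f(x) for x in xs]
    coeffs = np.polyfit(xs, ys, 2)          # cheap surrogate fit
    best = min(candidates, key=lambda x: np.polyval(coeffs, x))
    return best, expensive_f(best)          # one expensive evaluation

# Toy 'expensive' loss curve over one design variable, optimum at 1.5.
f = lambda x: (x - 1.5) ** 2 + 0.2
x_new, y_new = surrogate_step(f, xs=[0.0, 1.0, 3.0],
                              candidates=np.linspace(0.0, 3.0, 61))
```

In a real electromagnetic design loop, `expensive_f` would be a finite element analysis and the surrogate a kriging or response-surface model refined over several such iterations.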
Optimization of pocket machining strategy in HSM
Msaddek, El Bechir; Bouaziz, Zoubeir; Dessein, Gilles; Baili, Maher
2012-01-01
Our two major concerns, which should be taken into consideration as soon as the machining parameters are selected, are minimizing the machining time and keeping the high-speed machining machine in good condition. The manufacturing strategy is one of the parameters that most influences the machining time of the various geometrical forms, as well as the machine itself. In this article, we propose an optimization methodology of the ...
Kucza, Witold
2013-07-25
Stochastic and deterministic simulations of dispersion in cylindrical channels under Poiseuille flow are presented. The random walk (stochastic) and the uniform dispersion (deterministic) models are used to compute flow injection analysis responses. These methods, coupled with the genetic algorithm and the Levenberg-Marquardt optimization method, respectively, are applied to determine diffusion coefficients. The diffusion coefficients of fluorescein sodium, potassium hexacyanoferrate and potassium dichromate are determined by means of the presented methods and FIA responses available in the literature. The best-fit results agree with each other and with experimental data, validating both presented approaches.
Directory of Open Access Journals (Sweden)
Yu Wang
2015-01-01
A new reliability-based design optimization (RBDO) method based on support vector machines (SVM) and the Most Probable Point (MPP) is proposed in this work. SVM is used to create a surrogate model of the limit-state function at the MPP with the gradient information in the reliability analysis. This guarantees that the surrogate model not only passes through the MPP but is also tangent to the limit-state function at the MPP. Then, importance sampling (IS) is used to calculate the probability of failure based on the surrogate model. This treatment significantly improves the accuracy of the reliability analysis. For RBDO, Sequential Optimization and Reliability Assessment (SORA) is employed as well, which decouples the deterministic optimization from the reliability analysis. The improved SVM-based reliability analysis is used to correct the error from the linear approximation of the limit-state function in SORA. A mathematical example and a simplified aircraft wing design demonstrate that the improved SVM-based reliability analysis is more accurate than FORM, needs fewer training points than Monte Carlo simulation, and that the proposed optimization strategy is efficient.
International Nuclear Information System (INIS)
Yokose, Yoshio; Noguchi, So; Yamashita, Hideo
2002-01-01
Stochastic and deterministic methods are both used for the optimization of electromagnetic devices. Genetic Algorithms (GAs) are a stochastic method for multivariable designs, while the deterministic method considered here is the gradient method, which uses the sensitivity of the objective function. Each technique has strengths and weaknesses. In this paper, the characteristics of these techniques are described, and a technique in which the two methods are used together is evaluated. The results of a comparison obtained by applying each method to electromagnetic devices are then described. (Author)
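The hybrid idea above, a stochastic global search handing its best candidate to a deterministic gradient refinement, can be sketched in miniature. This is an illustrative toy: the one-dimensional objective, population size and step sizes are invented stand-ins, not the paper's electromagnetic device model.

```python
import math
import random

def objective(x):
    """Toy multimodal stand-in for an electromagnetic figure of merit."""
    return (x - 1.7) ** 2 + 0.5 * math.cos(5.0 * x)

def genetic_search(pop_size=30, generations=40, lo=-5.0, hi=5.0):
    """Crude real-coded GA: two-way tournament selection plus Gaussian mutation."""
    pop = [random.uniform(lo, hi) for _ in range(pop_size)]
    for _ in range(generations):
        new_pop = []
        for _ in range(pop_size):
            a, b = random.sample(pop, 2)                 # tournament of two
            parent = a if objective(a) < objective(b) else b
            child = parent + random.gauss(0.0, 0.3)      # mutation
            new_pop.append(min(max(child, lo), hi))
        pop = new_pop
    return min(pop, key=objective)

def gradient_refine(x, steps=200, lr=0.01, h=1e-6):
    """Deterministic local polish via finite-difference gradient descent."""
    for _ in range(steps):
        grad = (objective(x + h) - objective(x - h)) / (2 * h)
        x -= lr * grad
    return x

random.seed(0)
x_ga = genetic_search()          # stochastic global exploration
x_opt = gradient_refine(x_ga)    # deterministic local refinement
print(x_opt, objective(x_opt))
```

The GA locates the basin of the global minimum without gradient information; the gradient step then converges quickly inside that basin, which is the usual motivation for combining the two.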
Bi and tri-objective optimization in the deterministic network interdiction problem
International Nuclear Information System (INIS)
Rocco S, Claudio M.; Emmanuel Ramirez-Marquez, Jose; Salazar A, Daniel E.
2010-01-01
Solution approaches to the deterministic network interdiction problem have previously been developed for optimizing a single figure-of-merit of the network configuration (i.e. flow that can be transmitted between a source node and a sink node for a fixed network design) under constraints related to limited amount of resources available to interdict network links. These approaches work under the assumption that: (1) nominal capacity of each link is completely reduced when interdicted and (2) there is a single criterion to optimize. This paper presents a newly developed evolutionary algorithm that for the first time allows solving multi-objective optimization models for the design of network interdiction strategies that take into account a variety of figures-of-merit. The algorithm provides an approximation to the optimal Pareto frontier using: (a) techniques in Monte Carlo simulation to generate potential network interdiction strategies, (b) graph theory to analyze strategies' maximum source-sink flow and (c) an evolutionary search that is driven by the probability that a link will belong to the optimal Pareto set. Examples for different sizes of networks and network behavior are used throughout the paper to illustrate and validate the approach.
Deterministic network interdiction optimization via an evolutionary approach
International Nuclear Information System (INIS)
Rocco S, Claudio M.; Ramirez-Marquez, Jose Emmanuel
2009-01-01
This paper introduces an evolutionary optimization approach that can be readily applied to solve deterministic network interdiction problems. The network interdiction problem solved considers the minimization of the maximum flow that can be transmitted between a source node and a sink node for a fixed network design when there is a limited amount of resources available to interdict network links. Furthermore, the model assumes that the nominal capacity of each network link and the cost associated with its interdiction can change from link to link. For this problem, the solution approach developed is based on three steps that use: (1) Monte Carlo simulation, to generate potential network interdiction strategies, (2) the Ford-Fulkerson algorithm for maximum s-t flow, to analyze strategies' maximum source-sink flow, and (3) an evolutionary optimization technique to define, in probabilistic terms, how likely a link is to appear in the final interdiction strategy. Examples for different sizes of networks and network behavior are used throughout the paper to illustrate the approach. In terms of computational effort, the results illustrate that solutions are obtained from a significantly restricted solution search space. Finally, the authors discuss the need for a reliability perspective on network interdiction, so that the solutions developed address more realistic scenarios of the problem.
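The first two of the three steps above can be illustrated in miniature: sample random interdiction strategies and score each with a maximum s-t flow computation (here Edmonds-Karp, a BFS realization of Ford-Fulkerson). The five-link network, capacities and budget below are invented for illustration, and the evolutionary probability update of step (3) is omitted.

```python
import random
from collections import deque

def max_flow(cap, s, t):
    """Edmonds-Karp maximum s-t flow on a capacity dict {(u, v): c}."""
    nodes = {u for u, _ in cap} | {v for _, v in cap}
    flow, residual = 0, dict(cap)
    while True:
        parent = {s: None}           # BFS for an augmenting path
        q = deque([s])
        while q and t not in parent:
            u = q.popleft()
            for v in nodes:
                if v not in parent and residual.get((u, v), 0) > 0:
                    parent[v] = u
                    q.append(v)
        if t not in parent:
            return flow
        path, v = [], t              # reconstruct the path, push the bottleneck
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        push = min(residual[e] for e in path)
        for u, v in path:
            residual[(u, v)] -= push
            residual[(v, u)] = residual.get((v, u), 0) + push
        flow += push

caps = {('s', 'a'): 4, ('s', 'b'): 3, ('a', 't'): 3, ('b', 't'): 4, ('a', 'b'): 2}
budget = 1                           # number of links we may interdict

random.seed(1)
best = None
for _ in range(200):                 # Monte Carlo sampling of strategies
    removed = set(random.sample(sorted(caps), budget))
    reduced = {e: (0 if e in removed else c) for e, c in caps.items()}
    f = max_flow(reduced, 's', 't')
    if best is None or f < best[0]:
        best = (f, removed)
print(best)
```

On this toy network the uninterdicted maximum flow is 7, and the best single-link interdiction drops it to 3.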
Some relations between quantum Turing machines and Turing machines
Sicard, Andrés; Vélez, Mario
1999-01-01
For quantum Turing machines we present three elements: their components, their time evolution operator and their local transition function. The components are related to the components of deterministic Turing machines, the time evolution operator is related to the evolution of reversible Turing machines, and the local transition function is related to the transition function of probabilistic and reversible Turing machines.
Review of smoothing methods for enhancement of noisy data from heavy-duty LHD mining machines
Wodecki, Jacek; Michalak, Anna; Stefaniak, Paweł
2018-01-01
Appropriate analysis of data measured on heavy-duty mining machines is essential for process monitoring, management and optimization. Some particular classes of machines, for example LHD (load-haul-dump) machines, hauling trucks, drilling/bolting machines etc., are characterized by cyclicity of operations. In those cases, identification of cycles and their segments, or in other words simply data segmentation, is key to evaluating their performance, which may be very useful from the management point of view, for example leading to optimization of the process. However, in many cases such raw signals are contaminated with various artifacts and are in general expected to be very noisy, which makes the segmentation task very difficult or even impossible. To deal with that problem, there is a need for efficient smoothing methods that retain informative trends in the signals while disregarding noise and other undesired non-deterministic components. In this paper the authors present a review of various approaches to diagnostic data smoothing. The described methods can be used in a fast and efficient way, effectively cleaning the signals while preserving the informative deterministic behaviour that is crucial to precise segmentation and other approaches to industrial data analysis.
Optimization of Moving Coil Actuators for Digital Displacement Machines
DEFF Research Database (Denmark)
Nørgård, Christian; Bech, Michael Møller; Roemer, Daniel Beck
2016-01-01
This paper focuses on deriving an optimal moving coil actuator design, used as the force-producing element in hydraulic on/off valves for Digital Displacement machines. Different moving coil actuator geometry topologies (permanent magnet placement and magnetization direction) are optimized for actuating annular seat valves in a digital displacement machine. The optimization objectives are to minimize the actuator power, the valve flow losses and the height of the actuator. Evaluation of the objective function involves static finite element simulation and simulation of an entire operation ... The optimal designs require approximately 20 W on average and may be realized in 20 mm × Ø 22.5 mm (height × diameter) for a 20 kW pressure chamber. The optimization is carried out using the multi-objective Generalized Differential Evolution optimization algorithm GDE3, which successfully handles constrained multi-objective ...
Joint optimization of maintenance, buffers and machines in manufacturing lines
Nahas, Nabil; Nourelfath, Mustapha
2018-01-01
This article considers a series manufacturing line composed of several machines separated by intermediate buffers of finite capacity. The goal is to find the optimal number of preventive maintenance actions performed on each machine, the optimal selection of machines and the optimal buffer allocation plan that minimize the total system cost, while providing the desired system throughput level. The mean times between failures of all machines are assumed to increase when applying periodic preventive maintenance. To estimate the production line throughput, a decomposition method is used. The decision variables in the formulated optimal design problem are buffer levels, types of machines and times between preventive maintenance actions. Three heuristic approaches are developed to solve the formulated combinatorial optimization problem. The first heuristic consists of a genetic algorithm, the second is based on the nonlinear threshold accepting metaheuristic and the third is an ant colony system. The proposed heuristics are compared and their efficiency is shown through several numerical examples. It is found that the nonlinear threshold accepting algorithm outperforms the genetic algorithm and ant colony system, while the genetic algorithm provides better results than the ant colony system for longer manufacturing lines.
Support vector machines optimization based theory, algorithms, and extensions
Deng, Naiyang; Zhang, Chunhua
2013-01-01
Support Vector Machines: Optimization Based Theory, Algorithms, and Extensions presents an accessible treatment of the two main components of support vector machines (SVMs): classification problems and regression problems. The book emphasizes the close connection between optimization theory and SVMs, since optimization is one of the pillars on which SVMs are built. The authors share insight on many of their research achievements. They give a precise interpretation of statistical learning theory for C-support vector classification. They also discuss regularized twi...
Directory of Open Access Journals (Sweden)
Debkalpa Goswami
2015-03-01
Ultrasonic machining (USM) is a mechanical material removal process used to erode holes and cavities in hard or brittle workpieces by using shaped tools, high-frequency mechanical motion and an abrasive slurry. Unlike other non-traditional machining processes, such as laser beam and electrical discharge machining, the USM process does not thermally damage the workpiece or introduce significant levels of residual stress, which is important for the survival of materials in service. To achieve enhanced machining performance and better machined job characteristics, it is often required to determine the optimal control parameter settings of a USM process. Earlier mathematical approaches for parametric optimization of USM processes have mostly yielded near-optimal or sub-optimal solutions. In this paper, two almost unexplored non-conventional optimization techniques, i.e. the gravitational search algorithm (GSA) and the fireworks algorithm (FWA), are applied for parametric optimization of USM processes. The optimization performance of these two algorithms is compared with that of other popular population-based algorithms, and the effects of their algorithm parameters on the derived optimal solutions and computational speed are also investigated. It is observed that FWA provides the best optimal results for the considered USM processes.
Extreme Learning Machine and Particle Swarm Optimization in optimizing CNC turning operation
Janahiraman, Tiagrajah V.; Ahmad, Nooraziah; Hani Nordin, Farah
2018-04-01
The CNC machine is controlled by manipulating cutting parameters that directly influence the process performance. Many optimization methods have been applied to obtain the optimal cutting parameters for a desired performance function. Nonetheless, industry still uses traditional techniques to obtain those values; lack of knowledge of optimization techniques is the main reason this issue persists. Therefore, the simple yet easy to implement Optimal Cutting Parameters Selection System is introduced to help the manufacturer easily understand and determine the best optimal parameters for their turning operation. This new system consists of two stages: modelling and optimization. For modelling of input-output and in-process parameters, a hybrid of the Extreme Learning Machine and Particle Swarm Optimization is applied. This modelling technique tends to converge faster than other artificial intelligence techniques and gives accurate results. For the optimization stage, Particle Swarm Optimization is again used to obtain the optimal cutting parameters based on the performance function preferred by the manufacturer. Overall, the system can reduce the gap between academia and industry by introducing a simple yet easy to implement optimization technique that gives accurate results and is fast.
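The Particle Swarm Optimization stage can be sketched as below. The surrogate `machining_cost` function stands in for a trained process model (such as the Extreme Learning Machine model mentioned above) and is entirely hypothetical, as are the parameter bounds and penalty terms.

```python
import random

def machining_cost(v, f):
    """Hypothetical surrogate: trades processing time against tool wear
    and surface roughness. Not a real turning model."""
    time_term = 1000.0 / (v * f)     # processing time falls with speed and feed
    wear_term = (v / 200.0) ** 3     # tool wear rises sharply with speed
    rough_term = 50.0 * f ** 2       # surface roughness penalty on feed
    return time_term + wear_term + rough_term

BOUNDS = [(50.0, 300.0), (0.05, 0.5)]  # cutting speed (m/min), feed (mm/rev)

def pso(n=20, iters=100, w=0.7, c1=1.5, c2=1.5):
    """Plain global-best PSO with clamped positions."""
    random.seed(3)
    pos = [[random.uniform(lo, hi) for lo, hi in BOUNDS] for _ in range(n)]
    vel = [[0.0, 0.0] for _ in range(n)]
    pbest = [p[:] for p in pos]
    gbest = min(pbest, key=lambda p: machining_cost(*p))[:]
    for _ in range(iters):
        for i in range(n):
            for d, (lo, hi) in enumerate(BOUNDS):
                vel[i][d] = (w * vel[i][d]
                             + c1 * random.random() * (pbest[i][d] - pos[i][d])
                             + c2 * random.random() * (gbest[d] - pos[i][d]))
                pos[i][d] = min(max(pos[i][d] + vel[i][d], lo), hi)
            if machining_cost(*pos[i]) < machining_cost(*pbest[i]):
                pbest[i] = pos[i][:]
                if machining_cost(*pos[i]) < machining_cost(*gbest):
                    gbest = pos[i][:]
    return gbest

v_opt, f_opt = pso()
print(f"speed={v_opt:.1f} m/min, feed={f_opt:.3f} mm/rev")
```

The same loop serves both roles the abstract describes: tuning a surrogate model's weights and, as here, searching the cutting-parameter space itself.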
Analytical Model-Based Design Optimization of a Transverse Flux Machine
Energy Technology Data Exchange (ETDEWEB)
Hasan, Iftekhar; Husain, Tausif; Sozer, Yilmaz; Husain, Iqbal; Muljadi, Eduard
2017-02-16
This paper proposes an analytical machine design tool using magnetic equivalent circuit (MEC)-based particle swarm optimization (PSO) for a double-sided, flux-concentrating transverse flux machine (TFM). The magnetic equivalent circuit method is applied to analytically establish the relationship between the design objective and the input variables of prospective TFM designs. This is computationally less intensive and more time efficient than finite element solvers. A PSO algorithm is then used to design a machine with the highest torque density within the specified power range along with some geometric design constraints. The stator pole length, magnet length, and rotor thickness are the variables that define the optimization search space. Finite element analysis (FEA) was carried out to verify the performance of the MEC-PSO optimized machine. The proposed analytical design tool helps save computation time by at least 50% when compared to commercial FEA-based optimization programs, with results found to be in agreement with less than 5% error.
Directory of Open Access Journals (Sweden)
Aguirre, Erik; Lopez-Iturri, Peio; Azpilicueta, Leire; Astrain, José Javier; Villadangos, Jesús; Falcone, Francisco
2015-02-01
One of the main challenges in the implementation and design of context-aware scenarios is the adequate deployment strategy for Wireless Sensor Networks (WSNs), mainly due to the strong dependence of the radiofrequency physical layer on the surrounding media, which can lead to non-optimal network designs. In this work, radioplanning analysis for WSN deployment is proposed by employing a deterministic 3D ray launching technique in order to provide insight into complex wireless channel behavior in context-aware indoor scenarios. The proposed radioplanning procedure is validated with a testbed implemented with a Mobile Ad Hoc Network WSN following a chain configuration, enabling the analysis and assessment of a rich variety of parameters, such as received signal level, signal quality and estimation of power consumption. The adoption of deterministic radio channel techniques allows the design and further deployment of WSNs in heterogeneous wireless scenarios with optimized behavior in terms of coverage, capacity, quality of service and energy consumption.
Optimal Placement Algorithms for Virtual Machines
Bellur, Umesh; Rao, Chetan S; SD, Madhu Kumar
2010-01-01
Cloud computing provides a computing platform for the users to meet their demands in an efficient, cost-effective way. Virtualization technologies are used in the clouds to aid the efficient usage of hardware. Virtual machines (VMs) are utilized to satisfy the user needs and are placed on physical machines (PMs) of the cloud for effective usage of hardware resources and electricity in the cloud. Optimizing the number of PMs used helps in cutting down the power consumption by a substantial amo...
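Viewing VM placement as a bin-packing problem, a common baseline for minimizing the number of active PMs is the first-fit-decreasing heuristic sketched below. The single-resource capacity model and the demand values are illustrative assumptions, not the algorithms proposed in the paper.

```python
def place_vms_ffd(vm_demands, pm_capacity):
    """First-fit-decreasing heuristic for VM-to-PM placement.
    Returns a list of PMs, each a list of the VM demands it hosts."""
    pms = []  # each entry: [remaining_capacity, [hosted VM demands]]
    for demand in sorted(vm_demands, reverse=True):
        for pm in pms:
            if pm[0] >= demand:        # first PM with enough room
                pm[0] -= demand
                pm[1].append(demand)
                break
        else:
            pms.append([pm_capacity - demand, [demand]])  # open a new PM
    return [hosted for _, hosted in pms]

# Eight VMs with CPU demands, PMs of capacity 10
placement = place_vms_ffd([5, 7, 5, 2, 4, 2, 5, 1], pm_capacity=10)
print(len(placement), placement)
```

Here the total demand is 31 against a per-PM capacity of 10, so four PMs is a lower bound, and the heuristic attains it; fewer active PMs translates directly into lower power consumption.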
Fast machine-learning online optimization of ultra-cold-atom experiments.
Wigley, P B; Everitt, P J; van den Hengel, A; Bastian, J W; Sooriyabandara, M A; McDonald, G D; Hardman, K S; Quinlivan, C D; Manju, P; Kuhn, C C N; Petersen, I R; Luiten, A N; Hope, J J; Robins, N P; Hush, M R
2016-05-16
We apply an online optimization process based on machine learning to the production of Bose-Einstein condensates (BEC). BEC is typically created with an exponential evaporation ramp that is optimal for ergodic dynamics with two-body s-wave interactions and no other loss rates, but likely sub-optimal for real experiments. Through repeated machine-controlled scientific experimentation and observations our 'learner' discovers an optimal evaporation ramp for BEC production. In contrast to previous work, our learner uses a Gaussian process to develop a statistical model of the relationship between the parameters it controls and the quality of the BEC produced. We demonstrate that the Gaussian process machine learner is able to discover a ramp that produces high quality BECs in 10 times fewer iterations than a previously used online optimization technique. Furthermore, we show the internal model developed can be used to determine which parameters are essential in BEC creation and which are unimportant, providing insight into the optimization process of the system.
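The Gaussian-process learner loop can be sketched as follows, here with an upper-confidence-bound acquisition rule over a 1-D grid. The `quality` function, kernel length scale and single-parameter "ramp" are invented stand-ins for the experiment, not the authors' setup.

```python
import numpy as np

def rbf_kernel(a, b, length=0.3, var=1.0):
    """Squared-exponential covariance between 1-D point sets a and b."""
    d = a.reshape(-1, 1) - b.reshape(1, -1)
    return var * np.exp(-0.5 * (d / length) ** 2)

def gp_posterior(x_train, y_train, x_query, noise=1e-6):
    """GP posterior mean and std at x_query given observations."""
    K = rbf_kernel(x_train, x_train) + noise * np.eye(len(x_train))
    Ks = rbf_kernel(x_train, x_query)
    Kss = rbf_kernel(x_query, x_query)
    mean = Ks.T @ np.linalg.solve(K, y_train)
    cov = Kss - Ks.T @ np.linalg.solve(K, Ks)
    return mean, np.sqrt(np.clip(np.diag(cov), 0.0, None))

def quality(x):
    """Hypothetical 'condensate quality' as a function of one ramp parameter."""
    return np.exp(-(x - 0.65) ** 2 / 0.02) + 0.1 * np.sin(8 * x)

rng = np.random.default_rng(0)
xs = rng.uniform(0, 1, 3)                 # initial random settings
ys = quality(xs)
grid = np.linspace(0, 1, 201)
for _ in range(15):                       # online learning loop
    mean, std = gp_posterior(xs, ys, grid)
    x_next = grid[np.argmax(mean + 2.0 * std)]   # UCB acquisition
    xs = np.append(xs, x_next)                   # run the 'experiment'
    ys = np.append(ys, quality(x_next))
print(xs[np.argmax(ys)])
```

The statistical model is what distinguishes this from gradient-free search: the posterior mean can afterwards be inspected to see which parameter regions matter, mirroring the sensitivity analysis described in the abstract.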
Learning Machines Implemented on Non-Deterministic Hardware
Gupta, Suyog; Sindhwani, Vikas; Gopalakrishnan, Kailash
2014-01-01
This paper highlights new opportunities for designing large-scale machine learning systems as a consequence of blurring traditional boundaries that have allowed algorithm designers and application-level practitioners to stay -- for the most part -- oblivious to the details of the underlying hardware-level implementations. The hardware/software co-design methodology advocated here hinges on the deployment of compute-intensive machine learning kernels onto compute platforms that trade-off deter...
Boosting reversible pushdown machines by preprocessing
DEFF Research Database (Denmark)
Axelsen, Holger Bock; Kutrib, Martin; Malcher, Andreas
2016-01-01
... languages, whereas for reversible pushdown automata the accepted family of languages lies strictly between the reversible deterministic context-free languages and the real-time deterministic context-free languages. Moreover, it is shown that the computational power of both types of machines is not changed by allowing the preprocessing sequential transducer to work irreversibly. Finally, we examine the closure properties of the family of languages accepted by such machines.
Optimal Overhaul-Replacement Policies for Repairable Machine Sold with Warranty
Directory of Open Access Journals (Sweden)
Kusmaningrum Soemadi
2014-12-01
This research deals with an overhaul-replacement policy for a repairable machine sold with a Free Replacement Warranty (FRW). The machine will be used for a finite horizon T (T < ∞) and evaluated at a fixed interval s (s < T). At each evaluation point, the buyer considers three alternative decisions: keep the machine, overhaul it, or replace it with a new identical one. An overhaul can reduce the machine's virtual age, but not to the point that the machine is as good as new. If the machine fails during the warranty period, it is rectified at no cost to the buyer. Any failure occurring before and after the expiry of the warranty is restored by minimal repair. An overhaul-replacement policy is formulated for such machines using a dynamic programming approach to obtain the buyer's optimal policy. The results show that a significant rejuvenation effect due to overhaul may extend the length of the machine life cycle and delay the replacement decision. In contrast, the warranty stimulates early machine replacement and thereby increases the replacement frequency for a certain range of replacement costs. This demonstrates that to minimize the total ownership cost over T, the buyer needs to consider the minimal repair cost reduction due to the rejuvenation effect of overhaul as well as the warranty benefit due to replacement. Numerical examples are presented both to illustrate the optimal policy and to describe the behavior of the optimal solution.
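The backward dynamic programming formulation over keep/overhaul/replace decisions can be sketched as follows. All cost figures, the warranty length and the virtual-age reduction below are hypothetical illustrations, not the paper's data.

```python
# Hypothetical parameters (not from the paper)
T = 6                         # horizon in evaluation intervals
C_REP, C_OVH = 100.0, 30.0    # replacement and overhaul prices
WARRANTY = 2                  # FRW length: repairs are free below this age
DELTA = 2                     # virtual-age reduction achieved by an overhaul

def repair_cost(age):
    """Expected minimal-repair cost over one interval at a given virtual age."""
    return 0.0 if age < WARRANTY else 4.0 * age

def optimal_policy():
    """Backward DP over (evaluation point, virtual age) states."""
    V = {T: {a: 0.0 for a in range(T + 1)}}   # terminal cost-to-go is zero
    policy = {}
    for t in range(T - 1, -1, -1):
        V[t], policy[t] = {}, {}
        for a in range(t + 1):
            options = {
                'keep': repair_cost(a) + V[t + 1][a + 1],
                'overhaul': C_OVH + repair_cost(max(a - DELTA, 0))
                            + V[t + 1][max(a - DELTA, 0) + 1],
                'replace': C_REP + repair_cost(0) + V[t + 1][1],
            }
            best = min(options, key=options.get)
            V[t][a], policy[t][a] = options[best], best
    return V[0][0], policy

total, policy = optimal_policy()
a, actions = 0, []
for t in range(T):                 # follow the optimal decisions forward
    act = policy[t][a]
    actions.append(act)
    a = {'keep': a + 1, 'overhaul': max(a - DELTA, 0) + 1, 'replace': 1}[act]
print(total, actions)
```

With these invented numbers a mid-life overhaul is chosen and replacement never pays off within the horizon, illustrating the abstract's point that a strong rejuvenation effect delays replacement.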
Optimization of machining fixture layout for tolerance requirements ...
African Journals Online (AJOL)
Dimensional accuracy of workpart under machining is strongly influenced by the layout of the fixturing elements like locators and clamps. Setup or geometrical errors in locators result in overall machining error of the feature under consideration. Therefore it is necessary to ensure that the layout is optimized for the desired ...
OPTIMIZATION OF MAGNETIZATION AND DEMAGNETIZATION REGIMES OF A STOPPED THREE-PHASE SYNCHRONOUS MACHINE
Directory of Open Access Journals (Sweden)
V. A. VOLKOV
2018-05-01
Purpose. Investigation and optimization (minimization) of electric energy losses in a stopped synchronous machine with a thyristor exciter under magnetization and demagnetization. Methodology. Operator and variational calculus, mathematical analysis and computer simulation. Findings. A mathematical description of the system under study, "thyristor exciter – stopped synchronous machine", is developed, comprising analytical dependencies for the electromagnetic processes as well as the total power and energy losses in the system under magnetization and demagnetization of the synchronous machine. The optimal time functions for changing the flux linkage of the damper winding and the excitation current of the stopped synchronous machine are found, which minimize the energy losses in the system when the machine is magnetized and demagnetized. The dependences of the total energy losses in the system on the durations of the magnetization and demagnetization of the machine are calculated and compared for different types of flux-linkage trajectories (linear, parabolic and the proposed optimal ones), as well as for linear and exponential changes in the excitation current of the machine. The analytical dependencies are obtained using calculations of the electromagnetic and energy transient processes in the "thyristor exciter – stopped synchronous machine" system under the considered types of variation of the flux linkage and excitation current of the machine. Originality. It consists in finding the optimal trajectories of the time variation of the excitation current of a stopped synchronous machine and the optimal durations of its magnetization and demagnetization, which ensure minimization of energy losses in the system "thyristor exciter – stopped synchronous machine". Practical value. It consists in reducing unproductive energy losses in ...
Optimizing block-based maintenance under random machine usage
de Jonge, Bram; Jakobsons, Edgars
Existing studies on maintenance optimization generally assume that machines are either used continuously, or that times until failure do not depend on the actual usage. In practice, however, these assumptions are often not realistic. In this paper, we consider block-based maintenance optimization under random machine usage.
Mainardi Fan, Fernando; Schwanenberg, Dirk; Alvarado, Rodolfo; Assis dos Reis, Alberto; Naumann, Steffi; Collischonn, Walter
2016-04-01
Hydropower is the most important electricity source in Brazil. During recent years it accounted for 60% to 70% of the total electric power supply. The marginal costs of hydropower are lower than those of thermal power plants; therefore, there is a strong economic motivation to maximize its share. On the other hand, hydropower depends on the availability of water, which has a natural variability. Its extremes lead to the risks of power production deficits during droughts and of safety issues in the reservoir and downstream river reaches during flood events. One building block of the proper management of hydropower assets is the short-term forecast of reservoir inflows as input for an online, event-based optimization of the release strategy. While deterministic forecasts and optimization schemes are the established techniques for short-term reservoir management, the use of probabilistic ensemble forecasts and stochastic optimization techniques is receiving growing attention and a number of studies have shown its benefits. The present work shows one of the first hindcasting and closed-loop control experiments for a multi-purpose hydropower reservoir in a tropical region of Brazil. The case study is the hydropower project (HPP) Três Marias, located in southeast Brazil. The HPP reservoir is operated with two main objectives: (i) hydroelectricity generation and (ii) flood control at Pirapora City, located 120 km downstream of the dam. In the experiments, precipitation forecasts based on observed data, together with deterministic and probabilistic forecasts with 50 ensemble members of the ECMWF, are used as forcing of the MGB-IPH hydrological model to generate streamflow forecasts over a period of 2 years. The online optimization relies on deterministic and multi-stage stochastic versions of a model predictive control scheme. Results for the perfect forecasts show the potential benefit of the online optimization and indicate a desired forecast lead time of 30 days. In comparison, the use of ...
Simulation of Quantum Computation: A Deterministic Event-Based Approach
Michielsen, K.; De Raedt, K.; De Raedt, H.
2005-01-01
We demonstrate that locally connected networks of machines that have primitive learning capabilities can be used to perform a deterministic, event-based simulation of quantum computation. We present simulation results for basic quantum operations such as the Hadamard and the controlled-NOT gate, and ...
Hamada, Aulia; Rosyidi, Cucuk Nur; Jauhari, Wakhid Ahmad
2017-11-01
Minimizing processing time in a production system can increase the efficiency of a manufacturing company. Processing time is influenced by the application of modern technology and by the machining parameters. Modern technology can be applied through CNC machining; turning is one machining process that can be performed on a CNC machine. However, the machining parameters affect not only the processing time but also the environmental impact. Hence, an optimization model is needed to optimize the machining parameters so as to minimize both the processing time and the environmental impact. This research develops a multi-objective optimization model to minimize the processing time and environmental impact in the CNC turning process, yielding optimal decision variables of cutting speed and feed rate. Environmental impact is converted from environmental burden through the use of eco-indicator 99. The model was solved using the OptQuest optimization software from Oracle Crystal Ball.
Parameter optimization of electrochemical machining process using black hole algorithm
Singh, Dinesh; Shukla, Rajkamal
2017-12-01
Advanced machining processes are significant as higher accuracy is required in machined components in the manufacturing industries. Parameter optimization of machining processes gives optimum control to achieve the desired goals. In this paper, the electrochemical machining (ECM) process is considered to evaluate the performance of the black hole algorithm (BHA). BHA builds on the basic idea of black hole theory and has few operating parameters to tune. The two performance parameters, material removal rate (MRR) and overcut (OC), are considered separately to obtain optimum machining parameter settings using BHA. The variations of the process parameters with respect to the performance parameters are reported for a better and more effective understanding of the considered process, using a single objective at a time. The results obtained using BHA are found to be better when compared with the results of other metaheuristic algorithms, such as the genetic algorithm (GA), artificial bee colony (ABC) and biogeography-based optimization (BBO), attempted by previous researchers.
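The black hole search can be sketched as follows: the best star acts as the black hole, all stars drift toward it, and any star crossing the event horizon is replaced by a fresh random star. The surrogate `overcut` function, its parameters and the normalized bounds are invented for illustration, not the paper's ECM model.

```python
import random

def overcut(params):
    """Hypothetical surrogate for ECM overcut versus two normalized
    process parameters (e.g. voltage and electrolyte flow, scaled to [0, 1])."""
    v, q = params
    return (v - 0.3) ** 2 + (q - 0.7) ** 2 + 0.1 * v * q

def black_hole_optimize(n_stars=15, iters=200, dim=2):
    random.seed(7)
    stars = [[random.random() for _ in range(dim)] for _ in range(n_stars)]
    for _ in range(iters):
        fitness = [overcut(s) for s in stars]
        bh = stars[fitness.index(min(fitness))][:]   # best star = black hole
        radius = min(fitness) / sum(fitness)         # event horizon
        for i, s in enumerate(stars):
            # Drift every star toward the black hole
            stars[i] = [x + random.random() * (b - x) for x, b in zip(s, bh)]
            dist = sum((x - b) ** 2 for x, b in zip(stars[i], bh)) ** 0.5
            if dist < radius and stars[i] != bh:
                # Absorbed: replace with a freshly generated star
                stars[i] = [random.random() for _ in range(dim)]
    return min(stars, key=overcut)

best = black_hole_optimize()
print(best, overcut(best))
```

The absorption-and-regeneration step is what provides exploration; apart from the population size and iteration count there is nothing to tune, which is the "few operating parameters" property the abstract highlights.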
Optimization of machining parameters of hard porcelain on a CNC ...
African Journals Online (AJOL)
Optimization of machining parameters of hard porcelain on a CNC machine by the Taguchi and RSM methods. Experiments were conducted using Taguchi's L27 orthogonal array to ...
Deterministic Echo State Networks Based Stock Price Forecasting
Directory of Open Access Journals (Sweden)
Jingpei Dan
2014-01-01
Full Text Available Echo state networks (ESNs), as efficient and powerful computational models for approximating nonlinear dynamical systems, have been successfully applied in financial time series forecasting. Reservoir construction in standard ESNs relies on trial and error in real applications due to a series of randomized model-building stages. A novel form of ESN with a deterministically constructed reservoir is competitive with the standard ESN, offering minimal complexity and the possibility of optimizing ESN specifications. In this paper, the forecasting performance of deterministic ESNs is investigated in stock price prediction applications. The experimental results on two benchmark datasets (Shanghai Composite Index and S&P 500) demonstrate that deterministic ESNs outperform the standard ESN in both accuracy and efficiency, which indicates the promise of deterministic ESNs for financial prediction.
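Deterministically constructed reservoirs of this kind are typically built in the spirit of the simple cycle reservoir: a ring of identical weights plus a fixed input sign pattern. A self-contained sketch on a toy sine series (not the paper's stock datasets; sizes and weights are illustrative) could be:

```python
import numpy as np

def cycle_reservoir(n=50, r=0.9, v=0.5):
    """Simple cycle reservoir: ring topology, a single recurrent weight value,
    and a deterministic input sign pattern (here taken from the digits of pi)."""
    W = np.zeros((n, n))
    for i in range(n):
        W[i, (i - 1) % n] = r                     # one cycle of identical weights
    digits = "1415926535897932384626433832795028841971693993751058209749445923"
    signs = np.array([1.0 if int(d) < 5 else -1.0 for d in digits[:n]])
    return W, v * signs

def run_esn(u, W, w_in):
    """Drive the reservoir with input sequence u and collect the state trajectory."""
    x = np.zeros(W.shape[0])
    states = []
    for ut in u:
        x = np.tanh(W @ x + w_in * ut)
        states.append(x.copy())
    return np.array(states)

# one-step-ahead prediction of a sine wave with a ridge-regression readout
t = np.arange(600)
u = np.sin(0.1 * t)
W, w_in = cycle_reservoir()
X = run_esn(u[:-1], W, w_in)
y = u[1:]
washout = 100                                     # discard initial transient states
A, b = X[washout:], y[washout:]
w_out = np.linalg.solve(A.T @ A + 1e-6 * np.eye(A.shape[1]), A.T @ b)
mse = float(np.mean((A @ w_out - b) ** 2))
```

No stage of the construction is randomized, so repeated runs give identical reservoirs, which is precisely the reproducibility advantage the abstract highlights.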
Optimal replacement time estimation for machines and equipment based on cost function
J. Šebo; J. Buša; P. Demeč; J. Svetlík
2013-01-01
The article deals with a multidisciplinary issue of estimating the optimal replacement time for machines. The considered categories of machines, for which the optimization method is usable, are of metallurgical and engineering production. Different models of the cost function are considered (both with one and two variables). Parameters of the models were calculated through the least squares method. Model testing shows that all are good enough, so simpler models suffice for estimating the optimal replacement time. In addition to the testing of models, we developed a method (tested on a selected simple model) which enables us, in actual real time (with a limited data set), to indicate the optimal replacement time. The indicated time moment is close enough to the optimal replacement time t*.
Influence of TiB2 particles on machinability and machining parameter optimization of TiB2/Al MMCs
Directory of Open Access Journals (Sweden)
Ruisong JIANG
2018-01-01
Full Text Available In situ formed TiB2 particle reinforced aluminum matrix composites (TiB2/Al MMCs) have some extraordinary properties which make them a promising material for high-performance aero-engine blades. Due to the influence of TiB2 particles, machinability is still a problem which restricts the application of TiB2/Al MMCs. In order to meet industrial requirements, the influence of TiB2 particles on the machinability of TiB2/Al MMCs was investigated experimentally. Moreover, the optimal machining conditions for this kind of MMC were investigated in this study. The major conclusions are: (1) the machining force of TiB2/Al MMCs is larger than that of the non-reinforced alloy and is mainly controlled by feed rate; (2) the residual stress of TiB2/Al MMCs is compressive while that of the non-reinforced alloy is nearly neutral; (3) the surface roughness of TiB2/Al MMCs is smaller than that of the non-reinforced alloy at the same cutting speed, but the reverse was observed when the feed rate increased; (4) a multi-objective optimization model for surface roughness and material removal rate (MRR) was established, and a set of optimal parameter combinations for machining was obtained. The results show a great difference from SiC particle reinforced MMCs and provide a useful guide for better control of the machining process of this material.
A Parameter Communication Optimization Strategy for Distributed Machine Learning in Sensors.
Zhang, Jilin; Tu, Hangdi; Ren, Yongjian; Wan, Jian; Zhou, Li; Li, Mingwei; Wang, Jue; Yu, Lifeng; Zhao, Chang; Zhang, Lei
2017-09-21
In order to utilize the distributed characteristics of sensors, distributed machine learning has become the mainstream approach, but the differing computing capabilities of sensors and network delays greatly influence the accuracy and the convergence rate of the machine learning model. Our paper describes a reasonable parameter communication optimization strategy to balance the training overhead and the communication overhead. We extend the fault tolerance of iterative-convergent machine learning algorithms and propose the Dynamic Finite Fault Tolerance (DFFT). Based on the DFFT, we implement a parameter communication optimization strategy for distributed machine learning, named the Dynamic Synchronous Parallel Strategy (DSP), which uses a performance monitoring model to dynamically adjust the parameter synchronization strategy between worker nodes and the Parameter Server (PS). This strategy makes full use of the computing power of each sensor, ensures the accuracy of the machine learning model, and prevents model training from being disturbed by tasks unrelated to the sensors.
Permanent Magnet Flux-Switching Machine, Optimal Design and Performance Analysis
Directory of Open Access Journals (Sweden)
Liviu Emilian Somesan
2013-01-01
Full Text Available In this paper an analytical sizing-design procedure for a typical permanent magnet flux-switching machine (PMFSM) with 12 stator poles and 10 rotor poles is presented. An optimal design, based on the Hooke-Jeeves method with the objective function of maximum torque density, is performed. The results were validated via two-dimensional finite element analysis (2D-FEA) applied to the optimized structure. The influence of the permanent magnet (PM) dimensions and type, as well as of the rotor pole shape, on the machine performance was also studied via 2D-FEA.
International Nuclear Information System (INIS)
Wong, Ka In; Wong, Pak Kin
2017-01-01
Highlights: • A new calibration method is proposed for dual-injection engines under biofuel blends. • Sparse Bayesian extreme learning machine and the flower pollination algorithm are employed in the proposed method. • An SI engine is retrofitted for operating under a dual-injection strategy. • The proposed method is verified experimentally under two idle-speed conditions. • Comparison with other machine learning methods and optimization algorithms is conducted. - Abstract: Although many combinations of biofuel blends are available in the market, it is more beneficial to vary the ratio of biofuel blends at different engine operating conditions for optimal engine performance. Dual-injection engines have the potential to implement such a function. However, while optimal engine calibration is critical for achieving high performance, the use of two injection systems, together with other modern engine technologies, makes the calibration of dual-injection engines a very complicated task. The traditional trial-and-error-based calibration approach can no longer be adopted as it would be time-, fuel- and labor-consuming. Therefore, a new and fast calibration method based on sparse Bayesian extreme learning machine (SBELM) and metaheuristic optimization is proposed to optimize dual-injection engines operating with biofuels. A dual-injection spark-ignition engine fueled with ethanol and gasoline is employed for demonstration purposes. The engine response for various parameters is first acquired, and an engine model is then constructed using SBELM. With the engine model, the optimal engine settings are determined based on recently proposed metaheuristic optimization methods. Experimental results validate the optimal settings obtained with the proposed methodology, indicating that the use of machine learning and metaheuristic optimization for dual-injection engine calibration is effective and promising.
Live Replication of Paravirtual Machines
Stodden, Daniel
2009-01-01
Virtual machines offer a fair degree of system state encapsulation, which promotes practical advances in fault tolerance, system debugging, profiling and security applications. This work investigates deterministic replay and semi-active replication for system paravirtualization, a software discipline trading guest kernel binary compatibility for reduced dependency on costly trap-and-emulate techniques. A primary contribution is evidence that trace capturing under a piecewise deterministic exec...
An optimal maintenance policy for machine replacement problem using dynamic programming
Directory of Open Access Journals (Sweden)
Mohsen Sadegh Amalnik
2017-06-01
Full Text Available In this article, we present an acceptance sampling plan for the machine replacement problem based on the backward dynamic programming model. Discounted dynamic programming is used to solve a two-state machine replacement problem. We plan to design a model for maintenance by considering the quality of the items produced. The purpose of the proposed model is to determine the optimal threshold policy for maintenance in a finite time horizon. We create a decision tree based on sequential sampling with the actions renew, repair and do nothing, and wish to achieve an optimal threshold for deciding among renewing, repairing and continuing production in order to minimize the expected cost. Results show that the optimal policy is sensitive to the data, namely the probability of defective machines and the parameters defined in the model. This can be clearly demonstrated by a sensitivity analysis technique.
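The backward dynamic programming scheme behind such a two-state replacement model can be sketched as follows; all costs, probabilities and the horizon length below are illustrative toy numbers, not the paper's data:

```python
def machine_replacement_dp(T=12, p_fail=0.2, c_run=(10.0, 30.0),
                           c_repair=15.0, c_renew=40.0, discount=0.95):
    """Backward induction over a finite horizon for a two-state machine.
    States: 0 = in-control, 1 = out-of-control.
    Actions: 'continue' (machine may degrade), 'repair' (back to good state),
    'renew' (replace with a new machine)."""
    V = [0.0, 0.0]                       # terminal values at the end of the horizon
    policy = []
    for _ in range(T):
        newV, stage = [], []
        good_next = (1 - p_fail) * V[0] + p_fail * V[1]
        for s in (0, 1):
            opts = {
                "continue": c_run[s] + discount * (good_next if s == 0 else V[1]),
                "repair": c_repair + c_run[0] + discount * good_next,
                "renew": c_renew + c_run[0] + discount * good_next,
            }
            act = min(opts, key=opts.get)           # cost-minimizing action
            stage.append(act)
            newV.append(opts[act])
        V = newV
        policy.append(stage)
    policy.reverse()                      # policy[t][state] = optimal action at stage t
    return V, policy

V, policy = machine_replacement_dp()
```

With these toy numbers the threshold structure emerges immediately: keep running a good machine, repair a bad one (repair being cheaper than renewal here).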
Optimization of operating schedule of machines in granite industry using evolutionary algorithms
International Nuclear Information System (INIS)
Loganthurai, P.; Rajasekaran, V.; Gnanambal, K.
2014-01-01
Highlights: • The operating times of machines in granite industries were studied. • Operating times have been optimized using evolutionary algorithms such as PSO and DE. • The maximum demand has been reduced. • Hence the electricity cost of the industry and feeder stress have been reduced. - Abstract: Electrical energy consumption cost plays an important role in the production cost of any industry. The electrical energy consumption cost is calculated as a two-part tariff: the first part is the maximum demand cost and the second part is the energy consumption or unit cost (kW h). The maximum demand cost can be reduced without affecting production. This paper focuses on the reduction of maximum demand by proper operating schedules of major equipment. For this analysis, various granite industries are considered. The major equipment in granite industries comprises the cutting machine, polishing machine and compressor. To reduce the maximum demand, the operating time of the polishing machine is rescheduled by optimization techniques such as Differential Evolution (DE) and particle swarm optimization (PSO). The maximum demand costs are calculated before and after rescheduling. The results show that if the machines are optimally operated, the cost is reduced. Both DE and PSO algorithms reduce the maximum demand cost at the same rate for all the granite industries. However, the optimum schedule obtained by DE reduces the feeder power flow more than the PSO schedule does.
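The PSO half of such a comparison can be illustrated with a generic minimal particle swarm optimizer; the objective below is a toy stand-in (penalizing deviation of staggered machine start times from an even spread), not the actual two-part-tariff demand model:

```python
import random

def pso(objective, bounds, n_particles=15, iters=150, w=0.7, c1=1.5, c2=1.5, seed=3):
    """Minimal particle swarm optimizer over a box-constrained search space."""
    rng = random.Random(seed)
    dim = len(bounds)
    pos = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                       # personal bests
    pbest_f = [objective(p) for p in pos]
    g = min(range(n_particles), key=pbest_f.__getitem__)
    gbest, gbest_f = pbest[g][:], pbest_f[g]          # global best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] = min(max(pos[i][d] + vel[i][d], bounds[d][0]), bounds[d][1])
            f = objective(pos[i])
            if f < pbest_f[i]:
                pbest[i], pbest_f[i] = pos[i][:], f
                if f < gbest_f:
                    gbest, gbest_f = pos[i][:], f
    return gbest, gbest_f

# toy objective: squared deviation of 3 start times from an even 2-hour stagger
best, val = pso(lambda v: sum((v[i] - 2 * i) ** 2 for i in range(3)), [(0.0, 8.0)] * 3)
```

A real demand-reduction model would replace the toy objective with the peak kW drawn across overlapping machine schedules.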
Tooth-coil permanent magnet synchronous machine design for special applications
Energy Technology Data Exchange (ETDEWEB)
Ponomarev, P.
2013-11-01
This doctoral thesis presents a study on the design of tooth-coil permanent magnet synchronous machines. The electromagnetic properties of concentrated non-overlapping winding permanent magnet synchronous machines, or simply tooth-coil permanent magnet synchronous machines (TC-PMSMs), are studied in detail. It is shown that current linkage harmonics play a deterministic role in the behavior of this type of machine. Important contributions are presented regarding the calculation of TC-PMSM parameters, particularly the estimation of inductances. The current linkage harmonics essentially define the air-gap harmonic leakage inductance, rotor losses and localized temporal inductance variation. It is proven by FEM analysis that inductance variation caused by local temporal harmonic saturation results in considerable torque ripple, and can influence sensorless control capabilities. An example case studies an integrated application of TC-IPMSMs in hybrid off-highway working vehicles. A methodology for increasing the efficiency of working vehicles is introduced. It comprises several approaches: hybridization, working operations optimization, component optimization and integration. As a result of component optimization and integration, a novel integrated electro-hydraulic energy converter (IEHEC) for off-highway working vehicles is designed. The IEHEC can considerably increase the operational efficiency of a hybrid working vehicle. The energy converter consists of an axial-piston hydraulic machine and an integrated TC-IPMSM built on the same shaft. The compact assembly of the electrical and hydraulic machines enhances the ability to find applications for such a device in the mobile environment of working vehicles. Usage of hydraulic fluid, typically used in the working actuators, enables direct-immersion oil cooling of the designed electrical machine, and further increases the torque and power densities of the whole device. (orig.)
Minimum Time Trajectory Optimization of CNC Machining with Tracking Error Constraints
Directory of Open Access Journals (Sweden)
Qiang Zhang
2014-01-01
Full Text Available An off-line optimization approach for high-precision minimum-time feedrate in CNC machining is proposed. Besides the ordinarily considered velocity, acceleration, and jerk constraints, a dynamic performance constraint on each servo drive is also considered in this optimization problem to improve the tracking precision along the optimized feedrate trajectory. Tracking error is applied to indicate the servo dynamic performance of each axis. By using variable substitution, the tracking-error-constrained minimum-time trajectory planning problem is formulated as a nonlinear path-constrained optimal control problem. The bang-bang structure of the constraints along the optimal trajectory is proved in this paper; then a novel constraint handling method is proposed to realize a convex-optimization-based solution of the nonlinear constrained optimal control problem. A simple ellipse feedrate planning test is presented to demonstrate the effectiveness of the approach. Then the practicability and robustness of the trajectory generated by the proposed approach are demonstrated by a butterfly contour machining example.
Optimal Rules for Single Machine Scheduling with Stochastic Breakdowns
Directory of Open Access Journals (Sweden)
Jinwei Gu
2014-01-01
Full Text Available This paper studies the problem of scheduling a set of jobs on a single machine subject to stochastic breakdowns, where jobs have to be restarted if preemptions occur because of breakdowns. The breakdown process of the machine is independent of the jobs processed on the machine. The processing times required to complete the jobs are constants if no breakdown occurs. The machine uptimes are independently and identically distributed (i.i.d.) and follow a uniform distribution. It is proved that the Longest Processing Time first (LPT) rule minimizes the expected makespan. For the large-scale problem, it is also shown that the Shortest Processing Time first (SPT) rule is optimal for minimizing the expected total completion time of all jobs.
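The SPT rule's optimality for total completion time is a classical result and can be checked exhaustively on a tiny deterministic (no-breakdown) instance:

```python
from itertools import permutations

def total_completion_time(times):
    """Sum of job completion times when jobs run in the given order on one machine."""
    t, total = 0.0, 0.0
    for p in times:
        t += p          # this job finishes at time t
        total += t
    return total

jobs = [4.0, 1.0, 3.0, 2.0]
spt = sorted(jobs)                 # Shortest Processing Time first: [1, 2, 3, 4]
brute_force_min = min(total_completion_time(p) for p in permutations(jobs))
# SPT matches the optimum over all 4! orders
```

The paper's contribution is that this ordering remains optimal in expectation under the stated stochastic breakdown-and-restart model, which the brute-force check above does not capture.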
Jointly Production and Correlated Maintenance Optimization for Parallel Leased Machines
Directory of Open Access Journals (Sweden)
Tarek ASKRI
2017-04-01
Full Text Available This paper deals with the optimization of a preventive maintenance strategy correlated with production for a manufacturing system composed of several parallel machines under a lease contract. In order to minimize the total cost of production and maintenance by reducing production system interruptions due to maintenance activities, a correlated group preventive maintenance policy is developed using the gravity center approach (GCA). The aim of this study is to determine an economical production plan and an optimal group preventive maintenance interval Tn at which all machines are maintained simultaneously. An analytical correlation between the failure rate of machines and the production level is considered, and the impact of the preventive maintenance policy on the production plan is studied. Finally, the proposed maintenance policy (GPM) is compared with an individual preventive maintenance approach (IPM) in order to illustrate its efficiency.
Topology Optimization of a High-Temperature Superconducting Field Winding of a Synchronous Machine
DEFF Research Database (Denmark)
Pozzi, Matias; Mijatovic, Nenad; Jensen, Bogi Bech
2013-01-01
This paper presents topology optimization (TO) of the high-temperature superconductor (HTS) field winding of an HTS synchronous machine. The TO problem is defined in order to find the minimum HTS material usage for a given HTS synchronous machine design. Optimization is performed using a modified genetic algorithm with local optimization search based on on/off sensitivity analysis. The results show an optimal HTS coil distribution, achieving compact designs with a maximum of approximately 22% of the available space for the field winding occupied with HTS tape. In addition, this paper describes potential HTS savings, which could be achieved using multiple power supplies for the excitation of the machine. Using the TO approach combined with two excitation currents, an additional HTS saving of 9.1% can be achieved.
Machine performance assessment and enhancement for a hexapod machine
Energy Technology Data Exchange (ETDEWEB)
Mou, J.I. [Arizona State Univ., Tempe, AZ (United States); King, C. [Sandia National Labs., Livermore, CA (United States). Integrated Manufacturing Systems Center
1998-03-19
The focus of this study is to develop a sensor-fused process modeling and control methodology to model, assess, and then enhance the performance of a hexapod machine for precision product realization. A deterministic modeling technique was used to derive models for machine performance assessment and enhancement. A sensor fusion methodology was adopted to identify the parameters of the derived models. Empirical models and computational algorithms were also derived and implemented to model, assess, and then enhance the machine performance. The developed sensor fusion algorithms can be implemented on a PC-based open-architecture controller to receive information from various sensors, assess the status of the process, determine the proper action, and deliver the command to actuators for task execution. This will enhance a hexapod machine's capability to produce workpieces within the imposed dimensional tolerances.
Optimization of line configuration and balancing for flexible machining lines
Liu, Xuemei; Li, Aiping; Chen, Zurui
2016-05-01
Line configuration and balancing is to select the type of line and allot a given set of operations as well as machines to a sequence of workstations to realize high-efficiency production. Most of the current research on machining line configuration and balancing problems is related to dedicated transfer lines with dedicated machine workstations. With growing trends towards great product variety and fluctuations in market demand, dedicated transfer lines are being replaced with flexible machining lines composed of identical CNC machines. This paper deals with the line configuration and balancing problem for flexible machining lines. The objective is to assign operations to workstations and find the sequence of execution, specifying the number of machines in each workstation while minimizing the line cycle time and the total number of machines. This problem is subject to precedence, clustering, accessibility and capacity constraints among the features, operations, setups and workstations. A mathematical model and a heuristic algorithm based on a feature group strategy and polychromatic sets theory are presented to find an optimal solution. The feature group strategy and polychromatic sets theory are used to establish the constraint model. A heuristic operation sequencing and assignment algorithm is given. An industrial case study is carried out, and multiple optimal solutions in different line configurations are obtained. The case study results show that the solutions with shorter cycle time and higher line balancing rate demonstrate the feasibility and effectiveness of the proposed algorithm. This research proposes a heuristic line configuration and balancing algorithm based on a feature group strategy and polychromatic sets theory which is able to provide better solutions while achieving an improvement in computing time.
Feedback optimal control of dynamic stochastic two-machine flowshop with a finite buffer
Directory of Open Access Journals (Sweden)
Thang Diep
2010-06-01
Full Text Available This paper examines the optimization of production in a tandem two-machine system producing a single part type, with each machine being subject to random breakdowns and repairs. An analytical model is formulated with a view to solving an optimal stochastic production problem for the system with machines having non-exponential uptime/downtime distributions. The model is obtained by using a dynamic programming approach and a semi-Markov process. The control problem aims to find the production rates needed by the machines to meet the demand rate, through minimization of the inventory/shortage cost. Using the Bellman principle, the optimality conditions obtained satisfy the Hamilton-Jacobi-Bellman equation, which depends on time and system states, and ultimately leads to a feedback control. Consequently, the new model enables us to improve the coefficient of variation (CV) of uptimes/downtimes to be less than one, whereas it is equal to one in the Markov model. Heuristic methods are used to address the problem because of the difficulty of the analytical model with several states, and to show which control law should be used in each system state (including Kanban, feedback and CONWIP control). Numerical methods are used to solve the optimality conditions and to show how a machine should produce.
Runtime Optimizations for Tree-Based Machine Learning Models
N. Asadi; J.J.P. Lin (Jimmy); A.P. de Vries (Arjen)
2014-01-01
Tree-based models have proven to be an effective solution for web ranking as well as other machine learning problems in diverse domains. This paper focuses on optimizing the runtime performance of applying such models to make predictions, specifically using gradient-boosted regression
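One runtime optimization commonly applied to tree ensembles, in the spirit of this work, is replacing pointer-chasing node objects with flat parallel arrays for cache-friendly traversal. The layout below is illustrative, not necessarily the paper's exact encoding:

```python
def predict_flat(feature, threshold, left, right, value, x):
    """Evaluate one tree stored as parallel arrays (struct-of-arrays layout).
    A left-child index of -1 marks a leaf."""
    i = 0
    while left[i] != -1:
        i = left[i] if x[feature[i]] <= threshold[i] else right[i]
    return value[i]

# tiny hand-built tree: the root splits on x[0] <= 0.5, both children are leaves
feature   = [0, -1, -1]
threshold = [0.5, 0.0, 0.0]
left      = [1, -1, -1]
right     = [2, -1, -1]
value     = [0.0, -1.0, 1.0]

lo = predict_flat(feature, threshold, left, right, value, [0.2])  # left leaf
hi = predict_flat(feature, threshold, left, right, value, [0.9])  # right leaf
```

For an ensemble, predictions from many such arrays are summed; the flat layout avoids per-node object overhead and keeps traversal data contiguous in memory.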
Optimization on robot arm machining by using genetic algorithms
Liu, Tung-Kuan; Chen, Chiu-Hung; Tsai, Shang-En
2007-12-01
In this study, an optimization problem in robot arm machining is formulated and solved by using genetic algorithms (GAs). The proposed approach adopts a direct kinematics model and utilizes the GA's global search ability to find the optimum solution. The direct kinematics equations of the robot arm are formulated and can be used to compute the end-effector coordinates. Based on these, the objective of optimum machining along a set of points can be evolutionarily evaluated using the distance between machining points and end-effector positions. Besides, a 3D CAD application, CATIA, is used to build the 3D models of the robot arm, workpieces and their components. A simulated experiment in CATIA is used to verify the computation results first, and practical control of the robot arm through the RS232 port is also performed. From the results, this approach is proved to be robust and can be suitable for most machining needs when robot arms are adopted as the machining tools.
A coherent Ising machine for 2000-node optimization problems
Inagaki, Takahiro; Haribara, Yoshitaka; Igarashi, Koji; Sonobe, Tomohiro; Tamate, Shuhei; Honjo, Toshimori; Marandi, Alireza; McMahon, Peter L.; Umeki, Takeshi; Enbutsu, Koji; Tadanaga, Osamu; Takenouchi, Hirokazu; Aihara, Kazuyuki; Kawarabayashi, Ken-ichi; Inoue, Kyo; Utsunomiya, Shoko; Takesue, Hiroki
2016-11-01
The analysis and optimization of complex systems can be reduced to mathematical problems collectively known as combinatorial optimization. Many such problems can be mapped onto ground-state search problems of the Ising model, and various artificial spin systems are now emerging as promising approaches. However, physical Ising machines have suffered from limited numbers of spin-spin couplings because of implementations based on localized spins, resulting in severe scalability problems. We report a 2000-spin network with all-to-all spin-spin couplings. Using a measurement and feedback scheme, we coupled time-multiplexed degenerate optical parametric oscillators to implement maximum cut problems on arbitrary graph topologies with up to 2000 nodes. Our coherent Ising machine outperformed simulated annealing in terms of accuracy and computation time for a 2000-node complete graph.
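Since the paper benchmarks against simulated annealing on Ising-encoded MAX-CUT, a classical software baseline can be sketched as follows (toy 4-node cycle graph; the optical hardware itself is not modeled here):

```python
import math
import random

def maxcut_sa(edges, n, iters=5000, t0=2.0, seed=0):
    """Simulated annealing on the Ising encoding of MAX-CUT:
    cut(s) = sum over edges of (1 - s_i * s_j) / 2, with spins s_i in {-1, +1}."""
    rng = random.Random(seed)
    s = [rng.choice((-1, 1)) for _ in range(n)]
    adj = [[] for _ in range(n)]
    for i, j in edges:
        adj[i].append(j)
        adj[j].append(i)
    for k in range(iters):
        t = t0 * (1 - k / iters) + 1e-3          # linear cooling schedule
        i = rng.randrange(n)
        # flipping spin i changes the cut size by exactly this amount
        delta = sum(s[i] * s[j] for j in adj[i])
        if delta > 0 or rng.random() < math.exp(delta / t):
            s[i] = -s[i]
    cut = sum((1 - s[i] * s[j]) // 2 for i, j in edges)
    return s, cut

# 4-cycle: the optimal bipartition alternates spins and cuts all 4 edges
spins, cut = maxcut_sa([(0, 1), (1, 2), (2, 3), (3, 0)], 4)
```

Each degenerate optical parametric oscillator in the coherent Ising machine plays the role of one spin variable here, with the measurement-and-feedback loop implementing the all-to-all couplings.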
Nano transfer and nanoreplication using deterministically grown sacrificial nanotemplates
Melechko, Anatoli V [Oak Ridge, TN; McKnight, Timothy E [Greenback, TN; Guillorn, Michael A [Ithaca, NY; Ilic, Bojan [Ithaca, NY; Merkulov, Vladimir I [Knoxville, TN; Doktycz, Mitchel J [Knoxville, TN; Lowndes, Douglas H [Knoxville, TN; Simpson, Michael L [Knoxville, TN
2012-03-27
Methods, manufactures, machines and compositions are described for nanotransfer and nanoreplication using deterministically grown sacrificial nanotemplates. An apparatus, includes a substrate and a nanoconduit material coupled to a surface of the substrate. The substrate defines an aperture and the nanoconduit material defines a nanoconduit that is i) contiguous with the aperture and ii) aligned substantially non-parallel to a plane defined by the surface of the substrate.
Design and Optimization of Permanent Magnet Brushless Machines for Electric Vehicle Applications
Directory of Open Access Journals (Sweden)
Weiwei Gu
2015-12-01
Full Text Available In this paper, by considering and establishing the relationship between the maximum operating speed and the d-axis inductance, a new design and optimization method is proposed. Thus, a more extended constant-power speed range, as well as reduced losses and increased efficiency, especially in the high-speed region, can be obtained, which is essential for electric vehicles (EVs). In the first step, the initial permanent magnet (PM) brushless machine is designed based on consideration of the maximum speed and the performance specifications in the entire operating region. Then, on the basis of increasing the d-axis inductance while maintaining constant permanent magnet flux linkage, the PM brushless machine is optimized. The corresponding performance of the initial and optimal PM brushless machines is analyzed and compared by the finite element method (FEM). Several tests are carried out in an EV simulation model based on the Urban Dynamometer Driving Schedule (UDDS) for evaluation. Both theoretical analysis and simulation results verify the validity of the proposed design and optimization method.
Directory of Open Access Journals (Sweden)
Shazada Muhammad Umair Khan
2018-01-01
Full Text Available In human-machine systems, a user display should contain sufficient information to encapsulate expressive and normative human operator behavior. Failures in such systems, which are commanded by a supervisor, can be difficult to anticipate because of unexpected interactions between the different users and machines. Currently, most interfaces have non-deterministic choices at machine states. Inspired by theories of a single user of an interface based on discrete event systems, we present a formal model of multiple users, multiple machines, a supervisor and a supervisor machine. The syntax and semantics of these models are based on the system specification using timed automata, which adheres to desirable specification properties conducive to resolving the non-deterministic choices for usability properties of the supervisor and user interface. Further, the succinct interface developed by applying the weak bisimulation relation, where large classes of potentially equivalent states are refined into smaller ones, enables the supervisor and user to perform specified tasks correctly. Finally, the proposed approach is applied to a model of a manufacturing system with several users interacting with their machines, a supervisor with several users and a supervisor with a supervisor machine, to illustrate the design procedure of human–machine systems. The formal specification is validated with the Z/EVES toolset.
Numerical modeling and optimization of machining duplex stainless steels
Directory of Open Access Journals (Sweden)
Rastee D. Koyee
2015-01-01
Full Text Available The shortcomings of analytical and empirical machining models, in combination with industry demands, have to be addressed. Three-dimensional finite element modeling (FEM) introduces an attractive alternative to bridge the gap between purely empirical and fundamental scientific quantities and fulfill industry needs. However, the challenging aspects which hinder the successful adoption of FEM in the machining sector of the manufacturing industry have to be solved first. One of the greatest challenges is the identification of the correct set of machining simulation input parameters. This study presents a new methodology to inversely calculate the input parameters when simulating the machining of standard duplex EN 1.4462 and super duplex EN 1.4410 stainless steels. JMatPro software is first used to model elastic–viscoplastic and physical work material behavior. In order to effectively obtain an optimum set of inversely identified friction coefficients, thermal contact conductance, Cockcroft–Latham critical damage value, percentage reduction in flow stress, and Taylor–Quinney coefficient, Taguchi-VIKOR coupled with a Firefly Algorithm Neural Network System is applied. The optimization procedure effectively minimizes the overall differences between experimentally measured performances, such as cutting forces, tool nose temperature and chip thickness, and the numerically obtained ones at any specified cutting condition. The optimum set of input parameters is verified and used for the next step of 3D-FEM application. In the next stage of the study, design of experiments, numerical simulations, and fuzzy rule modeling approaches are employed to optimize the types of chip breaker, insert shapes, process conditions, cutting parameters, and tool orientation angles based on many important performances. Through this study, not only a new methodology in defining the optimal set of controllable parameters for turning simulations is introduced, but also
Optimization of machining parameters of turning operations based on multi performance criteria
Directory of Open Access Journals (Sweden)
N.K. Mandal
2013-01-01
Full Text Available The selection of optimum machining parameters plays a significant role in ensuring product quality, reducing manufacturing cost and increasing productivity in computer-controlled manufacturing processes. For many years, multi-objective optimization of turning, owing to the inherent complexity of the process, has been a competitive engineering issue. This study investigates multi-response optimization of the turning process for an optimal parametric combination to yield the minimum power consumption, surface roughness and frequency of tool vibration, using Grey relational analysis (GRA). A confirmation test is conducted for the optimal machining parameters to validate the test result. Various turning parameters, such as spindle speed, feed and depth of cut, are considered. Experiments are designed and conducted based on a full factorial design of experiments.
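The grey relational grade computation for smaller-is-better responses (power, roughness, vibration) can be sketched with illustrative numbers; ζ = 0.5 is the usual distinguishing coefficient, and the data below are not the paper's measurements:

```python
import statistics

def grey_relational_grades(data, zeta=0.5):
    """Grey relational analysis for smaller-is-better responses.
    `data` rows = experiments, columns = responses; each column must vary."""
    cols = list(zip(*data))
    # normalize each response: smaller-is-better -> (max - x) / (max - min)
    norm = []
    for col in cols:
        lo, hi = min(col), max(col)
        norm.append([(hi - x) / (hi - lo) for x in col])
    rows = list(zip(*norm))
    # grey relational coefficient vs the ideal sequence (all ones):
    # after normalization, delta_min = 0 and delta_max = 1, so xi = zeta / (d + zeta)
    def coeff(x):
        d = 1.0 - x
        return zeta / (d + zeta)
    # grade = mean coefficient across responses, ranking the experiments
    return [statistics.mean(coeff(x) for x in row) for row in rows]

# toy data: 3 parameter settings x 3 responses (power W, roughness um, vibration Hz)
grades = grey_relational_grades([[220, 1.8, 12], [180, 2.4, 9], [260, 1.2, 15]])
best = grades.index(max(grades))
```

The setting with the highest grade is the multi-response optimum, collapsing the three objectives into a single ranking score.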
Genetic algorithm enhanced by machine learning in dynamic aperture optimization
Li, Yongjun; Cheng, Weixing; Yu, Li Hua; Rainer, Robert
2018-05-01
With the aid of machine learning techniques, the genetic algorithm has been enhanced and applied to the multi-objective optimization problem presented by the dynamic aperture of the National Synchrotron Light Source II (NSLS-II) Storage Ring. During the evolution processes employed by the genetic algorithm, the population is classified into different clusters in the search space. The clusters with the top average fitness are given "elite" status. Intervention in the population is implemented by repopulating some potentially competitive candidates based on the experience learned from the accumulated data. These candidates replace randomly selected candidates in the original data pool. The average fitness of the population is thereby improved while diversity is not lost. Maintaining diversity ensures that the optimization is global rather than local. The quality of the population increases, producing more competitive descendants and accelerating the evolution process significantly. The optimal candidates are found to be located on isolated islands within the search space. Some of these optimal candidates have been experimentally confirmed at the NSLS-II storage ring. The machine learning techniques that enhance the genetic algorithm can also be used in other population-based optimization methods such as the particle swarm algorithm.
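The cluster-then-repopulate intervention described above can be sketched in a toy form. Everything here is assumed for illustration (a 2-D sphere objective, a k-means clusterer, the cluster count and repopulation size); it shows only the mechanism: cluster the population, rank clusters by average fitness, and reseed weak candidates near the elite cluster's center.

```python
import numpy as np

rng = np.random.default_rng(0)

def fitness(x):
    # Toy objective: maximize -||x||^2, so the optimum sits at the origin
    return -np.sum(np.asarray(x) ** 2, axis=-1)

def kmeans(X, k, iters=20):
    # Minimal k-means used as the clusterer for the GA population
    centers = X[rng.choice(len(X), k, replace=False)].copy()
    labels = np.zeros(len(X), dtype=int)
    for _ in range(iters):
        labels = np.argmin(((X[:, None, :] - centers) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels, centers

pop = rng.normal(size=(60, 2))            # GA population in the search space
labels, centers = kmeans(pop, k=4)
fit = fitness(pop)
cluster_fit = [fit[labels == j].mean() if np.any(labels == j) else -np.inf
               for j in range(4)]
elite = int(np.argmax(cluster_fit))        # cluster with top average fitness
# Intervention: repopulate the weakest candidates near the elite center,
# leaving the rest of the population (and its diversity) untouched
weak = np.argsort(fit)[:10]
pop[weak] = centers[elite] + 0.1 * rng.normal(size=(10, 2))
```

In the actual accelerator application the clusterer runs on accumulated evaluation data rather than a single generation, but the replace-the-weak-near-the-elite step is the same shape.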
Design and optimization of a 3-axis filament winding machine
Quanjin, Ma; Rejab, M. R. M.; Idris, M. S.; Bachtiar, B.; Siregar, J. P.; Harith, M. N.
2017-10-01
Filament winding has developed as the primary process for low-cost fabrication of composite cylindrical structures. Fibres are wound on a rotating mandrel by a filament winding machine as resin-impregnated fibres pass through a pay-out eye. This paper aims to develop and optimize a 3-axis, lightweight, practical, efficient, portable filament winding machine to satisfy customer demand, capable of fabricating pipes and round cylinders with resins. The 3-axis filament winding machine has 3 main units: the rotary unit, the delivery unit and the control system unit. Compared with the previously existing filament winding machines in the factory, it has 3 degrees of freedom and can fabricate more complex specimen shapes depending on the mandrel shape and the particular control system. The machine has been designed and fabricated with 3 axes of movement under a control system: the x-axis for movement of the carriage, the y-axis for rotation of the mandrel and the z-axis for movement of the pay-out eye. Cylindrical specimens with different dimensions and winding angles were produced. The 3-axis automated filament winding machine has been successfully designed with a simple control system.
Energy Technology Data Exchange (ETDEWEB)
Pettersson, H. [Valmet Oyj Pansio, Turku (Finland)
1998-12-31
Conventionally, the energy content of paper and board machine dryer section exhaust air is recovered in the heat recovery tower, which has contributed substantially to the overall energy economy of a paper machine. Modern paper machines have already reached momentary record speeds above 1700 m/min, and speeds above 2000 m/min are being pursued. This is made possible by developing new, efficient drying technologies, which in turn require new solutions for the heat recovery systems. At the same time, requirements for new heat recovery solutions arise from the gradual closing of paper mill water circulation systems. In this project a discrete optimization-based tool is developed for analyzing, optimizing and dimensioning paper machine heat recovery systems under different process conditions. Delivering a paper machine process requires transferring more and more process knowledge into calculation model parameters. The overall target of the tool is to decrease energy consumption, taking into account new drying technologies and the gradual closing of water circulation systems. (orig.)
International Nuclear Information System (INIS)
Liu, Henan; Chen, Mingjun; Yu, Bo; Zhen, Fang
2016-01-01
Magnetorheological finishing (MRF) is a computer-controlled deterministic polishing technique that is widely used in the production of high-quality optics. To overcome the inability of existing MRF processes to finish concave surfaces with small radii of curvature, a configuration method for a novel structured MRF machine tool using a small ball-end permanent-magnet polishing head is proposed in this paper. The preliminary design focuses on the structural configuration of the machine, including the machine body, motion units and accessory equipment. Structural deformation and fabrication accuracy of the machine are analyzed theoretically, and reasonable structure sizes, manufacturing errors and assembly errors of the main structural components are given for configuration optimization. Based on the theoretical analysis, a four-axis linkage MRF machine tool is developed. Preliminary spot-polishing experiments indicate that the proposed MRF process achieves a stable polishing area that meets the requirements of deterministic polishing. A typical small-bore complex component is polished on the developed device, and fine surface quality is obtained, with a sphericity of the finished spherical surfaces of 1.3 μm and a surface roughness Ra of less than 0.018 μm.
Salehi, Mojtaba; Bahreininejad, Ardeshir
2011-08-01
Optimization of process planning is considered the key technology for computer-aided process planning, which is a rather complex and difficult procedure. A good process plan of a part is built upon two elements: (1) the optimized sequence of the operations of the part; and (2) the optimized selection of the machine, cutting tool and Tool Access Direction (TAD) for each operation. In the present work, process planning is divided into preliminary planning and secondary/detailed planning. In the preliminary stage, the feasible sequences are generated using an intelligent searching strategy, based on the analysis of order and clustering constraints as a compulsive constraint aggregation in operation sequencing. Then, in the detailed planning stage, the genetic algorithm, which prunes the initial feasible sequences, yields the optimized operation sequence and the optimized selection of the machine, cutting tool and TAD for each operation, based on the optimization constraints as an additive constraint aggregation. The main contribution of this work is the simultaneous optimization of the operation sequence and of the machine, cutting tool and TAD selection for each operation, using intelligent search and the genetic algorithm.
Optimization of electrical parameters of windings used in axial flux electrical machines
International Nuclear Information System (INIS)
Uhrik, M.
2012-01-01
This paper deals with the shape optimization of windings used in electrical machines of disc-type construction. These machines have a short axial length, which makes them suitable for use in small wind-power turbines or in-wheel traction drives. The disc-type construction of the stator offers more possibilities for winding arrangements than are available in classical machines of cylindrical construction. To find the best winding arrangement for the novel disc-type machine construction, a series of analytical calculations, simulations and experimental measurements was performed. (Authors)
Optimization for energy consumption in drying section of fluting paper machine
Directory of Open Access Journals (Sweden)
Ghodbanan Shaaban
2017-01-01
Full Text Available A non-linear programming optimization method was used to optimize total steam and air consumption in the dryer section of a multi-cylinder fluting paper machine. Equality constraints of the optimization model were obtained from specified process blocks considering mass and energy balance relationships in the drying and heat recovery sections. Inequality constraints correspond to process parameters such as production capacity, operating conditions, and other limitations. Using the simulation, the process parameters can be optimized to improve the energy efficiency and heat recovery performance. For a corrugating machine, the optimized parameters show that total steam use can be reduced by about 11% through improvement of the heat recovery performance and optimization of operating conditions such as inlet web dryness, evaporation rate, and exhaust air humidity; accordingly, total steam consumption can be decreased from about 1.71 to 1.53 tonnes of steam per tonne of paper produced. The humidity of the exhaust air should be kept as high as possible to optimize the energy performance and avoid condensation in the pocket dryers and hood exhaust air. The simulation shows the supply air should be increased by about 10% to achieve the optimal humidity level, which was determined to be about 0.152 kg H2O/(kg dry air).
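The core mass-and-energy trade-off behind the exhaust-humidity recommendation can be sketched with a minimal balance. All numbers below (evaporation load, latent heat, air-heating energy, steam energy content) are rough illustrative assumptions, not the paper's mill data; the point is only that a higher exhaust humidity needs less dry air, and less air-heating steam.

```python
# Minimal dryer-section balance: the exhaust humidity x_out fixes how much
# dry air is needed to carry away the evaporated water per tonne of paper.
evap = 1.5        # t water evaporated per t paper (assumed)
x_in = 0.010      # supply-air humidity, kg H2O / kg dry air (assumed)
h_evap = 2.5      # GJ per t water, latent heat (approx.)
h_air = 7e-5      # GJ per kg dry air heated to hood conditions (assumed)
h_steam = 2.8     # GJ delivered per t steam (assumed)

def steam_per_tonne(x_out):
    # Dry air needed to absorb the evaporated water at humidity x_out
    dry_air = evap * 1000.0 / (x_out - x_in)   # kg dry air per t paper
    return (evap * h_evap + dry_air * h_air) / h_steam

s_low_x = steam_per_tonne(0.10)    # wetter exhaust not reached: more air
s_high_x = steam_per_tonne(0.152)  # humidity level cited in the abstract
```

With these assumed numbers the specific steam use lands in the same 1.5–1.8 t/t range as the abstract, and raising the exhaust humidity lowers it, which is the direction of the paper's result.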
Stochastic optimization methods
Marti, Kurt
2005-01-01
Optimization problems arising in practice involve random parameters. For the computation of robust optimal solutions, i.e., optimal solutions being insensitive with respect to random parameter variations, deterministic substitute problems are needed. Based on the distribution of the random data, and using decision theoretical concepts, optimization problems under stochastic uncertainty are converted into deterministic substitute problems. Due to the occurring probabilities and expectations, approximative solution techniques must be applied. Deterministic and stochastic approximation methods and their analytical properties are provided: Taylor expansion, regression and response surface methods, probability inequalities, First Order Reliability Methods, convex approximation/deterministic descent directions/efficient points, stochastic approximation methods, differentiation of probability and mean value functions. Convergence results of the resulting iterative solution procedures are given.
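The step from a stochastic program to a deterministic substitute can be shown in its simplest form: replace the expectation by a sample average (the data and the quadratic objective below are invented for illustration; the minimizer of this particular substitute is simply the sample mean).

```python
import random

random.seed(0)

# Stochastic program: choose a capacity x to minimize E[(x - D)^2] for a
# random demand D. The sample average approximation (SAA) replaces the
# expectation with an empirical mean, giving a deterministic substitute
# problem. All numbers here are illustrative.
demands = [random.gauss(100.0, 15.0) for _ in range(5000)]

def saa_objective(x):
    # Deterministic substitute: (1/N) * sum_i (x - d_i)^2
    return sum((x - d) ** 2 for d in demands) / len(demands)

x_opt = sum(demands) / len(demands)  # analytic minimizer of this SAA problem
```

The resulting x_opt is insensitive to any single demand draw, which is exactly the robustness property the deterministic substitute is meant to deliver; richer substitutes (probability constraints, efficient points) follow the same pattern with different functionals of the distribution.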
ANN-PSO Integrated Optimization Methodology for Intelligent Control of MMC Machining
Chandrasekaran, Muthumari; Tamang, Santosh
2017-08-01
Metal Matrix Composites (MMC) show improved properties in comparison with non-reinforced alloys and have found increased application in the automotive and aerospace industries. The selection of optimum machining parameters to produce components of the desired surface roughness is of great concern considering the quality and economy of the manufacturing process. In this study, a surface roughness prediction model for turning Al-SiCp MMC is developed using an Artificial Neural Network (ANN). Three turning parameters, viz. spindle speed (N), feed rate (f) and depth of cut (d), were considered as input neurons and surface roughness was the output neuron. The ANN architecture 3-5-1 is found to be optimum, and the model predicts with an average percentage error of 7.72%. The Particle Swarm Optimization (PSO) technique is used for optimizing parameters to minimize machining time. The innovative aspect of this work is the development of an integrated ANN-PSO optimization method for intelligent control of the MMC machining process applicable to manufacturing industries. The robustness of the method shows its superiority for obtaining optimum cutting parameters satisfying the desired surface roughness. The method has better convergent capability with a minimum number of iterations.
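The PSO half of such an ANN-PSO scheme can be sketched on its own: a standard particle swarm minimizing a cost where a fitted model (here a made-up analytic stand-in, not the paper's trained ANN) supplies the roughness constraint. The surrogate machining-time objective, the roughness model and every constant below are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

def pso(objective, bounds, n_particles=30, iters=100, w=0.7, c1=1.5, c2=1.5):
    """Textbook global-best particle swarm minimization over box bounds."""
    lo, hi = np.array(bounds, dtype=float).T
    dim = len(lo)
    x = rng.uniform(lo, hi, (n_particles, dim))
    v = np.zeros_like(x)
    pbest, pbest_f = x.copy(), np.array([objective(p) for p in x])
    g = pbest[np.argmin(pbest_f)].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        fvals = np.array([objective(p) for p in x])
        improved = fvals < pbest_f
        pbest[improved], pbest_f[improved] = x[improved], fvals[improved]
        g = pbest[np.argmin(pbest_f)].copy()
    return g, float(pbest_f.min())

# Hypothetical machining-time surrogate t ~ 1/(N*f), penalized when an
# assumed empirical roughness model exceeds a 2.0 um target (illustrative).
def cost(p):
    N, f = p
    ra = 80.0 * f ** 0.8 / N ** 0.3
    return 1e4 / (N * f) + 1e3 * max(0.0, ra - 2.0)

best, best_cost = pso(cost, bounds=[(500.0, 2000.0), (0.05, 0.4)])
```

In the integrated method the trained ANN would replace the analytic `ra` stand-in, so the swarm searches speed and feed against the learned roughness response rather than a formula.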
Multi-machine power system stabilizers design using chaotic optimization algorithm
Energy Technology Data Exchange (ETDEWEB)
Shayeghi, H., E-mail: hshayeghi@gmail.co [Technical Engineering Department, University of Mohaghegh Ardabili, Ardabil (Iran, Islamic Republic of); Shayanfar, H.A. [Center of Excellence for Power System Automation and Operation, Electrical Engineering Department, Iran University of Science and Technology, Tehran (Iran, Islamic Republic of); Jalilzadeh, S.; Safari, A. [Technical Engineering Department, Zanjan University, Zanjan (Iran, Islamic Republic of)
2010-07-15
In this paper, a multiobjective design of multi-machine power system stabilizers (PSSs) using a chaotic optimization algorithm (COA) is proposed. Chaotic optimization algorithms, which feature easy implementation, short execution time and robust mechanisms for escaping local optima, are promising tools for engineering applications. The PSS parameter tuning problem is converted to an optimization problem, which is solved by a chaotic optimization algorithm based on the Lozi map. Since chaotic mapping enjoys certainty, ergodicity and stochastic properties, the proposed approach uses Lozi map chaotic sequences, which increases the convergence rate and resulting precision. Two different objective functions are proposed in this study for the PSS design problem. The first is eigenvalue-based, comprising the damping factor and the damping ratio of the lightly damped electro-mechanical modes, while the second is a time-domain-based multi-objective function. The robustness of the proposed COA-based PSSs (COAPSS) is verified on a multi-machine power system under different operating conditions and disturbances. The results of the proposed COAPSS are demonstrated through eigenvalue analysis, nonlinear time-domain simulation and several performance indices. In addition, the potential and superiority of the proposed method over the classical approach and a genetic algorithm are demonstrated.
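The role of the Lozi map is to supply a deterministic but ergodic sequence of search points. A minimal sketch of that idea follows; the map parameters (a = 1.7, b = 0.5), the rescaling, the initial conditions and the toy quadratic objective (a stand-in for a PSS tuning objective) are all assumptions for illustration, not the paper's formulation.

```python
def lozi_sequence(n, a=1.7, b=0.5, x0=0.1, y0=0.1):
    # Lozi map: x' = 1 - a*|x| + y, y' = b*x (chaotic for a=1.7, b=0.5);
    # the x-iterates are collected and rescaled to [0, 1]
    xs, x, y = [], x0, y0
    for _ in range(n):
        x, y = 1.0 - a * abs(x) + y, b * x
        xs.append(x)
    lo, hi = min(xs), max(xs)
    return [(v - lo) / (hi - lo) for v in xs]

def chaotic_search(f, bounds, n=2000):
    # Candidates follow one chaotic sequence per dimension (distinct seeds),
    # keeping the best point seen: deterministic, yet space-filling
    dim = len(bounds)
    seqs = [lozi_sequence(n, x0=0.1 + 0.05 * d) for d in range(dim)]
    best, best_f = None, float("inf")
    for i in range(n):
        cand = [lo + seqs[d][i] * (hi - lo)
                for d, (lo, hi) in enumerate(bounds)]
        fc = f(cand)
        if fc < best_f:
            best, best_f = cand, fc
    return best, best_f

# Stand-in for stabilizer gain tuning: minimize a simple quadratic objective
best, val = chaotic_search(lambda p: (p[0] - 1.0) ** 2 + (p[1] + 0.5) ** 2,
                           bounds=[(-2.0, 2.0), (-2.0, 2.0)])
```

A full COA would add local refinement around the best chaotic candidate; this sketch only shows why ergodicity of the map makes the raw sequence a usable global sampler.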
Fu Yu; Mu Jiong; Duan Xu Liang
2016-01-01
By means of an extreme learning machine model based upon DE optimization, this article centers on the optimization thinking behind such a model as well as its application in classifying the financial position of listed companies. Comparison shows that the improved extreme learning machine algorithm based upon DE optimization eclipses the traditional extreme learning machine algorithm. Meanwhile, this article also intends to introduce certain research...
Empirical Investigation of Optimization Algorithms in Neural Machine Translation
Directory of Open Access Journals (Sweden)
Bahar Parnia
2017-06-01
Full Text Available Training neural networks is a non-convex, high-dimensional optimization problem. In this paper, we provide a comparative study of the most popular stochastic optimization techniques used to train neural networks. We evaluate the methods in terms of convergence speed, translation quality, and training stability. In addition, we investigate combinations that seek to improve optimization in these respects. We train state-of-the-art attention-based models and apply them to perform neural machine translation. We demonstrate our results on two tasks: WMT 2016 En→Ro and WMT 2015 De→En.
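The kind of optimizer comparison the study performs can be illustrated at miniature scale. The sketch below contrasts plain SGD with Adam on an assumed ill-conditioned quadratic (not a translation model): the per-coordinate scaling that distinguishes Adam is exactly what the update equations show.

```python
import numpy as np

def sgd(grad, x0, lr, steps=200):
    # Vanilla stochastic gradient descent (here with exact gradients)
    x = np.array(x0, dtype=float)
    for _ in range(steps):
        x -= lr * grad(x)
    return x

def adam(grad, x0, lr=0.05, steps=200, b1=0.9, b2=0.999, eps=1e-8):
    # Adam: bias-corrected first/second moment estimates rescale each step
    x = np.array(x0, dtype=float)
    m, v = np.zeros_like(x), np.zeros_like(x)
    for t in range(1, steps + 1):
        g = grad(x)
        m = b1 * m + (1 - b1) * g
        v = b2 * v + (1 - b2) * g ** 2
        mhat, vhat = m / (1 - b1 ** t), v / (1 - b2 ** t)
        x -= lr * mhat / (np.sqrt(vhat) + eps)
    return x

# Ill-conditioned quadratic f(x) = 0.5*(x1^2 + 100*x2^2): curvatures 1 and 100
grad = lambda x: np.array([1.0, 100.0]) * x
x_sgd = sgd(grad, [1.0, 1.0], lr=0.015)  # lr capped by the stiff direction
x_adam = adam(grad, [1.0, 1.0])          # per-coordinate scaling copes better
```

On real networks the gradients are noisy minibatch estimates, which is where the convergence-speed and stability differences the paper measures come from.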
Parameter identification and optimization of slide guide joint of CNC machine tools
Zhou, S.; Sun, B. B.
2017-11-01
The joint surface has an important influence on the performance of CNC machine tools. In order to identify the dynamic parameters of the slide guide joint, a parametric finite element model of the joint is established and an optimum design method is used based on finite element simulation and a modal test. The mode that most influences the dynamics of the slide joint is then found through harmonic response analysis. Taking the frequency of this mode as the objective, a sensitivity analysis of the stiffness of each joint surface is carried out using Latin hypercube sampling and Monte Carlo simulation. The results show that the vertical stiffness of the slide joint surface formed by the bed and the slide plate has the most pronounced influence on the structure. This stiffness is therefore taken as the optimization variable, and its optimal value is obtained by studying the relationship between structural dynamic performance and stiffness. Substituting the stiffness values from before and after optimization into the FEM of the machine tool shows that the dynamic performance of the machine tool is improved.
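Latin hypercube screening of stiffness sensitivities can be sketched quickly. The linear frequency surrogate, its coefficients and the stiffness ranges below are invented stand-ins for the paper's harmonic-response model; the sketch shows only the sampling scheme and a correlation-based sensitivity ranking.

```python
import numpy as np

rng = np.random.default_rng(42)

def latin_hypercube(n, bounds):
    # One point per stratum in every dimension: rank-shuffle plus jitter
    dim = len(bounds)
    ranks = np.argsort(rng.random((n, dim)), axis=0)  # a permutation per column
    u = (ranks + rng.random((n, dim))) / n
    lo, hi = np.array(bounds, dtype=float).T
    return lo + u * (hi - lo)

# Hypothetical screening of three joint stiffnesses; this linear surrogate
# stands in for the FE harmonic-response model (coefficients assumed)
def first_mode_freq(K):
    return 50.0 + 0.002 * K[:, 0] + 0.0005 * K[:, 1] + 0.0001 * K[:, 2]

K = latin_hypercube(200, bounds=[(1e4, 1e5)] * 3)
f = first_mode_freq(K)
# Rank sensitivity by correlation between each stiffness and the frequency
sens = [abs(np.corrcoef(K[:, j], f)[0, 1]) for j in range(3)]
most_influential = int(np.argmax(sens))
```

The stratification guarantees every stiffness range is covered evenly with far fewer runs than a full grid, which is why LHS pairs naturally with expensive FE evaluations.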
Using Machine Learning in Adversarial Environments.
Energy Technology Data Exchange (ETDEWEB)
Davis, Warren Leon [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)
2016-02-01
Intrusion/anomaly detection systems are among the first lines of cyber defense. Commonly, they either use signatures or machine learning (ML) to identify threats, but fail to account for sophisticated attackers trying to circumvent them. We propose to embed machine learning within a game theoretic framework that performs adversarial modeling, develops methods for optimizing operational response based on ML, and integrates the resulting optimization codebase into the existing ML infrastructure developed by the Hybrid LDRD. Our approach addresses three key shortcomings of ML in adversarial settings: 1) resulting classifiers are typically deterministic and, therefore, easy to reverse engineer; 2) ML approaches only address the prediction problem, but do not prescribe how one should operationalize predictions, nor account for operational costs and constraints; and 3) ML approaches do not model attackers’ response and can be circumvented by sophisticated adversaries. The principal novelty of our approach is to construct an optimization framework that blends ML, operational considerations, and a model predicting attackers reaction, with the goal of computing optimal moving target defense. One important challenge is to construct a realistic model of an adversary that is tractable, yet realistic. We aim to advance the science of attacker modeling by considering game-theoretic methods, and by engaging experimental subjects with red teaming experience in trying to actively circumvent an intrusion detection system, and learning a predictive model of such circumvention activities. In addition, we will generate metrics to test that a particular model of an adversary is consistent with available data.
Directory of Open Access Journals (Sweden)
Fu Yu
2016-01-01
Full Text Available By means of an extreme learning machine model based upon DE optimization, this article centers on the optimization thinking behind such a model as well as its application in classifying the financial position of listed companies. Comparison shows that the improved extreme learning machine algorithm based upon DE optimization eclipses the traditional extreme learning machine algorithm. Meanwhile, this article also intends to introduce certain research thinking concerning extreme learning machines into the economics classification area, so as to computerize the speedy but effective evaluation of massive financial statements of listed companies pertaining to different classes.
Ma, Yuliang; Ding, Xiaohui; She, Qingshan; Luo, Zhizeng; Potter, Thomas; Zhang, Yingchun
2016-01-01
Support vector machines are powerful tools for solving small-sample and nonlinear classification problems, but their ultimate classification performance depends heavily upon the selection of appropriate kernel and penalty parameters. In this study, we propose using a particle swarm optimization algorithm to optimize the selection of both the kernel and penalty parameters in order to improve the classification performance of support vector machines. The performance of the optimized classifier was evaluated with motor imagery EEG signals in terms of both classification and prediction. Results show that the optimized classifier can significantly improve the classification accuracy of motor imagery EEG signals. PMID:27313656
Energy Technology Data Exchange (ETDEWEB)
Liang, B; Liu, B; Li, Y; Guo, B; Xu, X; Wei, R; Zhou, F [Beihang University, Beijing, Beijing (China); Wu, Q [Duke University Medical Center, Durham, NC (United States)
2016-06-15
Purpose: Treatment plan optimization in multi-Co60 source focused radiotherapy with multiple isocenters is challenging, because the dose distribution is normalized to the maximum dose during optimization and evaluation, and objective functions are traditionally defined on the relative dosimetric distribution. This study presents an alternative absolute dose-volume constraint (ADC) based deterministic optimization framework (ADC-DOF). Methods: The initial isocenters are placed on the eroded target surface. The collimator size is chosen based on the area of the 2D contour on the corresponding axial slice, and the isocenter spacing is determined by adjacent collimator sizes. The weights are optimized by minimizing the deviation from the ADCs using the steepest descent technique. An iterative procedure is developed to reduce the number of isocenters, in which the isocenter with the lowest weight is removed when this does not affect plan quality. The ADC-DOF is compared with a genetic algorithm (GA) using the same arbitrarily shaped target (254 cc), with a 15 mm margin ring structure representing normal tissue. Results: For ADC-DOF, the ADCs imposed on the target are D100 > 10 Gy and D50, D10, D0 < 12 Gy, 15 Gy and 20 Gy respectively, and on the ring D40 < 10 Gy. The resulting target D100, D50, D10 and D0 are 9.9 Gy, 12.0 Gy, 14.1 Gy and 16.2 Gy, with a ring D40 of 10.2 Gy. The objectives of the GA are to maximize the 50% isodose target coverage (TC) while minimizing the dose delivered to the ring structure, which results in 97% TC and a 47.2% average dose in the ring structure. For the ADC-DOF (GA) technique, 20 of 38 (10 of 12) initial isocenters are used in the final plan, and the computation time is 8.7 s (412.2 s) on an i5 computer. Conclusion: We have developed a new optimization technique using ADCs and deterministic optimization. Compared with the GA, ADC-DOF uses more isocenters but is faster and more robust, and achieves better conformity. For future work, we will focus on developing a more effective mechanism for initial isocenter determination.
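The steepest-descent weight optimization against absolute dose constraints can be sketched with a toy linear dose model. The dose matrix, voxel/isocenter counts, bounds and step size below are all invented for illustration; the sketch shows only the mechanics: penalize under- and over-dose quadratically, descend on the isocenter weights, and keep weights non-negative.

```python
import numpy as np

rng = np.random.default_rng(7)

# Toy linear dose model: dose = A @ w, A (voxels x isocenters) assumed known
n_vox, n_iso = 50, 8
A = rng.random((n_vox, n_iso))
d_min, d_max = 10.0, 15.0        # absolute dose bounds in Gy (illustrative)

def penalty_and_grad(w):
    d = A @ w
    under = np.maximum(d_min - d, 0.0)   # underdose violations
    over = np.maximum(d - d_max, 0.0)    # overdose violations
    loss = float((under ** 2 + over ** 2).sum())
    grad = 2.0 * A.T @ (over - under)
    return loss, grad

w = np.full(n_iso, 1.0)
loss0, _ = penalty_and_grad(w)
for _ in range(500):                     # steepest-descent weight updates
    _, g = penalty_and_grad(w)
    w = np.maximum(w - 0.002 * g, 0.0)   # project: weights stay non-negative
final_loss, _ = penalty_and_grad(w)
# Isocenter reduction would then drop the lowest-weight isocenter and re-run
```

Because the constraints are absolute rather than normalized, the loss and its gradient are fixed functions of the weights, which is what makes the deterministic descent both fast and reproducible compared with a stochastic GA search.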
Surface roughness optimization in machining of AZ31 magnesium alloy using ABC algorithm
Directory of Open Access Journals (Sweden)
Abhijith
2018-01-01
Full Text Available Magnesium alloys serve as excellent substitutes for materials traditionally used for engine block heads in automobiles and gear housings in the aircraft industry. AZ31 is a magnesium alloy that finds applications in orthopedic implants and cardiovascular stents. Surface roughness is an important parameter in the present manufacturing sector. In this work, optimization techniques based on swarm intelligence, namely the firefly algorithm (FA), particle swarm optimization (PSO) and the artificial bee colony (ABC) algorithm, have been implemented to optimize the machining parameters, namely cutting speed, feed rate and depth of cut, in order to achieve minimum surface roughness. The parameter Ra has been considered for evaluating the surface roughness. Comparing the performance of the ABC algorithm with FA and PSO, which are widely used optimization algorithms in machining studies, the results show that ABC produces a better optimum than FA and PSO for the surface roughness of AZ31.
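The ABC algorithm's three phases (employed bees, onlookers, scouts) fit in a short sketch. The roughness response surface, its coefficients and the parameter bounds below are assumptions for illustration only, not fitted AZ31 data.

```python
import random

random.seed(3)

def abc_minimize(f, bounds, n_food=15, iters=150, limit=20):
    """Minimal artificial bee colony: employed, onlooker and scout phases."""
    dim = len(bounds)
    foods = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_food)]
    fits = [f(x) for x in foods]
    trials = [0] * n_food

    def neighbor(i):
        k = random.randrange(n_food - 1)
        k = k + 1 if k >= i else k       # partner source, distinct from i
        j = random.randrange(dim)        # perturb one random dimension
        cand = foods[i][:]
        cand[j] += random.uniform(-1, 1) * (foods[i][j] - foods[k][j])
        lo, hi = bounds[j]
        cand[j] = min(max(cand[j], lo), hi)
        return cand

    for _ in range(iters):
        for i in range(n_food):          # employed bees with greedy selection
            cand = neighbor(i)
            fc = f(cand)
            if fc < fits[i]:
                foods[i], fits[i], trials[i] = cand, fc, 0
            else:
                trials[i] += 1
        worst = max(fits)
        probs = [(worst - fi) + 1e-12 for fi in fits]
        for _ in range(n_food):          # onlookers favor good food sources
            i = random.choices(range(n_food), weights=probs)[0]
            cand = neighbor(i)
            fc = f(cand)
            if fc < fits[i]:
                foods[i], fits[i], trials[i] = cand, fc, 0
        for i in range(n_food):          # scouts abandon stale sources
            if trials[i] > limit:
                foods[i] = [random.uniform(lo, hi) for lo, hi in bounds]
                fits[i], trials[i] = f(foods[i]), 0
    b = min(range(n_food), key=lambda i: fits[i])
    return foods[b], fits[b]

# Assumed Ra(speed, feed, depth) response surface, for illustration only
ra = lambda p: (0.8 + 0.004 * (p[0] - 180) ** 2 / 100
                + 6 * p[1] ** 2 + 0.3 * (p[2] - 1) ** 2)
best, best_ra = abc_minimize(ra, bounds=[(60, 240), (0.05, 0.3), (0.5, 2.0)])
```

The scout phase is what distinguishes ABC from PSO and FA: exhausted sources are abandoned outright, which helps it escape flat or deceptive regions of the response surface.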
Deterministic ion beam material adding technology for high-precision optical surfaces.
Liao, Wenlin; Dai, Yifan; Xie, Xuhui; Zhou, Lin
2013-02-20
Although ion beam figuring (IBF) provides a highly deterministic method for the precision figuring of optical components, several problems still need to be addressed, such as the limited correcting capability for mid-to-high spatial frequency surface errors and low machining efficiency for pit defects on surfaces. We propose a figuring method named deterministic ion beam material adding (IBA) technology to solve those problems in IBF. The current deterministic optical figuring mechanism, which is dedicated to removing local protuberances on optical surfaces, is enriched and developed by the IBA technology. Compared with IBF, this method can realize the uniform convergence of surface errors, where the particle transferring effect generated in the IBA process can effectively correct the mid-to-high spatial frequency errors. In addition, IBA can rapidly correct the pit defects on the surface and greatly improve the machining efficiency of the figuring process. The verification experiments are accomplished on our experimental installation to validate the feasibility of the IBA method. First, a fused silica sample with a rectangular pit defect is figured by using IBA. Through two iterations within only 47.5 min, this highly steep pit is effectively corrected, and the surface error is improved from the original 24.69 nm root mean square (RMS) to the final 3.68 nm RMS. Then another experiment is carried out to demonstrate the correcting capability of IBA for mid-to-high spatial frequency surface errors, and the final results indicate that the surface accuracy and surface quality can be simultaneously improved.
Directory of Open Access Journals (Sweden)
Cai Ligang
2017-01-01
Full Text Available Instead of blindly improving machine tool accuracy by increasing the precision of key components in the production process, a method combining an SNR quality loss function with correlation analysis of machine tool geometric errors is adopted to optimize the geometric errors of a five-axis machine tool. Firstly, the homogeneous transformation matrix method is used to build the five-axis machine tool geometric error model. Secondly, the SNR quality loss function is used for cost modeling. Then, the machine tool accuracy optimization objective function is established based on the correlation analysis. Finally, ISIGHT combined with MATLAB is applied to optimize each error. The results show that this method is reasonable and appropriately relaxes the range of tolerance values, thereby reducing the manufacturing cost of machine tools.
Parametric optimization for the production of nanostructure in high carbon steel chips via machining
Directory of Open Access Journals (Sweden)
M. Ilangkumaran
2015-09-01
Full Text Available Nano-crystalline materials are an area of interest for researchers all over the world due to superior mechanical properties such as high strength and high hardness. But the cost of nano-crystals is high because of the complexity and cost incurred during their production. This paper focuses on the application of the Taguchi method with fuzzy logic for optimizing the machining parameters for the production of nano-crystalline structured chips in High Carbon Steel (HCS) through machining. An orthogonal array, a multi-response performance index, the signal-to-noise ratio and analysis of variance are used to study the machining process with multi-response performance characteristics. The machining parameters, namely rake angle, depth of cut, heat treatment, feed and cutting velocity, are optimized with consideration of the multi-response performance characteristics. Using the Taguchi and fuzzy logic method, optimum cutting conditions are identified in order to obtain the smallest nano-crystalline structure via machining.
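The smaller-the-better signal-to-noise ratio at the heart of the Taguchi analysis has a standard closed form, sketched below with hypothetical replicated grain-size measurements (the data are invented for illustration; the paper's responses would be its own measured characteristics).

```python
import math

def sn_smaller_is_better(values):
    # Taguchi S/N ratio, smaller-the-better: -10 * log10(mean of y^2)
    return -10.0 * math.log10(sum(v * v for v in values) / len(values))

# Hypothetical replicated grain-size measurements (nm) for two settings
run_a = [92.0, 88.0, 95.0]
run_b = [61.0, 64.0, 58.0]
sn_a = sn_smaller_is_better(run_a)
sn_b = sn_smaller_is_better(run_b)
# Higher S/N is better, so run_b (finer grains) wins under this criterion
```

Because it aggregates both mean and spread of the replicates, the S/N ratio rewards settings that are not just good on average but also consistent, which is what the ANOVA step then decomposes by factor.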
Nur, Rusdi; Suyuti, Muhammad Arsyad; Susanto, Tri Agus
2017-06-01
Aluminum is widely utilized in the industrial sector. It has several advantages: good flexibility and formability, high corrosion resistance, high electrical conductivity, and high heat conductivity. Despite these characteristics, however, pure aluminum is rarely used because it lacks strength, so most of the aluminum used in the industrial sector is in alloy form. Sustainable machining can be considered to link the transformation of input materials and energy/power demand into finished goods. Machining processes are responsible for environmental effects owing to their power consumption. The cutting conditions have been optimized to minimize the cutting power, which is the power consumed for cutting. This paper presents an experimental study of sustainable machining of an Al-11%Si base alloy operated without any cooling system to assess the capacity for reducing power consumption. The cutting force was measured and the cutting power was calculated. Both cutting force and cutting power were analyzed and modeled using a central composite design (CCD). The results indicate that the cutting speed affects machining performance and that optimum cutting conditions have to be determined, while sustainable machining can be pursued in terms of minimizing power consumption and cutting force. The model developed in this study can be used for process evaluation and optimization to determine optimal cutting conditions for the performance of the whole process.
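A central composite design is a fixed point pattern in coded units, easy to generate directly. The sketch below builds a rotatable two-factor CCD (factor count, axial distance choice and number of center points are illustrative defaults, not the paper's design).

```python
import itertools

def central_composite(k, alpha=None, n_center=3):
    # Coded-unit CCD: 2^k factorial corners, 2k axial (star) points, centers
    if alpha is None:
        alpha = (2 ** k) ** 0.25         # the standard rotatable choice
    pts = [list(p) for p in itertools.product([-1.0, 1.0], repeat=k)]
    for j in range(k):                   # axial points along each axis
        for s in (-alpha, alpha):
            p = [0.0] * k
            p[j] = s
            pts.append(p)
    pts += [[0.0] * k for _ in range(n_center)]
    return pts

design = central_composite(2)  # 4 factorial + 4 axial + 3 center = 11 runs
```

The axial points are what let a CCD estimate pure quadratic terms, so cutting-power curvature with respect to speed and feed can be fitted from only eleven runs here.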
CALTRANS: A parallel, deterministic, 3D neutronics code
Energy Technology Data Exchange (ETDEWEB)
Carson, L.; Ferguson, J.; Rogers, J.
1994-04-01
Our efforts to parallelize the deterministic solution of the neutron transport equation have culminated in a new neutronics code, CALTRANS, which has full 3D capability. In this article, we describe the layout and algorithms of CALTRANS and present performance measurements of the code on a variety of platforms. Explicit implementations of the parallel algorithms of CALTRANS, using both the function calls of the Parallel Virtual Machine software package (PVM 3.2) and the Meiko CS-2 tagged message passing library (based on the Intel NX/2 interface), are provided in the appendices.
Coupling Matched Molecular Pairs with Machine Learning for Virtual Compound Optimization.
Turk, Samo; Merget, Benjamin; Rippmann, Friedrich; Fulle, Simone
2017-12-26
Matched molecular pair (MMP) analyses are widely used in compound optimization projects to gain insights into structure-activity relationships (SAR). The analysis is traditionally done via statistical methods but can also be employed together with machine learning (ML) approaches to extrapolate to novel compounds. The MMP/ML method introduced here combines a fragment-based MMP implementation with different machine learning methods to obtain automated SAR decomposition and prediction. To test the prediction capabilities and model transferability, two different compound optimization scenarios were designed: (1) "new fragments", which occurs when exploring new fragments for a defined compound series, and (2) "new static core and transformations", which resembles, for instance, the identification of a new compound series. Very good results were achieved by all employed machine learning methods, especially in the new fragments case, but overall deep neural network models performed best, allowing reliable predictions also for the new static core and transformations scenario, where comprehensive SAR knowledge of the compound series is missing. Furthermore, we show that models trained on all available data have higher generalizability than models trained on focused series and can extend beyond the chemical space covered in the training data. Thus, coupling MMP with deep neural networks provides a promising approach for making high-quality predictions on various data sets and in different compound optimization scenarios.
International Nuclear Information System (INIS)
Hwangbo, Soonho; Lee, In-Beum; Han, Jeehoon
2014-01-01
Many networks are constructed in a large-scale industrial complex. Each network meets its demands through production or transportation of the materials needed by the companies in the network. A network either produces materials directly to satisfy a company's demands or purchases them from outside, owing to demand uncertainty, financial factors, and so on. Utility networks and hydrogen networks in particular are typical major networks in a large-scale industrial complex. Many studies have focused mainly on minimizing the total cost or optimizing the network structure, but little research has tried to build an integrated model connecting the utility network and the hydrogen network. In this study, a deterministic mixed integer linear programming model is developed for integrating the utility network and the hydrogen network. A steam methane reforming process is needed to combine the two networks: hydrogen produced by steam methane reforming, whose raw material is steam vented from the utility network, enters the hydrogen network and fulfills its demands. The proposed model can suggest an optimized configuration of the integrated network, an optimized blueprint, and the optimal total cost. The capability of the proposed model is tested by applying it to the Yeosu industrial complex in Korea, which contains one of the biggest petrochemical complexes and whose data underlie various papers. In a case study, the integrated network model yields better optima than previous results obtained by studying the utility network and the hydrogen network individually
Optimal Machining Parameters for Achieving the Desired Surface Roughness in Turning of Steel
Directory of Open Access Journals (Sweden)
LB Abhang
2012-06-01
Full Text Available Due to the widespread use of highly automated machine tools in the metal cutting industry, manufacturing requires highly reliable models and methods for the prediction of output performance in the machining process. The prediction of optimal manufacturing conditions for good surface finish and dimensional accuracy plays a very important role in process planning. In the steel turning process the tool geometry and cutting conditions determine the time and cost of production, which ultimately affect the quality of the final product. In the present work, experimental investigations have been conducted to determine the effect of the tool geometry (effective tool nose radius) and metal cutting conditions (cutting speed, feed rate and depth of cut) on surface finish during the turning of EN-31 steel. First and second order mathematical models are developed in terms of machining parameters by using the response surface methodology on the basis of the experimental results. The surface roughness prediction model has been optimized to obtain the surface roughness values by using LINGO solver programs. LINGO is a mathematical modeling language which is used in linear and nonlinear optimization to formulate large problems concisely, solve them, and analyze the solution in engineering sciences, operations research, etc. The LINGO solver program is global optimization software. It gives minimum values of surface roughness and their respective optimal conditions.
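The second-order response surface model described above can be sketched with an ordinary least-squares fit over a quadratic basis. The factor ranges and the noise-free roughness surface below are invented for illustration; they are not taken from the EN-31 experiments:

```python
import numpy as np

def fit_second_order(X, y):
    """Least-squares fit of a full quadratic response surface in two factors
    (e.g., cutting speed v and feed rate f)."""
    v, f = X[:, 0], X[:, 1]
    A = np.column_stack([np.ones_like(v), v, f, v**2, f**2, v * f])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coef

def predict(coef, v, f):
    return coef @ np.array([1.0, v, f, v**2, f**2, v * f])

# Synthetic data: a hypothetical noise-free roughness surface Ra(v, f)
rng = np.random.default_rng(0)
X = rng.uniform([50, 0.05], [200, 0.3], size=(30, 2))
y = 2 - 0.01 * X[:, 0] + 5 * X[:, 1]
coef = fit_second_order(X, y)
print(round(predict(coef, 100.0, 0.1), 3))  # recovers the planted surface: 1.5
```

The fitted model would then be handed to an optimizer (LINGO in the paper) to find the parameter combination minimizing predicted roughness.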
Optimization of temperature field of tobacco heat shrink machine
Yang, Xudong; Yang, Hai; Sun, Dong; Xu, Mingyang
2018-06-01
In a company's current heat shrink machine, the film does not shrink compactly and the temperature is uneven, resulting in poor surface quality of the shrunk film. To solve this problem, the temperature field is simulated and optimized using the k-epsilon turbulence model and the MRF model in Fluent. The simulation results show that after a mesh screen structure is installed at the suction inlet of the centrifugal fan, the suction resistance of the fan increases and the eddy current intensity caused by the high-speed rotation of the fan is mitigated, so that the internal temperature of the heat shrink machine becomes more uniform.
STOCHASTIC GRADIENT METHODS FOR UNCONSTRAINED OPTIMIZATION
Directory of Open Access Journals (Sweden)
Nataša Krejić
2014-12-01
Full Text Available This paper presents an overview of gradient based methods for minimization of noisy functions. It is assumed that the objective function is either given with error terms of stochastic nature or given as a mathematical expectation. Such problems arise in the context of simulation based optimization. The focus of this presentation is on the gradient based Stochastic Approximation and Sample Average Approximation methods. The concept of stochastic gradient approximation of the true gradient can be successfully extended to deterministic problems. Methods of this kind are presented for data fitting and machine learning problems.
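The Stochastic Approximation idea surveyed above can be sketched in a few lines: iterate with a noisy but unbiased gradient estimate and a diminishing step size a/k. The quadratic objective and noise level below are a toy example, not from the paper:

```python
import numpy as np

def sgd(grad_noisy, x0, steps=2000, a=0.5):
    """Robbins-Monro stochastic approximation: x_{k+1} = x_k - (a/k) g_k,
    where g_k is an unbiased noisy estimate of the true gradient."""
    x = np.asarray(x0, dtype=float)
    for k in range(1, steps + 1):
        x = x - (a / k) * grad_noisy(x)
    return x

# Minimize f(x) = ||x - 3||^2 observed with zero-mean gradient noise.
rng = np.random.default_rng(1)
g = lambda x: 2 * (x - 3.0) + rng.normal(0, 0.1, size=x.shape)
x_opt = sgd(g, np.zeros(2))
print(np.round(x_opt, 2))  # near [3, 3]
```

The 1/k step-size schedule is what makes the iteration robust to the noise: errors injected at step k are damped by all later contraction steps.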
Quantum optimization for training support vector machines.
Anguita, Davide; Ridella, Sandro; Rivieccio, Fabio; Zunino, Rodolfo
2003-01-01
Refined concepts, such as Rademacher estimates of model complexity and nonlinear criteria for weighting empirical classification errors, represent recent and promising approaches to characterize the generalization ability of Support Vector Machines (SVMs). The advantages of those techniques lie in both improving the SVM representation ability and yielding tighter generalization bounds. On the other hand, they often make Quadratic-Programming algorithms no longer applicable, and SVM training cannot benefit from efficient, specialized optimization techniques. The paper considers the application of Quantum Computing to solve the problem of effective SVM training, especially in the case of digital implementations. The presented research compares the behavioral aspects of conventional and enhanced SVMs; experiments on both synthetic and real-world problems support the theoretical analysis. At the same time, the related differences between Quadratic-Programming and Quantum-based optimization techniques are considered.
An optimal maintenance policy for machine replacement problem using dynamic programming
Mohsen Sadegh Amalnik; Morteza Pourgharibshahi
2017-01-01
In this article, we present an acceptance sampling plan for the machine replacement problem based on the backward dynamic programming model. Discounted dynamic programming is used to solve a two-state machine replacement problem. We plan to design a model for maintenance by considering the quality of the item produced. The purpose of the proposed model is to determine the optimal threshold policy for maintenance in a finite time horizon. We create a decision tree based on a sequential sampling inc...
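A two-state machine replacement problem solved by backward dynamic programming can be sketched as below. The rewards, wear probability, and replacement cost are invented for illustration; the paper's model additionally folds in the acceptance-sampling quality information, which is omitted here:

```python
def machine_replacement(T, beta=0.9, reward=(10.0, 4.0), p_wear=0.3, cost=6.0):
    """Backward dynamic programming for a two-state machine replacement
    problem: state 0 = good, 1 = worn; replacing restores the good state."""
    V = [0.0, 0.0]  # terminal values
    policy = []
    for _ in range(T):
        # keep: earn the state reward; a good machine wears out w.p. p_wear
        cont_good = beta * ((1 - p_wear) * V[0] + p_wear * V[1])
        keep = [reward[0] + cont_good, reward[1] + beta * V[1]]
        # replace: pay the cost, then operate a good machine this period
        replace = [reward[0] - cost + cont_good, reward[0] - cost + cont_good]
        V = [max(keep[0], replace[0]), max(keep[1], replace[1])]
        policy.append(["keep" if keep[s] >= replace[s] else "replace"
                       for s in (0, 1)])
    return V, policy[-1]  # policy[-1] is the first-period decision rule

V, first_period = machine_replacement(T=20)
print(first_period)  # ['keep', 'replace']: replace only when worn
```

The backward recursion makes the threshold structure visible: replacement is optimal exactly when the value gap between the good and worn states exceeds the replacement cost.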
Directory of Open Access Journals (Sweden)
Shuyu Dai
2018-01-01
Full Text Available Daily peak load forecasting is an important part of power load forecasting. The accuracy of its prediction has great influence on the formulation of the power generation plan, power grid dispatching, power grid operation and power supply reliability of the power system. Therefore, it is of great significance to construct a suitable model to realize accurate prediction of the daily peak load. A novel daily peak load forecasting model, CEEMDAN-MGWO-SVM (Complete Ensemble Empirical Mode Decomposition with Adaptive Noise and Support Vector Machine Optimized by Modified Grey Wolf Optimization Algorithm), is proposed in this paper. Firstly, the model uses the complete ensemble empirical mode decomposition with adaptive noise (CEEMDAN) algorithm to decompose the daily peak load sequence into multiple sub sequences. Then, the model of modified grey wolf optimization and support vector machine (MGWO-SVM) is adopted to forecast the sub sequences. Finally, the forecasting sequence is reconstructed and the forecasting result is obtained. Using CEEMDAN can realize noise reduction for the non-stationary daily peak load sequence, which makes the daily peak load sequence more regular. The model adopts the grey wolf optimization algorithm improved by introducing the population dynamic evolution operator and the nonlinear convergence factor to enhance the global search ability and avoid falling into the local optimum, which can better optimize the parameters of the SVM algorithm for improving the forecasting accuracy of daily peak load. In this paper, three cases are used to test the forecasting accuracy of the CEEMDAN-MGWO-SVM model. We choose the models EEMD-MGWO-SVM (Ensemble Empirical Mode Decomposition and Support Vector Machine Optimized by Modified Grey Wolf Optimization Algorithm), MGWO-SVM (Support Vector Machine Optimized by Modified Grey Wolf Optimization Algorithm), GWO-SVM (Support Vector Machine Optimized by Grey Wolf Optimization Algorithm), SVM (Support Vector
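The grey wolf optimization step in the pipeline above can be sketched as follows. This is the standard GWO with a linear convergence factor, not the paper's modified variant (which adds a population dynamic evolution operator and a nonlinear convergence factor), and the sphere test function stands in for the SVM parameter-tuning objective:

```python
import numpy as np

def gwo(f, dim, n=20, iters=200, lb=-5.0, ub=5.0, seed=0):
    """Grey Wolf Optimizer: wolves move toward the three best solutions
    (alpha, beta, delta) under a convergence factor a decaying from 2 to 0."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(lb, ub, (n, dim))
    for t in range(iters):
        fit = np.apply_along_axis(f, 1, X)
        leaders = X[np.argsort(fit)[:3]]  # alpha, beta, delta
        a = 2 - 2 * t / iters             # linear convergence factor
        for i in range(n):
            steps = []
            for L in leaders:
                A = a * (2 * rng.random(dim) - 1)
                C = 2 * rng.random(dim)
                steps.append(L - A * np.abs(C * L - X[i]))
            X[i] = np.clip(np.mean(steps, axis=0), lb, ub)
    fit = np.apply_along_axis(f, 1, X)
    return X[np.argmin(fit)]

best = gwo(lambda x: np.sum(x**2), dim=3)
print(np.round(best, 3))  # near the origin
```

In the forecasting model, f would instead evaluate cross-validated SVM error as a function of the penalty and kernel parameters being tuned.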
Directory of Open Access Journals (Sweden)
Xue-cun Yang
2015-01-01
Full Text Available For the coal slurry pipeline blockage prediction problem, analysis of the actual scene determines that pressure prediction at each measuring point is the premise of pipeline blockage prediction. The kernel function of the support vector machine is introduced into the extreme learning machine, the parameters are optimized by the particle swarm algorithm, and a blockage prediction method based on the particle swarm optimization kernel function extreme learning machine (PSOKELM) is put forward. Actual test data from the HuangLing coal gangue power plant are used for simulation experiments and compared with a support vector machine prediction model optimized by the particle swarm algorithm (PSOSVM) and a kernel function extreme learning machine prediction model (KELM). The results show that the mean square error (MSE) for the prediction model based on PSOKELM is 0.0038 and the correlation coefficient is 0.9955, which is superior to the PSOSVM prediction model in speed and accuracy and superior to the KELM prediction model in accuracy.
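A kernel extreme learning machine of the kind the abstract combines with PSO reduces to a single regularized linear solve in the kernel matrix. The sketch below is a minimal KELM regressor under assumed hyperparameters (C, gamma), trained on a toy sine curve rather than pipeline pressure data:

```python
import numpy as np

def rbf(A, B, gamma):
    d2 = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
    return np.exp(-gamma * d2)

class KernelELM:
    """Kernel extreme learning machine: closed-form ridge solution in the
    kernel-induced feature space, alpha = (K + I/C)^{-1} y."""
    def __init__(self, C=100.0, gamma=1.0):
        self.C, self.gamma = C, gamma
    def fit(self, X, y):
        self.X = X
        K = rbf(X, X, self.gamma)
        self.alpha = np.linalg.solve(K + np.eye(len(X)) / self.C, y)
        return self
    def predict(self, Xnew):
        return rbf(Xnew, self.X, self.gamma) @ self.alpha

# Regress y = sin(x) from noisy samples
rng = np.random.default_rng(2)
X = rng.uniform(-3, 3, (80, 1))
y = np.sin(X[:, 0]) + rng.normal(0, 0.05, 80)
model = KernelELM().fit(X, y)
pred = float(model.predict(np.array([[1.0]]))[0])
print(round(pred, 2))  # close to sin(1)
```

In PSOKELM, the swarm searches over (C, gamma) to minimize validation MSE of exactly this closed-form model, which is why training inside the search loop stays cheap.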
Simulation and Optimization of Contactless Power Transfer System for Rotary Ultrasonic Machining
Directory of Open Access Journals (Sweden)
Wang Xinwei
2016-01-01
Full Text Available In today’s rotary ultrasonic machining (RUM), the power transfer system is based on a contactless power system (rotary transformer) rather than a slip ring, which cannot cope with high-speed rotation of the tool. The efficiency of the rotary transformer is vital to the whole rotary ultrasonic machine. This paper focused on simulating the rotary transformer and enhancing its efficiency by optimizing three main factors that influence it: the gap between the two ferrite cores, the ratio of length to width of the ferrite core, and the thickness of the ferrite. The finite element model of the rotary transformer was built on the Maxwell platform. Simulation and optimization work was based on the finite element model. The optimization results, compared with the initial simulation result, showed an approximately 18% enhancement in efficiency, from 77.69% to 95.2%.
Probability and sensitivity analysis of machine foundation and soil interaction
Directory of Open Access Journals (Sweden)
Králik J., jr.
2009-06-01
Full Text Available This paper deals with the possibility of sensitivity and probabilistic analysis of the reliability of a machine foundation depending on variability of the soil stiffness, structure geometry and compressor operation. The requirements for the design of foundations under rotating machines have increased due to the development of calculation methods and computer tools. During the structural design process, an engineer has to consider problems of soil-foundation and foundation-machine interaction from the safety, reliability and durability point of view. The advantages and disadvantages of deterministic and probabilistic analysis of machine foundation resistance are discussed. The sensitivity of the machine foundation to uncertainties of the soil properties due to long-time rotating movement of the machine is not negligible for design engineers. On the example of a compressor and turbine foundation by SIEMENS AG, the effectiveness of the probabilistic design methodology is presented. The Latin Hypercube Sampling (LHS) simulation method was used for the analysis of the compressor foundation reliability in the ANSYS program. The 200 simulations for five load cases were calculated in real time on a PC. The probabilistic analysis gives more comprehensive information about the soil-foundation-machine interaction than the deterministic analysis.
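The Latin Hypercube Sampling scheme used above can be sketched in a few lines: partition each input dimension into n equal-probability strata, draw one point per stratum, and randomly permute the strata across dimensions so the pairing is not systematic. The sample size and dimension below are arbitrary:

```python
import numpy as np

def latin_hypercube(n, dim, rng):
    """Latin Hypercube Sampling on the unit cube: exactly one sample per
    equal-probability stratum in each dimension."""
    u = (rng.random((n, dim)) + np.arange(n)[:, None]) / n  # jitter within strata
    for j in range(dim):
        u[:, j] = rng.permutation(u[:, j])  # decouple strata across dimensions
    return u

rng = np.random.default_rng(3)
samples = latin_hypercube(200, 2, rng)
# Each of the 200 strata per dimension contains exactly one sample:
print(sorted(np.floor(samples[:, 0] * 200).astype(int)) == list(range(200)))  # True
```

The stratification is why 200 LHS simulations in ANSYS can stand in for a far larger plain Monte Carlo sample: every marginal distribution is covered evenly by construction.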
Development of a new concrete pipe molding machine using topology optimization
International Nuclear Information System (INIS)
Park, Hong Seok; Dahal, Prakash; Nguyen, Trung Thanh
2016-01-01
Sulfur polymer concrete (SPC) is a relatively new material used to replace Portland cement for manufacturing sewer pipes. The objective of this work is to develop an efficient molding machine with an inner rotating die to mix, compress and shape the SPC pipe. First, the alternative concepts were generated based on the TRIZ principles to overcome the drawbacks of existing machines. Then, the concept scoring technique was used to identify the best design in terms of machine structure and product quality. Finally, topology optimization was applied with the support of the density method to reduce mass and to displace the inner die. Results showed that the die volume can be reduced by approximately 9% and the displacement can be decreased by approximately 3% when compared with the initial design. This work is expected to improve the manufacturing efficiency of the concrete pipe molding machine
Development of a new concrete pipe molding machine using topology optimization
Energy Technology Data Exchange (ETDEWEB)
Park, Hong Seok; Dahal, Prakash [School of Mechanical and Automotive Engineering, University of Ulsan, Ulsan (Korea, Republic of); Nguyen, Trung Thanh [Faculty of Mechanical Engineering, Le Quy Don Technical University, Hanoi (Viet Nam)
2016-08-15
Sulfur polymer concrete (SPC) is a relatively new material used to replace Portland cement for manufacturing sewer pipes. The objective of this work is to develop an efficient molding machine with an inner rotating die to mix, compress and shape the SPC pipe. First, the alternative concepts were generated based on the TRIZ principles to overcome the drawbacks of existing machines. Then, the concept scoring technique was used to identify the best design in terms of machine structure and product quality. Finally, topology optimization was applied with the support of the density method to reduce mass and to displace the inner die. Results showed that the die volume can be reduced by approximately 9% and the displacement can be decreased by approximately 3% when compared with the initial design. This work is expected to improve the manufacturing efficiency of the concrete pipe molding machine.
Directory of Open Access Journals (Sweden)
Yuliang Su
2015-04-01
Full Text Available A turning machine tool is a new type of machine tool equipped with more than one spindle and turret. The distinctive simultaneous and parallel processing abilities of the turning machine tool increase the complexity of process planning. The operations must not only be sequenced to satisfy precedence constraints, but also be scheduled with multiple objectives such as minimizing machining cost, maximizing utilization of the turning machine tool, and so on. To solve this problem, a hybrid genetic algorithm is proposed to generate optimal process plans based on a mixed 0-1 integer programming model. An operation precedence graph is used to represent precedence constraints and help generate a feasible initial population for the hybrid genetic algorithm. An encoding strategy based on a data structure was developed to represent process plans digitally in order to form the solution space. In addition, a local search approach for optimizing the assignment of available turrets is added to incorporate scheduling with process planning. A real-world case is used to prove that the proposed approach avoids infeasible solutions and effectively generates a global optimal process plan.
Directory of Open Access Journals (Sweden)
Rui Zhang
2013-01-01
Full Text Available We consider a parallel machine scheduling problem with random processing/setup times and adjustable production rates. The objective functions to be minimized consist of two parts; the first part is related to the due date performance (i.e., the tardiness of the jobs), while the second part is related to the setting of machine speeds. Therefore, the decision variables include both the production schedule (sequences of jobs) and the production rate of each machine. The optimization process, however, is significantly complicated by the stochastic factors in the manufacturing system. To address the difficulty, a simulation-based three-stage optimization framework is presented in this paper for high-quality robust solutions to the integrated scheduling problem. The first stage (crude optimization) is featured by the ordinal optimization theory, the second stage (finer optimization) is implemented with a metaheuristic called differential evolution, and the third stage (fine-tuning) is characterized by a perturbation-based local search. Finally, computational experiments are conducted to verify the effectiveness of the proposed approach. Sensitivity analysis and practical implications are also discussed.
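The differential evolution metaheuristic used in the second stage can be sketched as the classic DE/rand/1/bin scheme: mutate with a scaled difference of two random population members, cross over gene-wise, and keep the trial only if it is no worse. The sphere-like test objective stands in for the simulation-based scheduling cost:

```python
import numpy as np

def differential_evolution(f, bounds, n=30, iters=300, F=0.7, CR=0.9, seed=4):
    """DE/rand/1/bin: difference-vector mutation, binomial crossover,
    greedy one-to-one selection."""
    rng = np.random.default_rng(seed)
    lb, ub = np.array(bounds, dtype=float).T
    dim = len(lb)
    X = rng.uniform(lb, ub, (n, dim))
    fit = np.apply_along_axis(f, 1, X)
    for _ in range(iters):
        for i in range(n):
            a, b, c = rng.choice([k for k in range(n) if k != i], 3, replace=False)
            mutant = np.clip(X[a] + F * (X[b] - X[c]), lb, ub)
            cross = rng.random(dim) < CR
            cross[rng.integers(dim)] = True  # ensure at least one mutant gene
            trial = np.where(cross, mutant, X[i])
            ft = f(trial)
            if ft <= fit[i]:
                X[i], fit[i] = trial, ft
    return X[np.argmin(fit)]

best = differential_evolution(lambda x: np.sum((x - 1) ** 2), [(-5, 5)] * 3)
print(np.round(best, 3))  # near [1, 1, 1]
```

In the paper's framework, f would be a (noisy) simulation of tardiness plus speed-setting cost, which is why DE's rank-free, population-based selection is a natural fit.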
Abstract quantum computing machines and quantum computational logics
Chiara, Maria Luisa Dalla; Giuntini, Roberto; Sergioli, Giuseppe; Leporini, Roberto
2016-06-01
Classical and quantum parallelism are deeply different, although it is sometimes claimed that quantum Turing machines are nothing but special examples of classical probabilistic machines. We introduce the concepts of deterministic state machine, classical probabilistic state machine and quantum state machine. On this basis, we discuss the question: To what extent can quantum state machines be simulated by classical probabilistic state machines? Each state machine is devoted to a single task determined by its program. Real computers, however, behave differently, being able to solve different kinds of problems. This capacity can be modeled, in the quantum case, by the mathematical notion of abstract quantum computing machine, whose different programs determine different quantum state machines. The computations of abstract quantum computing machines can be linguistically described by the formulas of a particular form of quantum logic, termed quantum computational logic.
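The deterministic state machine concept introduced above has a very small concrete instance: a transition table mapping (state, symbol) pairs to states, plus a set of accepting states. The parity-of-ones recognizer below is a standard textbook example, not taken from the paper:

```python
def run_dsm(transitions, accept, start, word):
    """Run a deterministic state machine: follow the (state, symbol) -> state
    table and report whether the final state is accepting."""
    state = start
    for symbol in word:
        state = transitions[(state, symbol)]
    return state in accept

# Recognizer for binary strings containing an even number of 1s
delta = {("even", "0"): "even", ("even", "1"): "odd",
         ("odd", "0"): "odd", ("odd", "1"): "even"}
print(run_dsm(delta, {"even"}, "even", "1011"))  # False: three 1s
print(run_dsm(delta, {"even"}, "even", "1001"))  # True: two 1s
```

The probabilistic and quantum state machines of the paper generalize exactly this picture, replacing the deterministic transition table by stochastic matrices and unitary operators respectively.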
Yang, Qin; Zou, Hong-Yan; Zhang, Yan; Tang, Li-Juan; Shen, Guo-Li; Jiang, Jian-Hui; Yu, Ru-Qin
2016-01-15
Most proteins localize to more than one organelle in a cell. Unmixing the localization patterns of proteins is critical for understanding protein functions and other vital cellular processes. Herein, a non-linear machine learning technique is proposed for the first time for protein pattern unmixing. Variable-weighted support vector machine (VW-SVM) is a demonstrably robust modeling technique with flexible and rational variable selection. Optimized by a global stochastic optimization technique, the particle swarm optimization (PSO) algorithm, VW-SVM becomes an adaptive parameter-free method for automated unmixing of protein subcellular patterns. Results obtained by pattern unmixing of a set of fluorescence microscope images of cells indicate that VW-SVM as optimized by PSO is able to extract useful pattern features by optimally rescaling each variable for non-linear SVM modeling, consequently leading to improved performance in multiplex protein pattern unmixing compared with conventional SVM and other existing pattern unmixing methods. Copyright © 2015 Elsevier B.V. All rights reserved.
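The particle swarm optimizer used to tune the VW-SVM variable weights can be sketched in its standard form: each particle is pulled toward its own best position and the swarm-wide best. The inertia and acceleration constants below are common textbook defaults, not the paper's settings, and the sphere objective stands in for the cross-validated unmixing error:

```python
import numpy as np

def pso(f, dim, n=30, iters=200, w=0.7, c1=1.5, c2=1.5, lb=-5.0, ub=5.0, seed=5):
    """Particle swarm optimization with inertia weight w and cognitive/social
    acceleration coefficients c1, c2."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(lb, ub, (n, dim))
    V = np.zeros((n, dim))
    P, pf = X.copy(), np.apply_along_axis(f, 1, X)  # personal bests
    g = P[np.argmin(pf)]                            # global best
    for _ in range(iters):
        r1, r2 = rng.random((n, dim)), rng.random((n, dim))
        V = w * V + c1 * r1 * (P - X) + c2 * r2 * (g - X)
        X = np.clip(X + V, lb, ub)
        fx = np.apply_along_axis(f, 1, X)
        better = fx < pf
        P[better], pf[better] = X[better], fx[better]
        g = P[np.argmin(pf)]
    return g

best = pso(lambda x: np.sum(x**2), dim=2)
print(np.round(best, 3))  # near the origin
```

For VW-SVM the search space would be the per-variable weights (plus SVM hyperparameters), making the method parameter-free from the user's perspective.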
Optimal methodology for a machining process scheduling in spot electricity markets
International Nuclear Information System (INIS)
Yusta, J.M.; Torres, F.; Khodr, H.M.
2010-01-01
Electricity spot markets have introduced hourly variations in the price of electricity. These variations allow the increase of the energy efficiency by the appropriate scheduling and adaptation of the industrial production to the hourly cost of electricity in order to obtain the maximum profit for the industry. In this article a mathematical optimization model simulates costs and the electricity demand of a machining process. The resultant problem is solved using the generalized reduced gradient approach, to find the optimum production schedule that maximizes the industry profit considering the hourly variations of the price of electricity in the spot market. Different price scenarios are studied to analyze the impact of the spot market prices for electricity on the optimal scheduling of the machining process and on the industry profit. The convenience of the application of the proposed model is shown especially in cases of very high electricity prices.
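A drastically simplified version of the scheduling idea above can be sketched as follows. The paper solves a nonlinear model with the generalized reduced gradient method; the toy below assumes instead that the process simply needs a fixed number of interchangeable machine-hours, so profit is maximized by running in the cheapest-priced hours. The prices, revenue, and consumption figures are invented:

```python
def schedule_production(prices, hours_needed, revenue_per_hour, kwh_per_hour):
    """Toy hourly scheduling against spot prices: run the machining process
    in the hours with the lowest electricity cost."""
    ranked = sorted(range(len(prices)), key=lambda t: prices[t])
    run = sorted(ranked[:hours_needed])  # chosen hours, in chronological order
    profit = sum(revenue_per_hour - kwh_per_hour * prices[t] for t in run)
    return run, profit

# 24 hypothetical hourly spot prices (EUR/kWh): cheap nights, expensive midday
prices = [0.05] * 6 + [0.12] * 4 + [0.20] * 4 + [0.12] * 6 + [0.05] * 4
run, profit = schedule_production(prices, hours_needed=8,
                                  revenue_per_hour=50.0, kwh_per_hour=100.0)
print(run, round(profit, 2))  # eight cheapest hours, profit 360.0
```

The real model adds constraints (process sequencing, machine demand profiles) that break this greedy structure, which is why a gradient-based optimizer is needed there.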
International Nuclear Information System (INIS)
Boustani, Ehsan; Amirkabir University of Technology, Tehran; Khakshournia, Samad
2016-01-01
In this paper two different computational approaches, a deterministic and a stochastic one, were used for calculation of the control rod worth of the Tehran research reactor. For the deterministic approach the MTRPC package, composed of the WIMS code and the diffusion code CITVAP, was used, while for the stochastic one the Monte Carlo code MCNPX was applied. On comparing our results obtained by the Monte Carlo approach with those previously reported in the Safety Analysis Report (SAR) of the Tehran research reactor, produced by the deterministic approach, large discrepancies were seen. To uncover the root cause of these discrepancies, some efforts were made, and it was finally discerned that the number of spatial mesh points in the deterministic approach was the critical cause. Therefore, mesh optimization was performed for different regions of the core such that the results of the deterministic approach based on the optimized mesh points are in good agreement with those obtained by the Monte Carlo approach.
Energy Technology Data Exchange (ETDEWEB)
Boustani, Ehsan [Nuclear Science and Technology Research Institute (NSTRI), Tehran (Iran, Islamic Republic of); Amirkabir University of Technology, Tehran (Iran, Islamic Republic of). Energy Engineering and Physics Dept.; Khakshournia, Samad [Amirkabir University of Technology, Tehran (Iran, Islamic Republic of). Energy Engineering and Physics Dept.
2016-12-15
In this paper two different computational approaches, a deterministic and a stochastic one, were used for calculation of the control rod worth of the Tehran research reactor. For the deterministic approach the MTRPC package, composed of the WIMS code and the diffusion code CITVAP, was used, while for the stochastic one the Monte Carlo code MCNPX was applied. On comparing our results obtained by the Monte Carlo approach with those previously reported in the Safety Analysis Report (SAR) of the Tehran research reactor, produced by the deterministic approach, large discrepancies were seen. To uncover the root cause of these discrepancies, some efforts were made, and it was finally discerned that the number of spatial mesh points in the deterministic approach was the critical cause. Therefore, mesh optimization was performed for different regions of the core such that the results of the deterministic approach based on the optimized mesh points are in good agreement with those obtained by the Monte Carlo approach.
Network of time-multiplexed optical parametric oscillators as a coherent Ising machine
Marandi, Alireza; Wang, Zhe; Takata, Kenta; Byer, Robert L.; Yamamoto, Yoshihisa
2014-12-01
Finding the ground states of the Ising Hamiltonian maps to various combinatorial optimization problems in biology, medicine, wireless communications, artificial intelligence and social networks. So far, no efficient classical or quantum algorithm is known for these problems, and intensive research is focused on creating physical systems—Ising machines—capable of finding the absolute or approximate ground states of the Ising Hamiltonian. Here, we report an Ising machine using a network of degenerate optical parametric oscillators (OPOs). Spins are represented by the above-threshold binary phases of the OPOs and the Ising couplings are realized by mutual injections. The network is implemented in a single OPO ring cavity with multiple trains of femtosecond pulses and configurable mutual couplings, and operates at room temperature. We programmed a small non-deterministic polynomial time-hard problem on a 4-OPO Ising machine and in 1,000 runs no computational error was detected.
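For a problem as small as the 4-spin instance mentioned above, the Ising ground state can be checked by exhaustive search, which is how such hardware results are typically verified. The antiferromagnetic ring couplings below are an illustrative instance, not the one programmed on the OPO network:

```python
import itertools

def ising_ground_state(J):
    """Exhaustive search for the ground state of an Ising Hamiltonian
    H = -sum_{i<j} J[i][j] s_i s_j over spins s_i in {-1, +1}."""
    n = len(J)
    best_energy, best_spins = float("inf"), None
    for spins in itertools.product([-1, 1], repeat=n):
        e = -sum(J[i][j] * spins[i] * spins[j]
                 for i in range(n) for j in range(i + 1, n))
        if e < best_energy:
            best_energy, best_spins = e, spins
    return best_energy, best_spins

# 4 spins on a ring with antiferromagnetic couplings (J = -1 between neighbours)
J = [[0, -1, 0, -1], [0, 0, -1, 0], [0, 0, 0, -1], [0, 0, 0, 0]]
print(ising_ground_state(J))  # energy -4, achieved by alternating spins
```

The 2^n scaling of this search is precisely the motivation for physical Ising machines: the OPO network samples low-energy configurations without enumerating the state space.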
Directory of Open Access Journals (Sweden)
Jian Chai
2015-01-01
Full Text Available This paper proposes an EMD-LSSVM (empirical mode decomposition least squares support vector machine) model to analyze the CSI 300 index. A WD-LSSVM (wavelet denoising least squares support vector machine) is also proposed as a benchmark to compare with the performance of EMD-LSSVM. Since parameter selection is vital to the performance of the model, different optimization methods are used, including simplex, GS (grid search), PSO (particle swarm optimization), and GA (genetic algorithm). Experimental results show that the EMD-LSSVM model with the GS algorithm outperforms the other methods in predicting stock market movement direction.
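The LSSVM at the heart of both models has a closed-form training step: the dual problem reduces to one linear system rather than a quadratic program. The sketch below is a minimal RBF-kernel LSSVM regressor with assumed hyperparameters (gamma, sig2), fitted to a toy parabola rather than CSI 300 data:

```python
import numpy as np

def lssvm_train(X, y, gamma=100.0, sig2=0.5):
    """Least squares SVM regression: the dual reduces to the linear system
    [[0, 1^T], [1, K + I/gamma]] [b; alpha] = [0; y]."""
    n = len(X)
    d2 = np.sum(X**2, 1)[:, None] + np.sum(X**2, 1)[None, :] - 2 * X @ X.T
    K = np.exp(-d2 / sig2)
    A = np.zeros((n + 1, n + 1))
    A[0, 1:], A[1:, 0] = 1.0, 1.0
    A[1:, 1:] = K + np.eye(n) / gamma
    sol = np.linalg.solve(A, np.concatenate([[0.0], y]))
    b, alpha = sol[0], sol[1:]
    def predict(Xn):
        d2n = np.sum(Xn**2, 1)[:, None] + np.sum(X**2, 1)[None, :] - 2 * Xn @ X.T
        return np.exp(-d2n / sig2) @ alpha + b
    return predict

rng = np.random.default_rng(6)
X = rng.uniform(-2, 2, (60, 1))
y = X[:, 0] ** 2 + rng.normal(0, 0.05, 60)
predict = lssvm_train(X, y)
pred = float(predict(np.array([[1.5]]))[0])
print(round(pred, 2))  # close to 2.25
```

Grid search over (gamma, sig2), the GS variant that performed best in the paper, simply re-solves this system for each candidate pair and keeps the one with the lowest validation error.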
Energy Technology Data Exchange (ETDEWEB)
Yang, Y M; Bush, K; Han, B; Xing, L [Department of Radiation Oncology, Stanford University School of Medicine, Stanford, CA (United States)
2016-06-15
Purpose: Accurate and fast dose calculation is a prerequisite of precision radiation therapy in modern photon and particle therapy. While Monte Carlo (MC) dose calculation provides high dosimetric accuracy, the drastically increased computational time hinders its routine use. Deterministic dose calculation methods are fast, but problematic in the presence of tissue density inhomogeneity. We leverage the useful features of deterministic methods and MC to develop a hybrid dose calculation platform with autonomous utilization of MC and deterministic calculation depending on the local geometry, for optimal accuracy and speed. Methods: Our platform utilizes a Geant4 based “localized Monte Carlo” (LMC) method that isolates MC dose calculations only to volumes that have potential for dosimetric inaccuracy. In our approach, additional structures are created encompassing heterogeneous volumes. Deterministic methods calculate dose and energy fluence up to the volume surfaces, where the energy fluence distribution is sampled into discrete histories and transported using MC. Histories exiting the volume are converted back into energy fluence, and transported deterministically. By matching boundary conditions at both interfaces, the deterministic dose calculation accounts for dose perturbations “downstream” of localized heterogeneities. Hybrid dose calculation was performed for water and anthropomorphic phantoms. Results: We achieved <1% agreement between deterministic and MC calculations in the water benchmark for photon and proton beams, and dose differences of 2%–15% could be observed in heterogeneous phantoms. The saving in computational time (a factor ∼4–7 compared to a full Monte Carlo dose calculation) was found to be approximately proportional to the volume of the heterogeneous region. Conclusion: Our hybrid dose calculation approach takes advantage of the computational efficiency of the deterministic method and the accuracy of MC, providing a practical tool for high
International Nuclear Information System (INIS)
Yang, Y M; Bush, K; Han, B; Xing, L
2016-01-01
Purpose: Accurate and fast dose calculation is a prerequisite of precision radiation therapy in modern photon and particle therapy. While Monte Carlo (MC) dose calculation provides high dosimetric accuracy, the drastically increased computational time hinders its routine use. Deterministic dose calculation methods are fast, but problematic in the presence of tissue density inhomogeneity. We leverage the useful features of deterministic methods and MC to develop a hybrid dose calculation platform with autonomous utilization of MC and deterministic calculation depending on the local geometry, for optimal accuracy and speed. Methods: Our platform utilizes a Geant4 based “localized Monte Carlo” (LMC) method that isolates MC dose calculations only to volumes that have potential for dosimetric inaccuracy. In our approach, additional structures are created encompassing heterogeneous volumes. Deterministic methods calculate dose and energy fluence up to the volume surfaces, where the energy fluence distribution is sampled into discrete histories and transported using MC. Histories exiting the volume are converted back into energy fluence, and transported deterministically. By matching boundary conditions at both interfaces, the deterministic dose calculation accounts for dose perturbations “downstream” of localized heterogeneities. Hybrid dose calculation was performed for water and anthropomorphic phantoms. Results: We achieved <1% agreement between deterministic and MC calculations in the water benchmark for photon and proton beams, and dose differences of 2%–15% could be observed in heterogeneous phantoms. The saving in computational time (a factor ∼4–7 compared to a full Monte Carlo dose calculation) was found to be approximately proportional to the volume of the heterogeneous region. Conclusion: Our hybrid dose calculation approach takes advantage of the computational efficiency of the deterministic method and the accuracy of MC, providing a practical tool for high
Linear parallel processing machines I
Energy Technology Data Exchange (ETDEWEB)
Von Kunze, M
1984-01-01
As is well-known, non-context-free grammars for generating formal languages happen to be of a certain intrinsic computational power that presents serious difficulties to efficient parsing algorithms as well as for the development of an algebraic theory of contextsensitive languages. In this paper a framework is given for the investigation of the computational power of formal grammars, in order to start a thorough analysis of grammars consisting of derivation rules of the form aB → A₁ … Aₙ b₁ … bₘ. These grammars may be thought of as automata by means of parallel processing, if one considers the variables as operators acting on the terminals while reading them right-to-left. This kind of automata and their 2-dimensional programming language prove to be useful by allowing a concise linear-time algorithm for integer multiplication. Linear parallel processing machines (LP-machines) which are, in their general form, equivalent to Turing machines, include finite automata and pushdown automata (with states encoded) as special cases. Bounded LP-machines yield deterministic accepting automata for nondeterministic contextfree languages, and they define an interesting class of contextsensitive languages. A characterization of this class in terms of generating grammars is established by using derivation trees with crossings as a helpful tool. From the algebraic point of view, deterministic LP-machines are effectively represented semigroups with distinguished subsets. Concerning the dualism between generating and accepting devices of formal languages within the algebraic setting, the concept of accepting automata turns out to reduce essentially to embeddability in an effectively represented extension monoid, even in the classical cases.
Compiler design handbook optimizations and machine code generation
Srikant, YN
2003-01-01
The widespread use of object-oriented languages and Internet security concerns are just the beginning. Add embedded systems, multiple memory banks, highly pipelined units operating in parallel, and a host of other advances and it becomes clear that current and future computer architectures pose immense challenges to compiler designers-challenges that already exceed the capabilities of traditional compilation techniques. The Compiler Design Handbook: Optimizations and Machine Code Generation is designed to help you meet those challenges. Written by top researchers and designers from around the
Kant Garg, Girish; Garg, Suman; Sangwan, K. S.
2018-04-01
The manufacturing sector has a huge energy demand, and the machine tools used in this sector have very low energy efficiency. Selection of the optimum machining parameters for machine tools is significant for energy saving and for reduction of environmental emissions. In this work an empirical model is developed to minimize power consumption using response surface methodology. The experiments are performed on a lathe machine tool during the turning of AISI 6061 aluminum with coated tungsten inserts. The relationship between the power consumption and the machining parameters is adequately modeled. This model is used to formulate a minimum power consumption criterion as a function of the optimal machining parameters using the desirability function approach. The influence of the machining parameters on energy consumption has been found using analysis of variance. The developed empirical model is validated using confirmation experiments. The results indicate that the developed model is effective and has the potential to be adopted by industry for minimum power consumption of machine tools.
Bankole I. Oladapo; Vincent A. Balogun; Adeyinka O.M. Adeoye; Ige E. Olubunmi; Samuel O. Afolabi
2017-01-01
The sustainability criterion in the manufacturing industries is imperative, especially in the automobile industry. Currently, efforts are being made by the industries to mitigate CO2 emission through total vehicle weight optimization, machine utilization and resource efficiency. In view of this, it is important to study the manufacturing machines adopted in the automobile industries. One such machine is the hot stamping machine, which is used for about 35% of the manufacturing operatio...
Directory of Open Access Journals (Sweden)
Suryakant B. Chandgude
2015-09-01
Full Text Available The optimum selection of process parameters plays an important role in improving the surface finish, minimizing tool wear, increasing the material removal rate and reducing the machining time of any machining process. In this paper, optimum parameters for machining AISI D2 hardened steel using a solid carbide TiAlN-coated end mill have been investigated. For optimization of process parameters with multiple quality characteristics, the principal component analysis method has been adopted in this work. The confirmation experiments have revealed that the principal component analysis method is a useful tool for improving cutting performance.
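As a hedged sketch of how principal component analysis can fold several quality characteristics into a single optimization index, the following combines normalized responses via the weights of the first principal component. All trial data below are illustrative placeholders, not the paper's AISI D2 measurements.

```python
import numpy as np

# Hypothetical trial responses: surface roughness, tool wear, MRR.
responses = np.array([
    [0.82, 0.10, 120.0],   # trial 1
    [0.61, 0.14, 150.0],
    [0.95, 0.08, 100.0],
    [0.55, 0.16, 170.0],
])

# Normalize each response to [0, 1], then invert the two
# smaller-is-better responses so larger is uniformly better.
norm = (responses - responses.min(axis=0)) / (responses.max(axis=0) - responses.min(axis=0))
norm[:, 0] = 1.0 - norm[:, 0]   # roughness: smaller is better
norm[:, 1] = 1.0 - norm[:, 1]   # tool wear: smaller is better

# First principal component of the covariance of the normalized
# responses supplies the attribute weights for a composite index.
cov = np.cov(norm, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)
w = np.abs(eigvecs[:, -1])       # component with the largest eigenvalue
w = w / w.sum()                  # normalize to unit-sum weights

composite = norm @ w
best_trial = int(np.argmax(composite))
print(best_trial, composite.round(3))
```

The trial with the highest composite score is the preferred parameter setting; the real study derives the weights from experimental data rather than placeholders.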
Design Optimization of Moving Magnet Actuated Valves for Digital Displacement Machines
DEFF Research Database (Denmark)
Madsen, Esben Lundø; Jørgensen, Janus Martin Thastum; Nørgård, Christian
2017-01-01
High-efficiency hydraulic machines using digital valves are presently a topic of great focus. Digital valve performance with respect to pressure loss, closing time and electrical power consumption is key to obtaining high efficiency. A recent digital seat valve design developed at Aalborg...... optimized design closes in 2.1 ms, has a pressure drop of 0.8 bar at 150 l/min and yields a digital displacement machine average chamber efficiency of 98.9%. The design is simple in construction and uses a single coil, positioned outside the pressure chamber, eliminating the need for an electrical interface...
Protim Das, Partha; Gupta, P.; Das, S.; Pradhan, B. B.; Chakraborty, S.
2018-01-01
Maraging steel (MDN 300) finds application in many industries, as it exhibits high hardness which makes it a very difficult-to-machine material. Electro-discharge machining (EDM) is an extensively popular machining process which can be used in machining of such materials. Optimization of response parameters is essential for effective machining of these materials. Past researchers have already used the Taguchi method to obtain the optimal responses of the EDM process for this material, with responses such as material removal rate (MRR), tool wear rate (TWR), relative wear ratio (RWR), and surface roughness (SR), considering discharge current, pulse-on time, pulse-off time, arc gap, and duty cycle as process parameters. In this paper, grey relational analysis (GRA) with fuzzy logic is applied to this multi-objective optimization problem to check the responses by implementing the derived parametric setting. It was found that the parametric setting derived by the proposed method results in a better response than those reported by past researchers. The obtained results are also verified using the technique for order of preference by similarity to ideal solution (TOPSIS). The predicted result also shows a significant improvement over the results of past researchers.
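A minimal sketch of the grey relational analysis (GRA) step used in such multi-response EDM studies, under the assumption of illustrative trial data (not the MDN 300 measurements, and without the paper's fuzzy-logic layer):

```python
import numpy as np

# Hypothetical trials: MRR (larger better), TWR and SR (smaller better).
trials = np.array([
    [12.5, 0.042, 3.1],
    [10.2, 0.031, 2.4],
    [14.8, 0.055, 3.9],
    [11.7, 0.037, 2.8],
])
larger_is_better = [True, False, False]

# Step 1: grey relational generation (normalize each response to [0, 1]).
norm = np.empty_like(trials)
for j, lb in enumerate(larger_is_better):
    col = trials[:, j]
    if lb:
        norm[:, j] = (col - col.min()) / (col.max() - col.min())
    else:
        norm[:, j] = (col.max() - col) / (col.max() - col.min())

# Step 2: grey relational coefficients against the ideal sequence (all 1s).
delta = np.abs(1.0 - norm)
zeta = 0.5  # distinguishing coefficient, conventional value
coeff = (delta.min() + zeta * delta.max()) / (delta + zeta * delta.max())

# Step 3: grey relational grade = mean coefficient per trial;
# the highest grade marks the preferred parameter setting.
grade = coeff.mean(axis=1)
best = int(np.argmax(grade))
print(best, grade.round(3))
```

With these placeholder numbers, trial 2 (lowest TWR and SR) earns the top grade; a fuzzy-GRA variant would replace step 2's crisp coefficients with fuzzy reasoning.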
Fuzzy Linguistic Optimization on Multi-Attribute Machining
Directory of Open Access Journals (Sweden)
Tian-Syung Lan
2010-06-01
Full Text Available Most existing multi-attribute optimization studies for the modern CNC (computer numerical control) turning industry were either accomplished within certain manufacturing circumstances, or achieved through numerous equipment operations. Therefore, a general deduction optimization scheme is deemed necessary for the industry. In this paper, four parameters (cutting depth, feed rate, speed, tool nose runoff) with three levels (low, medium, high) are considered to optimize the multi-attribute (surface roughness, tool wear, and material removal rate) finish turning. Through FAHP (Fuzzy Analytic Hierarchy Process) with eighty intervals for each attribute, the weight of each attribute is evaluated from the paired comparison matrix constructed by expert judgment. Additionally, twenty-seven fuzzy control rules using a trapezoid membership function with seventeen linguistic grades for each attribute are constructed. Considering thirty input and eighty output intervals, defuzzification using the center of gravity is then completed. TOPSIS (Technique for Order Preference by Similarity to Ideal Solution) is moreover utilized to integrate and evaluate the multiple machining attributes for the Taguchi experiment, so that the optimum general deduction parameters can be obtained. The confirmation experiment for the optimum general deduction parameters is furthermore performed on an ECOCA-3807 CNC lathe. It is shown that the attributes from the fuzzy linguistic optimization parameters are all significantly improved compared with those from the benchmark. This paper not only proposes a general deduction optimization scheme using an orthogonal array, but also contributes a satisfactory fuzzy linguistic approach for multiple CNC turning attributes with profound insight.
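The TOPSIS step above can be sketched as follows; the decision matrix and weights are hypothetical stand-ins, not the FAHP weights derived in the paper.

```python
import numpy as np

# Hypothetical trials: surface roughness, tool wear (smaller better), MRR (larger better).
X = np.array([
    [3.2, 0.12, 45.0],
    [2.1, 0.18, 38.0],
    [2.8, 0.10, 52.0],
])
weights = np.array([0.4, 0.3, 0.3])        # placeholder attribute weights
benefit = np.array([False, False, True])    # only MRR is larger-is-better

# Vector-normalize each column, then apply the weights.
V = weights * X / np.linalg.norm(X, axis=0)

# Positive-ideal and negative-ideal solutions per attribute.
ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
anti = np.where(benefit, V.min(axis=0), V.max(axis=0))

# Closeness coefficient: near the ideal, far from the anti-ideal.
d_pos = np.linalg.norm(V - ideal, axis=1)
d_neg = np.linalg.norm(V - anti, axis=1)
closeness = d_neg / (d_pos + d_neg)
ranking = np.argsort(-closeness)            # best trial first
print(ranking, closeness.round(3))
```

The top-ranked trial identifies the preferred parameter combination; the paper feeds FAHP-derived weights and Taguchi-experiment attributes into this same ranking machinery.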
Exponential power spectra, deterministic chaos and Lorentzian pulses in plasma edge dynamics
International Nuclear Information System (INIS)
Maggs, J E; Morales, G J
2012-01-01
Exponential spectra have been observed in the edges of tokamaks, stellarators, helical devices and linear machines. The observation of exponential power spectra is significant because such a spectral character has been closely associated with the phenomenon of deterministic chaos by the nonlinear dynamics community. The proximate cause of exponential power spectra in both magnetized plasma edges and nonlinear dynamics models is the occurrence of Lorentzian pulses in the time signals of fluctuations. Lorentzian pulses are produced by chaotic behavior in the separatrix regions of plasma E × B flow fields or the limit cycle regions of nonlinear models. Chaotic advection, driven by the potential fields of drift waves in plasmas, results in transport. The observation of exponential power spectra and Lorentzian pulses suggests that fluctuations and transport at the edge of magnetized plasmas arise from deterministic, rather than stochastic, dynamics. (paper)
Deterministic Graphical Games Revisited
DEFF Research Database (Denmark)
Andersson, Daniel; Hansen, Kristoffer Arnsfelt; Miltersen, Peter Bro
2008-01-01
We revisit the deterministic graphical games of Washburn. A deterministic graphical game can be described as a simple stochastic game (a notion due to Anne Condon), except that we allow arbitrary real payoffs but disallow moves of chance. We study the complexity of solving deterministic graphical...... games and obtain an almost-linear time comparison-based algorithm for computing an equilibrium of such a game. The existence of a linear time comparison-based algorithm remains an open problem....
Directory of Open Access Journals (Sweden)
I. Nayak
2017-06-01
Full Text Available In the present research work, four different multi-response optimization techniques, viz. the multiple response signal-to-noise (MRSN) ratio, weighted signal-to-noise (WSN) ratio, grey relational analysis (GRA) and VIKOR (VlseKriterijumska Optimizacija I Kompromisno Resenje in Serbian) methods, have been used to optimize the electro-discharge machining (EDM) performance characteristics, namely material removal rate (MRR), tool wear rate (TWR) and surface roughness (SR), simultaneously. Experiments have been planned on a D2 steel specimen based on an L9 orthogonal array. Experimental results are analyzed using the standard procedure. The optimum level combinations of input process parameters such as voltage, current, pulse-on time and pulse-off time, and the percentage contributions of each process parameter, have been determined using the ANOVA technique. Different correlations have been developed between the various input process parameters and output performance characteristics. Finally, the optimum performances of these four methods are compared, and the results show that the WSN ratio method is the best multi-response optimization technique for this process. From the analysis, it is also found that the current has the maximum effect on the overall performance of the EDM operation as compared to the other process parameters.
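A simplified sketch of a weighted signal-to-noise (WSN) combination of the three EDM responses: Taguchi S/N ratios are computed per response and blended by weights. The trial values and weights are illustrative assumptions, not the D2-steel data, and the paper's normalization of S/N ratios before weighting is omitted here.

```python
import math

# Hypothetical L9-style trials: MRR (larger better), TWR and SR (smaller better).
trials = [
    {"MRR": 9.5,  "TWR": 0.21, "SR": 4.2},
    {"MRR": 7.8,  "TWR": 0.15, "SR": 3.1},
    {"MRR": 11.2, "TWR": 0.28, "SR": 5.0},
]
weights = {"MRR": 0.4, "TWR": 0.3, "SR": 0.3}   # placeholder weights

def sn_larger_is_better(y):
    # Taguchi S/N for larger-is-better: -10 log10(1/y^2)
    return -10.0 * math.log10(1.0 / y**2)

def sn_smaller_is_better(y):
    # Taguchi S/N for smaller-is-better: -10 log10(y^2)
    return -10.0 * math.log10(y**2)

def wsn(trial):
    # Weighted sum of per-response S/N ratios (higher is better).
    s = weights["MRR"] * sn_larger_is_better(trial["MRR"])
    s += weights["TWR"] * sn_smaller_is_better(trial["TWR"])
    s += weights["SR"] * sn_smaller_is_better(trial["SR"])
    return s

scores = [wsn(t) for t in trials]
best = scores.index(max(scores))
print(best, [round(s, 2) for s in scores])
```

Here trial 2 wins: its low TWR and SR outweigh its modest MRR under these weights.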
Solar photovoltaic power forecasting using optimized modified extreme learning machine technique
Directory of Open Access Journals (Sweden)
Manoja Kumar Behera
2018-06-01
Full Text Available Prediction of photovoltaic power is a significant research area that uses different forecasting techniques to mitigate the effects of the uncertainty of photovoltaic generation. An increasingly high penetration level of photovoltaic (PV) generation arises in the smart grid and microgrid concepts. The solar source is irregular in nature; as a result, PV power is intermittent and is highly dependent on irradiance, temperature level and other atmospheric parameters. Large-scale photovoltaic generation and penetration into the conventional power system introduces significant challenges to microgrid and smart grid energy management. It is very critical to forecast solar power/irradiance exactly in order to secure the economic operation of the microgrid and smart grid. In this paper an extreme learning machine (ELM) technique is used for PV power forecasting of a real-time model whose location is given in Table 1. Here the model is associated with the incremental conductance (IC) maximum power point tracking (MPPT) technique based on a proportional-integral (PI) controller, which is simulated in MATLAB/SIMULINK software. To train the single-layer feed-forward network (SLFN), the ELM algorithm is implemented, whose weights are updated by different particle swarm optimization (PSO) techniques, and their performance is compared with existing models like the back propagation (BP) forecasting model. Keywords: PV array, Extreme learning machine, Maximum power point tracking, Particle swarm optimization, Craziness particle swarm optimization, Accelerate particle swarm optimization, Single layer feed-forward network
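The core ELM idea can be sketched in a few lines: the hidden-layer weights of an SLFN are random and fixed, and only the output weights are solved in closed form. The data below are a synthetic stand-in for the irradiance-to-power mapping, and the PSO tuning of the hidden weights described in the paper is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(200, 2))       # e.g. irradiance, temperature (synthetic)
y = 3.0 * X[:, 0] - 0.5 * X[:, 1] + 0.1 * rng.normal(size=200)

n_hidden = 20
W = rng.normal(size=(2, n_hidden))          # random input weights (kept fixed)
b = rng.normal(size=n_hidden)               # random biases (kept fixed)

def hidden(X):
    # Hidden-layer output of the single-layer feed-forward network.
    return np.tanh(X @ W + b)

# Only the output weights are learned, via the Moore-Penrose pseudoinverse.
H = hidden(X)
beta = np.linalg.pinv(H) @ y

pred = hidden(X) @ beta
rmse = float(np.sqrt(np.mean((pred - y) ** 2)))
print(round(rmse, 3))
```

Because training reduces to one least-squares solve, ELM is fast enough that an outer PSO loop (as in the paper) can afford to retrain it many times while searching over the random-weight draws.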
Modelling and optimization of a permanent-magnet machine in a flywheel
Holm, S.R.
2003-01-01
This thesis describes the derivation of an analytical model for the design and optimization of a permanent-magnet machine for use in an energy storage flywheel. A prototype of this flywheel is to be used as the peak-power unit in a hybrid electric city bus. The thesis starts by showing the
International Nuclear Information System (INIS)
Sahin, M.Ö.; Krücker, D.; Melzer-Pellmann, I.-A.
2016-01-01
In this paper we promote the use of Support Vector Machines (SVM) as a machine learning tool for searches in high-energy physics. As an example for a new-physics search we discuss the popular case of Supersymmetry at the Large Hadron Collider. We demonstrate that the SVM is a valuable tool and show that an automated discovery-significance based optimization of the SVM hyper-parameters is a highly efficient way to prepare an SVM for such applications.
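A hedged sketch of discovery-significance-based hyper-parameter selection: instead of accuracy, the search maximizes an approximate significance s/√b over an SVM grid. The signal/background samples are toy Gaussian blobs, and scikit-learn stands in for whatever SVM implementation the authors used.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
signal = rng.normal(loc=1.0, scale=1.0, size=(300, 2))      # toy "signal"
background = rng.normal(loc=-1.0, scale=1.0, size=(300, 2)) # toy "background"
X = np.vstack([signal, background])
y = np.array([1] * 300 + [0] * 300)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5, random_state=0)

def significance(y_true, y_pred):
    # Approximate discovery significance s / sqrt(b) on selected events.
    s = np.sum((y_pred == 1) & (y_true == 1))
    b = np.sum((y_pred == 1) & (y_true == 0))
    return s / np.sqrt(b) if b > 0 else float(s)

best = (None, -1.0)
for C in [0.1, 1.0, 10.0]:          # small illustrative grid
    for gamma in [0.1, 1.0]:
        clf = SVC(C=C, gamma=gamma).fit(X_tr, y_tr)
        z = significance(y_te, clf.predict(X_te))
        if z > best[1]:
            best = ((C, gamma), z)
print(best)
```

The paper's automated optimizer replaces this tiny grid with a proper search, but the objective — significance rather than raw accuracy — is the same.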
Energy Technology Data Exchange (ETDEWEB)
Sahin, M.Ö., E-mail: ozgur.sahin@desy.de; Krücker, D., E-mail: dirk.kruecker@desy.de; Melzer-Pellmann, I.-A., E-mail: isabell.melzer@desy.de
2016-12-01
In this paper we promote the use of Support Vector Machines (SVM) as a machine learning tool for searches in high-energy physics. As an example for a new-physics search we discuss the popular case of Supersymmetry at the Large Hadron Collider. We demonstrate that the SVM is a valuable tool and show that an automated discovery-significance based optimization of the SVM hyper-parameters is a highly efficient way to prepare an SVM for such applications.
Height-Deterministic Pushdown Automata
DEFF Research Database (Denmark)
Nowotka, Dirk; Srba, Jiri
2007-01-01
We define the notion of height-deterministic pushdown automata, a model where for any given input string the stack heights during any (nondeterministic) computation on the input are a priori fixed. Different subclasses of height-deterministic pushdown automata, strictly containing the class...... of regular languages and still closed under boolean language operations, are considered. Several of such language classes have been described in the literature. Here, we suggest a natural and intuitive model that subsumes all the formalisms proposed so far by employing height-deterministic pushdown automata...
Optimal Design of the Transverse Flux Machine Using a Fitted Genetic Algorithm with Real Parameters
DEFF Research Database (Denmark)
Argeseanu, Alin; Ritchie, Ewen; Leban, Krisztina Monika
2012-01-01
This paper applies a fitted genetic algorithm (GA) to the optimal design of the transverse flux machine (TFM). The main goal is to provide a tool for the optimal design of TFM that is easy to use. The GA optimizes the analytic basic design of two TFM topologies: the C-core and the U-core. First...
Gao, Lingyun; Ye, Mingquan; Wu, Changrong
2017-11-29
Intelligent optimization algorithms have advantages in dealing with complex nonlinear problems accompanied by good flexibility and adaptability. In this paper, the FCBF (Fast Correlation-Based Feature selection) method is used to filter irrelevant and redundant features in order to improve the quality of cancer classification. Then, we perform classification based on SVM (Support Vector Machine) optimized by PSO (Particle Swarm Optimization) combined with ABC (Artificial Bee Colony) approaches, which is represented as PA-SVM. The proposed PA-SVM method is applied to nine cancer datasets, including five datasets of outcome prediction and a protein dataset of ovarian cancer. By comparison with other classification methods, the results demonstrate the effectiveness and the robustness of the proposed PA-SVM method in handling various types of data for cancer classification.
Directory of Open Access Journals (Sweden)
C. V. Subbulakshmi
2015-01-01
Full Text Available Medical data classification is a prime data mining problem that has been discussed for about a decade and has attracted several researchers around the world. Most classifiers are designed so as to learn from the data itself using a training process, because complete expert knowledge to determine classifier parameters is impracticable. This paper proposes a hybrid methodology based on the machine learning paradigm. This paradigm integrates the successful exploration mechanism called the self-regulated learning capability of the particle swarm optimization (PSO) algorithm with the extreme learning machine (ELM) classifier. As a recent off-line learning method, the ELM is a single-hidden-layer feedforward neural network (FFNN), proved to be an excellent classifier with a large number of hidden layer neurons. In this research, PSO is used to determine the optimum set of parameters for the ELM, thus reducing the number of hidden layer neurons, and it further improves the network generalization performance. The proposed method is experimented on five benchmark datasets of the UCI Machine Learning Repository for handling medical dataset classification. Simulation results show that the proposed approach is able to achieve good generalization performance, compared to the results of other classifiers.
ch, Sudheer; Kumar, Deepak; Prasad, Ram Kailash; Mathur, Shashi
2013-08-01
A methodology based on support vector machine and particle swarm optimization techniques (SVM-PSO) was used in this study to determine an optimal pumping rate and well location to achieve an optimal cost of an in-situ bioremediation system. In the first stage of the two stage methodology suggested for optimal in-situ bioremediation design, the optimal number of wells and their locations was determined from preselected candidate well locations. The pumping rate and well location in the first stage were subsequently optimized in the second stage of the methodology. The highly nonlinear system of equations governing in-situ bioremediation comprises the equations of flow and solute transport coupled with relevant biodegradation kinetics. A finite difference model was developed to simulate the process of in-situ bioremediation using an Alternate-Direction Implicit technique. This developed model (BIOFDM) yields the spatial and temporal distribution of contaminant concentration for predefined initial and boundary conditions. BIOFDM was later validated by comparing the simulated results with those obtained using BIOPLUME III for the case study of Shieh and Peralta (2005). The results were found to be in close agreement. Moreover, since the solution of the highly nonlinear equation otherwise requires significant computational effort, the computational burden in this study was managed within a practical time frame by replacing the BIOFDM model with a trained SVM model. Support Vector Machine which generates fast solutions in real time was considered to be a universal function approximator in the study. Apart from reducing the computational burden, this technique generates a set of near optimal solutions (instead of a single optimal solution) and creates a re-usable data base that could be used to address many other management problems. Besides this, the search for an optimal pumping pattern was directed by a simple PSO technique and a penalty parameter approach was adopted
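The PSO search driving the pumping-pattern optimization can be sketched as follows. The cost function here is a simple quadratic placeholder, not the SVM surrogate of the bioremediation model, and all coefficients are conventional illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(42)

def cost(x):
    # Placeholder for the trained SVM surrogate of remediation cost;
    # minimum is at (2, -1) by construction.
    return np.sum((x - np.array([2.0, -1.0])) ** 2, axis=1)

n_particles, n_iter = 30, 100
pos = rng.uniform(-5, 5, size=(n_particles, 2))   # e.g. two pumping rates
vel = np.zeros_like(pos)
pbest = pos.copy()
pbest_val = cost(pos)
gbest = pbest[np.argmin(pbest_val)].copy()

w, c1, c2 = 0.7, 1.5, 1.5   # inertia and acceleration coefficients
for _ in range(n_iter):
    r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = pos + vel
    val = cost(pos)
    improved = val < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], val[improved]
    gbest = pbest[np.argmin(pbest_val)].copy()

print(gbest.round(3))   # swarm best, near the optimum
```

Because each cost evaluation calls only the fast surrogate instead of the finite-difference simulator, the swarm can afford thousands of evaluations, which is precisely the computational saving the study reports.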
Deterministic behavioural models for concurrency
DEFF Research Database (Denmark)
Sassone, Vladimiro; Nielsen, Mogens; Winskel, Glynn
1993-01-01
This paper offers three candidates for a deterministic, noninterleaving, behaviour model which generalizes Hoare traces to the noninterleaving situation. The three models are all proved equivalent in the rather strong sense of being equivalent as categories. The models are: deterministic labelled...... event structures, generalized trace languages in which the independence relation is context-dependent, and deterministic languages of pomsets....
Optimal interference code based on machine learning
Qian, Ye; Chen, Qian; Hu, Xiaobo; Cao, Ercong; Qian, Weixian; Gu, Guohua
2016-10-01
In this paper, we analyze the characteristics of pseudo-random codes, taking the m-sequence as a case study. Based on coding theory, we introduce jamming methods and simulate the interference effect and probability model in MATLAB. From the length of decoding time the adversary spends, we find the optimal formula and optimal coefficients based on machine learning, and then obtain a new optimal interference code. First, in the recognition phase, this study judges the effect of interference by simulating the decoding time of the laser seeker. Then, we use laser active deception jamming to simulate the interference process in the tracking phase. In order to improve the interference performance, this paper simulates the model in MATLAB. We find the least number of pulse intervals that must be received, from which we can determine the precise interval number of the laser pointer for m-sequence encoding. In order to find the shortest interval, we choose the greatest-common-divisor method. Then, combining this with the coding regularity found before, we restore the pulse intervals of the pseudo-random code that has already been received. Finally, we can control the time period of laser interference, obtain the optimal interference code, and increase the probability of interference.
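The m-sequence family analyzed above is generated by a linear feedback shift register. A minimal sketch, using the primitive polynomial x⁴ + x³ + 1 (a conventional example, not necessarily the paper's code): a degree-4 register yields a maximal-length sequence of period 2⁴ − 1 = 15 containing eight 1s and seven 0s.

```python
def m_sequence(taps, seed, length):
    # Fibonacci-style LFSR: output the last register bit, XOR the
    # tapped bits, and shift the feedback in at the front.
    state = list(seed)
    out = []
    for _ in range(length):
        out.append(state[-1])
        fb = 0
        for t in taps:
            fb ^= state[t - 1]
        state = [fb] + state[:-1]
    return out

# Taps (4, 3) correspond to x^4 + x^3 + 1; any nonzero seed works.
seq = m_sequence(taps=(4, 3), seed=(1, 0, 0, 0), length=15)
print(seq)
```

The near-balance of 1s and 0s and the flat autocorrelation of such sequences are what make their pulse intervals hard to predict, and hence what the interference analysis must reconstruct.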
Vu, Duy-Duc; Monies, Frédéric; Rubio, Walter
2018-05-01
A large number of studies, based on 3-axis end milling of free-form surfaces, seek to optimize tool path planning. These approaches try to optimize the machining time by reducing the total tool path length while respecting the criterion of maximum scallop height. Theoretically, the tool path trajectories that remove the most material follow the directions in which the machined width is the largest. The free-form surface is often considered as a single machining area, so optimization over the entire surface is limited: it is difficult to define tool trajectories with optimal feed directions that generate the largest machined widths. Another point limiting the ability of previous approaches to effectively reduce machining time is the inadequate choice of tool: researchers generally use a spherical tool on the entire surface, and the gains proposed by the methods developed with these tools lead to relatively small time savings. Therefore, this study proposes a new method, using toroidal milling tools, for generating toolpaths in different regions of the machined surface. The surface is divided into several regions based on machining intervals. These intervals ensure that the effective radius of the tool, at each cutter-contact point on the surface, is always greater than the radius of the tool in an optimized feed direction. A parallel-plane strategy is then used on the sub-surfaces with an optimal specific feed direction for each sub-surface. This method allows one to mill the entire surface with greater efficiency than with a spherical tool. The proposed method is calculated and modeled using Maple software to find the optimal regions and feed directions in each region. The new method is tested on a free-form surface, and a comparison is made with a spherical cutter to show the significant gains obtained with a toroidal milling cutter. Comparisons with CAM software and experimental validations are also done. The results show the
Risk-based and deterministic regulation
International Nuclear Information System (INIS)
Fischer, L.E.; Brown, N.W.
1995-07-01
Both risk-based and deterministic methods are used for regulating the nuclear industry to protect public safety and health from undue risk. The deterministic method is one where performance standards are specified for each kind of nuclear system or facility. The deterministic performance standards address normal operations and design basis events, which include transient and accident conditions. The risk-based method uses probabilistic risk assessment methods to supplement the deterministic one by (1) addressing all possible events (including those beyond the design basis events), (2) using a systematic, logical process for identifying and evaluating accidents, and (3) considering alternative means to reduce accident frequency and/or consequences. Although both deterministic and risk-based methods have been successfully applied, there is a need for a better understanding of their applications and supportive roles. This paper describes the relationship between the two methods and how they are used to develop and assess regulations in the nuclear industry. Preliminary guidance is suggested for determining the need for using risk-based methods to supplement deterministic ones. However, it is recommended that more detailed guidance and criteria be developed for this purpose.
Auto-SEIA: simultaneous optimization of image processing and machine learning algorithms
Negro Maggio, Valentina; Iocchi, Luca
2015-02-01
Object classification from images is an important task for machine vision and it is a crucial ingredient for many computer vision applications, ranging from security and surveillance to marketing. Image based object classification techniques properly integrate image processing and machine learning (i.e., classification) procedures. In this paper we present a system for automatic simultaneous optimization of algorithms and parameters for object classification from images. More specifically, the proposed system is able to process a dataset of labelled images and to return a best configuration of image processing and classification algorithms and of their parameters with respect to the accuracy of classification. Experiments with real public datasets are used to demonstrate the effectiveness of the developed system.
Optimizing Distributed Machine Learning for Large Scale EEG Data Set
Directory of Open Access Journals (Sweden)
M Bilal Shaikh
2017-06-01
Full Text Available Distributed machine learning (DML) has gained more importance than ever in this era of Big Data. There are many challenges in scaling machine learning techniques on distributed platforms. When it comes to scalability, improving processor technology for high-level computation of data is at its limit; however, increasing the number of machine nodes and distributing data along with computation looks like a viable solution. Different frameworks and platforms are available to solve DML problems. These platforms provide automated random data distribution of datasets, which misses the power of user-defined intelligent data partitioning based on domain knowledge. We have conducted an empirical study which uses an EEG data set collected through the P300 Speller component of an ERP (event-related potential), which is widely used in BCI problems; it helps in translating the intention of a subject while performing any cognitive task. EEG data contain noise due to waves generated by other activities in the brain, which contaminates the true P300 Speller signal. Use of machine learning techniques could help in detecting errors made by the P300 Speller. We solve this classification problem by partitioning data into different chunks and preparing distributed models using an elastic CV classifier. To present a case for optimizing distributed machine learning, we propose an intelligent user-defined data partitioning approach that could impact the accuracy of distributed machine learners on average. Our results show a better average AUC as compared to the average AUC obtained after applying random data partitioning, which gives the user no control over data partitioning. It improves the average accuracy of the distributed learner due to the domain-specific intelligent partitioning by the user. Our customized approach achieves 0.66 AUC on individual sessions and 0.75 AUC on mixed sessions, whereas random/uncontrolled data distribution records 0.63 AUC.
Directory of Open Access Journals (Sweden)
Yanjun Zhang
2015-01-01
Full Text Available A new optimized extreme learning machine (ELM) based method for power system transient stability prediction (TSP) using synchrophasors is presented in this paper. First, the input features symbolizing the transient stability of power systems are extracted from synchronized measurements. Then, an ELM classifier is employed to build the TSP model. Finally, the parameters of the model are optimized by using the improved particle swarm optimization (IPSO) algorithm. The novelty of the proposal lies in the fact that it improves the prediction performance of the ELM-based TSP model by using IPSO to optimize the parameters of the model with synchrophasors. Finally, based on test results on both the IEEE 39-bus system and a large-scale real power system, the correctness and validity of the presented approach are verified.
Zeng, Xueqiang; Luo, Gang
2017-12-01
Machine learning is broadly used for clinical data analysis. Before training a model, a machine learning algorithm must be selected. Also, the values of one or more model parameters termed hyper-parameters must be set. Selecting algorithms and hyper-parameter values requires advanced machine learning knowledge and many labor-intensive manual iterations. To lower the bar to machine learning, miscellaneous automatic selection methods for algorithms and/or hyper-parameter values have been proposed. Existing automatic selection methods are inefficient on large data sets. This poses a challenge for using machine learning in the clinical big data era. To address the challenge, this paper presents progressive sampling-based Bayesian optimization, an efficient and automatic selection method for both algorithms and hyper-parameter values. We report an implementation of the method. We show that compared to a state of the art automatic selection method, our method can significantly reduce search time, classification error rate, and standard deviation of error rate due to randomization. This is major progress towards enabling fast turnaround in identifying high-quality solutions required by many machine learning-based clinical data analysis tasks.
Analysis and optimization of machining parameters of laser cutting for polypropylene composite
Deepa, A.; Padmanabhan, K.; Kuppan, P.
2017-11-01
The present work describes the machining of a self-reinforced polypropylene composite fabricated using the hot compaction method. The objective of the experiment is to find optimum machining parameters for polypropylene (PP). Laser power and machining speed were the parameters considered, with the tensile test and flexure test as responses. The Taguchi method is used for experimentation, Grey Relational Analysis (GRA) for multiple process parameter optimization, and ANOVA (Analysis of Variance) to find the impact of each process parameter. Polypropylene has great application in various fields: it is used in the form of foam in model aircraft and other radio-controlled vehicles, as thin sheets (∼2-20 μm) used as a dielectric, in piping systems, and in hernia and pelvic organ repair or to protect against new hernias in the same location.
Topology optimization under stochastic stiffness
Asadpoure, Alireza
Topology optimization is a systematic computational tool for optimizing the layout of materials within a domain for engineering design problems. It allows variation of structural boundaries and connectivities. This freedom in the design space often enables discovery of new, high performance designs. However, solutions obtained by performing the optimization in a deterministic setting may be impractical or suboptimal when considering real-world engineering conditions with inherent variabilities including (for example) variabilities in fabrication processes and operating conditions. The aim of this work is to provide a computational methodology for topology optimization in the presence of uncertainties associated with structural stiffness, such as uncertain material properties and/or structural geometry. Existing methods for topology optimization under deterministic conditions are first reviewed. Modifications are then proposed to improve the numerical performance of the so-called Heaviside Projection Method (HPM) in continuum domains. Next, two approaches, perturbation and Polynomial Chaos Expansion (PCE), are proposed to account for uncertainties in the optimization procedure. These approaches are intrusive, allowing tight and efficient coupling of the uncertainty quantification with the optimization sensitivity analysis. The work herein develops a robust topology optimization framework aimed at reducing the sensitivity of optimized solutions to uncertainties. The perturbation-based approach combines deterministic topology optimization with a perturbation method for the quantification of uncertainties. The use of perturbation transforms the problem of topology optimization under uncertainty to an augmented deterministic topology optimization problem. The PCE approach combines the spectral stochastic approach for the representation and propagation of uncertainties with an existing deterministic topology optimization technique. The resulting compact representations
Pseudo-deterministic Algorithms
Goldwasser, Shafi
2012-01-01
International audience; In this talk we describe a new type of probabilistic algorithm which we call Bellagio Algorithms: a randomized algorithm which is guaranteed to run in expected polynomial time, and to produce a correct and unique solution with high probability. These algorithms are pseudo-deterministic: they cannot be distinguished from deterministic algorithms in polynomial time by a probabilistic polynomial-time observer with black-box access to the algorithm. We show a necessary an...
International Nuclear Information System (INIS)
Dobler, Barbara; Pohl, Fabian; Bogner, Ludwig; Koelbl, Oliver
2007-01-01
To evaluate the effects of direct machine parameter optimization in the treatment planning of intensity-modulated radiation therapy (IMRT) for hypopharyngeal cancer, as compared to subsequent leaf sequencing, in Oncentra Masterplan v1.5. For 10 hypopharyngeal cancer patients, IMRT plans were generated in Oncentra Masterplan v1.5 (Nucletron BV, Veenendal, the Netherlands) for a Siemens Primus linear accelerator. For optimization, the dose volume objectives (DVO) for the planning target volume (PTV) were set to 53 Gy minimum dose and 59 Gy maximum dose, in order to reach a dose of 56 Gy as the average of the PTV. For the parotids a median dose of 22 Gy was allowed, and for the spinal cord a maximum dose of 35 Gy. The maximum DVO for the external contour of the patient was set to 59 Gy. The treatment plans were optimized with the direct machine parameter optimization ('Direct Step & Shoot', DSS, RaySearch Laboratories, Sweden) newly implemented in Masterplan v1.5 and with the fluence modulation technique ('Intensity Modulation', IM), which had already been available in previous versions of Masterplan. The two techniques were compared with regard to compliance with the DVO, plan quality, and number of monitor units (MU) required per fraction dose. The plans optimized with the DSS technique met the DVO for the PTV significantly better than the plans optimized with IM (p = 0.007 for the min DVO and p < 0.0005 for the max DVO). No significant difference could be observed for compliance with the DVO for the organs at risk (OAR) (p > 0.05). Plan quality, target coverage and dose homogeneity inside the PTV were superior for the plans optimized with DSS, for similar dose to the spinal cord and lower dose to the normal tissue. The mean dose to the parotids was lower for the plans optimized with IM. Treatment plan efficiency was higher for the DSS plans with (901 ± 160) MU compared to (1151 ± 157) MU for IM (p-value < 0.05). Renormalization of the IM plans to the mean of the
Shamir, Reuben R; Dolber, Trygve; Noecker, Angela M; Walter, Benjamin L; McIntyre, Cameron C
2015-01-01
Deep brain stimulation (DBS) of the subthalamic region is an established therapy for advanced Parkinson's disease (PD). However, patients often require time-intensive post-operative management to balance their coupled stimulation and medication treatments. Given the large and complex parameter space associated with this task, we propose that clinical decision support systems (CDSS) based on machine learning algorithms could assist in treatment optimization. Develop a proof-of-concept implementation of a CDSS that incorporates patient-specific details on both stimulation and medication. Clinical data from 10 patients, and 89 post-DBS surgery visits, were used to create a prototype CDSS. The system was designed to provide three key functions: (1) information retrieval; (2) visualization of treatment, and; (3) recommendation on expected effective stimulation and drug dosages, based on three machine learning methods that included support vector machines, Naïve Bayes, and random forest. Measures of medication dosages, time factors, and symptom-specific pre-operative response to levodopa were significantly correlated with post-operative outcomes (P < 0.05) and their effect on outcomes was of similar magnitude to that of DBS. Using those results, the combined machine learning algorithms were able to accurately predict 86% (12/14) of the motor improvement scores at one year after surgery. Using patient-specific details, an appropriately parameterized CDSS could help select theoretically optimal DBS parameter settings and medication dosages that have potential to improve the clinical management of PD patients. Copyright © 2015 Elsevier Inc. All rights reserved.
Performance optimization of a CNC machine through exploration of the timed state space
Mota, M.A. Mujica; Piera, Miquel Angel
2010-01-01
Flexible production units provide very efficient mechanisms to adapt the type and rate of production according to fluctuations in demand. Finding the optimal sequence of the different manufacturing tasks in each machine is a challenging problem that can yield important productivity benefits.
Energy Technology Data Exchange (ETDEWEB)
Sahin, M.Oe.; Kruecker, D.; Melzer-Pellmann, I.A.
2016-01-15
In this paper we promote the use of Support Vector Machines (SVM) as a machine learning tool for searches in high-energy physics. As an example for a new-physics search we discuss the popular case of Supersymmetry at the Large Hadron Collider. We demonstrate that the SVM is a valuable tool and show that an automated discovery-significance based optimization of the SVM hyper-parameters is a highly efficient way to prepare an SVM for such applications. A new C++ LIBSVM interface called SVM-HINT is developed and available on Github.
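The discovery-significance criterion mentioned above can be sketched as a search that maximizes a simple s/sqrt(s+b) figure of merit over a hyper-parameter grid; the `classify` callback and the significance estimate are illustrative assumptions, not the SVM-HINT interface.

```python
import math

def discovery_significance(s, b):
    """Approximate discovery significance s / sqrt(s + b) for expected
    signal count s and background count b (a common simple estimate;
    the actual tool may use a likelihood-based formula)."""
    return s / math.sqrt(s + b) if s + b > 0 else 0.0

def best_hyperparams(grid, classify):
    """Return the hyper-parameter point whose event selection maximizes
    significance. `classify(p)` is a hypothetical callback returning
    (signal_kept, background_kept) after training/applying the SVM."""
    return max(grid, key=lambda p: discovery_significance(*classify(p)))
```

The point of optimizing significance rather than raw classification accuracy is that, in a search, a selection keeping fewer signal events but far less background can be the better discovery strategy.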
International Nuclear Information System (INIS)
Richei, A.; Hauptmanns, U.; Unger, H.
2001-01-01
A new procedure allowing the probabilistic evaluation and optimization of the man-machine system is presented. This procedure and the resulting expert system HEROS, an acronym for Human Error Rate Assessment and Optimizing System, are based on fuzzy set theory. Most of the well-known procedures employed for the probabilistic evaluation of human factors involve the use of vague linguistic statements on performance shaping factors to select and to modify basic human error probabilities from the associated databases. This implies a large portion of subjectivity. Vague statements are expressed here in terms of fuzzy numbers or intervals, which allow mathematical operations to be performed on them. A model of the man-machine system forms the basis of the procedure. A fuzzy rule-based expert system was derived from ergonomic and psychological studies. Hence, it does not rely on a database whose transferability to situations different from its origin is questionable. In this way, subjective elements are eliminated to a large extent. HEROS facilitates the importance analysis for the evaluation of human factors, which is necessary for optimizing the man-machine system. HEROS is applied to the analysis of a simple diagnosis task of the operating personnel in a nuclear power plant.
Optimized extreme learning machine for urban land cover classification using hyperspectral imagery
Su, Hongjun; Tian, Shufang; Cai, Yue; Sheng, Yehua; Chen, Chen; Najafian, Maryam
2017-12-01
This work presents a new urban land cover classification framework using the firefly algorithm (FA) optimized extreme learning machine (ELM). FA is adopted to optimize the regularization coefficient C and Gaussian kernel σ for kernel ELM. Additionally, the effectiveness of spectral features derived from an FA-based band selection algorithm is studied for the proposed classification task. Three hyperspectral data sets were recorded using different sensors, namely HYDICE, HyMap, and AVIRIS. Our study shows that the proposed method outperforms traditional classification algorithms such as SVM and reduces computational cost significantly.
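The core of an ELM is a random hidden layer whose output weights are obtained in closed form; a minimal regression-style sketch is given below. The firefly-optimized regularization coefficient and kernel width from the paper are not modeled here, and the hidden-layer size and seed are arbitrary assumptions.

```python
import numpy as np

def elm_train(X, y, n_hidden=20, seed=0):
    """Minimal extreme learning machine: a fixed random hidden layer,
    then closed-form output weights via the pseudo-inverse."""
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(X.shape[1], n_hidden))  # random input weights
    b = rng.normal(size=n_hidden)                # random biases
    H = np.tanh(X @ W + b)                       # hidden activations
    beta = np.linalg.pinv(H) @ y                 # least-squares output weights
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta
```

In the kernel variant used in the paper, FA would instead search over the regularization coefficient C and the Gaussian kernel width σ rather than fixing the hidden layer as above.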
International Nuclear Information System (INIS)
El-Berry, A.; El-Berry, A.; Al-Bossly, A.
2010-01-01
In machining operations, the quality of the surface finish is an important requirement for many workpieces. It is therefore very important to optimize the cutting parameters in order to control the required manufacturing quality. The surface roughness parameter (Ra) of mechanical parts depends on the turning parameters during the turning process. In the development of predictive models, the cutting parameters feed, cutting speed, and depth of cut are considered as model variables. For this purpose, this study focuses on comparing various machining experiments performed on a CNC vertical machining center with aluminum 6061 workpieces. Multiple regression models are used to predict the surface roughness in the different experiments.
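A predictive model of the kind described, with Ra as a linear function of feed, cutting speed, and depth of cut, can be fitted by ordinary least squares; the linear form and the variable names are assumptions for illustration, since the study does not specify its exact regression form here.

```python
import numpy as np

def fit_roughness_model(feed, speed, depth, ra):
    """Multiple linear regression Ra ~ c0 + c1*feed + c2*speed + c3*depth.
    Inputs are 1-D arrays of equal length; returns the coefficient vector."""
    A = np.column_stack([np.ones_like(feed), feed, speed, depth])
    coeffs, *_ = np.linalg.lstsq(A, ra, rcond=None)
    return coeffs
```

With the coefficients in hand, predicted Ra for a new parameter combination is just the dot product of the coefficient vector with (1, feed, speed, depth).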
Realization of universal optimal quantum machines by projective operators and stochastic maps
International Nuclear Information System (INIS)
Sciarrino, F.; Sias, C.; Ricci, M.; De Martini, F.
2004-01-01
Optimal quantum machines can be implemented by linear projective operations. In the present work a general qubit symmetrization theory is presented by investigating the close links to the qubit purification process and to the programmable teleportation of any generic optimal antiunitary map. In addition, the contextual realization of the N→M cloning map and of the teleportation of the N→(M-N) universal-NOT (UNOT) gate is analyzed by a very general angular momentum theory. An extended set of experimental realizations by state symmetrization linear optical procedures is reported. These include the 1→2 cloning process, the UNOT gate and the quantum tomographic characterization of the optimal partial transpose map of polarization encoded qubits
Directory of Open Access Journals (Sweden)
Ravi Pratap Singh
2018-01-01
Full Text Available This article addresses the application of grey-based fuzzy logic coupled with Taguchi's approach for optimization of multiple performance characteristics in ultrasonic machining of WC-Co composite material. The Taguchi L-36 array has been employed to conduct the experimentation and to observe the influence of different process variables (power rating, cobalt content, tool geometry, thickness of workpiece, tool material, abrasive grit size) on machining characteristics. A grey relational fuzzy grade has been computed by converting the multiple responses, i.e., material removal rate and tool wear rate obtained from Taguchi's approach, into a single performance characteristic using grey-based fuzzy logic. In addition, analysis of variance (ANOVA) has also been performed with a view to identifying the significant parameters. Results revealed grit size and power rating as the leading parameters for optimization of the multiple performance characteristics. From the microstructure analysis, the mode of material deformation has been observed and the critical parameters (i.e., work material properties, grit size, and power rating) for the deformation mode have been established.
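The grey relational grade computation can be sketched as follows: each response is normalized (larger-the-better or smaller-the-better), deviations from the ideal are converted to grey relational coefficients with distinguishing coefficient ζ, and the coefficients are averaged into one grade per experiment. Uniform response weights are an assumption here, and the fuzzy-logic coupling of the article is omitted.

```python
def grey_relational_grade(responses, larger_better, zeta=0.5):
    """responses: list of experiments, each a tuple of response values.
    larger_better: per-response flags (True for e.g. removal rate,
    False for e.g. tool wear rate). Returns one grade per experiment."""
    n_resp = len(responses[0])
    norm = []
    for j in range(n_resp):
        col = [r[j] for r in responses]
        lo, hi = min(col), max(col)
        if larger_better[j]:
            norm.append([(v - lo) / (hi - lo) for v in col])
        else:
            norm.append([(hi - v) / (hi - lo) for v in col])
    grades = []
    for i in range(len(responses)):
        # deviation from ideal is 1 - norm; delta_min = 0, delta_max = 1
        coeffs = [zeta / ((1 - norm[j][i]) + zeta) for j in range(n_resp)]
        grades.append(sum(coeffs) / n_resp)
    return grades
```

The experiment with the highest grade is then taken as the best compromise across all responses, which is how the multi-response problem collapses to a single-objective Taguchi analysis.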
Topological and sizing optimization of reinforced ribs for a machining centre
Chen, T. Y.; Wang, C. B.
2008-01-01
The topology optimization technique is applied to improve rib designs of a machining centre. The ribs of the original design are eliminated and new ribs are generated by topology optimization in the same 3D design space containing the original ribs. Two-dimensional plate elements are used to replace the optimum rib topologies formed by 3D rectangular elements. After topology optimization, sizing optimization is used to determine the optimum thicknesses of the ribs. When forming the optimum design problem, multiple configurations of the structure are considered simultaneously. The objective is to minimize rib weight. Static constraints confine displacements of the cutting tool and the workpiece due to cutting forces and the heat generated by spindle bearings. The dynamic constraint requires the fundamental natural frequency of the structure to be greater than a given value in order to reduce dynamic deflection. Compared with the original design, the improvement resulting from this approach is significant.
Optimization of the man-machine interface for LMFBRs
International Nuclear Information System (INIS)
Seeman, S.E.; Colley, R.W.; Stratton, R.C.
1982-01-01
An effort is underway to optimize the roles of man and machine in control of Liquid Metal Fast Breeder Reactors. The work reported on here describes two systems that have been developed. The first of these, MIDAS, is a large data base system developed for use at FFTF as an aid to operators in determining how to proceed with maintenance and repairs to be carried out on plant components. This system is presently in use at FFTF. The second system, the Procedure Prompting System, is a system being developed to demonstrate a new methodology for automatically generating off-normal plant recovery instructions. The methodology for this system has been demonstrated on a model of a small subsystem of FFTF
Machine learning paradigms in design optimization: Applications in turbine aerodynamic design
Goel, Sanjay
Mechanisms of incorporating machine learning paradigms in design optimization have been investigated in the current research. The primary focus of the work is on machine learning algorithms which use computational models that are analogous to the hypothesized principles of natural or biological learning. Examples from structural and aerodynamic optimization have been used to demonstrate the potential of the proposed schemes. The first strategy examined in the current work seeks to improve the convergence of optimization problems by pruning the search space of weak variables. Such variables are identified by learning from a database of existing designs using neural networks. By using clustering techniques, different sets of weak variables are identified in different regions of the design space. Parameter sensitivity information obtained in the process of identifying weak variables provides accurate heuristics for formulating design rules. The impact of this methodology on obtaining converged designs has been investigated for a turbine design problem. Optimization results from a three-stage power turbine and an aircraft engine turbine are presented in this thesis. The second scheme is an evolutionary design optimization technique which gets progressively 'smarter' during the optimization process by learning from computed domain knowledge. This technique employs adaptive learning mechanisms (classifiers) which recognize the influence of the design variables on the problem solution and then generalize them to dynamically create or change design rules during optimization. This technique, when applied to a constrained optimization problem, shows progressive improvement in convergence of search, as successive generations of rules evolve by learning from the environment. To investigate this methodology, a truss optimization problem is solved with an objective of minimizing the truss weight subject to stress constraints in the truss members. A distinct convergent trend is
Electrical Discharge Platinum Machining Optimization Using Stefan Problem Solutions
Directory of Open Access Journals (Sweden)
I. B. Stavitskiy
2015-01-01
Full Text Available The article presents the results of a theoretical study of the machinability of platinum by electrical discharge machining (EDM), based on the solution of the thermal problem with a moving phase-change boundary, i.e. the Stefan problem. The solution makes it possible to determine the surface melt penetration of the material under a given heat flow, as a function of the duration of its action and the physical properties of the processed material. To determine rational EDM operating conditions for platinum, the article suggests relating its machinability to that of materials for which rational EDM operating conditions are already defined. It is shown that at the low heat flow densities corresponding to finishing EDM conditions, the processing conditions used for steel 45 are appropriate for platinum machining; for EDM at higher heat flow densities (e.g. 50 GW/m2), copper processing conditions are used for this purpose; and at the high heat flow densities corresponding to heavy roughing EDM, it is reasonable to use tungsten processing conditions. The article also shows how the minimum width of the current pulses at which platinum starts melting, and at which the EDM process accordingly becomes possible, depends on the heat flow density. It is shown that the processing of platinum is expedient at a pulse width corresponding to values called the effective pulse width. Exceeding these values does not lead to a substantial increase in material removal per pulse, but considerably reduces the maximum repetition rate and, therefore, the EDM capacity. The paper presents the effective pulse width versus the heat flow density, as well as the dependences of the maximum platinum surface melt penetration and the corresponding pulse width on the heat flow density. Results obtained using solutions of the Stefan heat problem can be used to optimize EDM operating conditions for platinum machining.
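The scaling behind these results can be illustrated with a crude energy-balance estimate of melt depth under a constant heat flux. This ignores sensible heating and conduction into the bulk, so it is only an upper-bound sketch, not the article's full Stefan-problem solution; the platinum property values in the usage below are round illustrative figures.

```python
def melt_depth(q, t, rho, latent_heat):
    """Upper-bound melt front depth if all incident energy q*t (per unit
    area) went into melting material of density rho with the given latent
    heat of fusion. SI units throughout: q in W/m^2, t in s, rho in
    kg/m^3, latent_heat in J/kg; returns depth in m."""
    return q * t / (rho * latent_heat)
```

For example, a 50 GW/m2 flux applied for 1 µs with rho ≈ 21450 kg/m3 and latent heat ≈ 1.0e5 J/kg would melt at most a few tens of micrometres of platinum; the actual Stefan solution is smaller because part of the energy heats material below the melt front.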
Thiruvenkadam, T; Karthikeyani, V
2014-01-01
Mapping virtual machines to a cluster of physical machines is called VM placement. Placing each VM on an appropriate host is necessary to ensure effective resource utilization and to minimize datacenter cost as well as power. Here we present an efficient hybrid genetic-based, host-load-aware algorithm for scheduling and optimization of virtual machines in a cluster of physical hosts. We developed the algorithm based on two different methods; first, initial VM packing is done by...
Materials and optimized designs for human-machine interfaces via epidermal electronics.
Jeong, Jae-Woong; Yeo, Woon-Hong; Akhtar, Aadeel; Norton, James J S; Kwack, Young-Jin; Li, Shuo; Jung, Sung-Young; Su, Yewang; Lee, Woosik; Xia, Jing; Cheng, Huanyu; Huang, Yonggang; Choi, Woon-Seop; Bretl, Timothy; Rogers, John A
2013-12-17
Thin, soft, and elastic electronics with physical properties well matched to the epidermis can be conformally and robustly integrated with the skin. Materials and optimized designs for such devices are presented for surface electromyography (sEMG). The findings enable sEMG from wide ranging areas of the body. The measurements have quality sufficient for advanced forms of human-machine interface. © 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
A novel hybrid genetic algorithm for optimal design of IPM machines for electric vehicle
Wang, Aimeng; Guo, Jiayu
2017-12-01
A novel hybrid genetic algorithm (HGA) is proposed to optimize the rotor structure of an IPM machine used in an EV application. The finite element (FE) simulation results of the HGA design are compared with the genetic algorithm (GA) design and with the design before optimization. It is shown that the performance of the IPMSM is effectively improved by employing the GA and, especially, the HGA. Moreover, higher flux-weakening capability and lower magnet usage are also obtained. Therefore, the validity of the HGA method in IPMSM design optimization is verified.
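The hybrid idea, a GA augmented with a local refinement step, can be sketched on a one-dimensional objective. The operators, rates, and local-search rule below are illustrative assumptions and far simpler than the rotor-geometry encoding of the paper.

```python
import random

def hybrid_ga(f, bounds, pop_size=20, gens=40, seed=1):
    """Toy hybrid GA minimizing f on an interval: truncation selection,
    averaging crossover with Gaussian mutation, plus a greedy local-search
    step on the current best each generation (the 'hybrid' ingredient)."""
    rng = random.Random(seed)
    lo, hi = bounds
    pop = [rng.uniform(lo, hi) for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=f)
        elite = pop[: pop_size // 2]          # keep the better half
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = rng.sample(elite, 2)
            child = (a + b) / 2 + rng.gauss(0, 0.1 * (hi - lo))
            children.append(min(hi, max(lo, child)))
        pop = elite + children
        best = pop[0]
        for step in (0.01, -0.01):            # greedy local refinement
            cand = min(hi, max(lo, best + step * (hi - lo)))
            if f(cand) < f(best):
                pop[0] = cand
                best = cand
    return min(pop, key=f)
```

The local-search step is what distinguishes a hybrid GA from a plain GA: the global operators explore the design space while the refinement polishes the incumbent solution.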
A deterministic algorithm for fitting a step function to a weighted point-set
Fournier, Hervé
2013-01-01
Given a set of n points in the plane, each point having a positive weight, and an integer k>0, we present an optimal O(n log n)-time deterministic algorithm to compute a step function with k steps that minimizes the maximum weighted vertical distance.
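The problem can be solved approximately with a feasibility test plus binary search: for a trial error eps, a greedy left-to-right sweep counts how many steps are needed so that every point i satisfies w_i·|y_i − s| ≤ eps, and eps is then bisected. This numeric sketch only illustrates the problem; it is not the paper's optimal O(n log n) deterministic algorithm.

```python
def steps_needed(points, eps):
    """Greedy step count: points are (x, y, w) sorted by x; each point
    allows step values in [y - eps/w, y + eps/w], and a new step starts
    whenever the running interval intersection becomes empty."""
    count, lo, hi = 0, float("-inf"), float("inf")
    for _, y, w in points:
        a, b = y - eps / w, y + eps / w
        lo, hi = max(lo, a), min(hi, b)
        if lo > hi:          # intersection empty: open a new step
            count += 1
            lo, hi = a, b
    return count + 1

def min_error(points, k, iters=60):
    """Numeric binary search on eps for the smallest feasible error."""
    points = sorted(points)
    lo, hi = 0.0, max(w * abs(y) for _, y, w in points) * 2 + 1
    for _ in range(iters):
        mid = (lo + hi) / 2
        if steps_needed(points, mid) <= k:
            hi = mid
        else:
            lo = mid
    return hi
```

The optimal algorithm avoids the numeric search by observing that the optimum error is one of finitely many candidate values determined by point pairs, which it searches deterministically.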
Safety margins in deterministic safety analysis
International Nuclear Information System (INIS)
Viktorov, A.
2011-01-01
The concept of safety margins has acquired certain prominence in the attempts to demonstrate quantitatively the level of the nuclear power plant safety by means of deterministic analysis, especially when considering impacts from plant ageing and discovery issues. A number of international or industry publications exist that discuss various applications and interpretations of safety margins. The objective of this presentation is to bring together and examine in some detail, from the regulatory point of view, the safety margins that relate to deterministic safety analysis. In this paper, definitions of various safety margins are presented and discussed along with the regulatory expectations for them. Interrelationships of analysis input and output parameters with corresponding limits are explored. It is shown that the overall safety margin is composed of several components each having different origins and potential uses; in particular, margins associated with analysis output parameters are contrasted with margins linked to the analysis input. While these are separate, it is possible to influence output margins through the analysis input, and analysis method. Preserving safety margins is tantamount to maintaining safety. At the same time, efficiency of operation requires optimization of safety margins taking into account various technical and regulatory considerations. For this, basic definitions and rules for safety margins must be first established. (author)
Integrated Deterministic-Probabilistic Safety Assessment Methodologies
Energy Technology Data Exchange (ETDEWEB)
Kudinov, P.; Vorobyev, Y.; Sanchez-Perea, M.; Queral, C.; Jimenez Varas, G.; Rebollo, M. J.; Mena, L.; Gomez-Magin, J.
2014-02-01
IDPSA (Integrated Deterministic-Probabilistic Safety Assessment) is a family of methods which use tightly coupled probabilistic and deterministic approaches to address respective sources of uncertainties, enabling Risk informed decision making in a consistent manner. The starting point of the IDPSA framework is that safety justification must be based on the coupling of deterministic (consequences) and probabilistic (frequency) considerations to address the mutual interactions between stochastic disturbances (e.g. failures of the equipment, human actions, stochastic physical phenomena) and deterministic response of the plant (i.e. transients). This paper gives a general overview of some IDPSA methods as well as some possible applications to PWR safety analyses. (Author)
Russenschuck, Stephan
1999-01-01
The ROXIE software program package has been developed for the design of the superconducting magnets for the LHC at CERN. The software is used as an approach towards the integrated design of superconducting magnets including feature-based coil geometry creation, conceptual design using genetic algorithms, optimization of the coil and iron cross-sections using a reduced vector-potential formulation, 3-D coil end geometry and field optimization using deterministic vector- optimization techniques, tolerance analysis, production of drawings by means of a DXF interface, end-spacer design with interfaces to CAD-CAM for the CNC machining of these pieces, and the tracing of manufacturing errors using field quality measurements. This paper gives an overview of the methods applied in the ROXIE program. (9 refs).
Jusoh, L. I.; Sulaiman, E.; Bahrim, F. S.; Kumar, R.
2017-08-01
Recent advancements have led to the development of flux switching machines (FSMs) with flux sources within the stator. The advantage of being a single-piece machine with a robust rotor structure makes the FSM an excellent choice for high-speed applications. There are three categories of FSM, namely the permanent magnet (PM) FSM, the field excitation (FE) FSM, and the hybrid excitation (HE) FSM. The PMFSM and the FEFSM have a PM and a field excitation coil (FEC), respectively, as their key flux sources, while, as the name suggests, the HEFSM has a combination of PMs and FECs as its flux sources. The PMFSM is a simple and cheap machine with the ability to control variable flux, which makes it suitable for an electric bicycle. Thus, this paper presents a design comparison between an inner-rotor and an outer-rotor single-phase permanent magnet flux switching machine with 8S-10P, designed specifically for an electric bicycle. The performance of the machine was validated using 2D-FEA. In conclusion, the outer-rotor design produces much higher torque, approximately 54.2% more than the inner-rotor PMFSM. From the comprehensive analysis of both designs, it can be concluded that their output performance is lower than that of the SRM and IPMSM machine designs, but that there is a possibility of increasing the design performance by using a deterministic optimization method.
Event-driven control of a speed varying digital displacement machine
DEFF Research Database (Denmark)
Pedersen, Niels Henrik; Johansen, Per; Andersen, Torben O.
2017-01-01
To make synchronous linear control theory applicable for a variable speed digital displacement machine, a method based on event-driven control is presented, which allows the machine to be treated as a Discrete Linear Time Invariant control problem with synchronous sampling rate. Using this method, the time domain differential equations are converted into the spatial (position) domain to obtain a constant sampling rate, thus allowing for use of classical control theory. The controller synthesis is carried out as a discrete optimal deterministic problem with full state feedback. Based on a linear analysis of the feedback control system, stability is proven in a pre-specified operation region. The method is applied to a down scaled digital fluid power motor, where the motor speed is controlled at varying references under varying pressure and load torque conditions. Simulation of a non-linear evaluation model with the controller implemented shows great...
Static/dynamic Analysis and Optimization of Z-axis Stand of PCB CNC Drilling Machine*
Directory of Open Access Journals (Sweden)
Zhou Yanjun
2016-01-01
Full Text Available Finite element analysis is used for the static and dynamic analysis of the Z-axis brace of a PCB CNC drilling machine. From the results for maximum displacement deformation, von Mises stress, and modal frequency, the defect of the original design was found. On this basis, a variety of optimization schemes are put forward and the best size of the Z-axis brace is obtained by comparing the performance of the schemes. This method provides a basis for the design and renovation of other machine tool components.
DEFF Research Database (Denmark)
Nica, Florin Valentin Traian; Ritchie, Ewen; Leban, Krisztina Monika
2013-01-01
Nowadays the requirements imposed by industry and the economy ask for better quality and performance while the price must be maintained in the same range. To achieve this goal, optimization must be introduced in the design process. Two of the best known optimization algorithms for machine design, the genetic algorithm and particle swarm optimization, are shortly presented in this paper. These two algorithms are tested to determine their performance on five different benchmark test functions, based on three requirements: precision of the result, number of iterations and calculation time. Both algorithms are also tested on an analytical design process of a Transverse Flux Permanent Magnet Generator to observe their performance in an electrical machine design application.
Directory of Open Access Journals (Sweden)
Lagerev I.A.
2016-12-01
Full Text Available This paper presents mathematical models of the main types of turning hydraulic engines, which are at present widely used in the construction of the handling systems of domestic and foreign mobile transport-technological machines with wide functionality. The models take into account the efficiency criteria most significant from the viewpoint of ensuring high technical-economic indicators of the hydraulics: minimum mass (weight), volume, and power losses. On the basis of these mathematical models, the problem of multicriteria constrained optimization of the constructive sizes of turning hydraulic engines is formulated, subject to complex constructive, strength, and deformation limits. This makes it possible to develop hydraulic engines with an optimized design that comprehensively takes the efficiency criteria into account, as required for design purposes. The multicriteria optimization problem is universal in nature, so when designing turning hydraulic engines it allows one-, two- and three-criteria optimization without any changes in the solution algorithm. This is a significant advantage for the development of universal software for the automation of the design of mobile transport-technological machines.
Optimization of Support Vector Machine (SVM) for Object Classification
Scholten, Matthew; Dhingra, Neil; Lu, Thomas T.; Chao, Tien-Hsin
2012-01-01
The Support Vector Machine (SVM) is a powerful algorithm, useful for classifying data into categories. The SVMs implemented in this research were used as classifiers for the final stage in a Multistage Automatic Target Recognition (ATR) system. A single-kernel SVM known as SVMlight, and a modified version known as an SVM with K-Means Clustering, were used. These SVM algorithms were tested as classifiers under varying conditions: image noise levels varied, and the orientation of the targets changed. The classifiers were then optimized to demonstrate their maximum potential. The results demonstrate the reliability of SVM as a classification method: from trial to trial, SVM produces consistent results.
Directory of Open Access Journals (Sweden)
Ricardo Soto
2016-01-01
Full Text Available The Machine-Part Cell Formation Problem (MPCFP) is an NP-hard optimization problem that consists in grouping machines and parts into a set of cells, so that each cell can operate independently and intercell movements are minimized. This problem has largely been tackled in the literature using techniques ranging from classic methods such as linear programming to more modern nature-inspired metaheuristics. In this paper, we present an efficient parallel version of the Migrating Birds Optimization metaheuristic for solving the MPCFP. Migrating Birds Optimization is a population metaheuristic based on the V-flight formation of migrating birds, which is proven to be an effective formation for energy saving. This approach is enhanced by the smart incorporation of parallel procedures that notably improve the performance of the several sorting processes performed by the metaheuristic. We perform computational experiments on 1080 benchmarks resulting from the combination of 90 well-known MPCFP instances with 12 sorting configurations, with and without threads. We illustrate promising results where the proposal is able to reach the global optimum in all instances, while the solving time with respect to a nonparallel approach is notably reduced.
Optimizing Support Vector Machine Parameters with Genetic Algorithm for Credit Risk Assessment
Manurung, Jonson; Mawengkang, Herman; Zamzami, Elviawaty
2017-12-01
Support vector machine (SVM) is a popular classification method known for strong generalization capabilities, applicable to both classification and linear or nonlinear kernel regression. SVM computes the best linear separator in the input feature space according to the training data; to classify data that are not linearly separable, it uses the kernel trick to map the data into a higher-dimensional feature space where they become linearly separable. Common kernel functions include the linear, polynomial, radial basis function (RBF), and sigmoid kernels, and each has parameters that affect the accuracy of SVM classification. A weakness of SVM, however, is that the optimal parameter values are difficult to determine. To solve this problem, a genetic algorithm is proposed as the search procedure for optimal parameter values, thereby increasing the best classification accuracy of the SVM. Data are taken from the UCI machine learning repository: Australian Credit Approval. The results show that the combination of SVM and genetic algorithms is effective in improving classification accuracy: the genetic algorithm systematically finds optimal kernel parameters for the SVM, instead of relying on randomly selected ones. The best accuracy was improved from the baselines of 85.12% (linear kernel), 81.76% (polynomial), 77.22% (RBF), and 78.70% (sigmoid). However, for larger data sizes this method is not practical because it takes a lot of time.
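The genetic search over kernel parameters described above can be sketched in a few lines. The fitness function below is a hypothetical stand-in for cross-validated SVM accuracy (peaking at an assumed optimum of log2(C) = 5, log2(gamma) = -3); the selection, crossover, and mutation steps follow the standard GA recipe.

```python
import random

# Hypothetical stand-in for cross-validated SVM accuracy: it peaks at
# log2(C) = 5, log2(gamma) = -3, the "optimal" kernel parameters here.
def fitness(log_c, log_gamma):
    return 1.0 / (1.0 + (log_c - 5.0) ** 2 + (log_gamma + 3.0) ** 2)

def genetic_search(pop_size=30, generations=40, seed=0):
    rng = random.Random(seed)
    # Individuals encode (log2 C, log2 gamma) over typical SVM search ranges.
    pop = [(rng.uniform(-5, 15), rng.uniform(-15, 3)) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda ind: fitness(*ind), reverse=True)
        elite = pop[: pop_size // 2]                        # selection
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = rng.sample(elite, 2)
            child = ((a[0] + b[0]) / 2, (a[1] + b[1]) / 2)  # crossover
            if rng.random() < 0.3:                          # mutation
                child = (child[0] + rng.gauss(0, 1),
                         child[1] + rng.gauss(0, 1))
            children.append(child)
        pop = elite + children
    return max(pop, key=lambda ind: fitness(*ind))

best_c, best_gamma = genetic_search()
```

In a real run, `fitness` would train and cross-validate an SVM at the candidate (C, gamma); everything else stays the same.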
Seamless live migration of virtual machines over the MAN/WAN
Travostino, F.; Daspit, P.; Gommans, L.; Jog, C.; de Laat, C.; Mambretti, J.; Monga, I.; van Oudenaarde, B.; Raghunath, S.; Wang, P.Y.
2006-01-01
The “VM Turntable” demonstrator at iGRID 2005 pioneered the integration of Virtual Machines (VMs) with deterministic “lightpath” network services across a MAN/WAN. The results provide for a new stage of virtualization—one for which computation is no longer localized within a data center but rather
DEFF Research Database (Denmark)
Ghoreishi, Maryam
2018-01-01
Many models within the field of optimal dynamic pricing and lot-sizing for deteriorating items assume everything is deterministic and develop a differential equation as the core of analysis. Two prominent examples are the papers by Rajan et al. (Manag Sci 38:240–262, 1992) and Abad (1996). In this paper, we expose the models of Rajan et al. (1992) and Abad (1996) to stochastic inputs, designing these inputs so that they align as closely as possible with the assumptions of those papers. We carry out our investigation through a numerical test in which we examine the robustness of the numerical results reported in Rajan et al. (1992) and Abad (1996) in a simulation model. Our numerical results seem to confirm that the results stated in these papers are indeed robust when exposed to stochastic inputs.
Optimal entangling operations between deterministic blocks of qubits encoded into single photons
Smith, Jake A.; Kaplan, Lev
2018-01-01
Here, we numerically simulate probabilistic elementary entangling operations between rail-encoded photons for the purpose of scalable universal quantum computation or communication. We propose grouping logical qubits into single-photon blocks wherein single-qubit rotations and the controlled-not (cnot) gate are fully deterministic and simple to implement. Interblock communication is then allowed through said probabilistic entangling operations. We find a promising trend in the increasing probability of successful interblock communication as we increase the number of optical modes operated on by our elementary entangling operations.
Directory of Open Access Journals (Sweden)
Adel Taha Abbas
2018-05-01
Full Text Available Magnesium alloys are widely used in aerospace vehicles and modern cars, due to their rapid machinability at high cutting speeds. A novel Edgeworth–Pareto optimization of an artificial neural network (ANN is presented in this paper for surface roughness (Ra prediction of one component in computer numerical control (CNC turning over minimal machining time (Tm and at prime machining costs (C. An ANN is built in the Matlab programming environment, based on a 4-12-3 multi-layer perceptron (MLP, to predict Ra, Tm, and C, in relation to cutting speed, vc, depth of cut, ap, and feed per revolution, fr. For the first time, a profile of an AZ61 alloy workpiece after finish turning is constructed using an ANN for the range of experimental values vc, ap, and fr. The global minimum length of a three-dimensional estimation vector was defined with the following coordinates: Ra = 0.087 μm, Tm = 0.358 min/cm3, C = $8.2973. Likewise, the corresponding finish-turning parameters were also estimated: cutting speed vc = 250 m/min, cutting depth ap = 1.0 mm, and feed per revolution fr = 0.08 mm/rev. The ANN model achieved a reliable prediction accuracy of ±1.35% for surface roughness.
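The Edgeworth–Pareto step described above, picking the cutting regime whose normalized (Ra, Tm, C) estimation vector has minimal length, can be sketched as follows. The candidate regimes and predicted values are invented for illustration; only the first row is seeded with the paper's reported optimum coordinates.

```python
import math

# Hypothetical ANN predictions (Ra [um], Tm [min/cm3], C [$]) for three
# candidate regimes (vc, ap, fr); only the first regime uses the paper's
# reported coordinates, the other rows are invented for illustration.
candidates = {
    (250, 1.0, 0.08): (0.087, 0.358, 8.30),
    (200, 0.5, 0.12): (0.150, 0.500, 9.10),
    (150, 1.5, 0.20): (0.210, 0.290, 10.40),
}

def best_regime(pred):
    # Normalize each criterion to [0, 1] over the candidates, then pick the
    # regime whose three-dimensional estimation vector is shortest.
    cols = list(zip(*pred.values()))
    lo = [min(c) for c in cols]
    span = [(max(c) - min(c)) or 1.0 for c in cols]
    def length(v):
        return math.sqrt(sum(((x - l) / s) ** 2
                             for x, l, s in zip(v, lo, span)))
    return min(pred, key=lambda reg: length(pred[reg]))

optimum = best_regime(candidates)
```

With these illustrative rows, the regime closest to simultaneously minimal roughness, time, and cost wins, mirroring the "global minimum length" criterion in the abstract.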
Preliminary Test of Upgraded Conventional Milling Machine into PC Based CNC Milling Machine
International Nuclear Information System (INIS)
Abdul Hafid
2008-01-01
CNC (Computerized Numerical Control) milling machines pose a challenge for innovation in the field of machining. To achieve machining quality equivalent to a CNC milling machine, a conventional milling machine was upgraded into a PC-based CNC milling machine, with both mechanical and instrumentation changes: servo drives were installed as the replacement control, and proximity sensors were used. A computer program was constructed to send instructions to the milling machine; its structure consists of a GUI model and a ladder diagram, implemented in a programming system called RTX software. The result of the upgrade is the computer program and CNC instruction execution. This is a first step that will be continued in future work. With the upgraded milling machine, the user can work more optimally and safely, with reduced risk of accidents. (author)
New Dandelion Algorithm Optimizes Extreme Learning Machine for Biomedical Classification Problems
Directory of Open Access Journals (Sweden)
Xiguang Li
2017-01-01
Full Text Available Inspired by the behavior of dandelion sowing, a novel swarm intelligence algorithm, namely, the dandelion algorithm (DA), is proposed in this paper for global optimization of complex functions. In DA, the dandelion population is divided into two subpopulations, and the different subpopulations undergo different sowing behaviors. Moreover, another sowing method is designed to jump out of local optima. In order to demonstrate the validity of DA, we compare the proposed algorithm with other existing algorithms, including the bat algorithm, particle swarm optimization, and the enhanced fireworks algorithm. Simulations show that the proposed algorithm is superior to the other algorithms on the tested functions. The proposed algorithm is then applied to optimize an extreme learning machine (ELM) for biomedical classification problems, with considerable effect. Finally, we use different fusion methods to form different fusion classifiers, and the fusion classifiers achieve higher accuracy and better stability to some extent.
Structural Optimization with Reliability Constraints
DEFF Research Database (Denmark)
Sørensen, John Dalsgaard; Thoft-Christensen, Palle
1986-01-01
During the last 25 years considerable progress has been made in the fields of structural optimization and structural reliability theory. In classical deterministic structural optimization all variables are assumed to be deterministic; due to the unpredictability of loads and strengths of actual structures, this assumption is usually not realistic. In this paper we consider only structures which can be modelled as systems of elasto-plastic elements, e.g. frame and truss structures. In section 2 a method to evaluate the reliability of such structural systems is presented. Based on a probabilistic point of view, a modern structural optimization problem is formulated in section 3. The formulation is a natural extension of the commonly used formulations in deterministic structural optimization, and the mathematical form of the optimization problem is briefly discussed. In section 4 two new optimization procedures especially designed for reliability-based optimization are presented.
Xie, Rui-Fang; Shi, Zhi-Na; Li, Zhi-Cheng; Chen, Pei-Pei; Li, Yi-Min; Zhou, Xin
2015-04-01
Using Dachengqi Tang (DCQT) as a model, high performance liquid chromatography (HPLC) fingerprints were applied to optimize the machine extracting process with the Box-Behnken experimental design. HPLC fingerprints were acquired to investigate the chemical ingredients of DCQT; a synthetic weighting method based on the analytic hierarchy process (AHP) and criteria importance through intercriteria correlation (CRITIC) was performed to calculate synthetic scores of the fingerprints; and, using the marker ingredient contents and synthetic scores as indicators, the Box-Behnken design was carried out to optimize the process parameters of the high-pressure machine decocting process for DCQT. The optimal process was as follows: the herb materials were soaked for 45 min and extracted with 9 volumes of water in the decocting machine at a temperature of 140 °C until the pressure reached 0.25 MPa; the hot decoction was then drawn off to soak Dahuang and Mangxiao for 5 min. Finally, the obtained solutions were mixed, filtered, and packed. It is concluded that HPLC fingerprints combined with the Box-Behnken experimental design can be used to optimize the extracting process of traditional Chinese medicine (TCM).
PGHPF – An Optimizing High Performance Fortran Compiler for Distributed Memory Machines
Directory of Open Access Journals (Sweden)
Zeki Bozkus
1997-01-01
Full Text Available High Performance Fortran (HPF is the first widely supported, efficient, and portable parallel programming language for shared and distributed memory systems. HPF is realized through a set of directive-based extensions to Fortran 90. It enables application developers and Fortran end-users to write compact, portable, and efficient software that will compile and execute on workstations, shared memory servers, clusters, traditional supercomputers, or massively parallel processors. This article describes a production-quality HPF compiler for a set of parallel machines. Compilation techniques such as data and computation distribution, communication generation, run-time support, and optimization issues are elaborated as the basis for an HPF compiler implementation on distributed memory machines. The performance of this compiler on benchmark programs demonstrates that high efficiency can be achieved executing HPF code on parallel architectures.
Optimal design method to minimize users' thinking mapping load in human-machine interactions.
Huang, Yanqun; Li, Xu; Zhang, Jie
2015-01-01
The discrepancy between human cognition and machine requirements/behaviors usually results in a serious mental thinking-mapping load, or even disasters, during product operation. It is important to help people avoid human-machine interaction confusion and difficulty in today's society of predominantly mental work, by improving the usability of a product and minimizing the user's thinking-mapping and interpreting load in human-machine interactions. An optimal human-machine interface design method is introduced, based on minimizing the mental load of the thinking-mapping process between users' intentions and the affordances of product interface states. By analyzing the users' thinking-mapping problem, an operating action model is constructed. According to human natural instincts and acquired knowledge, an expected ideal design with minimized thinking load is uniquely determined first. Then, creative alternatives, in terms of the way humans obtain operational information, are provided as digital interface state datasets. Finally, using the cluster analysis method, an optimum solution is picked out from the alternatives by calculating the distances between the two datasets. Considering multiple factors to minimize users' thinking-mapping loads, a solution nearest to the ideal value is found in a human-car interaction design case. The clustering results show the method's effectiveness in finding an optimum solution to mental-load minimization problems in human-machine interaction design.
Zainal Ariffin, S.; Razlan, A.; Ali, M. Mohd; Efendee, A. M.; Rahman, M. M.
2018-03-01
Background/Objectives: This paper discusses the optimum cutting parameters under different coolant technique conditions (1.0 mm nozzle orifice, wet, and dry) to optimize surface roughness, temperature, and tool wear in the machining process, based on the selected setting parameters. The selected cutting parameters for this study were cutting speed, feed rate, depth of cut, and coolant technique condition. Methods/Statistical Analysis: Experiments were conducted and investigated based on a Design of Experiment (DOE) with the Response Surface Method (RSM). The research into aggressive machining of aluminum alloy A319 for automotive applications is an effort to understand the machining concepts widely used in a variety of manufacturing industries, especially the automotive industry. Findings: The results show how surface roughness, temperature, and tool wear evolve during machining when using a 1.0 mm nozzle orifice, and that the coolant technique can help minimize built-up edge formation on A319. Surface roughness, productivity, and the optimization of cutting speed in the technical and commercial aspects of A319 manufacturing processes are discussed as further work for the automotive components industry. Applications/Improvements: The results are also beneficial in minimizing costs and improving the productivity of manufacturing firms. According to the mathematical model and equations generated by CCD-based RSM, experiments were performed, and a coolant condition technique using the selected nozzle size that reduces tool wear, surface roughness, and temperature was obtained. The results have been analyzed and optimization has been carried out for the cutting parameters, showing that the effectiveness and efficiency of the system can be identified, which helps to solve potential problems.
CERN. Geneva
2017-01-01
Machine learning, which builds on ideas in computer science, statistics, and optimization, focuses on developing algorithms to identify patterns and regularities in data, and using these learned patterns to make predictions on new observations. Boosted by its industrial and commercial applications, the field of machine learning is quickly evolving and expanding. Recent advances have seen great success in the realms of computer vision, natural language processing, and broadly in data science. Many of these techniques have already been applied in particle physics, for instance for particle identification, detector monitoring, and the optimization of computer resources. Modern machine learning approaches, such as deep learning, are only just beginning to be applied to the analysis of High Energy Physics data to approach more and more complex problems. These classes will review the framework behind machine learning and discuss recent developments in the field.
Gower, Robert M.
2018-02-12
We present the first accelerated randomized algorithm for solving linear systems in Euclidean spaces. One essential problem of this type is the matrix inversion problem. In particular, our algorithm can be specialized to invert positive definite matrices in such a way that all iterates (approximate solutions) generated by the algorithm are positive definite matrices themselves. This opens the way for many applications in the field of optimization and machine learning. As an application of our general theory, we develop the first accelerated (deterministic and stochastic) quasi-Newton updates. Our updates lead to provably more aggressive approximations of the inverse Hessian, and lead to speed-ups over classical non-accelerated rules in numerical experiments. Experiments with empirical risk minimization show that our rules can accelerate training of machine learning models.
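For context, a minimal non-accelerated randomized solver of the kind this work speeds up is the randomized Kaczmarz iteration, sketched below for a toy system. This baseline is not the paper's accelerated method; it only illustrates the class of randomized linear-system solvers.

```python
import random

# Baseline (non-accelerated) randomized iteration for solving Ax = b:
# repeatedly project the iterate onto the hyperplane of a random row.
def randomized_kaczmarz(A, b, iters=2000, seed=0):
    rng = random.Random(seed)
    x = [0.0] * len(A[0])
    for _ in range(iters):
        i = rng.randrange(len(A))
        row = A[i]
        norm2 = sum(a * a for a in row)
        resid = (b[i] - sum(a * xj for a, xj in zip(row, x))) / norm2
        x = [xj + resid * a for xj, a in zip(x, row)]
    return x

# 2x2 positive definite system with exact solution x = (1, 2).
A = [[3.0, 1.0], [1.0, 2.0]]
b = [5.0, 5.0]
x = randomized_kaczmarz(A, b)
```

Acceleration, in the paper's sense, adds momentum-style extrapolation on top of such randomized projection steps to improve the convergence rate.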
The effect of linear guide representation for topology optimization on a five-axis milling machine
Yüksel, Esra
2017-01-01
Topology optimization is a countermeasure to obtain lightweight and stiff structures for machine tools. Topology optimizations are applied at the component level due to computational limitations; therefore, linear guides' rolling elements are underestimated in most cases. The stiffness of the entire assembly depends on the least stiff components, which are identified as the linear guides in the current literature. In this study, the effects of the linear guide's representation in the virtual environment are investigated.
Directory of Open Access Journals (Sweden)
Pejović Branko B.
2014-01-01
Full Text Available Starting from the expression for thrust under full utilization of the chip cross-section and tool life, an expression is derived for the maximum required thrust of a universal machine. Then, using a working diagram, the main features of simultaneous machine utilization are analyzed, and the optimal area of utilization is determined for a given optimal diameter of treatment. On this basis, using the machine's structural details and the corresponding workability function, relations are derived to determine the optimal slenderness coefficient of the chip suitable for practical use, assuming the critical workpiece material is known. Considering the critical and authoritative workpiece material, the characteristic workability constant is determined from the expression for the cutting speed, as the basis for establishing the optimum tool material adequate for the optimal regime. Both obtained relations can be considered a general model that can be applied directly to the stated problems, as well as to other typical kinds of treatment. Finally, the model is verified on a calculation example from practice for a specific machine tool, where certain important characteristics of the optimal treatment are defined.
Machine learning techniques for optical communication system optimization
DEFF Research Database (Denmark)
Zibar, Darko; Wass, Jesper; Thrane, Jakob
In this paper, machine learning techniques relevant to optical communication are presented and discussed. The focus is on applying machine learning tools to optical performance monitoring and performance prediction.
Prediction of Skin Sensitization with a Particle Swarm Optimized Support Vector Machine
Directory of Open Access Journals (Sweden)
Chenzhong Cao
2009-07-01
Full Text Available Skin sensitization is the most commonly reported occupational illness, causing much suffering to a wide range of people. Identification and labeling of environmental allergens is urgently required to protect people from skin sensitization. The guinea pig maximization test (GPMT and murine local lymph node assay (LLNA are the two most important in vivo models for identification of skin sensitizers. In order to reduce the number of animal tests, quantitative structure-activity relationships (QSARs are strongly encouraged in the assessment of skin sensitization of chemicals. This paper has investigated the skin sensitization potential of 162 compounds with LLNA results and 92 compounds with GPMT results using a support vector machine. A particle swarm optimization algorithm was implemented for feature selection from a large number of molecular descriptors calculated by Dragon. For the LLNA data set, the classification accuracies are 95.37% and 88.89% for the training and the test sets, respectively. For the GPMT data set, the classification accuracies are 91.80% and 90.32% for the training and the test sets, respectively. The classification performances were greatly improved compared to those reported in the literature, indicating that the support vector machine optimized by particle swarm in this paper is competent for the identification of skin sensitizers.
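The particle-swarm feature-selection loop described above can be sketched with a binary PSO. The scoring function below is a toy stand-in for classifier accuracy (the Dragon descriptors and the SVM are replaced by an invented score in which only features 0, 2, and 5 are informative).

```python
import math
import random

INFORMATIVE = {0, 2, 5}  # hypothetical: only these descriptors help accuracy

def score(mask):
    # Toy stand-in for cross-validated accuracy: reward informative features,
    # lightly penalize every uninformative one that is selected.
    hits = len(INFORMATIVE & mask)
    return hits - 0.1 * (len(mask) - hits)

def binary_pso(n=8, particles=15, iters=80, seed=3):
    rng = random.Random(seed)
    sig = lambda v: 1.0 / (1.0 + math.exp(-v))
    as_mask = lambda bits: {i for i, b in enumerate(bits) if b}
    X = [[rng.random() < 0.5 for _ in range(n)] for _ in range(particles)]
    V = [[0.0] * n for _ in range(particles)]
    pbest = [x[:] for x in X]
    gbest = max(pbest, key=lambda p: score(as_mask(p)))[:]
    for _ in range(iters):
        for p in range(particles):
            for d in range(n):
                r1, r2 = rng.random(), rng.random()
                V[p][d] = (0.7 * V[p][d]
                           + 1.5 * r1 * (pbest[p][d] - X[p][d])
                           + 1.5 * r2 * (gbest[d] - X[p][d]))
                X[p][d] = rng.random() < sig(V[p][d])  # sigmoid bit update
            if score(as_mask(X[p])) > score(as_mask(pbest[p])):
                pbest[p] = X[p][:]
            if score(as_mask(X[p])) > score(as_mask(gbest)):
                gbest = X[p][:]
    return as_mask(gbest)

selected = binary_pso()
```

In the study's setting, `score` would be the SVM's cross-validated classification accuracy on the LLNA or GPMT data, with each bit switching one molecular descriptor on or off.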
Prediction of Skin Sensitization with a Particle Swarm Optimized Support Vector Machine
Yuan, Hua; Huang, Jianping; Cao, Chenzhong
2009-01-01
Skin sensitization is the most commonly reported occupational illness, causing much suffering to a wide range of people. Identification and labeling of environmental allergens is urgently required to protect people from skin sensitization. The guinea pig maximization test (GPMT) and murine local lymph node assay (LLNA) are the two most important in vivo models for identification of skin sensitizers. In order to reduce the number of animal tests, quantitative structure-activity relationships (QSARs) are strongly encouraged in the assessment of skin sensitization of chemicals. This paper has investigated the skin sensitization potential of 162 compounds with LLNA results and 92 compounds with GPMT results using a support vector machine. A particle swarm optimization algorithm was implemented for feature selection from a large number of molecular descriptors calculated by Dragon. For the LLNA data set, the classification accuracies are 95.37% and 88.89% for the training and the test sets, respectively. For the GPMT data set, the classification accuracies are 91.80% and 90.32% for the training and the test sets, respectively. The classification performances were greatly improved compared to those reported in the literature, indicating that the support vector machine optimized by particle swarm in this paper is competent for the identification of skin sensitizers. PMID:19742136
Experimental aspects of deterministic secure quantum key distribution
Energy Technology Data Exchange (ETDEWEB)
Walenta, Nino; Korn, Dietmar; Puhlmann, Dirk; Felbinger, Timo; Hoffmann, Holger; Ostermeyer, Martin [Universitaet Potsdam (Germany). Institut fuer Physik; Bostroem, Kim [Universitaet Muenster (Germany)
2008-07-01
Most common protocols for quantum key distribution (QKD) use non-deterministic algorithms to establish a shared key. But deterministic implementations can allow for higher net key transfer rates and eavesdropping detection rates. The Ping-Pong coding scheme by Bostroem and Felbinger [1] employs deterministic information encoding in entangled states, with its characteristic quantum channel from Bob to Alice and back to Bob. Based on a table-top implementation of this protocol with polarization-entangled photons, fundamental advantages as well as practical issues like transmission losses, photon storage, and requirements for progress towards longer transmission distances are discussed and compared to non-deterministic protocols. Modifications of common protocols towards deterministic quantum key distribution are addressed.
Abbas, Adel Taha; Pimenov, Danil Yurievich; Erdakov, Ivan Nikolaevich; Taha, Mohamed Adel; Soliman, Mahmoud Sayed; El Rayes, Magdy Mostafa
2018-05-16
Magnesium alloys are widely used in aerospace vehicles and modern cars, due to their rapid machinability at high cutting speeds. A novel Edgeworth–Pareto optimization of an artificial neural network (ANN) is presented in this paper for surface roughness (Ra) prediction of one component in computer numerical control (CNC) turning over minimal machining time (Tm) and at prime machining costs (C). An ANN is built in the Matlab programming environment, based on a 4-12-3 multi-layer perceptron (MLP), to predict Ra, Tm, and C, in relation to cutting speed, vc, depth of cut, ap, and feed per revolution, fr. For the first time, a profile of an AZ61 alloy workpiece after finish turning is constructed using an ANN for the range of experimental values vc, ap, and fr. The global minimum length of a three-dimensional estimation vector was defined with the following coordinates: Ra = 0.087 μm, Tm = 0.358 min/cm³, C = $8.2973. Likewise, the corresponding finish-turning parameters were also estimated: cutting speed vc = 250 m/min, cutting depth ap = 1.0 mm, and feed per revolution fr = 0.08 mm/rev. The ANN model achieved a reliable prediction accuracy of ±1.35% for surface roughness.
Optimizing virtual machine placement for energy and SLA in clouds using utility functions
Directory of Open Access Journals (Sweden)
Abdelkhalik Mosa
2016-10-01
Full Text Available Cloud computing provides on-demand access to a shared pool of computing resources, which enables organizations to outsource their IT infrastructure. Cloud providers are building data centers to handle the continuous increase in cloud users’ demands. Consequently, these cloud data centers consume, and have the potential to waste, substantial amounts of energy. This energy consumption increases the operational cost and the CO2 emissions. The goal of this paper is to develop an optimized energy and SLA-aware virtual machine (VM) placement strategy that dynamically assigns VMs to Physical Machines (PMs) in cloud data centers. This placement strategy co-optimizes energy consumption and service level agreement (SLA) violations. The proposed solution adopts utility functions to formulate the VM placement problem. A genetic algorithm searches the possible VMs-to-PMs assignments with a view to finding an assignment that maximizes utility. Simulation results using CloudSim show that the proposed utility-based approach reduced the average energy consumption by approximately 6 % and the overall SLA violations by more than 38 %, using fewer VM migrations and PM shutdowns, compared to a well-known heuristics-based approach.
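A minimal sketch of utility-driven VM placement follows. The utility weights, the 80% SLA-risk threshold, and the greedy largest-first assignment order are illustrative assumptions, not the paper's actual genetic-algorithm formulation, but they show how a single utility can trade energy (fewer active PMs) against SLA risk (overloaded PMs).

```python
# Utility-guided VM placement sketch. Weights, thresholds, and the greedy
# order are invented for illustration; the paper searches assignments with
# a genetic algorithm instead of placing VMs greedily.
def place(vms, pm_capacity, n_pms, w_energy=0.5, w_sla=0.5):
    load = [0.0] * n_pms
    plan = {}
    for vm, demand in sorted(vms.items(), key=lambda kv: -kv[1]):
        def utility(pm):
            new = load[pm] + demand
            if new > pm_capacity:
                return float("-inf")                   # infeasible placement
            energy = 0.0 if load[pm] > 0 else -1.0     # waking a new PM costs
            sla = -max(0.0, new / pm_capacity - 0.8)   # penalize >80% load
            return w_energy * energy + w_sla * sla
        best = max(range(n_pms), key=utility)
        plan[vm] = best
        load[best] += demand
    return plan, sum(1 for l in load if l > 0)

vms = {"vm1": 0.5, "vm2": 0.3, "vm3": 0.2, "vm4": 0.4}
plan, active_pms = place(vms, pm_capacity=1.0, n_pms=4)
```

A GA, as in the paper, would instead evolve whole VMs-to-PMs assignment vectors and score each with such a utility.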
Energy Technology Data Exchange (ETDEWEB)
Kreim, Alexander; Schaefer, Uwe [TU Berlin (Germany). Sek. EM4 Elektrische Antriebstechnik
2010-10-15
This article introduces a nonlinear optimization algorithm for mixed integer problems. The proposed algorithm is a trust region algorithm for an exact penalty function. The quadratic subproblem is used for the integration of discrete variables. This is done by a branch-and-bound approach. The application of the algorithm is shown by minimizing the losses of a permanent magnet synchronous machine. The machine is designed for use in hybrid and electric vehicles. It is shown how load cycles can be included into the optimization process. (orig.)
Digital Repository Service at National Institute of Oceanography (India)
Harish, N.; Mandal, S.; Rao, S.; Patil, S.G.
breakwater. Soft computing tools like Artificial Neural Network, Fuzzy Logic, Support Vector Machine (SVM), etc., are successfully used to solve complex problems. In the present study, SVM and a hybrid of Particle Swarm Optimization (PSO) with SVM (PSO-SVM) are used...
Deterministic matrices matching the compressed sensing phase transitions of Gaussian random matrices
Monajemi, Hatef; Jafarpour, Sina; Gavish, Matan; Donoho, David L.; Ambikasaran, Sivaram; Bacallado, Sergio; Bharadia, Dinesh; Chen, Yuxin; Choi, Young; Chowdhury, Mainak; Chowdhury, Soham; Damle, Anil; Fithian, Will; Goetz, Georges; Grosenick, Logan; Gross, Sam; Hills, Gage; Hornstein, Michael; Lakkam, Milinda; Lee, Jason; Li, Jian; Liu, Linxi; Sing-Long, Carlos; Marx, Mike; Mittal, Akshay; Monajemi, Hatef; No, Albert; Omrani, Reza; Pekelis, Leonid; Qin, Junjie; Raines, Kevin; Ryu, Ernest; Saxe, Andrew; Shi, Dai; Siilats, Keith; Strauss, David; Tang, Gary; Wang, Chaojun; Zhou, Zoey; Zhu, Zhen
2013-01-01
In compressed sensing, one takes n < N samples of an N-dimensional vector x₀ using an n × N matrix A, obtaining undersampled measurements y = Ax₀. For random matrices with independent standard Gaussian entries, it is known that, when x₀ is k-sparse, there is a precisely determined phase transition: for a certain region in the (k/n, n/N)-phase diagram, convex optimization typically finds the sparsest solution, whereas outside that region, it typically fails. It has been shown empirically that the same property—with the same phase transition location—holds for a wide range of non-Gaussian random matrix ensembles. We report extensive experiments showing that the Gaussian phase transition also describes numerous deterministic matrices, including Spikes and Sines, Spikes and Noiselets, Paley Frames, Delsarte-Goethals Frames, Chirp Sensing Matrices, and Grassmannian Frames. Namely, for each of these deterministic matrices in turn, for a typical k-sparse object, we observe that convex optimization is successful over a region of the phase diagram that coincides with the region known for Gaussian random matrices. Our experiments considered coefficients constrained to a set X for four different choices of X, and the results establish our finding for each of the four associated phase transitions. PMID:23277588
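A toy version of the recovery experiment behind these phase diagrams can be run with iterative soft-thresholding (ISTA), used here as a simple proxy for the convex ℓ1 optimization in the study. The problem sizes below put the instance comfortably inside the Gaussian success region (n/N ≈ 0.67, k/n = 0.25); the matrix, sparsity pattern, and step sizes are all illustrative choices.

```python
import random

# Toy sparse-recovery run: ISTA (iterative soft-thresholding) as a simple
# proxy for the convex l1 minimization used in phase-transition experiments.
def ista(A, y, lam=0.02, steps=4000, lr=0.1):
    m, n = len(A), len(A[0])
    x = [0.0] * n
    for _ in range(steps):
        Ax = [sum(a * xj for a, xj in zip(row, x)) for row in A]
        grad = [sum(A[i][j] * (Ax[i] - y[i]) for i in range(m))
                for j in range(n)]
        x = [xj - lr * g for xj, g in zip(x, grad)]
        # soft-threshold: the proximal step for the l1 penalty
        x = [(abs(v) - lr * lam) * (1 if v > 0 else -1)
             if abs(v) > lr * lam else 0.0 for v in x]
    return x

rng = random.Random(0)
N, m, k = 12, 8, 2                      # ambient dim, measurements, sparsity
A = [[rng.gauss(0, 1) / m ** 0.5 for _ in range(N)] for _ in range(m)]
x_true = [0.0] * N
x_true[1], x_true[7] = 1.0, -1.0        # k-sparse ground truth
y = [sum(a * xj for a, xj in zip(row, x_true)) for row in A]
x_hat = ista(A, y)
```

The paper's experiments replace the Gaussian `A` with the listed deterministic matrices and sweep (k/n, n/N) over a grid to map out where recovery succeeds.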
SCALE6 Hybrid Deterministic-Stochastic Shielding Methodology for PWR Containment Calculations
International Nuclear Information System (INIS)
Matijevic, Mario; Pevec, Dubravko; Trontl, Kresimir
2014-01-01
The capabilities and limitations of the SCALE6/MAVRIC hybrid deterministic-stochastic shielding methodology (CADIS and FW-CADIS) are demonstrated when applied to a realistic deep-penetration Monte Carlo (MC) shielding problem: a full-scale PWR containment model. The ultimate goal of such automatic variance reduction (VR) techniques is to achieve acceptable precision for the MC simulation in reasonable time by preparing phase-space VR parameters via deterministic transport theory (discrete ordinates SN), generating a space-energy mesh-based adjoint function distribution. The hybrid methodology generates VR parameters that work in tandem (a biased source distribution and an importance map) in an automated fashion, which is a paramount step for MC simulation of complex models with fairly uniform mesh tally uncertainties. The aim of this paper was the determination of the neutron-gamma dose rate distribution (radiation field) over large portions of the PWR containment phase-space with uniform MC uncertainties. The sources of ionizing radiation included fission neutrons and gammas (reactor core) and gammas from the activated two-loop coolant. Special attention was given to a focused adjoint source definition, which improved MC statistics in selected materials and/or regions of the complex model. We investigated the benefits and differences of FW-CADIS over CADIS and over manual (i.e., analog) MC simulation of particle transport. Computer memory consumption by the deterministic part of the hybrid methodology represents the main obstacle when using meshes with millions of cells together with high SN/PN parameters, so optimization of the control and numerical parameters of the deterministic module plays an important role in computer memory management. We investigated the possibility of using the memory-intense deterministic module with the broad-group library v7-27n19g, as opposed to the fine-group library v7-200n47g used with the MC module, to fully capture low-energy particle transport and secondary gamma emission. Compared with
Directory of Open Access Journals (Sweden)
Kun Li
2015-01-01
This paper investigates a special single machine scheduling problem derived from practical industries, namely, selective single machine scheduling with sequence-dependent setup costs and downstream demands. Unlike traditional single machine scheduling, this problem also takes into account the selection of jobs and the demands of downstream lines. The problem is formulated as a mixed integer linear programming model, and an improved particle swarm optimization (PSO) algorithm is proposed to solve it. To enhance the exploitation ability of the PSO, an adaptive neighborhood search with varying search depth is developed based on the decision characteristics of the problem. To improve search diversity and enable the proposed PSO algorithm to escape local optima, an elite solution pool is introduced. Computational results on extensive test instances show that the proposed PSO can obtain optimal solutions for small problems and outperforms CPLEX and several other powerful algorithms on large problems.
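The core PSO update described above can be illustrated with a minimal global-best PSO sketch; this is a generic textbook version, not the paper's improved PSO (the adaptive neighborhood search and elite solution pool are omitted), and all parameter values are illustrative assumptions.

```python
import random

def pso(f, dim, bounds, n_particles=20, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimize f over `bounds` with a basic global-best particle swarm."""
    rng = random.Random(seed)
    lo, hi = bounds
    X = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    V = [[0.0] * dim for _ in range(n_particles)]
    P = [x[:] for x in X]                          # personal best positions
    pbest = [f(x) for x in X]
    g = min(range(n_particles), key=lambda i: pbest[i])
    gx, gbest = P[g][:], pbest[g]                  # global best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                V[i][d] = (w * V[i][d]
                           + c1 * r1 * (P[i][d] - X[i][d])
                           + c2 * r2 * (gx[d] - X[i][d]))
                X[i][d] = min(hi, max(lo, X[i][d] + V[i][d]))  # clamp to bounds
            fx = f(X[i])
            if fx < pbest[i]:                      # update personal best
                pbest[i], P[i] = fx, X[i][:]
                if fx < gbest:                     # update global best
                    gbest, gx = fx, X[i][:]
    return gx, gbest

# Toy objective: sphere function, optimum 0 at the origin.
best_x, best_f = pso(lambda x: sum(v * v for v in x), dim=3, bounds=(-5, 5))
```

A real scheduling application would replace the continuous position vector with a decoding scheme that maps positions to job selections and sequences.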
Solving a Higgs optimization problem with quantum annealing for machine learning.
Mott, Alex; Job, Joshua; Vlimant, Jean-Roch; Lidar, Daniel; Spiropulu, Maria
2017-10-18
The discovery of Higgs-boson decays in a background of standard-model processes was assisted by machine learning methods. The classifiers used to separate signals such as these from background are trained using accurate but imperfect simulations of the physical processes involved, often resulting in incorrect labelling of background processes or signals (label noise) and systematic errors. Here we use quantum and classical annealing (probabilistic techniques for approximating the global maximum or minimum of a given function) to solve a Higgs-signal-versus-background machine learning optimization problem, mapped to a problem of finding the ground state of a corresponding Ising spin model. We build a set of weak classifiers based on the kinematic observables of the Higgs decay photons, which we then use to construct a strong classifier. This strong classifier is highly resilient against overtraining and against errors in the correlations of the physical observables in the training data. We show that the resulting quantum and classical annealing-based classifier systems perform comparably to the state-of-the-art machine learning methods that are currently used in particle physics. However, in contrast to these methods, the annealing-based classifiers are simple functions of directly interpretable experimental parameters with clear physical meaning. The annealer-trained classifiers use the excited states in the vicinity of the ground state and demonstrate some advantage over traditional machine learning methods for small training datasets. Given the relative simplicity of the algorithm and its robustness to error, this technique may find application in other areas of experimental particle physics, such as real-time decision making in event-selection problems and classification in neutrino physics.
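The classical-annealing side of the approach, finding an Ising ground state by simulated annealing, can be sketched as follows. This is a toy ferromagnetic chain, not the Higgs-classifier Hamiltonian, and the cooling schedule and parameters are illustrative assumptions.

```python
import math
import random

def anneal_ising(J, h, steps=20000, T0=2.0, T1=0.01, seed=1):
    """Approximate the ground state of H(s) = -sum_{i<j} J[i][j]*s_i*s_j - sum_i h[i]*s_i
    by single-spin-flip simulated annealing with a geometric cooling schedule."""
    rng = random.Random(seed)
    n = len(h)
    s = [rng.choice((-1, 1)) for _ in range(n)]

    def energy(spins):
        e = -sum(h[i] * spins[i] for i in range(n))
        e -= sum(J[i][j] * spins[i] * spins[j]
                 for i in range(n) for j in range(i + 1, n))
        return e

    E = energy(s)
    for k in range(steps):
        T = T0 * (T1 / T0) ** (k / steps)      # geometric cooling
        i = rng.randrange(n)
        # Energy change from flipping spin i (J stored as an upper triangle).
        field = h[i] + sum(J[min(i, j)][max(i, j)] * s[j] for j in range(n) if j != i)
        dE = 2 * s[i] * field
        if dE <= 0 or rng.random() < math.exp(-dE / T):   # Metropolis acceptance
            s[i] = -s[i]
            E += dE
    return s, E

# Ferromagnetic 8-spin chain with a positive field: ground state is all spins +1.
n = 8
J = [[0.0] * n for _ in range(n)]
for i in range(n - 1):
    J[i][i + 1] = 1.0
h = [0.5] * n
spins, E = anneal_ising(J, h)
```

In the paper's setting, the spins encode which weak classifiers are included in the strong classifier, and the couplings come from their correlations on training data.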
Lucian, P.; Gheorghe, S.
2017-08-01
This paper presents a new method, based on the FRISCO formula, for optimizing the choice of the best control system for kinematical feed chains with a great distance between slides, used in computer numerical controlled machine tools. Such machines are typically used for machining large and complex parts (mostly in the aviation industry) or complex casting molds, but are not limited to these applications. For such machine tools the kinematic feed chains are arranged in a dual-parallel drive structure that allows the mobile element to be moved by the two kinematical branches and their related control systems. Such an arrangement allows for high speed and high rigidity (a critical requirement for precision machining) during the machining process. A significant issue for such an arrangement is the ability of the two parallel control systems to follow the same trajectory accurately. In order to address this issue it is necessary to achieve synchronous motion control for the two kinematical branches, ensuring that the correct perpendicular position is kept by the mobile element during its motion on the two slides.
Optimal design of earth-moving machine elements with cusp catastrophe theory application
Pitukhin, A. V.; Skobtsov, I. G.
2017-10-01
This paper deals with the solution of the optimal design problem for the operator of an earth-moving machine with a roll-over protective structure (ROPS) in terms of catastrophe theory. The first part of the paper gives a brief description of catastrophe theory, considers the cusp catastrophe, and treats the control parameters as Gaussian stochastic quantities. The second part states the optimal design problem, which includes the choice of the objective function and independent design variables, and the establishment of system limits. The objective function is the mean total cost, which includes the initial cost and the cost of failure according to the cusp catastrophe probability. The last part gives an algorithm of the random search method with interval reduction, subject to side and functional constraints. This approach to the optimal design problem can be applied to choose rational ROPS parameters, which will increase safety and reduce production and operating expenses.
Optimal Design Solutions for Permanent Magnet Synchronous Machines
Directory of Open Access Journals (Sweden)
POPESCU, M.
2011-11-01
This paper presents optimal design solutions for reducing the cogging torque of permanent magnet synchronous machines. The first solution proposed in the paper consists in using closed stator slots, which create a nearly isotropic magnetic structure of the stator core, reducing the mutual attraction between the permanent magnets and the slotted armature. To avoid complications in the winding manufacturing technology, the stator slots are closed using wedges made of soft magnetic composite materials. The second solution consists in properly choosing the combination of pole number and stator slot number, which typically leads to a winding with a fractional number of slots per pole per phase. The proposed measures for cogging torque reduction are analyzed by means of 2D/3D finite element models developed using the professional Flux software package. Numerical results are discussed and compared with experimental ones obtained by testing a PMSM prototype.
Deterministic Design Optimization of Structures in OpenMDAO Framework
Coroneos, Rula M.; Pai, Shantaram S.
2012-01-01
Nonlinear programming algorithms play an important role in structural design optimization. Several such algorithms have been implemented in the OpenMDAO framework developed at NASA Glenn Research Center (GRC). OpenMDAO is an open source engineering analysis framework, written in Python, for analyzing and solving Multi-Disciplinary Analysis and Optimization (MDAO) problems. It provides a number of solvers and optimizers, referred to as components and drivers, which users can leverage to build new tools and processes quickly and efficiently. Users may download, use, modify, and distribute the OpenMDAO software at no cost. This paper summarizes the process of analyzing and optimizing structural components using the framework's structural solvers and several gradient-based optimizers, along with a multi-objective genetic algorithm. For comparison purposes, the same structural components were analyzed and optimized using CometBoards, a code developed at NASA GRC. The reliability and efficiency of the OpenMDAO framework are compared and reported.
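The kind of gradient-based sizing optimization such drivers perform can be illustrated with a small stand-alone sketch; this is a generic quadratic-penalty gradient descent on a made-up sizing problem, not OpenMDAO's actual API, and all numbers are illustrative assumptions.

```python
def grad(F, x, eps=1e-6):
    """Central finite-difference gradient of F at x."""
    g = []
    for i in range(len(x)):
        xp, xm = x[:], x[:]
        xp[i] += eps
        xm[i] -= eps
        g.append((F(xp) - F(xm)) / (2 * eps))
    return g

def minimize_penalized(f, cons, x0, mu=100.0, lr=0.005, iters=5000):
    """Minimize f(x) subject to cons_i(x) <= 0 using a quadratic penalty
    and finite-difference gradient descent."""
    F = lambda x: f(x) + mu * sum(max(0.0, c(x)) ** 2 for c in cons)
    x = list(x0)
    for _ in range(iters):
        g = grad(F, x)
        x = [xi - lr * gi for xi, gi in zip(x, g)]
    return x

# Toy sizing problem: minimize member "weight" x1 + x2 with limits x_i >= 1
# (a stand-in for stress constraints); the exact constrained optimum is (1, 1).
x = minimize_penalized(lambda x: x[0] + x[1],
                       [lambda x: 1.0 - x[0], lambda x: 1.0 - x[1]],
                       x0=[3.0, 3.0])
```

The penalty formulation converges slightly inside the constraint boundary (here to about 0.995 instead of 1.0); production frameworks use proper constrained optimizers instead.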
International Nuclear Information System (INIS)
Richei, A.; Koch, M.K.; Unger, H.; Hauptmanns, U.
1998-01-01
A new procedure is developed for the probabilistic evaluation and optimization of the man-machine system. The resulting expert system, which is based on fuzzy set theory, is called HEROS, an acronym for Human Error Rate Assessment and Optimizing System. Several existing procedures for the probabilistic evaluation of human errors have various disadvantages. In all of these procedures, including the best known ones, fuzzy verbal expressions are often used for the evaluation of human factors. This use of verbal expressions for describing performance-shaping factors, enabling the evaluation of human factors, is the basic approach of HEROS. A model of a modern man-machine system is the basis of the procedure. Results from ergonomic studies are used to establish a rule-based expert system. HEROS simplifies the importance analysis for the evaluation of human factors, which is necessary for the optimization of the man-machine system. It can be used in all areas of probabilistic safety assessment. The application of HEROS in the scope of accident management procedures, and the comparison with the results of other procedures, is shown as an example of the usefulness and substantially wider applicability of this new procedure. (author)
Energy Technology Data Exchange (ETDEWEB)
Karamooz, Saeed [Vadatech Inc. (United States); Breeding, John Eric [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Justice, T Alan [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)
2017-08-01
As MicroTCA expands into applications beyond the telecommunications industry from which it originated, it faces new challenges in the area of inter-blade communications. The ability to achieve deterministic, low-latency communications between blades is critical to realizing a scalable architecture. In the past, legacy bus architectures accomplished inter-blade communications using dedicated parallel buses across the backplane. Because of limited fabric resources on its backplane, MicroTCA uses the carrier hub (MCH) for this purpose. Unfortunately, MCH products from commercial vendors are limited to standard bus protocols such as PCI Express, Serial Rapid IO and 10/40GbE. While these protocols have exceptional throughput capability, they are neither deterministic nor necessarily low-latency. To overcome this limitation, an MCH has been developed based on the Xilinx Virtex-7 690T FPGA. This MCH provides the system architect/developer complete flexibility in both the interface protocol and routing of information between blades. In this paper, we present the application of this configurable MCH concept to the Machine Protection System under development for the Spallation Neutron Source's proton accelerator. Specifically, we demonstrate the use of the configurable MCH as a 12x4-lane crossbar switch using the Aurora protocol to achieve a deterministic, low-latency data link. In this configuration, the crossbar has an aggregate bandwidth of 48 GB/s.
Directory of Open Access Journals (Sweden)
Yakai Xu
2017-01-01
Dynamic stiffness and damping of the headstock, a critical component of a precision horizontal machining center, are two main factors that influence machining accuracy and surface finish quality. The Constrained Layer Damping (CLD) structure has proven effective in raising the damping capacity of thin plate and shell structures. In this paper, a high-damping material is applied to the headstock to improve its damping capacity. The dynamic characteristics of the hybrid headstock are investigated analytically and experimentally. The results demonstrate that the resonant response amplitudes of the headstock with damping material decrease significantly compared to the original cast structure. To obtain the optimal configuration of the damping material, a topology optimization method based on Evolutionary Structural Optimization (ESO) is implemented. The Modal Strain Energy (MSE) method is employed to analyze the damping and to derive the sensitivity of the modal loss factor. The optimization results indicate that the added weight of damping material decreases by 50%, while the first two modal loss factors decrease by less than 23.5% compared to the original structure.
Directory of Open Access Journals (Sweden)
Hao Li
2017-01-01
Predicting the performance of a solar water heater (SWH) is challenging due to the complexity of the system. Fortunately, knowledge-based machine learning can provide a fast and precise prediction method for SWH performance. With the predictive power of machine learning models, we can further address a more challenging question: how to cost-effectively design a high-performance SWH? Here, we summarize our recent studies and propose a general framework for SWH design using a machine learning-based high-throughput screening (HTS) method. The design of a water-in-glass evacuated tube solar water heater (WGET-SWH) is selected as a case study to show the potential application of machine learning-based HTS to the design and optimization of solar energy systems.
Optimized Virtual Machine Placement with Traffic-Aware Balancing in Data Center Networks
Directory of Open Access Journals (Sweden)
Tao Chen
2016-01-01
Virtualization has been an efficient method to fully utilize computing resources such as servers. The way virtual machines (VMs) are placed among a large pool of servers greatly affects the performance of data center networks (DCNs). As network resources have become a main performance bottleneck in DCNs, we concentrate on VM placement with traffic-aware balancing to utilize the links in DCNs evenly. In this paper, we first propose the Virtual Machine Placement Problem with Traffic-Aware Balancing (VMPPTB), prove it to be NP-hard, and design a Longest Processing Time Based Placement (LPTBP) algorithm to solve it. To take advantage of communication locality, we then propose the Locality-Aware Virtual Machine Placement Problem with Traffic-Aware Balancing (LVMPPTB), a multiobjective optimization problem that simultaneously minimizes the maximum number of VM partitions of requests and the maximum bandwidth occupancy on uplinks of Top of Rack (ToR) switches. We also prove it to be NP-hard and design a heuristic Least-Load First Based Placement (LLBP) algorithm to solve it. Through extensive simulations, the proposed heuristic algorithm is shown to significantly balance the bandwidth occupancy on uplinks of ToR switches, while keeping the number of VM partitions of each request small.
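The Longest-Processing-Time-first idea behind the placement algorithm can be sketched as follows. This minimal version balances aggregate demand across servers; the paper's actual LPTBP algorithm balances link bandwidth in a DCN topology, and the demand figures below are made up for illustration.

```python
import heapq

def lpt_place(vm_demands, n_servers):
    """LPT placement: sort VM demands in decreasing order and greedily
    assign each VM to the currently least-loaded server."""
    heap = [(0.0, s) for s in range(n_servers)]        # (load, server id)
    heapq.heapify(heap)
    assignment = {}
    for vm, demand in sorted(vm_demands.items(), key=lambda kv: -kv[1]):
        load, s = heapq.heappop(heap)                  # least-loaded server
        assignment[vm] = s
        heapq.heappush(heap, (load + demand, s))
    return assignment

# Hypothetical bandwidth demands (arbitrary units) placed on two servers.
demands = {"vm1": 8, "vm2": 7, "vm3": 6, "vm4": 5, "vm5": 4}
plan = lpt_place(demands, n_servers=2)
```

LPT is a classic approximation for makespan-style balancing; here it yields loads of 17 and 13 against an optimal 15/15 split, illustrating why the paper still needs problem-specific heuristics.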
Optimization of CVD Diamond Coating Type on Micro Drills in PCB Machining
Lei, X. L.; He, Y.; Sun, F. H.
2016-12-01
The demand for better tools for machining printed circuit boards (PCBs) is increasing due to the extensive use of these boards in digital electronic products. This paper aims at optimizing the coating type on micro drills in order to extend their lifetime in PCB machining. First, tribotests involving microcrystalline diamond (MCD), nanocrystalline diamond (NCD) and bare tungsten carbide (WC-Co) against PCBs show that the NCD-PCB tribopair exhibits the lowest friction coefficient (0.35) due to the unique nano structure and low surface roughness of NCD films. Thereafter, the dry machining performance of the MCD- and NCD-coated micro drills on PCBs is systematically studied, using DLC- and TiAlN-coated micro drills for comparison. The experiments show that the working lives of these micro drills can be ranked as: NCD > TiAlN > DLC > MCD > bare WC-Co. The superior cutting performance of NCD-coated micro drills, in terms of the lowest flank wear growth rate, the absence of tool degradation (e.g. chipping, tool tipping), the best hole quality and the lowest feed force, may come from the excellent wear resistance and lower friction coefficient against PCB of NCD films, as well as their high adhesive strength on the underlying substrate.
Li, Ji; Hu, Guoqing; Zhou, Yonghong; Zou, Chong; Peng, Wei; Alam Sm, Jahangir
2017-04-19
As a solution with a high performance-cost ratio for differential pressure measurement, piezo-resistive differential pressure sensors are widely used in engineering processes. However, their performance is severely affected by the environmental temperature and the static pressure applied to them. In order to correct the nonlinear measuring characteristics of the piezo-resistive differential pressure sensor, compensation must synthetically consider both of these aspects. Advantages such as nonlinear approximation capability, highly desirable generalization ability and computational efficiency make the kernel extreme learning machine (KELM) a practical approach for this critical task. Since the KELM model is intrinsically sensitive to the regularization parameter and the kernel parameter, a searching scheme combining the coupled simulated annealing (CSA) algorithm and the Nelder-Mead simplex algorithm is adopted to find an optimal KELM parameter set. A calibration experiment at different working pressure levels was conducted within the operating temperature range to assess the proposed method. In comparison with other compensation models such as the back-propagation neural network (BP), radial basis function neural network (RBF), particle swarm optimization optimized support vector machine (PSO-SVM), particle swarm optimization optimized least squares support vector machine (PSO-LSSVM) and extreme learning machine (ELM), the compensation results show that the presented algorithm exhibits a more satisfactory performance on both temperature compensation and synthetic compensation problems.
Deterministic methods in radiation transport
International Nuclear Information System (INIS)
Rice, A.F.; Roussin, R.W.
1992-06-01
The Seminar on Deterministic Methods in Radiation Transport was held February 4-5, 1992, in Oak Ridge, Tennessee. Eleven presentations were made and the full papers are published in this report, along with three that were submitted but not given orally. These papers represent a good overview of the state of the art in the deterministic solution of radiation transport problems for a variety of applications of current interest to the Radiation Shielding Information Center user community.
Directory of Open Access Journals (Sweden)
Xinen Lv
2018-02-01
It is of great clinical significance to establish an accurate intelligent model to diagnose somatization disorder in community correctional personnel. In this study, a novel machine learning framework is proposed to predict the severity of somatization disorder in community correction personnel. The core of this framework is to adopt improved bacterial foraging optimization (IBFO) to optimize two key parameters (the penalty coefficient and the kernel width) of a kernel extreme learning machine (KELM) and build an IBFO-based KELM (IBFO-KELM) for the diagnosis of somatization disorder patients. The main innovation of the IBFO-KELM model is the introduction of opposition-based learning strategies into traditional bacterial foraging optimization, which increases the diversity of bacterial species, keeps a uniform distribution of individuals in the initial population, and improves both the convergence rate of the BFO optimization process and the probability of escaping from local optima. In order to verify the effectiveness of the proposed method, a 10-fold cross-validation based on data from a symptom self-assessment scale (SCL-90) is used to compare IBFO-KELM with BFO-KELM (based on the original bacterial foraging optimization), GA-KELM (based on a genetic algorithm), PSO-KELM (based on particle swarm optimization) and Grid-KELM (based on grid search). The experimental results show that the proposed IBFO-KELM prediction model outperforms the other methods in terms of classification accuracy, Matthews correlation coefficient (MCC), sensitivity and specificity. It can distinguish very well between severe and mild somatization disorder and assist the clinician with diagnosis.
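The opposition-based initialization that distinguishes IBFO from plain BFO can be sketched as follows; this is a minimal illustration of the general technique (generate opposite points and keep the fitter half), not the paper's full IBFO-KELM pipeline, and the toy fitness function is an assumption.

```python
import random

def opposition_init(pop_size, lo, hi, fitness, seed=0):
    """Opposition-based population initialization: draw a random population,
    add the opposite point x_opp[d] = lo[d] + hi[d] - x[d] of each member,
    then keep the best pop_size individuals (lower fitness = better)."""
    rng = random.Random(seed)
    dim = len(lo)
    pop = [[rng.uniform(lo[d], hi[d]) for d in range(dim)]
           for _ in range(pop_size)]
    opposites = [[lo[d] + hi[d] - x[d] for d in range(dim)] for x in pop]
    merged = pop + opposites
    merged.sort(key=fitness)
    return merged[:pop_size]

# Toy fitness: squared distance to the (unknown to the algorithm) optimum (1, 1).
pop = opposition_init(10, [-5, -5], [5, 5],
                      lambda x: (x[0] - 1) ** 2 + (x[1] - 1) ** 2)
```

Evaluating both a point and its opposite roughly doubles the chance that at least one initial individual lands near the optimum, which is where the reported convergence-rate improvement comes from.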
Directory of Open Access Journals (Sweden)
Muhammad Munawar
2012-01-01
Optimization of surface roughness has been one of the primary objectives in most machining operations. Poor control of the desired surface roughness generates nonconforming parts and results in increased cost and loss of productivity due to rework or scrap. The surface roughness value results from several process variables, among which machine tool condition is a significant one. In this study, experiments were carried out to investigate the effect of machine tool condition on surface roughness. The variable used to represent the machine tool's condition was vibration amplitude. The input parameters, besides vibration amplitude, were feed rate and insert nose radius; cutting speed and depth of cut were kept constant. Based on a Taguchi orthogonal array, a series of experiments was designed and performed on AISI 1040 carbon steel bars at the default and induced machine tool vibration amplitudes. ANOVA (Analysis of Variance) revealed that vibration amplitude and feed rate had a moderate effect on surface roughness, while insert nose radius had the most significant effect. It was also found that a machine tool with low vibration amplitude produced better surface roughness, and that an insert with a larger nose radius produced better surface roughness at low feed rate.
Design of deterministic interleaver for turbo codes
International Nuclear Information System (INIS)
Arif, M.A.; Sheikh, N.M.; Sheikh, A.U.H.
2008-01-01
The choice of a suitable interleaver for turbo codes can improve performance considerably. For long block lengths, random interleavers perform well, but for some applications it is desirable to keep the block length shorter to avoid latency. For such applications deterministic interleavers perform better. The performance and design of a deterministic interleaver for short-frame turbo codes are considered in this paper. The main characteristic of this class of deterministic interleaver is that its algebraic design selects the best permutation generator such that the points in smaller subsets of the interleaved output are uniformly spread over the entire range of the information data frame. It is observed that an interleaver designed in this manner improves the minimum distance or reduces the multiplicity of the first few spectral lines of the minimum distance spectrum. Finally, we introduce a circular shift in the permutation function to reduce the correlation between the parity bits corresponding to the original and interleaved data frames, improving the decoding capability of the MAP (maximum a posteriori) probability decoder. Our deterministic interleaver design outperforms the semi-random interleavers and the deterministic interleavers reported in the literature. (author)
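One well-known family of algebraically designed deterministic interleavers is the quadratic permutation polynomial (QPP) interleaver; the sketch below shows a QPP permutation with a circular shift added, as an illustration of the idea rather than the paper's own permutation generator and shift selection. The pair f1 = 3, f2 = 10 is the standard QPP pair for block length N = 40 used in LTE turbo codes.

```python
def qpp_interleave(data, f1, f2, shift=0):
    """Quadratic permutation polynomial interleaver with an optional circular
    shift: output position i reads input ((f1*i + f2*i*i) % N + shift) % N."""
    n = len(data)
    perm = [((f1 * i + f2 * i * i) % n + shift) % n for i in range(n)]
    # Valid (f1, f2) pairs make perm a true permutation of 0..N-1.
    assert sorted(perm) == list(range(n)), "f1, f2 are not valid for this N"
    return [data[p] for p in perm], perm

block = list(range(40))
interleaved, perm = qpp_interleave(block, f1=3, f2=10, shift=5)
```

Because the mapping is a closed-form polynomial, no interleaver table needs to be stored or transmitted, which is precisely the appeal of deterministic designs for short frames.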
Deterministic Approach to Detect Heart Sound Irregularities
Directory of Open Access Journals (Sweden)
Richard Mengko
2017-07-01
A new method to detect heart sounds that does not require machine learning is proposed. The heart sound is a time series event generated by the heart's mechanical system. From the analysis of the heart sound's S-transform and an understanding of how the heart works, it can be deduced that each heart sound component has unique properties in terms of timing, frequency, and amplitude. Based on these facts, a deterministic method can be designed to identify each heart sound component. The recorded heart sound can then be printed with each component correctly labeled, which greatly helps the physician diagnose the heart problem. The results show that most known heart sounds were successfully detected; detection failed in some murmur cases. This can be improved by adding more heuristics, including setting initial parameters such as the noise threshold accurately and taking into account the recording equipment and the environmental conditions. It is expected that this method can be integrated into an electronic stethoscope biomedical system.
Optimization of Multiple Responses of Ultrasonic Machining (USM) Process: A Comparative Study
Directory of Open Access Journals (Sweden)
Rina Chakravorty
2013-04-01
The ultrasonic machining (USM) process has multiple performance measures, e.g. material removal rate (MRR), tool wear rate (TWR) and surface roughness (SR), which are affected by several process parameters. Researchers have commonly attempted to optimize the USM process with respect to individual responses separately. In the recent past, several systematic procedures for dealing with multi-response optimization problems have been proposed in the literature. Although most of these methods use complex mathematics or statistics, there are some simple methods that engineers can comprehend and implement to optimize the multiple responses of USM processes. However, the relative optimization performance of these approaches is unknown, because the effectiveness of different methods has been demonstrated using different sets of process data. In this paper, the computational requirements of four simple methods are presented, and two sets of past experimental data on USM processes are analysed using these methods. The relative performances of these methods are then compared. The results show that the weighted signal-to-noise (WSN) ratio method and the utility theory (UT) method usually give better overall optimization performance for the USM process than the other approaches.
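The weighted signal-to-noise (WSN) ratio idea can be sketched as follows: compute a Taguchi S/N ratio per response per trial, min-max normalize each response column, and combine with user weights. The normalization and the sample data are illustrative assumptions, not the paper's experimental values.

```python
import math

def sn_ratio(values, kind):
    """Taguchi signal-to-noise ratio for one trial's repeated measurements.
    kind: 'larger' (the-larger-the-better, e.g. MRR) or
          'smaller' (the-smaller-the-better, e.g. TWR, SR)."""
    n = len(values)
    if kind == "larger":
        msd = sum(1.0 / (v * v) for v in values) / n
    else:
        msd = sum(v * v for v in values) / n
    return -10.0 * math.log10(msd)

def weighted_sn(trial_responses, kinds, weights):
    """Combine normalized S/N ratios of several responses into one weighted
    score per trial (higher is better)."""
    n_resp = len(kinds)
    sn = [[sn_ratio(t[r], kinds[r]) for r in range(n_resp)]
          for t in trial_responses]
    scores = []
    for row in sn:
        norm = []
        for r in range(n_resp):
            col = [s[r] for s in sn]
            lo, hi = min(col), max(col)
            norm.append((row[r] - lo) / (hi - lo) if hi > lo else 1.0)
        scores.append(sum(w * x for w, x in zip(weights, norm)))
    return scores

# Two hypothetical USM trials; responses = (MRR: larger-better, SR: smaller-better),
# with MRR weighted more heavily.
trials = [[[1.2, 1.3], [0.8, 0.9]],
          [[2.0, 2.1], [1.5, 1.6]]]
scores = weighted_sn(trials, kinds=["larger", "smaller"], weights=[0.7, 0.3])
```

The trial with the highest weighted score is then examined level-by-level, as in the usual Taguchi analysis, to pick the optimal parameter setting.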
Directory of Open Access Journals (Sweden)
Xiaomin Xu
2015-01-01
Icing on transmission lines can cause not only electrical faults such as gap discharge and icing flashover, but also mechanical failure of towers, conductors, insulators, and other components, bringing great harm to people's daily life and work. Thus, accurate prediction of ice thickness is important for power departments to control ice disasters effectively. Based on an analysis of the standard support vector machine, this paper presents a weighted support vector machine regression model based on similarity (WSVR). According to the different importance of samples, the weighted support vector machine is introduced and its parameters are optimized by a hybrid swarm intelligence optimization algorithm combining particle swarm and ant colony optimization (PSO-ACO), which improves the generalization ability of the model. In the case study, actual ice thickness and climate data from a certain area of Hunan province are used to predict the icing thickness of the area, verifying the validity and applicability of the proposed method. The predicted results show that the intelligent model proposed in this paper has higher precision and stronger generalization ability.
Directory of Open Access Journals (Sweden)
Li Pan
2016-03-01
Virtualization technologies make it possible for cloud providers to consolidate multiple IaaS provisions into a single server in the form of virtual machines (VMs). Additionally, in order to fulfill the divergent service requirements of multiple users, a cloud provider needs to offer several types of VM instances, which are associated with varying configurations and performance, as well as different prices. In such a heterogeneous virtual machine placement process, one significant problem faced by a cloud provider is how to optimally accept and place multiple VM service requests into its cloud data centers to achieve revenue maximization. To address this issue, in this paper we first formulate the revenue maximization problem during VM admission control as a multiple-dimensional knapsack problem, which is known to be NP-hard. Then, we propose a cross-entropy-based optimization approach to this revenue maximization problem, which obtains a near-optimal eligible set of waiting VM service requests for the provider to accept into its data centers. Finally, through extensive experiments and measurements in a simulated environment, with VM instance classes derived from real-world cloud systems, we show that our proposed cross-entropy-based admission control optimization algorithm is efficient and effective in maximizing cloud providers' revenue in a public cloud computing environment.
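The cross-entropy method for such a knapsack-style admission problem can be sketched as follows; for brevity this is a single-dimensional 0/1 knapsack (the paper's problem is multi-dimensional), and the item values, weights, and CE parameters are illustrative assumptions.

```python
import random

def cross_entropy_knapsack(values, weights, capacity, n_samples=200,
                           n_elite=20, iters=30, alpha=0.7, seed=0):
    """Cross-entropy method for 0/1 knapsack: sample accept/reject vectors
    from independent Bernoulli(p_i) distributions, keep the elite samples,
    and move p toward the elites' empirical acceptance frequencies."""
    rng = random.Random(seed)
    n = len(values)
    p = [0.5] * n
    best_x, best_v = None, -1.0
    for _ in range(iters):
        samples = []
        for _ in range(n_samples):
            x = [1 if rng.random() < p[i] else 0 for i in range(n)]
            w = sum(wi for wi, xi in zip(weights, x) if xi)
            # Infeasible samples (over capacity) score zero revenue.
            v = sum(vi for vi, xi in zip(values, x) if xi) if w <= capacity else 0.0
            samples.append((v, x))
            if v > best_v:
                best_v, best_x = v, x
        samples.sort(key=lambda s: -s[0])
        elite = [x for _, x in samples[:n_elite]]
        freq = [sum(x[i] for x in elite) / n_elite for i in range(n)]
        p = [alpha * f + (1 - alpha) * pi for f, pi in zip(freq, p)]  # smoothing
    return best_x, best_v

# Hypothetical VM requests: revenue values and resource weights.
values = [10, 13, 7, 8, 2, 9]
weights = [5, 8, 3, 4, 1, 6]
x, v = cross_entropy_knapsack(values, weights, capacity=12)
```

On this tiny instance the optimum (accept requests 1, 3 and 4 for revenue 25) is found quickly; the appeal of the CE method is that the same sampling loop scales to the multi-dimensional case where exact solvers struggle.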
DEFF Research Database (Denmark)
Chen, Zhe; Zhang, Jianzhong; Cheng, Ming
2008-01-01
This paper proposes a new approach to minimize the cogging torque of a stator interior permanent magnet (SIPM) machine. The optimization of stator slot gap and permanent magnet is carried out and the cogging torque ripple is analyzed by using finite element analysis. Experiments on a prototype...
Proving Non-Deterministic Computations in Agda
Directory of Open Access Journals (Sweden)
Sergio Antoy
2017-01-01
We investigate proving properties of Curry programs using Agda. First, we address the functional correctness of Curry functions that, apart from some syntactic and semantic differences, are in the intersection of the two languages. Second, we use Agda to model non-deterministic functions with two distinct and competing approaches to incorporating the non-determinism. The first approach eliminates non-determinism by considering the set of all non-deterministic values produced by an application. The second approach encodes every non-deterministic choice that the application could perform. We consider our initial experiment a success. Although proving properties of programs is a notoriously difficult task, the functional logic paradigm does not seem to add any significant layer of difficulty or complexity to the task.
SETTING OF TASK OF OPTIMIZATION OF THE ACTIVITY OF A MACHINE-BUILDING CLUSTER COMPANY
Directory of Open Access Journals (Sweden)
A. V. Romanenko
2014-01-01
This work develops methodological approaches to the management of a machine-building enterprise on the basis of cost reduction, optimization of the order portfolio, and capacity utilization in operational management. The economic efficiency of such entities in the real sector of the economy is evaluated, including the timing of orders, which depends on the construction of production facilities and on maintaining fixed assets at a given level. The key components of an economic-mathematical model of industrial activity are formulated and an optimization criterion is defined: a formula for accumulated profit that accounts for production capacity and technology, current direct variable costs, property tax, and expenses arising from variance when production tasks are replaced within a single time period. The main component of optimizing the enterprise's production activity under this criterion is the vector of direct variable costs. It depends on the number of product types in the current order portfolio, production schedules, the normative time for the release of a particular product, the available time fund of production positions, current valuations for particular groups of technological operations, and the current priority of operations by the degree of readiness of internal orders. Modeling industrial activity on the basis of the proposed provisions would allow enterprises of a machine-building cluster that pursue active innovation to improve the efficient use of available production resources by optimizing current operations under high uncertainty in demand planning and while carrying out maintenance and routine repairs.
Directory of Open Access Journals (Sweden)
M. Kaladhar
2012-08-01
Full Text Available In this work, the Taguchi method is applied to determine the optimum process parameters for turning AISI 304 austenitic stainless steel on a CNC lathe. A chemical vapour deposition (CVD) coated cemented carbide cutting insert, produced by DuratomicTM technology with nose radii of 0.4 and 0.8 mm, is used. The tests are conducted at four levels of cutting speed, feed and depth of cut. The influence of these parameters on the surface roughness and material removal rate (MRR) is investigated. Analysis of Variance (ANOVA) is also used to analyze the influence of the cutting parameters during machining. The results revealed that cutting speed most significantly (46.05%) affected the machined surface roughness, followed by nose radius (23.7%). The influence of the depth of cut (61.31%) on material removal rate (MRR) is significantly large, with cutting speed (20.40%) the next most significant factor. Optimal ranges and optimal levels of the parameters are also predicted for the responses.
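The Taguchi analysis above ranks parameters by their contribution to the response. As a rough illustration of the mechanics, the sketch below computes the smaller-is-better signal-to-noise ratio used for responses like surface roughness and averages it per factor level; the run data and level assignments are hypothetical, not taken from the paper.

```python
import math

def sn_smaller_is_better(values):
    """Taguchi smaller-is-better signal-to-noise ratio in dB:
    S/N = -10 * log10(mean(y^2))."""
    return -10.0 * math.log10(sum(v * v for v in values) / len(values))

def level_means(sn_by_run, levels_by_run):
    """Average S/N per factor level; the level with the highest mean
    S/N is the preferred setting for that factor."""
    sums, counts = {}, {}
    for sn, level in zip(sn_by_run, levels_by_run):
        sums[level] = sums.get(level, 0.0) + sn
        counts[level] = counts.get(level, 0) + 1
    return {lvl: sums[lvl] / counts[lvl] for lvl in sums}

# Hypothetical surface-roughness replicates (um) for four runs
runs = [[1.2, 1.3], [0.8, 0.9], [1.5, 1.4], [0.7, 0.6]]
sn = [sn_smaller_is_better(r) for r in runs]
# Cutting-speed level used in each run (two levels, illustrative only)
speed_levels = [1, 2, 1, 2]
print(level_means(sn, speed_levels))
```

The spread between the per-level means is what ANOVA then decomposes into percentage contributions.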
Glucose Oxidase Biosensor Modeling and Predictors Optimization by Machine Learning Methods
Directory of Open Access Journals (Sweden)
Felix F. Gonzalez-Navarro
2016-10-01
Full Text Available Biosensors are small analytical devices incorporating a biological recognition element and a physico-chemical transducer to convert a biological signal into an electrical reading. Nowadays, their technological appeal resides in their fast performance, high sensitivity and continuous measuring capabilities; however, a full understanding is still under research. This paper aims to contribute to this growing field of biotechnology, with a focus on Glucose-Oxidase Biosensor (GOB) modeling through statistical learning methods from a regression perspective. We model the amperometric response of a GOB with dependent variables under different conditions, such as temperature, benzoquinone, pH and glucose concentrations, by means of several machine learning algorithms. Since the sensitivity of a GOB response is strongly related to these dependent variables, their interactions should be optimized to maximize the output signal, for which a genetic algorithm and simulated annealing are used. We report a model that shows a good generalization error and is consistent with the optimization.
Glucose Oxidase Biosensor Modeling and Predictors Optimization by Machine Learning Methods.
Gonzalez-Navarro, Felix F; Stilianova-Stoytcheva, Margarita; Renteria-Gutierrez, Livier; Belanche-Muñoz, Lluís A; Flores-Rios, Brenda L; Ibarra-Esquer, Jorge E
2016-10-26
Biosensors are small analytical devices incorporating a biological recognition element and a physico-chemical transducer to convert a biological signal into an electrical reading. Nowadays, their technological appeal resides in their fast performance, high sensitivity and continuous measuring capabilities; however, a full understanding is still under research. This paper aims to contribute to this growing field of biotechnology, with a focus on Glucose-Oxidase Biosensor (GOB) modeling through statistical learning methods from a regression perspective. We model the amperometric response of a GOB with dependent variables under different conditions, such as temperature, benzoquinone, pH and glucose concentrations, by means of several machine learning algorithms. Since the sensitivity of a GOB response is strongly related to these dependent variables, their interactions should be optimized to maximize the output signal, for which a genetic algorithm and simulated annealing are used. We report a model that shows a good generalization error and is consistent with the optimization.
Creating optimized machine working patterns on agricultural fields
Mark Spekken
2015-01-01
In the current agricultural context, unproductive machine time on fields and machine impacts on soil along pathways are unavoidable. These machines have direct and indirect costs associated with their field work, with non-productive time spent manoeuvring when reaching field borders; likewise, product is applied twice when machines cover headlands while adding farm inputs. Both issues worsen under irregular field geometry. Moreover, unproductive ...
FEM Optimal Design of Energy Efficient Induction Machines
Directory of Open Access Journals (Sweden)
TUDORACHE, T.
2009-06-01
Full Text Available This paper deals with a comparative numerical analysis of the performance of several design solutions for induction machines with improved energy efficiency. Starting from a typical cast aluminum cage induction machine, the study highlights the benefit of replacing the classical cast aluminum cage with a cast copper cage in the manufacture of future generations of high-efficiency induction machines used as motors or generators. The advantage of replacing standard electrical steel with a higher-grade steel with smaller losses is then pointed out. The numerical analysis carried out in the paper is based on a 2D plane-parallel finite element model of the induction machine, the numerical results being discussed and compared with experimental measurements.
Directory of Open Access Journals (Sweden)
Jian-Long Kuo
2015-01-01
Full Text Available This paper proposes a turbo injection mode (TIM) for an axial flux motor applied to an injection molding machine, which requires different speed and force settings over a complete injection process. The interleaved winding structure of the motor provides two injection levels to supply sufficient injection force. Two wye-wye windings are designed so that two control modes can be switched conveniently; the wye-wye configuration switches between two force levels. When only one set of wye windings is energized, field-weakening operation is achieved, under which both torque and speed increase. To meet the two control objectives for torque and speed of the motor, a fuzzy-based multiple performance characteristics index (MPCI) with particle swarm optimization (PSO) is used to find the multiobjective optimal design solution, with both torque and speed expected to be maximal at the same time. Three control factors are studied: winding diameter, winding type, and air gap. Experimental results show that both torque and speed increase under the optimal condition, providing torque and speed large enough to perform the turbo injection mode in the injection process of the injection molding machine.
Energy Technology Data Exchange (ETDEWEB)
Sahin, Mehmet Oezguer; Kruecker, Dirk; Melzer-Pellmann, Isabell [DESY, Hamburg (Germany)
2016-07-01
In this talk, the use of Support Vector Machines (SVM) is promoted for new-physics searches in high-energy physics. We developed an interface, called the SVM HEP Interface (SVM-HINT), for a popular SVM library, LibSVM, and introduced a statistical-significance based hyper-parameter optimization algorithm for new-physics searches. As an example case study, a search for Supersymmetry at the Large Hadron Collider is presented to demonstrate the capabilities of SVM using SVM-HINT.
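The statistical-significance based optimization in SVM-HINT is not spelled out in the abstract. The toy sketch below only illustrates the general idea of scoring a classifier working point by the approximate counting significance Z = s/sqrt(s+b); all scores and the threshold grid are made up for illustration.

```python
import math

def best_cut(signal_scores, background_scores, cuts):
    """Pick the classifier-output threshold maximizing the approximate
    significance Z = s / sqrt(s + b), a common counting-experiment proxy."""
    def z(cut):
        s = sum(1 for x in signal_scores if x > cut)   # signal events kept
        b = sum(1 for x in background_scores if x > cut)  # background kept
        return s / math.sqrt(s + b) if s + b > 0 else 0.0
    return max(cuts, key=z)

sig = [0.9, 0.8, 0.85, 0.7, 0.95]   # hypothetical classifier outputs (signal)
bkg = [0.1, 0.2, 0.6, 0.75, 0.3]    # hypothetical classifier outputs (background)
print(best_cut(sig, bkg, [0.5, 0.65, 0.8]))
```

A hyper-parameter search would wrap such a figure of merit around the full train-and-evaluate loop rather than a fixed score list.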
2011-02-02
... DEPARTMENT OF LABOR Employment and Training Administration [TA-W-74,554] International Business Machines (IBM), Software Group Business Unit, Optim Data Studio Tools QA, San Jose, CA; Notice of Affirmative Determination Regarding Application for Reconsideration By application dated November 29, 2010, a worker and a state workforce official...
Bolodurina, I. P.; Parfenov, D. I.
2018-01-01
We have elaborated a neural network model of virtual network flow identification based on the statistical properties of flows circulating in the data center network and on characteristics that describe the content of packets transmitted through network objects. This enabled us to establish the optimal set of attributes for identifying virtual network functions. Using the data obtained in our research, we have devised an algorithm for optimizing the placement of virtual network functions. Our approach uses a hybrid virtualization method combining virtual machines and containers, which makes it possible to reduce the infrastructure load and the response time in the virtual data center network. The algorithmic solution is based on neural networks, which allows it to scale to any number of network function copies.
Cao, Jin; Jiang, Zhibin; Wang, Kangzhou
2017-07-01
Many nonlinear customer satisfaction-related factors significantly influence future customer demand for service-oriented manufacturing (SOM). To address this issue and enhance prediction accuracy, this article develops a novel customer demand prediction approach for SOM that combines the phase space reconstruction (PSR) technique with an optimized least squares support vector machine (LSSVM). First, the prediction sample space is reconstructed by the PSR to enrich the time-series dynamics of the limited data sample. Then, the generalization and learning ability of the LSSVM are improved by a hybrid polynomial and radial basis function kernel. Finally, the key parameters of the LSSVM are optimized by the particle swarm optimization algorithm. In a real case study, customer demand prediction for an air conditioner compressor is implemented, and the effectiveness and validity of the proposed approach are demonstrated by comparison with other classical prediction approaches.
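Phase space reconstruction as used above typically means Takens-style time-delay embedding of a scalar series into state vectors. A minimal sketch (in practice the embedding dimension and delay are chosen by methods such as false nearest neighbours and mutual information, which are not shown here):

```python
def reconstruct(series, dim, tau):
    """Time-delay embedding (Takens): map a scalar series into
    dim-dimensional state vectors with delay tau."""
    n = len(series) - (dim - 1) * tau
    return [tuple(series[i + j * tau] for j in range(dim)) for i in range(n)]

demo = [0, 1, 2, 3, 4, 5, 6]  # stand-in for a demand time series
print(reconstruct(demo, dim=3, tau=2))
# -> [(0, 2, 4), (1, 3, 5), (2, 4, 6)]
```

The embedded vectors (all but the last coordinate) then serve as regression inputs, with the next value as the target.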
Deterministic chaotic dynamics of Raba River flow (Polish Carpathian Mountains)
Kędra, Mariola
2014-02-01
Is the underlying dynamics of river flow random or deterministic? If it is deterministic, is it deterministic chaotic? This issue is still controversial. The application of several independent methods, techniques and tools for studying daily river flow data gives consistent, reliable and clear-cut results to the question. The outcomes point out that the investigated discharge dynamics is not random but deterministic. Moreover, the results completely confirm the nonlinear deterministic chaotic nature of the studied process. The research was conducted on daily discharge from two selected gauging stations of the mountain river in southern Poland, the Raba River.
Directory of Open Access Journals (Sweden)
Yukai Yao
2015-01-01
Full Text Available We propose an optimized Support Vector Machine classifier, named PMSVM, in which system normalization, PCA, and multilevel grid search methods are used for data preprocessing and parameter optimization, respectively. The main goals of this study are to improve the classification efficiency and accuracy of SVM. Sensitivity, specificity, precision, ROC curves, and so forth are adopted to appraise the performance of PMSVM. Experimental results show that PMSVM achieves better accuracy and remarkably higher efficiency than traditional SVM algorithms.
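A multilevel grid search can be pictured as a coarse-to-fine refinement of a parameter grid. The sketch below applies the idea to a one-dimensional toy objective standing in for cross-validated SVM error; the point count and number of levels are arbitrary choices, not PMSVM's.

```python
def multilevel_grid_search(score, lo, hi, points=5, levels=3):
    """Coarse-to-fine 1-D grid search: evaluate a coarse grid, then
    shrink the search window around the best point at each level."""
    best = None
    for _ in range(levels):
        step = (hi - lo) / (points - 1)
        grid = [lo + i * step for i in range(points)]
        best = min(grid, key=score)          # keep the best grid point
        lo, hi = best - step, best + step    # refine around it
    return best

# Toy objective standing in for cross-validated SVM error over C
score = lambda c: (c - 2.7) ** 2
best_c = multilevel_grid_search(score, 0.0, 10.0)
print(round(best_c, 3))
```

In a real pipeline the same refinement would run over a 2-D (C, gamma) grid with the validation error as `score`.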
International Nuclear Information System (INIS)
Aspelund, Audun
2012-01-01
Process Synthesis (PS) is a term used to describe a class of general and systematic methods for the conceptual design of processing plants and energy systems. The term also refers to the development of the process flowsheet (structure or topology), the selection of unit operations and the determination of the most important operating conditions. In this thesis an attempt is made to characterize some of the most common methodologies in a PS pyramid and to discuss their advantages and disadvantages as well as where in the design phase they can be used most efficiently. The thesis shows how design tools have been developed for subambient processes by combining and expanding PS methods such as heuristic rules, sequential modular process simulation, Pinch Analysis, Exergy Analysis, Mathematical Programming with deterministic optimization methods, and stochastic optimization methods. The most important contributions to the process design community are three new methodologies that include pressure as an important variable in heat exchanger network synthesis (HENS). The methodologies have been used to develop a novel and efficient energy chain based on stranded natural gas, including power production with carbon capture and sequestration (CCS). This Liquefied Energy Chain consists of an offshore process, a combined gas carrier and an onshore process, and is capable of efficiently exploiting resources that cannot be utilized economically today, with minor CO2 emissions. Finally, a new stochastic optimization approach based on Tabu Search (TS), the Nelder-Mead method or Downhill Simplex Method (NMDS), and the sequential process simulator HYSYS is used to search for better solutions for the Liquefied Energy Chain with respect to minimum cost or maximum profit. (au)
Energy Technology Data Exchange (ETDEWEB)
Aspelund, Audun
2012-07-01
Process Synthesis (PS) is a term used to describe a class of general and systematic methods for the conceptual design of processing plants and energy systems. The term also refers to the development of the process flowsheet (structure or topology), the selection of unit operations and the determination of the most important operating conditions. In this thesis an attempt is made to characterize some of the most common methodologies in a PS pyramid and to discuss their advantages and disadvantages as well as where in the design phase they can be used most efficiently. The thesis shows how design tools have been developed for subambient processes by combining and expanding PS methods such as heuristic rules, sequential modular process simulation, Pinch Analysis, Exergy Analysis, Mathematical Programming with deterministic optimization methods, and stochastic optimization methods. The most important contributions to the process design community are three new methodologies that include pressure as an important variable in heat exchanger network synthesis (HENS). The methodologies have been used to develop a novel and efficient energy chain based on stranded natural gas, including power production with carbon capture and sequestration (CCS). This Liquefied Energy Chain consists of an offshore process, a combined gas carrier and an onshore process, and is capable of efficiently exploiting resources that cannot be utilized economically today, with minor CO2 emissions. Finally, a new stochastic optimization approach based on Tabu Search (TS), the Nelder-Mead method or Downhill Simplex Method (NMDS), and the sequential process simulator HYSYS is used to search for better solutions for the Liquefied Energy Chain with respect to minimum cost or maximum profit. (au)
Deterministic chaos in the pitting phenomena of passivable alloys
International Nuclear Information System (INIS)
Hoerle, Stephane
1998-01-01
It was shown that electrochemical noise recorded under stable pitting conditions exhibits deterministic (even chaotic) features. The occurrence of deterministic behaviors depends on the severity of the material/solution combination. Thus, electrolyte composition (the [Cl-]/[NO3-] ratio, pH), passive film thickness or alloy composition can change the deterministic features. A single pit is sufficient to observe deterministic behaviors. The electrochemical noise signals are non-stationary, which hints at a change in pit behavior over time (propagation speed or mean). Modifying the electrolyte composition reveals transitions between random and deterministic behaviors. Spontaneous transitions between deterministic behaviors with different features (bifurcations) are also evidenced; such bifurcations illuminate various routes to chaos. The routes to chaos and the features of the chaotic signals suggest models (both continuous and discontinuous models are proposed) of the electrochemical mechanisms inside a pit that describe the experimental behaviors and the effects of the various parameters quite well. The analysis of the chaotic behavior of a pit leads to a better understanding of propagation mechanisms and provides tools for pit monitoring. (author) [fr
International Nuclear Information System (INIS)
Yin, Hao; Dong, Zhen; Chen, Yunlong; Ge, Jiafei; Lai, Loi Lei; Vaccaro, Alfredo; Meng, Anbo
2017-01-01
Highlights: • A secondary decomposition approach is applied in the data pre-processing. • The empirical mode decomposition is used to decompose the original time series. • IMF1 continues to be decomposed by applying wavelet packet decomposition. • Crisscross optimization algorithm is applied to train the extreme learning machine. • The proposed SHD-CSO-ELM outperforms other previous methods in the literature. - Abstract: Large-scale integration of wind energy into the electric grid is restricted by its inherent intermittence and volatility, so increased utilization of wind power necessitates its accurate prediction. The contribution of this study is a new hybrid forecasting model for short-term wind power prediction using a secondary hybrid decomposition approach. In the data pre-processing phase, empirical mode decomposition is used to decompose the original time series into several intrinsic mode functions (IMFs). A unique feature is that the generated IMF1 continues to be decomposed into appropriate and detailed components by applying wavelet packet decomposition. In the training phase, all the transformed sub-series are forecasted with an extreme learning machine trained by our recently developed crisscross optimization algorithm (CSO). The final predicted values are obtained by aggregation. The results show that: (a) the performance of empirical mode decomposition can be significantly improved by decomposing its IMF1 with wavelet packet decomposition; (b) the CSO algorithm performs satisfactorily in addressing the premature convergence problem when applied to optimize the extreme learning machine; (c) the proposed approach has a great advantage over previous hybrid models in terms of prediction accuracy.
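The extreme learning machine at the core of such hybrid models uses randomly drawn hidden-layer weights and fits only the output weights in closed form. The pure-Python sketch below trains one on a made-up scalar regression task, solving ridge-regularized normal equations in place of the paper's CSO training; all sizes and data are illustrative assumptions.

```python
import math, random

def gauss_solve(A, v):
    """Plain Gaussian elimination with partial pivoting for A x = v."""
    n = len(v)
    M = [row[:] + [v[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

def elm_train(xs, ys, hidden=12, ridge=1e-6, seed=0):
    """Extreme learning machine on scalar inputs: random hidden layer,
    closed-form ridge least-squares output weights (no backprop)."""
    rng = random.Random(seed)
    w = [rng.uniform(-2, 2) for _ in range(hidden)]
    b = [rng.uniform(-2, 2) for _ in range(hidden)]
    H = [[math.tanh(w[j] * x + b[j]) for j in range(hidden)] for x in xs]
    # Normal equations: (H^T H + ridge*I) beta = H^T y
    A = [[sum(H[r][i] * H[r][j] for r in range(len(xs)))
          + (ridge if i == j else 0.0) for j in range(hidden)]
         for i in range(hidden)]
    v = [sum(H[r][i] * ys[r] for r in range(len(xs))) for i in range(hidden)]
    return w, b, gauss_solve(A, v)

def elm_predict(x, w, b, beta):
    return sum(beta[j] * math.tanh(w[j] * x + b[j]) for j in range(len(w)))

# Hypothetical task: fit y = sin(x) on [0, 3]
xs = [i * 0.05 for i in range(61)]
ys = [math.sin(x) for x in xs]
w, b, beta = elm_train(xs, ys)
err = max(abs(elm_predict(x, w, b, beta) - ys[i]) for i, x in enumerate(xs))
print("max abs fit error:", round(err, 4))
```

CSO (or any metaheuristic) would instead search the hidden-layer weights themselves; the closed-form output step above is what makes ELM training fast.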
International Nuclear Information System (INIS)
Chen, Zhicong; Wu, Lijun; Cheng, Shuying; Lin, Peijie; Wu, Yue; Lin, Wencheng
2017-01-01
Highlights: •An improved Simulink based modeling method is proposed for PV modules and arrays. •Key points of I-V curves and PV model parameters are used as the feature variables. •Kernel extreme learning machine (KELM) is explored for PV array fault diagnosis. •The parameters of the KELM algorithm are optimized by the Nelder-Mead simplex method. •The optimized KELM fault diagnosis model achieves high accuracy and reliability. -- Abstract: Fault diagnosis of photovoltaic (PV) arrays is important for improving the reliability, efficiency and safety of PV power stations, because PV arrays usually operate in harsh outdoor environments and tend to suffer various faults. Due to the nonlinear output characteristics and varying operating environment of PV arrays, many machine learning based fault diagnosis methods have been proposed. However, some issues remain: fault diagnosis performance is limited by insufficient monitored information; fault diagnosis models are inefficient to train and update; and labeled fault data samples are hard to obtain by field experiments. To address these issues, this paper contributes in the following three aspects: (1) based on the key points and model parameters extracted from monitored I-V characteristic curves and the environment condition, an effective and efficient feature vector of seven dimensions is proposed as the input of the fault diagnosis model; (2) the emerging kernel based extreme learning machine (KELM), which features extremely fast learning speed and good generalization performance, is utilized to automatically establish the fault diagnosis model, and the Nelder-Mead Simplex (NMS) optimization method is employed to optimize the KELM parameters which affect the classification performance; (3) an improved, accurate Simulink based PV modeling approach is proposed for a laboratory PV array to facilitate fault simulation and data sample acquisition. Intensive fault experiments are
Directory of Open Access Journals (Sweden)
J.S. Pang
2014-08-01
Full Text Available This paper introduces the application of the Taguchi optimization methodology to the cutting parameters of an end-milling process for machining the halloysite nanotube (HNT) with aluminium reinforced epoxy hybrid composite material under dry conditions. The machining parameters chosen for evaluation in this study are the depth of cut (d), cutting speed (S) and feed rate (f), while the response factors measured are the surface roughness of the machined composite surface and the cutting force. An orthogonal array of the Taguchi method was set up and used to analyse the effect of the milling parameters on the surface roughness and cutting force. The results show that the Taguchi method can determine the best combination of machining parameters providing the optimal machining response conditions, namely the lowest surface roughness and lowest cutting force. For the best surface finish, A1-B3-C3 (d = 0.4 mm, S = 1500 rpm, f = 60 mmpm) is found to be the optimized combination of levels for the three control factors, while the combination providing the lowest cutting force is A2-B2-C2 (d = 0.6 mm, S = 1000 rpm, f = 40 mmpm).
Directory of Open Access Journals (Sweden)
M. Fera
2018-09-01
Full Text Available Additive Manufacturing (AM) is a process of joining materials to make objects from 3D model data, usually layer by layer, as opposed to subtractive manufacturing methodologies. Selective Laser Melting, commercially known as Direct Metal Laser Sintering (DMLS®), is the most widespread additive process in today's manufacturing industry. Introducing a DMLS® machine into a production department has remarkable effects not only on industrial design but also on production planning, for example on machine scheduling. Scheduling for a traditional single machine can employ consolidated models; scheduling of an AM machine raises new issues because it must consider the capability of producing different geometries simultaneously. The aim of this paper is to provide a mathematical model for AM/SLM machine scheduling. The model is NP-hard, so solutions must be found by metaheuristic algorithms, e.g., Genetic Algorithms. Genetic Algorithms solve sequential optimization problems by handling vectors; in the present paper, we modify them to handle a matrix. The effectiveness of the proposed algorithms is tested on a case formed by a 30-part-number production plan with high variability in complexity, distinct due dates and low production volumes.
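The paper's matrix-chromosome extension cannot be reconstructed from the abstract, but the baseline idea of GA-based machine scheduling can be sketched with a standard permutation GA minimizing total tardiness on a single machine. The job data, operators and GA settings below are illustrative assumptions, not the authors'.

```python
import random

def tardiness(seq, jobs):
    """Total tardiness of a job sequence; jobs = [(duration, due_date), ...]."""
    t = total = 0
    for j in seq:
        t += jobs[j][0]
        total += max(0, t - jobs[j][1])
    return total

def ga_schedule(jobs, pop=30, gens=80, seed=3):
    """Tiny permutation GA: elitist survival, order-preserving
    one-point crossover and a swap mutation."""
    random.seed(seed)
    n = len(jobs)
    popn = [random.sample(range(n), n) for _ in range(pop)]
    for _ in range(gens):
        popn.sort(key=lambda s: tardiness(s, jobs))
        survivors = popn[:pop // 2]
        children = []
        while len(survivors) + len(children) < pop:
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, n)
            child = a[:cut] + [j for j in b if j not in a[:cut]]
            i, k = random.randrange(n), random.randrange(n)
            child[i], child[k] = child[k], child[i]  # swap mutation
            children.append(child)
        popn = survivors + children
    return min(popn, key=lambda s: tardiness(s, jobs))

jobs = [(3, 4), (1, 2), (4, 12), (2, 5)]  # hypothetical (duration, due date)
best = ga_schedule(jobs)
print(best, tardiness(best, jobs))
```

A matrix chromosome would replace the permutation with a parts-by-build-job assignment, but selection, crossover and mutation keep the same roles.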
Real time PI-backstepping induction machine drive with efficiency optimization.
Farhani, Fethi; Ben Regaya, Chiheb; Zaafouri, Abderrahmen; Chaari, Abdelkader
2017-09-01
This paper describes a robust and efficient speed control for a three-phase induction machine (IM) subjected to load disturbances. First, a Multiple-Input Multiple-Output (MIMO) PI-Backstepping controller is proposed for robust and highly accurate tracking of the mechanical speed and rotor flux; asymptotic stability of the control scheme is proven by Lyapunov stability theory. Second, an active online optimization algorithm is used to optimize the efficiency of the drive system. The efficiency improvement approach consists of adjusting the rotor flux with respect to the load torque in order to minimize total losses in the IM. A dSPACE DS1104 R&D board is used to implement the proposed solution. The experimental results, obtained on a 3 kW squirrel-cage IM, show that the reference speed as well as the rotor flux are rapidly reached, with a fast transient response and without overshoot. Load disturbances are well rejected and IM parameter variations are handled fairly. The improvement of drive system efficiency reaches up to 180% at light load. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
The probabilistic approach and the deterministic licensing procedure
International Nuclear Information System (INIS)
Fabian, H.; Feigel, A.; Gremm, O.
1984-01-01
If safety goals are given, the creativity of the engineers is necessary to transform the goals into actual safety measures. That is, safety goals are not sufficient for the derivation of a safety concept; the licensing process asks ''What does a safe plant look like?'' The answer cannot be given by a probabilistic procedure but needs definite deterministic statements; the conclusion is that the licensing process needs a deterministic approach. The probabilistic approach should be used in a complementary role in cases where deterministic criteria are not complete, not detailed enough or not consistent, and where additional arguments for decision making in connection with the adequacy of a specific measure are necessary. But also in these cases the probabilistic answer has to be transformed into a clear deterministic statement. (orig.)
International Nuclear Information System (INIS)
Jiang, He; Dong, Yao
2016-01-01
Highlights: • Eclat data mining algorithm is used to determine the possible predictors. • Support vector machine is converted into a ridge regularization problem. • A hard penalty selects the number of radial basis functions to simplify the structure. • Glowworm swarm optimization is utilized to determine the optimal parameters. - Abstract: For the portion of power generated by grid-connected photovoltaic installations, an effective solar irradiation forecasting approach is crucial to ensure the quality and security of the power grid. This paper develops and investigates a novel model to forecast 30 days of daily global solar radiation at four given locations in the United States. The Eclat data mining algorithm is first presented to discover association rules between solar radiation and several meteorological factors, laying a theoretical foundation for using these correlated factors as input vectors. An effective and innovative intelligent optimization model based on a nonlinear support vector machine and a hard penalty function is proposed to forecast solar radiation by converting the support vector machine into a regularization problem with a ridge penalty, adding a hard penalty function to select the number of radial basis functions, and using the glowworm swarm optimization algorithm to determine the optimal parameters of the model. To illustrate the validity of the proposed method, the datasets at the four sites in the United States are split into training data and test data. The experimental results reveal that the proposed model delivers the best forecasting performance compared with its competitors.
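Eclat, used above to mine association rules between meteorological factors and solar radiation, works on a vertical layout: each item keeps the set of transaction ids containing it, and the support of a longer itemset comes from tid-set intersections. A compact sketch with made-up transactions:

```python
def eclat(transactions, min_support):
    """Eclat frequent-itemset mining: depth-first search over the
    itemset lattice using vertical tid-set intersections."""
    tids = {}
    for tid, t in enumerate(transactions):
        for item in t:
            tids.setdefault(item, set()).add(tid)
    frequent = {}

    def recurse(prefix, prefix_tids, candidates):
        for i, (item, item_tids) in enumerate(candidates):
            new_tids = prefix_tids & item_tids if prefix else item_tids
            if len(new_tids) >= min_support:
                itemset = prefix + (item,)
                frequent[itemset] = len(new_tids)  # support count
                recurse(itemset, new_tids, candidates[i + 1:])

    recurse((), set(), sorted(tids.items()))
    return frequent

# Hypothetical transactions: factor labels co-occurring with high radiation days
data = [{"a", "b", "c"}, {"a", "b"}, {"a", "c"}, {"b", "c"}]
freq = eclat(data, min_support=2)
print(freq)
```

Association rules are then read off the frequent itemsets by comparing the supports of an itemset and its subsets.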
Directory of Open Access Journals (Sweden)
Muhammad Qaiser Saleem
2015-04-01
Full Text Available This paper parametrically optimizes the EDM (Electrical Discharge Machining) process in die-sinking mode for material removal rate, surface roughness and edge quality of aluminum alloy Al-6061. The effects of eight parameters, namely discharge current, pulse on-time, pulse off-time, auxiliary current, working time, jump time distance, servo speed and workpiece hardness, are investigated. Taguchi's L18 orthogonal array is employed for experimentation. ANOVA (Analysis of Variance) with the F-ratio criterion at the 95% confidence level is used to identify significant parameters, whereas the SNR (Signal to Noise Ratio) is used to determine optimum levels. The optimization obtained for Al-6061 with the parametric combination investigated herein is validated by a confirmation run.
Chen, Jian; Matuttis, Hans-Georg
2013-02-01
We report our experiences with the optimization and parallelization of a discrete element code for convex polyhedra on multi-core machines and introduce a novel variant of the sort-and-sweep neighborhood algorithm. While in theory the whole code in itself parallelizes ideally, in practice the results on different architectures with different compilers and performance measurement tools depend very much on the particle number and optimization of the code. After difficulties with the interpretation of the data for speedup and efficiency are overcome, respectable parallelization speedups could be obtained.
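The sort-and-sweep neighborhood search mentioned above sorts particle bounding-interval endpoints along one axis and collects overlap candidates in a single sweep. The authors' novel variant is not described here, so the sketch below shows only the textbook one-axis version with hypothetical intervals.

```python
def sweep_pairs(intervals):
    """Sort-and-sweep broad phase: sort interval endpoints along one
    axis, then sweep once, pairing every interval that starts while
    another is still open."""
    events = []
    for idx, (lo, hi) in enumerate(intervals):
        events.append((lo, 0, idx))  # 0 = open sorts before close (1)
        events.append((hi, 1, idx))
    events.sort()
    open_set, pairs = set(), set()
    for _, kind, idx in events:
        if kind == 0:
            for other in open_set:
                pairs.add((min(idx, other), max(idx, other)))
            open_set.add(idx)
        else:
            open_set.discard(idx)
    return pairs

# Hypothetical particle bounding intervals along the x axis
pairs = sweep_pairs([(0, 2), (1, 3), (5, 6)])
print(pairs)
# -> {(0, 1)}
```

For polyhedra the surviving pairs would then go to an exact narrow-phase contact test; the sweep only prunes the O(n^2) candidate set.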
Deterministic indexing for packed strings
DEFF Research Database (Denmark)
Bille, Philip; Gørtz, Inge Li; Skjoldjensen, Frederik Rye
2017-01-01
Given a string S of length n, the classic string indexing problem is to preprocess S into a compact data structure that supports efficient subsequent pattern queries. In the deterministic variant the goal is to solve the string indexing problem without any randomization (at preprocessing time or query time). In the packed variant the strings are stored with several characters in a single word, giving us the opportunity to read multiple characters simultaneously. Our main result is a new string index in the deterministic and packed setting. Given a packed string S of length n over an alphabet σ...
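The packed setting above stores several 8-bit characters per machine word so that word-level operations touch many characters at once. The sketch below only illustrates the packing idea, not the paper's index:

```python
def pack(s, chars_per_word=8):
    """Pack 8-bit characters into integers acting as machine words,
    so several characters can be compared with one word operation."""
    words = []
    for i in range(0, len(s), chars_per_word):
        word = 0
        for ch in s[i:i + chars_per_word]:
            word = (word << 8) | ord(ch)  # append one byte
        words.append(word)
    return words

# Comparing 8 characters at once reduces to one integer comparison
a, b = pack("abcdefgh"), pack("abcdefgh")
print(a == b, pack("abcdefgx") == a)
# -> True False
```

Real packed string indexes combine such word comparisons with bit tricks (e.g. XOR plus lowest-set-bit) to locate the first mismatching character inside a word in constant time.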
Directory of Open Access Journals (Sweden)
Prillia Ayudia
2017-01-01
Full Text Available PT Buana Intan Gemilang is a company in the textile industry. Curtain textile production needs a punching machine to control the fabric process, but the operator still works manually, which incurs a high cost of electrical energy consumption. To solve the problem, green manufacturing can be implemented on the punching machine. The method first identifies the company's color category (black, brown, gray or green) using a questionnaire. Second, an improvement area is selected to be optimized and analyzed; the improvement plan at this stage focuses on the energy and technology areas. Third, the process is modified by implementing an automation system on the punching machine, raising the green level of the machine process. The result obtained after implementing the method is a saving in electrical energy cost of Rp 1.068.159/day.
Goher, K M; Fadlallah, S O
2017-01-01
This paper presents the performance of a bacterial foraging optimization algorithm applied to a PID control scheme for controlling a five-DOF two-wheeled robotic machine with a two-directional handling mechanism. The system under investigation provides solutions for industrial robotic applications that require a limited-space working environment. The nonlinear mathematical model of the system, derived using the Lagrangian modeling approach, is simulated in the MATLAB/Simulink ® environment. A bacterial foraging-optimized PID control of decoupled nature is designed and implemented. Various working scenarios with multiple initial conditions are used to test the robustness and the system performance. Simulation results revealed the effectiveness of the bacterial foraging-optimized PID control method in improving the system performance compared to the conventional PID control scheme.
International Nuclear Information System (INIS)
Engels, Klaus; Harasta, Michaela; Braitsch, Werner; Moser, Albert; Schaefer, Andreas
2012-01-01
In Germany's energy markets of today, pumped-storage power plants offer excellent business opportunities due to their outstanding flexibility. However, the energy-economic simulation of pumped-storage plants, which is necessary to base the investment decision on a sound business case, is a highly complex matter since the plant's capacity must be optimized in a given plant portfolio and between two relevant markets: the scheduled wholesale and the reserve market. This mathematical optimization problem becomes even more complex when the question is raised as to which type of machine should be used for a pumped-storage new build option. For the first time, it has been proven possible to simulate the optimum dispatch of different pumped-storage machine concepts within two relevant markets - the scheduled wholesale and the reserve market - thereby greatly supporting the investment decision process. The methodology and findings of a cooperation study between E.ON and RWTH Aachen University in respect of the German pumped-storage extension project 'Waldeck 2+' are described, showing the latest development in dispatch simulation for generation portfolios. (authors)
Energy Technology Data Exchange (ETDEWEB)
Fei, Sheng-wei; Wang, Ming-Jun; Miao, Yu-bin; Tu, Jun; Liu, Cheng-liang [School of Mechanical Engineering, Shanghai Jiaotong University, Shanghai 200240 (China)
2009-06-15
Forecasting of dissolved gas content in power transformer oil is a complicated problem due to its nonlinearity and the small quantity of training data. The support vector machine (SVM) has been successfully employed to solve nonlinear regression problems with small samples. However, the practicability of SVM is affected by the difficulty of selecting appropriate SVM parameters. Particle swarm optimization (PSO) is a new optimization method, motivated by the social behaviour of organisms such as bird flocking and fish schooling. The method not only has strong global search capability, but is also very easy to implement. Thus, the proposed PSO-SVM model is applied in this paper to forecast the dissolved gas content in power transformer oil, with PSO used to determine the free parameters of the support vector machine. Experimental data from several electric power companies in China are used to illustrate the performance of the proposed PSO-SVM model. The experimental results indicate that the PSO-SVM method achieves greater forecasting accuracy than the grey model or artificial neural networks when the sample is small. (author)
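The PSO search over SVM parameters can be sketched as follows. The quadratic `validation_error` bowl is a hypothetical stand-in for the cross-validation error that would be evaluated by training an SVM per candidate (C, ε, γ); the bounds and swarm constants are illustrative:

```python
import random

def pso_minimize(objective, bounds, n_particles=20, iters=60, seed=0):
    """Minimal particle swarm: each particle tracks its personal best and
    the swarm tracks a global best, here used to pick SVM parameters."""
    rng = random.Random(seed)
    dim = len(bounds)
    pos = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pcost = [objective(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pcost[i])
    gbest, gcost = pbest[g][:], pcost[g]
    w, c1, c2 = 0.7, 1.5, 1.5           # inertia and acceleration constants
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                lo, hi = bounds[d]
                pos[i][d] = min(hi, max(lo, pos[i][d] + vel[i][d]))
            cost = objective(pos[i])
            if cost < pcost[i]:
                pbest[i], pcost[i] = pos[i][:], cost
                if cost < gcost:
                    gbest, gcost = pos[i][:], cost
    return gbest, gcost

# Hypothetical stand-in for the SVM cross-validation error over (C, eps, gamma).
def validation_error(params):
    c, eps, gamma = params
    return (c - 10.0) ** 2 + (eps - 0.1) ** 2 + (gamma - 0.5) ** 2

best, err = pso_minimize(validation_error, [(0.1, 100.0), (0.01, 1.0), (0.01, 10.0)])
```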
Deterministic chaos in the processor load
International Nuclear Information System (INIS)
Halbiniak, Zbigniew; Jozwiak, Ireneusz J.
2007-01-01
In this article we present the results of research whose purpose was to identify the phenomenon of deterministic chaos in the processor load. We analysed the time series of the processor load during efficiency tests of database software. Our research was done on a Sparc Alpha processor running the UNIX Sun Solaris 5.7 operating system. The conducted analyses proved the presence of the deterministic chaos phenomenon in the processor load in this particular case.
Fischer, P.; Jardani, A.; Lecoq, N.
2018-02-01
In this paper, we present a novel inverse modeling method called Discrete Network Deterministic Inversion (DNDI) for mapping the geometry and properties of the discrete network of conduits and fractures in karstified aquifers. The DNDI algorithm is based on a coupled discrete-continuum concept to numerically simulate water flow in a model, and on a deterministic optimization algorithm to invert a set of observed piezometric data recorded during multiple pumping tests. In this method, the model is partitioned into subspaces piloted by a set of parameters (matrix transmissivity, and the geometry and equivalent transmissivity of the conduits) that are considered unknown. In this way, the deterministic optimization process can iteratively correct the geometry of the network and the values of the properties until it converges to a global network geometry in a solution model able to reproduce the set of data. An uncertainty analysis of this result can be performed from the maps of posterior uncertainties on the network geometry or on the property values. This method has been successfully tested on three different theoretical and simplified study cases, with hydraulic response data generated from hypothetical karstic models with increasing complexity of the network geometry and of the matrix heterogeneity.
International Nuclear Information System (INIS)
Mohanta, Dusmanta Kumar; Sadhu, Pradip Kumar; Chakrabarti, R.
2007-01-01
This paper presents a comparison of results for optimization of captive power plant maintenance scheduling using genetic algorithm (GA) as well as hybrid GA/simulated annealing (SA) techniques. As utilities catered by captive power plants are very sensitive to power failure, both deterministic and stochastic reliability objective functions have been considered to incorporate statutory safety regulations for maintenance of boilers, turbines and generators. A significant contribution of this paper is to incorporate the stochastic features of generating units and of load using the levelized risk method. Another significant contribution is to evaluate a confidence interval for the loss of load probability (LOLP), because some variations from the optimum schedule are anticipated while executing maintenance schedules due to real-life unforeseen exigencies. Such exigencies are incorporated in terms of near-optimum schedules obtained from the hybrid GA/SA technique during the final stages of convergence. Case studies corroborate that the same optimum schedules are obtained using GA and hybrid GA/SA for the respective deterministic and stochastic formulations. The comparison of results in terms of the confidence interval for LOLP indicates that the levelized risk method adequately incorporates the stochastic nature of the power system as compared with the levelized reserve method. The confidence interval for LOLP also denotes the possible risk in a quantified manner, which is of immense use from the perspective of captive power plants intended for quality power.
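The idea of attaching a confidence interval to LOLP can be illustrated with a plain Monte Carlo sketch. The unit capacities, forced outage rates and load below are hypothetical, and the normal-approximation interval is a textbook simplification rather than the paper's levelized risk procedure:

```python
import random, math

def lolp_confidence(unit_caps, unit_fors, load, n_trials=20000, seed=7):
    """Monte Carlo LOLP: sample forced outages of units, count trials where
    available capacity falls below load, and attach a normal-approximation
    95% confidence interval to the estimate."""
    rng = random.Random(seed)
    losses = 0
    for _ in range(n_trials):
        available = sum(cap for cap, q in zip(unit_caps, unit_fors)
                        if rng.random() > q)        # q = forced outage rate
        if available < load:
            losses += 1
    p = losses / n_trials
    half = 1.96 * math.sqrt(p * (1 - p) / n_trials)
    return p, (max(0.0, p - half), min(1.0, p + half))

# Three hypothetical 50 MW units with a 5% forced outage rate against a
# 120 MW load: load is lost whenever any one unit is out (exact LOLP
# is 1 - 0.95**3, about 0.143).
p, (lo, hi) = lolp_confidence([50, 50, 50], [0.05, 0.05, 0.05], 120)
```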
Chemically intuited, large-scale screening of MOFs by machine learning techniques
Borboudakis, Giorgos; Stergiannakos, Taxiarchis; Frysali, Maria; Klontzas, Emmanuel; Tsamardinos, Ioannis; Froudakis, George E.
2017-10-01
A novel computational methodology for large-scale screening of MOFs is applied to gas storage with the use of machine learning technologies. This approach is a promising trade-off between the accuracy of ab initio methods and the speed of classical approaches, strategically combined with chemical intuition. The results demonstrate that the chemical properties of MOFs are indeed predictable (stochastically, not deterministically) using machine learning methods and automated analysis protocols, with the accuracy of predictions increasing with sample size. Our initial results indicate that this methodology is promising to apply not only to gas storage in MOFs but in many other material science projects.
Open source machine-learning algorithms for the prediction of optimal cancer drug therapies.
Huang, Cai; Mezencev, Roman; McDonald, John F; Vannberg, Fredrik
2017-01-01
Precision medicine is a rapidly growing area of modern medical science and open source machine-learning codes promise to be a critical component for the successful development of standardized and automated analysis of patient data. One important goal of precision cancer medicine is the accurate prediction of optimal drug therapies from the genomic profiles of individual patient tumors. We introduce here an open source software platform that employs a highly versatile support vector machine (SVM) algorithm combined with a standard recursive feature elimination (RFE) approach to predict personalized drug responses from gene expression profiles. Drug specific models were built using gene expression and drug response data from the National Cancer Institute panel of 60 human cancer cell lines (NCI-60). The models are highly accurate in predicting the drug responsiveness of a variety of cancer cell lines including those comprising the recent NCI-DREAM Challenge. We demonstrate that predictive accuracy is optimized when the learning dataset utilizes all probe-set expression values from a diversity of cancer cell types without pre-filtering for genes generally considered to be "drivers" of cancer onset/progression. Application of our models to publicly available ovarian cancer (OC) patient gene expression datasets generated predictions consistent with observed responses previously reported in the literature. By making our algorithm "open source", we hope to facilitate its testing in a variety of cancer types and contexts leading to community-driven improvements and refinements in subsequent applications.
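The RFE loop itself can be sketched as follows; a least-squares fit stands in here for the linear-SVM weight vector used in the platform, and the toy expression matrix is invented for illustration:

```python
import numpy as np

def rfe_rank(X, y, keep=2):
    """Recursive feature elimination: repeatedly fit a linear model and
    drop the feature with the smallest squared weight. A least-squares
    fit stands in for the linear-SVM weights of the SVM-RFE approach."""
    active = list(range(X.shape[1]))
    dropped = []
    while len(active) > keep:
        w, *_ = np.linalg.lstsq(X[:, active], y, rcond=None)
        worst = int(np.argmin(w ** 2))
        dropped.append(active.pop(worst))
    return active, dropped[::-1]    # survivors, then eliminations (last-out first)

# Toy "expression matrix": the response depends on features 0 and 2 only.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
y = 3.0 * X[:, 0] - 2.0 * X[:, 2] + 0.1 * rng.normal(size=100)
survivors, order = rfe_rank(X, y, keep=2)
```

In the real pipeline each elimination round would retrain the SVM and score features by the magnitude of the learned weights; the loop structure is the same.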
Introducing Synchronisation in Deterministic Network Models
DEFF Research Database (Denmark)
Schiøler, Henrik; Jessen, Jan Jakob; Nielsen, Jens Frederik D.
2006-01-01
The paper addresses performance analysis for distributed real time systems through deterministic network modelling. Its main contribution is the introduction and analysis of models for synchronisation between tasks and/or network elements. Typical patterns of synchronisation are presented, leading to the suggestion of suitable network models. An existing model for flow control is presented and an inherent weakness is revealed and remedied. Examples are given and numerically analysed through deterministic network modelling. Results are presented to highlight the properties of the suggested models...
Deterministic Compressed Sensing
2011-11-01
Wang, Fengyu
Traditional deterministic reserve requirements rely on ad-hoc, rule-of-thumb methods to determine adequate reserve in order to ensure a reliable unit commitment. Since congestion and uncertainties exist in the system, both the quantity and the location of reserves are essential to ensure system reliability and market efficiency. Existing deterministic reserve requirements acquire operating reserves on a zonal basis and do not fully capture the impact of congestion. The purpose of a reserve zone is to ensure that operating reserves are spread across the network. Operating reserves are shared inside each reserve zone, but intra-zonal congestion may block the deliverability of operating reserves within a zone. Thus, improving reserve policies such as reserve zones may improve the location and deliverability of reserve. As more non-dispatchable renewable resources are integrated into the grid, it will become increasingly difficult to predict transfer capabilities and network congestion. At the same time, renewable resources require operators to acquire more operating reserves. With existing deterministic reserve requirements unable to ensure optimal reserve locations, the importance of reserve location and reserve deliverability will increase. While stochastic programming can be used to determine reserve by explicitly modelling uncertainties, there are still scalability as well as pricing issues. Therefore, new methods to improve existing deterministic reserve requirements are desired. One key barrier to improving existing deterministic reserve requirements is their potential market impacts. A metric, quality of service, is proposed in this thesis to evaluate the price signal and market impacts of proposed hourly reserve zones. Three main goals of this thesis are: 1) to develop a theoretical and mathematical model to better locate reserve while maintaining the deterministic unit commitment and economic dispatch
Determining Optimal Decision Version
Directory of Open Access Journals (Sweden)
Olga Ioana Amariei
2014-06-01
Full Text Available In this paper we start from the calculation of the product cost, applying the method of calculating the cost of hour-machine (THM) on each of the three cutting machines, namely: the cutting machine with plasma, the combined cutting machine (plasma and water jet) and the cutting machine with a water jet. Following the calculation of the cost, and taking into account the manufacturing precision of each machine as well as the quality of the processed surface, the optimal decisional version needs to be determined regarding the product manufacturing. To determine the optimal decisional version, we first calculate the optimal version on each criterion, and then overall, using multiattribute decision methods.
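The machine-hour rate and an overall ranking can be sketched as below. All cost figures, ratings and weights are hypothetical, and the simple additive score is only one of the multiattribute decision methods the paper might use:

```python
def machine_hour_rate(annual_costs, annual_hours):
    """Hourly machine rate: total yearly owning + operating cost per hour."""
    return sum(annual_costs) / annual_hours

def weighted_score(ratings, weights):
    """Simple additive multiattribute score (higher is better)."""
    return sum(r * w for r, w in zip(ratings, weights))

# Hypothetical figures for the three cutting machines; the criteria are
# cost (inverted so cheaper is better), precision and surface quality,
# weighted 0.5 / 0.3 / 0.2.
machines = {
    "plasma":    {"rate": machine_hour_rate([12000, 3000, 1500], 1600),
                  "ratings": [0.9, 0.5, 0.5]},
    "combined":  {"rate": machine_hour_rate([20000, 4000, 2500], 1600),
                  "ratings": [0.7, 0.8, 0.8]},
    "water jet": {"rate": machine_hour_rate([16000, 5000, 2000], 1600),
                  "ratings": [0.5, 0.9, 0.9]},
}
weights = [0.5, 0.3, 0.2]
best = max(machines, key=lambda m: weighted_score(machines[m]["ratings"], weights))
```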
Deterministic Mean-Field Ensemble Kalman Filtering
Law, Kody
2016-05-03
The proof of convergence of the standard ensemble Kalman filter (EnKF) from Le Gland, Monbet, and Tran [Large sample asymptotics for the ensemble Kalman filter, in The Oxford Handbook of Nonlinear Filtering, Oxford University Press, Oxford, UK, 2011, pp. 598--631] is extended to non-Gaussian state-space models. A density-based deterministic approximation of the mean-field limit EnKF (DMFEnKF) is proposed, consisting of a PDE solver and a quadrature rule. Given a certain minimal order of convergence k between the two, this extends to the deterministic filter approximation, which is therefore asymptotically superior to standard EnKF for dimension d<2k. The fidelity of approximation of the true distribution is also established using an extension of the total variation metric to random measures. This is limited by a Gaussian bias term arising from nonlinearity/non-Gaussianity of the model, which arises in both deterministic and standard EnKF. Numerical results support and extend the theory.
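For contrast with the deterministic mean-field approximation, the standard sampled EnKF analysis step reads roughly as follows. This is a sketch with a hypothetical scalar Gaussian example, not the paper's PDE-solver-plus-quadrature DMFEnKF:

```python
import numpy as np

def enkf_update(ensemble, obs, H, R, rng):
    """One EnKF analysis step with perturbed observations: the sample
    covariance of the forecast ensemble builds the Kalman gain. The
    paper's DMFEnKF replaces this sampling with a density-based
    deterministic approximation of its mean-field limit."""
    n, d = ensemble.shape                       # n members, state dim d
    X = ensemble - ensemble.mean(axis=0)
    P = X.T @ X / (n - 1)                       # forecast sample covariance
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)              # Kalman gain
    perturbed = obs + rng.multivariate_normal(np.zeros(len(obs)), R, size=n)
    innov = perturbed - ensemble @ H.T
    return ensemble + innov @ K.T

rng = np.random.default_rng(42)
prior = rng.normal(5.0, 2.0, size=(500, 1))     # scalar state, 500 members
H = np.array([[1.0]])
R = np.array([[1.0]])
# Exact Gaussian posterior mean here is 5 + 4/5 * (0 - 5) = 1.0.
posterior = enkf_update(prior, np.array([0.0]), H, R, rng)
```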
Directory of Open Access Journals (Sweden)
K. Shunmugesh
2016-09-01
Full Text Available Carbon Fiber Reinforced Polymer (CFRP) composites are widely used in the aerospace industry owing to their high strength-to-weight ratio. This study is an attempt to evaluate the machinability of a bi-directional carbon fiber-epoxy composite and optimize the process parameters of cutting speed, feed rate and drill tool material. Machining trials were carried out using drill bits made of high speed steel, TiN and TiAlN at different cutting speeds and feed rates. The output parameters of thrust force and torque were monitored using a Kistler multicomponent dynamometer 9257B, and the vibrations occurring during machining normal to the work surface were measured by a vibration sensor (Dytran 3055B). Linear regression analysis was carried out using Response Surface Methodology (RSM) to correlate the input and output parameters in drilling of the composite in the longitudinal and transverse directions. The optimization of the process parameters was attempted using Genetic Algorithm (GA) and Particle Swarm Optimization–Gravitational Search Algorithm (PSO–GSA) techniques.
Optimal Overhaul-Replacement Policies for Repairable Machine Sold with Warranty
Soemadi, Kusmaningrum; Iskandar, Bermawi P; Taroepratjeka, Harsono
2014-01-01
This research deals with an overhaul-replacement policy for a repairable machine sold with a Free Replacement Warranty (FRW). The machine will be used for a finite horizon T (T < ∞) and evaluated at a fixed interval s (s < T). At each evaluation point, the buyer considers three alternative decisions, i.e. keep the machine, overhaul it, or replace it with a new identical one. An overhaul can reduce the machine's age virtually, but not to a point that the machine is as good as new. If the mac...
Sequential optimization and reliability assessment method for metal forming processes
International Nuclear Information System (INIS)
Sahai, Atul; Schramm, Uwe; Buranathiti, Thaweepat; Chen Wei; Cao Jian; Xia, Cedric Z.
2004-01-01
Uncertainty is inevitable in any design process. The uncertainty could be due to variations in the geometry of the part, material properties, or a lack of knowledge about the phenomena being modeled. Deterministic design optimization does not take uncertainty into account, and worst-case assumptions lead to vastly over-conservative designs. Probabilistic design, such as reliability-based design and robust design, offers tools for making robust and reliable decisions under the presence of uncertainty in the design process. Probabilistic design optimization often involves a double-loop procedure for optimization and iterative probabilistic assessment, which results in high computational demand. This demand can be reduced by replacing computationally intensive simulation models with less costly surrogate models and by employing the sequential optimization and reliability assessment (SORA) method. The SORA method uses a single-loop strategy with a series of cycles of deterministic optimization and reliability assessment, which are decoupled within each cycle. This leads to quick improvement of the design from one cycle to the next and increases computational efficiency. This paper demonstrates the effectiveness of the SORA method when applied to designing a sheet metal flanging process. Surrogate models are used as less costly approximations to the computationally expensive finite element simulations.
Deterministic uncertainty analysis
International Nuclear Information System (INIS)
Worley, B.A.
1987-01-01
Uncertainties of computer results are of primary interest in applications such as high-level waste (HLW) repository performance assessment in which experimental validation is not possible or practical. This work presents an alternate deterministic approach for calculating uncertainties that has the potential to significantly reduce the number of computer runs required for conventional statistical analysis. 7 refs., 1 fig
Recognition of deterministic ETOL languages in logarithmic space
DEFF Research Database (Denmark)
Jones, Neil D.; Skyum, Sven
1977-01-01
It is shown that if G is a deterministic ETOL system, there is a nondeterministic log space algorithm to determine membership in L(G). Consequently, every deterministic ETOL language is recognizable in polynomial time. As a corollary, all context-free languages of finite index, and all Indian...
Pseudo-random number generator based on asymptotic deterministic randomness
Wang, Kai; Pei, Wenjiang; Xia, Haishan; Cheung, Yiu-ming
2008-06-01
A novel approach to generate the pseudorandom-bit sequence from the asymptotic deterministic randomness system is proposed in this Letter. We study the characteristic of multi-value correspondence of the asymptotic deterministic randomness constructed by the piecewise linear map and the noninvertible nonlinearity transform, and then give the discretized systems in the finite digitized state space. The statistic characteristics of the asymptotic deterministic randomness are investigated numerically, such as stationary probability density function and random-like behavior. Furthermore, we analyze the dynamics of the symbolic sequence. Both theoretical and experimental results show that the symbolic sequence of the asymptotic deterministic randomness possesses very good cryptographic properties, which improve the security of chaos based PRBGs and increase the resistance against entropy attacks and symbolic dynamics attacks.
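The general chaos-based PRBG recipe (iterate a piecewise linear map, threshold the state to obtain the symbolic sequence) can be sketched as below. This uses a plain tent map with hypothetical parameters, not the paper's asymptotic-deterministic-randomness construction, and is for illustration only, not cryptographic use:

```python
def tent_prbg(x0=0.3141592, n_bits=4096, mu=1.9999):
    """Bit generator from a piecewise linear (tent) map: iterate the map
    and threshold the state at 1/2 to read off the symbolic sequence."""
    x, bits = x0, []
    for _ in range(n_bits):
        x = mu * x if x < 0.5 else mu * (1.0 - x)   # tent map on [0, 1]
        bits.append(1 if x >= 0.5 else 0)
    return bits

bits = tent_prbg()
balance = sum(bits) / len(bits)   # should hover near 0.5 for a good sequence
```

Statistical balance alone is far from sufficient for cryptographic quality; the paper's point is precisely that the multi-valued asymptotic construction resists the symbolic-dynamics attacks that break simple maps like this one.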
Deterministic and unambiguous dense coding
International Nuclear Information System (INIS)
Wu Shengjun; Cohen, Scott M.; Sun Yuqing; Griffiths, Robert B.
2006-01-01
Optimal dense coding using a partially-entangled pure state of Schmidt rank D̄ and a noiseless quantum channel of dimension D is studied both in the deterministic case, where at most L_d messages can be transmitted with perfect fidelity, and in the unambiguous case, where when the protocol succeeds (probability τ_x) Bob knows for sure that Alice sent message x, and when it fails (probability 1 − τ_x) he knows it has failed. Alice is allowed any single-shot (one use) encoding procedure, and Bob any single-shot measurement. For D̄ ≤ D a bound is obtained for L_d in terms of the largest Schmidt coefficient of the entangled state, and is compared with published results by Mozes et al. [Phys. Rev. A 71, 012311 (2005)]. For D̄ > D it is shown that L_d is strictly less than D^2 unless D̄ is an integer multiple of D, in which case uniform (maximal) entanglement is not needed to achieve the optimal protocol. The unambiguous case is studied for D̄ ≤ D, assuming τ_x > 0 for a set of D D̄ messages, and a bound is obtained for the average ⟨τ⟩. A bound on the average ⟨τ⟩ requires an additional assumption of encoding by isometries (unitaries when D̄ = D) that are orthogonal for different messages. Both bounds are saturated when τ_x is a constant independent of x, by a protocol based on one-shot entanglement concentration. For D̄ > D it is shown that (at least) D^2 messages can be sent unambiguously. Whether unitary (isometric) encoding suffices for optimal protocols remains a major unanswered question, both for our work and for previous studies of dense coding using partially-entangled states, including noisy (mixed) states
Directory of Open Access Journals (Sweden)
Rui-Fang Xie
2015-04-01
Full Text Available Using Dachengqi Tang (DCQT) as a model, high performance liquid chromatography (HPLC) fingerprints were applied to optimize the machine extracting process with the Box–Behnken experimental design. HPLC fingerprints were carried out to investigate the chemical ingredients of DCQT; a synthetic weighing method based on the analytic hierarchy process (AHP) and criteria importance through intercriteria correlation (CRITIC) was performed to calculate synthetic scores of the fingerprints; using the marker ingredients' contents and synthetic scores as indicators, the Box–Behnken design was carried out to optimize the process parameters of the machine decocting process under high pressure for DCQT. Results of the optimal process showed that the herb materials were soaked for 45 min and extracted with a 9-fold volume of water in the decocting machine at a temperature of 140 °C until the pressure reached 0.25 MPa; then the hot decoction was excreted to soak Dahuang and Mangxiao for 5 min. Finally, the obtained solutions were mixed, filtrated and packed. It is concluded that HPLC fingerprints combined with the Box–Behnken experimental design can be used to optimize the extracting process of traditional Chinese medicine (TCM). Keywords: Dachengqi Tang, HPLC fingerprints, Box–Behnken design, Synthetic weighing method
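A Box–Behnken design in coded units can be generated as follows; this shows only the design-point layout (the paper's three factors would be soaking time, water volume and temperature/pressure), not the AHP/CRITIC weighting or the HPLC scoring:

```python
from itertools import combinations

def box_behnken(k):
    """Box-Behnken design in coded units: for every pair of factors take
    the 2x2 factorial at +/-1 with all remaining factors at 0, then add
    a centre point (in practice the centre point is usually replicated)."""
    runs = []
    for i, j in combinations(range(k), 2):
        for a in (-1, 1):
            for b in (-1, 1):
                point = [0] * k
                point[i], point[j] = a, b
                runs.append(point)
    runs.append([0] * k)                # centre point
    return runs

design = box_behnken(3)                 # 3 factors: 12 edge runs + 1 centre
```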
Directory of Open Access Journals (Sweden)
Fanping Zhang
2014-01-01
Full Text Available Streamflow forecasting has an important role in water resource management and reservoir operation. The support vector machine (SVM) is an appropriate and suitable method for streamflow prediction due to its versatility, robustness, and effectiveness. In this study, a wavelet transform particle swarm optimization support vector machine (WT-PSO-SVM) model is proposed and applied for streamflow time series prediction. Firstly, the streamflow time series were decomposed into various details (Ds) and an approximation (A3) at three resolution levels (2^1, 2^2, 2^3) using the Daubechies (db3) discrete wavelet. Correlation coefficients between each D subtime series and the original monthly streamflow time series were calculated. The D components with high correlation coefficients (D3) were added to the approximation (A3) as the input values of the SVM model. Secondly, the PSO was employed to select the optimal parameters, C, ε, and σ, of the SVM model. Finally, the WT-PSO-SVM models were trained and tested on the monthly streamflow time series of Tangnaihai Station, located on the upper Yellow River, from January 1956 to December 2008. The test results indicate that the WT-PSO-SVM approach provides a superior alternative to the single SVM model for forecasting monthly streamflow in situations without formulating models for the internal structure of the watershed.
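The multilevel decomposition step can be sketched with a Haar wavelet, a simple stand-in for the db3 wavelet used in the paper; the test signal is invented, and the correlation-based selection of D components is omitted:

```python
import math

def haar_dwt(signal, levels=3):
    """Three-level Haar analysis: repeatedly split the series into an
    approximation (pairwise averages) and a detail (pairwise differences),
    each scaled by 1/sqrt(2) so the transform preserves energy."""
    a, details = list(signal), []
    for _ in range(levels):
        approx = [(a[2 * i] + a[2 * i + 1]) / math.sqrt(2) for i in range(len(a) // 2)]
        detail = [(a[2 * i] - a[2 * i + 1]) / math.sqrt(2) for i in range(len(a) // 2)]
        details.append(detail)
        a = approx
    return a, details          # A3 and [D1, D2, D3]

sig = [float(i % 8) for i in range(64)]   # invented periodic test signal
a3, (d1, d2, d3) = haar_dwt(sig)
```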
Equivalence relations between deterministic and quantum mechanical systems
International Nuclear Information System (INIS)
Hooft, G.
1988-01-01
Several quantum mechanical models are shown to be equivalent to certain deterministic systems because a basis can be found in terms of which the wave function does not spread. This suggests that apparently indeterministic behavior typical for a quantum mechanical world can be the result of locally deterministic laws of physics. We show how certain deterministic systems allow the construction of a Hilbert space and a Hamiltonian so that at long distance scales they may appear to behave as quantum field theories, including interactions but as yet no mass term. These observations are suggested to be useful for building theories at the Planck scale
Operational State Complexity of Deterministic Unranked Tree Automata
Directory of Open Access Journals (Sweden)
Xiaoxue Piao
2010-08-01
Full Text Available We consider the state complexity of basic operations on tree languages recognized by deterministic unranked tree automata. For the operations of union and intersection the upper and lower bounds of both weakly and strongly deterministic tree automata are obtained. For tree concatenation we establish a tight upper bound that is of a different order than the known state complexity of concatenation of regular string languages. We show that (n+1)((m+1)2^n − 2^(n−1) − 1) vertical states are sufficient, and necessary in the worst case, to recognize the concatenation of tree languages recognized by (strongly or weakly) deterministic automata with, respectively, m and n vertical states.
Selected problems relating to the dynamics of block-type foundations for machines
Directory of Open Access Journals (Sweden)
Marek Zombroń
2014-07-01
Full Text Available Atypical but real practical problems relating to the dynamics of block-type foundations for machines are considered using the deterministic approach and assuming that the determined parameters are random variables. A foundation model in the form of an undeformable solid, on which another undeformable solid modelling a machine is mounted via viscoelastic constraints, was adopted. The dynamic load was defined by a harmonically varying signal and by a series of short duration signals. The vibration of the system was investigated for the case when stratified ground (groundwater occurred within the side backfill) was present. Calculation results illustrating the theoretical analyses are presented.
Burriel-Valencia, Jordi; Puche-Panadero, Ruben; Martinez-Roman, Javier; Sapena-Bano, Angel; Pineda-Sanchez, Manuel
2018-01-06
The aim of this paper is to introduce a new methodology for the fault diagnosis of induction machines working in the transient regime, when time-frequency analysis tools are used. The proposed method relies on the use of the optimized Slepian window for performing the short time Fourier transform (STFT) of the stator current signal. It is shown that for a given sequence length of finite duration, the Slepian window has the maximum concentration of energy, greater than can be reached with a gated Gaussian window, which is usually used as the analysis window. In this paper, the use and optimization of the Slepian window for fault diagnosis of induction machines is theoretically introduced and experimentally validated through the test of a 3.15-MW induction motor with broken bars during the start-up transient. The theoretical analysis and the experimental results show that the use of the Slepian window can highlight the fault components in the current's spectrogram with a significant reduction of the required computational resources.
Directory of Open Access Journals (Sweden)
Jordi Burriel-Valencia
2018-01-01
Full Text Available The aim of this paper is to introduce a new methodology for the fault diagnosis of induction machines working in the transient regime, when time-frequency analysis tools are used. The proposed method relies on the use of the optimized Slepian window for performing the short time Fourier transform (STFT) of the stator current signal. It is shown that for a given sequence length of finite duration, the Slepian window has the maximum concentration of energy, greater than can be reached with a gated Gaussian window, which is usually used as the analysis window. In this paper, the use and optimization of the Slepian window for fault diagnosis of induction machines is theoretically introduced and experimentally validated through the test of a 3.15-MW induction motor with broken bars during the start-up transient. The theoretical analysis and the experimental results show that the use of the Slepian window can highlight the fault components in the current's spectrogram with a significant reduction of the required computational resources.
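The core of the method described above, an STFT of the current signal computed with a Slepian (DPSS) analysis window, can be sketched in a few lines. This is an illustrative sketch, not the authors' code: the synthetic signal, sampling rate and window parameters below are invented for demonstration, and scipy's `dpss` and `stft` routines are assumed to be available.

```python
import numpy as np
from scipy.signal import stft
from scipy.signal.windows import dpss

# Synthetic "stator current": a 50 Hz fundamental plus a weak
# fault-like sideband at 40 Hz (both invented for illustration).
fs = 1000.0
t = np.arange(0, 2.0, 1 / fs)
x = np.sin(2 * np.pi * 50 * t) + 0.05 * np.sin(2 * np.pi * 40 * t)

nperseg = 256
# DPSS (Slepian) window: for a finite-length sequence it maximizes the
# energy concentration in a band set by the time-half-bandwidth product NW.
win = dpss(nperseg, NW=2.5)

f, tseg, Z = stft(x, fs=fs, window=win, nperseg=nperseg, noverlap=nperseg // 2)
spectrogram = np.abs(Z) ** 2  # fault components appear as ridges here
```

With a real start-up transient the fault sidebands sweep in frequency, and the tighter main lobe of the DPSS window is what keeps them separated from the fundamental in the spectrogram.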
Mesh generation and energy group condensation studies for the jaguar deterministic transport code
International Nuclear Information System (INIS)
Kennedy, R. A.; Watson, A. M.; Iwueke, C. I.; Edwards, E. J.
2012-01-01
The deterministic transport code Jaguar is introduced, and the modeling process for Jaguar is demonstrated using a two-dimensional assembly model of the Hoogenboom-Martin Performance Benchmark Problem. This single assembly model is being used to test and analyze optimal modeling methodologies and techniques for Jaguar. This paper focuses on spatial mesh generation and energy condensation techniques. In this summary, the models and processes are defined as well as thermal flux solution comparisons with the Monte Carlo code MC21. (authors)
Mesh generation and energy group condensation studies for the jaguar deterministic transport code
Energy Technology Data Exchange (ETDEWEB)
Kennedy, R. A.; Watson, A. M.; Iwueke, C. I.; Edwards, E. J. [Knolls Atomic Power Laboratory, Bechtel Marine Propulsion Corporation, P.O. Box 1072, Schenectady, NY 12301-1072 (United States)
2012-07-01
The deterministic transport code Jaguar is introduced, and the modeling process for Jaguar is demonstrated using a two-dimensional assembly model of the Hoogenboom-Martin Performance Benchmark Problem. This single assembly model is being used to test and analyze optimal modeling methodologies and techniques for Jaguar. This paper focuses on spatial mesh generation and energy condensation techniques. In this summary, the models and processes are defined as well as thermal flux solution comparisons with the Monte Carlo code MC21. (authors)
Stochastic Modeling and Deterministic Limit of Catalytic Surface Processes
DEFF Research Database (Denmark)
Starke, Jens; Reichert, Christian; Eiswirth, Markus
2007-01-01
Three levels of modeling, microscopic, mesoscopic and macroscopic are discussed for the CO oxidation on low-index platinum single crystal surfaces. The introduced models on the microscopic and mesoscopic level are stochastic while the model on the macroscopic level is deterministic. It can......, such that in contrast to the microscopic model the spatial resolution is reduced. The derivation of deterministic limit equations is in correspondence with the successful description of experiments under low-pressure conditions by deterministic reaction-diffusion equations while for intermediate pressures phenomena...
International Nuclear Information System (INIS)
1990-01-01
In the present report, data on RBE values for effects in tissues of experimental animals and man are analysed to assess whether for specific tissues the present dose limits or annual limits of intake based on Q values, are adequate to prevent deterministic effects. (author)
Design and analysis of gasket cutting machine
Directory of Open Access Journals (Sweden)
Vipin V. Gopal
2016-03-01
Full Text Available This paper concerns the design and analysis of an optimized gasket cutting machine that can be supplied to companies where gaskets are used at regular intervals. The paper presents a cost-optimized machine available at a much lower cost than the machines presently on the market. The machine is particularly suited to boiler and refrigeration companies, where gaskets are used to avoid leakage at the joints between pipes of different diameters. Instead of placing a large order for gaskets, such companies can produce them in small quantities on site whenever needed.
Hsiao, Ming-Chih; Su, Ling-Huey
2018-02-01
This research addresses the problem of scheduling hybrid machine types, in which one type is a two-machine flowshop and another type is a single machine. A job is either processed on the two-machine flowshop or on the single machine. The objective is to determine a production schedule for all jobs so as to minimize the makespan. The problem is NP-hard, since the two-parallel-machines problem was proved to be NP-hard. Simulated annealing (SA) algorithms are developed to solve the problem. A mixed integer programming (MIP) model is developed and used to evaluate the performance of the two SAs. Computational experiments demonstrate the efficiency of the simulated annealing algorithms; their solution quality is also reported.
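As a rough illustration of the simulated annealing approach described above, the sketch below anneals a job sequence for the two-machine flowshop part of the problem (the single-machine type and the MIP benchmark are omitted). The instance data and all annealing parameters are invented; the makespan recursion is the standard two-machine flowshop one.

```python
import math
import random

def makespan_2m(seq, p1, p2):
    # Two-machine flowshop recursion: machine 2 starts a job only after
    # machine 1 has finished it and machine 2 is free.
    c1 = c2 = 0
    for j in seq:
        c1 += p1[j]
        c2 = max(c1, c2) + p2[j]
    return c2

def anneal(p1, p2, temp=50.0, cooling=0.995, steps=5000, seed=0):
    rng = random.Random(seed)
    seq = list(range(len(p1)))
    cur = best = makespan_2m(seq, p1, p2)
    best_seq = seq[:]
    for _ in range(steps):
        i, j = rng.sample(range(len(seq)), 2)
        seq[i], seq[j] = seq[j], seq[i]           # propose a swap
        cand = makespan_2m(seq, p1, p2)
        if cand <= cur or rng.random() < math.exp((cur - cand) / temp):
            cur = cand                            # accept (possibly worse) move
            if cur < best:
                best, best_seq = cur, seq[:]
        else:
            seq[i], seq[j] = seq[j], seq[i]       # reject: undo the swap
        temp *= cooling
    return best, best_seq

p1 = [3, 5, 1, 6]   # processing times on machine 1 (toy instance)
p2 = [2, 4, 7, 3]   # processing times on machine 2
best, best_seq = anneal(p1, p2)
```

The high initial temperature lets the search accept worse sequences early on and escape local optima; as the temperature decays geometrically the walk settles into a good ordering.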
Directory of Open Access Journals (Sweden)
Shan Pang
2016-01-01
Full Text Available A new aero gas turbine engine gas path component fault diagnosis method based on a multi-hidden-layer extreme learning machine with optimized structure (OM-ELM) was proposed. OM-ELM employs quantum-behaved particle swarm optimization to automatically obtain the optimal network structure according to both the root mean square error on the training data set and the norm of the output weights. The proposed method is applied to a handwritten recognition data set and a gas turbine engine diagnostic application and is compared with the basic ELM, the multi-hidden-layer ELM, and two state-of-the-art deep learning algorithms: the deep belief network and the stacked denoising autoencoder. Results show that, with its optimized network structure, OM-ELM obtains better test accuracy in both applications and is more robust to sensor noise. Meanwhile, it controls model complexity and needs far fewer hidden nodes than the multi-hidden-layer ELM, thus saving computer memory and making it more efficient to implement. All these advantages make the method an effective and reliable tool for engine component fault diagnosis.
DETERMINISTIC METHODS USED IN FINANCIAL ANALYSIS
Directory of Open Access Journals (Sweden)
MICULEAC Melania Elena
2014-06-01
Full Text Available Deterministic methods are quantitative methods that aim to capture, through numerical quantification, the mechanisms by which factorial and causal relations of influence and propagation of effects arise, in cases where the phenomenon can be expressed through a direct functional cause-effect relation. Functional, deterministic relations are causal relations in which each value of the characteristic corresponds to a well-defined value of the resulting phenomenon. They can directly express the correlation between the phenomenon and its influence factors in the form of a function-type mathematical formula.
Optimization Under Uncertainty for Wake Steering Strategies: Preprint
Energy Technology Data Exchange (ETDEWEB)
Quick, Julian [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Annoni, Jennifer [National Renewable Energy Laboratory (NREL), Golden, CO (United States); King, Ryan N [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Dykes, Katherine L [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Fleming, Paul A [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Ning, Andrew [Brigham Young University
2017-05-01
Wind turbines in a wind power plant experience significant power losses because of aerodynamic interactions between turbines. One control strategy to reduce these losses is known as 'wake steering,' in which upstream turbines are yawed to direct wakes away from downstream turbines. Previous wake steering research has assumed perfect information; however, there can be significant uncertainty in many aspects of the problem, including wind inflow and various turbine measurements. Uncertainty has significant implications for the performance of wake steering strategies. Consequently, the authors formulate and solve an optimization under uncertainty (OUU) problem for finding optimal wake steering strategies in the presence of yaw angle uncertainty. The OUU wake steering strategy is demonstrated on a two-turbine test case and on the utility-scale, offshore Princess Amalia Wind Farm. When we accounted for yaw angle uncertainty in the Princess Amalia Wind Farm case, inflow-direction-specific OUU solutions produced between 0% and 1.4% more power than the deterministically optimized steering strategies, resulting in an overall annual average improvement of 0.2%. More importantly, the deterministic optimization is expected to perform worse, and with more downside risk, than the OUU result when realistic uncertainty is taken into account. Additionally, the OUU solution produces fewer extreme yaw situations than the deterministic solution.
Using neural networks to speed up optimization algorithms
Bazan, M
2000-01-01
The paper presents the application of radial-basis-function (RBF) neural networks to speed up deterministic search algorithms used for the design and optimization of superconducting LHC magnets. The optimization of the iron yoke of the main dipoles requires a number of numerical field computations per trial solution, as the field quality depends on the excitation of the magnets. This results in computation times of about 30 minutes for each objective function evaluation (on a DEC-Alpha 600/333), and only the most robust (deterministic) optimization algorithms can be applied. Using an RBF approximator, the achieved speed-up of the search algorithm is on the order of 25% for problems with two parameters and about 18% for problems with three and five design variables. (13 refs).
Rosenblum, Serge; Borne, Adrien; Dayan, Barak
2017-03-01
The long-standing goal of deterministic quantum interactions between single photons and single atoms was recently realized in various experiments. Among these, an appealing demonstration relied on single-photon Raman interaction (SPRINT) in a three-level atom coupled to a single-mode waveguide. In essence, the interference-based process of SPRINT deterministically swaps the qubits encoded in a single photon and a single atom, without the need for additional control pulses. It can also be harnessed to construct passive entangling quantum gates, and can therefore form the basis for scalable quantum networks in which communication between the nodes is carried out only by single-photon pulses. Here we present an analytical and numerical study of SPRINT, characterizing its limitations and defining parameters for its optimal operation. Specifically, we study the effect of losses, imperfect polarization, and the presence of multiple excited states. In all cases we discuss strategies for restoring the operation of SPRINT.
CSL model checking of deterministic and stochastic Petri nets
Martinez Verdugo, J.M.; Haverkort, Boudewijn R.H.M.; German, R.; Heindl, A.
2006-01-01
Deterministic and Stochastic Petri Nets (DSPNs) are a widely used high-level formalism for modeling discrete-event systems where events may occur either without consuming time, after a deterministic time, or after an exponentially distributed time. The underlying process defined by DSPNs, under
The cointegrated vector autoregressive model with general deterministic terms
DEFF Research Database (Denmark)
Johansen, Søren; Nielsen, Morten Ørregaard
2017-01-01
In the cointegrated vector autoregression (CVAR) literature, deterministic terms have until now been analyzed on a case-by-case, or as-needed basis. We give a comprehensive unified treatment of deterministic terms in the additive model X(t) = Z(t) + Y(t), where Z(t) belongs to a large class...... of deterministic regressors and Y(t) is a zero-mean CVAR. We suggest an extended model that can be estimated by reduced rank regression and give a condition for when the additive and extended models are asymptotically equivalent, as well as an algorithm for deriving the additive model parameters from the extended...... model parameters. We derive asymptotic properties of the maximum likelihood estimators and discuss tests for rank and tests on the deterministic terms. In particular, we give conditions under which the estimators are asymptotically (mixed) Gaussian, such that associated tests are chi-squared distributed....
The cointegrated vector autoregressive model with general deterministic terms
DEFF Research Database (Denmark)
Johansen, Søren; Nielsen, Morten Ørregaard
In the cointegrated vector autoregression (CVAR) literature, deterministic terms have until now been analyzed on a case-by-case, or as-needed basis. We give a comprehensive unified treatment of deterministic terms in the additive model X(t) = Z(t) + Y(t), where Z(t) belongs to a large class...... of deterministic regressors and Y(t) is a zero-mean CVAR. We suggest an extended model that can be estimated by reduced rank regression and give a condition for when the additive and extended models are asymptotically equivalent, as well as an algorithm for deriving the additive model parameters from the extended...... model parameters. We derive asymptotic properties of the maximum likelihood estimators and discuss tests for rank and tests on the deterministic terms. In particular, we give conditions under which the estimators are asymptotically (mixed) Gaussian, such that associated tests are chi-squared distributed....
Zhang, Xin; Yan, Lin-Feng; Hu, Yu-Chuan; Li, Gang; Yang, Yang; Han, Yu; Sun, Ying-Zhi; Liu, Zhi-Cheng; Tian, Qiang; Han, Zi-Yang; Liu, Le-De; Hu, Bin-Quan; Qiu, Zi-Yu; Wang, Wen; Cui, Guang-Bin
2017-07-18
Current machine learning techniques provide the opportunity to develop noninvasive and automated glioma grading tools, by utilizing quantitative parameters derived from multi-modal magnetic resonance imaging (MRI) data. However, the efficacies of different machine learning methods in glioma grading have not been investigated. A comprehensive comparison of varied machine learning methods in differentiating low-grade gliomas (LGGs) and high-grade gliomas (HGGs), as well as WHO grade II, III and IV gliomas, based on multi-parametric MRI images was proposed in the current study. The parametric histogram and image texture attributes of 120 glioma patients were extracted from the perfusion, diffusion and permeability parametric maps of preoperative MRI. Then, 25 commonly used machine learning classifiers combined with 8 independent attribute selection methods were applied and evaluated using a leave-one-out cross validation (LOOCV) strategy. In addition, the influence of parameter selection on classification performance was investigated. We found that the support vector machine (SVM) exhibited superior performance to the other classifiers. By combining all tumor attributes with the synthetic minority over-sampling technique (SMOTE), the highest classification accuracy of 0.945 or 0.961 for LGG and HGG or grade II, III and IV gliomas was achieved. Application of the Recursive Feature Elimination (RFE) attribute selection strategy further improved the classification accuracies. The performances of the LibSVM, SMO and IBk classifiers were also influenced by key parameters such as kernel type, c, gamma and K. SVM is a promising tool in developing an automated preoperative glioma grading system, especially when combined with the RFE strategy. Model parameters should be considered in glioma grading model optimization.
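The LOOCV evaluation strategy used in the study above is simple to state in code: every sample is held out once, the model is trained on the rest, and the held-out prediction is scored. The sketch below is hypothetical; a nearest-centroid classifier stands in for the SVM, and the data are synthetic surrogates for the MRI-derived attributes.

```python
import numpy as np

def loocv_accuracy(X, y, classify):
    """Leave-one-out cross validation: each sample is held out exactly once."""
    hits, n = 0, len(y)
    for i in range(n):
        mask = np.arange(n) != i
        hits += classify(X[mask], y[mask], X[i]) == y[i]
    return hits / n

def nearest_centroid(Xtr, ytr, x):
    # Stand-in for the SVM used in the paper: predict the class whose
    # mean feature vector (centroid) is closest to the held-out sample.
    labels = np.unique(ytr)
    d = [np.linalg.norm(x - Xtr[ytr == c].mean(axis=0)) for c in labels]
    return labels[int(np.argmin(d))]

rng = np.random.default_rng(0)
# Toy surrogate for the MRI attributes: two well-separated classes in 5-D.
X = np.vstack([rng.normal(0, 1, (20, 5)), rng.normal(3, 1, (20, 5))])
y = np.array([0] * 20 + [1] * 20)
acc = loocv_accuracy(X, y, nearest_centroid)
```

LOOCV is attractive with only 120 patients because it wastes no training data, at the cost of n model fits per classifier/attribute-selection combination.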
International Nuclear Information System (INIS)
Sekimizu, K.; Araki, T.; Tatemichi, S.I.
1987-01-01
Optimization of fuel assembly exchange machine movements during a periodic refueling outage is discussed. The fuel assembly movements during fuel shuffling were examined, and it was found that they consist of two different movement sequences: one is the "PATH," which begins at a discharged fuel assembly and terminates at a fresh fuel assembly, and the other is the "LOOP," where fuel assemblies circulate in the core. It is also shown that fuel-loading patterns during fuel shuffling can be expressed by the state of each PATH, which is the number of elements already accomplished in the PATH actions. Based on this fact, a scheme to determine a fuel assembly movement sequence within the constraint was formulated using the artificial intelligence language PROLOG. An additional merit of the scheme is that it can simultaneously evaluate fuel assembly movement due to control rod and local power range monitor exchange, in addition to normal fuel shuffling. Fuel assembly movements for fuel shuffling in a 540-MW(electric) boiling water reactor power plant were calculated by this scheme. It is also shown that true optimization to minimize the fuel exchange machine movements would be costly to obtain, owing to the number of alternatives that would need to be evaluated. However, a method to obtain a quasi-optimum solution is suggested.
The dialectical thinking about deterministic and probabilistic safety analysis
International Nuclear Information System (INIS)
Qian Yongbai; Tong Jiejuan; Zhang Zuoyi; He Xuhong
2005-01-01
There are two methods of designing and analysing the safety performance of a nuclear power plant: the traditional deterministic method and the probabilistic method. To date, the design of nuclear power plants has been based on the deterministic method, and it has been proved in practice that the deterministic method is effective for current nuclear power plants. However, the probabilistic method (Probabilistic Safety Assessment - PSA) considers a much wider range of faults, takes an integrated look at the plant as a whole, and uses realistic criteria for the performance of the systems and structures of the plant. PSA can be seen, in principle, to provide a broader and more realistic perspective on safety issues than the deterministic approaches. In this paper, the historical origins and development trends of the two methods are reviewed and summarized in brief. Based on the discussion of two application cases - one concerning changes to specific design provisions of the general design criteria (GDC), and the other the risk-informed categorization of structures, systems and components - it can be concluded that the deterministic method and the probabilistic method are dialectical and unified, that they are gradually merging into each other, and that they are being used in coordination. (authors)
Deterministic Safety Analysis for Nuclear Power Plants. Specific Safety Guide (Russian Edition)
International Nuclear Information System (INIS)
2014-01-01
The objective of this Safety Guide is to provide harmonized guidance to designers, operators, regulators and providers of technical support on deterministic safety analysis for nuclear power plants. It provides information on the utilization of the results of such analysis for safety and reliability improvements. The Safety Guide addresses conservative, best estimate and uncertainty evaluation approaches to deterministic safety analysis and is applicable to current and future designs. Contents: 1. Introduction; 2. Grouping of initiating events and associated transients relating to plant states; 3. Deterministic safety analysis and acceptance criteria; 4. Conservative deterministic safety analysis; 5. Best estimate plus uncertainty analysis; 6. Verification and validation of computer codes; 7. Relation of deterministic safety analysis to engineering aspects of safety and probabilistic safety analysis; 8. Application of deterministic safety analysis; 9. Source term evaluation for operational states and accident conditions; References
Essays and surveys in global optimization
Audet, Charles; Savard, Giles
2005-01-01
Global optimization aims at solving the most general problems of deterministic mathematical programming. In addition, once the solutions are found, this methodology is also expected to prove their optimality. With these difficulties in mind, global optimization is becoming an increasingly powerful and important methodology. This book is the most recent examination of its mathematical capability, power, and wide ranging solutions to many fields in the applied sciences.
Deterministic dense coding with partially entangled states
Mozes, Shay; Oppenheim, Jonathan; Reznik, Benni
2005-01-01
The utilization of a d-level partially entangled state, shared by two parties wishing to communicate classical information without errors over a noiseless quantum channel, is discussed. We analytically construct deterministic dense coding schemes for certain classes of nonmaximally entangled states, and numerically obtain schemes in the general case. We study the dependency of the maximal alphabet size of such schemes on the partially entangled state shared by the two parties. Surprisingly, for d > 2 it is possible to have deterministic dense coding with less than one ebit. In this case the number of alphabet letters that can be communicated by a single particle is between d and 2d. In general, we numerically find that the maximal alphabet size is any integer in the range [d, d²] with the possible exception of d² - 1. We also find that states with less entanglement can have a greater deterministic communication capacity than other, more entangled states.
Directory of Open Access Journals (Sweden)
Wei Sun
2015-01-01
Full Text Available Electric power is a non-storable form of energy that concerns national welfare and people's livelihood, and its stability is attracting more and more attention. Because the short-term power load is always disturbed by various external factors and is characterized by high volatility and instability, a single model is not suitable for short-term load forecasting due to low accuracy. In order to solve this problem, this paper proposes a new model for short-term load forecasting based on the wavelet transform and the least squares support vector machine (LSSVM), optimized by the fruit fly optimization algorithm (FOA). The wavelet transform is used to remove error points and enhance the stability of the data. The fruit fly algorithm is applied to optimize the parameters of the LSSVM, avoiding randomness and inaccuracy in parameter setting. The results of an implementation of short-term load forecasting demonstrate that the hybrid model can be used in short-term forecasting of the power system.
Relationship of Deterministic Thinking With Loneliness and Depression in the Elderly
Directory of Open Access Journals (Sweden)
Mehdi Sharifi
2017-12-01
Conclusion According to the results, it can be said that deterministic thinking has a significant relationship with depression and the sense of loneliness in older adults; deterministic thinking thus acts as a predictor of depression and sense of loneliness in this group. Therefore, psychological interventions challenging the cognitive distortion of deterministic thinking, and attention to mental health in older adults, are very important.
Robust Topology Optimization Based on Stochastic Collocation Methods under Loading Uncertainties
Directory of Open Access Journals (Sweden)
Qinghai Zhao
2015-01-01
Full Text Available A robust topology optimization (RTO) approach with consideration of loading uncertainties is developed in this paper. The stochastic collocation method, combined with a full tensor product grid and the Smolyak sparse grid, transforms the robust formulation into a weighted multiple-loading deterministic problem at the collocation points. The proposed approach is amenable to implementation in existing commercial topology optimization software packages and is thus feasible for practical engineering problems. Numerical examples of two- and three-dimensional topology optimization problems are provided to demonstrate the proposed RTO approach and its applications. The optimal topologies obtained from deterministic and robust topology optimization designs under the tensor product grid and the sparse grid with different levels are compared with one another to investigate the influence of the optimization algorithm on the final topologies, and an extensive Monte Carlo simulation is also performed to verify the proposed approach.
Optimal protocols and optimal transport in stochastic thermodynamics.
Aurell, Erik; Mejía-Monasterio, Carlos; Muratore-Ginanneschi, Paolo
2011-06-24
Thermodynamics of small systems has become an important field of statistical physics. Such systems are driven out of equilibrium by a control, and the question is naturally posed how such a control can be optimized. We show that optimization problems in small system thermodynamics are solved by (deterministic) optimal transport, for which very efficient numerical methods have been developed, and of which there are applications in cosmology, fluid mechanics, logistics, and many other fields. We show, in particular, that minimizing expected heat released or work done during a nonequilibrium transition in finite time is solved by the Burgers equation and mass transport by the Burgers velocity field. Our contribution hence considerably extends the range of solvable optimization problems in small system thermodynamics.
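The reduction claimed above can be stated compactly. In the hydrodynamic picture assumed here (one spatial dimension, overdamped dynamics), the optimizing velocity field v(x, t) and the transported probability density ρ(x, t) satisfy

```latex
\partial_t v + v\,\partial_x v = 0, \qquad
\partial_t \rho + \partial_x(\rho v) = 0,
```

i.e. the density is carried by a velocity field obeying the inviscid Burgers equation, which is the sense in which the finite-time minimization of expected work or released heat reduces to a deterministic optimal mass transport problem.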
Chen, B.; Harp, D. R.; Lin, Y.; Keating, E. H.; Pawar, R.
2017-12-01
Monitoring is a crucial aspect of geologic carbon sequestration (GCS) risk management. It has gained importance as a means to ensure CO2 is safely and permanently stored underground throughout the lifecycle of a GCS project. Three issues are often involved in a monitoring project: (i) where is the optimal location to place the monitoring well(s), (ii) what type of data (pressure, rate and/or CO2 concentration) should be measured, and (iii) what is the optimal frequency at which to collect the data. In order to address these important issues, a filtering-based data assimilation procedure is developed to perform the monitoring optimization. The optimal monitoring strategy is selected based on the uncertainty reduction of the objective of interest (e.g., cumulative CO2 leak) over all potential monitoring strategies. To reduce the computational cost of the filtering-based data assimilation process, two machine-learning algorithms, Support Vector Regression (SVR) and Multivariate Adaptive Regression Splines (MARS), are used to develop computationally efficient reduced-order models (ROMs) from full numerical simulations of CO2 and brine flow. The proposed framework for GCS monitoring optimization is demonstrated with two examples: a simple 3D synthetic case and a real field case, the Rock Spring Uplift carbon storage site in southwestern Wyoming.
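The reduced-order-model idea above, replacing the expensive flow simulation with a cheap surrogate fitted to a modest number of full runs, can be sketched generically. The sketch below is an assumption-laden stand-in: scipy's `RBFInterpolator` plays the role of the SVR/MARS surrogates, and the "full model" is an invented smooth function of two design parameters rather than a CO2/brine flow simulation.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

# Hypothetical full model: a quantity of interest (e.g., a leak measure)
# as a smooth function of two monitoring-design parameters. In practice
# each evaluation would be an expensive reservoir simulation.
def full_model(p):
    return np.sin(p[:, 0]) * np.exp(-0.5 * p[:, 1])

rng = np.random.default_rng(1)
train = rng.uniform(0, 2, size=(200, 2))          # sampled designs
rom = RBFInterpolator(train, full_model(train))   # fit the surrogate once

# The ROM can now be queried thousands of times inside the filtering loop.
probe = rng.uniform(0.1, 1.9, size=(50, 2))
err = float(np.max(np.abs(rom(probe) - full_model(probe))))
```

The design choice is the usual surrogate trade-off: a few hundred full simulations buy a model cheap enough to embed in the data assimilation loop, at the cost of an approximation error that should be checked on held-out runs, as `err` is here.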
Buddala, Raviteja; Mahapatra, Siba Sankar
2017-11-01
Flexible flow shop (or hybrid flow shop) scheduling is an extension of the classical flow shop scheduling problem. In a simple flow shop configuration, a job having g operations is performed on g operation centres (stages), with each stage having only one machine. If any stage contains more than one machine, providing an alternate processing facility, then the problem becomes a flexible flow shop problem (FFSP). FFSP, which contains all the complexities involved in simple flow shop and parallel machine scheduling problems, is a well-known NP-hard (non-deterministic polynomial time) problem. Owing to the high computational complexity involved in solving these problems, it is not always possible to obtain an optimal solution in a reasonable computation time. To obtain near-optimal solutions in a reasonable computation time, a large variety of meta-heuristics have been proposed in the past. However, tuning algorithm-specific parameters for solving FFSP is rather tricky and time consuming. To address this limitation, teaching-learning-based optimization (TLBO) and the JAYA algorithm are chosen for the study, because not only are they recent meta-heuristics, they also do not require tuning of algorithm-specific parameters. Although these algorithms seem to be elegant, they lose solution diversity after a few iterations and get trapped at local optima. To alleviate this drawback, a new local search procedure is proposed in this paper to improve the solution quality. Further, a mutation strategy (inspired by genetic algorithms) is incorporated in the basic algorithm to maintain solution diversity in the population. Computational experiments have been conducted on standard benchmark problems to calculate makespan and computational time. It is found that the rate of convergence of TLBO is superior to that of JAYA. From the results, it is found that TLBO and JAYA outperform many algorithms reported in the literature and can be treated as efficient methods for solving the FFSP.
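The appeal of JAYA noted above is that its position update has no algorithm-specific parameters: each candidate moves toward the current best solution and away from the current worst. A minimal numpy sketch on a toy sphere function (the population size, iteration count and bounds below are invented for illustration, and a continuous test function stands in for the FFSP encoding):

```python
import numpy as np

def jaya(f, bounds, pop=20, iters=200, seed=0):
    """Parameter-free JAYA: move toward the best solution, away from the worst."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    X = rng.uniform(lo, hi, size=(pop, len(lo)))
    fx = np.apply_along_axis(f, 1, X)
    for _ in range(iters):
        best, worst = X[np.argmin(fx)], X[np.argmax(fx)]
        r1, r2 = rng.random(X.shape), rng.random(X.shape)
        # JAYA update rule: attraction to best, repulsion from worst.
        Xn = np.clip(X + r1 * (best - np.abs(X)) - r2 * (worst - np.abs(X)), lo, hi)
        fn = np.apply_along_axis(f, 1, Xn)
        improve = fn < fx                 # greedy replacement
        X[improve], fx[improve] = Xn[improve], fn[improve]
    return X[np.argmin(fx)], float(fx.min())

sphere = lambda x: float(np.sum(x ** 2))
lo, hi = np.array([-5.0, -5.0]), np.array([5.0, 5.0])
xbest, fbest = jaya(sphere, (lo, hi))
```

With greedy replacement each individual's objective value is non-increasing, so the population monotonically improves; the flip side, as the abstract notes, is a tendency to lose diversity, which motivates the paper's added local search and mutation.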
Energy Technology Data Exchange (ETDEWEB)
Graham, Emily B. [Biological Sciences Division, Pacific Northwest National Laboratory, Richland WA USA; Crump, Alex R. [Biological Sciences Division, Pacific Northwest National Laboratory, Richland WA USA; Resch, Charles T. [Geochemistry Department, Pacific Northwest National Laboratory, Richland WA USA; Fansler, Sarah [Biological Sciences Division, Pacific Northwest National Laboratory, Richland WA USA; Arntzen, Evan [Environmental Compliance and Emergency Preparation, Pacific Northwest National Laboratory, Richland WA USA; Kennedy, David W. [Biological Sciences Division, Pacific Northwest National Laboratory, Richland WA USA; Fredrickson, Jim K. [Biological Sciences Division, Pacific Northwest National Laboratory, Richland WA USA; Stegen, James C. [Biological Sciences Division, Pacific Northwest National Laboratory, Richland WA USA
2017-03-28
Subsurface zones of groundwater and surface water mixing (hyporheic zones) are regions of enhanced rates of biogeochemical cycling, yet ecological processes governing hyporheic microbiome composition and function through space and time remain unknown. We sampled attached and planktonic microbiomes in the Columbia River hyporheic zone across seasonal hydrologic change, and employed statistical null models to infer mechanisms generating temporal changes in microbiomes within three hydrologically-connected, physicochemically-distinct geographic zones (inland, nearshore, river). We reveal that microbiomes remain dissimilar through time across all zones and habitat types (attached vs. planktonic) and that deterministic assembly processes regulate microbiome composition in all data subsets. The consistent presence of heterotrophic taxa and members of the Planctomycetes-Verrucomicrobia-Chlamydiae (PVC) superphylum nonetheless suggests common selective pressures for physiologies represented in these groups. Further, co-occurrence networks were used to provide insight into taxa most affected by deterministic assembly processes. We identified network clusters to represent groups of organisms that correlated with seasonal and physicochemical change. Extended network analyses identified keystone taxa within each cluster that we propose are central in microbiome composition and function. Finally, the abundance of one network cluster of nearshore organisms exhibited a seasonal shift from heterotrophic to autotrophic metabolisms and correlated with microbial metabolism, possibly indicating an ecological role for these organisms as foundational species in driving biogeochemical reactions within the hyporheic zone. Taken together, our research demonstrates a predominant role for deterministic assembly across highly-connected environments and provides insight into niche dynamics associated with seasonal changes in hyporheic microbiome composition and metabolism.
Machining of Metal Matrix Composites
2012-01-01
Machining of Metal Matrix Composites provides the fundamentals and recent advances in the study of machining of metal matrix composites (MMCs). Each chapter is written by an international expert in this important field of research. Machining of Metal Matrix Composites gives the reader information on machining of MMCs with a special emphasis on aluminium matrix composites. Chapter 1 provides the mechanics and modelling of chip formation for traditional machining processes. Chapter 2 is dedicated to surface integrity when machining MMCs. Chapter 3 describes the machinability aspects of MMCs. Chapter 4 contains information on traditional machining processes and Chapter 5 is dedicated to the grinding of MMCs. Chapter 6 describes the dry cutting of MMCs with SiC particulate reinforcement. Finally, Chapter 7 is dedicated to computational methods and optimization in the machining of MMCs. Machining of Metal Matrix Composites can serve as a useful reference for academics, manufacturing and materials researchers, manu...
Optimization of water removal in the press section of a paper machine
Directory of Open Access Journals (Sweden)
D. M. D. Drummond
2010-06-01
Full Text Available An optimization problem regarding water removal in the press section of a paper machine is considered in this work. The proposed model aims to minimize a cost function comprising the replacement of the felts in the press section, the cost of energy to operate the press and the cost of energy in the drying section, while satisfying the constraints of the water mass balance in the process. The model is classified as a mixed-integer nonlinear program (MINLP) in which the most important decisions are: (a) the sequence of paper to produce, or when to produce the paper; (b) whether the felts need to be exchanged; and (c) when to exchange the felts. Numerical examples are presented to illustrate the performance of the model.
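The felt-replacement trade-off described in this abstract can be caricatured with a toy model: brute-force enumeration over the binary replace/keep decisions stands in for the MINLP solver, and every coefficient, the felt-degradation law and the four-period horizon are invented for illustration only — the actual model in the paper is far richer.

```python
from itertools import product

# Hypothetical 4-period horizon; all coefficients are illustrative only.
T = 4
felt_cost = 20.0          # cost of one felt replacement
press_energy = 10.0       # energy cost per period to run the press
dry_cost_per_kg = 2.0     # drying-section cost per kg of water left in the sheet
water_in = 30.0           # kg of water entering the press per period

def water_removed(age):
    """Press efficiency decays as the felt ages (illustrative model)."""
    return 20.0 * (0.7 ** age)

best = None
for plan in product([0, 1], repeat=T):   # 1 = replace felt at start of period
    cost, age = 0.0, 0
    for t in range(T):
        if plan[t]:
            cost += felt_cost
            age = 0
        residual = water_in - water_removed(age)   # water left for the dryer
        cost += press_energy + dry_cost_per_kg * residual
        age += 1
    if best is None or cost < best[0]:
        best = (cost, plan)

print(best)   # cheapest plan replaces the felt once, mid-horizon
```

With these made-up numbers the optimum replaces the felt exactly once, at the start of period 3, balancing the replacement cost against the growing drying-energy cost of a worn felt.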
Towards Massive Machine Type Cellular Communications
Dawy, Zaher; Saad, Walid; Ghosh, Arunabha; Andrews, Jeffrey G.; Yaacoub, Elias
2015-01-01
Cellular networks have been engineered and optimized to carry ever-increasing amounts of mobile data, but over the last few years, a new class of applications based on machine-centric communications has begun to emerge. Automated devices such as sensors, tracking devices, and meters - often referred to as machine-to-machine (M2M) or machine-type communications (MTC) - introduce an attractive revenue stream for mobile network operators, if a massive number of them can be efficiently support...
Optimization of the Development of a Plastic Recycling Machine ...
African Journals Online (AJOL)
Nigerian Journal of Technology ... The performance test analysis carried out defines the characteristics of the machine and shows that, at a speed of 268 rpm, the machine performs its task effectively and efficiently, producing a high recycling efficiency (recyclability) of 97% and taking 2 minutes to recycle a ...
Deterministic hydrodynamics: Taking blood apart
Davis, John A.; Inglis, David W.; Morton, Keith J.; Lawrence, David A.; Huang, Lotien R.; Chou, Stephen Y.; Sturm, James C.; Austin, Robert H.
2006-10-01
We show the fractionation of whole blood components and isolation of blood plasma with no dilution by using a continuous-flow deterministic array that separates blood components by their hydrodynamic size, independent of their mass. We use deterministic arrays that we developed, which separate white blood cells, red blood cells, and platelets from blood plasma at flow velocities of 1,000 μm/sec and volume rates up to 1 μl/min. We verified by flow cytometry that an array using focused injection removed 100% of the lymphocytes and monocytes from the main red blood cell and platelet stream. Using a second design, we demonstrated the separation of blood plasma from the blood cells (white, red, and platelets) with virtually no dilution of the plasma and no cellular contamination of the plasma. Keywords: cells, plasma, separation, microfabrication
APPLICATION OF THE PERFORMANCE SELECTION INDEX METHOD FOR SOLVING MACHINING MCDM PROBLEMS
Directory of Open Access Journals (Sweden)
Dušan Petković
2017-04-01
Full Text Available The complex nature of machining processes requires the use of different methods and techniques for process optimization. Over the past few years a number of different optimization methods have been proposed for solving continuous machining optimization problems. In the manufacturing environment, engineers also face a number of discrete machining optimization problems. In order to help decision makers solve this type of optimization problem, a number of multi-criteria decision making (MCDM) methods have been proposed. This paper introduces the use of an almost unexplored MCDM method, the performance selection index (PSI) method, for solving machining MCDM problems. The main motivation for using the PSI method is that, unlike other MCDM methods, it does not require the determination of criteria weights. The applicability and effectiveness of the PSI method are demonstrated by solving two case studies dealing with the machinability of materials and the selection of the most suitable cutting fluid for a given machining application. The obtained rankings correlate well with those derived by past researchers using other MCDM methods, which validates the usefulness of this method for solving machining MCDM problems.
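A minimal sketch of the PSI computation the abstract refers to, on a made-up 3×3 decision matrix (alternatives, criteria and values are hypothetical). The steps follow the standard PSI formulation: normalize, measure each criterion's preference variation, derive an overall preference weight without any user-supplied criteria weights, then score each alternative.

```python
# Performance Selection Index (PSI) on a small, invented decision matrix:
# rows = alternatives, columns = criteria; "max" marks a beneficial criterion.
alts = ["A", "B", "C"]
X = [[7.0, 9.0, 5.0],
     [8.0, 7.0, 3.0],
     [9.0, 6.0, 4.0]]
kinds = ["max", "max", "min"]

n, k = len(X), len(X[0])

# Step 1: normalize (beneficial: x / x_max; non-beneficial: x_min / x)
R = [[0.0] * k for _ in range(n)]
for j in range(k):
    col = [X[i][j] for i in range(n)]
    for i in range(n):
        R[i][j] = col[i] / max(col) if kinds[j] == "max" else min(col) / col[i]

# Steps 2-4: preference variation, deviation, and overall preference weights
psi = []
for j in range(k):
    mean = sum(R[i][j] for i in range(n)) / n
    phi = sum((R[i][j] - mean) ** 2 for i in range(n))   # preference variation
    psi.append(1.0 - phi)                                 # deviation
total = sum(psi)
psi = [p / total for p in psi]                            # overall preference

# Step 5: performance selection index and ranking
scores = [sum(R[i][j] * psi[j] for j in range(k)) for i in range(n)]
best = alts[max(range(n), key=lambda i: scores[i])]
print(scores, best)
```

Note how the weights emerge from the data alone, which is exactly the property the abstract highlights as the PSI method's main motivation.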
Optimizing integrated airport surface and terminal airspace operations under uncertainty
Bosson, Christabelle S.
In airports and surrounding terminal airspaces, the integration of surface, arrival and departure scheduling and routing has the potential to improve operational efficiency. Moreover, because both the airport surface and the terminal airspace are often altered by random perturbations, the consideration of uncertainty in flight schedules is crucial to improve the design of robust flight schedules. Previous research mainly focused on independently solving arrival scheduling problems, departure scheduling problems and surface management scheduling problems, and most of the developed models are deterministic. This dissertation presents an alternate method to model the integrated operations by using a machine job-shop scheduling formulation. A multistage stochastic programming approach is chosen to formulate the problem in the presence of uncertainty, and candidate solutions are obtained by solving sample average approximation problems with finite sample size. The developed mixed-integer linear programming-based scheduler is capable of computing optimal aircraft schedules and routings that reflect the integration of air and ground operations. The assembled methodology is applied to a Los Angeles case study. To show the benefits of integrated operations over First-Come-First-Served, a preliminary proof-of-concept is conducted for a set of fourteen aircraft evolving under deterministic conditions in a model of the Los Angeles International Airport surface and surrounding terminal areas. Using historical data, a representative 30-minute traffic schedule and aircraft mix scenario is constructed. The results of the Los Angeles application show that the integration of air and ground operations and the use of a time-based separation strategy enable both significant surface and air time savings. The solution computed by the optimization provides a more efficient routing and scheduling than the First-Come-First-Served solution. Additionally, a data driven analysis is
Conway, Drew
2012-01-01
If you're an experienced programmer interested in crunching data, this book will get you started with machine learning: a toolkit of algorithms that enables computers to train themselves to automate useful tasks. Authors Drew Conway and John Myles White help you understand machine learning and statistics tools through a series of hands-on case studies, instead of a traditional math-heavy presentation. Each chapter focuses on a specific problem in machine learning, such as classification, prediction, optimization, and recommendation. Using the R programming language, you'll learn how to analyz
Reduced-Complexity Deterministic Annealing for Vector Quantizer Design
Directory of Open Access Journals (Sweden)
Ortega Antonio
2005-01-01
Full Text Available This paper presents a reduced-complexity deterministic annealing (DA) approach for vector quantizer (VQ) design by using soft information processing with simplified assignment measures. Low-complexity distributions are designed to mimic the Gibbs distribution, where the latter is the optimal distribution used in the standard DA method. These low-complexity distributions are simple enough to facilitate fast computation, but at the same time they can closely approximate the Gibbs distribution to result in near-optimal performance. We have also derived the theoretical performance loss at a given system entropy due to using the simple soft measures instead of the optimal Gibbs measure. We use the derived result to obtain optimal annealing schedules for the simple soft measures that approximate the annealing schedule for the optimal Gibbs distribution. The proposed reduced-complexity DA algorithms have significantly improved the quality of the final codebooks compared to the generalized Lloyd algorithm and standard stochastic relaxation techniques, both with and without the pairwise nearest neighbor (PNN) codebook initialization. The proposed algorithms are able to evade local minima and the results show that they are not sensitive to the choice of the initial codebook. Compared to the standard DA approach, the reduced-complexity DA algorithms can operate over 100 times faster with negligible performance difference. For example, for the design of a 16-dimensional vector quantizer having a rate of 0.4375 bit/sample for a Gaussian source, the standard DA algorithm achieved 3.60 dB performance in 16 483 CPU seconds, whereas the reduced-complexity DA algorithm achieved the same performance in 136 CPU seconds. Other than VQ design, the DA techniques are applicable to problems such as classification, clustering, and resource allocation.
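The standard DA iteration this paper simplifies — Gibbs-weighted centroid updates under a cooling schedule — can be sketched for a toy 1-D source with two codewords. All cooling parameters and the data are illustrative; the paper's contribution is precisely to replace the exact Gibbs weights computed here with cheaper surrogates.

```python
import math

data = [0.0, 1.0, 2.0, 10.0, 11.0, 12.0]
# Two codewords, initialized near the data mean with a tiny perturbation
# so that annealing can break the symmetry as the temperature drops.
codebook = [6.0, 6.01]

T = 10.0
while T > 1e-3:
    for _ in range(20):                      # fixed-point iterations at this T
        sums = [0.0, 0.0]
        wts = [0.0, 0.0]
        for x in data:
            d = [(x - c) ** 2 for c in codebook]
            dmin = min(d)                    # shift for numerical stability
            p = [math.exp(-(dj - dmin) / T) for dj in d]   # Gibbs weights
            Z = sum(p)
            for j in range(2):
                w = p[j] / Z
                sums[j] += w * x
                wts[j] += w
        codebook = [sums[j] / wts[j] for j in range(2)]    # soft centroid update
    T *= 0.8                                 # geometric cooling schedule
codebook.sort()
print(codebook)   # ends near the two cluster means
```

As T falls the soft assignments harden into nearest-neighbor assignments, so the final codebook coincides with the k-means solution while the high-temperature phase steers it past poor local minima.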
Parallel-Machine Scheduling with Time-Dependent and Machine Availability Constraints
Directory of Open Access Journals (Sweden)
Cuixia Miao
2015-01-01
Full Text Available We consider the parallel-machine scheduling problem in which the machines have availability constraints and the processing time of each job is a simple linear increasing function of its starting time. For the makespan minimization problem, which is NP-hard in the strong sense, we discuss the Longest Deteriorating Rate algorithm and the List Scheduling algorithm; we also provide a lower bound on any optimal schedule. For the total completion time minimization problem, we analyze the strong NP-hardness, and we present a dynamic programming algorithm and a fully polynomial time approximation scheme for the two-machine problem. Furthermore, we extend the dynamic programming algorithm to the total weighted completion time minimization problem.
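The two heuristics named in the abstract can be sketched for the deteriorating-job model p_j(s) = a_j·s, under a simplifying assumption: the availability constraints are reduced to a common machine release time t0 > 0, and all deterioration rates are invented.

```python
import heapq

def ls_makespan(rates, m, t0=1.0):
    """List Scheduling for deteriorating jobs with p_j(s) = a_j * s.
    A job with rate a_j started at time s finishes at s * (1 + a_j).
    All m machines become available at t0 (> 0, simplified availability)."""
    heap = [t0] * m                     # current free time of each machine
    heapq.heapify(heap)
    for a in rates:
        s = heapq.heappop(heap)         # earliest-available machine
        heapq.heappush(heap, s * (1.0 + a))
    return max(heap)

rates = [0.5, 0.3, 0.2, 0.4, 0.1]
ldr = sorted(rates, reverse=True)       # Longest Deteriorating Rate order
print(ls_makespan(rates, 2), ls_makespan(ldr, 2))
```

Because completion times multiply rather than add, sorting by the largest deterioration rate first (LDR) schedules the fastest-growing jobs while start times are still small, which here yields a strictly smaller makespan than plain List Scheduling.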
Classification and unification of the microscopic deterministic traffic models.
Yang, Bo; Monterola, Christopher
2015-10-01
We identify a universal mathematical structure in microscopic deterministic traffic models (with identical drivers), and thus we show that all such existing models in the literature, including both the two-phase and three-phase models, can be understood as special cases of a master model by expansion around a set of well-defined ground states. This allows any two traffic models to be properly compared and identified. The three-phase models are characterized by the vanishing of leading orders of expansion within a certain density range, and as an example the popular intelligent driver model is shown to be equivalent to a generalized optimal velocity (OV) model. We also explore the diverse solutions of the generalized OV model that can be important both for understanding human driving behaviors and algorithms for autonomous driverless vehicles.
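A generalized OV model of the kind the authors expand around can be simulated directly. The sketch below integrates the classic car-following equation dv_n/dt = κ(V(Δx_n) − v_n) on a ring road with an illustrative tanh-shaped optimal velocity function; κ is chosen inside the linearly stable regime, so an initial perturbation decays back to uniform flow.

```python
import math

# Optimal velocity function (a common tanh form; parameters illustrative)
def V(dx):
    return 16.8 * (math.tanh(0.086 * (dx - 25.0)) + 0.913)

N, L, kappa, dt = 10, 400.0, 2.0, 0.05   # kappa > 2 V'(h): stable regime
x = [i * L / N + (2.0 if i == 0 else 0.0) for i in range(N)]  # one car shifted
v = [V(L / N)] * N                        # start at the uniform-flow speed

for _ in range(20000):                    # forward-Euler integration on a ring
    dx = [(x[(i + 1) % N] - x[i]) % L for i in range(N)]      # headways
    a = [kappa * (V(dx[i]) - v[i]) for i in range(N)]
    x = [(x[i] + v[i] * dt) % L for i in range(N)]
    v = [v[i] + a[i] * dt for i in range(N)]

dev = max(abs(V(L / N) - vi) for vi in v)
print(dev)   # residual deviation from uniform flow
```

Lowering κ below the linear-stability threshold 2·V'(h) would instead amplify the perturbation into a stop-and-go wave, which is the phase-transition behavior the expansion in the paper classifies.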
Directory of Open Access Journals (Sweden)
Wenliao Du
2013-01-01
Full Text Available Promptly and accurately dealing with equipment breakdown is very important in terms of enhancing reliability and decreasing downtime. A novel fault diagnosis method, PSO-RVM, based on relevance vector machines (RVM) with the particle swarm optimization (PSO) algorithm, is proposed for the plunger pump in a truck crane. The particle swarm optimization algorithm is utilized to determine the kernel width parameter of the kernel function in the RVM, and five two-class RVMs with a binary tree architecture are trained to recognize the condition of the mechanism. The proposed method is employed in the diagnosis of the plunger pump in a truck crane. Six states, including the normal state, bearing inner race fault, bearing roller fault, plunger wear fault, thrust plate wear fault, and swash plate wear fault, are used to test the classification performance of the proposed PSO-RVM model, which is compared with classical models, namely back-propagation artificial neural network (BP-ANN), ant colony optimization artificial neural network (ANT-ANN), RVM, and support vector machines with particle swarm optimization (PSO-SVM), respectively. The experimental results show that the PSO-RVM is superior to the first three classical models and has performance comparable to the PSO-SVM, with the corresponding diagnostic accuracies reaching 99.17% and 99.58%, respectively. However, the number of relevance vectors is far fewer than that of support vectors, the former being about 1/12–1/3 of the latter, which indicates that the proposed PSO-RVM model is more suitable for applications that require low complexity and real-time monitoring.
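The PSO step used here to tune the RVM kernel width can be sketched in isolation. Since a full RVM is out of scope, a convex stand-in objective replaces the cross-validation error; its minimum at width 0.7 and every swarm parameter below are arbitrary assumptions.

```python
import random
random.seed(1)

# Stand-in for the RVM validation error as a function of kernel width:
# a 1-D function with a known minimum at width = 0.7 (illustrative only).
def validation_error(width):
    return (width - 0.7) ** 2 + 0.05

n, iters = 12, 60
pos = [random.uniform(0.01, 5.0) for _ in range(n)]
vel = [0.0] * n
pbest = pos[:]
pbest_f = [validation_error(p) for p in pos]
g = min(range(n), key=lambda i: pbest_f[i])
gbest, gbest_f = pbest[g], pbest_f[g]

w, c1, c2 = 0.7, 1.5, 1.5        # inertia and acceleration coefficients
for _ in range(iters):
    for i in range(n):
        r1, r2 = random.random(), random.random()
        vel[i] = (w * vel[i] + c1 * r1 * (pbest[i] - pos[i])
                  + c2 * r2 * (gbest - pos[i]))
        pos[i] = min(5.0, max(0.01, pos[i] + vel[i]))  # keep width positive
        f = validation_error(pos[i])
        if f < pbest_f[i]:
            pbest[i], pbest_f[i] = pos[i], f
            if f < gbest_f:
                gbest, gbest_f = pos[i], f

print(gbest)   # swarm's best kernel width estimate
```

In the paper the objective is the classifier's diagnostic error on held-out vibration data, but the swarm mechanics are exactly these velocity and position updates.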
International Nuclear Information System (INIS)
Maheri, Alireza
2014-01-01
Reliability of a hybrid renewable energy system (HRES) strongly depends on various uncertainties affecting the amount of power produced by the system. In the design of systems subject to uncertainties, both deterministic and nondeterministic design approaches can be adopted. In a deterministic design approach, the designer considers the presence of uncertainties and incorporates them indirectly into the design by applying safety factors. It is assumed that, by employing suitable safety factors and considering worst-case scenarios, reliable systems can be designed. In fact, the multi-objective optimisation problem with two objectives of reliability and cost is reduced to a single-objective optimisation problem with the objective of cost only. In this paper the competence of deterministic design methods in size optimisation of reliable standalone wind–PV–battery, wind–PV–diesel and wind–PV–battery–diesel configurations is examined. For each configuration, first, using different values of safety factors, the optimal size of the system components which minimises the system cost is found deterministically. Then, for each case, using a Monte Carlo simulation, the effect of safety factors on the reliability and the cost are investigated. In performing reliability analysis, several reliability measures, namely, unmet load, blackout durations (total, maximum and average) and mean time between failures are considered. It is shown that the traditional methods of considering the effect of uncertainties in deterministic designs such as design for an autonomy period and employing safety factors have either little or unpredictable impact on the actual reliability of the designed wind–PV–battery configuration. In the case of wind–PV–diesel and wind–PV–battery–diesel configurations it is shown that, while using a high-enough margin of safety in sizing diesel generator leads to reliable systems, the optimum value for this margin of safety leading to a
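The safety-factor-versus-reliability experiment described above can be caricatured with a Monte Carlo estimate of unmet-load probability. Gaussian load and generation with invented means and spreads stand in for the full HRES power models used in the paper; the point is only that the reliability delivered by a given deterministic safety factor must be checked stochastically.

```python
import random
random.seed(42)

# Illustrative sizing: mean daily generation vs mean load, with a
# deterministic safety factor applied to the installed capacity.
mean_load, mean_gen = 100.0, 100.0

def unmet_load_fraction(safety_factor, n=20000):
    """Monte Carlo estimate of P(generation < load) for a given safety factor."""
    unmet = 0
    for _ in range(n):
        load = random.gauss(mean_load, 15.0)                  # demand spread
        gen = random.gauss(mean_gen * safety_factor, 25.0)    # resource spread
        if gen < load:
            unmet += 1
    return unmet / n

for sf in (1.0, 1.2, 1.5):
    print(sf, unmet_load_fraction(sf))
```

Even in this toy version the mapping from safety factor to reliability is nonlinear and depends on the uncertainty spreads, which is the paper's argument for validating deterministic designs with simulation rather than trusting the margin of safety alone.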
Deterministic analyses of severe accident issues
International Nuclear Information System (INIS)
Dua, S.S.; Moody, F.J.; Muralidharan, R.; Claassen, L.B.
2004-01-01
Severe accidents in light water reactors involve complex physical phenomena. In the past there has been a heavy reliance on simple assumptions regarding physical phenomena, alongside probability methods, to evaluate the risks associated with severe accidents. Recently GE has developed realistic methodologies that permit deterministic evaluations of severe accident progression and of some of the associated phenomena in the case of Boiling Water Reactors (BWRs). These deterministic analyses indicate that with appropriate system modifications and operator actions, core damage can be prevented in most cases. Furthermore, in cases where core-melt is postulated, containment failure can either be prevented or significantly delayed to allow sufficient time for recovery actions to mitigate severe accidents.
Strength analysis and optimization of writing mechanism of steel billet marking machine
Directory of Open Access Journals (Sweden)
Fu Min
2017-01-01
Full Text Available Based on the plasma-arc-nicking theory of steel billet marking, the paper designs a dual laser ranging marking machine for online marking of special steel billets and realizes multi-character marking on the end face of hot steel billets. The writing mechanism is based on a rectangular-coordinate marking layout, and the Z axis adopts a cantilever structure. The mechanism completes the overall marking task through the synergy of the KK modules on the X, Y and Z axes. Modal analysis is performed in ANSYS Workbench on the writing mechanism model established in Pro/Engineer at position X1Y1Z1, obtaining the first six modal frequencies and analyzing vibration in the writing process. Moreover, the paper analyzes the static structure of the cantilever of the writing mechanism and computes its maximum stress and total deformation. To make the writing mechanism reach the target of light weight, the Z-axis cantilever is optimized. The analysis shows that the optimized Z-axis cantilever still meets the strength and rigidity requirements while the total mass declines by approximately 30%.
Dynamic Characteristics of a New Machine for Fatigue Testing of Railway Axles – Part 2
Directory of Open Access Journals (Sweden)
Karel FRYDRÝŠEK
2012-06-01
Full Text Available Proposal calculations were carried out for a new testing machine. This new testing machine is intended for dynamic fatigue testing of railway axles, which are subjected to bending and rotation (centrifugal effects). For the proper design of the new machine it is very important to know the basic dynamic characteristics of the whole system. These dynamic characteristics are solved via FEM (MSC.Marc/Mentat software) in combination with the SBRA (Simulation-Based Reliability Assessment) method, a probabilistic Monte Carlo approach using Anthill and Python software. The proposed dimensions and springs of the new machine for fatigue testing of railway axles were used for manufacturing. Application of the SBRA method connected with FEM in these areas is a new and innovative trend in mechanics. This paper is a continuation of former work (i.e. an easier deterministic approach) already presented in this journal in 2007.
Considerations upon the Machine Learning Technologies
Alin Munteanu; Cristina Ofelia Sofran
2006-01-01
Artificial intelligence offers superior techniques and methods by which problems from diverse domains may find an optimal solution. Machine Learning technologies refer to the domain of artificial intelligence that aims to develop techniques allowing computers to “learn”. Some systems based on Machine Learning technologies tend to eliminate the necessity of human intelligence, while others adopt a man-machine collaborative approach.
An Optimized Prediction Intervals Approach for Short Term PV Power Forecasting
Directory of Open Access Journals (Sweden)
Qiang Ni
2017-10-01
Full Text Available High quality photovoltaic (PV) power prediction intervals (PIs) are essential to power system operation and planning. To improve the reliability and sharpness of PIs, a new method is proposed in this paper which accounts for both model uncertainties and noise uncertainties, and PIs are constructed with a two-step formulation. In the first step, the variance of the model uncertainties is obtained by using an extreme learning machine (ELM) to make deterministic forecasts of PV power. In the second step, an innovative PI-based cost function is developed to optimize the parameters of the ELM, and noise uncertainties are quantified in terms of variance. The performance of the proposed approach is examined using PV power and meteorological data measured from a 1 kW rooftop DC micro-grid system. The validity of the proposed method is verified by comparing the experimental analysis with other benchmarking methods, and the results exhibit superior performance.
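The two properties traded off when optimizing PIs — reliability and sharpness — are commonly measured by interval coverage (PICP) and normalized average width (PINAW), and both are easy to state in code. The toy PV series below is invented; a PI-based cost function of the kind the paper optimizes combines terms like these.

```python
# PI coverage probability (PICP) and normalized average width (PINAW):
# the standard reliability and sharpness criteria for prediction intervals.
def picp(lower, upper, actual):
    hits = sum(1 for l, u, y in zip(lower, upper, actual) if l <= y <= u)
    return hits / len(actual)

def pinaw(lower, upper, actual):
    r = max(actual) - min(actual)          # normalize by the target range
    return sum(u - l for l, u in zip(lower, upper)) / (len(actual) * r)

# Toy PV-power example (kW), purely illustrative
actual = [0.2, 0.5, 0.8, 0.6, 0.3]
lower  = [0.1, 0.4, 0.6, 0.4, 0.1]
upper  = [0.4, 0.7, 0.9, 0.7, 0.5]

print(picp(lower, upper, actual), pinaw(lower, upper, actual))
```

Widening every interval drives PICP toward 1 but inflates PINAW, so an optimizer must balance the two — which is exactly the reliability/sharpness trade-off the abstract describes.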
Deterministic and stochastic CTMC models from Zika disease transmission
Zevika, Mona; Soewono, Edy
2018-03-01
Zika infection is one of the most important mosquito-borne diseases in the world. Zika virus (ZIKV) is transmitted by many Aedes-type mosquitoes including Aedes aegypti. Pregnant women with the Zika virus are at risk of having a fetus or infant with a congenital defect and suffering from microcephaly. Here, we formulate a Zika disease transmission model using two approaches, a deterministic model and a continuous-time Markov chain stochastic model. The basic reproduction ratio is constructed from a deterministic model. Meanwhile, the CTMC stochastic model yields an estimate of the probability of extinction and outbreaks of Zika disease. Dynamical simulations and analysis of the disease transmission are shown for the deterministic and stochastic models.
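The extinction-probability estimate that the CTMC formulation yields can be sketched with a minimal SIR-type jump chain, used here as a stand-in for the full host-vector Zika model: the rates and population size are invented, and R0 = 2 is chosen so that the branching-process approximation predicts extinction with probability about 1/R0.

```python
import random
random.seed(7)

# Minimal SIR-type CTMC: infection at rate beta*S*I/N, recovery at gamma*I,
# so R0 = beta/gamma = 2. Only the embedded jump chain is simulated, since
# holding times do not affect whether the outbreak goes extinct.
N, beta, gamma = 500, 2.0, 1.0

def outbreak_dies_out(threshold=50):
    """One CTMC sample path from a single infective; True if I hits 0
    before reaching the (arbitrary) major-outbreak threshold."""
    S, I = N - 1, 1
    while 0 < I < threshold:
        inf = beta * S * I / N
        rec = gamma * I
        if random.random() < inf / (inf + rec):   # next event is an infection
            S, I = S - 1, I + 1
        else:                                     # next event is a recovery
            I -= 1
    return I == 0

runs = 2000
p_ext = sum(outbreak_dies_out() for _ in range(runs)) / runs
print(p_ext)   # branching-process approximation predicts about 1/R0 = 0.5
```

This is the qualitative distinction the abstract draws: the deterministic model always predicts an outbreak when R0 > 1, while the CTMC assigns a positive probability, roughly (1/R0)^I0, that the infection fades out from a small introduction.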
ICRP (1991) and deterministic effects
International Nuclear Information System (INIS)
Mole, R.H.
1992-01-01
A critical review of ICRP Publication 60 (1991) shows that considerable revisions are needed in both language and thinking about deterministic effects (DE). ICRP (1991) makes a welcome and clear distinction between change, caused by irradiation; damage, some degree of deleterious change, for example to cells, but not necessarily deleterious to the exposed individual; harm, clinically observable deleterious effects expressed in individuals or their descendants; and detriment, a complex concept combining the probability, severity and time of expression of harm (para 42). (All added emphases come from the author.) Unfortunately these distinctions are not carried through into the discussion of deterministic effects (DE), and two important terms are left undefined. Presumably effect may refer to change, damage, harm or detriment, according to context. Clinically observable is also undefined, although its meaning is crucial to any consideration of DE, since DE are defined as causing observable harm (para 20). (Author)
Machine Learning Methods for Analysis of Metabolic Data and Metabolic Pathway Modeling.
Cuperlovic-Culf, Miroslava
2018-01-11
Machine learning uses experimental data to optimize clustering or classification of samples or features, or to develop, augment or verify models that can be used to predict behavior or properties of systems. It is expected that machine learning will help provide actionable knowledge from a variety of big data including metabolomics data, as well as results of metabolism models. A variety of machine learning methods has been applied in bioinformatics and metabolism analyses including self-organizing maps, support vector machines, the kernel machine, Bayesian networks or fuzzy logic. To a lesser extent, machine learning has also been utilized to take advantage of the increasing availability of genomics and metabolomics data for the optimization of metabolic network models and their analysis. In this context, machine learning has aided the development of metabolic networks, the calculation of parameters for stoichiometric and kinetic models, as well as the analysis of major features in the model for the optimal application of bioreactors. Examples of this very interesting, albeit highly complex, application of machine learning for metabolism modeling will be the primary focus of this review presenting several different types of applications for model optimization, parameter determination or system analysis using models, as well as the utilization of several different types of machine learning technologies.
Machine Learning Methods for Analysis of Metabolic Data and Metabolic Pathway Modeling
Cuperlovic-Culf, Miroslava
2018-01-01
Machine learning uses experimental data to optimize clustering or classification of samples or features, or to develop, augment or verify models that can be used to predict behavior or properties of systems. It is expected that machine learning will help provide actionable knowledge from a variety of big data including metabolomics data, as well as results of metabolism models. A variety of machine learning methods has been applied in bioinformatics and metabolism analyses including self-organizing maps, support vector machines, the kernel machine, Bayesian networks or fuzzy logic. To a lesser extent, machine learning has also been utilized to take advantage of the increasing availability of genomics and metabolomics data for the optimization of metabolic network models and their analysis. In this context, machine learning has aided the development of metabolic networks, the calculation of parameters for stoichiometric and kinetic models, as well as the analysis of major features in the model for the optimal application of bioreactors. Examples of this very interesting, albeit highly complex, application of machine learning for metabolism modeling will be the primary focus of this review presenting several different types of applications for model optimization, parameter determination or system analysis using models, as well as the utilization of several different types of machine learning technologies. PMID:29324649
When to conduct probabilistic linkage vs. deterministic linkage? A simulation study.
Zhu, Ying; Matsuyama, Yutaka; Ohashi, Yasuo; Setoguchi, Soko
2015-08-01
When unique identifiers are unavailable, successful record linkage depends greatly on data quality and the types of variables available. While probabilistic linkage theoretically captures more true matches than deterministic linkage by allowing imperfection in identifiers, studies have shown inconclusive results, likely due to variations in data quality, implementation of linkage methodology and validation method. This simulation study aimed to understand the data characteristics that affect the performance of probabilistic vs. deterministic linkage. We created ninety-six scenarios that represent real-life situations using non-unique identifiers. We systematically introduced a range of discriminative power, rates of missing and error, and file sizes to increase linkage patterns and difficulties. We assessed the performance difference of the linkage methods using standard validity measures and computation time. Across scenarios, deterministic linkage showed an advantage in PPV while probabilistic linkage showed an advantage in sensitivity. Probabilistic linkage uniformly outperformed deterministic linkage, as the former generated linkages with a better trade-off between sensitivity and PPV regardless of data quality. However, with low rates of missing and error in the data, deterministic linkage did not perform significantly worse. The implementation of deterministic linkage in SAS took less than 1 min, and probabilistic linkage took 2 min to 2 h depending on file size. Our simulation study demonstrated that the intrinsic rate of missing and error in linkage variables is key to choosing between linkage methods. In general, probabilistic linkage was a better choice, but for exceptionally good quality data (<5% error), deterministic linkage was a more resource-efficient choice.
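The contrast between the two linkage styles can be sketched with a toy Fellegi–Sunter-style score: deterministic linkage demands exact agreement on all fields, while probabilistic linkage sums log agreement/disagreement weights and so tolerates an error or a missing value in one identifier. All field names and m-/u-probabilities below are invented for illustration.

```python
import math

# Toy records with three non-unique identifiers. Probabilistic linkage
# scores field agreement with log-weights built from m = P(agree | match)
# and u = P(agree | non-match); all values here are illustrative assumptions.
fields = ["name", "year", "zip"]
m = {"name": 0.95, "year": 0.90, "zip": 0.85}
u = {"name": 0.01, "year": 0.05, "zip": 0.10}

def deterministic_match(a, b):
    return all(a[f] == b[f] for f in fields)           # all-fields exact match

def fs_score(a, b):
    s = 0.0
    for f in fields:
        if a[f] is None or b[f] is None:
            continue                                    # missing: no evidence
        if a[f] == b[f]:
            s += math.log(m[f] / u[f])                  # agreement weight
        else:
            s += math.log((1 - m[f]) / (1 - u[f]))      # disagreement weight
    return s

a = {"name": "smith", "year": 1980, "zip": "10001"}
b = {"name": "smith", "year": 1980, "zip": "10002"}     # one typo in zip

print(deterministic_match(a, b), fs_score(a, b))
```

Here the deterministic rule rejects the pair because of the single-digit zip error, while the probabilistic score stays well above zero — the sensitivity/PPV trade-off the study measures then comes down to where the decision threshold on that score is placed.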
Tool path strategy and cutting process monitoring in intelligent machining
Chen, Ming; Wang, Chengdong; An, Qinglong; Ming, Weiwei
2018-06-01
Intelligent machining is a current focus in advanced manufacturing technology, and is characterized by high accuracy and efficiency. A central technology of intelligent machining, online monitoring and optimization of the cutting process, is urgently needed for mass production. In this research, cutting process online monitoring and optimization in jet engine impeller machining, cranio-maxillofacial surgery, and hydraulic servo valve deburring are introduced as examples of intelligent machining. Results show that intelligent tool path optimization and cutting process online monitoring are efficient techniques for improving the efficiency, quality, and reliability of machining.
Directory of Open Access Journals (Sweden)
Sachin Ashok Sonawane
2018-04-01
Full Text Available This paper reports the results of research examining the effects of cutting parameters such as pulse-on time, pulse-off time, servo voltage, peak current, wire feed rate and cable tension on surface finish, overcut and metal removal rate (MRR) during Wire Electrical Discharge Machining (WEDM) of grade-5 titanium (Ti-6Al-4V). Taguchi’s L27 orthogonal design method is used for experimentation. Multi-response optimization is performed by applying weighted principal component analysis (WPCA). The optimum values of the cutting variables are found to be a pulse-on time of 118 μs, pulse-off time of 45 μs, servo voltage of 40 V, peak current of 190 A, wire feed rate of 5 m/min and cable tension of 5 g. On the other hand, Analysis of Variance (ANOVA) simulation results indicate that pulse-on time is the primary influencing variable affecting the response characteristics, contributing 76.00%. The results of verification experiments show improvement in the values of the output characteristics at the optimal cutting variable settings. Scanning electron microscopic (SEM) analysis of the surface after machining indicates the formation of craters, resolidified material, tool material transfer and an increase in the thickness of the recast layer at higher values of the pulse-on time.
Directory of Open Access Journals (Sweden)
Craig George Leslie Hopf
2015-12-01
Full Text Available This paper’s primary alternative hypothesis is Ha: a profitable exchange-traded horserace betting fund with deterministic payoff exists for acceptable institutional portfolio return–risk. The primary hypothesis challenges the semi-strong efficient market hypothesis applied to horse race wagering. An optimal deterministic betting model (DBM) is derived from the existing stochastic model fundamentals, mathematical pooling principles, and a new theorem. The exchange-traded betting fund (ETBF) is derived from force-of-interest first principles. An ETBF driven by DBM processes conjointly defines the research’s betting strategy. Alpha is excess return above a financial benchmark, and the betting strategy alpha is composed of model alpha and fund alpha. The results and analysis from statistical testing of a global stratified data sample of three hundred galloper horse races accepted, at the ninety-five percent confidence level, positive betting strategy alpha, endorsing an exchange-traded horse race betting fund with deterministic payoff in financial markets.
International Nuclear Information System (INIS)
Hoisie, A.; Lubeck, O.; Wasserman, H.
1998-01-01
The authors develop a model for the parallel performance of algorithms that consist of concurrent, two-dimensional wavefronts implemented in a message-passing environment. The model, based on a LogGP machine parameterization, combines the separate contributions of computation and communication wavefronts. They validate the model on three important supercomputer systems, on up to 500 processors. They use data from a deterministic particle transport application taken from the ASCI workload, although the model is general to any wavefront algorithm implemented on a 2-D processor domain. They also use the validated model to make estimates of performance and scalability of wavefront algorithms on 100-TFLOPS computer systems expected to be in existence within the next decade as part of the ASCI program and elsewhere. In this context, the authors analyze two problem sizes. Their model shows that on the largest such problem (1 billion cells), inter-processor communication performance is not the bottleneck; single-node efficiency is the dominant factor.
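The structure of such a model can be sketched as a closed-form time estimate: per-block compute time plus a pipeline-fill term proportional to the number of processor stages the wavefront must cross. The formula and all parameter values below are illustrative assumptions, not the authors' calibrated LogGP model.

```python
# Hedged sketch: analytic time estimate for a 2-D wavefront sweep under a
# LogGP-style model. All parameter values are illustrative assumptions.

def wavefront_time(nx, ny, px, py, t_cell, L, o, G, msg_bytes):
    """Estimate sweep time on a px-by-py processor grid.

    nx, ny    : global cells per dimension
    t_cell    : compute time per cell (s)
    L, o, G   : LogGP latency, overhead, per-byte gap (s)
    msg_bytes : bytes sent per block-boundary message
    """
    bx, by = nx // px, ny // py          # block owned by each processor
    t_block = bx * by * t_cell           # compute time per block
    t_msg = L + 2 * o + G * msg_bytes    # time to pass one boundary message
    # Pipeline fill: the wavefront crosses (px + py - 2) processor stages
    # before the last processor starts; then every processor sweeps its block.
    return (px + py - 2) * (t_block + t_msg) + t_block

t = wavefront_time(nx=1000, ny=1000, px=10, py=10,
                   t_cell=1e-7, L=5e-6, o=1e-6, G=1e-9, msg_bytes=8000)
print(f"estimated sweep time: {t * 1e3:.2f} ms")
```

With these made-up parameters the pipeline-fill term dominates, which mirrors the abstract's point that one can read off from such a model whether communication or single-node compute is the bottleneck.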
Arisoy, Yigit Muzaffer
Manufacturing processes may significantly affect the quality of resultant surfaces and the structural integrity of metal end products. Controlling manufacturing-process-induced changes to the product's surface integrity may improve the fatigue life and overall reliability of the end product. The goal of this study is to model the phenomena that result in microstructural alterations and to improve the surface integrity of manufactured parts by utilizing physics-based process simulations and other computational methods. Two different manufacturing processes, one conventional and one advanced, are studied: machining of titanium and nickel-based alloys, and selective laser melting of nickel-based powder alloys. 3D Finite Element (FE) process simulations are developed, and experimental data that validate these process simulation models are generated to compare against predictions. Computational process modeling and optimization have been performed for machining-induced microstructure, including: i) predicting recrystallization and grain size using FE simulations and the Johnson-Mehl-Avrami-Kolmogorov (JMAK) model, ii) predicting microhardness using non-linear regression models and the Random Forests method, and iii) multi-objective machining optimization for minimizing microstructural changes. Experimental analysis and computational process modeling of selective laser melting have also been conducted, including: i) microstructural analysis of grain sizes and growth directions using SEM imaging and machine learning algorithms, ii) analysis of thermal imaging for spattering, heating/cooling rates and meltpool size, iii) predicting the thermal field, meltpool size, and growth directions via thermal gradients using 3D FE simulations, and iv) predicting localized solidification using the Phase Field method. These computational process models and predictive models, once utilized by industry to optimize process parameters, have the ultimate potential to improve performance of
Performance and portability of the SciBy virtual machine
DEFF Research Database (Denmark)
Andersen, Rasmus; Vinter, Brian
2010-01-01
The Scientific Bytecode Virtual Machine is a virtual machine designed specifically for performance, security, and portability of scientific applications deployed in a Grid environment. The performance overhead normally incurred by virtual machines is mitigated using native optimized scientific li...
Computation of a Canadian SCWR unit cell with deterministic and Monte Carlo codes
International Nuclear Information System (INIS)
Harrisson, G.; Marleau, G.
2012-01-01
The Canadian SCWR has the potential to achieve the goals that generation IV nuclear reactors must meet. As part of the optimization process for this design concept, lattice cell calculations are routinely performed using deterministic codes. In this study, the first step (self-shielding treatment) of the computation scheme developed with the deterministic code DRAGON for the Canadian SCWR has been validated. Some options available in the module responsible for the resonance self-shielding calculation in DRAGON 3.06, and different microscopic cross-section libraries based on the ENDF/B-VII.0 evaluated nuclear data file, have been tested and compared to a reference calculation performed with the Monte Carlo code SERPENT under the same conditions. Compared to SERPENT, DRAGON underestimates the infinite multiplication factor in all cases. In general, the original Stammler model with the Livolant-Jeanpierre approximations is the most appropriate self-shielding option to use in this case study. In addition, the 89-group WIMS-AECL library for slightly enriched uranium and the 172-group WLUP library for a mixture of plutonium and thorium give the results most consistent with those of SERPENT. (authors)
Comparison of deterministic and Monte Carlo methods in shielding design.
Oliveira, A D; Oliveira, C
2005-01-01
In shielding calculations, deterministic methods have some advantages and also some disadvantages relative to other kinds of codes, such as Monte Carlo. The main advantage is the short computer time needed to find solutions, while the disadvantages are related to the often-used build-up factor, which is extrapolated from high to low energies or to unknown geometrical conditions and can lead to significant errors in shielding results. The aim of this work is to investigate how well some deterministic methods calculate low-energy shielding, using attenuation coefficients and build-up factor corrections. The commercial software MicroShield 5.05 has been used as the deterministic code, while MCNP has been used as the Monte Carlo code. Point and cylindrical sources with a slab shield have been defined, allowing comparison between the capabilities of both Monte Carlo and deterministic methods in day-by-day shielding calculations, using sensitivity analysis of significant parameters such as energy and geometrical conditions.
Comparison of deterministic and Monte Carlo methods in shielding design
International Nuclear Information System (INIS)
Oliveira, A. D.; Oliveira, C.
2005-01-01
In shielding calculations, deterministic methods have some advantages and also some disadvantages relative to other kinds of codes, such as Monte Carlo. The main advantage is the short computer time needed to find solutions, while the disadvantages are related to the often-used build-up factor, which is extrapolated from high to low energies or to unknown geometrical conditions and can lead to significant errors in shielding results. The aim of this work is to investigate how well some deterministic methods calculate low-energy shielding, using attenuation coefficients and build-up factor corrections. The commercial software MicroShield 5.05 has been used as the deterministic code, while MCNP has been used as the Monte Carlo code. Point and cylindrical sources with a slab shield have been defined, allowing comparison between the capabilities of both Monte Carlo and deterministic methods in day-by-day shielding calculations, using sensitivity analysis of significant parameters such as energy and geometrical conditions. (authors)
Kyriacou, S.; Kontoleontos, E.; Weissenberger, S.; Mangani, L.; Casartelli, E.; Skouteropoulou, I.; Gattringer, M.; Gehrer, A.; Buchmayr, M.
2014-03-01
An efficient hydraulic optimization procedure, suitable for industrial use, requires an advanced optimization tool (EASY software), a fast solver (block coupled CFD) and a flexible geometry generation tool. EASY optimization software is a PCA-driven metamodel-assisted Evolutionary Algorithm (MAEA (PCA)) that can be used in both single- (SOO) and multiobjective optimization (MOO) problems. In MAEAs, low cost surrogate evaluation models are used to screen out non-promising individuals during the evolution and exclude them from the expensive, problem specific evaluation, here the solution of Navier-Stokes equations. For additional reduction of the optimization CPU cost, the PCA technique is used to identify dependences among the design variables and to exploit them in order to efficiently drive the application of the evolution operators. To further enhance the hydraulic optimization procedure, a very robust and fast Navier-Stokes solver has been developed. This incompressible CFD solver employs a pressure-based block-coupled approach, solving the governing equations simultaneously. This method, apart from being robust and fast, also provides a big gain in terms of computational cost. In order to optimize the geometry of hydraulic machines, an automatic geometry and mesh generation tool is necessary. The geometry generation tool used in this work is entirely based on b-spline curves and surfaces. In what follows, the components of the tool chain are outlined in some detail and the optimization results of hydraulic machine components are shown in order to demonstrate the performance of the presented optimization procedure.
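The screening idea behind a metamodel-assisted evolutionary algorithm can be sketched in a few lines: a cheap surrogate built from past exact evaluations ranks the offspring, and only the most promising few receive the expensive evaluation. The stand-in objective, 1-nearest-neighbour surrogate, and all settings below are illustrative assumptions, not the EASY software's MAEA(PCA).

```python
import random

# Hedged sketch of metamodel-assisted screening in an evolutionary loop:
# a cheap surrogate ranks offspring, and only the best few receive the
# expensive evaluation (here a stand-in function, not a CFD solve).
random.seed(1)

def expensive(x):                         # stand-in for a Navier-Stokes solve
    return sum((xi - 0.3) ** 2 for xi in x)

def surrogate(x, archive):                # 1-nearest-neighbour prediction
    return min(archive,
               key=lambda p: sum((a - b) ** 2 for a, b in zip(p[0], x)))[1]

dim, pop_size, n_exact = 4, 20, 5
pop = [[random.random() for _ in range(dim)] for _ in range(pop_size)]
archive = [(x, expensive(x)) for x in pop]    # seed the metamodel

for gen in range(30):
    # mutate parents to create offspring, clipped to the [0, 1] box
    offspring = [[min(1.0, max(0.0, xi + random.gauss(0, 0.1))) for xi in x]
                 for x in pop]
    # screen with the surrogate; exactly evaluate only the n_exact best
    offspring.sort(key=lambda x: surrogate(x, archive))
    for x in offspring[:n_exact]:
        archive.append((x, expensive(x)))
    # elitist survivor selection among exactly evaluated points
    archive.sort(key=lambda p: p[1])
    pop = [p[0] for p in archive[:pop_size]]

best_x, best_f = archive[0]
print(f"best objective after screening: {best_f:.4f}")
```

The design point is the same as in the abstract: most candidates never touch the expensive solver, only the surrogate, so the exact-evaluation budget is spent where the metamodel predicts it matters.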
International Nuclear Information System (INIS)
Kyriacou S; Kontoleontos E; Weissenberger S; Mangani L; Casartelli E; Skouteropoulou I; Gattringer M; Gehrer A; Buchmayr M
2014-01-01
An efficient hydraulic optimization procedure, suitable for industrial use, requires an advanced optimization tool (EASY software), a fast solver (block coupled CFD) and a flexible geometry generation tool. EASY optimization software is a PCA-driven metamodel-assisted Evolutionary Algorithm (MAEA (PCA)) that can be used in both single- (SOO) and multiobjective optimization (MOO) problems. In MAEAs, low cost surrogate evaluation models are used to screen out non-promising individuals during the evolution and exclude them from the expensive, problem specific evaluation, here the solution of Navier-Stokes equations. For additional reduction of the optimization CPU cost, the PCA technique is used to identify dependences among the design variables and to exploit them in order to efficiently drive the application of the evolution operators. To further enhance the hydraulic optimization procedure, a very robust and fast Navier-Stokes solver has been developed. This incompressible CFD solver employs a pressure-based block-coupled approach, solving the governing equations simultaneously. This method, apart from being robust and fast, also provides a big gain in terms of computational cost. In order to optimize the geometry of hydraulic machines, an automatic geometry and mesh generation tool is necessary. The geometry generation tool used in this work is entirely based on b-spline curves and surfaces. In what follows, the components of the tool chain are outlined in some detail and the optimization results of hydraulic machine components are shown in order to demonstrate the performance of the presented optimization procedure
Gibril, Mohamed Barakat A.; Idrees, Mohammed Oludare; Yao, Kouame; Shafri, Helmi Zulhaidi Mohd
2018-01-01
The growing use of optimization for geographic object-based image analysis and the possibility of deriving a wide range of information about the image in textual form make machine learning (data mining) a versatile tool for information extraction from multiple data sources. This paper presents an application of data mining for land-cover classification by fusing SPOT-6, RADARSAT-2, and derived datasets. First, the images and other derived indices (normalized difference vegetation index, normalized difference water index, and soil-adjusted vegetation index) were combined and subjected to a segmentation process, with optimal segmentation parameters obtained using a combination of spatial and Taguchi statistical optimization. The image objects, which carry all the attributes of the input datasets, were extracted and related to the target land-cover classes through data mining algorithms (decision tree) for classification. To evaluate the performance, the result was compared with two nonparametric classifiers: support vector machine (SVM) and random forest (RF). Furthermore, the decision tree classification result was evaluated against six unoptimized trials segmented using arbitrary parameter combinations. The results show that the optimized process produces better land-use/land-cover classification, with overall classification accuracies of 91.79% for the decision tree and 87.25% and 88.69% for SVM and RF, respectively, while the six unoptimized classifications yield overall accuracies between 84.44% and 88.08%. The higher accuracy of the optimized data mining classification approach compared with the unoptimized results indicates that the optimization process has a significant impact on classification quality.
HSimulator: Hybrid Stochastic/Deterministic Simulation of Biochemical Reaction Networks
Directory of Open Access Journals (Sweden)
Luca Marchetti
2017-01-01
Full Text Available HSimulator is a multithread simulator for mass-action biochemical reaction systems placed in a well-mixed environment. HSimulator provides optimized implementations of a set of widespread state-of-the-art stochastic, deterministic, and hybrid simulation strategies, including the first publicly available implementation of the Hybrid Rejection-based Stochastic Simulation Algorithm (HRSSA). HRSSA, the fastest hybrid algorithm to date, allows for an efficient simulation of the models while ensuring the exact simulation of the subset of the reaction network modeling slow reactions. Benchmarks show that HSimulator is often considerably faster than the other considered simulators. The software, running on Java v6.0 or higher, offers a simulation GUI for modeling and visually exploring biological processes and a Javadoc-documented Java library to support the development of custom applications. HSimulator is released under the COSBI Shared Source license agreement (COSBI-SSLA).
Technique for Increasing Accuracy of Positioning System of Machine Tools
Directory of Open Access Journals (Sweden)
Sh. Ji
2014-01-01
Full Text Available The aim of this research is to improve the accuracy of the positioning and processing system of machine tools using a technique for optimizing the pressure diagrams of the guides. Machining quality is directly related to machine accuracy, which characterizes the degree of impact of the machine's various errors. The accuracy of the positioning system is one of the most significant machining characteristics, allowing the accuracy of processed parts to be evaluated. The literature describes the working area of the machine layout as rather informative for characterizing the effect of the positioning system on the macro-geometry of the processed part surfaces. To enhance the static accuracy of the studied machine, two groups of measures are possible in principle. One aims at decreasing the cutting force component that overturns the slider moments. The other is related to changing the sizes of the guide facets, which may lead to a change in their profile. The study was based on mathematical modeling and optimization of the cutting zone coordinates, yielding a formula for determining the surface pressure of the guides. The selected optimization parameters are the cutting force vector and the dimensions of the slides and guides. The results show that optimizing the coordinates of the cutting zone is necessary to increase processing accuracy. The research has established that, to define the optimal coordinates of the cutting zone, the sizes of the slides and the values and coordinates of the applied forces must be changed so as to equalize the pressure and improve the accuracy of the positioning system of machine tools. By applying the force vector at different points of the workspace and finding pressure diagrams that account for changes in the positioning system parameters, pressure diagram equalization is achieved, providing the best accuracy of the machine tool.
Directory of Open Access Journals (Sweden)
Jin Huang
2017-09-01
Full Text Available Process planning is an important function in a manufacturing system; it specifies the manufacturing requirements and details for the shop floor to convert a part from raw material to the finished form. However, considering only the economic criterion with technological constraints is not enough in sustainable manufacturing practice, and criteria reflecting low-carbon-emission awareness have seldom been taken into account in process planning optimization. In this paper, a mathematical model that considers both machining cost reduction and carbon emission reduction is established for the process planning problem. Due to various flexibilities together with complex precedence constraints between operations, the process planning problem is non-deterministic polynomial-time hard (NP-hard). Aiming at the distinctive features of multi-objective process planning optimization, we developed a hybrid non-dominated sorting genetic algorithm (NSGA-II) to tackle this problem. A local search method that considers both the total cost criterion and the carbon emission criterion is introduced into the proposed algorithm to avoid being trapped in local optima. Moreover, the technique for order preference by similarity to an ideal solution (TOPSIS) is adopted to determine the best solution from the Pareto front. Experiments have been conducted using Kim’s benchmark. Computational results show that process plan schemes with low carbon emission can be captured and, more importantly, that the proposed hybrid NSGA-II algorithm obtains a more promising optimal Pareto front than the plain NSGA-II algorithm. Meanwhile, according to the computational results on Kim’s benchmark, we find that both the total machining cost and the carbon emission are roughly proportional to the number of operations, and a process plan with fewer operations may be more satisfactory. This study will draw references for the further research on green
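The TOPSIS step used here, picking one compromise plan from a Pareto front, is short enough to sketch directly. The front values and equal weights below are made-up illustrations for two minimization criteria (machining cost, carbon emission), not results from Kim's benchmark.

```python
import math

# Hedged sketch: TOPSIS selection of a compromise solution from a
# two-objective Pareto front (machining cost, carbon emission), both
# minimized. The four plans below are made up for illustration.
front = [(120.0, 9.5), (135.0, 8.1), (150.0, 7.4), (170.0, 7.2)]
weights = (0.5, 0.5)

# vector-normalize each criterion column, then apply the weights
norms = [math.sqrt(sum(row[j] ** 2 for row in front)) for j in range(2)]
v = [[weights[j] * row[j] / norms[j] for j in range(2)] for row in front]

# for minimization criteria the ideal is the column minimum,
# the anti-ideal the column maximum
ideal = [min(col) for col in zip(*v)]
anti = [max(col) for col in zip(*v)]

def dist(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

# closeness coefficient: larger = nearer the ideal, farther from the anti-ideal
scores = [dist(row, anti) / (dist(row, ideal) + dist(row, anti)) for row in v]
best = max(range(len(front)), key=lambda i: scores[i])
print("chosen plan:", front[best])
```

On this toy front the middle plan wins: it trades a moderate cost increase for a large emission reduction, which is exactly the kind of compromise TOPSIS is meant to surface.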
Nonlinear Markov processes: Deterministic case
International Nuclear Information System (INIS)
Frank, T.D.
2008-01-01
Deterministic Markov processes that exhibit nonlinear transition mechanisms for probability densities are studied. In this context, the following issues are addressed: the Markov property, conditional probability densities, propagation of probability densities, multistability in terms of multiple stationary distributions, stability analysis of stationary distributions, and basins of attraction of stationary distributions.
Govaerts, Paul J; Vaerenberg, Bart; De Ceulaer, Geert; Daemers, Kristin; De Beukelaer, Carina; Schauwers, Karen
2010-08-01
An intelligent agent, Fitting to Outcomes eXpert, was developed to optimize and automate cochlear implant (CI) programming. The current article describes the rationale, development, and features of this tool. Cochlear implant fitting is a time-consuming procedure to define the values of a subset of the available electric parameters based primarily on behavioral responses. It is comfort-driven, with high intraindividual and interindividual variability with respect to both the patient and the clinician. Its validity in terms of process control can be questioned. Good clinical practice would require an outcome-driven approach. An intelligent agent may help solve the complexity of addressing more electric parameters based on a range of outcome measures. A software application was developed that consists of deterministic rules that analyze the map settings in the processor together with the psychoacoustic test results (audiogram, A§E phoneme discrimination, A§E loudness scaling, speech audiogram) obtained with that map. The rules were based on daily clinical practice and the expertise of the CI programmers. The data transfer to and from this agent is either manual or through seamless digital communication with the CI fitting database and the psychoacoustic test suite. It recommends and executes modifications to the map settings to improve the outcome. Fitting to Outcomes eXpert is an operational intelligent agent, the principles of which are described. Its development and modes of operation are outlined, and a case example is given. Fitting to Outcomes eXpert has been in use for more than a year now and seems capable of improving the measured outcome. It is argued that this novel tool allows a systematic approach focusing on outcome, reducing the fitting time, and improving the quality of fitting. It introduces principles of artificial intelligence into the process of CI fitting.
Interactive Reliability-Based Optimal Design
DEFF Research Database (Denmark)
Sørensen, John Dalsgaard; Thoft-Christensen, Palle; Siemaszko, A.
1994-01-01
Interactive design/optimization of large, complex structural systems is considered. The objective function is assumed to model the expected costs. The constraints are reliability-based and/or related to deterministic code requirements. Solution of this optimization problem is divided in four main...... tasks, namely finite element analyses, sensitivity analyses, reliability analyses and application of an optimization algorithm. In the paper it is shown how these four tasks can be linked effectively and how existing information on design variables, Lagrange multipliers and the Hessian matrix can...
Optimal control of hydroelectric facilities
Zhao, Guangzhi
This thesis considers a simple yet realistic model of pump-assisted hydroelectric facilities operating in a market with time-varying but deterministic power prices. Both deterministic and stochastic water inflows are considered. The fluid mechanical and engineering details of the facility are described by a model containing several parameters. We present a dynamic programming algorithm for optimizing either the total energy produced or the total cash generated by these plants. The algorithm allows us to give the optimal control strategy as a function of time and to see how this strategy, and the associated plant value, varies with water inflow and electricity price. We investigate various cases. For a single pumped storage facility experiencing deterministic power prices and water inflows, we investigate the varying behaviour for an oversimplified constant turbine- and pump-efficiency model with simple reservoir geometries. We then generalize this simple model to include more realistic turbine efficiencies, situations with more complicated reservoir geometry, and the introduction of dissipative switching costs between various control states. We find many results which reinforce our physical intuition about this complicated system as well as results which initially challenge, though later deepen, this intuition. One major lesson of this work is that the optimal control strategy does not differ much between two differing objectives of maximizing energy production and maximizing its cash value. We then turn our attention to the case of stochastic water inflows. We present a stochastic dynamic programming algorithm which can find an on-average optimal control in the face of this randomness. As the operator of a facility must be more cautious when inflows are random, the randomness destroys facility value. Following this insight we quantify exactly how much a perfect hydrological inflow forecast would be worth to a dam operator. In our final chapter we discuss the
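The backward-induction idea behind such a dynamic programming algorithm can be sketched on a toy pumped-storage model: discrete reservoir levels, deterministic prices, and three controls (generate, idle, pump). The prices, efficiencies, and one-unit-per-period dynamics below are illustrative assumptions, not the thesis's calibrated facility model.

```python
# Hedged sketch: backward-induction dynamic programming for a pumped-storage
# plant facing deterministic prices. Units, efficiencies, and prices are
# illustrative assumptions.
prices = [20, 35, 60, 45, 25, 55]      # price per MWh in each period
levels = range(0, 5)                   # discrete reservoir states (units of water)
eta_gen, eta_pump = 0.9, 0.8           # turbine / pump efficiencies

def step(level, action, price):
    """Apply a control for one period; invalid actions fall back to idling."""
    if action == "gen" and level > 0:
        return level - 1, eta_gen * price      # sell eta_gen MWh per unit released
    if action == "pump" and level < max(levels):
        return level + 1, -price / eta_pump    # buy 1/eta_pump MWh per unit stored
    return level, 0.0

V = {l: 0.0 for l in levels}                   # terminal value: leftover water worthless
policy = []
for price in reversed(prices):                 # backward induction over periods
    newV, decision = {}, {}
    for l in levels:
        best = max(("gen", "idle", "pump"),
                   key=lambda a: step(l, a, price)[1] + V[step(l, a, price)[0]])
        l2, cash = step(l, best, price)
        newV[l], decision[l] = cash + V[l2], best
    V, policy = newV, [decision] + policy      # prepend so policy[0] is period 0

print("value of starting half-full:", round(V[2], 2))
print("period-0 action at level 2:", policy[0][2])
```

The optimal control it recovers is the expected buy-low/sell-high shape: pump during the cheap early periods, release around the price peaks, mirroring the thesis's observation that the optimal strategy is largely driven by the price path.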
Deterministic nonlinear systems a short course
Anishchenko, Vadim S; Strelkova, Galina I
2014-01-01
This text is a short yet complete course on the nonlinear dynamics of deterministic systems. Conceived as a modular set of 15 concise lectures, it reflects the authors' many years of teaching experience. The lectures treat in turn the fundamental aspects of the theory of dynamical systems, aspects of stability and bifurcations, the theory of deterministic chaos and attractor dimensions, as well as the elements of the theory of Poincaré recurrences. Particular attention is paid to the analysis of the generation of periodic, quasiperiodic and chaotic self-sustained oscillations and to the issue of synchronization in such systems. This book is aimed at graduate students and non-specialist researchers with a background in physics, applied mathematics and engineering wishing to enter this exciting field of research.
Learning to Act: Qualitative Learning of Deterministic Action Models
DEFF Research Database (Denmark)
Bolander, Thomas; Gierasimczuk, Nina
2017-01-01
In this article we study learnability of fully observable, universally applicable action models of dynamic epistemic logic. We introduce a framework for actions seen as sets of transitions between propositional states and we relate them to their dynamic epistemic logic representations as action...... in the limit (inconclusive convergence to the right action model). We show that deterministic actions are finitely identifiable, while arbitrary (non-deterministic) actions require more learning power—they are identifiable in the limit. We then move on to a particular learning method, i.e. learning via update......, which proceeds via restriction of a space of events within a learning-specific action model. We show how this method can be adapted to learn conditional and unconditional deterministic action models. We propose update learning mechanisms for the aforementioned classes of actions and analyse
International Nuclear Information System (INIS)
Gutierrez, Rafael M.; Useche, Gina M.; Buitrago, Elias
2007-01-01
We present a procedure developed to detect the stochastic and deterministic information contained in empirical time series, useful for characterizing and modeling different aspects of the complex phenomena represented by such data. This procedure is applied to a seismological time series to obtain new information for studying and understanding geological phenomena. We use concepts and methods from nonlinear dynamics and maximum entropy. The method allows an optimal analysis of the available information.
Application of Hybrid Genetic Algorithm Routine in Optimizing Food and Bioengineering Processes
Directory of Open Access Journals (Sweden)
Jaya Shankar Tumuluru
2016-11-01
Full Text Available Optimization is a crucial step in the analysis of experimental results. Deterministic methods only converge on local optima and require exponentially more time as dimensionality increases. Stochastic algorithms are capable of efficiently searching the domain space; however, convergence is not guaranteed. This article demonstrates the novelty of the hybrid genetic algorithm (HGA), which combines both stochastic and deterministic routines for improved optimization results. The new hybrid genetic algorithm is applied to the Ackley benchmark function as well as case studies in food, biofuel, and biotechnology processes. For each case study, the hybrid genetic algorithm found a better optimum candidate than reported by the sources. In the case of food processing, the hybrid genetic algorithm improved the anthocyanin yield by 6.44%. Optimization of bio-oil production using the HGA resulted in a 5.06% higher yield. In the enzyme production process, the HGA predicted a 0.39% higher xylanase yield. Hybridization of the genetic algorithm with a deterministic algorithm resulted in an improved optimum compared to statistical methods.
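The stochastic-plus-deterministic hybridization can be sketched on the same Ackley benchmark the article mentions: a GA explores globally, then a deterministic pattern search polishes the GA's best individual. The operators and settings below are a generic illustration under stated assumptions, not the article's specific HGA routine.

```python
import math
import random

# Hedged sketch of a hybrid genetic algorithm: a stochastic GA explores the
# domain, then a deterministic pattern search polishes the best individual.
random.seed(7)

def ackley(x, y):
    """2-D Ackley benchmark; global minimum 0 at the origin."""
    return (-20 * math.exp(-0.2 * math.sqrt(0.5 * (x * x + y * y)))
            - math.exp(0.5 * (math.cos(2 * math.pi * x) + math.cos(2 * math.pi * y)))
            + math.e + 20)

def ga(pop_size=40, gens=60, lo=-5.0, hi=5.0):
    """Elitist GA with averaging crossover and Gaussian mutation."""
    pop = [(random.uniform(lo, hi), random.uniform(lo, hi)) for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=lambda p: ackley(*p))
        elite = pop[:pop_size // 2]
        children = []
        while len(children) < pop_size - len(elite):
            (x1, y1), (x2, y2) = random.sample(elite, 2)      # crossover
            children.append(((x1 + x2) / 2 + random.gauss(0, 0.3),  # + mutation
                             (y1 + y2) / 2 + random.gauss(0, 0.3)))
        pop = elite + children
    return min(pop, key=lambda p: ackley(*p))

def pattern_search(x, y, step=0.25, tol=1e-6):
    """Deterministic polish: try axis moves, halving the step until no move helps."""
    best = ackley(x, y)
    while step > tol:
        moved = False
        for dx, dy in ((step, 0), (-step, 0), (0, step), (0, -step)):
            f = ackley(x + dx, y + dy)
            if f < best:
                x, y, best, moved = x + dx, y + dy, f, True
        if not moved:
            step /= 2
    return x, y, best

gx, gy = ga()
px, py, pf = pattern_search(gx, gy)
print(f"GA best {ackley(gx, gy):.4f} -> after deterministic polish {pf:.6f}")
```

The division of labor matches the abstract's argument: the stochastic stage avoids getting stuck in Ackley's many local basins, while the deterministic stage converges quickly once the right basin has been found.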
Petersen, Øyvind Wiig
2014-01-01
Force identification in structural dynamics is an inverse problem concerned with finding loads from measured structural response. The main objective of this thesis is to perform and study state (displacement and velocity) and force estimation by Kalman filtering. Theory on optimal control and state-space models is presented, adapted to linear structural dynamics. Accommodation of measurement noise and model inaccuracies is attained by stochastic-deterministic coupling. Explicit requirem...
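The state-estimation core of such a thesis can be sketched on the smallest structural model there is: a Kalman filter tracking the displacement and velocity of an undamped single-degree-of-freedom oscillator from noisy displacement measurements. The system parameters and noise levels below are illustrative assumptions, not the thesis's models.

```python
import math
import random

# Hedged sketch: linear Kalman filter estimating (displacement, velocity)
# of a 1-DOF oscillator from noisy displacement measurements.
random.seed(0)

dt, wn = 0.01, 2 * math.pi             # time step (s), natural frequency (rad/s)
th = wn * dt
# exact discretization of the undamped oscillator x'' = -wn^2 x
A = [[math.cos(th), math.sin(th) / wn],
     [-wn * math.sin(th), math.cos(th)]]
AT = [[A[0][0], A[1][0]], [A[0][1], A[1][1]]]
Q, R = 1e-6, 0.05 ** 2                 # process / measurement noise variances

def apply(M, v):
    return [M[0][0] * v[0] + M[0][1] * v[1], M[1][0] * v[0] + M[1][1] * v[1]]

def mmul(M, N):
    return [[sum(M[i][k] * N[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

x_true, x_hat = [1.0, 0.0], [0.0, 0.0]
P = [[1.0, 0.0], [0.0, 1.0]]
for _ in range(2000):
    x_true = apply(A, x_true)                       # simulate the truth
    z = x_true[0] + random.gauss(0, math.sqrt(R))   # noisy displacement sample
    # predict
    x_hat = apply(A, x_hat)
    P = mmul(mmul(A, P), AT)
    P[0][0] += Q; P[1][1] += Q
    # update (scalar measurement H = [1, 0], so the gain is a 2-vector)
    S = P[0][0] + R
    K = [P[0][0] / S, P[1][0] / S]
    innov = z - x_hat[0]
    x_hat = [x_hat[0] + K[0] * innov, x_hat[1] + K[1] * innov]
    P = [[(1 - K[0]) * P[0][0], (1 - K[0]) * P[0][1]],
         [P[1][0] - K[1] * P[0][0], P[1][1] - K[1] * P[0][1]]]

err = abs(x_hat[0] - x_true[0])
print(f"final displacement error: {err:.4f}")
```

Note how the filter recovers the unmeasured velocity as well: the measurement only sees displacement, but the off-diagonal covariance term couples the innovation into the velocity estimate, which is the same mechanism the thesis extends toward force estimation.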
THE STIRLING GAS REFRIGERATING MACHINE MECHANICAL DESIGN IMPROVING
Directory of Open Access Journals (Sweden)
V. V. Trandafilov
2016-06-01
Full Text Available To improve the mechanical design of the piston Stirling gas refrigeration machine, the structural optimization of a rotary vane Stirling gas refrigeration machine is carried out. This paper presents the results of theoretical research. An analysis and the prospects of the rotary vane Stirling gas refrigeration machine for domestic and industrial refrigeration purposes are presented. The results of a patent search on the transformation mechanisms of rotary vane machines are discussed.
THE STIRLING GAS REFRIGERATING MACHINE MECHANICAL DESIGN IMPROVING
Directory of Open Access Journals (Sweden)
V. V. Trandafilov
2016-02-01
Full Text Available To improve the mechanical design of the piston Stirling gas refrigeration machine, the structural optimization of a rotary vane Stirling gas refrigeration machine is carried out. This paper presents the results of theoretical research. An analysis and the prospects of the rotary vane Stirling gas refrigeration machine for domestic and industrial refrigeration purposes are presented. The results of a patent search on the transformation mechanisms of rotary vane machines are discussed.
Allowed outage time for test and maintenance - Optimization of safety
International Nuclear Information System (INIS)
Cepin, M.; Mavko, B.
1997-01-01
The main objective of the project is the development and application of methodologies for the improvement and optimization of test and maintenance activities for safety-related equipment in NPPs on the basis of their enhanced safety. Probabilistic safety assessment serves as the basis; this does not mean the replacement of deterministic analyses, but rather the consideration of probabilistic safety assessment results as a complement to deterministic results. 15 refs, 2 figs
Quantum deterministic key distribution protocols based on the authenticated entanglement channel
International Nuclear Information System (INIS)
Zhou Nanrun; Wang Lijun; Ding Jie; Gong Lihua
2010-01-01
Based on the quantum entanglement channel, two secure quantum deterministic key distribution (QDKD) protocols are proposed. Unlike quantum random key distribution (QRKD) protocols, the proposed QDKD protocols can distribute the deterministic key securely, which is of significant importance in the field of key management. The security of the proposed QDKD protocols is analyzed in detail using information theory. It is shown that the proposed QDKD protocols can safely and effectively hand over the deterministic key to the specific receiver and their physical implementation is feasible with current technology.
Quantum deterministic key distribution protocols based on the authenticated entanglement channel
Energy Technology Data Exchange (ETDEWEB)
Zhou Nanrun; Wang Lijun; Ding Jie; Gong Lihua [Department of Electronic Information Engineering, Nanchang University, Nanchang 330031 (China)], E-mail: znr21@163.com, E-mail: znr21@hotmail.com
2010-04-15
Based on the quantum entanglement channel, two secure quantum deterministic key distribution (QDKD) protocols are proposed. Unlike quantum random key distribution (QRKD) protocols, the proposed QDKD protocols can distribute the deterministic key securely, which is of significant importance in the field of key management. The security of the proposed QDKD protocols is analyzed in detail using information theory. It is shown that the proposed QDKD protocols can safely and effectively hand over the deterministic key to the specific receiver and their physical implementation is feasible with current technology.
Energy Technology Data Exchange (ETDEWEB)
Smekens, F; Freud, N; Letang, J M; Babot, D [CNDRI (Nondestructive Testing using Ionizing Radiations) Laboratory, INSA-Lyon, 69621 Villeurbanne Cedex (France); Adam, J-F; Elleaume, H; Esteve, F [INSERM U-836, Equipe 6 ' Rayonnement Synchrotron et Recherche Medicale' , Institut des Neurosciences de Grenoble (France); Ferrero, C; Bravin, A [European Synchrotron Radiation Facility, Grenoble (France)], E-mail: francois.smekens@insa-lyon.fr
2009-08-07
A hybrid approach, combining deterministic and Monte Carlo (MC) calculations, is proposed to compute the distribution of dose deposited during stereotactic synchrotron radiation therapy treatment. The proposed approach divides the computation into two parts: (i) the dose deposited by primary radiation (coming directly from the incident x-ray beam) is calculated in a deterministic way using ray casting techniques and energy-absorption coefficient tables and (ii) the dose deposited by secondary radiation (Rayleigh and Compton scattering, fluorescence) is computed using a hybrid algorithm combining MC and deterministic calculations. In the MC part, a small number of particle histories are simulated. Every time a scattering or fluorescence event takes place, a splitting mechanism is applied, so that multiple secondary photons are generated with a reduced weight. The secondary events are further processed in a deterministic way, using ray casting techniques. The whole simulation, carried out within the framework of the Monte Carlo code Geant4, is shown to converge towards the same results as the full MC simulation. The speed of convergence is found to depend notably on the splitting multiplicity, which can easily be optimized. To assess the performance of the proposed algorithm, we compare it to state-of-the-art MC simulations, accelerated by the track length estimator technique (TLE), considering a clinically realistic test case. It is found that the hybrid approach is significantly faster than the MC/TLE method. The gain in speed in a test case was about 25 for a constant precision. Therefore, this method appears to be suitable for treatment planning applications.
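The splitting mechanism described above can be illustrated with a minimal sketch (the one-scatter deposition model and all numeric values here are hypothetical, not the paper's physics): each primary history spawns `split` secondaries, each carrying weight `1/split`, which leaves the dose estimator unbiased while reducing its variance per history.

```python
import random

def estimate_scatter_dose(n_histories, split=1, seed=0):
    """Toy estimator of scattered dose with particle splitting.

    Each primary photon scatters once; on scattering it is split into
    `split` secondaries, each carrying weight 1/split. Each secondary
    deposits an energy drawn from a simple illustrative model.
    """
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_histories):
        for _ in range(split):
            weight = 1.0 / split
            # Hypothetical deposition: exponential path length times a
            # constant absorption fraction of 0.3.
            deposited = 0.3 * rng.expovariate(1.0)
            total += weight * deposited
    return total / n_histories

# Splitting leaves the expected dose per history unchanged (~0.3)
# while averaging over more secondary paths per primary.
```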
GA based CNC turning center exploitation process parameters optimization
Directory of Open Access Journals (Sweden)
Z. Car
2009-01-01
This paper presents machining parameter (turning process) optimization based on the use of artificial intelligence. To obtain greater efficiency and productivity of the machine tool, optimal cutting parameters have to be obtained. In order to find optimal cutting parameters, the genetic algorithm (GA) has been used as an optimal solution finder. The optimization has to yield minimum machining time and minimum production cost, while considering technological and material constraints.
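A minimal GA sketch for this kind of cutting-parameter search is shown below; the cost function, bounds, and GA operators (truncation selection, arithmetic crossover, one-gene Gaussian mutation) are illustrative assumptions, not the paper's formulation.

```python
import random

def ga_minimize(cost, bounds, pop_size=30, generations=60, seed=1):
    """Minimal genetic algorithm sketch for cutting-parameter search.

    `cost` maps a parameter vector to machining cost; `bounds` is a
    list of (low, high) pairs, e.g. for speed and feed rate.
    """
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=cost)
        elite = pop[: pop_size // 2]                 # truncation selection
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = rng.sample(elite, 2)
            child = [(x + y) / 2 for x, y in zip(a, b)]  # arithmetic crossover
            i = rng.randrange(dim)                       # mutate one gene
            lo, hi = bounds[i]
            child[i] = min(hi, max(lo, child[i] + rng.gauss(0, 0.1 * (hi - lo))))
            children.append(child)
        pop = elite + children
    return min(pop, key=cost)

# Hypothetical cost surface with its minimum inside the search box
# (speed near 200, feed near 0.2):
best = ga_minimize(lambda p: (p[0] - 200) ** 2 + (p[1] - 0.2) ** 2,
                   [(100, 300), (0.05, 0.5)])
```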
Ex-vivo machine perfusion for kidney preservation.
Hamar, Matyas; Selzner, Markus
2018-06-01
Machine perfusion is a novel strategy to decrease preservation injury, improve graft assessment, and increase organ acceptance for transplantation. This review summarizes the current advances in ex-vivo machine-based kidney preservation technologies over the last year. Ex-vivo perfusion technologies, such as hypothermic and normothermic machine perfusion and controlled oxygenated rewarming, have gained high interest in the field of organ preservation. Keeping kidney grafts functionally and metabolically active during the preservation period offers a unique chance for viability assessment, reconditioning, and organ repair. Normothermic ex-vivo kidney perfusion has recently been translated into clinical practice. Preclinical results suggest that prolonged warm perfusion appears superior to a brief end-ischemic reconditioning in terms of renal function and injury. An established standardized protocol for continuous warm perfusion is still not available for human grafts. Ex-vivo machine perfusion represents a superior organ preservation method over static cold storage. There is still an urgent need to optimize the perfusion fluid and machine technology and to identify the optimal indications in kidney transplantation. Recent research is focusing on graft assessment and therapeutic strategies.
The State of Deterministic Thinking among Mothers of Autistic Children
Directory of Open Access Journals (Sweden)
Mehrnoush Esbati
2011-10-01
Objectives: The purpose of the present study was to investigate the effectiveness of cognitive-behavior education on decreasing deterministic thinking in mothers of children with autism spectrum disorders. Methods: Participants were 24 mothers of autistic children who were referred to counseling centers of Tehran and whose children's disorder had been diagnosed at least by a psychiatrist and a counselor. They were randomly selected and assigned into control and experimental groups. The measurement tool was the Deterministic Thinking Questionnaire; both groups answered it before and after education, and the answers were analyzed by analysis of covariance. Results: The results indicated that cognitive-behavior education decreased deterministic thinking among mothers of autistic children, including all four subscales of deterministic thinking: interaction with others, absolute thinking, prediction of the future, and negative events (P<0.05). Discussion: By learning cognitive and behavioral techniques, parents of children with autism can reach a higher level of psychological well-being, and it is likely that these cognitive-behavioral skills would have a positive impact on the general life satisfaction of mothers of children with autism.
Deterministic sensitivity and uncertainty analysis for large-scale computer models
International Nuclear Information System (INIS)
Worley, B.A.; Pin, F.G.; Oblow, E.M.; Maerker, R.E.; Horwedel, J.E.; Wright, R.Q.
1988-01-01
The fields of sensitivity and uncertainty analysis have traditionally been dominated by statistical techniques when large-scale modeling codes are being analyzed. These methods are able to estimate sensitivities, generate response surfaces, and estimate response probability distributions given the input parameter probability distributions. Because the statistical methods are computationally costly, they are usually applied only to problems with relatively small parameter sets. Deterministic methods, on the other hand, are very efficient and can handle large data sets, but generally require simpler models because of the considerable programming effort required for their implementation. The first part of this paper reports on the development and availability of two systems, GRESS and ADGEN, that make use of computer calculus compilers to automate the implementation of deterministic sensitivity analysis capability into existing computer models. This automation removes the traditional limitation of deterministic sensitivity methods. The second part of the paper describes a deterministic uncertainty analysis method (DUA) that uses derivative information as a basis to propagate parameter probability distributions to obtain result probability distributions. The methods are applicable to low-level radioactive waste disposal system performance assessment.
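The derivative-based propagation idea can be sketched with the first-order (delta-method) formula: given sensitivities dy/dx_i of the kind produced by automated differentiation tools such as GRESS/ADGEN, and independent input variances, the output variance is approximately the sensitivity-weighted sum of input variances. The numbers below are purely illustrative.

```python
def propagate_variance(grad, variances):
    """First-order uncertainty propagation sketch:
    var(y) ~= sum_i (dy/dx_i)^2 * var(x_i) for independent inputs.
    """
    return sum(g * g * v for g, v in zip(grad, variances))

# Example: y = 2*x1 + 3*x2 with var(x1) = 0.5, var(x2) = 0.25
# -> var(y) = 4*0.5 + 9*0.25 = 4.25
```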
Some new machine projects studied at LNS
International Nuclear Information System (INIS)
Tkatchenko, A.
1983-01-01
With the increasing interest in synchrotron radiation applications, electron storage rings have gradually been transformed into light sources. However, those machines were not originally optimized for this use, so over the last 10 years numerous optimized machines, dedicated solely to synchrotron light production up to the x-ray domain, have been designed. In France, several projects have been elaborated to satisfy the national needs for far-UV and x-ray radiation: the SUPER-ACO project (0.8 GeV) at Orsay, whose realisation is in progress; the SIREM project (1.2 GeV) at the Grenoble CEN; the European ESRF project (5 GeV), optimized for x-ray radiation, for which a working group has been installed at CERN; a national x-ray machine project (3 or 4 GeV) derived from the ESRF one; and finally the Mars project, concerning an x-ray source storage ring aimed at industrial use such as microlithography [fr]
Piecewise deterministic processes in biological models
Rudnicki, Ryszard
2017-01-01
This book presents a concise introduction to piecewise deterministic Markov processes (PDMPs), with particular emphasis on their applications to biological models. Further, it presents examples of biological phenomena, such as gene activity and population growth, where different types of PDMPs appear: continuous time Markov chains, deterministic processes with jumps, processes with switching dynamics, and point processes. Subsequent chapters present the necessary tools from the theory of stochastic processes and semigroups of linear operators, as well as theoretical results concerning the long-time behaviour of stochastic semigroups induced by PDMPs and their applications to biological models. As such, the book offers a valuable resource for mathematicians and biologists alike. The first group will find new biological models that lead to interesting and often new mathematical questions, while the second can observe how to include seemingly disparate biological processes into a unified mathematical theory, and...
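A minimal simulation of one of the PDMP classes mentioned above (a two-state gene-expression model) can illustrate the idea: between random on/off switches of the gene, the protein level follows a deterministic ODE with a closed-form flow. All rates below are illustrative assumptions.

```python
import math
import random

def simulate_pdmp(t_end, k_on=1.0, k_off=0.5, prod=2.0, decay=1.0, seed=0):
    """Sketch of a piecewise deterministic gene-expression model.

    Between random switches of the gene state (a continuous-time
    Markov chain with rates k_on/k_off), the protein level x follows
    dx/dt = prod*state - decay*x, solved here in closed form.
    """
    rng = random.Random(seed)
    t, x, state = 0.0, 0.0, 1  # start with the gene on
    while t < t_end:
        rate = k_off if state else k_on
        dt = min(rng.expovariate(rate), t_end - t)
        # deterministic flow toward the current equilibrium prod*state/decay
        eq = prod * state / decay
        x = eq + (x - eq) * math.exp(-decay * dt)
        t += dt
        if t < t_end:
            state = 1 - state  # jump: toggle the gene
    return x

# Trajectories stay between 0 and prod/decay = 2 for all times.
```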
Deterministic geologic processes and stochastic modeling
International Nuclear Information System (INIS)
Rautman, C.A.; Flint, A.L.
1992-01-01
This paper reports that recent outcrop sampling at Yucca Mountain, Nevada, has produced significant new information regarding the distribution of physical properties at the site of a potential high-level nuclear waste repository. Consideration of the spatial variability indicates that there are a number of widespread deterministic geologic features at the site that have important implications for numerical modeling of such performance aspects as ground water flow and radionuclide transport. Because the geologic processes responsible for the formation of Yucca Mountain are relatively well understood and operate on a more-or-less regional scale, understanding of these processes can be used in modeling the physical properties and performance of the site. Information reflecting these deterministic geologic processes may be incorporated into the modeling program explicitly, using geostatistical concepts such as soft information, or implicitly, through the adoption of a particular approach to modeling
Directory of Open Access Journals (Sweden)
Maolong Xi
2016-01-01
This paper focuses on feature gene selection for cancer classification, which employs an optimization algorithm to select a subset of the genes. We propose a binary quantum-behaved particle swarm optimization (BQPSO) for cancer feature gene selection, coupling support vector machine (SVM) for cancer classification. First, the proposed BQPSO algorithm is described, which is a discretized version of the original QPSO for binary 0-1 optimization problems. Then, we present the principle and procedure for cancer feature gene selection and cancer classification based on BQPSO and SVM with leave-one-out cross validation (LOOCV). Finally, the BQPSO coupling SVM (BQPSO/SVM), binary PSO coupling SVM (BPSO/SVM), and genetic algorithm coupling SVM (GA/SVM) are tested for feature gene selection and cancer classification on five microarray data sets, namely, Leukemia, Prostate, Colon, Lung, and Lymphoma. The experimental results show that BQPSO/SVM has significant advantages in accuracy, robustness, and the number of feature genes selected compared with the other two algorithms.
Sun, Jun; Liu, Li; Fan, Fangyun; Wu, Xiaojun
2016-01-01
This paper focuses on the feature gene selection for cancer classification, which employs an optimization algorithm to select a subset of the genes. We propose a binary quantum-behaved particle swarm optimization (BQPSO) for cancer feature gene selection, coupling support vector machine (SVM) for cancer classification. First, the proposed BQPSO algorithm is described, which is a discretized version of original QPSO for binary 0-1 optimization problems. Then, we present the principle and procedure for cancer feature gene selection and cancer classification based on BQPSO and SVM with leave-one-out cross validation (LOOCV). Finally, the BQPSO coupling SVM (BQPSO/SVM), binary PSO coupling SVM (BPSO/SVM), and genetic algorithm coupling SVM (GA/SVM) are tested for feature gene selection and cancer classification on five microarray data sets, namely, Leukemia, Prostate, Colon, Lung, and Lymphoma. The experimental results show that BQPSO/SVM has significant advantages in accuracy, robustness, and the number of feature genes selected compared with the other two algorithms. PMID:27642363
Scheduling stochastic two-machine flow shop problems to minimize expected makespan
Directory of Open Access Journals (Sweden)
Mehdi Heydari
2013-07-01
During the past few years, despite tremendous contributions on the deterministic flow shop problem, only a limited number of works have been dedicated to stochastic cases. This paper examines stochastic scheduling problems in a two-machine flow shop environment for expected makespan minimization, where the processing times of jobs are normally distributed. Since jobs have stochastic processing times, to minimize the expected makespan, the expected sum of the second machine's free times is minimized. In other words, by minimizing the waiting times for the second machine, it is possible to reach the minimum of the objective function. A mathematical method is proposed which utilizes the properties of the normal distribution. Furthermore, this method can be used as a heuristic for other distributions, as long as the means and variances are available. The performance of the proposed method is explored using some numerical examples.
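The objective can be made concrete with a small sketch. The paper derives the expectation analytically from normal-distribution properties; the version below simply estimates the expected makespan of a given sequence by Monte Carlo, using the standard two-machine recursion in which machine 2 waits whenever it is idle. The instance data are illustrative.

```python
import random

def expected_makespan(seq, means, sds, n_sim=5000, seed=0):
    """Monte Carlo estimate of the expected makespan in a two-machine
    flow shop with normally distributed processing times.

    means[j] and sds[j] are (machine1, machine2) pairs for job j;
    `seq` is the processing order. Negative draws are truncated at 0.
    """
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_sim):
        c1 = c2 = 0.0
        for j in seq:
            p1 = max(0.0, rng.gauss(means[j][0], sds[j][0]))
            p2 = max(0.0, rng.gauss(means[j][1], sds[j][1]))
            c1 += p1                   # completion time on machine 1
            c2 = max(c2, c1) + p2      # machine 2 waits if idle
        total += c2
    return total / n_sim

# Comparing sequences under this estimator approximates the
# expected-makespan objective of the paper.
means = [(2.0, 3.0), (4.0, 1.0)]
sds = [(0.5, 0.5), (0.5, 0.5)]
em = expected_makespan([0, 1], means, sds)
```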
Understanding deterministic diffusion by correlated random walks
International Nuclear Information System (INIS)
Klages, R.; Korabel, N.
2002-01-01
Low-dimensional periodic arrays of scatterers with a moving point particle are ideal models for studying deterministic diffusion. For such systems the diffusion coefficient is typically an irregular function under variation of a control parameter. Here we propose a systematic scheme of how to approximate deterministic diffusion coefficients of this kind in terms of correlated random walks. We apply this approach to two simple examples which are a one-dimensional map on the line and the periodic Lorentz gas. Starting from suitable Green-Kubo formulae we evaluate hierarchies of approximations for their parameter-dependent diffusion coefficients. These approximations converge exactly yielding a straightforward interpretation of the structure of these irregular diffusion coefficients in terms of dynamical correlations. (author)
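The hierarchy of approximations described above can be sketched from the discrete-time Green-Kubo formula for unit steps, D = C(0)/2 + sum over k>=1 of C(k), where C(k) is the velocity autocorrelation at lag k; truncating the sum at increasing lags gives successive correlated-random-walk approximations. The correlation values below are illustrative.

```python
def diffusion_coefficient(corr):
    """Green-Kubo-style diffusion coefficient for a correlated walk
    with unit time steps: D = C(0)/2 + sum_{k>=1} C(k).

    `corr[k]` is the velocity autocorrelation at lag k (corr[0] = <v0^2>).
    Passing a truncated list gives one member of the approximation
    hierarchy described in the abstract.
    """
    return corr[0] / 2.0 + sum(corr[1:])

# Uncorrelated walk: corr = [1.0] -> D = 0.5, the simple random-walk
# value; a negative lag-1 correlation (backscattering) lowers D.
```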
Deterministic one-way simulation of two-way, real-time cellular automata and its related problems
Energy Technology Data Exchange (ETDEWEB)
Umeo, H; Morita, K; Sugata, K
1982-06-13
The authors show that for any deterministic two-way, real-time cellular automaton M, there exists a deterministic one-way cellular automaton which can simulate M in twice real-time. Moreover, the authors present a new type of deterministic one-way cellular automata, called circular cellular automata, which are computationally equivalent to deterministic two-way cellular automata. 7 references.
Ma, Yuan-Zhuo; Li, Hong-Shuang; Yao, Wei-Xing
2018-05-01
The evaluation of the probabilistic constraints in reliability-based design optimization (RBDO) problems has always been significant and challenging work, which strongly affects the performance of RBDO methods. This article deals with RBDO problems using a recently developed generalized subset simulation (GSS) method and a posterior approximation approach. The posterior approximation approach is used to transform all the probabilistic constraints into ordinary constraints as in deterministic optimization. The assessment of multiple failure probabilities required by the posterior approximation approach is achieved by GSS in a single run at all supporting points, which are selected by a proper experimental design scheme combining Sobol' sequences and Bucher's design. Sequentially, the transformed deterministic design optimization problem can be solved by optimization algorithms, for example, the sequential quadratic programming method. Three optimization problems are used to demonstrate the efficiency and accuracy of the proposed method.
Activity modes selection for project crashing through deterministic simulation
Directory of Open Access Journals (Sweden)
Ashok Mohanty
2011-12-01
Purpose: The time-cost trade-off problem addressed by CPM-based analytical approaches assumes unlimited resources and the existence of a continuous time-cost function. However, given the discrete nature of most resources, the activities can often be crashed only stepwise. Activity crashing for a discrete time-cost function is also known as the activity modes selection problem in project management. This problem is known to be NP-hard. Sophisticated optimization techniques such as Dynamic Programming, Integer Programming, Genetic Algorithms, and Ant Colony Optimization have been used for finding efficient solutions to the activity modes selection problem. The paper presents a simple method that can provide an efficient solution to the activity modes selection problem for project crashing. Design/methodology/approach: A simulation-based method implemented on an electronic spreadsheet is used to determine activity modes for project crashing. The method is illustrated with the help of an example. Findings: The paper shows that a simple approach based on a simple heuristic and deterministic simulation can give good results, comparable to sophisticated optimization techniques. Research limitations/implications: The simulation-based crashing method presented in this paper is developed to return satisfactory solutions, but not necessarily an optimal solution. Practical implications: The use of spreadsheets for solving Management Science and Operations Research problems makes the techniques more accessible to practitioners. Spreadsheets provide a natural interface for model building, are easy to use in terms of inputs, solutions and report generation, and allow users to perform what-if analysis. Originality/value: The paper presents the application of simulation implemented on a spreadsheet to determine an efficient solution to the discrete time-cost trade-off problem.
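The discrete time-cost trade-off can be sketched with a simple heuristic in the spirit of the paper's approach. The version below assumes a serial activity chain (so every activity is critical, an assumption made only for this illustration) and repeatedly applies the crash step with the lowest cost per unit of time saved until the deadline is met; the mode data are hypothetical.

```python
def crash_serial_project(modes, deadline):
    """Greedy heuristic sketch for discrete time-cost trade-off on a
    serial activity chain.

    `modes[i]` is a list of (duration, cost) options for activity i,
    sorted by decreasing duration (crashing moves to the next option).
    """
    choice = [0] * len(modes)  # start with the slowest/cheapest mode
    def duration():
        return sum(modes[i][choice[i]][0] for i in range(len(modes)))
    def cost():
        return sum(modes[i][choice[i]][1] for i in range(len(modes)))
    while duration() > deadline:
        best, best_ratio = None, None
        for i, opts in enumerate(modes):
            if choice[i] + 1 < len(opts):
                d0, c0 = opts[choice[i]]
                d1, c1 = opts[choice[i] + 1]
                ratio = (c1 - c0) / (d0 - d1)   # cost per time unit saved
                if best_ratio is None or ratio < best_ratio:
                    best, best_ratio = i, ratio
        if best is None:
            break                               # cannot crash further
        choice[best] += 1
    return choice, duration(), cost()

# Hypothetical two-activity project crashed to a deadline of 7:
modes = [[(5, 10), (3, 16)], [(4, 8), (2, 14)]]
choice, dur, total = crash_serial_project(modes, 7)
```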
Directory of Open Access Journals (Sweden)
A. Das
2015-12-01
In today's world of manufacturing by machining, two things are very important: one is productivity and the other is quality. The quality of a product generally depends upon the surface finish and dimensional deviations. Productivity can be seen as a key economic indicator of innovation, in terms of a higher material removal rate with less time and cost in machining industries. The Taguchi method is a popular statistical technique for the optimization of input parameters to obtain the best output results. Dry machining is a popular methodology for machining hard material and has been accepted by many researchers to a great extent because of its low cost and safety. Many scientists have taken various input parameters and studied their effects on different output responses. In the present paper an attempt has been made to study the effect of input parameters such as cutting speed, feed rate and depth of cut on surface roughness, tool wear, power consumption and chip reduction coefficient under dry conditions using an uncoated carbide insert. The signal-to-noise ratio has been used to select the optimal condition for the various output responses. An ANOVA table has been drawn for each output response, a mathematical model by multiple regression analysis has been prepared, and the validity of the statistical model has been checked by a normal probability plot. It has been found from the experimental results that power consumption and flank wear were minimum at cutting speeds of 250 rpm and 400 rpm, respectively. The chip reduction coefficient was found to be minimum at a depth of cut of 0.3 mm, and surface roughness was minimum at a feed rate of 0.1 mm/rev.
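The signal-to-noise criterion used for responses such as surface roughness and flank wear is the standard Taguchi smaller-the-better form, S/N = -10 log10(mean(y^2)); the parameter level maximizing S/N minimizes the response. The readings below are hypothetical.

```python
import math

def sn_smaller_the_better(values):
    """Taguchi smaller-the-better signal-to-noise ratio:
    S/N = -10 * log10(mean(y^2)). Higher S/N means a better (lower)
    response such as surface roughness or flank wear.
    """
    mean_sq = sum(y * y for y in values) / len(values)
    return -10.0 * math.log10(mean_sq)

# Two hypothetical roughness readings per parameter setting:
sn_a = sn_smaller_the_better([1.2, 1.4])
sn_b = sn_smaller_the_better([0.8, 0.9])
# Setting B gives the higher S/N, i.e. the better roughness.
```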
Optimal Stochastic Advertising Strategies for the U.S. Beef Industry
Kun C. Lee; Stanley Schraufnagel; Earl O. Heady
1982-01-01
An important decision variable in the promotional strategy for the beef sector is the optimal level of advertising expenditures over time. Optimal stochastic and deterministic advertising expenditures are derived for the U.S. beef industry for the period 1966 through 1980. They are compared with historical levels, and the gains realized by optimal advertising strategies are measured. Finally, the optimal advertising expenditures in the future are forecasted.
USING OF OBJECT-ORIENTED DESIGN PRINCIPLES IN ELECTRIC MACHINES DEVELOPMENT
Directory of Open Access Journals (Sweden)
N.N. Zablodskii
2016-03-01
Purpose. To develop the theoretical basis of object-oriented design of electrical machines, together with mathematical models and software, to improve their design synthesis, analysis and optimization. Methodology. We have applied object-oriented design theory to the optimal design of electric machines and to the mathematical modelling of electromagnetic transients and electromagnetic field distribution. We have correlated the simulated results with experimental data obtained by means of a double-stator screw dryer with an external solid rotor, a brushless turbo-generator exciter and an induction motor with a squirrel-cage rotor. Results. We have developed an object-oriented design methodology, transient mathematical modelling and electromagnetic field equation templates for cylindrical electrical machines, and have improved and reworked Cartesian-product and genetic optimization algorithms. This makes it possible to develop electrical machine classification models, including not only structure development but also parallel synthesis of mathematical models and design software, to improve electric machines' efficiency and technical performance. Originality. For the first time, we have applied a new way of designing and modelling electrical machines which is based on the basic concepts of object-oriented analysis. For the first time, it is suggested to use a single class template for the structural and system organization of electrical machines, invariant to their specific variety. Practical value. We have manufactured a screw dryer for coal dust drying and mixing based on the developed object-oriented theory. We have developed object-oriented software for the design and optimization of induction motors with squirrel-cage rotors of the AIR series and of a brushless turbo-generator exciter. The experimental studies have confirmed the adequacy of the developed object-oriented design methodology.
A deterministic algorithm for fitting a step function to a weighted point-set
Fournier, Hervé
2013-02-01
Given a set of n points in the plane, each point having a positive weight, and an integer k>0, we present an optimal O(n log n)-time deterministic algorithm to compute a step function with k steps that minimizes the maximum weighted vertical distance to the input points. It matches the expected time bound of the best known randomized algorithm for this problem. Our approach relies on Cole's improved parametric searching technique. As a direct application, our result yields the first O(n log n)-time algorithm for computing a k-center of a set of n weighted points on the real line. © 2012 Elsevier B.V.
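The structure of the problem can be sketched with its decision version: for a candidate error eps, a greedy sweep checks whether the points (sorted by x) can be covered by at most k steps, since a step of value v fits a point of weight w iff v lies in [y - eps/w, y + eps/w]. The paper reaches O(n log n) by running parametric search over this test; the sketch below instead binary-searches eps numerically, which is simpler but only approximate.

```python
def feasible(points, k, eps):
    """Can the points (sorted by x, as (x, y, w) triples) be covered
    by at most k steps with max weighted vertical distance <= eps?
    Greedily extend each step while the intervals [y - eps/w, y + eps/w]
    still share a common value."""
    steps, lo, hi = 1, float("-inf"), float("inf")
    for _, y, w in points:
        a, b = y - eps / w, y + eps / w
        lo, hi = max(lo, a), min(hi, b)
        if lo > hi:            # start a new step at this point
            steps += 1
            lo, hi = a, b
    return steps <= k

def fit_error(points, k, iters=60):
    """Approximate minimum error via binary search on eps (the paper's
    algorithm uses parametric search to get O(n log n) exactly)."""
    lo, hi = 0.0, max(w * abs(y) for _, y, w in points) * 2 + 1
    for _ in range(iters):
        mid = (lo + hi) / 2
        if feasible(points, k, mid):
            hi = mid
        else:
            lo = mid
    return hi

# Illustrative instance: two flat groups, optimal 2-step error 0.5.
pts = [(0, 0.0, 1.0), (1, 1.0, 1.0), (2, 5.0, 1.0), (3, 6.0, 1.0)]
err = fit_error(pts, 2)
```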
Nidheesh, N; Abdul Nazeer, K A; Ameer, P M
2017-12-01
Clustering algorithms with steps involving randomness usually give different results on different executions for the same dataset. This non-deterministic nature of algorithms such as the K-Means clustering algorithm limits their applicability in areas such as cancer subtype prediction using gene expression data. It is hard to sensibly compare the results of such algorithms with those of other algorithms. The non-deterministic nature of K-Means is due to its random selection of data points as initial centroids. We propose an improved, density based version of K-Means, which involves a novel and systematic method for selecting initial centroids. The key idea of the algorithm is to select data points which belong to dense regions and which are adequately separated in feature space as the initial centroids. We compared the proposed algorithm to a set of eleven widely used single clustering algorithms and a prominent ensemble clustering algorithm which is being used for cancer data classification, based on the performances on a set of datasets comprising ten cancer gene expression datasets. The proposed algorithm has shown better overall performance than the others. There is a pressing need in the Biomedical domain for simple, easy-to-use and more accurate Machine Learning tools for cancer subtype prediction. The proposed algorithm is simple, easy-to-use and gives stable results. Moreover, it provides comparatively better predictions of cancer subtypes from gene expression data. Copyright © 2017 Elsevier Ltd. All rights reserved.
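The seeding idea, dense and well-separated initial centroids, can be illustrated with a small deterministic sketch. The density estimate (neighbour count within a fixed radius) and the separation rule used here are simplifying assumptions for illustration, not the paper's exact method.

```python
import math

def density_based_centroids(points, k, radius):
    """Deterministic K-Means seeding sketch: rank points by local
    density (neighbours within `radius`, including the point itself)
    and greedily keep the densest points that are more than `radius`
    apart in feature space."""
    def dist(a, b):
        return math.dist(a, b)
    density = [sum(1 for q in points if dist(p, q) <= radius) for p in points]
    order = sorted(range(len(points)), key=lambda i: -density[i])
    centroids = []
    for i in order:
        if all(dist(points[i], c) > radius for c in centroids):
            centroids.append(points[i])
        if len(centroids) == k:
            break
    return centroids

# Two dense clusters plus an isolated point: the seeds land one per
# cluster, and repeated runs give the same answer.
pts = [(0, 0), (0.1, 0), (0, 0.1),
       (5, 5), (5.1, 5), (5, 5.1),
       (10, 0)]
cents = density_based_centroids(pts, 2, radius=1.0)
```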
Guijarro, María; Pajares, Gonzalo; Herrera, P. Javier
2009-01-01
The increasing technology of high-resolution airborne image sensors, including those on board Unmanned Aerial Vehicles, demands automatic solutions for processing, either on-line or off-line, the huge amounts of image data sensed during the flights. The classification of natural spectral signatures in images is one potential application. The actual tendency in classification is oriented towards the combination of simple classifiers. In this paper we propose a combined strategy based on the Deterministic Simulated Annealing (DSA) framework. The simple classifiers used are the well-tested supervised parametric Bayesian estimator and Fuzzy Clustering. DSA is an optimization approach which minimizes an energy function. The main contribution of DSA is its ability to avoid local minima during the optimization process, thanks to the annealing scheme. It outperforms the simple classifiers used for the combination and some combined strategies, including a scheme based on fuzzy cognitive maps and an optimization approach based on the Hopfield neural network paradigm. PMID:22399989
Application of Fuzzy Sets for the Improvement of Routing Optimization Heuristic Algorithms
Directory of Open Access Journals (Sweden)
Mattas Konstantinos
2016-12-01
The determination of the optimal circular path has become widely known for its difficulty in producing a solution and for its numerous applications in the scope of organization and management of passenger and freight transport. It is a mathematical combinatorial optimization problem for which several deterministic and heuristic models have been developed in recent years, applicable to route organization issues, passenger and freight transport, storage and distribution of goods, waste collection, supply and control of terminals, as well as human resource management. The scope of the present paper is the development, with the use of fuzzy sets, of a practical, comprehensible and speedy heuristic algorithm that improves the ability of the classical deterministic algorithms to identify optimum, symmetrical or non-symmetrical, circular routes. The proposed fuzzy heuristic algorithm is compared to the corresponding deterministic ones with regard to the deviation of the proposed solution from the best known solution and the complexity of the calculations needed to obtain this solution. It is shown that the use of fuzzy sets reduced by up to 35% the deviation of the solution identified by the classical deterministic algorithms from the best known solution.
Sensitivity analysis and optimization algorithms for 3D forging process design
International Nuclear Information System (INIS)
Do, T.T.; Fourment, L.; Laroussi, M.
2004-01-01
This paper presents several approaches for preform shape optimization in 3D forging. The process simulation is carried out using the FORGE3® finite element software, and the optimization problem regards the shape of initial axisymmetrical preforms. Several objective functions are considered, like the forging energy, the forging force or a surface defect criterion. Both deterministic and stochastic optimization algorithms are tested for 3D applications. The deterministic approach uses the sensitivity analysis that provides the gradient of the objective function. It is obtained by the adjoint-state method and semi-analytical differentiation. The study of stochastic approaches aims at comparing genetic algorithms and evolution strategies. Numerical results show the feasibility of such approaches, i.e. achieving satisfactory solutions within a limited number of 3D simulations, less than fifty. For a more industrial problem, the forging of a gear, encouraging optimization results are obtained.
Machine learning in virtual screening.
Melville, James L; Burke, Edmund K; Hirst, Jonathan D
2009-05-01
In this review, we highlight recent applications of machine learning to virtual screening, focusing on the use of supervised techniques to train statistical learning algorithms to prioritize databases of molecules as active against a particular protein target. Both ligand-based similarity searching and structure-based docking have benefited from machine learning algorithms, including naïve Bayesian classifiers, support vector machines, neural networks, and decision trees, as well as more traditional regression techniques. Effective application of these methodologies requires an appreciation of data preparation, validation, optimization, and search methodologies, and we also survey developments in these areas.
Optimizing Unmanned Aircraft System Scheduling
2008-06-01
ASC-U uses a deterministic algorithm to optimize over a given finite time horizon to obtain near-optimal UAS mission area assignments. ASC-U...the details of the algorithm . We set an upper bound on the total number of schedules that can be generated, so as not to create unsolvable ILPs. We...COL_MISSION_NAME)) If Trim( CStr (rMissions(iRow, COL_MISSION_REQUIRED))) <> "" Then If CLng(rMissions(iRow, COL_MISSION_REQUIRED)) > CLng
Boltzmann machines as a model for parallel annealing
Aarts, E.H.L.; Korst, J.H.M.
1991-01-01
The potential of Boltzmann machines to cope with difficult combinatorial optimization problems is investigated. A discussion of various (parallel) models of Boltzmann machines is given, based on the theory of Markov chains. A general strategy is presented for solving (approximately) combinatorial optimization problems.
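The Boltzmann-machine update rule can be illustrated on a small combinatorial problem. In the sketch below (a max-cut instance, with all weights and annealing settings chosen for illustration), a unit flips with the logistic probability 1/(1 + exp(-dC/T)), where dC is the gain in the consensus (here, cut value) and T is lowered geometrically, i.e. sequential annealing.

```python
import math
import random

def _accept(delta, t):
    """Numerically safe logistic acceptance probability."""
    z = delta / t
    if z >= 0:
        return 1.0 / (1.0 + math.exp(-z))
    e = math.exp(z)
    return e / (1.0 + e)

def boltzmann_maxcut(weights, steps=20000, t0=2.0, seed=0):
    """Sequential Boltzmann-machine dynamics applied to max-cut:
    unit i is on/off (side of the cut); flips are accepted with the
    logistic probability of the cut-value gain at temperature T."""
    rng = random.Random(seed)
    n = len(weights)
    state = [rng.randint(0, 1) for _ in range(n)]
    t = t0
    for _ in range(steps):
        i = rng.randrange(n)
        # change in cut value if unit i flips: same-side edges are
        # gained, cut edges are lost
        gain = sum(weights[i][j] * (1 if state[i] == state[j] else -1)
                   for j in range(n) if j != i)
        if rng.random() < _accept(gain, t):
            state[i] ^= 1
        t = max(0.01, t * 0.9995)   # geometric cooling with a floor
    cut = sum(weights[i][j] for i in range(n) for j in range(i + 1, n)
              if state[i] != state[j])
    return state, cut

# Illustrative 5-node weighted graph (total edge weight 13):
w = [[0, 3, 3, 0, 0], [3, 0, 3, 0, 0], [3, 3, 0, 1, 0],
     [0, 0, 1, 0, 3], [0, 0, 0, 3, 0]]
state, cut = boltzmann_maxcut(w)
```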
Directory of Open Access Journals (Sweden)
N. N. Guschinski
2015-01-01
The problem of minimizing the weight of a transfer machine with a multi-position rotary table, by the placement of work-pieces on the table for processing a homogeneous batch of work-pieces, is considered. To solve this problem a mathematical model and a heuristic particle swarm optimization algorithm are proposed. The results of numerical experiments for two real problems of this type are given. The experiments revealed that the particle swarm optimization algorithm is more effective for the solution of this problem than the methods of random search and LP-search.
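A minimal particle swarm optimization sketch is shown below. The paper's problem is a discrete placement task, so this continuous version with standard inertia/attraction coefficients is only an illustration of the algorithm family, with all settings and the test function assumed.

```python
import random

def pso_minimize(f, bounds, n_particles=20, iters=100, seed=0):
    """Minimal PSO: each particle is pulled toward its personal best
    and the global best, with inertia 0.7 and attraction weights 1.5
    (illustrative values); positions are clamped to the bounds."""
    rng = random.Random(seed)
    dim = len(bounds)
    pos = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (0.7 * vel[i][d]
                             + 1.5 * r1 * (pbest[i][d] - pos[i][d])
                             + 1.5 * r2 * (gbest[d] - pos[i][d]))
                lo, hi = bounds[d]
                pos[i][d] = min(hi, max(lo, pos[i][d] + vel[i][d]))
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# Hypothetical smooth objective with its minimum at (1, -2):
best, val = pso_minimize(lambda p: (p[0] - 1) ** 2 + (p[1] + 2) ** 2,
                         [(-5, 5), (-5, 5)])
```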
Uncertainty Aware Structural Topology Optimization Via a Stochastic Reduced Order Model Approach
Aguilo, Miguel A.; Warner, James E.
2017-01-01
This work presents a stochastic reduced order modeling strategy for the quantification and propagation of uncertainties in topology optimization. Uncertainty aware optimization problems can be computationally complex due to the substantial number of model evaluations that are necessary to accurately quantify and propagate uncertainties. This computational complexity is greatly magnified if a high-fidelity, physics-based numerical model is used for the topology optimization calculations. Stochastic reduced order model (SROM) methods are applied here to effectively 1) alleviate the prohibitive computational cost associated with an uncertainty aware topology optimization problem; and 2) quantify and propagate the inherent uncertainties due to design imperfections. A generic SROM framework that transforms the uncertainty aware, stochastic topology optimization problem into a deterministic optimization problem that relies only on independent calls to a deterministic numerical model is presented. This approach facilitates the use of existing optimization and modeling tools to accurately solve the uncertainty aware topology optimization problems in a fraction of the computational demand required by Monte Carlo methods. Finally, an example in structural topology optimization is presented to demonstrate the effectiveness of the proposed uncertainty aware structural topology optimization approach.
Virtual screening by a new Clustering-based Weighted Similarity Extreme Learning Machine approach.
Pasupa, Kitsuchart; Kudisthalert, Wasu
2018-01-01
Machine learning techniques are becoming popular in virtual screening tasks. One powerful machine learning algorithm is the Extreme Learning Machine (ELM), which has been applied to many applications and has recently been applied to virtual screening. We propose the Weighted Similarity ELM (WS-ELM), which is based on a single-layer feed-forward neural network in conjunction with 16 different similarity coefficients as activation functions in the hidden layer. It is known that the performance of a conventional ELM is not robust due to the random weight selection in the hidden layer. Thus, we propose a Clustering-based WS-ELM (CWS-ELM) that deterministically assigns weights by utilising clustering algorithms, i.e. k-means clustering and support vector clustering. The experiments were conducted on one of the most challenging datasets, the Maximum Unbiased Validation dataset, which contains 17 activity classes carefully selected from PubChem. The proposed algorithms were then compared with other machine learning techniques such as support vector machines, random forests, and similarity searching. The results show that CWS-ELM in conjunction with support vector clustering yields the best performance when utilised together with the Sokal/Sneath(1) coefficient. Furthermore, the ECFP_6 fingerprint gives the best results in our framework compared to the other types of fingerprints, namely ECFP_4, FCFP_4, and FCFP_6.
Local deterministic theory surviving the violation of Bell's inequalities
International Nuclear Information System (INIS)
Cormier-Delanoue, C.
1984-01-01
Bell's theorem, which asserts that no deterministic theory with hidden variables can give the same predictions as quantum theory, is questioned. Such a deterministic theory is presented and carefully applied to real experiments performed on pairs of correlated photons, derived from the EPR thought experiment. The ensuing predictions violate Bell's inequalities just as quantum mechanics does, and it is further shown that this discrepancy originates in the very nature of radiations. Complete locality is therefore restored, while separability remains more limited [fr]
Support vector machine for diagnosis cancer disease: A comparative study
Directory of Open Access Journals (Sweden)
Nasser H. Sweilam
2010-12-01
Full Text Available The support vector machine has become an increasingly popular tool for machine learning tasks involving classification, regression or novelty detection. Training a support vector machine requires the solution of a very large quadratic programming problem. Traditional optimization methods cannot be applied directly because of memory restrictions. Several approaches exist for circumventing these shortcomings and work well. A further learning algorithm for training SVMs, Quantum-behaved Particle Swarm Optimization, is introduced, as is another approach, the least squares support vector machine (LSSVM) with an active set strategy. The results obtained by these methods are tested on a breast cancer dataset and compared with the exact solution of the model problem.
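The LSSVM mentioned in the abstract sidesteps the large quadratic program by solving a single linear system in the dual variables. The sketch below is a minimal illustration; the toy two-cluster data, RBF kernel and hyperparameter values are assumptions, not the paper's experimental setup.

```python
import numpy as np

def rbf(X, Z, sigma=1.0):
    """Gaussian (RBF) kernel matrix between row sets X and Z."""
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def lssvm_train(X, y, gamma=10.0, sigma=1.0):
    """LSSVM classifier: one (n+1)x(n+1) linear solve instead of a QP."""
    n = len(y)
    Omega = (y[:, None] * y[None, :]) * rbf(X, X, sigma)
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = y
    A[1:, 0] = y
    A[1:, 1:] = Omega + np.eye(n) / gamma      # regularized kernel block
    rhs = np.concatenate(([0.0], np.ones(n)))
    sol = np.linalg.solve(A, rhs)
    return sol[0], sol[1:]                      # bias b, support values alpha

def lssvm_predict(X, y, alpha, b, Xnew, sigma=1.0):
    return np.sign(rbf(Xnew, X, sigma) @ (alpha * y) + b)

# Toy two-class problem: clusters around (-2,-2) and (2,2).
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-2, 0.5, (20, 2)), rng.normal(2, 0.5, (20, 2))])
y = np.array([-1.0] * 20 + [1.0] * 20)
b, alpha = lssvm_train(X, y)
pred = lssvm_predict(X, y, alpha, b, X)
```

Unlike a standard SVM, every training point here becomes a support vector; the gain is that training is a dense linear solve.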
Optimal control of gene mutation in DNA replication.
Yu, Juanyi; Li, Jr-Shin; Tarn, Tzyh-Jong
2012-01-01
We propose a molecular-level control-system view of gene mutations in DNA replication based on the finite field concept. By treating DNA sequences as state variables, chemical mutagens and radiation as control inputs, one cell cycle as a step increment, and the measurements of the resulting DNA sequence as outputs, we derive system equations for both deterministic and stochastic discrete-time, finite-state systems of different scales. Defining the cost function as a summation of the costs of applying mutagens and the off-trajectory penalty, we solve the deterministic and stochastic optimal control problems by a dynamic programming algorithm. In addition, given that the system is completely controllable, we find that the global optimum of both base-to-base and codon-to-codon deterministic mutations can always be achieved within a finite number of steps.
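The backward dynamic programming recursion used above can be sketched generically for a deterministic finite-state system. The four-state "base" model below is a toy illustration with invented costs, not the paper's DNA replication model.

```python
def dp_optimal_control(states, controls, step, stage_cost, terminal_cost, horizon):
    """Backward dynamic programming for a deterministic finite-state system."""
    V = {s: terminal_cost(s) for s in states}        # value-to-go at the horizon
    policy = []
    for _ in range(horizon):
        Vt, pol = {}, {}
        for s in states:
            best = min(controls, key=lambda u: stage_cost(s, u) + V[step(s, u)])
            pol[s] = best
            Vt[s] = stage_cost(s, best) + V[step(s, best)]
        V, policy = Vt, [pol] + policy               # prepend: earliest stage first
    return V, policy

# Toy mutation model: states 0..3 encode a base; control u sets the base directly,
# u == s means "do nothing" (cost 0), any change costs 1; the target base is 2.
states = range(4)
controls = range(4)
step = lambda s, u: u
stage_cost = lambda s, u: 0.0 if u == s else 1.0
terminal = lambda s: 0.0 if s == 2 else 10.0        # off-trajectory penalty
V, policy = dp_optimal_control(states, controls, step, stage_cost, terminal, horizon=3)
```

With these costs, starting at the target base costs nothing, while any other base needs exactly one change somewhere in the horizon.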
Vesapogu, Joshi Manohar; Peddakotla, Sujatha; Kuppa, Seetha Rama Anjaneyulu
2013-01-01
With advancements in semiconductor technology, high-power medium voltage (MV) drives are extensively used in numerous industrial applications. A challenging technical requirement of MV drives is to control a multilevel inverter (MLI) with low total harmonic distortion (%THD), satisfying the IEEE Standard 519-1992 harmonic guidelines, and with low switching losses. Among all modulation control strategies for MLIs, the selective harmonic elimination (SHE) technique is one of the traditionally preferred modulation control techniques at fundamental switching frequency, with a better harmonic profile. On the other hand, the equations formed by the SHE technique are highly nonlinear and may have multiple solutions, a single solution, or even no solution at a particular modulation index (MI). However, in some MV drive applications it is required to operate over a range of MI. Providing analytical solutions for the SHE equations over the whole MI range from 0 to 1 has been a challenging task for researchers. In this paper, an attempt is made to solve the SHE equations using deterministic and stochastic optimization methods, and a comparative harmonic analysis is carried out. An effective algorithm that minimizes %THD with less computational effort than the other optimization algorithms is presented. To validate the effectiveness of the proposed MPSO technique, an experiment was carried out on a low-power prototype of a three-phase CHB 11-level inverter using an FPGA-based Xilinx Spartan-3A DSP controller. The experimental results proved that the MPSO technique successfully solved the SHE equations over the whole MI range from 0 to 1, and the %THD obtained over the major range of MI also satisfies the IEEE 519-1992 harmonic guidelines.
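The SHE equations mentioned above are transcendental in the switching angles. As a hedged sketch (not the paper's 11-level formulation), the toy case below uses two angles to set the fundamental at modulation index m while eliminating only the 5th harmonic, solved by a plain Newton iteration with a finite-difference Jacobian; the initial guess is hand-picked.

```python
import numpy as np

def she_residual(theta, m, harmonics=(5,)):
    """Residuals: fundamental at modulation index m, listed harmonics driven to zero."""
    eqs = [np.cos(theta).sum() - len(theta) * m]
    eqs += [np.cos(k * theta).sum() for k in harmonics]
    return np.array(eqs)

def newton_solve(f, x0, tol=1e-10, max_iter=50, h=1e-7):
    """Newton's method with a forward-difference Jacobian."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        fx = f(x)
        if np.max(np.abs(fx)) < tol:
            break
        J = np.empty((len(fx), len(x)))
        for j in range(len(x)):
            xp = x.copy()
            xp[j] += h
            J[:, j] = (f(xp) - fx) / h
        x = x - np.linalg.solve(J, fx)
    return x

m = 0.8
theta = newton_solve(lambda t: she_residual(t, m), [0.3, 0.9])  # radians, assumed guess
```

A practical solver must repeat this over the whole MI grid and handle the multiple/no-solution regions the abstract describes, which is where the deterministic and stochastic optimization methods come in.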
Cognitive Development Optimization Algorithm Based Support Vector Machines for Determining Diabetes
Directory of Open Access Journals (Sweden)
Utku Kose
2016-03-01
Full Text Available The definition, diagnosis and classification of Diabetes Mellitus and its complications are very important. The World Health Organization (WHO) and other societies, as well as scientists, have carried out many studies on this subject. One of the most important research interests within it is computer-supported decision systems for diagnosing diabetes. In such systems, Artificial Intelligence techniques are often used for disease diagnostics to streamline the diagnostic process in daily routine and to avoid misdiagnosis. In this study, a diabetes diagnosis system formed from both Support Vector Machines (SVM) and the Cognitive Development Optimization Algorithm (CoDOA) is proposed. During the training of the SVM, CoDOA was used to determine the sigma parameter of the Gaussian (RBF) kernel function, and eventually a classification process was carried out on the Pima Indians diabetes data set. The proposed approach offers an alternative solution in the field of Artificial Intelligence-based diabetes diagnosis and contributes to the related literature on diagnosis processes.
Quantum cloning machines for equatorial qubits
International Nuclear Information System (INIS)
Fan Heng; Matsumoto, Keiji; Wang Xiangbin; Wadati, Miki
2002-01-01
Quantum cloning machines for equatorial qubits are studied. For the case of a one-to-two phase-covariant quantum cloning machine, we present the networks consisting of quantum gates to realize the quantum cloning transformations. The copied equatorial qubits are shown to be separable by using the Peres-Horodecki criterion. The optimal one-to-M phase-covariant quantum cloning transformations are given
Feasibility Study for Electrical Discharge Machining of Large DU-Mo Castings
Energy Technology Data Exchange (ETDEWEB)
Hill, Mary Ann [Los Alamos National Lab. (LANL), Los Alamos, NM (United States). SIGMA Division; Dombrowski, David E. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States). SIGMA Division; Clarke, Kester Diederik [Los Alamos National Lab. (LANL), Los Alamos, NM (United States). SIGMA Division; Forsyth, Robert Thomas [Los Alamos National Lab. (LANL), Los Alamos, NM (United States). SIGMA Division; Aikin, Robert M. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States). SIGMA Division; Alexander, David John [Los Alamos National Lab. (LANL), Los Alamos, NM (United States). SIGMA Division; Tegtmeier, Eric Lee [Los Alamos National Lab. (LANL), Los Alamos, NM (United States). SIGMA Division; Robison, Jeffrey Curt [Los Alamos National Lab. (LANL), Los Alamos, NM (United States). SIGMA Division; Beard, Timothy Vance [Los Alamos National Lab. (LANL), Los Alamos, NM (United States). SIGMA Division; Edwards, Randall Lynn [Los Alamos National Lab. (LANL), Los Alamos, NM (United States). SIGMA Division; Mauro, Michael Ernest [Los Alamos National Lab. (LANL), Los Alamos, NM (United States). SIGMA Division; Scott, Jeffrey E. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States). SIGMA Division; Strandy, Matthew Thomas [Los Alamos National Lab. (LANL), Los Alamos, NM (United States). SIGMA Division
2016-10-31
U-10 wt. % Mo (U-10Mo) alloys are being developed as low enrichment monolithic fuel for the CONVERT program. Optimization of processing for the monolithic fuel is being pursued with the use of electrical discharge machining (EDM) under CONVERT HPRR WBS 1.2.4.5 Optimization of Coupon Preparation. The process is applicable to manufacturing experimental fuel plate specimens for the Mini-Plate-1 (MP-1) irradiation campaign. The benefits of EDM are reduced machining costs, ability to achieve higher tolerances, stress-free, burr-free surfaces eliminating the need for milling, and the ability to machine complex shapes. Kerf losses are much smaller with EDM (tenths of mm) compared to conventional machining (mm). Reliable repeatability is achievable with EDM due to its computer-generated machining programs.
Directory of Open Access Journals (Sweden)
Emmanouil Styvaktakis
2007-01-01
Full Text Available This paper presents the two main types of classification methods for power quality disturbances based on underlying causes: deterministic classification, with an expert system as an example, and statistical classification, with support vector machines (a novel method) as an example. An expert system is suitable when one has a limited amount of data and sufficient power system expert knowledge; however, its application requires a set of threshold values. Statistical methods are suitable when a large amount of data is available for training. Two important issues in guaranteeing the effectiveness of a classifier, data segmentation and feature extraction, are discussed. Segmentation of a sequence of data recordings is a preprocessing step that partitions the data into segments, each representing a duration containing either an event or a transition between two events. Feature extraction is applied to each segment individually. Some useful features and their effectiveness are then discussed. Experimental results are included to demonstrate the effectiveness of both systems. Finally, conclusions are given together with a discussion of some future research directions.
Directory of Open Access Journals (Sweden)
Yudong Zhang
2013-01-01
Full Text Available Automated abnormal brain detection is extremely important for clinical diagnosis. Over the last decades numerous methods have been presented. In this paper, we propose a novel hybrid system to classify a given MR brain image as either normal or abnormal. The proposed method first employs the digital wavelet transform to extract features and then uses principal component analysis (PCA) to reduce the feature space. Afterwards, we construct a kernel support vector machine (KSVM) with an RBF kernel, using particle swarm optimization (PSO) to optimize the parameters C and σ. Fivefold cross-validation was utilized to avoid overfitting. In the experimental procedure, we created a dataset of 90 brain MR images downloaded from the Harvard Medical School website. The abnormal brain MR images cover the following diseases: glioma, metastatic adenocarcinoma, metastatic bronchogenic carcinoma, meningioma, sarcoma, Alzheimer's, Huntington's, motor neuron disease, cerebral calcinosis, Pick's disease, Alzheimer's plus visual agnosia, multiple sclerosis, AIDS dementia, Lyme encephalopathy, herpes encephalitis, Creutzfeldt-Jakob disease, and cerebral toxoplasmosis. The fivefold cross-validation results showed that our method achieved 97.78% classification accuracy, higher than the 86.22% of BP-NN and the 91.33% of RBF-NN. For parameter selection, we compared PSO with random selection. The results showed that PSO is more effective for building the optimal KSVM.
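The PCA step of the pipeline above can be sketched with a plain SVD. The synthetic low-rank data below stands in for the wavelet feature vectors; the wavelet and KSVM stages are omitted, and all dimensions are assumptions.

```python
import numpy as np

def pca_reduce(X, k):
    """Project rows of X onto the top-k principal components."""
    Xc = X - X.mean(axis=0)                       # center features
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    components = Vt[:k]                           # (k, n_features)
    explained = S[:k] ** 2 / (S ** 2).sum()       # variance ratio per component
    return Xc @ components.T, components, explained

# Synthetic stand-in for wavelet features: two latent factors mixed into 10 dims.
rng = np.random.default_rng(1)
latent = rng.normal(size=(200, 2))
mix = rng.normal(size=(2, 10))
X = latent @ mix + 0.01 * rng.normal(size=(200, 10))
Z, comps, ratio = pca_reduce(X, 2)
```

In the paper's pipeline, Z (rather than the raw feature matrix) would be fed to the PSO-tuned kernel SVM.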
Reliability-based trajectory optimization using nonintrusive polynomial chaos for Mars entry mission
Huang, Yuechen; Li, Haiyang
2018-06-01
This paper presents the reliability-based sequential optimization (RBSO) method to solve the trajectory optimization problem with parametric uncertainties in the entry dynamics of a Mars entry mission. First, the deterministic entry trajectory optimization model is reviewed, and then the reliability-based optimization model is formulated. In addition, a modified sequential optimization method, in which the nonintrusive polynomial chaos expansion (PCE) method and the most probable point (MPP) searching method are employed, is proposed to solve the reliability-based optimization problem efficiently. The nonintrusive PCE method enables the transformation between the stochastic optimization (SO) and the deterministic optimization (DO) and the efficient approximation of the trajectory solution. The MPP method, which assesses the reliability of constraint satisfaction only up to the necessary level, is employed to further improve the computational efficiency. The cycle comprising SO, reliability assessment and constraint update is repeated in the RBSO until the reliability requirements for constraint satisfaction are met. Finally, the RBSO is compared with traditional DO and with traditional sequential optimization based on Monte Carlo (MC) simulation in a specific Mars entry mission to demonstrate the effectiveness and efficiency of the proposed method.
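Nonintrusive PCE, as used above, builds a polynomial surrogate purely from samples of the underlying model. A hedged one-dimensional sketch (probabilists' Hermite basis; the degree, sample count and test function exp(X) are arbitrary choices, not the paper's entry dynamics) recovers the mean and variance of a function of a standard normal input from the expansion coefficients.

```python
import math
import numpy as np
from numpy.polynomial.hermite_e import hermevander

def pce_fit(g, degree=3, n_samples=20000, seed=0):
    """Least-squares (nonintrusive) PCE of g(X), X ~ N(0,1), in the He_k basis."""
    x = np.random.default_rng(seed).standard_normal(n_samples)
    Phi = hermevander(x, degree)                  # columns He_0 .. He_degree at samples
    coef, *_ = np.linalg.lstsq(Phi, g(x), rcond=None)
    # Orthogonality E[He_i He_j] = i! δ_ij gives moments directly from coefficients:
    mean = coef[0]
    var = sum(math.factorial(k) * coef[k] ** 2 for k in range(1, degree + 1))
    return coef, mean, var

coef, mean, var = pce_fit(np.exp)
# Exact values for comparison: E[e^X] = e^{1/2}, Var[e^X] = e^2 - e.
```

The same mechanism, with a multivariate basis, is what lets the RBSO loop evaluate trajectory statistics without repeated Monte Carlo runs of the full dynamics.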
A Theory of Deterministic Event Structures
Lee, I.; Rensink, Arend; Smolka, S.A.
1995-01-01
We present an ω-complete algebra of a class of deterministic event structures, which are labelled prime event structures where the labelling function satisfies a certain distinctness condition. The operators of the algebra are summation, sequential composition and join. Each of these gives rise to a
Anti-deterministic behaviour of discrete systems that are less predictable than noise
Urbanowicz, Krzysztof; Kantz, Holger; Holyst, Janusz A.
2005-05-01
We present a new type of deterministic dynamical behaviour that is less predictable than white noise. We call it anti-deterministic (AD) because time series corresponding to the dynamics of such systems do not generate deterministic lines in recurrence plots for small thresholds. We show that although the dynamics is chaotic in the sense of exponential divergence of nearby initial conditions, and although some properties of AD data are similar to white noise, the AD dynamics is in fact less predictable than noise and hence is different from pseudo-random number generators.
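The recurrence-plot diagnostic invoked above can be sketched directly. The DET measure below, the fraction of recurrence points lying on diagonal lines, separates a periodic signal from white noise; the threshold, series length and signals are arbitrary illustrative choices.

```python
import numpy as np

def determinism(x, eps, lmin=2):
    """Fraction of recurrence points lying on diagonal lines of length >= lmin."""
    R = (np.abs(x[:, None] - x[None, :]) < eps).astype(int)
    np.fill_diagonal(R, 0)                       # ignore the trivial main diagonal
    recurrent = R.sum()
    if recurrent == 0:
        return 0.0
    on_lines = 0
    for k in range(1, len(x)):                   # scan each upper off-diagonal
        run = 0
        for v in list(np.diagonal(R, k)) + [0]:  # trailing 0 flushes the last run
            if v:
                run += 1
            else:
                if run >= lmin:
                    on_lines += run
                run = 0
    return 2 * on_lines / recurrent              # factor 2: R is symmetric

rng = np.random.default_rng(2)
periodic = np.sin(0.3 * np.arange(300))
noise = rng.standard_normal(300)
det_p = determinism(periodic, eps=0.1)
det_n = determinism(noise, eps=0.1)
```

A deterministic signal yields long diagonal lines and a DET near one; the paper's point is that AD systems, despite being deterministic, fail this test for small thresholds much as noise does.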
Deterministic dynamics of plasma focus discharges
International Nuclear Information System (INIS)
Gratton, J.; Alabraba, M.A.; Warmate, A.G.; Giudice, G.
1992-04-01
The performance (neutron yield, X-ray production, etc.) of plasma focus discharges fluctuates strongly in series performed with fixed experimental conditions. Previous work suggests that these fluctuations are due to a deterministic "internal" dynamics involving degrees of freedom not controlled by the operator, possibly related to adsorption and desorption of impurities from the electrodes. According to these dynamics the yield of a discharge depends on the outcome of the previous ones. We study 8 series of discharges in three different facilities, with various electrode materials and operating conditions. More evidence of a deterministic internal dynamics is found. The fluctuation pattern depends on the electrode materials and other characteristics of the experiment. A heuristic mathematical model that describes adsorption and desorption of impurities from the electrodes and their consequences on the yield is presented. The model predicts steady yield or periodic and chaotic fluctuations, depending on parameters related to the experimental conditions. (author). 27 refs, 7 figs, 4 tabs
Inherent Conservatism in Deterministic Quasi-Static Structural Analysis
Verderaime, V.
1997-01-01
The cause of the long-suspected excessive conservatism in the prevailing deterministic structural safety factor has been identified: an inherent violation of the error propagation laws when statistical data are reduced to deterministic values and then combined algebraically through successive structural computational processes. These errors are restricted to the applied stress computations, and because the means and variations of the tolerance-limit format are added, the errors are positive, serially cumulative, and excessively conservative. Reliability methods circumvent these errors and provide more efficient and uniformly safe structures. The document is a tutorial on the deficiencies and nature of the current safety factor and on its improvement and transition to absolute reliability.
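The abstract's core claim, that adding tolerance limits term by term overstates a statistically combined bound, can be checked numerically. The two independent stress contributions below are invented illustrative numbers, not data from the report.

```python
import math

# Two independent stress contributions (mean, standard deviation), same units.
mu_a, sd_a = 100.0, 5.0
mu_b, sd_b = 60.0, 4.0
k = 3.0                                    # tolerance-limit multiplier (k-sigma)

# Deterministic practice: convert each term to a k-sigma limit, then add the limits.
worst_case = (mu_a + k * sd_a) + (mu_b + k * sd_b)

# Error-propagation-consistent: add the means, combine independent variances, then bound.
rss = (mu_a + mu_b) + k * math.sqrt(sd_a ** 2 + sd_b ** 2)

margin = worst_case - rss                  # the built-in extra conservatism
```

Because sqrt(sd_a² + sd_b²) < sd_a + sd_b for any two nonzero deviations, the worst-case stack always exceeds the root-sum-square bound, and the gap grows with every additional term combined this way, which is the "serially cumulative" conservatism the abstract describes.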
On the implementation of a deterministic secure coding protocol using polarization entangled photons
Ostermeyer, Martin; Walenta, Nino
2007-01-01
We demonstrate a prototype-implementation of deterministic information encoding for quantum key distribution (QKD) following the ping-pong coding protocol [K. Bostroem, T. Felbinger, Phys. Rev. Lett. 89 (2002) 187902-1]. Due to the deterministic nature of this protocol the need for post-processing the key is distinctly reduced compared to non-deterministic protocols. In the course of our implementation we analyze the practicability of the protocol and discuss some security aspects of informat...
Directory of Open Access Journals (Sweden)
Lagerev I.A.
2016-03-01
Full Text Available The article considers the problem of designing original damping devices for worn cylindrical hinges in the crane-manipulator installations of mobile machines. These devices can significantly reduce the additional impact loads on the steel structure of the manipulator caused by the increased gaps in the hinges. A general formulation of the nonlinear constrained optimization of the sizes of the elastic elements of the damping devices is given, and promising design variants of the elastic elements are considered. For circular and arc elastic elements with circular and rectangular cross-sections, the optimal design problems are formulated, including criterion functions and systems of geometric, technological, stiffness and strength penalty constraints. An analysis of the impact of various operating and design parameters on the results of the optimal design of the elastic elements is performed. Recommendations are given on which constructive types of elastic elements to use to obtain the required stiffness of the damping devices.
Real-Time Demand Side Management Algorithm Using Stochastic Optimization
Directory of Open Access Journals (Sweden)
Moses Amoasi Acquah
2018-05-01
Full Text Available A demand side management technique is deployed along with battery energy-storage systems (BESS to lower the electricity cost by mitigating the peak load of a building. Most of the existing methods rely on manual operation of the BESS, or even an elaborate building energy-management system resorting to a deterministic method that is susceptible to unforeseen growth in demand. In this study, we propose a real-time optimal operating strategy for BESS based on density demand forecast and stochastic optimization. This method takes into consideration uncertainties in demand when accounting for an optimal BESS schedule, making it robust compared to the deterministic case. The proposed method is verified and tested against existing algorithms. Data obtained from a real site in South Korea is used for verification and testing. The results show that the proposed method is effective, even for the cases where the forecasted demand deviates from the observed demand.
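The deterministic-versus-stochastic distinction above can be sketched with a toy peak-shaving decision: the deterministic rule schedules the battery against the point forecast, while the stochastic rule minimizes the expected peak over demand scenarios. All numbers below are invented, and the single-hour discharge model is a deliberate simplification of a real BESS schedule.

```python
import numpy as np

# Hourly demand scenarios (kW): rows are equally likely forecast realizations.
scenarios = np.array([
    [50, 70, 95, 80, 60],
    [50, 75, 85, 90, 60],
    [55, 65, 90, 85, 65],
], dtype=float)
forecast = scenarios.mean(axis=0)           # the deterministic point forecast
battery_kw = 20.0                           # battery can shave one hour by 20 kW

def peak_after_discharge(profile, hour):
    shaved = profile.copy()
    shaved[hour] -= battery_kw
    return shaved.max()

hours = range(scenarios.shape[1])
# Deterministic: pick the discharge hour minimizing the peak of the point forecast.
det_hour = min(hours, key=lambda h: peak_after_discharge(forecast, h))
# Stochastic: pick the hour minimizing the *expected* peak over the scenarios.
exp_peak = lambda h: np.mean([peak_after_discharge(s, h) for s in scenarios])
sto_hour = min(hours, key=exp_peak)
```

By construction the stochastic choice can never have a worse expected peak than the deterministic one, which is the robustness argument the abstract makes against forecast-only scheduling.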
International Nuclear Information System (INIS)
Salcedo-Sanz, S.; Pastor-Sánchez, A.; Prieto, L.; Blanco-Aguilera, A.; García-Herrera, R.
2014-01-01
Highlights: • A novel approach for short-term wind speed prediction is presented. • The system is formed by a coral reefs optimization algorithm and an extreme learning machine. • Feature selection is carried out with the CRO to improve the ELM performance. • The method is tested in real wind farm data in USA, for the period 2007–2008. - Abstract: This paper presents a novel approach for short-term wind speed prediction based on a Coral Reefs Optimization algorithm (CRO) and an Extreme Learning Machine (ELM), using meteorological predictive variables from a physical model (the Weather Research and Forecast model, WRF). The approach is based on a Feature Selection Problem (FSP) carried out with the CRO, that must obtain a reduced number of predictive variables out of the total available from the WRF. This set of features will be the input of an ELM, that finally provides the wind speed prediction. The CRO is a novel bio-inspired approach, based on the simulation of reef formation and coral reproduction, able to obtain excellent results in optimization problems. On the other hand, the ELM is a new paradigm in neural networks’ training, that provides a robust and extremely fast training of the network. Together, these algorithms are able to successfully solve this problem of feature selection in short-term wind speed prediction. Experiments in a real wind farm in the USA show the excellent performance of the CRO–ELM approach in this FSP wind speed prediction problem
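The ELM training step referenced above amounts to a single least-squares solve after a random, untrained hidden layer. A minimal regression sketch follows; the architecture, toy sine-recovery task and seeds are assumptions, and the CRO feature-selection stage of the paper is omitted.

```python
import numpy as np

def elm_train(X, y, n_hidden=50, seed=0):
    """Extreme Learning Machine: random hidden layer, least-squares output weights."""
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(X.shape[1], n_hidden))     # random input weights (never trained)
    b = rng.normal(size=n_hidden)                   # random hidden biases
    H = np.tanh(X @ W + b)                          # hidden-layer activations
    beta, *_ = np.linalg.lstsq(H, y, rcond=None)    # the only "training": one solve
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# Toy regression: recover y = sin(x) from noisy samples.
rng = np.random.default_rng(1)
x = np.linspace(-3, 3, 200)[:, None]
y = np.sin(x[:, 0]) + 0.05 * rng.standard_normal(200)
W, b, beta = elm_train(x, y)
pred = elm_predict(x, W, b, beta)
rmse = np.sqrt(np.mean((pred - np.sin(x[:, 0])) ** 2))
```

The absence of iterative backpropagation is what gives the ELM the "extremely fast training" the abstract highlights; in the paper, the CRO chooses which WRF variables form the columns of X.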
Xie, Rui-Fang; Shi, Zhi-Na; Li, Zhi-Cheng; Chen, Pei-Pei; Li, Yi-Min; Zhou, Xin
2014-01-01
Using Dachengqi Tang (DCQT) as a model, high performance liquid chromatography (HPLC) fingerprints were applied to optimize the machine extracting process with a Box–Behnken experimental design. HPLC fingerprints were used to investigate the chemical ingredients of DCQT; a synthetic weighting method based on the analytic hierarchy process (AHP) and criteria importance through intercriteria correlation (CRITIC) was performed to calculate synthetic scores of the fingerprints; using the mark ingredien...
Deterministic secure communication protocol without using entanglement
Cai, Qing-yu
2003-01-01
We show a deterministic secure direct communication protocol using a single qubit in a mixed state. The security of this protocol is based on the security proof of the BB84 protocol. It can be realized with current technologies.
Test Rig for Valves of Digital Displacement Machines
DEFF Research Database (Denmark)
Nørgård, Christian; Christensen, Jeppe Haals; Bech, Michael Møller
2017-01-01
A test rig for the valves of digital displacement machines has been developed at Aalborg University. It is composed of a commercial radial piston machine, which has been modified to facilitate Digital Displacement operation for a single piston. Prototype valves have been optimized, designed and m...
Experimental Investigation – Magnetic Assisted Electro Discharge Machining
Kesava Reddy, Chirra; Manzoor Hussain, M.; Satyanarayana, S.; Krishna, M. V. S. Murali
2018-04-01
Emerging technologies need advanced machined parts with high strength, temperature resistance and fatigue life, at low production cost and with good surface quality, to fit various industrial applications. The electro discharge machine is one of the most extensively used machines for manufacturing, with high precision and accuracy, advanced parts that cannot be machined by other traditional machines. Machining of DIN 17350-1.2080 (high carbon, high chromium steel) using electro discharge machining is discussed in this paper. In the present investigation an effort is made to place a permanent magnet at various positions near the spark zone to improve the quality of the machined surface. Taguchi methodology is used to obtain the optimal choice for each machining parameter, such as peak current, pulse duration, gap voltage and servo reference voltage. The process parameters have a significant influence on machining characteristics and surface finish. Improvement in surface finish is observed when the process parameters are set at the optimum condition under the influence of a magnetic field at various positions.
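The Taguchi parameter selection described above reduces, for a larger-the-better response, to comparing average signal-to-noise ratios per factor level. The sketch below uses an invented L4(2^3)-style array with illustrative responses; the factor names echo the abstract but none of the numbers are measured data.

```python
import math

# Hypothetical L4 array: columns = peak current, pulse duration, gap voltage
# (two levels each); responses = surface finish scores (larger is better), two replicates.
runs = [  # (factor levels, replicate responses)
    ((1, 1, 1), [62, 60]),
    ((1, 2, 2), [71, 69]),
    ((2, 1, 2), [50, 52]),
    ((2, 2, 1), [66, 64]),
]

def sn_larger_better(ys):
    """Taguchi signal-to-noise ratio, larger-the-better form."""
    return -10.0 * math.log10(sum(1.0 / y ** 2 for y in ys) / len(ys))

# Average S/N per factor level, then pick the level with the higher mean S/N.
best = []
for factor in range(3):
    sn = {1: [], 2: []}
    for levels, ys in runs:
        sn[levels[factor]].append(sn_larger_better(ys))
    best.append(max(sn, key=lambda lv: sum(sn[lv]) / len(sn[lv])))
```

The orthogonality of the array is what lets each factor's effect be averaged out independently from only four runs instead of the full 2^3 grid.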
International Nuclear Information System (INIS)
Goreac, Dan; Kobylanski, Magdalena; Martinez, Miguel
2016-01-01
We study optimal control problems in infinite horizon when the dynamics belong to a specific class of piecewise deterministic Markov processes constrained to star-shaped networks (corresponding to a toy traffic model). We adapt the results in Soner (SIAM J Control Optim 24(6):1110–1122, 1986) to prove the regularity of the value function and the dynamic programming principle. Extending the networks and Krylov’s “shaking the coefficients” method, we prove that the value function can be seen as the solution to a linearized optimization problem set on a convenient set of probability measures. The approach relies entirely on viscosity arguments. As a by-product, the dual formulation guarantees that the value function is the pointwise supremum over regular subsolutions of the associated Hamilton–Jacobi integrodifferential system. This ensures that the value function satisfies Perron’s preconization for the (unique) candidate to viscosity solution.
Optimal Computing Budget Allocation for Particle Swarm Optimization in Stochastic Optimization.
Zhang, Si; Xu, Jie; Lee, Loo Hay; Chew, Ek Peng; Wong, Wai Peng; Chen, Chun-Hung
2017-04-01
Particle Swarm Optimization (PSO) is a popular metaheuristic for deterministic optimization. Originating in interpretations of the movement of individuals in a bird flock or fish school, PSO introduces the concepts of personal best and global best to simulate the pattern of searching for food by flocking, successfully translating the natural phenomenon to the optimization of complex functions. Many real-life applications of PSO deal with stochastic problems. To solve a stochastic problem using PSO, a straightforward approach is to allocate computational effort equally among all particles and obtain the same number of samples of fitness values. This is not an efficient use of the computational budget and leaves considerable room for improvement. This paper proposes a seamless integration of the concept of optimal computing budget allocation (OCBA) into PSO to improve the computational efficiency of PSO for stochastic optimization problems. We derive an asymptotically optimal allocation rule to intelligently determine the number of samples for all particles such that the PSO algorithm can efficiently select the personal best and global best when there is stochastic estimation noise in the fitness values. We also propose an easy-to-implement sequential procedure. Numerical tests show that our new approach can obtain much better results using the same amount of computational effort.
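The classic asymptotic OCBA allocation rule referenced above can be sketched as follows: sampling budget flows toward designs that are close to the current best and noisy, since those are the hardest to rank correctly. The means, standard deviations and budget below are invented, and this sketch assumes the goal is to identify the design with the largest mean.

```python
import math

def ocba_allocation(means, stds, budget):
    """Asymptotic OCBA rule: split a sampling budget to maximize P(correct selection)."""
    b = max(range(len(means)), key=lambda i: means[i])   # current best design
    ratios = [0.0] * len(means)
    ref = None                                           # first non-best design as base
    for i in range(len(means)):
        if i == b:
            continue
        if ref is None:
            ref = i
            ratios[i] = 1.0
        else:
            delta_ref = means[b] - means[ref]
            delta_i = means[b] - means[i]
            # N_i / N_ref = ((sigma_i/delta_i) / (sigma_ref/delta_ref))^2
            ratios[i] = ((stds[i] / delta_i) / (stds[ref] / delta_ref)) ** 2
    # N_b = sigma_b * sqrt(sum over i != b of (N_i / sigma_i)^2)
    ratios[b] = stds[b] * math.sqrt(sum((ratios[i] / stds[i]) ** 2
                                        for i in range(len(means)) if i != b))
    total = sum(ratios)
    return [budget * r / total for r in ratios]

# Design 0 is best; design 1 is a close competitor, design 2 is clearly worse.
alloc = ocba_allocation(means=[10.0, 9.5, 5.0], stds=[2.0, 2.0, 2.0], budget=1000)
```

In the paper's integration, this rule decides how many fitness samples each PSO particle receives per iteration in place of the naive equal split.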
Li, Ning; Cao, Chao; Wang, Cong
2017-06-15
Supporting simultaneous access of machine-type devices is a critical challenge in machine-to-machine (M2M) communications. In this paper, we propose an optimal scheme to dynamically adjust the Access Class Barring (ACB) factor and the number of random access channel (RACH) resources for clustered M2M communications, in which Delay-Sensitive (DS) devices coexist with Delay-Tolerant (DT) ones. In M2M communications, since delay-sensitive devices share random access resources with delay-tolerant devices, reducing the resources consumed by delay-sensitive devices means that more resources are available to delay-tolerant ones. Our goal is to optimize the random access scheme so that it not only satisfies the requirements of delay-sensitive devices, but also takes the communication quality of delay-tolerant ones into consideration. We discuss this problem from the perspective of delay-sensitive services by dynamically adjusting the resource allocation and the ACB scheme for these devices. Simulation results show that our proposed scheme performs well in satisfying the delay-sensitive services as well as in increasing the utilization rate of the random access resources allocated to them.
Cyclic flow shop scheduling problem with two-machine cells
Directory of Open Access Journals (Sweden)
Bożejko Wojciech
2017-06-01
Full Text Available In the paper a variant of cyclic production with setups and a two-machine cell is considered. One stage of solving the problem consists of assigning each operation to the machine on which it will be carried out. The total number of such assignments is exponential. We propose a polynomial-time algorithm finding the optimal assignment of operations to machines.
Using a vision cognitive algorithm to schedule virtual machines
Zhao Jiaqi; Mhedheb Yousri; Tao Jie; Jrad Foued; Liu Qinghuai; Streit Achim
2014-01-01
Scheduling virtual machines is a major research topic for cloud computing, because it directly influences performance, operation cost and quality of service. A large cloud center is normally equipped with several hundred thousand physical machines. The mission of the scheduler is to select the best physical machine to host a virtual machine. This is an NP-hard global optimization problem posing grand challenges for researchers. This work studies the Virtual Machine (VM) scheduling problem on the...
Challenges for coexistence of machine to machine and human to human applications in mobile network
DEFF Research Database (Denmark)
Sanyal, R.; Cianca, E.; Prasad, Ramjee
2012-01-01
A key factor for the evolution of mobile networks towards 4G is to bring to fruition high bandwidth per mobile node. Eventually, due to the advent of a new class of applications, namely Machine-to-Machine, we foresee new challenges where bandwidth per user is no longer the primal driver. As an immediate impact of the high penetration of M2M devices, we envisage a surge in the signaling messages for mobility and location management. The cell size will shrivel due to high tele-density, resulting in even more signaling messages related to handoff and location updates. The mobile network should be evolved to address the various nuances of the mobile devices used by man and machines. The bigger question is as follows: is the state-of-the-art mobile network designed optimally to cater to both Human-to-Human and Machine-to-Machine applications? This paper presents the primary challenges.
Directory of Open Access Journals (Sweden)
Євген Іванович Іванов
2015-11-01
Full Text Available This article describes the optimization of hole processing and the minimization of the number of tool changes. Two versions of the processing scheme, following the rules for setting processing schemes on "machining center" machines, are offered. In the first version, according to classical technology, each hole is processed over all the passes required for the specified accuracy of size and shape. In the second version the number of tool changes is minimized. This method consists of dividing all the holes into groups according to their diameters: all the holes of the same diameter are processed in one pass, then the tools are changed and the holes of the next diameter are processed, and so on. Let us consider the optimization problem. The traveling salesman problem is one of the most famous problems in combinatorics. The problem is as follows: a traveling salesman (hawker) must find the most advantageous route leaving a town, visiting each of the other towns 2, 3, ..., n exactly once in an unknown order, and returning to the first town. The distances between all the towns are known. It is necessary to determine in what order the salesman must visit the towns so that the route is the shortest. There is just one absolutely precise algorithm: exhaustive enumeration of all routes, which is the longest and thus the most inefficient option. There are also simpler methods for the traveling salesman problem: the branch-and-bound algorithm, the ant colony method and genetic algorithms.
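The "absolutely precise" brute-force method the abstract mentions can be sketched for a tiny instance; the coordinates are invented for illustration, where a real hole-drilling instance would use machine-table positions.

```python
import itertools
import math

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def tour_length(order, pts):
    # Closed tour: return to the starting town at the end.
    return sum(dist(pts[order[i]], pts[order[(i + 1) % len(order)]])
               for i in range(len(order)))

def brute_force_tsp(pts):
    n = len(pts)
    best_order, best_len = None, float("inf")
    # Fix town 0 as the start so rotations of the same tour are not recounted.
    for perm in itertools.permutations(range(1, n)):
        order = (0,) + perm
        length = tour_length(order, pts)
        if length < best_len:
            best_order, best_len = order, length
    return best_order, best_len

# Four towns at the corners of a unit square: the optimal tour is the
# perimeter, of length 4.
pts = [(0, 0), (0, 1), (1, 1), (1, 0)]
order, length = brute_force_tsp(pts)
```

With (n-1)! permutations to check, the exponential cost the abstract complains about appears immediately, which is why branch-and-bound, ant colony and genetic methods are used instead for realistic instance sizes.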
Distinguishing deterministic and noise components in ELM time series
International Nuclear Information System (INIS)
Zvejnieks, G.; Kuzovkov, V.N
2004-01-01
Full text: One of the main problems in preliminary data analysis is distinguishing the deterministic and noise components in experimental signals. For example, in plasma physics the question arises when analyzing edge localized modes (ELMs): is the observed ELM behavior governed by complicated deterministic chaos or just by random processes? We have developed a methodology based on financial engineering principles that allows us to distinguish deterministic and noise components. We extended the linear autoregression (AR) method by including non-linearity (the NAR method). As a starting point we have chosen non-linearity in polynomial form; however, the NAR method can be extended to any other type of non-linear function. The best polynomial model describing the experimental ELM time series was selected using the Bayesian Information Criterion (BIC). With this method we have analyzed type I ELM behavior in a subset of ASDEX Upgrade shots. The obtained results indicate that a linear AR model can describe the ELM behavior. In turn, this means that type I ELM behavior is of a relaxation or random type
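A minimal sketch of fitting a linear AR(1) model by least squares and scoring it with BIC, on a synthetic series standing in for the (unavailable) ELM signal; the polynomial NAR extension is omitted, and all parameter values are illustrative.

```python
import math
import random

# Synthetic AR(1) series x_t = phi * x_{t-1} + noise, phi = 0.8.
rng = random.Random(1)
true_phi, n = 0.8, 2000
x = [0.0]
for _ in range(n - 1):
    x.append(true_phi * x[-1] + rng.gauss(0.0, 0.1))

# Least-squares AR(1) coefficient: phi = sum(x_t x_{t-1}) / sum(x_{t-1}^2).
num = sum(x[t] * x[t - 1] for t in range(1, n))
den = sum(x[t - 1] ** 2 for t in range(1, n))
phi_hat = num / den

# Residual sum of squares and BIC with k = 1 fitted parameter; competing
# models (higher AR orders, polynomial NAR terms) would be compared by
# picking the smallest BIC.
rss = sum((x[t] - phi_hat * x[t - 1]) ** 2 for t in range(1, n))
k = 1
bic = (n - 1) * math.log(rss / (n - 1)) + k * math.log(n - 1)
```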
Deterministic methods for sensitivity and uncertainty analysis in large-scale computer models
International Nuclear Information System (INIS)
Worley, B.A.; Oblow, E.M.; Pin, F.G.; Maerker, R.E.; Horwedel, J.E.; Wright, R.Q.; Lucius, J.L.
1987-01-01
The fields of sensitivity and uncertainty analysis are dominated by statistical techniques when large-scale modeling codes are being analyzed. This paper reports on the development and availability of two systems, GRESS and ADGEN, that make use of computer calculus compilers to automate the implementation of deterministic sensitivity analysis capability into existing computer models. This automation removes the traditional limitation of deterministic sensitivity methods. The paper describes a deterministic uncertainty analysis method (DUA) that uses derivative information as a basis to propagate parameter probability distributions to obtain result probability distributions. The paper demonstrates the deterministic approach to sensitivity and uncertainty analysis as applied to a sample problem that models the flow of water through a borehole. The sample problem is used as a basis to compare the cumulative distribution function of the flow rate as calculated by the standard statistical methods and the DUA method. The DUA method gives a more accurate result based upon only two model executions compared to fifty executions in the statistical case
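The derivative-based propagation idea of the DUA method can be sketched with finite-difference derivatives on a toy model; GRESS/ADGEN would supply exact derivatives via computer calculus, and the borehole flow model and parameter uncertainties are replaced here by invented stand-ins.

```python
import math

def model(a, b):
    # Toy stand-in for the borehole flow model.
    return a * b

def central_diff(f, args, i, h=1e-6):
    # Central finite difference with respect to the i-th argument.
    lo, hi = list(args), list(args)
    lo[i] -= h
    hi[i] += h
    return (f(*hi) - f(*lo)) / (2 * h)

args = (2.0, 3.0)
sigmas = (0.1, 0.2)  # assumed parameter standard deviations

# First-order propagation: var(y) ~= sum_i (df/dx_i)^2 * sigma_i^2.
var_y = sum(central_diff(model, args, i) ** 2 * s ** 2
            for i, s in enumerate(sigmas))
sigma_y = math.sqrt(var_y)  # analytically sqrt(9*0.01 + 4*0.04) = 0.5 here
```

Only a handful of model evaluations are needed for the derivatives, which mirrors the abstract's point about two model executions versus fifty in the statistical approach.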
Expansion or extinction: deterministic and stochastic two-patch models with Allee effects.
Kang, Yun; Lanchier, Nicolas
2011-06-01
We investigate the impact of the Allee effect and dispersal on the long-term evolution of a population in a patchy environment. Our main focus is on whether a population already established in one patch either successfully invades an adjacent empty patch or undergoes global extinction. Our study is based on a combination of analytical and numerical results for both a deterministic two-patch model and a stochastic counterpart. The deterministic model has either two, three or four attractors. A regime with exactly three attractors only appears when the patches have distinct Allee thresholds. In the presence of weak dispersal, the analysis of the deterministic model shows that a high-density and a low-density population can coexist at equilibrium in nearby patches, whereas the analysis of the stochastic model indicates that this equilibrium is metastable, thus leading, after a large random time, to either global expansion or global extinction. Up to some critical dispersal, increasing the intensity of the interactions leads to an increase of both the basin of attraction of global extinction and the basin of attraction of global expansion. Above this threshold, for both the deterministic and the stochastic model, the patches tend to synchronize as the intensity of the dispersal increases. This results in either global expansion or global extinction. For the deterministic model there are then only two attractors, while the stochastic model no longer exhibits metastable behavior. In the presence of strong dispersal, the limiting behavior is entirely determined by the value of the Allee thresholds, as the global population size in the deterministic and stochastic models evolves as dictated by their single-patch counterparts. For all values of the dispersal parameter, Allee effects promote global extinction in terms of an expansion of the basin of attraction of the extinction equilibrium for the deterministic model and an increase of the
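A toy version of deterministic two-patch dynamics with a strong Allee effect and symmetric dispersal can be integrated with Euler steps; the growth law and all parameter values (growth rate r, Allee threshold A, capacity K, dispersal d) are illustrative assumptions, not the paper's model.

```python
def growth(n, r=1.0, A=0.2, K=1.0):
    # Strong Allee effect: negative growth below threshold A,
    # positive growth between A and the carrying capacity K.
    return r * n * (n / A - 1.0) * (1.0 - n / K)

def simulate(n1, n2, d=0.05, dt=0.01, T=200.0):
    # Forward-Euler integration of the coupled two-patch system.
    for _ in range(int(T / dt)):
        f1 = growth(n1) + d * (n2 - n1)
        f2 = growth(n2) + d * (n1 - n2)
        n1, n2 = n1 + dt * f1, n2 + dt * f2
    return n1, n2

expand = simulate(0.5, 0.5)     # both patches start above the Allee threshold
extinct = simulate(0.05, 0.05)  # both patches start below the threshold
```

Starting above the threshold in both patches drives the system to the carrying capacity, while starting below it drives global extinction, matching the bistability described in the abstract.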
Probabilistic machine learning and artificial intelligence.
Ghahramani, Zoubin
2015-05-28
How can a machine learn from experience? Probabilistic modelling provides a framework for understanding what learning is, and has therefore emerged as one of the principal theoretical and practical approaches for designing machines that learn from data acquired through experience. The probabilistic framework, which describes how to represent and manipulate uncertainty about models and predictions, has a central role in scientific data analysis, machine learning, robotics, cognitive science and artificial intelligence. This Review provides an introduction to this framework, and discusses some of the state-of-the-art advances in the field, namely, probabilistic programming, Bayesian optimization, data compression and automatic model discovery.
Deterministic Graphical Games Revisited
DEFF Research Database (Denmark)
Andersson, Klas Olof Daniel; Hansen, Kristoffer Arnsfelt; Miltersen, Peter Bro
2012-01-01
Starting from Zermelo’s classical formal treatment of chess, we trace through history the analysis of two-player win/lose/draw games with perfect information and potentially infinite play. Such chess-like games have appeared in many different research communities, and methods for solving them, such as retrograde analysis, have been rediscovered independently. We then revisit Washburn’s deterministic graphical games (DGGs), a natural generalization of chess-like games to arbitrary zero-sum payoffs. We study the complexity of solving DGGs and obtain an almost-linear time comparison-based algorithm...
Some Considerations about Modern Database Machines
Directory of Open Access Journals (Sweden)
Manole VELICANU
2010-01-01
Full Text Available Optimizing the two computing resources of any computing system - time and space - has always been one of the priority objectives of any database. A current and effective solution in this respect is the database machine. Optimizing computer applications by means of database machines has been a steady preoccupation of researchers since the late seventies. Several information technologies have revolutionized the present information framework. Among these, those that have brought a major contribution to the optimization of databases are: efficient handling of large volumes of data (Data Warehouse, Data Mining, OLAP - On-Line Analytical Processing), the improvement of DBMS (Database Management System) facilities through the integration of new technologies, and the dramatic increase in computing power together with its efficient use (computer networks, massive parallel computing, Grid Computing and so on). All these information technologies, and others, have favored the resumption of research on database machines and the obtaining, in the last few years, of some very good practical results as far as the optimization of computing resources is concerned.
Keren, Baruch; Pliskin, Joseph S
2011-12-01
The optimal timing for performing radical medical procedures such as joint (e.g., hip) replacement must be seriously considered. In this paper we show that under deterministic assumptions the optimal timing for joint replacement is the solution of a mathematical programming problem, and under stochastic assumptions the optimal timing can be formulated as a stochastic programming problem. We formulate deterministic and stochastic models that can serve as decision support tools. The results show that the benefit from joint replacement surgery is heavily dependent on timing. Moreover, for a special case where the patient's remaining life is normally distributed along with a normally distributed survival of the new joint, the expected benefit function from surgery is completely solved. This enables practitioners to draw the expected benefit graph, to find the optimal timing, to evaluate the benefit for each patient, to set priorities among patients, and to decide if joint replacement should be performed and when.
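A deliberately simplified deterministic timing model, with invented numbers, illustrates how the optimal surgery time can be found as a mathematical programming problem (here, a grid search): quality of life is q_low before surgery and q_high for the prosthesis lifespan P, falling back to q_low if the patient outlives the joint. None of the values below come from the paper.

```python
def qalys(t, L=30.0, P=15.0, q_low=0.5, q_high=0.9):
    # Total quality-adjusted life years if surgery happens at time t,
    # with remaining life L and prosthesis lifespan P (toy model).
    if t >= L:
        return q_low * L            # surgery never happens in time
    benefit_years = min(P, L - t)   # years lived with the new joint
    leftover = max(0.0, L - t - benefit_years)
    return q_low * t + q_high * benefit_years + q_low * leftover

# Grid search over candidate surgery times.
times = [i * 0.5 for i in range(61)]
best_t = max(times, key=qalys)
best_val = qalys(best_t)
```

In this toy model any surgery time up to L - P is optimal (the prosthesis is fully used either way), and operating later than that loses benefit years, which captures the abstract's point that the benefit is heavily timing-dependent.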
Directory of Open Access Journals (Sweden)
Jie Yu
2015-01-01
Full Text Available A virtual power plant (VPP) is an aggregation of multiple distributed generations, energy storage, and controllable loads. Affected by natural conditions, the uncontrollable distributed generations within a VPP, such as wind and photovoltaic generation, are highly random and correlated. Considering this randomness and correlation, this paper constructs a chance-constrained stochastic optimal dispatch model of a VPP that includes stochastic variables and their random correlation. The probability distributions of independent wind and photovoltaic generation are described by empirical distribution functions, and their joint probability density model is established by a Frank copula function. Then, sample average approximation (SAA) is applied to convert the chance-constrained stochastic optimization model into a deterministic optimization model. Simulation cases are calculated in AIMMS. The simulation results of this paper's mathematical model are compared with the results of a deterministic optimization model without stochastic variables and of a stochastic optimization considering stochastic variables but not their random correlation. Furthermore, this paper analyzes how the SAA sampling frequency and the confidence level influence the results of the stochastic optimization. The numerical example results show the effectiveness of the stochastic optimal dispatch of a VPP considering the randomness and correlations of distributed generations.
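The SAA step, which converts a chance constraint into a deterministic one by requiring a fraction of sampled scenarios to be satisfied, can be sketched on a one-dimensional toy problem; the normal demand model and all numbers are invented stand-ins, and the Frank-copula correlation modeling is not reproduced.

```python
import random

# Toy chance-constrained problem: choose the smallest capacity x with
# P(demand <= x) >= 1 - alpha. SAA replaces the probability with the
# empirical fraction over sampled scenarios.
rng = random.Random(42)
alpha = 0.05
samples = sorted(rng.gauss(100.0, 10.0) for _ in range(5000))

# Deterministic equivalent: at least (1 - alpha) of the sampled
# scenarios must satisfy demand <= x, so x is the empirical
# (1 - alpha)-quantile of the sampled demands.
k = int((1.0 - alpha) * len(samples))
x_saa = samples[k]
```

The true 95th percentile of this demand distribution is about 116.4; increasing the sample count tightens the SAA solution around it, which is the sampling-frequency effect the abstract analyzes.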
Machine learning for evolution strategies
Kramer, Oliver
2016-01-01
This book introduces numerous algorithmic hybridizations between both worlds that show how machine learning can improve and support evolution strategies. The set of methods comprises covariance matrix estimation, meta-modeling of fitness and constraint functions, dimensionality reduction for search and visualization of high-dimensional optimization processes, and clustering-based niching. After giving an introduction to evolution strategies and machine learning, the book builds the bridge between both worlds with an algorithmic and experimental perspective. Experiments mostly employ a (1+1)-ES and are implemented in Python using the machine learning library scikit-learn. The examples are conducted on typical benchmark problems illustrating algorithmic concepts and their experimental behavior. The book closes with a discussion of related lines of research.
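The (1+1)-ES that the book's experiments mostly employ can be sketched with the classical 1/5th success rule for step-size adaptation; the parameter values here are illustrative, and none of the book's machine learning hybridizations are included in this bare-bones version.

```python
import random

def sphere(x):
    return sum(v * v for v in x)

def one_plus_one_es(f, dim=2, iters=500, seed=3):
    rng = random.Random(seed)
    x = [rng.uniform(-5, 5) for _ in range(dim)]
    fx, sigma = f(x), 1.0
    for _ in range(iters):
        # One parent, one Gaussian-mutated child per generation.
        child = [v + sigma * rng.gauss(0, 1) for v in x]
        fc = f(child)
        if fc <= fx:                  # success: accept the child
            x, fx = child, fc
            sigma *= 1.5              # and widen the search
        else:
            sigma *= 1.5 ** -0.25     # shrink so that a 1-in-5 success
                                      # rate keeps sigma constant
    return x, fx

x_best, f_best = one_plus_one_es(sphere)
```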
A new accurate curvature matching and optimal tool based five-axis machining algorithm
International Nuclear Information System (INIS)
Lin, Than; Lee, Jae Woo; Bohez, Erik L. J.
2009-01-01
Free-form surfaces are widely used in CAD systems to describe the part surface. Today, the most advanced machining of free-form surfaces is done in five-axis machining using a flat end mill cutter. However, five-axis machining requires complex algorithms for gouging avoidance and collision detection, and powerful computer-aided manufacturing (CAM) systems to support various operations. An accurate and efficient method is proposed for five-axis CNC machining of free-form surfaces. The proposed algorithm selects the best tool and plans the tool path autonomously using curvature matching and the integrated inverse kinematics of the machine tool. The new algorithm uses the real cutter contact tool path generated by the inverse kinematics and not the linearized piecewise real cutter location tool path
Deterministic and efficient quantum cryptography based on Bell's theorem
International Nuclear Information System (INIS)
Chen Zengbing; Pan Jianwei; Zhang Qiang; Bao Xiaohui; Schmiedmayer, Joerg
2006-01-01
We propose a double-entanglement-based quantum cryptography protocol that is both efficient and deterministic. The proposal uses photon pairs with entanglement both in polarization and in time degrees of freedom; each measurement in which both of the two communicating parties register a photon can establish one and only one perfect correlation, and thus deterministically create a key bit. Eavesdropping can be detected by a violation of local realism. A variation of the protocol shows higher security, similar to the six-state protocol, under individual attacks. Our scheme allows a robust implementation under current technology
Energy Technology Data Exchange (ETDEWEB)
Charbonnier, D.
2004-12-15
The physical phenomena observed in turbomachines are generally three-dimensional and unsteady. A recent study revealed that a three-dimensional steady simulation can reproduce the time-averaged unsteady phenomena, since the steady flow field equations integrate deterministic stresses. The objective of this work is thus to develop an unsteady deterministic stress model. The analogy with turbulence makes it possible to write transport equations for these stresses. The equations are implemented in a steady flow solver, and a model for the deterministic energy fluxes is also developed and implemented. Finally, this work shows that a three-dimensional steady simulation that takes unsteady effects into account through transport equations for the deterministic stresses increases the computing time by only approximately 30%, which remains very attractive compared to an unsteady simulation. (author)
Quantum cloning machines and the applications
Energy Technology Data Exchange (ETDEWEB)
Fan, Heng, E-mail: hfan@iphy.ac.cn [Beijing National Laboratory for Condensed Matter Physics, Institute of Physics, Chinese Academy of Sciences, Beijing 100190 (China); Collaborative Innovation Center of Quantum Matter, Beijing 100190 (China); Wang, Yi-Nan; Jing, Li [School of Physics, Peking University, Beijing 100871 (China); Yue, Jie-Dong [Beijing National Laboratory for Condensed Matter Physics, Institute of Physics, Chinese Academy of Sciences, Beijing 100190 (China); Shi, Han-Duo; Zhang, Yong-Liang; Mu, Liang-Zhu [School of Physics, Peking University, Beijing 100871 (China)
2014-11-20
No-cloning theorem is fundamental for quantum mechanics and for quantum information science that states an unknown quantum state cannot be cloned perfectly. However, we can try to clone a quantum state approximately with the optimal fidelity, or instead, we can try to clone it perfectly with the largest probability. Thus various quantum cloning machines have been designed for different quantum information protocols. Specifically, quantum cloning machines can be designed to analyze the security of quantum key distribution protocols such as BB84 protocol, six-state protocol, B92 protocol and their generalizations. Some well-known quantum cloning machines include universal quantum cloning machine, phase-covariant cloning machine, the asymmetric quantum cloning machine and the probabilistic quantum cloning machine. In the past years, much progress has been made in studying quantum cloning machines and their applications and implementations, both theoretically and experimentally. In this review, we will give a complete description of those important developments about quantum cloning and some related topics. On the other hand, this review is self-consistent, and in particular, we try to present some detailed formulations so that further study can be taken based on those results.
Quantum cloning machines and the applications
International Nuclear Information System (INIS)
Fan, Heng; Wang, Yi-Nan; Jing, Li; Yue, Jie-Dong; Shi, Han-Duo; Zhang, Yong-Liang; Mu, Liang-Zhu
2014-01-01
No-cloning theorem is fundamental for quantum mechanics and for quantum information science that states an unknown quantum state cannot be cloned perfectly. However, we can try to clone a quantum state approximately with the optimal fidelity, or instead, we can try to clone it perfectly with the largest probability. Thus various quantum cloning machines have been designed for different quantum information protocols. Specifically, quantum cloning machines can be designed to analyze the security of quantum key distribution protocols such as BB84 protocol, six-state protocol, B92 protocol and their generalizations. Some well-known quantum cloning machines include universal quantum cloning machine, phase-covariant cloning machine, the asymmetric quantum cloning machine and the probabilistic quantum cloning machine. In the past years, much progress has been made in studying quantum cloning machines and their applications and implementations, both theoretically and experimentally. In this review, we will give a complete description of those important developments about quantum cloning and some related topics. On the other hand, this review is self-consistent, and in particular, we try to present some detailed formulations so that further study can be taken based on those results
Sun, Rongyan; Yang, Xu; Ohkubo, Yuji; Endo, Katsuyoshi; Yamamura, Kazuya
2018-02-05
In recent years, reaction-sintered silicon carbide (RS-SiC) has been of interest in many engineering fields because of its excellent properties, such as its light weight, high rigidity, high heat conductance and low coefficient of thermal expansion. However, RS-SiC is difficult to machine owing to its high hardness and chemical inertness and because it contains multiple components. To overcome the problem of the poor machinability of RS-SiC in conventional machining, the application of atmospheric-pressure plasma chemical vaporization machining (AP-PCVM) to RS-SiC was proposed. As a highly efficient and damage-free figuring technique, AP-PCVM has been widely applied for the figuring of single-component materials, such as Si, SiC, quartz crystal wafers, and so forth. However, it has not been applied to RS-SiC since it is composed of multiple components. In this study, we investigated the AP-PCVM etching characteristics for RS-SiC by optimizing the gas composition. It was found that the different etching rates of the different components led to a large surface roughness. A smooth surface was obtained by applying the optimum gas composition, for which the etching rate of the Si component was equal to that of the SiC component.
Effects of optimism on gambling in the rat slot machine task.
Rafa, Dominik; Kregiel, Jakub; Popik, Piotr; Rygula, Rafal
2016-03-01
Although gambling disorder is a serious social problem in modern societies, information about the behavioral traits that could determine vulnerability to this psychopathology is still scarce. In this study, we used a recently developed ambiguous-cue interpretation (ACI) paradigm to investigate whether 'optimism' and 'pessimism' as behavioral traits may determine the gambling-like behavior of rodents. In a series of ACI tests (cognitive bias screening), we identified rats that displayed 'pessimistic' and 'optimistic' traits. Subsequently, using the rat slot machine task (rSMT), we investigated if the 'optimistic'/'pessimistic' traits could determine the crucial feature of gambling-like behavior that has been investigated in rats and humans: the interpretation of 'near-miss' outcomes as a positive (i.e., win) situation. We found that 'optimists' did not interpret 'near-miss', 'near loss', or 'clear win' as win trials more often than their 'pessimistic' conspecifics; however, the 'optimists' were statistically more likely to reach for a reward in the hopeless 'clear loss' situation. This agrees with human studies and provides a platform for modeling interactions between behavioral traits and gambling in animals. Copyright © 2015 Elsevier B.V. All rights reserved.
Stability analysis of multi-group deterministic and stochastic epidemic models with vaccination rate
International Nuclear Information System (INIS)
Wang Zhi-Gang; Gao Rui-Mei; Fan Xiao-Ming; Han Qi-Xing
2014-01-01
We discuss in this paper a deterministic multi-group MSIR epidemic model with a vaccination rate. The basic reproduction number ℛ 0 , a key parameter in epidemiology, is a threshold that determines the persistence or extinction of the disease. Using Lyapunov function techniques, we show that if ℛ 0 is greater than 1 and the deterministic model obeys some conditions, then the disease will prevail: the infection persists and the endemic state is asymptotically stable in a feasible region. If ℛ 0 is less than or equal to 1, then the infection disappears and the disease dies out. In addition, stochastic noise around the endemic equilibrium is added to the deterministic MSIR model, extending the deterministic model to a system of stochastic ordinary differential equations. In the stochastic version, we carry out a detailed analysis of the asymptotic behavior of the stochastic model. Regarding the value of ℛ 0 , when the stochastic system obeys some conditions and ℛ 0 is greater than 1, we deduce that the stochastic system is stochastically asymptotically stable. Finally, the deterministic and stochastic model dynamics are illustrated through computer simulations. (general)
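The ℛ0 threshold behavior can be illustrated on a single-group SIR model with a vaccinated fraction p, a simplification of the paper's multi-group MSIR model; all parameter values are assumptions made for illustration.

```python
def peak_infected(beta=0.5, gamma=0.25, p=0.0, dt=0.05, T=300.0):
    # Forward-Euler SIR with a fraction p vaccinated (immune) at t = 0.
    i = 1e-3
    s = 1.0 - p - i
    peak = i
    for _ in range(int(T / dt)):
        new_inf = beta * s * i
        s -= dt * new_inf
        i += dt * (new_inf - gamma * i)
        peak = max(peak, i)
    return peak

# Effective reproduction number is roughly (beta / gamma) * (1 - p).
r0_above = peak_infected(p=0.0)   # ~2.0 > 1: an outbreak occurs
r0_below = peak_infected(p=0.6)   # ~0.8 < 1: infection only declines
```

Above the threshold the infected fraction rises well past its initial value before burning out; below it the epidemic never grows, mirroring the persistence/extinction dichotomy in the abstract.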
Research on intrusion detection based on Kohonen network and support vector machine
Shuai, Chunyan; Yang, Hengcheng; Gong, Zeweiyi
2018-05-01
Support vector machines (SVMs) applied directly to network intrusion detection systems suffer from low detection accuracy and long detection times. Optimizing the SVM parameters can greatly improve the detection accuracy, but the long detection time makes direct application to high-speed networks impractical. A method based on Kohonen neural network feature selection is therefore proposed to reduce the parameter-optimization time of the support vector machine. First, the weights of the KDD99 network intrusion data are calculated with a Kohonen network and features are selected by weight. Then, after feature selection is completed, a genetic algorithm (GA) and grid search are used for parameter optimization to find appropriate parameters, and the data are classified by support vector machines. Comparative experiments show that feature selection reduces the parameter-optimization time while having little influence on classification accuracy. The experiments suggest that the support vector machine can be used in a network intrusion detection system and reduce the miss rate.
Towards deterministic optical quantum computation with coherently driven atomic ensembles
International Nuclear Information System (INIS)
Petrosyan, David
2005-01-01
Scalable and efficient quantum computation with photonic qubits requires (i) deterministic sources of single photons, (ii) giant nonlinearities capable of entangling pairs of photons, and (iii) reliable single-photon detectors. In addition, an optical quantum computer would need a robust reversible photon storage device. Here we discuss several related techniques, based on the coherent manipulation of atomic ensembles in the regime of electromagnetically induced transparency, that are capable of implementing all of the above prerequisites for deterministic optical quantum computation with single photons
Recent achievements of the neo-deterministic seismic hazard assessment in the CEI region
International Nuclear Information System (INIS)
Panza, G.F.; Vaccari, F.; Kouteva, M.
2008-03-01
A review of the recent achievements of the innovative neo-deterministic approach for seismic hazard assessment through realistic earthquake scenarios has been performed. The procedure provides strong ground motion parameters for the purpose of earthquake engineering, based on deterministic seismic wave propagation modelling at different scales - regional, national and metropolitan. The main advantage of this neo-deterministic procedure is the simultaneous treatment of the contributions of the earthquake source and the seismic wave propagation media to the strong motion at the target site/region, as required by basic physical principles. The neo-deterministic seismic microzonation procedure has been successfully applied to numerous metropolitan areas all over the world in the framework of several international projects. In this study some examples focused on the CEI region, concerning both regional seismic hazard assessment and seismic microzonation of the selected metropolitan areas, are shown. (author)
International Nuclear Information System (INIS)
Farbrot, J.E.; Nihlwing, Ch.; Svengren, H.
2005-01-01
New requirements for enhanced safety and design changes in process systems often lead to a step-wise installation of new information and control equipment in the control rooms of older nuclear power plants, where modern digital I and C solutions with screen-based human-machine interfaces (HMI) are nowadays most often introduced. Human factors (HF) expertise is then required to assist in specifying a unified, integrated HMI, where the entire integration of information is addressed to ensure an optimal and effective interplay between human (operators) and machine (process). Following a controlled design process is the best insurance for ending up with good solutions. This paper addresses the approach taken when introducing modern human-machine communication in the Oskarshamn 1 NPP, the results, and the lessons learned from this work with high operator involvement, seen from an HF point of view. Examples of the possibilities modern technology might offer operators are also addressed. (orig.)
Optimal Finger Search Trees in the Pointer Machine
DEFF Research Database (Denmark)
Brodal, Gerth Stølting; Lagogiannis, George; Makris, Christos
2003-01-01
We develop a new finger search tree with worst-case constant update time in the Pointer Machine (PM) model of computation. This was a major problem in the field of Data Structures and was tantalizingly open for over twenty years while many attempts by researchers were made to solve it. The result...
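The finger-search cost profile, O(log d) for a key at distance d from the finger, can be illustrated on a sorted array with galloping (exponential) search outward from the finger; this is a simplification for exposition, not the paper's pointer-machine tree structure.

```python
import bisect

def finger_search(arr, finger, key):
    # Gallop outward from the finger to bracket the key, then binary
    # search the bracketed range; total cost is O(log d) where d is the
    # distance between the finger and the key's position.
    if arr[finger] == key:
        return finger
    step = 1
    if key > arr[finger]:
        lo = finger
        while finger + step < len(arr) and arr[finger + step] < key:
            lo = finger + step
            step *= 2
        hi = min(finger + step, len(arr) - 1)
    else:
        hi = finger
        while finger - step >= 0 and arr[finger - step] > key:
            hi = finger - step
            step *= 2
        lo = max(finger - step, 0)
    i = bisect.bisect_left(arr, key, lo, hi + 1)
    return i if i < len(arr) and arr[i] == key else -1

arr = list(range(0, 200, 2))          # sorted keys 0, 2, ..., 198
idx = finger_search(arr, finger=50, key=130)
```

The hard part of the actual result is not the search but supporting worst-case constant-time updates alongside it, which arrays cannot do; that is the combination the paper achieves in the pointer machine model.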
An Effective Mechanism for Virtual Machine Placement using Aco in IAAS Cloud
Shenbaga Moorthy, Rajalakshmi; Fareentaj, U.; Divya, T. K.
2017-08-01
Cloud computing provides an effective way to dynamically provision numerous resources to meet customer demands. A major challenge for cloud providers is designing efficient mechanisms for optimal virtual machine placement (OVMP). Such mechanisms enable cloud providers to effectively utilize their available resources and obtain higher profits. In order to provide appropriate resources to clients, an optimal virtual machine placement algorithm is proposed. Virtual machine placement is an NP-hard problem, which can be addressed with heuristic algorithms. In this paper, Ant Colony Optimization (ACO) based virtual machine placement is proposed. Our proposed system focuses on minimizing the cost of each plan for hosting virtual machines in a multiple-cloud-provider environment; the response time of each cloud provider is monitored periodically so as to minimize the delay in providing resources to users. The performance of the proposed algorithm is compared with a greedy mechanism. The proposed algorithm is simulated in the Eclipse IDE. The results clearly show that the proposed algorithm minimizes the cost, the response time and also the number of migrations.
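A first-fit-decreasing greedy baseline, of the kind the abstract compares its ACO mechanism against, can be sketched as bin packing of VM sizes onto hosts; the sizes and host capacity are invented, and the paper's cost and response-time objective terms are not modeled.

```python
def ffd_placement(vm_sizes, capacity):
    # First-fit decreasing: place the largest VMs first, each on the
    # first host with enough remaining capacity.
    hosts = []  # remaining capacity per active host
    for size in sorted(vm_sizes, reverse=True):
        for j, free in enumerate(hosts):
            if size <= free:
                hosts[j] -= size
                break
        else:
            hosts.append(capacity - size)  # open a new host
    return len(hosts)

n_hosts = ffd_placement([5, 4, 3, 3, 2, 2, 1], capacity=10)
```

An ACO approach would instead construct placements probabilistically, biased by pheromone trails reinforced on low-cost solutions, and can escape the locally greedy choices this baseline commits to.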
Statistical physics of hard optimization problems
International Nuclear Information System (INIS)
Zdeborova, L.
2009-01-01
Optimization is fundamental in many areas of science, from computer science and information theory to engineering and statistical physics, as well as to biology and the social sciences. It typically involves a large number of variables and a cost function depending on these variables. Optimization problems in the NP-complete class are particularly difficult: it is believed that, in the most difficult cases, the number of operations required to minimize the cost function grows exponentially with the system size. However, even for an NP-complete problem the practically arising instances may, in fact, be easy to solve. The principal question we address in this article is: how can one recognize whether an NP-complete constraint satisfaction problem is typically hard, and what are the main reasons for this? We adopt approaches from the statistical physics of disordered systems, in particular the cavity method, developed originally to describe glassy systems. We describe new properties of the space of solutions in two of the most studied constraint satisfaction problems - random satisfiability and random graph coloring. We suggest a relation between the existence of so-called frozen variables and the algorithmic hardness of a problem. Based on these insights, we introduce a new class of problems, which we name 'locked' constraint satisfaction, where the statistical description is easily solvable, but from the algorithmic point of view they are even more challenging than canonical satisfiability. (Authors)
Deterministic algorithms for multi-criteria Max-TSP
Manthey, Bodo
2012-01-01
We present deterministic approximation algorithms for the multi-criteria maximum traveling salesman problem (Max-TSP). Our algorithms are faster and simpler than the existing randomized algorithms. We devise algorithms for the symmetric and asymmetric multi-criteria Max-TSP that achieve ratios of
Numerical Approach to Spatial Deterministic-Stochastic Models Arising in Cell Biology.
Schaff, James C; Gao, Fei; Li, Ye; Novak, Igor L; Slepchenko, Boris M
2016-12-01
Hybrid deterministic-stochastic methods provide an efficient alternative to a fully stochastic treatment of models which include components with disparate levels of stochasticity. However, general-purpose hybrid solvers for spatially resolved simulations of reaction-diffusion systems are not widely available. Here we describe fundamentals of a general-purpose spatial hybrid method. The method generates realizations of a spatially inhomogeneous hybrid system by appropriately integrating capabilities of a deterministic partial differential equation solver with a popular particle-based stochastic simulator, Smoldyn. Rigorous validation of the algorithm is detailed, using a simple model of calcium 'sparks' as a testbed. The solver is then applied to a deterministic-stochastic model of spontaneous emergence of cell polarity. The approach is general enough to be implemented within biologist-friendly software frameworks such as Virtual Cell.
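The coupling idea, a deterministic solver for the smooth part and a stochastic simulator for the discrete part, can be illustrated in spirit by an operator-split time step. This is a zero-dimensional toy under stated assumptions (the actual method couples a spatial PDE solver with the particle simulator Smoldyn): degradation is treated deterministically and production as Poisson-distributed events.

```python
import numpy as np

def hybrid_step(x, dt, gamma, k, rng):
    """One operator-split step: deterministic degradation (exact ODE solve),
    then a stochastic production burst drawn from a Poisson distribution.
    Toy stand-in for PDE-solver + particle-simulator coupling."""
    x = x * np.exp(-gamma * dt)          # deterministic decay channel
    x = x + rng.poisson(k * dt)          # stochastic birth events
    return x

def simulate(x0, T, dt, gamma, k, seed=0):
    rng = np.random.default_rng(seed)
    x, t = float(x0), 0.0
    while t < T - 1e-12:
        x = hybrid_step(x, dt, gamma, k, rng)
        t += dt
    return x
```

A convenient sanity check, analogous to the rigorous validation the paper describes, is that with the stochastic channel switched off (k = 0) the scheme reduces to exact exponential decay.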
Cyclic machine scheduling with tool transportation - additional calculations
Kuijpers, C.M.H.
2001-01-01
In the PhD Thesis of Kuijpers a cyclic machine scheduling problem with tool transportation is considered. For the problem with two machines, it is shown that there always exists an optimal schedule with a certain structure. This is done by means of an elaborate case study. For a number of cases some
Deterministic Properties of Serially Connected Distributed Lag Models
Directory of Open Access Journals (Sweden)
Piotr Nowak
2013-01-01
Full Text Available Distributed lag models are an important tool in modeling dynamic systems in economics. In the analysis of composite forms of such models, the component models are ordered in parallel (with the same independent variable and/or in series (where the independent variable is also the dependent variable in the preceding model. This paper presents an analysis of certain deterministic properties of composite distributed lag models composed of component distributed lag models arranged in sequence, and their asymptotic properties in particular. The models considered are in discrete form. Even though the paper focuses on deterministic properties of distributed lag models, the derivations are based on analytical tools commonly used in probability theory such as probability distributions and the central limit theorem. (original abstract
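The serial-composition property discussed above has a compact discrete-time interpretation: the lag-weight sequence of two models in series is the convolution of the component sequences, which is also why CLT-style asymptotics arise as more components are chained. A minimal sketch with hypothetical weights (the paper's own notation is not reproduced here):

```python
def compose(a, b):
    """Lag-weight sequence of two distributed lag models in series =
    discrete convolution of their individual lag-weight sequences."""
    out = [0.0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] += ai * bj
    return out

def mean_lag(w):
    """Mean lag of a (normalized) lag-weight sequence."""
    return sum(i * wi for i, wi in enumerate(w)) / sum(w)
```

Note that mean lags add under composition, just as means add under convolution of probability distributions, which is the analytical bridge to the probabilistic tools the paper uses.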
Kumar, R.; Sulaiman, E.; Soomro, H. A.; Jusoh, L. I.; Bahrim, F. S.; Omar, M. F.
2017-08-01
With recent changes in technology and the use of high-temperature magnets, the permanent magnet flux switching machine (PMFSM) has become a suitable candidate for offshore drilling, but it is less suited to downhole use because of the high ambient temperature. This extensive review therefore addresses the design enhancement and performance analysis of an external-rotor PMFSM for the downhole application. First, the essential design parameters required for the machine configuration are computed numerically. The design enhancement strategy is then carried out through a deterministic technique. Finally, the preliminary and refined performance of the machine is compared; as a consequence, the output torque is raised from 16.39 Nm to 33.57 Nm while the cogging torque and PM weight are reduced to 1.77 Nm and 0.79 kg, respectively. It is therefore concluded that the proposed enhanced 12-slot/22-pole design with an external rotor is well suited to the downhole application.
Cilla, Myriam; Borgiani, Edoardo; Martínez, Javier; Duda, Georg N; Checa, Sara
2017-01-01
Today, different implant designs exist in the market; however, there is no clear understanding of which implant design parameters best achieve optimal mechanical conditions. Therefore, the aim of this project was to investigate whether the geometry of a commercial short stem hip prosthesis can be further optimized to reduce stress shielding effects and achieve better short-stemmed implant performance. To reach this aim, the potential of machine learning techniques combined with parametric Finite Element analysis was used. The selected implant geometrical parameters were: total stem length (L), thickness in the lateral (R1) and medial (R2) aspects, and the distance between the implant neck and the central stem surface (D). The results show that the total stem length was not the only parameter playing a role in stress shielding. An optimized implant should aim for a decreased stem length and a reduced length of the surface in contact with the bone. The two radii that characterize the stem width at the distal cross-section in contact with the bone were less influential in the reduction of stress shielding than the other two parameters, but they also play a role where thinner stems present better results.
Deterministic nanoparticle assemblies: from substrate to solution
International Nuclear Information System (INIS)
Barcelo, Steven J; Gibson, Gary A; Yamakawa, Mineo; Li, Zhiyong; Kim, Ansoon; Norris, Kate J
2014-01-01
The deterministic assembly of metallic nanoparticles is an exciting field with many potential benefits. Many promising techniques have been developed, but challenges remain, particularly for the assembly of larger nanoparticles which often have more interesting plasmonic properties. Here we present a scalable process combining the strengths of top down and bottom up fabrication to generate deterministic 2D assemblies of metallic nanoparticles and demonstrate their stable transfer to solution. Scanning electron and high-resolution transmission electron microscopy studies of these assemblies suggested the formation of nanobridges between touching nanoparticles that hold them together so as to maintain the integrity of the assembly throughout the transfer process. The application of these nanoparticle assemblies as solution-based surface-enhanced Raman scattering (SERS) materials is demonstrated by trapping analyte molecules in the nanoparticle gaps during assembly, yielding uniformly high enhancement factors at all stages of the fabrication process. (paper)
Non-convex multi-objective optimization
Pardalos, Panos M; Žilinskas, Julius
2017-01-01
Recent results on non-convex multi-objective optimization problems and methods are presented in this book, with particular attention to expensive black-box objective functions. Multi-objective optimization methods facilitate designers, engineers, and researchers to make decisions on appropriate trade-offs between various conflicting goals. A variety of deterministic and stochastic multi-objective optimization methods are developed in this book. Beginning with basic concepts and a review of non-convex single-objective optimization problems; this book moves on to cover multi-objective branch and bound algorithms, worst-case optimal algorithms (for Lipschitz functions and bi-objective problems), statistical models based algorithms, and probabilistic branch and bound approach. Detailed descriptions of new algorithms for non-convex multi-objective optimization, their theoretical substantiation, and examples for practical applications to the cell formation problem in manufacturing engineering, the process design in...
Comparison of optimization of loading patterns on the basis of SA and PMA algorithms
International Nuclear Information System (INIS)
Beliczai, Botond
2007-01-01
Optimization of loading patterns is a very important task from economical point of view in a nuclear power plant. The optimization algorithms used for this purpose can be categorized basically into two categories: deterministic ones and stochastic ones. In the Paks nuclear power plant a deterministic optimization procedure is used to optimize the loading pattern at BOC, so that the core would have maximal reactivity reserve. To the group of stochastic optimization procedures belong mainly simulated annealing (SA) procedures and genetic algorithms (GA). There are new procedures as well, which try to combine the advantages of SAs and GAs. One of them is called population mutation annealing algorithm (PMA). In the Paks NPP we would like to introduce fuel assemblies including burnable poison (Gd) in the near future. In order to be able to find the optimal loading pattern (or near-optimal loading patterns) in that case, we have to optimize our core not only for objective functions defined at BOC, but at EOC as well. For this purpose I used stochastic algorithms (SA and PMA) to investigate loading pattern optimization results for different objective functions at BOC. (author)
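As a rough illustration of how SA explores loading patterns, here is a generic annealer over permutations with swap moves. Everything here is illustrative rather than the plant's actual model: the toy objective simply pairs position weights with assembly "reactivities", standing in for a real BOC/EOC objective function evaluated by a core simulator.

```python
import math, random

def anneal(weights, reactivities, iters=5000, t0=1.0, t_end=1e-3, seed=0):
    """Simulated annealing over permutations (toy loading-pattern search).
    Minimizes sum(weights[i] * reactivities[perm[i]]) via swap moves."""
    rng = random.Random(seed)
    n = len(weights)
    perm = list(range(n))                      # current loading pattern
    cost = sum(w * reactivities[p] for w, p in zip(weights, perm))
    best, best_cost = perm[:], cost
    for k in range(iters):
        t = t0 * (t_end / t0) ** (k / iters)   # geometric cooling schedule
        i, j = rng.sample(range(n), 2)         # propose a swap move
        perm[i], perm[j] = perm[j], perm[i]
        new = sum(w * reactivities[p] for w, p in zip(weights, perm))
        # accept downhill moves always, uphill moves with Boltzmann probability
        if new < cost or rng.random() < math.exp((cost - new) / t):
            cost = new
            if new < best_cost:
                best, best_cost = perm[:], new
        else:
            perm[i], perm[j] = perm[j], perm[i]  # reject: undo the swap
    return best, best_cost
```

In a real loading-pattern code the objective evaluation is the expensive core-physics calculation, which is why algorithms such as PMA try to reduce the number of evaluations needed.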
Andrei, Alexandru; Welkenhuysen, Marleen; Ameye, Lieveke; Nuttin, Bart; Eberle, Wolfgang
2011-01-01
Understanding the mechanical interactions between implants and the surrounding tissue is known to play an important role in improving the bio-compatibility of such devices. Using a recently developed model, a particular micro-machined neural implant design, aimed at reducing the dependence of insertion forces on insertion speed, was optimized. Implantations at 10 and 100 μm/s insertion speeds showed excellent agreement with the predicted behavior. Lesion size, gliosis (GFAP), inflammation (ED1) and neuronal cell density (NeuN) were evaluated after 6 weeks of chronic implantation, showing no insertion speed dependence.
Directory of Open Access Journals (Sweden)
Bao Wang
2012-11-01
Full Text Available The accuracy of annual electric load forecasting plays an important role in the economic and social benefits of electric power systems. The least squares support vector machine (LSSVM has been proven to offer strong potential in forecasting issues, particularly by employing an appropriate meta-heuristic algorithm to determine the values of its two parameters. However, these meta-heuristic algorithms have the drawbacks of being hard to understand and reaching the global optimal solution slowly. As a novel meta-heuristic and evolutionary algorithm, the fruit fly optimization algorithm (FOA has the advantages of being easy to understand and fast convergence to the global optimal solution. Therefore, to improve the forecasting performance, this paper proposes a LSSVM-based annual electric load forecasting model that uses FOA to automatically determine the appropriate values of the two parameters for the LSSVM model. By taking the annual electricity consumption of China as an instance, the computational result shows that the LSSVM combined with FOA (LSSVM-FOA outperforms other alternative methods, namely single LSSVM, LSSVM combined with coupled simulated annealing algorithm (LSSVM-CSA, generalized regression neural network (GRNN and regression model.
Dynamic stochastic optimization
Ermoliev, Yuri; Pflug, Georg
2004-01-01
Uncertainties and changes are pervasive characteristics of modern systems involving interactions between humans, economics, nature and technology. These systems are often too complex to allow for precise evaluations and, as a result, the lack of proper management (control) may create significant risks. In order to develop robust strategies we need approaches which explicitly deal with uncertainties, risks and changing conditions. One rather general approach is to characterize (explicitly or implicitly) uncertainties by objective or subjective probabilities (measures of confidence or belief). This leads us to stochastic optimization problems which can rarely be solved by using the standard deterministic optimization and optimal control methods. In stochastic optimization the accent is on problems with a large number of decision and random variables, and consequently the focus of attention is directed to efficient solution procedures rather than to (analytical) closed-form solutions. Objective an...
Stochastic optimized life cycle models for risk mitigation in power system applications
International Nuclear Information System (INIS)
Sageder, A.
1998-01-01
This work shows the relevance of stochastic optimization in complex power system applications. It was proven that the usual deterministic mean value models not only predict inaccurate results but are also most often on the risky side. The change in the market affects all kinds of evaluation processes (e.g. fuel type and technology, but especially financial engineering evaluations) in the endeavor of a strict risk mitigation comparison. Not only IPPs but also traditional utilities dash for risk/return optimized investment opportunities. In this study I developed a 2-phase model which can support a decision-maker in finding optimal solutions on investment and profitability. It has to be stated that in this study no objective function is optimized algorithmically. On the one hand, focus is laid on finding optimal solutions out of different choices (highest return at lowest possible risk); on the other hand, the endeavor was to provide decision-makers with a better assessment of the likelihood of outcomes of investment considerations. The first (deterministic) phase computes in a Total Cost of Ownership (TCO) approach (life cycle calculation; DCF method). Most of the causal relations (day of operation, escalation of personnel expenses, inflation, depreciation period, etc.) are defined within this phase. The second (stochastic) phase is a totally new way of optimizing risk/return relations. Using decision-theory mathematics, an expected value of stochastic solutions can be calculated. Furthermore, probability functions have to be defined from historical data. The model not only supports profitability analysis (including regression and sensitivity analysis) but also supports a decision-maker in a decision process. Emphasis was laid on risk-return analysis, which can give the decision-maker first-hand information on the type of risk return problem (risk concave, averse or linear). Five important parameters were chosen which have the characteristics of typical
Le, Laetitia Minh Maï; Kégl, Balázs; Gramfort, Alexandre; Marini, Camille; Nguyen, David; Cherti, Mehdi; Tfaili, Sana; Tfayli, Ali; Baillet-Guffroy, Arlette; Prognon, Patrice; Chaminade, Pierre; Caudron, Eric
2018-07-01
The use of monoclonal antibodies (mAbs) constitutes one of the most important strategies to treat patients suffering from cancers such as hematological malignancies and solid tumors. These antibodies are prescribed by the physician and prepared by hospital pharmacists. An analytical control enables the quality of the preparations to be ensured. The aim of this study was to explore the development of a rapid analytical method for quality control. The method used four mAbs (Infliximab, Bevacizumab, Rituximab and Ramucirumab) at various concentrations and was based on recording Raman data and coupling them to a traditional chemometric and machine learning approach for data analysis. Compared to a conventional linear approach, prediction errors are reduced with a data-driven approach using statistical machine learning methods. In the latter, preprocessing and predictive models are jointly optimized. An additional original aspect of the work involved submitting the problem to a collaborative data challenge platform called Rapid Analytics and Model Prototyping (RAMP). This allowed using solutions from about 300 data scientists in collaborative work. Using machine learning, the prediction of the four mAbs samples was considerably improved. The best predictive model showed a combined error of 2.4% versus 14.6% using the linear approach. The concentration and classification errors were 5.8% and 0.7%; only three spectra were misclassified over the 429 spectra of the test set. This large improvement obtained with machine learning techniques was uniform for all molecules but maximal for Bevacizumab, with an 88.3% reduction in combined errors (2.1% versus 17.9%). Copyright © 2018 Elsevier B.V. All rights reserved.
Vollant, A.; Balarac, G.; Corre, C.
2017-09-01
New procedures are explored for the development of models in the context of large eddy simulation (LES) of a passive scalar. They rely on the combination of the optimal estimator theory with machine-learning algorithms. The concept of optimal estimator makes it possible to identify the most accurate set of parameters to be used when deriving a model. The model itself can then be defined by training an artificial neural network (ANN) on a database derived from the filtering of direct numerical simulation (DNS) results. This procedure leads to a subgrid scale model displaying good structural performance, which allows LESs to be performed very close to the filtered DNS results. However, this first procedure does not control the functional performance, so the model can fail when the flow configuration differs from the training database. Another procedure is then proposed, where the model functional form is imposed and the ANN used only to define the model coefficients. The training step is a bi-objective optimisation in order to control both structural and functional performances. The model derived from this second procedure proves to be more robust. It also provides stable LESs for a turbulent plane jet flow configuration very far from the training database but over-estimates the mixing process in that case.
Deterministic hazard quotients (HQs): Heading down the wrong road
International Nuclear Information System (INIS)
Wilde, L.; Hunter, C.; Simpson, J.
1995-01-01
The use of deterministic hazard quotients (HQs) in ecological risk assessment is common as a screening method in remediation of brownfield sites dominated by total petroleum hydrocarbon (TPH) contamination. An HQ ≥ 1 indicates further risk evaluation is needed, but an HQ ≤ 1 generally excludes a site from further evaluation. Is the predicted hazard known with such certainty that differences of 10% (0.1) do not affect the ability to exclude or include a site from further evaluation? Current screening methods do not quantify uncertainty associated with HQs. To account for uncertainty in the HQ, exposure point concentrations (EPCs) or ecological benchmark values (EBVs) are conservatively biased. To increase understanding of the uncertainty associated with HQs, EPCs (measured and modeled) and toxicity EBVs were evaluated using a conservative deterministic HQ method. The evaluation was then repeated using a probabilistic (stochastic) method. The probabilistic method used data distributions for EPCs and EBVs to generate HQs with measurements of associated uncertainty. Sensitivity analyses were used to identify the most important factors significantly influencing risk determination. Understanding uncertainty associated with HQ methods gives risk managers a more powerful tool than deterministic approaches
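The contrast drawn above, a single-point HQ versus an HQ distribution with an associated exceedance probability, is easy to make concrete. In this sketch the lognormal choice for the exposure point concentration (EPC) and benchmark value (EBV) distributions is an illustrative assumption; the paper does not prescribe specific distributions.

```python
import numpy as np

def deterministic_hq(epc, ebv):
    """Point-estimate hazard quotient: HQ = EPC / EBV."""
    return epc / ebv

def probabilistic_hq(epc_mu, epc_sigma, ebv_mu, ebv_sigma, n=100_000, seed=42):
    """Propagate uncertainty by sampling (hypothetical) lognormal EPC and
    EBV distributions; return the estimated probability that HQ exceeds 1."""
    rng = np.random.default_rng(seed)
    hq = rng.lognormal(epc_mu, epc_sigma, n) / rng.lognormal(ebv_mu, ebv_sigma, n)
    return float(np.mean(hq > 1.0))
```

The probabilistic output is a probability of exceedance rather than a yes/no screen, which is exactly the extra information a risk manager gains over the deterministic cutoff at HQ = 1.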
Optimal reliability allocation for large software projects through soft computing techniques
DEFF Research Database (Denmark)
Madsen, Henrik; Albeanu, Grigore; Popentiu-Vladicescu, Florin
2012-01-01
or maximizing the system reliability subject to budget constraints. These kinds of optimization problems were considered both in deterministic and stochastic frameworks in literature. Recently, the intuitionistic-fuzzy optimization approach was considered as a soft computing successful modelling approach....... Firstly, a review on existing soft computing approaches to optimization is given. The main section extends the results considering self-organizing migrating algorithms for solving intuitionistic-fuzzy optimization problems attached to complex fault-tolerant software architectures which proved...
S. Boldyreva; S. Fehr (Serge); A. O'Neill; D. Wagner
2008-01-01
The study of deterministic public-key encryption was initiated by Bellare et al. (CRYPTO ’07), who provided the “strongest possible” notion of security for this primitive (called PRIV) and constructions in the random oracle (RO) model. We focus on constructing efficient deterministic
Graphic man-machine interface applied to nuclear reactor designs
International Nuclear Information System (INIS)
Pereira, Claudio M.N.A; Mol, Antonio Carlos A.
1999-01-01
Man-machine interfaces have been of interest to many researchers in the area of nuclear human factors engineering, principally as applied to monitoring systems. Clarity of information enables the best adaptation of the human to the machine. This work proposes the development of a graphic man-machine interface applied to nuclear reactor designs as a tool to optimize them. A prototype of a graphic man-machine interface for the Hammer code, developed for the PC under the Windows environment, is presented, and the results of its application are discussed. (author)
Hussain, Lal
2018-06-01
Epilepsy is a neurological disorder caused by abnormal excitability of neurons in the brain. Research shows that brain activity is monitored through the electroencephalogram (EEG) of patients suffering from seizures in order to detect epileptic seizures. EEG-based epilepsy detection requires feature extraction strategies. In this research, we extracted features using various strategies based on time- and frequency-domain characteristics, nonlinear and wavelet-based entropy, and a few statistical features. A deeper study was undertaken using novel machine learning classifiers, considering multiple factors. The support vector machine kernels were evaluated based on multiclass kernel and box constraint level. Likewise, for K-nearest neighbors (KNN), we evaluated different distance metrics, neighbor weights and numbers of neighbors. Similarly, for decision trees we tuned the parameters based on maximum splits and split criteria, and ensemble classifiers were evaluated based on different ensemble methods and learning rates. Ten-fold cross-validation was employed for training/testing, and performance was evaluated in terms of TPR, NPR, PPV, accuracy and AUC. In this research, a deeper analysis was performed using diverse feature extraction strategies and robust machine learning classifiers with more advanced optimal options. The support vector machine with linear kernel and KNN with the city-block distance metric gave the overall highest accuracy of 99.5%, which was higher than that obtained using the default parameters for these classifiers. Moreover, the highest separation (AUC = 0.9991, 0.9990) was obtained at different kernel scales using SVM. Additionally, K-nearest neighbors with inverse-squared distance weights gave higher performance for different numbers of neighbors. Moreover, in distinguishing postictal heart rate oscillations from those of epileptic ictal subjects, the highest performance of 100% was obtained using different machine learning classifiers.
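Two of the winning KNN configurations reported above, city-block distance and inverse-squared distance weighting, are easy to make concrete. This is a minimal from-scratch sketch, not the authors' implementation (which used a classifier toolbox with these options built in):

```python
import numpy as np

def knn_predict(Xtr, ytr, x, k=3, metric="cityblock"):
    """Minimal KNN with city-block distance and inverse-squared
    distance weighting of the k nearest neighbors' votes."""
    if metric == "cityblock":
        d = np.abs(Xtr - x).sum(axis=1)          # L1 (city-block) distance
    else:
        d = np.sqrt(((Xtr - x) ** 2).sum(axis=1))  # Euclidean distance
    idx = np.argsort(d)[:k]                      # k nearest training points
    w = 1.0 / (d[idx] ** 2 + 1e-12)              # inverse-squared weights
    votes = {}
    for label, wi in zip(ytr[idx], w):
        votes[label] = votes.get(label, 0.0) + wi
    return max(votes, key=votes.get)
```

In the paper's setting, Xtr would hold the extracted EEG feature vectors and ytr the seizure/non-seizure labels; the toy data in the test below is purely illustrative.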
Robust and Reliable Portfolio Optimization Formulation of a Chance Constrained Problem
Directory of Open Access Journals (Sweden)
Sengupta Raghu Nandan
2017-02-01
Full Text Available We solve a linear chance constrained portfolio optimization problem using the Robust Optimization (RO) method, wherein financial script/asset loss return distributions are considered as extreme valued. The objective function is a convex combination of the portfolio’s CVaR and expected value of loss return, subject to a set of randomly perturbed chance constraints with specified probability values. The robust deterministic counterpart of the model takes the form of a Second Order Cone Programming (SOCP) problem. Results from extensive simulation runs show the efficacy of our proposed models, as they help the investor to (i) utilize extensive simulation studies to draw insights into the effect of randomness in the portfolio decision making process, (ii) incorporate different risk appetite scenarios to find the optimal solutions for the financial portfolio allocation problem and (iii) compare the risk and return profiles of the investments made in both deterministic as well as uncertain and highly volatile financial markets.
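The objective described, a convex combination of CVaR and expected loss return, can be evaluated empirically from a sample of simulated losses. This sketch only computes the objective for a given loss sample; the confidence level α, the mixing weight λ and the sample itself are placeholders, and the paper's actual model optimizes portfolio weights under chance constraints via SOCP.

```python
import numpy as np

def cvar(losses, alpha=0.95):
    """Empirical CVaR_alpha: expected loss in the worst (1 - alpha) tail."""
    losses = np.sort(np.asarray(losses, dtype=float))
    var = np.quantile(losses, alpha)         # Value-at-Risk threshold
    tail = losses[losses >= var]             # worst-case tail
    return float(tail.mean())

def objective(losses, lam=0.5, alpha=0.95):
    """Convex combination of CVaR and expected loss return."""
    return lam * cvar(losses, alpha) + (1 - lam) * float(np.mean(losses))
```

Sweeping λ between 0 and 1 traces out the risk-appetite scenarios mentioned in the abstract, from a pure expected-loss minimizer to a pure tail-risk minimizer.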
Phase conjugation with random fields and with deterministic and random scatterers
International Nuclear Information System (INIS)
Gbur, G.; Wolf, E.
1999-01-01
The theory of distortion correction by phase conjugation, developed since the discovery of this phenomenon many years ago, applies to situations when the field that is conjugated is monochromatic and the medium with which it interacts is deterministic. In this Letter a generalization of the theory is presented that applies to phase conjugation of partially coherent waves interacting with either deterministic or random weakly scattering nonabsorbing media. copyright 1999 Optical Society of America
Directory of Open Access Journals (Sweden)
Angel A. Juan
2015-12-01
Full Text Available Many combinatorial optimization problems (COPs) encountered in real-world logistics, transportation, production, healthcare, financial, telecommunication, and computing applications are NP-hard in nature. These real-life COPs are frequently characterized by their large-scale sizes and the need for obtaining high-quality solutions in short computing times, thus requiring the use of metaheuristic algorithms. Metaheuristics benefit from different random-search and parallelization paradigms, but they frequently assume that the problem inputs, the underlying objective function, and the set of optimization constraints are deterministic. However, uncertainty is all around us, which often makes deterministic models oversimplified versions of real-life systems. After completing an extensive review of related work, this paper describes a general methodology that allows for extending metaheuristics through simulation to solve stochastic COPs. ‘Simheuristics’ allow modelers to deal with real-life uncertainty in a natural way by integrating simulation (in any of its variants) into a metaheuristic-driven framework. These optimization-driven algorithms rely on the fact that efficient metaheuristics already exist for the deterministic version of the corresponding COP. Simheuristics also facilitate the introduction of risk and/or reliability analysis criteria during the assessment of alternative high-quality solutions to stochastic COPs. Several examples of applications in different fields illustrate the potential of the proposed methodology.
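The simheuristic loop in a nutshell: a metaheuristic proposes solutions using deterministic costs, and Monte Carlo simulation re-evaluates the promising ones under uncertainty, adding the risk/reliability information mentioned above. A toy building block for the simulation side (edge costs and lognormal noise levels are invented for illustration):

```python
import random

def simulate_cost(edges, rng):
    """One stochastic realization of a route whose edges carry a
    deterministic cost c and a lognormal noise level sigma."""
    return sum(c * rng.lognormvariate(0.0, s) for c, s in edges)

def assess(edges, budget, n_sims=1000, seed=7):
    """Monte Carlo assessment of one candidate solution: estimated
    expected cost plus the reliability of staying within a budget."""
    rng = random.Random(seed)
    sims = [simulate_cost(edges, rng) for _ in range(n_sims)]
    mean = sum(sims) / n_sims
    reliability = sum(c <= budget for c in sims) / n_sims
    return mean, reliability
```

In a full simheuristic, this assessment would be applied only to an elite set of solutions found by the deterministic metaheuristic, so two candidates with similar deterministic cost can be ranked by reliability instead.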
International Nuclear Information System (INIS)
McLaughlin, Trevor D.; Sjoden, Glenn E.; Manalo, Kevin L.
2011-01-01
With growing concerns over port security and the potential for illicit trafficking of SNM through portable cargo shipping containers, efforts are ongoing to reduce the threat via container monitoring. This paper focuses on answering an important question of how many detectors are necessary for adequate coverage of a cargo container considering the detection of neutrons and gamma rays. Deterministic adjoint transport calculations are performed with compressed helium-3 polyethylene moderated neutron detectors and sodium activated cesium-iodide gamma-ray scintillation detectors on partial and full container models. Results indicate that the detector capability is dependent on source strength and potential shielding. Using a surrogate weapons grade plutonium leakage source, it was determined that for a 20 foot ISO container, five neutron detectors and three gamma detectors are necessary for adequate coverage. While a large CsI(Na) gamma detector has the potential to monitor the entire height of the container for SNM, the He-3 neutron detector is limited to roughly 1.25 m in depth. Detector blind spots are unavoidable inside the container volume unless additional measures are taken for adequate coverage. (author)
Machine learning of the reactor core loading pattern critical parameters
International Nuclear Information System (INIS)
Trontl, K.; Pevec, D.; Smuc, T.
2007-01-01
The usual approach to loading pattern optimization involves high degree of engineering judgment, a set of heuristic rules, an optimization algorithm and a computer code used for evaluating proposed loading patterns. The speed of the optimization process is highly dependent on the computer code used for the evaluation. In this paper we investigate the applicability of a machine learning model which could be used for fast loading pattern evaluation. We employed a recently introduced machine learning technique, Support Vector Regression (SVR), which has a strong theoretical background in statistical learning theory. Superior empirical performance of the method has been reported on difficult regression problems in different fields of science and technology. SVR is a data driven, kernel based, nonlinear modelling paradigm, in which model parameters are automatically determined by solving a quadratic optimization problem. The main objective of the work reported in this paper was to evaluate the possibility of applying SVR method for reactor core loading pattern modelling. The starting set of experimental data for training and testing of the machine learning algorithm was obtained using a two-dimensional diffusion theory reactor physics computer code. We illustrate the performance of the solution and discuss its applicability, i.e., complexity, speed and accuracy, with a projection to a more realistic scenario involving machine learning from the results of more accurate and time consuming three-dimensional core modelling code. (author)
Thermal design of an electric motor using Particle Swarm Optimization
International Nuclear Information System (INIS)
Jandaud, P-O; Harmand, S; Fakes, M
2012-01-01
In this paper, flow inside an electric machine called starter-alternator is studied parametrically with CFD in order to be used by a thermal lumped model coupled to an optimization algorithm using Particle Swarm Optimization (PSO). In a first case, the geometrical parameters are symmetric allowing us to model only one side of the machine. The optimized thermal results are not conclusive. In a second case, all the parameters are independent. In this case, the flow is strongly influenced by the dissymmetry. Optimization results are this time a clear improvement compared to the original machine.
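The abstract does not show the PSO itself, so here is the textbook variant it presumably builds on, minimizing a cheap stand-in objective in place of the coupled lumped thermal model. The inertia and acceleration constants are common defaults, not the paper's values.

```python
import random

def pso(f, dim, n=20, iters=200, w=0.7, c1=1.5, c2=1.5, seed=3):
    """Textbook Particle Swarm Optimization minimizing f over R^dim."""
    rng = random.Random(seed)
    X = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n)]
    V = [[0.0] * dim for _ in range(n)]
    P = [x[:] for x in X]                    # personal best positions
    pf = [f(x) for x in X]                   # personal best values
    g = min(range(n), key=lambda i: pf[i])
    G, gf = P[g][:], pf[g]                   # global best
    for _ in range(iters):
        for i in range(n):
            for d in range(dim):
                V[i][d] = (w * V[i][d]
                           + c1 * rng.random() * (P[i][d] - X[i][d])
                           + c2 * rng.random() * (G[d] - X[i][d]))
                X[i][d] += V[i][d]
            fx = f(X[i])
            if fx < pf[i]:
                P[i], pf[i] = X[i][:], fx
                if fx < gf:
                    G, gf = X[i][:], fx
    return G, gf
```

In the paper's setting, f would wrap the lumped thermal model (fed by the CFD-derived flow parametrization), so each particle evaluation returns a temperature-based cost for one candidate machine geometry.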
A Numerical Simulation for a Deterministic Compartmental ...
African Journals Online (AJOL)
In this work, an earlier deterministic mathematical model of HIV/AIDS is revisited and numerical solutions are obtained using Euler's numerical method. Using hypothetical values for the parameters, a program was written in the VISUAL BASIC programming language to generate series for the system of difference equations from the ...
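Euler's method turns the continuous compartmental model into exactly the kind of difference-equation series the abstract describes. The sketch below uses a simplified two-compartment susceptible-infected system with hypothetical parameter values, in the spirit of the paper but not its full HIV/AIDS model.

```python
def simulate(beta=0.3, mu=0.02, delta=0.1, h=0.1, steps=500, s0=0.99, i0=0.01):
    """Forward-Euler difference equations for a toy susceptible-infected model:
       dS/dt = -beta*S*I + mu*(1 - S)
       dI/dt =  beta*S*I - (mu + delta)*I
    beta: transmission rate, mu: demographic turnover, delta: removal rate,
    h: Euler step size. All values are hypothetical."""
    S, I = s0, i0
    path = [(S, I)]
    for _ in range(steps):
        dS = -beta * S * I + mu * (1.0 - S)
        dI = beta * S * I - (mu + delta) * I
        S, I = S + h * dS, I + h * dI   # Euler update: x_{n+1} = x_n + h*f(x_n)
        path.append((S, I))
    return path

path = simulate()
```

Each iteration of the loop is one term of the generated series; a VISUAL BASIC program would produce the same sequence with the same update rule.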
Visualization tool for human-machine interface designers
Prevost, Michael P.; Banda, Carolyn P.
1991-06-01
As modern human-machine systems continue to grow in capabilities and complexity, system operators are faced with integrating and managing increased quantities of information. Since many information components are highly related to each other, optimizing the spatial and temporal aspects of presenting information to the operator has become a formidable task for the human-machine interface (HMI) designer. The authors describe a tool in an early stage of development, the Information Source Layout Editor (ISLE). This tool is to be used for information presentation design and analysis; it uses human factors guidelines to assist the HMI designer in the spatial layout of the information required by machine operators to perform their tasks effectively. These human factors guidelines address such areas as the functional and physical relatedness of information sources. By representing these relationships with metaphors such as spring tension, attractors, and repellers, the tool can help designers visualize the complex constraint space and interacting effects of moving displays to various alternate locations. The tool contains techniques for visualizing the relative 'goodness' of a configuration, as well as mechanisms such as optimization vectors to provide guidance toward a more optimal design. Also available is a rule-based design checker to determine compliance with selected human factors guidelines.
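The spring-tension, attractor, and repeller metaphors amount to a force-directed layout iteration. The following is a hypothetical sketch, not the ISLE implementation: the display names, coefficients, and relatedness set are all invented for illustration.

```python
import math

def layout_step(pos, related, attract=0.05, repel=0.01, step=1.0):
    """One force-directed iteration: functionally related displays attract each
    other (spring tension), while every pair repels to avoid crowding."""
    forces = {k: [0.0, 0.0] for k in pos}
    keys = list(pos)
    for i, a in enumerate(keys):
        for b in keys[i + 1:]:
            dx, dy = pos[b][0] - pos[a][0], pos[b][1] - pos[a][1]
            d = math.hypot(dx, dy) or 1e-9
            f = -repel / d ** 2                    # repeller: push apart
            if (a, b) in related or (b, a) in related:
                f += attract * d                   # spring: pull together
            fx, fy = f * dx / d, f * dy / d
            forces[a][0] += fx; forces[a][1] += fy
            forces[b][0] -= fx; forces[b][1] -= fy
    return {k: (pos[k][0] + step * forces[k][0], pos[k][1] + step * forces[k][1])
            for k in pos}

# Hypothetical cockpit-style displays; only altimeter/airspeed are related.
pos = {"altimeter": (0.0, 0.0), "airspeed": (3.0, 0.0), "fuel": (0.0, 4.0)}
for _ in range(50):
    pos = layout_step(pos, related={("altimeter", "airspeed")})
```

After the iterations, related displays settle close together while unrelated ones keep their distance, which is the "goodness" gradient the tool visualizes.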
Directory of Open Access Journals (Sweden)
MILOŠ MADIĆ
2015-11-01
Full Text Available The role of non-conventional machining processes (NCMPs) in today’s manufacturing environment has been well acknowledged. For effective utilization of the capabilities and advantages of different NCMPs, selection of the most appropriate NCMP for a given machining application requires consideration of different conflicting criteria. The right choice of the NCMP is critical to the success and competitiveness of the company. As the NCMP selection problem involves consideration of different conflicting criteria of different relative importance, multi-criteria decision making (MCDM) methods are very useful for the systematic selection of the most appropriate NCMP. This paper presents the application of a recent MCDM method, the multi-objective optimization on the basis of ratio analysis (MOORA) method, to an NCMP selection problem defined over different performance criteria of the four most widely used NCMPs. In order to determine the relative significance of the considered quality criteria, a pair-wise comparison matrix of the analytic hierarchy process was used. The results obtained using the MOORA method showed perfect correlation with those obtained by the technique for order preference by similarity to ideal solution (TOPSIS) method, which demonstrates the applicability and potential of this MCDM method for solving complex NCMP selection problems.
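The MOORA scoring step is short enough to sketch directly: each criterion column is normalized by its Euclidean norm, and each alternative is scored as the weighted sum of beneficial criteria minus non-beneficial ones. The process names, ratings, and weights below are hypothetical, not the paper's data.

```python
import math

def moora(matrix, weights, beneficial):
    """MOORA ratio analysis: vector-normalize each criterion, then score each
    alternative as sum(w*x_norm over benefit criteria) - sum over cost criteria."""
    n_alt, n_crit = len(matrix), len(matrix[0])
    norms = [math.sqrt(sum(matrix[i][j] ** 2 for i in range(n_alt)))
             for j in range(n_crit)]
    scores = []
    for row in matrix:
        s = 0.0
        for j, x in enumerate(row):
            term = weights[j] * x / norms[j]
            s += term if beneficial[j] else -term
        scores.append(s)
    return scores

# Hypothetical NCMP ratings; criteria: material removal rate (benefit),
# surface quality (benefit), relative cost (cost criterion).
alts = ["EDM", "ECM", "LBM", "USM"]
data = [[4.0, 7.0, 5.0],
        [6.0, 6.0, 7.0],
        [8.0, 5.0, 6.0],
        [3.0, 8.0, 4.0]]
scores = moora(data, weights=[0.4, 0.4, 0.2], beneficial=[True, True, False])
ranking = sorted(zip(alts, scores), key=lambda t: -t[1])
```

In the paper, the weight vector would come from the AHP pair-wise comparison matrix rather than being set by hand.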
Probabilistic versus deterministic hazard assessment in liquefaction susceptible zones
Daminelli, Rosastella; Gerosa, Daniele; Marcellini, Alberto; Tento, Alberto
2015-04-01
Probabilistic seismic hazard assessment (PSHA), usually adopted in the framework of seismic code drafting, is based on a Poissonian description of temporal occurrence, a negative exponential distribution of magnitude and an attenuation relationship with log-normal distribution of PGA or response spectrum. The main positive aspect of this approach lies in the fact that it is presently a standard for the majority of countries, but there are weak points, in particular regarding the physical description of the earthquake phenomenon. Factors that could significantly influence the expected motion at the site, such as site effects and source characteristics like the duration of the strong motion and directivity, are not taken into account by PSHA. Deterministic models can better evaluate the ground motion at a site from a physical point of view, but their prediction reliability depends on the degree of knowledge of the source, wave propagation and soil parameters. We compare these two approaches at selected sites affected by the May 2012 Emilia-Romagna and Lombardia earthquake, which caused widespread liquefaction phenomena, unusual for magnitudes less than 6. We focus on sites prone to liquefaction because of their soil mechanical parameters and water table level. Our analysis shows that the choice between deterministic and probabilistic hazard analysis is strongly dependent on site conditions: the looser the soil and the higher the liquefaction potential, the more suitable the deterministic approach. Source characteristics, in particular the duration of strong ground motion, have long since been recognized as relevant to inducing liquefaction; unfortunately, a quantitative prediction of these parameters appears very unlikely, dramatically reducing the possibility of their adoption in hazard assessment. Last but not least, economic factors are relevant in the choice of the approach. The case history of the 2012 Emilia-Romagna and Lombardia earthquake, with an officially estimated cost of 6 billions
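The Poissonian occurrence model underlying PSHA reduces to one formula: the probability that the chosen ground-motion level is exceeded at least once in an exposure time t, given an annual exceedance rate lambda, is 1 - exp(-lambda*t). A minimal worked example, using the common "10% in 50 years" design target:

```python
import math

def prob_exceedance(annual_rate, years):
    """Poisson occurrence model used in PSHA: probability of at least one
    exceedance of the chosen ground-motion level within the exposure time."""
    return 1.0 - math.exp(-annual_rate * years)

# The "10% probability of exceedance in 50 years" target fixes the annual rate,
# and hence the return period (~475 years).
rate = -math.log(1.0 - 0.10) / 50.0   # annual exceedance rate
return_period = 1.0 / rate
```

This is the temporal half of PSHA; the magnitude distribution and attenuation relationship mentioned in the abstract determine which ground-motion level corresponds to that rate.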
Newton Methods for Large Scale Problems in Machine Learning
Hansen, Samantha Leigh
2014-01-01
The focus of this thesis is on practical ways of designing optimization algorithms for minimizing large-scale nonlinear functions with applications in machine learning. Chapter 1 introduces the overarching ideas in the thesis. Chapters 2 and 3 are geared towards supervised machine learning applications that involve minimizing a sum of loss…
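The Newton iteration at the core of such methods is x <- x - f'(x)/f''(x) (in higher dimensions, a Hessian solve). A 1-D illustration on a hypothetical smooth loss, not one of the thesis's large-scale problems:

```python
def newton_minimize(grad, hess, x0, iters=20):
    """1-D Newton iteration for minimization: step to the stationary point of
    the local quadratic model, x <- x - f'(x)/f''(x)."""
    x = x0
    for _ in range(iters):
        x -= grad(x) / hess(x)
    return x

# Toy convex loss f(x) = x**4 + x**2 - 2*x as a stand-in for a sum of losses.
grad = lambda x: 4 * x ** 3 + 2 * x - 2
hess = lambda x: 12 * x ** 2 + 2
x_star = newton_minimize(grad, hess, x0=1.0)
```

The large-scale machine-learning setting the thesis targets replaces the exact gradient and Hessian with subsampled or stochastic estimates, since both are sums over millions of training points.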
Precision production: enabling deterministic throughput for precision aspheres with MRF
Maloney, Chris; Entezarian, Navid; Dumas, Paul
2017-10-01
Aspherical lenses offer advantages over spherical optics by improving image quality or reducing the number of elements necessary in an optical system. Aspheres are no longer used exclusively in high-end optical systems but are now replacing spherical optics in many applications. The need for a method of production manufacturing of precision aspheres has emerged, and it is part of the reason that the optics industry is shifting away from artisan-based techniques towards more deterministic methods. Not only does Magnetorheological Finishing (MRF) enable deterministic figure correction for the most demanding aspheres, but it also enables deterministic and efficient throughput for series production of aspheres. The Q-flex MRF platform is designed to support batch production in a simple and user-friendly manner. Thorlabs routinely utilizes the advancements of this platform and has provided results from using MRF to finish a batch of aspheres as a case study. We have developed an analysis notebook to evaluate the necessary specifications for implementing quality control metrics. MRF brings confidence to optical manufacturing by ensuring high throughput for batch processing of aspheres.
Directory of Open Access Journals (Sweden)
Tai-fu Li
2013-01-01
Full Text Available Multivariate statistical process control is the continuation and development of univariate statistical process control. Multivariate statistical quality control charts are widely used in manufacturing and service industries to determine, based on an overall statistic, whether a process is performing as intended or whether there are unnatural causes of variation. Once the control chart detects an out-of-control signal, one difficulty encountered with multivariate control charts is the interpretation of that signal: we have to determine whether one variable, several variables, or a combination of variables is responsible for the abnormal signal. A novel approach for diagnosing the out-of-control signals in a multivariate process is described in this paper. The proposed methodology uses optimized support vector machines (support vector machine classification tuned by a genetic algorithm) to recognize a set of subclasses of multivariate abnormal patterns and to identify the variable(s) responsible for the occurrence of the abnormal pattern. Multiple sets of experiments are used to verify this model. The performance of the proposed approach demonstrates that this model can accurately classify the source(s) of an out-of-control signal and even outperforms the conventional multivariate control scheme.
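The SVM classification core of the proposed diagnosis can be sketched without the genetic-algorithm tuning layer. Below is a Pegasos-style stochastic sub-gradient trainer for a linear SVM on a hypothetical two-variable diagnosis task (+1 = mean shift in variable 1, -1 = mean shift in variable 2); the data, labels, and hyperparameters are invented, and the GA that the paper uses to tune the SVM is omitted.

```python
import random

def train_linear_svm(X, y, lam=0.01, epochs=300, seed=0):
    """Pegasos-style stochastic sub-gradient descent for a linear SVM.
    Labels y must be +1 / -1; lam is the regularization strength."""
    rng = random.Random(seed)
    w = [0.0] * len(X[0])
    b = 0.0
    t = 0
    for _ in range(epochs):
        for i in rng.sample(range(len(X)), len(X)):
            t += 1
            eta = 1.0 / (lam * t)                 # decaying step size
            margin = y[i] * (sum(wj * xj for wj, xj in zip(w, X[i])) + b)
            w = [(1 - eta * lam) * wj for wj in w]  # regularization shrink
            if margin < 1:                          # hinge-loss sub-gradient
                w = [wj + eta * y[i] * xj for wj, xj in zip(w, X[i])]
                b += eta * y[i]
    return w, b

def classify(w, b, x):
    return 1 if sum(wj * xj for wj, xj in zip(w, x)) + b >= 0 else -1

# Hypothetical out-of-control samples from a two-variable process.
X = [[2.1, 0.1], [1.8, -0.2], [2.4, 0.3], [0.2, 2.0], [-0.1, 1.7], [0.3, 2.2]]
y = [1, 1, 1, -1, -1, -1]
w, b = train_linear_svm(X, y)
```

The paper's version classifies into many subclasses of abnormal patterns and lets a genetic algorithm pick the kernel and hyperparameters; this sketch shows only the single binary separator underneath.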
Machine Learning Techniques in Optimal Design
Cerbone, Giuseppe
1992-01-01
Many important applications can be formalized as constrained optimization tasks. For example, we are studying the engineering domain of two-dimensional (2-D) structural design. In this task, the goal is to design a structure of minimum weight that bears a set of loads. A solution to a design problem in which there is a single load (L) and two stationary support points (S1 and S2) consists of four members, E1, E2, E3, and E4, that connect the load to the support points. In principle, optimal solutions to problems of this kind can be found by numerical optimization techniques. However, in practice [Vanderplaats, 1984] these methods are slow and they can produce different local solutions whose quality (ratio to the global optimum) varies with the choice of starting points. Hence, their applicability to real-world problems is severely restricted. To overcome these limitations, we propose to augment numerical optimization by first performing a symbolic compilation stage to produce: (a) objective functions that are faster to evaluate and that depend less on the choice of the starting point, and (b) selection rules that associate problem instances with a set of recommended solutions. These goals are accomplished by successive specializations of the problem class and of the associated objective functions. In the end, this process reduces the problem to a collection of independent functions that are fast to evaluate, that can be differentiated symbolically, and that represent smaller regions of the overall search space. However, the specialization process can produce a large number of sub-problems. This is overcome by inductively deriving selection rules which associate problems with small sets of specialized independent sub-problems. Each set of candidate solutions is chosen to minimize a cost function which expresses the tradeoff between the quality of the solution that can be obtained from the sub-problem and the time it takes to produce it. The overall solution
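The starting-point sensitivity that motivates the symbolic compilation stage is easy to demonstrate: a local method converges to different local solutions of different quality depending on where it starts. A toy multistart experiment on a hypothetical objective (not the 2-D structural design objective):

```python
def gradient_descent(grad, x0, lr=0.01, iters=2000):
    """Plain gradient descent; converges to the local minimum whose basin
    contains the starting point."""
    x = x0
    for _ in range(iters):
        x -= lr * grad(x)
    return x

# f(x) = x**4 - 3*x**2 + x has two local minima of different quality:
# a global one near x = -1.30 and a poorer one near x = 1.13.
f = lambda x: x ** 4 - 3 * x ** 2 + x
grad = lambda x: 4 * x ** 3 - 6 * x + 1

starts = [-2.0, -0.5, 0.5, 2.0]
minima = [gradient_descent(grad, s) for s in starts]
best = min(minima, key=f)
```

Half the starting points end in the inferior basin; selection rules that map a problem instance to a good starting region, as proposed in the abstract, remove exactly this failure mode.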
Directory of Open Access Journals (Sweden)
Timo Pukkala
2015-03-01
Full Text Available Background: Decisions on forest management are made under risk and uncertainty because stand development cannot be predicted exactly and future timber prices are unknown. Deterministic calculations may lead to biased advice on optimal forest management. The study optimized continuous cover management of boreal forest in a situation where tree growth, regeneration, and timber prices include uncertainty. Methods: Both anticipatory and adaptive optimization approaches were used. The adaptive approach optimized the reservation price function instead of fixed cutting years. The future prices of different timber assortments were described by cross-correlated auto-regressive models. The high variation around the ingrowth model was simulated using a model that describes the cross- and autocorrelations of the regeneration results of different species and years. Tree growth was predicted with individual-tree models, the predictions of which were adjusted on the basis of a stochastic climate-induced growth trend. Residuals of the deterministic diameter growth model were also simulated; they consisted of random tree factors and cross- and autocorrelated temporal terms. Results: Of the analyzed factors, timber price caused the most uncertainty in the calculation of the net present value of a given management schedule. Ingrowth and the climate trend were less significant sources of risk and uncertainty than tree growth. Stochastic anticipatory optimization led to more diverse post-cutting stand structures than deterministic optimization. The cutting interval was shorter when risk and uncertainty were included in the analyses. Conclusions: Adaptive optimization and management led to 6%–14% higher net present values than management based on anticipatory optimization. Increasing the risk aversion of the forest landowner led to earlier cuttings in a mature stand. The effect of risk attitude on optimization results was small.
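The reservation-price idea behind the adaptive approach can be sketched with a single AR(1) assortment price instead of the study's cross-correlated multivariate models; all numbers (mean price, autocorrelation, noise level, reservation price) are hypothetical.

```python
import random

def simulate_price(mu=55.0, phi=0.7, sigma=4.0, years=30, seed=1):
    """AR(1) timber price path: p_t = mu + phi*(p_{t-1} - mu) + eps,
    eps ~ N(0, sigma^2). A one-assortment stand-in for the study's
    cross-correlated auto-regressive price models."""
    rng = random.Random(seed)
    p = mu
    path = []
    for _ in range(years):
        p = mu + phi * (p - mu) + rng.gauss(0.0, sigma)
        path.append(p)
    return path

def first_cutting_year(path, reservation_price):
    """Adaptive rule: cut in the first year the offered price exceeds the
    reservation price, instead of at a fixed anticipatory cutting year."""
    for t, p in enumerate(path):
        if p >= reservation_price:
            return t
    return None

prices = simulate_price()
year = first_cutting_year(prices, reservation_price=58.0)
```

In the study, the reservation price is itself a function (e.g. of stand state) and is what the adaptive optimization tunes; an anticipatory strategy would instead commit to cutting years before any price is observed.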
Nonlinear programming for classification problems in machine learning
Astorino, Annabella; Fuduli, Antonio; Gaudioso, Manlio
2016-10-01
We survey some nonlinear models for classification problems arising in machine learning. In recent years this field has become more and more relevant owing to many practical applications, such as text and web classification, object recognition in machine vision, gene expression profile analysis, DNA and protein analysis, medical diagnosis, customer profiling, etc. Classification deals with the separation of sets by means of appropriate separation surfaces, generally obtained by solving a numerical optimization model. While linear separability is the basis of the most popular approach to classification, the Support Vector Machine (SVM), in recent years the use of nonlinear separating surfaces has received some attention. The objective of this work is to recall some such proposals, mainly in terms of the numerical optimization models. In particular we tackle the polyhedral, ellipsoidal, spherical and conical separation approaches and, for some of them, we also consider the semisupervised versions.
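Of the nonlinear surfaces listed, spherical separation is the simplest to illustrate: enclose one class in a sphere that excludes the other. The survey treats this as a numerical optimization model; the sketch below is only a naive geometric stand-in (centroid center, midpoint radius) on invented data, and it assumes the classes are in fact separable from that center.

```python
import math

def spherical_separator(inner, outer):
    """Naive spherical separation sketch: center the sphere at the centroid of
    the inner class and pick the radius midway between the farthest inner
    point and the nearest outer point. Raises if that fails."""
    dim = len(inner[0])
    center = [sum(p[d] for p in inner) / len(inner) for d in range(dim)]
    dist = lambda p: math.sqrt(sum((p[d] - center[d]) ** 2 for d in range(dim)))
    r_in = max(dist(p) for p in inner)    # farthest point to enclose
    r_out = min(dist(p) for p in outer)   # nearest point to exclude
    if r_in >= r_out:
        raise ValueError("classes not spherically separable from this center")
    return center, (r_in + r_out) / 2.0

def inside(center, radius, p):
    return math.sqrt(sum((pi - ci) ** 2 for pi, ci in zip(p, center))) <= radius

# Invented toy data: inner class near the origin, outer class on a ring.
inner = [(0.1, 0.2), (-0.3, 0.1), (0.2, -0.2), (0.0, 0.0)]
outer = [(2.0, 0.0), (0.0, -2.2), (-1.9, 0.3), (1.5, 1.5)]
center, radius = spherical_separator(inner, outer)
```

The optimization-based formulations surveyed in the paper instead choose the center and radius to minimize a classification-error objective, which handles non-separable and noisy data that this construction cannot.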