Acceleration techniques in the univariate Lipschitz global optimization
Sergeyev, Yaroslav D.; Kvasov, Dmitri E.; Mukhametzhanov, Marat S.; De Franco, Angela
2016-10-01
Univariate box-constrained Lipschitz global optimization problems are considered in this contribution. Geometric and information-statistical approaches are presented. Novel, powerful local tuning and local improvement techniques are described, along with traditional ways to estimate the Lipschitz constant. The advantages of the presented local tuning and local improvement techniques are demonstrated using the operational characteristics approach for comparing deterministic global optimization algorithms on a class of 100 widely used test functions.
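The geometric approach referenced above builds saw-tooth lower bounds from the Lipschitz constant, in the spirit of the Piyavskii-Shubert method. The following is a generic textbook sketch of that baseline (not the authors' accelerated algorithm), assuming a valid overestimate of the Lipschitz constant L is known:

```python
import math

def piyavskii_shubert(f, a, b, L, n_iter=60):
    """Minimize f on [a, b] assuming |f(x) - f(y)| <= L|x - y|."""
    # Evaluated points, kept sorted by x.
    pts = [(a, f(a)), (b, f(b))]
    for _ in range(n_iter):
        # On each interval [x1, x2] the saw-tooth minorant attains its minimum
        # at xs = (x1 + x2)/2 + (f1 - f2)/(2L), with characteristic value
        # R = (f1 + f2)/2 - L (x2 - x1)/2.  Evaluate f where R is smallest.
        best = None
        for (x1, f1), (x2, f2) in zip(pts, pts[1:]):
            R = 0.5 * (f1 + f2) - 0.5 * L * (x2 - x1)
            if best is None or R < best[0]:
                xs = 0.5 * (x1 + x2) + (f1 - f2) / (2.0 * L)
                best = (R, xs)
        pts.append((best[1], f(best[1])))
        pts.sort()
    return min(pts, key=lambda p: p[1])

# A standard multiextremal test function on [0, 10]; |f'| <= 1 + 10/3, so L = 4.5 is valid.
x_best, f_best = piyavskii_shubert(lambda x: math.sin(x) + math.sin(10.0 * x / 3.0),
                                   0.0, 10.0, L=4.5)
```

Because new evaluations go where the lower bound is smallest, effort concentrates near the global minimizer; the local tuning techniques of the paper replace the single global L with locally estimated constants to accelerate exactly this scheme.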
Neoliberal Optimism: Applying Market Techniques to Global Health.
Mei, Yuyang
2017-01-01
Global health and neoliberalism are becoming increasingly intertwined as organizations utilize markets and profit motives to solve the traditional problems of poverty and population health. I use field work conducted over 14 months in a global health technology company to explore how the promise of neoliberalism re-envisions humanitarian efforts. In this company's vaccine refrigerator project, staff members expect their investors and their market to allow them to achieve scale and develop accountability to their users in developing countries. However, the translation of neoliberal techniques to the global health sphere falls short of the ideal, as profits are meager and purchasing power remains with donor organizations. The continued optimism in market principles amidst such a non-ideal market reveals the tenacious ideological commitment to neoliberalism in these global health projects.
Chao, Ming; Yuan, Yading; Rosenzweig, Kenneth E; Lo, Yeh-Chi; Wei, Jie; Li, Tianfang
2016-01-01
We present a study of extracting respiratory signals from cone beam computed tomography (CBCT) projections within the framework of the Amsterdam Shroud (AS) technique. Acquired prior to the radiotherapy treatment, CBCT projections were preprocessed for contrast enhancement by converting the original intensity images to attenuation images, from which the AS image was created. An adaptive robust z-normalization filtering was applied to further augment the weak oscillating structures locally. From the enhanced AS image, the respiratory signal was extracted using a two-step optimization approach to effectively reveal the large-scale regularity of the breathing signals. CBCT projection images from five patients acquired with the Varian Onboard Imager on the Clinac iX System Linear Accelerator (Varian Medical Systems, Palo Alto, CA) were employed to assess the proposed technique. Stable breathing signals were reliably extracted using the proposed algorithm. Reference waveforms obtained using an air bellows belt (Philips Medical Systems, Cleveland, OH) were exported and compared to the AS-based signals. For the enrolled patients, the average error between the estimated breaths per minute (bpm) and the reference-waveform bpm was as low as −0.07, with a standard deviation of 1.58. The new algorithm outperformed the original AS technique for all patients by 8.5% to 30%. The impact of gantry rotation on the breathing signal was assessed with data acquired with a Quasar phantom (Modus Medical Devices Inc., London, Canada) and found to be minimal on the signal frequency. The technique developed in this work provides a practical solution for extracting markerless breathing signals from CBCT projections for thoracic and abdominal patients.
Stochastic and global optimization
Dzemyda, Gintautas; Šaltenis, Vydūnas; Zhilinskas, A; Mockus, Jonas
2002-01-01
... and Effectiveness of Controlled Random Search E. M. T. Hendrix, P. M. Ortigosa and I. García 129 9. Discrete Backtracking Adaptive Search for Global Optimization B. P. Kristinsdottir, Z. B. Zabinsky and...
Wells, Kelley C.; Millet, Dylan B.; Bousserez, Nicolas; Henze, Daven K.; Griffis, Timothy J.; Chaliyakunnel, Sreelekha; Dlugokencky, Edward J.; Saikawa, Eri; Xiang, Gao; Prinn, Ronald G.; O'Doherty, Simon; Young, Dickon; Weiss, Ray F.; Dutton, Geoff S.; Elkins, James W.; Krummel, Paul B.; Langenfelds, Ray; Steele, L. Paul
2018-01-01
We present top-down constraints on global monthly N2O emissions for 2011 from a multi-inversion approach and an ensemble of surface observations. The inversions employ the GEOS-Chem adjoint and an array of aggregation strategies to test how well current observations can constrain the spatial distribution of global N2O emissions. The strategies include (1) a standard 4D-Var inversion at native model resolution (4° × 5°), (2) an inversion for six continental and three ocean regions, and (3) a fast 4D-Var inversion based on a novel dimension reduction technique employing randomized singular value decomposition (SVD). The optimized global flux ranges from 15.9 Tg N yr-1 (SVD-based inversion) to 17.5-17.7 Tg N yr-1 (continental-scale, standard 4D-Var inversions), with the former better capturing the extratropical N2O background measured during the HIAPER Pole-to-Pole Observations (HIPPO) airborne campaigns. We find that the tropics provide a greater contribution to the global N2O flux than is predicted by the prior bottom-up inventories, likely due to underestimated agricultural and oceanic emissions. We infer an overestimate of natural soil emissions in the extratropics and find that predicted emissions are seasonally biased in northern midlatitudes. Here, optimized fluxes exhibit a springtime peak consistent with the timing of spring fertilizer and manure application, soil thawing, and elevated soil moisture. Finally, the inversions reveal a major emission underestimate in the US Corn Belt in the bottom-up inventory used here. We extensively test the impact of initial conditions on the analysis and recommend formally optimizing the initial N2O distribution to avoid biasing the inferred fluxes. We find that the SVD-based approach provides a powerful framework for deriving emission information from N2O observations: by defining the optimal resolution of the solution based on the information content of the inversion, it provides spatial information that is lost when
Mechanical Design Optimization Using Advanced Optimization Techniques
Rao, R Venkata
2012-01-01
Mechanical design includes an optimization process in which designers consider objectives such as strength, deflection, weight, wear, and corrosion, depending on the requirements. However, design optimization for a complete mechanical assembly leads to a complicated objective function with a large number of design variables. It is good practice to apply optimization techniques to individual components or intermediate assemblies rather than to a complete assembly. Analytical or numerical methods for calculating the extreme values of a function may perform well in many practical cases, but may fail in more complex design situations. In real design problems, the number of design parameters can be very large, and their influence on the value to be optimized (the goal function) can be very complicated and nonlinear in character. In these complex cases, advanced optimization algorithms offer solutions, because they find a solution near the global optimum within reasonable time and computational ...
Introduction to Nonlinear and Global Optimization
Hendrix, E.M.T.; Tóth, B.
2010-01-01
This self-contained text provides a solid introduction to global and nonlinear optimization, giving students of mathematics and interdisciplinary sciences a strong foundation in applied optimization techniques. The book offers a unique hands-on and critical approach to applied optimization
Stochastic global optimization as a filtering problem
Stinis, Panos
2012-01-01
We present a reformulation of stochastic global optimization as a filtering problem. The motivation behind this reformulation comes from the fact that for many optimization problems we cannot evaluate exactly the objective function to be optimized. Similarly, we may not be able to evaluate exactly the functions involved in iterative optimization algorithms. For example, we may only have access to noisy measurements of the functions or statistical estimates provided through Monte Carlo sampling. This makes iterative optimization algorithms behave like stochastic maps. Naive global optimization amounts to evolving a collection of realizations of this stochastic map and picking the realization with the best properties. This motivates the use of filtering techniques to allow focusing on realizations that are more promising than others. In particular, we present a filtering reformulation of global optimization in terms of a special case of sequential importance sampling methods called particle filters. The increasing popularity of particle filters is based on the simplicity of their implementation and their flexibility. We utilize the flexibility of particle filters to construct a stochastic global optimization algorithm which can converge to the optimal solution appreciably faster than naive global optimization. Several examples of parametric exponential density estimation are provided to demonstrate the efficiency of the approach.
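The filtering reformulation described above can be sketched in a few lines: propagate a population of candidates through a noisy map, weight each realization by exp(−f/T), and resample so that effort concentrates on the promising ones. This is an illustrative toy (the perturbation scale, temperature, and annealing factor are assumptions, not the paper's settings):

```python
import math
import random

def _resample_index(weights, total):
    """Pick one index with probability proportional to its weight."""
    r, acc = random.uniform(0.0, total), 0.0
    for i, w in enumerate(weights):
        acc += w
        if acc >= r:
            return i
    return len(weights) - 1

def pf_minimize(f, dim, n_particles=200, n_steps=100, sigma=1.0, temp=0.2):
    """Particle-filter style minimization: perturb, weight, resample."""
    random.seed(7)
    particles = [[random.uniform(-5.0, 5.0) for _ in range(dim)]
                 for _ in range(n_particles)]
    best_x, best_f = None, float("inf")
    for _ in range(n_steps):
        # Stochastic map: propagate each particle with Gaussian noise.
        particles = [[x + random.gauss(0.0, sigma) for x in p] for p in particles]
        scores = [f(p) for p in particles]
        lo = min(scores)
        if lo < best_f:
            best_f, best_x = lo, particles[scores.index(lo)][:]
        # Filtering step: importance weights favour low objective values;
        # multinomial resampling duplicates the promising realizations.
        weights = [math.exp(-(s - lo) / temp) for s in scores]
        total = sum(weights)
        particles = [particles[_resample_index(weights, total)][:]
                     for _ in range(n_particles)]
        sigma *= 0.95  # anneal the perturbation size
    return best_x, best_f

best_x, best_f = pf_minimize(lambda p: sum(x * x for x in p), dim=2)
```

Naive global optimization corresponds to skipping the resampling step; the filtering step is what lets the population focus its function evaluations.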
Global optimization and simulated annealing
Dekkers, A.; Aarts, E.H.L.
1988-01-01
In this paper we are concerned with global optimization, which can be defined as the problem of finding points on a bounded subset of R^n at which some real-valued function f assumes its optimal (i.e. maximal or minimal) value. We present a stochastic approach which is based on simulated annealing
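The simulated annealing idea referenced above accepts uphill moves with probability exp(−ΔF/T) under a decreasing temperature T, which lets the search escape local minima early on. A minimal generic implementation (the cooling schedule and step sizes below are illustrative assumptions, not the authors' scheme):

```python
import math
import random

def simulated_annealing(f, x0, step=1.0, t0=2.0, cooling=0.998, n_iter=5000):
    """Minimize f over R^n from x0 with a geometric cooling schedule."""
    random.seed(1)
    x, fx = list(x0), f(x0)
    best_x, best_f = x[:], fx
    t = t0
    for _ in range(n_iter):
        # Propose a random neighbour.
        y = [xi + random.gauss(0.0, step) for xi in x]
        fy = f(y)
        # Accept downhill moves always; uphill moves with prob. exp(-dF/T).
        if fy < fx or random.random() < math.exp(-(fy - fx) / t):
            x, fx = y, fy
            if fx < best_f:
                best_x, best_f = x[:], fx
        t *= cooling      # cool down
        step *= 0.999     # optionally shrink the neighbourhood as well
    return best_x, best_f

# Example: the multimodal Rastrigin function in 2-D (global minimum 0 at the origin).
def rastrigin(v):
    return 10 * len(v) + sum(vi * vi - 10 * math.cos(2 * math.pi * vi) for vi in v)

sol, val = simulated_annealing(rastrigin, [4.0, -4.0])
```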
Convex analysis and global optimization
Tuy, Hoang
2016-01-01
This book presents state-of-the-art results and methodologies in modern global optimization, and has been a staple reference for researchers, engineers, advanced students (also in applied mathematics), and practitioners in various fields of engineering. The second edition has been brought up to date and continues to develop a coherent and rigorous theory of deterministic global optimization, highlighting the essential role of convex analysis. The text has been revised and expanded to meet the needs of research, education, and applications for many years to come. Updates for this new edition include: · Discussion of modern approaches to minimax, fixed point, and equilibrium theorems, and to nonconvex optimization; · Increased focus on dealing more efficiently with ill-posed problems of global optimization, particularly those with hard constraints;
Evolutionary global optimization, manifolds and applications
Aguiar e Oliveira Junior, Hime
2016-01-01
This book presents powerful techniques for solving global optimization problems on manifolds by means of evolutionary algorithms, and shows in practice how these techniques can be applied to solve real-world problems. It describes recent findings and well-known key facts in general and differential topology, revisiting them all in the context of application to current optimization problems. Special emphasis is put on game theory problems. Here, these problems are reformulated as constrained global optimization tasks and solved with the help of Fuzzy ASA. In addition, more abstract examples, including minimizations of well-known functions, are also included. Although the Fuzzy ASA approach has been chosen as the main optimizing paradigm, the book suggests that other metaheuristic methods could be used as well. Some of them are introduced, together with their advantages and disadvantages. Readers should possess some knowledge of linear algebra, and of basic concepts of numerical analysis and probability theory....
Microwave tomography global optimization, parallelization and performance evaluation
Noghanian, Sima; Desell, Travis; Ashtari, Ali
2014-01-01
This book provides a detailed overview of the use of global optimization and parallel computing in microwave tomography. It focuses on techniques based on global optimization and electromagnetic numerical methods, and presents parallelization techniques for homogeneous and heterogeneous computing architectures on high-performance and general-purpose computers. The book also discusses a multi-level optimization technique and a hybrid genetic algorithm, with application to breast cancer imaging.
Global optimization and sensitivity analysis
Cacuci, D.G.
1990-01-01
A new direction for the analysis of nonlinear models of nuclear systems is suggested to overcome fundamental limitations of sensitivity analysis and optimization methods currently prevalent in nuclear engineering usage. This direction is toward a global analysis of the behavior of the respective system as its design parameters are allowed to vary over their respective design ranges. Presented is a methodology for global analysis that unifies and extends the current scopes of sensitivity analysis and optimization by identifying all the critical points (maxima, minima) and solution bifurcation points together with corresponding sensitivities at any design point of interest. The potential applicability of this methodology is illustrated with test problems involving multiple critical points and bifurcations and comprising both equality and inequality constraints.
Essays and surveys in global optimization
Audet, Charles; Savard, Giles
2005-01-01
Global optimization aims at solving the most general problems of deterministic mathematical programming. In addition, once the solutions are found, this methodology is also expected to prove their optimality. With these difficulties in mind, global optimization is becoming an increasingly powerful and important methodology. This book is the most recent examination of its mathematical capability, power, and wide ranging solutions to many fields in the applied sciences.
Advances in stochastic and deterministic global optimization
Zhigljavsky, Anatoly; Žilinskas, Julius
2016-01-01
Current research results in stochastic and deterministic global optimization including single and multiple objectives are explored and presented in this book by leading specialists from various fields. Contributions include applications to multidimensional data visualization, regression, survey calibration, inventory management, timetabling, chemical engineering, energy systems, and competitive facility location. Graduate students, researchers, and scientists in computer science, numerical analysis, optimization, and applied mathematics will be fascinated by the theoretical, computational, and application-oriented aspects of stochastic and deterministic global optimization explored in this book. This volume is dedicated to the 70th birthday of Antanas Žilinskas who is a leading world expert in global optimization. Professor Žilinskas's research has concentrated on studying models for the objective function, the development and implementation of efficient algorithms for global optimization with single and mu...
Simulation-based optimization parametric optimization techniques and reinforcement learning
Gosavi, Abhijit
2003-01-01
Simulation-Based Optimization: Parametric Optimization Techniques and Reinforcement Learning introduces the evolving area of simulation-based optimization. The book's objective is two-fold: (1) It examines the mathematical governing principles of simulation-based optimization, thereby providing the reader with the ability to model relevant real-life problems using these techniques. (2) It outlines the computational technology underlying these methods. Taken together these two aspects demonstrate that the mathematical and computational methods discussed in this book do work. Broadly speaking, the book has two parts: (1) parametric (static) optimization and (2) control (dynamic) optimization. Some of the book's special features are: *An accessible introduction to reinforcement learning and parametric-optimization techniques. *A step-by-step description of several algorithms of simulation-based optimization. *A clear and simple introduction to the methodology of neural networks. *A gentle introduction to converg...
On the efficiency of chaos optimization algorithms for global optimization
Yang Dixiong; Li Gang; Cheng Gengdong
2007-01-01
Chaos optimization algorithms, a novel class of global optimization methods, have attracted much attention; to date they have all been based on the Logistic map. However, the probability density function of the chaotic sequences derived from the Logistic map is of Chebyshev type, which may considerably affect the global searching capacity and computational efficiency of chaos optimization algorithms. Considering the statistical properties of the chaotic sequences of the Logistic map and the Kent map, an improved hybrid chaos-BFGS optimization algorithm and a Kent map based hybrid chaos-BFGS algorithm are proposed. Five typical nonlinear functions with multimodal characteristics are tested to compare the performance of five hybrid optimization algorithms: the conventional Logistic map based chaos-BFGS algorithm, the improved Logistic map based chaos-BFGS algorithm, the Kent map based chaos-BFGS algorithm, a Monte Carlo-BFGS algorithm, and a mesh-BFGS algorithm. The numerical results call into question the high efficiency of chaos optimization algorithms claimed in some references. It is concluded that the efficiency of the hybrid optimization algorithms is influenced by the statistical properties of the chaotic/stochastic sequences generated by the underlying chaotic/stochastic algorithms, and by the location of the global optimum of the nonlinear functions. In addition, it is inappropriate to claim high efficiency for global optimization algorithms based only on a few numerical examples of low-dimensional functions.
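The statistical property at issue can be checked directly: the Logistic map's invariant density, 1/(π√(x(1−x))), piles iterates near the interval ends, while the (skew tent) Kent map's invariant density is uniform. A small numerical sketch (the map parameters are standard textbook choices, not necessarily those of the paper):

```python
def logistic_seq(x, n, mu=4.0):
    """Iterate the Logistic map x -> mu x (1 - x)."""
    out = []
    for _ in range(n):
        x = mu * x * (1.0 - x)
        if x <= 0.0 or x >= 1.0:
            x = 0.135  # reseed away from absorbing endpoints (float artifact guard)
        out.append(x)
    return out

def kent_seq(x, n, a=0.7):
    """Iterate the Kent (skew tent) map with break point a."""
    out = []
    for _ in range(n):
        x = x / a if x < a else (1.0 - x) / (1.0 - a)
        if x <= 0.0 or x >= 1.0:
            x = 0.135  # same float artifact guard
        out.append(x)
    return out

log_s = logistic_seq(0.2345, 20000)
kent_s = kent_seq(0.2345, 20000)
# Fraction of iterates within 0.05 of either end of [0, 1]: the Logistic map
# concentrates there (about 0.29 in theory); the Kent map stays near 0.10.
frac_edges_log = sum(1 for v in log_s if v < 0.05 or v > 0.95) / len(log_s)
frac_edges_kent = sum(1 for v in kent_s if v < 0.05 or v > 0.95) / len(kent_s)
```

This end-heavy sampling is exactly the bias the paper argues degrades Logistic-map-based chaotic search relative to the Kent map.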
On benchmarking Stochastic Global Optimization Algorithms
Hendrix, E.M.T.; Lancinskas, A.
2015-01-01
A multitude of heuristic stochastic optimization algorithms have been described in literature to obtain good solutions of the box-constrained global optimization problem often with a limit on the number of used function evaluations. In the larger question of which algorithms behave well on which
Physical optimization of afterloading techniques
Anderson, L.L.
1985-01-01
Physical optimization in brachytherapy refers to the process of determining the radioactive-source configuration which yields a desired dose distribution. In manually afterloaded intracavitary therapy for cervix cancer, discrete source strengths are selected iteratively to minimize the sum of squares of differences between trial and target doses. For remote afterloading with a stepping-source device, optimized (continuously variable) dwell times are obtained, either iteratively or analytically, to give least squares approximations to dose at an arbitrary number of points; in vaginal irradiation for endometrial cancer, the objective has included dose uniformity at applicator surface points in addition to a tapered contour of target dose at depth. For template-guided interstitial implants, seed placement at rectangular-grid mesh points may be least squares optimized within target volumes defined by computerized tomography; effective optimization is possible only for (uniform) seed strength high enough that the desired average peripheral dose is achieved with a significant fraction of empty seed locations.
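The least-squares dwell-time optimization described above can be sketched as a tiny projected-gradient solve of ||A t − d||² with nonnegative dwell times t, where A maps dwell times to dose at the calculation points. The 2-source, 3-point geometry and all numbers below are made up purely for illustration:

```python
def optimize_dwell_times(A, target, n_iter=5000, lr=None):
    """Nonnegative least-squares dwell times t so that dose A @ t matches target.

    Projected gradient descent on ||A t - d||^2 (pure Python, no NumPy).
    """
    m, n = len(A), len(A[0])
    if lr is None:
        # Conservative step size from the largest column norm of A.
        lr = 1.0 / max(sum(A[i][j] ** 2 for i in range(m)) for j in range(n)) / n
    t = [0.0] * n
    for _ in range(n_iter):
        # Residual r = A t - d at each dose point.
        r = [sum(A[i][j] * t[j] for j in range(n)) - target[i] for i in range(m)]
        for j in range(n):
            g = 2.0 * sum(A[i][j] * r[i] for i in range(m))  # gradient component
            t[j] = max(0.0, t[j] - lr * g)                   # project onto t >= 0
    return t

# Hypothetical dose-rate matrix (rows: dose points, columns: dwell positions).
A = [[1.0, 0.2],
     [0.5, 0.5],
     [0.2, 1.0]]
target = [1.2, 1.0, 1.2]
t = optimize_dwell_times(A, target)
```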
Fast global sequence alignment technique
Bonny, Mohamed Talal
2011-11-01
Bioinformatics databases are growing exponentially in size. Processing this large amount of data may take hours even when supercomputers are used. One of the most important processing tools in bioinformatics is sequence alignment. We introduce a fast alignment algorithm, called 'Alignment By Scanning' (ABS), that provides an approximate alignment of two DNA sequences. We compare our algorithm with the well-known sequence alignment algorithms 'GAP' (which is heuristic) and 'Needleman-Wunsch' (which is optimal). The proposed algorithm achieves up to a 51% enhancement in alignment score compared with the GAP algorithm. The evaluations are conducted using DNA sequences of different lengths. © 2011 IEEE.
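For reference, the optimal baseline that ABS is compared against, Needleman-Wunsch global alignment, is a short dynamic program. The scoring parameters below (match +1, mismatch −1, gap −1) are common textbook defaults, not necessarily those used in the paper:

```python
def needleman_wunsch(a, b, match=1, mismatch=-1, gap=-1):
    """Optimal global alignment score of strings a and b via dynamic programming."""
    n, m = len(a), len(b)
    # dp[i][j] = best score aligning a[:i] with b[:j].
    dp = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        dp[i][0] = i * gap
    for j in range(1, m + 1):
        dp[0][j] = j * gap
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            s = match if a[i - 1] == b[j - 1] else mismatch
            dp[i][j] = max(dp[i - 1][j - 1] + s,  # align a[i-1] with b[j-1]
                           dp[i - 1][j] + gap,    # gap in b
                           dp[i][j - 1] + gap)    # gap in a
    return dp[n][m]

score = needleman_wunsch("GATTACA", "GCATGCU")  # classic example pair
```

Its O(nm) time and space is exactly the cost that heuristic scanners like ABS aim to avoid on long sequences.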
Optimal beneficiation of global resources
Aloisi de Larderel, J. (Industry and Environment Office, Paris (France). United Nations Environment Programme)
1989-01-01
The growth of the world's population and related human activities are clearly having major effects on the environment and on the level of use of natural resources: forests are disappearing, air pollution is leading to acid rain, changes are occurring in atmospheric ozone and the global climate, more and more people lack access to reasonably safe water supplies, soil pollution is becoming a problem, and mineral and energy resources are increasingly being used. Producing more with less and polluting less: these are basic challenges that the world now faces. Low- and non-waste technologies are certainly one of the keys to meeting those challenges.
Optimization of Mangala Hydropower Station, Pakistan, using Optimization Techniques
Zaman Muhammad
2017-01-01
Hydropower generation is one of the key elements in the economy of a country. The present study focuses on optimal electricity generation from the Mangla reservoir in Pakistan. A mathematical model was developed for the Mangla hydropower station, and particle swarm optimization and genetic algorithm techniques were applied to this model for optimal electricity generation. Results revealed that electricity production increases when these optimization techniques are applied to the proposed mathematical model. The genetic algorithm can produce more electricity than particle swarm optimization, but the execution time of particle swarm optimization is much shorter than that of the genetic algorithm. The Mangla hydropower station can produce up to 59 × 10⁹ kWh of electricity with optimized flows, compared with 47 × 10⁸ kWh from traditional methods.
Machine Learning Techniques in Optimal Design
Cerbone, Giuseppe
1992-01-01
Many important applications can be formalized as constrained optimization tasks. For example, we are studying the engineering domain of two-dimensional (2-D) structural design. In this task, the goal is to design a structure of minimum weight that bears a set of loads. A solution to a design problem in which there is a single load (L) and two stationary support points (S1 and S2), consisting of four members E1, E2, E3, and E4 that connect the load to the support points, is discussed. In principle, optimal solutions to problems of this kind can be found by numerical optimization techniques. However, in practice [Vanderplaats, 1984] these methods are slow, and they can produce different local solutions whose quality (ratio to the global optimum) varies with the choice of starting points. Hence, their applicability to real-world problems is severely restricted. To overcome these limitations, we propose to augment numerical optimization by first performing a symbolic compilation stage to produce: (a) objective functions that are faster to evaluate and that depend less on the choice of the starting point, and (b) selection rules that associate problem instances with a set of recommended solutions. These goals are accomplished by successive specializations of the problem class and of the associated objective functions. In the end, this process reduces the problem to a collection of independent functions that are fast to evaluate, that can be differentiated symbolically, and that represent smaller regions of the overall search space. However, the specialization process can produce a large number of sub-problems. This is overcome by inductively deriving selection rules which associate problems with small sets of specialized independent sub-problems. Each set of candidate solutions is chosen to minimize a cost function which expresses the tradeoff between the quality of the solution that can be obtained from the sub-problem and the time it takes to produce it. The overall solution
A Direct Search Algorithm for Global Optimization
Enrique Baeyens
2016-06-01
A direct search algorithm is proposed for minimizing an arbitrary real-valued function. The algorithm uses a new function transformation and three simplex-based operations. The function transformation provides global exploration features, while the simplex-based operations guarantee termination of the algorithm and provide global convergence to a stationary point if the cost function is differentiable and its gradient is Lipschitz continuous. The algorithm's performance has been extensively tested using benchmark functions and compared to some well-known global optimization algorithms. The results of the computational study show that the algorithm combines both simplicity and efficiency and is competitive with the heuristics-based strategies presently used for global optimization.
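For context, the simplest member of the direct search family, compass (coordinate) search, illustrates the derivative-free polling idea: probe ± each coordinate direction and contract the step when no probe improves. This is a generic sketch, not the proposed transformation-plus-simplex algorithm:

```python
def compass_search(f, x0, step=1.0, tol=1e-6, max_evals=10000):
    """Derivative-free direct search: poll +/- each coordinate, shrink on failure."""
    x, fx = list(x0), f(x0)
    evals = 1
    while step > tol and evals < max_evals:
        improved = False
        for i in range(len(x)):
            for d in (step, -step):
                y = x[:]
                y[i] += d
                fy = f(y)
                evals += 1
                if fy < fx:
                    x, fx, improved = y, fy, True
                    break
            if improved:
                break
        if not improved:
            step *= 0.5  # contract the poll radius and try again
    return x, fx

# Smooth convex example with minimizer (1, -2).
xm, fm = compass_search(lambda v: (v[0] - 1.0) ** 2 + (v[1] + 2.0) ** 2, [5.0, 5.0])
```

Like the paper's simplex operations, the contraction rule is what yields convergence to a stationary point for smooth objectives; the global exploration has to come from elsewhere (here, nothing; in the paper, the function transformation).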
Global optimization methods for engineering design
Arora, Jasbir S.
1990-01-01
The problem is to find a global minimum for Problem P. Necessary and sufficient conditions are available for local optimality. However, a global solution can be assured only under the assumption of convexity of the problem. If the constraint set S is compact and the cost function is continuous on it, existence of a global minimum is guaranteed. However, in view of the fact that no global optimality conditions are available, a global solution can be found only by an exhaustive search to satisfy the Inequality. The exhaustive search can be organized in such a way that the entire design space need not be searched for the solution, which somewhat reduces the computational burden. It is concluded that the zooming algorithm for global optimization appears to be a good alternative to stochastic methods. More testing is needed, and a general, robust, and efficient local minimizer is required. IDESIGN, which is based on a sequential quadratic programming algorithm, was used in all numerical calculations; since the feasible set keeps shrinking, a good algorithm for finding an initial feasible point is required. Such algorithms need to be developed and evaluated.
Conference on Convex Analysis and Global Optimization
Pardalos, Panos
2001-01-01
There has been much recent progress in global optimization algorithms for nonconvex continuous and discrete problems, from both a theoretical and a practical perspective. Convex analysis plays a fundamental role in the analysis and development of global optimization algorithms. This is due essentially to the fact that virtually all nonconvex optimization problems can be described using differences of convex functions and differences of convex sets. A conference on Convex Analysis and Global Optimization was held during June 5-9, 2000 at Pythagorion, Samos, Greece. The conference honored the memory of C. Caratheodory (1873-1950) and was endorsed by the Mathematical Programming Society (MPS) and by the Society for Industrial and Applied Mathematics (SIAM) Activity Group in Optimization. The conference was sponsored by the European Union (through the EPEAEK program), the Department of Mathematics of the Aegean University and the Center for Applied Optimization of the University of Florida, by th...
Deterministic global optimization an introduction to the diagonal approach
Sergeyev, Yaroslav D
2017-01-01
This book begins with a concentrated introduction into deterministic global optimization and moves forward to present new original results from the authors who are well known experts in the field. Multiextremal continuous problems that have an unknown structure with Lipschitz objective functions and functions having the first Lipschitz derivatives defined over hyperintervals are examined. A class of algorithms using several Lipschitz constants is introduced which has its origins in the DIRECT (DIviding RECTangles) method. This new class is based on an efficient strategy that is applied for the search domain partitioning. In addition a survey on derivative free methods and methods using the first derivatives is given for both one-dimensional and multi-dimensional cases. Non-smooth and smooth minorants and acceleration techniques that can speed up several classes of global optimization methods with examples of applications and problems arising in numerical testing of global optimization algorithms are discussed...
Generation of Articulated Mechanisms by Optimization Techniques
Kawamoto, Atsushi
2004-01-01
optimization [Paper 2] 3. Branch and bound global optimization [Paper 3] 4. Path-generation problems [Paper 4] In terms of the objective of the articulated mechanism design problems, the first to third papers deal with maximization of output displacement, while the fourth paper solves prescribed path-generation problems. From a mathematical programming point of view, the methods proposed in the first and third papers are categorized as deterministic global optimization, while those of the second and fourth papers are categorized as gradient-based local optimization. With respect to design variables, only … directly affects the result of the associated sensitivity analysis. Another critical issue for mechanism design is the concept of mechanical degrees of freedom, and this should also be considered for obtaining a proper articulated mechanism. The thesis treats this inherently discrete criterion in some …
Application of surrogate-based global optimization to aerodynamic design
Pérez, Esther
2016-01-01
Aerodynamic design, like many other engineering applications, is increasingly relying on computational power. The growing need for multi-disciplinarity and high fidelity in design optimization for industrial applications requires a huge number of repeated simulations in order to find an optimal design candidate. The main drawback is that each simulation can be computationally expensive – this becomes an even bigger issue when used within parametric studies, automated search or optimization loops, which typically may require thousands of analysis evaluations. The core issue of a design-optimization problem is the search process involved. However, when facing complex problems, the high-dimensionality of the design space and the high-multi-modality of the target functions cannot be tackled with standard techniques. In recent years, global optimization using meta-models has been widely applied to design exploration in order to rapidly investigate the design space and find sub-optimal solutions. Indeed, surrogat...
A Novel Particle Swarm Optimization Algorithm for Global Optimization.
Wang, Chun-Feng; Liu, Kui
2016-01-01
Particle Swarm Optimization (PSO) is a recently developed optimization method which has attracted the interest of researchers in various areas due to its simplicity and effectiveness, and many variants have been proposed. In this paper, a novel Particle Swarm Optimization algorithm is presented in which the information of the best neighbor of each particle and of the best particle of the entire population in the current iteration is considered. Meanwhile, to avoid premature convergence, an abandonment mechanism is used. Furthermore, to improve the global convergence speed of the algorithm, a chaotic search is adopted around the best solution of the current iteration. To verify the performance of the algorithm, standard test functions have been employed. The experimental results show that the algorithm is much more robust and efficient than some existing Particle Swarm Optimization algorithms.
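For reference, the canonical global-best PSO that such variants extend fits in a few lines. The inertia and acceleration coefficients below are common defaults from the literature, not the paper's settings:

```python
import random

def pso(f, dim, n_particles=30, n_iter=200, w=0.72, c1=1.49, c2=1.49, bound=5.0):
    """Canonical global-best PSO for minimizing f on [-bound, bound]^dim."""
    random.seed(3)
    xs = [[random.uniform(-bound, bound) for _ in range(dim)]
          for _ in range(n_particles)]
    vs = [[0.0] * dim for _ in range(n_particles)]
    pbest = [x[:] for x in xs]                 # personal bests
    pbest_f = [f(x) for x in xs]
    g = min(range(n_particles), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[g][:], pbest_f[g]   # global best
    for _ in range(n_iter):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                # Inertia + cognitive pull (pbest) + social pull (gbest).
                vs[i][d] = (w * vs[i][d]
                            + c1 * r1 * (pbest[i][d] - xs[i][d])
                            + c2 * r2 * (gbest[d] - xs[i][d]))
                xs[i][d] += vs[i][d]
            fx = f(xs[i])
            if fx < pbest_f[i]:
                pbest[i], pbest_f[i] = xs[i][:], fx
                if fx < gbest_f:
                    gbest, gbest_f = xs[i][:], fx
    return gbest, gbest_f

best, best_val = pso(lambda v: sum(x * x for x in v), dim=5)
```

The paper's variant modifies the social term (best neighbor plus iteration best), abandons stagnant particles, and adds a chaotic search around gbest.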
Global Optimization of Nonlinear Blend-Scheduling Problems
Pedro A. Castillo Castillo
2017-04-01
The scheduling of gasoline-blending operations is an important problem in the oil refining industry. This problem not only exhibits the combinatorial nature that is intrinsic to scheduling problems, but also non-convex nonlinear behavior, due to the blending of various materials with different quality properties. In this work, a global optimization algorithm is proposed to solve a previously published continuous-time mixed-integer nonlinear scheduling model for gasoline blending. The model includes blend recipe optimization, the distribution problem, and several important operational features and constraints. The algorithm employs piecewise McCormick relaxation (PMCR) and the normalized multiparametric disaggregation technique (NMDT) to compute estimates of the global optimum. These techniques partition the domain of one of the variables in a bilinear term and generate convex relaxations for each partition. By increasing the number of partitions and reducing the domain of the variables, the algorithm is able to refine the estimates of the global solution. The algorithm is compared to two commercial global solvers and two heuristic methods by solving four examples from the literature. Results show that the proposed global optimization algorithm performs on par with commercial solvers but is not as fast as the heuristic approaches.
Competing intelligent search agents in global optimization
Streltsov, S.; Vakili, P. [Boston Univ., MA (United States); Muchnik, I. [Rutgers Univ., Piscataway, NJ (United States)
1996-12-31
In this paper we present a new search methodology that we view as a development of the intelligent agent approach to the analysis of complex systems. The main idea is to view the search process as a competition mechanism between concurrent adaptive intelligent agents. Agents cooperate in achieving a common search goal and at the same time compete with each other for computational resources. We propose a statistical selection approach to resource allocation between agents that leads to simple and, on average, efficient index allocation policies. We use global optimization as the most general setting that encompasses many types of search problems, and show how the proposed selection policies can be used to improve and combine various global optimization methods.
Global Optimization Ensemble Model for Classification Methods
Anwar, Hina; Qamar, Usman; Muzaffar Qureshi, Abdul Wahab
2014-01-01
Supervised learning is the process of data mining for deducing rules from training datasets. A broad array of supervised learning algorithms exists, every one of them with its own advantages and drawbacks. There are some basic issues that affect the accuracy of a classifier while solving a supervised learning problem, like the bias-variance tradeoff, the dimensionality of the input space, and noise in the input data space. All these problems affect the accuracy of a classifier and are the reason that there is no globally optimal method for classification. Nor is there any generalized improvement method that can increase the accuracy of any classifier while addressing all the problems stated above. This paper proposes a global optimization ensemble model for classification methods (GMC) that can improve the overall accuracy for supervised learning problems. The experimental results on various public datasets showed that the proposed model improved the accuracy of the classification models from 1% to 30%, depending upon the algorithm complexity. PMID:24883382
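The simplest form of the ensemble combination step can be sketched as a plurality vote over base classifiers; GMC's actual combination is optimized and may be weighted, so the vote below and the hypothetical classifier outputs are only illustrative:

```python
from collections import Counter

def majority_vote(predictions):
    """Combine per-classifier label predictions sample-by-sample by
    plurality vote -- a plain stand-in for an ensemble combination step."""
    combined = []
    for labels in zip(*predictions):
        combined.append(Counter(labels).most_common(1)[0][0])
    return combined

# Three hypothetical base classifiers predicting labels for 5 samples.
clf_a = [1, 0, 1, 1, 0]
clf_b = [1, 1, 1, 0, 0]
clf_c = [0, 0, 1, 1, 1]
print(majority_vote([clf_a, clf_b, clf_c]))  # -> [1, 0, 1, 1, 0]
```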
Solving global optimization problems on GPU cluster
Barkalov, Konstantin; Gergel, Victor; Lebedev, Ilya [Lobachevsky State University of Nizhni Novgorod, Gagarin Avenue 23, 603950 Nizhni Novgorod (Russian Federation)
2016-06-08
The paper presents the results of an investigation of a parallel global optimization algorithm combined with a dimension reduction scheme. This allows solving multidimensional problems by reducing them to data-independent subproblems of smaller dimension that are solved in parallel. The new element implemented in the research consists in using several graphics accelerators at different computing nodes. The paper also includes results of solving problems from the well-known multiextremal test class GKLS on the Lobachevsky supercomputer using tens of thousands of GPU cores.
Efficient reanalysis techniques for robust topology optimization
Amir, Oded; Sigmund, Ole; Lazarov, Boyan Stefanov
2012-01-01
efficient robust topology optimization procedures based on reanalysis techniques. The approach is demonstrated on two compliant mechanism design problems where robust design is achieved by employing either a worst case formulation or a stochastic formulation. It is shown that the time spent on finite...
A perturbed martingale approach to global optimization
Sarkar, Saikat [Computational Mechanics Lab, Department of Civil Engineering, Indian Institute of Science, Bangalore 560012 (India); Roy, Debasish, E-mail: royd@civil.iisc.ernet.in [Computational Mechanics Lab, Department of Civil Engineering, Indian Institute of Science, Bangalore 560012 (India); Vasu, Ram Mohan [Department of Instrumentation and Applied Physics, Indian Institute of Science, Bangalore 560012 (India)
2014-08-01
A new global stochastic search, guided mainly through derivative-free directional information computable from the sample statistical moments of the design variables within a Monte Carlo setup, is proposed. The search is aided by imparting to the directional update term additional layers of random perturbations referred to as ‘coalescence’ and ‘scrambling’. A selection step, constituting yet another avenue for random perturbation, completes the global search. The direction-driven nature of the search is manifest in the local extremization and coalescence components, which are posed as martingale problems that yield gain-like update terms upon discretization. As anticipated and numerically demonstrated, to a limited extent, against the problem of parameter recovery given the chaotic response histories of a couple of nonlinear oscillators, the proposed method appears to offer a more rational, more accurate and faster alternative to most available evolutionary schemes, prominently the particle swarm optimization.
Highlights:
• Evolutionary global optimization is posed as a perturbed martingale problem.
• Resulting search via additive updates is a generalization over Gateaux derivatives.
• Additional layers of random perturbation help avoid trapping at local extrema.
• The approach ensures efficient design space exploration and high accuracy.
• The method is numerically assessed via parameter recovery of chaotic oscillators.
Dual Schroedinger Equation as Global Optimization Algorithm
Huang Xiaofei; eGain Communications, Mountain View, CA 94043
2011-01-01
The dual Schroedinger equation is defined by replacing the imaginary unit i with -1 in the original equation. This paper shows that the dual equation shares the same stationary states as the original one. Different from the original, it explicitly defines a dynamic process for a system to evolve from any state to lower energy states and eventually to the lowest one. Its power as a global optimization algorithm might be used by nature for constructing atoms and molecules. It would be interesting to verify its existence in nature.
Software for the grouped optimal aggregation technique
Brown, P. M.; Shaw, G. W. (Principal Investigator)
1982-01-01
The grouped optimal aggregation technique produces minimum variance, unbiased estimates of acreage and production for countries, zones (states), or any designated collection of acreage strata. It uses yield predictions, historical acreage information, and direct acreage estimates from satellite data. The acreage strata are grouped in such a way that the ratio model over historical acreage provides a smaller variance than if the model were applied to each individual stratum. An optimal weighting matrix based on historical acreages provides the link between incomplete direct acreage estimates and the total, current acreage estimate.
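The minimum-variance principle behind the optimal weighting can be sketched in its simplest scalar form; the real technique uses a weighting matrix across grouped strata, so the two estimates and their variances below are purely hypothetical:

```python
def min_variance_combination(estimates, variances):
    """Minimum-variance unbiased combination of independent unbiased
    estimates: each weight is proportional to the inverse variance."""
    inv = [1.0 / v for v in variances]
    s = sum(inv)
    weights = [w / s for w in inv]
    combined = sum(w * e for w, e in zip(weights, estimates))
    combined_var = 1.0 / s  # never larger than the smallest input variance
    return combined, combined_var

# Hypothetical direct satellite estimate vs. historical ratio-model estimate.
est, var = min_variance_combination([120.0, 100.0], [25.0, 100.0])
```

The combined variance (20.0 here) is smaller than that of either input estimate, which is the point of aggregating the strata optimally.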
Global Optimization using Interval Analysis : Interval Optimization for Aerospace Applications
Van Kampen, E.
2010-01-01
Optimization is an important element in aerospace related research. It is encountered for example in trajectory optimization problems, such as: satellite formation flying, spacecraft re-entry optimization and airport approach and departure optimization; in control optimization, for example in
Fusion blanket design and optimization techniques
Gohar, Y.
2005-01-01
In fusion reactors, the blanket design and its characteristics have a major impact on the reactor performance, size, and economics. The selection and arrangement of the blanket materials, dimensions of the different blanket zones, and different requirements of the selected materials for a satisfactory performance are the main parameters, which define the blanket performance. These parameters translate to a large number of variables and design constraints, which need to be simultaneously considered in the blanket design process. This represents a major design challenge because of the lack of a comprehensive design tool capable of considering all these variables to define the optimum blanket design and satisfying all the design constraints for the adopted figure of merit and the blanket design criteria. The blanket design techniques of the First Wall/Blanket/Shield Design and Optimization System (BSDOS) have been developed to overcome this difficulty and to provide the state-of-the-art techniques and tools for performing blanket design and analysis. This report describes some of the BSDOS techniques and demonstrates its use. In addition, the use of the optimization technique of the BSDOS can result in a significant blanket performance enhancement and cost saving for the reactor design under consideration. In this report, examples are presented, which utilize an earlier version of the ITER solid breeder blanket design and a high power density self-cooled lithium blanket design for demonstrating some of the BSDOS blanket design techniques
Computational optimization techniques applied to microgrids planning
Gamarra, Carlos; Guerrero, Josep M.
2015-01-01
Microgrids are expected to become part of the next electric power system evolution, not only in rural and remote areas but also in urban communities. Since microgrids are expected to coexist with traditional power grids (such as district heating does with traditional heating systems), their planning process must address economic feasibility as a long-term stability guarantee. Planning a microgrid is a complex process due to existing alternatives, goals, constraints and uncertainties. Usually planning goals conflict with each other and, as a consequence, different optimization problems appear along the planning process. In this context, the technical literature about optimization techniques applied to microgrid planning has been reviewed, and guidelines for innovative planning methodologies focused on economic feasibility can be defined. Finally, some trending techniques and new...
Optimal design of RTCs in digital circuit fault self-repair based on global signal optimization
Zhang Junbin; Cai Jinyan; Meng Yafeng
2016-01-01
Since digital circuits have been widely and thoroughly applied in various fields, electronic systems are increasingly complicated and require greater reliability. Faults may occur in electronic systems in complicated environments. If immediate field repairs are not made on the faults, electronic systems will not run normally, and this will lead to serious losses. The traditional method for improving system reliability based on the redundant fault-tolerant technique has been unable to meet the requirements. Therefore, on the basis of the evolvable-hardware-based and reparation-balance-technology-based electronic circuit fault self-repair strategy proposed in our preliminary work, the optimal design of rectification circuits (RTCs) in electronic circuit fault self-repair based on global signal optimization is deeply researched in this paper. First of all, the basic theory of RTC optimal design based on global signal optimization is proposed. Secondly, relevant considerations and suitable ranges are analyzed. Then, the basic flow of RTC optimal design is researched. Eventually, a typical circuit is selected for simulation verification, and detailed simulated analysis is made of five circumstances that occur during RTC evolution. The simulation results prove that, compared with an RTC based on the conventional design method, an RTC based on the global signal optimization design method has lower hardware cost, faster circuit evolution, higher convergent precision, and a higher circuit evolution success rate. Therefore, the global-signal-optimization-based RTC optimal design method applied in the electronic circuit fault self-repair technology is proven to be feasible, effective, and advantageous.
Proposal of Evolutionary Simplex Method for Global Optimization Problem
Shimizu, Yoshiaki
To make agile decisions in a rational manner, the role of optimization engineering has drawn increasing attention under diversified customer demand. With this point of view, in this paper, we have proposed a new evolutionary method serving as an optimization technique in the paradigm of optimization engineering. The developed method has prospects for globally solving various complicated problems appearing in real-world applications. It is evolved from the conventional method known as Nelder and Mead's Simplex method by virtue of ideas borrowed from recent meta-heuristic methods such as PSO. Presenting an algorithm to handle linear inequality constraints effectively, we have validated the effectiveness of the proposed method through comparison with other methods using several benchmark problems.
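The conventional Nelder-Mead method that the proposal evolves from can be sketched compactly; this is the classic unconstrained simplex (reflection, expansion, contraction, shrink), without the paper's evolutionary or constraint-handling extensions:

```python
def nelder_mead(f, x0, step=0.5, iters=400):
    """Compact classic Nelder-Mead simplex for unconstrained minimization."""
    n = len(x0)
    simplex = [list(x0)] + [
        [x0[j] + (step if j == i else 0.0) for j in range(n)] for i in range(n)
    ]
    for _ in range(iters):
        simplex.sort(key=f)
        best, worst = simplex[0], simplex[-1]
        centroid = [sum(p[j] for p in simplex[:-1]) / n for j in range(n)]
        refl = [centroid[j] + (centroid[j] - worst[j]) for j in range(n)]
        if f(refl) < f(best):  # try expanding past the reflection point
            exp = [centroid[j] + 2.0 * (centroid[j] - worst[j]) for j in range(n)]
            simplex[-1] = exp if f(exp) < f(refl) else refl
        elif f(refl) < f(simplex[-2]):
            simplex[-1] = refl
        else:  # contract toward the worst vertex, or shrink the whole simplex
            cont = [centroid[j] + 0.5 * (worst[j] - centroid[j]) for j in range(n)]
            if f(cont) < f(worst):
                simplex[-1] = cont
            else:
                simplex = [best] + [
                    [best[j] + 0.5 * (p[j] - best[j]) for j in range(n)]
                    for p in simplex[1:]
                ]
    simplex.sort(key=f)
    return simplex[0]

xmin = nelder_mead(lambda x: (x[0] - 1.0) ** 2 + (x[1] + 2.0) ** 2, [0.0, 0.0])
```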
Qingyang Zhang
2015-02-01
Bird Mating Optimizer (BMO) is a novel meta-heuristic optimization algorithm inspired by the intelligent mating behavior of birds. However, it is still insufficient in convergence speed and quality of solution. To overcome these drawbacks, this paper proposes a hybrid algorithm (TLBMO), which is established by combining the advantages of Teaching-Learning-Based Optimization (TLBO) and Bird Mating Optimizer (BMO). The performance of TLBMO is evaluated on 23 benchmark functions and compared with seven state-of-the-art approaches, namely BMO, TLBO, Artificial Bee Colony (ABC), Particle Swarm Optimization (PSO), Fast Evolutionary Programming (FEP), Differential Evolution (DE), and Group Search Optimization (GSO). Experimental results indicate that the proposed method performs better than the other existing algorithms for global numerical optimization.
Parallel Global Optimization with the Particle Swarm Algorithm (Preprint)
Schutte, J. F; Reinbolt, J. A; Fregly, B. J; Haftka, R. T; George, A. D
2004-01-01
.... To obtain enhanced computational throughput and global search capability, we detail the coarse-grained parallelization of an increasingly popular global search method, the Particle Swarm Optimization (PSO) algorithm...
Evolutionary optimization technique for site layout planning
El Ansary, Ayman M.
2014-02-01
Solving the site layout planning problem is a challenging task. It requires an iterative approach to satisfy design requirements (e.g. energy efficiency, skyview, daylight, roads network, visual privacy, and clear access to favorite views). These design requirements vary from one project to another based on location and client preferences. In the Gulf region, the most important socio-cultural factor is visual privacy in indoor space. Hence, most of the residential houses in this region are surrounded by high fences to provide privacy, which has a direct impact on other requirements (e.g. daylight and direction to a favorite view). This paper introduces a novel technique to optimally locate and orient residential buildings to satisfy a set of design requirements. The developed technique is based on a genetic algorithm which explores the search space for possible solutions. This study considers two-dimensional site planning problems; however, it can be extended to solve three-dimensional cases. A case study is presented to demonstrate the efficiency of this technique in solving the site layout planning of simple residential dwellings.
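A toy version of the genetic search over building positions can be sketched as follows; the single separation objective stands in for the paper's real multi-criteria objectives (privacy, daylight, views), and the population size, operators, and encoding are all illustrative assumptions:

```python
import random

def ga_layout(n_buildings=3, pop_size=30, gens=100):
    """Toy GA: place buildings (x, y points) in a unit lot to maximize the
    minimum pairwise separation. Chromosome = flat list of coordinates."""
    random.seed(0)

    def fitness(layout):
        dmin = float("inf")
        for i in range(n_buildings):
            for j in range(i + 1, n_buildings):
                dx = layout[2 * i] - layout[2 * j]
                dy = layout[2 * i + 1] - layout[2 * j + 1]
                dmin = min(dmin, (dx * dx + dy * dy) ** 0.5)
        return dmin

    pop = [[random.random() for _ in range(2 * n_buildings)] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness, reverse=True)
        elite = pop[: pop_size // 2]          # keep the better half
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = random.sample(elite, 2)
            cut = random.randrange(1, 2 * n_buildings)  # one-point crossover
            child = a[:cut] + b[cut:]
            k = random.randrange(2 * n_buildings)       # point mutation
            child[k] = min(1.0, max(0.0, child[k] + random.gauss(0, 0.1)))
            children.append(child)
        pop = elite + children
    pop.sort(key=fitness, reverse=True)
    return pop[0], fitness(pop[0])

best_layout, sep = ga_layout()
```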
Paasche, H.; Tronicke, J.
2012-04-01
In many near surface geophysical applications, multiple tomographic data sets are routinely acquired to explore subsurface structures and parameters. Linking the model generation process of multi-method geophysical data sets can significantly reduce ambiguities in geophysical data analysis and model interpretation. Most geophysical inversion approaches rely on local search optimization methods used to find an optimal model in the vicinity of a user-given starting model. The final solution may critically depend on the initial model. Alternatively, global optimization (GO) methods have been used to invert geophysical data. They explore the solution space in more detail and determine the optimal model independently of the starting model. Additionally, they can be used to find sets of optimal models, allowing a further analysis of model parameter uncertainties. Here we employ particle swarm optimization (PSO) to realize the global optimization of tomographic data. PSO is an emergent method based on swarm intelligence, characterized by fast and robust convergence towards optimal solutions. The fundamental principle of PSO is inspired by nature, since the algorithm mimics the behavior of a flock of birds searching for food in a search space. In PSO, a number of particles cruise a multi-dimensional solution space striving to find optimal model solutions explaining the acquired data. The particles communicate their positions and success and direct their movement according to the position of the currently most successful particle of the swarm. The success of a particle, i.e. the quality of the model currently found by a particle, must be uniquely quantifiable to identify the swarm leader. When jointly inverting disparate data sets, the optimization solution has to satisfy multiple optimization objectives, at least one for each data set. Unique determination of the most successful particle currently leading the swarm is not possible. Instead, only statements about the Pareto
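The Pareto reasoning the abstract ends on can be sketched with a plain non-dominance filter; the objective vectors below are hypothetical per-data-set misfits, not values from the study:

```python
def dominates(a, b):
    """a dominates b if it is no worse in every objective and strictly
    better in at least one (minimization)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    """Keep only the non-dominated objective vectors."""
    return [p for p in points if not any(dominates(q, p) for q in points if q != p)]

# Hypothetical misfits (data set 1, data set 2) for four swarm members.
objs = [(1.0, 4.0), (2.0, 2.0), (3.0, 3.0), (4.0, 1.0)]
print(pareto_front(objs))  # -> [(1.0, 4.0), (2.0, 2.0), (4.0, 1.0)]
```

With multiple objectives, only such a non-dominated set can be stated; no single swarm leader is uniquely determined.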
Identification of metabolic system parameters using global optimization methods
Gatzke Edward P
2006-01-01
Abstract Background The problem of estimating the parameters of dynamic models of complex biological systems from time series data is becoming increasingly important. Methods and results Particular consideration is given to metabolic systems that are formulated as Generalized Mass Action (GMA) models. The estimation problem is posed as a global optimization task, for which novel techniques can be applied to determine the best set of parameter values given the measured responses of the biological system. The challenge is that this task is nonconvex. Nonetheless, deterministic optimization techniques can be used to find a global solution that best reconciles the model parameters and measurements. Specifically, the paper employs branch-and-bound principles to identify the best set of model parameters from observed time course data and illustrates this method with an existing model of the fermentation pathway in Saccharomyces cerevisiae. This is a relatively simple yet representative system with five dependent states and a total of 19 unknown parameters whose values are to be determined. Conclusion The efficacy of the branch-and-reduce algorithm is illustrated by the S. cerevisiae example. The method described in this paper is likely to be widely applicable in the dynamic modeling of metabolic networks.
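The branch-and-bound principle can be sketched in one dimension with a Lipschitz-based lower bound; this generic pruning scheme only illustrates the idea and is unrelated to the paper's branch-and-reduce solver or its GMA model (the test function and Lipschitz constant are assumptions):

```python
def branch_and_bound(f, lo, hi, lipschitz, tol=1e-3):
    """Deterministic 1-D global minimization: evaluate the midpoint of each
    interval, prune intervals whose lower bound f(mid) - L*(width/2) cannot
    beat the incumbent by more than tol, and split the rest."""
    best_x, best_val = lo, f(lo)
    stack = [(lo, hi)]
    while stack:
        a, b = stack.pop()
        mid = 0.5 * (a + b)
        val = f(mid)
        if val < best_val:
            best_x, best_val = mid, val
        if val - lipschitz * 0.5 * (b - a) < best_val - tol:
            stack.append((a, mid))  # interval may still hide a better point
            stack.append((mid, b))
    return best_x, best_val

# Nonconvex quartic with global minima at x = +/-1 (f = 0).
quartic = lambda x: (x * x - 1.0) ** 2
xstar, fstar = branch_and_bound(quartic, -3.0, 3.0, lipschitz=100.0)
```

On termination the incumbent is within tol of the global minimum, which is the convergence guarantee that distinguishes deterministic methods from heuristics.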
Parallel halftoning technique using dot diffusion optimization
Molina-Garcia, Javier; Ponomaryov, Volodymyr I.; Reyes-Reyes, Rogelio; Cruz-Ramos, Clara
2017-05-01
In this paper, a novel approach for halftone images is proposed and implemented for images obtained by the Dot Diffusion (DD) method. The designed technique is based on optimizing the so-called class matrix used in the DD algorithm: it generates new versions of the class matrix containing no baron and near-baron entries, in order to minimize inconsistencies during the distribution of the error. Class matrices with different properties are proposed, each designed for one of two applications: applications where inverse halftoning is necessary, and applications where it is not required. The proposed method has been implemented on a GPU (NVIDIA GeForce GTX 750 Ti) and on multicore processors (AMD FX(tm)-6300 Six-Core Processor and Intel Core i5-4200U), using CUDA and OpenCV on a PC with Linux. Experimental results have shown that the novel framework generates halftone images and inverse halftone images of good quality. The simulation results using parallel architectures have demonstrated the efficiency of the novel technique when implemented in real-time processing.
Techniques for optimizing inerting in electron processors
Rangwalla, I.J.; Korn, D.J.; Nablo, S.V.
1993-01-01
The design of an ''inert gas'' distribution system in an electron processor must satisfy a number of requirements. The first of these is the elimination or control of beam-produced ozone and NOx, which can be transported from the process zone by the product into the work area. Since the tolerable levels for O3 in occupied areas around the processor are low, this requires either control of O3 in the beam-heated process zone, or exhausting and dilution of the gas at the processor exit. The second requirement of the inerting system is to provide a suitable environment for completing efficient, free-radical-initiated addition polymerization. The competition between radical loss through de-excitation and that from O2 quenching must be understood. This group has used gas chromatographic analysis of electron-cured coatings to study the trade-offs of delivered dose, dose rate and O2 concentrations in the process zone to determine the tolerable ranges of parameter excursions for production quality control purposes. These techniques are described for an ink coating system on paperboard, where a broad range of process parameters (dose, dose rate, O2 concentration) has been studied. It is then shown how the technique is used to optimize the use of higher purity (10-100 ppm O2) nitrogen gas for inerting, in combination with lower purity (2-20,000 ppm O2) non-cryogenically produced gas, as from membrane or pressure-swing-adsorption generators. (author)
3rd World Congress on Global Optimization in Engineering & Science
Ruan, Ning; Xing, Wenxun; WCGO-III; Advances in Global Optimization
2015-01-01
This proceedings volume addresses advances in global optimization—a multidisciplinary research field that deals with the analysis, characterization, and computation of global minima and/or maxima of nonlinear, non-convex, and nonsmooth functions in continuous or discrete forms. The volume contains selected papers from the third biennial World Congress on Global Optimization in Engineering & Science (WCGO), held in the Yellow Mountains, Anhui, China on July 8-12, 2013. The papers fall into eight topical sections: mathematical programming; combinatorial optimization; duality theory; topology optimization; variational inequalities and complementarity problems; numerical optimization; stochastic models and simulation; and complex simulation and supply chain analysis.
4th International Conference on Frontiers in Global Optimization
Pardalos, Panos
2004-01-01
Global Optimization has emerged as one of the most exciting new areas of mathematical programming. Global optimization has attracted wide attention from many fields in the past few years, due to the success of new algorithms for addressing previously intractable problems from diverse areas such as computational chemistry and biology, biomedicine, structural optimization, computer sciences, operations research, economics, and engineering design and control. This book contains refereed invited papers submitted at the 4th international conference on Frontiers in Global Optimization held at Santorini, Greece during June 8-12, 2003. Santorini is one of the few sites of Greece with wild beauty created by the explosion of a volcano which is in the middle of the gulf of the island. The mystic landscape with its numerous multi-extrema was an inspiring location, particularly for researchers working on global optimization. The three previous conferences on "Recent Advances in Global Optimization", "State-of-the-...
Narinder Singh
2018-03-01
The quest for an efficient nature-inspired optimization technique has continued over the last few decades. In this paper, a hybrid nature-inspired optimization technique has been proposed. The hybrid algorithm has been constructed using the Mean Grey Wolf Optimizer (MGWO) and the Whale Optimizer Algorithm (WOA). We have utilized the spiral equation of the Whale Optimizer Algorithm for two procedures in the Hybrid Approach GWO (HAGWO) algorithm: (i) firstly, we used the spiral equation in the Grey Wolf Optimizer algorithm to balance the exploitation and the exploration process in the new hybrid approach; and (ii) secondly, we also applied this equation to the whole population in order to refrain from premature convergence and trapping in local minima. The feasibility and effectiveness of the hybrid algorithm have been tested by solving some standard benchmarks, XOR, Balloon, Iris, Breast Cancer, Welded Beam Design and Pressure Vessel Design problems, and comparing the results with those obtained through other metaheuristics. The solutions prove that the new hybrid variant has stronger stability, a faster convergence rate and higher computational accuracy than other nature-inspired metaheuristics on the maximum number of problems, and can successfully solve constrained nonlinear optimization problems in practice.
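The WOA spiral equation referred to above has the standard log-spiral form X' = D · e^(b·l) · cos(2πl) + X*, with D the distance to the best solution, b the spiral shape constant, and l drawn from [-1, 1]; a minimal per-coordinate sketch (the specific values are arbitrary):

```python
import math
import random

def spiral_update(x, best, b=1.0):
    """WOA log-spiral move: each coordinate spirals toward the current
    best solution X*: X' = |X* - X| * exp(b*l) * cos(2*pi*l) + X*."""
    l = random.uniform(-1.0, 1.0)  # one draw shared across coordinates
    return [abs(best[d] - x[d]) * math.exp(b * l) * math.cos(2.0 * math.pi * l)
            + best[d] for d in range(len(x))]

random.seed(3)
x, best = [4.0, -2.0], [1.0, 1.0]
moved = spiral_update(x, best)
```

Since |e^(b·l)·cos(2πl)| ≤ e^b for l in [-1, 1], the move always stays within a bounded spiral band around the best solution.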
Decentralized Control Using Global Optimization (DCGO) (Preprint)
Flint, Matthew; Khovanova, Tanya; Curry, Michael
2007-01-01
The coordination of a team of distributed air vehicles requires a complex optimization, balancing limited communication bandwidths, non-instantaneous planning times and network delays, while at the...
Application of Nontraditional Optimization Techniques for Airfoil Shape Optimization
R. Mukesh
2012-01-01
The choice of optimization algorithm is one of the most important factors that strongly influence the fidelity of the solution in an aerodynamic shape optimization problem. Nowadays, various optimization methods, such as genetic algorithms (GA), simulated annealing (SA), and particle swarm optimization (PSO), are widely employed to solve aerodynamic shape optimization problems. In addition to the optimization method, the geometry parameterization is an important factor to be considered during the aerodynamic shape optimization process. The objective of this work is to introduce an approach for describing general airfoil geometry using twelve parameters, by representing its shape as a polynomial function, and to couple this approach with a flow solver and optimization algorithms. An aerodynamic shape optimization problem is formulated for the NACA 0012 airfoil and solved using the methods of simulated annealing and genetic algorithm at a 5.0 deg angle of attack. The results show that the simulated annealing optimization scheme is more effective in finding the optimum solution among the various possible solutions. It is also found that SA shows stronger exploitation characteristics, while GA is considered the more effective explorer.
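The simulated annealing scheme can be sketched generically; this is plain SA on an assumed multimodal test function, not the paper's airfoil objective or its twelve-parameter geometry (all tuning constants are illustrative):

```python
import math
import random

def simulated_annealing(f, x0, iters=5000, t0=1.0, cooling=0.999, step=0.5):
    """Plain SA: always accept improving moves, and accept worsening moves
    with probability exp(-delta/T) so the search can escape local minima."""
    random.seed(7)
    x, fx = list(x0), f(x0)
    best, fbest = list(x0), fx
    t = t0
    for _ in range(iters):
        cand = [v + random.gauss(0.0, step) for v in x]
        fc = f(cand)
        if fc < fx or random.random() < math.exp(-(fc - fx) / t):
            x, fx = cand, fc
            if fx < fbest:
                best, fbest = list(x), fx
        t *= cooling  # geometric cooling schedule
    return best, fbest

# Multimodal 1-D test: many local minima, global minimum at x = 0.
f = lambda x: x[0] * x[0] + 10.0 * (1.0 - math.cos(2.0 * math.pi * x[0]))
best, fbest = simulated_annealing(f, [3.0])
```

The occasional uphill acceptances early on, while the temperature is high, are what give SA the exploitation-with-escape behavior the comparison in the abstract refers to.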
Interactive Cosegmentation Using Global and Local Energy Optimization
Dong, Xingping; Shen, Jianbing; Shao, Ling; Yang, Ming-Hsuan
2015-01-01
We propose a novel interactive cosegmentation method using global and local energy optimization. The global energy includes two terms: 1) the global scribbled energy and 2) the interimage energy. The first one utilizes the user scribbles to build the Gaussian mixture model and improve the cosegmentation performance. The second one is a global constraint, which attempts to match the histograms of common objects. To minimize the local energy, we apply the spline regression to learn the smoothne...
Spatiotemporal radiotherapy planning using a global optimization approach
Adibi, Ali; Salari, Ehsan
2018-02-01
This paper aims at quantifying the extent of potential therapeutic gain, measured using biologically effective dose (BED), that can be achieved by altering the radiation dose distribution over treatment sessions in fractionated radiotherapy. To that end, a spatiotemporally integrated planning approach is developed, where the spatial and temporal dose modulations are optimized simultaneously. The concept of equivalent uniform BED (EUBED) is used to quantify and compare the clinical quality of spatiotemporally heterogeneous dose distributions in target and critical structures. This gives rise to a large-scale non-convex treatment-plan optimization problem, which is solved using global optimization techniques. The proposed spatiotemporal planning approach is tested on two stylized cancer cases resembling two different tumor sites and sensitivity analysis is performed for radio-biological and EUBED parameters. Numerical results validate that spatiotemporal plans are capable of delivering a larger BED to the target volume without increasing the BED in critical structures compared to conventional time-invariant plans. In particular, this additional gain is attributed to the irradiation of different regions of the target volume at different treatment sessions. Additionally, the trade-off between the potential therapeutic gain and the number of distinct dose distributions is quantified, which suggests a diminishing marginal gain as the number of dose distributions increases.
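The BED quantity at the center of the comparison follows the standard linear-quadratic form BED = n·d·(1 + d/(α/β)); a minimal per-voxel sketch (EUBED, which the paper uses, additionally averages over voxels, and the schedules below are generic textbook examples, not the paper's cases):

```python
def bed(n_fractions, dose_per_fraction, alpha_beta):
    """Biologically effective dose of a uniform fractionation schedule,
    standard linear-quadratic form: BED = n * d * (1 + d / (alpha/beta))."""
    d = dose_per_fraction
    return n_fractions * d * (1.0 + d / alpha_beta)

# Same 60 Gy physical dose, two schedules; BED differs with fraction size.
conventional = bed(30, 2.0, 10.0)      # 30 x 2 Gy -> 72.0 Gy BED
hypofractionated = bed(15, 4.0, 10.0)  # 15 x 4 Gy -> 84.0 Gy BED
```

This dependence of BED on how the dose is split across sessions is exactly the degree of freedom the spatiotemporal plans exploit.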
Theory and Algorithms for Global/Local Design Optimization
Haftka, Raphael T
2004-01-01
... the component and overall design as well as on exploration of global optimization algorithms. In the former category, heuristic decomposition was followed with proof that it solves the original problem...
Theory and Algorithms for Global/Local Design Optimization
Watson, Layne T; Guerdal, Zafer; Haftka, Raphael T
2005-01-01
The motivating application for this research is the global/local optimal design of composite aircraft structures such as wings and fuselages, but the theory and algorithms are more widely applicable...
Global optimization of silicon nanowires for efficient parametric processes
Vukovic, Dragana; Xu, Jing; Mørk, Jesper
2013-01-01
We present a global optimization of silicon nanowires for parametric single-pump mixing. For the first time, the effect of surface roughness-induced loss is included in the analysis, significantly influencing the optimum waveguide dimensions.
Global optimization framework for solar building design
Silva, N.; Alves, N.; Pascoal-Faria, P.
2017-07-01
The generative modeling paradigm is a shift from static models to flexible models. It describes a modeling process using functions, methods and operators. The result is an algorithmic description of the construction process. Each evaluation of such an algorithm creates a model instance, which depends on its input parameters (width, height, volume, roof angle, orientation, location). These values are normally chosen according to aesthetic aspects and style. In this study, the model's parameters are automatically generated according to an objective function. A generative model can be optimized according to its parameters; in this way, the best solution for a constrained problem is determined. Besides the establishment of an overall framework design, this work consists of the identification of different building shapes and their main parameters, the creation of an algorithmic description for these main shapes, and the formulation of the objective function with respect to a building's energy consumption (solar energy, heating and insulation). Additionally, the conception of an optimization pipeline, combining an energy calculation tool with a geometric scripting engine, is presented. The methods developed lead to an automated and optimized 3D shape generation for the projected building (based on the desired conditions and according to specific constraints). The approach proposed will help in the construction of real buildings that consume less energy, contributing to a more sustainable world.
Optimal placement of FACTS devices using optimization techniques: A review
Gaur, Dipesh; Mathew, Lini
2018-03-01
Modern power systems face overloading problems, especially in transmission networks that operate at their maximum limits. Today's power system networks tend to become unstable and prone to collapse under disturbances. Flexible AC Transmission Systems (FACTS) provide solutions to problems such as line overloading, voltage instability, losses and power flow control, and can play an important role in improving the static and dynamic performance of a power system. However, FACTS devices require a high initial investment; therefore, their location, type and rating are vital and should be optimized to place them in the network for maximum benefit. In this paper, different optimization methods such as Particle Swarm Optimization (PSO) and Genetic Algorithm (GA) are discussed and compared for determining the optimal location, type and rating of devices. FACTS devices such as the Thyristor Controlled Series Compensator (TCSC), Static Var Compensator (SVC) and Static Synchronous Compensator (STATCOM) are considered here. The effects of these FACTS controllers on different IEEE bus network parameters, such as generation cost, active power loss and voltage stability, have been analyzed and compared among the devices.
9th International Conference on Optimization : Techniques and Applications
Wang, Song; Wu, Soon-Yi
2015-01-01
This book presents the latest research findings and state-of-the-art solutions on optimization techniques and provides new research directions and developments. Both the theoretical and practical aspects of the book will be of much benefit to experts and students in the optimization and operations research community. It selects high-quality papers from the International Conference on Optimization: Techniques and Applications (ICOTA 2013), an official conference series of POP (the Pacific Optimization Research Activity Group, which has over 500 active members). These state-of-the-art works, authored by recognized experts, will contribute to the development of optimization and its applications.
Hybrid Techniques for Optimizing Complex Systems
2009-12-01
relay placement problem, we modeled the network as a mechanical system with springs and a viscous damper, a widely used approach for solving optimization...fundamental mathematical tools in many branches of physics such as fluid and solid mechanics, and general relativity [108]. More recently, several
Evolutionary optimization technique for site layout planning
El Ansary, Ayman M.; Shalaby, Mohamed
2014-01-01
of design requirements. The developed technique is based on genetic algorithm which explores the search space for possible solutions. This study considers two dimensional site planning problems. However, it can be extended to solve three dimensional cases. A
Optimal Technique in Cardiac Anesthesia Recovery
Svircevic, V.
2014-01-01
The aim of this thesis is to evaluate fast-track cardiac anesthesia techniques and investigate their impact on postoperative mortality, morbidity and quality of life. The following topics will be discussed in the thesis. (1.) Is fast track cardiac anesthesia a safe technique for cardiac surgery? (2.) Does thoracic epidural anesthesia have an effect on mortality and morbidity after cardiac surgery? (3.) Does thoracic epidural anesthesia have an effect on quality of life after cardiac surgery? ...
Optimizing human activity patterns using global sensitivity analysis.
Fairchild, Geoffrey; Hickmann, Kyle S; Mniszewski, Susan M; Del Valle, Sara Y; Hyman, James M
2014-12-01
Implementing realistic activity patterns for a population is crucial for modeling, for example, disease spread, supply and demand, and disaster response. Using the dynamic activity simulation engine, DASim, we generate schedules for a population that capture regular (e.g., working, eating, and sleeping) and irregular activities (e.g., shopping or going to the doctor). We use the sample entropy (SampEn) statistic to quantify a schedule's regularity for a population. We show how to tune an activity's regularity by adjusting SampEn, thereby making it possible to realistically design activities when creating a schedule. The tuning process sets up a computationally intractable high-dimensional optimization problem. To reduce the computational demand, we use Bayesian Gaussian process regression to compute global sensitivity indices and identify the parameters that have the greatest effect on the variance of SampEn. We use the harmony search (HS) global optimization algorithm to locate global optima. Our results show that HS combined with global sensitivity analysis can efficiently tune the SampEn statistic with few search iterations. We demonstrate how global sensitivity analysis can guide statistical emulation and global optimization algorithms to efficiently tune activities and generate realistic activity patterns. Though our tuning methods are applied to dynamic activity schedule generation, they are general and represent a significant step in the direction of automated tuning and optimization of high-dimensional computer simulations.
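The SampEn statistic tuned above can be computed directly from its definition. Below is a minimal sketch of standard sample entropy with typical defaults (m = 2, tolerance r = 0.2); it is not the DASim or paper implementation.

```python
import math

def sample_entropy(series, m=2, r=0.2):
    """Sample entropy SampEn(m, r) of a 1-D sequence.

    Counts pairs of length-m templates that match within tolerance r
    (Chebyshev distance), then the pairs that still match when extended
    to length m + 1. Lower values indicate a more regular sequence.
    """
    n = len(series)

    def count_matches(length):
        count = 0
        for i in range(n - length):
            for j in range(i + 1, n - length):
                if max(abs(series[i + k] - series[j + k])
                       for k in range(length)) <= r:
                    count += 1
        return count

    b = count_matches(m)      # length-m template matches
    a = count_matches(m + 1)  # length-(m+1) template matches
    if a == 0 or b == 0:
        return float("inf")   # undefined for too-short or too-irregular data
    return -math.log(a / b)

# A strictly periodic signal is maximally regular, so SampEn is near zero.
periodic = [0.0, 1.0] * 50
val = sample_entropy(periodic, m=2, r=0.2)
print(val)
```

A tuning loop like the paper's would then adjust schedule parameters until `val` reaches a target regularity.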
Computational Approaches to Simulation and Optimization of Global Aircraft Trajectories
Ng, Hok Kwan; Sridhar, Banavar
2016-01-01
This study examines three possible approaches to improving the speed of generating wind-optimal routes for air traffic at the national or global level: (a) using the resources of a supercomputer, (b) running the computations on multiple commercially available computers, and (c) implementing the same algorithms in NASA's Future ATM Concepts Evaluation Tool (FACET); each is compared to a standard implementation run on a single CPU. Wind-optimal aircraft trajectories are computed using global air traffic schedules. The run time and wait time on the supercomputer for trajectory optimization, using numbers of CPUs ranging from 80 to 10,240 units, are compared with the total computational time for running the same computation on a single desktop computer and on multiple commercially available computers, to assess the potential computational enhancement from parallel processing on computer clusters. This study also re-implements the trajectory optimization algorithm to further reduce computational time through algorithm modifications, and integrates it with FACET so that the new features, which calculate time-optimal routes between worldwide airport pairs in a wind field, can be used with existing FACET applications. The implementations of the trajectory optimization algorithms use the MATLAB, Python, and Java programming languages. The performance evaluations compare computational efficiencies and are based on the potential application of optimized trajectories. The paper shows that, in the absence of special privileges on a supercomputer, a cluster of commercially available computers provides a feasible approach for national and global air traffic system studies.
A novel technique for active vibration control, based on optimal
In the last few decades, researchers have proposed many control techniques to suppress unwanted vibrations in a structure. In this work, a novel and simple technique is proposed for the active vibration control. In this technique, an optimal tracking control is employed to suppress vibrations in a structure by simultaneously ...
Complex energy system management using optimization techniques
Bridgeman, Stuart; Hurdowar-Castro, Diana; Allen, Rick; Olason, Tryggvi; Welt, Francois
2010-09-15
Modern energy systems are often very complex with respect to the mix of generation sources, energy storage, transmission, and avenues to market. Historically, power was provided by government organizations to load centers, and pricing was set in a regulatory manner. In recent years, this process has been displaced by the independent system operator (ISO). This complexity makes the operation of these systems very difficult, since the components of the system are interdependent. Consequently, computer-based large-scale simulation and optimization methods such as Decision Support Systems (DSS) are now being used. This paper discusses the application of a DSS to operations and planning systems.
Global Optimization for Bus Line Timetable Setting Problem
Qun Chen
2014-01-01
This paper defines the bus timetable setting problem over time periods divided according to passenger flow intensity. It is assumed that passengers arrive uniformly and that bus runs are evenly spaced; the problem is then to determine the assignment of bus runs to each time period so as to minimize the total waiting time of passengers on platforms, given the total number of runs. For this multistage decision problem, a dynamic programming algorithm is designed, and global optimization procedures using dynamic programming are developed. A numerical example of optimizing the bus run assignment for a single line demonstrates the efficiency of the proposed methodology, showing that optimizing bus departure times using dynamic programming can save computational time and find the globally optimal solution.
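The multistage structure described above lends itself to a small dynamic program. The sketch below is an illustrative reconstruction under the abstract's assumptions (uniform arrivals at rate λ per period, evenly spaced runs, hence total waiting λT²/(2n) in a period of length T served by n runs); it is not the paper's code.

```python
def assign_runs(intensities, durations, total_runs):
    """Distribute a fixed number of bus runs across time periods to
    minimize total passenger waiting time, by dynamic programming.

    Waiting in period t served by n evenly spaced runs is
    intensities[t] * durations[t]**2 / (2 * n).
    """
    periods = len(intensities)

    def wait(t, n):
        return intensities[t] * durations[t] ** 2 / (2 * n)

    # best[k] = (cost, allocation) using exactly k runs over periods so far
    best = {0: (0.0, [])}
    for t in range(periods):
        nxt = {}
        for used, (cost, alloc) in best.items():
            # leave at least one run for each remaining period
            for n in range(1, total_runs - used - (periods - t - 1) + 1):
                c = cost + wait(t, n)
                key = used + n
                if key not in nxt or c < nxt[key][0]:
                    nxt[key] = (c, alloc + [n])
        best = nxt
    return best[total_runs]

# Three 60-minute periods; the middle one is four times busier.
cost, runs = assign_runs([10, 40, 10], [60, 60, 60], 12)
print(runs, cost)  # optimal split is [3, 6, 3] with total waiting 24000.0
```

Runs end up roughly proportional to the square root of the flow intensity, as expected from the waiting-time formula.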
A dynamic global and local combined particle swarm optimization algorithm
Jiao Bin; Lian Zhigang; Chen Qunxian
2009-01-01
The particle swarm optimization (PSO) algorithm has been developing rapidly and many results have been reported. PSO has shown important advantages, providing a high speed of convergence on specific problems, but it has a tendency to get stuck in a near-optimal solution, and one may find it difficult to improve solution accuracy by fine tuning. This paper presents a dynamic global and local combined particle swarm optimization (DGLCPSO) algorithm to improve the performance of the original PSO, in which all particles dynamically share the best information of the local particle, global particle and group particles. It is tested on a set of eight benchmark functions with different dimensions and compared with the original PSO. Experimental results indicate that the DGLCPSO algorithm significantly improves search performance on the benchmark functions, showing the effectiveness of the algorithm in solving optimization problems.
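For reference, the textbook global-best PSO scheme that variants such as DGLCPSO build on can be sketched as follows; the parameter values (w, c1, c2) are common defaults, not those used in the paper.

```python
import random

def pso(f, bounds, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal global-best particle swarm optimizer (minimization).

    Each particle is pulled toward its own best position and the
    swarm's global best; the inertia weight w damps the velocity.
    """
    rng = random.Random(seed)
    dim = len(bounds)
    pos = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]

    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                lo, hi = bounds[d]
                pos[i][d] = min(max(pos[i][d] + vel[i][d], lo), hi)
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# Sphere function: global minimum 0 at the origin.
best, val = pso(lambda x: sum(t * t for t in x), [(-5, 5)] * 3)
print(best, val)
```

The "stuck near an optimum" behaviour the abstract mentions comes from all particles collapsing onto `gbest`; DGLCPSO-style information sharing is one remedy.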
Bayer Digester Optimization Studies using Computer Techniques
Kotte, Jan J.; Schleider, Victor H.
Theoretically required heat transfer performance by the multistaged flash heat reclaim system of a high pressure Bayer digester unit is determined for various conditions of discharge temperature, excess flash vapor and indirect steam addition. Solution of simultaneous heat balances around the digester vessels and the heat reclaim system yields the magnitude of available heat for representation of each case on a temperature-enthalpy diagram, where graphical fit of the number of flash stages fixes the heater requirements. Both the heat balances and the trial-and-error graphical solution are adapted to solution by digital computer techniques.
A branch and bound algorithm for the global optimization of Hessian Lipschitz continuous functions
Fowkes, Jaroslav M.
2012-06-21
We present a branch and bound algorithm for the global optimization of a twice differentiable nonconvex objective function with a Lipschitz continuous Hessian over a compact, convex set. The algorithm is based on applying cubic regularisation techniques to the objective function within an overlapping branch and bound algorithm for convex constrained global optimization. Unlike other branch and bound algorithms, lower bounds are obtained via nonconvex underestimators of the function. For a numerical example, we apply the proposed branch and bound algorithm to radial basis function approximations. © 2012 Springer Science+Business Media, LLC.
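The branch-and-bound skeleton is easiest to illustrate in one dimension. The sketch below substitutes the simpler first-order Lipschitz bound f(x) ≥ f(c) − L·|x − c| for the paper's cubic, Hessian-Lipschitz underestimators; the bounding-and-splitting logic is the same. The test function and its Lipschitz constant are standard benchmark choices, not from the paper.

```python
import heapq
import math

def lipschitz_bb(f, lo, hi, L, tol=1e-4, max_iter=20000):
    """Best-first branch-and-bound minimization of f on [lo, hi],
    for f with Lipschitz constant L.

    On each subinterval the bound f(x) >= f(c) - L*|x - c| (c the
    midpoint) gives a lower bound; intervals are split until the gap
    between the incumbent and the smallest lower bound is below tol.
    """
    def lower_bound(a, b, fc):
        return fc - L * (b - a) / 2

    c = (lo + hi) / 2
    best_x, best_val = c, f(c)
    heap = [(lower_bound(lo, hi, best_val), lo, hi)]
    for _ in range(max_iter):
        lb, a, b = heapq.heappop(heap)
        if best_val - lb <= tol:          # certified tol-optimal
            return best_x, best_val
        mid = (a + b) / 2
        for a2, b2 in ((a, mid), (mid, b)):
            c2 = (a2 + b2) / 2
            v = f(c2)
            if v < best_val:
                best_x, best_val = c2, v
            heapq.heappush(heap, (lower_bound(a2, b2, v), a2, b2))
    return best_x, best_val

# Multimodal benchmark: sin(x) + sin(10x/3) on [2.7, 7.5];
# |f'| <= 1 + 10/3, so L = 4.5 is a valid Lipschitz constant.
x, v = lipschitz_bb(lambda t: math.sin(t) + math.sin(10 * t / 3),
                    2.7, 7.5, L=4.5)
print(x, v)
```

Sharper bounds (such as the cubic ones in the paper) prune more intervals per iteration but leave this overall structure unchanged.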
Global Optimization of a Periodic System using a Genetic Algorithm
Stucke, David; Crespi, Vincent
2001-03-01
We use a novel application of a genetic-algorithm global optimization technique to find the lowest-energy structures for periodic systems. We apply this technique to colloidal crystals for several different stoichiometries of binary and ternary colloidal crystals. This application of a genetic algorithm is described and likely candidate structures are presented.
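A bare-bones real-coded genetic algorithm of the kind described can be sketched as follows, shown minimizing a toy objective rather than a lattice energy; the selection scheme and all parameter choices are illustrative assumptions, not the authors' code.

```python
import random

def genetic_minimize(f, bounds, pop_size=40, gens=100, mut=0.1, seed=1):
    """Bare-bones real-coded GA: elitist selection, uniform crossover,
    occasional Gaussian mutation (minimization)."""
    rng = random.Random(seed)
    dim = len(bounds)

    def clip(x, d):
        lo, hi = bounds[d]
        return min(max(x, lo), hi)

    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    for _ in range(gens):
        scored = sorted(pop, key=f)
        elite = scored[: pop_size // 5]          # keep the best 20%
        children = elite[:]
        while len(children) < pop_size:
            p1, p2 = rng.sample(elite, 2)        # parents from the elite
            child = [p1[d] if rng.random() < 0.5 else p2[d]
                     for d in range(dim)]
            if rng.random() < mut:               # Gaussian mutation
                d = rng.randrange(dim)
                child[d] = clip(child[d] + rng.gauss(0, 0.5), d)
            children.append(child)
        pop = children
    best = min(pop, key=f)
    return best, f(best)

# Toy objective standing in for a lattice energy: the 2-D sphere function.
best, val = genetic_minimize(lambda x: sum(t * t for t in x), [(-5, 5)] * 2)
print(best, val)
```

For a crystal-structure search, the chromosome would instead encode lattice parameters and basis positions, and `f` would be the energy model.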
Diagnosis of scaphoid fracture: optimal imaging techniques
Geijer M
2013-07-01
Mats Geijer, Center for Medical Imaging and Physiology, Skåne University Hospital and Lund University, Lund, Sweden. Abstract: This review aims to provide an overview of modern imaging techniques for evaluation of scaphoid fracture, with emphasis on occult fractures and an outlook on the possible evolution of imaging; it also gives an overview of the pathologic and anatomic basis for selection of techniques. Displaced scaphoid fractures detected by wrist radiography, with or without special scaphoid views, pose no diagnostic problems. After wrist trauma with clinically suspected scaphoid fracture and normal scaphoid radiography, most patients will have no clinically important fracture. Between 5% and 19% of patients (on average 16% in meta-analyses) will, however, have an occult scaphoid fracture which, untreated, may lead to later, potentially devastating, complications. Follow-up imaging may be done with repeat radiography, tomosynthesis, computed tomography, magnetic resonance imaging (MRI), or bone scintigraphy. However, no method is perfect, and the choice of imaging may be based on availability, cost, perceived accuracy, or personal preference. Generally, MRI and bone scintigraphy are regarded as the most sensitive modalities, but both are flawed by false positive results at various rates. Keywords: occult fracture, wrist, radiography, computed tomography, magnetic resonance imaging, radionuclide imaging
Global-local optimization of flapping kinematics in hovering flight
Ghommem, Mehdi; Hajj, M. R.; Mook, Dean T.; Stanford, Bret K.; Beran, Philip S.; Watson, Layne T.
2013-01-01
The kinematics of a hovering wing are optimized by combining the 2-d unsteady vortex lattice method with a hybrid of global and local optimization algorithms. The objective is to minimize the required aerodynamic power under a lift constraint. The hybrid optimization is used to efficiently navigate the complex design space due to wing-wake interference present in hovering aerodynamics. The flapping wing is chosen so that its chord length and flapping frequency match the morphological and flight properties of two insects with different masses. The results suggest that imposing a delay between the different oscillatory motions defining the flapping kinematics, and controlling the way through which the wing rotates at the end of each half stroke can improve aerodynamic power under a lift constraint. Furthermore, our optimization analysis identified optimal kinematics that agree fairly well with observed insect kinematics, as well as previously published numerical results.
Dispositional Optimism and Terminal Decline in Global Quality of Life
Zaslavsky, Oleg; Palgi, Yuval; Rillamas-Sun, Eileen; LaCroix, Andrea Z.; Schnall, Eliezer; Woods, Nancy F.; Cochrane, Barbara B.; Garcia, Lorena; Hingle, Melanie; Post, Stephen; Seguin, Rebecca; Tindle, Hilary; Shrira, Amit
2015-01-01
We examined whether dispositional optimism relates to change in global quality of life (QOL) as a function of either chronological age or years to impending death. We used a sample of 2,096 deceased postmenopausal women from the Women's Health Initiative clinical trials who were enrolled in the 2005-2010 Extension Study and for whom at least 1…
Complicated problem solution techniques in optimal parameter searching
Gergel', V.P.; Grishagin, V.A.; Rogatneva, E.A.; Strongin, R.G.; Vysotskaya, I.N.; Kukhtin, V.V.
1992-01-01
An algorithm is presented for the global search for numerical solutions of multidimensional, multiextremal, multicriteria optimization problems with complicated constraints. Boundedness of the changes in the object's characteristics is assumed under restricted changes of its parameters (Lipschitz condition). The algorithm was implemented as a computer code, and the program has been used in practice to solve various applied optimization problems. 10 refs.; 3 figs
A. P. Karpenko
2014-01-01
We consider a class of stochastic global optimization search algorithms which in various publications are called behavioural, intellectual, metaheuristic, nature-inspired, swarm, multi-agent, population, etc. We use the last term. Experience in using population algorithms to solve global optimization problems shows that the application of a single such algorithm is not always effective. Therefore, great attention is now paid to the hybridization of population algorithms for global optimization. Hybrid algorithms combine different algorithms, or identical algorithms with different values of their free parameters, so that the strength of one algorithm can compensate for the weakness of another. The purposes of this work are the development of a hybrid global optimization algorithm based on the known harmony search (HS) and particle swarm optimization (PSO) algorithms, the software implementation of this algorithm, and the study of its efficiency on a number of known benchmark problems and on a problem of dimensional optimization of a truss structure. We state the global optimization problem, describe the basic HS and PSO algorithms, give a flow chart of the proposed hybrid algorithm, called PSO_HS, present the results of computational experiments with the developed algorithm and software, and formulate the main results of the work and the prospects for its development.
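The HS component of such a hybrid can be sketched as follows; the memory size, hmcr, par and bandwidth values are common defaults, not the parameters used by the authors.

```python
import random

def harmony_search(f, bounds, hms=20, hmcr=0.9, par=0.3, bw=0.1,
                   iters=2000, seed=0):
    """Minimal harmony search (minimization).

    Each new harmony is built note-by-note: with probability hmcr a
    value is taken from the harmony memory (and pitch-adjusted by up to
    +/- bw with probability par); otherwise it is drawn at random. The
    new harmony replaces the worst one in memory if it is better.
    """
    rng = random.Random(seed)
    dim = len(bounds)
    memory = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(hms)]
    scores = [f(h) for h in memory]

    for _ in range(iters):
        new = []
        for d in range(dim):
            lo, hi = bounds[d]
            if rng.random() < hmcr:                  # memory consideration
                x = memory[rng.randrange(hms)][d]
                if rng.random() < par:               # pitch adjustment
                    x += rng.uniform(-bw, bw)
            else:                                    # random selection
                x = rng.uniform(lo, hi)
            new.append(min(max(x, lo), hi))
        val = f(new)
        worst = max(range(hms), key=lambda i: scores[i])
        if val < scores[worst]:                      # replace worst harmony
            memory[worst], scores[worst] = new, val
    best = min(range(hms), key=lambda i: scores[i])
    return memory[best], scores[best]

# Sphere function: global minimum 0 at the origin.
best, val = harmony_search(lambda x: sum(t * t for t in x), [(-10, 10)] * 2)
print(best, val)
```

A PSO_HS-style hybrid would, for example, alternate these memory updates with PSO velocity steps on the same population.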
Frolov, A.M.
1986-01-01
The problem of exact variational calculations of few-particle systems in the exponential basis of the relative coordinates using nonlinear parameters is studied. The techniques of stepwise optimization and global chaos of nonlinear parameters are used to calculate the S and P states of homonuclear muonic molecules with an error of no more than +0.001 eV. The global-chaos technique has also proved successful in the case of the nuclear systems ³H and ³He
Global Sufficient Optimality Conditions for a Special Cubic Minimization Problem
Xiaomei Zhang
2012-01-01
We present some sufficient global optimality conditions for a special cubic minimization problem with box constraints or binary constraints, by extending the global subdifferential approach proposed by V. Jeyakumar et al. (2006). The present conditions generalize the results developed in the work of V. Jeyakumar et al., where a quadratic minimization problem with box constraints or binary constraints was considered. In addition, a special diagonal matrix is constructed, which provides a convenient method for verifying the proposed sufficient conditions; a reformulation of the sufficient conditions then follows. It is worth noting that this reformulation is also applicable to the quadratic minimization problem with box or binary constraints considered in the works of V. Jeyakumar et al. (2006) and Y. Wang et al. (2010). Finally, some examples demonstrate that our optimality conditions can effectively be used for identifying global minimizers of certain nonconvex cubic minimization problems.
Abdulhamid, Shafi’i Muhammad; Abd Latiff, Muhammad Shafie; Abdul-Salaam, Gaddafi; Hussain Madni, Syed Hamid
2016-01-01
Cloud computing systems are huge clusters of interconnected servers residing in a datacenter and dynamically provisioned to clients on demand via a front-end interface. Scheduling scientific applications in the cloud computing environment is an NP-hard problem due to the dynamic nature of heterogeneous resources. Recently, a number of metaheuristic optimization schemes have been applied to address the challenges of application scheduling in cloud systems, without much emphasis on secure global scheduling. In this paper, a scientific application scheduling technique using the Global League Championship Algorithm (GBLCA) is presented for global task scheduling in the cloud environment. The experiment is carried out using the CloudSim simulator. The experimental results show that the proposed GBLCA technique produces a remarkable performance improvement in makespan, ranging from 14.44% to 46.41%. It also shows a significant reduction in the time taken to securely schedule applications, measured in terms of response time. In view of the experimental results, the proposed technique provides a better-quality scheduling solution for scientific application task execution in the cloud computing environment than the MinMin, MaxMin, Genetic Algorithm (GA) and Ant Colony Optimization (ACO) scheduling techniques. PMID:27384239
Tabu search, a versatile technique for the functions optimization
Castillo M, J.A.
2003-01-01
The basic elements of the tabu search technique are presented, with emphasis on the advantages it has over traditional descent-based optimization methods. Some modifications that have been implemented in the technique over time to make it more robust are then outlined. Finally, some areas where this technique has been applied with successful results are described. (Author)
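A minimal skeleton of tabu search on a discrete neighbourhood shows the short-term memory that distinguishes it from plain descent; the toy landscape, tabu tenure and iteration budget below are invented for illustration.

```python
def tabu_search(f, start, neighbours, tabu_len=10, iters=200):
    """Skeleton tabu search for minimization over a discrete space.

    Unlike pure descent, the best non-tabu neighbour is accepted even
    when it is worse than the current point, and recently visited
    solutions are kept on a tabu list so the search cannot cycle back.
    """
    current = start
    best, best_val = start, f(start)
    tabu = [start]
    for _ in range(iters):
        candidates = [n for n in neighbours(current) if n not in tabu]
        if not candidates:
            break
        current = min(candidates, key=f)   # may be an uphill move
        tabu.append(current)
        if len(tabu) > tabu_len:
            tabu.pop(0)                    # forget the oldest entry
        val = f(current)
        if val < best_val:
            best, best_val = current, val
    return best, best_val

# Toy 1-D integer landscape: a local trap at x=2, global minimum at x=8.
def f(x):
    return {2: 1.0, 8: -3.0}.get(x, abs(x - 2) * 0.5 + 2.0)

best, val = tabu_search(f, start=0, neighbours=lambda x: [x - 1, x + 1],
                        tabu_len=5, iters=50)
print(best, val)
```

Pure descent stops in the trap at x = 2; the tabu list forces the search uphill and out of it.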
Solving Unconstrained Global Optimization Problems via Hybrid Swarm Intelligence Approaches
Jui-Yu Wu
2013-01-01
Stochastic global optimization (SGO) algorithms such as the particle swarm optimization (PSO) approach have become popular for solving unconstrained global optimization (UGO) problems. The PSO approach, which belongs to the swarm intelligence domain, does not require gradient information, enabling it to overcome this limitation of traditional nonlinear programming methods. Unfortunately, PSO algorithm implementation and performance depend on several parameters, such as the cognitive parameter, social parameter, and constriction coefficient, which are tuned by trial and error. To reduce the parametrization of a PSO method, this work presents two efficient hybrid SGO approaches, namely, a real-coded genetic algorithm-based PSO (RGA-PSO) method and an artificial immune algorithm-based PSO (AIA-PSO) method. The specific parameters of the internal PSO algorithm are optimized using the external RGA and AIA approaches, and the internal PSO algorithm is then applied to solve UGO problems. The performances of the proposed RGA-PSO and AIA-PSO algorithms are evaluated using a set of benchmark UGO problems. Numerical results indicate that, besides their ability to converge to a global minimum for each test UGO problem, the proposed RGA-PSO and AIA-PSO algorithms outperform many hybrid SGO algorithms. Thus, the RGA-PSO and AIA-PSO approaches can be considered alternative SGO approaches for solving standard-dimensional UGO problems.
A global optimization method for evaporative cooling systems based on the entransy theory
Yuan, Fang; Chen, Qun
2012-01-01
The evaporative cooling technique, one of the most widely used cooling methods, is essential to both energy conservation and environmental protection. This contribution introduces a global optimization method, based on the entransy theory, for indirect evaporative cooling systems with coupled heat and mass transfer processes, to improve their energy efficiency. First, we classify the irreversible processes in the system into the heat transfer process, the coupled heat and mass transfer process, and the mixing process of water in different branches, where the irreversibility is evaluated by the entransy dissipation. Then, through the total system entransy dissipation, we establish the theoretical relationship of the user demands to both the geometrical structures of each heat exchanger and the operating parameters of each fluid, and derive two groups of optimization equations focusing on two typical optimization problems. Finally, an indirect evaporative cooling system is taken as an example to illustrate the application of the newly proposed optimization method. It is concluded that there exists an optimal circulating water flow rate that minimizes the total thermal conductance of the system. Furthermore, with different user demands and moist air inlet conditions, it is global optimization, rather than parametric analysis, that obtains the optimal performance of the system. Highlights: ► Introduces a global optimization method for evaporative cooling systems. ► Establishes a direct relation between user demands and the design parameters. ► Obtains two groups of optimization equations for two typical optimization objectives. ► Solving the equations offers the optimal design parameters for the system. ► Provides instruction for the design of coupled heat and mass transfer systems.
An Optimal Method for Developing Global Supply Chain Management System
Hao-Chun Lu
2013-01-01
Owing to transparency in supply chains, enhancing the competitiveness of industries has become a vital factor, and many developing countries look for possible methods to save costs. From this point of view, this study deals with the complicated liberalization policies in the global supply chain management system and proposes a mathematical model, via flow-control constraints, for handling bonded warehouses to obtain maximal profits. Numerical experiments illustrate that the proposed model can be solved effectively to obtain the optimal profits in the global supply chain environment.
GLOBAL OPTIMIZATION METHODS FOR GRAVITATIONAL LENS SYSTEMS WITH REGULARIZED SOURCES
Rogers, Adam; Fiege, Jason D.
2012-01-01
Several approaches exist to model gravitational lens systems. In this study, we apply global optimization methods to find the optimal set of lens parameters using a genetic algorithm. We treat the full optimization procedure as a two-step process: an analytical description of the source plane intensity distribution is used to find an initial approximation to the optimal lens parameters; the second stage of the optimization uses a pixelated source plane with the semilinear method to determine an optimal source. Regularization is handled by means of an iterative method and the generalized cross validation (GCV) and unbiased predictive risk estimator (UPRE) functions that are commonly used in standard image deconvolution problems. This approach simultaneously estimates the optimal regularization parameter and the number of degrees of freedom in the source. Using the GCV and UPRE functions, we are able to justify an estimation of the number of source degrees of freedom found in previous work. We test our approach by applying our code to a subset of the lens systems included in the SLACS survey.
Global Optimization of Minority Game by Smart Agents
Yan-Bo Xie; Bing-Hong Wang; Chin-Kun Hu; Tao Zhou
2004-01-01
We propose a new model of the minority game with so-called smart agents, such that the standard deviation and the total loss in this model reach the theoretical minimum values in the limit of long time. The smart agents use a trial-and-error method to make choices, yet bring global optimization to the system, which suggests that economic systems may have the ability to self-organize into a highly optimized state through agents who are forced to make decisions based on inductive thinking for their lim...
An Algorithm for Global Optimization Inspired by Collective Animal Behavior
Erik Cuevas
2012-01-01
A metaheuristic algorithm for global optimization called collective animal behavior (CAB) is introduced. Animal groups, such as schools of fish, flocks of birds, swarms of locusts, and herds of wildebeest, exhibit a variety of behaviors including swarming about a food source, milling around a central location, or migrating over large distances in aligned groups. These collective behaviors are often advantageous to groups, allowing them to increase their harvesting efficiency, follow better migration routes, improve their aerodynamics, and avoid predation. In the proposed algorithm, the searcher agents emulate a group of animals which interact with each other based on the biological laws of collective motion. The proposed method has been compared to other well-known optimization algorithms. The results show good performance of the proposed method when searching for a global optimum of several benchmark functions.
A Globally Convergent Parallel SSLE Algorithm for Inequality Constrained Optimization
Zhijun Luo
2014-01-01
A new parallel variable distribution algorithm, based on an interior-point SSLE algorithm, is proposed for solving inequality constrained optimization problems in which the constraints are block-separable, using the technique of sequential systems of linear equations. Each iteration of this algorithm only needs to solve three systems of linear equations with the same coefficient matrix to obtain the descent direction. Furthermore, under certain conditions, global convergence is achieved.
Operation optimization of distributed generation using artificial intelligent techniques
Mahmoud H. Elkazaz
2016-06-01
Full Text Available Future smart grids will require an observable, controllable and flexible network architecture for reliable and efficient energy delivery. The use of artificial intelligence and advanced communication technologies is essential in building a fully automated system. This paper introduces a new technique for online optimal operation of distributed generation (DG) resources, i.e. a hybrid fuel cell (FC) and photovoltaic (PV) system for residential applications. The proposed technique aims to minimize the total daily operating cost of a group of residential homes by managing the operation of embedded DG units remotely from a control centre. The target is formulated as an objective function that is solved using a genetic algorithm (GA) optimization technique. The optimal settings of the DG units obtained from the optimization process are sent to each DG unit through a fully automated system. The results show that the proposed technique succeeded in defining the optimal operating points of the DGs, which directly affect the total operating cost of the entire system.
Globally optimal superconducting magnets part II: symmetric MSE coil arrangement.
Tieng, Quang M; Vegh, Viktor; Brereton, Ian M
2009-01-01
A globally optimal superconducting magnet coil design procedure based on the Minimum Stored Energy (MSE) current density map is outlined. The method has the ability to arrange coils in a manner that generates a strong and homogeneous axial magnetic field over a predefined region, and ensures the stray field external to the assembly and peak magnetic field at the wires are in acceptable ranges. The outlined strategy of allocating coils within a given domain suggests that coils should be placed around the perimeter of the domain with adjacent coils possessing alternating winding directions for optimum performance. The underlying current density maps from which the coils themselves are derived are unique, and optimized to possess minimal stored energy. Therefore, the method produces magnet designs with the lowest possible overall stored energy. Optimal coil layouts are provided for unshielded and shielded short bore symmetric superconducting magnets.
Global optimization for quantum dynamics of few-fermion systems
Li, Xikun; Pecak, Daniel; Sowiński, Tomasz; Sherson, Jacob; Nielsen, Anne E. B.
2018-03-01
Quantum state preparation is vital to quantum computation and quantum information processing tasks. In adiabatic state preparation, the target state is theoretically obtained with nearly perfect fidelity if the control parameter is tuned slowly enough. As this, however, leads to slow dynamics, it is often desirable to be able to carry out processes more rapidly. In this work, we employ two global optimization methods to estimate the quantum speed limit for few-fermion systems confined in a one-dimensional harmonic trap. Such systems can be produced experimentally in a well-controlled manner. We determine the optimized control fields and achieve a reduction in the ramping time of more than a factor of four compared to linear ramping. We also investigate how robust the fidelity is to small variations of the control fields away from the optimized shapes.
An optimization planning technique for Suez Canal Network in Egypt
Abou El-Ela, A.A.; El-Zeftawy, A.A.; Allam, S.M.; Atta, Gasir M. [Electrical Engineering Dept., Faculty of Eng., Shebin El-Kom (Egypt)
2010-02-15
This paper introduces a proposed optimization technique (POT) for predicting peak load demand and planning transmission line systems. Many traditional methods have been presented for long-term load forecasting of electrical power systems, but their results are approximate. Therefore, the artificial neural network (ANN) technique for long-term peak load forecasting is modified and discussed as a modern alternative. The modified technique is applied to the Egyptian electrical network, using its historical data to predict peak load demand up to the year 2017, and is compared with extrapolation of trend curves as a traditional method. The POT is also applied to obtain the optimal planning of transmission lines for the 220 kV Suez Canal Network (SCN) using the ANN technique. Minimization of the transmission network costs is taken as the objective function, while the transmission line (TL) planning constraints are satisfied. The Zafarana site on the Red Sea coast is considered an optimal site for installing large wind farm (WF) units in Egypt, so the POT is applied to plan both the peak load and the electrical transmission of the SCN with and without WF units, in order to assess the impact of WF units on the Egyptian transmission system, considering the reliability constraints that were treated as a separate model in previous techniques. The application to the SCN shows the capability and efficiency of the proposed techniques in predicting peak load demand and obtaining the optimal planning of transmission lines of the SCN up to the year 2017. (author)
An improved technique for the prediction of optimal image resolution ...
2010-10-04
Oct 4, 2010 ... A robust technique for predicting optimal image resolution for the mapping of savannah ecosystems was developed. Available online at http://www.academicjournals.org/AJEST (abstract recoverable only as a search snippet).
Active load sharing technique for on-line efficiency optimization in DC microgrids
Sanseverino, E. Riva; Zizzo, G.; Boscaino, V.
2017-01-01
Recently, DC power distribution has been gaining more and more importance over its AC counterpart, achieving increased efficiency, greater flexibility, reduced volumes and capital cost. In this paper, a 24-120-325 V two-level DC distribution system for home appliances, each including three parallel DC-DC converters, is modeled. An active load sharing technique is proposed for the on-line optimization of the global efficiency of the DC distribution network. The algorithm aims at the instantaneous efficiency optimization of the whole DC network, based on on-line load current sampling. A Look Up Table is created to store the real efficiencies of the converters, taking component tolerances into account. A MATLAB/Simulink model of the DC distribution network has been set up, and a Genetic Algorithm has been employed for the global efficiency optimization. Simulation results are shown to validate the proposed...
Global structural optimizations of surface systems with a genetic algorithm
Chuang, Feng-Chuan
2005-01-01
Global structural optimizations with a genetic algorithm were performed for atomic cluster and surface systems including aluminum atomic clusters, Si magic clusters on the Si(111) 7 x 7 surface, silicon high-index surfaces, and Ag-induced Si(111) reconstructions. First, the global structural optimizations of neutral aluminum clusters Al_n (n up to 23) were performed using a genetic algorithm coupled with a tight-binding potential. Second, a genetic algorithm in combination with tight-binding and first-principles calculations was used to study the structures of magic clusters on the Si(111) 7 x 7 surface. Extensive calculations show that the magic cluster observed in scanning tunneling microscopy (STM) experiments consists of eight Si atoms. Simulated STM images of the Si magic cluster exhibit a ring-like feature similar to STM experiments. Third, a genetic algorithm coupled with a highly optimized empirical potential was used to determine the lowest energy structures of high-index semiconductor surfaces. The lowest energy structures of Si(105) and Si(114) were determined successfully, and the results are reported within the framework of the highly optimized empirical potential and first-principles calculations. Finally, a genetic algorithm coupled with Si and Ag tight-binding potentials was used to search for Ag-induced Si(111) reconstructions at various Ag and Si coverages. The optimized structural models of the √3 x √3, 3 x 1, and 5 x 2 phases were reported using first-principles calculations. A novel model is found to have lower surface energy than the proposed double-honeycomb chained (DHC) model for both the Au/Si(111) 5 x 2 and Ag/Si(111) 5 x 2 systems.
Groenwold, A.A.; Wood, D.W.; Etman, L.F.P.; Tosserams, S.
2009-01-01
We implement and test a globally convergent sequential approximate optimization algorithm based on (convexified) diagonal quadratic approximations. The algorithm resides in the class of globally convergent optimization methods based on conservative convex separable approximations developed by
A Novel Hybrid Firefly Algorithm for Global Optimization.
Lina Zhang
Full Text Available Global optimization is challenging to solve due to its nonlinearity and multimodality. Traditional algorithms such as gradient-based methods often struggle to deal with such problems, and one of the current trends is to use metaheuristic algorithms. In this paper, a novel hybrid population-based global optimization algorithm, called the hybrid firefly algorithm (HFA), is proposed by combining the advantages of both the firefly algorithm (FA) and differential evolution (DE). FA and DE are executed in parallel to promote information sharing among the population and thus enhance searching efficiency. In order to evaluate the performance and efficiency of the proposed algorithm, a diverse set of benchmark functions is employed, falling into two groups: unimodal and multimodal. The experimental results show better performance of the proposed algorithm compared to the original firefly algorithm (FA), differential evolution (DE) and particle swarm optimization (PSO) in the sense of avoiding local minima and increasing the convergence rate.
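The attraction move at the heart of the firefly algorithm (and hence of the HFA above) shifts each firefly toward every brighter one, with attraction decaying in squared distance plus a small random walk. A minimal sketch of plain FA on the sphere benchmark (generic FA only, not the paper's hybrid; all parameter values are illustrative):

```python
import numpy as np

def firefly_sweep(pop, fitness, beta0=1.0, gamma=0.01, alpha=0.2, rng=None):
    """One firefly-algorithm sweep: every firefly moves toward each brighter
    (lower-fitness) firefly, with attraction decaying in squared distance."""
    rng = np.random.default_rng() if rng is None else rng
    new_pop = pop.copy()
    n, d = pop.shape
    for i in range(n):
        for j in range(n):
            if fitness[j] < fitness[i]:          # j is brighter (minimization)
                r2 = np.sum((pop[i] - pop[j]) ** 2)
                beta = beta0 * np.exp(-gamma * r2)
                new_pop[i] += beta * (pop[j] - new_pop[i]) \
                    + alpha * (rng.random(d) - 0.5)
    return new_pop

sphere = lambda p: np.sum(p ** 2, axis=1)        # benchmark objective
rng = np.random.default_rng(0)
pop = rng.uniform(-5, 5, size=(20, 2))
init_best = sphere(pop).min()
for t in range(100):
    # shrink the random-walk term over time so the swarm settles
    pop = firefly_sweep(pop, sphere(pop), alpha=0.2 * 0.95 ** t, rng=rng)
best = sphere(pop).min()
```

Because the brightest firefly never moves, the best objective value in the population is non-increasing across sweeps; the hybrid in the paper additionally runs DE in parallel and shares information between the two populations.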
A Global Network Alignment Method Using Discrete Particle Swarm Optimization.
Huang, Jiaxiang; Gong, Maoguo; Ma, Lijia
2016-10-19
Molecular interaction data increase exponentially with the advance of biotechnology. This makes it possible and necessary to comparatively analyse the different data at a network level. Global network alignment is an important network comparison approach to identify conserved subnetworks and gain insight into evolutionary relationships across species. Network alignment, which is analogous to subgraph isomorphism, is known to be an NP-hard problem. In this paper, we introduce a novel heuristic Particle-Swarm-Optimization based Network Aligner (PSONA), which optimizes a weighted global alignment model considering both protein sequence similarity and interaction conservation. The particle statuses and status updating rules are redefined in a discrete form by using permutations. A seed-and-extend strategy is employed to guide the search for a superior alignment. The proposed initialization method "seeds" matches with high sequence similarity into the alignment, which guarantees the functional coherence of the mapped nodes. A greedy local search method is designed as the "extension" procedure to iteratively optimize the edge conservation. PSONA is compared with several state-of-the-art methods on ten network pairs combined from five species. The experimental results demonstrate that the proposed aligner can map the proteins with high functional coherence and can be used as a booster to effectively refine the well-studied aligners.
Ariyarit, Atthaphon; Sugiura, Masahiko; Tanabe, Yasutada; Kanazaki, Masahiro
2018-06-01
A multi-fidelity optimization technique based on an efficient global optimization process using a hybrid surrogate model is investigated for solving real-world design problems. The model constructs the local deviation using the kriging method and the global model using a radial basis function. The expected improvement is computed to decide on additional samples that can improve the model. The approach was first investigated by solving mathematical test problems. The results were compared with optimization results from an ordinary kriging method and a co-kriging method, and the proposed method produced the best solution. The proposed method was also applied to aerodynamic design optimization of helicopter blades to obtain the maximum blade efficiency. The optimal shape obtained by the proposed method achieved performance almost equivalent to that obtained by single-fidelity optimization based on high-fidelity evaluations alone. Comparing all three methods, the proposed method required the lowest total number of high-fidelity evaluation runs to obtain a converged solution.
Global optimization of minority game by intelligent agents
Xie, Yan-Bo; Wang, Bing-Hong; Hu, Chin-Kun; Zhou, Tao
2005-10-01
We propose a new model of the minority game with intelligent agents who use a trial-and-error method to make choices, such that the variance σ² and the total loss in this model reach the theoretical minimum values in the long-time limit and global optimization of the system is achieved. This suggests that economic systems can self-organize into a highly optimized state through agents who make decisions based on inductive thinking, limited knowledge, and limited capabilities. When other kinds of agents are also present, the simulation results and analytic calculations show that the intelligent agents can gain profits from producers and are much more competent than the noise traders and conventional agents of the original minority game proposed by Challet and Zhang.
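The self-organization described above can be illustrated with a toy minority game in which agents on the losing (majority) side switch sides with a small probability, a crude trial-and-error rule standing in for the paper's intelligent agents (the rule and parameters here are illustrative, not the authors' exact model):

```python
import numpy as np

def minority_game(n_agents=101, rounds=2000, p_switch=0.1, seed=0):
    """Toy minority game: each round every agent picks side 0 or 1 and the
    minority side wins.  Agents on the losing (majority) side switch sides
    with probability p_switch.  Returns the attendance history of side 1."""
    rng = np.random.default_rng(seed)
    choice = rng.integers(0, 2, n_agents)
    attendance = np.empty(rounds, dtype=int)
    for t in range(rounds):
        a = choice.sum()                          # agents on side 1
        attendance[t] = a
        majority = 1 if a > n_agents / 2 else 0
        losers = choice == majority
        flip = losers & (rng.random(n_agents) < p_switch)
        choice[flip] ^= 1                         # trial-and-error switch
    return attendance

att = minority_game()
sigma2 = att[500:].var()   # attendance fluctuation after the transient
```

Independent random choices would give a variance of roughly N/4 ≈ 25; the trial-and-error rule drives σ² well below that benchmark, which is the direction of the effect the paper quantifies.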
An Adaptive Unified Differential Evolution Algorithm for Global Optimization
Qiang, Ji; Mitchell, Chad
2014-11-03
In this paper, we propose a new adaptive unified differential evolution algorithm for single-objective global optimization. Instead of the multiple mutation strategies proposed in conventional differential evolution algorithms, this algorithm employs a single equation unifying multiple strategies into one expression. It has the virtue of mathematical simplicity and also provides users the flexibility for broader exploration of the space of mutation operators. By making all control parameters in the proposed algorithm self-adaptively evolve during the process of optimization, it frees the application users from the burden of choosing appropriate control parameters and also improves the performance of the algorithm. In numerical tests using thirteen basic unimodal and multimodal functions, the proposed adaptive unified algorithm shows promising performance in comparison to several conventional differential evolution algorithms.
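A unified mutation of this flavor can be written as v = x_i + F1(x_best − x_i) + F2(x_r1 − x_i) + F3(x_r2 − x_r3), where coefficient choices recover classic strategies such as rand/1 or current-to-best/1. The sketch below uses fixed (non-adaptive) coefficients and is one reading of the idea, not necessarily the paper's exact equation:

```python
import numpy as np

def unified_mutation(pop, i, best, F1, F2, F3, rng):
    """Single-equation DE mutation blending several classic strategies,
    e.g. (F1,F2,F3)=(0,1,F) resembles rand/1, (F,0,F) resembles best/1."""
    n = len(pop)
    r1, r2, r3 = rng.choice([k for k in range(n) if k != i], 3, replace=False)
    return pop[i] + F1 * (best - pop[i]) + F2 * (pop[r1] - pop[i]) \
        + F3 * (pop[r2] - pop[r3])

def de_minimize(f, lo, hi, n_pop=30, iters=200, F=(0.5, 0.3, 0.5), CR=0.9, seed=1):
    rng = np.random.default_rng(seed)
    d = len(lo)
    pop = rng.uniform(lo, hi, (n_pop, d))
    fit = np.array([f(x) for x in pop])
    for _ in range(iters):
        best = pop[np.argmin(fit)]
        for i in range(n_pop):
            v = unified_mutation(pop, i, best, *F, rng)
            mask = rng.random(d) < CR              # binomial crossover
            mask[rng.integers(d)] = True           # keep at least one donor gene
            trial = np.clip(np.where(mask, v, pop[i]), lo, hi)
            ft = f(trial)
            if ft <= fit[i]:                       # greedy one-to-one selection
                pop[i], fit[i] = trial, ft
    return pop[np.argmin(fit)], fit.min()

x, fx = de_minimize(lambda z: np.sum(z ** 2),
                    np.array([-5.0] * 3), np.array([5.0] * 3))
```

The adaptive variant in the paper would additionally let F1, F2, F3 and CR evolve with the population instead of staying fixed.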
Global optimization in the adaptive assay of subterranean uranium nodules
Vulkan, U.; Ben-Haim, Y.
1989-01-01
An adaptive assay is one in which the design of the assay system is modified during operation in response to measurements obtained on-line. The present work has two aims: to design an adaptive system for borehole assay of isolated subterranean uranium nodules, and to investigate globality of optimal design in adaptive assay. It is shown experimentally that reasonably accurate estimates of uranium mass are obtained for a wide range of nodule shapes, on the basis of an adaptive assay system based on a simple geomorphological model. Furthermore, two concepts are identified which underlie the optimal design of the assay system. The adaptive assay approach shows promise for successful measurement of spatially random material in many geophysical applications. (author)
A concept for global optimization of topology design problems
Stolpe, Mathias; Achtziger, Wolfgang; Kawamoto, Atsushi
2006-01-01
We present a concept for solving topology design problems to proven global optimality. We propose that the problems are modeled using the approach of simultaneous analysis and design with discrete design variables and solved with convergent branch and bound type methods. This concept is illustrated on two applications. The first application is the design of stiff truss structures where the bar areas are chosen from a finite set of available areas. The second considered application is simultaneous topology and geometry design of planar articulated mechanisms. For each application we outline...
A Unified Differential Evolution Algorithm for Global Optimization
Qiang, Ji; Mitchell, Chad
2014-06-24
In this paper, we propose a new unified differential evolution (uDE) algorithm for single-objective global optimization. Instead of selecting among multiple mutation strategies as in the conventional differential evolution algorithm, this algorithm employs a single equation as the mutation strategy. It has the virtue of mathematical simplicity and also provides users the flexibility for broader exploration of different mutation strategies. Numerical tests using twelve basic unimodal and multimodal functions show promising performance of the proposed algorithm in comparison to conventional differential evolution algorithms.
Simulated Annealing-Based Krill Herd Algorithm for Global Optimization
Gai-Ge Wang
2013-01-01
Full Text Available Recently, Gandomi and Alavi proposed a novel swarm intelligence method, called krill herd (KH), for global optimization. To enhance the performance of the KH method, in this paper a new improved meta-heuristic simulated annealing-based krill herd (SKH) method is proposed for optimization tasks. A new krill selecting (KS) operator is used to refine krill behavior when updating each krill's position, so as to enhance its reliability and robustness in dealing with optimization problems. The introduced KS operator involves a greedy strategy together with accepting a few not-so-good solutions with a low probability, as originally used in simulated annealing (SA). In addition, a kind of elitism scheme is used to save the best individuals in the population during the krill updating process. The merits of these improvements are verified on fourteen standard benchmark functions, and the experimental results show that, in most cases, the performance of the improved meta-heuristic SKH method is superior to, or at least highly competitive with, the standard KH and other optimization methods.
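The "accept a few not-so-good solutions with a low probability" ingredient of the KS operator is the Metropolis acceptance rule from simulated annealing. In isolation, detached from the krill-herd position updates, it looks like this:

```python
import math
import random

def sa_accept(delta, temperature, rng=random):
    """Metropolis acceptance rule used by simulated annealing: always accept
    an improvement (delta <= 0 for minimization); accept a deterioration
    with probability exp(-delta / temperature)."""
    if delta <= 0:
        return True
    if temperature <= 0:
        return False
    return rng.random() < math.exp(-delta / temperature)

random.seed(0)
always = sa_accept(-1.0, 0.5)                       # improvements always pass
rarely = sum(sa_accept(5.0, 0.1) for _ in range(1000))  # exp(-50): ~never
```

In SKH this rule decides whether a candidate krill position replaces the current one, with the temperature decreasing over iterations so that late-stage search becomes nearly greedy.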
An Image Morphing Technique Based on Optimal Mass Preserving Mapping
Zhu, Lei; Yang, Yan; Haker, Steven; Tannenbaum, Allen
2013-01-01
Image morphing, or image interpolation in the time domain, deals with the metamorphosis of one image into another. In this paper, a new class of image morphing algorithms is proposed based on the theory of optimal mass transport. The L2 mass moving energy functional is modified by adding an intensity penalizing term, in order to reduce the undesired double exposure effect. It is an intensity-based approach and, thus, is parameter free. The optimal warping function is computed using an iterative gradient descent approach. This proposed morphing method is also extended to doubly connected domains using a harmonic parameterization technique, along with finite-element methods. PMID:17547128
TECHNIQUE OF OPTIMAL AUDIT PLANNING FOR INFORMATION SECURITY MANAGEMENT SYSTEM
F. N. Shago
2014-03-01
Full Text Available The growing complexity of information security management systems (ISMS) makes it necessary to improve the scientific and methodological apparatus for auditing these systems. Planning is an important and determining part of ISMS auditing, and audit efficiency is defined by the ratio of the achieved quality indicators to the resources spent. Thus, developing methods and techniques for optimizing audit planning, making it possible to increase its effectiveness, is an important and urgent task. The proposed technique makes it possible to optimally distribute planning time and material resources over the audit stages on the basis of a dynamics model of ISMS quality. A special feature of the proposed approach is the use of both a priori and a posteriori data for initial audit planning, as well as plan adjustment after each audit event. This makes it possible to optimize the use of audit resources in accordance with the selected criteria. Application examples of the technique are given for planning an audit of an organization's information security management system. The results of a computational experiment based on the proposed technique showed that audit time (cost) can be reduced by 10-15% and, consequently, the quality assessments obtained through the allocated audit resources can be improved with respect to well-known methods of audit planning.
Optimization of Hydraulic Machinery Bladings by Multilevel CFD Techniques
Thum Susanne
2005-01-01
Full Text Available The numerical design optimization of complex hydraulic machinery bladings requires a high number of design parameters and the use of a precise CFD solver, yielding high computational costs. To reduce the CPU time needed, a multilevel CFD method has been developed. First of all, the 3D blade geometry is parametrized by means of a geometric design tool to reduce the number of design parameters. To keep geometric accuracy, a special B-spline modification technique has been developed. On the first optimization level, a quasi-3D Euler code (EQ3D) is applied. To guarantee a sufficiently accurate result, the code is calibrated by a Navier-Stokes recalculation of the initial design and can be recalibrated after a number of optimization steps by another Navier-Stokes computation. After a convergent solution is obtained, the optimization process is repeated on the second level using a full 3D Euler code, yielding a more accurate flow prediction. Finally, a 3D Navier-Stokes code is applied on the third level to search for the optimum optimorum by fine-tuning the geometrical parameters. To show the potential of the developed optimization system, the runner blading of a water turbine with a specific speed n_q = 41 min⁻¹ was optimized using the multilevel approach.
Huang, Si-Da; Shang, Cheng; Zhang, Xiao-Jie; Liu, Zhi-Pan
2017-09-01
While the underlying potential energy surface (PES) determines the structure and other properties of a material, it has been frustrating to predict new materials from theory even with the advent of supercomputing facilities. The accuracy of the PES and the efficiency of PES sampling are two major bottlenecks, not least because of the great complexity of the material PES. This work introduces a "Global-to-Global" approach for material discovery by combining for the first time a global optimization method with neural network (NN) techniques. The novel global optimization method, named the stochastic surface walking (SSW) method, is carried out massively in parallel to generate a global training data set, the fitting of which by an atom-centered NN produces a multi-dimensional global PES; the subsequent SSW exploration of large systems with the analytical NN PES can provide key information on the thermodynamic and kinetic stability of unknown phases identified from global PESs. We describe in detail the current implementation of the SSW-NN method, with a particular focus on the size of the global data set and the simultaneous energy/force/stress NN training procedure. An important functional material, TiO2, is utilized as an example to demonstrate the automated global data set generation, the improved NN training procedure and the application in material discovery. Two new TiO2 porous crystal structures are identified, which have thermodynamic stability similar to the common TiO2 rutile phase, and the kinetic stability of one of them is further proved from SSW pathway sampling. As a general tool for material simulation, the SSW-NN method provides an efficient and predictive platform for large-scale computational material screening.
A decoupled power flow algorithm using particle swarm optimization technique
Acharjee, P.; Goswami, S.K.
2009-01-01
A robust, nondivergent power flow method has been developed using the particle swarm optimization (PSO) technique. The decoupling properties between the power system quantities have been exploited in developing the power flow algorithm. The speed of the power flow algorithm has been improved using a simple perturbation technique. The basic power flow algorithm and the improvement scheme have been designed to retain the simplicity of the evolutionary approach. The power flow is rugged, can determine the critical loading conditions and also can handle the flexible alternating current transmission system (FACTS) devices efficiently. Test results on standard test systems show that the proposed method can find the solution when the standard power flows fail.
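Independent of the power-system decoupling details, the PSO core of such a solver is the standard global-best velocity/position update. Below is a generic sketch (not the authors' power-flow-specific variant), applied to a toy two-equation residual minimization in the spirit of treating power flow as an optimization problem; the equations are made up for illustration:

```python
import numpy as np

def pso_minimize(f, lo, hi, n=30, iters=150, w=0.7, c1=1.5, c2=1.5, seed=2):
    """Plain global-best particle swarm optimizer for box-bounded problems."""
    rng = np.random.default_rng(seed)
    d = len(lo)
    x = rng.uniform(lo, hi, (n, d))
    v = np.zeros((n, d))
    pbest, pval = x.copy(), np.array([f(p) for p in x])
    g = pbest[np.argmin(pval)]                     # global best position
    for _ in range(iters):
        r1, r2 = rng.random((n, d)), rng.random((n, d))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        fx = np.array([f(p) for p in x])
        improved = fx < pval
        pbest[improved], pval[improved] = x[improved], fx[improved]
        g = pbest[np.argmin(pval)]
    return g, pval.min()

# Toy nonlinear system x1^2 + x2 = 3, x1 + x2^2 = 5 (solution (1, 2)),
# solved by minimizing the squared residual norm.
res = lambda x: (x[0] ** 2 + x[1] - 3) ** 2 + (x[0] + x[1] ** 2 - 5) ** 2
sol, err = pso_minimize(res, np.array([-5.0, -5.0]), np.array([5.0, 5.0]))
```

A power flow cast this way minimizes the mismatch between scheduled and computed bus injections, which is why the method remains nondivergent: when no exact solution exists, the minimizer of the residual still characterizes the critical loading condition.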
Quantitative Portfolio Optimization Techniques Applied to the Brazilian Stock Market
André Alves Portela Santos
2012-09-01
Full Text Available In this paper we assess the out-of-sample performance of two alternative quantitative portfolio optimization techniques - mean-variance and minimum-variance optimization - and compare their performance with respect to a naive 1/N (or equally weighted) portfolio and also to the market portfolio given by the Ibovespa. We focus on short-selling-constrained portfolios and consider alternative estimators for the covariance matrices: the sample covariance matrix, RiskMetrics, and three covariance estimators proposed by Ledoit and Wolf (2003), Ledoit and Wolf (2004a) and Ledoit and Wolf (2004b). Taking into account alternative portfolio re-balancing frequencies, we compute out-of-sample performance statistics which indicate that the quantitative approaches delivered improved results in terms of lower portfolio volatility and better risk-adjusted returns. Moreover, the use of more sophisticated estimators for the covariance matrix generated optimal portfolios with lower turnover over time.
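For reference, the unconstrained minimum-variance portfolio targeted by the second technique has a closed form, w = Σ⁻¹1 / (1ᵀΣ⁻¹1); the paper's portfolios additionally impose a short-selling constraint, which this closed form ignores. A sketch with a made-up covariance matrix:

```python
import numpy as np

def min_variance_weights(cov):
    """Closed-form minimum-variance portfolio under the budget constraint
    only (weights sum to 1; short selling allowed): w = S^{-1}1 / (1'S^{-1}1)."""
    ones = np.ones(cov.shape[0])
    w = np.linalg.solve(cov, ones)     # S^{-1} 1 without forming the inverse
    return w / w.sum()

# Illustrative 3-asset covariance matrix (numbers are made up)
cov = np.array([[0.04, 0.01, 0.00],
                [0.01, 0.09, 0.02],
                [0.00, 0.02, 0.16]])
w = min_variance_weights(cov)
var_mv = w @ cov @ w                           # minimum-variance portfolio
var_eq = np.full(3, 1 / 3) @ cov @ np.full(3, 1 / 3)  # naive 1/N benchmark
```

By construction var_mv is no larger than the 1/N portfolio's variance computed from the same covariance estimate; the empirical question the paper studies is whether that advantage survives out of sample, where Σ must be estimated.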
Material saving by means of CWR technology using optimization techniques
Pérez, Iñaki; Ambrosio, Cristina
2017-10-01
Material saving is currently a must for forging companies, as material costs account for up to 50% of part cost for steel parts and up to 90% for other materials like titanium. For long products, cross wedge rolling (CWR) technology can be used to obtain forging preforms with a suitable distribution of material along their axis. However, defining the correct preform dimensions is not an easy task and may require an intensive trial-and-error campaign. To speed up preform definition, it is necessary to apply optimization techniques to Finite Element Models (FEM) able to reproduce the material behaviour during rolling. Meta-model Assisted Evolution Strategies (MAES), which combine evolutionary algorithms with Kriging meta-models, are implemented in FORGE® software and allow a relevant reduction of optimization computation costs. The paper shows the application of these optimization techniques to the definition of the right preform for a shaft from a vehicle of the agricultural sector. First, the current forging process, based on obtaining the forging preform by means of an open die forging operation, is shown. Then, the CWR preform optimization is developed using the above-mentioned optimization techniques. The objective is to reduce the initial billet weight as much as possible, so a calculation of the flash weight reduction due to the use of the proposed preform is stated. Finally, a simulation of the CWR process for the defined preform is carried out to check that the most common failures (necking, spirals, ...) in CWR do not appear in this case.
Cho, Su Gil; Jang, Jun Yong; Kim, Ji Hoon; Lee, Tae Hee [Hanyang University, Seoul (Korea, Republic of); Lee, Min Uk [Romax Technology Ltd., Seoul (Korea, Republic of); Choi, Jong Su; Hong, Sup [Korea Research Institute of Ships and Ocean Engineering, Daejeon (Korea, Republic of)
2015-04-15
Sequential surrogate model-based global optimization algorithms, such as super-EGO, have been developed to increase the efficiency of commonly used global optimization techniques as well as to ensure the accuracy of the optimization. However, earlier studies have drawbacks because there are three phases in the optimization loop and empirical parameters. We propose a united sampling criterion to simplify the algorithm and to achieve the global optimum of constrained problems without any empirical parameters. It is able to select points located in a feasible region with high model uncertainty, as well as points along the boundary of a constraint at the lowest objective value. The mean squared error determines which criterion is more dominant between the infill sampling criterion and the boundary sampling criterion. Also, the method guarantees the accuracy of the surrogate model because the sample points are not concentrated within extremely small regions as in super-EGO. The performance of the proposed method, such as the solvability of a problem, convergence properties, and efficiency, is validated through nonlinear numerical examples with disconnected feasible regions.
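Samplers in this family build on the expected-improvement (EI) infill criterion of EGO, which balances the predicted objective value against model uncertainty; the united criterion above extends this idea with a boundary term for constraints. The standard EI for minimization under a Gaussian surrogate prediction N(mu, sigma²) is sketched below:

```python
import math

def expected_improvement(mu, sigma, y_best):
    """EI infill criterion for minimization: EI = (y* - mu) Phi(z) + sigma phi(z),
    z = (y* - mu) / sigma, with Phi/phi the standard normal cdf/pdf."""
    if sigma <= 0:
        return max(y_best - mu, 0.0)          # no uncertainty: plain improvement
    z = (y_best - mu) / sigma
    phi = math.exp(-0.5 * z * z) / math.sqrt(2 * math.pi)   # normal pdf
    Phi = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))        # normal cdf
    return (y_best - mu) * Phi + sigma * phi

# A point predicted below the incumbent with some uncertainty scores high;
# a certain prediction exactly at the incumbent scores zero.
ei_good = expected_improvement(mu=0.5, sigma=0.2, y_best=1.0)
ei_flat = expected_improvement(mu=1.0, sigma=0.0, y_best=1.0)
```

The next sample is the maximizer of this criterion over the design space; the paper's contribution is to switch between this infill term and a constraint-boundary term using the surrogate's mean squared error, removing the empirical parameters of super-EGO.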
Zhang, Yong-Feng; Chiang, Hsiao-Dong
2017-09-01
A novel three-stage methodology, termed the "consensus-based particle swarm optimization (PSO)-assisted Trust-Tech methodology," to find global optimal solutions for nonlinear optimization problems is presented. It is composed of Trust-Tech methods, consensus-based PSO, and local optimization methods that are integrated to compute a set of high-quality local optimal solutions that can contain the global optimal solution. The proposed methodology compares very favorably with several recently developed PSO algorithms based on a set of small-dimension benchmark optimization problems and 20 large-dimension test functions from the CEC 2010 competition. The analytical basis for the proposed methodology is also provided. Experimental results demonstrate that the proposed methodology can rapidly obtain high-quality optimal solutions that can contain the global optimal solution. The scalability of the proposed methodology is promising.
Novel optimization technique of isolated microgrid with hydrogen energy storage.
Beshr, Eman Hassan; Abdelghany, Hazem; Eteiba, Mahmoud
2018-01-01
This paper presents a novel optimization technique for energy management studies of an isolated microgrid. The system is supplied by various Distributed Energy Resources (DERs): a Diesel Generator (DG), a Wind Turbine Generator (WTG), and Photovoltaic (PV) arrays, supported by a fuel cell/electrolyzer hydrogen storage system for short-term storage. Multi-objective optimization is carried out through a non-dominated sorting genetic algorithm to suit the load requirements under the given constraints. A novel multi-objective flower pollination algorithm is utilized to check the results. The pros and cons of the two optimization techniques are compared and evaluated. An isolated microgrid is modelled using the MATLAB software package; dispatch of active/reactive power and optimal load flow analysis with slack bus selection are carried out to minimize fuel cost and line losses under realistic constraints. The performance of the system is studied and analyzed during both summer and winter conditions, and three case studies are presented for each condition. The modified IEEE 15 bus system is used to validate the proposed algorithm.
Electrostatic afocal-zoom lens design using computer optimization technique
Sise, Omer, E-mail: omersise@gmail.com
2014-12-15
Highlights: • We describe the detailed design of a five-element electrostatic afocal-zoom lens. • The simplex optimization is used to optimize lens voltages. • The method can be applied to multi-element electrostatic lenses. - Abstract: Electron optics is the key to the successful operation of electron collision experiments where well designed electrostatic lenses are needed to drive electron beam before and after the collision. In this work, the imaging properties and aberration analysis of an electrostatic afocal-zoom lens design were investigated using a computer optimization technique. We have found a whole new range of voltage combinations that has gone unnoticed until now. A full range of voltage ratios and spherical and chromatic aberration coefficients were systematically analyzed with a range of magnifications between 0.3 and 3.2. The grid-shadow evaluation was also employed to show the effect of spherical aberration. The technique is found to be useful for searching the optimal configuration in a multi-element lens system.
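The simplex (Nelder-Mead) search used above to optimize the lens voltages can be sketched with SciPy. The merit function below is a made-up stand-in for the real electron-optics figure of merit; the actual work evaluates voltage ratios against spherical and chromatic aberration coefficients computed from field simulations:

```python
import numpy as np
from scipy.optimize import minimize

def merit(volt_ratios, target_mag=1.0):
    """Hypothetical lens merit function (an assumption, not the paper's):
    penalize deviation from a target magnification plus an 'aberration'
    term that grows with the electrode voltage ratios."""
    v1, v2, v3 = volt_ratios
    mag = v1 / v2 + 0.1 * v3                       # toy magnification model
    aberration = 0.01 * (v1 ** 2 + v2 ** 2 + v3 ** 2)
    return (mag - target_mag) ** 2 + aberration

x0 = np.array([2.0, 1.0, 0.5])                     # initial voltage ratios
res = minimize(merit, x0, method="Nelder-Mead",
               options={"xatol": 1e-6, "fatol": 1e-9, "maxiter": 2000})
```

Nelder-Mead needs no derivatives of the merit function, which is why it suits lens design, where each evaluation is a numerical ray trace rather than an analytic expression.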
Novel optimization technique of isolated microgrid with hydrogen energy storage.
Eman Hassan Beshr
This paper presents a novel optimization technique for energy management studies of an isolated microgrid. The system is supplied by various Distributed Energy Resources (DERs): a Diesel Generator (DG), a Wind Turbine Generator (WTG), and Photovoltaic (PV) arrays, supported by a fuel cell/electrolyzer hydrogen storage system for short-term storage. Multi-objective optimization through the non-dominated sorting genetic algorithm is used to suit the load requirements under the given constraints, and a novel multi-objective flower pollination algorithm is used to check the results. The pros and cons of the two optimization techniques are compared and evaluated. The isolated microgrid is modelled in MATLAB; dispatch of active/reactive power and optimal load flow analysis with slack bus selection are carried out to minimize fuel cost and line losses under realistic constraints. The performance of the system is studied and analyzed under both summer and winter conditions, with three case studies presented for each. A modified IEEE 15-bus system is used to validate the proposed algorithm.
Nuclear-fuel-cycle optimization: methods and modelling techniques
Silvennoinen, P.
1982-01-01
This book presents methods applicable to analyzing fuel-cycle logistics and optimization as well as to evaluating the economics of different reactor strategies. After an introduction to the phases of a fuel cycle, uranium cost trends are assessed in a global perspective. Subsequent chapters deal with the fuel-cycle problems faced by a power utility. The fuel-cycle models cover the entire cycle, from the supply of uranium to the disposition of spent fuel. The chapter headings are: Nuclear Fuel Cycle; Uranium Supply and Demand; Basic Model of the LWR (light water reactor) Fuel Cycle; Resolution of Uncertainties; Assessment of Proliferation Risks; Multigoal Optimization; Generalized Fuel-Cycle Models; Reactor Strategy Calculations; and Interface with Energy Strategies. 47 references, 34 figures, 25 tables
GMG: A Guaranteed, Efficient Global Optimization Algorithm for Remote Sensing.
D'Helon, CD
2004-08-18
The monocular passive ranging (MPR) problem in remote sensing consists of identifying the precise range of an airborne target (missile, plane, etc.) from its observed radiance. This inverse problem may be set as a global optimization problem (GOP) whereby the difference between the observed and model predicted radiances is minimized over the possible ranges and atmospheric conditions. Using additional information about the error function between the predicted and observed radiances of the target, we developed GMG, a new algorithm to find the Global Minimum with a Guarantee. The new algorithm transforms the original continuous GOP into a discrete search problem, thereby guaranteeing to find the position of the global minimum in a reasonably short time. The algorithm is first applied to the golf course problem, which serves as a litmus test for its performance in the presence of both complete and degraded additional information. GMG is further assessed on a set of standard benchmark functions and then applied to various realizations of the MPR problem.
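The core idea of GMG, discretizing a continuous search so finely that the global minimum cannot be missed, can be illustrated with a generic Lipschitz-based grid search. This is a hypothetical sketch, not the authors' algorithm (which exploits problem-specific error-function information); the function name, Lipschitz bound, and tolerance are illustrative assumptions:

```python
import math

def lipschitz_grid_minimize(f, lo, hi, lipschitz, tol):
    """Guaranteed global minimization of a 1-D Lipschitz function.

    Discretizes [lo, hi] with spacing h such that L * h / 2 <= tol,
    so the best grid value is within tol of the true global minimum.
    """
    n = max(2, math.ceil((hi - lo) * lipschitz / (2.0 * tol)) + 1)
    xs = [lo + (hi - lo) * i / (n - 1) for i in range(n)]
    return min(xs, key=f)

# Example: f(x) = (x - 1.3)^2 on [-4, 4]; |f'(x)| <= 2*|x - 1.3| <= 10.6.
x_star = lipschitz_grid_minimize(lambda x: (x - 1.3) ** 2, -4.0, 4.0,
                                 lipschitz=10.6, tol=1e-3)
```

The guarantee comes entirely from the grid density: between neighbouring points the function cannot drop by more than the tolerance, which is the discrete-search flavour of guarantee the abstract describes.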
Optimal correction and design parameter search by modern methods of rigorous global optimization
Makino, K.; Berz, M.
2011-01-01
Frequently the design of schemes for correction of aberrations or the determination of possible operating ranges for beamlines and cells in synchrotrons exhibit multitudes of possibilities for their correction, usually appearing in disconnected regions of parameter space which cannot be directly qualified by analytical means. In such cases, frequently an abundance of optimization runs are carried out, each of which determines a local minimum depending on the specific chosen initial conditions. Practical solutions are then obtained through an often extended interplay of experienced manual adjustment of certain suitable parameters and local searches by varying other parameters. However, in a formal sense this problem can be viewed as a global optimization problem, i.e. the determination of all solutions within a certain range of parameters that lead to a specific optimum. For example, it may be of interest to find all possible settings of multiple quadrupoles that can achieve imaging; or to find ahead of time all possible settings that achieve a particular tune; or to find all possible manners to adjust nonlinear parameters to achieve correction of high order aberrations. These tasks can easily be phrased in terms of such an optimization problem; but while mathematically this formulation is often straightforward, it has been common belief that it is of limited practical value since the resulting optimization problem cannot usually be solved. However, recent significant advances in modern methods of rigorous global optimization make these methods feasible for optics design for the first time. The key ideas of the method lie in an interplay of rigorous local underestimators of the objective functions, and by using the underestimators to rigorously iteratively eliminate regions that lie above already known upper bounds of the minima, in what is commonly known as a branch-and-bound approach. Recent enhancements of the Differential Algebraic methods used in particle
Achtziger, Wolfgang; Stolpe, Mathias
2007-01-01
... this problem is well-studied for continuous bar areas, we consider in this study the case of discrete areas. This problem is of major practical relevance if the truss must be built from pre-produced bars with given areas. As a special case, we consider the design problem for a single available bar area, i.e., a 0/1 problem. In contrast to the heuristic methods considered in many other approaches, our goal is to compute guaranteed globally optimal structures. This is done by a branch-and-bound method for which convergence can be proven. In this branch-and-bound framework, lower bounds of the optimal ...-integer problems. The main intention of this paper is to provide optimal solutions for single and multiple load benchmark examples, which can be used for testing and validating other methods or heuristics for the treatment of this discrete topology design problem.
Annealing evolutionary stochastic approximation Monte Carlo for global optimization
Liang, Faming
2010-04-08
In this paper, we propose a new algorithm, the so-called annealing evolutionary stochastic approximation Monte Carlo (AESAMC) algorithm as a general optimization technique, and study its convergence. AESAMC possesses a self-adjusting mechanism, whose target distribution can be adapted at each iteration according to the current samples. Thus, AESAMC falls into the class of adaptive Monte Carlo methods. This mechanism also makes AESAMC less trapped by local energy minima than nonadaptive MCMC algorithms. Under mild conditions, we show that AESAMC can converge weakly toward a neighboring set of global minima in the space of energy. AESAMC is tested on multiple optimization problems. The numerical results indicate that AESAMC can potentially outperform simulated annealing, the genetic algorithm, annealing stochastic approximation Monte Carlo, and some other metaheuristics in function optimization. © 2010 Springer Science+Business Media, LLC.
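AESAMC adapts its target distribution across iterations; the basic annealing acceptance rule it builds on can be sketched as follows. This is a minimal, hypothetical simulated-annealing sketch, not the AESAMC algorithm itself, and all parameter values and the test function are illustrative:

```python
import math
import random

def simulated_annealing(f, x0, step=0.5, t0=1.0, cooling=0.995, iters=5000, seed=0):
    """Basic 1-D simulated annealing with a geometric cooling schedule."""
    rng = random.Random(seed)
    x, fx, t = x0, f(x0), t0
    best_x, best_f = x, fx
    for _ in range(iters):
        cand = x + rng.uniform(-step, step)
        fc = f(cand)
        # Metropolis rule: always accept improvements, sometimes accept uphill moves.
        if fc <= fx or rng.random() < math.exp(-(fc - fx) / t):
            x, fx = cand, fc
            if fx < best_f:
                best_x, best_f = x, fx
        t *= cooling
    return best_x, best_f

# Multimodal test function: quadratic bowl plus oscillation, minimum near x = 2.2.
x_best, f_best = simulated_annealing(lambda x: (x - 2.0) ** 2 + math.sin(5.0 * x),
                                     x0=8.0)
```

The adaptive mechanism of AESAMC replaces the fixed cooling schedule with a sample-driven update of the target distribution, which is what makes it less prone to trapping than this nonadaptive baseline.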
Miró, Anton; Pozo, Carlos; Guillén-Gosálbez, Gonzalo; Egea, Jose A; Jiménez, Laureano
2012-05-10
The estimation of parameter values for mathematical models of biological systems is an optimization problem that is particularly challenging due to the nonlinearities involved. One major difficulty is the existence of multiple minima in which standard optimization methods may fall during the search. Deterministic global optimization methods overcome this limitation, ensuring convergence to the global optimum within a desired tolerance. Global optimization techniques are usually classified into stochastic and deterministic. The former typically lead to lower CPU times but offer no guarantee of convergence to the global minimum in a finite number of iterations. In contrast, deterministic methods provide solutions of a given quality (i.e., optimality gap), but tend to lead to large computational burdens. This work presents a deterministic outer approximation-based algorithm for the global optimization of dynamic problems arising in the parameter estimation of models of biological systems. Our approach, which offers a theoretical guarantee of convergence to global minimum, is based on reformulating the set of ordinary differential equations into an equivalent set of algebraic equations through the use of orthogonal collocation methods, giving rise to a nonconvex nonlinear programming (NLP) problem. This nonconvex NLP is decomposed into two hierarchical levels: a master mixed-integer linear programming problem (MILP) that provides a rigorous lower bound on the optimal solution, and a reduced-space slave NLP that yields an upper bound. The algorithm iterates between these two levels until a termination criterion is satisfied. The capabilities of our approach were tested in two benchmark problems, in which the performance of our algorithm was compared with that of the commercial global optimization package BARON. The proposed strategy produced near optimal solutions (i.e., within a desired tolerance) in a fraction of the CPU time required by BARON.
Implementation and verification of global optimization benchmark problems
Posypkin, Mikhail; Usov, Alexander
2017-12-01
The paper considers the implementation and verification of a test suite containing 150 benchmarks for global deterministic box-constrained optimization. A C++ library for describing standard mathematical expressions was developed for this purpose. From a single description, the library automates generation of the value of a function and its gradient at a given point, and of interval estimates of the function and its gradient on a given box. Based on this functionality, we have developed a collection of tests for automatic verification of the proposed benchmarks. The verification has shown that literature sources contain mistakes in the descriptions of some benchmarks. The library and the test suite are available for download and can be used freely.
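The interval estimates mentioned above can be illustrated with a minimal sketch of interval arithmetic. The actual library is in C++; the `Interval` class and the test function here are hypothetical Python stand-ins:

```python
# Minimal interval arithmetic: enclose the range of f(x) = x*x - x*2
# over a box [lo, hi], the kind of rigorous bound such a library computes.

class Interval:
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi

    def __add__(self, other):
        other = _coerce(other)
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __sub__(self, other):
        other = _coerce(other)
        return Interval(self.lo - other.hi, self.hi - other.lo)

    def __mul__(self, other):
        other = _coerce(other)
        products = [self.lo * other.lo, self.lo * other.hi,
                    self.hi * other.lo, self.hi * other.hi]
        return Interval(min(products), max(products))

def _coerce(v):
    return v if isinstance(v, Interval) else Interval(v, v)

def f(x):
    # Works on plain numbers and on intervals alike (a "single description").
    return x * x - x * 2

box = Interval(0.0, 3.0)
enclosure = f(box)  # guaranteed to contain the true range [-1, 3]
```

Note the overestimation: the enclosure is [-6, 9] while the true range on [0, 3] is [-1, 3]; the guarantee is containment, not tightness, which is exactly what branch-and-bound methods need.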
Global optimization applied to GPS positioning by ambiguity functions
Baselga, Sergio
2010-01-01
Differential GPS positioning with carrier-phase observables is commonly done in a process that involves determination of the unknown integer ambiguity values. An alternative approach, named the ambiguity function method, was already proposed in the early days of GPS positioning. By making use of a trigonometric function, ambiguity unknowns are eliminated from the functional model before the estimation process. This approach has significant advantages, such as ease of use and insensitivity to cycle slips, but requires such high accuracy in the initial approximate coordinates that its use has been practically dismissed from consideration. In this paper a novel strategy is proposed so that the need for highly accurate initial coordinates disappears: the application of a global optimization method to the ambiguity function model. This strategy enables the ambiguity function method to compete with the presently prevailing approach of ambiguity resolution.
Global optimization numerical strategies for rate-independent processes
Benešová, Barbora
2011-01-01
Vol. 50, No. 2 (2011), pp. 197-220. ISSN 0925-5001. R&D Projects: GA ČR GAP201/10/0357. Other grants: GA MŠk (CZ) LC06052 (program LC). Institutional research plan: CEZ:AV0Z20760514. Keywords: rate-independent processes; numerical global optimization; energy-estimates-based algorithm. Subject RIV: BA - General Mathematics. Impact factor: 1.196 (2011). http://math.hnue.edu.vn/portal/rss.viewpage.php?id=0000037780&ap=L3BvcnRhbC9ncmFiYmVyLnBocD9jYXRpZD0xMDEyJnBhZ2U9Mg==
Adjusting process count on demand for petascale global optimization
Sosonkina, Masha; Watson, Layne T.; Radcliffe, Nicholas R.; Haftka, Rafael T.; Trosset, Michael W.
2013-01-01
There are many challenges that need to be met before efficient and reliable computation at the petascale is possible. Many scientific and engineering codes running at the petascale are likely to be memory intensive, which makes thrashing a serious problem for many petascale applications. One way to overcome this challenge is to use a dynamic number of processes, so that the total amount of memory available for the computation can be increased on demand. This paper describes modifications made to the massively parallel global optimization code pVTdirect in order to allow for a dynamic number of processes. In particular, the modified version of the code monitors memory use and spawns new processes if the amount of available memory is determined to be insufficient. The primary design challenges are discussed, and performance results are presented and analyzed.
Photon attenuation correction technique in SPECT based on nonlinear optimization
Suzuki, Shigehito; Wakabayashi, Misato; Okuyama, Keiichi; Kuwamura, Susumu
1998-01-01
Photon attenuation correction in SPECT was performed using nonlinear optimization theory, in which an optimum image is sought so that the sum of squared errors between observed and reprojected projection data is minimized. This correction technique consists of optimization and step-width algorithms, which determine at each iteration a pixel-by-pixel directional value of the search and its step-width, respectively. We used the conjugate gradient and quasi-Newton methods as the optimization algorithm, and the Curry rule and the quadratic function method as the step-width algorithm. Statistical fluctuations in the corrected image, due to statistical noise in the emission projection data, grew as the iterations proceeded, depending on the combination of optimization and step-width algorithms. To suppress them, smoothing of the directional values was introduced. Computer experiments and clinical applications showed a pronounced reduction in statistical fluctuations of the corrected image for all combinations. Combinations using the conjugate gradient method were superior in noise characteristics and computation time; pairing it with the quadratic function method was optimal when noise properties were the primary concern. (author)
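The underlying least-squares formulation, minimizing the sum of squared errors between observed and reprojected data with an exact quadratic-function step width, can be sketched on a toy linear system. This uses plain steepest descent rather than the paper's conjugate gradient or quasi-Newton methods, and the 2x2 system is an illustrative stand-in for SPECT projection data:

```python
def matvec(A, x):
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

def reconstruct(A, p, iters=50):
    """Minimize ||A x - p||^2 by steepest descent with an exact line search."""
    n = len(A[0])
    x = [0.0] * n
    At = list(map(list, zip(*A)))                 # transpose of A
    for _ in range(iters):
        r = [pi - qi for pi, qi in zip(p, matvec(A, x))]    # residual p - A x
        g = matvec(At, r)                          # steepest-descent direction
        Ag = matvec(A, g)
        denom = sum(v * v for v in Ag)
        if denom == 0.0:
            break
        # Exact minimizer of the quadratic along the search direction.
        alpha = sum(ri * v for ri, v in zip(r, Ag)) / denom
        x = [xi + alpha * gi for xi, gi in zip(x, g)]
    return x

A = [[2.0, 1.0], [1.0, 3.0]]
p = [5.0, 10.0]          # consistent with the exact solution x = [1, 3]
x = reconstruct(A, p)
```

The quadratic step width `alpha` plays the role of the paper's step-width algorithm: because the objective is quadratic along the search direction, the optimal step has a closed form.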
Reliability analysis of large scaled structures by optimization technique
Ishikawa, N.; Mihara, T.; Iizuka, M.
1987-01-01
This paper presents a reliability analysis based on an optimization technique using the PNET (Probabilistic Network Evaluation Technique) method for highly redundant structures having a large number of collapse modes. This approach makes the best use of the merits of the optimization technique within which the idea of the PNET method is used. The analytical process involves minimizing the safety index of the representative mode, subject to satisfaction of the mechanism condition and of positive external work. The procedure entails the sequential solution of a series of NLP (Nonlinear Programming) problems, where the correlation condition of the PNET method pertaining to the representative mode is taken as an additional constraint in the next analysis. Upon succeeding iterations, the final analysis is reached when the collapse probability of the subsequent mode is much smaller than that of the first mode. The approximate collapse probability of the structure is defined as the sum of the collapse probabilities of the representative modes classified by the extent of correlation. Then, in order to confirm the validity of the proposed method, a conventional Monte Carlo simulation is also revised by using collapse load analysis. Finally, two fairly large structures were analyzed to illustrate the scope and application of the approach. (orig./HP)
Optimization Techniques for 3D Graphics Deployment on Mobile Devices
Koskela, Timo; Vatjus-Anttila, Jarkko
2015-03-01
3D Internet technologies are becoming essential enablers in many application areas including games, education, collaboration, navigation and social networking. The use of 3D Internet applications with mobile devices provides location-independent access and richer use context, but also performance issues. Therefore, one of the important challenges facing 3D Internet applications is the deployment of 3D graphics on mobile devices. In this article, we present an extensive survey on optimization techniques for 3D graphics deployment on mobile devices and qualitatively analyze the applicability of each technique from the standpoints of visual quality, performance and energy consumption. The analysis focuses on optimization techniques related to data-driven 3D graphics deployment, because it supports off-line use, multi-user interaction, user-created 3D graphics and creation of arbitrary 3D graphics. The outcome of the analysis facilitates the development and deployment of 3D Internet applications on mobile devices and provides guidelines for future research.
Greenhouse Environmental Control Using Optimized MIMO PID Technique
Fateh BOUNAAMA
2011-10-01
Climate control for protected crops brings the added dimension of a biological system into a physical-system control situation. The thermally dynamic nature of a greenhouse suggests that disturbance attenuation (load control) of external temperature, humidity, and sunlight is far more important than is the case for controlling other types of buildings. This paper investigates the application of a multi-input multi-output (MIMO) PID controller to a MIMO greenhouse environmental model with actuation constraints. The method is based on decoupling the system at the low-frequency point. The optimal tuning values are determined using genetic algorithm (GA) optimization. The inside/outside climate model of the greenhouse environment and automatically collected data sets from Avignon, France, are used to simulate and test this technique. The control objective is to maintain the highly coupled inside air temperature and relative humidity of a strongly perturbed greenhouse at specified set-points through ventilation/cooling and moisturizing operations.
WFH: closing the global gap--achieving optimal care.
Skinner, Mark W
2012-07-01
For 50 years, the World Federation of Hemophilia (WFH) has been working globally to close the gap in care and to achieve Treatment for All patients, men and women, with haemophilia and other inherited bleeding disorders, regardless of where they might live. The WFH estimates that more than one in 1000 men and women has a bleeding disorder equating to 6,900,000 worldwide. To close the gap in care between developed and developing nations a continued focus on the successful strategies deployed heretofore will be required. However, in response to the rapid advances in treatment and emerging therapeutic advances on the horizon it will also require fresh approaches and renewed strategic thinking. It is difficult to predict what each therapeutic advance on the horizon will mean for the future, but there is no doubt that we are in a golden age of research and development, which has the prospect of revolutionizing treatment once again. An improved understanding of "optimal" treatment is fundamental to the continued evolution of global care. The challenges of answering government and payer demands for evidence-based medicine, and cost justification for the introduction and enhancement of treatment, are ever-present and growing. To sustain and improve care it is critical to build the body of outcome data for individual patients, within haemophilia treatment centers (HTCs), nationally, regionally and globally. Emerging therapeutic advances (longer half-life therapies and gene transfer) should not be justified or brought to market based only on the notion that they will be economically more affordable, although that may be the case, but rather more importantly that they will be therapeutically more advantageous. Improvements in treatment adherence, reductions in bleeding frequency (including microhemorrhages), better management of trough levels, and improved health outcomes (including quality of life) should be the foremost considerations. As part of a new WFH strategic plan
A practical globalization of one-shot optimization for optimal design of tokamak divertors
Blommaert, Maarten, E-mail: maarten.blommaert@kuleuven.be [Institute of Energy and Climate Research (IEK-4), FZ Jülich GmbH, D-52425 Jülich (Germany); Dekeyser, Wouter; Baelmans, Martine [KU Leuven, Department of Mechanical Engineering, 3001 Leuven (Belgium); Gauger, Nicolas R. [TU Kaiserslautern, Chair for Scientific Computing, 67663 Kaiserslautern (Germany); Reiter, Detlev [Institute of Energy and Climate Research (IEK-4), FZ Jülich GmbH, D-52425 Jülich (Germany)
2017-01-01
In past studies, nested optimization methods were successfully applied to design of the magnetic divertor configuration in nuclear fusion reactors. In this paper, so-called one-shot optimization methods are pursued. Due to convergence issues, a globalization strategy for the one-shot solver is sought. Whereas Griewank introduced a globalization strategy using a doubly augmented Lagrangian function that includes primal and adjoint residuals, its practical usability is limited by the necessity of second order derivatives and expensive line search iterations. In this paper, a practical alternative is offered that avoids these drawbacks by using a regular augmented Lagrangian merit function that penalizes only state residuals. Additionally, robust rank-two Hessian estimation is achieved by adaptation of Powell's damped BFGS update rule. The application of the novel one-shot approach to magnetic divertor design is considered in detail. For this purpose, the approach is adapted to be complementary with practical in parts adjoint sensitivities. Using the globalization strategy, stable convergence of the one-shot approach is achieved.
ABCluster: the artificial bee colony algorithm for cluster global optimization.
Zhang, Jun; Dolg, Michael
2015-10-07
Global optimization of cluster geometries is of fundamental importance in chemistry and an interesting problem in applied mathematics. In this work, we introduce a relatively new swarm intelligence algorithm, i.e. the artificial bee colony (ABC) algorithm proposed in 2005, to this field. It is inspired by the foraging behavior of a bee colony, and only three parameters are needed to control it. We applied it to several potential functions of quite different nature, i.e., the Coulomb-Born-Mayer, Lennard-Jones, Morse, Z and Gupta potentials. The benchmarks reveal that for long-ranged potentials the ABC algorithm is very efficient in locating the global minimum, while for short-ranged ones it is sometimes trapped into a local minimum funnel on a potential energy surface of large clusters. We have released an efficient, user-friendly, and free program "ABCluster" to realize the ABC algorithm. It is a black-box program for non-experts as well as experts and might become a useful tool for chemists to study clusters.
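A minimal version of the ABC loop, with its employed, onlooker, and scout phases, might look as follows. This is a hypothetical sketch, not the released ABCluster program; the parameter values are illustrative and the fitness weighting assumes a non-negative objective, as in this demo:

```python
import random

def abc_minimize(f, dim, lo, hi, n_food=20, limit=30, iters=200, seed=1):
    """Simplified artificial bee colony for box-constrained minimization (f >= 0)."""
    rng = random.Random(seed)
    foods = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_food)]
    fits = [f(x) for x in foods]
    trials = [0] * n_food
    best = min(range(n_food), key=lambda i: fits[i])
    best_x, best_f = foods[best][:], fits[best]

    def neighbour(i):
        # Perturb one coordinate toward/away from a random other food source.
        k = rng.choice([j for j in range(n_food) if j != i])
        d = rng.randrange(dim)
        x = foods[i][:]
        x[d] += rng.uniform(-1.0, 1.0) * (foods[i][d] - foods[k][d])
        x[d] = min(hi, max(lo, x[d]))
        return x

    def try_improve(i):
        nonlocal best_x, best_f
        cand = neighbour(i)
        fc = f(cand)
        if fc < fits[i]:
            foods[i], fits[i], trials[i] = cand, fc, 0
            if fc < best_f:
                best_x, best_f = cand[:], fc
        else:
            trials[i] += 1

    for _ in range(iters):
        for i in range(n_food):                     # employed bee phase
            try_improve(i)
        weights = [1.0 / (1.0 + fi) for fi in fits]
        for _ in range(n_food):                     # onlooker phase: prefer good sources
            try_improve(rng.choices(range(n_food), weights=weights)[0])
        for i in range(n_food):                     # scout phase: abandon stagnant sources
            if trials[i] > limit:
                foods[i] = [rng.uniform(lo, hi) for _ in range(dim)]
                fits[i], trials[i] = f(foods[i]), 0
    return best_x, best_f

# Sphere function in 3-D: global minimum 0 at the origin.
x, fx = abc_minimize(lambda v: sum(t * t for t in v), dim=3, lo=-5.0, hi=5.0)
```

As the abstract notes, only a handful of parameters (colony size, abandonment limit, iteration count) control the search, which is a large part of the algorithm's appeal.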
Zhong, Shangping; Chen, Tianshun; He, Fengying; Niu, Yuzhen
2014-09-01
For a practical pattern classification task solved by kernel methods, the computing time is mainly spent on kernel learning (or training). However, current kernel learning approaches are based on local optimization techniques and find it hard to achieve good time performance, especially for large datasets; thus the existing algorithms cannot easily be extended to large-scale tasks. In this paper, we present a fast Gaussian kernel learning method by solving a specially structured global optimization (SSGO) problem. We optimize the Gaussian kernel function by using the formulated kernel target alignment criterion, which is a difference of increasing (d.i.) functions. Through a power-transformation based convexification method, the objective criterion can be represented as a difference of convex (d.c.) functions with a fixed power-transformation parameter. The objective programming problem can then be converted to an SSGO problem: globally minimizing a concave function over a convex set. The SSGO problem is classical and has good solvability. Thus, to find the global optimal solution efficiently, we can adopt the improved Hoffman's outer approximation method, which need not repeat the search procedure from different starting points to locate the best local minimum. The proposed method can also be proven to converge to the global solution for any classification task. We evaluate the proposed method on twenty benchmark datasets, and compare it with four other Gaussian kernel learning methods. Experimental results show that the proposed method stably achieves both good time-efficiency and good classification performance. Copyright © 2014 Elsevier Ltd. All rights reserved.
Multivariate Analysis Techniques for Optimal Vision System Design
Sharifzadeh, Sara
The present thesis considers optimization of the spectral vision systems used for quality inspection of food items. The relationship between food quality, vision-based techniques, and spectral signatures is described, as are the vision instruments for food analysis and the datasets of the food items ... used in this thesis. The methodological strategies are outlined, including sparse regression and pre-processing based on feature selection and extraction methods, supervised versus unsupervised analysis, and linear versus non-linear approaches. One supervised feature selection algorithm ... (SSPCA) and DCT-based characterization of the spectral diffused reflectance images for wavelength selection and discrimination. These methods, together with some other state-of-the-art statistical and mathematical analysis techniques, are applied on datasets of different food items: meat, dairies, fruits ...
Optimization of AFP-radioimmunoassay using Antibody Capture Technique
Moustafa, K.A.
2003-01-01
Alpha-fetoprotein (AFP) is a substance produced by the unborn baby. When the neural tube is not properly formed, large amounts of AFP pass into the amniotic fluid and reach the mother's blood. By measuring AFP in the mother's blood and amniotic fluid, it is possible to tell whether or not there is a chance that the unborn baby has a neural tube defect. AFP is also used as a tumor marker for hepatocellular carcinoma. There are many different techniques for measuring AFP in blood, but the most accurate is the immunoassay technique. Immunoassays can be classified on the basis of methodology into three classes: (1) antibody capture assays, (2) antigen capture assays, and (3) two-antibody sandwich assays. In the present study, the antibody capture assay, in which the antigen is attached to a solid support and labeled antibody is allowed to bind, will be optimized.
Arasomwan, Martins Akugbe; Adewumi, Aderemi Oluyinka
2013-01-01
Linear decreasing inertia weight (LDIW) strategy was introduced to improve on the performance of the original particle swarm optimization (PSO). However, linear decreasing inertia weight PSO (LDIW-PSO) algorithm is known to have the shortcoming of premature convergence in solving complex (multipeak) optimization problems due to lack of enough momentum for particles to do exploitation as the algorithm approaches its terminal point. Researchers have tried to address this shortcoming by modifying LDIW-PSO or proposing new PSO variants. Some of these variants have been claimed to outperform LDIW-PSO. The major goal of this paper is to experimentally establish the fact that LDIW-PSO is very much efficient if its parameters are properly set. First, an experiment was conducted to acquire a percentage value of the search space limits to compute the particle velocity limits in LDIW-PSO based on commonly used benchmark global optimization problems. Second, using the experimentally obtained values, five well-known benchmark optimization problems were used to show the outstanding performance of LDIW-PSO over some of its competitors which have in the past claimed superiority over it. Two other recent PSO variants with different inertia weight strategies were also compared with LDIW-PSO with the latter outperforming both in the simulation experiments conducted. PMID:24324383
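The LDIW strategy itself is simple to state: the inertia weight decreases linearly from a start value to an end value over the run, and particle velocities are clamped to a fraction of the search-space range. A hypothetical sketch follows; the velocity-limit fraction here is illustrative, not the experimentally derived percentage from the paper:

```python
import math
import random

def ldiw_pso(f, dim, lo, hi, n=30, iters=300,
             w_start=0.9, w_end=0.4, c1=2.0, c2=2.0, seed=2):
    """Particle swarm optimization with linearly decreasing inertia weight."""
    rng = random.Random(seed)
    v_max = 0.5 * (hi - lo)        # illustrative velocity clamp
    xs = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n)]
    vs = [[0.0] * dim for _ in range(n)]
    pbest = [x[:] for x in xs]
    pbest_f = [f(x) for x in xs]
    g = min(range(n), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[g][:], pbest_f[g]

    for t in range(iters):
        w = w_start - (w_start - w_end) * t / (iters - 1)   # linear decrease
        for i in range(n):
            for d in range(dim):
                vs[i][d] = (w * vs[i][d]
                            + c1 * rng.random() * (pbest[i][d] - xs[i][d])
                            + c2 * rng.random() * (gbest[d] - xs[i][d]))
                vs[i][d] = max(-v_max, min(v_max, vs[i][d]))
                xs[i][d] = max(lo, min(hi, xs[i][d] + vs[i][d]))
            fx = f(xs[i])
            if fx < pbest_f[i]:
                pbest[i], pbest_f[i] = xs[i][:], fx
                if fx < gbest_f:
                    gbest, gbest_f = xs[i][:], fx
    return gbest, gbest_f

def rastrigin(v):
    # Standard multimodal benchmark: global minimum 0 at the origin.
    return sum(t * t - 10.0 * math.cos(2.0 * math.pi * t) + 10.0 for t in v)

x, fx = ldiw_pso(rastrigin, dim=2, lo=-5.12, hi=5.12)
```

The paper's point is that with the velocity limit set properly (their experimentally obtained percentage of the search-space range), this plain LDIW scheme remains competitive with more elaborate inertia-weight variants.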
Nuclear fuel cycle optimization - methods and modelling techniques
Silvennoinen, P.
1982-01-01
This book is aimed at presenting methods applicable in the analysis of fuel cycle logistics and optimization as well as in evaluating the economics of different reactor strategies. After a succinct introduction to the phases of a fuel cycle, uranium cost trends are assessed in a global perspective and subsequent chapters deal with the fuel cycle problems faced by a power utility. A fundamental material flow model is introduced first in the context of light water reactor fuel cycles. Besides the minimum cost criterion, the text also deals with other objectives providing for a treatment of cost uncertainties and of the risk of proliferation of nuclear weapons. Methods to assess mixed reactor strategies, comprising also other reactor types than the light water reactor, are confined to cost minimization. In the final Chapter, the integration of nuclear capacity within a generating system is examined. (author)
Memetic Algorithms to Solve a Global Nonlinear Optimization Problem. A Review
M. K. Sakharov
2015-01-01
In recent decades, evolutionary algorithms have proven themselves to be powerful search-based optimization techniques. Their popularity is due to the fact that they are easy to implement and can be applied in many areas, since they are based on the idea of universal evolution. In problems with a large number of local optima, for example, traditional optimization methods usually fail to find the global optimum. Such problems are solved using a variety of stochastic methods, in particular the so-called population-based algorithms, which are a kind of evolutionary method. The main disadvantage of this class of methods is their slow convergence to the exact solution in the neighborhood of the global optimum, as these methods are incapable of using local information about the landscape of the function. This often limits their use in large-scale real-world problems where computation time is a critical factor. One of the promising directions in the field of modern evolutionary computation is memetic algorithms, which can be regarded as a combination of a population-based search for the global optimum with local procedures for refining solutions, which gives a synergistic effect. In the context of memetic algorithms, a meme is an implementation of a local optimization method used to refine solutions during the search. The concept of memetic algorithms provides ample opportunities for the development of various modifications of these algorithms, which can vary the frequency of the local search, the conditions of its termination, and so on. Practically significant modifications involve the simultaneous use of different memes; such algorithms are called multi-memetic. The paper states the global problem of nonlinear unconstrained optimization, and describes the most promising areas of modification, including hybridization and meta-optimization. The main content of the work is the classification and review of existing varieties of
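The combination described, a population-based global search whose solutions are refined by a local-search "meme", can be sketched in a few lines. This is a toy, hypothetical example (arithmetic crossover with Gaussian mutation plus a coordinate-descent meme), not taken from the reviewed paper:

```python
import random

def memetic_minimize(f, dim, lo, hi, pop_size=20, gens=60, seed=3):
    """Toy memetic algorithm: (mu + lambda) evolutionary loop with a local-search meme."""
    rng = random.Random(seed)

    def local_search(x, step=0.1, rounds=20):
        # The "meme": greedy coordinate descent with a fixed step size.
        x, fx = x[:], f(x)
        for _ in range(rounds):
            for d in range(dim):
                for delta in (-step, step):
                    y = x[:]
                    y[d] = min(hi, max(lo, y[d] + delta))
                    fy = f(y)
                    if fy < fx:
                        x, fx = y, fy
        return fx, x

    pop = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(pop_size)]
    scored = sorted((f(x), x) for x in pop)
    for _ in range(gens):
        children = []
        for _ in range(pop_size):
            (_, a), (_, b) = rng.sample(scored[:pop_size // 2], 2)
            child = [min(hi, max(lo, (ai + bi) / 2.0 + rng.gauss(0.0, 0.3)))
                     for ai, bi in zip(a, b)]
            children.append(local_search(child))     # refine every child with the meme
        scored = sorted(scored + children)[:pop_size]  # (mu + lambda) elitism
    return scored[0][1], scored[0][0]

x, fx = memetic_minimize(lambda v: sum((t - 1.0) ** 2 for t in v),
                         dim=2, lo=-5.0, hi=5.0)
```

The synergy noted in the abstract is visible even in this toy: the population supplies diverse starting points, while the meme supplies the fast final convergence that pure population-based search lacks. A multi-memetic variant would simply choose among several `local_search` implementations per individual.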
Optimization of analytical techniques to characterize antibiotics in aquatic systems
Al Mokh, S.
2013-01-01
Antibiotics are considered pollutants when present in aquatic ecosystems, the ultimate receptacles of anthropogenic substances. These compounds are studied for their persistence in the environment and their effects on natural organisms. Numerous efforts have been made worldwide to assess the environmental quality of different water resources, both for the survival of aquatic species and for human consumption and the related health risks. Toward this goal, optimizing analytical techniques for these compounds in aquatic systems remains a necessity. Our objective is to develop extraction and detection methods for 12 molecules of aminoglycosides and colistin in the waters of sewage treatment plants and hospitals. The lack of analytical methods for these compounds, and the scarcity of studies on their detection in water, motivates their study. Solid-phase extraction (SPE), in classic (offline) or online mode, followed by liquid chromatography coupled with tandem mass spectrometry (LC/MS/MS), is the method most commonly used for this type of analysis. The parameters are optimized and validated to ensure the best conditions for environmental analysis. This technique was applied to real samples from wastewater treatment plants in Bordeaux and Lebanon. (author)
An entropy flow optimization technique for helium liquefaction cycles
Minta, M.; Smith, J.L.
1984-01-01
This chapter proposes a new method of analyzing thermodynamic cycles based on a continuous distribution of precooling over the temperature range of the cycle. The method gives the optimum distribution of precooling by specifying the mass flow to be expanded at each temperature. The result is used to select a cycle configuration with discrete expansions and to initialize the independent variables for final optimization. Topics considered include the continuous precooling model, the results for ideal gas, the results for real gas, and the application to the design of a saturated vapor compression (SVC) cycle. The optimization technique for helium liquefaction cycles starts with the minimization of the generated entropy in a cycle model with continuous precooling. The pressure ratio, the pressure level, and the distribution of the heat exchange are selected based on the results of the continuous precooling analysis. It is concluded that the technique incorporates the non-ideal behavior of helium in the procedure and allows the trade-off between heat exchange area and losses to be determined.
Using simulation-optimization techniques to improve multiphase aquifer remediation
Finsterle, S.; Pruess, K. [Lawrence Berkeley Laboratory, Berkeley, CA (United States)
1995-03-01
The T2VOC computer model for simulating the transport of organic chemical contaminants in non-isothermal multiphase systems has been coupled to the ITOUGH2 code, which solves parameter optimization problems. This allows one to use linear programming and simulated annealing techniques to solve groundwater management problems, i.e., the optimization of operations for multiphase aquifer remediation. A cost function has to be defined, containing the actual and hypothetical expenses of a cleanup operation, which depend, directly or indirectly, on the state variables calculated by T2VOC. Subsequently, the code iteratively determines a remediation strategy (e.g., a pumping schedule) that minimizes, for instance, pumping and energy costs, the time for cleanup, and residual contamination. We present an illustrative sample problem to discuss potential applications of the code. The study shows that the techniques developed for estimating model parameters can be successfully applied to the solution of remediation management problems. The resulting optimum pumping scheme depends, however, on the formulation of the remediation goals and the relative weighting between individual terms of the cost function.
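The cost-function idea can be sketched with a toy surrogate standing in for the T2VOC simulator: a single pumping rate is tuned by simulated annealing against a cost that combines energy, cleanup time, and residual contamination. All coefficients and the cost shape below are hypothetical:

```python
import math
import random

def cleanup_cost(q):
    """Hypothetical surrogate cost of a pumping rate q: energy cost
    grows with q, cleanup time shrinks with q, and residual
    contamination is penalized. In the real workflow these terms come
    from state variables computed by the multiphase simulator."""
    energy = 2.0 * q                       # pumping/energy cost term
    time_cost = 50.0 / (1.0 + q)           # faster cleanup at higher rates
    residual = 30.0 * math.exp(-0.2 * q)   # remaining-contamination penalty
    return energy + time_cost + residual

def anneal(cost, lo, hi, steps=5000, t0=10.0, seed=1):
    """Plain simulated annealing over one decision variable."""
    rng = random.Random(seed)
    q = rng.uniform(lo, hi)
    best_q, best_c = q, cost(q)
    for k in range(steps):
        t = t0 * (1.0 - k / steps) + 1e-9            # linear cooling
        cand = min(max(q + rng.gauss(0.0, 0.5), lo), hi)
        delta = cost(cand) - cost(q)
        # accept improvements always, worsenings with Boltzmann probability
        if delta < 0.0 or rng.random() < math.exp(-delta / t):
            q = cand
        if cost(q) < best_c:
            best_q, best_c = q, cost(q)
    return best_q, best_c

q_opt, c_opt = anneal(cleanup_cost, 0.0, 20.0)
```

As the abstract notes, the optimum depends entirely on how the goals are weighted: changing the coefficients of the three terms moves `q_opt`.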
Optimized evaporation technique for leachate treatment: Small scale implementation.
Benyoucef, Fatima; Makan, Abdelhadi; El Ghmari, Abderrahman; Ouatmane, Aziz
2016-04-01
This paper introduces an optimized evaporation technique for leachate treatment. To study the feasibility and measure the effectiveness of forced evaporation, three cuboidal steel tubs were designed and implemented. The first, a control tub, was installed at ground level to monitor natural evaporation. The second and third tubs, the models under investigation, were installed at ground level (equipped tub 1) and out of the ground (equipped tub 2), respectively, and provided with special equipment to accelerate the evaporation process. The results showed that the evaporation rate at the equipped tubs was much higher than at the control tub. It was accelerated five-fold in the winter period, when the evaporation rate increased from 0.37 mm/day to 1.50 mm/day. In the summer period, the evaporation rate was accelerated more than three-fold, increasing from 3.06 mm/day to 10.25 mm/day. Overall, the optimized evaporation technique can be applied effectively under either electric or solar energy supply, and accelerates the evaporation rate three- to five-fold regardless of seasonal temperature. Copyright © 2016. Published by Elsevier Ltd.
Wei Li
2015-01-01
We propose a new optimization algorithm inspired by the formation and change of clouds in nature, referred to as the Cloud Particles Differential Evolution (CPDE) algorithm. The cloud is assumed to have three states: the gaseous state represents global exploration; the liquid state represents the intermediate process from global exploration to local exploitation; and the solid state represents local exploitation. The best solution found so far acts as a nucleus. In the gaseous state, the nucleus leads the population to explore through a condensation operation. In the liquid state, cloud particles carry out macro-local exploitation through a liquefaction operation. A new mutation strategy, called cloud differential mutation, is introduced to address the problem that the misleading effect of a nucleus may cause premature convergence. In the solid state, cloud particles carry out micro-local exploitation through a solidification operation. The effectiveness of the algorithm is validated on different benchmark problems, and the results are compared with eight well-known optimization algorithms. The statistical analysis of performance on 10 benchmark functions and the CEC2013 problems indicates that CPDE attains good performance.
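The differential mutation that CPDE builds on is the classic DE/rand/1 operator; a minimal sketch follows (the cloud-state condensation, liquefaction, and solidification operators themselves are not reproduced, and the population here is random toy data):

```python
import random

def de_rand_1(pop, f_weight, rng, lo, hi):
    """Classic DE/rand/1 mutation: v = x_r1 + F * (x_r2 - x_r3),
    with three distinct random population members, clamped to the
    search box."""
    r1, r2, r3 = rng.sample(range(len(pop)), 3)
    dim = len(pop[0])
    v = [pop[r1][d] + f_weight * (pop[r2][d] - pop[r3][d])
         for d in range(dim)]
    return [min(max(x, lo), hi) for x in v]

rng = random.Random(42)
pop = [[rng.uniform(-5.0, 5.0) for _ in range(2)] for _ in range(6)]
mutant = de_rand_1(pop, f_weight=0.5, rng=rng, lo=-5.0, hi=5.0)
```

The scale factor F (here 0.5) controls how far the difference vector perturbs the base vector; CPDE's cloud differential mutation modifies this scheme to counteract the nucleus's misleading effect.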
I. Nayak
2017-06-01
In the present research work, four multi-response optimization techniques, viz. the multiple-response signal-to-noise (MRSN) ratio, the weighted signal-to-noise (WSN) ratio, grey relational analysis (GRA), and the VIKOR (VlseKriterijumska Optimizacija I Kompromisno Resenje) method, have been used to simultaneously optimize the electro-discharge machining (EDM) performance characteristics material removal rate (MRR), tool wear rate (TWR), and surface roughness (SR). Experiments were planned on a D2 steel specimen based on an L9 orthogonal array, and the experimental results were analyzed using the standard procedure. The optimum level combinations of the input process parameters (voltage, current, pulse-on time, and pulse-off time) and the percentage contribution of each process parameter were determined using the ANOVA technique. Correlations were developed between the various input process parameters and the output performance characteristics. Finally, the optimum performances of the four methods were compared; the results show that the WSN ratio method is the best multi-response optimization technique for this process. The analysis also shows that current has the greatest effect on overall EDM performance compared with the other process parameters.
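A sketch of the WSN computation: Taguchi larger-the-better and smaller-the-better S/N ratios per response, normalized across runs and combined with weights. The response data and the equal weighting below are hypothetical; a real study chooses weights to reflect priorities among MRR, TWR, and SR:

```python
import math

def sn_larger(values):
    # larger-the-better S/N: -10 * log10( mean(1 / y^2) )
    n = len(values)
    return -10.0 * math.log10(sum(1.0 / (y * y) for y in values) / n)

def sn_smaller(values):
    # smaller-the-better S/N: -10 * log10( mean(y^2) )
    n = len(values)
    return -10.0 * math.log10(sum(y * y for y in values) / n)

def wsn(sn_by_response, weights):
    """Normalize each response's S/N ratios over all runs to [0, 1],
    then take the weighted sum per run (the WSN score)."""
    norm = {}
    for name, sns in sn_by_response.items():
        lo, hi = min(sns), max(sns)
        norm[name] = [(s - lo) / (hi - lo) if hi > lo else 0.0 for s in sns]
    runs = len(next(iter(norm.values())))
    return [sum(weights[n] * norm[n][r] for n in norm) for r in range(runs)]

# three hypothetical runs, two replicates each:
mrr = [[12.0, 13.1], [18.2, 17.5], [9.8, 10.4]]   # maximize
twr = [[0.8, 0.9], [1.5, 1.4], [0.6, 0.7]]        # minimize
sr = [[3.2, 3.0], [4.1, 4.4], [2.7, 2.9]]         # minimize
sn = {"MRR": [sn_larger(r) for r in mrr],
      "TWR": [sn_smaller(r) for r in twr],
      "SR": [sn_smaller(r) for r in sr]}
scores = wsn(sn, {"MRR": 1 / 3, "TWR": 1 / 3, "SR": 1 / 3})
best_run = max(range(len(scores)), key=lambda r: scores[r])
```

Because MRR enters as larger-the-better and TWR/SR as smaller-the-better, a run that trades a little removal rate for much lower wear and roughness can win the combined score.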
Liang, Bin; Li, Yongbao; Wei, Ran; Guo, Bin; Xu, Xuang; Liu, Bo; Li, Jiafeng; Wu, Qiuwen; Zhou, Fugen
2018-01-01
With robot-controlled linac positioning, robotic radiotherapy systems such as CyberKnife significantly increase the freedom of radiation beam placement, but also impose more challenges on treatment plan optimization. The resampling mechanism in the vendor-supplied treatment planning system (MultiPlan) cannot fully explore the increased beam direction search space. Besides, a sparse treatment plan (using fewer beams) is desired to improve treatment efficiency. This study proposes a singular value decomposition linear programming (SVDLP) optimization technique for circular-collimator-based robotic radiotherapy. The SVDLP approach initializes the input beams by simulating the process of covering the entire target volume with equivalent beam tapers. The requirements on the dosimetric distribution are modeled as hard and soft constraints, and the sparsity of the treatment plan is achieved by compressive sensing. The proposed linear programming (LP) model optimizes beam weights by minimizing the deviation from the soft constraints subject to the hard constraints, with a constraint on the l1 norm of the beam weight. A singular value decomposition (SVD) based acceleration technique was developed for the LP model. Based on the degeneracy of the influence matrix, the model is first compressed into a lower dimension for optimization, and then back-projected to reconstruct the beam weights. After beam weight optimization, the number of beams is reduced by removing the beams with low weight and optimizing the weights of the remaining beams using the same model. This beam reduction technique is further validated by a mixed integer programming (MIP) model. The SVDLP approach was tested on a lung case. The results demonstrate that the SVD acceleration technique speeds up the optimization by a factor of 4.8. Furthermore, the beam reduction achieves a plan quality similar to that of the globally optimal plan obtained by the MIP model, but is one to two orders of magnitude faster. Furthermore, the SVDLP
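The compress-then-back-project idea can be sketched on a toy, near-low-rank influence matrix. The LP constraints are omitted here; a truncated least-squares solve stands in for the weight optimization, and all matrix sizes and the rank threshold are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
# toy influence matrix: 200 voxels x 50 beams, built to be near rank-8
D = rng.standard_normal((200, 8)) @ rng.standard_normal((8, 50))
D += 1e-6 * rng.standard_normal((200, 50))       # small numerical noise

# compress: keep only the dominant singular modes (matrix degeneracy)
U, s, Vt = np.linalg.svd(D, full_matrices=False)
r = int(np.sum(s > 1e-3 * s[0]))                 # effective rank
Ur, sr, Vtr = U[:, :r], s[:r], Vt[:r, :]

d_target = rng.standard_normal(200)              # toy prescribed dose
# solve in the r-dimensional mode space, then back-project the weights
w = Vtr.T @ ((Ur.T @ d_target) / sr)

# reference: truncated pseudo-inverse solve on the full matrix
w_full = np.linalg.pinv(D, rcond=1e-3) @ d_target
```

The payoff is dimensional: the optimization runs over r mode coefficients instead of the full beam count, and the back-projection `Vtr.T @ y` recovers beam weights, mirroring the acceleration described in the abstract.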
Teo, Jing Chun; Foin, Nicolas; Otsuka, Fumiyuki; Bulluck, Heerajnarain; Fam, Jiang Ming; Wong, Philip; Low, Fatt Hoe; Leo, Hwa Liang; Mari, Jean-Martial; Joner, Michael; Girard, Michael J A; Virmani, Renu
2016-01-01
PURPOSE To optimize conventional coronary optical coherence tomography (OCT) images using the attenuation-compensated technique to improve identification of plaques and the external elastic lamina (EEL) contour. METHOD The attenuation-compensated technique was optimized via manipulating contrast
SU-E-J-130: Automating Liver Segmentation Via Combined Global and Local Optimization
Li, Dengwang; Wang, Jie [College of Physics and Electronics, Shandong Normal University, Jinan, Shandong (China); Kapp, Daniel S.; Xing, Lei [Department of Radiation Oncology, Stanford University School of Medicine, Stanford, CA (United States)
2015-06-15
Purpose: The aim of this work is to develop a robust algorithm for accurate segmentation of the liver, with special attention paid to problems with fuzzy edges and tumors. Methods: 200 CT images were collected from a radiotherapy treatment planning system. 150 datasets were selected as the panel data for the shape dictionary and parameter estimation; the remaining 50 datasets were used as test images. In our study, liver segmentation was formulated as the optimization of an implicit function, with the liver region optimized via local and global optimization during iterations. Our method consists of five steps: 1) The livers from the panel data were segmented manually by physicians, and we then estimated the parameters of a GMM (Gaussian mixture model) and MRF (Markov random field); a shape dictionary was built from the 3D liver shapes. 2) The outlines of the chest and abdomen were located according to rib structure in the input images, and the liver region was initialized based on the GMM. 3) The liver shape for each 2D slice was adjusted using the MRF within the neighborhood of the liver edge, for local optimization. 4) The 3D liver shape was corrected by employing SSR (sparse shape representation) based on the liver shape dictionary, for global optimization; H-PSO (hybrid particle swarm optimization) was employed to solve the SSR equation. 5) The corrected 3D liver was divided into 2D slices as input data for the third step. The iteration alternated between the local and global optimization until the stopping conditions (maximum iterations and rate of change) were satisfied. Results: The experiments indicated that our method performed well even for CT images with fuzzy edges and tumors. Compared with physician-delineated results, the segmentation accuracy on the 50 test datasets (VOE, volume overlap percentage) was on average 91%–95%. Conclusion: The proposed automatic segmentation method provides a sensible technique for segmentation of CT images. This work is
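Step 4's use of a PSO-type solver can be sketched as follows: a bare-bones particle swarm (the hybrid local moves of H-PSO are not reproduced) minimizes a toy sparse-shape-representation objective. The dictionary, target shape, and penalty weight below are hypothetical:

```python
import random

def pso_minimize(f, dim, bounds, n_particles=20, iters=150, seed=3):
    """Bare-bones particle swarm optimization: each particle tracks its
    personal best and is pulled toward the global best."""
    rng = random.Random(seed)
    lo, hi = bounds
    X = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    V = [[0.0] * dim for _ in range(n_particles)]
    P = [x[:] for x in X]                       # personal bests
    g = min(P, key=f)[:]                        # global best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                V[i][d] = (0.7 * V[i][d]
                           + 1.5 * rng.random() * (P[i][d] - X[i][d])
                           + 1.5 * rng.random() * (g[d] - X[i][d]))
                X[i][d] = min(max(X[i][d] + V[i][d], lo), hi)
            if f(X[i]) < f(P[i]):
                P[i] = X[i][:]
                if f(P[i]) < f(g):
                    g = P[i][:]
    return g

# toy SSR-style objective: choose coefficients c so that D @ c matches
# a target shape vector s, with an l1 penalty encouraging sparsity
D = [[1.0, 0.0, 0.5], [0.0, 1.0, 0.5], [1.0, 1.0, 0.0]]   # 3 toy atoms
s = [1.0, 2.0, 3.0]
lam = 0.01

def ssr_obj(c):
    resid = 0.0
    for row, target in zip(D, s):
        pred = sum(r * cj for r, cj in zip(row, c))
        resid += (pred - target) ** 2
    return resid + lam * sum(abs(cj) for cj in c)

c_best = pso_minimize(ssr_obj, dim=3, bounds=(-5.0, 5.0))
```

In the paper the dictionary atoms are 3D liver shapes and the target is the locally adjusted segmentation, so the sparse coefficients pick out the few dictionary shapes that best explain the current liver.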
Sugny, D.; Bomble, L.; Ribeyre, T.; Dulieu, O.; Desouter-Lecomte, M.
2009-01-01
Implementation of quantum controlled-NOT (CNOT) gates in realistic molecular systems is studied using stimulated Raman adiabatic passage (STIRAP) techniques optimized in the time domain by genetic algorithms or coupled with optimal control theory. In the first case, with an adiabatic solution (a series of STIRAP processes) as the starting point, we optimize different pulse parameters in the time domain to obtain a high fidelity in the two realistic cases under consideration. A two-qubit CNOT gate constructed from different assignments in rovibrational states is considered in diatomic (NaCs) or polyatomic (SCCl2) molecules. The difficulty of encoding logical states in pure rotational states with STIRAP processes is illustrated. In such circumstances, the gate can be implemented by optimal control theory, and the STIRAP sequence can then be used as an interesting trial field. We discuss the relative merits of the two methods for rovibrational computing (structure of the control field, duration of the control, and efficiency of the optimization).
Airfoil shape optimization using non-traditional optimization technique and its validation
R. Mukesh
2014-07-01
Computational fluid dynamics (CFD) is one of the computer-based solution methods most widely employed in aerospace engineering. The computational power and time required to carry out an analysis increase with its fidelity. Aerodynamic shape optimization has become a vital part of aircraft design in recent years. To optimize an airfoil we must first describe it, and doing so with raw coordinates requires at least a hundred x–y points; optimizing over that many coordinates is impractical. Many parameterization schemes are therefore used to describe a general airfoil, such as B-splines and PARSEC. The main goal of these schemes is to keep the number of parameters as small as possible while effectively controlling the important aerodynamic features. Here the work is based on the PARSEC geometry representation method. The objective of this work is to describe a general airfoil using twelve parameters by representing its shape as a polynomial function. We also apply a genetic algorithm to optimize the aerodynamic characteristics of a general airfoil for specific conditions. A MATLAB program has been developed implementing PARSEC, the panel technique, and the genetic algorithm. The program was tested on a standard NACA 2411 airfoil and used to optimize its coefficient of lift. Pressure distributions and lift coefficients for the airfoil geometries were calculated using the panel method. The optimized airfoil has an improved coefficient of lift compared with the original, and is validated using wind-tunnel data.
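The PARSEC idea of expressing a surface as a polynomial in half-integer powers of x can be sketched as follows. This is a simplified single-surface variant, not the full twelve-parameter airfoil, and the constraint values below are illustrative, not a fitted NACA 2411 section:

```python
import numpy as np

def parsec_surface(r_le, x_up, z_up, z_xx_up, z_te, alpha_te):
    """Solve for the six coefficients of one PARSEC-style surface,
    y(x) = sum_{n=1..6} a_n * x**(n - 0.5), from geometric constraints:
    leading-edge radius, crest location/height/curvature, and
    trailing-edge height and slope."""
    exps = np.arange(1, 7) - 0.5           # exponents 0.5, 1.5, ..., 5.5

    def row(x, d):
        # one row of the linear system: d-th derivative of y at x
        out = []
        for e in exps:
            c = 1.0
            for k in range(d):
                c *= e - k
            out.append(c * x ** (e - d))
        return np.array(out)

    a1 = np.sqrt(2.0 * r_le)               # fixed by leading-edge radius
    conds = [
        (1.0, 0, z_te),                    # y(1)  = trailing-edge height
        (x_up, 0, z_up),                   # y(x_up) = crest height
        (x_up, 1, 0.0),                    # y'(x_up) = 0 at the crest
        (x_up, 2, z_xx_up),                # y''(x_up) = crest curvature
        (1.0, 1, np.tan(alpha_te)),        # y'(1) = trailing-edge slope
    ]
    A = np.array([row(x, d) for x, d, _ in conds])
    b = np.array([rhs for _, _, rhs in conds]) - A[:, 0] * a1
    a = np.concatenate([[a1], np.linalg.solve(A[:, 1:], b)])
    return exps, a

def surface_y(exps, a, x):
    return float(np.sum(a * x ** exps))

# illustrative (hypothetical) upper-surface parameters
exps, a = parsec_surface(r_le=0.01, x_up=0.3, z_up=0.06,
                         z_xx_up=-0.45, z_te=0.0, alpha_te=-0.05)
```

A genetic algorithm then searches over these few geometric parameters rather than hundreds of raw coordinates, which is exactly what makes the optimization tractable.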
Automatic Construction and Global Optimization of a Multisentiment Lexicon
Xiaoping Yang
2016-01-01
Manual annotation of sentiment lexicons costs a great deal of labor and time, and it is difficult to quantify emotional intensity accurately. Moreover, excessive emphasis on one specific field has greatly limited the applicability of domain sentiment lexicons (Wang et al., 2010). This paper applies statistical training to a large-scale Chinese corpus through a neural-network language model and proposes an automatic method of constructing a multidimensional sentiment lexicon based on constraints of coordinate offset. To distinguish the sentiment polarities of words that may express either positive or negative meanings in different contexts, we further present a sentiment disambiguation algorithm to increase the flexibility of our lexicon. Lastly, we present a global optimization framework that provides a unified way to combine several human-annotated resources for learning our 10-dimensional sentiment lexicon, SentiRuc. Experiments show the superior performance of the SentiRuc lexicon in category labeling, intensity labeling, and sentiment classification tasks. Notably, in the intensity labeling test, SentiRuc outperforms the second-best result by 21 percent.
A DE-Based Scatter Search for Global Optimization Problems
Kun Li
2015-01-01
This paper proposes a hybrid scatter search (SS) algorithm for continuous global optimization problems that incorporates the evolution mechanism of differential evolution (DE) into the reference-set update procedure of SS as the new solution generation method. This hybrid algorithm is called DE-based SS (SSDE). Since different DE mutation operators have been proposed in the literature, showing different search abilities on different kinds of problems, four traditional mutation operators are adopted in the hybrid SSDE algorithm. To adaptively select the mutation operator most appropriate to the current problem, an adaptive mechanism for the candidate mutation operators is developed. In addition, to enhance the exploration ability of SSDE, a reinitialization method is adopted to create a new population, and subsequently construct a new reference set, whenever the search process of SSDE is trapped in a local optimum. Computational experiments on benchmark problems show that the proposed SSDE is competitive with or superior to some state-of-the-art algorithms in the literature.
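The adaptive operator-selection mechanism can be sketched as probability matching over candidate mutation operators: operators that recently produced improving offspring are picked more often. The update rule and constants below are generic assumptions, not the exact SSDE mechanism:

```python
import random

class AdaptiveOperatorSelector:
    """Probability-matching selection among candidate operators, with
    exponential forgetting of past credit and a floor probability so no
    operator is starved of trials."""
    def __init__(self, names, p_min=0.1, decay=0.9, seed=0):
        self.names = list(names)
        self.p_min = p_min
        self.decay = decay
        self.credit = {n: 1.0 for n in self.names}
        self.rng = random.Random(seed)

    def pick(self):
        total = sum(self.credit.values())
        k = len(self.names)
        probs = [self.p_min / k + (1 - self.p_min) * self.credit[n] / total
                 for n in self.names]
        r, acc = self.rng.random(), 0.0
        for n, p in zip(self.names, probs):
            acc += p
            if r <= acc:
                return n
        return self.names[-1]

    def reward(self, name, improved):
        # decay old credit, add a unit reward on an improving offspring
        self.credit[name] = self.decay * self.credit[name] \
            + (1.0 if improved else 0.0)

sel = AdaptiveOperatorSelector(["rand/1", "best/1", "rand/2",
                                "current-to-best/1"])
for _ in range(200):
    op = sel.pick()
    sel.reward(op, improved=(op == "best/1"))   # pretend one op dominates
counts = sum(1 for _ in range(1000) if sel.pick() == "best/1")
```

After the simulated feedback phase, the dominating operator is selected in the large majority of trials, while the floor probability keeps the other three operators alive in case the landscape changes.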
Essays on variational approximation techniques for stochastic optimization problems
Deride Silva, Julio A.
This dissertation presents five essays on approximation and modeling techniques, based on variational analysis, applied to stochastic optimization problems. It is divided into two parts, where the first is devoted to equilibrium problems and maxinf optimization, and the second corresponds to two essays in statistics and uncertainty modeling. Stochastic optimization lies at the core of this research as we were interested in relevant equilibrium applications that contain an uncertain component, and the design of a solution strategy. In addition, every stochastic optimization problem relies heavily on the underlying probability distribution that models the uncertainty. We studied these distributions, in particular, their design process and theoretical properties such as their convergence. Finally, the last aspect of stochastic optimization that we covered is the scenario creation problem, in which we described a procedure based on a probabilistic model to create scenarios for the applied problem of power estimation of renewable energies. In the first part, Equilibrium problems and maxinf optimization, we considered three Walrasian equilibrium problems: from economics, we studied a stochastic general equilibrium problem in a pure exchange economy, described in Chapter 3, and a stochastic general equilibrium with financial contracts, in Chapter 4; finally from engineering, we studied an infrastructure planning problem in Chapter 5. We stated these problems as belonging to the maxinf optimization class and, in each instance, we provided an approximation scheme based on the notion of lopsided convergence and non-concave duality. This strategy is the foundation of the augmented Walrasian algorithm, whose convergence is guaranteed by lopsided convergence, that was implemented computationally, obtaining numerical results for relevant examples. The second part, Essays about statistics and uncertainty modeling, contains two essays covering a convergence problem for a sequence
Pozo, Carlos; Marín-Sanguino, Alberto; Alves, Rui; Guillén-Gosálbez, Gonzalo; Jiménez, Laureano; Sorribas, Albert
2011-08-25
Design of newly engineered microbial strains for biotechnological purposes would greatly benefit from the development of realistic mathematical models for the processes to be optimized. Such models can then be analyzed and, with the development and application of appropriate optimization techniques, one could identify the modifications that need to be made to the organism in order to achieve the desired biotechnological goal. As appropriate models to perform such an analysis are necessarily non-linear and typically non-convex, finding their global optimum is a challenging task. Canonical modeling techniques, such as Generalized Mass Action (GMA) models based on the power-law formalism, offer a possible solution to this problem because they have a mathematical structure that enables the development of specific algorithms for global optimization. Based on the GMA canonical representation, we have developed in previous works a highly efficient optimization algorithm and a set of related strategies for understanding the evolution of adaptive responses in cellular metabolism. Here, we explore the possibility of recasting kinetic non-linear models into an equivalent GMA model, so that global optimization on the recast GMA model can be performed. With this technique, optimization is greatly facilitated and the results are transposable to the original non-linear problem. This procedure is straightforward for a particular class of non-linear models known as Saturable and Cooperative (SC) models that extend the power-law formalism to deal with saturation and cooperativity. Our results show that recasting non-linear kinetic models into GMA models is indeed an appropriate strategy that helps overcoming some of the numerical difficulties that arise during the global optimization task.
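The recasting step can be illustrated on a single Michaelis-Menten rate law: introducing an auxiliary variable w = Km + S turns the rational rate into a product of power laws, i.e., the GMA canonical form. The sketch below (hypothetical parameter values, plain Euler integration) checks that the recast system reproduces the original trajectory:

```python
def simulate(dt=1e-4, t_end=2.0, vmax=1.0, km=0.5, s0=2.0):
    """Recasting a Michaelis-Menten rate law into GMA (power-law) form.
    Direct model:  dS/dt = -vmax * S / (km + S)
    Recast model:  with w = km + S, the rate is the product of power
    laws -vmax * S**1 * w**-1, and dw/dt = dS/dt keeps w consistent.
    Plain Euler stepping; all parameter values are illustrative."""
    s_direct = s0
    s_gma, w = s0, km + s0
    for _ in range(int(t_end / dt)):
        s_direct += dt * (-vmax * s_direct / (km + s_direct))
        rate = -vmax * s_gma ** 1.0 * w ** -1.0   # GMA: product of powers
        s_gma += dt * rate
        w += dt * rate                            # w tracks km + S exactly
    return s_direct, s_gma

s_direct, s_gma = simulate()
```

The recast model is exactly equivalent, not an approximation: the difference w - S stays fixed at Km, so every GMA trajectory matches the original kinetic model while exposing the power-law structure that the global optimization algorithms exploit.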
Ye, Zhiwei; Wang, Mingwei; Hu, Zhengbing; Liu, Wei
2015-01-01
Image enhancement is an important procedure in image processing and analysis. This paper presents a new technique that uses a modified quality measure and a blend of cuckoo search and particle swarm optimization (CS-PSO) to adaptively enhance low-contrast images. Contrast enhancement is obtained by a global transformation of the input intensities: the incomplete Beta function serves as the transformation function, and a novel criterion measures image quality from three factors, namely the threshold, the entropy value, and the gray-level probability density of the image. The enhancement process is a nonlinear optimization problem with several constraints. CS-PSO is utilized to maximize the objective fitness criterion in order to enhance the contrast and detail in an image by adapting the parameters of a novel extension of a local enhancement technique. The performance of the proposed method has been compared, in terms of processing time and image quality, with existing techniques such as linear contrast stretching, histogram equalization, and evolutionary-computing-based image enhancement methods including the backtracking search algorithm, the differential search algorithm, the genetic algorithm, and particle swarm optimization. Experimental results demonstrate that the proposed method is robust and adaptive and exhibits better performance than the other methods considered in the paper.
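As a rough illustration of the optimization loop, and not the authors' exact criterion, the sketch below runs a minimal PSO over the gain of a sigmoid intensity stretch (a simple stand-in for the incomplete Beta transformation) and scores candidates by the standard deviation of the output image, a crude contrast proxy. All parameter values and the synthetic image are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic low-contrast image: intensities squeezed into [0.4, 0.6]
img = 0.4 + 0.2 * rng.random((64, 64))

def enhance(image, alpha):
    # Sigmoid stretch as a stand-in for the incomplete Beta transform
    return 1.0 / (1.0 + np.exp(-alpha * (image - image.mean())))

def fitness(alpha):
    # Contrast proxy; the paper's criterion combines threshold,
    # entropy, and gray-level probability density instead
    return enhance(img, alpha).std()

# Minimal PSO over the single gain parameter alpha in [1, 50]
n, iters = 10, 30
pos = rng.uniform(1, 50, n); vel = np.zeros(n)
pbest = pos.copy(); pval = np.array([fitness(p) for p in pos])
gbest = pbest[pval.argmax()]
for _ in range(iters):
    r1, r2 = rng.random(n), rng.random(n)
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, 1, 50)
    val = np.array([fitness(p) for p in pos])
    better = val > pval
    pbest[better], pval[better] = pos[better], val[better]
    gbest = pbest[pval.argmax()]

print(gbest, fitness(gbest), img.std())
```

The hybrid CS-PSO of the paper adds cuckoo-search Lévy-flight moves to this basic swarm update; the skeleton of the fitness-driven search is the same.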
Techniques for optimizing nanotips derived from frozen taylor cones
Hirsch, Gregory
2017-12-05
Optimization techniques are disclosed for producing sharp and stable tips/nanotips relying on liquid Taylor cones created from electrically conductive materials with high melting points. A wire substrate of such a material with a preform end in the shape of a regular or concave cone, is first melted with a focused laser beam. Under the influence of a high positive potential, a Taylor cone in a liquid/molten state is formed at that end. The cone is then quenched upon cessation of the laser power, thus freezing the Taylor cone. The tip of the frozen Taylor cone is reheated by the laser to allow its precise localized melting and shaping. Tips thus obtained yield desirable end-forms suitable as electron field emission sources for a variety of applications. In-situ regeneration of the tip is readily accomplished. These tips can also be employed as regenerable bright ion sources using field ionization/desorption of introduced chemical species.
Optimization Techniques for Dimensionally Truncated Sparse Grids on Heterogeneous Systems
Deftu, A.
2013-02-01
Given the existing heterogeneous processor landscape dominated by CPUs and GPUs, topics such as programming productivity and performance portability have become increasingly important. In this context, an important question is how we can develop optimization strategies that cover both CPUs and GPUs. We answer this for fastsg, a library that provides functionality for handling high-dimensional functions efficiently. As it can be employed for compressing and decompressing large-scale simulation data, it finds itself at the core of a computational steering application which serves us as a test case. We describe our experience with implementing fastsg's time-critical routines for Intel CPUs and Nvidia Fermi GPUs. We show the differences and especially the similarities between our optimization strategies for the two architectures. With regard to our test case, for which achieving high speedups is a "must" for real-time visualization, we report a speedup of up to 6.2x compared to the state-of-the-art implementation of the sparse grid technique for GPUs. © 2013 IEEE.
Global-Local Analysis and Optimization of a Composite Civil Tilt-Rotor Wing
Rais-Rohani, Masoud
1999-01-01
This report gives highlights of an investigation on the design and optimization of a thin composite wing box structure for a civil tilt-rotor aircraft. Two different concepts are considered for the cantilever wing: (a) a thin monolithic skin design, and (b) a thick sandwich skin design. Each concept is examined with three different skin ply patterns based on various combinations of 0, +/-45, and 90 degree plies. The global-local technique is used in the analysis and optimization of the six design models. The global analysis is based on a finite element model of the wing-pylon configuration, while the local analysis uses a uniformly supported plate representing a wing panel. Design constraints include those on vibration frequencies, panel buckling, and material strength. The design optimization problem is formulated as one of minimizing the structural weight subject to strength, stiffness, and dynamic constraints. Six different loading conditions based on three different flight modes are considered in the design optimization. The results of this investigation reveal that of all the loading conditions the one corresponding to the rolling pull-out in the airplane mode is the most stringent. Also, the frequency constraints are found to drive the skin thickness limits, rendering the buckling constraints inactive. The optimum skin ply pattern for the monolithic skin concept is found to be ((0/+/-45/90/(0/90)_2)_s)_s, while for the sandwich skin concept the optimal ply pattern is found to be ((0/+/-45/90)_2s)_s.
Po-Chen Cheng
2015-06-01
In this paper, an asymmetrical fuzzy-logic-control (FLC)-based maximum power point tracking (MPPT) algorithm for photovoltaic (PV) systems is presented. Two membership function (MF) design methodologies that can improve the effectiveness of the proposed asymmetrical FLC-based MPPT methods are then proposed. The first method can quickly determine the input MF setting values via the power-voltage (P-V) curve of solar cells under standard test conditions (STC). The second method uses the particle swarm optimization (PSO) technique to optimize the input MF setting values. Because the PSO approach must target and optimize a cost function, a cost function design methodology that meets the performance requirements of practical photovoltaic generation systems (PGSs) is also proposed. According to the simulated and experimental results, the proposed asymmetrical FLC-based MPPT method has the highest fitness value and can therefore successfully resolve the tracking speed/tracking accuracy dilemma, in contrast to the traditional perturb and observe (P&O) and symmetrical FLC-based MPPT algorithms. Compared to the conventional FLC-based MPPT method, the obtained optimal asymmetrical FLC-based MPPT can improve the transient time and the MPPT tracking accuracy by 25.8% and 0.98% under STC, respectively.
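For reference, the conventional P&O baseline against which such FLC-based trackers are compared can be sketched in a few lines. The parabolic P-V curve, the step size, and the MPP values below are hypothetical stand-ins for a real module.

```python
# Perturb-and-observe MPPT on a synthetic single-peak P-V curve.
# VMPP/PMAX and the perturbation step are hypothetical values.
VMPP, PMAX = 17.0, 60.0

def pv_power(v):
    return max(0.0, PMAX - 0.5 * (v - VMPP) ** 2)

def perturb_and_observe(v0=12.0, step=0.2, iters=100):
    v, direction = v0, +1
    p_prev = pv_power(v)
    for _ in range(iters):
        v += direction * step        # perturb the operating voltage
        p = pv_power(v)
        if p < p_prev:               # power dropped: reverse direction
            direction = -direction
        p_prev = p
    return v

v_final = perturb_and_observe()
print(v_final)
```

The fixed step illustrates the classic P&O dilemma the abstract refers to: a large step tracks fast but oscillates around the MPP, a small step is accurate but slow, which is what the asymmetrical FLC tuning aims to resolve.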
Hediyeh Karimi
2013-01-01
It has been predicted that graphene nanomaterials will be among the candidate materials for post-silicon electronics due to their astonishing properties such as high carrier mobility, thermal conductivity, and biocompatibility. Graphene is a zero-gap semimetal nanomaterial with a demonstrated ability to serve as an excellent candidate for DNA sensing. Graphene-based DNA sensors have been used to detect DNA adsorption and thereby examine the DNA concentration in an analyte solution. In particular, there is an essential need for developing cost-effective DNA sensors, given their suitability for the diagnosis of genetic or pathogenic diseases. In this paper, the particle swarm optimization technique is employed to optimize the analytical model of a graphene-based DNA sensor used for the electrical detection of DNA molecules. The results are reported for 5 different concentrations, covering a range from 0.01 nM to 500 nM. The comparison of the optimized model with the experimental data shows an accuracy of more than 95%, which verifies that the optimized model is reliable for use in any application of the graphene-based DNA sensor.
Optimal technique for deep breathing exercises after cardiac surgery.
Westerdahl, E
2015-06-01
Cardiac surgery patients often develop a restrictive pulmonary impairment and gas exchange abnormalities in the early postoperative period. Chest physiotherapy is routinely prescribed in order to reduce or prevent these complications. Besides early mobilization, positioning, and shoulder girdle exercises, various breathing exercises have been implemented as a major component of postoperative care. A variety of deep breathing maneuvers are recommended to the spontaneously breathing patient to reduce atelectasis and improve lung function in the early postoperative period. Different breathing exercises are recommended in different parts of the world, and there is no consensus about the most effective breathing technique after cardiac surgery. Arbitrary instructions are given, and recommendations on performance and duration vary between hospitals. Deep breathing exercises are a major part of this therapy, but scientific evidence for their efficacy has been lacking until recently, and there is a lack of trials describing how postoperative breathing exercises should actually be performed. The purpose of this review is to provide a brief overview of postoperative breathing exercises for patients undergoing cardiac surgery via sternotomy, and to discuss and suggest an optimal technique for the performance of deep breathing exercises.
A Monte Carlo simulation technique to determine the optimal portfolio
Hassan Ghodrati
2014-03-01
During the past few years, there have been several studies on portfolio management. One of the primary concerns in any stock market is to assess the risk associated with various assets. One recognized method for measuring, forecasting, and managing risk is Value at Risk (VaR), which has drawn much attention from financial institutions in recent years. VaR is a method for recognizing and evaluating risk that uses standard statistical techniques, and it has increasingly been applied in other fields as well. The present study measured the value at risk of 26 companies from the chemical industry on the Tehran Stock Exchange over the period 2009-2011 using Monte Carlo simulation at the 95% confidence level. The variable used in this study was the daily return resulting from daily changes in the stock price. Moreover, the optimal investment weight for each selected stock was determined using a hybrid Markowitz-Winker model. The results showed that, at the 95% confidence level, the maximum loss would not exceed 1,259,432 Rials on the following day.
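The core Monte Carlo VaR computation can be sketched minimally as follows. The normal return model and its parameters are illustrative assumptions, not the Tehran Stock Exchange data used in the study.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical daily returns: mean 0, sigma = 2% (normal, for illustration)
mu, sigma, n = 0.0, 0.02, 200_000
simulated = rng.normal(mu, sigma, n)

# 95% one-day VaR: the loss that is not exceeded with 95% confidence,
# read off as the (negated) 5th percentile of the simulated returns
var_95 = -np.percentile(simulated, 5)

# Sanity check: for a normal distribution, VaR_95 is about 1.645 * sigma
print(var_95, 1.645 * sigma)
```

In practice the simulated returns would be driven by the estimated distribution (or bootstrapped history) of each asset, and the portfolio VaR would combine them through the optimized Markowitz weights.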
Global Optimization Employing Gaussian Process-Based Bayesian Surrogates
Roland Preuss
2018-03-01
The simulation of complex physics models may lead to enormous computer running times. Since the simulations are expensive, it is necessary to exploit the computational budget in the best possible manner. If an output data set has been acquired for a few input parameter settings, one may wish to take these data as a basis for finding an extremum, and possibly an input parameter set for further computer simulations to determine it; this task belongs to the realm of global optimization. Within the Bayesian framework, we utilize Gaussian processes to create a surrogate model function, adjusted self-consistently via hyperparameters to represent the data. Although the probability distribution of the hyperparameters may be widely spread over phase space, we assume that using only their expectation values is sufficient. While this shortcut yields a quickly accessible surrogate, it is justified by the fact that we are not interested in a full representation of the model by the surrogate but only in revealing its maximum. To accomplish this, the surrogate is fed to a utility function whose extremum determines the new parameter set for the next data point to obtain. Moreover, we propose to alternate between two utility functions, expected improvement and maximum variance, in order to avoid the drawbacks of each. Subsequent data points are drawn from the model function until the procedure either remains at the points already found or the surrogate model no longer changes with the iteration. The procedure is applied to mock data in one and two dimensions in order to demonstrate proof of principle of the proposed approach.
Globally optimal, minimum stored energy, double-doughnut superconducting magnets.
Tieng, Quang M; Vegh, Viktor; Brereton, Ian M
2010-01-01
The use of the minimum stored energy current density map-based methodology for designing closed-bore symmetric superconducting magnets was described recently. The technique is further developed here to cater for the design of interventional-type MRI systems, in particular open symmetric magnets of the double-doughnut configuration. This extends the work to multiple magnet domain configurations. The use of double-doughnut magnets in MRI scanners has previously been hindered by the inability to deliver strong magnetic fields over a sufficiently large volume appropriate for imaging, essentially limiting spatial resolution, signal-to-noise ratio, and field of view. The requirement of dedicated interventional space restricts the manner in which the coils can be arranged and placed. The minimum stored energy optimal coil arrangement ensures that the field strength is maximized over a specific region of imaging. The design method yields open, dual-domain magnets capable of delivering greater field strengths than those used prior to this work, and at the same time it provides an increase in the field-of-view volume. Simulation results are provided for 1-T double-doughnut magnets with at least a 50-cm 1-ppm (parts per million) field of view and a 0.7-m gap between the two doughnuts. Copyright (c) 2009 Wiley-Liss, Inc.
Global stability-based design optimization of truss structures using ...
Furthermore, a pure Pareto-ranking-based multi-objective optimization model is employed for the design optimization of the truss structure with multiple objectives. The computational performance of the optimization model is increased by implementing an island model in its evolutionary search mechanism. The proposed ...
Feng Zou
2016-01-01
An improved teaching-learning-based optimization combined with the social character of PSO (TLBO-PSO), which considers the teacher's behavioral influence on the students and the mean grade of the class, is proposed in this paper to find the global solutions of function optimization problems. In this method, the teacher phase of TLBO is modified: the new position of an individual is determined by its old position, the mean position, and the best position of the current generation. The method overcomes the disadvantage that the evolution of the original TLBO might stop when the mean position of the students equals the position of the teacher. To decrease the computational cost of the algorithm, the process of removing duplicate individuals in the original TLBO is not adopted in the improved algorithm. Moreover, the probability of local convergence of the improved method is decreased by a mutation operator. The effectiveness of the proposed method is tested on several benchmark functions, and the results are competitive with respect to some other methods.
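For context, the standard TLBO iteration (teacher phase plus learner phase, with greedy selection) that the proposal modifies can be sketched as follows. The population size, dimension, iteration count, and sphere benchmark are illustrative choices; this is the original scheme, not the proposed TLBO-PSO variant.

```python
import numpy as np

rng = np.random.default_rng(1)

def sphere(x):                          # benchmark objective (minimize)
    return (x ** 2).sum(axis=-1)

pop = rng.uniform(-5.0, 5.0, (20, 4))   # 20 students, 4 design variables
for _ in range(200):
    # Teacher phase: move students toward the best individual (the
    # "teacher"), away from TF times the class mean
    fit = sphere(pop)
    teacher, mean = pop[fit.argmin()], pop.mean(axis=0)
    TF = rng.integers(1, 3)             # teaching factor, 1 or 2
    cand = pop + rng.random(pop.shape) * (teacher - TF * mean)
    keep = sphere(cand) < fit
    pop[keep] = cand[keep]

    # Learner phase: each student interacts with a random peer,
    # moving toward it if the peer is better, away otherwise
    peers = pop[rng.permutation(len(pop))]
    sign = np.where(sphere(pop) < sphere(peers), 1.0, -1.0)[:, None]
    cand = pop + rng.random(pop.shape) * sign * (pop - peers)
    keep = sphere(cand) < sphere(pop)
    pop[keep] = cand[keep]

print(sphere(pop).min())
```

Note that when mean is close to teacher (with TF = 1), the teacher-phase step vanishes; this is exactly the stagnation the abstract's modified teacher phase is designed to avoid.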
Vilas, Carlos; Balsa-Canto, Eva; García, Maria-Sonia G; Banga, Julio R; Alonso, Antonio A
2012-07-02
Systems biology allows the analysis of biological systems' behavior under different conditions through in silico experimentation. The possibility of perturbing biological systems in different manners calls for the design of perturbations to achieve particular goals. Examples include the design of a chemical stimulation to maximize the amplitude of a given cellular signal, or to achieve a desired pattern in pattern formation systems. Such design problems can be mathematically formulated as dynamic optimization problems, which are particularly challenging when the system is described by partial differential equations. This work addresses the numerical solution of such dynamic optimization problems for spatially distributed biological systems. The usual nonlinear and large-scale nature of the mathematical models for this class of systems and the presence of constraints in the optimization problems impose a number of difficulties, such as the presence of suboptimal solutions, which call for robust and efficient numerical techniques. Here, the use of a control vector parameterization approach combined with efficient and robust hybrid global optimization methods and a reduced-order model methodology is proposed. The capabilities of this strategy are illustrated on two challenging problems: bacterial chemotaxis and the FitzHugh-Nagumo model. For chemotaxis, the objective was to efficiently compute the time-varying optimal concentration of chemoattractant at one of the spatial boundaries in order to achieve predefined cell distribution profiles. Results are in agreement with those previously published in the literature. The FitzHugh-Nagumo problem is also efficiently solved, and it illustrates very well how dynamic optimization may be used to force a system to evolve from an undesired to a desired pattern with a reduced number of actuators. The presented methodology can be used for the efficient dynamic optimization of
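The control vector parameterization idea can be sketched on a toy problem: the time-varying control is discretized into a few constant pieces, turning the infinite-dimensional dynamic optimization into a small finite-dimensional one. The scalar ODE, target level, and plain finite-difference gradient descent below are illustrative stand-ins for the PDE models and hybrid global solvers used in the paper.

```python
import numpy as np

# CVP on a toy scalar system x' = -x + u(t): the control u is
# piecewise-constant on 4 intervals and tuned so that x(t) tracks
# a hypothetical target level over the horizon.
T, dt, n_pieces = 2.0, 0.02, 4
target = 0.8

def simulate(u_pieces):
    n = int(T / dt)
    x, traj = 0.0, np.empty(n)
    for k in range(n):
        i = min(int(k * dt / (T / n_pieces)), n_pieces - 1)
        x += dt * (-x + u_pieces[i])    # forward-Euler step
        traj[k] = x
    return traj

def cost(u):                            # tracking error over the horizon
    return ((simulate(u) - target) ** 2).mean()

u = np.zeros(n_pieces)
for _ in range(500):                    # finite-difference gradient descent
    g = np.empty(n_pieces)
    for i in range(n_pieces):
        e = np.zeros(n_pieces); e[i] = 1e-4
        g[i] = (cost(u + e) - cost(u - e)) / 2e-4
    u = np.clip(u - 0.5 * g, 0.0, 2.0)  # project onto the control bounds

print(u, cost(u))
```

The same parameterization carries over to PDE-constrained problems; there, each cost evaluation requires a (reduced-order) PDE solve, which is why the paper pairs CVP with reduced models and hybrid global optimizers rather than a local gradient scheme.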
A Simple But Effective Canonical Dual Theory Unified Algorithm for Global Optimization
Zhang, Jiapu
2011-01-01
Numerical global optimization methods are often very time consuming and cannot be applied to high-dimensional nonconvex/nonsmooth optimization problems. Due to the nonconvexity/nonsmoothness, directly solving the primal problems is sometimes very difficult. This paper presents a very simple but effective canonical duality theory (CDT) unified global optimization algorithm, whose convergence is proved in the paper. More importantly, for this CDT-unified algorithm, numerous...
Use of advanced modeling techniques to optimize thermal packaging designs.
Formato, Richard M; Potami, Raffaele; Ahmed, Iftekhar
2010-01-01
Through a detailed case study the authors demonstrate, for the first time, the capability of using advanced modeling techniques to correctly simulate the transient temperature response of a convective flow-based thermal shipper design. The objective of this case study was to demonstrate that simulation could be utilized to design a 2-inch-wall polyurethane (PUR) shipper to hold its product box temperature between 2 and 8 °C over the prescribed 96-h summer profile (product box is the portion of the shipper that is occupied by the payload). Results obtained from numerical simulation are in excellent agreement with empirical chamber data (within ±1 °C at all times), and geometrical locations of simulation maximum and minimum temperature match well with the corresponding chamber temperature measurements. Furthermore, a control simulation test case was run (results taken from identical product box locations) to compare the coupled conduction-convection model with a conduction-only model, which to date has been the state-of-the-art method. For the conduction-only simulation, all fluid elements were replaced with "solid" elements of identical size and assigned thermal properties of air. While results from the coupled thermal/fluid model closely correlated with the empirical data (±1 °C), the conduction-only model was unable to correctly capture the payload temperature trends, showing a sizeable error compared to empirical values (ΔT > 6 °C). A modeling technique capable of correctly capturing the thermal behavior of passively refrigerated shippers can be used to quickly evaluate and optimize new packaging designs. Such a capability provides a means to reduce the cost and required design time of shippers while simultaneously improving their performance. Another advantage comes from using thermal modeling (assuming a validated model is available) to predict the temperature distribution in a shipper that is exposed to ambient temperatures which were not bracketed
Effective Energy Methods for Global Optimization for Biopolymer Structure Prediction
Shalloway, David
1998-01-01
… Its main strength is that it uncovers and exploits the intrinsic "hidden structures" of biopolymer energy landscapes to efficiently perform global minimization using a hierarchical search procedure…
Optimized inspection techniques and structural analysis in lifetime management
Aguado, M.T.; Marcelles, I.
1993-01-01
Preservation of the option of extending the service lifetime of a nuclear power plant beyond its normal design lifetime requires correct remaining-lifetime management from the very beginning of plant operation. The methodology used in plant remaining-lifetime management is essentially based on the use of standard inspections, surveillance and monitoring programs, and calculations such as thermal-stress and fracture mechanics analyses. The inspection techniques should be continuously optimized in order to detect and size existing defects with the highest possible degree of accuracy. The information obtained during inspection is combined with the historical data of the components (design, quality, operation, maintenance, and transients) and with the results of destructive testing, fracture mechanics, and thermal fatigue analyses. These data are used to estimate the remaining lifetime of nuclear power plant components, systems, and structures with the highest possible degree of accuracy. The use of this methodology allows component repairs and replacements to be reduced or avoided and increases the safety levels and availability of the nuclear power plant. Use of this strategy also avoids the need for heavy investments at the end of the licensing period.
Muscle optimization techniques impact the magnitude of calculated hip joint contact forces
Wesseling, M.; Derikx, L.C.; de Groote, F.; Bartels, W.; Meyer, C.; Verdonschot, Nicolaas Jacobus Joseph; Jonkers, I.
2015-01-01
In musculoskeletal modelling, several optimization techniques are used to calculate muscle forces, which strongly influence resultant hip contact forces (HCF). The goal of this study was to calculate muscle forces using four different optimization techniques, i.e., two different static optimization
An Evaluation of the Sniffer Global Optimization Algorithm Using Standard Test Functions
Butler, Roger A. R.; Slaminka, Edward E.
1992-03-01
The performance of Sniffer—a new global optimization algorithm—is compared with that of Simulated Annealing. Using the number of function evaluations as a measure of efficiency, the new algorithm is shown to be significantly better at finding the global minimum of seven standard test functions. Several of the test functions used have many local minima and very steep walls surrounding the global minimum. Such functions are intended to thwart global minimization algorithms.
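A simulated annealing baseline of the kind used in such comparisons can be sketched as follows, with the number of function evaluations tracked as the efficiency measure. The 1-D Rastrigin test function (many local minima, steep ridges around the global minimum at 0), the cooling schedule, and the restart count are illustrative choices.

```python
import math, random

random.seed(3)
evals = 0

def f(x):                    # 1-D Rastrigin: many local minima, f(0) = 0
    global evals
    evals += 1               # evaluation count as the efficiency measure
    return x * x - 10.0 * math.cos(2.0 * math.pi * x) + 10.0

def anneal():
    x = random.uniform(-5.12, 5.12)
    fx = f(x)
    best = fx
    T = 10.0
    while T > 1e-3:          # geometric cooling schedule
        xn = x + random.gauss(0.0, 0.5)
        fn = f(xn)
        # Accept improvements always, worsenings with Boltzmann probability
        if fn < fx or random.random() < math.exp((fx - fn) / T):
            x, fx = xn, fn
            best = min(best, fx)
        T *= 0.995
    return best

best = min(anneal() for _ in range(5))   # a few independent restarts
print(evals, best)
```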
Larsen, Anders Astrup; Bendsøe, Martin P.; Schmidt, Henrik Nikolaj Blicher
2007-01-01
The aim of this paper is to optimize a thermal model of a friction stir welding process. The optimization is performed using a space mapping technique in which an analytical model is used along with the FEM model to be optimized. The results are compared to traditional gradient based optimization...
Murillo, Sergio; Pattichis, Marios; Soliz, Peter; Barriga, Simon; Loizou, C. P.; Pattichis, C. S.
2010-03-01
Motion estimation from digital video is an ill-posed problem that requires a regularization approach. Regularization introduces a smoothness constraint that can reduce the resolution of the velocity estimates. The problem is further complicated for ultrasound (US) videos, where speckle noise levels can be significant. Motion estimation using optical flow models requires the tuning of several parameters to satisfy the optical flow constraint as well as the level of imposed smoothness. Furthermore, except in simulations or mostly unrealistic cases, there is no ground truth to use for validating the velocity estimates. This problem is present in all real video sequences that are used as input to motion estimation algorithms. It is also an open problem in biomedical applications such as the motion analysis of US videos of carotid artery (CA) plaques. In this paper, we study the problem of obtaining reliable ultrasound video motion estimates of atherosclerotic plaques for use in clinical diagnosis. A global optimization framework for motion parameter optimization is presented. This framework uses actual carotid artery motions to provide optimal parameter values for a variety of motions and is tested on ten different US videos using two different motion estimation techniques.
Lagos, Soledad R.; Velis, Danilo R.
2018-02-01
We perform the location of microseismic events generated in hydraulic fracturing monitoring scenarios using two global optimization techniques: Very Fast Simulated Annealing (VFSA) and Particle Swarm Optimization (PSO), and compare them against the classical grid search (GS). To this end, we present an integrated and optimized workflow that concatenates into an automated bash script the different steps that lead from raw 3C data to the located microseismic events. First, we carry out the automatic detection, denoising, and identification of the P- and S-waves. Secondly, we estimate their corresponding backazimuths using polarization information, and propose a simple energy-based criterion to automatically decide which is the most reliable estimate. Finally, after properly restricting the size of the search space using the backazimuth information, we perform the location using the aforementioned algorithms for usual 2D and 3D scenarios of hydraulic fracturing processes. We assess the impact of restricting the search space and show the advantages of using either VFSA or PSO over GS to attain significant speed-ups.
Optimal fringe angle selection for digital fringe projection technique.
Wang, Yajun; Zhang, Song
2013-10-10
Existing digital fringe projection (DFP) systems mainly use either horizontal or vertical fringe patterns for three-dimensional shape measurement. This paper reveals that these two fringe directions are usually not optimal, in the sense that they are not the directions along which the phase change is largest for a given depth variation. We propose a novel and efficient method to determine the optimal fringe angle by projecting a set of horizontal and vertical fringe patterns onto a step-height object and by further analyzing the two resultant phase maps. Experiments demonstrate the existence of the optimal angle and the success of the proposed optimal-angle determination method.
Global optimization for overall HVAC systems - Part I problem formulation and analysis
Lu Lu; Cai Wenjian; Chai, Y.S.; Xie Lihua
2005-01-01
This paper presents the global optimization technologies for overall heating, ventilating and air conditioning (HVAC) systems. The objective function of global optimization and constraints are formulated based on mathematical models of the major components. All these models are associated with power consumption components and heat exchangers for transferring cooling load. The characteristics of all the major components are briefly introduced by models, and the interactions between them are analyzed and discussed to show the complications of the problem. According to the characteristics of the operating components, the complicated original optimization problem for overall HVAC systems is transformed and simplified into a compact form ready for optimization
A branch and bound algorithm for the global optimization of Hessian Lipschitz continuous functions
Fowkes, Jaroslav M.; Gould, Nicholas I. M.; Farmer, Chris L.
2012-01-01
We present a branch and bound algorithm for the global optimization of a twice differentiable nonconvex objective function with a Lipschitz continuous Hessian over a compact, convex set. The algorithm is based on applying cubic regularisation
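A simpler cousin of this idea, a branch-and-bound scheme using first-order Lipschitz lower bounds instead of cubic regularization of the Hessian, can be sketched in one dimension. The test function and the (over)estimated Lipschitz constant below are illustrative choices; the interval bound f(x) >= f(mid) - L*(hi-lo)/2 plays the role the cubic model plays in the paper's algorithm.

```python
import math, heapq

def f(x):                        # classic multimodal 1-D test function
    return math.sin(x) + math.sin(10.0 * x / 3.0)

L = 6.0                          # safe overestimate of the Lipschitz constant
a, b, tol = 2.7, 7.5, 1e-4

def bound(lo, hi):
    # Valid lower bound on [lo, hi]: every x is within (hi-lo)/2 of mid
    mid = 0.5 * (lo + hi)
    return f(mid) - L * (hi - lo) / 2.0, mid

ub = min(f(a), f(b))             # incumbent: best value found so far
lb0, m0 = bound(a, b)
heap = [(lb0, a, b, m0)]         # intervals ordered by their lower bound
while heap:
    lb, lo, hi, mid = heapq.heappop(heap)
    if lb > ub - tol:            # cannot beat the incumbent: prune
        continue
    ub = min(ub, f(mid))
    for sub in ((lo, mid), (mid, hi)):   # branch: split at the midpoint
        lbs, ms = bound(*sub)
        if lbs < ub - tol:
            heapq.heappush(heap, (lbs, sub[0], sub[1], ms))

print(ub)   # global minimum of this test problem is about -1.8996
```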
Martorell, S.; Serradell, V.; Munoz, A.; Sanchez, A.
1997-01-01
The background, objective, scope, detailed working plan, follow-up, and final product of the project ''Global optimization of maintenance and surveillance testing based on reliability and probabilistic safety assessment'' are described.
Khurram Hammed
2016-01-01
This paper presents a stochastic global optimization technique known as Particle Swarm Optimization (PSO) for the joint estimation of the amplitude and direction of arrival (DOA) of targets in a radar communication system. The proposed scheme is an excellent optimization methodology and a promising approach for solving DOA problems in communication systems. Moreover, PSO is quite suitable for real-time scenarios and easy to implement in hardware. In this study, a uniform linear array is used, and the targets are assumed to be in the far field of the array. The fitness function is formulated from the mean square error and requires a single snapshot to obtain the best possible solution. To check the accuracy of the algorithm, all results are obtained by varying the number of antenna elements and targets. Finally, these results are compared with existing heuristic techniques to show the accuracy of PSO.
Sørensen, Søren N.; Stolpe, Mathias
2015-01-01
…rate. …but is, however, convex in the original mixed binary nested form. Convexity is the foremost important property of optimization problems, and the proposed method can guarantee the global or near-global optimal solution, unlike most topology optimization methods. The material selection is limited… The capabilities of the method and the effect of active versus inactive manufacturing constraints are demonstrated on several numerical examples of limited size, involving at most 320 binary variables. Most examples are solved to guaranteed global optimality and may constitute benchmark examples for popular topology optimization methods and heuristics based on solving sequences of non-convex problems. The results will, among others, demonstrate that the difficulty of the posed problem is highly dependent upon the composition of the constitutive properties of the material candidates.
Globally Optimal Segmentation of Permanent-Magnet Systems
Insinga, Andrea Roberto; Bjørk, Rasmus; Smith, Anders
2016-01-01
Permanent-magnet systems are widely used for the generation of magnetic fields with specific properties. The reciprocity theorem, an energy-equivalence principle in magnetostatics, can be employed to calculate the optimal remanent flux density of the permanent-magnet system, given any objective… remains unsolved. We show that the problem of optimal segmentation of a two-dimensional permanent-magnet assembly with respect to a linear objective functional can be reduced to the problem of piecewise linear approximation of a plane curve by perimeter maximization. Once the problem has been cast…
The Tunneling Method for Global Optimization in Multidimensional Scaling.
Groenen, Patrick J. F.; Heiser, Willem J.
1996-01-01
A tunneling method for global minimization in multidimensional scaling is introduced and adjusted for multidimensional scaling with general Minkowski distances. The method alternates a local search step with a tunneling step in which a different configuration is sought with the same STRESS value.
Global Launcher Trajectory Optimization for Lunar Base Settlement
Pagano, A.; Mooij, E.
2010-01-01
The problem of a mission to the Moon to establish a permanent outpost can be tackled by dividing the journey into three phases: the Earth ascent, the Earth-Moon transfer, and the lunar landing. In this paper we present an optimization analysis of Earth ascent trajectories of existing launch vehicles.
Vertical bifacial solar farms: Physics, design, and global optimization
Khan, M. Ryyan; Hanna, Amir; Sun, Xingshu; Alam, Muhammad A.
2017-01-01
10–20% more energy than a traditional monofacial farm for a practical row-spacing of 2 m (corresponding to 1.2 m high panels). With the prospect of additional 5–20% energy gain from reduced soiling and tilt optimization, bifacial solar farms do offer a
Global stability-based design optimization of truss structures using ...
The quality of the current Pareto front obtained at the end of a whole genetic search is assessed according to its closeness to the ...... better optimal designation with a lower displacement value of 0.3075 in. satisfying the service- ....
Advanced Gradient Based Optimization Techniques Applied on Sheet Metal Forming
Endelt, Benny; Nielsen, Karl Brian
2005-01-01
The computational costs of finite element simulations of general sheet metal forming processes are considerable, especially measured in time. In combination with optimization, the performance of the optimization algorithm is crucial for the overall performance of the system, i.e. the optimization algorithm should gain as much information about the system in each iteration as possible. Least-square formulation of the objective function is widely applied for the solution of inverse problems, due to the superior performance of this formulation. In this work the focus is on small problems, defined as problems with fewer than 1000 design parameters, since the majority of real-life optimization and inverse problems represented in the literature can be characterized as small problems, typically with fewer than 20 design parameters. We show that the least-square formulation is well suited for two classes of inverse problems: identification of constitutive parameters and process optimization. The scalability and robustness of the approach are illustrated through a number of process optimizations and inverse material characterization problems: tube hydroforming, two-step hydroforming, flexible aluminium tubes, and inverse identification of material parameters.
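As a minimal illustration of the least-square formulation for inverse identification of constitutive parameters, the sketch below fits a hypothetical power hardening law sigma = K * eps**n to synthetic stress–strain data with a damped Gauss–Newton iteration. The hardening law, parameter values, and data are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def gauss_newton(eps, sigma, p0, iters=50):
    """Damped Gauss-Newton on the least-squares objective
    sum_i (sigma_i - K * eps_i**n)**2 over the parameters p = (K, n)."""
    p = np.asarray(p0, float)
    sse = lambda q: np.sum((sigma - q[0] * eps**q[1])**2)
    for _ in range(iters):
        K, n = p
        r = sigma - K * eps**n                      # residual vector
        # Jacobian of the model w.r.t. (K, n)
        J = np.column_stack([eps**n, K * eps**n * np.log(eps)])
        step = np.linalg.solve(J.T @ J, J.T @ r)    # normal-equations step
        t = 1.0
        while sse(p + t * step) > sse(p) and t > 1e-8:
            t *= 0.5                                # damp if the step overshoots
        p = p + t * step
    return p

eps = np.linspace(0.01, 0.3, 30)
sigma = 500.0 * eps**0.2          # synthetic "measurements", noise-free
K, n = gauss_newton(eps, sigma, p0=[300.0, 0.5])
print(round(K, 1), round(n, 3))   # → 500.0 0.2
```

On noise-free data the residual vanishes at the true parameters, so the damped iteration recovers them; with measured data the same machinery minimizes the misfit instead.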
Castillo, Alejandro; Martín-del-Campo, Cecilia; Montes-Tadeo, José-Luis; François, Juan-Luis; Ortiz-Servin, Juan-José; Perusquía-del-Cueto, Raúl
2014-01-01
Highlights: • Different metaheuristic optimization techniques were compared. • The optimal enrichment and gadolinia distribution in a BWR fuel lattice was studied. • A decision-making tool based on the Position Vector of Minimum Regret was applied. • Similar results were found for the different optimization techniques. - Abstract: In the present study a comparison of the performance of five heuristic techniques for the optimization of combinatorial problems is shown. The techniques are: Ant Colony System, Artificial Neural Networks, Genetic Algorithms, Greedy Search, and a hybrid of Path Relinking and Scatter Search. They were applied to obtain an “optimal” enrichment and gadolinia distribution in a fuel lattice of a boiling water reactor. All techniques used the same objective function for qualifying the different distributions created during the optimization process, as well as the same initial conditions and restrictions. The parameters included in the objective function are the k-infinite multiplication factor, the maximum local power peaking factor, the average enrichment and the average gadolinia concentration of the lattice. The CASMO-4 code was used to obtain the neutronic parameters. The criteria for qualifying the optimization techniques also include the evaluation of the best lattice with burnup and the number of evaluations of the objective function needed to obtain the best solution. In conclusion, all techniques obtain similar results, but some methods find better solutions faster than others. A decision analysis tool based on the Position Vector of Minimum Regret was applied to aggregate the criteria in order to rank the solutions according to three functions: neutronic grade at 0 burnup, neutronic grade with burnup, and global cost, which aggregates the computing time in the decision. According to the results, Greedy Search found the best lattice in terms of the neutronic grade at 0 burnup and also with burnup. However, Greedy Search is
Saborido, Rubén; Ruiz, Ana B; Luque, Mariano
2017-01-01
In this article, we propose a new evolutionary algorithm for multiobjective optimization called Global WASF-GA ( global weighting achievement scalarizing function genetic algorithm), which falls within the aggregation-based evolutionary algorithms. The main purpose of Global WASF-GA is to approximate the whole Pareto optimal front. Its fitness function is defined by an achievement scalarizing function (ASF) based on the Tchebychev distance, in which two reference points are considered (both utopian and nadir objective vectors) and the weight vector used is taken from a set of weight vectors whose inverses are well-distributed. At each iteration, all individuals are classified into different fronts. Each front is formed by the solutions with the lowest values of the ASF for the different weight vectors in the set, using the utopian vector and the nadir vector as reference points simultaneously. Varying the weight vector in the ASF while considering the utopian and the nadir vectors at the same time enables the algorithm to obtain a final set of nondominated solutions that approximate the whole Pareto optimal front. We compared Global WASF-GA to MOEA/D (different versions) and NSGA-II in two-, three-, and five-objective problems. The computational results obtained permit us to conclude that Global WASF-GA gets better performance, regarding the hypervolume metric and the epsilon indicator, than the other two algorithms in many cases, especially in three- and five-objective problems.
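The front-building step described above can be sketched with a Tchebychev-type achievement scalarizing function. This is an illustrative sketch, not the paper's exact formulation: the toy population, weight vector, reference points, and augmentation coefficient are all assumed values.

```python
import numpy as np

def asf(obj, ref, w, rho=1e-4):
    """Tchebychev-type achievement scalarizing function with a small
    augmentation term: the maximum weighted difference to a reference
    point in objective space (to be minimized)."""
    d = (obj - ref) / w
    return np.max(d) + rho * np.sum(d)

# Toy bi-objective population and reference points (illustrative values).
pop = np.array([[0.2, 0.8], [0.5, 0.5], [0.9, 0.1]])
utopian = np.array([0.0, 0.0])
nadir = np.array([1.0, 1.0])
w = np.array([0.5, 0.5])  # one weight vector from the well-distributed set

# For each weight vector, the classification keeps the solution with the
# lowest ASF value for each of the two reference points:
for ref, name in [(utopian, "utopian"), (nadir, "nadir")]:
    scores = [asf(x, ref, w) for x in pop]
    print(name, int(np.argmin(scores)))  # index of the selected solution
```

With this symmetric weight vector both reference points select the balanced solution (index 1); varying the weight vector over the set steers the selections across the whole Pareto front.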
Avoiding spurious submovement decompositions: a globally optimal algorithm
Rohrer, Brandon Robinson; Hogan, Neville
2003-01-01
Evidence for the existence of discrete submovements underlying continuous human movement has motivated many attempts to extract them. Although they produce visually convincing results, all of the methodologies that have been employed are prone to produce spurious decompositions. Examples of potential failures are given. A branch-and-bound algorithm for submovement extraction, capable of global nonlinear minimization (and hence capable of avoiding spurious decompositions), is developed and demonstrated.
Corzo, Gerald; Solomatine, Dimitri
2007-05-01
Natural phenomena are multistationary and are composed of a number of interacting processes, so one single model handling all processes often suffers from inaccuracies. A solution is to partition data in relation to such processes using the available domain knowledge or expert judgment, to train separate models for each of the processes, and to merge them in a modular model (committee). In this paper a problem of water flow forecast in watershed hydrology is considered where the flow process can be presented as consisting of two subprocesses -- base flow and excess flow, so that these two processes can be separated. Several approaches to data separation techniques are studied. Two case studies with different forecast horizons are considered. Parameters of the algorithms responsible for data partitioning are optimized using genetic algorithms and global pattern search. It was found that modularization of ANN models using domain knowledge makes models more accurate, if compared with a global model trained on the whole data set, especially when forecast horizon (and hence the complexity of the modelled processes) is increased.
Jian-Guo Zheng
2015-01-01
Artificial bee colony (ABC) algorithm is a popular swarm intelligence technique inspired by the intelligent foraging behavior of honey bees. However, ABC is good at exploration but poor at exploitation and its convergence speed is also an issue in some cases. To improve the performance of ABC, a novel ABC combined with grenade explosion method (GEM) and Cauchy operator, namely, ABCGC, is proposed. GEM is embedded in the onlooker bees’ phase to enhance the exploitation ability and accelerate convergence of ABCGC; meanwhile, Cauchy operator is introduced into the scout bees’ phase to help ABCGC escape from local optimum and further enhance its exploration ability. Two sets of well-known benchmark functions are used to validate the better performance of ABCGC. The experiments confirm that ABCGC is significantly superior to ABC and other competitors; particularly it converges to the global optimum faster in most cases. These results suggest that ABCGC usually achieves a good balance between exploitation and exploration and can effectively serve as an alternative for global optimization.
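The Cauchy-operator idea in the scout phase can be sketched as a heavy-tailed perturbation of an abandoned food source. This is a hedged illustration of the general technique; the exact ABCGC update rule may differ, and the bounds and scale factor are assumed values.

```python
import numpy as np

rng = np.random.default_rng(0)

def cauchy_scout(x, lo, hi, scale=0.1):
    """Scout-phase re-initialization sketch: perturb an abandoned food
    source with heavy-tailed Cauchy noise instead of drawing a fresh
    uniform point, so occasional long jumps help escape local optima."""
    step = scale * (hi - lo) * rng.standard_cauchy(size=x.shape)
    return np.clip(x + step, lo, hi)

x = np.array([0.5, -0.2])
lo, hi = np.array([-1.0, -1.0]), np.array([1.0, 1.0])
x_new = cauchy_scout(x, lo, hi)
print(x_new)  # a heavy-tailed jump from x, clipped to the search box
```

Because the Cauchy distribution has no finite variance, the scout occasionally lands far from the abandoned source, which is the exploration boost the abstract attributes to the operator.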
A Global Optimization Algorithm for Sum of Linear Ratios Problem
Yuelin Gao; Siqiao Jin
2013-01-01
We equivalently transform the sum of linear ratios programming problem into bilinear programming problem, then by using the linear characteristics of convex envelope and concave envelope of double variables product function, linear relaxation programming of the bilinear programming problem is given, which can determine the lower bound of the optimal value of original problem. Therefore, a branch and bound algorithm for solving sum of linear ratios programming problem is put forward, and the c...
Global Optimization for Transport Network Expansion and Signal Setting
Liu, Haoxiang; Wang, David Z. W.; Yue, Hao
2015-01-01
This paper proposes a model to address an urban transport planning problem involving combined network design and signal setting in a saturated network. Conventional transport planning models usually deal with the network design problem and signal setting problem separately. However, the fact that network capacity design and capacity allocation determined by network signal setting combine to govern the transport network performance requires the optimal transport planning to consider the two pr...
Machine learning techniques for optical communication system optimization
Zibar, Darko; Wass, Jesper; Thrane, Jakob
In this paper, machine learning techniques relevant to optical communication are presented and discussed. The focus is on applying machine learning tools to optical performance monitoring and performance prediction.
Optimal Technique for Abdominal Fascial Closure in Liver Transplant Patients
Unal Aydin
2010-01-01
Conclusion: Our results indicate that the novel technique used in this study contributed to overcoming early and late postoperative complications associated with closure of the abdominal fascia in liver transplant patients. In addition, this new technique has proven to be easily applicable, faster, safer and efficient in these patients; it is also potentially useful for conventional surgery.
Fast globally optimal segmentation of 3D prostate MRI with axial symmetry prior.
Qiu, Wu; Yuan, Jing; Ukwatta, Eranga; Sun, Yue; Rajchl, Martin; Fenster, Aaron
2013-01-01
We propose a novel global optimization approach to segmenting a given 3D prostate T2w magnetic resonance (MR) image, which enforces the inherent axial symmetry of the prostate shape and simultaneously performs a sequence of 2D axial slice-wise segmentations with a global 3D coherence prior. We show that the proposed challenging combinatorial optimization problem can be solved globally and exactly by means of convex relaxation. To this end, we introduce a novel coupled continuous max-flow model, which is dual to the studied convex relaxed optimization formulation and leads to an efficient multiplier-augmented algorithm based on modern convex optimization theory. Moreover, the new continuous max-flow based algorithm was implemented on GPUs to achieve a substantial improvement in computation. Experimental results using public and in-house datasets demonstrate great advantages of the proposed method in terms of both accuracy and efficiency.
Global issues and opportunities for optimized retinoblastoma care.
Gallie, Brenda L; Zhao, Junyang; Vandezande, Kirk; White, Abigail; Chan, Helen S L
2007-12-01
The RB1 gene is important in all human cancers. Studies of human retinoblastoma point to a rare retinal cell with extreme dependency on RB1 for initiation but not progression to full malignancy. In developed countries, genetic testing within affected families can predict children at high risk of retinoblastoma before birth; chemotherapy with local therapy often saves eyes and vision; and mortality is 4%. In less developed countries where 92% of children with retinoblastoma are born, mortality reaches 90%. Global collaboration is building for the dramatic change in mortality that awareness, simple expertise and therapies could achieve in less developed countries. Copyright 2007 Wiley-Liss, Inc.
Fast globally optimal segmentation of cells in fluorescence microscopy images.
Bergeest, Jan-Philip; Rohr, Karl
2011-01-01
Accurate and efficient segmentation of cells in fluorescence microscopy images is of central importance for the quantification of protein expression in high-throughput screening applications. We propose a new approach for segmenting cell nuclei which is based on active contours and convex energy functionals. Compared to previous work, our approach determines the global solution. Thus, the approach does not suffer from local minima and the segmentation result does not depend on the initialization. We also suggest a numeric approach for efficiently computing the solution. The performance of our approach has been evaluated using fluorescence microscopy images of different cell types. We have also performed a quantitative comparison with previous segmentation approaches.
Optimization Techniques for Dimensionally Truncated Sparse Grids on Heterogeneous Systems
Deftu, A.; Murarasu, A.
2013-01-01
and especially the similarities between our optimization strategies for the two architectures. With regard to our test case for which achieving high speedups is a "must" for real-time visualization, we report a speedup of up to 6.2x times compared to the state
An improved technique for the prediction of optimal image resolution ...
Past studies to predict optimal image resolution required for generating spatial information for savannah ecosystems have yielded different outcomes, hence providing a knowledge gap that was investigated in the present study. The postulation, for the present study, was that by graphically solving two simultaneous ...
Optimal Component Lumping: problem formulation and solution techniques
Lin, Bao; Leibovici, Claude F.; Jørgensen, Sten Bay
2008-01-01
This paper presents a systematic method for optimal lumping of a large number of components in order to minimize the loss of information. In principle, a rigorous composition-based model is preferable to describe a system accurately. However, computational intensity and numerical issues restrict ...
Optimization of an embedded rail structure using a numerical technique
Markine, V.L.; De Man, A.P.; Esveld, C.
2000-01-01
This paper presents several steps of a procedure for design of a railway track aiming at the development of optimal track structures under various predefined service and environmental conditions. The structural behavior of the track is analyzed using a finite element model in which the track and a
A Global Optimization Algorithm for Sum of Linear Ratios Problem
Yuelin Gao
2013-01-01
We equivalently transform the sum of linear ratios programming problem into bilinear programming problem, then by using the linear characteristics of convex envelope and concave envelope of double variables product function, linear relaxation programming of the bilinear programming problem is given, which can determine the lower bound of the optimal value of original problem. Therefore, a branch and bound algorithm for solving sum of linear ratios programming problem is put forward, and the convergence of the algorithm is proved. Numerical experiments are reported to show the effectiveness of the proposed algorithm.
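The convex and concave envelopes of a product of two variables over a box are the classical McCormick envelopes, and evaluating them is the building block of the linear relaxation described above. A minimal sketch (box and evaluation point are illustrative values):

```python
def mccormick_bounds(xL, xU, yL, yU, x, y):
    """Convex underestimator and concave overestimator of w = x*y over the
    box [xL,xU] x [yL,yU] (McCormick envelopes), evaluated at (x, y). In the
    relaxation, w is a new variable constrained between these envelopes,
    which makes the bilinear problem linear and yields a valid lower bound."""
    under = max(xL * y + yL * x - xL * yL,   # w >= these two planes
                xU * y + yU * x - xU * yU)
    over = min(xU * y + yL * x - xU * yL,    # w <= these two planes
               xL * y + yU * x - xL * yU)
    return under, over

u, o = mccormick_bounds(0.0, 1.0, 0.0, 1.0, 0.5, 0.5)
print(u, o)  # → 0.0 0.5  (the true product 0.25 is sandwiched in between)
```

Branching splits the box into smaller sub-boxes, on which the envelopes tighten around the true product, so the lower bounds converge to the optimal value.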
Hooke–Jeeves Method-used Local Search in a Hybrid Global Optimization Algorithm
V. D. Sulimov
2014-01-01
Modern methods for the optimization investigation of complex systems are based on developing and updating mathematical models of systems by solving the appropriate inverse problems. The input data desirable for the solution are obtained from the analysis of experimentally defined consecutive characteristics of a system or a process. The causal characteristics are the sought ones, to which the equation coefficients of the mathematical models of the object, boundary conditions, etc. belong. The optimization approach is one of the main ones to solve inverse problems. In the general case it is necessary to find a global extremum of a not everywhere differentiable criterion function. Global optimization methods are widely used in problems of identification and computational diagnosis as well as in optimal control, computed tomography, image restoration, training of neural networks, and other intelligence technologies. The increasingly complicated systems under optimization observed during the last decades lead to more complicated mathematical models, thereby making the solution of the appropriate extreme problems significantly more difficult. In a great deal of practical applications the problem conditions can restrict modeling. As a consequence, in inverse problems the criterion functions can be not everywhere differentiable and noisy. The presence of noise means that calculating the derivatives is difficult and unreliable, which leads to using optimization methods that do not calculate derivatives. The efficiency of deterministic algorithms of global optimization is significantly restricted by their dependence on the dimension of the extreme problem. When the number of variables is large, stochastic global optimization algorithms are used. As stochastic algorithms yield too expensive solutions, this drawback restricts their applications. Developing hybrid algorithms that combine a stochastic algorithm for scanning the variable space with a deterministic local search
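The derivative-free local search named in the title can be sketched directly. This is a minimal textbook-style Hooke–Jeeves pattern search, not the thesis's hybrid algorithm; the test function and step parameters are illustrative assumptions.

```python
def hooke_jeeves(f, x0, step=0.5, tol=1e-6, shrink=0.5):
    """Derivative-free Hooke-Jeeves pattern search: exploratory moves along
    each coordinate, then an aggressive pattern move along the improving
    direction; the step shrinks when no move helps. Needs no gradients, so
    it tolerates noisy or not-everywhere-differentiable objectives."""
    def explore(base, h):
        x = list(base)
        for i in range(len(x)):
            for d in (+h, -h):
                trial = list(x)
                trial[i] += d
                if f(trial) < f(x):
                    x = trial
                    break
        return x

    x = list(x0)
    while step > tol:
        y = explore(x, step)
        if f(y) < f(x):
            # pattern move: jump past y along (y - x), then re-explore there
            z = [2 * yi - xi for yi, xi in zip(y, x)]
            x = y
            cand = explore(z, step)
            if f(cand) < f(x):
                x = cand
        else:
            step *= shrink
    return x

# Rosenbrock function: a curved narrow valley, a classic pattern-search test.
f = lambda v: (1 - v[0])**2 + 100 * (v[1] - v[0]**2)**2
x = hooke_jeeves(f, [-1.0, 1.0])
print([round(c, 3) for c in x])  # converges to the minimizer near [1, 1]
```

In the hybrid scheme sketched by the abstract, a stochastic scan of the variable space would supply starting points, and a search like this would refine each of them.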
Development and Application of Optimization Techniques for Composite Laminates.
1983-09-01
Institute of Technology, Air University, in Partial Fulfillment of the Requirements for the Degree of Master of Science, by Gerald V. Flanagan, S.B., Lt. USAF ... global minima [9]. An informal definition of convexity is that any two points in the space can be connected by a straight line which does not pass out of ... question. A quick look at gradient information suggests that too few angles (2, for example) will make the laminate sensitive to small changes in
GPU-Based Techniques for Global Illumination Effects
Szirmay-Kalos, László; Sbert, Mateu
2008-01-01
This book presents techniques to render photo-realistic images by programming the Graphics Processing Unit (GPU). We discuss effects such as mirror reflections, refractions, caustics, diffuse or glossy indirect illumination, radiosity, single or multiple scattering in participating media, tone reproduction, glow, and depth of field. This book targets game developers, graphics programmers, and also students with some basic understanding of computer graphics algorithms, rendering APIs like Direct3D or OpenGL, and shader programming. In order to make this book self-contained, the most important c
Purchasing and inventory management techniques for optimizing inventory investment
McFarlane, I.; Gehshan, T.
1993-01-01
In an effort to reduce operations and maintenance costs among nuclear plants, many utilities are taking a closer look at their inventory investment. Various approaches for inventory reduction have been used and discussed, but these approaches are often limited to an inventory management perspective. Interaction with purchasing and planning personnel to reduce inventory investment is a necessity in utility efforts to become more cost competitive. This paper addresses the activities that purchasing and inventory management personnel should conduct in an effort to optimize inventory investment while maintaining service-level goals. Other functions within a materials management organization, such as the warehousing and investment recovery functions, can contribute to optimizing inventory investment. However, these are not addressed in this paper because their contributions often come after inventory management and purchasing decisions have been made
Optimal fuel loading pattern design using artificial intelligence techniques
Kim, Han Gon; Chang, Soon Heung; Lee, Byung Ho
1993-01-01
The Optimal Fuel Shuffling System (OFSS) is developed for the optimal design of PWR fuel loading patterns. OFSS is a hybrid system in which a rule based system, a fuzzy logic, and an artificial neural network are connected to each other. The rule based system classifies loading patterns into two classes using several heuristic rules and a fuzzy rule. A fuzzy rule is introduced to achieve more effective and fast searching. Its membership function is automatically updated in accordance with the prediction results. The artificial neural network predicts core parameters for the patterns generated from the rule based system. The back-propagation network is used for fast prediction of core parameters. The artificial neural network and the fuzzy logic can be used as tools for improving the capabilities of existing algorithms. OFSS was demonstrated and validated for cycle 1 of the Kori unit 1 PWR.
Vertical bifacial solar farms: Physics, design, and global optimization
Khan, M. Ryyan
2017-09-04
There has been sustained interest in bifacial solar cell technology since the 1980s, with prospects of 30–50% increase in the output power from a stand-alone panel. Moreover, a vertical bifacial panel reduces dust accumulation and provides two output peaks during the day, with the second peak aligned to the peak electricity demand. Recent commercialization and anticipated growth of the bifacial panel market have encouraged a closer scrutiny of the integrated power-output and economic viability of bifacial solar farms, where mutual shading will erode some of the anticipated energy gain associated with an isolated, single panel. Towards that goal, in this paper we focus on geography-specific optimization of ground-mounted vertical bifacial solar farms for the entire world. For local irradiance, we combine the measured meteorological data with the clear-sky model. In addition, we consider the effects of direct, diffuse, and albedo light. We assume the panel is configured into sub-strings with bypass-diodes. Based on calculated light collection and panel output, we analyze the optimum farm design for maximum yearly output at any given location in the world. Our results predict that, regardless of the geographical location, a vertical bifacial farm will yield 10–20% more energy than a traditional monofacial farm for a practical row-spacing of 2 m (corresponding to 1.2 m high panels). With the prospect of additional 5–20% energy gain from reduced soiling and tilt optimization, bifacial solar farms do offer a viable technology option for large-scale solar energy generation.
Comparative Study of Retinal Vessel Segmentation Based on Global Thresholding Techniques
Temitope Mapayi
2015-01-01
Due to noise from uneven contrast and illumination during the acquisition process of retinal fundus images, the use of efficient preprocessing techniques is highly desirable to produce good retinal vessel segmentation results. This paper develops and compares the performance of different vessel segmentation techniques based on global thresholding, using phase congruency and contrast-limited adaptive histogram equalization (CLAHE) for the preprocessing of the retinal images. The results obtained show that the combination of preprocessing technique, global thresholding, and postprocessing techniques must be carefully chosen to achieve a good segmentation performance.
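One classic choice for the global-thresholding step is Otsu's method, which picks the threshold maximizing the between-class variance of the intensity histogram. The sketch below applies it to synthetic bimodal intensities; it does not reproduce the paper's phase-congruency/CLAHE pipeline, and the synthetic data are an assumption.

```python
import numpy as np

def otsu_threshold(img, bins=256):
    """Global threshold maximizing between-class variance (Otsu's method)
    for intensities assumed to lie in [0, 1]."""
    hist, edges = np.histogram(img, bins=bins, range=(0.0, 1.0))
    p = hist / hist.sum()
    centers = 0.5 * (edges[:-1] + edges[1:])
    w0 = np.cumsum(p)                  # class-0 (background) probability
    m = np.cumsum(p * centers)         # cumulative intensity mean
    mT = m[-1]                         # global mean
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mT * w0 - m)**2 / (w0 * (1.0 - w0))
    sigma_b = np.nan_to_num(sigma_b)   # empty classes contribute nothing
    return centers[np.argmax(sigma_b)]

# Synthetic "background/vessel" intensities: two well-separated modes.
rng = np.random.default_rng(1)
img = np.concatenate([rng.normal(0.25, 0.05, 5000),
                      rng.normal(0.75, 0.05, 5000)]).clip(0, 1)
t = otsu_threshold(img)
print(round(t, 1))  # → 0.5 (a threshold between the two modes)
```

The paper's point carries over: a global threshold like this only works well when preprocessing (e.g. CLAHE) makes the vessel and background intensity distributions separable in the first place.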
Characterization of PV panel and global optimization of its model parameters using genetic algorithm
Ismail, M.S.; Moghavvemi, M.; Mahlia, T.M.I.
2013-01-01
Highlights: • Genetic Algorithm optimization ability had been utilized to extract parameters of PV panel model. • Effect of solar radiation and temperature variations was taken into account in fitness function evaluation. • We used Matlab-Simulink to simulate operation of the PV-panel to validate results. • Different cases were analyzed to ascertain which of them gives more accurate results. • Accuracy and applicability of this approach to be used as a valuable tool for PV modeling were clearly validated. - Abstract: This paper details an improved modeling technique for a photovoltaic (PV) module; utilizing the optimization ability of a genetic algorithm, with different parameters of the PV module being computed via this approach. The accurate modeling of any PV module is incumbent upon the values of these parameters, as it is imperative in the context of any further studies concerning different PV applications. Simulation, optimization and the design of the hybrid systems that include PV are examples of these applications. The global optimization of the parameters and the applicability for the entire range of the solar radiation and a wide range of temperatures are achievable via this approach. The Manufacturer’s Data Sheet information is used as a basis for the purpose of parameter optimization, with an average absolute error fitness function formulated; and a numerical iterative method used to solve the voltage-current relation of the PV module. The results of single-diode and two-diode models are evaluated in order to ascertain which of them are more accurate. Other cases are also analyzed in this paper for the purpose of comparison. The Matlab–Simulink environment is used to simulate the operation of the PV module, depending on the extracted parameters. The results of the simulation are compared with the Data Sheet information, which is obtained via experimentation in order to validate the reliability of the approach. Three types of PV modules
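The "numerical iterative method used to solve the voltage-current relation" can be sketched for the single-diode model. This is an illustrative fixed-point solver, not the paper's implementation; the module parameters below are hypothetical values, not extracted from any data sheet.

```python
import numpy as np

def single_diode_current(V, Iph, I0, Rs, Rsh, n, T=298.15, Ns=36):
    """Solve the implicit single-diode relation
    I = Iph - I0*(exp((V + I*Rs)/(Ns*n*Vt)) - 1) - (V + I*Rs)/Rsh
    for the module current I by damped fixed-point iteration."""
    k, q = 1.380649e-23, 1.602176634e-19
    Vt = k * T / q                       # thermal voltage (~25.7 mV)
    I = Iph                              # start from the photocurrent
    for _ in range(200):
        I_new = (Iph
                 - I0 * (np.exp((V + I * Rs) / (Ns * n * Vt)) - 1.0)
                 - (V + I * Rs) / Rsh)
        I = 0.5 * I + 0.5 * I_new        # damping for stable convergence
    return I

# Hypothetical module parameters (illustrative only, not from the paper):
I_sc = single_diode_current(V=0.0, Iph=5.0, I0=1e-9, Rs=0.02, Rsh=200.0, n=1.3)
print(round(I_sc, 3))  # short-circuit current, close to Iph → 5.0
```

In the paper's setting, a genetic algorithm would propose candidate parameter sets (Iph, I0, Rs, Rsh, n), a solver like this would produce the model I-V curve, and the average absolute error against the data-sheet curve would serve as the fitness.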
Yang, Dixiong; Liu, Zhenjun; Zhou, Jilei
2014-04-01
Chaos optimization algorithms (COAs) usually utilize the chaotic map like Logistic map to generate the pseudo-random numbers mapped as the design variables for global optimization. Many existing researches indicated that COA can more easily escape from the local minima than classical stochastic optimization algorithms. This paper reveals the inherent mechanism of high efficiency and superior performance of COA, from a new perspective of both the probability distribution property and search speed of chaotic sequences generated by different chaotic maps. The statistical property and search speed of chaotic sequences are represented by the probability density function (PDF) and the Lyapunov exponent, respectively. Meanwhile, the computational performances of hybrid chaos-BFGS algorithms based on eight one-dimensional chaotic maps with different PDF and Lyapunov exponents are compared, in which BFGS is a quasi-Newton method for local optimization. Moreover, several multimodal benchmark examples illustrate that, the probability distribution property and search speed of chaotic sequences from different chaotic maps significantly affect the global searching capability and optimization efficiency of COA. To achieve the high efficiency of COA, it is recommended to adopt the appropriate chaotic map generating the desired chaotic sequences with uniform or nearly uniform probability distribution and large Lyapunov exponent.
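The two quantities the study compares, the distribution of the chaotic sequence and its Lyapunov exponent, can both be sketched for the Logistic map. A minimal illustration (sequence length and seed are arbitrary choices); for the Logistic map at r = 4 the exponent is known analytically to be ln 2.

```python
import numpy as np

def logistic_sequence(x0, r=4.0, n=50000, burn=100):
    """Generate a chaotic sequence from the Logistic map x -> r*x*(1-x),
    the kind of pseudo-random driver COAs map onto design variables."""
    x = x0
    for _ in range(burn):          # discard the transient
        x = r * x * (1.0 - x)
    seq = np.empty(n)
    for i in range(n):
        x = r * x * (1.0 - x)
        seq[i] = x
    return seq

def lyapunov(seq, r=4.0):
    """Estimate the Lyapunov exponent as the mean of ln|f'(x)| along the
    orbit, with f'(x) = r*(1 - 2x) for the Logistic map."""
    return np.mean(np.log(np.abs(r * (1.0 - 2.0 * seq))))

seq = logistic_sequence(0.123)
print(round(lyapunov(seq), 2))  # close to ln 2 ≈ 0.693
```

A histogram of `seq` would show the Logistic map's characteristic U-shaped (non-uniform) density; the paper's recommendation is precisely to prefer maps whose sequences combine a near-uniform density with a large Lyapunov exponent.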
Christopher Expósito-Izquierdo
2017-02-01
This paper summarizes the main contributions of the Ph.D. thesis of Christopher Expósito-Izquierdo. This thesis seeks to develop a wide set of intelligent heuristic and meta-heuristic algorithms aimed at solving some of the most highlighted optimization problems associated with the transshipment and storage of containers at conventional maritime container terminals. Under the premise that no optimization technique can have a better performance than any other technique under all possible assumptions, the main point of interest in the domain of maritime logistics is to propose optimization techniques superior in terms of effectiveness and computational efficiency to previous proposals found in the scientific literature when solving individual optimization problems under realistic scenarios. Simultaneously, these optimization techniques should be competitive enough to be potentially implemented in practice.
Tran, Cuong D.; Gopalsamy, Geetha L.; Mortimer, Elissa K.; Young, Graeme P.
2015-01-01
It is well recognised that zinc deficiency is a major global public health issue, particularly in young children in low-income countries with diarrhoea and environmental enteropathy. Zinc supplementation is regarded as a powerful tool to correct zinc deficiency as well as to treat a variety of physiologic and pathologic conditions. However, the dose and frequency of its use as well as the choice of zinc salt are not clearly defined regardless of whether it is used to treat a disease or correct a nutritional deficiency. We discuss the application of zinc stable isotope tracer techniques to assess zinc physiology, metabolism and homeostasis and how these can address knowledge gaps in zinc supplementation pharmacokinetics. This may help to resolve optimal dose, frequency, length of administration, timing of delivery to food intake and choice of zinc compound. It appears that long-term preventive supplementation can be administered much less frequently than daily but more research needs to be undertaken to better understand how best to intervene with zinc in children at risk of zinc deficiency. Stable isotope techniques, linked with saturation response and compartmental modelling, also have the potential to assist in the continued search for simple markers of zinc status in health, malnutrition and disease. PMID:26035248
Optimization of connection techniques for multipoint satellite videoconference
Perrone, A.; Puccio, A.; Tirro, S.
1985-12-01
Videoconferencing is increasingly considered a convenient substitute for business travel, and satellites will remain for a long time the most convenient means for quick network implementation. The paper gives indications about the most promising connection and demand-assignment techniques, and defines a possible protocol for information exchange among the involved entities.
Optimizing Nuclear Reactor Operation Using Soft Computing Techniques
Entzinger, J.O.; Ruan, D.; Kahraman, Cengiz
2006-01-01
The strict safety regulations for nuclear reactor control make it difficult to implement new control techniques such as fuzzy logic control (FLC). FLC, however, can provide very desirable advantages over classical control, like robustness, adaptation and the capability to include human experience into
Xu, Yun-Chao; Chen, Qun
2013-01-01
Vapor-compression refrigeration systems have been among the essential energy conversion systems for humankind and nowadays consume huge amounts of energy. Many effective optimization methods exist for promoting the energy efficiency of these systems, but they rely mainly on engineering experience and computer simulation rather than theoretical analysis, owing to the complex and vague physical essence of the processes involved. We attempt to propose a theoretical global optimization method based on in-depth physical analysis of the involved physical processes, i.e. heat transfer analysis for the condenser and evaporator, through introducing the entransy theory, and thermodynamic analysis for the compressor and expansion valve. The integration of heat transfer and thermodynamic analyses forms the overall physical optimization model for the systems, describing the relation between all the unknown parameters and the known conditions, which makes theoretical global optimization possible. With the aid of mathematical conditional extremum solutions, an optimization equation group and the optimal configuration of all the unknown parameters are analytically obtained. Eventually, via the optimization of a typical vapor-compression refrigeration system under various working conditions to minimize the total heat transfer area of the heat exchangers, the validity and superiority of the newly proposed optimization method are demonstrated. - Highlights: • A global optimization method for vapor-compression systems is proposed. • Integrating heat transfer and thermodynamic analyses forms the optimization model. • A mathematical relation between design parameters and requirements is derived. • Entransy dissipation is introduced into heat transfer analysis. • The validity of the method is proved via optimization of practical cases
Decomposition based parallel processing technique for efficient collaborative optimization
Park, Hyung Wook; Kim, Sung Chan; Kim, Min Soo; Choi, Dong Hoon
2000-01-01
In practical design studies, most designers solve multidisciplinary problems with complex design structures. These multidisciplinary problems have hundreds of analyses and thousands of variables. The sequence of processes used to solve these problems affects the speed of the total design cycle. Thus it is very important for the designer to reorder the original design processes to minimize total cost and time. This is accomplished by decomposing the large multidisciplinary problem into several MultiDisciplinary Analysis SubSystems (MDASS) and processing them in parallel. This paper proposes a new strategy for parallel decomposition of multidisciplinary problems to raise design efficiency by using a genetic algorithm, and shows the relationship between decomposition and Multidisciplinary Design Optimization (MDO) methodology
Electric power systems advanced forecasting techniques and optimal generation scheduling
Catalão, João P S
2012-01-01
Overview of Electric Power Generation Systems (Cláudio Monteiro); Uncertainty and Risk in Generation Scheduling (Rabih A. Jabr); Short-Term Load Forecasting (Alexandre P. Alves da Silva and Vitor H. Ferreira); Short-Term Electricity Price Forecasting (Nima Amjady); Short-Term Wind Power Forecasting (Gregor Giebel and Michael Denhard); Price-Based Scheduling for Gencos (Govinda B. Shrestha and Songbo Qiao); Optimal Self-Schedule of a Hydro Producer under Uncertainty (F. Javier Díaz and Javie
Vaziri Yazdi Pin, Mohammad
Electric power distribution systems are the last high-voltage link in the chain of production, transport, and delivery of electric energy, the fundamental goals of which are to supply the users' demand safely, reliably, and economically. The number of circuit miles traversed by distribution feeders, in the form of visible overhead or embedded underground lines, far exceeds that of all other bulk transport circuitry in the transmission system. Development and expansion of distribution systems, as with other systems, is directly proportional to the growth in demand and requires careful planning. While growth of electric demand has recently slowed through efforts in the area of energy management, the need for continued expansion seems inevitable for the near future. Distribution system expansions are also independent of current issues facing both the suppliers and the consumers of electrical energy. For example, deregulation, as an attempt to promote competition by giving more choices to the consumers, will impact the suppliers' planning strategies, but it cannot limit demand growth or system expansion in the global sense. Curiously, despite technological advancements and a 40-year history of contributions in the area, many major utilities still rely on experience and resort to rudimentary techniques when planning expansions. A comprehensive literature review of the contributions and careful analysis of the proposed algorithms for distribution expansion confirmed that the problem is a complex, multistage, multiobjective problem for which a practical solution remains to be developed. In this research, based on the 15-year experience of a utility engineer, the practical expansion problem has been clearly defined and the existing deficiencies in previous work identified and analyzed. The expansion problem has been formulated as a multistage planning problem in line with a natural course of development and industry
MOGO: Model-Oriented Global Optimization of Petascale Applications
Malony, Allen D.; Shende, Sameer S.
2012-09-14
The MOGO project was initiated in 2008 under the DOE Program Announcement for Software Development Tools for Improved Ease-of-Use on Petascale Systems (LAB 08-19). The MOGO team consisted of Oak Ridge National Lab, Argonne National Lab, and the University of Oregon. The overall goal of MOGO was to attack petascale performance analysis by developing a general framework in which empirical performance data could be efficiently and accurately compared with performance expectations at various levels of abstraction. This information could then be used to automatically identify and remediate performance problems. MOGO was based on performance models derived from application knowledge, performance experiments, and symbolic analysis. MOGO made a reasonable impact on existing DOE applications and systems. New tools and techniques were developed which, in turn, were used on important DOE applications on DOE LCF systems to show significant performance improvements.
Poursalehi, N.; Zolfaghari, A.; Minuchehr, A.; Valavi, K.
2013-01-01
Highlights: • SGHS enhanced the convergence rate of LPO through several improvements relative to basic HS and GHS. • The SGHS optimization algorithm obtained better fitness, on average, than the basic HS and GHS algorithms. • The upshot of the SGHS implementation in LPO reveals its flexibility, efficiency and reliability. - Abstract: The aim of this work is to apply a newly developed optimization algorithm, Self-adaptive Global best Harmony Search (SGHS), to PWR fuel management optimization. The SGHS algorithm includes some modifications relative to the basic Harmony Search (HS) and Global-best Harmony Search (GHS) algorithms, such as dynamic adjustment of parameters. To demonstrate the ability of SGHS to find an optimal configuration of fuel assemblies, the basic HS and GHS algorithms have also been developed and investigated. For this purpose, the Self-adaptive Global best Harmony Search Nodal Expansion package (SGHSNE) has been developed, implementing the HS, GHS and SGHS optimization algorithms for the fuel management operation of nuclear reactor cores. This package uses a developed average-current nodal expansion code that solves the multigroup diffusion equation employing first- and second-order Nodal Expansion Method (NEM) for two-dimensional hexagonal and rectangular geometries, respectively, with one node per fuel assembly (FA). Loading pattern optimization was performed using the SGHSNE package for some test cases to demonstrate the capability of the SGHS algorithm to converge to a near-optimal loading pattern. Results indicate that the convergence rate and reliability of the SGHS method are quite promising and that, practically, SGHS improves the quality of loading pattern optimization results relative to the HS and GHS algorithms. As a result, it has the potential to be used in other nuclear engineering optimization problems
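The harmony-search mechanics described above can be illustrated with a minimal sketch; a toy continuous objective stands in for the loading-pattern fitness, and the dynamic parameter schedules are invented for illustration, not taken from the paper:

```python
import random

def harmony_search(f, dim, lo, hi, memory_size=10, iters=2000, seed=1):
    """Basic Harmony Search with SGHS-style dynamic parameters (toy sketch)."""
    rng = random.Random(seed)
    # Initialize the harmony memory with random solutions, best first
    hm = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(memory_size)]
    hm.sort(key=f)
    for t in range(iters):
        # Illustrative schedules: memory-consideration rate grows, pitch
        # adjustment rate and bandwidth shrink as the search proceeds
        hmcr = 0.9 + 0.09 * t / iters
        par = 0.9 - 0.6 * t / iters
        bw = (hi - lo) * (1.0 - t / iters) * 0.1
        new = []
        for d in range(dim):
            if rng.random() < hmcr:
                x = rng.choice(hm)[d]          # draw from harmony memory
                if rng.random() < par:
                    x += rng.uniform(-bw, bw)  # pitch adjustment
            else:
                x = rng.uniform(lo, hi)        # random consideration
            new.append(min(hi, max(lo, x)))
        # Replace the worst harmony if the new one is better
        if f(new) < f(hm[-1]):
            hm[-1] = new
            hm.sort(key=f)
    return hm[0]

sphere = lambda x: sum(v * v for v in x)       # toy stand-in objective
best = harmony_search(sphere, dim=3, lo=-5.0, hi=5.0)
print(sphere(best))                            # typically a small value near 0
```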
Thummala, Prasanth; Schneider, Henrik; Zhang, Zhe
2015-01-01
The energy efficiency is optimized using a proposed new automatic winding layout (AWL) technique and a comprehensive loss model. The AWL technique generates a large number of transformer winding layouts. The transformer parasitics, such as dc resistance, leakage inductance and self-capacitance, are calculated for each winding layout. An optimization technique is formulated to minimize the sum of energy losses during charge and discharge operations. The efficiency and energy loss distribution results from the optimization routine provide a deep insight into the high voltage transformer design and its impact...
A Guiding Evolutionary Algorithm with Greedy Strategy for Global Optimization Problems
Leilei Cao
2016-01-01
A Guiding Evolutionary Algorithm (GEA) with greedy strategy for global optimization problems is proposed. Inspired by Particle Swarm Optimization, the Genetic Algorithm, and the Bat Algorithm, the GEA was designed to retain some advantages of each method while avoiding some disadvantages. In contrast to the usual Genetic Algorithm, each individual in GEA is crossed with the current global best individual instead of a randomly selected one. The current best individual serves as a guide, attracting offspring to its region of genotype space. Mutation is added to offspring according to a dynamic mutation probability. To increase the capability of exploitation, a local search mechanism is applied to new individuals according to a dynamic probability of local search. Experimental results show that GEA outperformed the three typical global optimization algorithms with which it was compared.
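A minimal sketch of the guiding idea, crossover with the current global best plus dynamic-probability mutation, on a toy objective (all parameter values are illustrative, and the local-search step is omitted):

```python
import random

def guiding_ea(f, dim, lo, hi, pop_size=30, gens=200, seed=2):
    """Toy guiding EA: every individual is crossed with the global best,
    and mutation is applied with a probability that decays over time."""
    rng = random.Random(seed)
    pop = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(pop_size)]
    best = min(pop, key=f)
    for g in range(gens):
        pm = 0.3 * (1.0 - g / gens) + 0.05      # dynamic mutation probability
        new_pop = []
        for ind in pop:
            # Arithmetic crossover with the current global best individual
            a = rng.random()
            child = [a * b + (1 - a) * x for b, x in zip(best, ind)]
            # Gaussian mutation applied gene-wise with probability pm
            child = [min(hi, max(lo, c + rng.gauss(0.0, 0.5)))
                     if rng.random() < pm else c for c in child]
            new_pop.append(child)
        pop = new_pop
        cand = min(pop, key=f)
        if f(cand) < f(best):                   # keep the best found so far
            best = cand
    return best

f = lambda x: sum((v - 1.0) ** 2 for v in x)    # toy shifted-sphere objective
best = guiding_ea(f, dim=4, lo=-5.0, hi=5.0)
```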
Optimization of fast dissolving etoricoxib tablets prepared by sublimation technique
Patel D; Patel M
2008-01-01
The purpose of this investigation was to develop fast dissolving tablets of etoricoxib. Granules containing etoricoxib, menthol, crospovidone, aspartame and mannitol were prepared by a wet granulation technique. Menthol was sublimed from the granules by exposing them to vacuum. The porous granules were then compressed into tablets. Alternatively, tablets were first prepared and later exposed to vacuum. The tablets were evaluated for percentage friability and disintegration time. A 3² ...
Optimal deep neural networks for sparse recovery via Laplace techniques
Limmer, Steffen; Stanczak, Slawomir
2017-01-01
This paper introduces Laplace techniques for designing a neural network, with the goal of estimating simplex-constrained sparse vectors from compressed measurements. To this end, we recast the problem of MMSE estimation (w.r.t. a pre-defined uniform input distribution) as the problem of computing the centroid of some polytope that results from the intersection of the simplex and an affine subspace determined by the measurements. Owing to the specific structure, it is shown that the centroid ca...
Clausen, Jens; Zilinskas, A.
2002-01-01
We consider the problem of optimizing a Lipschitzian function. The branch and bound technique is a well-known solution method, and its key components are the subdivision scheme, the bound calculation scheme, and the initialization. For Lipschitzian optimization, the bound calculations are...
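For Lipschitzian optimization, the classic bound calculation is the saw-tooth lower bound of the Piyavskii-Shubert method, which branch and bound can drive to a proven global optimum. A self-contained sketch (the test function and its Lipschitz constant are our choices, not the chapter's):

```python
import heapq
import math

def piyavskii(f, a, b, lipschitz, tol=1e-4, max_iter=10000):
    """Branch-and-bound (Piyavskii-Shubert) minimization of a Lipschitz
    function on [a, b]. An interval [l, r] with endpoint values fl, fr
    carries the lower bound (fl + fr)/2 - L*(r - l)/2."""
    fa, fb = f(a), f(b)
    best_x, best_f = (a, fa) if fa < fb else (b, fb)
    bound = lambda l, fl, r, fr: (fl + fr) / 2 - lipschitz * (r - l) / 2
    heap = [(bound(a, fa, b, fb), a, fa, b, fb)]
    for _ in range(max_iter):
        lb, l, fl, r, fr = heapq.heappop(heap)
        if best_f - lb < tol:          # smallest lower bound proves optimality
            break
        # Split at the minimizer of the saw-tooth lower bound on [l, r]
        m = (l + r) / 2 + (fl - fr) / (2 * lipschitz)
        fm = f(m)
        if fm < best_f:
            best_x, best_f = m, fm
        heapq.heappush(heap, (bound(l, fl, m, fm), l, fl, m, fm))
        heapq.heappush(heap, (bound(m, fm, r, fr), m, fm, r, fr))
    return best_x, best_f

# Standard univariate test function; |f'| <= 1 + 10/3, so L = 4.5 is valid
x, fx = piyavskii(lambda t: math.sin(t) + math.sin(10 * t / 3), 2.7, 7.5,
                  lipschitz=4.5)
```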
Brown, Aaron J.
2015-01-01
The International Space Station's (ISS) trajectory is coordinated and executed by the Trajectory Operations and Planning (TOPO) group at NASA's Johnson Space Center. TOPO group personnel routinely generate look-ahead trajectories for the ISS that incorporate the translation burns needed to maintain its orbit over the next three to twelve months. The burns are modeled as in-plane, horizontal burns, and must meet operational trajectory constraints imposed by both NASA and the Russian Space Agency. In generating these trajectories, TOPO personnel must determine the number of burns to model, each burn's Time of Ignition (TIG), and each burn's magnitude (i.e. deltaV) that meet these constraints. The current process for targeting these burns is manually intensive, and does not take advantage of more modern techniques that can reduce the workload needed to find feasible burn solutions, i.e. solutions that simply meet the constraints, or provide optimal burn solutions that minimize the total deltaV while simultaneously meeting the constraints. A two-level, hybrid optimization technique is proposed to find both feasible and globally optimal burn solutions for ISS trajectory planning. For optimal solutions, the technique breaks the optimization problem into two distinct sub-problems: one for choosing the optimal number of burns and each burn's optimal TIG, and the other for computing the minimum total deltaV burn solution that satisfies the trajectory constraints. Each of the two levels uses a different optimization algorithm to solve one of the sub-problems, giving rise to a hybrid technique. Level 2, the outer level, uses a genetic algorithm to select the number of burns and each burn's TIG. Level 1, the inner level, uses the burn TIGs from Level 2 in a sequential quadratic programming (SQP) algorithm to compute a minimum total deltaV burn solution subject to the trajectory constraints. The total deltaV from Level 1 is then used as a fitness function by the genetic
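A toy sketch of the two-level split, with exhaustive enumeration standing in for the genetic outer level and a greedy allocation (optimal for this linear toy) standing in for SQP; the slot efficiencies, caps, and required budget are all invented for illustration:

```python
# Outer level picks which burn slots (TIGs) to use; inner level computes the
# minimum total delta-V allocation meeting a required "altitude-raise" budget.
EFFICIENCY = [0.5, 0.9, 0.6, 1.0, 0.7, 0.8]   # effect per m/s at each slot
CAP = 2.0                                      # max delta-V per burn (m/s)
REQUIRED = 3.0                                 # required total effect

def inner_min_dv(slots):
    """Inner level: minimum total delta-V meeting REQUIRED, greedy by
    efficiency (which is optimal for this capped linear toy problem)."""
    need, total = REQUIRED, 0.0
    for s in sorted(slots, key=lambda i: -EFFICIENCY[i]):
        dv = min(CAP, need / EFFICIENCY[s])
        need -= dv * EFFICIENCY[s]
        total += dv
        if need <= 1e-12:
            return total
    return float("inf")                        # this slot choice is infeasible

def outer_level():
    """Outer level: search over burn-slot subsets with a small per-burn cost."""
    best_slots, best_cost = [], float("inf")
    for mask in range(1, 1 << len(EFFICIENCY)):
        slots = [i for i in range(len(EFFICIENCY)) if mask >> i & 1]
        cost = inner_min_dv(slots) + 0.01 * len(slots)
        if cost < best_cost:
            best_slots, best_cost = slots, cost
    return best_slots, best_cost

slots, cost = outer_level()    # here the two most efficient slots win out
```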
The Global Optimal Algorithm of Reliable Path Finding Problem Based on Backtracking Method
Liang Shen
2017-01-01
There is growing interest in finding a global optimal path in transportation networks, particularly when the network suffers from unexpected disturbances. This paper studies the problem of finding a global optimal path that guarantees a given probability of arriving on time in a network with uncertainty, in which the travel time is stochastic rather than deterministic. Traditional path finding methods based on least expected travel time cannot capture the network user's risk-taking behavior in path finding. To overcome this limitation, reliable path finding algorithms have been proposed, but convergence to the global optimum is seldom addressed in the literature. This paper integrates the K-shortest path algorithm into a Backtracking method to propose a new path finding algorithm under uncertainty. The global optimality of the proposed method can be guaranteed. Numerical examples are conducted to demonstrate the correctness and efficiency of the proposed algorithm.
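The core idea, enumerating simple paths by backtracking and scoring each by its probability of arriving on time, can be sketched on a toy network. Independent normal edge travel times are a standard simplification; the graph and deadline below are invented:

```python
import math

# Each edge carries (mean, variance) of its travel time; a path's on-time
# probability under a deadline T is Phi((T - sum_mu) / sqrt(sum_var)).
GRAPH = {
    "A": {"B": (4.0, 1.0), "C": (2.0, 4.0)},
    "B": {"D": (3.0, 1.0)},
    "C": {"D": (5.0, 0.25)},
    "D": {},
}

def on_time_prob(mu, var, deadline):
    # Normal CDF via the error function
    return 0.5 * (1.0 + math.erf((deadline - mu) / math.sqrt(2.0 * var)))

def best_reliable_path(src, dst, deadline):
    """Backtracking enumeration of simple paths, keeping the most reliable."""
    best = (None, -1.0)
    def backtrack(node, path, mu, var):
        nonlocal best
        if node == dst:
            p = on_time_prob(mu, var, deadline)
            if p > best[1]:
                best = (list(path), p)
            return
        for nxt, (m, v) in GRAPH[node].items():
            if nxt not in path:                 # simple paths only
                path.append(nxt)
                backtrack(nxt, path, mu + m, var + v)
                path.pop()
    backtrack(src, [src], 0.0, 0.0)
    return best

# Both A-B-D and A-C-D have mean 7; the lower-variance path is more reliable
path, prob = best_reliable_path("A", "D", deadline=8.0)
```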
Parallel processing based decomposition technique for efficient collaborative optimization
Park, Hyung Wook; Kim, Sung Chan; Kim, Min Soo; Choi, Dong Hoon
2001-01-01
In practical design studies, most designers solve multidisciplinary problems with large and complex design systems. These multidisciplinary problems have hundreds of analyses and thousands of variables. The sequence of processes used to solve these problems affects the speed of the total design cycle. Thus it is very important for the designer to reorder the original design processes to minimize total computational cost. This is accomplished by decomposing the large multidisciplinary problem into several MultiDisciplinary Analysis SubSystems (MDASS) and processing them in parallel. This paper proposes a new strategy for parallel decomposition of multidisciplinary problems to raise design efficiency by using a genetic algorithm, and shows the relationship between decomposition and Multidisciplinary Design Optimization (MDO) methodology
OPTIMAL DATA REPLACEMENT TECHNIQUE FOR COOPERATIVE CACHING IN MANET
P. Kuppusamy
2014-09-01
A cooperative caching approach improves data accessibility and reduces query latency in a Mobile Ad hoc Network (MANET). Maintaining the cache is a challenging issue in a large MANET due to mobility, cache size and power. Previous research on caching has primarily dealt with the LRU, LFU and LRU-MIN cache replacement algorithms, which offer low query latency and greater data accessibility in a sparse MANET. This paper proposes a Memetic Algorithm (MA) to locate better replaceable data based on neighbours' interest and the fitness value of cached data, in order to store newly arrived data. This work also elects an ideal cluster head (CH) using the metaheuristic Ant Colony Optimization algorithm. The simulation results show that the proposed algorithm reduces latency and control overhead and increases the packet delivery rate compared with the existing approach as the number of nodes and their speed increase.
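A toy fitness-based replacement policy in the spirit of the abstract, with fitness combining access frequency, recency, and neighbour interest; the weights and fitness form are illustrative assumptions, not the paper's values:

```python
class FitnessCache:
    """Evict the lowest-fitness item instead of plain LRU/LFU (toy sketch)."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.items = {}        # key -> [frequency, last_access, neighbour_interest]
        self.clock = 0

    def fitness(self, key):
        freq, last, interest = self.items[key]
        recency = 1.0 / (1 + self.clock - last)
        # Illustrative weighting of frequency, recency, neighbour interest
        return 0.5 * freq + 0.3 * recency + 0.2 * interest

    def access(self, key, neighbour_interest=0.0):
        self.clock += 1
        if key in self.items:
            self.items[key][0] += 1
            self.items[key][1] = self.clock
            return True                          # cache hit
        if len(self.items) >= self.capacity:     # evict lowest-fitness item
            victim = min(self.items, key=self.fitness)
            del self.items[victim]
        self.items[key] = [1, self.clock, neighbour_interest]
        return False                             # cache miss

cache = FitnessCache(capacity=2)
cache.access("a"); cache.access("a"); cache.access("b")
cache.access("c")   # evicts "b", whose fitness is lower than "a"'s
```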
Conference on "State of the Art in Global Optimization : Computational Methods and Applications"
Pardalos, P
1996-01-01
Optimization problems abound in most fields of science, engineering, and technology. In many of these problems it is necessary to compute the global optimum (or a good approximation) of a multivariable function. The variables that define the function to be optimized can be continuous and/or discrete and, in addition, often must satisfy certain constraints. Global optimization problems belong to the complexity class of NP-hard problems. Such problems are very difficult to solve. Traditional descent optimization algorithms based on local information are not adequate for solving these problems. In most cases of practical interest the number of local optima increases, on average, exponentially with the size of the problem (number of variables). Furthermore, most of the traditional approaches fail to escape from a local optimum in order to continue the search for the global solution. Global optimization has received a lot of attention in the past ten years, due to the success of new algorithms for solving...
Global warming and carbon taxation. Optimal policy and the role of administration costs
Williams, M.
1995-01-01
This paper develops a model relating CO2 emissions to atmospheric concentrations, global temperature change and economic damages. For a variety of parameter assumptions, the model provides estimates of the marginal cost of emissions in various years. The optimal carbon tax is a function of the marginal emission cost and the costs of administering the tax. This paper demonstrates that under any reasonable assumptions, the optimal carbon tax is zero for at least several decades. (author)
Portnoy, David, E-mail: david.portnoy@jhuapl.edu [Johns Hopkins University Applied Physics Laboratory, 11100 Johns Hopkins Road, Laurel, MD 20723 (United States); Feuerbach, Robert; Heimberg, Jennifer [Johns Hopkins University Applied Physics Laboratory, 11100 Johns Hopkins Road, Laurel, MD 20723 (United States)
2011-10-01
Today there is a tremendous amount of interest in systems that can detect radiological or nuclear threats. Many of these systems operate in extremely high throughput situations where delays caused by false alarms can have a significant negative impact. Thus, calculating the tradeoff between detection rates and false alarm rates is critical for their successful operation. Receiver operating characteristic (ROC) curves have long been used to depict this tradeoff. The methodology was first developed in the field of signal detection. In recent years it has been used increasingly in machine learning and data mining applications. It follows that this methodology could be applied to radiological/nuclear threat detection systems. However many of these systems do not fit into the classic principles of statistical detection theory because they tend to lack tractable likelihood functions and have many parameters, which, in general, do not have a one-to-one correspondence with the detection classes. This work proposes a strategy to overcome these problems by empirically finding parameter values that maximize the probability of detection for a selected number of probabilities of false alarm. To find these parameter values a statistical global optimization technique that seeks to estimate portions of a ROC curve is proposed. The optimization combines elements of simulated annealing with elements of genetic algorithms. Genetic algorithms were chosen because they can reduce the risk of getting stuck in local minima. However classic genetic algorithms operate on arrays of Booleans values or bit strings, so simulated annealing is employed to perform mutation in the genetic algorithm. The presented initial results were generated using an isotope identification algorithm developed at Johns Hopkins University Applied Physics Laboratory. The algorithm has 12 parameters: 4 real-valued and 8 Boolean. A simulated dataset was used for the optimization study; the 'threat' set of
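The "estimate selected ROC points empirically" idea can be sketched in miniature: here the detector is reduced to a single score threshold (whereas the paper searches 12 algorithm parameters with a genetic-algorithm/simulated-annealing hybrid), and the score distributions are synthetic:

```python
import random

# Synthetic detector scores for benign and threat populations
rng = random.Random(0)
benign = [rng.gauss(0.0, 1.0) for _ in range(5000)]
threat = [rng.gauss(2.0, 1.0) for _ in range(5000)]

def roc_points(benign, threat, target_fars):
    """For each target false-alarm rate, pick the score threshold
    empirically and record the resulting detection rate."""
    benign_sorted = sorted(benign)
    points = []
    for far in target_fars:
        # Threshold exceeded by roughly `far` of the benign scores
        k = int(len(benign_sorted) * (1.0 - far))
        thr = benign_sorted[min(k, len(benign_sorted) - 1)]
        pd = sum(s > thr for s in threat) / len(threat)
        points.append((far, pd))
    return points

points = roc_points(benign, threat, [0.01, 0.05, 0.1])
for far, pd in points:
    print(f"P_fa={far:.2f}  P_d={pd:.3f}")    # P_d rises with allowed P_fa
```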
Dong, Huachao; Song, Baowei; Wang, Peng; Huang, Shuai
2015-01-01
In this paper, a novel kriging-based algorithm for global optimization of computationally expensive black-box functions is presented. This algorithm uses a multi-start approach to find all of the local optima of the surrogate model and performs searches within the neighbouring areas around these local optimal positions. Compared with traditional surrogate-based global optimization methods, this algorithm provides a different balance between exploitation and exploration of the kriging-based model. In addition, a new search strategy is proposed and coupled into this optimization process. The local search strategy employs an improved 'Minimizing the predictor' method, which dynamically adjusts the search direction and radius until it finds the optimal value. Furthermore, the global search strategy exploits the advantage of the kriging-based model in predicting unexplored regions to guarantee the reliability of the algorithm. Finally, experiments on 13 test functions with six algorithms are set up, and the results show that the proposed algorithm is very promising.
Pulmonary CT angiography: optimization of contrast enhancement technique
Ma Lianju; Tang Guangjian; Fu Jiazhen
2012-01-01
Objective: To derive and evaluate a formula for exactly calculating the contrast dosage used during pulmonary CT angiography (CTPA). Methods: Time-density curves in 27 patients who underwent CTPA were collected and analyzed, and the formula for calculating the contrast dosage during CTPA was derived. 68 patients clinically suspected of pulmonary embolism (PE) but without PE on CTPA were divided randomly into group A, with the bolus tracing technique (n=26), and group B, with a small dose injection contrast test (SDCT) (n=42). The CT values of the right main pulmonary artery (RMPA), right upper pulmonary vein (RUPV), right posterior basal PA and right lower PV (RLPV), and of the aorta, were calculated. The total contrast dosage and the beam-hardening artifact in the SVC were compared between the two groups. Student's t test, the Chi-square test and the Mann-Whitney U test were used. Results: The ratio of the time from the start of injection to the enhancement peak of the caudal end of the SVC to the time to the enhancement peak of the main pulmonary trunk was 0.65±0.09 (about 2/3); the formula for contrast dosage calculation was derived as (DTs/3 + STs/2)·FR ml. The CT values of the RMPA and RLPA between the two groups [(301±117), (329±122) and (283±95), (277±98) HU respectively] were not significantly different (t=1.060, P=0.292; t=2.056, P=0.044), but the differences of the CT values in the paired PA and PV between the two groups (medians 22.5, 58.0 and 170.5, 166.5 HU respectively) were significant (U=292, P=0.001 and U=325, P=0.005); contrast artifact of the SVC (grades 1-3) in group B (n=34, 7, 1 respectively) was significantly less than in group A (n=11, 10, 5 respectively; χ²=10.714, P=0.002); the contrast dosage injected in group A was (87.6±7.3) ml, and in group B (40.0±5.4) ml (P<0.01). Conclusion: CTPA with the SDCT technique is superior to that with the conventional bolus tracing technique regarding contrast dosage and contrast artifact in the SVC. (authors)
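The derived dosage formula is simple enough to check numerically. The variable readings below (DTs as delay time in seconds, STs as scan time in seconds, FR as flow rate in ml/s) are our interpretation of the abbreviations, and the example numbers are illustrative:

```python
def contrast_dose_ml(delay_s, scan_s, flow_ml_per_s):
    """Contrast dosage per the abstract's derived formula (DTs/3 + STs/2)*FR.
    Variable meanings are our reading of the abbreviations, not stated
    explicitly in the abstract."""
    return (delay_s / 3.0 + scan_s / 2.0) * flow_ml_per_s

# e.g. 15 s delay and 10 s scan at 4 ml/s -> (5 + 5) * 4 = 40 ml, in line
# with the ~40 ml reported for the SDCT group (illustrative numbers only)
print(contrast_dose_ml(15, 10, 4))  # 40.0
```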
Optimal time-domain technique for pulse width modulation in power electronics
I. Mayergoyz
2018-05-01
An optimal time-domain technique for pulse width modulation is presented. It is based on exact and explicit analytical solutions for inverter circuits, obtained for any sequence of input voltage rectangular pulses. Two optimality criteria are discussed and illustrated by numerical examples.
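A naturally-sampled PWM comparator, the textbook construction behind such rectangular pulse trains, can be sketched to show the local average of the switched output tracking the reference; frequencies and amplitude are illustrative, not from the paper:

```python
import math

def triangle(t, f_carrier):
    """Triangular carrier in [-1, 1] at frequency f_carrier."""
    x = (t * f_carrier) % 1.0
    return 4 * x - 1 if x < 0.5 else 3 - 4 * x

def pwm(t, f_ref=50.0, f_carrier=2000.0, amplitude=0.8):
    """Comparator output: +1 when the sinusoidal reference exceeds the
    carrier, -1 otherwise (a two-level inverter leg, idealized)."""
    ref = amplitude * math.sin(2 * math.pi * f_ref * t)
    return 1.0 if ref >= triangle(t, f_carrier) else -1.0

# Average the switched output over one carrier period near the reference
# peak; it should approximate amplitude * sin(2*pi*50*t0) = 0.8
dt = 1e-6
t0 = 0.005                       # quarter period of the 50 Hz reference
window = 1.0 / 2000.0            # one carrier period
samples = [pwm(t0 + k * dt) for k in range(int(window / dt))]
avg = sum(samples) / len(samples)
print(avg)                       # close to 0.8
```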
Application of Advanced Particle Swarm Optimization Techniques to Wind-thermal Coordination
Singh, Sri Niwas; Østergaard, Jacob; Yadagiri, J.
2009-01-01
A wind-thermal coordination algorithm is necessary to determine the optimal proportion of wind and thermal generator capacity that can be integrated into the system. In this paper, four versions of Particle Swarm Optimization (PSO) techniques are proposed for solving the wind-thermal coordination problem...
Theoretical properties of the global optimizer of two layer neural network
Boob, Digvijay; Lan, Guanghui
2017-01-01
In this paper, we study the problem of optimizing a two-layer artificial neural network that best fits a training dataset. We look at this problem in the setting where the number of parameters is greater than the number of sampled points. We show that for a wide class of differentiable activation functions (this class involves "almost" all functions which are not piecewise linear), we have that first-order optimal solutions satisfy global optimality provided the hidden layer is non-singular. ...
A global optimization algorithm inspired in the behavior of selfish herds.
Fausto, Fernando; Cuevas, Erik; Valdivia, Arturo; González, Adrián
2017-10-01
In this paper, a novel swarm optimization algorithm called the Selfish Herd Optimizer (SHO) is proposed for solving global optimization problems. SHO is based on the simulation of the widely observed selfish herd behavior manifested by individuals within a herd of animals subjected to some form of predation risk. In SHO, individuals emulate the predatory interactions between groups of prey and predators by two types of search agents: the members of a selfish herd (the prey) and a pack of hungry predators. Depending on their classification as either a prey or a predator, each individual is conducted by a set of unique evolutionary operators inspired by such prey-predator relationship. These unique traits allow SHO to improve the balance between exploration and exploitation without altering the population size. To illustrate the proficiency and robustness of the proposed method, it is compared to other well-known evolutionary optimization approaches such as Particle Swarm Optimization (PSO), Artificial Bee Colony (ABC), Firefly Algorithm (FA), Differential Evolution (DE), Genetic Algorithms (GA), Crow Search Algorithm (CSA), Dragonfly Algorithm (DA), Moth-flame Optimization Algorithm (MOA) and Sine Cosine Algorithm (SCA). The comparison examines several standard benchmark functions, commonly considered within the literature of evolutionary algorithms. The experimental results show the remarkable performance of our proposed approach against those of the other compared methods, and as such SHO is proven to be an excellent alternative to solve global optimization problems.
Global optimal path planning of an autonomous vehicle for overtaking a moving obstacle
B. Mashadi
In this paper, the global optimal path planning of an autonomous vehicle for overtaking a moving obstacle is proposed. In this study, the autonomous vehicle overtakes a moving vehicle by performing a double lane-change maneuver after detecting it at a proper distance ahead. The optimal path of the vehicle for performing the lane-change maneuver is generated by a path planning program in which the sum of the lateral deviation of the vehicle from a reference path and the rate of the steering angle is minimized, while the lateral acceleration of the vehicle does not exceed a safe limit value. A nonlinear optimal control theory with the lateral vehicle dynamics equations and an inequality constraint on lateral acceleration is used to generate the path. The indirect approach for solving the optimal control problem is used by applying the calculus of variations and Pontryagin's Minimum Principle to obtain first-order necessary conditions for optimality. The optimal path is generated as a global optimal solution and can be used as a benchmark for paths generated by the local motion planning of autonomous vehicles. A full nonlinear vehicle model in CarSim software is used for path-following simulation by importing path data from the MATLAB code. The simulation results show that the generated path for the autonomous vehicle satisfies all vehicle dynamics constraints and hence is a suitable overtaking path for the following vehicle.
Optimized digital filtering techniques for radiation detection with HPGe detectors
Salathe, Marco, E-mail: marco.salathe@mpi-hd.mpg.de; Kihm, Thomas, E-mail: mizzi@mpi-hd.mpg.de
2016-02-01
This paper describes state-of-the-art digital filtering techniques that are part of GEANA, an automatic data analysis software used for the GERDA experiment. The discussed filters include a novel, nonlinear correction method for ballistic deficits, which is combined with one of three shaping filters: a pseudo-Gaussian, a modified trapezoidal, or a modified cusp filter. The performance of the filters is demonstrated with a 762 g Broad Energy Germanium (BEGe) detector, produced by Canberra, that measures γ-ray lines from radioactive sources in an energy range between 59.5 and 2614.5 keV. At 1332.5 keV, together with the ballistic deficit correction method, all filters produce a comparable energy resolution of ~1.61 keV FWHM. This value is superior to those measured by the manufacturer and those found in publications with detectors of a similar design and mass. At 59.5 keV, the modified cusp filter without a ballistic deficit correction produced the best result, with an energy resolution of 0.46 keV. It is observed that the loss in resolution by using a constant shaping time over the entire energy range is small when using the ballistic deficit correction method.
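As a rough illustration of the shaping step (GEANA's actual filters, including the ballistic-deficit correction, are more elaborate), a finite-impulse trapezoidal shaper can be built from the difference of two boxcar sums:

```python
def trapezoidal_filter(x, rise, flat):
    # Convolve x with a kernel made of a positive boxcar of width `rise`
    # and a negative boxcar of the same width delayed by rise + flat,
    # normalized so a unit step produces a trapezoid of height 1.
    csum = [0.0]
    for v in x:
        csum.append(csum[-1] + v)
    def S(a, b):                       # sum of x[a:b], clipped to range
        a, b = max(a, 0), max(b, 0)
        return csum[min(b, len(x))] - csum[min(a, len(x))]
    out = []
    for n in range(len(x)):
        pos = S(n - rise + 1, n + 1)
        neg = S(n - 2 * rise - flat + 1, n - rise - flat + 1)
        out.append((pos - neg) / rise)
    return out

signal = [0.0] * 20 + [1.0] * 60       # idealized detector step
shaped = trapezoidal_filter(signal, rise=8, flat=4)
print(max(shaped))                     # peak equals the step amplitude, 1.0
```

The flat top is what makes trapezoidal shaping tolerant of finite charge-collection time; the paper's ballistic-deficit correction addresses the residual amplitude loss when collection is slower than the flat-top duration.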
Optimization of digital radiography techniques for specific application
Harara, W.
2010-12-01
A low-cost digital radiography system (DRS) for testing weld joints and castings in the laboratory was assembled. The DRS is composed of an X-ray source, a scintillator, a first-surface mirror with aluminum coating, a charge-coupled device (CCD) camera and a lens. The DRS was used to test flawed carbon steel welded plates with thicknesses up to 12 mm. A comparison between the digital radiographs of the plate weldments and radiographs of the same weldments taken with a medium-speed film showed that the weld flaw detection capability is nearly identical for the two radiography techniques, while the sensitivity achieved in digital radiography was one IQI wire less than that achieved by conventional radiography according to EN 462-1. Further, the DRS was also successfully used to test a (100 x 100 x 100) mm aluminum casting with artificial flaws of varied dimensions and orientations. The resulting digital radiographs of the casting show that all the flaws were detected and their dimensions could be measured accurately, confirming that the proposed DRS can be used to detect and measure flaws in aluminum and other light-metal castings accurately. (author)
Liang, Faming; Cheng, Yichen; Lin, Guang
2014-01-01
cooling schedule, for example, a square-root cooling schedule, while guaranteeing the global optima to be reached when the temperature tends to zero. The new algorithm has been tested on a few benchmark optimization problems, including feed-forward neural
Woolstencroft, W.
2004-01-01
The pace of change in the energy utility world is accelerating. The new political, environmental, and competitive pressures in all European countries mandate new ways to operate and find efficiencies. We propose a much broader use of optimization technologies, as they are starting to be practiced by leading-edge energy companies. We present a holistic case for optimization techniques at the global and local level that are integrated with distributed control systems and with each other. They yield a very high degree of transparency, high-speed optimization and fast reaction capability with complete profit understanding. This case deals with most of the pressures facing modern utility companies. It is most appropriate for companies that operate a wide variety of generating technologies and support central processes such as asset management, portfolio optimization, and utility production planning. We present best-practice examples from industry and give indications of the gains made by those already practicing these techniques. Gains of 3 to 5% of variable operating costs are standard for fairly small IT and organizational behaviour adjustments. (author)
PS-FW: A Hybrid Algorithm Based on Particle Swarm and Fireworks for Global Optimization
Chen, Shuangqing; Wei, Lixin; Guan, Bing
2018-01-01
Particle swarm optimization (PSO) and fireworks algorithm (FWA) are two recently developed optimization methods which have been applied in various areas due to their simplicity and efficiency. However, when applied to high-dimensional optimization problems, the PSO algorithm may be trapped in local optima owing to the lack of powerful global exploration capability, and the fireworks algorithm may fail to converge in some cases because of its relatively low local exploitation efficiency for noncore fireworks. In this paper, a hybrid algorithm called PS-FW is presented, in which the modified operators of FWA are embedded into the solving process of PSO. In the iteration process, an abandonment and supplement mechanism is adopted to balance the exploration and exploitation ability of PS-FW, and a modified explosion operator and a novel mutation operator are proposed to speed up the global convergence and to avoid prematurity. To verify the performance of the proposed PS-FW algorithm, 22 high-dimensional benchmark functions have been employed, and it is compared with the PSO, FWA, stdPSO, CPSO, CLPSO, FIPS, Frankenstein, and ALWPSO algorithms. Results show that the PS-FW algorithm is an efficient, robust, and fast converging optimization method for solving global optimization problems. PMID:29675036
Optimal estimation of regional N2O emissions using a three-dimensional global model
Huang, J.; Golombek, A.; Prinn, R.
2004-12-01
In this study, we use the MATCH (Model of Atmospheric Transport and Chemistry) model and Kalman filtering techniques to optimally estimate N2O emissions from seven source regions around the globe. The MATCH model was used with NCEP assimilated winds at T62 resolution (192 longitude by 94 latitude surface grid, and 28 vertical levels) from July 1st 1996 to December 31st 2000. The average concentrations of N2O in the lowest four layers of the model were then compared with the monthly mean observations from six national/global networks (AGAGE, CMDL (HATS), CMDL (CCGG), CSIRO, CSIR and NIES), at 48 surface sites. A 12-month-running-mean smoother was applied to both the model results and the observations, due to the fact that the model was not able to reproduce the very small observed seasonal variations. The Kalman filter was then used to solve for the time-averaged regional emissions of N2O for January 1st 1997 to June 30th 2000. The inversions assume that the model stratospheric destruction rates, which lead to a global N2O lifetime of 130 years, are correct. They also assume normalized emission spatial distributions from each region based on previous studies. We conclude that the global N2O emission flux is about 16.2 TgN/yr, with {34.9±1.7%} from South America and Africa, {34.6±1.5%} from South Asia, {13.9±1.5%} from China/Japan/South East Asia, {8.0±1.9%} from all oceans, {6.4±1.1%} from North America and North and West Asia, {2.6±0.4%} from Europe, and {0.9±0.7%} from New Zealand and Australia. The errors here include the measurement standard deviation, calibration differences among the six groups, grid volume/measurement site mis-match errors estimated from the model, and a procedure to account approximately for the modeling errors.
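The core of such an inversion is a linear Bayesian (Kalman) update of the emission vector from sparse observations. The sketch below uses an invented 3-region toy problem with made-up sensitivities in place of the MATCH transport model, processing each synthetic station observation as a scalar Kalman update:

```python
import random

def kalman_update(x, P, h, y_obs, r):
    # One scalar-measurement Kalman update for y_obs = h . x + noise(var r).
    n = len(x)
    Ph = [sum(P[i][j] * h[j] for j in range(n)) for i in range(n)]
    s = sum(h[i] * Ph[i] for i in range(n)) + r          # innovation variance
    K = [Ph[i] / s for i in range(n)]                    # Kalman gain
    innov = y_obs - sum(h[i] * x[i] for i in range(n))
    x = [x[i] + K[i] * innov for i in range(n)]
    P = [[P[i][j] - K[i] * Ph[j] for j in range(n)] for i in range(n)]
    return x, P

rng = random.Random(0)
x_true = [5.0, 8.0, 3.0]                 # "true" regional emissions (made up)
x = [4.0, 6.0, 4.0]                      # prior guess
P = [[4.0 if i == j else 0.0 for j in range(3)] for i in range(3)]
for _ in range(24):                      # 24 synthetic station observations
    h = [rng.uniform(0.1, 1.0) for _ in range(3)]   # assumed sensitivities
    y = sum(h[i] * x_true[i] for i in range(3)) + rng.gauss(0.0, 0.05)
    x, P = kalman_update(x, P, h, y, r=0.05 ** 2)

print(x)  # posterior estimate, close to x_true
```

Processing observations one at a time like this is algebraically equivalent to the batch Kalman update when observation errors are independent, which keeps the sketch free of matrix inversion.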
Saur, S; Frengen, J; Fjellsboe, L M B; Lindmo, T
2009-01-01
The contralateral breast (CLB) doses for three tangential techniques were characterized by using a female thorax phantom and GafChromic EBT film. Dose calculations by the pencil beam and collapsed cone algorithms were included for comparison. The film dosimetry reveals a highly inhomogeneous dose distribution within the CLB, and skin doses due to the medial fields that are several times higher than the interior dose. These phenomena are not correctly reproduced by the calculation algorithms. All tangential techniques were found to give a mean CLB dose of approximately 0.5 Gy. All wedged fields resulted in higher CLB doses than the corresponding open fields, and the lateral open fields resulted in higher CLB doses than the medial open fields. More than a twofold increase in the mean CLB dose from the medial open field was observed for a 90 deg. change of the collimator orientation. Replacing the physical wedge with a virtual wedge reduced the mean dose to the CLB by 35% and 16% for the medial and lateral fields, respectively. Lead shielding reduced the skin dose for a tangential technique by approximately 50%, but the mean CLB dose was only reduced by approximately 11%. Finally, a technique based on open medial fields in combination with several IMRT fields is proposed as a technique for minimizing the CLB dose. With and without lead shielding, the mean CLB dose using this technique was found to be 0.20 and 0.27 Gy, respectively.
Selective Segmentation for Global Optimization of Depth Estimation in Complex Scenes
Sheng Liu
2013-01-01
This paper proposes a segmentation-based global optimization method for depth estimation. Firstly, to obtain an accurate matching cost, the original local stereo matching approach based on a self-adapting matching window is integrated with two matching cost optimization strategies aimed at handling both borders and occlusion regions. Secondly, we employ a comprehensive smoothness term to satisfy the diverse smoothness requirements of real scenes. Thirdly, a selective segmentation term is used to enforce plane trend constraints selectively on the corresponding segments, further improving the accuracy of the depth results at the object level. Experiments on the Middlebury image pairs show that the proposed global optimization approach is highly competitive with other state-of-the-art matching approaches.
Global optimization based on noisy evaluations: An empirical study of two statistical approaches
Vazquez, Emmanuel; Villemonteix, Julien; Sidorkiewicz, Maryan; Walter, Eric
2008-01-01
The optimization of the output of complex computer codes often has to be achieved with a small budget of evaluations. Algorithms dedicated to such problems have been developed and compared, such as the Expected Improvement algorithm (EI) or the Informational Approach to Global Optimization (IAGO). However, the influence of noisy evaluation results on the outcome of these comparisons has often been neglected, despite its frequent appearance in industrial problems. In this paper, empirical convergence rates for EI and IAGO are compared when an additive noise corrupts the result of an evaluation. IAGO appears more efficient than EI and various modifications of EI designed to deal with noisy evaluations. Keywords: global optimization; computer simulations; kriging; Gaussian process; noisy evaluations.
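For reference, the noise-free Expected Improvement criterion mentioned above has a standard closed form for a Gaussian predictive distribution N(mu, sigma^2) under the minimization convention; the noisy-evaluation variants studied in the paper modify this baseline:

```python
import math

def expected_improvement(mu, sigma, f_min):
    # Closed-form EI at a candidate point with Gaussian prediction
    # N(mu, sigma^2), given current best observed value f_min.
    if sigma <= 0.0:
        return max(f_min - mu, 0.0)
    z = (f_min - mu) / sigma
    Phi = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))         # normal CDF
    phi = math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)  # normal PDF
    return (f_min - mu) * Phi + sigma * phi

# A point predicted below the incumbent has larger EI than one above it.
print(expected_improvement(0.5, 0.2, 1.0))  # promising candidate
print(expected_improvement(1.5, 0.2, 1.0))  # unpromising candidate
```

EI is always non-negative, which is one reason it behaves poorly with noisy evaluations: a noisy "best observed" value f_min can be spuriously low, deflating EI everywhere.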
Methodology for Designing and Developing a New Ultra-Wideband Antenna Based on Bio-Inspired Optimization Techniques
Ly, Canh; Tran, Nghia; Kilic, Ozlem
2017-11-01
Army Research Laboratory report. Approved for public release; distribution is...
The optimal injection technique for the osteoarthritic ankle: A randomized, cross-over trial
Witteveen, Angelique G. H.; Kok, Aimee; Sierevelt, Inger N.; Kerkhoffs, Gino M. M. J.; van Dijk, C. Niek
2013-01-01
Background: To optimize the injection technique for the osteoarthritic ankle in order to enhance the effect of intra-articular injections and minimize adverse events. Methods: Randomized cross-over trial comparing two injection techniques in patients with symptomatic ankle osteoarthritis. Patients
Virtual Power Plant and Microgrids controller for Energy Management based on optimization techniques
Maher G. M. Abdolrasol
2017-06-01
This paper discusses a virtual power plant (VPP) and Microgrid controller for an energy management system (EMS) based on two optimization techniques, namely the Backtracking Search Algorithm (BSA) and Particle Swarm Optimization (PSO). The research proposes the use of multiple Microgrids in the distribution network to aggregate the power from distributed generation, form it into a single Microgrid, and let these Microgrids deal directly with a central organizer called the virtual power plant. VPP duties are price forecasting, demand forecasting, weather forecasting, production forecasting, load shedding, intelligent decision making, and the aggregation and optimization of data. This system has been tested and simulated using Matlab Simulink. The results show that both optimization methods yield significant improvements, but BSA searches for better parameters than PSO and therefore achieves greater power savings, as shown in the results and discussion.
Zarzalejo, L.F.; Ramirez, L.; Polo, J. [DER-CIEMAT, Madrid (Spain). Renewable Energy Dept.
2005-07-01
Artificial intelligence techniques, such as fuzzy logic and neural networks, have been used for estimating hourly global radiation from satellite images. The models have been fitted to measured global irradiance data from 15 Spanish terrestrial stations. Both satellite imaging data and terrestrial information from the years 1994, 1995 and 1996 were used. The results of these artificial intelligence models were compared to a multivariate regression based upon Heliosat I model. A general better behaviour was observed for the artificial intelligence models. (author)
Ding, Yi; Goel, Lalit; Wang, Peng
2012-01-01
Electric power generating systems are typical examples of multi-state systems (MSS). Sufficient reserve is critically important for maintaining generating system reliabilities. The reliability of a system can be increased by increasing the reserve capacity, noting that at the same time the reserve cost of the system will also increase. The reserve structure of a MSS should be determined based on striking a balance between the required reliability and the reserve cost. The objective of reserve management for a MSS is to schedule the reserve at the minimum system reserve cost while maintaining the required level of supply reliability to its customers. In previous research, the Genetic Algorithm (GA) has been used to solve most reliability optimization problems. However, the GA is not very computationally efficient in some cases. In this chapter a new heuristic optimization technique—the particle swarm...
Mudasir Ahmed Memon
2017-01-01
In this paper, a PSO (Particle Swarm Optimization) based technique is proposed to derive optimized switching angles that minimize the THD (Total Harmonic Distortion) and reduce the effect of selected low-order non-triplen harmonics at the output of the multilevel inverter. Conventional harmonic elimination techniques have plenty of limitations, and other heuristic techniques also fail to provide satisfactory results. In this paper, a single-phase symmetrical cascaded H-bridge 11-level multilevel inverter is considered, and the proposed algorithm is utilized to obtain optimized switching angles that reduce the effect of the 5th, 7th, 11th and 13th non-triplen harmonics in the output voltage of the multilevel inverter. Simulation results indicate that this technique outperforms other methods in terms of minimizing THD and provides a high-quality output voltage waveform.
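The quantity being minimized can be sketched directly: for a quarter-wave-symmetric multilevel staircase with switching angles θ_k, odd harmonic h has magnitude proportional to (1/h)·Σ cos(h·θ_k), and the THD over the non-triplen odd harmonics is the objective a PSO would minimize over the angles. The angles below are illustrative, not the paper's optimized values:

```python
import math

def harmonic(h, angles):
    # Per-unit magnitude of odd harmonic h for a symmetric multilevel
    # staircase with one switching angle per level: |sum cos(h*theta)| / h.
    return abs(sum(math.cos(h * t) for t in angles)) / h

# Five angles (radians) for an 11-level cascaded H-bridge; illustrative only.
angles = [0.1, 0.3, 0.55, 0.85, 1.2]
v1 = harmonic(1, angles)                 # fundamental magnitude
thd = math.sqrt(sum(harmonic(h, angles) ** 2
                    for h in range(3, 50, 2) if h % 3 != 0)) / v1
print(thd)  # distortion from the non-triplen odd harmonics
```

A PSO would treat `thd` (optionally with extra penalty terms targeting the 5th, 7th, 11th and 13th harmonics) as the fitness and search the feasible region 0 < θ_1 < … < θ_5 < π/2.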
Hyo Seon Park
2014-01-01
Since genetic algorithm-based optimization methods are computationally expensive for practical use in the field of structural optimization, a resizing technique-based hybrid genetic algorithm for the drift design of multistory steel frame buildings is proposed to increase the convergence speed of genetic algorithms. To reduce the number of structural analyses required for convergence, a genetic algorithm is combined with a resizing technique, an efficient optimization technique that controls the drift of buildings without repetitive structural analysis. The resizing technique-based hybrid genetic algorithm proposed in this paper is applied to the minimum weight design of three steel frame buildings. To evaluate the performance of the algorithm, optimum weights, computational times, and generation numbers from the proposed algorithm are compared with those from a genetic algorithm. Based on the comparisons, it is concluded that the hybrid genetic algorithm shows clear improvements in convergence properties.
Li Chen; Liao Huailin; Huang Ru; Wang Yangyuan
2008-01-01
In this paper, a complementary metal-oxide semiconductor (CMOS)-compatible silicon substrate optimization technique is proposed to achieve effective isolation. The selective growth of porous silicon is used to effectively suppress substrate crosstalk. The isolation structures are fabricated in a standard CMOS process, and this post-CMOS substrate optimization technique is then carried out to greatly improve crosstalk isolation performance. Three-dimensional electromagnetic simulation is implemented to verify the clear effect of our substrate optimization technique. The morphologies and growth conditions of the fabricated porous silicon have been investigated in detail. Furthermore, a thick selectively grown porous silicon (SGPS) trench for crosstalk isolation has been formed, and an improvement of about 20 dB in substrate isolation is achieved. These results demonstrate that our post-CMOS SGPS technique is very promising for RF IC applications. (cross-disciplinary physics and related areas of science and technology)
Calibration of an agricultural-hydrological model (RZWQM2) using surrogate global optimization
Xi, Maolong; Lu, Dan; Gui, Dongwei; Qi, Zhiming; Zhang, Guannan
2017-01-01
Robust calibration of an agricultural-hydrological model is critical for simulating crop yield and water quality and making reasonable agricultural management. However, calibration of the agricultural-hydrological system models is challenging because of model complexity, the existence of strong parameter correlation, and significant computational requirements. Therefore, only a limited number of simulations can be allowed in any attempt to find a near-optimal solution within an affordable time, which greatly restricts the successful application of the model. The goal of this study is to locate the optimal solution of the Root Zone Water Quality Model (RZWQM2) given a limited simulation time, so as to improve the model simulation and help make rational and effective agricultural-hydrological decisions. To this end, we propose a computationally efficient global optimization procedure using sparse-grid based surrogates. We first used advanced sparse grid (SG) interpolation to construct a surrogate system of the actual RZWQM2, and then we calibrate the surrogate model using the global optimization algorithm, Quantum-behaved Particle Swarm Optimization (QPSO). As the surrogate model is a polynomial with fast evaluation, it can be efficiently evaluated with a sufficiently large number of times during the optimization, which facilitates the global search. We calibrate seven model parameters against five years of yield, drain flow, and NO3-N loss data from a subsurface-drained corn-soybean field in Iowa. Results indicate that an accurate surrogate model can be created for the RZWQM2 with a relatively small number of SG points (i.e., RZWQM2 runs). Compared to the conventional QPSO algorithm, our surrogate-based optimization method can achieve a smaller objective function value and better calibration performance using a fewer number of expensive RZWQM2 executions, which greatly improves computational efficiency.
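The surrogate idea can be reduced to a one-dimensional toy: run the expensive model a few times, fit a cheap interpolant, and optimize the interpolant instead. Here a three-point parabola stands in for the sparse-grid surrogate and its analytic vertex stands in for the QPSO search; the test function is invented, not RZWQM2:

```python
import math

def expensive_model(x):
    # Stand-in for an expensive simulator run (RZWQM2 in the paper).
    return (x - 1.3) ** 2 + 0.1 * math.sin(5.0 * x)

# Coarse sweep: only 9 "expensive" runs over the parameter range.
xs = [-2.0 + 0.75 * i for i in range(9)]
ys = [expensive_model(x) for x in xs]
k = ys.index(min(ys))                        # best coarse sample (interior here)
x0, x1, x2 = xs[k - 1], xs[k], xs[k + 1]
y0, y1, y2 = ys[k - 1], ys[k], ys[k + 1]

# Cheap surrogate: the parabola through the three best points; its vertex
# (standard 3-point formula) is the refined parameter estimate.
num = (x1 - x0) ** 2 * (y1 - y2) - (x1 - x2) ** 2 * (y1 - y0)
den = (x1 - x0) * (y1 - y2) - (x1 - x2) * (y1 - y0)
x_best = x1 - 0.5 * num / den
print(x_best)  # refined estimate near the sweep's best sample
```

The payoff is the same as in the paper: the surrogate is evaluated (here, minimized analytically) essentially for free, so almost the entire budget of expensive runs goes into informing the fit rather than into the search itself.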
An adaptive dual-optimal path-planning technique for unmanned air vehicles
Whitfield Clifford A.
2016-01-01
A multi-objective technique for unmanned air vehicle path-planning generation through task allocation has been developed. The dual-optimal path-planning technique generates real-time adaptive flight paths based on available flight windows and environmentally influenced objectives. The environmentally influenced flight condition determines the aircraft's optimal orientation within a downstream virtual window of possible vehicle destinations that is based on the vehicle's kinematics. The intermediate results are then pursued by a dynamic optimization technique to determine the flight path. This path-planning technique is a multi-objective optimization procedure consisting of two goals that does not require additional information to combine the conflicting objectives into a single objective. The technique was applied to solar-regenerative high-altitude long-endurance flight, which can benefit significantly from an adaptive real-time path-planning technique. The objectives were to determine the minimum-power-required flight paths while maintaining maximum solar power for continual surveillance over an area of interest (AOI). The simulated path generation technique prolonged the flight duration over a sustained-turn loiter flight path by approximately 2 months for a year of flight. The potential for prolonged solar-powered flight was consistent for all latitude locations, including 2 months of available flight at 60° latitude, where sustained-turn flight was no longer possible.
Statistical distributions of optimal global alignment scores of random protein sequences
Tang Jiaowei
2005-10-01
Background: The inference of homology from statistically significant sequence similarity is a central issue in sequence alignments. So far the statistical distribution function underlying the optimal global alignments has not been completely determined. Results: In this study, random and real but unrelated sequences prepared in six different ways were selected as reference datasets to obtain their respective statistical distributions of global alignment scores. All alignments were carried out with the Needleman-Wunsch algorithm and optimal scores were fitted to the Gumbel, normal and gamma distributions respectively. The three-parameter gamma distribution performs the best as the theoretical distribution function of global alignment scores, as it agrees perfectly well with the distribution of alignment scores. The normal distribution also agrees well with the score distribution frequencies when the shape parameter of the gamma distribution is sufficiently large, for this is the scenario when the normal distribution can be viewed as an approximation of the gamma distribution. Conclusion: We have shown that the optimal global alignment scores of random protein sequences fit the three-parameter gamma distribution function. This would be useful for the inference of homology between sequences whose relationship is unknown, through the evaluation of gamma distribution significance between sequences.
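The Needleman-Wunsch scores whose distribution is analyzed above come from a standard dynamic program; a minimal version with a linear gap penalty (illustrative match/mismatch/gap values, not the paper's protein scoring scheme) is:

```python
def needleman_wunsch(a, b, match=1, mismatch=-1, gap=-2):
    # Optimal global alignment score via the Needleman-Wunsch recurrence.
    n, m = len(a), len(b)
    F = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        F[i][0] = i * gap                    # align prefix of a to gaps
    for j in range(1, m + 1):
        F[0][j] = j * gap                    # align prefix of b to gaps
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            s = match if a[i - 1] == b[j - 1] else mismatch
            F[i][j] = max(F[i - 1][j - 1] + s,   # (mis)match
                          F[i - 1][j] + gap,     # gap in b
                          F[i][j - 1] + gap)     # gap in a
    return F[n][m]

print(needleman_wunsch("GATTACA", "GATTACA"))  # 7: identical sequences
print(needleman_wunsch("GATTACA", "GCATGCU"))
```

Scoring many random sequence pairs with this function and histogramming the returned values is exactly how the empirical score distributions fitted to the Gumbel, normal and gamma families would be generated.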
Weitian Lin
2014-01-01
The particle swarm optimization algorithm (PSOA) is a powerful optimization tool. However, it has a tendency to get stuck in near-optimal solutions, especially for medium and large size problems, and it is difficult to improve solution accuracy by fine-tuning parameters. To address this insufficiency, this paper studies the combined local and global search particle swarm algorithm (LGSCPSOA), analyzes its convergence, and obtains its convergence conditions. The algorithm is also tested on a set of 8 benchmark continuous functions, and its optimization results are compared with those of the original particle swarm algorithm (OPSOA). Experimental results indicate that the LGSCPSOA improves the search performance significantly, especially on the medium and large size benchmark functions.
Rattá, G.A., E-mail: giuseppe.ratta@ciemat.es [Laboratorio Nacional de Fusión, CIEMAT, Madrid (Spain); Vega, J. [Laboratorio Nacional de Fusión, CIEMAT, Madrid (Spain); Murari, A. [Consorzio RFX, Associazione EURATOM/ENEA per la Fusione, Padua (Italy); Dormido-Canto, S. [Dpto. de Informática y Automática, Universidad Nacional de Educación a Distancia, Madrid (Spain); Moreno, R. [Laboratorio Nacional de Fusión, CIEMAT, Madrid (Spain)
2016-11-15
Highlights: • A global optimization method based on genetic algorithms was developed. • It allowed improving the prediction of disruptions using the APODIS architecture. • It also provides the potential opportunity to develop a spectrum of future predictors using different training datasets. • The future analysis of how their structures reassemble and evolve in each test may help to improve the development of disruption predictors for ITER. - Abstract: Since 2010, the APODIS architecture has proven its accuracy in predicting disruptions in the JET tokamak. Nevertheless, it has shown room for improvement, a fact made indisputable by the enhanced performance achieved in later upgrades. In this article, a complete optimization driven by Genetic Algorithms (GA) is applied to it, aiming at considering all possible combinations of signals, signal features, the number of models, their characteristics and internal parameters. This global optimization targets the creation of the best possible system with a reduced amount of required training data. The results leave no doubt about the reliability of the global optimization method, which outperforms previous versions: 91.77% of predictions (89.24% with an anticipation higher than 10 ms) with 3.55% false alarms. Beyond its effectiveness, it also provides the potential opportunity to develop a spectrum of future predictors using different training datasets.
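The genetic-algorithm machinery driving such an optimization can be sketched in miniature. The toy below evolves bit strings on the OneMax problem with tournament selection, one-point crossover and bit-flip mutation; in the paper the chromosome instead encodes signal choices, features and model parameters:

```python
import random

def ga_sketch(fitness, n_bits=16, pop_size=30, gens=60, seed=3):
    # Minimal GA over bit strings: tournament selection, one-point
    # crossover, per-bit mutation. A toy stand-in for the paper's GA.
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    for _ in range(gens):
        def pick():                                   # tournament of 2
            a, b = rng.sample(pop, 2)
            return a if fitness(a) >= fitness(b) else b
        nxt = []
        while len(nxt) < pop_size:
            p, q = pick(), pick()
            cut = rng.randrange(1, n_bits)            # one-point crossover
            child = p[:cut] + q[cut:]
            for i in range(n_bits):                   # bit-flip mutation
                if rng.random() < 1.0 / n_bits:
                    child[i] ^= 1
            nxt.append(child)
        pop = nxt
    return max(pop, key=fitness)

# OneMax: fitness = number of ones; the optimum is the all-ones string.
best = ga_sketch(lambda bits: sum(bits))
print(sum(best))
```

In a predictor-design setting the fitness evaluation would be a full train-and-validate cycle, which is why reducing the required training data, as the abstract emphasizes, matters so much for GA-driven searches.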
Yang, Y.; Özgen, S.
2017-06-01
During the last few decades, CFD (Computational Fluid Dynamics) has developed greatly and has become a more reliable tool for the conceptual phase of aircraft design. This tool is generally combined with an optimization algorithm. In the optimization phase, the need for regenerating the computational mesh might become cumbersome, especially when the number of design parameters is high. For this reason, several mesh generation and deformation techniques have been developed in the past decades. One of the most widely used techniques is the Spring Analogy. There are numerous spring analogy related techniques reported in the literature: linear spring analogy, torsional spring analogy, semitorsional spring analogy, and ball vertex spring analogy. This paper gives the explanation of the linear spring analogy method and angle inclusion in the spring analogy method. In the latter case, two different solution methods are proposed. The best feasible method will later be used for two-dimensional (2D) Airfoil Design Optimization with the objective function being to minimize sectional drag for a required lift coefficient at different speeds. Design variables used in the optimization include camber and thickness distribution of the airfoil. SU2 CFD is chosen as the flow solver during the optimization procedure. The optimization is done by using the Phoenix ModelCenter Optimization Tool.
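The linear spring analogy is easy to demonstrate in one dimension: with equal spring stiffnesses, each interior node settles at the average of its neighbours, so displacing a boundary node and re-relaxing redistributes the mesh smoothly instead of regenerating it (real implementations use edge-length-dependent stiffness on 2D/3D connectivity):

```python
def relax_chain(x, n_sweeps=2000):
    # Jacobi-style relaxation of a 1D spring chain with fixed end nodes:
    # at equilibrium every interior node is the average of its neighbours.
    x = list(x)
    for _ in range(n_sweeps):
        x = ([x[0]]
             + [(x[i - 1] + x[i + 1]) / 2 for i in range(1, len(x) - 1)]
             + [x[-1]])
    return x

mesh = [0.0, 1.0, 2.0, 3.0, 4.0]   # uniform 1D mesh
mesh[-1] = 6.0                      # boundary node displaced by the new design
deformed = relax_chain(mesh)
print(deformed)  # interior nodes redistribute uniformly between 0 and 6
```

Because the stiffnesses are equal, the equilibrium is the linear interpolation between the fixed ends; edge-length-dependent stiffness, as used in practice, instead concentrates resistance in short edges to prevent cell inversion near the moving boundary.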
K.D. Mohapatra
2016-11-01
The objective of the present work is to use a suitable method to optimize process parameters such as pulse on time (TON), pulse off time (TOFF), wire feed rate (WF), wire tension (WT) and servo voltage (SV) to attain the maximum value of MRR and the minimum value of surface roughness during the production of a fine-pitch spur gear made of copper. The spur gear has a pressure angle of 20° and a pitch circle diameter of 70 mm. The wire has a diameter of 0.25 mm and is made of brass. Experiments were conducted according to Taguchi's orthogonal array concept with five factors and two levels. The Taguchi quality loss design technique is used to optimize the output responses obtained from the experiments. Another optimization technique, desirability combined with the grey Taguchi technique, has also been used to optimize the process parameters. Both optimized results are compared to find the best combination of MRR and surface roughness. A confirmation test was carried out to identify the significant improvement in machining performance in the case of Taguchi quality loss. Finally, it was concluded that desirability with the grey Taguchi technique produces a better result than the Taguchi quality loss technique for MRR, while Taguchi quality loss gives a better result for surface roughness. The quality of the wire after the cutting operation is presented in the scanning electron microscopy (SEM) figure.
Optimization of freeform surfaces using intelligent deformation techniques for LED applications
Isaac, Annie Shalom; Neumann, Cornelius
2018-04-01
For many years, optical designers have shown great interest in designing efficient optimization algorithms that bring significant improvement to an initial design. However, optimization is limited by the large number of parameters present in Non-Uniform Rational B-Spline (NURBS) surfaces. This limitation was overcome by an indirect technique known as optimization using freeform deformation (FFD). In this approach, the optical surface is placed inside a cubical grid. The vertices of this grid are modified, which deforms the underlying optical surface during the optimization. One of the challenges in this technique is the selection of appropriate vertices of the cubical grid, because these vertices share no relationship with the optical performance. When irrelevant vertices are selected, the computational complexity increases. Moreover, the surfaces created by them are not always feasible to manufacture, which is the same problem faced in any optimization technique that creates freeform surfaces. This research therefore addresses these two important issues and provides feasible design techniques to solve them. Finally, the proposed techniques are validated using two different illumination examples: a street-lighting lens and a stop lamp for automobiles.
Zou, Dexuan; Li, Steven; Li, Zongyan; Kong, Xiangyong
2017-01-01
Highlights: • A new global particle swarm optimization (NGPSO) is proposed. • NGPSO has strong convergence and desirable accuracy. • NGPSO is used to handle the economic emission dispatch with or without transmission losses. • The equality constraint can be satisfied by solving a quadratic equation. • The inequality constraints can be satisfied by using a penalty function method. - Abstract: A new global particle swarm optimization (NGPSO) algorithm is proposed to solve the economic emission dispatch (EED) problems in this paper. NGPSO differs from the traditional particle swarm optimization (PSO) algorithm in two aspects. First, NGPSO uses a new position updating equation which relies on the global best particle to guide the searching activities of all particles. Second, it uses randomization based on the uniform distribution to slightly disturb the flight trajectories of particles during the late evolutionary process. The two steps enable NGPSO to effectively execute a number of global searches, and thus they increase the chance of exploring promising solution space, and reduce the probability of getting trapped in local optima for all particles. On the other hand, the two objective functions of EED are normalized separately according to all candidate solutions, and then they are incorporated into one single objective function. The transformation steps are very helpful in eliminating the difference caused by the different dimensions of the two functions, and thus they strike a balance between the fuel cost and emission. In addition, a simple and common penalty function method is employed to facilitate the satisfaction of EED's constraints. Based on these improvements in PSO, objective functions and constraint handling, high-quality solutions can be obtained for EED problems. Five examples are chosen to test the performance of three improved PSOs on solving EED problems with or without transmission losses. Experimental results show that
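The global-best-only position update with a late-stage uniform disturbance described above can be sketched as follows. This is a minimal illustration, not the authors' code: the inertia/attraction coefficients (0.7, 1.5), the disturbance width, and the budget are assumed values.

```python
import numpy as np

def ngpso_minimize(f, bounds, n_particles=30, iters=200, seed=0):
    """Minimal sketch of the NGPSO idea: positions are guided only by the
    global best particle, with a small uniform disturbance added during
    the late evolutionary process."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds, dtype=float).T
    x = rng.uniform(lo, hi, size=(n_particles, lo.size))
    v = np.zeros_like(x)
    costs = np.array([f(p) for p in x])
    gbest, gcost = x[costs.argmin()].copy(), costs.min()
    for t in range(iters):
        r = rng.random(x.shape)
        v = 0.7 * v + 1.5 * r * (gbest - x)   # global-best-guided update
        x = np.clip(x + v, lo, hi)
        if t > iters // 2:                    # slight late-stage disturbance
            x = np.clip(x + rng.uniform(-0.01, 0.01, x.shape) * (hi - lo), lo, hi)
        costs = np.array([f(p) for p in x])
        if costs.min() < gcost:
            gbest, gcost = x[costs.argmin()].copy(), costs.min()
    return gbest, gcost
```

On a simple sphere function this settles close to the origin; for EED the paper additionally normalizes the two objectives into one and enforces the constraints with a penalty function.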
SGO: A fast engine for ab initio atomic structure global optimization by differential evolution
Chen, Zhanghui; Jia, Weile; Jiang, Xiangwei; Li, Shu-Shen; Wang, Lin-Wang
2017-10-01
As high-throughput calculations and materials genome approaches become more and more popular in materials science, the search for optimal ways to predict the atomic global minimum structure is a high research priority. This paper presents a fast method for global search of atomic structures at the ab initio level. The structures global optimization (SGO) engine consists of a high-efficiency differential evolution algorithm, accelerated local relaxation methods and a plane-wave density functional theory code running on GPU machines. The purpose is to show what can be achieved by combining superior algorithms at the different levels of the searching scheme. SGO can search the global-minimum configurations of crystals, two-dimensional materials and quantum clusters without prior symmetry restriction in a relatively short time (half an hour to several hours for systems with fewer than 25 atoms), thus making such a task a routine calculation. Comparisons with other existing methods such as minima hopping and genetic algorithms are provided. One motivation of our study is to investigate the properties of magnetic systems in different phases. The SGO engine is capable of surveying the local minima surrounding the global minimum, which provides information on the overall energy landscape of a given system. Using this capability we have found several new configurations for test systems, explored their energy landscapes, and demonstrated that the magnetic moment of metal clusters fluctuates strongly between different local minima.
Efficient algorithms for multidimensional global optimization in genetic mapping of complex traits
Kajsa Ljungberg
2010-10-01
Kajsa Ljungberg (1), Kateryna Mishchenko (2), Sverker Holmgren (1). (1) Division of Scientific Computing, Department of Information Technology, Uppsala University, Uppsala, Sweden; (2) Department of Mathematics and Physics, Mälardalen University College, Västerås, Sweden. Abstract: We present a two-phase strategy for optimizing a multidimensional, nonconvex function arising during genetic mapping of quantitative traits. Such traits are believed to be affected by multiple so-called QTL, and searching for d QTL results in a d-dimensional optimization problem with a large number of local optima. We combine the global algorithm DIRECT with a number of local optimization methods that accelerate the final convergence, and adapt the algorithms to problem-specific features. We also improve the evaluation of the QTL mapping objective function to enable exploitation of the smoothness properties of the optimization landscape. Our best two-phase method is demonstrated to be accurate in at least six dimensions and up to ten times faster than currently used QTL mapping algorithms. Keywords: global optimization, QTL mapping, DIRECT
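The two-phase pattern (a global box search followed by a fast local method started from the global candidate) can be sketched as below. SciPy's `differential_evolution` stands in for DIRECT purely to illustrate the pattern, and the iteration budgets and test function are arbitrary choices, not the paper's setup.

```python
import numpy as np
from scipy.optimize import differential_evolution, minimize

def two_phase_minimize(f, bounds, seed=0):
    """Phase 1: global search over the box (stand-in for DIRECT).
    Phase 2: local refinement from the global candidate, which
    accelerates the final convergence."""
    glob = differential_evolution(f, bounds, seed=seed, maxiter=100,
                                  polish=False)
    loc = minimize(f, glob.x, method="Nelder-Mead",
                   options={"xatol": 1e-8, "fatol": 1e-10})
    return loc.x, loc.fun

def rastrigin(p):
    """Multimodal 2-D test function with many local optima;
    global minimum 0 at the origin."""
    p = np.asarray(p)
    return float(10 * p.size + (p ** 2 - 10 * np.cos(2 * np.pi * p)).sum())
```

The global phase only needs to land in the right basin; the cheap local phase then does the high-precision work, which is the division of labor the abstract describes.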
Rasmussen, Marie-Louise Højlund; Stolpe, Mathias
2008-01-01
The subject of this article is solving discrete truss topology optimization problems with local stress and displacement constraints to global optimum. We consider a formulation based on the Simultaneous ANalysis and Design (SAND) approach. This intrinsically non-convex problem is reformulated … the physics, and the cuts (Combinatorial Benders' and projected Chvátal–Gomory) come from an understanding of the particular mathematical structure of the reformulation. The impact of a stronger representation is investigated on several truss topology optimization problems in two and three dimensions.
Jeevanandham Arumugam
2009-01-01
In this paper a classical lead-lag power system stabilizer is used for demonstration. The stabilizer parameters are selected so as to damp the rotor oscillations. The problem of selecting the stabilizer parameters is converted to a simple optimization problem with an eigenvalue-based objective function, and it is proposed to employ simulated annealing and particle swarm optimization for solving it. The objective function allows the selection of the stabilizer parameters to optimally place the closed-loop eigenvalues in the left half of the complex s-plane. A single machine connected to an infinite bus and a 10-machine 39-bus system are considered for this study. The effectiveness of the stabilizer tuned using the best technique in enhancing power system stability is confirmed through eigenvalue analysis and simulation results, and a suitable heuristic technique is selected for the best performance of the system.
Studies Regarding Design and Optimization of Mechanisms Using Modern Techniques of CAD and CAE
Marius Tufoi
2010-01-01
The paper presents applications of modern techniques of CAD (Computer Aided Design) and CAE (Computer Aided Engineering) to design and optimize the mechanisms used in mechanical engineering. These techniques are exemplified by designing and optimizing parts of a drawing installation for horizontal continuous casting of metals. Applying these design methods and using the finite element method in simulations of the designed mechanisms yields a number of advantages over traditional methods of drawing and design: speed in drawing, design and optimization of parts and mechanisms; the option of kinematic, kinetostatic and dynamic analysis through simulation, without requiring physical realization of the part or mechanism; determination by the finite element method of stresses, elongations, displacements and the safety factor; and the possibility of optimizing these quantities to ensure the mechanical strength of each piece separately. These studies were carried out using the SolidWorks 2009 software suite.
Bostroem, P.-A.; Svensson, M.; Lilja, B.
1988-01-01
To evaluate left ventricular function in coronary artery disease, radionuclide measurements of global and regional ejection fraction (EF), regional wall motion and phase analyses of left ventricular contraction were performed by the equilibrium technique, using 99mTc. One group of patients with angina pectoris and one group with myocardial infarction were compared with a control group. All the above-mentioned parameters significantly separated the infarction group from the reference group both at rest and during work, while the group of patients with angina pectoris showed disturbances mainly during work, such as impaired ability to increase global and regional ejection fraction and regional wall motion. Adding regional analysis and phase analysis to the global EF determination increases the possibility of studying the left ventricular function. However, this addition has a limited value in detecting impaired left ventricular function compared to the determination of just global EF in patients with angina pectoris and in patients with myocardial infarction. (author)
External costs in global energy optimization models: a tool in favour of sustainability
Cabal Cuesta, H.
2007-01-01
The aim of this work is to analyse the effects of internalizing GHG external costs in energy systems, which may provide a useful tool to support decision makers in reaching energy system sustainability. External cost internalization has been carried out using two methods. First, CO2 externalities of different power generation technologies were internalized to evaluate their effects on the economic competitiveness of these present and future technologies. The other method consisted of analysing and optimizing the global energy system, from an economic and environmental point of view, using the global energy optimization model generator TIMES, with a time horizon of 50 years. Finally, some scenarios regarding environmental and economic strategic measures have been analysed. (Author)
Optimization Techniques for Design Problems in Selected Areas in WSNs: A Tutorial.
Ibrahim, Ahmed; Alfa, Attahiru
2017-08-01
This paper is intended to serve as an overview of, and mostly a tutorial to illustrate, the optimization techniques used in several different key design aspects that have been considered in the literature of wireless sensor networks (WSNs). It targets researchers who are new to the mathematical optimization tool and wish to apply it to WSN design problems. We hence divide the paper into two main parts. One part is dedicated to introducing optimization theory and giving an overview of some of its techniques that could be helpful in design problems in WSNs. In the second part, we present a number of design aspects that we came across in the WSN literature in which mathematical optimization methods have been used in the design. For each design aspect, a key paper is selected, and for each we explain the formulation techniques and the solution methods implemented. We also provide in-depth analyses and assessments of the problem formulations, the corresponding solution techniques and experimental procedures in some of these papers. The analyses and assessments, which are provided in the form of comments, are meant to reflect the points that we believe should be taken into account when using optimization as a tool for design purposes.
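As a toy illustration of the kind of formulation such a tutorial walks through, the following linear program chooses sensor duty cycles that satisfy coverage constraints at minimum energy. The scenario and all numbers are invented for illustration; they are not taken from the paper.

```python
from scipy.optimize import linprog

# Toy WSN duty-cycle LP: choose the fraction x_i of time each of three
# sensors stays awake so that every region receives aggregate coverage
# of at least 1, while minimizing total energy (proportional to awake time).
cost = [1.0, 1.2, 0.8]            # per-sensor energy rates (assumed)
A_ub = [[-1, -1, 0],              # region 1 is covered by sensors 1 and 2
        [0, -1, -1]]              # region 2 is covered by sensors 2 and 3
b_ub = [-1.0, -1.0]               # -(x_i + x_j) <= -1  <=>  coverage >= 1
res = linprog(cost, A_ub=A_ub, b_ub=b_ub, bounds=[(0, 1)] * 3)
```

Here keeping only the cheap shared sensor awake full-time (x = (0, 1, 0), total cost 1.2) beats waking both edge sensors, which is the kind of non-obvious trade-off the LP resolves automatically.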
New Techniques for Optimal Treatment Planning for LINAC-based Stereotactic Radiosurgery
Suh, Tae Suk
1992-01-01
Since LINAC-based stereotactic radiosurgery uses multiple noncoplanar arcs, three-dimensional dose evaluation and many beam parameters, a lengthy computation time is required to optimize even the simplest case by trial and error. The basic approach presented in this paper is to show promising methods using an experimental optimization and an analytic optimization. The purpose of this paper is not to describe the detailed methods, but to briefly introduce research being done currently or in the near future; a more detailed description will appear in forthcoming papers. Experimental optimization is based on two approaches. One is shaping the target volumes through the use of multiple isocenters determined from dose experience and testing. The other is conformal therapy using a beam's-eye-view technique and field shaping. The analytic approach is to adapt computer-aided design optimization to finding optimal irradiation parameters automatically
Connection between optimal control theory and adiabatic-passage techniques in quantum systems
Assémat, E.; Sugny, D.
2012-08-01
This work explores the relationship between optimal control theory and adiabatic passage techniques in quantum systems. The study is based on a geometric analysis of the Hamiltonian dynamics constructed from Pontryagin's maximum principle. In a three-level quantum system, we show that the stimulated Raman adiabatic passage technique can be associated with a peculiar Hamiltonian singularity. One deduces that the adiabatic pulse is a solution of the optimal control problem only for a specific cost functional. This analysis is extended to the case of a four-level quantum system.
Thenmozhi Srinivasan
2015-01-01
Clustering techniques for high-dimensional data are emerging in response to the challenges of noisy, poor-quality data. This paper develops a method to cluster data using high-dimensional similarity-based PCM (SPCM) with ant colony optimization intelligence, which is effective in clustering non-spatial data without requiring knowledge of the cluster number from the user. The PCM becomes similarity-based by combining it with the mountain method. Although this clustering is efficient, it is further checked for optimization using an ant colony algorithm with swarm intelligence. Thus a scalable clustering technique is obtained, and the evaluation results are checked with synthetic datasets.
Newsom, J. R.; Mukhopadhyay, V.
1983-01-01
A method for designing robust feedback controllers for multiloop systems is presented. Robustness is characterized in terms of the minimum singular value of the system return difference matrix at the plant input. Analytical gradients of the singular values with respect to design variables in the controller are derived. A cumulative measure of the singular values and their gradients with respect to the design variables is used with a numerical optimization technique to increase the system's robustness. Both unconstrained and constrained optimization techniques are evaluated. Numerical results are presented for a two output drone flight control system.
Global optimization of proteins using a dynamical lattice model: Ground states and energy landscapes
Dressel, F.; Kobe, S.
2004-01-01
A simple approach is proposed to investigate protein structure. Using a low-complexity model, a simple pairwise interaction and the concept of global optimization, we are able to calculate ground states of proteins which are in agreement with experimental data. All possible model structures of small proteins are available below a certain energy threshold. The exact low-energy landscape for the trp-cage protein (1L2Y) is presented, showing the connectivity of all states and the energy barriers.
Achtziger, Wolfgang; Stolpe, Mathias
2009-01-01
We use the theory developed in Part I to design a convergent nonlinear branch-and-bound method tailored to solve large-scale instances of the original discrete problem. The problem formulation and the needed theoretical results from Part I are repeated such that this paper is self-contained. We focus … the largest discrete topology design problems solved by means of global optimization.
The global percutaneous shuttling technique tip for arthroscopic rotator cuff repair
Bryan G. Vopat
2014-05-01
Most arthroscopic rotator cuff repairs utilize suture passing devices placed through arthroscopic cannulas. These devices are limited by the size of the passing device where the suture is passed through the tendon. An alternative technique has been used in the senior author's practice for the past ten years, in which sutures are placed through the rotator cuff tendon using percutaneous passing devices. This technique, dubbed the global percutaneous shuttling technique of rotator cuff repair, affords the placement of sutures from nearly any angle and location in the shoulder, and has the potential advantage of larger suture bites through the tendon edge. These advantages may increase the area of tendon available to compress to the rotator cuff footprint and improve tendon healing and outcomes. The aim of this study is to describe the global percutaneous shuttling (GPS) technique and report our results using this method. The GPS technique can be used for any full-thickness rotator cuff tear and is particularly useful for massive cuff tears with poor tissue quality. We recently followed up 22 patients with an average follow-up of 32 months to validate its usefulness. American Shoulder and Elbow Surgeons scores improved significantly from 37 preoperatively to 90 postoperatively (P<0.0001). These data support the use of the GPS technique for arthroscopic rotator cuff repair. Further biomechanical studies are currently being performed to assess the improvements in tendon footprint area with this technique.
Ringed Seal Search for Global Optimization via a Sensitive Search Model.
Younes Saadi
The efficiency of a metaheuristic algorithm for global optimization is based on its ability to search for and find the global optimum. However, a good search often requires a balance between exploration and exploitation of the search space. In this paper, a new metaheuristic algorithm called Ringed Seal Search (RSS) is introduced. It is inspired by the natural behavior of the seal pup. The algorithm mimics the seal pup's movement behavior and its ability to search for and choose the best lair to escape predators. The scenario starts once the seal mother gives birth to a new pup in a birthing lair that is constructed for this purpose. The seal pup strategy consists of searching for and selecting the best lair by performing a random walk to find a new lair. Affected by the sensitivity of seals to external noise emitted by predators, the random walk of the seal pup takes two different search states, a normal state and an urgent state. In the normal state, the pup performs an intensive search between closely adjacent lairs; this movement is modeled via a Brownian walk. In the urgent state, the pup leaves the proximity area and performs an extensive search to find a new lair among sparse targets; this movement is modeled via a Lévy walk. The switch between these two states is triggered by the random noise emitted by predators. The algorithm keeps switching between normal and urgent states until the global optimum is reached. Tests and validations were performed using fifteen benchmark test functions to compare the performance of RSS with other baseline algorithms. The results show that RSS is more efficient than the Genetic Algorithm, Particle Swarm Optimization and Cuckoo Search in terms of convergence rate to the global optimum. RSS shows an improvement in the balance between exploration (extensive) and exploitation (intensive) of the search space. The RSS can efficiently mimic seal pup behavior to find the best lair and provide a new algorithm to be
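The two walk states described above can be sketched as a single step function. This is an illustrative sketch, not the authors' code: the Lévy-like step uses Mantegna's algorithm with an assumed exponent beta = 1.5, and the step scale is arbitrary.

```python
import math
import numpy as np

def rss_step(x, state, rng, scale=0.1):
    """One move of the seal-pup random walk: Brownian in the normal
    state (intensive local search), heavy-tailed Levy-like in the
    urgent state (extensive exploration)."""
    if state == "normal":
        return x + rng.normal(0.0, scale, size=x.shape)
    beta = 1.5  # Levy exponent (assumed; Mantegna's algorithm)
    sigma = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
             / (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.normal(0.0, sigma, size=x.shape)
    v = rng.normal(0.0, 1.0, size=x.shape)
    return x + scale * u / np.abs(v) ** (1 / beta)  # occasional long jumps
```

A full RSS implementation would switch between the two states based on simulated predator noise and keep the best lair (candidate solution) found so far.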
Souza Lima, Carlos A. [Instituto de Engenharia Nuclear - Divisao de Reatores/PPGIEN, Rua Helio de Almeida 75, Cidade Universitaria - Ilha do Fundao, P.O. Box: 68550 - Zip Code: 21941-972, Rio de Janeiro (Brazil); Instituto Politecnico, Universidade do Estado do Rio de Janeiro, Pos-Graduacao em Modelagem Computacional, Rua Alberto Rangel - s/n, Vila Nova, Nova Friburgo, Zip Code: 28630-050, Nova Friburgo (Brazil); Lapa, Celso Marcelo F.; Pereira, Claudio Marcio do N.A. [Instituto de Engenharia Nuclear - Divisao de Reatores/PPGIEN, Rua Helio de Almeida 75, Cidade Universitaria - Ilha do Fundao, P.O. Box: 68550 - Zip Code: 21941-972, Rio de Janeiro (Brazil); Instituto Nacional de Ciencia e Tecnologia de Reatores Nucleares Inovadores (INCT) (Brazil); Cunha, Joao J. da [Eletronuclear Eletrobras Termonuclear - Gerencia de Analise de Seguranca Nuclear, Rua da Candelaria, 65, 7 andar. Centro, Zip Code: 20091-906, Rio de Janeiro (Brazil); Alvim, Antonio Carlos M. [Universidade Federal do Rio de Janeiro, COPPE/Nuclear, Cidade Universitaria - Ilha do Fundao s/n, P.O.Box 68509 - Zip Code: 21945-970, Rio de Janeiro (Brazil); Instituto Nacional de Ciencia e Tecnologia de Reatores Nucleares Inovadores (INCT) (Brazil)
2011-06-15
Research highlights: > Performance of PSO and GA techniques applied to similar system design. > This work uses ANGRA1 (two loop PWR) core as a prototype. > Results indicate that PSO technique is more adequate than GA to solve this kind of problem. - Abstract: This paper compares the performance of two optimization techniques, particle swarm optimization (PSO) and genetic algorithm (GA), applied to the design of a typical reduced-scale two-loop Pressurized Water Reactor (PWR) core, at full power in single-phase forced circulation flow. This comparison aims at analyzing the performance in reaching the global optimum, considering that both heuristics are based on population search methods, that is, methods whose population (candidate solution set) evolves from one generation to the next using a combination of deterministic and probabilistic rules. The simulated PWR, similar to the ANGRA 1 power plant, was used as a case example to compare the performance of PSO and GA. Results from simulations indicated that PSO is more adequate to solve this kind of problem.
Tuning of PID controller using optimization techniques for a MIMO process
Thulasi dharan, S.; Kavyarasan, K.; Bagyaveereswaran, V.
2017-11-01
In this paper, two processes are considered: the quadruple-tank process and the CSTR (Continuous Stirred Tank Reactor) process. These are widely used in many industrial applications across various domains; the CSTR, especially, in chemical plants. First, a mathematical model of each process is developed, followed by linearization, since these are MIMO processes. The controllers are the key part of driving the whole process to the desired operating point for a given application, so tuning of the controller plays a major role in the overall process. For tuning the parameters we use two optimization techniques, Particle Swarm Optimization and Genetic Algorithm, which are widely used in different applications; here we use them to obtain the best tuned values among many candidates. Finally, we compare the performance of each process under both techniques.
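The tuning loop above can be sketched minimally as follows. A toy first-order plant with a PI controller stands in for the quadruple-tank/CSTR models, the cost is the integral of squared error (ISE), and all numbers (plant time constant, gain bounds, PSO coefficients) are assumptions for illustration.

```python
import numpy as np

def ise(gains, tau=1.0, dt=0.01, horizon=5.0):
    """Integral of squared error for a PI loop on a first-order plant
    dy/dt = (u - y)/tau tracking a unit step."""
    kp, ki = gains
    y, integ, cost = 0.0, 0.0, 0.0
    for _ in range(int(horizon / dt)):
        e = 1.0 - y
        integ += e * dt
        u = kp * e + ki * integ          # PI control law
        y += dt * (u - y) / tau          # explicit Euler step of the plant
        cost += e * e * dt
    return cost

def pso_tune(n=20, iters=50, seed=1):
    """Standard PSO over (Kp, Ki) in [0, 10]^2 minimizing the ISE."""
    rng = np.random.default_rng(seed)
    lo, hi = np.zeros(2), np.full(2, 10.0)
    x = rng.uniform(lo, hi, (n, 2))
    v = np.zeros_like(x)
    pbest, pcost = x.copy(), np.array([ise(p) for p in x])
    for _ in range(iters):
        g = pbest[pcost.argmin()]
        r1, r2 = rng.random((n, 2)), rng.random((n, 2))
        v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        c = np.array([ise(p) for p in x])
        improved = c < pcost
        pbest[improved], pcost[improved] = x[improved], c[improved]
    return pbest[pcost.argmin()], pcost.min()
```

A GA variant would replace the velocity update with selection, crossover and mutation over the same (Kp, Ki) encoding, which is the comparison the paper carries out on the full MIMO models.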
A Novel Analytical Technique for Optimal Allocation of Capacitors in Radial Distribution Systems
Sarfaraz Nawaz
2017-07-01
In this paper, a novel analytical technique is proposed to determine the optimal size and location of shunt capacitor units in radial distribution systems. An objective function is formulated to reduce real power loss, improve the voltage profile and increase annual cost savings. A new constant, the Loss Sensitivity Constant (LSC), is proposed here. The value of LSC decides the location and size of candidate buses. The technique is demonstrated on an IEEE 33-bus system at different load levels and on the 130-bus distribution system of Jamawa Ramgarh village, Jaipur city. The obtained results are compared with the latest optimization techniques to show the effectiveness and robustness of the proposed technique.
Sequential Optimization of Global Sequence Alignments Relative to Different Cost Functions
Odat, Enas M.
2011-05-01
The purpose of this dissertation is to present a methodology to model the global sequence alignment problem as a directed acyclic graph, which helps to extract all possible optimal alignments. Moreover, a mechanism to sequentially optimize the sequence alignment problem relative to different cost functions is suggested. Sequence alignment is of central importance in computational biology, where it is used to find evolutionary relationships between biological sequences. Many algorithms have been developed to solve this problem. The most famous are Needleman-Wunsch and Smith-Waterman, which are based on dynamic programming. In dynamic programming, a problem is divided into a set of overlapping subproblems, the solution of each subproblem is found, and finally the solutions to these subproblems are combined into a final solution. In this thesis it is proved that for two sequences of length m and n over a fixed alphabet, the suggested optimization procedure requires O(mn) arithmetic operations per cost function on a single-processor machine. The algorithm has been implemented in C# and a number of experiments have been done to verify the proved statements. The results of these experiments show that the number of optimal alignments is reduced after each step of optimization. Furthermore, it has been verified that as the sequence length increases linearly, the number of optimal alignments increases exponentially, depending also on the cost function that is used. Finally, the number of executed operations increases polynomially as the sequence length increases linearly.
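The Needleman-Wunsch recurrence mentioned above fills an (m+1) x (n+1) score table in O(mn) time. A minimal score-only version, with assumed unit match/mismatch/gap costs, looks like this:

```python
def needleman_wunsch(a, b, match=1, mismatch=-1, gap=-1):
    """Global alignment score by dynamic programming (Needleman-Wunsch).
    dp[i][j] holds the best score aligning a[:i] with b[:j]."""
    m, n = len(a), len(b)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        dp[i][0] = i * gap               # align a[:i] against all gaps
    for j in range(1, n + 1):
        dp[0][j] = j * gap               # align b[:j] against all gaps
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            s = match if a[i - 1] == b[j - 1] else mismatch
            dp[i][j] = max(dp[i - 1][j - 1] + s,   # align a[i-1] with b[j-1]
                           dp[i - 1][j] + gap,     # gap in b
                           dp[i][j - 1] + gap)     # gap in a
    return dp[m][n]
```

Tracing back through the table recovers the alignments themselves; enumerating all co-optimal traceback paths yields exactly the DAG of optimal alignments that the dissertation then prunes with each successive cost function.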
Kumar, S.; Singh, A.; Dhar, A.
2017-08-01
The accurate estimation of photovoltaic parameters is fundamental to gaining insight into the physical processes occurring inside a photovoltaic device and thereby to optimizing its design, fabrication processes, and quality. A simulative approach to accurately determining the device parameters is crucial for cell array and module simulation when applied in practical on-field applications. In this work, we have developed a global particle swarm optimization (GPSO) approach to estimate the solar cell parameters, viz. ideality factor (η), short circuit current (Isc), open circuit voltage (Voc), shunt resistance (Rsh), and series resistance (Rs), with a wide search range of over ±100% for each model parameter. After validating the accuracy and global search power of the proposed approach with synthetic and noisy data, we applied the technique to extract the PV parameters of ZnO/PCDTBT based hybrid solar cells (HSCs) prepared under different annealing conditions. Further, we examine the variation of the extracted model parameters to unveil the physical processes occurring when different annealing temperatures are employed during device fabrication, and establish the role of improved charge transport in polymer films from independent FET measurements. The evolution of surface morphology, optical absorption, and chemical compositional behaviour of PCDTBT co-polymer films as a function of processing temperature has also been captured in the study and correlated with the findings from the PV parameters extracted using the GPSO approach.
Economic optimization of a global strategy to address the pandemic threat.
Pike, Jamison; Bogich, Tiffany; Elwood, Sarah; Finnoff, David C; Daszak, Peter
2014-12-30
Emerging pandemics threaten global health and economies and are increasing in frequency. Globally coordinated strategies to combat pandemics, similar to current strategies that address climate change, are largely adaptive, in that they attempt to reduce the impact of a pathogen after it has emerged. However, like climate change, mitigation strategies have been developed that include programs to reduce the underlying drivers of pandemics, particularly animal-to-human disease transmission. Here, we use real options economic modeling of current globally coordinated adaptation strategies for pandemic prevention. We show that they would be optimally implemented within 27 y to reduce the annual rise of emerging infectious disease events by 50% at an estimated one-time cost of approximately $343.7 billion. We then analyze World Bank data on multilateral "One Health" pandemic mitigation programs. We find that, because most pandemics have animal origins, mitigation is a more cost-effective policy than business-as-usual adaptation programs, saving between $344.0 billion and $360.3 billion over the next 100 y if implemented today. We conclude that globally coordinated pandemic prevention policies need to be enacted urgently to be optimally effective and that strategies to mitigate pandemics by reducing the impact of their underlying drivers are likely to be more effective than business as usual.
Göktürkler, G; Balkaya, Ç
2012-01-01
Three naturally inspired meta-heuristic algorithms—the genetic algorithm (GA), simulated annealing (SA) and particle swarm optimization (PSO)—were used to invert some of the self-potential (SP) anomalies caused by polarized bodies with simple geometries. Both synthetic and field data sets were considered. The tests with the synthetic data comprised solutions with both noise-free and noisy data; in the tests with the field data, SP anomalies observed over a copper belt (India), graphite deposits (Germany) and a metallic sulfide (Turkey) were inverted. The model parameters included the electric dipole moment, polarization angle, depth, shape factor and origin of the anomaly. The estimated parameters were compared with those from previous studies using various optimization algorithms, mainly least-squares approaches, on the same data sets. During the test studies the solutions by GA, PSO and SA were characterized as being consistent with each other; a good starting model was not a requirement to reach the global minimum. It can be concluded that the global optimization algorithms considered in this study were able to yield solutions compatible with those from widely used local optimization algorithms. (paper)
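As a minimal sketch of how one of these meta-heuristics searches a model space, the particle swarm update can be written in a few lines of Python. The "misfit" below is a made-up quadratic stand-in for an SP inversion residual; the model parameters, bounds and target values are illustrative only, not the actual forward model:

```python
import random

def pso(f, bounds, n_particles=20, iters=200, w=0.7, c1=1.5, c2=1.5, seed=1):
    """Minimal particle swarm optimizer; f maps a list of coords to a scalar misfit."""
    rng = random.Random(seed)
    dim = len(bounds)
    X = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    V = [[0.0] * dim for _ in range(n_particles)]
    P = [x[:] for x in X]                       # personal best positions
    pbest = [f(x) for x in X]
    g = min(range(n_particles), key=lambda i: pbest[i])
    G, gbest = P[g][:], pbest[g]                # global best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                V[i][d] = (w * V[i][d] + c1 * r1 * (P[i][d] - X[i][d])
                           + c2 * r2 * (G[d] - X[i][d]))
                # clamp each coordinate back into its bounds
                X[i][d] = min(max(X[i][d] + V[i][d], bounds[d][0]), bounds[d][1])
            fx = f(X[i])
            if fx < pbest[i]:
                P[i], pbest[i] = X[i][:], fx
                if fx < gbest:
                    G, gbest = X[i][:], fx
    return G, gbest

# Toy "misfit": recover depth = 10 and dipole moment = 100 from a quadratic surrogate.
target = [10.0, 100.0]
misfit = lambda m: (m[0] - target[0]) ** 2 + ((m[1] - target[1]) / 10.0) ** 2
model, err = pso(misfit, bounds=[(0.0, 50.0), (0.0, 500.0)])
```

As the abstract notes for the real algorithms, no good starting model is needed: the swarm is initialized uniformly at random within the bounds.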
Better Drumming Through Calibration: Techniques for Pre-Performance Robotic Percussion Optimization
Murphy, Jim; Kapur, Ajay; Carnegie, Dale
2012-01-01
A problem with many contemporary musical robotic percussion systems lies in the fact that solenoids fail to respond linearly to linear increases in input velocity. This nonlinearity forces performers to individually tailor their compositions to specific robotic drummers. To address this problem, we introduce a method of pre-performance calibration using metaheuristic search techniques. A variety of such techniques are introduced and evaluated and the results of the optimized solenoid-based p...
Global optimization methods for the aerodynamic shape design of transonic cascades
Mengistu, T.; Ghaly, W.
2003-01-01
Two global optimization algorithms, namely Genetic Algorithm (GA) and Simulated Annealing (SA), have been applied to the aerodynamic shape optimization of transonic cascades; the objective being the redesign of an existing turbomachine airfoil to improve its performance by minimizing the total pressure loss while satisfying a number of constraints. This is accomplished by modifying the blade camber line; keeping the same blade thickness distribution, mass flow rate and the same flow turning. The objective is calculated based on an Euler solver and the blade camber line is represented with non-uniform rational B-splines (NURBS). The SA and GA methods were first assessed for known test functions and their performance in optimizing the blade shape for minimum loss is then demonstrated on a transonic turbine cascade where it is shown to produce a significant reduction in total pressure loss by eliminating the passage shock. (author)
Gurcan, Metin N.; Sahiner, Berkman; Chan, Heang-Ping; Hadjiiski, Lubomir; Petrick, Nicholas
2001-01-01
Many computer-aided diagnosis (CAD) systems use neural networks (NNs) for either detection or classification of abnormalities. Currently, most NNs are 'optimized' by manual search in a very limited parameter space. In this work, we evaluated the use of automated optimization methods for selecting an optimal convolution neural network (CNN) architecture. Three automated methods, the steepest descent (SD), the simulated annealing (SA), and the genetic algorithm (GA), were compared. We used as an example the CNN that classifies true and false microcalcifications detected on digitized mammograms by a prescreening algorithm. Four parameters of the CNN architecture were considered for optimization, the numbers of node groups and the filter kernel sizes in the first and second hidden layers, resulting in a search space of 432 possible architectures. The area Az under the receiver operating characteristic (ROC) curve was used to design a cost function. The SA experiments were conducted with four different annealing schedules. Three different parent selection methods were compared for the GA experiments. An available data set was split into two groups with approximately equal number of samples. By using the two groups alternately for training and testing, two different cost surfaces were evaluated. For the first cost surface, the SD method was trapped in a local minimum 91% (392/432) of the time. The SA using the Boltzman schedule selected the best architecture after evaluating, on average, 167 architectures. The GA achieved its best performance with linearly scaled roulette-wheel parent selection; however, it evaluated 391 different architectures, on average, to find the best one. The second cost surface contained no local minimum. For this surface, a simple SD algorithm could quickly find the global minimum, but the SA with the very fast reannealing schedule was still the most efficient. The same SA scheme, however, was trapped in a local minimum on the first cost
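The discrete architecture search described above can be sketched as a simulated-annealing loop over a small parameter grid, where a neighbor is obtained by re-drawing one parameter. The cost function here is a hypothetical stand-in for 1 − Az (squared distance from an assumed best architecture), not the actual ROC-based cost surface:

```python
import math, random

def anneal_discrete(cost, space, t0=1.0, alpha=0.95, steps=600, seed=0):
    """Simulated annealing over a discrete parameter grid.
    space: list of allowed values per parameter; a neighbor re-draws one parameter."""
    rng = random.Random(seed)
    cur = [rng.choice(vals) for vals in space]
    cur_c = cost(cur)
    best, best_c = cur[:], cur_c
    t = t0
    for _ in range(steps):
        cand = cur[:]
        k = rng.randrange(len(space))
        cand[k] = rng.choice(space[k])
        c = cost(cand)
        # Metropolis acceptance: always take improvements, sometimes take worsenings
        if c < cur_c or rng.random() < math.exp(-(c - cur_c) / t):
            cur, cur_c = cand, c
            if c < best_c:
                best, best_c = cand[:], c
        t *= alpha
    return best, best_c

# Hypothetical 4-parameter grid (node groups and kernel sizes per hidden layer).
space = [[2, 4, 6, 8], [2, 4, 6, 8], [3, 5, 7], [3, 5, 7]]
ideal = [4, 6, 5, 7]                       # assumed-best architecture, for illustration
cost = lambda a: sum((x - y) ** 2 for x, y in zip(a, ideal))
arch, c = anneal_discrete(cost, space)
```

On a real problem the cost evaluation (training a CNN and measuring Az) dominates the run time, which is why the abstract counts architectures evaluated rather than wall-clock time.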
Solar photovoltaic power forecasting using optimized modified extreme learning machine technique
Manoja Kumar Behera
2018-06-01
Prediction of photovoltaic power is a significant research area, with different forecasting techniques mitigating the effects of the uncertainty of photovoltaic generation. An increasingly high penetration level of photovoltaic (PV) generation arises in the smart grid and microgrid concepts. The solar source is irregular in nature; as a result, PV power is intermittent and highly dependent on irradiance, temperature level and other atmospheric parameters. Large-scale photovoltaic generation and penetration into the conventional power system introduces significant challenges to microgrid and smart grid energy management. It is very critical to forecast solar power/irradiance exactly in order to secure the economic operation of the microgrid and smart grid. In this paper an extreme learning machine (ELM) technique is used for PV power forecasting of a real-time model whose location is given in Table 1. Here the model is associated with the incremental conductance (IC) maximum power point tracking (MPPT) technique based on a proportional integral (PI) controller, which is simulated in MATLAB/SIMULINK software. To train the single layer feed-forward network (SLFN), the ELM algorithm is implemented, whose weights are updated by different particle swarm optimization (PSO) techniques, and their performance is compared with existing models such as the back propagation (BP) forecasting model. Keywords: PV array, Extreme learning machine, Maximum power point tracking, Particle swarm optimization, Craziness particle swarm optimization, Accelerate particle swarm optimization, Single layer feed-forward network
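The core of an ELM is simple enough to sketch: hidden-layer weights are drawn at random and only the output weights are solved by (ridge-regularized) least squares. This toy version fits a sine curve as a stand-in for a PV power series; the layer size, weight range, ridge term and data are illustrative assumptions, and the paper's PSO weight-update stage is omitted:

```python
import math, random

def solve(A, b):
    """Gaussian elimination with partial pivoting, for small dense systems."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            fct = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= fct * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def elm_fit(xs, ys, hidden=20, ridge=1e-6, seed=3):
    """Extreme learning machine: random hidden layer, least-squares output weights."""
    rng = random.Random(seed)
    W = [(rng.uniform(-4, 4), rng.uniform(-4, 4)) for _ in range(hidden)]  # (weight, bias)
    H = [[math.tanh(w * x + b) for w, b in W] for x in xs]
    # Normal equations with a small ridge term: (H'H + rI) beta = H'y
    HtH = [[sum(H[k][i] * H[k][j] for k in range(len(xs))) + (ridge if i == j else 0.0)
            for j in range(hidden)] for i in range(hidden)]
    Hty = [sum(H[k][i] * ys[k] for k in range(len(xs))) for i in range(hidden)]
    beta = solve(HtH, Hty)
    return lambda x: sum(b * math.tanh(w * x + bi) for b, (w, bi) in zip(beta, W))

xs = [i / 50.0 for i in range(51)]
ys = [math.sin(2 * math.pi * x) for x in xs]          # stand-in for a PV power curve
model = elm_fit(xs, ys)
rmse = math.sqrt(sum((model(x) - y) ** 2 for x, y in zip(xs, ys)) / len(xs))
```

Because only the linear output layer is trained, fitting is a single solve rather than the iterative back-propagation the paper compares against.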
Alirezaei, M.; Kanarachos, S.A.; Scheepers, B.T.M.; Maurice, J.P.
2013-01-01
Development and experimental evaluation of an optimal Vehicle Dynamic Control (VDC) strategy based on the State Dependent Riccati Equation (SDRE) control technique is presented. The proposed nonlinear controller is based on a nonlinear vehicle model with nonlinear tire characteristics. A novel
Space-mapping techniques applied to the optimization of a safety isolating transformer
T.V. Tran; S. Brisset; D. Echeverria (David); D.J.P. Lahaye (Domenico); P. Brochet
2007-01-01
Space-mapping optimization techniques make it possible to align low-fidelity and high-fidelity models in order to reduce the computational time and increase the accuracy of the solution. The main idea is to build an approximate model from the difference in response between the two models. Therefore
Cheng, Jie; Qian, Zhaogang; Irani, Keki B.; Etemad, Hossein; Elta, Michael E.
1991-03-01
To meet the ever-increasing demand of the rapidly growing semiconductor manufacturing industry, it is critical to have a comprehensive methodology integrating techniques for process optimization, real-time monitoring, and adaptive process control. To this end, we have developed an integrated knowledge-based approach combining the latest expert system technology, machine learning methods, and traditional statistical process control (SPC) techniques. This knowledge-based approach is advantageous in that it makes it possible for the task of process optimization and adaptive control to be performed consistently and predictably. Furthermore, this approach can be used to construct high-level, qualitative descriptions of processes and thus make process behavior easy to monitor, predict and control. Two software packages, RIST (Rule Induction and Statistical Testing) and KARSM (Knowledge Acquisition from Response Surface Methodology), have been developed and incorporated with two commercially available packages: G2 (a real-time expert system) and ULTRAMAX (a tool for sequential process optimization).
Jude Hemanth Duraisamy
2016-01-01
Image steganography is one of the ever-growing computational approaches which has found its application in many fields. Frequency domain techniques are highly preferred for image steganography applications. However, there are significant drawbacks associated with these techniques. In transform based approaches, the secret data is embedded in a random manner in the transform coefficients of the cover image. These transform coefficients may not be optimal in terms of stego image quality and embedding capacity. In this work, the application of the Genetic Algorithm (GA) and Particle Swarm Optimization (PSO) has been explored in the context of determining the optimal coefficients in these transforms. Frequency domain transforms such as the Bandelet Transform (BT) and the Finite Ridgelet Transform (FRIT) are used in combination with GA and PSO to improve the efficiency of the image steganography system.
Li Qin; Yang Lizhi; Song Lixia; Qin De'en; Xue Yongshe; Wang Zhipeng
2012-01-01
Aiming at the high rate of oversized blast fragments, a major difficulty in long-hole drilling and blasting in underground uranium mine stopes, it is pointed out that, alongside integrated technical management measures, the key is to optimize the drilling and blasting parameters and to ensure safe priming: adopt the 'minimum burden' blasting technique, renew the stope fragmentation process, use the new process of hole-bottom indirect initiation, and optimize the detonating circuit by using a safe, reliable and economically rational duplex non-electric detonating circuit. Production practice shows that, with strictly controlled construction quality, the application of the optimized blast fragmentation technique has enhanced the reliability of safe detonation and largely solved the problem of a high rate of oversized blast fragments. (authors)
A global carbon assimilation system based on a dual optimization method
Zheng, H.; Li, Y.; Chen, J. M.; Wang, T.; Huang, Q.; Huang, W. X.; Wang, L. H.; Li, S. M.; Yuan, W. P.; Zheng, X.; Zhang, S. P.; Chen, Z. Q.; Jiang, F.
2015-02-01
Ecological models are effective tools for simulating the distribution of global carbon sources and sinks. However, these models often suffer from substantial biases due to inaccurate simulations of complex ecological processes. We introduce a set of scaling factors (parameters) to an ecological model on the basis of plant functional type (PFT) and latitudes. A global carbon assimilation system (GCAS-DOM) is developed by employing a dual optimization method (DOM) to invert the time-dependent ecological model parameter state and the net carbon flux state simultaneously. We use GCAS-DOM to estimate the global distribution of the CO2 flux on 1° × 1° grid cells for the period from 2001 to 2007. Results show that land and ocean absorb -3.63 ± 0.50 and -1.82 ± 0.16 Pg C yr-1, respectively. North America, Europe and China contribute -0.98 ± 0.15, -0.42 ± 0.08 and -0.20 ± 0.29 Pg C yr-1, respectively. The uncertainties in the flux after optimization by GCAS-DOM have been remarkably reduced by more than 60%. Through parameter optimization, GCAS-DOM can provide improved estimates of the carbon flux for each PFT. Coniferous forest (-0.97 ± 0.27 Pg C yr-1) is the largest contributor to the global carbon sink. Fluxes of once-dominant deciduous forest generated by the Boreal Ecosystems Productivity Simulator (BEPS) are reduced to -0.78 ± 0.23 Pg C yr-1, the third largest carbon sink.
Lee, T. R.; Wood, W. T.; Dale, J.
2017-12-01
Empirical and theoretical models of sub-seafloor organic matter transformation, degradation and methanogenesis require estimates of initial seafloor total organic carbon (TOC). This subsurface methane, under the appropriate geophysical and geochemical conditions may manifest as methane hydrate deposits. Despite the importance of seafloor TOC, actual observations of TOC in the world's oceans are sparse and large regions of the seafloor yet remain unmeasured. To provide an estimate in areas where observations are limited or non-existent, we have implemented interpolation techniques that rely on existing data sets. Recent geospatial analyses have provided accurate accounts of global geophysical and geochemical properties (e.g. crustal heat flow, seafloor biomass, porosity) through machine learning interpolation techniques. These techniques find correlations between the desired quantity (in this case TOC) and other quantities (predictors, e.g. bathymetry, distance from coast, etc.) that are more widely known. Predictions (with uncertainties) of seafloor TOC in regions lacking direct observations are made based on the correlations. Global distribution of seafloor TOC at 1 x 1 arc-degree resolution was estimated from a dataset of seafloor TOC compiled by Seiter et al. [2004] and a non-parametric (i.e. data-driven) machine learning algorithm, specifically k-nearest neighbors (KNN). Built-in predictor selection and a ten-fold validation technique generated statistically optimal estimates of seafloor TOC and uncertainties. In addition, inexperience was estimated. Inexperience is effectively the distance in parameter space to the single nearest neighbor, and it indicates geographic locations where future data collection would most benefit prediction accuracy. These improved geospatial estimates of TOC in data deficient areas will provide new constraints on methane production and subsequent methane hydrate accumulation.
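A minimal sketch of the KNN interpolation idea, including the "inexperience" measure (distance in predictor space to the single nearest neighbour). The predictors and TOC values below are invented for illustration and are not from the Seiter et al. dataset:

```python
import math

def knn_predict(train, query, k=3):
    """k-nearest-neighbour regression with inverse-distance weighting.
    train: list of (features, value) pairs. Returns (estimate, inexperience),
    where inexperience is the distance to the single nearest neighbour."""
    dist = lambda a, b: math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    ranked = sorted(((dist(feat, query), val) for feat, val in train))
    nearest = ranked[:k]
    if nearest[0][0] == 0.0:                 # exact hit on a training point
        return nearest[0][1], 0.0
    wsum = sum(1.0 / d for d, _ in nearest)
    est = sum(v / d for d, v in nearest) / wsum
    return est, ranked[0][0]

# Hypothetical predictors: (water depth in km, distance from coast in 100 km) -> TOC wt%
train = [((0.2, 0.1), 1.8), ((0.5, 0.3), 1.2), ((1.0, 1.0), 0.7),
         ((2.0, 2.5), 0.4), ((4.0, 5.0), 0.2)]
toc, inexperience = knn_predict(train, (0.4, 0.25))
```

Large inexperience values flag grid cells far from any observation, which is exactly where the abstract suggests new data collection would most improve the map.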
Design refinement of multilayer optical thin film devices with two optimization techniques
Apparao, K.V.S.R.
1992-01-01
The design efficiency of two different optimization techniques for designing multilayer optical thin film devices is compared. Ten devices of varying complexity are chosen as design examples for the comparison. The design refinement efficiency and the design parameter characteristics of all the sample designs obtained with the two techniques are compared. The results of the comparison demonstrate that the new design method, developed using a damped least squares technique with indirect derivatives, gives superior and more efficient designs compared to the method developed with direct derivatives. (author). 23 refs., 4 tabs., 14 figs
A Standalone PV System with a Hybrid P&O MPPT Optimization Technique
S. Hota
2017-12-01
In this paper a maximum power point tracking (MPPT) design for a photovoltaic (PV) system using a hybrid optimization technique is proposed. For maximum power transfer, the maximum harvestable power from a PV cell in a dynamically changing environment should be known. The proposed technique is compared with the conventional Perturb and Observe (P&O) technique. A comparative analysis of the power-voltage and current-voltage characteristics of a PV cell with and without the MPPT module when connected to the grid was performed in SIMULINK, to demonstrate the increase in the efficiency of the PV module after using the MPPT module.
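The conventional P&O baseline the hybrid method is compared against can be sketched as a simple hill climb on the P-V curve: keep perturbing the operating voltage in the direction that last increased power. The curve below is a made-up single-peak parabola with its maximum at 17 V, not a real PV model:

```python
def p_and_o(power, v0=10.0, dv=0.25, steps=60):
    """Classic perturb-and-observe MPPT loop: step the operating voltage,
    and reverse direction whenever the last step decreased output power."""
    v, p = v0, power(v0)
    step = dv
    for _ in range(steps):
        v_new = v + step
        p_new = power(v_new)
        if p_new < p:          # power dropped: reverse the perturbation
            step = -step
        v, p = v_new, p_new
    return v, p

# Toy P-V curve with a single maximum of 100 W at 17 V (assumed, for illustration).
pv_curve = lambda v: max(0.0, 100.0 - 0.35 * (v - 17.0) ** 2)
v_mpp, p_mpp = p_and_o(pv_curve)
```

The sketch also shows P&O's well-known weakness: once at the peak it oscillates within one or two voltage steps of the true maximum power point, which is part of what hybrid MPPT schemes aim to improve.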
Stolpe, Mathias; Bendsøe, Martin P.
2007-01-01
This paper presents some initial results pertaining to a search for globally optimal solutions to a challenging benchmark example proposed by Zhou and Rozvany. This means that we are dealing with global optimization of the classical single load minimum compliance topology design problem with a fixed finite element discretization and with discrete design variables. Global optimality is achieved by the implementation of some specially constructed convergent nonlinear branch and cut methods, based on the use of natural relaxations and by applying strengthening constraints (linear valid inequalities) and cuts.
Optimization Techniques for Improving the Performance of Silicone-Based Dielectric Elastomers
Skov, Anne Ladegaard; Yu, Liyun
2017-01-01
Methods for improving the electro-mechanical performance of dielectric elastomers are highlighted. Various optimization methods for improved energy transduction are investigated and discussed, with special emphasis placed on the promise each method holds. The compositing and blending of elastomers are shown to be simple, versatile methods that can solve a number of optimization issues. More complicated methods, involving chemical modification of the silicone backbone as well as controlling the network structure for improved mechanical properties, are shown to solve yet more issues. From the analysis, it is obvious that there is not a single optimization technique that will lead to the universal optimization of dielectric elastomer films, though each method may lead to elastomers with certain features, and thus certain potentials.
Comparison of global sensitivity analysis techniques and importance measures in PSA
Borgonovo, E.; Apostolakis, G.E.; Tarantola, S.; Saltelli, A.
2003-01-01
This paper discusses application and results of global sensitivity analysis techniques to probabilistic safety assessment (PSA) models, and their comparison to importance measures. This comparison allows one to understand whether PSA elements that are important to the risk, as revealed by importance measures, are also important contributors to the model uncertainty, as revealed by global sensitivity analysis. We show that, due to epistemic dependence, uncertainty and global sensitivity analysis of PSA models must be performed at the parameter level. A difficulty arises, since standard codes produce the calculations at the basic event level. We discuss both the indirect comparison through importance measures computed for basic events, and the direct comparison performed using the differential importance measure and the Fussell-Vesely importance at the parameter level. Results are discussed for the large LLOCA sequence of the advanced test reactor PSA.
Optimizing rice yields while minimizing yield-scaled global warming potential.
Pittelkow, Cameron M; Adviento-Borbe, Maria A; van Kessel, Chris; Hill, James E; Linquist, Bruce A
2014-05-01
To meet growing global food demand with limited land and reduced environmental impact, agricultural greenhouse gas (GHG) emissions are increasingly evaluated with respect to crop productivity, i.e., on a yield-scaled as opposed to area basis. Here, we compiled available field data on CH4 and N2O emissions from rice production systems to test the hypothesis that in response to fertilizer nitrogen (N) addition, yield-scaled global warming potential (GWP) will be minimized at N rates that maximize yields. Within each study, yield N surplus was calculated to estimate deficit or excess N application rates with respect to the optimal N rate (defined as the N rate at which maximum yield was achieved). Relationships between yield N surplus and GHG emissions were assessed using linear and nonlinear mixed-effects models. Results indicate that yields increased in response to increasing N surplus when moving from deficit to optimal N rates. At N rates contributing to a yield N surplus, N2O and yield-scaled N2O emissions increased exponentially. In contrast, CH4 emissions were not impacted by N inputs. Accordingly, yield-scaled CH4 emissions decreased with N addition. Overall, yield-scaled GWP was minimized at optimal N rates, decreasing by 21% compared to treatments without N addition. These results are unique compared to aerobic cropping systems in which N2O emissions are the primary contributor to GWP, meaning yield-scaled GWP may not necessarily decrease for aerobic crops when yields are optimized by N fertilizer addition. Balancing gains in agricultural productivity with climate change concerns, this work supports the concept that high rice yields can be achieved with minimal yield-scaled GWP through optimal N application rates. Moreover, additional improvements in N use efficiency may further reduce yield-scaled GWP, thereby strengthening the economic and environmental sustainability of rice systems. © 2013 John Wiley & Sons Ltd.
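Yield-scaled GWP is simply area-scaled CO2-equivalent emissions divided by yield. A small sketch with invented numbers shows why flat CH4 plus a modest N2O increase can still lower the per-tonne figure when yield rises; the 100-year GWP factors and the flux and yield values are illustrative assumptions, not data from the compiled studies:

```python
def yield_scaled_gwp(ch4_kg_ha, n2o_kg_ha, grain_t_ha, gwp_ch4=25.0, gwp_n2o=298.0):
    """Yield-scaled GWP: CO2-equivalent emissions (kg CO2-eq) per tonne of grain.
    Default GWP factors are common 100-year values; treat them as assumptions."""
    co2eq_kg_ha = ch4_kg_ha * gwp_ch4 + n2o_kg_ha * gwp_n2o
    return co2eq_kg_ha / grain_t_ha

# Illustrative (made-up) numbers: N addition raises yield, leaves CH4 flat,
# and adds a little N2O, so emissions per tonne of grain fall.
no_n  = yield_scaled_gwp(ch4_kg_ha=120.0, n2o_kg_ha=0.2, grain_t_ha=5.0)
opt_n = yield_scaled_gwp(ch4_kg_ha=120.0, n2o_kg_ha=0.5, grain_t_ha=7.5)
```

In this toy case the per-hectare emissions rise slightly with N addition, yet the yield-scaled figure drops by roughly 30%, the qualitative pattern the abstract reports.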
Seeram, Euclid; Davidson, Rob; Bushong, Stewart; Swan, Hans
2013-01-01
The purpose of this paper is to review the literature on exposure technique approaches in Computed Radiography (CR) imaging as a means of radiation dose optimization in CR imaging. Specifically, the review assessed three approaches: optimization of kVp, optimization of mAs, and optimization of the Exposure Indicator (EI) in practice. Only papers published in 2005 or later were described in this review. The major themes, patterns, and common findings from the literature reviewed showed that important features of radiation dose management strategies for digital radiography include identification of the EI as a dose control mechanism and as a “surrogate for dose management”. In addition, the use of the EI has been viewed as an opportunity for dose optimization. Furthermore, optimization research has focussed mainly on optimizing the kVp in CR imaging as a means of implementing the ALARA philosophy, and studies have concentrated mainly on chest imaging using different CR systems such as those commercially available from Fuji, Agfa, Kodak, and Konica-Minolta. These studies have produced “conflicting results”. In addition, a common pattern was the use of automatic exposure control (AEC), the measurement of constant effective dose, and the use of a dose-area product (DAP) meter.
Comparison of global optimization approaches for robust calibration of hydrologic model parameters
Jung, I. W.
2015-12-01
Robustness of the calibrated parameters of hydrologic models is necessary to provide a reliable prediction of future performance of watershed behavior under varying climate conditions. This study investigated calibration performances according to the length of calibration period, objective functions, hydrologic model structures and optimization methods. To do this, the combination of three global optimization methods (i.e. SCE-UA, Micro-GA, and DREAM) and four hydrologic models (i.e. SAC-SMA, GR4J, HBV, and PRMS) was tested with different calibration periods and objective functions. Our results showed that the three global optimization methods provided close calibration performances under different calibration periods, objective functions, and hydrologic models. However, using the index of agreement, normalized root mean square error, or Nash-Sutcliffe efficiency as the objective function showed better performance than using the correlation coefficient or percent bias. Calibration performances according to different calibration periods from one year to seven years were hard to generalize because the four hydrologic models have different levels of complexity and different years have different information content of hydrological observation. Acknowledgements This research was supported by a grant (14AWMP-B082564-01) from the Advanced Water Management Research Program funded by the Ministry of Land, Infrastructure and Transport of the Korean government.
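The objective functions that performed well in this comparison are standard goodness-of-fit statistics. Minimal implementations are sketched below, following the usual definitions of Nash-Sutcliffe efficiency, Willmott's index of agreement, and percent bias; the observed/simulated series are invented for illustration:

```python
def nse(obs, sim):
    """Nash-Sutcliffe efficiency: 1 is perfect, 0 matches the mean of obs."""
    m = sum(obs) / len(obs)
    return 1.0 - (sum((o - s) ** 2 for o, s in zip(obs, sim))
                  / sum((o - m) ** 2 for o in obs))

def index_of_agreement(obs, sim):
    """Willmott's index of agreement, bounded in [0, 1]."""
    m = sum(obs) / len(obs)
    num = sum((o - s) ** 2 for o, s in zip(obs, sim))
    den = sum((abs(s - m) + abs(o - m)) ** 2 for o, s in zip(obs, sim))
    return 1.0 - num / den

def pbias(obs, sim):
    """Percent bias: positive means the model underestimates on average."""
    return 100.0 * sum(o - s for o, s in zip(obs, sim)) / sum(obs)

obs = [3.0, 5.0, 9.0, 6.0, 4.0]   # e.g. observed streamflow
sim = [2.8, 5.4, 8.5, 6.2, 4.1]   # e.g. simulated streamflow
```

Note the contrast the study draws: a simulation can have near-zero percent bias (as here) while still missing the dynamics, which is why squared-error measures such as NSE tend to make better calibration objectives.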
QuickVina: accelerating AutoDock Vina using gradient-based heuristics for global optimization.
Handoko, Stephanus Daniel; Ouyang, Xuchang; Su, Chinh Tran To; Kwoh, Chee Keong; Ong, Yew Soon
2012-01-01
Predicting binding between a macromolecule and a small molecule is a crucial phase in the field of rational drug design. AutoDock Vina, one of the most widely used docking programs, released in 2009, uses an empirical scoring function to evaluate the binding affinity between the molecules and employs an iterated local search optimizer for global optimization, achieving a significantly improved speed and better accuracy of binding mode prediction compared to its predecessor, AutoDock 4. In this paper, we propose a further improvement to the local search algorithm of Vina by heuristically preventing some intermediate points from undergoing local search. Our improved version of Vina, dubbed QVina, achieved a maximum acceleration of about 25 times with an average speed-up of 8.34 times compared to the original Vina when tested on a set of 231 protein-ligand complexes, while keeping the optimal scores mostly identical.
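The idea can be sketched generically: run iterated local search, but skip the expensive local refinement for sampled points that score much worse than the best found so far. The 1-D "scoring function", skip margin and step sizes below are illustrative assumptions, not Vina's actual scoring function or search:

```python
import math, random

def ils_with_skip(f, lo, hi, tries=300, margin=1.0, seed=7):
    """Iterated local search that heuristically skips the (expensive) local
    refinement whenever a sampled point scores far worse than the best so far,
    in the spirit of QVina's filtering of intermediate points."""
    rng = random.Random(seed)

    def local_search(x, step=0.05, iters=100):
        # simple coordinate descent on a fixed grid step
        for _ in range(iters):
            for cand in (x - step, x + step):
                if lo <= cand <= hi and f(cand) < f(x):
                    x = cand
        return x

    best_x = rng.uniform(lo, hi)
    best = f(best_x)
    skipped = 0
    for _ in range(tries):
        x = rng.uniform(lo, hi)
        if f(x) > best + margin:       # heuristic: not promising, skip refinement
            skipped += 1
            continue
        x = local_search(x)
        if f(x) < best:
            best_x, best = x, f(x)
    return best_x, best, skipped

# Toy multimodal "scoring function" with its global minimum near x = 1.6.
score = lambda x: math.sin(3 * x) + 0.1 * (x - 2) ** 2
x_best, s_best, skipped = ils_with_skip(score, -4.0, 8.0)
```

The speed-up comes entirely from `skipped`: each skipped point saves a full local search while, as in the paper's results, the best score found is typically unchanged.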
Liang, Faming
2014-04-03
Simulated annealing has been widely used in the solution of optimization problems. As known by many researchers, the global optima cannot be guaranteed to be located by simulated annealing unless a logarithmic cooling schedule is used. However, the logarithmic cooling schedule is so slow that no one can afford to use this much CPU time. This article proposes a new stochastic optimization algorithm, the so-called simulated stochastic approximation annealing algorithm, which is a combination of simulated annealing and the stochastic approximation Monte Carlo algorithm. Under the framework of stochastic approximation, it is shown that the new algorithm can work with a cooling schedule in which the temperature can decrease much faster than in the logarithmic cooling schedule, for example, a square-root cooling schedule, while guaranteeing the global optima to be reached when the temperature tends to zero. The new algorithm has been tested on a few benchmark optimization problems, including feed-forward neural network training and protein-folding. The numerical results indicate that the new algorithm can significantly outperform simulated annealing and other competitors. Supplementary materials for this article are available online.
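A square-root cooling schedule is easy to sketch against a small multimodal test function. Note that this toy keeps plain simulated annealing and omits the stochastic-approximation machinery that gives the paper's algorithm its convergence guarantee; the test function, temperature constant and proposal width are assumptions:

```python
import math, random

def sa_sqrt_cooling(f, lo, hi, t0=2.0, steps=4000, seed=11):
    """Annealing with a square-root cooling schedule T_k = t0 / sqrt(k + 1),
    which cools far faster than the classical logarithmic schedule."""
    rng = random.Random(seed)
    x = rng.uniform(lo, hi)
    fx = f(x)
    best_x, best = x, fx
    for k in range(steps):
        t = t0 / math.sqrt(k + 1)
        cand = min(max(x + rng.gauss(0.0, 0.5), lo), hi)   # clamped Gaussian move
        fc = f(cand)
        if fc <= fx or rng.random() < math.exp(-(fc - fx) / t):
            x, fx = cand, fc
            if fx < best:
                best_x, best = x, fx
    return best_x, best

# Multimodal test function with global minimum f(1) = 0 and nearby local minima.
f = lambda x: (x - 1.0) ** 2 + 2.0 * (1.0 - math.cos(4.0 * (x - 1.0)))
x_opt, f_opt = sa_sqrt_cooling(f, -3.0, 4.0)
```

With a logarithmic schedule the temperature after 4000 steps would still be t0/ln(4001) ≈ 0.24·t0; the square-root schedule is already below 0.016·t0, which is the practical gain the paper formalizes.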
GHOLAMIAN, A. S.
2009-06-01
In this paper, a magnet shape optimization method for reduction of cogging torque and torque ripple in Permanent Magnet (PM) brushless DC motors is presented, using the reduced basis technique coupled with finite element and design of experiments methods. The primary objective of the method is to reduce the enormous number of design variables required to define the magnet shape. The reduced basis technique is a weighted combination of several basis shapes. The aim of the method is to find the best combination using the weights for each shape as the design variables. A multi-level design process is developed to find suitable basis shapes or trial shapes at each level that can be used in the reduced basis technique. Each level is treated as a separate optimization problem until the required objective is achieved. The experimental design of the Taguchi method is used to build the approximation model and to perform the optimization. This method is demonstrated on the magnet shape optimization of a 6-pole/18-slot PM BLDC motor.
A characteristic study of CCF modeling techniques and optimization of CCF defense strategies
Kim, Min Chull
2000-02-01
Common Cause Failures (CCFs) are among the major contributors to risk and core damage frequency (CDF) in operating nuclear power plants (NPPs). Our study on CCF focused on the following aspects: 1) a characteristic study of CCF modeling techniques and 2) development of an optimal CCF defense strategy. Firstly, the characteristics of CCF modeling techniques were studied through a sensitivity study of CCF occurrence probability with respect to system redundancy. The modeling techniques considered in this study include those most widely used worldwide, i.e., the beta factor, MGL, alpha factor, and binomial failure rate models. We found that the MGL and alpha factor models are essentially identical in terms of CCF probability. Secondly, in the study of CCF defense, the various methods identified in previous studies for defending against CCF were classified into five categories. Based on these categories, we developed a generic method by which the optimal CCF defense strategy can be selected. The method is not only qualitative but also quantitative in nature: the selection of the optimal strategy among candidates is based on the analytic hierarchy process (AHP). We applied this method to two motor-driven valves for containment sump isolation in the Ulchin 3 and 4 nuclear power plants. The result indicates that the method for developing an optimal CCF defense strategy is effective.
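The AHP step of ranking candidate defense strategies can be sketched with the common geometric-mean approximation to the priority vector. The three criteria and the 1-9 scale pairwise judgments below are hypothetical, chosen only to show the mechanics:

```python
def prod(xs):
    p = 1.0
    for x in xs:
        p *= x
    return p

def ahp_priorities(pairwise):
    """Approximate AHP priority vector via the geometric-mean (row) method.
    pairwise[i][j] states how strongly criterion i is preferred over j (1-9 scale),
    with pairwise[j][i] = 1 / pairwise[i][j]."""
    n = len(pairwise)
    gm = [prod(row) ** (1.0 / n) for row in pairwise]   # row geometric means
    total = sum(gm)
    return [g / total for g in gm]

# Hypothetical 3-criterion comparison for ranking CCF defense strategies:
# cost-effectiveness vs. ease of implementation vs. coverage of CCF mechanisms.
matrix = [[1.0,     3.0, 5.0],
          [1 / 3.0, 1.0, 2.0],
          [1 / 5.0, 0.5, 1.0]]
weights = ahp_priorities(matrix)
```

Candidate strategies would then be scored against each criterion and ranked by their weight-combined scores; a full AHP analysis would also check the consistency ratio of the judgment matrix.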
Loading pattern optimization by multi-objective simulated annealing with screening technique
Tong, K. P.; Hyun, C. L.; Hyung, K. J.; Chang, H. K.
2006-01-01
This paper presents a new multi-objective function which is made up of the main objective term as well as penalty terms related to the constraints. All the terms are represented in the same functional form and the coefficient of each term is normalized so that each term has equal weighting in the subsequent simulated annealing optimization calculations. The screening technique introduced in previous work is also adopted in order to save computer time in the 3-D neutronics evaluation of trial loading patterns. For a numerical test of the new multi-objective function in loading pattern optimization, the optimum loading patterns for the initial and cycle 7 reload PWR cores of Yonggwang Unit 4 are calculated by the simulated annealing algorithm with the screening technique. A total of 10 optimum loading patterns were obtained for the initial core through 10 independent simulated annealing optimization runs. For the cycle 7 reload core, one optimum loading pattern has been obtained from a single simulated annealing optimization run. More SA optimization runs will be conducted to obtain optimum loading patterns for the cycle 7 reload core and the results will be presented in further work. (authors)
A study of optimization techniques in HDR brachytherapy for the prostate
Pokharel, Ghana Shyam
Based on our study, the DVH-based objective function performed better than the traditional variance-based objective function in creating a clinically acceptable plan when executed under identical conditions. Thirdly, we studied a multiobjective optimization strategy using both DVH- and variance-based objective functions. The optimization strategy was to create several Pareto optimal solutions by scanning the clinically relevant part of the Pareto front. This strategy was adopted to decouple optimization from decision making, such that the user could select the final solution from a pool of alternative solutions based on his/her clinical goals. The overall quality of the treatment plan improved using this approach compared to the traditional class solution approach. In fact, the final optimized plan selected using the decision engine with the DVH-based objective was comparable to a typical clinical plan created by an experienced physicist. Next, we studied a hybrid technique comprising both stochastic and deterministic algorithms to optimize both dwell positions and dwell times. The simulated annealing algorithm was used to find an optimal catheter distribution and the DVH-based algorithm was used to optimize the 3D dose distribution for a given catheter distribution. This unique treatment planning and optimization tool was capable of producing clinically acceptable, highly reproducible treatment plans in clinically reasonable time. As this algorithm was able to create clinically acceptable plans within clinically reasonable time automatically, it is really appealing for real-time procedures. Next, we studied the feasibility of multiobjective optimization using an evolutionary algorithm for real-time HDR brachytherapy of the prostate. The algorithm, with properly tuned algorithm-specific parameters, was able to create clinically acceptable plans within clinically reasonable time. However, the algorithm was allowed to run for only a limited number of generations, not considered optimal, in general, for such algorithms. 
This was
Radioactive tracer technique in process optimization: applications in the chemical industry
Charlton, J.S.
1989-01-01
Process optimization is concerned with the selection of the most appropriate technological design of the process and with controlling its operation to obtain maximum benefit. The role of radioactive tracers in process optimization is discussed and the various circumstances under which such techniques may be beneficially applied are identified. Case studies are presented which illustrate how radioisotopes may be used to monitor plant performance under dynamic conditions to improve production efficiency and to investigate the cause of production limitations. In addition, the use of sealed sources to provide information complementary to the tracer study is described. (author)
Analysis on the Metrics used in Optimizing Electronic Business based on Learning Techniques
Irina-Steliana STAN
2014-09-01
The present paper proposes a methodology for analyzing the metrics related to electronic business. The draft optimization models include KPIs that can highlight business specifics, provided they are integrated using learning-based techniques. Having identified the most important, highest-impact elements of the business, the models should ultimately capture the links between them by automating business flows. Human staff will find themselves collaborating more and more with the optimization models, which will translate into higher-quality decisions followed by increased profitability.
Linear triangular optimization technique and pricing scheme in residential energy management systems
Anees, Amir; Hussain, Iqtadar; AlKhaldi, Ali Hussain; Aslam, Muhammad
2018-06-01
This paper presents a new linear optimization algorithm for power scheduling of electric appliances. The proposed system is applied in a smart home community, in which a community controller acts as a virtual distribution company for the end consumers. We also present a pricing scheme between the community controller and its residential users based on real-time pricing and likely block rates. The results of the proposed optimization algorithm demonstrate that, by applying the anticipated technique, end users can not only minimize their consumption cost but also reduce the peak-to-average power ratio, which is beneficial for the utilities as well.
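The scheduling idea described above can be illustrated with a toy cost minimizer. This is a hedged sketch, not the paper's linear-triangular algorithm: each shiftable appliance is simply placed in the cheapest contiguous window of an hourly price signal, and the appliance names, power ratings, and block-rate prices are invented for demonstration.

```python
# Illustrative sketch only: greedy placement of shiftable appliances into
# the cheapest contiguous windows of an hourly price signal.

def cheapest_window(prices, duration):
    """Return the start hour minimizing total price over a contiguous run."""
    costs = [sum(prices[s:s + duration]) for s in range(len(prices) - duration + 1)]
    return min(range(len(costs)), key=costs.__getitem__)

def schedule(appliances, prices):
    """appliances: list of (name, kW, hours). Returns {name: (start_hour, cost)}."""
    plan = {}
    for name, kw, hours in appliances:
        start = cheapest_window(prices, hours)
        cost = kw * sum(prices[start:start + hours])
        plan[name] = (start, round(cost, 2))
    return plan

hourly = [0.30, 0.28, 0.12, 0.10, 0.11, 0.25]   # $/kWh, hypothetical block rates
plan = schedule([("washer", 1.0, 2), ("dryer", 2.0, 1)], hourly)
print(plan)
```

Shifting load into the low-price hours is what lowers both the consumer's bill and, in aggregate, the peak-to-average ratio seen by the utility.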
Lee, JongHyup; Pak, Dohyun
2016-01-01
For practical deployment of wireless sensor networks (WSNs), the nodes are organized into clusters, where a sensor node communicates with the other nodes in its cluster and a cluster head supports connectivity between the sensor nodes and a sink node. In hybrid WSNs, cluster heads have cellular network interfaces for global connectivity. However, when WSNs are active and the load of the cellular networks is high, the optimal assignment of cluster heads to base stations becomes critical. Therefore, in this paper, we propose a game-theoretic model to find the optimal assignment of base stations for hybrid WSNs. Since communication and energy costs differ across cellular systems, we devise two game models, for TDMA/FDMA and CDMA systems, employing power prices to adapt to the varying efficiency of recent wireless technologies. The proposed model is defined under the assumption of an ideal sensing field, but our evaluation shows that it is more adaptive and energy efficient than local selections. PMID:27589743
Research on optimal investment path of transmission corridor under the global energy Internet
Huang, Yuehui; Li, Pai; Wang, Qi; Liu, Jichun; Gao, Han
2018-02-01
Against the background of the global energy Internet, the investment planning of a transmission corridor from Xinjiang to Germany is studied in this article; the corridor passes through four countries: Kazakhstan, Russia, Belarus and Poland. Taking the specific situation of each country into account, including the length of the transmission line, unit construction cost, completion time, transmission price, state tariff, inflation rate and so on, this paper constructs a power transmission investment model. Finally, the dynamic programming method is used to simulate an example, and the optimal strategies under different objective functions are obtained.
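The dynamic-programming idea behind such a staged corridor problem can be sketched as a backward recursion over stages. This is a hypothetical simplification, not the paper's model: stages stand for transit countries, states are candidate corridor options in each country, and the edge costs (which would bundle construction cost, tariffs, and so on) are invented numbers.

```python
# Hedged sketch: backward dynamic programming over a staged option graph.

def dp_min_path(stage_costs):
    """stage_costs[k][i][j]: cost from option i in stage k to option j in stage k+1.
    Returns (min_total_cost, best_path as a list of option indices)."""
    n_last = len(stage_costs[-1][0])
    # best[j] = (cost-to-go from option j of the current stage, tail path)
    best = {j: (0.0, [j]) for j in range(n_last)}
    for costs in reversed(stage_costs):
        new_best = {}
        for i in range(len(costs)):
            j = min(best, key=lambda j: costs[i][j] + best[j][0])
            new_best[i] = (costs[i][j] + best[j][0], [i] + best[j][1])
        best = new_best
    i = min(best, key=lambda i: best[i][0])
    return best[i]

# Two transitions (three stages), two candidate options per stage.
cost, path = dp_min_path([[[4, 2], [6, 5]], [[1, 6], [2, 3]]])
print(cost, path)
```

The recursion evaluates each stage once, so the work grows linearly in the number of stages rather than exponentially in the number of corridor combinations.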
Global stability, periodic solutions, and optimal control in a nonlinear differential delay model
Anatoli F. Ivanov
2010-09-01
A nonlinear differential equation with delay, serving as a mathematical model of several applied problems, is considered. Sufficient conditions for global asymptotic stability and for the existence of periodic solutions are given. Two particular applications are treated in detail. The first is a blood cell production model by Mackey, for which new periodicity criteria are derived. The second is a modified economic model with delay due to Ramsey. An optimization problem for maximal consumption is stated and solved for the latter.
Global Convergence of a Spectral Conjugate Gradient Method for Unconstrained Optimization
Jinkui Liu
2012-01-01
A new nonlinear spectral conjugate descent method for solving unconstrained optimization problems is proposed on the basis of the CD method and the spectral conjugate gradient method. For any line search, the new method satisfies the sufficient descent condition g_k^T d_k < -||g_k||^2. Moreover, we prove that the new method is globally convergent under the strong Wolfe line search. The numerical results show that the new method is more effective on the given test problems from the CUTE test problem library (Bongartz et al., 1995) than the well-known CD, FR, and PRP methods.
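The flavor of such methods can be sketched with a conjugate-descent (CD) type iteration that enforces a sufficient-descent safeguard g_k^T d_k <= -c ||g_k||^2. This is a generic hedged sketch, not the paper's specific spectral variant: it uses a simple Armijo backtracking line search instead of strong Wolfe, and the quadratic test function is invented.

```python
# Hedged sketch of a CD-type nonlinear conjugate gradient method with a
# sufficient-descent safeguard and Armijo backtracking.

def cg_cd(f, grad, x, iters=200, c=0.1):
    g = grad(x)
    d = [-gi for gi in g]
    for _ in range(iters):
        nrm2 = sum(gi * gi for gi in g)
        if nrm2 < 1e-18:
            break
        gTd = sum(gi * di for gi, di in zip(g, d))
        if gTd > -c * nrm2:            # safeguard: restart with steepest descent
            d = [-gi for gi in g]
            gTd = -nrm2
        t, fx = 1.0, f(x)              # Armijo backtracking line search
        while f([xi + t * di for xi, di in zip(x, d)]) > fx + 1e-4 * t * gTd:
            t *= 0.5
        x = [xi + t * di for xi, di in zip(x, d)]
        g_new = grad(x)
        beta = -sum(gi * gi for gi in g_new) / gTd     # CD beta formula
        d = [-gi + beta * di for gi, di in zip(g_new, d)]
        g = g_new
    return x

# Invented ill-conditioned quadratic with minimizer (1, -2).
f = lambda x: (x[0] - 1.0) ** 2 + 10.0 * (x[1] + 2.0) ** 2
grad = lambda x: [2.0 * (x[0] - 1.0), 20.0 * (x[1] + 2.0)]
xstar = cg_cd(f, grad, [0.0, 0.0])
print(xstar)
```

The safeguard is what the sufficient descent condition buys in practice: whenever the CD direction degenerates, the iteration falls back to a guaranteed descent direction.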
T. Hikmet Karakoc; Onder Turan [School of Civil Aviation, Anadolu University, Eskisehir (Turkey)
2008-09-30
The main objective of the present study is to minimize the specific fuel consumption of a non-afterburning high-bypass turbofan engine with separate exhaust streams and unmixed flow, in order to reduce global effects. The values of the engine design parameters are optimized to maintain minimum specific fuel consumption of a high-bypass turbofan engine under different flight conditions, fuel types and design criteria. The backbone of the optimization approach is an elitism-based genetic algorithm coupled with a real parametric cycle analysis of a turbofan engine. To solve the optimization problem, a new software program was developed in the MATLAB programming language, with the objective function formulated to minimize the specific fuel consumption. The input variables are the compressor pressure ratio (π_c), the bypass ratio (α) and the fuel heating value h_PR (kJ/kg). Hydrogen was selected as the fuel type in the real parametric cycle analysis of commercial turbofans. It may be concluded that the software program developed can successfully solve optimization problems for 10 ≤ π_c ≤ 20, 2 ≤ α ≤ 10, h_PR of 120,000 kJ/kg and aircraft flight Mach numbers ≤ 0.8.
Model-data fusion across ecosystems: from multisite optimizations to global simulations
Kuppel, S.; Peylin, P.; Maignan, F.; Chevallier, F.; Kiely, G.; Montagnani, L.; Cescatti, A.
2014-11-01
This study uses a variational data assimilation framework to simultaneously constrain a global ecosystem model with eddy covariance measurements of daily net ecosystem exchange (NEE) and latent heat (LE) fluxes from a large number of sites grouped in seven plant functional types (PFTs). It is an attempt to bridge the gap between the numerous site-specific parameter optimization works found in the literature and the generic parameterization used by most land surface models within each PFT. The present multisite approach allows deriving PFT-generic sets of optimized parameters enhancing the agreement between measured and simulated fluxes at most of the sites considered, with performances often comparable to those of the corresponding site-specific optimizations. Besides reducing the PFT-averaged model-data root-mean-square difference (RMSD) and the associated daily output uncertainty, the optimization improves the simulated CO2 balance at tropical and temperate forest sites. The major site-level NEE adjustments at the seasonal scale are reduced amplitude in C3 grasslands and boreal forests, increased seasonality in temperate evergreen forests, and better model-data phasing in temperate deciduous broadleaf forests. Conversely, the poorer performance in tropical evergreen broadleaf forests points to deficiencies in the modelling of phenology and soil water stress for this PFT. An evaluation with data-oriented estimates of photosynthesis (GPP - gross primary productivity) and ecosystem respiration (Reco) rates indicates distinctly improved simulations of both gross fluxes. The multisite parameter sets are then tested against CO2 concentrations measured at 53 locations around the globe, showing significant adjustments of the modelled seasonality of atmospheric CO2 concentration, whose relevance seems PFT-dependent, along with an improved interannual variability. Lastly, a global-scale evaluation with remote sensing NDVI (normalized difference vegetation index
Surawski, N. C.; Sullivan, A. L.; Roxburgh, S. H.; Meyer, M.; Polglase, P. J.
2016-12-01
Vegetation fires are a complex phenomenon and have a range of global impacts including influences on climate. Even though fire is a necessary disturbance for the maintenance of some ecosystems, a range of anthropogenically deleterious consequences are associated with it, such as damage to assets and infrastructure, loss of life, as well as degradation to air quality leading to negative impacts on human health. Estimating carbon emissions from fire relies on a carbon mass balance technique which has evolved with two different interpretations in the fire emissions community. Databases reporting global fire emissions estimates use an approach based on `consumed biomass' which is an approximation to the biogeochemically correct `burnt carbon' approach. Disagreement between the two methods occurs because the `consumed biomass' accounting technique assumes that all burnt carbon is volatilized and emitted. By undertaking a global review of the fraction of burnt carbon emitted to the atmosphere, we show that the `consumed biomass' accounting approach overestimates global carbon emissions by 4.0%, or 100 Teragrams, annually. The required correction is significant and represents 9% of the net global forest carbon sink estimated annually. To correctly partition burnt carbon between that emitted to the atmosphere and that remaining as a post-fire residue requires the post-burn carbon content to be estimated, which is quite often not undertaken in atmospheric emissions studies. To broaden our understanding of ecosystem carbon fluxes, it is recommended that the change in carbon content associated with burnt residues be accounted for. Apart from correctly partitioning burnt carbon between the emitted and residue pools, it enables an accounting approach which can assess the efficacy of fire management operations targeted at sequestering carbon from fire. These findings are particularly relevant for the second commitment period for the Kyoto protocol, since improved landscape fire
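The difference between the two accounting conventions reduces to simple arithmetic, which can be made concrete with a worked numeric sketch. The burnt-carbon total and emitted fraction below are invented for illustration; they are chosen only so that the resulting correction matches the 4% / 100 Tg magnitude quoted in the abstract.

```python
# Worked numeric sketch of the two carbon-accounting conventions.
# 'Consumed biomass' assumes all burnt carbon is volatilized and emitted;
# 'burnt carbon' keeps the post-fire residue out of the emitted total.

def emissions_consumed_biomass(burnt_carbon_tg):
    return burnt_carbon_tg                      # everything assumed emitted

def emissions_burnt_carbon(burnt_carbon_tg, emitted_fraction):
    return burnt_carbon_tg * emitted_fraction   # residue remains on site

burnt = 2500.0   # Tg of burnt carbon per year (hypothetical)
frac = 0.96      # fraction of burnt carbon actually emitted (hypothetical)
overestimate = emissions_consumed_biomass(burnt) - emissions_burnt_carbon(burnt, frac)
print(overestimate)   # 100.0 Tg, i.e. a 4% overestimate
```

Estimating the post-burn residue (the `1 - frac` term) is exactly the measurement that the abstract notes is often omitted in atmospheric emissions studies.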
Hagan, Aaron; Sawant, Amit; Folkerts, Michael; Modiri, Arezoo
2018-01-01
We report on the design, implementation and characterization of a multi-graphic processing unit (GPU) computational platform for higher-order optimization in radiotherapy treatment planning. In collaboration with a commercial vendor (Varian Medical Systems, Palo Alto, CA), a research prototype GPU-enabled Eclipse (V13.6) workstation was configured. The hardware consisted of dual 8-core Xeon processors, 256 GB RAM and four NVIDIA Tesla K80 general purpose GPUs. We demonstrate the utility of this platform for large radiotherapy optimization problems through the development and characterization of a parallelized particle swarm optimization (PSO) four dimensional (4D) intensity modulated radiation therapy (IMRT) technique. The PSO engine was coupled to the Eclipse treatment planning system via a vendor-provided scripting interface. Specific challenges addressed in this implementation were (i) data management and (ii) non-uniform memory access (NUMA). For the former, we alternated between parameters over which the computation process was parallelized. For the latter, we reduced the amount of data required to be transferred over the NUMA bridge. The datasets examined in this study were approximately 300 GB in size, including 4D computed tomography images, anatomical structure contours and dose deposition matrices. For evaluation, we created a 4D-IMRT treatment plan for one lung cancer patient and analyzed computation speed while varying several parameters (number of respiratory phases, GPUs, PSO particles, and data matrix sizes). The optimized 4D-IMRT plan enhanced sparing of organs at risk by an average reduction of 26% in maximum dose, compared to the clinical optimized IMRT plan, where the internal target volume was used. We validated our computation time analyses in two additional cases. The computation speed in our implementation did not monotonically increase with the number of GPUs. The optimal number of GPUs (five, in our study) is directly related to the
A Preconditioning Technique for First-Order Primal-Dual Splitting Method in Convex Optimization
Meng Wen
2017-01-01
We introduce a preconditioning technique for the first-order primal-dual splitting method. The primal-dual splitting method offers a very general framework for solving a large class of optimization problems arising in image processing. The key idea of the preconditioning technique is that the constant iterative parameters are updated self-adaptively during the iteration process. We also give a simple and easy way to choose the diagonal preconditioners while maintaining the convergence of the iterative algorithm. The efficiency of the proposed method is demonstrated on an image denoising problem. Numerical results show that the preconditioned iterative algorithm performs better than the original one.
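Diagonal preconditioning of a first-order primal-dual (Chambolle-Pock type) scheme can be sketched on a tiny 1-D total-variation denoising problem, min_x 0.5*||x - b||^2 + lam*||Dx||_1. This is a hedged illustration in the spirit of the abstract, not the paper's algorithm: the per-coordinate steps tau_i and sigma_j are set from column and row sums of |D| (a standard diagonal-preconditioning recipe), and the signal b and weight lam are invented.

```python
# Hedged sketch: diagonally preconditioned primal-dual splitting for
# 1-D TV denoising  min_x 0.5*||x - b||^2 + lam*||Dx||_1.

def D(x):                       # forward differences
    return [x[i + 1] - x[i] for i in range(len(x) - 1)]

def Dt(y):                      # adjoint of D
    n = len(y) + 1
    out = [0.0] * n
    for i, yi in enumerate(y):
        out[i] -= yi
        out[i + 1] += yi
    return out

def precond_pd(b, lam, iters=2000):
    n = len(b)
    tau = [1.0 / (2.0 if 0 < i < n - 1 else 1.0) for i in range(n)]  # 1/col sums
    sigma = [0.5] * (n - 1)                                          # 1/row sums
    x, xbar, y = list(b), list(b), [0.0] * (n - 1)
    for _ in range(iters):
        # dual step: prox of the conjugate of lam*|.|_1 is a clip to [-lam, lam]
        y = [max(-lam, min(lam, yi + s * d))
             for yi, s, d in zip(y, sigma, D(xbar))]
        x_old = x
        # primal step: prox of 0.5*(x - b)^2 with per-coordinate step tau_i
        x = [(xi - t * g + t * bi) / (1.0 + t)
             for xi, t, g, bi in zip(x, tau, Dt(y), b)]
        xbar = [2.0 * xn - xo for xn, xo in zip(x, x_old)]
    return x

# With a large TV weight the denoised ramp collapses to its mean.
x = precond_pd([0.0, 1.0, 2.0, 3.0], lam=5.0)
print(x)
```

Choosing the diagonal steps from the operator's row and column sums removes the need to estimate the global operator norm, which is the practical appeal of this kind of preconditioning.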
Artificial intelligence search techniques for optimization of the cold source geometry
Azmy, Y.Y.
1988-01-01
Most optimization studies of cold neutron sources have concentrated on the numerical prediction or experimental measurement of the cold moderator optimum thickness which produces the largest cold neutron leakage for a given thermal neutron source. Optimizing the geometrical shape of the cold source, however, is a more difficult problem because the optimized quantity, the cold neutron leakage, is an implicit function of the shape which is the unknown in such a study. We draw an analogy between this problem and a state space search, then we use a simple Artificial Intelligence (AI) search technique to determine the optimum cold source shape based on a two-group, r-z diffusion model. We implemented this AI design concept in the computer program AID which consists of two modules, a physical model module and a search module, which can be independently modified, improved, or made more sophisticated. 7 refs., 1 fig
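The state-space search idea can be sketched with a simple hill climb over a discretized design parameter. This is a toy illustration, not the AID program: the single "thickness" knob and the smooth surrogate for cold neutron leakage are invented stand-ins for the two-group r-z diffusion calculation.

```python
# Toy sketch: hill-climbing state-space search over one shape parameter,
# with a made-up unimodal surrogate standing in for the diffusion model.

def leakage(thickness):
    # Hypothetical surrogate: leakage peaks at thickness = 3.0
    return -(thickness - 3.0) ** 2 + 10.0

def hill_climb(evaluate, state, step=0.25, iters=100):
    best = evaluate(state)
    for _ in range(iters):
        moved = False
        for s in (state + step, state - step):   # neighboring states
            v = evaluate(s)
            if v > best:
                state, best, moved = s, v, True
        if not moved:                            # local optimum reached
            break
    return state, best

state, value = hill_climb(leakage, 0.0)
print(state, value)   # (3.0, 10.0)
```

As in the abstract, the objective is only available through evaluation of a model, so the search module and the physical-model module can be swapped out independently.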
Artificial intelligence search techniques for the optimization of cold source geometry
Azmy, Y.Y.
1988-01-01
Most optimization studies of cold neutron sources have concentrated on the numerical prediction or experimental measurement of the cold moderator optimum thickness that produces the largest cold neutron leakage for a given thermal neutron source. Optimizing the geometric shape of the cold source, however, is a more difficult problem because the optimized quantity, the cold neutron leakage, is an implicit function of the shape, which is the unknown in such a study. An analogy is drawn between this problem and a state space search, then a simple artificial intelligence (AI) search technique is used to determine the optimum cold source shape based on a two-group, r-z diffusion model. This AI design concept was implemented in the computer program AID, which consists of two modules, a physical model module, and a search module, which can be independently modified, improved, or made more sophisticated
Optimization models and techniques for implementation and pricing of electricity markets
Madrigal Martinez, M.
2001-01-01
The operation and planning of vertically integrated electric power systems can be optimized using mathematical models of their operating problems. As the electric power industry goes through a period of restructuring, there is a need for new optimization tools. This thesis describes the importance of such tools and presents techniques for implementing them, as well as methods for pricing primary electricity markets. Three modeling groups are studied. The first considers a simplified continuous and discrete model for power pool auctions. The second considers the unit commitment problem, and the third uses a new type of linear network-constrained clearing-system model for daily markets for power and spinning reserve. The newly proposed model considers bids for supply and demand as well as bilateral contracts, and is a direct-current model of the transmission network.
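The core of a pool auction can be sketched as single-period merit-order clearing. This is a hedged simplification, not the thesis's network-constrained model: supply bids are stacked by price until demand is met and the marginal bid sets the clearing price; the bid quantities and prices are invented.

```python
# Illustrative sketch: single-period merit-order market clearing
# (no network constraints, no demand bids, no bilateral contracts).

def clear_market(supply_bids, demand):
    """supply_bids: list of (price $/MWh, quantity MW).
    Returns (clearing_price, dispatch list of (price, MW taken))."""
    dispatch, remaining = [], demand
    for price, qty in sorted(supply_bids):       # cheapest bids first
        take = min(qty, remaining)
        if take > 0:
            dispatch.append((price, take))
            remaining -= take
        if remaining == 0:
            return price, dispatch               # marginal bid sets the price
    raise ValueError("insufficient supply to meet demand")

price, dispatch = clear_market([(20, 50), (35, 40), (10, 30)], demand=70)
print(price, dispatch)   # 20 [(10, 30), (20, 40)]
```

Uniform pricing at the marginal bid is what makes such auctions incentive-compatible for inframarginal generators; the network-constrained version in the thesis additionally enforces transmission limits.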
Tong, S.S.; Powell, D.; Goel, S.
1992-02-01
A new software system called Engineous combines artificial intelligence and numerical methods for the design and optimization of complex aerospace systems. Engineous combines the advanced computational techniques of genetic algorithms, expert systems, and object-oriented programming with the conventional methods of numerical optimization and simulated annealing to create a design optimization environment that can be applied to computational models in various disciplines. Engineous has produced designs with higher predicted performance gains than current manual design processes, with on average a 10-to-1 reduction in turnaround time, and has yielded new insights into product design. It has been applied to the aerodynamic preliminary design of an aircraft engine turbine, the concurrent aerodynamic and mechanical preliminary design of an aircraft engine turbine blade and disk, a space superconductor generator, a satellite power converter, and a nuclear-powered satellite reactor and shield. 23 refs
A reduced scale two loop PWR core designed with particle swarm optimization technique
Lima Junior, Carlos A. Souza; Pereira, Claudio M.N.A; Lapa, Celso M.F.; Cunha, Joao J.; Alvim, Antonio C.M.
2007-01-01
Reduced scale experiments are often employed in engineering projects because they are much cheaper than real scale testing. Unfortunately, designing a reduced scale thermal-hydraulic circuit or piece of equipment capable of reproducing, both accurately and simultaneously, all physical phenomena that occur at real scale and operating conditions is a difficult task. To solve this problem, advanced optimization techniques, such as genetic algorithms, have been applied. Following this research line, we have performed investigations using the Particle Swarm Optimization (PSO) technique to design a reduced scale two-loop Pressurized Water Reactor (PWR) core, considering 100% of nominal power and non-accidental operating conditions. The results obtained show that the proposed methodology is a promising approach for forced-flow reduced scale experiments. (author)
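The PSO technique itself can be sketched in its minimal textbook form: particles track personal bests and are pulled toward the swarm's global best. This is a generic hedged sketch, not the authors' setup; a 2-D sphere function stands in for the thermal-hydraulic similarity objective, and all parameter values are conventional defaults rather than the paper's.

```python
import random

# Minimal particle swarm optimization sketch (textbook form).

def pso(f, dim=2, n=20, iters=200, w=0.7, c1=1.5, c2=1.5, seed=1):
    rng = random.Random(seed)
    xs = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n)]
    vs = [[0.0] * dim for _ in range(n)]
    pbest = [x[:] for x in xs]                 # personal best positions
    pval = [f(x) for x in xs]
    g = min(range(n), key=pval.__getitem__)
    gbest, gval = pbest[g][:], pval[g]         # global best
    for _ in range(iters):
        for i in range(n):
            for d in range(dim):
                vs[i][d] = (w * vs[i][d]
                            + c1 * rng.random() * (pbest[i][d] - xs[i][d])
                            + c2 * rng.random() * (gbest[d] - xs[i][d]))
                xs[i][d] += vs[i][d]
            v = f(xs[i])
            if v < pval[i]:
                pbest[i], pval[i] = xs[i][:], v
                if v < gval:
                    gbest, gval = xs[i][:], v
    return gbest, gval

best, val = pso(lambda x: sum(t * t for t in x))
print(best, val)
```

In the reduced-scale design problem, the objective would instead score how closely a candidate geometry reproduces the full-scale dimensionless groups.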
Ho-Lung Hung
2008-08-01
A suboptimal partial transmit sequence (PTS) technique based on the particle swarm optimization (PSO) algorithm is presented for low computational complexity and reduction of the peak-to-average power ratio (PAPR) of an orthogonal frequency division multiplexing (OFDM) system. In general, the PTS technique can improve the PAPR statistics of an OFDM system. However, it comes with an exhaustive search over all combinations of allowed phase weighting factors, whose complexity increases exponentially with the number of subblocks. In this paper, we work around this potential computational intractability; the proposed PSO scheme exploits heuristics to search for the optimal combination of phase factors with low complexity. Simulation results show that the new technique can effectively reduce both the computational complexity and the PAPR.
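The PTS mechanism can be sketched on a tiny OFDM symbol with an exhaustive phase search, which is exactly the step the paper replaces with PSO. This is a hedged illustration: the subcarrier data, symbol length, number of subblocks, and binary phase alphabet are all invented for demonstration.

```python
import cmath, itertools

# Illustrative PTS sketch: split subcarriers into disjoint subblocks,
# weight each subblock's time-domain signal by a phase factor, and pick
# the combination with the lowest PAPR (exhaustive search here).

def idft(X):
    N = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * n / N) for k in range(N)) / N
            for n in range(N)]

def papr(x):
    powers = [abs(v) ** 2 for v in x]
    return max(powers) / (sum(powers) / len(powers))

def pts_exhaustive(X, n_blocks=2, phases=(1, -1)):
    N, size = len(X), len(X) // n_blocks
    blocks = []
    for b in range(n_blocks):                    # disjoint subcarrier subblocks
        Xb = [0] * N
        Xb[b * size:(b + 1) * size] = X[b * size:(b + 1) * size]
        blocks.append(idft(Xb))
    best = None
    for ws in itertools.product(phases, repeat=n_blocks):
        x = [sum(w * xb[n] for w, xb in zip(ws, blocks)) for n in range(N)]
        p = papr(x)
        if best is None or p < best[0]:
            best = (p, ws)
    return best

X = [1, -1, 1, 1, -1, 1, 1, 1]                   # hypothetical BPSK symbols
best_papr, weights = pts_exhaustive(X)
print(best_papr, weights)
```

With B subblocks and W allowed phases the exhaustive search costs W^B evaluations; replacing it with a swarm search is what keeps the complexity manageable.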
Mehiddin Al-Baali
2015-12-01
We deal with the design of parallel algorithms that use variable partitioning techniques to solve nonlinear optimization problems. We propose an iterative solution method that is very efficient for separable functions, our scope being to discuss its performance for general functions. Experimental results on an illustrative example have suggested some useful modifications that, even though they improve the efficiency of our parallel method, leave some questions open for further investigation.
Wroblewski, David [Mentor, OH; Katrompas, Alexander M [Concord, OH; Parikh, Neel J [Richmond Heights, OH
2009-09-01
A method and apparatus for optimizing the operation of a power generating plant using artificial intelligence techniques. One or more decisions D are determined for at least one consecutive time increment, where at least one of the decisions D is associated with a discrete variable for the operation of a power plant device in the power generating plant. In an illustrated embodiment, the power plant device is a soot cleaning device associated with a boiler.
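The discrete decision structure described in this abstract can be sketched as a dynamic program over consecutive time increments: at each increment, the binary choice is whether to run the cleaning device. This is a toy illustration, not the patented method; the fouling model, cleaning cost, and loss figures are invented.

```python
from functools import lru_cache

# Toy sketch: optimize a discrete clean/don't-clean decision over a
# horizon of time increments via dynamic programming.

def plan_cleaning(horizon, clean_cost=3.0, loss_per_fouling=1.0, max_fouling=5):
    """Returns (min_total_cost, decisions) with decisions[t] True = clean."""

    @lru_cache(maxsize=None)
    def best(t, fouling):
        if t == horizon:
            return 0.0, ()
        # Option 1: clean now (pay clean_cost, fouling resets to zero)
        c_clean, rest_clean = best(t + 1, 0)
        clean_total = clean_cost + c_clean
        # Option 2: skip (pay efficiency loss proportional to fouling level)
        c_skip, rest_skip = best(t + 1, min(fouling + 1, max_fouling))
        skip_total = loss_per_fouling * fouling + c_skip
        if clean_total < skip_total:
            return clean_total, (True,) + rest_clean
        return skip_total, (False,) + rest_skip

    return best(0, 0)

cost, decisions = plan_cleaning(6)
print(cost, decisions)   # 7.0 (False, False, False, True, False, False)
```

The memoized recursion makes the search over 2^horizon decision sequences tractable, which is the same motivation given in the abstract for optimizing the discrete device variable jointly over consecutive increments.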
Multiple sensitive estimation and optimal sample size allocation in the item sum technique.
Perri, Pier Francesco; Rueda García, María Del Mar; Cobo Rodríguez, Beatriz
2018-01-01
For surveys of sensitive issues in life sciences, statistical procedures can be used to reduce nonresponse and social desirability response bias. Both of these phenomena provoke nonsampling errors that are difficult to deal with and can seriously flaw the validity of the analyses. The item sum technique (IST) is a very recent indirect questioning method derived from the item count technique that seeks to procure more reliable responses on quantitative items than direct questioning while preserving respondents' anonymity. This article addresses two important questions concerning the IST: (i) its implementation when two or more sensitive variables are investigated and efficient estimates of their unknown population means are required; (ii) the determination of the optimal sample size to achieve minimum-variance estimates. These aspects are of great relevance for survey practitioners engaged in sensitive research and, to the best of our knowledge, had not been studied so far. In this article, theoretical results for multiple estimation and optimal allocation are obtained under a generic sampling design and then particularized to simple random sampling and stratified sampling designs. Theoretical considerations are integrated with a number of simulation studies, based on data from two real surveys, conducted to ascertain the efficiency gain derived from optimal allocation in different situations. One of the surveys concerns cannabis consumption among university students. Our findings highlight some methodological advances that can be obtained in life sciences IST surveys when optimal allocation is achieved. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
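For the stratified sampling case, the classical form of variance-minimizing allocation is Neyman allocation, where the sample is split proportionally to N_h * S_h (stratum size times stratum standard deviation). The sketch below illustrates that standard rule, not the paper's IST-specific derivation, and the stratum figures are invented.

```python
# Sketch of Neyman (minimum-variance) allocation for stratified sampling:
# n_h proportional to N_h * S_h for a fixed total sample size.

def neyman_allocation(strata, n_total):
    """strata: list of (N_h, S_h) population sizes and standard deviations.
    Returns per-stratum sample sizes (rounded)."""
    weights = [N * S for N, S in strata]
    total = sum(weights)
    return [round(n_total * w / total) for w in weights]

# Hypothetical population: the high-variance stratum gets oversampled.
alloc = neyman_allocation([(5000, 2.0), (3000, 6.0), (2000, 1.0)], n_total=300)
print(alloc)   # [100, 180, 20]
```

Note how the smallest stratum by variance receives far fewer units than proportional allocation would give it; this reallocation is the source of the efficiency gain the simulations quantify.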
Search method optimization technique for thermal design of high power RFQ structure
Sharma, N.K.; Joshi, S.C.
2009-01-01
RRCAT has taken up the development of a 3 MeV RFQ structure for the low-energy part of a 100 MeV H⁻ ion injector linac. The RFQ is a precision-machined resonating structure designed for high rf duty factor, and its structural stability during high rf power operation is an important design issue. The thermal analysis of the RFQ has been performed using the ANSYS finite element analysis software, and optimization of various parameters is attempted using the search method optimization technique. It is an effective optimization technique for systems governed by a large number of independent variables. The method involves examining a number of combinations of values of the independent variables and drawing conclusions from the magnitude of the objective function at these combinations. Since there is a continuous improvement in the objective function throughout the course of the search, these methods are very efficient. The method has been employed in the optimization of various parameters (the independent variables) involved in the RFQ thermal design, such as cooling water flow rate, cooling water inlet temperature, and cavity thickness. The temperature rise within the RFQ structure is the objective function in the thermal design. Using the ANSYS Parametric Design Language (APDL), various iterative programs were written and analyses performed to minimize the objective function. The dependency of the objective function on the various independent variables is established and the optimum values of the parameters are evaluated. The results of the analysis are presented in the paper. (author)
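The "examine combinations of independent variables and compare the objective" idea reduces to a grid search when the candidate values are enumerated. This is a hedged sketch only: the two variables, their candidate values, and the monotone temperature-rise surrogate are invented stand-ins for the finite-element runs that supply the real objective.

```python
import itertools

# Sketch of the search-method idea: evaluate the objective at combinations
# of the independent variables and keep the best one.

def temperature_rise(flow_lpm, inlet_c):
    # Hypothetical surrogate: more flow and a cooler inlet reduce the rise
    return 50.0 / flow_lpm + 0.5 * inlet_c

def grid_search(flows, inlets):
    best = min(itertools.product(flows, inlets),
               key=lambda p: temperature_rise(*p))
    return best, temperature_rise(*best)

combo, rise = grid_search(flows=[5, 10, 20], inlets=[20, 25, 30])
print(combo, rise)   # (20, 20) 12.5
```

In the actual workflow each `temperature_rise` evaluation would be an ANSYS run driven from an APDL loop, so pruning the combination grid is what keeps the study affordable.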
Development and verification of local/global analysis techniques for laminated composites
Griffin, O. Hayden, Jr.
1989-01-01
Analysis and design methods for laminated composite materials have been the subject of considerable research over the past 20 years, and are currently well developed. In performing the detailed three-dimensional analyses which are often required in proximity to discontinuities, however, analysts often encounter difficulties due to large models. Even with the current availability of powerful computers, models which are too large to run, from either a resource or a time standpoint, are often required. Several approaches can permit such analyses, including substructuring, the use of superelements or transition elements, and the global/local approach. This effort is based on the so-called zoom technique for global/local analysis, where a global analysis is run and its results are applied to a smaller region as boundary conditions, in as many iterations as required to attain an analysis of the desired region. Before beginning the global/local analyses, it was necessary to evaluate the accuracy of the three-dimensional elements currently implemented in the Computational Structural Mechanics (CSM) Testbed. It was also desired to install, using the Experimental Element Capability, a number of displacement-formulation elements which have well-known behavior when used for the analysis of laminated composites.
The Multipoint Global Shape Optimization of Flying Configuration with Movable Leading Edges Flaps
Adriana NASTASE
2012-12-01
The aerodynamically globally optimized (GO) shape of a flying configuration (FC), at two cruising Mach numbers, can be realized by morphing, with movable leading edge flaps used for this purpose. The equations of the surfaces of the wing, of the fuselage and of the flaps in stretched position are approximated in the form of superpositions of homogeneous polynomials in two variables with free coefficients. These coefficients, together with the similarity parameters of the planform of the FC, are the free parameters of the global optimization. Two enlarged variational problems with free boundaries occur. The first consists in the determination of the GO shape of the wing-fuselage FC, with the flaps in retracted position, which must be of minimum drag at the higher cruising Mach number. The second consists in the determination of the GO shape of the flaps in stretched position such that the entire FC is of minimum drag at the second, lower Mach number. The author's iterative optimum-optimorum (OO) theory is used to solve both enlarged variational problems. The inviscid GO shape of the FC is used only in the first step of iteration, and the author's hybrid solutions for the compressible Navier-Stokes partial differential equations (PDEs) are used to determine the friction drag coefficient from the second step of the OO iteration onward.
An efficient global energy optimization approach for robust 3D plane segmentation of point clouds
Dong, Zhen; Yang, Bisheng; Hu, Pingbo; Scherer, Sebastian
2018-03-01
Automatic 3D plane segmentation is necessary for many applications including point cloud registration, building information model (BIM) reconstruction, simultaneous localization and mapping (SLAM), and point cloud compression. However, most of the existing 3D plane segmentation methods still suffer from low precision and recall, and inaccurate and incomplete boundaries, especially for low-quality point clouds collected by RGB-D sensors. To overcome these challenges, this paper formulates the plane segmentation problem as a global energy optimization because it is robust to high levels of noise and clutter. First, the proposed method divides the raw point cloud into multiscale supervoxels, and considers planar supervoxels and individual points corresponding to nonplanar supervoxels as basic units. Then, an efficient hybrid region growing algorithm is utilized to generate initial plane set by incrementally merging adjacent basic units with similar features. Next, the initial plane set is further enriched and refined in a mutually reinforcing manner under the framework of global energy optimization. Finally, the performances of the proposed method are evaluated with respect to six metrics (i.e., plane precision, plane recall, under-segmentation rate, over-segmentation rate, boundary precision, and boundary recall) on two benchmark datasets. Comprehensive experiments demonstrate that the proposed method obtained good performances both in high-quality TLS point clouds (i.e., http://SEMANTIC3D.NET)
Protein structure modeling for CASP10 by multiple layers of global optimization.
Joo, Keehyoung; Lee, Juyong; Sim, Sangjin; Lee, Sun Young; Lee, Kiho; Heo, Seungryong; Lee, In-Ho; Lee, Sung Jong; Lee, Jooyoung
2014-02-01
In the template-based modeling (TBM) category of CASP10 experiment, we introduced a new protocol called protein modeling system (PMS) to generate accurate protein structures in terms of side-chains as well as backbone trace. In the new protocol, a global optimization algorithm, called conformational space annealing (CSA), is applied to the three layers of TBM procedure: multiple sequence-structure alignment, 3D chain building, and side-chain re-modeling. For 3D chain building, we developed a new energy function which includes new distance restraint terms of Lorentzian type (derived from multiple templates), and new energy terms that combine (physical) energy terms such as dynamic fragment assembly (DFA) energy, DFIRE statistical potential energy, hydrogen bonding term, etc. These physical energy terms are expected to guide the structure modeling especially for loop regions where no template structures are available. In addition, we developed a new quality assessment method based on random forest machine learning algorithm to screen templates, multiple alignments, and final models. For TBM targets of CASP10, we find that, due to the combination of three stages of CSA global optimizations and quality assessment, the modeling accuracy of PMS improves at each additional stage of the protocol. It is especially noteworthy that the side-chains of the final PMS models are far more accurate than the models in the intermediate steps. Copyright © 2013 Wiley Periodicals, Inc.
Wieberger, Florian; Kolb, Tristan; Neuber, Christian; Ober, Christopher K; Schmidt, Hans-Werner
2013-04-08
In this article we present several newly developed and improved combinatorial techniques for optimizing the processing conditions and material properties of organic thin films. The combinatorial approach allows investigation of multi-variable dependencies and is therefore well suited to studying organic thin films for high-performance applications. In this context we develop and establish the reliable preparation of gradients of material composition, temperature, exposure, and immersion time. Furthermore, we demonstrate how combinations of composition and processing gradients can be applied to create combinatorial libraries. First, a binary combinatorial library is created by applying two gradients perpendicular to each other. A third gradient is then carried out in very small areas arranged matrix-like over the entire binary combinatorial library, resulting in a ternary combinatorial library. Ternary combinatorial libraries allow precise trends to be identified for the optimization of multi-variable-dependent processes, which is demonstrated for the lithographic patterning process. Here we conclusively verify the strong interaction, and thus the interdependency, of variables in the preparation and properties of complex organic thin-film systems. The established gradient preparation techniques are not limited to lithographic patterning; they can be transferred to other multi-variable-dependent processes and used to investigate and optimize thin-film layers and devices for optical, electro-optical, and electronic applications.
Kiyotaka Masuda
2016-06-01
In Japan, greenhouse gas emissions from rice production, especially CH4 emissions from rice paddy fields, are the primary agricultural contributors to global warming. When prolonged midseason drainage for mitigating CH4 emissions from rice paddy fields is practiced together with environmentally friendly rice production based on reduced use of synthetic pesticides and chemical fertilizers, Japanese rice farmers can receive an agri-environmental direct payment. This paper examines the economic and environmental effects of the agri-environmental direct payment on the adoption of a global warming mitigation measure on Japanese rice farms, using a combined application of linear programming and life cycle assessment at the farm scale. Eco-efficiency, defined as net farm income divided by global warming potential, is used as an integrated indicator for assessing economic and environmental feasibility. The results show that, at the current direct payment level, the prolonged midseason drainage technique does not improve the eco-efficiency of Japanese rice farms, because practicing this technique in environmentally friendly rice production incurs large economic disadvantages in exchange for small environmental advantages. The direct payment rates for agri-environmental measures should be determined on the condition that environmentally friendly agricultural practices improve eco-efficiency compared with conventional agriculture.
Olha Pryhara
2006-03-01
This article examines existing techniques, and proposes its own, for analyzing the attractiveness of international commodity markets in light of the globalization of world economic processes. Taking into account the supranational nature of the world economic environment, the author introduces a multilevel system of indicators: market attractiveness at the mega-level (global level); market attractiveness at the macro-level (national level); and market attractiveness at the mezo-level (the level of an individual sector). The attractiveness of an international commodity market is taken to be the degree of conformity between market environment factors at the mega-, macro-, and mezo-levels and the economic interests of enterprises concerning entry into, and strategies for their activity on, international commodity markets in the short, medium, and long term. The author designs a stage-by-stage technique for strategically analyzing the attractiveness of international commodity markets in order to frame efficient market strategies for enterprises. Relying on the proposed techniques, she rates integrated indicators of market accessibility and of the possibility of realizing the economic interests of enterprises in target markets, bringing the index data into a matrix of «market accessibility – opportunity for realizing the economic interests of enterprises». Analyzing a country's position in the matrix makes it possible to frame efficient market strategies for enterprises.
Gas removal technique to maintain global environment. Chikyu kankyo hozen no tame no bojo gijutsu
Yamada, K [The University of Tokyo, Tokyo (Japan). Faculty of Engineering
1992-10-12
This paper describes techniques for removing gases such as CO2, SO2, and NOx, which are closely related to the preservation of the global environment, with particular attention to SO2 and NOx, the primary causes of acid rain. For the removal of CO2 generated from fixed sources (thermal power stations and others), separation techniques and isolation/fixation techniques have been researched and developed. Among the separation methods, the effectiveness of chemical absorption and adsorption has been proved in preliminary experiments. Isolation, by storage under the deep sea or underground, has been studied in various forms but may be only an urgent and temporary measure. The fixation of CO2 is a serious global problem related to afforestation and forests; fixation using coral reefs in the ocean as an absorption sink also has potential. As for the substances causing acid rain, desulfurization of petroleum and flue gas desulfurization have produced excellent results. Improved combustion methods and flue gas denitrification are used to remove NOx at fixed sources. Removing NOx from diesel cars is difficult compared with cleaning the exhaust gas of gasoline cars and has not been commercialized. 11 refs., 1 fig., 2 tabs.
Application of PIXE technique to studies on global warming/cooling effect of atmospheric aerosols
Kasahara, M.; Hoeller, R.; Tohno, S.; Onishi, Y.; Ma, C.-J.
2002-01-01
During the last decade, the importance of global warming has been recognized worldwide. Atmospheric aerosols play an important role in global warming/cooling effects, and the physicochemical properties of aerosol particles are fundamental to understanding them. In this study, the PIXE technique was applied to measure the average chemical properties of aerosols, and micro-PIXE was applied to investigate the mixing state of individual aerosol particles. The chemical composition data were used to estimate the optical properties of the aerosols. The average aerosol radiative forcing was -1.53 W/m² in Kyoto and +3.3 W/m² in Nagoya, indicating cooling and warming effects, respectively. The difference in radiative forcing between the two cities may be caused by the large difference in the chemical composition of the aerosols.
Validation of a simple isotopic technique for the measurement of global and separated renal function
Chachati, A.; Meyers, A.; Rigo, P.; Godon, J.P.
1986-01-01
Schlegel and Gates described an isotopic method for measuring global and separated glomerular filtration rate (GFR) and effective renal plasma flow (ERPF), based on determining with a scintillation camera the fraction of the injected dose (99mTc-DTPA and [131I]hippuran) present in the kidneys 1-3 min after administration. This method requires counting of the injected dose and attenuation correction, but no blood or urine sampling. We validated this technique by the simultaneous infusion of inulin and para-aminohippuric acid (PAH) in patients with various levels of renal function (anuric to normal). To better define individual renal function, we studied 9 kidneys in patients who were either nephrectomized or had a nephrostomy enabling separated function measurement. A good correlation between inulin clearance, PAH clearance, and the isotopic GFR-ERPF measurements was observed for both global and separate renal function.
Bandaru, Sunith; Deb, Kalyanmoy
2011-09-01
In this article, a methodology is proposed for automatically extracting innovative design principles which make a system or process (subject to conflicting objectives) optimal using its Pareto-optimal dataset. Such 'higher knowledge' would not only help designers to execute the system better, but also enable them to predict how changes in one variable would affect other variables if the system has to retain its optimal behaviour. This in turn would help solve other similar systems with different parameter settings easily without the need to perform a fresh optimization task. The proposed methodology uses a clustering-based optimization technique and is capable of discovering hidden functional relationships between the variables, objective and constraint functions and any other function that the designer wishes to include as a 'basis function'. A number of engineering design problems are considered for which the mathematical structure of these explicit relationships exists and has been revealed by a previous study. A comparison with the multivariate adaptive regression splines (MARS) approach reveals the practicality of the proposed approach due to its ability to find meaningful design principles. The success of this procedure for automated innovization is highly encouraging and indicates its suitability for further development in tackling more complex design scenarios.
Abdulbaset El Hadi Saad
2017-10-01
Advanced global optimization algorithms have been continuously introduced and improved to solve complex design optimization problems in which the objective and constraint functions can only be evaluated through computation-intensive numerical analyses or simulations, with a large number of design variables. The often implicit, multimodal, and ill-shaped objective and constraint functions, in high-dimensional and "black-box" forms, demand that the search be carried out with a low number of function evaluations, high search efficiency, and good robustness. This work investigates the performance of six recently introduced, nature-inspired global optimization methods: Artificial Bee Colony (ABC), Firefly Algorithm (FFA), Cuckoo Search (CS), Bat Algorithm (BA), Flower Pollination Algorithm (FPA), and Grey Wolf Optimizer (GWO). These approaches are compared in terms of search efficiency and robustness on a set of representative benchmark problems in smooth-unimodal, non-smooth unimodal, smooth multimodal, and non-smooth multimodal function forms. In addition, four classic engineering optimization examples and a real-life complex mechanical system design problem, floating offshore wind turbine design optimization, are used as additional test cases representing computationally expensive black-box global optimization problems. Results from this comparative study show that the ability of these global optimization methods to obtain a good solution diminishes as the dimension of the problem, i.e., the number of design variables, increases. Although none of these methods is universally capable, the study finds that GWO and ABC are on average more efficient than the other four at obtaining high-quality solutions efficiently and consistently, solving 86% and 80% of the tested benchmark problems, respectively. The research contributes to future improvements of global optimization methods.
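Of the six methods compared, the Grey Wolf Optimizer is among the simplest to state. The sketch below is a minimal textbook-style GWO, not the benchmark code used in the study; population size, iteration count, and the test function are illustrative choices.

```python
import numpy as np

def gwo(f, dim, bounds, n_wolves=20, iters=200, seed=0):
    """Minimal Grey Wolf Optimizer: each wolf moves relative to the three
    current best solutions (alpha, beta, delta); the exploration factor a
    decays linearly from 2 to 0 over the iterations."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    X = rng.uniform(lo, hi, (n_wolves, dim))
    best_x, best_f = None, np.inf
    for t in range(iters):
        fitness = np.array([f(x) for x in X])
        order = np.argsort(fitness)
        if fitness[order[0]] < best_f:          # keep the best solution seen
            best_f = float(fitness[order[0]])
            best_x = X[order[0]].copy()
        alpha, beta, delta = X[order[0]].copy(), X[order[1]].copy(), X[order[2]].copy()
        a = 2.0 * (1.0 - t / iters)
        for i in range(n_wolves):
            cand = np.zeros(dim)
            for leader in (alpha, beta, delta):
                A = 2.0 * a * rng.random(dim) - a   # |A|>1 explores, |A|<1 exploits
                C = 2.0 * rng.random(dim)
                cand += leader - A * np.abs(C * leader - X[i])
            X[i] = np.clip(cand / 3.0, lo, hi)      # average of the three pulls
    return best_x, best_f

# Smooth-unimodal example: the 5-D sphere function
best_x, best_f = gwo(lambda x: np.sum(x ** 2), dim=5, bounds=(-5.0, 5.0))
```

On such smooth-unimodal problems the population collapses onto the leaders as a shrinks, which is consistent with the study's finding that performance degrades mainly as the dimension grows.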
Mas-Coma, S; Bargues, M D; Valero, M A
2014-12-01
Before the 1990s, human fascioliasis diagnosis focused on individual patients in hospitals or health centres. Case reports came mainly from developed countries and usually concerned isolated human infections in animal endemic areas. From the mid-1990s onwards, owing to the progressive description of human endemic areas and reports of human infection in developing countries, together with new knowledge on clinical manifestations and pathology, new and hitherto neglected situations entered the global scenario. Human fascioliasis has proved to be markedly more heterogeneous than previously thought, including different transmission patterns and epidemiological situations. Stool and blood techniques, the main tools for diagnosis in humans, have been improved for both patient and survey diagnosis. Currently available approaches to human diagnosis are reviewed, focusing on advantages and weaknesses, sample management, egg differentiation, qualitative and quantitative diagnosis, antibody and antigen detection, post-treatment monitoring, and post-control surveillance. The main conclusions concern the pronounced difficulties of diagnosing fascioliasis in humans, given the different infection phases and parasite migration capacities, clinical heterogeneity, immunological complexity, different epidemiological situations and transmission patterns, and the lack of a diagnostic technique covering all needs and situations, as well as the advisability of a combined use of different techniques, including at least one stool technique and one blood technique.
Noha H. El-Amary
2018-03-01
This paper studies, through two different scheduling strategies, how replacing part of fossil-fuel electrical power generation with clean renewable energy affects the rate of growth of carbon dioxide emissions in seaports' atmospheres. The increased rate of harmful greenhouse gas emissions from conventional electrical power generation severely affects the global atmosphere, and carbon dioxide and other greenhouse gases are responsible for a significant share of global warming; developing countries contribute a large percentage of this environmental burden. Two suggested strategies for renewable electrical energy scheduling are discussed in this paper to attain a sustainable green port through the use of two mutually sequential clean renewable energies: biomass and photovoltaic (PV) energy. The first strategy, called the eco-availability mode, is a simple method based on operating the renewable electrical energy sources during their available operating time, taking into consideration only simple and basic technical issues, without sophisticated technical and economic models. The available operating time is determined by the environmental conditions. This strategy yields the maximum available biomass and PV energy generation under minimal environmental and technical conditions (panel efficiency, minimum average daily sunshine hours per month, minimum average solar insolation per month). The second strategy, called the Intelligent Scheduling (IS) mode, relies on a model based on an intelligent Reconfigured Whale Optimization Technique (RWOT); in this strategy, additional technical and economic issues are considered. The studied renewable electrical energy generation system is considered in two scenarios, with and without storage units. The objective (cost) function of the scheduling optimization problem, for
Jiang, He; Dong, Yao; Wang, Jianzhou; Li, Yuqin
2015-01-01
Highlights: • CS-hard-ridge-RBF and DE-hard-ridge-RBF are proposed to forecast solar radiation. • Pearson correlation and the Apriori algorithm are used to analyze correlations in the data. • A hard-ridge penalty is added to reduce the number of nodes in the hidden layer. • The CS and DE algorithms are used to determine the optimal parameters. • The two proposed models have higher forecasting accuracy than RBF and hard-ridge-RBF. - Abstract: Due to the scarcity of equipment and the high cost of maintenance, far fewer observations are made of solar radiation than of temperature, precipitation, and other weather factors. It is therefore increasingly important to study several relevant meteorological factors in order to forecast solar radiation accurately. For this research, monthly average global solar radiation and 12 meteorological parameters from 1998 to 2010 at four sites in the United States were collected. Pearson correlation coefficients and Apriori association rules were successfully used to analyze correlations in the data, which provided a basis for using these related parameters as input variables. Two effective and innovative methods were developed to forecast monthly average global solar radiation by converting an RBF neural network into a multiple linear regression problem, adding a hard-ridge penalty to reduce the number of nodes in the hidden layer, and applying intelligent optimization algorithms, such as cuckoo search (CS) and differential evolution (DE), to determine the optimal center and scale parameters. The experimental results show that the proposed models produce much more accurate forecasts than the other models.
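The key reformulation above, casting an RBF network as a linear regression on Gaussian basis features, can be sketched as follows. Note the assumptions: a plain ridge penalty stands in for the paper's hard-ridge term (which instead drives whole hidden nodes to zero), and the centers, gamma, and lambda are illustrative values rather than CS/DE-optimized ones.

```python
import numpy as np

def rbf_ridge_fit(X, y, centers, gamma, lam):
    """Fit an RBF network as ridge-penalized linear regression on Gaussian
    basis features Phi[i, j] = exp(-gamma * ||x_i - c_j||^2)."""
    Phi = np.exp(-gamma * np.square(X[:, None, :] - centers[None, :, :]).sum(-1))
    # closed-form ridge solution: w = (Phi^T Phi + lam I)^-1 Phi^T y
    return np.linalg.solve(Phi.T @ Phi + lam * np.eye(len(centers)), Phi.T @ y)

def rbf_predict(X, centers, gamma, w):
    Phi = np.exp(-gamma * np.square(X[:, None, :] - centers[None, :, :]).sum(-1))
    return Phi @ w

# Toy 1-D usage: fit a sine curve with 10 centers drawn from the data
X = np.linspace(0, 2 * np.pi, 50)[:, None]
y = np.sin(X[:, 0])
centers = X[::5]
w = rbf_ridge_fit(X, y, centers, gamma=1.0, lam=1e-6)
pred = rbf_predict(X, centers, gamma=1.0, w=w)
```

The linear-regression view is what makes center and scale parameters (here `centers` and `gamma`) the only nonlinear unknowns, which is exactly the part the paper hands to CS or DE.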
Negotiation and Optimality in an Economic Model of Global Climate Change
Gottinger, H. [International Institute for Environmental Economics and Management IIEEM, University of Maastricht, Maastricht (Netherlands)
2000-03-01
The paper addresses the problem of governmental intervention in a multi-country regime of controlling global climate change. Using a simplified case of a two-country, two-sector general equilibrium model the paper shows that the global optimal time path of economic outputs and temperature will converge to a unique steady state provided that consumers care enough about the future. To answer a set of questions relating to 'what will happen if governments decide to correct the problem of global warming?' we study the equilibrium outcome in a bargaining game where two countries negotiate an agreement on future consumption and production plans for the purpose of correcting the problem of climate change. It is shown that the agreement arising from such a negotiation process achieves the best outcome and that it can be implemented in decentralised economies by a system of taxes, subsidies and transfers. By employing the recent advances in non-cooperative bargaining theory, the agreement between two countries is derived endogenously through a well-specified bargaining procedure.
Prediction of energy demands using neural network with model identification by global optimization
Yokoyama, Ryohei; Wakui, Tetsuya; Satake, Ryoichi [Department of Mechanical Engineering, Osaka Prefecture University, 1-1 Gakuen-cho, Naka-ku, Sakai, Osaka 599-8531 (Japan)
2009-02-15
To operate energy supply plants properly, from the viewpoints of stable energy supply and energy and cost savings, it is important to predict energy demands accurately as a basic condition. Several methods of predicting energy demands have been proposed, one of which is to use neural networks. Although local optimization methods, such as gradient-based methods, have conventionally been adopted in the back-propagation procedure to identify the values of the model parameters, they have the significant drawback that they can derive only locally optimal solutions. In this paper, a global optimization method called the ''Modal Trimming Method'', proposed for non-linear programming problems, is adopted to identify the values of the model parameters. In addition, the trend and periodic change are first removed from the time series data on energy demand, and the converted data are used as the main input to a neural network. Furthermore, predicted values of air temperature and relative humidity are considered as additional inputs to the neural network, and their effect on the prediction of energy demand is investigated. This approach is applied to the prediction of the cooling demand in a building used for a benchmark test of a variety of prediction methods, and its validity and effectiveness are clarified. (author)
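The preprocessing step, removing the trend and periodic change before feeding the series to the network, can be sketched as below. This is a hypothetical minimal decomposition (linear trend plus a mean profile per phase of the period), not the paper's exact procedure.

```python
import numpy as np

def remove_trend_and_period(y, period):
    """Decompose a series into linear trend + mean periodic profile + residual.
    The residual plays the role of the 'converted data' fed to the network;
    trend and profile are added back after prediction."""
    t = np.arange(len(y), dtype=float)
    slope, intercept = np.polyfit(t, y, 1)               # least-squares trend
    detrended = y - (slope * t + intercept)
    # average over all occurrences of each phase within the period
    profile = np.array([detrended[p::period].mean() for p in range(period)])
    residual = detrended - profile[np.arange(len(y)) % period]
    return slope, intercept, profile, residual
```

For a demand series that really is trend plus a repeating daily or seasonal pattern, the residual is close to zero, leaving the network only the irregular component to learn.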
Globally optimal superconducting magnets part I: minimum stored energy (MSE) current density map.
Tieng, Quang M; Vegh, Viktor; Brereton, Ian M
2009-01-01
An optimal current density map is crucial in magnet design, as it provides the initial values within the search spaces of the optimization process that determines the final coil arrangement of the magnet. A strategy is outlined for obtaining globally optimal current density maps for designing magnets with coaxial cylindrical coils in which the stored energy is minimized within a constrained domain. The current density maps obtained with the proposed method suggest that peak current densities occur around the perimeter of the magnet domain, with adjacent peaks having alternating current directions for the most compact designs. As the dimensions of the domain are increased, the current density maps yield traditional magnet designs with positive current alone. These unique current density maps are obtained by minimizing the stored magnetic energy cost function and therefore suggest magnet coil designs of minimal system energy. Current density maps are provided for a number of different domain arrangements to illustrate the flexibility of the method and the quality of the achievable designs.
Schoenbrod, Betina; Quispe, Benjamin; Cattaneo, Alberto; Rodriguez, Ivanna; Chocron, Mauricio; Farias, Silvia
2012-09-01
Atucha II NPP is a 740 MWe Pressurized Vessel Heavy Water Reactor (PVHWR) designed by Siemens-KWU. After some years of delay, this NPP is in an advanced state of construction, with the start of commercial operation expected for 2013. Nucleoelectrica Argentina (N.A.S.A.) is the company in charge of the completion of this project and the future operation of the plant. The Comision Nacional de Energia Atomica (C.N.E.A.) is the country's nuclear R&D institution which, among many other activities, provides technical support to the stations. The Commissioning Chemistry Division of CNA II is in charge of commissioning the demineralized water plant and organizing the chemical laboratory. The water plant started operating successfully in July 2010 and is providing the plant with water of nuclear-grade purity. Currently, several activities are taking place in the conventional ('cold') laboratory. On the one hand, analytical techniques for the future operation of the plant are being tested and optimized; on the other, the laboratory is participating in the cleaning and conservation of the different components of the plant, providing technical support and the necessary analyses. To define the analytical techniques for normal plant operation, the parameters to be measured and their ranges were established in the Chemistry Manual, and the necessary equipment and reagents were purchased. In this work, a summary of the analytical techniques being implemented and optimized is presented. Common anions (chloride, sulfate, fluoride, bromide, and nitrate) are analyzed by ion chromatography. Cations, mainly sodium, are determined by absorption spectrometry. A UV-Vis spectrometer is used to determine silicates, iron, ammonia, COD, total solids, true color, and turbidity. TOC measurements are performed with a TOC analyzer. To optimize the methods, several parameters are evaluated: linearity, detection and quantification limits, precision and
Determination of the optimal tolerance for MLC positioning in sliding window and VMAT techniques
Hernandez, V.; Abella, R.; Calvo, J. F.; Jurado-Bruggemann, D.; Sancho, I.; Carrasco, P.
2015-01-01
Purpose: Several authors have recommended a 2 mm tolerance for multileaf collimator (MLC) positioning in sliding window treatments. In volumetric modulated arc therapy (VMAT) treatments, however, the optimal tolerance for MLC positioning remains unknown. In this paper, the authors present the results of a multicenter study to determine the optimal tolerance for both techniques. Methods: The procedure used is based on dynalog file analysis. The study was carried out using seven Varian linear accelerators from five different centers. Dynalogs were collected from over 100 000 clinical treatments and in-house software was used to compute the number of tolerance faults as a function of the user-defined tolerance. Thus, the optimal value for this tolerance, defined as the lowest achievable value, was investigated. Results: Dynalog files accurately predict the number of tolerance faults as a function of the tolerance value, especially for low fault incidences. All MLCs behaved similarly and the Millennium120 and the HD120 models yielded comparable results. In sliding window techniques, the number of beams with an incidence of hold-offs >1% rapidly decreases for a tolerance of 1.5 mm. In VMAT techniques, the number of tolerance faults sharply drops for tolerances around 2 mm. For a tolerance of 2.5 mm, less than 0.1% of the VMAT arcs presented tolerance faults. Conclusions: Dynalog analysis provides a feasible method for investigating the optimal tolerance for MLC positioning in dynamic fields. In sliding window treatments, the tolerance of 2 mm was found to be adequate, although it can be reduced to 1.5 mm. In VMAT treatments, the typically used 5 mm tolerance is excessively high. Instead, a tolerance of 2.5 mm is recommended
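The core of the dynalog analysis above, counting tolerance faults as a function of the user-defined tolerance, reduces to a small computation once planned and actual leaf positions are available. The sketch below is a hypothetical illustration on synthetic arrays; it does not reproduce the Varian dynalog file format or the authors' in-house software.

```python
import numpy as np

def fault_fraction(planned, actual, tolerances):
    """Fraction of control points at which any leaf's planned-vs-actual
    deviation exceeds each candidate tolerance (a tolerance fault).
    planned/actual: arrays of shape (control_points, leaves), in mm."""
    worst = np.abs(planned - actual).max(axis=1)   # worst leaf per control point
    return np.array([(worst > tol).mean() for tol in tolerances])
```

Sweeping `tolerances` over a grid reproduces the kind of curve used in the study: the fault fraction is non-increasing in the tolerance, and the knee of the curve (around 1.5 mm for sliding window, 2-2.5 mm for VMAT, per the abstract) indicates the lowest achievable setting.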
Wang, Yong; Cai, Zixing; Zhou, Yuren
2009-01-01
A novel approach to numerical and engineering constrained optimization problems, which incorporates a hybrid evolutionary algorithm and an adaptive constraint-handling technique, is presented in this paper. The hybrid evolutionary algorithm simultaneously uses simplex crossover and two mutation operators to generate the offspring population. Additionally, the adaptive constraint-handling technique distinguishes three main situations; for each situation, a constraint-handling mechanism is designed based on the current population state. Experiments on 13 benchmark test functions and four well-known constrained design problems verify the effectiveness and efficiency of the proposed method. The experimental results show that integrating the hybrid evolutionary algorithm with the adaptive constraint-handling technique is beneficial, and the proposed method achieves competitive…
Consistency of seven different GNSS global ionospheric mapping techniques during one solar cycle
Roma-Dollase, David; Hernández-Pajares, Manuel; Krankowski, Andrzej; Kotulak, Kacper; Ghoddousi-Fard, Reza; Yuan, Yunbin; Li, Zishen; Zhang, Hongping; Shi, Chuang; Wang, Cheng; Feltens, Joachim; Vergados, Panagiotis; Komjathy, Attila; Schaer, Stefan; García-Rigo, Alberto; Gómez-Cama, José M.
2018-06-01
In the context of the International GNSS Service (IGS), several IGS Ionosphere Associated Analysis Centers have developed different techniques for providing global ionospheric maps (GIMs) of vertical total electron content (VTEC) since 1998. In this paper we present a comparison of the performance of all the GIMs created in the frame of the IGS, comparing the classical ones (from the ionospheric analysis centers CODE, ESA/ESOC, JPL, and UPC) with the new ones (NRCAN, CAS, WHU). To assess their quality in fair and completely independent ways, two assessment methods are used: direct comparison to altimeter data (VTEC-altimeter) and to differences of slant total electron content (STEC) observed at independent ground reference stations (dSTEC-GPS). The main conclusion of this study, performed over one solar cycle, is the consistency of the results across so many different GIM techniques and implementations.
Delahaye, P., E-mail: delahaye@ganil.fr; Jardin, P.; Maunoury, L. [GANIL, CEA/DSM-CNRS/IN2P3, Blvd. Becquerel, BP 55027, 14076 Caen Cedex 05 (France); Galatà, A.; Patti, G. [INFN–Laboratori Nazionali di Legnaro, Viale dell’Università 2, 35020 Legnaro (Padova) (Italy); Angot, J.; Lamy, T.; Thuillier, T. [LPSC–Université Grenoble Alpes–CNRS/IN2P3, 53 rue des Martyrs, 38026 Grenoble Cedex (France); Cam, J. F.; Traykov, E.; Ban, G. [LPC Caen, 6 Blvd. Maréchal Juin, 14050 Caen Cedex (France); Celona, L. [INFN–Laboratori Nazionali del Sud, via S. Sofia 62, 95125 Catania (Italy); Choinski, J.; Gmaj, P. [Heavy Ion Laboratory, University of Warsaw, ul. Pasteura 5a, 02 093 Warsaw (Poland); Koivisto, H.; Kolhinen, V.; Tarvainen, O. [Department of Physics, University of Jyväskylä, PB 35 (YFL), 40351 Jyväskylä (Finland); Vondrasek, R. [Argonne National Laboratory, 9700 S. Cass Avenue, Argonne, Illinois 60439 (United States); Wenander, F. [ISOLDE, CERN, 1211 Geneva 23 (Switzerland)
2016-02-15
The present paper summarizes the results obtained over the past few years in the framework of the Enhanced Multi-Ionization of short-Lived Isotopes for Eurisol (EMILIE) project. The EMILIE project aims at improving charge breeding techniques with both Electron Cyclotron Resonance Ion Sources (ECRIS) and Electron Beam Ion Sources (EBIS) for European Radioactive Ion Beam (RIB) facilities. Within EMILIE, an original technique for debunching the beam from EBIS charge breeders is being developed, to make optimal use of the capabilities of the CW post-accelerators of the future facilities. Such a debunching technique should eventually resolve the duty cycle and time structure issues which presently complicate the data acquisition of experiments. The results of the first tests of this technique are reported here. In comparison with charge breeding with an EBIS, the ECRIS technique has had lower efficiency and attainable charge state for metallic ion beams and has also suffered from beam contamination issues. In recent years, improvements have been made which significantly reduce the differences between the two techniques, making ECRIS charge breeding more attractive, especially for CW machines producing intense beams. Upgraded versions of the Phoenix charge breeder, originally developed by LPSC, will be used at SPES and GANIL/SPIRAL. These two charge breeders have benefited from studies undertaken within EMILIE, which are also briefly summarized here.
Libraro, Paola
The general electric propulsion orbit-raising maneuver of a spacecraft must contend with four main limiting factors: the longer time of flight, multiple eclipses prohibiting continuous thrusting, long exposure to radiation from the Van Allen belt, and the high power requirement of the electric engines. In order to optimize a low-thrust transfer with respect to these challenges, the choice of coordinates and corresponding equations of motion used to describe the kinematical and dynamical behavior of the satellite is of critical importance. This choice can potentially affect the numerical optimization process as well as limit the set of mission scenarios that can be investigated. To increase the ability to determine the feasible set of mission scenarios able to address the challenges of an all-electric orbit-raising, a set of equations free of any singularities is required to consider a completely arbitrary injection orbit. For this purpose, a new quaternion-based formulation of spacecraft translational dynamics that is globally nonsingular has been developed. The minimum-time low-thrust problem has been solved using the new set of equations of motion inside a direct optimization scheme in order to investigate optimal low-thrust trajectories over the full range of injection orbit inclinations between 0 and 90 degrees, with particular focus on high inclinations. The numerical results consider a specific mission scenario in order to analyze three key aspects of the problem: the effect of the initial guess on the shape and duration of the transfer, the effect of Earth oblateness on transfer time, and the role played by radiation damage and power degradation in all-electric minimum-time transfers. Finally, trade-offs between mass and cost savings are introduced through a test case.
Artificial intelligent techniques for optimizing water allocation in a reservoir watershed
Chang, Fi-John; Chang, Li-Chiu; Wang, Yu-Chung
2014-05-01
This study proposes a systematic water allocation scheme that integrates system analysis with artificial intelligence (AI) techniques for reservoir operation, in consideration of the great hydrometeorological uncertainty, to mitigate drought impacts on the public and irrigation sectors. The AI techniques mainly include a genetic algorithm (GA) and an adaptive network-based fuzzy inference system (ANFIS). We first derive evaluation diagrams through systematic interactive evaluations of long-term hydrological data to provide a clear simulation perspective of all possible drought conditions, tagged with their corresponding water shortages; we then search for the optimal reservoir operating histogram using the GA, given demands and hydrological conditions, which serves as the optimal base of input-output training patterns for modelling; and finally we build a suitable water allocation scheme by constructing an ANFIS model that learns the mechanism between designed inputs (water discount rates and hydrological conditions) and outputs (two scenarios: simulated and optimized water deficiency levels). The effectiveness of the proposed approach is tested on the operation of the Shihmen Reservoir in northern Taiwan for the first paddy crop in the study area, to assess the water allocation mechanism during drought periods. We demonstrate that the proposed water allocation scheme reliably helps water managers determine a suitable discount rate on water supply for both the irrigation and public sectors, and thus can reduce the drought risk and the compensation costs induced by restricting agricultural water use.
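The genetic-algorithm step of such a scheme can be sketched generically. The snippet below is a minimal real-coded GA maximizing a toy one-variable objective, a hypothetical stand-in for the reservoir release objective, not the authors' model:

```python
import random

def genetic_algorithm(fitness, bounds, pop_size=30, generations=60,
                      mut_rate=0.1, seed=0):
    """Minimal real-coded GA: elitism, blend crossover, Gaussian mutation."""
    rng = random.Random(seed)
    lo, hi = bounds
    pop = [rng.uniform(lo, hi) for _ in range(pop_size)]
    for _ in range(generations):
        elite = sorted(pop, key=fitness, reverse=True)[: pop_size // 2]
        children = []
        while len(children) < pop_size - len(elite):
            a, b = rng.sample(elite, 2)
            w = rng.random()
            child = w * a + (1 - w) * b                   # blend crossover
            if rng.random() < mut_rate:                   # Gaussian mutation
                child += rng.gauss(0, 0.1 * (hi - lo))
            children.append(min(hi, max(lo, child)))
        pop = elite + children
    return max(pop, key=fitness)

# hypothetical release-rule objective: penalize deviation from a target release
best = genetic_algorithm(lambda x: -(x - 3.7) ** 2, bounds=(0.0, 10.0))
```

In the paper the GA output plays the role of the optimal input-output training patterns fed to the ANFIS model; here the objective is a one-line placeholder.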
Andriani, Dian; Wresta, Arini; Atmaja, Tinton Dwi; Saepudin, Aep
2014-02-01
Biogas from the anaerobic digestion of organic materials is a renewable energy resource that consists mainly of CH4 and CO2. Trace components often present in biogas are water vapor, hydrogen sulfide, siloxanes, hydrocarbons, ammonia, oxygen, carbon monoxide, and nitrogen. Considering that biogas is a clean and renewable form of energy that could well substitute conventional sources of energy (fossil fuels), the optimization of this type of energy becomes substantial. Various optimization techniques for the biogas production process have been developed, including pretreatment, biotechnological approaches, co-digestion, and the use of serial digesters. For some applications, a certain degree of biogas purity is needed. The presence of CO2 and other trace components in biogas can adversely affect engine performance. Reducing the CO2 content significantly upgrades the quality of biogas and enhances its calorific value. Upgrading is generally performed in order to meet the standards for use as vehicle fuel or for injection into the natural gas grid. Different methods for biogas upgrading are used; they differ in functioning, the required quality of the incoming gas, and efficiency. Biogas can be purified of CO2 using pressure swing adsorption, membrane separation, or physical or chemical CO2 absorption. This paper reviews the various techniques which can be used to optimize biogas production as well as to upgrade biogas quality.
Nair, Archana; Singh, Gurjeet; Mohanty, U. C.
2018-01-01
The monthly prediction of summer monsoon rainfall is very challenging because of its complex and chaotic nature. In this study, a non-linear technique known as the Artificial Neural Network (ANN) has been employed on the outputs of Global Climate Models (GCMs) to bring out the vagaries inherent in monthly rainfall prediction. The GCMs considered in the study are from the International Research Institute (IRI) (2-tier CCM3v6) and the National Centre for Environmental Prediction (coupled CFSv2). The ANN technique is applied to different ensemble members of the individual GCMs to obtain monthly-scale predictions over India as a whole and over its spatial grid points. In the present study, double cross-validation and a simple randomization technique were used to avoid over-fitting during the training process of the ANN model. The performance of the ANN-predicted rainfall from the GCMs is judged by analysing the absolute error, box plots, percentiles and the difference in linear error in probability space. Results suggest that there is significant improvement in the prediction skill of these GCMs after applying the ANN technique. The performance analysis reveals that the ANN model is able to capture the year-to-year variations in the monsoon months with fairly good accuracy, in extreme years as well. The ANN model is also able to simulate the correct signs of rainfall anomalies over different spatial points of the Indian domain.
Automatic spinal cord localization, robust to MRI contrasts using global curve optimization.
Gros, Charley; De Leener, Benjamin; Dupont, Sara M; Martin, Allan R; Fehlings, Michael G; Bakshi, Rohit; Tummala, Subhash; Auclair, Vincent; McLaren, Donald G; Callot, Virginie; Cohen-Adad, Julien; Sdika, Michaël
2018-02-01
During the last two decades, MRI has been increasingly used to provide valuable quantitative information about spinal cord morphometry, such as quantification of spinal cord atrophy in various diseases. However, despite the significant improvement of MR sequences adapted to the spinal cord, automatic image processing tools for spinal cord MRI data are not yet as developed as for the brain. There is nonetheless great interest in fully automatic and fast processing methods to enable quantitative analysis pipelines on large datasets without user bias. The first step of most of these analysis pipelines is to detect the spinal cord, which is challenging to achieve automatically across the broad range of MRI contrasts, fields of view, resolutions and pathologies. In this paper, a fully automated, robust and fast method for detecting the spinal cord centerline on MRI volumes is introduced. The algorithm uses a global optimization scheme that strikes a balance between a probabilistic localization map of the spinal cord center point and the overall spatial consistency of the spinal cord centerline (i.e. the rostro-caudal continuity of the spinal cord). Additionally, a new post-processing feature that automatically splits brain and spine regions is introduced, so that a consistent spinal cord centerline can be detected independently of the field of view. We present data on the validation of the proposed algorithm, known as "OptiC", from a large dataset involving 20 centers, 4 contrasts (T2-weighted n = 287, T1-weighted n = 120, T2*-weighted n = 307, diffusion-weighted n = 90) and 501 subjects, including 173 patients with a variety of neurologic diseases. Validation involved the gold-standard centerline coverage, the mean square error between the true and predicted centerlines and the ability to accurately separate brain and spine regions. Overall, OptiC was able to cover 98.77% of the gold-standard centerline, with a
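The balance between a localization map and rostro-caudal continuity described above can be illustrated with a Viterbi-style dynamic program over slices. This is a one-dimensional sketch under assumed inputs, not the published OptiC implementation:

```python
import numpy as np

def centerline_dp(prob, smooth=1.0):
    """Pick one column per axial slice so as to globally maximize the
    localization score minus a penalty on jumps between consecutive slices."""
    n_slices, n_cols = prob.shape
    cols = np.arange(n_cols)
    score = prob[0].astype(float)
    back = np.zeros((n_slices, n_cols), dtype=int)
    for z in range(1, n_slices):
        # trans[i, j]: score of arriving at column i from column j of slice z-1
        trans = score[None, :] - smooth * np.abs(cols[:, None] - cols[None, :])
        back[z] = np.argmax(trans, axis=1)
        score = prob[z] + np.max(trans, axis=1)
    path = [int(np.argmax(score))]
    for z in range(n_slices - 1, 0, -1):   # backtrack the optimal path
        path.append(int(back[z][path[-1]]))
    return path[::-1]
```

With the smoothness term active, a single bright outlier in one slice does not pull the centerline away from an otherwise consistent column.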
Population Structures in Russia: Optimality and Dependence on Parameters of Global Evolution
Yuri Yegorov
2016-07-01
The paper is devoted to an analytical investigation of the division of geographical space into urban and rural areas, with application to Russia. Yegorov (2005, 2006, 2009) has suggested the role of population density in economics. A city has an attractive potential based on scale economies. The optimal city size depends on the balance between its attractive potential and the cost of living, which can be approximated by equilibrium land rent and commuting cost. For moderate scale effects, the optimal population of a city depends negatively on transport costs, which are related positively to energy prices. An optimal agricultural density of population can also be constructed. The larger the land slot per peasant, the higher the output from one unit of labour applied to it; but at the same time, a larger farm size increases the energy costs related to land development, collecting the crop and bringing it to market. In the last 10 years we have observed a substantial rise of both food and energy prices on world markets. However, the income of farmers did not grow as fast as the food price index. This can shift the optimal rural population density to a lower level, causing migration to cities (and we observe this tendency globally). Any change in those prices makes existing spatial structures suboptimal. If changes are slow, the optimal infrastructure can be adjusted by simple migration. If the shocks are large, adaptation may be impossible and the shock will persist. This took place in the early 1990s in the former USSR, where, after the transition to the world price for oil in domestic markets, the existing spatial infrastructure became suboptimal and resulted in a persistent crisis, leading to the deterioration of both industry and agriculture. Russia is the largest country, but this is also its problem: having a large resource endowment per capita, it is problematic to build sufficient infrastructure. Russia has too low population
Optimizing Orbit-Instrument Configuration for Global Precipitation Mission (GPM) Satellite Fleet
Smith, Eric A.; Adams, James; Baptista, Pedro; Haddad, Ziad; Iguchi, Toshio; Im, Eastwood; Kummerow, Christian; Einaudi, Franco (Technical Monitor)
2001-01-01
Following the scientific success of the Tropical Rainfall Measuring Mission (TRMM) spearheaded by a group of NASA and NASDA scientists, their external scientific collaborators, and additional investigators within the European Union's TRMM Research Program (EUROTRMM), there has been substantial progress towards the development of a new internationally organized, global scale, and satellite-based precipitation measuring mission. The highlights of this newly developing mission are a greatly expanded scope of measuring capability and a more diversified set of science objectives. The mission is called the Global Precipitation Mission (GPM). Notionally, GPM will be a constellation-type mission involving a fleet of nine satellites. In this fleet, one member is referred to as the "core" spacecraft flown in an approximately 70 degree inclined non-sun-synchronous orbit, somewhat similar to TRMM in that it carries both a multi-channel polarized passive microwave radiometer (PMW) and a radar system, but in this case it will be a dual frequency Ku-Ka band radar system enabling explicit measurements of microphysical DSD properties. The remainder of fleet members are eight orbit-synchronized, sun-synchronous "constellation" spacecraft each carrying some type of multi-channel PMW radiometer, enabling no worse than 3-hour diurnal sampling over the entire globe. In this configuration the "core" spacecraft serves as a high quality reference platform for training and calibrating the PMW rain retrieval algorithms used with the "constellation" radiometers. Within NASA, GPM has advanced to the pre-formulation phase which has enabled the initiation of a set of science and technology studies which will help lead to the final mission design some time in the 2003 period. This presentation first provides an overview of the notional GPM program and mission design, including its organizational and programmatic concepts, scientific agenda, expected instrument package, and basic flight
Hosseini-Ashrafi, M.E.; Bagherebadian, H.; Yahaqi, E.
1999-01-01
A method has been developed which, by using geometric information from sample treatment cases, selects from a given data set an initial treatment plan as a step towards treatment plan optimization. The method uses an artificial neural network (ANN) classification technique to select the best matching plan from the 'optimized' ANN database. Separate back-propagation ANN classifiers were trained using 50, 60 and 77 examples for three groups of treatment case classes (up to 21 examples from each class were used). The performance of the classifiers in selecting the correct treatment class was tested using the leave-one-out method, and the networks were optimized with respect to their architecture. For the three groups used in this study, successful classification fractions of 0.83, 0.98 and 0.93 were achieved by the optimized ANN classifiers. The automated response of the ANN may be used to arrive at a pre-plan in which many treatment parameters are identified, so a significant reduction in the steps required to arrive at the optimum plan may be achieved. Treatment planning 'experience' and results from lengthy calculations may be used for training the ANN. (author)
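The leave-one-out protocol used to score the classifiers can be sketched generically. A nearest-neighbour rule stands in for the back-propagation ANN here, and the data are made up:

```python
def nearest_neighbour(train, x):
    """Toy stand-in classifier: label of the closest training sample."""
    return min(train, key=lambda sl: sum((a - b) ** 2 for a, b in zip(sl[0], x)))[1]

def leave_one_out_accuracy(samples, labels, classify):
    """Hold each case out in turn, train on the rest, score the prediction."""
    correct = 0
    for i in range(len(samples)):
        train = [(s, l) for j, (s, l) in enumerate(zip(samples, labels)) if j != i]
        correct += classify(train, samples[i]) == labels[i]
    return correct / len(samples)
```

The successful classification fractions quoted in the abstract correspond to the value returned by `leave_one_out_accuracy` with the trained ANN in place of the toy classifier.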
Hernandez, Wilmar
2007-01-01
In this paper, a survey of recent applications of optimal signal processing techniques to improve the performance of mechanical sensors is presented. A comparison between classical filters and optimal filters for automotive sensors is made, and the current state of the art in applying robust and optimal control and signal processing techniques to the design of the intelligent (or smart) sensors that today's cars need is presented through several experimental results, which show that the fusion of intelligent sensors and optimal signal processing techniques is the clear way forward. However, the switch between traditional methods of designing automotive sensors and the new ones cannot be made overnight, because some open research issues remain to be solved. This paper draws attention to one of these open research issues and tries to arouse researchers' interest in the fusion of intelligent sensors and optimal signal processing techniques.
Optimized scheduling technique of null subcarriers for peak power control in 3GPP LTE downlink.
Cho, Soobum; Park, Sang Kyu
2014-01-01
Orthogonal frequency division multiple access (OFDMA) is a key multiple access technique for the long term evolution (LTE) downlink. However, a high peak-to-average power ratio (PAPR) can degrade power efficiency. The well-known PAPR reduction technique of dummy sequence insertion (DSI) can be a realistic solution because of its structural simplicity; however, using many subcarriers for the dummy sequences may decrease the transmitted data rate. In this paper, a novel DSI scheme is applied to the LTE system. Firstly, we obtain the null subcarriers in single-input single-output (SISO) and multiple-input multiple-output (MIMO) systems, respectively; then, optimized dummy sequences are inserted into the obtained null subcarriers. Simulation results show that the Walsh-Hadamard transform (WHT) sequence is the best dummy sequence, and that a ratio of 16 to 20 between WHT and randomly generated sequences gives the maximum PAPR reduction performance. The near-optimal number of iterations is derived to prevent excessive iteration. It is also shown that the proposed technique causes no bit error rate (BER) degradation in the LTE downlink system.
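The PAPR metric and the dummy-sequence search can be sketched as follows. The subcarrier counts, null positions and candidate sequences below are illustrative, not the LTE-specific values of the paper:

```python
import numpy as np

def papr_db(freq_symbols):
    """PAPR of one OFDM symbol: IFFT to time domain, peak/mean power in dB."""
    x = np.fft.ifft(freq_symbols)
    power = np.abs(x) ** 2
    return 10 * np.log10(power.max() / power.mean())

def dsi_papr(data, null_idx, candidates):
    """Dummy sequence insertion: try each candidate on the null subcarriers
    and keep the one giving the lowest PAPR."""
    best = None
    for dummy in candidates:
        sym = data.copy()
        sym[null_idx] = dummy
        p = papr_db(sym)
        if best is None or p < best[0]:
            best = (p, sym)
    return best

rng = np.random.default_rng(0)
qpsk = np.exp(1j * (np.pi / 2 * rng.integers(0, 4, 64) + np.pi / 4))
null_idx = np.arange(48, 56)                       # illustrative null positions
candidates = [np.zeros(8)] + [rng.choice([-1.0, 1.0], 8) for _ in range(16)]
papr_best, _ = dsi_papr(qpsk, null_idx, candidates)
```

Because leaving the null subcarriers empty is itself one of the candidates, the search can never do worse than transmitting without dummy sequences.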
A New Method for Global Optimization Based on Stochastic Differential Equations.
1984-12-01
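The stochastic-differential-equation approach to global optimization named in this title is commonly realized as annealed Langevin dynamics. The sketch below illustrates that generic idea; the test function, cooling schedule and step sizes are illustrative assumptions, not the report's algorithm:

```python
import math
import random

def sde_global_min(f, grad, x0, steps=60000, dt=1e-3, t0=3.0, seed=0):
    """Euler-Maruyama integration of dx = -f'(x) dt + sqrt(2 T(t)) dW with a
    slowly decreasing temperature, so early noise can kick the iterate out of
    shallow local minima; the best point ever visited is kept."""
    rng = random.Random(seed)
    x = float(x0)
    best, fbest = x, f(x)
    for k in range(1, steps + 1):
        temp = t0 / math.log(math.e + k)           # logarithmic cooling
        x += -grad(x) * dt + math.sqrt(2 * temp * dt) * rng.gauss(0.0, 1.0)
        fx = f(x)
        if fx < fbest:
            best, fbest = x, fx
    return best

# tilted double well: local minimum near x = +0.96, global minimum near x = -1.02
f = lambda x: (x * x - 1) ** 2 + 0.3 * x
grad = lambda x: 4 * x * (x * x - 1) + 0.3
xmin = sde_global_min(f, grad, x0=1.0)             # start in the worse well
```

Starting from the shallower well, the noise term typically carries the iterate over the barrier to the global minimum, which a pure gradient flow could never do.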
Kucukgoz, Mehmet; Harmanci, Oztan; Mihcak, Mehmet K.; Venkatesan, Ramarathnam
2005-03-01
In this paper, we propose a novel semi-blind video watermarking scheme, where we use pseudo-random robust semi-global features of video in the three dimensional wavelet transform domain. We design the watermark sequence via solving an optimization problem, such that the features of the mark-embedded video are the quantized versions of the features of the original video. The exact realizations of the algorithmic parameters are chosen pseudo-randomly via a secure pseudo-random number generator, whose seed is the secret key, that is known (resp. unknown) by the embedder and the receiver (resp. by the public). We experimentally show the robustness of our algorithm against several attacks, such as conventional signal processing modifications and adversarial estimation attacks.
Zhongbo Sun
2014-01-01
Two modified three-term conjugate gradient algorithms which satisfy both the descent condition and the Dai-Liao type conjugacy condition are presented for unconstrained optimization. The first algorithm is a modification of the Hager-Zhang type algorithm such that the search direction is descent and satisfies the Dai-Liao type conjugacy condition. The second, a simple three-term conjugate gradient method, generates sufficient descent directions at every iteration; moreover, this property is independent of the step-length line search. The algorithms can also be considered as modifications of the MBFGS method, but with different zk. Under mild conditions, the given methods are globally convergent for general functions, independently of the Wolfe line search. Numerical experiments show that the proposed methods are very robust and efficient.
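The sufficient-descent property that holds independently of the line search can be seen in a minimal three-term direction update: with beta = g.y/(d.y) and theta = g.d/(d.y), the new direction satisfies g.d = -||g||^2 exactly. The sketch below illustrates this generic construction, not the paper's two specific methods:

```python
import numpy as np

def three_term_cg(f, grad, x0, iters=500, tol=1e-10):
    """Three-term CG sketch: d = -g + beta*d_prev - theta*y, where
    beta = g.y/(d.y) and theta = g.d/(d.y) give g.d = -||g||^2
    (sufficient descent) regardless of the step length; Armijo backtracking."""
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d = -g
    for _ in range(iters):
        if np.linalg.norm(g) < tol:
            break
        alpha, fx, gd = 1.0, f(x), g.dot(d)
        while f(x + alpha * d) > fx + 1e-4 * alpha * gd and alpha > 1e-16:
            alpha *= 0.5                           # Armijo backtracking
        x_new = x + alpha * d
        g_new = grad(x_new)
        y = g_new - g
        dy = d.dot(y)
        if abs(dy) > 1e-14:
            d = -g_new + (g_new.dot(y) / dy) * d - (g_new.dot(d) / dy) * y
        else:
            d = -g_new                             # restart on breakdown
        x, g = x_new, g_new
    return x

# convex quadratic test problem: minimize 0.5 x'Ax - b'x, solution A x = b
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, 1.0])
x_star = three_term_cg(lambda x: 0.5 * x @ A @ x - b @ x,
                       lambda x: A @ x - b, np.zeros(2))
```

Expanding g.d for the three-term direction, the beta and theta contributions cancel, leaving exactly -||g||^2, which is why descent here does not depend on the Wolfe conditions.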
G. Senthilkumar
2014-09-01
In this work, the transesterification of sunflower oil for obtaining biodiesel was studied. Taguchi's methodology (an L9 orthogonal array) was selected to optimize the most significant variables (methanol amount, catalyst concentration and stirrer speed) in the transesterification process. Experiments were conducted based on the L9 orthogonal array developed using the Taguchi technique. Analysis of variance (ANOVA) and regression equations were used to find the optimum yield of sunflower methyl ester under the influence of methanol, catalyst and stirrer speed. The study resulted in a maximum sunflower methyl ester yield of 96% under the optimal conditions of 110 ml methanol with 0.5% by wt. of sodium hydroxide (NaOH), stirred at 1200 rpm. The yield was analyzed on the basis of "larger is better". Finally, confirmation tests were carried out to verify the experimental results.
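The "larger is better" analysis over an L9 array can be sketched as follows. The array is the standard L9(3^4) layout with three factors used, and the yields in the test are invented for illustration, not the paper's measurements:

```python
import math

# standard L9 orthogonal array, three factors at levels 0/1/2
L9 = [(0, 0, 0), (0, 1, 1), (0, 2, 2),
      (1, 0, 1), (1, 1, 2), (1, 2, 0),
      (2, 0, 2), (2, 1, 0), (2, 2, 1)]

def sn_larger_is_better(ys):
    """Taguchi signal-to-noise ratio for 'larger is better' responses."""
    return -10 * math.log10(sum(1 / y ** 2 for y in ys) / len(ys))

def best_levels(yields):
    """Main-effects analysis: mean S/N at each level of each factor;
    keep the level with the highest mean."""
    sn = [sn_larger_is_better([y]) for y in yields]
    best = []
    for factor in range(3):
        means = [sum(sn[i] for i, run in enumerate(L9) if run[factor] == lv) / 3
                 for lv in range(3)]
        best.append(max(range(3), key=lambda lv: means[lv]))
    return best
```

Each factor level appears in exactly three of the nine runs, which is what lets the main effects be separated from only nine experiments.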
Development of a parameter optimization technique for the design of automatic control systems
Whitaker, P. H.
1977-01-01
Parameter optimization techniques for the design of linear automatic control systems that are applicable to both continuous and digital systems are described. The model performance index is used as the optimization criterion because of the physical insight that can be attached to it. The design emphasis is to start with the simplest system configuration that experience indicates would be practical. Design parameters are specified, and a digital computer program is used to select that set of parameter values which minimizes the performance index. The resulting design is examined, and complexity, through the use of more complex information processing or more feedback paths, is added only if performance fails to meet operational specifications. System performance specifications are assumed to be such that the desired step function time response of the system can be inferred.
Inverse Optimization and Forecasting Techniques Applied to Decision-making in Electricity Markets
Saez Gallego, Javier
This thesis deals with the development of new mathematical models that support the decision-making processes of market players. It addresses the problems of demand-side bidding, price-responsive load forecasting and reserve determination. The problems tackled in this dissertation are motivated, on one hand, by the increasing penetration of renewable energy and the changes in the patterns that the load traditionally exhibited; on the other hand, this thesis is motivated by the decision-making processes of market players. In response to these challenges, this thesis provides mathematical models for decision-making under uncertainty in electricity markets. From a methodological point of view, we investigate a novel approach to model the response of aggregate price-responsive load as a constrained optimization model, whose parameters are estimated from data by using inverse optimization techniques. Demand-side bidding refers...
Maria Oksa
2011-09-01
In this work, High Velocity Oxy-Fuel (HVOF) thermal spray techniques, spraying process optimization, and the characterization of coatings are reviewed. Different variants of the technology are described, and the main differences in spray conditions in terms of particle kinetics and thermal energy are rationalized. Methods and tools for controlling the spray process are presented, as well as their use in optimizing the coating process. It is shown how differences from the starting powder to the final coating formation affect the coating microstructure and performance. Typical properties and performance of HVOF-sprayed coatings are described, as is the development of testing methods used for the evaluation of coating properties and the current status of standardization. A short discussion of typical applications is also given.
Giniyatulin, R.N.; Komarov, V.L.; Kuzmin, E.G.; Makhankov, A.N.; Mazul, I.V.; Yablokov, N.A.; Zhuk, A.N.
2002-01-01
Joining of tungsten to a copper-based cooling structure and armour geometry optimization are the major aspects in the development of tungsten-armoured plasma facing components (PFC). Fabrication techniques and high heat flux (HHF) tests of tungsten-armoured components have to reflect different PFC designs and acceptable manufacturing cost. The authors present recent results of tungsten-armoured mock-up development based on manufacturing and HHF tests. Two aspects were investigated: selection of the armour geometry and examination of tungsten-copper bonding techniques. Brazing and casting tungsten-copper bonding techniques were used in small mock-ups. The mock-ups, with armour tiles of (20x5x10, 10x10x10, 20x20x10 and 27x27x10) mm³ in dimensions, were tested with cyclic heat fluxes in the range of 5-20 MW/m²; the number of thermal cycles varied from hundreds to several thousand for each mock-up. The results of the tests show the applicability of the different geometries and bonding techniques to the corresponding heat loading. A medium-scale mock-up, 0.6 m in length, was manufactured and tested. HHF tests of the medium-scale mock-up have demonstrated the applicability of the applied bonding techniques and armour geometry for full-scale PFC manufacturing
Quantifying global fossil-fuel CO2 emissions: from OCO-2 to optimal observing designs
Ye, X.; Lauvaux, T.; Kort, E. A.; Oda, T.; Feng, S.; Lin, J. C.; Yang, E. G.; Wu, D.; Kuze, A.; Suto, H.; Eldering, A.
2017-12-01
Cities house more than half of the world's population and are responsible for more than 70% of the world's anthropogenic CO2 emissions. Quantification of emissions from major cities, which are fewer than a hundred intense emitting spots across the globe, should therefore allow us to monitor changes in global fossil-fuel CO2 emissions in an independent, objective way. Satellite platforms provide favorable temporal and spatial coverage to collect urban CO2 data for quantifying the anthropogenic contributions to the global carbon budget. We present here the optimal observation design for NASA's OCO-2 and the Japanese GOSAT missions, based on real-data (i.e. OCO-2) experiments and Observing System Simulation Experiments (OSSEs) to address different error components in the urban CO2 budget calculation. We identify the major sources of emission uncertainties for various types of cities with different ecosystems and geographical features, such as urban plumes over flat terrain, accumulated enhancements within basins, and complex weather regimes in coastal areas. Atmospheric transport errors were characterized under various meteorological conditions using the Weather Research and Forecasting (WRF) model at 1-km spatial resolution, coupled to the Open-source Data Inventory for Anthropogenic CO2 (ODIAC) emissions. We propose and discuss optimized urban sampling strategies that address difficulties from the seasonality in cloud cover and emissions and from the vegetation density in and around cities, and that address the daytime sampling bias using prescribed diurnal cycles. These factors are combined in pseudo-data experiments in which we evaluate the relative impact of uncertainties on inverse estimates of CO2 emissions for cities across latitudinal and climatological zones. We propose several sampling strategies to minimize the uncertainties in target mode for tracking urban fossil-fuel CO2 emissions over the globe for future satellite missions, such as OCO-3 and future
A Novel Global MPP Tracking of Photovoltaic System based on Whale Optimization Algorithm
Santhan Kumar Cherukuri
2016-11-01
To harvest the maximum amount of solar energy and to attain higher efficiency, photovoltaic generation (PVG) systems must be operated at their maximum power point (MPP) under both variable climatic and partial shading conditions (PSC). From the literature, most conventional MPP tracking (MPPT) methods reliably reach the MPP under uniform shading but fail to find the global MPP under PSC, as they may become trapped at a local MPP, which deteriorates the efficiency of the PVG system. In this paper a novel MPPT method based on the Whale Optimization Algorithm (WOA) is proposed, using an analytic model of the PV system that considers both series and shunt resistances, for MPP tracking under PSC. The proposed algorithm is tested on 6S, 3S2P and 2S3P photovoltaic array configurations for different shading patterns, and results are presented. For comparison, GWO- and PSO-based MPPT algorithms are also simulated, and their results are presented as well. The results show that the proposed MPPT method is superior to the other methods with respect to accuracy and tracking speed. Article history: received July 23rd 2016; received in revised form September 15th 2016; accepted October 1st 2016. How to cite this article: Kumar, C.H.S. and Rao, R.S. (2016) A Novel Global MPP Tracking of Photovoltaic System based on Whale Optimization Algorithm. Int. Journal of Renewable Energy Development, 5(3), 225-232. http://dx.doi.org/10.14710/ijred.5.3.225-232
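A one-dimensional sketch of the standard WOA update rules (shrinking encircling, log-spiral movement, and random-whale exploration) follows. The objective is a made-up multimodal stand-in for a partially shaded P-V curve, not the authors' array model:

```python
import math
import random

def woa_maximize(f, lo, hi, n_whales=20, iters=100, seed=1):
    """Minimal 1-D Whale Optimization Algorithm (spiral constant b = 1)."""
    rng = random.Random(seed)
    X = [rng.uniform(lo, hi) for _ in range(n_whales)]
    best = max(X, key=f)
    for t in range(iters):
        a = 2.0 * (1 - t / iters)                  # shrinks from 2 to 0
        for i in range(n_whales):
            A = 2 * a * rng.random() - a
            C = 2 * rng.random()
            if rng.random() < 0.5:
                if abs(A) < 1:                     # exploit: encircle the best
                    X[i] = best - A * abs(C * best - X[i])
                else:                              # explore: random whale
                    ref = X[rng.randrange(n_whales)]
                    X[i] = ref - A * abs(C * ref - X[i])
            else:                                  # log-spiral around the best
                l = rng.uniform(-1, 1)
                X[i] = abs(best - X[i]) * math.exp(l) * math.cos(2 * math.pi * l) + best
            X[i] = min(hi, max(lo, X[i]))
        best = max(X + [best], key=f)
    return best

# multimodal stand-in objective; the global peak of x*sin(x) on [0, 10] is near x = 7.98
peak = woa_maximize(lambda x: x * math.sin(x), 0.0, 10.0)
```

The early iterations, where |A| >= 1 is common, explore the whole interval, which is what lets the swarm avoid settling on the lower local peak near x = 2.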
Towards continuous global measurements and optimal emission estimates of NF3
Arnold, T.; Muhle, J.; Salameh, P.; Harth, C.; Ivy, D. J.; Weiss, R. F.
2011-12-01
We present an analytical method for the continuous in situ measurement of nitrogen trifluoride (NF3), an anthropogenic gas with a global warming potential of ~16800 over a 100-year time horizon. NF3 is not included in national reporting emissions inventories under the United Nations Framework Convention on Climate Change (UNFCCC). However, it is a rapidly emerging greenhouse gas due to emission from a growing number of manufacturing facilities with increasing output and modern end-use applications, namely in microcircuit etching and in the production of flat panel displays and thin-film photovoltaic cells. Despite success in measuring the most volatile long-lived halogenated species such as CF4, the Medusa preconcentration GC/MS system of Miller et al. (2008) is unable to detect NF3 under remote operation. Using altered techniques of gas separation and chromatography after initial preconcentration, we are now able to make continuous atmospheric measurements of NF3 with average precisions NF3 produced. Emission factors are shown to have fallen over the last decade; however, rising production and end-use have caused the average global atmospheric concentration to double between 2005 and 2011, i.e. half the atmospheric NF3 present today originates from emissions after 2005. Finally, we show the first continuous in situ measurements from La Jolla, California, illustrating how global deployment of our technique could improve the temporal and spatial scale of NF3 'top-down' emission estimates over the coming years. These measurements will be important for independent verification of emissions should NF3 be regulated under a new climate treaty.
Imen Chaari
2017-03-01
This article presents the results of the 2-year iroboapp research project, which aims at devising path-planning algorithms for large grid maps with much faster execution times while tolerating very small slacks with respect to the optimal path. We investigated both exact and heuristic methods, contributing the design, analysis, evaluation, implementation and experimentation of several grid-map path-planning algorithms of both kinds. We also designed an innovative algorithm called relaxed A*, which has linear complexity with relaxed constraints and provides near-optimal solutions with an extremely reduced execution time compared to A*. We evaluated the performance of the different algorithms and concluded that relaxed A* is the best path planner, as it provides a good trade-off among all the metrics; however, we noticed that the heuristic methods have good features that can be exploited to improve the solution of the relaxed exact method. This led us to design new hybrid algorithms that combine relaxed A* with heuristic methods, improving the solution quality of relaxed A* at the cost of slightly higher execution time while remaining much faster than A* for large-scale problems. Finally, we demonstrate how to integrate the relaxed A* algorithm into the Robot Operating System as a global path planner and show that it outperforms the default path planner, with an execution time 38% faster on average.
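The relaxed-A* idea can be sketched on a 4-connected grid: each cell is discovered and queued at most once (its g-value is never reopened), trading the guarantee of optimality for near-linear runtime. This is a sketch under those assumptions, not the article's implementation:

```python
import heapq

def relaxed_a_star(grid, start, goal):
    """Grid path planner; grid[r][c] == 0 is free, 1 is an obstacle."""
    rows, cols = len(grid), len(grid[0])
    h = lambda c: abs(c[0] - goal[0]) + abs(c[1] - goal[1])   # Manhattan
    g = {start: 0}
    parent = {start: None}
    pq = [(h(start), start)]
    while pq:
        _, cur = heapq.heappop(pq)
        if cur == goal:
            path = []
            while cur is not None:
                path.append(cur)
                cur = parent[cur]
            return path[::-1]
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (cur[0] + dr, cur[1] + dc)
            if (0 <= nxt[0] < rows and 0 <= nxt[1] < cols
                    and grid[nxt[0]][nxt[1]] == 0 and nxt not in g):
                g[nxt] = g[cur] + 1            # relaxed: the first g is kept
                parent[nxt] = cur
                heapq.heappush(pq, (g[nxt] + h(nxt), nxt))
    return None
```

Because each cell enters the priority queue at most once, the number of heap operations is bounded by the number of free cells, which is where the near-linear behavior on large maps comes from.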
Carlos Pozo
Optimization models in metabolic engineering and systems biology focus typically on optimizing a unique criterion, usually the synthesis rate of a metabolite of interest or the rate of growth. Connectivity and non-linear regulatory effects, however, make it necessary to consider multiple objectives in order to identify useful strategies that balance out different metabolic issues. This is a fundamental aspect, as optimization of maximum yield in a given condition may involve unrealistic values in other key processes. Due to the difficulties associated with detailed non-linear models, analysis using stoichiometric descriptions and linear optimization methods have become rather popular in systems biology. However, despite being useful, these approaches fail in capturing the intrinsic nonlinear nature of the underlying metabolic systems and the regulatory signals involved. Targeting more complex biological systems requires the application of global optimization methods to non-linear representations. In this work we address the multi-objective global optimization of metabolic networks that are described by a special class of models based on the power-law formalism: the generalized mass action (GMA) representation. Our goal is to develop global optimization methods capable of efficiently dealing with several biological criteria simultaneously. In order to overcome the numerical difficulties of dealing with multiple criteria in the optimization, we propose a heuristic approach based on the epsilon constraint method that reduces the computational burden of generating a set of Pareto optimal alternatives, each achieving a unique combination of objectives values. To facilitate the post-optimal analysis of these solutions and narrow down their number prior to being tested in the laboratory, we explore the use of Pareto filters that identify the preferred subset of enzymatic profiles. We demonstrate the usefulness of our approach by means of a case study
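The epsilon-constraint idea can be sketched on a toy bi-objective problem over a candidate grid: maximize f1 subject to f2 >= eps, sweeping eps over the observed range of f2. This is purely illustrative, not the GMA models of the paper:

```python
def epsilon_constraint(f1, f2, xs, n_points=5):
    """Maximize f1 over the candidates satisfying f2(x) >= eps, for a sweep
    of eps values spanning the observed range of f2; each solve yields one
    Pareto-optimal alternative."""
    f2_vals = [f2(x) for x in xs]
    lo, hi = min(f2_vals), max(f2_vals)
    front = []
    for k in range(n_points):
        eps = lo + k * (hi - lo) / (n_points - 1)
        feasible = [x for x in xs if f2(x) >= eps - 1e-12]
        if feasible:
            front.append(max(feasible, key=f1))
    return front

# toy trade-off: f1 = x (e.g. a synthesis rate), f2 = 1 - x^2 (e.g. a stability margin)
xs = [i / 100 for i in range(101)]
front = epsilon_constraint(lambda x: x, lambda x: 1 - x * x, xs)
```

Each point in `front` is Pareto-optimal for this pair of objectives: improving f1 from any of them would violate the active constraint on f2.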
Pozo, Carlos; Guillén-Gosálbez, Gonzalo; Sorribas, Albert; Jiménez, Laureano
2012-01-01
Optimization models in metabolic engineering and systems biology typically focus on optimizing a unique criterion, usually the synthesis rate of a metabolite of interest or the rate of growth. Connectivity and non-linear regulatory effects, however, make it necessary to consider multiple objectives in order to identify useful strategies that balance out different metabolic issues. This is a fundamental aspect, as optimization of maximum yield in a given condition may involve unrealistic values in other key processes. Due to the difficulties associated with detailed non-linear models, analyses using stoichiometric descriptions and linear optimization methods have become rather popular in systems biology. However, despite being useful, these approaches fail to capture the intrinsic nonlinear nature of the underlying metabolic systems and the regulatory signals involved. Targeting more complex biological systems requires the application of global optimization methods to non-linear representations. In this work we address the multi-objective global optimization of metabolic networks that are described by a special class of models based on the power-law formalism: the generalized mass action (GMA) representation. Our goal is to develop global optimization methods capable of efficiently dealing with several biological criteria simultaneously. In order to overcome the numerical difficulties of dealing with multiple criteria in the optimization, we propose a heuristic approach based on the epsilon constraint method that reduces the computational burden of generating a set of Pareto optimal alternatives, each achieving a unique combination of objective values. To facilitate the post-optimal analysis of these solutions and narrow down their number prior to being tested in the laboratory, we explore the use of Pareto filters that identify the preferred subset of enzymatic profiles. We demonstrate the usefulness of our approach by means of a case study.
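The epsilon-constraint idea described above can be sketched on a toy bi-objective problem: one objective is optimized while the other is bounded by a sweep of epsilon values, and a Pareto filter then discards dominated alternatives. The objectives and search grid below are hypothetical stand-ins, not the paper's GMA metabolic model.

```python
# Epsilon-constraint sweep over a toy bi-objective problem (illustrative only):
# maximize f1 subject to f2 <= eps, then keep the non-dominated alternatives.

def f1(x, y):
    """Toy 'yield-like' objective to maximize (hypothetical)."""
    return x + 2 * y

def f2(x, y):
    """Toy 'burden-like' objective to minimize (hypothetical)."""
    return x * x + y * y

GRID = [(x, y) for x in range(11) for y in range(11)]  # brute-force search space

def epsilon_constraint(epsilons):
    """Solve one single-objective subproblem per epsilon bound on f2."""
    solutions = []
    for eps in epsilons:
        feasible = [p for p in GRID if f2(*p) <= eps]
        if feasible:
            best = max(feasible, key=lambda p: f1(*p))
            solutions.append((f1(*best), f2(*best)))
    return solutions

def pareto_filter(points):
    """Discard points dominated in the (max f1, min f2) sense."""
    return [p for p in points
            if not any(q != p and q[0] >= p[0] and q[1] <= p[1] for q in points)]

front = pareto_filter(epsilon_constraint([25, 50, 100, 200]))
```

Each epsilon value yields one alternative trading f1 against f2; the filter then leaves only the preferred, non-dominated subset, mirroring the post-optimal narrowing described in the abstract.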
Zainal Ariffin, S.; Razlan, A.; Ali, M. Mohd; Efendee, A. M.; Rahman, M. M.
2018-03-01
Background/Objectives: The paper discusses the optimum cutting parameters under different coolant conditions (1.0 mm nozzle orifice, wet and dry) to optimize surface roughness, temperature and tool wear in the machining process based on the selected setting parameters. The selected cutting parameters for this study were the cutting speed, feed rate, depth of cut and coolant condition. Methods/Statistical Analysis: Experiments were conducted and investigated based on Design of Experiments (DOE) with the Response Surface Method. The research on the aggressive machining of aluminum alloy (A319) for automotive applications is an effort to understand the machining concept, which is widely used in a variety of manufacturing industries, especially the automotive industry. Findings: The results show that surface roughness, temperature and tool wear increase during machining when using the 1.0 mm nozzle orifice, and that this condition can also help minimize built-up edge on A319. The exploration of surface roughness, productivity and the optimization of cutting speed in the technical and commercial aspects of the manufacturing processes of A319 in the automotive components industries is discussed as further work. Applications/Improvements: The research results are also beneficial in minimizing the costs incurred and improving the productivity of manufacturing firms. According to the mathematical model and equations generated by CCD-based RSM, experiments were performed, and a coolant-condition technique using the selected nozzle size that reduces tool wear, surface roughness and temperature was obtained. The results have been analyzed and optimization has been carried out for selecting cutting parameters, showing that the effectiveness and efficiency of the system can be identified and helping to solve potential problems.
Simpson, J. J.; Taflove, A.
2005-12-01
We report a finite-difference time-domain (FDTD) computational solution of Maxwell's equations [1] that models the possibility of detecting and characterizing ionospheric disturbances above seismic regions. Specifically, we study anomalies in Schumann resonance spectra in the extremely low frequency (ELF) range below 30 Hz as observed in Japan caused by a hypothetical cylindrical ionospheric disturbance above Taiwan. We consider excitation of the global Earth-ionosphere waveguide by lightning in three major thunderstorm regions of the world: Southeast Asia, South America (Amazon region), and Africa. Furthermore, we investigate varying geometries and characteristics of the ionospheric disturbance above Taiwan. The FDTD technique used in this study enables a direct, full-vector, three-dimensional (3-D) time-domain Maxwell's equations calculation of round-the-world ELF propagation accounting for arbitrary horizontal as well as vertical geometrical and electrical inhomogeneities and anisotropies of the excitation, ionosphere, lithosphere, and oceans. Our entire-Earth model grids the annular lithosphere-atmosphere volume within 100 km of sea level, and contains over 6,500,000 grid-points (63 km laterally between adjacent grid points, 5 km radial resolution). We use our recently developed spherical geodesic gridding technique having a spatial discretization best described as resembling the surface of a soccer ball [2]. The grid is composed entirely of hexagonal cells except for a small fixed number of pentagonal cells needed for completion. Grid-cell areas and locations are optimized to yield a smoothly varying area difference between adjacent cells, thereby maximizing numerical convergence. We compare our calculated results with measured data prior to the Chi-Chi earthquake in Taiwan as reported by Hayakawa et al. [3]. Acknowledgement This work was suggested by Dr. Masashi Hayakawa, University of Electro-Communications, Chofugaoka, Chofu Tokyo. References [1] A
Dynamical optimization techniques for the calculation of electronic structure in solids
Benedek, R.; Min, B.I.; Garner, J.
1989-01-01
The method of dynamical simulated annealing, recently introduced by Car and Parrinello, provides a new tool for electronic structure computation as well as for molecular dynamics simulation. In this paper, we explore an optimization technique that is complementary to dynamical simulated annealing, the method of steepest descents (SD). As an illustration, SD is applied to calculate the total energy of diamond-Si, a system previously treated by Car and Parrinello. The adaptation of SD to treat metallic systems is discussed and a numerical application is presented. (author). 18 refs., 3 figs.
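As a minimal illustration of the SD method discussed above, the sketch below minimizes a toy quadratic "energy" surface by repeatedly stepping against the gradient; the functional, step size and tolerance are illustrative stand-ins, not the electronic total-energy functional treated in the paper.

```python
# Steepest descent on a toy quadratic "energy" surface (illustrative only).

def grad(x, y):
    """Gradient of E(x, y) = (x - 1)^2 + 2*(y + 0.5)^2 (a stand-in energy)."""
    return 2 * (x - 1), 4 * (y + 0.5)

def steepest_descent(x, y, step=0.1, tol=1e-10, max_iter=10000):
    for _ in range(max_iter):
        gx, gy = grad(x, y)
        if gx * gx + gy * gy < tol:  # stop once the gradient is ~zero
            break
        x, y = x - step * gx, y - step * gy  # move against the gradient
    return x, y

x_min, y_min = steepest_descent(0.0, 0.0)
```

For this surface the iteration converges to the minimum at (1, -0.5); in the electronic-structure setting the same descent direction is applied to the wavefunction coefficients instead of two scalar coordinates.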
Jackson, C. E.; Illfelder, H. M. J.; Pineda, G.
1998-12-31
Field implementation of an integrated wellsite geological steering service is described. The service provides timely, useful feedback from real-time logging-while-drilling (LWD) measurements for making immediate course corrections. Interactive multi-dimensional displays of both the geological and petrophysical properties of the formation being penetrated by the wellbore are a prominent feature of the service; the optimization of the drilling is the result of the visualization afforded by the displays. The paper reviews forward modelling techniques, provides a detailed explanation of the principles underlying this new application, and illustrates the application by examples from the field. 5 refs., 1 tab., 8 figs.
A three-stage strategy for optimal price offering by a retailer based on clustering techniques
Mahmoudi-Kohan, N.; Shayesteh, E.; Moghaddam, M. Parsa; Sheikh-El-Eslami, M.K.
2010-01-01
In this paper, an innovative strategy for optimal price offering to customers for maximizing the profit of a retailer is proposed. This strategy is based on load profile clustering techniques and includes three stages. For the purpose of clustering, an improved weighted fuzzy average K-means is proposed. Also, in this paper a new acceptance function for increasing the profit of the retailer is proposed. The new method is evaluated by implementation on a group of 300 customers of a 20 kV distribution network. (author)
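The clustering stage above can be pictured with a plain K-means baseline over simplified load profiles; the paper's improved weighted fuzzy average K-means refines this idea, and the profiles and parameters below are made up for illustration.

```python
# Baseline K-means over toy load profiles (the paper's weighted fuzzy average
# K-means is a refinement of this idea; data here are hypothetical).
import random

def kmeans(profiles, k, iters=50, seed=0):
    rng = random.Random(seed)
    centers = [list(p) for p in rng.sample(profiles, k)]
    clusters = [[] for _ in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in profiles:  # assign each profile to its nearest center
            i = min(range(k),
                    key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centers[c])))
            clusters[i].append(p)
        centers = [[sum(col) / len(cl) for col in zip(*cl)] if cl else centers[i]
                   for i, cl in enumerate(clusters)]  # recompute cluster means
    return centers, clusters

# two obviously separated groups of (shortened) load profiles
centers, clusters = kmeans([[0, 0], [0, 1], [10, 10], [10, 11]], k=2)
```

In the retailer setting each profile would be a 24-point daily load curve, and the resulting cluster centers drive the per-group price offers.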
Techniques for Optimizing Surgical Scars, Part 2: Hypertrophic Scars and Keloids.
Potter, Kathryn; Konda, Sailesh; Ren, Vicky Zhen; Wang, Apphia Lihan; Srinivasan, Aditya; Chilukuri, Suneel
2017-01-01
Surgical management of benign or malignant cutaneous tumors may result in noticeable scars that are of great concern to patients, regardless of sex, age, or ethnicity. Techniques to optimize surgical scars are discussed in this three-part review. Part 2 focuses on scar revision for hypertrophic and keloid scars. Scar revision options for hypertrophic and keloid scars include corticosteroids, bleomycin, fluorouracil, verapamil, avotermin, hydrogel scaffold, nonablative fractional lasers, ablative and fractional ablative lasers, pulsed dye laser (PDL), flurandrenolide tape, imiquimod, onion extract, silicone, and scar massage.
Nacelle Chine Installation Based on Wind-Tunnel Test Using Efficient Global Optimization
Kanazaki, Masahiro; Yokokawa, Yuzuru; Murayama, Mitsuhiro; Ito, Takeshi; Jeong, Shinkyu; Yamamoto, Kazuomi
Design exploration of a nacelle chine installation was carried out. The nacelle chine improves stall performance when deploying multi-element high-lift devices. This study proposes an efficient design process using a Kriging surrogate model to determine the nacelle chine installation point in wind-tunnel tests. The design exploration was conducted in a wind-tunnel using the JAXA high-lift aircraft model at the JAXA Large-scale Low-speed Wind Tunnel. The objective was to maximize the maximum lift. The chine installation points were designed on the engine nacelle in the axial and chord-wise direction, while the geometry of the chine was fixed. In the design process, efficient global optimization (EGO) which includes Kriging model and genetic algorithm (GA) was employed. This method makes it possible both to improve the accuracy of the response surface and to explore the global optimum efficiently. Detailed observations of flowfields using the Particle Image Velocimetry method confirmed the chine effect and design results.
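The EGO loop described above selects the next wind-tunnel measurement point by maximizing expected improvement (EI) under the Kriging prediction. The sketch below shows only the EI criterion; the candidate positions and the predicted means and standard deviations are hypothetical stand-ins for a fitted Kriging model.

```python
# Expected improvement (EI), the acquisition criterion at the heart of EGO.
# The (mu, sigma) pairs stand in for Kriging predictions (hypothetical values).
import math

def expected_improvement(mu, sigma, best, maximize=True):
    """EI of a Gaussian prediction N(mu, sigma^2) against the incumbent `best`."""
    if sigma <= 0.0:
        return 0.0
    imp = (mu - best) if maximize else (best - mu)
    z = imp / sigma
    pdf = math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)
    cdf = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
    return imp * cdf + sigma * pdf  # exploitation term + exploration term

# candidate chine positions -> (predicted max lift, prediction std); made up
candidates = {0.2: (1.45, 0.05), 0.5: (1.50, 0.20), 0.8: (1.40, 0.01)}
best_seen = 1.48
next_x = max(candidates,
             key=lambda x: expected_improvement(*candidates[x], best_seen))
```

Note how the criterion favors the uncertain candidate (large sigma) over a slightly better but confident one: this is the balance between refining the response surface and exploiting the current optimum that the abstract refers to.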
Local search for optimal global map generation using mid-decadal landsat images
Khatib, L.; Gasch, J.; Morris, Robert; Covington, S.
2007-01-01
NASA and the US Geological Survey (USGS) are seeking to generate a map of the entire globe using Landsat 5 Thematic Mapper (TM) and Landsat 7 Enhanced Thematic Mapper Plus (ETM+) sensor data from the "mid-decadal" period of 2004 through 2006. The global map comprises thousands of scene locations and, for each location, tens of different images of varying quality to choose from. Furthermore, it is desirable for images of adjacent scenes to be close together in time of acquisition, to avoid obvious discontinuities due to seasonal changes. These characteristics make it desirable to formulate an automated solution to the problem of generating the complete map. This paper formulates the Global Map Generator problem as a Constraint Optimization Problem (GMG-COP) and describes an approach to solving it using local search. Preliminary results of running the algorithm on image data sets are summarized. The results suggest a significant improvement in map quality using constraint-based solutions. Copyright © 2007, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.
Ali Wagdy Mohamed
2014-11-01
In this paper, a novel version of the Differential Evolution (DE) algorithm based on a couple of local search mutations and a restart mechanism for solving global numerical optimization problems over continuous space is presented. The proposed algorithm is named Restart Differential Evolution with Local Search Mutation (RDEL). In RDEL, inspired by Particle Swarm Optimization (PSO), a novel local mutation rule based on the positions of the best and the worst individuals among the entire population of a particular generation is introduced. The novel local mutation scheme is joined with the basic mutation rule through a linearly decreasing function. The proposed local mutation scheme is shown to enhance the local search tendency of basic DE and to speed up convergence. Furthermore, a restart mechanism based on a random mutation scheme and a modified Breeder Genetic Algorithm (BGA) mutation scheme is incorporated to avoid stagnation and/or premature convergence. Additionally, an exponentially increasing crossover probability rule and uniform scaling factors for DE are introduced to promote the diversity of the population and to improve the search process, respectively. The performance of RDEL is investigated and compared with basic differential evolution and state-of-the-art parameter-adaptive differential evolution variants. The proposed modifications are found to significantly improve the performance of DE in terms of solution quality, efficiency and robustness.
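One plausible form of the blended mutation described above is sketched below: a classic DE/rand/1 vector and a best/worst-steered local vector are combined by a linearly decreasing weight. The exact RDEL formulas differ; the population, fitness values and coefficients here are purely illustrative.

```python
# Hypothetical blend of a global DE/rand/1 mutation with a local mutation
# steered by the best and worst individuals, weighted by a linearly
# decreasing factor (a sketch, not RDEL's exact rule).
import random

def mutate(pop, fitness, i, gen, max_gen, F=0.5, rng=random):
    idx = [j for j in range(len(pop)) if j != i]
    r1, r2, r3 = rng.sample(idx, 3)
    best = min(range(len(pop)), key=fitness.__getitem__)   # lowest cost
    worst = max(range(len(pop)), key=fitness.__getitem__)  # highest cost
    w = 1.0 - gen / max_gen  # weight on the global rule, shrinks over time
    global_v = [pop[r1][d] + F * (pop[r2][d] - pop[r3][d])
                for d in range(len(pop[i]))]
    local_v = [pop[best][d] + F * (pop[best][d] - pop[worst][d])
               for d in range(len(pop[i]))]
    return [w * g + (1.0 - w) * l for g, l in zip(global_v, local_v)]

# at the final generation (w = 0) only the local, best/worst-driven rule acts
v = mutate([[0.0], [1.0], [2.0], [3.0]], [0.0, 1.0, 2.0, 3.0],
           i=0, gen=10, max_gen=10, rng=random.Random(1))
```

Early generations (large w) explore via the random-difference rule; late generations (small w) contract toward the best individual, which is the convergence-speeding effect the abstract claims for the local scheme.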
Zdravko Bazdan
2010-12-01
The aim of this study is to point out that economic diplomacy is a relatively new practice in international economics, specifically an outgrowth of the Intelligence Revolution. The history of global relations shows that without economic diplomacy there is no optimal economic growth and social development. Economic diplomacy should therefore matter to our country and its political elite, as well as to the management of Croatian economic subjects that want to compete in the international market economy. The comparative analysis particularly highlights the French experience; Croatia should copy the practice of those countries that are successful in economic diplomacy, and a course on Economic Diplomacy should be introduced into curricula, especially at our faculties of economics. It is important to note that, in order to form an optimal model of economic diplomacy headed by the President of the Republic of Croatia, the formula should be based on: the Intelligence Security Agency (SOA), the Intelligence Service of the Ministry of Foreign Affairs and European Integration, the Intelligence Service of the Croatian Chamber of Commerce and the Intelligence Service of the Ministry of Economy, Labor and Entrepreneurship. The described model would consist of an intelligence subsystem with at least twelve components.
A global review of freshwater crayfish temperature tolerance, preference, and optimal growth
Westhoff, Jacob T.; Rosenberger, Amanda E.
2016-01-01
Conservation efforts, environmental planning, and management must account for ongoing ecosystem alteration due to a changing climate, introduced species, and shifting land use. This type of management can be facilitated by an understanding of the thermal ecology of aquatic organisms. However, information on thermal ecology for entire taxonomic groups is rarely compiled or summarized, and reviews of the science can facilitate its advancement. Crayfish are one of the most globally threatened taxa, and ongoing declines and extirpation could have serious consequences on aquatic ecosystem function due to their significant biomass and ecosystem roles. Our goal was to review the literature on thermal ecology for freshwater crayfish worldwide, with emphasis on studies that estimated temperature tolerance, temperature preference, or optimal growth. We also explored relationships between temperature metrics and species distributions. We located 56 studies containing information for at least one of those three metrics, which covered approximately 6 % of extant crayfish species worldwide. Information on one or more metrics existed for all 3 genera of Astacidae, 4 of the 12 genera of Cambaridae, and 3 of the 15 genera of Parastacidae. Investigations employed numerous methodological approaches for estimating these parameters, which restricts comparisons among and within species. The only statistically significant relationship we observed between a temperature metric and species range was a negative linear relationship between absolute latitude and optimal growth temperature. We recommend expansion of studies examining the thermal ecology of freshwater crayfish and identify and discuss methodological approaches that can improve standardization and comparability among studies.
3D prostate TRUS segmentation using globally optimized volume-preserving prior.
Qiu, Wu; Rajchl, Martin; Guo, Fumin; Sun, Yue; Ukwatta, Eranga; Fenster, Aaron; Yuan, Jing
2014-01-01
An efficient and accurate segmentation of 3D transrectal ultrasound (TRUS) images plays an important role in the planning and treatment of practical 3D TRUS guided prostate biopsy. However, a meaningful segmentation of 3D TRUS images tends to suffer from US speckles, shadowing, missing edges etc., which make it a challenging task to delineate the correct prostate boundaries. In this paper, we propose a novel convex optimization based approach to extracting the prostate surface from the given 3D TRUS image, while preserving a new global volume-size prior. We, especially, study the proposed combinatorial optimization problem by convex relaxation and introduce its dual continuous max-flow formulation with the new bounded flow conservation constraint, which results in an efficient numerical solver implemented on GPUs. Experimental results using 3D TRUS images from 12 patients show that the proposed approach, while preserving the volume-size prior, yielded a mean DSC of 89.5% ± 2.4%, a MAD of 1.4 ± 0.6 mm, a MAXD of 5.2 ± 3.2 mm, and a VD of 7.5% ± 6.2% in ~1 minute, demonstrating the advantages in both accuracy and efficiency. In addition, the low standard deviation of the segmentation accuracy shows the good reliability of the proposed approach.
Jarmo Nurmi
2017-05-01
This paper addresses the energy-inefficiency problem of four-degrees-of-freedom (4-DOF) hydraulic manipulators through redundancy resolution in robotic closed-loop controlled applications. Because conventional methods are typically local and perform poorly when resolving redundancy with respect to minimum hydraulic energy consumption, global energy-optimal redundancy resolution is proposed at the level of the interaction between the valve-controlled actuators and the hydraulic power system. The energy consumption of the widely popular valve-controlled load-sensing (LS) and constant-pressure (CP) systems is effectively minimised through cost functions formulated in a discrete-time dynamic programming (DP) approach with a minimum state representation. A prescribed end-effector path and important actuator constraints at the position, velocity and acceleration levels are also satisfied in the solution. Extensive field experiments performed on a forestry hydraulic manipulator demonstrate the performance of the proposed solution. Approximately 15–30% greater hydraulic energy consumption was observed with the conventional methods in the LS and CP systems. These results encourage energy-optimal redundancy resolution in future robotic applications of hydraulic manipulators.
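The discrete-time DP formulation can be sketched as a shortest-path search over a discretized redundant state: one stage per path waypoint, a per-stage energy cost, and a feasibility test on state transitions. The cost function and transition limit below are hypothetical stand-ins, not the paper's hydraulic-energy models or actuator constraints.

```python
# DP over a discretized redundant state along a prescribed path (sketch).
def dp_redundancy(path_len, states, cost, trans_ok):
    INF = float("inf")
    best = {s: cost(0, s) for s in states}   # stage-0 costs
    back = []
    for k in range(1, path_len):
        nxt, choice = {}, {}
        for s in states:
            # cheapest feasible predecessor for state s at waypoint k
            prev = min((b for b in states if trans_ok(b, s)),
                       key=lambda b: best[b], default=None)
            nxt[s] = (best[prev] if prev is not None else INF) + cost(k, s)
            choice[s] = prev
        best, back = nxt, back + [choice]
    end = min(states, key=lambda s: best[s])
    plan = [end]                              # backtrack the optimal sequence
    for choice in reversed(back):
        plan.append(choice[plan[-1]])
    return list(reversed(plan)), best[end]

# toy instance: state should track the waypoint index, moving one step at most
plan, total = dp_redundancy(3, [0, 1, 2],
                            cost=lambda k, s: (s - k) ** 2,
                            trans_ok=lambda b, s: abs(b - s) <= 1)
```

Because every stage keeps the cheapest cost-to-reach for each state, the backtracked plan is globally optimal over the discretization, which is what distinguishes this approach from the local methods criticized in the abstract.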
Development of a fuzzy optimization model, supporting global warming decision-making
Leimbach, M.
1996-01-01
An increasing number of models have been developed to support global warming response policies. The model constructors are facing a lot of uncertainties which limit the evidence of these models. The support of climate policy decision-making is only possible in a semi-quantitative way, as presented by a Fuzzy model. The model design is based on an optimization approach, integrated in a bounded risk decision-making framework. Given some regional emission-related and impact-related restrictions, optimal emission paths can be calculated. The focus is not only on carbon dioxide but on other greenhouse gases too. In the paper, the components of the model will be described. Cost coefficients, emission boundaries and impact boundaries are represented as Fuzzy parameters. The Fuzzy model will be transformed into a computational one by using an approach of Rommelfanger. In the second part, some problems of applying the model to computations will be discussed. This includes discussions on the data situation and the presentation, as well as interpretation of results of sensitivity analyses. The advantage of the Fuzzy approach is that the requirements regarding data precision are not so strong. Hence, the effort for data acquisition can be reduced and computations can be started earlier. 9 figs., 3 tabs., 17 refs., 1 appendix
Demirhan, Haydar; Kayhan Atilgan, Yasemin
2015-01-01
Highlights: • Precise horizontal global solar radiation estimation models are proposed for Turkey. • Genetic programming technique is used to construct the models. • Robust coplot analysis is applied to reduce the impact of outlier observations. • Better estimation and prediction properties are observed for the models. - Abstract: Renewable energy sources have been attracting more and more attention of researchers due to the diminishing and harmful nature of fossil energy sources. Because of the importance of solar energy as a renewable energy source, an accurate determination of significant covariates and their relationships with the amount of global solar radiation reaching the Earth is a critical research problem. There are numerous meteorological and terrestrial covariates that can be used in the analysis of horizontal global solar radiation. Some of these covariates are highly correlated with each other. It is possible to find a large variety of linear or non-linear models to explain the amount of horizontal global solar radiation. However, models that explain the amount of global solar radiation with the smallest set of covariates should be obtained. In this study, use of the robust coplot technique to reduce the number of covariates before going forward with advanced modelling techniques is considered. After reducing the dimensionality of model space, yearly and monthly mean daily horizontal global solar radiation estimation models for Turkey are built by using the genetic programming technique. It is observed that application of robust coplot analysis is helpful for building precise models that explain the amount of global solar radiation with the minimum number of covariates without suffering from outlier observations and the multicollinearity problem. Consequently, over a dataset of Turkey, precise yearly and monthly mean daily global solar radiation estimation models are introduced using the model spaces obtained by the robust coplot technique.
Shen, Bo [ORNL; Abdelaziz, Omar [ORNL; Shrestha, Som S [ORNL
2017-01-01
Oak Ridge National Laboratory (ORNL) recently conducted extensive laboratory drop-in investigations of lower Global Warming Potential (GWP) refrigerants to replace R-22 and R-410A. ORNL studied propane, DR-3, ARM-20B, N-20B and R-444B as lower-GWP replacements for R-22 in a mini-split room air conditioner (RAC) originally designed for R-22, and R-32, DR-55, ARM-71A, and L41-2 in a mini-split RAC designed for R-410A. We obtained laboratory testing results with very good energy balance and nominal measurement uncertainty. Drop-in studies are not enough to judge the overall performance of the alternative refrigerants, since their thermodynamic and transport properties might favor different heat exchanger configurations, e.g. cross-flow, counter-flow, etc. This study compares optimized performances of the individual refrigerants using a physics-based system modeling tool. The DOE/ORNL Heat Pump Design Model (HPDM) was used to model the mini-split RACs by inputting detailed heat exchanger geometries, compressor displacement and efficiencies, as well as other relevant system components. The RAC models were calibrated against the lab data for each individual refrigerant. The calibrated models were then used to conduct a design optimization of the cooling performance by varying the compressor displacement to match the required capacity, and changing the number of circuits, refrigerant flow direction, tube diameters, and air flow rates in the condenser and evaporator at 100% and 50% cooling capacities. This paper compares the optimized performance results for all alternative refrigerants and highlights the best candidates for R-22 and R-410A replacement.
Mansoor Ahmed Siddiqui
2017-06-01
This research work is aimed at optimizing the availability of a framework comprising two units linked together in a series configuration, utilizing Markov model and Monte Carlo (MC) simulation techniques. An effort has been made to develop a maintenance model that incorporates three distinct states for each unit, taking into account their different levels of deterioration. Calculations are carried out using the proposed model for two distinct cases of corrective repair, namely perfect and imperfect repair, with as well as without opportunistic maintenance. Initially, results are obtained using an analytical technique, i.e. the Markov model. Validation of these results is later carried out with the help of MC simulation. In addition, MC simulation based codes also work well for frameworks that follow non-exponential failure and repair rates, and thus overcome the limitations of the Markov model.
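The two complementary techniques can be illustrated on the simplest case, a single repairable unit with two states: the Markov steady state gives availability mu/(lambda+mu), and a Monte Carlo simulation of the same chain should reproduce that value. The failure and repair rates below are assumed purely for illustration.

```python
# One repairable unit as a two-state (up/down) Markov chain: analytic
# steady-state availability vs. a Monte Carlo estimate (rates are assumed).
import random

LAM, MU = 0.1, 1.0            # failure and repair rates (per hour, assumed)
analytic = MU / (LAM + MU)    # availability = MTTF / (MTTF + MTTR)

def mc_availability(horizon=200000.0, seed=42):
    rng = random.Random(seed)
    t, up_time, up = 0.0, 0.0, True
    while t < horizon:
        dwell = rng.expovariate(LAM if up else MU)  # exponential sojourn time
        dwell = min(dwell, horizon - t)             # clip at the horizon
        if up:
            up_time += dwell
        t += dwell
        up = not up                                 # fail or finish repair
    return up_time / horizon

estimate = mc_availability()
```

The MC version generalizes directly to non-exponential dwell times (swap `expovariate` for another sampler), which is exactly the flexibility over the analytical Markov model that the abstract points out.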
DATA MINING WORKSPACE AS AN OPTIMIZATION PREDICTION TECHNIQUE FOR SOLVING TRANSPORT PROBLEMS
Anastasiia KUPTCOVA
2016-09-01
This article addresses a study related to forecasting with actual high-speed decision making, under careful modelling of time series data. The study uses data-mining modelling for the algorithmic optimization of transport goals. Our findings yield adequate techniques for fitting a prediction model, which will be used to analyse future transaction costs at the frontiers of the Czech Republic. The time series prediction methods examined for the performance of prediction models in the Statistics package are the Exponential, ARIMA and Neural Network approaches. The primary target for a predictive scenario in the data mining workspace is to provide modelling data faster and with more versatility than other management techniques.
Miranda, A.; Echevarria, J.F.; Rondon, S.; Leiva, P.; Sendoya, F.A.; Amalfi, J.; Lopez, M.; Dominguez, H.
1999-01-01
The paper deals with the study of the main parameters of the thermal cycle in automatic orbital welding, a particular application of the GTAW technique. It is also concerned with the investigation of the microstructural and mechanical properties of welded joints made with the orbital technique in SA 210 steel, an alloy widely used in the construction of economizers of power plants. A number of PC software packages were used to predict the main mechanical and structural characteristics of the weld metal and the heat-affected zone (HAZ). The paper may also be of great value in the selection of optimal weld parameters to produce sound, high-quality welds during the construction and assembly of structural components in demanding industrial sectors, and in making reliable predictions of weld properties.
Protopopescu, V.; D'Helon, C.; Barhen, J.
2003-06-01
A constant-time solution of the continuous global optimization problem (GOP) is obtained by using an ensemble algorithm. We show that under certain assumptions, the solution can be guaranteed by mapping the GOP onto a discrete unsorted search problem, whereupon Brüschweiler's ensemble search algorithm is applied. For adequate sensitivities of the measurement technique, the query complexity of the ensemble search algorithm depends linearly on the size of the function's domain. Advantages and limitations of an eventual NMR implementation are discussed.
Optimization of MKID noise performance via readout technique for astronomical applications
Czakon, Nicole G.; Schlaerth, James A.; Day, Peter K.; Downes, Thomas P.; Duan, Ran P.; Gao, Jiansong; Glenn, Jason; Golwala, Sunil R.; Hollister, Matt I.; LeDuc, Henry G.; Mazin, Benjamin A.; Maloney, Philip R.; Noroozian, Omid; Nguyen, Hien T.; Sayers, Jack; Siegel, Seth; Vaillancourt, John E.; Vayonakis, Anastasios; Wilson, Philip R.; Zmuidzinas, Jonas
2010-07-01
Detectors employing superconducting microwave kinetic inductance detectors (MKIDs) can be read out by measuring changes in either the resonator frequency or dissipation. We will discuss the pros and cons of both methods, in particular, the readout method strategies being explored for the Multiwavelength Sub/millimeter Inductance Camera (MUSIC) to be commissioned at the CSO in 2010. As predicted theoretically and observed experimentally, the frequency responsivity is larger than the dissipation responsivity, by a factor of 2-4 under typical conditions. In the absence of any other noise contributions, it should be easier to overcome amplifier noise by simply using frequency readout. The resonators, however, exhibit excess frequency noise which has been ascribed to a surface distribution of two-level fluctuators sensitive to specific device geometries and fabrication techniques. Impressive dark noise performance has been achieved using modified resonator geometries employing interdigitated capacitors (IDCs). To date, our noise measurement and modeling efforts have assumed an on-resonance readout, with the carrier power set well below the nonlinear regime. Several experimental indicators suggested to us that the optimal readout technique may in fact require a higher readout power, with the carrier tuned somewhat off resonance, and that a careful systematic study of the optimal readout conditions was needed. We will present the results of such a study, and discuss the optimum readout conditions as well as the performance that can be achieved relative to BLIP.
Ito, Fuminori, E-mail: fuminoito@spice.ocn.ne.jp [Tokyo Metropolitan University, Department of Applied Chemistry, Graduate School of Urban Environmental Sciences (Japan)
2016-09-15
In this study, we report the optimization of a solvent evaporation technique for preparing monodisperse poly(lactide-co-glycolide) (PLGA) nanospheres from a mixture of solvents composed of ethanol and PVA solution. Various experimental conditions were investigated in order to control the particle size and size distribution of the nanospheres. In addition, nanospheres containing rifampicin (RFP, an antituberculosis drug) were prepared using PLGA of various molecular weights, to study the effects of RFP as a model hydrophobic drug. The results showed that a higher micro-homogenizer stirring rate facilitated the preparation of monodisperse PLGA nanospheres with a low coefficient of variation (~20%), with sizes below 200 nm. Increasing the PLGA amount from 0.1 to 0.5 g resulted in an increase in the size of the obtained nanospheres from 130 to 174 nm. The molecular weight of PLGA had little effect on the particle sizes and particle size distributions of the nanospheres. However, the drug loading efficiencies of the obtained RFP/PLGA nanospheres decreased when the molecular weight of PLGA was increased. Based on these experiments, an optimized technique was established for the preparation of monodisperse PLGA nanospheres, using the method developed by the authors.
All-automatic swimmer tracking system based on an optimized scaled composite JTC technique
Benarab, D.; Napoléon, T.; Alfalou, A.; Verney, A.; Hellard, P.
2016-04-01
In this paper, an all-automatic optimized JTC-based swimmer tracking system is proposed and evaluated on a real video database drawn from national and international swimming competitions (French National Championships, Limoges 2015; FINA World Championships, Barcelona 2013 and Kazan 2015). First, we propose to calibrate the swimming pool using the DLT algorithm (Direct Linear Transformation). DLT calculates the homography matrix given a sufficient set of correspondence points between pixel and metric coordinates; i.e., DLT takes into account the dimensions of the swimming pool and the type of the swim. Once the swimming pool is calibrated, we extract the lane. Then we apply a motion detection approach to globally detect the swimmer in this lane. Next, we apply our optimized Scaled Composite JTC, which consists of creating an adapted input plane that contains the predicted region and the head reference image. The latter is generated using a composite filter of fin images chosen from the database. The dimension of this reference is scaled according to the ratio between the head's dimension and the width of the swimming lane. Finally, the proposed approach improves the performance of our previous tracking method by adding a detection module, achieving an all-automatic swimmer tracking system.
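The pool-calibration step described above relies on the standard DLT estimate of a homography from pixel-to-metric point correspondences. A minimal NumPy sketch of that estimation (function names are illustrative, not the authors' code) might look like:

```python
import numpy as np

def dlt_homography(pixel_pts, metric_pts):
    """Estimate the 3x3 homography H mapping pixel coordinates to
    metric (pool) coordinates from >= 4 correspondences via DLT:
    stack two linear equations per point and take the right singular
    vector associated with the smallest singular value."""
    A = []
    for (x, y), (X, Y) in zip(pixel_pts, metric_pts):
        A.append([-x, -y, -1, 0, 0, 0, X * x, X * y, X])
        A.append([0, 0, 0, -x, -y, -1, Y * x, Y * y, Y])
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]          # fix the arbitrary scale

def apply_homography(H, pt):
    """Map one pixel point through H (with homogeneous divide)."""
    v = H @ np.array([pt[0], pt[1], 1.0])
    return v[0] / v[2], v[1] / v[2]
```

With four exact correspondences the system is solved exactly; with more points the SVD yields the least-squares homography.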
A comprehensive review of prostate cancer brachytherapy: defining an optimal technique
Vicini, Frank A.; Kini, Vijay R.; Edmundson, Gregory B.S.; Gustafson, Gary S.; Stromberg, Jannifer; Martinez, Alvaro
1999-01-01
Purpose: A comprehensive review of prostate cancer brachytherapy literature was performed to determine if an optimal method of implantation could be identified, and to compare and contrast techniques currently in use. Methods and Materials: A MEDLINE search was conducted to obtain all articles in the English language on prostate cancer brachytherapy from 1985 through 1998. Articles were reviewed and grouped to determine the primary technique of implantation, the method or philosophy of source placement and/or dose specification, the technique to evaluate implant quality, overall treatment results (based upon pretreatment prostate-specific antigen (PSA) and biochemical control) and clinical, pathological or biochemical outcome based upon implant quality. Results: A total of 178 articles were identified in the MEDLINE database. Of these, 53 studies discussed evaluable techniques of implantation and were used for this analysis. Of these studies, 52% used preoperative ultrasound to determine the target volume to be implanted, 16% used preoperative computerized tomography (CT) scans, and 18% placed seeds with an open surgical technique. An additional 11% of studies placed seeds or needles under ultrasound guidance using interactive real-time dosimetry. The number and distribution of radioactive sources to be implanted or the method used to prescribe dose was determined using nomograms in 27% of studies, a least squares optimization technique in 11%, or not stated in 35%. In the remaining 26%, sources were described as either uniformly, differentially, or peripherally placed in the gland. To evaluate implant quality, 28% of studies calculated some type of dose-volume histogram, 21% calculated the matched peripheral dose, 19% the minimum peripheral dose, 14% used some type of CT-based qualitative review and, in 18% of studies, no implant quality evaluation was mentioned. Six studies correlated outcome with implant dose. One study showed an association of implant dose
Mohamed, Ahmed F.; Elarini, Mahdi M.; Othman, Ahmed M.
2013-01-01
One of the most recent optimization techniques applied to the optimal design of photovoltaic system to supply an isolated load demand is the Artificial Bee Colony Algorithm (ABC). The proposed methodology is applied to optimize the cost of the PV system including photovoltaic, a battery bank, a battery charger controller, and inverter. Two objective functions are proposed: the first one is the PV module output power which is to be maximized and the second one is the life cycle cost (LCC) whic...
A custom three-dimensional electron bolus technique for optimization of postmastectomy irradiation
Perkins, George H.; McNeese, Marsha D.; Antolak, John A.; Buchholz, Thomas A.; Strom, Eric A.; Hogstrom, Kenneth R.
2001-01-01
Purpose: Postmastectomy irradiation (PMI) is a technically complex treatment requiring consideration of the primary tumor location, possible risk of internal mammary node involvement, varying chest wall thicknesses secondary to surgical defects or body habitus, and risk of damaging normal underlying structures. In this report, we describe the application of a customized three-dimensional (3D) electron bolus technique for delivering PMI. Methods and Materials: A customized electron bolus was designed using a 3D planning system. Computed tomography (CT) images of each patient were obtained in treatment position and the volume to be treated was identified. The distal surface of the wax bolus matched the skin surface, and the proximal surface was designed so that the 90% isodose surface conformed to the distal surface of the planning target volume (PTV). Dose was calculated with a pencil-beam algorithm correcting for patient heterogeneity. The bolus was then fabricated from modeling wax using a computer-controlled milling device. To aid in quality assurance, CT images with the bolus in place were generated and the dose distribution was computed using these images. Results: This technique optimized the dose distribution while minimizing irradiation of normal tissues. The use of a single anterior field eliminated field junction sites. Two patients who benefited from this option are described: one with altered chest wall geometry (congenital pectus excavatum), and one with recurrent disease in the medial chest wall and internal mammary chain (IMC) area. Conclusion: The use of custom 3D electron bolus for PMI is an effective method for optimizing dose delivery. The radiation dose distribution is highly conformal, dose heterogeneity is reduced compared to standard techniques in certain suboptimal settings, and excellent immediate outcome is obtained.
Anon.
1998-09-01
The "hydro-wired" technique consists of using small-diameter synthetic pipes to supply hot or cold water to space heating or cooling appliances. This new technique has several advantages regarding thermal comfort and the realization of installations. However, care must be taken during the application of this technique and the design of installations. This paper describes: the French potential market for this technique (heating and cooling floors, traditional heating, sanitary hot water distribution); the characteristics of this technique with respect to traditional techniques (the sheathing of pipes, the specificities of the 'octopus'-type distribution); and the design of installations (the different steps: preliminary thermal analysis, realization, start-up and tests). (J.S.)
Climate, Agriculture, Energy and the Optimal Allocation of Global Land Use
Steinbuks, J.; Hertel, T. W.
2011-12-01
The allocation of the world's land resources over the course of the next century has become a pressing research question. Continuing population increases, increasingly land-intensive diets amongst the poorest populations in the world, increasing production of biofuels and rapid urbanization in developing countries are all competing for land, even as the world looks to land resources to supply more environmental services. The latter include biodiversity and natural lands, as well as forests and grasslands devoted to carbon sequestration. And all of this is taking place in the context of faster-than-expected climate change, which is altering the biophysical environment for land-related activities. The goal of the paper is to determine the optimal profile for global land use in the context of growing commercial demands for food and forest products, increasing non-market demands for ecosystem services, and more stringent GHG mitigation targets. We then seek to assess how the uncertainty associated with the underlying biophysical and economic processes influences this optimal profile of land use, in light of potential irreversibility in these decisions. We develop a dynamic long-run, forward-looking partial equilibrium framework in which the societal objective function being maximized places value on food production, liquid fuels (including biofuels), timber production, forest carbon and biodiversity. Given the importance of land-based emissions to any GHG mitigation strategy, as well as the potential impacts of climate change itself on the productivity of land in agriculture, forestry and ecosystem services, we aim to identify the optimal allocation of the world's land resources, over the course of the next century, in the face of alternative GHG constraints. The forestry sector is characterized by multiple forest vintages which add considerable computational complexity in the context of this dynamic analysis. In order to solve this model efficiently, we have employed the
Chang, Chiou-Shiung; Hwang, Jing-Min; Tai, Po-An; Chang, You-Kang; Wang, Yu-Nong; Shih, Rompin; Chuang, Keh-Shih
2016-01-01
(p < 0.05) than either DCA or IMRS plans, at 9.2 ± 7% and 8.2 ± 6%, respectively. Owing to the multiple arc or beam planning designs of IMRS and VMAT, both of these techniques required higher MU delivery than DCA, with the averages being twice as high (p < 0.05). If a linear accelerator is the only modality available for SRS treatment, then based on retrospective statistical evidence, we recommend VMAT as the optimal technique for delivering treatment to tumors adjacent to the brainstem.
The Hit and Away technique: optimal usage of the ultrasonic scalpel in laparoscopic gastrectomy.
Irino, Tomoyuki; Hiki, Naoki; Ohashi, Manabu; Nunobe, Souya; Sano, Takeshi; Yamaguchi, Toshiharu
2016-01-01
Thermal injury and unexpected bleeding caused by ultrasonic scalpels can lead to fatal complications in laparoscopic gastrectomy (LG), such as postoperative pancreatic fistulas (POPF). In this study, we developed the "Hit and Away" protocol for optimal usage of the ultrasonic scalpel, which in essence involves dividing tissues and vessels in batches using the tip of the scalpel to control tissue temperature. To assess the effectiveness of the technique, the surface temperature of the mesocolon of female swine after ultrasonic scalpel activations was measured, and tissue samples were collected to evaluate microscopic thermal injury to the pancreas. In parallel, we retrospectively surveyed 216 patients who had undergone LG before or after the introduction of this technique and assessed the ability of this technique to reduce POPF. The tissue temperature of the swine mesocolon reached 43 °C, a temperature at which adipose tissue melted but fibrous tissue, including vessels, remained intact. The temperature returned to baseline within 3 s of turning off the ultrasonic scalpel, demonstrating the advantage of using the ultrasonic scalpel in a pulsatile manner. Tissue samples from the pancreas demonstrated that the extent of thermal injury post-procedure was limited to the capsule of the pancreas. Moreover, with respect to the clinical outcomes before and after the introduction of this technique, POPF incidence decreased significantly from 7.8 to 1.0% (p = 0.021). The "Hit and Away" technique can reduce blood loss and thermal injury to the pancreas and help to ensure the safety of lymph node dissection in LG.
Ioannou, Lawrence M.; Travaglione, Benjamin C.
2006-01-01
We focus on determining the separability of an unknown bipartite quantum state ρ by invoking a sufficiently large subset of all possible entanglement witnesses given the expected value of each element of a set of mutually orthogonal observables. We review the concept of an entanglement witness from the geometrical point of view and use this geometry to show that the set of separable states is not a polytope and to characterize the class of entanglement witnesses (observables) that detect entangled states on opposite sides of the set of separable states. All this serves to motivate a classical algorithm which, given the expected values of a subset of an orthogonal basis of observables of an otherwise unknown quantum state, searches for an entanglement witness in the span of the subset of observables. The idea of such an algorithm, which is an efficient reduction of the quantum separability problem to a global optimization problem, was introduced by [Ioannou et al., Phys. Rev. A 70, 060303(R)], where it was shown to be an improvement on the naive approach for the quantum separability problem (exhaustive search for a decomposition of the given state into a convex combination of separable states). The last section of the paper discusses in more generality such algorithms, which, in our case, assume a subroutine that computes the global maximum of a real function of several variables. Despite this, we anticipate that such algorithms will perform sufficiently well on small instances that they will render a feasible test for separability in some cases of interest (e.g., in 3x3 dimensional systems)
Voyant, Cyril; Muselli, Marc; Paoli, Christophe; Nivet, Marie-Laure
2011-01-01
This paper presents an application of Artificial Neural Networks (ANNs) to predict daily solar radiation. We look at the Multi-Layer Perceptron (MLP) network, which is the most used of the ANN architectures. In previous studies, we have developed an ad-hoc time series preprocessing and optimized an MLP with endogenous inputs in order to forecast the solar radiation on a horizontal surface. We propose in this paper to study the contribution of exogenous meteorological data (multivariate method) as time series to our optimized MLP and compare with different forecasting methods: a naive forecaster (persistence), an ARIMA reference predictor, an ANN with preprocessing using only endogenous inputs (univariate method) and an ANN with preprocessing using endogenous and exogenous inputs. The use of exogenous data generates an nRMSE decrease between 0.5% and 1% for two stations during 2006 and 2007 (Corsica Island, France). The prediction results are also relevant for the concrete case of a tilted PV wall (1.175 kWp). The addition of endogenous and exogenous data allows a 1% decrease of the nRMSE over a 6-month cloudy period for the power production. While the use of exogenous data shows an interest in winter, endogenous data as inputs on a preprocessed ANN seem sufficient in summer. -- Research highlights: → Use of exogenous data as ANN inputs to forecast horizontal daily global irradiation data. → A new methodology for choosing adequate exogenous data: a systematic method comparing endogenous and exogenous data. → Comparison against different reference mathematical predictors allows conclusions about the pertinence of the proposed methodology.
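The nRMSE figures quoted above depend on the chosen normalization; the abstract does not spell it out, but a common choice is the RMSE divided by the mean of the observations, as in this small sketch:

```python
import math

def nrmse(forecast, observed):
    """RMSE normalized by the mean of the observations -- one common
    definition of nRMSE for solar-radiation forecasts (the paper's
    exact normalization is not stated in this abstract)."""
    n = len(observed)
    rmse = math.sqrt(sum((f - o) ** 2 for f, o in zip(forecast, observed)) / n)
    return rmse / (sum(observed) / n)
```

A perfect forecast gives 0; the reported 0.5-1% improvements correspond to small reductions of this ratio.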
Chang, Chiou-Shiung; Hwang, Jing-Min; Tai, Po-An; Chang, You-Kang; Wang, Yu-Nong; Shih, Rompin; Chuang, Keh-Shih
2016-01-01
Stereotactic radiosurgery (SRS) is a well-established technique that is replacing whole-brain irradiation in the treatment of intracranial lesions, which leads to better preservation of brain functions, and therefore a better quality of life for the patient. There are several available forms of linear accelerator (LINAC)-based SRS, and the goal of the present study is to identify which of these techniques is best (as evaluated statistically by dosimetric outcomes) when the target is located adjacent to the brainstem. We collected the records of 17 patients with lesions close to the brainstem who had previously been treated with single-fraction radiosurgery. In all, 5 different lesion catalogs were collected, and the patients were divided into 2 distance groups: one consisting of 7 patients with a target-to-brainstem distance of less than 0.5 cm, and the other of 10 patients with a target-to-brainstem distance of ≥ 0.5 cm. If a linear accelerator is the only modality available for SRS treatment, then based on retrospective statistical evidence, we recommend VMAT as the optimal technique for delivering treatment to tumors adjacent to the brainstem. Copyright © 2016 American Association of Medical Dosimetrists. All rights reserved.
Karthivashan, Govindarajan; Masarudin, Mas Jaffri; Kura, Aminu Umar; Abas, Faridah; Fakurazi, Sharida
2016-01-01
This study involves adaptation of bulk or sequential technique to load multiple flavonoids in a single phytosome, which can be termed as "flavonosome". Three widely established and therapeutically valuable flavonoids, such as quercetin (Q), kaempferol (K), and apigenin (A), were quantified in the ethyl acetate fraction of Moringa oleifera leaves extract and were commercially obtained and incorporated in a single flavonosome (QKA-phosphatidylcholine) through four different methods of synthesis - bulk (M1) and serialized (M2) co-sonication and bulk (M3) and sequential (M4) co-loading. The study also established an optimal formulation method based on screening the synthesized flavonosomes with respect to their size, charge, polydispersity index, morphology, drug-carrier interaction, antioxidant potential through in vitro 1,1-diphenyl-2-picrylhydrazyl kinetics, and cytotoxicity evaluation against human hepatoma cell line (HepaRG). Furthermore, entrapment and loading efficiency of flavonoids in the optimal flavonosome have been identified. Among the four synthesis methods, sequential loading technique has been optimized as the best method for the synthesis of QKA-phosphatidylcholine flavonosome, which revealed an average diameter of 375.93±33.61 nm, with a zeta potential of -39.07±3.55 mV, and the entrapment efficiency was >98% for all the flavonoids, whereas the drug-loading capacity of Q, K, and A was 31.63%±0.17%, 34.51%±2.07%, and 31.79%±0.01%, respectively. The in vitro 1,1-diphenyl-2-picrylhydrazyl kinetics of the flavonoids indirectly depicts the release kinetic behavior of the flavonoids from the carrier. The QKA-loaded flavonosome had no indication of toxicity toward human hepatoma cell line as shown by the 3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide result, wherein even at the higher concentration of 200 µg/mL, the flavonosomes exert >85% of cell viability. These results suggest that sequential loading technique may be a promising
Kleijnen, Jack P.C.; van Beers, W.C.M.; van Nieuwenhuyse, I.
2010-01-01
This paper uses a sequentialized experimental design to select simulation input combinations for global optimization, based on Kriging (also called Gaussian process or spatial correlation modeling); this Kriging is used to analyze the input/output data of the simulation model (computer code).
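The Kriging metamodel at the core of such a design can be sketched in a few lines: given sampled simulation inputs and outputs, a simple-Kriging predictor with a Gaussian correlation function interpolates the outputs at new inputs. The hyperparameters below are illustrative placeholders, not the paper's fitted values:

```python
import numpy as np

def kriging_predict(X, y, Xq, length=1.0, nugget=1e-10):
    """Minimal simple-Kriging sketch (zero mean, Gaussian correlation):
    interpolate simulation outputs y sampled at inputs X and predict
    at query points Xq. X, Xq have shape (n, d) and (m, d)."""
    def corr(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2 * length ** 2))
    K = corr(X, X) + nugget * np.eye(len(X))   # nugget for conditioning
    w = np.linalg.solve(K, y)                  # Kriging weights
    return corr(Xq, X) @ w
```

In a sequential design, the predictor (and its variance, omitted here) is refit after each new simulation run to choose the next input combination.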
Tulio Rosembuj
2006-12-01
There is no singular globalization, nor is it the result of an individual agent. We could start by saying that global action has different angles, the subjects who perform it are different, and so are its objectives. The global is an invisible invasion of materials and immediate effects.
Protein structure modeling and refinement by global optimization in CASP12.
Hong, Seung Hwan; Joung, InSuk; Flores-Canales, Jose C; Manavalan, Balachandran; Cheng, Qianyi; Heo, Seungryong; Kim, Jong Yun; Lee, Sun Young; Nam, Mikyung; Joo, Keehyoung; Lee, In-Ho; Lee, Sung Jong; Lee, Jooyoung
2018-03-01
For protein structure modeling in the CASP12 experiment, we have developed a new protocol based on our previous CASP11 approach. The global optimization method of conformational space annealing (CSA) was applied to 3 stages of modeling: multiple sequence-structure alignment, three-dimensional (3D) chain building, and side-chain re-modeling. For better template selection and model selection, we updated our model quality assessment (QA) method with the newly developed SVMQA (support vector machine for quality assessment). For 3D chain building, we updated our energy function by including restraints generated from predicted residue-residue contacts. New energy terms for the predicted secondary structure and predicted solvent accessible surface area were also introduced. For difficult targets, we proposed a new method, LEEab, where the template term played a less significant role than it did in LEE, complemented by increased contributions from other terms such as the predicted contact term. For TBM (template-based modeling) targets, LEE performed better than LEEab, but for FM targets, LEEab was better. For model refinement, we modified our CASP11 molecular dynamics (MD) based protocol by using explicit solvents and tuning down restraint weights. Refinement results from MD simulations that used a new augmented statistical energy term in the force field were quite promising. Finally, when using inaccurate information (such as the predicted contacts), it was important to use the Lorentzian function for which the maximal penalty arising from wrong information is always bounded. © 2017 Wiley Periodicals, Inc.
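The point about the Lorentzian function above is that its penalty saturates: a wrong predicted contact can contribute at most a bounded amount to the energy. A hedged sketch of such a bounded restraint (the parameter names are illustrative, not the authors'):

```python
def lorentzian_restraint(d, d0, width=1.0, weight=1.0):
    """Bounded restraint penalty of Lorentzian shape: grows like the
    squared deviation near the target distance d0, but saturates to
    `weight` far away, so inaccurate information (e.g. a wrong
    predicted contact) cannot dominate the total energy."""
    x = (d - d0) / width
    return weight * x * x / (1.0 + x * x)
```

By contrast, a harmonic restraint weight*x*x is unbounded, so a single badly wrong restraint could overwhelm all correct ones.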
Two-stage collaborative global optimization design model of the CHPG microgrid
Liao, Qingfen; Xu, Yeyan; Tang, Fei; Peng, Sicheng; Yang, Zheng
2017-06-01
With the continuous development of technology and the reduction of investment costs, the proportion of renewable energy in the power grid is rising because of its clean, environmentally friendly characteristics; this may require larger-capacity energy storage devices, increasing the cost. A two-stage collaborative global optimization design model of the combined-heat-power-and-gas (abbreviated as CHPG) microgrid is proposed in this paper to minimize the cost by using virtual storage without extending the existing storage system. P2G technology is used as virtual multi-energy storage in CHPG, which can coordinate the operation of the electric energy network and the natural gas network at the same time. Demand response is also a good kind of virtual storage, including economic guidance for the DGs and heat pumps on the demand side and priority scheduling of controllable loads. The two kinds of storage coordinate to smooth the high-frequency and low-frequency fluctuations of renewable energy, respectively, and simultaneously achieve a lower-cost operation scheme. Finally, the feasibility and superiority of the proposed design model are demonstrated in a simulation of a CHPG microgrid.
Yang, Jian; Cong, Weijian; Fan, Jingfan; Liu, Yue; Wang, Yongtian; Chen, Yang
2014-01-01
The clinical value of the 3D reconstruction of a coronary artery is important for the diagnosis and intervention of cardiovascular diseases. This work proposes a method based on a deformable model for reconstructing coronary arteries from two monoplane angiographic images acquired from different angles. First, an external force back-projective composition model is developed to determine the external force, for which the force distributions in different views are back-projected to the 3D space and composited in the same coordinate system based on the perspective projection principle of x-ray imaging. The elasticity and bending forces are composited as an internal force to maintain the smoothness of the deformable curve. Second, the deformable curve evolves rapidly toward the true vascular centerlines in 3D space and angiographic images under the combination of internal and external forces. Third, densely matched correspondence among vessel centerlines is constructed using a curve alignment method. The bundle adjustment method is then utilized for the global optimization of the projection parameters and the 3D structures. The proposed method is validated on phantom data and routine angiographic images with consideration for space and re-projection image errors. Experimental results demonstrate the effectiveness and robustness of the proposed method for the reconstruction of coronary arteries from two monoplane angiographic images. The proposed method can achieve a mean space error of 0.564 mm and a mean re-projection error of 0.349 mm.
Slepoy, A; Peters, M D; Thompson, A P
2007-11-30
Molecular dynamics and other molecular simulation methods rely on a potential energy function, based only on the relative coordinates of the atomic nuclei. Such a function, called a force field, approximately represents the electronic structure interactions of a condensed matter system. Developing such approximate functions and fitting their parameters remains an arduous, time-consuming process, relying on expert physical intuition. To address this problem, a functional programming methodology was developed that may enable automated discovery of entirely new force-field functional forms, while simultaneously fitting parameter values. The method uses a combination of genetic programming, Metropolis Monte Carlo importance sampling and parallel tempering, to efficiently search a large space of candidate functional forms and parameters. The methodology was tested using a nontrivial problem with a well-defined globally optimal solution: a small set of atomic configurations was generated and the energy of each configuration was calculated using the Lennard-Jones pair potential. Starting with a population of random functions, our fully automated, massively parallel implementation of the method reproducibly discovered the original Lennard-Jones pair potential by searching for several hours on 100 processors, sampling only a minuscule portion of the total search space. This result indicates that, with further improvement, the method may be suitable for unsupervised development of more accurate force fields with completely new functional forms. Copyright (c) 2007 Wiley Periodicals, Inc.
Global shape optimization of airfoil using multi-objective genetic algorithm
Lee, Ju Hee; Lee, Sang Hwan; Park, Kyoung Woo
2005-01-01
The shape optimization of an airfoil has been performed for an incompressible viscous flow. In this study, Pareto frontier sets, which are global and non-dominated solutions, can be obtained without various weighting factors by using the multi-objective genetic algorithm. An NACA0012 airfoil is considered as a baseline model, and the profile of the airfoil is parameterized and rebuilt with four Bezier curves. The two curves from the leading edge to the maximum thickness are composed of five control points each, and the rest, from the maximum thickness to the trailing edge, are composed of four control points. There are eighteen design variables and two objective functions, the lift and drag coefficients. A generation is made up of forty-five individuals. After fifteen generations, twenty Pareto individuals can be obtained. One Pareto solution, the best at reducing the drag force, improves drag by 13% and lift-drag ratio by 2%. Another, focused on increasing the lift force, improves lift by 61% while sustaining its drag force, compared to the baseline model.
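Extracting the non-dominated (Pareto) individuals from a population is the core filtering step of such a multi-objective GA. A minimal illustration for objectives to be minimized, e.g. (drag, -lift) pairs (not the authors' implementation):

```python
def pareto_front(points):
    """Return the non-dominated points for objectives all to be
    minimized, e.g. (drag, -lift) tuples. Brute-force O(n^2)
    filter, illustrative only."""
    def dominates(q, p):
        # q dominates p: no worse in every objective, better in one
        return (q != p and all(qi <= pi for qi, pi in zip(q, p))
                and any(qi < pi for qi, pi in zip(q, p)))
    return [p for p in points if not any(dominates(q, p) for q in points)]
```

Because dominance replaces a weighted sum, no weighting factors between lift and drag are needed, which is exactly the property the abstract highlights.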
Ghasemi, Mojtaba; Ghavidel, Sahand; Aghaei, Jamshid; Gitizadeh, Mohsen; Falah, Hasan
2014-01-01
Highlights: • Chaotic invasive weed optimization techniques based on chaos. • Nonlinear environmental OPF problem considering non-smooth fuel cost curves. • A comparative study of CIWO techniques for the environmental OPF problem. - Abstract: This paper presents efficient chaotic invasive weed optimization (CIWO) techniques based on chaos for solving optimal power flow (OPF) problems with non-smooth generator fuel cost functions (non-smooth OPF) with the minimum pollution level (environmental OPF) in electric power systems. The OPF problem is used for developing corrective strategies and to perform least-cost dispatches. However, cost-based OPF problem solutions usually result in unattractive system gas emission levels (environmental OPF). In the present paper, the OPF problem is formulated by considering the emission issue. The total emission can be expressed as a non-linear function of power generation, giving a multi-objective optimization problem in which optimal control settings for simultaneous minimization of fuel cost and gas emissions are obtained. The IEEE 30-bus test power system is presented to illustrate the application of the environmental OPF problem using CIWO techniques. Our experimental results suggest that CIWO techniques hold immense promise as efficient and powerful algorithms for optimization in power systems.
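CIWO-style algorithms typically replace uniform pseudo-random numbers with sequences from a chaotic map such as the logistic map. A minimal sketch of that chaos source (the initial value and map choice here are illustrative, not necessarily the maps used in the paper):

```python
def logistic_sequence(n, x0=0.7, mu=4.0):
    """Chaotic logistic-map sequence x_{k+1} = mu * x_k * (1 - x_k),
    the kind of chaos source chaos-based optimizers substitute for
    uniform random numbers, e.g. when scattering weed seeds.
    mu = 4 gives fully chaotic behaviour on (0, 1)."""
    xs, x = [], x0
    for _ in range(n):
        x = mu * x * (1.0 - x)
        xs.append(x)
    return xs
```

The sequence is deterministic yet ergodic over (0, 1), which is claimed to improve the diversity of candidate solutions compared with standard pseudo-random draws.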
Sintering process optimization for multi-layer CGO membranes by in situ techniques
Kaiser, Andreas; Prasad, A.S.; Foghmoes, Søren Preben Vagn
2013-01-01
The sintering of asymmetric CGO bi-layers (a thin dense membrane on a porous support; Ce0.9Gd0.1O1.95-delta = CGO) with Co3O4 as sintering additive has been optimized by a combination of two in situ techniques. Optical dilatometry revealed that bi-layer shape and microstructure change dramatically in a narrow temperature range of less than 100 degrees C. Below 1030 degrees C, a higher densification rate in the dense membrane layer than in the porous support leads to concave shape, whereas the densification rate of the support is dominant above 1030 degrees C, leading to convex shape. A flat bi-layer could be prepared at 1030 degrees C, when shrinkage rates were similar. In situ van der Pauw measurements on tape-cast layers allowed following the conductivity during sintering. A strong increase in conductivity and in activation energy E-a for conduction was observed...
Cooper, G.S. Jr.; Kaluarachchi, J.J.; Peralta, R.C.
1993-01-01
An innovative approach is presented to minimize pumping for immobilizing a floating plume of a light non-aqueous phase liquid (LNAPL). The best pumping strategy is determined to contain the free oil product and provide for gradient control of the water table. This approach combines detailed simulation, statistical analysis, and optimization. The modeling technique uses regression equations that describe system response to variable pumping stimuli. The regression equations were developed from analysis of systematically performed simulations of multiphase flow in an areal region of an unconfined aquifer. Simulations were performed using ARMOS, a finite element model. ARMOS can be used to simulate a spill, leakage from subsurface storage facilities, and recovery of hydrocarbons from trenches or pumping wells, in order to design remediation schemes.
An Exploratory Study on the Optimized Test Conditions of the Lock-in Thermography Technique
Cho, Yong Jin
2011-01-01
This work addresses the application of lock-in infrared thermography in the shipbuilding and ocean engineering industry. For this purpose, an exploratory study to find the optimized test conditions was carried out using design of experiments. Quantifying the phase contrast images against a reference image, weighted by defect hole size, was confirmed to be a useful method. Illuminated optical intensity of low or medium strength gives a good phase contrast image. To obtain a good phase contrast image, the lock-in frequency should be high in proportion to the illuminated optical intensity, and the integration time of the infrared camera should be inversely proportional to the optical intensity. On the other hand, differences in specimen material gave slightly biased results that were not discriminative.
Harding, D.C.; Eldred, M.S.; Witkowski, W.R.
1995-01-01
Type B radioactive material transport packages must meet strict Nuclear Regulatory Commission (NRC) regulations specified in 10 CFR 71. Type B containers include impact limiters, radiation or thermal shielding layers, and one or more containment vessels. In the past, each component was typically designed separately based on its driving constraint and the expertise of the designer. The components were subsequently assembled and the design modified iteratively until all of the design criteria were met. This approach neglects the fact that components may serve secondary purposes as well as primary ones. For example, an impact limiter's primary purpose is to act as an energy absorber and protect the contents of the package, but it can also act as a heat dissipater or insulator. Designing the component to maximize its performance with respect to both objectives can be accomplished using numerical optimization techniques.
Engine Yaw Augmentation for Hybrid-Wing-Body Aircraft via Optimal Control Allocation Techniques
Taylor, Brian R.; Yoo, Seung Yeun
2011-01-01
Asymmetric engine thrust was implemented in a hybrid-wing-body non-linear simulation to reduce the amount of aerodynamic surface deflection required for yaw stability and control. Hybrid-wing-body aircraft are especially susceptible to yaw surface deflection due to their decreased bare airframe yaw stability resulting from the lack of a large vertical tail aft of the center of gravity. Reduced surface deflection, especially for trim during cruise flight, could reduce the fuel consumption of future aircraft. Designed as an add-on, optimal control allocation techniques were used to create a control law that tracks total thrust and yaw moment commands with an emphasis on not degrading the baseline system. Implementation of engine yaw augmentation is shown and feasibility is demonstrated in simulation with a potential drag reduction of 2 to 4 percent. Future flight tests are planned to demonstrate feasibility in a flight environment.
Analysis and optimization of a proton exchange membrane fuel cell using modeling techniques
Torre Valdés, Ing. Raciel de la; García Parra, MSc. Lázaro Roger; González Rodríguez, MSc. Daniel
2015-01-01
This paper proposes a three-dimensional, non-isothermal, steady-state model of a Proton Exchange Membrane Fuel Cell using Computational Fluid Dynamics techniques, specifically ANSYS FLUENT 14.5. Multicomponent diffusion and two-phase flow are considered. The model was compared with published experimental data and with another model. The operating parameters analyzed are reactant pressure and temperature, gas flow direction, gas diffusion layer and catalyst layer porosity, reactant humidification, and oxygen concentration. The model allows optimization of the fuel cell design, taking into consideration the channel dimensions, channel length, and membrane thickness. Furthermore, fuel cell performance is analyzed with a SPEEK membrane, an alternative electrolyte to Nafion; carrying out this membrane material study requires modifying the expression that describes the electrolyte's ionic conductivity. It is found that device performance is highly sensitive to variations in pressure, temperature, reactant humidification, and oxygen concentration. (author)
Optimal Draft requirement for vibratory tillage equipment using Genetic Algorithm Technique
Rao, Gowripathi; Chaudhary, Himanshu; Singh, Prem
2018-03-01
Agriculture is an important sector of the Indian economy. Primary and secondary tillage operations are required for any land preparation process. Conventionally, different tractor-drawn implements such as the mouldboard plough, disc plough, subsoiler, cultivator, and disc harrow are used for primary and secondary soil manipulation. Oscillatory tillage equipment is one such type, which uses vibratory motion for tillage. Several investigators have reported that draft consumption is higher for conventional primary tillage implements than for oscillating ones, because the former are always in contact with the soil. Therefore, in this paper an attempt is made to find, from experimental data available in the literature, the optimal parameters that minimize draft consumption using a genetic algorithm technique.
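A minimal sketch of the genetic-algorithm step described above, minimizing a hypothetical draft model; the quadratic model, its coefficients, and the GA settings are illustrative assumptions, not values from the paper's experimental data:

```python
import random

# Toy GA minimizing a hypothetical draft model D(f, a) for oscillatory
# tillage; the model and all settings below are illustrative only.
def draft(f, a):
    # Hypothetical draft (kN) vs oscillation frequency f (Hz) and amplitude a (mm).
    return (f - 8.0) ** 2 + 0.5 * (a - 30.0) ** 2 + 2.0

def genetic_minimize(pop_size=30, generations=60, seed=1):
    rng = random.Random(seed)
    pop = [(rng.uniform(2, 14), rng.uniform(10, 50)) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda ind: draft(*ind))
        parents = pop[: pop_size // 2]            # truncation selection (elitist)
        children = []
        while len(children) < pop_size - len(parents):
            (f1, a1), (f2, a2) = rng.sample(parents, 2)
            f = 0.5 * (f1 + f2) + rng.gauss(0, 0.3)   # averaging crossover + mutation
            a = 0.5 * (a1 + a2) + rng.gauss(0, 1.0)
            children.append((f, a))
        pop = parents + children
    return min(pop, key=lambda ind: draft(*ind))
```

Because the best half of each generation survives unchanged, the best-so-far draft value never worsens.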
Del Rio, Beatriz G; Dieterich, Johannes M; Carter, Emily A
2017-08-08
The accuracy of local pseudopotentials (LPSs) is one of two major determinants of the fidelity of orbital-free density functional theory (OFDFT) simulations. We present a global optimization strategy for LPSs that enables OFDFT to reproduce solid and liquid properties obtained from Kohn-Sham DFT. Our optimization strategy can fit arbitrary properties from both solid and liquid phases, so the resulting globally optimized local pseudopotentials (goLPSs) can be used in solid and/or liquid-phase simulations depending on the fitting process. We show three test cases proving that we can (1) improve solid properties compared to our previous bulk-derived local pseudopotential generation scheme; (2) refine predicted liquid and solid properties by adding force matching data; and (3) generate a from-scratch, accurate goLPS from the local channel of a non-local pseudopotential. The proposed scheme therefore serves as a full and improved LPS construction protocol.
Optimization of GPS water vapor tomography technique with radiosonde and COSMIC historical data
S. Ye
2016-09-01
The near-real-time, high-spatial-resolution distribution of atmospheric water vapor is vital in numerical weather prediction. The GPS tomography technique has proved effective for three-dimensional water vapor reconstruction. In this study, the tomography processing is optimized in several respects with the aid of radiosonde and COSMIC historical data. Firstly, regional tropospheric zenith hydrostatic delay (ZHD) models are improved, so the zenith wet delay (ZWD) can be obtained at a higher accuracy. Secondly, the regional conversion factor for converting ZWD to precipitable water vapor (PWV) is refined. Next, we develop a new method for dividing the tomography grid with an uneven voxel height and a varied water vapor layer top. Finally, we propose a Gaussian exponential vertical interpolation method which better reflects the vertical variation characteristics of water vapor. GPS datasets collected in Hong Kong in February 2014 are employed to evaluate the optimized tomographic method against the conventional method. The radiosonde-derived and COSMIC-derived water vapor densities are used as references to evaluate the tomographic results. Using radiosonde products as references, the test results indicate that the water vapor density accuracy of the optimized method is improved by 15 and 12 % relative to the conventional method below and above the height of 3.75 km, respectively. Using the COSMIC products as references, the results indicate that the water vapor density accuracy is improved by 15 and 19 % below and above 3.75 km, respectively.
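The vertical interpolation idea can be sketched with a Gaussian-exponential style profile; the functional form and all parameters below are illustrative assumptions, since the paper's exact expression is not reproduced here:

```python
import math

# Sketch of a Gaussian-exponential vertical profile of water vapor density,
# rho(h) = rho0 * exp(-(h / H)**k); rho0, H and k are illustrative values,
# not the paper's fitted parameters.
def wv_density(h_km, rho0=10.0, scale_km=2.0, k=1.3):
    # Density (g/m^3) decays with height h_km; k > 1 steepens the falloff.
    return rho0 * math.exp(-((h_km / scale_km) ** k))

def interpolate_layer(h_lo, h_hi, **params):
    # Midpoint value used as a voxel's representative density.
    return wv_density(0.5 * (h_lo + h_hi), **params)
```

Such a profile assigns each tomography voxel a height-dependent value instead of a constant, which is the motivation for the uneven voxel heights described above.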
Liu, Jing-Han; Zhou, Jun; Ouyang, Xi-Lin; Li, Xi-Jin; Lu, Fa-Qiang
2005-08-01
This study aimed to further optimize the trehalose loading technique, including loading temperature, loading time, loading solution, and loading concentration of trehalose, based on previously established parameters. Loading efficiency in plasma was compared with that in buffer at 37 degrees C; the curves of intracellular trehalose concentration versus loading time at 37 degrees C and 16 degrees C were measured; and curves of mean platelet volume (MPV) versus loading time and loading concentration were investigated and compared. From the results obtained, the loading time, loading temperature, loading solution, and trehalose concentration were ascertained for high loading efficiency of trehalose into human platelets. The results showed that the loading efficiency in plasma was markedly higher than that in buffer at 37 degrees C; the loading efficiency in plasma at 37 degrees C was significantly higher than that at 16 degrees C and reached 19.51% after loading for 4 hours, versus 6.16% at 16 degrees C. MPV after loading at 16 degrees C was 43.2% greater than that at 37 degrees C, but showed no distinct changes with loading time or loading concentration. During loading at 37 degrees C, MPV increased with both loading time and loading concentration, and the two displayed a synergetic effect on MPV; MPV increased with loading time and concentration when the trehalose loading concentration was above 50 mmol/L. It is concluded that the optimized parameters of the trehalose loading technique are 37 degrees C (temperature), 4 hours (loading time), plasma (loading solution), and 50 mmol/L (feasible trehalose concentration). The trehalose concentration can be adjusted to meet the requirements of lyophilization.
Comparison of metaheuristic techniques to determine optimal placement of biomass power plants
Reche-Lopez, P.; Ruiz-Reyes, N.; Garcia Galan, S.; Jurado, F.
2009-01-01
This paper deals with the application and comparison of several metaheuristic techniques to optimize the placement and supply area of biomass-fueled power plants. Both trajectory-based and population-based methods are applied. In particular, two well-known trajectory methods, Simulated Annealing (SA) and Tabu Search (TS), and two commonly used population-based methods, Genetic Algorithms (GA) and Particle Swarm Optimization (PSO), are considered. In addition, a new binary PSO algorithm is proposed which incorporates an inertia weight factor, like the classical continuous approach. The fitness function for the metaheuristics is the profitability index, defined as the ratio between the net present value and the initial investment. In this work, forest residues are considered as the biomass source, and the problem constraints are that the generation system must be located inside the supply area and that its maximum electric power is 5 MW. The comparative results obtained by all considered metaheuristics are discussed. Random walk has also been assessed for this problem.
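The binary PSO with an inertia weight mentioned above can be sketched as follows; a OneMax-style placeholder stands in for the profitability-index fitness, and all settings are illustrative:

```python
import math
import random

# Sketch of binary PSO with an inertia weight w, as in continuous PSO:
# velocities are real-valued and a sigmoid maps them to bit-flip probabilities.
def binary_pso(n_bits=20, n_particles=15, iters=80, w=0.7, c1=1.5, c2=1.5, seed=0):
    rng = random.Random(seed)
    fitness = lambda x: sum(x)                    # placeholder objective (OneMax)
    sig = lambda v: 1.0 / (1.0 + math.exp(-v))
    X = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(n_particles)]
    V = [[0.0] * n_bits for _ in range(n_particles)]
    pbest = [x[:] for x in X]
    gbest = max(X, key=fitness)[:]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(n_bits):
                r1, r2 = rng.random(), rng.random()
                V[i][d] = (w * V[i][d]
                           + c1 * r1 * (pbest[i][d] - X[i][d])
                           + c2 * r2 * (gbest[d] - X[i][d]))
                X[i][d] = 1 if rng.random() < sig(V[i][d]) else 0
            if fitness(X[i]) > fitness(pbest[i]):
                pbest[i] = X[i][:]
                if fitness(X[i]) > fitness(gbest):
                    gbest = X[i][:]
    return gbest
```

In the siting problem, each bit would instead encode whether a candidate location belongs to the plant's supply area, with the profitability index as `fitness`.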
Multi-view 3D scene reconstruction using ant colony optimization techniques
Chrysostomou, Dimitrios; Gasteratos, Antonios; Nalpantidis, Lazaros; Sirakoulis, Georgios C
2012-01-01
This paper presents a new method performing high-quality 3D object reconstruction of complex shapes derived from multiple, calibrated photographs of the same scene. The novelty of this research is found in two basic elements, namely: (i) a novel voxel dissimilarity measure, which accommodates the elimination of the lighting variations of the models and (ii) the use of an ant colony approach for further refinement of the final 3D models. The proposed reconstruction procedure employs a volumetric method based on a novel projection test for the production of a visual hull. While the presented algorithm shares certain aspects with the space carving algorithm, it is, nevertheless, first enhanced with the lightness compensating image comparison method, and then refined using ant colony optimization. The algorithm is fast, computationally simple and results in accurate representations of the input scenes. In addition, compared to previous publications, the particular nature of the proposed algorithm allows accurate 3D volumetric measurements under demanding lighting environmental conditions, due to the fact that it can cope with uneven light scenes, resulting from the characteristics of the voxel dissimilarity measure applied. Besides, the intelligent behavior of the ant colony framework provides the opportunity to formulate the process as a combinatorial optimization problem, which can then be solved by means of a colony of cooperating artificial ants, resulting in very promising results. The method is validated with several real datasets, along with qualitative comparisons with other state-of-the-art 3D reconstruction techniques, following the Middlebury benchmark. (paper)
Optimal Sizing and Location of Distributed Generators Based on PBIL and PSO Techniques
Luis Fernando Grisales-Noreña
2018-04-01
The optimal location and sizing of distributed generation is a suitable option for improving the operation of electric systems. This paper proposes a parallel implementation of the Population-Based Incremental Learning (PBIL) algorithm to locate distributed generators (DGs), and the use of Particle Swarm Optimization (PSO) to define the size of those devices. The resulting method is a master-slave hybrid approach based on the parallel PBIL (PPBIL) algorithm and PSO, which reduces the computation time in comparison with other techniques commonly used to address this problem. Moreover, the new hybrid method also reduces the active power losses and improves the nodal voltage profiles. In order to verify the performance of the new method, test systems with 33 and 69 buses are implemented in Matlab, using Matpower, for evaluating multiple cases. Finally, the proposed method is contrasted with the Loss Sensitivity Factor (LSF), a Genetic Algorithm (GA), and a parallel Monte Carlo algorithm. The results demonstrate that the proposed PPBIL-PSO method provides the best balance between processing time, voltage profiles, and reduction of power losses.
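The PBIL half of the hybrid can be sketched by its probability-vector update; the OneMax placeholder fitness stands in for the real DG-siting objective, and all settings are illustrative:

```python
import random

# Minimal PBIL sketch: a probability vector over bits (candidate DG locations)
# is nudged toward the best sample each generation. The OneMax fitness is a
# stand-in for the real siting objective (loss reduction, voltage profile).
def pbil(n_bits=16, pop=20, generations=60, lr=0.1, seed=3):
    rng = random.Random(seed)
    prob = [0.5] * n_bits                       # start unbiased
    fitness = lambda x: sum(x)                  # placeholder objective
    for _ in range(generations):
        samples = [[1 if rng.random() < p else 0 for p in prob]
                   for _ in range(pop)]
        best = max(samples, key=fitness)
        # Move each probability toward the best sample's bit by learning rate lr.
        prob = [(1 - lr) * p + lr * b for p, b in zip(prob, best)]
    return [1 if p > 0.5 else 0 for p in prob]
```

In the full hybrid, each candidate location vector sampled by PBIL would be handed to a PSO slave that sizes the DGs before the fitness is evaluated.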
Tsai, Wen-Ping; Chang, Fi-John; Chang, Li-Chiu; Herricks, Edwin E.
2015-11-01
Flow regime is the key driver of the riverine ecology. This study proposes a novel hybrid methodology based on artificial intelligence (AI) techniques for quantifying riverine ecosystems requirements and delivering suitable flow regimes that sustain river and floodplain ecology through optimizing reservoir operation. This approach addresses issues to better fit riverine ecosystem requirements with existing human demands. We first explored and characterized the relationship between flow regimes and fish communities through a hybrid artificial neural network (ANN). Then the non-dominated sorting genetic algorithm II (NSGA-II) was established for river flow management over the Shihmen Reservoir in northern Taiwan. The ecosystem requirement took the form of maximizing fish diversity, which could be estimated by the hybrid ANN. The human requirement was to provide a higher satisfaction degree of water supply. The results demonstrated that the proposed methodology could offer a number of diversified alternative strategies for reservoir operation and improve reservoir operational strategies producing downstream flows that could meet both human and ecosystem needs. Applications that make this methodology attractive to water resources managers benefit from the wide spread of Pareto-front (optimal) solutions allowing decision makers to easily determine the best compromise through the trade-off between reservoir operational strategies for human and ecosystem needs.
Castillo M, J.A. [ININ, 52045 Ocoyoacac, Estado de Mexico (Mexico)
2003-07-01
The basic elements of the Tabu search technique are presented, with emphasis on its advantages over traditional descent-based optimization methods. Some modifications that have been made to the technique over time to make it more robust are then outlined. Finally, some areas where this technique has been applied with successful results are described. (Author)
Biswas, A.; Sharma, S. P.
2012-12-01
Self-potential (SP) is an important geophysical technique that measures the electrical potential due to natural current sources in the Earth's subsurface. An inclined sheet is a familiar model structure associated with mineralization, fault planes, groundwater flow, and many other geological features that exhibit an SP anomaly. A number of linearized and global inversion approaches have been developed for the interpretation of SP anomalies over different structures. The mathematical expression for the forward response over a two-dimensional dipping sheet can be written in three different ways, each using five variables, and the complexity of inversion differs among the three forward approaches. In the present study, an interpretation of self-potential anomalies using very fast simulated annealing (VFSA) global optimization has been developed, which yields new insight into the uncertainty and equivalence of model parameters. Interpretation of the measured data yields the location of the causative body, the depth to its top, its extension, dip, and quality. A comparative evaluation of the three forward approaches is performed to assess the efficacy of each in resolving possible ambiguity. Even though each forward formulation yields the same forward response, optimizing different sets of variables via the different forward problems poses different kinds of ambiguity in the interpretation. The performance of the three approaches has been compared, and one approach is observed to be the best and most suitable for this kind of study. Our VFSA approach has been tested on synthetic, noisy, and field data for the three methods to show the efficacy and suitability of the best method. It is important to use the forward problem in the optimization that yields the
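The very fast simulated annealing (VFSA) scheme can be sketched as below; the Cauchy-like move generation and temperature schedule follow the standard VFSA literature, and the quadratic misfit is a stand-in for an actual SP-anomaly objective:

```python
import math
import random

# Sketch of very fast simulated annealing (VFSA): Cauchy-like moves drawn per
# dimension, temperature schedule T_k = T0 * exp(-c * k**(1/D)), Metropolis
# acceptance. The caller supplies the misfit and the parameter bounds.
def vfsa(misfit, lo, hi, t0=1.0, c=1.0, iters=2000, seed=7):
    rng = random.Random(seed)
    dim = len(lo)
    x = [rng.uniform(l, h) for l, h in zip(lo, hi)]
    fx = misfit(x)
    xbest, fbest = x, fx
    for k in range(1, iters + 1):
        t = t0 * math.exp(-c * k ** (1.0 / dim))
        y = []
        for d in range(dim):
            u = rng.random()
            # Ingber-style move: step sizes span many scales at any temperature.
            step = math.copysign(t * ((1 + 1 / t) ** abs(2 * u - 1) - 1), u - 0.5)
            y.append(min(hi[d], max(lo[d], x[d] + step * (hi[d] - lo[d]))))
        fy = misfit(y)
        if fy < fx or rng.random() < math.exp(-(fy - fx) / max(t, 1e-12)):
            x, fx = y, fy
            if fx < fbest:
                xbest, fbest = x, fx
    return xbest, fbest
```

For an SP inversion, `misfit` would measure the difference between observed and forward-modeled anomalies over the five sheet parameters.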
Dual-phase helical CT using bolus triggering technique: optimization of transition time
Choi, Young Ho; Kim, Tae Kyoung; Park, Byung Kwan; Koh, Young Hwan; Han, Joon Koo; Choi, Byung Ihn
1999-01-01
To optimize the transition time between the triggering point in monitoring scanning and the initiation of diagnostic hepatic arterial phase (HAP) scanning in hepatic spiral CT, using a bolus triggering technique. One hundred consecutive patients with focal hepatic lesions were included in this study. Patients were randomized into two groups: transition times of 7 and 11 seconds were used in groups 1 and 2, respectively. In all patients, bolus-triggered HAP spiral CT was obtained using a semi-automatic bolus tracking program after the injection of 120 mL of non-ionic contrast media at a rate of 3 mL/sec. When aortic enhancement reached 90 HU, diagnostic HAP scanning began after the given transition time. From the images of groups 1 and 2, the degree of parenchymal enhancement of the liver and the tumor-to-liver attenuation difference were measured. Also, for qualitative analysis, conspicuity of the hepatic artery and hypervascular tumor was scored and analyzed. Hepatic parenchymal enhancement on HAP was 12.07 ± 6.44 HU in group 1 and 16.03 ± 5.80 HU in group 2 (p .05). In the evaluation of conspicuity of the hepatic artery, there was no statistically significant difference between the two groups (p > .05). The conspicuity of hypervascular tumors in group 2 was higher than in group 1 (p < .05). HAP spiral CT using a bolus triggering technique with a transition time of 11 seconds provides better HAP images than a transition time of 7 seconds.
Juan Antonio Castro Flores
Mesial temporal sclerosis creates a focal epileptic syndrome that usually requires surgical resection of mesial temporal structures. Objective: To describe a novel operative technique for treatment of temporal lobe epilepsy and its clinical results. Methods: Prospective case-series at a single institution, performed by a single surgeon, from 2006 to 2012. A total of 120 patients were submitted to minimally-invasive keyhole transtemporal amygdalohippocampectomy. Results: Of the patients, 55% were male, and 85% had a right-sided disease. The first 70 surgeries had a mean surgical time of 2.51 hours, and the last 50 surgeries had a mean surgical time of 1.62 hours. There was 3.3% morbidity, and 5% mild temporal muscle atrophy. There was no visual field impairment. On the Engel Outcome Scale at the two-year follow-up, 71% of the patients were Class I, 21% were Class II, and 6% were Class III. Conclusion: This novel technique is feasible and reproducible, with optimal clinical results.
Optimization of Fluorescent Silicon Nanomaterial Production Using the Peroxide/Acid/Salt Technique
Abuhassan, L.H.
2009-01-01
Silicon nanomaterial was prepared using the peroxide/acid/salt technique, in which an aqueous silicon-based salt solution is added to H2O2/HF etchants. In order to optimize the experimental conditions for silicon nanomaterial production, the amount of nanomaterial produced was studied as a function of the volume of the silicon salt solution used in the synthesis. A set of samples was prepared using 0, 5, 10, 15, and 20 ml of an aqueous 1 mg/L metasilicate solution. The area under the corresponding peaks in the infrared (IR) absorption spectra was used as a qualitative indicator of the amount of nanomaterial present. The results indicated that using 10 ml of the metasilicate solution produced the highest amount of nanomaterial. Furthermore, the results demonstrated that the peroxide/acid/salt technique enhances the production yield of silicon nanomaterial at a reduced power demand and with a higher material-to-void ratio. A model in which the silicon salt forms a secondary source of silicon nanomaterial is proposed: the auxiliary nanomaterial is deposited into the porous network, increasing the amount of nanomaterial produced and reducing the voids present. A reduction in the resistance of the porous layer, and consequently in the power required, is therefore expected. (author)
A New Technique of Removing Blind Spots to Optimize Wireless Coverage in Indoor Area
A. W. Reza
2013-01-01
Blind spots (or bad sampling points) in indoor areas are the positions where no signal exists (or the signal is too weak), and the presence of a receiver within a blind spot degrades the performance of the communication system. Therefore, it is a fundamental requirement to eliminate blind spots from the indoor area and obtain maximum coverage when designing wireless networks. In this regard, this paper combines ray-tracing (RT), a genetic algorithm (GA), depth-first search (DFS), and the branch-and-bound method into a new technique that guarantees the removal of blind spots and subsequently determines the optimal wireless coverage using the minimum number of transmitters. The proposed system outperforms existing techniques in terms of algorithmic complexity, and the computation time can be reduced by as much as 99% and 75%, respectively, compared to existing algorithms. Moreover, in experimental analysis the coverage prediction successfully reaches 99%, and thus the proposed coverage model effectively guarantees the removal of blind spots.
Zhigang Lian
2010-01-01
The job-shop scheduling problem (JSSP) is a branch of production scheduling and is among the hardest combinatorial optimization problems. Many different approaches have been applied to JSSP, but some instances, even of moderate size, cannot be solved with guaranteed optimality. The original particle swarm optimization algorithm (OPSOA) is generally used to solve continuous problems and is rarely applied to discrete problems such as JSSP; through this research, I find that it tends to get stuck in a near-optimal solution, especially for middle- and large-size problems. A combined local and global search particle swarm optimization algorithm (LGSCPSOA) is used to solve JSSP, in which the particle-updating mechanism benefits from the searching experience of the particle itself, the best of all particles in the swarm, and the best of the particles in the neighborhood population. A new coding method is used in LGSCPSOA to optimize JSSP, ensuring that all sequences are feasible solutions. Computational experiments on three representative instances show that the LGSCPSOA is efficacious for minimizing makespan in JSSP.
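The feasibility-preserving coding idea can be sketched with a random-key style decoder (an illustrative scheme, not necessarily the paper's exact coding): sorting a particle's continuous components always yields a sequence in which each job appears exactly as many times as it has operations, so every decoded particle is a feasible schedule:

```python
# Random-key decoding sketch for PSO on JSSP: continuous particle positions
# are decoded into an operation sequence by sorting. Component index i is
# assigned to job (i mod n_jobs), so the decoded sequence always contains
# each job exactly ops_per_job times, i.e. it is always feasible.
def decode(position, n_jobs, ops_per_job):
    assert len(position) == n_jobs * ops_per_job
    order = sorted(range(len(position)), key=lambda i: position[i])
    return [i % n_jobs for i in order]
```

The PSO then updates the continuous positions as usual, and only the decoded sequences are evaluated for makespan.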
Anatomy-based transmission factors for technique optimization in portable chest x-ray
Liptak, Christopher L.; Tovey, Deborah; Segars, William P.; Dong, Frank D.; Li, Xiang
2015-03-01
Portable x-ray examinations often account for a large percentage of all radiographic examinations. Currently, portable examinations do not employ automatic exposure control (AEC). To aid in the design of a size-specific technique chart, acrylic slabs of various thicknesses are often used to estimate x-ray transmission for patients of various body thicknesses. This approach, while simple, does not account for patient anatomy, tissue heterogeneity, and the attenuation properties of the human body. To better account for these factors, in this work, we determined x-ray transmission factors using computational patient models that are anatomically realistic. A Monte Carlo program was developed to model a portable x-ray system. Detailed modeling was done of the x-ray spectrum, detector positioning, collimation, and source-to-detector distance. Simulations were performed using 18 computational patient models from the extended cardiac-torso (XCAT) family (9 males, 9 females; age range: 2-58 years; weight range: 12-117 kg). The ratio of air kerma at the detector with and without a patient model was calculated as the transmission factor. Our study showed that the transmission factor decreased exponentially with increasing patient thickness. For the range of patient thicknesses examined (12-28 cm), the transmission factor ranged from approximately 21% to 1.9% when the air kerma used in the calculation represented an average over the entire imaging field of view. The transmission factor ranged from approximately 21% to 3.6% when the air kerma used in the calculation represented the average signals from two discrete AEC cells behind the lung fields. These exponential relationships may be used to optimize imaging techniques for patients of various body thicknesses to aid in the design of clinical technique charts.
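The exponential relationship reported above can be turned into a technique-chart tool directly. The sketch below fits T(x) = A·exp(−μx) through the two endpoint values quoted in the abstract (21% at 12 cm, 1.9% at 28 cm); this two-point log-linear fit is a simplification of fitting all 18 patient models:

```python
import math

# Fit the exponential transmission model T(x) = A * exp(-mu * x) through two
# (thickness, transmission) anchor points via a log-linear solve. Anchor
# values are the field-of-view averages quoted in the abstract.
def fit_exponential(x1, t1, x2, t2):
    mu = (math.log(t1) - math.log(t2)) / (x2 - x1)   # attenuation-like slope (1/cm)
    A = t1 * math.exp(mu * x1)                       # amplitude at zero thickness
    return A, mu

def transmission(x, A, mu):
    # Predicted transmission factor at patient thickness x (cm).
    return A * math.exp(-mu * x)

A, mu = fit_exponential(12.0, 0.21, 28.0, 0.019)
```

Inverting this model for a target detector air kerma would then give the mAs adjustment for a given measured patient thickness.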
Visualization of Global Disease Burden for the Optimization of Patient Management and Treatment
Winfried Schlee
2017-06-01
Background: The assessment and treatment of complex disorders is challenged by the multiple domains and instruments used to evaluate clinical outcome. With the large number of assessment tools typically used in complex disorders comes the challenge of obtaining an integrative view of disease status to further evaluate treatment outcome, both at the individual and group level. Radar plots are an attractive visual tool for displaying multivariate data in a two-dimensional graphical illustration. Here, we describe the use of radar plots for the visualization of disease characteristics in the context of tinnitus, a complex and heterogeneous condition, the treatment of which has shown mixed success. Methods: Data from two different cohorts, the Swedish Tinnitus Outreach Project (STOP) and the Tinnitus Research Initiative (TRI) database, were used. STOP is a population-based cohort from which cross-sectional data of 1,223 non-tinnitus and 933 tinnitus subjects were analyzed. By contrast, the TRI contained data from 571 patients who underwent various treatments and whose Clinical Global Impression (CGI) score was accessible to infer treatment outcome. In the latter, 34,560 permutations were tested to evaluate whether a particular ordering of the instruments could better reflect the treatment outcome measured with the CGI. Results: Radar plots confirmed that tinnitus subtypes such as occasional and chronic tinnitus from the STOP cohort could be strikingly different, and helped reveal a gender bias in tinnitus severity. Radar plots with greater surface areas were consistent with greater burden, and enabled a rapid appreciation of the global distress associated with tinnitus in patients categorized according to tinnitus severity. Permutations in the arrangement of instruments identified a configuration with minimal variance and maximized surface difference between CGI groups from the TRI database, thus affording a means of optimally
Cooperative Coevolution with Formula-Based Variable Grouping for Large-Scale Global Optimization.
Wang, Yuping; Liu, Haiyan; Wei, Fei; Zong, Tingting; Li, Xiaodong
2017-08-09
For a large-scale global optimization (LSGO) problem, divide-and-conquer is usually considered an effective strategy to decompose the problem into smaller subproblems, each of which can then be solved individually. Among these decomposition methods, variable grouping has been shown to be promising in recent years. Existing variable grouping methods usually assume the problem to be black-box (i.e., an analytical model of the objective function is unknown), and they attempt to learn an appropriate variable grouping that would allow for a better decomposition of the problem. In such cases, these variable grouping methods do not make direct use of the formula of the objective function. However, it can be argued that many real-world problems are white-box problems, that is, the formulas of the objective functions are often known a priori. These formulas provide rich information which can be used to design an effective variable grouping method. In this article, a formula-based grouping strategy (FBG) for white-box problems is first proposed. It groups variables directly via the formula of the objective function, which usually consists of a finite number of operations (the four arithmetic operations "+", "−", "×", "÷" and composite operations of basic elementary functions). In FBG, the operations are classified into two classes: one resulting in nonseparable variables, and the other resulting in separable variables. Variables can thus be automatically grouped into a suitable number of non-interacting subcomponents, with variables in each subcomponent being interdependent. FBG can easily be applied to any white-box problem and can be integrated into a cooperative coevolution framework. Based on FBG, a novel cooperative coevolution algorithm with formula-based variable grouping (called CCF) is proposed in this article for decomposing a large-scale white-box problem.
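The separable/nonseparable grouping can be sketched with a union-find pass over the additive terms of a white-box objective; the term representation below (each term listed as the indices of its interacting variables) is an illustrative simplification of parsing the full formula:

```python
# Sketch of formula-based grouping for an additively decomposable objective:
# f(x) is a sum of terms, each touching a subset of variables. Variables that
# share a multiplicative/composite term are nonseparable and merged into one
# group (union-find); purely additive separation keeps them apart.
def group_variables(terms, n_vars):
    parent = list(range(n_vars))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]   # path halving
            i = parent[i]
        return i
    def union(i, j):
        parent[find(i)] = find(j)
    for term in terms:                      # term = indices of interacting vars
        for v in term[1:]:
            union(term[0], v)
    groups = {}
    for v in range(n_vars):
        groups.setdefault(find(v), []).append(v)
    return sorted(groups.values())

# f(x) = x0*x1 + sin(x2 + x3*x4) + x5**2  ->  three interaction terms
example_groups = group_variables([[0, 1], [2, 3, 4], [5]], 6)
```

Each resulting group then becomes one subcomponent for the cooperative coevolution framework to optimize separately.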
A GPS-Based Pitot-Static Calibration Method Using Global Output-Error Optimization
Foster, John V.; Cunningham, Kevin
2010-01-01
Pressure-based airspeed and altitude measurements for aircraft typically require calibration of the installed system to account for pressure sensing errors such as those due to local flow field effects. In some cases, calibration is used to meet requirements such as those specified in Federal Aviation Regulation Part 25. Several methods are used for in-flight pitot-static calibration including tower fly-by, pacer aircraft, and trailing cone methods. In the 1990s, the introduction of satellite-based positioning systems to the civilian market enabled new in-flight calibration methods based on accurate ground speed measurements provided by the Global Positioning System (GPS). Use of GPS for airspeed calibration has many advantages such as accuracy, ease of portability (e.g. hand-held) and the flexibility of operating in airspace without the limitations of test range boundaries or ground telemetry support. The current research was motivated by the need for a rapid and statistically accurate method for in-flight calibration of pitot-static systems for remotely piloted, dynamically scaled research aircraft. Current calibration methods were deemed not practical for this application because of confined test range size and limited flight time available for each sortie. A method was developed that uses high data rate measurements of static and total pressure, and GPS-based ground speed measurements, to compute the pressure errors over a range of airspeeds. The novel aspect of this approach is the use of system identification methods that rapidly compute optimal pressure error models with defined confidence intervals in near-real time. This method has been demonstrated in flight tests and has shown 2-σ bounds of approximately 0.2 kts with an order of magnitude reduction in test time over other methods. As part of this experiment, a unique database of wind measurements was acquired concurrently with the flight experiments, for the purpose of experimental validation of the
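The flavor of the estimation step can be sketched with a much simpler stand-in: instead of the paper's global output-error formulation, an ordinary least-squares fit of a polynomial airspeed-error model against GPS-derived "truth" speed, with approximate 2-σ coefficient bounds. All data and the function name below are synthetic assumptions for illustration.

```python
# Simplified illustration (not the paper's exact method): fit a polynomial
# pitot-static airspeed-error model and report ~2-sigma coefficient bounds.
import numpy as np

def fit_airspeed_error(v_indicated, v_gps, order=2):
    """Fit error = v_gps - v_indicated as a polynomial in v_indicated."""
    err = v_gps - v_indicated
    X = np.vander(v_indicated, order + 1)          # design matrix
    coef, res, *_ = np.linalg.lstsq(X, err, rcond=None)
    dof = len(err) - (order + 1)
    sigma2 = float(res[0]) / dof if res.size else 0.0
    cov = sigma2 * np.linalg.inv(X.T @ X)          # coefficient covariance
    return coef, 2.0 * np.sqrt(np.diag(cov))       # ~2-sigma bounds

rng = np.random.default_rng(0)
v_ind = np.linspace(40.0, 120.0, 200)              # indicated airspeed, kts
v_true = v_ind + 0.5 + 0.01 * v_ind                # synthetic bias model
v_gps = v_true + rng.normal(0.0, 0.2, v_ind.size)  # noisy GPS ground speed
coef, bounds = fit_airspeed_error(v_ind, v_gps)
```

Evaluating `np.polyval(coef, v)` then gives the estimated correction over the tested airspeed range.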
Andruşcă Maria Carmen
2013-01-01
Globalization has highlighted an interdependence among nations, fostered by their daily interaction, the promotion of peace, and efforts to streamline and improve the effectiveness of the global economy. For globalization to function, developing countries must be involved, and they can be helped by developed ones. The international community can contribute to the creation of a development environment for the gl...
Lin, Y. S.; Medlyn, B. E.; Duursma, R.; Prentice, I. C.; Wang, H.
2014-12-01
Stomatal conductance (gs) is a key land surface attribute as it links transpiration, the dominant component of global land evapotranspiration and a key element of the global water cycle, and photosynthesis, the driving force of the global carbon cycle. Despite the pivotal role of gs in predictions of global water and carbon cycles, a global-scale database and an associated globally applicable model of gs that allow predictions of stomatal behaviour are lacking. We present a unique database of globally distributed gs obtained in the field for a wide range of plant functional types (PFTs) and biomes. We employed a model of optimal stomatal conductance to assess differences in stomatal behaviour, and estimated the model slope coefficient, g1, which is directly related to the marginal carbon cost of water, for each dataset. We found that g1 varies considerably among PFTs, with evergreen savanna trees having the largest g1 (least conservative water use), followed by C3 grasses and crops, angiosperm trees, gymnosperm trees, and C4 grasses. Amongst angiosperm trees, species with higher wood density had a higher marginal carbon cost of water, as predicted by the theory underpinning the optimal stomatal model. There was an interactive effect between temperature and moisture availability on g1: in wet environments, g1 was largest where the mean annual temperature during the period when temperature is above 0°C (Tm) was high, but g1 did not vary with Tm across dry environments. We examine whether these differences in leaf-scale behaviour are reflected in ecosystem-scale differences in water-use efficiency. These findings provide a robust theoretical framework for understanding and predicting the behaviour of stomatal conductance across biomes and across PFTs that can be applied to regional, continental and global-scale modelling of productivity and ecohydrological processes in a future changing climate.
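The slope estimation can be sketched for the unified optimal stomatal model of Medlyn et al. (2011), gs = g0 + 1.6·(1 + g1/√D)·A/Ca, by linear least squares. The data below are synthetic, not from the database described in the abstract, and `fit_g1` is a name invented for this sketch.

```python
# Illustrative least-squares estimate of g1 in the optimal stomatal model
# gs = g0 + 1.6*(1 + g1/sqrt(D))*A/Ca, using synthetic observations.
import numpy as np

def fit_g1(gs, A, D, Ca, g0=0.0):
    """Return the least-squares estimate of g1 (kPa^0.5)."""
    x = 1.6 * A / (Ca * np.sqrt(D))        # regressor multiplying g1
    y = gs - g0 - 1.6 * A / Ca             # residual attributable to g1
    return float(np.sum(x * y) / np.sum(x * x))

rng = np.random.default_rng(1)
A = rng.uniform(5.0, 20.0, 100)            # photosynthesis, umol m-2 s-1
D = rng.uniform(0.5, 3.0, 100)             # vapour pressure deficit, kPa
Ca = np.full(100, 400.0)                   # ambient CO2, ppm
g1_true = 4.0
gs = 1.6 * (1.0 + g1_true / np.sqrt(D)) * A / Ca + rng.normal(0, 0.005, 100)
g1_hat = fit_g1(gs, A, D, Ca)
```

A low fitted g1 corresponds to conservative water use (a high marginal carbon cost of water), matching the interpretation in the text.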
Karthivashan G
2016-07-01
Govindarajan Karthivashan,1 Mas Jaffri Masarudin,2 Aminu Umar Kura,1 Faridah Abas,3,4 Sharida Fakurazi1,5 1Laboratory of Vaccines and Immunotherapeutics, Institute of Bioscience, 2Department of Cell and Molecular Biology, Faculty of Biotechnology and Biomolecular Sciences, 3Department of Food Science, Faculty of Food Science and Technology, 4Laboratory of Natural Products, Institute of Bioscience, 5Department of Human Anatomy, Faculty of Medicine and Health Sciences, Universiti Putra Malaysia, Serdang, Selangor, Malaysia Abstract: This study adapts bulk and sequential techniques to load multiple flavonoids into a single phytosome, which can be termed a "flavonosome". Three widely established and therapeutically valuable flavonoids, quercetin (Q), kaempferol (K), and apigenin (A), were quantified in the ethyl acetate fraction of Moringa oleifera leaf extract, obtained commercially, and incorporated into a single flavonosome (QKA–phosphatidylcholine) through four different methods of synthesis: bulk (M1) and serialized (M2) co-sonication, and bulk (M3) and sequential (M4) co-loading. The study also established an optimal formulation method by screening the synthesized flavonosomes with respect to their size, charge, polydispersity index, morphology, drug–carrier interaction, antioxidant potential through in vitro 1,1-diphenyl-2-picrylhydrazyl kinetics, and cytotoxicity against a human hepatoma cell line (HepaRG). Furthermore, the entrapment and loading efficiencies of the flavonoids in the optimal flavonosome were determined. Among the four synthesis methods, the sequential loading technique proved optimal for the synthesis of the QKA–phosphatidylcholine flavonosome, which revealed an average diameter of 375.93±33.61 nm, a zeta potential of -39.07±3.55 mV, and an entrapment efficiency >98% for all the flavonoids, whereas the drug-loading capacity of Q, K, and A was 31.63%±0
Jain, P.C.
1984-04-01
The Angstrom equation H = H0(a + bS/S0) has been fitted using the least-squares method to the global irradiation and sunshine duration data of 31 Italian locations for the period 1965-1974. Three more linear equations have also each been fitted to the same data: i) the equation H' = H0(a + bS/S0), obtained by incorporating the effect of the multiple reflections between the earth's surface and the atmosphere; ii) the equation H = H0(a + bS/S'0), obtained by incorporating the effect of the sunshine recorder chart not burning when the elevation of the sun is less than 5 deg.; and iii) the equation H' = H0(a + bS/S'0), obtained by incorporating both of the above effects simultaneously. Good correlations, with correlation coefficients around 0.9 or more, are obtained for most of the locations with all four equations. Substantial spatial scatter is obtained in the values of the regression parameters. The use of any of the three latter equations does not offer any advantage over the simpler Angstrom equation: it neither decreases the spatial scatter in the values of the regression parameters nor yields better correlation. The computed values of the regression parameters in the Angstrom equation yield estimates of the global irradiation that are on average within ±4% of the measured values for most of the locations. (author)
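The fit itself is an ordinary linear regression of the clearness index H/H0 on the sunshine fraction S/S0. A minimal sketch with invented monthly values (the real study used ten years of data per location):

```python
# Least-squares fit of the Angstrom regression H/H0 = a + b*(S/S0);
# the six monthly pairs below are hypothetical illustration data.
import numpy as np

def fit_angstrom(clearness, sunshine_fraction):
    """Return (a, b, r) for H/H0 = a + b*(S/S0)."""
    s = np.asarray(sunshine_fraction, dtype=float)
    k = np.asarray(clearness, dtype=float)
    b, a = np.polyfit(s, k, 1)             # slope b, intercept a
    r = np.corrcoef(s, k)[0, 1]            # correlation coefficient
    return a, b, r

s_frac = [0.30, 0.42, 0.55, 0.63, 0.71, 0.76]   # S/S0, hypothetical
k_t    = [0.38, 0.44, 0.51, 0.55, 0.59, 0.62]   # H/H0, hypothetical
a, b, r = fit_angstrom(k_t, s_frac)
```

With H0 computed astronomically for the site, H = H0(a + bS/S0) then estimates global irradiation from sunshine duration alone.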
Shah, Chirag [Department of Radiation Oncology, William Beaumont Hospital, Royal Oak, MI (United States); Vicini, Frank A., E-mail: fvicini@beaumont.edu [Department of Radiation Oncology, William Beaumont Hospital, Royal Oak, MI (United States)
2011-11-15
As more women survive breast cancer, long-term toxicities affecting their quality of life, such as lymphedema (LE) of the arm, gain importance. Although numerous studies have attempted to determine incidence rates, identify optimal diagnostic tests, enumerate efficacious treatment strategies and outline risk reduction guidelines for breast cancer-related lymphedema (BCRL), few groups have consistently agreed on any of these issues. As a result, standardized recommendations are still lacking. This review will summarize the latest data addressing all of these concerns in order to provide patients and health care providers with optimal, contemporary recommendations. Published incidence rates for BCRL vary substantially with a range of 2-65% based on surgical technique, axillary sampling method, radiation therapy fields treated, and the use of chemotherapy. Newer clinical assessment tools can potentially identify BCRL in patients with subclinical disease with prospective data suggesting that early diagnosis and management with noninvasive therapy can lead to excellent outcomes. Multiple therapies exist with treatments defined by the severity of BCRL present. Currently, the standard of care for BCRL in patients with significant LE is complex decongestive physiotherapy (CDP). Contemporary data also suggest that a multidisciplinary approach to the management of BCRL should begin prior to definitive treatment for breast cancer employing patient-specific surgical, radiation therapy, and chemotherapy paradigms that limit risks. Further, prospective clinical assessments before and after treatment should be employed to diagnose subclinical disease. In those patients who require aggressive locoregional management, prophylactic therapies and the use of CDP can help reduce the long-term sequelae of BCRL.
Multidisciplinary Optimization of Tilt Rotor Blades Using Comprehensive Composite Modeling Technique
Chattopadhyay, Aditi; McCarthy, Thomas R.; Rajadas, John N.
1997-01-01
An optimization procedure is developed for addressing the design of composite tilt rotor blades. A comprehensive technique, based on a higher-order laminate theory, is developed for the analysis of the thick composite load-carrying sections, modeled as box beams, in the blade. The theory, which is based on a refined displacement field, is a three-dimensional model that approximates the elasticity solution so that the beam cross-sectional properties are not reduced to one-dimensional beam parameters. Both in-plane and out-of-plane warping are included automatically in the formulation. The model can accurately capture the transverse shear stresses through the thickness of each wall while satisfying stress-free boundary conditions on the inner and outer surfaces of the beam. The aerodynamic loads on the blade are calculated using classical blade element momentum theory. Analytical expressions for the lift and drag are obtained based on the blade planform, with corrections for the high lift capability of rotor blades. The aerodynamic analysis is coupled with the structural model to formulate the complete coupled equations of motion for aeroelastic analyses. Finally, a multidisciplinary optimization procedure is developed to improve the aerodynamic, structural and aeroelastic performance of the tilt rotor aircraft. The objective functions include the figure of merit in hover and the high speed cruise propulsive efficiency. Structural, aerodynamic and aeroelastic stability criteria are imposed as constraints on the problem. The Kreisselmeier-Steinhauser function is used to formulate the multiobjective problem. The search direction is determined by the Broyden-Fletcher-Goldfarb-Shanno algorithm. The optimum results are compared with the baseline values and show significant improvements in the overall performance of the tilt rotor blade.
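The Kreisselmeier-Steinhauser (KS) function folds several objectives or constraints g_i into one smooth, conservative envelope of their maximum, KS(g) = g_max + (1/ρ)·ln Σ exp(ρ(g_i − g_max)), which can then be minimized with a gradient method. The two toy objectives and the plain gradient-descent loop below are illustrative assumptions, not the paper's rotor-blade model:

```python
# Sketch of KS aggregation of two competing toy objectives, minimized by
# plain gradient descent with a numerical gradient.
import numpy as np

def ks(values, rho=10.0):
    """Kreisselmeier-Steinhauser envelope: a smooth, conservative max()."""
    m = np.max(values)
    return m + np.log(np.sum(np.exp(rho * (values - m)))) / rho

def composite(x):
    # two competing toy objectives standing in for the hover and cruise
    # performance measures (recast as minimizations)
    f1 = (x[0] - 1.0) ** 2 + x[1] ** 2
    f2 = x[0] ** 2 + (x[1] - 1.0) ** 2
    return ks(np.array([f1, f2]))

def num_grad(f, x, h=1e-6):
    g = np.zeros_like(x)
    for i in range(x.size):
        e = np.zeros_like(x)
        e[i] = h
        g[i] = (f(x + e) - f(x - e)) / (2 * h)   # central difference
    return g

x = np.array([2.0, 2.0])
for _ in range(2000):            # small fixed-step gradient descent
    x -= 0.02 * num_grad(composite, x)
```

By symmetry the compromise lands near (0.5, 0.5); in the paper the same aggregation is driven by a BFGS search direction rather than fixed-step descent.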
Frolov, A.M.
1986-01-01
Exact variational calculations are performed for few-particle systems in an exponential basis of relative coordinates using nonlinear parameters. The methods of step-by-step optimization and global chaos of nonlinear parameters are applied to calculate the S and P states of the ppμ, ddμ and ttμ homonuclear mesomolecules to within ±0.001 eV. The global chaos method turned out to be well applicable to the nuclear ³H and ³He systems.
Becerra, Sandra C; Roy, Daniel C; Sanchez, Carlos J; Christy, Robert J; Burmeister, David M
2016-04-12
Bacterial infections are a common clinical problem in both acute and chronic wounds. With growing concerns over antibiotic resistance, treatment of bacterial infections should only occur after positive diagnosis. Currently, diagnosis is delayed by lengthy culturing methods, which may also fail to identify the presence of bacteria. While newer, costly bacterial identification methods are being explored, a simple and inexpensive diagnostic tool would aid in immediate and accurate treatment of bacterial infections. Histologically, hematoxylin and eosin (H&E) and Gram stains have been employed, but are far from optimal when analyzing tissue samples due to non-specific staining. The goal of the current study was to develop a modification of the Gram stain that enhances the contrast between bacteria and host tissue. A modified Gram stain was developed and tested as an alternative that improves the contrast between Gram-positive bacteria, Gram-negative bacteria and host tissue. Initially, clinically relevant strains of Pseudomonas aeruginosa and Staphylococcus aureus were visualized in vitro and in biopsies of infected, porcine burns using the routine Gram stain, with immunohistochemistry involving bacterial strain-specific fluorescent antibodies as a validation tool. H&E and Gram stains of serial biopsy sections were then compared to a modification of the Gram stain incorporating a counterstain that highlights collagen found in tissue. The modified Gram stain clearly identified both Gram-positive and Gram-negative bacteria, and when compared to H&E or Gram stain alone provided excellent contrast between bacteria and non-viable burn eschar. Moreover, when applied to surgical biopsies from patients who underwent burn debridement, this technique was able to clearly detect bacterial morphology within host
Stieler, Florian; Yan, Hui; Lohr, Frank; Wenz, Frederik; Yin, Fang-Fang
2009-01-01
Parameter optimization in the process of inverse treatment planning for intensity modulated radiation therapy (IMRT) is mainly conducted by human planners in order to create a plan with the desired dose distribution. To automate this tedious process, an artificial intelligence (AI) guided system was developed and examined. The AI system can automatically accomplish the optimization process based on prior knowledge operated by several fuzzy inference systems (FIS). Prior knowledge, which was collected from human planners during their routine trial-and-error process of inverse planning, has first to be 'translated' to a set of 'if-then rules' for driving the FISs. To minimize subjective error which could be costly during this knowledge acquisition process, it is necessary to find a quantitative method to automatically accomplish this task. A well-developed machine learning technique, based on an adaptive neuro fuzzy inference system (ANFIS), was introduced in this study. Based on this approach, prior knowledge of a fuzzy inference system can be quickly collected from observation data (clinically used constraints). The learning capability and the accuracy of such a system were analyzed by generating multiple FIS from data collected from an AI system with known settings and rules. Multiple analyses showed good agreements of FIS and ANFIS according to rules (error of the output values of ANFIS based on the training data from FIS of 7.77 ± 0.02%) and membership functions (3.9%), thus suggesting that the 'behavior' of an FIS can be propagated to another, based on this process. The initial experimental results on a clinical case showed that ANFIS is an effective way to build FIS from practical data, and analysis of ANFIS and FIS with clinical cases showed good planning results provided by ANFIS. OAR volumes encompassed by characteristic percentages of isodoses were reduced by a mean of between 0 and 28%. The study demonstrated a feasible way
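The computation an ANFIS tunes is, at its core, a Sugeno fuzzy inference step: fire each "if-then" rule through its membership function and output the firing-strength-weighted average of the rule consequents. The rule parameters below are made up for illustration and are not the planning constraints from the study:

```python
# Minimal zero-order Sugeno fuzzy inference step of the kind ANFIS tunes;
# the rule set and membership parameters here are hypothetical.
import math

def gaussmf(x, c, s):
    """Gaussian membership function centered at c with width s."""
    return math.exp(-((x - c) ** 2) / (2.0 * s ** 2))

def sugeno_eval(x, rules):
    """rules: list of ((center, sigma), consequent) pairs.

    Returns the firing-strength-weighted average of rule consequents,
    i.e. the computation implemented by ANFIS layers 1-5.
    """
    w = [gaussmf(x, c, s) for (c, s), _ in rules]
    z = [out for _, out in rules]
    return sum(wi * zi for wi, zi in zip(w, z)) / sum(w)

# e.g. "if the constraint violation is low -> small penalty weight"
rules = [((0.0, 1.0), 0.2), ((5.0, 1.0), 0.8)]
print(sugeno_eval(2.5, rules))
```

ANFIS training then adjusts the membership parameters (c, s) and consequents from observation data, which is how the study propagates one FIS's "behavior" to another.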
Selection of the optimal radiotherapy technique for locally advanced hepatocellular carcinoma
Lee, Ik-Jae; Seong, Jinsil; Koom, Woong-Sub; Kim, Yong-Bae; Jeon, Byeong-Chul; Kim, Joo-Ho; Han, Kwang-Hyub
2011-01-01
Various techniques are available for radiotherapy of hepatocellular carcinoma, including three-dimensional conformal radiotherapy, linac-based intensity-modulated radiotherapy and helical tomotherapy. The purpose of this study was to determine the optimal radiotherapy technique for hepatocellular carcinoma. Between 2006 and 2007, 12 patients underwent helical tomotherapy for locally advanced hepatocellular carcinoma. Helical tomotherapy computerized radiotherapy planning was compared with the best computerized radiotherapy planning for three-dimensional conformal radiotherapy and linac-based intensity-modulated radiotherapy for the delivery of 60 Gy in 30 fractions. Tumor coverage was assessed by conformity index, radical dose homogeneity index and moderated dose homogeneity index. Computerized radiotherapy planning was also compared according to tumor location. Tumor coverage was significantly superior with helical tomotherapy as assessed by conformity index and moderated dose homogeneity index (P=0.002 and 0.03, respectively). Helical tomotherapy showed significantly lower irradiated liver volumes at 40, 50 and 60 Gy (V40, V50 and V60; P=0.04, 0.03 and 0.01, respectively). In contrast, the dose-volume of three-dimensional conformal radiotherapy at V20 was significantly smaller than those of linac-based intensity-modulated radiotherapy and helical tomotherapy in the remaining liver (P=0.03). Linac-based intensity-modulated radiotherapy showed better sparing of the stomach compared with helical tomotherapy in the case of separate lesions in both lobes (12.3 vs. 24.6 Gy). Helical tomotherapy showed high dose-volume exposure of the left kidney, due to its helical delivery, for right lobe lesions. Helical tomotherapy achieved the best tumor coverage while sparing the remaining normal liver. However, helical tomotherapy delivered considerable dose to the remaining liver in the lower dose region and to the left kidney. (author)
Wilmar Hernandez
2007-01-01
In this paper a survey of recent applications of optimal signal processing techniques to improve the performance of mechanical sensors is made. A comparison between classical filters and optimal filters for automotive sensors is presented, and the current state of the art of the application of robust and optimal control and signal processing techniques to the design of the intelligent (or smart) sensors that today's cars need is illustrated through several experimental results, which show that the fusion of intelligent sensors and optimal signal processing techniques is the clear way to go. However, the switch between the traditional methods of designing automotive sensors and the new ones cannot be made overnight, because there are open research issues that have to be solved. This paper draws attention to one of these open research issues and tries to arouse researchers' interest in the fusion of intelligent sensors and optimal signal processing techniques.
Mohamed, Ahmed F; Elarini, Mahdi M; Othman, Ahmed M
2014-05-01
One of the most recent optimization techniques applied to the optimal design of a photovoltaic system to supply an isolated load demand is the Artificial Bee Colony (ABC) algorithm. The proposed methodology is applied to optimize the cost of the PV system, including the photovoltaic modules, a battery bank, a battery charge controller, and an inverter. Two objective functions are proposed: the first is the PV module output power, which is to be maximized, and the second is the life cycle cost (LCC), which is to be minimized. The analysis is based on solar radiation and ambient temperature measured at Helwan city, Egypt. A comparison of the optimal results of the ABC algorithm and a Genetic Algorithm (GA) shows that ABC outperforms GA. Another location, Zagazig city, is selected to check the validity of the ABC algorithm elsewhere. The results encourage the use of PV systems to electrify the rural sites of Egypt.
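A compact sketch of the ABC search loop can clarify how such a cost function is minimized. The version below is simplified (the employed and onlooker phases are collapsed into one greedy pass, omitting the usual probabilistic onlooker selection), and the sphere cost stands in for the PV life-cycle-cost model, which is not reproduced here:

```python
# Simplified Artificial Bee Colony (ABC) minimizer: greedy neighbourhood
# moves on a population of food sources, plus a scout restart phase.
import random

def abc_minimize(cost, dim, bounds, n_food=10, limit=20, iters=200, seed=0):
    rng = random.Random(seed)
    lo, hi = bounds
    foods = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_food)]
    fits = [cost(f) for f in foods]
    trials = [0] * n_food
    best = min(zip(fits, foods))

    def neighbour(i):
        k = rng.randrange(n_food - 1)
        k += k >= i                       # partner source, k != i
        j = rng.randrange(dim)
        x = foods[i][:]
        x[j] += rng.uniform(-1, 1) * (x[j] - foods[k][j])
        x[j] = min(max(x[j], lo), hi)     # keep within bounds
        return x

    for _ in range(iters):
        for i in range(n_food):           # greedy local moves
            cand = neighbour(i)
            fc = cost(cand)
            if fc < fits[i]:
                foods[i], fits[i], trials[i] = cand, fc, 0
            else:
                trials[i] += 1
        for i in range(n_food):           # scouts abandon stale sources
            if trials[i] > limit:
                foods[i] = [rng.uniform(lo, hi) for _ in range(dim)]
                fits[i], trials[i] = cost(foods[i]), 0
        best = min(best, min(zip(fits, foods)))
    return best

best_cost, best_x = abc_minimize(lambda x: sum(v * v for v in x), 3, (-5, 5))
```

Swapping the sphere function for an LCC model (with power output handled as a second objective or a constraint) would reproduce the study's setup in outline.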
Chen, Zhuoqi; Chen, Jing M.; Zhang, Shupeng; Zheng, Xiaogu; Ju, Weiming; Mo, Gang; Lu, Xiaoliang
2017-12-01
The Global Carbon Assimilation System, which assimilates ground-based atmospheric CO2 data, is used to estimate several key parameters in a terrestrial ecosystem model for the purpose of improving carbon cycle simulation. The optimized parameters are the leaf maximum carboxylation rate at 25°C (Vmax25), the temperature sensitivity of ecosystem respiration (Q10), and the soil carbon pool size. The optimization is performed at the global scale at 1° resolution for the period from 2002 to 2008. The results indicate that vegetation from tropical zones has lower Vmax25 values than vegetation in temperate regions. Relatively high values of Q10 are derived over high/midlatitude regions. Both Vmax25 and Q10 exhibit pronounced seasonal variations at middle-high latitudes. The maxima in Vmax25 occur during growing seasons, while the minima appear during nongrowing seasons. Q10 values decrease with increasing temperature. The seasonal variability of Vmax25 and Q10 is larger at higher latitudes, while optimized Vmax25 and Q10 show little seasonal variability in tropical regions. The seasonal variability of Vmax25 is consistent with that of LAI for evergreen conifers and broadleaf evergreen forests. Variations in leaf nitrogen and leaf chlorophyll contents may partly explain the variations in Vmax25. The spatial distribution of the total soil carbon pool size after optimization compares favorably with the gridded Global Soil Data Set for Earth System. The results also suggest that atmospheric CO2 data are a source of information that can be tapped to gain spatially and temporally meaningful information for key ecosystem parameters that are representative at the regional and global scales.
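The Q10 parameter optimized here enters the respiration model multiplicatively: respiration scales by a factor of Q10 for every 10°C rise, R(T) = R_ref · Q10^((T − T_ref)/10). A minimal numeric illustration with assumed reference values:

```python
# Q10 temperature scaling of ecosystem respiration; r_ref and t_ref are
# assumed illustration values, not parameters from the study.

def respiration(t_celsius, r_ref=2.0, q10=2.0, t_ref=10.0):
    """Ecosystem respiration (e.g. umol C m-2 s-1) at temperature T."""
    return r_ref * q10 ** ((t_celsius - t_ref) / 10.0)

# with Q10 = 2, respiration doubles for each +10 degC above the reference
print(respiration(20.0))   # -> 4.0
```

A larger assimilated Q10 therefore means respiration responds more steeply to warming, which is why the derived decrease of Q10 with temperature matters for the simulated carbon balance.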