WorldWideScience

Sample records for solutions extremal optimization

  1. Optimization with Extremal Dynamics

    International Nuclear Information System (INIS)

    Boettcher, Stefan; Percus, Allon G.

    2001-01-01

    We explore a new general-purpose heuristic for finding high-quality solutions to hard discrete optimization problems. The method, called extremal optimization, is inspired by self-organized criticality, a concept introduced to describe emergent complexity in physical systems. Extremal optimization successively updates extremely undesirable variables of a single suboptimal solution, assigning them new, random values. Large fluctuations ensue, efficiently exploring many local optima. We use extremal optimization to elucidate the phase transition in the 3-coloring problem, and we provide independent confirmation of previously reported extrapolations for the ground-state energy of ±J spin glasses in d=3 and 4
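
    To make the update rule described above concrete, the following minimal Python sketch applies the common tau-EO variant to a one-dimensional ±J spin chain: the least fit spin is selected through a power law over fitness ranks and given a new value, and only the best configuration seen so far is recorded. The chain length, the value of tau and the coupling sample are illustrative assumptions, not the setup used by the authors.

      import numpy as np

      rng = np.random.default_rng(0)
      N, tau, steps = 64, 1.4, 20000
      J = rng.choice([-1, 1], size=N)          # J[i] couples spins i and (i+1) mod N on a ring
      s = rng.choice([-1, 1], size=N)          # current spin configuration

      p_rank = np.arange(1, N + 1, dtype=float) ** (-tau)
      p_rank /= p_rank.sum()                   # P(pick the k-th worst variable) ~ k^(-tau)

      def energy(s):
          return -np.sum(J * s * np.roll(s, -1))

      def local_fitness(s):
          right = J * s * np.roll(s, -1)       # bond between spin i and i+1
          left = np.roll(right, 1)             # bond between spin i-1 and i
          return 0.5 * (left + right)          # higher = more satisfied bonds around spin i

      best_s, best_E = s.copy(), energy(s)
      for _ in range(steps):
          order = np.argsort(local_fitness(s))   # worst spin first
          i = order[rng.choice(N, p=p_rank)]     # power-law selection over fitness ranks
          s[i] = -s[i]                           # give the unlucky spin its new value
          E = energy(s)
          if E < best_E:                         # every move is accepted; only the best is recorded
              best_s, best_E = s.copy(), E

      print("best energy per spin:", best_E / N)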

  2. Spatial planning via extremal optimization enhanced by cell-based local search

    International Nuclear Information System (INIS)

    Sidiropoulos, Epaminondas

    2014-01-01

    A new treatment is presented for land use planning problems by means of extremal optimization in conjunction with cell-based neighborhood local search. Extremal optimization, inspired by self-organized critical models of evolution, has been applied mainly to the solution of classical combinatorial optimization problems. Cell-based local search has been employed by the author elsewhere in problems of spatial resource allocation, in combination with genetic algorithms and simulated annealing. In this paper it complements extremal optimization in order to enhance its capacity for a spatial optimization problem. The hybrid method thus formed is compared to methods from the literature on a specific characteristic problem. It yields better results both in terms of objective function values and in terms of compactness. The latter is an important quantity for spatial planning. The present treatment yields significant compactness values as emergent results

  3. Optimal bounds and extremal trajectories for time averages in dynamical systems

    Science.gov (United States)

    Tobasco, Ian; Goluskin, David; Doering, Charles

    2017-11-01

    For systems governed by differential equations it is natural to seek extremal solution trajectories, maximizing or minimizing the long-time average of a given quantity of interest. A priori bounds on optima can be proved by constructing auxiliary functions satisfying certain point-wise inequalities, the verification of which does not require solving the underlying equations. We prove that for any bounded autonomous ODE, the problems of finding extremal trajectories on the one hand and optimal auxiliary functions on the other are strongly dual in the sense of convex duality. As a result, auxiliary functions provide arbitrarily sharp bounds on optimal time averages. Furthermore, nearly optimal auxiliary functions provide volumes in phase space where maximal and nearly maximal trajectories must lie. For polynomial systems, such functions can be constructed by semidefinite programming. We illustrate these ideas using the Lorenz system, producing explicit volumes in phase space where extremal trajectories are guaranteed to reside. Supported by NSF Award DMS-1515161, Van Loo Postdoctoral Fellowships, and the John Simon Guggenheim Foundation.
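
    As a point of reference for the quantity being bounded, the short script below integrates the Lorenz system and computes the empirical long-time average of z along a single trajectory; it does not construct the auxiliary functions or the semidefinite programs discussed in the abstract, and the averaged quantity and integration horizon are arbitrary choices.

      import numpy as np
      from scipy.integrate import solve_ivp

      sigma, rho, beta = 10.0, 28.0, 8.0 / 3.0         # standard Lorenz parameters

      def lorenz(t, u):
          x, y, z = u
          return [sigma * (y - x), x * (rho - z) - y, x * y - beta * z]

      T, burn = 500.0, 50.0
      sol = solve_ivp(lorenz, (0.0, T), [1.0, 1.0, 1.0], max_step=0.01, rtol=1e-8, atol=1e-10)
      keep = sol.t > burn                              # discard the transient before averaging
      t, z = sol.t[keep], sol.y[2, keep]
      avg_z = np.trapz(z, t) / (t[-1] - t[0])          # empirical long-time average of z
      print("long-time average of z along this trajectory:", avg_z)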

  4. [Optimal solution and analysis of muscular force during standing balance].

    Science.gov (United States)

    Wang, Hongrui; Zheng, Hui; Liu, Kun

    2015-02-01

    The present study was aimed at the optimal solution of the main muscular force distribution in the lower extremity during standing balance in humans. The musculoskeletal system of the lower extremity was simplified to a physical model with 3 joints and 9 muscles. On the basis of this model, a mathematical optimization model was built up to solve the problem of redundant muscle forces. The particle swarm optimization (PSO) algorithm was used to solve the single-objective and multi-objective problems, respectively. The numerical results indicated that the multi-objective optimization gives a more reasonable distribution and variation of the 9 muscular forces. Finally, the coordination of each muscle group while maintaining standing balance under passive movement was qualitatively analyzed using the simulation results.
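
    A minimal sketch of the kind of single-objective PSO formulation described above is given below: muscle forces are the particles' coordinates, an effort criterion is minimized, and joint-torque equilibrium is enforced through a quadratic penalty. The moment arms, required torques, force limits and PSO constants are invented for illustration and do not correspond to the paper's 3-joint, 9-muscle model parameters.

      import numpy as np

      rng = np.random.default_rng(1)
      n_muscles, n_joints = 9, 3
      R = rng.uniform(-0.05, 0.05, size=(n_joints, n_muscles))   # moment arms [m], assumed
      tau_req = np.array([10.0, -5.0, 2.0])                      # required joint torques [N m], assumed
      F_max = np.full(n_muscles, 1000.0)                         # per-muscle force limits [N], assumed

      def cost(F):
          effort = np.sum((F / F_max) ** 2)                      # classic effort criterion
          penalty = 1e4 * np.sum((R @ F - tau_req) ** 2)         # torque-equilibrium penalty
          return effort + penalty

      # plain global-best PSO
      n_particles, iters = 40, 500
      w, c1, c2 = 0.7, 1.5, 1.5
      X = rng.uniform(0.0, 1000.0, size=(n_particles, n_muscles))
      V = np.zeros_like(X)
      P, P_cost = X.copy(), np.array([cost(x) for x in X])
      g = P[np.argmin(P_cost)].copy()

      for _ in range(iters):
          r1, r2 = rng.random(X.shape), rng.random(X.shape)
          V = w * V + c1 * r1 * (P - X) + c2 * r2 * (g - X)
          X = np.clip(X + V, 0.0, F_max)                         # forces stay nonnegative and bounded
          C = np.array([cost(x) for x in X])
          better = C < P_cost
          P[better], P_cost[better] = X[better], C[better]
          g = P[np.argmin(P_cost)].copy()

      print("residual torque error:", np.linalg.norm(R @ g - tau_req))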

  5. Extremal solutions of measure differential equations

    Czech Academy of Sciences Publication Activity Database

    Monteiro, Giselle Antunes; Slavík, A.

    2016-01-01

    Roč. 444, č. 1 (2016), s. 568-597 ISSN 0022-247X Institutional support: RVO:67985840 Keywords : measure differential equations * extremal solution * lower solution Subject RIV: BA - General Mathematics Impact factor: 1.064, year: 2016 http://www.sciencedirect.com/science/article/pii/S0022247X16302724

  6. Extremal black holes as exact string solutions

    International Nuclear Information System (INIS)

    Horowitz, G.T.; Tseytlin, A.A.

    1994-01-01

    We show that the leading order solution describing an extremal electrically charged black hole in string theory is, in fact, an exact solution to all orders in α' when interpreted in a Kaluza-Klein fashion. This follows from the observation that it can be obtained via dimensional reduction from a five-dimensional background which is proved to be an exact string solution

  7. Extreme Trust Region Policy Optimization for Active Object Recognition.

    Science.gov (United States)

    Liu, Huaping; Wu, Yupei; Sun, Fuchun

    2018-06-01

    In this brief, we develop a deep reinforcement learning method to actively recognize objects by choosing a sequence of actions for an active camera that helps to discriminate between the objects. The method is realized using trust region policy optimization, in which the policy is realized by an extreme learning machine and, therefore, leads to an efficient optimization algorithm. The experimental results on the publicly available data set show the advantages of the developed extreme trust region optimization method.

  8. New numerical methods for open-loop and feedback solutions to dynamic optimization problems

    Science.gov (United States)

    Ghosh, Pradipto

    The topic of the first part of this research is trajectory optimization of dynamical systems via computational swarm intelligence. Particle swarm optimization is a nature-inspired heuristic search method that relies on a group of potential solutions to explore the fitness landscape. Conceptually, each particle in the swarm uses its own memory as well as the knowledge accumulated by the entire swarm to iteratively converge on an optimal or near-optimal solution. It is relatively straightforward to implement and unlike gradient-based solvers, does not require an initial guess or continuity in the problem definition. Although particle swarm optimization has been successfully employed in solving static optimization problems, its application in dynamic optimization, as posed in optimal control theory, is still relatively new. In the first half of this thesis particle swarm optimization is used to generate near-optimal solutions to several nontrivial trajectory optimization problems including thrust programming for minimum fuel, multi-burn spacecraft orbit transfer, and computing minimum-time rest-to-rest trajectories for a robotic manipulator. A distinct feature of the particle swarm optimization implementation in this work is the runtime selection of the optimal solution structure. Optimal trajectories are generated by solving instances of constrained nonlinear mixed-integer programming problems with the swarming technique. For each solved optimal programming problem, the particle swarm optimization result is compared with a nearly exact solution found via a direct method using nonlinear programming. Numerical experiments indicate that swarm search can locate solutions to very great accuracy. The second half of this research develops a new extremal-field approach for synthesizing nearly optimal feedback controllers for optimal control and two-player pursuit-evasion games described by general nonlinear differential equations. A notable revelation from this development

  9. Optimal bounds and extremal trajectories for time averages in nonlinear dynamical systems

    Science.gov (United States)

    Tobasco, Ian; Goluskin, David; Doering, Charles R.

    2018-02-01

    For any quantity of interest in a system governed by ordinary differential equations, it is natural to seek the largest (or smallest) long-time average among solution trajectories, as well as the extremal trajectories themselves. Upper bounds on time averages can be proved a priori using auxiliary functions, the optimal choice of which is a convex optimization problem. We prove that the problems of finding maximal trajectories and minimal auxiliary functions are strongly dual. Thus, auxiliary functions provide arbitrarily sharp upper bounds on time averages. Moreover, any nearly minimal auxiliary function provides phase space volumes in which all nearly maximal trajectories are guaranteed to lie. For polynomial equations, auxiliary functions can be constructed by semidefinite programming, which we illustrate using the Lorenz system.

  10. On some interconnections between combinatorial optimization and extremal graph theory

    Directory of Open Access Journals (Sweden)

    Cvetković Dragoš M.

    2004-01-01

    The uniting feature of combinatorial optimization and extremal graph theory is that in both areas one should find extrema of a function defined, in most cases, on a finite set. While in combinatorial optimization the focus is on developing efficient algorithms and heuristics for solving specified types of problems, extremal graph theory deals with finding bounds for various graph invariants under some constraints and with constructing extremal graphs. We analyze by examples some interconnections and interactions of the two theories and propose some conclusions.

  11. Non-extremal black hole solutions from the c-map

    International Nuclear Information System (INIS)

    Errington, D.; Mohaupt, T.; Vaughan, O.

    2015-01-01

    We construct new static, spherically symmetric non-extremal black hole solutions of four-dimensional N=2 supergravity, using a systematic technique based on dimensional reduction over time (the c-map) and the real formulation of special geometry. For a certain class of models we actually obtain the general solution to the full second order equations of motion, whilst for other classes of models, such as those obtainable by dimensional reduction from five dimensions, heterotic tree-level models, and type-II Calabi-Yau compactifications in the large volume limit, a partial set of solutions is found. When considering specifically non-extremal black hole solutions we find that regularity conditions reduce the number of integration constants by one half. Such solutions satisfy a unique set of first order equations, which we identify. Several models are investigated in detail, including examples of non-homogeneous spaces such as the quantum deformed STU model. Though we focus on static, spherically symmetric solutions of ungauged supergravity, the method is adaptable to other types of solutions and to gauged supergravity.

  12. Existence of extremal periodic solutions for quasilinear parabolic equations

    Directory of Open Access Journals (Sweden)

    Siegfried Carl

    1997-01-01

    bounded domain under periodic Dirichlet boundary conditions. Our main goal is to prove the existence of extremal solutions among all solutions lying in a sector formed by appropriately defined upper and lower solutions. The main tools used in the proof of our result are recently obtained abstract results on nonlinear evolution equations, comparison and truncation techniques, and suitably constructed special test functions.

  13. Finding Multiple Optimal Solutions to Optimal Load Distribution Problem in Hydropower Plant

    Directory of Open Access Journals (Sweden)

    Xinhao Jiang

    2012-05-01

    Optimal load distribution (OLD) among generator units of a hydropower plant is a vital task for hydropower generation scheduling and management. Traditional optimization methods for solving this problem focus on finding a single optimal solution. However, many practical constraints on hydropower plant operation are very difficult, if not impossible, to model, and the optimal solution found by those models might be of limited practical use. This motivates us to find multiple optimal solutions to the OLD problem, which can provide more flexible choices for decision-making. Based on a special dynamic programming model, we use a modified shortest path algorithm to produce multiple solutions to the problem. It is shown that multiple optimal solutions exist for the case study of China’s Geheyan hydropower plant, and they are valuable for assessing the stability of generator units, showing the potential to reduce the number of times the units cross vibration areas.
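
    The idea of returning every tied-optimal allocation, rather than a single one, can be illustrated with a toy stage-wise dynamic program that keeps all optimal predecessors. The discrete load levels, cost tables and demand below are invented (identical units are used on purpose so that several optima exist); this is not the Geheyan plant model or the modified shortest-path algorithm of the paper.

      # Toy load-distribution DP over three identical units; enumerates *all* minimum-cost allocations.
      levels = [0, 10, 20, 30]                         # admissible unit loads [MW]
      cost = [
          {0: 0, 10: 4, 20: 7, 30: 11},
          {0: 0, 10: 4, 20: 7, 30: 11},
          {0: 0, 10: 4, 20: 7, 30: 11},
      ]
      demand = 40

      n = len(cost)
      value = [dict() for _ in range(n + 1)]           # value[k][d]: min cost to serve load d with units k..end
      choices = [dict() for _ in range(n)]             # choices[k][d]: all unit-k loads achieving that minimum
      value[n] = {0: 0.0}                              # nothing left to serve

      for k in range(n - 1, -1, -1):
          for d in range(0, demand + 1, 10):
              best_v, best_l = None, []
              for l in levels:
                  rest = d - l
                  if rest in value[k + 1]:
                      v = cost[k][l] + value[k + 1][rest]
                      if best_v is None or v < best_v:
                          best_v, best_l = v, [l]
                      elif v == best_v:
                          best_l.append(l)             # keep ties so every optimum is recoverable
              if best_v is not None:
                  value[k][d], choices[k][d] = best_v, best_l

      def expand(k, d):
          if k == n:
              return [[]]
          return [[l] + tail for l in choices[k][d] for tail in expand(k + 1, d - l)]

      print("minimal cost:", value[0][demand])
      for alloc in expand(0, demand):
          print("optimal allocation:", alloc)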

  14. Computational intelligence-based optimization of maximally stable extremal region segmentation for object detection

    Science.gov (United States)

    Davis, Jeremy E.; Bednar, Amy E.; Goodin, Christopher T.; Durst, Phillip J.; Anderson, Derek T.; Bethel, Cindy L.

    2017-05-01

    Particle swarm optimization (PSO) and genetic algorithms (GAs) are two optimization techniques from the field of computational intelligence (CI) for search problems where a direct solution can not easily be obtained. One such problem is finding an optimal set of parameters for the maximally stable extremal region (MSER) algorithm to detect areas of interest in imagery. Specifically, this paper describes the design of a GA and PSO for optimizing MSER parameters to detect stop signs in imagery produced via simulation for use in an autonomous vehicle navigation system. Several additions to the GA and PSO are required to successfully detect stop signs in simulated images. These additions are a primary focus of this paper and include: the identification of an appropriate fitness function, the creation of a variable mutation operator for the GA, an anytime algorithm modification to allow the GA to compute a solution quickly, the addition of an exponential velocity decay function to the PSO, the addition of an "execution best" omnipresent particle to the PSO, and the addition of an attractive force component to the PSO velocity update equation. Experimentation was performed with the GA using various combinations of selection, crossover, and mutation operators and experimentation was also performed with the PSO using various combinations of neighborhood topologies, swarm sizes, cognitive influence scalars, and social influence scalars. The results of both the GA and PSO optimized parameter sets are presented. This paper details the benefits and drawbacks of each algorithm in terms of detection accuracy, execution speed, and additions required to generate successful problem specific parameter sets.

  15. Modernizing Distribution System Restoration to Achieve Grid Resiliency Against Extreme Weather Events: An Integrated Solution

    Energy Technology Data Exchange (ETDEWEB)

    Chen, Chen; Wang, Jianhui; Ton, Dan

    2017-07-07

    Recent severe power outages caused by extreme weather hazards have highlighted the importance and urgency of improving the resilience of the electric power grid. As the distribution grids still remain vulnerable to natural disasters, the power industry has focused on methods of restoring distribution systems after disasters in an effective and quick manner. The current distribution system restoration practice for utilities is mainly based on predetermined priorities and tends to be inefficient and suboptimal, and the lack of situational awareness after the hazard significantly delays the restoration process. As a result, customers may experience an extended blackout, which causes large economic loss. On the other hand, the emerging advanced devices and technologies enabled through grid modernization efforts have the potential to improve the distribution system restoration strategy. However, utilizing these resources to aid the utilities in better distribution system restoration decision-making in response to extreme weather events is a challenging task. Therefore, this paper proposes an integrated solution: a distribution system restoration decision support tool designed by leveraging resources developed for grid modernization. We first review the current distribution restoration practice and discuss why it is inadequate in response to extreme weather events. Then we describe how the grid modernization efforts could benefit distribution system restoration, and we propose an integrated solution in the form of a decision support tool to achieve the goal. The advantages of the solution include improving situational awareness of the system damage status and facilitating survivability for customers. The paper provides a comprehensive review of how the existing methodologies in the literature could be leveraged to achieve the key advantages. The benefits of the developed system restoration decision support tool include the optimal and efficient allocation of repair crews

  16. Adaptive extremal optimization by detrended fluctuation analysis

    International Nuclear Information System (INIS)

    Hamacher, K.

    2007-01-01

    Global optimization is one of the key challenges in computational physics as several problems, e.g. protein structure prediction, the low-energy landscape of atomic clusters, detection of community structures in networks, or model-parameter fitting can be formulated as global optimization problems. Extremal optimization (EO) has in recent years become one particularly successful approach to the global optimization problem. As with almost all other global optimization approaches, EO is driven by an internal dynamics that depends crucially on one or more parameters. Recently, the existence of an optimal scheme for this internal parameter of EO was proven, so as to maximize the performance of the algorithm. However, this proof was not constructive, that is, one cannot use it to deduce the optimal parameter itself a priori. In this study we analyze the dynamics of EO for a test problem (spin glasses). Based on the results we propose an online measure of the performance of EO and a way to use this insight to reformulate the EO algorithm in order to construct optimal values of the internal parameter online without any input by the user. This approach will ultimately allow us to make EO parameter free and thus make its application to general global optimization problems much more efficient
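
    Detrended fluctuation analysis itself is straightforward to implement; the sketch below computes the DFA scaling exponent of a plain white-noise series. In the adaptive scheme described above the same analysis would be applied to the fitness time series generated by EO, which is not reproduced here.

      import numpy as np

      def dfa(x, scales):
          # Plain detrended fluctuation analysis; returns the scaling exponent alpha.
          y = np.cumsum(x - np.mean(x))                 # integrated profile
          F = []
          for n in scales:
              n_seg = len(y) // n
              segs = y[:n_seg * n].reshape(n_seg, n)
              t = np.arange(n)
              f2 = []
              for seg in segs:
                  a, b = np.polyfit(t, seg, 1)          # local linear detrending
                  f2.append(np.mean((seg - (a * t + b)) ** 2))
              F.append(np.sqrt(np.mean(f2)))
          return np.polyfit(np.log(scales), np.log(F), 1)[0]

      rng = np.random.default_rng(2)
      series = rng.normal(size=4096)                    # white noise should give alpha ~ 0.5
      print("DFA exponent:", dfa(series, scales=[16, 32, 64, 128, 256]))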

  17. Patterns and singular features of extreme fluctuational paths of a periodically driven system

    International Nuclear Information System (INIS)

    Chen, Zhen; Liu, Xianbin

    2016-01-01

    Large fluctuations of an overdamped periodically driven oscillating system are investigated theoretically and numerically in the limit of weak noise. Optimal paths fluctuating to a certain point are given by statistical analysis using the concept of the prehistory probability distribution. The validity of the statistical results is verified by solutions of the boundary value problem. Optimal paths are found to change topologically when terminating points lie on opposite sides of a switching line. Patterns of extreme paths are plotted through a proper parameterization of the Lagrangian manifold, which has a complicated structure. Several extreme paths to the same point are obtained from multiple solutions of the boundary value problem. Actions along various extreme paths are calculated and the associated analysis is performed in relation to the singular features of the patterns. - Highlights: • Both extreme and optimal paths are obtained by various methods. • Boundary value problems are solved to ensure the validity of statistical results. • Topological structure of the Lagrangian manifold is considered. • Singularities of the pattern of extreme paths are studied.

  18. Using qualimetric engineering and extremal analysis to optimize a proton exchange membrane fuel cell stack

    International Nuclear Information System (INIS)

    Besseris, George J.

    2014-01-01

    Highlights: • We consider the optimal configuration of a PEMFC stack. • We utilize qualimetric engineering tools (Taguchi screening, regression analysis). • We achieve an analytical solution on a restructured power-law fitting. • We discuss the Pt-cost involvement in the unit and area minimization scope. - Abstract: The optimal configuration of the proton exchange membrane fuel-cell (PEMFC) stack has received attention recently because of its potential use as an isolated energy distributor for household needs. In this work, the original complex problem of generating an optimal PEMFC stack based on the number of cell units connected in series and parallel arrangements, as well as on the cell area, is revisited. A qualimetric engineering strategy is formulated which is based on quickly profiling the PEMFC stack voltage response. Stochastic screening is initiated by employing an L9(3^3) Taguchi-type OA for numerically partitioning the deterministic expression of the output PEMFC stack voltage so as to facilitate the sizing of the magnitude of the individual effects. The power and voltage household specifications for the stack system are maintained at the typical settings of 200 W and 12 V, respectively. The minimization of the stack total-area requirement becomes explicit in this work. The relationship of cell voltage against cell area is cast into a power-law model by regression fitting that achieves a coefficient of determination value of 99.99%. Thus, the theoretical formulation simplifies into a non-linear extremal problem with a constrained solution due to a singularity, which is solved analytically. The optimal solution requires 22 cell units connected in series, where each unit is designed with an area value of 151.4 cm². It is also demonstrated how to visualize the optimal solution using the graphical method of operating lines. The total area of 3270.24 cm² becomes a new benchmark for the optimal design of the studied PEMFC stack configuration. It is
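
    The fit-then-minimize workflow can be sketched as follows: a power law V(A) = k·A^m is fitted to cell voltage versus cell area data on a log-log scale, and the total stack area is then minimized over the number of series-connected cells subject to the 200 W / 12 V specification. The voltage samples, the resulting coefficients and the current-density limit are invented, so the numbers produced do not reproduce the paper's 22-cell, 151.4 cm² result.

      import numpy as np

      area = np.array([50.0, 100.0, 150.0, 200.0, 250.0])   # cell area [cm^2] (invented samples)
      volt = np.array([0.38, 0.47, 0.53, 0.57, 0.61])       # cell voltage [V] at the design current

      m, logk = np.polyfit(np.log(area), np.log(volt), 1)   # log-log regression: V(A) = k * A^m
      k = np.exp(logk)

      P_req, V_req = 200.0, 12.0                            # stack specification: 200 W at 12 V
      I_req = P_req / V_req                                 # series current through every cell [A]
      J_max = 0.6                                           # assumed current-density limit [A/cm^2]
      A_floor = I_req / J_max                               # smallest admissible cell area

      best = None
      for n in range(10, 41):                               # candidate numbers of series-connected cells
          A_need = (V_req / (n * k)) ** (1.0 / m)           # area such that n * V(A) = 12 V
          A_cell = max(A_need, A_floor)
          total = n * A_cell
          if best is None or total < best[2]:
              best = (n, A_cell, total)

      print("fitted power law: V(A) = %.3f * A^%.3f" % (k, m))
      print("minimum-area design: %d cells, %.1f cm^2 each, %.1f cm^2 total" % best)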

  19. Approximative solutions of stochastic optimization problem

    Czech Academy of Sciences Publication Activity Database

    Lachout, Petr

    2010-01-01

    Roč. 46, č. 3 (2010), s. 513-523 ISSN 0023-5954 R&D Projects: GA ČR GA201/08/0539 Institutional research plan: CEZ:AV0Z10750506 Keywords : Stochastic optimization problem * sensitivity * approximative solution Subject RIV: BA - General Mathematics Impact factor: 0.461, year: 2010 http://library.utia.cas.cz/separaty/2010/SI/lachout-approximative solutions of stochastic optimization problem.pdf

  20. Multiobjective optimization of an extremal evolution model

    International Nuclear Information System (INIS)

    Elettreby, M.F.

    2004-09-01

    We propose a two-dimensional model for a co-evolving ecosystem that generalizes the extremal coupled map lattice model. The model takes into account the concept of multiobjective optimization. We find that the system self-organizes into a critical state. The distributions of the distances between subsequent mutations, as well as the distribution of avalanche sizes, follow a power law. (author)

  1. Extreme Learning Machine and Particle Swarm Optimization in optimizing CNC turning operation

    Science.gov (United States)

    Janahiraman, Tiagrajah V.; Ahmad, Nooraziah; Hani Nordin, Farah

    2018-04-01

    The CNC machine is controlled by manipulating cutting parameters that can directly influence process performance. Many optimization methods have been applied to obtain the optimal cutting parameters for a desired performance function. Nonetheless, industry still uses traditional techniques to obtain those values. Lack of knowledge of optimization techniques is the main reason this issue has persisted. Therefore, a simple yet easy to implement Optimal Cutting Parameters Selection System is introduced to help the manufacturer easily understand and determine the best optimal parameters for their turning operation. This new system consists of two stages: modelling and optimization. In modelling of input-output and in-process parameters, a hybrid of Extreme Learning Machine and Particle Swarm Optimization is applied. This modelling technique tends to converge faster than other artificial intelligence techniques and gives accurate results. For the optimization stage, Particle Swarm Optimization is again used to obtain the optimal cutting parameters based on the performance function preferred by the manufacturer. Overall, the system can reduce the gap between academia and industry by introducing a simple yet easy to implement optimization technique. This optimization technique can give accurate results besides being the fastest technique.
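
    The modelling stage rests on the standard ELM recipe: hidden-layer weights are drawn at random and only the output weights are fitted by least squares. The sketch below shows that recipe on synthetic regression data; the data, network size and activation are assumptions, and the PSO weight tuning and cutting-parameter search described in the record are not included.

      import numpy as np

      rng = np.random.default_rng(3)

      # Toy regression data standing in for the cutting-parameter/performance samples.
      X = rng.uniform(-1.0, 1.0, size=(200, 3))
      y = np.sin(np.pi * X[:, 0]) + 0.5 * X[:, 1] ** 2 - X[:, 2] + 0.05 * rng.normal(size=200)

      n_hidden = 50
      W = rng.normal(size=(X.shape[1], n_hidden))     # random input weights (never trained)
      b = rng.normal(size=n_hidden)                   # random hidden biases

      def hidden(X):
          return 1.0 / (1.0 + np.exp(-(X @ W + b)))   # sigmoid hidden layer

      H = hidden(X)
      beta = np.linalg.pinv(H) @ y                    # output weights by least squares (the only "training")

      y_hat = hidden(X) @ beta
      print("training RMSE:", np.sqrt(np.mean((y - y_hat) ** 2)))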

  2. On the complexity of determining tolerances for ε-optimal solutions to min-max combinatorial optimization problems

    NARCIS (Netherlands)

    Ghosh, D.; Sierksma, G.

    2000-01-01

    Sensitivity analysis of ε-optimal solutions is the problem of calculating the range within which a problem parameter may lie so that the given solution remains ε-optimal. In this paper we study the sensitivity analysis problem for ε-optimal solutions to combinatorial optimization problems with

  3. IMRT optimization: Variability of solutions and its radiobiological impact

    International Nuclear Information System (INIS)

    Mattia, Maurizio; Del Giudice, Paolo; Caccia, Barbara

    2004-01-01

    We aim at (1) defining and measuring a 'complexity' index for the optimization process of an intensity modulated radiation therapy treatment plan (IMRT TP), (2) devising an efficient approximate optimization strategy, and (3) evaluating the impact of the complexity of the optimization process on the radiobiological quality of the treatment. In this work, for a prostate therapy case, the IMRT TP optimization problem has been formulated in terms of dose-volume constraints. The cost function has been minimized in order to achieve the optimal solution, by means of an iterative procedure, which is repeated for many initial modulation profiles, and for each of them the final optimal solution is recorded. To explore the complexity of the space of such solutions we have chosen to minimize the cost function with an algorithm that is unable to avoid local minima. The size of the (sub)optimal solutions distribution is taken as an indicator of the complexity of the optimization problem. The impact of the estimated complexity on the probability of success of the therapy is evaluated using radiobiological indicators (Poissonian TCP model [S. Webb and A. E. Nahum, Phys. Med. Biol. 38(6), 653-666 (1993)] and NTCP relative seriality model [Kallman et al., Int. J. Radiat. Biol. 62(2), 249-262 (1992)]). We find in the examined prostate case a nontrivial distribution of local minima, which has symmetry properties allowing a good estimate of near-optimal solutions with a moderate computational load. We finally demonstrate that reducing the a priori uncertainty in the optimal solution results in a significant improvement of the probability of success of the TP, based on TCP and NTCP estimates

  4. Portfolio optimization for heavy-tailed assets: Extreme Risk Index vs. Markowitz

    OpenAIRE

    Mainik, Georg; Mitov, Georgi; Rüschendorf, Ludger

    2015-01-01

    Using daily returns of the S&P 500 stocks from 2001 to 2011, we perform a backtesting study of the portfolio optimization strategy based on the extreme risk index (ERI). This method uses multivariate extreme value theory to minimize the probability of large portfolio losses. With more than 400 stocks to choose from, our study seems to be the first application of extreme value techniques in portfolio management on a large scale. The primary aim of our investigation is the potential of ERI in p...

  5. Numerical solution of the state-delayed optimal control problems by a fast and accurate finite difference θ-method

    Science.gov (United States)

    Hajipour, Mojtaba; Jajarmi, Amin

    2018-02-01

    Using the Pontryagin's maximum principle for a time-delayed optimal control problem results in a system of coupled two-point boundary-value problems (BVPs) involving both time-advance and time-delay arguments. The analytical solution of this advance-delay two-point BVP is extremely difficult, if not impossible. This paper provides a discrete general form of the numerical solution for the derived advance-delay system by applying a finite difference θ-method. This method is also implemented for the infinite-time horizon time-delayed optimal control problems by using a piecewise version of the θ-method. A matrix formulation and the error analysis of the suggested technique are provided. The new scheme is accurate, fast and very effective for the optimal control of linear and nonlinear time-delay systems. Various types of finite- and infinite-time horizon problems are included to demonstrate the accuracy, validity and applicability of the new technique.

  6. Research into Financial Position of Listed Companies following Classification via Extreme Learning Machine Based upon DE Optimization

    OpenAIRE

    Fu Yu; Mu Jiong; Duan Xu Liang

    2016-01-01

    By means of the model of extreme learning machine based upon DE optimization, this article particularly centers on the optimization thinking behind such a model as well as its application effect in the field of listed company’s financial position classification. Comparison shows that the improved extreme learning machine algorithm based upon DE optimization outperforms the traditional extreme learning machine algorithm. Meanwhile, this article also intends to introduce certain research...

  7. A Complete First-Order Analytical Solution for Optimal Low-Thrust Limited-Power Transfers Between Coplanar Orbits with Small Eccentricities

    Science.gov (United States)

    Da Silva Fernandes, Sandro; Das Chagas Carvalho, Francisco; Vilhena de Moraes, Rodolpho

    The purpose of this work is to present a complete first order analytical solution, which includes short periodic terms, for the problem of optimal low-thrust limited power trajectories with large amplitude transfers (no rendezvous) between coplanar orbits with small eccentricities in Newtonian central gravity field. The study of these transfers is particularly interesting because the orbits found in practice often have a small eccentricity and the problem of transferring a vehicle from a low earth orbit to a high earth orbit is frequently found. Besides, the analysis has been motivated by the renewed interest in the use of low-thrust propulsion systems in space missions verified in the last two decades. Several researchers have obtained numerical and sometimes analytical solutions for a number of specific initial orbits and specific thrust profiles. Averaging methods are also used in such researches. Firstly, the optimization problem associated to the space transfer problem is formulated as a Mayer problem of optimal control with Cartesian elements - position and velocity vectors - as state variables. After applying the Pontryagin Maximum Principle, successive Mathieu transformations are performed and suitable sets of orbital elements are introduced. The short periodic terms are eliminated from the maximum Hamiltonian function through an infinitesimal canonical transformation built through Hori method - a perturbation canonical method based on Lie series. The new Hamiltonian function, which results from the infinitesimal canonical transformation, describes the extremal trajectories for long duration maneuvers. Closed-form analytical solutions are obtained for the new canonical system by solving the Hamilton-Jacobi equation through the separation of variables technique. By applying the transformation equations of the algorithm of Hori method, a first order analytical solution for the problem is obtained in non-singular orbital elements. For long duration maneuvers

  8. Research into Financial Position of Listed Companies following Classification via Extreme Learning Machine Based upon DE Optimization

    Directory of Open Access Journals (Sweden)

    Fu Yu

    2016-01-01

    By means of the model of extreme learning machine based upon DE optimization, this article particularly centers on the optimization thinking behind such a model as well as its application effect in the field of listed company’s financial position classification. Comparison shows that the improved extreme learning machine algorithm based upon DE optimization outperforms the traditional extreme learning machine algorithm. Meanwhile, this article also intends to introduce certain research thinking concerning extreme learning machine into the economics classification area so as to fulfill the purpose of computerizing the speedy but effective evaluation of massive financial statements of listed companies pertaining to different classes

  9. SU-F-R-10: Selecting the Optimal Solution for Multi-Objective Radiomics Model

    International Nuclear Information System (INIS)

    Zhou, Z; Folkert, M; Wang, J

    2016-01-01

    Purpose: To develop an evidential reasoning approach for selecting the optimal solution from a Pareto solution set obtained by a multi-objective radiomics model for predicting distant failure in lung SBRT. Methods: In the multi-objective radiomics model, both sensitivity and specificity are considered as the objective functions simultaneously. A Pareto solution set with many feasible solutions will result from the multi-objective optimization. In this work, an optimal solution Selection methodology for Multi-Objective radiomics Learning model using the Evidential Reasoning approach (SMOLER) was proposed to select the optimal solution from the Pareto solution set. The proposed SMOLER method used the evidential reasoning approach to calculate the utility of each solution based on pre-set optimal solution selection rules. The solution with the highest utility was chosen as the optimal solution. In SMOLER, an optimal learning model coupled with a clonal selection algorithm was used to optimize model parameters. In this study, PET, CT image features and clinical parameters were utilized for predicting distant failure in lung SBRT. Results: A total of 126 solution sets were generated by adjusting predictive model parameters. Each Pareto set contains 100 feasible solutions. The solution selected by SMOLER within each Pareto set was compared to the manually selected optimal solution. Five-fold cross-validation was used to evaluate the optimal solution selection accuracy of SMOLER. The selection accuracies for the five folds were 80.00%, 69.23%, 84.00%, 84.00%, 80.00%, respectively. Conclusion: An optimal solution selection methodology for multi-objective radiomics learning model using the evidential reasoning approach (SMOLER) was proposed. Experimental results show that the optimal solution can be found in approximately 80% of cases.
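
    The final selection step, picking one solution out of a Pareto set according to pre-set rules, can be imitated with a simple normalized weighted-utility score, as sketched below on a synthetic sensitivity/specificity front. This is only a stand-in: the actual SMOLER method computes utilities with the evidential reasoning approach, and the front and weights here are invented.

      import numpy as np

      rng = np.random.default_rng(4)

      # Synthetic Pareto front of (sensitivity, specificity) pairs.
      t = np.sort(rng.uniform(0.0, 1.0, size=100))
      pareto = np.column_stack([0.6 + 0.39 * t, 0.99 - 0.30 * t ** 2])   # trade-off curve

      weights = np.array([0.5, 0.5])              # preference weights over the two objectives
      lo, hi = pareto.min(axis=0), pareto.max(axis=0)
      scaled = (pareto - lo) / (hi - lo)          # normalise each objective to [0, 1]
      utility = scaled @ weights                  # stand-in for the evidential reasoning utility

      best = int(np.argmax(utility))
      print("selected solution:", pareto[best], "utility:", utility[best])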

  10. SU-F-R-10: Selecting the Optimal Solution for Multi-Objective Radiomics Model

    Energy Technology Data Exchange (ETDEWEB)

    Zhou, Z; Folkert, M; Wang, J [UT Southwestern Medical Center, Dallas, TX (United States)]

    2016-06-15

    Purpose: To develop an evidential reasoning approach for selecting the optimal solution from a Pareto solution set obtained by a multi-objective radiomics model for predicting distant failure in lung SBRT. Methods: In the multi-objective radiomics model, both sensitivity and specificity are considered as the objective functions simultaneously. A Pareto solution set with many feasible solutions will result from the multi-objective optimization. In this work, an optimal solution Selection methodology for Multi-Objective radiomics Learning model using the Evidential Reasoning approach (SMOLER) was proposed to select the optimal solution from the Pareto solution set. The proposed SMOLER method used the evidential reasoning approach to calculate the utility of each solution based on pre-set optimal solution selection rules. The solution with the highest utility was chosen as the optimal solution. In SMOLER, an optimal learning model coupled with a clonal selection algorithm was used to optimize model parameters. In this study, PET, CT image features and clinical parameters were utilized for predicting distant failure in lung SBRT. Results: A total of 126 solution sets were generated by adjusting predictive model parameters. Each Pareto set contains 100 feasible solutions. The solution selected by SMOLER within each Pareto set was compared to the manually selected optimal solution. Five-fold cross-validation was used to evaluate the optimal solution selection accuracy of SMOLER. The selection accuracies for the five folds were 80.00%, 69.23%, 84.00%, 84.00%, 80.00%, respectively. Conclusion: An optimal solution selection methodology for multi-objective radiomics learning model using the evidential reasoning approach (SMOLER) was proposed. Experimental results show that the optimal solution can be found in approximately 80% of cases.

  11. An Improved Real-Coded Population-Based Extremal Optimization Method for Continuous Unconstrained Optimization Problems

    Directory of Open Access Journals (Sweden)

    Guo-Qiang Zeng

    2014-01-01

    As a novel evolutionary optimization method, extremal optimization (EO) has been successfully applied to a variety of combinatorial optimization problems. However, applications of EO to continuous optimization problems are relatively rare. This paper proposes an improved real-coded population-based EO method (IRPEO) for continuous unconstrained optimization problems. The key operations of IRPEO include generation of a real-coded random initial population, evaluation of individual and population fitness, selection of bad elements according to a power-law probability distribution, generation of a new population based on uniform random mutation, and updating the population by accepting the new population unconditionally. The experimental results on 10 benchmark test functions with dimension N=30 have shown that IRPEO is competitive with or even better than various recently reported genetic algorithm (GA) versions with different mutation operations in terms of simplicity, effectiveness, and efficiency. Furthermore, the superiority of IRPEO to other evolutionary algorithms such as the original population-based EO, particle swarm optimization (PSO), and the hybrid PSO-EO is also demonstrated by the experimental results on some benchmark functions.
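
    A compact sketch of a population-based EO step of this flavour on the sphere benchmark is shown below: for every individual, a badly contributing variable is selected through a power law over ranks and mutated uniformly at random, and the new population is accepted unconditionally. The per-variable fitness definition, tau and the other parameters are assumptions rather than the IRPEO settings.

      import numpy as np

      rng = np.random.default_rng(5)
      N, pop_size, tau, iters = 30, 20, 1.3, 2000
      lo, hi = -5.12, 5.12

      def sphere(x):
          return np.sum(x ** 2, axis=-1)

      pop = rng.uniform(lo, hi, size=(pop_size, N))
      p_rank = np.arange(1, N + 1, dtype=float) ** (-tau)
      p_rank /= p_rank.sum()                          # power-law selection over variable ranks

      best_x = pop[np.argmin(sphere(pop))].copy()
      for _ in range(iters):
          for x in pop:                               # rows are views, so pop is mutated in place
              contrib = x ** 2                        # per-variable "fitness": contribution to the cost
              order = np.argsort(contrib)[::-1]       # worst (largest) contribution first
              k = rng.choice(N, p=p_rank)
              x[order[k]] = rng.uniform(lo, hi)       # uniform random mutation of the selected variable
          cand = pop[np.argmin(sphere(pop))]
          if sphere(cand) < sphere(best_x):           # new population accepted unconditionally; track the best
              best_x = cand.copy()

      print("best cost found:", sphere(best_x))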

  12. Multiobjective Optimization of Linear Cooperative Spectrum Sensing: Pareto Solutions and Refinement.

    Science.gov (United States)

    Yuan, Wei; You, Xinge; Xu, Jing; Leung, Henry; Zhang, Tianhang; Chen, Chun Lung Philip

    2016-01-01

    In linear cooperative spectrum sensing, the weights of secondary users and detection threshold should be optimally chosen to minimize missed detection probability and to maximize secondary network throughput. Since these two objectives are not completely compatible, we study this problem from the viewpoint of multiple-objective optimization. We aim to obtain a set of evenly distributed Pareto solutions. To this end, here, we introduce the normal constraint (NC) method to transform the problem into a set of single-objective optimization (SOO) problems. Each SOO problem usually results in a Pareto solution. However, NC does not provide any solution method to these SOO problems, nor any indication on the optimal number of Pareto solutions. Furthermore, NC has no preference over all Pareto solutions, while a designer may be only interested in some of them. In this paper, we employ a stochastic global optimization algorithm to solve the SOO problems, and then propose a simple method to determine the optimal number of Pareto solutions under a computational complexity constraint. In addition, we extend NC to refine the Pareto solutions and select the ones of interest. Finally, we verify the effectiveness and efficiency of the proposed methods through computer simulations.

  13. Optimal calibration of variable biofuel blend dual-injection engines using sparse Bayesian extreme learning machine and metaheuristic optimization

    International Nuclear Information System (INIS)

    Wong, Ka In; Wong, Pak Kin

    2017-01-01

    Highlights: • A new calibration method is proposed for dual-injection engines under biofuel blends. • Sparse Bayesian extreme learning machine and flower pollination algorithm are employed in the proposed method. • An SI engine is retrofitted for operating under dual-injection strategy. • The proposed method is verified experimentally under the two idle speed conditions. • Comparison with other machine learning methods and optimization algorithms is conducted. - Abstract: Although many combinations of biofuel blends are available in the market, it is more beneficial to vary the ratio of biofuel blends at different engine operating conditions for optimal engine performance. Dual-injection engines have the potential to implement such function. However, while optimal engine calibration is critical for achieving high performance, the use of two injection systems, together with other modern engine technologies, leads the calibration of the dual-injection engines to a very complicated task. Traditional trial-and-error-based calibration approach can no longer be adopted as it would be time-, fuel- and labor-consuming. Therefore, a new and fast calibration method based on sparse Bayesian extreme learning machine (SBELM) and metaheuristic optimization is proposed to optimize the dual-injection engines operating with biofuels. A dual-injection spark-ignition engine fueled with ethanol and gasoline is employed for demonstration purpose. The engine response for various parameters is firstly acquired, and an engine model is then constructed using SBELM. With the engine model, the optimal engine settings are determined based on recently proposed metaheuristic optimization methods. Experimental results validate the optimal settings obtained with the proposed methodology, indicating that the use of machine learning and metaheuristic optimization for dual-injection engine calibration is effective and promising.

  14. Asymptotic Normality of the Optimal Solution in Multiresponse Surface Mathematical Programming

    OpenAIRE

    Díaz-García, José A.; Caro-Lopera, Francisco J.

    2015-01-01

    An explicit form for the effect of perturbations of the matrix of regression coefficients on the optimal solution in multiresponse surface methodology is obtained in this paper. Then, the sensitivity analysis of the optimal solution is studied and the critical point characterisation of the convex program, associated with the optimum of a multiresponse surface, is also analysed. Finally, the asymptotic normality of the optimal solution is derived by the standard methods.

  15. Mechanical Design Optimization Using Advanced Optimization Techniques

    CERN Document Server

    Rao, R Venkata

    2012-01-01

    Mechanical design includes an optimization process in which designers always consider objectives such as strength, deflection, weight, wear, corrosion, etc. depending on the requirements. However, design optimization for a complete mechanical assembly leads to a complicated objective function with a large number of design variables. It is a good practice to apply optimization techniques for individual components or intermediate assemblies than a complete assembly. Analytical or numerical methods for calculating the extreme values of a function may perform well in many practical cases, but may fail in more complex design situations. In real design problems, the number of design parameters can be very large and their influence on the value to be optimized (the goal function) can be very complicated, having nonlinear character. In these complex cases, advanced optimization algorithms offer solutions to the problems, because they find a solution near to the global optimum within reasonable time and computational ...

  16. Design of materials with extreme thermal expansion using a three-phase topology optimization method

    DEFF Research Database (Denmark)

    Sigmund, Ole; Torquato, S.

    1997-01-01

    We show how composites with extremal or unusual thermal expansion coefficients can be designed using a numerical topology optimization method. The composites are composed of two different material phases and void. The optimization method is illustrated by designing materials having maximum therma...

  17. Optimal Water-Power Flow Problem: Formulation and Distributed Optimal Solution

    Energy Technology Data Exchange (ETDEWEB)

    Dall-Anese, Emiliano [National Renewable Energy Laboratory (NREL), Golden, CO (United States)]; Zhao, Changhong [National Renewable Energy Laboratory (NREL), Golden, CO (United States)]; Zamzam, Admed S. [University of Minnesota]; Sidiropoulos, Nicholas D. [University of Minnesota]; Taylor, Josh A. [University of Toronto]

    2018-01-12

    This paper formalizes an optimal water-power flow (OWPF) problem to optimize the use of controllable assets across power and water systems while accounting for the couplings between the two infrastructures. Tanks and pumps are optimally managed to satisfy water demand while improving power grid operations; for the power network, an AC optimal power flow formulation is augmented to accommodate the controllability of water pumps. Unfortunately, the physics governing the operation of the two infrastructures and coupling constraints lead to a nonconvex (and, in fact, NP-hard) problem; however, after reformulating OWPF as a nonconvex, quadratically-constrained quadratic problem, a feasible point pursuit-successive convex approximation approach is used to identify feasible and optimal solutions. In addition, a distributed solver based on the alternating direction method of multipliers enables water and power operators to pursue individual objectives while respecting the couplings between the two networks. The merits of the proposed approach are demonstrated for the case of a distribution feeder coupled with a municipal water distribution network.
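
    The distributed coordination idea, two operators optimizing private objectives while agreeing on coupled variables via the alternating direction method of multipliers, can be reduced to a tiny consensus example, shown below with two invented scalar quadratic objectives. The real OWPF problem is nonconvex and network-constrained, so this is only a sketch of the coordination mechanism.

      # Tiny scaled-ADMM consensus sketch: two operators with private quadratic objectives
      # agree on one shared coupling variable z (think: pump electric power).
      rho = 1.0
      x = y = z = 0.0
      u1 = u2 = 0.0                                    # scaled dual variables

      for _ in range(100):
          # closed-form local minimisations of (x-3)^2 and 2*(y-1)^2 plus the augmented penalty
          x = (2.0 * 3.0 + rho * (z - u1)) / (2.0 + rho)
          y = (4.0 * 1.0 + rho * (z - u2)) / (4.0 + rho)
          z = 0.5 * ((x + u1) + (y + u2))              # consensus (averaging) step
          u1 += x - z
          u2 += y - z

      print("consensus value z:", round(z, 4))         # analytic optimum of (z-3)^2 + 2(z-1)^2 is z = 5/3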

  18. Study on Temperature and Synthetic Compensation of Piezo-Resistive Differential Pressure Sensors by Coupled Simulated Annealing and Simplex Optimized Kernel Extreme Learning Machine.

    Science.gov (United States)

    Li, Ji; Hu, Guoqing; Zhou, Yonghong; Zou, Chong; Peng, Wei; Alam Sm, Jahangir

    2017-04-19

    As a high performance-cost ratio solution for differential pressure measurement, piezo-resistive differential pressure sensors are widely used in engineering processes. However, their performance is severely affected by the environmental temperature and the static pressure applied to them. In order to modify the non-linear measuring characteristics of the piezo-resistive differential pressure sensor, compensation actions should synthetically consider these two aspects. Advantages such as nonlinear approximation capability, highly desirable generalization ability and computational efficiency make the kernel extreme learning machine (KELM) a practical approach for this critical task. Since the KELM model is intrinsically sensitive to the regularization parameter and the kernel parameter, a searching scheme combining the coupled simulated annealing (CSA) algorithm and the Nelder-Mead simplex algorithm is adopted to find an optimal KELM parameter set. A calibration experiment at different working pressure levels was conducted within the temperature range to assess the proposed method. In comparison with other compensation models such as the back-propagation neural network (BP), radial basis function neural network (RBF), particle swarm optimization optimized support vector machine (PSO-SVM), particle swarm optimization optimized least squares support vector machine (PSO-LSSVM) and extreme learning machine (ELM), the compensation results show that the presented compensation algorithm exhibits a more satisfactory performance with respect to temperature compensation and synthetic compensation problems.

  19. A Visualization Technique for Accessing Solution Pool in Interactive Methods of Multiobjective Optimization

    OpenAIRE

    Filatovas, Ernestas; Podkopaev, Dmitry; Kurasova, Olga

    2015-01-01

    Interactive methods of multiobjective optimization repetitively derive Pareto optimal solutions based on the decision maker’s preference information and present the obtained solutions for his/her consideration. Some interactive methods save the obtained solutions into a solution pool and, at each iteration, allow the decision maker to consider any of the solutions obtained earlier. This feature contributes to the flexibility of exploring the Pareto optimal set and learning about the op...

  20. Solar photovoltaic power forecasting using optimized modified extreme learning machine technique

    Directory of Open Access Journals (Sweden)

    Manoja Kumar Behera

    2018-06-01

    Prediction of photovoltaic power is a significant research area that uses different forecasting techniques to mitigate the effects of the uncertainty of photovoltaic generation. Increasingly high penetration levels of photovoltaic (PV) generation arise in the smart grid and microgrid concepts. The solar source is irregular in nature; as a result, PV power is intermittent and is highly dependent on irradiance, temperature level and other atmospheric parameters. Large-scale photovoltaic generation and its penetration into the conventional power system introduce significant challenges to microgrid and smart grid energy management. It is therefore critical to forecast solar power/irradiance accurately in order to secure the economic operation of the microgrid and smart grid. In this paper an extreme learning machine (ELM) technique is used for PV power forecasting of a real-time model whose location is given in Table 1. Here the model is associated with the incremental conductance (IC) maximum power point tracking (MPPT) technique based on a proportional integral (PI) controller, which is simulated in MATLAB/SIMULINK software. To train the single layer feed-forward network (SLFN), the ELM algorithm is implemented, whose weights are updated by different particle swarm optimization (PSO) techniques, and their performance is compared with existing models like the back propagation (BP) forecasting model. Keywords: PV array, Extreme learning machine, Maximum power point tracking, Particle swarm optimization, Craziness particle swarm optimization, Accelerate particle swarm optimization, Single layer feed-forward network

  1. Optimality conditions for the numerical solution of optimization problems with PDE constraints

    Energy Technology Data Exchange (ETDEWEB)

    Aguilo Valentin, Miguel Alejandro; Ridzal, Denis

    2014-03-01

    A theoretical framework for the numerical solution of partial differential equation (PDE) constrained optimization problems is presented in this report. This theoretical framework embodies the fundamental infrastructure required to efficiently implement and solve this class of problems. Detailed derivations of the optimality conditions required to accurately solve several parameter identification and optimal control problems are also provided in this report. This will allow the reader to further understand how the theoretical abstraction presented in this report translates to the application.

  2. Optimal security investments and extreme risk.

    Science.gov (United States)

    Mohtadi, Hamid; Agiwal, Swati

    2012-08-01

    In the aftermath of 9/11, concern over security increased dramatically in both the public and the private sector. Yet, no clear algorithm exists to inform firms on the amount and the timing of security investments to mitigate the impact of catastrophic risks. The goal of this article is to devise an optimum investment strategy for firms to mitigate exposure to catastrophic risks, focusing on how much to invest and when to invest. The latter question addresses the issue of whether postponing a risk mitigating decision is an optimal strategy or not. Accordingly, we develop and estimate both a one-period model and a multiperiod model within the framework of extreme value theory (EVT). We calibrate these models using probability measures for catastrophic terrorism risks associated with attacks on the food sector. We then compare our findings with the purchase of catastrophic risk insurance. © 2012 Society for Risk Analysis.

  3. Optimized Extreme Learning Machine for Power System Transient Stability Prediction Using Synchrophasors

    Directory of Open Access Journals (Sweden)

    Yanjun Zhang

    2015-01-01

    A new optimized extreme learning machine (ELM) based method for power system transient stability prediction (TSP) using synchrophasors is presented in this paper. First, the input features symbolizing the transient stability of power systems are extracted from synchronized measurements. Then, an ELM classifier is employed to build the TSP model. Finally, the parameters of the model are optimized by using the improved particle swarm optimization (IPSO) algorithm. The novelty of the proposal lies in the fact that it improves the prediction performance of the ELM-based TSP model by using IPSO to optimize the parameters of the model with synchrophasors. Finally, based on the test results on both the IEEE 39-bus system and a large-scale real power system, the correctness and validity of the presented approach are verified.

  4. Solution quality improvement in chiller loading optimization

    International Nuclear Information System (INIS)

    Geem, Zong Woo

    2011-01-01

    In order to reduce greenhouse gas emissions, we can operate a multiple chiller system energy-efficiently using optimization techniques. So far, various optimization techniques have been proposed for the optimal chiller loading problem. Most of those techniques are meta-heuristic algorithms such as genetic algorithms, simulated annealing, and particle swarm optimization. However, this study applied a gradient-based method, named generalized reduced gradient, and obtained better results when compared with other approaches. When two additional approaches (hybridization of the meta-heuristic algorithm and the gradient-based algorithm; and reformulation of the optimization structure by adding a binary variable which denotes each chiller's operating status) were introduced, generalized reduced gradient found even better solutions. - Highlights: → Chiller loading problem is optimized by generalized reduced gradient (GRG) method. → Results are compared with meta-heuristic algorithms such as genetic algorithm. → Results are even enhanced by hybridizing meta-heuristic and gradient techniques. → Results are even enhanced by modifying the optimization formulation.
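
    The optimal chiller loading problem itself has a compact form: minimize the sum of the chillers' power curves over the part-load ratios subject to meeting the cooling demand. The sketch below solves a three-chiller toy instance with SciPy's SLSQP solver as a readily available gradient-based stand-in for generalized reduced gradient; the quadratic power coefficients, capacities and demand are invented.

      import numpy as np
      from scipy.optimize import minimize

      # Per-chiller power model: kW = a + b*PLR + c*PLR^2 (coefficients are invented)
      coeff = np.array([[100.0, 300.0, 150.0],
                        [120.0, 280.0, 170.0],
                        [110.0, 310.0, 140.0]])
      capacity = np.array([800.0, 900.0, 850.0])      # cooling capacity of each chiller [kW]
      demand = 1800.0                                  # required total cooling load [kW]

      def power(plr):
          a, b, c = coeff[:, 0], coeff[:, 1], coeff[:, 2]
          return np.sum(a + b * plr + c * plr ** 2)

      cons = {"type": "eq", "fun": lambda plr: np.sum(capacity * plr) - demand}
      bounds = [(0.3, 1.0)] * 3                        # typical admissible part-load range
      x0 = np.full(3, demand / capacity.sum())

      res = minimize(power, x0, method="SLSQP", bounds=bounds, constraints=[cons])
      print("part-load ratios:", np.round(res.x, 3), "total power [kW]:", round(res.fun, 1))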

  5. Families of optimal thermodynamic solutions for combined cycle gas turbine (CCGT) power plants

    International Nuclear Information System (INIS)

    Godoy, E.; Scenna, N.J.; Benz, S.J.

    2010-01-01

    Optimal designs of a CCGT power plant characterized by maximum second law efficiency values are determined for a wide range of power demands and different values of the available heat transfer area. These thermodynamically optimal solutions are found within a feasible operation region by means of a non-linear mathematical programming (NLP) model, where decision variables (i.e. transfer areas, power production, mass flow rates, temperatures and pressures) can vary freely. Technical relationships among them are used to systematize the optimal values of the design and operative variables of a CCGT power plant into optimal solution sets, named here optimal solution families. From an operative and design point of view, the families of optimal solutions make it possible to know in advance the optimal values of the CCGT variables when facing changes in power demand or when adjusting the design to an available heat transfer area.

  6. Optimization of the annual construction program solutions

    Directory of Open Access Journals (Sweden)

    Oleinik Pavel

    2017-01-01

    Full Text Available The article considers potentially possible optimization solutions in scheduling when forming the annual production programs of construction complex organizations. The optimization instrument is represented as a two-component system. As a fundamentally new approach in the first block of the annual program solutions, the authors propose to use a scientifically grounded methodology for determining the scope of work that can be transferred to a subcontractor without the risk of the General Contractor losing management control over the construction site. For this purpose, a special indicator characterizing the activity of the general construction organization is introduced - the coefficient of construction production management. In the second block, the principal methods for forming calendar plans for the fulfillment of the critical work effort by the leading stream are proposed, depending on the intensity characteristic.

  7. A modified probabilistic genetic algorithm for the solution of complex constrained optimization problems

    OpenAIRE

    Vorozheikin, A.; Gonchar, T.; Panfilov, I.; Sopov, E.; Sopov, S.

    2009-01-01

    A new algorithm for the solution of complex constrained optimization problems, based on the probabilistic genetic algorithm with optimal solution prediction, is proposed. The results of an efficiency investigation, in comparison with a standard genetic algorithm, are presented.

  8. Topology-oblivious optimization of MPI broadcast algorithms on extreme-scale platforms

    KAUST Repository

    Hasanov, Khalid

    2015-11-01

    © 2015 Elsevier B.V. All rights reserved. Significant research has been conducted in collective communication operations, in particular in MPI broadcast, on distributed memory platforms. Most of the research efforts aim to optimize the collective operations for particular architectures by taking into account either their topology or platform parameters. In this work we propose a simple but general approach to optimization of the legacy MPI broadcast algorithms, which are widely used in MPICH and Open MPI. The proposed optimization technique is designed to address the challenge of extreme scale of future HPC platforms. It is based on hierarchical transformation of the traditionally flat logical arrangement of communicating processors. Theoretical analysis and experimental results on IBM BlueGene/P and a cluster of the Grid'5000 platform are presented.

  9. On Attainability of Optimal Solutions for Linear Elliptic Equations with Unbounded Coefficients

    Directory of Open Access Journals (Sweden)

    P. I. Kogut

    2011-12-01

    Full Text Available We study an optimal boundary control problem (OCP) associated to a linear elliptic equation −div(∇y + A(x)∇y) = f describing diffusion in a turbulent flow. The characteristic feature of this equation is the fact that, in applications, the stream matrix A(x) = [a_ij(x)], i,j = 1,...,N, is skew-symmetric, a_ij(x) = −a_ji(x), measurable, and belongs to an L²-space (rather than L^∞). An optimal solution to such a problem can inherit the singular character of the original stream matrix A. We show that optimal solutions can be attained by solutions of special optimal boundary control problems.

  10. Sensitivity analysis of efficient solution in vector MINMAX boolean programming problem

    Directory of Open Access Journals (Sweden)

    Vladimir A. Emelichev

    2002-11-01

    Full Text Available We consider a multiple criterion Boolean programming problem with MINMAX partial criteria. The extreme level of independent perturbations of the partial criteria parameters such that an efficient (Pareto optimal) solution preserves its optimality was obtained.

  11. Optimized extreme learning machine for urban land cover classification using hyperspectral imagery

    Science.gov (United States)

    Su, Hongjun; Tian, Shufang; Cai, Yue; Sheng, Yehua; Chen, Chen; Najafian, Maryam

    2017-12-01

    This work presents a new urban land cover classification framework using the firefly algorithm (FA) optimized extreme learning machine (ELM). FA is adopted to optimize the regularization coefficient C and Gaussian kernel σ for kernel ELM. Additionally, effectiveness of spectral features derived from an FA-based band selection algorithm is studied for the proposed classification task. Three sets of hyperspectral databases were recorded using different sensors, namely HYDICE, HyMap, and AVIRIS. Our study shows that the proposed method outperforms traditional classification algorithms such as SVM and reduces computational cost significantly.
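
    The kernel ELM referred to above admits a closed-form solution once the regularization coefficient C and kernel width σ are fixed; the sketch below follows the commonly used formulation (it is not taken from the paper), and the firefly search over (C, σ), the band selection step and the real hyperspectral data are all omitted in favour of synthetic inputs.

```python
import numpy as np

def rbf_kernel(A, B, sigma):
    """Gaussian (RBF) kernel matrix between the rows of A and the rows of B."""
    d2 = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
    return np.exp(-d2 / (2 * sigma**2))

def kelm_fit_predict(X, T, X_test, C, sigma):
    """Kernel ELM: alpha = (I/C + Omega)^-1 T, prediction = K(test, train) @ alpha."""
    omega = rbf_kernel(X, X, sigma)
    alpha = np.linalg.solve(np.eye(len(X)) / C + omega, T)
    return rbf_kernel(X_test, X, sigma) @ alpha

# Synthetic 3-class example with one-hot targets (stand-in for land-cover labels).
rng = np.random.default_rng(1)
X = rng.normal(size=(150, 10))
labels = rng.integers(0, 3, size=150)
T = np.eye(3)[labels]
scores = kelm_fit_predict(X, T, X, C=10.0, sigma=2.0)
print("training accuracy:", np.mean(scores.argmax(1) == labels))
```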

  12. Multiobjective optimization of urban water resources: Moving toward more practical solutions

    Science.gov (United States)

    Mortazavi, Mohammad; Kuczera, George; Cui, Lijie

    2012-03-01

    The issue of drought security is of paramount importance for cities located in regions subject to severe prolonged droughts. The prospect of "running out of water" for an extended period would threaten the very existence of the city. Managing drought security for an urban water supply is a complex task involving trade-offs between conflicting objectives. In this paper a multiobjective optimization approach for urban water resource planning and operation is developed to overcome practically significant shortcomings identified in previous work. A case study based on the headworks system for Sydney (Australia) demonstrates the approach and highlights the potentially serious shortcomings of Pareto optimal solutions conditioned on short climate records, incomplete decision spaces, and constraints to which system response is sensitive. Where high levels of drought security are required, optimal solutions conditioned on short climate records are flawed. Our approach addresses drought security explicitly by identifying approximate optimal solutions in which the system does not "run dry" in severe droughts with expected return periods up to a nominated (typically large) value. In addition, it is shown that failure to optimize the full mix of interacting operational and infrastructure decisions and to explore the trade-offs associated with sensitive constraints can lead to significantly more costly solutions.

  13. Pressure Prediction of Coal Slurry Transportation Pipeline Based on Particle Swarm Optimization Kernel Function Extreme Learning Machine

    Directory of Open Access Journals (Sweden)

    Xue-cun Yang

    2015-01-01

    Full Text Available For the coal slurry pipeline blockage prediction problem, analysis of the actual operating scene shows that predicting the pressure at each measuring point is a prerequisite for blockage prediction. The kernel function of the support vector machine is introduced into the extreme learning machine, the parameters are optimized by the particle swarm algorithm, and a blockage prediction method based on the particle swarm optimization kernel function extreme learning machine (PSOKELM) is put forward. Actual test data from the HuangLing coal gangue power plant are used for simulation experiments and compared with a support vector machine prediction model optimized by the particle swarm algorithm (PSOSVM) and a kernel function extreme learning machine prediction model (KELM). The results prove that the mean square error (MSE) of the prediction model based on PSOKELM is 0.0038 and the correlation coefficient is 0.9955, which is superior to the prediction model based on PSOSVM in speed and accuracy and superior to the KELM prediction model in accuracy.

  14. Persistent junk solutions in time-domain modeling of extreme mass ratio binaries

    International Nuclear Information System (INIS)

    Field, Scott E.; Hesthaven, Jan S.; Lau, Stephen R.

    2010-01-01

    In the context of metric perturbation theory for nonspinning black holes, extreme mass ratio binary systems are described by distributionally forced master wave equations. Numerical solution of a master wave equation as an initial boundary value problem requires initial data. However, because the correct initial data for generic-orbit systems is unknown, specification of trivial initial data is a common choice, despite being inconsistent and resulting in a solution which is initially discontinuous in time. As is well known, this choice leads to a burst of junk radiation which eventually propagates off the computational domain. We observe another potential consequence of trivial initial data: development of a persistent spurious solution, here referred to as the Jost junk solution, which contaminates the physical solution for long times. This work studies the influence of both types of junk on metric perturbations, waveforms, and self-force measurements, and it demonstrates that smooth modified source terms mollify the Jost solution and reduce junk radiation. Our concluding section discusses the applicability of these observations to other numerical schemes and techniques used to solve distributionally forced master wave equations.

  15. Efficient Solutions and Cost-Optimal Analysis for Existing School Buildings

    Directory of Open Access Journals (Sweden)

    Paolo Maria Congedo

    2016-10-01

    Full Text Available The recast of the energy performance of buildings directive (EPBD) describes a comparative methodological framework to promote energy efficiency and establish minimum energy performance requirements in buildings at the lowest costs. The aim of the cost-optimal methodology is to foster the achievement of nearly zero energy buildings (nZEBs), the new target for all new buildings by 2020, characterized by a high performance with a low energy requirement almost covered by renewable sources. The paper presents the results of the application of the cost-optimal methodology in two existing buildings located in the Mediterranean area. These buildings are a kindergarten and a nursery school that differ in construction period, materials and systems. Several combinations of measures have been applied to derive cost-effective efficient solutions for retrofitting. The cost-optimal level has been identified for each building and the best performing solutions have been selected considering both a financial and a macroeconomic analysis. The results illustrate the suitability of the methodology to assess cost-optimality and energy efficiency in school building refurbishment. The research shows the variants providing the most cost-effective balance between costs and energy saving. The cost-optimal solution reduces primary energy consumption by 85% and gas emissions by 82%–83% in each reference building.

  16. Practical solutions for multi-objective optimization: An application to system reliability design problems

    International Nuclear Information System (INIS)

    Taboada, Heidi A.; Baheranwala, Fatema; Coit, David W.; Wattanapongsakorn, Naruemon

    2007-01-01

    For multiple-objective optimization problems, a common solution methodology is to determine a Pareto optimal set. Unfortunately, these sets are often large and can become difficult to comprehend and consider. Two methods are presented as practical approaches to reduce the size of the Pareto optimal set for multiple-objective system reliability design problems. The first method is a pseudo-ranking scheme that helps the decision maker select solutions that reflect his/her objective function priorities. In the second approach, data mining clustering techniques were used to group the data with the k-means algorithm, finding clusters of similar solutions. This provides the decision maker with just k general solutions to choose from. With this second method, from the clustered Pareto optimal set, we attempted to find solutions which are likely to be more relevant to the decision maker: solutions where a small improvement in one objective would lead to a large deterioration in at least one other objective. To demonstrate how these methods work, the well-known redundancy allocation problem was solved as a multiple-objective problem by using the NSGA genetic algorithm to initially find the Pareto optimal solutions, and the two proposed methods were then applied to prune the Pareto set.
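
    A minimal sketch of the second idea, clustering a Pareto set with k-means and keeping one representative per cluster, is given below; scikit-learn is assumed to be available, the two-objective front is synthetic, and the paper's pseudo-ranking scheme is not reproduced.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(2)
# Synthetic 2-objective Pareto front: (cost, unreliability), both to be minimized.
t = np.sort(rng.uniform(0.05, 1.0, size=60))
pareto = np.column_stack([t, 1.0 / t])          # trade-off curve stand-in

k = 5
km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(pareto)

# Keep the member closest to each cluster centre as its representative solution.
reps = []
for j in range(k):
    members = np.where(km.labels_ == j)[0]
    centre = km.cluster_centers_[j]
    reps.append(members[np.argmin(np.linalg.norm(pareto[members] - centre, axis=1))])
print("representative Pareto solutions:", sorted(reps))
```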

  17. Complicated problem solution techniques in optimal parameter searching

    International Nuclear Information System (INIS)

    Gergel', V.P.; Grishagin, V.A.; Rogatneva, E.A.; Strongin, R.G.; Vysotskaya, I.N.; Kukhtin, V.V.

    1992-01-01

    An algorithm is presented for a global search for the numerical solution of multidimensional multiextremal multicriteria optimization problems with complicated constraints. Boundedness of the object characteristic changes is assumed for restricted changes of its parameters (Lipschitz condition). The algorithm was realized as a computer code, and the program was used in practice to solve various applied optimization problems. 10 refs.; 3 figs

  18. Efficient Output Solution for Nonlinear Stochastic Optimal Control Problem with Model-Reality Differences

    Directory of Open Access Journals (Sweden)

    Sie Long Kek

    2015-01-01

    Full Text Available A computational approach is proposed for solving the discrete time nonlinear stochastic optimal control problem. Our aim is to obtain the optimal output solution of the original optimal control problem through solving the simplified model-based optimal control problem iteratively. In our approach, the adjusted parameters are introduced into the model used such that the differences between the real system and the model used can be computed. Particularly, system optimization and parameter estimation are integrated interactively. On the other hand, the output is measured from the real plant and is fed back into the parameter estimation problem to establish a matching scheme. During the calculation procedure, the iterative solution is updated in order to approximate the true optimal solution of the original optimal control problem despite model-reality differences. For illustration, a wastewater treatment problem is studied and the results show the efficiency of the approach proposed.

  19. Inverse planning and optimization: a comparison of solutions

    Energy Technology Data Exchange (ETDEWEB)

    Ringor, Michael [School of Health Sciences, Purdue University, West Lafayette, IN (United States); Papiez, Lech [Department of Radiation Oncology, Indiana University, Indianapolis, IN (United States)

    1998-09-01

    The basic problem in radiation therapy treatment planning is to determine an appropriate set of treatment parameters that would induce an effective dose distribution inside a patient. One can approach this task as an inverse problem, or as an optimization problem. In this presentation, we compare both approaches. The inverse problem is presented as a dose reconstruction problem similar to tomography reconstruction. We formulate the optimization problem as linear and quadratic programs. Explicit comparisons are made between the solutions obtained by inversion and those obtained by optimization for the case in which scatter and attenuation are ignored (the NS-NA approximation)

  20. Iterative solution to the optimal poison management problem in pressurized water reactors

    International Nuclear Information System (INIS)

    Colletti, J.P.; Levine, S.H.; Lewis, J.B.

    1983-01-01

    A new method for solving the optimal poison management problem for a multiregion pressurized water reactor has been developed. The optimization objective is to maximize the end-of-cycle core excess reactivity for any given beginning-of-cycle fuel loading. The problem is treated as an optimal control problem with the region burnup and control absorber concentrations acting as the state and control variables, respectively. Constraints are placed on the power peaking, soluble boron concentration, and control absorber concentrations. The solution method consists of successive relinearizations of the system equations resulting in a sequence of nonlinear programming problems whose solutions converge to the desired optimal control solution. Application of the method to several test problems based on a simplified three-region reactor suggests a bang-bang optimal control strategy with the peak power location switching between the inner and outer regions of the core and the critical soluble boron concentration as low as possible throughout the cycle

  1. Smooth Solutions to Optimal Investment Models with Stochastic Volatilities and Portfolio Constraints

    International Nuclear Information System (INIS)

    Pham, H.

    2002-01-01

    This paper deals with an extension of Merton's optimal investment problem to a multidimensional model with stochastic volatility and portfolio constraints. The classical dynamic programming approach leads to a characterization of the value function as a viscosity solution of the highly nonlinear associated Bellman equation. A logarithmic transformation expresses the value function in terms of the solution to a semilinear parabolic equation with quadratic growth on the derivative term. Using a stochastic control representation and some approximations, we prove the existence of a smooth solution to this semilinear equation. An optimal portfolio is shown to exist, and is expressed in terms of the classical solution to this semilinear equation. This reduction is useful for studying numerical schemes for both the value function and the optimal portfolio. We illustrate our results with several examples of stochastic volatility models popular in the financial literature

  2. New Dandelion Algorithm Optimizes Extreme Learning Machine for Biomedical Classification Problems

    Directory of Open Access Journals (Sweden)

    Xiguang Li

    2017-01-01

    Full Text Available Inspired by the sowing behavior of dandelions, a novel swarm intelligence algorithm, the dandelion algorithm (DA), is proposed for global optimization of complex functions in this paper. In DA, the dandelion population is divided into two subpopulations, and the different subpopulations undergo different sowing behaviors. Moreover, another sowing method is designed to jump out of local optima. In order to demonstrate the validity of DA, we compare the proposed algorithm with other existing algorithms, including the bat algorithm, particle swarm optimization, and the enhanced fireworks algorithm. Simulations show that the proposed algorithm seems much superior to the other algorithms. At the same time, the proposed algorithm can be applied to optimize the extreme learning machine (ELM) for biomedical classification problems, with considerable effect. Finally, we use different fusion methods to form different fusion classifiers, and the fusion classifiers achieve higher accuracy and better stability to some extent.

  3. Investigation of the existence and uniqueness of extremal and positive definite solutions of nonlinear matrix equations

    Directory of Open Access Journals (Sweden)

    Abdel-Shakoor M Sarhan

    2016-05-01

    Full Text Available We consider two nonlinear matrix equations $X^{r} \pm \sum_{i=1}^{m} A_{i}^{*} X^{\delta_{i}} A_{i} = I$, where $-1 < \delta_{i} < 0$ and r, m are positive integers. For the first equation (plus case), we prove the existence of positive definite solutions and extremal solutions. Two algorithms and proofs of their convergence to the extremal positive definite solutions are constructed. For the second equation (negative case), we prove the existence and the uniqueness of a positive definite solution. Moreover, the algorithm given in Duan et al. (Linear Algebra Appl. 429:110-121, 2008) (actually, in Shi et al. (Linear Multilinear Algebra 52:1-15, 2004)) for r = 1 is proved to be valid for any r. Numerical examples are given to illustrate the performance and effectiveness of all the constructed algorithms. In the Appendix, we analyze the ordering on the positive cone $\overline{P(n)}$.
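
    As a purely illustrative companion to the plus-case equation, the sketch below runs a naive fixed-point iteration X ← (I − Σ A_i* X^{δ_i} A_i)^{1/r}; this is not one of the algorithms constructed in the paper, and convergence is only checked numerically on small, well-scaled random matrices.

```python
import numpy as np
from scipy.linalg import fractional_matrix_power as mpow

def plus_case_fixed_point(A_list, deltas, r, iters=200, tol=1e-12):
    """Naive fixed point for X^r + sum_i A_i^* X^{delta_i} A_i = I (illustrative only)."""
    n = A_list[0].shape[0]
    X = np.eye(n)
    for _ in range(iters):
        S = sum(A.conj().T @ mpow(X, d) @ A for A, d in zip(A_list, deltas))
        X_new = mpow(np.eye(n) - S, 1.0 / r)      # assumes I - S stays positive definite
        if np.linalg.norm(X_new - X) < tol:
            break
        X = X_new
    return X

rng = np.random.default_rng(3)
A_list = [0.1 * rng.normal(size=(4, 4)) for _ in range(2)]   # small A_i keep I - S definite
deltas = [-0.5, -0.3]                                        # -1 < delta_i < 0
X = plus_case_fixed_point(A_list, deltas, r=2)
residual = mpow(X, 2) + sum(A.conj().T @ mpow(X, d) @ A
                            for A, d in zip(A_list, deltas)) - np.eye(4)
print("residual norm:", np.linalg.norm(residual))
```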

  4. Design of materials with extreme thermal expansion using a three-phase topology optimization method

    DEFF Research Database (Denmark)

    Sigmund, Ole; Torquato, S.

    1997-01-01

    Composites with extremal or unusual thermal expansion coefficients are designed using a three-phase topology optimization method. The composites are made of two different material phases and a void phase. The topology optimization method consists in finding the distribution of material phases...... materials having maximum directional thermal expansion (thermal actuators), zero isotropic thermal expansion, and negative isotropic thermal expansion. It is shown that materials with effective negative thermal expansion coefficients can be obtained by mixing two phases with positive thermal expansion...

  5. Solution for state constrained optimal control problems applied to power split control for hybrid vehicles

    NARCIS (Netherlands)

    Keulen, van T.A.C.; Gillot, J.; Jager, de A.G.; Steinbuch, M.

    2014-01-01

    This paper presents a numerical solution for scalar state constrained optimal control problems. The algorithm rewrites the constrained optimal control problem as a sequence of unconstrained optimal control problems which can be solved recursively as a two point boundary value problem. The solution

  6. Energy efficiency: EDF Optimal Solutions improves the l'Oreal plant at Vichy

    Energy Technology Data Exchange (ETDEWEB)

    Anon.

    2011-09-15

    The l'Oreal CAP site has inaugurated its energy eco-efficiency installations realized by EDF Optimal Solutions. This solution combines several techniques and makes it possible to halve the site's yearly CO2 releases. (O.M.)

  7. Regulation of Dynamical Systems to Optimal Solutions of Semidefinite Programs: Algorithms and Applications to AC Optimal Power Flow

    Energy Technology Data Exchange (ETDEWEB)

    Dall' Anese, Emiliano; Dhople, Sairaj V.; Giannakis, Georgios B.

    2015-07-01

    This paper considers a collection of networked nonlinear dynamical systems, and addresses the synthesis of feedback controllers that seek optimal operating points corresponding to the solution of pertinent network-wide optimization problems. Particular emphasis is placed on the solution of semidefinite programs (SDPs). The design of the feedback controller is grounded on a dual ε-subgradient approach, with the dual iterates utilized to dynamically update the dynamical-system reference signals. Global convergence is guaranteed for diminishing stepsize rules, even when the reference inputs are updated at a faster rate than the dynamical-system settling time. The application of the proposed framework to the control of power-electronic inverters in AC distribution systems is discussed. The objective is to bridge the time-scale separation between real-time inverter control and network-wide optimization. Optimization objectives assume the form of SDP relaxations of prototypical AC optimal power flow problems.

  8. Multiresolution strategies for the numerical solution of optimal control problems

    Science.gov (United States)

    Jain, Sachin

    There exist many numerical techniques for solving optimal control problems but less work has been done in the field of making these algorithms run faster and more robustly. The main motivation of this work is to solve optimal control problems accurately in a fast and efficient way. Optimal control problems are often characterized by discontinuities or switchings in the control variables. One way of accurately capturing the irregularities in the solution is to use a high resolution (dense) uniform grid. This requires a large amount of computational resources both in terms of CPU time and memory. Hence, in order to accurately capture any irregularities in the solution using a few computational resources, one can refine the mesh locally in the region close to an irregularity instead of refining the mesh uniformly over the whole domain. Therefore, a novel multiresolution scheme for data compression has been designed which is shown to outperform similar data compression schemes. Specifically, we have shown that the proposed approach results in fewer grid points in the grid compared to a common multiresolution data compression scheme. The validity of the proposed mesh refinement algorithm has been verified by solving several challenging initial-boundary value problems for evolution equations in 1D. The examples have demonstrated the stability and robustness of the proposed algorithm. The algorithm adapted dynamically to any existing or emerging irregularities in the solution by automatically allocating more grid points to the region where the solution exhibited sharp features and fewer points to the region where the solution was smooth. Thereby, the computational time and memory usage has been reduced significantly, while maintaining an accuracy equivalent to the one obtained using a fine uniform mesh. Next, a direct multiresolution-based approach for solving trajectory optimization problems is developed. The original optimal control problem is transcribed into a

  9. Multiobjective generalized extremal optimization algorithm for simulation of daylight illuminants

    Science.gov (United States)

    Kumar, Srividya Ravindra; Kurian, Ciji Pearl; Gomes-Borges, Marcos Eduardo

    2017-10-01

    Daylight illuminants are widely used as references for color quality testing and optical vision testing applications. Presently used daylight simulators make use of fluorescent bulbs that are not tunable and occupy more space inside the quality testing chambers. By designing a spectrally tunable LED light source with an optimal number of LEDs, cost, space, and energy can be saved. This paper describes an application of the generalized extremal optimization (GEO) algorithm for selection of the appropriate quantity and quality of LEDs that compose the light source. The multiobjective approach of this algorithm seeks the best spectral simulation with minimum fitness error toward the target spectrum, a correlated color temperature (CCT) matching the target spectrum, a high color rendering index (CRI), and the luminous flux required for testing applications. GEO is a global search algorithm based on phenomena of natural evolution and is especially designed to be used in complex optimization problems. Several simulations have been conducted to validate the performance of the algorithm. The methodology applied to model the LEDs, together with the theoretical basis for CCT and CRI calculation, is presented in this paper. A comparative analysis of the M-GEO evolutionary algorithm against the conventional deterministic Levenberg-Marquardt algorithm is also presented.

  10. Sequences of extremal radially excited rotating black holes.

    Science.gov (United States)

    Blázquez-Salcedo, Jose Luis; Kunz, Jutta; Navarro-Lérida, Francisco; Radu, Eugen

    2014-01-10

    In the Einstein-Maxwell-Chern-Simons theory the extremal Reissner-Nordström solution is no longer the single extremal solution with vanishing angular momentum, when the Chern-Simons coupling constant reaches a critical value. Instead a whole sequence of rotating extremal J=0 solutions arises, labeled by the node number of the magnetic U(1) potential. Associated with the same near horizon solution, the mass of these radially excited extremal solutions converges to the mass of the extremal Reissner-Nordström solution. On the other hand, not all near horizon solutions are also realized as global solutions.

  11. The Fundamental Solution and Its Role in the Optimal Control of Infinite Dimensional Neutral Systems

    International Nuclear Information System (INIS)

    Liu Kai

    2009-01-01

    In this work, we shall consider standard optimal control problems for a class of neutral functional differential equations in Banach spaces. As the basis of a systematic theory of neutral models, the fundamental solution is constructed and a variation of constants formula of mild solutions is established. We introduce a class of neutral resolvents and show that the Laplace transform of the fundamental solution is its neutral resolvent operator. Necessary conditions in terms of the solutions of neutral adjoint systems are established to deal with the fixed time integral convex cost problem of optimality. Based on optimality conditions, the maximum principle for time varying control domain is presented. Finally, the time optimal control problem to a target set is investigated

  12. Analytic solution to variance optimization with no short positions

    Science.gov (United States)

    Kondor, Imre; Papp, Gábor; Caccioli, Fabio

    2017-12-01

    We consider the variance portfolio optimization problem with a ban on short selling. We provide an analytical solution by means of the replica method for the case of a portfolio of independent, but not identically distributed, assets. We study the behavior of the solution as a function of the ratio r between the number N of assets and the length T of the time series of returns used to estimate risk. The no-short-selling constraint acts as an asymmetric …

  13. Particle Swarm Optimization with Various Inertia Weight Variants for Optimal Power Flow Solution

    Directory of Open Access Journals (Sweden)

    Prabha Umapathy

    2010-01-01

    Full Text Available This paper proposes an efficient method to solve the optimal power flow problem in power systems using Particle Swarm Optimization (PSO). The objective of the proposed method is to find the steady-state operating point which minimizes the fuel cost, while maintaining an acceptable system performance in terms of limits on generator power, line flow, and voltage. Three different inertia weights, a constant inertia weight (CIW), a time-varying inertia weight (TVIW), and a global-local best inertia weight (GLbestIW), are considered with the particle swarm optimization algorithm to analyze the impact of the inertia weight on the performance of the PSO algorithm. The PSO algorithm is simulated for each of the methods individually. It is observed that the PSO algorithm with the proposed inertia weight yields better results, both in terms of the optimal solution and faster convergence. The proposed method has been tested on the standard IEEE 30 bus test system to prove its efficacy. The algorithm is computationally faster, in terms of the number of load flows executed, and provides better results than other heuristic techniques.
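
    The inertia-weight comparison described above can be illustrated with a bare-bones PSO in which the weight is supplied as a schedule; the OPF objective, network constraints and the GLbestIW variant are not reproduced here, so a toy sphere function stands in for the fuel-cost objective.

```python
import numpy as np

def pso(f, dim, n_particles=30, iters=200, w_schedule=lambda t, T: 0.7, seed=0):
    """Basic PSO; the inertia weight w is supplied as a schedule w(t, T)."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5, 5, size=(n_particles, dim))
    v = np.zeros_like(x)
    pbest, pbest_f = x.copy(), np.apply_along_axis(f, 1, x)
    gbest = pbest[pbest_f.argmin()].copy()
    c1 = c2 = 2.0
    for t in range(iters):
        w = w_schedule(t, iters)
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)   # velocity update
        x = x + v
        fx = np.apply_along_axis(f, 1, x)
        improved = fx < pbest_f
        pbest[improved], pbest_f[improved] = x[improved], fx[improved]
        gbest = pbest[pbest_f.argmin()].copy()
    return gbest, pbest_f.min()

sphere = lambda z: float(np.sum(z**2))
print(pso(sphere, dim=5))                                            # constant inertia weight (CIW)
print(pso(sphere, dim=5, w_schedule=lambda t, T: 0.9 - 0.5 * t / T)) # time-varying inertia weight (TVIW)
```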

  14. Optimal Solutions of Multiproduct Batch Chemical Process Using Multiobjective Genetic Algorithm with Expert Decision System

    Science.gov (United States)

    Mokeddem, Diab; Khellaf, Abdelhafid

    2009-01-01

    Optimal design problems are widely known for their multiple, often competing performance measures. In this paper, an optimal multiproduct batch chemical plant design is presented. The design is first formulated as a multiobjective optimization problem, to be solved using the well-suited non-dominated sorting genetic algorithm (NSGA-II). NSGA-II has the capability to achieve fine tuning of variables in determining a set of non-dominated solutions distributed along the Pareto front in a single run of the algorithm. The ability of NSGA-II to identify a set of optimal solutions provides the decision maker (DM) with a complete picture of the optimal solution space, enabling better and more appropriate choices. An outranking with PROMETHEE II then helps the decision maker to finalize the selection of a best compromise. The effectiveness of the NSGA-II method on a multiobjective optimization problem is illustrated through two carefully referenced examples. PMID:19543537
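
    NSGA-II itself involves non-dominated sorting, crowding distance and genetic operators; shown below is only the Pareto-dominance filter that defines the non-dominated set which a decision maker would then rank (the PROMETHEE II outranking step is not implemented). The objective values are random stand-ins.

```python
import numpy as np

def non_dominated(F):
    """Return indices of non-dominated rows of F (all objectives minimized)."""
    n = F.shape[0]
    keep = np.ones(n, dtype=bool)
    for i in range(n):
        if not keep[i]:
            continue
        # Row j dominates row i if it is no worse everywhere and strictly better somewhere.
        dominates_i = np.all(F <= F[i], axis=1) & np.any(F < F[i], axis=1)
        if dominates_i.any():
            keep[i] = False
    return np.where(keep)[0]

rng = np.random.default_rng(4)
F = rng.random((50, 2))     # two conflicting objectives, e.g. cost vs. environmental impact
print("Pareto-optimal designs:", non_dominated(F))
```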

  15. Optimization of the recycling process of precipitation barren solution in a uranium mine

    International Nuclear Information System (INIS)

    Long Qing; Yu Suqin; Zhao Wucheng; Han Wei; Zhang Hui; Chen Shuangxi

    2014-01-01

    An alkaline leaching process was adopted to recover uranium from ores in a uranium mine, and a high-concentration uranium solution, later used in precipitation, was obtained after ion-exchange and elution steps. The eluting agent consisted of NaCl and NaHCO3. Though the precipitation barren solution contained as much as 80 g/L Na2CO3, it still could not be recycled due to the presence of a high Cl- concentration. Therefore, both the elution and precipitation processes were optimized in order to bring the Cl- concentration in the precipitation barren solution into the recyclable range. Because the precipitation barren solution can be recycled after this optimization, agent consumption was lowered and the discharge of waste water was reduced. (authors)

  16. Optimization of process and solution parameters in electrospinning polyethylene oxide

    CSIR Research Space (South Africa)

    Jacobs, V

    2011-11-01

    Full Text Available This paper reports the optimization of electrospinning process and solution parameters using a factorial design approach to obtain uniform polyethylene oxide (PEO) nanofibers. The parameters studied were the distance between the nozzle and the collector screen...

  17. An accurate approximate solution of optimal sequential age replacement policy for a finite-time horizon

    International Nuclear Information System (INIS)

    Jiang, R.

    2009-01-01

    It is difficult to find the optimal solution of the sequential age replacement policy for a finite-time horizon. This paper presents an accurate approximation for finding an approximate optimal solution of the sequential replacement policy. The proposed approximation is computationally simple and suitable for any failure distribution. Its accuracy is illustrated by two examples. Based on the approximate solution, an approximate estimate for the total cost is derived.

  18. Optimal Design Solutions for Permanent Magnet Synchronous Machines

    Directory of Open Access Journals (Sweden)

    POPESCU, M.

    2011-11-01

    Full Text Available This paper presents optimal design solutions for reducing the cogging torque of permanent magnet synchronous machines. A first solution proposed in the paper consists in using closed stator slots, which creates a nearly isotropic magnetic structure of the stator core, reducing the mutual attraction between the permanent magnets and the slotted armature. To avoid complicating the winding manufacturing technology, the stator slots are closed using wedges made of soft magnetic composite materials. The second solution consists in properly choosing the combination of pole number and stator slot number, which typically leads to a winding with a fractional number of slots per pole per phase. The proposed measures for cogging torque reduction are analyzed by means of 2D/3D finite element models developed using the professional Flux software package. Numerical results are discussed and compared with experimental ones obtained by testing a PMSM prototype.

  19. Neural architecture design based on extreme learning machine.

    Science.gov (United States)

    Bueno-Crespo, Andrés; García-Laencina, Pedro J; Sancho-Gómez, José-Luis

    2013-12-01

    Selection of the optimal neural architecture to solve a pattern classification problem entails choosing the relevant input units, the number of hidden neurons, and the corresponding interconnection weights. This problem has been widely studied in many research works, but existing solutions usually involve excessive computational cost and do not provide a unique solution. This paper proposes a new technique to efficiently design the MultiLayer Perceptron (MLP) architecture for classification using the Extreme Learning Machine (ELM) algorithm. The proposed method provides high generalization capability and a unique solution for the architecture design. Moreover, the selected final network only retains those input connections that are relevant for the classification task. Experimental results show these advantages. Copyright © 2013 Elsevier Ltd. All rights reserved.

  20. A New Methodology to Select the Preferred Solutions from the Pareto-optimal Set: Application to Polymer Extrusion

    International Nuclear Information System (INIS)

    Ferreira, Jose C.; Gaspar-Cunha, Antonio; Fonseca, Carlos M.

    2007-01-01

    Most real-world optimization problems involve multiple, usually conflicting, optimization criteria. Generating Pareto optimal solutions plays an important role in multi-objective optimization, and the problem is considered to be solved when the Pareto optimal set is found, i.e., the set of non-dominated solutions. Multi-Objective Evolutionary Algorithms based on the principle of Pareto optimality are designed to produce the complete set of non-dominated solutions. However, this is not always enough, since the aim is not only to know the Pareto set but also to obtain one solution from this Pareto set. Thus, it is necessary to define a methodology able to select a single solution from the set of non-dominated solutions (or a region of the Pareto frontier), taking into account the preferences of a Decision Maker (DM). A different method, based on a weighted stress function, is proposed. It is able to integrate the user's preferences in order to find the region of the Pareto frontier that best accords with these preferences. This method was tested on some benchmark test problems, with two and three criteria, and on a polymer extrusion problem. This methodology is able to efficiently select the best Pareto-frontier region for the specified relative importance of the criteria.

  1. Optimal Solution for VLSI Physical Design Automation Using Hybrid Genetic Algorithm

    Directory of Open Access Journals (Sweden)

    I. Hameem Shanavas

    2014-01-01

    Full Text Available In VLSI physical design optimization, area minimization and interconnect length minimization are important objectives in the physical design automation of very large scale integration chips. Minimizing the area and interconnect length scales down the size of integrated chips. To meet these objectives, it is necessary to find an optimal solution for physical design components such as partitioning, floorplanning, placement, and routing. This work performs the optimization of benchmark circuits across the above components of physical design using a hierarchical approach based on evolutionary algorithms. The goals of minimizing the delay in partitioning, the silicon area in floorplanning, the layout area in placement, and the wirelength in routing also influence other criteria such as power, clock, speed, and cost. A hybrid evolutionary algorithm, which includes one or more local search steps within its evolutionary cycles to minimize area and interconnect length, is applied in each of these phases. This approach combines a hierarchical design of a genetic algorithm and simulated annealing to attain the objective. The hybrid approach can quickly produce optimal solutions for the popular benchmarks.

  2. Regulation of Renewable Energy Sources to Optimal Power Flow Solutions Using ADMM: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Zhang, Yijian; Hong, Mingyi; Dall' Anese, Emiliano; Dhople, Sairaj; Xu, Zi

    2017-03-03

    This paper considers power distribution systems featuring renewable energy sources (RESs), and develops a distributed optimization method to steer the RES output powers to solutions of AC optimal power flow (OPF) problems. The design of the proposed method leverages suitable linear approximations of the AC-power flow equations, and is based on the Alternating Direction Method of Multipliers (ADMM). Convergence of the RES-inverter output powers to solutions of the OPF problem is established under suitable conditions on the stepsize as well as mismatches between the commanded setpoints and actual RES output powers. In a broad sense, the methods and results proposed here are also applicable to other distributed optimization problem setups with ADMM and inexact dual updates.
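
    The ADMM structure referred to above can be illustrated with a textbook consensus least-squares example (local solves, a global averaging step, and dual "price" updates); this is not the paper's linearized OPF formulation, and all problem data below are synthetic.

```python
import numpy as np

# Consensus ADMM sketch: N agents agree on x minimizing sum_i ||A_i x - b_i||^2.
rng = np.random.default_rng(5)
N, n = 4, 3
A = [rng.normal(size=(10, n)) for _ in range(N)]
b = [rng.normal(size=10) for _ in range(N)]

rho = 1.0
x = [np.zeros(n) for _ in range(N)]   # local primal variables
u = [np.zeros(n) for _ in range(N)]   # scaled dual variables ("prices")
z = np.zeros(n)                       # global consensus variable

for _ in range(100):
    # Local (per-agent) updates: ridge-regularized least squares.
    for i in range(N):
        x[i] = np.linalg.solve(A[i].T @ A[i] + rho * np.eye(n),
                               A[i].T @ b[i] + rho * (z - u[i]))
    z = np.mean([x[i] + u[i] for i in range(N)], axis=0)   # global averaging step
    for i in range(N):
        u[i] = u[i] + x[i] - z                             # dual update

# Compare with the centralized solution.
A_all, b_all = np.vstack(A), np.concatenate(b)
x_star = np.linalg.lstsq(A_all, b_all, rcond=None)[0]
print(np.round(z, 4), np.round(x_star, 4))
```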

  3. Genetic search for an optimal power flow solution from a high density cluster

    Energy Technology Data Exchange (ETDEWEB)

    Amarnath, R.V. [Hi-Tech College of Engineering and Technology, Hyderabad (India); Ramana, N.V. [JNTU College of Engineering, Jagityala (India)

    2008-07-01

    This paper proposed a novel method to solve optimal power flow (OPF) problems. The method is based on a genetic algorithm (GA) search from a High Density Cluster (GAHDC). The algorithm of the proposed method includes 3 stages, notably (1) a suboptimal solution is obtained via a conventional analytical method, (2) a high density cluster, which consists of other suboptimal data points from the first stage, is formed using a density-based cluster algorithm, and (3) a genetic algorithm based search is carried out for the exact optimal solution from a low population sized, high density cluster. The final optimal solution thoroughly satisfies the well defined fitness function. A standard IEEE 30-bus test system was considered for the simulation study. Numerical results were presented and compared with the results of other approaches. It was concluded that although there is not much difference in numerical values, the proposed method has the advantage of minimal computational effort and reduced CPU time. As such, the method would be suitable for online applications such as the present Optimal Power Flow problem. 24 refs., 2 tabs., 4 figs.

  4. Optimization of multi-channel neutron focusing guides for extreme sample environments

    International Nuclear Information System (INIS)

    Di Julio, D D; Lelièvre-Berna, E; Andersen, K H; Bentley, P M; Courtois, P

    2014-01-01

    In this work, we present and discuss simulation results for the design of multichannel neutron focusing guides for extreme sample environments. A single focusing guide consists of any number of supermirror-coated curved outer channels surrounding a central channel. Furthermore, a guide is separated into two sections in order to allow for extension into a sample environment. The performance of a guide is evaluated through a Monte-Carlo ray tracing simulation which is further coupled to an optimization algorithm in order to find the best possible guide for a given situation. A number of population-based algorithms have been investigated for this purpose. These include particle-swarm optimization, artificial bee colony, and differential evolution. The performance of each algorithm and preliminary results of the design of a multi-channel neutron focusing guide using these methods are described. We found that a three-channel focusing guide offered the best performance, with a gain factor of 2.4 compared to no focusing guide, for the design scenario investigated in this work.

  5. Intelligent fault diagnosis of photovoltaic arrays based on optimized kernel extreme learning machine and I-V characteristics

    International Nuclear Information System (INIS)

    Chen, Zhicong; Wu, Lijun; Cheng, Shuying; Lin, Peijie; Wu, Yue; Lin, Wencheng

    2017-01-01

    Highlights: •An improved Simulink based modeling method is proposed for PV modules and arrays. •Key points of I-V curves and PV model parameters are used as the feature variables. •Kernel extreme learning machine (KELM) is explored for PV arrays fault diagnosis. •The parameters of KELM algorithm are optimized by the Nelder-Mead simplex method. •The optimized KELM fault diagnosis model achieves high accuracy and reliability. -- Abstract: Fault diagnosis of photovoltaic (PV) arrays is important for improving the reliability, efficiency and safety of PV power stations, because the PV arrays usually operate in harsh outdoor environment and tend to suffer various faults. Due to the nonlinear output characteristics and varying operating environment of PV arrays, many machine learning based fault diagnosis methods have been proposed. However, there still exist some issues: fault diagnosis performance is still limited due to insufficient monitored information; fault diagnosis models are not efficient to be trained and updated; labeled fault data samples are hard to obtain by field experiments. To address these issues, this paper makes contribution in the following three aspects: (1) based on the key points and model parameters extracted from monitored I-V characteristic curves and environment condition, an effective and efficient feature vector of seven dimensions is proposed as the input of the fault diagnosis model; (2) the emerging kernel based extreme learning machine (KELM), which features extremely fast learning speed and good generalization performance, is utilized to automatically establish the fault diagnosis model. Moreover, the Nelder-Mead Simplex (NMS) optimization method is employed to optimize the KELM parameters which affect the classification performance; (3) an improved accurate Simulink based PV modeling approach is proposed for a laboratory PV array to facilitate the fault simulation and data sample acquisition. Intensive fault experiments are

  6. Bolting multicenter solutions

    Energy Technology Data Exchange (ETDEWEB)

    Bena, Iosif [Institut de Physique Théorique, Université Paris Saclay, CEA, CNRS, 91191 Gif-sur-Yvette Cedex (France); Bossard, Guillaume [Centre de Physique Théorique, Ecole Polytechnique, CNRS, Université Paris-Saclay, 91128 Palaiseau Cedex (France); Katmadas, Stefanos; Turton, David [Institut de Physique Théorique, Université Paris Saclay, CEA, CNRS, 91191 Gif-sur-Yvette Cedex (France)

    2017-01-30

    We introduce a solvable system of equations that describes non-extremal multicenter solutions to six-dimensional ungauged supergravity coupled to tensor multiplets. The system involves a set of functions on a three-dimensional base metric. We obtain a family of non-extremal axisymmetric solutions that generalize the known multicenter extremal solutions, using a particular base metric that introduces a bolt. We analyze the conditions for regularity, and in doing so we show that this family does not include solutions that contain an extremal black hole and a smooth bolt. We determine the constraints that are necessary to obtain smooth horizonless solutions involving a bolt and an arbitrary number of Gibbons-Hawking centers.

  7. An effective secondary decomposition approach for wind power forecasting using extreme learning machine trained by crisscross optimization

    International Nuclear Information System (INIS)

    Yin, Hao; Dong, Zhen; Chen, Yunlong; Ge, Jiafei; Lai, Loi Lei; Vaccaro, Alfredo; Meng, Anbo

    2017-01-01

    Highlights: • A secondary decomposition approach is applied in the data pre-processing. • The empirical mode decomposition is used to decompose the original time series. • IMF1 continues to be decomposed by applying wavelet packet decomposition. • Crisscross optimization algorithm is applied to train extreme learning machine. • The proposed SHD-CSO-ELM outperforms other pervious methods in the literature. - Abstract: Large-scale integration of wind energy into electric grid is restricted by its inherent intermittence and volatility. So the increased utilization of wind power necessitates its accurate prediction. The contribution of this study is to develop a new hybrid forecasting model for the short-term wind power prediction by using a secondary hybrid decomposition approach. In the data pre-processing phase, the empirical mode decomposition is used to decompose the original time series into several intrinsic mode functions (IMFs). A unique feature is that the generated IMF1 continues to be decomposed into appropriate and detailed components by applying wavelet packet decomposition. In the training phase, all the transformed sub-series are forecasted with extreme learning machine trained by our recently developed crisscross optimization algorithm (CSO). The final predicted values are obtained from aggregation. The results show that: (a) The performance of empirical mode decomposition can be significantly improved with its IMF1 decomposed by wavelet packet decomposition. (b) The CSO algorithm has satisfactory performance in addressing the premature convergence problem when applied to optimize extreme learning machine. (c) The proposed approach has great advantage over other previous hybrid models in terms of prediction accuracy.

  8. Optimization and Modeling of Extreme Freshwater Discharge from Japanese First-Class River Basins to Coastal Oceans

    Science.gov (United States)

    Kuroki, R.; Yamashiki, Y. A.; Varlamov, S.; Miyazawa, Y.; Gupta, H. V.; Racault, M.; Troselj, J.

    2017-12-01

    We estimated the effects of extreme fluvial outflow events from river mouths on the salinity distribution in Japanese coastal zones. The targeted extreme event was a typhoon from 06/09/2015 to 12/09/2015, and we generated a set of hourly simulated river outflow data for all Japanese first-class river basins discharging to the Pacific Ocean and the Sea of Japan during this period by using our model "Cell Distributed Runoff Model Version 3.1.1 (CDRMV3.1.1)". The model simulated freshwater discharges for the case of the typhoon passage over Japan. We used these data with the coupled hydrological-oceanographic model JCOPE-T, developed by the Japan Agency for Marine-Earth Science and Technology (JAMSTEC), to estimate the circulation and salinity distribution in Japanese coastal zones. The model reproduced the coastal oceanic circulation adequately, which was verified by satellite remote sensing. In addition, we successfully optimized five parameters (soil roughness coefficient, river roughness coefficient, effective porosity, saturated hydraulic conductivity, and effective rainfall) by using the Shuffled Complex Evolution method developed at the University of Arizona (SCE-UA), an optimization method for hydrological models. Increasing the accuracy of peak discharge prediction at river mouths during extreme typhoon events is essential for modeling continental-oceanic mutual interaction.

  9. An Analytical Solution for Yaw Maneuver Optimization on the International Space Station and Other Orbiting Space Vehicles

    Science.gov (United States)

    Dobrinskaya, Tatiana

    2015-01-01

    This paper suggests a new method for optimizing yaw maneuvers on the International Space Station (ISS). Yaw rotations are the most common large maneuvers on the ISS, often used for docking and undocking operations, as well as for other activities. When maneuver optimization is used, large maneuvers, which were previously performed on thrusters, can be performed either using control moment gyroscopes (CMG) or with significantly reduced thruster firings. Maneuver optimization helps to save expensive propellant and reduce structural loads - an important factor for the ISS service life. In addition, optimized maneuvers reduce contamination of the critical elements of the vehicle structure, such as solar arrays. This paper presents an analytical solution for optimizing yaw attitude maneuvers. Equations describing the pitch and roll motion needed to counteract the major torques during a yaw maneuver are obtained, and a yaw rate profile is proposed. The paper also describes the physical basis of the suggested optimization approach. In the obtained optimized case, the torques are significantly reduced. This torque reduction was compared to the existing optimization method, which utilizes a computational solution. It was shown that the attitude profiles and the torque reduction match well for these two methods of optimization. Simulations using the ISS flight software showed similar propellant consumption for both methods. The analytical solution proposed in this paper has major benefits with respect to the computational approach. In contrast to the current computational solution, which can only be calculated on the ground, the analytical solution does not require extensive computational resources and can be implemented in the onboard software, thus making the maneuver execution automatic. The automatic maneuver significantly simplifies operations and, if necessary, allows a maneuver to be performed without communication with the ground. It also reduces the probability of command

  10. Design of a Fractional Order Frequency PID Controller for an Islanded Microgrid: A Multi-Objective Extremal Optimization Method

    Directory of Open Access Journals (Sweden)

    Huan Wang

    2017-10-01

    Full Text Available Fractional order proportional-integral-derivative (FOPID) controllers have attracted increasing attention recently due to their better control performance than traditional integer-order proportional-integral-derivative (PID) controllers. However, there are only a few studies concerning the fractional order control of microgrids based on evolutionary algorithms. From the perspective of multi-objective optimization, this paper presents an effective FOPID-based frequency controller design method called MOEO-FOPID for an islanded microgrid, using a multi-objective extremal optimization (MOEO) algorithm to minimize frequency deviation and controller output signal simultaneously, in order to improve the efficient operation of distributed generations and energy storage devices. Its superiority to nondominated sorting genetic algorithm-II (NSGA-II)-based FOPID/PID controllers and other recently reported single-objective evolutionary algorithms, such as Kriging-based surrogate modeling and real-coded population extremal optimization-based FOPID controllers, is demonstrated by simulation studies on a typical islanded microgrid in terms of control performance, including frequency deviation, deficit grid power, controller output signal and robustness.
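
    For readers unfamiliar with the controller being tuned, the standard five-parameter FOPID law (a textbook form, not quoted from this abstract) is

        C(s) = K_p + K_i / s^λ + K_d s^μ,   with λ, μ > 0,

    where λ = μ = 1 recovers the conventional integer-order PID; the MOEO search therefore explores the five-dimensional space (K_p, K_i, K_d, λ, μ) against the two objectives named above (frequency deviation and controller output signal).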

  11. Charged de Sitter-like black holes: quintessence-dependent enthalpy and new extreme solutions

    Energy Technology Data Exchange (ETDEWEB)

    Azreg-Ainou, Mustapha [Baskent University, Faculty of Engineering, Ankara (Turkey)

    2015-01-01

    We consider Reissner-Nordstroem black holes surrounded by quintessence, where both a non-extremal event horizon and a cosmological horizon exist besides an inner horizon (-1 ≤ ω < -1/3). We determine new extreme black hole solutions that generalize the Nariai horizon to asymptotically de Sitter-like solutions for any order relation between the squares of the charge q² and the mass parameter M², provided q² remains smaller than some limit, which is larger than M². In the limit case q² = 9ω²M²/(9ω² - 1), we derive the general expression of the extreme cosmo-blackhole, where the three horizons merge, and we discuss some of its properties. We also show that the endpoint of the evaporation process is independent of any order relation between q² and M². The Teitelboim energy and the Padmanabhan energy are related by a nonlinear expression and are shown to correspond to different ensembles. We also determine the enthalpy H of the event horizon, as well as the effective thermodynamic volume which is the conjugate variable of the negative quintessential pressure, and show that in general the mass parameter and the Teitelboim energy are different from the enthalpy and internal energy; only in the cosmological case, that is, for the Reissner-Nordstroem-de Sitter black hole, we have H = M. Generalized Smarr formulas are also derived. It is concluded that the internal energy has a universal expression for all static charged black holes, with possibly a variable mass parameter, but it is not a suitable thermodynamic potential for static-black-hole thermodynamics if M is constant. It is also shown that the reverse isoperimetric inequality holds. We generalize the results to the case of the Reissner-Nordstroem-de Sitter black hole surrounded by quintessence with two physical constants yielding two thermodynamic volumes. (orig.)

  12. A modified estimation distribution algorithm based on extreme elitism.

    Science.gov (United States)

    Gao, Shujun; de Silva, Clarence W

    2016-12-01

    An existing estimation of distribution algorithm (EDA) with a univariate marginal Gaussian model was improved by designing and incorporating an extreme elitism selection method. This selection method highlights the effect of a few of the best solutions in the evolution, helping the EDA to form a primary evolution direction and obtain a fast convergence rate. At the same time, this selection also preserves population diversity, so that the EDA avoids premature convergence. The modified EDA was then tested on benchmark low-dimensional and high-dimensional optimization problems to illustrate the gains from this extreme elitism selection. In addition, the no-free-lunch theorem was taken into account in the analysis of the effect of this new selection on EDAs. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
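
    A minimal sketch of a univariate-Gaussian EDA in which a handful of top solutions are weighted more heavily when refitting the model (an "extreme elitism" flavour) is given below; the exact selection and weighting scheme of the paper is not reproduced, and all constants are illustrative.

```python
import numpy as np

def eda_gaussian(f, dim, pop=100, elites=30, top=5, iters=100, seed=0):
    """Univariate-Gaussian EDA; the top few solutions get extra weight when fitting the model."""
    rng = np.random.default_rng(seed)
    mu, sigma = np.zeros(dim), np.full(dim, 2.0)
    best, best_f = None, np.inf
    for _ in range(iters):
        X = rng.normal(mu, sigma, size=(pop, dim))        # sample new population from the model
        fx = np.apply_along_axis(f, 1, X)
        order = np.argsort(fx)
        if fx[order[0]] < best_f:
            best, best_f = X[order[0]].copy(), fx[order[0]]
        sel = X[order[:elites]]                            # truncation selection
        w = np.ones(elites)
        w[:top] = 5.0                                      # extreme-elitism flavour: bias toward the very best
        w /= w.sum()
        mu = w @ sel                                       # weighted mean per dimension
        sigma = np.sqrt(w @ (sel - mu) ** 2) + 1e-12       # weighted std per dimension
    return best, best_f

print(eda_gaussian(lambda z: float(np.sum(z ** 2)), dim=10))
```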

  13. An Indirect Simulation-Optimization Model for Determining Optimal TMDL Allocation under Uncertainty

    Directory of Open Access Journals (Sweden)

    Feng Zhou

    2015-11-01

    Full Text Available An indirect simulation-optimization model framework with enhanced computational efficiency and risk-based decision-making capability was developed to determine optimal total maximum daily load (TMDL) allocation under uncertainty. To convert the traditional direct simulation-optimization model into our indirect equivalent model framework, we proposed a two-step strategy: (1) application of interval regression equations derived by a Bayesian recursive regression tree (BRRT v2) algorithm, which approximates the original hydrodynamic and water-quality simulation models and accurately quantifies the inherent nonlinear relationship between nutrient load reductions and the credible interval of algal biomass with a given confidence interval; and (2) incorporation of the calibrated interval regression equations into an uncertain optimization framework, which is further converted to our indirect equivalent framework by the enhanced-interval linear programming (EILP) method and provides approximate-optimal solutions at various risk levels. The proposed strategy was applied to the Swift Creek Reservoir's nutrient TMDL allocation (Chesterfield County, VA) to identify the minimum nutrient load allocations required from eight sub-watersheds to ensure compliance with user-specified chlorophyll criteria. Our results indicated that the BRRT-EILP model could identify critical sub-watersheds faster than the traditional model and requires a lower reduction of nutrient loadings compared to traditional stochastic simulation and trial-and-error (TAE) approaches. This suggests that our proposed framework performs better in optimal TMDL development compared to traditional simulation-optimization models and provides extreme and non-extreme tradeoff analysis under uncertainty for risk-based decision making.

  14. Aero Engine Component Fault Diagnosis Using Multi-Hidden-Layer Extreme Learning Machine with Optimized Structure

    Directory of Open Access Journals (Sweden)

    Shan Pang

    2016-01-01

    Full Text Available A new aero gas turbine engine gas path component fault diagnosis method based on a multi-hidden-layer extreme learning machine with optimized structure (OM-ELM) was proposed. OM-ELM employs quantum-behaved particle swarm optimization to automatically obtain the optimal network structure according to both the root mean square error on the training data set and the norm of the output weights. The proposed method is applied to a handwritten recognition data set and a gas turbine engine diagnostic application, and is compared with the basic ELM, the multi-hidden-layer ELM, and two state-of-the-art deep learning algorithms: the deep belief network and the stacked denoising autoencoder. Results show that, with an optimized network structure, OM-ELM obtains better test accuracy in both applications and is more robust to sensor noise. Meanwhile, it controls the model complexity and needs far fewer hidden nodes than the multi-hidden-layer ELM, thus saving computer memory and making it more efficient to implement. All these advantages make our method an effective and reliable tool for engine component fault diagnosis.
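
    As background for the comparison above, a basic single-hidden-layer extreme learning machine assigns the hidden-layer weights at random and solves only for the output weights with a least-squares fit. The sketch below is this generic baseline on synthetic regression data (the data and sizes are assumptions), not the optimized multi-hidden-layer OM-ELM of the record.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Synthetic regression data (assumed for illustration)
    X = rng.uniform(-1, 1, size=(200, 3))
    y = np.sin(X[:, 0]) + 0.5 * X[:, 1] ** 2 - X[:, 2]

    n_hidden = 50
    # Random input weights and biases are fixed, never trained
    W = rng.normal(size=(X.shape[1], n_hidden))
    b = rng.normal(size=n_hidden)
    H = np.tanh(X @ W + b)                      # hidden-layer output matrix

    # Output weights via least squares (Moore-Penrose style solve)
    beta, *_ = np.linalg.lstsq(H, y, rcond=None)

    y_hat = np.tanh(X @ W + b) @ beta
    print("training RMSE:", np.sqrt(np.mean((y - y_hat) ** 2)))
    ```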

  15. Optimal solution of full fuzzy transportation problems using total integral ranking

    Science.gov (United States)

    Sam’an, M.; Farikhin; Hariyanto, S.; Surarso, B.

    2018-03-01

    Full fuzzy transportation problem (FFTP) is a transportation problem in which transport costs, demand, supply and decision variables are expressed in the form of fuzzy numbers. To solve a fuzzy transportation problem, the fuzzy number parameters must be converted to crisp numbers, a process called defuzzification. In this paper, a new total integral ranking method is used in which trapezoidal fuzzy numbers are converted to hexagonal fuzzy numbers; the defuzzification results are consistent for symmetrical hexagonal fuzzy numbers and for non-symmetrical type-2 fuzzy numbers with triangular fuzzy numbers. To calculate the optimum solution of the FFTP, a fuzzy transportation algorithm with the least-cost method is used. From this optimum solution, it is found that using the total integral ranking of fuzzy numbers with different indices of optimism gives different optimum values. In addition, the total integral ranking value obtained using hexagonal fuzzy numbers yields a better optimal value than the total integral ranking value obtained using trapezoidal fuzzy numbers.
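
    One common form of total integral ranking (often attributed to Liou and Wang) blends the left and right integral values of a fuzzy number through an index of optimism. The sketch below shows this ranking for a trapezoidal fuzzy number with linear membership functions; it is a generic illustration, and the fuzzy cost used here is a made-up example rather than data from the paper.

    ```python
    def total_integral_value(a, b, c, d, alpha=0.5):
        """Total integral value of a trapezoidal fuzzy number (a, b, c, d)
        with linear side membership functions; alpha is the index of optimism
        (alpha = 0 pessimistic, alpha = 1 optimistic)."""
        left = 0.5 * (a + b)      # left integral value
        right = 0.5 * (c + d)     # right integral value
        return alpha * right + (1.0 - alpha) * left

    # Hypothetical fuzzy transport cost compared under two attitudes
    cost = (4.0, 6.0, 8.0, 10.0)
    print(total_integral_value(*cost, alpha=0.0))  # 5.0, pessimistic ranking
    print(total_integral_value(*cost, alpha=1.0))  # 9.0, optimistic ranking
    ```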

  16. Optimization of the solution of the problem of scheduling theory ...

    African Journals Online (AJOL)

    This article describes a genetic algorithm used to solve a problem related to scheduling theory. A large number of different methods are described in the scientific literature. The main issue faced in the problem in question is that it is necessary to search for the optimal solution in a large search space for the set of ...

  17. Pareto Optimal Solutions for Network Defense Strategy Selection Simulator in Multi-Objective Reinforcement Learning

    Directory of Open Access Journals (Sweden)

    Yang Sun

    2018-01-01

    Full Text Available Using Pareto optimization in Multi-Objective Reinforcement Learning (MORL) leads to better learning results for network defense games. This is particularly useful for network security agents, who must often balance several goals when choosing what action to take in defense of a network. If the defender knows his preferred reward distribution, the advantages of Pareto optimization can be retained by using a scalarization algorithm prior to the implementation of the MORL. In this paper, we simulate a network defense scenario by creating a multi-objective zero-sum game and using Pareto optimization and MORL to determine optimal solutions, and we compare those solutions to different scalarization approaches. We build a Pareto Defense Strategy Selection Simulator (PDSSS) system for assisting network administrators with decision-making, specifically with defense strategy selection, and the experimental results show that the Satisficing Trade-Off Method (STOM) scalarization approach performs better than linear scalarization or the GUESS method. The results of this paper can aid network security agents attempting to find an optimal defense policy for network security games.
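
    To illustrate why the choice of scalarization matters, the sketch below compares a weighted-sum scalarization with a reference-point (weighted Chebyshev) scalarization on three hypothetical reward vectors forming a concave front; the weighted sum can only pick extreme strategies there, while the reference-point form can select the compromise. This is a generic illustration, not the STOM or GUESS implementations evaluated in the paper.

    ```python
    import numpy as np

    # Hypothetical reward vectors for three candidate defense strategies
    # (both objectives are to be maximized), chosen to form a concave front
    rewards = np.array([[1.0, 0.0],
                        [0.4, 0.4],
                        [0.0, 1.0]])
    weights = np.array([0.5, 0.5])
    ideal = rewards.max(axis=0)           # reference (ideal) point

    linear = rewards @ weights                                  # weighted-sum scalarization
    chebyshev = -np.max(weights * (ideal - rewards), axis=1)    # reference-point style

    print("weighted sum picks strategy", int(np.argmax(linear)))      # an extreme strategy
    print("Chebyshev picks strategy", int(np.argmax(chebyshev)))      # the compromise strategy
    ```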

  18. Sensitive Constrained Optimal PMU Allocation with Complete Observability for State Estimation Solution

    Directory of Open Access Journals (Sweden)

    R. Manam

    2017-12-01

    Full Text Available In this paper, a sensitive constrained integer linear programming approach is formulated for the optimal allocation of Phasor Measurement Units (PMUs) in a power system network to obtain state estimation. In this approach, sensitive buses along with zero injection buses (ZIB) are considered for optimal allocation of PMUs in the network to generate state estimation solutions. Sensitive buses are identified from the mean of bus voltages subjected to a consistent load increase of up to 50%. Sensitive buses are ranked in order to place PMUs. Sensitive constrained optimal PMU allocation in the cases of single-line and no-line contingency is considered in the observability analysis to ensure protection and control of the power system under abnormal conditions. Modeling of ZIB constraints is included to minimize the number of PMU allocations in the network. This paper presents optimal allocation of PMUs at sensitive buses with zero injection modeling, considering cost criteria and redundancy, to increase the accuracy of the state estimation solution without losing observability of the whole system. Simulations are carried out on the IEEE 14-, 30- and 57-bus systems, and the results obtained are compared with traditional and other state estimation methods available in the literature to demonstrate the effectiveness of the proposed method.
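
    A core constraint behind such placement problems is topological observability: a PMU placed at a bus observes that bus and all of its neighbors, and every bus must be observed. As a purely illustrative sketch (a tiny hypothetical 7-bus network, not the IEEE systems or the sensitivity/ZIB modeling used in the paper), a brute-force search for the minimum placement can be written as:

    ```python
    from itertools import combinations

    # Hypothetical 7-bus test network given as an adjacency list (assumption)
    adj = {1: {2, 5}, 2: {1, 3, 5}, 3: {2, 4}, 4: {3, 5, 7},
           5: {1, 2, 4, 6}, 6: {5}, 7: {4}}
    buses = set(adj)

    def observed(pmu_buses):
        """A PMU observes its own bus and every directly connected bus."""
        seen = set()
        for b in pmu_buses:
            seen |= {b} | adj[b]
        return seen

    # Brute-force the minimum number of PMUs for complete observability
    for k in range(1, len(buses) + 1):
        placements = [c for c in combinations(sorted(buses), k) if observed(c) == buses]
        if placements:
            print(f"minimum PMUs: {k}, e.g. at buses {placements[0]}")
            break
    ```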

  19. The optimal solution of a non-convex state-dependent LQR problem and its applications.

    Directory of Open Access Journals (Sweden)

    Xudan Xu

    Full Text Available This paper studies a Non-convex State-dependent Linear Quadratic Regulator (NSLQR) problem, in which the control penalty weighting matrix [Formula: see text] in the performance index is state-dependent. A necessary and sufficient condition for the optimal solution is established, with a rigorous proof based on the Euler-Lagrange equation. It is found that the optimal solution of the NSLQR problem can be obtained by solving a Pseudo-Differential-Riccati-Equation (PDRE) simultaneously with the closed-loop system equation. A comparison theorem for the PDRE is given to facilitate solution methods for the PDRE. A linear time-variant system is employed as an example in simulation to verify the proposed optimal solution. As a non-trivial application, a goal pursuit process in psychology is modeled as a NSLQR problem, and two typical goal pursuit behaviors found in humans and animals are reproduced using different control weightings [Formula: see text]. It is found that these two behaviors save control energy and cause less stress than the Conventional Control Behavior typified by LQR control with a constant control weighting [Formula: see text], in situations where only the goal discrepancy at the terminal time is of concern, such as in marathon races and target-hitting missions.

  20. Optimization of Low-Thrust Spiral Trajectories by Collocation

    Science.gov (United States)

    Falck, Robert D.; Dankanich, John W.

    2012-01-01

    As NASA examines potential missions in the post space shuttle era, there has been a renewed interest in low-thrust electric propulsion for both crewed and uncrewed missions. While much progress has been made in the field of software for the optimization of low-thrust trajectories, many of the tools utilize higher-fidelity methods which, while excellent, result in extremely high run-times and poor convergence when dealing with planetocentric spiraling trajectories deep within a gravity well. Conversely, faster tools like SEPSPOT provide a reasonable solution but typically fail to account for other forces such as third-body gravitation, aerodynamic drag, solar radiation pressure. SEPSPOT is further constrained by its solution method, which may require a very good guess to yield a converged optimal solution. Here the authors have developed an approach using collocation intended to provide solution times comparable to those given by SEPSPOT while allowing for greater robustness and extensible force models.

  1. Near-horizon symmetries of extremal black holes

    International Nuclear Information System (INIS)

    Kunduri, Hari K; Lucietti, James; Reall, Harvey S

    2007-01-01

    Recent work has demonstrated an attractor mechanism for extremal rotating black holes subject to the assumption of a near-horizon SO(2, 1) symmetry. We prove the existence of this symmetry for any extremal black hole with the same number of rotational symmetries as known four- and five-dimensional solutions (including black rings). The result is valid for a general two-derivative theory of gravity coupled to Abelian vectors and uncharged scalars, allowing for a non-trivial scalar potential. We prove that it remains valid in the presence of higher-derivative corrections. We show that SO(2, 1)-symmetric near-horizon solutions can be analytically continued to give SU(2)-symmetric black hole solutions. For example, the near-horizon limit of an extremal 5D Myers-Perry black hole is related by analytic continuation to a non-extremal cohomogeneity-1 Myers-Perry solution

  2. Analyze the optimal solutions of optimization problems by means of fractional gradient based system using VIM

    Directory of Open Access Journals (Sweden)

    Firat Evirgen

    2016-04-01

    Full Text Available In this paper, a class of Nonlinear Programming (NLP) problems is modeled with a gradient-based system of fractional order differential equations in Caputo's sense. To see the overlap between the equilibrium point of the fractional order dynamic system and the optimal solution of the NLP problem over a longer time span, the Multistage Variational Iteration Method is applied. The comparisons among the multistage variational iteration method, the variational iteration method and the fourth order Runge-Kutta method in fractional and integer order show that the fractional order model and techniques can be seen as an effective and reliable tool for finding optimal solutions of Nonlinear Programming problems.
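
    To make the gradient-based dynamic-system idea concrete, the sketch below integrates the classical integer-order gradient flow dx/dt = -∇f(x) with a fourth order Runge-Kutta scheme for a simple unconstrained quadratic; the equilibrium point coincides with the minimizer. The Caputo fractional-order version studied in the record requires specialized fractional solvers, so this is only an integer-order illustration with an assumed test function.

    ```python
    import numpy as np

    def grad_f(x):
        # Gradient of the convex test function f(x) = (x1 - 3)^2 + 2*(x2 + 1)^2
        return np.array([2.0 * (x[0] - 3.0), 4.0 * (x[1] + 1.0)])

    def rk4_step(x, h):
        """One Runge-Kutta 4 step of the gradient-flow ODE dx/dt = -grad f(x)."""
        k1 = -grad_f(x)
        k2 = -grad_f(x + 0.5 * h * k1)
        k3 = -grad_f(x + 0.5 * h * k2)
        k4 = -grad_f(x + h * k3)
        return x + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

    x, h = np.array([10.0, 10.0]), 0.05
    for _ in range(400):
        x = rk4_step(x, h)
    print("equilibrium point (optimal solution):", x)   # approaches (3, -1)
    ```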

  3. The Successor Function and Pareto Optimal Solutions of Cooperative Differential Systems with Concavity. I

    DEFF Research Database (Denmark)

    Andersen, Kurt Munk; Sandqvist, Allan

    1997-01-01

    We investigate the domain of definition and the domain of values for the successor function of a cooperative differential system x'=f(t,x), where the coordinate functions are concave in x for any fixed value of t. Moreover, we give a characterization of a weakly Pareto optimal solution.

  4. Near-extreme system condition and near-extreme remaining useful time for a group of products

    International Nuclear Information System (INIS)

    Wang, Hai-Kun; Li, Yan-Feng; Huang, Hong-Zhong; Jin, Tongdan

    2017-01-01

    When a group of identical products is operating in the field, an aggregation of failures is a catastrophe for engineers and customers who strive to develop reliable and safe products. In order to avoid a swarm of failures in a short time, it is essential to measure the degree of dispersion from the different failure times in a group of products to the first failure time. This phenomenon is related to the crowding of system conditions near the worst one among a group of products. The group size in this paper refers to a finite number of products, instead of an infinite number or a single product. We evaluate the reliability of the product fleet from two aspects. First, we define the near-extreme system condition and the near-extreme failure time for offline solutions, which means no online observations. Second, we apply them to a continuous degradation system that breaks down when it reaches a soft failure threshold. By using particle filtering in the framework of prognostics and health management for a group of products, we aim to estimate the near-extreme system condition and further predict the remaining useful life (RUL) using online solutions. Numerical examples are provided to demonstrate the effectiveness of the proposed method. - Highlights: • The aggregation of failures is measured for a group of identical products. • The crowding of failures is quantified by the near-extreme evaluations. • The near-extreme system condition is given for offline solutions. • The near-extreme remaining useful time is provided for online solutions.
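
    The offline notion of crowding near the first failure can be illustrated with a plain Monte Carlo first-passage simulation of a linear degradation model; this is a generic sketch with assumed drift, noise and threshold values, not the particle-filtering prognostic scheme of the record.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    n_units, horizon = 500, 300          # group of identical products, time steps
    drift, noise, threshold = 0.05, 0.02, 10.0

    # Simulate degradation paths X_{t+1} = X_t + drift + noise and record the
    # first time each unit crosses the soft failure threshold
    failure_time = np.full(n_units, horizon, dtype=float)
    x = np.zeros(n_units)
    for t in range(1, horizon + 1):
        x += drift + rng.normal(0.0, noise, n_units)
        newly_failed = (x >= threshold) & (failure_time == horizon)
        failure_time[newly_failed] = t

    first = failure_time.min()
    # "Near-extreme" view: how tightly the fleet's failure times crowd near the first one
    print("first failure at t =", first)
    print("share of units failing within 10 steps of the first:",
          np.mean(failure_time <= first + 10))
    ```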

  5. Feature selection in wind speed prediction systems based on a hybrid coral reefs optimizationExtreme learning machine approach

    International Nuclear Information System (INIS)

    Salcedo-Sanz, S.; Pastor-Sánchez, A.; Prieto, L.; Blanco-Aguilera, A.; García-Herrera, R.

    2014-01-01

    Highlights: • A novel approach for short-term wind speed prediction is presented. • The system is formed by a coral reefs optimization algorithm and an extreme learning machine. • Feature selection is carried out with the CRO to improve the ELM performance. • The method is tested in real wind farm data in USA, for the period 2007–2008. - Abstract: This paper presents a novel approach for short-term wind speed prediction based on a Coral Reefs Optimization algorithm (CRO) and an Extreme Learning Machine (ELM), using meteorological predictive variables from a physical model (the Weather Research and Forecast model, WRF). The approach is based on a Feature Selection Problem (FSP) carried out with the CRO, that must obtain a reduced number of predictive variables out of the total available from the WRF. This set of features will be the input of an ELM, that finally provides the wind speed prediction. The CRO is a novel bio-inspired approach, based on the simulation of reef formation and coral reproduction, able to obtain excellent results in optimization problems. On the other hand, the ELM is a new paradigm in neural networks’ training, that provides a robust and extremely fast training of the network. Together, these algorithms are able to successfully solve this problem of feature selection in short-term wind speed prediction. Experiments in a real wind farm in the USA show the excellent performance of the CRO–ELM approach in this FSP wind speed prediction problem

  6. A New Method Based on Simulation-Optimization Approach to Find Optimal Solution in Dynamic Job-shop Scheduling Problem with Breakdown and Rework

    Directory of Open Access Journals (Sweden)

    Farzad Amirkhani

    2017-03-01

    The proposed method is implemented on classical job-shop problems with the objective of makespan, and the results are compared with a mixed integer programming model. Moreover, appropriate dispatching priorities are obtained for the dynamic job-shop problem minimizing multi-objective criteria. The results show that simulation-based optimization is highly capable of capturing the main characteristics of the shop and producing optimal/near-optimal solutions with a high degree of credibility.

  7. Tax solutions for optimal reduction of tobacco use in West Africa ...

    International Development Research Centre (IDRC) Digital Library (Canada)

    Tax solutions for optimal reduction of tobacco use in West Africa. During the first phase of this project, numerous decision-makers were engaged and involved in discussions with the goal of establishing a new taxation system to reduce tobacco use in West Africa. Although regional economic authorities (ECOWAS and ...

  8. Space-planning and structural solutions of low-rise buildings: Optimal selection methods

    Science.gov (United States)

    Gusakova, Natalya; Minaev, Nikolay; Filushina, Kristina; Dobrynina, Olga; Gusakov, Alexander

    2017-11-01

    The present study is devoted to the elaboration of a methodology for appropriately selecting space-planning and structural solutions in low-rise buildings. The objective of the study is to work out a system of criteria influencing the selection of the space-planning and structural solutions that are most suitable for low-rise buildings and structures. Applying the defined criteria in practice aims to enhance the efficiency of capital investments and energy and resource saving, and to create comfortable conditions for the population, considering the climatic zoning of the construction site. The developments of the project can be applied when implementing investment-construction projects of low-rise housing in different kinds of territories based on local building materials. A system of criteria influencing the optimal selection of space-planning and structural solutions of low-rise buildings has been developed. A methodological basis has also been elaborated to assess the optimal selection of space-planning and structural solutions of low-rise buildings satisfying the requirements of energy efficiency, comfort, safety, and economic efficiency. The elaborated methodology makes it possible to intensify the development of low-rise construction for different types of territories, taking into account the climatic zoning of the construction site. Stimulation of low-rise construction processes should be based on a system of scientifically justified approaches, which allows enhancing the energy efficiency, comfort, safety and economic effectiveness of low-rise buildings.

  9. K-maps: a vehicle to an optimal solution in combinational logic ...

    African Journals Online (AJOL)

    K-maps: a vehicle to an optimal solution in combinational logic design problems using digital multiplexers. ... Abstract. Application of Karnaugh maps (K-Maps) for the design of combinational logic circuits and sequential logic circuits is a subject that has been widely discussed. However, the use of K-Maps in the design of ...

  10. Calculation of Pareto-optimal solutions to multiple-objective problems using threshold-of-acceptability constraints

    Science.gov (United States)

    Giesy, D. P.

    1978-01-01

    A technique is presented for the calculation of Pareto-optimal solutions to a multiple-objective constrained optimization problem by solving a series of single-objective problems. Threshold-of-acceptability constraints are placed on the objective functions at each stage to both limit the area of search and to mathematically guarantee convergence to a Pareto optimum.
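
    The strategy described above is closely related to the epsilon-constraint method: optimize one objective while bounding the others by acceptability thresholds, then sweep the thresholds to trace the Pareto set. The sketch below applies that idea to an assumed toy bi-objective problem with scipy; it illustrates the general technique rather than the 1978 implementation.

    ```python
    import numpy as np
    from scipy.optimize import minimize

    # Toy bi-objective problem: minimize f1(x) = x^2 and f2(x) = (x - 2)^2
    def f1(x):
        return float(x[0] ** 2)

    def f2(x):
        return float((x[0] - 2.0) ** 2)

    pareto_points = []
    for threshold in np.linspace(0.0, 4.0, 9):
        # Minimize f1 subject to the acceptability threshold f2(x) <= threshold
        res = minimize(f1, x0=[1.0],
                       constraints=[{"type": "ineq",
                                     "fun": lambda x, t=threshold: t - f2(x)}])
        if res.success:
            pareto_points.append((f1(res.x), f2(res.x)))

    for p1, p2 in pareto_points:
        print(f"f1 = {p1:.3f}, f2 = {p2:.3f}")
    ```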

  11. On the diversity of multiple optimal controls for quantum systems

    International Nuclear Information System (INIS)

    Shir, O M; Baeck, Th; Beltrani, V; Rabitz, H; Vrakking, M J J

    2008-01-01

    This study presents simulations of optimal field-free molecular alignment and rotational population transfer (starting from the J = 0 rotational ground state of a diatomic molecule), optimized by means of laser pulse shaping guided by evolutionary algorithms. Qualitatively different solutions are obtained that optimize the alignment and population transfer efficiency to the maximum extent that is possible given the existing constraints on the optimization due to the finite bandwidth and energy of the laser pulse, the finite degrees of freedom in the laser pulse shaping and the evolutionary algorithm employed. The effect of these constraints on the optimization process is discussed at several levels, subject to theoretical as well as experimental considerations. We show that optimized alignment yields can reach extremely high values, even with severe constraints being present. The breadth of optimal controls is assessed, and a correlation is found between the diversity of solutions and the difficulty of the problem. In the pulse shapes that optimize dynamic alignment we observe a transition between pulse sequences that maximize the initial population transfer from J = 0 to J = 2 and pulse sequences that optimize the transfer to higher rotational levels

  12. Solving Optimization Problems via Vortex Optimization Algorithm and Cognitive Development Optimization Algorithm

    Directory of Open Access Journals (Sweden)

    Ahmet Demir

    2017-01-01

    Full Text Available In fields which require finding the most appropriate value, optimization has become a vital approach for obtaining effective solutions. With the use of optimization techniques, many different fields in modern life have found solutions to their real-world problems. In this context, classical optimization techniques have enjoyed considerable popularity. After a while, however, more advanced optimization problems required the use of more effective techniques. At this point, Computer Science took an important role in providing software-related techniques to improve the associated literature. Today, intelligent optimization techniques based on Artificial Intelligence are widely used for optimization problems. The objective of this paper is to provide a comparative study on the employment of classical optimization solutions and Artificial Intelligence solutions, enabling readers to gain an idea of the potential of intelligent optimization techniques. To this end, two recently developed intelligent optimization algorithms, the Vortex Optimization Algorithm (VOA) and the Cognitive Development Optimization Algorithm (CoDOA), have been used to solve some multidisciplinary optimization problems provided in the source book Thomas' Calculus 11th Edition, and the obtained results have been compared with classical optimization solutions.

  13. Reinforcement learning solution for HJB equation arising in constrained optimal control problem.

    Science.gov (United States)

    Luo, Biao; Wu, Huai-Ning; Huang, Tingwen; Liu, Derong

    2015-11-01

    The constrained optimal control problem depends on the solution of the complicated Hamilton-Jacobi-Bellman equation (HJBE). In this paper, a data-based off-policy reinforcement learning (RL) method is proposed, which learns the solution of the HJBE and the optimal control policy from real system data. One important feature of the off-policy RL is that its policy evaluation can be realized with data generated by other behavior policies, not necessarily the target policy, which solves the insufficient exploration problem. The convergence of the off-policy RL is proved by demonstrating its equivalence to the successive approximation approach. Its implementation procedure is based on the actor-critic neural networks structure, where the function approximation is conducted with linearly independent basis functions. Subsequently, the convergence of the implementation procedure with function approximation is also proved. Finally, its effectiveness is verified through computer simulations. Copyright © 2015 Elsevier Ltd. All rights reserved.

  14. Ranked solutions to a class of combinatorial optimizations - with applications in mass spectrometry based peptide sequencing

    Science.gov (United States)

    Doerr, Timothy; Alves, Gelio; Yu, Yi-Kuo

    2006-03-01

    Typical combinatorial optimizations are NP-hard; however, for a particular class of cost functions the corresponding combinatorial optimizations can be solved in polynomial time. This suggests a way to efficiently find approximate solutions: find a transformation that makes the cost function as similar as possible to that of the solvable class. After keeping many high-ranking solutions using the approximate cost function, one may then re-assess these solutions with the full cost function to find the best approximate solution. Under this approach, it is important to be able to assess the quality of the solutions obtained, e.g., by finding the true ranking of the kth best approximate solution when all possible solutions are considered exhaustively. To tackle this statistical issue, we provide a systematic method starting with a scaling function generated from the finite number of high-ranking solutions, followed by a convergent iterative mapping. This method, useful in a variant of the directed paths in random media problem proposed here, can also provide a statistical significance assessment for one of the most important proteomic tasks: peptide sequencing using tandem mass spectrometry data.

  15. Resilience Design Patterns: A Structured Approach to Resilience at Extreme Scale

    International Nuclear Information System (INIS)

    Engelmann, Christian; Hukerikar, Saurabh

    2017-01-01

    Reliability is a serious concern for future extreme-scale high-performance computing (HPC) systems. Projections based on the current generation of HPC systems and technology roadmaps suggest the prevalence of very high fault rates in future systems. While the HPC community has developed various resilience solutions, application-level techniques as well as system-based solutions, the solution space remains fragmented. There are no formal methods and metrics to integrate the various HPC resilience techniques into composite solutions, nor are there methods to holistically evaluate the adequacy and efficacy of such solutions in terms of their protection coverage and their performance and power efficiency characteristics. Additionally, few of the current approaches are portable to newer architectures and software environments that will be deployed on future systems. In this paper, we develop a structured approach to the design, evaluation and optimization of HPC resilience using the concept of design patterns. A design pattern is a general repeatable solution to a commonly occurring problem. We identify the problems caused by various types of faults, errors and failures in HPC systems and the techniques used to deal with these events. Each well-known solution that addresses a specific HPC resilience challenge is described in the form of a pattern. We develop a complete catalog of such resilience design patterns, which may be used by system architects, system software and tools developers, application programmers, as well as users and operators as essential building blocks when designing and deploying resilience solutions. We also develop a design framework that enhances a designer's understanding of the opportunities for integrating multiple patterns across layers of the system stack and the important constraints during implementation of the individual patterns. It is also useful for defining mechanisms and interfaces to coordinate flexible fault management across the layers of the system stack.

  16. Simple and accurate solution for convective-radiative fin with temperature dependent thermal conductivity using double optimal linearization

    International Nuclear Information System (INIS)

    Bouaziz, M.N.; Aziz, Abdul

    2010-01-01

    A novel concept of double optimal linearization is introduced and used to obtain a simple and accurate solution for the temperature distribution in a straight rectangular convective-radiative fin with temperature dependent thermal conductivity. The solution is built from the classical solution for a pure convection fin of constant thermal conductivity which appears in terms of hyperbolic functions. When compared with the direct numerical solution, the double optimally linearized solution is found to be accurate within 4% for a range of radiation-conduction and thermal conductivity parameters that are likely to be encountered in practice. The present solution is simple and offers superior accuracy compared with the fairly complex approximate solutions based on the homotopy perturbation method, variational iteration method, and the double series regular perturbation method. The fin efficiency expression resembles the classical result for the constant thermal conductivity convecting fin. The present results are easily usable by the practicing engineers in their thermal design and analysis work involving fins.
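
    The classical building block referred to above is the constant-conductivity convecting fin, whose temperature distribution and efficiency are standard textbook results in terms of hyperbolic functions (written here for a straight fin with an insulated tip; the notation is the usual one and is an assumption, not copied from the paper):

    ```latex
    % Classical constant-conductivity convecting fin with an insulated tip:
    % temperature excess ratio and fin efficiency (standard textbook notation).
    \[
      \frac{\theta(x)}{\theta_b}
      = \frac{T(x)-T_\infty}{T_b-T_\infty}
      = \frac{\cosh\!\bigl(m\,(L-x)\bigr)}{\cosh(mL)},
      \qquad
      m = \sqrt{\frac{hP}{kA_c}},
    \]
    \[
      \eta_{\text{fin}} = \frac{\tanh(mL)}{mL}.
    \]
    ```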

  17. Spin glasses and nonlinear constraints in portfolio optimization

    International Nuclear Information System (INIS)

    Andrecut, M.

    2014-01-01

    We discuss the portfolio optimization problem with the obligatory deposits constraint. Recently it has been shown that as a consequence of this nonlinear constraint, the solution consists of an exponentially large number of optimal portfolios, completely different from each other, and extremely sensitive to any changes in the input parameters of the problem, making the concept of rational decision making questionable. Here we reformulate the problem using a quadratic obligatory deposits constraint, and we show that from the physics point of view, finding an optimal portfolio amounts to calculating the mean-field magnetizations of a random Ising model with the constraint of a constant magnetization norm. We show that the model reduces to an eigenproblem, with 2N solutions, where N is the number of assets defining the portfolio. Also, in order to illustrate our results, we present a detailed numerical example of a portfolio of several risky common stocks traded on the Nasdaq Market.

  19. Game theory and extremal optimization for community detection in complex dynamic networks.

    Science.gov (United States)

    Lung, Rodica Ioana; Chira, Camelia; Andreica, Anca

    2014-01-01

    The detection of evolving communities in dynamic complex networks is a challenging problem that recently received attention from the research community. Dynamics clearly add another complexity dimension to the difficult task of community detection. Methods should be able to detect changes in the network structure and produce a set of community structures corresponding to different timestamps and reflecting the evolution in time of network data. We propose a novel approach based on game theory elements and extremal optimization to address dynamic communities detection. Thus, the problem is formulated as a mathematical game in which nodes take the role of players that seek to choose a community that maximizes their profit viewed as a fitness function. Numerical results obtained for both synthetic and real-world networks illustrate the competitive performance of this game theoretical approach.

  20. Portfolio Optimization and Mortgage Choice

    Directory of Open Access Journals (Sweden)

    Maj-Britt Nordfang

    2017-01-01

    Full Text Available This paper studies the optimal mortgage choice of an investor in a simple bond market with a stochastic interest rate and access to term life insurance. The study is based on advances in stochastic control theory, which provides analytical solutions to portfolio problems with a stochastic interest rate. We derive the optimal portfolio of a mortgagor in a simple framework and formulate stylized versions of mortgage products offered in the market today. This allows us to analyze the optimal investment strategy in terms of optimal mortgage choice. We conclude that certain extreme investors optimally choose either a traditional fixed rate mortgage or an adjustable rate mortgage, while investors with moderate risk aversion and income prefer a mix of the two. By matching specific investor characteristics to existing mortgage products, our study provides a better understanding of the complex and yet restricted mortgage choice faced by many household investors. In addition, the simple analytical framework enables a detailed analysis of how changes to market, income and preference parameters affect the optimal mortgage choice.

  1. Optimization Under Uncertainty for Wake Steering Strategies: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Quick, Julian [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Annoni, Jennifer [National Renewable Energy Laboratory (NREL), Golden, CO (United States); King, Ryan N [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Dykes, Katherine L [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Fleming, Paul A [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Ning, Andrew [Brigham Young University

    2017-05-01

    Wind turbines in a wind power plant experience significant power losses because of aerodynamic interactions between turbines. One control strategy to reduce these losses is known as 'wake steering,' in which upstream turbines are yawed to direct wakes away from downstream turbines. Previous wake steering research has assumed perfect information; however, there can be significant uncertainty in many aspects of the problem, including wind inflow and various turbine measurements. Uncertainty has significant implications for the performance of wake steering strategies. Consequently, the authors formulate and solve an optimization under uncertainty (OUU) problem for finding optimal wake steering strategies in the presence of yaw angle uncertainty. The OUU wake steering strategy is demonstrated on a two-turbine test case and on the utility-scale, offshore Princess Amalia Wind Farm. When we accounted for yaw angle uncertainty in the Princess Amalia Wind Farm case, inflow-direction-specific OUU solutions produced between 0% and 1.4% more power than the deterministically optimized steering strategies, resulting in an overall annual average improvement of 0.2%. More importantly, the deterministic optimization is expected to perform worse and with more downside risk than the OUU result when realistic uncertainty is taken into account. Additionally, the OUU solution produces fewer extreme yaw situations than the deterministic solution.

  2. Hooke–Jeeves Method-used Local Search in a Hybrid Global Optimization Algorithm

    Directory of Open Access Journals (Sweden)

    V. D. Sulimov

    2014-01-01

    Full Text Available Modern methods for the optimization-based investigation of complex systems rely on developing and updating mathematical models of those systems by solving the appropriate inverse problems. The input data needed for a solution are obtained from the analysis of experimentally determined characteristics of a system or a process. The sought causal characteristics include the equation coefficients of the object's mathematical model, boundary conditions, and so on. The optimization approach is one of the main ways to solve inverse problems. In the general case it is necessary to find a global extremum of a criterion function that is not everywhere differentiable. Global optimization methods are widely used in problems of identification and computational diagnostics of systems, as well as in optimal control, computed tomography, image restoration, training of neural networks, and other intelligent technologies. The increasingly complicated optimization problems observed during the last decades lead to more complicated mathematical models, thereby making the solution of the corresponding extreme problems significantly more difficult. In many practical applications the problem conditions can restrict modeling. As a consequence, in inverse problems the criterion functions can be noisy and not everywhere differentiable. The presence of noise means that calculating the derivatives is difficult and unreliable, which motivates the use of optimization methods that do not require derivatives. The efficiency of deterministic global optimization algorithms is significantly restricted by their dependence on the dimension of the extreme problem. When the number of variables is large, stochastic global optimization algorithms are used. However, stochastic algorithms yield expensive solutions, and this drawback restricts their application. A natural remedy is to develop hybrid algorithms that combine a stochastic algorithm for scanning the variable space with a deterministic local search.
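
    The deterministic local search named in the title, the Hooke-Jeeves pattern search, is derivative-free: it alternates exploratory coordinate moves with pattern moves and shrinks the step when no improvement is found. The following is a minimal generic sketch of that classical method on the Rosenbrock function (an assumed test problem), not the hybrid algorithm of the record.

    ```python
    import numpy as np

    def rosenbrock(x):
        return (1 - x[0]) ** 2 + 100 * (x[1] - x[0] ** 2) ** 2

    def explore(f, x, step):
        """Exploratory move: try +/- step along each coordinate, keep improvements."""
        x = x.copy()
        for i in range(len(x)):
            for delta in (step, -step):
                trial = x.copy()
                trial[i] += delta
                if f(trial) < f(x):
                    x = trial
                    break
        return x

    def hooke_jeeves(f, x0, step=0.5, shrink=0.5, tol=1e-6, max_iter=20000):
        base = np.asarray(x0, dtype=float)
        for _ in range(max_iter):
            new = explore(f, base, step)
            if f(new) < f(base):
                # Pattern move: jump along the successful direction, then re-explore
                pattern = explore(f, new + (new - base), step)
                base = pattern if f(pattern) < f(new) else new
            elif step > tol:
                step *= shrink          # no improvement: shrink the mesh
            else:
                break
        return base

    print(hooke_jeeves(rosenbrock, [-1.2, 1.0]))   # approaches the minimum at (1, 1)
    ```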

  3. Optimal phase estimation with arbitrary a priori knowledge

    International Nuclear Information System (INIS)

    Demkowicz-Dobrzanski, Rafal

    2011-01-01

    The optimal-phase estimation strategy is derived when partial a priori knowledge on the estimated phase is available. The solution is found with the help of the most famous result from the entanglement theory: the positive partial transpose criterion. The structure of the optimal measurements, estimators, and the optimal probe states is analyzed. This Rapid Communication provides a unified framework bridging the gap in the literature on the subject which until now dealt almost exclusively with two extreme cases: almost perfect knowledge (local approach based on Fisher information) and no a priori knowledge (global approach based on covariant measurements). Special attention is paid to a natural a priori probability distribution arising from a diffusion process.

  4. Energetic optimization of the ventilation system in modern ships

    International Nuclear Information System (INIS)

    Pérez, José Antonio; Orosa, José Antonio; Costa, Ángel Martín

    2016-01-01

    Highlights: • New solutions to optimize the ventilation system in modern ships are proposed. • Very important energy savings have been achieved. • Extreme indoor conditions in the engine room are modelled and analysed. • Critical places and hazardous tasks have been identified and analysed. • Important problems in the daily task schedule have been detected and corrected. - Abstract: The indoor ambience on board modern ships constitutes a perfect example of severe industrial environment, where personnel are exposed to extreme working conditions, especially in the engine room. To mitigate this problem, the classical solution is the use of powerful mechanical ventilation systems, with high energy consumption, which, in the case of the engine room, represents between 3.5% and 5.5% of the overall power installed. Consequently, its energetic optimization is critical, being an interesting example of not well solved thermal engineering problem, where work risk criteria also must be considered, as the engine room is the hottest and, therefore, one of the most hazardous places on the ship. Based on a complete 3D CFD analysis of the thermal conditions in the engine room and the requirements and duties of the crew derived from their daily work schedule, the optimal ventilation requirements and the maximum tolerable working time have been established, achieving very important energy savings, without any reduction in crew productivity or safety.

  5. Optimally Stopped Optimization

    Science.gov (United States)

    Vinci, Walter; Lidar, Daniel

    We combine the fields of heuristic optimization and optimal stopping. We propose a strategy for benchmarking randomized optimization algorithms that minimizes the expected total cost for obtaining a good solution with an optimal number of calls to the solver. To do so, rather than letting the objective function alone define a cost to be minimized, we introduce a further cost-per-call of the algorithm. We show that this problem can be formulated using optimal stopping theory. The expected cost is a flexible figure of merit for benchmarking probabilistic solvers that can be computed when the optimal solution is not known, and that avoids the biases and arbitrariness that affect other measures. The optimal stopping formulation of benchmarking directly leads to a real-time, optimal-utilization strategy for probabilistic optimizers with practical impact. We apply our formulation to benchmark the performance of a D-Wave 2X quantum annealer and the HFS solver, a specialized classical heuristic algorithm designed for low tree-width graphs. On a set of frustrated-loop instances with planted solutions defined on up to N = 1098 variables, the D-Wave device is between one to two orders of magnitude faster than the HFS solver.

  6. Non-extremal D-instantons

    NARCIS (Netherlands)

    Bergshoeff, E; Collinucci, A; Gran, U; Roest, D; Vandoren, S

    2004-01-01

    We construct the most general non-extremal deformation of the D-instanton solution with maximal rotational symmetry. The general non-supersymmetric solution carries electric charges of the SL(2,R) symmetry, which correspond to each of the three conjugacy classes of SL(2,R). Our calculations

  8. Clinical application of lower extremity CTA and lower extremity perfusion CT as a method of diagnostic for lower extremity atherosclerotic obliterans

    Energy Technology Data Exchange (ETDEWEB)

    Moon, Il Bong; Dong, Kyung Rae [Dept. Radiological Technology, Gwangju Health University, Gwangju (Korea, Republic of); Goo, Eun Hoe [Dept. Radiological Science, Cheongju University, Cheongju (Korea, Republic of)

    2016-11-15

    The purpose of this study was to assess the clinical application of lower extremity CTA and lower extremity perfusion CT as diagnostic methods for lower extremity atherosclerotic obliterans. From January to July 2016, 30 patients (mean age, 68) were studied with lower extremity CTA and lower extremity perfusion CT. 128-channel multi-detector row CT scans were acquired with a CT scanner (SOMATOM Definition Flash, Siemens medical solution, Germany) for lower extremity perfusion CT and lower extremity CTA. Acquired images were reconstructed with a 3D workstation (Leonardo, Siemens, Germany). The sites of lower extremity arterial occlusive and stenotic lesions detected were the superficial femoral artery (36.6%), popliteal artery (23.4%), external iliac artery (16.7%), common femoral artery (13.3%), and peroneal artery (10%). The mean total DLP of lower extremity perfusion CT and lower extremity CTA was 650 mGy-cm and 675 mGy-cm, respectively. Lower extremity perfusion CT and lower extremity CTA were found to never depict exactly the same lesions. Future development of lower extremity perfusion CT software programs suggests possible clinical applications.

  9. Improved Solutions for the Optimal Coordination of DOCRs Using Firefly Algorithm

    Directory of Open Access Journals (Sweden)

    Muhammad Sulaiman

    2018-01-01

    Full Text Available Nature-inspired optimization techniques are useful tools in electrical engineering problems to minimize or maximize an objective function. In this paper, we use the firefly algorithm to improve the optimal solution for the problem of directional overcurrent relays (DOCRs). It is a complex and highly nonlinear constrained optimization problem. In this problem, we have two types of design variables, which are the plug settings (PSs) and the time dial settings (TDSs) for each relay in the circuit. The objective function is to minimize the total operating time of all the basic relays to avoid unnecessary delays. We have considered four models in this paper, which are the IEEE 3-bus, 4-bus, 6-bus, and 8-bus models. From the numerical results, it is obvious that the firefly algorithm with certain parameter settings performs better than the other state-of-the-art algorithms.
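
    The core of the firefly algorithm is an attraction step in which a dimmer firefly moves toward a brighter one with an attractiveness that decays with distance, plus a small random perturbation. The sketch below shows this generic update on a simple sphere function; the objective, parameters and seed are assumptions for illustration, not the DOCR coordination model of the record.

    ```python
    import numpy as np

    def sphere(x):
        return np.sum(x ** 2)

    rng = np.random.default_rng(3)
    n_fireflies, dim, n_gen = 25, 4, 200
    beta0, gamma, alpha = 1.0, 1.0, 0.05     # attractiveness, absorption, noise scale

    X = rng.uniform(-5, 5, size=(n_fireflies, dim))
    intensity = np.array([sphere(x) for x in X])   # lower objective = brighter firefly

    for _ in range(n_gen):
        for i in range(n_fireflies):
            for j in range(n_fireflies):
                if intensity[j] < intensity[i]:     # move firefly i toward brighter j
                    r2 = np.sum((X[i] - X[j]) ** 2)
                    beta = beta0 * np.exp(-gamma * r2)
                    X[i] += beta * (X[j] - X[i]) + alpha * rng.normal(size=dim)
                    intensity[i] = sphere(X[i])

    best = X[np.argmin(intensity)]
    print("best objective:", sphere(best))
    ```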

  10. A risk-based multi-objective model for optimal placement of sensors in water distribution system

    Science.gov (United States)

    Naserizade, Sareh S.; Nikoo, Mohammad Reza; Montaseri, Hossein

    2018-02-01

    In this study, a new stochastic model based on Conditional Value at Risk (CVaR) and multi-objective optimization methods is developed for the optimal placement of sensors in a water distribution system (WDS). This model minimizes the risk caused by simultaneous multi-point contamination injection in the WDS using the CVaR approach. The CVaR considers the uncertainties of contamination injection in the form of a probability distribution function and captures low-probability extreme events. In this approach, extreme losses occur in the tail of the loss distribution function. A four-objective optimization model based on the NSGA-II algorithm is developed to minimize the losses of contamination injection (through the CVaR of the affected population and detection time) and also to minimize the two other main criteria of optimal sensor placement, namely the probability of undetected events and cost. Finally, to determine the best solution, the Preference Ranking Organization METHod for Enrichment Evaluation (PROMETHEE), as a subgroup of the Multi Criteria Decision Making (MCDM) approach, is utilized to rank the alternatives on the trade-off curve among the objective functions. Also, a sensitivity analysis is done to investigate the importance of each criterion on the PROMETHEE results considering three relative weighting scenarios. The effectiveness of the proposed methodology is examined by applying it to the Lamerd WDS in the southwestern part of Iran. PROMETHEE suggests 6 sensors with a suitable distribution that approximately covers all regions of the WDS. The optimal values related to the CVaR of the affected population and detection time, as well as the probability of undetected events, for the best optimal solution are 17,055 persons, 31 min and 0.045%, respectively. The results obtained for the Lamerd WDS show the applicability of the CVaR-based multi-objective simulation-optimization model for incorporating the main uncertainties of contamination injection in order to evaluate extreme-value losses.
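
    For readers unfamiliar with the risk measure, CVaR at level α is, roughly, the expected loss over the worst (1 − α) fraction of outcomes, i.e., the mean of the losses beyond the value at risk. A minimal empirical estimator on synthetic Monte Carlo loss samples (the distribution and parameters are assumptions, not output of the Lamerd WDS model) is sketched below.

    ```python
    import numpy as np

    def cvar(losses, alpha=0.95):
        """Empirical CVaR: mean of the losses at or above the alpha-quantile (VaR)."""
        losses = np.asarray(losses, dtype=float)
        var = np.quantile(losses, alpha)
        return losses[losses >= var].mean()

    rng = np.random.default_rng(4)
    # Hypothetical affected-population losses from simulated contamination events
    samples = rng.lognormal(mean=8.0, sigma=1.0, size=100_000)
    print("VaR_95 :", np.quantile(samples, 0.95))
    print("CVaR_95:", cvar(samples, 0.95))
    ```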

  11. Viscosity Solutions for a System of Integro-PDEs and Connections to Optimal Switching and Control of Jump-Diffusion Processes

    International Nuclear Information System (INIS)

    Biswas, Imran H.; Jakobsen, Espen R.; Karlsen, Kenneth H.

    2010-01-01

    We develop a viscosity solution theory for a system of nonlinear degenerate parabolic integro-partial differential equations (IPDEs) related to stochastic optimal switching and control problems or stochastic games. In the case of stochastic optimal switching and control, we prove via dynamic programming methods that the value function is a viscosity solution of the IPDEs. In our setting the value functions or the solutions of the IPDEs are not smooth, so classical verification theorems do not apply.

  12. Optimal design of cluster-based ad-hoc networks using probabilistic solution discovery

    International Nuclear Information System (INIS)

    Cook, Jason L.; Ramirez-Marquez, Jose Emmanuel

    2009-01-01

    The reliability of ad-hoc networks is gaining popularity in two areas: as a topic of academic interest and as a key performance parameter for defense systems employing this type of network. The ad-hoc network is dynamic and scalable, and these descriptions are what attract its users. However, these descriptions are also synonymous with undefined and unpredictable when considering the impacts on the reliability of the system. The configuration of an ad-hoc network changes continuously, and this fact implies that no single mathematical expression or graphical depiction can describe the system reliability-wise. Previous research has used mobility and stochastic models to address this challenge successfully. In this paper, the authors leverage the stochastic approach and build upon it a probabilistic solution discovery (PSD) algorithm to optimize the topology for a cluster-based mobile ad-hoc wireless network (MAWN). Specifically, the membership of nodes within the back-bone network or networks will be assigned in such a way as to maximize reliability subject to a constraint on cost. The constraint may also be considered as a non-monetary cost, such as weight, volume, power, or the like. When a cost is assigned to each component, a maximum cost threshold is assigned to the network, and the method is run; the result is an optimized allocation of the radios enabling the back-bone network(s) to provide the most reliable network possible without exceeding the allowable cost. The method is intended for use directly as part of the architectural design process of a cluster-based MAWN to efficiently determine an optimal or near-optimal design solution. It is capable of optimizing the topology based upon all-terminal reliability (ATR), all-operating terminal reliability (AoTR), or two-terminal reliability (2TR).

  13. A time-domain decomposition iterative method for the solution of distributed linear quadratic optimal control problems

    Science.gov (United States)

    Heinkenschloss, Matthias

    2005-01-01

    We study a class of time-domain decomposition-based methods for the numerical solution of large-scale linear quadratic optimal control problems. Our methods are based on a multiple shooting reformulation of the linear quadratic optimal control problem as a discrete-time optimal control (DTOC) problem. The optimality conditions for this DTOC problem lead to a linear block tridiagonal system. The diagonal blocks are invertible and are related to the original linear quadratic optimal control problem restricted to smaller time-subintervals. This motivates the application of block Gauss-Seidel (GS)-type methods for the solution of the block tridiagonal systems. Numerical experiments show that the spectral radii of the block GS iteration matrices are larger than one for typical applications, but that the eigenvalues of the iteration matrices decay to zero fast. Hence, while the GS method is not expected to converge for typical applications, it can be effective as a preconditioner for Krylov-subspace methods. This is confirmed by our numerical tests. A byproduct of this research is the insight that certain instantaneous control techniques can be viewed as the application of one step of the forward block GS method applied to the DTOC optimality system.

  14. Optimal resource allocation solutions for heterogeneous cognitive radio networks

    Directory of Open Access Journals (Sweden)

    Babatunde Awoyemi

    2017-05-01

    Full Text Available Cognitive radio networks (CRN) are currently gaining immense recognition as the most-likely next-generation wireless communication paradigm, because of their enticing promise of mitigating the spectrum scarcity and/or underutilisation challenge. Indisputably, for this promise to ever materialise, CRN must of necessity devise appropriate mechanisms to judiciously allocate their rather scarce or limited resources (spectrum and others) among their numerous users. 'Resource allocation (RA) in CRN', which essentially describes mechanisms that can effectively and optimally carry out such allocation, so as to achieve the utmost for the network, has therefore recently become an important research focus. However, in most research works on RA in CRN, a highly significant factor that describes a more realistic and practical consideration of CRN has been ignored (or only partially explored), i.e., the aspect of the heterogeneity of CRN. To address this important aspect, in this paper, RA models that incorporate the most essential concepts of heterogeneity, as applicable to CRN, are developed and the imports of such inclusion in the overall networking are investigated. Furthermore, to fully explore the relevance and implications of the various heterogeneous classifications to the RA formulations, weights are attached to the different classes and their effects on the network performance are studied. In solving the developed complex RA problems for heterogeneous CRN, a solution approach that examines and exploits the structure of the problem in achieving a less-complex reformulation, is extensively employed. This approach, as the results presented show, makes it possible to obtain optimal solutions to the rather difficult RA problems of heterogeneous CRN.

  15. Spike-layer solutions to nonlinear fractional Schrodinger equations with almost optimal nonlinearities

    Directory of Open Access Journals (Sweden)

    Jinmyoung Seok

    2015-07-01

    Full Text Available In this article, we are interested in singularly perturbed nonlinear elliptic problems involving a fractional Laplacian. Under a class of nonlinearity which is believed to be almost optimal, we construct a positive solution which exhibits multiple spikes near any given local minimum components of an exterior potential of the problem.

  16. Closed-form solutions for linear regulator design of mechanical systems including optimal weighting matrix selection

    Science.gov (United States)

    Hanks, Brantley R.; Skelton, Robert E.

    1991-01-01

    Vibration in modern structural and mechanical systems can be reduced in amplitude by increasing stiffness, redistributing stiffness and mass, and/or adding damping if design techniques are available to do so. Linear Quadratic Regulator (LQR) theory in modern multivariable control design attacks the general dissipative elastic system design problem in a global formulation. The optimal design, however, allows electronic connections and phase relations which are not physically practical or possible in passive structural-mechanical devices. The restriction of LQR solutions (to the Algebraic Riccati Equation) to design spaces which can be implemented as passive structural members and/or dampers is addressed. A general closed-form solution to the optimal free-decay control problem is presented which is tailored for structural-mechanical systems. The solution includes, as subsets, special cases such as the Rayleigh Dissipation Function and total energy. Weighting matrix selection is a constrained choice among several parameters to obtain desired physical relationships. The closed-form solution is also applicable to active control design for systems where perfect, collocated actuator-sensor pairs exist.
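
    For context, the standard unconstrained LQR design that the record builds on solves the algebraic Riccati equation A^T P + P A - P B R^{-1} B^T P + Q = 0 for P and sets u = -R^{-1} B^T P x. The sketch below does this numerically for a toy single-mode mechanical system; the matrices are assumptions for illustration, not the paper's structural model or its passivity-constrained solution.

    ```python
    import numpy as np
    from scipy.linalg import solve_continuous_are

    # Toy single-mode mechanical system: x = [position, velocity]
    A = np.array([[0.0, 1.0],
                  [-4.0, -0.2]])     # stiffness/mass = 4, light damping
    B = np.array([[0.0],
                  [1.0]])
    Q = np.diag([10.0, 1.0])         # state weighting
    R = np.array([[0.1]])            # control weighting

    # Solve the algebraic Riccati equation A'P + PA - PB R^{-1} B'P + Q = 0
    P = solve_continuous_are(A, B, Q, R)
    K = np.linalg.solve(R, B.T @ P)  # optimal feedback gain, u = -K x

    print("LQR gain K:", K)
    print("closed-loop eigenvalues:", np.linalg.eigvals(A - B @ K))
    ```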

  17. Selective recovery of Pd(II) from extremely acidic solution using ion-imprinted chitosan fiber: Adsorption performance and mechanisms

    Energy Technology Data Exchange (ETDEWEB)

    Lin, Shuo [School of Environmental Science and Engineering, Huazhong University of Science and Technology, Wuhan 430074 (China); Wei, Wei [School of Chemical Engineering, Chonbuk National University, Jeonbuk 561-756 (Korea, Republic of); Wu, Xiaohui; Zhou, Tao [School of Environmental Science and Engineering, Huazhong University of Science and Technology, Wuhan 430074 (China); Mao, Juan, E-mail: monicamao45@hust.edu.cn [School of Environmental Science and Engineering, Huazhong University of Science and Technology, Wuhan 430074 (China); Yun, Yeoung-Sang, E-mail: ysyun@jbnu.ac.kr [School of Chemical Engineering, Chonbuk National University, Jeonbuk 561-756 (Korea, Republic of)

    2015-12-15

    Highlights: • An acid-resisting chitosan fiber was prepared by the ion-imprinting technique. • Pd(II) and ECH were used as the template and two-step crosslinking agent, respectively. • IIF showed good adsorption and selectivity performance on Pd(II) solutions. • Selectivity was due to the electrostatic attraction between −NH3+ and [PdCl4]2−. • Stable sorption/desorption performance shows potential for further application. - Abstract: A novel, selective and acid-resisting chitosan fiber adsorbent was prepared by the ion-imprinting technique using Pd(II) and epichlorohydrin as the template and two-step crosslinking agent, respectively. The resulting ion-imprinted chitosan fibers (IIF) were used to selectively adsorb Pd(II) from extremely acidic synthetic metal solutions. The adsorption and selectivity performances of IIF, including kinetics, isotherms, pH effects, and regeneration, were investigated. Pd(II) adsorbed rapidly on the IIF, reaching adsorption equilibrium within 100 min. The isotherm results showed that the maximum Pd(II) uptake on the IIF was maintained at 324.6–326.4 mg g−1 in solutions containing single and multiple metals, whereas the Pd(II) uptake on non-imprinted fibers (NIF) decreased from 313.7 to 235.3 mg g−1 in the solution containing multiple metals. Higher selectivity coefficient values were obtained from the adsorption on the IIF, indicating a better Pd(II) selectivity. The amine group, supposedly the predominant adsorption site for Pd(II), was confirmed by Fourier transform infrared spectroscopy and X-ray photoelectron spectroscopy. The pH value played a significant role in the mechanism of the selective adsorption under the extremely acidic conditions. Furthermore, the stable performance over three cycles of sorption/desorption shows potential for further large-scale applications.

  18. Selective recovery of Pd(II) from extremely acidic solution using ion-imprinted chitosan fiber: Adsorption performance and mechanisms

    International Nuclear Information System (INIS)

    Lin, Shuo; Wei, Wei; Wu, Xiaohui; Zhou, Tao; Mao, Juan; Yun, Yeoung-Sang

    2015-01-01

    Highlights: • An acid-resisting chitosan fiber was prepared by the ion-imprinting technique. • Pd(II) and ECH were used as the template and two-step crosslinking agent, respectively. • IIF showed good adsorption and selectivity performance on Pd(II) solutions. • Selectivity was due to the electrostatic attraction between −NH3+ and [PdCl4]2−. • Stable sorption/desorption performance shows potential for further application. - Abstract: A novel, selective and acid-resisting chitosan fiber adsorbent was prepared by the ion-imprinting technique using Pd(II) and epichlorohydrin as the template and two-step crosslinking agent, respectively. The resulting ion-imprinted chitosan fibers (IIF) were used to selectively adsorb Pd(II) from extremely acidic synthetic metal solutions. The adsorption and selectivity performances of IIF, including kinetics, isotherms, pH effects, and regeneration, were investigated. Pd(II) adsorbed rapidly on the IIF, reaching adsorption equilibrium within 100 min. The isotherm results showed that the maximum Pd(II) uptake on the IIF was maintained at 324.6–326.4 mg g−1 in solutions containing single and multiple metals, whereas the Pd(II) uptake on non-imprinted fibers (NIF) decreased from 313.7 to 235.3 mg g−1 in the solution containing multiple metals. Higher selectivity coefficient values were obtained from the adsorption on the IIF, indicating a better Pd(II) selectivity. The amine group, supposedly the predominant adsorption site for Pd(II), was confirmed by Fourier transform infrared spectroscopy and X-ray photoelectron spectroscopy. The pH value played a significant role in the mechanism of the selective adsorption under the extremely acidic conditions. Furthermore, the stable performance over three cycles of sorption/desorption shows potential for further large-scale applications.

  19. Optimization and analysis of large chemical kinetic mechanisms using the solution mapping method - Combustion of methane

    Science.gov (United States)

    Frenklach, Michael; Wang, Hai; Rabinowitz, Martin J.

    1992-01-01

    A method of systematic optimization, solution mapping, as applied to a large-scale dynamic model is presented. The basis of the technique is parameterization of model responses in terms of model parameters by simple algebraic expressions. These expressions are obtained by computer experiments arranged in a factorial design. The developed parameterized responses are then used in a joint multiparameter multidata-set optimization. A brief review of the mathematical background of the technique is given. The concept of active parameters is discussed. The technique is applied to determine an optimum set of parameters for a methane combustion mechanism. Five independent responses - comprising ignition delay times, pre-ignition methyl radical concentration profiles, and laminar premixed flame velocities - were optimized with respect to thirteen reaction rate parameters. The numerical predictions of the optimized model are compared to those computed with several recent literature mechanisms. The utility of the solution mapping technique in situations where the optimum is not unique is also demonstrated.
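
    To make the solution-mapping idea concrete, the sketch below fits a quadratic response surface to a small factorial set of "computer experiments" and then optimizes on that surrogate. The toy expensive_model function and the two-parameter design are assumptions for illustration only; they stand in for, and are not, the methane mechanism or the responses used in the record.

```python
# Sketch of the solution-mapping idea: parameterize model responses by a simple
# algebraic (quadratic) surface fitted to factorial computer experiments, then
# optimize on the surrogate instead of the expensive model.
import itertools
import numpy as np
from scipy.optimize import minimize

def expensive_model(x):
    # Placeholder for an expensive kinetic simulation (assumed quadratic form).
    return (x[0] - 0.3) ** 2 + 2.0 * (x[1] + 0.5) ** 2 + 0.1 * x[0] * x[1]

# Three-level full factorial design in scaled parameters [-1, 1]^2 (9 runs)
design = np.array(list(itertools.product([-1.0, 0.0, 1.0], repeat=2)))
responses = np.array([expensive_model(x) for x in design])

def quad_features(x):
    x1, x2 = x
    return np.array([1.0, x1, x2, x1 * x2, x1 ** 2, x2 ** 2])

# Fit the algebraic response surface (the "solution map") by least squares
X = np.vstack([quad_features(x) for x in design])
coeffs, *_ = np.linalg.lstsq(X, responses, rcond=None)

surrogate = lambda x: quad_features(x) @ coeffs
opt = minimize(surrogate, x0=np.zeros(2), bounds=[(-1, 1), (-1, 1)])
print("optimum on the response surface:", opt.x)
print("surrogate value:", opt.fun, " true model value:", expensive_model(opt.x))
```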

  20. Multiswarm comprehensive learning particle swarm optimization for solving multiobjective optimization problems.

    Science.gov (United States)

    Yu, Xiang; Zhang, Xueqing

    2017-01-01

    Comprehensive learning particle swarm optimization (CLPSO) is a powerful state-of-the-art single-objective metaheuristic. Extending from CLPSO, this paper proposes multiswarm CLPSO (MSCLPSO) for multiobjective optimization. MSCLPSO involves multiple swarms, with each swarm associated with a separate original objective. Each particle's personal best position is determined just according to the corresponding single objective. Elitists are stored externally. MSCLPSO differs from existing multiobjective particle swarm optimizers in three aspects. First, each swarm focuses on optimizing the associated objective using CLPSO, without learning from the elitists or any other swarm. Second, mutation is applied to the elitists and the mutation strategy appropriately exploits the personal best positions and elitists. Third, a modified differential evolution (DE) strategy is applied to some extreme and least crowded elitists. The DE strategy updates an elitist based on the differences of the elitists. The personal best positions carry useful information about the Pareto set, and the mutation and DE strategies help MSCLPSO discover the true Pareto front. Experiments conducted on various benchmark problems demonstrate that MSCLPSO can find nondominated solutions distributed reasonably over the true Pareto front in a single run.

  1. Topology optimization of Channel flow problems

    DEFF Research Database (Denmark)

    Gersborg-Hansen, Allan; Sigmund, Ole; Haber, R. B.

    2005-01-01

    This paper describes a topology design method for simple two-dimensional flow problems. We consider steady, incompressible laminar viscous flows at low to moderate Reynolds numbers. This makes the flow problem non-linear and hence a non-trivial extension of the work of [Borrvall & Petersson 2002]. Further, the inclusion of inertia effects significantly alters the physics, enabling solutions of new classes of optimization problems, such as velocity-driven switches, that are not addressed by the earlier method. Specifically, we determine optimal layouts of channel flows that extremize a cost function which measures either some local aspect of the velocity field or a global quantity, such as the rate of energy dissipation. We use the finite element method to model the flow, and we solve the optimization problem with a gradient-based math-programming algorithm that is driven by analytical...

  2. Limitations and pitfalls of climate change impact analysis on urban rainfall extremes

    DEFF Research Database (Denmark)

    Willems, P.; Olsson, J.; Arnbjerg-Nielsen, Karsten

    Under the umbrella of the IWA/IAHR Joint Committee on Urban Drainage, the International Working Group on Urban Rainfall (IGUR) has reviewed existing methodologies for the analysis of long-term historical and future trends in urban rainfall extremes and their effects on urban drainage systems, due to anthropogenic climate change. Current practices have several limitations and pitfalls, which are important to be considered by trend or climate change impact modellers and users of trend/impact results. Climate change may well be the driver that ensures that changes in urban drainage paradigms are identified and suitable solutions implemented. Design and optimization of urban drainage infrastructure considering climate change impacts and co-optimizing with other objectives will become ever more important to keep our cities liveable into the future.

  3. Optimal power flow: a bibliographic survey I. Formulations and deterministic methods

    Energy Technology Data Exchange (ETDEWEB)

    Frank, Stephen [Colorado School of Mines, Department of Electrical Engineering and Computer Science, Golden, CO (United States); Steponavice, Ingrida [University of Jyvaskyla, Department of Mathematical Information Technology, Agora (Finland); Rebennack, Steffen [Colorado School of Mines, Division of Economics and Business, Golden, CO (United States)

    2012-09-15

    Over the past half-century, optimal power flow (OPF) has become one of the most important and widely studied nonlinear optimization problems. In general, OPF seeks to optimize the operation of electric power generation, transmission, and distribution networks subject to system constraints and control limits. Within this framework, however, there is an extremely wide variety of OPF formulations and solution methods. Moreover, the nature of OPF continues to evolve due to modern electricity markets and renewable resource integration. In this two-part survey, we survey both the classical and recent OPF literature in order to provide a sound context for the state of the art in OPF formulation and solution methods. The survey contributes a comprehensive discussion of specific optimization techniques that have been applied to OPF, with an emphasis on the advantages, disadvantages, and computational characteristics of each. Part I of the survey (this article) provides an introduction and surveys the deterministic optimization methods that have been applied to OPF. Part II of the survey examines the recent trend towards stochastic, or non-deterministic, search techniques and hybrid methods for OPF. (orig.)

  4. Oil Reservoir Production Optimization using Optimal Control

    DEFF Research Database (Denmark)

    Völcker, Carsten; Jørgensen, John Bagterp; Stenby, Erling Halfdan

    2011-01-01

    Practical oil reservoir management involves the solution of large-scale constrained optimal control problems. In this paper we present a numerical method for such problems. The method is a single-shooting method that computes the gradients using the adjoint equations. The method is demonstrated on an oil reservoir using water flooding and smart well technology. Compared to the uncontrolled case, the optimal operation increases the Net Present Value of the oil field by 10%.

  5. Optimization Solutions for Improving the Performance of the Parallel Reduction Algorithm Using Graphics Processing Units

    Directory of Open Access Journals (Sweden)

    Ion LUNGU

    2012-01-01

    Full Text Available In this paper, we research, analyze and develop optimization solutions for the parallel reduction function using graphics processing units (GPUs) that implement the Compute Unified Device Architecture (CUDA), a modern and novel approach for improving the software performance of data processing applications and algorithms. Many of these applications and algorithms make use of the reduction function in their computational steps. After having designed the function and its algorithmic steps in CUDA, we have progressively developed and implemented optimization solutions for the reduction function. In order to confirm, test and evaluate the solutions' efficiency, we have developed a custom-tailored benchmark suite. We have analyzed the obtained experimental results regarding: the comparison of the execution time and bandwidth when using graphics processing units covering the main CUDA architectures (Tesla GT200, Fermi GF100, Kepler GK104) and a central processing unit; the influence of the data type; and the influence of the binary operator.

  6. Adaptive surrogate model based multiobjective optimization for coastal aquifer management

    Science.gov (United States)

    Song, Jian; Yang, Yun; Wu, Jianfeng; Wu, Jichun; Sun, Xiaomin; Lin, Jin

    2018-06-01

    In this study, a novel surrogate model assisted multiobjective memetic algorithm (SMOMA) is developed for optimal pumping strategies of large-scale coastal groundwater problems. The proposed SMOMA integrates an efficient data-driven surrogate model with an improved non-dominated sorting genetic algorithm-II (NSGAII) that employs a local search operator to accelerate its convergence in optimization. The surrogate model, based on a Kernel Extreme Learning Machine (KELM), is developed and evaluated as an approximate simulator to generate the patterns of regional groundwater flow and salinity levels in coastal aquifers, thereby reducing the huge computational burden. The KELM model is adaptively trained during the evolutionary search to satisfy the desired surrogate fidelity level, so that it inhibits error accumulation in forecasting and converges correctly to the true Pareto-optimal front. The proposed methodology is then applied to a large-scale coastal aquifer management problem in Baldwin County, Alabama. Objectives of minimizing the saltwater mass increase and maximizing the total pumping rate in the coastal aquifers are considered. The optimal solutions achieved by the proposed adaptive surrogate model are compared against those obtained from a one-shot surrogate model and from the original simulation model. The adaptive surrogate model not only improves the prediction accuracy of Pareto-optimal solutions compared with the one-shot surrogate model, but also maintains quality equivalent to the Pareto-optimal solutions obtained by NSGAII coupled with the original simulation model, while retaining the advantage of surrogate models in reducing the computational burden, with time savings of up to 94%. This study shows that the proposed methodology is a computationally efficient and promising tool for multiobjective optimization of coastal aquifer management.
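
    A Kernel Extreme Learning Machine surrogate of the kind mentioned above can be reduced to a ridge-regularized kernel regression whose output weights solve (I/C + K) beta = y. The sketch below implements that step; the RBF kernel, the toy "simulation" response and the hyperparameters are illustrative assumptions, not the groundwater model or settings of the study.

```python
# Minimal Kernel Extreme Learning Machine (KELM) regression sketch of the kind
# used as a surrogate above. Training data and hyperparameters are illustrative.
import numpy as np

def rbf_kernel(A, B, gamma=1.0):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

class KELM:
    def __init__(self, C=100.0, gamma=1.0):
        self.C, self.gamma = C, gamma

    def fit(self, X, y):
        self.X = X
        K = rbf_kernel(X, X, self.gamma)
        # beta = (I/C + K)^-1 y  -- the standard KELM output-weight solution
        self.beta = np.linalg.solve(np.eye(len(X)) / self.C + K, y)
        return self

    def predict(self, Xnew):
        return rbf_kernel(Xnew, self.X, self.gamma) @ self.beta

# Toy "simulation" response standing in for an expensive model output.
rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(50, 2))          # e.g. two pumping rates (assumed)
y = np.sin(3 * X[:, 0]) + X[:, 1] ** 2       # assumed response

model = KELM(C=100.0, gamma=2.0).fit(X, y)
X_test = rng.uniform(0, 1, size=(5, 2))
print(np.c_[model.predict(X_test), np.sin(3 * X_test[:, 0]) + X_test[:, 1] ** 2])
```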

  7. Shape optimization in 2D contact problems with given friction and a solution-dependent coefficient of friction

    Czech Academy of Sciences Publication Activity Database

    Haslinger, J.; Outrata, Jiří; Pathó, R.

    2012-01-01

    Roč. 20, č. 1 (2012), s. 31-59 ISSN 1877-0533 R&D Projects: GA AV ČR IAA100750802 Institutional research plan: CEZ:AV0Z10750506 Institutional support: RVO:67985556 Keywords: shape optimization * Signorini problem * model with given friction * solution-dependent coefficient of friction * mathematical programs with equilibrium constraints Subject RIV: BA - General Mathematics Impact factor: 1.036, year: 2012 http://library.utia.cas.cz/separaty/2012/MTR/outrata-shape optimization in 2d contact problems with given friction and a solution-dependent coefficient of friction .pdf

  8. Optimization and photomodification of extremely broadband optical response of plasmonic core-shell obscurants.

    Science.gov (United States)

    de Silva, Vashista C; Nyga, Piotr; Drachev, Vladimir P

    2016-12-15

    Plasmonic resonances of metallic shells depend on their nanostructure and on the geometry of the core, which can be optimized for broadband extinction normalized by mass. Fractal nanostructures can provide a broadband extinction. They also allow laser photoburning of holes in the extinction spectra, and consequently windows of transparency, in a controlled manner. The studied core-shell microparticles, synthesized using colloidal chemistry, consist of gold fractal nanostructures grown on precipitated calcium carbonate (PCC) microparticles or silica (SiO2) microspheres. The optimization includes different core sizes and shapes, and shell nanostructures. It shows that the rich surface of the PCC flakes is the best core for the fractal shells, providing the highest mass-normalized extinction over the extremely broad spectral range. A mass-normalized extinction cross section of up to 3 m2/g has been demonstrated over the broad spectral range from the visible to the mid-infrared. Essentially, the broadband response is a characteristic feature of each core-shell microparticle, in contrast to a combination of several structures resonant at different wavelengths, for example nanorods with different aspect ratios. Photomodification at an IR wavelength creates the window of transparency on the longer-wavelength side. Copyright © 2016 Elsevier Inc. All rights reserved.

  9. Extremal graph theory

    CERN Document Server

    Bollobas, Bela

    2004-01-01

    The ever-expanding field of extremal graph theory encompasses a diverse array of problem-solving methods, including applications to economics, computer science, and optimization theory. This volume, based on a series of lectures delivered to graduate students at the University of Cambridge, presents a concise yet comprehensive treatment of extremal graph theory. Unlike most graph theory treatises, this text features complete proofs for almost all of its results. Further insights into theory are provided by the numerous exercises of varying degrees of difficulty that accompany each chapter.

  10. An Efficacious Multi-Objective Fuzzy Linear Programming Approach for Optimal Power Flow Considering Distributed Generation.

    Science.gov (United States)

    Warid, Warid; Hizam, Hashim; Mariun, Norman; Abdul-Wahab, Noor Izzri

    2016-01-01

    This paper proposes a new formulation for the multi-objective optimal power flow (MOOPF) problem for meshed power networks considering distributed generation. An efficacious multi-objective fuzzy linear programming optimization (MFLP) algorithm is proposed to solve the aforementioned problem with and without considering the distributed generation (DG) effect. A combination of objectives is considered for simultaneous optimization, including power loss, voltage stability, and shunt capacitor MVAR reserve. Fuzzy membership functions for these objectives are designed with extreme targets, whereas the inequality constraints are treated as hard constraints. The multi-objective fuzzy optimal power flow (OPF) formulation was converted into a crisp OPF in a successive linear programming (SLP) framework and solved using an efficient interior point method (IPM). To test the efficacy of the proposed approach, simulations are performed on the IEEE 30-bus and IEEE 118-bus test systems. The MFLP optimization is solved for several optimization cases. The obtained results are compared with those presented in the literature. A unique solution with high satisfaction of the assigned targets is gained. Results demonstrate the effectiveness of the proposed MFLP technique in terms of solution optimality and rapid convergence. Moreover, the results indicate that using the optimal DG location with the MFLP algorithm provides the solution with the highest quality.
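
    The max-min fuzzy linear programming step described above can be illustrated on a small toy problem: linear membership functions are written for two conflicting linear objectives and the common satisfaction level lambda is maximized as a crisp LP. The objectives, aspiration levels and constraints below are invented for illustration and are not a power-flow model.

```python
# Max-min fuzzy linear programming sketch: maximise the common satisfaction level
# lambda subject to linear membership functions of two toy conflicting objectives.
import numpy as np
from scipy.optimize import linprog

# Two linear objectives f1 = c1.x, f2 = c2.x to be minimised,
# subject to 3 <= x1 + x2 <= 4 and x >= 0 (all values assumed).
c1 = np.array([1.0, 2.0])
c2 = np.array([3.0, 1.0])
f_best = np.array([0.0, 0.0])      # fully satisfying objective values (assumed)
f_worst = np.array([8.0, 12.0])    # unacceptable objective values (assumed)

# Membership mu_i(x) = (f_worst_i - c_i.x)/(f_worst_i - f_best_i) >= lambda
# Decision vector z = [x1, x2, lambda]; maximise lambda -> minimise -lambda.
A_ub, b_ub = [], []
for ci, fw, fb in zip([c1, c2], f_worst, f_best):
    A_ub.append(np.append(ci, fw - fb))   # c_i.x + lambda*(fw - fb) <= fw
    b_ub.append(fw)
A_ub.append([1.0, 1.0, 0.0]); b_ub.append(4.0)     # x1 + x2 <= 4
A_ub.append([-1.0, -1.0, 0.0]); b_ub.append(-3.0)  # x1 + x2 >= 3

res = linprog(c=[0.0, 0.0, -1.0], A_ub=np.array(A_ub), b_ub=np.array(b_ub),
              bounds=[(0, None), (0, None), (0, 1)])
x_opt, lam = res.x[:2], res.x[2]
print("x =", np.round(x_opt, 3), " satisfaction level lambda =", round(lam, 3))
```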

  11. The Primary Experiments of an Analysis of Pareto Solutions for Conceptual Design Optimization Problem of Hybrid Rocket Engine

    Science.gov (United States)

    Kudo, Fumiya; Yoshikawa, Tomohiro; Furuhashi, Takeshi

    Recently, the Multi-objective Genetic Algorithm, i.e., the application of Genetic Algorithms to multi-objective optimization problems, has attracted attention in the engineering design field. In this field, the analysis of the design variables of the acquired Pareto solutions, which gives designers useful knowledge about the problem at hand, is as important as the acquisition of advanced solutions. This paper proposes a new visualization method using Isomap that visualizes the geometric distances of solutions in the design variable space while considering their distances in the objective space. The proposed method enables a user to analyze the design variables of the acquired solutions in light of their relationships in the objective space. This paper applies the proposed method to the conceptual design optimization problem of a hybrid rocket engine and studies the effectiveness of the proposed method.
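
    A minimal version of this kind of visualization can be sketched with scikit-learn's Isomap: embed the design variables of the Pareto solutions in two dimensions and color the points by an objective value. The synthetic Pareto data below are assumptions, and the paper's combined design/objective-space distance weighting is not reproduced.

```python
# Sketch: embed Pareto-set design variables with Isomap and colour each embedded
# point by one objective value. The "Pareto" data are synthetic stand-ins.
import numpy as np
import matplotlib.pyplot as plt
from sklearn.manifold import Isomap

rng = np.random.default_rng(1)
t = np.sort(rng.uniform(0, 1, 100))             # position along a synthetic front
X_design = np.c_[t, np.sqrt(t), 1 - t, rng.normal(0, 0.01, 100)]  # design variables
f1, f2 = t, 1 - np.sqrt(t)                      # two conflicting objectives

embedding = Isomap(n_neighbors=8, n_components=2).fit_transform(X_design)

plt.scatter(embedding[:, 0], embedding[:, 1], c=f1, cmap="viridis")
plt.colorbar(label="objective f1")
plt.xlabel("Isomap dim 1"); plt.ylabel("Isomap dim 2")
plt.title("Design-variable structure of Pareto solutions")
plt.show()
```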

  12. Micro-scale NMR Experiments for Monitoring the Optimization of Membrane Protein Solutions for Structural Biology.

    Science.gov (United States)

    Horst, Reto; Wüthrich, Kurt

    2015-07-20

    Reconstitution of integral membrane proteins (IMP) in aqueous solutions of detergent micelles has been extensively used in structural biology, using either X-ray crystallography or NMR in solution. Further progress could be achieved by establishing a rational basis for the selection of detergent and buffer conditions, since the stringent bottleneck that slows down the structural biology of IMPs is the preparation of diffracting crystals or concentrated solutions of stable isotope labeled IMPs. Here, we describe procedures to monitor the quality of aqueous solutions of [2H,15N]-labeled IMPs reconstituted in detergent micelles. This approach has been developed for studies of β-barrel IMPs, where it was successfully applied for numerous NMR structure determinations, and it has also been adapted for use with α-helical IMPs, in particular GPCRs, in guiding crystallization trials and optimizing samples for NMR studies (Horst et al., 2013). 2D [15N,1H]-correlation maps are used as "fingerprints" to assess the foldedness of the IMP in solution. For promising samples, these "inexpensive" data are then supplemented with measurements of the translational and rotational diffusion coefficients, which give information on the shape and size of the IMP/detergent mixed micelles. Using microcoil equipment for these NMR experiments enables data collection with only micrograms of protein and detergent. This makes serial screens of variable solution conditions viable, enabling the optimization of parameters such as the detergent concentration, sample temperature, pH and the composition of the buffer.

  13. Irrigation solutions in open fractures of the lower extremities: evaluation of isotonic saline and distilled water.

    Science.gov (United States)

    Olufemi, Olukemi Temiloluwa; Adeyeye, Adeolu Ikechukwu

    2017-01-01

    Open fractures are widely considered as orthopaedic emergencies requiring immediate intervention. The initial management of these injuries usually affects the ultimate outcome because open fractures may be associated with significant morbidity. Wound irrigation forms one of the pivotal principles in the treatment of open fractures. The choice of irrigation fluid has since been a source of debate. This study aimed to evaluate and compare the effects of isotonic saline and distilled water as irrigation solutions in the management of open fractures of the lower extremities. Wound infection and wound healing rates using both solutions were evaluated. This was a prospective hospital-based study of 109 patients who presented to the Accident and Emergency department with open lower limb fractures. Approval was sought and obtained from the Ethics Committee of the Hospital. Patients were randomized into either the isotonic saline (NS) or the distilled water (DW) group using a simple ballot technique. Twelve patients were lost to follow-up, while 97 patients were available until conclusion of the study. There were 50 patients in the isotonic saline group and 47 patients in the distilled water group. Forty-one (42.3%) of the patients were in the young and economically productive strata of the population. There was a male preponderance with a 1.7:1 male-to-female ratio. The wound infection rate was 34% in the distilled water group and 44% in the isotonic saline group (p = 0.315). The mean time ± SD to wound healing was 2.7 ± 1.5 weeks in the distilled water group and 3.1 ± 1.8 weeks in the isotonic saline group (p = 0.389). It was concluded from this study that the use of distilled water compares favourably with isotonic saline as an irrigation solution in open fractures of the lower extremities. © The Authors, published by EDP Sciences, 2017.

  14. Irrigation solutions in open fractures of the lower extremities: evaluation of isotonic saline and distilled water

    Directory of Open Access Journals (Sweden)

    Olufemi Olukemi Temiloluwa

    2017-01-01

    Full Text Available Introduction: Open fractures are widely considered as orthopaedic emergencies requiring immediate intervention. The initial management of these injuries usually affects the ultimate outcome because open fractures may be associated with significant morbidity. Wound irrigation forms one of the pivotal principles in the treatment of open fractures. The choice of irrigation fluid has since been a source of debate. This study aimed to evaluate and compare the effects of isotonic saline and distilled water as irrigation solutions in the management of open fractures of the lower extremities. Wound infection and wound healing rates using both solutions were evaluated. Methods: This was a prospective hospital-based study of 109 patients who presented to the Accident and Emergency department with open lower limb fractures. Approval was sought and obtained from the Ethics Committee of the Hospital. Patients were randomized into either the isotonic saline (NS) or the distilled water (DW) group using a simple ballot technique. Twelve patients were lost to follow-up, while 97 patients were available until conclusion of the study. There were 50 patients in the isotonic saline group and 47 patients in the distilled water group. Results: Forty-one (42.3%) of the patients were in the young and economically productive strata of the population. There was a male preponderance with a 1.7:1 male-to-female ratio. The wound infection rate was 34% in the distilled water group and 44% in the isotonic saline group (p = 0.315). The mean time ± SD to wound healing was 2.7 ± 1.5 weeks in the distilled water group and 3.1 ± 1.8 weeks in the isotonic saline group (p = 0.389). Conclusions: It was concluded from this study that the use of distilled water compares favourably with isotonic saline as an irrigation solution in open fractures of the lower extremities.

  15. Optimal power flow: a bibliographic survey II. Non-deterministic and hybrid methods

    Energy Technology Data Exchange (ETDEWEB)

    Frank, Stephen [Colorado School of Mines, Department of Electrical Engineering and Computer Science, Golden, CO (United States); Steponavice, Ingrida [Univ. of Jyvaskyla, Dept. of Mathematical Information Technology, Agora (Finland); Rebennack, Steffen [Colorado School of Mines, Division of Economics and Business, Golden, CO (United States)

    2012-09-15

    Over the past half-century, optimal power flow (OPF) has become one of the most important and widely studied nonlinear optimization problems. In general, OPF seeks to optimize the operation of electric power generation, transmission, and distribution networks subject to system constraints and control limits. Within this framework, however, there is an extremely wide variety of OPF formulations and solution methods. Moreover, the nature of OPF continues to evolve due to modern electricity markets and renewable resource integration. In this two-part survey, we survey both the classical and recent OPF literature in order to provide a sound context for the state of the art in OPF formulation and solution methods. The survey contributes a comprehensive discussion of specific optimization techniques that have been applied to OPF, with an emphasis on the advantages, disadvantages, and computational characteristics of each. Part I of the survey provides an introduction and surveys the deterministic optimization methods that have been applied to OPF. Part II of the survey (this article) examines the recent trend towards stochastic, or non-deterministic, search techniques and hybrid methods for OPF. (orig.)

  16. Optimizing an Investment Solution in Conditions of Uncertainty and Risk as a Multicriterial Task

    Directory of Open Access Journals (Sweden)

    Kotsyuba Oleksiy S.

    2017-10-01

    Full Text Available The article is concerned with the methodology for optimizing investment decisions under uncertainty and risk. The subject area of the study relates primarily to real investment. The problem of modeling an optimal investment solution is considered as a multicriterial task. The constructive part of the publication is based on the position that the multicriteriality of investment-planning objectives results, first, from the complex nature of the category of economic attractiveness (efficiency) of real investment and, second, from the need to take the risk factor, which is a vector measure, into account when preparing an investment solution. An attempt has been made to develop an instrumentarium for optimizing investment decisions in a situation of uncertainty and the risk it engenders, based on the use of roll-up of local criteria. As a result, a model has been proposed whose advantage is that it takes into account, to a greater extent than standardized roll-up options, the substantive and formal features of the local (detailed) criteria.

  17. An Ad-Hoc Initial Solution Heuristic for Metaheuristic Optimization of Energy Market Participation Portfolios

    Directory of Open Access Journals (Sweden)

    Ricardo Faia

    2017-06-01

    Full Text Available The deregulation of the electricity sector has culminated in the introduction of competitive markets. In addition, the emergence of new forms of electric energy production, namely the production of renewable energy, has brought additional changes in electricity market operation. Renewable energy has significant advantages, but at the cost of an intermittent character. The generation variability adds new challenges for negotiating players, as they have to deal with a new level of uncertainty. In order to assist players in their decisions, decision support tools that aid players in their negotiations are crucial. Artificial intelligence techniques play an important role in this decision support, as they can provide valuable results in rather small execution times, namely for the problem of optimizing electricity market participation portfolios. This paper proposes a heuristic method that provides an initial solution which allows metaheuristic techniques to improve their results through a good initialization of the optimization process. Results show that, by using the proposed heuristic, multiple metaheuristic optimization methods are able to improve their solutions within a faster execution time, thus providing a valuable contribution to player support in energy market negotiations.

  18. Approximate ideal multi-objective solution Q(λ) learning for optimal carbon-energy combined-flow in multi-energy power systems

    International Nuclear Information System (INIS)

    Zhang, Xiaoshun; Yu, Tao; Yang, Bo; Zheng, Limin; Huang, Linni

    2015-01-01

    Highlights: • A novel optimal carbon-energy combined-flow (OCECF) model is firstly established. • A novel approximate ideal multi-objective solution Q(λ) learning is designed. • The proposed algorithm has a high convergence stability and reliability. • The proposed algorithm can be applied for OCECF in a large-scale power grid. - Abstract: This paper proposes a novel approximate ideal multi-objective solution Q(λ) learning for optimal carbon-energy combined-flow in multi-energy power systems. The carbon emissions, fuel cost, active power loss, voltage deviation and carbon emission loss are chosen as the optimization objectives, which are simultaneously optimized by five different Q-value matrices. The dynamic optimal weight of each objective is calculated online from the entire Q-value matrices such that the greedy action policy can be obtained. Case studies are carried out to evaluate the optimization performance for carbon-energy combined-flow in an IEEE 118-bus system and the regional power grid of southern China.

  19. Optimization of soymilk, mango nectar and sucrose solution mixes for better quality of soymilk based beverage.

    Science.gov (United States)

    Getu, Rahel; Tola, Yetenayet B; Neela, Satheesh

    2017-01-01

    Soy milk-based beverages play an important role as a healthy food alternative for human consumption. However, the 'beany' flavor and chalky mouthfeel of soy milk often make it unpalatable to consumers. The objective of the present study was to optimize a blend of soy milk, mango nectar and sucrose solution to obtain a soy milk-based beverage with the best physicochemical and sensory properties. Fourteen formulations were determined by a D-optimal mixture simplex lattice design using Design-Expert. The blended beverages were prepared by mixing the three basic ingredients in the ranges of 60–100% soy milk, 0–25% mango nectar and 0–15% sucrose solution. The prepared blended beverages were analyzed for selected physicochemical and sensory properties. The statistical significance of the terms in the regression equations was examined by Analysis of Variance (ANOVA) for each response, and the significance level was set at 5% (p < 0.05). As the proportions of mango nectar and sucrose solution increased, the total color change, total soluble solids, gross energy, titratable acidity, and beta-carotene contents increased, while a decrease in moisture, ash, protein, ether extract, mineral and phytic acid contents was observed. Finally, numerical optimization determined that 81% soy milk, 16% mango nectar and 3% sucrose solution give a soy milk blended beverage with the best physicochemical and sensory properties, with a desirability of 0.564. Blending soy milk with fruit juice such as mango is beneficial, as it improves sensory as well as selected nutritional parameters.

  20. A Decomposition Model for HPLC-DAD Data Set and Its Solution by Particle Swarm Optimization

    Directory of Open Access Journals (Sweden)

    Lizhi Cui

    2014-01-01

    Full Text Available This paper proposes a separation method, based on the model of Generalized Reference Curve Measurement and the algorithm of Particle Swarm Optimization (GRCM-PSO), for High Performance Liquid Chromatography with Diode Array Detection (HPLC-DAD) data sets. Firstly, initial parameters are generated to construct reference curves for the chromatogram peaks of the compounds based on their physical principles. Then, a Generalized Reference Curve Measurement (GRCM) model is designed to transform these parameters to scalar values, which indicate the fitness for all parameters. Thirdly, rough solutions are found by searching an individual target for every parameter, and reinitialization only around these rough solutions is executed. Then, the Particle Swarm Optimization (PSO) algorithm is adopted to obtain the optimal parameters by minimizing the fitness of these new parameters given by the GRCM model. Finally, spectra for the compounds are estimated based on the optimal parameters and the HPLC-DAD data set. Through simulations and experiments, the following conclusions are drawn: (1) the GRCM-PSO method can separate the chromatogram peaks and spectra from the HPLC-DAD data set without knowing the number of compounds in advance, even when severe overlap and white noise exist; (2) the GRCM-PSO method is able to handle real HPLC-DAD data sets.
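
    The PSO search underlying such an approach can be sketched independently of the GRCM model: the loop below fits the centres and widths of two Gaussian peaks to a synthetic, overlapped chromatogram by minimizing a squared-error fitness. The peak model, noise level and PSO coefficients are illustrative assumptions, not the paper's reference-curve model.

```python
# Generic particle swarm optimisation sketch fitting two Gaussian peak centres and
# widths to a synthetic, overlapped chromatogram (unit peak heights assumed).
import numpy as np

t = np.linspace(0, 10, 400)

def peaks(p):                          # p = [centre1, width1, centre2, width2]
    return np.exp(-((t - p[0]) / p[1]) ** 2) + np.exp(-((t - p[2]) / p[3]) ** 2)

rng = np.random.default_rng(2)
true_p = np.array([4.0, 0.6, 5.2, 0.8])
signal = peaks(true_p) + rng.normal(0, 0.01, t.size)   # noisy "measured" chromatogram

fitness = lambda p: np.sum((peaks(p) - signal) ** 2)

n_part, n_iter, dim = 30, 200, 4
lo, hi = np.array([1, 0.2, 1, 0.2]), np.array([9, 2, 9, 2])
pos = rng.uniform(lo, hi, (n_part, dim))
vel = np.zeros_like(pos)
pbest, pbest_val = pos.copy(), np.array([fitness(p) for p in pos])
gbest = pbest[pbest_val.argmin()].copy()

w, c1, c2 = 0.7, 1.5, 1.5                              # standard PSO coefficients
for _ in range(n_iter):
    r1, r2 = rng.random((n_part, dim)), rng.random((n_part, dim))
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, lo, hi)
    vals = np.array([fitness(p) for p in pos])
    improved = vals < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
    gbest = pbest[pbest_val.argmin()].copy()

# Note: the two peaks may be recovered in swapped order.
print("recovered peak parameters:", np.round(gbest, 3), " true:", true_p)
```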

  1. MATHEMATICAL SOLUTIONS FOR OPTIMAL DIMENSIONING OF NUMBER AND HEIGHTS OF SOME HYDROTECHNIQUE ON TORRENTIAL FORMATION

    Directory of Open Access Journals (Sweden)

    Nicolae Petrescu

    2010-01-01

    Full Text Available This paper develops a mathematical model for the optimal dimensioning of the number and heights of dams/thresholds on a torrential formation. During a downpour, the structures reduce the water flow rate, and the solid material deposited behind them creates a new, smaller valley slope that changes the torrential character the valley had before construction. The choice of a dam and its characteristic dimensions can be treated as an optimization problem, and the location of dams along the torrent is dictated by the capabilities of the foundation and restraint, so the chosen solutions have to comply with these sites. Finally, the choice of the optimal solution for limiting the torrential character is based on a calculation in which the number of thresholds/dams is a variable and their heights vary accordingly. The calculation method presented demonstrates the many opportunities that mathematics offers for solving technical problems of soil-erosion control, a topic of current relevance to environmental protection.

  2. Optimal solutions for a bio mathematical model for the evolution of smoking habit

    Science.gov (United States)

    Sikander, Waseem; Khan, Umar; Ahmed, Naveed; Mohyud-Din, Syed Tauseef

    In this study, we apply the Variation of Parameters Method (VPM) coupled with an auxiliary parameter to obtain approximate solutions of an epidemic model for the evolution of the smoking habit in a constant population. Convergence of the developed algorithm, namely VPM with an auxiliary parameter, is studied. Furthermore, a simple way of obtaining an optimal value of the auxiliary parameter, by minimizing the total residual error over the domain of the problem, is considered. Comparison of the obtained results with the standard VPM shows that the auxiliary parameter is very effective and reliable in controlling the convergence of the approximate solutions.

  3. A modified teaching–learning based optimization for multi-objective optimal power flow problem

    International Nuclear Information System (INIS)

    Shabanpour-Haghighi, Amin; Seifi, Ali Reza; Niknam, Taher

    2014-01-01

    Highlights: • A new modified teaching–learning based algorithm is proposed. • A self-adaptive wavelet mutation strategy is used to enhance the performance. • To avoid reaching a large repository size, a fuzzy clustering technique is used. • An efficient, smart population selection is utilized. • Simulations show the superiority of this algorithm compared with other algorithms. - Abstract: In this paper, a modified teaching–learning based optimization algorithm is analyzed to solve the multi-objective optimal power flow problem considering the total fuel cost and total emission of the units. The modified phase of the optimization algorithm utilizes a self-adapting wavelet mutation strategy. Moreover, a fuzzy clustering technique is proposed to avoid an extremely large repository size, together with a smart population selection for the next iteration. These techniques make the algorithm search a larger space to find the optimal solutions while the convergence speed remains good. The IEEE 30-bus and 57-bus systems are used to illustrate the performance of the proposed algorithm, and the results are compared with those in the literature. It is verified that the proposed approach has better performance than other techniques.
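
    For readers unfamiliar with the base algorithm, the sketch below shows the standard teacher and learner phases of teaching-learning based optimization on a toy sphere function; the self-adaptive wavelet mutation, fuzzy clustering and multi-objective repository of the modified algorithm are not included.

```python
# Basic teaching-learning-based optimisation (TLBO) sketch on a toy sphere function.
import numpy as np

def sphere(x):                       # stand-in single objective to minimise
    return np.sum(x ** 2, axis=-1)

rng = np.random.default_rng(3)
n_learners, dim, n_iter = 20, 5, 100
lo, hi = -5.0, 5.0
pop = rng.uniform(lo, hi, (n_learners, dim))

for _ in range(n_iter):
    fit = sphere(pop)
    teacher = pop[fit.argmin()]
    mean = pop.mean(axis=0)

    # Teacher phase: move learners towards the teacher, away from the class mean
    Tf = rng.integers(1, 3, (n_learners, 1))           # teaching factor in {1, 2}
    cand = np.clip(pop + rng.random((n_learners, dim)) * (teacher - Tf * mean), lo, hi)
    better = sphere(cand) < fit
    pop[better] = cand[better]

    # Learner phase: each learner interacts with a random partner
    fit = sphere(pop)
    partners = rng.permutation(n_learners)
    step = np.where((fit < fit[partners])[:, None],
                    pop - pop[partners], pop[partners] - pop)
    cand = np.clip(pop + rng.random((n_learners, dim)) * step, lo, hi)
    better = sphere(cand) < fit
    pop[better] = cand[better]

print("best solution:", pop[sphere(pop).argmin()])
print("best value   :", sphere(pop).min())
```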

  4. General solution of undersampling frequency conversion and its optimization for parallel photodisplacement imaging.

    Science.gov (United States)

    Nakata, Toshihiko; Ninomiya, Takanori

    2006-10-10

    A general solution of undersampling frequency conversion and its optimization for parallel photodisplacement imaging is presented. Phase-modulated heterodyne interference light generated by a linear region of periodic displacement is captured by a charge-coupled device image sensor, in which the interference light is sampled at a sampling rate lower than the Nyquist frequency. The frequencies of the components of the light, such as the sideband and carrier (which include photodisplacement and topography information, respectively), are downconverted and sampled simultaneously based on the integration and sampling effects of the sensor. A general solution of frequency and amplitude in this downconversion is derived by Fourier analysis of the sampling procedure. The optimal frequency condition for the heterodyne beat signal, modulation signal, and sensor gate pulse is derived such that undesirable components are eliminated and each information component is converted into an orthogonal function, allowing each to be discretely reproduced from the Fourier coefficients. The optimal frequency parameters that maximize the sideband-to-carrier amplitude ratio are determined, theoretically demonstrating its high selectivity over 80 dB. Preliminary experiments demonstrate that this technique is capable of simultaneous imaging of reflectivity, topography, and photodisplacement for the detection of subsurface lattice defects at a speed corresponding to an acquisition time of only 0.26 s per 256 x 256 pixel area.
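
    The downconversion described above relies on deliberate aliasing: a component at frequency f sampled at a rate f_s below the Nyquist requirement appears at |f − k·f_s| for the nearest integer k. The short sketch below checks this numerically for assumed, illustrative frequencies; the sensor's integration (gating) effect on amplitude is not modeled.

```python
# Numerical illustration of undersampling frequency conversion (aliasing).
import numpy as np

f_signal = 1030.0      # Hz, e.g. a sideband of the heterodyne beat (assumed)
f_sample = 100.0       # Hz, frame rate of the integrating image sensor (assumed)

# Predicted alias: distance to the nearest integer multiple of the sampling rate
k = round(f_signal / f_sample)
f_alias_predicted = abs(f_signal - k * f_sample)

# Verify by sampling a sinusoid slowly and locating the spectral peak
n = 2048
t = np.arange(n) / f_sample
x = np.cos(2 * np.pi * f_signal * t)
spectrum = np.abs(np.fft.rfft(x * np.hanning(n)))
freqs = np.fft.rfftfreq(n, d=1 / f_sample)
f_alias_measured = freqs[spectrum.argmax()]

print(f"predicted alias: {f_alias_predicted:.1f} Hz, "
      f"measured alias: {f_alias_measured:.1f} Hz")
```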

  5. Methanol Synthesis: Optimal Solution for a Better Efficiency of the Process

    Directory of Open Access Journals (Sweden)

    Grazia Leonzio

    2018-02-01

    Full Text Available In this research, an ANOVA analysis and a response surface methodology are applied to analyze the equilibrium of the methanol synthesis reaction from pure carbon dioxide and hydrogen. In the ANOVA analysis, the carbon monoxide composition in the feed, reaction temperature, recycle and water removal through a zeolite membrane are the analyzed factors. Carbon conversion, methanol yield, methanol productivity and methanol selectivity are the analyzed responses. Results show that the main factors have the same effect on the responses and that no common significant interaction is present. Carbon monoxide composition and water removal have a positive effect, while temperature and recycle have a negative effect on the system. From the central composite design, an optimal solution is found that overcomes the thermodynamic limit: the reactor works with a membrane at lower temperature, with a carbon monoxide composition in the feed equal to 10 mol % and without recycle. In these conditions, carbon conversion, methanol yield, methanol selectivity and methanol production are, respectively, higher than 60%, higher than 60%, between 90% and 95%, and higher than 0.15 mol/h for a feed flow rate of 1 mol/h. A comparison with a traditional reactor is also developed: the membrane reactor achieves a carbon conversion higher by 29% and a methanol yield higher by 34%. Future research should include an economic analysis of the optimal solution.

  6. Anti-predatory particle swarm optimization: Solution to nonconvex economic dispatch problems

    Energy Technology Data Exchange (ETDEWEB)

    Selvakumar, A. Immanuel [Department of Electrical and Electronics Engineering, Karunya Institute of Technology and Sciences, Coimbatore 641114, Tamilnadu (India); Thanushkodi, K. [Department of Electronics and Instrumentation Engineering, Government College of Technology, Coimbatore 641013, Tamilnadu (India)

    2008-01-15

    This paper proposes a new particle swarm optimization (PSO) strategy namely, anti-predatory particle swarm optimization (APSO) to solve nonconvex economic dispatch problems. In the classical PSO, the movement of a particle (bird) is governed by three behaviors: inertial, cognitive and social. The cognitive and social behaviors are the components of the foraging activity, which help the swarm of birds to locate food. Another activity that is observed in birds is the anti-predatory nature, which helps the swarm to escape from the predators. In this work, the anti-predatory activity is modeled and embedded in the classical PSO to form APSO. This inclusion enhances the exploration capability of the swarm. To validate the proposed APSO model, it is applied to two test systems having nonconvex solution spaces. Satisfactory results are obtained when compared with previous approaches. (author)

  7. Closed-form solutions for linear regulator-design of mechanical systems including optimal weighting matrix selection

    Science.gov (United States)

    Hanks, Brantley R.; Skelton, Robert E.

    1991-01-01

    This paper addresses the restriction of Linear Quadratic Regulator (LQR) solutions to the algebraic Riccati Equation to design spaces which can be implemented as passive structural members and/or dampers. A general closed-form solution to the optimal free-decay control problem is presented which is tailored for structural-mechanical systems. The solution includes, as subsets, special cases such as the Rayleigh Dissipation Function and total energy. Weighting matrix selection is a constrained choice among several parameters to obtain desired physical relationships. The closed-form solution is also applicable to active control design for systems where perfect, collocated actuator-sensor pairs exist. Some examples of simple spring mass systems are shown to illustrate key points.

  8. Optimal Analytical Solution for a Capacitive Wireless Power Transfer System with One Transmitter and Two Receivers

    Directory of Open Access Journals (Sweden)

    Ben Minnaert

    2017-09-01

    Full Text Available Wireless power transfer from one transmitter to multiple receivers through inductive coupling is slowly entering the market. However, for certain applications, capacitive wireless power transfer (CWPT) using electric coupling might be preferable. In this work, we determine closed-form expressions for a CWPT system with one transmitter and two receivers. We determine the optimal solution for two design requirements: (i) maximum power transfer, and (ii) maximum system efficiency. We derive the optimal loads and provide the analytical expressions for the efficiency and power. We show that the optimal load conductances for the maximum power configuration are always larger than for the maximum efficiency configuration. Furthermore, it is demonstrated that if the receivers are coupled, this can be compensated for by introducing susceptances that have the same value for both configurations. Finally, we numerically verify our results. We illustrate the similarities to the inductive wireless power transfer (IWPT) solution and find that the same, but dual, expressions apply.

  9. Enhancement of conversion efficiency of extreme ultraviolet radiation from a liquid aqueous solution microjet target by use of dual laser pulses

    Science.gov (United States)

    Higashiguchi, Takeshi; Dojyo, Naoto; Hamada, Masaya; Kawasaki, Keita; Sasaki, Wataru; Kubodera, Shoichi

    2006-03-01

    We demonstrated a debris-free, efficient laser-produced plasma extreme ultraviolet (EUV) source by use of a regenerative liquid microjet target containing tin dioxide (SnO2) nanoparticles. By using a solution with a low SnO2 concentration (6%) and dual laser pulses for plasma control, we observed an EUV conversion efficiency of 1.2% with undetectable debris.

  10. Analytic hierarchy process-based approach for selecting a Pareto-optimal solution of a multi-objective, multi-site supply-chain planning problem

    Science.gov (United States)

    Ayadi, Omar; Felfel, Houssem; Masmoudi, Faouzi

    2017-07-01

    The current manufacturing environment has changed from the traditional single plant to multi-site supply chains in which multiple plants serve customer demands. In this article, a tactical multi-objective, multi-period, multi-product, multi-site supply-chain planning problem is proposed. A corresponding optimization model aiming to simultaneously minimize the total cost, maximize product quality and maximize the customer demand satisfaction level is developed. The proposed solution approach yields a front of Pareto-optimal solutions that represents the trade-offs among the different objectives. Subsequently, the analytic hierarchy process method is applied to select the best Pareto-optimal solution according to the preferences of the decision maker. The robustness of the solutions and of the proposed approach is discussed based on a sensitivity analysis and an application to a real case from the textile and apparel industry.
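
    The AHP selection step can be sketched as follows: criteria weights are taken from the principal eigenvector of a pairwise-comparison matrix, a consistency ratio is checked, and the Pareto-optimal plans are ranked by their weighted normalized scores. The comparison matrix, criteria and plan scores below are invented for illustration, not data from the case study.

```python
# Sketch of AHP-based selection among Pareto-optimal solutions.
import numpy as np

# Pairwise comparisons of three criteria: cost, quality, customer satisfaction.
# A[i, j] is the judged importance of criterion i over criterion j (Saaty scale).
A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])

eigvals, eigvecs = np.linalg.eig(A)
w = np.real(eigvecs[:, eigvals.real.argmax()])
w = w / w.sum()                                   # criteria weights

# Consistency ratio check (random index RI = 0.58 for a 3x3 matrix)
lam_max = eigvals.real.max()
CR = ((lam_max - 3) / (3 - 1)) / 0.58

# Three Pareto-optimal plans scored on [cost (lower is better), quality, satisfaction]
pareto = np.array([[100.0, 0.80, 0.70],
                   [120.0, 0.90, 0.75],
                   [150.0, 0.95, 0.90]])
scores = pareto.copy()
scores[:, 0] = pareto[:, 0].min() / pareto[:, 0]       # invert the cost criterion
scores[:, 1:] = pareto[:, 1:] / pareto[:, 1:].max(0)   # normalise benefit criteria

ranking = scores @ w
print("criteria weights:", np.round(w, 3), " consistency ratio:", round(CR, 3))
print("overall scores  :", np.round(ranking, 3), " best plan index:", ranking.argmax())
```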

  11. Root-induced Changes in the Rhizosphere of Extreme High Yield Tropical Rice: 2. Soil Solution Chemical Properties

    Directory of Open Access Journals (Sweden)

    Mitsuru Osaki

    2012-09-01

    Full Text Available Our previous studies showed that the extreme high-yield tropical rice (Padi Panjang) produced 3–8 t ha−1 without fertilizers. We also found that the rice yield did not correlate with some soil properties. We hypothesized that this may be due to the ability of the roots to affect soil properties in the root zone. Therefore, we studied the extent to which rice roots affect the chemical properties of the soil solution surrounding the root zone. A homemade rhizobox (14x10x12 cm) was used in this experiment. The rhizobox was vertically segmented at 2 cm intervals using nylon cloth that could be penetrated by neither roots nor mycorrhiza, while soil solution passed freely through the cloth. Three soils of different origins (Kuin, Bunipah and Guntung Papuyu) were used. The segment in the center was sown with 20 seeds of either the Padi Panjang or IR64 rice variety. After emergence, 10 seedlings were maintained for 5 weeks. At 4 weeks after sowing, some chemical properties of the soil solution were determined. These were the ammonium (NH4+), nitrate (NO3−), phosphorus (P) and iron (Fe2+) concentrations, as well as pH, electrical conductivity (EC) and oxidation-reduction potential (ORP). In general, the plant roots changed the solution chemical properties both inside and outside the rhizosphere. The patterns of change were affected by the properties of the soil origins. The release of exudates and the change in ORP may have been responsible for the changes in soil solution chemical properties.

  12. Approximate analytical solution of diffusion equation with fractional time derivative using optimal homotopy analysis method

    Directory of Open Access Journals (Sweden)

    S. Das

    2013-12-01

    Full Text Available In this article, the optimal homotopy analysis method is used to obtain an approximate analytic solution of the time-fractional diffusion equation with a given initial condition. The fractional derivatives are considered in the Caputo sense. Unlike the usual homotopy analysis method, this method contains at most three convergence-control parameters, which lead to faster convergence of the solution. The effects of the parameters on the convergence of the approximate series solution are calculated numerically by minimizing the averaged residual error with proper choices of the parameters, and are presented through graphs and tables for different particular cases.

  13. Design of Distributed Controllers Seeking Optimal Power Flow Solutions Under Communication Constraints

    Energy Technology Data Exchange (ETDEWEB)

    Dall' Anese, Emiliano; Simonetto, Andrea; Dhople, Sairaj

    2016-12-29

    This paper focuses on power distribution networks featuring inverter-interfaced distributed energy resources (DERs), and develops feedback controllers that drive the DER output powers to solutions of time-varying AC optimal power flow (OPF) problems. Control synthesis is grounded on primal-dual-type methods for regularized Lagrangian functions, as well as linear approximations of the AC power-flow equations. Convergence and OPF-solution-tracking capabilities are established while acknowledging: i) communication-packet losses, and ii) partial updates of control signals. The latter case is particularly relevant since it enables asynchronous operation of the controllers where DER setpoints are updated at a fast time scale based on local voltage measurements, and information on the network state is utilized if and when available, based on communication constraints. As an application, the paper considers distribution systems with high photovoltaic integration, and demonstrates that the proposed framework provides fast voltage-regulation capabilities, while enabling the near real-time pursuit of solutions of AC OPF problems.
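
    A stripped-down version of the primal-dual logic behind such controllers is sketched below: a primal gradient step on the Lagrangian alternates with a projected dual ascent step, driving a toy two-variable "setpoint" problem toward its constrained optimum. The cost, constraint and step size are assumptions; the AC power-flow linearization, measurements and communication model of the paper are omitted.

```python
# Minimal primal-dual gradient sketch: track the solution of min_p c(p) s.t. g(p) <= 0.
import numpy as np

def cost_grad(p, p_ref=np.array([0.8, 0.5])):
    return 2.0 * (p - p_ref)          # gradient of ||p - p_ref||^2 (assumed cost)

def constraint(p):
    return np.sum(p) - 1.0            # toy coupling constraint g(p) = p1 + p2 - 1 <= 0

def constraint_grad(p):
    return np.ones_like(p)

p = np.zeros(2)        # primal variable, e.g. two DER setpoints
mu = 0.0               # dual variable (Lagrange multiplier)
alpha = 0.05           # step size

for _ in range(500):
    # Primal descent on the Lagrangian, then projected dual ascent
    p = p - alpha * (cost_grad(p) + mu * constraint_grad(p))
    mu = max(0.0, mu + alpha * constraint(p))

print("setpoints:", np.round(p, 3), " multiplier:", round(mu, 3),
      " constraint value:", round(constraint(p), 4))
```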

  14. Determining the optimal system-specific cut-off frequencies for filtering in-vitro upper extremity impact force and acceleration data by residual analysis.

    Science.gov (United States)

    Burkhart, Timothy A; Dunning, Cynthia E; Andrews, David M

    2011-10-13

    The fundamental nature of impact testing requires a cautious approach to signal processing, to minimize noise while preserving important signal information. However, few recommendations exist regarding the most suitable filter frequency cut-offs to achieve these goals. Therefore, the purpose of this investigation is twofold: to illustrate how residual analysis can be utilized to quantify optimal system-specific filter cut-off frequencies for force, moment, and acceleration data resulting from in-vitro upper extremity impacts, and to show how optimal cut-off frequencies can vary based on impact condition intensity. Eight human cadaver radii specimens were impacted with a pneumatic impact testing device at impact energies that increased from 20J, in 10J increments, until fracture occurred. The optimal filter cut-off frequency for pre-fracture and fracture trials was determined with a residual analysis performed on all force and acceleration waveforms. Force and acceleration data were filtered with a dual pass, 4th order Butterworth filter at each of 14 different cut-off values ranging from 60Hz to 1500Hz. Mean (SD) pre-fracture and fracture optimal cut-off frequencies for the force variables were 605.8 (82.7)Hz and 513.9 (79.5)Hz, respectively. Differences in the optimal cut-off frequency were also found between signals (e.g. Fx (medial-lateral), Fy (superior-inferior), Fz (anterior-posterior)) within the same test. These optimal cut-off frequencies do not universally agree with the recommendations of filtering all upper extremity impact data using a cut-off frequency of 600Hz. This highlights the importance of quantifying the filter frequency cut-offs specific to the instrumentation and experimental set-up. Improper digital filtering may lead to erroneous results and a lack of standardized approaches makes it difficult to compare findings of in-vitro dynamic testing between laboratories. Copyright © 2011 Elsevier Ltd. All rights reserved.
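
    A residual analysis of the kind described can be sketched with SciPy: filter a noisy, impact-like signal with a dual-pass 4th-order Butterworth low-pass filter at a range of candidate cut-offs and inspect how the RMS residual between the raw and filtered signals varies with cut-off. The synthetic signal, sampling rate and noise level are assumptions, not cadaveric test data.

```python
# Residual analysis sketch for choosing a low-pass filter cut-off frequency.
import numpy as np
from scipy.signal import butter, filtfilt

fs = 10000.0                                   # sampling rate, Hz (assumed)
t = np.arange(0, 0.1, 1 / fs)
rng = np.random.default_rng(4)
signal = 800 * np.exp(-t / 0.01) * np.sin(2 * np.pi * 180 * t)   # impact-like pulse
noisy = signal + rng.normal(0, 15, t.size)                       # broadband noise

cutoffs = np.arange(60, 1501, 60)              # candidate cut-off frequencies, Hz
residuals = []
for fc in cutoffs:
    b, a = butter(4, fc / (fs / 2))            # 4th-order low-pass coefficients
    filtered = filtfilt(b, a, noisy)           # dual-pass (zero-lag) filtering
    residuals.append(np.sqrt(np.mean((noisy - filtered) ** 2)))

# The "optimal" cut-off is often read off where the residual curve flattens to the
# noise floor; here the curve is simply printed for inspection.
for fc, r in zip(cutoffs, residuals):
    print(f"fc = {fc:5.0f} Hz   RMS residual = {r:6.2f}")
```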

  15. Solving Optimization Problems via Vortex Optimization Algorithm and Cognitive Development Optimization Algorithm

    OpenAIRE

Ahmet Demir; Utku Kose

    2017-01-01

    In fields which require finding the most appropriate value, optimization has become a vital approach for obtaining effective solutions. With the use of optimization techniques, many different fields in modern life have found solutions to their real-world problems. In this context, classical optimization techniques have enjoyed considerable popularity. But after a while, more advanced optimization problems required the use of more effective techniques. At this point, Computer Science took an...

  16. Study of Research and Development Processes through Fuzzy Super FRM Model and Optimization Solutions

    Directory of Open Access Journals (Sweden)

    Flavius Aurelian Sârbu

    2015-01-01

Full Text Available The aim of this study is to measure resources for R&D (research and development) at the regional level in Romania and to obtain primary data that support sound decisions for increasing competitiveness and development based on economic knowledge. Our motivation is twofold: by using the Super Fuzzy FRM model we determine the state of R&D processes at the regional level by a means other than the statistical survey, while the two optimization methods provide optimization solutions for the R&D actions of enterprises. To fulfill this aim, this application-oriented paper uses a questionnaire and interprets the results with the Super Fuzzy FRM model, the main novelty of the paper, since this theory provides a formalism based on matrix calculus that allows large volumes of information to be processed and delivers results difficult or impossible to obtain through statistical processing alone. A further novelty of the paper is the set of optimization solutions given for the situation in which the sales price is variable and the quantity sold is constant in time, and for the reverse situation.

  17. Solution of the radiative enclosure with a hybrid inverse method

    Energy Technology Data Exchange (ETDEWEB)

    Silva, Rogerio Brittes da; Franca, Francis Henrique Ramos [Universidade Federal do Rio Grande do Sul (UFRGS), Porto Alegre, RS (Brazil). Dept. de Engenharia Mecanica], E-mail: frfranca@mecanica.ufrgs.br

    2010-07-01

This work applies inverse analysis to a three-dimensional radiative enclosure with diffuse-gray surfaces filled with a transparent medium. The aim is to determine the powers and locations of the heaters that attain both uniform heat flux and uniform temperature on the design surface. A hybrid solution that couples two methods, generalized extremal optimization (GEO) and truncated singular value decomposition (TSVD), is proposed. The determination of the heat-source distribution is treated as an optimization problem solved by the GEO algorithm, whereas the system of equations, which embodies a Fredholm equation of the first kind and is therefore expected to be ill conditioned, is solved with the TSVD regularization method. The results show that the hybrid method can produce a heat flux on the design surface that satisfies the imposed conditions with a maximum error of less than 1.10%, and they illustrate the relevance of a hybrid method as a prediction tool. (author)
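
    The TSVD half of the hybrid scheme can be sketched in a few lines. The snippet below (Python/NumPy, not the authors' implementation) regularizes an ill-conditioned linear system by discarding small singular values; the smooth Gaussian kernel and heat-source profile are hypothetical stand-ins for the discretized Fredholm operator of the radiative enclosure.

        import numpy as np

        def tsvd_solve(A, b, k):
            """Solve the ill-conditioned system A x = b by truncated SVD,
            keeping only the k largest singular values."""
            U, s, Vt = np.linalg.svd(A, full_matrices=False)
            inv_s = np.zeros_like(s)
            inv_s[:k] = 1.0 / s[:k]          # drop noise-amplifying small singular values
            return Vt.T @ (inv_s * (U.T @ b))

        # Hypothetical smooth kernel standing in for the discretized Fredholm
        # operator of the first kind (rapidly decaying singular values).
        n = 50
        x = np.linspace(0.0, 1.0, n)
        A = np.exp(-((x[:, None] - x[None, :]) ** 2) / (2.0 * 0.05 ** 2))
        q_true = np.sin(np.pi * x)                       # "true" heat-source profile
        b = A @ q_true + 1e-4 * np.random.randn(n)       # simulated noisy data

        q_tsvd = tsvd_solve(A, b, k=10)
        print("relative error: %.3f" % (np.linalg.norm(q_tsvd - q_true) / np.linalg.norm(q_true)))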

  18. Sensitivity of Optimal Solutions to Control Problems for Second Order Evolution Subdifferential Inclusions.

    Science.gov (United States)

    Bartosz, Krzysztof; Denkowski, Zdzisław; Kalita, Piotr

In this paper the sensitivity of optimal solutions to control problems described by second order evolution subdifferential inclusions under perturbations of state relations and of cost functionals is investigated. First we establish a new existence result for a class of such inclusions. Then, based on the theory of sequential Γ-convergence we recall the abstract scheme concerning convergence of minimal values and minimizers. The abstract scheme works provided we can establish two properties: the Kuratowski convergence of solution sets for the state relations and some complementary Γ-convergence of the cost functionals. Then these two properties are implemented in the considered case.

  19. Asymptotic Method of Solution for a Problem of Construction of Optimal Gas-Lift Process Modes

    Directory of Open Access Journals (Sweden)

    Fikrat A. Aliev

    2010-01-01

Full Text Available A mathematical model of oil extraction by the gas-lift method is considered for the case in which the reciprocal of the well's depth is a small parameter. The problem of optimal mode construction (i.e., construction of optimal program trajectories and controls) is reduced to a linear-quadratic optimal control problem with a small parameter. Analytic formulae for determining the solutions at the first-order approximation with respect to the small parameter are obtained. A comparison of the obtained results with known ones on a specific example is provided, which makes it possible, in particular, to use the obtained results in realizations of oil extraction problems by the gas-lift method.

  20. Controlling extreme events on complex networks

    Science.gov (United States)

    Chen, Yu-Zhong; Huang, Zi-Gang; Lai, Ying-Cheng

    2014-08-01

    Extreme events, a type of collective behavior in complex networked dynamical systems, often can have catastrophic consequences. To develop effective strategies to control extreme events is of fundamental importance and practical interest. Utilizing transportation dynamics on complex networks as a prototypical setting, we find that making the network ``mobile'' can effectively suppress extreme events. A striking, resonance-like phenomenon is uncovered, where an optimal degree of mobility exists for which the probability of extreme events is minimized. We derive an analytic theory to understand the mechanism of control at a detailed and quantitative level, and validate the theory numerically. Implications of our finding to current areas such as cybersecurity are discussed.

  1. Kinetic turbulence simulations at extreme scale on leadership-class systems

    Energy Technology Data Exchange (ETDEWEB)

    Wang, Bei [Princeton Univ., Princeton, NJ (United States); Ethier, Stephane [Princeton Plasma Physics Lab. (PPPL), Princeton, NJ (United States); Tang, William [Princeton Univ., Princeton, NJ (United States); Princeton Plasma Physics Lab. (PPPL), Princeton, NJ (United States); Williams, Timothy [Argonne National Lab. (ANL), Argonne, IL (United States); Ibrahim, Khaled Z. [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Madduri, Kamesh [The Pennsylvania State Univ., University Park, PA (United States); Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Williams, Samuel [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Oliker, Leonid [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States)

    2013-01-01

Reliable predictive simulation capability addressing confinement properties in magnetically confined fusion plasmas is critically important for ITER, a 20 billion dollar international burning plasma device under construction in France. The complex study of kinetic turbulence, which can severely limit the energy confinement and impact the economic viability of fusion systems, requires simulations at extreme scale for such an unprecedented device size. Our newly optimized, global, ab initio particle-in-cell code solving the nonlinear equations underlying gyrokinetic theory achieves excellent performance with respect to "time to solution" at the full capacity of the IBM Blue Gene/Q, on the 786,432 cores of Mira at ALCF and recently on the 1,572,864 cores of Sequoia at LLNL. Recent multithreading and domain decomposition optimizations in the new GTC-P code represent critically important software advances for modern, low-memory-per-core systems by enabling routine simulations at unprecedented size (130 million grid points, ITER scale) and resolution (65 billion particles).

  2. Application of Nontraditional Optimization Techniques for Airfoil Shape Optimization

    Directory of Open Access Journals (Sweden)

    R. Mukesh

    2012-01-01

Full Text Available The choice of optimization algorithm is one of the most important factors influencing the fidelity of the solution in an aerodynamic shape optimization problem. Various optimization methods, such as the genetic algorithm (GA), simulated annealing (SA), and particle swarm optimization (PSO), are now widely employed to solve aerodynamic shape optimization problems. In addition to the optimization method, the geometry parameterization is an important factor to be considered during the aerodynamic shape optimization process. The objective of this work is to describe general airfoil geometry using twelve parameters by representing its shape as a polynomial function, and to couple this parameterization with a flow solver and optimization algorithms. An aerodynamic shape optimization problem is formulated for the NACA 0012 airfoil and solved using simulated annealing and a genetic algorithm at a 5.0 deg angle of attack. The results show that the simulated annealing scheme is more effective in finding the optimum among the various possible solutions, and that SA exhibits stronger exploitation characteristics, whereas the GA is the more effective explorer.
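
    For readers unfamiliar with the method, a minimal simulated-annealing loop of the kind used for such shape optimization looks roughly as follows (Python sketch; the quadratic objective is a hypothetical stand-in for the aerodynamic figure of merit that a flow solver would return for the twelve shape parameters).

        import math
        import random

        def simulated_annealing(objective, x0, step=0.05, t0=1.0, cooling=0.95, n_iter=2000):
            """Minimal simulated-annealing loop: perturb one parameter at a time and
            accept worse solutions with a temperature-dependent probability."""
            x, fx = list(x0), objective(x0)
            best_x, best_f = list(x), fx
            temp = t0
            for _ in range(n_iter):
                cand = list(x)
                i = random.randrange(len(cand))
                cand[i] += random.uniform(-step, step)      # perturb one shape parameter
                fc = objective(cand)
                # Metropolis acceptance: always accept improvements, sometimes accept
                # deteriorations so that local optima can be escaped.
                if fc < fx or random.random() < math.exp(-(fc - fx) / temp):
                    x, fx = cand, fc
                    if fx < best_f:
                        best_x, best_f = list(x), fx
                temp *= cooling                             # geometric cooling schedule
            return best_x, best_f

        # Stand-in objective: in the airfoil study this would be the (negated) lift-to-drag
        # ratio returned by a flow solver; here a simple quadratic with a known minimum.
        target = [0.1 * i for i in range(12)]
        objective = lambda p: sum((pi - ti) ** 2 for pi, ti in zip(p, target))
        best, value = simulated_annealing(objective, x0=[0.0] * 12)
        print(round(value, 4))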

  3. Multi-Objective Sustainable Operation of the Three Gorges Cascaded Hydropower System Using Multi-Swarm Comprehensive Learning Particle Swarm Optimization

    Directory of Open Access Journals (Sweden)

    Xiang Yu

    2016-06-01

Full Text Available Optimal operation of hydropower reservoir systems often requires optimizing multiple conflicting objectives simultaneously. The conflicting objectives result in a Pareto front, a set of non-dominated solutions; non-dominated solutions cannot outperform each other on all objectives. An optimization framework based on the multi-swarm comprehensive learning particle swarm optimization algorithm is proposed to solve the multi-objective operation of hydropower reservoir systems. By adopting search techniques such as decomposition, mutation and differential evolution, the algorithm seeks to derive multiple non-dominated solutions reasonably distributed over the true Pareto front in a single run, thereby facilitating determination of the final tradeoff. The long-term sustainable planning of the Three Gorges cascaded hydropower system, consisting of the Three Gorges Dam and Gezhouba Dam on the Yangtze River in China, is studied. Two conflicting objectives, i.e., maximizing hydropower generation and minimizing deviation from the lower outflow target to realize the system’s economic, environmental and social benefits during the drought season, are optimized simultaneously. Experimental results demonstrate that the optimization framework robustly derives multiple feasible non-dominated solutions with satisfactory convergence, diversity and extremity in a single run for the case studied.
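
    The notion of non-dominance underlying the Pareto front can be made concrete with a short sketch. The Python function below (illustrative only, not part of the study) filters a set of candidate objective vectors down to the non-dominated ones, assuming all objectives are expressed in minimization form; the candidate values are hypothetical.

        import numpy as np

        def non_dominated(points):
            """Return the indices of non-dominated points, assuming every objective
            is to be minimized (flip signs for objectives to be maximized)."""
            points = np.asarray(points)
            keep = []
            for i, p in enumerate(points):
                # p is dominated if some other point is no worse in every objective
                # and strictly better in at least one.
                dominated = np.any(np.all(points <= p, axis=1) &
                                   np.any(points < p, axis=1))
                if not dominated:
                    keep.append(i)
            return keep

        # Hypothetical bi-objective values: (-hydropower generation, outflow deficit),
        # both expressed so that smaller is better.
        candidates = [(-95.0, 4.0), (-90.0, 2.0), (-99.0, 7.0), (-90.0, 5.0), (-85.0, 2.5)]
        print(non_dominated(candidates))   # indices of the Pareto-optimal candidates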

  4. Infection in the ischemic lower extremity.

    Science.gov (United States)

    Fry, D E; Marek, J M; Langsfeld, M

    1998-06-01

    Infections in the lower extremity of the patient with ischemia can cover a broad spectrum of different diseases. An understanding of the particular pathophysiologic circumstances in the ischemic extremity can be of great value in understanding the natural history of the disease and the potential complications that may occur. Optimizing blood flow to the extremity by using revascularization techniques is important for any patient with an ischemic lower extremity complicated by infection or ulceration. Infections in the ischemic lower extremity require local débridement and systemic antibiotics. For severe infections, such as necrotizing fasciitis or the fetid foot, more extensive local débridement and even amputation may be required. Fundamentals of managing prosthetic graft infection require removing the infected prosthesis, local wound débridement, and systemic antibiotics while attempting to preserve viability of the lower extremity using autogenous graft reconstruction.

  5. Local entropy as a measure for sampling solutions in constraint satisfaction problems

    International Nuclear Information System (INIS)

    Baldassi, Carlo; Ingrosso, Alessandro; Lucibello, Carlo; Saglietti, Luca; Zecchina, Riccardo

    2016-01-01

We introduce a novel entropy-driven Monte Carlo (EdMC) strategy to efficiently sample solutions of random constraint satisfaction problems (CSPs). First, we extend a recent result showing, via a large-deviation analysis, that the geometry of the space of solutions of the binary perceptron learning problem (a prototypical CSP) contains regions of a very high density of solutions. Despite being sub-dominant, these regions can be found by optimizing a local entropy measure. Building on these results, we construct a fast solver that relies exclusively on a local entropy estimate and can be applied to general CSPs. We describe its performance not only for the perceptron learning problem but also for the random K-satisfiability problem (another prototypical CSP with a radically different structure), and show numerically that a simple zero-temperature Metropolis search in the smooth local entropy landscape can reach sub-dominant clusters of optimal solutions in a small number of steps, while standard simulated annealing either requires extremely long cooling procedures or simply fails. We also discuss how EdMC can heuristically be made even more efficient for the cases we studied. (paper: disordered systems, classical and quantum)

  6. Searching for optimal integer solutions to set partitioning problems using column generation

    OpenAIRE

    Bredström, David; Jörnsten, Kurt; Rönnqvist, Mikael

    2007-01-01

We describe a new approach to produce integer-feasible columns for a set partitioning problem directly while solving the linear programming (LP) relaxation using column generation. Traditionally, column generation aims to solve the LP relaxation as quickly as possible without any concern for the integer properties of the columns formed. In our approach we aim to generate the columns forming the optimal integer solution while simultaneously solving the LP relaxation. By this we can re...

  7. Optimization in the utility maximization framework for conservation planning: a comparison of solution procedures in a study of multifunctional agriculture

    Science.gov (United States)

    Kreitler, Jason R.; Stoms, David M.; Davis, Frank W.

    2014-01-01

    Quantitative methods of spatial conservation prioritization have traditionally been applied to issues in conservation biology and reserve design, though their use in other types of natural resource management is growing. The utility maximization problem is one form of a covering problem where multiple criteria can represent the expected social benefits of conservation action. This approach allows flexibility with a problem formulation that is more general than typical reserve design problems, though the solution methods are very similar. However, few studies have addressed optimization in utility maximization problems for conservation planning, and the effect of solution procedure is largely unquantified. Therefore, this study mapped five criteria describing elements of multifunctional agriculture to determine a hypothetical conservation resource allocation plan for agricultural land conservation in the Central Valley of CA, USA. We compared solution procedures within the utility maximization framework to determine the difference between an open source integer programming approach and a greedy heuristic, and find gains from optimization of up to 12%. We also model land availability for conservation action as a stochastic process and determine the decline in total utility compared to the globally optimal set using both solution algorithms. Our results are comparable to other studies illustrating the benefits of optimization for different conservation planning problems, and highlight the importance of maximizing the effectiveness of limited funding for conservation and natural resource management.
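
    A minimal version of the greedy heuristic compared against the integer-programming solution might look as follows (Python sketch; the parcel identifiers, utilities and costs are hypothetical). It repeatedly selects the parcel with the best utility-per-cost ratio that still fits in the remaining budget, which is the kind of procedure whose gap to the optimal allocation a study like this quantifies.

        def greedy_select(parcels, budget):
            """Greedy heuristic for a budget-constrained utility-maximization
            (covering) problem: repeatedly pick the parcel with the best
            utility-per-cost ratio that still fits in the remaining budget."""
            remaining = budget
            chosen, total_utility = [], 0.0
            # Rank once by benefit/cost ratio; an exact integer program would
            # instead search over all feasible subsets.
            for name, utility, cost in sorted(parcels, key=lambda p: p[1] / p[2], reverse=True):
                if cost <= remaining:
                    chosen.append(name)
                    total_utility += utility
                    remaining -= cost
            return chosen, total_utility

        # Hypothetical parcels: (id, aggregated multi-criteria utility, cost in $M).
        parcels = [("A", 8.0, 4.0), ("B", 6.5, 2.0), ("C", 5.0, 2.5),
                   ("D", 9.0, 6.0), ("E", 3.0, 1.0)]
        print(greedy_select(parcels, budget=7.0))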

  8. Hybrid Cascading Outage Analysis of Extreme Events with Optimized Corrective Actions

    Energy Technology Data Exchange (ETDEWEB)

    Vallem, Mallikarjuna R.; Vyakaranam, Bharat GNVSR; Holzer, Jesse T.; Samaan, Nader A.; Makarov, Yuri V.; Diao, Ruisheng; Huang, Qiuhua; Ke, Xinda

    2017-10-19

Power systems are vulnerable to extreme contingencies (such as the outage of a major generating substation) that can cause significant generation and load loss and can lead to further cascading outages of other transmission facilities and generators in the system. Some cascading outages occur within minutes of a major contingency and may not be captured using dynamic simulation of the power system alone. Utilities plan for contingencies based on either dynamic or steady-state analysis separately, which may not accurately capture the impact of one process on the other. We address this gap in cascading outage analysis by developing the Dynamic Contingency Analysis Tool (DCAT), which can analyze hybrid dynamic and steady-state behavior of the power system, including protection system models in dynamic simulations, and simulate corrective actions in post-transient steady-state conditions. One of the important implemented steady-state processes mimics operator corrective actions to mitigate aggravated states caused by dynamic cascading. This paper presents an Optimal Power Flow (OPF) based formulation for selecting corrective actions that utility operators can take during a major contingency, thus automating the hybrid dynamic-steady-state cascading outage process. The improved DCAT framework with OPF-based corrective actions is demonstrated on the IEEE 300-bus test system.

  9. BRAIN Journal - Solving Optimization Problems via Vortex Optimization Algorithm and Cognitive Development Optimization Algorithm

    OpenAIRE

    Ahmet Demir; Utku Kose

    2016-01-01

    ABSTRACT In the fields which require finding the most appropriate value, optimization became a vital approach to employ effective solutions. With the use of optimization techniques, many different fields in the modern life have found solutions to their real-world based problems. In this context, classical optimization techniques have had an important popularity. But after a while, more advanced optimization problems required the use of more effective techniques. At this point, Computer Sc...

  10. A solution to the optimal power flow using multi-verse optimizer

    Directory of Open Access Journals (Sweden)

    Bachir Bentouati

    2016-12-01

Full Text Available In this work, the most common problem of the modern power system, namely optimal power flow (OPF), is optimized using the novel meta-heuristic Multi-Verse Optimizer (MVO) algorithm. The IEEE 30-bus and IEEE 57-bus systems are used to solve the optimal power flow problem, and MVO is applied to solve it. The objectives considered in the OPF problem are fuel cost reduction, voltage profile improvement, and voltage stability enhancement. The obtained results are compared with recently published meta-heuristics. Simulation results clearly reveal the effectiveness and rapidity of the proposed algorithm for solving the OPF problem.

  11. Impact of discretization of the decision variables in the search of optimized solutions for history matching and injection rate optimization; Impacto do uso de variaveis discretas na busca de solucoes otimizadas para o ajuste de historico e distribuicao de vazoes de injecao

    Energy Technology Data Exchange (ETDEWEB)

    Sousa, Sergio H.G. de; Madeira, Marcelo G. [Halliburton Servicos Ltda., Rio de Janeiro, RJ (Brazil)

    2008-07-01

In the classical operations research arena, there is a notion that the search for optimized solutions in continuous solution spaces is easier than in discrete solution spaces, even when the latter is a subset of the former. In the upstream oil industry, there is an additional complexity in optimization problems because there usually are no analytical expressions for the objective function, which requires some form of simulation in order to be evaluated. Thus, the use of meta-heuristic optimizers such as scatter search, tabu search and genetic algorithms is common. In this meta-heuristic context, there are advantages in transforming continuous solution spaces into equivalent discrete ones; the goal of doing so usually is to speed up the search for optimized solutions. However, these advantages can be masked when the problem has restrictions formed by linear combinations of its decision variables. In order to study these aspects of meta-heuristic optimization, two optimization problems are proposed and solved with both continuous and discrete solution spaces: assisted history matching and injection rate optimization. Both cases operate on a model of the Wytch Farm onshore oil field located in England. (author)

  12. Methods for providing decision makers with optimal solutions for multiple objectives that change over time

    CSIR Research Space (South Africa)

    Greeff, M

    2010-09-01

    Full Text Available Decision making - with the goal of finding the optimal solution - is an important part of modern life. For example: In the control room of an airport, the goals or objectives are to minimise the risk of airplanes colliding, minimise the time that a...

  13. What happens at the horizon(s) of an extreme black hole?

    International Nuclear Information System (INIS)

    Murata, Keiju; Reall, Harvey S; Tanahashi, Norihiro

    2013-01-01

A massless scalar field exhibits an instability at the event horizon of an extreme black hole. We study numerically the nonlinear evolution of this instability for spherically symmetric perturbations of an extreme Reissner–Nordstrom (RN) black hole. We find that generically the endpoint of the instability is a non-extreme RN solution. However, there exist fine-tuned initial perturbations for which the instability never decays. In this case, the perturbed spacetime describes a time-dependent extreme black hole. Such solutions settle down to extreme RN outside, but not on, the event horizon. The event horizon remains smooth but certain observers who cross it at late time experience large gradients there. Our results indicate that these dynamical extreme black holes admit a C^1 extension across an inner (Cauchy) horizon. (paper)

  14. The Shortlist Method for fast computation of the Earth Mover's Distance and finding optimal solutions to transportation problems.

    Science.gov (United States)

    Gottschlich, Carsten; Schuhmacher, Dominic

    2014-01-01

    Finding solutions to the classical transportation problem is of great importance, since this optimization problem arises in many engineering and computer science applications. Especially the Earth Mover's Distance is used in a plethora of applications ranging from content-based image retrieval, shape matching, fingerprint recognition, object tracking and phishing web page detection to computing color differences in linguistics and biology. Our starting point is the well-known revised simplex algorithm, which iteratively improves a feasible solution to optimality. The Shortlist Method that we propose substantially reduces the number of candidates inspected for improving the solution, while at the same time balancing the number of pivots required. Tests on simulated benchmarks demonstrate a considerable reduction in computation time for the new method as compared to the usual revised simplex algorithm implemented with state-of-the-art initialization and pivot strategies. As a consequence, the Shortlist Method facilitates the computation of large scale transportation problems in viable time. In addition we describe a novel method for finding an initial feasible solution which we coin Modified Russell's Method.
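
    The underlying transportation problem can also be stated and solved directly as a linear program, which makes the role of specialised methods such as the Shortlist Method clearer. The sketch below (Python/SciPy, using an off-the-shelf LP solver rather than the authors' revised-simplex variant) computes the Earth Mover's Distance between two small hypothetical histograms.

        import numpy as np
        from scipy.optimize import linprog

        def earth_movers_distance(supply, demand, cost):
            """Solve the balanced transportation problem as a linear program and
            return the Earth Mover's Distance (total mass normalised to 1)."""
            m, n = cost.shape
            A_eq = np.zeros((m + n, m * n))
            # Row-sum constraints: each supply must be fully shipped ...
            for i in range(m):
                A_eq[i, i * n:(i + 1) * n] = 1.0
            # ... and column-sum constraints: each demand must be fully met.
            for j in range(n):
                A_eq[m + j, j::n] = 1.0
            b_eq = np.concatenate([supply, demand])
            res = linprog(cost.ravel(), A_eq=A_eq, b_eq=b_eq, bounds=(0, None),
                          method="highs")
            return res.fun

        # Two small hypothetical histograms and a ground-distance matrix.
        supply = np.array([0.4, 0.6])
        demand = np.array([0.5, 0.5])
        cost = np.array([[0.0, 1.0],
                         [1.0, 0.0]])
        print(earth_movers_distance(supply, demand, cost))   # approximately 0.1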

  15. Design of Distributed Controllers Seeking Optimal Power Flow Solutions under Communication Constraints: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Dall' Anese, Emiliano; Simonetto, Andrea; Dhople, Sairaj

    2016-12-01

    This paper focuses on power distribution networks featuring inverter-interfaced distributed energy resources (DERs), and develops feedback controllers that drive the DER output powers to solutions of time-varying AC optimal power flow (OPF) problems. Control synthesis is grounded on primal-dual-type methods for regularized Lagrangian functions, as well as linear approximations of the AC power-flow equations. Convergence and OPF-solution-tracking capabilities are established while acknowledging: i) communication-packet losses, and ii) partial updates of control signals. The latter case is particularly relevant since it enables asynchronous operation of the controllers where DER setpoints are updated at a fast time scale based on local voltage measurements, and information on the network state is utilized if and when available, based on communication constraints. As an application, the paper considers distribution systems with high photovoltaic integration, and demonstrates that the proposed framework provides fast voltage-regulation capabilities, while enabling the near real-time pursuit of solutions of AC OPF problems.
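
    The primal-dual update at the heart of such controllers can be illustrated on a toy problem. The sketch below (Python/NumPy, not the authors' controller) performs projected primal-dual gradient steps on a regularized Lagrangian for a drifting quadratic objective with a single linear constraint standing in for the linearized voltage limit; all problem data are hypothetical.

        import numpy as np

        def primal_dual_step(x, lam, grad_f, g, jac_g, alpha=0.05, eps=1e-3):
            """One projected primal-dual gradient step on the regularized Lagrangian
            L(x, lam) = f(x) + lam^T g(x) - (eps/2)*||lam||^2."""
            x_new = x - alpha * (grad_f(x) + jac_g(x).T @ lam)           # primal descent
            lam_new = np.maximum(0.0, lam + alpha * (g(x) - eps * lam))  # projected dual ascent
            return x_new, lam_new

        # Toy time-varying problem standing in for the linearized OPF:
        # minimize ||x - p_ref(t)||^2 subject to a voltage-proxy constraint a^T x <= v_max.
        a, v_max = np.array([1.0, 1.0]), 1.5
        grad_f = lambda x, p_ref: 2.0 * (x - p_ref)
        g = lambda x: np.array([a @ x - v_max])
        jac_g = lambda x: a.reshape(1, -1)

        x, lam = np.zeros(2), np.zeros(1)
        for t in range(200):
            # Drifting setpoint target, mimicking a time-varying OPF solution.
            p_ref = np.array([1.0, 1.0]) * (0.5 + 0.5 * np.sin(0.05 * t))
            x, lam = primal_dual_step(x, lam, lambda z: grad_f(z, p_ref), g, jac_g)
        print(np.round(x, 3), np.round(lam, 3))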

  16. The importance of functional form in optimal control solutions of problems in population dynamics

    Science.gov (United States)

    Runge, M.C.; Johnson, F.A.

    2002-01-01

    Optimal control theory is finding increased application in both theoretical and applied ecology, and it is a central element of adaptive resource management. One of the steps in an adaptive management process is to develop alternative models of system dynamics, models that are all reasonable in light of available data, but that differ substantially in their implications for optimal control of the resource. We explored how the form of the recruitment and survival functions in a general population model for ducks affected the patterns in the optimal harvest strategy, using a combination of analytical, numerical, and simulation techniques. We compared three relationships between recruitment and population density (linear, exponential, and hyperbolic) and three relationships between survival during the nonharvest season and population density (constant, logistic, and one related to the compensatory harvest mortality hypothesis). We found that the form of the component functions had a dramatic influence on the optimal harvest strategy and the ultimate equilibrium state of the system. For instance, while it is commonly assumed that a compensatory hypothesis leads to higher optimal harvest rates than an additive hypothesis, we found this to depend on the form of the recruitment function, in part because of differences in the optimal steady-state population density. This work has strong direct consequences for those developing alternative models to describe harvested systems, but it is relevant to a larger class of problems applying optimal control at the population level. Often, different functional forms will not be statistically distinguishable in the range of the data. Nevertheless, differences between the functions outside the range of the data can have an important impact on the optimal harvest strategy. Thus, development of alternative models by identifying a single functional form, then choosing different parameter combinations from extremes on the likelihood

  17. Extremal black holes in dynamical Chern–Simons gravity

    International Nuclear Information System (INIS)

    McNees, Robert; Stein, Leo C; Yunes, Nicolás

    2016-01-01

    Rapidly rotating black hole (BH) solutions in theories beyond general relativity (GR) play a key role in experimental gravity, as they allow us to compute observables in extreme spacetimes that deviate from the predictions of GR. Such solutions are often difficult to find in beyond-general-relativity theories due to the inclusion of additional fields that couple to the metric nonlinearly and non-minimally. In this paper, we consider rotating BH solutions in one such theory, dynamical Chern–Simons (dCS) gravity, where the Einstein–Hilbert action is modified by the introduction of a dynamical scalar field that couples to the metric through the Pontryagin density. We treat dCS gravity as an effective field theory and work in the decoupling limit, where corrections are treated as small perturbations from GR. We perturb about the maximally rotating Kerr solution, the so-called extremal limit, and develop mathematical insight into the analysis techniques needed to construct solutions for generic spin. First we find closed-form, analytic expressions for the extremal scalar field, and then determine the trace of the metric perturbation, giving both in terms of Legendre decompositions. Retaining only the first three and four modes in the Legendre representation of the scalar field and the trace, respectively, suffices to ensure a fidelity of over 99% relative to full numerical solutions. The leading-order mode in the Legendre expansion of the trace of the metric perturbation contains a logarithmic divergence at the extremal Kerr horizon, which is likely to be unimportant as it occurs inside the perturbed dCS horizon. The techniques employed here should enable the construction of analytic, closed-form expressions for the scalar field and metric perturbations on a background with arbitrary rotation. (paper)

  18. Stochastic network interdiction optimization via capacitated network reliability modeling and probabilistic solution discovery

    International Nuclear Information System (INIS)

    Ramirez-Marquez, Jose Emmanuel; Rocco S, Claudio M.

    2009-01-01

    This paper introduces an evolutionary optimization approach that can be readily applied to solve stochastic network interdiction problems (SNIP). The network interdiction problem solved considers the minimization of the cost associated with an interdiction strategy such that the maximum flow that can be transmitted between a source node and a sink node for a fixed network design is greater than or equal to a given reliability requirement. Furthermore, the model assumes that the nominal capacity of each network link and the cost associated with their interdiction can change from link to link and that such interdiction has a probability of being successful. This version of the SNIP is for the first time modeled as a capacitated network reliability problem allowing for the implementation of computation and solution techniques previously unavailable. The solution process is based on an evolutionary algorithm that implements: (1) Monte-Carlo simulation, to generate potential network interdiction strategies, (2) capacitated network reliability techniques to analyze strategies' source-sink flow reliability and, (3) an evolutionary optimization technique to define, in probabilistic terms, how likely a link is to appear in the final interdiction strategy. Examples for different sizes of networks are used throughout the paper to illustrate the approach

  19. Optimizing image quality and dose for digital radiography of distal pediatric extremities using the contrast-to-noise ratio

    International Nuclear Information System (INIS)

    Hess, R.; Neitzel, U.

    2012-01-01

    Purpose: To investigate the influence of X-ray tube voltage and filtration on image quality in terms of contrast-to-noise ratio (CNR) and dose for digital radiography of distal pediatric extremities and to determine conditions that give the best balance of CNR and patient dose. Materials and Methods: In a phantom study simulating the absorption properties of distal extremities, the CNR and the related patient dose were determined as a function of tube voltage in the range 40 - 66 kV, both with and without additional filtration of 0.1 mm Cu/1 mm Al. The measured CNR was used as an indicator of image quality, while the mean absorbed dose (MAD) - determined by a combination of measurement and simulation - was used as an indicator of the patient dose. Results: The most favorable relation of CNR and dose was found for the lowest tube voltage investigated (40 kV) without additional filtration. Compared to a situation with 50 kV or 60 kV, the mean absorbed dose could be lowered by 24 % and 50 %, respectively, while keeping the image quality (CNR) at the same level. Conclusion: For digital radiography of distal pediatric extremities, further CNR and dose optimization appears to be possible using lower tube voltages. Further clinical investigation of the suggested parameters is necessary. (orig.)
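
    Definitions of CNR vary slightly between studies; a common region-of-interest form, which could serve as the image-quality indicator described above, is sketched below (Python/NumPy with hypothetical pixel values).

        import numpy as np

        def contrast_to_noise_ratio(signal_roi, background_roi):
            """Region-of-interest CNR: difference of mean pixel values divided by
            the standard deviation (noise) of the background region."""
            return abs(signal_roi.mean() - background_roi.mean()) / background_roi.std()

        # Hypothetical pixel values for a contrast insert and the surrounding phantom.
        rng = np.random.default_rng(0)
        signal_roi = rng.normal(loc=1200.0, scale=40.0, size=(50, 50))
        background_roi = rng.normal(loc=1000.0, scale=40.0, size=(50, 50))
        print("CNR = %.1f" % contrast_to_noise_ratio(signal_roi, background_roi))
        # A figure of merit such as CNR**2 divided by the mean absorbed dose can then
        # be compared across tube voltages and filtrations to find the best balance.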

  20. Optimization of Multipurpose Reservoir Operation with Application Particle Swarm Optimization Algorithm

    Directory of Open Access Journals (Sweden)

    Elahe Fallah Mehdipour

    2012-12-01

Full Text Available Optimal operation of multipurpose reservoirs is one of the complex, and sometimes nonlinear, problems in the field of multi-objective optimization. Evolutionary algorithms are optimization tools that search the decision space by simulating natural biological evolution and present a set of points as the optimum solutions of the problem. In this research, the application of multi-objective particle swarm optimization (MOPSO) to the optimal operation of the Bazoft reservoir is considered, with objectives including hydropower generation, supply of downstream demands (drinking, industry and agriculture), recreation, and flood control. Solution sets of the MOPSO algorithm for pairs of objectives were first compared with compromise programming (CP) using different weighting and power coefficients; for all combinations of objectives, the MOPSO algorithm was more capable than CP of finding solutions with an appropriate distribution, and its solutions dominated the CP solutions. Then the end points of the MOPSO solution set were compared with nonlinear programming (NLP) results, which showed that the MOPSO algorithm, differing from the NLP results by 0.3 percent, is more capable of presenting optimum solutions at the end points of the solution set.

  1. Optimal analytic method for the nonlinear Hasegawa-Mima equation

    Science.gov (United States)

    Baxter, Mathew; Van Gorder, Robert A.; Vajravelu, Kuppalapalle

    2014-05-01

The Hasegawa-Mima equation is a nonlinear partial differential equation that describes the electric potential due to a drift wave in a plasma. In the present paper, we apply the method of homotopy analysis to a slightly more general Hasegawa-Mima equation, which accounts for hyper-viscous damping or viscous dissipation. First, we outline the method for the general initial/boundary value problem over a compact rectangular spatial domain. We use a two-stage method, where both the convergence control parameter and the auxiliary linear operator are optimally selected to minimize the residual error due to the approximation. To do the latter, we consider a family of operators parameterized by a constant which gives the decay rate of the solutions. After outlining the general method, we consider a number of concrete examples in order to demonstrate the utility of this approach. The results enable us to study properties of the initial/boundary value problem for the generalized Hasegawa-Mima equation. In several cases considered, we are able to obtain solutions with extremely small residual errors after relatively few iterations are computed (residual errors on the order of 10^-15 are found in multiple cases after only three iterations). The results demonstrate that selecting a parameterized auxiliary linear operator can be extremely useful for minimizing residual errors when used concurrently with the optimal homotopy analysis method, suggesting that this approach can prove useful for a number of nonlinear partial differential equations arising in physics and nonlinear mechanics.

  2. Optimal control of two coupled spinning particles in the Euler–Lagrange picture

    International Nuclear Information System (INIS)

    Delgado-Téllez, M; Ibort, A; Peña, T Rodríguez de la; Salmoni, R

    2016-01-01

A family of optimal control problems for a single and two coupled spinning particles in the Euler–Lagrange formalism is discussed. A characteristic of such problems is that the equations controlling the system are implicit and a reduction procedure to deal with them must be carried out. The reduction of the implicit control equations arising in these problems will be discussed in the slightly more general setting of implicit equations defined by invariant one-forms on Lie groups. As an example the first order differential equations describing the extremal solutions of an optimal control problem for a single spinning particle, obtained by using Pontryagin’s Maximum Principle (PMP), will be found and shown to be completely integrable. Then, again using PMP, solutions for the problem of two coupled spinning particles will be characterized as solutions of a system of coupled non-linear matrix differential equations. The reduction of the implicit system will show that the reduced space for them is the product of the space of states for the independent systems, implying the absence of ‘entanglement’ in this instance. Finally, it will be shown that, in the case of identical systems, the degree three matrix polynomial differential equations determined by the optimal feedback law constitute a completely integrable Hamiltonian system and some of its solutions are described explicitly. (paper)

  3. Electrical Discharge Platinum Machining Optimization Using Stefan Problem Solutions

    Directory of Open Access Journals (Sweden)

    I. B. Stavitskiy

    2015-01-01

Full Text Available The article presents theoretical results on the machinability of platinum by electrical discharge machining (EDM), based on the solution of the thermal problem with a moving phase-change boundary, i.e. the Stefan problem. The solution of this problem gives the depth of surface melting of the material under a given heat flow as a function of the duration of its action and the physical properties of the processed material. To determine rational EDM operating conditions for platinum, the article proposes relating its machinability to that of materials for which rational EDM operating conditions are currently defined. It is shown that at low heat-flow densities, corresponding to finishing EDM conditions, the processing conditions used for steel 45 are appropriate for platinum machining; at higher heat-flow densities (e.g. 50 GW/m2), copper processing conditions are used for this purpose; and at the high heat-flow densities corresponding to heavy roughing EDM, it is reasonable to use tungsten processing conditions. The article also shows how the minimum pulse width at which platinum starts melting, and hence at which the EDM process becomes possible, depends on the heat-flow density. It is shown that processing platinum is expedient at pulse widths corresponding to values called the effective pulse width; exceeding these values does not lead to a substantial increase in material removal per pulse, but considerably reduces the maximum repetition rate and therefore the EDM capacity. The paper presents the effective pulse width versus the heat-flow density, as well as the dependences of the maximum platinum surface melt depth and the corresponding pulse width on the heat-flow density. Results obtained using solutions of the Stefan heat problem can be used to optimize EDM operating conditions for platinum machining.

  4. Simulated parallel annealing within a neighborhood for optimization of biomechanical systems.

    Science.gov (United States)

    Higginson, J S; Neptune, R R; Anderson, F C

    2005-09-01

    Optimization problems for biomechanical systems have become extremely complex. Simulated annealing (SA) algorithms have performed well in a variety of test problems and biomechanical applications; however, despite advances in computer speed, convergence to optimal solutions for systems of even moderate complexity has remained prohibitive. The objective of this study was to develop a portable parallel version of a SA algorithm for solving optimization problems in biomechanics. The algorithm for simulated parallel annealing within a neighborhood (SPAN) was designed to minimize interprocessor communication time and closely retain the heuristics of the serial SA algorithm. The computational speed of the SPAN algorithm scaled linearly with the number of processors on different computer platforms for a simple quadratic test problem and for a more complex forward dynamic simulation of human pedaling.
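
    A much-simplified flavour of parallel annealing can be obtained by running independent chains in separate processes, as in the Python sketch below. This is not the SPAN algorithm itself, which shares information within a neighborhood of solutions; the objective function here is a hypothetical stand-in for an expensive forward dynamic simulation.

        import numpy as np
        from concurrent.futures import ProcessPoolExecutor

        def objective(x):
            # Stand-in cost; in biomechanics this would be an expensive forward
            # dynamic simulation evaluated at the control parameters x.
            return float(np.sum((x - 0.5) ** 2))

        def sa_chain(args):
            """One independent simulated-annealing chain."""
            x0, temp, n_steps, seed = args
            rng = np.random.default_rng(seed)
            x = np.array(x0, dtype=float)
            fx = objective(x)
            for _ in range(n_steps):
                cand = x + rng.normal(scale=0.1, size=x.size)
                fc = objective(cand)
                if fc < fx or rng.random() < np.exp(-(fc - fx) / temp):
                    x, fx = cand, fc
                temp *= 0.99
            return fx, x

        if __name__ == "__main__":
            dim, n_workers = 10, 4
            starts = [(np.random.rand(dim), 1.0, 2000, seed) for seed in range(n_workers)]
            # Independent chains run in parallel; periodically restarting them from the
            # best point found so far would approximate SPAN's neighborhood sharing.
            with ProcessPoolExecutor(max_workers=n_workers) as pool:
                results = list(pool.map(sa_chain, starts))
            best_f, best_x = min(results, key=lambda r: r[0])
            print("best cost:", round(best_f, 4))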

  5. Site-optimization of wind turbine generators

    Energy Technology Data Exchange (ETDEWEB)

    Wolff, T.J. de; Thillerup, J. [Nordtank Energy Group, Richmond, VA (United States)

    1997-12-31

The Danish company Nordtank is one of the pioneers of the wind turbine industry. Since 1981 Nordtank has installed more than 2500 wind turbine generators worldwide, with a total nameplate capacity exceeding 450 MW. The opening up of new and widely divergent markets has demanded an extremely flexible approach to wind turbine construction. The Nordtank product range has expanded considerably in recent years, with the main objective of developing wind energy conversion machines that can run profitably in any given case. This paper describes site optimization of Nordtank wind turbines. Nordtank has developed a flexible design concept for its WTGs in the 500/750 kW range, in order to offer the optimal WTG solution for any given site and wind regime. Through this flexible design, the 500/750 turbine line can adjust the rotor diameter, tower height and many other components to optimally fit the turbine to each specific project. This design philosophy is illustrated with case histories of recently completed projects.

  6. Moment-tensor solutions estimated using optimal filter theory: Global seismicity, 2001

    Science.gov (United States)

    Sipkin, S.A.; Bufe, C.G.; Zirbes, M.D.

    2003-01-01

    This paper is the 12th in a series published yearly containing moment-tensor solutions computed at the US Geological Survey using an algorithm based on the theory of optimal filter design (Sipkin, 1982 and Sipkin, 1986b). An inversion has been attempted for all earthquakes with a magnitude, mb or MS, of 5.5 or greater. Previous listings include solutions for earthquakes that occurred from 1981 to 2000 (Sipkin, 1986b; Sipkin and Needham, 1989, Sipkin and Needham, 1991, Sipkin and Needham, 1992, Sipkin and Needham, 1993, Sipkin and Needham, 1994a and Sipkin and Needham, 1994b; Sipkin and Zirbes, 1996 and Sipkin and Zirbes, 1997; Sipkin et al., 1998, Sipkin et al., 1999, Sipkin et al., 2000a, Sipkin et al., 2000b and Sipkin et al., 2002).The entire USGS moment-tensor catalog can be obtained via anonymous FTP at ftp://ghtftp.cr.usgs.gov. After logging on, change directory to “momten”. This directory contains two compressed ASCII files that contain the finalized solutions, “mt.lis.Z” and “fmech.lis.Z”. “mt.lis.Z” contains the elements of the moment tensors along with detailed event information; “fmech.lis.Z” contains the decompositions into the principal axes and best double-couples. The fast moment-tensor solutions for more recent events that have not yet been finalized and added to the catalog, are gathered by month in the files “jan01.lis.Z”, etc. “fmech.doc.Z” describes the various fields.

  7. Extremal black hole/CFT correspondence in (gauged) supergravities

    International Nuclear Information System (INIS)

    Chow, David D. K.; Cvetic, M.; Lue, H.; Pope, C. N.

    2009-01-01

    We extend the investigation of the recently proposed Kerr/conformal field theory correspondence to large classes of rotating black hole solutions in gauged and ungauged supergravities. The correspondence, proposed originally for four-dimensional Kerr black holes, asserts that the quantum states in the near-horizon region of an extremal rotating black hole are holographically dual to a two-dimensional chiral theory whose Virasoro algebra arises as an asymptotic symmetry of the near-horizon geometry. In fact, in dimension D there are [(D-1)/2] commuting Virasoro algebras. We consider a general canonical class of near-horizon geometries in arbitrary dimension D, and show that in any such metric the [(D-1)/2] central charges each imply, via the Cardy formula, a microscopic entropy that agrees with the Bekenstein-Hawking entropy of the associated extremal black hole. In the remainder of the paper we show for most of the known rotating black hole solutions of gauged supergravity, and for the ungauged supergravity solutions with four charges in D=4 and three charges in D=5, that their extremal near-horizon geometries indeed lie within the canonical form. This establishes that, in all these examples, the microscopic entropies of the dual conformal field theories agree with the Bekenstein-Hawking entropies of the extremal rotating black holes.

8. An Algorithm for the Allocation Optimization of Trading Executions

    Directory of Open Access Journals (Sweden)

    Claudiu Vinte

    2006-02-01

Full Text Available In this paper, I propose an integer allocation algorithm employing tabu search in conjunction with simulated annealing heuristics for optimizing the distribution of trading executions across investors’ accounts. No polynomial algorithm has been discovered for integer linear programming (a problem which is NP-complete). In practice, large-scale integer linear programs often remain practically unsolvable or extremely time-consuming. The algorithm described herein offers an alternative approach and consists of three steps: allocate the total executed quantity proportionally to the accounts, based on the allocation instructions (pro-rata basis); construct an initial solution by distributing the executed prices; and improve the solution iteratively, employing tabu search in conjunction with simulated annealing heuristics.
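
    The first of the three steps, pro-rata allocation with integer rounding, can be sketched as follows (Python; the account names and quantities are hypothetical). The price-distribution and tabu/simulated-annealing improvement steps are not shown.

        def pro_rata_allocation(total_qty, instructions):
            """Split an executed quantity across accounts in proportion to their
            instructed quantities, using integer shares and distributing the
            rounding leftovers by largest remainder."""
            instructed_total = sum(instructions.values())
            raw = {acct: total_qty * qty / instructed_total for acct, qty in instructions.items()}
            alloc = {acct: int(share) for acct, share in raw.items()}
            leftover = total_qty - sum(alloc.values())
            # Give the remaining units to the accounts with the largest fractional parts.
            for acct in sorted(raw, key=lambda a: raw[a] - alloc[a], reverse=True)[:leftover]:
                alloc[acct] += 1
            return alloc

        # Hypothetical allocation instructions (account -> instructed quantity).
        instructions = {"ACC1": 1000, "ACC2": 2500, "ACC3": 700}
        print(pro_rata_allocation(3301, instructions))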

  9. Analytical Solutions and Optimization of the Exo-Irreversible Schmidt Cycle with Imperfect Regeneration for the 3 Classical Types of Stirling Engine Solutions analytiques et optimisation du cycle de Schmidt irréversible à régénération imparfaite appliquées aux 3 types classiques de moteur Stirling

    Directory of Open Access Journals (Sweden)

    Rochelle P.

    2011-11-01

Full Text Available The “old” Stirling engine is one of the most promising multi-heat-source engines for the future. Simple and realistic basic models are useful to aid in optimizing a preliminary engine configuration. In addition to new analytical solutions for regeneration that dramatically reduce computing time, this study of the Schmidt-Stirling engine cycle is carried out from an engineer-friendly viewpoint introducing exo-irreversible heat transfers. The reference parameters are the technological or physical constraints: the maximum pressure, the maximum volume, the extreme wall temperatures and the overall thermal conductance, while the adjustable optimization variables are the volumetric compression ratio, the dead volume ratios, the volume phase-lag, the gas characteristics, the hot-to-cold conductance ratio and the regenerator efficiency. New normalized analytical expressions for the operating characteristics of the engine (power, work, efficiency, mean pressure, maximum speed of revolution) are derived, and some dimensionless and dimensional reference numbers are presented, as well as power optimization examples with respect to non-dimensional speed, volume ratio and volume phase-lag angle.

  10. Extreme learning machine based optimal embedding location finder for image steganography.

    Directory of Open Access Journals (Sweden)

    Hayfaa Abdulzahra Atee

Full Text Available In image steganography, determining the optimum location for embedding the secret message precisely, with minimum distortion of the host medium, remains a challenging issue, and an effective approach for selecting the best embedding location with least deformation is still far from being achieved. To attain this goal, we propose a novel high-performance approach to image steganography in which the extreme learning machine (ELM) algorithm is modified to create a supervised mathematical model. The ELM is first trained on a part of an image, or any host medium, before being tested in regression mode; this allows us to choose the optimal location for embedding the message with the best values of the predicted evaluation metrics. Contrast, homogeneity, and other texture features are used for training on a new metric. Furthermore, the developed ELM is exploited to counter over-fitting during training. The performance of the proposed steganography approach is evaluated by computing the correlation, the structural similarity (SSIM) index, fusion matrices, and the mean square error (MSE). The modified ELM is found to outperform existing approaches in terms of imperceptibility, and the experimental results demonstrate that the proposed steganographic approach is highly proficient at preserving the visual information of an image. An improvement in imperceptibility of as much as 28% is achieved compared with existing state-of-the-art methods.
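
    The regression-mode ELM at the core of the approach reduces to a random hidden layer followed by a least-squares fit of the output weights. The sketch below (Python/NumPy, not the authors' model) illustrates this on hypothetical block features standing in for contrast, homogeneity and the other texture descriptors.

        import numpy as np

        class ELMRegressor:
            """Minimal extreme learning machine: random hidden layer, output weights
            fitted in closed form by least squares (regression mode)."""
            def __init__(self, n_hidden=50, seed=0):
                self.n_hidden = n_hidden
                self.rng = np.random.default_rng(seed)

            def _hidden(self, X):
                return np.tanh(X @ self.W + self.b)

            def fit(self, X, y):
                # Input weights and biases are random and never trained.
                self.W = self.rng.normal(size=(X.shape[1], self.n_hidden))
                self.b = self.rng.normal(size=self.n_hidden)
                H = self._hidden(X)
                # Output weights via the Moore-Penrose pseudo-inverse of the activations.
                self.beta = np.linalg.pinv(H) @ y
                return self

            def predict(self, X):
                return self._hidden(X) @ self.beta

        # Hypothetical block features (e.g. contrast, homogeneity, ...) mapped to a
        # quality score used to rank candidate embedding locations.
        rng = np.random.default_rng(1)
        X = rng.random((200, 4))
        y = X @ np.array([0.4, -0.2, 0.3, 0.1]) + 0.01 * rng.normal(size=200)
        model = ELMRegressor(n_hidden=40).fit(X[:150], y[:150])
        print("test MSE:", float(np.mean((model.predict(X[150:]) - y[150:]) ** 2)))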

  11. Extraction of indium from extremely diluted solutions; Gewinnung von Indium aus extrem verduennten Loesungen

    Energy Technology Data Exchange (ETDEWEB)

    Vostal, Radek; Singliar, Ute; Froehlich, Peter [TU Bergakademie Freiberg (Germany). Inst. fuer Technische Chemie

    2017-02-15

The demand for indium is rising with the growth of the electronics industry, where it is mainly used. Therefore, a multistage extraction process was developed to separate indium from a model solution whose composition corresponds to that of sphalerite ore. The initially very low concentration of indium in the solution was significantly increased by several successive extraction and re-extraction steps. The process described is characterized by a low requirement for chemicals and a high purity of the obtained indium oxide.

  12. Generalized concavity in fuzzy optimization and decision analysis

    CERN Document Server

    Ramík, Jaroslav

    2002-01-01

    Convexity of sets in linear spaces, and concavity and convexity of functions, lie at the root of beautiful theoretical results that are at the same time extremely useful in the analysis and solution of optimization problems, including problems of either single objective or multiple objectives. Not all of these results rely necessarily on convexity and concavity; some of the results can guarantee that each local optimum is also a global optimum, giving these methods broader application to a wider class of problems. Hence, the focus of the first part of the book is concerned with several types of generalized convex sets and generalized concave functions. In addition to their applicability to nonconvex optimization, these convex sets and generalized concave functions are used in the book's second part, where decision-making and optimization problems under uncertainty are investigated. Uncertainty in the problem data often cannot be avoided when dealing with practical problems. Errors occur in real-world data for...

  13. Nariai, Bertotti-Robinson, and anti-Nariai solutions in higher dimensions

    International Nuclear Information System (INIS)

    Cardoso, Vitor; Dias, Oscar J.C.; Lemos, Jose P.S.

    2004-01-01

    We find all higher dimensional solutions of Einstein-Maxwell theory that are the topological product of two manifolds of constant curvature. These solutions include the higher dimensional Nariai, Bertotti-Robinson and anti-Nariai solutions and the anti-de Sitter Bertotti-Robinson solutions with toroidal and hyperbolic topology (Plebanski-Hacyan solutions). We give explicit results for any dimension D≥4. These solutions are generated from the appropriate extremal limits of the higher dimensional near-extreme black holes in de Sitter and anti-de Sitter backgrounds. Thus, we also find the mass and charge parameters of higher dimensional extreme black holes as a function of the radius of the degenerate horizon

  14. Optimal Thermal Unit Commitment Solution integrating Renewable Energy with Generator Outage

    Directory of Open Access Journals (Sweden)

    S. Sivasakthi

    2017-06-01

Full Text Available With increasing concern over global climate change, the promotion of renewable energy sources, primarily wind generation, is a welcome move to reduce pollutant emissions from conventional power plants. Integration of wind power generation with the existing power network is an emerging research field. This paper presents a meta-heuristic algorithm-based approach to determine a feasible dispatch solution for a wind-integrated thermal power system. The unit commitment (UC) process aims to identify the best feasible generation schedule of the committed units such that the overall generation cost is reduced, subject to a variety of constraints at each time interval. As the UC formulation involves many variables and system and operational constraints, identifying the best solution is still a research task. Nowadays, it is inevitable to include power system reliability issues in the operation strategy. Generator failures and malfunctions are the prime influencing factors for reliability issues; hence, they are considered in the UC formulation of the wind-integrated thermal power system. The modern evolutionary algorithm known as Grey Wolf Optimization (GWO) is applied to solve the intended UC problem. The potential of the GWO algorithm is validated on standard test systems. Besides, the ramp rate limits are also incorporated in the UC formulation. The simulation results reveal that the GWO algorithm is capable of obtaining economical solutions with good solution quality.
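
    A basic continuous Grey Wolf Optimizer, of which the UC solver above is an application, can be sketched as follows (Python/NumPy; the sphere function is a hypothetical stand-in for the production-cost-plus-penalty objective of the unit commitment problem).

        import numpy as np

        def grey_wolf_optimizer(objective, dim, n_wolves=20, n_iter=200, lb=-1.0, ub=1.0, seed=0):
            """Basic continuous Grey Wolf Optimizer: the pack follows its three best
            members (alpha, beta, delta) while the coefficient a decays from 2 to 0."""
            rng = np.random.default_rng(seed)
            wolves = rng.uniform(lb, ub, size=(n_wolves, dim))
            fitness = np.array([objective(w) for w in wolves])
            best_x, best_f = wolves[np.argmin(fitness)].copy(), fitness.min()
            for it in range(n_iter):
                leaders = wolves[np.argsort(fitness)[:3]]          # alpha, beta, delta
                a = 2.0 * (1.0 - it / n_iter)                      # exploration -> exploitation
                for i in range(n_wolves):
                    estimates = []
                    for leader in leaders:
                        r1, r2 = rng.random(dim), rng.random(dim)
                        A, C = 2.0 * a * r1 - a, 2.0 * r2
                        D = np.abs(C * leader - wolves[i])         # "distance" to this leader
                        estimates.append(leader - A * D)           # position suggested by it
                    wolves[i] = np.clip(np.mean(estimates, axis=0), lb, ub)
                    fitness[i] = objective(wolves[i])
                    if fitness[i] < best_f:
                        best_x, best_f = wolves[i].copy(), fitness[i]
            return best_x, best_f

        # Stand-in objective: a UC application would map a commitment/dispatch vector
        # to total production cost plus penalties for violated constraints.
        sphere = lambda x: float(np.sum(x ** 2))
        x_best, f_best = grey_wolf_optimizer(sphere, dim=10)
        print(round(f_best, 6))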

  15. Machine Learning meets Mathematical Optimization to predict the optimal production of offshore wind parks

    DEFF Research Database (Denmark)

    Fischetti, Martina; Fraccaro, Marco

    2018-01-01

In this paper we propose a combination of Mathematical Optimization and Machine Learning to estimate the value of optimized solutions. In particular, we investigate if a machine, trained on a large number of optimized solutions, could accurately estimate the value of the optimized solution for new… in production between optimized/non-optimized solutions, it is not trivial to understand the potential value of a new site without running a complete optimization. This could be too time consuming if a lot of sites need to be evaluated, therefore we propose to use Machine Learning to quickly estimate… the potential of new sites (i.e., to estimate the optimized production of a site without explicitly running the optimization). To do so, we trained and tested different Machine Learning models on a dataset of 3000+ optimized layouts found by the optimizer. Thanks to the close collaboration with a leading…

  16. A Novel Consensus-Based Particle Swarm Optimization-Assisted Trust-Tech Methodology for Large-Scale Global Optimization.

    Science.gov (United States)

    Zhang, Yong-Feng; Chiang, Hsiao-Dong

    2017-09-01

    A novel three-stage methodology, termed the "consensus-based particle swarm optimization (PSO)-assisted Trust-Tech methodology," to find global optimal solutions for nonlinear optimization problems is presented. It is composed of Trust-Tech methods, consensus-based PSO, and local optimization methods that are integrated to compute a set of high-quality local optimal solutions that can contain the global optimal solution. The proposed methodology compares very favorably with several recently developed PSO algorithms based on a set of small-dimension benchmark optimization problems and 20 large-dimension test functions from the CEC 2010 competition. The analytical basis for the proposed methodology is also provided. Experimental results demonstrate that the proposed methodology can rapidly obtain high-quality optimal solutions that can contain the global optimal solution. The scalability of the proposed methodology is promising.

  17. Optimized bacterial expression and purification of the c-Src catalytic domain for solution NMR studies

    International Nuclear Information System (INIS)

    Piserchio, Andrea; Ghose, Ranajeet; Cowburn, David

    2009-01-01

    Progression of a host of human cancers is associated with elevated levels of expression and catalytic activity of the Src family of tyrosine kinases (SFKs), making them key therapeutic targets. Even with the availability of multiple crystal structures of active and inactive forms of the SFK catalytic domain (CD), a complete understanding of its catalytic regulation is unavailable. Also unavailable are atomic or near-atomic resolution information about their interactions, often weak or transient, with regulating phosphatases and downstream targets. Solution NMR, the biophysical method best suited to tackle this problem, was previously hindered by difficulties in bacterial expression and purification of sufficient quantities of soluble, properly folded protein for economically viable labeling with NMR-active isotopes. Through a choice of optimal constructs, co-expression with chaperones and optimization of the purification protocol, we have achieved the ability to bacterially produce large quantities of the isotopically-labeled CD of c-Src, the prototypical SFK, and of its activating Tyr-phosphorylated form. All constructs produce excellent spectra allowing solution NMR studies of this family in an efficient manner

  18. Non-extremal instantons and wormholes in string theory

    NARCIS (Netherlands)

    Bergshoeff, E; Collinucci, A; Gran, U; Roest, D; Vandoren, S

    2005-01-01

    We construct the most general non-extremal spherically symmetric instanton solution of a gravity-dilatonaxion system with SL(2, R) symmetry, for arbitrary euclidean spacetime dimension D >= 3. A subclass of these solutions describe completely regular wormhole geometries, whose size is determined by

  19. An energy-optimal solution for transportation control of cranes with double pendulum dynamics: Design and experiments

    Science.gov (United States)

    Sun, Ning; Wu, Yiming; Chen, He; Fang, Yongchun

    2018-03-01

    Underactuated cranes play an important role in modern industry. Specifically, in most practical applications, crane systems exhibit significant double pendulum characteristics, which makes the control problem quite challenging. Moreover, most existing planners/controllers obtained with standard methods/techniques for double pendulum cranes cannot minimize the energy consumption when fulfilling the transportation tasks. Therefore, from a practical perspective, this paper proposes an energy-optimal solution for transportation control of double pendulum cranes. By applying the presented approach, the transportation objective, including fast trolley positioning and swing elimination, is achieved with minimized energy consumption, and the residual oscillations are suppressed effectively with all the state constraints being satisfied during the entire transportation process. As far as we know, this is the first energy-optimal solution for transportation control of underactuated double pendulum cranes with various state and control constraints. Hardware experimental results are included to verify the effectiveness of the proposed approach, whose superior performance is demonstrated through experimental comparison with several existing controllers.

  20. Optimization algorithms and applications

    CERN Document Server

    Arora, Rajesh Kumar

    2015-01-01

    Choose the Correct Solution Method for Your Optimization Problem. Optimization: Algorithms and Applications presents a variety of solution techniques for optimization problems, emphasizing concepts rather than rigorous mathematical details and proofs. The book covers both gradient and stochastic methods as solution techniques for unconstrained and constrained optimization problems. It discusses the conjugate gradient method, the Broyden-Fletcher-Goldfarb-Shanno algorithm, the Powell method, penalty functions, the augmented Lagrange multiplier method, sequential quadratic programming, and the method of feasible directions.

  1. Discrete optimization in architecture extremely modular systems

    CERN Document Server

    Zawidzki, Machi

    2017-01-01

    This book is comprised of two parts, both of which explore modular systems: Pipe-Z (PZ) and Truss-Z (TZ), respectively. It presents several methods of creating PZ and TZ structures subjected to discrete optimization. The algorithms presented employ graph-theoretic and heuristic methods. The underlying idea of both systems is to create free-form structures using the minimal number of types of modular elements. PZ is more conceptual, as it forms single-branch mathematical knots with a single type of module. Conversely, TZ is a skeletal system for creating free-form pedestrian ramps and ramp networks among any number of terminals in space. In physical space, TZ uses two types of modules that are mirror reflections of each other. The optimization criteria discussed include: the minimal number of units, maximal adherence to the given guide paths, etc.

  2. Trajectory planning of mobile robots using indirect solution of optimal control method in generalized point-to-point task

    Science.gov (United States)

    Nazemizadeh, M.; Rahimi, H. N.; Amini Khoiy, K.

    2012-03-01

    This paper presents an optimal control strategy for optimal trajectory planning of mobile robots, considering the nonlinear dynamic model and nonholonomic constraints of the system. The nonholonomic constraints are introduced by a nonintegrable set of differential equations which represent a kinematic restriction on the motion. Lagrange's principle is employed to derive the nonlinear equations of the system. Then, the optimal path planning of the mobile robot is formulated as an optimal control problem. To set up the problem, the nonlinear equations of the system are imposed as constraints, and a minimum-energy objective function is defined. To solve the problem, an indirect solution of the optimal control method is employed, and the conditions of optimality are derived as a set of coupled nonlinear differential equations. The optimality equations are solved numerically, and various simulations are performed for a nonholonomic mobile robot to illustrate the effectiveness of the proposed method.
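
    The indirect approach described above, in which the optimality conditions form a two-point boundary value problem, can be illustrated on a much simpler system than the paper's nonholonomic robot: a minimum-energy double integrator solved with SciPy's solve_bvp. Everything below is an illustrative stand-in, not the authors' model:

      import numpy as np
      from scipy.integrate import solve_bvp

      # Minimize the integral of u^2 for x' = v, v' = u, moving from rest at x=0 to rest at x=1.
      # State y = [x, v, lam_x, lam_v]; the optimal control is u* = -lam_v / 2 from dH/du = 0.
      def odes(t, y):
          x, v, lam_x, lam_v = y
          return np.vstack([v, -lam_v / 2.0, np.zeros_like(t), -lam_x])

      def bc(ya, yb):
          # Boundary conditions: x(0)=0, v(0)=0, x(1)=1, v(1)=0
          return np.array([ya[0], ya[1], yb[0] - 1.0, yb[1]])

      t = np.linspace(0.0, 1.0, 50)
      sol = solve_bvp(odes, bc, t, np.zeros((4, t.size)))
      u_optimal = -sol.sol(t)[3] / 2.0   # recovered optimal control history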

  3. Optimization of foaming properties of sludge protein solution by 60Co γ-ray/H2O2 using response surface methodology

    International Nuclear Information System (INIS)

    Xiang, Yulin; Xiang, Yuxiu; Wang, Lipeng; Zhang, Zhifang

    2016-01-01

    Response surface methodology and a Box-Behnken experimental design were used to model and optimize the operational parameters of the 60Co γ-ray/H2O2 treatment affecting the foaming properties of the sludge protein solution. The four variables involved in this research were the protein solution concentration, H2O2 concentration, pH and dose. In the range studied, statistical analysis of the results showed that the selected variables had a significant effect on protein foaming properties. The optimized conditions were: protein solution concentration 26.50% (v/v), H2O2 concentration 0.30% (v/v), pH value 9.0, and dose 4.81 kGy. Under optimal conditions, the foamability and foam stability approached 23.3 cm and 21.3 cm, respectively. Regression analysis with R2 values of 0.9923 (foamability) and 0.9922 (foam stability) indicated a satisfactory correlation between the experimental data and predicted values (response). In addition, based on a feasibility analysis, the 60Co γ-ray/H2O2 method can improve the odor and color of the protein foaming solution. - Highlights: • Effects of 60Co γ-ray/H2O2 on foaming properties of sludge protein were studied. • Response surface methodology and Box-Behnken experimental design were applied. • The 60Co γ-ray/H2O2 method can improve foaming properties of the protein solution.
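
    Response surface methodology of this kind amounts to fitting a second-order polynomial to the measured responses and then optimizing the fitted surface. A minimal sketch with invented data (the factor levels and response values below are not the paper's measurements):

      import numpy as np
      from scipy.optimize import minimize

      # Synthetic 3x3 factorial data: x1 = dose (kGy), x2 = H2O2 (% v/v), y = measured foamability (cm).
      levels_x1, levels_x2 = [2.0, 3.5, 5.0], [0.1, 0.3, 0.5]
      X = np.array([[a, b] for a in levels_x1 for b in levels_x2])
      y = np.array([14.0, 17.0, 15.0, 18.0, 22.0, 19.0, 17.0, 21.0, 18.0])

      def quad_terms(x1, x2):
          # Second-order response-surface model: 1, x1, x2, x1*x2, x1^2, x2^2
          return np.column_stack([np.ones_like(x1), x1, x2, x1 * x2, x1 ** 2, x2 ** 2])

      beta, *_ = np.linalg.lstsq(quad_terms(X[:, 0], X[:, 1]), y, rcond=None)

      def neg_response(x):
          # Negative fitted response, so that minimizing it maximizes predicted foamability.
          return -(quad_terms(np.array([x[0]]), np.array([x[1]])) @ beta)[0]

      opt = minimize(neg_response, x0=[3.5, 0.3], bounds=[(2.0, 5.0), (0.1, 0.5)])
      best_settings, predicted_peak = opt.x, -opt.fun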

  4. Stochastic optimization methods

    CERN Document Server

    Marti, Kurt

    2005-01-01

    Optimization problems arising in practice involve random parameters. For the computation of robust optimal solutions, i.e., optimal solutions being insensitive with respect to random parameter variations, deterministic substitute problems are needed. Based on the distribution of the random data, and using decision theoretical concepts, optimization problems under stochastic uncertainty are converted into deterministic substitute problems. Due to the occurring probabilities and expectations, approximative solution techniques must be applied. Deterministic and stochastic approximation methods and their analytical properties are provided: Taylor expansion, regression and response surface methods, probability inequalities, First Order Reliability Methods, convex approximation/deterministic descent directions/efficient points, stochastic approximation methods, differentiation of probability and mean value functions. Convergence results of the resulting iterative solution procedures are given.
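
    One common deterministic substitute of the kind described above is the sample average approximation, where the expectation is replaced by an average over sampled scenarios. A minimal sketch on an invented newsvendor-style cost (not an example taken from the book):

      import numpy as np
      from scipy.optimize import minimize

      rng = np.random.default_rng(1)
      xi = rng.normal(loc=10.0, scale=2.0, size=5000)   # sampled random demand scenarios

      def expected_cost(x, holding=1.0, shortage=4.0):
          # Newsvendor-style cost averaged over the scenarios (the deterministic substitute problem).
          over = np.maximum(x - xi, 0.0)
          under = np.maximum(xi - x, 0.0)
          return np.mean(holding * over + shortage * under)

      res = minimize(lambda x: expected_cost(x[0]), x0=[8.0], method="Nelder-Mead")
      robust_order = res.x[0]   # approximately the 80% quantile of the sampled demand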

  5. Resilience Design Patterns - A Structured Approach to Resilience at Extreme Scale (version 1.0)

    Energy Technology Data Exchange (ETDEWEB)

    Hukerikar, Saurabh [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Engelmann, Christian [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)

    2016-10-01

    constraints and opportunities for solutions deployed at various layers of the system stack. The framework may be used to establish mechanisms and interfaces to coordinate flexible fault management across hardware and software components. The framework also enables optimization of the cost-benefit trade-os among performance, resilience, and power consumption. The overall goal of this work is to enable a systematic methodology for the design and evaluation of resilience technologies in extreme-scale HPC systems that keep scientific applications running to a correct solution in a timely and cost-ecient manner in spite of frequent faults, errors, and failures of various types.

  6. Non-extremal instantons and wormholes in string theory

    NARCIS (Netherlands)

    Bergshoeff, E.; Collinucci, A.; Gran, U.; Roest, D.; Vandoren, S.

    2004-01-01

    We construct the most general non-extremal spherically symmetric instanton solution of a gravity-dilaton-axion system with SL(2,R) symmetry, for arbitrary euclidean spacetime dimension D ≥ 3. A subclass of these solutions describe completely regular wormhole geometries, whose size is determined

  7. Optimal simultaneous superpositioning of multiple structures with missing data.

    Science.gov (United States)

    Theobald, Douglas L; Steindel, Phillip A

    2012-08-01

    Superpositioning is an essential technique in structural biology that facilitates the comparison and analysis of conformational differences among topologically similar structures. Performing a superposition requires a one-to-one correspondence, or alignment, of the point sets in the different structures. However, in practice, some points are usually 'missing' from several structures, for example, when the alignment contains gaps. Current superposition methods deal with missing data simply by superpositioning a subset of points that are shared among all the structures. This practice is inefficient, as it ignores important data, and it fails to satisfy the common least-squares criterion. In the extreme, disregarding missing positions prohibits the calculation of a superposition altogether. Here, we present a general solution for determining an optimal superposition when some of the data are missing. We use the expectation-maximization algorithm, a classic statistical technique for dealing with incomplete data, to find both maximum-likelihood solutions and the optimal least-squares solution as a special case. The methods presented here are implemented in THESEUS 2.0, a program for superpositioning macromolecular structures. ANSI C source code and selected compiled binaries for various computing platforms are freely available under the GNU open source license from http://www.theseus3d.org. dtheobald@brandeis.edu Supplementary data are available at Bioinformatics online.
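
    THESEUS itself uses a maximum-likelihood EM procedure; the ordinary least-squares superposition it generalizes can be sketched with the SVD-based (Kabsch) rotation, restricted here to the positions shared by both structures, which is exactly the wasteful subsetting the paper's method avoids. A rough sketch:

      import numpy as np

      def superpose(moving, reference):
          """Least-squares rigid-body fit of `moving` onto `reference` (both (n, 3) arrays
          of the shared, non-missing positions). Returns fitted coordinates and RMSD."""
          mu_m, mu_r = moving.mean(axis=0), reference.mean(axis=0)
          A, B = moving - mu_m, reference - mu_r
          U, _, Vt = np.linalg.svd(A.T @ B)
          d = np.sign(np.linalg.det(U @ Vt))            # guard against improper rotations
          fitted = A @ U @ np.diag([1.0, 1.0, d]) @ Vt + mu_r
          rmsd = np.sqrt(np.mean(np.sum((fitted - reference) ** 2, axis=1)))
          return fitted, rmsd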

  8. Numerical and Analytical Study of Optimal Low-Thrust Limited-Power Transfers between Close Circular Coplanar Orbits

    Directory of Open Access Journals (Sweden)

    Sandro da Silva Fernandes

    2007-01-01

    Full Text Available A numerical and analytical study of optimal low-thrust limited-power trajectories for simple transfer (no rendezvous between close circular coplanar orbits in an inverse-square force field is presented. The numerical study is carried out by means of an indirect approach of the optimization problem in which the two-point boundary value problem, obtained from the set of necessary conditions describing the optimal solutions, is solved through a neighboring extremal algorithm based on the solution of the linearized two-point boundary value problem through Riccati transformation. The analytical study is provided by a linear theory which is expressed in terms of nonsingular elements and is determined through the canonical transformation theory. The fuel consumption is taken as the performance criterion and the analysis is carried out considering various radius ratios and transfer durations. The results are compared to the ones provided by a numerical method based on gradient techniques.

  9. Application of integer programming on logistics solution for load transportation: the solver tool and its limitations in the search for the optimal solution

    Directory of Open Access Journals (Sweden)

    Ricardo França Santos

    2012-01-01

    Full Text Available This work addresses a typical logistics problem of the Brazilian Navy regarding the allocation, transportation and distribution of refrigerated goods to Military Organizations within Grande Rio (RJ). After a brief review of the literature on Linear/Integer Programming and some of its applications, we proposed the use of Integer Programming, using Excel's Solver as a tool for obtaining the optimal load configuration for the fleet and hence the lowest distribution costs while meeting the demand schedule. A first attempt with a single spreadsheet satisfied the assumptions but could not find a convergent solution free of degeneration problems within a reasonable solution time. A second solution was proposed that separates the problem into three phases, which allowed us to highlight the potential and limitations of the Solver tool. This study showed the importance of formulating a realistic model and of a detailed critical analysis, as seen in the lack of convergence of the first solution and the success achieved by the second one.
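
    A miniature allocation/transportation problem of the same flavor can be written as an integer program in a few lines; the sketch below uses the PuLP modelling library instead of Excel's Solver, and all depots, demands and costs are invented:

      from pulp import LpProblem, LpMinimize, LpVariable, lpSum, LpStatus

      depots, units = ["D1", "D2"], ["U1", "U2", "U3"]
      cost = {("D1", "U1"): 4, ("D1", "U2"): 6, ("D1", "U3"): 9,
              ("D2", "U1"): 5, ("D2", "U2"): 4, ("D2", "U3"): 7}   # cost per load (invented)
      supply = {"D1": 40, "D2": 30}
      demand = {"U1": 20, "U2": 25, "U3": 15}

      prob = LpProblem("refrigerated_goods_distribution", LpMinimize)
      x = {(d, u): LpVariable(f"x_{d}_{u}", lowBound=0, cat="Integer")
           for d in depots for u in units}

      prob += lpSum(cost[d, u] * x[d, u] for d in depots for u in units)   # total distribution cost
      for d in depots:
          prob += lpSum(x[d, u] for u in units) <= supply[d]               # depot capacity
      for u in units:
          prob += lpSum(x[d, u] for d in depots) >= demand[u]              # unit demand

      prob.solve()
      status = LpStatus[prob.status]
      plan = {k: v.value() for k, v in x.items() if v.value() and v.value() > 0}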

  10. [Optimization of benzalkonium chloride concentration in 0.0015% tafluprost ophthalmic solution from the points of ocular surface safety and preservative efficacy].

    Science.gov (United States)

    Asada, Hiroyuki; Takaoka-Shichijo, Yuko; Nakamura, Masatsugu; Kimura, Akio

    2010-06-01

    Optimization of the benzalkonium chloride (alkyl dimethylbenzylammonium chloride: BAK) concentration as preservative in 0.0015% tafluprost ophthalmic solution (Tapros 0.0015% ophthalmic solution), an anti-glaucoma medicine, was examined from the standpoints of ocular surface safety and preservative efficacy. BAK-C12, which is dodecyl dimethylbenzylammonium chloride, and BAKmix, which is a mixture of dodecyl, tetradecyl and hexadecyl dimethylbenzylammonium chloride, were used in this study. The effects of the BAK-C12 concentration and of the BAK type (BAK-C12 or BAKmix) in tafluprost ophthalmic solution on ocular surface safety were evaluated using the in vitro SV40-immortalized human corneal epithelium cell line (HCE-T). Following treatment with tafluprost ophthalmic solutions containing BAK-C12, a concentration-dependent effect on the viability of HCE-T cells was observed. The viability of HCE-T cells after 5-minute treatment with solutions containing 0.001% to 0.003% BAK-C12 was at the same level as after treatment with the solution without BAK. Tafluprost ophthalmic solution with 0.01% BAK-C12 was safer for the ocular surface than the same solution with 0.01% BAKmix. Preservative-effectiveness tests of tafluprost ophthalmic solutions with various concentrations of BAK-C12 were performed according to the Japanese Pharmacopoeia (JP), and solutions with more than 0.0005% BAK-C12 conformed to the JP criteria. It was concluded that 0.0005% to 0.003% BAK-C12 in tafluprost ophthalmic solution was optimal, namely, well balanced between ocular surface safety and preservative efficacy.

  11. Coupled Low-thrust Trajectory and System Optimization via Multi-Objective Hybrid Optimal Control

    Science.gov (United States)

    Vavrina, Matthew A.; Englander, Jacob Aldo; Ghosh, Alexander R.

    2015-01-01

    The optimization of low-thrust trajectories is tightly coupled with the spacecraft hardware. Trading trajectory characteristics with system parameters to identify viable solutions and determine mission sensitivities across discrete hardware configurations is labor intensive. Local independent optimization runs can sample the design space, but a global exploration that resolves the relationships between the system variables across multiple objectives enables a full mapping of the optimal solution space. A multi-objective, hybrid optimal control algorithm is formulated using a multi-objective genetic algorithm as an outer loop systems optimizer around a global trajectory optimizer. The coupled problem is solved simultaneously to generate Pareto-optimal solutions in a single execution. The automated approach is demonstrated on two boulder return missions.

  12. Optimal configuration of power grid sources based on optimal particle swarm algorithm

    Science.gov (United States)

    Wen, Yuanhua

    2018-04-01

    In order to optimize the configuration of power grid sources, an improved particle swarm optimization algorithm is proposed. First, the concepts of multi-objective optimization and the Pareto solution set are reviewed. Then, the performance of the classical genetic algorithm, the classical particle swarm optimization algorithm and the improved particle swarm optimization algorithm is analyzed, and the three algorithms are simulated respectively. Comparison of the test results proves the superiority of the improved algorithm in convergence and optimization performance, which lays the foundation for the subsequent solution of the micro-grid power configuration problem.
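
    The classical particle swarm optimization baseline referred to above is essentially the following loop (the inertia and acceleration coefficients are common textbook values, and the objective is a toy function rather than a grid-source model):

      import numpy as np

      def pso(objective, dim, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
          rng = np.random.default_rng(seed)
          pos = rng.uniform(-5.0, 5.0, size=(n_particles, dim))
          vel = np.zeros_like(pos)
          pbest, pbest_val = pos.copy(), np.apply_along_axis(objective, 1, pos)
          gbest = pbest[np.argmin(pbest_val)]
          for _ in range(iters):
              r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
              vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
              pos = pos + vel
              vals = np.apply_along_axis(objective, 1, pos)
              improved = vals < pbest_val
              pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
              gbest = pbest[np.argmin(pbest_val)]
          return gbest, pbest_val.min()

      best, best_val = pso(lambda x: np.sum(x ** 2), dim=4)   # toy objective for illustration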

  13. Homogenized blocked arcs for multicriteria optimization of radiotherapy: Analytical and numerical solutions

    International Nuclear Information System (INIS)

    Fenwick, John D.; Pardo-Montero, Juan

    2010-01-01

    Purpose: Homogenized blocked arcs are intuitively appealing as basis functions for multicriteria optimization of rotational radiotherapy. Such arcs avoid an organ-at-risk (OAR), spread dose out well over the rest-of-body (ROB), and deliver homogeneous doses to a planning target volume (PTV) using intensity modulated fluence profiles, obtainable either from closed-form solutions or iterative numerical calculations. Here, the analytic and iterative arcs are compared. Methods: Dose-distributions have been calculated for nondivergent beams, both including and excluding scatter, beam penumbra, and attenuation effects, which are left out of the derivation of the analytic arcs. The most straightforward analytic arc is created by truncating the well-known Brahme, Roos, and Lax (BRL) solution, cutting its uniform dose region down from an annulus to a smaller nonconcave region lying beyond the OAR. However, the truncation leaves behind high dose hot-spots immediately on either side of the OAR, generated by very high BRL fluence levels just beyond the OAR. These hot-spots can be eliminated using alternative analytical solutions "C" and "L," which, respectively, deliver constant and linearly rising fluences in the gap region between the OAR and PTV (before truncation). Results: Measured in terms of PTV dose homogeneity, ROB dose-spread, and OAR avoidance, C solutions generate better arc dose-distributions than L when scatter, penumbra, and attenuation are left out of the dose modeling. Including these factors, L becomes the best analytical solution. However, the iterative approach generates better dose-distributions than any of the analytical solutions because it can account and compensate for penumbra and scatter effects. Using the analytical solutions as starting points for the iterative methodology, dose-distributions almost as good as those obtained using the conventional iterative approach can be calculated very rapidly. Conclusions: The iterative methodology is

  14. Evaluation of Persian Professional Web Social Networks' Features, to Provide a Suitable Solution for Optimization of These Networks in Iran

    Directory of Open Access Journals (Sweden)

    Nadjla Hariri

    2013-03-01

    Full Text Available This study aimed to determine the status of Persian professional web social networks' features and provide a suitable solution for optimization of these networks in Iran. The research methods were library research and evaluative method, and study population consisted of 10 Persian professional web social networks. In this study, for data collection, a check list of social networks important tools and features was used. According to the results, “Cloob”, “IR Experts” and “Doreh” were the most compatible networks with the criteria of social networks. Finally, some solutions were presented for optimization of capabilities of Persian professional web social networks.

  15. Genetic Algorithm Optimizes Q-LAW Control Parameters

    Science.gov (United States)

    Lee, Seungwon; von Allmen, Paul; Petropoulos, Anastassios; Terrile, Richard

    2008-01-01

    A document discusses a multi-objective genetic algorithm designed to optimize Lyapunov feedback control law (Q-law) parameters in order to efficiently find Pareto-optimal solutions for low-thrust trajectories for electric propulsion systems. These would be propellant-optimal solutions for a given flight time, or flight-time-optimal solutions for a given propellant requirement. The approximate solutions are used as good initial solutions for high-fidelity optimization tools. When the good initial solutions are used, the high-fidelity optimization tools quickly converge to a locally optimal solution near the initial solution. Q-law control parameters are represented as real-valued genes in the genetic algorithm. The performances of the Q-law control parameters are evaluated in the multi-objective space (flight time vs. propellant mass) and sorted by the non-dominated sorting method that assigns a better fitness value to the solutions that are dominated by a fewer number of other solutions. With the ranking result, the genetic algorithm encourages the solutions with higher fitness values to participate in the reproduction process, improving the solutions in the evolution process. The population of solutions converges to the Pareto front that is permitted within the Q-law control parameter space.
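
    The fitness assignment described above, ranking each solution by how many others dominate it, can be sketched directly for the two objectives mentioned (flight time vs. propellant mass); the numbers below are invented:

      def dominates(a, b):
          """True if solution a is no worse than b in both objectives and strictly better in at least one."""
          return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

      def domination_counts(objs):
          """For each (flight_time, propellant_mass) pair, count how many other solutions dominate it;
          a count of 0 marks a Pareto-front member, and lower counts mean better fitness."""
          return [sum(dominates(other, point) for other in objs if other is not point)
                  for point in objs]

      population = [(120.0, 310.0), (150.0, 260.0), (140.0, 300.0), (160.0, 320.0)]
      ranks = domination_counts(population)   # here: [0, 0, 0, 3] -- only the last point is dominated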

  16. A Generalized Measure for the Optimal Portfolio Selection Problem and its Explicit Solution

    Directory of Open Access Journals (Sweden)

    Zinoviy Landsman

    2018-03-01

    Full Text Available In this paper, we offer a novel class of utility functions applied to optimal portfolio selection. This class incorporates as special cases important measures such as the mean-variance, Sharpe ratio, mean-standard deviation and others. We provide an explicit solution to the problem of optimal portfolio selection based on this class. Furthermore, we show that each measure in this class generally reduces to an efficient frontier that coincides with or belongs to the classical mean-variance efficient frontier. In addition, a condition is provided for the existence of a one-to-one correspondence between the parameter of this class of utility functions and the trade-off parameter λ in the mean-variance utility function. This correspondence essentially provides insight into the choice of this parameter. We illustrate our results by taking a portfolio of stocks from the National Association of Securities Dealers Automated Quotation (NASDAQ).
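
    The classical mean-variance trade-off that this class reduces to can be computed numerically as follows; the expected returns, covariance matrix and trade-off parameter are invented for illustration:

      import numpy as np
      from scipy.optimize import minimize

      mu = np.array([0.08, 0.12, 0.10])                       # expected returns (illustrative)
      Sigma = np.array([[0.10, 0.02, 0.01],
                        [0.02, 0.15, 0.03],
                        [0.01, 0.03, 0.12]])                   # covariance matrix (illustrative)
      lam = 2.0                                                # trade-off (risk-aversion) parameter

      def neg_utility(w):
          return -(mu @ w - lam * w @ Sigma @ w)               # mean-variance utility to maximize

      cons = ({"type": "eq", "fun": lambda w: np.sum(w) - 1.0},)   # weights sum to one
      res = minimize(neg_utility, x0=np.full(3, 1 / 3), constraints=cons)
      weights = res.x                                          # one point on the MV efficient frontier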

  17. Searching for the Optimal Sampling Solution: Variation in Invertebrate Communities, Sample Condition and DNA Quality.

    Directory of Open Access Journals (Sweden)

    Martin M Gossner

    Full Text Available There is a great demand for standardising biodiversity assessments in order to allow optimal comparison across research groups. For invertebrates, pitfall or flight-interception traps are commonly used, but sampling solution differs widely between studies, which could influence the communities collected and affect sample processing (morphological or genetic. We assessed arthropod communities with flight-interception traps using three commonly used sampling solutions across two forest types and two vertical strata. We first considered the effect of sampling solution and its interaction with forest type, vertical stratum, and position of sampling jar at the trap on sample condition and community composition. We found that samples collected in copper sulphate were more mouldy and fragmented relative to other solutions which might impair morphological identification, but condition depended on forest type, trap type and the position of the jar. Community composition, based on order-level identification, did not differ across sampling solutions and only varied with forest type and vertical stratum. Species richness and species-level community composition, however, differed greatly among sampling solutions. Renner solution was highly attractant for beetles and repellent for true bugs. Secondly, we tested whether sampling solution affects subsequent molecular analyses and found that DNA barcoding success was species-specific. Samples from copper sulphate produced the fewest successful DNA sequences for genetic identification, and since DNA yield or quality was not particularly reduced in these samples additional interactions between the solution and DNA must also be occurring. Our results show that the choice of sampling solution should be an important consideration in biodiversity studies. Due to the potential bias towards or against certain species by Ethanol-containing sampling solution we suggest ethylene glycol as a suitable sampling solution when

  18. Optimization of the indirect neutron activation technique for the determination of boron in aqueous solutions

    International Nuclear Information System (INIS)

    Luz, L.C.Q.P. da.

    1984-01-01

    The purpose of this work was the development of an instrumental method for the optimization of the indirect neutron activation analysis of boron in aqueous solutions. The optimization took into account the analytical parameters under laboratory conditions: activation carried out with a 241Am/Be neutron source and detection of the activity induced in vanadium with two NaI(Tl) gamma spectrometers. A calibration curve was thus obtained for a concentration range of 0 to 5000 ppm B. Later on, experimental models were built in order to study the feasibility of automation. The analysis of boron was finally performed, under the previously established conditions, with an automated system comprising the operations of transport, irradiation and counting. An improvement in the quality of the analysis was observed, with boron concentrations as low as 5 ppm being determined with a precision level better than 0.4%. The experimental model features all basic design elements for an automated device for the analysis of boron in aqueous solutions wherever this is required, as in the operation of nuclear reactors. (Author)

  19. Optimal pin enrichment distributions in nuclear reactor fuel bundles

    International Nuclear Information System (INIS)

    Lim, E.Y.

    1976-01-01

    A methodology has been developed to determine the fuel pin enrichment distribution that yields the best approximation to a prescribed power distribution in nuclear reactor fuel bundles. The problem is formulated as an optimization problem in which the optimal pin enrichments minimize the sum of squared deviations between the actual and prescribed fuel pin powers. A constant average enrichment constraint is imposed to ensure that a suitable value of reactivity is present in the bundle. When constraints are added that limit the fuel pins to a few enrichment types, one must determine not only the optimal values of the enrichment types but also the optimal distribution of the enrichment types amongst the pins. A matrix of boolean variables is used to describe the assignment of enrichment types to the pins. This nonlinear mixed integer programming problem may be rigorously solved with either exhaustive enumeration or branch and bound methods using a modification of the algorithm from the continuous problem as a suboptimization. Unfortunately these methods are extremely cumbersome and computationally overwhelming. Solutions which require only a moderate computational effort are obtained by assuming that the fuel pin enrichments in this problem are ordered as in the solution to the continuous problem. Under this assumption search schemes using either exhaustive enumeration or branch and bound become computationally attractive. An adaptation of the Hooke-Jeeves pattern search technique is shown to be especially efficient.
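
    The continuous sub-problem described above, least-squares matching of prescribed pin powers under a fixed average enrichment, is a small equality-constrained quadratic program; with an assumed linear enrichment-to-power map it reduces to one KKT linear system (all quantities below are illustrative and normalized, not reactor data):

      import numpy as np

      rng = np.random.default_rng(0)
      n_pins = 6
      A = np.eye(n_pins) + 0.05 * rng.random((n_pins, n_pins))   # illustrative enrichment-to-power map
      p_target = np.linspace(0.9, 1.1, n_pins)                   # prescribed relative pin powers
      e_avg = 1.0                                                 # required average (relative) enrichment

      # Minimize ||A e - p_target||^2 subject to mean(e) = e_avg; the KKT conditions give:
      #   [ 2 A^T A   c ] [ e      ]   [ 2 A^T p_target ]
      #   [ c^T       0 ] [ lambda ] = [ e_avg          ]    with c = ones / n_pins
      c = np.full(n_pins, 1.0 / n_pins)
      K = np.block([[2 * A.T @ A, c[:, None]], [c[None, :], np.zeros((1, 1))]])
      rhs = np.concatenate([2 * A.T @ p_target, [e_avg]])
      e_opt = np.linalg.solve(K, rhs)[:n_pins]                    # optimal continuous pin enrichments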

  20. Automation Rover for Extreme Environments

    Science.gov (United States)

    Sauder, Jonathan; Hilgemann, Evan; Johnson, Michael; Parness, Aaron; Hall, Jeffrey; Kawata, Jessie; Stack, Kathryn

    2017-01-01

    Almost 2,300 years ago the ancient Greeks built the Antikythera automaton. This purely mechanical computer accurately predicted past and future astronomical events long before electronics existed. Automata have been credibly used for hundreds of years as computers, art pieces, and clocks. However, in the past several decades automata have become less popular as the capabilities of electronics increased, leaving them an unexplored solution for robotic spacecraft. The Automaton Rover for Extreme Environments (AREE) proposes an exciting paradigm shift from electronics to a fully mechanical system, enabling longitudinal exploration of the most extreme environments within the solar system.

  1. Global stability, periodic solutions, and optimal control in a nonlinear differential delay model

    Directory of Open Access Journals (Sweden)

    Anatoli F. Ivanov

    2010-09-01

    Full Text Available A nonlinear differential equation with delay serving as a mathematical model of several applied problems is considered. Sufficient conditions for the global asymptotic stability and for the existence of periodic solutions are given. Two particular applications are treated in detail. The first one is a blood cell production model by Mackey, for which new periodicity criteria are derived. The second application is a modified economic model with delay due to Ramsey. An optimization problem for a maximal consumption is stated and solved for the latter.

  2. Solution of a General Linear Complementarity Problem Using Smooth Optimization and Its Application to Bilinear Programming and LCP

    International Nuclear Information System (INIS)

    Fernandes, L.; Friedlander, A.; Guedes, M.; Judice, J.

    2001-01-01

    This paper addresses a General Linear Complementarity Problem (GLCP) that has found applications in global optimization. It is shown that a solution of the GLCP can be computed by finding a stationary point of a differentiable function over a set defined by simple bounds on the variables. The application of this result to the solution of bilinear programs and LCPs is discussed. Some computational evidence of its usefulness is included in the last part of the paper

  3. Optimal solutions for the evolution of a social obesity epidemic model

    Science.gov (United States)

    Sikander, Waseem; Khan, Umar; Mohyud-Din, Syed Tauseef

    2017-06-01

    In this work, a novel modification of the traditional homotopy perturbation method (HPM) is proposed by embedding an auxiliary parameter in the boundary condition. The scheme is used to carry out a mathematical evaluation of the social obesity epidemic model. The incidence of excess weight and obesity in the adult population and the prediction of its behavior in the coming years are analyzed using the modified algorithm. The proposed method improves the convergence of the approximate analytical solution over the domain of the problem. Furthermore, a convenient way is considered for choosing an optimal value of the auxiliary parameter via minimizing the total residual error. The graphical comparison of the obtained results with the standard HPM explicitly reveals the accuracy and efficiency of the developed scheme.

  4. Scaling Optimization of the SIESTA MHD Code

    Science.gov (United States)

    Seal, Sudip; Hirshman, Steven; Perumalla, Kalyan

    2013-10-01

    SIESTA is a parallel three-dimensional plasma equilibrium code capable of resolving magnetic islands at high spatial resolutions for toroidal plasmas. Originally designed to exploit small-scale parallelism, SIESTA has now been scaled to execute efficiently over several thousands of processors P. This scaling improvement was accomplished with minimal intrusion to the execution flow of the original version. First, the efficiency of the iterative solutions was improved by integrating the parallel tridiagonal block solver code BCYCLIC. Krylov-space generation in GMRES was then accelerated using a customized parallel matrix-vector multiplication algorithm. Novel parallel Hessian generation algorithms were integrated and memory access latencies were dramatically reduced through loop nest optimizations and data layout rearrangement. These optimizations sped up equilibria calculations by factors of 30-50. It is possible to compute solutions with granularity N/P near unity on extremely fine radial meshes (N > 1024 points). Grid separation in SIESTA, which manifests itself primarily in the resonant components of the pressure far from rational surfaces, is strongly suppressed by finer meshes. Large problem sizes of up to 300 K simultaneous non-linear coupled equations have been solved on the NERSC supercomputers. Work supported by U.S. DOE under Contract DE-AC05-00OR22725 with UT-Battelle, LLC.

  5. Optimization of the test intervals of a nuclear safety system by genetic algorithms, solution clustering and fuzzy preference assignment

    International Nuclear Information System (INIS)

    Zio, E.; Bazzo, R.

    2010-01-01

    In this paper, a procedure is developed for identifying a number of representative solutions manageable for decision-making in a multiobjective optimization problem concerning the test intervals of the components of a safety system of a nuclear power plant. Pareto Front solutions are identified by a genetic algorithm and then clustered by subtractive clustering into 'families'. On the basis of the decision maker's preferences, each family is then synthetically represented by a 'head of the family' solution. This is done by introducing a scoring system that ranks the solutions with respect to the different objectives: a fuzzy preference assignment is employed to this purpose. Level Diagrams are then used to represent, analyze and interpret the Pareto Fronts reduced to the head-of-the-family solutions
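
    The subtractive clustering step used to group Pareto solutions into 'families' can be sketched as the classic potential-based procedure below; the radius and stopping ratio are illustrative defaults, not the values used in the paper:

      import numpy as np

      def subtractive_clustering(points, radius=0.5, stop_ratio=0.15):
          """Return indices of cluster-centre ('head of the family') candidates among `points`
          (rows = normalized Pareto solutions). Classic potential-based subtractive clustering."""
          alpha = 4.0 / radius ** 2
          beta = 4.0 / (1.5 * radius) ** 2
          d2 = np.sum((points[:, None, :] - points[None, :, :]) ** 2, axis=-1)
          potential = np.exp(-alpha * d2).sum(axis=1)
          centres, first_peak = [], potential.max()
          while True:
              idx = int(np.argmax(potential))
              if potential[idx] < stop_ratio * first_peak:
                  break
              centres.append(idx)
              # Subtract the influence of the newly accepted centre from every point's potential.
              potential = potential - potential[idx] * np.exp(-beta * d2[idx])
          return centres

      pareto = np.random.default_rng(2).random((40, 2))   # stand-in for normalized objective values
      families = subtractive_clustering(pareto)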

  6. Chiral gravity, log gravity, and extremal CFT

    International Nuclear Information System (INIS)

    Maloney, Alexander; Song Wei; Strominger, Andrew

    2010-01-01

    We show that the linearization of all exact solutions of classical chiral gravity around the AdS3 vacuum has positive energy. Nonchiral and negative-energy solutions of the linearized equations are infrared divergent at second order, and so are removed from the spectrum. In other words, chirality is confined and the equations of motion have linearization instabilities. We prove that the only stationary, axially symmetric solutions of chiral gravity are BTZ black holes, which have positive energy. It is further shown that classical log gravity (the theory with logarithmically relaxed boundary conditions) has finite asymptotic symmetry generators but is not chiral and hence may be dual at the quantum level to a logarithmic conformal field theory (CFT). Moreover we show that log gravity contains chiral gravity within it as a decoupled charge superselection sector. We formally evaluate the Euclidean sum over geometries of chiral gravity and show that it gives precisely the holomorphic extremal CFT partition function. The modular invariance and integrality of the expansion coefficients of this partition function are consistent with the existence of an exact quantum theory of chiral gravity. We argue that the problem of quantizing chiral gravity is the holographic dual of the problem of constructing an extremal CFT, while quantizing log gravity is dual to the problem of constructing a logarithmic extremal CFT.

  7. Hybrid Algorithm of Particle Swarm Optimization and Grey Wolf Optimizer for Improving Convergence Performance

    Directory of Open Access Journals (Sweden)

    Narinder Singh

    2017-01-01

    Full Text Available A new hybrid nature-inspired algorithm called HPSOGWO is presented, combining Particle Swarm Optimization (PSO) and the Grey Wolf Optimizer (GWO). The main idea is to complement the exploitation ability of Particle Swarm Optimization with the exploration ability of the Grey Wolf Optimizer, so as to exploit both variants' strengths. Some unimodal, multimodal, and fixed-dimension multimodal test functions are used to check the solution quality and performance of the HPSOGWO variant. The numerical and statistical results show that the hybrid variant significantly outperforms the PSO and GWO variants in terms of solution quality, solution stability, convergence speed, and ability to find the global optimum.

  8. Test scheduling optimization for 3D network-on-chip based on cloud evolutionary algorithm of Pareto multi-objective

    Science.gov (United States)

    Xu, Chuanpei; Niu, Junhao; Ling, Jing; Wang, Suyan

    2018-03-01

    In this paper, we present a parallel test strategy for bandwidth division multiplexing under the test access mechanism bandwidth constraint. The Pareto solution set is combined with a cloud evolutionary algorithm to optimize the test time and power consumption of a three-dimensional network-on-chip (3D NoC). In the proposed method, all individuals in the population are sorted in non-dominated order and allocated to the corresponding level. Individuals with extreme and similar characteristics are then removed. To increase the diversity of the population and prevent the algorithm from becoming stuck around local optima, a competition strategy is designed for the individuals. Finally, we adopt an elite reservation strategy and update the individuals according to the cloud model. Experimental results show that the proposed algorithm converges to the optimal Pareto solution set rapidly and accurately. This not only obtains the shortest test time, but also optimizes the power consumption of the 3D NoC.

  9. Reliability-based performance simulation for optimized pavement maintenance

    International Nuclear Information System (INIS)

    Chou, Jui-Sheng; Le, Thanh-Son

    2011-01-01

    Roadway pavement maintenance is essential for driver safety and highway infrastructure efficiency. However, regular preventive maintenance and rehabilitation (M and R) activities are extremely costly. Unfortunately, the funds available for the M and R of highway pavement are often given lower priority compared to other national development policies, therefore, available funds must be allocated wisely. Maintenance strategies are typically implemented by optimizing only the cost whilst the reliability of facility performance is neglected. This study proposes a novel algorithm using multi-objective particle swarm optimization (MOPSO) technique to evaluate the cost-reliability tradeoff in a flexible maintenance strategy based on non-dominant solutions. Moreover, a probabilistic model for regression parameters is employed to assess reliability-based performance. A numerical example of a highway pavement project is illustrated to demonstrate the efficacy of the proposed MOPSO algorithms. The analytical results show that the proposed approach can help decision makers to optimize roadway maintenance plans. - Highlights: →A novel algorithm using multi-objective particle swarm optimization technique. → Evaluation of the cost-reliability tradeoff in a flexible maintenance strategy. → A probabilistic model for regression parameters is employed to assess reliability-based performance. → The proposed approach can help decision makers to optimize roadway maintenance plans.

  10. Reliability-based performance simulation for optimized pavement maintenance

    Energy Technology Data Exchange (ETDEWEB)

    Chou, Jui-Sheng, E-mail: jschou@mail.ntust.edu.tw [Department of Construction Engineering, National Taiwan University of Science and Technology (Taiwan Tech), 43 Sec. 4, Keelung Rd., Taipei 106, Taiwan (China); Le, Thanh-Son [Department of Construction Engineering, National Taiwan University of Science and Technology (Taiwan Tech), 43 Sec. 4, Keelung Rd., Taipei 106, Taiwan (China)

    2011-10-15

    Roadway pavement maintenance is essential for driver safety and highway infrastructure efficiency. However, regular preventive maintenance and rehabilitation (M and R) activities are extremely costly. Unfortunately, the funds available for the M and R of highway pavement are often given lower priority compared to other national development policies, therefore, available funds must be allocated wisely. Maintenance strategies are typically implemented by optimizing only the cost whilst the reliability of facility performance is neglected. This study proposes a novel algorithm using multi-objective particle swarm optimization (MOPSO) technique to evaluate the cost-reliability tradeoff in a flexible maintenance strategy based on non-dominant solutions. Moreover, a probabilistic model for regression parameters is employed to assess reliability-based performance. A numerical example of a highway pavement project is illustrated to demonstrate the efficacy of the proposed MOPSO algorithms. The analytical results show that the proposed approach can help decision makers to optimize roadway maintenance plans. - Highlights: > A novel algorithm using multi-objective particle swarm optimization technique. > Evaluation of the cost-reliability tradeoff in a flexible maintenance strategy. > A probabilistic model for regression parameters is employed to assess reliability-based performance. > The proposed approach can help decision makers to optimize roadway maintenance plans.

  11. Accelerated Simplified Swarm Optimization with Exploitation Search Scheme for Data Clustering.

    Directory of Open Access Journals (Sweden)

    Wei-Chang Yeh

    Full Text Available Data clustering is commonly employed in many disciplines. The aim of clustering is to partition a set of data into clusters, in which objects within the same cluster are similar and dissimilar to other objects that belong to different clusters. Over the past decade, the evolutionary algorithm has been commonly used to solve clustering problems. This study presents a novel algorithm based on simplified swarm optimization, an emerging population-based stochastic optimization approach with the advantages of simplicity, efficiency, and flexibility. This approach combines variable vibrating search (VVS) and rapid centralized strategy (RCS) in dealing with the clustering problem. VVS is an exploitation search scheme that can refine the quality of solutions by searching the extreme points near the global best position. RCS is developed to accelerate the convergence rate of the algorithm by using the arithmetic average. To empirically evaluate the performance of the proposed algorithm, experiments are conducted using 12 benchmark datasets, and the corresponding results are compared with recent works. Results of statistical analysis indicate that the proposed algorithm is competitive in terms of the quality of solutions.

  12. Software Support for Optimizing Layout Solution in Lean Production

    Directory of Open Access Journals (Sweden)

    Naqib Daneshjo

    2018-02-01

    Full Text Available Techniques based on "lean thinking" are being increasingly promoted as progressive managerial styles. They focus on applying lean production concepts to all phases of the product lifecycle and also to the business environment. This innovative approach strives to eliminate any wasting of resources and shortens the time to respond to customer requirements, including redesigning the structure of the organization's supply chain. A lean organization is created mainly by employees, their creative potential, knowledge, self-realization and motivation for continuous improvement of processes and production systems. The set of tools, techniques and methods of lean production is basically always very similar; only the form of their presentation or their classification into individual phases of the product lifecycle may differ. The authors present the results of their research on the design phase of production systems, aimed at optimizing the layout solution with software support and 3D simulation and visualization. Modelling is based on the use of the Tecnomatix and Photomodeler software tools and a dynamic model for capacity dimensioning of a more intelligent production system

  13. Cocoa agroforestry is less resilient to sub-optimal and extreme climate than cocoa in full sun.

    Science.gov (United States)

    Abdulai, Issaka; Vaast, Philippe; Hoffmann, Munir P; Asare, Richard; Jassogne, Laurence; Van Asten, Piet; Rötter, Reimund P; Graefe, Sophie

    2018-01-01

    Cocoa agroforestry is perceived as a potential adaptation strategy to sub-optimal or adverse environmental conditions such as drought. We tested this strategy over wet, dry and extremely dry periods, comparing cocoa in full sun with agroforestry systems shaded by (i) a leguminous tree species, Albizia ferruginea, and (ii) Antiaris toxicaria, the most common shade tree species in the region. We monitored micro-climate, sap flux density, throughfall, and soil water content from November 2014 to March 2016 at the forest-savannah transition zone of Ghana, with the climate and drought events during the study period serving as a proxy for projected future climatic conditions in marginal cocoa cultivation areas of West Africa. Combined transpiration of cocoa and shade trees was significantly higher than that of cocoa in full sun during wet and dry periods. During the wet period, the transpiration rate of cocoa plants shaded by A. ferruginea was significantly lower than that of cocoa under A. toxicaria and full sun. During the extreme drought of 2015/16, all cocoa plants under A. ferruginea died. Cocoa plants under A. toxicaria suffered 77% mortality and massive stress with significantly reduced sap flux density of 115 g cm-2 day-1, whereas cocoa in full sun maintained a higher sap flux density of 170 g cm-2 day-1. Moreover, cocoa sap flux recovery after the extreme drought was significantly higher in full sun (163 g cm-2 day-1) than under A. toxicaria (37 g cm-2 day-1). Soil water content in full sun was higher than in the shaded systems, suggesting that cocoa mortality in the shaded systems was linked to strong competition for soil water. The present results have major implications for cocoa cultivation under climate change. Promoting shaded cocoa agroforestry as a drought-resilient system, especially under climate change, needs to be carefully reconsidered, as shade tree species such as the recommended leguminous A. ferruginea constitute a major risk to cocoa functioning under

  14. Framatome ANP outage optimization support solutions

    International Nuclear Information System (INIS)

    Bombail, Jean Paul

    2003-01-01

    Over the last several years, leading plant operators have demonstrated that availability factors can be improved while safety and reliability can be enhanced on a long-term basis and operating costs reduced. Outage optimization is the new term being used to describe these long-term initiatives through which a variety of measures aimed at shortening scheduled plant outages have been developed and successfully implemented by these leaders working with their service providers who were introducing new technologies and process improvements. Following the leaders, all operators now have ambitious outage optimization plans and the median and average outage duration are decreasing world-wide. Future objectives are even more stringent and must include plant upgrades and component replacements being performed for life extension of plant operation. Outage optimization covers a broad range of activities from modifications of plant systems to faster cool down rates to human behavior improvements. It has been proven to reduce costs, avoid unplanned outages and thus support plant availability and help to ensure the utility's competitive position in the marketplace

  15. Design optimization of shell-and-tube heat exchangers using single objective and multiobjective particle swarm optimization

    International Nuclear Information System (INIS)

    Elsays, Mostafa A.; Naguib Aly, M; Badawi, Alya A.

    2010-01-01

    The Particle Swarm Optimization (PSO) algorithm is used to optimize the design of shell-and-tube heat exchangers and determine the optimal feasible solutions so as to eliminate trial-and-error during the design process. The design formulation takes into account the area and the total annual cost of heat exchangers as two objective functions, together with operating as well as geometrical constraints. The Nonlinear Constrained Single Objective Particle Swarm Optimization (NCSOPSO) algorithm is used to minimize and find the optimal feasible solution for each of the nonlinear constrained objective functions alone, respectively. Then, a novel Nonlinear Constrained Multi-objective Particle Swarm Optimization (NCMOPSO) algorithm is used to minimize and find the Pareto optimal solutions for both of the nonlinear constrained objective functions together. The experimental results show that the two algorithms are very efficient, fast and can find the accurate optimal feasible solutions of the shell and tube heat exchangers design optimization problem. (orig.)

  16. Design Optimization of Mechanical Components Using an Enhanced Teaching-Learning Based Optimization Algorithm with Differential Operator

    Directory of Open Access Journals (Sweden)

    B. Thamaraikannan

    2014-01-01

    Full Text Available This paper studies in detail the background and implementation of a teaching-learning based optimization (TLBO) algorithm with differential operator for the optimization of a few mechanical components, which are essential for most mechanical engineering applications. Like most other heuristic techniques, TLBO is a population-based method and uses a population of solutions to proceed to the global solution. A differential operator is incorporated into the TLBO for effective search of better solutions. To validate the effectiveness of the proposed method, three typical optimization problems are considered in this research: firstly, to optimize the weight in a belt-pulley drive; secondly, to optimize the volume in a closed coil helical spring; and finally, to optimize the weight in a hollow shaft. Simulation results on these mechanical component optimization problems reveal the ability of the proposed methodology to find better optimal solutions compared to other optimization algorithms.

  17. Optimal adaptation to extreme rainfalls under climate change

    Science.gov (United States)

    Rosbjerg, Dan

    2017-04-01

    More intense and frequent rainfalls have increased the number of urban flooding events in recent years, prompting adaptation efforts. Economic optimization is considered an efficient tool to decide on the design level for adaptation. The costs associated with a flooding to the T-year level and the annual capital and operational costs of adapting to this level are described with log-linear relations. The total flooding costs are developed as the expected annual damage of flooding above the T-year level plus the annual capital and operational costs for ensuring no flooding below the T-year level. The value of the return period T that corresponds to the minimum of the sum of these costs will then be the optimal adaptation level. The change in climate, however, is expected to continue in the next century, which calls for expansion of the above model. The change can be expressed in terms of a climate factor (the ratio between the future and the current design level) which is assumed to increase in time. This implies increasing costs of flooding in the future for many places in the world. The optimal adaptation level is found for immediate as well as for delayed adaptation. In these cases the optimum is determined by considering the net present value of the incurred costs during a sufficiently long time span. Immediate as well as delayed adaptation is considered.
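
    The trade-off described above, expected annual damage above the design level plus the annualized cost of adapting to that level, can be written down and minimized directly once the log-linear cost relations are specified; the coefficients below are invented for illustration:

      import numpy as np
      from scipy.optimize import minimize_scalar

      def total_annual_cost(T, a=2.0e6, b=4.0e5, c=1.0e4, d=3.0e4):
          """Illustrative log-linear cost model (all coefficients invented):
          damage per flood exceeding the T-year level: a + b*ln(T), occurring with probability 1/T per year;
          annual capital and operating cost of protecting up to the T-year level: c + d*ln(T)."""
          expected_damage = (a + b * np.log(T)) / T
          adaptation_cost = c + d * np.log(T)
          return expected_damage + adaptation_cost

      res = minimize_scalar(total_annual_cost, bounds=(1.0, 1000.0), method="bounded")
      T_optimal = res.x   # design return period minimizing the total expected annual cost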

  18. Can Concentration - Discharge Relationships Diagnose Material Source During Extreme Events?

    Science.gov (United States)

    Karwan, D. L.; Godsey, S.; Rose, L.

    2017-12-01

    Floods can carry >90% of the basin material exported in a given year as well as alter flow pathways and material sources. In turn, sediment and solute fluxes can increase flood damages and negatively impact water quality and integrate physical and chemical weathering of landscapes and channels. Concentration-discharge (C-Q) relationships are used to both describe export patterns as well as compute them. Metrics for describing C-Q patterns and inferring their controls are vulnerable to infrequent sampling that affects how C-Q relationships are interpolated and interpreted. C-Q relationships are typically evaluated from multiple samples, but because hydrological extremes are rare, data are often unavailable for extreme events. Because solute and sediment C-Q relationships likely respond to changes in hydrologic extremes in different ways, there is a pressing need to define their behavior under extreme conditions, including how to properly sample to capture these patterns. In the absence of such knowledge, improving load estimates in extreme floods will likely remain difficult. Here we explore the use of C-Q relationships to determine when an event alters a watershed system such that it enters a new material source/transport regime. We focus on watersheds with sediment and discharge time series include low-frequency and/or extreme events. For example, we compare solute and sediment patterns in White Clay Creek in southeastern Pennsylvania across a range of flows inclusive of multiple hurricanes for which we have ample ancillary hydrochemical data. TSS is consistently mobilized during high flow events, even during extreme floods associated with hurricanes, and sediment fingerprinting indicates different sediment sources, including in-channel remobilization and landscape erosion, are active at different times. In other words, TSS mobilization in C-Q space is not sensitive to the source of material being mobilized. Unlike sediments, weathering solutes in this watershed

  19. Multivariate optimization of production systems

    International Nuclear Information System (INIS)

    Carroll, J.A.; Horne, R.N.

    1992-01-01

    This paper reports that, mathematically, optimization involves finding the extreme values of a function. Given a function of several variables, Z = f(x1, x2, x3, ..., xn), an optimization scheme will find the combination of these variables that produces an extreme value in the function, whether it is a minimum or a maximum value. Many examples of optimization exist. For instance, if a function gives an investor's expected return on the basis of different investments, numerical optimization of the function will determine the mix of investments that will yield the maximum expected return. This is the basis of modern portfolio theory. If a function gives the difference between a set of data and a model of the data, numerical optimization of the function will produce the best fit of the model to the data. This is the basis for nonlinear parameter estimation. Similar examples can be given for network analysis, queuing theory, decision analysis, etc
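
    The nonlinear parameter estimation example mentioned above, minimizing the difference between data and a model, is itself a small optimization problem; a sketch with an invented exponential-decline production model and synthetic data (not the paper's field data):

      import numpy as np
      from scipy.optimize import curve_fit

      # Nonlinear parameter estimation: optimize the model parameters to minimize the misfit to data.
      def decline_model(t, q0, D):
          return q0 * np.exp(-D * t)          # exponential production decline (illustrative form)

      t = np.linspace(0.0, 10.0, 25)
      rng = np.random.default_rng(3)
      q_obs = decline_model(t, 500.0, 0.2) + rng.normal(0.0, 5.0, t.size)   # synthetic "measurements"

      params, cov = curve_fit(decline_model, t, q_obs, p0=[400.0, 0.1])
      q0_fit, D_fit = params                  # parameters minimizing the squared misfit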

  20. Balanced and optimal bianisotropic particles: maximizing power extracted from electromagnetic fields

    International Nuclear Information System (INIS)

    Ra'di, Younes; Tretyakov, Sergei A

    2013-01-01

    Here we introduce the concept of ‘optimal particles’ for strong interactions with electromagnetic fields. We assume that a particle occupies a given electrically small volume in space and study the required optimal relations between the particle polarizabilities. In these optimal particles, the inclusion shape and material are chosen so that the particles extract the maximum possible power from given incident fields. It appears that for different excitation scenarios the optimal particles are bianisotropic chiral, omega, moving and Tellegen particles. The optimal dimensions of resonant canonical chiral and omega particles are found analytically. Such optimal particles have extreme properties in scattering (e.g., zero backscattering or invisibility). Planar arrays of optimal particles possess extreme properties in reflection and transmission (e.g. total absorption or magnetic-wall response), and volumetric composites of optimal particles realize, for example, such extreme materials as the chiral nihility medium. (paper)

  1. Epidemiology of extremity fractures in the Netherlands

    NARCIS (Netherlands)

    Beerekamp, M. S. H.; de Muinck Keizer, R. J. O.; Schep, N. W. L.; Ubbink, D. T.; Panneman, M. J. M.; Goslings, J. C.

    2017-01-01

    Insight in epidemiologic data of extremity fractures is relevant to identify people at risk. By analyzing age- and gender specific fracture incidence and treatment patterns we may adjust future policy, take preventive measures and optimize health care management. Current epidemiologic data on

  2. Reaction kinetics of hydrazine neutralization in steam generator wet lay-up solution: Identifying optimal degradation conditions

    International Nuclear Information System (INIS)

    Schildermans, Kim; Lecocq, Raphael; Girasa, Emmanuel

    2012-09-01

    During a nuclear power plant outage, hydrazine is used as an oxygen scavenger in the steam generator lay-up solution. However, due to the carcinogenic effects of hydrazine, more stringent discharge limits are or will be imposed in the environmental permits. Hydrazine discharge could even be prohibited. Consequently, hydrazine alternatives or hydrazine degradation before discharge is needed. This paper presents the laboratory tests performed to characterize the reaction kinetics of hydrazine neutralization using bleach or hydrogen peroxide, catalyzed with either copper sulfate (CuSO4) or potassium permanganate (KMnO4). The tests are performed on two standard steam generator lay-up solutions based on different pH control agents: ammonia or ethanolamine. Different neutralization conditions are tested by varying temperature, oxidant addition, and catalyst concentration, among others, in order to identify the optimal parameters for hydrazine neutralization in a steam generator wet lay-up solution. (authors)

  3. A Pathological Brain Detection System based on Extreme Learning Machine Optimized by Bat Algorithm.

    Science.gov (United States)

    Lu, Siyuan; Qiu, Xin; Shi, Jianping; Li, Na; Lu, Zhi-Hai; Chen, Peng; Yang, Meng-Meng; Liu, Fang-Yuan; Jia, Wen-Juan; Zhang, Yudong

    2017-01-01

    It is beneficial to classify brain images as healthy or pathological automatically, because 3D brain images contain so much information that manual analysis is time-consuming and tedious. Among the various 3D brain imaging techniques, magnetic resonance (MR) imaging is the most suitable for the brain and is now widely applied in hospitals, because it is helpful in diagnosis, prognosis, and pre- and post-surgical procedures. Automatic detection methods exist; however, they suffer from low accuracy. Therefore, we proposed a novel approach which employed a 2D discrete wavelet transform (DWT) and calculated the entropies of the subbands as features. Then, a bat algorithm optimized extreme learning machine (BA-ELM) was trained to identify pathological brains from healthy controls. A 10x10-fold cross validation was performed to evaluate the out-of-sample performance. The method achieved a sensitivity of 99.04%, a specificity of 93.89%, and an overall accuracy of 98.33% over 132 MR brain images. The experimental results suggest that the proposed approach is accurate and robust in pathological brain detection.
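
    A rough sketch of the feature-extraction step only (2D DWT followed by subband entropies) is given below, using the PyWavelets package; the wavelet family, decomposition level, and input image are placeholders, and the bat-algorithm-optimized ELM classifier is not reproduced here.

    ```python
    import numpy as np
    import pywt

    def wavelet_entropy_features(image, wavelet="haar", level=2):
        """Decompose a 2D slice with a discrete wavelet transform and return the Shannon
        entropy of each subband as a compact feature vector."""
        coeffs = pywt.wavedec2(image, wavelet, level=level)
        subbands = [coeffs[0]] + [band for detail in coeffs[1:] for band in detail]
        features = []
        for band in subbands:
            p = np.abs(band).ravel()
            p = p / (p.sum() + 1e-12)                     # normalize to a distribution
            features.append(-np.sum(p * np.log2(p + 1e-12)))
        return np.array(features)

    print(wavelet_entropy_features(np.random.rand(128, 128)))   # one entropy per subband
    ```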

  4. Robust and Reliable Portfolio Optimization Formulation of a Chance Constrained Problem

    Directory of Open Access Journals (Sweden)

    Sengupta Raghu Nandan

    2017-02-01

    Full Text Available We solve a linear chance constrained portfolio optimization problem using the Robust Optimization (RO) method, wherein financial script/asset loss return distributions are considered as extreme valued. The objective function is a convex combination of the portfolio’s CVaR and the expected value of loss return, subject to a set of randomly perturbed chance constraints with specified probability values. The robust deterministic counterpart of the model takes the form of a Second Order Cone Programming (SOCP) problem. Results from extensive simulation runs show the efficacy of our proposed models, as they help the investor to (i) utilize extensive simulation studies to draw insights into the effect of randomness in the portfolio decision making process, (ii) incorporate different risk appetite scenarios to find the optimal solutions for the financial portfolio allocation problem and (iii) compare the risk and return profiles of the investments made in both deterministic as well as in uncertain and highly volatile financial markets.
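
    The robust SOCP counterpart itself is too long for a short snippet, but the underlying CVaR/expected-loss trade-off can be written as the classical Rockafellar-Uryasev scenario linear program, sketched below with synthetic return scenarios and placeholder parameters (this is the nominal problem, not the robust one from the record).

    ```python
    import numpy as np
    from scipy.optimize import linprog

    rng = np.random.default_rng(0)
    returns = rng.normal(0.001, 0.02, size=(500, 4))     # hypothetical return scenarios (m x n)
    m, n = returns.shape
    beta, lam = 0.95, 0.7                                # CVaR level and risk/return weight

    # Decision vector x = [w (n weights), alpha (VaR), u (m tail excess losses)];
    # objective = lam * CVaR + (1 - lam) * expected loss, with loss_i = -returns_i . w
    c = np.concatenate([-(1 - lam) * returns.mean(axis=0),
                        [lam],
                        np.full(m, lam / ((1 - beta) * m))])

    # u_i >= loss_i - alpha   <=>   -returns_i . w - alpha - u_i <= 0
    A_ub = np.hstack([-returns, -np.ones((m, 1)), -np.eye(m)])
    b_ub = np.zeros(m)
    A_eq = np.concatenate([np.ones(n), [0.0], np.zeros(m)]).reshape(1, -1)   # sum(w) = 1
    b_eq = [1.0]
    bounds = [(0, 1)] * n + [(None, None)] + [(0, None)] * m

    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds, method="highs")
    print("optimal weights:", res.x[:n].round(3))
    ```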

  5. Implementation of fault-tolerant quantum logic gates via optimal control

    International Nuclear Information System (INIS)

    Nigmatullin, R; Schirmer, S G

    2009-01-01

    The implementation of fault-tolerant quantum gates on encoded logic qubits is considered. It is shown that transversal implementation of logic gates based on simple geometric control ideas is problematic for realistic physical systems suffering from imperfections such as qubit inhomogeneity or uncontrollable interactions between qubits. However, this problem can be overcome by formulating the task as an optimal control problem and designing efficient algorithms to solve it. In particular, we can find solutions that implement all of the elementary logic gates in a fixed amount of time with limited control resources for the five-qubit stabilizer code. Most importantly, logic gates that are extremely difficult to implement using conventional techniques even for ideal systems, such as the T-gate for the five-qubit stabilizer code, do not appear to pose a problem for optimal control.

  6. Optimal wind-hydro solution for the Marmara region of Turkey to meet electricity demand

    International Nuclear Information System (INIS)

    Dursun, Bahtiyar; Alboyaci, Bora; Gokcol, Cihan

    2011-01-01

    Wind power technology is now a reliable electricity production system. It presents an economically attractive solution for the continuously increasing energy demand of the Marmara region located in Turkey. However, the stochastic behavior of wind speed in the Marmara region can lead to significant disharmony between wind energy production and electricity demand. Therefore, to overcome wind's variable nature, a more reliable solution would be to integrate hydropower with wind energy. In this study, a methodology to estimate an optimal wind-hydro solution is developed and it is subsequently applied to six typical, different site cases in the Marmara region in order to define the most beneficial configuration of the wind-hydro system. All numerical calculations are based on the long-term wind speed measurements, electrical load demand and operational characteristics of the system components. -- Research highlights: → This study is the first application of a wind-hydro pumped storage system in Turkey. → The methodology developed in this study is applied to six sites in the Marmara region of Turkey. → A wind-hydro pumped storage system is proposed to meet the electric energy demand of the Marmara region.

  7. Extremal dyonic black holes in D=4 Gauss-Bonnet gravity

    International Nuclear Information System (INIS)

    Chen, C.-M.; Gal'tsov, Dmitri V.; Orlov, Dmitry G.

    2008-01-01

    We investigate extremal dyon black holes in the Einstein-Maxwell-dilaton theory with higher curvature corrections in the form of the Gauss-Bonnet density coupled to the dilaton. In the same theory without the Gauss-Bonnet term the extremal dyon solutions exist only for discrete values of the dilaton coupling constant a. We show that the Gauss-Bonnet term acts as a dyon hair tonic enlarging the allowed values of a to continuous domains in the plane (a, q_m), where q_m is the magnetic charge. In the limit of the vanishing curvature coupling (a large magnetic charge) the dyon solutions obtained tend to the Reissner-Nordstroem solution but not to the extremal dyons of the Einstein-Maxwell-dilaton theory. Both solutions have the same dependence of the horizon radius in terms of charges. The entropy of new dyonic black holes interpolates between the Bekenstein-Hawking value in the limit of the large magnetic charge (equivalent to the vanishing Gauss-Bonnet coupling) and twice this value for the vanishing magnetic charge. Although an expression for the entropy can be obtained analytically using purely local near-horizon solutions, its interpretation as the black hole entropy is legitimate only once the global black hole solution is known to exist, and we obtain numerically the corresponding conditions on the parameters. Thus, a purely local analysis is insufficient to fully understand the entropy of the curvature-corrected black holes. We also find dyon solutions which are not asymptotically flat, but approach the linear dilaton background at infinity. They describe magnetic black holes on the electric linear dilaton background.

  8. Parsimonious Wavelet Kernel Extreme Learning Machine

    Directory of Open Access Journals (Sweden)

    Wang Qin

    2015-11-01

    Full Text Available In this study, a parsimonious scheme for a wavelet kernel extreme learning machine (named PWKELM) was introduced by combining wavelet theory and a parsimonious algorithm into the kernel extreme learning machine (KELM). In the wavelet analysis, bases localized in time and frequency were used to represent various signals effectively. The wavelet kernel extreme learning machine (WELM) maximized its capability to capture the essential features in “frequency-rich” signals. The proposed parsimonious algorithm also incorporated significant wavelet kernel functions via iteration by virtue of the Householder matrix, thus producing a sparse solution that eased the computational burden and improved numerical stability. The experimental results obtained on a synthetic dataset and a gas furnace instance demonstrated that the proposed PWKELM is efficient and feasible in terms of improving generalization accuracy and real-time performance.

  9. Pareto Optimization of a Half Car Passive Suspension Model Using a Novel Multiobjective Heat Transfer Search Algorithm

    Directory of Open Access Journals (Sweden)

    Vimal Savsani

    2017-01-01

    Full Text Available Most of the modern multiobjective optimization algorithms are based on the search technique of genetic algorithms; however, the search techniques of other recently developed metaheuristics are emerging topics among researchers. This paper proposes a novel multiobjective optimization algorithm named multiobjective heat transfer search (MOHTS), which is based on the search technique of the heat transfer search (HTS) algorithm. MOHTS employs the elitist nondominated sorting and crowding distance approach of the elitist-based nondominated sorting genetic algorithm-II (NSGA-II) for obtaining different nondomination levels and for preserving the diversity among the optimal set of solutions, respectively. Its capability of yielding a Pareto front as close as possible to the true Pareto front has been tested on the multiobjective optimization problem of vehicle suspension design, which involves a set of five second-order linear ordinary differential equations. A half car passive ride model with two different sets of five objectives is employed for optimizing the suspension parameters using MOHTS and NSGA-II. The optimization studies demonstrate that MOHTS achieves the better nondominated Pareto front with a widespread (diverse) set of optimal solutions as compared to NSGA-II, and further the comparison of the extreme points of the obtained Pareto front reveals the dominance of MOHTS over NSGA-II, the multiobjective uniform diversity genetic algorithm (MUGA), and a combined PSO-GA based MOEA.
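
    The sketch below shows only the elitist NSGA-II-style machinery that MOHTS borrows, i.e. a naive O(n^2) nondominated ranking and the crowding-distance diversity measure, applied to random two-objective data; it is not the heat transfer search itself, and all data are synthetic.

    ```python
    import numpy as np

    def pareto_front(F):
        """Indices of nondominated rows of F (objectives in columns, all minimized)."""
        n = F.shape[0]
        dominated = np.zeros(n, dtype=bool)
        for i in range(n):
            for j in range(n):
                if i != j and np.all(F[j] <= F[i]) and np.any(F[j] < F[i]):
                    dominated[i] = True
                    break
        return np.where(~dominated)[0]

    def crowding_distance(F):
        """Crowding distance within one front (larger = better spread along the front)."""
        n, m = F.shape
        dist = np.zeros(n)
        for k in range(m):
            order = np.argsort(F[:, k])
            span = max(F[order[-1], k] - F[order[0], k], 1e-12)
            dist[order[0]] = dist[order[-1]] = np.inf      # always keep the extreme points
            dist[order[1:-1]] += (F[order[2:], k] - F[order[:-2], k]) / span
        return dist

    F = np.random.rand(50, 2)
    front = pareto_front(F)
    print("front members:", front)
    print("crowding:", crowding_distance(F[front]).round(2))
    ```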

  10. Data of cost-optimal solutions and retrofit design methods for school renovation in a warm climate.

    Science.gov (United States)

    Zacà, Ilaria; Tornese, Giuliano; Baglivo, Cristina; Congedo, Paolo Maria; D'Agostino, Delia

    2016-12-01

    "Efficient Solutions and Cost-Optimal Analysis for Existing School Buildings" (Paolo Maria Congedo, Delia D'Agostino, Cristina Baglivo, Giuliano Tornese, Ilaria Zacà) [1] is the paper that refers to this article. It reports the data related to the establishment of several variants of energy efficient retrofit measures selected for two existing school buildings located in the Mediterranean area. In compliance with the cost-optimal analysis described in the Energy Performance of Buildings Directive and its guidelines (EU, Directive, EU 244,) [2], [3], these data are useful for the integration of renewable energy sources and high performance technical systems for school renovation. The data of cost-efficient high performance solutions are provided in tables that are explained within the following sections. The data focus on the describe school refurbishment sector to which European policies and investments are directed. A methodological approach already used in previous studies about new buildings is followed (Baglivo Cristina, Congedo Paolo Maria, D׳Agostino Delia, Zacà Ilaria, 2015; IlariaZacà, Delia D'Agostino, Paolo Maria Congedo, Cristina Baglivo; Baglivo Cristina, Congedo Paolo Maria, D'Agostino Delia, Zacà Ilaria, 2015; Ilaria Zacà, Delia D'Agostino, Paolo Maria Congedo, Cristina Baglivo, 2015; Paolo Maria Congedo, Cristina Baglivo, IlariaZacà, Delia D'Agostino,2015) [4], [5], [6], [7], [8]. The files give the cost-optimal solutions for a kindergarten (REF1) and a nursery (REF2) school located in Sanarica and Squinzano (province of Lecce Southern Italy). The two reference buildings differ for construction period, materials and systems. The eleven tables provided contain data about the localization of the buildings, geometrical features and thermal properties of the envelope, as well as the energy efficiency measures related to walls, windows, heating, cooling, dhw and renewables. Output values of energy consumption, gas emission and costs are given for a

  11. Joint global optimization of tomographic data based on particle swarm optimization and decision theory

    Science.gov (United States)

    Paasche, H.; Tronicke, J.

    2012-04-01

    In many near surface geophysical applications multiple tomographic data sets are routinely acquired to explore subsurface structures and parameters. Linking the model generation process of multi-method geophysical data sets can significantly reduce ambiguities in geophysical data analysis and model interpretation. Most geophysical inversion approaches rely on local search optimization methods used to find an optimal model in the vicinity of a user-given starting model. The final solution may critically depend on the initial model. Alternatively, global optimization (GO) methods have been used to invert geophysical data. They explore the solution space in more detail and determine the optimal model independently from the starting model. Additionally, they can be used to find sets of optimal models allowing a further analysis of model parameter uncertainties. Here we employ particle swarm optimization (PSO) to realize the global optimization of tomographic data. PSO is an emergent method based on swarm intelligence characterized by fast and robust convergence towards optimal solutions. The fundamental principle of PSO is inspired by nature, since the algorithm mimics the behavior of a flock of birds searching for food in a search space. In PSO, a number of particles cruise a multi-dimensional solution space striving to find optimal model solutions explaining the acquired data. The particles communicate their positions and success and direct their movement according to the position of the currently most successful particle of the swarm. The success of a particle, i.e. the quality of the model currently found by a particle, must be uniquely quantifiable to identify the swarm leader. When jointly inverting disparate data sets, the optimization solution has to satisfy multiple optimization objectives, at least one for each data set. Unique determination of the most successful particle currently leading the swarm is not possible. Instead, only statements about the Pareto

  12. An n -material thresholding method for improving integerness of solutions in topology optimization

    International Nuclear Information System (INIS)

    Watts, Seth; Engineering); Tortorelli, Daniel A.; Engineering)

    2016-01-01

    It is common in solving topology optimization problems to replace an integer-valued characteristic function design field with the material volume fraction field, a real-valued approximation of the design field that permits "fictitious" mixtures of materials during intermediate iterations in the optimization process. This is reasonable so long as one can interpolate properties for such materials and so long as the final design is integer valued. For this purpose, we present a method for smoothly thresholding the volume fractions of an arbitrary number of material phases which specify the design. This method is trivial for two-material design problems, for example, the canonical topology design problem of specifying the presence or absence of a single material within a domain, but it becomes more complex when three or more materials are used, as often occurs in material design problems. We take advantage of the similarity in properties between the volume fractions and the barycentric coordinates on a simplex to derive a thresholding method which is applicable to an arbitrary number of materials. As we show in a sensitivity analysis, this method has smooth derivatives, allowing it to be used in gradient-based optimization algorithms. Finally, we present results which show synergistic effects when used with Solid Isotropic Material with Penalty and Rational Approximation of Material Properties material interpolation functions, popular methods of ensuring integerness of solutions.

  13. Concentration-discharge relationships during an extreme event: Contrasting behavior of solutes and changes to chemical quality of dissolved organic material in the Boulder Creek Watershed during the September 2013 flood: SOLUTE FLUX IN A FLOOD EVENT

    Energy Technology Data Exchange (ETDEWEB)

    Rue, Garrett P. [Institute of Arctic and Alpine Research, University of Colorado, Boulder Colorado USA; Rock, Nathan D. [Institute of Arctic and Alpine Research, University of Colorado, Boulder Colorado USA; Gabor, Rachel S. [Institute of Arctic and Alpine Research, University of Colorado, Boulder Colorado USA; Pitlick, John [Department of Geography, University of Colorado, Boulder Colorado USA; Tfaily, Malak [Environmental Molecular Sciences Laboratory, Pacific Northwest National Laboratory, Richland Washington USA; McKnight, Diane M. [Institute of Arctic and Alpine Research, University of Colorado, Boulder Colorado USA

    2017-07-01

    During the week of September 10-17, 2013, close to 20 inches of rain fell across Boulder County, Colorado, USA. This rainfall represented a 1000-year event that caused massive hillslope erosion, landslides, and mobilization of sediments. The resultant stream flows corresponded to a 100-year flood. For the Boulder Creek Critical Zone Observatory (BC-CZO), this event provided an opportunity to study the effect of extreme rainfall on solute concentration-discharge relationships and biogeochemical catchment processes. We observed base cation and dissolved organic carbon (DOC) concentrations at two sites on Boulder Creek following the recession of peak flow. We also isolated three distinct fractions of dissolved organic matter (DOM) for chemical characterization. At the upper site, which represented the forested mountain catchment, the concentrations of the base cations Ca, Mg and Na were greatest at the peak flood and decreased only slightly, in contrast with DOC and K concentrations, which decreased substantially. At the lower site within the urban corridor, all solutes decreased abruptly after the first week of flow recession, with base cation concentrations stabilizing while DOC and K continued to decrease. Additionally, we found significant spatiotemporal trends in the chemical quality of organic matter exported during the flood recession, as measured by fluorescence, 13C-NMR spectroscopy, and FTICR-MS. Similar to the effect of extreme rainfall events in driving landslides and mobilizing sediments, our findings suggest that such events mobilize solutes by the flushing of the deeper layers of the critical zone, and that this flushing regulates terrestrial-aquatic biogeochemical linkages during the flow recession.

  14. Hybrid Optimization in the Design of Reciprocal Structures

    DEFF Research Database (Denmark)

    Parigi, Dario; Kirkegaard, Poul Henning; Sassone, Mario

    2012-01-01

    The paper presents a method to generate the geometry of reciprocal structures by means of a hybrid optimization procedure. The geometry of reciprocal structures, where elements sit on the top or in the bottom of each other, is extremely difficult to predict because of the non…. It is shown that the geometrically compatible position of the elements can be determined by a local-search, gradient-based (GB) algorithm. However, the control of which bar sits on the top or in the bottom at each connection can be regarded as a topological problem and requires the use of algorithms that explore the global domain of solutions, such as genetic algorithms (GAs). The benchmark tests show that when control of the topology is required, the best result is obtained by a hybrid approach that combines the global search of the GA with the local search of a GB algorithm. The optimization method…

  15. Short-Term Distribution System State Forecast Based on Optimal Synchrophasor Sensor Placement and Extreme Learning Machine

    Energy Technology Data Exchange (ETDEWEB)

    Jiang, Huaiguang; Zhang, Yingchen

    2016-11-14

    This paper proposes an approach for distribution system state forecasting, which aims to provide accurate and high-speed state forecasting with an optimal synchrophasor sensor placement (OSSP) based state estimator and an extreme learning machine (ELM) based forecaster. Specifically, considering the sensor installation cost and measurement error, an OSSP algorithm is proposed to reduce the number of synchrophasor sensors while keeping the whole distribution system numerically and topologically observable. Then, a weighted least squares (WLS) based system state estimator is used to produce the training data for the proposed forecaster. Traditionally, artificial neural networks (ANN) and support vector regression (SVR) are widely used in forecasting due to their nonlinear modeling capabilities. However, the ANN carries a heavy computation load and the best parameters for SVR are difficult to obtain. In this paper, the ELM, which overcomes these drawbacks, is used to forecast future system states from the historical system states. Testing results show that the proposed approach is effective and accurate.
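
    A minimal sketch of the ELM forecaster alone (a random, untrained hidden layer plus a least-squares output layer) is given below; the synthetic state windows stand in for the WLS-estimated historical states, and all sizes are placeholders.

    ```python
    import numpy as np

    class ExtremeLearningMachine:
        """Single-hidden-layer ELM: random input weights, analytic output weights."""
        def __init__(self, n_hidden=50, seed=0):
            self.n_hidden = n_hidden
            self.rng = np.random.default_rng(seed)

        def fit(self, X, y):
            self.W = self.rng.normal(size=(X.shape[1], self.n_hidden))  # fixed, never trained
            self.b = self.rng.normal(size=self.n_hidden)
            H = np.tanh(X @ self.W + self.b)                # hidden-layer activations
            self.beta = np.linalg.pinv(H) @ y               # least-squares output weights
            return self

        def predict(self, X):
            return np.tanh(X @ self.W + self.b) @ self.beta

    # Hypothetical use: forecast the next state value from a window of past estimated states.
    X = np.random.rand(200, 10)
    y = X @ np.random.rand(10) + 0.01 * np.random.randn(200)
    model = ExtremeLearningMachine().fit(X[:150], y[:150])
    print("test RMSE:", np.sqrt(np.mean((model.predict(X[150:]) - y[150:]) ** 2)))
    ```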

  16. Shape Optimization in Contact Problems with Coulomb Friction and a Solution-Dependent Friction Coefficient

    Czech Academy of Sciences Publication Activity Database

    Beremlijski, P.; Outrata, Jiří; Haslinger, Jaroslav; Pathó, R.

    2014-01-01

    Roč. 52, č. 5 (2014), s. 3371-3400 ISSN 0363-0129 R&D Projects: GA ČR(CZ) GAP201/12/0671 Grant - others:GA MŠK(CZ) CZ.1.05/1.1.00/02.0070; GA MŠK(CZ) CZ.1.07/2.3.00/20.0070 Institutional support: RVO:67985556 ; RVO:68145535 Keywords : shape optimization * contact problems * Coulomb friction * solution-dependent coefficient of friction * mathematical programs with equilibrium constraints Subject RIV: BA - General Mathematics Impact factor: 1.463, year: 2014 http://library.utia.cas.cz/separaty/2014/MTR/outrata-0434234.pdf

  17. LETTER TO THE EDITOR: Constant-time solution to the global optimization problem using Brüschweiler's ensemble search algorithm

    Science.gov (United States)

    Protopopescu, V.; D'Helon, C.; Barhen, J.

    2003-06-01

    A constant-time solution of the continuous global optimization problem (GOP) is obtained by using an ensemble algorithm. We show that under certain assumptions, the solution can be guaranteed by mapping the GOP onto a discrete unsorted search problem, whereupon Brüschweiler's ensemble search algorithm is applied. For adequate sensitivities of the measurement technique, the query complexity of the ensemble search algorithm depends linearly on the size of the function's domain. Advantages and limitations of an eventual NMR implementation are discussed.

  18. Optimization of bioethanol production from carbohydrate rich wastes by extreme thermophilic microorganisms

    Energy Technology Data Exchange (ETDEWEB)

    Tomas, A.F.

    2013-05-15

    Second-generation bioethanol is produced from residual biomass such as industrial and municipal waste or agricultural and forestry residues. However, Saccharomyces cerevisiae, the microorganism currently used in industrial first-generation bioethanol production, is not capable of converting all of the carbohydrates present in these complex substrates into ethanol. This is particularly true for pentose sugars such as xylose, generally the second major sugar present in lignocellulosic biomass. The transition of second-generation bioethanol production from pilot to industrial scale is hindered by the recalcitrance of the lignocellulosic biomass, and by the lack of a microorganism capable of converting this feedstock to bioethanol with high yield, efficiency and productivity. In this study, a new extreme thermophilic ethanologenic bacterium was isolated from household waste. When assessed for ethanol production from xylose, an ethanol yield of 1.39 mol mol^-1 xylose was obtained. This represents 83 % of the theoretical ethanol yield from xylose and is to date the highest reported value for a native, not genetically modified microorganism. The bacterium was identified as a new member of the genus Thermoanaerobacter, named Thermoanaerobacter pentosaceus, and was subsequently used to investigate some of the factors that influence second-generation bioethanol production, such as initial substrate concentration and sensitivity to inhibitors. Furthermore, T. pentosaceus was used to develop and optimize bioethanol production from lignocellulosic biomass using a range of different approaches, including combination with other microorganisms and immobilization of the cells. T. pentosaceus could produce ethanol from a wide range of substrates without the addition of nutrients such as yeast extract and vitamins to the medium. It was initially sensitive to concentrations of 10 g l^-1 of xylose and 1 % (v/v) ethanol. However, long term repeated batch cultivation showed that the strain

  19. Particle Swarm Optimization Toolbox

    Science.gov (United States)

    Grant, Michael J.

    2010-01-01

    The Particle Swarm Optimization Toolbox is a library of evolutionary optimization tools developed in the MATLAB environment. The algorithms contained in the library include a genetic algorithm (GA), a single-objective particle swarm optimizer (SOPSO), and a multi-objective particle swarm optimizer (MOPSO). Development focused on both the SOPSO and MOPSO. A GA was included mainly for comparison purposes, and the particle swarm optimizers appeared to perform better for a wide variety of optimization problems. All algorithms are capable of performing unconstrained and constrained optimization. The particle swarm optimizers are capable of performing single and multi-objective optimization. The SOPSO and MOPSO algorithms are based on swarming theory and bird-flocking patterns to search the trade space for the optimal solution or optimal trade in competing objectives. The MOPSO generates Pareto fronts for objectives that are in competition. A GA, based on Darwinian evolutionary theory, is also included in the library. The GA consists of individuals that form a population in the design space. The population mates to form offspring at new locations in the design space. These offspring contain traits from both of the parents. The algorithm relies on this combination of traits from both parents to, ideally, produce a better solution than either of the original parents. As the algorithm progresses, individuals that hold these optimal traits will emerge as the optimal solutions. Due to the generic design of all optimization algorithms, each algorithm interfaces with a user-supplied objective function. This function serves as a "black box" to the optimizers; its only purpose is to evaluate the solutions they provide. Hence, the user-supplied function can be a numerical simulation, an analytical function, etc., since its specific details are of no concern to the optimizer. These algorithms were originally developed to support entry
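
    A bare-bones single-objective PSO loop of the kind such a toolbox wraps is sketched below; it is not the toolbox code, and the inertia/acceleration constants and the test objective are arbitrary placeholders.

    ```python
    import numpy as np

    def sopso(objective, lo, hi, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
        """Minimal PSO: particles are pulled toward their own best and the swarm's best."""
        rng = np.random.default_rng(seed)
        dim = lo.size
        x = rng.uniform(lo, hi, size=(n_particles, dim))
        v = np.zeros_like(x)
        pbest, pbest_f = x.copy(), np.array([objective(p) for p in x])
        gbest = pbest[np.argmin(pbest_f)].copy()
        for _ in range(iters):
            r1, r2 = rng.random((2, n_particles, dim))
            v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
            x = np.clip(x + v, lo, hi)
            f = np.array([objective(p) for p in x])
            better = f < pbest_f
            pbest[better], pbest_f[better] = x[better], f[better]
            gbest = pbest[np.argmin(pbest_f)].copy()
        return gbest, pbest_f.min()

    # The objective is a "black box": the optimizer only ever asks it for function values.
    sphere = lambda p: float(np.sum((p - 1.5) ** 2))
    best, best_f = sopso(sphere, np.full(5, -10.0), np.full(5, 10.0))
    print(best.round(3), best_f)
    ```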

  20. A dedicated cone-beam CT system for musculoskeletal extremities imaging: Design, optimization, and initial performance characterization

    International Nuclear Information System (INIS)

    Zbijewski, W.; De Jean, P.; Prakash, P.; Ding, Y.; Stayman, J. W.; Packard, N.; Senn, R.; Yang, D.; Yorkston, J.; Machado, A.; Carrino, J. A.; Siewerdsen, J. H.

    2011-01-01

    cm diameter bore (20 x 20 x 20 cm^3 field of view); total acquisition arc of ∼240 deg. The system MTF declines to 50% at ∼1.3 mm^-1 and to 10% at ∼2.7 mm^-1, consistent with sub-millimeter spatial resolution. Analysis of DQE suggested a nominal technique of 90 kVp (+0.3 mm Cu added filtration) to provide high imaging performance from ∼500 projections at less than ∼0.5 kW power, implying ∼6.4 mGy (0.064 mSv) for low-dose protocols and ∼15 mGy (0.15 mSv) for high-quality protocols. The experimental studies show improved image uniformity and contrast-to-noise ratio (without increase in dose) through incorporation of a custom 10:1 GR antiscatter grid. Cadaver images demonstrate exquisite bone detail, visualization of articular morphology, and soft-tissue visibility comparable to diagnostic CT (10-20 HU contrast resolution). Conclusions: The results indicate that the proposed system will deliver volumetric images of the extremities with soft-tissue contrast resolution comparable to diagnostic CT and improved spatial resolution at potentially reduced dose. Cascaded systems analysis provided a useful basis for system design and optimization without costly repeated experimentation. A combined process of design specification, image quality analysis, clinical feedback, and revision yielded a prototype that is now awaiting clinical pilot studies. Potential advantages of the proposed system include reduced space and cost, imaging of load-bearing extremities, and combined volumetric imaging with real-time fluoroscopy and digital radiography.

  1. Extremal vacuum black holes in higher dimensions

    International Nuclear Information System (INIS)

    Figueras, Pau; Lucietti, James; Rangamani, Mukund; Kunduri, Hari K.

    2008-01-01

    We consider extremal black hole solutions to the vacuum Einstein equations in dimensions greater than five. We prove that the near-horizon geometry of any such black hole must possess an SO(2,1) symmetry in a special case where one has an enhanced rotational symmetry group. We construct examples of vacuum near-horizon geometries using the extremal Myers-Perry black holes and boosted Myers-Perry strings. The latter lead to near-horizon geometries of black ring topology, which in odd spacetime dimensions have the correct number of rotational symmetries to describe an asymptotically flat black object. We argue that a subset of these correspond to the near-horizon limit of asymptotically flat extremal black rings. Using this identification we provide a conjecture for the exact 'phase diagram' of extremal vacuum black rings with a connected horizon in odd spacetime dimensions greater than five.

  2. Optimization of radioactivation analysis for the determination of iodine, bromine, and chlorine contents in soils, plants, soil solutions and rain water

    International Nuclear Information System (INIS)

    Yuita, Kouichi

    1983-01-01

    The conventional analytical procedures for iodine, bromine and chlorine in soils, plants, soil solutions and rain water, especially for the former two elements, have not been sufficiently accurate or sensitive. With emphasis on radioactivation analysis, known to be a highly accurate analytical method, practical radioactivation procedures that are highly sensitive, accurate and convenient have been investigated for the determination of the three halogen elements in various soils and plants, and of the three present in extremely low concentrations in soil solutions and rain water. Consequently, the following methods were established: (1) non-destructive radioactivation analysis, without chemical separation, of bromine and chlorine in plants, soil solutions and rain water; (2) radioactivation analysis by group separation for the simultaneous determination of iodine, bromine and chlorine in soils; (3) high-sensitivity radioactivation analysis for iodine in plants, soil solutions and rain water. A manual for the analytical procedures was prepared accordingly. (Mori, K.)

  3. Improving real-time estimation of heavy-to-extreme precipitation using rain gauge data via conditional bias-penalized optimal estimation

    Science.gov (United States)

    Seo, Dong-Jun; Siddique, Ridwan; Zhang, Yu; Kim, Dongsoo

    2014-11-01

    A new technique for gauge-only precipitation analysis for improved estimation of heavy-to-extreme precipitation is described and evaluated. The technique is based on a novel extension of classical optimal linear estimation theory in which, in addition to error variance, Type-II conditional bias (CB) is explicitly minimized. When cast in the form of well-known kriging, the methodology yields a new kriging estimator, referred to as CB-penalized kriging (CBPK). CBPK, however, tends to yield negative estimates in areas of no or light precipitation. To address this, an extension of CBPK, referred to herein as extended conditional bias penalized kriging (ECBPK), has been developed which combines the CBPK estimate with a trivial estimate of zero precipitation. To evaluate ECBPK, we carried out real-world and synthetic experiments in which ECBPK and the gauge-only precipitation analysis procedure used in the NWS's Multisensor Precipitation Estimator (MPE) were compared for estimation of point precipitation and mean areal precipitation (MAP), respectively. The results indicate that ECBPK improves hourly gauge-only estimation of heavy-to-extreme precipitation significantly. The improvement is particularly large for estimation of MAP for a range of combinations of basin size and rain gauge network density. This paper describes the technique, summarizes the results and shares ideas for future research.

  4. A hydro-meteorological model chain to assess the influence of natural variability and impacts of climate change on extreme events and propose optimal water management

    Science.gov (United States)

    von Trentini, F.; Willkofer, F.; Wood, R. R.; Schmid, F. J.; Ludwig, R.

    2017-12-01

    The ClimEx project (Climate change and hydrological extreme events - risks and perspectives for water management in Bavaria and Québec) focuses on the effects of climate change on hydro-meteorological extreme events and their implications for water management in Bavaria and Québec. Therefore, a hydro-meteorological model chain is applied. It employs high performance computing capacity of the Leibniz Supercomputing Centre facility SuperMUC to dynamically downscale 50 members of the Global Circulation Model CanESM2 over European and Eastern North American domains using the Canadian Regional Climate Model (RCM) CRCM5. Over Europe, the unique single model ensemble is conjointly analyzed with the latest information provided through the CORDEX-initiative, to better assess the influence of natural climate variability and climatic change in the dynamics of extreme events. Furthermore, these 50 members of a single RCM will enhance extreme value statistics (extreme return periods) by exploiting the available 1500 model years for the reference period from 1981 to 2010. Hence, the RCM output is applied to drive the process based, fully distributed, and deterministic hydrological model WaSiM in high temporal (3h) and spatial (500m) resolution. WaSiM and the large ensemble are further used to derive a variety of hydro-meteorological patterns leading to severe flood events. A tool for virtual perfect prediction shall provide a combination of optimal lead time and management strategy to mitigate certain flood events following these patterns.

  5. The Engineering for Climate Extremes Partnership

    Science.gov (United States)

    Holland, G. J.; Tye, M. R.

    2014-12-01

    Hurricane Sandy and the recent floods in Thailand have demonstrated not only how sensitive the urban environment is to the impact of severe weather, but also the associated global reach of the ramifications. These, together with other growing extreme weather impacts and the increasing interdependence of global commercial activities, point towards a growing vulnerability to weather and climate extremes. The Engineering for Climate Extremes Partnership brings academia, industry and government together with the goal of encouraging joint activities aimed at developing new, robust, and well-communicated responses to this increasing vulnerability. Integral to the approach is the concept of 'graceful failure', in which flexible designs are adopted that protect against failure by combining engineering or network strengths with a plan for efficient and rapid recovery if and when they fail. Such an approach enables optimal planning for both known future scenarios and their assessed uncertainty.

  6. Extreme Weather and Climate: Workshop Report

    Science.gov (United States)

    Sobel, Adam; Camargo, Suzana; Debucquoy, Wim; Deodatis, George; Gerrard, Michael; Hall, Timothy; Hallman, Robert; Keenan, Jesse; Lall, Upmanu; Levy, Marc; hide

    2016-01-01

    Extreme events are the aspects of climate to which human society is most sensitive. Due to both their severity and their rarity, extreme events can challenge the capacity of physical, social, economic and political infrastructures, turning natural events into human disasters. Yet, because they are low frequency events, the science of extreme events is very challenging. Among the challenges is the difficulty of connecting extreme events to longer-term, large-scale variability and trends in the climate system, including anthropogenic climate change. How can we best quantify the risks posed by extreme weather events, both in the current climate and in the warmer and different climates to come? How can we better predict them? What can we do to reduce the harm done by such events? In response to these questions, the Initiative on Extreme Weather and Climate has been created at Columbia University in New York City (extremeweather.columbia.edu). This Initiative is a University-wide activity focused on understanding the risks to human life, property, infrastructure, communities, institutions, ecosystems, and landscapes from extreme weather events, both in the present and future climates, and on developing solutions to mitigate those risks. In May 2015, the Initiative held its first science workshop, entitled Extreme Weather and Climate: Hazards, Impacts, Actions. The purpose of the workshop was to define the scope of the Initiative; the tremendously broad intellectual footprint of the topic is indicated by the titles of the presentations (see Table 1). The intent of the workshop was to stimulate thought across disciplinary lines by juxtaposing talks whose subjects differed dramatically. Each session concluded with a question and answer panel. Approximately 150 people were in attendance throughout the day. Below is a brief synopsis of each presentation. The synopses collectively reflect the variety and richness of the emerging extreme event research agenda.

  7. Efficient solution to the stagnation problem of the particle swarm optimization algorithm for phase diversity.

    Science.gov (United States)

    Qi, Xin; Ju, Guohao; Xu, Shuyan

    2018-04-10

    The phase diversity (PD) technique needs optimization algorithms to minimize the error metric and find the global minimum. Particle swarm optimization (PSO) is very suitable for PD due to its simple structure, fast convergence, and global searching ability. However, the traditional PSO algorithm for PD still suffers from the stagnation problem (premature convergence), which can result in a wrong solution. In this paper, the stagnation problem of the traditional PSO algorithm for PD is illustrated first. Then, an explicit strategy is proposed to solve this problem, based on an in-depth understanding of the inherent optimization mechanism of the PSO algorithm. Specifically, a criterion is proposed to detect premature convergence; then a redistributing mechanism is proposed to prevent premature convergence. To improve the efficiency of this redistributing mechanism, randomized Halton sequences are further introduced to ensure the uniform distribution and randomness of the redistributed particles in the search space. Simulation results show that this strategy can effectively solve the stagnation problem of the PSO algorithm for PD, especially for large-scale and high-dimension wavefront sensing and noisy conditions. This work is further verified by an experiment. This work can improve the robustness and performance of PD wavefront sensing.
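
    The idea of detecting premature convergence and respreading the swarm with low-discrepancy points can be sketched as below; the stagnation criterion (no improvement of the global best over a fixed window) and the deterministic Halton generator are simplified stand-ins for the paper's criterion and its randomized Halton sequences.

    ```python
    import numpy as np

    def halton(n_points, dim):
        """Low-discrepancy Halton points in [0, 1)^dim using the first `dim` prime bases."""
        primes = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29][:dim]
        out = np.zeros((n_points, dim))
        for d, base in enumerate(primes):
            for i in range(1, n_points + 1):
                f, x, k = 1.0, 0.0, i
                while k > 0:
                    f /= base
                    x += f * (k % base)
                    k //= base
                out[i - 1, d] = x
        return out

    def redistribute_if_stagnant(positions, gbest_history, lo, hi, window=20, tol=1e-8):
        """If the global best has not improved over `window` iterations (a simple premature-
        convergence check), respread all particles uniformly over the search box."""
        if len(gbest_history) >= window and gbest_history[-window] - gbest_history[-1] < tol:
            n, dim = positions.shape
            return lo + halton(n, dim) * (hi - lo)
        return positions
    ```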

  8. Resilience Design Patterns - A Structured Approach to Resilience at Extreme Scale (version 1.1)

    Energy Technology Data Exchange (ETDEWEB)

    Hukerikar, Saurabh [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Engelmann, Christian [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)

    2016-12-01

    addresses concrete problems in the design of resilient systems. The complete catalog of resilience design patterns provides designers with reusable design elements. We also define a framework that enhances a designer's understanding of the important constraints and opportunities for the design patterns to be implemented and deployed at various layers of the system stack. This design framework may be used to establish mechanisms and interfaces to coordinate flexible fault management across hardware and software components. The framework also supports optimization of the cost-benefit trade-offs among performance, resilience, and power consumption. The overall goal of this work is to enable a systematic methodology for the design and evaluation of resilience technologies in extreme-scale HPC systems that keep scientific applications running to a correct solution in a timely and cost-efficient manner in spite of frequent faults, errors, and failures of various types.

  9. Particle Swarm Optimization applied to combinatorial problem aiming the fuel recharge problem solution in a nuclear reactor

    International Nuclear Information System (INIS)

    Meneses, Anderson Alvarenga de Moura; Schirru, Roberto

    2005-01-01

    This work focuses on the use of the Artificial Intelligence technique Particle Swarm Optimization (PSO) to optimize the fuel recharge of a nuclear reactor. This is a combinatorial problem, in which the search for the best feasible solution is done by minimizing a specific objective function. At this first stage, however, the fuel recharge problem is compared with the Traveling Salesman Problem (TSP), since both are combinatorial, with one advantage: the evaluation of the TSP objective function is much simpler. Thus, the proposed methods have been applied to two TSPs: Oliver 30 and Rykel 48. In 1995, KENNEDY and EBERHART presented the PSO technique to optimize non-linear continuous functions. Recently, some PSO models for discrete search spaces have been developed for combinatorial optimization, although all of them have formulations different from the ones presented here. In this paper, we use the PSO theory associated with the Random Keys (RK) model, used in some optimizations with Genetic Algorithms. Particle Swarm Optimization with Random Keys (PSORK) results from this association, which combines PSO and RK. The adaptations and changes in the PSO aim to allow its use for nuclear fuel recharge. This work shows PSORK applied to the proposed combinatorial problem and the results obtained. (author)
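
    The random-keys trick itself is simple enough to sketch: a particle keeps a continuous position vector (so the standard PSO update applies unchanged) and that vector is decoded into a permutation by sorting whenever the objective is evaluated. The toy distance matrix below is invented; in the reactor application the recharge objective function would replace the tour length.

    ```python
    import numpy as np

    def decode_random_keys(keys):
        """Map a continuous key vector to a visiting order (permutation) by sorting."""
        return np.argsort(keys)

    def tour_length(order, dist):
        return sum(dist[order[i], order[(i + 1) % len(order)]] for i in range(len(order)))

    rng = np.random.default_rng(1)
    cities = rng.random((6, 2))                                   # hypothetical 6-city TSP
    dist = np.linalg.norm(cities[:, None] - cities[None, :], axis=-1)

    particle = rng.random(6)            # a continuous PSO position, i.e. the random keys
    order = decode_random_keys(particle)
    print("tour:", order, "length:", round(tour_length(order, dist), 3))
    ```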

  10. The Development and Full-Scale Experimental Validation of an Optimal Water Treatment Solution in Improving Chiller Performances

    Directory of Open Access Journals (Sweden)

    Chen-Yu Chiang

    2016-06-01

    Full Text Available An optimal solution, combining physical and chemical water treatment methods, has been developed. This method uses high voltage capacitance based (HVCB) electrodes, coupled with biocides, to form a sustainable solution for improving chiller plant performance. In this study, full-scale industrial tests, instead of laboratory tests, were conducted on chiller plants with cooling capacities of 5000 RT to 10,000 RT under commercial operation for more than two years. The experimental results indicated that the condenser approach temperatures can be maintained at below 1 °C for over two years. It has been validated that the coefficient of performance (COP) of a chiller can be improved by over 5% by implementing this solution. Every 1 °C reduction in condenser approach temperature can yield approximately a 3% increase in chiller COP, which warrants its future application potential in the HVAC industry, where the approach temperature can degrade by 1 °C every three to six months. The solution developed in this study can also reduce chemical dosages and conserve makeup water substantially, and is more environmentally friendly.

  11. Optimization of Wind Turbine Airfoil Using Nondominated Sorting Genetic Algorithm and Pareto Optimal Front

    Directory of Open Access Journals (Sweden)

    Ziaul Huque

    2012-01-01

    Full Text Available A Computational Fluid Dynamics (CFD) and response surface-based multiobjective design optimization was performed for six different 2D airfoil profiles, and the Pareto optimal front of each airfoil is presented. FLUENT, a commercial CFD simulation code, was used to determine the relevant aerodynamic loads. The Lift Coefficient (CL) and Drag Coefficient (CD) data at angles of attack (α) ranging from 0° to 12° and at three different Reynolds numbers (Re = 68,459; 479,210; and 958,422) were obtained for all six airfoils. A realizable k-ε turbulence model with a second-order upwind solution method was used in the simulations. The standard least squares method was used to generate the response surfaces with the statistical code JMP. The Elitist Non-dominated Sorting Genetic Algorithm (NSGA-II) was used to determine the Pareto optimal set based on the response surfaces. Each Pareto optimal solution represents a different compromise between design objectives. This gives the designer a choice to select a design compromise that best suits the requirements from a set of optimal solutions. The Pareto solution set is presented in the form of a Pareto optimal front.

  12. Design Optimization Toolkit: Users' Manual

    Energy Technology Data Exchange (ETDEWEB)

    Aguilo Valentin, Miguel Alejandro [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States). Computational Solid Mechanics and Structural Dynamics

    2014-07-01

    The Design Optimization Toolkit (DOTk) is a stand-alone C++ software package intended to solve complex design optimization problems. The DOTk software package provides a range of solution methods suited for gradient/nongradient-based optimization, large-scale constrained optimization, and topology optimization. DOTk was designed to have a flexible user interface to allow easy access to DOTk solution methods from external engineering software packages. This inherent flexibility makes DOTk minimally intrusive to other engineering software packages. As part of this inherent flexibility, the DOTk software package provides an easy-to-use MATLAB interface that enables users to call DOTk solution methods directly from the MATLAB command window.

  13. Instability of extremal relativistic charged spheres

    International Nuclear Information System (INIS)

    Anninos, Peter; Rothman, Tony

    2002-01-01

    With the question 'Can relativistic charged spheres form extremal black holes?' in mind, we investigate the properties of such spheres from a classical point of view. The investigation is carried out numerically by integrating the Oppenheimer-Volkov equation for relativistic charged fluid spheres and finding interior Reissner-Nordstroem solutions for these objects. We consider both constant density and adiabatic equations of state, as well as several possible charge distributions, and examine stability by both a normal mode and an energy analysis. In all cases, the stability limit for these spheres lies between the extremal (Q=M) limit and the black hole limit (R=R + ). That is, we find that charged spheres undergo gravitational collapse before they reach Q=M, suggesting that extremal Reissner-Nordstroem black holes produced by collapse are ruled out. A general proof of this statement would support a strong form of the cosmic censorship hypothesis, excluding not only stable naked singularities, but stable extremal black holes. The numerical results also indicate that although the interior mass-energy m(R) obeys the usual m/R + as Q→M. In the Appendix we also argue that Hawking radiation will not lead to an extremal Reissner-Nordstroem black hole. All our results are consistent with the third law of black hole dynamics, as currently understood

  14. Efficient robust control of first order scalar conservation laws using semi-analytical solutions

    KAUST Repository

    Li, Yanning; Canepa, Edward S.; Claudel, Christian G.

    2014-01-01

    This article presents a new robust control framework for transportation problems in which the state is modeled by a first order scalar conservation law. Using an equivalent formulation based on a Hamilton-Jacobi equation, we pose the problem of controlling the state of the system on a network link, using initial density control and boundary flow control, as a Linear Program. We then show that this framework can be extended to arbitrary control problems involving the control of subsets of the initial and boundary conditions. Unlike many previously investigated transportation control schemes, this method yields a globally optimal solution and is capable of handling shocks (i.e. discontinuities in the state of the system). We also demonstrate that the same framework can handle robust control problems, in which the uncontrollable components of the initial and boundary conditions are encoded in intervals on the right hand side of inequalities in the linear program. The lower bound of the interval which defines the smallest feasible solution set is used to solve the robust LP/MILP. Since this framework leverages the intrinsic properties of the Hamilton-Jacobi equation used to model the state of the system, it is extremely fast. Several examples are given to demonstrate the performance of the robust control solution and the trade-off between the robustness and the optimality.

  15. Sufficient conditions for optimality for a mathematical model of drug treatment with pharmacodynamics

    Directory of Open Access Journals (Sweden)

    Maciej Leszczyński

    2017-01-01

    Full Text Available We consider an optimal control problem for a general mathematical model of drug treatment with a single agent. The control represents the concentration of the agent and its effect (pharmacodynamics) is modelled by a Hill function (i.e., Michaelis-Menten type kinetics). The aim is to minimize a cost functional consisting of a weighted average related to the state of the system (both at the end and during a fixed therapy horizon) and to the total amount of drugs given. The latter is an indirect measure for the side effects of treatment. It is shown that optimal controls are continuous functions of time that change between full or no dose segments with connecting pieces that take values in the interior of the control set. Sufficient conditions for the strong local optimality of an extremal controlled trajectory in terms of the existence of a solution to a piecewise defined Riccati differential equation are given.
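
    The structure of such a model can be illustrated with a toy simulation: a Hill (Michaelis-Menten type) pharmacodynamic effect, a one-compartment state equation, and a cost that weights the running state, the terminal state, and the total dose. All dynamics, parameters, and weights below are invented for illustration and are not the paper's model.

    ```python
    import numpy as np

    def hill(u, Emax=1.0, EC50=0.5, n=1):
        """Hill / Michaelis-Menten pharmacodynamics: the effect saturates with concentration u."""
        return Emax * u ** n / (EC50 ** n + u ** n)

    def cost(u_profile, dt=0.1, x0=1.0, growth=0.1, w_run=1.0, w_end=5.0, w_dose=0.2):
        """Weighted running state + terminal state + total amount of drug given."""
        x, running = x0, 0.0
        for u in u_profile:
            x += dt * (growth - hill(u)) * x          # toy state dynamics under treatment
            running += dt * x
        return w_run * running + w_end * x + w_dose * dt * float(np.sum(u_profile))

    horizon = 100
    print("no treatment :", round(cost(np.zeros(horizon)), 3))
    print("full dose    :", round(cost(np.ones(horizon)), 3))
    print("dose + taper :", round(cost(np.concatenate([np.ones(60), np.linspace(1, 0, 40)])), 3))
    ```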

  16. Biosorption of mercury from aqueous solutions using highly characterised peats

    Directory of Open Access Journals (Sweden)

    A.M. Rizzuti

    2015-02-01

    Full Text Available This research investigated the biosorption of mercury from aqueous solutions by six highly characterised peats. Samples of the peats were tested both in unaltered condition and after being treated with hydrochloric acid (HCl) to free up any occupied exchange sites. Other variables tested were sample dose, contact time, mixing temperature, and the concentration and pH of the mercury solution. Desorption studies were also performed, and tests were done to determine whether the peats could be re-used for mercury biosorption. The results indicate that all six peat types biosorb mercury from aqueous solutions extremely well (92−100 % removal) and that their mercury removal capacities are not significantly affected by manipulation of the various factors tested. The factor that had the greatest impact on the mercury removal capacities of the peats was the pH of the mercury solution. The optimal mercury solution pH for mercury removal was in the range 5−7 for four of the peats and in the range 2−3 for the other two. The desorption results indicate that it may be possible to recover up to 41 % of the removed mercury. All of the peat types tested can be repeatedly re-used for additional mercury biosorption cycles. Hence, their disposal should not become a hazardous waste problem.

  17. Regularizing portfolio optimization

    International Nuclear Information System (INIS)

    Still, Susanne; Kondor, Imre

    2010-01-01

    The optimization of large portfolios displays an inherent instability due to estimation error. This poses a fundamental problem, because solutions that are not stable under sample fluctuations may look optimal for a given sample, but are, in effect, very far from optimal with respect to the average risk. In this paper, we approach the problem from the point of view of statistical learning theory. The occurrence of the instability is intimately related to over-fitting, which can be avoided using known regularization methods. We show how regularized portfolio optimization with the expected shortfall as a risk measure is related to support vector regression. The budget constraint dictates a modification. We present the resulting optimization problem and discuss the solution. The L2 norm of the weight vector is used as a regularizer, which corresponds to a diversification 'pressure'. This means that diversification, besides counteracting downward fluctuations in some assets by upward fluctuations in others, is also crucial because it improves the stability of the solution. The approach we provide here allows for the simultaneous treatment of optimization and diversification in one framework that enables the investor to trade off between the two, depending on the size of the available dataset.
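
    As a compact illustration of the regularization effect described above (using variance rather than expected shortfall, for brevity), the sketch below computes minimum-variance weights under a budget constraint with an L2 penalty on the weight vector; the sample size and penalty values are arbitrary placeholders.

    ```python
    import numpy as np

    def regularized_min_variance(returns, lam=0.1):
        """Minimize w' C w + lam ||w||^2 subject to sum(w) = 1; the closed-form solution is
        proportional to (C + lam I)^-1 1, i.e. a ridge on the sample covariance."""
        cov = np.cov(returns, rowvar=False)
        reg = cov + lam * np.eye(cov.shape[0])
        w = np.linalg.solve(reg, np.ones(cov.shape[0]))
        return w / w.sum()

    rng = np.random.default_rng(0)
    sample = rng.normal(0.0005, 0.01, size=(60, 20))   # short sample: estimation error dominates
    for lam in (0.0, 0.05, 0.5):
        w = regularized_min_variance(sample, lam)
        print(f"lam={lam:4.2f}  largest |weight| = {np.abs(w).max():.3f}")
    ```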

  18. Regularizing portfolio optimization

    Science.gov (United States)

    Still, Susanne; Kondor, Imre

    2010-07-01

    The optimization of large portfolios displays an inherent instability due to estimation error. This poses a fundamental problem, because solutions that are not stable under sample fluctuations may look optimal for a given sample, but are, in effect, very far from optimal with respect to the average risk. In this paper, we approach the problem from the point of view of statistical learning theory. The occurrence of the instability is intimately related to over-fitting, which can be avoided using known regularization methods. We show how regularized portfolio optimization with the expected shortfall as a risk measure is related to support vector regression. The budget constraint dictates a modification. We present the resulting optimization problem and discuss the solution. The L2 norm of the weight vector is used as a regularizer, which corresponds to a diversification 'pressure'. This means that diversification, besides counteracting downward fluctuations in some assets by upward fluctuations in others, is also crucial because it improves the stability of the solution. The approach we provide here allows for the simultaneous treatment of optimization and diversification in one framework that enables the investor to trade off between the two, depending on the size of the available dataset.

  19. NN-Based Implicit Stochastic Optimization of Multi-Reservoir Systems Management

    Directory of Open Access Journals (Sweden)

    Matteo Sangiorgio

    2018-03-01

    Full Text Available Multi-reservoir systems management is complex because of the uncertainty about future events and the variety of purposes, usually conflicting, of the involved actors. An efficient management of these systems can help improve resource allocation, prevent political crises and reduce the conflicts between the stakeholders. Bellman stochastic dynamic programming (SDP) is the most famous among the many proposed approaches to solve this optimal control problem. Unfortunately, SDP is affected by the curse of dimensionality: computational effort increases exponentially with the complexity of the considered system (i.e., number of reservoirs), and the problem rapidly becomes intractable. This paper proposes an implicit stochastic optimization approach for the solution of the reservoir management problem. The core idea is to use extremely flexible functions, such as artificial neural networks (NN), for designing release rules which approximate the optimal policies obtained by an open-loop approach. These trained NNs can then be used to take decisions in real time. The approach thus requires a sufficiently long series of historical or synthetic inflows, and the definition of a compromise solution to be approximated. This work analyzes with particular emphasis the importance of the information which represents the input of the control laws, investigating the effects of different degrees of completeness. The methodology is applied to the Nile River basin considering the main management objectives (minimization of the irrigation water deficit and maximization of the hydropower production), but can be easily adopted also in other cases.
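
    A minimal sketch of the core idea, under the assumption that a set of (storage, inflow) states and corresponding releases from an open-loop optimization is already available: a small feedforward network is fitted to these pairs and then acts as a closed-loop release rule. The data below are synthetic and the network size is arbitrary; neither corresponds to the Nile case study.

        import numpy as np
        from sklearn.neural_network import MLPRegressor

        rng = np.random.default_rng(1)
        storage = rng.uniform(0.0, 1.0, 2000)     # normalized reservoir storage
        inflow = rng.uniform(0.0, 1.0, 2000)      # normalized inflow information
        # stand-in for "optimal" releases produced by an open-loop (deterministic) optimization
        release = np.clip(0.6 * storage + 0.3 * inflow + 0.05 * rng.normal(size=2000), 0.0, 1.0)

        X = np.column_stack([storage, inflow])
        rule = MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=2000, random_state=0)
        rule.fit(X, release)                      # NN approximation of the release policy
        print("release for storage=0.7, inflow=0.4:",
              round(float(rule.predict([[0.7, 0.4]])[0]), 3))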

  20. Enhanced Performance of PbS-quantum-dot-sensitized Solar Cells via Optimizing Precursor Solution and Electrolytes

    Science.gov (United States)

    Tian, Jianjun; Shen, Ting; Liu, Xiaoguang; Fei, Chengbin; Lv, Lili; Cao, Guozhong

    2016-03-01

    This work reports a PbS-quantum-dot-sensitized solar cell (QDSC) with a power conversion efficiency (PCE) of 4%. PbS quantum dots (QDs) were grown on mesoporous TiO2 film using a successive ion layer absorption and reaction (SILAR) method. The growth of QDs was found to be profoundly affected by the concentration of the precursor solution. At low concentrations, the rate-limiting factor of the crystal growth was the adsorption of the precursor ions, and the surface growth of the crystal became the limiting factor in the high concentration solution. The optimal concentration of precursor solution with respect to the quantity and size of synthesized QDs was 0.06 M. To further increase the performance of QDSCs, 30% of the deionized water in the polysulfide electrolyte was replaced with methanol to improve the wettability and permeability of electrolytes in the TiO2 film, which accelerated the redox couple diffusion in the electrolyte solution and improved charge transfer at the interfaces between photoanodes and electrolytes. The stability of PbS QDs in the electrolyte was also improved by methanol to reduce the charge recombination and prolong the electron lifetime. As a result, the PCE of QDSC was increased to 4.01%.

  1. Structure and dynamics of solutions

    CERN Document Server

    Ohtaki, H

    2013-01-01

    Recent advances in the study of structural and dynamic properties of solutions have provided a molecular picture of solute-solvent interactions. Although the study of thermodynamic as well as electronic properties of solutions has played a role in the development of research on the rate and mechanism of chemical reactions, such macroscopic and microscopic properties are insufficient for a deeper understanding of fast chemical and biological reactions. In order to fill the gap between the two extremes, it is necessary to know how molecules are arranged in solution and how they change their pos

  2. Optimal control of quantum dissipative dynamics: Analytic solution for cooling the three-level Λ system

    International Nuclear Information System (INIS)

    Sklarz, Shlomo E.; Tannor, David J.; Khaneja, Navin

    2004-01-01

    We study the problem of optimal control of dissipative quantum dynamics. Although under most circumstances dissipation leads to an increase in entropy (or a decrease in purity) of the system, there is an important class of problems for which dissipation with external control can decrease the entropy (or increase the purity) of the system. An important example is laser cooling. In such systems, there is an interplay of the Hamiltonian part of the dynamics, which is controllable, and the dissipative part of the dynamics, which is uncontrollable. The strategy is to control the Hamiltonian portion of the evolution in such a way that the dissipation causes the purity of the system to increase rather than decrease. The goal of this paper is to find the strategy that leads to maximal purity at the final time. Under the assumption that Hamiltonian control is complete and arbitrarily fast, we provide a general framework by which to calculate optimal cooling strategies. These assumptions lead to a great simplification, in which the control problem can be reformulated in terms of the spectrum of eigenvalues of ρ, rather than ρ itself. By combining this formulation with the Hamilton-Jacobi-Bellman theorem we are able to obtain an equation for the globally optimal cooling strategy in terms of the spectrum of the density matrix. For the three-level Λ system, we provide a complete analytic solution for the optimal cooling strategy. For this system it is found that the optimal strategy does not exploit system coherences and is a 'greedy' strategy, in which the purity is increased maximally at each instant

  3. Biosorption of Cd(II), Ni(II) and Pb(II) from aqueous solution by dried biomass of aspergillus niger: application of response surface methodology to the optimization of process parameters

    Energy Technology Data Exchange (ETDEWEB)

    Amini, Malihe; Younesi, Habibollah [Department of Environmental Science, Faculty of Natural Resources and Marine Sciences, Tarbiat Modares University, Noor (Iran)

    2009-10-15

    In this study, the biosorption of Cd(II), Ni(II) and Pb(II) on Aspergillus niger in a batch system was investigated, and the optimal conditions were determined by means of central composite design (CCD) under response surface methodology (RSM). Biomass inactivated by heat and pretreated by alkali solution was used in the determination of optimal conditions. The effect of initial solution pH, biomass dose and initial ion concentration on the removal efficiency of metal ions by A. niger was optimized using a design of experiment (DOE) method. Experimental results indicated that the optimal conditions for biosorption were 5.22 g/L, 89.93 mg/L and 6.01 for biomass dose, initial ion concentration and solution pH, respectively. Enhancement of metal biosorption capacity of the dried biomass by pretreatment with sodium hydroxide was observed. Maximal removal efficiencies for Cd(II), Ni(II) and Pb(II) ions of 98, 80 and 99% were achieved, respectively. The biosorption capacity of A. niger biomass obtained for Cd(II), Ni(II) and Pb(II) ions was 2.2, 1.6 and 4.7 mg/g, respectively. According to these observations the fungal biomass of A. niger is a suitable biosorbent for the removal of heavy metals from aqueous solutions. Multiple response optimization was applied to the experimental data to discover the optimal conditions for a set of responses, simultaneously, by using a desirability function. (Abstract Copyright [2009], Wiley Periodicals, Inc.)
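
    For readers unfamiliar with how a central composite or similar response-surface design is turned into a predictive model, the sketch below fits a second-order polynomial in pH, biomass dose and initial ion concentration by ordinary least squares. The design points and response values are synthetic placeholders, not the experimental data of this study.

        import numpy as np
        from itertools import combinations

        rng = np.random.default_rng(2)
        # illustrative design points: columns are solution pH, biomass dose (g/L), ion conc. (mg/L)
        X = rng.uniform([4.0, 2.0, 50.0], [8.0, 8.0, 130.0], size=(20, 3))
        # synthetic removal efficiencies (%) standing in for measured responses
        y = 40.0 + 8.0 * X[:, 0] + 4.0 * X[:, 1] - 0.1 * X[:, 2] - 0.6 * X[:, 0] ** 2 \
            + rng.normal(0.0, 1.0, 20)

        def quadratic_design(X):
            n = X.shape[1]
            cols = [np.ones(len(X))]                                           # intercept
            cols += [X[:, i] for i in range(n)]                                # linear terms
            cols += [X[:, i] * X[:, j] for i, j in combinations(range(n), 2)]  # interactions
            cols += [X[:, i] ** 2 for i in range(n)]                           # quadratic terms
            return np.column_stack(cols)

        beta, *_ = np.linalg.lstsq(quadratic_design(X), y, rcond=None)
        print("fitted second-order coefficients:", np.round(beta, 3))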

  4. Optimizing the recovery of copper from electroplating rinse bath solution by hollow fiber membrane.

    Science.gov (United States)

    Oskay, Kürşad Oğuz; Kul, Mehmet

    2015-01-01

    This study aimed to recover and remove copper from an industrial model wastewater solution by non-dispersive solvent extraction (NDSX). Two mathematical models were developed to simulate the performance of an integrated extraction-stripping process, based on the use of hollow fiber contactors using the response surface method. The models allow one to predict the time-dependent efficiencies of the two phases involved in individual extraction or stripping processes. The optimal recovery efficiency parameters were determined as 227 g/L of H2SO4 concentration, 1.22 feed/strip ratio, 450 mL/min flow rate (115.9 cm/min flow velocity) and 15 volume % LIX 84-I concentration in 270 min by central composite design (CCD). At these optimum conditions, the experimental value of recovery efficiency was 95.88%, which was in close agreement with the 97.75% efficiency value predicted by the model. At the end of the process, almost all the copper in the model wastewater solution was removed and recovered as CuSO4.5H2O salt, which can be reused in the copper electroplating industry.

  5. Extremal static AdS black hole/CFT correspondence in gauged supergravities

    International Nuclear Information System (INIS)

    Lue, H.; Mei Jianwei; Pope, C.N.; Vazquez-Poritz, Justin F.

    2009-01-01

    A recently proposed holographic duality allows the Bekenstein-Hawking entropy of extremal rotating black holes to be calculated microscopically, by applying the Cardy formula to the two-dimensional chiral CFTs associated with certain reparameterisations of azimuthal angular coordinates in the solutions. The central charges are proportional to the angular momenta of the black hole, and so the method degenerates in the case of static (non-rotating) black holes. We show that the method can be extended to encompass such charged static extremal AdS black holes by using consistent Kaluza-Klein sphere reduction ansatze to lift them to exact solutions in the low-energy limits of string theory or M-theory, where the electric charges become reinterpreted as angular momenta associated with internal rotations in the reduction sphere. We illustrate the procedure for the examples of extremal charged static AdS black holes in four, five, six and seven dimensions

  6. Exact solutions to robust control problems involving scalar hyperbolic conservation laws using Mixed Integer Linear Programming

    KAUST Repository

    Li, Yanning

    2013-10-01

    This article presents a new robust control framework for transportation problems in which the state is modeled by a first order scalar conservation law. Using an equivalent formulation based on a Hamilton-Jacobi equation, we pose the problem of controlling the state of the system on a network link, using boundary flow control, as a Linear Program. Unlike many previously investigated transportation control schemes, this method yields a globally optimal solution and is capable of handling shocks (i.e. discontinuities in the state of the system). We also demonstrate that the same framework can handle robust control problems, in which the uncontrollable components of the initial and boundary conditions are encoded in intervals on the right hand side of inequalities in the linear program. The lower bound of the interval which defines the smallest feasible solution set is used to solve the robust LP (or MILP if the objective function depends on boolean variables). Since this framework leverages the intrinsic properties of the Hamilton-Jacobi equation used to model the state of the system, it is extremely fast. Several examples are given to demonstrate the performance of the robust control solution and the trade-off between the robustness and the optimality. © 2013 IEEE.

  7. Exact solutions to robust control problems involving scalar hyperbolic conservation laws using Mixed Integer Linear Programming

    KAUST Repository

    Li, Yanning; Canepa, Edward S.; Claudel, Christian G.

    2013-01-01

    This article presents a new robust control framework for transportation problems in which the state is modeled by a first order scalar conservation law. Using an equivalent formulation based on a Hamilton-Jacobi equation, we pose the problem of controlling the state of the system on a network link, using boundary flow control, as a Linear Program. Unlike many previously investigated transportation control schemes, this method yields a globally optimal solution and is capable of handling shocks (i.e. discontinuities in the state of the system). We also demonstrate that the same framework can handle robust control problems, in which the uncontrollable components of the initial and boundary conditions are encoded in intervals on the right hand side of inequalities in the linear program. The lower bound of the interval which defines the smallest feasible solution set is used to solve the robust LP (or MILP if the objective function depends on boolean variables). Since this framework leverages the intrinsic properties of the Hamilton-Jacobi equation used to model the state of the system, it is extremely fast. Several examples are given to demonstrate the performance of the robust control solution and the trade-off between the robustness and the optimality. © 2013 IEEE.

  8. Optimal volume of injectate for fluoroscopy-guided cervical interlaminar epidural injection in patients with neck and upper extremity pain.

    Science.gov (United States)

    Park, Jun Young; Kim, Doo Hwan; Lee, Kunhee; Choi, Seong-Soo; Leem, Jeong-Gil

    2016-10-01

    There is no study of the optimal volume of contrast medium to use in cervical interlaminar epidural injections (CIEIs) for appropriate spread to target lesions. The aim was to determine the optimal volume of contrast medium to use in CIEIs. We analyzed the records of 80 patients who had undergone CIEIs. Patients were divided into 3 groups according to the amount of contrast: 3, 4.5, and 6 mL. The spread of medium to the target level was analyzed. Numerical rating scale data were also analyzed. The dye had spread to a point above the target level in 15 (78.9%), 22 (84.6%), and 32 (91.4%) patients in groups 1 to 3, respectively. The dye reached both sides in 14 (73.7%), 18 (69.2%), and 23 (65.7%) patients, and reached the ventral epidural space in 15 (78.9%), 22 (84.6%), and 30 (85.7%) patients, respectively. There were no significant differences in contrast spread among the groups. There were no significant differences in the numerical rating scale scores among the groups during the 3 months. When performing CIEIs, 3 mL of medication is a sufficient volume for the treatment of neck and upper-extremity pain induced by lower cervical degenerative disease.

  9. Which currency exchange regime for emerging markets?: Corner solutions under question

    Directory of Open Access Journals (Sweden)

    Allegret Jean-Pierre

    2007-01-01

    Full Text Available During the 90s, recurrent exchange rate crises in emerging markets showed the extreme fragility of soft pegs, the so-called intermediate exchange rate regimes. As a result, numerous academic economists as well as international institutions have promoted a new consensus: domestic authorities have to choose their exchange rate regime between only two solutions called corner solutions or extreme regimes: hard pegs or independent floating. This paper questions the relevance of this consensus. We stress the main advantages and costs of each corner solution. We conclude by stressing that intermediate regimes associated with an inflation-targeting framework seem a better solution for emerging countries than corner solutions.

  10. Dynamic viscosity of polymer solutions

    Energy Technology Data Exchange (ETDEWEB)

    Peterlin, A

    1982-03-01

    The dynamic viscosity investigation of solutions of long-chain polymers in very viscous solvents has definitely shown the existence of low- and high-frequency plateaus with a gradual transition between them. In both extreme cases the extrapolation of the measured Newtonian viscosities of the plateaus to infinite dilution yields the limiting intrinsic viscosities. Such a behavior is expected from the dynamic intrinsic viscosity of the necklace model of the linear polymer with finite internal viscosity. The plateau at low frequency shows up in any model of polymer solution. This work shows that the constant dynamic intrinsic viscosity in both extreme cases is well reproduced by the necklace model with the internal viscosity acting only between the beads on the same link. 20 references.

  11. On the physical parametrization and magnetic analogs of the Emparan-Teo dihole solution

    International Nuclear Information System (INIS)

    Cazares, J.A.; Garcia-Compean, H.; Manko, V.S.

    2008-01-01

    The Emparan-Teo non-extremal black dihole solution is reparametrized using Komar quantities and the separation distance as arbitrary parameters. We show how the potential A_3 can be calculated for the magnetic analogs of this solution in the Einstein-Maxwell and Einstein-Maxwell-dilaton theories. We also demonstrate that, similar to the extreme case, the external magnetic field can remove the supporting strut in the non-extremal black dihole too.

  12. Alternative measures of risk of extreme events in decision trees

    International Nuclear Information System (INIS)

    Frohwein, H.I.; Lambert, J.H.; Haimes, Y.Y.

    1999-01-01

    A need for a methodology to control the extreme events, defined as low-probability, high-consequence incidents, in sequential decisions is identified. A variety of alternative and complementary measures of the risk of extreme events are examined for their usability as objective functions in sequential decisions, represented as single- or multiple-objective decision trees. Earlier work had addressed difficulties, related to non-separability, with the minimization of some measures of the risk of extreme events in sequential decisions. In an extension of these results, it is shown how some non-separable measures of the risk of extreme events can be interpreted in terms of separable constituents of risk, thereby enabling a wider class of measures of the risk of extreme events to be handled in a straightforward manner in a decision tree. Also for extreme events, results are given to enable minimax- and Hurwicz-criterion analyses in decision trees. An example demonstrates the incorporation of different measures of the risk of extreme events in a multi-objective decision tree. Conceptual formulations for optimizing non-separable measures of the risk of extreme events are identified as an important area for future investigation

  13. Precise Orbit Solution for Swarm Using Space-Borne GPS Data and Optimized Pseudo-Stochastic Pulses

    Directory of Open Access Journals (Sweden)

    Bingbing Zhang

    2017-03-01

    Full Text Available Swarm is a European Space Agency (ESA) project that was launched on 22 November 2013, which consists of three Swarm satellites. Swarm precise orbits are essential to the success of the above project. This study investigates how well Swarm zero-differenced (ZD) reduced-dynamic orbit solutions can be determined using space-borne GPS data and optimized pseudo-stochastic pulses under high ionospheric activity. We choose Swarm space-borne GPS data from 1–25 October 2014, and Swarm reduced-dynamic orbits are obtained. Orbit quality is assessed by GPS phase observation residuals and compared with Precise Science Orbits (PSOs) released by ESA. Results show that pseudo-stochastic pulses with a time interval of 6 min and an a priori standard deviation (STD) of 10⁻² mm/s in the radial (R), along-track (T) and cross-track (N) directions are optimal for Swarm ZD reduced-dynamic precise orbit determination (POD). During high ionospheric activity, the mean Root Mean Square (RMS) of Swarm GPS phase residuals is 9–11 mm. Swarm orbit solutions are also compared with Swarm PSOs released by ESA, and the accuracy of Swarm orbits can reach 2–4 cm in the R, T and N directions. Independent Satellite Laser Ranging (SLR) validation indicates that Swarm reduced-dynamic orbits have an accuracy of 2–4 cm. Swarm-B orbit quality is better than that of Swarm-A and Swarm-C. The Swarm orbits can be applied to geomagnetic, geoelectric and gravity field recovery.

  14. Optimizing nanodiscs and bicelles for solution NMR studies of two β-barrel membrane proteins

    International Nuclear Information System (INIS)

    Kucharska, Iga; Edrington, Thomas C.; Liang, Binyong; Tamm, Lukas K.

    2015-01-01

    Solution NMR spectroscopy has become a robust method to determine structures and explore the dynamics of integral membrane proteins. The vast majority of previous studies on membrane proteins by solution NMR have been conducted in lipid micelles. Contrary to the lipids that form a lipid bilayer in biological membranes, micellar lipids typically contain only a single hydrocarbon chain or two chains that are too short to form a bilayer. Therefore, there is a need to explore alternative more bilayer-like media to mimic the natural environment of membrane proteins. Lipid bicelles and lipid nanodiscs have emerged as two alternative membrane mimetics that are compatible with solution NMR spectroscopy. Here, we have conducted a comprehensive comparison of the physical and spectroscopic behavior of two outer membrane proteins from Pseudomonas aeruginosa, OprG and OprH, in lipid micelles, bicelles, and nanodiscs of five different sizes. Bicelles stabilized with a fraction of negatively charged lipids yielded spectra of almost comparable quality as in the best micellar solutions and the secondary structures were found to be almost indistinguishable in the two environments. Of the five nanodiscs tested, nanodiscs assembled from MSP1D1ΔH5 performed the best with both proteins in terms of sample stability and spectral resolution. Even in these optimal nanodiscs some broad signals from the membrane embedded barrel were severely overlapped with sharp signals from the flexible loops making their assignments difficult. A mutant OprH that had two of the flexible loops truncated yielded very promising spectra for further structural and dynamical analysis in MSP1D1ΔH5 nanodiscs

  15. Optimized positioning of autonomous surgical lamps

    Science.gov (United States)

    Teuber, Jörn; Weller, Rene; Kikinis, Ron; Oldhafer, Karl-Jürgen; Lipp, Michael J.; Zachmann, Gabriel

    2017-03-01

    We consider the problem of automatically finding optimal positions of surgical lamps throughout the whole surgical procedure, where we assume that future lamps could be robotized. We propose a two-tiered optimization technique for the real-time autonomous positioning of those robotized surgical lamps. Typically, finding optimal positions for surgical lamps is a multi-dimensional problem with several, in part conflicting, objectives, such as optimal lighting conditions at every point in time while minimizing the movement of the lamps in order to avoid distractions of the surgeon. Consequently, we use multi-objective optimization (MOO) to find optimal positions in real-time during the entire surgery. Due to the conflicting objectives, there is usually not a single optimal solution for such kinds of problems, but a set of solutions that realizes a Pareto-front. When our algorithm selects a solution from this set it additionally has to consider the individual preferences of the surgeon. This is a highly non-trivial task because the relationship between the solution and the parameters is not obvious. We have developed a novel meta-optimization that considers exactly this challenge. It delivers an easy-to-understand set of presets for the parameters and allows a balance between the lamp movement and lamp obstruction. This meta-optimization can be pre-computed for different kinds of operations and is then used by our online optimization for the selection of the appropriate Pareto solution. Both optimization approaches use data obtained by a depth camera that captures both the surgical site and the environment around the operating table. We have evaluated our algorithms with data recorded during a real open abdominal surgery; the recording is available for scientific purposes. The results show that our meta-optimization produces viable parameter sets for different parts of an intervention even when trained on a small portion of it.
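
    A minimal sketch of the selection step described above: given candidate lamp configurations scored on two conflicting objectives (here, hypothetically, site obstruction and lamp movement, both to be minimized), the non-dominated set is extracted and one Pareto solution is chosen with preference weights standing in for the surgeon's presets. The scores and weights are synthetic assumptions, not data from the paper.

        import numpy as np

        rng = np.random.default_rng(3)
        # candidate configurations scored on (f1 = site obstruction, f2 = lamp movement), both minimized
        F = rng.uniform(0.0, 1.0, size=(200, 2))

        def pareto_mask(F):
            # True where no other point is <= in all objectives and < in at least one
            mask = np.ones(len(F), dtype=bool)
            for i, p in enumerate(F):
                dominated = np.any(np.all(F <= p, axis=1) & np.any(F < p, axis=1))
                mask[i] = not dominated
            return mask

        front = F[pareto_mask(F)]
        weights = np.array([0.7, 0.3])             # preference preset: favour lighting over stillness
        chosen = front[np.argmin(front @ weights)]
        print("chosen trade-off point:", np.round(chosen, 3))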

  16. A new perspective on the nonextremal Enhancon solution

    International Nuclear Information System (INIS)

    Barrett, Jessica K.

    2006-01-01

    We discuss the nonextremal generalisation of the enhancon mechanism. We find that the nonextremal shell branch solution does not violate the Weak Energy Condition when the nonextremality parameter is small, in contrast to earlier discussions of this subject. We show that this physical shell branch solution fills the mass gap between the extremal enhancon solution and the nonextremal horizon branch solution

  17. Use of response surface methodology for optimization of fluoride adsorption in an aqueous solution by Brushite

    Directory of Open Access Journals (Sweden)

    M. Mourabet

    2017-05-01

    Full Text Available In the present study, response surface methodology (RSM) was employed for the removal of fluoride by Brushite and the process parameters were optimized. Four important process parameters including initial fluoride concentration (40–50 mg/L), pH (4–11), temperature (10–40 °C) and Brushite dose (0.05–0.15 g) were optimized to obtain the best response of fluoride removal using the statistical Box–Behnken design. The experimental data obtained were analyzed by analysis of variance (ANOVA) and fitted to a second-order polynomial equation using multiple regression analysis. Numerical optimization applying a desirability function was used to identify the optimum conditions for maximum removal of fluoride. The optimum conditions were found to be initial concentration = 49.06 mg/L, initial solution pH = 5.36, adsorbent dose = 0.15 g and temperature = 31.96 °C. A confirmatory experiment was performed to evaluate the accuracy of the optimization procedure and maximum fluoride removal of 88.78% was achieved under the optimized conditions. Several error analysis equations were used to measure the goodness-of-fit. Kinetic studies showed that the adsorption followed a pseudo-second order reaction. The equilibrium data were analyzed using the Langmuir, Freundlich, and Sips isotherm models at different temperatures. The Langmuir model was found to describe the data best. The adsorption capacity from the Langmuir isotherm (QL) was found to be 29.212, 35.952 and 36.260 mg/g at 298, 303, and 313 K respectively.
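
    As an illustration of how Langmuir parameters such as QL are typically obtained, the sketch below fits the Langmuir isotherm to equilibrium data by nonlinear least squares. The concentration and uptake values are invented placeholders, not the measurements of this study.

        import numpy as np
        from scipy.optimize import curve_fit

        def langmuir(Ce, QL, KL):
            # Langmuir isotherm: adsorbed amount q (mg/g) vs. equilibrium concentration Ce (mg/L)
            return QL * KL * Ce / (1.0 + KL * Ce)

        # illustrative equilibrium data (not the paper's measurements)
        Ce = np.array([2.0, 5.0, 10.0, 20.0, 35.0, 50.0])
        qe = np.array([8.1, 15.4, 21.9, 26.3, 28.0, 28.9])

        (QL, KL), _ = curve_fit(langmuir, Ce, qe, p0=[30.0, 0.1])
        print(f"QL = {QL:.2f} mg/g, KL = {KL:.3f} L/mg")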

  18. Treatment for superficial infusion thrombophlebitis of the upper extremity

    NARCIS (Netherlands)

    Di Nisio, Marcello; Peinemann, Frank; Porreca, Ettore; Rutjes, Anne W. S.

    2015-01-01

    Although superficial thrombophlebitis of the upper extremity represents a frequent complication of intravenous catheters inserted into the peripheral veins of the forearm or hand, no consensus exists on the optimal management of this condition in clinical practice. To summarise the evidence from

  19. Particle swarm optimization of ascent trajectories of multistage launch vehicles

    Science.gov (United States)

    Pontani, Mauro

    2014-02-01

    Multistage launch vehicles are commonly employed to place spacecraft and satellites in their operational orbits. If the rocket characteristics are specified, the optimization of its ascending trajectory consists of determining the optimal control law that leads to maximizing the final mass at orbit injection. The numerical solution of a similar problem is not trivial and has been pursued with different methods, for decades. This paper is concerned with an original approach based on the joint use of swarming theory and the necessary conditions for optimality. The particle swarm optimization technique represents a heuristic population-based optimization method inspired by the natural motion of bird flocks. Each individual (or particle) that composes the swarm corresponds to a solution of the problem and is associated with a position and a velocity vector. The formula for velocity updating is the core of the method and is composed of three terms with stochastic weights. As a result, the population migrates toward different regions of the search space taking advantage of the mechanism of information sharing that affects the overall swarm dynamics. At the end of the process the best particle is selected and corresponds to the optimal solution to the problem of interest. In this work the three-dimensional trajectory of the multistage rocket is assumed to be composed of four arcs: (i) first stage propulsion, (ii) second stage propulsion, (iii) coast arc (after release of the second stage), and (iv) third stage propulsion. The Euler-Lagrange equations and the Pontryagin minimum principle, in conjunction with the Weierstrass-Erdmann corner conditions, are employed to express the thrust angles as functions of the adjoint variables conjugate to the dynamics equations. The use of these analytical conditions coming from the calculus of variations leads to obtaining the overall rocket dynamics as a function of seven parameters only, namely the unknown values of the initial state
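
    The velocity-update rule referred to above is the core of PSO. The following compact sketch shows a generic PSO minimizer with inertia, cognitive and social terms weighted by random factors; the toy two-parameter cost function merely stands in for the (much more involved) reduced trajectory model, and all coefficients are conventional defaults rather than values from this paper.

        import numpy as np

        def pso(f, bounds, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
            rng = np.random.default_rng(seed)
            lo, hi = np.array(bounds, dtype=float).T
            dim = len(lo)
            x = rng.uniform(lo, hi, (n_particles, dim))
            v = np.zeros_like(x)
            pbest, pbest_f = x.copy(), np.array([f(p) for p in x])
            gbest = pbest[np.argmin(pbest_f)].copy()
            for _ in range(iters):
                r1, r2 = rng.random((2, n_particles, dim))
                # velocity update: inertia + cognitive (personal best) + social (swarm best) terms
                v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
                x = np.clip(x + v, lo, hi)
                fx = np.array([f(p) for p in x])
                better = fx < pbest_f
                pbest[better], pbest_f[better] = x[better], fx[better]
                gbest = pbest[np.argmin(pbest_f)].copy()
            return gbest, pbest_f.min()

        # toy stand-in for "maximize final mass": minimize a smooth two-parameter cost
        best, val = pso(lambda p: (p[0] - 1.0) ** 2 + (p[1] + 2.0) ** 2, [(-5.0, 5.0), (-5.0, 5.0)])
        print("best parameters:", np.round(best, 3), " cost:", round(val, 6))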

  20. PARETO OPTIMAL SOLUTIONS FOR MULTI-OBJECTIVE GENERALIZED ASSIGNMENT PROBLEM

    Directory of Open Access Journals (Sweden)

    S. Prakash

    2012-01-01

    Full Text Available

    ENGLISH ABSTRACT: The Multi-Objective Generalized Assignment Problem (MGAP) with two objectives, where one objective is linear and the other one is non-linear, has been considered, with the constraint that a job is assigned to only one worker – though he may be assigned more than one job, depending upon the time available to him. An algorithm is proposed to find the set of Pareto optimal solutions of the problem, determining assignments of jobs to workers with two objectives without setting priorities for them. The two objectives are to minimise the total cost of the assignment and to reduce the time taken to complete all the jobs.

    AFRIKAANS ABSTRACT: A multi-objective generalised assignment problem (MGAP) with two objectives, one linear and the other non-linear, is studied, subject to the constraint that a job is assigned to only one worker – although more than one job may be assigned to him if the time is available. An algorithm is proposed to find the set of Pareto-optimal solutions that assigns jobs to workers subject to the two objectives without assigning priorities to them. The two objectives are to minimise the total cost of the assignment and to reduce the time needed to complete all the jobs.

  1. Finding a pareto-optimal solution for multi-region models subject to capital trade and spillover externalities

    Energy Technology Data Exchange (ETDEWEB)

    Leimbach, Marian [Potsdam-Institut fuer Klimafolgenforschung e.V., Potsdam (Germany); Eisenack, Klaus [Oldenburg Univ. (Germany). Dept. of Economics and Statistics

    2008-11-15

    In this paper we present an algorithm that deals with trade interactions within a multi-region model. In contrast to traditional approaches this algorithm is able to handle spillover externalities. Technological spillovers are expected to foster the diffusion of new technologies, which helps to lower the cost of climate change mitigation. We focus on technological spillovers which are due to capital trade. The algorithm of finding a Pareto-optimal solution in an intertemporal framework is embedded in a decomposed optimization process. The paper analyzes convergence and equilibrium properties of this algorithm. In the final part of the paper, we apply the algorithm to investigate possible impacts of technological spillovers. While benefits of technological spillovers are significant for the capital-importing region, benefits for the capital-exporting region depend on the type of regional disparities and the resulting specialization and terms-of-trade effects. (orig.)

  2. Applying the Taguchi method to river water pollution remediation strategy optimization.

    Science.gov (United States)

    Yang, Tsung-Ming; Hsu, Nien-Sheng; Chiu, Chih-Chiang; Wang, Hsin-Ju

    2014-04-15

    Optimization methods usually obtain the travel direction of the solution by substituting the solutions into the objective function. However, if the solution space is too large, this search method may be time consuming. In order to address this problem, this study incorporated the Taguchi method into the solution space search process of the optimization method, and used the characteristics of the Taguchi method to sequence the effects of the variation of decision variables on the system. Based on the level of effect, this study determined the impact factor of decision variables and the optimal solution for the model. The integration of the Taguchi method and the solution optimization method successfully obtained the optimal solution of the optimization problem, while significantly reducing the solution computing time and enhancing the river water quality. The results suggested that the basin with the greatest water quality improvement effectiveness is the Dahan River. Under the optimal strategy of this study, the severe pollution length was reduced from 18 km to 5 km.
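
    A small sketch of the Taguchi main-effects step described above, assuming an L4(2^3) orthogonal array and larger-the-better signal-to-noise ratios; the responses are invented water-quality scores and the factor labels are hypothetical, so this only illustrates how the effects of decision variables can be ranked.

        import numpy as np

        # L4(2^3) orthogonal array: 4 runs, 3 two-level factors (levels coded 0/1)
        L4 = np.array([[0, 0, 0],
                       [0, 1, 1],
                       [1, 0, 1],
                       [1, 1, 0]])

        # illustrative water-quality responses for each run (two replicates), larger is better
        y = np.array([[62.0, 64.0],
                      [71.0, 70.0],
                      [68.0, 66.0],
                      [75.0, 77.0]])

        sn = -10.0 * np.log10(np.mean(1.0 / y**2, axis=1))   # larger-the-better S/N ratio per run

        # rank decision variables by the difference of mean S/N between their two levels
        for j in range(L4.shape[1]):
            effect = abs(sn[L4[:, j] == 1].mean() - sn[L4[:, j] == 0].mean())
            print(f"factor {j + 1}: effect on S/N = {effect:.3f}")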

  3. Optimization of Transformation Coefficients Using Direct Search and Swarm Intelligence

    Directory of Open Access Journals (Sweden)

    Manusov V.Z.

    2017-04-01

    Full Text Available This research considers optimization of the tap positions of transformers in power systems to reduce power losses. At present, methods based on heuristic rules and fuzzy logic, or methods that optimize parts of the whole system separately, are applied to this problem. The first approach requires expert knowledge about processes in the network. The second is unable to consider all the interrelations of the system's parts, even though changes in one segment affect the entire system. Both approaches are difficult to implement and require adjustment to the task being solved. It is therefore advisable to use algorithms that can take into account the complex interrelations of the optimized variables and adapt themselves to the optimization task. Swarm Intelligence algorithms are of this kind. Their main features are self-organization, which allows them to adapt automatically to the conditions of the task, and the ability to escape efficiently from local extrema. Thus, they do not require specialized knowledge of the system, in contrast to fuzzy logic. In addition, they can efficiently find quasi-optimal solutions converging to the global optimum. This research applies the Particle Swarm Optimization (PSO) algorithm. A model of the Tajik power system was used in the experiments. It was found that PSO is much more efficient than greedy heuristics and more flexible and easier to use than fuzzy logic. PSO reduces active power losses from 48.01 to 45.83 MW (4.5%), whereas the effect of using greedy heuristics or fuzzy logic is half as large (2.3%).

  4. RECOVERY ACT - Robust Optimization for Connectivity and Flows in Dynamic Complex Networks

    Energy Technology Data Exchange (ETDEWEB)

    Balasundaram, Balabhaskar [Oklahoma State Univ., Stillwater, OK (United States); Butenko, Sergiy [Texas A & M Univ., College Station, TX (United States); Boginski, Vladimir [Univ. of Florida, Gainesville, FL (United States); Uryasev, Stan [Univ. of Florida, Gainesville, FL (United States)

    2013-12-25

    The goal of this project was to study robust connectivity and flow patterns of complex multi-scale systems modeled as networks. Networks provide effective ways to study global, system level properties, as well as local, multi-scale interactions at a component level. Numerous applications from power systems, telecommunication, transportation, biology, social science, and other areas have benefited from novel network-based models and their analysis. Modeling and optimization techniques that employ appropriate measures of risk for identifying robust clusters and resilient network designs in networks subject to uncertain failures were investigated in this collaborative multi-university project. In many practical situations one has to deal with uncertainties associated with possible failures of network components, thereby affecting the overall efficiency and performance of the system (e.g., every node/connection has a probability of partial or complete failure). Some extreme examples include power grid component failures, airline hub failures due to weather, or freeway closures due to emergencies. These are also situations in which people, materials, or other resources need to be managed efficiently. Important practical examples include rerouting flow through power grids, adjusting flight plans, and identifying routes for emergency services and supplies, in the event network elements fail unexpectedly. Solutions that are robust under uncertainty, in addition to being economically efficient, are needed. This project has led to the development of novel models and methodologies that can tackle the optimization problems arising in such situations. A number of new concepts, which have not been previously applied in this setting, were investigated in the framework of the project. The results can potentially help decision-makers to better control and identify robust or risk-averse decisions in such situations. Formulations and optimal solutions of the considered problems need

  5. Interior point algorithms: guaranteed optimality for fluence map optimization in IMRT

    Energy Technology Data Exchange (ETDEWEB)

    Aleman, Dionne M [Department of Mechanical and Industrial Engineering, University of Toronto, 5 King' s College Road, Toronto, ON M5S 3G8 (Canada); Glaser, Daniel [Division of Optimization and Systems Theory, Department of Mathematics, Royal Institute of Technology, Stockholm (Sweden); Romeijn, H Edwin [Department of Industrial and Operations Engineering, University of Michigan, Ann Arbor, MI 48109-2117 (United States); Dempsey, James F, E-mail: aleman@mie.utoronto.c, E-mail: romeijn@umich.ed, E-mail: jfdempsey@viewray.co [ViewRay, Inc. 2 Thermo Fisher Way, Village of Oakwood, OH 44146 (United States)

    2010-09-21

    One of the most widely studied problems of the intensity-modulated radiation therapy (IMRT) treatment planning problem is the fluence map optimization (FMO) problem, the problem of determining the amount of radiation intensity, or fluence, of each beamlet in each beam. For a given set of beams, the fluences of the beamlets can drastically affect the quality of the treatment plan, and thus it is critical to obtain good fluence maps for radiation delivery. Although several approaches have been shown to yield good solutions to the FMO problem, these solutions are not guaranteed to be optimal. This shortcoming can be attributed to either optimization model complexity or properties of the algorithms used to solve the optimization model. We present a convex FMO formulation and an interior point algorithm that yields an optimal treatment plan in seconds, making it a viable option for clinical applications.
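
    As a toy illustration of why a convex FMO formulation is attractive, the sketch below solves a least-squares fluence problem with nonnegativity bounds, which has a unique global optimum; the dose-influence matrix and prescription are random placeholders, and the paper's actual formulation and interior point solver are not reproduced here.

        import numpy as np
        from scipy.optimize import lsq_linear

        rng = np.random.default_rng(4)
        n_voxels, n_beamlets = 40, 12
        D = rng.uniform(0.0, 1.0, (n_voxels, n_beamlets))   # toy dose-influence matrix
        d_presc = np.full(n_voxels, 60.0)                    # prescribed voxel doses (Gy)

        # convex least-squares FMO surrogate with nonnegative beamlet fluences
        res = lsq_linear(D, d_presc, bounds=(0.0, np.inf))
        fluence = res.x
        print("max deviation from prescription:", float(np.abs(D @ fluence - d_presc).max()))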

  6. Interior point algorithms: guaranteed optimality for fluence map optimization in IMRT

    International Nuclear Information System (INIS)

    Aleman, Dionne M; Glaser, Daniel; Romeijn, H Edwin; Dempsey, James F

    2010-01-01

    One of the most widely studied problems of the intensity-modulated radiation therapy (IMRT) treatment planning problem is the fluence map optimization (FMO) problem, the problem of determining the amount of radiation intensity, or fluence, of each beamlet in each beam. For a given set of beams, the fluences of the beamlets can drastically affect the quality of the treatment plan, and thus it is critical to obtain good fluence maps for radiation delivery. Although several approaches have been shown to yield good solutions to the FMO problem, these solutions are not guaranteed to be optimal. This shortcoming can be attributed to either optimization model complexity or properties of the algorithms used to solve the optimization model. We present a convex FMO formulation and an interior point algorithm that yields an optimal treatment plan in seconds, making it a viable option for clinical applications.

  7. Mercury Removal From Aqueous Solutions With Chitosan-Coated Magnetite Nanoparticles Optimized Using the Box-Behnken Design

    Science.gov (United States)

    Rahbar, Nadereh; Jahangiri, Alireza; Boumi, Shahin; Khodayar, Mohammad Javad

    2014-01-01

    Background: Nowadays, removal of heavy metals from the environment is an important problem due to their toxicity. Objectives: In this study, a modified method was used to synthesize chitosan-coated magnetite nanoparticles (CCMN) to be used as a low cost and nontoxic adsorbent. CCMN was then employed to remove Hg2+ from water solutions. Materials and Methods: To remove the highest percentage of mercury ions, the Box-Behnken model of response surface methodology (RSM) was applied to simultaneously optimize all parameters affecting the adsorption process. Studied parameters of the process were pH (5-8), initial metal concentration (2-8 mg/L), and the amount of damped adsorbent (0.25-0.75 g). A second-order mathematical model was developed using regression analysis of experimental data obtained from 15 batch runs. Results: The optimal conditions predicted by the model were pH = 5, initial concentration of mercury ions = 6.2 mg/L, and the amount of damped adsorbent = 0.67 g. Confirmatory testing was performed and the maximum percentage of Hg2+ removed was found to be 99.91%. Kinetic studies of the adsorption process specified the efficiency of the pseudo second-order kinetic model. The adsorption isotherm was well-fitted to both the Langmuir and Freundlich models. Conclusions: CCMN as an excellent adsorbent could remove the mercury ions from water solutions at low and moderate concentrations, which is the usual amount found in environment. PMID:24872943

  8. Non-extremal instantons and wormholes in string theory

    International Nuclear Information System (INIS)

    Bergshoeff, E.; Collinucci, A.; Gran, U.; Roest, D.; Vandoren, S.

    2005-01-01

    We construct the most general non-extremal spherically symmetric instanton solution of a gravity-dilaton-axion system with SL(2,R) symmetry, for arbitrary euclidean spacetime dimension D≥3. A subclass of these solutions describe completely regular wormhole geometries, whose size is determined by an invariant combination of the SL(2,R) charges. Our results can be applied to four-dimensional effective actions of type II strings compactified on a Calabi-Yau manifold, and in particular to the universal hypermultiplet coupled to gravity. We show that these models contain regular wormhole solutions, supported by regular dilaton and RR scalar fields of the universal hypermultiplet. (Abstract Copyright [2005], Wiley Periodicals, Inc.)

  9. Biosynthesis of gold nanoparticles by the extreme bacterium Deinococcus radiodurans and an evaluation of their antibacterial properties.

    Science.gov (United States)

    Li, Jiulong; Li, Qinghao; Ma, Xiaoqiong; Tian, Bing; Li, Tao; Yu, Jiangliu; Dai, Shang; Weng, Yulan; Hua, Yuejin

    Deinococcus radiodurans is an extreme bacterium known for its high resistance to stresses including radiation and oxidants. The ability of D. radiodurans to reduce Au(III) and biosynthesize gold nanoparticles (AuNPs) was investigated in aqueous solution by ultraviolet and visible (UV/Vis) absorption spectroscopy, electron microscopy, X-ray diffraction (XRD), dynamic light scattering (DLS), Fourier transform infrared spectroscopy (FTIR) and X-ray photoelectron spectroscopy (XPS). D. radiodurans efficiently synthesized AuNPs from 1 mM Au(III) solution in 8 h. The AuNPs were of spherical, triangular and irregular shapes with an average size of 43.75 nm and a polydispersity index of 0.23 as measured by DLS. AuNPs were distributed in the cell envelope, across the cytosol and in the extracellular space. XRD analysis confirmed the crystallite nature of the AuNPs from the cell supernatant. Data from the FTIR and XPS showed that upon binding to proteins or compounds through interactions with carboxyl, amine, phospho and hydroxyl groups, Au(III) may be reduced to Au(I), and further reduced to Au(0) with the capping groups to stabilize the AuNPs. Biosynthesis of AuNPs was optimized with respect to the initial concentration of gold salt, bacterial growth period, solution pH and temperature. The purified AuNPs exhibited significant antibacterial activity against both Gram-negative ( Escherichia coli ) and Gram-positive ( Staphylococcus aureus ) bacteria by damaging their cytoplasmic membrane. Therefore, the extreme bacterium D. radiodurans can be used as a novel bacterial candidate for efficient biosynthesis of AuNPs, which exhibited potential in biomedical application as an antibacterial agent.

  10. A brief introduction to continuous evolutionary optimization

    CERN Document Server

    Kramer, Oliver

    2014-01-01

    Practical optimization problems are often hard to solve, in particular when they are black boxes and no further information about the problem is available except via function evaluations. This work introduces a collection of heuristics and algorithms for black box optimization with evolutionary algorithms in continuous solution spaces. The book gives an introduction to evolution strategies and parameter control. Heuristic extensions are presented that allow optimization in constrained, multimodal, and multi-objective solution spaces. An adaptive penalty function is introduced for constrained optimization. Meta-models reduce the number of fitness and constraint function calls in expensive optimization problems. The hybridization of evolution strategies with local search allows fast optimization in solution spaces with many local optima. A selection operator based on reference lines in objective space is introduced to optimize multiple conflictive objectives. Evolutionary search is employed for learning kernel ...
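
    A minimal example of the kind of evolution strategy the book introduces: a (1+1)-ES with the classical 1/5th success rule for step-size control, applied to a simple sphere function. The adaptation constants and test function are standard textbook choices, not taken from this volume.

        import random

        def one_plus_one_es(f, x0, sigma=1.0, iters=400, window=20, seed=0):
            random.seed(seed)
            x, fx = list(x0), f(x0)
            successes = 0
            for t in range(1, iters + 1):
                y = [xi + sigma * random.gauss(0.0, 1.0) for xi in x]
                fy = f(y)
                if fy <= fx:                      # keep the offspring if it is no worse
                    x, fx = y, fy
                    successes += 1
                if t % window == 0:               # 1/5th success rule for step-size adaptation
                    rate = successes / window
                    sigma *= 1.22 if rate > 0.2 else 0.82
                    successes = 0
            return x, fx

        sphere = lambda v: sum(vi * vi for vi in v)
        best, value = one_plus_one_es(sphere, [3.0, -2.0, 1.5])
        print("best point:", [round(b, 4) for b in best], " value:", round(value, 6))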

  11. Evolution strategies for robust optimization

    NARCIS (Netherlands)

    Kruisselbrink, Johannes Willem

    2012-01-01

    Real-world (black-box) optimization problems often involve various types of uncertainties and noise emerging in different parts of the optimization problem. When this is not accounted for, optimization may fail or may yield solutions that are optimal in the classical strict notion of optimality, but

  12. ERP SOLUTIONS FOR SMEs

    Directory of Open Access Journals (Sweden)

    TUTUNEA MIHAELA FILOFTEIA

    2012-09-01

    Full Text Available The integration of activities and business processes, as well as their optimization, brings the prospect of profitable growth and creates significant competitive advantages in any company. The adoption of integrated ERP software solutions must, from the SMEs’ perspective, be considered a very important medium- and long-term management decision. ERP solutions, along with the transparent and optimized management of all internal processes, also offer an intra- and inter-company collaborative platform, which allows a rapid expansion of activities towards e-business and mobile-business environments. This material introduces ERP solutions for SMEs from the perspective of both commercial offerings and open source; the results of a comparative analysis of the solutions on this specific market can be a useful aid to company management in making the decision to integrate business processes, using ERP as a support.

  13. Medical Dataset Classification: A Machine Learning Paradigm Integrating Particle Swarm Optimization with Extreme Learning Machine Classifier

    Directory of Open Access Journals (Sweden)

    C. V. Subbulakshmi

    2015-01-01

    Full Text Available Medical data classification is a prime data mining problem that has been discussed for a decade and has attracted several researchers around the world. Most classifiers are designed so as to learn from the data itself using a training process, because complete expert knowledge to determine classifier parameters is impracticable. This paper proposes a hybrid methodology based on a machine learning paradigm. This paradigm integrates the successful exploration mechanism called self-regulated learning capability of the particle swarm optimization (PSO) algorithm with the extreme learning machine (ELM) classifier. As a recent off-line learning method, ELM is a single-hidden layer feedforward neural network (FFNN), proved to be an excellent classifier with a large number of hidden layer neurons. In this research, PSO is used to determine the optimum set of parameters for the ELM, thus reducing the number of hidden layer neurons, and it further improves the network generalization performance. The proposed method is evaluated on five benchmark datasets of the UCI Machine Learning Repository for handling medical dataset classification. Simulation results show that the proposed approach is able to achieve good generalization performance, compared to the results of other classifiers.
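
    To clarify what the ELM part of the hybrid does, here is a minimal ELM sketch: the hidden-layer weights are random and fixed, and only the output weights are solved analytically by least squares. The data are synthetic, and the PSO step that would tune the hidden-layer size and input weights is omitted.

        import numpy as np

        def elm_train(X, y, n_hidden=30, seed=0):
            # extreme learning machine: random hidden layer, analytic output weights
            rng = np.random.default_rng(seed)
            W = rng.normal(size=(X.shape[1], n_hidden))
            b = rng.normal(size=n_hidden)
            H = np.tanh(X @ W + b)                         # hidden-layer activations
            beta, *_ = np.linalg.lstsq(H, y, rcond=None)   # output weights via least squares
            return W, b, beta

        def elm_predict(X, W, b, beta):
            return np.tanh(X @ W + b) @ beta

        # toy two-class problem with labels encoded as +/-1
        rng = np.random.default_rng(1)
        X = rng.normal(size=(200, 4))
        y = np.sign(X[:, 0] - 0.5 * X[:, 1] + 0.1 * rng.normal(size=200))
        W, b, beta = elm_train(X, y)
        acc = np.mean(np.sign(elm_predict(X, W, b, beta)) == y)
        print(f"training accuracy: {acc:.2f}")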

  14. Identifying and prioritizing indicators and effective solutions to optimization the use of wood in construction classical furniture by using AHP (Case study of Qom

    Directory of Open Access Journals (Sweden)

    Mohammad Ghofrani

    2017-02-01

    Full Text Available Abstract: The aim of this study was to identify and prioritize the indicators, and to provide effective solutions, for optimizing the use of wood in the construction of classical furniture, using the analytic hierarchy process (case study in Qom). For this purpose, based on the studies and results of other researchers and on interviews with experts, the factors affecting the optimization of wood consumption were divided into 4 main categories and 23 sub-indicators. The importance of the sub-indicators was determined by AHP after obtaining feedback from furniture producers. The results show that, at the main level, design and human resources are of great importance. In addition, among the 23 sub-indicators affecting the optimization of wood use in classical furniture construction, ergonomics, style, skill training and inlay work, with weights of 0.247, 0.181, 0.124 and 0.087 respectively, are of paramount importance, and solutions based on employing a specialist workforce were identified as a priority.
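
    For readers unfamiliar with AHP, the sketch below derives priority weights from a pairwise comparison matrix via its principal eigenvector and checks consistency. The comparison values are hypothetical and do not reproduce the questionnaire data of this study.

        import numpy as np

        # illustrative pairwise comparison matrix for four sub-indicators
        # (e.g., ergonomics, style, skill training, inlay work) on Saaty's 1-9 scale
        A = np.array([[1.0, 2.0, 3.0, 4.0],
                      [1/2, 1.0, 2.0, 3.0],
                      [1/3, 1/2, 1.0, 2.0],
                      [1/4, 1/3, 1/2, 1.0]])

        eigvals, eigvecs = np.linalg.eig(A)
        k = np.argmax(eigvals.real)
        w = np.abs(eigvecs[:, k].real)
        w /= w.sum()                                   # priority vector (weights sum to 1)

        n = A.shape[0]
        ci = (eigvals[k].real - n) / (n - 1)           # consistency index
        cr = ci / 0.90                                 # Saaty's random index for n = 4 is 0.90
        print("weights:", np.round(w, 3), " consistency ratio:", round(cr, 3))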

  15. Technology development of protein rich concentrates for nutrition in extreme conditions using soybean and meat by-products.

    Science.gov (United States)

    Kalenik, Tatiana K; Costa, Rui; Motkina, Elena V; Kosenko, Tamara A; Skripko, Olga V; Kadnikova, Irina A

    2017-01-01

    There is a need to develop new foods for participants of expeditions in extreme conditions, which must be self-sufficient. These foods should be light to carry, with a long shelf life, tasty and with high nutrient density. Currently, protein sources are limited mainly to dried and canned meat. In this work, a protein-rich dried concentrate suitable for extreme expeditions was developed using soya, tomato, milk whey and meat by-products. Protein concentrates were developed using minced beef liver and heart, dehydrated and mixed with a soya protein-lycopene coagulate (SPLC) obtained from a solution prepared with germinated soybeans and mixed with tomato paste in milk whey, and finally dried. The technological parameters of pressing SPLC and of drying the protein concentrate were optimized using response surface methodology. The optimized technological parameters to prepare the protein concentrates were obtained, with 70:30 being the ideal ratio of minced meat to SPLC. The developed protein concentrates are characterized by a high calorific value of 376 kcal/100 g of dry product, with a water content of 98 g·kg⁻¹, and 641-644 g·kg⁻¹ of proteins. The essential amino acid indices are 100, with minimum essential amino acid content constituting 100-128% of the FAO standard, depending on the raw meat used. These concentrates are also rich in micronutrients such as β-carotene and vitamin C. Analysis of the nutrient content showed that these non-perishable concentrates present a high nutritional value and complement other widely available vegetable concentrates to prepare a two-course meal. The soups and porridges prepared with these concentrates can be classified as functional foods, and comply with army requirements applicable to food products for extreme conditions.

  16. A study of the LCA based biofuel supply chain multi-objective optimization model with multi-conversion paths in China

    International Nuclear Information System (INIS)

    Liu, Zhexuan; Qiu, Tong; Chen, Bingzhen

    2014-01-01

    influence of price change on the optimal solutions was investigated. The optimal solutions obtained in this study reveal a tradeoff between the impact of the 3E criteria. These results indicate that our model will be extremely useful for the design and planning of biofuel supply chains in China

  17. Essays and surveys in global optimization

    CERN Document Server

    Audet, Charles; Savard, Giles

    2005-01-01

    Global optimization aims at solving the most general problems of deterministic mathematical programming. In addition, once the solutions are found, this methodology is also expected to prove their optimality. With these difficulties in mind, global optimization is becoming an increasingly powerful and important methodology. This book is the most recent examination of its mathematical capability, power, and wide ranging solutions to many fields in the applied sciences.

  18. Energy Optimal Path Planning: Integrating Coastal Ocean Modelling with Optimal Control

    Science.gov (United States)

    Subramani, D. N.; Haley, P. J., Jr.; Lermusiaux, P. F. J.

    2016-02-01

    A stochastic optimization methodology is formulated for computing energy-optimal paths from among time-optimal paths of autonomous vehicles navigating in a dynamic flow field. To set up the energy optimization, the relative vehicle speed and headings are considered to be stochastic, and new stochastic Dynamically Orthogonal (DO) level-set equations that govern their stochastic time-optimal reachability fronts are derived. Their solution provides the distribution of time-optimal reachability fronts and corresponding distribution of time-optimal paths. An optimization is then performed on the vehicle's energy-time joint distribution to select the energy-optimal paths for each arrival time, among all stochastic time-optimal paths for that arrival time. The accuracy and efficiency of the DO level-set equations for solving the governing stochastic level-set reachability fronts are quantitatively assessed, including comparisons with independent semi-analytical solutions. Energy-optimal missions are studied in wind-driven barotropic quasi-geostrophic double-gyre circulations, and in realistic data-assimilative re-analyses of multiscale coastal ocean flows. The latter re-analyses are obtained from multi-resolution 2-way nested primitive-equation simulations of tidal-to-mesoscale dynamics in the Middle Atlantic Bight and Shelbreak Front region. The effects of tidal currents, strong wind events, coastal jets, and shelfbreak fronts on the energy-optimal paths are illustrated and quantified. Results showcase the opportunities for longer-duration missions that intelligently utilize the ocean environment to save energy, rigorously integrating ocean forecasting with optimal control of autonomous vehicles.

  19. Solution of optimal power flow using evolutionary-based algorithms

    African Journals Online (AJOL)

    It aims to estimate the optimal settings of real generator output power, bus voltage, ...

  20. Discrete-time optimal control and games on large intervals

    CERN Document Server

    Zaslavski, Alexander J

    2017-01-01

    Devoted to the structure of approximate solutions of discrete-time optimal control problems and approximate solutions of dynamic discrete-time two-player zero-sum games, this book presents results on properties of approximate solutions that are independent of the length of the interval, for all sufficiently large intervals. Results concerning the so-called turnpike property of optimal control problems and zero-sum games in the regions close to the endpoints of the time intervals are the main focus of this book. The description of the structure of approximate solutions on sufficiently large intervals and its stability will interest graduate students and mathematicians in optimal control and game theory, engineering, and economics. This book begins with a brief overview and moves on to analyze the structure of approximate solutions of autonomous nonconcave discrete-time optimal control Lagrange problems. Next the structures of approximate solutions of autonomous discrete-time optimal control problems that are discret...

  1. A novel approach for optimal chiller loading using particle swarm optimization

    Energy Technology Data Exchange (ETDEWEB)

    Ardakani, A. Jahanbani; Ardakani, F. Fattahi; Hosseinian, S.H. [Department of Electrical Engineering, Amirkabir University of Technology (Tehran Polytechnic), Hafez Avenue, Tehran 15875-4413 (Iran, Islamic Republic of)

    2008-07-01

    This study employs two new methods to solve the optimal chiller loading (OCL) problem: continuous genetic algorithm (GA) and particle swarm optimization (PSO). Because of the continuous nature of the variables in the OCL problem, continuous GA and PSO easily overcome deficiencies of other conventional optimization methods. The partial load ratio (PLR) of the chiller is chosen as the variable to be optimized, and the power consumption of the chiller is taken as the fitness function. Both methods find the optimal solution while exactly satisfying the equality constraint. Major advantages of the proposed approaches over conventional methods include fast convergence, the ability to escape local optima, simple implementation, and independence of the solution from the problem. The abilities of the proposed methods are examined on an example system, and the results are compared with a binary genetic algorithm. The proposed approaches can readily be applied to air-conditioning systems. (author)
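
    The sketch below illustrates the kind of PSO loop the abstract describes, applied to a toy optimal chiller loading problem. The quadratic power curves, chiller capacities and the penalty used to impose the load-balance equality constraint are illustrative assumptions, not the authors' model (which satisfies the constraint exactly).

```python
# Minimal PSO sketch for an optimal chiller loading (OCL)-style problem.
# The quadratic power curves and the penalty handling of the load-balance
# constraint are illustrative assumptions, not the paper's model.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical chiller power curves: P_i(PLR) = a + b*PLR + c*PLR^2 (kW), PLR in [0, 1].
coeffs = np.array([[100.0, 150.0, 60.0],
                   [ 90.0, 170.0, 50.0],
                   [110.0, 140.0, 70.0]])
capacity = np.array([500.0, 450.0, 550.0])   # kW of cooling per chiller at PLR = 1
demand = 1000.0                              # required cooling load (kW)

def total_power(plr):
    """Total electric power plus a penalty for violating the cooling-load balance."""
    power = np.sum(coeffs[:, 0] + coeffs[:, 1] * plr + coeffs[:, 2] * plr**2)
    violation = abs(np.dot(capacity, plr) - demand)
    return power + 1e4 * violation           # penalty enforces the equality constraint

n_particles, n_dim, n_iter = 30, 3, 200
x = rng.uniform(0.0, 1.0, (n_particles, n_dim))       # positions = PLR vectors
v = np.zeros_like(x)
pbest, pbest_f = x.copy(), np.array([total_power(p) for p in x])
gbest = pbest[np.argmin(pbest_f)].copy()

for _ in range(n_iter):
    r1, r2 = rng.random((2, n_particles, n_dim))
    v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (gbest - x)
    x = np.clip(x + v, 0.0, 1.0)
    f = np.array([total_power(p) for p in x])
    improved = f < pbest_f
    pbest[improved], pbest_f[improved] = x[improved], f[improved]
    gbest = pbest[np.argmin(pbest_f)].copy()

print("best PLRs:", np.round(gbest, 3), "power (kW):", round(total_power(gbest), 1))
```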

  2. Extreme commutative quantum observables are sharp

    International Nuclear Information System (INIS)

    Heinosaari, Teiko; Pellonpaeae, Juha-Pekka

    2011-01-01

    It is well known that, in the description of quantum observables, positive operator valued measures (POVMs) generalize projection valued measures (PVMs) and they also turn out to be more optimal in many tasks. We show that a commutative POVM is an extreme point in the convex set of all POVMs if and only if it is a PVM. This result implies that non-commutativity is a necessary ingredient to overcome the limitations of PVMs.

  3. Applying the Taguchi Method to River Water Pollution Remediation Strategy Optimization

    Directory of Open Access Journals (Sweden)

    Tsung-Ming Yang

    2014-04-01

    Optimization methods usually obtain the travel direction of the solution by substituting the solutions into the objective function. However, if the solution space is too large, this search method may be time consuming. In order to address this problem, this study incorporated the Taguchi method into the solution space search process of the optimization method, and used the characteristics of the Taguchi method to sequence the effects of the variation of decision variables on the system. Based on the level of effect, this study determined the impact factor of decision variables and the optimal solution for the model. The integration of the Taguchi method and the solution optimization method successfully obtained the optimal solution of the optimization problem, while significantly reducing the solution computing time and enhancing the river water quality. The results suggested that the basin with the greatest water quality improvement effectiveness is the Dahan River. Under the optimal strategy of this study, the severe pollution length was reduced from 18 km to 5 km.

  4. A hybrid bird mating optimizer algorithm with teaching-learning-based optimization for global numerical optimization

    Directory of Open Access Journals (Sweden)

    Qingyang Zhang

    2015-02-01

    Bird Mating Optimizer (BMO) is a novel meta-heuristic optimization algorithm inspired by the intelligent mating behavior of birds. However, it is still insufficient in convergence speed and solution quality. To overcome these drawbacks, this paper proposes a hybrid algorithm (TLBMO), which is established by combining the advantages of Teaching-learning-based optimization (TLBO) and Bird Mating Optimizer (BMO). The performance of TLBMO is evaluated on 23 benchmark functions and compared with seven state-of-the-art approaches, namely BMO, TLBO, Artificial Bee Colony (ABC), Particle Swarm Optimization (PSO), Fast Evolution Programming (FEP), Differential Evolution (DE), and Group Search Optimization (GSO). Experimental results indicate that the proposed method performs better than the other existing algorithms for global numerical optimization.
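
    As a point of reference for the TLBO half of the hybrid, the following minimal sketch implements the standard TLBO teacher and learner phases on a toy sphere function. The population size, bounds and test function are arbitrary choices, and the BMO-specific operators of TLBMO are not shown.

```python
# Minimal teaching-learning-based optimization (TLBO) sketch on a sphere function.
import numpy as np

rng = np.random.default_rng(1)

def sphere(x):
    return float(np.sum(x**2))

pop_size, dim, iters = 20, 5, 100
lb, ub = -5.0, 5.0
pop = rng.uniform(lb, ub, (pop_size, dim))
fit = np.array([sphere(p) for p in pop])

for _ in range(iters):
    # Teacher phase: move the population towards the best solution, away from the mean.
    teacher = pop[np.argmin(fit)]
    mean = pop.mean(axis=0)
    tf = rng.integers(1, 3, pop_size)[:, None]          # teaching factor in {1, 2}
    new = np.clip(pop + rng.random((pop_size, dim)) * (teacher - tf * mean), lb, ub)
    new_fit = np.array([sphere(p) for p in new])
    improved = new_fit < fit
    pop[improved], fit[improved] = new[improved], new_fit[improved]

    # Learner phase: each learner moves towards a better random peer (or away from a worse one).
    for i in range(pop_size):
        j = int(rng.integers(pop_size))
        if j == i:
            continue
        direction = pop[j] - pop[i] if fit[j] < fit[i] else pop[i] - pop[j]
        candidate = np.clip(pop[i] + rng.random(dim) * direction, lb, ub)
        cand_fit = sphere(candidate)
        if cand_fit < fit[i]:
            pop[i], fit[i] = candidate, cand_fit

print("best value:", round(fit.min(), 6))
```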

  5. On the entropy of four-dimensional near-extremal N = 2 black holes with R2-terms

    International Nuclear Information System (INIS)

    Gruss, Eyal; Oz, Yaron

    2007-01-01

    We consider the entropy of four-dimensional near-extremal N = 2 black holes. The Bekenstein-Hawking entropy formula has the structure of the extremal black holes entropy with a shift of the charges depending on the non-extremality parameter and the moduli at infinity. We construct a class of near-extremal horizon solutions with R2-terms, and show that the generalized Wald entropy formula exhibits the same property

  6. Determination of optimal whole body vibration amplitude and frequency parameters with plyometric exercise and its influence on closed-chain lower extremity acute power output and EMG activity in resistance trained males

    Science.gov (United States)

    Hughes, Nikki J.

    The optimal combination of Whole body vibration (WBV) amplitude and frequency has not been established. Purpose. To determine the optimal combination of WBV amplitude and frequency that will enhance acute mean and peak power (MP and PP) output and EMG activity in the lower extremity muscles. Methods. Resistance trained males (n = 13) completed the following testing sessions: On day 1, power spectrum testing of bilateral leg press (BLP) movement was performed on the OMNI. Days 2 and 3 consisted of WBV testing with either average (5.8 mm) or high (9.8 mm) amplitude combined with either 0 (sham control), 10, 20, 30, 40 and 50 Hz frequency. Bipolar surface electrodes were placed on the rectus femoris (RF), vastus lateralis (VL), biceps femoris (BF) and gastrocnemius (GA) muscles for EMG analysis. MP and PP output and EMG activity of the lower extremity were assessed pre-, post-WBV treatments and after sham-controls on the OMNI while participants performed one set of five repetitions of BLP at the optimal resistance determined on Day 1. Results. No significant differences were found between pre- and sham-control on MP and PP output and on EMG activity in RF, VL, BF and GA. Completely randomized one-way ANOVA with repeated measures demonstrated no significant interaction of WBV amplitude and frequency on MP and PP output and peak and mean EMGrms amplitude and EMGrms area under the curve. RF and VL EMGrms area under the curve significantly decreased (p < 0.05). Overall, WBV combined with plyometric exercise does not induce alterations in subsequent MP and PP output and EMGrms activity of the lower extremity. Future studies need to address the time of WBV exposure and magnitude of external loads that will maximize strength and/or power output.

  7. Guided color consistency optimization for image mosaicking

    Science.gov (United States)

    Xie, Renping; Xia, Menghan; Yao, Jian; Li, Li

    2018-01-01

    This paper studies the problem of color consistency correction for sequential images with diverse color characteristics. Existing algorithms try to adjust all images to minimize color differences among images under a unified energy framework; however, the results are prone to presenting a consistent but unnatural appearance when the color difference between images is large and diverse. In our approach, this problem is addressed effectively by providing a guided initial solution for the global consistency optimization, which avoids converging to a meaningless integrated solution. First, to obtain reliable intensity correspondences in overlapping regions between image pairs, we propose the histogram extreme point matching algorithm, which is robust to image geometric misalignment to some extent. In the absence of extra reference information, the guided initial solution is learned from the major tone of the original images by searching some image subset as the reference, whose color characteristics will be transferred to the others via the paths of graph analysis. Thus, the final results via global adjustment will take on a consistent color similar to the appearance of the reference image subset. Several groups of convincing experiments on both the synthetic dataset and the challenging real ones sufficiently demonstrate that the proposed approach can achieve as good or even better results compared with the state-of-the-art approaches.

  8. Pointwise second-order necessary optimality conditions and second-order sensitivity relations in optimal control

    Science.gov (United States)

    Frankowska, Hélène; Hoehener, Daniel

    2017-06-01

    This paper is devoted to pointwise second-order necessary optimality conditions for the Mayer problem arising in optimal control theory. We first show that with every optimal trajectory it is possible to associate a solution p(·) of the adjoint system (as in the Pontryagin maximum principle) and a matrix solution W(·) of an adjoint matrix differential equation that satisfy a second-order transversality condition and a second-order maximality condition. These conditions seem to be a natural second-order extension of the maximum principle. We then prove a Jacobson-like necessary optimality condition for general control systems and measurable optimal controls that may be only "partially singular" and may take values on the boundary of control constraints. Finally we investigate the second-order sensitivity relations along optimal trajectories involving both p(·) and W(·).

  9. Optimal design of an in-situ bioremediation system using support vector machine and particle swarm optimization

    Science.gov (United States)

    ch, Sudheer; Kumar, Deepak; Prasad, Ram Kailash; Mathur, Shashi

    2013-08-01

    A methodology based on support vector machine and particle swarm optimization techniques (SVM-PSO) was used in this study to determine an optimal pumping rate and well location to achieve an optimal cost of an in-situ bioremediation system. In the first stage of the two stage methodology suggested for optimal in-situ bioremediation design, the optimal number of wells and their locations was determined from preselected candidate well locations. The pumping rate and well location in the first stage were subsequently optimized in the second stage of the methodology. The highly nonlinear system of equations governing in-situ bioremediation comprises the equations of flow and solute transport coupled with relevant biodegradation kinetics. A finite difference model was developed to simulate the process of in-situ bioremediation using an Alternate-Direction Implicit technique. This developed model (BIOFDM) yields the spatial and temporal distribution of contaminant concentration for predefined initial and boundary conditions. BIOFDM was later validated by comparing the simulated results with those obtained using BIOPLUME III for the case study of Shieh and Peralta (2005). The results were found to be in close agreement. Moreover, since the solution of the highly nonlinear equation otherwise requires significant computational effort, the computational burden in this study was managed within a practical time frame by replacing the BIOFDM model with a trained SVM model. Support Vector Machine which generates fast solutions in real time was considered to be a universal function approximator in the study. Apart from reducing the computational burden, this technique generates a set of near optimal solutions (instead of a single optimal solution) and creates a re-usable data base that could be used to address many other management problems. Besides this, the search for an optimal pumping pattern was directed by a simple PSO technique and a penalty parameter approach was adopted

  10. Optimal design of an in-situ bioremediation system using support vector machine and particle swarm optimization.

    Science.gov (United States)

    ch, Sudheer; Kumar, Deepak; Prasad, Ram Kailash; Mathur, Shashi

    2013-08-01

    A methodology based on support vector machine and particle swarm optimization techniques (SVM-PSO) was used in this study to determine an optimal pumping rate and well location to achieve an optimal cost of an in-situ bioremediation system. In the first stage of the two stage methodology suggested for optimal in-situ bioremediation design, the optimal number of wells and their locations was determined from preselected candidate well locations. The pumping rate and well location in the first stage were subsequently optimized in the second stage of the methodology. The highly nonlinear system of equations governing in-situ bioremediation comprises the equations of flow and solute transport coupled with relevant biodegradation kinetics. A finite difference model was developed to simulate the process of in-situ bioremediation using an Alternate-Direction Implicit technique. This developed model (BIOFDM) yields the spatial and temporal distribution of contaminant concentration for predefined initial and boundary conditions. BIOFDM was later validated by comparing the simulated results with those obtained using BIOPLUME III for the case study of Shieh and Peralta (2005). The results were found to be in close agreement. Moreover, since the solution of the highly nonlinear equation otherwise requires significant computational effort, the computational burden in this study was managed within a practical time frame by replacing the BIOFDM model with a trained SVM model. Support Vector Machine which generates fast solutions in real time was considered to be a universal function approximator in the study. Apart from reducing the computational burden, this technique generates a set of near optimal solutions (instead of a single optimal solution) and creates a re-usable data base that could be used to address many other management problems. Besides this, the search for an optimal pumping pattern was directed by a simple PSO technique and a penalty parameter approach was adopted
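
    A minimal sketch of the surrogate-assisted idea described above: a support vector regressor is trained on a limited number of runs of an expensive simulator and then searched with a simple PSO. The stand-in expensive_simulator function and all parameter values are assumptions for illustration; in the study the simulator would be the BIOFDM model.

```python
# Surrogate-assisted optimization sketch: SVR surrogate + simple PSO search.
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(2)

def expensive_simulator(x):
    # Cheap stand-in for a costly groundwater/biodegradation simulation (2 design variables).
    return (x[0] - 0.3) ** 2 + (x[1] - 0.7) ** 2 + 0.1 * np.sin(8 * x[0])

# 1) Build a training set from a limited number of simulator runs.
X_train = rng.uniform(0.0, 1.0, (200, 2))
y_train = np.array([expensive_simulator(x) for x in X_train])
surrogate = SVR(kernel="rbf", C=100.0, gamma=10.0).fit(X_train, y_train)

# 2) Optimize the cheap surrogate with PSO instead of the simulator itself.
def cost(p):
    return float(surrogate.predict(p.reshape(1, -1))[0])

n_particles, n_iter = 25, 150
pos = rng.uniform(0.0, 1.0, (n_particles, 2))
vel = np.zeros_like(pos)
pbest, pbest_f = pos.copy(), np.array([cost(p) for p in pos])
gbest = pbest[np.argmin(pbest_f)].copy()

for _ in range(n_iter):
    r1, r2 = rng.random((2, n_particles, 2))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, 0.0, 1.0)
    f = np.array([cost(p) for p in pos])
    better = f < pbest_f
    pbest[better], pbest_f[better] = pos[better], f[better]
    gbest = pbest[np.argmin(pbest_f)].copy()

print("surrogate optimum:", np.round(gbest, 3),
      "true cost:", round(expensive_simulator(gbest), 4))
```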

  11. A new Class of Extremal Composites

    DEFF Research Database (Denmark)

    Sigmund, Ole

    2000-01-01

    The paper presents a new class of two-phase isotropic composites with extremal bulk modulus. The new class consists of micro geometries for which exact solutions can be proven and their bulk moduli are shown to coincide with the Hashin-Shtrikman bounds. The results hold for two and three dimensions and for both well- and non-well-ordered isotropic constituent phases. The new class of composites constitutes an alternative to the three previously known extremal composite classes: finite rank laminates, composite sphere assemblages and Vigdergauz microstructures. An isotropic honeycomb-like hexagonal microstructure belonging to the new class of composites has maximum bulk modulus and lower shear modulus than any previously known composite. Inspiration for the new composite class comes from a numerical topology design procedure which solves the inverse homogenization problem of distributing two isotropic ...

  12. Improving multisensor estimation of heavy-to-extreme precipitation via conditional bias-penalized optimal estimation

    Science.gov (United States)

    Kim, Beomgeun; Seo, Dong-Jun; Noh, Seong Jin; Prat, Olivier P.; Nelson, Brian R.

    2018-01-01

    A new technique for merging radar precipitation estimates and rain gauge data is developed and evaluated to improve multisensor quantitative precipitation estimation (QPE), in particular, of heavy-to-extreme precipitation. Unlike the conventional cokriging methods which are susceptible to conditional bias (CB), the proposed technique, referred to herein as conditional bias-penalized cokriging (CBPCK), explicitly minimizes Type-II CB for improved quantitative estimation of heavy-to-extreme precipitation. CBPCK is a bivariate version of extended conditional bias-penalized kriging (ECBPK) developed for gauge-only analysis. To evaluate CBPCK, cross validation and visual examination are carried out using multi-year hourly radar and gauge data in the North Central Texas region in which CBPCK is compared with the variant of the ordinary cokriging (OCK) algorithm used operationally in the National Weather Service Multisensor Precipitation Estimator. The results show that CBPCK significantly reduces Type-II CB for estimation of heavy-to-extreme precipitation, and that the margin of improvement over OCK is larger in areas of higher fractional coverage (FC) of precipitation. When FC > 0.9 and hourly gauge precipitation is > 60 mm, the reduction in root mean squared error (RMSE) by CBPCK over radar-only (RO) is about 12 mm while the reduction in RMSE by OCK over RO is about 7 mm. CBPCK may be used in real-time analysis or in reanalysis of multisensor precipitation for which accurate estimation of heavy-to-extreme precipitation is of particular importance.

  13. Optimization of rotational radiotherapy treatment planning

    International Nuclear Information System (INIS)

    Tulovsky, Vladimir; Ringor, Michael; Papiez, Lech

    1995-01-01

    Purpose: Rotational therapy treatment planning for rotationally symmetric geometry of tumor and healthy tissue provides an important example of testing various approaches to optimizing dose distributions for therapeutic x-ray irradiations. In this article, dose distribution optimization is formulated as a variational problem. This problem is solved analytically and numerically. Methods and Materials: The classical Lagrange method is used to derive equations and inequalities that give necessary conditions for minimizing the mean-square deviation between the ideal dose distribution and the achievable dose distribution. The solution of the resulting integral equation with Cauchy kernel is used to derive analytical formulas for the minimizing irradiation intensity function. Results: The solutions are evaluated numerically and the graphs of the minimizing intensity functions and the corresponding dose distributions are presented. Conclusions: The optimal solutions obtained using the mean-square criterion lead to significant underdosage in some areas of the tumor volume. Possible solutions to this shortcoming are investigated and medically more appropriate criteria for optimization are proposed for future investigations

  14. Extreme groundwater levels caused by extreme weather conditions - the highest ever measured groundwater levels in Middle Germany and their management

    Science.gov (United States)

    Reinstorf, F.; Kramer, S.; Koch, T.; Pfützner, B.

    2017-12-01

    Extreme weather conditions during the years 2009 - 2011 in combination with changes in the regional water management led to maximum groundwater levels in large areas of Germany in 2011. This resulted in extensive water logging, with problems especially in urban areas near rivers, where water logging produced huge problems for buildings and infrastructure. The acute situation still exists in many areas and requires the development of solution concepts. Taking the example of the Elbe-Saale-Region in the Federal State of Saxony-Anhalt, where a pilot research project was carried out, the analytical situation, the development of a management tool and the implementation of a groundwater management concept are shown. The central tool is a coupled water budget - groundwater flow model. In combination with sophisticated multi-scale parameter estimation, a high-resolution groundwater level simulation was carried out. A decision support process with intensive stakeholder interaction combined with high-resolution simulations enables the development of a management concept for extreme groundwater situations in consideration of sustainable and environmentally sound solutions, mainly on the basis of passive measures.

  15. Optimal control with aerospace applications

    CERN Document Server

    Longuski, James M; Prussing, John E

    2014-01-01

    Want to know not just what makes rockets go up but how to do it optimally? Optimal control theory has become such an important field in aerospace engineering that no graduate student or practicing engineer can afford to be without a working knowledge of it. This is the first book that begins from scratch to teach the reader the basic principles of the calculus of variations, develop the necessary conditions step-by-step, and introduce the elementary computational techniques of optimal control. This book, with problems and an online solution manual, provides the graduate-level reader with enough introductory knowledge so that he or she can not only read the literature and study the next level textbook but can also apply the theory to find optimal solutions in practice. No more is needed than the usual background of an undergraduate engineering, science, or mathematics program: namely calculus, differential equations, and numerical integration. Although finding optimal solutions for these problems is a...

  16. Optimization modeling with spreadsheets

    CERN Document Server

    Baker, Kenneth R

    2015-01-01

    An accessible introduction to optimization analysis using spreadsheets Updated and revised, Optimization Modeling with Spreadsheets, Third Edition emphasizes model building skills in optimization analysis. By emphasizing both spreadsheet modeling and optimization tools in the freely available Microsoft® Office Excel® Solver, the book illustrates how to find solutions to real-world optimization problems without needing additional specialized software. The Third Edition includes many practical applications of optimization models as well as a systematic framework that il

  17. Using Central Composite Experimental Design to Optimize the Degradation of Tylosin from Aqueous Solution by Photo-Fenton Reaction

    Directory of Open Access Journals (Sweden)

    Abd Elaziz Sarrai

    2016-05-01

    The feasibility of the application of the Photo-Fenton process in the treatment of aqueous solution contaminated by Tylosin antibiotic was evaluated. The Response Surface Methodology (RSM) based on Central Composite Design (CCD) was used to evaluate and optimize the effect of hydrogen peroxide, ferrous ion concentration and initial pH as independent variables on the total organic carbon (TOC) removal as the response function. The interaction effects and optimal parameters were obtained by using MODDE software. The significance of the independent variables and their interactions was tested by means of analysis of variance (ANOVA) with a 95% confidence level. Results show that the concentration of the ferrous ion and pH were the main parameters affecting TOC removal, while peroxide concentration had a slight effect on the reaction. The optimum operating conditions to achieve maximum TOC removal were determined. The model prediction for maximum TOC removal was compared to the experimental result at optimal operating conditions. A good agreement between the model prediction and experimental results confirms the soundness of the developed model.
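
    For readers unfamiliar with the design used above, the following sketch constructs a generic three-factor central composite design in coded units. The factor count matches the three variables mentioned (hydrogen peroxide, ferrous ion, pH), but the rotatable axial distance and the number of center points are illustrative choices, not the settings used in the paper.

```python
# Minimal construction of a three-factor central composite design (CCD) in coded -1/+1 units.
import itertools
import numpy as np

k = 3                                   # coded factors: H2O2, Fe(II), initial pH (assumed)
alpha = (2 ** k) ** 0.25                # rotatable axial distance for a full factorial core

factorial = np.array(list(itertools.product([-1.0, 1.0], repeat=k)))   # 2^k corner runs
axial = np.vstack([sign * alpha * np.eye(k)[i]
                   for i in range(k) for sign in (-1.0, 1.0)])          # 2k axial runs
center = np.zeros((4, k))               # replicated center points estimate pure error

design = np.vstack([factorial, axial, center])
print(design.shape)                     # (2^3 + 2*3 + 4) = 18 runs
print(np.round(design, 3))
```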

  18. Optimal adaptation to extreme rainfalls in current and future climate

    Science.gov (United States)

    Rosbjerg, Dan

    2017-01-01

    More intense and frequent rainfalls have increased the number of urban flooding events in recent years, prompting adaptation efforts. Economic optimization is considered an efficient tool to decide on the design level for adaptation. The costs associated with a flooding to the T-year level and the annual capital and operational costs of adapting to this level are described with log-linear relations. The total flooding costs are developed as the expected annual damage of flooding above the T-year level plus the annual capital and operational costs for ensuring no flooding below the T-year level. The value of the return period T that corresponds to the minimum of the sum of these costs will then be the optimal adaptation level. The change in climate, however, is expected to continue in the next century, which calls for expansion of the above model. The change can be expressed in terms of a climate factor (the ratio between the future and the current design level) which is assumed to increase in time. This implies increasing costs of flooding in the future for many places in the world. The optimal adaptation level is found for immediate as well as for delayed adaptation. In these cases, the optimum is determined by considering the net present value of the incurred costs during a sufficiently long time-span. Immediate as well as delayed adaptation is considered.
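
    A toy version of the economic optimization described above, assuming made-up log-linear cost relations and a simple climate factor; it finds the design return period T that minimizes expected annual damage plus annualized adaptation cost.

```python
# Toy economic optimization of an adaptation (design) level T; all coefficients are assumed.
import numpy as np

T = np.linspace(1.5, 500.0, 2000)             # candidate design return periods (years)

def expected_damage(T, climate_factor=1.0):
    # Expected annual damage from events exceeding the T-year level (assumed log-linear relation).
    return climate_factor * 50.0 * T ** -0.6  # monetary units per year

def adaptation_cost(T):
    # Annualized capital + operational cost of protecting up to the T-year level (assumed).
    return 2.0 + 1.2 * np.log(T)

for cf in (1.0, 1.3):                         # current climate vs. a wetter future climate
    total = expected_damage(T, cf) + adaptation_cost(T)
    T_opt = T[np.argmin(total)]
    print(f"climate factor {cf}: optimal design level ~ {T_opt:.0f}-year event, "
          f"annual cost {total.min():.2f}")
```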

  19. A Collaborative Neurodynamic Approach to Multiple-Objective Distributed Optimization.

    Science.gov (United States)

    Yang, Shaofu; Liu, Qingshan; Wang, Jun

    2018-04-01

    This paper is concerned with multiple-objective distributed optimization. Based on objective weighting and decision space decomposition, a collaborative neurodynamic approach to multiobjective distributed optimization is presented. In the approach, a system of collaborative neural networks is developed to search for Pareto optimal solutions, where each neural network is associated with one objective function and given constraints. Sufficient conditions are derived for ascertaining the convergence to a Pareto optimal solution of the collaborative neurodynamic system. In addition, it is proved that each connected subsystem can generate a Pareto optimal solution when the communication topology is disconnected. Then, a switching-topology-based method is proposed to compute multiple Pareto optimal solutions for discretized approximation of Pareto front. Finally, simulation results are discussed to substantiate the performance of the collaborative neurodynamic approach. A portfolio selection application is also given.

  20. Hybrid solution and pump-storage optimization in water supply system efficiency: A case study

    International Nuclear Information System (INIS)

    Vieira, F.; Ramos, H.M.

    2008-01-01

    Environmental targets and energy saving have become some of the world's main concerns in recent years, and they will become even more important in the near future. The world population growth rate is the major factor contributing to the increase in global pollution and in energy and water consumption. In 2005, the world population was approximately 6.5 billion and this number is expected to reach 9 billion by 2050 [United Nations, 2008. (www.un.org), accessed on July]. Water supply systems use energy for pumping water, so new strategies must be developed and implemented in order to reduce this consumption. In addition, if there is an excess of hydraulic energy in a water system, some type of water power generation can be implemented. This paper presents an optimization model that determines the best hourly operation for 1 day, according to the electricity tariff, for a pumped-storage system with water consumption and inlet discharge. Wind turbines are introduced in the system. The rules obtained as output of the optimization process are subsequently introduced in a hydraulic simulator in order to verify the system behaviour. A comparison with the normal water supply operating mode is made, and the energy cost savings with this hybrid solution are calculated.

  1. Normalization in Unsupervised Segmentation Parameter Optimization: A Solution Based on Local Regression Trend Analysis

    Directory of Open Access Journals (Sweden)

    Stefanos Georganos

    2018-02-01

    In object-based image analysis (OBIA), the appropriate parametrization of segmentation algorithms is crucial for obtaining satisfactory image classification results. One of the ways this can be done is by unsupervised segmentation parameter optimization (USPO). A popular USPO method does this through the optimization of a “global score” (GS), which minimizes intrasegment heterogeneity and maximizes intersegment heterogeneity. However, the calculated GS values are sensitive to the minimum and maximum ranges of the candidate segmentations. Previous research proposed the use of fixed minimum/maximum threshold values for the intrasegment/intersegment heterogeneity measures to deal with the sensitivity of user-defined ranges, but the performance of this approach has not been investigated in detail. In the context of a remote sensing very-high-resolution urban application, we show the limitations of the fixed threshold approach, both in a theoretical and applied manner, and instead propose a novel solution to identify the range of candidate segmentations using local regression trend analysis. We found that the proposed approach showed significant improvements over the use of fixed minimum/maximum values, is less subjective than user-defined threshold values and, thus, can be of merit for a fully automated procedure and big data applications.

  2. Direct aperture optimization: A turnkey solution for step-and-shoot IMRT

    International Nuclear Information System (INIS)

    Shepard, D.M.; Earl, M.A.; Li, X.A.; Naqvi, S.; Yu, C.

    2002-01-01

    IMRT treatment plans for step-and-shoot delivery have traditionally been produced through the optimization of intensity distributions (or maps) for each beam angle. The optimization step is followed by the application of a leaf-sequencing algorithm that translates each intensity map into a set of deliverable aperture shapes. In this article, we introduce an automated planning system in which we bypass the traditional intensity optimization, and instead directly optimize the shapes and the weights of the apertures. We call this approach 'direct aperture optimization'. This technique allows the user to specify the maximum number of apertures per beam direction, and hence provides significant control over the complexity of the treatment delivery. This is possible because the machine dependent delivery constraints imposed by the MLC are enforced within the aperture optimization algorithm rather than in a separate leaf-sequencing step. The leaf settings and the aperture intensities are optimized simultaneously using a simulated annealing algorithm. We have tested direct aperture optimization on a variety of patient cases using the EGS4/BEAM Monte Carlo package for our dose calculation engine. The results demonstrate that direct aperture optimization can produce highly conformal step-and-shoot treatment plans using only three to five apertures per beam direction. As compared with traditional optimization strategies, our studies demonstrate that direct aperture optimization can result in a significant reduction in both the number of beam segments and the number of monitor units. Direct aperture optimization therefore produces highly efficient treatment deliveries that maintain the full dosimetric benefits of IMRT
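
    The following simplified sketch conveys the flavor of the simulated annealing step, optimizing only the aperture weights against a toy dose-influence matrix; the paper's method additionally perturbs the MLC leaf positions (aperture shapes) and uses Monte Carlo dose calculation. All numbers here are illustrative.

```python
# Simulated annealing over aperture weights with a toy dose model: dose = D @ w.
import numpy as np

rng = np.random.default_rng(3)

n_vox, n_apertures = 50, 5
D = rng.uniform(0.0, 1.0, (n_vox, n_apertures))     # toy dose-influence matrix
prescription = np.full(n_vox, 2.0)                  # desired dose per voxel (Gy)

def objective(w):
    return float(np.sum((D @ w - prescription) ** 2))

w = np.full(n_apertures, 0.5)                       # initial aperture weights
best_w, best_f = w.copy(), objective(w)
temperature = 1.0
for step in range(5000):
    trial = np.clip(w + rng.normal(0.0, 0.05, n_apertures), 0.0, None)  # weights stay >= 0
    delta = objective(trial) - objective(w)
    if delta < 0 or rng.random() < np.exp(-delta / temperature):
        w = trial
        if objective(w) < best_f:
            best_w, best_f = w.copy(), objective(w)
    temperature *= 0.999                            # geometric cooling schedule

print("best weights:", np.round(best_w, 3), "objective:", round(best_f, 4))
```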

  3. DIRECTIONS OF EXTREME TOURISM IN UKRAINE

    Directory of Open Access Journals (Sweden)

    L. V. Martseniuk

    2016-02-01

    preservation of the property. Originality. The author presents a theoretical generalization and a new solution of a scientific problem, manifested in the development of theoretical and methodological approaches to the development of extreme tourism. Practical value. Rational use of the measures proposed by the author for directing tourist flows will significantly increase the country's revenues from domestic tourism.

  4. Large-solid-angle illuminators for extreme ultraviolet lithography with laser plasmas

    International Nuclear Information System (INIS)

    Kubiak, G.D.; Tichenor, D.A.; Sweatt, W.C.; Chow, W.W.

    1995-06-01

    Laser Plasma Sources (LPSS) of extreme ultraviolet radiation are an attractive alternative to synchrotron radiation sources for extreme ultraviolet lithography (EUVL) due to their modularity, brightness, and modest size and cost. To fully exploit the extreme ultraviolet power emitted by such sources, it is necessary to capture the largest possible fraction of the source emission half-sphere while simultaneously optimizing the illumination stationarity and uniformity on the object mask. In this LDRD project, laser plasma source illumination systems for EUVL have been designed and then theoretically and experimentally characterized. Ellipsoidal condensers have been found to be simple yet extremely efficient condensers for small-field EUVL imaging systems. The effects of aberrations in such condensers on extreme ultraviolet (EUV) imaging have been studied with physical optics modeling. Lastly, the design of an efficient large-solid-angle condenser has been completed. It collects 50% of the available laser plasma source power at 14 nm and delivers it properly to the object mask in a wide-arc-field camera

  5. Modeling and simulating command and control for organizations under extreme situations

    CERN Document Server

    Moon, Il-Chul; Kim, Tag Gon

    2013-01-01

    Commanding and controlling organizations in extreme situations is a challenging task in military, intelligence, and disaster management. Such command and control must be quick, effective, and considerate when dealing with the changing, complex, and risky conditions of the situation. To enable optimal command and control under extremes, robust structures and efficient operations are required of organizations. This work discusses how to design and conduct virtual experiments on resilient organizational structures and operational practices using modeling and simulation. The work illustrates key a

  6. Optimizing the molarity of a EDTA washing solution for saturated-soil remediation of trace metal contaminated soils

    International Nuclear Information System (INIS)

    Andrade, M.D.; Prasher, S.O.; Hendershot, W.H.

    2007-01-01

    Three experiments were conducted to optimize the use of ethylenediaminetetraacetic acid (EDTA) for reclaiming urban soils contaminated with trace metals. As compared to Na2EDTA, (NH4)2EDTA extracted 60% more Zn and equivalent amounts of Cd, Cu and Pb from a sandy loam. When successively saturating and draining loamy sand columns during a washing cycle, which submerged it once with a (NH4)2EDTA wash and four times with deionised water, the post-wash rinses largely contributed to the total cumulative extraction of Cd, Co, Cr, Cu, Mn, Ni, Pb and Zn. Both the washing solution and the deionised water rinses were added in a 2:5 liquid to soil (L:S) weight ratio. For equal amounts of EDTA, concentrating the washing solution and applying it and the ensuing rinses in a smaller 1:5 L:S weight ratio, instead of a 2:5 L:S weight ratio, increased the extraction of targeted Cr, Cu, Ni, Pb and Zn. - A single EDTA addition is best utilised in a highly concentrated washing solution given in a small liquid to soil weight ratio

  7. Implicitly defined criteria for vector optimization in technological process of hydroponic germination of wheat grain

    Science.gov (United States)

    Koneva, M. S.; Rudenko, O. V.; Usatikov, S. V.; Bugaets, N. A.; Tereshchenko, I. V.

    2018-05-01

    To reduce the duration of the process and to ensure the microbiological purity of the germinated material, an improved method of germination has been developed based on the complex use of physical factors: electrochemically activated water (ECHA-water) and an electromagnetic field of extremely low frequencies (EMF ELF), with round-the-clock artificial illumination by LED lamps. The increase in the efficiency of the "numerical" technology for solving computational problems of parametric optimization of the technological process of hydroponic germination of wheat grains is considered. In this setting, the quality criteria are contradictory and some of them are given by implicit functions of many variables. A solution algorithm is proposed that avoids constructing the Pareto set: a relatively small number of elements of the set of alternatives is used to obtain a linear convolution of the criteria with given weights, normalized to their "ideal" values obtained from the solutions of the single-criterion optimization problems. The use of the proposed mathematical models describing the processes of hydroponic germination of wheat grains made it possible to intensify the germination process and to shorten the time to obtain wheat sprouts "Altayskaya 105" by 27 hours.
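
    A minimal numeric illustration of the scalarization described above: each criterion is minimized on its own to obtain an "ideal" value, and a single weighted sum of the criteria normalized by those ideal values is then optimized. The two toy criteria and the weights are assumptions for illustration only.

```python
# Weighted-sum scalarization with normalization by single-criterion "ideal" values.
import numpy as np
from scipy.optimize import minimize_scalar

# Two conflicting criteria of one process variable x (e.g. germination time vs. losses).
f1 = lambda x: (x - 2.0) ** 2 + 1.0
f2 = lambda x: (x - 5.0) ** 2 + 3.0
criteria = [f1, f2]
weights = np.array([0.6, 0.4])

# 1) Single-criterion optimizations give the ideal (utopia) values.
ideal = np.array([minimize_scalar(f, bounds=(0.0, 10.0), method="bounded").fun
                  for f in criteria])

# 2) Linear convolution of the normalized criteria.
def convolution(x):
    values = np.array([f(x) for f in criteria])
    return float(np.dot(weights, values / ideal))

result = minimize_scalar(convolution, bounds=(0.0, 10.0), method="bounded")
print("compromise x:", round(result.x, 3),
      "criteria:", [round(f(result.x), 3) for f in criteria])
```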

  8. Load flow optimization and optimal power flow

    CERN Document Server

    Das, J C

    2017-01-01

    This book discusses the major aspects of load flow, optimization, optimal load flow, and culminates in modern heuristic optimization techniques and evolutionary programming. In the deregulated environment, the economic provision of electrical power to consumers requires knowledge of maintaining a certain power quality and load flow. Many case studies and practical examples are included to emphasize real-world applications. The problems at the end of each chapter can be solved by hand calculations without having to use computer software. The appendices are devoted to calculations of line and cable constants, and solutions to the problems are included throughout the book.

  9. Optimization strategies for discrete multi-material stiffness optimization

    DEFF Research Database (Denmark)

    Hvejsel, Christian Frier; Lund, Erik; Stolpe, Mathias

    2011-01-01

    The design of composite laminated lay-ups is formulated as a discrete multi-material selection problem. The design problem can be modeled as a non-convex mixed-integer optimization problem. Such problems are in general only solvable to global optimality for small to moderately sized problems. To attack ... which numerically confirm the sought properties of the new scheme in terms of convergence to a discrete solution.

  10. Optimizing Geographic Allotment of Photovoltaic Capacity in a Distributed Generation Setting: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Urquhart, B.; Sengupta, M.; Keller, J.

    2012-09-01

    A multi-objective optimization was performed to allocate 2MW of PV among four candidate sites on the island of Lanai such that energy was maximized and variability in the form of ramp rates was minimized. This resulted in an optimal solution set which provides a range of geographic allotment alternatives for the fixed PV capacity. Within the optimal set, a tradeoff between energy produced and variability experienced was found, whereby a decrease in variability always necessitates a simultaneous decrease in energy. A design point within the optimal set was selected for study which decreased extreme ramp rates by over 50% while only decreasing annual energy generation by 3% over the maximum generation allocation. To quantify the allotment mix selected, a metric was developed, called the ramp ratio, which compares ramping magnitude when all capacity is allotted to a single location to the aggregate ramping magnitude in a distributed scenario. The ramp ratio quantifies simultaneously how much smoothing a distributed scenario would experience over single site allotment and how much a single site is being under-utilized for its ability to reduce aggregate variability. This paper creates a framework for use by cities and municipal utilities to reduce variability impacts while planning for high penetration of PV on the distribution grid.
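
    The following toy calculation illustrates a ramp-ratio style comparison between single-site and distributed allotment of the same capacity. The synthetic per-unit output profiles and the exact normalization of the metric are assumptions for illustration; the paper's metric is computed from the Lanai candidate-site data.

```python
# Compare the worst time-step ramp of single-site vs. distributed PV allotment.
import numpy as np

rng = np.random.default_rng(4)
minutes, n_sites = 600, 4
# Per-unit output of each candidate site: a common diurnal shape plus independent cloud noise.
base = np.clip(np.sin(np.linspace(0.0, np.pi, minutes)), 0.0, None)
clouds = rng.normal(0.0, 0.08, (n_sites, minutes)).cumsum(axis=1) * 0.02
per_unit = np.clip(base + clouds, 0.0, 1.0)

capacity = 2.0                                              # MW to allocate
single_site = capacity * per_unit[0]                        # everything at site 0
distributed = (capacity / n_sites) * per_unit.sum(axis=0)   # equal split across sites

max_ramp = lambda p: np.max(np.abs(np.diff(p)))             # worst 1-minute ramp (MW/min)
ratio = max_ramp(single_site) / max_ramp(distributed)
print(f"single-site max ramp: {max_ramp(single_site):.3f} MW/min, "
      f"distributed: {max_ramp(distributed):.3f} MW/min, ratio: {ratio:.2f}")
```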

  11. Optimal truss and frame design from projected homogenization-based topology optimization

    DEFF Research Database (Denmark)

    Larsen, S. D.; Sigmund, O.; Groen, J. P.

    2018-01-01

    In this article, we propose a novel method to obtain a near-optimal frame structure, based on the solution of a homogenization-based topology optimization model. The presented approach exploits the equivalence between Michell’s problem of least-weight trusses and a compliance minimization problem...... using optimal rank-2 laminates in the low volume fraction limit. In a fully automated procedure, a discrete structure is extracted from the homogenization-based continuum model. This near-optimal structure is post-optimized as a frame, where the bending stiffness is continuously decreased, to allow...

  12. Dual-Energy Computed Tomography Angiography of the Lower Extremity Runoff: Impact of Noise-Optimized Virtual Monochromatic Imaging on Image Quality and Diagnostic Accuracy.

    Science.gov (United States)

    Wichmann, Julian L; Gillott, Matthew R; De Cecco, Carlo N; Mangold, Stefanie; Varga-Szemes, Akos; Yamada, Ricardo; Otani, Katharina; Canstein, Christian; Fuller, Stephen R; Vogl, Thomas J; Todoran, Thomas M; Schoepf, U Joseph

    2016-02-01

    The aim of this study was to evaluate the impact of a noise-optimized virtual monochromatic imaging algorithm (VMI+) on image quality and diagnostic accuracy at dual-energy computed tomography angiography (CTA) of the lower extremity runoff. This retrospective Health Insurance Portability and Accountability Act-compliant study was approved by the local institutional review board. We evaluated dual-energy CTA studies of the lower extremity runoff in 48 patients (16 women; mean age, 63.3 ± 13.8 years) performed on a third-generation dual-source CT system. Images were reconstructed with standard linear blending (F_0.5), VMI+, and traditional monochromatic (VMI) algorithms at 40 to 120 keV in 10-keV intervals. Vascular attenuation and image noise in 18 artery segments were measured; signal-to-noise ratio (SNR) and contrast-to-noise ratio (CNR) were calculated. Five-point scales were used to subjectively evaluate vascular attenuation and image noise. In a subgroup of 21 patients who underwent additional invasive catheter angiography, diagnostic accuracy for the detection of significant stenosis (≥50% lumen restriction) of F_0.5, 50-keV VMI+, and 60-keV VMI data sets was assessed. Objective image quality metrics were highest in the 40- and 50-keV VMI+ series (SNR: 20.2 ± 10.7 and 19.0 ± 9.5, respectively; CNR: 18.5 ± 10.3 and 16.8 ± 9.1, respectively) and were significantly (all P < 0.05) higher than with the traditional VMI technique and standard linear blending for evaluation of the lower extremity runoff using dual-energy CTA.

  13. Kalai-Smorodinsky bargaining solution for optimal resource allocation over wireless DS-CDMA visual sensor networks

    Science.gov (United States)

    Pandremmenou, Katerina; Kondi, Lisimachos P.; Parsopoulos, Konstantinos E.

    2012-01-01

    Surveillance applications usually require high levels of video quality, resulting in high power consumption. The existence of a well-behaved scheme to balance video quality and power consumption is crucial for the system's performance. In the present work, we adopt the game-theoretic approach of Kalai-Smorodinsky Bargaining Solution (KSBS) to deal with the problem of optimal resource allocation in a multi-node wireless visual sensor network (VSN). In our setting, the Direct Sequence Code Division Multiple Access (DS-CDMA) method is used for channel access, while a cross-layer optimization design, which employs a central processing server, accounts for the overall system efficacy through all network layers. The task assigned to the central server is the communication with the nodes and the joint determination of their transmission parameters. The KSBS is applied to non-convex utility spaces, efficiently distributing the source coding rate, channel coding rate and transmission powers among the nodes. In the underlying model, the transmission powers assume continuous values, whereas the source and channel coding rates can take only discrete values. Experimental results are reported and discussed to demonstrate the merits of KSBS over competing policies.
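
    A minimal sketch of computing a Kalai-Smorodinsky bargaining solution over a finite set of candidate operating points for two players, here read as (node 1 utility, node 2 utility). The utility table, the disagreement point and the discrete max-min selection rule are illustrative assumptions; the paper works over a non-convex utility space with continuous transmission powers and discrete coding rates.

```python
# Discrete Kalai-Smorodinsky bargaining: equalize normalized gains as far as possible.
import numpy as np

# Each row is a feasible allocation; columns are the two players' utilities.
utilities = np.array([[0.20, 0.90],
                      [0.45, 0.80],
                      [0.60, 0.65],
                      [0.75, 0.45],
                      [0.85, 0.20]])
disagreement = np.array([0.10, 0.10])            # utilities if no agreement is reached

gains = utilities - disagreement
ideal = gains.max(axis=0)                        # best achievable gain for each player alone

# KSBS targets the point whose gains are proportional to the ideal gains; on a finite
# set we approximate this by maximizing the smaller of the two normalized gains.
normalized = gains / ideal
ks_index = int(np.argmax(normalized.min(axis=1)))
print("KSBS allocation:", utilities[ks_index],
      "normalized gains:", np.round(normalized[ks_index], 3))
```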

  14. All the Four-Dimensional Static, Spherically Symmetric Solutions of Abelian Kaluza-Klein Theory

    International Nuclear Information System (INIS)

    Cvetic, M.; Youm, D.

    1995-01-01

    We present the explicit form for all the four-dimensional, static, spherically symmetric solutions in (4+n)-d Abelian Kaluza-Klein theory by performing a subset of SO(2,n) transformations corresponding to four SO(1,1) boosts on the Schwarzschild solution, supplemented by SO(n)/SO(n-2) transformations. The solutions are parametrized by the mass M, Taub-NUT charge a, and n electric charges Q and n magnetic charges P. Nonextreme black holes (with zero Taub-NUT charge) have either the Reissner-Nordström or Schwarzschild global space-time. Supersymmetric extreme black holes have a null or naked singularity, while nonsupersymmetric extreme ones have a global space-time of extreme Reissner-Nordström black holes. copyright 1995 The American Physical Society

  15. Energy-Water System Solutions | Energy Analysis | NREL

    Science.gov (United States)

    NREL has been a pioneer in the development of energy-water system solutions that explicitly address and optimize energy-water tradeoffs. NREL has evaluated energy-water system solutions for Department of Defense bases, islands, communities recovering from

  16. Image denoising: Learning the noise model via nonsmooth PDE-constrained optimization

    KAUST Repository

    Reyes, Juan Carlos De los

    2013-11-01

    We propose a nonsmooth PDE-constrained optimization approach for the determination of the correct noise model in total variation (TV) image denoising. An optimization problem for the determination of the weights corresponding to different types of noise distributions is stated and existence of an optimal solution is proved. A tailored regularization approach for the approximation of the optimal parameter values is proposed thereafter and its consistency studied. Additionally, the differentiability of the solution operator is proved and an optimality system characterizing the optimal solutions of each regularized problem is derived. The optimal parameter values are numerically computed by using a quasi-Newton method, together with semismooth Newton type algorithms for the solution of the TV-subproblems. © 2013 American Institute of Mathematical Sciences.

  17. Image denoising: Learning the noise model via nonsmooth PDE-constrained optimization

    KAUST Repository

    Reyes, Juan Carlos De los; Schönlieb, Carola-Bibiane

    2013-01-01

    We propose a nonsmooth PDE-constrained optimization approach for the determination of the correct noise model in total variation (TV) image denoising. An optimization problem for the determination of the weights corresponding to different types of noise distributions is stated and existence of an optimal solution is proved. A tailored regularization approach for the approximation of the optimal parameter values is proposed thereafter and its consistency studied. Additionally, the differentiability of the solution operator is proved and an optimality system characterizing the optimal solutions of each regularized problem is derived. The optimal parameter values are numerically computed by using a quasi-Newton method, together with semismooth Newton type algorithms for the solution of the TV-subproblems. © 2013 American Institute of Mathematical Sciences.

  18. Optimal trajectories of aircraft and spacecraft

    Science.gov (United States)

    Miele, A.

    1990-01-01

    Work done on algorithms for the numerical solutions of optimal control problems and their application to the computation of optimal flight trajectories of aircraft and spacecraft is summarized. General considerations on calculus of variations, optimal control, numerical algorithms, and applications of these algorithms to real-world problems are presented. The sequential gradient-restoration algorithm (SGRA) is examined for the numerical solution of optimal control problems of the Bolza type. Both the primal formulation and the dual formulation are discussed. Aircraft trajectories, in particular, the application of the dual sequential gradient-restoration algorithm (DSGRA) to the determination of optimal flight trajectories in the presence of windshear are described. Both take-off trajectories and abort landing trajectories are discussed. Take-off trajectories are optimized by minimizing the peak deviation of the absolute path inclination from a reference value. Abort landing trajectories are optimized by minimizing the peak drop of altitude from a reference value. The survival capability of an aircraft in a severe windshear is discussed, and the optimal trajectories are found to be superior to both constant pitch trajectories and maximum angle of attack trajectories. Spacecraft trajectories, in particular, the application of the primal sequential gradient-restoration algorithm (PSGRA) to the determination of optimal flight trajectories for aeroassisted orbital transfer are examined. Both the coplanar case and the noncoplanar case are discussed within the frame of three problems: minimization of the total characteristic velocity; minimization of the time integral of the square of the path inclination; and minimization of the peak heating rate. The solution of the second problem is called nearly-grazing solution, and its merits are pointed out as a useful

  19. Optimization Under Uncertainty for Wake Steering Strategies

    Energy Technology Data Exchange (ETDEWEB)

    Quick, Julian [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Annoni, Jennifer [National Renewable Energy Laboratory (NREL), Golden, CO (United States); King, Ryan N [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Dykes, Katherine L [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Fleming, Paul A [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Ning, Andrew [Brigham Young University

    2017-08-03

    Offsetting turbines' yaw orientations from incoming wind is a powerful tool that may be leveraged to reduce undesirable wake effects on downstream turbines. First, we examine a simple two-turbine case to gain intuition as to how inflow direction uncertainty affects the optimal solution. The turbines are modeled with unidirectional inflow such that one turbine directly wakes the other, using ten rotor diameter spacing. We perform optimization under uncertainty (OUU) via a parameter sweep of the front turbine. The OUU solution generally prefers less steering. We then do this optimization for a 60-turbine wind farm with unidirectional inflow, varying the degree of inflow uncertainty and approaching this OUU problem by nesting a polynomial chaos expansion uncertainty quantification routine within an outer optimization. We examined how different levels of uncertainty in the inflow direction affect the ratio of the expected values of deterministic and OUU solutions for steering strategies in the large wind farm, assuming the directional uncertainty used to reach said OUU solution (this ratio is defined as the value of the stochastic solution or VSS).

  20. Optimal design of a gas transmission network: A case study of the Turkish natural gas pipeline network system

    Science.gov (United States)

    Gunes, Ersin Fatih

    Turkey is located between Europe, which has an increasing demand for natural gas, and the Middle East, Asia and Russia, which have rich and strong natural gas supplies. Because of this geographical location, Turkey has strategic importance with respect to energy sources. To supply this demand, a pipeline network configuration with optimal and efficient lengths, pressures, diameters and number of compressor stations is urgently needed. Since Turkey already has a working, constructed network topology, obtaining an optimal configuration of the pipelines, including an optimal number of compressor stations with optimal locations, is the focus of this study. Identifying a network design with the lowest costs is important because of the high maintenance and set-up costs. The number of compressor stations, the pipeline segments' lengths, the diameter sizes and the pressures at compressor stations are considered to be decision variables in this study. Two existing optimization models were selected and applied to the case study of Turkey. Because of the fixed cost of investment, both models are formulated as mixed-integer nonlinear programs, which require branch and bound combined with nonlinear programming solution methods. The differences between the two models are related to factors that can affect the natural gas network system, such as wall thickness, material balance, compressor isentropic head and the amount of gas to be delivered. The results obtained by the two techniques are compared with each other and with the current system. The major differences between the results are in costs, pressures and flow rates. Both solution techniques find a minimum-cost solution for each model, both of which are less than the current cost of the system, while satisfying all the constraints on diameter, length, flow rate and pressure. These results give the big picture of an ideal configuration for the future network of Turkey.

  1. Optimal inflation for the U.S.

    OpenAIRE

    Roberto M. Billi

    2007-01-01

    What is the correctly measured inflation rate that monetary policy should aim for in the long-run? This paper characterizes the optimal inflation rate for the U.S. economy in a New Keynesian sticky-price model with an occasionally binding zero lower bound on the nominal interest rate. Real-rate and mark-up shocks jointly determine the optimal inflation rate to be positive but not large. Even allowing for the possibility of extreme model misspecification, the optimal inflation rate is robustly...

  2. Impacts of climate extremes on gross primary production under global warming

    International Nuclear Information System (INIS)

    Williams, I N; Torn, M S; Riley, W J; Wehner, M F

    2014-01-01

    The impacts of historical droughts and heat-waves on ecosystems are often considered indicative of future global warming impacts, under the assumption that water stress sets in above a fixed high temperature threshold. Historical and future (RCP8.5) Earth system model (ESM) climate projections were analyzed in this study to illustrate changes in the temperatures for onset of water stress under global warming. The ESMs examined here predict sharp declines in gross primary production (GPP) at warm temperature extremes in historical climates, similar to the observed correlations between GPP and temperature during historical heat-waves and droughts. However, soil moisture increases at the warm end of the temperature range, and the temperature at which soil moisture declines with temperature shifts to a higher temperature. The temperature for onset of water stress thus increases under global warming and is associated with a shift in the temperature for maximum GPP to warmer temperatures. Despite the shift in this local temperature optimum, the impacts of warm extremes on GPP are approximately invariant when extremes are defined relative to the optimal temperature within each climate period. The GPP sensitivity to these relative temperature extremes therefore remains similar between future and present climates, suggesting that the heat- and drought-induced GPP reductions seen recently can be expected to be similar in the future, and may be underestimates of future impacts given model projections of increased frequency and persistence of heat-waves and droughts. The local temperature optimum can be understood as the temperature at which the combination of water stress and light limitations is minimized, and this concept gives insights into how GPP responds to climate extremes in both historical and future climate periods. Both cold (temperature and light-limited) and warm (water-limited) relative temperature extremes become more persistent in future climate projections

  3. A Metastatistical Approach to Satellite Estimates of Extreme Rainfall Events

    Science.gov (United States)

    Zorzetto, E.; Marani, M.

    2017-12-01

    The estimation of the average recurrence interval of intense rainfall events is a central issue for both hydrologic modeling and engineering design. These estimates require the inference of the properties of the right tail of the statistical distribution of precipitation, a task often performed using the Generalized Extreme Value (GEV) distribution, estimated either from a sample of annual maxima (AM) or with a peaks over threshold (POT) approach. However, these approaches require long and homogeneous rainfall records, which often are not available, especially in the case of remote-sensed rainfall datasets. We use here, and tailor to remotely sensed rainfall estimates, an alternative approach based on the metastatistical extreme value distribution (MEVD), which produces estimates of rainfall extreme values based on the probability distribution function (pdf) of all measured `ordinary' rainfall events. This methodology also accounts for the interannual variations observed in the pdf of daily rainfall by integrating over the sample space of its random parameters. We illustrate the application of this framework to the TRMM Multi-satellite Precipitation Analysis rainfall dataset, where MEVD optimally exploits the relatively short datasets of satellite-sensed rainfall, while taking full advantage of its high spatial resolution and quasi-global coverage. Accuracy of TRMM precipitation estimates and scale issues are here investigated for a case study located in the Little Washita watershed, Oklahoma, using a dense network of rain gauges for independent ground validation. The methodology contributes to our understanding of the risk of extreme rainfall events, as it allows i) an optimal use of the TRMM datasets in estimating the tail of the probability distribution of daily rainfall, and ii) a global mapping of daily rainfall extremes and distributional tail properties, bridging the existing gaps in rain gauge networks.
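
    A minimal sketch of the MEVD idea as summarized above, assuming yearly Weibull fits to the ordinary daily events and a return level obtained by inverting the averaged annual-maximum CDF; the synthetic data and fitting choices are illustrative only and are not the paper's implementation.

```python
# Hedged sketch of the MEVD approach described above: fit a Weibull to each year's
# "ordinary" daily rainfall events, then average the implied annual-maximum CDFs.
# Synthetic data only; the paper's actual fitting choices may differ.
import numpy as np
from scipy.stats import weibull_min
from scipy.optimize import brentq

rng = np.random.default_rng(0)
years = [rng.gamma(shape=0.7, scale=12.0, size=rng.integers(60, 110)) for _ in range(15)]

fits = []
for events in years:
    c, loc, scale = weibull_min.fit(events, floc=0)   # shape, location (fixed at 0), scale
    fits.append((c, scale, len(events)))

def mev_cdf(x):
    # MEVD: average over years of P(annual max <= x) = F_j(x)**n_j
    return np.mean([weibull_min.cdf(x, c, loc=0, scale=s) ** n for c, s, n in fits])

def return_level(T):
    # Solve mev_cdf(x) = 1 - 1/T for the T-year daily rainfall
    return brentq(lambda x: mev_cdf(x) - (1.0 - 1.0 / T), 1e-3, 1e4)

print(f"20-year daily rainfall (synthetic): {return_level(20):.1f} mm")
```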

  4. Extremely Randomized Machine Learning Methods for Compound Activity Prediction

    Directory of Open Access Journals (Sweden)

    Wojciech M. Czarnecki

    2015-11-01

    Full Text Available Speed, a relatively low requirement for computational resources, and the high effectiveness of evaluating the bioactivity of compounds have caused a rapid growth of interest in the application of machine learning methods to virtual screening tasks. However, due to the growing amount of data in cheminformatics and related fields, the aim of research has shifted not only towards the development of algorithms of high predictive power but also towards the simplification of existing methods to obtain results more quickly. In this study, we tested two approaches belonging to the group of so-called ‘extremely randomized methods’, the Extreme Entropy Machine and Extremely Randomized Trees, for their ability to properly identify compounds that have activity towards particular protein targets. These methods were compared with their ‘non-extreme’ competitors, i.e., the Support Vector Machine and Random Forest. The extreme approaches were found not only to improve the efficiency of the classification of bioactive compounds, but also to be less computationally complex, requiring fewer steps to perform the optimization procedure.

  5. Improved creep strength of nickel-base superalloys by optimized γ/γ′ partitioning behavior of solid solution strengthening elements

    International Nuclear Information System (INIS)

    Pröbstle, M.; Neumeier, S.; Feldner, P.; Rettig, R.; Helmer, H.E.; Singer, R.F.; Göken, M.

    2016-01-01

    Solid solution strengthening of the γ matrix is one key factor for improving the creep strength of single crystal nickel-base superalloys at high temperatures. Therefore a strong partitioning of solid solution hardening elements to the matrix is beneficial for high temperature creep strength. Different Rhenium-free alloys which are derived from CMSX-4 are investigated. The alloys have been characterized regarding microstructure, phase compositions as well as creep strength. It is found that increasing the Titanium (Ti) as well as the Tungsten (W) content causes a stronger partitioning of the solid solution strengtheners, in particular W, to the γ phase. As a result the creep resistance is significantly improved. Based on these ideas, a Rhenium-free alloy with an optimized chemistry regarding the partitioning behavior of W is developed and validated in the present study. It shows comparable creep strength to the Rhenium containing second generation alloy CMSX-4 in the high temperature / low stress creep regime and is less prone to the formation of deleterious topologically close packed (TCP) phases. This more effective usage of solid solution strengtheners can enhance the creep properties of nickel-base superalloys while reducing the content of strategic elements like Rhenium.

  6. A Two-Stage Queue Model to Optimize Layout of Urban Drainage System considering Extreme Rainstorms

    OpenAIRE

    He, Xinhua; Hu, Wenfa

    2017-01-01

    Extreme rainstorm is a main factor to cause urban floods when urban drainage system cannot discharge stormwater successfully. This paper investigates distribution feature of rainstorms and draining process of urban drainage systems and uses a two-stage single-counter queue method M/M/1→M/D/1 to model urban drainage system. The model emphasizes randomness of extreme rainstorms, fuzziness of draining process, and construction and operation cost of drainage system. Its two objectives are total c...

  7. Ant colony search algorithm for optimal reactive power optimization

    Directory of Open Access Journals (Sweden)

    Lenin K.

    2006-01-01

    Full Text Available The paper presents an Ant Colony Search Algorithm (ACSA) for optimal reactive power optimization and voltage control of power systems. ACSA is a new co-operative agents' approach, inspired by the observed behavior of real ant colonies in forming trails and foraging. Hence, in the ACSA a set of co-operative agents called "Ants" co-operates to find a good solution for the reactive power optimization problem. The ACSA applied to optimal reactive power optimization is evaluated on standard IEEE 30-, 57- and 191-bus (practical) test systems. The proposed approach is tested and compared to the genetic algorithm (GA) and the adaptive genetic algorithm (AGA).

  8. Constrained Optimization Methods in Health Services Research-An Introduction: Report 1 of the ISPOR Optimization Methods Emerging Good Practices Task Force.

    Science.gov (United States)

    Crown, William; Buyukkaramikli, Nasuh; Thokala, Praveen; Morton, Alec; Sir, Mustafa Y; Marshall, Deborah A; Tosh, Jon; Padula, William V; Ijzerman, Maarten J; Wong, Peter K; Pasupathy, Kalyan S

    2017-03-01

    Providing health services with the greatest possible value to patients and society given the constraints imposed by patient characteristics, health care system characteristics, budgets, and so forth relies heavily on the design of structures and processes. Such problems are complex and require a rigorous and systematic approach to identify the best solution. Constrained optimization is a set of methods designed to identify efficiently and systematically the best solution (the optimal solution) to a problem characterized by a number of potential solutions in the presence of identified constraints. This report identifies 1) key concepts and the main steps in building an optimization model; 2) the types of problems for which optimal solutions can be determined in real-world health applications; and 3) the appropriate optimization methods for these problems. We first present a simple graphical model based on the treatment of "regular" and "severe" patients, which maximizes the overall health benefit subject to time and budget constraints. We then relate it back to how optimization is relevant in health services research for addressing present day challenges. We also explain how these mathematical optimization methods relate to simulation methods, to standard health economic analysis techniques, and to the emergent fields of analytics and machine learning. Copyright © 2017 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.
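
    The report's illustrative 'regular' versus 'severe' patient example is, in essence, a small linear program; a hedged sketch with hypothetical coefficients is shown below.

```python
# Tiny linear-programming sketch in the spirit of the report's illustrative example:
# choose how many "regular" (x1) and "severe" (x2) patients to treat to maximize total
# health benefit under clinician-time and budget constraints. All numbers are hypothetical.
from scipy.optimize import linprog

# maximize 2*x1 + 5*x2  ->  minimize -(2*x1 + 5*x2)
c = [-2.0, -5.0]
A_ub = [[1.0, 4.0],        # clinician hours per patient, capped at 200 hours
        [300.0, 400.0]]    # cost per patient, capped at a 30000 budget
b_ub = [200.0, 30000.0]
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
print(res.x, -res.fun)     # optimal patient mix and the resulting total health benefit
```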

  9. The Future of Coral Reefs Subject to Rapid Climate Change: Lessons from Natural Extreme Environments

    Directory of Open Access Journals (Sweden)

    Emma F. Camp

    2018-02-01

    Full Text Available Global climate change and localized anthropogenic stressors are driving rapid declines in coral reef health. In vitro experiments have been fundamental in providing insight into how reef organisms will potentially respond to future climates. However, such experiments are inevitably limited in their ability to reproduce the complex interactions that govern reef systems. Studies examining coral communities that already persist under naturally-occurring extreme and marginal physicochemical conditions have therefore become increasingly popular to advance ecosystem scale predictions of future reef form and function, although no single site provides a perfect analog to future reefs. Here we review the current state of knowledge that exists on the distribution of corals in marginal and extreme environments, and geographic sites at the latitudinal extremes of reef growth, as well as a variety of shallow reef systems and reef-neighboring environments (including upwelling and CO2 vent sites). We also conduct a synthesis of the abiotic data that have been collected at these systems, to provide the first collective assessment on the range of extreme conditions under which corals currently persist. We use the review and data synthesis to increase our understanding of the biological and ecological mechanisms that facilitate survival and success under sub-optimal physicochemical conditions. This comprehensive assessment can begin to: (i) highlight the extent of extreme abiotic scenarios under which corals can persist, (ii) explore whether there are commonalities in coral taxa able to persist in such extremes, (iii) provide evidence for key mechanisms required to support survival and/or persistence under sub-optimal environmental conditions, and (iv) evaluate the potential of current sub-optimal coral environments to act as potential refugia under changing environmental conditions. Such a collective approach is critical to better understand the future survival of...

  10. Food processing optimization using evolutionary algorithms | Enitan ...

    African Journals Online (AJOL)

    Evolutionary algorithms are widely used in single and multi-objective optimization. They are easy to use and provide solution(s) in one simulation run. They are used in food processing industries for decision making. Food processing presents constrained and unconstrained optimization problems. This paper reviews the ...

  11. Particle swarm optimization - Genetic algorithm (PSOGA) on linear transportation problem

    Science.gov (United States)

    Rahmalia, Dinita

    2017-08-01

    The Linear Transportation Problem (LTP) is a constrained optimization problem in which we want to minimize cost subject to the balance between total supply and total demand. Exact methods such as the northwest corner, Vogel, Russell and minimal cost methods have been applied to approach the optimal solution. In this paper, we use a heuristic, Particle Swarm Optimization (PSO), to solve the linear transportation problem for any number of decision variables. In addition, we combine the mutation operator of the Genetic Algorithm (GA) with PSO to improve the solution. This method is called Particle Swarm Optimization - Genetic Algorithm (PSOGA). The simulations show that PSOGA can improve the solutions obtained by PSO.
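
    A rough sketch of the PSOGA idea, assuming a penalty formulation of the supply/demand balance and a simple Gaussian mutation; the instance and parameters are illustrative and not the paper's settings.

```python
# PSO combined with a GA-style mutation (in the spirit of PSOGA) on a small transportation
# problem, using a quadratic penalty for the supply/demand balance. Illustrative only.
import numpy as np

rng = np.random.default_rng(1)
supply = np.array([20.0, 30.0, 25.0])
demand = np.array([10.0, 35.0, 30.0])
cost = np.array([[8.0, 6.0, 10.0],
                 [9.0, 12.0, 13.0],
                 [14.0, 9.0, 16.0]])

def fitness(x):
    x = x.reshape(cost.shape)
    transport = np.sum(cost * x)
    violation = np.sum((x.sum(axis=1) - supply) ** 2) + np.sum((x.sum(axis=0) - demand) ** 2)
    return transport + 1e3 * violation          # penalized objective

n_particles, dim = 40, cost.size
pos = rng.uniform(0, 20, size=(n_particles, dim))
vel = np.zeros_like(pos)
pbest, pbest_val = pos.copy(), np.array([fitness(p) for p in pos])
gbest = pbest[np.argmin(pbest_val)].copy()

for it in range(500):
    r1, r2 = rng.random((2, n_particles, dim))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, 0.0, None)                  # keep shipments non-negative
    # GA-style mutation: perturb a few random coordinates of a few particles
    mutate = rng.random((n_particles, dim)) < 0.02
    pos = np.clip(np.where(mutate, pos + rng.normal(0, 2.0, size=pos.shape), pos), 0.0, None)
    vals = np.array([fitness(p) for p in pos])
    improved = vals < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
    gbest = pbest[np.argmin(pbest_val)].copy()

print("best penalized cost:", fitness(gbest))
```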

  12. A Framework for Constrained Optimization Problems Based on a Modified Particle Swarm Optimization

    Directory of Open Access Journals (Sweden)

    Biwei Tang

    2016-01-01

    Full Text Available This paper develops a particle swarm optimization (PSO) based framework for constrained optimization problems (COPs). Aiming at enhancing the performance of PSO, a modified PSO algorithm, named SASPSO 2011, is proposed by adding a newly developed self-adaptive strategy to the standard particle swarm optimization 2011 (SPSO 2011) algorithm. Since the convergence of PSO is of great importance and significantly influences the performance of PSO, this paper first theoretically investigates the convergence of SASPSO 2011. Then, a parameter selection principle guaranteeing the convergence of SASPSO 2011 is provided. Subsequently, a SASPSO 2011-based framework is established to solve COPs. Attempting to increase the diversity of solutions and decrease optimization difficulties, the adaptive relaxation method, which is combined with the feasibility-based rule, is applied to handle constraints of COPs and evaluate candidate solutions in the developed framework. Finally, the proposed method is verified through 4 benchmark test functions and 2 real-world engineering problems against six PSO variants and some well-known methods proposed in the literature. Simulation results confirm that the proposed method is highly competitive in terms of the solution quality and can be considered as a vital alternative to solve COPs.
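
    The feasibility-based rule mentioned above can be sketched as a simple pairwise comparison; the eps tolerance stands in loosely for the adaptive relaxation and is an assumption, not the paper's exact scheme.

```python
# Minimal sketch of a feasibility-based rule for comparing candidate solutions:
# feasible beats infeasible, feasible candidates compare by objective value, and
# infeasible candidates compare by total constraint violation. The relaxed feasibility
# tolerance `eps` only loosely mirrors the adaptive relaxation idea in the paper.
def better(a, b, objective, violation, eps=0.0):
    """Return True if candidate a should be preferred over candidate b."""
    va, vb = violation(a), violation(b)
    fa, fb = objective(a), objective(b)
    a_feas, b_feas = va <= eps, vb <= eps
    if a_feas and b_feas:
        return fa < fb          # both (nearly) feasible: minimize the objective
    if a_feas != b_feas:
        return a_feas           # feasible always beats infeasible
    return va < vb              # both infeasible: smaller violation wins
```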

  13. QUALITATIVE ANALYSIS OF EXTREMAL PROBLEMS IN ARBITRARY DOMAINS

    Directory of Open Access Journals (Sweden)

    Samokhin Mikhail Vasilevich

    2012-10-01

    The author considers extremal problems in which B is either a unit sphere in the (D space or one of the classes, p>1. He presents results concerning the characterization of extremal functions, their uniqueness, the possible representation of functions from these classes with the use of Cauchy-Stieltjes integrals in the components of the set D \ supp µ, and the boundary behavior of an extremal function from the (D class. One should note that the given mathematical apparatus can be applied to decision making in the field of construction engineering and structural analysis; it can provide research assistants and engineers with the background necessary for developing sound solutions and rational proposals.

  14. An Electrostatics Approach to the Determination of Extremal Measures

    Energy Technology Data Exchange (ETDEWEB)

    Meinguet, Jean [Universite Catholique de Louvain, Institut Mathematique, Chemin du Cyclotron 2 (Belgium)], E-mail: meinguet@anma.ucl.ac.be

    2000-12-15

    One of the most important aspects of the minimal energy (or induced equilibrium) problem in the presence of an external field - sometimes referred to as the Gauss variation problem - is the determination of the support of its solution (the so-called extremal measure associated with the field). A simple electrostatic interpretation is presented here, which is apparently new and anyway suggests a novel, rather systematic approach to the solution. By way of illustration, the classical results for Jacobi, Laguerre and Freud weights are explicitly recovered by this alternative method.

  15. An Electrostatics Approach to the Determination of Extremal Measures

    International Nuclear Information System (INIS)

    Meinguet, Jean

    2000-01-01

    One of the most important aspects of the minimal energy (or induced equilibrium) problem in the presence of an external field - sometimes referred to as the Gauss variation problem - is the determination of the support of its solution (the so-called extremal measure associated with the field). A simple electrostatic interpretation is presented here, which is apparently new and anyway suggests a novel, rather systematic approach to the solution. By way of illustration, the classical results for Jacobi, Laguerre and Freud weights are explicitly recovered by this alternative method

  16. Optimal allocation of SVC and TCSC using quasi-oppositional chemical reaction optimization for solving multi-objective ORPD problem

    Directory of Open Access Journals (Sweden)

    Susanta Dutta

    2018-05-01

    Full Text Available This paper presents an efficient quasi-oppositional chemical reaction optimization (QOCRO) technique to find the feasible optimal solution of the multi-objective optimal reactive power dispatch (RPD) problem with flexible AC transmission system (FACTS) device. The quasi-oppositional based learning (QOBL) is incorporated in conventional chemical reaction optimization (CRO) to improve the solution quality and the convergence speed. To check the superiority of the proposed method, it is applied on IEEE 14-bus and 30-bus systems and the simulation results of the proposed approach are compared to those reported in the literature. The computational results reveal that the proposed algorithm has excellent convergence characteristics and is superior to other multi-objective optimization algorithms. Keywords: Quasi-oppositional chemical reaction optimization (QOCRO), Reactive power dispatch (RPD), TCSC, SVC, Multi-objective optimization

  17. Many-objective thermodynamic optimization of Stirling heat engine

    International Nuclear Information System (INIS)

    Patel, Vivek; Savsani, Vimal; Mudgal, Anurag

    2017-01-01

    This paper presents a rigorous investigation of many-objective (four-objective) thermodynamic optimization of a Stirling heat engine. The many-objective optimization problem is formed by considering maximization of thermal efficiency, power output, ecological function and exergy efficiency. A multi-objective heat transfer search (MOHTS) algorithm is proposed and applied to obtain a set of Pareto-optimal points. The many-objective optimization results form a solution set in a four-dimensional objective space; for visualization they are represented in two-dimensional objective spaces. Thus, the results of the four-objective optimization are represented by six Pareto fronts in two-dimensional objective space. These six Pareto fronts are compared with their corresponding two-objective Pareto fronts. Quantitative assessment of the obtained Pareto solutions is reported in terms of the spread and spacing measures. Different decision-making approaches such as LINMAP, TOPSIS and fuzzy methods are used to select a final optimal solution from the Pareto-optimal set of the many-objective optimization. Finally, to reveal the level of conflict between these objectives, the distribution of each decision variable over its allowable range is also shown in two-dimensional objective spaces. - Highlights: • Many-objective (i.e. four-objective) optimization of a Stirling engine is investigated. • The MOHTS algorithm is introduced and applied to obtain a set of Pareto points. • Comparative results of many-objective and multi-objective optimization are presented. • Relationships of design variables in many-objective optimization are obtained. • The optimum solution is selected by using decision-making approaches.
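
    Of the decision-making approaches named above, TOPSIS is straightforward to sketch; the Pareto points, weights and objective senses below are placeholders, not results from the study.

```python
# Sketch of TOPSIS for picking a final design from a Pareto set (rows = solutions,
# columns = objectives). `benefit` marks objectives to maximize; weights and the small
# Pareto set here are hypothetical placeholders.
import numpy as np

def topsis(F, weights, benefit):
    F = np.asarray(F, dtype=float)
    W = np.asarray(weights, dtype=float)
    R = F / np.linalg.norm(F, axis=0)          # vector-normalize each objective column
    V = R * W
    ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
    nadir = np.where(benefit, V.min(axis=0), V.max(axis=0))
    d_best = np.linalg.norm(V - ideal, axis=1)
    d_worst = np.linalg.norm(V - nadir, axis=1)
    closeness = d_worst / (d_best + d_worst)
    return int(np.argmax(closeness)), closeness

pareto = [[0.38, 5.1, 0.62, 0.55],             # [thermal eff., power, ecological fn, exergy eff.]
          [0.35, 6.0, 0.58, 0.52],
          [0.40, 4.4, 0.66, 0.58]]
idx, score = topsis(pareto, weights=[0.25, 0.25, 0.25, 0.25], benefit=[True, True, True, True])
print("selected design:", idx, score.round(3))
```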

  18. Choosing the Mean versus an Extreme Resolution for Intrapersonal Values Conflicts: Is the Mean Usually More Golden?

    Science.gov (United States)

    Kinnier, Richard T.

    1984-01-01

    Examined the resolution of value conflicts in 60 adults who wrote a solution to their conflicts. Compared extreme resolutions with those representing compromise. Compromisers and extremists did not differ in how rationally resolved they were about their solutions but compromisers felt better about their solutions. (JAC)

  19. Optimization of the Runner for Extremely Low Head Bidirectional Tidal Bulb Turbine

    Directory of Open Access Journals (Sweden)

    Yongyao Luo

    2017-06-01

    Full Text Available This paper presents a multi-objective optimization procedure for bidirectional bulb turbine runners which is completed using ANSYS Workbench. The optimization procedure is able to check many more geometries with less manual work. In the procedure, the initial blade shape is parameterized; the inlet and outlet angles (β1, β2), as well as the starting and ending wrap angles (θ1, θ2) for the five sections of the blade profile, are selected as design variables, and the optimization target is set to obtain the maximum of the overall efficiency for the ebb and flood turbine modes. For the flow analysis, the ANSYS CFX code, with a SST (Shear Stress Transport) k-ω turbulence model, has been used to evaluate the efficiency of the turbine. An efficient response surface model relating the design parameters and the objective functions is obtained. The optimization strategy was used to optimize a model bulb turbine runner. Model tests were carried out to validate the final designs and the design procedure. For the four-bladed turbine, the efficiency improvement is 5.5% in the ebb operation direction, and 2.9% in the flood operation direction, as well as 4.3% and 4.5% for the three-bladed turbine. Numerical simulations were then performed to analyze the pressure pulsation on the pressure and suction sides of the blade for the prototype turbine with the optimal four-bladed and three-bladed runners. The results show that the runner rotational frequency (fn) is the dominant frequency of the pressure pulsations in the blades for ebb and flood turbine modes, and the gravitational effect, rather than rotor-stator interaction (RSI), plays an important role in a low head horizontal axial turbine. The amplitudes of the pressure pulsations on the blade side facing the guide vanes vary little with the water head. However, the amplitudes of the pressure pulsations on the blade side facing the diffusion tube increase linearly with the water head. These results could provide...

  20. Optimal adaptation to extreme rainfalls in current and future climate

    DEFF Research Database (Denmark)

    Rosbjerg, Dan

    2017-01-01

    More intense and frequent rainfalls have increased the number of urban flooding events in recent years, prompting adaptation efforts. Economic optimization is considered an efficient tool to decide on the design level for adaptation. The costs associated with a flooding to the T-year level and the annual capital and operational costs of adapting to this level are described with log-linear relations. The total flooding costs are developed as the expected annual damage of flooding above the T-year level plus the annual capital and operational costs for ensuring no flooding below the T-year level. The value of the return period T that corresponds to the minimum of the sum of these costs will then be the optimal adaptation level. The change in climate, however, is expected to continue in the next century, which calls for expansion of the above model. The change can be expressed in terms of a climate...
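
    The stationary-climate version of this optimization reduces to a one-dimensional minimization over the return period T; the log-linear cost coefficients in the sketch below are hypothetical placeholders, and the climate-factor extension mentioned in the abstract is not modelled.

```python
# One-dimensional sketch of the economic optimization described above: the expected
# annual damage from floods exceeding the T-year level falls with T, while the
# annualized cost of adapting to the T-year level rises log-linearly. All coefficients
# are hypothetical placeholders, not values from the study.
import numpy as np
from scipy.optimize import minimize_scalar

def expected_annual_damage(T, a=2.0e5, b=8.0e4):
    # damage per exceedance grows ~log-linearly with T; exceedances occur with rate 1/T
    return (a + b * np.log(T)) / T

def annual_adaptation_cost(T, c=1.0e4, d=2.5e4):
    return c + d * np.log(T)            # log-linear capital + operational cost

total = lambda T: expected_annual_damage(T) + annual_adaptation_cost(T)
res = minimize_scalar(total, bounds=(1.0, 500.0), method="bounded")
print(f"optimal design return period ~ {res.x:.0f} years, total annual cost {total(res.x):,.0f}")
```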

  1. Mixture-mixture design for the fingerprint optimization of chromatographic mobile phases and extraction solutions for Camellia sinensis.

    Science.gov (United States)

    Borges, Cleber N; Bruns, Roy E; Almeida, Aline A; Scarminio, Ieda S

    2007-07-09

    A composite simplex centroid-simplex centroid mixture design is proposed for simultaneously optimizing two mixture systems. The complementary model is formed by multiplying special cubic models for the two systems. The design was applied to the simultaneous optimization of both mobile phase chromatographic mixtures and extraction mixtures for the Camellia sinensis Chinese tea plant. The extraction mixtures investigated contained varying proportions of ethyl acetate, ethanol and dichloromethane while the mobile phase was made up of varying proportions of methanol, acetonitrile and a methanol-acetonitrile-water (MAW) 15%:15%:70% mixture. The experiments were block randomized corresponding to a split-plot error structure to minimize laboratory work and reduce environmental impact. Coefficients of an initial saturated model were obtained using Scheffe-type equations. A cumulative probability graph was used to determine an approximate reduced model. The split-plot error structure was then introduced into the reduced model by applying generalized least square equations with variance components calculated using the restricted maximum likelihood approach. A model was developed to calculate the number of peaks observed with the chromatographic detector at 210 nm. A 20-term model contained essentially all the statistical information of the initial model and had a root mean square calibration error of 1.38. The model was used to predict the number of peaks eluted in chromatograms obtained from extraction solutions that correspond to axial points of the simplex centroid design. The significant model coefficients are interpreted in terms of interacting linear, quadratic and cubic effects of the mobile phase and extraction solution components.

  2. Extreme black hole with an electric dipole moment

    International Nuclear Information System (INIS)

    Horowitz, G.T.; Tada, T.

    1996-01-01

    We construct a new extreme black hole solution in a toroidally compactified heterotic string theory. The black hole saturates the Bogomol'nyi bound, has zero angular momentum, but a nonzero electric dipole moment. It is obtained by starting with a higher-dimensional rotating charged black hole, and compactifying one direction in the plane of rotation. copyright 1996 The American Physical Society

  3. Pseudolinear functions and optimization

    CERN Document Server

    Mishra, Shashi Kant

    2015-01-01

    Pseudolinear Functions and Optimization is the first book to focus exclusively on pseudolinear functions, a class of generalized convex functions. It discusses the properties, characterizations, and applications of pseudolinear functions in nonlinear optimization problems.The book describes the characterizations of solution sets of various optimization problems. It examines multiobjective pseudolinear, multiobjective fractional pseudolinear, static minmax pseudolinear, and static minmax fractional pseudolinear optimization problems and their results. The authors extend these results to locally

  4. Thermal Implications for Extreme Fast Charge

    Energy Technology Data Exchange (ETDEWEB)

    Keyser, Matthew A [National Renewable Energy Laboratory (NREL), Golden, CO (United States)

    2017-08-14

    Present-day thermal management systems for battery electric vehicles are inadequate in limiting the maximum temperature rise of the battery during extreme fast charging. If the battery thermal management system is not designed correctly, the temperature of the cells could reach abuse temperatures and potentially send the cells into thermal runaway. Furthermore, the cell and battery interconnect design needs to be improved to meet the lifetime expectations of the consumer. Each of these aspects is explored and addressed, along with an outline of where heat is generated in a cell, the efficiencies of power and energy cells, and the types of battery thermal management solutions available in today's market. Thermal management is not a limiting condition with regard to extreme fast charging, but many factors need to be addressed, especially for future high specific energy density cells, to meet U.S. Department of Energy cost and volume goals.

  5. Extreme Scale FMM-Accelerated Boundary Integral Equation Solver for Wave Scattering

    KAUST Repository

    AbdulJabbar, Mustafa Abdulmajeed

    2018-03-27

    Algorithmic and architecture-oriented optimizations are essential for achieving performance worthy of anticipated energy-austere exascale systems. In this paper, we present an extreme scale FMM-accelerated boundary integral equation solver for wave scattering, which uses FMM as a matrix-vector multiplication inside the GMRES iterative method. Our FMM Helmholtz kernels treat nontrivial singular and near-field integration points. We implement highly optimized kernels for both shared and distributed memory, targeting emerging Intel extreme performance HPC architectures. We extract the potential thread- and data-level parallelism of the key Helmholtz kernels of FMM. Our application code is well optimized to exploit the AVX-512 SIMD units of Intel Skylake and Knights Landing architectures. We provide different performance models for tuning the task-based tree traversal implementation of FMM, and develop optimal architecture-specific and algorithm aware partitioning, load balancing, and communication reducing mechanisms to scale up to 6,144 compute nodes of a Cray XC40 with 196,608 hardware cores. With shared memory optimizations, we achieve roughly 77% of peak single precision floating point performance of a 56-core Skylake processor, and on average 60% of peak single precision floating point performance of a 72-core KNL. These numbers represent nearly 5.4x and 10x speedup on Skylake and KNL, respectively, compared to the baseline scalar code. With distributed memory optimizations, on the other hand, we report near-optimal efficiency in the weak scalability study with respect to both the logarithmic communication complexity as well as the theoretical scaling complexity of FMM. In addition, we exhibit up to 85% efficiency in strong scaling. We compute in excess of 2 billion DoF on the full-scale of the Cray XC40 supercomputer.

  6. Technology-derived storage solutions for stabilizing insulin in extreme weather conditions I: the ViViCap-1 device.

    Science.gov (United States)

    Pfützner, Andreas; Pesach, Gidi; Nagar, Ron

    2017-06-01

    Injectable life-saving drugs should not be exposed to temperatures above 30°C/86°F. Frequently, weather conditions exceed these temperature thresholds in many countries. Insulin is to be kept at 4-8°C/~ 39-47°F until use and, once opened, is supposed to be stable for up to 31 days at room temperature (exception: 42 days for insulin levemir). Extremely hot or cold external temperatures can lead to insulin degradation in a very short time, with loss of its glucose-lowering efficacy. Combined chemical and engineering solutions for heat protection are employed in ViViCap-1 for disposable insulin pens. The device works based on vacuum insulation and heat consumption by phase-change material. Laboratory studies with exposure of ViViCap-1 to hot outside conditions were performed to evaluate the device performance. ViViCap-1 keeps insulin below the critical internal temperature; reversing the phase-change process 'recharges' the device for further use. ViViCap-1 performed within its specifications. The small and convenient device maintains the efficacy and safety of using insulin even when carried under hot weather conditions.

  7. Replica approach to mean-variance portfolio optimization

    Science.gov (United States)

    Varga-Haszonits, Istvan; Caccioli, Fabio; Kondor, Imre

    2016-12-01

    We consider the problem of mean-variance portfolio optimization for a generic covariance matrix subject to the budget constraint and the constraint for the expected return, with the application of the replica method borrowed from the statistical physics of disordered systems. We find that the replica symmetry of the solution does not need to be assumed, but emerges as the unique solution of the optimization problem. We also check the stability of this solution and find that the eigenvalues of the Hessian are positive for r = N/T < 1, where N is the dimension of the portfolio and T the length of the time series. The optimal in-sample variance is found to vanish at the critical point, inversely proportionally to the divergent estimation error.
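
    For reference, the underlying equality-constrained mean-variance problem (before any replica analysis) has a closed-form KKT solution; the sketch below uses synthetic returns and is not the replica calculation itself.

```python
# Sketch of the underlying optimization (not the replica analysis): minimize portfolio
# variance w'Cw subject to sum(w) = 1 and mu'w = target return, solved via the KKT system.
# Synthetic inputs for illustration.
import numpy as np

rng = np.random.default_rng(2)
N, T = 10, 200
returns = rng.normal(0.001, 0.02, size=(T, N))        # synthetic return history (r = N/T = 0.05)
mu = returns.mean(axis=0)
C = np.cov(returns, rowvar=False)

target = 0.0015
ones = np.ones(N)
# KKT system: [2C  mu  1; mu' 0 0; 1' 0 0] [w; l1; l2] = [0; target; 1]
KKT = np.block([[2 * C, mu[:, None], ones[:, None]],
                [mu[None, :], np.zeros((1, 2))],
                [ones[None, :], np.zeros((1, 2))]])
rhs = np.concatenate([np.zeros(N), [target, 1.0]])
w = np.linalg.solve(KKT, rhs)[:N]
print("weights sum:", w.sum().round(6), "in-sample variance:", float(w @ C @ w))
```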

  8. Extremity individual monitoring: 30 years of experience

    International Nuclear Information System (INIS)

    Seda, Rosangela Pinto Guimaraes; Mauricio, Claudia Lucia de Pinho; Moura Junior, Jose; Martins, Marcelo Marques; Meira, Nilton Ferreira; Diz, Ricardo; Goncalves, Sergio Alves

    2002-01-01

    The Thermoluminescent Dosimetry Laboratory of the Departamento de Monitoracao Individual of the Instituto de Radioprotecao e Dosimetria (LDT/DEMIN/IRD) is one of the first extremity individual monitoring services in Brazil. Throughout its 30 years of activities, the laboratory has made a continuous effort to stay up to date: equipment and procedures have been updated and optimized in order to guarantee the quality of all measurements. Nowadays, the extremity individual monitoring service evaluates around 300 occupational hand doses per month for workers of several Brazilian facilities in the fields of health, industry (including power reactor) and research. A dosimetric ring with LiF:Mg,Ti thermoluminescent detectors (TLDs) from Harshaw/Bicron, known as TLD-100, is used. The service supports the effective occupational control of Brazilian workers who handle radioactive material or whose hands are more exposed than the rest of the body. (author)

  9. Optimization theory with applications

    CERN Document Server

    Pierre, Donald A

    1987-01-01

    Optimization principles are of undisputed importance in modern design and system operation. They can be used for many purposes: optimal design of systems, optimal operation of systems, determination of performance limitations of systems, or simply the solution of sets of equations. While most books on optimization are limited to essentially one approach, this volume offers a broad spectrum of approaches, with emphasis on basic techniques from both classical and modern work.After an introductory chapter introducing those system concepts that prevail throughout optimization problems of all typ

  10. Optimization of multi-objective micro-grid based on improved particle swarm optimization algorithm

    Science.gov (United States)

    Zhang, Jian; Gan, Yang

    2018-04-01

    The paper presents a multi-objective optimal configuration model for an independent micro-grid with the aims of economy and environmental protection. The Pareto solution set is obtained by solving the multi-objective configuration model of the micro-grid with an improved particle swarm algorithm. The feasibility of the improved particle swarm optimization algorithm for the multi-objective model is verified, which provides an important reference for the multi-objective optimization of independent micro-grids.

  11. Classification of Near-Horizon Geometries of Extremal Black Holes

    Directory of Open Access Journals (Sweden)

    Hari K. Kunduri

    2013-09-01

    Full Text Available Any spacetime containing a degenerate Killing horizon, such as an extremal black hole, possesses a well-defined notion of a near-horizon geometry. We review such near-horizon geometry solutions in a variety of dimensions and theories in a unified manner. We discuss various general results including horizon topology and near-horizon symmetry enhancement. We also discuss the status of the classification of near-horizon geometries in theories ranging from vacuum gravity to Einstein–Maxwell theory and supergravity theories. Finally, we discuss applications to the classification of extremal black holes and various related topics. Several new results are presented and open problems are highlighted throughout.

  12. Classification of Near-Horizon Geometries of Extremal Black Holes.

    Science.gov (United States)

    Kunduri, Hari K; Lucietti, James

    2013-01-01

    Any spacetime containing a degenerate Killing horizon, such as an extremal black hole, possesses a well-defined notion of a near-horizon geometry. We review such near-horizon geometry solutions in a variety of dimensions and theories in a unified manner. We discuss various general results including horizon topology and near-horizon symmetry enhancement. We also discuss the status of the classification of near-horizon geometries in theories ranging from vacuum gravity to Einstein-Maxwell theory and supergravity theories. Finally, we discuss applications to the classification of extremal black holes and various related topics. Several new results are presented and open problems are highlighted throughout.

  13. Bridge Management Strategy Based on Extreme User Costs for Bridge Network Condition

    Directory of Open Access Journals (Sweden)

    Ladislaus Lwambuka

    2014-01-01

    Full Text Available This paper presents a practical approach for the prioritization of bridge maintenance within a given bridge network. The maintenance prioritization is formulated as a multiobjective optimization problem where the simultaneous satisfaction of several conflicting objectives includes minimization of maintenance costs, maximization of bridge deck condition, and minimization of traffic disruption and associated user costs. User costs during the maintenance period arise in two ways. The first case refers to the dry season, when traffic flow is normally diverted to alternative routes, usually resurfaced to restore traffic access. The second case refers to the absence of alternative routes, which is often the situation in the least developed countries; here the user cost results from the waiting time while traffic is put on hold until the maintenance activity is completed. This paper deals with the second scenario of traffic closure in the absence of alternative diversion routes, which in essence results in extreme user costs. The paper shows that the multiobjective optimization approach remains valid for extreme cases of user costs in the absence of detour roads, as is often the scenario in countries with extremely poor road infrastructure.

  14. Improved solution for ill-posed linear systems using a constrained optimization ruled by a penalty: evaluation in nuclear medicine tomography

    International Nuclear Information System (INIS)

    Walrand, Stephan; Jamar, François; Pauwels, Stanislas

    2009-01-01

    Ill-posed linear systems occur in many different fields. A class of regularization methods, called constrained optimization, aims to determine the extremum of a penalty function whilst constraining an objective function to a likely value. We propose here a novel heuristic way to screen the local extrema satisfying the discrepancy principle. A modified version of the Landweber algorithm is used for the iteration process. After finding a local extremum, a bound is performed to the 'farthest' estimate in the data space still satisfying the discrepancy principle. Afterwards, the modified Landweber algorithm is again applied to find a new local extremum. This bound-iteration process is repeated until a satisfying solution is reached. For evaluation in nuclear medicine tomography, a novel penalty function that preserves the edge steps in the reconstructed solution was evaluated on Monte Carlo simulations and using real SPECT acquisitions as well. Surprisingly, the first bound always provided a significantly better solution in a wide range of statistics

  15. Supersymmetric giant graviton solutions in AdS3

    International Nuclear Information System (INIS)

    Mandal, Gautam; Raju, Suvrat; Smedbaeck, Mikael

    2008-01-01

    We parametrize all classical probe brane configurations that preserve four supersymmetries in (a) the extremal D1-D5 geometry, (b) the extremal D1-D5-P geometry, (c) the smooth D1-D5 solutions proposed by Lunin and Mathur, and (d) global AdS3 x S3 x T4/K3. These configurations consist of D1 branes, D5 branes, and bound states of D5 and D1 branes with the property that a particular Killing vector is tangent to the brane world volume at each point. We show that the supersymmetric sector of the D5-brane world volume theory may be analyzed in an effective 1+1 dimensional framework that places it on the same footing as D1 branes. In global AdS and the corresponding Lunin-Mathur solution, the solutions we describe are "bound" to the center of AdS for generic parameters and cannot escape to infinity. We show that these probes only exist on the submanifold of moduli space where the background B_NS field and theta angle vanish. We quantize these probes in the near-horizon region of the extremal D1-D5 geometry and obtain the theory of long strings discussed by Seiberg and Witten.

  16. Statistically optimal estimation of Greenland Ice Sheet mass variations from GRACE monthly solutions using an improved mascon approach

    Science.gov (United States)

    Ran, J.; Ditmar, P.; Klees, R.; Farahani, H. H.

    2018-03-01

    We present an improved mascon approach to transform monthly spherical harmonic solutions based on GRACE satellite data into mass anomaly estimates in Greenland. The GRACE-based spherical harmonic coefficients are used to synthesize gravity anomalies at satellite altitude, which are then inverted into mass anomalies per mascon. The limited spectral content of the gravity anomalies is properly accounted for by applying a low-pass filter as part of the inversion procedure to make the functional model spectrally consistent with the data. The full error covariance matrices of the monthly GRACE solutions are properly propagated using the law of covariance propagation. Using numerical experiments, we demonstrate the importance of a proper data weighting and of the spectral consistency between functional model and data. The developed methodology is applied to process real GRACE level-2 data (CSR RL05). The obtained mass anomaly estimates are integrated over five drainage systems, as well as over entire Greenland. We find that the statistically optimal data weighting reduces random noise by 35-69%, depending on the drainage system. The obtained mass anomaly time-series are de-trended to eliminate the contribution of ice discharge and are compared with de-trended surface mass balance (SMB) time-series computed with the Regional Atmospheric Climate Model (RACMO 2.3). We show that when using a statistically optimal data weighting in GRACE data processing, the discrepancies between GRACE-based estimates of SMB and modelled SMB are reduced by 24-47%.
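
    The 'statistically optimal data weighting' amounts to generalized least squares with the propagated full covariance matrix; the toy sketch below uses a synthetic design matrix and covariance as stand-ins for the actual GRACE quantities.

```python
# Toy generalized-least-squares sketch of statistically optimal weighting: estimate mascon
# mass anomalies x from gravity anomalies y = A x + noise, weighting the data by the inverse
# of the propagated full error covariance Cyy. Design matrix and covariance are synthetic.
import numpy as np

rng = np.random.default_rng(3)
n_obs, n_mascons = 200, 5
A = rng.normal(size=(n_obs, n_mascons))               # synthetic "gravity anomaly" design matrix
x_true = np.array([10.0, -4.0, 2.5, 0.0, -7.0])

L = rng.normal(size=(n_obs, n_obs)) * 0.05
Cyy = L @ L.T + 0.5 * np.eye(n_obs)                   # full (correlated) error covariance
noise = rng.multivariate_normal(np.zeros(n_obs), Cyy)
y = A @ x_true + noise

W = np.linalg.inv(Cyy)                                # covariance propagation would supply Cyy
x_gls = np.linalg.solve(A.T @ W @ A, A.T @ W @ y)     # statistically optimal (GLS) estimate
x_ols = np.linalg.lstsq(A, y, rcond=None)[0]          # naive unweighted estimate for comparison
print("GLS:", x_gls.round(2), "\nOLS:", x_ols.round(2))
```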

  17. Intelligent discrete particle swarm optimization for multiprocessor task scheduling problem

    Directory of Open Access Journals (Sweden)

    S Sarathambekai

    2017-03-01

    Full Text Available Discrete particle swarm optimization is one of the most recently developed population-based meta-heuristic optimization algorithms in swarm intelligence and can be used in any discrete optimization problem. This article presents a discrete particle swarm optimization algorithm to efficiently schedule the tasks in heterogeneous multiprocessor systems. All optimization algorithms share a common algorithmic step, namely population initialization. It plays a significant role because it can affect the convergence speed and also the quality of the final solution. Random initialization is the most commonly used method in the majority of evolutionary algorithms to generate solutions in the initial population. Initial good-quality solutions can facilitate the algorithm in locating the optimal solution; otherwise they may prevent the algorithm from finding the optimal solution. Intelligence should therefore be incorporated into generating the initial population in order to avoid premature convergence. This article presents a discrete particle swarm optimization algorithm which incorporates an opposition-based technique to generate the initial population and a greedy algorithm to balance the load of the processors. Makespan, flow time, and reliability cost are three different measures used to evaluate the efficiency of the proposed discrete particle swarm optimization algorithm for scheduling independent tasks in distributed systems. Computational simulations are done based on a set of benchmark instances to assess the performance of the proposed algorithm.
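
    Opposition-based initialization can be sketched for a task-to-processor encoding as follows; the greedy load balancing and the full discrete PSO update are omitted, and the instance is synthetic.

```python
# Sketch of opposition-based population initialization for task-to-processor assignment:
# generate random assignments, form their "opposites" (task on processor p mapped to
# processor n_proc - 1 - p), and keep the better half by estimated makespan.
import numpy as np

rng = np.random.default_rng(4)
n_tasks, n_proc, pop_size = 30, 4, 20
exec_time = rng.uniform(1.0, 10.0, size=n_tasks)      # task lengths (hypothetical)

def makespan(assignment):
    return max(exec_time[assignment == p].sum() for p in range(n_proc))

population = rng.integers(0, n_proc, size=(pop_size, n_tasks))
opposite = (n_proc - 1) - population                  # opposition in the discrete index space
candidates = np.vstack([population, opposite])
scores = np.array([makespan(c) for c in candidates])
initial = candidates[np.argsort(scores)[:pop_size]]   # keep the best pop_size individuals
print("best initial makespan:", scores.min().round(2))
```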

  18. Data-adaptive Robust Optimization Method for the Economic Dispatch of Active Distribution Networks

    DEFF Research Database (Denmark)

    Zhang, Yipu; Ai, Xiaomeng; Fang, Jiakun

    2018-01-01

    Due to the restricted mathematical description of the uncertainty set, the current two-stage robust optimization is usually over-conservative, which has drawn concerns from the power system operators. This paper proposes a novel data-adaptive robust optimization method for the economic dispatch of active distribution networks with renewables. The scenario-generation method and the two-stage robust optimization are combined in the proposed method. To reduce the conservativeness, a few extreme scenarios selected from the historical data are used to replace the conventional uncertainty set. The proposed extreme-scenario selection algorithm takes advantage of considering the correlations and can be adaptive to different historical data sets. A theoretical proof is given that the constraints will be satisfied under all the possible scenarios if they hold in the selected extreme scenarios, which...
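
    A hedged stand-in for the extreme-scenario selection idea is to keep only the historical scenarios on the convex hull of a few correlated features; this respects correlations in the data but is not the authors' exact algorithm.

```python
# Illustrative stand-in (NOT the paper's algorithm): from historical renewable output
# profiles, keep only the scenarios on the convex hull of a low-dimensional feature space
# (total energy, peak, largest ramp), so correlations in the data are respected.
import numpy as np
from scipy.spatial import ConvexHull

rng = np.random.default_rng(5)
history = np.clip(rng.normal(0.5, 0.2, size=(365, 24)), 0.0, 1.0)   # daily profiles (synthetic)

features = np.column_stack([
    history.sum(axis=1),                            # daily energy
    history.max(axis=1),                            # daily peak
    np.abs(np.diff(history, axis=1)).max(axis=1),   # largest hourly ramp
])
hull = ConvexHull(features)
extreme_scenarios = history[hull.vertices]          # use these instead of a box uncertainty set
print(f"kept {len(hull.vertices)} extreme scenarios out of {len(history)}")
```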

  19. Optimization strategies for complex engineering applications

    Energy Technology Data Exchange (ETDEWEB)

    Eldred, M.S.

    1998-02-01

    LDRD research activities have focused on increasing the robustness and efficiency of optimization studies for computationally complex engineering problems. Engineering applications can be characterized by extreme computational expense, lack of gradient information, discrete parameters, non-converging simulations, and nonsmooth, multimodal, and discontinuous response variations. Guided by these challenges, the LDRD research activities have developed application-specific techniques, fundamental optimization algorithms, multilevel hybrid and sequential approximate optimization strategies, parallel processing approaches, and automatic differentiation and adjoint augmentation methods. This report surveys these activities and summarizes the key findings and recommendations.

  20. Going Extreme For Small Solutions To Big Environmental Challenges

    Energy Technology Data Exchange (ETDEWEB)

    Bagwell, Christopher E.

    2011-03-31

    This chapter is devoted to the scale, scope, and specific issues confronting the cleanup and long-term disposal of the U.S. nuclear legacy generated during WWII and the Cold War Era. The research reported is aimed at complex microbiological interactions with legacy waste materials generated by past nuclear production activities in the United States. The intended purpose of this research is to identify cost effective solutions to the specific problems (stability) and environmental challenges (fate, transport, exposure) in managing and detoxifying persistent contaminant species. Specifically addressed are high level waste microbiology and bacteria inhabiting plutonium laden soils in the unsaturated subsurface.

  1. Simulation of temperature extremes in the Tibetan Plateau from CMIP5 models and comparison with gridded observations

    Science.gov (United States)

    You, Qinglong; Jiang, Zhihong; Wang, Dai; Pepin, Nick; Kang, Shichang

    2017-09-01

    Understanding changes in temperature extremes in a warmer climate is of great importance for society and for ecosystem functioning due to the potentially severe impacts of such extreme events. In this study, temperature extremes defined by the Expert Team on Climate Change Detection and Indices (ETCCDI) from CMIP5 models are evaluated by comparison with homogenized gridded observations at 0.5° resolution across the Tibetan Plateau (TP) for 1961-2005. Using statistical metrics, the models have been ranked in terms of their ability to reproduce patterns in extreme events similar to the observations. Four CMIP5 models show good performance (BNU-ESM, HadGEM2-ES, CCSM4, CanESM2) and are used to create an optimal model ensemble (OME). Most temperature extreme indices in the OME are closer to the observations than in an ensemble using all models. The best performance is obtained for threshold temperature indices, while extreme/absolute value indices are slightly less well modelled. Thus the choice of models in the OME seems to have more influence on temperature extreme indices based on thresholds. There is no significant correlation between elevation and modelled bias of the extreme indices for either the optimal or the all-model ensemble. Furthermore, the minimum temperature (Tmin) shows significant positive correlations with the longwave radiation and cloud variables, whereas the maximum temperature (Tmax) shows no such correlation with the shortwave radiation and cloud variables. This suggests that cloud-radiation differences influence Tmin in each CMIP5 model to some extent and thereby affect the temperature extremes based on Tmin.

  2. Solution of Deformed Einstein Equations and Quantum Black Holes

    International Nuclear Information System (INIS)

    Dil, Emre; Kolay, Erdinç

    2016-01-01

    Recently, one- and two-parameter deformed Einstein equations have been studied for extremal quantum black holes, which have been proposed by Strominger to obey deformed statistics. In this study, we give deeper insight into the deformed Einstein equations and consider their solutions for extremal quantum black holes. We then present the implications of the solutions: the deformation parameters lead the charged black holes to have a smaller mass than the usual Reissner-Nordström black holes. This reduction in mass of a usual black hole can be considered as a transition from the classical to the quantum black hole regime.

  3. Mask Materials and Designs for Extreme Ultra Violet Lithography

    Science.gov (United States)

    Kim, Jung Sik; Ahn, Jinho

    2018-03-01

    Extreme ultra violet lithography (EUVL) is no longer a future technology but is going to be inserted into mass production of semiconductor devices of 7 nm technology node in 2018. EUVL is an extension of optical lithography using extremely short wavelength (13.5 nm). This short wavelength requires major modifications in the optical systems due to the very strong absorption of EUV light by materials. Refractive optics can no longer be used, and reflective optics is the only solution to transfer image from mask to wafer. This is why we need the multilayer (ML) mirror-based mask as well as an oblique incident angle of light. This paper discusses the principal theory on the EUV mask design and its component materials including ML reflector and EUV absorber. Mask shadowing effect (or mask 3D effect) is explained and its technical solutions like phase shift mask is reviewed. Even though not all the technical issues on EUV mask are handled in this review paper, you will be able to understand the principles determining the performance of EUV masks.

  4. Time optimal paths for high speed maneuvering

    Energy Technology Data Exchange (ETDEWEB)

    Reister, D.B.; Lenhart, S.M.

    1993-01-01

    Recent theoretical results have completely solved the problem of determining the minimum length path for a vehicle with a minimum turning radius moving from an initial configuration to a final configuration. Time optimal paths for a constant speed vehicle are a subset of the minimum length paths. This paper uses the Pontryagin maximum principle to find time optimal paths for a constant speed vehicle. The time optimal paths consist of sequences of arcs of circles and straight lines. The maximum principle introduces concepts (dual variables, bang-bang solutions, singular solutions, and transversality conditions) that provide important insight into the nature of the time optimal paths. We explore the properties of the optimal paths and present some experimental results for a mobile robot following an optimal path.

  5. Efficient solution method for optimal control of nuclear systems

    International Nuclear Information System (INIS)

    Naser, J.A.; Chambre, P.L.

    1981-01-01

    To improve the utilization of existing fuel sources, the use of optimization techniques is becoming more important. A technique for solving systems of coupled ordinary differential equations with initial, boundary, and/or intermediate conditions is given. This method has a number of inherent advantages over existing techniques as well as being efficient in terms of computer time and space requirements. An example of computing the optimal control for a spatially dependent reactor model with and without temperature feedback is given. 10 refs

  6. On a Highly Nonlinear Self-Obstacle Optimal Control Problem

    Energy Technology Data Exchange (ETDEWEB)

    Di Donato, Daniela, E-mail: daniela.didonato@unitn.it [University of Trento, Department of Mathematics (Italy); Mugnai, Dimitri, E-mail: dimitri.mugnai@unipg.it [Università di Perugia, Dipartimento di Matematica e Informatica (Italy)

    2015-10-15

    We consider a non-quadratic optimal control problem associated to a nonlinear elliptic variational inequality, where the obstacle is the control itself. We show that, fixed a desired profile, there exists an optimal solution which is not far from it. Detailed characterizations of the optimal solution are given, also in terms of approximating problems.

  7. On the validity of the arithmetic-geometric mean method to locate the optimal solution in a supply chain system

    Science.gov (United States)

    Chung, Kun-Jen

    2012-08-01

    Cardenas-Barron [Cardenas-Barron, L.E. (2010) 'A Simple Method to Compute Economic order Quantities: Some Observations', Applied Mathematical Modelling, 34, 1684-1688] indicates that there are several functions in which the arithmetic-geometric mean method (AGM) does not give the minimum. This article presents another situation to reveal that the AGM inequality to locate the optimal solution may be invalid for Teng, Chen, and Goyal [Teng, J.T., Chen, J., and Goyal S.K. (2009), 'A Comprehensive Note on: An Inventory Model under Two Levels of Trade Credit and Limited Storage Space Derived without Derivatives', Applied Mathematical Modelling, 33, 4388-4396], Teng and Goyal [Teng, J.T., and Goyal S.K. (2009), 'Comment on 'Optimal Inventory Replenishment Policy for the EPQ Model under Trade Credit Derived without Derivatives', International Journal of Systems Science, 40, 1095-1098] and Hsieh, Chang, Weng, and Dye [Hsieh, T.P., Chang, H.J., Weng, M.W., and Dye, C.Y. (2008), 'A Simple Approach to an Integrated Single-vendor Single-buyer Inventory System with Shortage', Production Planning and Control, 19, 601-604]. So, the main purpose of this article is to adopt the calculus approach not only to overcome shortcomings of the arithmetic-geometric mean method of Teng et al. (2009), Teng and Goyal (2009) and Hsieh et al. (2008), but also to develop the complete solution procedures for them.

  8. Extreme Experiences and Asking the Unaskable: An Interview with Ted Sizer.

    Science.gov (United States)

    Minton, Elaine

    1996-01-01

    The renowned educational reformer talks about how memorable, "extreme" learning experiences have shaped his views on education; how to create collegial support; the things that have given him satisfaction; his father's influence on him; the irrepressible optimism of teenagers; taking advantage of serendipitous events; and how questioning…

  9. A Novel Extreme Learning Machine Classification Model for e-Nose Application Based on the Multiple Kernel Approach.

    Science.gov (United States)

    Jian, Yulin; Huang, Daoyu; Yan, Jia; Lu, Kun; Huang, Ying; Wen, Tailai; Zeng, Tanyue; Zhong, Shijie; Xie, Qilong

    2017-06-19

    A novel classification model, named the quantum-behaved particle swarm optimization (QPSO)-based weighted multiple kernel extreme learning machine (QWMK-ELM), is proposed in this paper. Experimental validation is carried out with two different electronic nose (e-nose) datasets. Being different from the existing multiple kernel extreme learning machine (MK-ELM) algorithms, the combination coefficients of base kernels are regarded as external parameters of single-hidden layer feedforward neural networks (SLFNs). The combination coefficients of base kernels, the model parameters of each base kernel, and the regularization parameter are optimized by QPSO simultaneously before implementing the kernel extreme learning machine (KELM) with the composite kernel function. Four types of common single kernel functions (Gaussian kernel, polynomial kernel, sigmoid kernel, and wavelet kernel) are utilized to constitute different composite kernel functions. Moreover, the method is also compared with other existing classification methods: extreme learning machine (ELM), kernel extreme learning machine (KELM), k-nearest neighbors (KNN), support vector machine (SVM), multi-layer perceptron (MLP), radical basis function neural network (RBFNN), and probabilistic neural network (PNN). The results have demonstrated that the proposed QWMK-ELM outperforms the aforementioned methods, not only in precision, but also in efficiency for gas classification.
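
    The core of the classifier, a kernel ELM built on a weighted sum of base kernels, can be sketched in a few lines; the QPSO search over kernel weights and parameters described above is omitted, and the weights, kernel parameters and data below are placeholders.

```python
# Minimal sketch of a weighted multiple-kernel ELM core: build a composite kernel as a
# convex combination of base kernels and solve the regularized KELM system. The QPSO
# optimization of the combination coefficients is omitted; data are synthetic.
import numpy as np

rng = np.random.default_rng(6)
X = rng.normal(size=(120, 8));  y = (X[:, 0] + X[:, 1] ** 2 > 0.5).astype(int)
Xte = rng.normal(size=(40, 8)); yte = (Xte[:, 0] + Xte[:, 1] ** 2 > 0.5).astype(int)

def rbf(A, B, gamma=0.1):                  # Gaussian base kernel
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def poly(A, B, degree=2, c0=1.0):          # polynomial base kernel
    return (A @ B.T + c0) ** degree

def composite(A, B, w=(0.7, 0.3)):         # fixed placeholder weights (QPSO would tune these)
    return w[0] * rbf(A, B) + w[1] * poly(A, B)

C = 10.0                                   # regularization parameter
T = np.eye(2)[y]                           # one-hot targets
K = composite(X, X)
beta = np.linalg.solve(K + np.eye(len(X)) / C, T)      # KELM output weights
pred = composite(Xte, X) @ beta
print("test accuracy:", (pred.argmax(axis=1) == yte).mean())
```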

  10. Introduction to Continuous Optimization

    DEFF Research Database (Denmark)

    Andreasson, Niclas; Evgrafov, Anton; Patriksson, Michael

    optimal solutions for continuous optimization models. The main part of the mathematical material therefore concerns the analysis and linear algebra that underlie the workings of convexity and duality, and necessary/sufficient local/global optimality conditions for continuous optimization problems. Natural...... algorithms are then developed from these optimality conditions, and their most important convergence characteristics are analyzed. The book answers many more questions of the form “Why?” and “Why not?” than “How?”. We use only elementary mathematics in the development of the book, yet are rigorous throughout...

  11. Translational research to improve the treatment of severe extremity injuries.

    Science.gov (United States)

    Brown, Kate V; Penn-Barwell, J G; Rand, B C; Wenke, J C

    2014-06-01

    Severe extremity injuries are the most significant injury sustained in combat wounds. Despite optimal clinical management, non-union and infection remain common complications. In a concerted effort to dovetail research efforts, there has been a collaboration between the UK and USA, with British military surgeons conducting translational studies under the auspices of the US Institute of Surgical Research. This paper describes 3 years of work. A variety of studies were conducted using, and developing, a previously validated rat femur critical-sized defect model. Timing of surgical debridement and irrigation, different types of irrigants and different means of delivering antibiotics and growth factors for infection control and to promote bone healing were investigated. Early debridement and irrigation were independently shown to reduce infection. Normal saline was the optimal irrigant, superior to disinfectant solutions. A biodegradable gel demonstrated superior antibiotic delivery capabilities compared with standard polymethylmethacrylate beads. A polyurethane scaffold was shown to be able to deliver both antibiotics and growth factors. The importance of early transit times to Role 3 capabilities for definitive surgical care has been underlined. Novel and superior methods of antibiotic and growth factor delivery, compared with current clinical standards of care, have been shown. There is the potential for translation to clinical studies to promote infection control and bone healing in these devastating injuries. Published by the BMJ Publishing Group Limited.

  12. Technique optimization of orbital atherectomy in calcified peripheral lesions of the lower extremities: the CONFIRM series, a prospective multicenter registry.

    Science.gov (United States)

    Das, Tony; Mustapha, Jihad; Indes, Jeffrey; Vorhies, Robert; Beasley, Robert; Doshi, Nilesh; Adams, George L

    2014-01-01

    The purpose of the CONFIRM registry series was to evaluate the use of orbital atherectomy (OA) in peripheral lesions of the lower extremities, as well as to optimize the technique of OA. Methods of treating calcified arteries (historically a strong predictor of treatment failure) have improved significantly over the past decade and now include minimally invasive endovascular treatments, such as OA, with unique versatility in modifying calcific lesions above and below the knee. Patients (n = 3135) undergoing OA by more than 350 physicians at over 200 US institutions were enrolled on an "all-comers" basis, resulting in registries that provided site-reported patient demographics, ABI, Rutherford classification, co-morbidities, lesion characteristics, plaque morphology, device usage parameters, and procedural outcomes. Treatment with OA reduced pre-procedural stenosis from an average of 88% to 35%. Final residual stenosis after adjunctive treatments, typically low-pressure percutaneous transluminal angioplasty (PTA), averaged 10%. Plaque removal was most effective for severely calcified lesions and least effective for soft plaque. Shorter spin times and smaller crown sizes significantly lowered procedural complications, which included slow flow (4.4%), embolism (2.2%), and spasm (6.3%), emphasizing the importance of treatment regimens that focus on plaque modification over maximizing luminal gain. The OA technique optimization, which resulted in a change of device usage across the CONFIRM registry series, corresponded to a lower incidence of adverse events irrespective of calcium burden or co-morbidities. Copyright © 2013 The Authors. Wiley Periodicals, Inc.

  13. A Novel Adaptive Elite-Based Particle Swarm Optimization Applied to VAR Optimization in Electric Power Systems

    Directory of Open Access Journals (Sweden)

    Ying-Yi Hong

    2014-01-01

    Full Text Available Particle swarm optimization (PSO) has been successfully applied to solve many practical engineering problems. However, more efficient strategies are needed to coordinate global and local searches in the solution space when the studied problem is extremely nonlinear and highly dimensional. This work proposes a novel adaptive elite-based PSO approach. The adaptive elite strategies involve the following two tasks: (1) appending the mean search to the original approach and (2) pruning/cloning particles. The mean search, leading to stable convergence, helps the iterative process coordinate between the global and local searches. The mean of the particles and standard deviation of the distances between pairs of particles are utilized to prune distant particles. The best particle is cloned and it replaces the pruned distant particles in the elite strategy. To evaluate the performance and generality of the proposed method, four benchmark functions were tested by traditional PSO, chaotic PSO, differential evolution, and genetic algorithm. Finally, a realistic loss minimization problem in an electric power system is studied to show the robustness of the proposed method.
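
    For concreteness, a minimal global-best PSO sketch on a benchmark sphere function, with an extra velocity term pulling particles toward the swarm mean as a simplified stand-in for the paper's mean search (the pruning/cloning elite steps are omitted and all constants are illustrative):

```python
import numpy as np

def pso_with_mean_search(f, dim=10, n_particles=30, iters=200, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5, 5, (n_particles, dim))      # positions
    v = np.zeros_like(x)                            # velocities
    pbest = x.copy()
    pbest_val = np.apply_along_axis(f, 1, x)
    g = pbest[pbest_val.argmin()].copy()            # global best
    w, c1, c2, c3 = 0.7, 1.5, 1.5, 0.5              # c3 weights the pull toward the swarm mean
    for _ in range(iters):
        mean = x.mean(axis=0)                       # simplified "mean search" term
        r1, r2, r3 = rng.random((3, n_particles, dim))
        v = w*v + c1*r1*(pbest - x) + c2*r2*(g - x) + c3*r3*(mean - x)
        x = x + v
        vals = np.apply_along_axis(f, 1, x)
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        g = pbest[pbest_val.argmin()].copy()
    return g, pbest_val.min()

sphere = lambda z: float((z**2).sum())
best, best_val = pso_with_mean_search(sphere)
print("best objective:", best_val)
```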

  14. Analysis Balance Parameter of Optimal Ramp metering

    Science.gov (United States)

    Li, Y.; Duan, N.; Yang, X.

    2018-05-01

    Ramp metering is a motorway control method that avoids the onset of congestion by limiting the access of ramp inflows onto the main carriageway of the motorway. The optimization model of ramp metering is developed based upon the cell transmission model (CTM). With the piecewise linear structure of the CTM, the corresponding motorway traffic optimization problem can be formulated as a linear programming (LP) problem. The LP problem can be solved to global optimality by established algorithms such as the simplex or interior-point methods. The commercial solver CPLEX is adopted in this study to solve the LP problem within reasonable computational time. The concept is illustrated through a case study of the United Kingdom M25 motorway. The optimal solution provides useful insight and guidance on how to manage motorway traffic in order to maximize the corresponding efficiency.
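
    A toy linear program in the spirit of the CTM-based formulation (the numbers and the structure are purely illustrative, and SciPy's HiGHS solver stands in for CPLEX): choose ramp inflows over a few time steps to maximize served ramp flow subject to downstream capacity and ramp arrival limits.

```python
import numpy as np
from scipy.optimize import linprog

# Illustrative data: 4 time steps, mainline demand m, ramp arrivals d, capacity C
m = np.array([1800., 1900., 2000., 1700.])   # veh/h already on the mainline
d = np.array([400., 500., 450., 300.])       # veh/h arriving at the ramp
C = 2200.                                    # downstream capacity (veh/h)

c = -np.ones(4)                              # maximize total metered inflow
A_ub = np.eye(4)                             # r_t <= C - m_t  (mainline capacity)
b_ub = C - m
bounds = [(0.0, float(di)) for di in d]      # 0 <= r_t <= ramp arrivals

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
print("optimal ramp inflows:", res.x)        # served ramp flow per step
```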

  15. Stochastic optimization: beyond mathematical programming

    CERN Multimedia

    CERN. Geneva

    2015-01-01

    Stochastic optimization, including bio-inspired algorithms, is gaining momentum in areas where more classical optimization algorithms fail to deliver satisfactory results, or simply cannot be directly applied. This presentation will introduce baseline stochastic optimization algorithms and illustrate their efficiency in different domains, from continuous non-convex problems to combinatorial optimization problems, and to problems for which a non-parametric formulation can help explore unforeseen possible solution spaces.

  16. Solution of optimization problems by means of the CASTEM 2000 computer code

    International Nuclear Information System (INIS)

    Charras, Th.; Millard, A.; Verpeaux, P.

    1991-01-01

    In the nuclear industry, it can be necessary to use robots for operation in contaminated environments. Most of the time, positioning of some parts of the robot must be very accurate, which highly depends on the structural (mass and stiffness) properties of its various components. Therefore, there is a need for a 'best' design, which is a compromise between technical (mechanical properties) and economical (material quantities, design and manufacturing cost) matters. This is precisely the aim of optimization techniques, in the framework of structural analysis. A general statement of this problem could be as follows: find the set of parameters which leads to the minimum of a given function and satisfies some constraints. For example, in the case of a robot component, the parameters can be some geometrical data (plate thickness, ...), the function can be the weight, and the constraints can consist of design criteria like a given stiffness and of some manufacturing technological constraints (minimum available thickness, etc.). For nuclear industry purposes, a robust method was chosen and implemented in the new generation computer code CASTEM 2000. The solution of the optimum design problem is obtained by solving a sequence of convex subproblems, in which the various functions (the function to minimize and the constraints) are transformed by convex linearization. The method has been programmed for continuous as well as discrete variables. Owing to the highly modular architecture of the CASTEM 2000 code, only one new operation had to be introduced: the solution of a subproblem with convex linearized functions, which is achieved by means of a conjugate gradient technique. All other operations were already available in the code, and the overall optimum design is realized by means of the Gibiane language. An example of application will be presented to illustrate the possibilities of the method. (author)

  17. Application of extreme value distribution function in the determination of standard meteorological parameters for nuclear power plants

    International Nuclear Information System (INIS)

    Jiang Haimei; Liu Xinjian; Qiu Lin; Li Fengju

    2014-01-01

    Based on the meteorological data from weather stations around several domestic nuclear power plants, the statistical results for extreme minimum temperatures, minimum central pressures of tropical cyclones and some other parameters are calculated using the extreme value type I distribution function (EV-I), the generalized extreme value distribution function (GEV) and the generalized Pareto distribution function (GP), respectively. The influence of different distribution functions and parameter estimation methods on the statistical results of extreme values is investigated. Results indicate that the generalized extreme value function has better applicability than the other two distribution functions in the determination of standard meteorological parameters for nuclear power plants. (authors)
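
    A minimal sketch of the GEV route on synthetic data, using the usual block-minima trick of negating minima so they can be fitted as maxima; an actual study would use the station records and compare the EV-I and GP fits as well.

```python
import numpy as np
from scipy.stats import genextreme

# Synthetic "annual minimum temperature" series (deg C); real studies use station records
rng = np.random.default_rng(1)
annual_min = -20 + 4 * rng.standard_normal(50)

# Fit a GEV to the negated minima (block minima -> block maxima convention)
shape, loc, scale = genextreme.fit(-annual_min)

# 50-year return level: the minimum exceeded on average once in 50 years
T = 50
return_level = -genextreme.ppf(1 - 1/T, shape, loc=loc, scale=scale)
print(f"estimated {T}-year extreme minimum: {return_level:.1f} deg C")
```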

  18. An analytical method for optimal design of MR valve structures

    International Nuclear Information System (INIS)

    Nguyen, Q H; Choi, S B; Lee, Y S; Han, M S

    2009-01-01

    This paper proposes an analytical methodology for the optimal design of a magnetorheological (MR) valve structure. The MR valve structure is constrained in a specific volume and the optimization problem identifies geometric dimensions of the valve structure that maximize the yield stress pressure drop of an MR valve or the yield stress damping force of an MR damper. In this paper, the single-coil and two-coil annular MR valve structures are considered. After describing the schematic configuration and operating principle of a typical MR valve and damper, a quasi-static model is derived based on the Bingham model of an MR fluid. The magnetic circuit of the valve and damper is then analyzed by applying Kirchhoff's law and the magnetic flux conservation rule. Based on quasi-static modeling and magnetic circuit analysis, the optimization problem of the MR valve and damper is built. In order to reduce the computational load, the optimization problem is simplified and a procedure to obtain the optimal solution of the simplified optimization problem is presented. The optimal solution of the simplified optimization problem of the MR valve structure constrained in a specific volume is then obtained and compared with the solution of the original optimization problem and the optimal solution obtained from the finite element method.

  19. Economically optimal thermal insulation

    Energy Technology Data Exchange (ETDEWEB)

    Berber, J.

    1978-10-01

    Exemplary calculations show that exact adherence to the requirements of the thermal insulation ordinance does not lead to an economically optimal solution. This holds regardless of the mode of financing. The economically optimal level of thermal insulation exceeds the values given in the thermal insulation ordinance.
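
    The underlying trade-off can be illustrated with a toy total-cost calculation (all prices and building parameters below are invented for illustration): the optimum balances the insulation investment against the heating cost over the evaluation period, and the cost-optimal thickness need not coincide with a prescribed minimum.

```python
from scipy.optimize import minimize_scalar

# Illustrative figures only (not from the report)
lam = 0.04            # insulation conductivity, W/(m K)
R0 = 0.6              # existing wall resistance, m^2 K / W
price_ins = 150.0     # insulation cost, EUR per m^2 of wall per m of thickness
degree_hours = 80e3   # heating degree-hours per year, K h
price_heat = 0.10     # EUR per kWh
years = 30            # evaluation period (no discounting, for simplicity)

def total_cost(t):
    # t = insulation thickness in metres; cost per m^2 of wall area
    U = 1.0 / (R0 + t / lam)                                  # W/(m^2 K)
    heating = U * degree_hours / 1000 * price_heat * years    # Wh -> kWh -> EUR
    return price_ins * t + heating

res = minimize_scalar(total_cost, bounds=(0.0, 0.5), method="bounded")
print(f"cost-optimal thickness: {res.x*100:.1f} cm")
```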

  20. ON PROBLEM OF REGIONAL WAREHOUSE AND TRANSPORT INFRASTRUCTURE OPTIMIZATION

    Directory of Open Access Journals (Sweden)

    I. Yu. Miretskiy

    2017-01-01

    Full Text Available The article suggests an approach to solving the problem of optimizing warehouse and transport infrastructure in a region. The task is to determine the optimal capacity and location of the supporting network of warehouses in the region, as well as the size, composition and location of motor fleets. Optimization is carried out using mathematical models of a regional warehouse network and a network of motor fleets. These models are formulated as mathematical programming problems with separable functions. Finding the exact optimal solution is complicated by the high dimensionality, the non-linearity of the functions, and the fact that some variables are constrained to be integer while others may take values only from a discrete set. Given these complications, the search for an exact solution was abandoned. The article suggests an approximate approach instead, which employs effective computational schemes for solving multidimensional optimization problems. We use the continuous relaxation of the original problem: an approximately optimal solution of the continuous relaxation is taken as an approximate solution of the original problem. The suggested method linearizes the continuous relaxation and applies the separable programming scheme and a branch-and-bound scheme. We describe the use of the simplex method for solving the linearized continuous relaxation of the original problem and the specifics of the branch-and-bound implementation. The paper shows the finiteness of the algorithm and recommends ways to accelerate the process of finding a solution.

  1. Origin of poor doping efficiency in solution processed organic semiconductors.

    Science.gov (United States)

    Jha, Ajay; Duan, Hong-Guang; Tiwari, Vandana; Thorwart, Michael; Miller, R J Dwayne

    2018-05-21

    Doping is an extremely important process where intentional insertion of impurities in semiconductors controls their electronic properties. In organic semiconductors, one of the convenient, but inefficient, ways of doping is the spin casting of a precursor mixture of components in solution, followed by solvent evaporation. Active control over this process holds the key to significant improvements over current poor doping efficiencies. Yet, an optimized control can only come from a detailed understanding of electronic interactions responsible for the low doping efficiencies. Here, we use two-dimensional nonlinear optical spectroscopy to examine these interactions in the course of the doping process by probing the solution mixture of doped organic semiconductors. A dopant accepts an electron from the semiconductor and the two ions form a duplex of interacting charges known as ion-pair complexes. Well-resolved off-diagonal peaks in the two-dimensional spectra clearly demonstrate the electronic connectivity among the ions in solution. This electronic interaction represents a well resolved electrostatically bound state, as opposed to a random distribution of ions. We developed a theoretical model to recover the experimental data, which reveals an unexpectedly strong electronic coupling of ∼250 cm⁻¹ with an intermolecular distance of ∼4.5 Å between ions in solution, which is approximately the expected distance in processed films. The fact that this relationship persists from solution to the processed film gives direct evidence that Coulomb interactions are retained from the precursor solution to the processed films. This memory effect renders the charge carriers equally bound also in the film and, hence, results in poor doping efficiencies. This new insight will help pave the way towards rational tailoring of the electronic interactions to improve doping efficiencies in processed organic semiconductor thin films.

  2. The New HARSHAW Extremity Dosimeters for Gamma and Beta Ray Monitoring

    International Nuclear Information System (INIS)

    Fellinger, J.; Majewski, M.; Rotunda, J.; Tawi, R.

    1997-01-01

    Large personnel dosimetry services providing extremity monitoring with finger rings based on thermoluminescent detectors have long been looking for a practical method for automated reading, including automated identification of the detectors. None of the existing methods is well suited to medical applications, particularly surgery, because cold sterilization is usually impossible. Bicron Radiation Measurement Products, in co-operation with the Austrian Research Centre Seibersdorf, developed a new finger ring dosimeter, DXT-RAD, as a fast and economic solution for fully automated evaluation of extremity dosemeters. (authors)

  3. A SURVEY ON OPTIMIZATION APPROACHES TO SEMANTIC SERVICE DISCOVERY TOWARDS AN INTEGRATED SOLUTION

    Directory of Open Access Journals (Sweden)

    Chellammal Surianarayanan

    2012-07-01

    Full Text Available The process of semantic service discovery using an ontology reasoner such as Pellet is time consuming. This restricts the usage of web services in real-time applications with dynamic composition requirements. As the performance of semantic service discovery is crucial in service composition, it should be optimized. Various optimization methods have been proposed to improve the performance of semantic discovery. In this work, we investigate the existing optimization methods and broadly classify optimization mechanisms into two categories, namely optimization by efficient reasoning and optimization by efficient matching. Optimization by efficient matching is further classified into subcategories such as optimization by clustering, optimization by inverted indexing, optimization by caching, optimization by hybrid methods, optimization by efficient data structures and optimization by efficient matching algorithms. With a detailed study of the different methods, an integrated optimization infrastructure along with a matching method is proposed to improve the performance of the semantic matching component. To achieve better optimization, the proposed method integrates the effects of caching, clustering and indexing. Theoretical aspects of performance evaluation of the proposed method are discussed.

  4. Tractable Pareto Optimization of Temporal Preferences

    Science.gov (United States)

    Morris, Robert; Morris, Paul; Khatib, Lina; Venable, Brent

    2003-01-01

    This paper focuses on temporal constraint problems where the objective is to optimize a set of local preferences for when events occur. In previous work, a subclass of these problems has been formalized as a generalization of Temporal CSPs, and a tractable strategy for optimization has been proposed, where global optimality is defined as maximizing the minimum of the component preference values. This criterion for optimality, which we call 'Weakest Link Optimization' (WLO), is known to have limited practical usefulness because solutions are compared only on the basis of their worst value; thus, there is no requirement to improve the other values. To address this limitation, we introduce a new algorithm that re-applies WLO iteratively in a way that leads to improvement of all the values. We show the value of this strategy by proving that, with suitable preference functions, the resulting solutions are Pareto Optimal.
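
    The selection rule itself is easy to state on a finite set of candidate preference vectors: plain WLO compares only the worst entries, while re-applying it to the remaining entries amounts to a leximin comparison. A small sketch (the temporal-CSP machinery of the paper is not reproduced here):

```python
def leximin_key(prefs):
    # Sort ascending: compare the worst value first, then the next worst, ...
    return tuple(sorted(prefs))

candidates = {
    "plan_a": [0.4, 0.9, 0.9],
    "plan_b": [0.4, 0.5, 1.0],   # same worst value as plan_a, worse second-worst
    "plan_c": [0.3, 1.0, 1.0],   # best upper values, but the worst minimum
}

wlo_value = max(min(v) for v in candidates.values())
wlo_ties = [k for k, v in candidates.items() if min(v) == wlo_value]
iterated = max(candidates, key=lambda k: leximin_key(candidates[k]))

print("WLO alone cannot separate:", wlo_ties)          # ['plan_a', 'plan_b']
print("iterated WLO (leximin) picks:", iterated)       # 'plan_a'
```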

  5. Efficient Reanalysis Procedures in Structural Topology Optimization

    DEFF Research Database (Denmark)

    Amir, Oded

    This thesis examines efficient solution procedures for the structural analysis problem within topology optimization. The research is motivated by the observation that when the nested approach to structural optimization is applied, most of the computational effort is invested in repeated solutions...... on approximate reanalysis. For cases where memory limitations require the utilization of iterative equation solvers, we suggest efficient procedures based on alternative termination criteria for such solvers. These approaches are tested on two- and three-dimensional topology optimization problems including...

  6. A universal optimization strategy for ant colony optimization algorithms based on the Physarum-inspired mathematical model

    International Nuclear Information System (INIS)

    Zhang, Zili; Gao, Chao; Liu, Yuxin; Qian, Tao

    2014-01-01

    Ant colony optimization (ACO) algorithms often fall into local optimal solutions and have low search efficiency when solving the travelling salesman problem (TSP). To address these shortcomings, this paper proposes a universal optimization strategy for updating the pheromone matrix in ACO algorithms. The new strategy takes advantage of the unique feature that critical paths are reserved in the process of evolving adaptive networks in the Physarum-inspired mathematical model (PMM). The optimized algorithms, denoted PMACO algorithms, enhance the amount of pheromone on the critical paths and promote the exploitation of the optimal solution. Experimental results on synthetic and real networks show that the PMACO algorithms are more efficient and robust than the traditional ACO algorithms and are adaptable to solving the TSP with single or multiple objectives. Meanwhile, we further analyse the influence of parameters on the performance of the PMACO algorithms. Based on these analyses, the best values of these parameters are worked out for the TSP. (paper)
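
    A minimal sketch of the pheromone-update step only, with an extra deposit on a supplied set of "critical" edges standing in for the Physarum-derived critical paths (tour construction and the PMM itself are omitted; all constants are illustrative):

```python
import numpy as np

def update_pheromone(tau, tours, lengths, critical_edges,
                     rho=0.1, Q=1.0, boost=2.0):
    """Evaporate, then deposit pheromone; edges on the (assumed) critical
    paths receive an extra multiplicative boost, a simplified stand-in for
    the Physarum-derived enhancement."""
    tau *= (1.0 - rho)                                   # evaporation
    for tour, L in zip(tours, lengths):
        deposit = Q / L
        for i, j in zip(tour, tour[1:] + tour[:1]):      # edges of the closed tour
            extra = boost if (i, j) in critical_edges or (j, i) in critical_edges else 1.0
            tau[i, j] += extra * deposit
            tau[j, i] = tau[i, j]                        # symmetric TSP
    return tau

# Toy usage: 4 cities, two sampled tours, two assumed critical edges
tau = np.ones((4, 4))
tours = [[0, 1, 2, 3], [0, 2, 1, 3]]
lengths = [10.0, 12.0]
critical = {(0, 1), (2, 3)}
tau = update_pheromone(tau, tours, lengths, critical)
print(tau.round(3))
```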

  7. Conventional treatment planning optimization using simulated annealing

    International Nuclear Information System (INIS)

    Morrill, S.M.; Langer, M.; Lane, R.G.

    1995-01-01

    Purpose: Simulated annealing (SA) allows for the implementation of realistic biological and clinical cost functions into treatment plan optimization. However, a drawback to the clinical implementation of SA optimization is that large numbers of beams appear in the final solution, some with insignificant weights, preventing the delivery of these optimized plans using conventional (limited to a few coplanar beams) radiation therapy. A preliminary study suggested two promising algorithms for restricting the number of beam weights. The purpose of this investigation was to compare these two algorithms using our current SA algorithm, with the aim of producing an algorithm to allow clinically useful radiation therapy treatment planning optimization. Method: Our current SA algorithm, Variable Stepsize Generalized Simulated Annealing (VSGSA), was modified with two algorithms to restrict the number of beam weights in the final solution. The first algorithm selected combinations of a fixed number of beams from the complete solution space at each iterative step of the optimization process. The second reduced the allowed number of beams by a factor of two at periodic steps during the optimization process until only the specified number of beams remained. Results of optimization of beam weights and angles using these algorithms were compared using a standard cadre of abdominal cases. The solution space was defined as a set of 36 custom-shaped open and wedged-filtered fields at 10 deg. increments with a constant target volume margin of 1.2 cm. For each case, a clinically accepted cost function, the minimum tumor dose, was maximized subject to a set of normal tissue binary dose-volume constraints. For this study, the optimized plan was restricted to four (4) fields suitable for delivery with conventional therapy equipment. Results: The table gives the mean value of the minimum target dose obtained for each algorithm averaged over 5 different runs and the comparable manual treatment
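
    For readers unfamiliar with the basic machinery, a generic simulated-annealing loop over non-negative beam weights on a toy quadratic dose objective is sketched below; it is only a rough stand-in for VSGSA and uses none of the clinical cost functions or beam-restriction schemes described above.

```python
import numpy as np

def simulated_annealing(cost, x0, step=0.1, T0=1.0, cooling=0.995, iters=5000, seed=0):
    """Perturb one variable at a time; accept worse moves with Boltzmann probability."""
    rng = np.random.default_rng(seed)
    x, fx = x0.copy(), cost(x0)
    best, fbest, T = x.copy(), fx, T0
    for _ in range(iters):
        cand = x.copy()
        i = rng.integers(len(x))
        cand[i] = max(0.0, cand[i] + step * rng.standard_normal())  # weights stay >= 0
        fc = cost(cand)
        if fc < fx or rng.random() < np.exp(-(fc - fx) / T):
            x, fx = cand, fc
            if fx < fbest:
                best, fbest = x.copy(), fx
        T *= cooling
    return best, fbest

# Toy cost: match a prescribed "dose" at 20 points using 6 beam weights
rng = np.random.default_rng(1)
D = rng.random((20, 6))                    # dose per unit weight, 20 points x 6 beams
prescription = np.full(20, 1.0)
cost = lambda w: float(((D @ w - prescription) ** 2).sum())
w_opt, c_opt = simulated_annealing(cost, x0=np.full(6, 0.5))
print("final cost:", round(c_opt, 4))
```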

  8. Particle swarm optimization: an alternative in marine propeller optimization?

    Science.gov (United States)

    Vesting, F.; Bensow, R. E.

    2018-01-01

    This article deals with improving and evaluating the performance of two evolutionary algorithm approaches for automated engineering design optimization. Here a marine propeller design with constraints on cavitation nuisance is the intended application. For this purpose, the particle swarm optimization (PSO) algorithm is adapted for multi-objective optimization and constraint handling for use in propeller design. Three PSO algorithms are developed and tested for the optimization of four commercial propeller designs for different ship types. The results are evaluated by interrogating the generation medians and the Pareto front development. The same propellers are also optimized utilizing the well established NSGA-II genetic algorithm to provide benchmark results. The authors' PSO algorithms deliver comparable results to NSGA-II, but converge earlier and enhance the solution in terms of constraints violation.

  9. Quantum Heterogeneous Computing for Satellite Positioning Optimization

    Science.gov (United States)

    Bass, G.; Kumar, V.; Dulny, J., III

    2016-12-01

    Hard optimization problems occur in many fields of academic study and practical situations. We present results in which quantum heterogeneous computing is used to solve a real-world optimization problem: satellite positioning. Optimization problems like this can scale very rapidly with problem size, and become unsolvable with traditional brute-force methods. Typically, such problems have been approximately solved with heuristic approaches; however, these methods can take a long time to calculate and are not guaranteed to find optimal solutions. Quantum computing offers the possibility of producing significant speed-up and improved solution quality. There are now commercially available quantum annealing (QA) devices that are designed to solve difficult optimization problems. These devices have 1000+ quantum bits, but they have significant hardware size and connectivity limitations. We present a novel heterogeneous computing stack that combines QA and classical machine learning and allows the use of QA on problems larger than the quantum hardware could solve in isolation. We begin by analyzing the satellite positioning problem with a heuristic solver, the genetic algorithm. The classical computer's comparatively large available memory can explore the full problem space and converge to a solution relatively close to the true optimum. The QA device can then evolve directly to the optimal solution within this more limited space. Preliminary experiments, using the Quantum Monte Carlo (QMC) algorithm to simulate QA hardware, have produced promising results. Working with problem instances with known global minima, we find a solution within 8% in a matter of seconds, and within 5% in a few minutes. Future studies include replacing QMC with commercially available quantum hardware and exploring more problem sets and model parameters. Our results have important implications for how heterogeneous quantum computing can be used to solve difficult optimization problems in any

  10. MINLP solution for an optimal isotope separation system

    International Nuclear Information System (INIS)

    Boisset-Baticle, L.; Latge, C.; Joulia, X.

    1994-01-01

    This paper deals with the design of cryogenic distillation systems for the separation of hydrogen isotopes in a thermonuclear fusion process. The design must minimize the tritium inventory in the distillation columns and satisfy the separation requirements. This induces the optimization of both the structure and the operating conditions of the columns. Such a problem is solved by use of a Mixed-Integer NonLinear Programming (MINLP) tool coupled to a process simulator. The MINLP procedure is based on the iterative and alternating treatment of two subproblems: an NLP problem, which is solved by a reduced-gradient method, and a MILP problem, solved with a Branch and Bound method coupled to a simplex algorithm. The formulation of the problem and the choice of an appropriate superstructure are detailed here, and results are finally presented concerning the optimal design of a specific isotope separation system. (author)

  11. Optimized design of low energy buildings

    DEFF Research Database (Denmark)

    Rudbeck, Claus Christian; Esbensen, Peter Kjær; Svendsen, Sv Aa Højgaard

    1999-01-01

    concern which can be seen during the construction of new buildings. People want energy-friendly solutions, but they should be economically optimized. An economically optimized building design with respect to energy consumption is the design with the lowest total cost (investment plus operational cost over its...... to evaluate different separate solutions when they interact in the building. When trying to optimize several parameters there is a need for a method which will show the correct price-performance of each part of a building under design. The problem with not having such a method will first be shown......

  12. Setting value optimization method in integration for relay protection based on improved quantum particle swarm optimization algorithm

    Science.gov (United States)

    Yang, Guo Sheng; Wang, Xiao Yang; Li, Xue Dong

    2018-03-01

    With the establishment of the integrated model of relay protection and the expanding scale of the power system, the global setting and optimization of relay protection is an extremely difficult task. This paper presents an application of an improved quantum particle swarm optimization algorithm to the global optimization of relay protection settings, taking inverse-time overcurrent protection as an example; reliability, selectivity, speed of operation and flexibility of the relay protection are selected as the four requirements used to establish the optimization targets, and the protection setting values of the whole system are optimized. Finally, for an actual power system, the optimized setting values obtained by the proposed method are compared with those of the standard particle swarm algorithm. The results show that the improved quantum particle swarm optimization algorithm has strong search ability and good robustness, and it is suitable for optimizing setting values in the relay protection of the whole power system.

  13. Optimization of actinides trace precipitation on diamond/Si PIN sensor for alpha-spectrometry in aqueous solution

    International Nuclear Information System (INIS)

    Tran, Q.T.; Pomorski, M.; Sanoit, J. de; Mer-Calfati, C.; Scorsone, E.; Bergonzo, P.

    2014-01-01

    We report here on a new approach for the detection and identification of actinides (Pu, Am, Cm, etc.). This approach is based on the use of a novel device consisting of a boron doped nanocrystalline diamond film deposited onto a silicon PIN diode alpha particle sensor. The actinides concentration is probed in situ in the measuring solution using a method based on electro-precipitation that can be carried out via the use of a doped diamond electrode. The device allows probing directly both alpha-particle activity and energy in liquid solutions. In this work, we address the optimization of the actinides electro-precipitation step onto the sensor. The approach is based on fine tuning the pH of the electrolyte, the nature of the supporting electrolytes (Na_2SO_4 or NaNO_3), the electrochemical cell geometry, the current density value, the precipitation duration as well as the sensor surface area. The deposition efficiency was significantly improved, with values reaching for instance up to 81.5% in the case of electro-precipitation of 5.96 Bq of ²⁴¹Am on the sensor. The diamond/silicon sensor can be reused after measurement by performing a fast decontamination step at high yield (99%), where the electro-precipitated ²⁴¹Am layer is quickly removed by applying an anodic current (+2 mA·cm⁻² for 10 minutes) to the boron doped nanocrystalline diamond electrode in aqueous solution. This study demonstrated that alpha-particle spectroscopic measurements could be made feasible for the first time in aqueous solutions after an electrochemical deposition process, with theoretical detection thresholds as low as 0.24 Bq·L⁻¹. We believe that this approach can be of very high interest for alpha-particle spectroscopy in liquids for actinides trace detection. (authors)

  14. Extremely Efficient Design of Organic Thin Film Solar Cells via Learning-Based Optimization

    Directory of Open Access Journals (Sweden)

    Mine Kaya

    2017-11-01

    Full Text Available Design of efficient thin film photovoltaic (PV) cells requires optical power absorption to be computed inside a nano-scale structure of photovoltaic, dielectric and plasmonic materials. Calculating power absorption requires Maxwell's electromagnetic equations, which are solved using numerical methods such as finite difference time domain (FDTD). The computational cost of thin film PV cell design and optimization is therefore high, due to the successive FDTD simulations. This cost can be reduced using a surrogate-based optimization procedure. In this study, we deploy neural networks (NNs) to model optical absorption in organic PV structures. We use the corresponding surrogate-based optimization procedure to maximize light trapping inside thin film organic cells infused with metallic particles. Metallic particles are known to induce plasmonic effects at the metal–semiconductor interface, thus increasing absorption. However, a rigorous design procedure is required to achieve the best performance within known design guidelines. As a result of using NNs to model thin film solar absorption, the required time to complete optimization is decreased by more than five times. The obtained NN model is found to be very reliable. The optimization procedure results in absorption enhancement greater than 200%. Furthermore, we demonstrate that once a reliable surrogate model such as the developed NN is available, it can be used for alternative analyses on the proposed design, such as uncertainty analysis (e.g., fabrication error).
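
    A minimal surrogate-based loop of the same flavour, with a synthetic two-parameter "absorption" function standing in for the FDTD solver, scikit-learn's MLPRegressor as the neural network, and a dense random search over the surrogate in place of a full optimizer:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

def expensive_absorption(x):
    """Stand-in for an FDTD simulation: a smooth synthetic response over two
    normalized design parameters (e.g. particle radius and spacing)."""
    return np.sin(3 * x[..., 0]) * np.cos(2 * x[..., 1]) + 0.5 * x[..., 0]

# 1) Sample the "simulator" at a modest number of design points
X_train = rng.random((80, 2))
y_train = expensive_absorption(X_train)

# 2) Fit the neural-network surrogate
surrogate = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=5000,
                         random_state=0).fit(X_train, y_train)

# 3) Optimize over the cheap surrogate instead of the simulator
X_cand = rng.random((20000, 2))
best = X_cand[surrogate.predict(X_cand).argmax()]
print("surrogate optimum at:", best.round(3),
      "true value there:", round(float(expensive_absorption(best)), 3))
```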

  15. Multi-objective optimization of inverse planning for accurate radiotherapy

    International Nuclear Information System (INIS)

    Cao Ruifen; Pei Xi; Cheng Mengyun; Li Gui; Hu Liqin; Wu Yican; Jing Jia; Li Guoli

    2011-01-01

    Motivated by the multi-objective character of inverse planning in accurate radiotherapy, the multi-objective optimization of inverse planning based on the Pareto solution set was studied in this paper. Firstly, the clinical requirements of a treatment plan were transformed into a multi-objective optimization problem with multiple constraints. Then, the fast and elitist multi-objective Non-dominated Sorting Genetic Algorithm (NSGA-II) was introduced to optimize the problem. A clinical example was tested using this method. The results show that the obtained set of non-dominated solutions was uniformly distributed and the corresponding dose distribution of each solution not only approached the expected dose distribution, but also met the dose-volume constraints. It was indicated that the clinical requirements were better satisfied using the method and the planner could select the optimal treatment plan from the non-dominated solution set. (authors)
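
    The core of any such Pareto-based approach is the non-dominated test; a minimal Pareto-front filter for a minimization problem is sketched below (NSGA-II additionally ranks successive fronts and uses crowding distance, which is omitted here).

```python
import numpy as np

def pareto_front(F):
    """Return indices of non-dominated rows of F (all objectives minimized)."""
    n = len(F)
    keep = np.ones(n, dtype=bool)
    for i in range(n):
        # j dominates i if it is no worse in every objective and better in at least one
        dominated = np.all(F <= F[i], axis=1) & np.any(F < F[i], axis=1)
        if dominated.any():
            keep[i] = False
    return np.flatnonzero(keep)

# Toy bi-objective example: (target underdose, organ-at-risk dose), both minimized
F = np.array([[0.10, 0.80],
              [0.20, 0.50],
              [0.22, 0.60],   # dominated by the plan above it
              [0.30, 0.30],
              [0.25, 0.45]])
print("non-dominated plans:", pareto_front(F))
```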

  16. Stress fractures of the ribs and upper extremities: causation, evaluation, and management.

    Science.gov (United States)

    Miller, Timothy L; Harris, Joshua D; Kaeding, Christopher C

    2013-08-01

    Stress fractures are common troublesome injuries in athletes and non-athletes. Historically, stress fractures have been thought to predominate in the lower extremities secondary to the repetitive stresses of impact loading. Stress injuries of the ribs and upper extremities are much less common and often unrecognized. Consequently, these injuries are often omitted from the differential diagnosis of rib or upper extremity pain. Given the infrequency of this diagnosis, few case reports or case series have reported on their precipitating activities and common locations. Appropriate evaluation for these injuries requires a thorough history and physical examination. Radiographs may be negative early, requiring bone scintigraphy or MRI to confirm the diagnosis. Nonoperative and operative treatment recommendations are made based on location, injury classification, and causative activity. An understanding of the most common locations of upper extremity stress fractures and their associated causative activities is essential for prompt diagnosis and optimal treatment.

  17. Pitfall in quantum mechanical/molecular mechanical molecular dynamics simulation of small solutes in solution.

    Science.gov (United States)

    Hu, Hao; Liu, Haiyan

    2013-05-30

    Developments in computing hardware and algorithms have made direct molecular dynamics simulation with the combined quantum mechanical/molecular mechanical methods affordable for small solute molecules in solution, in which much improved accuracy can be obtained via the quantum mechanical treatment of the solute molecule and even sometimes water molecules in the first solvation shell. However, unlike the conventional molecular mechanical simulations of large molecules, e.g., proteins, in solutions, special care must be taken in the technical details of the simulation, including the thermostat of the solute/solvent system, so that the conformational space of the solute molecules can be properly sampled. We show here that the common setup for classical molecular mechanical molecular dynamics simulations, such as the Berendsen or single Nose-Hoover thermostat, and/or rigid water models could lead to pathological sampling of the solutes' conformation. In the extreme example of a methanol molecule in aqueous solution, improper and sluggish setups could generate two peaks in the distribution of the O-H bond length. We discuss the factors responsible for this somewhat unexpected result and evoke a simple and ancient technical fix-up to resolve this problem.

  18. Optimal control of hybrid vehicles

    CERN Document Server

    Jager, Bram; Kessels, John

    2013-01-01

    Optimal Control of Hybrid Vehicles provides a description of power train control for hybrid vehicles. The background, environmental motivation and control challenges associated with hybrid vehicles are introduced. The text includes mathematical models for all relevant components in the hybrid power train. The power split problem in hybrid power trains is formally described and several numerical solutions detailed, including dynamic programming and a novel solution for state-constrained optimal control problems based on Pontryagin’s maximum principle. Real-time-implementable strategies that can approximate the optimal solution closely are dealt with in depth. Several approaches are discussed and compared, including a state-of-the-art strategy which is adaptive for vehicle conditions like velocity and mass. Two case studies are included in the book: a control strategy for a micro-hybrid power train; and experimental results obtained with a real-time strategy implemented in...

  19. Thermodynamic analysis and performance optimization of an ORC (Organic Rankine Cycle) system for multi-strand waste heat sources in petroleum refining industry

    International Nuclear Information System (INIS)

    Song, Jian; Li, Yan; Gu, Chun-wei; Zhang, Li

    2014-01-01

    Low-grade waste heat source accounts for a large part of the total industrial waste heat, which cannot be efficiently recovered. The ORC (Organic Rankine Cycle) system has been proved to be a promising solution for the utilization of low-grade heat sources. It is evident that there might be several waste heat sources distributing in different temperature levels in one industry unit, and the entire recovery system will be extremely large and complex if the different heat sources are utilized one by one through several independent ORC subsystems. This paper aims to design and optimize a comprehensive ORC system to recover multi-strand waste heat sources in Shijiazhuang Refining and Chemical Company in China, involving defining suitable working fluids and operating parameters. Thermal performance is a first priority criterion for the system, and system simplicity, technological feasibility and economic factors are considered during optimization. Four schemes of the recovery system are presented in continuous optimization progress. By comparison, the scheme of dual integrated subsystems with R141B as a working fluid is optimal. Further analysis is implemented from the view of economic factors and off-design conditions. The analytical method and optimization progress presented can be widely applied in similar multi-strand waste heat sources recovery. - Highlights: • This paper focuses on the recovery of multi-strand waste heat sources. • ORC technology is used as a promising solution for the recovery. • Thermal performance, system simplicity and economic factors are considered

  20. Hierarchical Solution of the Traveling Salesman Problem with Random Dyadic Tilings

    Science.gov (United States)

    Kalmár-Nagy, Tamás; Bak, Bendegúz Dezső

    We propose a hierarchical heuristic approach for solving the Traveling Salesman Problem (TSP) in the unit square. The points are partitioned with a random dyadic tiling and clusters are formed by the points located in the same tile. Each cluster is represented by its geometrical barycenter and a “coarse” TSP solution is calculated for these barycenters. Midpoints are placed at the middle of each edge in the coarse solution. Near-optimal (or optimal) minimum tours are computed for each cluster. The tours are concatenated using the midpoints yielding a solution for the original TSP. The method is tested on random TSPs (independent, identically distributed points in the unit square) up to 10,000 points as well as on a popular benchmark problem (att532 — coordinates of 532 American cities). Our solutions are 8-13% longer than the optimal ones. We also present an optimization algorithm for the partitioning to improve our solutions. This algorithm further reduces the solution errors (by several percent using 1000 iteration steps). The numerical experiments demonstrate the viability of the approach.
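
    A compact sketch of the idea, simplified in several ways: a fixed square grid replaces the random dyadic tiling, greedy nearest-neighbour tours replace the near-optimal cluster tours, and the midpoint stitching is omitted, so only the cluster / coarse-tour / concatenate structure is shown.

```python
import numpy as np

def nn_tour(pts, start=0):
    """Greedy nearest-neighbour tour over a small point set; returns local indices."""
    todo = list(range(len(pts)))
    tour = [todo.pop(start)]
    while todo:
        last = pts[tour[-1]]
        nxt = min(todo, key=lambda j: np.hypot(*(pts[j] - last)))
        todo.remove(nxt)
        tour.append(nxt)
    return tour

def hierarchical_tsp(points, k=4):
    """Cluster by a k x k grid, tour the cluster barycenters, then tour each
    cluster and concatenate the fine tours in barycenter-tour order."""
    cell = np.minimum((points * k).astype(int), k - 1)
    keys = cell[:, 0] * k + cell[:, 1]
    clusters = {key: np.flatnonzero(keys == key) for key in np.unique(keys)}
    bary = np.array([points[idx].mean(axis=0) for idx in clusters.values()])
    coarse = nn_tour(bary)                      # "coarse" tour over barycenters
    order = list(clusters.values())
    route = []
    for c in coarse:                            # fine tours, concatenated
        idx = order[c]
        route.extend(idx[nn_tour(points[idx])])
    return route

rng = np.random.default_rng(0)
pts = rng.random((200, 2))
route = hierarchical_tsp(pts)
length = sum(np.hypot(*(pts[route[i]] - pts[route[(i + 1) % len(route)]]))
             for i in range(len(route)))
print("tour length:", round(length, 2))
```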

  1. Optimization Solution of Troesch’s and Bratu’s Problems of Ordinary Type Using Novel Continuous Genetic Algorithm

    Directory of Open Access Journals (Sweden)

    Zaer Abo-Hammour

    2014-01-01

    Full Text Available A new kind of optimization technique, namely the continuous genetic algorithm, is presented in this paper for numerically approximating the solutions of Troesch’s and Bratu’s problems. The underlying idea of the method is to convert the two differential problems into discrete versions by replacing each of the second derivatives by an appropriate difference quotient approximation. The new method has the following characteristics. First, it does not resort to more advanced mathematical tools; that is, the algorithm is simple to understand and implement and should thus be easily accepted in the mathematical and physical application fields. Second, the algorithm is of a global nature in terms of the solutions obtained as well as its ability to solve other mathematical and physical problems. Third, the proposed methodology has an implicit parallel nature which points to its implementation on parallel machines. The algorithm is tested on different versions of Troesch’s and Bratu’s problems. Experimental results show that the proposed algorithm is effective, straightforward, and simple.
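
    For Bratu's problem the discretize-and-optimize idea can be sketched in a few lines; SciPy's differential evolution (another population-based evolutionary method) stands in here for the paper's continuous genetic algorithm:

```python
import numpy as np
from scipy.optimize import differential_evolution

# Bratu's problem: u'' + lam * exp(u) = 0, u(0) = u(1) = 0, discretized by
# central differences; the evolutionary optimizer minimizes the squared
# residual of the discrete equations.
lam, n = 1.0, 9                       # 9 interior grid points
h = 1.0 / (n + 1)

def residual_norm(u_inner):
    u = np.concatenate(([0.0], u_inner, [0.0]))         # boundary conditions
    r = (u[:-2] - 2 * u[1:-1] + u[2:]) / h**2 + lam * np.exp(u[1:-1])
    return float((r ** 2).sum())

result = differential_evolution(residual_norm, bounds=[(0.0, 1.0)] * n, seed=0)
u_mid = result.x[n // 2]
print(f"residual: {result.fun:.2e},  u(0.5) ≈ {u_mid:.4f}")   # about 0.14 for lam = 1
```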

  2. Rogue wave solutions of the nonlinear Schrödinger equation with ...

    Indian Academy of Sciences (India)

    In this paper, a unified formula of a series of rogue wave solutions for the standard ... rating a noise-sensitive nonlinear process in which extremely broadband radiations are ..... Based on [21,24], the higher-order rational solution of eq. (15) are.

  3. Biogeography-Based Optimization with Orthogonal Crossover

    Directory of Open Access Journals (Sweden)

    Quanxi Feng

    2013-01-01

    Full Text Available Biogeography-based optimization (BBO) is a new biogeography-inspired, population-based algorithm, which mainly uses a migration operator to share information among solutions. Similar to the crossover operator in genetic algorithms, the migration operator is a probabilistic operator and only generates vertices of the hyperrectangle defined by the emigration and immigration vectors. Therefore, the exploration ability of BBO may be limited. The orthogonal crossover operator with quantization technique (QOX) is based on orthogonal design and can generate representative solutions in the solution space. In this paper, a BBO variant is presented by embedding the QOX operator in the BBO algorithm. Additionally, a modified migration equation is used to improve the population diversity. Several experiments are conducted on 23 benchmark functions. Experimental results show that the proposed algorithm is capable of locating the optimal or close-to-optimal solution. Comparisons with other variants of BBO algorithms and state-of-the-art orthogonal-based evolutionary algorithms demonstrate that our proposed algorithm possesses a faster global convergence rate, higher-precision solutions, and stronger robustness. Finally, the analysis of the performance of QOX indicates that QOX plays a key role in the proposed algorithm.

  4. Utilizing Problem Structure in Optimization of Radiation Therapy

    International Nuclear Information System (INIS)

    Carlsson, Fredrik

    2008-04-01

    In this thesis, optimization approaches for intensity-modulated radiation therapy are developed and evaluated with focus on numerical efficiency and treatment delivery aspects. The first two papers deal with strategies for solving fluence map optimization problems efficiently while avoiding solutions with jagged fluence profiles. The last two papers concern optimization of step-and-shoot parameters with emphasis on generating treatment plans that can be delivered efficiently and accurately. In the first paper, the problem dimension of a fluence map optimization problem is reduced through a spectral decomposition of the Hessian of the objective function. The weights of the eigenvectors corresponding to the p largest eigenvalues are introduced as optimization variables, and the impact on the solution of varying p is studied. Including only a few eigenvector weights results in faster initial decrease of the objective value, but with an inferior solution, compared to optimization of the bixel weights. An approach combining eigenvector weights and bixel weights produces improved solutions, but at the expense of the pre-computational time for the spectral decomposition. So-called iterative regularization is performed on fluence map optimization problems in the second paper. The idea is to find regular solutions by utilizing an optimization method that is able to find near-optimal solutions with non-jagged fluence profiles in few iterations. The suitability of a quasi-Newton sequential quadratic programming method is demonstrated by comparing the treatment quality of deliverable step-and-shoot plans, generated through leaf sequencing with a fixed number of segments, for different number of bixel-weight iterations. A conclusion is that over-optimization of the fluence map optimization problem prior to leaf sequencing should be avoided. An approach for dynamically generating multileaf collimator segments using a column generation approach combined with optimization of

  5. Topics in computational linear optimization

    DEFF Research Database (Denmark)

    Hultberg, Tim Helge

    2000-01-01

    Linear optimization has been an active area of research ever since the pioneering work of G. Dantzig more than 50 years ago. This research has produced a long sequence of practical as well as theoretical improvements of the solution techniques avilable for solving linear optimization problems...... of high quality solvers and the use of algebraic modelling systems to handle the communication between the modeller and the solver. This dissertation features four topics in computational linear optimization: A) automatic reformulation of mixed 0/1 linear programs, B) direct solution of sparse unsymmetric...... systems of linear equations, C) reduction of linear programs and D) integration of algebraic modelling of linear optimization problems in C++. Each of these topics is treated in a separate paper included in this dissertation. The efficiency of solving mixed 0-1 linear programs by linear programming based...

  6. Neighboring Optimal Aircraft Guidance in a General Wind Environment

    Science.gov (United States)

    Jardin, Matthew R. (Inventor)

    2003-01-01

    Method and system for determining an optimal route for an aircraft moving between first and second waypoints in a general wind environment. A selected first wind environment is analyzed for which a nominal solution can be determined. A second wind environment is then incorporated; and a neighboring optimal control (NOC) analysis is performed to estimate an optimal route for the second wind environment. In particular examples with flight distances of 2500 and 6000 nautical miles in the presence of constant or piecewise linearly varying winds, the difference in flight time between a nominal solution and an optimal solution is 3.4 to 5 percent. Constant or variable winds and aircraft speeds can be used. Updated second wind environment information can be provided and used to obtain an updated optimal route.

  7. Investigating NARCCAP Precipitation Extremes via Bivariate Extreme Value Theory (Invited)

    Science.gov (United States)

    Weller, G. B.; Cooley, D. S.; Sain, S. R.; Bukovsky, M. S.; Mearns, L. O.

    2013-12-01

    We introduce methodology from statistical extreme value theory to examine the ability of reanalysis-driven regional climate models to simulate past daily precipitation extremes. Going beyond a comparison of summary statistics such as 20-year return values, we study whether the most extreme precipitation events produced by climate model simulations exhibit correspondence to the most extreme events seen in observational records. The extent of this correspondence is formulated via the statistical concept of tail dependence. We examine several case studies of extreme precipitation events simulated by the six models of the North American Regional Climate Change Assessment Program (NARCCAP) driven by NCEP reanalysis. It is found that the NARCCAP models generally reproduce daily winter precipitation extremes along the Pacific coast quite well; in contrast, simulation of past daily summer precipitation extremes in a central US region is poor. Some differences in the strength of extremal correspondence are seen in the central region between models which employ spectral nudging and those which do not. We demonstrate how these techniques may be used to draw a link between extreme precipitation events and large-scale atmospheric drivers, as well as to downscale extreme precipitation simulated by a future run of a regional climate model. Specifically, we examine potential future changes in the nature of extreme precipitation along the Pacific coast produced by the pineapple express (PE) phenomenon. A link between extreme precipitation events and a "PE Index" derived from North Pacific sea-surface pressure fields is found. This link is used to study PE-influenced extreme precipitation produced by a future-scenario climate model run.

  8. Multiobjective hyper heuristic scheme for system design and optimization

    Science.gov (United States)

    Rafique, Amer Farhan

    2012-11-01

    As system design is becoming more and more multifaceted, integrated, and complex, the traditional single-objective optimization approaches to optimal design are becoming less and less efficient and effective. Single-objective optimization methods present a unique optimal solution, whereas multiobjective methods present a Pareto front. The foremost intent is to produce a reasonably distributed Pareto-optimal solution set, independent of the problem instance, through the multiobjective scheme. Another objective of the intended approach is to improve the worthiness of the outputs of the complex engineering system design process at the conceptual design phase. The process is automated in order to provide the system designer with the possibility of studying and analyzing a large number of possible solutions in a short time. This article presents a Multiobjective Hyper-Heuristic Optimization Scheme based on low-level meta-heuristics developed for application in engineering system design. Herein, we present a stochastic function to manage the low-level meta-heuristics and increase the likelihood of reaching a global optimum. Genetic Algorithm, Simulated Annealing and Swarm Intelligence are used as low-level meta-heuristics in this study. Performance of the proposed scheme is investigated through a comprehensive empirical analysis yielding acceptable results. One of the primary motives for performing multiobjective optimization is that current engineering systems require the simultaneous optimization of multiple, conflicting objectives. Random decision making makes the implementation of this scheme attractive and easy. Injecting feasible solutions significantly alters the search direction and also adds population diversity, resulting in accomplishment of the pre-defined goals set in the proposed scheme.

  9. Adaptive scalarization methods in multiobjective optimization

    CERN Document Server

    Eichfelder, Gabriele

    2008-01-01

    This book presents adaptive solution methods for multiobjective optimization problems based on parameter dependent scalarization approaches. Readers will benefit from the new adaptive methods and ideas for solving multiobjective optimization.

  10. Optimization of strontium adsorption from aqueous solution using (mn-Zr) oxide-pan composite spheres

    International Nuclear Information System (INIS)

    Inan, S.; Altas, Y.

    2009-01-01

    The processes based on adsorption and ion exchange play a major role in the pre-concentration and separation of toxic, long lived radionuclides from liquid waste. In nuclear waste management, the removal of long lived, radiotoxic isotopes such as strontium from radioactive waste reduces the storage problems and facilitates the disposal of the waste. Depending on the waste type, a variety of adsorbents and/or ion exchangers are used. Due to the amorphous structure of hydrous oxides and their mixtures, they do not have reproducible properties. In addition, the obtained powders are very fine particles that can cause operational problems such as pressure drop and filtration difficulties. Therefore they are not suitable for column applications. These reasons have recently expedited the study of the preparation of organic-inorganic composite adsorbent beads for industrial applications. PAN, as a stable and porous support for fine particles, enables the utilization of ion exchangers in large scale column applications. The utilization of PAN as a support material with many inorganic ion exchangers was first achieved by Sebesta at the beginning of the 1990s. Later on, PAN based composite ion exchangers were prepared and used for the removal of radionuclides and heavy metal ions from aqueous solutions and waste waters. In this study, spherical (Mn-Zr) oxide-PAN composites were prepared for the separation of strontium from aqueous solution over a wide pH range. Sr²⁺ adsorption on the composite adsorbent was optimized using a central composite experimental design.

  11. Centralized Stochastic Optimal Control of Complex Systems

    Energy Technology Data Exchange (ETDEWEB)

    Malikopoulos, Andreas [ORNL

    2015-01-01

    In this paper we address the problem of online optimization of the supervisory power management control in parallel hybrid electric vehicles (HEVs). We model HEV operation as a controlled Markov chain using the long-run expected average cost per unit time criterion, and we show that the control policy yielding the Pareto optimal solution minimizes the average cost criterion online. The effectiveness of the proposed solution is validated through simulation and compared to the solution derived with dynamic programming using the average cost criterion.

  12. Germinal Center Optimization Applied to Neural Inverse Optimal Control for an All-Terrain Tracked Robot

    Directory of Open Access Journals (Sweden)

    Carlos Villaseñor

    2017-12-01

    Full Text Available Nowadays, there are several meta-heuristic algorithms that offer solutions for multi-variate optimization problems. These algorithms use a population of candidate solutions to explore the search space, where leadership plays a big role in the exploration-exploitation equilibrium. In this work, we propose the use of a Germinal Center Optimization (GCO) algorithm, which implements temporal leadership by modeling a non-uniform, competition-based distribution for particle selection. GCO is used to find an optimal set of parameters for a neural inverse optimal control applied to an all-terrain tracked robot. In the Neural Inverse Optimal Control (NIOC) scheme, a neural identifier based on a Recurrent High Order Neural Network (RHONN), trained with an extended Kalman filter algorithm, is used to obtain a model of the system; then, a control law is designed using this model with the inverse optimal control approach. The RHONN identifier is developed without knowledge of the plant model or its parameters, while the inverse optimal control is designed for tracking velocity references. The applicability of the proposed scheme is illustrated using simulation results as well as real-time experimental results with an all-terrain tracked robot.

  13. Optimal hydrogenerator governor tuning with a genetic algorithm

    International Nuclear Information System (INIS)

    Lansberry, J.E.; Wozniak, L.; Goldberg, D.E.

    1992-01-01

    Many techniques exist for developing optimal controllers. This paper investigates genetic algorithms as a means of finding optimal solutions over a parameter space. In particular, the genetic algorithm is applied to optimal tuning of a governor for a hydrogenerator plant. Analog and digital simulation methods are compared for use in conjunction with the genetic algorithm optimization process. It is shown that analog plant simulation provides advantages in speed over digital plant simulation. This speed advantage makes application of the genetic algorithm in an actual plant environment feasible. Furthermore, the genetic algorithm is shown to possess the ability to reject plant noise and other system anomalies in its search for optimizing solutions
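
    For illustration, a minimal real-coded genetic algorithm for tuning controller gains is sketched below. The first-order plant and PI structure are placeholders, not the paper's hydrogenerator governor model, and all parameter values are assumptions.

      # Generic real-coded GA sketch for tuning controller gains (illustrative; the plant
      # below is a simple first-order lag with PI control, not the paper's hydrogenerator).
      import random

      def plant_cost(kp, ki, dt=0.01, t_end=5.0):
          # Integral of squared error for a unit step on the plant y' = -y + u.
          y, integ, ise = 0.0, 0.0, 0.0
          for _ in range(int(t_end / dt)):
              e = 1.0 - y
              integ += e * dt
              u = kp * e + ki * integ
              y += dt * (-y + u)           # explicit Euler step
              ise += e * e * dt
          return ise

      def ga_tune(pop_size=30, gens=40, bounds=(0.0, 10.0)):
          pop = [[random.uniform(*bounds), random.uniform(*bounds)] for _ in range(pop_size)]
          for _ in range(gens):
              scored = sorted(pop, key=lambda g: plant_cost(*g))
              elite = scored[: pop_size // 2]
              children = []
              while len(children) < pop_size - len(elite):
                  a, b = random.sample(elite, 2)
                  w = random.random()
                  child = [w * a[i] + (1 - w) * b[i] for i in range(2)]   # blend crossover
                  if random.random() < 0.2:                                # Gaussian mutation
                      j = random.randrange(2)
                      child[j] = min(bounds[1], max(bounds[0], child[j] + random.gauss(0, 0.5)))
                  children.append(child)
              pop = elite + children
          best = min(pop, key=lambda g: plant_cost(*g))
          return best, plant_cost(*best)

      print(ga_tune())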

  14. Particle swarm as optimization tool in complex nuclear engineering problems

    International Nuclear Information System (INIS)

    Medeiros, Jose Antonio Carlos Canedo

    2005-06-01

    Due to their low computational cost, gradient-based search techniques associated with linear programming techniques are used as optimization tools. These techniques, however, when applied to multimodal search spaces, can lead to local optima. For finding solutions in complex multimodal domains, random search techniques are used with great efficacy. In this work we exploit the search power of the particle swarm optimization algorithm as a tool for the solution of nuclear problems with complex, high-dimensional and multimodal search spaces. Due to its easy and natural representation of high-dimensional domains, particle swarm optimization was applied with success to the solution of complex nuclear problems, showing its efficacy in the search for solutions in high-dimensional and complex multimodal spaces. In one of these applications it enabled a natural and straightforward solution in a way not obtained with other methods, confirming the validity of its application. (author)
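
    A generic global-best particle swarm optimization sketch is given below for illustration. The Rastrigin test function merely stands in for the high-dimensional, multimodal nuclear design spaces described in the thesis, and all parameter values are assumptions.

      # Generic global-best PSO sketch (illustrative; not the thesis's nuclear application).
      import random
      import math

      def rastrigin(x):
          return 10 * len(x) + sum(xi * xi - 10 * math.cos(2 * math.pi * xi) for xi in x)

      def pso(dim=10, n_particles=40, iters=500, w=0.7, c1=1.5, c2=1.5, lo=-5.12, hi=5.12):
          pos = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
          vel = [[0.0] * dim for _ in range(n_particles)]
          pbest = [p[:] for p in pos]
          pbest_f = [rastrigin(p) for p in pos]
          g = min(range(n_particles), key=lambda i: pbest_f[i])
          gbest, gbest_f = pbest[g][:], pbest_f[g]
          for _ in range(iters):
              for i in range(n_particles):
                  for d in range(dim):
                      r1, r2 = random.random(), random.random()
                      vel[i][d] = (w * vel[i][d]
                                   + c1 * r1 * (pbest[i][d] - pos[i][d])
                                   + c2 * r2 * (gbest[d] - pos[i][d]))
                      pos[i][d] = min(hi, max(lo, pos[i][d] + vel[i][d]))
                  f = rastrigin(pos[i])
                  if f < pbest_f[i]:
                      pbest[i], pbest_f[i] = pos[i][:], f
                      if f < gbest_f:
                          gbest, gbest_f = pos[i][:], f
          return gbest, gbest_f

      print(pso()[1])      # best objective value found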

  15. Extreme Ultraviolet Process Optimization for Contact Layer of 14 nm Node Logic and 16 nm Half Pitch Memory Devices

    Science.gov (United States)

    Tseng, Shih-En; Chen, Alek

    2012-06-01

    Extreme ultraviolet (EUV) lithography is considered the most promising single-exposure technology at the 27 nm half-pitch node and beyond. The imaging performance of the ASML TWINSCAN NXE:3100 has been demonstrated to resolve a 26 nm Flash gate layer and a 16 nm static random access memory (SRAM) metal layer with a 0.25 numerical aperture (NA) and conventional illumination. Targeting high-volume manufacturing, the ASML TWINSCAN NXE:3300B, featuring a 0.33 NA lens with off-axis illumination, will generate a higher-contrast aerial image due to improved diffraction order collection efficiency and is expected to reduce the target dose via mask biasing. This work performed a simulation study to determine how EUV high-NA imaging benefits the mask rule check trade-offs required to achieve viable lithography solutions in two device application scenarios: a 14 nm node 6T-SRAM contact layer and a 16 nm half-pitch NAND Flash staggered contact layer. In each application, three-dimensional mask effects versus a Kirchhoff mask model were also investigated.

  16. An Intelligent Optimization Method for Vortex-Induced Vibration Reducing and Performance Improving in a Large Francis Turbine

    Directory of Open Access Journals (Sweden)

    Xuanlin Peng

    2017-11-01

    Full Text Available In this paper, a new methodology is proposed to reduce the vortex-induced vibration (VIV and improve the performance of the stay vane in a 200-MW Francis turbine. The process can be divided into two parts. Firstly, a diagnosis method for stay vane vibration based on field experiments and a finite element method (FEM is presented. It is found that the resonance between the Kármán vortex and the stay vane is the main cause for the undesired vibration. Then, we focus on establishing an intelligent optimization model of the stay vane’s trailing edge profile. To this end, an approach combining factorial experiments, extreme learning machine (ELM and particle swarm optimization (PSO is implemented. Three kinds of improved profiles of the stay vane are proposed and compared. Finally, the profile with a Donaldson trailing edge is adopted as the best solution for the stay vane, and verifications such as computational fluid dynamics (CFD simulations, structural analysis and fatigue analysis are performed to validate the optimized geometry.
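
    As a minimal illustration of the ELM surrogate used in such a scheme (assumed generic; the turbine data, factor ranges and the PSO wrapper from the paper are not reproduced), the sketch below fits a network with a random hidden layer whose output weights are obtained by least squares. The synthetic regression data are a placeholder.

      # Minimal extreme learning machine (ELM) sketch: random hidden layer, least-squares
      # output weights. Synthetic data stand in for the stay-vane design samples.
      import numpy as np

      rng = np.random.default_rng(0)

      def elm_fit(X, y, n_hidden=50):
          W = rng.normal(size=(X.shape[1], n_hidden))      # random input weights (fixed)
          b = rng.normal(size=n_hidden)                    # random biases (fixed)
          H = np.tanh(X @ W + b)                           # hidden-layer activations
          beta, *_ = np.linalg.lstsq(H, y, rcond=None)     # output weights by least squares
          return W, b, beta

      def elm_predict(model, X):
          W, b, beta = model
          return np.tanh(X @ W + b) @ beta

      # toy regression target: y = sin(x1) + 0.5 * x2
      X = rng.uniform(-3, 3, size=(200, 2))
      y = np.sin(X[:, 0]) + 0.5 * X[:, 1]
      model = elm_fit(X, y)
      print(float(np.mean((elm_predict(model, X) - y) ** 2)))   # training MSE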

  17. The functional variable method for finding exact solutions of some ...

    Indian Academy of Sciences (India)

    Abstract. In this paper, we implemented the functional variable method and the modified Riemann–Liouville derivative for the exact solitary wave solutions and periodic wave solutions of the time-fractional Klein–Gordon equation and the time-fractional Hirota–Satsuma coupled KdV system. This method is extremely simple ...

  18. Scaling Sparse Matrices for Optimization Algorithms

    OpenAIRE

    Gajulapalli Ravindra S; Lasdon Leon S

    2006-01-01

    To iteratively solve large-scale optimization problems in various contexts like planning, operations, design, etc., we need to generate descent directions that are based on linear system solutions. Irrespective of the optimization algorithm or the solution method employed for the linear systems, ill conditioning introduced by problem characteristics, the algorithm, or both needs to be addressed. In [GL01] we used an intuitive heuristic approach in scaling linear systems that improved performan...
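
    As a generic illustration of linear-system scaling (not the heuristic of [GL01]), the sketch below applies a simple iterative row/column equilibration D_r A D_c that pulls entry magnitudes toward 1 before a solve. All names and the example matrix are illustrative assumptions.

      # Simple iterative row/column equilibration sketch (illustrative only).
      import numpy as np

      def equilibrate(A, iters=5):
          A = A.astype(float)
          m, n = A.shape
          dr, dc = np.ones(m), np.ones(n)
          for _ in range(iters):
              r = np.sqrt(np.abs(A).max(axis=1))          # sqrt of row max magnitudes
              r[r == 0] = 1.0
              A /= r[:, None]
              dr /= r
              c = np.sqrt(np.abs(A).max(axis=0))          # sqrt of column max magnitudes
              c[c == 0] = 1.0
              A /= c[None, :]
              dc /= c
          return A, dr, dc                                 # A_scaled = diag(dr) @ A_orig @ diag(dc)

      A = np.array([[1e6, 2.0], [3.0, 4e-6]])
      As, dr, dc = equilibrate(A)
      print(np.linalg.cond(A), np.linalg.cond(As))         # condition number before/after scaling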

  19. Optimal Component Lumping: problem formulation and solution techniques

    DEFF Research Database (Denmark)

    Lin, Bao; Leibovici, Claude F.; Jørgensen, Sten Bay

    2008-01-01

    This paper presents a systematic method for optimal lumping of a large number of components in order to minimize the loss of information. In principle, a rigorous composition-based model is preferable to describe a system accurately. However, computational intensity and numerical issues restrict ...

  20. Solution of optimization problems using hybrid architecture; Solucao de problemas de otimizacao utilizando arquitetura hibrida

    Energy Technology Data Exchange (ETDEWEB)

    Murakami, Lelis Tetsuo

    2008-07-01

    king of problem. Because of the importance and magnitude of this issue, every effort that contributes to the improvement of power planning is welcome, which corroborates this thesis, whose objective is to propose technically and economically viable solutions to these optimization problems with a new approach that has the potential to be applied to many other kinds of similar problems. (author)

  1. Extremal Kähler metrics and Bach-Merkulov equations

    Science.gov (United States)

    Koca, Caner

    2013-08-01

    In this paper, we study a coupled system of equations on oriented compact 4-manifolds which we call the Bach-Merkulov equations. These equations can be thought of as the conformally invariant version of the classical Einstein-Maxwell equations. Inspired by the work of C. LeBrun on Einstein-Maxwell equations on compact Kähler surfaces, we give a variational characterization of solutions to Bach-Merkulov equations as critical points of the Weyl functional. We also show that extremal Kähler metrics are solutions to these equations, although, contrary to the Einstein-Maxwell analogue, they are not necessarily minimizers of the Weyl functional. We illustrate this phenomenon by studying the Calabi action on Hirzebruch surfaces.

  2. Practical mathematical optimization basic optimization theory and gradient-based algorithms

    CERN Document Server

    Snyman, Jan A

    2018-01-01

    This textbook presents a wide range of tools for a course in mathematical optimization for upper undergraduate and graduate students in mathematics, engineering, computer science, and other applied sciences. Basic optimization principles are presented with emphasis on gradient-based numerical optimization strategies and algorithms for solving both smooth and noisy discontinuous optimization problems. Attention is also paid to the difficulties of expense of function evaluations and the existence of multiple minima that often unnecessarily inhibit the use of gradient-based methods. This second edition addresses further advancements of gradient-only optimization strategies to handle discontinuities in objective functions. New chapters discuss the construction of surrogate models as well as new gradient-only solution strategies and numerical optimization using Python. A special Python module is electronically available (via springerlink) that makes the new algorithms featured in the text easily accessible and dir...

  3. Extremity exams optimization for computed radiography; Otimizacao de exames de extremidade para radiologia computadorizada

    Energy Technology Data Exchange (ETDEWEB)

    Pavan, Ana Luiza M.; Alves, Allan Felipe F.; Velo, Alexandre F.; Miranda, Jose Ricardo A., E-mail: analuiza@ibb.unesp.br [Universidade Estadual Paulista Julio de Mesquita Filho (UNESP), Botucatu, SP (Brazil). Instituto de Biociencias. Departamento de Fisica e Biofisica; Pina, Diana R. [Universidade Estadual Paulista Julio de Mesquita Filho (UNESP), Botucatu, SP (Brazil). Faculdade de Medicina. Departamento de Doencas Tropicais e Diagnostico por Imagem

    2013-08-15

    Computed radiography (CR) has become the most used device for image acquisition since its introduction in the 1980s. Detection and early diagnosis, obtained through CR examinations, are important for the successful treatment of diseases of the hand. However, the norms used for the optimization of these images are based on international protocols. Therefore, it is necessary to determine radiographic technique charts for the CR system that provide a safe medical diagnosis with doses as low as reasonably achievable. The objective of this work is to develop a homogeneous extremity phantom to be used in the calibration of radiographic techniques. In the construction of the simulator, a tissue-quantification algorithm was developed using Matlab®. In this process, the average thicknesses of bone and soft tissue in the hand region of an anthropomorphic simulator were quantified, as well as the corresponding thicknesses of the simulator materials (aluminum and Lucite), using a technique of mask application and removal together with Gaussian histograms corresponding to the tissues of interest. The homogeneous phantom was used to calibrate the X-ray beam. The techniques were implemented in a calibrated anthropomorphic hand phantom. The images were evaluated by specialists in radiology using the visual grading analysis (VGA) method. Skin entrance surface doses (SED) corresponding to each technique were estimated with their respective tube charges. The thicknesses of the simulator materials that constitute the homogeneous phantom determined in this study were 19.01 mm of acrylic and 0.81 mm of aluminum. Better image quality with doses as low as reasonably achievable was obtained, with dose and tube charge decreased by around 53.35% and 37.78%, respectively, compared with those normally used in the routine clinical diagnostic radiology of HCFMB-UNESP. (author)

  4. Compliance of future emission regulations. Solutions made by MTU Friedrichshafen; Erfuellung kuenftiger Abgasemissionsvorschriften. Loesungen der MTU Friedrichshafen

    Energy Technology Data Exchange (ETDEWEB)

    Dohle, U. [Tognum AG, Friedrichshafen (Germany); Schneemann, A.; Teetz, C.; Wintruff, I. [MTU Friedrichshafen GmbH, Friedrichshafen (Germany)

    2010-07-01

    Between 2011 and 2015, permissible levels of exhaust emissions for off-highway engines in Europe and in the USA will again be significantly reduced. Legislation covering the various applications and power ranges is extremely diverse. Engine manufacturers are responsible for developing different emissions-reduction methods for the individual markets in order to provide optimum solutions in areas such as fuel consumption and CO{sub 2} emissions. This article describes some of the technical concepts, both current and planned, which MTU Friedrichshafen will employ to align its present engine series with future legislation. Basic-configuration engines are being toughened to withstand high ignition and injection pressures, and further development is ongoing on cooled exhaust gas recirculation, the Miller process and two-stage turbocharging. SCR technology and particulate filtration are also being introduced in individual cases, and new control concepts will be employed to optimize the interaction between the various subsystems. In the following, the solutions identified for selected applications are presented and discussed. (orig.)

  5. Painlevé IV Solutions from Hamiltonians with Equidistant Gapped Spectrum

    International Nuclear Information System (INIS)

    Estrada-Delgado, M I; Fernández C, D J

    2016-01-01

    Supersymmetry transformations are applied to the harmonic oscillator to generate potentials V_k^j whose spectra have a gap with respect to the initial one. The extremal states are found and, since the reduction theorem conditions are satisfied, the system has third-order ladder operators and is connected with the Painlevé IV (PIV) equation, so solutions to this equation can be generated. An alternative transformation is applied by adding the levels needed to recover the spectrum of V_k^j. The extremal states are found and, as the reduction theorem is met again, we also obtain solutions to the PIV equation, which will be analysed. (paper)
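
    For the reader's convenience, the Painlevé IV equation referred to here is, in its standard form (with the usual parameters alpha and beta, which need not match the paper's notation):

      % Standard form of the Painlevé IV equation, w = w(z), parameters \alpha, \beta.
      \[
        \frac{d^{2}w}{dz^{2}}
        = \frac{1}{2w}\left(\frac{dw}{dz}\right)^{2}
        + \frac{3}{2}\,w^{3} + 4z\,w^{2} + 2\left(z^{2}-\alpha\right)w + \frac{\beta}{w}
      \]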

  6. Dry storage technologies: Optimized solutions for spent fuels and vitrified residues

    International Nuclear Information System (INIS)

    Roland, Vincent; Verdier, Antoine; Sicard, Damien; Neider, Tara

    2006-01-01

    materials - have allowed finding further optimization of this type of cask design. In order to increase the loading capacity in terms of radioactive source terms and heat load by 40%, the cask design relies on innovative solutions and benchmarks from the current shipping campaigns. This paper shows, with examples developed within companies of the AREVA Group, the key parameters and elements that can direct the selection of a technology in a user-specific context. Some of the constraints are the ability to dry-store a large number of spent fuel assemblies or vitrified residues. The methods used by COGEMA LOGISTICS in its transport and storage systems, which are an integral part of its radioactive waste management services, are also explained hereafter. COGEMA LOGISTICS leverages its experience and uses its analyses to determine the overall characteristics, needs, and lifetime costs of potential programs for transporting and storing nuclear waste. (authors)

  7. [New population curves in spanish extremely preterm neonates].

    Science.gov (United States)

    García-Muñoz Rodrigo, F; García-Alix Pérez, A; Figueras Aloy, J; Saavedra Santana, P

    2014-08-01

    Most anthropometric reference data for extremely preterm infants used in Spain are outdated and based on non-Spanish populations, or are derived from small hospital-based samples that failed to include neonates of borderline viability. To develop gender-specific, population-based curves for birth weight, length, and head circumference in extremely preterm Caucasian infants, using a large contemporary sample size of Spanish singletons. Anthropometric data from neonates ≤ 28 weeks of gestational age were collected between January 2002 and December 2010 using the Spanish database SEN1500. Gestational age was estimated according to obstetric data (early pregnancy ultrasound). The data were analyzed with the SPSS.20 package, and centile tables were created for males and females using the Cole and Green LMS method. This study presents the first population-based growth curves for extremely preterm infants, including those of borderline viability, in Spain. A sexual dimorphism is evident for all of the studied parameters, starting at early gestation. These new gender-specific and population-based data could be useful for the improvement of growth assessments of extremely preterm infants in our country, for the development of epidemiological studies, for the evaluation of temporal trends, and for clinical or public health interventions seeking to optimize fetal growth. Copyright © 2013 Asociación Española de Pediatría. Published by Elsevier Espana. All rights reserved.
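
    For reference, the Cole and Green LMS method mentioned above converts a measurement X at a given gestational age into a z-score using the fitted curves L (skewness), M (median) and S (coefficient of variation); the standard transformation (not specific to this study's fitted values) is:

      % Standard LMS transformation (Cole and Green); X is the measurement, L, M, S the fitted curves.
      \[
        Z =
        \begin{cases}
          \dfrac{(X/M)^{L} - 1}{L\,S}, & L \neq 0,\\[2ex]
          \dfrac{\ln(X/M)}{S},         & L = 0.
        \end{cases}
      \]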

  8. Opportunities and constraints for improved water resources management under increasing hydrological extremes

    Science.gov (United States)

    Wada, Y.

    2017-12-01

    Increased occurrence of extreme climate events is one of the most damaging consequences of global climate change today and in the future. Estimating the impacts of such extreme events on global and regional water resources is therefore crucial for quantifying increasing risks from climate change. The quest for water security has been a struggle throughout human history. Only in recent years has the scale of this quest moved beyond the local, to the national and regional scales and to the planet itself. Absent or unreliable water supply, sanitation and irrigation services, unmitigated floods and droughts, and degraded water environments severely impact half of the planet's population. The scale and complexity of the water challenges faced by society, particularly but not only in the world's poorest regions, are now recognized, as is the imperative of overcoming these challenges for a stable and equitable world. IIASA's Water Futures and Solutions Initiative (WFaS) is an unprecedented inter-disciplinary scientific initiative to identify robust and adaptive portfolios of optional solutions across different economic sectors, including agriculture, energy and industry, and to test these solution-portfolios with multi-model ensembles of hydrologic and sector models to obtain a clearer picture of the trade-offs, risks, and opportunities. The results of WFaS scenarios and models provide a basis for long-term strategic planning of water resource development under changing environments and increasing climate extremes. And given the complexity of the water system, WFaS uniquely provides policy makers with optional sets of solutions that work together and that can be easily adapted as circumstances change in the future. As WFaS progresses, it will establish a network involving information exchange, mutual learning and horizontal cooperation across teams of researchers, public and private decision makers and practitioners exploring solutions at regional, national and local

  9. Current management strategies and long-term clinical outcomes of upper extremity venous thrombosis

    NARCIS (Netherlands)

    Bleker, S. M.; van Es, N.; Kleinjan, A.; Buller, H. R.; Kamphuisen, P. W.; Aggarwal, A.; Beyer-Westendorf, J.; Camporese, G.; Cosmi, B.; Gary, T.; Ghirarduzzi, A.; Kaasjager, K.; Lerede, T.; Marschang, P.; Meijer, Karina; Otten, H. -M.; Porreca, E.; Righini, M.; Verhamme, P.; van Wissen, S.; Di Nisio, M.

    Background: There is scant information on the optimal management and clinical outcome of deep and superficial vein thrombosis of the upper extremity (UEDVT and UESVT). Objectives: To explore treatment strategies and the incidence of recurrent venous thromboembolism (VTE), mortality, postthrombotic

  10. Morphology Development in Solution-Processed Functional Organic Blend Films: An In Situ Viewpoint

    KAUST Repository

    Richter, Lee J.; DeLongchamp, Dean M.; Amassian, Aram

    2017-01-01

    .0, to the Internet of things, to point-of-use health care and elder care. The extreme sensitivity of the functional performance of organic films to structure and the general nonequilibrium nature of solution drying result in extreme processing-performance correlations

  11. The response matrix discrete ordinates solution to the 1D radiative transfer equation

    International Nuclear Information System (INIS)

    Ganapol, Barry D.

    2015-01-01

    The discrete ordinates method (DOM) of solution to the 1D radiative transfer equation has been an effective method of solution for nearly 70 years. During that time, the method has experienced numerous improvements as numerical and computational techniques have become more powerful and efficient. Here, we again consider the analytical solution to the discrete radiative transfer equation in a homogeneous medium by proposing a new, and consistent, form of solution that improves upon previous forms. Aided by a Wynn-epsilon convergence acceleration, its numerical evaluation can achieve extreme precision as demonstrated by comparison with published benchmarks. Finally, we readily extend the solution to a heterogeneous medium through the star product formulation producing a novel benchmark for closed form Henyey–Greenstein scattering as an example. - Highlights: • Presents a new solution to the RTE called the response matrix DOM (RM/DOM). • Solution representations avoid the instability common in exponential solutions. • Explicit form in terms of matrix hyperbolic functions. • Extreme accuracy through Wynn-epsilon acceleration checked by published benchmarks. • Provides a more transparent numerical evaluation than found previously
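
    The Wynn-epsilon acceleration mentioned above can be sketched generically as follows. This is an illustrative implementation of the classical epsilon algorithm applied to a sequence of partial sums, not the authors' code.

      # Generic Wynn epsilon-algorithm sketch for sequence acceleration (illustrative only).
      import math

      def wynn_epsilon(seq):
          """Return an accelerated estimate of the limit of a list of partial sums."""
          n = len(seq)
          e = [list(seq)]                      # epsilon_0 column = the sequence itself
          prev2 = [0.0] * (n + 1)              # epsilon_{-1} column = zeros
          for k in range(1, n):
              col = []
              for i in range(n - k):
                  diff = e[k - 1][i + 1] - e[k - 1][i]
                  if diff == 0:
                      return e[k - 1][i + 1]   # consecutive entries equal: converged
                  col.append(prev2[i + 1] + 1.0 / diff)
              prev2 = e[k - 1]
              e.append(col)
          # the even-order columns carry the accelerated values; take the highest even column
          best = e[-1] if (len(e) - 1) % 2 == 0 else e[-2]
          return best[-1]

      # Example: partial sums of ln(2) = 1 - 1/2 + 1/3 - ...
      partials, s = [], 0.0
      for k in range(1, 11):
          s += (-1) ** (k + 1) / k
          partials.append(s)
      print(wynn_epsilon(partials), math.log(2))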

  12. Topology-oblivious optimization of MPI broadcast algorithms on extreme-scale platforms

    KAUST Repository

    Hasanov, Khalid; Quintin, Jean-Noël; Lastovetsky, Alexey

    2015-01-01

    operations for particular architectures by taking into account either their topology or platform parameters. In this work we propose a simple but general approach to optimization of the legacy MPI broadcast algorithms, which are widely used in MPICH and Open

  13. Approximate Reanalysis in Topology Optimization

    DEFF Research Database (Denmark)

    Amir, Oded; Bendsøe, Martin P.; Sigmund, Ole

    2009-01-01

    In the nested approach to structural optimization, most of the computational effort is invested in the solution of the finite element analysis equations. In this study, the integration of an approximate reanalysis procedure into the framework of topology optimization of continuum structures...

  14. Optimization in optical systems revisited: Beyond genetic algorithms

    Science.gov (United States)

    Gagnon, Denis; Dumont, Joey; Dubé, Louis

    2013-05-01

    Designing integrated photonic devices such as waveguides, beam-splitters and beam-shapers often requires optimization of a cost function over a large solution space. Metaheuristics - algorithms based on empirical rules for exploring the solution space - are specifically tailored to those problems. One of the most widely used metaheuristics is the standard genetic algorithm (SGA), based on the evolution of a population of candidate solutions. However, the stochastic nature of the SGA sometimes prevents access to the optimal solution. Our goal is to show that a parallel tabu search (PTS) algorithm is more suited to optimization problems in general, and to photonics in particular. PTS is based on several search processes using a pool of diversified initial solutions. To assess the performance of both algorithms (SGA and PTS), we consider an integrated photonics design problem, the generation of arbitrary beam profiles using a two-dimensional waveguide-based dielectric structure. The authors acknowledge financial support from the Natural Sciences and Engineering Research Council of Canada (NSERC).
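
    To illustrate the tabu search metaheuristic discussed above, a minimal single-process variant on a toy binary quadratic problem is sketched below; the paper's parallel scheme, pool of diversified initial solutions, and photonics cost function are not reproduced, and all parameters are assumptions.

      # Minimal single-process tabu search sketch on a toy binary quadratic problem.
      import random

      def cost(x, Q):
          return sum(Q[i][j] * x[i] * x[j] for i in range(len(x)) for j in range(len(x)))

      def tabu_search(Q, iters=200, tenure=7):
          n = len(Q)
          x = [random.randint(0, 1) for _ in range(n)]
          best, best_f = x[:], cost(x, Q)
          tabu = {}                                  # flipped index -> iteration until which it is tabu
          for it in range(iters):
              # evaluate all single-bit flips; a tabu move is allowed only if it beats the best (aspiration)
              moves = []
              for i in range(n):
                  y = x[:]
                  y[i] ^= 1
                  f = cost(y, Q)
                  if tabu.get(i, -1) < it or f < best_f:
                      moves.append((f, i, y))
              if not moves:
                  continue
              f, i, x = min(moves)
              tabu[i] = it + tenure                  # forbid re-flipping index i for `tenure` iterations
              if f < best_f:
                  best, best_f = x[:], f
          return best, best_f

      random.seed(1)
      n = 12
      Q = [[random.uniform(-1, 1) for _ in range(n)] for _ in range(n)]
      print(tabu_search(Q)[1])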

  15. Religious extremism as a challenge to tertiary education in Nigeria ...

    African Journals Online (AJOL)

    Extremists use religion as a shield either for political or economic agenda and manipulate their gullible followers in order to impose their ideology on them. The reality of religious extremism in Nigeria is a challenge to tertiary education to search for a lasting solution that will enable Nigerians overcome the problem and focus ...

  16. A Comparative Theoretical and Computational Study on Robust Counterpart Optimization: I. Robust Linear Optimization and Robust Mixed Integer Linear Optimization

    Science.gov (United States)

    Li, Zukui; Ding, Ran; Floudas, Christodoulos A.

    2011-01-01

    Robust counterpart optimization techniques for linear optimization and mixed integer linear optimization problems are studied in this paper. Different uncertainty sets, including those studied in literature (i.e., interval set; combined interval and ellipsoidal set; combined interval and polyhedral set) and new ones (i.e., adjustable box; pure ellipsoidal; pure polyhedral; combined interval, ellipsoidal, and polyhedral set) are studied in this work and their geometric relationship is discussed. For uncertainty in the left hand side, right hand side, and objective function of the optimization problems, robust counterpart optimization formulations induced by those different uncertainty sets are derived. Numerical studies are performed to compare the solutions of the robust counterpart optimization models and applications in refinery production planning and batch process scheduling problem are presented. PMID:21935263
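
    As a reminder of the flavor of such counterparts (a standard textbook result for the box/interval uncertainty set, not reproduced from the paper's derivations for the other sets), a single uncertain linear constraint can be written as follows:

      % Box (interval) uncertainty robust counterpart of \sum_j \tilde{a}_{ij} x_j \le b_i,
      % with \tilde{a}_{ij} \in [a_{ij}-\hat{a}_{ij},\, a_{ij}+\hat{a}_{ij}]:
      \[
        \sum_{j} a_{ij} x_j \;+\; \sum_{j} \hat{a}_{ij}\,|x_j| \;\le\; b_i ,
      \]
      % which is linearized by introducing auxiliary variables u_j \ge |x_j|:
      \[
        \sum_{j} a_{ij} x_j + \sum_{j} \hat{a}_{ij} u_j \le b_i ,
        \qquad -u_j \le x_j \le u_j .
      \]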

  17. Towards Optimal Power Management of Hybrid Electric Vehicles in Real-Time: A Review on Methods, Challenges, and State-Of-The-Art Solutions

    Directory of Open Access Journals (Sweden)

    Ahmed M. Ali

    2018-02-01

    Full Text Available In light of increasing alerts about limited energy sources and environmental degradation, it has become essential to search for alternatives to thermal engine-based vehicles, which are a major source of air pollution and fossil fuel depletion. Hybrid electric vehicles (HEVs), encompassing multiple energy sources, are a short-term solution that meets the performance requirements and contributes to fuel saving and emission reduction aims. Power management methods, such as those regulating efficient energy flow to the vehicle propulsion system, are core technologies of HEVs. Intelligent power management methods, capable of acquiring optimal power handling, accommodating system inaccuracies, and suiting real-time applications, can significantly improve the powertrain efficiency at different operating conditions. Rule-based methods are simply structured and easily implementable in real-time; however, only a limited optimality in power handling decisions can be achieved. Optimization-based methods are more capable of achieving this optimality at the price of augmented computational load. In the last few years, these optimization-based methods have been under development to suit real-time application using more predictive, recognitive, and artificial intelligence tools. This paper presents a review-based discussion about these new trends in real-time optimal power management methods. More focus is given to the adaptation tools used to boost method optimality in real-time. The contribution of this work can be identified in two points: First, to provide researchers and scholars with an overview of different power management methods. Second, to point out the state-of-the-art trends in real-time optimal methods and to highlight promising approaches for future development.

  18. Automated MAD and MIR structure solution

    International Nuclear Information System (INIS)

    Terwilliger, Thomas C.; Berendzen, Joel

    1999-01-01

    A fully automated procedure for solving MIR and MAD structures has been developed using a scoring scheme to convert the structure-solution process into an optimization problem. Obtaining an electron-density map from X-ray diffraction data can be difficult and time-consuming even after the data have been collected, largely because MIR and MAD structure determinations currently require many subjective evaluations of the qualities of trial heavy-atom partial structures before a correct heavy-atom solution is obtained. A set of criteria for evaluating the quality of heavy-atom partial solutions in macromolecular crystallography have been developed. These have allowed the conversion of the crystal structure-solution process into an optimization problem and have allowed its automation. The SOLVE software has been used to solve MAD data sets with as many as 52 selenium sites in the asymmetric unit. The automated structure-solution process developed is a major step towards the fully automated structure-determination, model-building and refinement procedure which is needed for genomic scale structure determinations

  19. Potential and challenges in home care service process optimization : a route optimization approach

    OpenAIRE

    Nakari, Pentti J. E.

    2016-01-01

    Aging of the population is an increasing problem in many countries, including Finland, and it poses a challenge to public services such as home care. Vehicle routing problem (VRP) type optimization solutions are one possible way to decrease the time required for planning home visits and driving to customer addresses, as well as decreasing transportation costs. Although VRP optimization is widely and successfully applied to commercial and industrial logistics, the home care ...

  20. Technology improves upper extremity rehabilitation.

    Science.gov (United States)

    Kowalczewski, Jan; Prochazka, Arthur

    2011-01-01

    Stroke survivors with hemiparesis and spinal cord injury (SCI) survivors with tetraplegia find it difficult or impossible to perform many activities of daily life. There is growing evidence that intensive exercise therapy, especially when supplemented with functional electrical stimulation (FES), can improve upper extremity function, but delivering the treatment can be costly, particularly after recipients leave rehabilitation facilities. Recently, there has been a growing level of interest among researchers and healthcare policymakers to deliver upper extremity treatments to people in their homes using in-home teletherapy (IHT). The few studies that have been carried out so far have encountered a variety of logistical and technical problems, not least the difficulty of conducting properly controlled and blinded protocols that satisfy the requirements of high-level evidence-based research. In most cases, the equipment and communications technology were not designed for individuals with upper extremity disability. It is clear that exercise therapy combined with interventions such as FES, supervised over the Internet, will soon be adopted worldwide in one form or another. Therefore it is timely that researchers, clinicians, and healthcare planners interested in assessing IHT be aware of the pros and cons of the new technology and the factors involved in designing appropriate studies of it. It is crucial to understand the technical barriers, the role of telesupervisors, the motor improvements that participants can reasonably expect and the process of optimizing IHT-exercise therapy protocols to maximize the benefits of the emerging technology. Copyright © 2011 Elsevier B.V. All rights reserved.