WorldWideScience

Sample records for solutions extremal optimization

  1. Optimization with Extremal Dynamics

    International Nuclear Information System (INIS)

    Boettcher, Stefan; Percus, Allon G.

    2001-01-01

    We explore a new general-purpose heuristic for finding high-quality solutions to hard discrete optimization problems. The method, called extremal optimization, is inspired by self-organized criticality, a concept introduced to describe emergent complexity in physical systems. Extremal optimization successively updates extremely undesirable variables of a single suboptimal solution, assigning them new, random values. Large fluctuations ensue, efficiently exploring many local optima. We use extremal optimization to elucidate the phase transition in the 3-coloring problem, and we provide independent confirmation of previously reported extrapolations for the ground-state energy of ±J spin glasses in d=3 and 4.
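
    As an editorial illustration of the update rule just described (the graph, colour count, and step budget below are invented, not taken from the paper): each vertex's "fitness" is its number of conflicting edges, the worst vertex is reassigned a random colour at every step, and only the best configuration seen so far is kept.

```python
import random

def extremal_optimization_coloring(adj, n_colors=3, steps=10000, seed=0):
    """Minimal extremal-optimization sketch for graph coloring.

    adj: dict mapping each vertex to a list of neighbours (illustrative input).
    At every step the most undesirable vertex -- the one with the most
    conflicting edges -- is given a new random colour, as described above.
    """
    rng = random.Random(seed)
    coloring = {v: rng.randrange(n_colors) for v in adj}

    def conflicts(v):
        return sum(coloring[v] == coloring[u] for u in adj[v])

    best = dict(coloring)
    best_cost = sum(conflicts(v) for v in adj) // 2
    for _ in range(steps):
        worst = max(adj, key=conflicts)            # extremal selection
        coloring[worst] = rng.randrange(n_colors)  # random new value
        cost = sum(conflicts(v) for v in adj) // 2
        if cost < best_cost:                       # keep the best-so-far only
            best, best_cost = dict(coloring), cost
        if best_cost == 0:
            break
    return best, best_cost

# toy example: a 5-cycle is 3-colourable
cycle = {i: [(i - 1) % 5, (i + 1) % 5] for i in range(5)}
print(extremal_optimization_coloring(cycle))
```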

  2. Extremal black holes as exact string solutions

    International Nuclear Information System (INIS)

    Horowitz, G.T.; Tseytlin, A.A.

    1994-01-01

    We show that the leading order solution describing an extremal electrically charged black hole in string theory is, in fact, an exact solution to all orders in α' when interpreted in a Kaluza-Klein fashion. This follows from the observation that it can be obtained via dimensional reduction from a five-dimensional background which is proved to be an exact string solution

  3. Extremal solutions of measure differential equations

    Czech Academy of Sciences Publication Activity Database

    Monteiro, Giselle Antunes; Slavík, A.

    2016-01-01

    Roč. 444, č. 1 (2016), s. 568-597 ISSN 0022-247X Institutional support: RVO:67985840 Keywords : measure differential equations * extremal solution * lower solution Subject RIV: BA - General Mathematics Impact factor: 1.064, year: 2016 http://www.sciencedirect.com/science/article/pii/S0022247X16302724

  4. Multiobjective optimization of an extremal evolution model

    International Nuclear Information System (INIS)

    Elettreby, M.F.

    2004-09-01

    We propose a two-dimensional model for a co-evolving ecosystem that generalizes the extremal coupled map lattice model. The model takes into account the concept of multiobjective optimization. We find that the system self-organizes into a critical state. The distributions of the distances between subsequent mutations, as well as the distribution of avalanche sizes, follow power laws. (author)

  5. Adaptive extremal optimization by detrended fluctuation analysis

    International Nuclear Information System (INIS)

    Hamacher, K.

    2007-01-01

    Global optimization is one of the key challenges in computational physics, as several problems, e.g. protein structure prediction, the low-energy landscape of atomic clusters, detection of community structures in networks, or model-parameter fitting, can be formulated as global optimization problems. Extremal optimization (EO) has in recent years become one particularly successful approach to the global optimization problem. As with almost all other global optimization approaches, EO is driven by an internal dynamics that depends crucially on one or more parameters. Recently, the existence of an optimal scheme for this internal parameter of EO was proven, so as to maximize the performance of the algorithm. However, this proof was not constructive, that is, one cannot use it to deduce the optimal parameter itself a priori. In this study we analyze the dynamics of EO for a test problem (spin glasses). Based on the results we propose an online measure of the performance of EO and a way to use this insight to reformulate the EO algorithm in order to construct optimal values of the internal parameter online, without any input by the user. This approach will ultimately allow us to make EO parameter-free and thus make its application to general global optimization problems much more efficient.
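
    The internal parameter referred to here is, in the usual τ-EO formulation, the exponent of a power-law selection rule: variables are ranked from worst to best fitness, and rank k is chosen with probability proportional to k^(−τ). A minimal sketch of that selection step (function name and default value are illustrative):

```python
import random

def tau_eo_pick(fitnesses, tau=1.3, rng=random):
    """Pick a variable index using the tau-EO power-law rank rule.

    fitnesses: per-variable fitness values, lower = more undesirable here.
    Variables are ranked from worst (rank 1) to best (rank n), and rank k
    is selected with probability proportional to k**(-tau).
    """
    order = sorted(range(len(fitnesses)), key=lambda i: fitnesses[i])  # worst first
    weights = [(k + 1) ** (-tau) for k in range(len(order))]
    return rng.choices(order, weights=weights, k=1)[0]

# usage: index of the variable to update next
print(tau_eo_pick([0.2, 0.9, 0.05, 0.6]))
```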

  6. Optimal security investments and extreme risk.

    Science.gov (United States)

    Mohtadi, Hamid; Agiwal, Swati

    2012-08-01

    In the aftermath of 9/11, concern over security increased dramatically in both the public and the private sector. Yet, no clear algorithm exists to inform firms on the amount and the timing of security investments to mitigate the impact of catastrophic risks. The goal of this article is to devise an optimum investment strategy for firms to mitigate exposure to catastrophic risks, focusing on how much to invest and when to invest. The latter question addresses the issue of whether postponing a risk mitigating decision is an optimal strategy or not. Accordingly, we develop and estimate both a one-period model and a multiperiod model within the framework of extreme value theory (EVT). We calibrate these models using probability measures for catastrophic terrorism risks associated with attacks on the food sector. We then compare our findings with the purchase of catastrophic risk insurance. © 2012 Society for Risk Analysis.

  7. Existence of extremal periodic solutions for quasilinear parabolic equations

    Directory of Open Access Journals (Sweden)

    Siegfried Carl

    1997-01-01

    bounded domain under periodic Dirichlet boundary conditions. Our main goal is to prove the existence of extremal solutions among all solutions lying in a sector formed by appropriately defined upper and lower solutions. The main tools used in the proof of our result are recently obtained abstract results on nonlinear evolution equations, comparison and truncation techniques, and suitably constructed special test functions.

  8. Extreme Trust Region Policy Optimization for Active Object Recognition.

    Science.gov (United States)

    Liu, Huaping; Wu, Yupei; Sun, Fuchun

    2018-06-01

    In this brief, we develop a deep reinforcement learning method to actively recognize objects by choosing a sequence of actions for an active camera that helps to discriminate between the objects. The method is realized using trust region policy optimization, in which the policy is represented by an extreme learning machine and, therefore, leads to an efficient optimization algorithm. The experimental results on the publicly available data set show the advantages of the developed extreme trust region optimization method.

  9. On some interconnections between combinatorial optimization and extremal graph theory

    Directory of Open Access Journals (Sweden)

    Cvetković Dragoš M.

    2004-01-01

    Full Text Available The uniting feature of combinatorial optimization and extremal graph theory is that in both areas one should find extrema of a function defined in most cases on a finite set. While in combinatorial optimization the focus is on developing efficient algorithms and heuristics for solving specified types of problems, extremal graph theory deals with finding bounds for various graph invariants under some constraints and with constructing extremal graphs. We analyze by examples some interconnections and interactions of the two theories and propose some conclusions.

  10. Discrete optimization in architecture extremely modular systems

    CERN Document Server

    Zawidzki, Machi

    2017-01-01

    This book is comprised of two parts, both of which explore modular systems: Pipe-Z (PZ) and Truss-Z (TZ), respectively. It presents several methods of creating PZ and TZ structures subjected to discrete optimization. The algorithms presented employ graph-theoretic and heuristic methods. The underlying idea of both systems is to create free-form structures using the minimal number of types of modular elements. PZ is more conceptual, as it forms single-branch mathematical knots with a single type of module. Conversely, TZ is a skeletal system for creating free-form pedestrian ramps and ramp networks among any number of terminals in space. In physical space, TZ uses two types of modules that are mirror reflections of each other. The optimization criteria discussed include: the minimal number of units, maximal adherence to the given guide paths, etc.

  11. Spatial planning via extremal optimization enhanced by cell-based local search

    International Nuclear Information System (INIS)

    Sidiropoulos, Epaminondas

    2014-01-01

    A new treatment is presented for land use planning problems by means of extremal optimization in conjunction with cell-based neighborhood local search. Extremal optimization, inspired by self-organized critical models of evolution, has been applied mainly to the solution of classical combinatorial optimization problems. Cell-based local search has been employed by the author elsewhere in problems of spatial resource allocation, in combination with genetic algorithms and simulated annealing. In this paper it complements extremal optimization in order to enhance its capacity for a spatial optimization problem. The hybrid method thus formed is compared to methods of the literature on a specific characteristic problem. It yields better results both in terms of objective function values and in terms of compactness. The latter is an important quantity for spatial planning. The present treatment yields significant compactness values as emergent results.

  12. Approximative solutions of stochastic optimization problem

    Czech Academy of Sciences Publication Activity Database

    Lachout, Petr

    2010-01-01

    Roč. 46, č. 3 (2010), s. 513-523 ISSN 0023-5954 R&D Projects: GA ČR GA201/08/0539 Institutional research plan: CEZ:AV0Z10750506 Keywords : Stochastic optimization problem * sensitivity * approximative solution Subject RIV: BA - General Mathematics Impact factor: 0.461, year: 2010 http://library.utia.cas.cz/separaty/2010/SI/lachout-approximative solutions of stochastic optimization problem.pdf

  13. [Optimal solution and analysis of muscular force during standing balance].

    Science.gov (United States)

    Wang, Hongrui; Zheng, Hui; Liu, Kun

    2015-02-01

    The present study aimed at the optimal solution of the main muscular force distribution in the lower extremity during human standing balance. The musculoskeletal system of the lower extremity was simplified to a physical model with 3 joints and 9 muscles. On the basis of this model, an optimization model was built to solve the problem of redundant muscle forces. A particle swarm optimization (PSO) algorithm was used to solve the single-objective and multi-objective problems, respectively. The numerical results indicated that multi-objective optimization is more reasonable for obtaining the distribution and variation of the 9 muscular forces. Finally, the coordination of each muscle group while maintaining standing balance under passive movement was qualitatively analyzed using the simulation results obtained.

  14. Solution quality improvement in chiller loading optimization

    International Nuclear Information System (INIS)

    Geem, Zong Woo

    2011-01-01

    In order to reduce greenhouse gas emissions, a multiple chiller system can be operated energy-efficiently using optimization techniques. So far, various optimization techniques have been applied to the optimal chiller loading problem. Most of those techniques are meta-heuristic algorithms such as genetic algorithm, simulated annealing, and particle swarm optimization. This study instead applied a gradient-based method, generalized reduced gradient, and obtained better results when compared with other approaches. When two additional approaches were introduced (hybridization of the meta-heuristic and gradient-based algorithms; and reformulation of the optimization structure by adding a binary variable that denotes each chiller's operating status), generalized reduced gradient found even better solutions. - Highlights: → Chiller loading problem is optimized by generalized reduced gradient (GRG) method. → Results are compared with meta-heuristic algorithms such as genetic algorithm. → Results are further enhanced by hybridizing meta-heuristic and gradient techniques. → Results are further enhanced by modifying the optimization formulation.
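
    As a rough sketch of the gradient-based formulation (GRG itself is not part of the usual open-source Python stack, so scipy's SLSQP solver stands in for it here; the quadratic power curves, capacities, and cooling demand are invented for the example): minimize total electric power over the chillers' part-load ratios subject to meeting the demand.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical power curves P_i(x) = a + b*x + c*x**2 for each chiller,
# with x the part-load ratio in [0, 1]; all coefficients are illustrative only.
coeff = np.array([[100.0, 300.0, 150.0],
                  [120.0, 280.0, 170.0],
                  [ 90.0, 320.0, 140.0]])
capacity = np.array([800.0, 900.0, 700.0])   # kW cooling per chiller (made up)
demand = 1500.0                               # kW cooling to be delivered

def total_power(x):
    return float(sum(a + b * xi + c * xi**2 for (a, b, c), xi in zip(coeff, x)))

cons = {"type": "eq", "fun": lambda x: capacity @ x - demand}
res = minimize(total_power, x0=np.full(3, 0.6), bounds=[(0.0, 1.0)] * 3,
               constraints=[cons], method="SLSQP")
print("part-load ratios:", res.x, "total power:", res.fun)
```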

  15. Optimization of the annual construction program solutions

    Directory of Open Access Journals (Sweden)

    Oleinik Pavel

    2017-01-01

    Full Text Available The article considers possible optimization solutions in scheduling while forming the annual production programs of construction complex organizations. The optimization instrument is represented as a two-component system. As a fundamentally new approach in the first block of the annual program solutions, the authors propose to use a scientifically grounded methodology for determining the scope of work permissible for transfer to a subcontractor without the General Contractor losing management control over the construction site. For this purpose, a special indicator is introduced that characterizes the activity of the general construction organization - the coefficient of construction production management. In the second block, the principal methods for forming calendar plans for the fulfillment of the critical work effort by the leading stream are proposed, depending on the intensity characteristic.

  16. Extreme Learning Machine and Particle Swarm Optimization in optimizing CNC turning operation

    Science.gov (United States)

    Janahiraman, Tiagrajah V.; Ahmad, Nooraziah; Hani Nordin, Farah

    2018-04-01

    The CNC machine is controlled by manipulating cutting parameters that directly influence the process performance. Many optimization methods have been applied to obtain the optimal cutting parameters for a desired performance function. Nonetheless, industry still uses traditional techniques to obtain those values, and lack of knowledge of optimization techniques is the main reason this issue persists. Therefore, a simple yet easy to implement Optimal Cutting Parameters Selection System is introduced to help manufacturers understand and determine the best parameters for their turning operation. This new system consists of two stages: modelling and optimization. For modelling the input-output and in-process parameters, a hybrid of Extreme Learning Machine and Particle Swarm Optimization is applied. This modelling technique tends to converge faster than other artificial intelligence techniques and gives accurate results. For the optimization stage, Particle Swarm Optimization is again used to obtain the optimal cutting parameters based on the performance function preferred by the manufacturer. Overall, the system can reduce the gap between academia and industry by introducing a simple yet easy to implement optimization technique that gives accurate results and converges quickly.
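
    A compressed sketch of the two stages as described (the training data, parameter ranges, and objective are placeholders; the ELM is the basic single-hidden-layer form with a pseudoinverse readout, and the PSO loop is a textbook global-best variant, not the authors' exact implementation):

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Stage 1: ELM surrogate of a roughness-like response (synthetic data) ---
X = rng.uniform([50, 0.05, 0.5], [200, 0.3, 2.0], size=(60, 3))  # speed, feed, depth
y = 0.002 * X[:, 1] * X[:, 2] * 1000 / np.sqrt(X[:, 0]) + rng.normal(0, 0.01, 60)

n_hidden = 40
W = rng.normal(size=(X.shape[1], n_hidden))      # random input weights (not trained)
b = rng.normal(size=n_hidden)
H = np.tanh(X @ W + b)                           # hidden-layer outputs
beta = np.linalg.pinv(H) @ y                     # output weights via pseudoinverse

def predict(x):
    return np.tanh(x @ W + b) @ beta             # ELM prediction for one point

# --- Stage 2: global-best PSO over the cutting parameters -------------------
lo, hi = np.array([50, 0.05, 0.5]), np.array([200, 0.3, 2.0])
pos = rng.uniform(lo, hi, size=(20, 3))
vel = np.zeros_like(pos)
pbest, pbest_val = pos.copy(), np.array([predict(p) for p in pos])
gbest = pbest[pbest_val.argmin()]
for _ in range(100):
    r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, lo, hi)
    val = np.array([predict(p) for p in pos])
    improved = val < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], val[improved]
    gbest = pbest[pbest_val.argmin()]
print("suggested cutting parameters:", gbest)
```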

  17. Multiobjective generalized extremal optimization algorithm for simulation of daylight illuminants

    Science.gov (United States)

    Kumar, Srividya Ravindra; Kurian, Ciji Pearl; Gomes-Borges, Marcos Eduardo

    2017-10-01

    Daylight illuminants are widely used as references for color quality testing and optical vision testing applications. Presently used daylight simulators make use of fluorescent bulbs that are not tunable and occupy more space inside the quality testing chambers. By designing a spectrally tunable LED light source with an optimal number of LEDs, cost, space, and energy can be saved. This paper describes an application of the generalized extremal optimization (GEO) algorithm for selection of the appropriate quantity and quality of LEDs that compose the light source. The multiobjective approach of this algorithm tries to get the best spectral simulation with minimum fitness error toward the target spectrum, correlated color temperature (CCT) the same as the target spectrum, high color rendering index (CRI), and luminous flux as required for testing applications. GEO is a global search algorithm based on phenomena of natural evolution and is especially designed to be used in complex optimization problems. Several simulations have been conducted to validate the performance of the algorithm. The methodology applied to model the LEDs, together with the theoretical basis for CCT and CRI calculation, is presented in this paper. A comparative result analysis of M-GEO evolutionary algorithm with the Levenberg-Marquardt conventional deterministic algorithm is also presented.

  18. Modernizing Distribution System Restoration to Achieve Grid Resiliency Against Extreme Weather Events: An Integrated Solution

    Energy Technology Data Exchange (ETDEWEB)

    Chen, Chen; Wang, Jianhui; Ton, Dan

    2017-07-07

    Recent severe power outages caused by extreme weather hazards have highlighted the importance and urgency of improving the resilience of the electric power grid. As the distribution grids still remain vulnerable to natural disasters, the power industry has focused on methods of restoring distribution systems after disasters in an effective and quick manner. The current distribution system restoration practice for utilities is mainly based on predetermined priorities and tends to be inefficient and suboptimal, and the lack of situational awareness after the hazard significantly delays the restoration process. As a result, customers may experience an extended blackout, which causes large economic loss. On the other hand, the emerging advanced devices and technologies enabled through grid modernization efforts have the potential to improve the distribution system restoration strategy. However, utilizing these resources to aid the utilities in better distribution system restoration decision-making in response to extreme weather events is a challenging task. Therefore, this paper proposes an integrated solution: a distribution system restoration decision support tool designed by leveraging resources developed for grid modernization. We first review the current distribution restoration practice and discuss why it is inadequate in response to extreme weather events. Then we describe how the grid modernization efforts could benefit distribution system restoration, and we propose an integrated solution in the form of a decision support tool to achieve the goal. The advantages of the solution include improving situational awareness of the system damage status and facilitating survivability for customers. The paper provides a comprehensive review of how the existing methodologies in the literature could be leveraged to achieve the key advantages. The benefits of the developed system restoration decision support tool include the optimal and efficient allocation of repair crews

  19. Optimal bounds and extremal trajectories for time averages in dynamical systems

    Science.gov (United States)

    Tobasco, Ian; Goluskin, David; Doering, Charles

    2017-11-01

    For systems governed by differential equations it is natural to seek extremal solution trajectories, maximizing or minimizing the long-time average of a given quantity of interest. A priori bounds on optima can be proved by constructing auxiliary functions satisfying certain point-wise inequalities, the verification of which does not require solving the underlying equations. We prove that for any bounded autonomous ODE, the problems of finding extremal trajectories on the one hand and optimal auxiliary functions on the other are strongly dual in the sense of convex duality. As a result, auxiliary functions provide arbitrarily sharp bounds on optimal time averages. Furthermore, nearly optimal auxiliary functions provide volumes in phase space where maximal and nearly maximal trajectories must lie. For polynomial systems, such functions can be constructed by semidefinite programming. We illustrate these ideas using the Lorenz system, producing explicit volumes in phase space where extremal trajectories are guaranteed to reside. Supported by NSF Award DMS-1515161, Van Loo Postdoctoral Fellowships, and the John Simon Guggenheim Foundation.
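
    The point-wise inequality mentioned above can be stated compactly (this is the standard form of the bound; the notation is chosen here, not quoted from the abstract): for an ODE dx/dt = F(x) with bounded trajectories, a differentiable auxiliary function V certifies an upper bound U on the long-time average of a quantity Φ, because F·∇V is the time derivative of V along trajectories and therefore averages to zero.

```latex
% Auxiliary-function bound on long-time averages (standard statement; notation ours)
\[
  \Phi(x) + F(x)\cdot\nabla V(x) \;\le\; U \quad \text{for all } x
  \qquad\Longrightarrow\qquad
  \limsup_{T\to\infty}\frac{1}{T}\int_{0}^{T}\Phi\bigl(x(t)\bigr)\,dt \;\le\; U .
\]
```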

  20. An Improved Real-Coded Population-Based Extremal Optimization Method for Continuous Unconstrained Optimization Problems

    Directory of Open Access Journals (Sweden)

    Guo-Qiang Zeng

    2014-01-01

    Full Text Available As a novel evolutionary optimization method, extremal optimization (EO) has been successfully applied to a variety of combinatorial optimization problems. However, the applications of EO in continuous optimization problems are relatively rare. This paper proposes an improved real-coded population-based EO method (IRPEO) for continuous unconstrained optimization problems. The key operations of IRPEO include generation of a real-coded random initial population, evaluation of individual and population fitness, selection of bad elements according to a power-law probability distribution, generation of a new population based on uniform random mutation, and updating the population by accepting the new population unconditionally. The experimental results on 10 benchmark test functions with dimension N=30 have shown that IRPEO is competitive or even better than various recently reported genetic algorithm (GA) versions with different mutation operations in terms of simplicity, effectiveness, and efficiency. Furthermore, the superiority of IRPEO to other evolutionary algorithms such as the original population-based EO, particle swarm optimization (PSO), and the hybrid PSO-EO is also demonstrated by the experimental results on some benchmark functions.
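
    The listed operations map almost one-to-one onto code; the sketch below follows that recipe on a generic box-constrained test function (the function, bounds, population size, and exponent are illustrative, not the paper's settings):

```python
import numpy as np

def irpeo(f, lo, hi, pop_size=30, tau=1.5, iters=2000, seed=0):
    """Population-based extremal optimization sketch following the steps above:
    random real-coded initial population, power-law selection of bad elements,
    uniform random mutation, and unconditional acceptance of the new population."""
    rng = np.random.default_rng(seed)
    pop = rng.uniform(lo, hi, size=(pop_size, len(lo)))
    best_x, best_f = None, np.inf
    ranks = np.arange(1, pop_size + 1)
    probs = ranks.astype(float) ** (-tau)
    probs /= probs.sum()
    for _ in range(iters):
        fit = np.apply_along_axis(f, 1, pop)
        if fit.min() < best_f:
            best_f, best_x = fit.min(), pop[fit.argmin()].copy()
        order = np.argsort(fit)[::-1]          # worst individuals first
        k = rng.choice(pop_size, p=probs)      # power-law pick of a bad element
        pop[order[k]] = rng.uniform(lo, hi)    # uniform random mutation
        # the new population is accepted unconditionally (no rejection step)
    return best_x, best_f

sphere = lambda x: float(np.sum(x**2))
print(irpeo(sphere, lo=np.full(5, -5.0), hi=np.full(5, 5.0)))
```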

  1. Optimal bounds and extremal trajectories for time averages in nonlinear dynamical systems

    Science.gov (United States)

    Tobasco, Ian; Goluskin, David; Doering, Charles R.

    2018-02-01

    For any quantity of interest in a system governed by ordinary differential equations, it is natural to seek the largest (or smallest) long-time average among solution trajectories, as well as the extremal trajectories themselves. Upper bounds on time averages can be proved a priori using auxiliary functions, the optimal choice of which is a convex optimization problem. We prove that the problems of finding maximal trajectories and minimal auxiliary functions are strongly dual. Thus, auxiliary functions provide arbitrarily sharp upper bounds on time averages. Moreover, any nearly minimal auxiliary function provides phase space volumes in which all nearly maximal trajectories are guaranteed to lie. For polynomial equations, auxiliary functions can be constructed by semidefinite programming, which we illustrate using the Lorenz system.

  2. Using qualimetric engineering and extremal analysis to optimize a proton exchange membrane fuel cell stack

    International Nuclear Information System (INIS)

    Besseris, George J.

    2014-01-01

    Highlights: • We consider the optimal configuration of a PEMFC stack. • We utilize qualimetric engineering tools (Taguchi screening, regression analysis). • We achieve an analytical solution on a restructured power-law fitting. • We discuss the Pt-cost involvement in the unit and area minimization scope. - Abstract: The optimal configuration of the proton exchange membrane fuel-cell (PEMFC) stack has received attention recently because of its potential use as an isolated energy distributor for household needs. In this work, the original complex problem of generating an optimal PEMFC stack based on the number of cell units connected in series and parallel arrangements as well as on the cell area is revisited. A qualimetric engineering strategy is formulated which is based on quickly profiling the PEMFC stack voltage response. Stochastic screening is initiated by employing an L9(3^3) Taguchi-type OA for partitioning numerically the deterministic expression of the output PEMFC stack voltage, so as to facilitate sizing the magnitude of the individual effects. The power and current household specifications for the stack system are maintained at the typical settings of 200 W at 12 V, respectively. The minimization of the stack total-area requirement is made explicit in this work. The relationship of cell voltage against cell area is cast into a power-law model by regression fitting that achieves a coefficient of determination of 99.99%. Thus, the theoretical formulation simplifies into a non-linear extremal problem with a constrained solution due to a singularity, which is solved analytically. The optimal solution requires 22 cell units connected in series, where each unit is designed with an area of 151.4 cm2. It is also demonstrated how to visualize the optimal solution using the graphical method of operating lines. The total area of 3270.24 cm2 becomes a new benchmark for the optimal design of the studied PEMFC stack configuration.

  3. Extraction of indium from extremely diluted solutions; Gewinnung von Indium aus extrem verduennten Loesungen

    Energy Technology Data Exchange (ETDEWEB)

    Vostal, Radek; Singliar, Ute; Froehlich, Peter [TU Bergakademie Freiberg (Germany). Inst. fuer Technische Chemie

    2017-02-15

    The demand for indium is rising with the growth of the electronics industry, where it is mainly used. Therefore, a multistage extraction process was developed to separate indium from a model solution whose composition was adequate to sphalerite ore. The initially very low concentration of indium in the solution was significantly increased by several successive extraction and reextraction steps. The process described is characterized by a low requirement for chemicals and a high purity of the obtained indium oxide.

  4. Computational intelligence-based optimization of maximally stable extremal region segmentation for object detection

    Science.gov (United States)

    Davis, Jeremy E.; Bednar, Amy E.; Goodin, Christopher T.; Durst, Phillip J.; Anderson, Derek T.; Bethel, Cindy L.

    2017-05-01

    Particle swarm optimization (PSO) and genetic algorithms (GAs) are two optimization techniques from the field of computational intelligence (CI) for search problems where a direct solution cannot easily be obtained. One such problem is finding an optimal set of parameters for the maximally stable extremal region (MSER) algorithm to detect areas of interest in imagery. Specifically, this paper describes the design of a GA and a PSO for optimizing MSER parameters to detect stop signs in imagery produced via simulation, for use in an autonomous vehicle navigation system. Several additions to the GA and PSO are required to successfully detect stop signs in simulated images. These additions are a primary focus of this paper and include: the identification of an appropriate fitness function, the creation of a variable mutation operator for the GA, an anytime algorithm modification to allow the GA to compute a solution quickly, the addition of an exponential velocity decay function to the PSO, the addition of an "execution best" omnipresent particle to the PSO, and the addition of an attractive force component to the PSO velocity update equation. Experimentation was performed with the GA using various combinations of selection, crossover, and mutation operators, and with the PSO using various combinations of neighborhood topologies, swarm sizes, cognitive influence scalars, and social influence scalars. The results of both the GA- and PSO-optimized parameter sets are presented. The paper details the benefits and drawbacks of each algorithm in terms of detection accuracy, execution speed, and the additions required to generate successful problem-specific parameter sets.
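
    The outer search loop is independent of any particular MSER implementation; in the sketch below, detect_and_score is a hypothetical stand-in for running the detector on the simulated images and scoring stop-sign detections, and the parameter ranges and GA settings are invented:

```python
import random

# Hypothetical search ranges for three MSER-style parameters.
RANGES = {"delta": (1, 20), "min_area": (30, 2000), "max_area": (2000, 20000)}

def detect_and_score(params):
    """Stand-in fitness: in the real system this would run the detector with
    `params` on the image set and return detection accuracy. Here it simply
    peaks at an arbitrary target parameter set so the sketch runs end to end."""
    target = {"delta": 5, "min_area": 60, "max_area": 14400}
    return -sum(abs(params[k] - target[k]) / (RANGES[k][1] - RANGES[k][0])
                for k in RANGES)

def genetic_search(fitness, pop_size=20, generations=40, p_mut=0.2, rng=random):
    keys = list(RANGES)
    pop = [{k: rng.uniform(*RANGES[k]) for k in keys} for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(pop, key=fitness, reverse=True)
        parents = scored[: pop_size // 2]                       # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            child = {k: rng.choice((a[k], b[k])) for k in keys}  # uniform crossover
            for k in keys:                                       # per-gene mutation
                if rng.random() < p_mut:
                    child[k] = rng.uniform(*RANGES[k])
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

print(genetic_search(detect_and_score))
```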

  5. Going Extreme For Small Solutions To Big Environmental Challenges

    Energy Technology Data Exchange (ETDEWEB)

    Bagwell, Christopher E.

    2011-03-31

    This chapter is devoted to the scale, scope, and specific issues confronting the cleanup and long-term disposal of the U.S. nuclear legacy generated during WWII and the Cold War Era. The research reported is aimed at complex microbiological interactions with legacy waste materials generated by past nuclear production activities in the United States. The intended purpose of this research is to identify cost effective solutions to the specific problems (stability) and environmental challenges (fate, transport, exposure) in managing and detoxifying persistent contaminant species. Specifically addressed are high level waste microbiology and bacteria inhabiting plutonium laden soils in the unsaturated subsurface.

  6. Framatome ANP outage optimization support solutions

    International Nuclear Information System (INIS)

    Bombail, Jean Paul

    2003-01-01

    Over the last several years, leading plant operators have demonstrated that availability factors can be improved while safety and reliability can be enhanced on a long-term basis and operating costs reduced. Outage optimization is the term now used to describe these long-term initiatives, through which a variety of measures aimed at shortening scheduled plant outages have been developed and successfully implemented by these leaders, working with service providers who were introducing new technologies and process improvements. Following the leaders, all operators now have ambitious outage optimization plans, and the median and average outage durations are decreasing worldwide. Future objectives are even more stringent and must include plant upgrades and component replacements performed for plant life extension. Outage optimization covers a broad range of activities, from modifications of plant systems to faster cooldown rates to human behavior improvements. It has been proven to reduce costs, avoid unplanned outages, and thus support plant availability and help ensure the utility's competitive position in the marketplace.

  7. Finding Multiple Optimal Solutions to Optimal Load Distribution Problem in Hydropower Plant

    Directory of Open Access Journals (Sweden)

    Xinhao Jiang

    2012-05-01

    Full Text Available Optimal load distribution (OLD) among generator units of a hydropower plant is a vital task for hydropower generation scheduling and management. Traditional optimization methods for solving this problem focus on finding a single optimal solution. However, many practical constraints on hydropower plant operation are very difficult, if not impossible, to model, and the optimal solution found by those models might be of limited practical use. This motivates us to find multiple optimal solutions to the OLD problem, which can provide more flexible choices for decision-making. Based on a special dynamic programming model, we use a modified shortest path algorithm to produce multiple solutions to the problem. It is shown that multiple optimal solutions exist for the case study of China's Geheyan hydropower plant, and they are valuable for assessing the stability of generator units, showing the potential of reducing the number of times units cross vibration areas.

  8. Optimization of process and solution parameters in electrospinning polyethylene oxide

    CSIR Research Space (South Africa)

    Jacobs, V

    2011-11-01

    Full Text Available This paper reports the optimization of electrospinning process and solution parameters using a factorial design approach to obtain uniform polyethylene oxide (PEO) nanofibers. The parameters studied were distance between nozzle and collector screen...

  9. IMRT optimization: Variability of solutions and its radiobiological impact

    International Nuclear Information System (INIS)

    Mattia, Maurizio; Del Giudice, Paolo; Caccia, Barbara

    2004-01-01

    We aim at (1) defining and measuring a 'complexity' index for the optimization process of an intensity modulated radiation therapy treatment plan (IMRT TP), (2) devising an efficient approximate optimization strategy, and (3) evaluating the impact of the complexity of the optimization process on the radiobiological quality of the treatment. In this work, for a prostate therapy case, the IMRT TP optimization problem has been formulated in terms of dose-volume constraints. The cost function has been minimized in order to achieve the optimal solution, by means of an iterative procedure, which is repeated for many initial modulation profiles, and for each of them the final optimal solution is recorded. To explore the complexity of the space of such solutions we have chosen to minimize the cost function with an algorithm that is unable to avoid local minima. The size of the (sub)optimal solutions distribution is taken as an indicator of the complexity of the optimization problem. The impact of the estimated complexity on the probability of success of the therapy is evaluated using radiobiological indicators (Poissonian TCP model [S. Webb and A. E. Nahum, Phys. Med. Biol. 38(6), 653-666 (1993)] and NTCP relative seriality model [Kallman et al., Int. J. Radiat. Biol. 62(2), 249-262 (1992)]). We find in the examined prostate case a nontrivial distribution of local minima, which has symmetry properties allowing a good estimate of near-optimal solutions with a moderate computational load. We finally demonstrate that reducing the a priori uncertainty in the optimal solution results in a significant improvement of the probability of success of the TP, based on TCP and NTCP estimates

  10. Optimal Mortgage Refinancing: A Closed Form Solution.

    Science.gov (United States)

    Agarwal, Sumit; Driscoll, John C; Laibson, David I

    2013-06-01

    We derive the first closed-form optimal refinancing rule: Refinance when the current mortgage interest rate falls below the original rate by at least (1/ψ)[ϕ + W(−exp(−ϕ))]. In this formula W(·) is the Lambert W-function, ψ = √(2(ρ+λ))/σ, ϕ = 1 + ψ(ρ+λ)κ/[M(1−τ)], ρ is the real discount rate, λ is the expected real rate of exogenous mortgage repayment, σ is the standard deviation of the mortgage rate, κ/M is the ratio of the tax-adjusted refinancing cost and the remaining mortgage value, and τ is the marginal tax rate. This expression is derived by solving a tractable class of refinancing problems. Our quantitative results closely match those reported by researchers using numerical methods.

  11. Optimal Mortgage Refinancing: A Closed Form Solution

    Science.gov (United States)

    Agarwal, Sumit; Driscoll, John C.; Laibson, David I.

    2013-01-01

    We derive the first closed-form optimal refinancing rule: Refinance when the current mortgage interest rate falls below the original rate by at least (1/ψ)[ϕ + W(−exp(−ϕ))]. In this formula W(·) is the Lambert W-function, ψ = √(2(ρ+λ))/σ, ϕ = 1 + ψ(ρ+λ)κ/[M(1−τ)], ρ is the real discount rate, λ is the expected real rate of exogenous mortgage repayment, σ is the standard deviation of the mortgage rate, κ/M is the ratio of the tax-adjusted refinancing cost and the remaining mortgage value, and τ is the marginal tax rate. This expression is derived by solving a tractable class of refinancing problems. Our quantitative results closely match those reported by researchers using numerical methods. PMID:25843977
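
    Reading the rule as printed, and taking W to be the principal branch as implemented in scipy.special.lambertw, the threshold can be evaluated directly; the numerical inputs below are placeholders, not values from the paper:

```python
import numpy as np
from scipy.special import lambertw

def refinance_threshold(rho, lam, sigma, kappa_over_M, tau):
    """Interest-rate drop at which refinancing becomes optimal, following the
    closed-form rule quoted above: (1/psi) * (phi + W(-exp(-phi)))."""
    psi = np.sqrt(2.0 * (rho + lam)) / sigma
    phi = 1.0 + psi * (rho + lam) * kappa_over_M / (1.0 - tau)
    return float((phi + lambertw(-np.exp(-phi)).real) / psi)

# Placeholder inputs: 5% real discount rate, 10% exogenous repayment rate,
# mortgage-rate volatility of 1.09% per year, refinancing cost of 1% of the
# remaining balance, 28% marginal tax rate.
print(refinance_threshold(rho=0.05, lam=0.10, sigma=0.0109,
                          kappa_over_M=0.01, tau=0.28))
```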

  12. Optimal adaptation to extreme rainfalls in current and future climate

    DEFF Research Database (Denmark)

    Rosbjerg, Dan

    2017-01-01

    More intense and frequent rainfalls have increased the number of urban flooding events in recent years, prompting adaptation efforts. Economic optimization is considered an efficient tool to decide on the design level for adaptation. The costs associated with a flooding to the T-year level and the annual capital and operational costs of adapting to this level are described with log-linear relations. The total flooding costs are developed as the expected annual damage of flooding above the T-year level plus the annual capital and operational costs for ensuring no flooding below the T-year level. The value of the return period T that corresponds to the minimum of the sum of these costs will then be the optimal adaptation level. The change in climate, however, is expected to continue in the next century, which calls for expansion of the above model. The change can be expressed in terms of a climate factor (the ratio between the future and the current design level).

  13. Optimal adaptation to extreme rainfalls under climate change

    Science.gov (United States)

    Rosbjerg, Dan

    2017-04-01

    More intense and frequent rainfalls have increased the number of urban flooding events in recent years, prompting adaptation efforts. Economic optimization is considered an efficient tool to decide on the design level for adaptation. The costs associated with a flooding to the T-year level and the annual capital and operational costs of adapting to this level are described with log-linear relations. The total flooding costs are developed as the expected annual damage of flooding above the T-year level plus the annual capital and operational costs for ensuring no flooding below the T-year level. The value of the return period T that corresponds to the minimum of the sum of these costs will then be the optimal adaptation level. The change in climate, however, is expected to continue in the next century, which calls for expansion of the above model. The change can be expressed in terms of a climate factor (the ratio between the future and the current design level) which is assumed to increase in time. This implies increasing costs of flooding in the future for many places in the world. The optimal adaptation level is found for immediate as well as for delayed adaptation. In these cases the optimum is determined by considering the net present value of the incurred costs during a sufficiently long time span. Immediate as well as delayed adaptation is considered.

  14. Optimal adaptation to extreme rainfalls in current and future climate

    Science.gov (United States)

    Rosbjerg, Dan

    2017-01-01

    More intense and frequent rainfalls have increased the number of urban flooding events in recent years, prompting adaptation efforts. Economic optimization is considered an efficient tool to decide on the design level for adaptation. The costs associated with a flooding to the T-year level and the annual capital and operational costs of adapting to this level are described with log-linear relations. The total flooding costs are developed as the expected annual damage of flooding above the T-year level plus the annual capital and operational costs for ensuring no flooding below the T-year level. The value of the return period T that corresponds to the minimum of the sum of these costs will then be the optimal adaptation level. The change in climate, however, is expected to continue in the next century, which calls for expansion of the above model. The change can be expressed in terms of a climate factor (the ratio between the future and the current design level) which is assumed to increase in time. This implies increasing costs of flooding in the future for many places in the world. The optimal adaptation level is found for immediate as well as for delayed adaptation. In these cases, the optimum is determined by considering the net present value of the incurred costs during a sufficiently long time-span. Immediate as well as delayed adaptation is considered.
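
    The cost trade-off described in these records fits in a few lines of code; in the sketch below the cost functions, coefficients, and climate factor are invented for illustration (the damage term decreases with the design return period T, the adaptation term increases with it, and the optimum is the T minimizing their sum):

```python
import numpy as np

# Illustrative cost relations (monetary units per year):
# expected annual damage from floods exceeding the T-year design level, and the
# annualised capital + operational cost of protecting up to the T-year level.
def expected_damage(T):
    return 5.0e6 * T ** -0.8                    # decreasing in the design level T

def adaptation_cost(T):
    return 2.0e5 * (1.0 + np.log(T))            # increasing in the design level T

T = np.arange(2, 501)
total = expected_damage(T) + adaptation_cost(T)
print("optimal design return period:", T[total.argmin()], "years")

# Crude climate-change variant: with a climate factor > 1 the same physical
# design only protects against a smaller effective return period, which raises
# the damage term and shifts the optimum towards higher T.
climate_factor = 1.3
total_future = expected_damage(T / climate_factor) + adaptation_cost(T)
print("optimal return period under the climate factor:", T[total_future.argmin()], "years")
```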

  15. New numerical methods for open-loop and feedback solutions to dynamic optimization problems

    Science.gov (United States)

    Ghosh, Pradipto

    The topic of the first part of this research is trajectory optimization of dynamical systems via computational swarm intelligence. Particle swarm optimization is a nature-inspired heuristic search method that relies on a group of potential solutions to explore the fitness landscape. Conceptually, each particle in the swarm uses its own memory as well as the knowledge accumulated by the entire swarm to iteratively converge on an optimal or near-optimal solution. It is relatively straightforward to implement and unlike gradient-based solvers, does not require an initial guess or continuity in the problem definition. Although particle swarm optimization has been successfully employed in solving static optimization problems, its application in dynamic optimization, as posed in optimal control theory, is still relatively new. In the first half of this thesis particle swarm optimization is used to generate near-optimal solutions to several nontrivial trajectory optimization problems including thrust programming for minimum fuel, multi-burn spacecraft orbit transfer, and computing minimum-time rest-to-rest trajectories for a robotic manipulator. A distinct feature of the particle swarm optimization implementation in this work is the runtime selection of the optimal solution structure. Optimal trajectories are generated by solving instances of constrained nonlinear mixed-integer programming problems with the swarming technique. For each solved optimal programming problem, the particle swarm optimization result is compared with a nearly exact solution found via a direct method using nonlinear programming. Numerical experiments indicate that swarm search can locate solutions to very great accuracy. The second half of this research develops a new extremal-field approach for synthesizing nearly optimal feedback controllers for optimal control and two-player pursuit-evasion games described by general nonlinear differential equations. A notable revelation from this development

  16. Design of materials with extreme thermal expansion using a three-phase topology optimization method

    DEFF Research Database (Denmark)

    Sigmund, Ole; Torquato, S.

    1997-01-01

    We show how composites with extremal or unusual thermal expansion coefficients can be designed using a numerical topology optimization method. The composites are composed of two different material phases and void. The optimization method is illustrated by designing materials having maximum directional thermal expansion (thermal actuators), zero isotropic thermal expansion, and negative isotropic thermal expansion.

  17. Complicated problem solution techniques in optimal parameter searching

    International Nuclear Information System (INIS)

    Gergel', V.P.; Grishagin, V.A.; Rogatneva, E.A.; Strongin, R.G.; Vysotskaya, I.N.; Kukhtin, V.V.

    1992-01-01

    An algorithm is presented of a global search for the numerical solution of multidimensional multiextremal multicriteria optimization problems with complicated constraints. Boundedness of the changes of the object characteristics is assumed for restricted changes of its parameters (Lipschitz condition). The algorithm was realized as a computer code. The program was used to solve various applied optimization problems in practice. 10 refs.; 3 figs
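
    In one dimension, the Lipschitz assumption alone already supports a simple global search of the Piyavskii-Shubert type, sketched below purely as an illustration of the idea (the multidimensional multicriteria algorithm of the record is more involved; the test function and Lipschitz constant are made up):

```python
import math

def piyavskii(f, a, b, L, iters=60):
    """1-D global minimisation of an L-Lipschitz function on [a, b] using the
    Piyavskii-Shubert saw-tooth lower bound: repeatedly evaluate f at the point
    where the current lower bound is smallest."""
    pts = [(a, f(a)), (b, f(b))]
    for _ in range(iters):
        best = None
        for (x1, f1), (x2, f2) in zip(pts, pts[1:]):
            # minimiser and value of the saw-tooth lower bound on [x1, x2]
            x = 0.5 * (x1 + x2) + (f1 - f2) / (2.0 * L)
            bound = 0.5 * (f1 + f2) - 0.5 * L * (x2 - x1)
            if best is None or bound < best[0]:
                best = (bound, x)
        _, x_new = best
        pts.append((x_new, f(x_new)))
        pts.sort(key=lambda p: p[0])
    return min(pts, key=lambda p: p[1])

# toy multiextremal test function; |f'| <= 1 + 10/3, so L = 4.5 is a valid constant
print(piyavskii(lambda x: math.sin(x) + math.sin(10 * x / 3), 0.0, 10.0, L=4.5))
```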

  18. Analytic solution to variance optimization with no short positions

    Science.gov (United States)

    Kondor, Imre; Papp, Gábor; Caccioli, Fabio

    2017-12-01

    We consider the variance portfolio optimization problem with a ban on short selling. We provide an analytical solution by means of the replica method for the case of a portfolio of independent, but not identically distributed, assets. We study the behavior of the solution as a function of the ratio r between the number N of assets and the length T of the time series of returns used to estimate risk. The no-short-selling constraint acts as an asymmetric ℓ1 regularizer.
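
    For comparison with the analytic replica result, the constrained problem itself is straightforward to set up numerically; the sketch below uses a synthetic return history and scipy's SLSQP solver, and simply reports how many weights the no-short-selling constraint drives to zero:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
N, T = 10, 50
returns = rng.normal(size=(T, N))          # synthetic return history (r = N/T = 0.2)
Sigma = np.cov(returns, rowvar=False)      # estimated covariance matrix

res = minimize(lambda w: w @ Sigma @ w,
               x0=np.full(N, 1.0 / N),
               bounds=[(0.0, None)] * N,                          # no short positions
               constraints=[{"type": "eq", "fun": lambda w: w.sum() - 1.0}],
               method="SLSQP")
w = res.x
print("fraction of assets driven to zero weight:", np.mean(w < 1e-6))
```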

  19. Inverse planning and optimization: a comparison of solutions

    Energy Technology Data Exchange (ETDEWEB)

    Ringor, Michael [School of Health Sciences, Purdue University, West Lafayette, IN (United States); Papiez, Lech [Department of Radiation Oncology, Indiana University, Indianapolis, IN (United States)

    1998-09-01

    The basic problem in radiation therapy treatment planning is to determine an appropriate set of treatment parameters that would induce an effective dose distribution inside a patient. One can approach this task as an inverse problem, or as an optimization problem. In this presentation, we compare both approaches. The inverse problem is presented as a dose reconstruction problem similar to tomography reconstruction. We formulate the optimization problem as linear and quadratic programs. Explicit comparisons are made between the solutions obtained by inversion and those obtained by optimization for the case in which scatter and attenuation are ignored (the NS-NA approximation)

  20. On the complexity of determining tolerances for ε-optimal solutions to min-max combinatorial optimization problems

    NARCIS (Netherlands)

    Ghosh, D.; Sierksma, G.

    2000-01-01

    Sensitivity analysis of ε-optimal solutions is the problem of calculating the range within which a problem parameter may lie so that the given solution remains ε-optimal. In this paper we study the sensitivity analysis problem for ε-optimal solutions to combinatorial optimization problems with

  1. Non-extremal black hole solutions from the c-map

    International Nuclear Information System (INIS)

    Errington, D.; Mohaupt, T.; Vaughan, O.

    2015-01-01

    We construct new static, spherically symmetric non-extremal black hole solutions of four-dimensional N=2 supergravity, using a systematic technique based on dimensional reduction over time (the c-map) and the real formulation of special geometry. For a certain class of models we actually obtain the general solution to the full second order equations of motion, whilst for other classes of models, such as those obtainable by dimensional reduction from five dimensions, heterotic tree-level models, and type-II Calabi-Yau compactifications in the large volume limit a partial set of solutions are found. When considering specifically non-extremal black hole solutions we find that regularity conditions reduce the number of integration constants by one half. Such solutions satisfy a unique set of first order equations, which we identify. Several models are investigated in detail, including examples of non-homogeneous spaces such as the quantum deformed STU model. Though we focus on static, spherically symmetric solutions of ungauged supergravity, the method is adaptable to other types of solutions and to gauged supergravity.

  2. Optimization of the solution of the problem of scheduling theory ...

    African Journals Online (AJOL)

    This article describes a genetic algorithm used to solve a problem from scheduling theory. A large number of different methods are described in the scientific literature. The main difficulty of the problem in question is that the optimal solution must be sought in a large search space for the set of ...

  3. Portfolio optimization for heavy-tailed assets: Extreme Risk Index vs. Markowitz

    OpenAIRE

    Mainik, Georg; Mitov, Georgi; Rüschendorf, Ludger

    2015-01-01

    Using daily returns of the S&P 500 stocks from 2001 to 2011, we perform a backtesting study of the portfolio optimization strategy based on the extreme risk index (ERI). This method uses multivariate extreme value theory to minimize the probability of large portfolio losses. With more than 400 stocks to choose from, our study seems to be the first application of extreme value techniques in portfolio management on a large scale. The primary aim of our investigation is the potential of ERI in p...

  4. Optimality conditions for the numerical solution of optimization problems with PDE constraints :

    Energy Technology Data Exchange (ETDEWEB)

    Aguilo Valentin, Miguel Alejandro; Ridzal, Denis

    2014-03-01

    A theoretical framework for the numerical solution of partial differential equation (PDE) constrained optimization problems is presented in this report. This theoretical framework embodies the fundamental infrastructure required to efficiently implement and solve this class of problems. Detailed derivations of the optimality conditions required to accurately solve several parameter identification and optimal control problems are also provided in this report. This will allow the reader to further understand how the theoretical abstraction presented in this report translates to the application.

  5. Design of materials with extreme thermal expansion using a three-phase topology optimization method

    DEFF Research Database (Denmark)

    Sigmund, Ole; Torquato, S.

    1997-01-01

    Composites with extremal or unusual thermal expansion coefficients are designed using a three-phase topology optimization method. The composites are made of two different material phases and a void phase. The topology optimization method consists in finding the distribution of material phases...... materials having maximum directional thermal expansion (thermal actuators), zero isotropic thermal expansion, and negative isotropic thermal expansion. It is shown that materials with effective negative thermal expansion coefficients can be obtained by mixing two phases with positive thermal expansion...

  6. Research into Financial Position of Listed Companies following Classification via Extreme Learning Machine Based upon DE Optimization

    OpenAIRE

    Fu Yu; Mu Jiong; Duan Xu Liang

    2016-01-01

    By means of an extreme learning machine model based upon DE optimization, this article centers on the optimization thinking behind such a model as well as its application to the classification of listed companies' financial position. A comparison shows that the improved extreme learning machine algorithm based upon DE optimization outperforms the traditional extreme learning machine algorithm. Meanwhile, this article also intends to introduce certain research...

  7. Optimal calibration of variable biofuel blend dual-injection engines using sparse Bayesian extreme learning machine and metaheuristic optimization

    International Nuclear Information System (INIS)

    Wong, Ka In; Wong, Pak Kin

    2017-01-01

    Highlights: • A new calibration method is proposed for dual-injection engines under biofuel blends. • Sparse Bayesian extreme learning machine and flower pollination algorithm are employed in the proposed method. • An SI engine is retrofitted for operating under dual-injection strategy. • The proposed method is verified experimentally under the two idle speed conditions. • Comparison with other machine learning methods and optimization algorithms is conducted. - Abstract: Although many combinations of biofuel blends are available in the market, it is more beneficial to vary the ratio of biofuel blends at different engine operating conditions for optimal engine performance. Dual-injection engines have the potential to implement such a function. However, while optimal engine calibration is critical for achieving high performance, the use of two injection systems, together with other modern engine technologies, makes the calibration of dual-injection engines a very complicated task. The traditional trial-and-error calibration approach can no longer be adopted, as it would be time-, fuel-, and labor-consuming. Therefore, a new and fast calibration method based on sparse Bayesian extreme learning machine (SBELM) and metaheuristic optimization is proposed to optimize dual-injection engines operating with biofuels. A dual-injection spark-ignition engine fueled with ethanol and gasoline is employed for demonstration purposes. The engine response for various parameters is first acquired, and an engine model is then constructed using SBELM. With the engine model, the optimal engine settings are determined based on recently proposed metaheuristic optimization methods. Experimental results validate the optimal settings obtained with the proposed methodology, indicating that the use of machine learning and metaheuristic optimization for dual-injection engine calibration is effective and promising.

  8. Investigation of the existence and uniqueness of extremal and positive definite solutions of nonlinear matrix equations

    Directory of Open Access Journals (Sweden)

    Abdel-Shakoor M Sarhan

    2016-05-01

    Full Text Available We consider two nonlinear matrix equations $X^{r} \pm \sum_{i=1}^{m} A_{i}^{*} X^{\delta_{i}} A_{i} = I$, where $-1 < \delta_{i} < 0$ and r, m are positive integers. For the first equation (plus case), we prove the existence of positive definite solutions and extremal solutions. Two algorithms and proofs of their convergence to the extremal positive definite solutions are constructed. For the second equation (negative case), we prove the existence and uniqueness of a positive definite solution. Moreover, the algorithm given in (Duan et al. in Linear Algebra Appl. 429:110-121, 2008) (actually, in (Shi et al. in Linear Multilinear Algebra 52:1-15, 2004)) for $r = 1$ is proved to be valid for any r. Numerical examples are given to illustrate the performance and effectiveness of all the constructed algorithms. In the Appendix, we analyze the ordering on the positive cone $\overline{P(n)}$.

  9. Optimized Extreme Learning Machine for Power System Transient Stability Prediction Using Synchrophasors

    Directory of Open Access Journals (Sweden)

    Yanjun Zhang

    2015-01-01

    Full Text Available A new optimized extreme learning machine (ELM)-based method for power system transient stability prediction (TSP) using synchrophasors is presented in this paper. First, the input features symbolizing the transient stability of power systems are extracted from synchronized measurements. Then, an ELM classifier is employed to build the TSP model. Finally, the parameters of the model are optimized by using the improved particle swarm optimization (IPSO) algorithm. The novelty of the proposal lies in the fact that it improves the prediction performance of the ELM-based TSP model by using IPSO to optimize the parameters of the model with synchrophasors. Based on the test results on both the IEEE 39-bus system and a large-scale real power system, the correctness and validity of the presented approach are verified.

  10. Optimal Design Solutions for Permanent Magnet Synchronous Machines

    Directory of Open Access Journals (Sweden)

    POPESCU, M.

    2011-11-01

    Full Text Available This paper presents optimal design solutions for reducing the cogging torque of permanent magnet synchronous machines. A first solution proposed in the paper consists of using closed stator slots, which give a nearly isotropic magnetic structure of the stator core, reducing the mutual attraction between permanent magnets and the slotted armature. To avoid complications in the winding manufacturing technology, the stator slots are closed using wedges made of soft magnetic composite materials. The second solution consists of properly choosing the combination of pole number and stator slot number, which typically leads to a winding with a fractional number of slots per pole per phase. The proposed measures for cogging torque reduction are analyzed by means of 2D/3D finite element models developed using the professional Flux software package. Numerical results are discussed and compared with experimental ones obtained by testing a PMSM prototype.

  11. Persistent junk solutions in time-domain modeling of extreme mass ratio binaries

    International Nuclear Information System (INIS)

    Field, Scott E.; Hesthaven, Jan S.; Lau, Stephen R.

    2010-01-01

    In the context of metric perturbation theory for nonspinning black holes, extreme mass ratio binary systems are described by distributionally forced master wave equations. Numerical solution of a master wave equation as an initial boundary value problem requires initial data. However, because the correct initial data for generic-orbit systems is unknown, specification of trivial initial data is a common choice, despite being inconsistent and resulting in a solution which is initially discontinuous in time. As is well known, this choice leads to a burst of junk radiation which eventually propagates off the computational domain. We observe another potential consequence of trivial initial data: development of a persistent spurious solution, here referred to as the Jost junk solution, which contaminates the physical solution for long times. This work studies the influence of both types of junk on metric perturbations, waveforms, and self-force measurements, and it demonstrates that smooth modified source terms mollify the Jost solution and reduce junk radiation. Our concluding section discusses the applicability of these observations to other numerical schemes and techniques used to solve distributionally forced master wave equations.

  12. Research into Financial Position of Listed Companies following Classification via Extreme Learning Machine Based upon DE Optimization

    Directory of Open Access Journals (Sweden)

    Fu Yu

    2016-01-01

    Full Text Available Using an extreme learning machine model based on differential evolution (DE) optimization, this article focuses on the design of such a model and on its application to classifying the financial position of listed companies. Comparative results show that the improved DE-optimized extreme learning machine algorithm outperforms the traditional extreme learning machine algorithm. The article also introduces research ideas from extreme learning machines into the economic classification area, with the aim of computerizing the fast yet effective evaluation of the massive financial statements of listed companies belonging to different classes.

  13. Multiresolution strategies for the numerical solution of optimal control problems

    Science.gov (United States)

    Jain, Sachin

    There exist many numerical techniques for solving optimal control problems, but less work has been done on making these algorithms run faster and more robustly. The main motivation of this work is to solve optimal control problems accurately in a fast and efficient way. Optimal control problems are often characterized by discontinuities or switchings in the control variables. One way of accurately capturing the irregularities in the solution is to use a high-resolution (dense) uniform grid, which requires a large amount of computational resources both in terms of CPU time and memory. Hence, in order to accurately capture any irregularities in the solution with limited computational resources, one can refine the mesh locally in the region close to an irregularity instead of refining the mesh uniformly over the whole domain. Therefore, a novel multiresolution scheme for data compression has been designed which is shown to outperform similar data compression schemes. Specifically, we have shown that the proposed approach results in fewer grid points compared to a common multiresolution data compression scheme. The validity of the proposed mesh refinement algorithm has been verified by solving several challenging initial-boundary value problems for evolution equations in 1D. The examples have demonstrated the stability and robustness of the proposed algorithm. The algorithm adapted dynamically to any existing or emerging irregularities in the solution by automatically allocating more grid points to the region where the solution exhibited sharp features and fewer points to the region where the solution was smooth. Thereby, the computational time and memory usage have been reduced significantly, while maintaining an accuracy equivalent to the one obtained using a fine uniform mesh. Next, a direct multiresolution-based approach for solving trajectory optimization problems is developed. The original optimal control problem is transcribed into a
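    A minimal sketch of the threshold-driven local refinement idea described above, under the assumption of a simple interpolation-error criterion (it is not the author's multiresolution scheme): midpoints are added only where linear interpolation fails to reproduce the function, so points cluster near sharp features while smooth regions stay coarse.

```python
import numpy as np

def refine(x, f, tol=1e-3, max_levels=8):
    """Start from a coarse grid x and insert midpoints wherever linear
    interpolation of f is in error by more than tol."""
    for _ in range(max_levels):
        mids = 0.5 * (x[:-1] + x[1:])
        detail = np.abs(f(mids) - 0.5 * (f(x[:-1]) + f(x[1:])))  # interpolation error
        new_pts = mids[detail > tol]                              # refine only near sharp features
        if new_pts.size == 0:
            break
        x = np.sort(np.concatenate([x, new_pts]))
    return x

# Example: a nearly discontinuous profile; grid points cluster around the switch at t = 0.5.
grid = refine(np.linspace(0.0, 1.0, 9), lambda t: np.tanh(200.0 * (t - 0.5)))
```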

  14. Optimized extreme learning machine for urban land cover classification using hyperspectral imagery

    Science.gov (United States)

    Su, Hongjun; Tian, Shufang; Cai, Yue; Sheng, Yehua; Chen, Chen; Najafian, Maryam

    2017-12-01

    This work presents a new urban land cover classification framework using the firefly algorithm (FA) optimized extreme learning machine (ELM). FA is adopted to optimize the regularization coefficient C and Gaussian kernel σ for kernel ELM. Additionally, effectiveness of spectral features derived from an FA-based band selection algorithm is studied for the proposed classification task. Three sets of hyperspectral databases were recorded using different sensors, namely HYDICE, HyMap, and AVIRIS. Our study shows that the proposed method outperforms traditional classification algorithms such as SVM and reduces computational cost significantly.
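    A hedged sketch of the kernel ELM whose two hyperparameters (regularization coefficient C and Gaussian kernel width sigma) the firefly algorithm tunes in this record. The closed-form solve follows the standard kernel ELM formulation; the C and sigma values below are placeholders, not the tuned values from the paper.

```python
import numpy as np

def rbf_kernel(A, B, sigma):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)   # squared Euclidean distances
    return np.exp(-d2 / (2.0 * sigma ** 2))

def kelm_fit(X, T, C=10.0, sigma=1.0):
    """Kernel ELM output weights: alpha = (I/C + K)^-1 T."""
    K = rbf_kernel(X, X, sigma)
    return np.linalg.solve(np.eye(len(X)) / C + K, T)

def kelm_predict(X_test, X_train, alpha, sigma=1.0):
    return rbf_kernel(X_test, X_train, sigma) @ alpha
```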

  15. A modified probabilistic genetic algorithm for the solution of complex constrained optimization problems

    OpenAIRE

    Vorozheikin, A.; Gonchar, T.; Panfilov, I.; Sopov, E.; Sopov, S.

    2009-01-01

    A new algorithm for the solution of complex constrained optimization problems, based on the probabilistic genetic algorithm with optimal solution prediction, is proposed. Results of an efficiency investigation in comparison with the standard genetic algorithm are presented.

  16. Solar photovoltaic power forecasting using optimized modified extreme learning machine technique

    Directory of Open Access Journals (Sweden)

    Manoja Kumar Behera

    2018-06-01

    Full Text Available Prediction of photovoltaic power is a significant research area that uses different forecasting techniques to mitigate the effects of the uncertainty of photovoltaic generation. Increasingly high penetration levels of photovoltaic (PV) generation arise in the smart grid and microgrid context. The solar source is irregular in nature; as a result, PV power is intermittent and highly dependent on irradiance, temperature and other atmospheric parameters. Large-scale photovoltaic generation and its penetration into the conventional power system introduce significant challenges to microgrid and smart grid energy management. Accurate forecasting of solar power/irradiance is therefore critical to securing the economic operation of the microgrid and smart grid. In this paper an extreme learning machine (ELM) technique is used for PV power forecasting of a real-time model whose location is given in Table 1. The model is combined with the incremental conductance (IC) maximum power point tracking (MPPT) technique based on a proportional-integral (PI) controller, simulated in MATLAB/SIMULINK software. To train the single-layer feed-forward network (SLFN), the ELM algorithm is implemented, with weights updated by different particle swarm optimization (PSO) techniques, and its performance is compared with existing models such as the back propagation (BP) forecasting model. Keywords: PV array, Extreme learning machine, Maximum power point tracking, Particle swarm optimization, Craziness particle swarm optimization, Accelerate particle swarm optimization, Single layer feed-forward network

  17. New Dandelion Algorithm Optimizes Extreme Learning Machine for Biomedical Classification Problems

    Directory of Open Access Journals (Sweden)

    Xiguang Li

    2017-01-01

    Full Text Available Inspired by the behavior of dandelion sowing, a novel swarm intelligence algorithm, the dandelion algorithm (DA), is proposed in this paper for the global optimization of complex functions. In DA, the dandelion population is divided into two subpopulations, and the subpopulations undergo different sowing behaviors. Moreover, another sowing method is designed to jump out of local optima. To validate DA, we compare the proposed algorithm with existing algorithms, including the bat algorithm, particle swarm optimization, and the enhanced fireworks algorithm. Simulations show that the proposed algorithm is clearly superior to the other algorithms. The proposed algorithm is then applied to optimize the extreme learning machine (ELM) for biomedical classification problems, with considerable effect. Finally, different fusion methods are used to form different fusion classifiers, which achieve higher accuracy and better stability to some extent.
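    A toy interpretation of the two-subpopulation sowing idea summarized above, not the authors' exact DA: the fitter half of the population sows locally, the remaining half sows widely, and an occasional long-range jump stands in for the extra sowing method used to escape local optima. Radii, jump rate, and the greedy replacement rule are illustrative assumptions.

```python
import numpy as np

def dandelion_step(pop, f, rng, r_local=0.1, r_global=1.0, p_jump=0.05):
    """One sowing generation on a (pop_size, dim) array for a minimization problem f."""
    order = np.argsort([f(x) for x in pop])                  # best (lowest f) first
    new_pop = []
    for i, idx in enumerate(order):
        radius = r_local if i < len(pop) // 2 else r_global  # two subpopulations, two behaviors
        seed = pop[idx] + rng.normal(0.0, radius, size=pop[idx].shape)
        if rng.random() < p_jump:                            # rare long-range sowing
            seed = pop[idx] + rng.normal(0.0, 10.0 * r_global, size=pop[idx].shape)
        new_pop.append(seed if f(seed) < f(pop[idx]) else pop[idx])  # keep the better of parent/seed
    return np.array(new_pop)
```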

  18. Topology-oblivious optimization of MPI broadcast algorithms on extreme-scale platforms

    KAUST Repository

    Hasanov, Khalid

    2015-11-01

    Significant research has been conducted in collective communication operations, in particular in MPI broadcast, on distributed memory platforms. Most of the research efforts aim to optimize the collective operations for particular architectures by taking into account either their topology or platform parameters. In this work we propose a simple but general approach to optimization of the legacy MPI broadcast algorithms, which are widely used in MPICH and Open MPI. The proposed optimization technique is designed to address the challenge of extreme scale of future HPC platforms. It is based on hierarchical transformation of the traditionally flat logical arrangement of communicating processors. Theoretical analysis and experimental results on IBM BlueGene/P and a cluster of the Grid'5000 platform are presented.
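    A hedged sketch of the hierarchical transformation idea in mpi4py: the flat communicator is split into groups, the payload is broadcast to the group leaders first and then within each group. The group size and the two-level layout are illustrative assumptions, not the paper's exact transformation of the MPICH/Open MPI algorithms.

```python
from mpi4py import MPI

def hierarchical_bcast(data, comm, group_size=64):
    rank = comm.Get_rank()
    group = comm.Split(rank // group_size, key=rank)          # intra-group communicator
    is_leader = group.Get_rank() == 0
    leaders = comm.Split(0 if is_leader else MPI.UNDEFINED, key=rank)
    if leaders != MPI.COMM_NULL:                              # step 1: root -> group leaders
        data = leaders.bcast(data, root=0)
    return group.bcast(data, root=0)                          # step 2: leader -> its group

if __name__ == "__main__":
    world = MPI.COMM_WORLD
    payload = "message" if world.Get_rank() == 0 else None
    payload = hierarchical_bcast(payload, world)
```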

  19. A Visualization Technique for Accessing Solution Pool in Interactive Methods of Multiobjective Optimization

    OpenAIRE

    Filatovas, Ernestas; Podkopaev, Dmitry; Kurasova, Olga

    2015-01-01

    Interactive methods of multiobjective optimization repeatedly derive Pareto optimal solutions based on the decision maker's preference information and present the obtained solutions for his/her consideration. Some interactive methods save the obtained solutions into a solution pool and, at each iteration, allow the decision maker to consider any of the solutions obtained earlier. This feature contributes to the flexibility of exploring the Pareto optimal set and learning about the op...

  20. Analyze the optimal solutions of optimization problems by means of fractional gradient based system using VIM

    Directory of Open Access Journals (Sweden)

    Firat Evirgen

    2016-04-01

    Full Text Available In this paper, a class of nonlinear programming problems is modeled with a gradient-based system of fractional order differential equations in Caputo's sense. To see the overlap between the equilibrium point of the fractional order dynamic system and the optimal solution of the NLP problem over a longer timespan, the multistage variational iteration method is applied. The comparisons among the multistage variational iteration method, the variational iteration method and the fourth order Runge-Kutta method in fractional and integer order show that the fractional order model and techniques can be seen as an effective and reliable tool for finding optimal solutions of nonlinear programming problems.

  1. SU-F-R-10: Selecting the Optimal Solution for Multi-Objective Radiomics Model

    International Nuclear Information System (INIS)

    Zhou, Z; Folkert, M; Wang, J

    2016-01-01

    Purpose: To develop an evidential reasoning approach for selecting the optimal solution from a Pareto solution set obtained by a multi-objective radiomics model for predicting distant failure in lung SBRT. Methods: In the multi-objective radiomics model, both sensitivity and specificity are considered as objective functions simultaneously. A Pareto solution set containing many feasible solutions results from the multi-objective optimization. In this work, an optimal solution Selection methodology for Multi-Objective radiomics Learning model using the Evidential Reasoning approach (SMOLER) was proposed to select the optimal solution from the Pareto solution set. The proposed SMOLER method uses the evidential reasoning approach to calculate the utility of each solution based on pre-set optimal solution selection rules, and the solution with the highest utility is chosen as the optimal solution. In SMOLER, an optimal learning model coupled with a clonal selection algorithm was used to optimize the model parameters. In this study, PET and CT image features and clinical parameters were utilized for predicting distant failure in lung SBRT. Results: A total of 126 solution sets were generated by adjusting the predictive model parameters. Each Pareto set contains 100 feasible solutions. The solution selected by SMOLER within each Pareto set was compared to the manually selected optimal solution. Five-fold cross-validation was used to evaluate the optimal solution selection accuracy of SMOLER. The selection accuracies for the five folds were 80.00%, 69.23%, 84.00%, 84.00%, and 80.00%, respectively. Conclusion: An optimal solution selection methodology for a multi-objective radiomics learning model using the evidential reasoning approach (SMOLER) was proposed. Experimental results show that the optimal solution can be found in approximately 80% of cases.
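    A simplified stand-in for the selection step only: each Pareto point is scored with a rule-weighted utility over (sensitivity, specificity) and the highest-utility point is returned. SMOLER's evidential reasoning combination is more elaborate; the weights and the example points below are illustrative assumptions.

```python
def select_best(pareto_points, w_sens=0.5, w_spec=0.5):
    """pareto_points: list of (sensitivity, specificity) pairs."""
    utility = lambda p: w_sens * p[0] + w_spec * p[1]   # simple additive utility rule
    return max(pareto_points, key=utility)

best = select_best([(0.92, 0.70), (0.85, 0.81), (0.78, 0.90)])
```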

  2. SU-F-R-10: Selecting the Optimal Solution for Multi-Objective Radiomics Model

    Energy Technology Data Exchange (ETDEWEB)

    Zhou, Z; Folkert, M; Wang, J [UT Southwestern Medical Center, Dallas, TX (United States)

    2016-06-15

    Purpose: To develop an evidential reasoning approach for selecting the optimal solution from a Pareto solution set obtained by a multi-objective radiomics model for predicting distant failure in lung SBRT. Methods: In the multi-objective radiomics model, both sensitivity and specificity are considered as objective functions simultaneously. A Pareto solution set containing many feasible solutions results from the multi-objective optimization. In this work, an optimal solution Selection methodology for Multi-Objective radiomics Learning model using the Evidential Reasoning approach (SMOLER) was proposed to select the optimal solution from the Pareto solution set. The proposed SMOLER method uses the evidential reasoning approach to calculate the utility of each solution based on pre-set optimal solution selection rules, and the solution with the highest utility is chosen as the optimal solution. In SMOLER, an optimal learning model coupled with a clonal selection algorithm was used to optimize the model parameters. In this study, PET and CT image features and clinical parameters were utilized for predicting distant failure in lung SBRT. Results: A total of 126 solution sets were generated by adjusting the predictive model parameters. Each Pareto set contains 100 feasible solutions. The solution selected by SMOLER within each Pareto set was compared to the manually selected optimal solution. Five-fold cross-validation was used to evaluate the optimal solution selection accuracy of SMOLER. The selection accuracies for the five folds were 80.00%, 69.23%, 84.00%, 84.00%, and 80.00%, respectively. Conclusion: An optimal solution selection methodology for a multi-objective radiomics learning model using the evidential reasoning approach (SMOLER) was proposed. Experimental results show that the optimal solution can be found in approximately 80% of cases.

  3. Optimal resource allocation solutions for heterogeneous cognitive radio networks

    Directory of Open Access Journals (Sweden)

    Babatunde Awoyemi

    2017-05-01

    Full Text Available Cognitive radio networks (CRN) are currently gaining immense recognition as the most likely next-generation wireless communication paradigm, because of their enticing promise of mitigating the spectrum scarcity and/or underutilisation challenge. Indisputably, for this promise to ever materialise, CRN must of necessity devise appropriate mechanisms to judiciously allocate their rather scarce or limited resources (spectrum and others) among their numerous users. 'Resource allocation (RA) in CRN', which essentially describes mechanisms that can effectively and optimally carry out such allocation so as to achieve the utmost for the network, has therefore recently become an important research focus. However, most research works on RA in CRN have ignored (or only partially explored) a highly significant factor that describes a more realistic and practical consideration of CRN, namely, the heterogeneity of CRN. To address this important aspect, in this paper, RA models that incorporate the most essential concepts of heterogeneity, as applicable to CRN, are developed, and the import of such inclusion in the overall networking is investigated. Furthermore, to fully explore the relevance and implications of the various heterogeneous classifications for the RA formulations, weights are attached to the different classes and their effects on the network performance are studied. In solving the developed complex RA problems for heterogeneous CRN, a solution approach that examines and exploits the structure of the problem to achieve a less complex reformulation is extensively employed. This approach, as the results presented show, makes it possible to obtain optimal solutions to the rather difficult RA problems of heterogeneous CRN.

  4. Optimal Water-Power Flow Problem: Formulation and Distributed Optimal Solution

    Energy Technology Data Exchange (ETDEWEB)

    Dall-Anese, Emiliano [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Zhao, Changhong [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Zamzam, Admed S. [University of Minnesota; Sidiropoulos, Nicholas D. [University of Minnesota; Taylor, Josh A. [University of Toronto

    2018-01-12

    This paper formalizes an optimal water-power flow (OWPF) problem to optimize the use of controllable assets across power and water systems while accounting for the couplings between the two infrastructures. Tanks and pumps are optimally managed to satisfy water demand while improving power grid operations; for the power network, an AC optimal power flow formulation is augmented to accommodate the controllability of water pumps. Unfortunately, the physics governing the operation of the two infrastructures and coupling constraints lead to a nonconvex (and, in fact, NP-hard) problem; however, after reformulating OWPF as a nonconvex, quadratically-constrained quadratic problem, a feasible point pursuit-successive convex approximation approach is used to identify feasible and optimal solutions. In addition, a distributed solver based on the alternating direction method of multipliers enables water and power operators to pursue individual objectives while respecting the couplings between the two networks. The merits of the proposed approach are demonstrated for the case of a distribution feeder coupled with a municipal water distribution network.

  5. Electrical Discharge Platinum Machining Optimization Using Stefan Problem Solutions

    Directory of Open Access Journals (Sweden)

    I. B. Stavitskiy

    2015-01-01

    Full Text Available The article presents theoretical results on the machinability of platinum by electrical discharge machining (EDM), based on the solution of the thermal problem with a moving phase-change boundary, i.e. the Stefan problem. The problem solution makes it possible to determine the melt penetration depth of the material surface under a given heat flow, as a function of the duration of its action and the physical properties of the processed material. To determine rational EDM operating conditions for platinum, the article suggests relating its workability to the machinability of materials for which rational EDM operating conditions are currently defined. It is shown that at low heat flow densities, corresponding to finishing EDM operating conditions, the processing conditions used for steel 45 are appropriate for platinum machining; at higher heat flow densities (e.g. 50 GW/m2) copper processing conditions are used for this purpose; at the high heat flow densities corresponding to heavy roughing EDM it is reasonable to use tungsten processing conditions. The article also shows how the minimum width of the current pulses, at which platinum starts melting and, accordingly, the EDM process becomes possible, depends on the heat flow density. It is shown that the processing of platinum is expedient at pulse widths corresponding to values called the effective pulse width; exceeding these values does not lead to a substantial increase in material removal per pulse, but considerably reduces the maximum repetition rate and therefore the EDM capacity. The paper presents the effective pulse width versus the heat flow density, as well as the dependences of the maximum platinum surface melt penetration and the corresponding pulse width on the heat flow density. Results obtained using solutions of the Stefan heat problem can be used to optimize EDM operating conditions for platinum machining.

  6. Charged de Sitter-like black holes: quintessence-dependent enthalpy and new extreme solutions

    Energy Technology Data Exchange (ETDEWEB)

    Azreg-Ainou, Mustapha [Baskent University, Faculty of Engineering, Ankara (Turkey)

    2015-01-01

    We consider Reissner-Nordstroem black holes surrounded by quintessence where both a non-extremal event horizon and a cosmological horizon exist besides an inner horizon (-1 ≤ ω < -1/3). We determine new extreme black hole solutions that generalize the Nariai horizon to asymptotically de Sitter-like solutions for any order relation between the squares of the charge q² and the mass parameter M², provided q² remains smaller than some limit, which is larger than M². In the limit case q² = 9ω²M²/(9ω²-1), we derive the general expression of the extreme cosmo-black hole, where the three horizons merge, and we discuss some of its properties. We also show that the endpoint of the evaporation process is independent of any order relation between q² and M². The Teitelboim energy and the Padmanabhan energy are related by a nonlinear expression and are shown to correspond to different ensembles. We also determine the enthalpy H of the event horizon, as well as the effective thermodynamic volume which is the conjugate variable of the negative quintessential pressure, and show that in general the mass parameter and the Teitelboim energy are different from the enthalpy and internal energy; only in the cosmological case, that is, for the Reissner-Nordstroem-de Sitter black hole, do we have H = M. Generalized Smarr formulas are also derived. It is concluded that the internal energy has a universal expression for all static charged black holes, with possibly a variable mass parameter, but it is not a suitable thermodynamic potential for static-black-hole thermodynamics if M is constant. It is also shown that the reverse isoperimetric inequality holds. We generalize the results to the case of the Reissner-Nordstroem-de Sitter black hole surrounded by quintessence with two physical constants yielding two thermodynamic volumes. (orig.)

  7. Irrigation solutions in open fractures of the lower extremities: evaluation of isotonic saline and distilled water.

    Science.gov (United States)

    Olufemi, Olukemi Temiloluwa; Adeyeye, Adeolu Ikechukwu

    2017-01-01

    Open fractures are widely considered as orthopaedic emergencies requiring immediate intervention. The initial management of these injuries usually affects the ultimate outcome because open fractures may be associated with significant morbidity. Wound irrigation forms one of the pivotal principles in the treatment of open fractures. The choice of irrigation fluid has since been a source of debate. This study aimed to evaluate and compare the effects of isotonic saline and distilled water as irrigation solutions in the management of open fractures of the lower extremities. Wound infection and wound healing rates using both solutions were evaluated. This was a prospective hospital-based study of 109 patients who presented to the Accident and Emergency department with open lower limb fractures. Approval was sought and obtained from the Ethics Committee of the Hospital. Patients were randomized into either the isotonic saline (NS) or the distilled water (DW) group using a simple ballot technique. Twelve patients were lost to follow-up, while 97 patients were available until conclusion of the study. There were 50 patients in the isotonic saline group and 47 patients in the distilled water group. Forty-one (42.3%) of the patients were in the young and economically productive strata of the population. There was a male preponderance with a 1.7:1 male-to-female ratio. The wound infection rate was 34% in the distilled water group and 44% in the isotonic saline group (p = 0.315). The mean time ± SD to wound healing was 2.7 ± 1.5 weeks in the distilled water group and 3.1 ± 1.8 weeks in the isotonic saline group (p = 0.389). It was concluded from this study that the use of distilled water compares favourably with isotonic saline as an irrigation solution in open fractures of the lower extremities.

  8. Irrigation solutions in open fractures of the lower extremities: evaluation of isotonic saline and distilled water

    Directory of Open Access Journals (Sweden)

    Olufemi Olukemi Temiloluwa

    2017-01-01

    Full Text Available Introduction: Open fractures are widely considered as orthopaedic emergencies requiring immediate intervention. The initial management of these injuries usually affects the ultimate outcome because open fractures may be associated with significant morbidity. Wound irrigation forms one of the pivotal principles in the treatment of open fractures. The choice of irrigation fluid has since been a source of debate. This study aimed to evaluate and compare the effects of isotonic saline and distilled water as irrigation solutions in the management of open fractures of the lower extremities. Wound infection and wound healing rates using both solutions were evaluated. Methods: This was a prospective hospital-based study of 109 patients who presented to the Accident and Emergency department with open lower limb fractures. Approval was sought and obtained from the Ethics Committee of the Hospital. Patients were randomized into either the isotonic saline (NS) or the distilled water (DW) group using a simple ballot technique. Twelve patients were lost to follow-up, while 97 patients were available until conclusion of the study. There were 50 patients in the isotonic saline group and 47 patients in the distilled water group. Results: Forty-one (42.3%) of the patients were in the young and economically productive strata of the population. There was a male preponderance with a 1.7:1 male-to-female ratio. The wound infection rate was 34% in the distilled water group and 44% in the isotonic saline group (p = 0.315). The mean time ± SD to wound healing was 2.7 ± 1.5 weeks in the distilled water group and 3.1 ± 1.8 weeks in the isotonic saline group (p = 0.389). Conclusions: It was concluded from this study that the use of distilled water compares favourably with isotonic saline as an irrigation solution in open fractures of the lower extremities.

  9. Aero Engine Component Fault Diagnosis Using Multi-Hidden-Layer Extreme Learning Machine with Optimized Structure

    Directory of Open Access Journals (Sweden)

    Shan Pang

    2016-01-01

    Full Text Available A new aero gas turbine engine gas path component fault diagnosis method based on a multi-hidden-layer extreme learning machine with optimized structure (OM-ELM) was proposed. OM-ELM employs quantum-behaved particle swarm optimization to automatically obtain the optimal network structure according to both the root mean square error on the training data set and the norm of the output weights. The proposed method is applied to a handwritten recognition data set and a gas turbine engine diagnostic application and is compared with the basic ELM, the multi-hidden-layer ELM, and two state-of-the-art deep learning algorithms: the deep belief network and the stacked denoising autoencoder. Results show that, with optimized network structure, OM-ELM obtains better test accuracy in both applications and is more robust to sensor noise. Meanwhile it controls the model complexity and needs far fewer hidden nodes than the multi-hidden-layer ELM, thus saving computer memory and making it more efficient to implement. All these advantages make our method an effective and reliable tool for engine component fault diagnosis.

  10. Particle Swarm Optimization with Various Inertia Weight Variants for Optimal Power Flow Solution

    Directory of Open Access Journals (Sweden)

    Prabha Umapathy

    2010-01-01

    Full Text Available This paper proposes an efficient method to solve the optimal power flow problem in power systems using Particle Swarm Optimization (PSO). The objective of the proposed method is to find the steady-state operating point which minimizes the fuel cost, while maintaining an acceptable system performance in terms of limits on generator power, line flow, and voltage. Three different inertia weights, a constant inertia weight (CIW), a time-varying inertia weight (TVIW), and a global-local best inertia weight (GLbestIW), are considered with the particle swarm optimization algorithm to analyze the impact of the inertia weight on the performance of the PSO algorithm. The PSO algorithm is simulated for each of the methods individually. It is observed that the PSO algorithm with the proposed inertia weight yields better results, both in terms of the optimal solution and faster convergence. The proposed method has been tested on the standard IEEE 30 bus test system to prove its efficacy. The algorithm is computationally faster, in terms of the number of load flows executed, and provides better results than other heuristic techniques.
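    A sketch of the three inertia-weight choices compared in this record, plugged into a standard PSO velocity update. The GLbestIW expression is one common form from the PSO literature and is stated here as an assumption; the acceleration coefficients and bounds are illustrative.

```python
import numpy as np

def inertia(kind, it, max_it, fitness=None, gbest_fit=None):
    if kind == "CIW":                        # constant inertia weight
        return 0.7
    if kind == "TVIW":                       # linearly decreasing over the iterations
        return 0.9 - (0.9 - 0.4) * it / max_it
    if kind == "GLbestIW":                   # global-local best based (one common form, an assumption)
        return 1.1 - gbest_fit / fitness
    raise ValueError(kind)

def pso_step(x, v, pbest, gbest, w, c1=2.0, c2=2.0, rng=np.random.default_rng()):
    r1, r2 = rng.random(x.shape), rng.random(x.shape)
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    return x + v, v
```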

  11. Solution for state constrained optimal control problems applied to power split control for hybrid vehicles

    NARCIS (Netherlands)

    Keulen, van T.A.C.; Gillot, J.; Jager, de A.G.; Steinbuch, M.

    2014-01-01

    This paper presents a numerical solution for scalar state constrained optimal control problems. The algorithm rewrites the constrained optimal control problem as a sequence of unconstrained optimal control problems which can be solved recursively as a two point boundary value problem. The solution

  12. Software Support for Optimizing Layout Solution in Lean Production

    Directory of Open Access Journals (Sweden)

    Naqib Daneshjo

    2018-02-01

    Full Text Available Techniques based on "lean thinking" are being increasingly promoted as progressive managerial styles. They are focused on applying lean production concepts to all phases of the product lifecycle and also to the business environment. This innovative approach strives to eliminate any wasting of resources and shortens the time to respond to customer requirements, including redesigning the structure of the organization's supply chain. A lean organization is created mainly by employees, their creative potential, knowledge, self-realization and motivation for continuous improvement of the processes and the production systems. A set of tools, techniques and methods of lean production is basically always very similar; only the form of their presentation or their classification into individual phases of the product lifecycle may differ. The authors present the results of their research on the design phases of production systems, aimed at optimizing their layout solution with software support and 3D simulation and visualization. Modelling is based on the use of Tecnomatix's and Photomodeler's software tools and a dynamic model for capacity dimensioning of a more intelligent production system.

  13. PARETO OPTIMAL SOLUTIONS FOR MULTI-OBJECTIVE GENERALIZED ASSIGNMENT PROBLEM

    Directory of Open Access Journals (Sweden)

    S. Prakash

    2012-01-01

    Full Text Available

    ENGLISH ABSTRACT: The Multi-Objective Generalized Assignment Problem (MGAP with two objectives, where one objective is linear and the other one is non-linear, has been considered, with the constraints that a job is assigned to only one worker – though he may be assigned more than one job, depending upon the time available to him. An algorithm is proposed to find the set of Pareto optimal solutions of the problem, determining assignments of jobs to workers with two objectives without setting priorities for them. The two objectives are to minimise the total cost of the assignment and to reduce the time taken to complete all the jobs.

    AFRIKAANS SUMMARY: A multi-objective generalised assignment problem (MGAP) with two objectives, where one is linear and the other non-linear, is studied, with the constraint that a task is assigned to only one worker – although more than one task may be assigned to him, should the time be available. An algorithm is proposed to find the set of Pareto-optimal solutions that assigns tasks to workers subject to the two objectives without assigning priorities to them. The two objectives are to minimise the total cost of the assignment and to reduce the time taken to complete all the tasks.

  14. Asymptotic Normality of the Optimal Solution in Multiresponse Surface Mathematical Programming

    OpenAIRE

    Díaz-García, José A.; Caro-Lopera, Francisco J.

    2015-01-01

    An explicit form for the perturbation effect on the matrix of regression coefficients on the optimal solution in multiresponse surface methodology is obtained in this paper. Then, the sensitivity analysis of the optimal solution is studied, and the critical point characterisation of the convex program associated with the optimum of a multiresponse surface is also analysed. Finally, the asymptotic normality of the optimal solution is derived by standard methods.

  15. Game theory and extremal optimization for community detection in complex dynamic networks.

    Science.gov (United States)

    Lung, Rodica Ioana; Chira, Camelia; Andreica, Anca

    2014-01-01

    The detection of evolving communities in dynamic complex networks is a challenging problem that has recently received attention from the research community. Dynamics clearly add another complexity dimension to the difficult task of community detection. Methods should be able to detect changes in the network structure and produce a set of community structures corresponding to different timestamps and reflecting the evolution in time of network data. We propose a novel approach based on game theory elements and extremal optimization to address dynamic community detection. Thus, the problem is formulated as a mathematical game in which nodes take the role of players that seek to choose a community that maximizes their profit, viewed as a fitness function. Numerical results obtained for both synthetic and real-world networks illustrate the competitive performance of this game theoretical approach.
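    A toy sketch of the game-theoretic idea under simple assumptions: each node is a player whose payoff is the number of neighbours sharing its community, and, in extremal-optimization fashion, the worst-fitness node plays a best response at each step. The payoff and the update rule are illustrative, not the authors' formulation.

```python
from collections import Counter

def node_payoff(node, comm_of, adj):
    return sum(1 for nb in adj[node] if comm_of[nb] == comm_of[node])

def detect_communities(adj, iters=1000):
    """adj: dict mapping each node to an iterable of its neighbours."""
    comm_of = {v: v for v in adj}                                      # start: one community per node
    for _ in range(iters):
        worst = min(adj, key=lambda v: node_payoff(v, comm_of, adj))   # extremal choice of player
        neighbour_comms = Counter(comm_of[nb] for nb in adj[worst])
        if neighbour_comms:
            comm_of[worst] = neighbour_comms.most_common(1)[0][0]      # best response to the payoff
    return comm_of
```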

  16. Solution of optimal power flow using evolutionary-based algorithms

    African Journals Online (AJOL)

    It aims to estimate the optimal settings of real generator output power, bus voltage, ...

  17. Study on Temperature and Synthetic Compensation of Piezo-Resistive Differential Pressure Sensors by Coupled Simulated Annealing and Simplex Optimized Kernel Extreme Learning Machine.

    Science.gov (United States)

    Li, Ji; Hu, Guoqing; Zhou, Yonghong; Zou, Chong; Peng, Wei; Alam Sm, Jahangir

    2017-04-19

    As a high performance-cost ratio solution for differential pressure measurement, piezo-resistive differential pressure sensors are widely used in engineering processes. However, their performance is severely affected by the environmental temperature and the static pressure applied to them. In order to correct the non-linear measuring characteristics of the piezo-resistive differential pressure sensor, compensation actions should synthetically consider these two aspects. Advantages such as nonlinear approximation capability, highly desirable generalization ability and computational efficiency make the kernel extreme learning machine (KELM) a practical approach for this critical task. Since the KELM model is intrinsically sensitive to the regularization parameter and the kernel parameter, a searching scheme combining the coupled simulated annealing (CSA) algorithm and the Nelder-Mead simplex algorithm is adopted to find an optimal KELM parameter set. A calibration experiment at different working pressure levels was conducted within the working temperature range to assess the proposed method. In comparison with other compensation models such as the back-propagation neural network (BP), radial basis function neural network (RBF), particle swarm optimization optimized support vector machine (PSO-SVM), particle swarm optimization optimized least squares support vector machine (PSO-LSSVM) and extreme learning machine (ELM), the compensation results show that the presented compensation algorithm exhibits a more satisfactory performance with respect to temperature compensation and synthetic compensation problems.
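    A hedged sketch of the two-stage parameter search only: a coarse random stage (a crude stand-in for coupled simulated annealing) followed by Nelder-Mead simplex refinement in scipy, working in (log C, log sigma) space. The loss below is a toy surface; in the paper it would be the validation error of a KELM trained on the sensor calibration data.

```python
import numpy as np
from scipy.optimize import minimize

def loss(theta):
    """Toy stand-in for the KELM validation loss over (log C, log sigma)."""
    logC, logs = theta
    return (logC - 2.0) ** 2 + 0.5 * (logs + 1.0) ** 2 + 0.1 * np.sin(5.0 * logC)

rng = np.random.default_rng(1)
candidates = rng.uniform(-5.0, 5.0, size=(200, 2))            # coarse global stage
start = candidates[np.argmin([loss(c) for c in candidates])]

result = minimize(loss, x0=start, method="Nelder-Mead")       # simplex refinement stage
C_opt, sigma_opt = np.exp(result.x)                           # back to the original parameter space
```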

  18. Families of optimal thermodynamic solutions for combined cycle gas turbine (CCGT) power plants

    International Nuclear Information System (INIS)

    Godoy, E.; Scenna, N.J.; Benz, S.J.

    2010-01-01

    Optimal designs of a CCGT power plant characterized by maximum second law efficiency values are determined for a wide range of power demands and different values of the available heat transfer area. These thermodynamically optimal solutions are found within a feasible operation region by means of a non-linear mathematical programming (NLP) model, where decision variables (i.e. transfer areas, power production, mass flow rates, temperatures and pressures) can vary freely. Technical relationships among them are used to systematize the optimal values of the design and operating variables of a CCGT power plant into optimal solution sets, named here optimal solution families. From an operating and design point of view, the families of optimal solutions make it possible to know in advance the optimal values of the CCGT variables when facing changes in power demand or adjusting the design to an available heat transfer area.

  19. Optimization of multi-channel neutron focusing guides for extreme sample environments

    International Nuclear Information System (INIS)

    Di Julio, D D; Lelièvre-Berna, E; Andersen, K H; Bentley, P M; Courtois, P

    2014-01-01

    In this work, we present and discuss simulation results for the design of multichannel neutron focusing guides for extreme sample environments. A single focusing guide consists of any number of supermirror-coated curved outer channels surrounding a central channel. Furthermore, a guide is separated into two sections in order to allow for extension into a sample environment. The performance of a guide is evaluated through a Monte-Carlo ray tracing simulation which is further coupled to an optimization algorithm in order to find the best possible guide for a given situation. A number of population-based algorithms have been investigated for this purpose. These include particle-swarm optimization, artificial bee colony, and differential evolution. The performance of each algorithm and preliminary results of the design of a multi-channel neutron focusing guide using these methods are described. We found that a three-channel focusing guide offered the best performance, with a gain factor of 2.4 compared to no focusing guide, for the design scenario investigated in this work.

  20. Optimization and photomodification of extremely broadband optical response of plasmonic core-shell obscurants.

    Science.gov (United States)

    de Silva, Vashista C; Nyga, Piotr; Drachev, Vladimir P

    2016-12-15

    Plasmonic resonances of metallic shells depend on their nanostructure and on the geometry of the core, which can be optimized for broadband extinction normalized by mass. Fractal nanostructures can provide a broadband extinction. They also allow for laser photoburning of holes in the extinction spectra and, consequently, windows of transparency in a controlled manner. The studied core-shell microparticles, synthesized using colloidal chemistry, consist of gold fractal nanostructures grown on precipitated calcium carbonate (PCC) microparticles or silica (SiO2) microspheres. The optimization includes different core sizes and shapes, and shell nanostructures. It shows that the rich surface of the PCC flakes is the best core for the fractal shells, providing the highest mass-normalized extinction over an extremely broad spectral range. A mass-normalized extinction cross section of up to 3 m2/g has been demonstrated in the broad spectral range from the visible to the mid-infrared. Essentially, the broadband response is a characteristic feature of each core-shell microparticle, in contrast to a combination of several structures resonant at different wavelengths, for example nanorods with different aspect ratios. Photomodification at an IR wavelength creates a window of transparency on the longer-wavelength side.

  1. Pressure Prediction of Coal Slurry Transportation Pipeline Based on Particle Swarm Optimization Kernel Function Extreme Learning Machine

    Directory of Open Access Journals (Sweden)

    Xue-cun Yang

    2015-01-01

    Full Text Available For the coal slurry pipeline blockage prediction problem, through analysis of the actual scene, it is determined that pressure prediction at each measuring point is the premise of pipeline blockage prediction. The kernel function of the support vector machine is introduced into the extreme learning machine, the parameters are optimized by the particle swarm algorithm, and a blockage prediction method based on a particle swarm optimization kernel function extreme learning machine (PSOKELM) is put forward. Actual test data from the HuangLing coal gangue power plant are used for simulation experiments and compared with a support vector machine prediction model optimized by the particle swarm algorithm (PSOSVM) and a kernel function extreme learning machine prediction model (KELM). The results prove that the mean square error (MSE) for the prediction model based on PSOKELM is 0.0038 and the correlation coefficient is 0.9955, which is superior to the PSOSVM-based prediction model in speed and accuracy and superior to the KELM prediction model in accuracy.

  2. Efficient Output Solution for Nonlinear Stochastic Optimal Control Problem with Model-Reality Differences

    Directory of Open Access Journals (Sweden)

    Sie Long Kek

    2015-01-01

    Full Text Available A computational approach is proposed for solving the discrete time nonlinear stochastic optimal control problem. Our aim is to obtain the optimal output solution of the original optimal control problem through solving the simplified model-based optimal control problem iteratively. In our approach, the adjusted parameters are introduced into the model used such that the differences between the real system and the model used can be computed. Particularly, system optimization and parameter estimation are integrated interactively. On the other hand, the output is measured from the real plant and is fed back into the parameter estimation problem to establish a matching scheme. During the calculation procedure, the iterative solution is updated in order to approximate the true optimal solution of the original optimal control problem despite model-reality differences. For illustration, a wastewater treatment problem is studied and the results show the efficiency of the approach proposed.

  3. An accurate approximate solution of optimal sequential age replacement policy for a finite-time horizon

    International Nuclear Information System (INIS)

    Jiang, R.

    2009-01-01

    It is difficult to find the optimal solution of the sequential age replacement policy for a finite-time horizon. This paper presents an accurate approximation for finding an approximate optimal solution of the sequential replacement policy. The proposed approximation is computationally simple and suitable for any failure distribution. Its accuracy is illustrated by two examples. Based on the approximate solution, an approximate estimate for the total cost is derived.

  4. System Approach of Logistic Costs Optimization Solution in Supply Chain

    OpenAIRE

    Majerčák, Peter; Masárová, Gabriela; Buc, Daniel; Majerčáková, Eva

    2013-01-01

    This paper focuses on the possibility of using cost simulation in the supply chain, where costs are at a relatively high level. Our goal is to determine the costs using logistics cost optimization, which must necessarily be applied to business activities in supply chain management. The paper emphasizes the need to perform optimization across the whole supply chain rather than in isolation. Our goal is to compare the classic approach, in which every part tracks its costs in isolation and tries to minimize them, with the system (l...

  5. Efficient solution method for optimal control of nuclear systems

    International Nuclear Information System (INIS)

    Naser, J.A.; Chambre, P.L.

    1981-01-01

    To improve the utilization of existing fuel sources, the use of optimization techniques is becoming more important. A technique for solving systems of coupled ordinary differential equations with initial, boundary, and/or intermediate conditions is given. This method has a number of inherent advantages over existing techniques as well as being efficient in terms of computer time and space requirements. An example of computing the optimal control for a spatially dependent reactor model with and without temperature feedback is given. 10 refs

  6. Regulation of Dynamical Systems to Optimal Solutions of Semidefinite Programs: Algorithms and Applications to AC Optimal Power Flow

    Energy Technology Data Exchange (ETDEWEB)

    Dall' Anese, Emiliano; Dhople, Sairaj V.; Giannakis, Georgios B.

    2015-07-01

    This paper considers a collection of networked nonlinear dynamical systems, and addresses the synthesis of feedback controllers that seek optimal operating points corresponding to the solution of pertinent network-wide optimization problems. Particular emphasis is placed on the solution of semidefinite programs (SDPs). The design of the feedback controller is grounded on a dual ε-subgradient approach, with the dual iterates utilized to dynamically update the dynamical-system reference signals. Global convergence is guaranteed for diminishing stepsize rules, even when the reference inputs are updated at a faster rate than the dynamical-system settling time. The application of the proposed framework to the control of power-electronic inverters in AC distribution systems is discussed. The objective is to bridge the time-scale separation between real-time inverter control and network-wide optimization. Optimization objectives assume the form of SDP relaxations of prototypical AC optimal power flow problems.

  7. Medical Dataset Classification: A Machine Learning Paradigm Integrating Particle Swarm Optimization with Extreme Learning Machine Classifier

    Directory of Open Access Journals (Sweden)

    C. V. Subbulakshmi

    2015-01-01

    Full Text Available Medical data classification is a prime data mining problem that has been discussed for a decade and has attracted several researchers around the world. Most classifiers are designed so as to learn from the data itself using a training process, because complete expert knowledge to determine classifier parameters is impracticable. This paper proposes a hybrid methodology based on a machine learning paradigm. This paradigm integrates the successful exploration mechanism called self-regulated learning capability of the particle swarm optimization (PSO) algorithm with the extreme learning machine (ELM) classifier. As a recent off-line learning method, ELM is a single-hidden-layer feedforward neural network (FFNN), proved to be an excellent classifier with a large number of hidden layer neurons. In this research, PSO is used to determine the optimum set of parameters for the ELM, thus reducing the number of hidden layer neurons, and it further improves the network generalization performance. The proposed method is experimented on five benchmark datasets of the UCI Machine Learning Repository for handling medical dataset classification. Simulation results show that the proposed approach is able to achieve good generalization performance, compared to the results of other classifiers.

  8. Hybrid Cascading Outage Analysis of Extreme Events with Optimized Corrective Actions

    Energy Technology Data Exchange (ETDEWEB)

    Vallem, Mallikarjuna R.; Vyakaranam, Bharat GNVSR; Holzer, Jesse T.; Samaan, Nader A.; Makarov, Yuri V.; Diao, Ruisheng; Huang, Qiuhua; Ke, Xinda

    2017-10-19

    Power systems are vulnerable to extreme contingencies (like an outage of a major generating substation) that can cause significant generation and load loss and can lead to further cascading outages of other transmission facilities and generators in the system. Some cascading outages occur within minutes following a major contingency, which may not be captured exclusively using dynamic simulation of the power system. Utilities plan for contingencies based on either dynamic or steady-state analysis separately, which may not accurately capture the impact of one process on the other. We address this gap in cascading outage analysis by developing the Dynamic Contingency Analysis Tool (DCAT), which can analyze hybrid dynamic and steady-state behavior of the power system, including protection system models in dynamic simulations, and simulate corrective actions in post-transient steady-state conditions. One of the important implemented steady-state processes is to mimic operator corrective actions to mitigate aggravated states caused by dynamic cascading. This paper presents an Optimal Power Flow (OPF) based formulation for selecting corrective actions that utility operators can take during a major contingency and thus automates the hybrid dynamic-steady state cascading outage process. The improved DCAT framework with OPF-based corrective actions is demonstrated on the IEEE 300-bus test system.

  9. A Pathological Brain Detection System based on Extreme Learning Machine Optimized by Bat Algorithm.

    Science.gov (United States)

    Lu, Siyuan; Qiu, Xin; Shi, Jianping; Li, Na; Lu, Zhi-Hai; Chen, Peng; Yang, Meng-Meng; Liu, Fang-Yuan; Jia, Wen-Juan; Zhang, Yudong

    2017-01-01

    It is beneficial to classify brain images as healthy or pathological automatically, because 3D brain images generate so much information that manual analysis is time consuming and tedious. Among various 3D brain imaging techniques, magnetic resonance (MR) imaging is the most suitable for the brain, and it is now widely applied in hospitals, because it is helpful in diagnosis, prognosis, and pre-surgical and post-surgical procedures. Automatic detection methods exist; however, they suffer from low accuracy. Therefore, we proposed a novel approach which employed the 2D discrete wavelet transform (DWT) and calculated the entropies of the subbands as features. Then, a bat algorithm optimized extreme learning machine (BA-ELM) was trained to identify pathological brains from healthy controls. A 10x10-fold cross validation was performed to evaluate the out-of-sample performance. The method achieved a sensitivity of 99.04%, a specificity of 93.89%, and an overall accuracy of 98.33% over 132 MR brain images. The experimental results suggest that the proposed approach is accurate and robust in pathological brain detection.
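    A hedged sketch of the feature-extraction step: a two-level 2D discrete wavelet transform (PyWavelets) followed by a Shannon entropy per subband. The wavelet family, decomposition level, and entropy normalization are assumptions, as the record does not specify them.

```python
import numpy as np
import pywt

def subband_entropies(image, wavelet="db4", level=2):
    """Return one Shannon entropy value per DWT subband of a 2D image array."""
    coeffs = pywt.wavedec2(image, wavelet, level=level)
    bands = [coeffs[0]] + [band for detail in coeffs[1:] for band in detail]  # approximation + details
    feats = []
    for band in bands:
        p = np.abs(band).ravel()
        p = p / (p.sum() + 1e-12)                          # normalize magnitudes to a distribution
        feats.append(float(-np.sum(p * np.log2(p + 1e-12))))
    return np.array(feats)
```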

  10. Extreme learning machine based optimal embedding location finder for image steganography.

    Directory of Open Access Journals (Sweden)

    Hayfaa Abdulzahra Atee

    Full Text Available In image steganography, determining the optimum location for embedding the secret message precisely with minimum distortion of the host medium remains a challenging issue. Yet, an effective approach for the selection of the best embedding location with least deformation is far from being achieved. To attain this goal, we propose a novel high-performance approach to image steganography, in which the extreme learning machine (ELM) algorithm is modified to create a supervised mathematical model. This ELM is first trained on a part of an image or any host medium before being tested in the regression mode. This allows choosing the optimal location for embedding the message with the best values of the predicted evaluation metrics. Contrast, homogeneity, and other texture features are used for training on a new metric. Furthermore, the developed ELM is exploited to counter over-fitting while training. The performance of the proposed steganography approach is evaluated by computing the correlation, structural similarity (SSIM) index, fusion matrices, and mean square error (MSE). The modified ELM is found to outperform the existing approaches in terms of imperceptibility. Excellent features of the experimental results demonstrate that the proposed steganographic approach is greatly proficient at preserving the visual information of an image. An improvement in imperceptibility of as much as 28% is achieved compared to the existing state-of-the-art methods.

  11. Optimal Solutions of Multiproduct Batch Chemical Process Using Multiobjective Genetic Algorithm with Expert Decision System

    Science.gov (United States)

    Mokeddem, Diab; Khellaf, Abdelhafid

    2009-01-01

    Optimal design problems are widely known for their multiple performance measures that are often in competition with each other. In this paper, an optimal multiproduct batch chemical plant design is presented. The design is first formulated as a multiobjective optimization problem, to be solved using the well-suited non-dominated sorting genetic algorithm (NSGA-II). NSGA-II has the capability to achieve fine tuning of variables in determining a set of non-dominated solutions distributed along the Pareto front in a single run of the algorithm. The ability of NSGA-II to identify a set of optimal solutions provides the decision-maker (DM) with a complete picture of the optimal solution space to make better and more appropriate choices. Then outranking with PROMETHEE II helps the decision-maker to finalize the selection of a best compromise. The effectiveness of the NSGA-II method on a multiobjective optimization problem is illustrated through two carefully referenced examples. PMID:19543537
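    A minimal sketch of the Pareto-dominance test underlying NSGA-II style non-dominated sorting, with both objectives treated as minimized; the full algorithm adds ranking into successive fronts, crowding distance, and the genetic operators.

```python
def dominates(a, b):
    """True if a is at least as good as b in every objective and strictly better in one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    return [p for p in points if not any(dominates(q, p) for q in points if q != p)]

front = pareto_front([(3.0, 5.0), (2.5, 6.0), (4.0, 4.0), (3.5, 5.5)])
# (3.5, 5.5) is dominated by (3.0, 5.0) and is excluded from the front.
```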

  12. Optimization problems with equilibrium constraints and their numerical solution

    Czech Academy of Sciences Publication Activity Database

    Kočvara, Michal; Outrata, Jiří

    Vol. 101, No. 1 (2004), pp. 119-149 ISSN 0025-5610 R&D Projects: GA AV ČR IAA1075005 Grant - others:BMBF(DE) 03ZOM3ER Institutional research plan: CEZ:AV0Z1075907 Keywords: optimization problems * MPEC * MPCC Subject RIV: BA - General Mathematics Impact factor: 1.016, year: 2004

  13. Optimization of chromium biosorption in aqueous solution by marine ...

    African Journals Online (AJOL)

    Optimization of a chromium biosorption process was performed by varying three independent variables pH (0.5 to 3.5), initial chromium ion concentration (10 to 30 mg/L), and Yarrowia lipolytica dosage (2 to 4 g/L) using a Doehlert experimental design (DD) involving response surface methodology (RSM). For the maximum ...

  14. Optimal Component Lumping: problem formulation and solution techniques

    DEFF Research Database (Denmark)

    Lin, Bao; Leibovici, Claude F.; Jørgensen, Sten Bay

    2008-01-01

    This paper presents a systematic method for optimal lumping of a large number of components in order to minimize the loss of information. In principle, a rigorous composition-based model is preferable to describe a system accurately. However, computational intensity and numerical issues restrict ...

  15. Optimized Baxter model of protein solutions : Electrostatics versus adhesion

    NARCIS (Netherlands)

    Prinsen, P.; Odijk, T.

    2004-01-01

    A theory is set up of spherical proteins interacting by screened electrostatics and constant adhesion, in which the effective adhesion parameter is optimized by a variational principle for the free energy. An analytical approach to the second virial coefficient is first outlined by balancing the

  16. The linear ordering problem: an algorithm for the optimal solution ...

    African Journals Online (AJOL)

    In this paper we describe and implement an algorithm for the exact solution of the Linear Ordering problem. Linear Ordering is the problem of finding a linear order of the nodes of a graph such that the sum of the weights which are consistent with this order is as large as possible. It is an NP-hard combinatorial optimisation ...
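    A brute-force sketch of the Linear Ordering objective on a tiny weight matrix: enumerate all node orders and keep the one maximizing the sum of weights w[i][j] with i placed before j. Exhaustive enumeration is feasible only for toy sizes, which is why exact algorithms such as the one in this record are needed for realistic instances.

```python
from itertools import permutations

def best_order(w):
    """w: square weight matrix; returns the order maximizing consistent weights."""
    n = len(w)
    score = lambda order: sum(w[order[i]][order[j]]
                              for i in range(n) for j in range(i + 1, n))
    return max(permutations(range(n)), key=score)

w = [[0, 3, 1],
     [2, 0, 5],
     [4, 0, 0]]
print(best_order(w))   # exhaustive search over all 3! orderings
```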

  17. Optimization of bioethanol production from carbohydrate rich wastes by extreme thermophilic microorganisms

    Energy Technology Data Exchange (ETDEWEB)

    Tomas, A.F.

    2013-05-15

    Second-generation bioethanol is produced from residual biomass such as industrial and municipal waste or agricultural and forestry residues. However, Saccharomyces cerevisiae, the microorganism currently used in industrial first-generation bioethanol production, is not capable of converting all of the carbohydrates present in these complex substrates into ethanol. This is in particular true for pentose sugars such as xylose, generally the second major sugar present in lignocellulosic biomass. The transition of second-generation bioethanol production from pilot to industrial scale is hindered by the recalcitrance of the lignocellulosic biomass, and by the lack of a microorganism capable of converting this feedstock to bioethanol with high yield, efficiency and productivity. In this study, a new extreme thermophilic ethanologenic bacterium was isolated from household waste. When assessed for ethanol production from xylose, an ethanol yield of 1.39 mol mol-1 xylose was obtained. This represents 83 % of the theoretical ethanol yield from xylose and is to date the highest reported value for a native, not genetically modified microorganism. The bacterium was identified as a new member of the genus Thermoanaerobacter, named Thermoanaerobacter pentosaceus and was subsequently used to investigate some of the factors that influence secondgeneration bioethanol production, such as initial substrate concentration and sensitivity to inhibitors. Furthermore, T. pentosaceus was used to develop and optimize bioethanol production from lignocellulosic biomass using a range of different approaches, including combination with other microorganisms and immobilization of the cells. T. pentosaceus could produce ethanol from a wide range of substrates without the addition of nutrients such as yeast extract and vitamins to the medium. It was initially sensitive to concentrations of 10 g l-1 of xylose and 1 % (v/v) ethanol. However, long term repeated batch cultivation showed that the strain

  18. Multiobjective Optimization of Linear Cooperative Spectrum Sensing: Pareto Solutions and Refinement.

    Science.gov (United States)

    Yuan, Wei; You, Xinge; Xu, Jing; Leung, Henry; Zhang, Tianhang; Chen, Chun Lung Philip

    2016-01-01

    In linear cooperative spectrum sensing, the weights of secondary users and detection threshold should be optimally chosen to minimize missed detection probability and to maximize secondary network throughput. Since these two objectives are not completely compatible, we study this problem from the viewpoint of multiple-objective optimization. We aim to obtain a set of evenly distributed Pareto solutions. To this end, here, we introduce the normal constraint (NC) method to transform the problem into a set of single-objective optimization (SOO) problems. Each SOO problem usually results in a Pareto solution. However, NC does not provide any solution method to these SOO problems, nor any indication on the optimal number of Pareto solutions. Furthermore, NC has no preference over all Pareto solutions, while a designer may be only interested in some of them. In this paper, we employ a stochastic global optimization algorithm to solve the SOO problems, and then propose a simple method to determine the optimal number of Pareto solutions under a computational complexity constraint. In addition, we extend NC to refine the Pareto solutions and select the ones of interest. Finally, we verify the effectiveness and efficiency of the proposed methods through computer simulations.
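
    For readers following the Pareto terminology above, a minimal non-dominated filter over candidate objective vectors can be sketched as follows (a generic illustration, not the NC-based refinement of the paper; the sample objective pairs are invented):

        import numpy as np

        def pareto_filter(points):
            """Keep only non-dominated rows of `points` (all objectives minimized)."""
            points = np.asarray(points, dtype=float)
            keep = []
            for i, p in enumerate(points):
                dominated = any(
                    np.all(q <= p) and np.any(q < p)
                    for j, q in enumerate(points) if j != i
                )
                if not dominated:
                    keep.append(i)
            return points[keep]

        # Example: (missed-detection probability, -throughput) pairs for candidate weights
        candidates = [(0.10, -3.0), (0.08, -2.5), (0.12, -3.1), (0.08, -3.0)]
        print(pareto_filter(candidates))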

  19. Extremity exams optimization for computed radiography; Otimizacao de exames de extremidade para radiologia computadorizada

    Energy Technology Data Exchange (ETDEWEB)

    Pavan, Ana Luiza M.; Alves, Allan Felipe F.; Velo, Alexandre F.; Miranda, Jose Ricardo A., E-mail: analuiza@ibb.unesp.br [Universidade Estadual Paulista Julio de Mesquita Filho (UNESP), Botucatu, SP (Brazil). Instituto de Biociencias. Departamento de Fisica e Biofisica; Pina, Diana R. [Universidade Estadual Paulista Julio de Mesquita Filho (UNESP), Botucatu, SP (Brazil). Faculdade de Medicina. Departamento de Doencas Tropicais e Diagnostico por Imagem

    2013-08-15

    The computed radiography (CR) system has become the most widely used device for image acquisition since its introduction in the 1980s. Detection and early diagnosis, obtained through CR examinations, are important for the successful treatment of diseases of the hand. However, the norms used for the optimization of these images are based on international protocols. It is therefore necessary to determine radiographic technique charts for the CR system which provide a safe medical diagnosis with doses as low as reasonably achievable. The objective of this work is to develop a homogeneous extremity phantom to be used in the calibration of radiographic techniques. In the construction of the phantom, a tissue-quantification algorithm was developed using Matlab®: the average thicknesses of bone and soft tissue in the hand region of an anthropomorphic phantom were quantified, together with the corresponding thicknesses of the simulator materials (aluminum and Lucite), using a mask application and removal technique based on Gaussian histograms of the tissues of interest. The homogeneous phantom was used to calibrate the X-ray beam, and the resulting techniques were applied to a calibrated anthropomorphic hand phantom. The images were evaluated by radiology specialists using the visual grading analysis (VGA) method. Skin entrance surface doses (SED) corresponding to each technique were estimated together with the respective tube charge. The thicknesses of the simulator materials that constitute the homogeneous phantom, determined in this study, were 19.01 mm of acrylic and 0.81 mm of aluminum. A better image quality was obtained with doses as low as reasonably achievable, with dose and tube charge reduced by about 53.35% and 37.78%, respectively, compared with those normally used in the routine clinical practice of the diagnostic radiology service of HCFMB-UNESP. (author)

  20. Numerical solution of optimal departure frequency of Taipei TMS

    Science.gov (United States)

    Young, Lih-jier; Chiu, Chin-Hsin

    2016-05-01

    Route Number 5 (Bannan Line) of Taipei Mass Rapid Transit (MRT) is the most popular line in the Taipei Metro System, especially during rush hour periods. It has been estimated that there are more than 8,000 passengers on the ticket platform during 18:00∼19:00 at Taipei main station. The purpose of this research is to determine an appropriate train departure frequency for this passenger demand. Monte Carlo simulation is used to optimize the departure frequency according to the passenger information provided by the 22 stations of Route Number 5, i.e., 22 random variables. It is worth mentioning that we used 30,000 iterations to obtain different samples of the optimized departure frequency; the result, 10 trains/hr, matches the practical situation.
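
    A minimal sketch of the kind of Monte Carlo estimate described above (the station demand rates, train capacity and Poisson assumption below are illustrative placeholders, not values from the paper):

        import numpy as np

        rng = np.random.default_rng(0)

        N_STATIONS = 22          # stations on Route 5 (Bannan Line)
        TRAIN_CAPACITY = 800     # assumed passengers per train (illustrative)
        N_ITER = 30_000          # number of iterations, as in the abstract

        # Assumed mean hourly boardings per station (illustrative values).
        mean_demand = np.full(N_STATIONS, 400.0)
        mean_demand[0] = 8000.0  # e.g. Taipei Main Station during 18:00-19:00

        def required_trains_per_hour():
            # Draw one random realisation of hourly demand at each station.
            demand = rng.poisson(mean_demand)
            # The peak station load bounds how many trains per hour are needed.
            return np.ceil(demand.max() / TRAIN_CAPACITY)

        samples = np.array([required_trains_per_hour() for _ in range(N_ITER)])
        print("estimated departure frequency:", samples.mean(), "trains/hr")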

  1. MINLP solution for an optimal isotope separation system

    International Nuclear Information System (INIS)

    Boisset-Baticle, L.; Latge, C.; Joulia, X.

    1994-01-01

    This paper deals with the design of cryogenic distillation systems for the separation of hydrogen isotopes in a thermonuclear fusion process. The design must minimize the tritium inventory in the distillation columns and satisfy the separation requirements. This requires optimizing both the structure and the operating conditions of the columns. Such a problem is solved by use of a Mixed-Integer NonLinear Programming (MINLP) tool coupled to a process simulator. The MINLP procedure is based on the iterative and alternating treatment of two subproblems: an NLP problem, solved by a reduced-gradient method, and a MILP problem, solved with a branch-and-bound method coupled to a simplex algorithm. The formulation of the problem and the choice of an appropriate superstructure are detailed here, and results are finally presented concerning the optimal design of a specific isotope separation system. (author)

  2. A solution to the optimal power flow using multi-verse optimizer

    Directory of Open Access Journals (Sweden)

    Bachir Bentouati

    2016-12-01

    In this work, the most common problem of the modern power system, namely optimal power flow (OPF), is optimized using the novel meta-heuristic Multi-Verse Optimizer (MVO) algorithm. In order to solve the optimal power flow problem, the IEEE 30-bus and IEEE 57-bus systems are used. MVO is applied to solve the proposed problem. The objectives considered in the OPF problem are fuel cost reduction, voltage profile improvement, and voltage stability enhancement. The obtained results are compared with recently published meta-heuristics. Simulation results clearly reveal the effectiveness and the rapidity of the proposed algorithm for solving the OPF problem.

  3. Simple Machines Forum, a Solution for Dialogue Optimization between Physicians

    Directory of Open Access Journals (Sweden)

    Laura SÎNGIORZAN

    2013-02-01

    We developed an instrument which can ensure a quick and easy dialogue between the physicians of the Oncology Institute and family physicians. The platform we chose was Simple Machines Forum (abbreviated as SMF), a free Internet forum (BBS - Bulletin Board System) application. The purpose of this article is not to detail the software platform, but to emphasize the facilities and advantages of using this solution in the medical community.

  4. The Fundamental Solution and Its Role in the Optimal Control of Infinite Dimensional Neutral Systems

    International Nuclear Information System (INIS)

    Liu Kai

    2009-01-01

    In this work, we shall consider standard optimal control problems for a class of neutral functional differential equations in Banach spaces. As the basis of a systematic theory of neutral models, the fundamental solution is constructed and a variation of constants formula of mild solutions is established. We introduce a class of neutral resolvents and show that the Laplace transform of the fundamental solution is its neutral resolvent operator. Necessary conditions in terms of the solutions of neutral adjoint systems are established to deal with the fixed time integral convex cost problem of optimality. Based on optimality conditions, the maximum principle for time varying control domain is presented. Finally, the time optimal control problem to a target set is investigated

  5. Iterative solution to the optimal poison management problem in pressurized water reactors

    International Nuclear Information System (INIS)

    Colletti, J.P.; Levine, S.H.; Lewis, J.B.

    1983-01-01

    A new method for solving the optimal poison management problem for a multiregion pressurized water reactor has been developed. The optimization objective is to maximize the end-of-cycle core excess reactivity for any given beginning-of-cycle fuel loading. The problem is treated as an optimal control problem with the region burnup and control absorber concentrations acting as the state and control variables, respectively. Constraints are placed on the power peaking, soluble boron concentration, and control absorber concentrations. The solution method consists of successive relinearizations of the system equations resulting in a sequence of nonlinear programming problems whose solutions converge to the desired optimal control solution. Application of the method to several test problems based on a simplified three-region reactor suggests a bang-bang optimal control strategy with the peak power location switching between the inner and outer regions of the core and the critical soluble boron concentration as low as possible throughout the cycle

  6. A fast method for optimal reactive power flow solution

    Energy Technology Data Exchange (ETDEWEB)

    Sadasivam, G; Khan, M A [Anna Univ., Madras (IN). Coll. of Engineering

    1990-01-01

    A fast successive linear programming (SLP) method for minimizing transmission losses and improving the voltage profile is proposed. The method uses the same compactly stored, factorized constant matrices in all the LP steps, both for power flow solution and for constructing the LP model. The inherent oscillatory convergence of SLP methods is overcome by proper selection of initial step sizes and their gradual reduction. Detailed studies on three systems, including a 109-bus system, reveal the fast and reliable convergence property of the method. (author).

  7. Practical solutions for multi-objective optimization: An application to system reliability design problems

    International Nuclear Information System (INIS)

    Taboada, Heidi A.; Baheranwala, Fatema; Coit, David W.; Wattanapongsakorn, Naruemon

    2007-01-01

    For multiple-objective optimization problems, a common solution methodology is to determine a Pareto optimal set. Unfortunately, these sets are often large and can become difficult to comprehend and consider. Two methods are presented as practical approaches to reduce the size of the Pareto optimal set for multiple-objective system reliability design problems. The first method is a pseudo-ranking scheme that helps the decision maker select solutions that reflect his/her objective function priorities. The second approach uses a data mining clustering technique, the k-means algorithm, to group the Pareto optimal set into clusters of similar solutions; this provides the decision maker with just k general solutions to choose from (see the sketch below). From the clustered Pareto optimal set, we then attempted to find the solutions that are likely to be most relevant to the decision maker: solutions where a small improvement in one objective would lead to a large deterioration in at least one other objective. To demonstrate how these methods work, the well-known redundancy allocation problem was solved as a multiple-objective problem by using the NSGA genetic algorithm to initially find the Pareto optimal solutions, and the two proposed methods were then applied to prune the Pareto set
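
    The clustering step can be illustrated with a small hand-rolled k-means pruning of a Pareto set (a generic sketch; the objective values and the choice of k below are invented for illustration):

        import numpy as np

        def kmeans_prune(pareto_points, k, n_iter=100, seed=0):
            """Cluster Pareto-optimal objective vectors and return one
            representative per cluster (the point closest to each centroid)."""
            pts = np.asarray(pareto_points, dtype=float)
            rng = np.random.default_rng(seed)
            centroids = pts[rng.choice(len(pts), size=k, replace=False)]
            for _ in range(n_iter):
                labels = np.argmin(
                    np.linalg.norm(pts[:, None, :] - centroids[None, :, :], axis=2), axis=1)
                centroids = np.array([
                    pts[labels == j].mean(axis=0) if np.any(labels == j) else centroids[j]
                    for j in range(k)])
            reps = [pts[labels == j][
                        np.argmin(np.linalg.norm(pts[labels == j] - centroids[j], axis=1))]
                    for j in range(k) if np.any(labels == j)]
            return np.array(reps)

        # e.g. (cost, -reliability) pairs from a Pareto set, reduced to 3 representatives
        pareto = np.array([[1.0, -0.90], [1.2, -0.92], [3.0, -0.97],
                           [3.1, -0.975], [6.0, -0.99], [6.2, -0.992]])
        print(kmeans_prune(pareto, k=3))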

  8. Numerical solution of the state-delayed optimal control problems by a fast and accurate finite difference θ-method

    Science.gov (United States)

    Hajipour, Mojtaba; Jajarmi, Amin

    2018-02-01

    Using the Pontryagin's maximum principle for a time-delayed optimal control problem results in a system of coupled two-point boundary-value problems (BVPs) involving both time-advance and time-delay arguments. The analytical solution of this advance-delay two-point BVP is extremely difficult, if not impossible. This paper provides a discrete general form of the numerical solution for the derived advance-delay system by applying a finite difference θ-method. This method is also implemented for the infinite-time horizon time-delayed optimal control problems by using a piecewise version of the θ-method. A matrix formulation and the error analysis of the suggested technique are provided. The new scheme is accurate, fast and very effective for the optimal control of linear and nonlinear time-delay systems. Various types of finite- and infinite-time horizon problems are included to demonstrate the accuracy, validity and applicability of the new technique.
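
    The θ-method itself is the standard one-parameter family of finite-difference schemes; a minimal sketch for a scalar test equation (the full advance-delay two-point BVP treated in the paper is considerably more involved) might look like:

        import numpy as np

        def theta_step(f, t, y, h, theta, tol=1e-12, max_iter=50):
            """One step of the theta-method y_{n+1} = y_n + h[(1-theta) f(t_n, y_n)
            + theta f(t_{n+1}, y_{n+1})], solved here by fixed-point iteration."""
            y_next = y + h * f(t, y)                 # explicit predictor
            for _ in range(max_iter):
                y_new = y + h * ((1 - theta) * f(t, y) + theta * f(t + h, y_next))
                if abs(y_new - y_next) < tol:
                    break
                y_next = y_new
            return y_next

        # Example: y' = -2y, y(0) = 1, integrated with the Crank-Nicolson choice theta = 0.5
        f = lambda t, y: -2.0 * y
        t, y, h = 0.0, 1.0, 0.1
        for _ in range(10):
            y = theta_step(f, t, y, h, theta=0.5)
            t += h
        print(y, np.exp(-2.0 * t))   # numerical vs exact solution at t = 1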

  9. Feature selection in wind speed prediction systems based on a hybrid coral reefs optimizationExtreme learning machine approach

    International Nuclear Information System (INIS)

    Salcedo-Sanz, S.; Pastor-Sánchez, A.; Prieto, L.; Blanco-Aguilera, A.; García-Herrera, R.

    2014-01-01

    Highlights: • A novel approach for short-term wind speed prediction is presented. • The system is formed by a coral reefs optimization algorithm and an extreme learning machine. • Feature selection is carried out with the CRO to improve the ELM performance. • The method is tested in real wind farm data in USA, for the period 2007–2008. - Abstract: This paper presents a novel approach for short-term wind speed prediction based on a Coral Reefs Optimization algorithm (CRO) and an Extreme Learning Machine (ELM), using meteorological predictive variables from a physical model (the Weather Research and Forecast model, WRF). The approach is based on a Feature Selection Problem (FSP) carried out with the CRO, that must obtain a reduced number of predictive variables out of the total available from the WRF. This set of features will be the input of an ELM, that finally provides the wind speed prediction. The CRO is a novel bio-inspired approach, based on the simulation of reef formation and coral reproduction, able to obtain excellent results in optimization problems. On the other hand, the ELM is a new paradigm in neural networks’ training, that provides a robust and extremely fast training of the network. Together, these algorithms are able to successfully solve this problem of feature selection in short-term wind speed prediction. Experiments in a real wind farm in the USA show the excellent performance of the CRO–ELM approach in this FSP wind speed prediction problem
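
    An extreme learning machine of the kind used as the predictor here can be written in a few lines: hidden-layer weights are drawn at random and fixed, and only the output weights are solved by least squares. The sketch below is generic (the feature dimension, activation and toy data are illustrative, not the authors' configuration); the CRO would wrap around it, deciding which WRF variables form the columns of X.

        import numpy as np

        class ELM:
            """Single-hidden-layer extreme learning machine for regression."""
            def __init__(self, n_hidden=50, seed=0):
                self.n_hidden = n_hidden
                self.rng = np.random.default_rng(seed)

            def fit(self, X, y):
                n_features = X.shape[1]
                # Hidden-layer weights and biases are random and never trained.
                self.W = self.rng.normal(size=(n_features, self.n_hidden))
                self.b = self.rng.normal(size=self.n_hidden)
                H = np.tanh(X @ self.W + self.b)
                # Output weights are the least-squares solution of H beta = y.
                self.beta, *_ = np.linalg.lstsq(H, y, rcond=None)
                return self

            def predict(self, X):
                return np.tanh(X @ self.W + self.b) @ self.beta

        # Toy usage: predict wind speed from two (already selected) predictive features
        rng = np.random.default_rng(0)
        X = rng.random((200, 2))
        y = 3.0 * X[:, 0] - 1.5 * X[:, 1] + 0.05 * rng.normal(size=200)
        model = ELM(n_hidden=30).fit(X, y)
        print(model.predict(X[:5]))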

  10. Root-induced Changes in the Rhizosphere of Extreme High Yield Tropical Rice: 2. Soil Solution Chemical Properties

    Directory of Open Access Journals (Sweden)

    Mitsuru Osaki

    2012-09-01

    Our previous studies showed that the extreme high yield tropical rice (Padi Panjang) produced 3-8 t ha-1 without fertilizers. We also found that the rice yield did not correlate with some soil properties. We thought that this may be due to the ability of the root to affect soil properties in the root zone. Therefore, we studied the extent to which the rice root affects the chemical properties of the soil solution surrounding the root zone. A homemade rhizobox (14x10x12 cm) was used in this experiment. The rhizobox was vertically segmented at 2 cm intervals using nylon cloth that could be penetrated by neither roots nor mycorrhiza, while soil solution passed freely through the cloth. Three soils of different origins (Kuin, Bunipah and Guntung Papuyu) were used. The segment in the center was sown with 20 seeds of either the Padi Panjang or IR64 rice variety. After emergence, 10 seedlings were maintained for 5 weeks. At 4 weeks after sowing, some chemical properties of the soil solution were determined: the ammonium (NH4+), nitrate (NO3-), phosphorus (P) and iron (Fe2+) concentrations, as well as pH, electric conductivity (EC) and oxidation-reduction potential (ORP). In general, the plant root changed the solution chemical properties both inside and outside the rhizosphere. The patterns of change were affected by the properties of the soils of different origins. The release of exudates and changes in ORP may have been responsible for the changes in soil solution chemical properties.

  11. A Two-Stage Queue Model to Optimize Layout of Urban Drainage System considering Extreme Rainstorms

    OpenAIRE

    He, Xinhua; Hu, Wenfa

    2017-01-01

    Extreme rainstorms are a main cause of urban floods when the urban drainage system cannot discharge stormwater successfully. This paper investigates the distribution features of rainstorms and the draining process of urban drainage systems and uses a two-stage single-counter queue method M/M/1→M/D/1 to model the urban drainage system. The model emphasizes the randomness of extreme rainstorms, the fuzziness of the draining process, and the construction and operation cost of the drainage system. Its two objectives are total c...

  12. Optimal management of night eating syndrome: challenges and solutions

    Directory of Open Access Journals (Sweden)

    Kucukgoncu S

    2015-03-01

    Night Eating Syndrome (NES) is a unique disorder characterized by a delayed pattern of food intake in which recurrent episodes of nocturnal eating and/or excessive food consumption occur after the evening meal. NES is a clinically important disorder due to its relationship to obesity, its association with other psychiatric disorders, and problems concerning sleep. However, NES often goes unrecognized by both health professionals and patients. The lack of knowledge regarding NES in clinical settings may lead to inadequate diagnoses and inappropriate treatment approaches. Therefore, the proper diagnosis of NES is the most important issue when identifying NES and providing treatment for this disorder. Clinical assessment tools such as the Night Eating Questionnaire may help health professionals working with populations vulnerable to NES. Although NES treatment studies are still in their infancy, antidepressant treatments and psychological therapies can be used for optimal management of patients with NES. Other treatment options such as melatonergic medications, light therapy, and the anticonvulsant topiramate also hold promise as future treatment options. The purpose of this review is to provide a summary of NES, including its diagnosis, comorbidities, and treatment approaches. Possible challenges addressing patients with NES and management options are also discussed. Keywords: night eating, obesity, psychiatric disorders, weight, depression

  13. On Attainability of Optimal Solutions for Linear Elliptic Equations with Unbounded Coefficients

    Directory of Open Access Journals (Sweden)

    P. I. Kogut

    2011-12-01

    We study an optimal boundary control problem (OCP) associated to the linear elliptic equation −div(∇y + A(x)∇y) = f describing diffusion in a turbulent flow. The characteristic feature of this equation is the fact that, in applications, the stream matrix A(x) = [a_ij(x)], i,j = 1,...,N, is skew-symmetric, a_ij(x) = −a_ji(x), measurable, and belongs to L²-space (rather than L∞). An optimal solution to such a problem can inherit the singular character of the original stream matrix A. We show that optimal solutions can be attained by solutions of special optimal boundary control problems.

  14. Smooth Solutions to Optimal Investment Models with Stochastic Volatilities and Portfolio Constraints

    International Nuclear Information System (INIS)

    Pham, H.

    2002-01-01

    This paper deals with an extension of Merton's optimal investment problem to a multidimensional model with stochastic volatility and portfolio constraints. The classical dynamic programming approach leads to a characterization of the value function as a viscosity solution of the highly nonlinear associated Bellman equation. A logarithmic transformation expresses the value function in terms of the solution to a semilinear parabolic equation with quadratic growth on the derivative term. Using a stochastic control representation and some approximations, we prove the existence of a smooth solution to this semilinear equation. An optimal portfolio is shown to exist, and is expressed in terms of the classical solution to this semilinear equation. This reduction is useful for studying numerical schemes for both the value function and the optimal portfolio. We illustrate our results with several examples of stochastic volatility models popular in the financial literature

  15. OPTIMAL SOLUTIONS FOR IMPLEMENTING THE SUPPLY-SALES CHAIN MANAGEMENT

    Directory of Open Access Journals (Sweden)

    Elena COFAS

    2014-04-01

    The supply chain comprises all physical, information and financial flows linking suppliers and customers. It conveys, on the one hand, the idea of a chain in which the various elements of an industrial production system are interrelated and, on the other hand, a broad definition of supply (flows between plants, flows between a supplier and a customer, flows between two workstations, etc.). For many enterprise managers, the supply chain is a topic of major interest. Conversely, a lack of coordination along the chain may result in losses for the enterprise: obsolete inventory, devaluation, impairment, etc. Since the 1980s, several companies have brought together in the same department all functions dealing with logistic flows, from supply to distribution, through production management and resource planning. At the same time, the notion of "time" was developed to speed up these flows while increasing quality and reducing inventory. The 1990s promoted the trend of broadening the concept of integrated logistics to a more open organization, the "supply chain", which encompasses the whole organization of the enterprise, designed around the streams: sales, distribution, manufacturing, purchasing, and supply. This is the area where, through this work, I try to contribute towards finding practical solutions to implement an efficient supply chain that contributes to the increased economic performance of companies.

  16. Optimizing the pathology workstation "cockpit": Challenges and solutions

    Directory of Open Access Journals (Sweden)

    Elizabeth A Krupinski

    2010-01-01

    The 21st century has brought numerous changes to the clinical reading (i.e., image or virtual pathology slide interpretation) environment of pathologists, and it will continue to change even more dramatically as information and communication technologies (ICTs) become more widespread in the integrated healthcare enterprise. The extent to which these changes impact the practicing pathologist differs as a function of the technology under consideration, but digital "virtual slides" and the viewing of images on computer monitors instead of glass slides through a microscope clearly represent a significant change in the way that pathologists extract information from these images and render diagnostic decisions. One of the major challenges facing pathologists in this new era is how to best optimize the pathology workstation, the reading environment and the new and varied types of information available in order to ensure efficient and accurate processing of this information. Although workstations can be stand-alone units with images imported via external storage devices, this scenario is becoming less common as pathology departments connect to information highways within their hospitals and to external sites. Picture Archiving and Communication Systems are no longer confined to radiology departments but are serving the entire integrated healthcare enterprise, including pathology. In radiology, the workstation is often referred to as the "cockpit" with a "digital dashboard" and the reading room as the "control room." Although pathology has yet to "go digital" to the extent that radiology has, lessons derived from radiology reading "cockpits" can be quite valuable in setting up the digital pathology reading room. In this article, we describe the concept of the digital dashboard and provide some recent examples of informatics-based applications that have been shown to improve the workflow and quality in digital reading environments.

  17. Selective recovery of Pd(II) from extremely acidic solution using ion-imprinted chitosan fiber: Adsorption performance and mechanisms

    International Nuclear Information System (INIS)

    Lin, Shuo; Wei, Wei; Wu, Xiaohui; Zhou, Tao; Mao, Juan; Yun, Yeoung-Sang

    2015-01-01

    Highlights: • An acid-resisting chitosan fiber was prepared by the ion-imprinting technique. • Pd(II) and ECH were used as the template and two-step crosslinking agent, respectively. • IIF showed good adsorption and selectivity performance in Pd(II) solutions. • Selectivity was due to the electrostatic attraction between −NH3+ and [PdCl4]2−. • Stable sorption/desorption performance shows a potential for further application. - Abstract: A novel, selective and acid-resisting chitosan fiber adsorbent was prepared by the ion-imprinting technique using Pd(II) and epichlorohydrin as the template and two-step crosslinking agent, respectively. The resulting ion-imprinted chitosan fibers (IIF) were used to selectively adsorb Pd(II) from extremely acidic synthetic metal solutions. The adsorption and selectivity performances of IIF, including kinetics, isotherms, pH effects, and regeneration, were investigated. Pd(II) rapidly adsorbed on the IIF within 100 min, achieving adsorption equilibrium. The isotherm results showed that the maximum Pd(II) uptake on the IIF was maintained at 324.6–326.4 mg g−1 in solutions containing single and multiple metals, whereas the Pd(II) uptake on non-imprinted fibers (NIF) decreased from 313.7 to 235.3 mg g−1 in the solution containing multiple metals. Higher selectivity coefficient values were obtained from the adsorption on the IIF, indicating a better Pd(II) selectivity. The amine group, supposedly the predominant adsorption site for Pd(II), was confirmed by Fourier transform infrared spectroscopy and X-ray photoelectron spectroscopy. The pH value played a significant role in the mechanism of the selective adsorption under the extremely acidic conditions. Furthermore, the stabilized performance over three cycles of sorption/desorption shows a potential for further large-scale applications.

  18. Selective recovery of Pd(II) from extremely acidic solution using ion-imprinted chitosan fiber: Adsorption performance and mechanisms

    Energy Technology Data Exchange (ETDEWEB)

    Lin, Shuo [School of Environmental Science and Engineering, Huazhong University of Science and Technology, Wuhan 430074 (China); Wei, Wei [School of Chemical Engineering, Chonbuk National University, Jeonbuk 561-756 (Korea, Republic of); Wu, Xiaohui; Zhou, Tao [School of Environmental Science and Engineering, Huazhong University of Science and Technology, Wuhan 430074 (China); Mao, Juan, E-mail: monicamao45@hust.edu.cn [School of Environmental Science and Engineering, Huazhong University of Science and Technology, Wuhan 430074 (China); Yun, Yeoung-Sang, E-mail: ysyun@jbnu.ac.kr [School of Chemical Engineering, Chonbuk National University, Jeonbuk 561-756 (Korea, Republic of)

    2015-12-15

    Highlights: • An acid-resisting chitosan fiber was prepared by the ion-imprinting technique. • Pd(II) and ECH were used as the template and two-step crosslinking agent, respectively. • IIF showed good adsorption and selectivity performance in Pd(II) solutions. • Selectivity was due to the electrostatic attraction between −NH3+ and [PdCl4]2−. • Stable sorption/desorption performance shows a potential for further application. - Abstract: A novel, selective and acid-resisting chitosan fiber adsorbent was prepared by the ion-imprinting technique using Pd(II) and epichlorohydrin as the template and two-step crosslinking agent, respectively. The resulting ion-imprinted chitosan fibers (IIF) were used to selectively adsorb Pd(II) from extremely acidic synthetic metal solutions. The adsorption and selectivity performances of IIF, including kinetics, isotherms, pH effects, and regeneration, were investigated. Pd(II) rapidly adsorbed on the IIF within 100 min, achieving adsorption equilibrium. The isotherm results showed that the maximum Pd(II) uptake on the IIF was maintained at 324.6–326.4 mg g−1 in solutions containing single and multiple metals, whereas the Pd(II) uptake on non-imprinted fibers (NIF) decreased from 313.7 to 235.3 mg g−1 in the solution containing multiple metals. Higher selectivity coefficient values were obtained from the adsorption on the IIF, indicating a better Pd(II) selectivity. The amine group, supposedly the predominant adsorption site for Pd(II), was confirmed by Fourier transform infrared spectroscopy and X-ray photoelectron spectroscopy. The pH value played a significant role in the mechanism of the selective adsorption under the extremely acidic conditions. Furthermore, the stabilized performance over three cycles of sorption/desorption shows a potential for further large-scale applications.

  19. Smooth non-extremal D1-D5-P solutions as charged gravitational instantons

    International Nuclear Information System (INIS)

    Chakrabarty, Bidisha; Rocha, Jorge V.; Virmani, Amitabh

    2016-01-01

    We present an alternative and more direct construction of the non-supersymmetric D1-D5-P supergravity solutions found by Jejjala, Madden, Ross and Titchener. We show that these solutions — with all three charges and both rotations turned on — can be viewed as a charged version of the Myers-Perry instanton. We present an inverse scattering construction of the Myers-Perry instanton metric in Euclidean five-dimensional gravity. The angular momentum bounds in this construction turn out to be precisely the ones necessary for the smooth microstate geometries. We add charges on the Myers-Perry instanton using appropriate SO(4,4) hidden symmetry transformations. The full construction can be viewed as an extension and simplification of a previous work by Katsimpouri, Kleinschmidt and Virmani.

  20. Unusual black-holes: about some stable (non-evaporating) extremal solutions of Einstein equations

    International Nuclear Information System (INIS)

    Tonin-Zanchin, V.; Recami, E.

    1990-01-01

    Within a purely classical formulation of ''strong gravity'', we associated hadron constituents (and even hadrons themselves) with suitable stationary, axisymmetric solutions of certain new Einstein-type equations supposed to describe the strong field inside hadrons. As a consequence, the cosmological constant Λ and the masses M result in the theory to be scaled up, and transformed into a ''hadronic constant'' and into ''strong masses'', respectively. Due to the unusual range of Λ and M values considered, we met a series of solutions of the Kerr-Newman-de Sitter (KNdS) type with such uncommon horizon properties (e.g., completely impermeable horizons) that it is worth studying them also in the case of ordinary gravity. This is the aim of the present work. The requirement that those solutions be stable, i.e., that their temperature (or surface gravity) be vanishingly small, implies the coincidence of at least two of their (in general, three) horizons. In the case of ordinary Einstein equations and for stable black holes of the KNdS type, we get Regge-like relations among mass M, angular momentum J, charge q and cosmological constant Λ. For instance, with the standard definitions Q² ≡ Gq²/(4πε₀c⁴), a ≡ J/(Mc), m ≡ GM/c², in the case Λ = 0 in which m² = a² + Q² and q is negligible, we find M² = J (where c = G = 1). When considering, for simplicity, Λ > 0 and J = 0 (and q still negligible), we obtain m² = 1/(9Λ). In the most general case, the condition, for instance, of ''triple coincidence'' among the three horizons yields, for |Λa²| ≪ 1, m² ≃ 2/(9Λ) together with m² = 8(a² + Q²)/9. One of the interesting points is that - with few exceptions - all such relations (among M, J, q, Λ) lead to solutions that can be regarded as (stable) cosmological models. Worthy of notice are those representing isolated worlds, bounded by a two-way impermeable horizon. (author)

  1. Optimization and Modeling of Extreme Freshwater Discharge from Japanese First-Class River Basins to Coastal Oceans

    Science.gov (United States)

    Kuroki, R.; Yamashiki, Y. A.; Varlamov, S.; Miyazawa, Y.; Gupta, H. V.; Racault, M.; Troselj, J.

    2017-12-01

    We estimated the effects of extreme fluvial outflow events from river mouths on the salinity distribution in Japanese coastal zones. The targeted extreme event was a typhoon from 06/09/2015 to 12/09/2015, and we generated a set of hourly simulated river outflow data from the basins of all Japanese first-class rivers to the Pacific Ocean and the Sea of Japan during this period by using our model "Cell Distributed Runoff Model Version 3.1.1 (CDRMV3.1.1)". The model simulated fresh water discharges for the case of the typhoon passage over Japan. We used these data with the coupled hydrological-oceanographic model JCOPE-T, developed by the Japan Agency for Marine-Earth Science and Technology (JAMSTEC), to estimate the circulation and salinity distribution in Japanese coastal zones. With this model, the coastal oceanic circulation was reproduced adequately, which was verified by satellite remote sensing. In addition, we successfully optimized five parameters (soil roughness coefficient, river roughness coefficient, effective porosity, saturated hydraulic conductivity, and effective rainfall) by using the Shuffled Complex Evolution method developed at the University of Arizona (SCE-UA), an optimization method widely used for hydrological models. Increasing the accuracy of peak discharge prediction at river mouths during extreme typhoon events is essential for studies of continental-oceanic mutual interaction.

  2. Improving multisensor estimation of heavy-to-extreme precipitation via conditional bias-penalized optimal estimation

    Science.gov (United States)

    Kim, Beomgeun; Seo, Dong-Jun; Noh, Seong Jin; Prat, Olivier P.; Nelson, Brian R.

    2018-01-01

    A new technique for merging radar precipitation estimates and rain gauge data is developed and evaluated to improve multisensor quantitative precipitation estimation (QPE), in particular, of heavy-to-extreme precipitation. Unlike the conventional cokriging methods which are susceptible to conditional bias (CB), the proposed technique, referred to herein as conditional bias-penalized cokriging (CBPCK), explicitly minimizes Type-II CB for improved quantitative estimation of heavy-to-extreme precipitation. CBPCK is a bivariate version of extended conditional bias-penalized kriging (ECBPK) developed for gauge-only analysis. To evaluate CBPCK, cross validation and visual examination are carried out using multi-year hourly radar and gauge data in the North Central Texas region in which CBPCK is compared with the variant of the ordinary cokriging (OCK) algorithm used operationally in the National Weather Service Multisensor Precipitation Estimator. The results show that CBPCK significantly reduces Type-II CB for estimation of heavy-to-extreme precipitation, and that the margin of improvement over OCK is larger in areas of higher fractional coverage (FC) of precipitation. When FC > 0.9 and hourly gauge precipitation is > 60 mm, the reduction in root mean squared error (RMSE) by CBPCK over radar-only (RO) is about 12 mm while the reduction in RMSE by OCK over RO is about 7 mm. CBPCK may be used in real-time analysis or in reanalysis of multisensor precipitation for which accurate estimation of heavy-to-extreme precipitation is of particular importance.

  3. Optimization and analysis of large chemical kinetic mechanisms using the solution mapping method - Combustion of methane

    Science.gov (United States)

    Frenklach, Michael; Wang, Hai; Rabinowitz, Martin J.

    1992-01-01

    A method of systematic optimization, solution mapping, as applied to a large-scale dynamic model is presented. The basis of the technique is parameterization of model responses in terms of model parameters by simple algebraic expressions. These expressions are obtained by computer experiments arranged in a factorial design. The developed parameterized responses are then used in a joint multiparameter multidata-set optimization. A brief review of the mathematical background of the technique is given. The concept of active parameters is discussed. The technique is applied to determine an optimum set of parameters for a methane combustion mechanism. Five independent responses - comprising ignition delay times, pre-ignition methyl radical concentration profiles, and laminar premixed flame velocities - were optimized with respect to thirteen reaction rate parameters. The numerical predictions of the optimized model are compared to those computed with several recent literature mechanisms. The utility of the solution mapping technique in situations where the optimum is not unique is also demonstrated.
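
    The core idea of solution mapping, fitting a simple algebraic surrogate of a model response from factorial-design runs and then optimizing the surrogate instead of the full model, can be sketched as follows (the linear-plus-interaction response form and the toy "simulator" are illustrative, not the actual combustion mechanism):

        import numpy as np
        from itertools import product

        # Toy "expensive simulator": response depends on two rate parameters k1, k2.
        def simulate(k1, k2):
            return np.log(k1) + 0.5 * np.log(k2) + 0.1 * np.log(k1) * np.log(k2)

        # Three-level factorial design in normalized factor space.
        levels = [-1.0, 0.0, 1.0]
        design = np.array(list(product(levels, levels)))

        def to_params(x):                  # map factors to physical rate constants
            return 10.0 ** x               # e.g. one decade up/down around nominal

        responses = np.array([simulate(*to_params(x)) for x in design])

        # Fit the "solution mapping": response ~ b0 + b1*x1 + b2*x2 + b12*x1*x2.
        X = np.column_stack([np.ones(len(design)), design[:, 0], design[:, 1],
                             design[:, 0] * design[:, 1]])
        coef, *_ = np.linalg.lstsq(X, responses, rcond=None)

        # The cheap algebraic surrogate can now replace the simulator in optimization.
        surrogate = lambda x1, x2: coef @ np.array([1.0, x1, x2, x1 * x2])
        print(surrogate(0.3, -0.2), simulate(*to_params(np.array([0.3, -0.2]))))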

  4. Design of a Fractional Order Frequency PID Controller for an Islanded Microgrid: A Multi-Objective Extremal Optimization Method

    Directory of Open Access Journals (Sweden)

    Huan Wang

    2017-10-01

    Fractional order proportional-integral-derivative (FOPID) controllers have attracted increasing attention recently due to their better control performance than the traditional integer-order proportional-integral-derivative (PID) controllers. However, there are only a few studies concerning the fractional order control of microgrids based on evolutionary algorithms. From the perspective of multi-objective optimization, this paper presents an effective FOPID-based frequency controller design method called MOEO-FOPID for an islanded microgrid, using a multi-objective extremal optimization (MOEO) algorithm to minimize frequency deviation and controller output signal simultaneously in order to improve the efficient operation of distributed generations and energy storage devices. Its superiority to nondominated sorting genetic algorithm-II (NSGA-II)-based FOPID/PID controllers and other recently reported single-objective evolutionary algorithms, such as Kriging-based surrogate modeling and real-coded population extremal optimization-based FOPID controllers, is demonstrated by simulation studies on a typical islanded microgrid in terms of control performance including frequency deviation, deficit grid power, controller output signal and robustness.
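
    The fractional operators in an FOPID law are commonly discretized with the Grünwald-Letnikov approximation; a minimal sketch of evaluating u = Kp·e + Ki·D^(-λ)e + Kd·D^(μ)e on a sampled error signal (the gains, orders and toy signal are illustrative, not the tuned MOEO-FOPID values):

        import numpy as np

        def gl_weights(alpha, n):
            """Grunwald-Letnikov weights w_j = (-1)^j * C(alpha, j), j = 0..n."""
            w = np.empty(n + 1)
            w[0] = 1.0
            for j in range(1, n + 1):
                w[j] = w[j - 1] * (1.0 - (alpha + 1.0) / j)
            return w

        def frac_derivative(signal, alpha, h):
            """GL approximation of the order-alpha fractional derivative at every
            sample (a negative alpha gives a fractional integral)."""
            n = len(signal)
            w = gl_weights(alpha, n)
            out = np.zeros(n)
            for k in range(n):
                out[k] = h ** (-alpha) * np.dot(w[: k + 1], signal[k::-1])
            return out

        def fopid_output(error, h, Kp, Ki, Kd, lam, mu):
            """FOPID control law: u = Kp*e + Ki*D^(-lam) e + Kd*D^(mu) e."""
            return (Kp * error
                    + Ki * frac_derivative(error, -lam, h)
                    + Kd * frac_derivative(error, mu, h))

        # Toy frequency-deviation error signal sampled at h = 0.01 s
        h = 0.01
        t = np.arange(0, 2, h)
        error = 0.05 * np.exp(-t) * np.sin(2 * np.pi * t)
        u = fopid_output(error, h, Kp=1.2, Ki=0.8, Kd=0.3, lam=0.9, mu=0.7)
        print(u[:5])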

  5. Optimization of the recycling process of precipitation barren solution in a uranium mine

    International Nuclear Information System (INIS)

    Long Qing; Yu Suqin; Zhao Wucheng; Han Wei; Zhang Hui; Chen Shuangxi

    2014-01-01

    An alkaline leaching process was adopted to recover uranium from ores in a uranium mine, and a high concentration uranium solution, later used in precipitation, was obtained after ion-exchange and elution steps. The eluting agent consisted of NaCl and NaHCO3. Though the precipitation barren solution contained as much as 80 g/L Na2CO3, it still could not be recycled due to the presence of a high Cl− concentration. Therefore, both the elution and precipitation processes were optimized in order to keep the Cl− concentration in the precipitation barren solution within the recyclable range. Because the precipitation barren solution can be recycled after this optimization, agent consumption was lowered and the discharge of waste water was reduced. (authors)

  6. The Successor Function and Pareto Optimal Solutions of Cooperative Differential Systems with Concavity. I

    DEFF Research Database (Denmark)

    Andersen, Kurt Munk; Sandqvist, Allan

    1997-01-01

    We investigate the domain of definition and the domain of values for the successor function of a cooperative differential system x'=f(t,x), where the coordinate functions are concave in x for any fixed value of t. Moreover, we give a characterization of a weakly Pareto optimal solution.

  7. Multiobjective optimization of urban water resources: Moving toward more practical solutions

    Science.gov (United States)

    Mortazavi, Mohammad; Kuczera, George; Cui, Lijie

    2012-03-01

    The issue of drought security is of paramount importance for cities located in regions subject to severe prolonged droughts. The prospect of "running out of water" for an extended period would threaten the very existence of the city. Managing drought security for an urban water supply is a complex task involving trade-offs between conflicting objectives. In this paper a multiobjective optimization approach for urban water resource planning and operation is developed to overcome practically significant shortcomings identified in previous work. A case study based on the headworks system for Sydney (Australia) demonstrates the approach and highlights the potentially serious shortcomings of Pareto optimal solutions conditioned on short climate records, incomplete decision spaces, and constraints to which system response is sensitive. Where high levels of drought security are required, optimal solutions conditioned on short climate records are flawed. Our approach addresses drought security explicitly by identifying approximate optimal solutions in which the system does not "run dry" in severe droughts with expected return periods up to a nominated (typically large) value. In addition, it is shown that failure to optimize the full mix of interacting operational and infrastructure decisions and to explore the trade-offs associated with sensitive constraints can lead to significantly more costly solutions.

  8. Topology-oblivious optimization of MPI broadcast algorithms on extreme-scale platforms

    KAUST Repository

    Hasanov, Khalid; Quintin, Jean-Noë l; Lastovetsky, Alexey

    2015-01-01

    operations for particular architectures by taking into account either their topology or platform parameters. In this work we propose a simple but general approach to optimization of the legacy MPI broadcast algorithms, which are widely used in MPICH and Open

  9. Intelligent fault diagnosis of photovoltaic arrays based on optimized kernel extreme learning machine and I-V characteristics

    International Nuclear Information System (INIS)

    Chen, Zhicong; Wu, Lijun; Cheng, Shuying; Lin, Peijie; Wu, Yue; Lin, Wencheng

    2017-01-01

    Highlights: •An improved Simulink based modeling method is proposed for PV modules and arrays. •Key points of I-V curves and PV model parameters are used as the feature variables. •Kernel extreme learning machine (KELM) is explored for PV arrays fault diagnosis. •The parameters of KELM algorithm are optimized by the Nelder-Mead simplex method. •The optimized KELM fault diagnosis model achieves high accuracy and reliability. -- Abstract: Fault diagnosis of photovoltaic (PV) arrays is important for improving the reliability, efficiency and safety of PV power stations, because the PV arrays usually operate in harsh outdoor environment and tend to suffer various faults. Due to the nonlinear output characteristics and varying operating environment of PV arrays, many machine learning based fault diagnosis methods have been proposed. However, there still exist some issues: fault diagnosis performance is still limited due to insufficient monitored information; fault diagnosis models are not efficient to be trained and updated; labeled fault data samples are hard to obtain by field experiments. To address these issues, this paper makes contribution in the following three aspects: (1) based on the key points and model parameters extracted from monitored I-V characteristic curves and environment condition, an effective and efficient feature vector of seven dimensions is proposed as the input of the fault diagnosis model; (2) the emerging kernel based extreme learning machine (KELM), which features extremely fast learning speed and good generalization performance, is utilized to automatically establish the fault diagnosis model. Moreover, the Nelder-Mead Simplex (NMS) optimization method is employed to optimize the KELM parameters which affect the classification performance; (3) an improved accurate Simulink based PV modeling approach is proposed for a laboratory PV array to facilitate the fault simulation and data sample acquisition. Intensive fault experiments are
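
    The kernel ELM step itself is compact: with an RBF kernel matrix K and one-hot targets T, the output weights are beta = (K + I/C)^-1 T. A minimal sketch follows (the kernel choice, regularization constant C and toy fault data are illustrative; C and the kernel width are the kind of parameters the paper tunes with Nelder-Mead):

        import numpy as np

        def rbf_kernel(A, B, gamma):
            d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=2)
            return np.exp(-gamma * d2)

        class KELM:
            """Kernel extreme learning machine classifier (one-hot targets)."""
            def __init__(self, C=100.0, gamma=1.0):
                self.C, self.gamma = C, gamma

            def fit(self, X, y):
                self.X = np.asarray(X, dtype=float)
                self.classes = np.unique(y)
                T = (y[:, None] == self.classes[None, :]).astype(float)  # one-hot targets
                K = rbf_kernel(self.X, self.X, self.gamma)
                # Output weights: beta = (K + I/C)^-1 T
                self.beta = np.linalg.solve(K + np.eye(len(self.X)) / self.C, T)
                return self

            def predict(self, X):
                K = rbf_kernel(np.asarray(X, dtype=float), self.X, self.gamma)
                return self.classes[np.argmax(K @ self.beta, axis=1)]

        # Toy usage: 7-dimensional feature vectors (as in the abstract) for 3 fault classes
        rng = np.random.default_rng(1)
        X = rng.normal(size=(90, 7)) + np.repeat(np.arange(3), 30)[:, None]
        y = np.repeat(np.arange(3), 30)
        model = KELM(C=100.0, gamma=0.5).fit(X, y)
        print((model.predict(X) == y).mean())   # training accuracy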

  10. Ranked solutions to a class of combinatorial optimizations - with applications in mass spectrometry based peptide sequencing

    Science.gov (United States)

    Doerr, Timothy; Alves, Gelio; Yu, Yi-Kuo

    2006-03-01

    Typical combinatorial optimizations are NP-hard; however, for a particular class of cost functions the corresponding combinatorial optimizations can be solved in polynomial time. This suggests a way to efficiently find approximate solutions: find a transformation that makes the cost function as similar as possible to that of the solvable class. After keeping many high-ranking solutions using the approximate cost function, one may then re-assess these solutions with the full cost function to find the best approximate solution. Under this approach, it is important to be able to assess the quality of the solutions obtained, e.g., by finding the true ranking of the kth best approximate solution when all possible solutions are considered exhaustively. To tackle this statistical issue, we provide a systematic method starting with a scaling function generated from the finite number of high-ranking solutions, followed by a convergent iterative mapping. This method, useful in a variant of the directed paths in random media problem proposed here, can also provide a statistical significance assessment for one of the most important proteomic tasks - peptide sequencing using tandem mass spectrometry data.

  11. An effective secondary decomposition approach for wind power forecasting using extreme learning machine trained by crisscross optimization

    International Nuclear Information System (INIS)

    Yin, Hao; Dong, Zhen; Chen, Yunlong; Ge, Jiafei; Lai, Loi Lei; Vaccaro, Alfredo; Meng, Anbo

    2017-01-01

    Highlights: • A secondary decomposition approach is applied in the data pre-processing. • The empirical mode decomposition is used to decompose the original time series. • IMF1 continues to be decomposed by applying wavelet packet decomposition. • Crisscross optimization algorithm is applied to train extreme learning machine. • The proposed SHD-CSO-ELM outperforms other pervious methods in the literature. - Abstract: Large-scale integration of wind energy into electric grid is restricted by its inherent intermittence and volatility. So the increased utilization of wind power necessitates its accurate prediction. The contribution of this study is to develop a new hybrid forecasting model for the short-term wind power prediction by using a secondary hybrid decomposition approach. In the data pre-processing phase, the empirical mode decomposition is used to decompose the original time series into several intrinsic mode functions (IMFs). A unique feature is that the generated IMF1 continues to be decomposed into appropriate and detailed components by applying wavelet packet decomposition. In the training phase, all the transformed sub-series are forecasted with extreme learning machine trained by our recently developed crisscross optimization algorithm (CSO). The final predicted values are obtained from aggregation. The results show that: (a) The performance of empirical mode decomposition can be significantly improved with its IMF1 decomposed by wavelet packet decomposition. (b) The CSO algorithm has satisfactory performance in addressing the premature convergence problem when applied to optimize extreme learning machine. (c) The proposed approach has great advantage over other previous hybrid models in terms of prediction accuracy.

  12. Asymptotic Method of Solution for a Problem of Construction of Optimal Gas-Lift Process Modes

    Directory of Open Access Journals (Sweden)

    Fikrat A. Aliev

    2010-01-01

    A mathematical model of oil extraction by the gas-lift method is considered for the case when the reciprocal of the well's depth represents a small parameter. The problem of optimal mode construction (i.e., construction of optimal program trajectories and controls) is reduced to a linear-quadratic optimal control problem with a small parameter. Analytic formulae for determining the solutions at the first-order approximation with respect to the small parameter are obtained. A comparison of the obtained results with known ones on a specific example is provided, which makes it possible, in particular, to use the obtained results in realizations of oil extraction problems by the gas-lift method.

  13. Extremely Efficient Design of Organic Thin Film Solar Cells via Learning-Based Optimization

    Directory of Open Access Journals (Sweden)

    Mine Kaya

    2017-11-01

    The design of efficient thin film photovoltaic (PV) cells requires the optical power absorption to be computed inside a nano-scale structure of photovoltaic, dielectric and plasmonic materials. Calculating power absorption requires solving Maxwell's electromagnetic equations, which is done using numerical methods such as finite difference time domain (FDTD). The computational cost of thin film PV cell design and optimization is therefore cumbersome, due to successive FDTD simulations. This cost can be reduced using a surrogate-based optimization procedure. In this study, we deploy neural networks (NNs) to model optical absorption in organic PV structures. We use the corresponding surrogate-based optimization procedure to maximize light trapping inside thin film organic cells infused with metallic particles. Metallic particles are known to induce plasmonic effects at the metal–semiconductor interface, thus increasing absorption. However, a rigorous design procedure is required to achieve the best performance within known design guidelines. As a result of using NNs to model thin film solar absorption, the required time to complete the optimization is decreased by more than five times. The obtained NN model is found to be very reliable. The optimization procedure results in an absorption enhancement greater than 200%. Furthermore, we demonstrate that once a reliable surrogate model such as the developed NN is available, it can be used for alternative analyses on the proposed design, such as uncertainty analysis (e.g., fabrication error).

  14. Energy efficiency: EDF Optimal Solutions improves the l'Oreal firm of Vichy; Efficacite energetique: EDF Optimal Solutions ameliore l'usine l'Oreal de Vichy

    Energy Technology Data Exchange (ETDEWEB)

    Anon.

    2011-09-15

    The l'Oreal CAP site has inaugurated its energy eco-efficiency installations realized by EDF Optimal Solutions. This solution combines several techniques and makes it possible to halve the site's yearly CO2 releases. (O.M.)

  15. Approximate analytical solution of diffusion equation with fractional time derivative using optimal homotopy analysis method

    Directory of Open Access Journals (Sweden)

    S. Das

    2013-12-01

    In this article, the optimal homotopy analysis method is used to obtain an approximate analytic solution of the time-fractional diffusion equation with a given initial condition. The fractional derivatives are considered in the Caputo sense. Unlike the usual homotopy analysis method, this method contains at most three convergence control parameters, which describe the faster convergence of the solution. The effects of the parameters on the convergence of the approximate series solution, obtained by minimizing the averaged residual error with proper choices of the parameters, are calculated numerically and presented through graphs and tables for different particular cases.

  16. High-Level Topology-Oblivious Optimization of MPI Broadcast Algorithms on Extreme-Scale Platforms

    KAUST Repository

    Hasanov, Khalid

    2014-01-01

    There has been a significant research in collective communication operations, in particular in MPI broadcast, on distributed memory platforms. Most of the research works are done to optimize the collective operations for particular architectures by taking into account either their topology or platform parameters. In this work we propose a very simple and at the same time general approach to optimize legacy MPI broadcast algorithms, which are widely used in MPICH and OpenMPI. Theoretical analysis and experimental results on IBM BlueGene/P and a cluster of Grid’5000 platform are presented.

  17. Spike-layer solutions to nonlinear fractional Schrodinger equations with almost optimal nonlinearities

    Directory of Open Access Journals (Sweden)

    Jinmyoung Seok

    2015-07-01

    In this article, we are interested in singularly perturbed nonlinear elliptic problems involving a fractional Laplacian. Under a class of nonlinearity which is believed to be almost optimal, we construct a positive solution which exhibits multiple spikes near any given local minimum components of an exterior potential of the problem.

  18. Methods for providing decision makers with optimal solutions for multiple objectives that change over time

    CSIR Research Space (South Africa)

    Greeff, M

    2010-09-01

    Full Text Available Decision making - with the goal of finding the optimal solution - is an important part of modern life. For example: In the control room of an airport, the goals or objectives are to minimise the risk of airplanes colliding, minimise the time that a...

  19. K-maps: a vehicle to an optimal solution in combinational logic ...

    African Journals Online (AJOL)

    K-maps: a vehicle to an optimal solution in combinational logic design problems using digital multiplexers. ... Abstract. Application of Karnaugh maps (K-Maps) for the design of combinational logic circuits and sequential logic circuits is a subject that has been widely discussed. However, the use of K-Maps in the design of ...

  20. Tax solutions for optimal reduction of tobacco use in West Africa ...

    International Development Research Centre (IDRC) Digital Library (Canada)

    Tax solutions for optimal reduction of tobacco use in West Africa. During the first phase of this project, numerous decision-makers were engaged and involved in discussions with the goal of establishing a new taxation system to reduce tobacco use in West Africa. Although regional economic authorities (ECOWAS and ...

  1. Efficient Solutions and Cost-Optimal Analysis for Existing School Buildings

    Directory of Open Access Journals (Sweden)

    Paolo Maria Congedo

    2016-10-01

    The recast of the energy performance of buildings directive (EPBD) describes a comparative methodological framework to promote energy efficiency and establish minimum energy performance requirements in buildings at the lowest costs. The aim of the cost-optimal methodology is to foster the achievement of nearly zero energy buildings (nZEBs), the new target for all new buildings by 2020, characterized by a high performance with a low energy requirement almost covered by renewable sources. The paper presents the results of the application of the cost-optimal methodology in two existing buildings located in the Mediterranean area. These buildings are a kindergarten and a nursery school that differ in construction period, materials and systems. Several combinations of measures have been applied to derive cost-effective efficient solutions for retrofitting. The cost-optimal level has been identified for each building and the best performing solutions have been selected considering both a financial and a macroeconomic analysis. The results illustrate the suitability of the methodology to assess cost-optimality and energy efficiency in school building refurbishment. The research shows the variants providing the most cost-effective balance between costs and energy saving. The cost-optimal solution reduces primary energy consumption by 85% and gas emissions by 82%–83% in each reference building.

  2. High-Level Topology-Oblivious Optimization of MPI Broadcast Algorithms on Extreme-Scale Platforms

    KAUST Repository

    Hasanov, Khalid; Quintin, Jean-Noël; Lastovetsky, Alexey

    2014-01-01

    by taking into account either their topology or platform parameters. In this work we propose a very simple and at the same time general approach to optimize legacy MPI broadcast algorithms, which are widely used in MPICH and OpenMPI. Theoretical analysis

  3. Optimizing image quality and dose for digital radiography of distal pediatric extremities using the contrast-to-noise ratio

    International Nuclear Information System (INIS)

    Hess, R.; Neitzel, U.

    2012-01-01

    Purpose: To investigate the influence of X-ray tube voltage and filtration on image quality in terms of contrast-to-noise ratio (CNR) and dose for digital radiography of distal pediatric extremities and to determine conditions that give the best balance of CNR and patient dose. Materials and Methods: In a phantom study simulating the absorption properties of distal extremities, the CNR and the related patient dose were determined as a function of tube voltage in the range 40 - 66 kV, both with and without additional filtration of 0.1 mm Cu/1 mm Al. The measured CNR was used as an indicator of image quality, while the mean absorbed dose (MAD) - determined by a combination of measurement and simulation - was used as an indicator of the patient dose. Results: The most favorable relation of CNR and dose was found for the lowest tube voltage investigated (40 kV) without additional filtration. Compared to a situation with 50 kV or 60 kV, the mean absorbed dose could be lowered by 24 % and 50 %, respectively, while keeping the image quality (CNR) at the same level. Conclusion: For digital radiography of distal pediatric extremities, further CNR and dose optimization appears to be possible using lower tube voltages. Further clinical investigation of the suggested parameters is necessary. (orig.)
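
    As background, the contrast-to-noise ratio used as the image-quality indicator here is simply the signal difference between an object region and a background region divided by the noise. A minimal numeric illustration (the ROI values and the equal-CNR dose comparison below are illustrative, not data from the study):

        import numpy as np

        # Toy regions of interest (pixel values are illustrative only).
        rng = np.random.default_rng(0)
        bone = rng.normal(loc=1200.0, scale=30.0, size=1000)   # "object" ROI
        soft = rng.normal(loc=1000.0, scale=30.0, size=1000)   # background ROI

        cnr = abs(bone.mean() - soft.mean()) / soft.std()
        print(f"CNR = {cnr:.1f}")

        # Comparing techniques at equal CNR: if a 40 kV protocol reaches the same CNR
        # at 24% lower mean absorbed dose than 50 kV, the dose-normalized figure of
        # merit CNR^2/dose improves by a factor of 1/(1 - 0.24) ~ 1.3 (illustrative).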

  4. A Two-Stage Queue Model to Optimize Layout of Urban Drainage System considering Extreme Rainstorms

    Directory of Open Access Journals (Sweden)

    Xinhua He

    2017-01-01

    Full Text Available Extreme rainstorms are a main cause of urban floods when the urban drainage system cannot discharge stormwater successfully. This paper investigates the distribution features of rainstorms and the draining process of urban drainage systems and uses a two-stage single-counter queue method, M/M/1→M/D/1, to model the urban drainage system. The model emphasizes the randomness of extreme rainstorms, the fuzziness of the draining process, and the construction and operation cost of the drainage system. Its two objectives are the total cost of construction and operation and the overall sojourn time of stormwater. An improved genetic algorithm is redesigned to solve this complex nondeterministic problem, incorporating the stochastic and fuzzy characteristics of the whole drainage process. A numerical example in Shanghai illustrates how to implement the model, and comparisons with alternative algorithms show its performance in computational flexibility and efficiency. Discussions on the sensitivity of four main parameters, that is, the quantity of pump stations, drainage pipe diameter, rainstorm precipitation intensity, and confidence levels, are also presented to provide guidance for designing urban drainage systems.
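
    The record gives no equations; purely for orientation, the steady-state mean sojourn time of a tandem M/M/1 followed by M/D/1 stage can be sketched with textbook formulas (the output of the first M/M/1 stage is again Poisson, so the second stage can be treated as M/D/1). The arrival and service rates below are illustrative, not values from the study:

        # Mean sojourn time of a tandem M/M/1 -> M/D/1 system (textbook formulas).
        def mm1_sojourn(lam, mu):
            assert lam < mu
            return 1.0 / (mu - lam)              # waiting + service

        def md1_sojourn(lam, mu):
            assert lam < mu
            rho = lam / mu
            wq = rho / (2.0 * mu * (1.0 - rho))  # Pollaczek-Khinchine, deterministic service
            return wq + 1.0 / mu

        lam = 0.8                     # stormwater arrival rate (illustrative units)
        mu_pipe, mu_pump = 1.2, 1.0   # service rates of the two stages (illustrative)
        total = mm1_sojourn(lam, mu_pipe) + md1_sojourn(lam, mu_pump)
        print(f"overall sojourn time of stormwater: {total:.2f} time units")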

  5. Optimal Analytical Solution for a Capacitive Wireless Power Transfer System with One Transmitter and Two Receivers

    Directory of Open Access Journals (Sweden)

    Ben Minnaert

    2017-09-01

    Full Text Available Wireless power transfer from one transmitter to multiple receivers through inductive coupling is slowly entering the market. However, for certain applications, capacitive wireless power transfer (CWPT) using electric coupling might be preferable. In this work, we determine closed-form expressions for a CWPT system with one transmitter and two receivers. We determine the optimal solution for two design requirements: (i) maximum power transfer, and (ii) maximum system efficiency. We derive the optimal loads and provide the analytical expressions for the efficiency and power. We show that the optimal load conductances for the maximum power configuration are always larger than for the maximum efficiency configuration. Furthermore, it is demonstrated that if the receivers are coupled, this can be compensated for by introducing susceptances that have the same value for both configurations. Finally, we numerically verify our results. We illustrate the similarities to the inductive wireless power transfer (IWPT) solution and find that the same, but dual, expressions apply.

  6. Regulation of Renewable Energy Sources to Optimal Power Flow Solutions Using ADMM: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Zhang, Yijian; Hong, Mingyi; Dall'Anese, Emiliano; Dhople, Sairaj; Xu, Zi

    2017-03-03

    This paper considers power distribution systems featuring renewable energy sources (RESs), and develops a distributed optimization method to steer the RES output powers to solutions of AC optimal power flow (OPF) problems. The design of the proposed method leverages suitable linear approximations of the AC-power flow equations, and is based on the Alternating Direction Method of Multipliers (ADMM). Convergence of the RES-inverter output powers to solutions of the OPF problem is established under suitable conditions on the stepsize as well as mismatches between the commanded setpoints and actual RES output powers. In a broad sense, the methods and results proposed here are also applicable to other distributed optimization problem setups with ADMM and inexact dual updates.
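
    The paper's OPF formulation is not reproduced here; the following toy consensus problem only illustrates the x-update/z-update/dual-update pattern that ADMM-based distributed methods build on. All numbers are illustrative:

        import numpy as np

        # Consensus ADMM on a toy problem: minimize sum_i 0.5*(x_i - a_i)^2  s.t. x_i = z.
        a = np.array([1.0, 3.0, 8.0])      # illustrative local targets (e.g., setpoints)
        rho = 1.0                          # ADMM step size
        x = np.zeros_like(a); z = 0.0; u = np.zeros_like(a)

        for k in range(100):
            x = (a + rho * (z - u)) / (1.0 + rho)   # local (per-agent) updates
            z = np.mean(x + u)                      # coordination / consensus update
            u = u + x - z                           # dual (price) update

        print(z, np.mean(a))   # z converges to the average of the local targets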

  7. Optimization Solutions for Improving the Performance of the Parallel Reduction Algorithm Using Graphics Processing Units

    Directory of Open Access Journals (Sweden)

    Ion LUNGU

    2012-01-01

    Full Text Available In this paper, we research, analyze and develop optimization solutions for the parallel reduction function using graphics processing units (GPUs) that implement the Compute Unified Device Architecture (CUDA), a modern and novel approach for improving the software performance of data processing applications and algorithms. Many of these applications and algorithms make use of the reduction function in their computational steps. After having designed the function and its algorithmic steps in CUDA, we have progressively developed and implemented optimization solutions for the reduction function. In order to confirm, test and evaluate the solutions' efficiency, we have developed a custom tailored benchmark suite. We have analyzed the obtained experimental results regarding: the comparison of the execution time and bandwidth when using graphic processing units covering the main CUDA architectures (Tesla GT200, Fermi GF100, Kepler GK104) and a central processing unit; the data type influence; the binary operator's influence.
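
    For readers unfamiliar with the reduction pattern, the sketch below shows the tree-based (pairwise) combination that a CUDA reduction kernel performs in shared memory, written sequentially in plain Python for clarity only; it is not the paper's CUDA code:

        import numpy as np

        # Tree-based (pairwise) reduction: the access pattern a CUDA reduction kernel
        # implements across threads, written as nested loops here for clarity.
        def tree_reduce(values):
            data = np.array(values, dtype=np.float64)
            n = len(data)
            stride = 1
            while stride < n:
                # On a GPU, each pass of this outer loop is one synchronized step.
                for i in range(0, n, 2 * stride):
                    if i + stride < n:
                        data[i] += data[i + stride]
                stride *= 2
            return data[0]

        x = np.arange(1, 9)
        assert tree_reduce(x) == x.sum()
        print(tree_reduce(x))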

  8. Optimization of the Runner for Extremely Low Head Bidirectional Tidal Bulb Turbine

    Directory of Open Access Journals (Sweden)

    Yongyao Luo

    2017-06-01

    Full Text Available This paper presents a multi-objective optimization procedure for bidirectional bulb turbine runners which is completed using ANSYS Workbench. The optimization procedure is able to check many more geometries with less manual work. In the procedure, the initial blade shape is parameterized; the inlet and outlet angles (β1, β2), as well as the starting and ending wrap angles (θ1, θ2), for the five sections of the blade profile are selected as design variables, and the optimization target is set to obtain the maximum of the overall efficiency for the ebb and flood turbine modes. For the flow analysis, the ANSYS CFX code, with an SST (Shear Stress Transport) k-ω turbulence model, has been used to evaluate the efficiency of the turbine. An efficient response surface model relating the design parameters and the objective functions is obtained. The optimization strategy was used to optimize a model bulb turbine runner. Model tests were carried out to validate the final designs and the design procedure. For the four-bladed turbine, the efficiency improvement is 5.5% in the ebb operation direction, and 2.9% in the flood operation direction, as well as 4.3% and 4.5% for the three-bladed turbine. Numerical simulations were then performed to analyze the pressure pulsation in the pressure and suction sides of the blade for the prototype turbine with optimal four-bladed and three-bladed runners. The results show that the runner rotational frequency (fn) is the dominant frequency of the pressure pulsations in the blades for ebb and flood turbine modes, and the gravitational effect, rather than rotor-stator interaction (RSI), plays an important role in a low head horizontal axial turbine. The amplitudes of the pressure pulsations on the blade side facing the guide vanes vary little with the water head. However, the amplitudes of the pressure pulsations on the blade side facing the diffusion tube linearly increase with the water head. These results could provide

  9. Searching for the Optimal Sampling Solution: Variation in Invertebrate Communities, Sample Condition and DNA Quality.

    Directory of Open Access Journals (Sweden)

    Martin M Gossner

    Full Text Available There is a great demand for standardising biodiversity assessments in order to allow optimal comparison across research groups. For invertebrates, pitfall or flight-interception traps are commonly used, but sampling solution differs widely between studies, which could influence the communities collected and affect sample processing (morphological or genetic). We assessed arthropod communities with flight-interception traps using three commonly used sampling solutions across two forest types and two vertical strata. We first considered the effect of sampling solution and its interaction with forest type, vertical stratum, and position of sampling jar at the trap on sample condition and community composition. We found that samples collected in copper sulphate were more mouldy and fragmented relative to other solutions, which might impair morphological identification, but condition depended on forest type, trap type and the position of the jar. Community composition, based on order-level identification, did not differ across sampling solutions and only varied with forest type and vertical stratum. Species richness and species-level community composition, however, differed greatly among sampling solutions. Renner solution was a strong attractant for beetles and a repellent for true bugs. Secondly, we tested whether sampling solution affects subsequent molecular analyses and found that DNA barcoding success was species-specific. Samples from copper sulphate produced the fewest successful DNA sequences for genetic identification, and since DNA yield or quality was not particularly reduced in these samples, additional interactions between the solution and DNA must also be occurring. Our results show that the choice of sampling solution should be an important consideration in biodiversity studies. Due to the potential bias towards or against certain species by ethanol-containing sampling solutions, we suggest ethylene glycol as a suitable sampling solution when

  10. Solution of optimization problems using hybrid architecture; Solucao de problemas de otimizacao utilizando arquitetura hibrida

    Energy Technology Data Exchange (ETDEWEB)

    Murakami, Lelis Tetsuo

    2008-07-01

    king of problem. Because of the importance and magnitude of this issue, every effort that contributes to the improvement of power planning is welcome; this motivates this thesis, whose objective is to propose technically viable and economic solutions to the optimization problems with a new approach that also has the potential to be applied to many other kinds of similar problems. (author)

  11. Micro-scale NMR Experiments for Monitoring the Optimization of Membrane Protein Solutions for Structural Biology.

    Science.gov (United States)

    Horst, Reto; Wüthrich, Kurt

    2015-07-20

    Reconstitution of integral membrane proteins (IMP) in aqueous solutions of detergent micelles has been extensively used in structural biology, using either X-ray crystallography or NMR in solution. Further progress could be achieved by establishing a rational basis for the selection of detergent and buffer conditions, since the stringent bottleneck that slows down the structural biology of IMPs is the preparation of diffracting crystals or concentrated solutions of stable isotope labeled IMPs. Here, we describe procedures to monitor the quality of aqueous solutions of [2H,15N]-labeled IMPs reconstituted in detergent micelles. This approach has been developed for studies of β-barrel IMPs, where it was successfully applied for numerous NMR structure determinations, and it has also been adapted for use with α-helical IMPs, in particular GPCRs, in guiding crystallization trials and optimizing samples for NMR studies (Horst et al., 2013). 2D [15N,1H]-correlation maps are used as “fingerprints” to assess the foldedness of the IMP in solution. For promising samples, these “inexpensive” data are then supplemented with measurements of the translational and rotational diffusion coefficients, which give information on the shape and size of the IMP/detergent mixed micelles. Using microcoil equipment for these NMR experiments enables data collection with only micrograms of protein and detergent. This makes serial screens of variable solution conditions viable, enabling the optimization of parameters such as the detergent concentration, sample temperature, pH and the composition of the buffer.

  12. Genetic search for an optimal power flow solution from a high density cluster

    Energy Technology Data Exchange (ETDEWEB)

    Amarnath, R.V. [Hi-Tech College of Engineering and Technology, Hyderabad (India); Ramana, N.V. [JNTU College of Engineering, Jagityala (India)

    2008-07-01

    This paper proposed a novel method to solve optimal power flow (OPF) problems. The method is based on a genetic algorithm (GA) search from a High Density Cluster (GAHDC). The algorithm of the proposed method includes 3 stages, namely (1) a suboptimal solution is obtained via a conventional analytical method, (2) a high density cluster, which consists of other suboptimal data points from the first stage, is formed using a density-based cluster algorithm, and (3) a genetic-algorithm-based search is carried out for the exact optimal solution from a low-population-size, high-density cluster. The final optimal solution thoroughly satisfies the well-defined fitness function. A standard IEEE 30-bus test system was considered for the simulation study. Numerical results were presented and compared with the results of other approaches. It was concluded that although there is not much difference in numerical values, the proposed method has the advantage of minimal computational effort and reduced CPU time. As such, the method would be suitable for online applications such as the present Optimal Power Flow problem. 24 refs., 2 tabs., 4 figs.

  13. The optimal solution of a non-convex state-dependent LQR problem and its applications.

    Directory of Open Access Journals (Sweden)

    Xudan Xu

    Full Text Available This paper studies a Non-convex State-dependent Linear Quadratic Regulator (NSLQR problem, in which the control penalty weighting matrix [Formula: see text] in the performance index is state-dependent. A necessary and sufficient condition for the optimal solution is established with a rigorous proof by Euler-Lagrange Equation. It is found that the optimal solution of the NSLQR problem can be obtained by solving a Pseudo-Differential-Riccati-Equation (PDRE simultaneously with the closed-loop system equation. A Comparison Theorem for the PDRE is given to facilitate solution methods for the PDRE. A linear time-variant system is employed as an example in simulation to verify the proposed optimal solution. As a non-trivial application, a goal pursuit process in psychology is modeled as a NSLQR problem and two typical goal pursuit behaviors found in human and animals are reproduced using different control weighting [Formula: see text]. It is found that these two behaviors save control energy and cause less stress over Conventional Control Behavior typified by the LQR control with a constant control weighting [Formula: see text], in situations where only the goal discrepancy at the terminal time is of concern, such as in Marathon races and target hitting missions.

  14. Optimization of an on-board imaging system for extremely rapid radiation therapy

    Energy Technology Data Exchange (ETDEWEB)

    Cherry Kemmerling, Erica M.; Wu, Meng, E-mail: mengwu@stanford.edu; Yang, He; Fahrig, Rebecca [Department of Radiology, Stanford University, Stanford, California 94305 (United States); Maxim, Peter G.; Loo, Billy W. [Department of Radiation Oncology, Stanford University, Stanford, California 94305 and Stanford Cancer Institute, Stanford University School of Medicine, Stanford, California 94305 (United States)

    2015-11-15

    Purpose: Next-generation extremely rapid radiation therapy systems could mitigate the need for motion management, improve patient comfort during the treatment, and increase patient throughput for cost effectiveness. Such systems require an on-board imaging system that is competitively priced, fast, and of sufficiently high quality to allow good registration between the image taken on the day of treatment and the image taken the day of treatment planning. In this study, three different detectors for a custom on-board CT system were investigated to select the best design for integration with an extremely rapid radiation therapy system. Methods: Three different CT detectors are proposed: low-resolution (all 4 × 4 mm pixels), medium-resolution (a combination of 4 × 4 mm pixels and 2 × 2 mm pixels), and high-resolution (all 1 × 1 mm pixels). An in-house program was used to generate projection images of a numerical anthropomorphic phantom and to reconstruct the projections into CT datasets, henceforth called “realistic” images. Scatter was calculated using a separate Monte Carlo simulation, and the model included an antiscatter grid and bowtie filter. Diagnostic-quality images of the phantom were generated to represent the patient scan at the time of treatment planning. Commercial deformable registration software was used to register the diagnostic-quality scan to images produced by the various on-board detector configurations. The deformation fields were compared against a “gold standard” deformation field generated by registering initial and deformed images of the numerical phantoms that were used to make the diagnostic and treatment-day images. Registrations of on-board imaging system data were judged by the amount their deformation fields differed from the corresponding gold standard deformation fields—the smaller the difference, the better the system. To evaluate the registrations, the pointwise distance between gold standard and realistic registration

  15. Optimization of an on-board imaging system for extremely rapid radiation therapy

    International Nuclear Information System (INIS)

    Cherry Kemmerling, Erica M.; Wu, Meng; Yang, He; Fahrig, Rebecca; Maxim, Peter G.; Loo, Billy W.

    2015-01-01

    Purpose: Next-generation extremely rapid radiation therapy systems could mitigate the need for motion management, improve patient comfort during the treatment, and increase patient throughput for cost effectiveness. Such systems require an on-board imaging system that is competitively priced, fast, and of sufficiently high quality to allow good registration between the image taken on the day of treatment and the image taken the day of treatment planning. In this study, three different detectors for a custom on-board CT system were investigated to select the best design for integration with an extremely rapid radiation therapy system. Methods: Three different CT detectors are proposed: low-resolution (all 4 × 4 mm pixels), medium-resolution (a combination of 4 × 4 mm pixels and 2 × 2 mm pixels), and high-resolution (all 1 × 1 mm pixels). An in-house program was used to generate projection images of a numerical anthropomorphic phantom and to reconstruct the projections into CT datasets, henceforth called “realistic” images. Scatter was calculated using a separate Monte Carlo simulation, and the model included an antiscatter grid and bowtie filter. Diagnostic-quality images of the phantom were generated to represent the patient scan at the time of treatment planning. Commercial deformable registration software was used to register the diagnostic-quality scan to images produced by the various on-board detector configurations. The deformation fields were compared against a “gold standard” deformation field generated by registering initial and deformed images of the numerical phantoms that were used to make the diagnostic and treatment-day images. Registrations of on-board imaging system data were judged by the amount their deformation fields differed from the corresponding gold standard deformation fields—the smaller the difference, the better the system. To evaluate the registrations, the pointwise distance between gold standard and realistic registration

  16. Pareto Optimal Solutions for Network Defense Strategy Selection Simulator in Multi-Objective Reinforcement Learning

    Directory of Open Access Journals (Sweden)

    Yang Sun

    2018-01-01

    Full Text Available Using Pareto optimization in Multi-Objective Reinforcement Learning (MORL) leads to better learning results for network defense games. This is particularly useful for network security agents, who must often balance several goals when choosing what action to take in defense of a network. If the defender knows his preferred reward distribution, the advantages of Pareto optimization can be retained by using a scalarization algorithm prior to the implementation of the MORL. In this paper, we simulate a network defense scenario by creating a multi-objective zero-sum game and using Pareto optimization and MORL to determine optimal solutions and compare those solutions to different scalarization approaches. We build a Pareto Defense Strategy Selection Simulator (PDSSS) system for assisting network administrators on decision-making, specifically, on defense strategy selection, and the experiment results show that the Satisficing Trade-Off Method (STOM) scalarization approach performs better than linear scalarization or the GUESS method. The results of this paper can aid network security agents attempting to find an optimal defense policy for network security games.
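
    The record does not define the scalarizations it compares; for orientation only, the sketch below contrasts plain linear scalarization with a reference-point (Chebyshev-type) scalarization, the general family to which aspiration-based methods such as STOM belong. The weights, reward vector and aspiration levels are illustrative:

        import numpy as np

        def linear_scalarize(f, w):
            # Weighted sum of the objective/reward vector (larger is better).
            return np.dot(w, f)

        def chebyshev_scalarize(f, w, ref, aug=1e-3):
            # Smaller is better: worst weighted shortfall from the aspiration point,
            # plus a small augmentation term to avoid weakly Pareto-optimal points.
            dev = w * (ref - f)          # objectives assumed to be maximized
            return np.max(dev) + aug * np.sum(dev)

        f = np.array([0.7, 0.2])         # e.g., (security payoff, service payoff)
        w = np.array([0.5, 0.5])
        ref = np.array([1.0, 1.0])       # aspiration levels
        print(linear_scalarize(f, w), chebyshev_scalarize(f, w, ref))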

  17. Optimal solutions for a bio mathematical model for the evolution of smoking habit

    Science.gov (United States)

    Sikander, Waseem; Khan, Umar; Ahmed, Naveed; Mohyud-Din, Syed Tauseef

    In this study, we apply the Variation of Parameter Method (VPM) coupled with an auxiliary parameter to obtain approximate solutions for the epidemic model for the evolution of smoking habit in a constant population. Convergence of the developed algorithm, namely VPM with an auxiliary parameter, is studied. Furthermore, a simple way is considered for obtaining an optimal value of the auxiliary parameter by minimizing the total residual error over the domain of the problem. Comparison of the obtained results with standard VPM shows that the auxiliary parameter is very effective and reliable in controlling the convergence of the approximate solutions.

  18. Sensitivity of Optimal Solutions to Control Problems for Second Order Evolution Subdifferential Inclusions.

    Science.gov (United States)

    Bartosz, Krzysztof; Denkowski, Zdzisław; Kalita, Piotr

    In this paper the sensitivity of optimal solutions to control problems described by second-order evolution subdifferential inclusions under perturbations of state relations and of cost functionals is investigated. First we establish a new existence result for a class of such inclusions. Then, based on the theory of sequential Γ-convergence, we recall the abstract scheme concerning convergence of minimal values and minimizers. The abstract scheme works provided we can establish two properties: the Kuratowski convergence of solution sets for the state relations and some complementary Γ-convergence of the cost functionals. Then these two properties are implemented in the considered case.

  19. Choice of optimal working fluid for binary power plants at extremely low temperature brine

    Science.gov (United States)

    Tomarov, G. V.; Shipkov, A. A.; Sorokina, E. V.

    2016-12-01

    Problems of geothermal energy development based on binary power plants utilizing low-potential geothermal resources are considered. It is shown that one possible way of increasing the efficiency of heat utilization of geothermal brine over a wide temperature range is the use of multistage power systems with series-connected binary power plants based on incremental primary energy conversion. Some practically significant results of design-analytical investigations of the physicochemical properties of various organic substances, and of their influence on the main parameters of the flowsheet and on the technical and operational characteristics of the heat-mechanical and heat-exchange equipment, are presented for a binary power plant operating on extremely low-temperature geothermal brine (70°C). The calculated geothermal brine specific flow rate, net capacity, and other operating characteristics of binary power plants with a capacity of 2.5 MW using various organic substances are of practical interest. It is shown that the choice of working fluid significantly influences the parameters of the flowsheet and the operational characteristics of the binary power plant, and that selecting the working fluid amounts to a search for a compromise among efficiency, safety, and ecological criteria of the binary power plant. For investigations of working fluid selection, it is proposed to plot multiaxis diagrams of relative parameters and characteristics of binary power plants. Some examples of plotting and analyzing these diagrams for choosing the working fluid, when the efficiency of geothermal brine utilization is taken as the main priority, are presented.

  20. Optimal solution of full fuzzy transportation problems using total integral ranking

    Science.gov (United States)

    Sam’an, M.; Farikhin; Hariyanto, S.; Surarso, B.

    2018-03-01

    The full fuzzy transportation problem (FFTP) is a transportation problem in which transport costs, demand, supply and decision variables are expressed as fuzzy numbers. To solve a fuzzy transportation problem, the fuzzy parameters must be converted to crisp numbers, a step known as defuzzification. Here, a new total integral ranking method is obtained by converting trapezoidal fuzzy numbers to hexagonal fuzzy numbers, giving consistent defuzzification results for symmetrical hexagonal and non-symmetrical type-2 fuzzy numbers as well as triangular fuzzy numbers. The optimum solution of the FTP is calculated with a fuzzy transportation algorithm based on the least cost method. From this optimum solution, it is found that the total integral ranking of fuzzy numbers with different indices of optimism gives different optimum values. In addition, the total integral ranking value using hexagonal fuzzy numbers yields a better optimal value than the total integral ranking value using trapezoidal fuzzy numbers.
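
    The hexagonal variant used in the record is not spelled out; for reference, a standard total integral ranking value of a trapezoidal fuzzy number (a, b, c, d) with index of optimism alpha can be sketched as follows (the fuzzy cost below is illustrative):

        # Total integral ranking value of a trapezoidal fuzzy number (a, b, c, d)
        # with index of optimism alpha (Liou-Wang style defuzzification); the
        # hexagonal variant in the record follows the same idea with more breakpoints.
        def total_integral_value(a, b, c, d, alpha=0.5):
            left = 0.5 * (a + b)     # left integral value
            right = 0.5 * (c + d)    # right integral value
            return alpha * right + (1.0 - alpha) * left

        # Defuzzified unit transport cost for an illustrative fuzzy cost (2, 4, 6, 8):
        print(total_integral_value(2, 4, 6, 8, alpha=0.5))   # -> 5.0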

  1. Reinforcement learning solution for HJB equation arising in constrained optimal control problem.

    Science.gov (United States)

    Luo, Biao; Wu, Huai-Ning; Huang, Tingwen; Liu, Derong

    2015-11-01

    The constrained optimal control problem depends on the solution of the complicated Hamilton-Jacobi-Bellman equation (HJBE). In this paper, a data-based off-policy reinforcement learning (RL) method is proposed, which learns the solution of the HJBE and the optimal control policy from real system data. One important feature of the off-policy RL is that its policy evaluation can be realized with data generated by other behavior policies, not necessarily the target policy, which solves the insufficient exploration problem. The convergence of the off-policy RL is proved by demonstrating its equivalence to the successive approximation approach. Its implementation procedure is based on the actor-critic neural networks structure, where the function approximation is conducted with linearly independent basis functions. Subsequently, the convergence of the implementation procedure with function approximation is also proved. Finally, its effectiveness is verified through computer simulations. Copyright © 2015 Elsevier Ltd. All rights reserved.
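
    The paper's continuous-time HJB machinery is not reproduced here; the sketch below only illustrates the underlying idea of evaluating a policy from logged data with linearly independent basis functions, using a discrete-time least-squares (LSTD-style) solve on a made-up one-dimensional example. All dynamics, costs and samples are illustrative assumptions:

        import numpy as np

        # Policy evaluation with a linear function approximator: fit weights w so that
        # phi(x)^T w approximates the cost-to-go, from sampled transitions.
        rng = np.random.default_rng(1)
        gamma = 0.95

        def phi(x):                      # linearly independent basis functions
            return np.array([1.0, x, x * x])

        xs = rng.uniform(-1, 1, size=200)
        xs_next = 0.8 * xs               # closed-loop dynamics under the evaluated policy (assumed)
        costs = xs ** 2                  # stage cost (assumed)

        Phi = np.stack([phi(x) for x in xs])
        Phi_next = np.stack([phi(x) for x in xs_next])
        # Solve the projected Bellman equation  Phi w = c + gamma * Phi' w  in least squares.
        A = Phi.T @ (Phi - gamma * Phi_next)
        b = Phi.T @ costs
        w = np.linalg.solve(A, b)
        print("value-function weights:", w)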

  2. Technology-derived storage solutions for stabilizing insulin in extreme weather conditions I: the ViViCap-1 device.

    Science.gov (United States)

    Pfützner, Andreas; Pesach, Gidi; Nagar, Ron

    2017-06-01

    Injectable life-saving drugs should not be exposed to temperatures above 30°C/86°F. Frequently, weather conditions exceed these temperature thresholds in many countries. Insulin is to be kept at 4-8°C/~39-47°F until use and, once opened, is supposed to be stable for up to 31 days at room temperature (exception: 42 days for insulin Levemir). Extremely hot or cold external temperatures can lead to insulin degradation in a very short time with loss of its glucose-lowering efficacy. Combined chemical and engineering solutions for heat protection are employed in ViViCap-1 for disposable insulin pens. The device works based on vacuum insulation and heat consumption by a phase-change material. Laboratory studies with exposure of ViViCap-1 to hot outside conditions were performed to evaluate the device performance. ViViCap-1 keeps insulin below the critical internal temperature; reversing the phase-change process 'recharges' the device for further use. ViViCap-1 performed within its specifications. The small and convenient device maintains the efficacy and safety of using insulin even when carried under hot weather conditions.

  3. Searching for optimal integer solutions to set partitioning problems using column generation

    OpenAIRE

    Bredström, David; Jörnsten, Kurt; Rönnqvist, Mikael

    2007-01-01

    We describe a new approach to produce integer-feasible columns for a set partitioning problem directly while solving the linear programming (LP) relaxation using column generation. Traditionally, column generation aims to solve the LP relaxation as quickly as possible without any concern for the integer properties of the columns formed. In our approach we aim to generate the columns forming the optimal integer solution while simultaneously solving the LP relaxation. By this we can re...

  4. Perioperative Optimization of Geriatric Lower Extremity Bypass in the Era of Increased Performance Accountability.

    Science.gov (United States)

    Adkar, Shaunak S; Turley, Ryan S; Benrashid, Ehsan; Lagoo, Sandhya; Shortell, Cynthia K; Mureebe, Leila

    2017-01-01

    The initiation of bundled payment for care improvement by Centers for Medicare and Medicaid Services (CMS) has led to increased financial and performance accountability. As most vascular surgery patients are elderly and reimbursed via CMS, improving their outcomes will be critical for durable financial stability. As a first step in forming a multidisciplinary pathway for elderly vascular patients, we sought to identify modifiable perioperative variables in geriatric patients undergoing lower extremity bypass (LEB). The 2011-2013 LEB-targeted American College of Surgeons National Surgical Quality Improvement Program database was used for this analysis (n = 5316). Patients were stratified by age <65 (n = 2171), 65-74 (n = 1858), 75-84 (n = 1190), and ≥85 (n = 394) years. Comparisons of patient- and procedure-related characteristics and 30-day postoperative outcomes stratified by age groups were performed with Pearson χ² tests for categorical variables and Wilcoxon rank-sum tests for continuous variables. During the study period, 5316 total patients were identified. There were 2171 patients aged <65 years, 1858 patients in the 65-74 years age group, 1190 patients in the 75-84 years age group, and 394 patients in the ≥85 years age group. Increasing age was associated with an increased frequency of cardiopulmonary disease (P < 0.001) and a decreased frequency of diabetes, tobacco use, and prior surgical intervention (P < 0.001). Only 79% and 68% of all patients were on antiplatelet and statin therapies, respectively. Critical limb ischemia occurred more frequently in older patients (P < 0.001). Length of hospital stay, transfusion requirements, and discharge to a skilled nursing facility increased with age (P < 0.001). Thirty-day amputation rates did not differ significantly with age (P = 0.12). Geriatric patients undergoing LEB have unique and potentially modifiable perioperative factors that may improve postoperative outcomes. These

  5. Optimization of soymilk, mango nectar and sucrose solution mixes for better quality of soymilk based beverage.

    Science.gov (United States)

    Getu, Rahel; Tola, Yetenayet B; Neela, Satheesh

    2017-01-01

    Soy milk-based beverages play an important role as a healthy food alternative for human consumption. However, the ‘beany’ flavor and chalky mouth feel of soy milk often make it unpalatable to consumers. The objective of the present study was to optimize a blend of soy milk, mango nectar and sucrose solution for the best quality soy milk-based beverage. This study was designed to develop a soy milk blended beverage, with mango nectar and sucrose solution, with the best physicochemical and sensory properties. Fourteen combinations of formulations were determined by a D-optimal mixture simplex lattice design, using Design-Expert. The blended beverages were prepared by mixing the three basic ingredients in the ranges of 60–100% soy milk, 0–25% mango nectar and 0–15% sucrose solution. The prepared blended beverage was analyzed for selected physicochemical and sensory properties. The statistical significance of the terms in the regression equations was examined by Analysis of Variance (ANOVA) for each response, and the significance test level was set at 5% (p < 0.05). As the proportions of mango nectar and sucrose solution increased, total color change, total soluble solids, gross energy, titratable acidity, and beta-carotene contents increased, while a decrease in moisture, ash, protein, ether extract, mineral and phytic acid contents was observed. Finally, numerical optimization determined that 81% soy milk, 16% mango nectar and 3% sugar solution will give a soy milk blended beverage with the best physicochemical and sensory properties, with a desirability of 0.564. Blending soy milk with fruit juice such as mango is beneficial, as it improves sensory as well as selected nutritional parameters.

  6. Improved Solutions for the Optimal Coordination of DOCRs Using Firefly Algorithm

    Directory of Open Access Journals (Sweden)

    Muhammad Sulaiman

    2018-01-01

    Full Text Available Nature-inspired optimization techniques are useful tools in electrical engineering problems to minimize or maximize an objective function. In this paper, we use the firefly algorithm to improve the optimal solution for the problem of directional overcurrent relays (DOCRs). It is a complex and highly nonlinear constrained optimization problem. In this problem, we have two types of design variables, which are the plug settings (PSs) and the time dial settings (TDSs) for each relay in the circuit. The objective function is to minimize the total operating time of all the basic relays to avoid unnecessary delays. We have considered four models in this paper, which are the IEEE 3-bus, 4-bus, 6-bus, and 8-bus models. From the numerical results, it is obvious that the firefly algorithm with certain parameter settings performs better than the other state-of-the-art algorithms.
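
    The DOCR objective, the PS/TDS bounds and the coordination constraints are not modeled here; the sketch below shows only the core attraction move of the firefly algorithm on a toy objective, with all parameter values chosen for illustration:

        import numpy as np

        # Core firefly move on a toy 2-D objective (sphere function, to be minimized).
        rng = np.random.default_rng(2)
        n, dim, iters = 15, 2, 100
        beta0, gamma, alpha = 1.0, 1.0, 0.2

        def f(x):
            return np.sum(x ** 2)

        X = rng.uniform(-5, 5, size=(n, dim))
        for _ in range(iters):
            I = np.array([f(x) for x in X])          # brightness ~ objective value
            for i in range(n):
                for j in range(n):
                    if I[j] < I[i]:                  # move firefly i toward brighter j
                        r2 = np.sum((X[i] - X[j]) ** 2)
                        beta = beta0 * np.exp(-gamma * r2)
                        X[i] += beta * (X[j] - X[i]) + alpha * (rng.random(dim) - 0.5)
            alpha *= 0.97                            # gradually reduce randomization

        best = min(X, key=f)
        print("best point:", best, "objective:", f(best))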

  7. A Decomposition Model for HPLC-DAD Data Set and Its Solution by Particle Swarm Optimization

    Directory of Open Access Journals (Sweden)

    Lizhi Cui

    2014-01-01

    Full Text Available This paper proposes a separation method, based on the model of Generalized Reference Curve Measurement and the algorithm of Particle Swarm Optimization (GRCM-PSO), for the High Performance Liquid Chromatography with Diode Array Detection (HPLC-DAD) data set. Firstly, initial parameters are generated to construct reference curves for the chromatogram peaks of the compounds based on its physical principle. Then, a General Reference Curve Measurement (GRCM) model is designed to transform these parameters to scalar values, which indicate the fitness for all parameters. Thirdly, rough solutions are found by searching individual target for every parameter, and reinitialization only around these rough solutions is executed. Then, the Particle Swarm Optimization (PSO) algorithm is adopted to obtain the optimal parameters by minimizing the fitness of these new parameters given by the GRCM model. Finally, spectra for the compounds are estimated based on the optimal parameters and the HPLC-DAD data set. Through simulations and experiments, the following conclusions are drawn: (1) the GRCM-PSO method can separate the chromatogram peaks and spectra from the HPLC-DAD data set without knowing the number of the compounds in advance even when severe overlap and white noise exist; (2) the GRCM-PSO method is able to handle the real HPLC-DAD data set.

  8. Sensitive Constrained Optimal PMU Allocation with Complete Observability for State Estimation Solution

    Directory of Open Access Journals (Sweden)

    R. Manam

    2017-12-01

    Full Text Available In this paper, a sensitive constrained integer linear programming approach is formulated for the optimal allocation of Phasor Measurement Units (PMUs) in a power system network to obtain state estimation. In this approach, sensitive buses along with zero injection buses (ZIB) are considered for optimal allocation of PMUs in the network to generate state estimation solutions. Sensitive buses are derived from the mean of bus voltages as the load is consistently increased by up to 50%. Sensitive buses are ranked in order to place PMUs. Sensitive constrained optimal PMU allocation in the cases of single-line and no-line contingencies is considered in the observability analysis to ensure protection and control of the power system under abnormal conditions. Modeling of ZIB constraints is included to minimize the number of PMU network allocations. This paper presents the optimal allocation of PMUs at sensitive buses with zero injection modeling, considering cost criteria and redundancy to increase the accuracy of the state estimation solution without losing observability of the whole system. Simulations are carried out on IEEE 14, 30 and 57 bus systems and the results obtained are compared with traditional and other state estimation methods available in the literature, to demonstrate the effectiveness of the proposed method.
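
    The paper's sensitivity ranking, ZIB modeling and contingency cases are not reproduced; the sketch below shows only the basic observability-constrained placement ILP (a PMU at a bus observes that bus and its neighbours) on a small assumed 5-bus ring network, using SciPy's MILP interface:

        from scipy.optimize import milp, LinearConstraint, Bounds
        import numpy as np

        # Minimal PMU placement for complete observability on an assumed 5-bus ring.
        adj = np.array([[0, 1, 0, 0, 1],
                        [1, 0, 1, 0, 0],
                        [0, 1, 0, 1, 0],
                        [0, 0, 1, 0, 1],
                        [1, 0, 0, 1, 0]])
        n = adj.shape[0]
        A = adj + np.eye(n)                       # bus i is observed if a PMU sits at i or a neighbour
        c = np.ones(n)                            # minimize the number of PMUs
        cons = LinearConstraint(A, lb=np.ones(n), ub=np.full(n, np.inf))
        res = milp(c, constraints=cons, integrality=np.ones(n), bounds=Bounds(0, 1))
        print("PMU buses:", np.flatnonzero(res.x > 0.5))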

  9. Optimal Solution for VLSI Physical Design Automation Using Hybrid Genetic Algorithm

    Directory of Open Access Journals (Sweden)

    I. Hameem Shanavas

    2014-01-01

    Full Text Available In the optimization of VLSI physical design, area minimization and interconnect length minimization are important objectives in the physical design automation of very large scale integration chips. Minimizing the area and interconnect length scales down the size of integrated chips. To meet this objective, it is necessary to find an optimal solution for physical design components such as partitioning, floorplanning, placement, and routing. This work performs the optimization of benchmark circuits for these physical design components using a hierarchical approach based on evolutionary algorithms. The goals of minimizing the delay in partitioning, the silicon area in floorplanning, the layout area in placement, and the wirelength in routing also influence other criteria such as power, clock, speed, cost, and so forth. A hybrid evolutionary algorithm is applied in each of these phases to achieve the objective: an evolutionary algorithm that includes one or more local search steps within its evolutionary cycles to minimize area and interconnect length. This approach combines a genetic algorithm and simulated annealing in a hierarchical design to attain the objective. This hybrid approach can quickly produce optimal solutions for the popular benchmarks.

  10. Closed-form solutions for linear regulator design of mechanical systems including optimal weighting matrix selection

    Science.gov (United States)

    Hanks, Brantley R.; Skelton, Robert E.

    1991-01-01

    Vibration in modern structural and mechanical systems can be reduced in amplitude by increasing stiffness, redistributing stiffness and mass, and/or adding damping if design techniques are available to do so. Linear Quadratic Regulator (LQR) theory in modern multivariable control design attacks the general dissipative elastic system design problem in a global formulation. The optimal design, however, allows electronic connections and phase relations which are not physically practical or possible in passive structural-mechanical devices. The restriction of LQR solutions (to the Algebraic Riccati Equation) to design spaces which can be implemented as passive structural members and/or dampers is addressed. A general closed-form solution to the optimal free-decay control problem is presented which is tailored for structural-mechanical systems. The solution includes, as subsets, special cases such as the Rayleigh Dissipation Function and total energy. Weighting matrix selection is a constrained choice among several parameters to obtain desired physical relationships. The closed-form solution is also applicable to active control design for systems where perfect, collocated actuator-sensor pairs exist.
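
    The closed-form, passivity-constrained solution described in the record is not reproduced here; the sketch below only shows the standard LQR quantities it builds on (Algebraic Riccati Equation, feedback gain, closed-loop poles) for a single mass-spring-damper, with illustrative weighting matrices:

        import numpy as np
        from scipy.linalg import solve_continuous_are

        # Standard LQR for a mass-spring-damper (m = 1, k = 1, c = 0.1), illustrative only.
        m, k, c = 1.0, 1.0, 0.1
        A = np.array([[0.0, 1.0],
                      [-k / m, -c / m]])
        B = np.array([[0.0],
                      [1.0 / m]])
        Q = np.diag([1.0, 1.0])     # state weighting (illustrative choice)
        R = np.array([[0.1]])       # control weighting (illustrative choice)

        P = solve_continuous_are(A, B, Q, R)      # solves A'P + PA - P B R^-1 B' P + Q = 0
        K = np.linalg.solve(R, B.T @ P)           # optimal state-feedback gain, u = -K x
        print("gain K:", K)
        print("closed-loop eigenvalues:", np.linalg.eigvals(A - B @ K))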

  11. Space-planning and structural solutions of low-rise buildings: Optimal selection methods

    Science.gov (United States)

    Gusakova, Natalya; Minaev, Nikolay; Filushina, Kristina; Dobrynina, Olga; Gusakov, Alexander

    2017-11-01

    The present study is devoted to elaborating a methodology for appropriately selecting space-planning and structural solutions for low-rise buildings. The objective of the study is to work out the system of criteria influencing the selection of the space-planning and structural solutions that are most suitable for low-rise buildings and structures. Applying the defined criteria in practice aims to enhance the efficiency of capital investments and energy and resource saving, and to create comfortable conditions for the population, considering the climatic zoning of the construction site. The project's developments can be applied when implementing investment-construction projects of low-rise housing in different kinds of territories based on local building materials. The system of criteria influencing the optimal selection of space-planning and structural solutions of low-rise buildings has been developed. A methodological basis has also been elaborated for assessing the optimal selection of space-planning and structural solutions of low-rise buildings satisfying the requirements of energy efficiency, comfort and safety, and economic efficiency. The elaborated methodology makes it possible to intensify low-rise construction development for different types of territories, taking into account the climatic zoning of the construction site. Stimulation of low-rise construction should be based on a system of scientifically justified approaches, thus enhancing the energy efficiency, comfort, safety and economic effectiveness of low-rise buildings.

  12. Technique optimization of orbital atherectomy in calcified peripheral lesions of the lower extremities: the CONFIRM series, a prospective multicenter registry.

    Science.gov (United States)

    Das, Tony; Mustapha, Jihad; Indes, Jeffrey; Vorhies, Robert; Beasley, Robert; Doshi, Nilesh; Adams, George L

    2014-01-01

    The purpose of the CONFIRM registry series was to evaluate the use of orbital atherectomy (OA) in peripheral lesions of the lower extremities, as well as to optimize the technique of OA. Methods of treating calcified arteries (historically a strong predictor of treatment failure) have improved significantly over the past decade and now include minimally invasive endovascular treatments, such as OA, which offers unique versatility in modifying calcific lesions above and below the knee. Patients (3135) undergoing OA by more than 350 physicians at over 200 US institutions were enrolled on an "all-comers" basis, resulting in registries that provided site-reported patient demographics, ABI, Rutherford classification, co-morbidities, lesion characteristics, plaque morphology, device usage parameters, and procedural outcomes. Treatment with OA reduced pre-procedural stenosis from an average of 88% to 35%. Final residual stenosis after adjunctive treatments, typically low-pressure percutaneous transluminal angioplasty (PTA), averaged 10%. Plaque removal was most effective for severely calcified lesions and least effective for soft plaque. Shorter spin times and smaller crown sizes significantly lowered procedural complications, which included slow flow (4.4%), embolism (2.2%), and spasm (6.3%), emphasizing the importance of treatment regimens that focus on plaque modification over maximizing luminal gain. The OA technique optimization, which resulted in a change of device usage across the CONFIRM registry series, corresponded to a lower incidence of adverse events irrespective of calcium burden or co-morbidities. Copyright © 2013 The Authors. Wiley Periodicals, Inc.

  13. Optimal volume of injectate for fluoroscopy-guided cervical interlaminar epidural injection in patients with neck and upper extremity pain.

    Science.gov (United States)

    Park, Jun Young; Kim, Doo Hwan; Lee, Kunhee; Choi, Seong-Soo; Leem, Jeong-Gil

    2016-10-01

    There is no study of the optimal volume of contrast medium to use in cervical interlaminar epidural injections (CIEIs) for appropriate spread to target lesions. To determine the optimal volume of contrast medium to use in CIEIs. We analyzed the records of 80 patients who had undergone CIEIs. Patients were divided into 3 groups according to the amount of contrast: 3, 4.5, and 6 mL. The spread of medium to the target level was analyzed. Numerical rating scale data were also analyzed. The dye had spread to a point above the target level in 15 (78.9%), 22 (84.6%), and 32 (91.4%) patients in groups 1 to 3, respectively. The dye reached both sides in 14 (73.7%), 18 (69.2%), and 23 (65.7%) patients, and reached the ventral epidural space in 15 (78.9%), 22 (84.6%), and 30 (85.7%) patients, respectively. There were no significant differences in contrast spread among the groups. There were no significant differences in the numerical rating scale scores among the groups during the 3 months. When performing CIEIs, 3 mL of medication is a sufficient volume for the treatment of neck and upper-extremity pain induced by lower cervical degenerative disease.

  14. A dedicated cone-beam CT system for musculoskeletal extremities imaging: Design, optimization, and initial performance characterization

    International Nuclear Information System (INIS)

    Zbijewski, W.; De Jean, P.; Prakash, P.; Ding, Y.; Stayman, J. W.; Packard, N.; Senn, R.; Yang, D.; Yorkston, J.; Machado, A.; Carrino, J. A.; Siewerdsen, J. H.

    2011-01-01

    cm diameter bore (20 × 20 × 20 cm³ field of view); total acquisition arc of ∼240 deg. The system MTF declines to 50% at ∼1.3 mm⁻¹ and to 10% at ∼2.7 mm⁻¹, consistent with sub-millimeter spatial resolution. Analysis of DQE suggested a nominal technique of 90 kVp (+0.3 mm Cu added filtration) to provide high imaging performance from ∼500 projections at less than ∼0.5 kW power, implying ∼6.4 mGy (0.064 mSv) for low-dose protocols and ∼15 mGy (0.15 mSv) for high-quality protocols. The experimental studies show improved image uniformity and contrast-to-noise ratio (without increase in dose) through incorporation of a custom 10:1 GR antiscatter grid. Cadaver images demonstrate exquisite bone detail, visualization of articular morphology, and soft-tissue visibility comparable to diagnostic CT (10-20 HU contrast resolution). Conclusions: The results indicate that the proposed system will deliver volumetric images of the extremities with soft-tissue contrast resolution comparable to diagnostic CT and improved spatial resolution at potentially reduced dose. Cascaded systems analysis provided a useful basis for system design and optimization without costly repeated experimentation. A combined process of design specification, image quality analysis, clinical feedback, and revision yielded a prototype that is now awaiting clinical pilot studies. Potential advantages of the proposed system include reduced space and cost, imaging of load-bearing extremities, and combined volumetric imaging with real-time fluoroscopy and digital radiography.

  15. The importance of functional form in optimal control solutions of problems in population dynamics

    Science.gov (United States)

    Runge, M.C.; Johnson, F.A.

    2002-01-01

    Optimal control theory is finding increased application in both theoretical and applied ecology, and it is a central element of adaptive resource management. One of the steps in an adaptive management process is to develop alternative models of system dynamics, models that are all reasonable in light of available data, but that differ substantially in their implications for optimal control of the resource. We explored how the form of the recruitment and survival functions in a general population model for ducks affected the patterns in the optimal harvest strategy, using a combination of analytical, numerical, and simulation techniques. We compared three relationships between recruitment and population density (linear, exponential, and hyperbolic) and three relationships between survival during the nonharvest season and population density (constant, logistic, and one related to the compensatory harvest mortality hypothesis). We found that the form of the component functions had a dramatic influence on the optimal harvest strategy and the ultimate equilibrium state of the system. For instance, while it is commonly assumed that a compensatory hypothesis leads to higher optimal harvest rates than an additive hypothesis, we found this to depend on the form of the recruitment function, in part because of differences in the optimal steady-state population density. This work has strong direct consequences for those developing alternative models to describe harvested systems, but it is relevant to a larger class of problems applying optimal control at the population level. Often, different functional forms will not be statistically distinguishable in the range of the data. Nevertheless, differences between the functions outside the range of the data can have an important impact on the optimal harvest strategy. Thus, development of alternative models by identifying a single functional form, then choosing different parameter combinations from extremes on the likelihood

  16. Optimal design of cluster-based ad-hoc networks using probabilistic solution discovery

    International Nuclear Information System (INIS)

    Cook, Jason L.; Ramirez-Marquez, Jose Emmanuel

    2009-01-01

    The reliability of ad-hoc networks is gaining popularity in two areas: as a topic of academic interest and as a key performance parameter for defense systems employing this type of network. The ad-hoc network is dynamic and scalable and these descriptions are what attract its users. However, these descriptions are also synonymous with undefined and unpredictable when considering the impacts on the reliability of the system. The configuration of an ad-hoc network changes continuously and this fact implies that no single mathematical expression or graphical depiction can describe the system reliability-wise. Previous research has used mobility and stochastic models to address this challenge successfully. In this paper, the authors leverage the stochastic approach and build upon it a probabilistic solution discovery (PSD) algorithm to optimize the topology for a cluster-based mobile ad-hoc wireless network (MAWN). Specifically, the membership of nodes within the back-bone network or networks will be assigned in such a way as to maximize reliability subject to a constraint on cost. The constraint may also be considered as a non-monetary cost, such as weight, volume, power, or the like. When a cost is assigned to each component, a maximum cost threshold is assigned to the network, and the method is run; the result is an optimized allocation of the radios enabling back-bone network(s) to provide the most reliable network possible without exceeding the allowable cost. The method is intended for use directly as part of the architectural design process of a cluster-based MAWN to efficiently determine an optimal or near-optimal design solution. It is capable of optimizing the topology based upon all-terminal reliability (ATR), all-operating terminal reliability (AoTR), or two-terminal reliability (2TR)

  17. Optimizing an Investment Solution in Conditions of Uncertainty and Risk as a Multicriterial Task

    Directory of Open Access Journals (Sweden)

    Kotsyuba Oleksiy S.

    2017-10-01

    Full Text Available The article is concerned with the methodology for optimizing investment decisions in conditions of uncertainty and risk. The subject area of the study relates, first of all, to real investment. The problem of modeling an optimal investment solution is considered to be a multicriterial task. Also, the constructive part of the publication is based on the position that the multicriteriality of the objectives of investment projecting is the result, first, of the complex nature of the category of economic attractiveness (efficiency) of real investment and, second, of the need to take into account the risk factor, which is a vector measure, in the preparation of an investment solution. An attempt has been made to develop an instrumentarium to optimize investment decisions in a situation of uncertainty and the risk it engenders, based on the use of a roll-up of the local criteria. As a result of its implementation, a model has been proposed, which has the advantage that it takes into account, to a greater extent than is the case for standardized roll-up options, the substantive and formal features of the local (detailed) criteria.

  18. General solution of undersampling frequency conversion and its optimization for parallel photodisplacement imaging.

    Science.gov (United States)

    Nakata, Toshihiko; Ninomiya, Takanori

    2006-10-10

    A general solution of undersampling frequency conversion and its optimization for parallel photodisplacement imaging is presented. Phase-modulated heterodyne interference light generated by a linear region of periodic displacement is captured by a charge-coupled device image sensor, in which the interference light is sampled at a sampling rate lower than the Nyquist frequency. The frequencies of the components of the light, such as the sideband and carrier (which include photodisplacement and topography information, respectively), are downconverted and sampled simultaneously based on the integration and sampling effects of the sensor. A general solution of frequency and amplitude in this downconversion is derived by Fourier analysis of the sampling procedure. The optimal frequency condition for the heterodyne beat signal, modulation signal, and sensor gate pulse is derived such that undesirable components are eliminated and each information component is converted into an orthogonal function, allowing each to be discretely reproduced from the Fourier coefficients. The optimal frequency parameters that maximize the sideband-to-carrier amplitude ratio are determined, theoretically demonstrating its high selectivity over 80 dB. Preliminary experiments demonstrate that this technique is capable of simultaneous imaging of reflectivity, topography, and photodisplacement for the detection of subsurface lattice defects at a speed corresponding to an acquisition time of only 0.26 s per 256 x 256 pixel area.
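
    The paper's general solution and optimal frequency conditions are not reproduced here; the sketch below only shows the basic alias (folded) frequency relation that undersampling frequency conversion exploits, with illustrative frame-rate and component frequencies:

        # Alias (folded) frequency produced when a component at f is sampled at fs < 2f.
        def folded_frequency(f, fs):
            return abs(f - round(f / fs) * fs)

        fs = 30.0                      # frame (sampling) rate, arbitrary units
        for f in (4.0, 26.0, 34.0, 101.0):
            print(f"component at {f:g} -> appears at {folded_frequency(f, fs):g}")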

  19. An Ad-Hoc Initial Solution Heuristic for Metaheuristic Optimization of Energy Market Participation Portfolios

    Directory of Open Access Journals (Sweden)

    Ricardo Faia

    2017-06-01

    Full Text Available The deregulation of the electricity sector has culminated in the introduction of competitive markets. In addition, the emergence of new forms of electric energy production, namely the production of renewable energy, has brought additional changes in electricity market operation. Renewable energy has significant advantages, but at the cost of an intermittent character. The generation variability adds new challenges for negotiating players, as they have to deal with a new level of uncertainty. In order to assist players in their decisions, decision support tools capable of assisting players in their negotiations are crucial. Artificial intelligence techniques play an important role in this decision support, as they can provide valuable results in rather small execution times, namely regarding the problem of optimizing the electricity market participation portfolio. This paper proposes a heuristic method that provides an initial solution which allows metaheuristic techniques to improve their results through a good initialization of the optimization process. Results show that, by using the proposed heuristic, multiple metaheuristic optimization methods are able to improve their solutions in a faster execution time, thus providing a valuable contribution to supporting players in energy market negotiations.

  20. Study of Research and Development Processes through Fuzzy Super FRM Model and Optimization Solutions

    Directory of Open Access Journals (Sweden)

    Flavius Aurelian Sârbu

    2015-01-01

    Full Text Available The aim of this study is to measure resources for R&D (research and development) at the regional level in Romania and to obtain primary data that will be important for making the right decisions to increase competitiveness and development based on a knowledge economy. As motivation, we would like to emphasize that by using the Super Fuzzy FRM model we want to determine the state of R&D processes at the regional level by a means different from the statistical survey, while with the two optimization methods we intend to provide optimization solutions for the R&D actions of enterprises. Therefore, to fulfill the above-mentioned aim, in this application-oriented paper we decided to use a questionnaire and, for the interpretation of the results, the Super Fuzzy FRM model, which represents the main novelty of our paper, as this theory provides a formalism based on matrix calculus that allows the processing of large volumes of information and delivers results that are difficult or impossible to obtain through statistical processing. Another novelty of the paper is the set of optimization solutions presented in this work, given for the situation when the sales price is variable and the quantity sold is constant in time, and for the reverse situation.

  1. MATHEMATICAL SOLUTIONS FOR OPTIMAL DIMENSIONING OF NUMBER AND HEIGHTS OF SOME HYDROTECHNIQUE ON TORRENTIAL FORMATION

    Directory of Open Access Journals (Sweden)

    Nicolae Petrescu

    2010-01-01

    Full Text Available This paper is intended to achieve a mathematical model for the optimal dimensioning of the number and heights of some dams/thresholds on a torrential formation: during a downpour a decrease of the water flow rate is obtained, and through the solid material deposited behind the constructions a new, smaller slope of the valley is obtained, which changes the torrential character the valley had before the construction. The choice of the dam and its characteristic dimensions may be an optimization issue, and the location of dams on the torrential (rainfall) formation is dictated by the capabilities of the foundation and restraint, so the chosen solutions will have to comply with these sites. Finally, the choice of the optimal solution to limit the torrential (rainfall) character will be based on a calculation in which the number of thresholds/dams can be a variable, their heights varying accordingly. The calculation method presented is an attempt to demonstrate the multiple opportunities that mathematics offers for solving a technical issue, the protection of soil against erosion, which is currently a very topical environmental protection concern.

  2. A New Method Based on Simulation-Optimization Approach to Find Optimal Solution in Dynamic Job-shop Scheduling Problem with Breakdown and Rework

    Directory of Open Access Journals (Sweden)

    Farzad Amirkhani

    2017-03-01

    The proposed method is implemented on classical job-shop problems with the objective of makespan, and the results are compared with a mixed integer programming model. Moreover, appropriate dispatching priorities are obtained for the dynamic job-shop problem by minimizing a multi-objective criterion. The results show that simulation-based optimization is highly capable of capturing the main characteristics of the shop and producing optimal/near-optimal solutions with a high degree of credibility.

  3. Cocoa agroforestry is less resilient to sub-optimal and extreme climate than cocoa in full sun.

    Science.gov (United States)

    Abdulai, Issaka; Vaast, Philippe; Hoffmann, Munir P; Asare, Richard; Jassogne, Laurence; Van Asten, Piet; Rötter, Reimund P; Graefe, Sophie

    2018-01-01

    Cocoa agroforestry is perceived as a potential adaptation strategy to sub-optimal or adverse environmental conditions such as drought. We tested this strategy over wet, dry and extremely dry periods, comparing cocoa in full sun with two agroforestry systems: cocoa shaded by (i) a leguminous tree species, Albizia ferruginea, and (ii) Antiaris toxicaria, the most common shade tree species in the region. We monitored micro-climate, sap flux density, throughfall, and soil water content from November 2014 to March 2016 at the forest-savannah transition zone of Ghana, with the climate and drought events during the study period serving as a proxy for projected future climatic conditions in marginal cocoa cultivation areas of West Africa. Combined transpiration of cocoa and shade trees was significantly higher than that of cocoa in full sun during wet and dry periods. During the wet period, the transpiration rate of cocoa plants shaded by A. ferruginea was significantly lower than that of cocoa under A. toxicaria and in full sun. During the extreme drought of 2015/16, all cocoa plants under A. ferruginea died. Cocoa plants under A. toxicaria suffered 77% mortality and massive stress with a significantly reduced sap flux density of 115 g cm⁻² day⁻¹, whereas cocoa in full sun maintained a higher sap flux density of 170 g cm⁻² day⁻¹. Moreover, cocoa sap flux recovery after the extreme drought was significantly higher in full sun (163 g cm⁻² day⁻¹) than under A. toxicaria (37 g cm⁻² day⁻¹). Soil water content in full sun was higher than in the shaded systems, suggesting that cocoa mortality in the shaded systems was linked to strong competition for soil water. The present results have major implications for cocoa cultivation under climate change. Promoting shade cocoa agroforestry as a drought resilient system, especially under climate change, needs to be carefully reconsidered, as shade tree species such as the recommended leguminous A. ferruginea constitute a major risk to cocoa functioning under

  4. Anti-predatory particle swarm optimization: Solution to nonconvex economic dispatch problems

    Energy Technology Data Exchange (ETDEWEB)

    Selvakumar, A. Immanuel [Department of Electrical and Electronics Engineering, Karunya Institute of Technology and Sciences, Coimbatore 641114, Tamilnadu (India); Thanushkodi, K. [Department of Electronics and Instrumentation Engineering, Government College of Technology, Coimbatore 641013, Tamilnadu (India)

    2008-01-15

    This paper proposes a new particle swarm optimization (PSO) strategy, namely anti-predatory particle swarm optimization (APSO), to solve nonconvex economic dispatch problems. In the classical PSO, the movement of a particle (bird) is governed by three behaviors: inertial, cognitive and social. The cognitive and social behaviors are the components of the foraging activity, which help the swarm of birds to locate food. Another activity observed in birds is the anti-predatory nature, which helps the swarm to escape from predators. In this work, the anti-predatory activity is modeled and embedded in the classical PSO to form APSO. This inclusion enhances the exploration capability of the swarm. To validate the proposed APSO model, it is applied to two test systems having nonconvex solution spaces. Satisfactory results are obtained when compared with previous approaches. (author)
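    A minimal sketch of the idea (generic PSO on a toy nonconvex cost, with an added repulsion term standing in for the anti-predatory behavior; the coefficients, the Rastrigin-like stand-in cost and the use of the worst known position as the 'predator' are illustrative assumptions, not the authors' formulation):

```python
import numpy as np

def apso_step(x, v, pbest, gbest, predator, w=0.7, c1=1.5, c2=1.5, c3=0.5):
    """One update: inertia + cognitive + social terms of classical PSO, plus an
    illustrative anti-predatory repulsion pushing particles away from 'predator'."""
    r1, r2, r3 = (np.random.rand(*x.shape) for _ in range(3))
    d = x - predator
    repulse = c3 * r3 * d / (np.linalg.norm(d, axis=1, keepdims=True) + 1e-9)
    v_new = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x) + repulse
    return x + v_new, v_new

def cost(x):                                   # toy nonconvex stand-in for the dispatch cost
    return np.sum(x**2 - 10.0 * np.cos(2.0 * np.pi * x) + 10.0, axis=1)

rng = np.random.default_rng(0)
x = rng.uniform(-5.0, 5.0, (30, 2))
v = np.zeros_like(x)
pbest, pcost = x.copy(), cost(x)
for _ in range(200):
    predator = x[np.argmax(cost(x))]           # worst current position plays the predator
    gbest = pbest[np.argmin(pcost)]
    x, v = apso_step(x, v, pbest, gbest, predator)
    c = cost(x)
    better = c < pcost
    pbest[better], pcost[better] = x[better], c[better]
print(pbest[np.argmin(pcost)], pcost.min())
```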

  5. A Generalized Measure for the Optimal Portfolio Selection Problem and its Explicit Solution

    Directory of Open Access Journals (Sweden)

    Zinoviy Landsman

    2018-03-01

    Full Text Available In this paper, we offer a novel class of utility functions applied to optimal portfolio selection. This class incorporates as special cases important measures such as the mean-variance, Sharpe ratio, mean-standard deviation and others. We provide an explicit solution to the problem of optimal portfolio selection based on this class. Furthermore, we show that each measure in this class generally reduces to an efficient frontier that coincides with, or is contained in, the classical mean-variance efficient frontier. In addition, a condition is provided for the existence of a one-to-one correspondence between the parameter of this class of utility functions and the trade-off parameter λ in the mean-variance utility function. This correspondence essentially provides insight into the choice of this parameter. We illustrate our results with a portfolio of stocks from the National Association of Securities Dealers Automated Quotation (NASDAQ).

  6. A Complete First-Order Analytical Solution for Optimal Low-Thrust Limited-Power Transfers Between Coplanar Orbits with Small Eccentricities

    Science.gov (United States)

    Da Silva Fernandes, Sandro; Das Chagas Carvalho, Francisco; Vilhena de Moraes, Rodolpho

    The purpose of this work is to present a complete first-order analytical solution, which includes short periodic terms, for the problem of optimal low-thrust limited-power trajectories with large-amplitude transfers (no rendezvous) between coplanar orbits with small eccentricities in a Newtonian central gravity field. The study of these transfers is particularly interesting because the orbits found in practice often have a small eccentricity, and the problem of transferring a vehicle from a low earth orbit to a high earth orbit is frequently encountered. Besides, the analysis has been motivated by the renewed interest in the use of low-thrust propulsion systems in space missions observed in the last two decades. Several researchers have obtained numerical and sometimes analytical solutions for a number of specific initial orbits and specific thrust profiles, and averaging methods are also used in such studies. Firstly, the optimization problem associated with the space transfer problem is formulated as a Mayer problem of optimal control with Cartesian elements - position and velocity vectors - as state variables. After applying the Pontryagin Maximum Principle, successive Mathieu transformations are performed and suitable sets of orbital elements are introduced. The short periodic terms are eliminated from the maximum Hamiltonian function through an infinitesimal canonical transformation built through the Hori method - a perturbation canonical method based on Lie series. The new Hamiltonian function, which results from the infinitesimal canonical transformation, describes the extremal trajectories for long duration maneuvers. Closed-form analytical solutions are obtained for the new canonical system by solving the Hamilton-Jacobi equation through the separation of variables technique. By applying the transformation equations of the algorithm of the Hori method, a first-order analytical solution for the problem is obtained in non-singular orbital elements. For long duration maneuvers

  7. Shape Optimization in Contact Problems with Coulomb Friction and a Solution-Dependent Friction Coefficient

    Czech Academy of Sciences Publication Activity Database

    Beremlijski, P.; Outrata, Jiří; Haslinger, Jaroslav; Pathó, R.

    2014-01-01

    Roč. 52, č. 5 (2014), s. 3371-3400 ISSN 0363-0129 R&D Projects: GA ČR(CZ) GAP201/12/0671 Grant - others:GA MŠK(CZ) CZ.1.05/1.1.00/02.0070; GA MŠK(CZ) CZ.1.07/2.3.00/20.0070 Institutional support: RVO:67985556 ; RVO:68145535 Keywords : shape optimization * contact problems * Coulomb friction * solution-dependent coefficient of friction * mathematical programs with equilibrium constraints Subject RIV: BA - General Mathematics Impact factor: 1.463, year: 2014 http://library.utia.cas.cz/separaty/2014/MTR/outrata-0434234.pdf

  8. Global stability, periodic solutions, and optimal control in a nonlinear differential delay model

    Directory of Open Access Journals (Sweden)

    Anatoli F. Ivanov

    2010-09-01

    Full Text Available A nonlinear differential equation with delay serving as a mathematical model of several applied problems is considered. Sufficient conditions for the global asymptotic stability and for the existence of periodic solutions are given. Two particular applications are treated in detail. The first one is a blood cell production model by Mackey, for which new periodicity criteria are derived. The second application is a modified economic model with delay due to Ramsey. An optimization problem for a maximal consumption is stated and solved for the latter.

  9. An Analytical Solution for Yaw Maneuver Optimization on the International Space Station and Other Orbiting Space Vehicles

    Science.gov (United States)

    Dobrinskaya, Tatiana

    2015-01-01

    This paper suggests a new method for optimizing yaw maneuvers on the International Space Station (ISS). Yaw rotations are the most common large maneuvers on the ISS, often used for docking and undocking operations as well as for other activities. With maneuver optimization, large maneuvers which used to be performed on thrusters can be performed either using control moment gyroscopes (CMGs) or with significantly reduced thruster firings. Maneuver optimization helps to save expensive propellant and reduce structural loads - an important factor for the ISS service life. In addition, optimized maneuvers reduce contamination of critical elements of the vehicle structure, such as solar arrays. This paper presents an analytical solution for optimizing yaw attitude maneuvers. Equations describing the pitch and roll motion needed to counteract the major torques during a yaw maneuver are obtained, and a yaw rate profile is proposed. The paper also describes the physical basis of the suggested optimization approach. In the optimized case, the torques are significantly reduced. This torque reduction was compared with the existing optimization method, which relies on a computational solution. It was shown that the attitude profiles and the torque reduction match well for these two methods of optimization. Simulations using the ISS flight software showed similar propellant consumption for both methods. The analytical solution proposed in this paper has major benefits with respect to the computational approach. In contrast to the current computational solution, which can only be calculated on the ground, the analytical solution does not require extensive computational resources and can be implemented in the onboard software, thus making the maneuver execution automatic. The automatic maneuver significantly simplifies operations and, if necessary, allows a maneuver to be performed without communication with the ground. It also reduces the probability of command

  10. Simple and accurate solution for convective-radiative fin with temperature dependent thermal conductivity using double optimal linearization

    International Nuclear Information System (INIS)

    Bouaziz, M.N.; Aziz, Abdul

    2010-01-01

    A novel concept of double optimal linearization is introduced and used to obtain a simple and accurate solution for the temperature distribution in a straight rectangular convective-radiative fin with temperature dependent thermal conductivity. The solution is built from the classical solution for a pure convection fin of constant thermal conductivity, which appears in terms of hyperbolic functions. When compared with the direct numerical solution, the double optimally linearized solution is found to be accurate within 4% for a range of radiation-conduction and thermal conductivity parameters that are likely to be encountered in practice. The present solution is simple and offers superior accuracy compared with the fairly complex approximate solutions based on the homotopy perturbation method, the variational iteration method, and the double series regular perturbation method. The fin efficiency expression resembles the classical result for the constant thermal conductivity convecting fin. The present results are easily usable by practicing engineers in their thermal design and analysis work involving fins.
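    For reference, the classical constant-conductivity, insulated-tip convecting-fin solution that the double optimal linearization builds on is the standard textbook result (stated here in generic notation, not the paper's):

\[
\frac{\theta(x)}{\theta_b} = \frac{\cosh\bigl(m\,(L-x)\bigr)}{\cosh(mL)}, \qquad
m = \sqrt{\frac{hP}{kA_c}}, \qquad
\eta_{\mathrm{fin}} = \frac{\tanh(mL)}{mL},
\]

    where \(\theta = T - T_\infty\), \(\theta_b\) is the base temperature excess, \(h\) the convection coefficient, \(P\) and \(A_c\) the fin perimeter and cross-sectional area, \(k\) the (constant) thermal conductivity, and \(L\) the fin length.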

  11. Short-Term Distribution System State Forecast Based on Optimal Synchrophasor Sensor Placement and Extreme Learning Machine

    Energy Technology Data Exchange (ETDEWEB)

    Jiang, Huaiguang; Zhang, Yingchen

    2016-11-14

    This paper proposes an approach for distribution system state forecasting, which aims to provide accurate and high-speed state forecasting with an optimal synchrophasor sensor placement (OSSP) based state estimator and an extreme learning machine (ELM) based forecaster. Specifically, considering the sensor installation cost and measurement error, an OSSP algorithm is proposed to reduce the number of synchrophasor sensors while keeping the whole distribution system numerically and topologically observable. Then, the weighted least squares (WLS) based system state estimator is used to produce the training data for the proposed forecaster. Traditionally, the artificial neural network (ANN) and support vector regression (SVR) are widely used in forecasting due to their nonlinear modeling capabilities. However, the ANN carries a heavy computation load and the best parameters for SVR are difficult to obtain. In this paper, the ELM, which overcomes these drawbacks, is used to forecast the future system states from the historical system states. Testing results show that the proposed approach is effective and accurate.
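    A minimal extreme learning machine regressor can be sketched as follows (a generic illustration assuming NumPy; the lag-window toy series stands in for the historical system states, which are not reproduced here). The hidden-layer weights are drawn at random and never trained; only the output weights are fit, in closed form, by least squares:

```python
import numpy as np

class ELMRegressor:
    """Minimal extreme learning machine: one hidden layer with fixed random
    weights; only the output weights are estimated, by least squares."""
    def __init__(self, n_hidden=100, seed=0):
        self.n_hidden = n_hidden
        self.rng = np.random.default_rng(seed)

    def fit(self, X, y):
        self.W = self.rng.normal(size=(X.shape[1], self.n_hidden))
        self.b = self.rng.normal(size=self.n_hidden)
        H = np.tanh(X @ self.W + self.b)               # hidden-layer activations
        self.beta, *_ = np.linalg.lstsq(H, y, rcond=None)
        return self

    def predict(self, X):
        return np.tanh(X @ self.W + self.b) @ self.beta

# toy usage: one-step-ahead forecast of a noisy sine from a lag window
t = np.linspace(0.0, 20.0, 600)
s = np.sin(t) + 0.05 * np.random.default_rng(1).normal(size=t.size)
lags = 8
X = np.stack([s[i:i + lags] for i in range(len(s) - lags)])
y = s[lags:]
model = ELMRegressor(n_hidden=60).fit(X[:500], y[:500])
print(np.mean((model.predict(X[500:]) - y[500:]) ** 2))   # held-out mean squared error
```

    Because the only fitted parameters come from a single linear least-squares solve, training is typically much cheaper than iteratively training an ANN, which is the drawback the abstract points to.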

  12. Extreme Ultraviolet Process Optimization for Contact Layer of 14 nm Node Logic and 16 nm Half Pitch Memory Devices

    Science.gov (United States)

    Tseng, Shih-En; Chen, Alek

    2012-06-01

    Extreme ultraviolet (EUV) lithography is considered the most promising single exposure technology at the 27 nm half-pitch node and beyond. The imaging performance of ASML TWINSCAN NXE:3100 has been demonstrated to be able to resolve 26 nm Flash gate layer and 16 nm static random access memory (SRAM) metal layer with a 0.25 numerical aperture (NA) and conventional illumination. Targeting for high volume manufacturing, ASML TWINSCAN NXE:3300B, featuring a 0.33 NA lens with off-axis illumination, will generate a higher contrast aerial image due to improved diffraction order collection efficiency and is expected to reduce target dose via mask biasing. This work performed a simulation to determine how EUV high NA imaging benefits the mask rule check trade-offs required to achieve viable lithography solutions in two device application scenarios: a 14 nm node 6T-SRAM contact layer and a 16 nm half-pitch NAND Flash staggered contact layer. In each application, the three-dimensional mask effects versus Kirchhoff mask were also investigated.

  13. Optimal control of quantum dissipative dynamics: Analytic solution for cooling the three-level Λ system

    International Nuclear Information System (INIS)

    Sklarz, Shlomo E.; Tannor, David J.; Khaneja, Navin

    2004-01-01

    We study the problem of optimal control of dissipative quantum dynamics. Although under most circumstances dissipation leads to an increase in entropy (or a decrease in purity) of the system, there is an important class of problems for which dissipation with external control can decrease the entropy (or increase the purity) of the system. An important example is laser cooling. In such systems, there is an interplay of the Hamiltonian part of the dynamics, which is controllable, and the dissipative part of the dynamics, which is uncontrollable. The strategy is to control the Hamiltonian portion of the evolution in such a way that the dissipation causes the purity of the system to increase rather than decrease. The goal of this paper is to find the strategy that leads to maximal purity at the final time. Under the assumption that Hamiltonian control is complete and arbitrarily fast, we provide a general framework by which to calculate optimal cooling strategies. These assumptions lead to a great simplification, in which the control problem can be reformulated in terms of the spectrum of eigenvalues of ρ, rather than ρ itself. By combining this formulation with the Hamilton-Jacobi-Bellman theorem we are able to obtain an equation for the globally optimal cooling strategy in terms of the spectrum of the density matrix. For the three-level Λ system, we provide a complete analytic solution for the optimal cooling strategy. For this system it is found that the optimal strategy does not exploit system coherences and is a 'greedy' strategy, in which the purity is increased maximally at each instant

  14. Use of response surface methodology for optimization of fluoride adsorption in an aqueous solution by Brushite

    Directory of Open Access Journals (Sweden)

    M. Mourabet

    2017-05-01

    Full Text Available In the present study, response surface methodology (RSM) was employed for the removal of fluoride by Brushite and the process parameters were optimized. Four important process parameters, including initial fluoride concentration (40–50 mg/L), pH (4–11), temperature (10–40 °C) and Brushite dose (0.05–0.15 g), were optimized to obtain the best response of fluoride removal using the statistical Box–Behnken design. The experimental data obtained were analyzed by analysis of variance (ANOVA) and fitted to a second-order polynomial equation using multiple regression analysis. Numerical optimization applying a desirability function was used to identify the optimum conditions for maximum removal of fluoride. The optimum conditions were found to be initial concentration = 49.06 mg/L, initial solution pH = 5.36, adsorbent dose = 0.15 g and temperature = 31.96 °C. A confirmatory experiment was performed to evaluate the accuracy of the optimization procedure, and a maximum fluoride removal of 88.78% was achieved under the optimized conditions. Several error analysis equations were used to measure the goodness-of-fit. Kinetic studies showed that the adsorption followed a pseudo-second-order reaction. The equilibrium data were analyzed using the Langmuir, Freundlich, and Sips isotherm models at different temperatures. The Langmuir model was found to describe the data best. The adsorption capacity from the Langmuir isotherm (QL) was found to be 29.212, 35.952 and 36.260 mg/g at 298, 303, and 313 K, respectively.
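    For reference, the model forms named above are the standard ones (stated in their usual notation, not taken from the paper itself): the second-order response-surface polynomial fitted to the Box–Behnken data, the Langmuir isotherm and the pseudo-second-order kinetic law,

\[
y = \beta_0 + \sum_i \beta_i x_i + \sum_i \beta_{ii} x_i^2 + \sum_{i<j} \beta_{ij} x_i x_j, \qquad
q_e = \frac{Q_L K_L C_e}{1 + K_L C_e}, \qquad
\frac{t}{q_t} = \frac{1}{k_2 q_e^{2}} + \frac{t}{q_e},
\]

    where the \(x_i\) are the coded process factors, \(C_e\) is the equilibrium fluoride concentration, \(q_e\) and \(q_t\) are the amounts adsorbed at equilibrium and at time \(t\), \(Q_L\) is the Langmuir capacity reported above, \(K_L\) the Langmuir constant, and \(k_2\) the pseudo-second-order rate constant.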

  15. Homogenized blocked arcs for multicriteria optimization of radiotherapy: Analytical and numerical solutions

    International Nuclear Information System (INIS)

    Fenwick, John D.; Pardo-Montero, Juan

    2010-01-01

    Purpose: Homogenized blocked arcs are intuitively appealing as basis functions for multicriteria optimization of rotational radiotherapy. Such arcs avoid an organ-at-risk (OAR), spread dose out well over the rest-of-body (ROB), and deliver homogeneous doses to a planning target volume (PTV) using intensity modulated fluence profiles, obtainable either from closed-form solutions or iterative numerical calculations. Here, the analytic and iterative arcs are compared. Methods: Dose-distributions have been calculated for nondivergent beams, both including and excluding scatter, beam penumbra, and attenuation effects, which are left out of the derivation of the analytic arcs. The most straightforward analytic arc is created by truncating the well-known Brahme, Roos, and Lax (BRL) solution, cutting its uniform dose region down from an annulus to a smaller nonconcave region lying beyond the OAR. However, the truncation leaves behind high dose hot-spots immediately on either side of the OAR, generated by very high BRL fluence levels just beyond the OAR. These hot-spots can be eliminated using alternative analytical solutions "C" and "L", which, respectively, deliver constant and linearly rising fluences in the gap region between the OAR and PTV (before truncation). Results: Measured in terms of PTV dose homogeneity, ROB dose-spread, and OAR avoidance, C solutions generate better arc dose-distributions than L when scatter, penumbra, and attenuation are left out of the dose modeling. Including these factors, L becomes the best analytical solution. However, the iterative approach generates better dose-distributions than any of the analytical solutions because it can account for and compensate for penumbra and scatter effects. Using the analytical solutions as starting points for the iterative methodology, dose-distributions almost as good as those obtained using the conventional iterative approach can be calculated very rapidly. Conclusions: The iterative methodology is

  16. Methanol Synthesis: Optimal Solution for a Better Efficiency of the Process

    Directory of Open Access Journals (Sweden)

    Grazia Leonzio

    2018-02-01

    Full Text Available In this research, an ANOVA analysis and a response surface methodology are applied to analyze the equilibrium of the methanol reaction from pure carbon dioxide and hydrogen. In the ANOVA analysis, carbon monoxide composition in the feed, reaction temperature, recycle and water removal through a zeolite membrane are the analyzed factors. Carbon conversion, methanol yield, methanol productivity and methanol selectivity are the analyzed responses. Results show that the main factors have the same effect on the responses and no significant interaction is present. Carbon monoxide composition and water removal have a positive effect, while temperature and recycle have a negative effect on the system. From the central composite design, an optimal solution is found in order to overcome the thermodynamic limit: the reactor works with a membrane at lower temperature, with a carbon monoxide composition in the feed equal to 10 mol % and without recycle. In these conditions, carbon conversion, methanol yield, methanol selectivity, and methanol production are, respectively, higher than 60%, higher than 60%, between 90% and 95%, and higher than 0.15 mol/h when considering a feed flow rate of 1 mol/h. A comparison with a traditional reactor is also developed: the membrane reactor ensures a carbon conversion higher by 29% and a methanol yield higher by 34%. Future research should include an economic analysis of the optimal solution.

  17. Optimized bacterial expression and purification of the c-Src catalytic domain for solution NMR studies

    International Nuclear Information System (INIS)

    Piserchio, Andrea; Ghose, Ranajeet; Cowburn, David

    2009-01-01

    Progression of a host of human cancers is associated with elevated levels of expression and catalytic activity of the Src family of tyrosine kinases (SFKs), making them key therapeutic targets. Even with the availability of multiple crystal structures of active and inactive forms of the SFK catalytic domain (CD), a complete understanding of its catalytic regulation is unavailable. Also unavailable is atomic or near-atomic resolution information about their interactions, often weak or transient, with regulating phosphatases and downstream targets. Solution NMR, the biophysical method best suited to tackle this problem, was previously hindered by difficulties in bacterial expression and purification of sufficient quantities of soluble, properly folded protein for economically viable labeling with NMR-active isotopes. Through a choice of optimal constructs, co-expression with chaperones and optimization of the purification protocol, we have achieved the ability to bacterially produce large quantities of the isotopically labeled CD of c-Src, the prototypical SFK, and of its activating Tyr-phosphorylated form. All constructs produce excellent spectra, allowing solution NMR studies of this family in an efficient manner.

  18. Stochastic network interdiction optimization via capacitated network reliability modeling and probabilistic solution discovery

    International Nuclear Information System (INIS)

    Ramirez-Marquez, Jose Emmanuel; Rocco S, Claudio M.

    2009-01-01

    This paper introduces an evolutionary optimization approach that can be readily applied to solve stochastic network interdiction problems (SNIP). The network interdiction problem solved considers the minimization of the cost associated with an interdiction strategy such that the maximum flow that can be transmitted between a source node and a sink node for a fixed network design is greater than or equal to a given reliability requirement. Furthermore, the model assumes that the nominal capacity of each network link and the cost associated with their interdiction can change from link to link and that such interdiction has a probability of being successful. This version of the SNIP is for the first time modeled as a capacitated network reliability problem allowing for the implementation of computation and solution techniques previously unavailable. The solution process is based on an evolutionary algorithm that implements: (1) Monte-Carlo simulation, to generate potential network interdiction strategies, (2) capacitated network reliability techniques to analyze strategies' source-sink flow reliability and, (3) an evolutionary optimization technique to define, in probabilistic terms, how likely a link is to appear in the final interdiction strategy. Examples for different sizes of networks are used throughout the paper to illustrate the approach

  19. Optimization of the indirect neutron activation technique for the determination of boron in aqueous solutions

    International Nuclear Information System (INIS)

    Luz, L.C.Q.P. da.

    1984-01-01

    The purpose of this work was the development of an instrumental method for the optimization of the indirect neutron activation analysis of boron in aqueous solutions. The optimization took into account the analytical parameters under laboratory conditions: activation carried out with a 241 Am/Be neutron source and detection of the activity induced in vanadium with two NaI(Tl) gamma spectrometers. A calibration curve was thus obtained for a concentration range of 0 to 5000 ppm B. Later on, experimental models were built in order to study the feasibility of automation. The analysis of boron was finally performed, under the previously established conditions, with an automated system comprising the operations of transport, irradiation and counting. An improvement in the quality of the analysis was observed, with boron concentrations as low as 5 ppm being determined with a precision level better than 0.4%. The experimental model features all basic design elements for an automated device for the analysis of boron in aqueous solutions wherever this is required, as in the operation of nuclear reactors. (Author) [pt

  20. A New Methodology to Select the Preferred Solutions from the Pareto-optimal Set: Application to Polymer Extrusion

    International Nuclear Information System (INIS)

    Ferreira, Jose C.; Gaspar-Cunha, Antonio; Fonseca, Carlos M.

    2007-01-01

    Most of the real world optimization problems involve multiple, usually conflicting, optimization criteria. Generating Pareto optimal solutions plays an important role in multi-objective optimization, and the problem is considered to be solved when the Pareto optimal set is found, i.e., the set of non-dominated solutions. Multi-Objective Evolutionary Algorithms based on the principle of Pareto optimality are designed to produce the complete set of non-dominated solutions. However, this is not allays enough since the aim is not only to know the Pareto set but, also, to obtain one solution from this Pareto set. Thus, the definition of a methodology able to select a single solution from the set of non-dominated solutions (or a region of the Pareto frontier), and taking into account the preferences of a Decision Maker (DM), is necessary. A different method, based on a weighted stress function, is proposed. It is able to integrate the user's preferences in order to find the best region of the Pareto frontier accordingly with these preferences. This method was tested on some benchmark test problems, with two and three criteria, and on a polymer extrusion problem. This methodology is able to select efficiently the best Pareto-frontier region for the specified relative importance of the criteria

  1. Calculation of Pareto-optimal solutions to multiple-objective problems using threshold-of-acceptability constraints

    Science.gov (United States)

    Giesy, D. P.

    1978-01-01

    A technique is presented for the calculation of Pareto-optimal solutions to a multiple-objective constrained optimization problem by solving a series of single-objective problems. Threshold-of-acceptability constraints are placed on the objective functions at each stage both to limit the area of search and to mathematically guarantee convergence to a Pareto optimum.
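    In the same spirit, a sketch of the scheme (a generic illustration, not Giesy's exact formulation: the two quadratic objectives, the threshold sweep and the SciPy solver choice are made-up assumptions), where each stage minimizes one objective while a threshold-of-acceptability constraint caps the other, so every successful solve lands on a Pareto-optimal trade-off:

```python
import numpy as np
from scipy.optimize import minimize

# two illustrative, conflicting objectives (placeholders, not from the paper)
f1 = lambda x: (x[0] - 1.0) ** 2 + x[1] ** 2
f2 = lambda x: x[0] ** 2 + (x[1] - 1.0) ** 2

front = []
x0 = np.array([0.5, 0.5])
for t in np.linspace(0.05, 0.95, 7):
    # single-objective stage: minimize f1 while f2 must stay acceptable (<= t)
    cons = [{"type": "ineq", "fun": lambda x, t=t: t - f2(x)}]
    res = minimize(f1, x0, constraints=cons)
    if res.success:
        front.append((round(f1(res.x), 3), round(f2(res.x), 3)))
        x0 = res.x                      # warm-start the next stage
print(front)                            # sampled Pareto-optimal trade-offs
```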

  2. Enhancement of conversion efficiency of extreme ultraviolet radiation from a liquid aqueous solution microjet target by use of dual laser pulses

    Science.gov (United States)

    Higashiguchi, Takeshi; Dojyo, Naoto; Hamada, Masaya; Kawasaki, Keita; Sasaki, Wataru; Kubodera, Shoichi

    2006-03-01

    We demonstrated a debris-free, efficient laser-produced plasma extreme ultraviolet (EUV) source by use of a regenerative liquid microjet target containing tin-dioxide (SnO2) nano-particles. By using a low SnO2 concentration (6%) solution and dual laser pulses for the plasma control, we observed an EUV conversion efficiency of 1.2% with undetectable debris.

  3. Efficient solution to the stagnation problem of the particle swarm optimization algorithm for phase diversity.

    Science.gov (United States)

    Qi, Xin; Ju, Guohao; Xu, Shuyan

    2018-04-10

    The phase diversity (PD) technique needs optimization algorithms to minimize the error metric and find the global minimum. Particle swarm optimization (PSO) is very suitable for PD due to its simple structure, fast convergence, and global searching ability. However, the traditional PSO algorithm for PD still suffers from the stagnation problem (premature convergence), which can result in a wrong solution. In this paper, the stagnation problem of the traditional PSO algorithm for PD is illustrated first. Then, an explicit strategy is proposed to solve this problem, based on an in-depth understanding of the inherent optimization mechanism of the PSO algorithm. Specifically, a criterion is proposed to detect premature convergence; then a redistributing mechanism is proposed to prevent premature convergence. To improve the efficiency of this redistributing mechanism, randomized Halton sequences are further introduced to ensure the uniform distribution and randomness of the redistributed particles in the search space. Simulation results show that this strategy can effectively solve the stagnation problem of the PSO algorithm for PD, especially for large-scale and high-dimension wavefront sensing and noisy conditions. This work is further verified by an experiment. This work can improve the robustness and performance of PD wavefront sensing.
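    A compact sketch of the redistribution step under stated assumptions (minimization, a particle array sorted best-first, SciPy's quasi-Monte-Carlo Halton sampler; the function and parameter names are placeholders rather than the paper's code):

```python
import numpy as np
from scipy.stats import qmc   # Halton sequences (SciPy >= 1.7)

def redistribute_if_stagnant(positions, best_history, bounds,
                             window=20, tol=1e-8, keep=5):
    """If the global best has not improved over `window` iterations, keep the
    `keep` best particles (positions assumed sorted best-first) and respread
    the rest over the search space with a scrambled Halton sequence."""
    if len(best_history) < window or best_history[-window] - best_history[-1] > tol:
        return positions                                 # still improving: no action
    n, dim = positions.shape
    lower, upper = bounds
    halton = qmc.Halton(d=dim, scramble=True)
    fresh = qmc.scale(halton.random(n - keep), lower, upper)
    return np.vstack([positions[:keep], fresh])
```

    The low-discrepancy Halton points keep the re-seeded particles spread uniformly over the bounds, which is the uniformity-plus-randomness property the abstract attributes to the randomized Halton sequences.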

  4. An n-material thresholding method for improving integerness of solutions in topology optimization

    International Nuclear Information System (INIS)

    Watts, Seth; Tortorelli, Daniel A.

    2016-01-01

    It is common in solving topology optimization problems to replace an integer-valued characteristic function design field with the material volume fraction field, a real-valued approximation of the design field that permits "fictitious" mixtures of materials during intermediate iterations in the optimization process. This is reasonable so long as one can interpolate properties for such materials and so long as the final design is integer valued. For this purpose, we present a method for smoothly thresholding the volume fractions of an arbitrary number of material phases which specify the design. This method is trivial for two-material design problems, for example, the canonical topology design problem of specifying the presence or absence of a single material within a domain, but it becomes more complex when three or more materials are used, as often occurs in material design problems. We take advantage of the similarity in properties between the volume fractions and the barycentric coordinates on a simplex to derive a thresholding method which is applicable to an arbitrary number of materials. As we show in a sensitivity analysis, this method has smooth derivatives, allowing it to be used in gradient-based optimization algorithms. Finally, we present results which show synergistic effects when the method is used with the Solid Isotropic Material with Penalty and Rational Approximation of Material Properties material interpolation functions, popular methods of ensuring integerness of solutions.

  5. Optimal wind-hydro solution for the Marmara region of Turkey to meet electricity demand

    International Nuclear Information System (INIS)

    Dursun, Bahtiyar; Alboyaci, Bora; Gokcol, Cihan

    2011-01-01

    Wind power technology is now a reliable electricity production system. It presents an economically attractive solution for the continuously increasing energy demand of the Marmara region located in Turkey. However, the stochastic behavior of wind speed in the Marmara region can lead to significant disharmony between wind energy production and electricity demand. Therefore, to overcome wind's variable nature, a more reliable solution would be to integrate hydropower with wind energy. In this study, a methodology to estimate an optimal wind-hydro solution is developed and it is subsequently applied to six typical site cases in the Marmara region in order to define the most beneficial configuration of the wind-hydro system. All numerical calculations are based on long-term wind speed measurements, electrical load demand and operational characteristics of the system components. -- Research highlights: → This study is the first application of a wind-hydro pumped storage system in Turkey. → The methodology developed in this study is applied to six sites in the Marmara region of Turkey. → A wind-hydro pumped storage system is proposed to meet the electric energy demand of the Marmara region.

  6. Design of Distributed Controllers Seeking Optimal Power Flow Solutions Under Communication Constraints

    Energy Technology Data Exchange (ETDEWEB)

    Dall' Anese, Emiliano; Simonetto, Andrea; Dhople, Sairaj

    2016-12-29

    This paper focuses on power distribution networks featuring inverter-interfaced distributed energy resources (DERs), and develops feedback controllers that drive the DER output powers to solutions of time-varying AC optimal power flow (OPF) problems. Control synthesis is grounded on primal-dual-type methods for regularized Lagrangian functions, as well as linear approximations of the AC power-flow equations. Convergence and OPF-solution-tracking capabilities are established while acknowledging: i) communication-packet losses, and ii) partial updates of control signals. The latter case is particularly relevant since it enables asynchronous operation of the controllers where DER setpoints are updated at a fast time scale based on local voltage measurements, and information on the network state is utilized if and when available, based on communication constraints. As an application, the paper considers distribution systems with high photovoltaic integration, and demonstrates that the proposed framework provides fast voltage-regulation capabilities, while enabling the near real-time pursuit of solutions of AC OPF problems.
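    Schematically (generic notation, not the exact controller of the paper), such feedback controllers iterate projected primal-dual gradient steps on a regularized Lagrangian of the linearized OPF problem:

\[
x^{k+1} = \operatorname{proj}_{\mathcal{X}}\!\bigl[x^{k} - \alpha\,\nabla_{x}\,\mathcal{L}_{\nu,\epsilon}(x^{k},\lambda^{k})\bigr], \qquad
\lambda^{k+1} = \operatorname{proj}_{\mathbb{R}^{m}_{+}}\!\bigl[\lambda^{k} + \alpha\,\nabla_{\lambda}\,\mathcal{L}_{\nu,\epsilon}(x^{k},\lambda^{k})\bigr],
\]

    where \(x\) collects the DER setpoints, \(\lambda\) the multipliers of the (linearized) network constraints, and \(\mathcal{L}_{\nu,\epsilon}\) augments the ordinary Lagrangian with strongly convex (in \(x\)) and strongly concave (in \(\lambda\)) regularization terms; voltage measurements can replace the modeled constraint values in the dual step, which is what turns the iteration into a feedback controller.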

  7. Optimal Thermal Unit Commitment Solution integrating Renewable Energy with Generator Outage

    Directory of Open Access Journals (Sweden)

    S. Sivasakthi

    2017-06-01

    Full Text Available With the increasing concern over global climate change, the promotion of renewable energy sources, primarily wind generation, is a welcome move to reduce pollutant emissions from conventional power plants. Integration of wind power generation with the existing power network is an emerging research field. This paper presents a meta-heuristic algorithm based approach to determine a feasible dispatch solution for a wind integrated thermal power system. The Unit Commitment (UC) process aims to identify the best feasible generation schedule of the committed units such that the overall generation cost is reduced, subject to a variety of constraints at each time interval. As the UC formulation involves many variables and system and operational constraints, identifying the best solution is still a research task. Nowadays it is essential to include power system reliability issues in the operation strategy. Generator failures and malfunctions are the prime influencing factors for reliability issues; hence they are considered in the UC formulation of the wind integrated thermal power system. The modern evolutionary algorithm known as the Grey Wolf Optimization (GWO) algorithm is applied to solve the intended UC problem. The potential of the GWO algorithm is validated on standard test systems. Besides, the ramp rate limits are also incorporated in the UC formulation. The simulation results reveal that the GWO algorithm is capable of obtaining economical solutions of good quality.
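    For reference, the core GWO update (as commonly published by Mirjalili et al.; the UC-specific encoding, ramp-rate handling and constraint repair of this paper are not reproduced here) pulls each candidate toward the three best solutions found so far, the alpha, beta and delta wolves:

\[
\vec{X}(t+1) = \tfrac{1}{3}\bigl(\vec{X}_1 + \vec{X}_2 + \vec{X}_3\bigr), \qquad
\vec{X}_i = \vec{X}_{\alpha,\beta,\delta} - \vec{A}_i \cdot \bigl|\vec{C}_i \cdot \vec{X}_{\alpha,\beta,\delta} - \vec{X}(t)\bigr|,
\]

    with \(\vec{A} = 2a\,\vec{r}_1 - a\), \(\vec{C} = 2\,\vec{r}_2\), random vectors \(\vec{r}_1,\vec{r}_2 \in [0,1]^n\), and the coefficient \(a\) decreasing linearly from 2 to 0 over the iterations so that exploration gradually gives way to exploitation.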

  8. Design of Distributed Controllers Seeking Optimal Power Flow Solutions under Communication Constraints: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Dall' Anese, Emiliano; Simonetto, Andrea; Dhople, Sairaj

    2016-12-01

    This paper focuses on power distribution networks featuring inverter-interfaced distributed energy resources (DERs), and develops feedback controllers that drive the DER output powers to solutions of time-varying AC optimal power flow (OPF) problems. Control synthesis is grounded on primal-dual-type methods for regularized Lagrangian functions, as well as linear approximations of the AC power-flow equations. Convergence and OPF-solution-tracking capabilities are established while acknowledging: i) communication-packet losses, and ii) partial updates of control signals. The latter case is particularly relevant since it enables asynchronous operation of the controllers where DER setpoints are updated at a fast time scale based on local voltage measurements, and information on the network state is utilized if and when available, based on communication constraints. As an application, the paper considers distribution systems with high photovoltaic integration, and demonstrates that the proposed framework provides fast voltage-regulation capabilities, while enabling the near real-time pursuit of solutions of AC OPF problems.

  9. Impact of Optimized Land Surface Parameters on the Land-Atmosphere Coupling in WRF Simulations of Dry and Wet Extremes

    Science.gov (United States)

    Kumar, S.; Santanello, J. A.; Peters-Lidard, C. D.; Harrison, K.

    2011-12-01

    Land-atmosphere (L-A) interactions play a critical role in determining the diurnal evolution of both planetary boundary layer (PBL) and land surface temperature and moisture budgets, as well as controlling feedbacks with clouds and precipitation that lead to the persistence of dry and wet regimes. Recent efforts to quantify the strength of L-A coupling in prediction models have produced diagnostics that integrate across both the land and PBL components of the system. In this study, we examine the impact of improved specification of land surface states, anomalies, and fluxes on coupled WRF forecasts during the summers of extreme dry (2006) and wet (2007) conditions in the U.S. Southern Great Plains. The improved land initialization and surface flux parameterizations are obtained through the use of a new optimization and uncertainty module in NASA's Land Information System (LIS-OPT), whereby parameter sets are calibrated in the Noah land surface model and classified according to the land cover and soil type mapping of the observations and the full domain. The impact of the calibrated parameters on a) the spinup of land surface states used as initial conditions, and b) the heat and moisture fluxes of the coupled (LIS-WRF) simulations is then assessed in terms of ambient weather, PBL budgets, and precipitation, along with L-A coupling diagnostics. In addition, the sensitivity of this approach to the period of calibration (dry, wet, normal) is investigated. Finally, tradeoffs between computational tractability and scientific validity (e.g., relating to the representation of the spatial dependence of parameters) and the feasibility of calibrating to multiple observational datasets are also discussed.

  10. Moment-tensor solutions estimated using optimal filter theory: Global seismicity, 2001

    Science.gov (United States)

    Sipkin, S.A.; Bufe, C.G.; Zirbes, M.D.

    2003-01-01

    This paper is the 12th in a series published yearly containing moment-tensor solutions computed at the US Geological Survey using an algorithm based on the theory of optimal filter design (Sipkin, 1982 and Sipkin, 1986b). An inversion has been attempted for all earthquakes with a magnitude, mb or MS, of 5.5 or greater. Previous listings include solutions for earthquakes that occurred from 1981 to 2000 (Sipkin, 1986b; Sipkin and Needham, 1989, Sipkin and Needham, 1991, Sipkin and Needham, 1992, Sipkin and Needham, 1993, Sipkin and Needham, 1994a and Sipkin and Needham, 1994b; Sipkin and Zirbes, 1996 and Sipkin and Zirbes, 1997; Sipkin et al., 1998, Sipkin et al., 1999, Sipkin et al., 2000a, Sipkin et al., 2000b and Sipkin et al., 2002). The entire USGS moment-tensor catalog can be obtained via anonymous FTP at ftp://ghtftp.cr.usgs.gov. After logging on, change directory to “momten”. This directory contains two compressed ASCII files that contain the finalized solutions, “mt.lis.Z” and “fmech.lis.Z”. “mt.lis.Z” contains the elements of the moment tensors along with detailed event information; “fmech.lis.Z” contains the decompositions into the principal axes and best double-couples. The fast moment-tensor solutions for more recent events that have not yet been finalized and added to the catalog are gathered by month in the files “jan01.lis.Z”, etc. “fmech.doc.Z” describes the various fields.

  11. Optimizing nanodiscs and bicelles for solution NMR studies of two β-barrel membrane proteins

    International Nuclear Information System (INIS)

    Kucharska, Iga; Edrington, Thomas C.; Liang, Binyong; Tamm, Lukas K.

    2015-01-01

    Solution NMR spectroscopy has become a robust method to determine structures and explore the dynamics of integral membrane proteins. The vast majority of previous studies on membrane proteins by solution NMR have been conducted in lipid micelles. Contrary to the lipids that form a lipid bilayer in biological membranes, micellar lipids typically contain only a single hydrocarbon chain or two chains that are too short to form a bilayer. Therefore, there is a need to explore alternative, more bilayer-like media to mimic the natural environment of membrane proteins. Lipid bicelles and lipid nanodiscs have emerged as two alternative membrane mimetics that are compatible with solution NMR spectroscopy. Here, we have conducted a comprehensive comparison of the physical and spectroscopic behavior of two outer membrane proteins from Pseudomonas aeruginosa, OprG and OprH, in lipid micelles, bicelles, and nanodiscs of five different sizes. Bicelles stabilized with a fraction of negatively charged lipids yielded spectra of almost comparable quality to those in the best micellar solutions, and the secondary structures were found to be almost indistinguishable in the two environments. Of the five nanodiscs tested, nanodiscs assembled from MSP1D1ΔH5 performed the best with both proteins in terms of sample stability and spectral resolution. Even in these optimal nanodiscs, some broad signals from the membrane-embedded barrel were severely overlapped with sharp signals from the flexible loops, making their assignments difficult. A mutant OprH that had two of the flexible loops truncated yielded very promising spectra for further structural and dynamical analysis in MSP1D1ΔH5 nanodiscs.

  12. Trajectory planning of mobile robots using indirect solution of optimal control method in generalized point-to-point task

    Science.gov (United States)

    Nazemizadeh, M.; Rahimi, H. N.; Amini Khoiy, K.

    2012-03-01

    This paper presents an optimal control strategy for optimal trajectory planning of mobile robots that considers the nonlinear dynamic model and nonholonomic constraints of the system. The nonholonomic constraints of the system are introduced by a nonintegrable set of differential equations which represent a kinematic restriction on the motion. Lagrange's principle is employed to derive the nonlinear equations of the system. Then, the optimal path planning of the mobile robot is formulated as an optimal control problem. To set up the problem, the nonlinear equations of the system are taken as constraints, and a minimum-energy objective function is defined. To solve the problem, an indirect solution of the optimal control method is employed, and the conditions of optimality are derived as a set of coupled nonlinear differential equations. The optimality equations are solved numerically, and various simulations are performed for a nonholonomic mobile robot to illustrate the effectiveness of the proposed method.
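    In generic form (standard optimal-control notation, not the robot-specific equations of the paper), the indirect route applies the first-order necessary conditions: with state \(x\), control \(u\), costate \(\lambda\), dynamics \(\dot{x} = f(x,u,t)\) and running cost \(L(x,u,t)\), the Hamiltonian and optimality conditions read

\[
H = L(x,u,t) + \lambda^{\top} f(x,u,t), \qquad
\dot{x} = \frac{\partial H}{\partial \lambda}, \qquad
\dot{\lambda} = -\frac{\partial H}{\partial x}, \qquad
\frac{\partial H}{\partial u} = 0,
\]

    which, together with the boundary conditions at the initial and final configurations, form the coupled two-point boundary-value problem that is then solved numerically.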

  13. Optimal solutions for the evolution of a social obesity epidemic model

    Science.gov (United States)

    Sikander, Waseem; Khan, Umar; Mohyud-Din, Syed Tauseef

    2017-06-01

    In this work, a novel modification of the traditional homotopy perturbation method (HPM) is proposed by embedding an auxiliary parameter in the boundary condition. The scheme is used to carry out a mathematical evaluation of the social obesity epidemic model. The incidence of excess weight and obesity in the adult population and the prediction of its behavior in the coming years are analyzed by using the modified algorithm. The proposed method increases the convergence of the approximate analytical solution over the domain of the problem. Furthermore, a convenient way of choosing an optimal value of the auxiliary parameter by minimizing the total residual error is considered. The graphical comparison of the obtained results with the standard HPM explicitly reveals the accuracy and efficiency of the developed scheme.

  14. Normalization in Unsupervised Segmentation Parameter Optimization: A Solution Based on Local Regression Trend Analysis

    Directory of Open Access Journals (Sweden)

    Stefanos Georganos

    2018-02-01

    Full Text Available In object-based image analysis (OBIA), the appropriate parametrization of segmentation algorithms is crucial for obtaining satisfactory image classification results. One of the ways this can be done is by unsupervised segmentation parameter optimization (USPO). A popular USPO method does this through the optimization of a “global score” (GS), which minimizes intrasegment heterogeneity and maximizes intersegment heterogeneity. However, the calculated GS values are sensitive to the minimum and maximum ranges of the candidate segmentations. Previous research proposed the use of fixed minimum/maximum threshold values for the intrasegment/intersegment heterogeneity measures to deal with the sensitivity of user-defined ranges, but the performance of this approach has not been investigated in detail. In the context of a remote sensing very-high-resolution urban application, we show the limitations of the fixed threshold approach, both in a theoretical and an applied manner, and instead propose a novel solution to identify the range of candidate segmentations using local regression trend analysis. We found that the proposed approach showed significant improvements over the use of fixed minimum/maximum values, is less subjective than user-defined threshold values and, thus, can be of merit for a fully automated procedure and big data applications.
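    For context, the global score referred to above is commonly built (following Espindola et al.; stated generically here, not as implemented in this paper) by min-max normalizing an area-weighted intrasegment variance \(wVar\) and an intersegment spatial autocorrelation measure (global Moran's I, \(MI\)) over the set of candidate segmentations and summing the two:

\[
GS = F(wVar) + F(MI), \qquad F(x) = \frac{x_{\max} - x}{x_{\max} - x_{\min}},
\]

    so that lower raw heterogeneity values map to higher scores and the candidate maximizing \(GS\) is selected. Because \(x_{\min}\) and \(x_{\max}\) are taken over the candidate set itself, the score, and hence the selected segmentation, shifts with the user-defined parameter range, which is exactly the sensitivity the local regression trend analysis is designed to remove.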

  15. Hybrid solution and pump-storage optimization in water supply system efficiency: A case study

    International Nuclear Information System (INIS)

    Vieira, F.; Ramos, H.M.

    2008-01-01

    Environmental targets and energy saving have become some of the world's main concerns over the last years, and their importance will only increase in the near future. The world population growth rate is a major factor contributing to the increase in global pollution and in energy and water consumption. In 2005, the world population was approximately 6.5 billion and this number is expected to reach 9 billion by 2050 [United Nations, 2008. (www.un.org), accessed on July]. Water supply systems use energy for pumping water, so new strategies must be developed and implemented in order to reduce this consumption. In addition, if there is excess hydraulic energy in a water system, some type of water power generation can be implemented. This paper presents an optimization model that determines the best hourly operation for 1 day, according to the electricity tariff, for a pumped storage system with water consumption and inlet discharge. Wind turbines are introduced into the system. The rules obtained as output of the optimization process are subsequently introduced in a hydraulic simulator in order to verify the system behaviour. A comparison with the normal water supply operating mode is made, and the energy cost savings with this hybrid solution are calculated.

  16. Optimizing the recovery of copper from electroplating rinse bath solution by hollow fiber membrane.

    Science.gov (United States)

    Oskay, Kürşad Oğuz; Kul, Mehmet

    2015-01-01

    This study aimed to recover and remove copper from an industrial model wastewater solution by non-dispersive solvent extraction (NDSX). Two mathematical models were developed to simulate the performance of an integrated extraction-stripping process based on the use of hollow fiber contactors, using the response surface method. The models allow one to predict the time-dependent efficiencies of the two phases involved in the individual extraction or stripping processes. The optimal recovery efficiency parameters were determined as 227 g/L H2SO4 concentration, 1.22 feed/strip ratio, 450 mL/min flow rate (115.9 cm/min flow velocity) and 15 volume % LIX 84-I concentration in 270 min by central composite design (CCD). At these optimum conditions, the experimental value of the recovery efficiency was 95.88%, which was in close agreement with the 97.75% efficiency value predicted by the model. At the end of the process, almost all the copper in the model wastewater solution was removed and recovered as CuSO4·5H2O salt, which can be reused in the copper electroplating industry.

  17. Optimization of strontium adsorption from aqueous solution using (Mn-Zr) oxide-PAN composite spheres

    International Nuclear Information System (INIS)

    Inan, S.; Altas, Y.

    2009-01-01

    Processes based on adsorption and ion exchange play a major role in the pre-concentration and separation of toxic, long-lived radionuclides from liquid waste. In nuclear waste management, the removal of long-lived, radiotoxic isotopes such as strontium from radioactive waste reduces storage problems and facilitates the disposal of the waste. Depending on the waste type, a variety of adsorbents and/or ion exchangers are used. Due to the amorphous structure of hydrous oxides and their mixtures, they do not have reproducible properties. Besides, the obtained powders are very fine particles that can cause operational problems such as pressure drop and filtration difficulties. Therefore they are not suitable for column applications. These reasons have recently expedited studies on the preparation of organic-inorganic composite adsorbent beads for industrial applications. PAN, as a stable and porous support for fine particles, enables the utilization of ion exchangers in large-scale column applications. The utilization of PAN as a support material with many inorganic ion exchangers was first achieved by Sebesta at the beginning of the 1990s. Later on, PAN-based composite ion exchangers were prepared and used for the removal of radionuclides and heavy metal ions from aqueous solutions and waste waters. In this study, spherical (Mn-Zr) oxide-PAN composites were prepared for the separation of strontium from aqueous solution over a wide pH range. The Sr²⁺ adsorption of the composite adsorbent was optimized using the 'Central Composite Design' experimental design model.

  18. Assessment of colour changes during storage of elderberry juice concentrate solutions using the optimization method.

    Science.gov (United States)

    Walkowiak-Tomczak, Dorota; Czapski, Janusz; Młynarczyk, Karolina

    2016-01-01

    Elderberries are a source of dietary supplements and bioactive compounds, such as anthocyanins. These dyes are used in food technology. The aim of the study was to assess the changes in colour parameters, anthocyanin contents and sensory attributes in solutions of elderberry juice concentrates during storage in a model system, and to determine the predictability of sensory attributes of colour in solutions based on regression equations using the response surface methodology. The experiment was carried out according to the 3-level factorial design for three factors. Independent variables included pH, storage time and temperature. Dependent variables were assumed to be the components and colour parameters in the CIE L*a*b* system, pigment contents and sensory attributes. Changes in colour components X, Y, Z and colour parameters L*, a*, b*, C* and h* were most dependent on pH values. Colour lightness L* and tone h* increased with an increase in experimental factors, while the share of the red colour a* and colour saturation C* decreased. The greatest effect on the anthocyanin concentration was recorded for storage time. Sensory attributes deteriorated during storage. The highest correlation coefficients were found between the value of colour tone h* and anthocyanin contents in relation to the assessment of the naturalness and desirability of colour. A high goodness-of-fit of the model to data and high values of R2 for regression equations were obtained for all responses. The response surface method facilitates optimization of experimental factor values in order to obtain a specific attribute of the product, but not in all cases of the experiment. Within the tested range of factors, it is possible to predict changes in anthocyanin content and the sensory attributes of elderberry juice concentrate solutions as a food dye, on the basis of the lack-of-fit test. The highest stability of dyes and colour of elderberry solutions was found in the samples at pH 3.0, which confirms

  19. A hydro-meteorological model chain to assess the influence of natural variability and impacts of climate change on extreme events and propose optimal water management

    Science.gov (United States)

    von Trentini, F.; Willkofer, F.; Wood, R. R.; Schmid, F. J.; Ludwig, R.

    2017-12-01

    The ClimEx project (Climate change and hydrological extreme events - risks and perspectives for water management in Bavaria and Québec) focuses on the effects of climate change on hydro-meteorological extreme events and their implications for water management in Bavaria and Québec. Therefore, a hydro-meteorological model chain is applied. It employs high performance computing capacity of the Leibniz Supercomputing Centre facility SuperMUC to dynamically downscale 50 members of the Global Circulation Model CanESM2 over European and Eastern North American domains using the Canadian Regional Climate Model (RCM) CRCM5. Over Europe, the unique single model ensemble is conjointly analyzed with the latest information provided through the CORDEX-initiative, to better assess the influence of natural climate variability and climatic change in the dynamics of extreme events. Furthermore, these 50 members of a single RCM will enhance extreme value statistics (extreme return periods) by exploiting the available 1500 model years for the reference period from 1981 to 2010. Hence, the RCM output is applied to drive the process based, fully distributed, and deterministic hydrological model WaSiM in high temporal (3h) and spatial (500m) resolution. WaSiM and the large ensemble are further used to derive a variety of hydro-meteorological patterns leading to severe flood events. A tool for virtual perfect prediction shall provide a combination of optimal lead time and management strategy to mitigate certain flood events following these patterns.

  20. The Primary Experiments of an Analysis of Pareto Solutions for Conceptual Design Optimization Problem of Hybrid Rocket Engine

    Science.gov (United States)

    Kudo, Fumiya; Yoshikawa, Tomohiro; Furuhashi, Takeshi

    Recently, the Multi-objective Genetic Algorithm, i.e. the application of Genetic Algorithms to multi-objective optimization problems, has attracted attention in the engineering design field. In this field, the analysis of design variables in the acquired Pareto solutions, which gives designers useful knowledge about the applied problem, is as important as the acquisition of advanced solutions. This paper proposes a new visualization method using Isomap which visualizes the geometric distances of solutions in the design variable space while considering their distances in the objective space. The proposed method enables a user to analyze the design variables of the acquired solutions considering their relationship in the objective space. This paper applies the proposed method to the conceptual design optimization problem of a hybrid rocket engine and studies the effectiveness of the proposed method.
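
    As a rough illustration of the visualization step, the snippet below embeds the design variables of a set of randomly generated stand-in Pareto solutions into two dimensions with scikit-learn's plain Isomap and colors the points by a proxy objective value. It does not implement the paper's specific weighting of design-space distances by objective-space distances; the data and parameter choices are placeholders.

```python
# Sketch: visualize Pareto-set design variables with a plain Isomap embedding.
import numpy as np
from sklearn.manifold import Isomap
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
X = rng.random((200, 8))                        # design variables of 200 stand-in solutions
F = X[:, :2] @ np.array([1.0, 0.5])             # a surrogate objective used only for coloring

embedding = Isomap(n_neighbors=10, n_components=2).fit_transform(X)

plt.scatter(embedding[:, 0], embedding[:, 1], c=F, cmap="viridis")
plt.colorbar(label="objective value (proxy)")
plt.xlabel("Isomap dim 1"); plt.ylabel("Isomap dim 2")
plt.title("Design-variable space embedded with Isomap")
plt.show()
```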

  1. Shape optimization in 2D contact problems with given friction and a solution-dependent coefficient of friction

    Czech Academy of Sciences Publication Activity Database

    Haslinger, J.; Outrata, Jiří; Pathó, R.

    2012-01-01

    Roč. 20, č. 1 (2012), s. 31-59 ISSN 1877-0533 R&D Projects: GA AV ČR IAA100750802 Institutional research plan: CEZ:AV0Z10750506 Institutional support: RVO:67985556 Keywords : shape optimization * Signorini problem * model with given friction * solution-dependent coefficient of friction * mathematical programs with equilibrium constraints Subject RIV: BA - General Mathematics Impact factor: 1.036, year: 2012 http://library.utia.cas.cz/separaty/2012/MTR/outrata-shape optimization in 2d contact problems with given friction and a solution-dependent coefficient of friction .pdf

  2. Evaluation of Persian Professional Web Social Networks' Features, to Provide a Suitable Solution for Optimization of These Networks in Iran

    Directory of Open Access Journals (Sweden)

    Nadjla Hariri

    2013-03-01

    Full Text Available This study aimed to determine the status of Persian professional web social networks' features and to provide a suitable solution for the optimization of these networks in Iran. The research methods were library research and the evaluative method, and the study population consisted of 10 Persian professional web social networks. In this study, a checklist of important social network tools and features was used for data collection. According to the results, “Cloob”, “IR Experts” and “Doreh” were the networks most compatible with the criteria of social networks. Finally, some solutions were presented for optimizing the capabilities of Persian professional web social networks.

  3. Solution of a General Linear Complementarity Problem Using Smooth Optimization and Its Application to Bilinear Programming and LCP

    International Nuclear Information System (INIS)

    Fernandes, L.; Friedlander, A.; Guedes, M.; Judice, J.

    2001-01-01

    This paper addresses a General Linear Complementarity Problem (GLCP) that has found applications in global optimization. It is shown that a solution of the GLCP can be computed by finding a stationary point of a differentiable function over a set defined by simple bounds on the variables. The application of this result to the solution of bilinear programs and LCPs is discussed. Some computational evidence of its usefulness is included in the last part of the paper

  4. Solution of optimization problems by means of the CASTEM 2000 computer code

    International Nuclear Information System (INIS)

    Charras, Th.; Millard, A.; Verpeaux, P.

    1991-01-01

    In the nuclear industry, it can be necessary to use robots for operation in contaminated environments. Most of the time, the positioning of some parts of the robot must be very accurate, which depends strongly on the structural (mass and stiffness) properties of its various components. Therefore, there is a need for a 'best' design, which is a compromise between technical (mechanical properties) and economical (material quantities, design and manufacturing cost) considerations. This is precisely the aim of optimization techniques in the framework of structural analysis. A general statement of this problem could be as follows: find the set of parameters which leads to the minimum of a given function and satisfies some constraints. For example, in the case of a robot component, the parameters can be some geometrical data (plate thickness, ...), the function can be the weight, and the constraints can consist of design criteria such as a given stiffness and of some manufacturing technological constraints (minimum available thickness, etc). For nuclear industry purposes, a robust method was chosen and implemented in the new generation computer code CASTEM 2000. The solution of the optimum design problem is obtained by solving a sequence of convex subproblems, in which the various functions (the function to minimize and the constraints) are transformed by convex linearization. The method has been programmed for continuous as well as discrete variables. According to the highly modular architecture of the CASTEM 2000 code, only one new operation had to be introduced: the solution of a subproblem with convex linearized functions, which is achieved by means of a conjugate gradient technique. All other operations were already available in the code, and the overall optimum design is realized by means of the Gibiane language. An example application will be presented to illustrate the possibilities of the method. (author)

  5. Sensitivity analysis of efficient solution in vector MINMAX boolean programming problem

    Directory of Open Access Journals (Sweden)

    Vladimir A. Emelichev

    2002-11-01

    Full Text Available We consider a multiple criterion Boolean programming problem with MINMAX partial criteria. The extreme level of independent perturbations of partial criteria parameters such that an efficient (Pareto optimal) solution preserves optimality was obtained.

  6. Optimizing Electrocoagulation Process for the Removal of Nitrate From Aqueous Solution

    Directory of Open Access Journals (Sweden)

    Dehghani

    2016-01-01

    Full Text Available Background High levels of nitrate anion are frequently detected in many groundwater resources in Fars province. Objectives The present study aimed to determine the removal efficiency of nitrate from aqueous solutions by the electrocoagulation process using aluminum and iron electrodes. Materials and Methods A laboratory-scale batch reactor was used to determine nitrate removal efficiency by the electrocoagulation method. The removal of nitrate was determined at pH levels of 3, 7, and 11, different voltages (15, 20, and 30 V), and operation times of 30, 60, and 75 min. Data were analyzed using the SPSS software version 16 (Chicago, Illinois, USA) and Pearson’s correlation coefficient was used to analyze the relationship between the parameters. Results The results of the present study showed that the removal efficiency increased from 27% to 86% as pH increased from 3 to 11 at the optimal condition of 30 V and 75 min operation time. Moreover, by increasing the reaction time from 30 to 75 min, the removal efficiency increased from 63% to 86% (at 30 V and pH = 11). Pearson’s correlation analysis showed that there was a significant relationship between removal efficiency and voltage and reaction time as well (P < 0.01). Conclusions In conclusion, the electrocoagulation process can be used for removing nitrate from water resources because of its high efficiency, simplicity, and relatively low cost.

  7. Selection of an optimal antiseptic solution for intraoperative irrigation: an in vitro study.

    Science.gov (United States)

    van Meurs, S J; Gawlitta, D; Heemstra, K A; Poolman, R W; Vogely, H C; Kruyt, M C

    2014-02-19

    With increasing bacterial antibiotic resistance and an increased infection risk due to more complicated surgical procedures and patient populations, prevention of surgical infection is of paramount importance. Intraoperative irrigation with an antiseptic solution could provide an effective way to reduce postoperative infection rates. Although numerous studies have been conducted on the bactericidal or cytotoxic characteristics of antiseptics, the combination of these characteristics for intraoperative application has not been addressed. Bacteria (Staphylococcus aureus and S. epidermidis) and human cells were exposed to polyhexanide, hydrogen peroxide, octenidine dihydrochloride, povidone-iodine, and chlorhexidine digluconate at various dilutions for two minutes. Bactericidal properties were calculated by means of the quantitative suspension method. The cytotoxic effect on human fibroblasts and mesenchymal stromal cells was determined by a WST-1 metabolic activity assay. All of the antiseptics except for polyhexanide were bactericidal and cytotoxic at the commercially available concentrations. When diluted, only povidone-iodine was bactericidal at a concentration at which some cell viability remained. The other antiseptics tested showed no cellular survival at the minimal bactericidal concentration. Povidone-iodine diluted to a concentration of 1.3 g/L could be the optimal antiseptic for intraoperative irrigation. This should be established by future clinical studies.

  8. Cathodic deposition of CdSe films from dimethyl formamide solution at optimized temperature

    Energy Technology Data Exchange (ETDEWEB)

    Datta, J. [Department of Chemistry, Bengal Engineering and Science University, Shibpur, Howrah 711 103, West Bengal (India)]. E-mail: jayati_datta@rediffmail.com; Bhattacharya, C. [Department of Chemistry, Bengal Engineering and Science University, Shibpur, Howrah 711 103, West Bengal (India); Visiting Research Associate, School of Materials Science and Engineering, UNSW (Australia); Bandyopadhyay, S. [School of Materials Science and Engineering, UNSW, Sydney 2052 (Australia)

    2006-12-15

    In the present paper, thin film CdSe compound semiconductors have been electroplated on transparent conducting oxide coated glass substrates from a nonaqueous dimethyl formamide bath containing CdCl2, KI and Se under controlled temperature ranging from 100 to 140 °C. The thickness of the deposited films, as obtained through the focussed ion beam technique, as well as their microstructural and photoelectrochemical properties, have been found to depend on temperature. The film growth was therefore optimized at a bath temperature of ~125 °C. The formation of crystallites in the range of 100-150 nm size has been ascertained through atomic force microscopy and scanning electron microscopy. Energy dispersive analysis of X-rays for the as-deposited film confirmed the 1:1 composition of the CdSe compound in the matrix, exhibiting a band-gap energy of 1.74 eV. Microstructural properties of the deposited films have been determined through X-ray diffraction studies, high-resolution transmission electron microscopy and electron diffraction pattern analysis. Electrochemical impedance spectroscopy and current-potential measurements have been performed to characterize the electrochemical behavior of the semiconductor-electrolyte interface. The photo-activity of the films has been recorded in polysulphide solution under illumination, and a solar conversion efficiency ≥1% was achieved.

  9. Determining the optimal system-specific cut-off frequencies for filtering in-vitro upper extremity impact force and acceleration data by residual analysis.

    Science.gov (United States)

    Burkhart, Timothy A; Dunning, Cynthia E; Andrews, David M

    2011-10-13

    The fundamental nature of impact testing requires a cautious approach to signal processing, to minimize noise while preserving important signal information. However, few recommendations exist regarding the most suitable filter frequency cut-offs to achieve these goals. Therefore, the purpose of this investigation is twofold: to illustrate how residual analysis can be utilized to quantify optimal system-specific filter cut-off frequencies for force, moment, and acceleration data resulting from in-vitro upper extremity impacts, and to show how optimal cut-off frequencies can vary based on impact condition intensity. Eight human cadaver radii specimens were impacted with a pneumatic impact testing device at impact energies that increased from 20J, in 10J increments, until fracture occurred. The optimal filter cut-off frequency for pre-fracture and fracture trials was determined with a residual analysis performed on all force and acceleration waveforms. Force and acceleration data were filtered with a dual pass, 4th order Butterworth filter at each of 14 different cut-off values ranging from 60Hz to 1500Hz. Mean (SD) pre-fracture and fracture optimal cut-off frequencies for the force variables were 605.8 (82.7)Hz and 513.9 (79.5)Hz, respectively. Differences in the optimal cut-off frequency were also found between signals (e.g. Fx (medial-lateral), Fy (superior-inferior), Fz (anterior-posterior)) within the same test. These optimal cut-off frequencies do not universally agree with the recommendations of filtering all upper extremity impact data using a cut-off frequency of 600Hz. This highlights the importance of quantifying the filter frequency cut-offs specific to the instrumentation and experimental set-up. Improper digital filtering may lead to erroneous results and a lack of standardized approaches makes it difficult to compare findings of in-vitro dynamic testing between laboratories. Copyright © 2011 Elsevier Ltd. All rights reserved.
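
    A minimal sketch of the kind of residual analysis described (in the spirit of Winter's classical procedure) is given below: a synthetic impact signal is low-pass filtered with a dual-pass 4th-order Butterworth filter at a range of candidate cut-offs, the RMS residual is computed for each, and the cut-off is read off where the residual curve meets a line fitted to an assumed noise-dominated tail. The signal, sampling rate, and tail region are assumptions, not the study's data.

```python
# Sketch of Winter-style residual analysis for choosing a Butterworth cut-off.
import numpy as np
from scipy.signal import butter, filtfilt

fs = 10000.0                                            # Hz, assumed sampling rate
t = np.arange(0, 0.2, 1 / fs)
x = np.exp(-60 * t) * np.sin(2 * np.pi * 400 * t)       # synthetic "impact" component
x += 0.05 * np.random.default_rng(1).standard_normal(t.size)   # additive noise

cutoffs = np.arange(60, 1501, 20.0)                     # candidate cut-offs, Hz
residuals = []
for fc in cutoffs:
    b, a = butter(4, fc, btype="low", fs=fs)            # dual-pass 4th-order Butterworth
    residuals.append(np.sqrt(np.mean((x - filtfilt(b, a, x)) ** 2)))
residuals = np.asarray(residuals)

# Fit a line to the noise-dominated tail, extrapolate to 0 Hz, and pick the
# first cut-off whose residual drops to that intercept.
tail = cutoffs > 1000                                   # assumed noise-only region
slope, intercept = np.polyfit(cutoffs[tail], residuals[tail], 1)
optimal_fc = cutoffs[np.argmax(residuals <= intercept)]
print(f"estimated optimal cut-off: {optimal_fc:.0f} Hz")
```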

  10. A SURVEY ON OPTIMIZATION APPROACHES TO SEMANTIC SERVICE DISCOVERY TOWARDS AN INTEGRATED SOLUTION

    Directory of Open Access Journals (Sweden)

    Chellammal Surianarayanan

    2012-07-01

    Full Text Available The process of semantic service discovery using an ontology reasoner such as Pellet is time consuming. This restricts the usage of web services in real time applications having dynamic composition requirements. As performance of semantic service discovery is crucial in service composition, it should be optimized. Various optimization methods are being proposed to improve the performance of semantic discovery. In this work, we investigate the existing optimization methods and broadly classify optimization mechanisms into two categories, namely optimization by efficient reasoning and optimization by efficient matching. Optimization by efficient matching is further classified into subcategories such as optimization by clustering, optimization by inverted indexing, optimization by caching, optimization by hybrid methods, optimization by efficient data structures and optimization by efficient matching algorithms. With a detailed study of different methods, an integrated optimization infrastructure along with matching method has been proposed to improve the performance of semantic matching component. To achieve better optimization the proposed method integrates the effects of caching, clustering and indexing. Theoretical aspects of performance evaluation of the proposed method are discussed.

  11. Optimizing Solute-Solute Interactions in the GLYCAM06 and CHARMM36 Carbohydrate Force Fields Using Osmotic Pressure Measurements.

    Science.gov (United States)

    Lay, Wesley K; Miller, Mark S; Elcock, Adrian H

    2016-04-12

    GLYCAM06 and CHARMM36 are successful force fields for modeling carbohydrates. To correct recently identified deficiencies with both force fields, we adjusted intersolute nonbonded parameters to reproduce the experimental osmotic coefficient of glucose at 1 M. The modified parameters improve behavior of glucose and sucrose up to 4 M and improve modeling of a dextran 55-mer. While the modified parameters may not be applicable to all carbohydrates, they highlight the use of osmotic simulations to optimize force fields.

  12. Optimization of the test intervals of a nuclear safety system by genetic algorithms, solution clustering and fuzzy preference assignment

    International Nuclear Information System (INIS)

    Zio, E.; Bazzo, R.

    2010-01-01

    In this paper, a procedure is developed for identifying a number of representative solutions manageable for decision-making in a multiobjective optimization problem concerning the test intervals of the components of a safety system of a nuclear power plant. Pareto Front solutions are identified by a genetic algorithm and then clustered by subtractive clustering into 'families'. On the basis of the decision maker's preferences, each family is then synthetically represented by a 'head of the family' solution. This is done by introducing a scoring system that ranks the solutions with respect to the different objectives: a fuzzy preference assignment is employed to this purpose. Level Diagrams are then used to represent, analyze and interpret the Pareto Fronts reduced to the head-of-the-family solutions
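
    For illustration, the snippet below groups a set of two-dimensional points (standing in for Pareto-front objective vectors) into "families" with a simplified, single-threshold variant of subtractive clustering. The radii and stopping ratio are common default-style assumptions rather than values from the paper, and the fuzzy preference scoring and Level Diagrams are not reproduced.

```python
# Sketch: subtractive clustering (Chiu-style, simplified) of Pareto-front points.
import numpy as np

def subtractive_clustering(X, ra=0.5, rb=0.75, stop_ratio=0.15):
    alpha, beta = 4.0 / ra**2, 4.0 / rb**2
    d2 = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    potential = np.exp(-alpha * d2).sum(axis=1)         # density-like potential per point
    first_peak, centers = potential.max(), []
    while True:
        k = np.argmax(potential)
        if potential[k] < stop_ratio * first_peak:      # single-threshold stopping rule
            break
        centers.append(X[k])
        potential -= potential[k] * np.exp(-beta * d2[:, k])   # suppress nearby potentials
    return np.array(centers)

rng = np.random.default_rng(3)
# Three loose groups of points standing in for clustered Pareto solutions.
pts = np.vstack([rng.normal(loc, 0.1, size=(40, 2))
                 for loc in ([0, 1], [0.5, 0.5], [1, 0])])
centers = subtractive_clustering(pts)
print("number of families found:", len(centers))
print(centers.round(2))
```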

  13. Optimization in the utility maximization framework for conservation planning: a comparison of solution procedures in a study of multifunctional agriculture

    Science.gov (United States)

    Kreitler, Jason R.; Stoms, David M.; Davis, Frank W.

    2014-01-01

    Quantitative methods of spatial conservation prioritization have traditionally been applied to issues in conservation biology and reserve design, though their use in other types of natural resource management is growing. The utility maximization problem is one form of a covering problem where multiple criteria can represent the expected social benefits of conservation action. This approach allows flexibility with a problem formulation that is more general than typical reserve design problems, though the solution methods are very similar. However, few studies have addressed optimization in utility maximization problems for conservation planning, and the effect of solution procedure is largely unquantified. Therefore, this study mapped five criteria describing elements of multifunctional agriculture to determine a hypothetical conservation resource allocation plan for agricultural land conservation in the Central Valley of CA, USA. We compared solution procedures within the utility maximization framework to determine the difference between an open source integer programming approach and a greedy heuristic, and find gains from optimization of up to 12%. We also model land availability for conservation action as a stochastic process and determine the decline in total utility compared to the globally optimal set using both solution algorithms. Our results are comparable to other studies illustrating the benefits of optimization for different conservation planning problems, and highlight the importance of maximizing the effectiveness of limited funding for conservation and natural resource management.

  14. A time-domain decomposition iterative method for the solution of distributed linear quadratic optimal control problems

    Science.gov (United States)

    Heinkenschloss, Matthias

    2005-01-01

    We study a class of time-domain decomposition-based methods for the numerical solution of large-scale linear quadratic optimal control problems. Our methods are based on a multiple shooting reformulation of the linear quadratic optimal control problem as a discrete-time optimal control (DTOC) problem. The optimality conditions for this DTOC problem lead to a linear block tridiagonal system. The diagonal blocks are invertible and are related to the original linear quadratic optimal control problem restricted to smaller time-subintervals. This motivates the application of block Gauss-Seidel (GS)-type methods for the solution of the block tridiagonal systems. Numerical experiments show that the spectral radii of the block GS iteration matrices are larger than one for typical applications, but that the eigenvalues of the iteration matrices decay to zero fast. Hence, while the GS method is not expected to converge for typical applications, it can be effective as a preconditioner for Krylov-subspace methods. This is confirmed by our numerical tests. A byproduct of this research is the insight that certain instantaneous control techniques can be viewed as the application of one step of the forward block GS method applied to the DTOC optimality system.
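
    The forward block Gauss-Seidel idea can be sketched as follows for a generic block tridiagonal system. The example below uses small, diagonally dominant random blocks so that the sweep converges on its own; as the abstract notes, for the actual DTOC systems the iteration is better viewed as a preconditioner for a Krylov method.

```python
# Minimal sketch of forward block Gauss-Seidel sweeps for a block tridiagonal
# system; block sizes and data are made up for illustration.
import numpy as np

def block_gauss_seidel(D, L, U, rhs, x0, sweeps=50):
    """D[i]: diagonal block i; L[i]: block below D[i]; U[i]: block above D[i+1]."""
    n = len(D)
    x = [v.copy() for v in x0]
    for _ in range(sweeps):
        for i in range(n):
            r = rhs[i].copy()
            if i > 0:
                r -= L[i - 1] @ x[i - 1]        # uses the freshly updated neighbour
            if i < n - 1:
                r -= U[i] @ x[i + 1]            # uses the value from the previous sweep
            x[i] = np.linalg.solve(D[i], r)
    return x

# Tiny example: 4 blocks of size 3, diagonally dominant so plain GS converges.
rng = np.random.default_rng(0)
n, m = 4, 3
D = [rng.random((m, m)) + 5 * np.eye(m) for _ in range(n)]
L = [0.1 * rng.random((m, m)) for _ in range(n - 1)]
U = [0.1 * rng.random((m, m)) for _ in range(n - 1)]
x_true = [rng.random(m) for _ in range(n)]
rhs = [D[i] @ x_true[i]
       + (L[i - 1] @ x_true[i - 1] if i > 0 else 0)
       + (U[i] @ x_true[i + 1] if i < n - 1 else 0) for i in range(n)]
x = block_gauss_seidel(D, L, U, rhs, [np.zeros(m) for _ in range(n)])
print("max error:", max(np.abs(x[i] - x_true[i]).max() for i in range(n)))
```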

  15. Near optimal solution to the inverse problem for gravitational-wave bursts

    International Nuclear Information System (INIS)

    Guersel, Y.; Tinto, M.

    1989-01-01

    We develop a method for determining the source direction (θ,φ) and the two waveforms h+(t), h×(t) of a gravitational-wave burst using noisy data from three wideband gravitational-wave detectors running in coincidence. The scheme does not rely on any assumptions about the waveforms and in fact it works for gravitational-wave bursts of any kind. To improve the accuracy of the solution for (θ,φ), h+(t), h×(t), we construct a near optimal filter for the noisy data which is deduced from the data themselves. We implement the method numerically using simulated data for detectors that operate, with white Gaussian noise, in the frequency band of 500--2500 Hz. We show that for broadband signals centered around 1 kHz with a conventional signal-to-noise ratio of at least 10 in each detector we are able to locate the source within a solid angle of 1×10⁻⁵ sr. If the signals and the detectors' band were scaled downwards in frequency by a factor ι, at fixed signal-to-noise ratio, then the solid angle of the source's error box would increase by a factor ι². The simulated data are assumed to be produced by three detectors: one on the east coast of the United States of America, one on the west coast of the United States of America, and the third in Germany or Western Australia. For conventional signal-to-noise ratios significantly lower than 10 the method still converges to the correct combination of the relative time delays but it is unable to distinguish between the two mirror-image directions defined by the relative time delays. The angular spread around these points increases as the signal-to-noise ratio decreases. For conventional signal-to-noise ratios near 1 the method loses its resolution completely

  16. Control and optimization of solute transport in a thin porous tube

    KAUST Repository

    Griffiths, I. M.; Howell, P. D.; Shipley, R. J.

    2013-01-01

    differentials upon the dispersive solute behaviour are investigated. The model is used to explore the control of solute transport across the membrane walls via the membrane permeability, and a parametric expression for the permeability required to generate a

  17. Dry storage technologies: Optimized solutions for spent fuels and vitrified residues

    International Nuclear Information System (INIS)

    Roland, Vincent; Verdier, Antoine; Sicard, Damien; Neider, Tara

    2006-01-01

    materials - have allowed finding further optimization of this type of cask design. In order to increase the loading capacity in terms of radioactive source terms and heat load by 40%, the cask design relies on innovative solutions and benchmarks from the current shipping campaigns. This paper shows, on examples developed within companies of the AREVA Group, what are the key parameters and elements that can direct toward the selection of a technology in a user specific context. Some of the constraints are ability to dry store a large number of spent fuel assemblies or vitrified residues. Hereafter are also explained the methods used by COGEMA LOGISTICS in its transport and storage systems, which are an integral part of its radioactive waste management services. COGEMA LOGISTICS leverages its experience and uses its analyses to determine overall characteristics, needs, and the lifetime costs of potential programs for transporting and stored nuclear waste. (authors)

  18. Closed-form solutions for linear regulator-design of mechanical systems including optimal weighting matrix selection

    Science.gov (United States)

    Hanks, Brantley R.; Skelton, Robert E.

    1991-01-01

    This paper addresses the restriction of Linear Quadratic Regulator (LQR) solutions to the algebraic Riccati Equation to design spaces which can be implemented as passive structural members and/or dampers. A general closed-form solution to the optimal free-decay control problem is presented which is tailored for structural-mechanical systems. The solution includes, as subsets, special cases such as the Rayleigh Dissipation Function and total energy. Weighting matrix selection is a constrained choice among several parameters to obtain desired physical relationships. The closed-form solution is also applicable to active control design for systems where perfect, collocated actuator-sensor pairs exist. Some examples of simple spring mass systems are shown to illustrate key points.
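
    As a generic point of reference (not the paper's closed-form solution or its weighting-matrix selection), the following sketch computes a continuous-time LQR gain by solving the algebraic Riccati equation with SciPy for a simple spring-mass-damper model; the system matrices and weights are illustrative assumptions.

```python
# Generic continuous-time LQR sketch via the algebraic Riccati equation.
import numpy as np
from scipy.linalg import solve_continuous_are

# Spring-mass-damper: state x = [position, velocity] (assumed toy model).
m, k, c = 1.0, 4.0, 0.1
A = np.array([[0.0, 1.0], [-k / m, -c / m]])
B = np.array([[0.0], [1.0 / m]])

Q = np.diag([10.0, 1.0])   # state weighting (assumed)
R = np.array([[0.1]])      # control weighting (assumed)

P = solve_continuous_are(A, B, Q, R)   # solves A'P + PA - P B R^-1 B'P + Q = 0
K = np.linalg.solve(R, B.T @ P)        # optimal gain, u = -K x
print("LQR gain K =", K)

# Closed-loop poles should have negative real parts.
print("closed-loop eigenvalues:", np.linalg.eigvals(A - B @ K))
```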

  19. Direct aperture optimization: A turnkey solution for step-and-shoot IMRT

    International Nuclear Information System (INIS)

    Shepard, D.M.; Earl, M.A.; Li, X.A.; Naqvi, S.; Yu, C.

    2002-01-01

    IMRT treatment plans for step-and-shoot delivery have traditionally been produced through the optimization of intensity distributions (or maps) for each beam angle. The optimization step is followed by the application of a leaf-sequencing algorithm that translates each intensity map into a set of deliverable aperture shapes. In this article, we introduce an automated planning system in which we bypass the traditional intensity optimization, and instead directly optimize the shapes and the weights of the apertures. We call this approach 'direct aperture optimization'. This technique allows the user to specify the maximum number of apertures per beam direction, and hence provides significant control over the complexity of the treatment delivery. This is possible because the machine dependent delivery constraints imposed by the MLC are enforced within the aperture optimization algorithm rather than in a separate leaf-sequencing step. The leaf settings and the aperture intensities are optimized simultaneously using a simulated annealing algorithm. We have tested direct aperture optimization on a variety of patient cases using the EGS4/BEAM Monte Carlo package for our dose calculation engine. The results demonstrate that direct aperture optimization can produce highly conformal step-and-shoot treatment plans using only three to five apertures per beam direction. As compared with traditional optimization strategies, our studies demonstrate that direct aperture optimization can result in a significant reduction in both the number of beam segments and the number of monitor units. Direct aperture optimization therefore produces highly efficient treatment deliveries that maintain the full dosimetric benefits of IMRT

  20. Concentration-discharge relationships during an extreme event: Contrasting behavior of solutes and changes to chemical quality of dissolved organic material in the Boulder Creek Watershed during the September 2013 flood: SOLUTE FLUX IN A FLOOD EVENT

    Energy Technology Data Exchange (ETDEWEB)

    Rue, Garrett P. [Institute of Arctic and Alpine Research, University of Colorado, Boulder Colorado USA; Rock, Nathan D. [Institute of Arctic and Alpine Research, University of Colorado, Boulder Colorado USA; Gabor, Rachel S. [Institute of Arctic and Alpine Research, University of Colorado, Boulder Colorado USA; Pitlick, John [Department of Geography, University of Colorado, Boulder Colorado USA; Tfaily, Malak [Environmental Molecular Sciences Laboratory, Pacific Northwest National Laboratory, Richland Washington USA; McKnight, Diane M. [Institute of Arctic and Alpine Research, University of Colorado, Boulder Colorado USA

    2017-07-01

    During the week of September 10-17, 2013, close to 20 inches of rain fell across Boulder County, Colorado, USA. This rainfall represented a 1000-year event that caused massive hillslope erosion, landslides, and mobilization of sediments. The resultant stream flows corresponded to a 100-year flood. For the Boulder Creek Critical Zone Observatory (BC-CZO), this event provided an opportunity to study the effect of extreme rainfall on solute concentration-discharge relationships and biogeochemical catchment processes. We observed base cation and dissolved organic carbon (DOC) concentrations at two sites on Boulder Creek following the recession of peak flow. We also isolated three distinct fractions of dissolved organic matter (DOM) for chemical characterization. At the upper site, which represented the forested mountain catchment, the concentrations of the base cations Ca, Mg and Na were greatest at the peak flood and decreased only slightly, in contrast with DOC and K concentrations, which decreased substantially. At the lower site, within the urban corridor, all solutes decreased abruptly after the first week of flow recession, with base cation concentrations stabilizing while DOC and K continued to decrease. Additionally, we found significant spatiotemporal trends in the chemical quality of organic matter exported during the flood recession, as measured by fluorescence, 13C-NMR spectroscopy, and FTICR-MS. Similar to the effect of extreme rainfall events in driving landslides and mobilizing sediments, our findings suggest that such events mobilize solutes by the flushing of the deeper layers of the critical zone, and that this flushing regulates terrestrial-aquatic biogeochemical linkages during the flow recession.

  1. Application of integer programming on logistics solution for load transportation: the solver tool and its limitations in the search for the optimal solution

    Directory of Open Access Journals (Sweden)

    Ricardo França Santos

    2012-01-01

    Full Text Available This work addresses a typical logistics problem of the Brazilian Navy regarding the allocation, transportation and distribution of refrigerated goods to Military Organizations within Grande Rio (RJ). After a brief review of the literature on Linear/Integer Programming and some of its applications, we proposed the use of Integer Programming, using Excel's Solver as a tool for obtaining the optimal load configuration for the fleet, yielding the lowest distribution costs while meeting the demand schedule. A first attempt with a single spreadsheet met the assumptions, but it could not find a convergent solution without degeneration problems and within a reasonable solution time. A second solution was proposed separating the problem into three phases, which allowed us to highlight the potential and limitations of the Solver tool. This study showed the importance of formulating a realistic model and of a detailed critical analysis, which could be seen through the lack of convergence of the first solution and the success achieved by the second one.

  2. Order-Constrained Solutions in K-Means Clustering: Even Better than Being Globally Optimal

    Science.gov (United States)

    Steinley, Douglas; Hubert, Lawrence

    2008-01-01

    This paper proposes an order-constrained K-means cluster analysis strategy, and implements that strategy through an auxiliary quadratic assignment optimization heuristic that identifies an initial object order. A subsequent dynamic programming recursion is applied to optimally subdivide the object set subject to the order constraint. We show that…

  3. Thermal management optimization of an air-cooled hydrogen fuel cell system in an extreme environmental condition

    DEFF Research Database (Denmark)

    Gao, Xin; Olesen, Anders Christian; Kær, Søren Knudsen

    2018-01-01

    An air-cooled proton exchange membrane (PEM) fuel cell system is designed and under manufacture for telecommunication back-up power. To enhance its competence in various environments, the system thermal feature is optimized in this work via simulation based on a computational fluid dynamics (CFD) model. The model is three-dimensional (3D) and built in the commercial CFD package Fluent (ANSYS Inc.). It makes the full-scale system-level study feasible by only considering the system essences with adequate accuracy. Through the model, the optimization is attained in several aspects. Firstly..., the intake airflow magnitude, is also studied for a more uniform airflow and in turn a suppressed temperature disparity inside the system. Following the guidelines drawn by this work on the system design and the operation setting, the air-cooled fuel cell system can be expected with better performances......

  4. Particle Swarm Optimization applied to combinatorial problem aiming the fuel recharge problem solution in a nuclear reactor

    International Nuclear Information System (INIS)

    Meneses, Anderson Alvarenga de Moura; Schirru, Roberto

    2005-01-01

    This work focuses on the use of the Artificial Intelligence technique Particle Swarm Optimization (PSO) to optimize the fuel recharge of a nuclear reactor. This is a combinatorial problem, in which the search for the best feasible solution is done by minimizing a specific objective function. However, at this first stage it is possible to compare the fuel recharge problem with the Traveling Salesman Problem (TSP), since both of them are combinatorial, with one advantage: the evaluation of the TSP objective function is much simpler. Thus, the proposed methods have been applied to two TSPs: Oliver 30 and Rykel 48. In 1995, KENNEDY and EBERHART presented the PSO technique to optimize non-linear continuous functions. Recently some PSO models for discrete search spaces have been developed for combinatorial optimization, although all of them have formulations different from the ones presented here. In this paper, we use the PSO theory associated with the Random Keys (RK) model, used in some optimizations with Genetic Algorithms. The Particle Swarm Optimization with Random Keys (PSORK) results from this association, which combines PSO and RK. The adaptations and changes in the PSO aim to allow the use of the PSO for the nuclear fuel recharge. This work shows PSORK being applied to the proposed combinatorial problem and the results obtained. (author)
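
    A minimal sketch of the random-keys idea is shown below: each particle's position is a vector of continuous keys, a tour is decoded by sorting the keys, and standard PSO velocity and position updates are applied. The parameter values, city layout, and update scheme are illustrative assumptions and are not taken from the paper.

```python
# Hedged sketch of PSO with random keys (PSORK-style) on a small random TSP.
import numpy as np

rng = np.random.default_rng(42)
n_cities, n_particles, n_iters = 20, 30, 300
cities = rng.random((n_cities, 2))

def tour_length(keys):
    order = np.argsort(keys)                       # decode continuous keys -> tour
    pts = cities[order]
    return np.linalg.norm(np.diff(np.vstack([pts, pts[:1]]), axis=0), axis=1).sum()

w, c1, c2 = 0.7, 1.5, 1.5                          # inertia, cognitive, social weights
pos = rng.random((n_particles, n_cities))
vel = np.zeros_like(pos)
pbest, pbest_val = pos.copy(), np.array([tour_length(p) for p in pos])
g = np.argmin(pbest_val); gbest, gbest_val = pbest[g].copy(), pbest_val[g]

for _ in range(n_iters):
    r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos += vel
    vals = np.array([tour_length(p) for p in pos])
    improved = vals < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
    if pbest_val.min() < gbest_val:
        g = np.argmin(pbest_val)
        gbest, gbest_val = pbest[g].copy(), pbest_val[g]

print(f"best tour length found: {gbest_val:.3f}")
```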

  5. Marine snow, organic solute plumes, and optimal chemosensory behavior of bacteria

    DEFF Research Database (Denmark)

    Kiørboe, Thomas; Jackson, G.A.

    2001-01-01

    Leaking organic solutes form an elongated plume in the wake of a sinking aggregate. These solutes may both be assimilated by suspended bacteria and guide bacteria with chemokinetic swimming behavior toward the aggregate. We used modifications of previously published models of the flow and concentration...... behavior was used to examine the potential contribution of aggregate-generated solute plumes to water column bacterial production. Despite occupying only a small volume fraction, the plumes may provide important growth habitats for free bacteria and account for a significant proportion of water column...

  6. Mechanical Design Optimization Using Advanced Optimization Techniques

    CERN Document Server

    Rao, R Venkata

    2012-01-01

    Mechanical design includes an optimization process in which designers always consider objectives such as strength, deflection, weight, wear, corrosion, etc. depending on the requirements. However, design optimization for a complete mechanical assembly leads to a complicated objective function with a large number of design variables. It is good practice to apply optimization techniques to individual components or intermediate assemblies rather than to a complete assembly. Analytical or numerical methods for calculating the extreme values of a function may perform well in many practical cases, but may fail in more complex design situations. In real design problems, the number of design parameters can be very large and their influence on the value to be optimized (the goal function) can be very complicated, having nonlinear character. In these complex cases, advanced optimization algorithms offer solutions to the problems, because they find a solution near to the global optimum within reasonable time and computational ...

  7. Viscosity Solutions for a System of Integro-PDEs and Connections to Optimal Switching and Control of Jump-Diffusion Processes

    International Nuclear Information System (INIS)

    Biswas, Imran H.; Jakobsen, Espen R.; Karlsen, Kenneth H.

    2010-01-01

    We develop a viscosity solution theory for a system of nonlinear degenerate parabolic integro-partial differential equations (IPDEs) related to stochastic optimal switching and control problems or stochastic games. In the case of stochastic optimal switching and control, we prove via dynamic programming methods that the value function is a viscosity solution of the IPDEs. In our setting the value functions or the solutions of the IPDEs are not smooth, so classical verification theorems do not apply.

  8. The Shortlist Method for fast computation of the Earth Mover's Distance and finding optimal solutions to transportation problems.

    Science.gov (United States)

    Gottschlich, Carsten; Schuhmacher, Dominic

    2014-01-01

    Finding solutions to the classical transportation problem is of great importance, since this optimization problem arises in many engineering and computer science applications. Especially the Earth Mover's Distance is used in a plethora of applications ranging from content-based image retrieval, shape matching, fingerprint recognition, object tracking and phishing web page detection to computing color differences in linguistics and biology. Our starting point is the well-known revised simplex algorithm, which iteratively improves a feasible solution to optimality. The Shortlist Method that we propose substantially reduces the number of candidates inspected for improving the solution, while at the same time balancing the number of pivots required. Tests on simulated benchmarks demonstrate a considerable reduction in computation time for the new method as compared to the usual revised simplex algorithm implemented with state-of-the-art initialization and pivot strategies. As a consequence, the Shortlist Method facilitates the computation of large scale transportation problems in viable time. In addition we describe a novel method for finding an initial feasible solution which we coin Modified Russell's Method.
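
    For orientation, the snippet below states a small transportation problem as a linear program and solves it with SciPy's linprog (HiGHS backend); it illustrates the problem class only and is not an implementation of the Shortlist Method or of the revised simplex pivoting strategies discussed in the paper.

```python
# Small transportation problem solved as an LP with SciPy; data are invented.
import numpy as np
from scipy.optimize import linprog

supply = np.array([20.0, 30.0, 25.0])            # source capacities
demand = np.array([10.0, 25.0, 15.0, 25.0])      # sink requirements (totals match)
cost = np.array([[4.0, 6.0, 8.0, 9.0],
                 [5.0, 3.0, 7.0, 6.0],
                 [6.0, 5.0, 4.0, 3.0]])          # unit shipping costs

m, n = cost.shape
c = cost.ravel()                                 # decision variables x[i, j], flattened row-major

# Equality constraints: each row sums to its supply, each column to its demand.
A_eq = np.zeros((m + n, m * n))
for i in range(m):
    A_eq[i, i * n:(i + 1) * n] = 1.0
for j in range(n):
    A_eq[m + j, j::n] = 1.0
b_eq = np.concatenate([supply, demand])

res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=(0, None), method="highs")
print("optimal cost:", res.fun)
print("optimal flows:\n", res.x.reshape(m, n).round(2))
```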

  9. Reaction kinetics of hydrazine neutralization in steam generator wet lay-up solution: Identifying optimal degradation conditions

    International Nuclear Information System (INIS)

    Schildermans, Kim; Lecocq, Raphael; Girasa, Emmanuel

    2012-09-01

    During a nuclear power plant outage, hydrazine is used as an oxygen scavenger in the steam generator lay-up solution. However, due to the carcinogenic effects of hydrazine, more stringent discharge limits are or will be imposed in the environmental permits. Hydrazine discharge could even be prohibited. Consequently, hydrazine alternatives or hydrazine degradation before discharge is needed. This paper presents the laboratory tests performed to characterize the reaction kinetics of hydrazine neutralization using bleach or hydrogen peroxide, catalyzed with either copper sulfate (CuSO4) or potassium permanganate (KMnO4). The tests are performed on two standard steam generator lay-up solutions based on different pH control agents: ammonia or ethanolamine. Different neutralization conditions are tested by varying temperature, oxidant addition, and catalyst concentration, among others, in order to identify the optimal parameters for hydrazine neutralization in a steam generator wet lay-up solution. (authors)

  10. Solution of wind integrated thermal generation system for environmental optimal power flow using hybrid algorithm

    Directory of Open Access Journals (Sweden)

    Ambarish Panda

    2016-09-01

    Full Text Available A new evolutionary hybrid algorithm (HA) has been proposed in this work for the environmental optimal power flow (EOPF) problem. The EOPF problem has been formulated in a nonlinear constrained multi-objective optimization framework. Considering the intermittency of the available wind power, a cost model of the wind and thermal generation system is developed. A suitably formed objective function considering the operational cost, the cost of emission, the real power loss and the cost of installation of FACTS devices for maintaining a stable voltage in the system has been optimized with the HA and compared with a particle swarm optimization algorithm (PSOA) to prove its effectiveness. All the simulations are carried out in the MATLAB/SIMULINK environment taking the IEEE 30-bus system as the test system.

  11. Structural Damage Detection using Frequency Response Function Index and Surrogate Model Based on Optimized Extreme Learning Machine Algorithm

    Directory of Open Access Journals (Sweden)

    R. Ghiasi

    2017-09-01

    Full Text Available Utilizing surrogate models based on artificial intelligence methods for detecting structural damage has attracted the attention of many researchers in recent decades. In this study, a new kernel based on the Littlewood-Paley Wavelet (LPW) is proposed for the Extreme Learning Machine (ELM) algorithm to improve the accuracy of detecting multiple damages in structural systems. ELM is used as a metamodel (surrogate model) of the exact finite element analysis of structures in order to efficiently reduce the computational cost during the updating process. In the proposed two-step method, first a damage index based on the Frequency Response Function (FRF) of the structure is used to identify the location of damages. In the second step, the severity of damage in the identified elements is detected using ELM. In order to evaluate the efficacy of ELM, the results obtained with the proposed kernel were compared with those of other kernels proposed for ELM as well as with the Least Squares Support Vector Machine algorithm. The solved numerical problems indicated that the accuracy of the ELM algorithm in detecting structural damage increases drastically when the LPW kernel is used.
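
    The basic ELM idea, a random fixed hidden layer with output weights obtained by least squares, can be sketched as follows on synthetic data. This generic sketch uses a tanh hidden layer rather than the proposed Littlewood-Paley wavelet kernel, and it does not include the FRF-based damage index or the finite element updating loop.

```python
# Minimal Extreme Learning Machine (ELM) regression sketch on synthetic data.
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(400, 2))                 # e.g. candidate damage parameters
y = np.sin(X[:, 0]) + 0.5 * np.cos(2 * X[:, 1])       # surrogate for an "exact FE" response

n_hidden = 200
W = rng.standard_normal((X.shape[1], n_hidden))       # random input weights (kept fixed)
b = rng.standard_normal(n_hidden)                     # random biases (kept fixed)

def hidden(Xin):
    return np.tanh(Xin @ W + b)                       # hidden-layer activations

H = hidden(X)
beta, *_ = np.linalg.lstsq(H, y, rcond=None)          # output weights by least squares

X_test = rng.uniform(-3, 3, size=(100, 2))
y_test = np.sin(X_test[:, 0]) + 0.5 * np.cos(2 * X_test[:, 1])
y_pred = hidden(X_test) @ beta
print("test RMSE:", np.sqrt(np.mean((y_pred - y_test) ** 2)))
```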

  12. Inpatient weight loss as a precursor to bariatric surgery for adolescents with extreme obesity: optimizing bariatric surgery.

    Science.gov (United States)

    Koeck, Emily; Davenport, Katherine; Barefoot, Leah C; Qureshi, Faisal G; Davidow, Daniel; Nadler, Evan P

    2013-07-01

    As the obesity epidemic takes its toll on patients stricken with the disease and our health care system, debate continues regarding the use of weight loss surgery and its long-term consequences, especially for adolescents. One subset of patients regarding whom there is increased controversy is adolescents with extreme obesity (BMI > 60 kg/m(2)) because the risk of complications in this weight category is higher than for others undergoing bariatric surgery. Several strategies have been suggested for this patient group, including staged operations, combined operations, intragastric balloon use, and endoluminal sleeve placement. However, the device options are often not available to adolescents, and there are no data regarding staged or combined procedures in this age group. All adolescents with BMI >60 kg/m(2) referred to our program were evaluated for inpatient medical weight loss prior to laparoscopic sleeve gastrectomy. The program utilizes a multidisciplinary approach with a protein-sparing modified fast diet, exercise, and behavioral modification. Three patients completed the program, and each achieved significant preoperative weight loss through the inpatient program and successfully underwent bariatric surgery. Presurgical weight loss via an inpatient program for adolescents with a BMI >60 kg/m(2) results in total weight loss comparable to a primary surgical procedure alone, with the benefit of decreasing the perioperative risk.

  13. Optimal Solution Volume for Luminal Preservation: A Preclinical Study in Porcine Intestinal Preservation.

    Science.gov (United States)

    Oltean, M; Papurica, M; Jiga, L; Hoinoiu, B; Glameanu, C; Bresler, A; Patrut, G; Grigorie, R; Ionac, M; Hellström, M

    2016-03-01

    Rodent studies suggest that luminal solutions alleviate the mucosal injury and prolong intestinal preservation but concerns exist that excessive volumes of luminal fluid may promote tissue edema. Differences in size, structure, and metabolism between rats and humans require studies in large animals before clinical use. Intestinal procurement was performed in 7 pigs. After perfusion with histidine-tryptophan-ketoglutarate (HTK), 40-cm-long segments were cut and filled with 13.5% polyethylene glycol (PEG) 3350 solution as follows: V0 (controls, none), V1 (0.5 mL/cm), V2 (1 mL/cm), V3 (1.5 mL/cm), and V4 (2 mL/cm). Tissue and luminal solutions were sampled after 8, 14, and 24 hours of cold storage (CS). Preservation injury (Chiu score), the apical membrane (ZO-1, brush-border maltase activity), and the electrolyte content in the luminal solution were studied. In control intestines, 8-hour CS in HTK solution resulted in minimal mucosal changes (grade 1) that progressed to significant subepithelial edema (grade 3) by 24 hours. During this time, a gradual loss in ZO-1 was recorded, whereas maltase activity remained unaltered. Moreover, variable degrees of submucosal edema were observed. Luminal introduction of high volumes (2 mL/mL) of PEG solution accelerated the development of the subepithelial edema and submucosal edema, leading to worse histology. However, ZO-1 was preserved better over time than in control intestines (no luminal solution). Maltase activity was reduced in intestines receiving luminal preservation. Luminal sodium content decreased in time and did not differ between groups. This PEG solution protects the apical membrane and the tight-junction proteins but may favor water absorption and tissue (submucosal) edema, and luminal volumes >2 mL/cm may result in worse intestinal morphology. Copyright © 2016 Elsevier Inc. All rights reserved.

  14. Control and optimization of solute transport in a thin porous tube

    KAUST Repository

    Griffiths, I. M.

    2013-03-01

    Predicting the distribution of solutes or particles in flows within porous-walled tubes is essential to inform the design of devices that rely on cross-flow filtration, such as those used in water purification, irrigation devices, field-flow fractionation, and hollow-fibre bioreactors for tissue-engineering applications. Motivated by these applications, a radially averaged model for fluid and solute transport in a tube with thin porous walls is derived by developing the classical ideas of Taylor dispersion. The model includes solute diffusion and advection via both radial and axial flow components, and the advection, diffusion, and uptake coefficients in the averaged equation are explicitly derived. The effect of wall permeability, slip, and pressure differentials upon the dispersive solute behaviour are investigated. The model is used to explore the control of solute transport across the membrane walls via the membrane permeability, and a parametric expression for the permeability required to generate a given solute distribution is derived. The theory is applied to the specific example of a hollow-fibre membrane bioreactor, where a uniform delivery of nutrient across the membrane walls to the extra-capillary space is required to promote spatially uniform cell growth. © 2013 American Institute of Physics.

  15. The Practice of Physical Activity in the Setting of Lower-Extremities Sarcomas: A First Step toward Clinical Optimization

    Directory of Open Access Journals (Sweden)

    Mohamad Assi

    2017-10-01

    Full Text Available Lower-extremity sarcoma patients, with bone tumors and soft-tissue sarcomas, are a unique population at high risk of physical dysfunction and chronic heart diseases. Thus, providing an adequate physical activity (PA) program constitutes a primary part of the adjuvant treatment, aiming to improve patients' quality of life. The main goal of this paper is to offer clear suggestions for clinicians regarding PA around the time between diagnosis and the offered treatments. These preliminary recommendations reflect our interpretation of the clinical and preclinical data published on this topic, after a systematic search of the PubMed database. Accordingly, patients could be advised to (1) start sessions of supportive rehabilitation and low-intensity PA after surgery and (2) increase PA intensities progressively during the home stay. The usefulness of PA during the preoperative period remains largely unknown, but emerging preclinical data on mice bearing intramuscular sarcoma are most likely discouraging. However, efforts are still needed to elucidate in depth the impact of PA before surgery completion. PA should be age-, sex-, and treatment-adapted, as young/adolescent patients, women and patients receiving platinum-based chemotherapy are more susceptible to deterioration of physical quality. Concerning PA intensity, the practice of moderate-intensity resistance and endurance exercises (30–60 min/day) is safe after surgery, even when receiving adjuvant chemo/radiotherapy. The general PA recommendations for cancer patients, 150 min/week of combined moderate-intensity endurance/resistance exercises, could be feasible after 18–24 months of rehabilitation. We believe that these suggestions will help clinicians to design a low-risk and useful PA program.

  16. Dual-Energy Computed Tomography Angiography of the Lower Extremity Runoff: Impact of Noise-Optimized Virtual Monochromatic Imaging on Image Quality and Diagnostic Accuracy.

    Science.gov (United States)

    Wichmann, Julian L; Gillott, Matthew R; De Cecco, Carlo N; Mangold, Stefanie; Varga-Szemes, Akos; Yamada, Ricardo; Otani, Katharina; Canstein, Christian; Fuller, Stephen R; Vogl, Thomas J; Todoran, Thomas M; Schoepf, U Joseph

    2016-02-01

    The aim of this study was to evaluate the impact of a noise-optimized virtual monochromatic imaging algorithm (VMI+) on image quality and diagnostic accuracy at dual-energy computed tomography angiography (CTA) of the lower extremity runoff. This retrospective Health Insurance Portability and Accountability Act-compliant study was approved by the local institutional review board. We evaluated dual-energy CTA studies of the lower extremity runoff in 48 patients (16 women; mean age, 63.3 ± 13.8 years) performed on a third-generation dual-source CT system. Images were reconstructed with standard linear blending (F_0.5), VMI+, and traditional monochromatic (VMI) algorithms at 40 to 120 keV in 10-keV intervals. Vascular attenuation and image noise in 18 artery segments were measured; signal-to-noise ratio (SNR) and contrast-to-noise ratio (CNR) were calculated. Five-point scales were used to subjectively evaluate vascular attenuation and image noise. In a subgroup of 21 patients who underwent additional invasive catheter angiography, diagnostic accuracy for the detection of significant stenosis (≥50% lumen restriction) of F_0.5, 50-keV VMI+, and 60-keV VMI data sets were assessed. Objective image quality metrics were highest in the 40- and 50-keV VMI+ series (SNR: 20.2 ± 10.7 and 19.0 ± 9.5, respectively; CNR: 18.5 ± 10.3 and 16.8 ± 9.1, respectively) and were significantly (all P traditional VMI technique and standard linear blending for evaluation of the lower extremity runoff using dual-energy CTA.

  17. Improving real-time estimation of heavy-to-extreme precipitation using rain gauge data via conditional bias-penalized optimal estimation

    Science.gov (United States)

    Seo, Dong-Jun; Siddique, Ridwan; Zhang, Yu; Kim, Dongsoo

    2014-11-01

    A new technique for gauge-only precipitation analysis for improved estimation of heavy-to-extreme precipitation is described and evaluated. The technique is based on a novel extension of classical optimal linear estimation theory in which, in addition to error variance, Type-II conditional bias (CB) is explicitly minimized. When cast in the form of well-known kriging, the methodology yields a new kriging estimator, referred to as CB-penalized kriging (CBPK). CBPK, however, tends to yield negative estimates in areas of no or light precipitation. To address this, an extension of CBPK, referred to herein as extended conditional bias penalized kriging (ECBPK), has been developed which combines the CBPK estimate with a trivial estimate of zero precipitation. To evaluate ECBPK, we carried out real-world and synthetic experiments in which ECBPK and the gauge-only precipitation analysis procedure used in the NWS's Multisensor Precipitation Estimator (MPE) were compared for estimation of point precipitation and mean areal precipitation (MAP), respectively. The results indicate that ECBPK improves hourly gauge-only estimation of heavy-to-extreme precipitation significantly. The improvement is particularly large for estimation of MAP for a range of combinations of basin size and rain gauge network density. This paper describes the technique, summarizes the results and shares ideas for future research.
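
    For context, the snippet below implements classical ordinary kriging, the baseline estimator that CB-penalized kriging extends, for a synthetic rain gauge field with an assumed exponential covariance; it does not implement the CBPK or ECBPK estimators themselves.

```python
# Sketch: ordinary kriging of a synthetic rain gauge field (assumed covariance).
import numpy as np

def exp_cov(h, sill=1.0, corr_range=20.0):
    return sill * np.exp(-h / corr_range)             # exponential covariance model

def ordinary_kriging(obs_xy, obs_z, target_xy):
    n = len(obs_z)
    d = np.linalg.norm(obs_xy[:, None, :] - obs_xy[None, :, :], axis=-1)
    A = np.zeros((n + 1, n + 1))
    A[:n, :n] = exp_cov(d)
    A[:n, n] = A[n, :n] = 1.0                          # unbiasedness (Lagrange) row/column
    b = np.zeros(n + 1)
    b[:n] = exp_cov(np.linalg.norm(obs_xy - target_xy, axis=1))
    b[n] = 1.0
    w = np.linalg.solve(A, b)[:n]                      # kriging weights
    return w @ obs_z

rng = np.random.default_rng(7)
gauges = rng.uniform(0, 100, size=(25, 2))            # gauge coordinates (km), synthetic
rain = np.exp(-np.linalg.norm(gauges - 50, axis=1) / 30) * 20   # mm, synthetic storm field
estimate = ordinary_kriging(gauges, rain, np.array([50.0, 50.0]))
print("kriged estimate at (50, 50):", round(estimate, 2))
```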

  18. Adapting crop management practices to climate change: Modeling optimal solutions at the field scale

    NARCIS (Netherlands)

    Lehmann, N.; Finger, R.; Klein, T.; Calanca, P.; Walter, A.

    2013-01-01

    Climate change will alter the environmental conditions for crop growth and require adjustments in management practices at the field scale. In this paper, we analyzed the impacts of two different climate change scenarios on optimal field management practices in winter wheat and grain maize production.

  19. Tax solutions for optimal reduction of tobacco use in West Africa ...

    International Development Research Centre (IDRC) Digital Library (Canada)

    This study will estimate the costs of tobacco use in Senegal and show, for the first time, the extent of the chronic morbidity and the economic costs associated with tobacco use in a West African country. In addition, data on tobacco demand in Senegal and Nigeria (collected in 2015) will be used to determine the optimal ...

  20. Optimization of adaptive radiation therapy in cervical cancer: Solutions for photon and proton therapy

    NARCIS (Netherlands)

    van de Schoot, A.J.A.J.

    2016-01-01

    In cervical cancer radiation therapy, an adaptive strategy is required to compensate for interfraction anatomical variations in order to achieve adequate dose delivery. In this thesis, we have aimed at optimizing adaptive radiation therapy in cervical cancer to improve treatment efficiency and

  1. Enzyme allocation problems in kinetic metabolic networks: Optimal solutions are elementary flux modes

    Czech Academy of Sciences Publication Activity Database

    Müller, Stefan; Regensburger, G.; Steuer, Ralf

    2014-01-01

    Roč. 347, APR 2014 (2014), s. 182-190 ISSN 0022-5193 R&D Projects: GA MŠk(CZ) EE2.3.20.0256 Institutional support: RVO:67179843 Keywords : metabolic optimization * enzyme kinetics * oriented matroid * elementary vector * conformal sum Subject RIV: EI - Biotechnology ; Bionics Impact factor: 2.116, year: 2014

  2. Finding a Pareto-optimal solution for multi-region models subject to capital trade and spillover externalities

    Energy Technology Data Exchange (ETDEWEB)

    Leimbach, Marian [Potsdam-Institut fuer Klimafolgenforschung e.V., Potsdam (Germany); Eisenack, Klaus [Oldenburg Univ. (Germany). Dept. of Economics and Statistics

    2008-11-15

    In this paper we present an algorithm that deals with trade interactions within a multi-region model. In contrast to traditional approaches this algorithm is able to handle spillover externalities. Technological spillovers are expected to foster the diffusion of new technologies, which helps to lower the cost of climate change mitigation. We focus on technological spillovers which are due to capital trade. The algorithm for finding a Pareto-optimal solution in an intertemporal framework is embedded in a decomposed optimization process. The paper analyzes convergence and equilibrium properties of this algorithm. In the final part of the paper, we apply the algorithm to investigate possible impacts of technological spillovers. While benefits of technological spillovers are significant for the capital-importing region, benefits for the capital-exporting region depend on the type of regional disparities and the resulting specialization and terms-of-trade effects. (orig.)

  3. Optimization of membrane stack configuration for efficient hydrogen production in microbial reverse-electrodialysis electrolysis cells coupled with thermolytic solutions

    KAUST Repository

    Luo, Xi

    2013-07-01

    Waste heat can be captured as electrical energy to drive hydrogen evolution in microbial reverse-electrodialysis electrolysis cells (MRECs) by using thermolytic solutions such as ammonium bicarbonate. To determine the optimal membrane stack configuration for efficient hydrogen production in MRECs using ammonium bicarbonate solutions, different numbers of cell pairs and stack arrangements were tested. The optimum number of cell pairs was determined to be five based on MREC performance and a desire to minimize capital costs. The stack arrangement was altered by placing an extra low concentration chamber adjacent to the anode chamber to reduce ammonia crossover. This additional chamber decreased ammonia nitrogen losses into the anolyte by 60%, increased the coulombic efficiency to 83%, and improved the hydrogen yield to a maximum of 3.5 mol H2/mol acetate, with an overall energy efficiency of 27%. These results improve the MREC process, making it a more efficient method for renewable hydrogen gas production. © 2013 Elsevier Ltd.

  4. Mehar Methods for Fuzzy Optimal Solution and Sensitivity Analysis of Fuzzy Linear Programming with Symmetric Trapezoidal Fuzzy Numbers

    Directory of Open Access Journals (Sweden)

    Sukhpreet Kaur Sidhu

    2014-01-01

    The drawbacks of the existing methods for obtaining the fuzzy optimal solution of linear programming problems, in which the coefficients of the constraints are represented by real numbers and all the other parameters as well as the variables are represented by symmetric trapezoidal fuzzy numbers, are pointed out, and to resolve these drawbacks, a new method (named the Mehar method) is proposed for the same linear programming problems. Also, with the help of the proposed Mehar method, a new method, much easier than the existing methods, is proposed to deal with the sensitivity analysis of the same type of linear programming problems.

  5. The Solution of Two-Phase Inverse Stefan Problem Based on a Hybrid Method with Optimization

    Directory of Open Access Journals (Sweden)

    Yang Yu

    2015-01-01

    The two-phase Stefan problem is widely used in industrial fields. This paper focuses on solving the two-phase inverse Stefan problem when the moving interface is unknown, which is more realistic from the practical point of view. With the help of an optimization method, the paper presents a hybrid method which combines the homotopy perturbation method with the improved Adomian decomposition method to solve this problem. A simulation experiment demonstrates the validity of this method. The optimization method plays a very important role in this paper, so we propose a modified spectral DY conjugate gradient method, and the convergence of this method is given. A simulation experiment illustrates the effectiveness of this modified spectral DY conjugate gradient method.

  6. A Direct Algorithm Maple Package of One-Dimensional Optimal System for Group Invariant Solutions

    Science.gov (United States)

    Zhang, Lin; Han, Zhong; Chen, Yong

    2018-01-01

    To construct the one-dimensional optimal system of a finite dimensional Lie algebra automatically, we develop a new Maple package, One Optimal System. Meanwhile, we propose a new method to calculate the adjoint transformation matrix and find all the invariants of the Lie algebra in spite of the Killing form checking possible constraints of each classification. Besides, a new concept called the invariance set is introduced. Moreover, this Maple package is shown to be more efficient and precise than previous approaches by applying it to some classic examples. Supported by the Global Change Research Program of China under Grant No. 2015CB95390, National Natural Science Foundation of China under Grant Nos. 11675054 and 11435005, and Shanghai Collaborative Innovation Center of Trustworthy Software for Internet of Things under Grant No. ZF1213

  7. The continuous 1.5D terrain guarding problem: Discretization, optimal solutions, and PTAS

    Directory of Open Access Journals (Sweden)

    Stephan Friedrichs

    2016-05-01

    In the NP-hard continuous 1.5D Terrain Guarding Problem (TGP) we are given an $x$-monotone chain of line segments in $R^2$ (the terrain $T$), and ask for the minimum number of guards (located anywhere on $T$) required to guard all of $T$. We construct guard candidate and witness sets $G, W \subset T$ of polynomial size such that any feasible (optimal) guard cover $G^* \subseteq G$ for $W$ is also feasible (optimal) for the continuous TGP. This discretization allows us to: (1) settle NP-completeness for the continuous TGP; (2) provide a Polynomial Time Approximation Scheme (PTAS) for the continuous TGP using the PTAS for the discrete TGP by Gibson et al.; (3) formulate the continuous TGP as an Integer Linear Program (IP). Furthermore, we propose several filtering techniques reducing the size of our discretization, allowing us to devise an efficient IP-based algorithm that reliably provides optimal guard placements for terrains with up to $10^6$ vertices within minutes on a standard desktop computer.
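
    As a sketch of the Integer Linear Program mentioned above, the discretized problem takes the standard set-cover form over the finite candidate set $G$ and witness set $W$ (the notation here is illustrative, not taken from the paper): $\min \sum_{g \in G} x_g$ subject to $\sum_{g \in G \,:\, g \text{ sees } w} x_g \ge 1$ for all $w \in W$, with $x_g \in \{0,1\}$.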

  8. Analytical development and optimization of a graphene–solution interface capacitance model

    Directory of Open Access Journals (Sweden)

    Hediyeh Karimi

    2014-05-01

    Graphene, a new carbon material that shows great potential for a range of applications because of its exceptional electronic and mechanical properties, has attracted much attention in recent years. The use of graphene in nanoscale devices plays an important role in achieving more accurate and faster devices. Although there are many experimental studies in this area, there is a lack of analytical models. Our focus is on quantum capacitance, one of the important properties of field effect transistors (FETs). The quantum capacitance of electrolyte-gated transistors (EGFETs), along with a relevant equivalent circuit, is proposed in terms of Fermi velocity, carrier density, and fundamental physical quantities. The analytical model is compared with the experimental data and the mean absolute percentage error (MAPE) is calculated to be 11.82. In order to decrease the error, a new function of E composed of α and β parameters is suggested. In another attempt, the ant colony optimization (ACO) algorithm is implemented for optimization and development of an analytical model to obtain a more accurate capacitance model. Based on the given results, the accuracy of the optimized model is more than 97%, which is within an acceptable range.

  9. Optimized Hypernetted-Chain Solutions for Helium-4 Surfaces and Metal Surfaces

    Science.gov (United States)

    Qian, Guo-Xin

    This thesis is a study of inhomogeneous Bose systems such as liquid 4He slabs and inhomogeneous Fermi systems such as the electron gas in metal films, at zero temperature. Using a Jastrow-type many-body wavefunction, the ground state energy is expressed by means of Bogoliubov-Born-Green-Kirkwood-Yvon and Hypernetted-Chain techniques. For Bose systems, Euler-Lagrange equations are derived for the one- and two -body functions and systematic approximation methods are physically motivated. It is shown that the optimized variational method includes a self-consistent summation of ladder- and ring-diagrams of conventional many-body theory. For Fermi systems, a linear potential model is adopted to generate the optimized Hartree-Fock basis. Euler-Lagrange equations are derived for the two-body correlations which serve to screen the strong bare Coulomb interaction. The optimization of the pair correlation leads to an expression of correlation energy in which the state averaged RPA part is separated. Numerical applications are presented for the density profile and pair distribution function for both 4He surfaces and metal surfaces. Both the bulk and surface energies are calculated in good agreement with experiments.

  10. Iterative solution to the optimal control of depletion problem in pressurized water reactors

    International Nuclear Information System (INIS)

    Colletti, J.P.

    1981-01-01

    A method is described for determining the optimal time and spatial dependence of control absorbers in the core of a pressurized water reactor over a single refueling cycle. The reactor is modeled in two dimensions with many regions using two-group diffusion theory. The problem is formulated as an optimal control problem with the cycle length fixed and the initial reactor state known. Constraints are placed on the regionwise normalized powers, control absorber concentrations, and the critical soluble boron concentration of the core. The cost functional contains two terms which may be used individually or together. One term maximizes the end-of-cycle (EOC) critical soluble boron concentration, and the other minimizes the norm of the distance between the actual and a target EOC burnup distribution. Results are given for several test problems which are based on a three-region model of the Three Mile Island Unit 1 reactor. The resulting optimal control strategies are bang-bang and lead to EOC states with the power peaking at its maximum and no control absorbers remaining in the core. Throughout the cycle the core soluble boron concentration is zero.

  11. Solution to automatic generation control problem using firefly algorithm optimized I(λ)D(µ) controller.

    Science.gov (United States)

    Debbarma, Sanjoy; Saikia, Lalit Chandra; Sinha, Nidul

    2014-03-01

    The present work focused on automatic generation control (AGC) of three unequal-area thermal systems considering reheat turbines and appropriate generation rate constraints (GRC). A fractional order (FO) controller named the I(λ)D(µ) controller, based on the CRONE approximation, is proposed for the first time as an appropriate technique to solve the multi-area AGC problem in power systems. A recently developed metaheuristic algorithm known as the firefly algorithm (FA) is used for the simultaneous optimization of the gains and other parameters such as the order of the integrator (λ) and differentiator (μ) of the I(λ)D(µ) controller and the governor speed regulation parameters (R). The dynamic responses corresponding to the optimized I(λ)D(µ) controller gains, λ, μ, and R are compared with those of classical integer order (IO) controllers such as I, PI and PID controllers. Simulation results show that the proposed I(λ)D(µ) controller provides improved dynamic responses and outperforms the IO based classical controllers. Further, sensitivity analysis confirms the robustness of the optimized I(λ)D(µ) controller to wide changes in system loading conditions and in the size and position of the step load perturbation (SLP). The proposed controller is also found to perform well, compared to the IO based controllers, when the SLP takes place simultaneously in any two areas or in all the areas. Robustness of the proposed I(λ)D(µ) controller is also tested against system parameter variations. © 2013 ISA. Published by Elsevier Ltd. All rights reserved.
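
    For reference, a fractional-order I(λ)D(µ) controller of the type tuned above is commonly written in the Laplace domain (generic textbook form; $K_I$ and $K_D$ are the tuned gains, and λ and µ are the non-integer orders optimized together with them) as $C(s) = \dfrac{K_I}{s^{\lambda}} + K_D\, s^{\mu}$.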

  12. Enhanced Performance of PbS-quantum-dot-sensitized Solar Cells via Optimizing Precursor Solution and Electrolytes

    Science.gov (United States)

    Tian, Jianjun; Shen, Ting; Liu, Xiaoguang; Fei, Chengbin; Lv, Lili; Cao, Guozhong

    2016-03-01

    This work reports a PbS-quantum-dot-sensitized solar cell (QDSC) with a power conversion efficiency (PCE) of 4%. PbS quantum dots (QDs) were grown on a mesoporous TiO2 film using a successive ion layer absorption and reaction (SILAR) method. The growth of QDs was found to be profoundly affected by the concentration of the precursor solution. At low concentrations, the rate-limiting factor of the crystal growth was the adsorption of the precursor ions, while the surface growth of the crystal became the limiting factor in the high concentration solution. The optimal concentration of precursor solution with respect to the quantity and size of synthesized QDs was 0.06 M. To further increase the performance of QDSCs, 30% of the deionized water in the polysulfide electrolyte was replaced with methanol to improve the wettability and permeability of the electrolyte in the TiO2 film, which accelerated the redox couple diffusion in the electrolyte solution and improved charge transfer at the interfaces between photoanodes and electrolytes. The stability of PbS QDs in the electrolyte was also improved by methanol, reducing the charge recombination and prolonging the electron lifetime. As a result, the PCE of the QDSC was increased to 4.01%.

  13. The Development and Full-Scale Experimental Validation of an Optimal Water Treatment Solution in Improving Chiller Performances

    Directory of Open Access Journals (Sweden)

    Chen-Yu Chiang

    2016-06-01

    An optimal solution, combining physical and chemical water treatment methods, has been developed. This method uses high voltage capacitance based (HVCB) electrodes, coupled with biocides, to form a sustainable solution for improving chiller plant performance. In this study, full-scale industrial tests, instead of laboratory tests, were conducted on chiller plants with cooling capacities of 5000 RT to 10,000 RT under commercial operation for more than two years. The experimental results indicated that the condenser approach temperatures can be maintained below 1 °C for over two years. It has been validated that the coefficient of performance (COP) of a chiller can be improved by over 5% by implementing this solution. Every 1 °C reduction in condenser approach temperature can yield approximately a 3% increase in chiller COP, which warrants its future application potential in the HVAC industry, where the approach temperature can degrade by 1 °C every three to six months. The solution developed in this study could also reduce chemical dosages and conserve makeup water substantially, and is more environmentally friendly.

  14. Parallelizing ATLAS Reconstruction and Simulation: Issues and Optimization Solutions for Scaling on Multi- and Many-CPU Platforms

    International Nuclear Information System (INIS)

    Leggett, C; Jackson, K; Tatarkhanov, M; Yao, Y; Binet, S; Levinthal, D

    2011-01-01

    Thermal limitations have forced CPU manufacturers to shift from simply increasing clock speeds to improve processor performance, to producing chip designs with multi- and many-core architectures. Further, the cores themselves can run multiple threads with a zero overhead context switch allowing low level resource sharing (Intel Hyperthreading). To maximize bandwidth and minimize memory latency, memory access has become non uniform (NUMA). As manufacturers add more cores to each chip, a careful understanding of the underlying architecture is required in order to fully utilize the available resources. We present AthenaMP and the Atlas event loop manager, the driver of the simulation and reconstruction engines, which have been rewritten to make use of multiple cores, by means of event based parallelism, and final stage I/O synchronization. However, initial studies on 8 and 16 core Intel architectures have shown marked non-linearities as parallel process counts increase, with as much as 30% reductions in event throughput in some scenarios. Since the Intel Nehalem architecture (both Gainestown and Westmere) will be the most common choice for the next round of hardware procurements, an understanding of these scaling issues is essential. Using hardware based event counters and Intel's Performance Tuning Utility, we have studied the performance bottlenecks at the hardware level, and discovered optimization schemes to maximize processor throughput. We have also produced optimization mechanisms, common to all large experiments, that address the extreme nature of today's HEP code, which, due to its size, places huge burdens on the memory infrastructure of today's processors.

  15. Exact Solution of a Constraint Optimization Problem for the Thermoelectric Figure of Merit

    Directory of Open Access Journals (Sweden)

    Wolfgang Seifert

    2012-03-01

    In the classical theory of thermoelectricity, the performance integrals for a fully self-compatible material depend on the dimensionless figure of merit zT. Usually these integrals are evaluated for constraints z = const. and zT = const., respectively. In this paper we discuss the question from a mathematical point of view whether there is an optimal temperature characteristic of the figure of merit. We solve this isoperimetric variational problem for the best envelope of a family of curves z(T)T.
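
    For context, the dimensionless figure of merit discussed above is conventionally built from the Seebeck coefficient $S$, the electrical conductivity $\sigma$, the thermal conductivity $\kappa$ and the absolute temperature $T$ as $zT = \dfrac{S^{2}\sigma}{\kappa}\,T$.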

  16. Exact Solution of a Constraint Optimization Problem for the Thermoelectric Figure of Merit.

    Science.gov (United States)

    Seifert, Wolfgang; Pluschke, Volker

    2012-03-21

    In the classical theory of thermoelectricity, the performance integrals for a fully self-compatible material depend on the dimensionless figure of merit zT. Usually these integrals are evaluated for constraints z = const. and zT = const., respectively. In this paper we discuss the question from a mathematical point of view whether there is an optimal temperature characteristic of the figure of merit. We solve this isoperimetric variational problem for the best envelope of a family of curves z(T)T.

  17. An energy-optimal solution for transportation control of cranes with double pendulum dynamics: Design and experiments

    Science.gov (United States)

    Sun, Ning; Wu, Yiming; Chen, He; Fang, Yongchun

    2018-03-01

    Underactuated cranes play an important role in modern industry. Specifically, in most situations of practical applications, crane systems exhibit significant double pendulum characteristics, which makes the control problem quite challenging. Moreover, most existing planners/controllers obtained with standard methods/techniques for double pendulum cranes cannot minimize the energy consumption when fulfilling the transportation tasks. Therefore, from a practical perspective, this paper proposes an energy-optimal solution for transportation control of double pendulum cranes. By applying the presented approach, the transportation objective, including fast trolley positioning and swing elimination, is achieved with minimized energy consumption, and the residual oscillations are suppressed effectively with all the state constraints being satisfied during the entire transportation process. As far as we know, this is the first energy-optimal solution for transportation control of underactuated double pendulum cranes with various state and control constraints. Hardware experimental results are included to verify the effectiveness of the proposed approach, whose superior performance is reflected by being experimentally compared with some comparative controllers.

  18. On the validity of the arithmetic-geometric mean method to locate the optimal solution in a supply chain system

    Science.gov (United States)

    Chung, Kun-Jen

    2012-08-01

    Cardenas-Barron [Cardenas-Barron, L.E. (2010) 'A Simple Method to Compute Economic order Quantities: Some Observations', Applied Mathematical Modelling, 34, 1684-1688] indicates that there are several functions in which the arithmetic-geometric mean method (AGM) does not give the minimum. This article presents another situation to reveal that the AGM inequality to locate the optimal solution may be invalid for Teng, Chen, and Goyal [Teng, J.T., Chen, J., and Goyal S.K. (2009), 'A Comprehensive Note on: An Inventory Model under Two Levels of Trade Credit and Limited Storage Space Derived without Derivatives', Applied Mathematical Modelling, 33, 4388-4396], Teng and Goyal [Teng, J.T., and Goyal S.K. (2009), 'Comment on 'Optimal Inventory Replenishment Policy for the EPQ Model under Trade Credit Derived without Derivatives', International Journal of Systems Science, 40, 1095-1098] and Hsieh, Chang, Weng, and Dye [Hsieh, T.P., Chang, H.J., Weng, M.W., and Dye, C.Y. (2008), 'A Simple Approach to an Integrated Single-vendor Single-buyer Inventory System with Shortage', Production Planning and Control, 19, 601-604]. So, the main purpose of this article is to adopt the calculus approach not only to overcome shortcomings of the arithmetic-geometric mean method of Teng et al. (2009), Teng and Goyal (2009) and Hsieh et al. (2008), but also to develop the complete solution procedures for them.
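
    As a one-line illustration of the arithmetic-geometric mean argument at issue above, for a generic EOQ-type cost function with illustrative constants $A, B > 0$, $TC(Q) = \dfrac{A}{Q} + BQ \ge 2\sqrt{AB}$, with equality exactly when $A/Q = BQ$, i.e. $Q^{*} = \sqrt{A/B}$. When the objective cannot be split into such a pair of terms whose product is constant, the AGM bound need not be attained at any feasible point, which is what motivates the calculus-based treatment adopted in the article.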

  19. Mercury Removal From Aqueous Solutions With Chitosan-Coated Magnetite Nanoparticles Optimized Using the Box-Behnken Design

    Science.gov (United States)

    Rahbar, Nadereh; Jahangiri, Alireza; Boumi, Shahin; Khodayar, Mohammad Javad

    2014-01-01

    Background: Nowadays, removal of heavy metals from the environment is an important problem due to their toxicity. Objectives: In this study, a modified method was used to synthesize chitosan-coated magnetite nanoparticles (CCMN) to be used as a low-cost and nontoxic adsorbent. CCMN was then employed to remove Hg2+ from water solutions. Materials and Methods: To remove the highest percentage of mercury ions, the Box-Behnken model of response surface methodology (RSM) was applied to simultaneously optimize all parameters affecting the adsorption process. Studied parameters of the process were pH (5-8), initial metal concentration (2-8 mg/L), and the amount of damped adsorbent (0.25-0.75 g). A second-order mathematical model was developed using regression analysis of experimental data obtained from 15 batch runs. Results: The optimal conditions predicted by the model were pH = 5, initial concentration of mercury ions = 6.2 mg/L, and the amount of damped adsorbent = 0.67 g. Confirmatory testing was performed and the maximum percentage of Hg2+ removed was found to be 99.91%. Kinetic studies of the adsorption process indicated that it followed the pseudo-second-order kinetic model. The adsorption isotherm was well-fitted to both the Langmuir and Freundlich models. Conclusions: CCMN as an excellent adsorbent could remove the mercury ions from water solutions at low and moderate concentrations, which is the usual amount found in the environment. PMID:24872943
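
    As a rough illustration of how a Box-Behnken response-surface model of the kind used above can be fitted, the following sketch runs a least-squares fit of a full quadratic model on a 15-run, three-factor design; all factor codings and response values are synthetic and purely hypothetical.

      import numpy as np

      # Hypothetical coded factors: x1 = pH, x2 = initial Hg(II) concentration, x3 = adsorbent dose
      X = np.array([
          [-1, -1,  0], [ 1, -1,  0], [-1,  1,  0], [ 1,  1,  0],
          [-1,  0, -1], [ 1,  0, -1], [-1,  0,  1], [ 1,  0,  1],
          [ 0, -1, -1], [ 0,  1, -1], [ 0, -1,  1], [ 0,  1,  1],
          [ 0,  0,  0], [ 0,  0,  0], [ 0,  0,  0],
      ])  # 15-run Box-Behnken layout for three factors (12 edge points + 3 center points)
      y = np.array([90.1, 85.3, 92.4, 88.0, 87.5, 83.9, 95.2, 91.7,
                    86.8, 89.9, 93.0, 94.1, 99.2, 99.0, 99.4])  # removal %, synthetic

      def design_matrix(X):
          """Second-order model: intercept, linear, two-factor interaction and squared terms."""
          x1, x2, x3 = X.T
          return np.column_stack([np.ones(len(X)), x1, x2, x3,
                                  x1 * x2, x1 * x3, x2 * x3,
                                  x1 ** 2, x2 ** 2, x3 ** 2])

      beta, *_ = np.linalg.lstsq(design_matrix(X), y, rcond=None)
      print("fitted coefficients:", np.round(beta, 3))  # the fitted surface is then optimized over the factor ranges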

  20. Statistically optimal estimation of Greenland Ice Sheet mass variations from GRACE monthly solutions using an improved mascon approach

    Science.gov (United States)

    Ran, J.; Ditmar, P.; Klees, R.; Farahani, H. H.

    2018-03-01

    We present an improved mascon approach to transform monthly spherical harmonic solutions based on GRACE satellite data into mass anomaly estimates in Greenland. The GRACE-based spherical harmonic coefficients are used to synthesize gravity anomalies at satellite altitude, which are then inverted into mass anomalies per mascon. The limited spectral content of the gravity anomalies is properly accounted for by applying a low-pass filter as part of the inversion procedure to make the functional model spectrally consistent with the data. The full error covariance matrices of the monthly GRACE solutions are properly propagated using the law of covariance propagation. Using numerical experiments, we demonstrate the importance of a proper data weighting and of the spectral consistency between functional model and data. The developed methodology is applied to process real GRACE level-2 data (CSR RL05). The obtained mass anomaly estimates are integrated over five drainage systems, as well as over entire Greenland. We find that the statistically optimal data weighting reduces random noise by 35-69%, depending on the drainage system. The obtained mass anomaly time-series are de-trended to eliminate the contribution of ice discharge and are compared with de-trended surface mass balance (SMB) time-series computed with the Regional Atmospheric Climate Model (RACMO 2.3). We show that when using a statistically optimal data weighting in GRACE data processing, the discrepancies between GRACE-based estimates of SMB and modelled SMB are reduced by 24-47%.
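
    A bare-bones sketch of the statistically optimal (generalized least-squares) weighting described above: a design matrix A maps mascon mass anomalies to synthesized gravity anomalies d, and the error covariance C of the monthly solution is propagated into the inversion. All arrays below are synthetic placeholders, not GRACE data.

      import numpy as np

      rng = np.random.default_rng(0)
      n_obs, n_mascons = 200, 12                       # synthetic problem size
      A = rng.normal(size=(n_obs, n_mascons))          # hypothetical design matrix
      C = np.diag(rng.uniform(0.5, 2.0, size=n_obs))   # hypothetical error covariance (diagonal here)
      x_true = rng.normal(size=n_mascons)
      d = A @ x_true + rng.multivariate_normal(np.zeros(n_obs), C)

      # Generalized least squares: x_hat = (A^T C^-1 A)^-1 A^T C^-1 d
      Ci = np.linalg.inv(C)
      N = A.T @ Ci @ A                                 # normal matrix
      x_hat = np.linalg.solve(N, A.T @ Ci @ d)
      cov_x = np.linalg.inv(N)                         # formal covariance of the mascon estimates
      print(np.round(x_hat - x_true, 3))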

  1. Optimization of fuel cells for BWR using Path Relinking and flexible solution strategies

    International Nuclear Information System (INIS)

    Castillo M, J. A.; Ortiz S, J. J.; Torres V, M.; Perusquia del Cueto, R.

    2009-10-01

    This work presents preliminary results on the design of nuclear fuel cells for boiling water reactors (BWR) using new strategies. To carry out the cell design, some of the rules used in fuel management were discarded and others were implemented. This was done with the idea of making a comparative analysis between the usual rules and those implemented here, under the hypothesis that it is possible to design nuclear fuel cells without using all of the usual rules while still meeting the safety restrictions imposed in these cases. To evaluate the quality of the obtained cells, the power peaking factor and the infinite multiplication factor were taken into account; the CASMO-4 code was used to evaluate the proposed configurations and to obtain these parameters. To optimize the design, the combinatorial optimization technique known as Path Relinking is used, with Dispersed Search (Scatter Search) as the local search method. The preliminary results show that it is possible to implement new strategies for nuclear fuel cell design following new rules. (Author)

  2. Process optimization for inkjet printing of triisopropylsilylethynyl pentacene with single-solvent solutions

    Energy Technology Data Exchange (ETDEWEB)

    Wang, Xianghua, E-mail: xhwang@hfut.edu.cn [Key Lab of Special Display Technology, Ministry of Education, National Engineering Lab of Special Display Technology, National Key Lab of Advanced Display Technology, Academy of Opto-Electronic Technology, Hefei University of Technology, Hefei 230009 (China); Yuan, Miao [Key Lab of Special Display Technology, Ministry of Education, National Engineering Lab of Special Display Technology, National Key Lab of Advanced Display Technology, Academy of Opto-Electronic Technology, Hefei University of Technology, Hefei 230009 (China); School of Electronic Science & Applied Physics, Hefei University of Technology, Hefei 230009 (China); Xiong, Xianfeng; Chen, Mengjie [Key Lab of Special Display Technology, Ministry of Education, National Engineering Lab of Special Display Technology, National Key Lab of Advanced Display Technology, Academy of Opto-Electronic Technology, Hefei University of Technology, Hefei 230009 (China); Qin, Mengzhi [Key Lab of Special Display Technology, Ministry of Education, National Engineering Lab of Special Display Technology, National Key Lab of Advanced Display Technology, Academy of Opto-Electronic Technology, Hefei University of Technology, Hefei 230009 (China); School of Electronic Science & Applied Physics, Hefei University of Technology, Hefei 230009 (China); Qiu, Longzhen; Lu, Hongbo; Zhang, Guobing; Lv, Guoqiang [Key Lab of Special Display Technology, Ministry of Education, National Engineering Lab of Special Display Technology, National Key Lab of Advanced Display Technology, Academy of Opto-Electronic Technology, Hefei University of Technology, Hefei 230009 (China); Choi, Anthony H.W. [Department of Electrical and Electronic Engineering, The University of Hong Kong, Hong Kong (China)

    2015-03-02

    Inkjet printing of 6,13-bis(triisopropylsilylethynyl) pentacene (TIPS-PEN), a small molecule organic semiconductor, is performed on two types of substrates. Hydrophilic SiO2 substrates prepared by a combination of surface treatments lead to either a smaller size or a coffee-ring profile of the single-drop film. A hydrophobic surface with dominant dispersive component of surface energy such as that of a spin-coated poly(4-vinylphenol) film favors profile formation with uniform thickness of the printed semiconductor owing to the strong dispersion force between the semiconductor molecules and the hydrophobic surface of the substrate. With a hydrophobic dielectric as the substrate and via a properly selected solvent, high quality TIPS-PEN films were printed at a very low substrate temperature of 35 °C. Saturated field-effect mobility measured with top-contact thin-film transistor structure shows a narrow distribution and a maximum of 0.78 cm2 V−1 s−1, which confirmed the film growth on the hydrophobic substrate with increased crystal coverage and continuity under the optimized process condition. - Highlights: • Hydrophobic substrates were employed to inhibit the coffee-ring effect. • Contact-line pinning is primarily controlled by the dispersion force. • Solvent selection is critical to crystal coverage of the printed film. • High performance and uniformity are achieved by process optimization.

  3. Optimization of uranyl ions removal from aqueous solution by natural and modified kaolinites

    Energy Technology Data Exchange (ETDEWEB)

    Elhefnawy, O.A.; Elabd, A.A. [Nuclear and Radiological Regulatory Authority (NRRA), Cairo (Egypt). Nuclear Safeguards and Physical Protection Dept.

    2017-10-01

    The paper addresses modifications of the most common mineral clay, kaolinite, for U(VI) removal from aqueous solutions. A new modified Egyptian natural kaolinite (Ca-MK) was prepared by coating kaolinite with calcium oxide. Another modification was obtained by calcination and acid activation of kaolinite (E-MK). The Egyptian natural kaolinite (E-NK) and the two modified kaolinites were characterized by different techniques (SEM, EDX, XRD, and FTIR). The removal process was investigated in batch experiments as a function of pH, contact time, initial U(VI) concentration, and temperature, and the recovery of U(VI) was studied. The equilibrium stage was reached after 60 min and the kinetic data were described well by a pseudo-second-order model. The isothermal data were better described by the Langmuir isotherm model, indicating a homogeneous removal process. The removal process was also studied at different temperatures (293, 313, and 323 K). The thermodynamic parameters ΔH, ΔS, and ΔG were calculated. The thermodynamic results pointed to the endothermic and favorable nature of the U(VI) removal process for the three kaolinite adsorbents. This study indicated that Ca-MK has a higher CEC and can be used as a new adsorbent for highly efficient removal of U(VI) from aqueous solutions.
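
    For reference, the two models invoked above are conventionally written as follows, where $q_e$ is the equilibrium uptake, $C_e$ the equilibrium concentration, $q_m$ and $K_L$ the Langmuir parameters, $q_t$ the uptake at time $t$ and $k_2$ the pseudo-second-order rate constant: $q_e = \dfrac{q_m K_L C_e}{1 + K_L C_e}$ and $\dfrac{t}{q_t} = \dfrac{1}{k_2 q_e^{2}} + \dfrac{t}{q_e}$.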

  4. Bioremediation of zirconium from aqueous solution by coriolus versicolor: process optimization

    International Nuclear Information System (INIS)

    Amin, M.; Bhatti, H. N.; Sadaf, S.

    2013-01-01

    In the present study the potential of live mycelia of Coriolus versicolor was explored for the removal of zirconium from a simulated aqueous solution. Optimum experimental parameters for the bioremediation of zirconium using C. versicolor biomass were investigated by studying the effect of mycelia dose, concentration of zirconium, contact time and temperature. The isothermal studies indicated that the ongoing bioremediation process was exothermic in nature and obeyed the Langmuir adsorption isotherm model. The Gibbs free energy (ΔG), entropy (ΔS) and enthalpy (ΔH) of bioremediation were also determined. The results showed that bioremediation of zirconium by live C. versicolor was feasible and spontaneous at room temperature. The equilibrium data verified the involvement of chemisorption during the bioremediation. The kinetic data indicated the operation of a pseudo-second-order process during the biosorption of zirconium from aqueous solution. The maximum bioremediation capacity (110.75 mg/g) of C. versicolor was observed under optimum operational conditions: pH 4.5, biomass dose 0.05 mg/100 mL, contact time 6 h and temperature 30 °C. The results showed that C. versicolor could be used for bioremediation of heavy metal ions from aqueous systems. (author)

  5. Using Central Composite Experimental Design to Optimize the Degradation of Tylosin from Aqueous Solution by Photo-Fenton Reaction

    Directory of Open Access Journals (Sweden)

    Abd Elaziz Sarrai

    2016-05-01

    The feasibility of the application of the Photo-Fenton process in the treatment of aqueous solution contaminated by Tylosin antibiotic was evaluated. The Response Surface Methodology (RSM) based on Central Composite Design (CCD) was used to evaluate and optimize the effect of hydrogen peroxide, ferrous ion concentration and initial pH as independent variables on the total organic carbon (TOC) removal as the response function. The interaction effects and optimal parameters were obtained by using MODDE software. The significance of the independent variables and their interactions was tested by means of analysis of variance (ANOVA) with a 95% confidence level. Results show that the concentration of the ferrous ion and pH were the main parameters affecting TOC removal, while peroxide concentration had a slight effect on the reaction. The optimum operating conditions to achieve maximum TOC removal were determined. The model prediction for maximum TOC removal was compared to the experimental result at optimal operating conditions. A good agreement between the model prediction and experimental results confirms the soundness of the developed model.

  6. Semi-Analytical Solution of Optimization on Moon-Pool Shaped WEC

    Directory of Open Access Journals (Sweden)

    Zhang W.C.

    2016-10-01

    In order to effectively extract and maximize the energy from ocean waves, a new kind of oscillating-body WEC (wave energy converter) with a moon pool has been put forward. The main emphasis in this paper is placed on inserting the damping into the equation of heaving motion applied to a complex wave energy converter, and expressions for the velocity potential, added mass and damping coefficients associated with the exciting forces were derived by using the eigenfunction expansion matching method. By using surface-wave hydrodynamics, the exact theoretical conditions were solved to allow the maximum energy to be absorbed from regular waves. To optimize the ability of wave energy conversion, oscillating system models with different radius ratios are calculated and comparatively analyzed. Numerical calculations indicated that the capture width reaches its maximum in the vicinity of the natural frequency and that the new kind of oscillating-body WEC has a good ability for wave energy conversion.

  7. Parallel approach on sorting of genes in search of optimal solution.

    Science.gov (United States)

    Kumar, Pranav; Sahoo, G

    2018-05-01

    An important tool in comparative genome analysis is the rearrangement event that can transform one given genome into another. To find a minimum sequence of fissions and fusions, we propose an algorithm and show a transformation example for converting the source genome into the target genome. The proposed algorithm uses a circular sequence, i.e. a "cycle graph", in place of a mapping. The main concept of the algorithm is based on the optimal result of a permutation. These sorting processes are performed in constant running time by representing the permutation in the form of cycles. In biological instances it has been observed that transposition occurs at half the frequency of reversal. In this paper we do not deal with reversal, but instead with the rearrangements of fission, fusion and transposition. Copyright © 2017 Elsevier Inc. All rights reserved.

  8. Emergence of robust solutions to 0-1 optimization problems in multi-agent systems

    DEFF Research Database (Denmark)

    constructive application in engineering. The approach is demonstrated by giving two examples: First, time-dependent robot-target assignment problems with several autonomous robots and several targets are considered as model of flexible manufacturing systems. Each manufacturing target has to be served...... of autonomous space robots building a space station by a distributed transportation of several parts from a space shuttle to defined positions at the space station. Second, the suggested approach is used for the design and selection of traffic networks. The topology of the network is optimized with respect...... to an additive quantity like the length of route segments and an upper bound for the number of route segments. For this, the dynamics of the selection processes of the previous example is extended such that for each vertex several choices for the edges can be made simultaneously up to an individually given upper...

  9. TNTM85 and TNTM81 transports / storage flasks: An optimized solution for vitrified residues

    International Nuclear Information System (INIS)

    Sicard, D.; Verdier, A.; Dyck, P.

    2006-01-01

    Analysis of the evolution of the burnup of spent fuel to be reprocessed showed that the highest-activity vitrified residues could not be transported in the existing flask designs. Therefore COGEMA LOGISTICS decided in the late nineties to develop a design with optimized capacity, able to store and transport the most active and hottest canisters. The TN TM 85 flask shall permit, in the near future in Germany, the storage and transport of the hottest vitrified residues, with a thermal power of 56 kW. The challenge for the TN TM 85 flask design was that the geometry entry data were very restrictive and were combined with a fairly wide range set by COGEMA Specification 300AQ16 relative to the vitrified residue canister. In addition, the cask had to fit as much as possible into the existing procedures for the TN TM 28 cask and TS 28 V cask, all along the logistics chain of loading, unloading, transport and maintenance. (authors)

  10. Optimizing the molarity of a EDTA washing solution for saturated-soil remediation of trace metal contaminated soils

    International Nuclear Information System (INIS)

    Andrade, M.D.; Prasher, S.O.; Hendershot, W.H.

    2007-01-01

    Three experiments were conducted to optimize the use of ethylenediaminetetraacetic acid (EDTA) for reclaiming urban soils contaminated with trace metals. As compared to Na2EDTA, (NH4)2EDTA extracted 60% more Zn and equivalent amounts of Cd, Cu and Pb from a sandy loam. When successively saturating and draining loamy sand columns during a washing cycle, which submerged it once with a (NH4)2EDTA wash and four times with deionised water, the post-wash rinses largely contributed to the total cumulative extraction of Cd, Co, Cr, Cu, Mn, Ni, Pb and Zn. Both the washing solution and the deionised water rinses were added in a 2:5 liquid to soil (L:S) weight ratio. For equal amounts of EDTA, concentrating the washing solution and applying it and the ensuing rinses in a smaller 1:5 L:S weight ratio, instead of a 2:5 L:S weight ratio, increased the extraction of targeted Cr, Cu, Ni, Pb and Zn. - A single EDTA addition is best utilised in a highly concentrated washing solution given in a small liquid to soil weight ratio.

  11. Solution of Constrained Optimal Control Problems Using Multiple Shooting and ESDIRK Methods

    DEFF Research Database (Denmark)

    Capolei, Andrea; Jørgensen, John Bagterp

    2012-01-01

    As we consider stiff systems, implicit solvers with sensitivity computation capabilities for initial value problems must be used in the multiple shooting algorithm. Traditionally, multi-step methods based on the BDF algorithm have been used for such problems. The main novel contribution of this paper is the use of ESDIRK integration methods for solution of the initial value problems and the corresponding sensitivity equations arising in the multiple shooting algorithm. Compared to BDF-methods, ESDIRK-methods are advantageous in multiple shooting algorithms in which restarts and frequent discontinuities on each shooting interval are present. The ESDIRK methods are implemented using an inexact Newton method that reuses the factorization of the iteration matrix for the integration as well as the sensitivity computation. Numerical experiments are provided to demonstrate the algorithm.
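
    A compact sketch of the multiple-shooting construction described above is given below; the integrator is SciPy's off-the-shelf implicit Radau method rather than an ESDIRK scheme, and the stiff test problem and discretization are purely illustrative.

      import numpy as np
      from scipy.integrate import solve_ivp

      def f(t, y):
          """Stiff Van der Pol test dynamics (illustrative only)."""
          mu = 100.0
          return [y[1], mu * (1 - y[0] ** 2) * y[1] - y[0]]

      def shooting_defects(z, t_grid):
          """Continuity defects between shooting intervals.

          z stacks the state guesses at the start of each interval; a Newton-type
          method in the outer optimizer would drive these defects to zero."""
          n_int = len(t_grid) - 1
          states = z.reshape(n_int, 2)
          defects = []
          for k in range(n_int - 1):
              sol = solve_ivp(f, (t_grid[k], t_grid[k + 1]), states[k],
                              method="Radau", rtol=1e-8, atol=1e-10)
              defects.append(sol.y[:, -1] - states[k + 1])  # interval end state minus next guess
          return np.concatenate(defects)

      t_grid = np.linspace(0.0, 2.0, 6)              # 5 shooting intervals
      z0 = np.tile([2.0, 0.0], len(t_grid) - 1)      # initial state guess on each interval
      print(shooting_defects(z0, t_grid))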

  12. Quantitative assessment of in-solution digestion efficiency identifies optimal protocols for unbiased protein analysis

    DEFF Research Database (Denmark)

    Leon, Ileana R; Schwämmle, Veit; Jensen, Ole N

    2013-01-01

    a combination of qualitative and quantitative LC-MS/MS methods and statistical data analysis. In contrast to previous studies we employed both standard qualitative as well as data-independent quantitative workflows to systematically assess trypsin digestion efficiency and bias using mitochondrial protein...... conditions (buffer, RapiGest, deoxycholate, urea), and two methods for removal of detergents prior to analysis of peptides (acid precipitation or phase separation with ethyl acetate). Our data-independent quantitative LC-MS/MS workflow quantified over 3700 distinct peptides with 96% completeness between all...... protocols and replicates, with an average 40% protein sequence coverage and an average of 11 peptides identified per protein. Systematic quantitative and statistical analysis of physicochemical parameters demonstrated that deoxycholate-assisted in-solution digestion combined with phase transfer allows...

  13. Life Cycle Network Modeling Framework and Solution Algorithms for Systems Analysis and Optimization of the Water-Energy Nexus

    Directory of Open Access Journals (Sweden)

    Daniel J. Garcia

    2015-07-01

    The water footprint of energy systems must be considered, as future water scarcity has been identified as a major concern. This work presents a general life cycle network modeling and optimization framework for energy-based products and processes using a functional unit of liters of water consumed in the processing pathway. We analyze and optimize the water-energy nexus over the objectives of water footprint minimization, maximization of economic output per liter of water consumed (economic efficiency of water), and maximization of energy output per liter of water consumed (energy efficiency of water). A mixed integer, multiobjective nonlinear fractional programming (MINLFP) model is formulated. A mixed integer linear programming (MILP)-based branch and refine algorithm that incorporates both the parametric algorithm and nonlinear programming (NLP) subproblems is developed to boost solving efficiency. A case study in bioenergy is presented, and the water footprint is considered from biomass cultivation to biofuel production, providing a novel perspective into the consumption of water throughout the value chain. The case study, optimized successively over the three aforementioned objectives, utilizes a variety of candidate biomass feedstocks to meet primary fuel product demand (ethanol, diesel, and gasoline). A minimum water footprint of 55.1 ML/year was found, economic efficiencies of water ranged from −$1.31/L to $0.76/L, and energy efficiencies of water ranged from 15.32 MJ/L to 27.98 MJ/L. These results show optimization provides avenues for process improvement, as reported values for the energy efficiency of bioethanol range from 0.62 MJ/L to 3.18 MJ/L. Furthermore, the proposed solution approach was shown to be an order of magnitude more efficient than directly solving the original MINLFP problem with general purpose solvers.
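
    A toy illustration of the parametric (Dinkelbach-type) iteration that classically underlies such fractional-programming algorithms, for an objective N(x)/D(x); the one-dimensional functions below are invented for the example and are unrelated to the case-study data.

      import numpy as np

      # Maximize N(x)/D(x), e.g. energy output per liter of water consumed (toy functions).
      x_grid = np.linspace(0.0, 10.0, 2001)
      N = lambda x: 5.0 * x - 0.3 * x ** 2        # hypothetical "energy output"
      D = lambda x: 1.0 + x                       # hypothetical "water consumed" (positive)

      lam = 0.0                                   # current guess of the optimal ratio
      for _ in range(50):
          vals = N(x_grid) - lam * D(x_grid)      # parametric subproblem: maximize N - lam * D
          x_star = x_grid[np.argmax(vals)]
          new_lam = N(x_star) / D(x_star)
          if abs(new_lam - lam) < 1e-10:          # converged: max_x [N - lam * D] is (numerically) zero
              break
          lam = new_lam

      print(f"optimal ratio ~ {lam:.4f} at x ~ {x_star:.3f}")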

  14. Determination of Optimal Temperature for Biosorption of Heavy Metal Mixture from Aqueous Solution by Pretreated Biomass of Aspergillus niger

    Directory of Open Access Journals (Sweden)

    Javad Yousefi

    2012-01-01

    Biosorption is a novel technology that uses dead and inactive biomass for the removal of heavy metals from aqueous solution. Parameters such as temperature, contact time, solution pH, initial metal concentration, biosorbent dose and also the agitation speed of the solution and biosorbent mixture can affect the amount of metal sorption by the biosorbent. The aim of this study was to investigate the effects of different temperature treatments (25, 35, 45 and 55 °C) on the biosorption of a metal mixture in order to determine the optimal temperature for greater metal removal from aqueous solution. This study uses dead biomass of Aspergillus niger pretreated with 0.5N NaOH for the removal of Zn(II), Co(II) and Cd(II). In all temperature treatments and for all of the heavy metals, the maximum amount of metal sorption and concentration decrease occurred in the first 5 minutes, and equilibrium was reached after 20 minutes. The percentage of metal sorption showed an increasing trend with temperature. Among the 4 experimental treatments, the 55 °C treatment showed the maximum sorption and 25 °C the minimum. The percentage of Cr(II) sorption increased from 28.5% at 25 °C to 44.7% at 55 °C. Also, this increase was from 40% to 58% for Cd(II) and from 37.7% to 65.6% for Zn(II). About 60% of the increase in sorption by A. niger was due to the increase in temperature. Therefore, the amount of metal sorption can be increased simply by increasing the temperature, without any additional biomass.

  15. Optimization of actinides trace precipitation on diamond/Si PIN sensor for alpha-spectrometry in aqueous solution

    International Nuclear Information System (INIS)

    Tran, Q.T.; Pomorski, M.; Sanoit, J. de; Mer-Calfati, C.; Scorsone, E.; Bergonzo, P.

    2014-01-01

    We report here on a new approach for the detection and identification of actinides (Pu, Am, Cm, etc). This approach is based on the use of a novel device consisting of a boron doped nanocrystalline diamond film deposited onto a silicon PIN diode alpha particle sensor. The actinides concentration is probed in situ in the measuring solution using a method based on electro-precipitation that can be carried out via the use of a doped diamond electrode. The device allows probing directly both the alpha-particle activity and energy in liquid solutions. In this work, we address the optimization of the actinides electro-precipitation step onto the sensor. The approach is based on fine tuning the pH of the electrolyte, the nature of the supporting electrolytes (Na2SO4 or NaNO3), the electrochemical cell geometry, the current density value, the precipitation duration as well as the sensor surface area. The deposition efficiency was significantly improved, with values reaching for instance up to 81.5% in the case of electro-precipitation of 5.96 Bq 241Am on the sensor. The diamond/silicon sensor can be reused after measurement by performing a fast decontamination step with a high yield (99%), where the 241Am electro-precipitated layer is quickly removed by applying an anodic current (+2 mA/cm2 for 10 minutes) to the boron doped nanocrystalline diamond electrode in aqueous solution. This study demonstrated that alpha-particle spectroscopic measurements could be made feasible for the first time in aqueous solutions after an electrochemical deposition process, with theoretical detection thresholds as low as 0.24 Bq/L. We believe that this approach can be of very high interest for alpha-particle spectroscopy in liquids for actinides trace detection. (authors)

  16. Data of cost-optimal solutions and retrofit design methods for school renovation in a warm climate.

    Science.gov (United States)

    Zacà, Ilaria; Tornese, Giuliano; Baglivo, Cristina; Congedo, Paolo Maria; D'Agostino, Delia

    2016-12-01

    "Efficient Solutions and Cost-Optimal Analysis for Existing School Buildings" (Paolo Maria Congedo, Delia D'Agostino, Cristina Baglivo, Giuliano Tornese, Ilaria Zacà) [1] is the paper that refers to this article. It reports the data related to the establishment of several variants of energy efficient retrofit measures selected for two existing school buildings located in the Mediterranean area. In compliance with the cost-optimal analysis described in the Energy Performance of Buildings Directive and its guidelines (EU, Directive, EU 244,) [2], [3], these data are useful for the integration of renewable energy sources and high performance technical systems for school renovation. The data of cost-efficient high performance solutions are provided in tables that are explained within the following sections. The data focus on the describe school refurbishment sector to which European policies and investments are directed. A methodological approach already used in previous studies about new buildings is followed (Baglivo Cristina, Congedo Paolo Maria, D׳Agostino Delia, Zacà Ilaria, 2015; IlariaZacà, Delia D'Agostino, Paolo Maria Congedo, Cristina Baglivo; Baglivo Cristina, Congedo Paolo Maria, D'Agostino Delia, Zacà Ilaria, 2015; Ilaria Zacà, Delia D'Agostino, Paolo Maria Congedo, Cristina Baglivo, 2015; Paolo Maria Congedo, Cristina Baglivo, IlariaZacà, Delia D'Agostino,2015) [4], [5], [6], [7], [8]. The files give the cost-optimal solutions for a kindergarten (REF1) and a nursery (REF2) school located in Sanarica and Squinzano (province of Lecce Southern Italy). The two reference buildings differ for construction period, materials and systems. The eleven tables provided contain data about the localization of the buildings, geometrical features and thermal properties of the envelope, as well as the energy efficiency measures related to walls, windows, heating, cooling, dhw and renewables. Output values of energy consumption, gas emission and costs are given for a

  17. Water evaporation algorithm: A new metaheuristic algorithm towards the solution of optimal power flow

    Directory of Open Access Journals (Sweden)

    Anulekha Saha

    2017-12-01

    A relatively new technique to solve the optimal power flow (OPF) problem, inspired by the evaporation (vaporization) of small quantities of water particles from dense surfaces, is presented in this paper. The IEEE 30 bus and IEEE 118 bus test systems are assessed for various objectives to determine the water evaporation algorithm's (WEA) efficiency in handling the OPF problem after satisfying constraints. A comparative study with other established techniques demonstrates the competitiveness of WEA in treating varied objectives. It achieved superior results for all the objectives considered. The algorithm is found to minimize its objective values by large margins even in the case of the large test system. Statistical analysis of all the cases using Wilcoxon's signed rank test resulted in p-values much lower than the required value of 0.05, thereby establishing the robustness of the applied technique. The best performance of the algorithm is obtained for the voltage deviation minimization and voltage stability index minimization objectives in the case of the IEEE 30 and IEEE 118 bus test systems, respectively.

  18. A solution for exposure tool optimization at the 65-nm node and beyond

    Science.gov (United States)

    Itai, Daisuke

    2007-03-01

    As device geometries shrink, tolerances for critical dimension, focus, and overlay control decrease. For the stable manufacture of semiconductor devices at (and beyond) the 65nm node, both performance variability and drift in exposure tools are no longer negligible factors. With EES (Equipment Engineering System) as a guidepost, hopes of improving productivity of semiconductor manufacturing are growing. We are developing a system, EESP (Equipment Engineering Support Program), based on the concept of EES. The EESP system collects and stores large volumes of detailed data generated from Canon lithographic equipment while product is being manufactured. It uses that data to monitor both equipment characteristics and process characteristics, which cannot be examined without this system. The goal of EESP is to maximize equipment capabilities, by feeding the result back to APC/FDC and the equipment maintenance list. This was a collaborative study of the system's effectiveness at the device maker's factories. We analyzed the performance variability of exposure tools by using focus residual data. We also attempted to optimize tool performance using the analyzed results. The EESP system can make the optimum performance of exposure tools available to the device maker.

  19. A Dynamic Programming Solution for Energy-Optimal Video Playback on Mobile Devices

    Directory of Open Access Journals (Sweden)

    Minseok Song

    2016-01-01

    Due to the development of mobile technology and the wide availability of smartphones, the Internet of Things (IoT) is starting to handle high volumes of video data to facilitate multimedia-based services, which requires energy-efficient video playback. In video playback, frames have to be decoded and rendered at a high playback rate, increasing the computation cost on the CPU. To save CPU power, dynamic voltage and frequency scaling (DVFS) dynamically adjusts the operating voltage of the processor along with the frequency, in which an appropriate selection of frequency can achieve a balance between performance and power. We present a decoding model that allows buffering frames to let the CPU run at low frequency, and then propose an algorithm that determines the CPU frequency needed to decode each frame in a video, with the aim of minimizing power consumption while meeting buffer size and deadline constraints, using a dynamic programming technique. We finally extend this algorithm to optimize CPU frequencies over a short sequence of frames, producing a practical method of reducing the energy required for video decoding. Experimental results show a system-wide reduction in energy of 27%, compared with a processor running at full speed.
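
    A highly simplified sketch of the dynamic-programming idea described above: pick one of a few CPU frequencies per frame so that every frame meets its display deadline via a decode-ahead buffer, while total decoding energy is minimized. The frequencies, power numbers, frame workloads and buffer model below are invented for illustration and are not the paper's.

      import numpy as np

      # Hypothetical platform: available CPU frequencies (GHz) and corresponding power draw (W).
      freqs  = np.array([0.6, 1.0, 1.4])
      power  = np.array([0.4, 1.0, 2.2])           # grows faster than linearly with frequency
      cycles = np.array([8, 12, 6, 30, 10, 9, 14, 5]) * 1e6  # per-frame decoding workload (cycles)
      T      = 1 / 30                              # frame period in seconds (30 fps playback)
      B      = 4                                   # decode-ahead buffer size (frames)

      STEP = T / 10                                # discretization of the "slack" (time decoded ahead)
      n_slack = int(B * T / STEP) + 1              # slack states from 0 up to B * T

      INF = float("inf")
      dp = np.full(n_slack, INF)
      dp[0] = 0.0                                  # minimum energy to reach each slack state
      for c in cycles:
          new_dp = np.full(n_slack, INF)
          for s in range(n_slack):
              if dp[s] == INF:
                  continue
              for f, p in zip(freqs, power):
                  t_dec = c / (f * 1e9)            # decode time for this frame at frequency f
                  new_slack = s * STEP + T - t_dec # one frame period of slack is gained per frame
                  if new_slack < 0:                # the frame would miss its display deadline
                      continue
                  ns = min(int(new_slack / STEP), n_slack - 1)  # cap slack at the buffer limit
                  new_dp[ns] = min(new_dp[ns], dp[s] + p * t_dec)
          dp = new_dp

      print(f"minimum decoding energy ~ {dp[np.isfinite(dp)].min():.4f} J")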

  20. Generalized solution of design optimization and failure analysis of composite drive shaft

    Energy Technology Data Exchange (ETDEWEB)

    Kollipalli, K.; Shivaramakrishna, K.V.S.; Prabhakaran, R.T.D. [Birla Institute of Technology and Science, Goa (India)

    2012-07-01

    Composites have an edge over conventional metals like steel and aluminum due to their higher stiffness-to-weight and strength-to-weight ratios. Because of these advantages, composites can bring about a revolutionary change in the materials used in automotive engineering, as weight savings have positive impacts on other attributes like fuel economy and possibly noise, vibration and harshness (NVH). In this paper, the drive line system of an automobile is targeted for the use of composites, keeping in view constraints such as torque transmission, torsional buckling load and fundamental natural frequency. Composite drive shafts made of three different composites ('HM Carbon/HS Carbon/E-glass'-epoxy) were modeled using the Catia V5R16 CPD workbench, and a finite element analysis with boundary conditions, fiber orientation and stacking sequence was performed using the ANSYS Composite module. The results obtained were compared to theoretical results and found to be accurate and within limits. The paper also addresses the generalization of the drive shaft model and analysis, i.e., future changes in stacking sequence can be incorporated directly into the ANSYS model without remodeling in Catia. Hence the base model and analysis method established here, facilitated by CAD/CAE, can be used to carry out any composite shaft design optimization process. (Author)

  1. Analytic hierarchy process-based approach for selecting a Pareto-optimal solution of a multi-objective, multi-site supply-chain planning problem

    Science.gov (United States)

    Ayadi, Omar; Felfel, Houssem; Masmoudi, Faouzi

    2017-07-01

    The current manufacturing environment has changed from the traditional single plant to a multi-site supply chain in which multiple plants serve customer demands. In this article, a tactical multi-objective, multi-period, multi-product, multi-site supply-chain planning problem is proposed. A corresponding optimization model aiming to simultaneously minimize the total cost, maximize product quality and maximize the customer demand satisfaction level is developed. The proposed solution approach yields a front of Pareto-optimal solutions that represents the trade-offs among the different objectives. Subsequently, the analytic hierarchy process method is applied to select the best Pareto-optimal solution according to the preferences of the decision maker. The robustness of the solutions and of the proposed approach is discussed based on a sensitivity analysis and an application to a real case from the textile and apparel industry.
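
    As an illustration of the selection step, the sketch below applies AHP weights derived from a hypothetical pairwise-comparison matrix to a hand-made three-point Pareto front (cost, quality, demand satisfaction). None of the numbers come from the article; the eigenvector weighting and consistency ratio are the standard AHP ingredients.

      import numpy as np

      # Hypothetical Pareto-optimal plans: columns are (total cost, product quality, satisfied demand level).
      # Cost is minimized, the other two objectives are maximized; all numbers are illustrative.
      pareto = np.array([
          [120.0, 0.82, 0.90],
          [135.0, 0.88, 0.93],
          [150.0, 0.95, 0.97],
      ])

      # Decision-maker pairwise comparisons of the three criteria on the Saaty 1-9 scale.
      A = np.array([
          [1.0, 3.0, 2.0],
          [1.0 / 3.0, 1.0, 0.5],
          [0.5, 2.0, 1.0],
      ])

      # Criteria weights = normalized principal eigenvector of the comparison matrix.
      eigvals, eigvecs = np.linalg.eig(A)
      k = np.argmax(eigvals.real)
      w = np.abs(eigvecs[:, k].real)
      w /= w.sum()

      # Consistency ratio (random index RI = 0.58 for a 3x3 matrix).
      ci = (eigvals.real[k] - 3.0) / 2.0
      print("weights:", np.round(w, 3), " consistency ratio:", round(ci / 0.58, 3))

      # Turn every criterion into a benefit score in (0, 1]; invert cost so larger is better.
      scores = np.column_stack([
          pareto[:, 0].min() / pareto[:, 0],
          pareto[:, 1] / pareto[:, 1].max(),
          pareto[:, 2] / pareto[:, 2].max(),
      ])

      overall = scores @ w
      print("overall AHP scores:", np.round(overall, 3))
      print("selected Pareto solution:", int(np.argmax(overall)))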

  2. An Optimized, Data Distribution Service-Based Solution for Reliable Data Exchange Among Autonomous Underwater Vehicles.

    Science.gov (United States)

    Rodríguez-Molina, Jesús; Bilbao, Sonia; Martínez, Belén; Frasheri, Mirgita; Cürüklü, Baran

    2017-08-05

    Major challenges arise when managing a large number of heterogeneous vehicles that have to communicate underwater in order to complete a global mission in a cooperative manner. In this kind of application domain, sending data through the environment presents issues that surpass those found in other overwater, distributed, cyber-physical systems (i.e., low bandwidth, an unreliable transport medium, data representation and high hardware heterogeneity). This manuscript presents a Publish/Subscribe-based semantic middleware solution for unreliable scenarios and vehicle interoperability across cooperative and heterogeneous autonomous vehicles. The middleware relies on different iterations of the Data Distribution Service (DDS) software standard and their combined work between autonomous maritime vehicles and a control entity. It also uses several components with different functionalities deemed mandatory for a semantic middleware architecture oriented to maritime operations (device and service registration, context awareness, access to the application layer), where other technologies are interwoven with the middleware (wireless communications, acoustic networks). Implementation details and test results, both in a laboratory and in a deployment scenario, are provided as a way to assess the quality of the system and its satisfactory performance.

  3. An Optimized, Data Distribution Service-Based Solution for Reliable Data Exchange Among Autonomous Underwater Vehicles

    Directory of Open Access Journals (Sweden)

    Jesús Rodríguez-Molina

    2017-08-01

    Full Text Available Major challenges arise when managing a large number of heterogeneous vehicles that have to communicate underwater in order to complete a global mission in a cooperative manner. In this kind of application domain, sending data through the environment presents issues that surpass those found in other overwater, distributed, cyber-physical systems (i.e., low bandwidth, an unreliable transport medium, data representation and high hardware heterogeneity). This manuscript presents a Publish/Subscribe-based semantic middleware solution for unreliable scenarios and vehicle interoperability across cooperative and heterogeneous autonomous vehicles. The middleware relies on different iterations of the Data Distribution Service (DDS) software standard and their combined work between autonomous maritime vehicles and a control entity. It also uses several components with different functionalities deemed mandatory for a semantic middleware architecture oriented to maritime operations (device and service registration, context awareness, access to the application layer), where other technologies are interwoven with the middleware (wireless communications, acoustic networks). Implementation details and test results, both in a laboratory and in a deployment scenario, are provided as a way to assess the quality of the system and its satisfactory performance.

  4. SQUEEZE-E: The Optimal Solution for Molecular Simulations with Periodic Boundary Conditions.

    Science.gov (United States)

    Wassenaar, Tsjerk A; de Vries, Sjoerd; Bonvin, Alexandre M J J; Bekker, Henk

    2012-10-09

    In molecular simulations of macromolecules, it is desirable to limit the amount of solvent in the system to avoid spending computational resources on uninteresting solvent-solvent interactions. As a consequence, periodic boundary conditions are commonly used, with a simulation box chosen as small as possible, for a given minimal distance between images. Here, we describe how such a simulation cell can be set up for ensembles, taking into account a priori available or estimable information regarding conformational flexibility. Doing so ensures that any conformation present in the input ensemble will satisfy the distance criterion during the simulation. This helps avoid periodicity artifacts due to conformational changes. The method introduces three new approaches in computational geometry: (1) The first is the derivation of an optimal packing of ensembles, for which the mathematical framework is described. (2) A new method for approximating the α-hull and the contact body for single bodies and ensembles is presented, which is orders of magnitude faster than existing routines, allowing the calculation of packings of large ensembles and/or large bodies. (3) A routine is described for searching a combination of three vectors on a discretized contact body forming a reduced base for a lattice with minimal cell volume. The new algorithms reduce the time required to calculate packings of single bodies from minutes or hours to seconds. The use and efficacy of the method is demonstrated for ensembles obtained from NMR, MD simulations, and elastic network modeling. An implementation of the method has been made available online at http://haddock.chem.uu.nl/services/SQUEEZE/ and has been made available as an option for running simulations through the weNMR GRID MD server at http://haddock.science.uu.nl/enmr/services/GROMACS/main.php.

  5. Optimal nonpharmacological management of agitation in Alzheimer's disease: challenges and solutions.

    Science.gov (United States)

    Millán-Calenti, José Carlos; Lorenzo-López, Laura; Alonso-Búa, Begoña; de Labra, Carmen; González-Abraldes, Isabel; Maseda, Ana

    2016-01-01

    Many patients with Alzheimer's disease will develop agitation at later stages of the disease, which constitutes one of the most challenging and distressing aspects of dementia. Recently, nonpharmacological therapies have become increasingly popular and have been proven to be effective in managing the behavioral symptoms (including agitation) that are common in the middle or later stages of dementia. These therapies seem to be a good alternative to pharmacological treatment to avoid unpleasant side effects. We present a systematic review of randomized controlled trials (RCTs) focused on the nonpharmacological management of agitation in Alzheimer's disease (AD) patients aged 65 years and above. Of the 754 studies found, eight met the inclusion criteria. This review suggests that music therapy is optimal for the management of agitation in institutionalized patients with moderately severe and severe AD, particularly when the intervention includes individualized and interactive music. Bright light therapy has little and possibly no clinically significant effects with respect to observational ratings of agitation but decreases caregiver ratings of physical and verbal agitation. Therapeutic touch is effective for reducing physical nonaggressive behaviors but is not superior to simulated therapeutic touch or usual care for reducing physically aggressive and verbally agitated behaviors. Melissa oil aromatherapy and behavioral management techniques are not superior to placebo or pharmacological therapies for managing agitation in AD. Further research in clinical trials is required to confirm the effectiveness and long-term effects of nonpharmacological interventions for managing agitation in AD. These types of studies may lead to the development of future intervention protocols to improve the well-being and daily functioning of these patients, thereby avoiding residential care placement.

  6. Flowchart on Choosing Optimal Method of Observing Transverse Dispersion Coefficient for Solute Transport in Open Channel Flow

    Directory of Open Access Journals (Sweden)

    Kyong Oh Baek

    2018-04-01

    Full Text Available There are a number of methods for observing and estimating the transverse dispersion coefficient in an analysis of solute transport in open channel flow, and it may be difficult to select an optimal method for calculating dispersion coefficients from tracer data among the numerous methodologies. A flowchart was proposed in this study to select an appropriate method under either time-variant or steady transport conditions. In constructing the flowchart, the strengths and limitations of the methods were evaluated based on their derivation procedures, which were conducted under specific assumptions. Additionally, application examples of these methods on experimental data were illustrated using previous works. Furthermore, the dispersion coefficients observed in a laboratory channel were validated using numerical transport modeling, and the simulation results were compared with the experimental results from tracer tests. This flowchart may assist in choosing the most suitable method for determining the transverse dispersion coefficient in various river mixing situations.

  7. Precise Orbit Solution for Swarm Using Space-Borne GPS Data and Optimized Pseudo-Stochastic Pulses

    Directory of Open Access Journals (Sweden)

    Bingbing Zhang

    2017-03-01

    Full Text Available Swarm is a European Space Agency (ESA) project that was launched on 22 November 2013 and consists of three Swarm satellites. Swarm precise orbits are essential to the success of the project. This study investigates how well Swarm zero-differenced (ZD) reduced-dynamic orbit solutions can be determined using space-borne GPS data and optimized pseudo-stochastic pulses under high ionospheric activity. We choose Swarm space-borne GPS data from 1–25 October 2014, and Swarm reduced-dynamic orbits are obtained. Orbit quality is assessed by GPS phase observation residuals and compared with the Precise Science Orbits (PSOs) released by ESA. Results show that pseudo-stochastic pulses with a time interval of 6 min and an a priori standard deviation (STD) of 10⁻² mm/s in the radial (R), along-track (T) and cross-track (N) directions are optimal for Swarm ZD reduced-dynamic precise orbit determination (POD). During high ionospheric activity, the mean Root Mean Square (RMS) of the Swarm GPS phase residuals is 9–11 mm. The Swarm orbit solutions are also compared with the Swarm PSOs released by ESA, and the accuracy of the Swarm orbits reaches 2–4 cm in the R, T and N directions. Independent Satellite Laser Ranging (SLR) validation indicates that the Swarm reduced-dynamic orbits have an accuracy of 2–4 cm. The Swarm-B orbit quality is better than that of Swarm-A and Swarm-C. The Swarm orbits can be applied to geomagnetic, geoelectric and gravity field recovery.

  8. Mixture-mixture design for the fingerprint optimization of chromatographic mobile phases and extraction solutions for Camellia sinensis.

    Science.gov (United States)

    Borges, Cleber N; Bruns, Roy E; Almeida, Aline A; Scarminio, Ieda S

    2007-07-09

    A composite simplex centroid-simplex centroid mixture design is proposed for simultaneously optimizing two mixture systems. The complementary model is formed by multiplying special cubic models for the two systems. The design was applied to the simultaneous optimization of both mobile phase chromatographic mixtures and extraction mixtures for the Camellia sinensis Chinese tea plant. The extraction mixtures investigated contained varying proportions of ethyl acetate, ethanol and dichloromethane while the mobile phase was made up of varying proportions of methanol, acetonitrile and a methanol-acetonitrile-water (MAW) 15%:15%:70% mixture. The experiments were block randomized corresponding to a split-plot error structure to minimize laboratory work and reduce environmental impact. Coefficients of an initial saturated model were obtained using Scheffe-type equations. A cumulative probability graph was used to determine an approximate reduced model. The split-plot error structure was then introduced into the reduced model by applying generalized least square equations with variance components calculated using the restricted maximum likelihood approach. A model was developed to calculate the number of peaks observed with the chromatographic detector at 210 nm. A 20-term model contained essentially all the statistical information of the initial model and had a root mean square calibration error of 1.38. The model was used to predict the number of peaks eluted in chromatograms obtained from extraction solutions that correspond to axial points of the simplex centroid design. The significant model coefficients are interpreted in terms of interacting linear, quadratic and cubic effects of the mobile phase and extraction solution components.

  9. Optimal treatment of Alzheimer’s disease psychosis: challenges and solutions

    Directory of Open Access Journals (Sweden)

    Koppel J

    2014-11-01

    Full Text Available Jeremy Koppel,1,2 Blaine S Greenwald2 1The Feinstein Institute for Medical Research, North Shore–Long Island Jewish Health System, Manhasset, NY, USA; 2Zucker Hillside Hospital, Hofstra North Shore-Long Island Jewish School of Medicine, Glen Oaks, NY, USA Abstract: Psychotic symptoms emerging in the context of neurodegeneration as a consequence of Alzheimer’s disease were recognized and documented by Alois Alzheimer himself in his description of the first reported case of the disease. Over a quarter of a century ago, in the context of attempting to develop prognostic markers of disease progression, psychosis was identified as an independent predictor of a more-rapid cognitive decline. This finding has been subsequently well replicated, rendering psychotic symptoms an important area of exploration in clinical history taking – above and beyond treatment necessity – as their presence has prognostic significance. Further, there is now a rapidly accreting body of research that suggests that psychosis in Alzheimer’s disease (AD+P) is a heritable disease subtype that enjoys neuropathological specificity and localization. There is now hope that the elucidation of the neurobiology of the syndrome will pave the way to translational research eventuating in new treatments. To date, however, the primary treatments employed in alleviating the suffering caused by AD+P are the atypical antipsychotics. These agents are approved by the US Food and Drug Administration for the treatment of schizophrenia, but they have only marginal efficacy in treating AD+P and are associated with troubling levels of morbidity and mortality. For clinical approaches to AD+P to be optimized, this syndrome must be disentangled from other primary psychotic disorders, and recent scientific advances must be translated into disease-specific therapeutic interventions. Here we provide a review of atypical antipsychotic efficacy in AD+P, followed by an overview of critical

  10. Optimal nonpharmacological management of agitation in Alzheimer’s disease: challenges and solutions

    Directory of Open Access Journals (Sweden)

    Millán-Calenti JC

    2016-02-01

    Full Text Available José Carlos Millán-Calenti,1 Laura Lorenzo-López,1 Begoña Alonso-Búa,1 Carmen de Labra,2 Isabel González-Abraldes,1 Ana Maseda1 1Gerontology Research Group, Department of Medicine, Faculty of Health Sciences, Universidade da Coruña, A Coruña, Spain; 2Research, Development and Innovation Department, Gerontological Complex La Milagrosa, Provincial Association of Pensioners and Retired People (UDP) from A Coruña, A Coruña, Spain Abstract: Many patients with Alzheimer’s disease will develop agitation at later stages of the disease, which constitutes one of the most challenging and distressing aspects of dementia. Recently, nonpharmacological therapies have become increasingly popular and have been proven to be effective in managing the behavioral symptoms (including agitation) that are common in the middle or later stages of dementia. These therapies seem to be a good alternative to pharmacological treatment to avoid unpleasant side effects. We present a systematic review of randomized controlled trials (RCTs) focused on the nonpharmacological management of agitation in Alzheimer’s disease (AD) patients aged 65 years and above. Of the 754 studies found, eight met the inclusion criteria. This review suggests that music therapy is optimal for the management of agitation in institutionalized patients with moderately severe and severe AD, particularly when the intervention includes individualized and interactive music. Bright light therapy has little and possibly no clinically significant effects with respect to observational ratings of agitation but decreases caregiver ratings of physical and verbal agitation. Therapeutic touch is effective for reducing physical nonaggressive behaviors but is not superior to simulated therapeutic touch or usual care for reducing physically aggressive and verbally agitated behaviors. Melissa oil aromatherapy and behavioral management techniques are not superior to placebo or pharmacological therapies for

  11. Gravity-Assist Trajectories to the Ice Giants: An Automated Method to Catalog Mass-or Time-Optimal Solutions

    Science.gov (United States)

    Hughes, Kyle M.; Knittel, Jeremy M.; Englander, Jacob A.

    2017-01-01

    This work presents an automated method of calculating mass (or time) optimal gravity-assist trajectories without a priori knowledge of the flyby-body combination. Since gravity assists are particularly crucial for reaching the outer Solar System, we use the Ice Giants, Uranus and Neptune, as example destinations for this work. Catalogs are also provided that list the most attractive trajectories found over launch dates ranging from 2024 to 2038. The tool developed to implement this method, called the Python EMTG Automated Trade Study Application (PEATSA), iteratively runs the Evolutionary Mission Trajectory Generator (EMTG), a NASA Goddard Space Flight Center in-house trajectory optimization tool. EMTG finds gravity-assist trajectories with impulsive maneuvers using a multiple-shooting structure along with stochastic methods (such as monotonic basin hopping) and may be run with or without an initial guess provided. PEATSA runs instances of EMTG in parallel over a grid of launch dates. After each set of runs completes, the best results within a neighborhood of launch dates are used to seed all other cases in that neighborhood---allowing the solutions across the range of launch dates to improve over each iteration. The results here are compared against trajectories found using a grid-search technique, and PEATSA is found to outperform the grid-search results for most launch years considered.

  12. Kalai-Smorodinsky bargaining solution for optimal resource allocation over wireless DS-CDMA visual sensor networks

    Science.gov (United States)

    Pandremmenou, Katerina; Kondi, Lisimachos P.; Parsopoulos, Konstantinos E.

    2012-01-01

    Surveillance applications usually require high levels of video quality, resulting in high power consumption. The existence of a well-behaved scheme to balance video quality and power consumption is crucial for the system's performance. In the present work, we adopt the game-theoretic approach of Kalai-Smorodinsky Bargaining Solution (KSBS) to deal with the problem of optimal resource allocation in a multi-node wireless visual sensor network (VSN). In our setting, the Direct Sequence Code Division Multiple Access (DS-CDMA) method is used for channel access, while a cross-layer optimization design, which employs a central processing server, accounts for the overall system efficacy through all network layers. The task assigned to the central server is the communication with the nodes and the joint determination of their transmission parameters. The KSBS is applied to non-convex utility spaces, efficiently distributing the source coding rate, channel coding rate and transmission powers among the nodes. In the underlying model, the transmission powers assume continuous values, whereas the source and channel coding rates can take only discrete values. Experimental results are reported and discussed to demonstrate the merits of KSBS over competing policies.
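
    For a finite set of candidate allocations, the Kalai-Smorodinsky point is often approximated by the feasible utility vector that maximizes the smallest gain over the disagreement point after normalizing by the ideal (utopia) point, i.e. the point closest to the KSBS line. The sketch below does exactly that for invented two-node utilities; it is not the cross-layer DS-CDMA model of the paper.

      import numpy as np

      # Hypothetical utility vectors (e.g., video quality of two nodes) for a handful of
      # candidate resource allocations; values are illustrative only.
      candidates = np.array([
          [0.40, 0.90],
          [0.55, 0.80],
          [0.70, 0.65],
          [0.85, 0.45],
          [0.60, 0.75],
      ])

      d = np.array([0.30, 0.30])            # disagreement point (utility if bargaining fails)
      ideal = candidates.max(axis=0)        # utopia point: best achievable utility per player

      # Kalai-Smorodinsky selection: maximize the smallest normalized gain, which keeps the
      # ratio of gains between the players as equal as the feasible set allows.
      gains = (candidates - d) / (ideal - d)
      ks_index = int(np.argmax(gains.min(axis=1)))

      print("normalized gains:\n", np.round(gains, 3))
      print("KSBS allocation:", candidates[ks_index], "(candidate", ks_index, ")")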

  13. Optimizing Cu(II) removal from aqueous solution by magnetic nanoparticles immobilized on activated carbon using Taguchi method.

    Science.gov (United States)

    Ebrahimi Zarandi, Mohammad Javad; Sohrabi, Mahmoud Reza; Khosravi, Morteza; Mansouriieh, Nafiseh; Davallo, Mehran; Khosravan, Azita

    2016-01-01

    This study synthesized magnetic nanoparticles (Fe(3)O(4)) immobilized on activated carbon (AC) and used them as an effective adsorbent for Cu(II) removal from aqueous solution. The effects of three parameters, the concentration of Cu(II), the dosage of the Fe(3)O(4)/AC magnetic nanocomposite and pH, on the removal of Cu(II) using the Fe(3)O(4)/AC nanocomposite were studied. In order to examine and describe the optimum condition for each of these parameters, Taguchi's optimization method was used in a batch system, and an L9 orthogonal array was used for the experimental design. The removal percentage (R%) of Cu(II) and the uptake capacity (q) were transformed into a signal-to-noise ratio (S/N) for a 'larger-the-better' response. The Taguchi results, analyzed by choosing the best run on the basis of the S/N, were statistically tested using analysis of variance; the tests showed that the main effects of all parameters were significant within a 95% confidence level. The best conditions for removal of Cu(II) were determined to be a pH of 7, a nanocomposite dosage of 0.1 gL(-1) and an initial Cu(II) concentration of 20 mg L(-1) at a constant temperature of 25 °C. Generally, the results showed that the simple Taguchi method is suitable for optimizing the Cu(II) removal experiments.
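
    The 'larger-the-better' signal-to-noise ratio behind this kind of analysis is SN = -10 log10((1/n) Σ 1/y_i^2), computed per run and then averaged per factor level. The sketch below applies it to an invented L9-style array; the levels and removal percentages are illustrative only, not the study's data.

      import numpy as np

      # Hypothetical L9 orthogonal array: factor levels for (pH, dosage, Cu concentration)
      # and the measured removal percentage of each run. Numbers are illustrative only.
      levels = np.array([
          [1, 1, 1], [1, 2, 2], [1, 3, 3],
          [2, 1, 2], [2, 2, 3], [2, 3, 1],
          [3, 1, 3], [3, 2, 1], [3, 3, 2],
      ])
      removal = np.array([62.0, 71.0, 78.0, 80.0, 87.0, 74.0, 85.0, 83.0, 90.0])

      # Larger-the-better signal-to-noise ratio with a single response per run.
      sn = -10.0 * np.log10(1.0 / removal ** 2)

      factors = ["pH", "dosage", "Cu(II) concentration"]
      for f, name in enumerate(factors):
          means = [sn[levels[:, f] == lvl].mean() for lvl in (1, 2, 3)]
          best = int(np.argmax(means)) + 1
          print(f"{name}: mean S/N per level = {np.round(means, 2)}, best level = {best}")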

  14. Mandelbrot's Extremism

    NARCIS (Netherlands)

    Beirlant, J.; Schoutens, W.; Segers, J.J.J.

    2004-01-01

    In the sixties Mandelbrot already showed that extreme price swings are more likely than some of us think or incorporate in our models. A modern toolbox for analyzing such rare events can be found in the field of extreme value theory. At the core of extreme value theory lies the modelling of maxima

  15. Solution of the Helmholtz-Poincare Wave Equation using the coupled boundary integral equations and optimal surface eigenfunctions

    International Nuclear Information System (INIS)

    Werby, M.F.; Broadhead, M.K.; Strayer, M.R.; Bottcher, C.

    1992-01-01

    The Helmholtz-Poincaré Wave Equation (H-PWE) arises in many areas of classical wave scattering theory. In particular it can be found for the cases of acoustical scattering from submerged bounded objects and electromagnetic scattering from objects. The extended boundary integral equations (EBIE) method is derived from considering both the exterior and interior solutions of the H-PWE. This coupled set of expressions has the advantage of not only offering a prescription for obtaining a solution for the exterior scattering problem, but it also obviates the problem of irregular values corresponding to fictitious interior eigenvalues. Once the coupled equations are derived, they can be obtained in matrix form by expanding all relevant terms in partial wave expansions, including a bi-orthogonal expansion of the Green's function. However some freedom in the choice of the surface expansion is available, since the unknown surface quantities may be expanded in a variety of ways so long as closure is obtained. Out of many possible choices, we develop an optimal method to obtain such expansions, based on the optimum eigenfunctions related to the surface of the object. In effect, we convert part of the problem (that associated with the Fredholm integral equation of the first kind) into an eigenvalue problem of a related Hermitian operator. The methodology will be explained in detail and examples will be presented.

  16. Optimization Solution of Troesch’s and Bratu’s Problems of Ordinary Type Using Novel Continuous Genetic Algorithm

    Directory of Open Access Journals (Sweden)

    Zaer Abo-Hammour

    2014-01-01

    Full Text Available A new kind of optimization technique, namely, continuous genetic algorithm, is presented in this paper for numerically approximating the solutions of Troesch’s and Bratu’s problems. The underlying idea of the method is to convert the two differential problems into discrete versions by replacing each of the second derivatives by an appropriate difference quotient approximation. The new method has the following characteristics. First, it should not resort to more advanced mathematical tools; that is, the algorithm should be simple to understand and implement and should be thus easily accepted in the mathematical and physical application fields. Second, the algorithm is of global nature in terms of the solutions obtained as well as its ability to solve other mathematical and physical problems. Third, the proposed methodology has an implicit parallel nature which points to its implementation on parallel machines. The algorithm is tested on different versions of Troesch’s and Bratu’s problems. Experimental results show that the proposed algorithm is effective, straightforward, and simple.
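
    A minimal version of the idea, for Bratu's problem only, is sketched below: the second derivative is replaced by a central difference, and a plain real-coded GA searches the interior nodal values by minimizing the sum of squared residuals. It is a simplified stand-in for the authors' continuous genetic algorithm (different operators and parameters), with all settings chosen arbitrarily.

      import numpy as np

      rng = np.random.default_rng(0)

      # Bratu's problem u'' + lam*exp(u) = 0 with u(0) = u(1) = 0, discretized by central
      # differences on n interior nodes; the GA searches directly over the nodal values.
      lam, n = 1.0, 9
      h = 1.0 / (n + 1)

      def fitness(u):
          full = np.concatenate(([0.0], u, [0.0]))
          r = (full[:-2] - 2.0 * full[1:-1] + full[2:]) / h**2 + lam * np.exp(full[1:-1])
          return float(np.sum(r ** 2))          # sum of squared residuals (0 at the solution)

      pop_size, gens = 60, 4000
      pop = rng.uniform(0.0, 0.5, size=(pop_size, n))
      fit = np.array([fitness(ind) for ind in pop])

      for g in range(gens):
          # binary tournament selection of two parents
          i = rng.integers(pop_size, size=2)
          j = rng.integers(pop_size, size=2)
          p1, p2 = pop[i[np.argmin(fit[i])]], pop[j[np.argmin(fit[j])]]
          # arithmetic crossover plus Gaussian mutation with a slowly shrinking step
          alpha = rng.random(n)
          step = 0.1 * (1.0 - g / gens) + 0.005
          child = alpha * p1 + (1.0 - alpha) * p2 + rng.normal(0.0, step, n)
          f_child = fitness(child)
          worst = int(np.argmax(fit))
          if f_child < fit[worst]:              # steady-state replacement of the worst member
              pop[worst], fit[worst] = child, f_child

      best = pop[np.argmin(fit)]
      print("sum of squared residuals:", round(float(fit.min()), 6))
      print("u(0.5) from the GA:", round(float(best[n // 2]), 4),
            " (lower Bratu solution has u(0.5) ≈ 0.14 for lam = 1)")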

  17. Optimization of radioactivation analysis for the determination of iodine, bromine, and chlorine contents in soils, plants, soil solutions and rain water

    International Nuclear Information System (INIS)

    Yuita, Kouichi

    1983-01-01

    The conventional analytical procedures for iodine, bromine and chlorine in soils, plants, soil solutions and rain water, especially for the former two, have not been sufficient in accuracy and sensitivity. With emphasis on radioactivation analysis, known to be a highly accurate analytical method, practical radioactivation procedures that are highly sensitive, accurate and convenient have been investigated for the determination of the three halogen elements in various soils and plants and of the three present in extremely low concentrations in soil solutions and rain water. Consequently, the following methods were established: (1) non-destructive radioactivation analysis, without chemical separation, of bromine and chlorine in plants, soil solutions and rain water; (2) radioactivation analysis by group separation for the simultaneous determination of iodine, bromine and chlorine in soils; (3) high-sensitivity radioactivation analysis for iodine in plants, soil solutions and rain water. A manual for the analytical procedures was prepared accordingly. (Mori, K.)

  18. Valorization of aquaculture waste in removal of cadmium from aqueous solution: optimization by kinetics and ANN analysis

    Science.gov (United States)

    Aditya, Gautam; Hossain, Asif

    2018-05-01

    Cadmium is one of the most hazardous heavy metals with respect to human health and aquatic pollution. The removal of cadmium through biosorption is a feasible option for restoring the health of contaminated freshwater ecosystems. In line with this proposition, and considering the efficiency of calcium carbonate as a biosorbent, the shell dust of the economically important snail Bellamya bengalensis was tested for the removal of cadmium from aqueous medium. After the flesh is used as a cheap source of protein, the shells of B. bengalensis, made up of CaCO3, are discarded as aquaculture waste. The biosorption was assessed through batch sorption studies along with studies characterizing the morphology and surface structure of the waste shell dust. The data on the biosorption were subjected to an artificial neural network (ANN) model for optimization of the process. The biosorption process varied as a function of the pH of the solution, the concentration of the heavy metal, the biomass of the adsorbent and the time of exposure. The kinetics were well represented by a pseudo-second-order model (R2 = 0.998), and the Langmuir isotherm (R2 = 0.995) gave the better fit to the equilibrium data, with a maximum sorption capacity of 30.33 mg g-1. The regression (R2 = 0.948) of the ANN model reproduces the predicted values of Cd removal satisfactorily. The normalized importance analysis in the ANN indicates that Cd2+ concentration and pH have more influence on removal than biomass dose and time. The SEM and EDX studies show clear peaks for Cd, confirming the biosorption process, while the FTIR study identifies the main functional groups (-OH, C-H, C=O, C=C) responsible for the biosorption. The study indicates that the waste shell dust can be used as an efficient, low-cost, environmentally friendly and sustainable adsorbent for the removal of cadmium from aqueous solution.
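
    The two models named in the abstract have standard linearized forms, t/qt = 1/(k2·qe²) + t/qe for pseudo-second-order kinetics and Ce/qe = 1/(qm·KL) + Ce/qm for the Langmuir isotherm. The sketch below fits both by least squares to invented uptake data; the numbers are not the experimental values reported in the study.

      import numpy as np

      # --- pseudo-second-order kinetics: t/qt = 1/(k2*qe**2) + t/qe --------------------
      t = np.array([5.0, 10.0, 20.0, 40.0, 60.0, 90.0, 120.0])      # contact time (min)
      qt = np.array([8.1, 12.4, 17.0, 21.2, 23.0, 24.5, 25.1])      # uptake (mg/g), illustrative

      slope, intercept = np.polyfit(t, t / qt, 1)
      qe = 1.0 / slope
      k2 = 1.0 / (intercept * qe ** 2)
      print(f"pseudo-second-order: qe = {qe:.2f} mg/g, k2 = {k2:.4f} g/(mg*min)")

      # --- Langmuir isotherm: Ce/qe = 1/(qm*KL) + Ce/qm --------------------------------
      Ce = np.array([2.0, 5.0, 10.0, 20.0, 40.0, 60.0])             # equilibrium conc. (mg/L)
      qe_eq = np.array([9.5, 16.0, 21.5, 25.6, 28.2, 29.1])         # equilibrium uptake (mg/g)

      slope, intercept = np.polyfit(Ce, Ce / qe_eq, 1)
      qm = 1.0 / slope
      KL = 1.0 / (intercept * qm)
      print(f"Langmuir: qm = {qm:.2f} mg/g, KL = {KL:.3f} L/mg")

      # R^2 of the linearized Langmuir fit, as commonly reported in biosorption studies.
      pred = intercept + slope * Ce
      ss_res = np.sum((Ce / qe_eq - pred) ** 2)
      ss_tot = np.sum((Ce / qe_eq - np.mean(Ce / qe_eq)) ** 2)
      print("Langmuir linearized R^2:", round(1.0 - ss_res / ss_tot, 4))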

  19. Systematic analysis of protein–detergent complexes applying dynamic light scattering to optimize solutions for crystallization trials

    Energy Technology Data Exchange (ETDEWEB)

    Meyer, Arne [University of Hamburg, c/o DESY, Building 22a, Notkestrasse 85, 22603 Hamburg (Germany); Dierks, Karsten [University of Hamburg, c/o DESY, Building 22a, Notkestrasse 85, 22603 Hamburg (Germany); XtalConcepts, Marlowring 19, 22525 Hamburg (Germany); Hussein, Rana [University of Hamburg, c/o DESY, Building 22a, Notkestrasse 85, 22603 Hamburg (Germany); Brillet, Karl [ESBS, Pôle API, 300 Boulevard Sébastien Brant, CS10413, 67412 Illkirch CEDEX (France); Brognaro, Hevila [São Paulo State University, UNESP/IBILCE, Caixa Postal 136, São José do Rio Preto-SP, 15054 (Brazil); Betzel, Christian, E-mail: christian.betzel@uni-hamburg.de [University of Hamburg, c/o DESY, Building 22a, Notkestrasse 85, 22603 Hamburg (Germany)

    2015-01-01

    cleavage to separate the fusion partners. This study demonstrates the potential of in situ DLS to optimize solutions of protein–detergent complexes for crystallization applications.

  20. [Optimization of benzalkonium chloride concentration in 0.0015% tafluprost ophthalmic solution from the points of ocular surface safety and preservative efficacy].

    Science.gov (United States)

    Asada, Hiroyuki; Takaoka-Shichijo, Yuko; Nakamura, Masatsugu; Kimura, Akio

    2010-06-01

    Optimization of the benzalkonium chloride (alkyl dimethylbenzylammonium chloride: BAK) concentration as a preservative in 0.0015% tafluprost ophthalmic solution (Tapros 0.0015% ophthalmic solution), an anti-glaucoma medicine, was examined from the points of view of ocular surface safety and preservative efficacy. BAKC(12), which is dodecyl dimethylbenzylammonium chloride, and BAKmix, which is a mixture of dodecyl, tetradecyl and hexadecyl dimethylbenzylammonium chloride, were used in this study. The effects of the BAKC(12) concentration and of the BAK type, BAKC(12) or BAKmix, in tafluprost ophthalmic solution on ocular surface safety were evaluated using the in vitro SV40-immortalized human corneal epithelium cell line (HCE-T). Following treatment with tafluprost ophthalmic solutions containing BAKC(12), a concentration-dependent effect on HCE-T cell viability was observed. The cell viability of HCE-T after treatment for 5 minutes with solutions containing 0.001% to 0.003% BAKC(12) was at the same level as that after treatment with the solution without BAK. Tafluprost ophthalmic solution with 0.01% BAKC(12) was safer for the ocular surface than the same solution with 0.01% BAKmix. Preservative-effectiveness tests of tafluprost ophthalmic solutions with various concentrations of BAKC(12) were performed according to the Japanese Pharmacopoeia (JP), and solutions with more than 0.0005% BAKC(12) conformed to the JP criteria. It was concluded that 0.0005% to 0.003% BAKC(12) in tafluprost ophthalmic solution is optimal, that is, well balanced between ocular surface safety and preservative efficacy.

  1. Optimized co-solute paramagnetic relaxation enhancement for the rapid NMR analysis of a highly fibrillogenic peptide

    International Nuclear Information System (INIS)

    Oktaviani, Nur Alia; Risør, Michael W.; Lee, Young-Ho; Megens, Rik P.; Jong, Djurre H. de; Otten, Renee; Scheek, Ruud M.; Enghild, Jan J.; Nielsen, Niels Chr.; Ikegami, Takahisa; Mulder, Frans A. A.

    2015-01-01

    Co-solute paramagnetic relaxation enhancement (PRE) is an attractive way to speed up data acquisition in NMR spectroscopy by shortening the T1 relaxation time of the nucleus of interest and thus the necessary recycle delay. Here, we present the rationale to utilize high-spin iron(III) as the optimal transition metal for this purpose and characterize the properties of its neutral chelate form Fe(DO3A) as a suitable PRE agent. Fe(DO3A) effectively reduces the T1 values across the entire sequence of the intrinsically disordered protein α-synuclein with negligible impact on line width. The agent is better suited than currently used alternatives, shows no specific interaction with the polypeptide chain and, due to its high relaxivity, is effective at low concentrations and in ‘proton-less’ NMR experiments. By using Fe(DO3A) we were able to complete the backbone resonance assignment of a highly fibrillogenic peptide from α1-antitrypsin by acquiring the necessary suite of multidimensional NMR datasets in 3 h.

  2. Optimized co-solute paramagnetic relaxation enhancement for the rapid NMR analysis of a highly fibrillogenic peptide

    Energy Technology Data Exchange (ETDEWEB)

    Oktaviani, Nur Alia [University of Groningen, Groningen Biomolecular Sciences and Biotechnology Institute (Netherlands); Risør, Michael W. [University of Aarhus, Interdisciplinary Nanoscience Center (iNANO) and Department of Chemistry (Denmark); Lee, Young-Ho [Osaka University, Institute for Protein Research (Japan); Megens, Rik P. [University of Groningen, Stratingh Institute for Chemistry (Netherlands); Jong, Djurre H. de; Otten, Renee; Scheek, Ruud M. [University of Groningen, Groningen Biomolecular Sciences and Biotechnology Institute (Netherlands); Enghild, Jan J. [University of Aarhus, Interdisciplinary Nanoscience Center (iNANO) and Department of Molecular Biology and Genetics (Denmark); Nielsen, Niels Chr. [University of Aarhus, Interdisciplinary Nanoscience Center (iNANO) and Department of Chemistry (Denmark); Ikegami, Takahisa [Yokohama City University, Graduate School of Medical Life Science (Japan); Mulder, Frans A. A., E-mail: fmulder@chem.au.dk [University of Groningen, Groningen Biomolecular Sciences and Biotechnology Institute (Netherlands)

    2015-06-15

    Co-solute paramagnetic relaxation enhancement (PRE) is an attractive way to speed up data acquisition in NMR spectroscopy by shortening the T{sub 1} relaxation time of the nucleus of interest and thus the necessary recycle delay. Here, we present the rationale to utilize high-spin iron(III) as the optimal transition metal for this purpose and characterize the properties of its neutral chelate form Fe(DO3A) as a suitable PRE agent. Fe(DO3A) effectively reduces the T{sub 1} values across the entire sequence of the intrinsically disordered protein α-synuclein with negligible impact on line width. The agent is better suited than currently used alternatives, shows no specific interaction with the polypeptide chain and, due to its high relaxivity, is effective at low concentrations and in ‘proton-less’ NMR experiments. By using Fe(DO3A) we were able to complete the backbone resonance assignment of a highly fibrillogenic peptide from α{sub 1}-antitrypsin by acquiring the necessary suite of multidimensional NMR datasets in 3 h.

  3. AP-Cloud: Adaptive Particle-in-Cloud method for optimal solutions to Vlasov–Poisson equation

    International Nuclear Information System (INIS)

    Wang, Xingyu; Samulyak, Roman; Jiao, Xiangmin; Yu, Kwangmin

    2016-01-01

    We propose a new adaptive Particle-in-Cloud (AP-Cloud) method for obtaining optimal numerical solutions to the Vlasov–Poisson equation. Unlike the traditional particle-in-cell (PIC) method, which is commonly used for solving this problem, the AP-Cloud adaptively selects computational nodes or particles to deliver higher accuracy and efficiency when the particle distribution is highly non-uniform. Unlike other adaptive techniques for PIC, our method balances the errors in PDE discretization and Monte Carlo integration, and discretizes the differential operators using a generalized finite difference (GFD) method based on a weighted least square formulation. As a result, AP-Cloud is independent of the geometric shapes of computational domains and is free of artificial parameters. Efficient and robust implementation is achieved through an octree data structure with 2:1 balance. We analyze the accuracy and convergence order of AP-Cloud theoretically, and verify the method using an electrostatic problem of a particle beam with halo. Simulation results show that the AP-Cloud method is substantially more accurate and faster than the traditional PIC, and it is free of artificial forces that are typical for some adaptive PIC techniques.

  4. A solution to energy and environmental problems of electric power system using hybrid harmony search-random search optimization algorithm

    Directory of Open Access Journals (Sweden)

    Vikram Kumar Kamboj

    2016-04-01

    Full Text Available In recent years, global warming and carbon dioxide (CO2) emission reduction have become important issues in India, as CO2 emission levels continue to rise with the increasing volume of national energy consumption. Under the pressure of global warming, it is crucial for the Indian government to impose effective policies to promote CO2 emission reduction. The challenge of supplying the nation with high-quality and reliable electrical energy at a reasonable cost has turned government policy toward a deregulated and restructured environment. This paper presents an effective solution for the energy and environmental problems of electric power using an efficient and powerful hybrid optimization algorithm: the hybrid harmony search-random search algorithm. The proposed algorithm is tested on the standard IEEE 14-bus, 30-bus and 56-bus systems. The effectiveness of the proposed hybrid algorithm is compared with other well-known evolutionary, heuristic and meta-heuristic search algorithms. For multi-objective unit commitment, it is found that, as there is a conflicting relationship between cost and emission, improving performance on the cost criterion causes performance on emission to deteriorate.
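
    The harmony search component is easy to sketch: keep a memory of candidate vectors, build a new harmony by drawing each component either from memory (with occasional pitch adjustment) or at random, and replace the worst member when the new harmony is better. The toy below minimizes a stand-in quadratic cost rather than the combined fuel-cost/emission model of the paper; all parameters are illustrative.

      import numpy as np

      rng = np.random.default_rng(1)

      # Stand-in objective: a shifted sphere function in place of the fuel-cost/emission model.
      def cost(x):
          return float(np.sum((x - np.array([3.0, -1.0, 2.0, 0.5])) ** 2))

      dim, low, high = 4, -10.0, 10.0
      hms, hmcr, par, bw, iters = 20, 0.9, 0.3, 0.2, 5000   # memory size and HS parameters

      memory = rng.uniform(low, high, size=(hms, dim))
      fitness = np.array([cost(h) for h in memory])

      for _ in range(iters):
          new = np.empty(dim)
          for d in range(dim):
              if rng.random() < hmcr:                       # memory consideration
                  new[d] = memory[rng.integers(hms), d]
                  if rng.random() < par:                    # pitch adjustment
                      new[d] += bw * rng.uniform(-1.0, 1.0)
              else:                                         # random selection (random-search part)
                  new[d] = rng.uniform(low, high)
          new = np.clip(new, low, high)
          f_new = cost(new)
          worst = int(np.argmax(fitness))
          if f_new < fitness[worst]:
              memory[worst], fitness[worst] = new, f_new

      best = memory[np.argmin(fitness)]
      print("best solution:", np.round(best, 3), "cost:", round(float(fitness.min()), 6))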

  5. Solution Monitoring Evaluated by Proliferation Risk Assessment and Fuzzy Optimization Analysis for Safeguards in a Reprocessing Process

    Directory of Open Access Journals (Sweden)

    Mitsutoshi Suzuki

    2013-01-01

    Full Text Available Solution monitoring (SM) has been used in nuclear reprocessing plants as an additional measure to provide assurance that the plant is operated as declared. The inline volume and density monitoring equipment with dip tubes is important for safety and safeguards purposes and is a typical example of safeguards by design (SBD). Recently, safety, safeguards, and security by design (3SBD) have been proposed to promote efficient and effective generation of nuclear energy. In 3SBD, proliferation risk assessment has the potential to take into account the likelihood of incidents and the proliferation risk in safeguards. In this study, risk assessment methodologies for safeguards and security are discussed, and several mathematical methods are presented to investigate the notion of risk applied to intentional acts of facility misuse in an uncertain environment. Proliferation risk analysis with a Markov model, the deterrence effect with a game model, and SBD with fuzzy optimization are shown in feasibility studies to investigate the potential application of risk and uncertainty analyses in safeguards. It is demonstrated that SM is an effective measurement system for risk-informed and cost-effective SBD, even though there are inherent difficulties related to the possibility of operator falsification.

  6. Biosorption of Cr(VI) from aqueous solution using A. hydrophila in up-flow column. Optimization of process variables

    Energy Technology Data Exchange (ETDEWEB)

    Hasan, S.H.; Srivastava, P.; Ranjan, D. [Banaras Hindu Univ., Varanasi (India). Water Pollution Research Lab.; Talat, M. [Banaras Hindu Univ., Varanasi (India). Dept. of Biochemistry

    2009-06-15

    In the present study, a continuous up-flow fixed-bed column study was carried out using immobilized dead biomass of Aeromonas hydrophila for the removal of Cr(VI) from aqueous solution. Different polymeric matrices were used to immobilize the biomass, and the polysulfone-immobilized biomass was shown to give the maximum removal. The sorption capacity of the immobilized biomass for the removal of Cr(VI) was evaluated from the breakthrough curves obtained at different flow rates and bed heights. A maximum of 78.58% Cr(VI) removal was obtained at a bed height of 19 cm and a flow rate of 2 mL/min. The bed depth service time (BDST) model provides a good description of the experimental results, with a high correlation coefficient (>0.996). An attempt has been made to investigate the individual as well as the cumulative effects of the process variables and to optimize the process conditions for the maximum removal of chromium from water by a two-level two-factor full-factorial central composite design with the help of Minitab {sup registered} version 15 statistical software. The predicted results are in good agreement (R{sup 2}=98.19%) with the results obtained. Sorption-desorption studies revealed that the polysulfone-immobilized biomass could be reused for up to 11 cycles and that the bed was completely exhausted after 28 cycles. (orig.)
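
    The bed depth service time (BDST) model mentioned above is linear in bed depth: t = (N0/(C0·v))·Z − ln(C0/Cb − 1)/(k·C0). A short least-squares sketch with invented breakthrough times is given below; the flow and concentration values are assumptions, not the study's data.

      import numpy as np

      # Invented service times (min) to reach breakthrough at three bed depths (cm).
      Z = np.array([9.0, 14.0, 19.0])
      t = np.array([180.0, 300.0, 430.0])

      C0 = 50.0     # inlet Cr(VI) concentration (mg/L), assumed
      Cb = 5.0      # breakthrough concentration (mg/L), assumed
      v = 0.51      # linear flow velocity (cm/min), assumed

      slope, intercept = np.polyfit(Z, t, 1)
      N0 = slope * C0 * v                                   # sorption capacity of the bed (mg/L)
      k = np.log(C0 / Cb - 1.0) / (-intercept * C0)         # rate constant (L/(mg*min))

      print(f"BDST slope = {slope:.1f} min/cm, intercept = {intercept:.1f} min")
      print(f"bed sorption capacity N0 = {N0:.1f} mg/L, rate constant k = {k:.5f} L/(mg*min)")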

  7. Optimization of foaming properties of sludge protein solution by 60Co γ-ray/H2O2 using response surface methodology

    International Nuclear Information System (INIS)

    Xiang, Yulin; Xiang, Yuxiu; Wang, Lipeng; Zhang, Zhifang

    2016-01-01

    Response surface methodology and a Box-Behnken experimental design were used to model and optimize the operational parameters governing the foaming properties of a sludge protein solution treated by 60Co γ-ray/H2O2. The four variables involved in this research were the protein solution concentration, H2O2 concentration, pH and dose. In the range studied, statistical analysis of the results showed that the selected variables had a significant effect on the protein foaming properties. The optimized conditions were: protein solution concentration 26.50% (v/v), H2O2 concentration 0.30% (v/v), pH value 9.0, and dose 4.81 kGy. Under optimal conditions, the foamability and foam stability approached 23.3 cm and 21.3 cm, respectively. Regression analysis with R2 values of 0.9923 (foamability) and 0.9922 (foam stability) indicated a satisfactory correlation between the experimental data and the predicted values (response). In addition, based on a feasibility analysis, the 60Co γ-ray/H2O2 method can improve the odor and color of the protein foaming solution. - Highlights: • Effects of 60Co γ-ray/H2O2 on foaming properties of sludge protein were studied. • Response surface methodology and a Box-Behnken experimental design were applied. • The 60Co γ-ray/H2O2 method can improve the foaming properties of the protein solution.
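
    Response-surface work of this kind fits a full second-order polynomial (linear, squared and two-factor interaction terms) to the design runs and then locates its maximum. The sketch below does this for a reduced two-variable example with invented foamability responses, using ordinary least squares rather than a dedicated design-of-experiments package.

      import numpy as np
      from itertools import product

      # Invented design runs: coded levels (-1, 0, +1) for two factors, with foamability (cm).
      X = np.array(list(product([-1, 0, 1], repeat=2)), dtype=float)
      y = np.array([14.0, 17.5, 16.0, 18.5, 23.0, 20.5, 15.5, 19.0, 17.0])

      # Design matrix for the full quadratic model: 1, x1, x2, x1^2, x2^2, x1*x2.
      def features(x):
          x1, x2 = x[..., 0], x[..., 1]
          return np.stack([np.ones_like(x1), x1, x2, x1**2, x2**2, x1 * x2], axis=-1)

      beta, *_ = np.linalg.lstsq(features(X), y, rcond=None)

      # R^2 of the fitted surface.
      pred = features(X) @ beta
      r2 = 1.0 - np.sum((y - pred) ** 2) / np.sum((y - y.mean()) ** 2)
      print("coefficients:", np.round(beta, 3), " R^2:", round(r2, 3))

      # Locate the optimum on a fine grid of coded levels.
      grid = np.array(list(product(np.linspace(-1, 1, 101), repeat=2)))
      vals = features(grid) @ beta
      best = grid[np.argmax(vals)]
      print("predicted optimum (coded levels):", np.round(best, 2), " foamability:", round(vals.max(), 2))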

  8. Approximate ideal multi-objective solution Q(λ) learning for optimal carbon-energy combined-flow in multi-energy power systems

    International Nuclear Information System (INIS)

    Zhang, Xiaoshun; Yu, Tao; Yang, Bo; Zheng, Limin; Huang, Linni

    2015-01-01

    Highlights: • A novel optimal carbon-energy combined-flow (OCECF) model is firstly established. • A novel approximate ideal multi-objective solution Q(λ) learning is designed. • The proposed algorithm has a high convergence stability and reliability. • The proposed algorithm can be applied for OCECF in a large-scale power grid. - Abstract: This paper proposes a novel approximate ideal multi-objective solution Q(λ) learning for optimal carbon-energy combined-flow in multi-energy power systems. The carbon emissions, fuel cost, active power loss, voltage deviation and carbon emission loss are chosen as the optimization objectives, which are simultaneously optimized by five different Q-value matrices. The dynamic optimal weight of each objective is calculated online from the entire Q-value matrices such that the greedy action policy can be obtained. Case studies are carried out to evaluate the optimization performance for carbon-energy combined-flow in an IEEE 118-bus system and the regional power grid of southern China.

  9. Extreme cosmos

    CERN Document Server

    Gaensler, Bryan

    2011-01-01

    The universe is all about extremes. Space has a temperature 270°C below freezing. Stars die in catastrophic supernova explosions a billion times brighter than the Sun. A black hole can generate 10 million trillion volts of electricity. And hypergiants are stars 2 billion kilometres across, larger than the orbit of Jupiter. Extreme Cosmos provides a stunning new view of the way the Universe works, seen through the lens of extremes: the fastest, hottest, heaviest, brightest, oldest, densest and even the loudest. This is an astronomy book that not only offers amazing facts and figures but also re

  10. Analytical Solutions and Optimization of the Exo-Irreversible Schmidt Cycle with Imperfect Regeneration for the 3 Classical Types of Stirling Engine Solutions analytiques et optimisation du cycle de Schmidt irréversible à régénération imparfaite appliquées aux 3 types classiques de moteur Stirling

    Directory of Open Access Journals (Sweden)

    Rochelle P.

    2011-11-01

    Full Text Available The “old” Stirling engine is one of the most promising multi-heat-source engines for the future. Simple and realistic basic models are useful to aid in optimizing a preliminary engine configuration. In addition to new proper analytical solutions for regeneration that dramatically reduce computing time, this study of the Schmidt-Stirling engine cycle is carried out from an engineer-friendly viewpoint introducing exo-irreversible heat transfers. The reference parameters are the technological or physical constraints: the maximum pressure, the maximum volume, the extreme wall temperatures and the overall thermal conductance, while the adjustable optimization variables are the volumetric compression ratio, the dead volume ratios, the volume phase-lag, the gas characteristics, the hot-to-cold conductance ratio and the regenerator efficiency. The new normalized analytical expressions for the operating characteristics of the engine: power, work, efficiency, mean pressure, maximum speed of revolution are derived, and some dimensionless and dimensional reference numbers are presented, as well as power optimization examples with respect to non-dimensional speed, volume ratio and volume phase-lag angle.

  11. Extreme ductile deformation of fine-grained salt by coupled solution-precipitation creep and microcracking: Microstructural evidence from perennial Zechstein sequence (Neuhof salt mine, Germany)

    Czech Academy of Sciences Publication Activity Database

    Závada, Prokop; Desbois, G.; Schwedt, A.; Lexa, O.; Urai, J. L.

    2012-01-01

    Roč. 37, April (2012), s. 89-104 ISSN 0191-8141 R&D Projects: GA ČR GA14-15632S Institutional support: RVO:67985530 Keywords : rock salt * solution-precipitation creep * microcracking * Griffith crack * fluid inclusion trails Subject RIV: DB - Geology ; Mineralogy OBOR OECD: Geology Impact factor: 2.285, year: 2012

  12. Solution of combinatorial optimization problems by an accelerated hopfield neural network. Kobai kasokugata poppu firudo nyuraru netto ni yoru kumiawase saitekika mondai no kaiho

    Energy Technology Data Exchange (ETDEWEB)

    Ohori, T.; Yamamoto, H.; Setsu, Nenso; Watanabe, K. (Hokkaido Inst. of Technology, Hokkaido (Japan))

    1994-04-20

    Approximate solution of combinatorial optimization problems with the symmetric Hopfield neural network (NN) has been applied to many problems, such as the traveling salesman problem and the network planning problem. However, the Hopfield NN converges to local-minimum solutions very slowly. In this paper, a generalized model constructed by introducing an acceleration parameter into the Hopfield model is proposed, and it is shown that the acceleration parameter can make the model converge to local minima more quickly. Moreover, simulation experiments on random quadratic combinatorial problems with two and with twenty-five variables were carried out. The results show that accelerating the convergence changes the attraction regions of the local minima and degrades the accuracy of the solutions. If the initial point is selected around the center of the unit hypercube, solutions with high accuracy, unaffected by the acceleration parameter, can be obtained. 9 refs., 8 figs., 3 tabs.
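
    The baseline dynamics referred to above, without the acceleration parameter, can be sketched as asynchronous binary Hopfield updates on a small random quadratic problem. The instance below is invented, and the paper's acceleration parameter and attraction-region analysis are not reproduced; the sketch only shows convergence to a local minimum.

      import numpy as np

      rng = np.random.default_rng(2)

      # Small random quadratic combinatorial problem: minimize E(x) = -0.5*x'Wx - b'x over
      # binary x. W is symmetric with zero diagonal, as required for Hopfield dynamics.
      n = 25
      W = rng.normal(0.0, 1.0, (n, n))
      W = (W + W.T) / 2.0
      np.fill_diagonal(W, 0.0)
      b = rng.normal(0.0, 0.5, n)

      def energy(x):
          return float(-0.5 * x @ W @ x - b @ x)

      # Asynchronous binary Hopfield updates: each neuron switches to the state that lowers
      # the energy given the others; this always converges to a local minimum.
      x = rng.integers(0, 2, n).astype(float)
      for sweep in range(100):
          changed = False
          for i in rng.permutation(n):
              h = W[i] @ x + b[i]          # local field of neuron i
              new = 1.0 if h > 0 else 0.0
              if new != x[i]:
                  x[i], changed = new, True
          if not changed:
              break

      print("sweeps:", sweep + 1, " local-minimum energy:", round(energy(x), 3))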

  13. Extremal graph theory

    CERN Document Server

    Bollobas, Bela

    2004-01-01

    The ever-expanding field of extremal graph theory encompasses a diverse array of problem-solving methods, including applications to economics, computer science, and optimization theory. This volume, based on a series of lectures delivered to graduate students at the University of Cambridge, presents a concise yet comprehensive treatment of extremal graph theory.Unlike most graph theory treatises, this text features complete proofs for almost all of its results. Further insights into theory are provided by the numerous exercises of varying degrees of difficulty that accompany each chapter. A

  14. Sequences of extremal radially excited rotating black holes.

    Science.gov (United States)

    Blázquez-Salcedo, Jose Luis; Kunz, Jutta; Navarro-Lérida, Francisco; Radu, Eugen

    2014-01-10

    In the Einstein-Maxwell-Chern-Simons theory the extremal Reissner-Nordström solution is no longer the single extremal solution with vanishing angular momentum, when the Chern-Simons coupling constant reaches a critical value. Instead a whole sequence of rotating extremal J=0 solutions arises, labeled by the node number of the magnetic U(1) potential. Associated with the same near horizon solution, the mass of these radially excited extremal solutions converges to the mass of the extremal Reissner-Nordström solution. On the other hand, not all near horizon solutions are also realized as global solutions.

  15. LETTER TO THE EDITOR: Constant-time solution to the global optimization problem using Brüschweiler's ensemble search algorithm

    Science.gov (United States)

    Protopopescu, V.; D'Helon, C.; Barhen, J.

    2003-06-01

    A constant-time solution of the continuous global optimization problem (GOP) is obtained by using an ensemble algorithm. We show that under certain assumptions, the solution can be guaranteed by mapping the GOP onto a discrete unsorted search problem, whereupon Brüschweiler's ensemble search algorithm is applied. For adequate sensitivities of the measurement technique, the query complexity of the ensemble search algorithm depends linearly on the size of the function's domain. Advantages and limitations of an eventual NMR implementation are discussed.

  16. Synthesis of anatase nanoparticles with extremely wide solid solution range and ScTiNbO6 with α-PbO2 structure

    International Nuclear Information System (INIS)

    Hirano, Masanori; Ito, Takaharu

    2009-01-01

    Anatase-type nanoparticles ScXTi1-2XNbXO2 with a wide solid-solution range (X=0-0.35) were hydrothermally formed at 180 °C for 5 h. The lattice parameters a0 and c0 and the optical band gap of anatase increased gradually and linearly with the niobium and scandium content from X=0 to 0.35. Their photocatalytic activity and adsorptivity were evaluated by measuring the concentration of methylene blue (MB) remaining in the solution in the dark or under UV-light irradiation. The anatase phase existed stably up to 900 °C for the samples with X=0.25-0.30 and up to 750 °C for that with X=0.35 during heat treatment in air. The phase with α-PbO2 structure and the rutile phase coexisted in the samples with X=0.25-0.30 after heating at temperatures above 900-950 °C. The α-PbO2 structure with composition ScTiNbO6, possibly with some cation order similar to that seen in wolframite, existed as an almost completely single phase after heat treatment at 900-1500 °C through phase transformation from anatase-type ScTiNbO6. - Graphical abstract: Anatase-type ScXTi1-2XNbXO2 solid solutions with a wide solid-solution range (X=0-0.35) were hydrothermally formed as nanoparticles from precursor solutions of Sc(NO3)3, TiOSO4 and NbCl5 at 180 °C for 5 h using the hydrolysis of urea. Anatase-type ScTiNbO6 was synthesized under hydrothermal conditions. ScTiNbO6 having the α-PbO2 structure, possibly with some cation order similar to that seen in wolframite, was formed through phase transformation above 900 °C.

  17. Optimally Stopped Optimization

    Science.gov (United States)

    Vinci, Walter; Lidar, Daniel

    We combine the fields of heuristic optimization and optimal stopping. We propose a strategy for benchmarking randomized optimization algorithms that minimizes the expected total cost for obtaining a good solution with an optimal number of calls to the solver. To do so, rather than letting the objective function alone define a cost to be minimized, we introduce a further cost-per-call of the algorithm. We show that this problem can be formulated using optimal stopping theory. The expected cost is a flexible figure of merit for benchmarking probabilistic solvers that can be computed when the optimal solution is not known, and that avoids the biases and arbitrariness that affect other measures. The optimal stopping formulation of benchmarking directly leads to a real-time, optimal-utilization strategy for probabilistic optimizers with practical impact. We apply our formulation to benchmark the performance of a D-Wave 2X quantum annealer and the HFS solver, a specialized classical heuristic algorithm designed for low tree-width graphs. On a set of frustrated-loop instances with planted solutions defined on up to N = 1098 variables, the D-Wave device is between one to two orders of magnitude faster than the HFS solver.
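
    The expected-cost idea can be illustrated on simulated solver runs: treat repeated runs as i.i.d. draws of an objective value, consider the strategy "stop at the first run at or below a threshold", and trade the cost-per-call against the expected number of calls. The data below are synthetic and the threshold strategy is a simplified stand-in for the paper's optimal-stopping formulation.

      import numpy as np

      rng = np.random.default_rng(3)

      # Synthetic benchmark data: objective values returned by 2000 independent runs of a
      # randomized solver (lower is better), plus a fixed cost charged for every call.
      runs = rng.gamma(shape=3.0, scale=10.0, size=2000)
      cost_per_call = 2.0

      # Strategy "stop at the first run with objective <= threshold": the expected number of
      # calls is 1/p with p = P(objective <= threshold), so the expected total cost is
      #   E[objective | accepted] + cost_per_call / p.
      thresholds = np.quantile(runs, np.linspace(0.02, 0.9, 45))
      expected_costs = []
      for thr in thresholds:
          accepted = runs[runs <= thr]
          p = accepted.size / runs.size
          expected_costs.append(accepted.mean() + cost_per_call / p)

      best = int(np.argmin(expected_costs))
      print("best threshold: %.2f" % thresholds[best])
      print("expected calls: %.1f" % (1.0 / np.mean(runs <= thresholds[best])))
      print("expected total cost: %.2f" % expected_costs[best])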

  18. Optimization of the spatial mesh for numerical solution of the neutron transport equation in a cluster-type lattice cell

    International Nuclear Information System (INIS)

    Davis, R.S.

    2012-01-01

    For programs that solve the neutron transport equation with an approximation that the neutron flux is constant in each space in a user-defined mesh, optimization of that mesh yields benefits in computing time and attainable precision. The previous best practice does not optimize the mesh thoroughly, because a large number of test runs of the solving software would be necessary. The method presented here optimizes the mesh for a flux that is based on conventional approximations but is more informative, so that a minimal number of parameters, one per type of material, must be adjusted by test runs to achieve thorough optimization. For a 37 element, natural-uranium, CANDU lattice cell, the present optimization yields 7 to 12 times (depending on the criterion) better precision than the previous best practice in 37% less computing time. (author)

  19. Particle Swarm Optimization applied to combinatorial problem aiming the fuel recharge problem solution in a nuclear reactor; Particle swarm optimization aplicado ao problema combinatorio com vistas a solucao do problema de recarga em um reator nuclear

    Energy Technology Data Exchange (ETDEWEB)

    Meneses, Anderson Alvarenga de Moura; Schirru, Roberto [Universidade Federal, Rio de Janeiro, RJ (Brazil). Coordenacao dos Programas de Pos-graduacao de Engenharia. Programa de Engenharia Nuclear]. E-mail: ameneses@con.ufrj.br; schirru@lmp.ufrj.br

    2005-07-01

    This work focuses on the use of the Artificial Intelligence technique Particle Swarm Optimization (PSO) to optimize the fuel recharge of a nuclear reactor. This is a combinatorial problem, in which the search for the best feasible solution is done by minimizing a specific objective function. However, at this first stage it is possible to compare the fuel recharge problem with the Traveling Salesman Problem (TSP), since both of them are combinatorial, with one advantage: the evaluation of the TSP objective function is much simpler. Thus, the proposed methods have been applied to two TSPs: Oliver 30 and Rykel 48. In 1995, KENNEDY and EBERHART presented the PSO technique to optimize non-linear continuous functions. Recently, some PSO models for discrete search spaces have been developed for combinatorial optimization, although all of them have formulations different from the ones presented here. In this paper, we use the PSO theory associated with the Random Keys (RK) model, used in some optimizations with Genetic Algorithms. The Particle Swarm Optimization with Random Keys (PSORK) results from this association, which combines PSO and RK. The adaptations and changes to the PSO aim to allow its use for the nuclear fuel recharge problem. This work shows PSORK applied to the proposed combinatorial problem and the obtained results. (author)
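    A minimal sketch of the random-keys idea associated here with PSO: each particle is a vector of continuous keys whose argsort decodes into a permutation (a tour), so a standard continuous PSO update can drive a combinatorial search such as the TSP. This is not the authors' implementation; the function names, parameter values and decoding details are illustrative assumptions.

```python
import numpy as np

def tour_length(tour, dist):
    """Total length of a closed tour given a distance matrix."""
    return sum(dist[tour[i], tour[(i + 1) % len(tour)]] for i in range(len(tour)))

def psork(dist, n_particles=30, n_iter=500, w=0.7, c1=1.5, c2=1.5, seed=0):
    """PSO with Random Keys: continuous positions are decoded into tours via argsort."""
    rng = np.random.default_rng(seed)
    n = dist.shape[0]
    pos = rng.random((n_particles, n))            # random keys in [0, 1)
    vel = np.zeros_like(pos)
    pbest = pos.copy()
    pbest_cost = np.array([tour_length(np.argsort(p), dist) for p in pos])
    g = pbest_cost.argmin()
    gbest, gbest_cost = pbest[g].copy(), pbest_cost[g]
    for _ in range(n_iter):
        r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = pos + vel
        cost = np.array([tour_length(np.argsort(p), dist) for p in pos])
        improved = cost < pbest_cost
        pbest[improved], pbest_cost[improved] = pos[improved], cost[improved]
        if pbest_cost.min() < gbest_cost:
            g = pbest_cost.argmin()
            gbest, gbest_cost = pbest[g].copy(), pbest_cost[g]
    return np.argsort(gbest), gbest_cost

# tiny usage example with random city coordinates (hypothetical instance)
pts = np.random.default_rng(1).random((20, 2))
dist = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1)
print(psork(dist)[1])
```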

  20. Determination of optimal whole body vibration amplitude and frequency parameters with plyometric exercise and its influence on closed-chain lower extremity acute power output and EMG activity in resistance trained males

    Science.gov (United States)

    Hughes, Nikki J.

    The optimal combination of Whole body vibration (WBV) amplitude and frequency has not been established. Purpose. To determine the optimal combination of WBV amplitude and frequency that will enhance acute mean and peak power (MP and PP) output and EMG activity in the lower extremity muscles. Methods. Resistance trained males (n = 13) completed the following testing sessions: On day 1, power spectrum testing of bilateral leg press (BLP) movement was performed on the OMNI. Days 2 and 3 consisted of WBV testing with either average (5.8 mm) or high (9.8 mm) amplitude combined with either 0 (sham control), 10, 20, 30, 40 and 50 Hz frequency. Bipolar surface electrodes were placed on the rectus femoris (RF), vastus lateralis (VL), biceps femoris (BF) and gastrocnemius (GA) muscles for EMG analysis. MP and PP output and EMG activity of the lower extremity were assessed pre-, post-WBV treatments and after sham-controls on the OMNI while participants performed one set of five repetitions of BLP at the optimal resistance determined on Day 1. Results. No significant differences were found between pre- and sham-control on MP and PP output and on EMG activity in RF, VL, BF and GA. Completely randomized one-way ANOVA with repeated measures demonstrated no significant interaction of WBV amplitude and frequency on MP and PP output and peak and mean EMGrms amplitude and EMG rms area under the curve. RF and VL EMGrms area under the curve significantly decreased (p plyometric exercise does not induce alterations in subsequent MP and PP output and EMGrms activity of the lower extremity. Future studies need to address the time of WBV exposure and magnitude of external loads that will maximize strength and/or power output.

  1. The MusIC method: a fast and quasi-optimal solution to the muscle forces estimation problem.

    Science.gov (United States)

    Muller, A; Pontonnier, C; Dumont, G

    2018-02-01

    The present paper aims at presenting a fast and quasi-optimal method of muscle forces estimation: the MusIC method. It consists of interpolating a first estimate in a database generated offline with a classical optimization problem, and then correcting it to respect the motion dynamics. Three different cost functions - two polynomial criteria and a min/max criterion - were tested on a planar musculoskeletal model. The MusIC method provides a computation frequency approximately 10 times higher than a classical optimization approach, with a relative mean error of 4% on the cost function evaluation.

  2. Towards Optimal Power Management of Hybrid Electric Vehicles in Real-Time: A Review on Methods, Challenges, and State-Of-The-Art Solutions

    Directory of Open Access Journals (Sweden)

    Ahmed M. Ali

    2018-02-01

    Full Text Available In light of increasing alerts about limited energy sources and environmental degradation, it has become essential to search for alternatives to thermal engine-based vehicles, which are a major source of air pollution and fossil fuel depletion. Hybrid electric vehicles (HEVs), encompassing multiple energy sources, are a short-term solution that meets the performance requirements and contributes to fuel saving and emission reduction aims. Power management methods, such as regulating efficient energy flow to the vehicle propulsion, are core technologies of HEVs. Intelligent power management methods, capable of acquiring optimal power handling, accommodating system inaccuracies, and suiting real-time applications, can significantly improve the powertrain efficiency at different operating conditions. Rule-based methods are simply structured and easily implementable in real-time; however, only limited optimality in power handling decisions can be achieved. Optimization-based methods are more capable of achieving this optimality at the price of an augmented computational load. In the last few years, these optimization-based methods have been under development to suit real-time application using more predictive, recognitive, and artificial intelligence tools. This paper presents a review-based discussion of these new trends in real-time optimal power management methods. More focus is given to the adaptation tools used to boost the optimality of these methods in real-time. The contribution of this work can be identified in two points: first, to provide researchers and scholars with an overview of different power management methods; second, to point out the state-of-the-art trends in real-time optimal methods and to highlight promising approaches for future development.

  3. Improved solution for ill-posed linear systems using a constrained optimization ruled by a penalty: evaluation in nuclear medicine tomography

    International Nuclear Information System (INIS)

    Walrand, Stephan; Jamar, François; Pauwels, Stanislas

    2009-01-01

    Ill-posed linear systems occur in many different fields. A class of regularization methods, called constrained optimization, aims to determine the extremum of a penalty function whilst constraining an objective function to a likely value. We propose here a novel heuristic way to screen the local extrema satisfying the discrepancy principle. A modified version of the Landweber algorithm is used for the iteration process. After finding a local extremum, a bound is performed to the 'farthest' estimate in the data space still satisfying the discrepancy principle. Afterwards, the modified Landweber algorithm is again applied to find a new local extremum. This bound-iteration process is repeated until a satisfying solution is reached. For evaluation in nuclear medicine tomography, a novel penalty function that preserves the edge steps in the reconstructed solution was evaluated on Monte Carlo simulations and using real SPECT acquisitions as well. Surprisingly, the first bound always provided a significantly better solution in a wide range of statistics
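    The building block the abstract modifies is the Landweber iteration stopped by the discrepancy principle. The sketch below shows only that plain, unmodified iteration; the paper's modified version, penalty function and bound step are not reproduced, and the function name and step-size choice are assumptions.

```python
import numpy as np

def landweber(A, b, noise_level, tau=1.1, step=None, max_iter=10000):
    """Plain Landweber iteration x_{k+1} = x_k + step * A^T (b - A x_k),
    stopped when the residual satisfies the discrepancy principle
    ||A x - b|| <= tau * noise_level."""
    if step is None:
        # convergence requires 0 < step < 2 / ||A||^2 (spectral norm)
        step = 1.0 / np.linalg.norm(A, 2) ** 2
    x = np.zeros(A.shape[1])
    for _ in range(max_iter):
        r = b - A @ x
        if np.linalg.norm(r) <= tau * noise_level:
            break
        x = x + step * (A.T @ r)
    return x
```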

  4. Optimizing Quality of Care and Patient Safety in Malaysia: The Current Global Initiatives, Gaps and Suggested Solutions

    OpenAIRE

    Jarrar, Mu'taman; Rahman, Hamzah Abdul; Don, Mohammad Sobri

    2015-01-01

    Background and Objective: Demand for health care service has significantly increased, while the quality of healthcare and patient safety has become national and international priorities. This paper aims to identify the gaps and the current initiatives for optimizing the quality of care and patient safety in Malaysia. Design: Review of the current literature. Highly cited articles were used as the basis to retrieve and review the current initiatives for optimizing the quality of care and patie...

  5. The MusIC method: a fast and quasi-optimal solution to the muscle forces estimation problem

    OpenAIRE

    Muller, Antoine; Pontonnier, Charles; Dumont, Georges

    2018-01-01

    International audience; The present paper aims at presenting a fast and quasi-optimal method of muscle forces estimation: the MusIC method. It consists in interpolating a first estimation in a database generated offline thanks to a classical optimization problem, and then correcting it to respect the motion dynamics. Three different cost functions – two polynomial criteria and a min/max criterion – were tested on a planar musculoskeletal model. The MusIC method provides a computation frequenc...

  6. Statement of Problem of Pareto Frontier Management and Its Solution in the Analysis and Synthesis of Optimal Systems

    Directory of Open Access Journals (Sweden)

    I. K. Romanova

    2015-01-01

    Full Text Available The article concerns multi-criteria optimization (MCO), which assumes that the operation quality criteria of a system are independent, and specifies a way to improve the values of these criteria. Mutual contradiction of some criteria is a major problem in MCO. One of the most important areas of research is to obtain the so-called Pareto-optimal options. The subject of research is the Pareto front, also called the Pareto frontier. The article discusses front classifications by geometric representation for the case of a two-criterion task. It presents a mathematical description of the front characteristics using gradients and their projections. A review of the current domestic and foreign literature has revealed that the aim of works on constructing the Pareto frontier is to conduct research under conditions of uncertainty, in the stochastic statement, with no restrictions. Topologies both in the two- and in the three-dimensional case are under consideration. The targets of modern applications are multi-agent systems and groups of players in differential games. However, none of the considered works addresses active management of the front. The objective of this article is to discuss the Pareto frontier research problem in a new setting, namely, with active participation of the system co-developers and/or the decision makers (DM) in the management of the Pareto frontier. It notes that such a formulation differs from the traditionally accepted approach based on the analysis of already existing solutions. The article discusses three ways to describe the quality of the object management system. The first way is to use direct quality criteria for the model of a closed system as the vibrational level of the general form. The second one is to study a specific two-loop system of aircraft control using the angular velocity and normal acceleration loops. The third is the use of integrated quality criteria. In all three cases, the selected criteria are

  7. Patterns and singular features of extreme fluctuational paths of a periodically driven system

    International Nuclear Information System (INIS)

    Chen, Zhen; Liu, Xianbin

    2016-01-01

    Large fluctuations of an overdamped periodically driven oscillating system are investigated theoretically and numerically in the limit of weak noise. Optimal paths fluctuating to a certain point are given by statistical analysis using the concept of the prehistory probability distribution. The validity of the statistical results is verified by solutions of the boundary value problem. Optimal paths are found to change topologically when terminating points lie on opposite sides of a switching line. Patterns of extreme paths are plotted through a proper parameterization of the Lagrangian manifold, which has a complicated structure. Several extreme paths to the same point are obtained from multiple solutions of the boundary value problem. Actions along various extreme paths are calculated and associated analysis is performed in relation to the singular features of the patterns. - Highlights: • Both extreme and optimal paths are obtained by various methods. • Boundary value problems are solved to ensure the validity of statistical results. • Topological structure of Lagrangian manifold is considered. • Singularities of the pattern of extreme paths are studied.

  8. Optimizing Quality of Care and Patient Safety in Malaysia: The Current Global Initiatives, Gaps and Suggested Solutions.

    Science.gov (United States)

    Jarrar, Mu'taman; Abdul Rahman, Hamzah; Don, Mohammad Sobri

    2015-10-20

    Demand for health care service has significantly increased, while the quality of healthcare and patient safety have become national and international priorities. This paper aims to identify the gaps and the current initiatives for optimizing the quality of care and patient safety in Malaysia. Review of the current literature. Highly cited articles were used as the basis to retrieve and review the current initiatives for optimizing the quality of care and patient safety. The country health plan of the Ministry of Health (MOH) Malaysia and the MOH Malaysia Annual Reports were reviewed. The MOH has set four strategies for optimizing quality and sustaining quality of life. The 10th Malaysia Health Plan promotes the theme "1 Care for 1 Malaysia" in order to sustain the quality of care. Despite these efforts, the total number of complaints received by the medico-legal section of the MOH Malaysia is increasing. The current global initiatives indicated that quality performance generally belongs to three main categories: patient-, staffing-, and working environment-related factors. There is no single intervention for optimizing quality of care to maintain patient safety. Multidimensional efforts and interventions are recommended in order to optimize the quality of care and patient safety in Malaysia.

  9. Optimizing Quality of Care and Patient Safety in Malaysia: The Current Global Initiatives, Gaps and Suggested Solutions

    Science.gov (United States)

    Jarrar, Mu’taman; Rahman, Hamzah Abdul; Don, Mohammad Sobri

    2016-01-01

    Background and Objective: Demand for health care service has significantly increased, while the quality of healthcare and patient safety have become national and international priorities. This paper aims to identify the gaps and the current initiatives for optimizing the quality of care and patient safety in Malaysia. Design: Review of the current literature. Highly cited articles were used as the basis to retrieve and review the current initiatives for optimizing the quality of care and patient safety. The country health plan of the Ministry of Health (MOH) Malaysia and the MOH Malaysia Annual Reports were reviewed. Results: The MOH has set four strategies for optimizing quality and sustaining quality of life. The 10th Malaysia Health Plan promotes the theme "1 Care for 1 Malaysia" in order to sustain the quality of care. Despite these efforts, the total number of complaints received by the medico-legal section of the MOH Malaysia is increasing. The current global initiatives indicated that quality performance generally belongs to three main categories: patient-, staffing-, and working environment-related factors. Conclusions: There is no single intervention for optimizing quality of care to maintain patient safety. Multidimensional efforts and interventions are recommended in order to optimize the quality of care and patient safety in Malaysia. PMID:26755459

  10. Design of future municipal wastewater treatment plants: A mathematical approach to manage complexity and identify optimal solutions

    DEFF Research Database (Denmark)

    Bozkurt, Hande; Quaglia, Alberto; Gernaey, Krist

    The increasing number of alternative wastewater treatment (WWT) technologies and stricter effluent requirements imposed by regulations make the early stage decision making for WWTP layout design, which is currently based on expert decisions and previous experiences, much harder. This paper...... therefore proposes a new approach based on mathematical programming to manage the complexity of the problem and generate/identify novel and optimal WWTP layouts for municipal/domestic wastewater treatment. Towards this end, after developing a database consisting of primary, secondary and tertiary WWT...... solved to obtain the optimal WWT network and the optimal wastewater and sludge flow through the network. The tool is evaluated on a case study, which was chosen as the Benchmark Simulation Model no.1 (BSM1) and many retrofitting options for obtaining a cost-effective treatment were investigated...

  11. Identifying and prioritizing indicators and effective solutions to optimization the use of wood in construction classical furniture by using AHP (Case study of Qom

    Directory of Open Access Journals (Sweden)

    Mohammad Ghofrani

    2017-02-01

    Full Text Available Abstract: The aim of this study was to identify and prioritize the indicators and provide effective solutions to optimize the use of wood in the construction of classical furniture using the analytic hierarchy process (case study in Qom). For this purpose, based on studies and results of other researchers and interviews with experts, the factors affecting the optimization of wood consumption were divided into 4 main categories and 23 sub-indicators. The importance of the sub-indicators was determined by AHP after obtaining feedback from furniture producers. The results show that the main criteria of surface design and human resources are of great importance. In addition, among the 23 sub-indicators affecting the optimization of the use of wood in the construction of classical furniture, ergonomics, style, skill training and inlay work in the classical furniture industry, with weight values of 0.247, 0.181, 0.124 and 0.087, are of paramount importance, and solutions based on the use of a specialist workforce were a priority.
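    For readers unfamiliar with AHP, the following is a generic sketch of how priority weights are derived from a pairwise comparison matrix (principal eigenvector plus a consistency check). The matrix values and criteria labels below are hypothetical, not the study's survey data.

```python
import numpy as np

def ahp_weights(pairwise):
    """Priority weights from an AHP pairwise comparison matrix
    (principal right eigenvector, normalized to sum to 1) and the
    consistency ratio CR = CI / RI."""
    A = np.asarray(pairwise, dtype=float)
    n = A.shape[0]
    eigvals, eigvecs = np.linalg.eig(A)
    k = np.argmax(eigvals.real)
    w = np.abs(eigvecs[:, k].real)
    w = w / w.sum()
    lam_max = eigvals[k].real
    ci = (lam_max - n) / (n - 1)                      # consistency index
    ri = {2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12, 6: 1.24}.get(n, 1.32)  # Saaty's random indices
    return w, (ci / ri if ri > 0 else 0.0)

# illustrative 4x4 matrix for four hypothetical main criteria
M = [[1,   3,   5,   2],
     [1/3, 1,   2,   1/2],
     [1/5, 1/2, 1,   1/3],
     [1/2, 2,   3,   1]]
weights, cr = ahp_weights(M)
print(weights, cr)   # weights sum to 1; CR < 0.1 is usually considered consistent
```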

  12. High energy (42-66 MeV reactions) fast neutron dose optimization studies in the head and neck, thorax, upper abdomen, pelvis and extremities

    International Nuclear Information System (INIS)

    Griffin, T.W.; Laramore, G.E.; Maor, M.H.; Hendrickson, F.R.; Parker, R.G.; Davis, L.W.

    1990-01-01

    550 Patients were entered into a set of dose-searching studies designed to determine normal tissue tolerances to high energy (42-66 MeV reactions) fast neutrons delivered in 12 equal fractions over 4 weeks. Patients were stratified by treatment facility and then randomized to receive 16, 18 or 20 Gy for tumors located in the upper abdomen or pelvis, and 18, 20 or 22 Gy for tumors located in the head and neck, thorax or extremities. Following completion of the randomized protocols, additional patients were studied at the 20.4 Gy level in the head and neck, thorax and pelvis. Normal tissue effect scoring was accomplished using the RTOG-EORTC acute and late normal tissue effect scales. Acute Grade 3+ toxicity rates in the head and neck were 19 per cent for 20/20.4 Gy and 20 per cent for 22 Gy. Time adjusted late toxicity rates in the head and neck at 12 months were 15 per cent for 20/20.4 Gy and 0 per cent for 22 Gy. The 18 Gy treatment arm of the head and neck protocol was dropped early in the study after only two patients were accrued. For cases treated in the thorax, acute Grade 3+ toxicity rates were 6 per cent for 18 Gy, 15 per cent for 20/20.4 Gy and 7 per cent for 22 Gy. Late toxicity rates at 12 months were 0 per cent for 18 Gy, 11 per cent for 20/20.4 Gy and 18 per cent for 22 Gy. Acute Grade 3+ toxicity rates in the upper abdomen were 0 per cent for 16 Gy, 18 per cent for 18 Gy and 12 per cent for 20 Gy. There were no Grade 3+ late toxicities in the upper abdomen. In the pelvis acute Grade 3+ toxicity rates were 0 per cent for 16 Gy, 3 per cent for 18 Gy and 3 per cent for 20/20.4 Gy. Late Grade 3+ toxicities at 24 months were 20 per cent for 16 Gy, 5 per cent for 18 Gy and 24 per cent for 20/20.4 Gy. In the extremities, acute Grade 3+ toxicity rates were 7 per cent for 20 Gy and 21 per cent for 22 Gy, while at 12 months, late Grade 3+ toxicity rates were 14 and 35 per cent respectively. The 18 Gy treatment arm of the extremities protocol was dropped early

  13. [Study of CT Automatic Exposure Control System (CT-AEC) Optimization in CT Angiography of Lower Extremity Artery by Considering Contrast-to-Noise Ratio].

    Science.gov (United States)

    Inada, Satoshi; Masuda, Takanori; Maruyama, Naoya; Yamashita, Yukari; Sato, Tomoyasu; Imada, Naoyuki

    2016-01-01

    To evaluate the image quality and the radiation dose reduction achieved by the settings of the computed tomography automatic exposure control system (CT-AEC) in computed tomographic angiography (CTA) of the lower extremity arteries. Two methods of setting the CT-AEC were compared [conventional and contrast-to-noise ratio (CNR) methods]. The conventional method was set with noise index (NI): 14 and tube current threshold: 10-750 mA. The CNR method was set with NI: 18, minimum tube current: (X+Y)/2 mA (X, Y: maximum X (Y)-axis tube current value of the leg at NI: 14), and maximum tube current: 750 mA. The image quality was evaluated by the CNR, and radiation dose reduction was evaluated by the dose-length product (DLP). In the conventional method, mean CNRs for pelvis, femur, and leg were 19.9±4.8, 20.4±5.4, and 16.2±4.3, respectively. There was a significant difference between the CNRs of pelvis and leg (P<0.001), and between femur and leg (P<0.001). In the CNR method, mean CNRs for pelvis, femur, and leg were 15.2±3.3, 15.3±3.2, and 15.3±3.1, respectively; no significant difference between pelvis, femur, and leg (P=0.973) was observed in the CNR method. Mean DLPs were 1457±434 mGy·cm in the conventional method and 1049±434 mGy·cm in the CNR method. There was a significant difference between the DLPs of the conventional method and the CNR method (P<0.001). The CNR method gave equal CNRs for pelvis, femur, and leg, and was beneficial for radiation dose reduction in CTA of the lower extremity arteries.

  14. On the optimal design of forward osmosis desalination systems with NH3-CO2-H2O solutions

    NARCIS (Netherlands)

    Gazzani, Matteo; Pérez-Calvo, José Francisco; Sutter, Daniel; Mazzotti, Marco

    2017-01-01

    Membrane-based forward osmosis, especially when NH3-CO2-H2O mixtures are adopted as draw solutions, is a promising new process for clean water production, including seawater desalination and wastewater treatment. In such a process, water is first removed from the feed (e.g. seawater) by exploiting

  15. Optimization of palm oil extraction from Decanter cake of small crude palm oil mill by aqueous surfactant solution using RSM

    Science.gov (United States)

    Ahmadi Pirshahid, Shewa; Arirob, Wallop; Punsuvon, Vittaya

    2018-04-01

    The use of hexane to extract vegetable oil from oilseeds or seed cake is of growing concern due to its environmental impact, such as its smell and toxicity. In our method, Response Surface Methodology (RSM) was applied to study the optimum condition for extracting decanter cake obtained from a small crude palm oil mill with an aqueous surfactant solution. For the first time, we provide an optimum condition from a preliminary study of decanter cake extraction to obtain the maximum oil yield. The result from the preliminary study was further used in the RSM study using a Central Composite Design (CCD) that consisted of thirty experiments. The effects of four independent variables on the dependent variables were studied: the concentration of Sodium Dodecyl Sulfate (SDS) as surfactant, temperature, the ratio by weight to volume of cake to surfactant solution, and the amount of sodium chloride (NaCl). Data were analyzed using Design-Expert 8 software. The results showed that the optimum conditions for decanter cake extraction were 0.016 M SDS solution concentration, 73°C extraction temperature, 1:10 (g:ml) ratio of decanter cake to SDS solution and 2% (w/w) NaCl. This condition gave a 77.05% (w/w) oil yield. The chemical properties of the palm oil extracted by this aqueous surfactant extraction were further investigated and compared with hexane extraction. The results showed that all properties of both extractions were nearly the same.

  16. Statistically optimal estimation of Greenland Ice Sheet mass variations from GRACE monthly solutions using an improved mascon approach

    NARCIS (Netherlands)

    Ran, J.; Ditmar, P.G.; Klees, R.; Farahani, H.

    2017-01-01

    We present an improved mascon approach to transform monthly spherical harmonic solutions based on GRACE satellite data into mass anomaly estimates in Greenland. The GRACE-based spherical harmonic coefficients are used to synthesize gravity anomalies at satellite altitude, which are then inverted

  17. Optimal platform design using non-dominated sorting genetic algorithm II and technique for order of preference by similarity to ideal solution; application to automotive suspension system

    Science.gov (United States)

    Shojaeefard, Mohammad Hassan; Khalkhali, Abolfazl; Faghihian, Hamed; Dahmardeh, Masoud

    2018-03-01

    Unlike conventional approaches where optimization is performed on a unique component of a specific product, optimum design of a set of components for use in a product family can significantly reduce costs. Increasing commonality and performance of the product platform simultaneously is a multi-objective optimization problem (MOP). Several optimization methods are reported to solve these MOPs. However, what is less discussed is how to find the trade-off points among the obtained non-dominated optimum points. This article investigates the optimal design of a product family using the non-dominated sorting genetic algorithm II (NSGA-II) and proposes the employment of the technique for order of preference by similarity to ideal solution (TOPSIS) to find the trade-off points among the obtained non-dominated results while compromising all objective functions together. A case study for a family of suspension systems is presented, considering performance and commonality. The results indicate the effectiveness of the proposed method to obtain the trade-off points with the best possible performance while maximizing the common parts.
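    A compact sketch of the standard TOPSIS ranking step applied to a set of non-dominated points is given below. The weights, the criteria, and the example Pareto points are illustrative assumptions; they are not taken from the suspension-system case study.

```python
import numpy as np

def topsis(scores, weights, benefit):
    """Rank alternatives with TOPSIS.

    scores  : (n_alternatives, n_criteria) matrix of objective values
    weights : criteria weights, summing to 1
    benefit : boolean per criterion, True = larger is better
    Returns closeness to the ideal solution (higher = better).
    """
    X = np.asarray(scores, dtype=float)
    w = np.asarray(weights, dtype=float)
    norm = X / np.linalg.norm(X, axis=0)           # vector normalization per criterion
    V = norm * w                                   # weighted normalized matrix
    ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
    anti  = np.where(benefit, V.min(axis=0), V.max(axis=0))
    d_pos = np.linalg.norm(V - ideal, axis=1)
    d_neg = np.linalg.norm(V - anti, axis=1)
    return d_neg / (d_pos + d_neg)

# hypothetical Pareto points: (performance, commonality), both to be maximized
pareto = np.array([[0.90, 0.40], [0.80, 0.60], [0.70, 0.75], [0.55, 0.90]])
closeness = topsis(pareto, weights=[0.5, 0.5], benefit=[True, True])
print("trade-off design index:", closeness.argmax())
```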

  18. Optimization of photocatalytic treatment of dye solution on supported TiO2 nanoparticles by central composite design: Intermediates identification

    International Nuclear Information System (INIS)

    Khataee, A.R.; Fathinia, M.; Aber, S.; Zarei, M.

    2010-01-01

    Optimization of photocatalytic degradation of C.I. Basic Blue 3 (BB3) under UV light irradiation using TiO2 nanoparticles in a rectangular photoreactor was studied. The investigated TiO2 was Millennium PC-500 (crystallites mean size 5-10 nm) immobilized on non-woven paper. Central composite design was used for optimization of the UV/TiO2 process. Predicted values of decolorization efficiency were found to be in good agreement with experimental values (R² = 0.9686 and Adj-R² = 0.9411). Optimization results showed that maximum decolorization efficiency was achieved at the optimum conditions: initial dye concentration 10 mg/L, UV light intensity 47.2 W/m², flow rate 100 mL/min and reaction time 120 min. Photocatalytic mineralization of BB3 was monitored by total organic carbon (TOC) decrease, and changes in UV-vis and FT-IR spectra. The photodegradation compounds were analyzed by UV-vis, FT-IR and GC-mass techniques. The degradation pathway of BB3 was proposed based on the identified compounds.

  19. The application of ant colony optimization in the solution of 3D traveling salesman problem on a sphere

    Directory of Open Access Journals (Sweden)

    Hüseyin Eldem

    2017-08-01

    Full Text Available The Traveling Salesman Problem (TSP) is a problem in combinatorial optimization in which a salesperson has to visit all cities at the minimum cost (minimum route) and return to the starting city (node). Today, many optimization algorithms have been used to find the minimum cost of this problem, the major ones being metaheuristic algorithms. In this study, one of the metaheuristic methods, the Ant Colony Optimization (ACO) method (Max-Min Ant System, MMAS), was used to solve a Non-Euclidean TSP consisting of sets of points, with different point counts, located on the surface of a sphere. In this study, seven point sets with different point counts were used. The performance of the MMAS method in solving the Non-Euclidean TSP was demonstrated by different experiments. The results produced by ACO are also compared with the Discrete Cuckoo Search Algorithm (DCS) and the Genetic Algorithm (GA) reported in the literature. The experiments for the TSP on a sphere show that ACO's average results were better than GA's average results, and the best results of ACO were also more successful than those of DCS.
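    The following is a generic sketch of a Max-Min Ant System loop over an arbitrary distance matrix, not the implementation used in the study; all parameter values are illustrative. The sphere geometry would enter only through the distance matrix (e.g. great-circle distances between the points).

```python
import numpy as np

def mmas_tsp(dist, n_ants=20, n_iter=200, alpha=1.0, beta=3.0, rho=0.1, seed=0):
    """Max-Min Ant System for the TSP on an arbitrary distance matrix."""
    rng = np.random.default_rng(seed)
    n = dist.shape[0]
    eta = 1.0 / (dist + np.eye(n))              # heuristic desirability (avoid /0 on diagonal)
    tau_max, tau_min = 1.0, 1.0 / (2.0 * n)
    tau = np.full((n, n), tau_max)              # pheromone initialized at the upper bound
    best_tour, best_len = None, np.inf
    for _ in range(n_iter):
        for _ in range(n_ants):
            tour = [rng.integers(n)]
            unvisited = set(range(n)) - {tour[0]}
            while unvisited:
                i = tour[-1]
                cand = np.array(sorted(unvisited))
                p = (tau[i, cand] ** alpha) * (eta[i, cand] ** beta)
                tour.append(rng.choice(cand, p=p / p.sum()))
                unvisited.remove(tour[-1])
            length = sum(dist[tour[k], tour[(k + 1) % n]] for k in range(n))
            if length < best_len:
                best_tour, best_len = tour, length
        tau *= (1.0 - rho)                      # evaporation
        for k in range(n):                      # only the best-so-far tour deposits pheromone
            a, b = best_tour[k], best_tour[(k + 1) % n]
            tau[a, b] += 1.0 / best_len
            tau[b, a] += 1.0 / best_len
        tau = np.clip(tau, tau_min, tau_max)    # MMAS pheromone bounds
    return best_tour, best_len
```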

  20. Improved creep strength of nickel-base superalloys by optimized γ/γ′ partitioning behavior of solid solution strengthening elements

    International Nuclear Information System (INIS)

    Pröbstle, M.; Neumeier, S.; Feldner, P.; Rettig, R.; Helmer, H.E.; Singer, R.F.; Göken, M.

    2016-01-01

    Solid solution strengthening of the γ matrix is one key factor for improving the creep strength of single crystal nickel-base superalloys at high temperatures. Therefore a strong partitioning of solid solution hardening elements to the matrix is beneficial for high temperature creep strength. Different Rhenium-free alloys which are derived from CMSX-4 are investigated. The alloys have been characterized regarding microstructure, phase compositions as well as creep strength. It is found that increasing the Titanium (Ti) as well as the Tungsten (W) content causes a stronger partitioning of the solid solution strengtheners, in particular W, to the γ phase. As a result the creep resistance is significantly improved. Based on these ideas, a Rhenium-free alloy with an optimized chemistry regarding the partitioning behavior of W is developed and validated in the present study. It shows comparable creep strength to the Rhenium containing second generation alloy CMSX-4 in the high temperature / low stress creep regime and is less prone to the formation of deleterious topologically close packed (TCP) phases. This more effective usage of solid solution strengtheners can enhance the creep properties of nickel-base superalloys while reducing the content of strategic elements like Rhenium.

  1. Geologic disposal as optimal solution of managing the spent nuclear fuel and high-level radioactive waste

    International Nuclear Information System (INIS)

    Ilie, P.; Didita, L.; Ionescu, A.; Deaconu, V.

    2002-01-01

    To date there exist three alternatives to the concept of geological disposal: 1. storing the high-level waste (HLW) and spent nuclear fuel (SNF) in ground repositories; 2. solutions implying advanced separation processes, including partitioning and transmutation (P and T), and eventual disposal in outer space; 3. geological disposal in repositories excavated in rock. Ground storage seems advantageous as it ensures a secure, sustainable storage system over many centuries (about 300 years). On the other hand, ground storage would only postpone decision making and would eventually be followed by geological disposal. Research in the P and T field is expected to entail a significant reduction of the amount of long-lived radioactive waste, although long-term geological disposal will not be eliminated. In view of the high cost, as well as the diversity of conditions in the countries owning power reactors, sharing a common geological repository appears to be a reasonable regional solution for HLW disposal. In Romania, legislation concerning radioactive waste is based on the Law concerning Spent Nuclear Fuel and Radioactive Waste Management in View of Final Disposal. It is admitted at present that for Romania geological disposal is not yet a pressing issue, and hence intermediate ground storage of SNF will allow time for finding a better final solution.

  2. Factorial experimental design for the optimization of catalytic degradation of malachite green dye in aqueous solution by Fenton process

    Directory of Open Access Journals (Sweden)

    A. Elhalil

    2016-09-01

    Full Text Available This work focuses on the optimization of the catalytic degradation of malachite green dye (MG) by the Fenton process "Fe2+/H2O2". A 2^4 full factorial experimental design was used to evaluate the effects of four factors considered in the optimization of the oxidative process: concentration of MG (X1), concentration of Fe2+ (X2), concentration of H2O2 (X3) and temperature (X4). Individual and interaction effects of the factors that influenced the percentage of dye degradation were tested. The analysis of the interactions between the four parameters shows that there is a dependency between the concentration of MG and the concentration of Fe2+, and between the concentration of Fe2+ and the concentration of H2O2, expressed by the large values of the interaction coefficients. The analysis of variance proved that the concentration of MG, the concentration of Fe2+ and the concentration of H2O2 have an influence on the catalytic degradation, while this is not the case for the temperature. In the optimization, the strong agreement between observed and predicted degradation efficiency, the correlation coefficient of the model (R² = 0.986) and the large value of the F-ratio proved the validity of the model. The optimum degradation efficiency of malachite green was 93.83%, when the operational parameters were a malachite green concentration of 10 mg/L, Fe2+ concentration of 10 mM, H2O2 concentration of 25.6 mM and temperature of 40 °C.
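    As a generic illustration of how a 2^4 design in coded units is generated and how main and two-factor interaction effects are estimated from the responses, a minimal sketch follows. The synthetic response values are assumptions for demonstration and are not the study's measurements.

```python
import itertools
import numpy as np

def full_factorial_2k(k):
    """All 2^k runs in coded units (-1, +1), one factor per column."""
    return np.array(list(itertools.product([-1, 1], repeat=k)), dtype=float)

def effects(design, y):
    """Main and two-factor interaction effects for a full 2^k design.

    effect = (mean response at +1) - (mean response at -1) = 2 * contrast / N
    """
    N, k = design.shape
    out = {}
    for i in range(k):
        out[f"X{i+1}"] = 2.0 * (design[:, i] @ y) / N
    for i, j in itertools.combinations(range(k), 2):
        out[f"X{i+1}X{j+1}"] = 2.0 * ((design[:, i] * design[:, j]) @ y) / N
    return out

design = full_factorial_2k(4)                       # 16 runs: MG, Fe2+, H2O2, temperature
rng = np.random.default_rng(0)
# hypothetical degradation responses with effects on factors 2 and 3 plus noise
y = 60 + 8*design[:, 1] + 10*design[:, 2] + 4*design[:, 1]*design[:, 2] + rng.normal(0, 1, 16)
print(effects(design, y))
```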

  3. Suppressed power saturation due to optimized optical confinement in 9xx nm high-power diode lasers that use extreme double asymmetric vertical designs

    Science.gov (United States)

    Kaul, T.; Erbert, G.; Maaßdorf, A.; Knigge, S.; Crump, P.

    2018-03-01

    Broad area lasers with novel extreme double asymmetric structure (EDAS) vertical designs featuring increased optical confinement in the quantum well, Γ, are shown to have improved temperature stability without compromising series resistance, internal efficiency or losses. Specifically, we present here vertical design considerations for the improved continuous wave (CW) performance of devices operating at 940 nm, based on systematically increasing Γ from 0.26% to 1.1%, and discuss the impact on power saturation mechanisms. The results indicate that key power saturation mechanisms at high temperatures originate in high threshold carrier densities, which arise in the quantum well at low Γ. The characteristic temperatures, T0 and T1, are determined under short pulse conditions and are used to clarify the thermal contribution to power limiting mechanisms. Although increased Γ reduces thermal power saturation, it is accompanied by increased optical absorption losses in the active region, which has a significant impact on the differential external quantum efficiency, η_diff. To quantify the impact of internal optical losses contributed by the quantum well, a resonator length-dependent simulation of η_diff is performed and compared to the experiment, which also allows the estimation of experimental values for the light absorption cross sections of electrons and holes inside the quantum well. Overall, the analysis enables vertical designs to be developed, for devices with maximized power conversion efficiency at high CW optical power and high temperatures, in a trade-off between absorption in the well and power saturation. The best balance to date is achieved in devices using EDAS designs with Γ = 0.54%, which deliver efficiencies of 50% at 14 W optical output power at an elevated junction temperature of 105 °C.

  4. Sorption of phenol from synthetic aqueous solution by activated saw dust: Optimizing parameters with response surface methodology

    Directory of Open Access Journals (Sweden)

    Omprakash Sahu

    2017-12-01

    Full Text Available Organic pollutants have an adverse effect on the neighboring environment. Industrial activities are the major sources of different organic pollutants. These primary pollutants react with the surroundings and form secondary pollutants, which persist for a long time. The present investigation was carried out on the surface of activated sawdust for phenol elimination. The process parameters initial concentration, contact time, adsorbent dose and pH were optimized by response surface methodology (RSM). In the numerical optimization for sawdust (SD), at an initial concentration of 10 mg/l, contact time of 1.5 h, adsorbent dose of 4 g and pH 2, the optimum response was 78.3% adsorption. Analysis of variance (ANOVA) was used to judge the adequacy of the central composite design, and the quadratic model was found to be suitable. The coefficients of determination were found to be Adj-R² = 0.7223 and Pred-R² = 0.5739, with significant regression at the 95% confidence level.

  5. Comparing and Optimizing Nitrate Adsorption from Aqueous Solution Using Fe/Pt Bimetallic Nanoparticles and Anion Exchange Resins

    Directory of Open Access Journals (Sweden)

    Muhammad Daud

    2015-01-01

    Full Text Available This research work was carried out for the removal of nitrate from raw water for a drinking water supply. Nitrate is a widespread ground water contaminant. Methodology employed in this study included adsorption on metal based nanoparticles and ion exchange using anionic resins. Fe/Pt bimetallic nanoparticles were prepared in the laboratory, by the reduction of their respective salts using sodium borohydride. Scanning electron microscope, X-ray diffraction, energy dispersive spectrometry, and X-ray florescence techniques were utilized for characterization of bimetallic Fe/Pt nanoparticles. Optimum dose, pH, temperature, and contact time were determined for NO3- removal through batch tests, both for metal based nanoparticles and anionic exchange resin. Adsorption data fitted well the Langmuir isotherm and conformed to the pseudofirst-order kinetic model. Results indicated 97% reduction in nitrate by 0.25 mg/L of Fe/Pt nanoparticles at pH 7 and 83% reduction in nitrate was observed using 0.50 mg/L anionic exchange resins at pH 4 and contact time of one hour. Overall, Fe/Pt bimetallic nanoparticles demonstrated greater NO3- removal efficiency due to the small particle size, extremely large surface area (627 m2/g, and high adsorption capacity.

  6. Comparing and Optimizing Nitrate Adsorption from Aqueous Solution Using Fe/Pt Bimetallic Nanoparticles and Anion Exchange Resins

    International Nuclear Information System (INIS)

    Daud, M.; Khan, Z.; Ashgar, A.; Danish, M. I.; Qazi, I. A.

    2015-01-01

    This research work was carried out for the removal of nitrate from raw water for a drinking water supply. Nitrate is a widespread ground water contaminant. The methodology employed in this study included adsorption on metal based nanoparticles and ion exchange using anionic resins. Fe/Pt bimetallic nanoparticles were prepared in the laboratory by the reduction of their respective salts using sodium borohydride. Scanning electron microscopy, X-ray diffraction, energy dispersive spectrometry, and X-ray fluorescence techniques were utilized for characterization of the bimetallic Fe/Pt nanoparticles. Optimum dose, pH, temperature, and contact time were determined for nitrate removal through batch tests, both for metal based nanoparticles and anionic exchange resin. Adsorption data fitted the Langmuir isotherm well and conformed to the pseudo-first-order kinetic model. Results indicated 97% reduction in nitrate by 0.25 mg/L of Fe/Pt nanoparticles at pH 7, and 83% reduction in nitrate was observed using 0.50 mg/L anionic exchange resins at pH 4 and a contact time of one hour. Overall, Fe/Pt bimetallic nanoparticles demonstrated greater nitrate removal efficiency due to the small particle size, extremely large surface area (627 m²/g), and high adsorption capacity.

  7. A non-linear optimal Discontinuous Petrov-Galerkin method for stabilising the solution of the transport equation

    International Nuclear Information System (INIS)

    Merton, S. R.; Smedley-Stevenson, R. P.; Pain, C. C.; Buchan, A. G.; Eaton, M. D.

    2009-01-01

    This paper describes a new Non-Linear Discontinuous Petrov-Galerkin (NDPG) method and its application to the one-speed Boltzmann Transport Equation (BTE) for space-time problems. The purpose of the method is to remove unwanted oscillations in the transport solution which occur in the vicinity of sharp flux gradients, while improving computational efficiency and numerical accuracy. This is achieved by applying artificial dissipation in the solution gradient direction, internal to an element, using a novel finite element (FE) Riemann approach. The amount of dissipation added acts internal to each element. This is done using a gradient-informed scaling of the advection velocities in the stabilisation term. This makes the method in its most general form non-linear. The method is designed to be independent of the angular expansion framework. This is demonstrated for both the discrete ordinates (SN) and spherical harmonics (PN) descriptions of the angular variable. Results show the scheme performs consistently well in demanding time-dependent and multi-dimensional radiation transport problems. (authors)

  8. Optimization of biosurfactant production in soybean oil by rhodococcus rhodochrous and its utilization in remediation of cadmium-contaminated solution

    Science.gov (United States)

    Suryanti, Venty; Hastuti, Sri; Andriani, Dewi

    2016-02-01

    Biosurfactant production by Rhodococcus rhodochrous in soybean oil was developed, and the effects of medium composition and fermentation time were evaluated. The optimum condition for biosurfactant production was achieved when a medium containing 30 g/L TSB (tryptic soy broth) and 20% v/v soybean oil was used with 7 days of fermentation. The biosurfactant was identified as a glycolipid-type biosurfactant with a critical micelle concentration (CMC) of 896 mg/L. The biosurfactant formed an oil-in-water emulsion and was able to reduce the surface tension of palm oil by about 52%, and it could stabilize the emulsion for up to 12 days. The batch removal of cadmium metal ions by crude and partially purified biosurfactants from a synthetic aqueous solution at pH 6 has been examined. The results showed that the crude biosurfactant had a much better adsorption ability for Cd(II) than the partially purified biosurfactant. However, it was found that there was no significant difference in the adsorption of Cd(II) with 5 and 10 minutes of contact time. The results indicated that the biosurfactant could be used in remediation of heavy metals from contaminated aqueous solution.

  9. Response surface methodology for the optimization of lanthanum removal from an aqueous solution using a Fe3O4/chitosan nanocomposite

    Energy Technology Data Exchange (ETDEWEB)

    Haldorai, Yuvaraj [Department of Energy and Materials Engineering, Dongguk University-Seoul, Seoul (Korea, Republic of); Rengaraj, Arunkumar [Department of Biological Engineering, Biohybrid Systems Research Center (BSRC), Inha University, Incheon 402-751 (Korea, Republic of); Ryu, Taegong; Shin, Junho [Mineral Resources Research Division, Korea Institute of Geoscience and Mineral Resources, Daejeon 305-350 (Korea, Republic of); Huh, Yun Suk, E-mail: yunsuk.huh0311@gmail.com [Department of Biological Engineering, Biohybrid Systems Research Center (BSRC), Inha University, Incheon 402-751 (Korea, Republic of); Han, Young-Kyu, E-mail: ykenergy@dongguk.edu [Department of Energy and Materials Engineering, Dongguk University-Seoul, Seoul (Korea, Republic of)

    2015-05-15

    Highlights: • Magnetite/chitosan composite for lanthanum removal. • Response surface methodology was used for optimization. • A 99.88% removal of La3+ was observed at 40 °C, pH 11, and 50 min. • Adsorption process was significantly affected by pH and adsorbent dosage. • Biocompatible, eco-friendly and a low-cost adsorbent. - Abstract: In the present work, magnetite nanoparticles/chitosan composites (Fe3O4/CS) were prepared by a chemical precipitation method. We demonstrated the efficient removal of a rare earth metal, lanthanum (La3+), from an aqueous solution using the composite. The removal of La3+ was optimized by using response surface methodology. Analysis of variance and Fisher's F-test were used to determine the reaction parameters which affect the removal of La3+. Optimal conditions, including adsorbent dosage, pH, temperature, and contact time for the removal of La3+, were found to be 6.5 mg, pH 11, 40 °C, and 50 min, respectively. The adsorption capacity was 99.88%. The rate of La3+ adsorption was significantly affected by the solution pH and adsorbent amount. An adsorption isotherm was fitted well by the Freundlich model with a linear regression correlation value of 0.9975. The adsorption of La3+ using the composite followed pseudo second-order kinetics. Thermodynamic studies have revealed that the negative values of Gibbs free energy confirmed the spontaneous and feasible nature of adsorption.

  10. Novel Power Flow Problem Solutions Method’s Based on Genetic Algorithm Optimization for Banks Capacitor Compensation Using an Fuzzy Logic Rule Bases for Critical Nodal Detections

    Directory of Open Access Journals (Sweden)

    Nasri Abdelfatah

    2011-01-01

    Full Text Available Reactive power flow is one of the electrical distribution system problems that has attracted great interest from electrical network researchers; it causes a reduction of active power transmission, additional power losses, and an increased voltage drop. In this research we describe the efficiency of the FLC-GAO approach to solve the optimal power flow (OPF) combinatorial problem. The proposed approach employs two algorithms: a fuzzy logic controller (FLC) algorithm for critical nodal detection and a genetic algorithm optimization (GAO) algorithm for optimal capacitor sizing. The GAO method is more efficient for combinatorial problem solutions. The proposed approach has been examined and tested on the standard IEEE 57-bus system; the results show power loss minimization, voltage profile enhancement, and stability improvement. The results have been compared to those reported in the literature recently. The results are promising and show the effectiveness and robustness of the proposed approach.

  11. Optimization of moistening solution concentration on xylanase activity in solid state fermentation from oil palm empty fruit bunches

    Science.gov (United States)

    Mardawati, Efri; Parlan; Rialita, Tita; Nurhadi, Bambang

    2018-03-01

    Xylanase is an enzyme used in industry, including the food industry. Xylanase can be utilized as a catalyst for the endo-hydrolysis of 1,4-β-xylosidic bonds in xylan, a hemicellulose component, to obtain xylose monomers. This study aims to determine the optimum concentration of the fermentation medium using the Response Surface Method (RSM) in the production of xylanase enzyme from oil palm empty fruit bunches (OPEFB) through a solid state fermentation process. The variables varied in this study were factor A (ammonium sulphate concentration 1.0-2.0 g/L), B (potassium dihydrogen phosphate concentration 1.5-2.5 g/L) and C (urea concentration 0.2-0.5 g/L). The data were analysed using Design Expert version 10.0.1.0, specifically a CCD with a total of 17 runs including 3 replicates of the center point. Trichoderma viride was used for the production of the xylanase enzyme. The ratio between substrate and moistening solution was 0.63 g/mL, with a temperature of 32.8°C and a 60 h incubation time. The analysis of enzyme activity was done by the DNS method with 1% xylan as substrate. The protein content of the enzyme was analysed by the Bradford method. The optimum moistening solution concentrations in this fermentation were obtained: ammonium sulphate 1.5 g/L, potassium dihydrogen phosphate 2.0 g/L and urea 0.35 g/L, giving an activity of 684.70 U/mL, a specific xylanase activity of 6261.58 U/mg and a protein content of 0.1093 U/mg; the model was validated using the experimental design with a reliability value of 0.96.

  12. A study of an optimal technological solution for the electronics of particle position sensitive gas detectors (multiwire proportional chambers)

    International Nuclear Information System (INIS)

    Zojceski, Z.

    1997-01-01

    This work aims at optimizing the electronics for position sensitive gas detectors. The first part is a review of proportional chamber operation principles and presents the different possibilities for the architecture of the electronics. The second part involves electronic signal processing for best signal-to-noise ratio. We present a time-variant filter based on a second-order baseline restorer. It allows a simple pole-zero and tail cancellation at high counting rates. Also, various interpolating algorithms for cathode strip chambers have been studied. The last part reports the development of a complete electronic system, from the preamplifiers up to the readout and control interface, for the cathode strip chambers in the focal plane of the BBS Spectrometer at KVI, Holland. The system is based on application specific D-size VXI modules. In all modules, the 16-bit ADCs and FIFO memory are followed by a Digital Signal Processor, which performs data filtering and cathode induced charge interpolation. Very good analog noise performance is obtained in a multi-processor environment. (author)

  13. Degradation of the fungicide carbendazim in aqueous solutions with UV/TiO2 process: Optimization, kinetics and toxicity studies

    International Nuclear Information System (INIS)

    Saien, J.; Khezrianjoo, S.

    2008-01-01

    An attempt was made to investigate the potential of the UV-photocatalytic process in the presence of TiO2 particles for the degradation of carbendazim (C9H9N3O2), a fungicide with a high worldwide consumption but considered a 'priority hazard substance' by the Water Framework Directive of the European Commission (WFDEC). A circulating upflow photo-reactor was employed and the influences of catalyst concentration, pH and temperature were investigated. The results showed that degradation of this fungicide can be conducted in both processes, UV irradiation alone and UV/TiO2; however, the latter provides much better results. Accordingly, degradation of more than 90% of the fungicide was achieved by applying the optimal operational conditions of 70 mg/L of catalyst, natural pH of 6.73 and ambient temperature of 25 deg. C after 75 min irradiation. Under these mild conditions, the initial rate of degradation can be described well by the Langmuir-Hinshelwood kinetic model. Toxicological assessments of the obtained samples were also performed by measurement of the mycelium growth inhibition of the Fusarium oxysporum fungus on PDA medium. The results indicate that the kinetics of degradation and toxicity are in reasonably good agreement, mainly after 45 min of irradiation, confirming the effectiveness of the photocatalytic process.

  14. Optimal factor evaluation for the dissolution of alumina from Azaraegbelu clay in acid solution using RSM and ANN comparative analysis

    Directory of Open Access Journals (Sweden)

    P.E. Ohale

    2017-12-01

    Full Text Available An artificial neural network (ANN) and Response Surface Methodology based on a 2^(5-1) fractional factorial design were used as tools for simulation and optimisation of the dissolution process for Azaraegbelu clay. A feedforward neural network model with the Levenberg-Marquardt backpropagation training algorithm was adapted to predict the response (alumina yield). The studied input variables were temperature, stirring speed, clay to acid dosage, leaching time and leachant concentration. The raw clay was characterized for structure elucidation via FTIR, SEM and X-ray diffraction spectroscopic techniques, and the results indicate that the clay is predominantly kaolinite. Leachant concentration and dosage ratio were found to be the most significant process parameters, with a p-value of 0.0001. The performance of the ANN and RSM models showed adequate prediction of the response, with AAD of 11.6% and 3.6%, and R² of 0.9733 and 0.9568, respectively. A non-dominated optimal response of 81.45% yield of alumina at 4.6 M sulphuric acid concentration, 214 min leaching time, 0.085 g/ml dosage and 214 rpm stirring speed was established as a viable route to reduced material and operating cost via RSM. Keywords: Alumina dissolution, ANN modelling, Azaraegbelu, Clay, RSM

  15. Using Minimax Regret Optimization to Search for Multi-Stakeholder Solutions to Deeply Uncertain Flood Hazards under Climate Change

    Science.gov (United States)

    Kirshen, P. H.; Hecht, J. S.; Vogel, R. M.

    2015-12-01

    Prescribing long-term urban floodplain management plans under the deep uncertainty of climate change is a challenging endeavor. To address this, we have implemented and tested with stakeholders a parsimonious multi-stage mixed integer programming (MIP) model that identifies the optimal time period(s) for implementing publicly and privately financed adaptation measures. Publicly funded measures include reach-scale flood barriers, flood insurance, and buyout programs to encourage property owners in flood-prone areas to retreat from the floodplain. Measures privately funded by property owners consist of property-scale floodproofing options, such as raising building foundations, as well as investments in flood insurance or retreat from flood-prone areas. The objective function minimizes the sum of flood control and damage costs over all planning stages for different property types during floods of different severities. There are constraints over time for flow mass balances, construction of flood management alternatives and their cumulative implementation, budget allocations, and binary decisions. Damages are adjusted for flood control investments. In recognition of the deep uncertainty of GCM-derived climate change scenarios, we employ the minimax regret criterion to identify adaptation portfolios robust to different climate change trajectories. As an example, we identify publicly and privately funded adaptation measures for a stylized community based on the estuarine community of Exeter, New Hampshire, USA. We explore the sensitivity of recommended portfolios to different ranges of climate change, to costs associated with economies of scale and flexible infrastructure design, and to different municipal budget constraints.
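    The minimax regret selection step can be illustrated independently of the MIP: given the total cost of each candidate portfolio under each climate scenario, compute the regret of each portfolio in each scenario and keep the portfolio whose worst-case regret is smallest. The cost values and portfolio labels below are hypothetical, not outputs of the study's model.

```python
import numpy as np

def minimax_regret(cost):
    """Select the portfolio (row) minimizing the maximum regret.

    cost : (n_portfolios, n_scenarios) total cost of each adaptation
           portfolio under each climate scenario.
    Regret = cost - best achievable cost in that scenario.
    """
    cost = np.asarray(cost, dtype=float)
    regret = cost - cost.min(axis=0)        # column-wise regret
    worst = regret.max(axis=1)              # worst-case regret per portfolio
    return int(worst.argmin()), regret

# hypothetical costs (M$) for 3 portfolios under 3 climate trajectories
cost = [[120, 180, 260],    # no structural protection
        [150, 170, 200],    # property-scale floodproofing only
        [190, 195, 205]]    # barrier + floodproofing
best, regret = minimax_regret(cost)
print("robust portfolio index:", best)
```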

  16. Solution Approach to Automatic Generation Control Problem Using Hybridized Gravitational Search Algorithm Optimized PID and FOPID Controllers

    Directory of Open Access Journals (Sweden)

    DAHIYA, P.

    2015-05-01

    Full Text Available This paper presents the application of a hybrid opposition-based disruption operator in the gravitational search algorithm (DOGSA) to solve the automatic generation control (AGC) problem of a four-area hydro-thermal-gas interconnected power system. The proposed DOGSA approach combines the advantages of opposition-based learning, which enhances the speed of convergence, and the disruption operator, which has the ability to further explore and exploit the search space of the standard gravitational search algorithm (GSA). The addition of these two concepts to GSA increases its flexibility for solving complex optimization problems. This paper addresses the design and performance analysis of DOGSA-based proportional integral derivative (PID) and fractional order proportional integral derivative (FOPID) controllers for the automatic generation control problem. The proposed approaches are demonstrated by comparing the results with the standard GSA, opposition-based learning GSA (OGSA) and disruption-based GSA (DGSA). A sensitivity analysis is also carried out to study the robustness of the DOGSA-tuned controllers in order to accommodate variations in operating load conditions, tie-line synchronizing coefficient, and time constants of the governor and turbine. Further, the approaches are extended to a more realistic power system model by considering physical constraints such as the thermal turbine generation rate constraint, speed governor dead band and time delay.

  17. Impact of discretization of the decision variables in the search of optimized solutions for history matching and injection rate optimization; Impacto do uso de variaveis discretas na busca de solucoes otimizadas para o ajuste de historico e distribuicao de vazoes de injecao

    Energy Technology Data Exchange (ETDEWEB)

    Sousa, Sergio H.G. de; Madeira, Marcelo G. [Halliburton Servicos Ltda., Rio de Janeiro, RJ (Brazil)

    2008-07-01

    In the classical operations research arena, there is the notion that the search for optimized solutions in continuous solution spaces is easier than in discrete solution spaces, even when the latter is a subset of the former. In the upstream oil industry, there is an additional complexity in optimization problems because there usually are no analytical expressions for the objective function, which therefore requires some form of simulation in order to be evaluated. Thus, the use of metaheuristic optimizers like scatter search, tabu search and genetic algorithms is common. In this metaheuristic context, there are advantages in transforming continuous solution spaces into equivalent discrete ones; the goal in doing so is usually to speed up the search for optimized solutions. However, these advantages can be masked when the problem has restrictions formed by linear combinations of its decision variables. In order to study these aspects of metaheuristic optimization, two optimization problems are proposed and solved with both continuous and discrete solution spaces: assisted history matching and injection rate optimization. Both cases operate on a model of the Wytch Farm onshore oil field located in England. (author)

  18. DEVELOPMENT OF A KINETIC MODEL OF BOEHMITE DISSOLUTION IN CAUSTIC SOLUTIONS APPLIED TO OPTIMIZE HANFORD WASTE PROCESSING

    International Nuclear Information System (INIS)

    Disselkamp, R.S.

    2011-01-01

    Boehmite (i.e., aluminum oxyhydroxide) is a major non-radioactive component in Hanford and Savannah River nuclear tank waste sludge. Boehmite dissolution from sludge using caustic at elevated temperatures is being planned at Hanford to minimize the mass of material disposed of as high-level waste (HLW) during operation of the Waste Treatment Plant (WTP). To more thoroughly understand the chemistry of this dissolution process, we have developed an empirical kinetic model for aluminate production due to boehmite dissolution. Application of this model to Hanford tank wastes would allow predictability and optimization of the caustic leaching of aluminum solids, potentially yielding significant improvements to overall processing time, disposal cost, and schedule. This report presents an empirical kinetic model that can be used to estimate the aluminate production from the leaching of boehmite in Hanford waste as a function of the following parameters: (1) hydroxide concentration; (2) temperature; (3) specific surface area of boehmite; (4) initial soluble aluminate plus gibbsite present in waste; (5) concentration of boehmite in the waste; and (6) (pre-fit) Arrhenius kinetic parameters. The model was fit to laboratory, non-radioactive (i.e., simulant boehmite) leaching results, providing best-fit values of the Arrhenius A-factor, A, and apparent activation energy, E_A, of A = 5.0 × 10^12 hour^-1 and E_A = 90 kJ/mole. These parameters were then used to predict boehmite leaching behavior observed in previously reported actual waste leaching studies. Acceptable aluminate versus leaching time profiles were predicted for waste leaching data from both Hanford and Savannah River site studies.
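
    Using the best-fit Arrhenius parameters quoted above (A = 5.0 × 10^12 hour^-1, E_A = 90 kJ/mole), the sketch below evaluates the temperature dependence of the rate constant k(T) = A·exp(-E_A/RT). The leach temperatures are arbitrary examples; the full model in the report also depends on hydroxide concentration, boehmite surface area and the other listed parameters.

    ```python
    import numpy as np

    A = 5.0e12   # pre-exponential factor from the report's fit, 1/hour
    Ea = 90.0e3  # apparent activation energy, J/mol
    R = 8.314    # gas constant, J/(mol K)

    def arrhenius_k(T_celsius):
        """Rate constant (1/hour) at a given leach temperature."""
        T = T_celsius + 273.15
        return A * np.exp(-Ea / (R * T))

    for T in (60.0, 85.0, 100.0):  # example leach temperatures, deg C
        print(f"T = {T:5.1f} C  ->  k = {arrhenius_k(T):.3e} per hour")
    ```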

  19. Removal of Mefenamic acid from aqueous solutions by oxidative process: Optimization through experimental design and HPLC/UV analysis.

    Science.gov (United States)

    Colombo, Renata; Ferreira, Tanare C R; Ferreira, Renato A; Lanza, Marcos R V

    2016-02-01

    Mefenamic acid (MEF) is a non-steroidal anti-inflammatory drug indicated for relief of mild to moderate pain, and for the treatment of primary dysmenorrhea. The presence of MEF in raw and sewage waters has been detected worldwide at concentrations exceeding the predicted no-effect concentration. In this study, using experimental designs, different oxidative processes (H2O2, H2O2/UV, Fenton and photo-Fenton) were simultaneously evaluated for MEF degradation efficiency. The influence and interaction effects of the most important variables in the oxidative process (concentration and addition mode of hydrogen peroxide, concentration and type of catalyst, pH, reaction period and presence/absence of light) were investigated. The parameters were determined based on the maximum efficiency, so as to save time and minimize the consumption of reagents. According to the results, the photo-Fenton process is the best procedure to remove the drug from water. A reaction mixture containing 1.005 mmol L(-1) of ferrioxalate and 17.5 mmol L(-1) of hydrogen peroxide, added at the initial reaction period, at pH 6.1 and 60 min of degradation gave the most efficient degradation, promoting 95% MEF removal. The development and validation of a rapid and efficient qualitative and quantitative HPLC/UV methodology for detecting this pollutant in aqueous solution is also reported. The method can be applied in the quality control of water that is generated and/or treated in municipal or industrial wastewater treatment plants. Copyright © 2015 Elsevier Ltd. All rights reserved.

  20. Neural architecture design based on extreme learning machine.

    Science.gov (United States)

    Bueno-Crespo, Andrés; García-Laencina, Pedro J; Sancho-Gómez, José-Luis

    2013-12-01

    Selection of the optimal neural architecture to solve a pattern classification problem entails choosing the relevant input units, the number of hidden neurons and their corresponding interconnection weights. This problem has been widely studied in many research works, but the proposed solutions usually involve excessive computational cost and do not provide a unique solution. This paper proposes a new technique to efficiently design the MultiLayer Perceptron (MLP) architecture for classification using the Extreme Learning Machine (ELM) algorithm. The proposed method provides high generalization capability and a unique solution for the architecture design. Moreover, the selected final network only retains those input connections that are relevant for the classification task. Experimental results show these advantages. Copyright © 2013 Elsevier Ltd. All rights reserved.
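
    The core ELM idea referenced here, a randomly initialized and fixed hidden layer combined with a closed-form least-squares solution for the output weights, can be sketched in a few lines. This is a generic single-hidden-layer ELM for illustration only and does not reproduce the paper's architecture-selection procedure.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def elm_train(X, y, n_hidden=20):
        """Train a basic extreme learning machine: random hidden layer, least-squares output."""
        W = rng.normal(size=(X.shape[1], n_hidden))   # random input-to-hidden weights (kept fixed)
        b = rng.normal(size=n_hidden)                 # random hidden biases
        H = np.tanh(X @ W + b)                        # hidden-layer activations
        beta, *_ = np.linalg.lstsq(H, y, rcond=None)  # output weights via least squares
        return W, b, beta

    def elm_predict(X, W, b, beta):
        return np.tanh(X @ W + b) @ beta

    # Toy regression problem (illustrative only): learn y = sin(x).
    X = np.linspace(-3, 3, 200).reshape(-1, 1)
    y = np.sin(X).ravel()
    W, b, beta = elm_train(X, y)
    print("max abs error:", np.abs(elm_predict(X, W, b, beta) - y).max())
    ```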

  1. Optimizing the deposition of hydrogen evolution sites on suspended semiconductor particles using on-line photocatalytic reforming of aqueous methanol solutions.

    Science.gov (United States)

    Busser, G Wilma; Mei, Bastian; Muhler, Martin

    2012-11-01

    The deposition of hydrogen evolution sites on photocatalysts is a crucial step in the multistep process of synthesizing a catalyst that is active for overall photocatalytic water splitting. An alternative approach to conventional photodeposition was developed, applying the photocatalytic reforming of aqueous methanol solutions to deposit metal particles on semiconductor materials such as Ga₂O₃ and (Ga₀.₆ Zn₀.₄)(N₀.₆O₀.₄). The method allows optimizing the loading of the co-catalysts based on the stepwise addition of their precursors and the continuous online monitoring of the evolved hydrogen. Moreover, a synergetic effect between different co-catalysts can be directly established. Copyright © 2012 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  2. On the optimal systems of subalgebras for the equations of hydrodynamic stability analysis of smooth shear flows and their group-invariant solutions

    Science.gov (United States)

    Hau, Jan-Niklas; Oberlack, Martin; Chagelishvili, George

    2017-04-01

    We present a unifying solution framework for the linearized compressible equations for two-dimensional linearly sheared unbounded flows using the Lie symmetry analysis. The full set of symmetries that are admitted by the underlying system of equations is employed to systematically derive the one- and two-dimensional optimal systems of subalgebras, whose connected group reductions lead to three distinct invariant ansatz functions for the governing sets of partial differential equations (PDEs). The purpose of this analysis is threefold and explicitly we show that (i) there are three invariant solutions that stem from the optimal system. These include a general ansatz function with two free parameters, as well as the ansatz functions of the Kelvin mode and the modal approach. Specifically, the first approach unifies these well-known ansatz functions. By considering two limiting cases of the free parameters and related algebraic transformations, the general ansatz function is reduced to either of them. This fact also proves the existence of a link between the Kelvin mode and modal ansatz functions, as these appear to be the limiting cases of the general one. (ii) The Lie algebra associated with the Lie group admitted by the PDEs governing the compressible dynamics is a subalgebra associated with the group admitted by the equations governing the incompressible dynamics, which allows an additional (scaling) symmetry. Hence, any consequences drawn from the compressible case equally hold for the incompressible counterpart. (iii) In any of the systems of ordinary differential equations, derived by the three ansatz functions in the compressible case, the linearized potential vorticity is a conserved quantity that allows us to analyze vortex and wave mode perturbations separately.

  3. Bolting multicenter solutions

    Energy Technology Data Exchange (ETDEWEB)

    Bena, Iosif [Institut de Physique Théorique, Université Paris Saclay, CEA, CNRS, 91191 Gif-sur-Yvette Cedex (France); Bossard, Guillaume [Centre de Physique Théorique, Ecole Polytechnique, CNRS, Université Paris-Saclay, 91128 Palaiseau Cedex (France); Katmadas, Stefanos; Turton, David [Institut de Physique Théorique, Université Paris Saclay, CEA, CNRS, 91191 Gif-sur-Yvette Cedex (France)

    2017-01-30

    We introduce a solvable system of equations that describes non-extremal multicenter solutions to six-dimensional ungauged supergravity coupled to tensor multiplets. The system involves a set of functions on a three-dimensional base metric. We obtain a family of non-extremal axisymmetric solutions that generalize the known multicenter extremal solutions, using a particular base metric that introduces a bolt. We analyze the conditions for regularity, and in doing so we show that this family does not include solutions that contain an extremal black hole and a smooth bolt. We determine the constraints that are necessary to obtain smooth horizonless solutions involving a bolt and an arbitrary number of Gibbons-Hawking centers.

  4. A search for optimal parameters of resonance circuits ensuring damping of electroelastic structure vibrations based on the solution of natural vibration problem

    Science.gov (United States)

    Oshmarin, D.; Sevodina, N.; Iurlov, M.; Iurlova, N.

    2017-06-01

    In this paper, with the aim of providing passive control of structure vibrations, a new approach is proposed for selecting optimal parameters of external electric shunt circuits connected to piezoelectric elements located on the surface of the structure. The approach is based on the mathematical formulation of the natural vibration problem. The results of the solution of this problem are the complex eigenfrequencies, whose real part represents the vibration frequency and whose imaginary part corresponds to the damping ratio, characterizing the rate of damping. A criterion for the search for optimal parameters of the external passive shunt circuits, which can provide the system with the desired dissipative properties, has been derived based on the analysis of the responses of the real and imaginary parts of different complex eigenfrequencies to changes in the values of the parameters of the electric circuit. The efficiency of this approach has been verified in the context of the natural vibration problem of a rigidly clamped plate and a semi-cylindrical shell, which is solved for series-connected and parallel-connected external resonance R-L circuits (consisting of resistive and inductive elements). It has been shown that at lower (more energy-intensive) frequencies, a series-connected external circuit has the advantage of providing lower values of the circuit parameters, which renders it more attractive in terms of practical applications.

  5. Application of optimized large surface area date stone (Phoenix dactylifera ) activated carbon for rhodamin B removal from aqueous solution: Box-Behnken design approach.

    Science.gov (United States)

    Danish, Mohammed; Khanday, Waheed Ahmad; Hashim, Rokiah; Sulaiman, Nurul Syuhada Binti; Akhtar, Mohammad Nishat; Nizami, Maniruddin

    2017-05-01

    The Box-Behnken model of response surface methodology was used to study the effect of adsorption process parameters on Rhodamine B (RhB) removal from aqueous solution by optimized large surface area date stone activated carbon. The set of experiments with three input parameters, time (10-600 min), adsorbent dosage (0.5-10 g/L) and temperature (25-50 °C), was assessed for statistical significance. An adequate relation was found between the input variables and the response (removal percentage of RhB), with Fisher values (F-values) and P-values suggesting the significance of the various term coefficients. At an optimum adsorbent dose of 0.53 g/L, time of 593 min and temperature of 46.20 °C, an adsorption capacity of 210 mg/g was attained with maximum desirability. The negative values of the Gibbs free energy (ΔG) predicted spontaneity and feasibility of adsorption, whereas the positive enthalpy change (ΔH) confirmed endothermic adsorption of RhB onto the optimized large surface area date stone activated carbons (OLSADS-AC). The adsorption data were found to be best fitted by the Langmuir model, supporting monolayer adsorption of RhB with a maximum monolayer adsorption capacity of 196.08 mg/g. Copyright © 2017 Elsevier Inc. All rights reserved.

  6. Artificial Intelligence Based Optimization for the Se(IV) Removal from Aqueous Solution by Reduced Graphene Oxide-Supported Nanoscale Zero-Valent Iron Composites.

    Science.gov (United States)

    Cao, Rensheng; Fan, Mingyi; Hu, Jiwei; Ruan, Wenqian; Wu, Xianliang; Wei, Xionghui

    2018-03-15

    Highly promising artificial intelligence tools, including artificial neural networks (ANN), the genetic algorithm (GA) and particle swarm optimization (PSO), were applied in the present study to develop an approach for the evaluation of Se(IV) removal from aqueous solutions by reduced graphene oxide-supported nanoscale zero-valent iron (nZVI/rGO) composites. Both GA and PSO were used to optimize the parameters of the ANN. The effect of operational parameters (i.e., initial pH, temperature, contact time and initial Se(IV) concentration) on the removal efficiency was examined using response surface methodology (RSM), which was also utilized to obtain a dataset for the ANN training. The ANN-GA model results (with a prediction error of 2.88%) showed a better agreement with the experimental data than the ANN-PSO model results (with a prediction error of 4.63%) and the RSM model results (with a prediction error of 5.56%); thus the ANN-GA model was an ideal choice for modeling and optimizing the Se(IV) removal by the nZVI/rGO composites due to its low prediction error. Analysis of the experimental data illustrates that the removal process of Se(IV) obeyed the Langmuir isotherm and the pseudo-second-order kinetic model. Furthermore, the Se 3d and 3p peaks found in XPS spectra for the nZVI/rGO composites after the removal treatment illustrate that the removal of Se(IV) occurred mainly through adsorption and reduction mechanisms.
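
    The pseudo-second-order kinetic model cited above is commonly fitted in its linearized form t/q_t = 1/(k2·qe²) + t/qe. The sketch below performs such a fit by linear regression on hypothetical uptake data; the numbers are placeholders, not the study's measurements.

    ```python
    import numpy as np

    # Hypothetical contact times (min) and Se(IV) uptake q_t (mg/g), for illustration only.
    t = np.array([5, 10, 20, 40, 60, 90, 120], dtype=float)
    qt = np.array([8.2, 12.5, 16.9, 20.1, 21.4, 22.3, 22.7])

    # Linearized pseudo-second-order model: t/qt = 1/(k2*qe^2) + t/qe
    slope, intercept = np.polyfit(t, t / qt, 1)
    qe = 1.0 / slope           # equilibrium uptake, mg/g
    k2 = slope**2 / intercept  # rate constant, g/(mg min), since intercept = 1/(k2*qe^2)
    print(f"qe = {qe:.1f} mg/g, k2 = {k2:.4f} g/(mg min)")
    ```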

  7. A noise-optimized virtual monochromatic reconstruction algorithm improves stent visualization and diagnostic accuracy for detection of in-stent re-stenosis in lower extremity run-off CT angiography

    Energy Technology Data Exchange (ETDEWEB)

    Mangold, Stefanie [Medical University of South Carolina, Department of Radiology and Radiological Science, Charleston, SC (United States); Eberhard-Karls University Tuebingen, Department of Diagnostic and Interventional Radiology, Tuebingen (Germany); De Cecco, Carlo N.; Yamada, Ricardo T.; Varga-Szemes, Akos; Stubenrauch, Andrew C.; Fuller, Stephen R. [Medical University of South Carolina, Department of Radiology and Radiological Science, Charleston, SC (United States); Schoepf, U.J. [Medical University of South Carolina, Department of Radiology and Radiological Science, Charleston, SC (United States); Medical University of South Carolina, Division of Cardiology, Department of Medicine, Charleston, SC (United States); Caruso, Damiano [Medical University of South Carolina, Department of Radiology and Radiological Science, Charleston, SC (United States); University of Rome ' ' Sapienza' ' , Department of Radiological Sciences, Oncology and Pathology, Rome (Italy); Vogl, Thomas J.; Wichmann, Julian L. [Medical University of South Carolina, Department of Radiology and Radiological Science, Charleston, SC (United States); University Hospital Frankfurt, Department of Diagnostic and Interventional Radiology, Frankfurt (Germany); Nikolaou, Konstantin [Eberhard-Karls University Tuebingen, Department of Diagnostic and Interventional Radiology, Tuebingen (Germany); Todoran, Thomas M. [Medical University of South Carolina, Division of Cardiology, Department of Medicine, Charleston, SC (United States)

    2016-12-15

    To evaluate the impact of noise-optimized virtual monochromatic imaging (VMI+) on stent visualization and accuracy for in-stent re-stenosis at lower extremity dual-energy CT angiography (DE-CTA). We evaluated third-generation dual-source DE-CTA studies in 31 patients with prior stent placement. Images were reconstructed with linear blending (F{sub 0}.5) and VMI+ at 40-150 keV. In-stent luminal diameter was measured and contrast-to-noise ratio (CNR) calculated. Diagnostic confidence was determined using a five-point scale. In 21 patients with invasive catheter angiography, accuracy for significant re-stenosis (≥50 %) was assessed at F{sub 0}.5 and 80 keV-VMI+ chosen as the optimal energy level based on image-quality analysis. At CTA, 45 stents were present. DSA was available for 28 stents whereas 12 stents showed significant re-stenosis. CNR was significantly higher with ≤80 keV-VMI+ (17.9 ± 6.4-33.7 ± 12.3) compared to F{sub 0}.5 (16.9 ± 4.8; all p < 0.0463); luminal stent diameters were increased at ≥70 keV (5.41 ± 1.8-5.92 ± 1.7 vs. 5.27 ± 1.8, all p < 0.001) and diagnostic confidence was highest at 70-80 keV-VMI+ (4.90 ± 0.48-4.88 ± 0.63 vs. 4.60 ± 0.66, p = 0.001, 0.0042). Sensitivity, negative predictive value and accuracy for re-stenosis were higher with 80 keV-VMI+ (100, 100, 96.4 %) than F{sub 0}.5 (90.9, 94.1, 89.3 %). 80 keV-VMI+ improves image quality, diagnostic confidence and accuracy for stent evaluation at lower extremity DE-CTA. (orig.)

  8. A modified estimation distribution algorithm based on extreme elitism.

    Science.gov (United States)

    Gao, Shujun; de Silva, Clarence W

    2016-12-01

    An existing estimation distribution algorithm (EDA) with a univariate marginal Gaussian model was improved by designing and incorporating an extreme elitism selection method. This selection method highlights the effect of a few top best solutions in the evolution, leading the EDA to form a primary evolution direction and obtain a fast convergence rate. At the same time, this selection also preserves population diversity, helping the EDA avoid premature convergence. The modified EDA was then tested on benchmark low-dimensional and high-dimensional optimization problems to illustrate the gains from this extreme elitism selection. In addition, the no-free-lunch theorem was considered in analyzing the effect of the new selection on EDAs. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
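
    A univariate marginal Gaussian EDA of the kind described above can be sketched compactly: sample a population, select the best candidates, and refit an independent Gaussian per variable from the selected set. The exponentially decaying weights that emphasize the few best solutions below are only a stand-in for the paper's extreme-elitism scheme, whose exact form is not reproduced here, and the sphere function is a toy objective.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    def sphere(x):  # toy objective with its global minimum at the origin
        return np.sum(x**2, axis=1)

    dim, pop, n_sel, iters = 5, 100, 30, 60
    mu, sigma = np.full(dim, 5.0), np.full(dim, 3.0)

    for _ in range(iters):
        X = rng.normal(mu, sigma, size=(pop, dim))    # sample the population
        sel = X[np.argsort(sphere(X))[:n_sel]]        # truncation selection, best first
        w = np.exp(-np.arange(n_sel) / 5.0)           # emphasize a few top solutions (illustrative)
        w /= w.sum()
        mu = w @ sel                                  # weighted mean per variable
        sigma = np.sqrt(w @ (sel - mu) ** 2) + 1e-12  # weighted std per variable

    print("best value found:", sphere(mu[None, :])[0])
    ```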

  9. Remaining Useful Life Estimation of Insulated Gate Bipolar Transistors (IGBTs) Based on a Novel Volterra k-Nearest Neighbor Optimally Pruned Extreme Learning Machine (VKOPP) Model Using Degradation Data

    Directory of Open Access Journals (Sweden)

    Zhen Liu

    2017-11-01

    Full Text Available The insulated gate bipolar transistor (IGBT) is an excellent-performance switching device used widely in power electronic systems. How to estimate the remaining useful life (RUL) of an IGBT to ensure the safety and reliability of the power electronics system is currently a challenging issue in the field of IGBT reliability. The aim of this paper is to develop a prognostic technique for estimating IGBTs’ RUL. There is a need for an efficient prognostic algorithm that is able to support in-situ decision-making. In this paper, a novel prediction model with a complete structure based on the optimally pruned extreme learning machine (OPELM) and Volterra series is proposed to track the IGBT’s degradation trace and estimate its RUL; we refer to this model as the Volterra k-nearest neighbor OPELM prediction (VKOPP) model. This model uses the minimum entropy rate method and Volterra series to reconstruct the phase space for IGBTs’ ageing samples, and a new weight update algorithm, which can effectively reduce the influence of outliers and noise, is utilized to establish the VKOPP network; then a combination of the k-nearest neighbor method (KNN) and the least squares estimation (LSE) method is used to calculate the output weights of the OPELM and predict the RUL of the IGBT. The prognostic results show that the proposed approach can predict the RUL of IGBT modules with small error and achieve higher prediction precision and lower time cost than some classic prediction approaches.

  10. Extremely Preterm Birth

    Science.gov (United States)

    ACOG patient FAQ on extremely preterm birth (FAQ173, June 2016).

  11. A Novel Hybridization of Applied Mathematical, Operations Research and Risk-based Methods to Achieve an Optimal Solution to a Challenging Subsurface Contamination Problem

    Science.gov (United States)

    Johnson, K. D.; Pinder, G. F.

    2013-12-01

    The objective of the project is the creation of a new, computationally based approach to the collection, evaluation and use of data for the purpose of determining optimal strategies for investment in the remediation of contaminant source areas and similar environmental problems. The research focuses on the use of existing mathematical tools assembled in a unique fashion. The area of application of this new capability is optimal (least-cost) groundwater contamination source identification; we wish to identify the physical environments wherein it may be cost-prohibitive to identify a contaminant source, determine the optimal strategy to protect the environment from additional insult, and formulate strategies for cost-effective environmental restoration. The computational underpinnings of the proposed approach encompass the integration, in a unique fashion, of several known applied-mathematical tools. The resulting tool integration achieves the following: 1) simulate groundwater flow and contaminant transport under uncertainty, that is, when physical parameters such as hydraulic conductivity are known to be described by a random field; 2) define such a random field from available field data or provide insight into the sampling strategy needed to create such a field; 3) incorporate subjective information, such as the opinions of experts on the importance of factors such as locations of waste landfills; 4) optimize a search strategy for finding a potential source location and optimally combine field information with model results to provide the best possible representation of the mean contaminant field and its geostatistics. Our approach combines in a symbiotic manner methodologies found in numerical simulation, random field analysis, Kalman filtering, fuzzy set theory and search theory. Testing the algorithm for this stage of the work, we will focus on fabricated field situations wherein we can a priori specify the degree of uncertainty associated with the

  12. Automaton Rover for Extreme Environments

    Science.gov (United States)

    Sauder, Jonathan; Hilgemann, Evan; Johnson, Michael; Parness, Aaron; Hall, Jeffrey; Kawata, Jessie; Stack, Kathryn

    2017-01-01

    Almost 2,300 years ago the ancient Greeks built the Antikythera automaton. This purely mechanical computer accurately predicted past and future astronomical events long before electronics existed [1]. Automata have been credibly used for hundreds of years as computers, art pieces, and clocks. However, in the past several decades automata have become less popular as the capabilities of electronics increased, leaving them an unexplored solution for robotic spacecraft. The Automaton Rover for Extreme Environments (AREE) proposes an exciting paradigm shift from electronics to a fully mechanical system, enabling longitudinal exploration of the most extreme environments within the solar system.

  13. Optimization of the electro-Fenton and solar photoelectro-Fenton treatments of sulfanilic acid solutions using a pre-pilot flow plant by response surface methodology

    International Nuclear Information System (INIS)

    El-Ghenymy, Abdellatif; Garcia-Segura, Sergi; Rodríguez, Rosa María; Brillas, Enric; El Begrani, Mohamed Soussi; Abdelouahid, Ben Ali

    2012-01-01

    Highlights: ► Quicker degradation of sulfanilic acid by solar photoelectro-Fenton than electro-Fenton. ► The same optimized current density, Fe2+ content and pH for both processes by CCRD. ► Description of TOC, energy cost and current efficiency by response surface methodology. ► Fe(III)–carboxylate complexes as main by-products after electro-Fenton. ► Photolysis of these complexes by UV irradiation of sunlight in solar photoelectro-Fenton. - Abstract: A central composite rotatable design and response surface methodology were used to optimize the experimental variables of the electro-Fenton (EF) and solar photoelectro-Fenton (SPEF) degradations of 2.5 L of sulfanilic acid solutions in 0.05 M Na2SO4. Electrolyses were performed with a pre-pilot flow plant containing a Pt/air diffusion reactor generating H2O2. In SPEF, it was coupled with a solar photoreactor under a UV irradiation intensity of ca. 31 W m−2. Optimum variables of 100 mA cm−2, 0.5 mM Fe2+ and pH 4.0 were determined after 240 min of EF and 120 min of SPEF. Under these conditions, EF gave 47% mineralization, whereas SPEF was much more powerful, yielding 76% mineralization with 275 kWh kg−1 total organic carbon (TOC) energy consumption and 52% current efficiency. Sulfanilic acid decayed at a similar rate in both treatments, following pseudo-first-order kinetics. The final solution treated by EF contained a stable mixture of tartaric, acetic, oxalic and oxamic acids, which form Fe(III) complexes that are not attacked by the hydroxyl radicals formed from H2O2 and added Fe2+. The quick photolysis of these complexes by the UV light of sunlight explains the higher oxidation power of SPEF. NH4+ was the main inorganic nitrogen ion released in both processes.

  14. Optimization algorithms and applications

    CERN Document Server

    Arora, Rajesh Kumar

    2015-01-01

    Choose the Correct Solution Method for Your Optimization Problem. Optimization: Algorithms and Applications presents a variety of solution techniques for optimization problems, emphasizing concepts rather than rigorous mathematical details and proofs. The book covers both gradient and stochastic methods as solution techniques for unconstrained and constrained optimization problems. It discusses the conjugate gradient method, Broyden-Fletcher-Goldfarb-Shanno algorithm, Powell method, penalty function, augmented Lagrange multiplier method, sequential quadratic programming, method of feasible direc

  15. Controlling extreme events on complex networks

    Science.gov (United States)

    Chen, Yu-Zhong; Huang, Zi-Gang; Lai, Ying-Cheng

    2014-08-01

    Extreme events, a type of collective behavior in complex networked dynamical systems, often can have catastrophic consequences. To develop effective strategies to control extreme events is of fundamental importance and practical interest. Utilizing transportation dynamics on complex networks as a prototypical setting, we find that making the network ``mobile'' can effectively suppress extreme events. A striking, resonance-like phenomenon is uncovered, where an optimal degree of mobility exists for which the probability of extreme events is minimized. We derive an analytic theory to understand the mechanism of control at a detailed and quantitative level, and validate the theory numerically. Implications of our finding to current areas such as cybersecurity are discussed.

  16. Optimization of operational conditions in continuous electrodeionization method for maximizing Strontium and Cesium removal from aqueous solutions using artificial neural network

    Energy Technology Data Exchange (ETDEWEB)

    Zahakifar, Fazel; Keshtkar, Alireza; Nazemi, Ehsan; Zaheri, Adib [Nuclear Science and Technology Research Institute, Tehran (Iran, Islamic Republic of)

    2017-09-01

    Strontium (Sr) and Cesium (Cs) are two important nuclear fission products which are present in the radioactive wastewater resulting from nuclear power plants. They should be treated by considering environmental and economic aspects. In this study, artificial neural networks (ANN) were implemented to evaluate the optimal experimental conditions in the continuous electrodeionization method in order to achieve the highest removal percentage of Sr and Cs from aqueous solutions. Three control factors at three levels were tested in experiments for Sr and Cs: feed concentration (10, 50 and 100 mg/L), flow rate (2.5, 3.75 and 5 mL/min) and voltage (5, 7.5 and 10 V). The data obtained from the experiments were used to train two ANNs. The three control factors were utilized as the inputs of the ANNs and the two quality responses were used as the outputs, separately (each ANN for one quality response). After training the ANNs, 1024 different control factor levels with various quality responses were predicted and finally the optimum control factor levels were obtained. Results demonstrated that the optimum levels of the control factors for maximum removal of Sr (97.6%) were an applied voltage of 10 V, a flow rate of 2.5 mL/min and a feed concentration of 10 mg/L. As for Cs (67.8%), they were 10 V, 2.55 mL/min and 50 mg/L, respectively.
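
    The workflow described above, training an ANN on the experimental runs and then evaluating it over combinations of control-factor levels to locate the optimum, can be sketched as follows. The surrogate is a small scikit-learn MLP and the training data are placeholders; the study itself trained separate networks for Sr and Cs on its own measurements.

    ```python
    import numpy as np
    from itertools import product
    from sklearn.neural_network import MLPRegressor

    # Placeholder runs: [feed concentration mg/L, flow rate mL/min, voltage V] -> removal %
    X = np.array([[10, 2.5, 5], [10, 5.0, 10], [50, 3.75, 7.5],
                  [100, 2.5, 10], [100, 5.0, 5], [50, 2.5, 10],
                  [10, 3.75, 7.5], [100, 3.75, 5]], dtype=float)
    y = np.array([88.0, 93.0, 75.0, 70.0, 48.0, 80.0, 90.0, 55.0])  # invented removal %

    model = MLPRegressor(hidden_layer_sizes=(8,), max_iter=20000, random_state=0).fit(X, y)

    # Enumerate a grid of candidate factor levels and pick the predicted optimum.
    grid = np.array(list(product(np.linspace(10, 100, 8),
                                 np.linspace(2.5, 5.0, 8),
                                 np.linspace(5, 10, 8))))
    pred = model.predict(grid)
    best = grid[np.argmax(pred)]
    print(f"predicted optimum: feed = {best[0]:.0f} mg/L, flow = {best[1]:.2f} mL/min, "
          f"voltage = {best[2]:.1f} V, removal = {pred.max():.1f}%")
    ```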

  17. Infection in the ischemic lower extremity.

    Science.gov (United States)

    Fry, D E; Marek, J M; Langsfeld, M

    1998-06-01

    Infections in the lower extremity of the patient with ischemia can cover a broad spectrum of different diseases. An understanding of the particular pathophysiologic circumstances in the ischemic extremity can be of great value in understanding the natural history of the disease and the potential complications that may occur. Optimizing blood flow to the extremity by using revascularization techniques is important for any patient with an ischemic lower extremity complicated by infection or ulceration. Infections in the ischemic lower extremity require local débridement and systemic antibiotics. For severe infections, such as necrotizing fasciitis or the fetid foot, more extensive local débridement and even amputation may be required. Fundamentals of managing prosthetic graft infection require removing the infected prosthesis, local wound débridement, and systemic antibiotics while attempting to preserve viability of the lower extremity using autogenous graft reconstruction.

  18. Portfolio Optimization and Mortgage Choice

    Directory of Open Access Journals (Sweden)

    Maj-Britt Nordfang

    2017-01-01

    Full Text Available This paper studies the optimal mortgage choice of an investor in a simple bond market with a stochastic interest rate and access to term life insurance. The study is based on advances in stochastic control theory, which provides analytical solutions to portfolio problems with a stochastic interest rate. We derive the optimal portfolio of a mortgagor in a simple framework and formulate stylized versions of mortgage products offered in the market today. This allows us to analyze the optimal investment strategy in terms of optimal mortgage choice. We conclude that certain extreme investors optimally choose either a traditional fixed rate mortgage or an adjustable rate mortgage, while investors with moderate risk aversion and income prefer a mix of the two. By matching specific investor characteristics to existing mortgage products, our study provides a better understanding of the complex and yet restricted mortgage choice faced by many household investors. In addition, the simple analytical framework enables a detailed analysis of how changes to market, income and preference parameters affect the optimal mortgage choice.

  19. Biosorption of Cd(II), Ni(II) and Pb(II) from aqueous solution by dried biomass of aspergillus niger: application of response surface methodology to the optimization of process parameters

    Energy Technology Data Exchange (ETDEWEB)

    Amini, Malihe; Younesi, Habibollah [Department of Environmental Science, Faculty of Natural Resources and Marine Sciences, Tarbiat Modares University, Noor (Iran)

    2009-10-15

    In this study, the biosorption of Cd(II), Ni(II) and Pb(II) on Aspergillus niger in a batch system was investigated, and the optimal conditions were determined by means of central composite design (CCD) under response surface methodology (RSM). Biomass inactivated by heat and pretreated with alkali solution was used in the determination of optimal conditions. The effect of initial solution pH, biomass dose and initial ion concentration on the removal efficiency of metal ions by A. niger was optimized using a design of experiments (DOE) method. Experimental results indicated that the optimal conditions for biosorption were 5.22 g/L, 89.93 mg/L and 6.01 for biomass dose, initial ion concentration and solution pH, respectively. Enhancement of the metal biosorption capacity of the dried biomass by pretreatment with sodium hydroxide was observed. Maximal removal efficiencies for Cd(II), Ni(II) and Pb(II) ions of 98, 80 and 99% were achieved, respectively. The biosorption capacity of A. niger biomass obtained for Cd(II), Ni(II) and Pb(II) ions was 2.2, 1.6 and 4.7 mg/g, respectively. According to these observations, the fungal biomass of A. niger is a suitable biosorbent for the removal of heavy metals from aqueous solutions. Multiple response optimization was applied to the experimental data to discover the optimal conditions for a set of responses simultaneously by using a desirability function. (Abstract Copyright [2009], Wiley Periodicals, Inc.)
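
    Central composite designs such as the one above are typically analyzed by fitting a second-order (quadratic) response surface and then locating its optimum within the factor bounds. The sketch below does this with ordinary least squares on invented runs for the three factors (pH, biomass dose, initial ion concentration); it is not the study's fitted model.

    ```python
    import numpy as np
    from itertools import combinations, product

    def quad_features(X):
        """Design matrix for a full quadratic RSM model: 1, x_i, x_i^2, x_i*x_j."""
        cols = [np.ones(len(X))]
        cols += [X[:, i] for i in range(X.shape[1])]
        cols += [X[:, i] ** 2 for i in range(X.shape[1])]
        cols += [X[:, i] * X[:, j] for i, j in combinations(range(X.shape[1]), 2)]
        return np.column_stack(cols)

    # Invented CCD-style runs: [pH, biomass dose g/L, ion concentration mg/L] -> removal %
    X = np.array([[4, 2, 50], [4, 8, 50], [8, 2, 50], [8, 8, 50],
                  [4, 2, 150], [4, 8, 150], [8, 2, 150], [8, 8, 150],
                  [6, 5, 100], [6, 5, 100], [2, 5, 100], [10, 5, 100]], dtype=float)
    y = np.array([60, 75, 70, 85, 50, 68, 62, 78, 90, 88, 40, 55], dtype=float)

    beta, *_ = np.linalg.lstsq(quad_features(X), y, rcond=None)

    # Grid-search the fitted surface for the predicted optimum within the factor ranges.
    grid = np.array(list(product(np.linspace(2, 10, 25),
                                 np.linspace(2, 8, 25),
                                 np.linspace(50, 150, 25))))
    pred = quad_features(grid) @ beta
    print("predicted optimum (pH, dose, conc):", grid[np.argmax(pred)],
          "removal ~", round(float(pred.max()), 1))
    ```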

  20. Noncoplanar Beam Angle Class Solutions to Replace Time-Consuming Patient-Specific Beam Angle Optimization in Robotic Prostate Stereotactic Body Radiation Therapy

    International Nuclear Information System (INIS)

    Rossi, Linda; Breedveld, Sebastiaan; Aluwini, Shafak; Heijmen, Ben

    2015-01-01

    Purpose: To investigate development of a recipe for the creation of a beam angle class solution (CS) for noncoplanar prostate stereotactic body radiation therapy to replace time-consuming individualized beam angle selection (iBAS) without significant loss in plan quality, using the in-house “Erasmus-iCycle” optimizer for fully automated beam profile optimization and iBAS. Methods and Materials: For 30 patients, Erasmus-iCycle was first used to generate 15-, 20-, and 25-beam iBAS plans for a CyberKnife equipped with a multileaf collimator. With these plans, 6 recipes for creation of beam angle CSs were investigated. Plans of 10 patients were used to create CSs based on the recipes, and the other 20 to independently test them. For these tests, Erasmus-iCycle was also used to generate intensity modulated radiation therapy plans for the fixed CS beam setups. Results: Of the tested recipes for CS creation, only 1 resulted in 15-, 20-, and 25-beam noncoplanar CSs without plan deterioration compared with iBAS. For the patient group, mean differences in rectum D1cc, V60GyEq, V40GyEq, and Dmean between 25-beam CS plans and 25-beam plans generated with iBAS were 0.2 ± 0.4 Gy, 0.1% ± 0.2%, 0.2% ± 0.3%, and 0.1 ± 0.2 Gy, respectively. Differences between 15- and 20-beam CS and iBAS plans were also negligible. Plan quality for CS plans relative to iBAS plans was also preserved when narrower planning target volume margins were arranged and when planning target volume dose inhomogeneity was decreased. Using a CS instead of iBAS reduced the computation time by a factor of 14 to 25, mainly depending on beam number, without loss in plan quality. Conclusions: A recipe for creation of robust beam angle CSs for robotic prostate stereotactic body radiation therapy has been developed. Compared with iBAS, computation times decreased by a factor 14 to 25. The use of a CS may avoid long planning times without losses in plan quality.

  1. Co-modified MCM-41 as an effective adsorbent for levofloxacin removal from aqueous solution: optimization of process parameters, isotherm, and thermodynamic studies.

    Science.gov (United States)

    Jin, Ting; Yuan, Wenhua; Xue, Yujie; Wei, Hong; Zhang, Chaoying; Li, Kebin

    2017-02-01

    Antibiotics are emerging contaminants due to their potential risks to human health and ecosystems. Poor biodegradability makes it necessary to develop effective physical-chemical methods to eliminate these contaminants from water. The cobalt-modified MCM-41 was prepared by a one-pot hydrothermal method and characterized by SAXRD, N2 adsorption-desorption, SEM, UV-Vis DR, and FTIR spectroscopy. The results revealed that the prepared 3% Co-MCM-41 possessed a mesoporous structure with a BET surface area of around 898.5 m2 g−1. The adsorption performance of 3% Co-MCM-41 toward levofloxacin (LVF) was investigated by batch experiments. The adsorption of LVF on 3% Co-MCM-41 was very fast and reached equilibrium within 2 h. The adsorption kinetics followed the pseudo-second-order kinetic model, with second-order rate constants in the range of 0.00198-0.00391 g mg−1 min−1. The adsorption isotherms could be well represented by the Langmuir, Freundlich, and Dubinin-Radushkevich (D-R) isotherm equations. Nevertheless, the D-R isotherm provided the best fit based on the coefficient of determination and average relative error values. The mean free energy of adsorption (E) calculated from the D-R model was about 11 kJ mol−1, indicating that the adsorption was mainly governed by a chemisorption process. Moreover, the adsorption capacity was investigated as a function of pH, adsorbent dosage, LVF concentration, and temperature with the help of response surface methodology (RSM). A quadratic model was established, and an optimal condition was obtained as follows: pH 8.5, adsorbent dosage of 1 g L−1, initial LVF concentration of 119.8 mg L−1, and temperature of 31.6 °C. Under the optimal condition, the adsorption capacity of 3% Co-MCM-41 for LVF could reach about 108.1 mg g−1. The solution pH, adsorbent dosage, LVF concentration, and a combination of adsorbent dose and LVF concentration were significant factors affecting the adsorption process. The adsorption
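
    The Dubinin-Radushkevich analysis cited above regresses ln(qe) against the squared Polanyi potential, ε² with ε = RT·ln(1 + 1/Ce), and reads the mean free energy of adsorption as E = 1/sqrt(2K). A minimal sketch with made-up equilibrium data (not the study's measurements) follows.

    ```python
    import numpy as np

    R, T = 8.314, 304.75  # J/(mol K) and an example temperature (~31.6 deg C)
    # Hypothetical equilibrium data: residual concentration Ce and uptake qe (mg/g).
    Ce = np.array([0.05, 0.1, 0.2, 0.4, 0.8, 1.5])
    qe = np.array([45.0, 62.0, 78.0, 90.0, 100.0, 106.0])

    eps = R * T * np.log(1.0 + 1.0 / Ce)                 # Polanyi potential, J/mol
    slope, intercept = np.polyfit(eps**2, np.log(qe), 1)
    K = -slope                                           # D-R constant, mol^2/J^2
    qm = np.exp(intercept)                               # D-R capacity, mg/g
    E = 1.0 / np.sqrt(2.0 * K) / 1000.0                  # mean free energy, kJ/mol
    print(f"qm = {qm:.1f} mg/g, E = {E:.1f} kJ/mol")
    ```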

  2. Noncoplanar Beam Angle Class Solutions to Replace Time-Consuming Patient-Specific Beam Angle Optimization in Robotic Prostate Stereotactic Body Radiation Therapy

    Energy Technology Data Exchange (ETDEWEB)

    Rossi, Linda, E-mail: l.rossi@erasmusmc.nl; Breedveld, Sebastiaan; Aluwini, Shafak; Heijmen, Ben

    2015-07-15

    Purpose: To investigate development of a recipe for the creation of a beam angle class solution (CS) for noncoplanar prostate stereotactic body radiation therapy to replace time-consuming individualized beam angle selection (iBAS) without significant loss in plan quality, using the in-house “Erasmus-iCycle” optimizer for fully automated beam profile optimization and iBAS. Methods and Materials: For 30 patients, Erasmus-iCycle was first used to generate 15-, 20-, and 25-beam iBAS plans for a CyberKnife equipped with a multileaf collimator. With these plans, 6 recipes for creation of beam angle CSs were investigated. Plans of 10 patients were used to create CSs based on the recipes, and the other 20 to independently test them. For these tests, Erasmus-iCycle was also used to generate intensity modulated radiation therapy plans for the fixed CS beam setups. Results: Of the tested recipes for CS creation, only 1 resulted in 15-, 20-, and 25-beam noncoplanar CSs without plan deterioration compared with iBAS. For the patient group, mean differences in rectum D{sub 1cc}, V{sub 60GyEq}, V{sub 40GyEq}, and D{sub mean} between 25-beam CS plans and 25-beam plans generated with iBAS were 0.2 ± 0.4 Gy, 0.1% ± 0.2%, 0.2% ± 0.3%, and 0.1 ± 0.2 Gy, respectively. Differences between 15- and 20-beam CS and iBAS plans were also negligible. Plan quality for CS plans relative to iBAS plans was also preserved when narrower planning target volume margins were arranged and when planning target volume dose inhomogeneity was decreased. Using a CS instead of iBAS reduced the computation time by a factor of 14 to 25, mainly depending on beam number, without loss in plan quality. Conclusions: A recipe for creation of robust beam angle CSs for robotic prostate stereotactic body radiation therapy has been developed. Compared with iBAS, computation times decreased by a factor 14 to 25. The use of a CS may avoid long planning times without losses in plan quality.

  3. Non-extremal D-instantons

    NARCIS (Netherlands)

    Bergshoeff, E; Collinucci, A; Gran, U; Roest, D; Vandoren, S

    2004-01-01

    We construct the most general non-extremal deformation of the D-instanton solution with maximal rotational symmetry. The general non-supersymmetric solution carries electric charges of the SL(2,R) symmetry, which correspond to each of the three conjugacy classes of SL(2,R). Our calculations

  4. Non-extremal D-instantons

    NARCIS (Netherlands)

    Bergshoeff, E.; Collinucci, A.; Gran, U.; Roest, D.; Vandoren, S.

    2004-01-01

    We construct the most general non-extremal deformation of the D-instanton solution with maximal rotational symmetry. The general non-supersymmetric solution carries electric charges of the SL(2,R) symmetry, which correspond to each of the three conjugacy classes of SL(2,R). Our calculations

  5. Extreme environment electronics

    CERN Document Server

    Cressler, John D

    2012-01-01

    Unfriendly to conventional electronic devices, circuits, and systems, extreme environments represent a serious challenge to designers and mission architects. The first truly comprehensive guide to this specialized field, Extreme Environment Electronics explains the essential aspects of designing and using devices, circuits, and electronic systems intended to operate in extreme environments, including across wide temperature ranges and in radiation-intense scenarios such as space. The Definitive Guide to Extreme Environment Electronics Featuring contributions by some of the world's foremost exp

  6. Optimization modeling with spreadsheets

    CERN Document Server

    Baker, Kenneth R

    2015-01-01

    An accessible introduction to optimization analysis using spreadsheets Updated and revised, Optimization Modeling with Spreadsheets, Third Edition emphasizes model building skills in optimization analysis. By emphasizing both spreadsheet modeling and optimization tools in the freely available Microsoft® Office Excel® Solver, the book illustrates how to find solutions to real-world optimization problems without needing additional specialized software. The Third Edition includes many practical applications of optimization models as well as a systematic framework that il

  7. Extreme value distributions

    CERN Document Server

    Ahsanullah, Mohammad

    2016-01-01

    The aim of the book is to give a thorough account of the basic theory of extreme value distributions. The book covers a wide range of material available to date and presents the central ideas and results of extreme value distributions. It will be useful to applied statisticians as well as statisticians interested in working in the area of extreme value distributions. The monograph gives a self-contained treatment of the theory and applications of extreme value distributions.

  8. Optimization of Low-Thrust Spiral Trajectories by Collocation

    Science.gov (United States)

    Falck, Robert D.; Dankanich, John W.

    2012-01-01

    As NASA examines potential missions in the post space shuttle era, there has been a renewed interest in low-thrust electric propulsion for both crewed and uncrewed missions. While much progress has been made in the field of software for the optimization of low-thrust trajectories, many of the tools utilize higher-fidelity methods which, while excellent, result in extremely high run-times and poor convergence when dealing with planetocentric spiraling trajectories deep within a gravity well. Conversely, faster tools like SEPSPOT provide a reasonable solution but typically fail to account for other forces such as third-body gravitation, aerodynamic drag, and solar radiation pressure. SEPSPOT is further constrained by its solution method, which may require a very good guess to yield a converged optimal solution. Here the authors have developed an approach using collocation intended to provide solution times comparable to those given by SEPSPOT while allowing for greater robustness and extensible force models.
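
    Direct collocation, transcribing the continuous optimal-control problem into a nonlinear program by enforcing the dynamics as defect constraints at the mesh points, can be illustrated on a deliberately trivial rest-to-rest double-integrator transfer solved with SciPy's SLSQP. The sketch only shows the transcription idea; it has none of the orbital force models, scaling or mesh refinement a real low-thrust trajectory tool needs.

    ```python
    import numpy as np
    from scipy.optimize import minimize

    N, T = 12, 1.0          # mesh points and transfer time
    h = T / (N - 1)

    def unpack(z):
        return z[:N], z[N:2*N], z[2*N:]      # position, velocity, control at the nodes

    def objective(z):
        _, _, u = unpack(z)
        # Trapezoidal quadrature of the control-effort integral.
        return h * (0.5 * u[0]**2 + np.sum(u[1:-1]**2) + 0.5 * u[-1]**2)

    def defects(z):
        x1, x2, u = unpack(z)
        # Dynamics x1' = x2, x2' = u; trapezoidal collocation defects must vanish.
        d1 = x1[1:] - x1[:-1] - 0.5 * h * (x2[1:] + x2[:-1])
        d2 = x2[1:] - x2[:-1] - 0.5 * h * (u[1:] + u[:-1])
        bc = [x1[0], x2[0], x1[-1] - 1.0, x2[-1]]   # rest-to-rest transfer from 0 to 1
        return np.concatenate([d1, d2, bc])

    res = minimize(objective, np.zeros(3 * N), method="SLSQP",
                   constraints={"type": "eq", "fun": defects},
                   options={"maxiter": 500})
    x1, _, _ = unpack(res.x)
    print(f"converged: {res.success}, effort: {res.fun:.3f}, final position: {x1[-1]:.3f}")
    ```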

  9. Evaluation of different initial solution algorithms to be used in the heuristics optimization to solve the energy resource scheduling in smart grids

    DEFF Research Database (Denmark)

    Sousa, Tiago; Morais, Hugo; Castro, Rui

    2016-01-01

    vehicles. The proposed algorithms proved to give results very close to the optimal solution, with a small difference of about 0.1%. A deterministic technique is used for comparison, and it took around 26 h to obtain the optimal solution. On the other hand, the simulated annealing was able to obtain results around 1...

  10. Near optimal pentamodes as a tool for guiding stress while minimizing compliance in 3d-printed materials: A complete solution to the weak G-closure problem for 3d-printed materials

    Science.gov (United States)

    Milton, Graeme W.; Camar-Eddine, Mohamed

    2018-05-01

    For a composite containing one isotropic elastic material, with positive Lamé moduli, and void, with the elastic material occupying a prescribed volume fraction f, and with the composite being subject to an average stress σ0, Gibiansky, Cherkaev, and Allaire provided a sharp lower bound W_f(σ0) on the minimum compliance energy σ0:ε0, in which ε0 is the average strain. Here we show these bounds also provide sharp bounds on the possible (σ0, ε0)-pairs that can coexist in such composites, and thus solve the weak G-closure problem for 3d-printed materials. The materials we use to achieve the extremal (σ0, ε0)-pairs are denoted as near optimal pentamodes. We also consider two-phase composites containing this isotropic elastic material and a rigid phase, with the elastic material occupying a prescribed volume fraction f, and with the composite being subject to an average strain ε0. For such composites, Allaire and Kohn provided a sharp lower bound W̃_f(ε0) on the minimum elastic energy σ0:ε0. We show that these bounds also provide sharp bounds on the possible (σ0, ε0)-pairs that can coexist in such composites of the elastic and rigid phases, and thus solve the weak G-closure problem in this case too. The materials we use to achieve these extremal (σ0, ε0)-pairs are denoted as near optimal unimodes.

  11. Topology optimization of Channel flow problems

    DEFF Research Database (Denmark)

    Gersborg-Hansen, Allan; Sigmund, Ole; Haber, R. B.

    2005-01-01

    This paper describes a topology design method for simple two-dimensional flow problems. We consider steady, incompressible laminar viscous flows at low to moderate Reynolds numbers. This makes the flow problem non-linear and hence a non-trivial extension of the work of [Borrvall & Petersson 2002]. Further, the inclusion of inertia effects significantly alters the physics, enabling solutions of new classes of optimization problems, such as velocity-driven switches, that are not addressed by the earlier method. Specifically, we determine optimal layouts of channel flows that extremize a cost function which measures either some local aspect of the velocity field or a global quantity, such as the rate of energy dissipation. We use the finite element method to model the flow, and we solve the optimization problem with a gradient-based math-programming algorithm that is driven by analytical...

  12. On the diversity of multiple optimal controls for quantum systems

    International Nuclear Information System (INIS)

    Shir, O M; Baeck, Th; Beltrani, V; Rabitz, H; Vrakking, M J J

    2008-01-01

    This study presents simulations of optimal field-free molecular alignment and rotational population transfer (starting from the J = 0 rotational ground state of a diatomic molecule), optimized by means of laser pulse shaping guided by evolutionary algorithms. Qualitatively different solutions are obtained that optimize the alignment and population transfer efficiency to the maximum extent that is possible given the existing constraints on the optimization due to the finite bandwidth and energy of the laser pulse, the finite degrees of freedom in the laser pulse shaping and the evolutionary algorithm employed. The effect of these constraints on the optimization process is discussed at several levels, subject to theoretical as well as experimental considerations. We show that optimized alignment yields can reach extremely high values, even with severe constraints being present. The breadth of optimal controls is assessed, and a correlation is found between the diversity of solutions and the difficulty of the problem. In the pulse shapes that optimize dynamic alignment we observe a transition between pulse sequences that maximize the initial population transfer from J = 0 to J = 2 and pulse sequences that optimize the transfer to higher rotational levels

  13. Development and Optimization of a Positron Annihilation Lifetime Spectrometer to Measure Nanoscale Defects in Solids and Borane Cage Molecules in Aqueous Nitrate Solutions

    National Research Council Canada - National Science Library

    Ross, Matthew A

    2008-01-01

    .... The timing resolution of the optimized system is 197 ± 14 ps as measured with a known (60)Co source. A single-crystal tungsten sample was used to confirm the system calibration resulting in a lifetime of 101...

  14. Oil Reservoir Production Optimization using Optimal Control

    DEFF Research Database (Denmark)

    Völcker, Carsten; Jørgensen, John Bagterp; Stenby, Erling Halfdan

    2011-01-01

    Practical oil reservoir management involves solution of large-scale constrained optimal control problems. In this paper we present a numerical method for solution of large-scale constrained optimal control problems. The method is a single-shooting method that computes the gradients using the adjoint ... reservoir using water flooding and smart well technology. Compared to the uncontrolled case, the optimal operation increases the Net Present Value of the oil field by 10%.

  15. How extreme is extreme hourly precipitation?

    Science.gov (United States)

    Papalexiou, Simon Michael; Dialynas, Yannis G.; Pappas, Christoforos

    2016-04-01

    Accurate representation of precipitation at fine time scales (e.g., hourly), directly associated with flash flood events, is crucial in hydrological design and prediction. The upper part of a probability distribution, known as the distribution tail, determines the behavior of extreme events. In general, and loosely speaking, tails can be categorized into two families: the subexponential and the hyperexponential family, with the first generating more intense and more frequent extremes compared to the latter. In past studies, the focus has been mainly on daily precipitation, with the Gamma distribution being the most popular model. Here, we investigate the behaviour of tails of hourly precipitation by comparing the upper part of empirical distributions of thousands of records with three general types of tails corresponding to the Pareto, Lognormal, and Weibull distributions. Specifically, we use thousands of hourly rainfall records from all over the USA. The analysis indicates that heavier-tailed distributions describe the observed hourly rainfall extremes better than lighter tails. Traditional representations of the marginal distribution of hourly rainfall may significantly deviate from observed behaviours of extremes, with direct implications for hydroclimatic variable modelling and engineering design.
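
    A simple way to compare candidate tails, as done above for the Pareto, Lognormal and Weibull families, is to fit each distribution to a sample of hourly intensities and compare how quickly their survival functions decay at high quantiles. The sketch below uses SciPy on synthetic data; the sample, units and reference level are placeholders, not the USA station records.

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(2)
    # Synthetic positive "hourly rainfall" sample (mm), deliberately heavy-tailed.
    sample = rng.lognormal(mean=0.0, sigma=1.0, size=5000)

    candidates = {
        "Weibull": stats.weibull_min,
        "Lognormal": stats.lognorm,
        "Pareto-type (GPD)": stats.genpareto,
    }

    x_extreme = np.quantile(sample, 0.999)       # a far-tail reference level
    for name, dist in candidates.items():
        params = dist.fit(sample, floc=0.0)      # fix the location at zero for positive data
        tail_prob = dist.sf(x_extreme, *params)  # survival function at the extreme level
        print(f"{name:18s} P(X > {x_extreme:.1f} mm) = {tail_prob:.2e}")
    ```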

  16. Evaluation of Missed Energy Saving Opportunity Based on Illinois Home Performance Program Field Data: Homeowner Selected Upgrades Versus Cost-Optimized Solutions

    Energy Technology Data Exchange (ETDEWEB)

    Yee, S.; Milby, M.; Baker, J.

    2014-06-01

    Expanding on previous research by PARR, this study compares measure packages installed during 800 Illinois Home Performance with ENERGY STAR(R) (IHP) residential retrofits to those recommended as cost-optimal by Building Energy Optimization (BEopt) modeling software. In previous research, cost-optimal measure packages were identified for fifteen Chicagoland single family housing archetypes, called housing groups. In the present study, 800 IHP homes are first matched to one of these fifteen housing groups, and then the average measures being installed in each housing group are modeled using BEopt to estimate energy savings. For most housing groups, the differences between recommended and installed measure packages is substantial. By comparing actual IHP retrofit measures to BEopt-recommended cost-optimal measures, missed savings opportunities are identified in some housing groups; also, valuable information is obtained regarding housing groups where IHP achieves greater savings than BEopt-modeled, cost-optimal recommendations. Additionally, a measure-level sensitivity analysis conducted for one housing group reveals which measures may be contributing the most to gas and electric savings. Overall, the study finds not only that for some housing groups, the average IHP retrofit results in more energy savings than would result from cost-optimal, BEopt recommended measure packages, but also that linking home categorization to standardized retrofit measure packages provides an opportunity to streamline the process for single family home energy retrofits and maximize both energy savings and cost-effectiveness.

  17. Evaluation of Missed Energy Saving Opportunity Based on Illinois Home Performance Program Field Data: Homeowner Selected Upgrades Versus Cost-Optimized Solutions

    Energy Technology Data Exchange (ETDEWEB)

    Yee, S. [Partnership for Advanced Residential Retrofit, Chicago, IL (United States); Milby, M. [Partnership for Advanced Residential Retrofit, Chicago, IL (United States); Baker, J. [Partnership for Advanced Residential Retrofit, Chicago, IL (United States)

    2014-06-01

    Expanding on previous research by PARR, this study compares measure packages installed during 800 Illinois Home Performance with ENERGY STAR® (IHP) residential retrofits to those recommended as cost-optimal by Building Energy Optimization (BEopt) modeling software. In previous research, cost-optimal measure packages were identified for 15 Chicagoland single family housing archetypes. In the present study, 800 IHP homes are first matched to one of these 15 housing groups, and then the average measures being installed in each housing group are modeled using BEopt to estimate energy savings. For most housing groups, the differences between recommended and installed measure packages are substantial. By comparing actual IHP retrofit measures to BEopt-recommended cost-optimal measures, missed savings opportunities are identified in some housing groups; also, valuable information is obtained regarding housing groups where IHP achieves greater savings than BEopt-modeled, cost-optimal recommendations. Additionally, a measure-level sensitivity analysis conducted for one housing group reveals which measures may be contributing the most to gas and electric savings. Overall, the study finds not only that for some housing groups, the average IHP retrofit results in more energy savings than would result from cost-optimal, BEopt recommended measure packages, but also that linking home categorization to standardized retrofit measure packages provides an opportunity to streamline the process for single family home energy retrofits and maximize both energy savings and cost-effectiveness.

  18. Controlling solution-phase polymer aggregation with molecular weight and solvent additives to optimize polymer-fullerene bulk heterojunction solar cells

    KAUST Repository

    Bartelt, Jonathan A.; Douglas, Jessica D.; Mateker, William R.; El Labban, Abdulrahman; Tassone, Christopher J.; Toney, Michael F.; Fréchet, Jean M. J.; Beaujuge, Pierre; McGehee, Michael D.

    2014-01-01

    The bulk heterojunction (BHJ) solar cell performance of many polymers depends on the polymer molecular weight (Mn) and the solvent additive(s) used for solution processing. However, the mechanism that causes these dependencies is not well

  19. Technology improves upper extremity rehabilitation.

    Science.gov (United States)

    Kowalczewski, Jan; Prochazka, Arthur

    2011-01-01

    Stroke survivors with hemiparesis and spinal cord injury (SCI) survivors with tetraplegia find it difficult or impossible to perform many activities of daily life. There is growing evidence that intensive exercise therapy, especially when supplemented with functional electrical stimulation (FES), can improve upper extremity function, but delivering the treatment can be costly, particularly after recipients leave rehabilitation facilities. Recently, there has been a growing level of interest among researchers and healthcare policymakers to deliver upper extremity treatments to people in their homes using in-home teletherapy (IHT). The few studies that have been carried out so far have encountered a variety of logistical and technical problems, not least the difficulty of conducting properly controlled and blinded protocols that satisfy the requirements of high-level evidence-based research. In most cases, the equipment and communications technology were not designed for individuals with upper extremity disability. It is clear that exercise therapy combined with interventions such as FES, supervised over the Internet, will soon be adopted worldwide in one form or another. Therefore it is timely that researchers, clinicians, and healthcare planners interested in assessing IHT be aware of the pros and cons of the new technology and the factors involved in designing appropriate studies of it. It is crucial to understand the technical barriers, the role of telesupervisors, the motor improvements that participants can reasonably expect and the process of optimizing IHT-exercise therapy protocols to maximize the benefits of the emerging technology. Copyright © 2011 Elsevier B.V. All rights reserved.

  20. Classifying Returns as Extreme

    DEFF Research Database (Denmark)

    Christiansen, Charlotte

    2014-01-01

    I consider extreme returns for the stock and bond markets of 14 EU countries using two classification schemes: One, the univariate classification scheme from the previous literature that classifies extreme returns for each market separately, and two, a novel multivariate classification scheme tha...

  1. Clinical application of lower extremity CTA and lower extremity perfusion CT as a method of diagnostic for lower extremity atherosclerotic obliterans

    Energy Technology Data Exchange (ETDEWEB)

    Moon, Il Bong; Dong, Kyung Rae [Dept. Radiological Technology, Gwangju Health University, Gwangju (Korea, Republic of); Goo, Eun Hoe [Dept. Radiological Science, Cheongju University, Cheongju (Korea, Republic of)

    2016-11-15

    The purpose of this study was to assess the clinical application of lower extremity CTA and lower extremity perfusion CT as diagnostic methods for lower extremity atherosclerotic obliterans. From January to July 2016, 30 patients (mean age, 68) were studied with lower extremity CTA and lower extremity perfusion CT. 128-channel multi-detector row CT scans were acquired with a CT scanner (SOMATOM Definition Flash, Siemens medical solution, Germany) for both lower extremity perfusion CT and lower extremity CTA. Acquired images were reconstructed with a 3D workstation (Leonardo, Siemens, Germany). Lower extremity arterial occlusive and stenotic lesions were detected in the superficial femoral artery (36.6%), popliteal artery (23.4%), external iliac artery (16.7%), common femoral artery (13.3%), and peroneal artery (10%). The mean total DLP of lower extremity perfusion CT and lower extremity CTA was 650 mGy-cm and 675 mGy-cm, respectively. Lower extremity perfusion CT and lower extremity CTA never depicted exactly the same lesions in the two examinations. Future development of lower extremity perfusion CT software programs suggests possible clinical applications.

  2. Optimization method development of the core characteristics of a fast reactor in order to explore possible high performance solutions (a solution being a consistent set of fuel, core, system and safety)

    International Nuclear Information System (INIS)

    Ingremeau, J.-J.X.

    2011-01-01

    In the study of any new nuclear reactor, the design of the core is an important step. However, designing and optimising a reactor core is quite complex as it involves neutronics, thermal-hydraulics and fuel thermomechanics, and usually the design of such a system is achieved through an iterative process, involving several different disciplines. In order to solve such a multi-disciplinary system quickly, while observing the appropriate constraints, a new approach has been developed to optimise both the core performance (in-cycle Pu inventory, fuel burn-up, etc...) and the core safety characteristics (safety estimators) of a Fast Neutron Reactor. This new approach, called FARM (Fast Reactor Methodology), uses analytical models and interpolations (meta-models) from CEA reference codes for neutronics, thermal-hydraulics and fuel behaviour, which are coupled to automatically design a core based on several optimization variables. This global core model is then linked to a genetic algorithm and used to explore and optimise new core designs with improved performance. Consideration has also been given to which parameters can be best used to define the core performance and how safety can be taken into account. This new approach has been used to optimize the design of three concepts of Gas cooled Fast Reactor (GFR). For the first one, using a SiC/SiCf-cladded carbide-fuelled helium-bonded pin, the results demonstrate that the CEA reference core obtained with the traditional iterative method was an optimal core, but among many other possibilities (that is to say on the Pareto front). The optimization also found several other cores which exhibit some improved features at the expense of other safety or performance estimators. An evolution of this concept using a 'buffer', a new technology being developed at CEA, has hence been introduced in FARM. The FARM optimisation produced several core designs using this technology, and estimated their performance. The results obtained show that
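
    The coupling that the abstract describes, fast surrogate (meta-)models driven by a genetic algorithm that explores a Pareto front of core designs, can be sketched in a toy form. The two design variables, the surrogate functions, and the safety constraint below are hypothetical stand-ins, not the FARM models.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical surrogates: a performance estimator to maximize, an inventory to minimize,
# and a safety constraint. These stand in for the real meta-models of the methodology.
def perf(x):
    return -(x[:, 0] - 0.6) ** 2 - (x[:, 1] - 0.4) ** 2

def inventory(x):
    return x[:, 0] + 0.5 * x[:, 1]

def safety_ok(x):
    return x[:, 0] + x[:, 1] < 1.5

def pareto_front(f1, f2):
    """Indices of non-dominated points (maximize f1, minimize f2)."""
    idx = []
    for i in range(f1.size):
        dominated = np.any((f1 >= f1[i]) & (f2 <= f2[i]) &
                           ((f1 > f1[i]) | (f2 < f2[i])))
        if not dominated:
            idx.append(i)
    return np.array(idx)

pop = rng.random((200, 2))                      # two normalized design variables
for _ in range(50):
    f1, f2 = perf(pop), inventory(pop)
    w = rng.random()                            # random weight spreads designs along the front
    score = np.where(safety_ok(pop), w * f1 - (1 - w) * f2, -np.inf)
    winners = pop[np.argsort(-score)[:100]]     # selection
    parents = winners[rng.integers(0, 100, size=(200, 2))]
    children = 0.5 * (parents[:, 0] + parents[:, 1])          # averaging crossover
    pop = np.clip(children + rng.normal(0.0, 0.02, children.shape), 0.0, 1.0)  # mutation

front = pareto_front(perf(pop), inventory(pop))
print(f"{front.size} non-dominated designs, e.g. {np.round(pop[front[0]], 3)}")
```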

  3. Epidemiology of extremity fractures in the Netherlands

    NARCIS (Netherlands)

    Beerekamp, M. S. H.; de Muinck Keizer, R. J. O.; Schep, N. W. L.; Ubbink, D. T.; Panneman, M. J. M.; Goslings, J. C.

    2017-01-01

    Insight in epidemiologic data of extremity fractures is relevant to identify people at risk. By analyzing age- and gender specific fracture incidence and treatment patterns we may adjust future policy, take preventive measures and optimize health care management. Current epidemiologic data on

  4. Investigation of optimal manufacturing process for freeze-dried formulations: Observation of frozen solutions by low temperature X-ray diffraction measurements

    International Nuclear Information System (INIS)

    Egawa, Hiroaki; Yonemochi, Etsuo; Terada, Katsuhide

    2005-01-01

    Freeze-drying is used for the production of sterile injections in the pharmaceutical industry. However, most pharmaceutical compounds are obtained as a less stable amorphous form. Freeze crystallization by annealing is an effective method for pharmaceutical compounds that fail to crystallize in the freeze-drying process. Crystallization occurs in the frozen solution during the thermal treatment. In order to establish suitable annealing conditions efficiently, it is important to observe the crystallization process directly in the frozen solution. Recently, low temperature X-ray diffraction has been used to observe frozen solutions. In order to investigate the crystallization process kinetically, the temperature of the low temperature X-ray diffraction instrument must be accurately controlled. We calibrated the temperature of the X-ray diffraction instrument by measuring the eutectic temperatures of solutions for a series of compounds. Each eutectic crystal was observed in the frozen solution together with ice crystals below the eutectic temperature. Eutectic temperatures were detected by the decrease in diffraction intensity associated with heating from below the eutectic temperature. Good correlation was obtained between values in the literature and experimental values

  5. Spin glasses and nonlinear constraints in portfolio optimization

    Energy Technology Data Exchange (ETDEWEB)

    Andrecut, M., E-mail: mircea.andrecut@gmail.com

    2014-01-17

    We discuss the portfolio optimization problem with the obligatory deposits constraint. Recently it has been shown that as a consequence of this nonlinear constraint, the solution consists of an exponentially large number of optimal portfolios, completely different from each other, and extremely sensitive to any changes in the input parameters of the problem, making the concept of rational decision making questionable. Here we reformulate the problem using a quadratic obligatory deposits constraint, and we show that from the physics point of view, finding an optimal portfolio amounts to calculating the mean-field magnetizations of a random Ising model with the constraint of a constant magnetization norm. We show that the model reduces to an eigenproblem, with 2N solutions, where N is the number of assets defining the portfolio. Also, in order to illustrate our results, we present a detailed numerical example of a portfolio of several risky common stocks traded on the Nasdaq Market.
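
    The reduction to an eigenproblem with 2N solutions can be illustrated for the generic fixed-norm case: stationary points of a quadratic form under a constant-norm constraint are the eigenvectors of the matrix, each taken with either sign. The covariance matrix below is synthetic, and this is the generic reduction rather than the paper's specific obligatory-deposits formulation.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 6                                     # number of assets
A = rng.normal(size=(N, N))
C = A @ A.T / N                           # synthetic positive-definite "covariance"

# Stationary points of w^T C w subject to ||w||^2 = 1 satisfy C w = lambda w,
# so the candidate portfolios are the N eigenvectors, each with +/- sign: 2N in total.
eigvals, eigvecs = np.linalg.eigh(C)

candidates = [s * eigvecs[:, k] for k in range(N) for s in (+1, -1)]
risks = [w @ C @ w for w in candidates]
print(f"{len(candidates)} candidate solutions; "
      f"risk ranges from {min(risks):.4f} to {max(risks):.4f}")
```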

  6. Spin glasses and nonlinear constraints in portfolio optimization

    International Nuclear Information System (INIS)

    Andrecut, M.

    2014-01-01

    We discuss the portfolio optimization problem with the obligatory deposits constraint. Recently it has been shown that as a consequence of this nonlinear constraint, the solution consists of an exponentially large number of optimal portfolios, completely different from each other, and extremely sensitive to any changes in the input parameters of the problem, making the concept of rational decision making questionable. Here we reformulate the problem using a quadratic obligatory deposits constraint, and we show that from the physics point of view, finding an optimal portfolio amounts to calculating the mean-field magnetizations of a random Ising model with the constraint of a constant magnetization norm. We show that the model reduces to an eigenproblem, with 2N solutions, where N is the number of assets defining the portfolio. Also, in order to illustrate our results, we present a detailed numerical example of a portfolio of several risky common stocks traded on the Nasdaq Market.

  7. Application of multi-criteria decision-making model for choice of the optimal solution for meeting heat demand in the centralized supply system in Belgrade

    International Nuclear Information System (INIS)

    Grujić, Miodrag; Ivezić, Dejan; Živković, Marija

    2014-01-01

    The expected growth of living standard, number of inhabitants and development of technology, industry and agriculture will cause a significant increase of energy consumption in cities. Three scenarios of energy sector development until 2030 and corresponding energy consumption for the city of Belgrade are analyzed in this paper. These scenarios consider different level of economic development, investments in energy sector, substitution of fossil fuels, introduction of renewable energy sources and implementation of energy efficiency measures. The proposed model for selection of optimal district heating system compares different options for fulfilling expected new heat demand through eight criteria for each scenario. Proposed options are combination of different energy sources and technologies for their use. The criteria weights are set according to Serbian economy and energy position. The criteria include financial aspects, environmental impact and availability of energy. Multi-criteria method ELECTRE (ELimination Et Choix Traduisant la REalite) is used as a tool for obtaining the optimal option. It is concluded that combination of CHP (combined heat and power) plant and centralized use of geothermal energy is optimal choice in the optimistic scenario. In the pessimistic and business as usual scenario the optimal option is combination of new gas boilers and centralized use of geothermal energy. - Highlights: • Three scenarios for meeting new heat demand are developed and assessed. • Constructing CHP (combined heat and power) is desirable in case of significant electricity price growth. • In all scenarios the chosen option includes using geothermal energy for heating
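
    The multi-criteria ranking step can be sketched with the concordance calculation used by ELECTRE-family methods: for each ordered pair of options, sum the weights of the criteria on which the first option scores at least as well as the second. The options, criterion scores, and weights below are invented placeholders, not the Belgrade study's data, and the full ELECTRE procedure also involves discordance and outranking thresholds that are omitted here.

```python
import numpy as np

# Rows: heat-supply options, columns: criteria (normalized so that higher is better)
options = ["CHP + geothermal", "gas boilers + geothermal", "gas boilers only"]
scores = np.array([
    [0.8, 0.6, 0.9, 0.7],   # placeholder criterion scores
    [0.6, 0.8, 0.7, 0.6],
    [0.4, 0.9, 0.3, 0.5],
])
weights = np.array([0.35, 0.30, 0.20, 0.15])   # placeholder criteria weights, summing to 1

def concordance_matrix(scores, weights):
    """C[a, b] = total weight of criteria on which option a is at least as good as b."""
    n = scores.shape[0]
    C = np.zeros((n, n))
    for a in range(n):
        for b in range(n):
            if a != b:
                C[a, b] = weights[scores[a] >= scores[b]].sum()
    return C

C = concordance_matrix(scores, weights)
for a, name in enumerate(options):
    print(name, np.round(C[a], 2))
```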

  8. Freely dissolved concentrations of anionic surfactants in seawater solutions: optimization of the non-depletive solid-phase microextraction method and application to linear alkylbenzene sulfonates.

    NARCIS (Netherlands)

    Rico Rico, A.; Droge, S.T.J.; Widmer, D.; Hermens, J.L.M.

    2009-01-01

    A solid-phase microextraction method (SPME) has been optimized for the analysis of freely dissolved anionic surfactants, namely linear alkylbenzene sulfonates (LAS), in seawater. An effect of the thermal conditioning treatment on the polyacrylate fiber coating was demonstrated for both uptake

  9. Evolution strategies for robust optimization

    NARCIS (Netherlands)

    Kruisselbrink, Johannes Willem

    2012-01-01

    Real-world (black-box) optimization problems often involve various types of uncertainties and noise emerging in different parts of the optimization problem. When this is not accounted for, optimization may fail or may yield solutions that are optimal in the classical strict notion of optimality, but

  10. Particle Swarm Optimization Toolbox

    Science.gov (United States)

    Grant, Michael J.

    2010-01-01

    The Particle Swarm Optimization Toolbox is a library of evolutionary optimization tools developed in the MATLAB environment. The algorithms contained in the library include a genetic algorithm (GA), a single-objective particle swarm optimizer (SOPSO), and a multi-objective particle swarm optimizer (MOPSO). Development focused on both the SOPSO and MOPSO. A GA was included mainly for comparison purposes, and the particle swarm optimizers appeared to perform better for a wide variety of optimization problems. All algorithms are capable of performing unconstrained and constrained optimization. The particle swarm optimizers are capable of performing single and multi-objective optimization. The SOPSO and MOPSO algorithms are based on swarming theory and bird-flocking patterns to search the trade space for the optimal solution or optimal trade in competing objectives. The MOPSO generates Pareto fronts for objectives that are in competition. A GA, based on Darwinian evolutionary theory, is also included in the library. The GA consists of individuals that form a population in the design space. The population mates to form offspring at new locations in the design space. These offspring contain traits from both of the parents. The algorithm is based on this combination of traits from parents, in the hope of producing a better solution than either of the original parents. As the algorithm progresses, individuals that hold these optimal traits will emerge as the optimal solutions. Due to the generic design of all optimization algorithms, each algorithm interfaces with a user-supplied objective function. This function serves as a "black box" to the optimizers; its only purpose is to evaluate candidate solutions provided by the optimizers. Hence, the user-supplied function can be numerical simulations, analytical functions, etc., since the specific detail of this function is of no concern to the optimizer. These algorithms were originally developed to support entry
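
    The toolbox itself is MATLAB code; the sketch below only illustrates the basic single-objective particle swarm update it describes (inertia, a pull toward each particle's personal best, and a pull toward the global best) applied to a user-supplied black-box objective. The parameter values are conventional defaults, not those of the toolbox.

```python
import numpy as np

def sphere(x):                      # any black-box objective works here
    return np.sum(x ** 2, axis=1)

def pso(objective, dim=5, n_particles=30, iters=200,
        w=0.7, c1=1.5, c2=1.5, bounds=(-5.0, 5.0), seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    x = rng.uniform(lo, hi, (n_particles, dim))      # positions
    v = np.zeros_like(x)                             # velocities
    pbest, pbest_f = x.copy(), objective(x)          # personal bests
    g = pbest[np.argmin(pbest_f)].copy()             # global best

    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        # Inertia + cognitive pull (personal best) + social pull (global best)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        f = objective(x)
        improved = f < pbest_f
        pbest[improved], pbest_f[improved] = x[improved], f[improved]
        g = pbest[np.argmin(pbest_f)].copy()
    return g, pbest_f.min()

best_x, best_f = pso(sphere)
print("best objective value:", best_f)
```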

  11. Extreme Weather and Climate: Workshop Report

    Science.gov (United States)

    Sobel, Adam; Camargo, Suzana; Debucquoy, Wim; Deodatis, George; Gerrard, Michael; Hall, Timothy; Hallman, Robert; Keenan, Jesse; Lall, Upmanu; Levy, Marc; hide

    2016-01-01

    Extreme events are the aspects of climate to which human society is most sensitive. Due to both their severity and their rarity, extreme events can challenge the capacity of physical, social, economic and political infrastructures, turning natural events into human disasters. Yet, because they are low frequency events, the science of extreme events is very challenging. Among the challenges is the difficulty of connecting extreme events to longer-term, large-scale variability and trends in the climate system, including anthropogenic climate change. How can we best quantify the risks posed by extreme weather events, both in the current climate and in the warmer and different climates to come? How can we better predict them? What can we do to reduce the harm done by such events? In response to these questions, the Initiative on Extreme Weather and Climate has been created at Columbia University in New York City (extremeweather.columbia.edu). This Initiative is a University-wide activity focused on understanding the risks to human life, property, infrastructure, communities, institutions, ecosystems, and landscapes from extreme weather events, both in the present and future climates, and on developing solutions to mitigate those risks. In May 2015, the Initiative held its first science workshop, entitled Extreme Weather and Climate: Hazards, Impacts, Actions. The purpose of the workshop was to define the scope of the Initiative; the tremendously broad intellectual footprint of the topic is indicated by the titles of the presentations (see Table 1). The intent of the workshop was to stimulate thought across disciplinary lines by juxtaposing talks whose subjects differed dramatically. Each session concluded with question and answer panel sessions. Approximately 150 people were in attendance throughout the day. Below is a brief synopsis of each presentation. The synopses collectively reflect the variety and richness of the emerging extreme event research agenda.

  12. Extremal surface barriers

    International Nuclear Information System (INIS)

    Engelhardt, Netta; Wall, Aron C.

    2014-01-01

    We present a generic condition for Lorentzian manifolds to have a barrier that limits the reach of boundary-anchored extremal surfaces of arbitrary dimension. We show that any surface with nonpositive extrinsic curvature is a barrier, in the sense that extremal surfaces cannot be continuously deformed past it. Furthermore, the outermost barrier surface has nonnegative extrinsic curvature. Under certain conditions, we show that the existence of trapped surfaces implies a barrier, and conversely. In the context of AdS/CFT, these barriers imply that it is impossible to reconstruct the entire bulk using extremal surfaces. We comment on the implications for the firewall controversy

  13. Extremal vacuum black holes in higher dimensions

    International Nuclear Information System (INIS)

    Figueras, Pau; Lucietti, James; Rangamani, Mukund; Kunduri, Hari K.

    2008-01-01

    We consider extremal black hole solutions to the vacuum Einstein equations in dimensions greater than five. We prove that the near-horizon geometry of any such black hole must possess an SO(2,1) symmetry in a special case where one has an enhanced rotational symmetry group. We construct examples of vacuum near-horizon geometries using the extremal Myers-Perry black holes and boosted Myers-Perry strings. The latter lead to near-horizon geometries of black ring topology, which in odd spacetime dimensions have the correct number of rotational symmetries to describe an asymptotically flat black object. We argue that a subset of these correspond to the near-horizon limit of asymptotically flat extremal black rings. Using this identification we provide a conjecture for the exact 'phase diagram' of extremal vacuum black rings with a connected horizon in odd spacetime dimensions greater than five.

  14. Cold water for tyre production. Carrier: Intelligent solutions for optimal energy use; Kaltes Wasser fuer heisse Reifen. Carrier: Intelligente Loesungen zur optimalen Energienutzung

    Energy Technology Data Exchange (ETDEWEB)

    Anon.

    2009-01-15

    Especially when it comes to special solutions for specific applications, it becomes apparent how well man and machine work together. In a production plant of Michelin Reifenwerke at Bad Kreuznach, Carrier installed two absorption refrigerators in cooperation with further specialist partners. Instead of electric power, the machines utilize waste heat from the production process. (orig.)

  15. Statistics of Extremes

    KAUST Repository

    Davison, Anthony C.; Huser, Raphaë l

    2015-01-01

    Statistics of extremes concerns inference for rare events. Often the events have never yet been observed, and their probabilities must therefore be estimated by extrapolation of tail models fitted to available data. Because data concerning the event

  16. Analysis of extreme events

    CSIR Research Space (South Africa)

    Khuluse, S

    2009-04-01

    Full Text Available ... (ii) determination of the distribution of the damage and (iii) preparation of products that enable prediction of future risk events. The methodology provided by extreme value theory can also be a powerful tool in risk analysis...

  17. Acute lower extremity ischaemia

    African Journals Online (AJOL)

    Acute lower limb ischaemia is a surgical emergency. ... is ~1.5 cases per 10 000 persons per year. Acute ischaemia ... [Table 2: Clinical features discriminating embolic from thrombotic ALEXI.]

  18. A Bootstrap-Based Probabilistic Optimization Method to Explore and Efficiently Converge in Solution Spaces of Earthquake Source Parameter Estimation Problems: Application to Volcanic and Tectonic Earthquakes

    Science.gov (United States)

    Dahm, T.; Heimann, S.; Isken, M.; Vasyura-Bathke, H.; Kühn, D.; Sudhaus, H.; Kriegerowski, M.; Daout, S.; Steinberg, A.; Cesca, S.

    2017-12-01

    Seismic source and moment tensor waveform inversion is often ill-posed or non-unique if station coverage is poor or signals are weak. Therefore, the interpretation of moment tensors can become difficult if the full model space is not explored, including all its trade-offs and uncertainties. This is especially true for non-double couple components of weak or shallow earthquakes, as for instance found in volcanic, geothermal or mining environments. We developed a bootstrap-based probabilistic optimization scheme (Grond), which is based on pre-calculated Green's function full waveform databases (e.g. fomosto tool, doi.org/10.5880/GFZ.2.1.2017.001). Grond is able to efficiently explore the full model space, the trade-offs and the uncertainties of source parameters. The program is highly flexible with respect to the adaptation to specific problems, the design of objective functions, and the diversity of empirical datasets. It uses an integrated, robust waveform data processing based on a newly developed Python toolbox for seismology (Pyrocko, see Heimann et al., 2017, http://doi.org/10.5880/GFZ.2.1.2017.001), and allows for visual inspection of many aspects of the optimization problem. Grond has been applied to the CMT moment tensor inversion using W-phases, to nuclear explosions in Korea, to meteorite atmospheric explosions, to volcano-tectonic events during caldera collapse and to intra-plate volcanic and tectonic crustal events. Grond can be used to simultaneously optimize seismological waveforms, amplitude spectra and static displacements of geodetic data such as InSAR and GPS (e.g. KITE, Isken et al., 2017, http://doi.org/10.5880/GFZ.2.1.2017.002). We present examples of Grond optimizations to demonstrate the advantage of a full exploration of source parameter uncertainties for interpretation.
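
    This is not the Grond or Pyrocko API; it is only a toy illustration of the bootstrap idea the abstract describes: repeatedly re-weight the misfit contributions of individual stations and re-optimize, so that the spread of the resulting best-fit models maps out the parameter uncertainty. The forward model, station geometry, and grid search are assumptions.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical forward model: predicted amplitude at each station given one source parameter
station_geometry = rng.uniform(0.5, 2.0, size=12)
true_param = 1.3
observed = true_param * station_geometry + rng.normal(0, 0.05, size=12)  # noisy "data"

def misfits(param):
    return (param * station_geometry - observed) ** 2   # per-station squared misfit

def best_param(weights, grid=np.linspace(0.5, 2.5, 2001)):
    # Grid search for the parameter minimizing the weighted total misfit
    totals = [np.sum(weights * misfits(p)) for p in grid]
    return grid[int(np.argmin(totals))]

# Bootstrap chains: each chain uses its own random station weights
n_chains = 200
weights = rng.dirichlet(np.ones(observed.size), size=n_chains)
estimates = np.array([best_param(w) for w in weights])

print(f"mean estimate {estimates.mean():.3f}, "
      f"95% interval [{np.percentile(estimates, 2.5):.3f}, "
      f"{np.percentile(estimates, 97.5):.3f}]")
```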

  19. Solution of the radiative enclosure with a hybrid inverse method

    Energy Technology Data Exchange (ETDEWEB)

    Silva, Rogerio Brittes da; Franca, Francis Henrique Ramos [Universidade Federal do Rio Grande do Sul (UFRGS), Porto Alegre, RS (Brazil). Dept. de Engenharia Mecanica], E-mail: frfranca@mecanica.ufrgs.br

    2010-07-01

    This work applies inverse analysis to solve a three-dimensional radiative enclosure, whose surfaces are diffuse-gray, filled with transparent medium. The aim is to determine the powers and locations of the heaters to attain both uniform heat flux and temperature on the design surface. A hybrid solution that couples two methods, the generalized extremal optimization (GEO) and the truncated singular value decomposition (TSVD), is proposed. The determination of the heat sources distribution is treated as an optimization problem, solved by the GEO algorithm, whereas the solution of the system of equations, which embodies the Fredholm equation of the first kind and is therefore expected to be ill conditioned, is built up through the TSVD regularization method. The results show that the hybrid method can lead to a heat flux on the design surface that satisfies the imposed conditions with a maximum error of less than 1.10%. The results illustrate the relevance of a hybrid method as a prediction tool. (author)
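
    The TSVD half of the hybrid can be sketched as follows: solve an ill-conditioned linear system A q = b for the heater powers by discarding the small singular values. The kernel matrix and right-hand side below are synthetic; in the paper, A would come from the discretized Fredholm equation of the first kind and the truncation level would be chosen together with the GEO search.

```python
import numpy as np

def tsvd_solve(A, b, k):
    """Truncated SVD solution keeping only the k largest singular values."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    s_inv = np.zeros_like(s)
    s_inv[:k] = 1.0 / s[:k]                 # discard tiny singular values
    return Vt.T @ (s_inv * (U.T @ b))

# Synthetic ill-conditioned system (smooth kernel -> rapidly decaying singular values)
n = 50
x = np.linspace(0, 1, n)
A = np.exp(-(x[:, None] - x[None, :]) ** 2 / 0.02)
q_true = np.sin(2 * np.pi * x)
b = A @ q_true + 1e-4 * np.random.default_rng(0).normal(size=n)

for k in (5, 15, 50):
    err = np.linalg.norm(tsvd_solve(A, b, k) - q_true) / np.linalg.norm(q_true)
    print(f"k={k:2d}  relative error {err:.3f}")
```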

  20. District heating (DH) network design and operation toward a system-wide methodology for optimizing renewable energy solutions (SMORES) in Canada: A case study

    DEFF Research Database (Denmark)

    Dalla Rosa, A.; Boulter, R.; Church, K.

    2012-01-01

    better energy delivery performance than high-temperature district heating (HTDH) (Tsupply > 100 °C), decreasing the heat loss by approximately 40%. The low-temperature networks (Tsupplyinvestment. The implementation...... in Canada. The paper discusses critical issues and quantifies the performance of design concepts for DH supply to low heat density areas. DH is a fundamental energy infrastructure and is part of the solution for sustainable energy planning in Canadian communities....

  1. Preparation of (U, Gd)O2 by inverse co-precipitation in nitric solutions. Study of homogeneity and process optimization

    International Nuclear Information System (INIS)

    Marchi, Daniel E.; Menghini, Jorge E.; Trimarco, Viviana G.

    1999-01-01

    The inverse co-precipitation method has been used at the laboratory level to produce uranium - gadolinium mixed oxides. The formation of a mixed phase in the precipitates has been determined as well as the occurrence of only one phase in the sintered pellets, corresponding to a gadolinium - uranium solution. Moreover, a modification in the calcination-reduction stage was introduced that allows the elimination of the fissures previously detected in the sintered pellets

  2. Rolls-Royce I and C long term support a dedicated solution to manage plant condition, optimize operating life and maximize plant value while improving safety

    International Nuclear Information System (INIS)

    Baillonmartos, F.

    2010-01-01

    Rolls-Royce supplied safety I and C equipment as Original Equipment Manufacturer (OEM) for 200 reactors in operation since the 1970s, spanning various technological steps (analog to digital hardware). The hardware scope covers various systems, with more than 500 hardware references and more than 150,000 parts in operation. Rolls-Royce contributes to lifetime extension by ensuring system availability and reliability on a long-term basis and by determining the optimum solution between different maintenance approaches: maintenance and repair with limited retrofit, or major system retrofit. Based on Long Term Support (LTS) agreements with customers, Rolls-Royce commits to maintain the capability to manufacture, modify, repair and test, at board, rack and system level, over a long time period. This means finding solutions to hardware ageing, technology evolution, and skills and tools maintenance. This paper describes how Rolls-Royce has built a global process based on delivering a package of services and developing long-term support engineering solutions and tools. Our main experience from the EDF partnership will serve to illustrate Rolls-Royce I and C Long Term Support activity. (authors)

  3. Controlling solution-phase polymer aggregation with molecular weight and solvent additives to optimize polymer-fullerene bulk heterojunction solar cells

    KAUST Repository

    Bartelt, Jonathan A.

    2014-03-20

    The bulk heterojunction (BHJ) solar cell performance of many polymers depends on the polymer molecular weight (Mn) and the solvent additive(s) used for solution processing. However, the mechanism that causes these dependencies is not well understood. This work determines how Mn and solvent additives affect the performance of BHJ solar cells made with the polymer poly(di(2-ethylhexyloxy)benzo[1,2-b:4,5-b']dithiophene-co-octylthieno[3,4-c]pyrrole-4,6-dione) (PBDTTPD). Low Mn PBDTTPD devices have exceedingly large fullerene-rich domains, which cause extensive charge-carrier recombination. Increasing the Mn of PBDTTPD decreases the size of these domains and significantly improves device performance. PBDTTPD aggregation in solution affects the size of the fullerene-rich domains and this effect is linked to the dependency of PBDTTPD solubility on Mn. Due to its poor solubility, high Mn PBDTTPD quickly forms a fibrillar polymer network during spin-casting and this network acts as a template that prevents large-scale phase separation. Furthermore, processing low Mn PBDTTPD devices with a solvent additive improves device performance by inducing polymer aggregation in solution and preventing large fullerene-rich domains from forming. These findings highlight that polymer aggregation in solution plays a significant role in determining the morphology and performance of BHJ solar cells. The performance of poly(di(2-ethylhexyloxy)benzo[1,2-b:4,5-b']dithiophene-co-octylthieno[3,4-c]pyrrole-4,6-dione) (PBDTTPD) bulk heterojunction solar cells strongly depends on the polymer molecular weight, and processing these bulk heterojunctions with a solvent additive preferentially improves the performance of low molecular weight devices. It is demonstrated that polymer aggregation in solution significantly impacts the thin-film bulk heterojunction morphology and is vital for high device performance. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  4. Extreme Programming: Maestro Style

    Science.gov (United States)

    Norris, Jeffrey; Fox, Jason; Rabe, Kenneth; Shu, I-Hsiang; Powell, Mark

    2009-01-01

    "Extreme Programming: Maestro Style" is the name of a computer programming methodology that has evolved as a custom version of a methodology, called extreme programming that has been practiced in the software industry since the late 1990s. The name of this version reflects its origin in the work of the Maestro team at NASA's Jet Propulsion Laboratory that develops software for Mars exploration missions. Extreme programming is oriented toward agile development of software resting on values of simplicity, communication, testing, and aggressiveness. Extreme programming involves use of methods of rapidly building and disseminating institutional knowledge among members of a computer-programming team to give all the members a shared view that matches the view of the customers for whom the software system is to be developed. Extreme programming includes frequent planning by programmers in collaboration with customers, continually examining and rewriting code in striving for the simplest workable software designs, a system metaphor (basically, an abstraction of the system that provides easy-to-remember software-naming conventions and insight into the architecture of the system), programmers working in pairs, adherence to a set of coding standards, collaboration of customers and programmers, frequent verbal communication, frequent releases of software in small increments of development, repeated testing of the developmental software by both programmers and customers, and continuous interaction between the team and the customers. The environment in which the Maestro team works requires the team to quickly adapt to changing needs of its customers. In addition, the team cannot afford to accept unnecessary development risk. Extreme programming enables the Maestro team to remain agile and provide high-quality software and service to its customers. However, several factors in the Maestro environment have made it necessary to modify some of the conventional extreme

  5. Pseudolinear functions and optimization

    CERN Document Server

    Mishra, Shashi Kant

    2015-01-01

    Pseudolinear Functions and Optimization is the first book to focus exclusively on pseudolinear functions, a class of generalized convex functions. It discusses the properties, characterizations, and applications of pseudolinear functions in nonlinear optimization problems.The book describes the characterizations of solution sets of various optimization problems. It examines multiobjective pseudolinear, multiobjective fractional pseudolinear, static minmax pseudolinear, and static minmax fractional pseudolinear optimization problems and their results. The authors extend these results to locally

  6. Novel power flow problem solutions method’s based on genetic algorithm optimization for banks capacitor compensation using an fuzzy logic rule bases for critical nodal detections

    OpenAIRE

    Abdelfatah, Nasri; Brahim, Gasbaoui

    2011-01-01

    Reactive power flow is one of the electrical distribution system problems that has attracted great interest from electrical network researchers: it reduces active power transmission capacity, increases power losses, and increases voltage drop. In this research we describe the efficiency of the FLC-GAO approach to solve the optimal power flow (OPF) combinatorial problem. The proposed approach employs two algorithms, a fuzzy logic controller (FLC) algorithm for critical nodal de...

  7. A contribution to the solution of the problems raised by the application of the principle of protection optimization to nuclear plants

    International Nuclear Information System (INIS)

    Lacourly, G.; Demerle, P.

    1975-01-01

    The radiological protection of populations and the environment rests on two main principles: the dose delivered to the individuals of the most exposed population group must remain below the dose limits set up by regulations; and the doses must be kept as low as reasonably achievable, social and economic considerations being taken into account. While the application of the former principle has now become routine work, the application of the latter, which implies optimization calculations, raises a number of difficult problems. In order to decide whether an exposure can be easily reduced, both the benefits of the reduction and its cost must be considered, which leads to a differential analysis [fr

  8. Near-horizon symmetries of extremal black holes

    International Nuclear Information System (INIS)

    Kunduri, Hari K; Lucietti, James; Reall, Harvey S

    2007-01-01

    Recent work has demonstrated an attractor mechanism for extremal rotating black holes subject to the assumption of a near-horizon SO(2, 1) symmetry. We prove the existence of this symmetry for any extremal black hole with the same number of rotational symmetries as known four- and five-dimensional solutions (including black rings). The result is valid for a general two-derivative theory of gravity coupled to Abelian vectors and uncharged scalars, allowing for a non-trivial scalar potential. We prove that it remains valid in the presence of higher-derivative corrections. We show that SO(2, 1)-symmetric near-horizon solutions can be analytically continued to give SU(2)-symmetric black hole solutions. For example, the near-horizon limit of an extremal 5D Myers-Perry black hole is related by analytic continuation to a non-extremal cohomogeneity-1 Myers-Perry solution

  9. Batch removal and optimization of Cu(II) ions from aqueous solution by biosorption on to native and pretreated mangifera indica seeds

    International Nuclear Information System (INIS)

    Noreen, A.; Ali, G.; Jabeen, M.

    2011-01-01

    Biosorption is a process that utilizes biomass to sequester toxic heavy metals and is particularly useful for the removal of contaminants from industrial effluents. The present study involved batch experiments for the sorption of Cu(II) onto Mangifera indica seed kernel particles in order to optimize the biosorbent dose, agitation rate, pH, contact time and initial metal ion concentration. The effect of citric acid pretreatment was also studied. Maximum uptake was observed at pH 5, biosorbent dose 0.5 g and agitation rate 150 rpm. A direct correlation was found to exist between adsorbed Cu(II) ion concentration and initial metal concentration up to a certain level, after which it reached a saturation value at about 250 mg/L. Biosorption equilibrium was established by 60 min. The maximum metal uptake capacity was 13.2 mg/g at the optimized conditions. The uptake capacity of the biomass was increased by chemical pretreatment with citric acid (15.2 mg/g) compared with the raw biomass (13.2 mg/g). Equilibrium data were fitted to the Freundlich and Langmuir isotherm equations, and the data were found to be well represented by the Langmuir isotherm equation, with r² = 0.9981 and q_max = 17.939 mg/g for the raw biomass and r² = 0.9984 and q_max = 18.57 mg/g for the modified biomass. (author)
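
    Fitting the Langmuir isotherm q_e = q_max K_L C_e / (1 + K_L C_e) to equilibrium data is a standard nonlinear least-squares problem; a minimal sketch is shown below. The concentration and uptake values are made-up illustrations, not the reported measurements.

```python
import numpy as np
from scipy.optimize import curve_fit

def langmuir(Ce, qmax, KL):
    """Langmuir isotherm: equilibrium uptake as a function of equilibrium concentration."""
    return qmax * KL * Ce / (1.0 + KL * Ce)

# Hypothetical equilibrium data (Ce in mg/L, qe in mg/g)
Ce = np.array([10, 25, 50, 100, 150, 200, 250], dtype=float)
qe = np.array([4.8, 8.9, 11.9, 14.6, 15.9, 16.6, 17.1])

(qmax, KL), _ = curve_fit(langmuir, Ce, qe, p0=[15.0, 0.05])
residuals = qe - langmuir(Ce, qmax, KL)
r2 = 1.0 - np.sum(residuals ** 2) / np.sum((qe - qe.mean()) ** 2)
print(f"q_max = {qmax:.2f} mg/g, K_L = {KL:.4f} L/mg, r^2 = {r2:.4f}")
```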

  10. Extreme meteorological conditions

    International Nuclear Information System (INIS)

    Altinger de Schwarzkopf, M.L.

    1983-01-01

    Different meteorological variables which may reach significant extreme values, such as wind speed, in particular its occurrence through tornadoes and hurricanes, and which must be taken into account at the time of nuclear power plant installation, are analyzed. For this kind of study, it is necessary to determine the basic design phenomenon. Two criteria are applied to define the basic design values for extreme meteorological variables. The first one determines the expected extreme value: it is obtained by analyzing the recurrence of the phenomenon over an agreed period of time, which is generally 50 years. The second one determines the extreme value of low probability, taking into account the nuclear power plant's operating life, for example 25 years, and considering, during that period, the occurrence probabilities of extreme meteorological phenomena. The values may be determined either by the deterministic method, which is based on knowledge of the fundamental physical characteristics of the phenomena, or by the probabilistic method, which is based on the analysis of historical statistical data. Brief comments are made on the subject in relation to the Argentine Republic area. (R.J.S.) [es

  11. A deficit in optimizing task solution but robust and well-retained speed and accuracy gains in complex skill acquisition in Parkinson's disease: multi-session training on the Tower of Hanoi Puzzle.

    Science.gov (United States)

    Vakil, Eli; Hassin-Baer, Sharon; Karni, Avi

    2014-05-01

    There are inconsistent results in the research literature as to whether a procedural memory dysfunction exists as a core deficit in Parkinson's disease (PD). To address this issue, we examined the acquisition and long-term retention of a cognitive skill in patients with moderately severe PD. To this end, we used a computerized version of the Tower of Hanoi Puzzle. Sixteen patients with PD (11 males, age 60.9±10.26 years, education 13.8±3.5 years, disease duration 8.6±4.7 years, UPDRS III "On" score 16±5.3) were compared with 20 healthy individuals matched for age, gender, education and MMSE scores. The patients were assessed while taking their anti-Parkinsonian medication. All participants underwent three consecutive practice sessions, 24-48h apart, and a retention-test session six months later. A computerized version of the Tower of Hanoi Puzzle, with four disks, was used for training. Participants completed the task 18 times in each session. Number of moves (Nom) to solution, and time per move (Tpm), were used as measures of acquisition and retention of the learned skill. Robust learning, a significant reduction in Nom and a concurrent decrease in Tpm, were found across all three training sessions, in both groups. Moreover, both patients and controls showed significant savings for both measures at six months post-training. However, while their Tpm was no slower than that of controls, patients with PD required more Nom (in 3rd and 4th sessions) and tended to stabilize on less-than-optimal solutions. The results do not support the notion of a core deficit in gaining speed (fluency) or generating procedural memory in PD. However, PD patients settled on less-than-optimal solutions of the task, i.e., a less efficient task-solving process. The results are consistent with animal studies of the effects of dopamine depletion on task exploration. Thus, patients with PD may have a problem in exploring for optimal task solution rather than in skill acquisition and

  12. Etching process optimization using NH4Cl aqueous solution to texture ZnO:Al films for efficient light trapping in flexible thin film solar cells

    Energy Technology Data Exchange (ETDEWEB)

    Fernandez, S., E-mail: susanamaria.fernandez@ciemat.es [CIEMAT, Departamento de Energias Renovables, Madrid (Spain); Abril, O. de [ISOM and Departamento de Fisica Aplicada, Escuela Tecnica Superior de Ingenieros de Telecomunicacion, Universidad Politecnica de Madrid, Ciudad Universitaria s/n, 28040 Madrid (Spain); Naranjo, F.B. [Grupo de Ingenieria Fotonica, Universidad de Alcala, Departamento de Electronica, Alcala de Henares, Madrid (Spain); Gandia, J.J. [CIEMAT, Departamento de Energias Renovables, Madrid (Spain)

    2012-04-02

    0.5 μm-thick aluminum-doped zinc oxide (ZnO:Al) films were deposited at 100 °C on polyethylene terephthalate substrates by radio frequency magnetron sputtering. The as-deposited films were compact and dense, showing grain sizes of 32.0 ± 6.4 nm and resistivities of (8.5 ± 0.7) × 10⁻⁴ Ω cm. The average transmittance in the visible wavelength range of the ZnO:Al/PET structure was around 77%. The capability of a novel two-step chemical etching using diluted NH4Cl aqueous solution to achieve efficiently textured surfaces for light trapping was analyzed. The results indicated that both the aqueous solution and the etching method proved appropriate for obtaining etched surfaces with a surface roughness of 32 ± 5 nm, haze factors at 500 nm of 9%, and light scattering at angles up to 50°. To validate these results, a commercially available ITO-coated PET substrate was used for comparison.

  13. Use of a screening method to determine excipients which optimize the extent and stability of supersaturated drug solutions and application of this system to solid formulation design.

    Science.gov (United States)

    Vandecruys, Roger; Peeters, Jef; Verreck, Geert; Brewster, Marcus E

    2007-09-05

    Assessing the effect of excipients on the ability to attain and maintain supersaturation of drug-based solutions may provide useful information for the design of solid formulations. Judicious selection of materials that affect either the extent or the stability of supersaturating drug delivery systems may be enabling for poorly soluble drug candidates or other difficult-to-formulate compounds. The technique suggested herein is aimed at providing a screening protocol to allow preliminary assessment of these factors based on small to moderate amounts of drug substance. A series of excipients were selected that may, by various mechanisms, affect supersaturation, including pharmaceutical polymers such as HPMC and PVP, surfactants such as Polysorbate 20, Cremophor RH40 and TPGS, and hydrophilic cyclodextrins such as HPβCD. Using a co-solvent based method and 25 drug candidates, the data suggested, on the whole, that the surfactants and the selected cyclodextrin seemed to best augment the extent of supersaturation but had variable benefits as stabilizers, while the pharmaceutical polymers had a useful effect on supersaturation stability but were less helpful in increasing the extent of supersaturation. Using these data, a group of simple solid dosage forms were prepared and tested in the dog for one of the drug candidates. Excipients that gave the best extent and stability of the formed supersaturated solution in the screening assay also gave the highest oral bioavailability in the dog.

  14. Acclimatization to extreme heat

    Science.gov (United States)

    Warner, M. E.; Ganguly, A. R.; Bhatia, U.

    2017-12-01

    Heat extremes throughout the globe, as well as in the United States, are expected to increase. These heat extremes have been shown to impact human health, resulting in some of the highest levels of lives lost as compared with similar natural disasters. But in order to inform decision makers and best understand future mortality and morbidity, adaptation and mitigation must be considered. Defined as the ability for individuals or society to change behavior and/or adapt physiologically, acclimatization encompasses the gradual adaptation that occurs over time. Therefore, this research aims to account for acclimatization to extreme heat by using a hybrid methodology that incorporates future air conditioning use and installation patterns with future temperature-related time series data. While previous studies have not accounted for energy usage patterns and market saturation scenarios, we integrate such factors to compare the impact of air conditioning as a tool for acclimatization, with a particular emphasis on mortality within vulnerable communities.

  15. Extremely deformable structures

    CERN Document Server

    2015-01-01

    Recently, a new research stimulus has derived from the observation that soft structures, such as biological systems, but also rubber and gel, may work in a post critical regime, where elastic elements are subject to extreme deformations, though still exhibiting excellent mechanical performances. This is the realm of ‘extreme mechanics’, to which this book is addressed. The possibility of exploiting highly deformable structures opens new and unexpected technological possibilities. In particular, the challenge is the design of deformable and bi-stable mechanisms which can reach superior mechanical performances and can have a strong impact on several high-tech applications, including stretchable electronics, nanotube serpentines, deployable structures for aerospace engineering, cable deployment in the ocean, but also sensors and flexible actuators and vibration absorbers. Readers are introduced to a variety of interrelated topics involving the mechanics of extremely deformable structures, with emphasis on ...

  16. Statistics of Extremes

    KAUST Repository

    Davison, Anthony C.

    2015-04-10

    Statistics of extremes concerns inference for rare events. Often the events have never yet been observed, and their probabilities must therefore be estimated by extrapolation of tail models fitted to available data. Because data concerning the event of interest may be very limited, efficient methods of inference play an important role. This article reviews this domain, emphasizing current research topics. We first sketch the classical theory of extremes for maxima and threshold exceedances of stationary series. We then review multivariate theory, distinguishing asymptotic independence and dependence models, followed by a description of models for spatial and spatiotemporal extreme events. Finally, we discuss inference and describe two applications. Animations illustrate some of the main ideas. © 2015 by Annual Reviews. All rights reserved.
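
    The threshold-exceedance approach mentioned in the review can be sketched as follows: fit a generalized Pareto distribution to excesses over a high threshold and extrapolate to a return level beyond the observed data. The data are simulated and the threshold choice is illustrative.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
data = rng.gumbel(loc=10.0, scale=2.0, size=20 * 365)   # ~20 "years" of daily maxima

threshold = np.quantile(data, 0.95)
excesses = data[data > threshold] - threshold

# Fit the generalized Pareto distribution to the excesses (location fixed at 0)
shape, _, scale = stats.genpareto.fit(excesses, floc=0)

# m-observation return level: the value exceeded on average once every m observations
def return_level(m, threshold, shape, scale, exceed_rate):
    return threshold + (scale / shape) * ((m * exceed_rate) ** shape - 1.0)

exceed_rate = excesses.size / data.size
level_100yr = return_level(100 * 365, threshold, shape, scale, exceed_rate)
print(f"shape={shape:.3f}, scale={scale:.3f}, 100-year return level ≈ {level_100yr:.2f}")
```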

  17. Resilience Design Patterns: A Structured Approach to Resilience at Extreme Scale

    International Nuclear Information System (INIS)

    Engelmann, Christian; Hukerikar, Saurabh

    2017-01-01

    Reliability is a serious concern for future extreme-scale high-performance computing (HPC) systems. Projections based on the current generation of HPC systems and technology roadmaps suggest the prevalence of very high fault rates in future systems. While the HPC community has developed various resilience solutions, application-level techniques as well as system-based solutions, the solution space remains fragmented. There are no formal methods and metrics to integrate the various HPC resilience techniques into composite solutions, nor are there methods to holistically evaluate the adequacy and efficacy of such solutions in terms of their protection coverage, and their performance & power efficiency characteristics. Additionally, few of the current approaches are portable to newer architectures and software environments that will be deployed on future systems. In this paper, we develop a structured approach to the design, evaluation and optimization of HPC resilience using the concept of design patterns. A design pattern is a general repeatable solution to a commonly occurring problem. We identify the problems caused by various types of faults, errors and failures in HPC systems and the techniques used to deal with these events. Each well-known solution that addresses a specific HPC resilience challenge is described in the form of a pattern. We develop a complete catalog of such resilience design patterns, which may be used by system architects, system software and tools developers, application programmers, as well as users and operators as essential building blocks when designing and deploying resilience solutions. We also develop a design framework that enhances a designer's understanding of the opportunities for integrating multiple patterns across layers of the system stack and the important constraints during implementation of the individual patterns. It is also useful for defining mechanisms and interfaces to coordinate flexible fault management across

  18. Adventure and Extreme Sports.

    Science.gov (United States)

    Gomez, Andrew Thomas; Rao, Ashwin

    2016-03-01

    Adventure and extreme sports often involve unpredictable and inhospitable environments, high velocities, and stunts. These activities vary widely and include sports like BASE jumping, snowboarding, kayaking, and surfing. Increasing interest and participation in adventure and extreme sports warrants understanding by clinicians to facilitate prevention, identification, and treatment of injuries unique to each sport. This article covers alpine skiing and snowboarding, skateboarding, surfing, bungee jumping, BASE jumping, and whitewater sports with emphasis on epidemiology, demographics, general injury mechanisms, specific injuries, chronic injuries, fatality data, and prevention. Overall, most injuries are related to overuse, trauma, and environmental or microbial exposure. Copyright © 2016 Elsevier Inc. All rights reserved.

  19. Sci-Sat AM: Radiation Dosimetry and Practical Therapy Solutions - 01: Optimization of an organic field effect transistor for radiation dosimetry measurements

    Energy Technology Data Exchange (ETDEWEB)

    Syme, Alasdair [Dept of Radiation Oncology, Dalhousie University, QEII Health Sciences Centre (Canada)

    2016-08-15

    Purpose: To use Monte Carlo simulations to optimize the design of an organic field effect transistor (OFET) to maximize water-equivalence across the diagnostic and therapeutic photon energy ranges. Methods: DOSXYZnrc was used to simulate transport of mono-energetic photon beams through OFETs. Dose was scored in the dielectric region of devices and used for evaluating the response of the device relative to water. Two designs were considered: 1. a bottom-gate device on a substrate of polyethylene terephthalate (PET) with an aluminum gate, a dielectric layer of either PMMA or CYTOP (a fluorocarbon) and an organic semiconductor (pentacene). 2. a symmetric bilayer design was employed in which two polymer layers (PET and CYTOP) were deposited both below the gate and above the semiconductor to improve water-equivalence and reduce directional dependence. The relative thickness of the layers was optimized to maximize water-equivalence. Results: Without the bilayer, water-equivalence was diminished relative to OFETs with the symmetric bilayer at low photon energies (below 80 keV). The bilayer’s composition was designed to have one layer with an effective atomic number larger than that of water and the other with an effective atomic number lower than that of water. For the particular materials used in this study, a PET layer 0.1mm thick coupled with a CYTOP layer of 900 nm provided a device with a water-equivalence within 3% between 20 keV and 5 MeV. Conclusions: organic electronic devices hold tremendous potential as water-equivalent dosimeters that could be used in a wide range of applications without recalibration.

  20. Artificial neural network-genetic algorithm based optimization for the adsorption of methylene blue and brilliant green from aqueous solution by graphite oxide nanoparticle.

    Science.gov (United States)

    Ghaedi, M; Zeinali, N; Ghaedi, A M; Teimuori, M; Tashkhourian, J

    2014-05-05

    In this study, graphite oxide (GO) nanoparticles were synthesized according to the Hummers method and subsequently used for the removal of methylene blue (MB) and brilliant green (BG). Detailed information about the structure and physicochemical properties of GO was obtained by different techniques such as XRD and FTIR analysis. The influence of solution pH, initial dye concentration, contact time and adsorbent dosage was examined in batch mode, and optimum conditions were set as pH = 7.0, 2 mg of GO and 10 min contact time. Fitting equilibrium isotherm models to the adsorption capacities of GO showed that the Langmuir model gave the best representation of the experimental data, with maximum adsorption capacities of 476.19 and 416.67 for the MB and BG dyes in single solution. The analysis of adsorption rate at various stirring times shows that adsorption of both dyes followed a pseudo-second-order kinetic model in combination with the intraparticle diffusion model. Subsequently, the adsorption data were modeled with an artificial neural network to evaluate and obtain the real conditions for fast and efficient removal of the dyes. A three-layer artificial neural network (ANN) model is applicable for accurate prediction of dye removal percentage from aqueous solution by GO, based on 336 experimental data points. The network was trained using the experimental data obtained at optimum pH with different GO amounts (0.002-0.008 g) and 5-40 mg/L of both dyes over contact times of 0.5-30 min. The ANN model was able to predict the removal efficiency with the Levenberg-Marquardt algorithm (LMA), a linear transfer function (purelin) at the output layer and a tangent sigmoid transfer function (tansig) at the hidden layer with 10 and 11 neurons for the MB and BG dyes, respectively. The minimum mean squared error (MSE) of 0.0012 and coefficient of determination (R²) of 0.982 were found for prediction and modeling of MB removal, while the respective value for BG was the
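
    The kind of three-layer network the abstract describes (a tanh hidden layer and a linear output) can be sketched with scikit-learn; note that MLPRegressor trains with L-BFGS or Adam rather than the Levenberg-Marquardt algorithm used in the study, and the training data below are randomly generated placeholders for adsorbent dose, dye concentration, and contact time.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import MinMaxScaler
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)
n = 336                                             # same order as the study's dataset size
X = np.column_stack([
    rng.uniform(0.002, 0.008, n),                   # adsorbent dose (g), placeholder range
    rng.uniform(5, 40, n),                          # initial dye concentration (mg/L)
    rng.uniform(0.5, 30, n),                        # contact time (min)
])
# Placeholder response surface standing in for measured removal percentage
y = 100 * (1 - np.exp(-400 * X[:, 0] * X[:, 2] / X[:, 1])) + rng.normal(0, 2, n)

scaler = MinMaxScaler()
X_train, X_test, y_train, y_test = train_test_split(
    scaler.fit_transform(X), y, test_size=0.2, random_state=0)

# Three-layer network: inputs -> 10 tanh hidden neurons -> linear output
model = MLPRegressor(hidden_layer_sizes=(10,), activation="tanh",
                     solver="lbfgs", max_iter=5000, random_state=0)
model.fit(X_train, y_train)
print("R^2 on held-out data:", round(model.score(X_test, y_test), 3))
```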

  1. An Indirect Simulation-Optimization Model for Determining Optimal TMDL Allocation under Uncertainty

    Directory of Open Access Journals (Sweden)

    Feng Zhou

    2015-11-01

    Full Text Available An indirect simulation-optimization model framework with enhanced computational efficiency and risk-based decision-making capability was developed to determine optimal total maximum daily load (TMDL) allocation under uncertainty. To convert the traditional direct simulation-optimization model into our indirect equivalent model framework, we proposed a two-step strategy: (1) application of interval regression equations derived by a Bayesian recursive regression tree (BRRT v2) algorithm, which approximates the original hydrodynamic and water-quality simulation models and accurately quantifies the inherent nonlinear relationship between nutrient load reductions and the credible interval of algal biomass with a given confidence interval; and (2) incorporation of the calibrated interval regression equations into an uncertain optimization framework, which is further converted to our indirect equivalent framework by the enhanced-interval linear programming (EILP) method and provides approximate-optimal solutions at various risk levels. The proposed strategy was applied to the Swift Creek Reservoir’s nutrient TMDL allocation (Chesterfield County, VA) to identify the minimum nutrient load allocations required from eight sub-watersheds to ensure compliance with user-specified chlorophyll criteria. Our results indicated that the BRRT-EILP model could identify critical sub-watersheds faster than the traditional model and required smaller reductions of nutrient loadings than traditional stochastic simulation and trial-and-error (TAE) approaches. This suggests that our proposed framework performs better in optimal TMDL development than traditional simulation-optimization models and provides extreme and non-extreme tradeoff analysis under uncertainty for risk-based decision making.
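
    The full model couples BRRT-derived interval regression equations with an enhanced-interval linear program, which is beyond a short example. The sketch below only illustrates the underlying allocation idea as a deliberately simplified, deterministic LP: minimize the cost of nutrient load reductions across sub-watersheds subject to a linear water-quality response constraint. All coefficients, bounds and the linear response assumption are hypothetical, not values from the study.

```python
# Simplified deterministic analogue of a load-allocation LP (not the EILP model from the paper).
# Decision variables: nutrient load reductions x_i (kg/yr) for each of eight sub-watersheds.
# Minimize total reduction cost subject to a linear chlorophyll-response constraint.
# All coefficients below are hypothetical.
import numpy as np
from scipy.optimize import linprog

cost = np.array([1.0, 1.2, 0.8, 1.5, 1.1, 0.9, 1.3, 1.0])                      # $ per kg reduced
response = np.array([0.020, 0.015, 0.030, 0.010, 0.025, 0.018, 0.022, 0.012])  # ug/L chl-a per kg
required_drop = 4.0    # ug/L chlorophyll-a improvement needed to meet the criterion
max_reduction = 300.0  # kg/yr upper bound per sub-watershed

# linprog uses A_ub @ x <= b_ub, so negate the ">= required_drop" constraint.
res = linprog(c=cost,
              A_ub=-response.reshape(1, -1), b_ub=[-required_drop],
              bounds=[(0, max_reduction)] * len(cost), method="highs")
print(res.x, res.fun)   # per-watershed reductions and minimum total cost
```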

  2. The Engineering for Climate Extremes Partnership

    Science.gov (United States)

    Holland, G. J.; Tye, M. R.

    2014-12-01

    Hurricane Sandy and the recent floods in Thailand have demonstrated not only how sensitive the urban environment is to the impact of severe weather, but also the associated global reach of the ramifications. These events, together with other growing extreme weather impacts and the increasing interdependence of global commercial activities, point towards a growing vulnerability to weather and climate extremes. The Engineering for Climate Extremes Partnership brings academia, industry and government together with the goal of encouraging joint activities aimed at developing new, robust, and well-communicated responses to this increasing vulnerability. Integral to the approach is the concept of 'graceful failure', in which flexible designs are adopted that protect against failure by combining engineering or network strengths with a plan for efficient and rapid recovery if and when they fail. Such an approach enables optimal planning for both known future scenarios and their assessed uncertainty.

  3. Stellar extreme ultraviolet astronomy

    International Nuclear Information System (INIS)

    Cash, W.C. Jr.

    1978-01-01

    The design, calibration, and launch of a rocket-borne imaging telescope for extreme ultraviolet astronomy are described. The telescope, which employed diamond-turned grazing incidence optics and a ranicon detector, was launched November 19, 1976, from the White Sands Missile Range. The telescope performed well and returned data on several potential stellar sources of extreme ultraviolet radiation. Upper limits ten to twenty times more sensitive than previously available were obtained for the extreme ultraviolet flux from the white dwarf Sirius B. These limits fall a factor of seven below the flux predicted for the star and demonstrate that the temperature of Sirius B is not 32,000 K as previously measured, but is below 30,000 K. The new upper limits also rule out the photosphere of the white dwarf as the source of the recently reported soft x-rays from Sirius. Two other white dwarf stars, Feige 24 and G191-B2B, were observed. Upper limits on the flux at 300 Å were interpreted as lower limits on the interstellar hydrogen column densities to these stars. The lower limits indicate interstellar hydrogen densities of greater than 0.02 cm⁻³. Four nearby stars (Sirius, Procyon, Capella, and Mirzam) were observed in a search for intense low-temperature coronae or extended chromospheres. No extreme ultraviolet radiation from these stars was detected, and upper limits to their coronal emission measures are derived

  4. Extremity x-ray

    Science.gov (United States)

    ... page: //medlineplus.gov/ency/article/003461.htm ... in the body. Risks: There is low-level radiation exposure. X-rays are monitored and regulated to provide the ...

  5. Extremity perfusion for sarcoma

    NARCIS (Netherlands)

    Hoekstra, Harald Joan

    2008-01-01

    For more than 50 years, the technique of extremity perfusion has been explored in the limb salvage treatment of local, recurrent, and multifocal sarcomas. The "discovery" of tumor necrosis factor-α in combination with melphalan was a real breakthrough in the treatment of primarily irresectable

  6. Statistics of Local Extremes

    DEFF Research Database (Denmark)

    Larsen, Gunner Chr.; Bierbooms, W.; Hansen, Kurt Schaldemose

    2003-01-01

    A theoretical expression for the probability density function associated with local extremes of a stochastic process is presented. The expression is based on the lower four statistical moments and a bandwidth parameter. The theoretical expression is subsequently verified by comparison with simulated...

  7. Translational research to improve the treatment of severe extremity injuries.

    Science.gov (United States)

    Brown, Kate V; Penn-Barwell, J G; Rand, B C; Wenke, J C

    2014-06-01

    Severe extremity injuries are the most significant injuries sustained in combat. Despite optimal clinical management, non-union and infection remain common complications. In a concerted effort to dovetail research efforts, there has been a collaboration between the UK and USA, with British military surgeons conducting translational studies under the auspices of the US Institute of Surgical Research. This paper describes 3 years of work. A variety of studies were conducted using, and developing, a previously validated rat femur critical-sized defect model. The timing of surgical debridement and irrigation, different types of irrigants, and different means of delivering antibiotics and growth factors for infection control and to promote bone healing were investigated. Early debridement and irrigation were independently shown to reduce infection. Normal saline was the optimal irrigant, superior to disinfectant solutions. A biodegradable gel demonstrated superior antibiotic delivery compared with standard polymethylmethacrylate beads. A polyurethane scaffold was shown to have the ability to deliver both antibiotics and growth factors. The importance of early transit times to Role 3 capabilities for definitive surgical care has been underlined. Novel and superior methods of antibiotic and growth factor delivery, compared with current clinical standards of care, have been shown. There is the potential for translation to clinical studies to promote infection control and bone healing in these devastating injuries. Published by the BMJ Publishing Group Limited.

  8. Removal of Penicillin G by combination of sonolysis and Photocatalytic (sonophotocatalytic) process from aqueous solution: process optimization using RSM (Response Surface Methodology).

    Science.gov (United States)

    Almasi, Ali; Dargahi, Abdollah; Mohamadi, Mitra; Biglari, Hamed; Amirian, Farhad; Raei, Mehdi

    2016-09-01

    Penicillin G (PG) is used extensively for a variety of infectious diseases. Generally, when antibiotics are introduced into the food chain, they pose a threat to the environment and can harm health. The aim of the present study was the removal of Penicillin G from aqueous solution through an integrated system of UV/ZnO and UV/WO3 with ultrasound pretreatment. In this descriptive-analytical work on the removal of Penicillin G from aqueous solution, four significant variables, contact time (60-120 min), Penicillin G concentration (50-150 mg/L), ZnO dose (200-400 mg/L), and WO3 dose (100-200 mg/L), were investigated. Experiments were performed in a 1 L Pyrex batch reactor with an artificial 100 W medium-pressure mercury UV lamp, coupled with ultrasound (100 W, 40 kHz) for PG pre-treatment. Chemical Oxygen Demand (COD) was selected to follow the performance of the photocatalytic process and sonolysis. The experiments were based on a Central Composite Design (CCD) and analyzed by Response Surface Methodology (RSM). A mathematical model of the process was designed according to the proposed degradation scheme. The results showed that the maximum removal of PG occurred in the ultrasonic/UV/WO3 system in the presence of 50 mg/L WO3 and a contact time of 120 minutes. In addition, an increase in the PG concentration caused a decrease in COD removal, whereas an increase in the initial catalyst concentration increased COD removal. The maximum COD removal (91.3%) was achieved with 200 mg/L WO3 and 400 mg/L ZnO, a contact time of 120 minutes, and an antibiotic concentration of 50 mg/L. All of the variables were found to have a significant effect on process efficiency. The research data supported the conclusion that the combination of the advanced oxidation processes of sonolysis and photocatalysis (sonophotocatalysis) is an applicable and environmentally friendly process, which can preferably be applied extensively.
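
    The study analyzes a central composite design with response surface methodology, i.e., a full quadratic model is fitted to the coded factors and used to locate the optimum. The sketch below shows such a quadratic fit by ordinary least squares for just two coded factors; the design points and COD-removal responses are hypothetical, not the paper's data.

```python
# Minimal sketch of an RSM-style quadratic fit for two coded factors (x1, x2).
# The design points and COD-removal responses below are hypothetical, not the paper's data.
import numpy as np

# Coded factor levels from a small central composite design and measured responses (%)
X = np.array([[-1, -1], [1, -1], [-1, 1], [1, 1],
              [-1.41, 0], [1.41, 0], [0, -1.41], [0, 1.41], [0, 0]])
y = np.array([55, 70, 62, 85, 50, 78, 58, 72, 80], dtype=float)

x1, x2 = X[:, 0], X[:, 1]
# Full quadratic model: y = b0 + b1*x1 + b2*x2 + b12*x1*x2 + b11*x1^2 + b22*x2^2
A = np.column_stack([np.ones_like(x1), x1, x2, x1 * x2, x1**2, x2**2])
coeffs, *_ = np.linalg.lstsq(A, y, rcond=None)
print("b0, b1, b2, b12, b11, b22 =", np.round(coeffs, 2))
```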

  9. NEMO. A novel techno-economic tool suite for simulating and optimizing solutions for grid integration of electric vehicles and charging stations

    Energy Technology Data Exchange (ETDEWEB)

    Erge, Thomas; Stillahn, Thies; Dallmer-Zerbe, Kilian; Wille-Haussmann, Bernhard [Frauenhofer Institut for Solar Energy Systems ISE, Freiburg (Germany)

    2013-07-01

    With the increasing use of electric vehicles (EVs), grid operators need to predict energy flows depending on electromobility use profiles in order to adjust grid infrastructure and operation control accordingly. Tools and methodologies are required to characterize grid problems resulting from the interconnection of EVs with the grid. The simulation and optimization tool suite NEMO (Novel E-MObility grid model) was developed within a European research project and is currently being tested using realistic showcases. It is a combination of three professional tools. One of the tools aims at combined techno-economic design and operation, primarily modeling plants on contracts or the spot market while at the same time participating in balancing markets. The second tool is designed for planning grid extension or reinforcement, while the third tool is mainly used to quickly discover potential conflicts of grid operation approaches through load flow analysis. The tool suite is used to investigate real showcases in Denmark, Germany and the Netherlands. First studies show that significant alleviation of stress on distribution grid lines could be achieved by few but intelligent restrictions to EV charging procedures.

  10. Degradation of the fungicide carbendazim in aqueous solutions with UV/TiO{sub 2} process: Optimization, kinetics and toxicity studies

    Energy Technology Data Exchange (ETDEWEB)

    Saien, J. [Department of Applied Chemistry, Bu-Ali Sina University, Hamedan 65174 (Iran, Islamic Republic of)], E-mail: saien@basu.ac.ir; Khezrianjoo, S. [Department of Applied Chemistry, Bu-Ali Sina University, Hamedan 65174 (Iran, Islamic Republic of)

    2008-09-15

    An attempt was made to investigate the potential of the UV-photocatalytic process in the presence of TiO2 particles for the degradation of carbendazim (C9H9N3O2), a fungicide with high worldwide consumption but considered a 'priority hazard substance' by the Water Framework Directive of the European Commission (WFDEC). A circulating upflow photo-reactor was employed, and the influence of catalyst concentration, pH and temperature was investigated. The results showed that degradation of this fungicide can be achieved both by UV irradiation alone and by the UV/TiO2 process; however, the latter provides much better results. Accordingly, degradation of more than 90% of the fungicide was achieved by applying the optimal operational conditions of 70 mg/L of catalyst, natural pH of 6.73 and ambient temperature of 25 °C after 75 min of irradiation. Under these mild conditions, the initial rate of degradation can be described well by the Langmuir-Hinshelwood kinetic model. Toxicological assessments of the obtained samples were also performed by measuring the mycelium growth inhibition of the Fusarium oxysporum fungus on PDA medium. The results indicate that the kinetics of degradation and toxicity are in reasonably good agreement, mainly after 45 min of irradiation, confirming the effectiveness of the photocatalytic process.
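
    The abstract states that the initial degradation rate follows the Langmuir-Hinshelwood model, r0 = k_r K C0 / (1 + K C0). A minimal sketch of estimating k_r and K from initial-rate data is given below; the concentration and rate values are hypothetical, not measurements from the study.

```python
# Minimal sketch: fitting the Langmuir-Hinshelwood initial-rate expression
#   r0 = k_r * K * C0 / (1 + K * C0)
# to (initial concentration, initial rate) pairs. Values are hypothetical.
import numpy as np
from scipy.optimize import curve_fit

def lh_rate(c0, k_r, K):
    return k_r * K * c0 / (1.0 + K * c0)

c0 = np.array([5, 10, 20, 30, 40, 50], dtype=float)   # initial conc., mg/L (hypothetical)
r0 = np.array([0.21, 0.35, 0.52, 0.61, 0.66, 0.70])    # initial rate, mg/(L min) (hypothetical)

(k_r, K), _ = curve_fit(lh_rate, c0, r0, p0=[1.0, 0.05])
print(f"k_r ~ {k_r:.2f} mg/(L min), K ~ {K:.3f} L/mg")
```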

  11. NEMO. A novel techno-economic tool suite for simulating and optimizing solutions for grid integration of electric vehicles and charging stations

    International Nuclear Information System (INIS)

    Erge, Thomas; Stillahn, Thies; Dallmer-Zerbe, Kilian; Wille-Haussmann, Bernhard

    2013-01-01

    With the increasing use of electric vehicles (EVs), grid operators need to predict energy flows depending on electromobility use profiles in order to adjust grid infrastructure and operation control accordingly. Tools and methodologies are required to characterize grid problems resulting from the interconnection of EVs with the grid. The simulation and optimization tool suite NEMO (Novel E-MObility grid model) was developed within a European research project and is currently being tested using realistic showcases. It is a combination of three professional tools. One of the tools aims at combined techno-economic design and operation, primarily modeling plants on contracts or the spot market while at the same time participating in balancing markets. The second tool is designed for planning grid extension or reinforcement, while the third tool is mainly used to quickly discover potential conflicts of grid operation approaches through load flow analysis. The tool suite is used to investigate real showcases in Denmark, Germany and the Netherlands. First studies show that significant alleviation of stress on distribution grid lines could be achieved by few but intelligent restrictions to EV charging procedures.

  12. Limitations and pitfalls of climate change impact analysis on urban rainfall extremes

    DEFF Research Database (Denmark)

    Willems, P.; Olsson, J.; Arnbjerg-Nielsen, Karsten

    Under the umbrella of the IWA/IAHR Joint Committee on Urban Drainage, the International Working Group on Urban Rainfall (IGUR) has reviewed existing methodologies for the analysis of long-term historical and future trends in urban rainfall extremes and their effects on urban drainage systems, due...... to anthropogenic climate change. Current practices have several limitations and pitfalls, which are important to be considered by trend or climate change impact modellers and users of trend/impact results. Climate change may well be the driver that ensures that changes in urban drainage paradigms are identified...... and suitable solutions implemented. Design and optimization of urban drainage infrastructure considering climate change impacts and co-optimizing with other objectives will become ever more important to keep our cities liveable into the future....

  13. Parsimonious Wavelet Kernel Extreme Learning Machine

    Directory of Open Access Journals (Sweden)

    Wang Qin

    2015-11-01

    Full Text Available In this study, a parsimonious scheme for a wavelet kernel extreme learning machine (named PWKELM) was introduced by combining wavelet theory and a parsimonious algorithm with the kernel extreme learning machine (KELM). In the wavelet analysis, bases that are localized in time and frequency were used to represent various signals effectively. The wavelet kernel extreme learning machine (WELM) thereby maximized its capability to capture the essential features in “frequency-rich” signals. The proposed parsimonious algorithm incorporated significant wavelet kernel functions via iteration by virtue of the Householder matrix, thus producing a sparse solution that eased the computational burden and improved numerical stability. The experimental results on a synthetic dataset and a gas furnace instance demonstrated that the proposed PWKELM is efficient and feasible in terms of improving generalization accuracy and real-time performance.
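
    The closed-form solution is what makes kernel ELM attractive: given a kernel matrix K over the training inputs and a regularization constant C, the output weights are beta = (I/C + K)^(-1) y. The sketch below implements that plain KELM regression step with an ordinary RBF kernel; the wavelet kernel and the Householder-based parsimonious selection described in the paper are not reproduced here.

```python
# Minimal kernel-ELM regression sketch (plain RBF kernel; the paper's wavelet kernel
# and Householder-based parsimonious selection are not reproduced).
import numpy as np

def rbf_kernel(A, B, gamma=1.0):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def kelm_fit(X, y, C=100.0, gamma=1.0):
    K = rbf_kernel(X, X, gamma)
    # Closed-form KELM solution: beta = (I/C + K)^(-1) y
    return np.linalg.solve(np.eye(len(X)) / C + K, y)

def kelm_predict(X_train, beta, X_new, gamma=1.0):
    return rbf_kernel(X_new, X_train, gamma) @ beta

# Toy usage: learn y = sin(x) from noisy samples
rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(60, 1))
y = np.sin(X[:, 0]) + 0.05 * rng.standard_normal(60)
beta = kelm_fit(X, y)
print(kelm_predict(X, beta, np.array([[0.5]])))   # should be close to sin(0.5) ~ 0.48
```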

  14. Extreme commutative quantum observables are sharp

    International Nuclear Information System (INIS)

    Heinosaari, Teiko; Pellonpaeae, Juha-Pekka

    2011-01-01

    It is well known that, in the description of quantum observables, positive operator valued measures (POVMs) generalize projection valued measures (PVMs) and also turn out to be more optimal in many tasks. We show that a commutative POVM is an extreme point in the convex set of all POVMs if and only if it is a PVM. This result implies that non-commutativity is a necessary ingredient to overcome the limitations of PVMs.

  15. An Efficacious Multi-Objective Fuzzy Linear Programming Approach for Optimal Power Flow Considering Distributed Generation.

    Science.gov (United States)

    Warid, Warid; Hizam, Hashim; Mariun, Norman; Abdul-Wahab, Noor Izzri

    2016-01-01

    This paper proposes a new formulation for the multi-objective optimal power flow (MOOPF) problem for meshed power networks considering distributed generation. An efficacious multi-objective fuzzy linear programming (MFLP) optimization algorithm is proposed to solve the aforementioned problem with and without considering the distributed generation (DG) effect. A combination of objectives is considered for simultaneous optimization, including power loss, voltage stability, and shunt capacitor MVAR reserve. Fuzzy membership functions for these objectives are designed with extreme targets, whereas the inequality constraints are treated as hard constraints. The multi-objective fuzzy optimal power flow (OPF) formulation is converted into a crisp OPF within a successive linear programming (SLP) framework and solved using an efficient interior point method (IPM). To test the efficacy of the proposed approach, simulations are performed on the IEEE 30-bus and IEEE 118-bus test systems. The MFLP optimization is solved for several optimization cases. The obtained results are compared with those presented in the literature. A unique solution with high satisfaction of the assigned targets is obtained. Results demonstrate the effectiveness of the proposed MFLP technique in terms of solution optimality and rapid convergence. Moreover, the results indicate that using the optimal DG location with the MFLP algorithm provides the solution with the highest quality.
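
    The core of the fuzzy LP idea is a max-min (max-lambda) formulation: each objective gets a linear membership function between its aspiration and worst-acceptable values, and the common satisfaction level lambda is maximized subject to the hard constraints. The toy sketch below applies that formulation to two generic linear objectives with SciPy's linprog; the coefficients, targets and hard constraint are hypothetical and this is not the OPF model from the paper.

```python
# Toy max-min (max-lambda) fuzzy LP: two linear objectives f1, f2 to be minimized,
# each given a linear membership mu_i = (f_i_max - f_i(x)) / (f_i_max - f_i_min),
# and the common satisfaction level lambda is maximized. Coefficients are hypothetical;
# this is not the OPF formulation from the paper.
import numpy as np
from scipy.optimize import linprog

f1 = np.array([2.0, 3.0])          # objective 1 coefficients
f2 = np.array([4.0, 1.0])          # objective 2 coefficients
f1_min, f1_max = 4.0, 10.0         # aspiration / worst-acceptable values for f1
f2_min, f2_max = 3.0, 9.0          # ... and for f2

# Variables: x1, x2, lam.  Maximize lam  <=>  minimize -lam.
c = [0.0, 0.0, -1.0]
# mu_i >= lam  <=>  f_i(x) + lam*(f_i_max - f_i_min) <= f_i_max
A_ub = [[f1[0], f1[1], f1_max - f1_min],
        [f2[0], f2[1], f2_max - f2_min],
        [-1.0, -1.0, 0.0]]          # hard constraint: x1 + x2 >= 2
b_ub = [f1_max, f2_max, -2.0]
res = linprog(c, A_ub=A_ub, b_ub=b_ub,
              bounds=[(0, None), (0, None), (0, 1)], method="highs")
x1, x2, lam = res.x
print(f"x = ({x1:.2f}, {x2:.2f}), satisfaction lambda = {lam:.2f}")
```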

  16. Stochastic optimization methods

    CERN Document Server

    Marti, Kurt

    2005-01-01

    Optimization problems arising in practice involve random parameters. For the computation of robust optimal solutions, i.e., optimal solutions that are insensitive to random parameter variations, deterministic substitute problems are needed. Based on the distribution of the random data, and using decision-theoretical concepts, optimization problems under stochastic uncertainty are converted into deterministic substitute problems. Due to the occurring probabilities and expectations, approximative solution techniques must be applied. Deterministic and stochastic approximation methods and their analytical properties are provided: Taylor expansion, regression and response surface methods, probability inequalities, First Order Reliability Methods, convex approximation/deterministic descent directions/efficient points, stochastic approximation methods, and differentiation of probability and mean value functions. Convergence results of the resulting iterative solution procedures are given.

  17. Adaptive surrogate model based multiobjective optimization for coastal aquifer management

    Science.gov (United States)

    Song, Jian; Yang, Yun; Wu, Jianfeng; Wu, Jichun; Sun, Xiaomin; Lin, Jin

    2018-06-01

    In this study, a novel surrogate-model-assisted multiobjective memetic algorithm (SMOMA) is developed for optimal pumping strategies in large-scale coastal groundwater problems. The proposed SMOMA integrates an efficient data-driven surrogate model with an improved non-dominated sorting genetic algorithm-II (NSGAII) that employs a local search operator to accelerate its convergence. The surrogate model, based on a Kernel Extreme Learning Machine (KELM), is developed and evaluated as an approximate simulator to generate the patterns of regional groundwater flow and salinity levels in coastal aquifers, thereby reducing the huge computational burden. The KELM model is adaptively trained during the evolutionary search to satisfy the desired fidelity level of the surrogate, so that it inhibits error accumulation in forecasting and converges correctly to the true Pareto-optimal front. The proposed methodology is then applied to large-scale coastal aquifer management in Baldwin County, Alabama. The objectives of minimizing the saltwater mass increase and maximizing the total pumping rate in the coastal aquifers are considered. The optimal solutions achieved by the proposed adaptive surrogate model are compared against those obtained from a one-shot surrogate model and from the original simulation model. The adaptive surrogate model not only improves the prediction accuracy of Pareto-optimal solutions compared with the one-shot surrogate model, but also maintains a quality of Pareto-optimal solutions equivalent to that of NSGAII coupled with the original simulation model, while retaining the advantage of surrogate models in reducing the computational burden, with time savings of up to 94%. This study shows that the proposed methodology is a computationally efficient and promising tool for multiobjective optimization of coastal aquifer management.

  18. ERP SOLUTIONS FOR SMEs

    Directory of Open Access Journals (Sweden)

    TUTUNEA MIHAELA FILOFTEIA

    2012-09-01

    Full Text Available The integration of activities and business processes, as well as their optimization, brings the perspective of profitable growth and creates significant competitive advantages in any company. The adoption of ERP integrated software solutions, from the SMEs’ perspective, must be considered a very important management decision in the medium and long term. ERP solutions, along with transparent and optimized management of all internal processes, also offer an intra- and inter-company collaborative platform, which allows a rapid expansion of activities towards e-business and mobile-business environments. This material introduces ERP solutions for SMEs from the perspective of both commercial and open-source offerings; the results of a comparative analysis of the solutions on this specific market can be a useful aid to company management in making the decision to integrate business processes, using ERP as a support.

  19. Application of experimental design and derivative spectrophotometry methods in optimization and analysis of biosorption of binary mixtures of basic dyes from aqueous solutions.

    Science.gov (United States)

    Asfaram, Arash; Ghaedi, Mehrorang; Ghezelbash, Gholam Reza; Pepe, Francesco

    2017-05-01

    Simultaneous biosorption of malachite green (MG) and crystal violet (CV) on the biosorbent Yarrowia lipolytica ISF7 was studied. An appropriate derivative spectrophotometry technique was used to evaluate the concentration of each dye in binary solutions, despite significant interferences in the visible-light absorbances. The effects of pH, temperature, growth time, and initial MG and CV concentrations in batch experiments were assessed using Design of Experiments (DOE) according to a central composite second-order response surface methodology (RSM). The analysis showed that the greatest biosorption efficiency (>99% for both dyes) can be obtained at pH 7.0, T=28°C, 24 h mixing and 20 mg L⁻¹ initial concentrations of both MG and CV dyes. The ability of the constructed quadratic equation to fit the experimental data was judged on criteria such as the R² values, significance levels and lack-of-fit value, which strongly confirm its adequacy and applicability for predicting the real behavior of the system under study. The proposed model showed very high correlation coefficients (R²=0.9997 for CV and R²=0.9989 for MG), supported by the closeness of predicted and experimental values. A kinetic analysis was carried out, showing that for both dyes a pseudo-second-order kinetic model adequately describes the available data. The Langmuir isotherm model in single and binary components gives the better description of dye biosorption, with maximum monolayer biosorption capacities of 59.4 and 62.7 mg g⁻¹ in single-component and 46.4 and 50.0 mg g⁻¹ in binary-component systems for CV and MG, respectively. The surface structure of the biosorbent and the possible biosorbent-dye interactions were also evaluated by Fourier transform infrared (FT-IR) spectroscopy and scanning electron microscopy (SEM). The values of the thermodynamic parameters, including ΔG° and ΔH°, strongly confirm that the process is spontaneous and endothermic. Copyright © 2017. Published by Elsevier Inc.
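
    For the single-component case the abstract reports Langmuir-type behavior with a finite monolayer capacity. The sketch below shows how such an isotherm, q_e = q_max K_L C_e / (1 + K_L C_e), is typically fitted to equilibrium data; the concentrations and uptakes are hypothetical, not the study's measurements.

```python
# Minimal sketch: fitting the single-component Langmuir isotherm
#   q_e = q_max * K_L * C_e / (1 + K_L * C_e)
# to equilibrium data. Concentrations and uptakes below are hypothetical.
import numpy as np
from scipy.optimize import curve_fit

def langmuir(ce, q_max, K_L):
    return q_max * K_L * ce / (1.0 + K_L * ce)

ce = np.array([1, 2, 5, 10, 15, 20], dtype=float)     # equilibrium conc., mg/L (hypothetical)
qe = np.array([15, 25, 40, 50, 55, 57], dtype=float)  # uptake, mg/g (hypothetical)

(q_max, K_L), _ = curve_fit(langmuir, ce, qe, p0=[60.0, 0.2])
print(f"q_max ~ {q_max:.1f} mg/g, K_L ~ {K_L:.3f} L/mg")
```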

  20. BRAIN Journal - Solving Optimization Problems via Vortex Optimization Algorithm and Cognitive Development Optimization Algorithm

    OpenAIRE

    Ahmet Demir; Utku Kose

    2016-01-01

    ABSTRACT In the fields which require finding the most appropriate value, optimization became a vital approach to employ effective solutions. With the use of optimization techniques, many different fields in the modern life have found solutions to their real-world based problems. In this context, classical optimization techniques have had an important popularity. But after a while, more advanced optimization problems required the use of more effective techniques. At this point, Computer Sc...

  1. Solving Optimization Problems via Vortex Optimization Algorithm and Cognitive Development Optimization Algorithm

    OpenAIRE

    Ahmet Demir; Utku kose

    2017-01-01

    In the fields which require finding the most appropriate value, optimization became a vital approach to employ effective solutions. With the use of optimization techniques, many different fields in the modern life have found solutions to their real-world based problems. In this context, classical optimization techniques have had an important popularity. But after a while, more advanced optimization problems required the use of more effective techniques. At this point, Computer Science took an...

  2. Optimization theory with applications

    CERN Document Server

    Pierre, Donald A

    1987-01-01

    Optimization principles are of undisputed importance in modern design and system operation. They can be used for many purposes: optimal design of systems, optimal operation of systems, determination of performance limitations of systems, or simply the solution of sets of equations. While most books on optimization are limited to essentially one approach, this volume offers a broad spectrum of approaches, with emphasis on basic techniques from both classical and modern work.After an introductory chapter introducing those system concepts that prevail throughout optimization problems of all typ

  3. Introduction to Continuous Optimization

    DEFF Research Database (Denmark)

    Andreasson, Niclas; Evgrafov, Anton; Patriksson, Michael

    optimal solutions for continuous optimization models. The main part of the mathematical material therefore concerns the analysis and linear algebra that underlie the workings of convexity and duality, and necessary/sufficient local/global optimality conditions for continuous optimization problems. Natural...... algorithms are then developed from these optimality conditions, and their most important convergence characteristics are analyzed. The book answers many more questions of the form “Why?” and “Why not?” than “How?”. We use only elementary mathematics in the development of the book, yet are rigorous throughout...

  4. Extremes in nature

    CERN Document Server

    Salvadori, Gianfausto; Kottegoda, Nathabandu T

    2007-01-01

    This book is about the theoretical and practical aspects of the statistics of Extreme Events in Nature. Most importantly, this is the first text in which Copulas are introduced and used in Geophysics. Several topics are fully original, and show how standard models and calculations can be improved by exploiting the opportunities offered by Copulas. In addition, new quantities useful for design and risk assessment are introduced.

  5. Rhabdomyosarcoma of the extremity

    International Nuclear Information System (INIS)

    Rao, Bhaskar N

    1997-01-01

    Rhabdomyosarcoma is the most common soft tissue sarcoma, accounting for almost 55% of soft tissue sarcomas. These tumors arise from unsegmented mesoderm or primitive mesenchyme, which have the capacity to differentiate into muscle. Less than 5% occur in the first year of life. Extremity rhabdomyosarcomas are mainly seen in the adolescent years. The most common histologic subtype is the alveolar variant. Other characteristics of extremity rhabdomyosarcoma include a predilection for lymph node metastasis, a high local failure rate, and a relatively low survival rate. They often present as slow-growing painless masses; however, lesions in the hand and foot often present as painful masses, and imaging studies may show invasion of bone. Initial diagnostic approaches include needle biopsy or incisional biopsy for larger lesions. Excisional biopsy is indicated preferably for lesions less than 2.5 cm. Following this, in most instances therapy is initiated with multi-agent chemotherapy; depending upon response, the next modality may be either surgery with intent to cure or radiation therapy. Amputation of an extremity for local control is not considered in most instances. Prognostic factors determined over the years to be of significance by multivariate analysis have included age, tumor size, invasiveness, presence of either nodal or distant metastasis, and complete excision whenever feasible, with supplemental radiation therapy for local control

  6. Hooke–Jeeves Method-used Local Search in a Hybrid Global Optimization Algorithm

    Directory of Open Access Journals (Sweden)

    V. D. Sulimov

    2014-01-01

    Full Text Available Modern methods for optimization investigation of complex systems are based on developing and updating mathematical models of the systems by solving the appropriate inverse problems. The input data required for the solution are obtained from the analysis of experimentally determined characteristics of a system or a process. The sought causal characteristics include the coefficients of the equations of the mathematical model of the object, the boundary conditions, etc. The optimization approach is one of the main ways to solve such inverse problems. In the general case it is necessary to find a global extremum of a criterion function that is not everywhere differentiable. Global optimization methods are widely used in problems of identification and computational diagnosis as well as in optimal control, computed tomography, image restoration, training of neural networks, and other intelligent technologies. The increasingly complicated systems being optimized over the last decades lead to more complicated mathematical models, thereby making the solution of the corresponding extremal problems significantly more difficult. In many practical applications the problem conditions can restrict modeling. As a consequence, in inverse problems the criterion functions can be noisy and not everywhere differentiable. The presence of noise means that calculating derivatives is difficult and unreliable, which motivates the use of optimization methods that do not require derivatives. The efficiency of deterministic global optimization algorithms is significantly restricted by their dependence on the dimension of the extremal problem. When the number of variables is large, stochastic global optimization algorithms are used. However, stochastic algorithms can yield expensive solutions, and this drawback restricts their application. Developing hybrid algorithms that combine a stochastic algorithm for scanning the variable space with deterministic local search
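
    The local stage named in the title, the Hooke-Jeeves method, is a derivative-free pattern search that alternates exploratory coordinate probes with pattern (extrapolation) moves and shrinks the step size when no improvement is found. The sketch below implements that local stage only, on a standard test function; the stochastic global scan of the hybrid algorithm is not included.

```python
# Minimal Hooke-Jeeves pattern-search sketch (local stage only; the stochastic
# global scan described in the abstract is not included).
import numpy as np

def explore(f, x, step):
    """Exploratory move: probe +/- step along each coordinate, keep improvements."""
    x = x.copy()
    for i in range(len(x)):
        for delta in (step, -step):
            trial = x.copy()
            trial[i] += delta
            if f(trial) < f(x):
                x = trial
                break
    return x

def hooke_jeeves(f, x0, step=0.5, shrink=0.5, tol=1e-6, max_iter=1000):
    base = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        new = explore(f, base, step)
        if f(new) < f(base):
            # Pattern move: extrapolate along the successful direction, then re-explore.
            pattern = explore(f, new + (new - base), step)
            base = pattern if f(pattern) < f(new) else new
        else:
            step *= shrink          # no improvement: shrink the step size
            if step < tol:
                break
    return base

rosenbrock = lambda x: (1 - x[0])**2 + 100 * (x[1] - x[0]**2)**2
print(hooke_jeeves(rosenbrock, [-1.2, 1.0]))     # should approach (1, 1)
```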

  7. Chiral gravity, log gravity, and extremal CFT

    International Nuclear Information System (INIS)

    Maloney, Alexander; Song Wei; Strominger, Andrew

    2010-01-01

    We show that the linearization of all exact solutions of classical chiral gravity around the AdS3 vacuum have positive energy. Nonchiral and negative-energy solutions of the linearized equations are infrared divergent at second order, and so are removed from the spectrum. In other words, chirality is confined and the equations of motion have linearization instabilities. We prove that the only stationary, axially symmetric solutions of chiral gravity are BTZ black holes, which have positive energy. It is further shown that classical log gravity, the theory with logarithmically relaxed boundary conditions, has finite asymptotic symmetry generators but is not chiral and hence may be dual at the quantum level to a logarithmic conformal field theory (CFT). Moreover, we show that log gravity contains chiral gravity within it as a decoupled charge superselection sector. We formally evaluate the Euclidean sum over geometries of chiral gravity and show that it gives precisely the holomorphic extremal CFT partition function. The modular invariance and integrality of the expansion coefficients of this partition function are consistent with the existence of an exact quantum theory of chiral gravity. We argue that the problem of quantizing chiral gravity is the holographic dual of the problem of constructing an extremal CFT, while quantizing log gravity is dual to the problem of constructing a logarithmic extremal CFT.

  8. Instability of extremal relativistic charged spheres

    International Nuclear Information System (INIS)

    Anninos, Peter; Rothman, Tony

    2002-01-01

    With the question 'Can relativistic charged spheres form extremal black holes?' in mind, we investigate the properties of such spheres from a classical point of view. The investigation is carried out numerically by integrating the Oppenheimer-Volkoff equation for relativistic charged fluid spheres and finding interior Reissner-Nordstroem solutions for these objects. We consider both constant density and adiabatic equations of state, as well as several possible charge distributions, and examine stability by both a normal mode and an energy analysis. In all cases, the stability limit for these spheres lies between the extremal (Q=M) limit and the black hole limit (R=R+). That is, we find that charged spheres undergo gravitational collapse before they reach Q=M, suggesting that extremal Reissner-Nordstroem black holes produced by collapse are ruled out. A general proof of this statement would support a strong form of the cosmic censorship hypothesis, excluding not only stable naked singularities, but stable extremal black holes. The numerical results also indicate that although the interior mass-energy m(R) obeys the usual m/R + as Q→M. In the Appendix we also argue that Hawking radiation will not lead to an extremal Reissner-Nordstroem black hole. All our results are consistent with the third law of black hole dynamics, as currently understood

  9. Non-extremal instantons and wormholes in string theory

    NARCIS (Netherlands)

    Bergshoeff, E.; Collinucci, A.; Gran, U.; Roest, D.; Vandoren, S.

    2004-01-01

    We construct the most general non-extremal spherically symmetric instanton solution of a gravity-dilaton-axion system with SL(2,R) symmetry, for arbitrary euclidean spacetime dimension D ≥ 3. A subclass of these solutions describe completely regular wormhole geometries, whose size is determined

  10. Non-extremal instantons and wormholes in string theory

    NARCIS (Netherlands)

    Bergshoeff, E; Collinucci, A; Gran, U; Roest, D; Vandoren, S

    2005-01-01

    We construct the most general non-extremal spherically symmetric instanton solution of a gravity-dilaton-axion system with SL(2, R) symmetry, for arbitrary euclidean spacetime dimension D ≥ 3. A subclass of these solutions describe completely regular wormhole geometries, whose size is determined by

  11. Optimization of the crystal growth of the superconductor CaKFe4As4 from solution in the FeAs-CaFe2As2-KFe2As2 system

    Science.gov (United States)

    Meier, W. R.; Kong, T.; Bud'ko, S. L.; Canfield, P. C.

    2017-06-01

    Measurements of the anisotropic properties of single crystals play a crucial role in probing the physics of new materials. Determining a growth protocol that yields suitable high-quality single crystals can be particularly challenging for multicomponent compounds. Here we present a case study of how we refined a procedure to grow single crystals of CaKFe4As4 from a high-temperature, quaternary liquid solution rich in iron and arsenic ("FeAs self-flux"). Temperature-dependent resistance and magnetization measurements are emphasized, in addition to x-ray diffraction, to detect intergrown CaKFe4As4, CaFe2As2, and KFe2As2 within what appear to be single crystals. Guided by the rules of phase equilibria and these data, we adjusted growth parameters to suppress formation of the impurity phases. The resulting optimized procedure yielded phase-pure single crystals of CaKFe4As4. This optimization process offers insight into the growth of quaternary compounds and a glimpse of the four-component phase diagram in the pseudoternary FeAs-CaFe2As2-KFe2As2 system.

  12. Economically optimal thermal insulation

    Energy Technology Data Exchange (ETDEWEB)

    Berber, J.

    1978-10-01

    Exemplary calculations show that exact adherence to the demands of the thermal insulation ordinance does not lead to an economically optimal solution. This is independent of the mode of financing. Optimal thermal insulation exceeds the values given in the thermal insulation ordinance.

  13. Optimization in power systems

    Energy Technology Data Exchange (ETDEWEB)

    Costa, Geraldo R.M. da [Sao Paulo Univ., Sao Carlos, SP (Brazil). Escola de Engenharia

    1994-12-31

    This paper discusses, in part, the advantages and disadvantages of the optimal power flow. It shows some of the difficulties of implementation and proposes solutions. An analysis is made comparing the power flow program BIGPOWER/CESP and the optimal power flow program FPO/SEL, developed by the author, when applied to the CEPEL-ELETRONORTE and CESP systems. (author) 8 refs., 5 tabs.

  14. Optimization and Optimal Control

    CERN Document Server

    Chinchuluun, Altannar; Enkhbat, Rentsen; Tseveendorj, Ider

    2010-01-01

    During the last four decades there has been a remarkable development in optimization and optimal control. Due to its wide variety of applications, many scientists and researchers have paid attention to fields of optimization and optimal control. A huge number of new theoretical, algorithmic, and computational results have been observed in the last few years. This book gives the latest advances, and due to the rapid development of these fields, there are no other recent publications on the same topics. Key features: Provides a collection of selected contributions giving a state-of-the-art accou

  15. Optimal phase estimation with arbitrary a priori knowledge

    International Nuclear Information System (INIS)

    Demkowicz-Dobrzanski, Rafal

    2011-01-01

    The optimal phase-estimation strategy is derived when partial a priori knowledge of the estimated phase is available. The solution is found with the help of the most famous result from entanglement theory: the positive partial transpose criterion. The structure of the optimal measurements, estimators, and optimal probe states is analyzed. This Rapid Communication provides a unified framework bridging the gap in the literature on the subject, which until now dealt almost exclusively with two extreme cases: almost perfect knowledge (the local approach based on Fisher information) and no a priori knowledge (the global approach based on covariant measurements). Special attention is paid to a natural a priori probability distribution arising from a diffusion process.

  16. Generalized concavity in fuzzy optimization and decision analysis

    CERN Document Server

    Ramík, Jaroslav

    2002-01-01

    Convexity of sets in linear spaces, and concavity and convexity of functions, lie at the root of beautiful theoretical results that are at the same time extremely useful in the analysis and solution of optimization problems, including problems of either single objective or multiple objectives. Not all of these results rely necessarily on convexity and concavity; some of the results can guarantee that each local optimum is also a global optimum, giving these methods broader application to a wider class of problems. Hence, the focus of the first part of the book is concerned with several types of generalized convex sets and generalized concave functions. In addition to their applicability to nonconvex optimization, these convex sets and generalized concave functions are used in the book's second part, where decision-making and optimization problems under uncertainty are investigated. Uncertainty in the problem data often cannot be avoided when dealing with practical problems. Errors occur in real-world data for...

  17. Solving Optimization Problems via Vortex Optimization Algorithm and Cognitive Development Optimization Algorithm

    Directory of Open Access Journals (Sweden)

    Ahmet Demir

    2017-01-01

    Full Text Available In fields which require finding the most appropriate value, optimization has become a vital approach for obtaining effective solutions. With the use of optimization techniques, many different fields in modern life have found solutions to their real-world problems. In this context, classical optimization techniques have enjoyed considerable popularity. But after a while, more advanced optimization problems required the use of more effective techniques. At this point, Computer Science took an important role in providing software-related techniques to improve the associated literature. Today, intelligent optimization techniques based on Artificial Intelligence are widely used for optimization problems. The objective of this paper is to provide a comparative study on the employment of classical optimization solutions and Artificial Intelligence solutions, enabling readers to form an idea about the potential of intelligent optimization techniques. To this end, two recently developed intelligent optimization algorithms, the Vortex Optimization Algorithm (VOA) and the Cognitive Development Optimization Algorithm (CoDOA), have been used to solve some multidisciplinary optimization problems provided in the source book Thomas' Calculus 11th Edition, and the obtained results have been compared with classical optimization solutions

  18. Solution to the problem of optimum power flows with restrictions of safety by a modified particle optimizer; Solucion del problema de flujos de potencia optimo con restricciones de seguridad por un optimizador de particulas modificado

    Energy Technology Data Exchange (ETDEWEB)

    Onarte Yumbla, Pablo Enrique

    2008-02-15

    The objective of the power system optimal power flow (OPF) is to obtain a start-up and shut-down schedule of generating units that meets the required demand at minimum production cost, while satisfying the units' and the system's operating constraints, by adjusting the power system control variables. Nowadays, the transmission system can be considered as an independent transmission company that provides open access to all participants. Any pricing scheme should compensate transmission companies fairly for providing transmission services and allocate the entire transmission costs among all transmission users. This thesis uses a transmission pricing scheme based on a power flow tracing method to determine the actual contributions of generators to each link flow. Furthermore, the power system must be capable of withstanding the loss of any component (e.g., lines, transformers, generators) without jeopardizing the system's operation, thereby guaranteeing its security; such events are often termed probable or credible contingencies, and this problem is known as optimal power flow with security constraints (OPF-SC). Additionally, constraints on generating units' limits, minimum and maximum up- and down-times, ramp-down and ramp-up rates, voltage profile improvement, coupling constraints between the pre- and post-contingency states, and transient stability constraints have been taken into account. A particle swarm optimizer with reconstruction operators (PSO-RO) for solving the OPF-SC is proposed. To handle the constraints of the problem, reconstruction operators and an external penalty are adopted. The reconstruction operators ensure that all particles representing a possible solution satisfy the units' operating constraints, so that the search for the optimal solution takes place only within the feasible space, reducing the computing time and improving the quality of the achieved solution.
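
    The thesis couples a particle swarm optimizer with reconstruction operators and an external penalty to keep particles feasible for the OPF-SC problem. The sketch below shows only a plain global-best PSO on a bound-constrained test function, with feasibility handled by simple clipping; the reconstruction operators, penalties and power-system model are not reproduced, and all parameter values are illustrative.

```python
# Minimal global-best PSO sketch (no reconstruction operators or OPF model).
import numpy as np

def pso(f, bounds, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    dim = len(lo)
    x = rng.uniform(lo, hi, size=(n_particles, dim))     # positions
    v = np.zeros_like(x)                                 # velocities
    pbest = x.copy()                                      # personal bests
    pbest_val = np.array([f(p) for p in x])
    g = pbest[pbest_val.argmin()].copy()                  # global best
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)                        # keep particles inside bounds
        vals = np.array([f(p) for p in x])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        g = pbest[pbest_val.argmin()].copy()
    return g, pbest_val.min()

sphere = lambda p: float(np.sum(p**2))
lo, hi = np.full(5, -10.0), np.full(5, 10.0)
print(pso(sphere, (lo, hi)))          # best point should be near the origin
```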

  19. Energetic optimization of the ventilation system in modern ships

    International Nuclear Information System (INIS)

    Pérez, José Antonio; Orosa, José Antonio; Costa, Ángel Martín

    2016-01-01

    Highlights: • New solutions to optimize the ventilation system in modern ships are proposed. • Very important energy savings have been achieved. • Extreme indoor conditions in the engine room are modelled and analysed. • Critical places and hazardous tasks have been identified and analysed. • Important problems in the daily task schedule have been detected and corrected. - Abstract: The indoor ambience on board modern ships constitutes a perfect example of a severe industrial environment, where personnel are exposed to extreme working conditions, especially in the engine room. To mitigate this problem, the classical solution is the use of powerful mechanical ventilation systems with high energy consumption, which, in the case of the engine room, represent between 3.5% and 5.5% of the overall installed power. Consequently, their energetic optimization is critical, making this an interesting example of a not yet well solved thermal engineering problem in which work-risk criteria must also be considered, as the engine room is the hottest and, therefore, one of the most hazardous places on the ship. Based on a complete 3D CFD analysis of the thermal conditions in the engine room and the requirements and duties of the crew derived from their daily work schedule, the optimal ventilation requirements and the maximum tolerable working time have been established, achieving very important energy savings without any reduction in crew productivity or safety.

  20. Optimization Under Uncertainty for Wake Steering Strategies: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Quick, Julian [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Annoni, Jennifer [National Renewable Energy Laboratory (NREL), Golden, CO (United States); King, Ryan N [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Dykes, Katherine L [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Fleming, Paul A [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Ning, Andrew [Brigham Young University

    2017-05-01

    Wind turbines in a wind power plant experience significant power losses because of aerodynamic interactions between turbines. One control strategy to reduce these losses is known as 'wake steering', in which upstream turbines are yawed to direct wakes away from downstream turbines. Previous wake steering research has assumed perfect information; however, there can be significant uncertainty in many aspects of the problem, including wind inflow and various turbine measurements. Uncertainty has significant implications for the performance of wake steering strategies. Consequently, the authors formulate and solve an optimization under uncertainty (OUU) problem for finding optimal wake steering strategies in the presence of yaw angle uncertainty. The OUU wake steering strategy is demonstrated on a two-turbine test case and on the utility-scale, offshore Princess Amalia Wind Farm. When yaw angle uncertainty was accounted for in the Princess Amalia Wind Farm case, inflow-direction-specific OUU solutions produced between 0% and 1.4% more power than the deterministically optimized steering strategies, resulting in an overall annual average improvement of 0.2%. More importantly, the deterministic optimization is expected to perform worse, and with more downside risk, than the OUU result when realistic uncertainty is taken into account. Additionally, the OUU solution produces fewer extreme yaw situations than the deterministic solution.
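
    The essential OUU step is to optimize the expected plant power over the distribution of yaw-angle error rather than the power at the nominal yaw. The toy sketch below illustrates that idea with a made-up one-dimensional power-vs-yaw curve and Gaussian yaw error, comparing the deterministic and expectation-maximizing set points; it is not the wake model or wind farm used in the study, and every numerical value is hypothetical.

```python
# Toy optimization-under-uncertainty sketch: choose a yaw set point that maximizes
# expected power when the realized yaw differs from the command by Gaussian error.
# The power-vs-yaw curve below is a made-up surrogate, not the wake model from the paper.
import numpy as np

def plant_power(yaw_deg):
    """Hypothetical two-turbine plant power (arbitrary units) vs upstream yaw angle."""
    gain = 1.0 + 0.25 * np.exp(-((yaw_deg - 20.0) / 6.0) ** 2)   # wake-steering benefit near 20 deg
    loss = np.cos(np.radians(yaw_deg)) ** 3                      # upstream turbine power loss
    return gain * loss

def expected_power(yaw_cmd, sigma=5.0, n_samples=20000, seed=1):
    rng = np.random.default_rng(seed)
    realized = yaw_cmd + sigma * rng.standard_normal(n_samples)  # yaw-angle uncertainty
    return plant_power(realized).mean()

yaw_grid = np.linspace(0.0, 35.0, 351)
det_opt = yaw_grid[np.argmax(plant_power(yaw_grid))]                     # perfect-information optimum
ouu_opt = yaw_grid[np.argmax([expected_power(y) for y in yaw_grid])]     # optimum under uncertainty
print(f"deterministic optimum ~ {det_opt:.1f} deg, OUU optimum ~ {ouu_opt:.1f} deg")
```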