Robust Portfolio Optimization using CAPM Approach
Directory of Open Access Journals (Sweden)
mohsen gharakhani
2013-08-01
In this paper, a new robust model of the multi-period portfolio problem is developed. A key concern in any asset allocation problem is how to cope with uncertainty about future returns. The literature offers several approaches for this purpose, including stochastic programming and robust optimization, but applying these techniques to the multi-period portfolio problem can increase the problem size to the point that the resulting model is intractable. In this paper, a novel approach is proposed that formulates the multi-period portfolio problem as an uncertain linear program, assuming that asset returns follow the single-index factor model. Robust optimization techniques are then used to solve the problem. The performance of the proposed model is evaluated on a numerical example with simulated data.
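The single-index factor model assumed above is what keeps the uncertain linear program compact: every asset return is driven by one common market factor plus idiosyncratic noise. A minimal Python/NumPy sketch of that model (all alphas, betas, and volatilities are hypothetical, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
n_assets, n_periods = 4, 5000

# Single-index model: r_i = alpha_i + beta_i * r_m + eps_i
alpha = np.array([0.001, 0.0, 0.002, 0.0005])   # hypothetical intercepts
beta = np.array([0.8, 1.1, 0.9, 1.3])           # hypothetical market loadings
sigma_m, sigma_eps = 0.04, 0.02                  # market / idiosyncratic vol

r_m = rng.normal(0.005, sigma_m, n_periods)                      # market returns
eps = rng.normal(0.0, sigma_eps, (n_periods, n_assets))          # idiosyncratic noise
returns = alpha + np.outer(r_m, beta) + eps

# Under the model, the covariance matrix has the structured form
#   Sigma = sigma_m^2 * beta beta^T + diag(sigma_eps^2),
# i.e. 2n + 1 parameters instead of n(n+1)/2 free entries.
sigma_model = sigma_m**2 * np.outer(beta, beta) + np.diag(np.full(n_assets, sigma_eps**2))
sigma_sample = np.cov(returns, rowvar=False)
print(np.max(np.abs(sigma_model - sigma_sample)))  # small for large n_periods
```

The parameter reduction is the point: the uncertainty that must be modeled robustly collapses onto the single market factor.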
Primal and dual approaches to adjustable robust optimization
de Ruiter, Frans
2018-01-01
Robust optimization has become an important paradigm to deal with optimization under uncertainty. Adjustable robust optimization is an extension that deals with multistage problems. This thesis starts with a short but comprehensive introduction to adjustable robust optimization. Then the two
Robust and optimal control a two-port framework approach
Tsai, Mi-Ching
2014-01-01
A Two-port Framework for Robust and Optimal Control introduces an alternative approach to robust and optimal controller synthesis procedures for linear, time-invariant systems, based on the two-port system widespread in electrical engineering. The novel use of the two-port system in this context allows straightforward engineering-oriented solution-finding procedures to be developed, requiring no mathematics beyond linear algebra. A chain-scattering description provides a unified framework for constructing the stabilizing controller set and for synthesizing H2 optimal and H∞ sub-optimal controllers. Simple yet illustrative examples explain each step. A Two-port Framework for Robust and Optimal Control features: · a hands-on, tutorial-style presentation giving the reader the opportunity to repeat the designs presented and easily modify them for their own applications; · an abundance of examples illustrating the most important steps in robust and optimal design; and ...
Design optimization for cost and quality: The robust design approach
Unal, Resit
1990-01-01
Designing reliable, low cost, and operable space systems has become the key to future space operations. Designing high quality space systems at low cost is an economic and technological challenge to the designer. A systematic and efficient way to meet this challenge is a new method of design optimization for performance, quality, and cost, called Robust Design. Robust Design is an approach to design optimization. It consists of: making system performance insensitive to material and subsystem variation, thus allowing the use of less costly materials and components; making designs less sensitive to variations in the operating environment, thus improving reliability and reducing operating costs; and using a new structured development process so that engineering time is used most productively. The objective in Robust Design is to select the best combination of controllable design parameters so that the system is most robust to uncontrollable noise factors. The Robust Design methodology uses a mathematical tool called an orthogonal array, from design-of-experiments theory, to study a large number of decision variables with a relatively small number of experiments. Robust Design also uses a statistical measure of performance, called a signal-to-noise ratio, from electrical control theory, to evaluate the level of performance and the effect of noise factors. The purpose is to investigate the Robust Design methodology for improving quality and cost, demonstrate its application by the use of an example, and suggest its use as an integral part of the space system design process.
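The signal-to-noise ratios mentioned above have simple closed forms. A sketch of the three standard Taguchi variants (the example responses are hypothetical):

```python
import math

def sn_larger_the_better(y):
    """Taguchi S/N ratio (dB) when a larger response is better."""
    return -10.0 * math.log10(sum(1.0 / v**2 for v in y) / len(y))

def sn_smaller_the_better(y):
    """Taguchi S/N ratio (dB) when a smaller response is better."""
    return -10.0 * math.log10(sum(v**2 for v in y) / len(y))

def sn_nominal_the_best(y):
    """Taguchi S/N ratio (dB) targeting a nominal value: 10 log10(mean^2 / variance)."""
    n = len(y)
    mean = sum(y) / n
    var = sum((v - mean) ** 2 for v in y) / (n - 1)
    return 10.0 * math.log10(mean**2 / var)

# Two candidate parameter settings with the same mean but different spread:
a = [9.9, 10.0, 10.1]   # low variation under noise -> higher S/N
b = [8.0, 10.0, 12.0]   # high variation under noise -> lower S/N
print(sn_nominal_the_best(a) > sn_nominal_the_best(b))  # True
```

In a Robust Design study, each row of the orthogonal array is scored by such an S/N ratio, and the parameter levels maximizing it are selected.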
A robust optimization approach for energy generation scheduling in microgrids
International Nuclear Information System (INIS)
Wang, Ran; Wang, Ping; Xiao, Gaoxi
2015-01-01
Highlights: • A new uncertainty model is proposed for better describing unstable energy demands. • An optimization problem is formulated to minimize the cost of microgrid operations. • Robust optimization algorithms are developed to transform and solve the problem. • The proposed scheme can prominently reduce energy expenses. • Numerical results provide useful insights for future investment policy making. - Abstract: In this paper, a cost minimization problem is formulated to intelligently schedule energy generation for microgrids equipped with unstable renewable sources and combined heat and power (CHP) generators. In such systems, the fluctuating net demands (i.e., the electricity demands not balanced by renewable energies) and heat demands impose unprecedented challenges. To cope with the uncertain nature of net demand and heat demand, a new flexible uncertainty model is developed. Specifically, we introduce reference distributions according to predictions and field measurements, and then define uncertainty sets to confine net and heat demands. The model allows the net demand and heat demand distributions to fluctuate around their reference distributions. Another difficulty in this problem is the uncertainty of electricity market prices. We develop chance constraint approximations and robust optimization approaches to first transform and then solve the original problem. Numerical results based on real-world data evaluate the impacts of different parameters. It is shown that our energy generation scheduling strategy performs well and that the integration of combined heat and power (CHP) generators effectively reduces the system expenditure. Our research also sheds light on investment policy making for microgrids.
Reactive Robustness and Integrated Approaches for Railway Optimization Problems
DEFF Research Database (Denmark)
Haahr, Jørgen Thorlund
... to absorb or withstand unexpected events such as delays. Making robust plans is central in order to maintain a safe and timely railway operation. This thesis focuses on reactive robustness, i.e., the ability to react once a plan is rendered infeasible in operation due to disruptions. In such time... journeys helps the driver to drive efficiently and enhances robustness in a realistic (dynamic) environment. Four international scientific prizes have been awarded for distinct parts of the research during the course of this PhD project. The first prize was awarded for work during the "2014 RAS Problem Solving Competition", where a freight yard optimization problem was considered. The second junior (PhD) prize was awarded for the work performed in the "ROADEF/EURO Challenge 2014: Trains don't vanish!", where the planning of rolling stock movements at a large station was considered. An honorable mention ...
A Robust Statistics Approach to Minimum Variance Portfolio Optimization
Yang, Liusha; Couillet, Romain; McKay, Matthew R.
2015-12-01
We study the design of portfolios under a minimum risk criterion. The performance of the optimized portfolio relies on the accuracy of the estimated covariance matrix of the portfolio asset returns. For large portfolios, the number of available market returns is often of similar order to the number of assets, so that the sample covariance matrix performs poorly as a covariance estimator. Additionally, financial market data often contain outliers which, if not correctly handled, may further corrupt the covariance estimation. We address these shortcomings by studying the performance of a hybrid covariance matrix estimator based on Tyler's robust M-estimator and on Ledoit-Wolf's shrinkage estimator while assuming samples with heavy-tailed distribution. Employing recent results from random matrix theory, we develop a consistent estimator of (a scaled version of) the realized portfolio risk, which is minimized by optimizing the shrinkage intensity online. Our portfolio optimization method is shown via simulations to outperform existing methods for both synthetic and real market data.
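A sketch of the kind of hybrid estimator described: Tyler's fixed-point iteration shrunk toward the identity, with a trace normalization to fix the scale. The shrinkage intensity `rho`, the Student-t data, and the stopping rule are illustrative assumptions; the paper's contribution of tuning the intensity online via random matrix theory is not reproduced here.

```python
import numpy as np

def regularized_tyler(X, rho, n_iter=100, tol=1e-8):
    """Hybrid Tyler/shrinkage scatter estimator (sketch).
    X: (n, p) samples; rho in (0, 1] is the shrinkage intensity."""
    n, p = X.shape
    C = np.eye(p)
    for _ in range(n_iter):
        inv_C = np.linalg.inv(C)
        # weights 1 / (x_i^T C^{-1} x_i) downweight heavy-tailed outliers
        w = 1.0 / np.einsum('ij,jk,ik->i', X, inv_C, X)
        C_new = (1 - rho) * (p / n) * (X * w[:, None]).T @ X + rho * np.eye(p)
        C_new *= p / np.trace(C_new)            # normalize: trace = p
        if np.linalg.norm(C_new - C, 'fro') < tol:
            return C_new
        C = C_new
    return C

rng = np.random.default_rng(1)
X = rng.standard_t(df=3, size=(500, 5))         # heavy-tailed samples
C = regularized_tyler(X, rho=0.2)
```

The `rho * I` term guarantees the iterate stays well conditioned even when the sample size is comparable to the dimension, which is exactly the regime the abstract targets.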
A Robust Optimization Approach for Improving Service Quality
Andreas C. Soteriou; Richard B. Chase
2000-01-01
Delivering high quality service during the service encounter is central to competitive advantage in service organizations. However, achieving such high quality while controlling for costs is a major challenge for service managers. The purpose of this paper is to present an approach for addressing this challenge. The approach entails developing a model linking service process operational variables to service quality metrics to provide guidelines for service resource allocation. The approach en...
Directory of Open Access Journals (Sweden)
Qinghua Zeng
2015-07-01
This article proposes a linear matrix inequality–based robust controller design approach to implement the synchronous design of the aircraft control discipline and other disciplines, in which the variation in design parameters is treated as equivalent perturbations. Because of the complicated mapping relationships between the coefficient arrays of the aircraft motion model and the aircraft design parameters, a robust controller designed directly from the variation in these coefficient arrays is so conservative that the multidisciplinary design optimization problem becomes too difficult to solve, and even when a solution exists, the robustness of the design result is generally poor. Therefore, this article derives an uncertainty model of the disciplinary design parameters based on response surface approximation, converts the design of the robust controller into the solution of a standard linear matrix inequality, and gives a theoretically less conservative design method for a robust controller based directly on the variation in the design parameters. Furthermore, the concurrent subspace approach is applied to the multidisciplinary system with this kind of robust controller in the design loop. A multidisciplinary design optimization of a tailless aircraft is presented as an example, showing that the control discipline can be optimized synchronously with the other disciplines; in particular, the method greatly reduces the computational cost of the multidisciplinary design optimization and makes its results more robust in terms of flight performance.
A Quasi-Robust Optimization Approach for Crew Rescheduling
Veelenturf, L.P.; Potthoff, D.; Huisman, D.; Kroon, L.G.; Maroti, G.; Wagelmans, A.P.M.
2016-01-01
This paper studies the real-time crew rescheduling problem in case of large-scale disruptions. One of the greatest challenges of real-time disruption management is the unknown duration of the disruption. In this paper we present a novel approach for crew rescheduling where we deal with this
A robust optimization based approach for microgrid operation in deregulated environment
International Nuclear Information System (INIS)
Gupta, R.A.; Gupta, Nand Kishor
2015-01-01
Highlights: • RO based approach developed for optimal MG operation in deregulated environment. • Wind uncertainty modeled by interval forecasting through ARIMA model. • Proposed approach evaluated using two realistic case studies. • Proposed approach evaluated the impact of degree of robustness. • Proposed approach gives a significant reduction in operation cost of microgrid. - Abstract: Micro Grids (MGs) are clusters of Distributed Energy Resource (DER) units and loads. MGs are self-sustainable and generally operated in two modes: (1) grid connected and (2) grid isolated. In a deregulated environment, the operation of an MG is managed by the Microgrid Operator (MO) with the objective of minimizing the total cost of operation. MG management is crucial in the deregulated power system due to (i) the integration of intermittent renewable sources such as wind and Photo Voltaic (PV) generation, and (ii) volatile grid prices. This paper presents a robust optimization based approach for optimal MG management considering wind power uncertainty. A time series based Autoregressive Integrated Moving Average (ARIMA) model is used to characterize the wind power uncertainty through interval forecasting. The proposed approach is illustrated through a case study having both dispatchable and non-dispatchable generators in different modes of operation. Further, the impact of the degree of robustness on the total cost of operation of the MG is analyzed in both cases. A comparative analysis between the results obtained using the proposed approach and other existing approaches shows the strength of the proposed approach in cost minimization for MG management.
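A full ARIMA interval forecast is beyond a short sketch, but the idea can be illustrated with a hand-rolled AR(1) fit, a deliberately simplified stand-in for the paper's ARIMA model; the Gaussian 95% quantile z = 1.96 and the synthetic series are assumptions.

```python
import math
import random

def ar1_interval_forecast(series, z=1.96):
    """One-step-ahead interval forecast from an AR(1) fit -- a simplified
    stand-in for ARIMA-based interval forecasting of wind power."""
    mean = sum(series) / len(series)
    x = [s - mean for s in series]                       # de-mean
    phi = sum(a * b for a, b in zip(x[:-1], x[1:])) / sum(a * a for a in x[:-1])
    resid = [b - phi * a for a, b in zip(x[:-1], x[1:])]
    sigma = math.sqrt(sum(r * r for r in resid) / (len(resid) - 1))
    point = mean + phi * (series[-1] - mean)             # point forecast
    return point - z * sigma, point + z * sigma          # ~95% interval

# Synthetic 'wind power' deviations following an AR(1) process
random.seed(0)
state, series = 0.0, []
for _ in range(1000):
    state = 0.7 * state + random.gauss(0.0, 1.0)
    series.append(state)
lo, hi = ar1_interval_forecast(series)
```

The forecast interval [lo, hi] is exactly the kind of bound a robust optimizer can take as its wind-power uncertainty set.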
A robust approach to optimal matched filter design in ultrasonic non-destructive evaluation (NDE)
Li, Minghui; Hayward, Gordon
2017-02-01
The matched filter has been demonstrated to be a powerful yet efficient technique to enhance defect detection and imaging in ultrasonic non-destructive evaluation (NDE) of coarse grain materials, provided that the filter is properly designed and optimized. In the literature, in order to accurately approximate the defect echoes, the design utilized the real excitation signals, which made it time consuming and less straightforward to implement in practice. In this paper, we present a more robust and flexible approach to optimal matched filter design that uses simulated excitation signals; the control parameters are chosen and optimized based on the real scenario of the array transducer, the transmitter-receiver system response, and the test sample. As a result, the filter response is optimized and depends on the material characteristics. Experiments on industrial samples are conducted, and the results confirm the clear benefits of the method.
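At its core, a matched filter is correlation of the measured trace against a reference pulse. A toy sketch of that operation (the pulse shape, noise level, and echo position are invented for illustration, not taken from the paper):

```python
import random

def matched_filter(trace, template):
    """Slide the template along the trace and correlate; the output peaks
    where the trace best matches the expected echo shape."""
    m = len(template)
    return [sum(trace[i + j] * template[j] for j in range(m))
            for i in range(len(trace) - m + 1)]

# Hypothetical defect echo: a known pulse buried in noise at sample 40
random.seed(2)
template = [0.0, 0.5, 1.0, 0.5, -0.5, -1.0, -0.5, 0.0]
trace = [random.gauss(0.0, 0.1) for _ in range(100)]
for j, t in enumerate(template):
    trace[40 + j] += t

out = matched_filter(trace, template)
peak = max(range(len(out)), key=out.__getitem__)  # index of the echo
```

The paper's contribution lies in how the template is obtained (from simulated rather than measured excitation signals); the correlation step itself is as above.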
An intrinsic robust rank-one-approximation approach for currency portfolio optimization
Directory of Open Access Journals (Sweden)
Hongxuan Huang
2018-03-01
A currency portfolio is a special kind of wealth whose value fluctuates with foreign exchange rates over time, which possesses the 3Vs (volume, variety and velocity) properties of big data in the currency market. In this paper, an intrinsic robust rank one approximation (ROA) approach is proposed to maximize the value of currency portfolios over time. The main results of the paper include four parts: Firstly, under the assumptions about the currency market, the currency portfolio optimization problem is formulated as the basic model, in which there are two types of variables describing currency amounts in portfolios and the amount of each currency exchanged into another, respectively. Secondly, the rank one approximation problem and its variants are also formulated to approximate a foreign exchange rate matrix, whose performance is measured by the Frobenius norm or the 2-norm of a residual matrix. The intrinsic robustness of the rank one approximation is proved together with summarizing properties of the basic ROA problem and designing a modified power method to search for the virtual exchange rates hidden in a foreign exchange rate matrix. Thirdly, a technique for decision variable reduction is presented to attack the currency portfolio optimization. The reduced formulation is referred to as the ROA model, which keeps only variables describing currency amounts in portfolios. The optimal solution to the ROA model also induces a feasible solution to the basic model of the currency portfolio problem by integrating forex operations from the ROA model with practical forex rates. Finally, numerical examples are presented to verify the feasibility and efficiency of the intrinsic robust rank one approximation approach. They also indicate that there exists an objective measure for evaluating and optimizing currency portfolios over time, which is related to the virtual standard currency and independent of any real currency selected specially for measurement.
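The "virtual exchange rates" have a concrete interpretation: an arbitrage-free rate matrix satisfies R = v (1/v)^T, which is exactly rank one, and then R v = n v, so v is an eigenvector of R. A sketch of a power-method recovery under that idealized noise-free assumption (real rate matrices are only approximately rank one, and the example currency values are hypothetical):

```python
import numpy as np

def virtual_rates(R, n_iter=200, tol=1e-12):
    """Power-method sketch: recover hidden 'virtual exchange rates' v
    such that R ~= v (1/v)^T, i.e. the rank-one structure of R."""
    v = np.ones(R.shape[0])
    for _ in range(n_iter):
        v_new = R @ v
        v_new /= v_new[0]          # normalize against the first currency
        if np.max(np.abs(v_new - v)) < tol:
            break
        v = v_new
    return v

v_true = np.array([1.0, 0.85, 110.0, 1.3])     # hypothetical currency values
R = np.outer(v_true, 1.0 / v_true)             # consistent rate matrix: R_ij = v_i / v_j
v_hat = virtual_rates(R)
print(np.allclose(v_hat, v_true))  # True
```

With noisy rates, the same iteration converges to the dominant eigenvector, which is the best rank-one fit in the 2-norm sense the abstract mentions.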
Robust Portfolio Optimization Using Pseudodistances.
Toma, Aida; Leoni-Aubin, Samuela
2015-01-01
The presence of outliers in financial asset returns is a frequently occurring phenomenon which may lead to unreliable mean-variance optimized portfolios. This fact is due to the unbounded influence that outliers can have on the mean returns and covariance estimators that are inputs in the optimization procedure. In this paper we present robust estimators of mean and covariance matrix obtained by minimizing an empirical version of a pseudodistance between the assumed model and the true model underlying the data. We prove and discuss theoretical properties of these estimators, such as affine equivariance, B-robustness, asymptotic normality and asymptotic relative efficiency. These estimators can be easily used in place of the classical estimators, thereby providing robust optimized portfolios. A Monte Carlo simulation study and applications to real data show the advantages of the proposed approach. We study both in-sample and out-of-sample performance of the proposed robust portfolios, comparing them with some other portfolios known in the literature.
A novel non-probabilistic approach using interval analysis for robust design optimization
International Nuclear Information System (INIS)
Sun, Wei; Dong, Rongmei; Xu, Huanwei
2009-01-01
A technique for formulating the objective and constraint functions under uncertainty plays a crucial role in robust design optimization. This paper presents the first application of interval methods for reformulating the robust optimization problem. Based on interval mathematics, the original real-valued objective and constraint functions are replaced with interval-valued functions, which directly represent the upper and lower bounds of the new functions under uncertainty. The single objective function is converted into two objective functions, minimizing the mean value and the variation, and the constraint functions are reformulated with an acceptable robustness level, resulting in a bi-level mathematical model. Compared with other methods, this method is efficient and does not require a presumed probability distribution of the uncertain factors or gradient or continuity information about the constraints. Two numerical examples are used to illustrate the validity and feasibility of the presented method.
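The interval-valued reformulation rests on ordinary interval arithmetic. A minimal sketch (the objective and the uncertainty widths are invented for illustration; note that naive interval evaluation can overestimate the true range when a variable appears more than once):

```python
class Interval:
    """Minimal interval arithmetic: guaranteed lower/upper bounds."""
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi
    def __add__(self, other):
        return Interval(self.lo + other.lo, self.hi + other.hi)
    def __sub__(self, other):
        return Interval(self.lo - other.hi, self.hi - other.lo)
    def __mul__(self, other):
        p = [self.lo * other.lo, self.lo * other.hi,
             self.hi * other.lo, self.hi * other.hi]
        return Interval(min(p), max(p))

# Objective f(x, b) = x*x + b with uncertain design variable x and parameter b
x = Interval(1.9, 2.1)           # nominal 2.0 with +/-0.1 uncertainty
b = Interval(-0.5, 0.5)
f = x * x + b                    # interval-valued objective: [3.11, 4.91]
midpoint = (f.lo + f.hi) / 2     # the 'mean value' objective
radius = (f.hi - f.lo) / 2       # the 'variation' objective
print(f.lo, f.hi)
```

The bi-level model in the paper then minimizes `midpoint` and `radius` simultaneously, with constraints evaluated on their interval bounds, so no probability distribution is ever needed.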
A Data-Driven Frequency-Domain Approach for Robust Controller Design via Convex Optimization
AUTHOR|(CDS)2092751; Martino, Michele
The objective of this dissertation is to develop data-driven frequency-domain methods for designing robust controllers through the use of convex optimization algorithms. Many of today's industrial processes are becoming more complex, and deriving accurate physical models for these plants from first principles may be impossible. Even if a model is available, it may be too complex to use for an appropriate controller design. With the increased developments in the computing world, large amounts of measured data can be easily collected and stored for processing purposes. Data can also be collected and used in an on-line fashion. Thus it is sensible to make full use of these data for controller design, performance evaluation, and stability analysis. The design methods proposed in this work ensure that the dynamics of a system are captured in an experiment and avoid the problem of unmodeled dynamics associated with parametric models. The devised methods consider robust designs...
Institute of Scientific and Technical Information of China (English)
Hui Yu; Jie Deng
2017-01-01
This study examines an optimal inventory strategy when a retailer markets a product at different selling prices through a dual-channel supply chain comprising an online channel and an offline channel. Using the operating pattern of the offline-to-online (O2O) business model, we develop a partial robust optimization (PRO) model. Then, we provide a closed-form solution when only the mean and standard deviation of the online channel demand distribution are known and the offline channel demand follows a uniform distribution (partial robustness). Specifically, owing to the good structural properties of the solution, we obtain a heuristic ordering formula for the general distribution case (i.e., the offline channel demand follows a general distribution). In addition, a series of numerical experiments supports the rationality of our conjecture. Moreover, after comparing our solution with other possible policies, we conclude that the PRO approach improves the performance of incorporating the internet into an existing supply chain and is thus able to adjust the level of conservativeness of the solution. Finally, in a degenerate situation, we compare our PRO approach with a combined information approach. The results show that the PRO approach has more "robust" performance. As a result, a reasonable trade-off between robustness and performance is achieved.
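The abstract does not give the paper's closed-form solution, but distribution-free ordering with only the mean and standard deviation of demand classically builds on Scarf's max-min newsvendor rule, sketched here with hypothetical numbers:

```python
import math

def scarf_order_quantity(mu, sigma, price, cost):
    """Scarf's distribution-free (max-min) newsvendor order quantity:
    optimal against the worst demand distribution with mean mu and
    standard deviation sigma."""
    r = (price - cost) / cost          # profit-to-cost ratio
    return mu + (sigma / 2.0) * (math.sqrt(r) - 1.0 / math.sqrt(r))

# Hypothetical channel: mean demand 100, std 20, sell at 15, buy at 5
q = scarf_order_quantity(mu=100, sigma=20, price=15, cost=5)
print(round(q, 2))
```

When the margin equals the cost (r = 1), the rule orders exactly the mean; higher margins push the order above the mean, lower margins below it, which is the kind of conservativeness dial the PRO approach tunes.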
Comparison of global optimization approaches for robust calibration of hydrologic model parameters
Jung, I. W.
2015-12-01
Robustness of the calibrated parameters of hydrologic models is necessary to provide a reliable prediction of future watershed behavior under varying climate conditions. This study investigated calibration performance according to the length of the calibration period, the objective function, the hydrologic model structure, and the optimization method. To do this, the combination of three global optimization methods (i.e. SCE-UA, Micro-GA, and DREAM) and four hydrologic models (i.e. SAC-SMA, GR4J, HBV, and PRMS) was tested with different calibration periods and objective functions. Our results showed that the three global optimization methods provided similar calibration performances across calibration periods, objective functions, and hydrologic models. However, using the index of agreement, normalized root mean square error, or Nash-Sutcliffe efficiency as the objective function yielded better performance than using the correlation coefficient or percent bias. Calibration performances for calibration periods ranging from one to seven years were hard to generalize, because the four hydrologic models have different levels of complexity and different years carry different information content in the hydrological observations. Acknowledgements: This research was supported by a grant (14AWMP-B082564-01) from the Advanced Water Management Research Program funded by the Ministry of Land, Infrastructure and Transport of the Korean government.
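The objective functions compared above have standard definitions. A sketch of three of them (the observed and simulated flow values are invented for illustration):

```python
import math

def nse(obs, sim):
    """Nash-Sutcliffe efficiency: 1 is perfect, below 0 is worse than
    simply predicting the observed mean."""
    mean_obs = sum(obs) / len(obs)
    num = sum((o - s) ** 2 for o, s in zip(obs, sim))
    den = sum((o - mean_obs) ** 2 for o in obs)
    return 1.0 - num / den

def pbias(obs, sim):
    """Percent bias: 0 is ideal; the sign shows over-/under-estimation."""
    return 100.0 * sum(s - o for o, s in zip(obs, sim)) / sum(obs)

def nrmse(obs, sim):
    """Root mean square error normalized by the observed mean."""
    mean_obs = sum(obs) / len(obs)
    rmse = math.sqrt(sum((o - s) ** 2 for o, s in zip(obs, sim)) / len(obs))
    return rmse / mean_obs

obs = [3.1, 4.0, 5.2, 6.8, 4.4]   # hypothetical observed flows
sim = [3.0, 4.2, 5.0, 7.0, 4.5]   # hypothetical simulated flows
print(nse(obs, sim), pbias(obs, sim))
```

NSE and NRMSE penalize squared errors and therefore emphasize fit to peaks, while percent bias only tracks the overall water balance, one plausible reason the squared-error metrics calibrated more robustly in this study.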
An efficient global energy optimization approach for robust 3D plane segmentation of point clouds
Dong, Zhen; Yang, Bisheng; Hu, Pingbo; Scherer, Sebastian
2018-03-01
Automatic 3D plane segmentation is necessary for many applications including point cloud registration, building information model (BIM) reconstruction, simultaneous localization and mapping (SLAM), and point cloud compression. However, most of the existing 3D plane segmentation methods still suffer from low precision and recall, and inaccurate and incomplete boundaries, especially for low-quality point clouds collected by RGB-D sensors. To overcome these challenges, this paper formulates the plane segmentation problem as a global energy optimization because it is robust to high levels of noise and clutter. First, the proposed method divides the raw point cloud into multiscale supervoxels, and considers planar supervoxels and individual points corresponding to nonplanar supervoxels as basic units. Then, an efficient hybrid region growing algorithm is utilized to generate initial plane set by incrementally merging adjacent basic units with similar features. Next, the initial plane set is further enriched and refined in a mutually reinforcing manner under the framework of global energy optimization. Finally, the performances of the proposed method are evaluated with respect to six metrics (i.e., plane precision, plane recall, under-segmentation rate, over-segmentation rate, boundary precision, and boundary recall) on two benchmark datasets. Comprehensive experiments demonstrate that the proposed method obtained good performances both in high-quality TLS point clouds (i.e., http://SEMANTIC3D.NET)
Robust Approaches to Forecasting
Jennifer Castle; David Hendry; Michael P. Clements
2014-01-01
We investigate alternative robust approaches to forecasting, using a new class of robust devices, contrasted with equilibrium correction models. Their forecasting properties are derived facing a range of likely empirical problems at the forecast origin, including measurement errors, impulses, omitted variables, unanticipated location shifts and incorrectly included variables that experience a shift. We derive the resulting forecast biases and error variances, and indicate when the methods ar...
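A stylized illustration of why robust devices help after a location shift: a forecast tied to the old full-sample equilibrium keeps missing, while a random-walk device that carries forward the last observation adapts immediately. This is a toy example, not the authors' specific devices.

```python
# Toy series whose level shifts from 10 to 15 just before the forecast origin
series = [10.0] * 20 + [15.0] * 5
actual_next = 15.0

# Crude stand-in for an equilibrium-correction-style forecast (pulls back
# toward the pre-shift mean) versus a robust random-walk device.
mean_forecast = sum(series) / len(series)
random_walk_forecast = series[-1]

print(abs(mean_forecast - actual_next), abs(random_walk_forecast - actual_next))  # 4.0 0.0
```

The trade-off the paper quantifies is that such devices remove post-shift bias at the cost of a higher error variance in calm periods.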
Zaghian, Maryam; Cao, Wenhua; Liu, Wei; Kardar, Laleh; Randeniya, Sharmalee; Mohan, Radhe; Lim, Gino
2017-03-01
Robust optimization of intensity-modulated proton therapy (IMPT) takes uncertainties into account during spot weight optimization and leads to dose distributions that are resilient to uncertainties. Previous studies demonstrated benefits of linear programming (LP) for IMPT in terms of delivery efficiency by considerably reducing the number of spots required for the same quality of plans. However, a reduction in the number of spots may lead to loss of robustness. The purpose of this study was to evaluate and compare the performance, in terms of plan quality and robustness, of two robust optimization approaches using LP and nonlinear programming (NLP) models. The so-called "worst case dose" and "minmax" robust optimization approaches and the conventional planning target volume (PTV)-based optimization approach were applied to designing IMPT plans for five patients: two with prostate cancer, one with skull base cancer, and two with head and neck cancer. For each approach, both LP and NLP models were used. Thus, for each case, six sets of IMPT plans were generated and assessed: LP-PTV-based, NLP-PTV-based, LP-worst case dose, NLP-worst case dose, LP-minmax, and NLP-minmax. The four robust optimization methods behaved differently from patient to patient, and no method emerged as superior to the others in terms of nominal plan quality and robustness against uncertainties. The plans generated using LP-based robust optimization were more robust regarding patient setup and range uncertainties than were those generated using NLP-based robust optimization for the prostate cancer patients. However, the robustness of plans generated using NLP-based methods was superior for the skull base and head and neck cancer patients. Overall, LP-based methods were suitable for the less challenging cancer cases, in which all uncertainty scenarios were able to satisfy tight dose constraints, while NLP performed better in more difficult cases, in which most uncertainty scenarios were hard to meet.
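A sketch of the "worst case dose" idea for a target volume: per voxel, take the worst (lowest) dose over all uncertainty scenarios, then penalize shortfall from the prescription. The quadratic penalty and the numbers are assumptions for illustration; clinical objectives also handle overdose and organs at risk, where the worst case is instead the highest dose.

```python
def worst_case_objective(scenario_doses, prescription):
    """'Worst case dose' composite for a target volume: per voxel, take
    the lowest dose over all uncertainty scenarios (the worst cold spot),
    then apply a quadratic penalty against the prescription."""
    n_voxels = len(prescription)
    worst = [min(dose[v] for dose in scenario_doses) for v in range(n_voxels)]
    return sum((w - p) ** 2 for w, p in zip(worst, prescription))

# Two uncertainty scenarios (e.g. setup/range shifts) over a 2-voxel target
scenarios = [[60.0, 59.0], [58.0, 61.0]]
prescription = [60.0, 60.0]
print(worst_case_objective(scenarios, prescription))  # → 5.0
```

Because the per-voxel minimum mixes scenarios, this composite dose may not be physically deliverable; the "minmax" alternative instead optimizes the single worst whole-scenario objective.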
Using a Robust Design Approach to Optimize Chair Set-up in Wheelchair Sport
Directory of Open Access Journals (Sweden)
David S. Haydon
2018-02-01
Optimisation of wheelchairs for court sports is currently a difficult and time-consuming process due to the broad range of impairments across athletes, difficulties in monitoring on-court performance, and the trade-offs that set-up parameters impose on key performance variables. A robust design approach to this problem can potentially reduce the amount of testing required and therefore allow for individual on-court assessments. This study used an orthogonal design with four set-up factors (seat height, depth, and angle, as well as tyre pressure), each at three levels (current, decreased, and increased), for three elite wheelchair rugby players. Each player performed two maximal-effort sprints from a stationary position in nine different set-ups, allowing for detailed analysis of each factor and level. Whilst statistical significance is difficult to obtain due to the small sample size, meaningful differences aligning with previous research findings were identified, providing support for the use of this approach.
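A four-factor, three-level design with nine runs is the standard Taguchi L9(3^4) orthogonal array: 9 set-ups instead of the 3^4 = 81 full factorial, with every pair of factors balanced. A sketch with the study's factors mapped to its columns (the level coding is an assumption):

```python
from itertools import combinations

# Taguchi L9(3^4) array. Columns: seat height, seat depth, seat angle,
# tyre pressure; levels: 0 = decreased, 1 = current, 2 = increased.
L9 = [
    (0, 0, 0, 0), (0, 1, 1, 1), (0, 2, 2, 2),
    (1, 0, 1, 2), (1, 1, 2, 0), (1, 2, 0, 1),
    (2, 0, 2, 1), (2, 1, 0, 2), (2, 2, 1, 0),
]

def is_orthogonal(array, levels=3):
    """Every pair of columns must contain each level combination
    equally often -- the defining property of an orthogonal array."""
    cols = list(zip(*array))
    for a, b in combinations(range(len(cols)), 2):
        pairs = list(zip(cols[a], cols[b]))
        if any(pairs.count((i, j)) != len(array) // levels**2
               for i in range(levels) for j in range(levels)):
            return False
    return True

print(is_orthogonal(L9))  # → True
```

Because each factor level appears against every level of every other factor equally often, main effects can be estimated from just the nine sprint trials per player.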
An Integrated Approach to Single-Leg Airline Revenue Management: The Role of Robust Optimization
Birbil, S.I.; Frenk, J.B.G.; Gromicho, J.A.S.; Zhang, S.
2006-01-01
In this paper we introduce robust versions of the classical static and dynamic single-leg seat allocation models as analyzed by Wollmer, and by Lautenbacher and Stidham, respectively. These robust models take into account the inaccurate estimates of the underlying probability distributions. As observed in simulation experiments, it turns out that for these robust versions the variability compared to their classical counterparts is considerably reduced, with a negligible decrease of av...
Efficient reanalysis techniques for robust topology optimization
DEFF Research Database (Denmark)
Amir, Oded; Sigmund, Ole; Lazarov, Boyan Stefanov
2012-01-01
efficient robust topology optimization procedures based on reanalysis techniques. The approach is demonstrated on two compliant mechanism design problems where robust design is achieved by employing either a worst case formulation or a stochastic formulation. It is shown that the time spent on finite...
International Nuclear Information System (INIS)
Chen, J.-D.
2007-01-01
In this paper, the robust control problem of output dynamic observer-based control for a class of uncertain neutral systems with discrete and distributed time delays is considered. A linear matrix inequality (LMI) optimization approach is used to design the new output dynamic observer-based controls. Three classes of observer-based controls are proposed and the maximal perturbed bound is given. Based on the results of this paper, the constraint of matrix equality is not necessary for designing the observer-based controls. Finally, a numerical example is given to illustrate the usefulness of the proposed method.
A Robust Bayesian Approach to an Optimal Replacement Policy for Gas Pipelines
Directory of Open Access Journals (Sweden)
José Pablo Arias-Nicolás
2015-06-01
Full Text Available In the paper, we address Bayesian sensitivity issues when integrating experts’ judgments with available historical data in a case study about strategies for the preventive maintenance of low-pressure cast iron pipelines in an urban gas distribution network. We are interested in replacement priorities, as determined by the failure rates of pipelines deployed under different conditions. We relax the assumptions, made in previous papers, about the prior distributions on the failure rates and study changes in replacement priorities under different choices of generalized moment-constrained classes of priors. We focus on the set of non-dominated actions, and among them, we propose the least sensitive action as the optimal choice to rank different classes of pipelines, providing a sound approach to the sensitivity problem. Moreover, we are also interested in determining which classes have a failure rate exceeding a given acceptable value, considered as the threshold indicating no need for replacement. Graphical tools are introduced to help decision-makers determine whether pipelines are to be replaced and the corresponding priorities.
Directory of Open Access Journals (Sweden)
Hiwa Farughi
2016-05-01
Full Text Available In this paper, robust optimization of a bi-objective mathematical model in a dynamic cell formation problem considering labor utilization with uncertain data is carried out. The robust approach is used to reduce the effects of fluctuations of the uncertain parameters with regard to all the possible future scenarios. In this research, cost parameters of the cell formation and demand fluctuations are subject to uncertainty, and a mixed-integer programming (MIP) model is developed to formulate the related robust dynamic cell formation problem. The problem is then transformed into a bi-objective linear one. The first objective function seeks to minimize the relevant costs of the problem, including machine procurement and relocation costs, machine variable cost, inter-cell and intra-cell movement costs, overtime cost, labor shifting cost between cells, machine maintenance cost, and inventory holding cost. The second objective function seeks to minimize total man-hour deviations between cells, i.e., the labor utilization of the model.
An intrinsic robust rank-one-approximation approach for currency portfolio optimization
Hongxuan Huang; Zhengjun Zhang
2018-01-01
A currency portfolio is a special kind of wealth whose value fluctuates with foreign exchange rates over time, and which possesses the 3Vs (volume, variety and velocity) properties of big data in the currency market. In this paper, an intrinsic robust rank-one approximation (ROA) approach is proposed to maximize the value of currency portfolios over time. The main results of the paper include four parts: Firstly, under the assumptions about the currency market, the currency portfolio optimization problem ...
A robust stochastic approach for design optimization of air cooled heat exchangers
Energy Technology Data Exchange (ETDEWEB)
Doodman, A.R.; Fesanghary, M.; Hosseini, R. [Department of Mechanical Engineering, Amirkabir University of Technology, 424-Hafez Avenue, 15875-4413 Tehran (Iran)
2009-07-15
This study investigates the use of global sensitivity analysis (GSA) and harmony search (HS) algorithm for design optimization of air cooled heat exchangers (ACHEs) from the economic viewpoint. In order to reduce the size of the optimization problem, GSA is performed to examine the effect of the design parameters and to identify the non-influential parameters. Then HS is applied to optimize influential parameters. To demonstrate the ability of the HS algorithm a case study is considered and for validation purpose, genetic algorithm (GA) is also applied to this case study. Results reveal that the HS algorithm converges to optimum solution with higher accuracy in comparison with GA. (author)
Musthofa, M.W.; Salmah, S.; Engwerda, Jacob; Suparwanto, A.
This paper studies the robust optimal control problem for descriptor systems. We applied differential game theory to solve the disturbance attenuation problem. The robust control problem was converted into a reduced ordinary zero-sum game. Within a linear quadratic setting, we solved the problem for
Davendralingam, Navindran
Conceptual design of aircraft and of the airline network (routes) on which those aircraft fly is inextricably linked to passenger-driven demand. Many factors influence passenger demand for various Origin-Destination (O-D) city pairs, including demographics, geographic location, seasonality, socio-economic factors and, naturally, the operations of directly competing airlines. The expansion of airline operations involves the identification of appropriate aircraft to meet projected future demand. The decisions made in incorporating and subsequently allocating these new aircraft to serve air travel demand affect the inherent risk and profit potential as predicted through the airline revenue management systems. Competition between airlines then translates to latent passenger observations of the routes served between O-D pairs and ticket pricing; this in effect reflexively drives future states of demand. This thesis addresses the integrated nature of aircraft design, airline operations and passenger demand, in order to maximize future expected profits as new aircraft are brought into service. The goal of this research is to develop an approach that utilizes aircraft design, airline network design and passenger demand as a unified framework to provide better integrated design solutions in order to maximize the expected profits of an airline. This is investigated through two approaches. The first is a static model that poses the concurrent engineering paradigm above as an investment portfolio problem. Modern financial portfolio optimization techniques are used to leverage the risk of serving future projected demand using a 'yet to be introduced' aircraft against potentially generated future profits. Robust optimization methodologies are incorporated to mitigate model sensitivity and address estimation risks associated with such optimization techniques. The second extends the portfolio approach to include dynamic effects of an airline's operations. A dynamic programming approach is
Bukhari, Hassan J.
2017-12-01
In this paper a framework for robust optimization of mechanical design problems and process systems with parametric uncertainty is presented, using three different approaches. Robust optimization problems are formulated so that the optimal solution is robust, meaning it is minimally sensitive to any perturbations in the parameters. The first method uses the price-of-robustness approach, which assumes the uncertain parameters to be symmetric and bounded; the robustness of the design can be controlled by limiting the number of parameters allowed to perturb. The second method uses robust least squares to determine the optimal parameters when the data itself, rather than the parameters, is subject to perturbations. The last method manages uncertainty by restricting the perturbation on parameters to improve sensitivity, similar to Tikhonov regularization. The methods are implemented on two sets of problems: one linear and the other nonlinear. The methodology is compared with a prior method using multiple Monte Carlo simulation runs, and the comparison shows that the approach presented in this paper yields better performance.
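The Tikhonov-style ingredient of the third method can be sketched in a few lines. The following is an illustrative numpy sketch, not the paper's actual formulation; the matrix, right-hand side, and regularization weight `mu` are made-up values chosen to show the effect on an ill-conditioned system:

```python
import numpy as np

def ridge_solve(A, b, mu):
    # Tikhonov-regularized least squares: min ||A x - b||^2 + mu ||x||^2,
    # solved via the normal equations (A^T A + mu I) x = A^T b. Restricting
    # the size of x trades a little bias for much lower sensitivity to
    # perturbations in A and b.
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + mu * np.eye(n), A.T @ b)

# an ill-conditioned system: the plain least-squares solution is large and
# fragile, while the regularized one is damped
A = np.array([[1.0, 0.0], [0.0, 1e-6]])
b = np.array([1.0, 1e-6])
x_plain = np.linalg.lstsq(A, b, rcond=None)[0]
x_ridge = ridge_solve(A, b, mu=1e-4)
```

Damping the solution norm is exactly the trade described above: a small bias in the well-conditioned directions in exchange for suppressing the components that perturbations would otherwise blow up.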
Robustizing Circuit Optimization using Huber Functions
DEFF Research Database (Denmark)
Bandler, John W.; Biernacki, Radek M.; Chen, Steve H.
1993-01-01
The authors introduce a novel approach to 'robustizing' microwave circuit optimization using Huber functions, both two-sided and one-sided. They compare Huber optimization with l1, l2, and minimax methods in the presence of faults, large and small measurement errors, bad starting points, and statistical uncertainties. They demonstrate FET statistical modeling, multiplexer optimization, analog fault location, and data fitting. They extend the Huber concept by introducing a 'one-sided' Huber function for large-scale optimization. For large-scale problems, the designer often attempts, by intuition, a preliminary optimization by selecting a small number of dominant variables. It is demonstrated, through multiplexer optimization, that the one-sided Huber function can be more effective and efficient than minimax in overcoming a bad starting point.
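The two-sided Huber function blends l2 behavior for small residuals with l1 behavior for large ones. Below is a minimal sketch of Huber-robust line fitting via iteratively reweighted least squares, a standard way to minimize the Huber loss and not necessarily the authors' implementation; the data, threshold `delta`, and iteration count are illustrative assumptions:

```python
import numpy as np

def huber_weights(r, delta=1.0):
    # IRLS weights for the two-sided Huber function:
    # quadratic region for |r| <= delta (weight 1), linear beyond (delta/|r|).
    a = np.abs(r)
    w = np.ones_like(a)
    big = a > delta
    w[big] = delta / a[big]
    return w

def huber_line_fit(x, y, delta=1.0, iters=100):
    # Fit y ~ m*x + b by iteratively reweighted least squares.
    A = np.column_stack([x, np.ones_like(x)])
    beta = np.linalg.lstsq(A, y, rcond=None)[0]  # l2 starting point
    for _ in range(iters):
        r = y - A @ beta
        sw = np.sqrt(huber_weights(r, delta))
        beta = np.linalg.lstsq(A * sw[:, None], y * sw, rcond=None)[0]
    return beta

# a clean line y = 2x + 1 with one gross outlier: l2 is pulled away, Huber is not
x = np.arange(10.0)
y = 2.0 * x + 1.0
y[-1] += 50.0
m_huber, b_huber = huber_line_fit(x, y)
m_l2, b_l2 = np.linalg.lstsq(np.column_stack([x, np.ones_like(x)]), y, rcond=None)[0]
```

On this toy data the single outlier drags the plain l2 slope far from 2, while the Huber fit stays close, which is the robustness property the abstract exploits for faults and large measurement errors.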
Hussain, Lal
2018-06-01
Epilepsy is a neurological disorder caused by abnormal excitability of neurons in the brain. Brain activity is monitored through the electroencephalogram (EEG) of patients suffering from seizures in order to detect epileptic seizures, and EEG-based epilepsy detection requires effective feature extraction strategies. In this research, we extracted features using several strategies based on time- and frequency-domain characteristics, nonlinear measures, wavelet-based entropy, and a few statistical features. A deeper study was undertaken using novel machine learning classifiers, considering multiple factors: support vector machine (SVM) kernels were evaluated with respect to the multiclass kernel and box-constraint level; for K-nearest neighbors (KNN) we varied the distance metric, neighbor weights, and number of neighbors; for decision trees we tuned the parameters governing the maximum number of splits and the split criterion; and ensemble classifiers were evaluated with different ensemble methods and learning rates. Tenfold cross-validation was employed for training/testing, and performance was evaluated in terms of TPR, NPR, PPV, accuracy, and AUC. In this research, a deeper analysis was performed using diverse feature extraction strategies with robust machine learning classifiers and more advanced optimal options. The SVM linear kernel and KNN with the city-block distance metric gave the overall highest accuracy of 99.5%, higher than that obtained using the default parameters for these classifiers. Moreover, the highest separation (AUC = 0.9991, 0.9990) was obtained at different kernel scales using SVM, and KNN with the inverse-squared distance weight gave higher performance for different numbers of neighbors. Finally, in distinguishing postictal heart rate oscillations from epileptic ictal subjects, the highest performance of 100% was obtained using different machine learning classifiers.
PARAMETER COORDINATION AND ROBUST OPTIMIZATION FOR MULTIDISCIPLINARY DESIGN
Institute of Scientific and Technical Information of China (English)
HU Jie; PENG Yinghong; XIONG Guangleng
2006-01-01
A new parameter coordination and robust optimization approach for multidisciplinary design is presented. Firstly, the constraints network model is established to support engineering change, coordination and optimization. In this model, interval boxes are adopted to describe the uncertainty of design parameters quantitatively to enhance the design robustness. Secondly, the parameter coordination method is presented to solve the constraints network model, monitor the potential conflicts due to engineering changes, and obtain the consistency solution space corresponding to the given product specifications. Finally, the robust parameter optimization model is established, and a genetic algorithm is used to obtain the robust optimal parameters. An example of bogie design is analyzed to show that the scheme is effective.
Robust boosting via convex optimization
Rätsch, Gunnar
2001-12-01
In this work we consider statistical learning problems. A learning machine aims to extract information from a set of training examples such that it is able to predict the associated label on unseen examples. We consider the case where the resulting classification or regression rule is a combination of simple rules - also called base hypotheses. The so-called boosting algorithms iteratively find a weighted linear combination of base hypotheses that predict well on unseen data. We address the following issues:
o The statistical learning theory framework for analyzing boosting methods. We study learning theoretic guarantees on the prediction performance on unseen examples. Recently, large margin classification techniques emerged as a practical result of the theory of generalization, in particular Boosting and Support Vector Machines. A large margin implies a good generalization performance. Hence, we analyze how large the margins in boosting are and find an improved algorithm that is able to generate the maximum margin solution.
o How can boosting methods be related to mathematical optimization techniques? To analyze the properties of the resulting classification or regression rule, it is of high importance to understand whether and under which conditions boosting converges. We show that boosting can be used to solve large scale constrained optimization problems, whose solutions are well characterizable. To show this, we relate boosting methods to methods known from mathematical optimization, and derive convergence guarantees for a quite general family of boosting algorithms.
o How to make boosting noise robust? One of the problems of current boosting techniques is that they are sensitive to noise in the training sample. In order to make boosting robust, we transfer the soft margin idea from support vector learning to boosting. We develop theoretically motivated regularized algorithms that exhibit a high noise robustness.
o How to adapt boosting to regression problems?
Evolution strategies for robust optimization
Kruisselbrink, Johannes Willem
2012-01-01
Real-world (black-box) optimization problems often involve various types of uncertainties and noise emerging in different parts of the optimization problem. When this is not accounted for, optimization may fail or may yield solutions that are optimal in the classical strict notion of optimality, but
Robust topology optimization accounting for geometric imperfections
DEFF Research Database (Denmark)
Schevenels, M.; Jansen, M.; Lombaert, Geert
2013-01-01
… performance. As a consequence, the actual structure may be far from optimal. In this paper, a robust approach to topology optimization is presented, taking into account two types of geometric imperfections: variations of (1) the cross-sections and (2) the locations of structural elements. The first type is modeled by means of a scalar non-Gaussian random field, which is represented as a translation process. The underlying Gaussian field is simulated by means of the EOLE method. The second type of imperfections is modeled as a Gaussian vector-valued random field, which is simulated directly by means of the EOLE method. In each iteration of the optimization process, the relevant statistics of the structural response are evaluated by means of a Monte Carlo simulation. The proposed methodology is successfully applied to a test problem involving the design of a compliant mechanism (for the first type …
Robust Optimal Design of Quantum Electronic Devices
Directory of Open Access Journals (Sweden)
Ociel Morales
2018-01-01
Full Text Available We consider the optimal design of a sequence of quantum barriers, in order to manufacture an electronic device at the nanoscale such that the dependence of its transmission coefficient on the bias voltage is linear. The technique presented here is easily adaptable to other response characteristics. There are two distinguishing features of our approach. First, the transmission coefficient is determined using a semiclassical approximation, so we can explicitly compute the gradient of the objective function. Second, in contrast with earlier treatments, manufacturing uncertainties are incorporated in the model through random variables; the optimal design problem is formulated in a probabilistic setting and then solved using a stochastic collocation method. As a measure of robustness, a weighted sum of the expectation and the variance of a least-squares performance metric is considered. Several simulations illustrate the proposed technique, which achieves an improvement in accuracy of over 69% with respect to brute-force, Monte-Carlo-based methods.
Li, Zukui; Ding, Ran; Floudas, Christodoulos A.
2011-01-01
Robust counterpart optimization techniques for linear optimization and mixed integer linear optimization problems are studied in this paper. Different uncertainty sets, including those studied in literature (i.e., interval set; combined interval and ellipsoidal set; combined interval and polyhedral set) and new ones (i.e., adjustable box; pure ellipsoidal; pure polyhedral; combined interval, ellipsoidal, and polyhedral set) are studied in this work and their geometric relationship is discussed. For uncertainty in the left hand side, right hand side, and objective function of the optimization problems, robust counterpart optimization formulations induced by those different uncertainty sets are derived. Numerical studies are performed to compare the solutions of the robust counterpart optimization models and applications in refinery production planning and batch process scheduling problem are presented. PMID:21935263
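For the simplest of the uncertainty sets above, the interval (box) set with nonnegative decision variables, the robust counterpart reduces to a nominal LP with tightened coefficients. A small sketch under that assumption, using SciPy's `linprog` as an implementation choice (the solver and the toy numbers are not from the paper):

```python
import numpy as np
from scipy.optimize import linprog  # assumes SciPy is available

def robust_lp_interval(c, A, b, Ahat):
    # Interval ("box") uncertainty: each coefficient A_ij may deviate by up
    # to Ahat_ij >= 0. For x >= 0 the worst case of (A + dA) x <= b is
    # attained at A + Ahat, so the robust counterpart is just the nominal
    # LP (here: maximize c^T x) with tightened constraint coefficients.
    c = np.asarray(c, dtype=float)
    res = linprog(-c, A_ub=np.asarray(A) + np.asarray(Ahat), b_ub=b,
                  bounds=[(0, None)] * c.size, method="highs")
    return res.x, -res.fun

# maximize x1 + x2 s.t. x1 + x2 <= 10, each coefficient uncertain by +/-0.2:
# the robust constraint becomes 1.2*x1 + 1.2*x2 <= 10
x_opt, val = robust_lp_interval([1.0, 1.0], [[1.0, 1.0]], [10.0], [[0.2, 0.2]])
```

With an ellipsoidal or combined set the counterpart instead gains auxiliary variables or second-order cone terms, which is exactly where the different formulations studied in the paper diverge.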
Robust Optimization of Database Queries
Indian Academy of Sciences (India)
JAYANT
2011-07-06
Distributed Robust Optimization in Networked System.
Wang, Shengnan; Li, Chunguang
2016-10-11
In this paper, we consider a distributed robust optimization (DRO) problem, where multiple agents in a networked system cooperatively minimize a global convex objective function with respect to a global variable under the global constraints. The objective function can be represented by a sum of local objective functions. The global constraints contain some uncertain parameters which are partially known, and can be characterized by some inequality constraints. After problem transformation, we adopt the Lagrangian primal-dual method to solve this problem. We prove that the primal and dual optimal solutions of the problem are restricted in some specific sets, and we give a method to construct these sets. Then, we propose a DRO algorithm to find the primal-dual optimal solutions of the Lagrangian function, which consists of a subgradient step, a projection step, and a diffusion step, and in the projection step of the algorithm, the optimized variables are projected onto the specific sets to guarantee the boundedness of the subgradients. Convergence analysis and numerical simulations verifying the performance of the proposed algorithm are then provided. Further, for nonconvex DRO problem, the corresponding approach and algorithm framework are also provided.
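Stripped of the network, the subgradient and projection steps of such an algorithm look as follows. This single-agent sketch omits the diffusion (neighbor-averaging) step, and the objective and feasible set are toy choices of my own, not an instance from the paper:

```python
def projected_subgradient(f, subgrad, project, x0, steps=400):
    # Subgradient step with a diminishing step size, followed by projection
    # onto the feasible set to keep iterates (and hence subgradients) bounded.
    x = float(x0)
    best_x, best_f = x, f(x)
    for k in range(1, steps + 1):
        x = project(x - subgrad(x) / k ** 0.5)
        if f(x) < best_f:  # track the best feasible iterate seen so far
            best_x, best_f = x, f(x)
    return best_x

# toy instance: minimize |x - 3| over the feasible set [0, 2]; optimum x* = 2
sol = projected_subgradient(
    f=lambda x: abs(x - 3.0),
    subgrad=lambda x: -1.0 if x < 3.0 else 1.0,
    project=lambda x: min(max(x, 0.0), 2.0),
    x0=0.0,
)
```

In the distributed setting each agent runs these two steps on its local objective and then averages its iterate with its neighbors' iterates, which is the diffusion step omitted here.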
Robust structural optimization using Gauss-type quadrature formula
International Nuclear Information System (INIS)
Lee, Sang Hoon; Seo, Ki Seog; Chen, Shikui; Chen, Wei
2009-01-01
In robust design, the mean and variance of design performance are frequently used to measure the design performance and its robustness under uncertainties. In this paper, we present the Gauss-type quadrature formula as a rigorous method for mean and variance estimation involving arbitrary input distributions, and further extend its use to robust design optimization. One-dimensional Gauss-type quadrature formulae are constructed from the input probability distributions and utilized in the construction of multidimensional quadrature formulae such as the Tensor Product Quadrature (TPQ) formula and the Univariate Dimension Reduction (UDR) method. To improve the efficiency of using them for robust design optimization, a semi-analytic design sensitivity analysis with respect to the statistical moments is proposed. The proposed approach is applied to simple benchmark problems and to robust topology optimization of structures considering various types of uncertainty.
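For a single standard normal input, the one-dimensional mean/variance estimation step can be sketched with probabilists' Gauss-Hermite quadrature, one concrete instance of a Gauss-type formula constructed from the input distribution (the test function is an illustrative choice, not from the paper):

```python
import numpy as np
from numpy.polynomial.hermite_e import hermegauss

def gauss_mean_var(g, deg=8):
    # Probabilists' Hermite nodes/weights integrate against exp(-x^2/2);
    # dividing the weights by sqrt(2*pi) turns the quadrature sums into
    # expectations under N(0, 1). deg nodes are exact for polynomials of
    # degree up to 2*deg - 1.
    x, w = hermegauss(deg)
    w = w / np.sqrt(2.0 * np.pi)
    gx = g(x)
    mean = float(np.sum(w * gx))
    var = float(np.sum(w * gx ** 2)) - mean ** 2
    return mean, var

# g(X) = X^2 with X ~ N(0,1): exact mean is 1, exact variance is 2
mean, var = gauss_mean_var(lambda x: x ** 2)
```

The TPQ and UDR constructions mentioned above combine such one-dimensional rules across several uncertain inputs.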
Robust quantum optimizer with full connectivity.
Nigg, Simon E; Lörch, Niels; Tiwari, Rakesh P
2017-04-01
Quantum phenomena have the potential to speed up the solution of hard optimization problems. For example, quantum annealing, based on the quantum tunneling effect, has recently been shown to scale exponentially better with system size than classical simulated annealing. However, current realizations of quantum annealers with superconducting qubits face two major challenges. First, the connectivity between the qubits is limited, excluding many optimization problems from a direct implementation. Second, decoherence degrades the success probability of the optimization. We address both of these shortcomings and propose an architecture in which the qubits are robustly encoded in continuous variable degrees of freedom. By leveraging the phenomenon of flux quantization, all-to-all connectivity with sufficient tunability to implement many relevant optimization problems is obtained without overhead. Furthermore, we demonstrate the robustness of this architecture by simulating the optimal solution of a small instance of the nondeterministic polynomial-time hard (NP-hard) and fully connected number partitioning problem in the presence of dissipation.
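Number partitioning, the benchmark instance simulated above, asks for a split of a set of integers into two groups whose sums are as close as possible. A classical brute-force sketch for intuition only (no quantum ingredient):

```python
from itertools import product

def best_partition(nums):
    # Exhaustively try every assignment of each number to one of two groups
    # (encoded as a +/-1 sign) and return the smallest achievable absolute
    # difference of the two group sums. Cost is 2^n, which is why the
    # problem is a natural NP-hard benchmark for annealers.
    return min(abs(sum(s * n for s, n in zip(signs, nums)))
               for signs in product((1, -1), repeat=len(nums)))
```

For example, [4, 5, 6, 7, 8] splits perfectly into {7, 8} and {4, 5, 6}, so the best difference is 0.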
Robust optimization based upon statistical theory.
Sobotta, B; Söhn, M; Alber, M
2010-08-01
Organ movement is still the biggest challenge in cancer treatment despite advances in online imaging. Due to the resulting geometric uncertainties, the delivered dose cannot be predicted precisely at treatment planning time. Consequently, all associated dose metrics (e.g., EUD and maxDose) are random variables with a patient-specific probability distribution. The method that the authors propose makes these distributions the basis of the optimization and evaluation process. The authors start from a model of motion derived from patient-specific imaging. On a multitude of geometry instances sampled from this model, a dose metric is evaluated. The resulting pdf of this dose metric is termed outcome distribution. The approach optimizes the shape of the outcome distribution based on its mean and variance. This is in contrast to the conventional optimization of a nominal value (e.g., PTV EUD) computed on a single geometry instance. The mean and variance allow for an estimate of the expected treatment outcome along with the residual uncertainty. Besides being applicable to the target, the proposed method also seamlessly includes the organs at risk (OARs). The likelihood that a given value of a metric is reached in the treatment is predicted quantitatively. This information reveals potential hazards that may occur during the course of the treatment, thus helping the expert to find the right balance between the risk of insufficient normal tissue sparing and the risk of insufficient tumor control. By feeding this information to the optimizer, outcome distributions can be obtained where the probability of exceeding a given OAR maximum and that of falling short of a given target goal can be minimized simultaneously. The method is applicable to any source of residual motion uncertainty in treatment delivery. Any model that quantifies organ movement and deformation in terms of probability distributions can be used as basis for the algorithm. Thus, it can generate dose
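The optimization of the outcome distribution's mean and variance can be caricatured in one dimension. Everything below (the motion model, the penalty shapes, the weights) is a toy stand-in for the patient-specific models in the abstract, meant only to show a mean-plus-variance objective evaluated over sampled geometry instances:

```python
import numpy as np

rng = np.random.default_rng(0)
# sampled geometry instances from an assumed 1-D motion model (toy stand-in
# for the patient-specific imaging-derived model)
shifts = rng.normal(0.0, 0.5, size=2000)

def outcome(margin):
    # toy per-instance "dose metric": a target-coverage penalty when the
    # target (displaced by the sampled shift) falls outside the chosen
    # margin, plus a normal-tissue cost that grows with the margin
    miss = np.maximum(np.abs(shifts) - margin, 0.0)
    return miss + 0.2 * margin

def mean_var_objective(margin, lam=1.0):
    # optimize the shape of the outcome distribution via its mean and
    # variance, rather than a nominal value on a single geometry
    o = outcome(margin)
    return o.mean() + lam * o.var()

margins = np.linspace(0.0, 3.0, 301)
best_margin = min(margins, key=mean_var_objective)
```

The optimizer balances the two risks exactly as described: too small a margin inflates both the mean and the spread of the coverage penalty, too large a margin inflates the normal-tissue cost, and the mean-variance objective picks an interior compromise.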
Robust Structured Control Design via LMI Optimization
DEFF Research Database (Denmark)
Adegas, Fabiano Daher; Stoustrup, Jakob
2011-01-01
This paper presents a new procedure for discrete-time robust structured control design. Parameter-dependent nonconvex conditions for stabilizable and induced L2-norm performance controllers are solved by an iterative linear matrix inequality (LMI) optimization. A wide class of controller structures, including decentralized controllers of any order, fixed-order dynamic output feedback, and static output feedback, can be designed robust to polytopic uncertainties. Stability is proven by a parameter-dependent Lyapunov function. Numerical examples on robust stability margins show that the proposed procedure can …
Robust Design Optimization of an Aerospace Vehicle Propulsion System
Directory of Open Access Journals (Sweden)
Muhammad Aamir Raza
2011-01-01
Full Text Available This paper proposes a robust design optimization methodology for an aerospace vehicle propulsion system under design uncertainties. The approach consists of 3D geometric design coupled with complex internal ballistics, hybrid optimization, worst-case deviation, and an efficient statistical approach. The uncertainties are propagated through worst-case deviation using first-order orthogonal design matrices. The robustness assessment is measured using the framework of the mean-variance and percentile-difference approach. A parametric sensitivity analysis is carried out to analyze the effects of design-variable variation on performance parameters. A hybrid simulated annealing and pattern search approach is used as the optimizer. The results show that the objective of optimizing the mean performance and minimizing the variation of the performance parameters, in terms of thrust ratio and total impulse, can be achieved while adhering to the system constraints.
Directory of Open Access Journals (Sweden)
Alfredo Gimelli
2018-04-01
Full Text Available In recent decades, growing concerns about global warming and climate change effects have led to specific directives, especially in Europe, promoting the use of primary energy-saving techniques and renewable energy systems. The increasingly stringent requirements for carbon dioxide reduction have led to a more widespread adoption of distributed energy systems. In particular, besides renewable energy systems for power generation, one of the most effective techniques used to face the energy-saving challenges has been the adoption of polygeneration plants for combined heating, cooling, and electricity generation. This technique offers the possibility to achieve a considerable enhancement in energy and cost savings as well as a simultaneous reduction of greenhouse gas emissions. However, the use of small-scale polygeneration systems does not ensure the achievement of mandatory, but sometimes conflicting, aims without the proper sizing and operation of the plant. This paper is focused on a methodology based on vector optimization algorithms and developed by the authors for the identification of optimal polygeneration plant solutions. To this aim, a specific calculation algorithm for the study of cogeneration systems has also been developed. This paper provides, after a detailed description of the proposed methodology, some specific applications to the study of combined heat and power (CHP and organic Rankine cycle (ORC plants, thus highlighting the potential of the proposed techniques and the main results achieved.
Robust optimization methods for cardiac sparing in tangential breast IMRT
Energy Technology Data Exchange (ETDEWEB)
Mahmoudzadeh, Houra, E-mail: houra@mie.utoronto.ca [Mechanical and Industrial Engineering Department, University of Toronto, Toronto, Ontario M5S 3G8 (Canada); Lee, Jenny [Radiation Medicine Program, UHN Princess Margaret Cancer Centre, Toronto, Ontario M5G 2M9 (Canada); Chan, Timothy C. Y. [Mechanical and Industrial Engineering Department, University of Toronto, Toronto, Ontario M5S 3G8, Canada and Techna Institute for the Advancement of Technology for Health, Toronto, Ontario M5G 1P5 (Canada); Purdie, Thomas G. [Radiation Medicine Program, UHN Princess Margaret Cancer Centre, Toronto, Ontario M5G 2M9 (Canada); Department of Radiation Oncology, University of Toronto, Toronto, Ontario M5S 3S2 (Canada); Techna Institute for the Advancement of Technology for Health, Toronto, Ontario M5G 1P5 (Canada)
2015-05-15
Purpose: In left-sided tangential breast intensity modulated radiation therapy (IMRT), the heart may enter the radiation field and receive excessive radiation while the patient is breathing. The patient’s breathing pattern is often irregular and unpredictable. We verify the clinical applicability of a heart-sparing robust optimization approach for breast IMRT. We compare robust optimized plans with clinical plans at free-breathing and clinical plans at deep inspiration breath-hold (DIBH) using active breathing control (ABC). Methods: Eight patients were included in the study with each patient simulated using 4D-CT. The 4D-CT image acquisition generated ten breathing phase datasets. An average scan was constructed using all the phase datasets. Two of the eight patients were also imaged at breath-hold using ABC. The 4D-CT datasets were used to calculate the accumulated dose for robust optimized and clinical plans based on deformable registration. We generated a set of simulated breathing probability mass functions, which represent the fraction of time patients spend in different breathing phases. The robust optimization method was applied to each patient using a set of dose-influence matrices extracted from the 4D-CT data and a model of the breathing motion uncertainty. The goal of the optimization models was to minimize the dose to the heart while ensuring dose constraints on the target were achieved under breathing motion uncertainty. Results: Robust optimized plans were improved or equivalent to the clinical plans in terms of heart sparing for all patients studied. The robust method reduced the accumulated heart dose (D10cc) by up to 801 cGy compared to the clinical method while also improving the coverage of the accumulated whole breast target volume. On average, the robust method reduced the heart dose (D10cc) by 364 cGy and improved the optBreast dose (D99%) by 477 cGy. In addition, the robust method had smaller deviations from the planned dose to the
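As a concrete illustration of the kind of model this abstract describes (minimize heart dose while guaranteeing target coverage under breathing-motion uncertainty), the following toy sketch formulates a worst-case linear program over a finite set of candidate breathing probability mass functions. It is not the authors' model: the dose-influence matrices, PMFs, and prescription value are all invented for illustration.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
n_beam, n_phase = 6, 3
# hypothetical dose-influence matrices (voxels x beamlets), one per breathing phase
D_target = [rng.uniform(0.5, 1.0, (4, n_beam)) for _ in range(n_phase)]
D_heart = [rng.uniform(0.0, 0.4, (2, n_beam)) for _ in range(n_phase)]
# candidate breathing probability mass functions (fraction of time per phase)
pmfs = [np.array([0.6, 0.3, 0.1]), np.array([0.2, 0.5, 0.3]),
        np.full(3, 1.0 / 3.0)]
prescription = 1.0

# variables: beamlet weights x (>= 0) and t = worst-case max heart-voxel dose
c = np.zeros(n_beam + 1)
c[-1] = 1.0  # minimize t
A_ub, b_ub = [], []
for pmf in pmfs:
    H = sum(p * D for p, D in zip(pmf, D_heart))    # accumulated heart dose
    T = sum(p * D for p, D in zip(pmf, D_target))   # accumulated target dose
    for row in H:                                   # heart dose <= t
        A_ub.append(np.append(row, -1.0)); b_ub.append(0.0)
    for row in T:                                   # target dose >= prescription
        A_ub.append(np.append(-row, 0.0)); b_ub.append(-prescription)

res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
              bounds=[(0, None)] * (n_beam + 1))
```

The worst-case heart dose `t` is minimized while every candidate breathing pattern still satisfies the target-coverage constraints, which is the essence of the robust formulation.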
Optimal interdependence enhances robustness of complex systems
Singh, R. K.; Sinha, Sitabhra
2017-01-01
While interdependent systems have usually been associated with increased fragility, we show that strengthening the interdependence between dynamical processes on different networks can make them more robust. By coupling the dynamics of networks that in isolation exhibit catastrophic collapse with extinction of nodal activity, we demonstrate system-wide persistence of activity for an optimal range of interdependence between the networks. This is related to the appearance of attractors of the g...
Robust topology optimization accounting for spatially varying manufacturing errors
DEFF Research Database (Denmark)
Schevenels, M.; Lazarov, Boyan Stefanov; Sigmund, Ole
2011-01-01
This paper presents a robust approach for the design of macro-, micro-, or nano-structures by means of topology optimization, accounting for spatially varying manufacturing errors. The focus is on structures produced by milling or etching; in this case over- or under-etching may cause parts...... optimization problem is formulated in a probabilistic way: the objective function is defined as a weighted sum of the mean value and the standard deviation of the structural performance. The optimization problem is solved by means of a Monte Carlo method: in each iteration of the optimization scheme, a Monte...
On the relation between flexibility analysis and robust optimization for linear systems
Zhang, Qi; Grossmann, Ignacio E.; Lima, Ricardo
2016-01-01
Flexibility analysis and robust optimization are two approaches to solving optimization problems under uncertainty that share some fundamental concepts, such as the use of polyhedral uncertainty sets and the worst-case approach to guarantee
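The shared worst-case machinery mentioned in this abstract can be made concrete for the simplest polyhedral (box) uncertainty set: a constraint a^T x <= b that must hold for every a with |a_i - ā_i| <= δ_i is equivalent to ā^T x + δ^T |x| <= b. A tiny numpy check with invented numbers (not from the paper):

```python
import itertools
import numpy as np

a_bar = np.array([2.0, 1.0])   # nominal constraint coefficients
delta = np.array([0.5, 0.2])   # box half-widths of the uncertainty set
b = 10.0
x = np.array([3.0, 1.0])       # a candidate solution

# robust counterpart: worst-case left-hand side in closed form
robust_lhs = a_bar @ x + delta @ np.abs(x)

# brute force: the worst case is attained at a vertex of the box
vertices = [a_bar + np.array(s) * delta
            for s in itertools.product([-1.0, 1.0], repeat=2)]
worst = max(a @ x for a in vertices)
```

The closed form and the vertex enumeration agree, which is why box-uncertain linear constraints stay linear after robustification.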
Automatic Synthesis of Robust and Optimal Controllers
DEFF Research Database (Denmark)
Cassez, Franck; Jessen, Jan Jacob; Larsen, Kim Guldstrand
2009-01-01
In this paper, we show how to apply recent tools for the automatic synthesis of robust and near-optimal controllers for a real industrial case study. We show how to use three different classes of models and their supporting existing tools, Uppaal-TiGA for synthesis, phaver for verification......, and Simulink for simulation, in a complementary way. We believe that this case study shows that our tools have reached a level of maturity that allows us to tackle interesting and relevant industrial control problems....
Kinematically Optimal Robust Control of Redundant Manipulators
Galicki, M.
2017-12-01
This work deals with the problem of robust optimal task space trajectory tracking subject to finite-time convergence. The kinematic and dynamic equations of a redundant manipulator are assumed to be uncertain. Moreover, globally unbounded disturbances are allowed to act on the manipulator when the end-effector tracks the trajectory. Furthermore, the movement is to be accomplished in such a way as to minimize both the manipulator torques and their oscillations, thus eliminating potential robot vibrations. Based on a suitably defined task space non-singular terminal sliding vector variable and the Lyapunov stability theory, we derive a class of chattering-free robust kinematically optimal controllers, based on the estimation of the transpose Jacobian, which seem to be effective in counteracting uncertain kinematics and dynamics, unbounded disturbances, and (possible) kinematic and/or algorithmic singularities met on the robot trajectory. The numerical simulations, carried out for a redundant manipulator of SCARA type consisting of three revolute kinematic pairs and operating in a two-dimensional task space, illustrate the performance of the proposed controllers and provide comparisons with other well-known control schemes.
Forecasting exchange rates: a robust regression approach
Preminger, Arie; Franck, Raphael
2005-01-01
The least squares estimation method, as well as other ordinary estimation methods for regression models, can be severely affected by a small number of outliers, thus providing poor out-of-sample forecasts. This paper suggests a robust regression approach, based on the S-estimation method, to construct forecasting models that are less sensitive to data contamination by outliers. A robust linear autoregressive (RAR) and a robust neural network (RNN) model are estimated to study the predictabil...
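The core idea, down-weighting outlying residuals so a few contaminated points cannot dominate the fit, can be sketched with a simple iteratively reweighted least squares M-estimator. This uses Huber weights as a stand-in; the paper's S-estimator is a different, higher-breakdown estimator, and all data below are simulated purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200
x = rng.uniform(-1, 1, n)
y = 2.0 * x + 0.5 + rng.normal(0, 0.1, n)  # true slope 2.0, intercept 0.5
y[:10] += 8.0                              # contaminate 5% of points with outliers
X = np.column_stack([x, np.ones(n)])

def robust_fit(X, y, k=1.345, iters=50):
    """IRLS with Huber weights and a MAD scale estimate."""
    beta = np.linalg.lstsq(X, y, rcond=None)[0]  # ordinary LS start
    for _ in range(iters):
        r = y - X @ beta
        scale = np.median(np.abs(r - np.median(r))) / 0.6745 + 1e-12
        w = np.minimum(1.0, k * scale / np.maximum(np.abs(r), 1e-12))
        sw = np.sqrt(w)                          # weighted least squares step
        beta = np.linalg.lstsq(X * sw[:, None], y * sw, rcond=None)[0]
    return beta

beta_ols = np.linalg.lstsq(X, y, rcond=None)[0]
beta_rob = robust_fit(X, y)
```

The robust fit recovers the true coefficients closely, while the ordinary least squares intercept is pulled toward the outliers.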
Towards robust optimal design of storm water systems
Marquez Calvo, Oscar; Solomatine, Dimitri
2015-04-01
In this study the focus is on the design of a storm water or combined sewer system. Such a system should be capable of handling most storms properly, to minimize the damage caused by flooding when the system lacks the capacity to cope with rain water at peak times. This is a multi-objective optimization problem: we have to take into account the minimization of construction costs, the minimization of damage costs due to flooding, and possibly other criteria. One of the most important factors influencing the design of storm water systems is the expected amount of water to deal with. It is common for this infrastructure to be developed with the capacity to cope with events that occur once in, say, 10 or 20 years - so-called design rainfall events. However, rainfall is a random variable, and such uncertainty typically is not taken explicitly into account in optimization. Design rainfall data is based on historical rainfall records, but this data often rests on unreliable measurements or on insufficient historical information; moreover, rainfall patterns are changing regardless of the historical record. There are also other sources of uncertainty influencing design, for example, leakages in the pipes and accumulation of sediments in pipes. In the context of storm water or combined sewer system design or rehabilitation, a robust optimization technique should be able to find the best design (or rehabilitation plan) within the available budget while taking into account uncertainty in the variables that were used to design the system. In this work we consider various approaches to robust optimization proposed by various authors (Gabrel, Murat, Thiele 2013; Beyer, Sendhoff 2007) and test a novel method, ROPAR (Solomatine 2012), to analyze robustness. References Beyer, H.G., & Sendhoff, B. (2007). Robust optimization - A comprehensive survey. Comput. Methods Appl. Mech. Engrg., 3190-3218. Gabrel, V.; Murat, C., Thiele, A. (2014
Intelligent and robust optimization frameworks for smart grids
Dhansri, Naren Reddy
A smart grid implies a cyberspace real-time distributed power control system to optimally deliver electricity based on varying consumer characteristics. Although smart grids solve many contemporary problems, they give rise to new control and optimization problems with the growing role of renewable energy sources such as wind or solar energy. Given the highly dynamic nature of distributed power generation and the varying consumer demand and cost requirements, the total power output of the grid should be controlled such that the load demand is met by giving a higher priority to renewable energy sources. Hence, the power generated from renewable energy sources should be optimized while minimizing the generation from non-renewable energy sources. This research develops a demand-based automatic generation control and optimization framework for real-time smart grid operations by integrating conventional and renewable energy sources under varying consumer demand and cost requirements. Focusing on renewable energy sources, the intelligent and robust control frameworks optimize power generation by tracking the consumer demand in a closed-loop control framework, yielding superior economic and ecological benefits while circumventing nonlinear model complexities and handling uncertainties for superior real-time operations. The proposed intelligent system framework optimizes smart grid power generation for maximum economic and ecological benefit under an uncertain renewable wind energy source. The numerical results demonstrate that the proposed framework is a viable approach to integrating various energy sources for real-time smart grid implementations. The robust optimization framework results demonstrate the effectiveness of the robust controllers under bounded power plant model uncertainties and exogenous wind input excitation while maximizing economic and ecological performance objectives. Therefore, the proposed framework offers a new worst-case deterministic
Robustness Metrics: Consolidating the multiple approaches to quantify Robustness
DEFF Research Database (Denmark)
Göhler, Simon Moritz; Eifler, Tobias; Howard, Thomas J.
2016-01-01
robustness metrics; 3) Functional expectancy and dispersion robustness metrics; and 4) Probability of conformance robustness metrics. The goal was to give a comprehensive overview of robustness metrics and guidance to scholars and practitioners to understand the different types of robustness metrics...
Oracle-based online robust optimization via online learning
Ben-Tal, A.; Hazan, E.; Koren, T.; Shie, M.
2015-01-01
Robust optimization is a common optimization framework under uncertainty when problem parameters are unknown, but it is known that they belong to some given uncertainty set. In the robust optimization framework, a min-max problem is solved wherein a solution is evaluated according to its performance
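A common oracle-based alternative to the paper's online-learning scheme is the classical cutting-plane (pessimization) loop: solve the problem for a finite scenario set, ask an oracle for the worst-case parameter at the current solution, and add it as a new scenario. A toy sketch with ball uncertainty, where the oracle has a closed form (all numbers invented; this illustrates the classical scheme, not the paper's algorithm):

```python
import numpy as np
from scipy.optimize import linprog

# toy robust problem: min over the 2-simplex of max_{a in U} a @ x,
# with U a ball of radius r around a_bar
a_bar, r = np.array([1.0, 2.0]), 0.5

def pessimization_oracle(x):
    """Return the worst-case value and the maximizing scenario a."""
    nx = np.linalg.norm(x)
    return a_bar @ x + r * nx, a_bar + r * x / (nx + 1e-12)

scenarios = [a_bar.copy()]
for _ in range(20):
    # master LP: min t  s.t.  a @ x <= t for all known scenarios, x in simplex
    c = np.array([0.0, 0.0, 1.0])
    A_ub = np.array([np.append(a, -1.0) for a in scenarios])
    res = linprog(c, A_ub=A_ub, b_ub=np.zeros(len(scenarios)),
                  A_eq=[[1.0, 1.0, 0.0]], b_eq=[1.0],
                  bounds=[(0, None), (0, None), (None, None)])
    x, t = res.x[:2], res.x[2]
    wc, a_new = pessimization_oracle(x)
    if wc <= t + 1e-8:   # worst case already covered: robust optimal
        break
    scenarios.append(a_new)
```

For this instance the loop terminates after one added scenario at the robust optimum x = (1, 0) with worst-case value 1.5, matching the analytic solution min over the simplex of a_bar @ x + r * ||x||.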
Doubly Robust Estimation of Optimal Dynamic Treatment Regimes
DEFF Research Database (Denmark)
Barrett, Jessica K; Henderson, Robin; Rosthøj, Susanne
2014-01-01
We compare methods for estimating optimal dynamic decision rules from observational data, with particular focus on estimating the regret functions defined by Murphy (in J. R. Stat. Soc., Ser. B, Stat. Methodol. 65:331-355, 2003). We formulate a doubly robust version of the regret-regression approach of Almirall et al. (in Biometrics 66:131-139, 2010) and Henderson et al. (in Biometrics 66:1192-1201, 2010) and demonstrate that it is equivalent to a reduced form of Robins' efficient g-estimation procedure (Robins, in Proceedings of the Second Symposium on Biostatistics. Springer, New York, pp. 189-326, 2004). Simulation studies suggest that while the regret-regression approach is most efficient when there is no model misspecification, in the presence of misspecification the efficient g-estimation procedure is more robust. The g-estimation method can be difficult to apply in complex...
Robust Optimal Adaptive Trajectory Tracking Control of Quadrotor Helicopter
Directory of Open Access Journals (Sweden)
M. Navabi
Full Text Available This paper focuses on a robust optimal adaptive control strategy to deal with the tracking problem of a quadrotor unmanned aerial vehicle (UAV) in the presence of parametric uncertainties, actuator amplitude constraints, and unknown time-varying external disturbances. First, a Lyapunov-based indirect adaptive controller optimized by particle swarm optimization (PSO) is developed for the multi-input multi-output (MIMO) nonlinear quadrotor to prevent input constraint violation, and then the disturbance observer-based control (DOBC) technique is aggregated with the control system to attenuate the effects of disturbance generated by an exogenous system. The performance of the synthesized control method is evaluated by a new performance index function in the time domain, and the stability analysis is carried out using Lyapunov theory. Finally, illustrative numerical simulations are conducted to demonstrate the effectiveness of the presented approach in altitude and attitude tracking under several conditions, including large time-varying uncertainty, exogenous disturbance, and control input constraints.
Optimal Robust Fault Detection for Linear Discrete Time Systems
Directory of Open Access Journals (Sweden)
Nike Liu
2008-01-01
Full Text Available This paper considers robust fault-detection problems for linear discrete time systems. It is shown that the optimal robust detection filters for several well-recognized robust fault-detection problems, such as the ℋ−/ℋ∞, ℋ2/ℋ∞, and ℋ∞/ℋ∞ problems, are the same and can be obtained by solving a standard algebraic Riccati equation. Optimal filters are also derived for many other optimization criteria, and it is shown that some well-studied and seemingly sensible optimization criteria for fault-detection filter design could lead to (optimal but useless) fault-detection filters.
Extending the Scope of Robust Quadratic Optimization
Marandi, Ahmadreza; Ben-Tal, A.; den Hertog, Dick; Melenberg, Bertrand
In this paper, we derive tractable reformulations of the robust counterparts of convex quadratic and conic quadratic constraints with concave uncertainties for a broad range of uncertainty sets. For quadratic constraints with convex uncertainty, it is well-known that the robust counterpart is, in
A kriging metamodel-assisted robust optimization method based on a reverse model
Zhou, Hui; Zhou, Qi; Liu, Congwei; Zhou, Taotao
2018-02-01
The goal of robust optimization methods is to obtain a solution that is both optimum and relatively insensitive to uncertainty factors. Most existing robust optimization approaches use outer-inner nested optimization structures where a large amount of computational effort is required because the robustness of each candidate solution delivered from the outer level should be evaluated in the inner level. In this article, a kriging metamodel-assisted robust optimization method based on a reverse model (K-RMRO) is first proposed, in which the nested optimization structure is reduced into a single-loop optimization structure to ease the computational burden. Ignoring the interpolation uncertainties from kriging, K-RMRO may yield non-robust optima. Hence, an improved kriging-assisted robust optimization method based on a reverse model (IK-RMRO) is presented to take the interpolation uncertainty of kriging metamodel into consideration. In IK-RMRO, an objective switching criterion is introduced to determine whether the inner level robust optimization or the kriging metamodel replacement should be used to evaluate the robustness of design alternatives. The proposed criterion is developed according to whether or not the robust status of the individual can be changed because of the interpolation uncertainties from the kriging metamodel. Numerical and engineering cases are used to demonstrate the applicability and efficiency of the proposed approach.
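The "outer-inner nested structure" that K-RMRO is designed to avoid can be made concrete with a brute-force baseline: an outer search over candidate designs where every candidate triggers a full inner Monte Carlo robustness evaluation. The toy function, noise level, and weighting below are invented for illustration; the paper's kriging reverse model, which collapses this into a single loop, is not reproduced here.

```python
import numpy as np

def f(x):
    """Toy performance function with several local minima."""
    return np.sin(5 * x) + 0.5 * x ** 2

def robust_obj(x, sigma=0.05, n=2000, k=3.0):
    # inner level: Monte Carlo robustness evaluation of one candidate x
    rng = np.random.default_rng(0)
    samples = f(x + rng.normal(0.0, sigma, n))
    return samples.mean() + k * samples.std()

# outer level: search over candidate designs (grid search for simplicity);
# note the cost: one full inner loop per candidate
xs = np.linspace(-2.0, 2.0, 401)
robust_vals = np.array([robust_obj(x) for x in xs])
x_robust = xs[int(np.argmin(robust_vals))]
x_nominal = xs[int(np.argmin(f(xs)))]
```

The robust optimum pays a premium over the nominal optimum (its objective value is worse than the nominal minimum) in exchange for insensitivity to the uncertain variable, which is exactly the trade-off surrogate-assisted methods try to evaluate more cheaply.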
On the robust optimization to the uncertain vaccination strategy problem
International Nuclear Information System (INIS)
Chaerani, D.; Anggriani, N.; Firdaniza
2014-01-01
In order to prevent an epidemic of infectious diseases, the vaccination coverage needs to be minimized while the basic reproduction number is maintained below 1. This means that while keeping the vaccination coverage as low as possible, we still need to prevent an epidemic arising from the small number of people who are already infected. In this paper, we discuss the case of vaccination strategy in terms of minimizing vaccination coverage, when the basic reproduction number is assumed to be an uncertain parameter that lies between 0 and 1. We refer to the linear optimization model for vaccination strategy proposed by Becker and Starrzak (see [2]). Assuming that there is parameter uncertainty involved, Tanner et al. (see [9]) propose an optimal solution of the problem using stochastic programming. In this paper we discuss an alternative way of optimizing the uncertain vaccination strategy using Robust Optimization (see [3]). In this approach we assume that the parameter uncertainty lies within an ellipsoidal uncertainty set, so that the resulting robust counterpart can be solved by a polynomial-time algorithm (as guaranteed by the RO methodology). The robust counterpart model is presented.
On the robust optimization to the uncertain vaccination strategy problem
Energy Technology Data Exchange (ETDEWEB)
Chaerani, D., E-mail: d.chaerani@unpad.ac.id; Anggriani, N., E-mail: d.chaerani@unpad.ac.id; Firdaniza, E-mail: d.chaerani@unpad.ac.id [Department of Mathematics, Faculty of Mathematics and Natural Sciences, University of Padjadjaran Indonesia, Jalan Raya Bandung Sumedang KM 21 Jatinangor Sumedang 45363 (Indonesia)
2014-02-21
In order to prevent an epidemic of infectious diseases, the vaccination coverage needs to be minimized while the basic reproduction number is maintained below 1. This means that while keeping the vaccination coverage as low as possible, we still need to prevent an epidemic arising from the small number of people who are already infected. In this paper, we discuss the case of vaccination strategy in terms of minimizing vaccination coverage, when the basic reproduction number is assumed to be an uncertain parameter that lies between 0 and 1. We refer to the linear optimization model for vaccination strategy proposed by Becker and Starrzak (see [2]). Assuming that there is parameter uncertainty involved, Tanner et al. (see [9]) propose an optimal solution of the problem using stochastic programming. In this paper we discuss an alternative way of optimizing the uncertain vaccination strategy using Robust Optimization (see [3]). In this approach we assume that the parameter uncertainty lies within an ellipsoidal uncertainty set, so that the resulting robust counterpart can be solved by a polynomial-time algorithm (as guaranteed by the RO methodology). The robust counterpart model is presented.
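The tractability claim rests on a standard fact about ellipsoidal uncertainty: a linear constraint a^T x >= b holding for all a in {ā + Pu : ||u|| <= 1} is equivalent to the conic constraint ā^T x - ||P^T x|| >= b. A numpy check with invented numbers (not the paper's epidemic model; the coefficient names here are purely illustrative):

```python
import numpy as np

a_bar = np.array([0.8, 0.6])     # nominal coefficients
P = 0.1 * np.eye(2)              # shape of the ellipsoidal uncertainty set
b = 0.5                          # required level of the constraint
x = np.array([0.7, 0.5])         # a candidate solution

# robust counterpart: worst case over the ellipsoid in closed form
worst_case_lhs = a_bar @ x - np.linalg.norm(P.T @ x)

# sampled check: evaluate a @ x on many boundary points of the ellipsoid
rng = np.random.default_rng(2)
u = rng.normal(size=(5000, 2))
u /= np.linalg.norm(u, axis=1, keepdims=True)
sampled_min = ((a_bar + u @ P.T) @ x).min()
```

The closed form matches the sampled worst case, and because the robust counterpart is a second-order cone constraint, the overall model remains solvable in polynomial time.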
Self-optimizing robust nonlinear model predictive control
Lazar, M.; Heemels, W.P.M.H.; Jokic, A.; Thoma, M.; Allgöwer, F.; Morari, M.
2009-01-01
This paper presents a novel method for designing robust MPC schemes that are self-optimizing in terms of disturbance attenuation. The method employs convex control Lyapunov functions and disturbance bounds to optimize robustness of the closed-loop system on-line, at each sampling instant - a unique
Systematic and robust design of photonic crystal waveguides by topology optimization
DEFF Research Database (Denmark)
Wang, Fengwen; Jensen, Jakob Søndergaard; Sigmund, Ole
2010-01-01
on a threshold projection. The objective is formulated to minimize the maximum error between actual group indices and a prescribed group index among these three designs. Novel photonic crystal waveguide facilitating slow light with a group index of n(g) = 40 is achieved by the robust optimization approach......A robust topology optimization method is presented to consider manufacturing uncertainties in tailoring dispersion properties of photonic crystal waveguides. The under, normal and over-etching scenarios in manufacturing process are represented by dilated, intermediate and eroded designs based....... The numerical result illustrates that the robust topology optimization provides a systematic and robust design methodology for photonic crystal waveguide design....
Enhancing product robustness in reliability-based design optimization
International Nuclear Information System (INIS)
Zhuang, Xiaotian; Pan, Rong; Du, Xiaoping
2015-01-01
Different types of uncertainties need to be addressed in a product design optimization process. In this paper, the uncertainties in both product design variables and environmental noise variables are considered. The reliability-based design optimization (RBDO) is integrated with robust product design (RPD) to concurrently reduce the production cost and the long-term operation cost, including quality loss, in the process of product design. This problem leads to a multi-objective optimization with probabilistic constraints. In addition, the model uncertainties associated with a surrogate model that is derived from numerical computation methods, such as finite element analysis, is addressed. A hierarchical experimental design approach, augmented by a sequential sampling strategy, is proposed to construct the response surface of product performance function for finding optimal design solutions. The proposed method is demonstrated through an engineering example. - Highlights: • A unifying framework for integrating RBDO and RPD is proposed. • Implicit product performance function is considered. • The design problem is solved by sequential optimization and reliability assessment. • A sequential sampling technique is developed for improving design optimization. • The comparison with traditional RBDO is provided
Robust Topology Optimization Based on Stochastic Collocation Methods under Loading Uncertainties
Directory of Open Access Journals (Sweden)
Qinghai Zhao
2015-01-01
Full Text Available A robust topology optimization (RTO) approach with consideration of loading uncertainties is developed in this paper. The stochastic collocation method, combined with a full tensor product grid and the Smolyak sparse grid, transforms the robust formulation into a weighted multiple-loading deterministic problem at the collocation points. The proposed approach is amenable to implementation in existing commercial topology optimization software packages and is thus feasible for practical engineering problems. Numerical examples of two- and three-dimensional topology optimization problems are provided to demonstrate the proposed RTO approach and its applications. The optimal topologies obtained from deterministic and robust topology optimization designs under the tensor product grid and the sparse grid with different levels are compared with one another to investigate the pros and cons of the optimization algorithm on final topologies, and an extensive Monte Carlo simulation is also performed to verify the proposed approach.
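The collocation idea, replacing a random loading by a few deterministic load cases with quadrature weights and then optimizing a weighted sum of statistics, can be sketched for a scalar response under a Gaussian load angle. The response function, its coefficients, and the 5-point level are invented; the paper applies the same principle inside full finite-element topology optimization.

```python
import numpy as np
from numpy.polynomial.hermite_e import hermegauss

def response(theta):
    """Toy compliance-like response under a load applied at angle theta."""
    return 1.0 + 0.3 * np.cos(theta) ** 2 + 0.1 * np.sin(theta)

mu, sigma = 0.0, 0.2            # uncertain load angle ~ N(mu, sigma^2)

# probabilists' Gauss-Hermite nodes/weights act as the collocation points
nodes, weights = hermegauss(5)
w = weights / np.sqrt(2.0 * np.pi)      # normalize so the weights sum to 1
thetas = mu + sigma * nodes             # deterministic load cases

vals = response(thetas)
mean = w @ vals                          # weighted multiple-loading statistics
var = max(w @ vals ** 2 - mean ** 2, 0.0)
robust_objective = mean + 3.0 * np.sqrt(var)

# Monte Carlo cross-check of the collocation mean
mc = response(np.random.default_rng(3).normal(mu, sigma, 200_000))
```

Five deterministic evaluations reproduce the Monte Carlo mean to high accuracy, which is why collocation-based RTO fits on top of existing deterministic multi-load solvers.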
Robust optimization of supersonic ORC nozzle guide vanes
Bufi, Elio A.; Cinnella, Paola
2017-03-01
An efficient Robust Optimization (RO) strategy is developed for the design of 2D supersonic Organic Rankine Cycle turbine expanders. Dense gas effects are non-negligible for this application, and they are taken into account by describing the thermodynamics by means of the Peng-Robinson-Stryjek-Vera equation of state. The design methodology combines an Uncertainty Quantification (UQ) loop based on a Bayesian kriging model of the system response to the uncertain parameters, used to approximate statistics (mean and variance) of the uncertain system output, a CFD solver, and a multi-objective non-dominated sorting genetic algorithm (NSGA), also based on a kriging surrogate of the multi-objective fitness function, along with an adaptive infill strategy for surrogate enrichment at each generation of the NSGA. The objective functions are the average and variance of the isentropic efficiency. The blade shape is parametrized by means of a Free Form Deformation (FFD) approach. The robust optimal blades are compared to the baseline design (based on the Method of Characteristics) and to a blade obtained by means of a deterministic CFD-based optimization.
Robust balance shift control with posture optimization
Kavafoglu, Z.; Kavafoglu, Ersan; Egges, J.
2015-01-01
In this paper we present a control framework which creates robust and natural balance shifting behaviours during standing. Given high-level features such as the position of the center of mass projection and the foot configurations, a kinematic posture satisfying these features is synthesized using
Chance constrained uncertain classification via robust optimization
Ben-Tal, A.; Bhadra, S.; Bhattacharayya, C.; Saketha Nat, J.
2011-01-01
This paper studies the problem of constructing robust classifiers when the training is plagued with uncertainty. The problem is posed as a Chance-Constrained Program (CCP) which ensures that the uncertain data points are classified correctly with high probability. Unfortunately such a CCP turns out
TARCMO: Theory and Algorithms for Robust, Combinatorial, Multicriteria Optimization
2016-11-28
methods is presented in the book chapter [CG16d]. 4.4 Robust Timetable Information Problems. Timetable information is the process of determining a...Princeton and Oxford, 2009. [BTN98] A. Ben-Tal and A. Nemirovski. Robust convex optimization. Mathematics of Operations Research, 23(4):769–805...Goerigk. A note on upper bounds to the robust knapsack problem with discrete scenarios. Annals of Operations Research, 223(1):461–469, 2014. [GS16] M
Robust and fast nonlinear optimization of diffusion MRI microstructure models.
Harms, R L; Fritz, F J; Tobisch, A; Goebel, R; Roebroeck, A
2017-07-15
run time, fit, accuracy and precision. Parameter initialization approaches were found to be relevant especially for more complex models, such as those involving several fiber orientations per voxel. For these, a fitting cascade initializing or fixing parameter values in a later optimization step from simpler models in an earlier optimization step further improved run time, fit, accuracy and precision compared to a single step fit. This establishes and makes available standards by which robust fit and accuracy can be achieved in shorter run times. This is especially relevant for the use of diffusion microstructure modeling in large group or population studies and in combining microstructure parameter maps with tractography results.
Robust Optimization in Simulation : Taguchi and Response Surface Methodology
Dellino, G.; Kleijnen, J.P.C.; Meloni, C.
2008-01-01
Optimization of simulated systems is tackled by many methods, but most methods assume known environments. This article, however, develops a 'robust' methodology for uncertain environments. This methodology uses Taguchi's view of the uncertain world, but replaces his statistical techniques by
Robust Optimization Using Supremum of the Objective Function for Nonlinear Programming Problems
International Nuclear Information System (INIS)
Lee, Se Jung; Park, Gyung Jin
2014-01-01
In the field of robust optimization, the robustness of the objective function emphasizes an insensitive design. In general, robustness of the objective function can be achieved by reducing the change of the objective function with respect to the variation of the design variables and parameters. However, in conventional methods, when an insensitive design is emphasized, the performance of the objective function can deteriorate. Besides, if the number of design variables increases, the numerical cost of robust optimization for nonlinear programming problems is quite high. In this research, a robustness index for the objective function and a process of robust optimization are proposed. Moreover, a method using the supremum of linearized functions is also proposed to reduce the computational cost. Mathematical examples are solved to verify the proposed method, and the results are compared with those from conventional methods. The proposed approach improves both the performance of the objective function and its efficiency.
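The computational appeal of a supremum of linearized functions is that it has a closed form over a box of variations: for |Δx_i| <= δ_i, the supremum of f(x0) + ∇f·Δx equals f(x0) + Σ_i |∂f/∂x_i| δ_i, so no inner optimization per design is needed. A toy numpy check (the function and tolerances are invented, not the paper's examples):

```python
import itertools
import numpy as np

def f(x):
    return x[0] ** 2 + 3.0 * x[1]

def grad_f(x):
    return np.array([2.0 * x[0], 3.0])

x0 = np.array([1.0, 2.0])
delta = np.array([0.1, 0.05])   # allowed variation of each design variable

# supremum of the linearized objective over the box, in closed form
sup_linearized = f(x0) + np.abs(grad_f(x0)) @ delta

# reference: exact worst case of f over the box; f is convex, so the
# maximum is attained at a corner
true_sup = max(f(x0 + np.array(s) * delta)
               for s in itertools.product([-1.0, 1.0], repeat=2))
```

Here the linearized supremum (7.35) closely tracks the exact worst case (7.36) at a fraction of the cost, which is the trade the method exploits.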
Design and Validation of Optimized Feedforward with Robust Feedback Control of a Nuclear Reactor
International Nuclear Information System (INIS)
Shaffer, Roman; He Weidong; Edwards, Robert M.
2004-01-01
Design applications for robust feedback and optimized feedforward control, with confirming results from experiments conducted on the Pennsylvania State University TRIGA reactor, are presented. The combination of feedforward and feedback control techniques complement each other in that robust control offers guaranteed closed-loop stability in the presence of uncertainties, and optimized feedforward offers an approach to achieving performance that is sometimes limited by overly conservative robust feedback control. The design approach taken in this work combines these techniques by first designing robust feedback control. Alternative methods for specifying a low-order linear model and uncertainty specifications, while seeking as much performance as possible, are discussed and evaluated. To achieve desired performance characteristics, the optimized feedforward control is then computed by using the nominal nonlinear plant model that incorporates the robust feedback control
Topology optimization of robust superhydrophobic surfaces
DEFF Research Database (Denmark)
Cavalli, Andrea; Bøggild, Peter; Okkels, Fridolin
2013-01-01
In this paper we apply topology optimization to micro-structured superhydrophobic surfaces for the first time. It has been experimentally observed that a droplet suspended on a brush of micrometric posts shows a high static contact angle and low roll-off angle. To keep the fluid from penetrating...
Optimal design of robust piezoelectric unimorph microgrippers
DEFF Research Database (Denmark)
Ruiz, David; Díaz-Molina, Alex; Sigmund, Ole
2018-01-01
Topology optimization can be used to design piezoelectric actuators by simultaneous design of host structure and polarization profile. Subsequent micro-scale fabrication leads us to overcome important manufacturing limitations: difficulties in placing a piezoelectric layer on both top and bottom...
Geometrical framework for robust portfolio optimization
Bazovkin, Pavel
2014-01-01
We consider a vector-valued multivariate risk measure that depends on the user's profile given by the user's utility. It is constructed on the basis of weighted-mean trimmed regions and represents the solution of an optimization problem. The key feature of this measure is convexity. We apply the measure to the portfolio selection problem, employing different measures of performance as objective functions in a common geometrical framework.
DEFF Research Database (Denmark)
Christiansen, Rasmus E.; Lazarov, Boyan S.; Jensen, Jakob S.
2015-01-01
Resonance and wave-propagation problems are known to be highly sensitive towards parameter variations. This paper discusses topology optimization formulations for creating designs that perform robustly under spatial variations for acoustic cavity problems. For several structural problems, robust … and limitations are discussed. In addition, a known explicit penalization approach is considered for comparison. For near-uniform spatial variations it is shown that highly robust designs can be obtained using the double filter approach. It is finally demonstrated that taking non-uniform variations into account … further improves the robustness of the designs.
Robust optimization-based DC optimal power flow for managing wind generation uncertainty
Boonchuay, Chanwit; Tomsovic, Kevin; Li, Fangxing; Ongsakul, Weerakorn
2012-11-01
Integrating wind generation into the wider grid causes a number of challenges to traditional power system operation. Given the relatively large wind forecast errors, congestion management tools based on optimal power flow (OPF) need to be improved. In this paper, a robust optimization (RO)-based DCOPF is proposed to determine the optimal generation dispatch and locational marginal prices (LMPs) for a day-ahead competitive electricity market considering the risk of dispatch cost variation. The basic concept is to use the dispatch to hedge against the possibility of reduced or increased wind generation. The proposed RO-based DCOPF is compared with a stochastic non-linear programming (SNP) approach on a modified PJM 5-bus system. Preliminary test results show that the proposed DCOPF model can provide lower dispatch cost than the SNP approach.
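The hedging idea described above can be sketched in miniature. The toy below is not the paper's DCOPF model: it has two conventional units with hypothetical costs and a wind output known only to lie in an interval, and it picks the cheap unit's dispatch that minimizes the worst-case cost over that interval.

```python
def dispatch_cost(g1, g2, c1=20.0, c2=50.0):
    """Linear generation cost of the cheap and expensive units ($/MWh)."""
    return c1 * g1 + c2 * g2

def robust_dispatch(demand, w_lo, w_hi, g1_max=300.0, step=1.0):
    """Grid-search the cheap unit's setpoint; the expensive unit covers
    whatever residual demand remains in each wind scenario."""
    best = None
    g1 = 0.0
    while g1 <= g1_max:
        # Cost is linear in wind w, so the worst case over the interval
        # [w_lo, w_hi] is attained at one of the endpoints.
        worst = max(dispatch_cost(g1, max(demand - w - g1, 0.0))
                    for w in (w_lo, w_hi))
        if best is None or worst < best[0]:
            best = (worst, g1)
        g1 += step
    return best  # (worst-case cost, cheap-unit dispatch)

cost, g1 = robust_dispatch(demand=400.0, w_lo=50.0, w_hi=150.0)
```

Here the robust dispatch commits the cheap unit fully so that a low-wind outcome does not force expensive generation, which is the interval-hedging intuition behind the RO-based DCOPF.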
Stochastic Robust Mathematical Programming Model for Power System Optimization
Energy Technology Data Exchange (ETDEWEB)
Liu, Cong; Changhyeok, Lee; Haoyong, Chen; Mehrotra, Sanjay
2016-01-01
This paper presents a stochastic robust framework for two-stage power system optimization problems with uncertainty. The model optimizes the probabilistic expectation of different worst-case scenarios with different uncertainty sets. A case study of unit commitment shows the effectiveness of the proposed model and algorithms.
Linear systems optimal and robust control
Sinha, Alok
2007-01-01
Introduction; Overview; Contents of the Book; State Space Description of a Linear System; Transfer Function of a Single Input/Single Output (SISO) System; State Space Realizations of a SISO System; SISO Transfer Function from a State Space Realization; Solution of State Space Equations; Observability and Controllability of a SISO System; Some Important Similarity Transformations; Simultaneous Controllability and Observability; Multiinput/Multioutput (MIMO) Systems; State Space Realizations of a Transfer Function Matrix; Controllability and Observability of a MIMO System; Matrix-Fraction Description (MFD); MFD of a Transfer Function Matrix for the Minimal Order of a State Space Realization; Controller Form Realization from a Right MFD; Poles and Zeros of a MIMO Transfer Function Matrix; Stability Analysis; State Feedback Control and Optimization; State Variable Feedback for a Single Input System; Computation of State Feedback Gain Matrix for a Multiinput System; State Feedback Gain Matrix for a Multi...
Optimal strategy analysis based on robust predictive control for inventory system with random demand
Saputra, Aditya; Widowati, Sutrisno
2017-12-01
In this paper, the optimal strategy for a single-product, single-supplier inventory system with random demand is analyzed using robust predictive control with an additive random parameter. We formulate the system dynamics as a linear state space model with an additive random parameter. To determine and analyze the optimal strategy for the given inventory system, we use a robust predictive control approach that yields the optimal product volume to purchase from the supplier in each time period so that the expected cost is minimized. A numerical simulation is performed in MATLAB with generated random inventory data, where the inventory level must be controlled as close as possible to a chosen set point. The results show that the robust predictive control model provides the optimal purchase volume and that the inventory level followed the given set point.
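A minimal certainty-equivalence sketch of the receding-horizon idea above (not the paper's exact robust controller; the demand distribution, set point, and initial inventory are invented for illustration):

```python
import random

def plan_order(inventory, set_point, expected_demand):
    """One receding-horizon step: order the amount that steers the
    expected next-period inventory to the set point (orders >= 0)."""
    return max(set_point - inventory + expected_demand, 0.0)

random.seed(0)
set_point, mean_demand = 100.0, 30.0
x = 60.0                                 # initial inventory level
levels = []
for _ in range(20):
    u = plan_order(x, set_point, mean_demand)
    d = random.gauss(mean_demand, 5.0)   # realized random demand
    x = x + u - d                        # x[t+1] = x[t] + u[t] - d[t]
    levels.append(x)
```

With each period re-planned from the measured state, the inventory level hovers around the set point with deviations on the order of the demand noise, which is the tracking behavior the abstract describes.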
Optimal Robust Self-Testing by Binary Nonlocal XOR Games
Miller, Carl A.; Shi, Yaoyun
2013-01-01
Self-testing a quantum apparatus means verifying the existence of a certain quantum state as well as the effect of the associated measuring devices based only on the statistics of the measurement outcomes. Robust (i.e., error-tolerant) self-testing quantum apparatuses are critical building blocks for quantum cryptographic protocols that rely on imperfect or untrusted devices. We devise a general scheme for proving optimal robust self-testing properties for tests based on nonlocal binary XOR g...
Employing Sensitivity Derivatives for Robust Optimization under Uncertainty in CFD
Newman, Perry A.; Putko, Michele M.; Taylor, Arthur C., III
2004-01-01
A robust optimization is demonstrated on a two-dimensional inviscid airfoil problem in subsonic flow. Given uncertainties in statistically independent, random, normally distributed flow parameters (input variables), an approximate first-order statistical moment method is employed to represent the Computational Fluid Dynamics (CFD) code outputs as expected values with variances. These output quantities are used to form the objective function and constraints. The constraints are cast in probabilistic terms; that is, the probability that a constraint is satisfied is greater than or equal to some desired target probability. Gradient-based robust optimization of this stochastic problem is accomplished through use of both first and second-order sensitivity derivatives. For each robust optimization, the effect of increasing both input standard deviations and target probability of constraint satisfaction are demonstrated. This method provides a means for incorporating uncertainty when considering small deviations from input mean values.
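The first-order moment propagation described above can be sketched as follows, with a cheap analytic function standing in for the CFD code and central finite differences standing in for its sensitivity derivatives; the inputs and their standard deviations are hypothetical.

```python
import math

def moment_method(f, means, sigmas, h=1e-6):
    """First-order statistical moment method: the output mean is f at the
    input means; the output variance sums squared sensitivity derivatives
    times input variances (inputs assumed independent and normal)."""
    mu_f = f(means)
    var_f = 0.0
    for i, (m, s) in enumerate(zip(means, sigmas)):
        xp = list(means); xp[i] = m + h
        xm = list(means); xm[i] = m - h
        dfdx = (f(xp) - f(xm)) / (2.0 * h)   # sensitivity derivative
        var_f += (dfdx * s) ** 2
    return mu_f, math.sqrt(var_f)

# Linear toy output, so the first-order result is exact:
mu, sd = moment_method(lambda v: 3.0 * v[0] + 2.0 * v[1],
                       [1.0, 2.0], [0.1, 0.2])
```

The resulting mean and standard deviation are exactly the quantities used to form the probabilistic objective and constraints in the abstract.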
Robust Optimization of Fourth Party Logistics Network Design under Disruptions
Directory of Open Access Journals (Sweden)
Jia Li
2015-01-01
Full Text Available The Fourth Party Logistics (4PL) network faces disruptions of various sorts in a dynamic and complex environment. In order to explore the robustness of the network, 4PL network design under random disruptions is studied. The purpose of the research is to construct a 4PL network that can provide satisfactory service to customers at a lower cost when disruptions strike. Based on the definition of β-robustness, a robust optimization model of 4PL network design under disruptions is established. Given the NP-hard nature of the problem, an artificial fish swarm algorithm (AFSA) and a genetic algorithm (GA) are developed, and their effectiveness is tested and compared by simulation examples. By comparing the optimal solutions of the 4PL network for different robustness levels, it is shown that the robust optimization model can effectively hedge against market risks and minimize cost when applied to 4PL network design.
Application of Six Sigma Robust Optimization in Sheet Metal Forming
International Nuclear Information System (INIS)
Li, Y.Q.; Cui, Z.S.; Ruan, X.Y.; Zhang, D.J.
2005-01-01
Numerical simulation technology and optimization methods have been applied to sheet metal forming processes to improve design quality and shorten the design cycle. However, fluctuations in design variables or operating conditions strongly influence forming quality. In addition, the iterative solution of the numerical simulation and optimization usually takes huge computational time or incurs expensive experimental cost. In order to eliminate the effect of perturbations in design and improve design efficiency, a CAE-based six sigma robust design method is developed in this paper. In the six sigma procedure for sheet metal forming, statistical techniques and a dual response surface approximate model, as well as the 'Design for Six Sigma (DFSS)' algorithm, are integrated to perform reliability optimization and robust improvement. A deep drawing process of a rectangular cup is taken as an example to illustrate the method. The optimization solutions show that the proposed procedure not only significantly improves the reliability and robustness of the forming quality, but also increases optimization efficiency through the approximate model.
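A condensed dual-response-surface sketch in the spirit of the method above: suppose response surfaces have been fitted for the mean and standard deviation of a quality measure as functions of one scaled process parameter x. The surfaces and the weighting below are hypothetical stand-ins for the fitted RSM models, used only to show how the robust optimum shifts away from the deterministic one.

```python
def mean_surface(x):
    """Fitted mean response of the quality measure (toy quadratic)."""
    return (x - 2.0) ** 2 + 1.0

def sd_surface(x):
    """Fitted standard-deviation response (toy linear model)."""
    return 0.5 * x + 0.2

def robust_objective(x, k=3.0):
    """Six-sigma style compromise: penalize both mean and variability."""
    return mean_surface(x) + k * sd_surface(x)

xs = [i / 100.0 for i in range(401)]   # search grid on [0, 4]
x_det = min(xs, key=mean_surface)      # deterministic optimum
x_rob = min(xs, key=robust_objective)  # robust optimum
```

The robust optimum trades a slightly worse mean for lower variability, which is exactly the reliability-versus-nominal-quality compromise the abstract reports.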
Robust Optimization for Household Load Scheduling with Uncertain Parameters
Directory of Open Access Journals (Sweden)
Jidong Wang
2018-04-01
Full Text Available Home energy management systems (HEMS) face many uncertainties, which have a great impact on the scheduling of home appliances. To handle the uncertain parameters in the household load scheduling problem, this paper uses a robust optimization method to rebuild the household load scheduling model for home energy management. The proposed model can provide complete robust schedules for customers while accounting for disturbances in the uncertain parameters. These schedules not only guarantee the customers' comfort constraints but also cooperatively schedule the electric devices for cost minimization and load shifting. Moreover, customers can obtain multiple schedules by setting different robustness levels, trading off comfort against economy.
Robust Pitch Estimation Using an Optimal Filter on Frequency Estimates
DEFF Research Database (Denmark)
Karimian-Azari, Sam; Jensen, Jesper Rindom; Christensen, Mads Græsbøll
2014-01-01
of such signals from unconstrained frequency estimates (UFEs). A minimum variance distortionless response (MVDR) method is proposed as an optimal solution to minimize the variance of UFEs considering the constraint of integer harmonics. The MVDR filter is designed based on noise statistics making it robust...
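A simplified sketch of the idea above: given unconstrained frequency estimates of the first K harmonics with (assumed known) noise variances, the integer-harmonic constraint f_k = k * f0 can be fitted by weighted least squares, the minimum-variance unbiased combiner under these assumptions. The paper's MVDR filter generalizes this; the numbers below are synthetic.

```python
def wls_pitch(freqs, variances):
    """Weighted least-squares pitch: fit f_k ~ k * f0 with weights 1/var."""
    pairs = list(zip(freqs, variances))
    num = sum(k * f / v for k, (f, v) in enumerate(pairs, start=1))
    den = sum(k * k / v for k, (_f, v) in enumerate(pairs, start=1))
    return num / den

# Harmonics of a 100 Hz pitch, slightly perturbed; higher harmonics
# are given larger noise variances and therefore smaller weights.
ufes = [100.2, 199.7, 300.5, 399.1]
noise_vars = [0.5, 1.0, 2.0, 4.0]
f0 = wls_pitch(ufes, noise_vars)
```

Weighting by the noise statistics is what makes the combined pitch estimate robust to the noisier high-harmonic measurements.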
Robust Optimization in Simulation : Taguchi and Krige Combined
Dellino, G.; Kleijnen, Jack P.C.; Meloni, C.
2009-01-01
Optimization of simulated systems is the goal of many methods, but most methods assume known environments. We, however, develop a `robust' methodology that accounts for uncertain environments. Our methodology uses Taguchi's view of the uncertain world, but replaces his statistical techniques by
Metamodel-based robust simulation-optimization : An overview
Dellino, G.; Meloni, C.; Kleijnen, J.P.C.; Dellino, Gabriella; Meloni, Carlo
2015-01-01
Optimization of simulated systems is the goal of many methods, but most methods assume known environments. We, however, develop a "robust" methodology that accounts for uncertain environments. Our methodology uses Taguchi's view of the uncertain world but replaces his statistical techniques by
A Robust Obstacle Avoidance for Service Robot Using Bayesian Approach
Directory of Open Access Journals (Sweden)
Widodo Budiharto
2011-03-01
Full Text Available The objective of this paper is to propose a robust obstacle avoidance method for a service robot in an indoor environment. The method uses information about static obstacles on the landmark, obtained by edge detection. The speed and direction of a walking person, treated as a moving obstacle, are obtained by a single camera using a tracking and recognition system, with distance measured by three ultrasonic sensors. A new geometrical model and maneuvering method for moving-obstacle avoidance are introduced and combined with a Bayesian approach for state estimation. The obstacle avoidance problem is formulated using decision theory, with prior and posterior distributions and a loss function, to determine an optimal response based on inaccurate sensor data. Algorithms for the proposed moving-obstacle avoidance method and experimental results on a service robot are also presented. Various experiments show that the proposed method is fast, robust, and successfully implemented on the service robot Srikandi II, which is equipped with a 4-DOF arm robot developed in our laboratory.
Optimal robust control strategy of a solid oxide fuel cell system
Wu, Xiaojuan; Gao, Danhui
2018-01-01
Optimal control can ensure safe system operation with high efficiency. However, only a few papers discuss optimal control strategies for solid oxide fuel cell (SOFC) systems, and existing methods ignore the impact of parameter uncertainty on instantaneous system performance. In real SOFC systems, several parameters, such as load current, may vary with operating conditions and cannot be identified exactly. Therefore, a robust optimal control strategy is proposed that involves three parts: a SOFC model with parameter uncertainty, a robust optimizer, and robust controllers. During model building, boundaries of the uncertain parameter are extracted using a Monte Carlo algorithm. To achieve maximum efficiency, a two-space particle swarm optimization approach is employed to obtain optimal operating points, which are used as the set points of the controllers. To ensure safe SOFC operation, two feed-forward controllers and a higher-order robust sliding mode controller are then presented to control the fuel utilization ratio, air excess ratio, and stack temperature. The results show that the proposed method maintains safe SOFC operation with maximum efficiency under load and uncertainty variations.
Directory of Open Access Journals (Sweden)
José Renato Munhoz
2012-01-01
Full Text Available In this work, the aggregate production planning of frozen concentrated orange juice is modeled taking into account uncertainty in some parameters, providing an effective tool to support decision making. The robust optimization approach is based on a deterministic linear programming model with multiple products, stages, and periods proposed by Munhoz and Morabito (2010). Besides decisions on the production, blending, and storage of juices, the model also includes an orange harvesting plan, which considers orange maturation curves. A case study was developed in an orange juice company located in the state of São Paulo, involving several plants and a worldwide distribution network. The computational results obtained with this robust optimization approach, using an algebraic modeling language and a state-of-the-art optimization solver, indicate that the approach has the potential to be successfully applied in real situations.
Simulation-based robust optimization for signal timing and setting.
2009-12-30
The performance of signal timing plans obtained from traditional approaches for pre-timed (fixed-time or actuated) control systems is often unstable under fluctuating traffic conditions. This report develops a general approach for optimizing the ...
Robust Optimization Model for Production Planning Problem under Uncertainty
Directory of Open Access Journals (Sweden)
Pembe GÜÇLÜ
2017-01-01
Full Text Available Business conditions change very quickly, and accounting for the uncertainty engendered by these changes has become almost a rule in planning. Robust optimization techniques, which are methods for handling uncertainty, produce results that are less sensitive to changing conditions. Production planning, in its most basic definition, is deciding which products will be produced, when, and in what quantities. The modeling and solution of production planning problems change depending on the structure of the production processes, parameters, and variables. In this paper, we aim to build and apply a scenario-based robust optimization model for a capacitated two-stage multi-product production planning problem under parameter and demand uncertainty. For this purpose, the production planning problem of a textile company operating in İzmir has been modeled and solved, and the results of the deterministic scenarios and the robust method have been compared. The robust method provided a production plan with a higher cost, but one that remains close to feasible and optimal for most of the possible future scenarios.
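A toy scenario-based robust objective in the style hinted at above (Mulvey-type: expected cost plus a penalty on scenario spread): a single production quantity q is fixed before demand is known, and each scenario carries a probability and a demand. All scenario data and penalty coefficients are hypothetical.

```python
SCENARIOS = [(0.3, 80.0), (0.5, 100.0), (0.2, 130.0)]  # (probability, demand)
UNDER, OVER = 8.0, 3.0      # shortage vs. holding penalty per unit

def scenario_cost(q, d):
    """Cost of producing q when demand turns out to be d."""
    return UNDER * max(d - q, 0.0) + OVER * max(q - d, 0.0)

def robust_value(q, lam=0.5):
    """Scenario-robust objective: expected cost plus a spread penalty."""
    costs = [(p, scenario_cost(q, d)) for p, d in SCENARIOS]
    mean = sum(p * c for p, c in costs)
    spread = sum(p * abs(c - mean) for p, c in costs)
    return mean + lam * spread

q_star = min(range(60, 141), key=robust_value)
```

The robust plan overproduces relative to the pure expected-cost optimum, accepting a slightly higher average cost to stay near-optimal across the scenarios, which mirrors the comparison reported in the abstract.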
Including robustness in multi-criteria optimization for intensity-modulated proton therapy
Chen, Wei; Unkelbach, Jan; Trofimov, Alexei; Madden, Thomas; Kooy, Hanne; Bortfeld, Thomas; Craft, David
2012-02-01
We present a method to include robustness in a multi-criteria optimization (MCO) framework for intensity-modulated proton therapy (IMPT). The approach allows one to simultaneously explore the trade-off between different objectives as well as the trade-off between robustness and nominal plan quality. In MCO, a database of plans, each emphasizing different treatment planning objectives, is pre-computed to approximate the Pareto surface. An IMPT treatment plan that strikes the best balance between the different objectives can be selected by navigating on the Pareto surface. In our approach, robustness is integrated into MCO by adding robustified objectives and constraints to the MCO problem. Uncertainties (or errors) of the robust problem are modeled by pre-calculated dose-influence matrices for a nominal scenario and a number of pre-defined error scenarios (shifted patient positions, proton beam undershoot and overshoot). Objectives and constraints can be defined for the nominal scenario, thus characterizing nominal plan quality. A robustified objective represents the worst objective function value that can be realized for any of the error scenarios and thus provides a measure of plan robustness. The optimization method is based on a linear projection solver and is capable of handling large problem sizes resulting from a fine dose grid resolution, many scenarios, and a large number of proton pencil beams. A base-of-skull case is used to demonstrate the robust optimization method. It is demonstrated that the robust optimization method reduces the sensitivity of the treatment plan to setup and range errors to a degree that is not achieved by a safety margin approach. A chordoma case is analyzed in more detail to demonstrate the involved trade-offs between target underdose and brainstem sparing as well as robustness and nominal plan quality. The latter illustrates the advantage of MCO in the context of robust planning. For all cases examined, the robust optimization for
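The robustified-objective construction described above, in miniature: each candidate plan has an objective value under a nominal scenario and under a set of error scenarios, and the robustified objective is the worst value over the error scenarios. The plan names and all numbers below are hypothetical, chosen only to exhibit the robustness-versus-nominal-quality trade-off.

```python
plans = {
    # plan name: (nominal objective, objective under each error scenario:
    # setup shift +, setup shift -, range undershoot, range overshoot)
    "margin": (10.0, [14.0, 15.0, 18.0, 17.0]),
    "robust": (11.0, [12.5, 12.0, 13.0, 12.8]),
}

def robustified(name):
    """Worst objective value over the pre-defined error scenarios."""
    _nominal, scenario_values = plans[name]
    return max(scenario_values)

best_nominal = min(plans, key=lambda p: plans[p][0])  # best nominal quality
best_robust = min(plans, key=robustified)             # best worst case
```

The "margin" plan wins nominally but degrades badly under errors, while the "robust" plan sacrifices a little nominal quality for a much better worst case; exploring that trade-off is the point of adding robustified objectives to the MCO Pareto surface.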
Robust output LQ optimal control via integral sliding modes
Fridman, Leonid; Bejarano, Francisco Javier
2014-01-01
Featuring original research from well-known experts in the field of sliding mode control, this monograph presents new design schemes for implementing LQ control solutions in situations where the output system is the only information provided about the state of the plant. This new design works under the restrictions of matched disturbances without losing its desirable features. On the cutting-edge of optimal control research, Robust Output LQ Optimal Control via Integral Sliding Modes is an excellent resource for both graduate students and professionals involved in linear systems, optimal control, observation of systems with unknown inputs, and automatization. In the theory of optimal control, the linear quadratic (LQ) optimal problem plays an important role due to its physical meaning, and its solution is easily given by an algebraic Riccati equation. This solution turns out to be restrictive, however, because of two assumptions: the system must be free from disturbances and the entire state vector must be kn...
On robust control of uncertain chaotic systems: a sliding-mode synthesis via chaotic optimization
International Nuclear Information System (INIS)
Lu Zhao; Shieh Leangsan; Chen GuanRong
2003-01-01
This paper presents a novel Lyapunov-based control approach which utilizes a Lyapunov function of the nominal plant for robust tracking control of general multi-input uncertain nonlinear systems. The difficulty of constructing a control Lyapunov function is alleviated by means of predefining an optimal sliding mode. The conventional schemes for constructing sliding modes of nonlinear systems stipulate that the system of interest is canonical-transformable or feedback-linearizable. An innovative approach that exploits a chaotic optimizing algorithm is developed thereby obtaining the optimal sliding manifold for the control purpose. Simulations on the uncertain chaotic Chen's system illustrate the effectiveness of the proposed approach
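A minimal chaotic-optimization sketch in the spirit of the search described above: a logistic map generates the candidate sequence used to search a one-dimensional design parameter. The objective below is a toy stand-in for the real sliding-manifold design criterion, and the interval and iteration budget are hypothetical.

```python
def chaotic_search(objective, lo, hi, iters=500, z0=0.7):
    """Carrier-wave search driven by the logistic map z <- 4 z (1 - z)."""
    z = z0
    best_x, best_f = None, float("inf")
    for _ in range(iters):
        z = 4.0 * z * (1.0 - z)      # chaotic iterate, stays in (0, 1)
        x = lo + (hi - lo) * z       # map onto the search interval
        f = objective(x)
        if f < best_f:
            best_x, best_f = x, f
    return best_x, best_f

# Toy objective standing in for the sliding-manifold design criterion.
x_best, f_best = chaotic_search(lambda x: (x - 1.3) ** 2, 0.0, 3.0)
```

The ergodicity of the chaotic iterate lets the search cover the interval without gradients, which is the property such algorithms exploit when no canonical or feedback-linearizable structure is available.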
Stochastic analysis and robust optimization for a deck lid inner panel stamping
International Nuclear Information System (INIS)
Hou, Bo; Wang, Wurong; Li, Shuhui; Lin, Zhongqin; Xia, Z. Cedric
2010-01-01
FE-simulation and optimization are widely used in the stamping process to improve design quality and shorten the development cycle. However, current simulation and optimization may lead to non-robust results because variations in material and process parameters are not considered. In this study, a novel stochastic analysis and robust optimization approach is proposed to improve stamping robustness, in which uncertainties are incorporated to reflect manufacturing reality. A meta-model based stochastic analysis method is developed, where FE-simulation, uniform design and response surface methodology (RSM) are used to construct the meta-model, based on which Monte-Carlo simulation is performed to predict the influence of input parameter variation on the final product quality. By applying the stochastic analysis, uniform design and RSM, the mean and the standard deviation (SD) of product quality are calculated as functions of the controllable process parameters. The robust optimization model composed of mean and SD is constructed and solved, and its result is compared with the deterministic one to show its advantages. It is demonstrated that product quality variations are reduced significantly, and quality targets (reject rate) are achieved under the robust optimal solution. The developed approach offers rapid and reliable results for engineers dealing with potential stamping problems during the early phase of product and tooling design, saving time and resources.
Energy Technology Data Exchange (ETDEWEB)
Voort, Sebastian van der [Department of Radiation Oncology, Erasmus MC Cancer Institute, Rotterdam (Netherlands); Section of Nuclear Energy and Radiation Applications, Department of Radiation, Science and Technology, Delft University of Technology, Delft (Netherlands); Water, Steven van de [Department of Radiation Oncology, Erasmus MC Cancer Institute, Rotterdam (Netherlands); Perkó, Zoltán [Section of Nuclear Energy and Radiation Applications, Department of Radiation, Science and Technology, Delft University of Technology, Delft (Netherlands); Heijmen, Ben [Department of Radiation Oncology, Erasmus MC Cancer Institute, Rotterdam (Netherlands); Lathouwers, Danny [Section of Nuclear Energy and Radiation Applications, Department of Radiation, Science and Technology, Delft University of Technology, Delft (Netherlands); Hoogeman, Mischa, E-mail: m.hoogeman@erasmusmc.nl [Department of Radiation Oncology, Erasmus MC Cancer Institute, Rotterdam (Netherlands)
2016-05-01
Purpose: We aimed to derive a “robustness recipe” giving the range robustness (RR) and setup robustness (SR) settings (ie, the error values) that ensure adequate clinical target volume (CTV) coverage in oropharyngeal cancer patients for given Gaussian distributions of systematic setup, random setup, and range errors (characterized by standard deviations of Σ, σ, and ρ, respectively) when used in minimax worst-case robust intensity modulated proton therapy (IMPT) optimization. Methods and Materials: For the analysis, contoured computed tomography (CT) scans of 9 unilateral and 9 bilateral patients were used. An IMPT plan was considered robust if, for at least 98% of the simulated fractionated treatments, 98% of the CTV received 95% or more of the prescribed dose. For fast assessment of the CTV coverage for given error distributions (ie, different values of Σ, σ, and ρ), polynomial chaos methods were used. Separate recipes were derived for the unilateral and bilateral cases using one patient from each group, and all 18 patients were included in the validation of the recipes. Results: Treatment plans for bilateral cases are intrinsically more robust than those for unilateral cases. The required RR depends only on ρ, and SR can be fitted by second-order polynomials in Σ and σ. The formulas for the derived robustness recipes are as follows: Unilateral patients need SR = −0.15Σ² + 0.27σ² + 1.85Σ − 0.06σ + 1.22 and RR = 3% for ρ = 1% and ρ = 2%; bilateral patients need SR = −0.07Σ² + 0.19σ² + 1.34Σ − 0.07σ + 1.17 and RR = 3% and 4% for ρ = 1% and 2%, respectively. For the recipe validation, 2 plans were generated for each of the 18 patients corresponding to Σ = σ = 1.5 mm and ρ = 0% and 2%. Thirty-four plans had adequate CTV coverage in 98% or more of the simulated fractionated treatments; the remaining 2 had adequate coverage in 97.8% and 97.9%. Conclusions: Robustness recipes were derived that can
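The unilateral recipe quoted above is simply a second-order polynomial in Σ and σ; evaluating it for the validation setting mentioned in the abstract (Σ = σ = 1.5 mm) is direct arithmetic:

```python
def setup_robustness_unilateral(Sigma, sigma):
    """SR (mm) for unilateral cases, per the fitted recipe quoted above."""
    return (-0.15 * Sigma ** 2 + 0.27 * sigma ** 2
            + 1.85 * Sigma - 0.06 * sigma + 1.22)

# Validation setting from the abstract: Sigma = sigma = 1.5 mm.
sr = setup_robustness_unilateral(1.5, 1.5)   # -> 4.175 mm
```

So for 1.5 mm systematic and random setup errors, the recipe prescribes an SR setting of about 4.2 mm (plus RR = 3% for the stated range-error levels).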
Robust Optimal Adaptive Control Method with Large Adaptive Gain
Nguyen, Nhan T.
2009-01-01
In the presence of large uncertainties, a control system needs to be able to adapt rapidly to regain performance. Fast adaptation refers to the implementation of adaptive control with a large adaptive gain to reduce the tracking error rapidly. However, a large adaptive gain can lead to high-frequency oscillations which can adversely affect the robustness of an adaptive control law. A new adaptive control modification is presented that can achieve robust adaptation with a large adaptive gain without incurring the high-frequency oscillations of standard model-reference adaptive control. The modification is based on the minimization of the L2 norm of the tracking error, which is formulated as an optimal control problem. The optimality condition is used to derive the modification using the gradient method. The optimal control modification results in stable adaptation and allows a large adaptive gain to be used for better tracking while providing sufficient stability robustness. Simulations were conducted for a damaged generic transport aircraft with both standard adaptive control and the adaptive optimal control modification technique. The results demonstrate the effectiveness of the proposed modification in tracking a reference model while maintaining a sufficient time-delay margin.
A robust optimization model for distribution and evacuation in the disaster response phase
Fereiduni, Meysam; Shahanaghi, Kamran
2017-03-01
Natural disasters, such as earthquakes, affect thousands of people and can cause enormous financial loss, so an efficient response immediately following a natural disaster is vital to minimize these negative effects. This research paper presents a network design model for humanitarian logistics that assists in location and allocation decisions for multiple disaster periods. First, a single-objective optimization model is presented that addresses the response phase of disaster management and helps decision makers make optimal location, allocation, and evacuation choices simultaneously. The proposed model also considers emergency tents as temporary medical centers. To cope with the uncertainty and dynamic nature of disasters and their consequences, our multi-period robust model considers the values of critical input data over a set of scenarios. Second, because of probable disruption in the distribution infrastructure (such as bridges), Monte Carlo simulation is used to generate random numbers and scenarios, and the p-robust approach is utilized to formulate the new network; this approach can predict possible damage along pathways and among relief bases. We present a case study of our robust optimization approach for a plausible earthquake in region 1 of Tehran. Sensitivity analysis experiments explore the effects of various problem parameters, provide managerial insights, and can guide decision makers under a variety of conditions. Finally, the performance of the "robust optimization" approach and the "p-robust optimization" approach is evaluated, and intriguing results and practical insights from this comparison are demonstrated.
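The p-robustness condition used above can be sketched directly: a fixed design is p-robust if, in every scenario, its cost is within a factor (1 + p) of that scenario's own optimal cost. The scenario costs below are hypothetical.

```python
def is_p_robust(design_costs, optimal_costs, p):
    """design_costs[s]: cost of the fixed design evaluated in scenario s;
    optimal_costs[s]: best achievable cost if scenario s were known."""
    return all(c <= (1.0 + p) * opt
               for c, opt in zip(design_costs, optimal_costs))

design = [105.0, 98.0, 120.0]   # fixed network evaluated per scenario
optima = [100.0, 95.0, 100.0]   # scenario-wise optimal costs

ok_20 = is_p_robust(design, optima, 0.20)  # within 20% everywhere?
ok_05 = is_p_robust(design, optima, 0.05)  # within 5% everywhere?
```

Tightening p shrinks the set of acceptable designs, which is the lever the paper uses to trade cost against protection from disrupted pathways.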
Integrating robust timetabling in line plan optimization for railway systems
DEFF Research Database (Denmark)
Burggraeve, Sofie; Bull, Simon Henry; Vansteenwegen, Pieter
2017-01-01
We propose a heuristic algorithm to build a railway line plan from scratch that minimizes passenger travel time and operator cost and for which a feasible and robust timetable exists. A line planning module and a timetabling module work iteratively and interactively. The line planning module creates an initial line plan. The timetabling module evaluates the line plan and identifies a critical line based on minimum buffer times between train pairs. The line planning module then proposes a new line plan in which the time length of the critical line is modified in order to provide more flexibility, but is constrained by limited shunt capacity. While the operator and passenger cost remain close to those of the initially and (for these costs) optimally built line plan, the timetable corresponding to the finally developed robust line plan significantly improves the minimum buffer time, and thus the robustness.
Optimization of robustness of interdependent network controllability by redundant design.
Directory of Open Access Journals (Sweden)
Zenghu Zhang
Full Text Available Controllability of complex networks has been a hot topic in recent years. Real networks are often coupled together as interdependent networks. The cascading process in interdependent networks, including interdependent failure and overload failure, degrades the robustness of controllability for the whole network. Optimizing the robustness of interdependent network controllability is therefore of great importance in complex networks research. In this paper, based on a model of interdependent networks constructed first, we determine the cascading process under different proportions of node attacks. Then, the structural controllability of interdependent networks is measured by the minimum set of driver nodes. Furthermore, we propose a parameter, obtained from the structure and minimum driver set of interdependent networks under different proportions of node attacks, to analyze the robustness of interdependent network controllability. Finally, we optimize this robustness by redundant design, including node backup and redundant edge backup, and improve the redundant design by proposing different strategies according to their cost. Comparative experiments are conducted to find the best strategy. Results show that node backup and redundant edge backup can indeed reduce the number of nodes suffering from failure and improve the robustness of controllability. Considering the cost of redundant design, BBS (betweenness-based strategy) or DBS (degree-based strategy) should be chosen for node backup and HDF (high-degree-first) for redundant edge backup. Overall, the proposed strategies are feasible and effective at improving the robustness of interdependent network controllability.
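The "minimum driver nodes" measure used above comes from structural controllability theory: the number of driver nodes equals N minus the size of a maximum matching over the network's directed edges (with a floor of one driver node). A small self-contained sketch, using a plain augmenting-path matching; the example graphs are illustrative:

```python
def min_driver_nodes(n, edges):
    """Minimum number of driver nodes for structural controllability of a
    directed network: N_D = max(N - |maximum matching|, 1), where the
    matching is over the bipartite graph of out-copies vs in-copies."""
    adj = [[] for _ in range(n)]
    for u, v in edges:
        adj[u].append(v)
    match_in = [-1] * n  # match_in[v] = u if edge (u, v) is in the matching

    def augment(u, seen):
        # try to match u's out-copy, recursively rerouting earlier matches
        for v in adj[u]:
            if not seen[v]:
                seen[v] = True
                if match_in[v] == -1 or augment(match_in[v], seen):
                    match_in[v] = u
                    return True
        return False

    matching = sum(augment(u, [False] * n) for u in range(n))
    return max(n - matching, 1)

# directed chain 0→1→2→3: every node but the head is matched → 1 driver
print(min_driver_nodes(4, [(0, 1), (1, 2), (2, 3)]))  # → 1
# star 0→1, 0→2, 0→3: node 0 can match only one branch → 3 drivers
print(min_driver_nodes(4, [(0, 1), (0, 2), (0, 3)]))  # → 3
```

Measuring this quantity before and after removing attacked nodes (and their backups) is one way to quantify the robustness of controllability that the abstract describes.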
Stochastic simulation and robust design optimization of integrated photonic filters
Directory of Open Access Journals (Sweden)
Weng Tsui-Wei
2016-07-01
Full Text Available Manufacturing variations are becoming an unavoidable issue in modern fabrication processes; therefore, it is crucial to be able to include stochastic uncertainties in the design phase. In this paper, integrated photonic coupled ring resonator filters are considered as an example of significant interest. The sparsity structure in photonic circuits is exploited to construct a sparse combined generalized polynomial chaos model, which is then used to analyze related statistics and perform robust design optimization. Simulation results show that the optimized circuits are more robust to fabrication process variations and achieve a reduction of 11%–35% in the mean square errors of the 3 dB bandwidth compared to unoptimized nominal designs.
On projection methods, convergence and robust formulations in topology optimization
DEFF Research Database (Denmark)
Wang, Fengwen; Lazarov, Boyan Stefanov; Sigmund, Ole
2011-01-01
Mesh convergence and manufacturability of topology optimized designs have previously mainly been assured using density or sensitivity based filtering techniques. The drawback of these techniques has been gray transition regions between solid and void parts, but this problem has recently been alleviated using various projection methods. In this paper we show that simple projection methods do not ensure local mesh-convergence and propose a modified robust topology optimization formulation based on erosion, intermediate and dilation projections that ensures both global and local mesh-convergence.
Robust optimal self tuning regulator of nuclear reactors
International Nuclear Information System (INIS)
Nouri Khajavi, M.; Menhaj, M.B.; Ghofrani, M.B.
2000-01-01
Nuclear power reactors are, by nature, nonlinear and time varying. These characteristics must be considered if large power variations occur in their working regime. In this paper a robust optimal self-tuning regulator for the power of a nuclear reactor has been designed and simulated. The proposed controller is capable of regulating power over a wide range (10% to 100% power levels) and achieves a fast, good transient response. The simulation results show that the proposed controller outperforms the fixed optimal control recently cited in the literature for nuclear power plants.
Cascade-robustness optimization of coupling preference in interconnected networks
International Nuclear Information System (INIS)
Zhang, Xue-Jun; Xu, Guo-Qiang; Zhu, Yan-Bo; Xia, Yong-Xiang
2016-01-01
Highlights: • A specific memetic algorithm was proposed to optimize coupling links. • A small toy model was investigated to examine the underlying mechanism. • The MA optimized strategy exhibits a moderate assortative pattern. • A novel coupling coefficient index was proposed to quantify coupling preference. - Abstract: Recently, the robustness of interconnected networks has attracted extensive attention, one line of which investigates the influence of coupling preference. In this paper, a memetic algorithm (MA) is employed to optimize the coupling links of interconnected networks. A comparison is then made between the MA optimized coupling strategy and traditional assortative, disassortative and random coupling preferences. It is found that the MA optimized coupling strategy, with a moderate assortative value, shows outstanding performance against cascading failures on both synthetic scale-free interconnected networks and real-world networks. We then explain this phenomenon from a microscopic point of view and propose a coupling coefficient index to quantify the coupling preference. Our work is helpful for the design of robust interconnected networks.
Robust optimization of a tandem grating solar thermal absorber
Choi, Jongin; Kim, Mingeon; Kang, Kyeonghwan; Lee, Ikjin; Lee, Bong Jae
2018-04-01
Ideal solar thermal absorbers need a high spectral absorptance across the broad solar spectrum to utilize solar radiation effectively. The majority of recent studies on solar thermal absorbers focus on achieving nearly perfect absorption using nanostructures whose characteristic dimension is smaller than the wavelength of sunlight. However, precise fabrication of such nanostructures is difficult in practice; unavoidable errors always occur to some extent in the dimensions of fabricated nanostructures, causing an undesirable deviation in absorption performance between the designed structure and the one actually fabricated. In order to minimize the variation in the solar absorptance due to fabrication error, robust optimization can be performed during the design process. However, optimizing a solar thermal absorber over all design variables often requires tremendous computational cost to find a combination of design variables offering robustness as well as high performance. To achieve this goal, we apply robust optimization using the Kriging method and a genetic algorithm to design a tandem grating solar absorber. By constructing a surrogate model through the Kriging method, computational cost can be substantially reduced because exact calculation of the performance for every combination of variables is not necessary. Using the surrogate model and the genetic algorithm, we successfully design an effective solar thermal absorber exhibiting a low level of performance degradation under fabrication uncertainty in the design variables.
Group Elevator Peak Scheduling Based on Robust Optimization Model
Directory of Open Access Journals (Sweden)
ZHANG, J.
2013-08-01
Full Text Available Scheduling of an Elevator Group Control System (EGCS) is a typical combinatorial optimization problem. Uncertain group scheduling under peak traffic flows has recently become a research focus and difficulty. Robust Optimization (RO) is a novel and effective way to deal with uncertain scheduling problems. In this paper, a peak scheduling method based on an RO model for multi-elevator systems is proposed. The method is immune to the uncertainty of peak traffic flows: optimal scheduling is realized without exact numbers of waiting passengers at each calling floor. Specifically, an energy-saving oriented multi-objective scheduling price is proposed, and an RO uncertain peak scheduling model is built to minimize this price. Because the RO uncertain model cannot be solved directly, it is transformed into an RO certain model via elevator scheduling robust counterparts. Because the solution space of elevator scheduling is enormous, an ant colony algorithm is proposed to solve the RO certain model in short time. Based on this algorithm, optimal scheduling solutions are found quickly and group elevators are scheduled accordingly. Simulation results show that the method effectively improves scheduling performance in the peak pattern, realizing efficient operation of the elevator group.
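The robust-counterpart step the abstract relies on can be illustrated on the simplest uncertainty set. For a constraint aᵀx ≤ b in which each coefficient is only known to lie in an interval [āᵢ − δᵢ, āᵢ + δᵢ], the worst case over the box is āᵀx + Σᵢ δᵢ|xᵢ| ≤ b (Soyster's counterpart) — a certain constraint that any LP solver can handle after the standard linearization of |xᵢ|. A hedged sketch with made-up numbers:

```python
def robust_feasible(x, a_nom, delta, b):
    """Check the robust counterpart of sum_i a_i * x_i <= b under box
    uncertainty a_i in [a_nom_i - delta_i, a_nom_i + delta_i]: the worst
    case adds delta_i * |x_i| to each nominal term (Soyster's counterpart)."""
    lhs = sum(an * xi + d * abs(xi) for an, xi, d in zip(a_nom, x, delta))
    return lhs <= b

# the nominal constraint holds with certainty when there is no uncertainty...
print(robust_feasible([1.0, 1.0], a_nom=[1.0, 1.0], delta=[0.0, 0.0], b=2.0))
# ...but not for every realization in a +/-0.3 box: worst case is 2.6 > 2
print(robust_feasible([1.0, 1.0], a_nom=[1.0, 1.0], delta=[0.3, 0.3], b=2.0))
```

The same mechanics — replace each uncertain constraint by its worst case over the uncertainty set — is what turns the "RO uncertain model" above into the solvable "RO certain model".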
Directory of Open Access Journals (Sweden)
Bingyong Yan
2015-01-01
Full Text Available A robust fault detection scheme for a class of nonlinear systems with uncertainty is proposed. The proposed approach utilizes robust control theory and a parameter optimization algorithm to design the gain matrix of a fault tracking approximator (FTA) for fault detection. The gain matrix is designed to minimize the effects of system uncertainty on residual signals while maximizing the effects of system faults on them; accounting for robustness to uncertainty and sensitivity to faults simultaneously leads to a multiobjective optimization problem. The detectability of system faults is then rigorously analyzed by investigating the threshold of the residual signals. Finally, simulation results show the validity and applicability of the proposed approach.
Robust sawtooth period control based on adaptive online optimization
International Nuclear Information System (INIS)
Bolder, J.J.; Witvoet, G.; De Baar, M.R.; Steinbuch, M.; Van de Wouw, N.; Haring, M.A.M.; Westerhof, E.; Doelman, N.J.
2012-01-01
The systematic design of a robust adaptive control strategy for the sawtooth period using electron cyclotron current drive (ECCD) is presented. Recent developments in extremum seeking control (ESC) are employed to derive an optimized controller structure and offer practical tuning guidelines for its parameters. In this technique a cost function in terms of the desired sawtooth period is optimized online by changing the ECCD deposition location based on online estimations of the gradient of the cost function. The controller design does not require a detailed model of the sawtooth instability. Therefore, the proposed ESC is widely applicable to any sawtoothing plasma or plasma simulation and is inherently robust against uncertainties or plasma variations. Moreover, it can handle a broad class of disturbances. This is demonstrated by time-domain simulations, which show successful tracking of time-varying sawtooth period references throughout the whole operating space, even in the presence of variations in plasma parameters, disturbances and slow launcher mirror dynamics. Due to its simplicity and robustness the proposed ESC is a valuable sawtooth control candidate for any experimental tokamak plasma, and may even be applicable to other fusion-related control problems. (paper)
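The extremum seeking idea — optimize a measured cost online, with no model of the plant — can be sketched in a few lines: a sinusoidal dither probes the cost, a washout filter removes the DC component, and demodulation by the same sinusoid yields a gradient estimate that the parameter then descends. This is a minimal static-map sketch, not the paper's sawtooth controller; in that application `cost` would be a measured function of the ECCD deposition location, and all gains here are illustrative assumptions.

```python
import math

def extremum_seeking(cost, theta0, amp=0.1, omega=10.0, gain=0.5,
                     wash=1.0, dt=1e-3, T=60.0):
    """Perturbation-based extremum seeking on a static cost map: dither,
    washout (high-pass) filtering, demodulation, gradient descent."""
    theta, dc = theta0, cost(theta0)
    for k in range(int(T / dt)):
        t = k * dt
        J = cost(theta + amp * math.sin(omega * t))   # probed measurement
        dc += dt * wash * (J - dc)                    # washout: DC estimate
        grad_est = (2.0 / amp) * (J - dc) * math.sin(omega * t)
        theta -= gain * grad_est * dt                 # descend estimated gradient
    return theta

# cost is minimized at theta = 1.5; ESC finds it from measurements alone
print(extremum_seeking(lambda th: (th - 1.5) ** 2, theta0=0.0))
```

Averaging the demodulated signal over a dither period gives approximately the local gradient of the cost, so the parameter drifts toward the optimum without the controller ever knowing the map — which is why the approach in the abstract needs no detailed sawtooth model.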
Optimizing the robustness of electrical power systems against cascading failures.
Zhang, Yingrui; Yağan, Osman
2016-06-21
Electrical power systems are one of the most important infrastructures that support our society. However, their vulnerabilities have raised great concern recently due to several large-scale blackouts around the world. In this paper, we investigate the robustness of power systems against cascading failures initiated by a random attack. This is done under a simple yet useful model based on global and equal redistribution of load upon failures. We provide a comprehensive understanding of system robustness under this model by (i) deriving an expression for the final system size as a function of the size of the initial attack; (ii) deriving the critical attack size after which the system breaks down completely; (iii) showing that complete system breakdown takes place through a first-order (i.e., discontinuous) transition in terms of the attack size; and (iv) establishing the optimal load-capacity distribution that maximizes robustness. In particular, we show that robustness is maximized when the difference between the capacity and initial load is the same for all lines, i.e., when all lines have the same redundant space regardless of their initial load. This is in contrast with the intuitive and commonly used setting where the capacity of a line is a fixed factor of its initial load.
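The equal-redistribution model and its headline result — robustness is maximized when every line keeps the same absolute free space — are simple enough to simulate directly. A hedged sketch (the loads, free-space numbers, and attacked set are made up; the paper analyzes random attacks, while a deterministic attacked set is used here to keep the demo reproducible):

```python
def surviving_fraction(loads, free_space, attacked):
    """Cascade under global, equal redistribution of load: attacked lines
    fail, their load is shed equally onto all surviving lines, and any
    line pushed past its capacity (initial load + free space) fails next."""
    n = len(loads)
    loads = list(loads)                    # work on a copy
    caps = [l + s for l, s in zip(loads, free_space)]
    alive = set(range(n))
    failed = set(attacked) & alive
    while failed:
        shed = sum(loads[i] for i in failed)
        alive -= failed
        if not alive:
            return 0.0
        extra = shed / len(alive)          # equal redistribution
        for i in alive:
            loads[i] += extra
        failed = {i for i in alive if loads[i] > caps[i]}
    return len(alive) / n

n = 1000
loads = [1.0 + (i % 10) * 0.2 for i in range(n)]   # heterogeneous initial loads
attack = range(200)                                 # 20% of the lines fail initially

# the same total spare capacity (600 units), allocated two ways:
equal = surviving_fraction(loads, [0.6] * n, attack)                      # equal free space
prop = surviving_fraction(loads, [0.6 * l / 1.9 for l in loads], attack)  # proportional to load
print(equal, prop)
```

With equal free space every surviving line absorbs the shed load (0.475 per line, under the 0.6 margin), so the cascade stops at once; with the same total spare capacity allocated proportionally to load, the lightly loaded lines fail first and the cascade propagates to total collapse — a concrete instance of result (iv) above.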
On the relation between flexibility analysis and robust optimization for linear systems
Zhang, Qi
2016-03-05
Flexibility analysis and robust optimization are two approaches to solving optimization problems under uncertainty that share some fundamental concepts, such as the use of polyhedral uncertainty sets and the worst-case approach to guarantee feasibility. The connection between these two approaches has not been sufficiently acknowledged and examined in the literature. In this context, the contributions of this work are fourfold: (1) a comparison between flexibility analysis and robust optimization from a historical perspective is presented; (2) for linear systems, new formulations for the three classical flexibility analysis problems—flexibility test, flexibility index, and design under uncertainty—based on duality theory and the affinely adjustable robust optimization (AARO) approach are proposed; (3) the AARO approach is shown to be generally more restrictive such that it may lead to overly conservative solutions; (4) numerical examples show the improved computational performance from the proposed formulations compared to the traditional flexibility analysis models. © 2016 American Institute of Chemical Engineers AIChE J, 62: 3109–3123, 2016
Distributionally Robust Return-Risk Optimization Models and Their Applications
Directory of Open Access Journals (Sweden)
Li Yang
2014-01-01
Full Text Available Based on risk control via conditional value-at-risk, distributionally robust return-risk optimization models with box constraints on the random vector are proposed. They describe uncertainty in both the distribution form and the moments (mean and covariance matrix) of the random vector. Such models are difficult to solve directly. Using conic duality theory and the minimax theorem, the models are reformulated as semidefinite programming problems, which can be solved by interior point algorithms in polynomial time. This provides an important theoretical basis for applications of the models. Moreover, an application of the models to a practical portfolio selection example is considered, and the example is evaluated using a historical data set of four stocks. Numerical results show that the proposed methods are robust and the investment strategy is safe.
Robust C subroutines for non-linear optimization
DEFF Research Database (Denmark)
Brock, Pernille; Madsen, Kaj; Nielsen, Hans Bruun
2004-01-01
This report presents a package of robust and easy-to-use C subroutines for solving unconstrained and constrained non-linear optimization problems. The intention is that the routines should use the currently best algorithms available. All routines have standardized calls, and the user does not have … by changing 1 to 0. The present report is a new and updated version of a previous report, NI-91-03, with the same title [16]. Both the previous and the present report describe a collection of subroutines which have been translated from Fortran to C. The reason for writing the present report is that some of the C subroutines have been replaced by more effective and robust versions, translated from the original Fortran subroutines to C by the Bandler Group, see [1]. Also the test examples have been modified to some extent. For a description of the original Fortran subroutines see the report [17]. The software …
Siraj, M.M.; Van den Hof, P.M.J.; Jansen, J.D.
2017-01-01
Model-based dynamic optimization of the water-flooding process in oil reservoirs is a computationally complex problem and suffers from high levels of uncertainty. A traditional way of quantifying uncertainty in robust water-flooding optimization is by considering an ensemble of uncertain model
Rakić, Tijana; Jovanović, Marko; Dumić, Aleksandra; Pekić, Marina; Ribić, Sanja; Stojanović, Biljana Jancić
2013-01-01
This paper presents multiobjective optimization of the separation of complex mixtures in hydrophilic interaction liquid chromatography (HILIC). The selected model mixture consisted of five psychotropic drugs: clozapine, thioridazine, sulpiride, pheniramine and lamotrigine. Three factors related to the mobile phase composition (acetonitrile content, pH of the water phase and concentration of ammonium acetate) were optimized in order to achieve maximal separation quality, minimal total analysis duration and robustness of the optimum. Considering robustness in the early phases of method development yields reliable methods with a low risk of failure in the validation phase. The simultaneous optimization of all goals was achieved by a multiple threshold approach combined with grid point search. The identified optimal separation conditions (acetonitrile content 83%, pH of the water phase 3.5 and ammonium acetate content in the water phase 14 mM) were experimentally verified.
HEURISTIC APPROACHES FOR PORTFOLIO OPTIMIZATION
Manfred Gilli, Evis Kellezi
2000-01-01
The paper first compares the use of optimization heuristics to the classical optimization techniques for the selection of optimal portfolios. Second, the heuristic approach is applied to problems other than those in the standard mean-variance framework where the classical optimization fails.
International Nuclear Information System (INIS)
Bender, Edward T.
2012-01-01
Purpose: To develop a robust method for deriving dose-painting prescription functions using spatial information about the risk of disease recurrence. Methods: Spatial distributions of radiobiological model parameters are derived from distributions of recurrence risk after uniform irradiation. These model parameters are then used to derive optimal dose-painting prescription functions given a constant mean biologically effective dose. Results: An estimate of the optimal dose distribution can be derived from spatial information about recurrence risk. Dose painting based on imaging markers that are moderately or poorly correlated with recurrence risk is predicted to potentially result in inferior disease control compared to the same mean biologically effective dose delivered uniformly. A robust optimization approach may partially mitigate this issue. Conclusions: The methods described here can be used to derive an estimate of a robust, patient-specific prescription function for use in dose painting. Two approximate scaling relationships were observed. First, the optimal choice for the maximum dose differential when using either a linear or two-compartment prescription function is proportional to R, where R is the Pearson correlation coefficient between a given imaging marker and recurrence risk after uniform irradiation. Second, the predicted maximum possible gain in tumor control probability for any robust optimization technique is nearly proportional to the square of R.
Three Essays on Robust Optimization of Efficient Portfolios
Liu, Hao
2013-01-01
The mean-variance approach was first proposed by Markowitz (1952), and laid the foundation of the modern portfolio theory. Despite its theoretical appeal, the practical implementation of optimized portfolios is strongly restricted by the fact that the two inputs, the means and the covariance matrix of asset returns, are unknown and have to be estimated by available historical information. Due to the estimation risk inherited from inputs, desired properties of estimated optimal portfolios are ...
DEFF Research Database (Denmark)
Henrichsen, Søren Randrup; Lindgaard, Esben; Lund, Erik
2015-01-01
Robust buckling optimal design of laminated composite structures is conducted in this work. Optimal designs are obtained by considering geometric imperfections in the optimization procedure. Discrete Material Optimization is applied to obtain optimal laminate designs. The optimal geometric imperfection is represented by the "worst" shape imperfection. The two optimization problems are combined through the recurrence optimization, whereby the imperfection sensitivity of the considered structures can be studied. The recurrence optimization is demonstrated through a U-profile and a cylindrical panel example. The imperfection sensitivity of the optimized structure decreases during the recurrence optimization for both examples; hence robust buckling optimal structures are designed.
Robust Inventory System Optimization Based on Simulation and Multiple Criteria Decision Making
Directory of Open Access Journals (Sweden)
Ahmad Mortazavi
2014-01-01
Full Text Available Inventory management in retail is a difficult and complex decision making process involving conflicting criteria, and cyclic changes and trends in demand are inevitable in many industries. In this paper, simulation modeling is employed as an efficient tool for modeling a retailer's multiproduct inventory system. For simulation model optimization, a novel multicriteria and robust surrogate model is designed based on a multiple attribute decision making (MADM) method, design of experiments (DOE), and principal component analysis (PCA). This approach, the main contribution of this paper, provides a framework for robust multiple criteria decision making under uncertainty.
Horsetail matching: a flexible approach to optimization under uncertainty
Cook, L. W.; Jarrett, J. P.
2018-04-01
It is important to design engineering systems to be robust with respect to uncertainties in the design process. Often, this is done by considering statistical moments, but over-reliance on statistical moments when formulating a robust optimization can produce designs that are stochastically dominated by other feasible designs. This article instead proposes a formulation for optimization under uncertainty that minimizes the difference between a design's cumulative distribution function and a target. A standard target is proposed that produces stochastically non-dominated designs, but the formulation also offers enough flexibility to recover existing approaches for robust optimization. A numerical implementation is developed that employs kernels to give a differentiable objective function. The method is applied to algebraic test problems and a robust transonic airfoil design problem where it is compared to multi-objective, weighted-sum and density matching approaches to robust optimization; several advantages over these existing methods are demonstrated.
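The core of horsetail matching can be sketched compactly: represent a design by the empirical inverse CDF of its quantity of interest under sampled uncertainty, and score it by its distance to an idealized step target. This is a hedged, non-differentiable sketch — the paper uses kernels to obtain a smooth objective — and the Gaussian "drag" samples and target value below are made up for illustration.

```python
import math
import random

def horsetail_metric(samples, target):
    """Horsetail-style metric: RMS difference between the empirical
    inverse CDF (the sorted samples) and a step-target CDF at `target`.
    Smaller is better for a quantity of interest to be minimized."""
    qs = sorted(samples)
    return math.sqrt(sum((q - target) ** 2 for q in qs) / len(qs))

rng = random.Random(1)
# two designs with the same mean quantity of interest but different spread
design_a = [1.0 + 0.05 * rng.gauss(0, 1) for _ in range(2000)]
design_b = [1.0 + 0.30 * rng.gauss(0, 1) for _ in range(2000)]

m_a = horsetail_metric(design_a, target=0.9)
m_b = horsetail_metric(design_b, target=0.9)
print(m_a < m_b)  # the low-spread design sits closer to the target CDF
```

Because the metric penalizes the whole distribution rather than a single moment, a design that is stochastically dominated (worse everywhere in its CDF) can never score better — which is the formulation's advantage over moment-based robust objectives noted in the abstract.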
A robust optimization model for blood supply chain in emergency situations
Directory of Open Access Journals (Sweden)
Meysam Fereiduni
2016-09-01
Full Text Available In this paper, a multi-period model for the blood supply chain in emergency situations is presented to optimize decisions related to locating blood facilities and distributing blood products after natural disasters. In disastrous situations, uncertainty is an inseparable part of humanitarian logistics, and of the blood supply chain as well. This paper proposes a robust network to capture the uncertain nature of the blood supply chain during and after disasters. The study considers donor points, blood facilities, processing and testing labs, and hospitals as the components of the blood supply chain. In addition, it makes location and allocation decisions for multiple post-disaster periods using real data. The performances of the "p-robust optimization" and "robust optimization" approaches are compared and the results are discussed.
Robust filtering for uncertain systems a parameter-dependent approach
Gao, Huijun
2014-01-01
This monograph provides the reader with a systematic treatment of robust filter design, a key issue in systems, control and signal processing, because the inevitable presence of uncertainty in system and signal models often degrades the filtering performance and may even cause instability. The methods described are therefore not subject to the rigorous assumptions of traditional Kalman filtering. The monograph is concerned with robust filtering for various dynamical systems with parametric uncertainties, and focuses on parameter-dependent approaches to filter design. Classical filtering schemes, like H2 filtering and H∞ filtering, are addressed, and emerging issues such as robust filtering with constraints on communication channels and signal frequency characteristics are discussed. The text features: · design approaches to robust filters arranged according to varying complexity level, and emphasizing robust filtering in the parameter-dependent framework for the first time; ·...
Robust Nearfield Wideband Beamforming Design Based on Adaptive-Weighted Convex Optimization
Directory of Open Access Journals (Sweden)
Guo Ye-Cai
2017-01-01
Full Text Available Nearfield wideband beamformers for microphone arrays have wide applications in multichannel speech enhancement. Nearfield wideband beamformer design based on convex optimization is one of the typical representatives of robust approaches. However, in that approach the weighting coefficient of the convex optimization is a constant, which does not exploit all the freedom the weighting coefficient provides, so further performance improvement is possible. To solve this problem, we developed a robust nearfield wideband beamformer design approach based on adaptive-weighted convex optimization. The proposed approach defines an adaptive weighting function via adaptive array signal processing theory and adjusts its value flexibly, improving beamforming performance. In each adaptive update of the weighting function, the convex optimization problem can be formulated as an SOCP (Second-Order Cone Programming) problem, which can be solved efficiently using well-established interior-point methods. The method is suitable for sound sources in the nearfield range, works well in the presence of microphone mismatches, and is applicable to arbitrary array geometries. Several design examples verify the effectiveness of the proposed approach and the correctness of the theoretical analysis.
Tuning rules for robust FOPID controllers based on multi-objective optimization with FOPDT models.
Sánchez, Helem Sabina; Padula, Fabrizio; Visioli, Antonio; Vilanova, Ramon
2017-01-01
In this paper a set of optimally balanced tuning rules for fractional-order proportional-integral-derivative controllers is proposed. The control problem of minimizing at once the integrated absolute error for both the set-point and the load-disturbance responses is addressed. The control problem is stated as a multi-objective optimization problem in which a first-order-plus-dead-time process model subject to a robustness (maximum sensitivity based) constraint is considered. A set of Pareto optimal solutions is obtained for different normalized dead times, and the optimal balance between the competing objectives is then obtained by choosing the Nash solution among the Pareto-optimal ones. A curve fitting procedure has been applied in order to generate suitable tuning rules. Several simulation results show the effectiveness of the proposed approach.
Robust approximate optimal guidance strategies for aeroassisted orbital transfer missions
Ilgen, Marc R.
This thesis presents the application of game theoretic and regular perturbation methods to the problem of determining robust approximate optimal guidance laws for aeroassisted orbital transfer missions with atmospheric density and navigated state uncertainties. The optimal guidance problem is reformulated as a differential game problem with the guidance law designer and Nature as opposing players. The resulting equations comprise the necessary conditions for the optimal closed loop guidance strategy in the presence of worst case parameter variations. While these equations are nonlinear and cannot be solved analytically, the presence of a small parameter in the equations of motion allows the method of regular perturbations to be used to solve the equations approximately. This thesis is divided into five parts. The first part introduces the class of problems to be considered and presents results of previous research. The second part then presents explicit semianalytical guidance law techniques for the aerodynamically dominated region of flight. These guidance techniques are applied to unconstrained and control constrained aeroassisted plane change missions and Mars aerocapture missions, all subject to significant atmospheric density variations. The third part presents a guidance technique for aeroassisted orbital transfer problems in the gravitationally dominated region of flight. Regular perturbations are used to design an implicit guidance technique similar to the second variation technique but that removes the need for numerically computing an optimal trajectory prior to flight. This methodology is then applied to a set of aeroassisted inclination change missions. In the fourth part, the explicit regular perturbation solution technique is extended to include the class of guidance laws with partial state information. This methodology is then applied to an aeroassisted plane change mission using inertial measurements and subject to uncertainties in the initial value
Robust approach to f(R) gravity
International Nuclear Information System (INIS)
Jaime, Luisa G.; Patino, Leonardo; Salgado, Marcelo
2011-01-01
We consider metric f(R) theories of gravity without mapping them to their scalar-tensor counterpart, instead using the Ricci scalar itself as an "extra" degree of freedom. This approach thus avoids the introduction of a scalar-field potential that might be ill defined (not single valued). In order to show the usefulness of this method explicitly, we focus on static and spherically symmetric spacetimes and address the recent controversy about the existence of extended relativistic objects in a certain class of f(R) models.
Robust, Causal, and Incremental Approaches to Investigating Linguistic Adaptation
Roberts, Seán G.
2018-01-01
This paper discusses the maximum robustness approach for studying cases of adaptation in language. We live in an age where we have more data on more languages than ever before, and more data from other domains to link it to. This should make it easier to test hypotheses involving adaptation, and also to spot new patterns that might be explained by adaptation. However, there is not much discussion of the overall approach to research in this area. There are outstanding questions about how to formalize theories, what the criteria are for directing research, and how to integrate results from different methods into a clear assessment of a hypothesis. This paper addresses some of those issues by suggesting an approach that is causal, incremental and robust. It illustrates the approach with reference to a recent claim that dry environments select against the use of precise contrasts in pitch. Study 1 replicates a previous analysis of the link between humidity and lexical tone with an alternative dataset and finds that it is not robust. Study 2 performs an analysis with a continuous measure of tone and finds no significant correlation. Study 3 addresses a more recent analysis of the link between humidity and vowel use and finds that it is robust, though the effect size is small and the robustness of the measurement of vowel use is low. Methodological robustness of the general theory is addressed by suggesting additional approaches including iterated learning, a historical case study, corpus studies, and studying individual speech. PMID:29515487
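As a minimal illustration of the kind of robustness check the studies above perform, the sketch below runs a permutation test on a correlation. The data and variable names are invented for illustration, not taken from the paper:

```python
import random
import statistics

def pearson(xs, ys):
    # Pearson correlation coefficient of two equal-length samples
    mx, my = statistics.mean(xs), statistics.mean(ys)
    sx, sy = statistics.stdev(xs), statistics.stdev(ys)
    n = len(xs)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / (n - 1)
    return cov / (sx * sy)

def permutation_pvalue(xs, ys, n_perm=5000, seed=0):
    # Two-sided permutation test: shuffle one variable and count how
    # often the shuffled |r| meets or exceeds the observed |r|.
    rng = random.Random(seed)
    observed = abs(pearson(xs, ys))
    ys = list(ys)
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(ys)
        if abs(pearson(xs, ys)) >= observed:
            hits += 1
    return (hits + 1) / (n_perm + 1)
```

A small permutation p-value is still only weak evidence for adaptation, since related languages are not independent samples; that is one reason the paper argues for combining several methods rather than relying on a single correlation.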
Topology optimization approaches
DEFF Research Database (Denmark)
Sigmund, Ole; Maute, Kurt
2013-01-01
Topology optimization has undergone a tremendous development since its introduction in the seminal paper by Bendsøe and Kikuchi in 1988. By now, the concept is developing in many different directions, including “density”, “level set”, “topological derivative”, “phase field”, “evolutionary...
Directory of Open Access Journals (Sweden)
Vahid Raissi Dehkordi
2009-01-01
Full Text Available This paper deals with the robust performance problem of a linear time-invariant control system in the presence of controller uncertainty. Assuming that plant uncertainty is modeled as an additive perturbation, a geometrical approach is followed in order to find a necessary and sufficient condition for robust performance in the form of a bound on the magnitude of controller uncertainty. This frequency domain bound is derived by converting the problem into an optimization problem, whose solution is shown to be more time-efficient than a conventional structured singular value calculation. The bound on controller uncertainty can be used in controller order reduction and implementation problems.
Multidisciplinary Design Optimization for High Reliability and Robustness
National Research Council Canada - National Science Library
Grandhi, Ramana
2005-01-01
.... Over the last 3 years Wright State University has been applying analysis tools to predict the behavior of critical disciplines to produce highly robust torpedo designs using robust multi-disciplinary...
RECOVERY ACT - Robust Optimization for Connectivity and Flows in Dynamic Complex Networks
Energy Technology Data Exchange (ETDEWEB)
Balasundaram, Balabhaskar [Oklahoma State Univ., Stillwater, OK (United States); Butenko, Sergiy [Texas A & M Univ., College Station, TX (United States); Boginski, Vladimir [Univ. of Florida, Gainesville, FL (United States); Uryasev, Stan [Univ. of Florida, Gainesville, FL (United States)
2013-12-25
The goal of this project was to study robust connectivity and flow patterns of complex multi-scale systems modeled as networks. Networks provide effective ways to study global, system level properties, as well as local, multi-scale interactions at a component level. Numerous applications from power systems, telecommunication, transportation, biology, social science, and other areas have benefited from novel network-based models and their analysis. Modeling and optimization techniques that employ appropriate measures of risk for identifying robust clusters and resilient network designs in networks subject to uncertain failures were investigated in this collaborative multi-university project. In many practical situations one has to deal with uncertainties associated with possible failures of network components, thereby affecting the overall efficiency and performance of the system (e.g., every node/connection has a probability of partial or complete failure). Some extreme examples include power grid component failures, airline hub failures due to weather, or freeway closures due to emergencies. These are also situations in which people, materials, or other resources need to be managed efficiently. Important practical examples include rerouting flow through power grids, adjusting flight plans, and identifying routes for emergency services and supplies, in the event network elements fail unexpectedly. Solutions that are robust under uncertainty, in addition to being economically efficient, are needed. This project has led to the development of novel models and methodologies that can tackle the optimization problems arising in such situations. A number of new concepts, which have not been previously applied in this setting, were investigated in the framework of the project. The results can potentially help decision-makers to better control and identify robust or risk-averse decisions in such situations. Formulations and optimal solutions of the considered problems need
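The flavor of the robustness questions studied in the project can be conveyed with a toy stand-in (not the project's actual models): measure how the maximum s-t flow of a small network degrades under the worst-case failure of a single edge.

```python
from collections import deque

def max_flow(n, cap, s, t):
    # Edmonds-Karp max flow on an n-node graph; cap[(u, v)] -> capacity.
    flow = {e: 0 for e in cap}
    def residual(u, v):
        return cap.get((u, v), 0) - flow.get((u, v), 0) + flow.get((v, u), 0)
    total = 0
    while True:
        # BFS for an augmenting path in the residual graph
        parent = {s: None}
        q = deque([s])
        while q and t not in parent:
            u = q.popleft()
            for v in range(n):
                if v not in parent and residual(u, v) > 0:
                    parent[v] = u
                    q.append(v)
        if t not in parent:
            return total
        path = []
        v = t
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        aug = min(residual(u, v) for u, v in path)
        for u, v in path:
            # cancel reverse flow first, then push on the forward edge
            back = min(flow.get((v, u), 0), aug)
            if back:
                flow[(v, u)] -= back
            if aug - back:
                flow[(u, v)] = flow.get((u, v), 0) + aug - back
        total += aug

def worst_case_flow(n, cap, s, t):
    # Robust value: minimum max-flow over all single-edge failures.
    return min(max_flow(n, {e: c for e, c in cap.items() if e != dead}, s, t)
               for dead in cap)
```

The gap between the nominal and worst-case values is one crude indicator of how resilient a design is; the project's models replace this enumeration with risk measures and optimization over uncertain failures.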
DEFF Research Database (Denmark)
Zhang, Yipu; Ai, Xiaomeng; Wen, Jinyu
2018-01-01
In this paper, a novel data-adaptive robust optimization method for the unit commitment is proposed for the power system with wind farms integrated. The extreme scenario extraction and the two-stage robust optimization are combined in the proposed method. The data-adaptive set consisting of a few extreme scenarios is derived to reduce the conservativeness by considering the temporal and spatial correlations of multiple wind farms. Numerical results demonstrate that the proposed data-adaptive robust optimization algorithm is less conservative than the current two-stage optimization approaches while maintains...
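A naive stand-in for extreme-scenario extraction is sketched below: keep only the scenarios that attain the hour-wise minimum or maximum of wind output, since interior scenarios cannot be the binding case for a two-stage robust problem with this simple structure. The paper's data-adaptive set is considerably richer (it exploits temporal and spatial correlation between farms); this only illustrates the pruning idea:

```python
def extreme_scenarios(scenarios):
    # scenarios: list of hourly wind-output profiles (lists of floats).
    # Keep the indices of scenarios that attain the hour-wise minimum or
    # maximum at some hour; the rest are dominated in this toy setting.
    hours = range(len(scenarios[0]))
    keep = set()
    for h in hours:
        vals = [s[h] for s in scenarios]
        keep.add(vals.index(min(vals)))
        keep.add(vals.index(max(vals)))
    return sorted(keep)
```

Solving the robust unit commitment against only the retained scenarios shrinks the inner adversarial problem while preserving the binding cases under these assumptions.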
Robust topology optimization accounting for misplacement of material
DEFF Research Database (Denmark)
Jansen, Miche; Lombaert, Geert; Diehl, Moritz
2013-01-01
into account this type of geometric imperfections. A density filter based approach is followed, and translations of material are obtained by adding a small perturbation to the center of the filter kernel. The spatial variation of the geometric imperfections is modeled by means of a vector valued random field....... A sampling method is used to estimate these statistics during the optimization process. The proposed method is successfully applied to three example problems: the minimum compliance design of a slender column-like structure and a cantilever beam and a compliant mechanism design. An extensive Monte Carlo...
An Evolutionary Approach for Robust Layout Synthesis of MEMS
DEFF Research Database (Denmark)
Fan, Zhun; Wang, Jiachuan; Goodman, Erik
2005-01-01
The paper introduces a robust design method for layout synthesis of MEM resonators subject to inherent geometric uncertainties such as the fabrication error on the sidewall of the structure. The robust design problem is formulated as a multi-objective constrained optimisation problem after certain...... assumptions and treated with multiobjective genetic algorithm (MOGA), a special type of evolutionary computing approaches. Case study based on layout synthesis of a comb-driven MEM resonator shows that the approach proposed in this paper can lead to design results that meet the target performance and are less...
An optimization methodology for identifying robust process integration investments under uncertainty
International Nuclear Information System (INIS)
Svensson, Elin; Berntsson, Thore; Stroemberg, Ann-Brith; Patriksson, Michael
2009-01-01
Uncertainties in future energy prices and policies strongly affect decisions on investments in process integration measures in industry. In this paper, we present a five-step methodology for the identification of robust investment alternatives incorporating explicitly such uncertainties in the optimization model. Methods for optimization under uncertainty (or, stochastic programming) are thus combined with a deep understanding of process integration and process technology in order to achieve a framework for decision-making concerning the investment planning of process integration measures under uncertainty. The proposed methodology enables the optimization of investments in energy efficiency with respect to their net present value or an environmental objective. In particular, as a result of the optimization approach, complex investment alternatives, allowing for combinations of energy efficiency measures, can be analyzed. Uncertainties as well as time-dependent parameters, such as energy prices and policies, are modelled using a scenario-based approach, enabling the identification of robust investment solutions. The methodology is primarily an aid for decision-makers in industry, but it will also provide insight for policy-makers into how uncertainties regarding future price levels and policy instruments affect the decisions on investments in energy efficiency measures. (author)
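The paper's five-step methodology is far richer than any single decision rule, but the scenario-based robustness idea at its core can be sketched with a max-min choice over price/policy scenarios. The alternative names and NPV figures below are invented for illustration:

```python
def robust_investment(alternatives, scenarios):
    # alternatives: {name: {scenario: net present value}}
    # Max-min criterion: choose the investment whose worst-case NPV
    # over the energy-price/policy scenarios is largest.
    def worst(npv_by_scenario):
        return min(npv_by_scenario[s] for s in scenarios)
    return max(alternatives, key=lambda a: worst(alternatives[a]))
```

An alternative that is best in one scenario but ruinous in another is rejected in favor of one that performs acceptably across all scenarios, which is the sense in which the identified investments are "robust".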
Robust Homography Estimation Based on Nonlinear Least Squares Optimization
Directory of Open Access Journals (Sweden)
Wei Mou
2014-01-01
Full Text Available The homography between image pairs is normally estimated by minimizing a suitable cost function given 2D keypoint correspondences. The correspondences are typically established using the descriptor distance of keypoints. However, the correspondences are often incorrect due to ambiguous descriptors, which can introduce errors into the subsequent homography computation step. There have been numerous attempts to filter out these erroneous correspondences, but perfect matching is unlikely to be achieved in every case. To deal with this problem, we propose a nonlinear least squares optimization approach to compute the homography such that false matches have little or no effect on the computed homography. Unlike standard homography computation algorithms, our method formulates not only the keypoints' geometric relationship but also their descriptor similarity into the cost function. Moreover, the cost function is parametrized in such a way that incorrect correspondences can be simultaneously identified while the homography is computed. Experiments show that the proposed approach performs well even in the presence of a large number of outliers.
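The core idea — letting the optimizer down-weight bad correspondences instead of filtering them beforehand — can be sketched on a much simpler model. Below, an iteratively reweighted least-squares line fit with Huber weights stands in for the robust cost; the homography case replaces the line model with the 8-parameter projective map and adds the descriptor term, which is not reproduced here:

```python
def irls_line_fit(pts, n_iter=25, delta=1.0):
    # Iteratively reweighted least squares for y = a*x + b. Points with
    # large residuals (outliers, cf. false matches) receive Huber
    # weights delta/|r| < 1, so they barely influence the final fit.
    a, b = 0.0, 0.0
    for it in range(n_iter):
        if it == 0:
            w = [1.0] * len(pts)  # plain least squares to start
        else:
            w = [1.0 if abs(y - (a * x + b)) <= delta
                 else delta / abs(y - (a * x + b)) for x, y in pts]
        # closed-form weighted least squares for the two parameters
        sw = sum(w)
        sx = sum(wi * x for wi, (x, _) in zip(w, pts))
        sy = sum(wi * y for wi, (_, y) in zip(w, pts))
        sxx = sum(wi * x * x for wi, (x, _) in zip(w, pts))
        sxy = sum(wi * x * y for wi, (x, y) in zip(w, pts))
        a = (sw * sxy - sx * sy) / (sw * sxx - sx * sx)
        b = (sy - a * sx) / sw
    return a, b
```

With one gross outlier among six inliers, a plain least-squares fit is pulled far off the true line, while the reweighted fit stays close to it.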
WANG, Qingrong; ZHU, Changfeng; LI, Ying; ZHANG, Zhengkun
2017-06-01
Considering the time dependence of the emergency logistics network and the complexity of the environment in which the network exists, this paper combines time-dependent network optimization theory with robust discrete optimization theory and builds a robust dynamic network optimization model for emergency logistics that maximizes timeliness. On this basis, considering the complexity of the dynamic network and the time dependence of edge weights, an improved ant colony algorithm is proposed to couple the optimization algorithm with the network's time dependence and robustness. Finally, a case study is carried out to verify the validity of the robust optimization model and its algorithm, and the values of different regulation factors are analyzed, given the importance of the control factor in solving for the optimal path. The results show that the model and algorithm have good timeliness and strong robustness.
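The time-dependent path search underlying the model can be illustrated with a label-setting (Dijkstra-style) sketch rather than the authors' improved ant colony algorithm; each edge carries a travel-time function of the departure time, and FIFO travel times keep the greedy search optimal:

```python
import heapq

def td_shortest_path(graph, src, dst, t0):
    # Time-dependent Dijkstra: graph[u] = list of (v, travel_time_fn),
    # where travel_time_fn(t) is the edge traversal time when departing
    # u at time t. Returns the earliest arrival time at dst.
    best = {src: t0}
    pq = [(t0, src)]
    while pq:
        t, u = heapq.heappop(pq)
        if u == dst:
            return t
        if t > best.get(u, float("inf")):
            continue  # stale queue entry
        for v, tt in graph[u]:
            arr = t + tt(t)
            if arr < best.get(v, float("inf")):
                best[v] = arr
                heapq.heappush(pq, (arr, v))
    return float("inf")
```

Because edge weights depend on the departure time, the best route can change with the start time, which is exactly the dynamic effect the emergency logistics model has to capture.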
Energy Technology Data Exchange (ETDEWEB)
Newpower, M; Ge, S; Mohan, R [UT MD Anderson Cancer Center, Houston, TX (United States)
2016-06-15
Purpose: To report an approach to quantify the normal tissue sparing for 4D robustly-optimized versus PTV-optimized IMPT plans. Methods: We generated two sets of 90 DVHs from a patient’s 10-phase 4D CT set; one by conventional PTV-based optimization done in the Eclipse treatment planning system, and the other by an in-house robust optimization algorithm. The 90 DVHs were created for the following scenarios in each of the ten phases of the 4DCT: ± 5mm shift along x, y, z; ± 3.5% range uncertainty and a nominal scenario. A Matlab function written by Gay and Niemierko was modified to calculate EUD for each DVH for the following structures: esophagus, heart, ipsilateral lung and spinal cord. An F-test determined whether or not the variances of each structure’s DVHs were statistically different. Then a t-test determined if the average EUDs for each optimization algorithm were statistically significantly different. Results: T-test results showed each structure had a statistically significant difference in average EUD when comparing robust optimization versus PTV-based optimization. Under robust optimization all structures except the spinal cord received lower EUDs than PTV-based optimization. Using robust optimization the average EUDs decreased 1.45% for the esophagus, 1.54% for the heart and 5.45% for the ipsilateral lung. The average EUD to the spinal cord increased 24.86% but was still well below tolerance. Conclusion: This work has helped quantify a qualitative relationship noted earlier in our work: that robust optimization leads to plans with greater normal tissue sparing compared to PTV-based optimization. Except in the case of the spinal cord all structures received a lower EUD under robust optimization and these results are statistically significant. While the average EUD to the spinal cord increased to 25.06 Gy under robust optimization it is still well under the TD50 value of 66.5 Gy from Emami et al. Supported in part by the NCI U19 CA021239.
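The EUD metric underlying this comparison is, in Niemierko's generalized form, a power mean of the dose distribution. A minimal sketch of the formula (not the Gay–Niemierko Matlab function itself) is:

```python
def gEUD(dose_bins, volume_fracs, a):
    # Generalized EUD (Niemierko): (sum_i v_i * D_i^a)^(1/a), where
    # v_i are fractional volumes (summing to 1) receiving dose D_i,
    # taken from a differential DVH.
    assert abs(sum(volume_fracs) - 1.0) < 1e-9
    return sum(v * d ** a for d, v in zip(dose_bins, volume_fracs)) ** (1.0 / a)
```

With a = 1 the gEUD is the mean dose (appropriate for parallel organs such as lung), while large positive a drives it toward the maximum dose (serial organs such as spinal cord), which is why a modest EUD increase in the cord can remain clinically acceptable as long as it stays well below tolerance.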
Yin, Hui; Yu, Dejie; Yin, Shengwen; Xia, Baizhan
2018-03-01
The conventional engineering optimization problems considering uncertainties are based on the probabilistic model. However, the probabilistic model may be unavailable because of the lack of sufficient objective information to construct the precise probability distribution of uncertainties. This paper proposes a possibility-based robust design optimization (PBRDO) framework for the uncertain structural-acoustic system based on the fuzzy set model, which can be constructed by expert opinions. The objective of robust design is to optimize the expectation and variability of system performance with respect to uncertainties simultaneously. In the proposed PBRDO, the entropy of the fuzzy system response is used as the variability index; the weighted sum of the entropy and expectation of the fuzzy response is used as the objective function, and the constraints are established in the possibility context. The computations for the constraints and objective function of PBRDO are a triple-loop and a double-loop nested problem, respectively, whose computational costs are considerable. To improve the computational efficiency, the target performance approach is introduced to transform the calculation of the constraints into a double-loop nested problem. To further improve the computational efficiency, a Chebyshev fuzzy method (CFM) based on the Chebyshev polynomials is proposed to estimate the objective function, and the Chebyshev interval method (CIM) is introduced to estimate the constraints, thereby the optimization problem is transformed into a single-loop one. Numerical results on a shell structural-acoustic system verify the effectiveness and feasibility of the proposed methods.
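A Chebyshev surrogate of the kind the CFM builds on can be sketched in its generic one-dimensional form; the paper's methods extend this to fuzzy and interval uncertain parameters, which are not reproduced here:

```python
import math

def chebyshev_fit(f, n, lo, hi):
    # Interpolate f at n Chebyshev nodes on [lo, hi] and return the
    # Chebyshev-series coefficients: a cheap surrogate that can replace
    # expensive response evaluations inside nested optimization loops.
    nodes = [math.cos(math.pi * (k + 0.5) / n) for k in range(n)]
    fx = [f(0.5 * (hi - lo) * x + 0.5 * (hi + lo)) for x in nodes]
    coeffs = []
    for j in range(n):
        c = 2.0 / n * sum(fx[k] * math.cos(math.pi * j * (k + 0.5) / n)
                          for k in range(n))
        coeffs.append(c)
    coeffs[0] /= 2.0
    return coeffs

def chebyshev_eval(coeffs, x, lo, hi):
    # Evaluate the Chebyshev series at x in [lo, hi] via T_j = cos(j*acos(t)).
    t = (2.0 * x - lo - hi) / (hi - lo)
    return sum(c * math.cos(j * math.acos(t)) for j, c in enumerate(coeffs))
```

For smooth responses the approximation error decays rapidly with the number of nodes, which is what makes replacing the inner-loop response evaluations with such a surrogate worthwhile.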
International Nuclear Information System (INIS)
Guo, Chunxiang; Liu, Xiaoli; Jin, Maozhu; Lv, Zhihan
2016-01-01
Considering the uncertainty of the macroeconomic environment, a robust optimization method is studied for constructing and designing the automotive supply chain network. Based on the definition of a robust solution, a robust optimization model is built for integrated supply chain network design, consisting of a supplier selection problem and a facility location–distribution problem. A tabu search algorithm is proposed for supply chain node configuration, the influence of the level of uncertainty on the robust results is analyzed, and the performance of supply chain network designs obtained from the stochastic programming model and the robust optimization model is compared; on this basis, the rational layout of the supply chain network under macroeconomic fluctuations is determined. Finally, the comparative test results validate that the tabu search algorithm performs outstandingly with respect to convergence and computational time. The results also indicate that the robust optimization model can effectively reduce investment risk when applied to supply chain network design.
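A stripped-down tabu search for facility (node) selection conveys the mechanics of the algorithm; the paper's full model additionally handles supplier selection, distribution flows, and robustness to macroeconomic scenarios, none of which appear in this toy version:

```python
def total_cost(open_facs, fixed, assign):
    # cost = fixed opening costs + each customer served by its
    # cheapest open facility (assign[c][f] = service cost)
    return (sum(fixed[f] for f in open_facs) +
            sum(min(assign[c][f] for f in open_facs)
                for c in range(len(assign))))

def tabu_search(fixed, assign, n_iter=50, tenure=3):
    # Each move toggles one facility open/closed; recently flipped
    # facilities are tabu for `tenure` iterations, which lets the
    # search escape local optima instead of cycling back immediately.
    n = len(fixed)
    current = frozenset(range(n))  # start with everything open
    best, best_cost = current, total_cost(current, fixed, assign)
    tabu = {}
    for it in range(n_iter):
        candidates = []
        for f in range(n):
            if tabu.get(f, -1) >= it:
                continue  # move is tabu
            nxt = current ^ {f}
            if not nxt:
                continue  # keep at least one facility open
            candidates.append((total_cost(nxt, fixed, assign), f, nxt))
        if not candidates:
            break
        cost, f, current = min(candidates)
        tabu[f] = it + tenure
        if cost < best_cost:
            best, best_cost = current, cost
    return sorted(best), best_cost
```

Note that the best non-tabu move is accepted even when it worsens the current solution; only the separately tracked incumbent is monotone.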
Vilas, Carlos; Balsa-Canto, Eva; García, Maria-Sonia G; Banga, Julio R; Alonso, Antonio A
2012-07-02
Systems biology allows the analysis of biological systems behavior under different conditions through in silico experimentation. The possibility of perturbing biological systems in different manners calls for the design of perturbations to achieve particular goals. Examples would include, the design of a chemical stimulation to maximize the amplitude of a given cellular signal or to achieve a desired pattern in pattern formation systems, etc. Such design problems can be mathematically formulated as dynamic optimization problems which are particularly challenging when the system is described by partial differential equations. This work addresses the numerical solution of such dynamic optimization problems for spatially distributed biological systems. The usual nonlinear and large scale nature of the mathematical models related to this class of systems and the presence of constraints on the optimization problems impose a number of difficulties, such as the presence of suboptimal solutions, which call for robust and efficient numerical techniques. Here, the use of a control vector parameterization approach combined with efficient and robust hybrid global optimization methods and a reduced order model methodology is proposed. The capabilities of this strategy are illustrated considering the solution of two challenging problems: bacterial chemotaxis and the FitzHugh-Nagumo model. In the process of chemotaxis the objective was to efficiently compute the time-varying optimal concentration of chemoattractant in one of the spatial boundaries in order to achieve predefined cell distribution profiles. Results are in agreement with those previously published in the literature. The FitzHugh-Nagumo problem is also efficiently solved and it illustrates very well how dynamic optimization may be used to force a system to evolve from an undesired to a desired pattern with a reduced number of actuators. The presented methodology can be used for the efficient dynamic optimization of
Martowicz, Adam; Uhl, Tadeusz
2012-10-01
The paper discusses the applicability of a reliability- and performance-based multi-criteria robust design optimization technique for micro-electromechanical systems, considering their technological uncertainties. Micro-devices are now widely used, especially in the automotive industry, as they combine the mechanical structure and the electronic control circuit on one board. Their frequent use motivates the elaboration of virtual prototyping tools that can be applied in design optimization with the introduction of technological uncertainties and reliability. The authors present a procedure for the optimization of micro-devices, which is based on the theory of reliability-based robust design optimization. This takes into consideration the performance of a micro-device and its reliability assessed by means of uncertainty analysis. The procedure assumes that, for each checked design configuration, the assessment of uncertainty propagation is performed with the meta-modeling technique. The described procedure is illustrated with an example of the optimization carried out for a finite element model of a micro-mirror. The multi-physics approach allowed the introduction of several physical phenomena to correctly model the electrostatic actuation and the squeezing effect present between electrodes. The optimization was preceded by sensitivity analysis to establish the design and uncertain domains. The genetic algorithms fulfilled the defined optimization task effectively. The best discovered individuals are characterized by a minimized value of the multi-criteria objective function, simultaneously satisfying the constraint on material strength. The restriction of the maximum equivalent stresses was introduced with the conditionally formulated objective function with a penalty component. The results were successfully verified with a global uniform search through the input design domain.
ARTICLE Robust Diagnosis of Mechatronics System by Bond Graph Approach
Directory of Open Access Journals (Sweden)
Abderrahmene Sellami
2018-03-01
Full Text Available This article presents the design of a robust diagnostic system based on a bond graph model for a mechatronic system. Mechatronics is the synergistic and systemic combination of mechanics, electronics and computer science. The design of a mechatronic system modeled by the bond graph becomes easier and more general. The bond graph is a unified graphical language for all areas of engineering sciences, confirmed as a structured approach to the modeling and simulation of multidisciplinary systems.
International Nuclear Information System (INIS)
Gao, Hao
2016-01-01
For the treatment planning during intensity modulated radiation therapy (IMRT) or volumetric modulated arc therapy (VMAT), beam fluence maps can be first optimized via fluence map optimization (FMO) under the given dose prescriptions and constraints to conformally deliver the radiation dose to the targets while sparing the organs-at-risk, and then segmented into deliverable MLC apertures via leaf or arc sequencing algorithms. This work is to develop an efficient algorithm for FMO based on alternating direction method of multipliers (ADMM). Here we consider FMO with the least-square cost function and non-negative fluence constraints, and its solution algorithm is based on ADMM, which is efficient and simple-to-implement. In addition, an empirical method for optimizing the ADMM parameter is developed to improve the robustness of the ADMM algorithm. The ADMM based FMO solver was benchmarked with the quadratic programming method based on the interior-point (IP) method using the CORT dataset. The comparison results suggested the ADMM solver had a similar plan quality with slightly smaller total objective function value than IP. A simple-to-implement ADMM based FMO solver with empirical parameter optimization is proposed for IMRT or VMAT. (paper)
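The structure of an ADMM splitting for a least-squares objective with nonnegativity constraints — the same problem class as the FMO above, though this toy version omits the dose-prescription weighting, the problem scale, and the paper's parameter-tuning heuristic — may be sketched as follows, for minimizing (1/2)||Ax − b||² subject to x ≥ 0:

```python
def solve(M, v):
    # Gauss-Jordan elimination with partial pivoting (small dense systems)
    n = len(v)
    aug = [row[:] + [v[i]] for i, row in enumerate(M)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(aug[r][col]))
        aug[col], aug[piv] = aug[piv], aug[col]
        for r in range(n):
            if r != col:
                fac = aug[r][col] / aug[col][col]
                aug[r] = [a - fac * b for a, b in zip(aug[r], aug[col])]
    return [aug[i][n] / aug[i][i] for i in range(n)]

def admm_nnls(A, b, rho=1.0, n_iter=300):
    # ADMM for min (1/2)||Ax - b||^2 s.t. x >= 0 (scaled form):
    #   x-step: (A^T A + rho I) x = A^T b + rho (z - u)
    #   z-step: projection of x + u onto the nonnegative orthant
    #   u-step: dual update u += x - z
    m, n = len(A), len(A[0])
    AtA_rho = [[sum(A[k][i] * A[k][j] for k in range(m))
                + (rho if i == j else 0.0) for j in range(n)] for i in range(n)]
    Atb = [sum(A[k][i] * b[k] for k in range(m)) for i in range(n)]
    z = [0.0] * n
    u = [0.0] * n
    for _ in range(n_iter):
        rhs = [Atb[i] + rho * (z[i] - u[i]) for i in range(n)]
        x = solve(AtA_rho, rhs)
        z = [max(0.0, xi + ui) for xi, ui in zip(x, u)]
        u = [ui + xi - zi for ui, xi, zi in zip(u, x, z)]
    return z
```

Each iteration needs only a linear solve with a fixed matrix and a cheap projection, which is why ADMM scales to fluence maps with many beamlets; the rho parameter controls the convergence speed, motivating the empirical tuning discussed in the abstract.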
International Nuclear Information System (INIS)
Barragán, A. M.; Differding, S.; Lee, J. A.; Sterpin, E.; Janssens, G.
2015-01-01
Purpose: To prove the ability of protons to reproduce a dose gradient that matches a dose painting by numbers (DPBN) prescription in the presence of setup and range errors, by using contours and structure-based optimization in a commercial treatment planning system. Methods: For two patients with head and neck cancer, voxel-by-voxel prescription to the target volume (GTV-PET) was calculated from 18FDG-PET images and approximated with several discrete prescription subcontours. Treatments were planned with proton pencil beam scanning. In order to determine the optimal plan parameters to approach the DPBN prescription, the effects of the scanning pattern, number of fields, number of subcontours, and use of range shifter were separately tested on each patient. Different constant scanning grids (i.e., spot spacing = Δx = Δy = 3.5, 4, and 5 mm) and uniform energy layer separation [4 and 5 mm WED (water equivalent distance)] were analyzed versus a dynamic and automatic selection of the spots grid. The number of subcontours was increased from 3 to 11 while the number of beams was set to 3, 5, or 7. Conventional PTV-based and robust clinical target volume (CTV)-based optimization strategies were considered and their robustness against range and setup errors assessed. Because of the nonuniform prescription, ensuring robustness for coverage of the GTV-PET inevitably leads to overdosing, which was compared for both optimization schemes. Results: The optimal number of subcontours ranged from 5 to 7 for both patients. All considered scanning grids achieved accurate dose painting (1% average difference between the prescribed and planned doses). PTV-based plans led to nonrobust target coverage while robust-optimized plans improved it considerably (differences between the worst-case CTV dose and the clinical constraint were up to 3 Gy for PTV-based plans and did not exceed 1 Gy for robust CTV-based plans). Also, only 15% of the points in the GTV-PET (worst case) were above 5% of DPBN
International Nuclear Information System (INIS)
Gimelli, A.; Muccillo, M.; Sannino, R.
2017-01-01
Highlights: • A specific methodology has been set up based on genetic optimization algorithm. • Results highlight a tradeoff between primary energy savings (TPES) and simple payback (SPB). • Optimized plant configurations show TPES exceeding 18% and SPB of approximately three years. • The study aims to identify the most stable plant solutions through the robust design optimization. • The research shows how a deterministic definition of the decision variables could lead to an overestimation of the results. - Abstract: The widespread adoption of combined heat and power generation is widely recognized as a strategic goal to achieve significant primary energy savings and lower carbon dioxide emissions. In this context, the purpose of this research is to evaluate the potential of cogeneration based on reciprocating gas engines for some Italian hospital buildings. Comparative analyses have been conducted based on the load profiles of two specific hospital facilities and through the study of the cogeneration system-user interaction. To this end, a specific methodology has been set up by coupling a specifically developed calculation algorithm to a genetic optimization algorithm, and a multi-objective approach has been adopted. The results from the optimization problem highlight a clear trade-off between total primary energy savings (TPES) and simple payback period (SPB). Optimized plant configurations and management strategies show TPES exceeding 18% for the reference hospital facilities and multi–gas engine solutions along with a minimum SPB of approximately three years, thereby justifying the European regulation promoting cogeneration. However, designing a CHP plant for a specific energetic, legislative or market scenario does not guarantee good performance when these scenarios change. For this reason, the proposed methodology has been enhanced in order to focus on some innovative aspects. In particular, this study proposes an uncommon and effective approach
DEFF Research Database (Denmark)
Ding, Tao; Li, Cheng; Yang, Yongheng
2017-01-01
Optimally dispatching Photovoltaic (PV) inverters is an efficient way to avoid overvoltage in active distribution networks, which may occur in the case of PV generation surplus load demand. Typically, the dispatching optimization objective is to identify critical PV inverters that have the most...... nature of solar PV energy may affect the selection of the critical PV inverters and also the final optimal objective value. In order to address this issue, a two-stage robust optimization model is proposed in this paper to achieve a robust optimal solution to the PV inverter dispatch, which can hedge...... against any possible realization within the uncertain PV outputs. In addition, the conic relaxation-based branch flow formulation and second-order cone programming based column-and-constraint generation algorithm are employed to deal with the proposed robust optimization model. Case studies on a 33-bus...
DEFF Research Database (Denmark)
2017-01-01
‘Robust – Reflections on Resilient Architecture’ is a scientific publication following the conference of the same name in November of 2017. Researchers and PhD Fellows associated with the Masters programme Cultural Heritage, Transformation and Restoration (Transformation), at The Royal Danish
Optimal design of robust piezoelectric microgrippers undergoing large displacements
DEFF Research Database (Denmark)
Ruiz, D.; Sigmund, Ole
2018-01-01
Topology optimization combined with optimal design of electrodes is used to design piezoelectric microgrippers. Fabrication at micro-scale presents an important challenge: due to non-symmetrical lamination of the structures, out-of-plane bending spoils the behaviour of the grippers. Suppression...
Assuring robustness to noise in optimal quantum control experiments
International Nuclear Information System (INIS)
Bartelt, A.F.; Roth, M.; Mehendale, M.; Rabitz, H.
2005-01-01
Closed-loop optimal quantum control experiments operate in the inherent presence of laser noise. In many applications, attaining high quality results [i.e., a high signal-to-noise (S/N) ratio for the optimized objective] is as important as producing a high control yield. Enhancement of the S/N ratio will typically be in competition with the mean signal, however, the latter competition can be balanced by biasing the optimization experiments towards higher mean yields while retaining a good S/N ratio. Other strategies can also direct the optimization to reduce the standard deviation of the statistical signal distribution. The ability to enhance the S/N ratio through an optimized choice of the control is demonstrated for two condensed phase model systems: second harmonic generation in a nonlinear optical crystal and stimulated emission pumping in a dye solution
Robust optimization for load scheduling of a smart home with photovoltaic system
International Nuclear Information System (INIS)
Wang, Chengshan; Zhou, Yue; Jiao, Bingqi; Wang, Yamin; Liu, Wenjian; Wang, Dan
2015-01-01
Highlights: • Robust household load scheduling is presented for smart homes with a PV system. • A robust counterpart is formulated to deal with PV output uncertainty. • The robust counterpart is finally transformed into a quadratic programming problem. • Load schedules with different robustness can be made by the proposed method. • Feed-in tariff and PV output would affect the significance of the proposed method. - Abstract: In this paper, a robust approach is developed to tackle the uncertainty of PV power output for load scheduling of smart homes integrated with a household PV system. Specifically, a robust formulation is proposed and further transformed into an equivalent quadratic programming problem. Day-ahead load schedules with different robustness can be generated by solving the proposed robust formulation with different predefined parameters. The validity and advantages of the proposed approach have been verified by simulation results. The effects of feed-in tariff and PV output have also been evaluated.
Approximating the Pareto set of multiobjective linear programs via robust optimization
Gorissen, B.L.; den Hertog, D.
2012-01-01
We consider problems with multiple linear objectives and linear constraints and use adjustable robust optimization and polynomial optimization as tools to approximate the Pareto set with polynomials of arbitrarily large degree. The main difference with existing techniques is that we optimize a
DEFF Research Database (Denmark)
Van Daele, Timothy; Gernaey, Krist V.; Ringborg, Rolf Hoffmeyer
2017-01-01
The aim of model calibration is to estimate unique parameter values from available experimental data, here applied to a biocatalytic process. The traditional approach of first gathering data followed by performing a model calibration is inefficient, since the information gathered during...... experimentation is not actively used to optimise the experimental design. By applying an iterative robust model-based optimal experimental design, the limited amount of data collected is used to design additional informative experiments. The algorithm is used here to calibrate the initial reaction rate of an ω......-transaminase catalysed reaction in a more accurate way. The parameter confidence region estimated from the Fisher Information Matrix is compared with the likelihood confidence region, which is a more accurate, but also a computationally more expensive method. As a result, an important deviation between both approaches...
APPLICATION OF GENETIC ALGORITHMS FOR ROBUST PARAMETER OPTIMIZATION
Directory of Open Access Journals (Sweden)
N. Belavendram
2010-12-01
Full Text Available Parameter optimization can be achieved by many methods such as Monte Carlo, full, and fractional factorial designs. Genetic algorithms (GA) are fairly recent in this respect but afford a novel method of parameter optimization. In a GA, there is an initial pool of individuals, each with its own specific phenotypic trait expressed as a ‘genetic chromosome’. Different genes enable individuals with different fitness levels to reproduce according to natural reproductive gene theory. This reproduction is established in terms of selection, crossover and mutation of reproducing genes. The resulting child generation of individuals has a better fitness level, akin to natural selection, namely evolution. Populations evolve towards the fittest individuals. Such a mechanism has a parallel application in parameter optimization. Factors in a parameter design can be expressed as a genetic analogue in a pool of sub-optimal random solutions. Allowing this pool of sub-optimal solutions to evolve over several generations produces fitter generations converging to a pre-defined engineering optimum. In this paper, a genetic algorithm is used to study a seven-factor non-linear equation for a Wheatstone bridge as the function to be optimized. A comparison of the full factorial design against the GA method shows that the GA method is about 1200 times faster in finding a comparable solution.
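The selection, crossover and mutation loop described in this abstract can be sketched in a few lines. The sketch below is illustrative only: the objective is a toy stand-in (the paper's seven-factor Wheatstone bridge equation is not reproduced here), and all population parameters are assumed values.

```python
import random

def fitness(x):
    # Toy objective standing in for the paper's seven-factor equation:
    # maximized when every factor equals 0.5.
    return -sum((xi - 0.5) ** 2 for xi in x)

def evolve(pop_size=40, n_genes=7, generations=60,
           crossover_rate=0.9, mutation_rate=0.1, seed=1):
    rng = random.Random(seed)
    pop = [[rng.random() for _ in range(n_genes)] for _ in range(pop_size)]
    for _ in range(generations):
        def select():
            # Tournament selection: the fitter of two random individuals wins.
            a, b = rng.sample(pop, 2)
            return a if fitness(a) >= fitness(b) else b
        children = []
        while len(children) < pop_size:
            p1, p2 = select(), select()
            if rng.random() < crossover_rate:
                cut = rng.randrange(1, n_genes)   # single-point crossover
                child = p1[:cut] + p2[cut:]
            else:
                child = p1[:]
            for i in range(n_genes):              # mutation resets a gene
                if rng.random() < mutation_rate:
                    child[i] = rng.random()
            children.append(child)
        pop = children
    return max(pop, key=fitness)

best = evolve()
```

Selection pressure drives the population towards the optimum without enumerating the factor space, which is the source of the speed-up over a full factorial design reported in the abstract.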
Hybrid Robust Optimization for the Design of a Smartphone Metal Frame Antenna
Directory of Open Access Journals (Sweden)
Sungwoo Lee
2018-01-01
Full Text Available Hybrid robust optimization that combines a genetical swarm optimization (GSO) scheme with an orthogonal array (OA) is proposed in this paper to design an antenna robust to the tolerances arising during its fabrication process. An inverted-F antenna with a metal frame serves as an example to explain the procedure of the proposed method. GSO is adapted to determine the design variables of the antenna, which operates on the GSM850 band (824–894 MHz). The robustness of the antenna is evaluated through a noise test using the OA. The robustness of the optimized antenna is improved by approximately 61.3% relative to that of a conventional antenna. Conventional and optimized antennas are fabricated and measured to validate the experimental results.
Robust stabilization of nonlinear systems: The LMI approach
Directory of Open Access Journals (Sweden)
Šiljak, D. D.
2000-01-01
Full Text Available This paper presents a new approach to robust quadratic stabilization of nonlinear systems within the framework of Linear Matrix Inequalities (LMI). The systems are composed of a linear constant part perturbed by an additive nonlinearity which depends discontinuously on both time and state. The only information about the nonlinearity is that it satisfies a quadratic constraint. Our major objective is to show how linear constant feedback laws can be formulated to stabilize this type of system and, at the same time, maximize the bounds on the nonlinearity which the system can tolerate without going unstable. We broaden the new setting to include the design of decentralized control laws for robust stabilization of interconnected systems. Again, LMI methods are used to maximize the class of uncertain interconnections which leave the overall system connectively stable. It is useful to learn that the proposed LMI formulation “recognizes” the matching conditions by returning a feedback gain matrix for any prescribed bound on the interconnection terms. More importantly, the new formulation provides a suitable setting for robust stabilization of nonlinear systems where the nonlinear perturbations satisfy the generalized matching conditions.
Robust Optimization-Based Generation Self-Scheduling under Uncertain Price
Directory of Open Access Journals (Sweden)
Xiao Luo
2011-01-01
Full Text Available This paper considers generation self-scheduling in electricity markets under uncertain price. Based on the robust optimization (RO) methodology, a new self-scheduling model, which has a complicated max-min optimization structure, is set up. By using optimal dual theory, the proposed model is reformulated as ordinary quadratic and quadratic cone programming problems in the cases of box and ellipsoidal uncertainty, respectively. The IEEE 30-bus system is used to test the new model. Comparisons with other methods are made, and the sensitivity with respect to the uncertainty set is analyzed. Compared with existing uncertain self-scheduling approaches, the new method has two distinguishing characteristics. First, it does not need a prediction of the distribution of random variables; it requires only an estimated value and the uncertainty set of the power price. Second, the robust counterpart of the self-scheduling problem is a simple quadratic or quadratic cone program. This means the reformulated problem can be solved by many ordinary optimization algorithms.
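For box uncertainty, the max-min structure mentioned in the abstract collapses to an ordinary deterministic problem at the worst-case prices. The sketch below illustrates this for a deliberately simplified seller with decoupled units; all numbers are hypothetical and the model is far simpler than the paper's.

```python
# A hypothetical seller chooses output x_i in [0, cap_i] to maximize
# worst-case profit, with price p_i anywhere in [p_nom_i - d_i, p_nom_i + d_i].
p_nom = [30.0, 25.0, 40.0]   # nominal prices ($/MWh), illustrative
d     = [5.0, 10.0, 8.0]     # maximum price deviations
cap   = [100.0, 80.0, 50.0]  # unit capacities (MWh)
cost  = [28.0, 22.0, 27.0]   # marginal costs ($/MWh)

# For a seller, the adversarial price is the lower bound p_nom - d, so the
# box-uncertainty counterpart is a deterministic problem at those prices.
# With decoupled units, the optimum is bang-bang on the worst-case margin.
x = [c if (p - dev) - mc > 0 else 0.0
     for p, dev, mc, c in zip(p_nom, d, cost, cap)]
worst_profit = sum(((p - dev) - mc) * xi
                   for p, dev, mc, xi in zip(p_nom, d, cost, x))
```

Only the third unit keeps a positive margin at its worst-case price, so the robust schedule shuts down the other two even though they are profitable at nominal prices; this conservatism is exactly what the uncertainty-set size controls.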
APPLYING ROBUST RANKING METHOD IN TWO PHASE FUZZY OPTIMIZATION LINEAR PROGRAMMING PROBLEMS (FOLPP)
Directory of Open Access Journals (Sweden)
Monalisha Pattnaik
2014-12-01
Full Text Available Background: This paper explores solutions to fuzzy optimization linear programming problems (FOLPP) in which some parameters are fuzzy numbers. In practice, there are many problems in which all decision parameters are fuzzy numbers, and such problems are usually solved by either probabilistic programming or multi-objective programming methods. Methods: In this paper, using the concept of comparison of fuzzy numbers, a very effective method is introduced for solving these problems. This paper extends linear programming-based problems to a fuzzy environment. Under the problem assumptions, the optimal solution can still be theoretically obtained using the two-phase simplex method in the fuzzy environment. The fuzzy decision variables can be initially generated and then solved and improved sequentially using the fuzzy decision approach with a robust ranking technique. Results and conclusions: The model is illustrated with an application, and a post-optimal analysis approach is presented. The proposed procedure was programmed with MATLAB (R2009a) to plot the four-dimensional slice diagram for the application. Finally, a numerical example is presented to illustrate the effectiveness of the theoretical results and to gain additional managerial insights.
Benefits and Challenges when Performing Robust Topology Optimization for Interior Acoustic Problems
DEFF Research Database (Denmark)
Christiansen, Rasmus Ellebæk; Jensen, Jakob Søndergaard; Lazarov, Boyan Stefanov
The objective of this work is to present benefits and challenges of using robust topology optimization techniques for minimizing the sound pressure in interior acoustic problems. The focus is on creating designs which maintain high performance under uniform spatial variations. This work takes offset...... in previous work considering topology optimization for interior acoustic problems, [1]. However, in the previous work the robustness of the designs was not considered....
Benefits and Challenges when Performing Robust Topology Optimization for Interior Acoustic Problems
Christiansen, Rasmus Ellebæk; Jensen, Jakob Søndergaard; Lazarov, Boyan Stefanov; Sigmund, Ole
2014-01-01
The objective of this work is to present benefits and challenges of using robust topology optimization techniques for minimizing the sound pressure in interior acoustic problems. The focus is on creating designs which maintain high performance under uniform spatial variations. This work takes offset in previous work considering topology optimization for interior acoustic problems, [1]. However, in the previous work, the robustness of the designs was not considered.
Robust Optimization for Time-Cost Tradeoff Problem in Construction Projects
Li, Ming; Wu, Guangdong
2014-01-01
Construction projects are generally subject to uncertainty, which influences the realization of time-cost tradeoff in project management. This paper addresses a time-cost tradeoff problem under uncertainty, in which activities in projects can be executed in different construction modes corresponding to specified time and cost with interval uncertainty. Based on multiobjective robust optimization method, a robust optimization model for time-cost tradeoff problem is developed. In order to illus...
Nickel-Cadmium Battery Operation Management Optimization Using Robust Design
Blosiu, Julian O.; Deligiannis, Frank; DiStefano, Salvador
1996-01-01
In recent years, following several spacecraft battery anomalies, it was determined that managing the operational factors of NASA flight NiCd rechargeable batteries was very important in order to maintain nominal space flight battery performance. The optimization of existing flight battery operational performance represented a new application of Taguchi Methods.
Hybrid Robust Multi-Objective Evolutionary Optimization Algorithm
2009-03-10
Robust optimization in simulation : Taguchi and Krige combined
Dellino, G.; Kleijnen, Jack P.C.; Meloni, C.
2012-01-01
Optimization of simulated systems is the goal of many methods, but most methods assume known environments. We, however, develop a “robust” methodology that accounts for uncertain environments. Our methodology uses Taguchi's view of the uncertain world but replaces his statistical techniques by
Robust Bayesian decision theory applied to optimal dosage.
Abraham, Christophe; Daurès, Jean-Pierre
2004-04-15
We give a model for constructing a utility function u(θ, d) in a dose prescription problem, where θ and d denote respectively the patient's state of health and the dose. The construction of u is based on the conditional probabilities of several variables. These probabilities are described by logistic models. Obviously, u is only an approximation of the true utility function, which is why we investigate the sensitivity of the final decision with respect to the utility function. We construct a class of utility functions from u and approximate the set of all Bayes actions associated with that class. Then, we measure the sensitivity as the greatest difference between the expected utilities of two Bayes actions. Finally, we apply these results to weighing up a chemotherapy treatment of lung cancer. This application emphasizes the importance of measuring robustness through the utility of decisions rather than the decisions themselves. Copyright 2004 John Wiley & Sons, Ltd.
Directory of Open Access Journals (Sweden)
Shujuan Wang
2015-01-01
Full Text Available This paper investigates structural design optimization covering both reliability and robustness under uncertainty in design variables. The main objective is to improve the efficiency of the optimization process. To address this problem, a hybrid reliability-based robust design optimization (RRDO) method is proposed. Prior to the design optimization, Sobol sensitivity analysis is used for selecting key design variables and providing the response variance as well, resulting in significantly reduced computational complexity. A single-loop algorithm is employed to guarantee structural reliability, allowing a fast optimization process. In the case of robust design, a weighting factor balances the response performance and variance with respect to the uncertainty in the design variables. The main contribution of this paper is that the proposed method applies the RRDO strategy using global approximation and Sobol sensitivity analysis, leading to reduced computational cost. A structural example is given to illustrate the performance of the proposed method.
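The weighting factor mentioned in this abstract trades mean performance against spread. A minimal sketch of such a weighted robust objective, assuming a response to be minimized (the weight and sample values are illustrative, not from the paper):

```python
# Weighted robust-design objective: w * mean + (1 - w) * standard deviation.
# Smaller is better when the response itself is to be minimized.
def robust_objective(responses, w=0.7):
    n = len(responses)
    mean = sum(responses) / n
    var = sum((r - mean) ** 2 for r in responses) / n
    return w * mean + (1.0 - w) * var ** 0.5

# Two candidate designs evaluated at sampled realizations of the uncertain
# design variables (hypothetical numbers): equal means, different spreads.
design_a = [10.0, 10.2, 9.8]   # low spread
design_b = [9.5, 11.0, 9.5]    # high spread
preferred = min((design_a, design_b), key=robust_objective)
```

With equal means, the variance term breaks the tie in favor of the low-spread design, which is the essence of robust design: a slightly worse nominal response can be preferable if it is less sensitive to uncertainty.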
Robust regularized least-squares beamforming approach to signal estimation
Suliman, Mohamed Abdalla Elhag
2017-05-12
In this paper, we address the problem of robust adaptive beamforming of signals received by a linear array. The challenge associated with the beamforming problem is twofold. First, the process requires the inversion of the usually ill-conditioned covariance matrix of the received signals. Second, the steering vector pertaining to the direction of arrival of the signal of interest is not known precisely. To tackle these two challenges, the standard Capon beamformer is manipulated into a form where the beamformer output is obtained as a scaled version of the inner product of two vectors. The two vectors are linearly related to the steering vector and the received signal snapshot, respectively; the linear operator, in both cases, is the square root of the covariance matrix. A regularized least-squares (RLS) approach is proposed to estimate these two vectors and to provide robustness without exploiting prior information. Simulation results show that the RLS beamformer using the proposed regularization algorithm outperforms state-of-the-art beamforming algorithms, as well as other RLS beamformers using standard regularization approaches.
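The core computation behind any regularized least-squares estimate is the closed form (AᵀA + λI)x = Aᵀb, which tames an ill-conditioned normal matrix. The sketch below works this out for a tiny two-column system (the data and λ are illustrative; the paper's regularization parameter choice is not reproduced).

```python
# Regularized least-squares: x* = argmin ||A x - b||^2 + lam * ||x||^2,
# solved via the normal equations (A^T A + lam I) x = A^T b.
A = [[1.0, 0.0],
     [1.0, 1.0],
     [1.0, 2.0]]
b = [1.0, 2.0, 2.0]
lam = 0.1   # illustrative regularization weight

# Form the 2x2 regularized normal equations.
ata = [[sum(A[k][i] * A[k][j] for k in range(3)) for j in range(2)]
       for i in range(2)]
atb = [sum(A[k][i] * b[k] for k in range(3)) for i in range(2)]
ata[0][0] += lam
ata[1][1] += lam

# Solve the 2x2 system by Cramer's rule.
det = ata[0][0] * ata[1][1] - ata[0][1] * ata[1][0]
x = [(atb[0] * ata[1][1] - ata[0][1] * atb[1]) / det,
     (ata[0][0] * atb[1] - atb[0] * ata[1][0]) / det]
```

The λI term guarantees the normal matrix is invertible even when AᵀA is singular or ill-conditioned, and shrinks the solution norm relative to plain least squares, which is the source of the robustness the abstract refers to.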
Newsom, J. R.; Mukhopadhyay, V.
1983-01-01
A method for designing robust feedback controllers for multiloop systems is presented. Robustness is characterized in terms of the minimum singular value of the system return difference matrix at the plant input. Analytical gradients of the singular values with respect to design variables in the controller are derived. A cumulative measure of the singular values and their gradients with respect to the design variables is used with a numerical optimization technique to increase the system's robustness. Both unconstrained and constrained optimization techniques are evaluated. Numerical results are presented for a two-output drone flight control system.
Precup, Radu-Emil; David, Radu-Codrut; Petriu, Emil M; Radac, Mircea-Bogdan; Preitl, Stefan
2014-11-01
This paper suggests a new generation of optimal PI controllers for a class of servo systems characterized by saturation and dead zone static nonlinearities and second-order models with an integral component. The objective functions are expressed as the integral of time multiplied by absolute error plus the weighted sum of the integrals of output sensitivity functions of the state sensitivity models with respect to two process parametric variations. The PI controller tuning conditions applied to a simplified linear process model involve a single design parameter specific to the extended symmetrical optimum (ESO) method which offers the desired tradeoff to several control system performance indices. An original back-calculation and tracking anti-windup scheme is proposed in order to prevent the integrator wind-up and to compensate for the dead zone nonlinearity of the process. The minimization of the objective functions is carried out in the framework of optimization problems with inequality constraints which guarantee the robust stability with respect to the process parametric variations and the controller robustness. An adaptive gravitational search algorithm (GSA) solves the optimization problems focused on the optimal tuning of the design parameter specific to the ESO method and of the anti-windup tracking gain. A tuning method for PI controllers is proposed as an efficient approach to the design of resilient control systems. The tuning method and the PI controllers are experimentally validated by the adaptive GSA-based tuning of PI controllers for the angular position control of a laboratory servo system.
Hybrid robust predictive optimization method of power system dispatch
Chandra, Ramu Sharat [Niskayuna, NY; Liu, Yan [Ballston Lake, NY; Bose, Sumit [Niskayuna, NY; de Bedout, Juan Manuel [West Glenville, NY
2011-08-02
A method of power system dispatch control solves power system dispatch problems by integrating a larger variety of generation, load and storage assets, including without limitation, combined heat and power (CHP) units, renewable generation with forecasting, controllable loads, electric, thermal and water energy storage. The method employs a predictive algorithm to dynamically schedule different assets in order to achieve global optimization and maintain the system normal operation.
Robust Optimization for Time-Cost Tradeoff Problem in Construction Projects
Directory of Open Access Journals (Sweden)
Ming Li
2014-01-01
Full Text Available Construction projects are generally subject to uncertainty, which influences the realization of time-cost tradeoffs in project management. This paper addresses a time-cost tradeoff problem under uncertainty, in which activities in projects can be executed in different construction modes corresponding to specified time and cost with interval uncertainty. Based on a multiobjective robust optimization method, a robust optimization model for the time-cost tradeoff problem is developed. To illustrate the robust model, the nondominated sorting genetic algorithm-II (NSGA-II) is modified to solve a project example. The results show that, by adjusting the time and cost robustness coefficients, robust Pareto sets for the time-cost tradeoff can be obtained according to different acceptable risk levels, from which the decision maker can choose the preferred construction alternative.
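The Pareto sets the abstract refers to rest on the notion of dominance between (time, cost) alternatives; extracting the first non-dominated front is the core ranking step inside NSGA-II. A minimal sketch with hypothetical alternatives:

```python
# Each alternative is a (time, cost) pair; smaller is better in both.
alternatives = [(10, 120), (12, 100), (14, 90), (11, 130), (13, 95)]

def dominates(a, b):
    """a dominates b if a is no worse in both objectives and differs from b."""
    return a[0] <= b[0] and a[1] <= b[1] and a != b

# First non-dominated front: alternatives no other alternative dominates.
# This is the core of NSGA-II's ranking step.
front = [a for a in alternatives
         if not any(dominates(b, a) for b in alternatives)]
```

Here (11, 130) is dominated by (10, 120), so it drops out; the remaining four alternatives form the time-cost tradeoff curve from which a decision maker picks according to an acceptable risk level.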
A robust neural network-based approach for microseismic event detection
Akram, Jubran
2017-08-17
We present an artificial neural network-based approach for robust event detection from low-S/N waveforms. We use a feed-forward network with a single hidden layer that is tuned on a training dataset and later applied to the entire example dataset for event detection. The input features include the average of absolute amplitudes, variance, energy ratio and polarization rectilinearity. These features are calculated in a moving window of the same length over the entire waveform. The output is a user-specified relative probability curve, which provides a robust way of distinguishing between weak and strong events. An optimal network is selected by studying the weight-based saliency and the effect of the number of neurons on the predicted results. Using synthetic data examples, we demonstrate that this approach is effective in detecting weaker events and reduces the number of false positives.
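The moving-window feature extraction the abstract describes can be sketched directly (the window length, the toy trace, and the choice of a whole-trace energy ratio are illustrative assumptions; polarization rectilinearity needs multicomponent data and is omitted):

```python
# Moving-window features over a waveform: mean absolute amplitude,
# variance, and window-to-total energy ratio.
def window_features(signal, win=4):
    total = sum(s * s for s in signal) or 1.0
    feats = []
    for i in range(len(signal) - win + 1):
        w = signal[i:i + win]
        mean_abs = sum(abs(s) for s in w) / win
        mu = sum(w) / win
        var = sum((s - mu) ** 2 for s in w) / win
        energy = sum(s * s for s in w)
        feats.append((mean_abs, var, energy / total))
    return feats

# A weak "event" buried in an otherwise quiet trace: the energy-ratio
# feature peaks at the windows that cover the event samples.
trace = [0.0, 0.1, -0.1, 0.0, 2.0, -1.5, 1.0, 0.1, 0.0]
feats = window_features(trace)
peak = max(range(len(feats)), key=lambda i: feats[i][2])
```

Feature vectors like these, rather than raw samples, are what the single-hidden-layer network consumes; the window statistics make the detector far less sensitive to sample-level noise.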
Optimal interdependence enhances the dynamical robustness of complex systems
Singh, Rishu Kumar; Sinha, Sitabhra
2017-08-01
Although interdependent systems have usually been associated with increased fragility, we show that strengthening the interdependence between dynamical processes on different networks can make them more likely to survive over long times. By coupling the dynamics of networks that in isolation exhibit catastrophic collapse with extinction of nodal activity, we demonstrate system-wide persistence of activity for an optimal range of interdependence between the networks. This is related to the appearance of attractors of the global dynamics comprising disjoint sets ("islands") of stable activity.
Robust Estimation of Diffusion-Optimized Ensembles for Enhanced Sampling
DEFF Research Database (Denmark)
Tian, Pengfei; Jónsson, Sigurdur Æ.; Ferkinghoff-Borg, Jesper
2014-01-01
The multicanonical, or flat-histogram, method is a common technique to improve the sampling efficiency of molecular simulations. The idea is that free-energy barriers in a simulation can be removed by simulating from a distribution where all values of a reaction coordinate are equally likely......, and subsequently reweight the obtained statistics to recover the Boltzmann distribution at the temperature of interest. While this method has been successful in practice, the choice of a flat distribution is not necessarily optimal. Recently, it was proposed that additional performance gains could be obtained...
Jalali Naini, Seyed Gholamreza; Nouralizadeh, Hamid Reza
2012-01-01
We use a two-stage data envelopment analysis (DEA) model to analyze the effects of entrance deregulation on efficiency in the Iranian insurance market. In the first stage, we propose a robust optimization approach to overcome the sensitivity of DEA results to any uncertainty in the output parameters. Hence, the efficiency of each ongoing insurer is estimated using our proposed robust DEA model. The insurers are then ranked based on their relative efficiency scores for an eight-year...
Robust optimization methods for chance constrained, simulation-based, and bilevel problems
Yanikoglu, I.
2014-01-01
The objective of robust optimization is to find solutions that are immune to the uncertainty of the parameters in a mathematical optimization problem. It requires that the constraints of a given problem should be satisfied for all realizations of the uncertain parameters in a so-called uncertainty
Robust Optimization of Thermal Aspects of Friction Stir Welding Using Manifold Mapping Techniques
DEFF Research Database (Denmark)
Larsen, Anders Astrup; Lahaye, Domenico; Schmidt, Henrik Nikolaj Blicher
2008-01-01
The aim of this paper is to optimize a friction stir welding process taking robustness into account. The optimization problems are formulated with the goal of obtaining desired mean responses while reducing the variance of the response. We restrict ourselves to a thermal model of the process...
Optimization of controllability and robustness of complex networks by edge directionality
Liang, Man; Jin, Suoqin; Wang, Dingjie; Zou, Xiufen
2016-09-01
Recently, controllability of complex networks has attracted enormous attention in various fields of science and engineering. How to optimize structural controllability has also become a significant issue. Previous studies have shown that an appropriate directional assignment can improve structural controllability; however, the evolution of the structural controllability of complex networks under attacks and cascading has always been ignored. To address this problem, this study proposes a new edge orientation method (NEOM) based on residual degree that changes the link direction while conserving topology and directionality. By comparing the results with those of previous methods in two random graph models and several realistic networks, our proposed approach is demonstrated to be an effective and competitive method for improving the structural controllability of complex networks. Moreover, numerical simulations show that our method is near-optimal in optimizing structural controllability. Strikingly, compared to the original network, our method maintains the structural controllability of the network under attacks and cascading, indicating that the NEOM can also enhance the robustness of controllability of networks. These results alter the view of the nature of controllability in complex networks, change the understanding of structural controllability and affect the design of network models to control such networks.
Recurrent, Robust and Scalable Patterns Underlie Human Approach and Avoidance
Kennedy, David N.; Lehár, Joseph; Lee, Myung Joo; Blood, Anne J.; Lee, Sang; Perlis, Roy H.; Smoller, Jordan W.; Morris, Robert; Fava, Maurizio
2010-01-01
Background Approach and avoidance behavior provide a means for assessing the rewarding or aversive value of stimuli, and can be quantified by a keypress procedure whereby subjects work to increase (approach), decrease (avoid), or do nothing about time of exposure to a rewarding/aversive stimulus. To investigate whether approach/avoidance behavior might be governed by quantitative principles that meet engineering criteria for lawfulness and that encode known features of reward/aversion function, we evaluated whether keypress responses toward pictures with potential motivational value produced any regular patterns, such as a trade-off between approach and avoidance, or recurrent lawful patterns as observed with prospect theory. Methodology/Principal Findings Three sets of experiments employed this task with beautiful face images, a standardized set of affective photographs, and pictures of food during controlled states of hunger and satiety. An iterative modeling approach to data identified multiple law-like patterns, based on variables grounded in the individual. These patterns were consistent across stimulus types, robust to noise, describable by a simple power law, and scalable between individuals and groups. Patterns included: (i) a preference trade-off counterbalancing approach and avoidance, (ii) a value function linking preference intensity to uncertainty about preference, and (iii) a saturation function linking preference intensity to its standard deviation, thereby setting limits to both. Conclusions/Significance These law-like patterns were compatible with critical features of prospect theory, the matching law, and alliesthesia. Furthermore, they appeared consistent with both mean-variance and expected utility approaches to the assessment of risk. Ordering of responses across categories of stimuli demonstrated three properties thought to be relevant for preference-based choice, suggesting these patterns might be grouped together as a relative preference
Recurrent, robust and scalable patterns underlie human approach and avoidance.
Directory of Open Access Journals (Sweden)
Byoung Woo Kim
2010-05-01
Full Text Available Approach and avoidance behavior provide a means for assessing the rewarding or aversive value of stimuli, and can be quantified by a keypress procedure whereby subjects work to increase (approach), decrease (avoid), or do nothing about time of exposure to a rewarding/aversive stimulus. To investigate whether approach/avoidance behavior might be governed by quantitative principles that meet engineering criteria for lawfulness and that encode known features of reward/aversion function, we evaluated whether keypress responses toward pictures with potential motivational value produced any regular patterns, such as a trade-off between approach and avoidance, or recurrent lawful patterns as observed with prospect theory. Three sets of experiments employed this task with beautiful face images, a standardized set of affective photographs, and pictures of food during controlled states of hunger and satiety. An iterative modeling approach to data identified multiple law-like patterns, based on variables grounded in the individual. These patterns were consistent across stimulus types, robust to noise, describable by a simple power law, and scalable between individuals and groups. Patterns included: (i) a preference trade-off counterbalancing approach and avoidance, (ii) a value function linking preference intensity to uncertainty about preference, and (iii) a saturation function linking preference intensity to its standard deviation, thereby setting limits to both. These law-like patterns were compatible with critical features of prospect theory, the matching law, and alliesthesia. Furthermore, they appeared consistent with both mean-variance and expected utility approaches to the assessment of risk. Ordering of responses across categories of stimuli demonstrated three properties thought to be relevant for preference-based choice, suggesting these patterns might be grouped together as a relative preference theory. Since variables in these patterns have been
A robust quantitative near infrared modeling approach for blend monitoring.
Mohan, Shikhar; Momose, Wataru; Katz, Jeffrey M; Hossain, Md Nayeem; Velez, Natasha; Drennen, James K; Anderson, Carl A
2018-01-30
This study demonstrates a material-sparing Near-Infrared modeling approach for powder blend monitoring. In this new approach, gram-scale powder mixtures are subjected to compression loads to simulate the effect of scale using an Instron universal testing system. Models prepared by the new method development approach (small-scale method) and by a traditional method development approach (blender-scale method) were compared by simultaneously monitoring a 1 kg batch-size blend run. Both models demonstrated similar performance. The small-scale strategy significantly reduces the total resources expended to develop Near-Infrared calibration models for on-line blend monitoring. Further, this development approach does not require the actual equipment (i.e., blender) to which the method will be applied, only a similar optical interface. Thus, a robust on-line blend monitoring method can be fully developed before any large-scale blending experiment is viable, allowing the method to be used during scale-up and blend development trials. Copyright © 2017. Published by Elsevier B.V.
Chaerani, D.; Lesmana, E.; Tressiana, N.
2018-03-01
In this paper, an application of robust optimization to an agricultural water resource management problem under gross margin and water demand uncertainty is presented. Water resource management is a series of activities that includes planning, developing, distributing and managing the use of water resources optimally. Water resource management for agriculture can be one of the efforts to optimize the benefits of agricultural output. The objective of the agricultural water resource management problem is to maximize total benefits through water allocation to agricultural areas covered by the irrigation network over the planning horizon. Due to gross margin and water demand uncertainty, we assume that the uncertain data lie within an ellipsoidal uncertainty set. We employ the robust counterpart methodology to obtain the robust optimal solution.
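For an ellipsoidal uncertainty set, the robust counterpart of a linear constraint aᵀx ≤ b with a ∈ {a_nom + Pu : ‖u‖ ≤ Ω} is the second-order cone constraint a_nomᵀx + Ω‖Pᵀx‖ ≤ b. The sketch below evaluates that counterpart for a two-crop water budget; all numbers are hypothetical, not taken from the paper.

```python
from math import sqrt

# Robust counterpart of a_nom^T x + Omega * ||P^T x|| <= b for a water
# availability constraint under ellipsoidal demand uncertainty.
a_nom = [3.0, 2.0]           # nominal water demand per hectare (illustrative)
P = [[0.5, 0.0],             # demand-uncertainty "shape" matrix
     [0.0, 0.4]]
Omega = 1.0                  # safety parameter: larger = more robust
b = 100.0                    # available water

def robust_lhs(x):
    nominal = sum(a * xi for a, xi in zip(a_nom, x))
    ptx = [sum(P[i][j] * x[i] for i in range(2)) for j in range(2)]
    return nominal + Omega * sqrt(sum(v * v for v in ptx))

x = [20.0, 15.0]             # candidate planted areas (hectares)
feasible = robust_lhs(x) <= b
```

The candidate allocation uses only 90 units of water nominally, yet fails the robust constraint (about 101.7 > 100): the protection term reserves capacity against adverse demand realizations, so a robust plan must plant slightly less than the nominal optimum.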
Robust Optimization of the Self- scheduling and Market Involvement for an Electricity Producer
Lima, Ricardo
2015-01-01
This work addresses the optimization under uncertainty of the self-scheduling, forward contracting, and pool involvement of an electricity producer operating a mixed power generation station, which combines thermal, hydro and wind sources, using a two-stage adaptive robust optimization approach. In this problem the wind power production and the electricity pool price are considered uncertain and are described by convex uncertainty sets. Two variants of a constraint generation algorithm are proposed, namely a primal and a dual version, and they are used to solve two case studies based on two different producers. Their market strategies are investigated for three different scenarios, corresponding to as many instances of electricity price forecasts. The effect of the producers' approach, whether conservative or more risk prone, is also investigated by solving each instance for multiple values of the so-called budget parameter. It was possible to conclude that this parameter markedly influences the producers' strategy in terms of scheduling, profit, forward contracting, and pool involvement. Regarding the computational results, these show that for some instances the two variants of the algorithm have similar performance, while for a particular subset of them one variant has a clear superiority.
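The "budget parameter" mentioned above is the Γ of budget-of-uncertainty robust optimization: at most Γ of the uncertain coefficients deviate from their nominal values at once, so Γ tunes conservativeness. A toy sketch with invented prices and deviations (not the paper's data):

```python
# Sketch of the "budget of uncertainty" (Gamma) idea: each uncertain pool
# price can drop from its nominal value by a known deviation, but at most
# Gamma of them deviate simultaneously.  The adversary picks the Gamma
# deviations that hurt profit most.
nominal = [50.0, 55.0, 60.0, 48.0, 52.0]   # nominal hourly prices (EUR/MWh)
deviation = [10.0, 5.0, 12.0, 8.0, 6.0]    # maximum downward deviations
production = [100.0] * 5                   # scheduled output per hour (MWh)

def worst_case_profit(gamma):
    """Worst-case revenue when at most `gamma` prices deviate."""
    losses = sorted((d * p for d, p in zip(deviation, production)), reverse=True)
    base = sum(n * p for n, p in zip(nominal, production))
    return base - sum(losses[:gamma])

profits = [worst_case_profit(g) for g in range(6)]
print(profits)  # non-increasing in Gamma: larger budget = more conservative
```

Γ = 0 recovers the nominal (risk-prone) profit, Γ = 5 the fully conservative one, mirroring how the paper's producers move between aggressive and conservative strategies.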
Robust Sensing of Approaching Vehicles Relying on Acoustic Cues
Directory of Open Access Journals (Sweden)
Mitsunori Mizumachi
2014-05-01
Full Text Available The latest developments in automobile design have allowed vehicles to be equipped with various sensing devices. Multiple sensors such as cameras and radar systems can be used simultaneously in active safety systems in order to overcome the blind spots of individual sensors. This paper proposes a novel sensing technique for detecting and tracking an approaching vehicle based on an acoustic cue. First, it is necessary to extract a robust spatial feature from noisy acoustical observations. In this paper, the spatio-temporal gradient method is employed for the feature extraction. Then, the spatial feature is filtered through sequential state estimation. A particle filter is employed to cope with a highly non-linear problem. The feasibility of the proposed method has been confirmed with real acoustical observations obtained by microphones mounted outside a cruising vehicle.
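A bootstrap particle filter of the kind used for the sequential state estimation step can be sketched in a few lines; the 1-D drifting state and noise levels below are synthetic stand-ins for the paper's spatial feature, not its actual data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Minimal bootstrap particle filter on a 1-D toy tracking problem:
# a slowly drifting true state observed with heavy noise.
T, N = 50, 500                                   # time steps, particles
true = np.cumsum(rng.normal(0.1, 0.05, T))       # drifting true state
obs = true + rng.normal(0.0, 0.5, T)             # noisy "acoustic" observations

particles = rng.normal(0.0, 1.0, N)
estimates = []
for y in obs:
    particles += rng.normal(0.1, 0.2, N)         # propagate: drift + noise
    w = np.exp(-0.5 * ((y - particles) / 0.5) ** 2)  # Gaussian likelihood
    w /= w.sum()
    estimates.append(float(np.dot(w, particles)))    # weighted-mean estimate
    idx = rng.choice(N, size=N, p=w)             # multinomial resampling
    particles = particles[idx]

print(abs(estimates[-1] - true[-1]))             # small tracking error
```

Resampling at every step keeps the particle set concentrated where the likelihood is high, which is what lets the filter cope with the non-linear, noisy observation model.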
S-variable approach to LMI-based robust control
Ebihara, Yoshio; Arzelier, Denis
2015-01-01
This book shows how the use of S-variables (SVs) in enhancing the range of problems that can be addressed with the already-versatile linear matrix inequality (LMI) approach to control can, in many cases, be put on a more unified, methodical footing. Beginning with the fundamentals of the SV approach, the text shows how the basic idea can be used for each problem (and when it should not be employed at all). The specific adaptations of the method necessitated by each problem are also detailed. The problems dealt with in the book have the common traits that: analytic closed-form solutions are not available; and LMIs can be applied to produce numerical solutions with a certain amount of conservatism. Typical examples are robustness analysis of linear systems affected by parametric uncertainties and the synthesis of a linear controller satisfying multiple, often conflicting, design specifications. For problems in which LMI methods produce conservative results, the SV approach is shown to achieve greater accuracy...
Robust Multivariable Optimization and Performance Simulation for ASIC Design
DuMonthier, Jeffrey; Suarez, George
2013-01-01
Application-specific integrated-circuit (ASIC) design for space applications involves the multiple challenges of maximizing performance, minimizing power, and ensuring reliable operation in extreme environments. This is a complex multidimensional optimization problem, which must be solved early in the development cycle of a system because the time required for testing and qualification severely limits opportunities to modify and iterate. Manual design techniques, which generally involve simulation at one or a small number of corners with a very limited set of simultaneously variable parameters in order to make the problem tractable, are inefficient and not guaranteed to achieve the best possible results within the performance envelope defined by the process and environmental requirements. What is required is a means to automate design parameter variation, to allow the designer to specify operational constraints and performance goals, and to analyze the results in a way that facilitates identifying the tradeoffs defining the performance envelope over the full set of process and environmental corner cases. The system developed by the Mixed Signal ASIC Group (MSAG) at the Goddard Space Flight Center is implemented as a framework of software modules, templates, and function libraries. It integrates CAD tools and a mathematical computing environment, and can be customized for new circuit designs with only a modest amount of effort, as most common tasks are already encapsulated. Customization is required for the simulation test benches that determine performance metrics and for cost-function computation.
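The cost-function step can be illustrated with a toy worst-corner aggregation: evaluate a metric at every process/voltage/temperature corner and score the design by its worst spec violation. The gain model below is a made-up stand-in for an actual circuit simulation:

```python
from itertools import product

# Enumerate hypothetical PVT corners: process, supply voltage (V), temp (C).
corners = list(product(["slow", "fast"], [1.6, 2.0], [-55, 125]))

def gain_metric(process, vdd, temp):
    """Hypothetical amplifier gain model (dB) -- not a real simulator."""
    base = 40.0 + (2.0 if process == "fast" else -2.0)
    return base + 3.0 * (vdd - 1.8) - 0.01 * temp

spec = 38.0  # minimum acceptable gain (dB)
violations = {c: max(0.0, spec - gain_metric(*c)) for c in corners}
worst = max(violations, key=violations.get)
print(worst, violations[worst])  # corner that defines the performance envelope
```

Sweeping design parameters and re-running this aggregation over all corners is the automated loop the framework provides; manual design typically checks only one or two of these corners.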
Liu, Xing-Cai; He, Shi-Wei; Song, Rui; Sun, Yang; Li, Hao-Dong
2014-01-01
Railway freight center location is an important issue in railway freight transport programming. This paper focuses on the railway freight center location problem in an uncertain environment. Because the expected-value model ignores the negative influence of disadvantageous scenarios, a robust optimization model was proposed. The robust optimization model takes the expected cost and the deviation across scenarios as its objective. A cloud adaptive clonal selection algorithm (C-ACSA) was presented; it combines an adaptive clonal selection algorithm with the Cloud Model, which can improve the convergence rate. The encoding scheme and the workflow of the algorithm were described. Results of a numerical example demonstrate that the model and algorithm are effective: compared with the expected-value case, the number of disadvantageous scenarios in the robust model is reduced from 163 to 21, which shows that the result of the robust model is more reliable.
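The "expected cost plus deviation" robust objective described above can be sketched as follows; the scenario costs, probabilities, and the weight `lam` are illustrative, not the paper's data:

```python
# Robust scenario objective: expected cost plus a penalty on the (one-sided)
# deviation of scenario costs above the expectation, so plans that do badly
# in disadvantageous scenarios are filtered out.
def robust_objective(scenario_costs, probs, lam):
    expected = sum(c * p for c, p in zip(scenario_costs, probs))
    deviation = sum(p * max(0.0, c - expected)
                    for c, p in zip(scenario_costs, probs))
    return expected + lam * deviation

probs = [0.5, 0.3, 0.2]
plan_a = [100.0, 110.0, 250.0]   # cheap on average, terrible in one scenario
plan_b = [130.0, 135.0, 140.0]   # slightly dearer but stable

for lam in (0.0, 5.0):
    print(lam, robust_objective(plan_a, probs, lam),
          robust_objective(plan_b, probs, lam))
```

With `lam = 0` the objective is the plain expected-value model and the volatile plan wins; with a positive deviation weight the stable plan is preferred, which is the behavior the abstract reports (fewer disadvantageous scenarios).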
Optimal control of quantum systems: Origins of inherent robustness to control field fluctuations
International Nuclear Information System (INIS)
Rabitz, Herschel
2002-01-01
The impact of control field fluctuations on the optimal manipulation of quantum dynamics phenomena is investigated. The quantum system is driven by an optimal control field, with the physical focus on the evolving expectation value of an observable operator. A relationship is shown to exist between the system dynamics and the control field fluctuations, wherein the process of seeking optimal performance assures an inherent degree of system robustness to such fluctuations. The presence of significant field fluctuations breaks down the evolution of the observable expectation value into a sequence of partially coherent robust steps. Robustness occurs because the optimization process reduces sensitivity to noise-driven quantum system fluctuations by taking advantage of the observable expectation value being bilinear in the evolution operator and its adjoint. The consequences of this inherent robustness are discussed in the light of recent experiments and numerical simulations on the optimal control of quantum phenomena. The analysis in this paper bodes well for the future success of closed-loop quantum optimal control experiments, even in the presence of reasonable levels of field fluctuations.
Ju, Yaping; Zhang, Chuhua
2016-03-01
Blade fouling has proved to be a serious threat to compressor performance during operation. Current research on the fouling-induced performance degradation of centrifugal compressors is based mainly on simplified roughness models that do not take into account realistic factors such as the spatial non-uniformity and randomness of the fouling-induced surface roughness. Moreover, little attention has been paid to the robust design optimization of centrifugal compressor impellers with consideration of blade fouling. In this paper, a multi-objective robust design optimization method is developed for centrifugal impellers under surface roughness uncertainties due to blade fouling. A three-dimensional surface roughness map is proposed to describe the non-uniformity and randomness of realistic fouling accumulations on blades. To lower the computational cost of robust design optimization, the support vector regression (SVR) metamodel is combined with the Monte Carlo simulation (MCS) method to conduct the uncertainty analysis of fouled impeller performance. The results show that the critical fouled region associated with impeller performance degradation lies at the leading edge of the blade tip. The SVR metamodel proves to be an efficient and accurate means of detecting impeller performance variations caused by roughness uncertainties. After design optimization, the robust optimal design is found to be more efficient and less sensitive to fouling uncertainties while maintaining good impeller performance in the clean condition. This research proposes a systematic design optimization method for centrifugal compressors with consideration of blade fouling, providing practical guidance for the design of advanced centrifugal compressors.
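The metamodel-plus-Monte-Carlo step can be sketched with a simple RBF kernel ridge surrogate standing in for the paper's SVR metamodel; the "efficiency loss vs. roughness" curve and all parameters below are artificial:

```python
import numpy as np

rng = np.random.default_rng(1)

def rbf(a, b, gamma=2.0):
    """RBF kernel matrix between two 1-D sample sets."""
    return np.exp(-gamma * (a[:, None] - b[None, :]) ** 2)

def expensive_model(r):
    """Stand-in for a CFD evaluation of efficiency loss vs roughness."""
    return 0.05 * r + 0.02 * np.sin(4.0 * r)

# Fit the surrogate on a small design of experiments ("expensive" runs).
r_train = np.linspace(0.0, 1.0, 25)
y_train = expensive_model(r_train)
alpha = np.linalg.solve(rbf(r_train, r_train) + 1e-6 * np.eye(25), y_train)

def surrogate(r):
    return rbf(np.atleast_1d(r), r_train) @ alpha

# Monte Carlo over uncertain roughness, pushed through the cheap surrogate.
samples = np.clip(rng.normal(0.5, 0.15, 10_000), 0.0, 1.0)
losses = surrogate(samples)
print(losses.mean(), losses.std())   # statistics of fouled performance
```

The point of the metamodel is the cost asymmetry: the 10,000 Monte Carlo evaluations hit the cheap surrogate, while the expensive model is called only 25 times to train it.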
Data-adaptive Robust Optimization Method for the Economic Dispatch of Active Distribution Networks
DEFF Research Database (Denmark)
Zhang, Yipu; Ai, Xiaomeng; Fang, Jiakun
2018-01-01
Due to the restricted mathematical description of the uncertainty set, current two-stage robust optimization is usually over-conservative, which has drawn concern from power system operators. This paper proposes a novel data-adaptive robust optimization method for the economic dispatch of active distribution networks with renewables. The scenario-generation method and two-stage robust optimization are combined in the proposed method. To reduce the conservativeness, a few extreme scenarios selected from the historical data are used to replace the conventional uncertainty set. The proposed extreme-scenario selection algorithm takes advantage of considering the correlations and can be adaptive to different historical data sets. A theoretical proof is given that the constraints will be satisfied under all possible scenarios if they hold in the selected extreme scenarios, which…
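The extreme-scenario idea — replacing a geometric uncertainty set with a handful of extreme historical scenarios — can be illustrated with a much-simplified selection rule; the paper's actual algorithm is more elaborate, and the wind data here are random stand-ins:

```python
import numpy as np

rng = np.random.default_rng(2)

# 1000 historical daily wind-power profiles, 24 hourly values each.
history = rng.uniform(0.0, 1.0, size=(1000, 24))

def select_extremes(data):
    """Keep only scenarios extreme per hour or in aggregate.

    Because whole historical rows are kept, hour-to-hour correlations in
    the data are preserved -- unlike a box uncertainty set, whose corners
    combine per-hour extremes that may never co-occur.
    """
    idx = set()
    idx.update(int(i) for i in np.argmin(data, axis=0))   # per-hour minima
    idx.update(int(i) for i in np.argmax(data, axis=0))   # per-hour maxima
    idx.add(int(np.argmin(data.sum(axis=1))))             # lowest-energy day
    idx.add(int(np.argmax(data.sum(axis=1))))             # highest-energy day
    return data[sorted(idx)]

extremes = select_extremes(history)
print(extremes.shape)   # far fewer rows than the raw history
```

A robust dispatch constrained only on these few scenarios is far less conservative than one constrained on every corner of a conventional box or polyhedral set.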
International Nuclear Information System (INIS)
Das, Rabindra Nath; Kim, Jinseog; Park, Jeong-Soo
2015-01-01
In quality engineering, the most commonly used lifetime distributions are log-normal, exponential, gamma and Weibull. Experimental designs are useful for predicting the optimal operating conditions of the process in lifetime improvement experiments. In the present article, invariant robust first-order D-optimal designs are derived for correlated lifetime responses having the above four distributions. Robust designs are developed for some correlated error structures. It is shown that robust first-order D-optimal designs for these lifetime distributions are always robust rotatable but the converse is not true. Moreover, it is observed that these designs depend on the respective error covariance structure but are invariant to the above four lifetime distributions. This article generalizes the results of Das and Lin [7] for the above four lifetime distributions with general (intra-class, inter-class, compound symmetry, and tri-diagonal) correlated error structures. - Highlights: • This paper presents invariant robust first-order D-optimal designs under correlated lifetime responses. • The results of Das and Lin [7] are extended for the four lifetime (log-normal, exponential, gamma and Weibull) distributions. • This paper also generalizes the results of Das and Lin [7] to more general correlated error structures
Directory of Open Access Journals (Sweden)
A. Chebbi
2013-10-01
Full Text Available Based on rainfall intensity-duration-frequency (IDF) curves, fitted at several locations of a given area, a robust optimization approach is proposed to identify the best locations to install new rain gauges. The advantage of robust optimization is that the resulting design solutions yield networks which behave acceptably under hydrological variability. Robust optimization can overcome the problem of selecting representative rainfall events when building the optimization process. This paper reports an original approach based on Montana IDF model parameters. The latter are assumed to be geostatistical variables, and their spatial interdependence is taken into account through the adoption of cross-variograms in the kriging process. The problem of optimally locating a fixed number of new monitoring stations based on an existing rain gauge network is addressed. The objective function is based on the mean spatial kriging variance and rainfall variogram structure using a variance-reduction method. Hydrological variability was taken into account by considering and implementing several return periods to define the robust objective function. Variance minimization is performed using a simulated annealing algorithm. In addition, knowledge of the time horizon is needed for the computation of the robust objective function. A short- and a long-term horizon were studied, and optimal networks are identified for each. The method developed is applied to north Tunisia (area = 21 000 km2). Data inputs for the variogram analysis were IDF curves provided by the hydrological bureau and available for 14 tipping-bucket rain gauges. The recording period was from 1962 to 2001, depending on the station. The study concerns a hypothetical network augmentation based on the network configuration in 1973, which is a very significant year in Tunisia because there was an exceptional regional flood event in March 1973. This network consisted of 13 stations and did not meet World…
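The simulated annealing step can be illustrated on a 1-D toy analogue of gauge placement, with mean squared distance to the nearest gauge standing in for the mean kriging variance minimized in the paper; all sites and cooling parameters are invented:

```python
import math
import random

random.seed(3)

# Toy network augmentation: place 3 new gauges on a 100 km transect with
# 2 existing gauges, minimizing the mean squared distance from a fine grid
# of points to their nearest gauge (a crude proxy for kriging variance).
existing = [20.0, 80.0]
grid = [i * 0.5 for i in range(201)]          # evaluation points, 0..100 km

def cost(new_sites):
    sites = existing + list(new_sites)
    return sum(min((g - s) ** 2 for s in sites) for g in grid) / len(grid)

state = [random.uniform(0, 100) for _ in range(3)]
best, best_cost = list(state), cost(state)
temp = 50.0
for _ in range(2000):
    # Perturb all sites, clamp to the study area.
    cand = [min(100.0, max(0.0, s + random.gauss(0, 5))) for s in state]
    delta = cost(cand) - cost(state)
    # Accept improvements always; accept uphill moves with Boltzmann prob.
    if delta < 0 or random.random() < math.exp(-delta / temp):
        state = cand
        if cost(state) < best_cost:
            best, best_cost = list(state), cost(state)
    temp *= 0.995                              # geometric cooling schedule

print(sorted(best), best_cost)
```

Early on the high temperature lets the search escape poor configurations; as the temperature decays it settles into a low-variance network, which is the behavior relied on for the short- and long-term horizon designs.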
Directory of Open Access Journals (Sweden)
Dezhi Zhang
2015-12-01
Full Text Available This article proposes a new model to address the design problem of a sustainable regional logistics network under uncertainty in future logistics demand. In the proposed model, the future logistics demand is assumed to be a random variable with a given probability distribution. A set of chance constraints on logistics service capacity and environmental impacts is incorporated to account for the sustainability of the logistics network design. The proposed model is formulated as a two-stage robust optimization problem. The first-stage problem, solved before the realization of future logistics demand, minimizes a risk-averse objective by determining the optimal location and size of logistics parks, with CO2 emission taxes taken into consideration. The second-stage problem, solved after the uncertain logistics demand is realized, is a scenario-based stochastic equilibrium problem of logistics service route choices. A heuristic solution algorithm, which combines a penalty function method, a genetic algorithm, and a Gauss–Seidel decomposition approach, is developed to solve the proposed model. An illustrative example shows the application of the proposed model and solution algorithm. The findings show that the total social welfare of the logistics system depends strongly on the level of uncertainty in future logistics demand, the capital budget for logistics parks, and the confidence levels of the chance constraints.
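For normally distributed demand, a chance constraint like the capacity constraints above has a standard deterministic equivalent: P(demand ≤ capacity) ≥ α reduces to capacity ≥ μ + z_α·σ. A sketch with illustrative demand parameters (not the paper's data):

```python
from statistics import NormalDist

# Deterministic equivalent of one chance constraint: require
# P(demand <= capacity) >= alpha for Normal(mu, sigma) logistics demand.
mu, sigma, alpha = 500.0, 60.0, 0.95

z = NormalDist().inv_cdf(alpha)         # standard-normal quantile z_alpha
required_capacity = mu + z * sigma
print(round(required_capacity, 1))      # minimum capacity meeting the constraint

# Check: the chance constraint indeed holds at the computed capacity.
print(NormalDist(mu, sigma).cdf(required_capacity))
```

Raising the confidence level α tightens the constraint (larger z_α), which is why the abstract reports that social welfare depends on the chosen confidence levels.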
A robust optimization model for agile and build-to-order supply chain planning under uncertainties
DEFF Research Database (Denmark)
Lalmazloumian, Morteza; Wong, Kuan Yew; Govindan, Kannan
2016-01-01
Supply chain planning, as one of the most important processes within the supply chain management concept, has a great impact on firms' success or failure. This paper considers a supply chain planning problem of an agile manufacturing company operating in a build-to-order environment under various … The formulation is a robust optimization model with the objective of minimizing the expected total supply chain cost while maintaining customer service level. The developed multi-product, multi-period, multi-echelon robust mixed-integer linear programming model is then solved using the CPLEX optimization studio …
Directory of Open Access Journals (Sweden)
Jianwen Ren
2018-04-01
Full Text Available This paper proposes a distributed robust dispatch approach to solve the economic dispatch problem of interconnected systems with a high proportion of wind power penetration. First, the basic principle of the synchronous alternating direction method of multipliers (SADMM) is introduced to solve the economic dispatch problem of two interconnected regions. Next, a polyhedral uncertainty set from the robust optimization method is used to describe the wind power output; to adjust the conservativeness of the polyhedral set, a robust-conservativeness adjustment factor is introduced. Subsequently, considering the operating characteristics of the DC tie line between the interconnected regions, an economic dispatch model with a high proportion of wind power penetration is established, and parallel iteration based on SADMM is used to solve the model. In each iteration, the optimized power of the DC tie lines is exchanged between the regions without requiring the participation of a superior dispatch center. Finally, the validity of the proposed model is verified on a 2-area 6-node interconnected system and on interconnections of several modified New England 39-node systems. The results show that the proposed model can meet the needs of independent dispatch of regional power grids, effectively handle the uncertainty of wind power output, and maximize wind power consumption while ensuring the safe operation of the interconnected systems.
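The consensus mechanism behind the ADMM iteration can be sketched on a two-region toy with quadratic costs and a single tie-line variable; this is a plain, synchronous ADMM iteration with closed-form local updates, and all cost numbers are hypothetical:

```python
# Two-region consensus ADMM: each region holds its own copy of the DC
# tie-line power, optimizes it locally against a quadratic generation cost,
# and the copies are driven to agreement via a dual (price) signal --
# no central dispatcher solves the joint problem.
a1, b1 = 2.0, 30.0    # region 1: cost 0.5*a1*(t - b1)^2, prefers t = 30
a2, b2 = 1.0, 60.0    # region 2: cost 0.5*a2*(t - b2)^2, prefers t = 60
rho = 1.0             # ADMM penalty parameter

x = z = u = 0.0       # region copies x, z and scaled dual u
for _ in range(500):
    x = (a1 * b1 + rho * (z - u)) / (a1 + rho)   # region 1 local minimizer
    z = (a2 * b2 + rho * (x + u)) / (a2 + rho)   # region 2 local minimizer
    u += x - z                                   # dual update on the mismatch

t_star = (a1 * b1 + a2 * b2) / (a1 + a2)         # centralized optimum (= 40)
print(x, z, t_star)
```

Only the tie-line copies and the dual variable cross the regional boundary at each iteration, which mirrors the paper's point that no superior dispatch center needs to participate.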
Group Counseling Optimization: A Novel Approach
Eita, M. A.; Fahmy, M. M.
A new population-based search algorithm, which we call Group Counseling Optimizer (GCO), is presented. It mimics the group counseling behavior of humans in solving their problems. The algorithm is tested using seven known benchmark functions: Sphere, Rosenbrock, Griewank, Rastrigin, Ackley, Weierstrass, and Schwefel functions. A comparison is made with the recently published comprehensive learning particle swarm optimizer (CLPSO). The results demonstrate the efficiency and robustness of the proposed algorithm.
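Two of the listed benchmark functions, written out as they are commonly defined in the evolutionary-computation literature; both have a global minimum of 0 at the origin, which makes them convenient sanity checks for any optimizer:

```python
import math

def sphere(x):
    """Sphere function: smooth, unimodal, minimum 0 at the origin."""
    return sum(v * v for v in x)

def rastrigin(x):
    """Rastrigin function: highly multimodal, minimum 0 at the origin."""
    return 10 * len(x) + sum(v * v - 10 * math.cos(2 * math.pi * v) for v in x)

print(sphere([0.0] * 5), rastrigin([0.0] * 5))
```

Sphere tests pure convergence speed, while Rastrigin's grid of local minima tests an algorithm's ability to escape them — which is why benchmark suites like the one above pair them.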
Takemiya, Tetsushi
… and that (2) the AMF terminates optimization erroneously when the optimization problems have constraints. The first problem is due to inaccuracy in computing derivatives in the AMF, and the second problem is due to erroneous treatment of the trust region ratio, which sets the size of the domain for an optimization in the AMF. In order to solve the first problem of the AMF, the automatic differentiation (AD) technique, which reads the code of analysis models and automatically generates new derivative code based on mathematical rules, is applied. Derivatives computed with the generated derivative code are analytical, and the required computational time is independent of the number of design variables, which is very advantageous for realistic aerospace engineering problems. However, if analysis models implement iterative computations such as computational fluid dynamics (CFD), which solves systems of partial differential equations iteratively, computing derivatives through AD requires a massive amount of memory. The author solved this deficiency by modifying the AD approach and developing a more efficient implementation with CFD, and successfully applied AD to general CFD software. In order to solve the second problem of the AMF, the governing equation of the trust region ratio, which is very strict against the violation of constraints, is modified so that it can accept violations of constraints within some tolerance. By accepting violations of constraints during the optimization process, the AMF can continue optimization without terminating prematurely and eventually find the true optimum design point. With these modifications, the AMF is referred to as "Robust AMF," and it is applied to airfoil and wing aerodynamic design problems using Euler CFD software. The former problem has 21 design variables, and the latter 64. In both problems, derivatives computed with the proposed AD method are first compared with those computed with the finite…
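Forward-mode AD, the idea behind the derivative-code generation described above, can be sketched with dual numbers; this minimal toy supports only `+` and `*` and is of course not the tool applied to CFD in the thesis:

```python
# Minimal forward-mode automatic differentiation via dual numbers:
# arithmetic is overloaded so every operation propagates an exact
# derivative (the "dot" part) alongside the value.
class Dual:
    def __init__(self, val, dot=0.0):
        self.val, self.dot = val, dot

    def __add__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val + o.val, self.dot + o.dot)
    __radd__ = __add__

    def __mul__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        # Product rule: (uv)' = u'v + uv'
        return Dual(self.val * o.val, self.dot * o.val + self.val * o.dot)
    __rmul__ = __mul__

def derivative(f, x):
    """Evaluate f at x with unit derivative seed and read off f'(x)."""
    return f(Dual(x, 1.0)).dot

# d/dx (x^2 + 3x) at x = 2 is 2*2 + 3 = 7 -- exact, not a finite difference.
print(derivative(lambda x: x * x + 3 * x, 2.0))
```

This is the property the thesis exploits: the derivatives are analytical (no step-size error as in finite differences), and one forward pass costs a constant multiple of the function evaluation regardless of how the result is later used.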
Design optimization of a robust sleeve antenna for hepatic microwave ablation
International Nuclear Information System (INIS)
Prakash, Punit; Webster, John G; Deng Geng; Converse, Mark C; Mahvi, David M; Ferris, Michael C
2008-01-01
We describe the application of a Bayesian variable-number sample-path (VNSP) optimization algorithm to yield a robust design for a floating sleeve antenna for hepatic microwave ablation. Finite element models are used to generate the electromagnetic (EM) field and thermal distribution in liver tissue for a given design. Dielectric properties of the tissue are assumed to vary within ±10% of average properties to simulate the variation among individuals. The Bayesian VNSP algorithm yields an optimal design that is a 14.3% improvement over the original design and is more robust in terms of lesion size, shape and efficiency. Moreover, the Bayesian VNSP algorithm finds an optimal solution while saving 68.2% of the simulation evaluations compared to the standard sample-path optimization method.
Toward Environmentally Robust Organic Electronics: Approaches and Applications.
Lee, Eun Kwang; Lee, Moo Yeol; Park, Cheol Hee; Lee, Hae Rang; Oh, Joon Hak
2017-11-01
Recent interest in flexible electronics has led to a paradigm shift in consumer electronics, and the emergent development of stretchable and wearable electronics is opening a new spectrum of ubiquitous applications for electronics. Organic electronic materials, such as π-conjugated small molecules and polymers, are highly suitable for use in low-cost wearable electronic devices, and their charge-carrier mobilities have now exceeded those of amorphous silicon. However, their commercialization is minimal, mainly because of weaknesses in terms of operational stability, long-term stability under ambient conditions, and chemical stability related to fabrication processes. Recently, however, many attempts have been made to overcome such instabilities of organic electronic materials. Here, an overview is provided of the strategies developed for environmentally robust organic electronics to overcome the detrimental effects of various critical factors such as oxygen, water, chemicals, heat, and light. Additionally, molecular design approaches to π-conjugated small molecules and polymers that are highly stable under ambient and harsh conditions are explored; such materials will circumvent the need for encapsulation and provide a greater degree of freedom using simple solution-based device-fabrication techniques. Applications that are made possible through these strategies are highlighted.
Robust infrared target tracking using discriminative and generative approaches
Asha, C. S.; Narasimhadhan, A. V.
2017-09-01
Designing an efficient tracker for thermal infrared imagery is one of the most challenging tasks in computer vision. Although much progress has been made for RGB videos over the decades, the textureless and colorless properties of objects in thermal imagery pose hard constraints on the design of an efficient tracker. Tracking an object using a single feature or technique often fails to achieve high accuracy. Here, we propose an effective method to track an object in infrared imagery based on a combination of discriminative and generative approaches. The discriminative technique makes use of two complementary methods, a kernelized correlation filter with spatial features and an AdaBoost classifier with pixel intensity features, operating in parallel. After obtaining optimized locations through the discriminative approaches, the generative technique is applied to determine the best target location using a linear search method. Unlike the baseline algorithms, the proposed method estimates the scale of the target by Lucas-Kanade homography estimation. To evaluate the proposed method, extensive experiments are conducted on 17 challenging infrared image sequences from the LTIR dataset, and a significant improvement in mean distance precision and mean overlap precision is achieved compared with existing trackers. Further, a quantitative and qualitative assessment of the proposed approach against state-of-the-art trackers clearly demonstrates an overall increase in performance.
Dynamic optimization and robust explicit model predictive control of hydrogen storage tank
Panos, C.
2010-09-01
We present a general framework for the optimal design and control of a metal-hydride bed under hydrogen desorption operation. The framework features: (i) a detailed two-dimension dynamic process model, (ii) a design and operational dynamic optimization step, and (iii) an explicit/multi-parametric model predictive controller design step. For the controller design, a reduced order approximate model is obtained, based on which nominal and robust multi-parametric controllers are designed. © 2010 Elsevier Ltd.
Robust design of broadband EUV multilayer beam splitters based on particle swarm optimization
International Nuclear Information System (INIS)
Jiang, Hui; Michette, Alan G.
2013-01-01
A robust design approach for broadband EUV multilayer beam splitters is introduced that decreases the influence of layer thickness errors on optical performance. Such beam splitters can be used in interferometry to determine the quality of EUVL masks by comparison with a reference multilayer. In the optimization, particle swarm techniques were used for the first time in such designs. Compared to conventional genetic algorithms, particle swarm optimization has stronger ergodicity, simpler processing, and faster convergence.
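A bare-bones particle swarm optimizer can be sketched as follows; it is demonstrated on a 2-D sphere function rather than a multilayer reflectivity merit function, which would require a full EM multilayer solver, and the swarm parameters are standard textbook values:

```python
import random

random.seed(4)

def sphere(p):
    """Toy objective standing in for a multilayer merit function."""
    return sum(v * v for v in p)

# Standard PSO parameters: inertia w, cognitive c1, social c2.
n, dim, w, c1, c2 = 20, 2, 0.7, 1.5, 1.5
pos = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(n)]
vel = [[0.0] * dim for _ in range(n)]
pbest = [list(p) for p in pos]            # each particle's best position
gbest = min(pbest, key=sphere)            # swarm's best position

for _ in range(300):
    for i in range(n):
        for d in range(dim):
            vel[i][d] = (w * vel[i][d]
                         + c1 * random.random() * (pbest[i][d] - pos[i][d])
                         + c2 * random.random() * (gbest[d] - pos[i][d]))
            pos[i][d] += vel[i][d]
        if sphere(pos[i]) < sphere(pbest[i]):
            pbest[i] = list(pos[i])
            if sphere(pos[i]) < sphere(gbest):
                gbest = list(pos[i])

print(sphere(gbest))   # best merit value found by the swarm
```

Each particle is pulled toward its own best point and the swarm's best point, which is the mechanism behind the fast convergence the abstract contrasts with genetic algorithms.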
Commitment and dispatch of heat and power units via affinely adjustable robust optimization
DEFF Research Database (Denmark)
Zugno, Marco; Morales González, Juan Miguel; Madsen, Henrik
2016-01-01
… compromising computational tractability. We perform an extensive numerical study based on data from the Copenhagen area in Denmark, which highlights important features of the proposed model. Firstly, we illustrate commitment and dispatch choices that increase conservativeness in the robust optimization … and conservativeness of the solution. Finally, we perform a thorough comparison with competing models based on deterministic optimization and stochastic programming.
Directory of Open Access Journals (Sweden)
Fushing Hsieh
2016-11-01
Full Text Available Discrete combinatorial optimization problems in the real world are typically defined via an ensemble of potentially high-dimensional measurements pertaining to all subjects of a system under study. We point out that such a data ensemble in fact embeds the system's information content, which is not directly used in defining the combinatorial optimization problems. Can machine learning algorithms extract such information content and make combinatorial optimization tasks more efficient? Would such algorithmic computations bring new perspectives into this classic topic of Applied Mathematics and Theoretical Computer Science? We show that the answers to both questions are positive. One key reason is permutation invariance: the data ensemble of subjects' measurement vectors is permutation invariant when it is represented through a subject-vs-measurement matrix. An unsupervised machine learning algorithm, called Data Mechanics (DM), is applied to find optimal permutations of the row and column axes such that the permuted matrix reveals coupled deterministic and stochastic structures as the system's information content. The deterministic structures are shown to facilitate a geometry-based divide-and-conquer scheme that aids the optimization task, while the stochastic structures are used to generate an ensemble of mimicries retaining the deterministic structures, which then reveal the robustness of the original optimal solution. Two simulated systems, the Assignment problem and the Traveling Salesman problem, are considered. Beyond demonstrating computational advantages and intrinsic robustness in the two systems, we propose brand-new robust optimal solutions. We believe such robust versions of optimal solutions are potentially more realistic and practical in real-world settings.
A Hybrid Interval-Robust Optimization Model for Water Quality Management.
Xu, Jieyu; Li, Yongping; Huang, Guohe
2013-05-01
In water quality management problems, uncertainties may exist in many system components and pollution-related processes (i.e., random nature of hydrodynamic conditions, variability in physicochemical processes, dynamic interactions between pollutant loading and receiving water bodies, and indeterminacy of available water and treated wastewater). These complexities lead to difficulties in formulating and solving the resulting nonlinear optimization problems. In this study, a hybrid interval-robust optimization (HIRO) method was developed through coupling stochastic robust optimization and interval linear programming. HIRO can effectively reflect the complex system features under uncertainty, where implications of water quality/quantity restrictions for achieving regional economic development objectives are studied. By delimiting the uncertain decision space through dimensional enlargement of the original chemical oxygen demand (COD) discharge constraints, HIRO enhances the robustness of the optimization processes and resulting solutions. This method was applied to planning of industry development in association with river-water pollution concern in New Binhai District of Tianjin, China. Results demonstrated that the proposed optimization model can effectively communicate uncertainties into the optimization process and generate a spectrum of potential inexact solutions supporting local decision makers in managing benefit-effective water quality management schemes. HIRO is helpful for analysis of policy scenarios related to different levels of economic penalties, while also providing insight into the tradeoff between system benefits and environmental requirements.
DEFF Research Database (Denmark)
Dollerup, Niels; Jepsen, Michael S.; Damkilde, Lars
2013-01-01
The article describes a robust and effective implementation of the interior point optimization algorithm. The adopted method includes a precalculation step, which reduces the number of variables by fulfilling the equilibrium equations a priori. This work presents an improved implementation of the ...
DEFF Research Database (Denmark)
Dollerup, Niels; Jepsen, Michael S.; Frier, Christian
2014-01-01
A robust and effective finite element based implementation of lower bound limit state analysis applying an interior point formulation is presented in this paper. The lower bound formulation results in a convex optimization problem consisting of a number of linear constraints from the equilibrium...
DEFF Research Database (Denmark)
Mahmood, Faisal; Gehl, Julie
2011-01-01
and genes to intracranial tumors in humans, and demonstrate a method to optimize the design (i.e. geometry) of the electrode device prototype to improve both clinical performance and geometrical tolerance (robustness). We have employed a semiempirical objective function based on constraints similar to those...... sensitive to random geometrical deviations. The method is readily applicable to other electrode configurations....
Towards a Robuster Interpretive Parsing: learning from overt forms in Optimality Theory
Biró, T.
2013-01-01
The input data to grammar learning algorithms often consist of overt forms that do not contain full structural descriptions. This lack of information may contribute to the failure of learning. Past work on Optimality Theory introduced Robust Interpretive Parsing (RIP) as a partial solution to this
Avoiding Optimal Mean ℓ2,1-Norm Maximization-Based Robust PCA for Reconstruction.
Luo, Minnan; Nie, Feiping; Chang, Xiaojun; Yang, Yi; Hauptmann, Alexander G; Zheng, Qinghua
2017-04-01
Robust principal component analysis (PCA) is one of the most important dimension-reduction techniques for handling high-dimensional data with outliers. However, most existing robust PCA methods presuppose that the mean of the data is zero and incorrectly use the average of the data as the optimal mean of robust PCA. In fact, this assumption holds only for squared ℓ2-norm-based traditional PCA. In this letter, we equivalently reformulate the objective of conventional PCA and learn the optimal projection directions by maximizing the sum of the projected differences between each pair of instances based on the ℓ2,1-norm. The proposed method is robust to outliers and also invariant to rotation. More importantly, the reformulated objective not only automatically avoids the calculation of the optimal mean and makes the assumption of centered data unnecessary, but also theoretically connects to the minimization of reconstruction error. To solve the proposed nonsmooth problem, we exploit an efficient optimization algorithm that softens the contributions from outliers by reweighting each data point iteratively. We theoretically analyze the convergence and computational complexity of the proposed algorithm. Extensive experimental results on several benchmark data sets illustrate the effectiveness and superiority of the proposed method.
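The iterative reweighting idea sketched in the abstract — maximize the sum of ℓ2 norms of projected pairwise differences by alternating between weight updates and an eigen-decomposition — can be written compactly. This is a hedged sketch of that scheme, not the authors' exact implementation; the initialization and iteration count are assumptions. Note that no centering of the data is needed, which is the point of the reformulation.

```python
import numpy as np

def robust_pca_l21_max(X, k, n_iter=30, eps=1e-8):
    """Find an orthonormal projection W (d x k) maximizing the sum of
    l2 norms of projected pairwise differences, via iterative
    reweighting: each pair's weight is the inverse of its current
    projected norm, and W is refit as the top-k eigenvectors of the
    weighted scatter of pairwise differences."""
    n, d = X.shape
    D = X[:, None, :] - X[None, :, :]            # all pairwise differences
    W = np.linalg.qr(np.random.default_rng(0).normal(size=(d, k)))[0]
    for _ in range(n_iter):
        proj = D @ W                              # (n, n, k)
        norms = np.linalg.norm(proj, axis=2)      # ||W^T (x_i - x_j)||_2
        s = 1.0 / (2.0 * norms + eps)             # reweighting factors
        # Weighted scatter matrix of pairwise differences
        S = np.einsum("ij,ija,ijb->ab", s, D, D)
        vals, vecs = np.linalg.eigh(S)
        W = vecs[:, -k:]                          # top-k eigenvectors
    return W

rng = np.random.default_rng(1)
X = rng.normal(size=(40, 5))
W = robust_pca_l21_max(X, k=2)
```

The pairwise tensor is O(n²) in memory, so this sketch only suits small n; a production version would batch the pair sums.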
Robust optimization of robotic pick and place operations for deformable objects through simulation
DEFF Research Database (Denmark)
Bo Jorgensen, Troels; Debrabant, Kristian; Kruger, Norbert
2016-01-01
for the task. The solutions are parameterized in terms of the robot motion and the gripper configuration, and after each simulation various objective scores are determined and combined. This enables the use of various optimization strategies. Based on visual inspection of the most robust solution found...
Approximating the Pareto Set of Multiobjective Linear Programs via Robust Optimization
Gorissen, B.L.; den Hertog, D.
2012-01-01
Abstract: The Pareto set of a multiobjective optimization problem consists of the solutions for which one or more objectives cannot be improved without deteriorating one or more other objectives. We consider problems with linear objectives and linear constraints and use Adjustable Robust
Optimal and Robust Switching Control Strategies : Theory, and Applications in Traffic Management
Hajiahmadi, M.
2015-01-01
Macroscopic modeling, predictive and robust control and route guidance for large-scale freeway and urban traffic networks are the main focus of this thesis. In order to increase the efficiency of our control strategies, we propose several mathematical and optimization techniques. Moreover, in the
Vonk, E.; Xu, YuePing; Booij, Martijn J.; Augustijn, Dionysius C.M.
2016-01-01
In this research we investigate the robustness of the common implicit stochastic optimization (ISO) method for dam reoperation. As a case study, we focus on the Xinanjiang-Fuchunjiang reservoir cascade in eastern China, for which adapted operating rules were proposed as a means to reduce the impact
Kleijnen, J.P.C.; Gaury, E.G.A.
2001-01-01
Whereas Operations Research has always paid much attention to optimization, practitioners judge the robustness of the 'optimum' solution to be of greater importance. Therefore this paper proposes a practical methodology that is a stagewise combination of the following four proven techniques: (1)
An H(∞) control approach to robust learning of feedforward neural networks.
Jing, Xingjian
2011-09-01
A novel H(∞) robust control approach is proposed in this study to deal with the learning problems of feedforward neural networks (FNNs). The analysis and design of a desired weight update law for the FNN is transformed into a robust controller design problem for a discrete dynamic system in terms of the estimation error. The drawbacks of some existing learning algorithms can therefore be revealed, especially for cases where the output data changes rapidly with respect to the input or is corrupted by noise. Based on this approach, the optimal learning parameters can be found by utilizing linear matrix inequality (LMI) optimization techniques to achieve a predefined H(∞) "noise" attenuation level. Several existing BP-type algorithms are shown to be special cases of the new H(∞)-learning algorithm. Theoretical analysis and several examples are provided to show the advantages of the new method. Copyright © 2011 Elsevier Ltd. All rights reserved.
International Nuclear Information System (INIS)
Koo, Jamin; Han, Kyusang; Yoon, En Sup
2011-01-01
In this paper, a new approach has been proposed that allows a robust optimization of sustainable energy planning over a period of years. It is based on the modified energy flow optimization model (EFOM) and minimizes total costs in planning capacities of power plants and CCS to be added, stripped or retrofitted. In the process, it reduces risks due to a high volatility in fuel prices; it also provides robustness against infeasibility with respect to meeting the required emission level by adopting a penalty constant that corresponds to the price level of emission allowances. In this manner, the proposed methodology enables decision makers to determine the optimal capacities of power plants and/or CCS, as well as volumes of emissions trading in the future that will meet the required emission level and satisfy energy demand from various user-sections with minimum costs and maximum robustness. They can also gain valuable insights on the effects that the price of emission allowances has on the competitiveness of RES and CCS technologies; it may be used in, for example, setting appropriate subsidies and tax policies for promoting greater use of these technologies. The proposed methodology is applied to a case based on directions and volumes of energy flows in South Korea during the year 2008. (author)
Robust regularized least-squares beamforming approach to signal estimation
Suliman, Mohamed Abdalla Elhag; Ballal, Tarig; Al-Naffouri, Tareq Y.
2017-01-01
In this paper, we address the problem of robust adaptive beamforming of signals received by a linear array. The challenge associated with the beamforming problem is twofold. Firstly, the process requires the inversion of the usually ill
A robust data-driven approach for gene ontology annotation.
Li, Yanpeng; Yu, Hong
2014-01-01
Gene ontology (GO) and GO annotation are important resources for biological information management and knowledge discovery, but the speed of manual annotation has become a major bottleneck of database curation. The BioCreative IV GO annotation task aims to evaluate the performance of systems that automatically assign GO terms to genes based on narrative sentences in the biomedical literature. This article presents our work in this task as well as the experimental results after the competition. For the evidence sentence extraction subtask, we built a binary classifier to identify evidence sentences using a reference distance estimator (RDE), a recently proposed semi-supervised learning method that learns new features from around 10 million unlabeled sentences, achieving an F1 of 19.3% in exact match and 32.5% in relaxed match. In the post-submission experiment, we obtained 22.1% and 35.7% F1 performance by incorporating bigram features in RDE learning. In both development and test sets, the RDE-based method achieved over 20% relative improvement in F1 and AUC performance against classical supervised learning methods, e.g. support vector machines and logistic regression. For the GO term prediction subtask, we developed an information retrieval-based method to retrieve the GO term most relevant to each evidence sentence, using a ranking function that combined cosine similarity and the frequency of GO terms in documents, and a filtering method based on high-level GO classes. The best performance of our submitted runs was 7.8% F1 and 22.2% hierarchy F1. We found that the incorporation of frequency information and hierarchy filtering substantially improved the performance. In the post-submission evaluation, we obtained a 10.6% F1 using a simpler setting. Overall, the experimental analysis showed our approaches were robust in both tasks. © The Author(s) 2014. Published by Oxford University Press.
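The retrieval step described in the abstract — rank GO terms against an evidence sentence by combining cosine similarity with term frequency in documents — can be sketched as below. The multiplicative combination and log weighting are assumptions for illustration; the paper does not give the exact formula.

```python
import math
from collections import Counter

def cosine(a: Counter, b: Counter) -> float:
    """Bag-of-words cosine similarity between two token-count vectors."""
    num = sum(a[t] * b[t] for t in set(a) & set(b))
    den = (math.sqrt(sum(v * v for v in a.values()))
           * math.sqrt(sum(v * v for v in b.values())))
    return num / den if den else 0.0

def rank_go_terms(sentence, go_terms, doc_freq):
    """Score each GO term name against an evidence sentence by cosine
    similarity, weighted by how often the term appears in documents,
    and return terms sorted best-first."""
    s_vec = Counter(sentence.lower().split())
    scored = []
    for term in go_terms:
        t_vec = Counter(term.lower().split())
        score = cosine(s_vec, t_vec) * math.log(1 + doc_freq.get(term, 0))
        scored.append((score, term))
    return [t for _, t in sorted(scored, reverse=True)]

terms = ["DNA binding", "lipid transport"]
freq = {"DNA binding": 10, "lipid transport": 10}
ranked = rank_go_terms("The protein binds DNA during transcription",
                       terms, freq)
```

A real system would also apply the high-level-class filtering mentioned in the abstract before ranking.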
A simple and robust approach to immobilization of antibody fragments.
Ikonomova, Svetlana P; He, Ziming; Karlsson, Amy J
2016-08-01
Antibody fragments, such as the single-chain variable fragment (scFv), have much potential in research and diagnostics because of their antigen-binding ability similar to a full-sized antibody and their ease of production in microorganisms. Some applications of antibody fragments require immobilization on a surface, and we have established a simple immobilization method that is based on the biotin-streptavidin interaction and does not require a separate purification step. We genetically fused two biotinylation tags, the biotin carboxyl carrier protein (BCCP) or the AviTag minimal sequence, to six different scFvs (scFv13R4, scFvD10, scFv26-10, scFv3, scFv5, and scFv12) for site-specific biotinylation in vivo by endogenous biotin ligases produced by Escherichia coli. The biotinylated scFvs were immobilized onto streptavidin-coated plates directly from cell lysates, and immobilization was detected through enzyme-linked immunosorbent assays. All scFv fusions were successfully immobilized, and scFvs biotinylated via the BCCP tag tended to immobilize better than those biotinylated via the AviTag, even when biotinylation efficiency was improved with the biotin ligase BirA. The ability of immobilized scFvs to bind antigens was confirmed using scFv13R4 and scFvD10 with their respective targets β-galactosidase and bacteriophage lambda head protein D (gpD). The immobilized scFv13R4 bound to β-galactosidase at the same level for both biotinylation tags when the surface was saturated with the scFv, and immobilized scFvs retained their functionality for at least 100 days after immobilization. The simplicity and robustness of our method make it a promising approach for future applications that require antibody fragment immobilization. Copyright © 2016 Elsevier B.V. All rights reserved.
SU-E-T-07: 4DCT Robust Optimization for Esophageal Cancer Using Intensity Modulated Proton Therapy
Energy Technology Data Exchange (ETDEWEB)
Liao, L [Proton Therapy Center, UT MD Anderson Cancer Center, Houston, TX (United States); Department of Industrial Engineering, University of Houston, Houston, TX (United States); Yu, J; Zhu, X; Li, H; Zhang, X [Proton Therapy Center, UT MD Anderson Cancer Center, Houston, TX (United States); Li, Y [Proton Therapy Center, UT MD Anderson Cancer Center, Houston, TX (United States); Varian Medical Systems, Houston, TX (United States); Lim, G [Department of Industrial Engineering, University of Houston, Houston, TX (United States)
2015-06-15
Purpose: To develop a 4DCT robust optimization method to reduce the dosimetric impact of respiratory motion in intensity modulated proton therapy (IMPT) for esophageal cancer. Methods: Four esophageal cancer patients were selected for this study. The different CT phases from a set of 4DCT were incorporated into the worst-case dose distribution robust optimization algorithm. 4DCT robust treatment plans were designed and compared with conventional non-robust plans. Resulting doses were calculated on the average and maximum inhale/exhale phases of the 4DCT. Dose volume histogram (DVH) band graphics and the ΔD95%, ΔD98%, ΔD5% and ΔD2% of the CTV between different phases were used to evaluate the robustness of the plans. Results: Compared to IMPT plans optimized using conventional methods, the 4DCT robust IMPT plans achieve the same quality in nominal cases, while yielding better robustness to breathing motion. The mean ΔD95%, ΔD98%, ΔD5% and ΔD2% of the CTV were 6%, 3.2%, 0.9% and 1% for the robustly optimized plans vs. 16.2%, 11.8%, 1.6% and 3.3% for the conventional non-robust plans. Conclusion: A 4DCT robust optimization method was proposed for esophageal cancer using IMPT. We demonstrate that 4DCT robust optimization can mitigate the dose deviation caused by diaphragm motion.
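The worst-case dose composition used in robustness studies like the one above can be sketched in a few lines: per voxel, take the lowest dose over all scenarios inside the target (cold spots matter there) and the highest outside it (hot spots matter there). The `delta_d` helper for ΔD-style indices is an illustrative assumption; the paper's exact evaluation details are not given.

```python
import numpy as np

def worst_case_dose(dose_scenarios, ctv_mask):
    """Worst-case composite: minimum scenario dose inside the CTV,
    maximum scenario dose outside it. `dose_scenarios` is
    (n_scenarios, n_voxels); `ctv_mask` is a boolean voxel mask."""
    doses = np.asarray(dose_scenarios, dtype=float)
    return np.where(ctv_mask, doses.min(axis=0), doses.max(axis=0))

def delta_d(dose_a, dose_b, mask, level=95):
    """Difference in D-level between two dose distributions: D95% is the
    dose received by at least 95% of the structure, i.e. the 5th
    percentile of voxel doses within the mask."""
    da = np.percentile(dose_a[mask], 100 - level)
    db = np.percentile(dose_b[mask], 100 - level)
    return da - db

# Toy example: 3 scenarios, 2 CTV voxels and 1 normal-tissue voxel.
doses = np.array([[60.0, 61.0, 5.0],
                  [58.0, 62.0, 6.0],
                  [59.0, 60.0, 7.0]])
ctv = np.array([True, True, False])
wc = worst_case_dose(doses, ctv)
```

Here `wc` picks 58 and 60 (scenario minima) for the CTV voxels and 7 (scenario maximum) for the normal-tissue voxel.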
A mean–variance objective for robust production optimization in uncertain geological scenarios
DEFF Research Database (Denmark)
Capolei, Andrea; Suwartadi, Eka; Foss, Bjarne
2014-01-01
directly. In the mean–variance bi-criterion objective function risk appears directly, it also considers an ensemble of reservoir models, and has robust optimization as a special extreme case. The mean–variance objective is common for portfolio optimization problems in finance. The Markowitz portfolio...... optimization problem is the original and simplest example of a mean–variance criterion for mitigating risk. Risk is mitigated in oil production by including both the expected NPV (mean of NPV) and the risk (variance of NPV) for the ensemble of possible reservoir models. With the inclusion of the risk...
Directory of Open Access Journals (Sweden)
Xianfu Cheng
2014-01-01
Full Text Available The performance of the suspension system is one of the most important factors in the vehicle design. For the double wishbone suspension system, the conventional deterministic optimization does not consider any deviations of design parameters, so design sensitivity analysis and robust optimization design are proposed. In this study, the design parameters of the robust optimization are the positions of the key points, and the random factors are the uncertainties in manufacturing. A simplified model of the double wishbone suspension is established by software ADAMS. The sensitivity analysis is utilized to determine main design variables. Then, the simulation experiment is arranged and the Latin hypercube design is adopted to find the initial points. The Kriging model is employed for fitting the mean and variance of the quality characteristics according to the simulation results. Further, a particle swarm optimization method based on simple PSO is applied and the tradeoff between the mean and deviation of performance is made to solve the robust optimization problem of the double wishbone suspension system.
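The robust-optimization step described above — trading off the mean and the deviation of a performance measure under manufacturing uncertainty, solved with a simple PSO — can be sketched as follows. The Kriging surrogate of the paper is replaced here by direct Monte Carlo sampling, and the weights, noise level and swarm constants are illustrative assumptions.

```python
import numpy as np

def robust_pso(perf, lb, ub, n_particles=20, n_iter=60, n_samples=50,
               w_mean=0.5, w_std=0.5, seed=0):
    """Particle swarm minimization of a weighted sum of the mean and
    standard deviation of `perf` under Gaussian parameter noise,
    mimicking robust design under manufacturing deviations."""
    rng = np.random.default_rng(seed)
    lb, ub = np.asarray(lb, float), np.asarray(ub, float)
    dim = len(lb)

    def robust_obj(x):
        # Monte Carlo estimate of mean/deviation of the quality measure.
        samples = perf(x + rng.normal(scale=0.05, size=(n_samples, dim)))
        return w_mean * samples.mean() + w_std * samples.std()

    pos = rng.uniform(lb, ub, size=(n_particles, dim))
    vel = np.zeros_like(pos)
    pbest = pos.copy()
    pbest_f = np.array([robust_obj(p) for p in pos])
    gbest = pbest[pbest_f.argmin()].copy()
    for _ in range(n_iter):
        r1, r2 = rng.random((2, n_particles, dim))
        vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, lb, ub)
        f = np.array([robust_obj(p) for p in pos])
        better = f < pbest_f
        pbest[better], pbest_f[better] = pos[better], f[better]
        gbest = pbest[pbest_f.argmin()].copy()
    return gbest

# Example: robust minimum of a 1-D function with several valleys.
f = lambda x: np.sin(5 * x[..., 0]) + 0.1 * x[..., 0] ** 2
x_star = robust_pso(f, lb=[-3], ub=[3])
```

Because the objective penalizes the standard deviation under noise, flat valleys are preferred over sharp ones, which is the essence of the robustness tradeoff in the abstract.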
Robust and Reliable Portfolio Optimization Formulation of a Chance Constrained Problem
Directory of Open Access Journals (Sweden)
Sengupta Raghu Nandan
2017-02-01
Full Text Available We solve a linear chance constrained portfolio optimization problem using the Robust Optimization (RO) method, wherein financial script/asset loss return distributions are considered as extreme valued. The objective function is a convex combination of the portfolio's CVaR and the expected value of loss return, subject to a set of randomly perturbed chance constraints with specified probability values. The robust deterministic counterpart of the model takes the form of a Second Order Cone Programming (SOCP) problem. Results from extensive simulation runs show the efficacy of our proposed models, as they help the investor to (i) utilize extensive simulation studies to draw insights into the effect of randomness in the portfolio decision making process, (ii) incorporate different risk appetite scenarios to find the optimal solutions for the financial portfolio allocation problem and (iii) compare the risk and return profiles of the investments made in both deterministic as well as uncertain and highly volatile financial markets.
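The deterministic core of the objective above — a convex combination of CVaR and expected loss over sampled scenarios — can be sketched as a Rockafellar-Uryasev linear program. This is a hedged sketch: the paper's randomly perturbed chance constraints and the SOCP robust counterpart are omitted, and the two-asset data below are synthetic.

```python
import numpy as np
from scipy.optimize import linprog

def cvar_portfolio(loss_scenarios, beta=0.95, lam=0.5):
    """Minimize lam*CVaR_beta + (1-lam)*E[loss] over long-only weights
    summing to 1, using the Rockafellar-Uryasev LP: auxiliary variable
    alpha (the VaR level) and tail excesses u_s >= L_s.w - alpha."""
    S, n = loss_scenarios.shape
    mean_loss = loss_scenarios.mean(axis=0)
    # Decision vector: [w (n), alpha (1), u (S)]
    c = np.concatenate([(1 - lam) * mean_loss,
                        [lam],
                        lam / ((1 - beta) * S) * np.ones(S)])
    # L_s . w - alpha - u_s <= 0  for every scenario s
    A_ub = np.hstack([loss_scenarios, -np.ones((S, 1)), -np.eye(S)])
    b_ub = np.zeros(S)
    A_eq = np.concatenate([np.ones(n), [0.0], np.zeros(S)])[None, :]
    bounds = [(0, None)] * n + [(None, None)] + [(0, None)] * S
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
                  bounds=bounds)
    return res.x[:n]

# Two assets with equal expected loss but very different tail risk:
# the CVaR term should push the weight toward the low-volatility asset.
rng = np.random.default_rng(1)
losses = np.column_stack([rng.normal(0.0, 0.20, 500),
                          rng.normal(0.0, 0.02, 500)])
w = cvar_portfolio(losses)
```

Swapping the sampled `losses` for extreme-valued draws, as the paper does, changes only the scenario matrix, not the LP structure.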
RSMDP-based Robust Q-learning for Optimal Path Planning in a Dynamic Environment
Directory of Open Access Journals (Sweden)
Yunfei Zhang
2014-07-01
Full Text Available This paper presents a robust Q-learning method for path planning in a dynamic environment. The method consists of three steps: first, a regime-switching Markov decision process (RSMDP) is formed to represent the dynamic environment; second, a probabilistic roadmap (PRM) is constructed, integrated with the RSMDP and stored as a graph whose nodes correspond to a collision-free world state for the robot; and third, an online Q-learning method with dynamic stepsize, which facilitates robust convergence of the Q-value iteration, is integrated with the PRM to determine an optimal path for reaching the goal. In this manner, the robot is able to use past experience to improve its performance in avoiding not only static obstacles but also moving obstacles, without knowing the nature of the obstacle motion. The use of regime switching in the avoidance of obstacles with unknown motion is particularly innovative. The developed approach is applied to a homecare robot in computer simulation. The results show that the online path planner with Q-learning is able to rapidly and successfully converge to the correct path.
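The third step above — online Q-learning on a roadmap graph with a dynamic stepsize — can be sketched on a tiny static graph. The regime-switching MDP, moving obstacles, and the paper's exact stepsize schedule are omitted; the harmonic stepsize (1 over the visit count), rewards, and exploration rate are illustrative assumptions.

```python
import random
from collections import defaultdict

def q_learning_path(graph, start, goal, episodes=800, gamma=0.95, seed=0):
    """Learn a path on a roadmap graph (adjacency-list dict) with
    epsilon-greedy Q-learning and a per-pair dynamic stepsize, then
    extract the greedy path from start to goal."""
    rng = random.Random(seed)
    Q = defaultdict(float)
    visits = defaultdict(int)
    for _ in range(episodes):
        s = start
        for _ in range(100):                       # cap episode length
            if s == goal:
                break
            a = (rng.choice(graph[s]) if rng.random() < 0.2
                 else max(graph[s], key=lambda n: Q[(s, n)]))
            reward = 10.0 if a == goal else -1.0   # step cost, goal bonus
            visits[(s, a)] += 1
            alpha = 1.0 / visits[(s, a)]           # dynamic stepsize
            best_next = max(Q[(a, n)] for n in graph[a]) if graph[a] else 0.0
            Q[(s, a)] += alpha * (reward + gamma * best_next - Q[(s, a)])
            s = a
    path, s = [start], start
    while s != goal and len(path) < 20:
        s = max(graph[s], key=lambda n: Q[(s, n)])
        path.append(s)
    return path

roadmap = {0: [1, 2], 1: [0, 3], 2: [0, 3], 3: [1, 2, 4], 4: [3]}
path = q_learning_path(roadmap, 0, 4)
```

On this deterministic graph the decaying stepsize makes the Q-values settle, and the extracted greedy path goes from node 0 to node 4 via node 3.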
Robust synthetic biology design: stochastic game theory approach.
Chen, Bor-Sen; Chang, Chia-Hung; Lee, Hsiao-Ching
2009-07-15
Synthetic biology engineers artificial biological systems to investigate natural biological phenomena and for a variety of applications. However, the development of synthetic gene networks is still difficult, and most newly created gene networks are non-functioning due to uncertain initial conditions and disturbances from the extra-cellular environment on the host cell. At present, how to design a robust synthetic gene network that works properly under these uncertain factors is the most important topic in synthetic biology. A robust regulation design is proposed for a stochastic synthetic gene network to achieve the prescribed steady states under these uncertain factors from the minimax regulation perspective. This minimax regulation design problem can be transformed into an equivalent stochastic game problem. Since it is not easy to solve the robust regulation design problem of synthetic gene networks by the non-linear stochastic game method directly, the Takagi-Sugeno (T-S) fuzzy model is proposed to approximate the non-linear synthetic gene network via the linear matrix inequality (LMI) technique through the Robust Control Toolbox in Matlab. Finally, an in silico example is given to illustrate the design procedure and to confirm the efficiency and efficacy of the proposed robust gene design method. http://www.ee.nthu.edu.tw/bschen/SyntheticBioDesign_supplement.pdf.
Robust Frequency-Domain Constrained Feedback Design via a Two-Stage Heuristic Approach.
Li, Xianwei; Gao, Huijun
2015-10-01
Based on a two-stage heuristic method, this paper is concerned with the design of robust feedback controllers with restricted frequency-domain specifications (RFDSs) for uncertain linear discrete-time systems. Polytopic uncertainties are assumed to enter all the system matrices, while RFDSs are motivated by the fact that practical design specifications are often described in restricted finite frequency ranges. Dilated multipliers are first introduced to relax the generalized Kalman-Yakubovich-Popov lemma for output feedback controller synthesis and robust performance analysis. Then a two-stage approach to output feedback controller synthesis is proposed: at the first stage, a robust full-information (FI) controller is designed, which is used to construct a required output feedback controller at the second stage. To improve the solvability of the synthesis method, heuristic iterative algorithms are further formulated for exploring the feedback gain and optimizing the initial FI controller at the individual stage. The effectiveness of the proposed design method is finally demonstrated by the application to active control of suspension systems.
Stisen, S.; Demirel, C.; Koch, J.
2017-12-01
Evaluation of performance is an integral part of model development and calibration, and it is of paramount importance when communicating modelling results to stakeholders and the scientific community. The hydrological modelling community has a comprehensive and well-tested toolbox of metrics to assess temporal model performance. In contrast, experience in evaluating spatial performance has not kept pace with the wide availability of spatial observations or with the sophisticated model codes simulating the spatial variability of complex hydrological processes. This study aims to make a contribution towards advancing spatial-pattern-oriented model evaluation for distributed hydrological models. This is achieved by introducing a novel spatial performance metric which provides robust pattern performance during model calibration. The promoted SPAtial EFficiency (SPAEF) metric reflects three equally weighted components: correlation, coefficient of variation and histogram overlap. This multi-component approach is necessary in order to adequately compare spatial patterns. SPAEF, its three components individually, and two alternative spatial performance metrics, i.e. connectivity analysis and fractions skill score, are tested in a spatial-pattern-oriented model calibration of a catchment model in Denmark. The calibration is constrained by a remote-sensing-based spatial pattern of evapotranspiration and discharge time series at two stations. Our results stress that stand-alone metrics tend to fail to provide holistic pattern information to the optimizer, which underlines the importance of multi-component metrics. The three SPAEF components are independent, which allows them to complement each other in a meaningful way. This study promotes the use of bias-insensitive metrics which allow comparing variables that are related but may differ in unit, in order to optimally exploit spatial observations made available by remote sensing.
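The three-component metric described above combines correlation, a coefficient-of-variation ratio, and histogram overlap of z-scored patterns into a single efficiency score. A minimal sketch follows the published SPAEF definition (1 minus the Euclidean distance of the three components from their ideal value of 1); the bin count and the synthetic pattern are illustrative assumptions.

```python
import numpy as np

def spaef(obs, sim, bins=100):
    """SPAtial EFficiency: alpha = Pearson correlation, beta = ratio of
    coefficients of variation (sim/obs), gamma = histogram intersection
    of z-scored patterns. Perfect agreement gives 1."""
    obs, sim = np.ravel(obs), np.ravel(sim)
    alpha = np.corrcoef(obs, sim)[0, 1]
    beta = (np.std(sim) / np.mean(sim)) / (np.std(obs) / np.mean(obs))
    # z-scoring makes the histogram comparison insensitive to bias/units
    z_obs = (obs - obs.mean()) / obs.std()
    z_sim = (sim - sim.mean()) / sim.std()
    lo = min(z_obs.min(), z_sim.min())
    hi = max(z_obs.max(), z_sim.max())
    h_obs, _ = np.histogram(z_obs, bins=bins, range=(lo, hi))
    h_sim, _ = np.histogram(z_sim, bins=bins, range=(lo, hi))
    gamma = np.minimum(h_obs, h_sim).sum() / h_obs.sum()
    return 1.0 - np.sqrt((alpha - 1) ** 2
                         + (beta - 1) ** 2
                         + (gamma - 1) ** 2)

# A synthetic spatial pattern, e.g. simulated evapotranspiration.
rng = np.random.default_rng(0)
pattern = rng.gamma(2.0, 1.0, size=(50, 50))
```

An identical pattern scores 1, while an unrelated pattern with the same distribution is pulled down by the correlation component alone, illustrating why the components complement each other.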
A Probabilistic Approach for Robustness Evaluation of Timber Structures
DEFF Research Database (Denmark)
Kirkegaard, Poul Henning; Sørensen, John Dalsgaard
A probabilistic based robustness analysis has been performed for a glulam frame structure supporting the roof over the main court in a Norwegian sports centre. The robustness analysis is based on the framework for robustness analysis introduced in the Danish Code of Practice for the Safety of Structures and on a probabilistic modelling of the timber material proposed in the Probabilistic Model Code (PMC) of the Joint Committee on Structural Safety (JCSS). Due to the framework in the Danish Code, the timber structure has to be evaluated with respect to the following criteria, where at least one shall...... to criteria a) and b) the timber frame structure has one column with a reliability index a bit lower than an assumed target level. By removing three columns one by one, no significant extensive failure of the entire structure or of significant parts of it was obtained. Therefore the structure can be considered......
Liu, W; Mohan, R
2012-06-01
Proton dose distributions, IMPT in particular, are highly sensitive to setup and range uncertainties. We report a novel method, based on the per-voxel standard deviation (SD) of dose distributions, to evaluate the robustness of proton plans and to robustly optimize IMPT plans to render them less sensitive to uncertainties. For each optimization iteration, nine dose distributions are computed: the nominal one, and one each for ± setup uncertainties along the x, y and z axes and for ± range uncertainty. The SD of dose in each voxel is used to create an SD-volume histogram (SVH) for each structure. The SVH may be considered a quantitative representation of the robustness of the dose distribution. For optimization, the desired robustness may be specified in terms of an SD-volume (SV) constraint on the CTV and incorporated as a term in the objective function. Results of optimization with and without this constraint were compared in terms of plan optimality and robustness using the so-called 'worst case' dose distributions, which are obtained by assigning the lowest among the nine doses to each voxel in the clinical target volume (CTV) and the highest to normal tissue voxels outside the CTV. The SVH curve and the area under it for each structure were used as quantitative measures of robustness. The penalty parameter of the SV constraint may be varied to control the tradeoff between robustness and plan optimality. We applied these methods to one case each of H&N and lung. In both cases, we found that imposing the SV constraint improved plan robustness, but at the cost of normal tissue sparing. SVH-based optimization and evaluation is an effective tool for robustness evaluation and robust optimization of IMPT plans. Studies need to be conducted to test the methods for larger cohorts of patients and for other sites. This research is supported by National Cancer Institute (NCI) grant P01CA021239, the University Cancer Foundation via the Institutional Research Grant program at the University of Texas MD
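The SVH evaluation described above reduces to two array operations: the per-voxel SD of dose over the nine scenario distributions, then the fraction of a structure's volume whose SD exceeds each threshold, with the area under that curve as a robustness summary. A minimal sketch on synthetic data (the grid resolution and toy scenarios are assumptions):

```python
import numpy as np

def svh(dose_scenarios, mask, n_points=50):
    """SD-volume histogram: per-voxel SD of dose over the scenario set
    (nominal, +/- setup along x/y/z, +/- range), then the volume
    fraction of the masked structure with SD >= each threshold.
    Returns (thresholds, volume fractions, area under the curve)."""
    sd = np.std(np.asarray(dose_scenarios, dtype=float), axis=0)[mask]
    sd_grid = np.linspace(0.0, sd.max(), n_points)
    volume_frac = np.array([(sd >= s).mean() for s in sd_grid])
    # Trapezoidal area under the SVH curve as a robustness summary
    area = np.sum(np.diff(sd_grid)
                  * (volume_frac[:-1] + volume_frac[1:]) / 2.0)
    return sd_grid, volume_frac, area

# Toy case: 9 scenarios over 10 voxels; the first 5 voxels vary with
# the scenario, the last 5 receive the same dose in every scenario.
scenarios = np.outer(np.arange(9), np.r_[np.ones(5), np.zeros(5)])
mask = np.ones(10, dtype=bool)
grid, frac, area = svh(scenarios, mask)
```

A perfectly robust plan has zero SD everywhere, so its SVH drops to zero immediately and the area vanishes; larger areas indicate dose distributions more sensitive to the uncertainties.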
A Knowledge-Based Approach to Robust Parsing
Oltmans, J.A.
2000-01-01
The research presented in this thesis describes the design, implementation and evaluation of a natural-language processing system that is used as part of an information retrieval system. Specifically, I focus on the development of a system that performs robust syntactic analysis of scientific texts
Poder, Joel; Whitaker, May
2016-06-01
Inverse planning simulated annealing (IPSA) optimized brachytherapy treatment plans are characterized by large isolated dwell times at the first or last dwell position of each catheter. Potential catheter shifts relative to the target and organs at risk may lead to a more significant change in delivered dose to the volumes of interest in these plans than in plans with more uniform dwell times. This study aims to determine whether the Nucletron Oncentra dwell time deviation constraint (DTDC) parameter can be optimized to improve the robustness of high-dose-rate (HDR) prostate brachytherapy plans to catheter displacements. A set of 10 clinically acceptable prostate plans were re-optimized with a DTDC parameter of 0 and of 0.4. For each plan, catheter displacements of 3, 7, and 14 mm were retrospectively applied and the changes in dose volume histogram (DVH) indices and conformity indices were analyzed. The robustness of clinically acceptable prostate plans to catheter displacements in the caudal direction was found to be dependent on the DTDC parameter. A DTDC value of 0 improves the robustness of planning target volume (PTV) coverage to catheter displacements, whereas a DTDC value of 0.4 improves the robustness of the plans to changes in hotspots. The results indicate that if used in conjunction with a pre-treatment catheter displacement correction protocol and a tolerance of 3 mm, a DTDC value of 0.4 may produce clinically superior plans. However, the effect of the DTDC parameter on plan robustness was not observed to be as strong as initially suspected.
Robust Video Stabilization Using Particle Keypoint Update and l1-Optimized Camera Path
Directory of Open Access Journals (Sweden)
Semi Jeon
2017-02-01
Full Text Available Acquisition of stabilized video is an important issue for various types of digital cameras. This paper presents an adaptive camera path estimation method using robust feature detection to remove shaky artifacts from a video. The proposed algorithm consists of three steps: (i) robust feature detection using particle keypoints between adjacent frames; (ii) camera path estimation and smoothing; and (iii) rendering to reconstruct a stabilized video. As a result, the proposed algorithm can estimate the optimal homography by redefining important feature points in flat regions using particle keypoints. In addition, stabilized frames with fewer holes can be generated from the optimal, adaptive camera path that minimizes a temporal total variation (TV). The proposed video stabilization method is suitable for enhancing the visual quality of various portable cameras and can be applied to robot vision, driving assistance systems, and visual surveillance systems.
Robust state feedback controller design of STATCOM using chaotic optimization algorithm
Directory of Open Access Journals (Sweden)
Safari Amin
2010-01-01
Full Text Available In this paper, a new technique for the design of a robust state feedback controller for the static synchronous compensator (STATCOM) using the Chaotic Optimization Algorithm (COA) is presented. The design is formulated as an optimization problem which is solved by the COA. Since chaotic planning enjoys reliability, ergodicity and stochastic features, the proposed technique employs Lozi-map chaotic sequences, which increases its convergence rate. To ensure the robustness of the proposed damping controller, the design process takes into account a wide range of operating conditions and system configurations. The simulation results reveal that the proposed controller has an excellent capability in damping power system low-frequency oscillations and greatly enhances the dynamic stability of power systems. Moreover, the system performance analysis under different operating conditions shows that the phase-based controller is superior compared to the magnitude-based controller.
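The core idea of a chaotic optimization algorithm is to replace pseudo-random sampling with iterates of a chaotic map such as the Lozi map. A minimal sketch follows; the search routine, parameter values, and the rescaling of the chaotic iterates to the search box are illustrative assumptions, not the COA variant used in the paper.

```python
import numpy as np

def lozi_sequence(n, a=1.7, b=0.5, x0=0.1, y0=0.1):
    """n iterates of the Lozi map x' = 1 - a|x| + y, y' = b*x, rescaled to [0, 1]."""
    xs = np.empty(n)
    x, y = x0, y0
    for i in range(n):
        x, y = 1.0 - a * abs(x) + y, b * x
        xs[i] = x
    return (xs - xs.min()) / (xs.max() - xs.min())

def chaotic_search(f, bounds, iters=3000):
    """Minimize f over a box by sampling candidates from chaotic streams,
    one per dimension, decorrelated via different initial states."""
    lo = np.array([b[0] for b in bounds], float)
    hi = np.array([b[1] for b in bounds], float)
    streams = np.column_stack(
        [lozi_sequence(iters, x0=0.1 + 0.03 * d) for d in range(len(bounds))])
    best_x, best_f = None, np.inf
    for u in streams:
        x = lo + u * (hi - lo)
        fx = f(x)
        if fx < best_f:
            best_x, best_f = x, fx
    return best_x, best_f

x_best, f_best = chaotic_search(lambda v: (v[0] - 1.0) ** 2 + (v[1] + 2.0) ** 2,
                                [(-5.0, 5.0), (-5.0, 5.0)])
```

The ergodicity of the Lozi map is what lets the sequence stand in for a random-number generator while remaining fully deterministic and reproducible.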
Robust non-gradient C subroutines for non-linear optimization
DEFF Research Database (Denmark)
Brock, Pernille; Madsen, Kaj; Nielsen, Hans Bruun
2004-01-01
This report presents a package of robust and easy-to-use C subroutines for solving unconstrained and constrained non-linear optimization problems, where gradient information is not required. The intention is that the routines should use the best algorithms currently available. All routines have...... subroutines are obtained by changing 0 to 1. The present report is a new and updated version of a previous report, NI-91-04, with the title Non-gradient C Subroutines for Non-Linear Optimization [16]. Both the previous and the present report describe a collection of subroutines which have been translated...... from Fortran to C. The reason for writing the present report is that some of the C subroutines have been replaced by more effective and robust versions, translated from the original Fortran subroutines to C by the Bandler Group, see [1]. Also, the test examples have been modified to some extent......
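The class of methods this package implements — non-gradient (derivative-free) optimization — can be illustrated with a tiny compass (coordinate) search. This sketch is a generic textbook method in Python, not a translation of the report's C subroutines; the function and parameter names are my own.

```python
import numpy as np

def compass_search(f, x0, step=0.5, tol=1e-8, max_iter=10000):
    """Derivative-free compass search: try +/- step along each coordinate
    axis; halve the step whenever no axis move improves the objective."""
    x = np.asarray(x0, dtype=float)
    fx = f(x)
    for _ in range(max_iter):
        improved = False
        for i in range(x.size):
            for s in (step, -step):
                trial = x.copy()
                trial[i] += s
                ft = f(trial)
                if ft < fx:          # accept any improving move
                    x, fx, improved = trial, ft, True
        if not improved:
            step *= 0.5              # refine the mesh
            if step < tol:
                break
    return x, fx

x_min, f_min = compass_search(lambda v: (v[0] - 3.0) ** 2 + 2.0 * (v[1] + 1.0) ** 2,
                              [0.0, 0.0])
```

No gradients are evaluated anywhere, which is exactly the situation the report targets: objectives given only as black-box function calls.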
The French biofuels mandates under cost uncertainty - an assessment based on robust optimization
International Nuclear Information System (INIS)
Lorne, Daphne; Tchung-Ming, Stephane
2012-01-01
This paper investigates the impact of primary energy and technology cost uncertainty on the achievement of renewable and especially biofuel policies - mandates and norms - in France by 2030. A robust optimization technique that can deal with uncertainty sets of high dimensionality is implemented in a TIMES-based long-term planning model of the French transport and electricity energy sectors. The energy system costs and the potential benefits (GHG emission abatements, diversification) of the French renewable mandates are assessed within this framework. The results of this systemic analysis highlight how setting norms and mandates reduces the variability of CO2 emission reductions and supply-mix diversification when the costs of technological progress and prices are uncertain. Beyond that, we discuss the usefulness of robust optimization as a complement to other techniques for integrating uncertainty into large-scale energy models. (authors)
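The robust-optimization principle behind such cost-uncertain planning models can be shown on a toy LP. With nonnegative decision variables and box uncertainty on the objective coefficients, the worst case is attained at the upper cost bound, so the robust counterpart is an ordinary LP. This is a deliberately minimal sketch with invented numbers, not the TIMES model or the high-dimensional technique of the paper.

```python
import numpy as np
from scipy.optimize import linprog

# Two supply options meeting a demand x1 + x2 >= 10, written as -x1 - x2 <= -10.
A_ub = [[-1.0, -1.0]]
b_ub = [-10.0]
bounds = [(0.0, None)] * 2

c_nom = np.array([2.0, 3.0])   # nominal unit costs (illustrative)
delta = np.array([1.5, 0.1])   # cost-uncertainty half-widths (illustrative)

# With x >= 0 and c in [c_nom - delta, c_nom + delta], the worst-case cost
# is attained at c_nom + delta, so the robust problem is a plain LP with
# the upper-bound costs.
nominal = linprog(c_nom, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
robust = linprog(c_nom + delta, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
```

The nominal plan picks the cheap-but-uncertain first option, while the robust plan switches to the slightly dearer but stable second option — the same variability-reducing effect the paper observes for mandates under cost uncertainty.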
Hernandez, Monica
2017-12-01
This paper proposes a method for primal-dual convex optimization in variational large deformation diffeomorphic metric mapping problems formulated with robust regularizers and robust image similarity metrics. The method is based on the Chambolle and Pock primal-dual algorithm for solving general convex optimization problems. Diagonal preconditioning is used to ensure the convergence of the algorithm to the global minimum. We consider three robust regularizers likely to provide acceptable results in diffeomorphic registration: Huber, V-Huber and total generalized variation. The Huber norm is used in the image similarity term. The primal-dual equations are derived for the stationary and the non-stationary parameterizations of diffeomorphisms. The resulting algorithms have been implemented to run on the GPU using CUDA. For the most memory-consuming methods, we have developed a multi-GPU implementation. The GPU implementations allowed us to perform an exhaustive evaluation study on the NIREP and LPBA40 databases. The experiments showed that, for all the considered regularizers, the proposed method converges to diffeomorphic solutions while better preserving discontinuities at the boundaries of objects compared to baseline diffeomorphic registration methods. In most cases, the evaluation showed a competitive performance for the robust regularizers, close to the performance of the baseline diffeomorphic registration methods.
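The Chambolle-Pock primal-dual algorithm the paper builds on is easiest to see on the classic ROF total-variation denoising problem min_x (lam/2)||x - f||^2 + ||grad x||_1. The sketch below is that textbook instance in NumPy, not the paper's diffeomorphic registration solver; step sizes satisfy sigma*tau*L^2 <= 1 with L^2 = 8 for the forward-difference gradient.

```python
import numpy as np

def grad(u):
    gx = np.zeros_like(u)
    gy = np.zeros_like(u)
    gx[:-1, :] = u[1:, :] - u[:-1, :]
    gy[:, :-1] = u[:, 1:] - u[:, :-1]
    return gx, gy

def div(px, py):
    """Negative adjoint of grad (so <grad u, p> = -<u, div p>)."""
    d = np.zeros_like(px)
    d[0, :] += px[0, :];  d[1:-1, :] += px[1:-1, :] - px[:-2, :];  d[-1, :] -= px[-2, :]
    d[:, 0] += py[:, 0];  d[:, 1:-1] += py[:, 1:-1] - py[:, :-2];  d[:, -1] -= py[:, -2]
    return d

def tv_denoise_cp(f, lam=2.0, n_iter=200):
    """ROF model solved with the Chambolle-Pock primal-dual algorithm."""
    sigma = tau = 1.0 / np.sqrt(8.0)
    x = f.copy()
    x_bar = f.copy()
    px = np.zeros_like(f)
    py = np.zeros_like(f)
    for _ in range(n_iter):
        gx, gy = grad(x_bar)
        px += sigma * gx
        py += sigma * gy
        norm = np.maximum(1.0, np.sqrt(px ** 2 + py ** 2))  # project onto |p| <= 1
        px /= norm
        py /= norm
        x_old = x
        x = (x + tau * div(px, py) + tau * lam * f) / (1.0 + tau * lam)
        x_bar = 2.0 * x - x_old                              # over-relaxation
    return x

# toy example: a noisy vertical step edge
rng = np.random.default_rng(0)
clean = np.zeros((32, 32))
clean[:, 16:] = 1.0
noisy = clean + 0.3 * rng.normal(size=clean.shape)
den = tv_denoise_cp(noisy)
```

The discontinuity-preserving behavior visible here (the step edge survives while noise is removed) is the property the paper exploits with its robust regularizers.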
Dynamical System Approaches to Combinatorial Optimization
DEFF Research Database (Denmark)
Starke, Jens
2013-01-01
Several dynamical system approaches to combinatorial optimization problems are described and compared. These include dynamical systems derived from penalty methods; the approach of Hopfield and Tank; self-organizing maps, that is, Kohonen networks; coupled selection equations; and hybrid methods...... of large times as an asymptotically stable point of the dynamics. The obtained solutions are often not globally optimal but good approximations of it. Dynamical system and neural network approaches are appropriate methods for distributed and parallel processing. Because of the parallelization...... thereof can be used as models for many industrial problems like manufacturing planning and optimization of flexible manufacturing systems. This is illustrated for an example in distributed robotic systems.
SOCP relaxation bounds for the optimal subset selection problem applied to robust linear regression
Flores, Salvador
2015-01-01
This paper deals with the problem of finding the globally optimal subset of h elements from a larger set of n elements in d space dimensions so as to minimize a quadratic criterion, with a special emphasis on applications to computing the Least Trimmed Squares Estimator (LTSE) for robust regression. The computation of the LTSE is a challenging subset selection problem involving a nonlinear program with continuous and binary variables, linked in a highly nonlinear fashion. The selection of a ...
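The paper's target, the LTS estimator, is usually approximated in practice by random elemental starts followed by concentration steps (refitting on the h smallest squared residuals). The sketch below is that standard FAST-LTS-style heuristic, shown only to make the subset-selection problem concrete — it is not the paper's SOCP relaxation, and all names and toy data are illustrative.

```python
import numpy as np

def lts_fit(X, y, h, n_starts=50, n_csteps=20, rng=None):
    """Approximate Least Trimmed Squares via random starts + C-steps."""
    rng = np.random.default_rng(rng)
    n, d = X.shape
    best_beta, best_obj = None, np.inf
    for _ in range(n_starts):
        idx = rng.choice(n, size=d, replace=False)       # elemental start
        beta, *_ = np.linalg.lstsq(X[idx], y[idx], rcond=None)
        for _ in range(n_csteps):                        # concentration steps
            r2 = (y - X @ beta) ** 2
            keep = np.argsort(r2)[:h]                    # h best-fitting points
            beta, *_ = np.linalg.lstsq(X[keep], y[keep], rcond=None)
        obj = np.sort((y - X @ beta) ** 2)[:h].sum()     # trimmed objective
        if obj < best_obj:
            best_beta, best_obj = beta, obj
    return best_beta, best_obj

# toy data: a clean line y = 1 + 2x with 30% gross outliers
rng = np.random.default_rng(1)
n = 100
X = np.column_stack([np.ones(n), rng.uniform(-5, 5, n)])
y = X @ np.array([1.0, 2.0]) + 0.1 * rng.normal(size=n)
y[:30] += 50.0                                           # contaminate 30 points
beta, _ = lts_fit(X, y, h=60, rng=0)
```

Heuristics like this carry no global-optimality guarantee, which is precisely the gap the paper's SOCP relaxation bounds are meant to close.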
Robust subspace estimation using low-rank optimization theory and applications
Oreifej, Omar
2014-01-01
Various fundamental applications in computer vision and machine learning require finding the basis of a certain subspace. Examples of such applications include face detection, motion estimation, and activity recognition. Increasing interest has recently been placed on this area as a result of significant advances in the mathematics of matrix rank optimization. Interestingly, robust subspace estimation can be posed as a low-rank optimization problem, which can be solved efficiently using techniques such as the Augmented Lagrange Multiplier method. In this book, the authors discuss fundame
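The low-rank formulation mentioned here is commonly solved by robust PCA (principal component pursuit): split an observation matrix M into a low-rank part L and a sparse part S by minimizing ||L||_* + lam*||S||_1 subject to L + S = M. Below is a compact sketch of the inexact augmented Lagrange multiplier (IALM) iteration for this problem; parameter choices follow common defaults and the toy data are mine, so treat it as an illustration rather than the book's code.

```python
import numpy as np

def shrink(x, tau):
    """Soft-thresholding (prox of the l1 norm)."""
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def svt(x, tau):
    """Singular value thresholding (prox of the nuclear norm)."""
    U, s, Vt = np.linalg.svd(x, full_matrices=False)
    return U @ np.diag(shrink(s, tau)) @ Vt

def rpca_ialm(M, tol=1e-7, max_iter=500):
    """Robust PCA via the inexact augmented Lagrange multiplier method."""
    m, n = M.shape
    lam = 1.0 / np.sqrt(max(m, n))
    norm2 = np.linalg.svd(M, compute_uv=False)[0]
    mu, rho = 1.25 / norm2, 1.5
    Y = M / max(norm2, np.abs(M).max() / lam)    # common dual initialization
    S = np.zeros_like(M)
    L = np.zeros_like(M)
    for _ in range(max_iter):
        L = svt(M - S + Y / mu, 1.0 / mu)
        S = shrink(M - L + Y / mu, lam / mu)
        resid = M - L - S
        Y += mu * resid
        mu *= rho
        if np.linalg.norm(resid) / np.linalg.norm(M) < tol:
            break
    return L, S

# toy example: a rank-2 matrix corrupted by 5% sparse gross errors
rng = np.random.default_rng(0)
A = rng.normal(size=(60, 2)) @ rng.normal(size=(2, 60))
E = np.zeros((60, 60))
mask = rng.random((60, 60)) < 0.05
E[mask] = rng.uniform(-10, 10, size=mask.sum())
L, S = rpca_ialm(A + E)
```

Under standard incoherence and sparsity conditions the recovered L matches the underlying low-rank matrix almost exactly, which is what makes the approach "robust" to gross outliers.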
Optimal Control for Fast and Robust Generation of Entangled States in Anisotropic Heisenberg Chains
Zhang, Xiong-Peng; Shao, Bin; Zou, Jian
2017-05-01
Motivated by recent results of optimal control (OC) theory, we study anisotropic XXZ Heisenberg spin-1/2 chains with control fields acting on a single spin, with the aim of exploring how a maximally entangled state can be prepared. To achieve this goal, we use a numerical optimization algorithm (the Krotov algorithm, which has been shown to be capable of reaching the quantum speed limit) to search for an optimal set of control parameters, and then obtain OC pulses corresponding to the target fidelity. We find that the minimum time for implementing our target state depends on the anisotropy parameter Δ of the model. Finally, we analyze the robustness of the obtained results for the optimal fidelities and the effectiveness of the Krotov method under some realistic conditions.
Supply chain downsizing under bankruptcy : A robust optimization approach
Ashayeri, J.; Ma, N.; Sotirov, R.
Research on supply chain network design has mainly pursued efficiency oriented objectives for boosting service level and profit. However, the priority of an enterprise facing bankruptcy pressure shifts to fulfill debt obligation with limited financial resources and survive downsizing. In this paper,
International Nuclear Information System (INIS)
Piacentino, A.; Cardona, F.
2008-01-01
The optimization of synthesis, design and operation in trigeneration systems for building applications is a quite complex task, due to the high number of decision variables, the presence of irregular heat, cooling and electric load profiles, and the variable electricity price. Consequently, computer-aided techniques are usually adopted to reach the optimal solution, based either on iterative techniques, linear or non-linear programming, or evolutionary search. Large efforts have been made to improve algorithm efficiency, which have resulted in increasingly rapid convergence to the optimal solution and reduced calculation time; robust algorithms have also been formulated, assuming stochastic behaviour for energy loads and prices. This paper is based on the assumption that margins for improvement in the optimization of trigeneration systems still exist, which require an in-depth understanding of the plant's energetic behaviour. Robustness in the optimization of trigeneration systems has more to do with 'correct and comprehensive' than with 'efficient' modelling, demanding greater effort from energy specialists than from experts in efficient algorithms. With reference to a mixed integer linear programming model implemented in MATLAB for a trigeneration system including a pressurized (medium-temperature) heat storage, the relevant contributions of thermoeconomics and energo-environmental analysis in the phases of mathematical modelling and code testing are shown.
Herman, Jonathan D.; Zeff, Harrison B.; Reed, Patrick M.; Characklis, Gregory W.
2014-10-01
While optimality is a foundational mathematical concept in water resources planning and management, "optimal" solutions may be vulnerable to failure if deeply uncertain future conditions deviate from those assumed during optimization. These vulnerabilities may produce severely asymmetric impacts across a region, making it vital to evaluate the robustness of management strategies as well as their impacts for regional stakeholders. In this study, we contribute a multistakeholder many-objective robust decision making (MORDM) framework that blends many-objective search and uncertainty analysis tools to discover key tradeoffs between water supply alternatives and their robustness to deep uncertainties (e.g., population pressures, climate change, and financial risks). The proposed framework is demonstrated for four interconnected water utilities representing major stakeholders in the "Research Triangle" region of North Carolina, U.S. The utilities supply well over one million customers and have the ability to collectively manage drought via transfer agreements and shared infrastructure. We show that water portfolios for this region that comprise optimal tradeoffs (i.e., Pareto-approximate solutions) under expected future conditions may suffer significantly degraded performance with only modest changes in deeply uncertain hydrologic and economic factors. We then use the Patient Rule Induction Method (PRIM) to identify which uncertain factors drive the individual and collective vulnerabilities for the four cooperating utilities. Our framework identifies key stakeholder dependencies and robustness tradeoffs associated with cooperative regional planning, which are critical to understanding the tensions between individual versus regional water supply goals. Cooperative demand management was found to be the key factor controlling the robustness of regional water supply planning, dominating other hydroclimatic and economic uncertainties through the 2025 planning horizon. Results
Biologically inspired control of humanoid robot arms robust and adaptive approaches
Spiers, Adam; Herrmann, Guido
2016-01-01
This book investigates a biologically inspired method of robot arm control, developed with the objective of synthesising human-like motion dynamically, using nonlinear, robust and adaptive control techniques in practical robot systems. The control method caters to a rising interest in humanoid robots and the need for appropriate control schemes to match these systems. Unlike the classic kinematic schemes used in industrial manipulators, the dynamic approaches proposed here promote human-like motion with better exploitation of the robot’s physical structure. This also benefits human-robot interaction. The control schemes proposed in this book are inspired by a wealth of human-motion literature that indicates the drivers of motion to be dynamic, model-based and optimal. Such considerations lend themselves nicely to achievement via nonlinear control techniques without the necessity for extensive and complex biological models. The operational-space method of robot control forms the basis of many of the techniqu...
ROBUST CONTROL OF END-TIDAL CO2 USING THE H∞ LOOP-SHAPING APPROACH
Directory of Open Access Journals (Sweden)
Anake Pomprapa
2013-12-01
Full Text Available Mechanically ventilated patients require appropriate settings of respiratory control variables to maintain acceptable gas exchange. To control the carbon dioxide (CO2) level effectively and automatically, system identification based on a human subject was performed using a linear affine model and a nonlinear Hammerstein structure. Subsequently, a robust controller was designed using the H∞ loop-shaping approach, which synthesizes the optimal controller for a specific objective by achieving stability with guaranteed performance. For demonstration purposes, the closed-loop ventilation control system was successfully tested in a human volunteer. The experimental results indicate that the blood CO2 level may indeed be controlled noninvasively by measuring end-tidal CO2 from expired air. Keeping the limited amount of experimental data in mind, we conclude that H∞ loop-shaping may be a promising technique for the control of mechanical ventilation in patients with respiratory insufficiency.
Asgharnia, Amirhossein; Shahnazi, Reza; Jamali, Ali
2018-05-11
The most studied controller for pitch control of wind turbines is the proportional-integral-derivative (PID) controller. However, due to uncertainties in wind turbine modeling and wind speed profiles, the need for more effective controllers is inevitable. On the other hand, the parameters of a PID controller are usually unknown and must be selected by the designer, which is neither a straightforward task nor optimal. To cope with these drawbacks, in this paper two advanced controllers called fuzzy PID (FPID) and fractional-order fuzzy PID (FOFPID) are proposed to improve the pitch control performance. Meanwhile, chaotic evolutionary optimization methods are used to find the parameters of the controllers. Using evolutionary optimization methods not only gives us the unknown parameters of the controllers but also guarantees optimality with respect to the chosen objective function. To improve the performance of the evolutionary algorithms, chaotic maps are used. All the optimization procedures are applied to the 2-mass model of a 5-MW wind turbine. The proposed optimal controllers are validated using the simulator FAST developed by NREL. Simulation results demonstrate that the FOFPID controller achieves better performance and robustness, while incurring less fatigue damage, at different wind speeds in comparison to the FPID, fractional-order PID (FOPID) and gain-scheduling PID (GSPID) controllers. Copyright © 2018 ISA. Published by Elsevier Ltd. All rights reserved.
International Nuclear Information System (INIS)
Kuznetsova, Elizaveta; Li, Yan-Fu; Ruiz, Carlos; Zio, Enrico
2014-01-01
Highlights: • A microgrid composed of a train station, wind power plant and district is investigated. • Each player is modeled as an individual agent aiming at a particular goal. • Prediction Intervals quantify the uncertain operational and environmental parameters. • Optimal goal-directed action planning is achieved with robust optimization. • The optimization framework improves system reliability and decreases power imbalances. - Abstract: A microgrid energy management framework for the optimization of the individual objectives of microgrid stakeholders is proposed. The framework is exemplified by way of a microgrid that is connected to an external grid via a transformer and includes the following players: a middle-size train station with an integrated photovoltaic power production system, a small energy production plant composed of urban wind turbines, and a surrounding district including residences and small businesses. The system is described by Agent-Based Modelling (ABM), in which each player is modelled as an individual agent aiming at a particular goal: (i) decreasing its expenses for power purchase or (ii) increasing its revenues from power selling. The context in which the agents operate is uncertain due to the stochasticity of operational and environmental parameters and the technical failures of the renewable power generators. The uncertain operational and environmental parameters of the microgrid are quantified in terms of Prediction Intervals (PIs) by a Non-dominated Sorting Genetic Algorithm (NSGA-II)-trained Neural Network (NN). Under these uncertainties, each agent seeks optimal goal-directed action planning by Robust Optimization (RO). The developed framework is shown to lead to an increase in system performance, evaluated in terms of typical reliability (adequacy) indicators for energy systems, such as Loss of Load Expectation (LOLE) and Loss of Expected Energy (LOEE), in comparison with optimal planning based on expected values of
International Nuclear Information System (INIS)
Chao, Ming; Yuan, Yading; Rosenzweig, Kenneth E; Lo, Yeh-Chi; Wei, Jie; Li, Tianfang
2016-01-01
We present a study of extracting respiratory signals from cone beam computed tomography (CBCT) projections within the framework of the Amsterdam Shroud (AS) technique. Acquired prior to the radiotherapy treatment, CBCT projections were preprocessed for contrast enhancement by converting the original intensity images to attenuation images, from which the AS image was created. An adaptive robust z-normalization filtering was applied to further augment the weak oscillating structures locally. From the enhanced AS image, the respiratory signal was extracted using a two-step optimization approach to effectively reveal the large-scale regularity of the breathing signals. CBCT projection images from five patients acquired with the Varian On-Board Imager on the Clinac iX System Linear Accelerator (Varian Medical Systems, Palo Alto, CA) were employed to assess the proposed technique. Stable breathing signals can be reliably extracted using the proposed algorithm. Reference waveforms obtained using an air bellows belt (Philips Medical Systems, Cleveland, OH) were exported and compared to the AS-based signals. For the enrolled patients, the average error between the estimated breaths per minute (bpm) and the reference-waveform bpm can be as low as −0.07, with a standard deviation of 1.58. The new algorithm outperformed the original AS technique for all patients by 8.5% to 30%. The impact of gantry rotation on the breathing signal was assessed with data acquired with a Quasar phantom (Modus Medical Devices Inc., London, Canada) and found to be minimal with respect to the signal frequency. The technique developed in this work will provide a practical solution for rendering a markerless breathing signal using CBCT projections for thoracic and abdominal patients. (paper)
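Robust z-normalization, the filtering step named in this abstract, typically centers a signal with the median and scales it with the MAD so that outliers barely perturb the normalization. The sliding-window sketch below is a generic illustration of that idea, not the paper's exact filter; the window size, the 1.4826 MAD-to-SD factor, and the toy breathing trace are assumptions.

```python
import numpy as np

def robust_znorm(signal, window=64):
    """Sliding-window robust z-score: (x - median) / (1.4826 * MAD)."""
    out = np.empty_like(signal, dtype=float)
    half = window // 2
    for i in range(signal.size):
        seg = signal[max(0, i - half): i + half + 1]
        med = np.median(seg)
        mad = np.median(np.abs(seg - med))
        out[i] = (signal[i] - med) / (1.4826 * mad + 1e-12)
    return out

# toy breathing-like trace: a 0.25 Hz oscillation, slow baseline drift, one spike
t = np.linspace(0, 30, 600)
trace = np.sin(2 * np.pi * 0.25 * t) + 0.05 * t
trace[300] += 10.0
z = robust_znorm(trace)
```

Because median and MAD ignore the single spike, the weak oscillation is brought to a uniform local scale while the artifact remains clearly visible as an outlier in the normalized signal.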
A robust algorithm for optimizing protein structures with NMR chemical shifts
Energy Technology Data Exchange (ETDEWEB)
Berjanskii, Mark; Arndt, David; Liang, Yongjie; Wishart, David S., E-mail: david.wishart@ualberta.ca [University of Alberta, Department of Computing Science (Canada)
2015-11-15
Over the past decade, a number of methods have been developed to determine the approximate structure of proteins using minimal NMR experimental information such as chemical shifts alone, sparse NOEs alone, or a combination of comparative modeling data and chemical shifts. However, there have been relatively few methods that allow these approximate models to be substantively refined or improved using the available NMR chemical shift data. Here, we present a novel method, called Chemical Shift driven Genetic Algorithm for biased Molecular Dynamics (CS-GAMDy), for the robust optimization of protein structures using experimental NMR chemical shifts. The method incorporates knowledge-based scoring functions and structural information derived from NMR chemical shifts via a unique combination of multi-objective MD biasing, a genetic algorithm, and the widely used XPLOR molecular modelling language. Using this approach, we demonstrate that CS-GAMDy is able to refine and/or fold models that are as much as 10 Å (RMSD) away from the correct structure using only NMR chemical shift data. CS-GAMDy is also able to refine a wide range of approximate or mildly erroneous protein structures to more closely match the known/correct structure and the known/correct chemical shifts. We believe CS-GAMDy will allow protein models generated by sparse-restraint or chemical-shift-only methods to achieve sufficiently high quality to be considered fully refined and “PDB worthy”. The CS-GAMDy algorithm is explained in detail and its performance is compared over a range of refinement scenarios with several commonly used protein structure refinement protocols. The program has been designed to be easily installed and easily used and is available at http://www.gamdy.ca.
Estimation and robust control of microalgae culture for optimization of biological fixation of CO2
International Nuclear Information System (INIS)
Filali, R.
2012-01-01
This thesis deals with the optimization of carbon dioxide consumption by microalgae. Indeed, in light of several current environmental issues primarily related to large emissions of CO2, microalgae are shown to represent a very promising solution for CO2 mitigation. From this perspective, we are interested in the optimization of CO2 consumption through the development of a robust control law. The main aim is to ensure optimal operating conditions for a Chlorella vulgaris culture in an instrumented photo-bioreactor. The thesis is based on three major axes. The first concerns growth modeling of the selected species, based on a mathematical model reflecting the influence of light and total inorganic carbon concentration. For the control context, the second axis is related to biomass estimation from the real-time measurement of dissolved carbon dioxide. This step is necessary for the control part due to the lack of affordable real-time sensors for this kind of measurement. Three observer structures have been studied and compared: an extended Kalman filter, an asymptotic observer and an interval observer. The last axis deals with the implementation of a non-linear predictive control law coupled to the estimation strategy for the regulation of the cellular concentration around a value which maximizes CO2 consumption. The performance and robustness of this control law have been validated in simulation and experimentally on a laboratory-scale instrumented photo-bioreactor. This thesis represents a preliminary study for the optimization of a CO2 mitigation strategy by microalgae. (author)
International Nuclear Information System (INIS)
Alavi, Seyed Arash; Ahmadian, Ali; Aliakbar-Golkar, Masoud
2015-01-01
Highlights: • Energy management is necessary in the active distribution network to reduce operation costs. • Uncertainty modeling is essential in energy management studies in active distribution networks. • The point estimate method is a suitable method for uncertainty modeling due to its lower computation time and acceptable accuracy. • In the absence of a Probability Distribution Function (PDF), robust optimization has a good ability for uncertainty modeling. - Abstract: Uncertainty can be defined as the probability of a difference between the forecasted value and the real value. The smaller this probability, the lower the operation cost of the power system will be. This purpose necessitates modeling of the system's random variables (such as the output power of renewable resources and the load demand) with appropriate and practicable methods. In this paper, an adequate procedure is proposed in order to perform optimal energy management on a typical micro-grid with regard to the relevant uncertainties. The point estimate method is applied to model the wind power and solar power uncertainties, and a robust optimization technique is utilized to model load demand uncertainty. Finally, a comparison is made between deterministic and probabilistic management in different scenarios, and their results are analyzed and evaluated
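The point estimate method used here propagates input uncertainty through a model with only 2m evaluations for m random inputs. The sketch below implements the zero-skewness form of Hong's 2m scheme (evaluate at mu_k ± sqrt(m)·sigma_k with equal weights 1/(2m)); the "cost" function and its numbers are invented for illustration and are not from the paper.

```python
import numpy as np

def two_point_estimate(f, mu, sigma):
    """Hong's 2m point estimate method, zero-skewness simplification:
    evaluate f at mu_k +/- sqrt(m)*sigma_k (other inputs at their means)
    and combine with equal weights 1/(2m) to estimate E[f] and Var[f]."""
    mu = np.asarray(mu, float)
    sigma = np.asarray(sigma, float)
    m = mu.size
    xi = np.sqrt(m)
    vals = []
    for k in range(m):
        for s in (+xi, -xi):
            x = mu.copy()
            x[k] += s * sigma[k]
            vals.append(f(x))
    vals = np.array(vals)
    mean = vals.mean()                       # equal weights 1/(2m)
    var = (vals ** 2).mean() - mean ** 2
    return mean, var

# toy "operation cost" as a function of wind output x[0] and load x[1]
cost = lambda x: 10.0 + 3.0 * x[1] - 2.0 * x[0]
mean, var = two_point_estimate(cost, mu=[5.0, 20.0], sigma=[1.5, 2.0])
```

For a linear cost with independent inputs the scheme is exact: here E[cost] = 60 and Var[cost] = (2·1.5)² + (3·2)² = 45, which the four function evaluations reproduce.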
Chau, Savio; Vatan, Farrokh; Randolph, Vincent; Baroth, Edmund C.
2006-01-01
Future in-space propulsion systems for exploration programs will invariably require data collection from a large number of sensors. Consider the sensors needed for monitoring the states of health of several vehicle systems, including the collection of structural health data, over a large area. This would include the fuel tanks, habitat structure, and science containment of systems required for Lunar, Mars, or deep space exploration. Such a system would consist of several hundred or even thousands of sensors. Conventional avionics system design requires these sensors to be connected to a few Remote Health Units (RHUs), which are connected to robust micro flight computers through a serial bus. This results in a large mass of cabling and unacceptable weight. This paper first gives a survey of several techniques that may reduce the cabling mass for sensors. These techniques can be categorized into four classes: power line communication, serial sensor buses, compound serial buses, and wireless networks. The power line communication approach uses the power line to carry both power and data, so that the conventional data lines can be eliminated. The serial sensor bus approach removes most of the cabling by connecting all the sensors with a single (or redundant) serial bus. Many standard buses for industrial control and sensing can support several hundred nodes but have not been space qualified. Conventional avionics serial buses such as the MIL-STD-1553B bus and IEEE 1394a are space qualified but can support only a limited number of nodes. The third approach is to combine avionics buses to increase their addressability. The reliability, EMI/EMC, and flight qualification issues of wireless networks also have to be addressed. Several wireless networks such as IEEE 802.11 and Ultra Wide Band are surveyed in this paper. The placement of sensors can also affect cable mass. Excessive sensors increase the number of cables unnecessarily. Insufficient number of sensors
Young Kim, Eun; Johnson, Hans J
2013-01-01
A robust multi-modal tool for automated registration, bias correction, and tissue classification has been implemented for large-scale heterogeneous multi-site longitudinal MR data analysis. This work focused on improving an iterative optimization framework linking bias correction, registration, and tissue classification, inspired by previous work. The primary contributions are robustness improvements from the incorporation of the following four elements: (1) use of multi-modal and repeated scans, (2) incorporation of highly deformable registration, (3) use of an extended set of tissue definitions, and (4) use of multi-modal-aware intensity-context priors. The benefits of these enhancements were investigated by a series of experiments with both a simulated brain data set (BrainWeb) and highly heterogeneous data from a 32-site imaging study, with quality assessment through expert visual inspection. The implementation of this tool is tailored for, but not limited to, large-scale data processing with great data variation, with a flexible interface. In this paper, we describe enhancements to joint registration, bias correction, and tissue classification that improve generalizability and robustness for processing multi-modal longitudinal MR scans collected at multiple sites. The tool was evaluated using both simulated and human-subject MRI images. With these enhancements, the results showed improved robustness for large-scale heterogeneous MRI processing.
The Robustness of the CAPM - A Computational Approach
Herings, P.J.J.; Kubler, F.
1999-01-01
In this paper we argue that in realistically calibrated two period general equilibrium models with incomplete markets CAPM-pricing provides a good benchmark for equilibrium prices even when agents are not mean-variance optimizers and returns are not normally distributed. We numerically approximate
System optimization for HVAC energy management using the robust evolutionary algorithm
International Nuclear Information System (INIS)
Fong, K.F.; Hanby, V.I.; Chow, T.T.
2009-01-01
For an installed centralized heating, ventilating and air conditioning (HVAC) system, appropriate energy management measures can achieve energy conservation targets through optimal control and operation. The performance optimization of conventional HVAC systems may be guided by operating experience, but such experience may not cover the different optimization scenarios and parameters that arise in response to a variety of load and weather conditions. In this regard, it is common to apply a suitable simulation-optimization technique to model the system and then determine the required operation parameters. The plant simulation models can be built either using available simulation programs or as a system of mathematical expressions. Solving these simulation models numerically involves iteration, and since gradient information is not easily available due to the complex nature of the equations, traditional gradient-based optimization methods are not applicable to this kind of system model. For heuristic optimization methods, a continual search is commonly necessary, and a system function call is required for each search step. The frequency of simulation function calls is then a time-determining step, and an efficient optimization method is crucial in order to find the solution within a reasonable number of function calls and computational period. In this paper, the robust evolutionary algorithm (REA) is presented to tackle this nature of HVAC simulation models. REA is based on evolution strategy, a paradigm of evolutionary algorithms: a stochastic population-based search technique that emphasizes mutation. The REA, which incorporates Cauchy deterministic mutation, tournament selection, and arithmetic recombination, provides a synergistic effect for the optimal search. The REA is effective in coping with complex simulation models, as well as those represented by explicit mathematical expressions of
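The search loop the abstract describes (an evolution strategy with Cauchy mutation, tournament selection, and arithmetic recombination) can be sketched as follows. The population size, mutation scale, and the toy sphere cost standing in for the HVAC simulation model are all illustrative assumptions, not the paper's implementation:

```python
import random
import math

def rea_minimize(f, lower, upper, pop_size=20, generations=200, seed=1):
    """Minimal evolution-strategy sketch in the spirit of the REA abstract:
    tournament selection, arithmetic recombination, Cauchy mutation."""
    rng = random.Random(seed)
    dim = len(lower)
    pop = [[rng.uniform(lower[i], upper[i]) for i in range(dim)]
           for _ in range(pop_size)]

    def clip(x):
        return [min(max(v, lower[i]), upper[i]) for i, v in enumerate(x)]

    for _ in range(generations):
        children = []
        for _ in range(pop_size):
            # Tournament selection: each parent is the better of two random picks.
            a, b = rng.sample(pop, 2)
            p1 = a if f(a) < f(b) else b
            c, d = rng.sample(pop, 2)
            p2 = c if f(c) < f(d) else d
            # Arithmetic recombination: convex combination of the parents.
            w = rng.random()
            child = [w * x + (1 - w) * y for x, y in zip(p1, p2)]
            # Cauchy mutation: heavy tails permit occasional long jumps,
            # which helps escape local optima of a complex simulation model.
            scale = 0.1
            child = clip([x + scale * math.tan(math.pi * (rng.random() - 0.5))
                          for x in child])
            children.append(child)
        # (mu + lambda) survivor selection keeps the best individuals found.
        pop = sorted(pop + children, key=f)[:pop_size]
    return min(pop, key=f)

# Toy stand-in for an expensive HVAC simulation call: a 3-D sphere function.
best = rea_minimize(lambda x: sum(v * v for v in x), [-5.0] * 3, [5.0] * 3)
```

In a real application, `f` would be the simulation function call whose evaluation count the abstract identifies as the time-determining step.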
Kucukgoz, Mehmet; Harmanci, Oztan; Mihcak, Mehmet K.; Venkatesan, Ramarathnam
2005-03-01
In this paper, we propose a novel semi-blind video watermarking scheme that uses pseudo-random robust semi-global features of video in the three-dimensional wavelet transform domain. We design the watermark sequence by solving an optimization problem, such that the features of the mark-embedded video are the quantized versions of the features of the original video. The exact realizations of the algorithmic parameters are chosen pseudo-randomly via a secure pseudo-random number generator, whose seed is the secret key that is known by the embedder and the receiver (but unknown to the public). We experimentally show the robustness of our algorithm against several attacks, such as conventional signal processing modifications and adversarial estimation attacks.
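The core idea of making the marked features the quantized versions of the original features can be illustrated with plain quantization index modulation; this generic sketch is a stand-in, not the paper's feature extraction or optimization, and the step size and feature values are invented:

```python
import numpy as np

def qim_embed(features, bits, step=4.0):
    """Quantization-based embedding: each feature is snapped onto one of two
    interleaved lattices (offset 0 or step/2) selected by the watermark bit."""
    features = np.asarray(features, dtype=float)
    offsets = np.where(np.asarray(bits) == 1, step / 2.0, 0.0)
    return np.round((features - offsets) / step) * step + offsets

def qim_detect(features, step=4.0):
    """Recover each bit by checking which lattice the feature lies closer to."""
    features = np.asarray(features, dtype=float)
    d0 = np.abs(features - np.round(features / step) * step)
    d1 = np.abs(features - (np.round((features - step / 2.0) / step) * step
                            + step / 2.0))
    return (d1 < d0).astype(int)

feats = np.array([10.3, -2.7, 5.1, 0.4])   # hypothetical semi-global features
bits = np.array([1, 0, 1, 0])              # watermark payload
marked = qim_embed(feats, bits)
recovered = qim_detect(marked)
```

Detection survives any perturbation smaller than a quarter of the step size, which is the sense in which quantized features give robustness against mild signal processing attacks.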
A case study on robust optimal experimental design for model calibration of ω-Transaminase
DEFF Research Database (Denmark)
Van Daele, Timothy; Van Hauwermeiren, Daan; Ringborg, Rolf Hoffmeyer
Proper calibration of models describing enzyme kinetics can be quite challenging. This is especially the case for more complex models like transaminase models (Shin and Kim, 1998). The latter fitted model parameters, but the confidence on the parameter estimation was not derived. Hence ... the experimental space. However, it is expected that more informative experiments can be designed to increase the confidence of the parameter estimates. Therefore, we apply Optimal Experimental Design (OED) to the calibrated model of Shin and Kim (1998). The total number of samples was retained to allow fair ... parameter values are not known before finishing the model calibration. However, it is important that the chosen parameter values are close to the real parameter values, otherwise the OED can possibly yield non-informative experiments. To counter this problem, one can use robust OED. The idea of robust OED ...
A robust approach to QMU, validation, and conservative prediction.
Energy Technology Data Exchange (ETDEWEB)
Segalman, Daniel Joseph; Paez, Thomas Lee; Bauman, Lara E
2013-01-01
A systematic approach to defining margin is developed that incorporates statistical information, accommodates data uncertainty, and does not require assumptions about the specific forms of the tails of distributions. This approach extends to the calculations underlying validation assessment and quantitatively conservative prediction.
A Hybrid Genetic Algorithm Approach for Optimal Power Flow
Directory of Open Access Journals (Sweden)
Sydulu Maheswarapu
2011-08-01
Full Text Available This paper puts forward a reformed hybrid genetic algorithm (GA) based approach to the optimal power flow problem. In the approach followed here, continuous variables are designed using a real-coded GA and discrete variables are processed as binary strings. The outcomes are compared with many other methods, such as the simple genetic algorithm (GA), adaptive genetic algorithm (AGA), differential evolution (DE), particle swarm optimization (PSO), and music based harmony search (MBHS), on an IEEE 30-bus test bed with a total load of 283.4 MW. The proposed algorithm is found to offer the lowest fuel cost, and to be computationally faster and robust, with superior and promising convergence characteristics.
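The mixed encoding the abstract describes, real-coded continuous variables alongside binary-string discrete variables, might be handled in crossover roughly as below; the chromosome layout, operators, and values are hypothetical illustrations rather than the paper's actual operators:

```python
import random

rng = random.Random(5)

# A chromosome sketch for OPF: the real part could hold, e.g., generator
# voltage set-points; the binary string could encode discrete transformer
# tap positions (both are illustrative assumptions).
def crossover(p1, p2):
    reals1, bits1 = p1
    reals2, bits2 = p2
    # Blend (arithmetic) crossover on the real-coded part.
    w = rng.random()
    child_reals = [w * a + (1 - w) * b for a, b in zip(reals1, reals2)]
    # One-point crossover on the binary-coded part.
    cut = rng.randrange(1, len(bits1))
    child_bits = bits1[:cut] + bits2[cut:]
    return child_reals, child_bits

parent1 = ([1.05, 0.98], "10110")
parent2 = ([0.95, 1.02], "01101")
child = crossover(parent1, parent2)
```

Treating the two parts with different operators is the point of the hybrid encoding: blend crossover keeps continuous children inside the parents' range, while one-point crossover preserves valid bit strings for the discrete variables.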
Directory of Open Access Journals (Sweden)
Nitish Katal
2016-01-01
Full Text Available Automation of robust control system synthesis for uncertain systems is of great practical interest. In this paper, the loop-shaping step for synthesizing a quantitative feedback theory (QFT) based controller for a two-phase permanent magnet stepper motor (PMSM) has been automated using the teaching-learning-based optimization (TLBO) algorithm. The QFT controller design problem has been posed as an optimization problem, and the TLBO algorithm has been used to minimize the proposed cost function. This facilitates the design of a low-order, fixed-structure controller, eliminates the need for the manual loop-shaping step on the Nichols charts, and prevents overdesign of the controller. A performance comparison of the designed controller has been made with the classical Ziegler-Nichols PID tuning method and with QFT controllers tuned using other optimization algorithms. The simulation results show that the QFT controller designed using TLBO offers robust stability, disturbance rejection, and proper reference tracking over a range of the PMSM's parametric uncertainties as compared to the classical design techniques.
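TLBO itself has a simple two-phase structure (a teacher phase pulling learners toward the best solution, and a learner phase where learners improve from random classmates) that can be sketched as follows; the population size, iteration count, and toy sphere cost standing in for the QFT loop-shaping cost function are illustrative assumptions:

```python
import random

def tlbo_minimize(f, lower, upper, pop_size=20, iters=100, seed=7):
    """Minimal teaching-learning-based optimization (TLBO) sketch."""
    rng = random.Random(seed)
    dim = len(lower)
    clip = lambda x: [min(max(v, lower[i]), upper[i]) for i, v in enumerate(x)]
    pop = [[rng.uniform(lower[i], upper[i]) for i in range(dim)]
           for _ in range(pop_size)]
    cost = [f(x) for x in pop]
    for _ in range(iters):
        teacher = pop[min(range(pop_size), key=lambda i: cost[i])]
        mean = [sum(x[d] for x in pop) / pop_size for d in range(dim)]
        for i in range(pop_size):
            # Teacher phase: move toward the best solution, away from the mean.
            tf = rng.choice([1, 2])  # teaching factor
            cand = clip([pop[i][d] + rng.random() * (teacher[d] - tf * mean[d])
                         for d in range(dim)])
            c = f(cand)
            if c < cost[i]:
                pop[i], cost[i] = cand, c
            # Learner phase: learn from a randomly chosen classmate.
            j = rng.randrange(pop_size)
            if j == i:
                continue
            sign = 1.0 if cost[i] < cost[j] else -1.0
            cand = clip([pop[i][d] + rng.random() * sign * (pop[i][d] - pop[j][d])
                         for d in range(dim)])
            c = f(cand)
            if c < cost[i]:
                pop[i], cost[i] = cand, c
    best = min(range(pop_size), key=lambda i: cost[i])
    return pop[best], cost[best]

x_best, f_best = tlbo_minimize(lambda x: sum(v * v for v in x),
                               [-10.0] * 2, [10.0] * 2)
```

A practical attraction, and presumably part of why it suits automated loop shaping, is that TLBO has no algorithm-specific tuning parameters beyond population size and iteration count.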
International Nuclear Information System (INIS)
Fredriksson, Albin; Bokrantz, Rasmus
2014-01-01
Purpose: To critically evaluate and compare three worst case optimization methods that have previously been employed to generate intensity-modulated proton therapy treatment plans that are robust against systematic errors. The goal of the evaluation is to identify circumstances when the methods behave differently and to describe the mechanism behind the differences when they occur. Methods: The worst case methods optimize plans to perform as well as possible under the worst case scenario that can physically occur (composite worst case), the combination of the worst case scenarios for each objective constituent considered independently (objectivewise worst case), and the combination of the worst case scenarios for each voxel considered independently (voxelwise worst case). These three methods were assessed with respect to treatment planning for the prostate under systematic setup uncertainty. An equivalence with probabilistic optimization was used to identify the scenarios that determine the outcome of the optimization. Results: If the conflict between target coverage and normal tissue sparing is small and no dose-volume histogram (DVH) constraints are present, then all three methods yield robust plans. Otherwise, they all have their shortcomings: Composite worst case led to unnecessarily low plan quality in boundary scenarios that were less difficult than the worst case ones. Objectivewise worst case generally led to nonrobust plans. Voxelwise worst case led to overly conservative plans with respect to DVH constraints, which resulted in excessive dose to normal tissue, and less sharp dose fall-off than the other two methods. Conclusions: The three worst case methods have clearly different behaviors. These behaviors can be understood from which scenarios are active in the optimization. No particular method is superior to the others under all circumstances: composite worst case is suitable if the conflicts are not very severe or there are DVH constraints whereas
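The distinction between the composite and objectivewise worst cases can be seen in a tiny numerical example with two objectives and three error scenarios (all numbers invented): the objectivewise value combines per-objective worst scenarios that need not physically co-occur, so it is always at least as pessimistic as the composite value:

```python
# Hypothetical objective values (lower is better) for a target-coverage
# objective and an organ-at-risk objective under three setup-error scenarios.
f_target = {"nominal": 1.0, "shift_left": 3.0, "shift_right": 2.0}
f_oar    = {"nominal": 2.0, "shift_left": 1.0, "shift_right": 4.0}

scenarios = f_target.keys()
# Composite worst case: the worst physically realizable scenario of the
# summed objective.
composite = max(f_target[s] + f_oar[s] for s in scenarios)
# Objectivewise worst case: each objective takes its own worst scenario,
# a combination that may never occur in a single treatment fraction.
objectivewise = max(f_target.values()) + max(f_oar.values())
```

Here the composite value is 6 (the "shift_right" scenario) while the objectivewise value is 7, pairing "shift_left" for the target with "shift_right" for the organ at risk; optimizing the latter can therefore hedge against impossible scenario combinations, consistent with its tendency toward nonrobust or distorted trade-offs noted in the abstract.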
Hernandez, F.; Liang, X.
2017-12-01
Reliable real-time hydrological forecasting, to predict important phenomena such as floods, is invaluable to society. However, modern high-resolution distributed models have faced challenges when dealing with uncertainties that are caused by the large number of parameters and initial state estimates involved. Therefore, to rely on these high-resolution models for critical real-time forecast applications, considerable improvements in the parameter and initial state estimation techniques must be made. In this work we present a unified data assimilation algorithm called Optimized PareTo Inverse Modeling through Inverse STochastic Search (OPTIMISTS) to address the challenge of robust flood forecasting with high-resolution distributed models. This new algorithm combines the advantages of particle filters and variational methods in a unique way to overcome their individual weaknesses. The analysis of candidate particles compares model results with observations in a flexible time frame, and a multi-objective approach is proposed which attempts to simultaneously minimize differences with the observations and departures from the background states by using both Bayesian sampling and non-convex evolutionary optimization. Moreover, the resulting Pareto front is given a probabilistic interpretation through kernel density estimation to create a non-Gaussian distribution of the states. OPTIMISTS was tested on a low-resolution distributed land surface model using VIC (Variable Infiltration Capacity) and on a high-resolution distributed hydrological model using the DHSVM (Distributed Hydrology Soil Vegetation Model). In these tests, streamflow observations were assimilated. OPTIMISTS was also compared with a traditional particle filter and a variational method. Results show that our method can reliably produce adequate forecasts and that it is able to outperform those resulting from assimilating the observations using a particle filter or an evolutionary 4D variational
Tools for Trustworthy Autonomy: Robust Predictions, Intuitive Control, and Optimized Interaction
Driggs Campbell, Katherine Rose
2017-01-01
In the near future, robotics will impact nearly every aspect of life. Yet for technology to smoothly integrate into society, we need interactive systems to be well modeled and predictable; have robust decision making and control; and be trustworthy to improve cooperation and interaction. To achieve these goals, we propose taking a human-centered approach to ease the transition into human-dominated fields. In this work, our modeling methods and control schemes are validated through user stu...
Robust digital controllers for uncertain chaotic systems: A digital redesign approach
Energy Technology Data Exchange (ETDEWEB)
Ababneh, Mohammad [Department of Controls, FMC Kongsberg Subsea, FMC Energy Systems, Houston, TX 77067 (United States); Barajas-Ramirez, Juan-Gonzalo [CICESE, Depto. De Electronica y Telecomunicaciones, Ensenada, BC, 22860 (Mexico); Chen Guanrong [Centre for Chaos Control and Synchronization, Department of Electronic Engineering, City University of Hong Kong (China); Shieh, Leang S. [Department of Electrical and Computer Engineering, University of Houston, Houston, TX 77204-4005 (United States)
2007-03-15
In this paper, a new and systematic method for designing robust digital controllers for uncertain nonlinear systems with structured uncertainties is presented. In the proposed method, a controller is designed in terms of the optimal linear model representation of the nominal system around each operating point of the trajectory, while the uncertainties are decomposed such that the uncertain nonlinear system can be rewritten as a set of local linear models with disturbed inputs. Applying conventional robust control techniques, continuous-time robust controllers are first designed to eliminate the effects of the uncertainties on the underlying system. Then, a robust digital controller is obtained as the result of a digital redesign of the designed continuous-time robust controller using the state-matching technique. The effectiveness of the proposed controller design method is illustrated through some numerical examples on complex nonlinear systems: chaotic systems.
DEFF Research Database (Denmark)
Codas, Andrés; Hanssen, Kristian G.; Foss, Bjarne
2017-01-01
The production life of oil reservoirs starts under significant uncertainty regarding the actual economical return of the recovery process due to the lack of oil field data. Consequently, investors and operators make management decisions based on a limited and uncertain description of the reservoir. ... In this work, we propose a new formulation for robust optimization of reservoir well controls. It is inspired by the multiple shooting (MS) method, which permits a broad range of parallelization opportunities and output constraint handling. This formulation exploits coherent risk measures, a concept
Efficient and robust cell detection: A structured regression approach.
Xie, Yuanpu; Xing, Fuyong; Shi, Xiaoshuang; Kong, Xiangfei; Su, Hai; Yang, Lin
2018-02-01
Efficient and robust cell detection serves as a critical prerequisite for many subsequent biomedical image analysis methods and computer-aided diagnosis (CAD). It remains a challenging task due to touching cells, inhomogeneous background noise, and large variations in cell sizes and shapes. In addition, the ever-increasing amount of available datasets and the high resolution of whole-slide scanned images pose a further demand for efficient processing algorithms. In this paper, we present a novel structured regression model based on a proposed fully residual convolutional neural network for efficient cell detection. For each testing image, our model learns to produce a dense proximity map that exhibits higher responses at locations near cell centers. Our method only requires a few training images with weak annotations (just one dot indicating the cell centroids). We have extensively evaluated our method using four different datasets, covering different microscopy staining methods (e.g., H&E or Ki-67 staining) and image acquisition techniques (e.g., bright-field or phase contrast imaging). Experimental results demonstrate the superiority of our method over existing state-of-the-art methods in terms of both detection accuracy and running time.
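A structured-regression target of the kind described, a proximity map that peaks at the annotated cell centroids and decays to zero beyond a cutoff radius, could be generated as below; the decay form, radius, and sharpness constant are assumptions patterned on proximity-map cell detection in general, not necessarily the paper's exact definition:

```python
import numpy as np

def proximity_map(shape, centers, radius=5.0, alpha=3.0):
    """Build a dense regression target: value 1 at each dot annotation,
    exponential decay with distance, and 0 beyond the cutoff radius."""
    h, w = shape
    yy, xx = np.mgrid[0:h, 0:w]
    pmap = np.zeros(shape)
    for cy, cx in centers:
        d = np.sqrt((yy - cy) ** 2 + (xx - cx) ** 2)
        contrib = np.where(
            d <= radius,
            (np.exp(alpha * (1.0 - d / radius)) - 1.0) / (np.exp(alpha) - 1.0),
            0.0,
        )
        pmap = np.maximum(pmap, contrib)  # overlapping cells keep the max
    return pmap

# Two dot annotations on a 32x32 patch (illustrative coordinates).
pm = proximity_map((32, 32), [(10, 10), (20, 25)])
```

A network trained to regress such a map turns the weak dot annotations into a dense supervision signal; detected cells are then the local maxima of the predicted map.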
International Nuclear Information System (INIS)
Svensson, Elin; Berntsson, Thore; Stroemberg, Ann-Brith
2009-01-01
This paper presents a case study on the optimization of process integration investments in a pulp mill considering uncertainties in future electricity and biofuel prices and CO2 emissions charges. The work follows the methodology described in Svensson et al. [Svensson, E., Berntsson, T., Stroemberg, A.-B., Patriksson, M., 2008b. An optimization methodology for identifying robust process integration investments under uncertainty. Energy Policy, in press, doi:10.1016/j.enpol.2008.10.023], where a scenario-based approach is proposed for the modelling of uncertainties. The results show that the proposed methodology provides a way to handle the time dependence and the uncertainties of the parameters. For the analyzed case, a robust solution is found which turns out to be a combination of two opposing investment strategies. The difference between short-term and strategic views for the investment decision is analyzed, and it is found that uncertainties are increasingly important to account for as a more strategic view is employed. Furthermore, the results imply that the obvious effect of policy instruments aimed at decreasing CO2 emissions is, in applications like this, an increased profitability for all energy efficiency investments, and not as much a shift between different alternatives.
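The scenario-based reasoning can be reduced to a max-min sketch: for each investment alternative, evaluate its profit under every price/charge scenario and pick the alternative whose worst case is best. The alternatives, scenario names, and all figures below are invented for illustration:

```python
# Hypothetical net present values (arbitrary units) of three investment
# alternatives under three electricity/biofuel/CO2-charge scenarios.
npv = {
    "heat_integration": {"low_co2": 4.0, "mid": 6.0, "high_co2": 9.0},
    "biofuel_export":   {"low_co2": 2.0, "mid": 7.0, "high_co2": 12.0},
    "combined":         {"low_co2": 5.0, "mid": 6.5, "high_co2": 10.0},
}

# Robust (max-min) choice: best worst-case outcome across scenarios.
worst_case = {inv: min(v.values()) for inv, v in npv.items()}
robust_choice = max(worst_case, key=worst_case.get)
```

In this toy example the robust choice is the combined alternative, which is never the single best performer but never the worst; this mirrors the abstract's finding that the robust solution combines two opposing investment strategies.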
Guo, Yu; Dong, Daoyi; Shu, Chuan-Cun
2018-04-04
Achieving fast and efficient quantum state transfer is a fundamental task in physics, chemistry, and quantum information science. However, the successful implementation of perfect quantum state transfer also requires robustness under practically inevitable perturbative defects. Here, we demonstrate how optimal and robust quantum state transfer can be achieved by shaping the spectral phase of an ultrafast laser pulse in the framework of frequency-domain quantum optimal control theory. Our numerical simulations of a single dibenzoterrylene molecule as well as of atomic rubidium show that optimal and robust quantum state transfer via spectral-phase-modulated laser pulses can be achieved by incorporating a filtering function of the frequency into the optimization algorithm, which in turn has potential applications for ultrafast robust control of photochemical reactions.
An Approach to Robust Control of the Hopf Bifurcation
Directory of Open Access Journals (Sweden)
Giacomo Innocenti
2011-01-01
Full Text Available The paper illustrates a novel approach to modifying the nature of the Hopf bifurcation via a nonlinear state feedback control, which leaves the equilibrium properties unchanged. This result is achieved by resorting to linear and nonlinear transformations, which lead the system to locally assume the ordinary differential equation representation. Third-order models are considered, since they can be seen as proper representatives of a larger class of systems. The explicit relationship between the control input and the nature of the Hopf bifurcation is obtained via a frequency approach that does not need the computation of the center manifold.
Bureick, Johannes; Alkhatib, Hamza; Neumann, Ingo
2016-03-01
In many geodetic engineering applications it is necessary to describe a measured point cloud, captured, e.g., by laser scanner, by means of free-form curves or surfaces, e.g., with B-splines as basis functions. State-of-the-art approaches to determining B-splines yield results that are severely affected by data gaps and outliers. Optimal and robust B-spline fitting depends, however, on optimal selection of the knot vector. Hence, our approach combines Monte Carlo methods with the location and curvature of the measured data in order to determine the knot vector of the B-spline in such a way that no oscillating effects occur at the edges of data gaps. We introduce an optimized approach based on weights computed by means of resampling techniques. In order to minimize the effect of outliers, we apply robust M-estimators for the estimation of control points. The approach is applied to a multi-sensor system based on kinematic terrestrial laser scanning in the field of rail track inspection.
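The robust M-estimation of control points could look like the following iteratively reweighted least-squares sketch with Huber weights; a toy polynomial basis stands in for the B-spline design matrix, and the tuning constant and MAD scale estimate are conventional textbook choices, not necessarily the authors':

```python
import numpy as np

def irls_huber(A, y, k=1.345, iters=20):
    """Robust M-estimation via iteratively reweighted least squares:
    residuals beyond k robust-scale units get downweighted."""
    x = np.linalg.lstsq(A, y, rcond=None)[0]  # ordinary LS start
    for _ in range(iters):
        r = y - A @ x
        s = np.median(np.abs(r)) / 0.6745 + 1e-12  # robust scale (MAD)
        u = np.abs(r) / s
        w = np.where(u <= k, 1.0, k / u)           # Huber weights
        sw = np.sqrt(w)
        x = np.linalg.lstsq(sw[:, None] * A, sw * y, rcond=None)[0]
    return x

t = np.linspace(0.0, 1.0, 50)
A = np.vander(t, 4)                  # stand-in basis for B-spline functions
y = A @ np.array([1.0, -2.0, 0.5, 3.0])
y[::10] += 25.0                      # inject gross outliers
x_ls = np.linalg.lstsq(A, y, rcond=None)[0]
x_rob = irls_huber(A, y)
```

With a real B-spline design matrix in place of `A`, the same loop estimates control points while the injected outliers barely move the fit, whereas ordinary least squares is visibly dragged toward them.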
Controlled grafting of vinylic monomers on polyolefins: a robust mathematical modeling approach.
Saeb, Mohammad Reza; Rezaee, Babak; Shadman, Alireza; Formela, Krzysztof; Ahmadi, Zahed; Hemmati, Farkhondeh; Kermaniyan, Tayebeh Sadat; Mohammadi, Yousef
2017-01-01
Experimental and mathematical modeling analyses were used to control the melt free-radical grafting of vinylic monomers on polyolefins and, thereby, to reduce the disturbance of undesired cross-linking of polyolefins. Response surface, desirability function, and artificial intelligence methodologies were blended for the modeling/optimization of the grafting reaction in terms of vinylic monomer content, peroxide initiator concentration, and melt-processing time. An in-house code was developed based on an artificial neural network that learns and mimics the processing torque and the grafting of glycidyl methacrylate (GMA), a typical vinylic monomer, on high-density polyethylene (HDPE). Application of the response surface and desirability function enabled concurrent optimization of processing torque and GMA grafting on HDPE, through which we quantified for the first time the competition between parallel reactions taking place during melt processing: (i) desirable grafting of GMA on HDPE; (ii) undesirable cross-linking of HDPE. The proposed robust mathematical modeling approach can precisely learn the behavior of the grafting reaction of vinylic monomers on polyolefins and can be put into practice to find the exact operating conditions needed for efficient grafting of reactive monomers on polyolefins.
Optimal robustness of supervised learning from a noniterative point of view
Hu, Chia-Lun J.
1995-08-01
In most artificial neural network applications (e.g., pattern recognition), if the dimension of the input vectors is much larger than the number of patterns to be recognized, a one-layered, hard-limited perceptron is generally sufficient to do the recognition job. As long as the training input-output mapping set is numerically given, and as long as this given set satisfies a special linear-independency relation, the connection matrix that meets the supervised learning requirements can be solved by a noniterative, one-step, algebraic method. The learning of this noniterative scheme is very fast (close to real-time learning) because the learning is one-step and noniterative. The recognition of untrained patterns is very robust because a universal geometrical optimization process for selecting the solution can be applied to the learning process. This paper reports the theoretical foundation of this noniterative learning scheme and focuses on the analysis of optimal robustness. A real-time character recognition scheme is then designed along this line. This character recognition scheme will be used (in a movie presentation) to demonstrate the experimental results of some theoretical parts reported in this paper.
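The one-step algebraic solution can be sketched with a pseudoinverse: when the (fewer) training patterns are linearly independent in the (higher-dimensional) input space, a connection matrix reproducing the desired pre-limiter outputs exists and is found without iteration. The dimensions and target signs below are illustrative, and the pseudoinverse is a standard way to realize such a one-step solve, not necessarily the paper's exact construction:

```python
import numpy as np

# 4 training patterns in a 16-dimensional input space (dimension >> patterns,
# so the patterns are linearly independent with probability one).
rng = np.random.default_rng(0)
X = rng.standard_normal((16, 4))      # columns are training patterns
T = np.array([[1, -1, 1, -1],
              [-1, 1, 1, -1]])        # desired pre-limiter signs, 2 outputs

# One-step, noniterative learning: with full column rank, pinv(X) @ X = I,
# so W @ X reproduces T exactly before the hard limiter.
W = T @ np.linalg.pinv(X)

outputs = np.sign(W @ X)              # hard-limited recall of trained patterns
```

No gradient descent or epoch loop appears anywhere: the "learning" is a single linear-algebra step, which is what makes the scheme close to real time.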
International Nuclear Information System (INIS)
Abdul Aziz Mohamed; Jaafar Abdullah; Dahlan Mohd; Rozaidi Rasid; Megat Harun AlRashid Megat Ahmad; Mahathir Mohamad; Mohd Hamzah Harun
2012-01-01
An L18 mixed-level orthogonal array of the Taguchi robust design method was used to optimize the experimental conditions for the preparation of a polymer blend composite. Tensile strength and neutron absorption of the composite were the properties of interest. Filler size, filler loading, ball-mixing time, and dispersion agent concentration were selected as the parameters (factors) expected to affect the composite properties. The Taguchi analysis showed that filler loading was the most influential parameter on tensile strength and neutron absorption; the least influential was ball-mixing time. The optimal conditions were determined using the mixed-level Taguchi robust design method, and a polymer composite with a tensile strength of 6.33 MPa was successfully prepared. The composite was found to fully absorb a thermal neutron flux of 1.04 x 10^5 n/cm^2/s at a thickness of only 2 mm. In addition, the filler was also characterized by scanning electron microscopy (SEM) and elemental analysis (EDX). (Author)
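In a Taguchi analysis of a larger-the-better response such as tensile strength, factor levels are ranked by the signal-to-noise ratio rather than the raw mean; a minimal sketch, with invented replicate data, is:

```python
import math

def sn_larger_is_better(values):
    """Taguchi larger-the-better signal-to-noise ratio in dB:
    SN = -10 * log10( mean(1 / y^2) )."""
    n = len(values)
    return -10.0 * math.log10(sum(1.0 / (v * v) for v in values) / n)

# Hypothetical replicate tensile-strength measurements (MPa) for two
# factor-level combinations from an L18 array:
run_a = [6.1, 6.3, 6.5]   # similar mean, low scatter
run_b = [4.0, 6.3, 8.0]   # similar mean, high scatter
```

Run A scores a higher S/N ratio than run B despite a comparable mean, because the ratio penalizes variation; this is exactly the sense in which the Taguchi method selects factor levels that are robust rather than merely high performing on average.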
Robust Beamforming Algorithms
Directory of Open Access Journals (Sweden)
Sajad Dehghani
2014-04-01
Full Text Available Adaptive beamforming methods are known to degrade in the presence of steering vector and covariance matrix uncertainty. In this paper, a new approach to adaptive minimum variance distortionless response (MVDR) beamforming is presented that is robust against uncertainties in both the steering vector and the covariance matrix. The method solves an optimization problem with a quadratic objective function and a quadratic constraint. Although this problem is nonconvex, it is converted into a convex optimization problem in this paper. It is solved by the interior-point method, and the optimum weight vector for robust beamforming is obtained.
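For context, the nominal MVDR weight vector and one common robustification, diagonal loading of the covariance matrix, can be sketched as follows; this is a simplified stand-in for the paper's quadratically constrained formulation, and the array geometry, angles, and powers are invented:

```python
import numpy as np

def mvdr_weights(R, a, loading=0.0):
    """MVDR weights w = R^{-1} a / (a^H R^{-1} a); diagonal loading of R is
    a simple, widely used robustification against covariance and
    steering-vector errors."""
    Rl = R + loading * np.eye(R.shape[0])
    Ri_a = np.linalg.solve(Rl, a)
    return Ri_a / (a.conj() @ Ri_a)

n = 4  # uniform linear array with half-wavelength spacing
steer = lambda th: np.exp(1j * np.pi * np.arange(n) * np.sin(th))
a0 = steer(np.deg2rad(0.0))    # desired look direction
a1 = steer(np.deg2rad(40.0))   # interferer direction

# Covariance: interferer with power 10 plus unit-power sensor noise.
R = 10.0 * np.outer(a1, a1.conj()) + np.eye(n)
w = mvdr_weights(R, a0, loading=0.1)

response_desired = abs(w.conj() @ a0)   # distortionless constraint: ~1
response_interf = abs(w.conj() @ a1)    # interferer strongly suppressed
```

The distortionless response toward the look direction holds by construction, while the interferer is placed deep in a null; the paper's contribution lies in choosing a more principled robust constraint than the simple loading term used here.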
van de Water, Steven; Albertini, Francesca; Weber, Damien C.; Heijmen, Ben J. M.; Hoogeman, Mischa S.; Lomax, Antony J.
2018-01-01
The aim of this study is to develop an anatomical robust optimization method for intensity-modulated proton therapy (IMPT) that accounts for interfraction variations in nasal cavity filling, and to compare it with conventional single-field uniform dose (SFUD) optimization and online plan adaptation. We included CT data of five patients with tumors in the sinonasal region. Using the planning CT, we generated for each patient 25 ‘synthetic’ CTs with varying nasal cavity filling. The robust optimization method available in our treatment planning system ‘Erasmus-iCycle’ was extended to also account for anatomical uncertainties by including (synthetic) CTs with varying patient anatomy as error scenarios in the inverse optimization. For each patient, we generated treatment plans using anatomical robust optimization and, for benchmarking, using SFUD optimization and online plan adaptation. Clinical target volume (CTV) and organ-at-risk (OAR) doses were assessed by recalculating the treatment plans on the synthetic CTs, evaluating dose distributions individually and accumulated over an entire fractionated 50 GyRBE treatment, assuming each synthetic CT to correspond to a 2 GyRBE fraction. Treatment plans were also evaluated using actual repeat CTs. Anatomical robust optimization resulted in adequate CTV doses (V95% ⩾ 98% and V107% ⩽ 2%) if at least three synthetic CTs were included in addition to the planning CT. These CTV requirements were also fulfilled for online plan adaptation, but not for the SFUD approach, even when applying a margin of 5 mm. Compared with anatomical robust optimization, OAR dose parameters for the accumulated dose distributions were on average 5.9 GyRBE (20%) higher when using SFUD optimization and on average 3.6 GyRBE (18%) lower for online plan adaptation. In conclusion, anatomical robust optimization effectively accounted for changes in nasal cavity filling during IMPT, providing substantially improved CTV and
International Nuclear Information System (INIS)
Luo, Xiongbiao; Wan, Ying; He, Xiangjian
2015-01-01
Purpose: Electromagnetically guided endoscopic procedures, which aim at accurately and robustly localizing the endoscope, involve multimodal sensory information during interventions. However, it remains challenging to integrate this information for precise and stable endoscopic guidance. To tackle this challenge, this paper proposes a new framework on the basis of an enhanced particle swarm optimization method to effectively fuse this information for accurate and continuous endoscope localization. Methods: The authors use the particle swarm optimization method, a stochastic evolutionary computation algorithm, to effectively fuse the multimodal information, including preoperative information (i.e., computed tomography images) as a frame of reference, endoscopic camera videos, and positional sensor measurements (i.e., electromagnetic sensor outputs). Since evolutionary computation methods are usually limited by possible premature convergence and fixed evolutionary factors, the authors introduce the current (endoscopic camera and electromagnetic sensor) observation to boost the particle swarm optimization and also adaptively update the evolutionary parameters in accordance with spatial constraints and the current observation, resulting in advantageous performance of the enhanced algorithm. Results: The experimental results demonstrate that the authors' proposed method provides a more accurate and robust endoscopic guidance framework than state-of-the-art methods. The average guidance accuracy of the authors' framework was about 3.0 mm and 5.6°, while the previous methods showed at least 3.9 mm and 7.0°. The average position and orientation smoothness of their method was 1.0 mm and 1.6°, significantly better than the other methods (at least 2.0 mm and 2.6°). Additionally, the average visual quality of the endoscopic guidance was improved to 0.29. Conclusions: A robust electromagnetically guided endoscopy framework was
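The underlying particle swarm optimization that the framework enhances can be sketched in its basic form; the observation-driven boosting and adaptive parameter updates described in the abstract are omitted, the inertia and acceleration coefficients are conventional choices, and a toy sphere cost stands in for the image/sensor matching cost:

```python
import random

def pso_minimize(f, lower, upper, n_particles=30, iters=100, seed=3):
    """Basic particle swarm optimization: each particle is pulled toward its
    personal best and the global best position found so far."""
    rng = random.Random(seed)
    dim = len(lower)
    pos = [[rng.uniform(lower[d], upper[d]) for d in range(dim)]
           for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_f = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[g][:], pbest_f[g]
    w, c1, c2 = 0.7, 1.5, 1.5  # inertia, cognitive, and social coefficients
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] = min(max(pos[i][d] + vel[i][d], lower[d]), upper[d])
            fi = f(pos[i])
            if fi < pbest_f[i]:
                pbest[i], pbest_f[i] = pos[i][:], fi
                if fi < gbest_f:
                    gbest, gbest_f = pos[i][:], fi
    return gbest, gbest_f

best_pos, best_val = pso_minimize(lambda x: sum(v * v for v in x),
                                  [-10.0] * 3, [10.0] * 3)
```

In the paper's setting, a particle would encode a candidate endoscope pose and `f` would score agreement with the camera image and electromagnetic sensor reading; the enhancement injects the current observation to counter the premature convergence visible even in this basic loop.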
Energy Technology Data Exchange (ETDEWEB)
Luo, Xiongbiao, E-mail: xluo@robarts.ca, E-mail: Ying.Wan@student.uts.edu.au [Robarts Research Institute, Western University, London, Ontario N6A 5K8 (Canada); Wan, Ying, E-mail: xluo@robarts.ca, E-mail: Ying.Wan@student.uts.edu.au; He, Xiangjian [School of Computing and Communications, University of Technology, Sydney, New South Wales 2007 (Australia)
2015-04-15
Purpose: Electromagnetically guided endoscopic procedures, which aim to localize the endoscope accurately and robustly, involve multimodal sensory information during interventions. However, it remains challenging to integrate this information for precise and stable endoscopic guidance. To tackle this challenge, this paper proposes a new framework, based on an enhanced particle swarm optimization method, that effectively fuses this information for accurate and continuous endoscope localization. Methods: The authors use particle swarm optimization, a stochastic evolutionary computation algorithm, to fuse the multimodal information, including preoperative information (i.e., computed tomography images) as a frame of reference, endoscopic camera videos, and positional sensor measurements (i.e., electromagnetic sensor outputs). Since evolutionary computation methods are often hampered by premature convergence and fixed evolutionary factors, the authors introduce the current (endoscopic camera and electromagnetic sensor) observation to boost the particle swarm optimization and adaptively update the evolutionary parameters in accordance with spatial constraints and the current observation, yielding advantageous performance in the enhanced algorithm. Results: The experimental results demonstrate that the proposed method provides a more accurate and robust endoscopic guidance framework than state-of-the-art methods. The average guidance accuracy of the authors’ framework was about 3.0 mm and 5.6°, while the previous methods show at least 3.9 mm and 7.0°. The average position and orientation smoothness of their method was 1.0 mm and 1.6°, significantly better than the other methods (at least 2.0 mm and 2.6°). Additionally, the average visual quality of the endoscopic guidance was improved to 0.29. Conclusions: A robust electromagnetically guided endoscopy framework was
Quantum Resonance Approach to Combinatorial Optimization
Zak, Michail
1997-01-01
It is shown that quantum resonance can be used for combinatorial optimization. The advantage of the approach is that the computing time is independent of the dimensionality of the problem. As an example, the solution of a constraint satisfaction problem of exponential complexity is demonstrated.
Robust mode space approach for atomistic modeling of realistically large nanowire transistors
Huang, Jun Z.; Ilatikhameneh, Hesameddin; Povolotskyi, Michael; Klimeck, Gerhard
2018-01-01
Nanoelectronic transistors have reached 3D length scales in which the number of atoms is countable. Truly atomistic device representations are needed to capture the essential functionalities of the devices. Atomistic quantum transport simulations of realistically extended devices are, however, computationally very demanding. The widely used mode space (MS) approach can significantly reduce the numerical cost, but a good MS basis is usually very hard to obtain for atomistic full-band models. In this work, a robust and parallel algorithm is developed to optimize the MS basis for atomistic nanowires. This enables engineering-level, reliable tight binding non-equilibrium Green's function simulation of nanowire metal-oxide-semiconductor field-effect transistor (MOSFET) with a realistic cross section of 10 nm × 10 nm using a small computer cluster. This approach is applied to compare the performance of InGaAs and Si nanowire n-type MOSFETs (nMOSFETs) with various channel lengths and cross sections. Simulation results with full-band accuracy indicate that InGaAs nanowire nMOSFETs have no drive current advantage over their Si counterparts for cross sections up to about 10 nm × 10 nm.
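The mode-space idea in this record can be illustrated on a toy Hamiltonian: projecting onto a few low-energy eigenvectors yields a much smaller matrix that reproduces the low-energy spectrum. The sketch below uses a simple 1-D nearest-neighbour chain as a stand-in for the paper's atomistic full-band model; the basis-construction step for real nanowires is far more involved.

```python
import numpy as np

def mode_space_basis(H, n_modes):
    """Use the n_modes lowest-energy eigenvectors of H as a reduced basis."""
    vals, vecs = np.linalg.eigh(H)        # eigh returns ascending eigenvalues
    return vecs[:, :n_modes]

# Toy 1-D nearest-neighbour tight-binding chain (N sites, hopping t),
# a stand-in for the atomistic full-band Hamiltonian.
N, t = 40, 1.0
H = np.diag(np.full(N - 1, -t), 1) + np.diag(np.full(N - 1, -t), -1)

U = mode_space_basis(H, 8)
H_ms = U.T @ H @ U                        # 8x8 mode-space Hamiltonian
full = np.linalg.eigvalsh(H)[:8]          # 8 lowest full-space energies
reduced = np.sort(np.linalg.eigvalsh(H_ms))
```

Here the reduction is exact because the basis consists of exact eigenvectors; the paper's contribution is making such a basis robust and cheap to obtain for atomistic models where exact diagonalization is infeasible.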
Numerical Optimization Design of Dynamic Quantizer via Matrix Uncertainty Approach
Directory of Open Access Journals (Sweden)
Kenji Sawada
2013-01-01
Full Text Available In networked control systems, continuous-valued signals are compressed to discrete-valued signals via quantizers and then transmitted/received through communication channels. Such quantization often degrades the control performance; a quantizer must therefore be designed to minimize the difference between the system output before and after the quantizer is inserted. In view of the broadbandization and robustness of networked control systems, we consider the continuous-time quantizer design problem. In particular, this paper describes a numerical optimization method for a continuous-time dynamic quantizer that accounts for the switching speed. Using a matrix uncertainty approach from sampled-data control, we clarify that both the temporal and spatial resolution constraints can be considered simultaneously in analysis and synthesis. Finally, for slow switching, we compare the proposed and existing methods through numerical examples. From the examples, a new insight is presented for the two-step design of the existing continuous-time optimal quantizer.
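The benefit of a dynamic quantizer over a static one (the "output difference" being minimized above) can be seen in a scalar toy example: feeding the accumulated quantization error back before rounding keeps the output's running sum close to the input's. This is a hedged discrete-time, delta-sigma-style sketch, not the paper's continuous-time design.

```python
def static_quantize(x, step):
    """Memoryless rounding to the nearest quantization level."""
    return [step * round(v / step) for v in x]

def dynamic_quantize(x, step):
    """First-order error-feedback (delta-sigma style) quantizer: the
    accumulated quantization error is added back before rounding, so the
    running sum of the output tracks the running sum of the input."""
    out, err = [], 0.0
    for v in x:
        u = v + err
        q = step * round(u / step)
        err = u - q                 # carry the new error forward
        out.append(q)
    return out

x = [0.3] * 10                      # constant input below half a step
s = static_quantize(x, 1.0)         # always rounds to 0: input is lost
d = dynamic_quantize(x, 1.0)        # occasionally emits 1: sum is preserved
```

The static quantizer outputs all zeros, while the dynamic one emits an occasional 1 so that the cumulative output matches the cumulative input to within one step.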
Practical Robust Optimization Method for Unit Commitment of a System with Integrated Wind Resource
Directory of Open Access Journals (Sweden)
Yuanchao Yang
2017-01-01
Full Text Available Unit commitment, one of the significant tasks in power system operations, faces new challenges as system uncertainty increases dramatically due to the integration of time-varying resources such as wind. To address these challenges, we propose the formulation and solution of a generalized unit commitment problem for a system with integrated wind resources. Given prespecified interval information, acquired from a real central wind forecasting system, representing the uncertainty of nodal wind injections together with their correlation information, the proposed unit commitment problem solution is computationally tractable and robust against all uncertain wind power injection realizations. We provide a solution approach to tackle this mathematically complex problem and illustrate the capabilities of the proposed mixed integer solution approach on the large-scale power system of the Northwest China Grid. The numerical results demonstrate that the approach is realistic and not overly conservative in terms of the resulting dispatch cost outcomes.
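Robustness against interval wind uncertainty, in its simplest form, means committing enough capacity for the worst realization in the forecast interval. The toy sketch below, a greedy merit-order commitment over invented units, shows only that worst-case reduction; it is a drastic simplification of the paper's correlated mixed-integer formulation.

```python
def robust_commitment(units, demand, wind_interval):
    """Commit cheapest units until the worst-case net demand is covered.

    Robust feasibility here means meeting demand under the lowest wind
    realization in the forecast interval (a simplification of the
    paper's formulation).
    """
    net_demand = demand - wind_interval[0]     # worst case: least wind
    committed, covered = [], 0.0
    for cap, cost in sorted(units, key=lambda u: u[1]):
        if covered >= net_demand:
            break
        committed.append((cap, cost))
        covered += cap
    return committed, covered

# Invented example: (capacity_MW, marginal_cost) pairs, demand in MW,
# and a wind forecast interval in MW.
units = [(100, 20.0), (80, 25.0), (60, 30.0), (50, 40.0)]
committed, covered = robust_commitment(units, demand=220.0,
                                       wind_interval=(30.0, 70.0))
```

Using the interval's lower bound (30 MW) rather than its midpoint forces an extra unit online; a less conservative scheme, like the one in the record, exploits correlation information to shrink that margin.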
The Orienteering Problem under Uncertainty: Stochastic Programming and Robust Optimization compared
Evers, L.; Glorie, K.; Ster, S. van der; Barros, A.I.; Monsuur, H.
2012-01-01
The Orienteering Problem (OP) is a generalization of the well-known traveling salesman problem and has many interesting applications in logistics, tourism and defense. To reflect real-life situations, we focus on an uncertain variant of the OP. Two main approaches that deal with optimization under
Directory of Open Access Journals (Sweden)
Xiaozhang Qu
2016-07-01
Full Text Available A modified epoxy resin sheet molding compound (SMC) impeller has been designed. Testing shows that the non-metal impeller has better environmental aging performance but requires a waterproof design. To improve the stability of the impeller vibration design, the influence of uncertainty factors is considered, and a multi-objective robust optimization method is proposed to reduce the weight of the impeller. Firstly, based on the fluid-structure interaction, the analysis model of the impeller vibration is constructed. Secondly, an approximate model of the impeller is constructed using Latin hypercube sampling and radial basis functions, and the fitting and optimization accuracy of the approximate model is improved by adding sample points. Finally, a micro multi-objective genetic algorithm is applied to the robust optimization of the approximate model, and Monte Carlo simulation and Sobol sampling techniques are used for reliability analysis. A comparison of the deterministic results with those at different sigma levels and for different materials shows that the multi-objective optimization of the SMC impeller can meet the engineering requirements for stability and light weight. The effectiveness of the proposed multi-objective robust optimization method is verified by error analysis. After SMC molding and robust optimization of the impeller, the optimization rate reached 42.5%, which greatly improved the economic benefit and greatly reduced the vibration of the ventilation system.
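The surrogate-building step described in this record (Latin hypercube sampling plus a radial basis function model) can be sketched in a few lines of numpy. The response function below is a smooth invented stand-in for the actual fluid-structure vibration analysis.

```python
import math
import numpy as np

def latin_hypercube(n, dim, rng):
    """One stratified sample per interval [i/n, (i+1)/n) in each dimension."""
    u = (np.arange(n)[:, None] + rng.random((n, dim))) / n
    for d in range(dim):
        u[:, d] = u[rng.permutation(n), d]   # decouple the dimensions
    return u

def rbf_surrogate(X, y, eps=2.0):
    """Gaussian radial-basis-function interpolant through the samples."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    w = np.linalg.solve(np.exp(-eps * d2), y)
    def predict(x):
        return float(np.exp(-eps * ((X - np.asarray(x)) ** 2).sum(-1)) @ w)
    return predict

rng = np.random.default_rng(0)
X = latin_hypercube(40, 2, rng)              # 40 design points in [0,1]^2
y = np.sin(3 * X[:, 0]) + X[:, 1] ** 2       # stand-in vibration response
predict = rbf_surrogate(X, y)
```

Once such a surrogate is accurate enough (the record improves it by adding sample points), the genetic algorithm and Monte Carlo reliability analysis run against `predict` instead of the expensive simulation.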
Converse, S.J.; Kendall, W.L.; Doherty, P.F.; Naughton, M.B.; Hines, J.E.; Thomson, David L.; Cooch, Evan G.; Conroy, Michael J.
2009-01-01
For many animal populations, one or more life stages are not accessible to sampling, and therefore an unobservable state is created. For colonially-breeding populations, this unobservable state could represent the subset of adult breeders that have foregone breeding in a given year. This situation applies to many seabird populations, notably albatrosses, where skipped breeders are either absent from the colony, or are present but difficult to capture or correctly assign to breeding state. Kendall et al. have proposed design strategies for investigations of seabird demography where such temporary emigration occurs, suggesting the use of the robust design to permit the estimation of time-dependent parameters and to increase the precision of estimates from multi-state models. A traditional robust design, where animals are subject to capture multiple times in a sampling season, is feasible in many cases. However, due to concerns that multiple captures per season could cause undue disturbance to animals, Kendall et al. developed a less-invasive robust design (LIRD), where initial captures are followed by an assessment of the ratio of marked-to-unmarked birds in the population or sampled plot. This approach has recently been applied in the Northwestern Hawaiian Islands to populations of Laysan (Phoebastria immutabilis) and black-footed (P. nigripes) albatrosses. In this paper, we outline the LIRD and its application to seabird population studies. We then describe an approach to determining optimal allocation of sampling effort in which we consider a non-robust design option (nRD), and variations of both the traditional robust design (RD), and the LIRD. Variations we considered included the number of secondary sampling occasions for the RD and the amount of total effort allocated to the marked-to-unmarked ratio assessment for the LIRD. We used simulations, informed by early data from the Hawaiian study, to address optimal study design for our example cases. We found that
Energy Technology Data Exchange (ETDEWEB)
Liu, Wei, E-mail: Liu.Wei@mayo.edu [Department of Radiation Oncology, Mayo Clinic Arizona, Phoenix, Arizona (United States); Schild, Steven E. [Department of Radiation Oncology, Mayo Clinic Arizona, Phoenix, Arizona (United States); Chang, Joe Y.; Liao, Zhongxing [Department of Radiation Oncology, the University of Texas MD Anderson Cancer Center, Houston, Texas (United States); Chang, Yu-Hui [Division of Health Sciences Research, Mayo Clinic Arizona, Phoenix, Arizona (United States); Wen, Zhifei [Department of Radiation Physics, the University of Texas MD Anderson Cancer Center, Houston, Texas (United States); Shen, Jiajian; Stoker, Joshua B.; Ding, Xiaoning; Hu, Yanle [Department of Radiation Oncology, Mayo Clinic Arizona, Phoenix, Arizona (United States); Sahoo, Narayan [Department of Radiation Physics, the University of Texas MD Anderson Cancer Center, Houston, Texas (United States); Herman, Michael G. [Department of Radiation Oncology, Mayo Clinic Rochester, Rochester, Minnesota (United States); Vargas, Carlos; Keole, Sameer; Wong, William; Bues, Martin [Department of Radiation Oncology, Mayo Clinic Arizona, Phoenix, Arizona (United States)
2016-05-01
Purpose: The purpose of this study was to compare the impact of uncertainties and interplay on 3-dimensional (3D) and 4D robustly optimized intensity modulated proton therapy (IMPT) plans for lung cancer in an exploratory methodology study. Methods and Materials: IMPT plans were created for 11 nonrandomly selected non-small cell lung cancer (NSCLC) cases: 3D robustly optimized plans on average CTs with internal gross tumor volume density overridden to irradiate internal target volume, and 4D robustly optimized plans on 4D computed tomography (CT) to irradiate clinical target volume (CTV). Regular fractionation (66 Gy [relative biological effectiveness; RBE] in 33 fractions) was considered. In 4D optimization, the CTV of individual phases received nonuniform doses to achieve a uniform cumulative dose. The root-mean-square dose-volume histograms (RVH) measured the sensitivity of the dose to uncertainties, and the areas under the RVH curve (AUCs) were used to evaluate plan robustness. Dose evaluation software modeled time-dependent spot delivery to incorporate interplay effect with randomized starting phases of each field per fraction. Dose-volume histogram (DVH) indices comparing CTV coverage, homogeneity, and normal tissue sparing were evaluated using Wilcoxon signed rank test. Results: 4D robust optimization plans led to smaller AUC for CTV (14.26 vs 18.61, respectively; P=.001), better CTV coverage (Gy [RBE]) (D{sub 95%} CTV: 60.6 vs 55.2, respectively; P=.001), and better CTV homogeneity (D{sub 5%}-D{sub 95%} CTV: 10.3 vs 17.7, respectively; P=.002) in the face of uncertainties. With interplay effect considered, 4D robust optimization produced plans with better target coverage (D{sub 95%} CTV: 64.5 vs 63.8, respectively; P=.0068), comparable target homogeneity, and comparable normal tissue protection. The benefits from 4D robust optimization were most obvious for the 2 typical stage III lung cancer patients. Conclusions: Our exploratory methodology study showed
Zheng, Wenming; Lin, Zhouchen; Wang, Haixian
2014-04-01
A novel discriminant analysis criterion is derived in this paper under the theoretical framework of Bayes optimality. In contrast to the conventional Fisher discriminant criterion, the major novelty of the proposed one is the use of the L1 norm rather than the L2 norm, which makes it less sensitive to outliers. With the L1-norm discriminant criterion, we propose a new linear discriminant analysis (L1-LDA) method for the linear feature extraction problem. To solve the L1-LDA optimization problem, we propose an efficient iterative algorithm in which a novel surrogate convex function is introduced such that the optimization problem in each iteration simply amounts to solving a convex programming problem, for which a closed-form solution is guaranteed. Moreover, we generalize the L1-LDA method to nonlinear robust feature extraction problems via the kernel trick, yielding the L1-norm kernel discriminant analysis (L1-KDA) method. Extensive experiments on simulated and real data sets are conducted to evaluate the effectiveness of the proposed method in comparison with state-of-the-art methods.
The Study of an Optimal Robust Design and Adjustable Ordering Strategies in the HSCM.
Liao, Hung-Chang; Chen, Yan-Kwang; Wang, Ya-huei
2015-01-01
The purpose of this study was to establish a hospital supply chain management (HSCM) model in which three kinds of drugs in the same class and with the same indications were used in creating an optimal robust design and adjustable ordering strategies to deal with a drug shortage. The main assumption was that although each doctor has his/her own prescription pattern, when there is a shortage of a particular drug, the doctor may choose a similar drug with the same indications as a replacement. Four steps were used to construct and analyze the HSCM model. The computation technology used included a simulation, a neural network (NN), and a genetic algorithm (GA). The mathematical methods of the simulation and the NN were used to construct a relationship between the factor levels and performance, while the GA was used to obtain the optimal combination of factor levels from the NN. A sensitivity analysis was also used to assess the change in the optimal factor levels. Adjustable ordering strategies were also developed to prevent drug shortages.
Hadzi-Vaskov, M.; Kool, C.J.M.
2007-01-01
This paper presents a robustness check of the stochastic discount factor approach to international (bilateral) risk-sharing given in Brandt, Cochrane, and Santa-Clara (2006). We demonstrate two main inherent limitations of the bilateral SDF approach to international risk-sharing. First, the discount
Portfolio optimization using median-variance approach
Wan Mohd, Wan Rosanisah; Mohamad, Daud; Mohamed, Zulkifli
2013-04-01
Optimization models have been applied in many decision-making problems, particularly in portfolio selection. Since the introduction of Markowitz's theory of portfolio selection, various approaches based on mathematical programming have been introduced, such as mean-variance, mean-absolute deviation, mean-variance-skewness and conditional value-at-risk (CVaR), mainly to maximize return and minimize risk. However, most of these approaches assume that the distribution of data is normal, which is not generally true. As an alternative, in this paper, we employ the median-variance approach to improve portfolio optimization. This approach successfully caters for both normal and non-normal data distributions. With this representation, we analyze and compare the rate of return and risk between mean-variance and median-variance based portfolios consisting of 30 stocks from Bursa Malaysia. The results of this study show that the median-variance approach produces a lower risk for each level of return than the mean-variance approach.
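The case for the median as a location estimate is easy to demonstrate: under skewed returns with rare crash days, an asset's mean can be dragged negative while its median stays positive, so mean-based and median-based selection disagree. A small simulation with invented return parameters (not the Bursa Malaysia data):

```python
import random
import statistics

random.seed(7)
n = 1000
# Asset A: modest steady returns. Asset B: better typical days but rare
# crash days. All parameters are invented for illustration only.
a = [random.gauss(0.000, 0.01) for _ in range(n)]
b = [random.gauss(0.003, 0.01) for _ in range(n)]
for i in range(0, n, 25):               # 40 crash days for asset B
    b[i] -= 0.20

mean_a, mean_b = statistics.fmean(a), statistics.fmean(b)
med_a, med_b = statistics.median(a), statistics.median(b)
```

A mean-variance investor would rank A above B, while a median-based location estimate still ranks B higher; the median-variance approach in the record builds portfolio optimization on the latter, more outlier-resistant statistic.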
Directory of Open Access Journals (Sweden)
Koofigar Hamid Reza
2017-09-01
Full Text Available A robust auxiliary wide area damping controller is proposed for a unified power flow controller (UPFC). The mixed H2 / H∞ problem with regional pole placement, resolved by linear matrix inequalities (LMI), is applied to the controller design. Based on modal analysis, the optimal wide area input signals for the controller are selected. The time delay of the input signals, due to the electrical distance from the UPFC location, is taken into account in the design procedure. The proposed controller is applied to a multi-machine interconnected power system from the IRAN power grid. It is shown that both the transient and the dynamic stability are significantly improved despite different disturbances and loading conditions.
International Nuclear Information System (INIS)
Bouzid, M.; Benkherouf, H.; Benzadi, K.
2011-01-01
In this paper, we propose a stochastic joint source-channel scheme developed for efficient and robust encoding of spectral speech LSF parameters. The encoding system, named LSF-SSCOVQ-RC, is an LSF encoding scheme based on a reduced-complexity stochastic split vector quantizer optimized for noisy channels. For transmission over a noisy channel, we first show that our LSF-SSCOVQ-RC encoder outperforms the conventional LSF encoder designed with a split vector quantizer. We then apply the LSF-SSCOVQ-RC encoder (with weighted distance) to the robust encoding of the LSF parameters of the 2.4 Kbits/s MELP speech coder operating over a noisy/noiseless channel. The simulation results show that the proposed LSF encoder, incorporated in the MELP coder, ensures better performance than the original MELP MSVQ of 25 bits/frame, especially when the transmission channel is highly disturbed. Indeed, we show that the LSF-SSCOVQ-RC yields significant improvement in LSF encoding performance by ensuring reliable transmission over noisy channels.
Energy-scales convergence for optimal and robust quantum transport in photosynthetic complexes
Energy Technology Data Exchange (ETDEWEB)
Mohseni, M. [Google Research, Venice, California 90291 (United States); Research Laboratory of Electronics, Massachusetts Institute of Technology, Cambridge, Massachusetts 02139 (United States); Shabani, A. [Department of Chemistry, Princeton University, Princeton, New Jersey 08544 (United States); Department of Chemistry, University of California at Berkeley, Berkeley, California 94720 (United States); Lloyd, S. [Department of Mechanical Engineering, Massachusetts Institute of Technology, Cambridge, Massachusetts 02139 (United States); Rabitz, H. [Department of Chemistry, Princeton University, Princeton, New Jersey 08544 (United States)
2014-01-21
Underlying physical principles for the high efficiency of excitation energy transfer in light-harvesting complexes are not fully understood. Notably, the degree of robustness of these systems for transporting energy is not known considering their realistic interactions with vibrational and radiative environments within the surrounding solvent and scaffold proteins. In this work, we employ an efficient technique to estimate energy transfer efficiency of such complex excitonic systems. We observe that the dynamics of the Fenna-Matthews-Olson (FMO) complex leads to optimal and robust energy transport due to a convergence of energy scales among all important internal and external parameters. In particular, we show that the FMO energy transfer efficiency is optimum and stable with respect to important parameters of environmental interactions including reorganization energy λ, bath frequency cutoff γ, temperature T, and bath spatial correlations. We identify the ratio of k{sub B}λT/ℏγg as a single key parameter governing quantum transport efficiency, where g is the average excitonic energy gap.
Energy-scales convergence for optimal and robust quantum transport in photosynthetic complexes
International Nuclear Information System (INIS)
Mohseni, M.; Shabani, A.; Lloyd, S.; Rabitz, H.
2014-01-01
Underlying physical principles for the high efficiency of excitation energy transfer in light-harvesting complexes are not fully understood. Notably, the degree of robustness of these systems for transporting energy is not known considering their realistic interactions with vibrational and radiative environments within the surrounding solvent and scaffold proteins. In this work, we employ an efficient technique to estimate energy transfer efficiency of such complex excitonic systems. We observe that the dynamics of the Fenna-Matthews-Olson (FMO) complex leads to optimal and robust energy transport due to a convergence of energy scales among all important internal and external parameters. In particular, we show that the FMO energy transfer efficiency is optimum and stable with respect to important parameters of environmental interactions including reorganization energy λ, bath frequency cutoff γ, temperature T, and bath spatial correlations. We identify the ratio of k{sub B}λT/ℏγg as a single key parameter governing quantum transport efficiency, where g is the average excitonic energy gap.
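The "convergence of energy scales" claim means the governing dimensionless ratio is of order one for FMO. The sketch below evaluates k_BλT/ℏγg in wavenumber units using representative literature values; the exact numbers, especially the average gap g, are assumptions for illustration and are not taken from these abstracts.

```python
# All energies in wavenumbers (cm^-1); representative literature values
# for the FMO complex -- assumptions, not figures from this record.
k_B    = 0.6950    # Boltzmann constant, cm^-1 per kelvin
lam    = 35.0      # reorganization energy lambda
hgamma = 106.0     # bath cutoff energy (hbar * gamma)
T      = 298.0     # temperature, K
g      = 100.0     # average excitonic energy gap (assumed)

ratio = (k_B * lam * T) / (hgamma * g)   # dimensionless k_B*lam*T / (hbar*gamma*g)
```

With these values the ratio lands near unity, which is the regime the record identifies as optimal and robust for quantum transport.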
Decreasing-Rate Pruning Optimizes the Construction of Efficient and Robust Distributed Networks.
Directory of Open Access Journals (Sweden)
Saket Navlakha
2015-07-01
Full Text Available Robust, efficient, and low-cost networks are advantageous in both biological and engineered systems. During neural network development in the brain, synapses are massively over-produced and then pruned-back over time. This strategy is not commonly used when designing engineered networks, since adding connections that will soon be removed is considered wasteful. Here, we show that for large distributed routing networks, network function is markedly enhanced by hyper-connectivity followed by aggressive pruning and that the global rate of pruning, a developmental parameter not previously studied by experimentalists, plays a critical role in optimizing network structure. We first used high-throughput image analysis techniques to quantify the rate of pruning in the mammalian neocortex across a broad developmental time window and found that the rate is decreasing over time. Based on these results, we analyzed a model of computational routing networks and show using both theoretical analysis and simulations that decreasing rates lead to more robust and efficient networks compared to other rates. We also present an application of this strategy to improve the distributed design of airline networks. Thus, inspiration from neural network formation suggests effective ways to design distributed networks across several domains.
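The core experimental variable in this record, the global pruning rate, can be captured as a schedule of edge removals per epoch. A hedged sketch comparing a decreasing (prune-aggressively-early) schedule against a constant one; the halving schedule is one simple decreasing-rate choice, not the paper's fitted neocortical rate.

```python
def pruning_schedule(e0, e_final, epochs, mode="decreasing"):
    """Number of edges to remove at each epoch, ending with e_final edges.

    'decreasing' halves the removal amount each epoch (prunes aggressively
    early); 'constant' removes the same number every epoch.
    """
    total = e0 - e_final
    if mode == "constant":
        amounts = [total // epochs] * epochs
    else:
        weights = [2 ** (epochs - 1 - t) for t in range(epochs)]
        s = sum(weights)
        amounts = [total * w // s for w in weights]
    amounts[-1] += total - sum(amounts)   # absorb rounding so totals match
    return amounts
```

Both schedules take a hyper-connected network of 1000 edges down to 200, but the decreasing schedule concentrates removals in the first epochs, which is the regime the record finds produces more robust and efficient routing networks.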
International Nuclear Information System (INIS)
Jiang, Jian-ping; Li, Dong-xu
2010-01-01
The decentralized robust vibration control with collocated piezoelectric actuator and strain sensor pairs is considered in this paper for spacecraft solar panel structures. Each actuator is driven individually by the output of the corresponding sensor so that only local feedback control is implemented, with each actuator, sensor and controller operating independently. Firstly, an optimal placement method for the location of the collocated piezoelectric actuator and strain gauge sensor pairs is developed based on the degree of observability and controllability indices for solar panel structures. Secondly, a decentralized robust H∞ controller is designed to suppress the vibration induced by external disturbance. Finally, a numerical comparison between centralized and decentralized control systems is performed in order to investigate their effectiveness in suppressing vibration of the smart solar panel. The simulation results show that the vibration can be significantly suppressed with permitted actuator voltages by the controllers. The decentralized control system achieves almost the same disturbance attenuation level as the centralized control system, with slightly higher control voltages. More importantly, the decentralized controller, composed of four third-order systems, is more practical to implement than a high-order centralized controller.
Zhang, Minliang; Chen, Qian; Tao, Tianyang; Feng, Shijie; Hu, Yan; Li, Hui; Zuo, Chao
2017-08-21
Temporal phase unwrapping (TPU) is an essential algorithm in fringe projection profilometry (FPP), especially when measuring complex objects with discontinuities and isolated surfaces. Among others, the multi-frequency TPU has been proven to be the most reliable algorithm in the presence of noise. For a practical FPP system, in order to achieve an accurate, efficient, and reliable measurement, one needs to make wise choices about three key experimental parameters: the highest fringe frequency, the phase-shifting steps, and the fringe pattern sequence. However, there was very little research on how to optimize these parameters quantitatively, especially considering all three aspects from a theoretical and analytical perspective simultaneously. In this work, we propose a new scheme to determine simultaneously the optimal fringe frequency, phase-shifting steps and pattern sequence under multi-frequency TPU, robustly achieving high accuracy measurement by a minimum number of fringe frames. Firstly, noise models regarding phase-shifting algorithms as well as 3-D coordinates are established under a projector defocusing condition, which leads to the optimal highest fringe frequency for a FPP system. Then, a new concept termed frequency-to-frame ratio (FFR) that evaluates the magnitude of the contribution of each frame for TPU is defined, on which an optimal phase-shifting combination scheme is proposed. Finally, a judgment criterion is established, which can be used to judge whether the ratio between adjacent fringe frequencies is conducive to stably and efficiently unwrapping the phase. The proposed method provides a simple and effective theoretical framework to improve the accuracy, efficiency, and robustness of a practical FPP system in actual measurement conditions. The correctness of the derived models as well as the validity of the proposed schemes have been verified through extensive simulations and experiments. Based on a normal monocular 3-D FPP hardware system
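Multi-frequency TPU, the algorithm this record optimizes, removes the 2π ambiguity of a high-frequency phase by referencing a coarser absolute phase. A minimal two-frequency sketch of that unwrapping step (the generic algorithm only, not the paper's optimal frequency/frame selection):

```python
import math

def wrap(phi):
    """Wrap a phase into [-pi, pi)."""
    return (phi + math.pi) % (2 * math.pi) - math.pi

def temporal_unwrap(phi_low, phi_high_wrapped, ratio):
    """Recover the absolute high-frequency phase from its wrapped value,
    given an absolute low-frequency phase and ratio = f_high / f_low."""
    # Count how many 2*pi periods separate the scaled-up low-frequency
    # phase from the wrapped high-frequency measurement.
    k = round((ratio * phi_low - phi_high_wrapped) / (2 * math.pi))
    return phi_high_wrapped + 2 * math.pi * k

# Example: a true absolute phase of 25 rad at the high frequency, ratio 8.
true_phase = 25.0
recovered = temporal_unwrap(true_phase / 8, wrap(true_phase), 8)
```

Noise in the low-frequency phase is amplified by `ratio` inside the `round`, which is exactly why the record's judgment criterion bounds the admissible ratio between adjacent fringe frequencies.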
Methodological approach to strategic performance optimization
Hell, Marko; Vidačić, Stjepan; Garača, Željko
2009-01-01
This paper presents a matrix approach to the measuring and optimization of organizational strategic performance. The proposed model is based on the matrix presentation of strategic performance, which follows the theoretical notions of the balanced scorecard (BSC) and strategy map methodologies, initially developed by Kaplan and Norton. Development of a quantitative record of strategic objectives provides an arena for the application of linear programming (LP), which is a mathematical tech...
Quasipolynomial Approach to Simultaneous Robust Control of Time-Delay Systems
Directory of Open Access Journals (Sweden)
Nikolaj Semenič
2014-01-01
Full Text Available A control law for retarded time-delay systems is considered, concerning infinite closed-loop spectrum assignment. An algebraic method for spectrum assignment is presented with a unique optimization algorithm for minimization of the spectral abscissa and effective shaping of the chains of infinitely many closed-loop poles. Uncertainty in plant delays of a certain structure is considered in the sense of robust simultaneous stabilization. Robust performance is achieved using a mixed sensitivity design, which is incorporated into the addressed control law.
Robust Trajectory Optimization of a Ski Jumper for Uncertainty Influence and Safety Quantification
Directory of Open Access Journals (Sweden)
Patrick Piprek
2018-02-01
Full Text Available This paper deals with the development of a robust optimal control framework for the authors' previously developed multi-body ski jumper simulation model. This framework is used to model uncertainties acting on the jumper during the jump, e.g., wind or mass, not only to enhance performance but also to increase the fairness and safety of the competition. For the uncertainty modeling, the method of generalized polynomial chaos together with the discrete expansion by stochastic collocation is applied: this methodology offers a very flexible framework for modeling multiple uncertainties using a small number of optimizations to calculate an uncertain trajectory. The results are then compared to those of the Latin hypercube sampling method to show the correctness of the applied methods. Finally, the results are examined with respect to two major metrics: first, the influence of the uncertainties on the jumper, his positioning with respect to the air, and his maximal achievable flight distance are examined. Then, the results are used in a further step to quantify the safety of the jumper.
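Generalized polynomial chaos with stochastic collocation, the uncertainty machinery named above, replaces Monte Carlo sampling with a small number of deterministic evaluations at quadrature nodes. A minimal sketch for a single standard-normal uncertainty; the response function is a stand-in, not the ski jump model.

```python
import numpy as np

def gpc_moments(f, order=8):
    """Mean and variance of f(X), X ~ N(0,1), via Gauss-Hermite
    stochastic collocation (probabilists' convention)."""
    nodes, weights = np.polynomial.hermite_e.hermegauss(order)
    w = weights / weights.sum()        # normalize to probability weights
    vals = f(nodes)                    # one deterministic run per node
    mean = w @ vals
    var = w @ (vals - mean) ** 2
    return mean, var

# Stand-in response with known moments: for f(x) = x^2 under N(0,1),
# the exact mean is 1 and the exact variance is 2.
mean, var = gpc_moments(lambda x: x ** 2)
```

Eight collocation nodes recover both moments to machine precision here, which mirrors the record's point that very few optimizations suffice to characterize an uncertain trajectory compared to Latin hypercube sampling.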
Uncertainty quantification-based robust aerodynamic optimization of laminar flow nacelle
Xiong, Neng; Tao, Yang; Liu, Zhiyong; Lin, Jun
2018-05-01
The aerodynamic performance of a laminar flow nacelle is highly sensitive to uncertain working conditions, especially surface roughness. An efficient robust aerodynamic optimization method based on non-deterministic computational fluid dynamics (CFD) simulation and the Efficient Global Optimization (EGO) algorithm was employed. A non-intrusive polynomial chaos method is used in conjunction with an existing well-verified CFD module to quantify the uncertainty propagation in the flow field. This paper investigates roughness modeling behavior with the γ-Re_θ shear stress transport model, which includes modeling of flow transition and surface roughness effects. The roughness effects are modeled to simulate sand-grain roughness. A Class-Shape Transformation-based parametric description of the nacelle contour as part of an automatic design evaluation process is presented. A Design-of-Experiments (DoE) was performed and a surrogate model was built using the Kriging method. The new nacelle design process demonstrates that significant improvements in both the mean and variance of the efficiency are achieved, and that the proposed method can be applied successfully to laminar flow nacelle design.
International Nuclear Information System (INIS)
Zhang, P; Hu, J; Tyagi, N; Mageras, G; Lee, N; Hunt, M
2014-01-01
Purpose: To develop a robust planning paradigm which incorporates a tumor regression model into the optimization process to ensure tumor coverage in head and neck radiotherapy. Methods: Simulation and weekly MR images were acquired for a group of head and neck patients to characterize tumor regression during radiotherapy. For each patient, the tumor and parotid glands were segmented on the MR images and the weekly changes were formulated with an affine transformation, where morphological shrinkage and positional changes are modeled by a scaling factor and centroid shifts, respectively. The tumor and parotid contours were also transferred to the planning CT via rigid registration. To perform the robust planning, weekly predicted PTV and parotid structures were created by transforming the corresponding simulation structures according to the weekly affine transformation matrix averaged over the remaining patients. Next, robust PTV and parotid structures were generated as the union of the simulation and weekly prediction contours. In the subsequent robust optimization process, attainment of the clinical dose objectives was required for the robust PTV and parotids, as well as other organs at risk (OAR). The resulting robust plans were evaluated by examining the weekly and total accumulated dose to the actual weekly PTV and parotid structures. The robust plan was compared with the original plan based on the planning CT to determine its potential clinical benefit. Results: For four patients, the average weekly change in tumor volume and position was −4% and 1.2 mm laterally-posteriorly. Due to these temporal changes, the robust plans resulted in an accumulated PTV D95 that was, on average, 2.7 Gy higher than the plan created from the planning CT. OAR doses were similar. Conclusion: Integration of a tumor regression model into target delineation and plan robust optimization is feasible and may yield improved tumor coverage. Part of this research is supported
International Nuclear Information System (INIS)
Zhou, Qing; Fang, Gang; Wang, Dong-peng; Yang, Wei
2016-01-01
Abstract: The robust optimization model is applied to analyze an enterprise's decision on its investment portfolio for collaborative innovation under risk constraints. Through mathematical model deduction and simulation analysis, the research shows that the enterprise's investment in collaborative innovation has a relatively obvious robust effect. In collaborative innovation, investment return coexists with investment risk. Under risk constraints, the robust optimization method can determine the minimum risk, as well as the proportion of each investment scheme in the portfolio, for different target returns on investment. On this basis, the enterprise can balance investment return against risk and make an optimal decision on the investment scheme.
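A minimal sketch of such a risk-constrained robust portfolio decision, under assumptions chosen for illustration: returns are only known to lie in a box around nominal estimates, weights are long-only, and the worst-case return therefore reduces to the nominal estimate minus the half-width. All numbers are hypothetical.

```python
import numpy as np
from scipy.optimize import minimize

mu = np.array([0.08, 0.12, 0.10])       # nominal expected returns
delta = np.array([0.01, 0.04, 0.02])    # box uncertainty half-widths
Sigma = np.array([[0.04, 0.01, 0.00],
                  [0.01, 0.09, 0.02],
                  [0.00, 0.02, 0.05]])  # return covariance
target = 0.075                          # required (worst-case) return

def min_risk(ret):
    # Minimize portfolio variance s.t. sum(w) = 1, w >= 0, w . ret >= target
    cons = [{'type': 'eq',   'fun': lambda w: w.sum() - 1.0},
            {'type': 'ineq', 'fun': lambda w, r=ret: w @ r - target}]
    res = minimize(lambda w: w @ Sigma @ w, np.ones(3) / 3,
                   bounds=[(0.0, 1.0)] * 3, constraints=cons, method='SLSQP')
    return res.x

w_nom = min_risk(mu)           # nominal plan
w_rob = min_risk(mu - delta)   # robust plan: guard against worst-case returns
```

The robust plan accepts a somewhat higher variance in exchange for meeting the return target even when every return comes in at the bottom of its interval.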
Demayo, Trevor Nat
optimization. The active control system was demonstrated and evaluated by optimizing the burners under practical conditions. In most cases, the controller was able to locate, within 10--15 min, a global performance peak that simultaneously minimized emissions and maximized system efficiency within specified stability limits. The active controller demonstrated flexibility and robustness by (a) successfully optimizing different burners for different J functions, initial conditions, and sensor combinations, and (b) successfully reoptimizing a burner under the effect of simulated window fouling and following sudden inlet perturbations, including load cycling and a misaligned fuel injector.
A Robust Approach to Risk Assessment Based on Species Sensitivity Distributions.
Monti, Gianna S; Filzmoser, Peter; Deutsch, Roland C
2018-05-03
The guidelines for setting environmental quality standards are increasingly based on probabilistic risk assessment due to a growing general awareness of the need for probabilistic procedures. One of the commonly used tools in probabilistic risk assessment is the species sensitivity distribution (SSD), which represents the proportion of species affected belonging to a biological assemblage as a function of exposure to a specific toxicant. Our focus is on the inverse use of the SSD curve with the aim of estimating the concentration, HCp, of a toxic compound that is hazardous to p% of the biological community under study. Toward this end, we propose the use of robust statistical methods in order to take into account the presence of outliers or apparent skew in the data, which may occur without any ecological basis. A robust approach exploits the full neighborhood of a parametric model, enabling the analyst to account for the typical real-world deviations from ideal models. We examine two classic HCp estimation approaches and consider robust versions of these estimators. In addition, we also use data transformations in conjunction with robust estimation methods in case of heteroscedasticity. Different scenarios using real data sets as well as simulated data are presented in order to illustrate and compare the proposed approaches. These scenarios illustrate that the use of robust estimation methods enhances HCp estimation. © 2018 Society for Risk Analysis.
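The contrast between classical and robust HCp estimation can be illustrated with a toy log-normal SSD, replacing the mean and standard deviation of the log-toxicity values with the median and normal-consistent MAD. The EC50 values below are fabricated for illustration and include one outlying species; this is a sketch of the general idea, not the estimators studied in the paper.

```python
import numpy as np
from scipy.stats import norm

# Hypothetical species-sensitivity data (EC50, mg/L) with one outlier
ec50 = np.array([1.2, 2.5, 3.1, 4.0, 5.6, 7.3, 9.8, 12.4, 15.0, 250.0])
logx = np.log(ec50)
p = 0.05  # estimate HC5: concentration hazardous to 5% of species

def hcp(mu, sigma):
    # Inverse use of a log-normal SSD curve
    return np.exp(mu + norm.ppf(p) * sigma)

hc5_classic = hcp(logx.mean(), logx.std(ddof=1))
mad = 1.4826 * np.median(np.abs(logx - np.median(logx)))  # robust scale estimate
hc5_robust = hcp(np.median(logx), mad)
```

The single outlier inflates the classical standard deviation and drags HC5 downward; the median/MAD fit is barely affected, which is the behavior robust estimators are meant to deliver.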
Energy Technology Data Exchange (ETDEWEB)
Shayeghi, H., E-mail: hshayeghi@gmail.co [Technical Engineering Department, University of Mohaghegh Ardabili, Ardabil (Iran, Islamic Republic of); Shayanfar, H.A. [Center of Excellence for Power System Automation and Operation, Electrical Engineering Department, Iran University of Science and Technology, Tehran (Iran, Islamic Republic of); Jalilzadeh, S.; Safari, A. [Technical Engineering Department, Zanjan University, Zanjan (Iran, Islamic Republic of)
2010-10-15
In this paper, a new approach based on the particle swarm optimization (PSO) technique is proposed to tune the parameters of the thyristor controlled series capacitor (TCSC) power oscillation damping controller. The design problem of the damping controller is converted to an optimization problem with a time-domain-based objective function, which is solved by the PSO technique owing to its strong ability to find near-optimal results. To ensure the robustness of the proposed stabilizers, the design process takes a wide range of operating conditions into account. The performance of the newly designed controller is evaluated in a four-machine power system subjected to different types of disturbances, in comparison with a genetic algorithm based damping controller. The effectiveness of the proposed controller is demonstrated through nonlinear time-domain simulation and studies of several performance indices. Analysis of the results reveals that the PSO-tuned TCSC damping controller using the proposed fitness function has an excellent capability in damping power system inter-area oscillations and greatly enhances the dynamic stability of power systems. Moreover, it is superior to the genetic algorithm based damping controller.
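The PSO tuning loop behind such a design can be sketched compactly. The time-domain objective J(K), which in the paper comes from nonlinear power-system simulation, is replaced here by a cheap multimodal stand-in; gains, bounds, and PSO coefficients are illustrative.

```python
import numpy as np

def J(k):
    # Hypothetical stand-in for the simulation-based damping objective
    return (k[0] - 1.5)**2 + (k[1] - 0.4)**2 + 0.3 * np.sin(5.0 * k[0])**2

rng = np.random.default_rng(1)
n, dim = 30, 2
w, c1, c2 = 0.7, 1.5, 1.5                     # inertia, cognitive, social weights
lo, hi = np.array([0.0, 0.0]), np.array([5.0, 2.0])
x = rng.uniform(lo, hi, (n, dim))             # particle positions
v = np.zeros((n, dim))                        # particle velocities
pbest, pval = x.copy(), np.array([J(p) for p in x])
g = pbest[pval.argmin()]                      # global best position

for _ in range(200):
    r1, r2 = rng.random((n, dim)), rng.random((n, dim))
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
    x = np.clip(x + v, lo, hi)                # keep particles inside bounds
    f = np.array([J(p) for p in x])
    better = f < pval
    pbest[better], pval[better] = x[better], f[better]
    g = pbest[pval.argmin()]
```

In the paper's setting each objective evaluation is a full disturbance simulation, so the swarm size and iteration count trade tuning quality against computational cost.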
Directory of Open Access Journals (Sweden)
Ritwik K Niyogi
experimentally fitted value. Our work provides insights into the simultaneous and rapid modulation of excitatory and inhibitory neuronal gains, which enables flexible, robust, and optimal decision-making.
Directory of Open Access Journals (Sweden)
Fei Song
2014-01-01
Full Text Available This paper proposes a robust fault-tolerant control algorithm for satellite stabilization based on an active disturbance rejection approach with an artificial bee colony algorithm. The actuating mechanism of the attitude control system consists of three working reaction flywheels and one spare reaction flywheel. The speed measurement of the reaction flywheels is adopted for fault detection. If any reaction flywheel fault is detected, the faulty flywheel is isolated and the spare reaction flywheel is activated to counteract the fault effect and ensure that the satellite works safely and reliably. The active disturbance rejection approach is employed to design the controller, which handles input information with a tracking differentiator, estimates system uncertainties with an extended state observer, and generates control variables by state feedback and compensation. The designed active disturbance rejection controller is robust to both internal dynamics and external disturbances. The bandwidth parameter of the extended state observer is optimized by the artificial bee colony algorithm so as to improve the performance of the attitude control system. A series of simulation experiments demonstrates the superior performance of the proposed robust fault-tolerant control algorithm.
Li, Jia; Lam, Edmund Y
2014-04-21
Mask topography effects need to be taken into consideration for a more accurate solution of source mask optimization (SMO) in advanced optical lithography. However, rigorous 3D mask models generally involve intensive computation and conventional SMO fails to manipulate the mask-induced undesired phase errors that degrade the usable depth of focus (uDOF) and process yield. In this work, an optimization approach incorporating pupil wavefront aberrations into SMO procedure is developed as an alternative to maximize the uDOF. We first design the pupil wavefront function by adding primary and secondary spherical aberrations through the coefficients of the Zernike polynomials, and then apply the conjugate gradient method to achieve an optimal source-mask pair under the condition of aberrated pupil. We also use a statistical model to determine the Zernike coefficients for the phase control and adjustment. Rigorous simulations of thick masks show that this approach provides compensation for mask topography effects by improving the pattern fidelity and increasing uDOF.
The role of robust optimization in single-leg airline revenue management
Birbil, S.I.; Frenk, J.B.G.; Gromicho Dos Santos, J.A.; Zhang, S.
2009-01-01
In this paper, we introduce robust versions of the classical static and dynamic single-leg seat allocation models. These robust models take into account the inaccurate estimates of the underlying probability distributions. As observed by simulation experiments, it turns out that for these robust
Energy Technology Data Exchange (ETDEWEB)
El Farouq, Naïma, E-mail: naima.elfarouq@univ-bpclermont.fr [Université Blaise Pascal (Clermont-Ferrand II) (France); Bernhard, Pierre, E-mail: pierre.bernhard@inria.fr [INRIA Sophia Antipolis-Méditerranée (France)
2015-10-15
We prove the missing uniqueness theorem for the viscosity solution of a quasi-variational inequality related to a minimax impulse control problem modeling the option pricing with proportional transactions costs. This result makes our robust control approach of option pricing in the interval market model essentially complete.
A Robust Bayesian Approach for Structural Equation Models with Missing Data
Lee, Sik-Yum; Xia, Ye-Mao
2008-01-01
In this paper, normal/independent distributions, including but not limited to the multivariate t distribution, the multivariate contaminated distribution, and the multivariate slash distribution, are used to develop a robust Bayesian approach for analyzing structural equation models with complete or missing data. In the context of a nonlinear…
International Nuclear Information System (INIS)
El Farouq, Naïma; Bernhard, Pierre
2015-01-01
We prove the missing uniqueness theorem for the viscosity solution of a quasi-variational inequality related to a minimax impulse control problem modeling the option pricing with proportional transactions costs. This result makes our robust control approach of option pricing in the interval market model essentially complete
A robust neural network-based approach for microseismic event detection
Akram, Jubran; Ovcharenko, Oleg; Peter, Daniel
2017-01-01
We present an artificial neural network based approach for robust event detection from low S/N waveforms. We use a feed-forward network with a single hidden layer that is tuned on a training dataset and later applied to the entire example dataset
A robust and hierarchical approach for the automatic co-registration of intensity and visible images
González-Aguilera, Diego; Rodríguez-Gonzálvez, Pablo; Hernández-López, David; Luis Lerma, José
2012-09-01
This paper presents a new robust approach to integrate intensity and visible images which have been acquired with a terrestrial laser scanner and a calibrated digital camera, respectively. In particular, an automatic and hierarchical method for the co-registration of both sensors is developed. The approach integrates several existing solutions to improve the performance of the co-registration between range-based and visible images: the Affine Scale-Invariant Feature Transform (A-SIFT), the epipolar geometry, the collinearity equations, the Groebner basis solution and the RANdom SAmple Consensus (RANSAC), integrating a voting scheme. The approach presented herein improves the existing co-registration approaches in automation, robustness, reliability and accuracy.
International Nuclear Information System (INIS)
Yu, Nan; Kang, Jin-Su; Chang, Chung-Chuan; Lee, Tai-Yong; Lee, Dong-Yup
2016-01-01
This study aims to provide economical and environmentally friendly solutions for a microgrid system with distributed energy resources in the design stage, considering multiple uncertainties during operation and conflicting interests among diverse microgrid stakeholders. For this purpose, we develop a multi-objective optimization model for robust microgrid planning, on the basis of an economic robustness measure, i.e. the worst-case cost among possible scenarios, to reduce the variability among scenario costs caused by uncertainties. The efficacy of the model is successfully demonstrated by applying it to Taichung Industrial Park in Taiwan, an industrial complex where a significant amount of greenhouse gases is emitted. Our findings show that the most robust, but highest-cost, solution mainly includes 45% (26.8 MW) of gas engine and 47% (28 MW) of photovoltaic panel with the highest system capacity (59 MW). Further analyses reveal the environmental benefits from the significant reduction of the expected annual CO2 emission and carbon tax, by about half compared with the current utility facilities in the region. In conclusion, the developed model provides an efficient decision-making tool for robust microgrid planning at the preliminary stage. - Highlights: • Developed robust economic and environmental optimization model for microgrid planning. • Provided Pareto optimal planning solutions for Taichung Industrial Park, Taiwan. • Suggested microgrid configuration with significant economic and environmental benefits. • Identified gas engine and photovoltaic panel as two promising energy sources.
Emergence of robust growth laws from optimal regulation of ribosome synthesis.
Scott, Matthew; Klumpp, Stefan; Mateescu, Eduard M; Hwa, Terence
2014-08-22
Bacteria must constantly adapt their growth to changes in nutrient availability; yet despite large-scale changes in protein expression associated with sensing, adaptation, and processing different environmental nutrients, simple growth laws connect the ribosome abundance and the growth rate. Here, we investigate the origin of these growth laws by analyzing the features of ribosomal regulation that coordinate proteome-wide expression changes with cell growth in a variety of nutrient conditions in the model organism Escherichia coli. We identify supply-driven feedforward activation of ribosomal protein synthesis as the key regulatory motif maximizing amino acid flux, and autonomously guiding a cell to achieve optimal growth in different environments. The growth laws emerge naturally from the robust regulatory strategy underlying growth rate control, irrespective of the details of the molecular implementation. The study highlights the interplay between phenomenological modeling and molecular mechanisms in uncovering fundamental operating constraints, with implications for endogenous and synthetic design of microorganisms. © 2014 The Authors. Published under the terms of the CC BY 4.0 license.
Zhang, Yong; Jiang, Yunjian
2017-02-01
Waste cooking oil (WCO)-for-biodiesel conversion is regarded as a "waste-to-wealth" industry. This paper addresses the design of a WCO-for-biodiesel supply chain at both the strategic and tactical levels. The supply chain of this problem is based on a typical mode of waste collection (from restaurant kitchens) and conversion in cities. The supply chain comprises three stakeholders: WCO supplier, integrated bio-refinery and demand zone. Three key problems should be addressed for the optimal design of the supply chain: (1) the number, sizes and locations of bio-refineries; (2) the sites and amount of WCO collected; (3) the transportation plans for WCO and biodiesel. A robust mixed integer linear model with multiple objectives (economic, environmental and social) is proposed for these problems. Finally, a large-scale practical case study based on Suzhou, a city in the east of China, is adopted to verify the proposed models. Copyright © 2016 Elsevier Ltd. All rights reserved.
Directory of Open Access Journals (Sweden)
Abednico Montshiwa
2016-02-01
Full Text Available This paper presents an optimized diamond-structured automobile supply chain network towards a robust Business Continuity Management model. The model is necessitated by the nature of the automobile supply chain: companies in tier two are centralized and numerically limited and have to supply multiple tier one companies with goods and services. The challenge with this supply chain structure is the inherent risk in the supply chain. Once a supply chain disruption takes place at the tier 2 level, the whole supply chain network suffers huge losses. To address this challenge, the paper replaces Risk Analysis with Risk Ranking and introduces Supply Chain Cooperation (SCC) into the traditional Business Continuity Plan (BCP) concept. The paper employed three statistical analysis techniques (correlation analysis, regression analysis and Smart PLS 3.0 calculations). In this study, correlation and regression analysis results on risk rankings, SCC and Business Impact Analysis were significant, ascertaining the value of the model. The multivariate data analysis calculations demonstrated that SCC has a positive total significant effect on risk rankings and BCM, while BIA has the strongest positive effects on all BCP factors. Finally, sensitivity analysis demonstrated that company size plays a role in BCM.
Robust Design of Terminal ILC with H∞ Mixed Sensitivity Approach for a Thermoforming Oven
Directory of Open Access Journals (Sweden)
Guy Gauthier
2008-01-01
Full Text Available This paper presents a robust design approach for terminal iterative learning control (TILC). This robust design uses the H∞ mixed-sensitivity technique. An industrial application is described where TILC is used to control the reheat phase of plastic sheets in a thermoforming oven. The TILC adjusts the heater temperature setpoints such that, at the end of the reheat cycle, the surface temperature map of the plastic sheet will converge to the desired one. Simulation results are included to show the effectiveness of the control law.
International Nuclear Information System (INIS)
Ngamroo, Issarachai
2011-01-01
Even though the superconducting magnetic energy storage (SMES) is a smart stabilizing device in electric power systems, its installation cost is very high. In particular, the superconducting magnetic coil, which is the critical part of the SMES, must be well sized. Moreover, various system operating conditions result in system uncertainties. A SMES power controller designed without taking such uncertainties into account may fail to stabilize the system. By considering both coil size and system uncertainties, this paper addresses the optimization of a robust SMES controller. With no need for exact mathematical equations, the normalized coprime factorization is applied to model system uncertainties. Based on the normalized integral square error index of the inter-area rotor angle difference and a specified structured H∞ loop-shaping optimization, the robust SMES controller with the smallest coil size can be achieved by a genetic algorithm. The robustness of the proposed SMES with the smallest coil size is confirmed by simulation study.
Our objective was to determine an optimal experimental design for a mixture of perfluoroalkyl acids (PFAAs) that is robust to the assumption of additivity. Of particular focus to this research project is whether an environmentally relevant mixture of four PFAAs with long half-liv...
Our objective is to determine an optimal experimental design for a mixture of perfluoroalkyl acids (PFAAs) that is robust to the assumption of additivity. PFAAs are widely used in consumer products and industrial applications. The presence and persistence of PFAAs, especially in ...
On robust multi-period pre-commitment and time-consistent mean-variance portfolio optimization
F. Cong (Fei); C.W. Oosterlee (Kees)
2017-01-01
We consider robust pre-commitment and time-consistent mean-variance optimal asset allocation strategies that are required to perform well also in a worst-case scenario regarding the development of the asset price. We show that worst-case scenarios for both strategies can be found by
Narasimhan, Seetharam; Chiel, Hillel J; Bhunia, Swarup
2009-01-01
For implantable neural interface applications, it is important to compress data and analyze spike patterns across multiple channels in real time. Such a computational task for online neural data processing requires an innovative circuit-architecture level design approach for low-power, robust and area-efficient hardware implementation. Conventional microprocessor or Digital Signal Processing (DSP) chips would dissipate too much power and are too large in size for an implantable system. In this paper, we propose a novel hardware design approach, referred to as "Preferential Design" that exploits the nature of the neural signal processing algorithm to achieve a low-voltage, robust and area-efficient implementation using nanoscale process technology. The basic idea is to isolate the critical components with respect to system performance and design them more conservatively compared to the noncritical ones. This allows aggressive voltage scaling for low power operation while ensuring robustness and area efficiency. We have applied the proposed approach to a neural signal processing algorithm using the Discrete Wavelet Transform (DWT) and observed significant improvement in power and robustness over conventional design.
Using Multi-Objective Optimization to Explore Robust Policies in the Colorado River Basin
Alexander, E.; Kasprzyk, J. R.; Zagona, E. A.; Prairie, J. R.; Jerla, C.; Butler, A.
2017-12-01
The long term reliability of water deliveries in the Colorado River Basin has degraded due to the imbalance of growing demand and dwindling supply. The Colorado River meanders 1,450 miles across a watershed that covers seven US states and Mexico and is an important cultural, economic, and natural resource for nearly 40 million people. Its complex operating policy is based on the "Law of the River," which has evolved since the Colorado River Compact in 1922. Recent (2007) refinements to address shortage reductions and coordinated operations of Lakes Powell and Mead were negotiated with stakeholders in which thousands of scenarios were explored to identify operating guidelines that could ultimately be agreed on. This study explores a different approach to searching for robust operating policies to inform the policy making process. The Colorado River Simulation System (CRSS), a long-term water management simulation model implemented in RiverWare, is combined with the Borg multi-objective evolutionary algorithm (MOEA) to solve an eight objective problem formulation. Basin-wide performance metrics are closely tied to system health through incorporating critical reservoir pool elevations, duration, frequency and quantity of shortage reductions in the objective set. For example, an objective to minimize the frequency that Lake Powell falls below the minimum power pool elevation of 3,490 feet for Glen Canyon Dam protects a vital economic and renewable energy source for the southwestern US. The decision variables correspond to operating tiers in Lakes Powell and Mead that drive the implementation of various shortage and release policies, thus affecting system performance. The result will be a set of non-dominated solutions that can be compared with respect to their trade-offs based on the various objectives. These could inform policy making processes by eliminating dominated solutions and revealing robust solutions that could remain hidden under conventional analysis.
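The core operation behind comparing such trade-offs, filtering a solution set down to its non-dominated members, can be sketched directly. The policy metrics below are hypothetical stand-ins for the basin performance objectives, with both to be minimized.

```python
import numpy as np

def nondominated(F):
    # F: (n_solutions, n_objectives), all objectives to be minimized.
    # Returns a boolean mask selecting the Pareto non-dominated rows.
    keep = np.ones(F.shape[0], dtype=bool)
    for i in range(F.shape[0]):
        # Row i is dominated if some row is no worse everywhere, better somewhere
        dominated_by = np.all(F <= F[i], axis=1) & np.any(F < F[i], axis=1)
        if dominated_by.any():
            keep[i] = False
    return keep

# Hypothetical policies: (shortage frequency, months below min power pool)
F = np.array([[0.10, 14], [0.20, 8], [0.15, 20], [0.05, 30], [0.20, 9]])
mask = nondominated(F)  # keeps rows 0, 1, 3
```

An MOEA such as Borg performs this filtering implicitly while searching, so the final archive contains only solutions whose trade-offs are worth presenting to stakeholders.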
Miura, Hideharu; Ozawa, Shuichi; Nagata, Yasushi
2017-09-01
This study investigated position dependence in planning target volume (PTV)-based and robust optimization plans using full-arc and partial-arc volumetric modulated arc therapy (VMAT). The gantry angles at the periphery, intermediate, and center CTV positions were 181°-180° (full-arc VMAT) and 181°-360° (partial-arc VMAT). A PTV-based optimization plan was defined by a 5 mm margin expansion of the CTV to a PTV volume, on which the dose constraints were applied. The robust optimization plan consisted of a directly optimized dose to the CTV under a maximum setup uncertainty of 5 mm. The prescription dose was normalized to the CTV D99% (the minimum relative dose that covers 99% of the volume of the CTV) as an original plan. The isocenter was rigidly shifted at 1 mm intervals in the anterior-posterior (A-P), superior-inferior (S-I), and right-left (R-L) directions from the original position up to the maximum setup uncertainty of 5 mm in the original plan, yielding recalculated dose distributions. It was found that for the intermediate and center positions, the uncertainties in the D99% doses to the CTV for all directions did not significantly differ when comparing the PTV-based and robust optimization plans (P > 0.05). For the periphery position, uncertainties in the D99% doses to the CTV in the R-L direction for the robust optimization plan were found to be lower than those in the PTV-based optimization plan (P < 0.05). The plan's efficacy using partial-arc VMAT depends on the periphery CTV position. © 2017 The Authors. Journal of Applied Clinical Medical Physics published by Wiley Periodicals, Inc. on behalf of American Association of Physicists in Medicine.
Czech Academy of Sciences Publication Activity Database
Branda, Martin; Bucher, M.; Červinka, Michal; Schwartz, A.
2018-01-01
Vol. 70, No. 2 (2018), pp. 503-530 ISSN 0926-6003 R&D Projects: GA ČR GA15-00735S Institutional support: RVO:67985556 Keywords: Cardinality constraints * Regularization method * Scholtes regularization * Strong stationarity * Sparse portfolio optimization * Robust portfolio optimization Subject RIV: BB - Applied Statistics, Operational Research OECD field: Statistics and probability Impact factor: 1.520, year: 2016 http://library.utia.cas.cz/separaty/2018/MTR/branda-0489264.pdf
Emergence of robust solutions to 0-1 optimization problems in multi-agent systems
DEFF Research Database (Denmark)
constructive application in engineering. The approach is demonstrated by giving two examples: First, time-dependent robot-target assignment problems with several autonomous robots and several targets are considered as a model of flexible manufacturing systems. Each manufacturing target has to be served … of autonomous space robots building a space station by a distributed transportation of several parts from a space shuttle to defined positions at the space station. Second, the suggested approach is used for the design and selection of traffic networks. The topology of the network is optimized with respect … to an additive quantity like the length of route segments and an upper bound for the number of route segments. For this, the dynamics of the selection processes of the previous example is extended such that for each vertex several choices for the edges can be made simultaneously up to an individually given upper
A nonlinear optimal control approach for chaotic finance dynamics
Rigatos, G.; Siano, P.; Loia, V.; Tommasetti, A.; Troisi, O.
2017-11-01
A new nonlinear optimal control approach is proposed for stabilization of the dynamics of a chaotic finance model. The dynamic model of the financial system, which expresses the interaction between the interest rate, the investment demand, the price exponent and the profit margin, undergoes approximate linearization around local operating points. These local equilibria are defined at each iteration of the control algorithm and consist of the present value of the system's state vector and the last value of the control inputs vector that was exerted on it. The approximate linearization makes use of Taylor series expansion and of the computation of the associated Jacobian matrices. The truncation of higher order terms in the Taylor series expansion is considered to be a modelling error that is compensated by the robustness of the control loop. As the control algorithm runs, the temporary equilibrium is shifted towards the reference trajectory and finally converges to it. The control method needs to compute an H-infinity feedback control law at each iteration, and requires the repetitive solution of an algebraic Riccati equation. Through Lyapunov stability analysis it is shown that an H-infinity tracking performance criterion holds for the control loop. This implies elevated robustness against model approximations and external perturbations. Moreover, under moderate conditions the global asymptotic stability of the control loop is proven.
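The algebraic-Riccati step that the method repeats at each iteration can be sketched for a single linearization with SciPy's continuous-time ARE solver. The 2-state A, B below are illustrative placeholders, not the finance model's actual Jacobians.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

A = np.array([[0.0, 1.0],
              [-2.0, -0.5]])            # hypothetical local linearization
B = np.array([[0.0], [1.0]])
Q, R = np.eye(2), np.array([[1.0]])     # state and control weighting matrices

P = solve_continuous_are(A, B, Q, R)    # solves A'P + PA - PBR^{-1}B'P + Q = 0
K = np.linalg.solve(R, B.T @ P)         # state-feedback gain: u = -K x
closed = A - B @ K                      # resulting closed-loop dynamics matrix
```

In the paper's scheme this solve is repeated as the operating point, and hence (A, B), is updated at every control iteration.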
Directory of Open Access Journals (Sweden)
Yi Long
2016-01-01
Full Text Available A lower limb assistive exoskeleton is designed to help operators walk or carry payloads. The exoskeleton is required to shadow human motion intent accurately and compliantly to prevent incoordination. If the user’s intention is estimated accurately, a precise position control strategy will improve collaboration between the user and the exoskeleton. In this paper, a hybrid position control scheme, combining sliding mode control (SMC) with a cerebellar model articulation controller (CMAC) neural network, is proposed to control the exoskeleton to react appropriately to human motion intent. A genetic algorithm (GA) is utilized to determine the optimal sliding surface and the sliding control law to improve performance of SMC. The proposed control strategy (SMC_GA_CMAC) is compared with three other types of approaches, that is, conventional SMC without optimization, optimal SMC with GA (SMC_GA), and SMC with CMAC compensation (SMC_CMAC), all of which are employed to track the desired joint angular position which is deduced from Clinical Gait Analysis (CGA) data. Position tracking performance is investigated with cosimulation using ADAMS and MATLAB/SIMULINK in two cases, of which the first case is without disturbances while the second case is with a bounded disturbance. The cosimulation results show the effectiveness of the proposed control strategy which can be employed in similar exoskeleton systems.
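The sliding-mode component of such a hybrid scheme can be sketched on a double-integrator joint model (omitting the CMAC compensation and GA tuning). The surface slope, switching gain, boundary layer, reference, and disturbance below are illustrative choices, not the paper's values.

```python
import numpy as np

dt, lam, K, phi = 0.001, 8.0, 15.0, 0.05   # step, surface slope, gain, boundary layer
t = np.arange(0.0, 3.0, dt)
ref = 0.5 * np.sin(np.pi * t)              # desired joint angle (rad)
refd = 0.5 * np.pi * np.cos(np.pi * t)     # desired joint velocity
x = np.zeros(2)                            # plant state: [angle, velocity]
err = []
for k in range(len(t)):
    e, ed = x[0] - ref[k], x[1] - refd[k]
    s = ed + lam * e                       # sliding surface s = e_dot + lam * e
    u = -K * np.tanh(s / phi)              # smoothed switching law (limits chattering)
    d = 0.5 * np.sin(3.0 * t[k])           # unknown bounded disturbance
    x = x + dt * np.array([x[1], u + d])   # double-integrator plant, forward Euler
    err.append(e)
steady_err = float(np.max(np.abs(err[-500:])))
```

Once the state reaches the surface, the tracking error decays with time constant 1/lam regardless of the matched disturbance, which is the robustness property the abstract relies on.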
Long, Yi; Du, Zhi-jiang; Wang, Wei-dong; Dong, Wei
2016-01-01
A lower limb assistive exoskeleton is designed to help operators walk or carry payloads. The exoskeleton is required to shadow human motion intent accurately and compliantly to prevent incoordination. If the user's intention is estimated accurately, a precise position control strategy will improve collaboration between the user and the exoskeleton. In this paper, a hybrid position control scheme, combining sliding mode control (SMC) with a cerebellar model articulation controller (CMAC) neural network, is proposed to control the exoskeleton to react appropriately to human motion intent. A genetic algorithm (GA) is utilized to determine the optimal sliding surface and the sliding control law to improve performance of SMC. The proposed control strategy (SMC_GA_CMAC) is compared with three other types of approaches, that is, conventional SMC without optimization, optimal SMC with GA (SMC_GA), and SMC with CMAC compensation (SMC_CMAC), all of which are employed to track the desired joint angular position which is deduced from Clinical Gait Analysis (CGA) data. Position tracking performance is investigated with cosimulation using ADAMS and MATLAB/SIMULINK in two cases, of which the first case is without disturbances while the second case is with a bounded disturbance. The cosimulation results show the effectiveness of the proposed control strategy which can be employed in similar exoskeleton systems. PMID:27069353
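Both records above describe the same sliding-mode core. A minimal sketch of plain SMC (the baseline the paper compares against, without the GA tuning or CMAC compensation) on a generic second-order plant with a bounded disturbance:

```python
import numpy as np

# Generic SMC illustration: track a sinusoidal reference with a
# double-integrator plant under a bounded disturbance. This is not the
# exoskeleton dynamics; gains and references are assumed for illustration.
dt, T = 1e-3, 10.0
lam, k = 5.0, 1.0          # surface slope and switching gain (k > |d|_max)
x, xdot = 0.5, 0.0         # initial state, offset from the reference

for i in range(int(T / dt)):
    t = i * dt
    xd, xd_dot, xd_ddot = np.sin(t), np.cos(t), -np.sin(t)   # reference
    e, edot = x - xd, xdot - xd_dot
    s = edot + lam * e                          # sliding surface s = de + lam*e
    u = xd_ddot - lam * edot - k * np.sign(s)   # equivalent + switching term
    d = 0.5 * np.sin(3 * t)                     # bounded disturbance, |d| <= 0.5 < k
    xddot = u + d                               # plant: x'' = u + d
    x, xdot = x + dt * xdot, xdot + dt * xddot

print(abs(x - np.sin(T)))   # tracking error after convergence
```

With the switching gain larger than the disturbance bound, the state reaches the surface in finite time and the error then decays at the rate set by the surface slope, which is exactly the pair of design parameters the paper tunes with a GA.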
A robust fuzzy possibilistic AHP approach for partner selection in international strategic alliance
Directory of Open Access Journals (Sweden)
Vahid Reza Salamat
2018-09-01
Full Text Available The international strategic alliance is an inevitable solution for creating competitive advantage and reducing risk in today’s business environment. Partner selection is an important part of the success of partnerships, and meanwhile it is a complicated decision because of the various dimensions of the problem and the inherent conflicts of stockholders. The purpose of this paper is to provide a practical approach to the problem of partner selection in international strategic alliances, which fills the gap between theories of inter-organizational relationships and quantitative models. Thus, a novel Robust Fuzzy Possibilistic AHP approach is proposed for combining the benefits of two complementary theories of inter-organizational relationships, namely (1) the resource-based view and (2) transaction-cost theory, and considering fit theory as the prerequisite of alliance success. The Robust Fuzzy Possibilistic AHP approach is a novel development of the Interval-AHP technique employing robust formulation, aimed at handling the ambiguity of the problem and allowing the use of intervals as pairwise judgments. The proposed approach was compared with existing approaches, and the results show that it provides the best quality solutions in terms of minimum error degree. Moreover, the framework was implemented in a case study and its applicability discussed.
Detection of heart beats in multimodal data: a robust beat-to-beat interval estimation approach.
Antink, Christoph Hoog; Brüser, Christoph; Leonhardt, Steffen
2015-08-01
The heart rate and its variability play a vital role in the continuous monitoring of patients, especially in the critical care unit. They are commonly derived automatically from the electrocardiogram as the interval between consecutive heart beats. While their identification by QRS-complexes is straightforward under ideal conditions, the exact localization can be a challenging task if the signal is severely contaminated with noise and artifacts. At the same time, other signals directly related to cardiac activity are often available. In this multi-sensor scenario, methods of multimodal sensor-fusion allow the exploitation of redundancies to increase the accuracy and robustness of beat detection. In this paper, an algorithm for the robust detection of heart beats in multimodal data is presented. Classic peak-detection is augmented by robust multi-channel, multimodal interval estimation to eliminate false detections and insert missing beats. This approach yielded a score of 90.70 and was thus ranked third place in the PhysioNet/Computing in Cardiology Challenge 2014: Robust Detection of Heart Beats in Multimodal Data follow-up analysis. In the future, the robust beat-to-beat interval estimator may directly be used for the automated processing of multimodal patient data for applications such as diagnosis support and intelligent alarming.
A constrained robust least squares approach for contaminant release history identification
Sun, Alexander Y.; Painter, Scott L.; Wittmeyer, Gordon W.
2006-04-01
Contaminant source identification is an important type of inverse problem in groundwater modeling and is subject to both data and model uncertainty. Model uncertainty was rarely considered in previous studies. In this work, a robust framework for solving contaminant source recovery problems is introduced. The contaminant source identification problem is first cast into one of solving uncertain linear equations, where the response matrix is constructed using a superposition technique. The formulation presented here is general and is applicable to any porous media flow and transport solver. The robust least squares (RLS) estimator, which originated in the field of robust identification, directly accounts for errors arising from model uncertainty and has been shown to significantly reduce the sensitivity of the optimal solution to perturbations in model and data. In this work, a new variant of RLS, the constrained robust least squares (CRLS), is formulated for solving uncertain linear equations. CRLS allows additional constraints, such as nonnegativity, to be imposed. The performance of CRLS is demonstrated through one- and two-dimensional test problems. When the system is ill-conditioned and uncertain, CRLS is found to give much better performance than its classical counterpart, the nonnegative least squares. The source identification framework developed in this work thus constitutes a reliable tool for recovering source release histories in real applications.
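A small sketch of the idea: for a norm-bounded matrix perturbation, the worst-case residual has the closed form ‖Ax − b‖ + ρ‖x‖ (the classical RLS result of El Ghaoui and Lebret), and the constrained variant minimizes it subject to nonnegativity. The data and the bound ρ below are synthetic assumptions, and the paper's exact CRLS formulation may differ:

```python
import numpy as np
from scipy.optimize import minimize, nnls

rng = np.random.default_rng(0)

# Ill-conditioned "response matrix" and a nonnegative source history (synthetic)
n = 8
U, _ = np.linalg.qr(rng.normal(size=(n, n)))
A = U @ np.diag(np.logspace(0, -6, n)) @ U.T      # condition number ~1e6
x_true = np.abs(rng.normal(size=n))
b = A @ x_true + 1e-4 * rng.normal(size=n)        # noisy observations

rho = 1e-3   # assumed bound on the matrix perturbation norm

def worst_case(x):
    # max over ||dA|| <= rho of ||(A + dA) x - b||  (closed-form RLS objective)
    return np.linalg.norm(A @ x - b) + rho * np.linalg.norm(x)

# CRLS-style solve: minimize the worst-case residual subject to x >= 0
res = minimize(worst_case, x0=np.ones(n), bounds=[(0, None)] * n)
x_crls = res.x

# Classical nonnegative least squares for comparison
x_nnls, _ = nnls(A, b)

print(np.all(x_crls >= 0))
print(worst_case(x_crls) <= worst_case(x_nnls))
```

Because both solvers face the same nonnegativity constraint, the robust solution can never have a larger worst-case residual than the plain nonnegative least-squares solution.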
Liu, Chenbin; Schild, Steven E; Chang, Joe Y; Liao, Zhongxing; Korte, Shawn; Shen, Jiajian; Ding, Xiaoning; Hu, Yanle; Kang, Yixiu; Keole, Sameer R; Sio, Terence T; Wong, William W; Sahoo, Narayan; Bues, Martin; Liu, Wei
2018-06-01
To investigate how spot size and spacing affect plan quality, robustness, and interplay effects of robustly optimized intensity modulated proton therapy (IMPT) for lung cancer. Two robustly optimized IMPT plans were created for 10 lung cancer patients: first by a large-spot machine with in-air energy-dependent large spot size at isocenter (σ: 6-15 mm) and spacing (1.3 σ), and second by a small-spot machine with in-air energy-dependent small spot size (σ: 2-6 mm) and spacing (5 mm). Both plans were generated by optimizing radiation dose to the internal target volume on averaged 4-dimensional computed tomography scans using an in-house-developed IMPT planning system. The dose-volume histogram band method was used to evaluate plan robustness. Dose evaluation software was developed to model time-dependent spot delivery to incorporate interplay effects with randomized starting phases for each field per fraction. Patient anatomy voxels were mapped phase-to-phase via deformable image registration, and doses were scored using in-house-developed software. Dose-volume histogram indices, including internal target volume dose coverage, homogeneity, and organs at risk (OARs) sparing, were compared using the Wilcoxon signed-rank test. Compared with the large-spot machine, the small-spot machine resulted in significantly lower heart and esophagus mean doses, with comparable target dose coverage, homogeneity, and protection of other OARs. Plan robustness was comparable for targets and most OARs. With interplay effects considered, significantly lower heart and esophagus mean doses with comparable target dose coverage and homogeneity were observed using smaller spots. Robust optimization with a small-spot machine significantly improves heart and esophagus sparing, with comparable plan robustness and interplay effects compared with robust optimization with a large-spot machine. A small-spot machine uses a larger number of spots to cover the same tumors compared with a large-spot machine.
International Nuclear Information System (INIS)
Anetai, Y; Mizuno, H; Sumida, I; Ogawa, K; Takegawa, H; Inoue, T; Koizumi, M; Veld, A van’t; Korevaar, E
2015-01-01
Purpose: To determine which proton planning technique on average-CT is more vulnerable to respiratory motion induced density changes and the interplay effect among (a) IMPT with CTV-based minimax robust optimization considering a 5 mm set-up error, and (b, c) IMPT/SFUD with 5 mm-expanded PTV optimization. Methods: The three planning techniques were optimized in RayStation with a prescription of 60/25 (Gy/fractions) and almost the same OAR constraints/objectives for each of 10 NSCLC patients. 4D dose without/with interplay effect was recalculated on eight 4D-CT phases and accumulated after deforming the dose of each phase to a reference (exhalation) phase. The change of D98% of each CTV caused by density changes and interplay was determined. In addition, evaluation of the DVH information vector (D99%, D98%, D95%, Dave, D50%, D2%, D1%), which compares the whole DVH by η score = (cosine similarity × Pearson correlation coefficient − 0.9) × 1000, quantified the degree of DVH change: a score below 100 indicates a changed DVH. Results: Three 3D plans of each technique satisfied our clinical goals. The D98% shift mean±SD (Gy) due to density changes was largest in (c): −0.78±1.1, while (a): −0.11±0.65 and (b): −0.59±0.93. The shift due to the interplay effect was likewise largest in (c): −0.54±0.70, whereas (a): −0.25±0.93 and (b): −0.12±0.13. Moreover, the lowest η score caused by density changes was also found for (c): 69, while (a) and (b) stayed around 90. The η score also indicated less effect of interplay than of density changes. Note that the changed DVHs were generally still clinically acceptable. Paired t-tests showed a significantly smaller density change effect in (a) (p<0.05) than in (b) or (c), and no significant difference in interplay effect. Conclusion: CTV-based robust optimized IMPT was more robust against respiratory motion induced density changes than PTV-based IMPT and SFUD. The interplay effect was smaller than the effect of density changes and similar among the three techniques. The JSPS Core
A perturbed martingale approach to global optimization
Energy Technology Data Exchange (ETDEWEB)
Sarkar, Saikat [Computational Mechanics Lab, Department of Civil Engineering, Indian Institute of Science, Bangalore 560012 (India); Roy, Debasish, E-mail: royd@civil.iisc.ernet.in [Computational Mechanics Lab, Department of Civil Engineering, Indian Institute of Science, Bangalore 560012 (India); Vasu, Ram Mohan [Department of Instrumentation and Applied Physics, Indian Institute of Science, Bangalore 560012 (India)
2014-08-01
A new global stochastic search, guided mainly through derivative-free directional information computable from the sample statistical moments of the design variables within a Monte Carlo setup, is proposed. The search is aided by imparting to the directional update term additional layers of random perturbations referred to as ‘coalescence’ and ‘scrambling’. A selection step, constituting yet another avenue for random perturbation, completes the global search. The direction-driven nature of the search is manifest in the local extremization and coalescence components, which are posed as martingale problems that yield gain-like update terms upon discretization. As anticipated and numerically demonstrated, to a limited extent, against the problem of parameter recovery given the chaotic response histories of a couple of nonlinear oscillators, the proposed method appears to offer a more rational, more accurate and faster alternative to most available evolutionary schemes, prominently the particle swarm optimization. - Highlights: • Evolutionary global optimization is posed as a perturbed martingale problem. • Resulting search via additive updates is a generalization over Gateaux derivatives. • Additional layers of random perturbation help avoid trapping at local extrema. • The approach ensures efficient design space exploration and high accuracy. • The method is numerically assessed via parameter recovery of chaotic oscillators.
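A generic population search in this spirit (a directional pull toward the current best candidate, layered decaying random perturbations, and a greedy selection step) can be sketched as follows. This is an illustrative stand-in, not the authors' martingale formulation:

```python
import numpy as np

# Derivative-free population search: directional update toward the best member
# plus two layers of decaying random perturbation and a greedy selection step.
rng = np.random.default_rng(1)

def f(x):
    # Objective: shifted sphere (illustrative choice, not from the paper)
    return np.sum((x - np.array([2.0, -1.0])) ** 2, axis=-1)

pop = rng.uniform(-5, 5, size=(40, 2))          # 40 candidates in 2-D
for it in range(200):
    best = pop[np.argmin(f(pop))]
    step = 0.9 ** it                             # decaying perturbation scale
    # directional pull + two layers of random perturbation
    pop = (pop + 0.3 * (best - pop)
           + step * rng.normal(size=pop.shape)
           + 0.5 * step * rng.normal(size=pop.shape))
    # selection: keep a mutated copy only if it improves the objective
    trial = pop + step * rng.normal(size=pop.shape)
    improve = f(trial) < f(pop)
    pop[improve] = trial[improve]

print(f(pop).min())   # should be close to 0
```

The directional term plays the role of the gain-like update the abstract derives from the martingale problem, while the extra noise layers mimic the "coalescence" and "scrambling" perturbations that help avoid trapping at local extrema.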
Outage optimization - the US experience and approach
International Nuclear Information System (INIS)
LaPlatney, J.
2007-01-01
Sustainable development of nuclear energy depends heavily on excellent performance of the existing fleet, which in turn depends heavily on the performance of planned outages. Some reactor fleets, for example in Finland and Germany, have demonstrated sustained good outage performance from the start of commercial operation. Others, such as the US, have improved performance over time. The principles behind a successful outage optimization process are: -) duration is not the sole measure of outage success, -) outage work must be performed safely, -) scope selection must focus on improving plant material condition to improve reliability, -) all approved outage work must be completed, -) work must be done cost effectively, -) post-outage plant reliability is a key measure of outage success, and -) outage lessons learned must be effectively implemented to achieve continuous improvement. This approach has proven its superiority over simple outage shortening, and has yielded good results in the US fleet over the past 15 years.
The optimal shape of elastomer mushroom-like fibers for high and robust adhesion
Directory of Open Access Journals (Sweden)
Burak Aksak
2014-05-01
Full Text Available Over the last decade, significant effort has been put into mimicking the ability of the gecko lizard to strongly and reversibly cling to surfaces, by using synthetic structures. Among these structures, mushroom-like elastomer fiber arrays have demonstrated promising performance on smooth surfaces matching the adhesive strengths obtained with the natural gecko foot-pads. It is possible to improve the already impressive adhesive performance of mushroom-like fibers provided that the underlying adhesion mechanism is understood. Here, the adhesion mechanism of bio-inspired mushroom-like fibers is investigated by implementing the Dugdale–Barenblatt cohesive zone model into finite elements simulations. It is found that the magnitude of pull-off stress depends on the edge angle θ and the ratio of the tip radius to the stalk radius β of the mushroom-like fiber. Pull-off stress is also found to depend on a dimensionless parameter χ, the ratio of the fiber radius to a length-scale related to the dominance of adhesive stress. As an estimate, the optimal parameters are found to be β = 1.1 and θ = 45°. Further, the location of crack initiation is found to depend on χ for given β and θ. An analytical model for pull-off stress, which depends on the location of crack initiation as well as on θ and β, is proposed and found to agree with the simulation results. Results obtained in this work provide a geometrical guideline for designing robust bio-inspired dry fibrillar adhesives.
Directory of Open Access Journals (Sweden)
Emran Tohidi
2013-01-01
Full Text Available The idea of approximation by monomials together with the collocation technique over a uniform mesh for solving state-space analysis and optimal control problems (OCPs) has been proposed in this paper. After imposing Pontryagin's maximum principle on the main OCPs, the problems reduce to a linear or nonlinear boundary value problem. In the linear case we propose a monomial collocation matrix approach, while in the nonlinear case, the general collocation method has been applied. We also show the efficiency of the operational matrices of differentiation with respect to the operational matrices of integration in our numerical examples. These matrices of integration are related to the Bessel, Walsh, Triangular, Laguerre, and Hermite functions.
Directory of Open Access Journals (Sweden)
Adacher Ludovica
2017-12-01
Full Text Available In this paper we extend a stochastic discrete optimization algorithm so as to tackle the signal setting problem. Signalized junctions represent critical points of an urban transportation network, and the efficiency of their traffic signal setting influences the overall network performance. Since road congestion usually takes place at or close to junction areas, an improvement in signal settings contributes to improving travel times, drivers’ comfort, fuel consumption efficiency, pollution and safety. In a traffic network, the signal control strategy affects the travel time on the roads and influences drivers’ route choice behavior. The paper presents an algorithm for signal setting optimization of signalized junctions in a congested road network. The objective function used in this work is a weighted sum of delays caused by the signalized intersections. We propose an iterative procedure to solve the problem by alternately updating signal settings based on fixed flows and traffic assignment based on fixed signal settings. To show the robustness of our method, we consider two different assignment methods: one based on user equilibrium assignment, well established in the literature as well as in practice, and the other based on a platoon simulation model with vehicular flow propagation and spill-back. Our optimization algorithm is also compared with others well known in the literature for this problem. The surrogate method (SM), particle swarm optimization (PSO) and the genetic algorithm (GA) are compared for a combined problem of global optimization of signal settings and traffic assignment (GOSSTA). Numerical experiments on a real test network are reported.
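The weighted-delay objective can be illustrated on a single junction. Here delay is modeled by a simple stationary queueing term d_i = q_i/(c_i − q_i) with capacity c_i = s_i·g_i/C; all numbers and the delay model are assumptions for illustration, not the paper's simulation-based evaluation:

```python
import numpy as np

# One "signal setting" step for fixed flows: search the green split of a
# two-approach junction to minimize a weighted sum of approach delays.
C, L = 90.0, 10.0                 # cycle length and lost time (s)
s = np.array([1800.0, 1800.0])    # saturation flows (veh/h)
q = np.array([800.0, 400.0])      # demand flows (veh/h): approach 1 is heavier
w = np.array([1.0, 1.0])          # delay weights of the two approaches

def total_delay(g1):
    g = np.array([g1, C - L - g1])           # split the effective green time
    cap = s * g / C                           # approach capacities
    if np.any(cap <= q):                      # oversaturated split: infeasible
        return np.inf
    return float(np.sum(w * q / (cap - q)))   # weighted sum of delays

splits = np.linspace(1.0, C - L - 1.0, 400)
g1_best = splits[np.argmin([total_delay(g) for g in splits])]
print(g1_best > (C - L) / 2)   # the heavier approach gets more green
```

In the full iterative procedure this split optimization would alternate with a traffic assignment step that recomputes the flows q for the new settings.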
Directory of Open Access Journals (Sweden)
Bangyan Zhu
2016-07-01
Full Text Available Spatial and temporal variations in the vertical stratification of the troposphere introduce significant propagation delays in interferometric synthetic aperture radar (InSAR) observations. Observations of small amplitude surface deformations and regional subsidence rates are plagued by tropospheric delays, which are strongly correlated with topographic height variations. Phase-based tropospheric correction techniques assuming a linear relationship between interferometric phase and topography have been exploited and developed, with mixed success. Producing robust estimates of tropospheric phase delay however plays a critical role in increasing the accuracy of InSAR measurements. Meanwhile, few phase-based correction methods account for the spatially variable tropospheric delay over larger study regions. Here, we present a robust and multi-weighted approach to estimate the correlation between phase and topography that is relatively insensitive to confounding processes such as regional subsidence over larger regions as well as under varying tropospheric conditions. An expanded form of robust least squares is introduced to estimate the spatially variable correlation between phase and topography by splitting the interferograms into multiple blocks. Within each block, correlation is robustly estimated from the band-filtered phase and topography. Phase-elevation ratios are multiply-weighted and extrapolated to each persistent scatterer (PS) pixel. We applied the proposed method to Envisat ASAR images over the Southern California area, USA, and found that our method mitigated the atmospheric noise better than the conventional phase-based method. The corrected ground surface deformation agreed better with that measured from GPS.
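The per-block robust fit of phase against topography can be sketched with iteratively reweighted least squares using Huber weights, which down-weights pixels carrying a deformation signal. The data and the Huber threshold are synthetic assumptions; the paper's multi-weighted, band-filtered estimator is more elaborate:

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic block: tropospheric phase proportional to elevation, plus noise,
# with 20% of pixels carrying a localized deformation signal (outliers).
n = 500
elev = rng.uniform(0, 2000, n)                     # topography (m)
k_true = 2.0e-3                                    # phase-elevation ratio (rad/m), assumed
phase = k_true * elev + 0.1 * rng.normal(size=n)
deform = rng.random(n) < 0.2
phase[deform] += rng.normal(3.0, 0.5, deform.sum())

def huber_slope(x, y, delta=0.3, iters=20):
    # Iteratively reweighted least squares with Huber weights (no intercept)
    w = np.ones_like(x)
    k = 0.0
    for _ in range(iters):
        k = np.sum(w * x * y) / np.sum(w * x * x)  # weighted LS slope
        r = np.abs(y - k * x)
        w = np.where(r <= delta, 1.0, delta / r)   # Huber weight function
    return k

k_ols = np.sum(elev * phase) / np.sum(elev * elev)  # ordinary least squares
k_rob = huber_slope(elev, phase)
print(abs(k_rob - k_true) < abs(k_ols - k_true))    # robust fit is closer
```

Repeating such a fit per block and interpolating the resulting ratios across blocks gives a spatially variable correction in the spirit of the abstract.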
An interactive and flexible approach to stamping design and optimization
International Nuclear Information System (INIS)
Roy, Subir; Kunju, Ravi; Kirby, David
2004-01-01
This paper describes an efficient method that integrates finite element analysis (FEA), mesh morphing and response surface based optimization in order to implement an automated and flexible software tool to optimize stamping tool and process design. For FEA, a robust and extremely fast inverse solver is chosen. For morphing, a state of the art mesh morpher that interactively generates shape variables for optimization studies is used. The optimization algorithm utilized in this study enables a global search for a multitude of parameters and is highly flexible with regards to the choice of objective functions. A quality function that minimizes formability defects resulting from stretching and compression is implemented
Automatic spinal cord localization, robust to MRI contrasts using global curve optimization.
Gros, Charley; De Leener, Benjamin; Dupont, Sara M; Martin, Allan R; Fehlings, Michael G; Bakshi, Rohit; Tummala, Subhash; Auclair, Vincent; McLaren, Donald G; Callot, Virginie; Cohen-Adad, Julien; Sdika, Michaël
2018-02-01
During the last two decades, MRI has been increasingly used for providing valuable quantitative information about spinal cord morphometry, such as quantification of spinal cord atrophy in various diseases. However, despite the significant improvement of MR sequences adapted to the spinal cord, automatic image processing tools for spinal cord MRI data are not yet as developed as for the brain. There is nonetheless great interest in fully automatic and fast processing methods to be able to propose quantitative analysis pipelines on large datasets without user bias. The first step of most of these analysis pipelines is to detect the spinal cord, which is challenging to achieve automatically across the broad range of MRI contrasts, fields of view, resolutions and pathologies. In this paper, a fully automated, robust and fast method for detecting the spinal cord centerline on MRI volumes is introduced. The algorithm uses a global optimization scheme that attempts to strike a balance between a probabilistic localization map of the spinal cord center point and the overall spatial consistency of the spinal cord centerline (i.e. the rostro-caudal continuity of the spinal cord). Additionally, a new post-processing feature, which aims to automatically split brain and spine regions, is introduced, to be able to detect a consistent spinal cord centerline independently from the field of view. We present data on the validation of the proposed algorithm, known as "OptiC", from a large dataset involving 20 centers, 4 contrasts (T2-weighted n = 287, T1-weighted n = 120, T2*-weighted n = 307, diffusion-weighted n = 90), and 501 subjects including 173 patients with a variety of neurologic diseases. Validation involved the gold-standard centerline coverage, the mean square error between the true and predicted centerlines and the ability to accurately separate brain and spine regions. Overall, OptiC was able to cover 98.77% of the gold-standard centerline, with a
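The global optimization idea (trade a per-slice localization score against rostro-caudal continuity) can be illustrated with an exact dynamic program on a synthetic score map; the real method uses a learned probability map and a more refined formulation:

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic 2-D problem: one centerline position per axial slice. The score
# map peaks at the true cord position, except for one slice with no clear cord.
n_slices, width = 40, 60
true_center = (30 + 8 * np.sin(np.linspace(0, 3, n_slices))).astype(int)
score = rng.uniform(0, 0.3, size=(n_slices, width))
score[np.arange(n_slices), true_center] = 1.0
score[5, :] = 0.3                                  # ambiguous slice

lam = 0.2                                          # smoothness weight (assumed)
cost = np.full((n_slices, width), -np.inf)
cost[0] = score[0]
back = np.zeros((n_slices, width), dtype=int)
for i in range(1, n_slices):
    for j in range(width):
        # best predecessor: previous cost minus a penalty on the jump size
        prev = cost[i - 1] - lam * np.abs(np.arange(width) - j)
        back[i, j] = int(np.argmax(prev))
        cost[i, j] = score[i, j] + prev[back[i, j]]

# backtrack the globally optimal centerline
path = [int(np.argmax(cost[-1]))]
for i in range(n_slices - 1, 0, -1):
    path.append(back[i, path[-1]])
path = np.array(path[::-1])

err = np.abs(path - true_center)
print(np.median(err))   # most slices recovered exactly
```

The smoothness penalty is what lets the globally optimal path bridge the ambiguous slice using its neighbors, which is the continuity argument the abstract makes.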
Wang, Qian; Xue, Anke
2018-06-01
This paper proposes a robust control for the spacecraft rendezvous system considering parameter uncertainties and actuator unsymmetrical saturation, based on the discrete gain scheduling approach. By a change of variables, we transform the actuator unsymmetrical saturation control problem into a symmetrical one. The main advantage of the proposed method is improving the dynamic performance of the closed-loop system with a region of attraction as large as possible. By the Lyapunov approach and the scheduling technology, the existence conditions for the admissible controller are formulated in the form of linear matrix inequalities. The numerical simulation illustrates the effectiveness of the proposed method.
Classical gas: Hearty prices, robust demand combine to pump breezy optimism through 2005 forecasts
International Nuclear Information System (INIS)
Lunan, D.
2005-01-01
The year 2005 is said to be a watershed for the natural gas outlook, with a lengthy list of developments that could have a significant effect on the industry for many years to come. In light of continuing high demand and static supply prospects, prices will have to remain high in order to ensure the necessary infrastructure investments to keep gas flowing from multiple sources to the consumer. It is predicted that against the backdrop of robust prices several supply initiatives will continue to advance rapidly in 2005, such as the $7 billion Mackenzie Gas Project, on which public hearings are expected to start this summer, along with regulatory clarity about the $20 billion Alaska Highway Natural Gas Pipeline Project to move North Slope gas to southern markets. Drilling of new gas wells will continue to approach or even surpass 18,000 new wells, with an increasing number of these being coal-bed methane wells. Despite the high level of drilling activity, supply is expected to grow only about 400 MMcf per day. Greater supply increments are expected through continued LNG terminal development, although plans for new LNG terminals have been met with stiff resistance from local residents both in Canada and the United States. Imports of liquefied natural gas into the United States slowed dramatically in 2004 under severe short-term downward pressure on natural gas prices; nevertheless, these imports are expected to rebound to new record highs in 2005. Capacity is expected to climb from about 2.55 Bcf per day in 2004 to as much as 6.4 Bcf per day by late 2007. At least one Canadian import facility, Anadarko's one Bcf per day Bear Head terminal on Nova Scotia's Strait of Canso, is expected to become operational by late 2007 or early 2008. 6 photos
Novel Robust Optimization and Power Allocation of Time Reversal-MIMO-UWB Systems in an Imperfect CSI
Directory of Open Access Journals (Sweden)
Sajjad Alizadeh
2013-03-01
Full Text Available The Time Reversal (TR) technique is an attractive solution for a scenario where the transmission system employs low complexity receivers with multiple antennas at both transmitter and receiver sides. The TR technique can be combined with a high data rate MIMO-UWB system as a TR-MIMO-UWB system. In spite of TR's good performance in MIMO-UWB systems, it suffers from performance degradation in an imperfect Channel State Information (CSI) case. In this paper, a robust TR pre-filter is first designed together with an MMSE equalizer in the TR-MIMO-UWB system, which is robust against channel imperfection conditions. We show that the robust pre-filter optimization technique considerably improves the BER performance of the TR-MIMO-UWB system under imperfect CSI, while the temporal focusing of the TR technique is preserved, especially for high SNR values. Then, in order to further improve the system performance, a power loading scheme is developed by minimizing the average symbol error rate under imperfect CSI. Numerical and simulation results are presented to confirm the performance advantage attained by the proposed robust optimization and power loading in an imperfect CSI scenario.
International Nuclear Information System (INIS)
Iskander, Boulaabi; Faycal, Ben Hmida; Moncef, Gossa; Anis, Sellami
2009-01-01
This paper presents a design method of a Sliding Mode Observer (SMO) for robust sensor fault reconstruction of systems with matched uncertainty. This class of uncertainty requires a known upper bound. The basic idea is to use the H∞ concept to design the observer, which minimizes the effect of the uncertainty on the reconstruction of the sensor faults. Specifically, we apply the equivalent output error injection concept from previous work in the Fault Detection and Isolation (FDI) scheme. Then, these two problems of design and reconstruction can be expressed and numerically formulated via Linear Matrix Inequality (LMI) optimization. Finally, a numerical example is given to illustrate the validity and the applicability of the proposed approach.
Directory of Open Access Journals (Sweden)
Benoit Macq
2008-07-01
Full Text Available Based on the analysis of real mobile ad hoc network (MANET) traces, we derive in this paper an optimal wireless JPEG 2000 compliant forward error correction (FEC) rate allocation scheme for robust streaming of images and videos over MANET. The packet-based proposed scheme has a low complexity and is compliant with JPWL, the 11th part of the JPEG 2000 standard. The effectiveness of the proposed method is evaluated using a wireless Motion JPEG 2000 client/server application, and the ability of the optimal scheme to guarantee quality of service (QoS) to wireless clients is demonstrated.
Akkala, Arun Goud
Leakage currents in CMOS transistors have risen dramatically with technology scaling, leading to a significant increase in standby power consumption. Among the various transistor candidates, the excellent short channel immunity of silicon double gate FinFETs has made them the best contender for successful scaling to sub-10nm nodes. For sub-10nm FinFETs, new quantum mechanical leakage mechanisms such as direct source to drain tunneling (DSDT) of charge carriers through the channel potential energy barrier, arising due to the proximity of source/drain regions coupled with the high transport-direction electric field, are expected to dominate overall leakage. To counter the effects of DSDT and worsening short channel effects, and to maintain Ion/Ioff, performance and power consumption at reasonable values, device optimization techniques are necessary for deeply scaled transistors. In this work, source/drain underlapping of FinFETs has been explored using quantum mechanical device simulations as a potentially promising method to lower DSDT while maintaining the Ion/Ioff ratio at acceptable levels. By adopting a device/circuit/system level co-design approach, it is shown that asymmetric underlapping, where the drain side underlap is longer than the source side underlap, results in optimal energy efficiency for logic circuits in near-threshold as well as standard, super-threshold operating regimes. In addition, the read/write conflict in 6T SRAMs and the degradation in cell noise margins due to the low supply voltage can be mitigated by using optimized asymmetric underlapped n-FinFETs for the access transistor, thereby leading to robust cache memories. When gate-workfunction tuning is possible, using asymmetric underlapped n-FinFETs for both access and pull-down devices in an SRAM bit cell can lead to high-speed and low-leakage caches. Further, it is shown that threshold voltage degradation in the presence of Hot Carrier Injection (HCI) is less severe in asymmetric underlap n-FinFETs. A
Yang, Jun; Zolotas, Argyrios; Chen, Wen-Hua; Michail, Konstantinos; Li, Shihua
2011-07-01
Robust control of a class of uncertain systems that have disturbances and uncertainties not satisfying "matching" condition is investigated in this paper via a disturbance observer based control (DOBC) approach. In the context of this paper, "matched" disturbances/uncertainties stand for the disturbances/uncertainties entering the system through the same channels as control inputs. By properly designing a disturbance compensation gain, a novel composite controller is proposed to counteract the "mismatched" lumped disturbances from the output channels. The proposed method significantly extends the applicability of the DOBC methods. Rigorous stability analysis of the closed-loop system with the proposed method is established under mild assumptions. The proposed method is applied to a nonlinear MAGnetic LEViation (MAGLEV) suspension system. Simulation shows that compared to the widely used integral control method, the proposed method provides significantly improved disturbance rejection and robustness against load variation. Copyright © 2011 ISA. Published by Elsevier Ltd. All rights reserved.
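The "mismatched" case can be illustrated on a double integrator where the disturbance enters the first state equation rather than the input channel. The observer and the compensation term below follow the standard nonlinear-disturbance-observer construction and are a generic sketch, not the paper's MAGLEV design:

```python
import numpy as np

# DOBC sketch for a mismatched disturbance:
#   x1' = x2 + d,   x2' = u        (d enters a different channel than u)
# The observer estimate d_hat feeds a compensation term so x1 still settles
# at zero despite the mismatch. Gains and disturbance value are assumed.
dt, T = 1e-3, 20.0
k1, k2, L = 4.0, 4.0, 10.0       # feedback gains and observer gain
d = 0.8                           # unknown constant disturbance

def simulate(compensate):
    x1, x2, z = 0.0, 0.0, 0.0
    for _ in range(int(T / dt)):
        d_hat = z + L * x1                       # disturbance observer output
        if compensate:
            u = -k1 * x1 - k2 * (x2 + d_hat)     # compensate the mismatched d
        else:
            u = -k1 * x1 - k2 * x2               # plain state feedback
        z += dt * (-L * (z + L * x1) - L * x2)   # observer internal state
        x1, x2 = x1 + dt * (x2 + d), x2 + dt * u
    return x1

print(abs(simulate(True)))    # near zero with DOBC compensation
print(abs(simulate(False)))   # steady-state offset k2*d/k1 without it
```

The contrast mirrors the abstract's comparison with plain integral-free feedback: without the disturbance compensation gain, a mismatched disturbance leaves a steady-state offset in the output channel.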
Robust control of uncertain dynamic systems a linear state space approach
Yedavalli, Rama K
2014-01-01
This textbook aims to provide a clear understanding of the various tools of analysis and design for robust stability and performance of uncertain dynamic systems. In model-based control design and analysis, mathematical models can never completely represent the “real world” system that is being modeled, and thus it is imperative to incorporate and accommodate a level of uncertainty into the models. This book directly addresses these issues from a deterministic uncertainty viewpoint and focuses on the interval parameter characterization of uncertain systems. Various tools of analysis and design are presented in a consolidated manner. This volume fills a current gap in published works by explicitly addressing the subject of control of dynamic systems from linear state space framework, namely using a time-domain, matrix-theory based approach. This book also: Presents and formulates the robustness problem in a linear state space model framework Illustrates various systems level methodologies with examples and...
Robust Optimization on Regional WCO-for-Biodiesel Supply Chain under Supply and Demand Uncertainties
Directory of Open Access Journals (Sweden)
Yong Zhang
2016-01-01
Full Text Available This paper aims to design a robust waste cooking oil (WCO)-for-biodiesel supply chain under uncertainty in WCO supply and price as well as biodiesel demand and price, so as to improve biorefineries’ ability to cope with an unfavourable environment. A regional supply chain is first introduced based on the biggest WCO-for-biodiesel company in Changzhou, Jiangsu province; it comprises three components: WCO supplier, biorefinery, and demand zone. A robust mixed-integer linear model with multiple objectives (economic, environmental, and social) is then proposed for both biorefinery location and transportation planning. A heuristic algorithm based on a genetic algorithm is proposed to solve this model. Finally, the 27 cities in the Yangtze River delta are used to verify the proposed models and methods, and the sustainability and robustness of the biodiesel supply are discussed.
International Nuclear Information System (INIS)
Moradi, Hamed; Bakhtiari-Nejad, Firooz; Saffar-Avval, Majid
2009-01-01
To achieve good performance of a utility boiler, dynamic variables such as drum pressure, steam temperature and drum water level must be controlled. In this paper, a linear time-invariant (LTI) model of a boiler system is considered in which the input variables are the feed-water and fuel mass rates. This dynamic model may, however, be subject to uncertainties. Taking the uncertainties of the dynamic model into account, a sliding mode controller is designed. After representing the uncertain dynamic system in the general control configuration and modelling the parametric uncertainties, nominal performance, robust stability and robust performance are analyzed using the structured singular value μ. Using an algorithm for μ-analysis and applying an inverse-based controller, robust stability and nominal performance are guaranteed but robust performance is not satisfied. Finally, an optimal robust controller is designed based on μ-synthesis with the DK-iteration algorithm. Both the optimal robust and sliding mode controllers guarantee robust performance of the system against the uncertainties and yield the desired time responses of the output variables. With H∞ robust control, the system tracks the desired reference inputs in less time and with smoother responses; however, smaller control efforts (feed-water and fuel mass rates) are needed when the sliding mode controller is applied.
Directory of Open Access Journals (Sweden)
Seyed Gholamreza Jalali Naini
2012-01-01
Full Text Available We use a two-stage data envelopment analysis (DEA) model to analyze the effects of entrance deregulation on efficiency in the Iranian insurance market. In the first stage, we propose a robust optimization approach to overcome the sensitivity of DEA results to uncertainty in the output parameters. The efficiency of each ongoing insurer is estimated using our proposed robust DEA model, and the insurers are then ranked based on their relative efficiency scores over an eight-year period from 2003 to 2010. In the second stage, a comprehensive statistical analysis using generalized estimating equations (GEE) is conducted to analyze other factors that could affect the efficiency scores. The first-stage DEA results indicate a decline in efficiency over the entrance deregulation period, while further statistical analysis confirms that solvency ignorance, a widespread paradigm among state-owned companies, is one of the main drivers of efficiency in the Iranian insurance market.
Non-invasive multiparametric qBOLD approach for robust mapping of the oxygen extraction fraction.
Domsch, Sebastian; Mie, Moritz B; Wenz, Frederik; Schad, Lothar R
2014-09-01
The quantitative blood oxygenation level-dependent (qBOLD) method has not yet become clinically established because long acquisition times are necessary to achieve an acceptable certainty of the parameter estimates. In this work, a non-invasive multiparametric (nimp) qBOLD approach based on a simple analytical model is proposed to facilitate robust oxygen extraction fraction (OEF) mapping within clinically acceptable acquisition times by using separate measurements. The protocol consisted of a gradient-echo sampled spin-echo sequence (GESSE), a T2-weighted Carr-Purcell-Meiboom-Gill (CPMG) sequence, and a T2*-weighted multi-slice multi-echo gradient echo (MMGE) sequence. The GESSE acquisition time was less than 5 minutes and the extra measurement time for CPMG/MMGE was below 2 minutes each. The proposed nimp-qBOLD approach was validated in healthy subjects (N = 5) and one patient. The proposed nimp-qBOLD approach facilitated more robust OEF mapping with significantly reduced inter- and intra-subject variability compared to the standard qBOLD method. The average OEF across all subjects of 27±2% in white matter (WM) and 29±2% in gray matter (GM) with the nimp-qBOLD method was more stable than the 41±10% (WM) and 46±10% (GM) obtained with standard qBOLD. Moreover, the spatial variance in the image slice (i.e. standard deviation divided by mean) was on average reduced from 35% to 25%. In addition, the preliminary results of the patient are encouraging. The proposed nimp-qBOLD technique provides a promising tool for robust OEF mapping within clinically acceptable acquisition times and could therefore make an important contribution to analyzing tumors or monitoring the success of radio- and chemotherapy. Copyright © 2014. Published by Elsevier GmbH.
Non-invasive multiparametric qBOLD approach for robust mapping of the oxygen extraction fraction
Energy Technology Data Exchange (ETDEWEB)
Domsch, Sebastian; Mie, Moritz B.; Schad, Lothar R. [Heidelberg Univ., Medical Faculty Mannheim (Germany). Computer Assisted Clinical Medicine; Wenz, Frederik [Heidelberg Univ., Medical Faculty Mannheim (Germany). Dept. of Radiation Oncology
2014-10-01
Introduction: The quantitative blood oxygenation level-dependent (qBOLD) method has not yet become clinically established because long acquisition times are necessary to achieve an acceptable certainty of the parameter estimates. In this work, a non-invasive multiparametric (nimp) qBOLD approach based on a simple analytical model is proposed to facilitate robust oxygen extraction fraction (OEF) mapping within clinically acceptable acquisition times by using separate measurements. Methods: The protocol consisted of a gradient-echo sampled spin-echo sequence (GESSE), a T2-weighted Carr-Purcell-Meiboom-Gill (CPMG) sequence, and a T2*-weighted multi-slice multi-echo gradient echo (MMGE) sequence. The GESSE acquisition time was less than 5 minutes and the extra measurement time for CPMG/MMGE was below 2 minutes each. The proposed nimp-qBOLD approach was validated in healthy subjects (N = 5) and one patient. Results: The proposed nimp-qBOLD approach facilitated more robust OEF mapping with significantly reduced inter- and intra-subject variability compared to the standard qBOLD method. The average OEF across all subjects of 27 ± 2 % in white matter (WM) and 29 ± 2 % in gray matter (GM) with the nimp-qBOLD method was more stable than the 41 ± 10 % (WM) and 46 ± 10 % (GM) obtained with standard qBOLD. Moreover, the spatial variance in the image slice (i.e. standard deviation divided by mean) was on average reduced from 35 % to 25 %. In addition, the preliminary results of the patient are encouraging. Conclusion: The proposed nimp-qBOLD technique provides a promising tool for robust OEF mapping within clinically acceptable acquisition times and could therefore make an important contribution to analyzing tumors or monitoring the success of radio- and chemotherapy. (orig.)
A systematic approach to robust preconditioning for gradient-based inverse scattering algorithms
International Nuclear Information System (INIS)
Nordebo, Sven; Fhager, Andreas; Persson, Mikael; Gustafsson, Mats
2008-01-01
This paper presents a systematic approach to robust preconditioning for gradient-based nonlinear inverse scattering algorithms. In particular, one- and two-dimensional inverse problems are considered where the permittivity and conductivity profiles are unknown and the input data consist of the scattered field over a certain bandwidth. A time-domain least-squares formulation is employed and the inversion algorithm is based on a conjugate gradient or quasi-Newton algorithm together with an FDTD-electromagnetic solver. A Fisher information analysis is used to estimate the Hessian of the error functional. A robust preconditioner is then obtained by incorporating a parameter scaling such that the scaled Fisher information has a unit diagonal. By improving the conditioning of the Hessian, the convergence rate of the conjugate gradient or quasi-Newton methods is improved. The preconditioner is robust in the sense that the scaling, i.e. the diagonal Fisher information, is virtually invariant to the numerical resolution and the discretization model that is employed. Numerical examples of image reconstruction are included to illustrate the efficiency of the proposed technique.
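The diagonal-scaling idea can be illustrated on a toy problem. A badly scaled quadratic stands in for the inverse-scattering misfit, and its diagonal Hessian plays the role of the Fisher information; the learning rates below are illustrative assumptions:

```python
import numpy as np

# Toy illustration of diagonal preconditioning: rescale each parameter
# by 1/sqrt(diag(H)) so the scaled Hessian has a unit diagonal
# (equivalent here to Jacobi preconditioning of gradient descent).

H = np.diag([1.0, 100.0, 10000.0])        # ill-conditioned "Hessian"
g = lambda x: H @ x                        # gradient of 0.5 * x^T H x
S = 1.0 / np.sqrt(np.diag(H))              # per-parameter scaling

def descend(x, lr, scaled, iters=200):
    for _ in range(iters):
        x = x - lr * (S**2 * g(x) if scaled else g(x))
    return x

x0 = np.array([1.0, 1.0, 1.0])
x_plain  = descend(x0, lr=1e-4, scaled=False)  # lr capped by largest curvature
x_scaled = descend(x0, lr=0.5,  scaled=True)   # scaled Hessian is the identity
```

The unscaled run barely moves the low-curvature coordinate, while the scaled run contracts every coordinate at the same rate, mirroring the convergence-rate improvement the paper reports.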
Kumar, Keshav; Cava, Felipe
2018-04-10
In the present work, principal coordinate analysis (PCoA) is introduced to develop a robust model to classify chromatographic data sets of peptidoglycan samples. PCoA captures the heterogeneity present in the data sets by using the dissimilarity matrix as input. Thus, in principle, it can capture even subtle differences in bacterial peptidoglycan composition and can provide a more robust and fast approach for classifying bacterial collections and identifying novel cell wall targets for further biological and clinical studies. The utility of the proposed approach is successfully demonstrated by analysing two different kinds of bacterial collections. The first set comprised peptidoglycan samples belonging to different subclasses of Alphaproteobacteria, whereas the second set, which is relatively more intricate for chemometric analysis, consists of different wild-type Vibrio cholerae strains and their mutants having subtle differences in their peptidoglycan composition. The present work proposes a useful approach that can classify chromatographic data sets of peptidoglycan samples having subtle differences, and suggests that PCoA can be a method of choice in any data analysis workflow. Copyright © 2018 Elsevier Inc. All rights reserved.
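The core computation, classical PCoA from a dissimilarity matrix alone, can be sketched in a few lines. The data here are synthetic one-dimensional points, not peptidoglycan profiles:

```python
import numpy as np

# Minimal principal coordinate analysis (classical metric MDS): embed
# n samples given only their dissimilarity matrix D, via double
# centering and an eigendecomposition.

def pcoa(D, k=2):
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n       # centering matrix
    B = -0.5 * J @ (D ** 2) @ J               # double-centered Gram matrix
    w, V = np.linalg.eigh(B)
    idx = np.argsort(w)[::-1][:k]             # top-k eigenvalues
    return V[:, idx] * np.sqrt(np.maximum(w[idx], 0.0))

# Two tight clusters: small within-group, large between-group distances
pts = np.array([[0.0], [0.1], [5.0], [5.1]])
D = np.abs(pts - pts.T)
coords = pcoa(D, k=1)
```

For Euclidean dissimilarities the embedding reproduces the original distances exactly (up to sign and translation), which is why cluster structure in the chromatographic dissimilarities survives into the low-dimensional plot.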
A New Robust Tracking Control Design for Turbofan Engines: H∞/Leitmann Approach
Directory of Open Access Journals (Sweden)
Muxuan Pan
2017-04-01
Full Text Available In this paper, an H∞/Leitmann approach to robust tracking control design is presented for an uncertain dynamic system. This new method is developed in the following two steps. Firstly, a tracking dynamic system with simultaneous consideration of parameter uncertainty and noise is modeled based on a linear system and a reference model. Accordingly, a “nominal system” from the tracking system is defined and controlled by an H∞ control to obtain asymptotic stability and noise resistance. Secondly, by making use of a Lyapunov function and the norm boundedness, a new robust control with the “Leitmann approach” is designed to cope with the uncertainty. The two controls collaborate with each other to achieve “uniform tracking boundedness” and “uniform ultimate tracking boundedness”. The new approach is then applied to an aircraft turbofan control design, and the numerical simulation results show the prescribed performances of the closed-loop system and the advantage of the developed approach.
A robust optimisation approach to the problem of supplier selection and allocation in outsourcing
Fu, Yelin; Keung Lai, Kin; Liang, Liang
2016-03-01
We formulate the supplier selection and allocation problem in outsourcing under an uncertain environment as a stochastic programming problem. Both the decision-maker's attitude towards risk and the penalty parameters for demand deviation are considered in the objective function. A service level agreement, upper bound for each selected supplier's allocation and the number of selected suppliers are considered as constraints. A novel robust optimisation approach is employed to solve this problem under different economic situations. Illustrative examples are presented with managerial implications highlighted to support decision-making.
Wilkinson, M.H.F.
The Robust Automatic Threshold Selection algorithm was introduced as a threshold selection method based on a simple image statistic. The statistic is an average of the grey levels of the pixels in an image, weighted by the response at each pixel of a specific edge detector. Other authors have suggested that
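The weighted-average statistic described above can be sketched directly. The edge detector here is a plain gradient magnitude and the image is a synthetic step edge; both are illustrative stand-ins for the detectors discussed in the literature:

```python
import numpy as np

# Sketch of the Robust Automatic Threshold Selection statistic: the
# threshold is the mean grey level weighted by an edge-detector
# response, so pixels near edges (which straddle the two classes)
# dominate the average.

def rats_threshold(img):
    gy, gx = np.gradient(img.astype(float))
    w = gx ** 2 + gy ** 2                 # edge weight per pixel
    return (w * img).sum() / w.sum()

img = np.zeros((10, 10))
img[:, 5:] = 100.0                        # step edge between 0 and 100
t = rats_threshold(img)
```

On this two-region image only the pixels flanking the edge carry weight, so the statistic lands midway between the two grey levels, exactly where a threshold should sit.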
Game-theoretic approaches to optimal risk sharing
Boonen, T.J.
2014-01-01
This Ph.D. thesis studies optimal risk capital allocation and optimal risk sharing. The first chapter deals with the problem of optimally allocating risk capital across divisions within a financial institution. To do so, an asymptotic approach is used to generalize the well-studied Aumann-Shapley
Density control in ITER: an iterative learning control and robust control approach
Ravensbergen, T.; de Vries, P. C.; Felici, F.; Blanken, T. C.; Nouailletas, R.; Zabeo, L.
2018-01-01
Plasma density control for next-generation tokamaks, such as ITER, is challenging for multiple reasons. The response of the usual gas valve actuators in future, larger fusion devices might be too slow for feedback control. Both pellet fuelling and the use of feedforward-based control may help to solve this problem. Also, tight density limits arise during ramp-up, due to operational limits related to divertor detachment and radiative collapses. As the number of shots available for controller tuning will be limited in ITER, in this paper iterative learning control (ILC) is proposed to determine optimal feedforward actuator inputs based on tracking errors obtained in previous shots. This control method can take the actuator and density limits into account and can deal with large actuator delays. However, purely feedforward-based density control may not be sufficient due to the presence of disturbances and shot-to-shot differences. Therefore, robust control synthesis is used to construct a robustly stabilizing feedback controller. In simulations, it is shown that this combined controller strategy is able to achieve good tracking performance in the presence of shot-to-shot differences, tight constraints, and model mismatches.
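The shot-to-shot ILC update can be sketched on a toy lifted plant. The impulse response, learning gain, and reference below are illustrative assumptions, not the ITER density model:

```python
import numpy as np

# Toy iterative learning control run: a lifted linear plant y = G u is
# driven to track a reference over repeated "shots", with the
# feedforward input updated from the previous shot's tracking error.

N = 50
h = 0.3 ** np.arange(N)                   # decaying impulse response
# Lower-triangular (lifted) plant: y[t] = sum_{s<=t} h[t-s] * u[s]
G = np.array([[h[i - j] if i >= j else 0.0 for j in range(N)]
              for i in range(N)])
ref = np.sin(np.linspace(0, np.pi, N))

u = np.zeros(N)
errs = []
for shot in range(20):
    e = ref - G @ u                       # tracking error of this shot
    errs.append(np.linalg.norm(e))
    u = u + 0.5 * e                       # ILC update for the next shot
```

The error propagates as e_{k+1} = (I - 0.5 G) e_k, a contraction for this plant, so the tracking error shrinks geometrically across shots; the constraint and delay handling described in the abstract sit on top of this basic recursion.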
Kantian Optimization: An Approach to Cooperative Behavior
John E. Roemer
2014-01-01
Although evidence accrues in biology, anthropology and experimental economics that homo sapiens is a cooperative species, the reigning assumption in economic theory is that individuals optimize in an autarkic manner (as in Nash and Walrasian equilibrium). I here postulate a cooperative kind of optimizing behavior, called Kantian. It is shown that in simple economic models, when there are negative externalities (such as congestion effects from use of a commonly owned resource) or positive exte...
Reliability-redundancy optimization by means of a chaotic differential evolution approach
International Nuclear Information System (INIS)
Coelho, Leandro dos Santos
2009-01-01
The reliability design is related to the performance analysis of many engineering systems. Reliability-redundancy optimization problems involve the selection of components with multiple choices and redundancy levels that produce maximum benefit, subject to cost, weight, and volume constraints. Classical mathematical methods have failed in handling nonconvexities and nonsmoothness in optimization problems. As an alternative to the classical optimization approaches, meta-heuristics have received much attention from many researchers due to their ability to find an almost global optimal solution in reliability-redundancy optimization problems. Evolutionary algorithms (EAs) - paradigms of the evolutionary computation field - are stochastic and robust meta-heuristics useful for solving reliability-redundancy optimization problems. EAs such as genetic algorithms, evolutionary programming, evolution strategies and differential evolution are being used to find global or near-global optimal solutions. A differential evolution approach based on chaotic sequences generated by Lozi's map is proposed in this paper for reliability-redundancy optimization problems. The proposed method not only has a fast convergence rate but also maintains the diversity of the population so as to escape from local optima. An application example in reliability-redundancy optimization based on the overspeed protection system of a gas turbine is given to show its usefulness and efficiency. Simulation results show that the application of deterministic chaotic sequences instead of random sequences is a possible strategy to improve the performance of differential evolution.
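The core idea, replacing the pseudo-random generator inside differential evolution with a chaotic Lozi-map sequence, can be sketched as follows. The map parameters, the squash into [0, 1), and the sphere test function are illustrative choices, not the paper's reliability model:

```python
# Differential evolution (DE/rand/1/bin) driven by a Lozi chaotic map
# instead of a pseudo-random generator. The Lozi iterates are squashed
# into [0, 1) and used wherever DE would draw a uniform number.

class Lozi:
    def __init__(self, x=0.1, y=0.1, a=1.7, b=0.5):
        self.x, self.y, self.a, self.b = x, y, a, b
    def next(self):
        self.x, self.y = 1 - self.a * abs(self.x) + self.y, self.b * self.x
        return (self.x + 2.0) / 4.0 % 1.0    # crude squash into [0, 1)

def chaotic_de(f, dim=3, pop=20, gens=200, F=0.5, CR=0.9, lo=-5.0, hi=5.0):
    u = Lozi().next
    X = [[lo + (hi - lo) * u() for _ in range(dim)] for _ in range(pop)]
    fit = [f(x) for x in X]
    for _ in range(gens):
        for i in range(pop):
            idx = []                          # three distinct partners
            while len(idx) < 3:
                j = int(u() * pop) % pop
                if j != i and j not in idx:
                    idx.append(j)
            a, b, c = (X[j] for j in idx)
            jr = int(u() * dim) % dim         # forced-crossover index
            trial = [a[k] + F * (b[k] - c[k]) if (u() < CR or k == jr)
                     else X[i][k] for k in range(dim)]
            ft = f(trial)
            if ft <= fit[i]:                  # greedy selection
                X[i], fit[i] = trial, ft
    return min(fit)

best = chaotic_de(lambda x: sum(v * v for v in x))   # sphere function
```

The run is fully deterministic, which is one practical appeal of chaotic sequences: experiments are exactly reproducible without storing random seeds.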
DEFF Research Database (Denmark)
Ghoreishi, Maryam
2018-01-01
Many models within the field of optimal dynamic pricing and lot-sizing models for deteriorating items assume everything is deterministic and develop a differential equation as the core of analysis. Two prominent examples are the papers by Rajan et al. (Manag Sci 38:240–262, 1992) and Abad (Manag......, we will try to expose the model by Abad (1996) and Rajan et al. (1992) to stochastic inputs; however, designing these stochastic inputs such that they as closely as possible are aligned with the assumptions of those papers. We do our investigation through a numerical test where we test the robustness...... of the numerical results reported in Rajan et al. (1992) and Abad (1996) in a simulation model. Our numerical results seem to confirm that the results stated in these papers are indeed robust when being imposed to stochastic inputs....
Directory of Open Access Journals (Sweden)
Olav Slupphaug
2001-01-01
Full Text Available We present a mathematical programming approach to robust control of nonlinear systems with uncertain, possibly time-varying, parameters. The uncertain system is given by different local affine parameter-dependent models in different parts of the state space. It is shown how this representation can be obtained from a nonlinear uncertain system by solving a set of continuous linear semi-infinite programming problems, and how each of these problems can be solved as a (finite) series of ordinary linear programs. Additionally, the system representation includes control and state constraints. The controller design method is derived from Lyapunov stability arguments and utilizes an affine parameter-dependent quadratic Lyapunov function. The controller has a piecewise affine output feedback structure, and the design amounts to finding a feasible solution to a set of linear matrix inequalities combined with one spectral radius constraint on the product of two positive definite matrices. A local solution approach to this nonconvex feasibility problem is proposed. Complexity of the design method and some special cases such as state feedback are discussed. Finally, an application of the results is given by proposing an on-line computationally feasible algorithm for constrained nonlinear state-feedback model predictive control with robust stability.
Robust bladder image registration by redefining data-term in total variational approach
Ali, Sharib; Daul, Christian; Galbrun, Ernest; Amouroux, Marine; Guillemin, François; Blondel, Walter
2015-03-01
Cystoscopy is the standard procedure for clinical diagnosis of bladder cancer. Bladder carcinomas in situ are often multifocal and spread over large areas. In vivo localization and follow-up of these tumors and their nearby sites is necessary, but due to the small field of view (FOV) of cystoscopic video images, urologists cannot easily interpret the scene. Bladder mosaicing using image registration facilitates this interpretation through the visualization of entire lesions with respect to anatomical landmarks. The reference white light (WL) modality is affected by strong variability in terms of texture, illumination conditions and motion blur. Moreover, in the complementary fluorescence light (FL) modality, the texture is visually different from that of WL. Existing algorithms were developed for a particular modality and scene conditions. This paper proposes a more general on-the-fly image registration approach for dealing with these variability issues in cystoscopy. To do so, we present a novel, robust and accurate image registration scheme based on redefining the data term of the classical total variational (TV) approach. Quantitative results on realistic bladder phantom images are used to verify the accuracy and robustness of the proposed model. The method is also qualitatively assessed by mosaicing patient data for both WL and FL modalities.
Robustness analysis of the Zhang neural network for online time-varying quadratic optimization
International Nuclear Information System (INIS)
Zhang Yunong; Ruan Gongqin; Li Kene; Yang Yiwen
2010-01-01
A general type of recurrent neural network (termed the Zhang neural network, ZNN) has recently been proposed by Zhang et al. for the online solution of time-varying quadratic-minimization (QM) and quadratic-programming (QP) problems. Global exponential convergence of the ZNN can be achieved theoretically in an ideal error-free situation. In this paper, with the normal differentiation and dynamics-implementation errors considered, the robustness properties of the ZNN model are investigated for solving these time-varying problems. In addition, linear activation functions and power-sigmoid activation functions can be applied to such a perturbed ZNN model. Both theoretical-analysis and computer-simulation results demonstrate the good robustness and superior performance of the ZNN for online time-varying QM and QP problem solving, especially when power-sigmoid activation functions are used.
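The ZNN design principle can be sketched on the time-varying linear system A(t)x = b(t), the workhorse inside time-varying QM/QP solvers. The linear activation, gain, and the particular A(t), b(t) below are illustrative assumptions, not the paper's perturbed model:

```python
import numpy as np

# Minimal Zhang neural network (ZNN) sketch with a linear activation:
# define the error e(t) = A(t) x(t) - b(t) and impose de/dt = -gamma*e,
# which yields the state dynamics
#   dx/dt = A(t)^{-1} (db/dt - dA/dt x - gamma * e),
# integrated here with forward Euler.

def znn_track(gamma=100.0, dt=1e-3, T=5.0):
    t, x = 0.0, np.zeros(2)
    A  = lambda t: np.diag([2 + np.sin(t), 2 + np.cos(t)])
    Ad = lambda t: np.diag([np.cos(t), -np.sin(t)])      # dA/dt
    b  = lambda t: np.array([np.sin(t), np.cos(t)])
    bd = lambda t: np.array([np.cos(t), -np.sin(t)])     # db/dt
    while t < T:
        e = A(t) @ x - b(t)
        xdot = np.linalg.solve(A(t), bd(t) - Ad(t) @ x - gamma * e)
        x, t = x + dt * xdot, t + dt
    return np.linalg.norm(A(t) @ x - b(t))               # residual at T

res = znn_track()
```

Because the error dynamics are enforced rather than the solution itself, the network tracks the moving solution instead of chasing it with a lag, which is the property whose robustness to implementation errors the paper analyzes.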
Conceptual information processing: A robust approach to KBS-DBMS integration
Lazzara, Allen V.; Tepfenhart, William; White, Richard C.; Liuzzi, Raymond
1987-01-01
Integrating the respective functionality and architectural features of knowledge base and data base management systems is a topic of considerable interest. Several aspects of this topic and associated issues are addressed. The significance of integration and the problems associated with accomplishing that integration are discussed. The shortcomings of current approaches to integration and the need to fuse the capabilities of both knowledge base and data base management systems motivates the investigation of information processing paradigms. One such paradigm is concept based processing, i.e., processing based on concepts and conceptual relations. An approach to robust knowledge and data base system integration is discussed by addressing progress made in the development of an experimental model for conceptual information processing.
International Nuclear Information System (INIS)
Piacentino, Antonio; Cardona, Ennio
2010-01-01
This paper is Part II of a two-part paper. In Part I the fundamentals of Scope Oriented Thermoeconomics were introduced, showing limited potential for the cost accounting of existing plants; in this Part II the same concepts are applied to the optimization of a small set of design variables for a vapour compression chiller. The method overcomes a limitation of most conventional optimization techniques, which are usually based on hermetic algorithms that do not enable the energy analyst to recognize all the margins for improvement. Scope Oriented Thermoeconomic optimization allows us to disassemble the optimization process, thus recognizing the Formation Structure of Optimality, i.e. the specific influence of each thermodynamic and economic parameter on the path toward the optimal design. Finally, the potential applications of such an in-depth understanding of the inner driving forces of the optimization are discussed in the paper, with a particular focus on sensitivity analysis with respect to variations in energy and capital costs and on actual operation-oriented design.
Directory of Open Access Journals (Sweden)
Bego Blanco
2017-01-01
Full Text Available In the context of cloud-enabled 5G radio access networks with network function virtualization capabilities, we focus on the virtual network function placement problem for a multitenant cluster of small cells that provide mobile edge computing services. Under an emerging distributed network architecture and hardware infrastructure, we employ cloud-enabled small cells that integrate microservers for virtualization execution, equipped with additional hardware appliances. We develop an energy-aware placement solution using a robust optimization approach based on service demand uncertainty in order to minimize the power consumption in the system constrained by network service latency requirements and infrastructure terms. Then, we discuss the results of the proposed placement mechanism in 5G scenarios that combine several service flavours and robust protection values. Once the impact of the service flavour and robust protection on the global power consumption of the system is analyzed, numerical results indicate that our proposal succeeds in efficiently placing the virtual network functions that compose the network services in the available hardware infrastructure while fulfilling service constraints.
An Information Retrieval Approach for Robust Prediction of Road Surface States.
Park, Jae-Hyung; Kim, Kwanho
2017-01-28
Recently, due to the increasing importance of reducing severe vehicle accidents on roads (especially on highways), the automatic identification of road surface conditions, and the provision of such information to drivers in advance, has been gaining significant momentum as a proactive solution to decrease the number of vehicle accidents. In this paper, we first propose an information retrieval approach that identifies road surface states by combining conventional machine-learning techniques and moving average methods. Specifically, when signal information is received from a radar system, our approach estimates the current state of the road surface from similar instances observed previously, using a given similarity function. The estimated state is then calibrated using the recently estimated states to yield both effective and robust predictions. To validate the performance of the proposed approach, we established a real-world experimental setting on a section of actual highway in South Korea and conducted a comparison with conventional approaches in terms of accuracy. The experimental results show that the proposed approach outperforms the previously developed methods.
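The two-stage idea, retrieve similar past instances, then smooth over recent predictions, can be sketched with synthetic data. The 1-D feature, labels, window size, and similarity function are illustrative stand-ins for the radar signal descriptors:

```python
from collections import Counter, deque

# Stage 1: estimate the current road state by retrieving the k most
# similar past instances. Stage 2: calibrate by majority vote over a
# moving window of recent estimates.

history = [(0.1, "dry"), (0.2, "dry"), (0.9, "icy"), (1.0, "icy"),
           (0.15, "dry"), (0.95, "icy")]

def retrieve_state(x, k=3):
    # similarity = negative absolute distance on the 1-D feature
    nearest = sorted(history, key=lambda p: abs(p[0] - x))[:k]
    return Counter(label for _, label in nearest).most_common(1)[0][0]

recent = deque(maxlen=5)               # moving window of raw estimates
def predict(x):
    recent.append(retrieve_state(x))
    return Counter(recent).most_common(1)[0][0]   # calibrated state

out = [predict(x) for x in (0.12, 0.18, 0.92, 0.11, 0.97, 0.93)]
```

Note the trade-off the calibration introduces: isolated noisy retrievals are suppressed, but a genuine state change is only reported once it dominates the window.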
International Nuclear Information System (INIS)
Liu, W; Schild, S; Bues, M; Liao, Z; Sahoo, N; Park, P; Li, H; Li, Y; Li, X; Shen, J; Anand, A; Dong, L; Zhu, X; Mohan, R
2014-01-01
Purpose: We compared conventionally optimized intensity-modulated proton therapy (IMPT) treatment plans against worst-case robustly optimized treatment plans for lung cancer. The comparison of the two IMPT optimization strategies focused on the resulting plans' ability to retain dose objectives under the influence of patient set-up, inherent proton range uncertainty, and dose perturbation caused by respiratory motion. Methods: For each of the 9 lung cancer cases, two treatment plans were created, accounting for treatment uncertainties in two different ways: the first used the conventional method, delivery of the prescribed dose to the planning target volume (PTV) that is geometrically expanded from the internal target volume (ITV); the second employed the worst-case robust optimization scheme that addresses set-up and range uncertainties through beamlet optimization. The plan optimality and plan robustness were calculated and compared. Furthermore, the effects on dose distributions of the changes in patient anatomy due to respiratory motion were investigated for both strategies by comparing the corresponding plan evaluation metrics at the end-inspiration and end-expiration phases and the absolute differences between these phases. The mean plan evaluation metrics of the two groups were compared using two-sided paired t-tests. Results: Without respiratory motion considered, we affirmed that worst-case robust optimization is superior to PTV-based conventional optimization in terms of plan robustness and optimality. With respiratory motion considered, robust optimization still leads to target dose distributions that are more robust to respiratory motion, and to comparable or even better plan optimality [D95% ITV: 96.6% versus 96.1% (p=0.26), D5% - D95% ITV: 10.0% versus 12.3% (p=0.082), D1% spinal cord: 31.8% versus 36.5% (p=0.035)]. Conclusion: Worst-case robust optimization led to superior solutions for lung IMPT. Despite the fact that robust optimization did not explicitly
Directory of Open Access Journals (Sweden)
Jeevanandham Arumugam
2009-01-01
Full Text Available In this paper a classical lead-lag power system stabilizer is used for demonstration. The stabilizer parameters are selected so as to damp the rotor oscillations. The problem of selecting the stabilizer parameters is converted into a simple optimization problem with an eigenvalue-based objective function, and simulated annealing and particle swarm optimization are employed to solve it. The objective function allows the stabilizer parameters to be selected so as to optimally place the closed-loop eigenvalues in the left half of the complex s-plane. A single machine connected to an infinite bus and a 10-machine 39-bus system are considered in this study. The effectiveness of the stabilizer tuned with each technique in enhancing power system stability is examined; stability is confirmed through eigenvalue analysis and simulation results, and the heuristic technique giving the best system performance is selected.
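An eigenvalue-based objective driven by a particle swarm can be sketched on a toy second-order plant. The closed-loop matrix A(k), swarm parameters, and single tunable gain k are illustrative assumptions, not the machine models of the paper:

```python
import random
import numpy as np

# Minimal particle swarm search for a stabilizer gain k that pushes
# the closed-loop eigenvalues as far into the left half-plane as
# possible (minimizing the spectral abscissa, an eigenvalue-based
# objective in the spirit of the abstract).

def spectral_abscissa(k):
    A = np.array([[0.0, 1.0], [-1.0, -k]])   # toy closed-loop matrix
    return max(np.linalg.eigvals(A).real)     # smaller is better

def pso(obj, lo=0.0, hi=4.0, n=12, iters=60, w=0.6, c1=1.5, c2=1.5):
    random.seed(0)                            # reproducible swarm
    x = [random.uniform(lo, hi) for _ in range(n)]
    v = [0.0] * n
    pb = x[:]                                 # personal bests
    pbv = [obj(p) for p in pb]
    g = pb[min(range(n), key=lambda i: pbv[i])]   # global best
    for _ in range(iters):
        for i in range(n):
            v[i] = (w * v[i] + c1 * random.random() * (pb[i] - x[i])
                    + c2 * random.random() * (g - x[i]))
            x[i] = min(hi, max(lo, x[i] + v[i]))
            fx = obj(x[i])
            if fx < pbv[i]:
                pb[i], pbv[i] = x[i], fx
                if fx < obj(g):
                    g = x[i]
    return g, obj(g)

best_k, best_J = pso(spectral_abscissa)
```

For this plant the characteristic polynomial is s^2 + k s + 1, so the abscissa is minimized at critical damping k = 2 (both eigenvalues at -1), which the swarm should locate.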
Directory of Open Access Journals (Sweden)
Chen Qin
2013-01-01
Full Text Available This paper considers the problems of robust stability and robust H∞ controller design for switched systems with time-varying delay using a delta operator approach. Based on the average dwell time approach and delta operator theory, a sufficient condition for robust exponential stability is presented by choosing an appropriate Lyapunov-Krasovskii functional candidate. Then, a state feedback controller is designed such that the resulting closed-loop system is exponentially stable with a guaranteed H∞ performance. The obtained results are formulated in the form of linear matrix inequalities (LMIs). Finally, a numerical example is provided to explicitly illustrate the feasibility and effectiveness of the proposed method.
Optimization approaches for robot trajectory planning
Directory of Open Access Journals (Sweden)
Carlos Llopis-Albert
2018-03-01
Full Text Available The development of optimal trajectory planning algorithms for autonomous robots is a key issue for efficiently performing robot tasks. This problem is hampered by the complex environment regarding the kinematics and dynamics of robots with several arms and/or degrees of freedom (dof), the design of collision-free trajectories and the physical limitations of the robots. This paper presents a review of existing robot motion planning techniques and discusses their pros and cons regarding completeness, optimality, efficiency, accuracy, smoothness, stability, safety and scalability.
A robust and efficient approach to detect 3D rectal tubes from CT colonography
Energy Technology Data Exchange (ETDEWEB)
Yang Xiaoyun; Slabaugh, Greg [Medicsight PLC, Kensington Centre, 66 Hammersmith Road, London (United Kingdom)
2011-11-15
Purpose: The rectal tube (RT) is a common source of false positives (FPs) in computer-aided detection (CAD) systems for CT colonography. A robust and efficient detection of the RT can improve CAD performance by eliminating such "obvious" FPs and increase radiologists' confidence in CAD. Methods: In this paper, we present a novel and robust bottom-up approach to detect the RT. Probabilistic models, trained using kernel density estimation on simple low-level features, are employed to rank and select the most likely RT candidate on each axial slice. Then, a shape model, robustly estimated using random sample consensus (RANSAC), infers the global RT path from the selected local detections. Subimages around the RT path are projected into a subspace formed from training subimages of the RT. A quadratic discriminant analysis (QDA) classifies each subimage as RT or non-RT based on its projection. Finally, a bottom-up clustering method is proposed to merge the classification predictions and locate the tip position of the RT. Results: Our method is validated using a diverse database, including data from five hospitals. On a testing set of 21 patients (42 volumes), 99.5% of annotated RT paths were successfully detected. Evaluated within CAD, 98.4% of FPs caused by the RT were detected and removed without any loss of sensitivity. Conclusions: The proposed method demonstrates a high detection rate of the RT path and, when tested in a CAD system, reduces FPs caused by the RT without loss of sensitivity.
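The RANSAC step described above — fitting a global path model that ignores spurious local detections — can be illustrated with a minimal 2D line-fitting sketch. The data, tolerance, and straight-line model are made up for illustration; the paper's shape model is of course not a straight line.

```python
import random

def ransac_line(points, n_iter=200, tol=0.5, seed=0):
    """Fit y = a*x + b robustly: sample 2 points, count inliers,
    keep the model with the largest consensus set."""
    rng = random.Random(seed)
    best_model, best_inliers = None, []
    for _ in range(n_iter):
        (x1, y1), (x2, y2) = rng.sample(points, 2)
        if x1 == x2:
            continue                       # degenerate sample, skip
        a = (y2 - y1) / (x2 - x1)
        b = y1 - a * x1
        inliers = [(x, y) for x, y in points if abs(y - (a * x + b)) < tol]
        if len(inliers) > len(best_inliers):
            best_model, best_inliers = (a, b), inliers
    return best_model, best_inliers

# Synthetic "path detections": points on y = 2x + 1 plus gross outliers,
# standing in for per-slice RT candidates with a few false detections.
pts = [(x, 2 * x + 1) for x in range(20)] + [(5, 30), (7, -10), (12, 50)]
model, inliers = ransac_line(pts)
```

The key property exploited in the paper is the same one visible here: a least-squares fit would be dragged toward the outliers, while the consensus-set criterion recovers the underlying path exactly.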
A robust and efficient approach to detect 3D rectal tubes from CT colonography
International Nuclear Information System (INIS)
Yang Xiaoyun; Slabaugh, Greg
2011-01-01
Purpose: The rectal tube (RT) is a common source of false positives (FPs) in computer-aided detection (CAD) systems for CT colonography. A robust and efficient detection of the RT can improve CAD performance by eliminating such "obvious" FPs and increase radiologists' confidence in CAD. Methods: In this paper, we present a novel and robust bottom-up approach to detect the RT. Probabilistic models, trained using kernel density estimation on simple low-level features, are employed to rank and select the most likely RT candidate on each axial slice. Then, a shape model, robustly estimated using random sample consensus (RANSAC), infers the global RT path from the selected local detections. Subimages around the RT path are projected into a subspace formed from training subimages of the RT. A quadratic discriminant analysis (QDA) classifies each subimage as RT or non-RT based on its projection. Finally, a bottom-up clustering method is proposed to merge the classification predictions and locate the tip position of the RT. Results: Our method is validated using a diverse database, including data from five hospitals. On a testing set of 21 patients (42 volumes), 99.5% of annotated RT paths were successfully detected. Evaluated within CAD, 98.4% of FPs caused by the RT were detected and removed without any loss of sensitivity. Conclusions: The proposed method demonstrates a high detection rate of the RT path and, when tested in a CAD system, reduces FPs caused by the RT without loss of sensitivity.
International Nuclear Information System (INIS)
Azadeh, A.; Ghaderi, S.F.; Omrani, H.
2009-01-01
This paper presents a deterministic approach for performance assessment and optimization of power distribution units in Iran. The deterministic approach is composed of data envelopment analysis (DEA), principal component analysis (PCA) and correlation techniques. Seventeen electricity distribution units have been considered for the purpose of this study. Previous studies have generally used input-output DEA models for benchmarking and evaluation of electricity distribution units. However, this study considers an integrated deterministic DEA-PCA approach, since the DEA model should be verified and validated by a robust multivariate methodology such as PCA. Moreover, the DEA models are verified and validated by PCA, Spearman and Kendall's Tau correlation techniques, whereas previous studies lack such verification and validation. Also, both input- and output-oriented DEA models are used for sensitivity analysis of the input and output variables. Finally, this is the first study to present an integrated deterministic approach for assessment and optimization of power distribution units in Iran.
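The Spearman validation mentioned above reduces to comparing the rank orders produced by the two scoring methods. A minimal stdlib sketch, with entirely hypothetical efficiency scores for five distribution units (the real study uses seventeen units and DEA/PCA scores):

```python
def rank(values):
    """Average ranks (1-based), handling ties."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1                          # extend over a run of ties
        avg = (i + j) / 2 + 1
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman(x, y):
    """Spearman rho = Pearson correlation of the rank vectors."""
    rx, ry = rank(x), rank(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    num = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    den = (sum((a - mx) ** 2 for a in rx)
           * sum((b - my) ** 2 for b in ry)) ** 0.5
    return num / den

dea = [0.92, 0.85, 1.00, 0.77, 0.88]   # hypothetical DEA efficiency scores
pca = [0.61, 0.55, 0.70, 0.40, 0.58]   # hypothetical PCA composite scores
rho = spearman(dea, pca)
```

A rho near 1 means the two methods rank the units consistently, which is the sense in which PCA "validates" the DEA results here.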
A Collaborative Neurodynamic Approach to Multiple-Objective Distributed Optimization.
Yang, Shaofu; Liu, Qingshan; Wang, Jun
2018-04-01
This paper is concerned with multiple-objective distributed optimization. Based on objective weighting and decision space decomposition, a collaborative neurodynamic approach to multiobjective distributed optimization is presented. In the approach, a system of collaborative neural networks is developed to search for Pareto optimal solutions, where each neural network is associated with one objective function and given constraints. Sufficient conditions are derived for ascertaining the convergence to a Pareto optimal solution of the collaborative neurodynamic system. In addition, it is proved that each connected subsystem can generate a Pareto optimal solution when the communication topology is disconnected. Then, a switching-topology-based method is proposed to compute multiple Pareto optimal solutions for discretized approximation of Pareto front. Finally, simulation results are discussed to substantiate the performance of the collaborative neurodynamic approach. A portfolio selection application is also given.
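The objective-weighting idea can be sketched in miniature: each "network" minimizes one weighted combination of the objectives, and sweeping the weights traces an approximate Pareto front. The gradient-descent stand-in below replaces the paper's neurodynamic model, and the two convex objectives are purely illustrative.

```python
def pareto_points(n_weights=5, steps=500, lr=0.05):
    """Sweep scalarization weights; each run stands in for one
    collaborating solver minimizing its weighted objective."""
    f1 = lambda x: (x - 1.0) ** 2
    f2 = lambda x: (x + 1.0) ** 2
    grad = lambda x, w: w * 2 * (x - 1.0) + (1 - w) * 2 * (x + 1.0)
    front = []
    for i in range(n_weights):
        w = i / (n_weights - 1)            # weight on f1, from 0 to 1
        x = 0.0
        for _ in range(steps):
            x -= lr * grad(x, w)           # minimize w*f1 + (1-w)*f2
        front.append((f1(x), f2(x)))
    return front

front = pareto_points()
```

Because both objectives are convex here, every weighted minimum is Pareto optimal and the sweep yields points where improving `f1` necessarily worsens `f2`; the paper's switching-topology mechanism serves the same purpose of generating multiple such points.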
Ducted wind turbine optimization : A numerical approach
Dighe, V.V.; De Oliveira Andrade, G.L.; van Bussel, G.J.W.
2017-01-01
The practice of ducting wind turbines has shown a beneficial effect on the overall performance when compared to an open turbine of the same rotor diameter. However, an optimization study specifically for ducted wind turbines (DWTs) is missing or incomplete. This work focuses on a numerical
Russian Loanword Adaptation in Persian; Optimal Approach
Kambuziya, Aliye Kord Zafaranlu; Hashemi, Eftekhar Sadat
2011-01-01
In this paper we analyze some of the phonological rules of Russian loanword adaptation in Persian from the viewpoint of Optimality Theory (OT) (Prince & Smolensky, 1993/2004). This is the first study of the phonological processes in Russian loanword adaptation in Persian. Having gathered about 50 current Russian loanwords, we selected some of them to analyze. We…
An integer optimization algorithm for robust identification of non-linear gene regulatory networks
Directory of Open Access Journals (Sweden)
Chemmangattuvalappil Nishanth
2012-09-01
Full Text Available Abstract Background Reverse engineering gene networks and identifying regulatory interactions are integral to understanding cellular decision-making processes. Advances in high-throughput experimental techniques have initiated innovative data-driven analysis of gene regulatory networks. However, the inherent noise associated with biological systems requires numerous experimental replicates for reliable conclusions, and robust algorithms that directly exploit basic biological traits are few. Such algorithms are expected to be efficient in their performance and robust in their predictions. Results We have developed a network identification algorithm to accurately infer both the topology and strength of regulatory interactions from time-series gene expression data in the presence of significant experimental noise and non-linear behavior. In this novel formalism, we address data variability in biological systems by integrating network identification with the bootstrap resampling technique, hence predicting robust interactions from limited experimental replicates subjected to noise. Furthermore, we incorporate non-linearity in gene dynamics using the S-system formulation. The basic network identification formulation exploits the sparsity of biological interactions: the identification algorithm is formulated as an integer-programming problem by introducing binary variables for each network component, and the objective function minimizes the number of network connections subject to the constraint of maximal agreement between the experimental and predicted gene dynamics. The developed algorithm is validated using both in silico and experimental datasets. These studies show that the algorithm can accurately predict the topology and connection strength of the in silico networks, as quantified by high precision and recall, and a small discrepancy between the actual and predicted kinetic parameters.
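The bootstrap-resampling idea used above for robustness can be sketched generically: resample the experimental replicates with replacement, rerun the identification routine on each resample, and report for each candidate interaction the fraction of resamples in which it was selected. The "identification routine" below is a deliberately trivial stand-in (a mean-strength threshold, assumed 0.5), not the paper's integer program, and the replicate data are fabricated.

```python
import random

def bootstrap_support(replicates, fit, n_boot=200, seed=0):
    """Fraction of bootstrap resamples in which each candidate
    interaction is selected by the identification routine `fit`."""
    rng = random.Random(seed)
    counts = {}
    for _ in range(n_boot):
        sample = [rng.choice(replicates) for _ in replicates]
        for edge in fit(sample):
            counts[edge] = counts.get(edge, 0) + 1
    return {e: c / n_boot for e, c in counts.items()}

def toy_fit(sample):
    """Keep an edge when its mean measured strength across the
    resampled replicates exceeds an assumed threshold of 0.5."""
    edges = sample[0].keys()
    return [e for e in edges
            if sum(rep[e] for rep in sample) / len(sample) > 0.5]

# Each replicate reports a noisy strength for two candidate edges:
# one consistently strong, one consistently weak.
replicates = [{"A->B": 0.9 + 0.05 * i, "B->C": 0.1 * i} for i in range(5)]
support = bootstrap_support(replicates, toy_fit)
```

Interactions with support near 1 are the "robust" ones the paper reports; interactions that appear only under some resamples are discounted as noise artifacts.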
Optimization of nonlinear controller with an enhanced biogeography approach
Directory of Open Access Journals (Sweden)
Mohammed Salem
2014-07-01
Full Text Available This paper is dedicated to the optimization of nonlinear controllers based on an enhanced Biogeography-Based Optimization (BBO) approach. The BBO is combined with a predator-prey model in which several predators are used, and a modified migration operator is introduced to increase diversification along the optimization process, so as to avoid local optima and reach the optimal solution quickly. The proposed approach is used to tune the gains of a PID controller for nonlinear systems. Simulations are carried out on a mass-spring-damper system and an inverted pendulum, and the approach gives remarkable results when compared to a genetic algorithm and standard BBO.
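A bare-bones sketch of the BBO migration idea applied to gain tuning — without the predator-prey extension the paper adds — might look as follows. The plant (a first-order linear system under PI control), the rates, and the bounds are all assumptions for illustration.

```python
import random

def pi_cost(gains, dt=0.01, horizon=5.0):
    """Integral of squared error for a PI loop on the toy plant
    dx/dt = -x + u, tracking a unit step (Euler integration)."""
    kp, ki = gains
    x = integ = cost = 0.0
    for _ in range(int(horizon / dt)):
        e = 1.0 - x
        integ += e * dt
        u = kp * e + ki * integ
        x += dt * (-x + u)
        cost += e * e * dt
    return cost

def bbo(obj, bounds, pop=20, gens=60, seed=2):
    """Minimal biogeography-based optimization: fitter habitats
    emigrate features to less fit ones; occasional random mutation."""
    rng = random.Random(seed)
    habitats = [[rng.uniform(lo, hi) for lo, hi in bounds]
                for _ in range(pop)]
    for _ in range(gens):
        habitats.sort(key=obj)                   # best habitat first
        for i in range(1, pop):                  # index 0 kept as elite
            lam = i / (pop - 1)                  # immigration rate grows with rank
            for d, (lo, hi) in enumerate(bounds):
                if rng.random() < lam:
                    habitats[i][d] = habitats[rng.randrange(i)][d]
                if rng.random() < 0.05:          # mutation
                    habitats[i][d] = rng.uniform(lo, hi)
    return min(habitats, key=obj)

best = bbo(pi_cost, [(0.1, 10.0), (0.1, 10.0)])
```

The migration operator is what distinguishes BBO from a plain genetic algorithm: poor habitats probabilistically *import* individual features from good ones rather than being replaced wholesale, which is also the operator the paper modifies to improve diversification.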
Experimental Modeling of Monolithic Resistors for Silicon ICs with a Robust Optimizer-Driving Scheme
Directory of Open Access Journals (Sweden)
Philippe Leduc
2002-06-01
Full Text Available Today, an exhaustive library of models describing the electrical behavior of integrated passive components in the radio-frequency range is essential for the simulation and optimization of complex circuits. In this work, a preliminary study has been done on tantalum nitride (TaN) resistors integrated on silicon, which leads to a single p-type lumped-element circuit. An efficient extraction technique is presented to provide a computer-driven optimizer with relevant initial model parameter values (the "guess-timate"). The results show the uniqueness, in most cases, of the lumped-element determination, which leads to a precise simulation of self-resonant frequencies.
Chorel, Marine; Lanternier, Thomas; Lavastre, Éric; Bonod, Nicolas; Bousquet, Bruno; Néauport, Jérôme
2018-04-30
We report on a numerical optimization of the laser-induced damage threshold (LIDT) of multi-dielectric high-reflection mirrors in the sub-picosecond regime. We highlight the interplay between the electric-field distribution, the refractive index and the intrinsic LIDT of the materials on the overall LIDT of the multilayer. We describe an optimization method for the multilayer that minimizes the field enhancement in high-refractive-index materials while preserving near-perfect reflectivity. This method yields a significant improvement of the damage resistance, since a maximum increase of 40% can be achieved in the overall LIDT of the multilayer.
CLUSTER ENERGY OPTIMIZATION: A THEORETICAL APPROACH
Vikram Yadav; G. Sahoo
2013-01-01
The optimization of energy consumption in the cloud computing environment concerns how to use various energy conservation strategies to efficiently allocate resources. The demand for different resources in a cloud environment is unpredictable, and load management in the cloud is needed in order to provide QoS. Jobs at an over-loaded physical machine are shifted to an under-loaded physical machine, and idle machines are turned off in order to provide a green cloud. For energy opt...
Design Buildings Optimally: A Lifecycle Assessment Approach
Hosny, Ossama
2013-01-01
This paper structures a generic framework to support optimum design for multiple buildings in a desert environment. The framework targets an environmentally friendly design with minimum lifecycle cost, using Genetic Algorithms (GAs). The GAs function through a set of success measures which evaluate the design, formulate a proper objective, and reflect possible tangible/intangible constraints. The framework optimizes the design and categorizes it under a certain environmental category at minimum Life Cycle Cost (LCC). It consists of three main modules: (1) a custom Building Information Model (BIM) for desert buildings with a compatibility checker as a central interactive database; (2) a system evaluator module to evaluate the proposed success measures for the design; and (3) a GA optimization module to ensure optimum design. The framework functions at three levels: building components, the integrated building, and multiple buildings. At the component level, the design team should be able to select components in a designed sequence to ensure compatibility among various components, while at the building level the team can relatively locate and orient each individual building. Finally, at the multi-building (compound) level the whole design can be evaluated using success measures of natural light, site capacity, shading impact on natural lighting, thermal change, visual access and energy saving. Through genetic algorithms, the framework optimizes the design by determining proper types of building components and relative building locations and orientations which ensure categorizing the design under a specific category or meeting certain preferences at minimum lifecycle cost.
Optimizing Linear Functions with Randomized Search Heuristics - The Robustness of Mutation
DEFF Research Database (Denmark)
Witt, Carsten
2012-01-01
The analysis of randomized search heuristics on classes of functions is fundamental for the understanding of the underlying stochastic process and the development of suitable proof techniques. Recently, remarkable progress has been made in bounding the expected optimization time of the simple (1...
Robust optimal control of material flows in demand-driven supply networks
Laumanns, M.; Lefeber, A.A.J.
2006-01-01
We develop a model based on stochastic discrete-time controlled dynamical systems in order to derive optimal policies for controlling the material flow in supply networks. Each node in the network is described as a transducer such that the dynamics of the material and information flows within the entire
Baran, Derya; Gasparini, Nicola; Wadsworth, Andrew; Tan, Ching Hong; Wehbe, Nimer; Song, Xin; Hamid, Zeinab; Zhang, Weimin; Neophytou, Marios; Kirchartz, Thomas; Brabec, Christoph J; Durrant, James R; McCulloch, Iain
2018-05-25
Nonfullerene solar cells have increased their efficiencies up to 13%, yet quantum efficiencies are still limited to 80%. Here we report efficient nonfullerene solar cells with quantum efficiencies approaching unity. This is achieved with overlapping absorption bands of donor and acceptor that increases the photon absorption strength in the range from about 570 to 700 nm, thus, almost all incident photons are absorbed in the active layer. The charges generated are found to dissociate with negligible geminate recombination losses resulting in a short-circuit current density of 20 mA cm-2 along with open-circuit voltages >1 V, which is remarkable for a 1.6 eV bandgap system. Most importantly, the unique nano-morphology of the donor:acceptor blend results in a substantially improved stability under illumination. Understanding the efficient charge separation in nonfullerene acceptors can pave the way to robust and recombination-free organic solar cells.
Stabilization and regulation of nonlinear systems a robust and adaptive approach
Chen, Zhiyong
2015-01-01
The core of this textbook is a systematic and self-contained treatment of the nonlinear stabilization and output regulation problems. Its coverage embraces both fundamental concepts and advanced research outcomes and includes many numerical and practical examples. Several classes of important uncertain nonlinear systems are discussed. The state-of-the art solution presented uses robust and adaptive control design ideas in an integrated approach which demonstrates connections between global stabilization and global output regulation allowing both to be treated as stabilization problems. Stabilization and Regulation of Nonlinear Systems takes advantage of rich new results to give students up-to-date instruction in the central design problems of nonlinear control, problems which are a driving force behind the furtherance of modern control theory and its application. The diversity of systems in which stabilization and output regulation become significant concerns in the mathematical formulation of practical contr...
Baran, Derya; Gasparini, Nicola; Wadsworth, Andrew; Tan, Ching Hong; Wehbe, Nimer; Song, Xin; Hamid, Zeinab; Zhang, Weimin; Neophytou, Marios; Kirchartz, Thomas; Brabec, Christoph J.; Durrant, James R.; McCulloch, Iain
2018-01-01
Nonfullerene solar cells have increased their efficiencies up to 13%, yet quantum efficiencies are still limited to 80%. Here we report efficient nonfullerene solar cells with quantum efficiencies approaching unity. This is achieved with overlapping absorption bands of donor and acceptor that increases the photon absorption strength in the range from about 570 to 700 nm, thus, almost all incident photons are absorbed in the active layer. The charges generated are found to dissociate with negligible geminate recombination losses resulting in a short-circuit current density of 20 mA cm-2 along with open-circuit voltages >1 V, which is remarkable for a 1.6 eV bandgap system. Most importantly, the unique nano-morphology of the donor:acceptor blend results in a substantially improved stability under illumination. Understanding the efficient charge separation in nonfullerene acceptors can pave the way to robust and recombination-free organic solar cells.
Baran, Derya
2018-05-21
Nonfullerene solar cells have increased their efficiencies up to 13%, yet quantum efficiencies are still limited to 80%. Here we report efficient nonfullerene solar cells with quantum efficiencies approaching unity. This is achieved with overlapping absorption bands of donor and acceptor that increases the photon absorption strength in the range from about 570 to 700 nm, thus, almost all incident photons are absorbed in the active layer. The charges generated are found to dissociate with negligible geminate recombination losses resulting in a short-circuit current density of 20 mA cm-2 along with open-circuit voltages >1 V, which is remarkable for a 1.6 eV bandgap system. Most importantly, the unique nano-morphology of the donor:acceptor blend results in a substantially improved stability under illumination. Understanding the efficient charge separation in nonfullerene acceptors can pave the way to robust and recombination-free organic solar cells.
Interval Analysis Approach to Prototype the Robust Control of the Laboratory Overhead Crane
Smoczek, J.; Szpytko, J.; Hyla, P.
2014-07-01
The paper describes the software-hardware equipment and the control-measurement solutions elaborated to prototype a laboratory-scale overhead crane control system. A novel approach to crane dynamic system modelling and fuzzy robust control scheme design is presented. An iterative procedure for designing a fuzzy scheduling control scheme is developed based on interval analysis of the discrete-time closed-loop system's characteristic polynomial coefficients in the presence of variations in rope length and payload mass. The procedure selects the minimum set of operating points, corresponding to the midpoints of membership functions, at which the linear controllers are determined through desired pole assignment. The experimental results obtained on the laboratory stand are presented.
Interval Analysis Approach to Prototype the Robust Control of the Laboratory Overhead Crane
International Nuclear Information System (INIS)
Smoczek, J; Szpytko, J; Hyla, P
2014-01-01
The paper describes the software-hardware equipment and the control-measurement solutions elaborated to prototype a laboratory-scale overhead crane control system. A novel approach to crane dynamic system modelling and fuzzy robust control scheme design is presented. An iterative procedure for designing a fuzzy scheduling control scheme is developed based on interval analysis of the discrete-time closed-loop system's characteristic polynomial coefficients in the presence of variations in rope length and payload mass. The procedure selects the minimum set of operating points, corresponding to the midpoints of membership functions, at which the linear controllers are determined through desired pole assignment. The experimental results obtained on the laboratory stand are presented.
A robust approach for analysing dispersion of elastic waves in an orthotropic cylindrical shell
Kaplunov, J.; Nobili, A.
2017-08-01
Dispersion of elastic waves in a thin orthotropic cylindrical shell is considered, within the framework of classical 2D Kirchhoff-Love theory. In contrast to direct multi-parametric analysis of the lowest propagating modes, an alternative robust approach is proposed that simply requires evaluation of the evanescent modes (quasi-static edge effect), which, at leading order, do not depend on vibration frequency. A shortened dispersion relation for the propagating modes is then derived by polynomial division and its accuracy is numerically tested against the full Kirchhoff-Love dispersion relation. It is shown that the same shortened relation may be also obtained from a refined dynamic version of the semi-membrane theory for cylindrical shells. The presented results may be relevant for modelling various types of nanotubes which, according to the latest experimental findings, possess strong material anisotropy.
A hybrid approach for biobjective optimization
DEFF Research Database (Denmark)
Stidsen, Thomas Jacob Riis; Andersen, Kim Allan
2018-01-01
to single-objective problems is that no standard multiobjective solvers exist and specialized algorithms need to be programmed from scratch. In this article we present a hybrid approach, which operates both in decision space and in objective space. The approach enables massive efficient parallelization and can be used for a wide variety of biobjective Mixed Integer Programming models. We test the approach on the biobjective extension of the classic traveling salesman problem, on the standard datasets, and determine the full set of nondominated points. This has only been done once before (Florios and Mavrotas...
Chen, Quan; Li, Yaoyu; Seem, John E
2015-09-01
This paper presents a self-optimizing robust control scheme that can maximize the power generation for a variable-speed wind turbine with a Doubly-Fed Induction Generator (DFIG) operated in Region 2. A dual-loop control structure is proposed to synergize the conversion from aerodynamic power to rotor power and the conversion from rotor power to electrical power. The outer loop is an Extremum Seeking Control (ESC) based generator torque regulation via electric power feedback: the ESC searches for the optimal generator torque constant to maximize the rotor power without wind measurement or accurate knowledge of the power map. The inner loop is a vector-control based scheme that both regulates the generator torque requested by the ESC and maximizes the conversion from rotor power to grid power. An H∞ controller is synthesized for the inner loop, with performance specifications defined based upon the spectrum of the rotor power obtained by the ESC; the controller is also designed to be robust against variations of some generator parameters. The proposed control strategy is validated via a simulation study based on the synergy of several software packages, including TurbSim and FAST developed by NREL, Simulink and SimPowerSystems.
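The outer-loop ESC can be sketched as a classic perturbation-demodulation scheme: dither the torque-constant estimate sinusoidally, strip the slowly varying (DC) component of the measured power, and integrate the product of the dither with what remains; averaged over a dither period, this climbs the gradient of the unknown power map. The static map, gains, and dither frequency below are illustrative assumptions, not the paper's values.

```python
import math

def extremum_seek(power_map, theta0=0.0, amp=0.2, omega=5.0,
                  gain=2.0, dt=0.02, steps=4000):
    """Perturbation-based extremum seeking on a static map:
    theta_hat drifts toward the maximizer of power_map."""
    theta_hat = theta0
    dc = power_map(theta0)                    # initial DC estimate
    for n in range(steps):
        dither = amp * math.sin(omega * n * dt)
        power = power_map(theta_hat + dither)
        dc += dt * 0.5 * (power - dc)         # slow LPF, cutoff << omega
        theta_hat += dt * gain * dither * (power - dc)
    return theta_hat

# Hypothetical static Region-2 power map with its maximum at theta = 2.
theta = extremum_seek(lambda th: 1.0 - (th - 2.0) ** 2)
```

The averaging argument behind the scheme: the demodulated signal `dither * (power - dc)` has mean proportional to the local slope of the map, so no wind measurement or explicit power-map model is needed — exactly the property the abstract claims for the outer loop.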
Li, Cheng-Hsien
2012-01-01
Of the several measures of optimism presently available in the literature, the Life Orientation Test (LOT; Scheier & Carver, 1985) has been the most widely used in empirical research. This article explores, confirms, and cross-validates the factor structure of the Chinese version of the LOT with ordinal data by using robust weighted least…
Energy Technology Data Exchange (ETDEWEB)
Baker, Lewis A.; Habershon, Scott, E-mail: S.Habershon@warwick.ac.uk [Department of Chemistry and Centre for Scientific Computing, University of Warwick, Coventry CV4 7AL (United Kingdom)
2015-09-14
Pigment-protein complexes (PPCs) play a central role in facilitating excitation energy transfer (EET) from light-harvesting antenna complexes to reaction centres in photosynthetic systems; understanding molecular organisation in these biological networks is key to developing better artificial light-harvesting systems. In this article, we combine quantum-mechanical simulations and a network-based picture of transport to investigate how chromophore organization and protein environment in PPCs impacts on EET efficiency and robustness. In a prototypical PPC model, the Fenna-Matthews-Olson (FMO) complex, we consider the impact on EET efficiency of both disrupting the chromophore network and changing the influence of (local and global) environmental dephasing. Surprisingly, we find a large degree of resilience to changes in both chromophore network and protein environmental dephasing, the extent of which is greater than previously observed; for example, FMO maintains EET when 50% of the constituent chromophores are removed, or when environmental dephasing fluctuations vary over two orders-of-magnitude relative to the in vivo system. We also highlight the fact that the influence of local dephasing can be strongly dependent on the characteristics of the EET network and the initial excitation; for example, initial excitations resulting in rapid coherent decay are generally insensitive to the environment, whereas the incoherent population decay observed following excitation at weakly coupled chromophores demonstrates a more pronounced dependence on dephasing rate as a result of the greater possibility of local exciton trapping. Finally, we show that the FMO electronic Hamiltonian is not particularly optimised for EET; instead, it is just one of many possible chromophore organisations which demonstrate a good level of EET transport efficiency following excitation at different chromophores. Overall, these robustness and efficiency characteristics are attributed to the highly
International Nuclear Information System (INIS)
Baker, Lewis A.; Habershon, Scott
2015-01-01
Pigment-protein complexes (PPCs) play a central role in facilitating excitation energy transfer (EET) from light-harvesting antenna complexes to reaction centres in photosynthetic systems; understanding molecular organisation in these biological networks is key to developing better artificial light-harvesting systems. In this article, we combine quantum-mechanical simulations and a network-based picture of transport to investigate how chromophore organization and protein environment in PPCs impacts on EET efficiency and robustness. In a prototypical PPC model, the Fenna-Matthews-Olson (FMO) complex, we consider the impact on EET efficiency of both disrupting the chromophore network and changing the influence of (local and global) environmental dephasing. Surprisingly, we find a large degree of resilience to changes in both chromophore network and protein environmental dephasing, the extent of which is greater than previously observed; for example, FMO maintains EET when 50% of the constituent chromophores are removed, or when environmental dephasing fluctuations vary over two orders-of-magnitude relative to the in vivo system. We also highlight the fact that the influence of local dephasing can be strongly dependent on the characteristics of the EET network and the initial excitation; for example, initial excitations resulting in rapid coherent decay are generally insensitive to the environment, whereas the incoherent population decay observed following excitation at weakly coupled chromophores demonstrates a more pronounced dependence on dephasing rate as a result of the greater possibility of local exciton trapping. Finally, we show that the FMO electronic Hamiltonian is not particularly optimised for EET; instead, it is just one of many possible chromophore organisations which demonstrate a good level of EET transport efficiency following excitation at different chromophores. Overall, these robustness and efficiency characteristics are attributed to the highly
Energy Technology Data Exchange (ETDEWEB)
Ren, Shangjie [Tianjin Key Laboratory of Process Measurement and Control, School of Electrical Engineering and Automation, Tianjin University, Tianjin (China); Department of Radiation Oncology, Stanford University School of Medicine, Palo Alto, California (United States); Hara, Wendy; Wang, Lei; Buyyounouski, Mark K.; Le, Quynh-Thu; Xing, Lei [Department of Radiation Oncology, Stanford University School of Medicine, Palo Alto, California (United States); Li, Ruijiang, E-mail: rli2@stanford.edu [Department of Radiation Oncology, Stanford University School of Medicine, Palo Alto, California (United States)
2017-03-15
Purpose: To develop a reliable method to estimate electron density based on anatomic magnetic resonance imaging (MRI) of the brain. Methods and Materials: We proposed a unifying multi-atlas approach for electron density estimation based on standard T1- and T2-weighted MRI. First, a composite atlas was constructed through a voxelwise matching process using multiple atlases, with the goal of mitigating effects of inherent anatomic variations between patients. Next we computed for each voxel 2 kinds of conditional probabilities: (1) electron density given its image intensity on T1- and T2-weighted MR images; and (2) electron density given its spatial location in a reference anatomy, obtained by deformable image registration. These were combined into a unifying posterior probability density function using the Bayesian formalism, which provided the optimal estimates for electron density. We evaluated the method on 10 patients using leave-one-patient-out cross-validation. Receiver operating characteristic analyses for detecting different tissue types were performed. Results: The proposed method significantly reduced the errors in electron density estimation, with a mean absolute Hounsfield unit error of 119, compared with 140 and 144 (P<.0001) using conventional T1-weighted intensity and geometry-based approaches, respectively. For detection of bony anatomy, the proposed method achieved an 89% area under the curve, 86% sensitivity, 88% specificity, and 90% accuracy, which improved upon intensity and geometry-based approaches (area under the curve: 79% and 80%, respectively). Conclusion: The proposed multi-atlas approach provides robust electron density estimation and bone detection based on anatomic MRI. If validated on a larger population, our work could enable the use of MRI as a primary modality for radiation treatment planning.
Robust Pedestrian Tracking and Recognition from FLIR Video: A Unified Approach via Sparse Coding
Directory of Open Access Journals (Sweden)
Xin Li
2014-06-01
Full Text Available Sparse coding is an emerging method that has been successfully applied to both robust object tracking and recognition in the vision literature. In this paper, we propose a sparse coding-based approach toward joint object tracking-and-recognition and explore its potential in the analysis of forward-looking infrared (FLIR) video to support nighttime machine vision systems. A key technical contribution of this work is to unify existing sparse coding-based approaches toward tracking and recognition under the same framework, so that they can benefit from each other in a closed loop. On the one hand, tracking the same object through temporal frames allows us to achieve improved recognition performance through dynamic updating of the template/dictionary and combining multiple recognition results; on the other hand, the recognition of individual objects facilitates the tracking of multiple objects (i.e., walking pedestrians), especially in the presence of occlusion within a crowded environment. We report experimental results on both the CASIA Pedestrian Database and our own collected FLIR video database to demonstrate the effectiveness of the proposed joint tracking-and-recognition approach.
Multiobjective Optimization Methodology: A Jumping Gene Approach
Tang, KS
2012-01-01
Complex design problems are often governed by a number of performance merits. These merits gauge how good the design will be, but they can conflict with one another and with the performance requirements that must be met; the challenge lies in reconciling these demands. This book introduces a newly developed jumping gene algorithm, designed to address the multi-objective optimization problem and to supply an adequate solution with reasonable speed. The text presents various multi-objective optimization techniques and provides the technical know-how for obtaining trade-off solutions between solution spread and convergence.
Robust optimization of the output voltage of nanogenerators by statistical design of experiments
Song, Jinhui; Xie, Huizhi; Wu, Wenzhuo; Roshan Joseph, V.; Jeff Wu, C. F.; Wang, Zhong Lin
2010-01-01
Nanogenerators were first demonstrated by deflecting aligned ZnO nanowires using a conductive atomic force microscopy (AFM) tip. The output of a nanogenerator is affected by three parameters: tip normal force, tip scanning speed, and tip abrasion. In this work, systematic experimental studies have been carried out to examine the combined effects of these three parameters on the output, using statistical design of experiments. A statistical model has been built to analyze the data and predict the optimal parameter settings. For an AFM tip of cone angle 70° coated with Pt, and ZnO nanowires with a diameter of 50 nm and lengths of 600 nm to 1 μm, the optimized parameters for the nanogenerator were found to be a normal force of 137 nN and scanning speed of 40 μm/s, rather than the conventional settings of 120 nN for the normal force and 30 μm/s for the scanning speed. A nanogenerator with the optimized settings has three times the average output voltage of one with the conventional settings. © 2010 Tsinghua University Press and Springer-Verlag Berlin Heidelberg.
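The design-of-experiments workflow here (fit a second-order response surface to factorial measurements, then locate its stationary point) can be illustrated with synthetic data built around the reported optimum. The quadratic surface below is invented for illustration and is not the authors' fitted model.

```python
import numpy as np

# Synthetic voltage response over normal force F (nN) and speed V (um/s),
# constructed so that its true optimum sits at (137, 40); illustrative only.
F, V = np.meshgrid(np.linspace(100, 160, 5), np.linspace(20, 50, 5))
F, V = F.ravel(), V.ravel()
volts = 8 - 0.002 * (F - 137) ** 2 - 0.01 * (V - 40) ** 2

# Center factors for conditioning, then fit a full quadratic model by OLS.
f, v = F - 130.0, V - 35.0
X = np.column_stack([np.ones_like(f), f, v, f * f, v * v, f * v])
beta, *_ = np.linalg.lstsq(X, volts, rcond=None)

# Stationary point: solve grad(model) = 0 for (f, v), then uncenter.
A = np.array([[2 * beta[3], beta[5]], [beta[5], 2 * beta[4]]])
f_opt, v_opt = np.linalg.solve(A, -np.array([beta[1], beta[2]]))
F_opt, V_opt = f_opt + 130.0, v_opt + 35.0
```

The real study additionally models tip abrasion as a third factor and screens for significant terms before optimizing.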
International Nuclear Information System (INIS)
Santos Coelho, Leandro dos
2009-01-01
Despite their popularity, the tuning of proportional-integral-derivative (PID) controllers remains a challenge for researchers and plant operators. Various controller tuning methodologies have been proposed in the literature, such as auto-tuning, self-tuning, pattern recognition, artificial intelligence, and optimization methods. Chaotic optimization algorithms, an emergent class of global optimization methods, have attracted much attention in engineering applications. With their easy implementation, short execution time, and robust mechanisms for escaping local optima, they are a promising tool for engineering applications. In this paper, a tuning method for determining the parameters of PID control for an automatic voltage regulator (AVR) system using a chaotic optimization approach based on the Lozi map is proposed. Since chaotic mappings enjoy certainty, ergodicity, and stochastic-like properties, the proposed method uses Lozi-map chaotic sequences, which increase its convergence rate and resulting precision. Simulation results are promising and show the effectiveness of the proposed approach. Numerical simulations of the proposed PID control of an AVR system, for nominal system parameters and a step reference voltage input, demonstrate the good performance of chaotic optimization.
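The Lozi map named in the abstract is the piecewise-linear system x' = 1 - a|x| + y, y' = b*x. A minimal sketch of chaotic optimization (only the first, "carrier wave" sampling phase, on a toy one-parameter cost rather than the AVR/PID objective) looks like:

```python
import numpy as np

def lozi_sequence(n, a=1.7, b=0.5, x0=0.1, y0=0.1):
    """Chaotic sequence from the Lozi map x' = 1 - a*|x| + y, y' = b*x,
    min-max rescaled to [0, 1] (classic parameter values a=1.7, b=0.5)."""
    xs = np.empty(n)
    x, y = x0, y0
    for i in range(n):
        x, y = 1.0 - a * abs(x) + y, b * x
        xs[i] = x
    return (xs - xs.min()) / (xs.max() - xs.min())

def chaotic_search(cost, lower, upper, n=2000):
    """Map the chaotic sequence into the search interval and keep the best
    candidate; full schemes add a second, local refinement phase."""
    cand = lower + (upper - lower) * lozi_sequence(n)
    costs = [cost(c) for c in cand]
    return cand[int(np.argmin(costs))]

# Toy one-parameter tuning surrogate (not the AVR/PID objective).
k_best = chaotic_search(lambda k: (k - 2.5) ** 2, 0.0, 5.0)
```

The chaotic sequence plays the role a pseudorandom sample would play in random search, with the ergodicity of the map providing coverage of the interval.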
Robust control and linear parameter varying approaches application to vehicle dynamics
Gaspar, Peter; Bokor, József
2013-01-01
Vehicles are complex systems (nonlinear, multi-variable) in which an abundance of embedded controllers should ensure better safety. This book aims at emphasizing the interest and potential of Linear Parameter Varying methods within the framework of vehicle dynamics, e.g.: · a proposed control-oriented model, complex enough to handle some system nonlinearities but still simple enough for control or observer design; · taking into account the adaptability of the vehicle's response to driving situations, to driver requests and/or to road excitations; · managing interactions between various actuators to optimize the dynamic behavior of vehicles. This book results from the 32nd International Summer School in Automatic Control, held in Grenoble, France, in September 2011, where recent methods (based on robust control and LPV techniques) applied to the control of vehicle dynamics were presented. After some theoretical background and a view on so...
Dynamic programming approach to optimization of approximate decision rules
Amin, Talha M.; Chikalov, Igor; Moshkov, Mikhail; Zielosko, Beata
2013-01-01
This paper is devoted to the study of an extension of the dynamic programming approach that allows sequential optimization of approximate decision rules relative to length and coverage. We introduce an uncertainty measure R(T) which is the number
Dynamic Programming Approach for Exact Decision Rule Optimization
Amin, Talha M.; Chikalov, Igor; Moshkov, Mikhail; Zielosko, Beata
2013-01-01
This chapter is devoted to the study of an extension of the dynamic programming approach that allows sequential optimization of exact decision rules relative to length and coverage. It also contains results of experiments with decision tables from
Active Distribution Grid Management based on Robust AC Optimal Power Flow
DEFF Research Database (Denmark)
Soares, Tiago; Bessa, Richard J.; Pinson, Pierre
2017-01-01
Further integration of distributed renewable energy sources in distribution systems requires a paradigm change in grid management by the distribution system operators (DSO). DSOs are currently moving to an operational planning approach based on activating flexibility from distributed energy resou...
Optimization approaches to volumetric modulated arc therapy planning
Energy Technology Data Exchange (ETDEWEB)
Unkelbach, Jan, E-mail: junkelbach@mgh.harvard.edu; Bortfeld, Thomas; Craft, David [Department of Radiation Oncology, Massachusetts General Hospital and Harvard Medical School, Boston, Massachusetts 02114 (United States); Alber, Markus [Department of Medical Physics and Department of Radiation Oncology, Aarhus University Hospital, Aarhus C DK-8000 (Denmark); Bangert, Mark [Department of Medical Physics in Radiation Oncology, German Cancer Research Center, Heidelberg D-69120 (Germany); Bokrantz, Rasmus [RaySearch Laboratories, Stockholm SE-111 34 (Sweden); Chen, Danny [Department of Computer Science and Engineering, University of Notre Dame, Notre Dame, Indiana 46556 (United States); Li, Ruijiang; Xing, Lei [Department of Radiation Oncology, Stanford University, Stanford, California 94305 (United States); Men, Chunhua [Department of Research, Elekta, Maryland Heights, Missouri 63043 (United States); Nill, Simeon [Joint Department of Physics at The Institute of Cancer Research and The Royal Marsden NHS Foundation Trust, London SM2 5NG (United Kingdom); Papp, Dávid [Department of Mathematics, North Carolina State University, Raleigh, North Carolina 27695 (United States); Romeijn, Edwin [H. Milton Stewart School of Industrial and Systems Engineering, Georgia Institute of Technology, Atlanta, Georgia 30332 (United States); Salari, Ehsan [Department of Industrial and Manufacturing Engineering, Wichita State University, Wichita, Kansas 67260 (United States)
2015-03-15
Volumetric modulated arc therapy (VMAT) has found widespread clinical application in recent years. A large number of treatment planning studies have evaluated the potential for VMAT for different disease sites based on the currently available commercial implementations of VMAT planning. In contrast, literature on the underlying mathematical optimization methods used in treatment planning is scarce. VMAT planning represents a challenging large scale optimization problem. In contrast to fluence map optimization in intensity-modulated radiotherapy planning for static beams, VMAT planning represents a nonconvex optimization problem. In this paper, the authors review the state-of-the-art in VMAT planning from an algorithmic perspective. Different approaches to VMAT optimization, including arc sequencing methods, extensions of direct aperture optimization, and direct optimization of leaf trajectories are reviewed. Their advantages and limitations are outlined and recommendations for improvements are discussed.
Zhang, Huaguang; Feng, Tao; Yang, Guang-Hong; Liang, Hongjing
2015-07-01
In this paper, the inverse optimal approach is employed to design distributed consensus protocols that guarantee consensus and global optimality with respect to some quadratic performance indexes for identical linear systems on a directed graph. The inverse optimal theory is developed by introducing the notion of partial stability. As a result, the necessary and sufficient conditions for inverse optimality are proposed. By means of the developed inverse optimal theory, the necessary and sufficient conditions are established for globally optimal cooperative control problems on directed graphs. Basic optimal cooperative design procedures are given based on asymptotic properties of the resulting optimal distributed consensus protocols, and the multiagent systems can reach desired consensus performance (convergence rate and damping rate) asymptotically. Finally, two examples are given to illustrate the effectiveness of the proposed methods.
Directory of Open Access Journals (Sweden)
Chunlai Li
2017-07-01
Full Text Available This paper proposes an energy and reserve joint dispatch model based on a robust optimization approach in real-time electricity markets, considering wind power generation uncertainties as well as zonal reserve constraints under both normal and N-1 contingency conditions. In the proposed model, the operating reserves are classified as regulating reserve and spinning reserve according to the response performance. More specifically, the regulating reserve is usually utilized to reduce the gap due to forecasting errors, while the spinning reserve is commonly adopted to enhance the ability for N-1 contingencies. Since the transmission bottlenecks may inhibit the deliverability of reserve, the zonal placement of spinning reserve is considered in this paper to improve the reserve deliverability under the contingencies. Numerical results on the IEEE 118-bus test system show the effectiveness of the proposed model.
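The core robust-optimization mechanics can be reduced to one line of reasoning: a constraint that must hold for every realization in a box uncertainty set binds at the worst case, so the infinite family of constraints collapses to a single deterministic one. The toy single-bus numbers below are hypothetical and ignore the paper's zonal and N-1 structure.

```python
# Hypothetical data: energy and reserve costs ($/MWh), demand (MW), wind
# forecast and its maximum forecasting error (the box uncertainty set).
c_g, c_r = 20.0, 5.0
d, w_nom, dw = 100.0, 30.0, 10.0

# "g + w + r >= d for all w in [w_nom - dw, w_nom + dw]" reduces to the
# single worst case w = w_nom - dw, a standard robust reformulation.
g = d - w_nom              # energy scheduled against the wind forecast
r = dw                     # regulating reserve sized to the worst-case error
worst_case_ok = g + (w_nom - dw) + r >= d
cost = c_g * g + c_r * r   # cheaper reserve covers the deviation band
```

The full model applies the same worst-case logic inside an optimization over many units, with separate regulating/spinning reserve products and zonal deliverability constraints.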
Chu, Hui-May; Ette, Ene I
2005-09-02
This study was performed to develop a new nonparametric approach for the estimation of a robust tissue-to-plasma ratio from extremely sparsely sampled paired data (i.e., one sample each from plasma and tissue per subject). The tissue-to-plasma ratio was estimated from paired/unpaired experimental data using the independent time points approach, area under the curve (AUC) values calculated with the naïve data averaging approach, and AUC values calculated using sampling-based approaches (e.g., the pseudoprofile-based bootstrap [PpbB] approach and the random sampling approach [our proposed approach]). The random sampling approach involves the use of a 2-phase algorithm. The convergence of the sampling/resampling approaches was investigated, as well as the robustness of the estimates produced by the different approaches. To evaluate the latter, new data sets were generated by introducing outlier(s) into the real data set: one or two concentration values were inflated by 10% to 40% from their original values to produce the outliers. Tissue-to-plasma ratios computed using the independent time points approach varied between 0 and 50 across time points. The ratio obtained from AUC values acquired using the naïve data averaging approach was not associated with any measure of uncertainty or variability. Calculating the ratio without regard to pairing yielded poorer estimates. The random sampling and pseudoprofile-based bootstrap approaches yielded tissue-to-plasma ratios with uncertainty and variability. However, the random sampling approach, because of the 2-phase nature of its algorithm, yielded more robust estimates and required fewer replications. Therefore, a 2-phase random sampling approach is proposed for the robust estimation of tissue-to-plasma ratio from extremely sparsely sampled data.
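A sampling-based estimate of the tissue-to-plasma ratio can be sketched as follows: resample subjects with replacement, average the resampled profiles, and form the ratio of trapezoidal AUCs per replicate, yielding both a point estimate and a variability measure. The two-subject data set is invented for illustration, and the paper's 2-phase algorithm details are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(0)
times = np.array([0.5, 1.0, 2.0, 4.0, 8.0])
plasma = np.array([[9.8, 5.1, 2.4, 1.2, 0.5],
                   [10.4, 4.9, 2.6, 1.1, 0.6]])              # per-subject profiles
tissue = 2.0 * plasma * rng.normal(1.0, 0.05, plasma.shape)  # true ratio ~ 2

def auc(t, c):
    # Trapezoidal area under the concentration-time curve.
    return float(np.sum((c[1:] + c[:-1]) / 2.0 * np.diff(t)))

def ratio_bootstrap(n_boot=500):
    """Resample subjects, average profiles, and form the AUC ratio per
    bootstrap replicate; returns mean and standard deviation of the ratio."""
    ratios = np.empty(n_boot)
    for b in range(n_boot):
        idx = rng.integers(0, len(plasma), len(plasma))      # resample subjects
        ratios[b] = auc(times, tissue[idx].mean(0)) / auc(times, plasma[idx].mean(0))
    return ratios.mean(), ratios.std()

ratio_mean, ratio_sd = ratio_bootstrap()
```

Unlike the naïve data-averaging estimate, the bootstrap ratio comes with a dispersion measure (ratio_sd), which is the property the paper exploits.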
Robust/optimal temperature profile control of a high-speed aerospace vehicle using neural networks.
Yadav, Vivek; Padhi, Radhakant; Balakrishnan, S N
2007-07-01
An approximate dynamic programming (ADP)-based suboptimal neurocontroller to obtain desired temperature for a high-speed aerospace vehicle is synthesized in this paper. A 1-D distributed parameter model of a fin is developed from basic thermal physics principles. "Snapshot" solutions of the dynamics are generated with a simple dynamic inversion-based feedback controller. Empirical basis functions are designed using the "proper orthogonal decomposition" (POD) technique and the snapshot solutions. A low-order nonlinear lumped parameter system to characterize the infinite dimensional system is obtained by carrying out a Galerkin projection. An ADP-based neurocontroller with a dual heuristic programming (DHP) formulation is obtained with a single-network-adaptive-critic (SNAC) controller for this approximate nonlinear model. Actual control in the original domain is calculated with the same POD basis functions through a reverse mapping. Further contribution of this paper includes development of an online robust neurocontroller to account for unmodeled dynamics and parametric uncertainties inherent in such a complex dynamic system. A neural network (NN) weight update rule that guarantees boundedness of the weights and relaxes the need for persistence of excitation (PE) condition is presented. Simulation studies show that in a fairly extensive but compact domain, any desired temperature profile can be achieved starting from any initial temperature profile. Therefore, the ADP and NN-based controllers appear to have the potential to become controller synthesis tools for nonlinear distributed parameter systems.
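The "snapshot" plus proper orthogonal decomposition step has a compact linear-algebra core: stack snapshots as columns and take the SVD; the left singular vectors are the empirical basis functions and the squared singular values rank their energy content. The rank-2 synthetic temperature field below is illustrative, not the fin model.

```python
import numpy as np

# Columns of the snapshot matrix are states of a synthetic 1-D temperature
# field at successive instants (an exactly rank-2 field; illustrative only).
xg = np.linspace(0.0, 1.0, 50)
tg = np.linspace(0.0, 1.0, 20)
snapshots = np.outer(np.sin(np.pi * xg), np.exp(-tg)) \
          + 0.3 * np.outer(np.sin(3 * np.pi * xg), np.exp(-4.0 * tg))

# POD: left singular vectors = empirical basis functions ("modes");
# squared singular values measure the energy captured by each mode.
U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
energy = s ** 2 / np.sum(s ** 2)

# Projecting onto the leading modes gives the low-order (Galerkin) trajectory.
modal_coeffs = U[:, :2].T @ snapshots     # 2 x 20 reduced representation
```

In the paper, the lumped-parameter model obtained this way is what the SNAC/DHP neurocontroller is trained on, and the same basis maps the control back to the full domain.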
Product code optimization for determinate state LDPC decoding in robust image transmission.
Thomos, Nikolaos; Boulgouris, Nikolaos V; Strintzis, Michael G
2006-08-01
We propose a novel scheme for error-resilient image transmission. The proposed scheme employs a product coder consisting of low-density parity check (LDPC) codes and Reed-Solomon codes in order to deal effectively with bit errors. The efficiency of the proposed scheme is based on the exploitation of determinate symbols in Tanner graph decoding of LDPC codes and a novel product code optimization technique based on error estimation. Experimental evaluation demonstrates the superiority of the proposed system in comparison to recent state-of-the-art techniques for image transmission.
A robust approach based on Weibull distribution for clustering gene expression data
Directory of Open Access Journals (Sweden)
Gong Binsheng
2011-05-01
Full Text Available Abstract Background: Clustering is a widely used technique for the analysis of gene expression data. Most clustering methods group genes based on distances, while few group genes according to similarities between the distributions of the gene expression levels. Furthermore, as biological annotation resources have accumulated, an increasing number of genes have been annotated into functional categories. As a result, evaluating the performance of clustering methods in terms of the functional consistency of the resulting clusters is of great interest. Results: In this paper, we proposed the WDCM (Weibull Distribution-based Clustering Method), a robust approach for clustering gene expression data, in which the expression levels of individual genes are treated as random variables following distinct Weibull distributions. The WDCM is based on the concept that genes with similar expression profiles have similar distribution parameters, and thus genes are clustered via their Weibull distribution parameters. We used the WDCM to cluster three cancer gene expression data sets from lung cancer, B-cell follicular lymphoma, and bladder carcinoma and obtained well-clustered results. We compared the performance of the WDCM with k-means and Self-Organizing Map (SOM) using the functional annotation information given by the Gene Ontology (GO). The results showed that the functional annotation ratios of the WDCM are higher than those of the other methods. We also used the external measure Adjusted Rand Index to validate the performance of the WDCM. The comparative results demonstrate that the WDCM provides better clustering performance than the k-means and SOM algorithms. A merit of the proposed WDCM is that it can be applied to cluster incomplete gene expression data without imputing the missing values, and its robustness is also evaluated on incomplete data sets. Conclusions: The results demonstrate that our WDCM produces clusters
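Estimating per-gene Weibull parameters, the quantities such a method clusters on, can be sketched with the classic probability-plot regression ln(-ln(1-F)) = k ln x - k ln λ, where k is the shape and λ the scale. The two synthetic "genes" below are illustrative stand-ins for expression profiles, and this simple estimator is not necessarily the one used in the paper.

```python
import numpy as np

def weibull_fit(x):
    """Estimate Weibull shape k and scale lam by least squares on the
    linearized CDF: ln(-ln(1-F)) = k*ln(x) - k*ln(lam)."""
    x = np.sort(x)
    n = len(x)
    F = (np.arange(1, n + 1) - 0.3) / (n + 0.4)   # median-rank plotting positions
    k, c = np.polyfit(np.log(x), np.log(-np.log(1.0 - F)), 1)
    return k, np.exp(-c / k)                      # shape, scale

rng = np.random.default_rng(1)
gene_a = rng.weibull(2.0, 500)          # shape 2, scale 1 (synthetic "gene")
gene_b = 5.0 * rng.weibull(2.0, 500)    # shape 2, scale 5

ka, la = weibull_fit(gene_a)
kb, lb = weibull_fit(gene_b)
# Genes would then be clustered on their (shape, scale) parameter pairs,
# so these two land in different clusters despite sharing a shape.
```

Working in parameter space is also what makes the incomplete-data case tractable: the fit only needs whatever observations a gene has.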
Self-Tuning Control Scheme Based on the Robustness σ-Modification Approach
Directory of Open Access Journals (Sweden)
Nabiha Touijer
2017-01-01
Full Text Available This paper deals with the self-tuning control problem for linear systems described by autoregressive exogenous (ARX) mathematical models in the presence of unmodelled dynamics. An explicit control scheme is described, in which a recursive algorithm based on the robustness σ-modification approach is used to estimate the parameters of the system and to solve its regulation-tracking problem. This approach was designed under the assumption that the norm of the parameter vector is known. A new quadratic criterion is proposed to develop a modified recursive least squares (M-RLS) algorithm with σ-modification. The stability condition of the proposed estimation scheme is proved using the concepts of the small gain theorem. The effectiveness and reliability of the proposed M-RLS algorithm are shown by an illustrative simulation example. The effectiveness of the described explicit self-tuning control scheme is demonstrated by simulation results for the cruise control system of a vehicle.
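One common discrete-time form of σ-modification adds a small leakage term to the recursive least squares update; the leakage keeps the estimates bounded under unmodelled dynamics at the price of a small bias. The paper's criterion differs in its details, so treat the following as a generic sketch on a toy ARX model.

```python
import numpy as np

def mrls_sigma(phi, y, sigma=1e-4, lam=0.99):
    """Recursive least squares with forgetting factor lam and a
    sigma-modification leakage term that keeps estimates bounded."""
    n = phi.shape[1]
    theta = np.zeros(n)
    P = 1e3 * np.eye(n)
    for t in range(len(y)):
        p = phi[t]
        K = P @ p / (lam + p @ P @ p)
        e = y[t] - p @ theta
        theta = theta + K * e - sigma * theta   # leakage introduces a small bias
        P = (P - np.outer(K, p @ P)) / lam
    return theta

# Identify y[t+1] = 0.8*y[t] + 0.5*u[t] from noisy data (toy ARX model).
rng = np.random.default_rng(2)
u = rng.normal(size=300)
y = np.zeros(301)
for t in range(300):
    y[t + 1] = 0.8 * y[t] + 0.5 * u[t] + 0.01 * rng.normal()
phi = np.column_stack([y[:-1], u])
theta = mrls_sigma(phi, y[1:])
```

With sigma = 0 this reduces to ordinary forgetting-factor RLS; the nonzero leakage is what provides the robustness to unmodelled dynamics.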
Robust PLS approach for KPI-related prediction and diagnosis against outliers and missing data
Yin, Shen; Wang, Guang; Yang, Xu
2014-07-01
In practical industrial applications, key performance indicator (KPI)-related prediction and diagnosis are quite important for product quality and economic benefits. To meet these requirements, many advanced prediction and monitoring approaches have been developed, which can be classified into model-based or data-driven techniques. Among these approaches, partial least squares (PLS) is one of the most popular data-driven methods due to its simplicity and easy implementation in large-scale industrial processes. As PLS is entirely based on the measured process data, the characteristics of the process data are critical to the success of PLS. Outliers and missing values are two common characteristics of the measured data that can severely affect the effectiveness of PLS. To ensure the applicability of PLS in practical industrial applications, this paper introduces a robust version of PLS to deal with outliers and missing values simultaneously. The effectiveness of the proposed method is finally demonstrated by the application results of KPI-related prediction and diagnosis on the Tennessee Eastman industrial benchmark process.
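For reference, the standard (non-robust) PLS1 algorithm at the heart of such schemes builds components by a NIPALS-style deflation loop; the paper's robust extension for outliers and missing data is not reproduced here. On clean, noiseless data with all components retained, PLS coincides with ordinary least squares, which the toy check below exploits.

```python
import numpy as np

def pls1(X, y, n_comp):
    """Standard PLS1 (NIPALS-style deflation); returns regression coefficients."""
    Xd, yd = X.copy(), y.astype(float).copy()
    W, P, q = [], [], []
    for _ in range(n_comp):
        w = Xd.T @ yd                 # weight vector: X-direction most covarying with y
        w = w / np.linalg.norm(w)
        t = Xd @ w                    # score vector
        tt = float(t @ t)
        p = Xd.T @ t / tt             # X-loading
        c = float(yd @ t) / tt        # y-loading
        Xd = Xd - np.outer(t, p)      # deflate X
        yd = yd - c * t               # deflate y
        W.append(w); P.append(p); q.append(c)
    W, P, q = np.array(W).T, np.array(P).T, np.array(q)
    return W @ np.linalg.inv(P.T @ W) @ q

rng = np.random.default_rng(5)
X = rng.normal(size=(100, 4))
y = X @ np.array([1.0, -2.0, 0.0, 0.5])      # clean, noiseless synthetic KPI
B = pls1(X, y, n_comp=4)                     # full rank: PLS coincides with OLS
```

In monitoring practice, far fewer components than variables are retained, which is where PLS departs from OLS and where robustness to outliers and missing entries becomes the issue the paper addresses.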
Wu, Jason H.; Hoy, Wayne K.; Tarter, C. John
2013-01-01
Purpose: The purpose of this research is twofold: to test a theory of academic optimism in Taiwan elementary schools and to expand the theory by adding new variables, collective responsibility and enabling school structure, to the model. Design/methodology/approach: Structural equation modeling was used to test, refine, and expand an…
An approach for optimizing arc welding applications
International Nuclear Information System (INIS)
Chapuis, Julien
2011-01-01
The dynamic and transport mechanisms involved in the arc plasma and the weld pool of arc welding operations are numerous and strongly coupled. They produce a medium whose physical quantities exhibit rapid time variations and very marked gradients, which make any experimental analysis of this disturbed environment complex. In this work, we study the TIG and MIG processes. An experimental platform was developed to allow synchronized measurement of various physical quantities associated with welding (process parameters, temperatures, clamping forces, metal transfer, etc.). Numerical libraries dedicated to applied studies in arc welding were developed. They enable the treatment of a large flow of data (signals, images) with a systematic and global method. The advantages of this approach for enriching numerical simulation and arc process control are shown in different situations. Finally, this experimental approach is used, in the context of the chosen application, to obtain rich measurements describing the dynamic behavior of the weld pool in P-GMAW. Dimensional analysis of these experimental measurements makes it possible to identify the predominant mechanisms involved and to determine experimentally the associated characteristic times. This type of approach allows a better description of the behavior of a macro-drop of molten metal or of the phenomena occurring in humping instabilities. (author)
Tuan, Pham Viet; Koo, Insoo
2017-10-06
In this paper, we consider multiuser simultaneous wireless information and power transfer (SWIPT) for cognitive radio systems where a secondary transmitter (ST) with an antenna array provides information and energy to multiple single-antenna secondary receivers (SRs) equipped with a power splitting (PS) receiving scheme when multiple primary users (PUs) exist. The main objective of the paper is to maximize weighted sum harvested energy for SRs while satisfying their minimum required signal-to-interference-plus-noise ratio (SINR), the limited transmission power at the ST, and the interference threshold of each PU. For the perfect channel state information (CSI), the optimal beamforming vectors and PS ratios are achieved by the proposed PSO-SDR in which semidefinite relaxation (SDR) and particle swarm optimization (PSO) methods are jointly combined. We prove that SDR always has a rank-1 solution, and is indeed tight. For the imperfect CSI with bounded channel vector errors, the upper bound of weighted sum harvested energy (WSHE) is also obtained through the S-Procedure. Finally, simulation results demonstrate that the proposed PSO-SDR has fast convergence and better performance as compared to the other baseline schemes.
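The PSO half of the proposed PSO-SDR can be sketched generically; the SDR beamforming subproblem (a semidefinite program solved per particle in the paper) is not reproduced, so a toy convex cost over the power-splitting ratios stands in for it.

```python
import numpy as np

rng = np.random.default_rng(4)

def pso(cost, lb, ub, n_particles=30, n_iter=200, w=0.7, c1=1.5, c2=1.5):
    """Plain global-best particle swarm optimization over the box [lb, ub]."""
    dim = len(lb)
    x = lb + (ub - lb) * rng.random((n_particles, dim))
    v = np.zeros_like(x)
    pbest = x.copy()
    pcost = np.array([cost(p) for p in x])
    gbest = pbest[int(np.argmin(pcost))].copy()
    for _ in range(n_iter):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lb, ub)                # keep particles in the box
        c = np.array([cost(p) for p in x])
        improved = c < pcost
        pbest[improved] = x[improved]
        pcost[improved] = c[improved]
        gbest = pbest[int(np.argmin(pcost))].copy()
    return gbest

# Toy surrogate: find the pair of power-splitting ratios in [0, 1]^2
# minimizing a convex cost (the real objective comes from the SDR step).
best = pso(lambda p: (p[0] - 0.3) ** 2 + (p[1] - 0.6) ** 2,
           np.zeros(2), np.ones(2))
```

In the paper, each particle encodes candidate PS ratios and its cost is the weighted sum harvested energy obtained by solving the inner SDR beamforming problem.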
Liu, Lei; Wang, Zhanshan; Zhang, Huaguang
2018-04-01
This paper is concerned with a robust optimal tracking control strategy for a class of nonlinear multi-input multi-output discrete-time systems with unknown uncertainty via an adaptive critic design (ACD) scheme. The main purpose is to establish an adaptive actor-critic control method, so that the cost incurred in dealing with uncertainty is minimized and the closed-loop system is stable. Based on a neural network approximator, an action network is applied to generate the optimal control signal and a critic network is used to approximate the cost function, respectively. In contrast to previous methods, the main features of this paper are: 1) the ACD scheme is integrated into the controllers to cope with the uncertainty, and 2) a novel cost function, which is not in quadratic form, is proposed so that the total cost in the design procedure is reduced. It is proved that the optimal control signals and the tracking errors are uniformly ultimately bounded even when the uncertainty exists. Finally, a numerical simulation is developed to show the effectiveness of the present approach.
Wei, Ke; Fan, Xiaoguang; Zhan, Mei; Meng, Miao
2018-03-01
Billet optimization can greatly improve the forming quality of the transitional region in the isothermal local loading forming (ILLF) of large-scale Ti-alloy rib-web components. However, the final quality of the transitional region may be deteriorated by uncontrollable factors, such as the manufacturing tolerance of the preformed billet, fluctuation of the stroke length, and the friction factor. Thus, a dual-response surface method (RSM)-based robust optimization of the billet was proposed to address the uncontrollable factors in the transitional region of the ILLF. Given that die underfilling and folding defects are two key factors influencing the forming quality of the transitional region, minimizing the mean and standard deviation of the die underfilling rate and avoiding folding defects were defined as the objective function and constraint condition in the robust optimization. Then a cross-array design was constructed, and a dual-RSM model was established for the mean and standard deviation of the die underfilling rate, considering the size parameters of the billet and the uncontrollable factors. Subsequently, an optimum solution was derived to achieve the robust optimization of the billet. A case study on robust optimization was conducted. Good results were attained in improving die filling and avoiding folding defects, suggesting that the robust optimization of the billet in the transitional region of the ILLF is efficient and reliable.
A Framework for Robust Multivariable Optimization of Integrated Circuits in Space Applications
DuMonthier, Jeffrey; Suarez, George
2013-01-01
Application Specific Integrated Circuit (ASIC) design for space applications involves the multiple challenges of maximizing performance, minimizing power, and ensuring reliable operation in extreme environments. This is a complex multidimensional optimization problem which must be solved early in the development cycle of a system, because the time required for testing and qualification severely limits opportunities to modify and iterate. Manual design techniques, which generally involve simulation at one or a small number of corners with a very limited set of simultaneously variable parameters in order to make the problem tractable, are inefficient and not guaranteed to achieve the best possible results within the performance envelope defined by the process and environmental requirements. What is required is a means to automate design parameter variation, to allow the designer to specify operational constraints and performance goals, and to analyze the results in a way which facilitates identifying the tradeoffs defining the performance envelope over the full set of process and environmental corner cases. The system developed by the Mixed Signal ASIC Group (MSAG) at the Goddard Space Flight Center is implemented as a framework of software modules, templates, and function libraries. It integrates CAD tools and a mathematical computing environment, and can be customized for new circuit designs with only a modest amount of effort, as most common tasks are already encapsulated. Customization is required for simulation test benches to determine performance metrics and for cost function computation. Templates provide a starting point for both, while toolbox functions minimize the code required. Once a test bench has been coded to optimize a particular circuit, it is also used to verify the final design. The combination of test bench and cost function can then serve as a template for similar circuits or be re-used to migrate the design to different processes by re-running it with the
Biased Monte Carlo optimization: the basic approach
International Nuclear Information System (INIS)
Campioni, Luca; Scardovelli, Ruben; Vestrucci, Paolo
2005-01-01
It is well known that the Monte Carlo method is very successful in tackling several kinds of system simulations. It often happens that one has to deal with rare events, and the use of a variance reduction technique is then almost mandatory in order to obtain efficient Monte Carlo applications. The main issue associated with variance reduction techniques is the choice of the value of the biasing parameter. This task is typically left to the experience of the Monte Carlo user, who has to make many attempts before achieving an advantageous biasing. A valuable result is provided: a methodology and a practical rule for establishing a priori guidance on the choice of the optimal value of the biasing parameter. This result, obtained for a single-component system, has the notable property of being valid for any multicomponent system. In particular, in this paper, the exponential and uniform biases of exponentially distributed phenomena are investigated thoroughly.
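The flavor of the biasing-parameter problem can be demonstrated on the simplest case the paper mentions, exponential biasing of an exponentially distributed phenomenon: estimate P(X > t) for X ~ Exp(1) by sampling a tilted exponential and reweighting with the likelihood ratio. The biased rate 1/t used below is a standard heuristic choice (it places the biased mean at the threshold), not the paper's rule.

```python
import numpy as np

rng = np.random.default_rng(3)

def biased_estimate(t, biased_rate, n=100_000):
    """Estimate P(X > t), X ~ Exp(1), by sampling Exp(biased_rate) and
    reweighting each draw with the likelihood ratio f(x)/g(x)."""
    x = rng.exponential(1.0 / biased_rate, n)        # draws from the biased density
    w = np.exp(-(1.0 - biased_rate) * x) / biased_rate
    return float(np.mean(w * (x > t)))

t = 10.0
exact = float(np.exp(-t))                      # ~4.54e-5, far too rare for naive MC
est = biased_estimate(t, biased_rate=1.0 / t)  # tilt so the biased mean equals t
```

With the unbiased density, 100,000 samples would typically see only a handful of threshold crossings; the tilted sampler crosses it on roughly a third of the draws, which is exactly the variance-reduction effect whose optimal tuning the paper formalizes.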
The Orienteering Problem under Uncertainty: Stochastic Programming and Robust Optimization compared
L. Evers (Lanah); K.M. Glorie (Kristiaan); S. van der Ster (Suzanne); A.I. Barros (Ana); H. Monsuur (Herman)
2012-01-01
The Orienteering Problem (OP) is a generalization of the well-known traveling salesman problem and has many interesting applications in logistics, tourism and defense. To reflect real-life situations, we focus on an uncertain variant of the OP. Two main approaches that deal with
Microwave potentials and optimal control for robust quantum gates on an atom chip
International Nuclear Information System (INIS)
Treutlein, Philipp; Haensch, Theodor W.; Reichel, Jakob; Negretti, Antonio; Cirone, Markus A.; Calarco, Tommaso
2006-01-01
We propose a two-qubit collisional phase gate that can be implemented with available atom chip technology and present a detailed theoretical analysis of its performance. The gate is based on earlier phase gate schemes, but uses a qubit state pair with an experimentally demonstrated, very long coherence lifetime. Microwave near fields play a key role in our implementation as a means to realize the state-dependent potentials required for conditional dynamics. Quantum control algorithms are used to optimize gate performance. We employ circuit configurations that can be built with current fabrication processes and extensively discuss the impact of technical noise and imperfections that characterize an actual atom chip. We find an overall infidelity compatible with requirements for fault-tolerant quantum computation
Reliability-based optimal structural design by the decoupling approach
International Nuclear Information System (INIS)
Royset, J.O.; Der Kiureghian, A.; Polak, E.
2001-01-01
A decoupling approach for solving optimal structural design problems involving reliability terms in the objective function, the constraint set or both is discussed and extended. The approach employs a reformulation of each problem, in which reliability terms are replaced by deterministic functions. The reformulated problems can be solved by existing semi-infinite optimization algorithms and computational reliability methods. It is shown that the reformulated problems produce solutions that are identical to those of the original problems when the limit-state functions defining the reliability problem are affine. For nonaffine limit-state functions, approximate solutions are obtained by solving series of reformulated problems. An important advantage of the approach is that the required reliability and optimization calculations are completely decoupled, thus allowing flexibility in the choice of the optimization algorithm and the reliability computation method
Parasuraman, Ramviyas; Molinari, Luca; Kershaw, Keith; Di Castro, Mario; Masi, Alessandro; Ferre, Manuel
2014-01-01
The reliability of wireless communication in a network of mobile wireless robot nodes depends on the received radio signal strength (RSS). When the robot nodes are deployed in hostile environments with ionizing radiations (such as in some scientific facilities), there is a possibility that some electronic components may fail randomly (due to radiation effects), which causes problems in wireless connectivity. The objective of this paper is to maximize robot mission capabilities by maximizing the wireless network capacity and to reduce the risk of communication failure. Thus, in this paper, we consider a multi-node wireless tethering structure called the “server-relay-client” framework that uses (multiple) relay nodes in between a server and a client node. We propose a robust stochastic optimization (RSO) algorithm using a multi-sensor-based RSS sampling method at the relay nodes to efficiently improve and balance the RSS between the source and client nodes to improve the network capacity and to provide red...
MO-FG-CAMPUS-TeP3-04: Deliverable Robust Optimization in IMPT Using Quadratic Objective Function
Energy Technology Data Exchange (ETDEWEB)
Shan, J; Liu, W; Bues, M; Schild, S [Mayo Clinic Arizona, Phoenix, AZ (United States)]
2016-06-15
Purpose: To find and evaluate a way of applying deliverable MU constraints in robust spot-intensity optimization in Intensity-Modulated Proton Therapy (IMPT) to prevent plan quality and robustness from degrading due to machine-deliverable MU constraints. Methods: Currently, the influence of the deliverable MU constraints is evaluated retrospectively by post-processing immediately following optimization. In this study, we propose a new method based on the quasi-Newton-like L-BFGS-B algorithm with which we turn deliverable MU constraints on and off alternately during optimization. Seven patients with two different machine settings (small and large spot size) were planned with both the conventional and new methods. For each patient, three kinds of plans were generated: a conventional non-deliverable plan (plan A), a conventional deliverable plan with post-processing (plan B), and a new deliverable plan (plan C). We performed this study with both realistic (small) and artificial (large) deliverable MU constraints. Results: With small minimum MU constraints, the new method achieved a slightly better plan quality than the conventional method (D95% CTV normalized to the prescription dose: 0.994[0.992∼0.996] (Plan C) vs 0.992[0.986∼0.996] (Plan B)). With large minimum MU constraints, the results show that the new method maintains plan quality while plan quality from the conventional method is greatly degraded (D95% CTV normalized to the prescription dose: 0.987[0.978∼0.994] (Plan C) vs 0.797[0.641∼1.000] (Plan B)). Meanwhile, the plan robustness of the two methods' results is comparable (for all 7 patients, CTV DVH band gap at D95% normalized to the prescription dose: 0.015[0.005∼0.043] (Plan C) vs 0.012[0.006∼0.038] (Plan B) with small MU constraints and 0.019[0.009∼0.039] (Plan C) vs 0.030[0.015∼0.041] (Plan B) with large MU constraints). Conclusion: A positive correlation has been found between plan quality degeneration and magnitude of
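The alternating on/off scheme described above can be sketched with a generic bound-constrained quadratic dose objective. The dose-influence matrix, prescription, and MU threshold below are hypothetical toy values, not the clinical settings of the abstract; the point is only how L-BFGS-B bounds can encode the deliverable-MU constraint.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n_vox, n_spots = 40, 12
D = rng.uniform(0.0, 1.0, (n_vox, n_spots))   # toy dose-influence matrix
d = np.full(n_vox, 2.0)                       # prescribed dose per voxel
MU_MIN = 0.3                                  # minimum deliverable spot weight

def objective(w):
    r = D @ w - d                             # quadratic dose objective
    return r @ r, 2.0 * D.T @ r               # value and analytic gradient

def solve(bounds, w0):
    return minimize(objective, w0, jac=True, method="L-BFGS-B",
                    bounds=bounds).x

# Constraint "off": plain non-negative spot weights.
w = solve([(0.0, None)] * n_spots, np.ones(n_spots))

# Constraint "on": spots far below MU_MIN are switched off (bound fixed at 0),
# the rest are forced up to the deliverable minimum, then re-optimized.
bounds = [(0.0, 0.0) if wi < 0.5 * MU_MIN else (MU_MIN, None) for wi in w]
w0 = np.array([0.0 if b == (0.0, 0.0) else max(wi, MU_MIN)
               for wi, b in zip(w, bounds)])
w = solve(bounds, w0)

# Every remaining spot weight is machine-deliverable.
assert all(wi == 0.0 or wi >= MU_MIN - 1e-9 for wi in w)
```

In the actual study these two phases are alternated during optimization rather than applied once, but the bound-toggling mechanism is the same.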
Robust Stabilization of Discrete-Time Systems with Time-Varying Delay: An LMI Approach
Directory of Open Access Journals (Sweden)
Valter J. S. Leite
2008-01-01
Full Text Available Sufficient linear matrix inequality (LMI) conditions to verify robust stability and to design robust state-feedback gains for the class of linear discrete-time systems with time-varying delay and polytopic uncertainties are presented. The conditions are obtained through parameter-dependent Lyapunov-Krasovskii functionals and use some extra variables, which yield less conservative LMI conditions. Both problems, robust stability analysis and robust synthesis, are formulated as convex problems in which all system matrices can be affected by uncertainty. Some numerical examples are presented to illustrate the advantages of the proposed LMI conditions.
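The full delay-dependent polytopic conditions require a semidefinite-programming solver, but the discrete-time Lyapunov inequality underlying such LMI tests can be illustrated with standard linear algebra. The system matrix below is an arbitrary Schur-stable example, not one from the paper, and delay and uncertainty are omitted for brevity.

```python
import numpy as np
from scipy.linalg import solve_discrete_lyapunov

# Schur-stable nominal system x[k+1] = A x[k] (toy values).
A = np.array([[0.5, 0.1],
              [0.0, 0.4]])

# Lyapunov condition behind the LMI test: find P > 0 with A' P A - P = -Q.
Q = np.eye(2)
P = solve_discrete_lyapunov(A.T, Q)   # solves A' P A - P = -Q

# P positive definite  =>  V(x) = x' P x certifies stability.
assert np.all(np.linalg.eigvalsh(P) > 0)
assert np.allclose(A.T @ P @ A - P, -Q)
```

In the LMI setting the same condition is imposed simultaneously at all polytope vertices (with a parameter-dependent P), which is what an SDP solver handles.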
Performance Optimization in Sport: A Psychophysiological Approach
Directory of Open Access Journals (Sweden)
Selenia di Fronso
2017-11-01
Full Text Available ABSTRACT In the last 20 years, there has been growing interest in the theoretical and applied issues surrounding the psychophysiological processes underlying performance. Psychophysiological monitoring, which enables the study of these processes, consists of assessing the activation and functioning level of the organism using a multidimensional approach. In sport, it can be used to attain a better understanding of the processes underlying athletic performance and to improve it. The most frequently used ecological techniques include electromyography (EMG), electrocardiography (ECG), electroencephalography (EEG), and the assessment of electrodermal activity and breathing rhythm. The purpose of this paper is to offer an overview of the use of these techniques in applied interventions in sport and physical exercise and to give athletes, coaches and sport psychology experts new insights for performance improvement.
A probabilistic approach for optimal sensor allocation in structural health monitoring
International Nuclear Information System (INIS)
Azarbayejani, M; Reda Taha, M M; El-Osery, A I; Choi, K K
2008-01-01
Recent advances in sensor technology promote using large sensor networks to efficiently and economically monitor, identify and quantify damage in structures. In structural health monitoring (SHM) systems, the effectiveness and reliability of the sensor network are crucial, which makes it necessary to determine the optimal number and locations of sensors. Here, we suggest a probabilistic approach for identifying the optimal number and locations of sensors for SHM. We demonstrate a methodology to establish the probability distribution function that identifies the optimal sensor locations such that damage detection is enhanced. The approach is based on using the weights of a neural network, trained from simulations using a priori knowledge about damage locations and damage severities, to generate a normalized probability distribution function for optimal sensor allocation. We also demonstrate that the optimal sensor network can be related to the highest probability of detection (POD). The redundancy of the proposed sensor network is examined using a 'leave one sensor out' analysis. A prestressed concrete bridge is selected as a case study to demonstrate the effectiveness of the proposed method. The results show that the proposed approach can provide a robust design for sensor networks that is more efficient than a uniform distribution of sensors on a structure.
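The allocation step itself reduces to normalizing network-derived scores into a probability distribution and spending the sensor budget on the highest-probability locations. The scores below are random stand-ins for the weights of a trained damage-detection network; the candidate count and budget are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical saliency scores per candidate location, standing in for
# input-layer weights of a trained damage-detection network.
scores = np.abs(rng.normal(size=20))

# Normalize into a probability distribution over candidate locations.
pdf = scores / scores.sum()

# Spend the sensor budget on the highest-probability locations.
n_sensors = 5
chosen = np.argsort(pdf)[::-1][:n_sensors]

assert np.isclose(pdf.sum(), 1.0)
# "Leave one sensor out": dropping the weakest chosen sensor removes only
# the smallest share of the captured probability mass.
mass_full = pdf[chosen].sum()
mass_loo = pdf[chosen[:-1]].sum()
assert mass_loo <= mass_full
```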
Directory of Open Access Journals (Sweden)
Islam S.M. Khalil
2016-06-01
Full Text Available Targeted therapy using magnetic microparticles and nanoparticles has the potential to mitigate the negative side-effects associated with conventional medical treatment. Major technological challenges still need to be addressed in order to translate these particles into in vivo applications. For example, magnetic particles need to be navigated controllably in vessels against flowing streams of body fluid. This paper describes the motion control of paramagnetic microparticles in the flowing streams of fluidic channels with time-varying flow rates (maximum flow is 35 ml.hr−1). This control is designed using a magnetic-based proportional-derivative (PD) control system to compensate for the time-varying flow inside the channels (with width and depth of 2 mm and 1.5 mm, respectively). First, we achieve point-to-point motion control against and along flow rates of 4 ml.hr−1, 6 ml.hr−1, 17 ml.hr−1, and 35 ml.hr−1. The average speeds of a single microparticle (with an average diameter of 100 μm) against flow rates of 6 ml.hr−1 and 30 ml.hr−1 are calculated to be 45 μm.s−1 and 15 μm.s−1, respectively. Second, we implement PD control with disturbance estimation and compensation. This control decreases the steady-state error by 50%, 70%, 73%, and 78% at flow rates of 4 ml.hr−1, 6 ml.hr−1, 17 ml.hr−1, and 35 ml.hr−1, respectively. Finally, we consider the problem of finding the optimal path (minimal kinetic energy) between two points, against the mentioned flow rates, using the calculus of variations. Not only does an optimal path between two collinear points along the direction of maximum flow (the middle of the fluidic channel) decrease the rise time of the microparticles, but we also decrease the input current supplied to the electromagnetic coils by minimizing the kinetic energy of the microparticles, compared to PD control with disturbance compensation.
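A one-dimensional sketch of the control idea shows why adding disturbance estimation shrinks the steady-state error that plain PD leaves behind. The gains, drift value, and drag-dominated (first-order) particle model are hypothetical simplifications of the paper's fluidic-channel dynamics.

```python
import numpy as np

dt = 0.01                 # control period [s]
kp, kd = 8.0, 0.05        # hypothetical PD gains tuned for this discretization
x_ref = 1.0               # target position (arbitrary units)
flow = 0.15               # unknown constant drift imposed by the channel flow

def run(compensate):
    """Drag-dominated particle: velocity = control input + flow drift."""
    x, e_prev, d_hat = 0.0, x_ref, 0.0
    for _ in range(2000):
        e = x_ref - x
        u = kp * e + kd * (e - e_prev) / dt - d_hat   # PD + feed-forward
        x_next = x + (u + flow) * dt
        if compensate:
            # Disturbance observer: low-pass filter the unexplained drift.
            d_hat += 0.2 * ((x_next - x) / dt - u - d_hat)
        x, e_prev = x_next, e
    return abs(x_ref - x)

err_pd = run(False)    # plain PD leaves a steady-state error of ~ flow/kp
err_pdc = run(True)    # disturbance compensation drives it toward zero

assert err_pdc < 0.1 * err_pd
```

Without integral action, PD alone settles at an error of roughly flow/kp, which is exactly the residual the paper's disturbance estimation step removes.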
A "Hybrid" Approach for Synthesizing Optimal Controllers of Hybrid Systems
DEFF Research Database (Denmark)
Zhao, Hengjun; Zhan, Naijun; Kapur, Deepak
2012-01-01
to discretization manageable and within bounds. A major advantage of our approach is not only that it avoids errors due to numerical computation, but it also gives a better optimal controller. In order to illustrate our approach, we use the real industrial example of an oil pump provided by the German company HYDAC...
An Optimization Approach to the Dynamic Allocation of Economic Capital
Laeven, R.J.A.; Goovaerts, M.J.
2004-01-01
We propose an optimization approach to allocating economic capital, distinguishing between an allocation or raising principle and a measure for the risk residual. The approach is applied both at the aggregate (conglomerate) level and at the individual (subsidiary) level and yields an integrated
A practical multiscale approach for optimization of structural damping
DEFF Research Database (Denmark)
Andreassen, Erik; Jensen, Jakob Søndergaard
2016-01-01
A simple and practical multiscale approach suitable for topology optimization of structural damping in a component ready for additive manufacturing is presented. The approach consists of two steps: First, the homogenized loss factor of a two-phase material is maximized. This is done in order...
Directory of Open Access Journals (Sweden)
Gina M. Zastrow-Hayes
2015-03-01
Full Text Available Molecular characterization of events is an integral part of the advancement process during genetically modified (GM) crop product development. Assessment of these events is traditionally accomplished by polymerase chain reaction (PCR) and Southern blot analyses. Southern blot analysis can be time-consuming and comparatively expensive and does not provide sequence-level detail. We have developed a sequence-based application, Southern-by-Sequencing (SbS), utilizing sequence capture coupled with next-generation sequencing (NGS) technology to replace Southern blot analysis for event selection in a high-throughput molecular characterization environment. SbS is accomplished by hybridizing indexed and pooled whole-genome DNA libraries from GM plants to biotinylated probes designed to target the sequence of transformation plasmids used to generate events within the pool. This sequence capture process enriches the sequence data obtained for targeted regions of interest (transformation plasmid DNA). Taking advantage of the DNA adjacent to the targeted bases (referred to as next-to-target sequence) that accompanies the targeted transformation plasmid sequence, the data analysis detects plasmid-to-genome and plasmid-to-plasmid junctions introduced during insertion into the plant genome. Analysis of these junction sequences provides sequence-level information on the following: the number of insertion loci, including detection of unlinked, independently segregating, small DNA fragments; copy number; rearrangements, truncations, or deletions of the intended insertion DNA; and the presence of transformation plasmid backbone sequences. This molecular evidence from SbS analysis is used to characterize and select GM plants meeting optimal molecular characterization criteria. SbS technology has proven to be a robust event screening tool for use in a high-throughput molecular characterization environment.
An Efficient PageRank Approach for Urban Traffic Optimization
Directory of Open Access Journals (Sweden)
Florin Pop
2012-01-01
to determine optimal decisions for each traffic light, based on the solution given by Larry Page for page ranking in the Web environment (Page et al., 1999). Our approach is similar to the work presented by Sheng-Chung et al. (2009) and Yousef et al. (2010). We consider that the traffic lights are controlled by servers, and a score for each road is computed with an efficient PageRank approach and used in a cost function to determine optimal decisions. We demonstrate that the cumulative contribution of each car in the traffic respects the main constraint of the PageRank approach, preserving all the properties of the matrix considered in our model.
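A minimal version of the road-scoring step: treat roads as nodes of a directed graph and run the damped power iteration from Page et al. (1999). The adjacency matrix is a hypothetical four-road example, and dangling nodes are assumed away for brevity.

```python
import numpy as np

# Toy road graph: adj[i, j] = 1 if road i feeds traffic into road j.
adj = np.array([[0, 1, 1, 0],
                [0, 0, 1, 0],
                [1, 0, 0, 1],
                [0, 1, 0, 0]], dtype=float)

d = 0.85                                  # damping factor, as in Page et al.
n = adj.shape[0]
M = adj / adj.sum(axis=1, keepdims=True)  # row-stochastic transition matrix

score = np.full(n, 1.0 / n)
for _ in range(100):                      # power iteration
    score = (1 - d) / n + d * M.T @ score

assert np.isclose(score.sum(), 1.0)       # scores form a distribution
assert np.all(score > 0)
```

The resulting per-road scores are what the cost function would consume when choosing traffic-light decisions.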
Random Matrix Approach for Primal-Dual Portfolio Optimization Problems
Tada, Daichi; Yamamoto, Hisashi; Shinzato, Takashi
2017-12-01
In this paper, we revisit the portfolio optimization problems of the minimization/maximization of investment risk under constraints of budget and investment concentration (primal problem) and the maximization/minimization of investment concentration under constraints of budget and investment risk (dual problem) for the case that the variances of the return rates of the assets are identical. We analyze both optimization problems by the Lagrange multiplier method and the random matrix approach. Thereafter, we compare the results obtained from our proposed approach with the results obtained in previous work. Moreover, we use numerical experiments to validate the results obtained from the replica approach and the random matrix approach as methods for analyzing both the primal and dual portfolio optimization problems.
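For identical asset variances, the primal problem has a closed form obtainable by the Lagrange multiplier method, which a few lines of linear algebra confirm. The variance value is arbitrary, and the budget convention assumed here (weights summing to the number of assets N) follows the usual setup of such replica/random-matrix analyses.

```python
import numpy as np

N = 5
var = 0.04
Sigma = var * np.eye(N)        # identical asset variances, no correlations

# Primal problem: minimize w' Sigma w  subject to the budget  1'w = N.
# Stationarity of the Lagrangian: 2 Sigma w = lam * 1, so w ∝ Sigma^{-1} 1.
ones = np.ones(N)
s_inv_1 = np.linalg.solve(Sigma, ones)
w = N * s_inv_1 / (ones @ s_inv_1)

risk = w @ Sigma @ w
assert np.isclose(w.sum(), N)            # budget constraint holds
assert np.allclose(w, np.ones(N))        # equally weighted portfolio is optimal
assert np.isclose(risk, var * N)         # minimal risk N * variance
```

With a nontrivial correlation structure the same two lines (solve, then normalize) still give the budget-constrained minimum-risk weights, which is the baseline the random-matrix analysis perturbs.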
A kinematic approach for efficient and robust simulation of the cardiac beating motion.
Directory of Open Access Journals (Sweden)
Takashi Ijiri
Full Text Available Computer simulation techniques for cardiac beating motions potentially have many applications and a broad audience. However, most existing methods require enormous computational costs and often show unstable behavior for extreme parameter sets, which interrupts smooth simulation studies and makes it difficult to apply them to interactive applications. To address this issue, we present an efficient and robust framework for simulating the cardiac beating motion. The global cardiac motion is generated by the accumulation of local myocardial fiber contractions. We compute such local-to-global deformations using a kinematic approach; we divide a heart mesh model into overlapping local regions, contract them independently according to fiber orientation, and compute a global shape that satisfies the contracted shapes of all local regions as much as possible. A comparison between our method and a physics-based method showed that our method can generate motion very close to that of a physics-based simulation. Our kinematic method has high controllability; the simulated ventricle-wall-contraction speed can be easily adjusted to that of a real heart by controlling local contraction timing. We demonstrate that our method achieves a highly realistic beating motion of a whole heart in real time on a consumer-level computer. Our method provides an important step in bridging the gap between cardiac simulations and interactive applications.
Energy Technology Data Exchange (ETDEWEB)
Wang, Sujuan; Zhou, Changhua [Key Laboratory for Special Functional Materials of the Ministry of Education, Henan University, Kaifeng 475004 (China); Yuan, Hang [Life Science Division, Graduate School at Shenzhen, Tsinghua University, Shenzhen 518055 (China); Shen, Huaibin [Key Laboratory for Special Functional Materials of the Ministry of Education, Henan University, Kaifeng 475004 (China); Zhao, Wenxiu [Life Science Division, Graduate School at Shenzhen, Tsinghua University, Shenzhen 518055 (China); Ma, Lan, E-mail: malan@sz.tsinghua.edu.cn [Life Science Division, Graduate School at Shenzhen, Tsinghua University, Shenzhen 518055 (China); Li, Lin Song, E-mail: lsli@henu.edu.cn [Key Laboratory for Special Functional Materials of the Ministry of Education, Henan University, Kaifeng 475004 (China)]
2013-08-01
Graphical abstract: - Highlights: • Aqueous CdSe/ZnS QDs were prepared using polymaleic anhydrides as the capping ligand. • Effects of reaction temperature and time were systematically studied in the synthesis process. • The water-soluble QDs exhibited good stability in a physiologically relevant environment. • The aqueous QDs were applied as a biological probe to detect human embryonic stem cells. - Abstract: This paper describes a robust ligand exchange approach for preparing biocompatible CdSe/ZnS quantum dots (QDs) to make bioprobes for effective cell imaging. In this method, polymaleic anhydride (PMA) ligands are first used to replace the original hydrophobic ligand (oleic acid) and form a protective shell with multiple hydrophilic groups to coat and protect the CdSe/ZnS QDs. The as-prepared aqueous QDs exhibit small particle size and good colloidal stability in aqueous solutions over a wide range of pH values and salt concentrations and under thermal treatment, which is necessary for biological applications. The use of this new class of aqueous QDs for cell imaging shows a strong fluorescence signal in human embryonic stem cells, which demonstrates that PMA-coated QDs fully satisfy the requirements for preparing high-quality biological probes.
Shrinkage covariance matrix approach based on robust trimmed mean in gene sets detection
Karjanto, Suryaefiza; Ramli, Norazan Mohamed; Ghani, Nor Azura Md; Aripin, Rasimah; Yusop, Noorezatty Mohd
2015-02-01
Microarray technology involves placing an orderly arrangement of thousands of gene sequences in a grid on a suitable surface. The technology has enabled novel discoveries since its development and has attracted increasing attention among researchers. The widespread adoption of microarray technology is largely due to its ability to perform simultaneous analysis of thousands of genes in a massively parallel manner in one experiment. Hence, it provides valuable knowledge on gene interaction and function. A microarray data set typically consists of tens of thousands of genes (variables) from just dozens of samples due to various constraints. Therefore, the sample covariance matrix in Hotelling's T2 statistic is not positive definite and becomes singular, so it cannot be inverted. In this research, Hotelling's T2 statistic is combined with a shrinkage approach as an alternative estimation of the covariance matrix to detect significant gene sets. The use of a shrinkage covariance matrix overcomes the singularity problem by converting an unbiased estimator into an improved biased estimator of the covariance matrix. A robust trimmed mean is integrated into the shrinkage matrix to reduce the influence of outliers and consequently increase its efficiency. The performance of the proposed method is measured using several simulation designs. The results are expected to outperform existing techniques in many tested conditions.
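A compact sketch of the two ingredients, using a random toy data matrix and an arbitrary shrinkage intensity in place of an estimated one: a trimmed mean supplies the robust location, and shrinking the singular sample covariance toward a scaled identity restores the invertibility that Hotelling's T2 needs.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
p, n = 50, 10                      # many genes, few samples: S is singular
X = rng.normal(size=(n, p))
X[0, :5] += 10.0                   # a few outlying values

# Robust location: 20% trimmed mean per gene instead of the sample mean.
mu = stats.trim_mean(X, 0.2, axis=0)
Xc = X - mu

# Shrink the (singular) sample covariance toward a scaled identity target:
# S* = (1 - lam) S + lam t I, positive definite for any lam > 0.
S = Xc.T @ Xc / (n - 1)
lam, t = 0.3, np.trace(S) / p      # hypothetical shrinkage intensity
S_shrunk = (1 - lam) * S + lam * t * np.eye(p)

assert np.linalg.matrix_rank(S) < p                 # singular without shrinkage
assert np.all(np.linalg.eigvalsh(S_shrunk) > 0)     # invertible after shrinkage
```

`S_shrunk` can then be inverted inside the T2 statistic, while the trimmed mean keeps the outlying values in the first sample from dominating the location estimate.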
A generic flexible and robust approach for intelligent real-time video-surveillance systems
Desurmont, Xavier; Delaigle, Jean-Francois; Bastide, Arnaud; Macq, Benoit
2004-05-01
In this article we present a generic, flexible and robust approach for an intelligent real-time video-surveillance system. A previous version of the system was presented in [1]. The goal of these advanced tools is to help operators by detecting events of interest in visual scenes, highlighting alarms and computing statistics. The proposed system is a multi-camera platform able to handle different standards of video inputs (composite, IP, IEEE1394) and which can compress (MPEG4), store and display them. This platform also integrates advanced video analysis tools, such as motion detection, segmentation, tracking and interpretation. The design of the architecture is optimised for playback, display, and processing of video flows in an efficient way for video-surveillance applications. The implementation is distributed on a scalable computer cluster based on Linux and an IP network. It relies on POSIX threads for multitasking scheduling. Data flows are transmitted between the different modules using multicast technology, under the control of a TCP-based command network (e.g. for bandwidth occupation control). We report some results and show the potential use of such a flexible system in third-generation video-surveillance systems. We illustrate the interest of the system in a real case study: indoor surveillance.
Wang, Li; Yang, Xiaonan; Wang, Quandai; Yang, Zhiqiang; Duan, Hui; Lu, Bingheng
2017-07-01
The construction of stable hydrophobic surfaces has increasingly gained attention owing to its wide range of potential applications. However, these surfaces may become wet and lose their slip effect owing to insufficient hydrophobic stability. Pillars with a mushroom-shaped tip are believed to enhance hydrophobicity stability. This work presents a facile method of manufacturing mushroom-shaped structures, where, compared with the previously used method, the modulation of the cap thickness, cap diameter, and stem height of the structures is more convenient. The effects of the development time on the cap diameter and overhanging angle are investigated and well-defined mushroom-shaped structures are demonstrated. The effect of the microstructure geometry on the contact state of a droplet is predicted by taking an energy minimization approach and is experimentally validated with nonvolatile ultraviolet-curable polymer with a low surface tension by inspecting the profiles of liquid-vapor interface deformation and tracking the trace of the receding contact line after exposure to ultraviolet light. Theoretical and experimental results show that, compared with regular pillar arrays having a vertical sidewall, the mushroom-like structures can effectively enhance hydrophobic stability. The proposed manufacturing method will be useful for fabricating robust hydrophobic surfaces in a cost-effective and convenient manner.
A Robust Approach for Clock Offset Estimation in Wireless Sensor Networks
Directory of Open Access Journals (Sweden)
Kim Jang-Sub
2010-01-01
Full Text Available The maximum likelihood estimators (MLEs) for the clock phase offset, assuming a two-way message exchange mechanism between the nodes of a wireless sensor network, were recently derived assuming Gaussian and exponential network delays. However, the MLE performs poorly in the presence of non-Gaussian or non-exponential network delay distributions. Currently, there is a need to develop clock synchronization algorithms that are robust to the distribution of network delays. This paper proposes a clock offset estimator based on the composite particle filter (CPF) to cope with the possible asymmetries and non-Gaussianity of the network delay distributions. Also, a variant of the CPF approach based on bootstrap sampling (BS) is shown to exhibit good performance with a reduced number of observations. Computer simulations illustrate that the basic CPF and its BS-based variant present superior performance to the MLE under general random network delay distributions such as asymmetric Gaussian, exponential, Gamma, and Weibull, as well as various mixtures.
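A stripped-down bootstrap variant for a one-way exchange with exponential delays illustrates the particle-filtering idea (the paper's composite particle filter and two-way model are more elaborate). All rates, priors, and jitter values below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(4)
true_offset, rate = 2.5, 1.0
T, Np = 200, 500
# One-way timestamp differences: clock offset plus exponential network delay.
obs = true_offset + rng.exponential(1.0 / rate, size=T)

particles = rng.uniform(0.0, 10.0, Np)       # prior over the offset
for y in obs:
    # Exponential-delay likelihood: delay = y - offset must be non-negative.
    delay = y - particles
    w = np.where(delay >= 0.0, rate * np.exp(-rate * delay), 0.0)
    if w.sum() == 0.0:
        w = np.ones(Np)                      # degenerate step: keep all particles
    w /= w.sum()
    # Bootstrap resampling, followed by a small roughening jitter so the
    # static-parameter particle cloud does not collapse.
    particles = rng.choice(particles, size=Np, p=w)
    particles += rng.normal(0.0, 0.01, Np)

estimate = particles.mean()
assert abs(estimate - true_offset) < 0.2
```

The same weight-resample-jitter loop works unchanged for asymmetric Gaussian, Gamma, or mixture delay likelihoods, which is the robustness argument made in the abstract.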
Pal, Partha S; Kar, R; Mandal, D; Ghoshal, S P
2015-11-01
This paper presents an efficient approach to identifying different stable and practically useful Hammerstein models, as well as an unstable nonlinear process along with its stable closed-loop counterpart, with the help of an evolutionary algorithm, Colliding Bodies Optimization (CBO). The performance measures of the CBO-based optimization approach, such as precision and accuracy, are justified by the minimum output mean square error (MSE), which signifies that the amounts of bias and variance in the output domain are also the least. It is also observed that optimization of the output MSE in the presence of outliers has resulted in a consistently very close estimation of the output parameters, which justifies the general applicability of the CBO algorithm to the system identification problem and establishes the practical usefulness of the applied approach. Optimum values of the MSEs, computational times and statistical information of the MSEs are all found to be superior to those of other existing stochastic-algorithm-based approaches reported in the recent literature, which establishes the robustness and efficiency of the applied CBO-based identification scheme. Copyright © 2015 ISA. Published by Elsevier Ltd. All rights reserved.
On Generating Optimal Signal Probabilities for Random Tests: A Genetic Approach
Directory of Open Access Journals (Sweden)
M. Srinivas
1996-01-01
Full Text Available Genetic Algorithms are robust search and optimization techniques. A Genetic Algorithm based approach for determining the optimal input distributions for generating random test vectors is proposed in the paper. A cost function based on the COP testability measure for determining the efficacy of the input distributions is discussed. A brief overview of Genetic Algorithms (GAs) and the specific details of our implementation are described. Experimental results based on ISCAS-85 benchmark circuits are presented. The performance of our GA-based approach is compared with previous results. While the GA generates more efficient input distributions than the previous methods, which are based on gradient descent search, the overheads of the GA in computing the input distributions are larger.
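A toy version of the loop: the GA evolves a population of input signal probabilities toward the maximizer of a fitness function. The two-input surrogate fitness below (rewarding inputs that exercise both the high and low conditions of an AND gate) is a hypothetical stand-in for the paper's COP-based cost, and all GA parameters are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(8)

def fitness(p):
    # Toy surrogate for a COP-style detection cost on a 2-input AND gate:
    # reward signal probabilities that exercise both output polarities.
    return p[0] * p[1] * (1 - p[0]) * (1 - p[1])   # maximized at (0.5, 0.5)

pop = rng.random((20, 2))                          # population of probability vectors
for _ in range(100):
    f = np.array([fitness(ind) for ind in pop])
    parents = pop[np.argsort(f)[-10:]]             # truncation selection
    children = (parents[rng.integers(0, 10, 20)] +
                parents[rng.integers(0, 10, 20)]) / 2.0   # arithmetic crossover
    children += rng.normal(0.0, 0.02, children.shape)     # mutation
    pop = np.clip(children, 0.0, 1.0)              # keep valid probabilities

best = pop[np.argmax([fitness(ind) for ind in pop])]
assert np.all(np.abs(best - 0.5) < 0.1)
```

In the paper's setting, `fitness` would be replaced by the COP-based cost evaluated on an ISCAS-85 circuit, with one probability per primary input.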
Directory of Open Access Journals (Sweden)
Jay Krishna Thakur
2015-08-01
Full Text Available The aim of this work is to investigate new approaches using methods based on statistics and geostatistics for spatio-temporal optimization of groundwater monitoring networks. The formulated and integrated methods were tested with the groundwater quality data set of Bitterfeld/Wolfen, Germany. Spatially, the monitoring network was optimized using geostatistical methods. Temporal optimization of the monitoring network was carried out using Sen's method (1968). For geostatistical network optimization, a geostatistical spatio-temporal algorithm was used to identify redundant wells in 2- and 2.5-D Quaternary and Tertiary aquifers. The influences of interpolation block width, dimension, contaminant association, groundwater flow direction and aquifer homogeneity on statistical and geostatistical methods for monitoring network optimization were analysed. The integrated approach shows 37% and 28% redundancy in the monitoring network in the Quaternary aquifer and Tertiary aquifer, respectively. The geostatistical method also recommends 41 and 22 new monitoring wells in the Quaternary and Tertiary aquifers, respectively. In temporal optimization, an overall optimized sampling interval was recommended in terms of the lower quartile (238 days), median (317 days) and upper quartile (401 days) in the research area of Bitterfeld/Wolfen. The demonstrated methods for improving a groundwater monitoring network can be used in real monitoring network optimization, with due consideration given to the influencing factors.
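The temporal step rests on Sen's (1968) slope estimator, the median of all pairwise slopes, sketched here on a hypothetical quality time series and cross-checked against SciPy's implementation.

```python
import numpy as np
from scipy import stats

# Toy time series of a groundwater-quality indicator (hypothetical values).
t = np.arange(10, dtype=float)                  # sampling times
noise = np.array([0.1, -0.2, 0.0, 0.3, -0.1, 0.2, 0.0, -0.3, 0.1, 0.0])
y = 0.5 * t + noise                             # true trend slope = 0.5

# Sen's estimator: median of the slopes over all pairs of observations.
slopes = [(y[j] - y[i]) / (t[j] - t[i])
          for i in range(len(t)) for j in range(i + 1, len(t))]
sen_slope = float(np.median(slopes))

# SciPy's Theil-Sen routine computes the same median of pairwise slopes.
assert np.isclose(sen_slope, stats.theilslopes(y, t)[0])
assert abs(sen_slope - 0.5) < 0.2
```

Because it is a median over pairs, a single anomalous sample shifts the estimate far less than it would shift an ordinary least-squares trend, which is why it suits irregular monitoring records.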
A novel approach for optimal chiller loading using particle swarm optimization
Energy Technology Data Exchange (ETDEWEB)
Ardakani, A. Jahanbani; Ardakani, F. Fattahi; Hosseinian, S.H. [Department of Electrical Engineering, Amirkabir University of Technology (Tehran Polytechnic), Hafez Avenue, Tehran 15875-4413 (Iran, Islamic Republic of)
2008-07-01
This study employs two new methods to solve the optimal chiller loading (OCL) problem: continuous genetic algorithm (GA) and particle swarm optimization (PSO). Because of the continuous nature of the variables in the OCL problem, continuous GA and PSO easily overcome deficiencies of other conventional optimization methods. The partial load ratio (PLR) of each chiller is chosen as the variable to be optimized, and the power consumption of the chillers is taken as the fitness function. Both methods find the optimal solution while the equality constraint is exactly satisfied. Major advantages of the proposed approaches over other conventional methods include fast convergence, escaping local optima, and simple implementation, as well as independence of the solution from the problem. The abilities of the proposed methods are examined with reference to an example system. To demonstrate these abilities, results are compared with a binary genetic algorithm method. The proposed approaches can be perfectly applied to air-conditioning systems. (author)
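A minimal PSO for the OCL structure: the PLRs of all but one chiller are the particles' coordinates, and the last PLR is eliminated through the load-balance equality, so that constraint is satisfied exactly. The capacities, power curves, operating range, and PSO coefficients below are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(5)
cap = np.array([800.0, 800.0, 500.0])              # chiller capacities [kW]
load = 1400.0                                      # required cooling load [kW]
# Hypothetical power curves P_i(PLR) = a + b*PLR + c*PLR^2 [kW electric].
coef = np.array([[60.0, 50.0, 110.0],
                 [70.0, 40.0, 120.0],
                 [40.0, 30.0,  70.0]])

def power(plr):                                    # total consumption (fitness)
    a, b, c = coef[:, 0], coef[:, 1], coef[:, 2]
    return float(np.sum(a + b * plr + c * plr**2))

def decode(x):
    """Two free PLRs; the third satisfies the load equality exactly."""
    return np.array([x[0], x[1], (load - x @ cap[:2]) / cap[2]])

n_part, iters = 30, 200
pos = rng.uniform(0.3, 1.0, (n_part, 2))
vel = np.zeros((n_part, 2))
pbest, pbest_f = pos.copy(), np.full(n_part, np.inf)
gbest, gbest_f = pos[0].copy(), np.inf

for _ in range(iters):
    for i in range(n_part):
        plr = decode(pos[i])
        # Penalize PLRs outside the assumed operating range [0.3, 1].
        pen = 1e4 * np.sum(np.clip(0.3 - plr, 0, None) +
                           np.clip(plr - 1.0, 0, None))
        f = power(plr) + pen
        if f < pbest_f[i]:
            pbest_f[i], pbest[i] = f, pos[i].copy()
        if f < gbest_f:
            gbest_f, gbest = f, pos[i].copy()
    r1, r2 = rng.random((n_part, 2)), rng.random((n_part, 2))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = pos + vel

best = decode(gbest)
assert np.isclose(best @ cap, load)                # equality met exactly
assert np.all(best > 0.29) and np.all(best < 1.01)
```

Eliminating one variable through the equality constraint is one simple way to realize the abstract's claim that the constraint is "exactly satisfied"; a repair or penalty-only scheme would be an alternative.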
Robust Approach to Verifying the Weak Form of the Efficient Market Hypothesis
Střelec, Luboš
2011-09-01
The weak form of the efficient markets hypothesis states that prices incorporate only past information about the asset. An implication of this form of the hypothesis is that one cannot detect mispriced assets and consistently outperform the market through technical analysis of past prices. One possible formulation of the efficient market hypothesis used for weak-form tests is that share prices follow a random walk, meaning that returns are realizations of an IID sequence of random variables. Consequently, to verify the weak form of the efficient market hypothesis, we can use distribution tests, among others, i.e. some tests of normality and/or some graphical methods. Many procedures for testing the normality of univariate samples have been proposed in the literature [7]. Today the most popular omnibus test of normality for general use is the Shapiro-Wilk test. The Jarque-Bera test is the most widely adopted omnibus test of normality in econometrics and related fields. In particular, the Jarque-Bera test (i.e. a test based on the classical measures of skewness and kurtosis) is frequently used when one is more concerned about heavy-tailed alternatives. As these measures are based on moments of the data, this test has a zero breakdown value [2]. In other words, a single outlier can make the test worthless. The reason so many classical procedures are nonrobust to outliers is that the parameters of the model are expressed in terms of moments, and their classical estimators are expressed in terms of sample moments, which are very sensitive to outliers. Another approach to robustness is to concentrate on the parameters of interest suggested by the problem under study. Consequently, novel robust testing procedures for normality are presented in this paper to overcome the shortcomings of classical normality tests in the field of financial data, which are typified by the occurrence of remote data points and additional types of deviations from
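The zero breakdown value of the moment-based Jarque-Bera test is easy to demonstrate, together with one simple robustification: screening observations by a median/MAD rule before testing. The screening rule is a generic stand-in for the paper's robust procedures, and the data are simulated.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)
returns = rng.normal(0.0, 1.0, 500)            # simulated IID "returns"

# Classical Jarque-Bera: built from sample skewness and kurtosis (moments).
stat_clean, p_clean = stats.jarque_bera(returns)

# A single outlier -- one extreme trading day -- drives the moment-based
# statistic far into the rejection region.
contaminated = returns.copy()
contaminated[0] = 25.0
stat_out, p_out = stats.jarque_bera(contaminated)
assert stat_out > stat_clean
assert p_out < 1e-6                  # classical test rejects because of 1 point

# Robustified variant: discard points more than 5 MADs from the median
# before testing, so a single outlier loses its leverage.
med = np.median(contaminated)
mad = stats.median_abs_deviation(contaminated)
kept = contaminated[np.abs(contaminated - med) < 5 * mad]
stat_rob, p_rob = stats.jarque_bera(kept)
assert p_rob > 0.001                 # screened test is no longer outlier-driven
```

Because skewness and kurtosis involve third and fourth powers, one 25-sigma observation among 500 dominates both moments, which is exactly the "worthless test" failure mode described above.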
Toda, Masayoshi
2016-01-01
This book provides readers with alternative robust approaches to control design for an important class of systems characteristically associated with ocean-going vessels and structures. These systems, which include crane vessels, on-board cranes, radar gimbals, and a conductivity, temperature and depth winch, are modelled as manipulators with oscillating bases. One design approach is based on the H-infinity control framework, exploiting an effective combination of PD control, an extended matrix polytope and a robust stability analysis method with a state-dependent coefficient form. The other is based on sliding-mode control using novel nonlinear sliding surfaces. The model demonstrates how successful motion control can be achieved by suppressing base oscillations, even in the presence of uncertainties. This is important not only for the ocean engineering systems in which the problems addressed here originate, but more generally as a benchmark platform for robust motion control with disturbance rejection. Researche...
Optimization based tuning approach for offset free MPC
DEFF Research Database (Denmark)
Olesen, Daniel Haugård; Huusom, Jakob Kjøbsted; Jørgensen, John Bagterp
2012-01-01
We present an optimization-based tuning procedure with certain robustness properties for an offset-free Model Predictive Controller (MPC). The MPC is designed for multivariate processes that can be represented by an ARX model. The advantage of ARX model representations is that standard system identification techniques using convex optimization can be used to identify such models from input-output data. The stochastic part of the ARX model identified from input-output data is modified with an ARMA model designed as part of the MPC design procedure to ensure offset-free control. The ARMAX model description resulting from the extension can be realized as a state space model in innovation form, and the MPC is designed and implemented based on this state space model. Expressions for the closed-loop dynamics of the unconstrained system are used to derive the sensitivity...
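The convex ARX identification step mentioned above reduces to a linear least-squares problem. A minimal sketch, with model orders and the simulated system chosen for illustration (not taken from the paper):

```python
import numpy as np

def fit_arx(y, u, na=2, nb=2):
    """Fit y[k] = a1*y[k-1] + ... + a_na*y[k-na] + b1*u[k-1] + ... + b_nb*u[k-nb]
    by ordinary least squares (the convex identification step)."""
    n = max(na, nb)
    rows = []
    for k in range(n, len(y)):
        # regressor: [y[k-1], ..., y[k-na], u[k-1], ..., u[k-nb]]
        rows.append(np.concatenate([y[k - na:k][::-1], u[k - nb:k][::-1]]))
    Phi = np.array(rows)
    theta, *_ = np.linalg.lstsq(Phi, y[n:], rcond=None)
    return theta[:na], theta[na:]

# simulate a known stable ARX system, then recover its coefficients
rng = np.random.default_rng(1)
u = rng.standard_normal(400)
y = np.zeros(400)
for k in range(2, 400):
    y[k] = 1.2 * y[k - 1] - 0.5 * y[k - 2] + 0.8 * u[k - 1] + 0.3 * u[k - 2]

a, b = fit_arx(y, u)   # a ~ [1.2, -0.5], b ~ [0.8, 0.3]
```

Because the simulated data are noise-free, the fit recovers the true coefficients essentially exactly; with noisy data the same code returns the usual least-squares estimates, which is what makes the ARX structure convenient for MPC design.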
Tiwari, Shivendra N.; Padhi, Radhakant
2018-01-01
Following the philosophy of adaptive optimal control, a neural network-based state feedback optimal control synthesis approach is presented in this paper. First, accounting for a nominal system model, a single network adaptive critic (SNAC) based multi-layered neural network (called NN1) is synthesised offline. Next, another linear-in-weight neural network (called NN2) is trained online and augmented to NN1 in such a manner that their combined output represents the desired optimal costate for the actual plant. To do this, the nominal model needs to be updated online to adapt to the actual plant, which is done by synthesising yet another linear-in-weight neural network (called NN3) online. Training of NN3 is done by utilising the error information between the nominal and actual states and carrying out the necessary Lyapunov stability analysis using a Sobolev norm based Lyapunov function. This helps in training NN2 successfully to capture the required optimal relationship. The overall architecture is named 'Dynamically Re-optimised Single Network Adaptive Critic (DR-SNAC)'. Numerical results for two motivating illustrative problems are presented, including a comparison with the closed-form solution for one problem, which clearly demonstrates the effectiveness and benefit of the proposed approach.
Star, L.
2008-01-01
The aim of the project ‘The genetics of robustness in laying hens’ was to investigate the nature and regulation of robustness in laying hens under sub-optimal conditions and the possibility of increasing robustness through animal breeding without loss of production. At the start of the project, a robust
System Approach of Logistic Costs Optimization Solution in Supply Chain
Majerčák, Peter; Masárová, Gabriela; Buc, Daniel; Majerčáková, Eva
2013-01-01
This paper focuses on the possibility of using cost simulation in the supply chain, where costs are at a relatively high level. Our goal is to determine the costs using logistic costs optimization, which must necessarily be used in business activities in supply chain management. The paper emphasizes the need to perform optimization across the whole supply chain rather than in isolation. We compare the classic approach, in which every part tracks its costs in isolation and tries to minimize them, with the system (l...
Solving Unconstrained Global Optimization Problems via Hybrid Swarm Intelligence Approaches
Directory of Open Access Journals (Sweden)
Jui-Yu Wu
2013-01-01
Full Text Available Stochastic global optimization (SGO) algorithms such as the particle swarm optimization (PSO) approach have become popular for solving unconstrained global optimization (UGO) problems. The PSO approach, which belongs to the swarm intelligence domain, does not require gradient information, enabling it to overcome the gradient-dependence of traditional nonlinear programming methods. Unfortunately, PSO algorithm implementation and performance depend on several parameters, such as the cognitive parameter, social parameter, and constriction coefficient, which are tuned by trial and error. To reduce the parametrization of a PSO method, this work presents two efficient hybrid SGO approaches, namely, a real-coded genetic algorithm-based PSO (RGA-PSO) method and an artificial immune algorithm-based PSO (AIA-PSO) method. The specific parameters of the internal PSO algorithm are optimized using the external RGA and AIA approaches, and then the internal PSO algorithm is applied to solve UGO problems. The performances of the proposed RGA-PSO and AIA-PSO algorithms are then evaluated using a set of benchmark UGO problems. Numerical results indicate that, besides their ability to converge to a global minimum for each test UGO problem, the proposed RGA-PSO and AIA-PSO algorithms outperform many hybrid SGO algorithms. Thus, the RGA-PSO and AIA-PSO approaches can be considered alternative SGO approaches for solving standard-dimensional UGO problems.
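For reference, a bare-bones gbest PSO of the kind the hybrid methods wrap. The inertia and acceleration coefficients shown are common textbook (Clerc-Kennedy constriction) values, not those tuned by the RGA/AIA outer loop:

```python
import numpy as np

def pso(f, dim, n_particles=30, iters=200, seed=0,
        w=0.7298, c1=1.49618, c2=1.49618, bounds=(-5.0, 5.0)):
    """Global-best particle swarm optimization of f over a box."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    x = rng.uniform(lo, hi, (n_particles, dim))   # positions
    v = np.zeros((n_particles, dim))              # velocities
    pbest = x.copy()                              # personal bests
    pbest_val = np.apply_along_axis(f, 1, x)
    g = pbest[pbest_val.argmin()].copy()          # global best
    g_val = pbest_val.min()
    for _ in range(iters):
        r1 = rng.random((n_particles, dim))
        r2 = rng.random((n_particles, dim))
        # velocity update: inertia + cognitive pull + social pull
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        val = np.apply_along_axis(f, 1, x)
        improved = val < pbest_val
        pbest[improved] = x[improved]
        pbest_val[improved] = val[improved]
        if pbest_val.min() < g_val:
            g_val = pbest_val.min()
            g = pbest[pbest_val.argmin()].copy()
    return g, g_val

sphere = lambda z: float(np.sum(z * z))   # benchmark with global minimum 0 at origin
best_x, best_val = pso(sphere, dim=2)
```

On the 2-D sphere benchmark the swarm converges to the global minimum at the origin; the RGA-PSO/AIA-PSO idea is to let an outer optimizer choose w, c1 and c2 instead of fixing them by trial and error.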
Design of pressure vessels using shape optimization: An integrated approach
Energy Technology Data Exchange (ETDEWEB)
Carbonari, R.C., E-mail: ronny@usp.br [Department of Mechatronic Engineering, Escola Politecnica da Universidade de Sao Paulo, Av. Prof. Mello Moraes, 2231 Sao Paulo, SP 05508-900 (Brazil); Munoz-Rojas, P.A., E-mail: pablo@joinville.udesc.br [Department of Mechanical Engineering, Universidade do Estado de Santa Catarina, Bom Retiro, Joinville, SC 89223-100 (Brazil); Andrade, E.Q., E-mail: edmundoq@petrobras.com.br [CENPES, PDP/Metodos Cientificos, Petrobras (Brazil); Paulino, G.H., E-mail: paulino@uiuc.edu [Newmark Laboratory, Department of Civil and Environmental Engineering, University of Illinois at Urbana-Champaign, 205 North Mathews Av., Urbana, IL 61801 (United States); Department of Mechanical Science and Engineering, University of Illinois at Urbana-Champaign, 158 Mechanical Engineering Building, 1206 West Green Street, Urbana, IL 61801-2906 (United States); Nishimoto, K., E-mail: knishimo@usp.br [Department of Naval Architecture and Ocean Engineering, Escola Politecnica da Universidade de Sao Paulo, Av. Prof. Mello Moraes, 2231 Sao Paulo, SP 05508-900 (Brazil); Silva, E.C.N., E-mail: ecnsilva@usp.br [Department of Mechatronic Engineering, Escola Politecnica da Universidade de Sao Paulo, Av. Prof. Mello Moraes, 2231 Sao Paulo, SP 05508-900 (Brazil)
2011-05-15
Previous papers related to the optimization of pressure vessels have considered the optimization of the nozzle independently from the dished end. This approach generates problems such as thickness variation from nozzle to dished end (the coupling cylindrical region) and, as a consequence, reduces the optimality of the final result, which may also be influenced by the boundary conditions. Thus, this work discusses shape optimization of axisymmetric pressure vessels using an integrated approach in which the entire pressure vessel model is used in conjunction with a multi-objective function that aims to minimize the von Mises mechanical stress from nozzle to head. Representative examples are examined and solutions obtained for the entire vessel considering temperature and pressure loading. It is noteworthy that shapes different from the usual ones are obtained. Even though such shapes may not be profitable with present manufacturing processes, they may be competitive for future manufacturing technologies and contribute to a better understanding of the actual influence of shape on the behavior of pressure vessels. - Highlights: > Shape optimization of the entire pressure vessel using an integrated approach. > Increasing the number of spline knots improves convergence stability. > The null-angle condition gives lower stress values, resulting in a better design. > The cylinder stresses are very sensitive to the cylinder length. > Shape optimization of the entire vessel must take the cylinder length into account.
Computational Approaches to Simulation and Optimization of Global Aircraft Trajectories
Ng, Hok Kwan; Sridhar, Banavar
2016-01-01
This study examines three possible approaches to improving the speed of generating wind-optimal routes for air traffic at the national or global level: (a) using the resources of a supercomputer, (b) running the computations on multiple commercially available computers and (c) implementing the same algorithms in NASA's Future ATM Concepts Evaluation Tool (FACET); each is compared with a standard implementation run on a single CPU. Wind-optimal aircraft trajectories are computed using global air traffic schedules. The run time and wait time on the supercomputer for trajectory optimization using various numbers of CPUs, ranging from 80 to 10,240 units, are compared with the total computational time for running the same computation on a single desktop computer and on multiple commercially available computers, to assess the potential computational enhancement through parallel processing on computer clusters. This study also re-implements the trajectory optimization algorithm for further reduction of computational time through algorithm modifications and integrates it with FACET to facilitate the use of the new features, which calculate time-optimal routes between worldwide airport pairs in a wind field for use with existing FACET applications. The implementations of the trajectory optimization algorithms use the MATLAB, Python, and Java programming languages. The performance evaluations compare computational efficiency and are based on the potential application of the optimized trajectories. The paper shows that, in the absence of special privileges on a supercomputer, a cluster of commercially available computers provides a feasible approach for national and global air traffic system studies.
Zeng, Baoping; Wang, Chao; Zhang, Yu; Gong, Yajun; Hu, Sanbao
2017-12-01
Joint clearances and friction characteristics significantly influence mechanism vibration. In a clearance joint, the shaft and bearing collide, producing a dynamic normal contact force and a tangential Coulomb friction force while the mechanism works, so the whole system may vibrate. Moreover, under these dynamic forces the mechanism passes from free movement into contact-impact with an impact-force constraint, and the mechanism topology also changes. The constraint relationship between joints is established through a repeated, complex nonlinear dynamic process (idle stroke - contact-impact - elastic compression - rebound - impact relief - idle-stroke movement - contact-impact). Analysing the vibration characteristics of joint parts therefore remains a challenging open task. The dynamic equations of a mechanism with clearance form a set of strongly coupled, high-dimensional, time-varying nonlinear differential equations that are very difficult to solve. Moreover, the chaotic motions in impact and vibration due to clearance are very sensitive to initial values, making high-precision simulation and prediction of the dynamic behaviour even more difficult; in addition, subsequent wear necessarily causes the clearance parameters to fluctuate, which is a primary factor in vibration of the mechanical system. In this study, a dynamic model of the device for opening the deepwater robot cabin door with joint clearance was established using the finite element method, and its vibration characteristics were analysed. A response model was then built using the DOE (design of experiments) method, and a robust optimization design was performed on the joint clearance sizes and the friction coefficient variation range, so that the optimization results may serve as reference data for selecting bearings
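The idle-stroke/contact-impact cycle described above can be illustrated with a one-degree-of-freedom sketch: a journal flies freely inside the radial clearance and meets a spring-damper (Kelvin-Voigt) contact force at the bearing wall. All constants below are illustrative and unrelated to the paper's deepwater door mechanism:

```python
import math

def simulate_clearance_joint(c=1e-3, k=1e6, d=50.0, m=1.0, f_ext=10.0,
                             dt=1e-5, steps=20000):
    """1-DOF journal-in-bearing sketch: free flight while |x| < c,
    Kelvin-Voigt spring-damper contact once the clearance c is crossed.
    Returns the peak excursion and the step counts of each phase."""
    x, v = 0.0, 0.0
    contact_steps = free_steps = 0
    x_max = 0.0
    for _ in range(steps):
        pen = abs(x) - c                       # penetration beyond the clearance
        if pen > 0.0:                          # contact-impact phase
            f_c = -k * pen * math.copysign(1.0, x) - d * v
            contact_steps += 1
        else:                                  # idle-stroke (free-flight) phase
            f_c = 0.0
            free_steps += 1
        a = (f_ext + f_c) / m                  # constant load pushes the journal out
        v += a * dt                            # semi-implicit Euler integration
        x += v * dt
        x_max = max(x_max, abs(x))
    return x_max, contact_steps, free_steps

x_max, contact_steps, free_steps = simulate_clearance_joint()
```

Each impact dissipates energy through the contact damper, so successive rebounds decay and the journal eventually rides the wall; the alternation between free-flight and contact phases is exactly the topology change that makes the full equations stiff and hard to integrate.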
Vector-model-supported approach in prostate plan optimization
International Nuclear Information System (INIS)
Liu, Eva Sau Fan; Wu, Vincent Wing Cheung; Harris, Benjamin; Lehman, Margot; Pryor, David; Chan, Lawrence Wing Chi
2017-01-01
The lengthy time consumed by traditional manual plan optimization can limit the use of step-and-shoot intensity-modulated radiotherapy/volumetric-modulated radiotherapy (S&S IMRT/VMAT). A vector model base for retrieving similar radiotherapy cases was developed with respect to the structural and physiologic features extracted from the Digital Imaging and Communications in Medicine (DICOM) files. Planning parameters were retrieved from the selected similar reference case and applied to the test case, bypassing the gradual adjustment of planning parameters and thereby reducing the planning time spent on the traditional trial-and-error manual optimization approach at the beginning of optimization. Each S&S IMRT/VMAT prostate reference database comprised 100 previously treated cases. Prostate cases were replanned with both traditional optimization and vector-model-supported optimization based on the oncologists' clinical dose prescriptions. A total of 360 plans, consisting of 30 cases each of S&S IMRT, 1-arc VMAT, and 2-arc VMAT plans, including first optimization and final optimization with and without vector-model-supported optimization, were compared using the 2-sided t-test and the paired Wilcoxon signed rank test, with a significance level of 0.05 and a false discovery rate of less than 0.05. For S&S IMRT, 1-arc VMAT, and 2-arc VMAT prostate plans, vector-model-supported optimization significantly reduced planning time and iteration count, by almost 50%. When the first optimization plans were compared, 2-arc VMAT prostate plans had better plan quality than 1-arc VMAT plans. The volume receiving 35 Gy in the femoral head for 2-arc VMAT plans was reduced with vector-model-supported optimization compared with the traditional manual optimization approach. Otherwise, the quality of plans from both approaches was comparable. Vector-model-supported optimization was shown to offer much shorter planning time and fewer iterations
Vector-model-supported approach in prostate plan optimization
Energy Technology Data Exchange (ETDEWEB)
Liu, Eva Sau Fan [Department of Radiation Oncology, Princess Alexandra Hospital, Brisbane (Australia); Department of Health Technology and Informatics, The Hong Kong Polytechnic University (Hong Kong); Wu, Vincent Wing Cheung [Department of Health Technology and Informatics, The Hong Kong Polytechnic University (Hong Kong); Harris, Benjamin [Department of Radiation Oncology, Princess Alexandra Hospital, Brisbane (Australia); Lehman, Margot; Pryor, David [Department of Radiation Oncology, Princess Alexandra Hospital, Brisbane (Australia); School of Medicine, University of Queensland (Australia); Chan, Lawrence Wing Chi, E-mail: wing.chi.chan@polyu.edu.hk [Department of Health Technology and Informatics, The Hong Kong Polytechnic University (Hong Kong)
2017-07-01
The lengthy time consumed by traditional manual plan optimization can limit the use of step-and-shoot intensity-modulated radiotherapy/volumetric-modulated radiotherapy (S&S IMRT/VMAT). A vector model base for retrieving similar radiotherapy cases was developed with respect to the structural and physiologic features extracted from the Digital Imaging and Communications in Medicine (DICOM) files. Planning parameters were retrieved from the selected similar reference case and applied to the test case, bypassing the gradual adjustment of planning parameters and thereby reducing the planning time spent on the traditional trial-and-error manual optimization approach at the beginning of optimization. Each S&S IMRT/VMAT prostate reference database comprised 100 previously treated cases. Prostate cases were replanned with both traditional optimization and vector-model-supported optimization based on the oncologists' clinical dose prescriptions. A total of 360 plans, consisting of 30 cases each of S&S IMRT, 1-arc VMAT, and 2-arc VMAT plans, including first optimization and final optimization with and without vector-model-supported optimization, were compared using the 2-sided t-test and the paired Wilcoxon signed rank test, with a significance level of 0.05 and a false discovery rate of less than 0.05. For S&S IMRT, 1-arc VMAT, and 2-arc VMAT prostate plans, vector-model-supported optimization significantly reduced planning time and iteration count, by almost 50%. When the first optimization plans were compared, 2-arc VMAT prostate plans had better plan quality than 1-arc VMAT plans. The volume receiving 35 Gy in the femoral head for 2-arc VMAT plans was reduced with vector-model-supported optimization compared with the traditional manual optimization approach. Otherwise, the quality of plans from both approaches was comparable. Vector-model-supported optimization was shown to offer much shorter planning time and fewer iterations
Parasuraman, Ramviyas; Fabry, Thomas; Molinari, Luca; Kershaw, Keith; Di Castro, Mario; Masi, Alessandro; Ferre, Manuel
2014-01-01
The reliability of wireless communication in a network of mobile wireless robot nodes depends on the received radio signal strength (RSS). When the robot nodes are deployed in hostile environments with ionizing radiations (such as in some scientific facilities), there is a possibility that some electronic components may fail randomly (due to radiation effects), which causes problems in wireless connectivity. The objective of this paper is to maximize robot mission capabilities by maximizing the wireless network capacity and to reduce the risk of communication failure. Thus, in this paper, we consider a multi-node wireless tethering structure called the “server-relay-client” framework that uses (multiple) relay nodes in between a server and a client node. We propose a robust stochastic optimization (RSO) algorithm using a multi-sensor-based RSS sampling method at the relay nodes to efficiently improve and balance the RSS between the source and client nodes to improve the network capacity and to provide redundant networking abilities. We use pre-processing techniques, such as exponential moving averaging and spatial averaging filters on the RSS data for smoothing. We apply a receiver spatial diversity concept and employ a position controller on the relay node using a stochastic gradient ascent method for self-positioning the relay node to achieve the RSS balancing task. The effectiveness of the proposed solution is validated by extensive simulations and field experiments in CERN facilities. For the field trials, we used a youBot mobile robot platform as the relay node, and two stand-alone Raspberry Pi computers as the client and server nodes. The algorithm has been proven to be robust to noise in the radio signals and to work effectively even under non-line-of-sight conditions. PMID:25615734
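The RSS-balancing idea can be sketched in one dimension: the relay probes the smoothed RSS towards each endpoint and climbs the weaker link by stochastic gradient ascent until both links are balanced. The path-loss model, noise level and step sizes below are illustrative, not the values used in the CERN trials:

```python
import numpy as np

rng = np.random.default_rng(7)

def rss(d):
    """Log-distance path-loss model for RSS in dBm (illustrative constants)."""
    return -40.0 - 20.0 * np.log10(max(d, 0.1))

def measure(x, anchor):
    """Noisy RSS reading at relay position x towards an anchor node,
    smoothed by averaging (a stand-in for the paper's EMA/spatial filters)."""
    samples = rss(abs(x - anchor)) + 0.5 * rng.standard_normal(20)
    return samples.mean()

def balance_relay(server=0.0, client=10.0, x=1.0, step=0.1, iters=300):
    """Stochastic gradient ascent on J(x) = min(RSS_server, RSS_client):
    improving the weaker link balances the two links."""
    eps = 0.5   # finite-difference probe distance
    for _ in range(iters):
        j = lambda p: min(measure(p, server), measure(p, client))
        x += step * (j(x + eps) - j(x - eps)) / (2 * eps)
    return x

x_final = balance_relay()   # drifts toward the midpoint between server and client
```

With a symmetric path-loss model the balanced position is the midpoint between the two endpoints; the measurement averaging keeps the gradient estimates usable despite the radio noise.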
Directory of Open Access Journals (Sweden)
Ramviyas Parasuraman
2014-12-01
Full Text Available The reliability of wireless communication in a network of mobile wireless robot nodes depends on the received radio signal strength (RSS). When the robot nodes are deployed in hostile environments with ionizing radiations (such as in some scientific facilities), there is a possibility that some electronic components may fail randomly (due to radiation effects), which causes problems in wireless connectivity. The objective of this paper is to maximize robot mission capabilities by maximizing the wireless network capacity and to reduce the risk of communication failure. Thus, in this paper, we consider a multi-node wireless tethering structure called the “server-relay-client” framework that uses (multiple) relay nodes in between a server and a client node. We propose a robust stochastic optimization (RSO) algorithm using a multi-sensor-based RSS sampling method at the relay nodes to efficiently improve and balance the RSS between the source and client nodes to improve the network capacity and to provide redundant networking abilities. We use pre-processing techniques, such as exponential moving averaging and spatial averaging filters on the RSS data for smoothing. We apply a receiver spatial diversity concept and employ a position controller on the relay node using a stochastic gradient ascent method for self-positioning the relay node to achieve the RSS balancing task. The effectiveness of the proposed solution is validated by extensive simulations and field experiments in CERN facilities. For the field trials, we used a youBot mobile robot platform as the relay node, and two stand-alone Raspberry Pi computers as the client and server nodes. The algorithm has been proven to be robust to noise in the radio signals and to work effectively even under non-line-of-sight conditions.
Directory of Open Access Journals (Sweden)
Hanning Chen
2014-01-01
Full Text Available The development of radio frequency identification (RFID) technology generates the challenging RFID network planning (RNP) problem, which needs to be solved in order to operate a large-scale RFID network in an optimal fashion. RNP involves many objectives and constraints and has been proven to be an NP-hard multi-objective problem. The application of evolutionary algorithms (EA) and swarm intelligence (SI) for solving multiobjective RNP (MORNP) has gained significant attention in the literature, but these algorithms always transform multiple objectives into a single objective via a weighted-coefficient approach. In this paper, we use multiobjective EA and SI algorithms to find all the Pareto optimal solutions and to achieve the optimal planning solutions by simultaneously optimizing four conflicting objectives in MORNP, instead of transforming the multiobjective functions into a single objective function. The experiment presents an exhaustive comparison of three successful multiobjective EA and SI algorithms, namely, the recently developed multiobjective artificial bee colony algorithm (MOABC), the nondominated sorting genetic algorithm II (NSGA-II), and the multiobjective particle swarm optimization (MOPSO), on MORNP instances of different nature, namely, the two-objective and three-objective MORNP. Simulation results show that MOABC is superior to NSGA-II and MOPSO for planning RFID networks, in terms of optimization accuracy and computational robustness.
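What separates true multiobjective search from weighted-sum scalarization is the notion of Pareto dominance. A minimal non-dominated filter (for minimization; the sample points are illustrative):

```python
def pareto_front(points):
    """Return the non-dominated subset for minimization:
    q dominates p iff q <= p in every objective and q < p in at least one."""
    front = []
    for p in points:
        dominated = any(
            all(o <= v for o, v in zip(q, p)) and any(o < v for o, v in zip(q, p))
            for q in points
        )
        if not dominated:
            front.append(p)
    return front

# two-objective example: (3, 4) is dominated by (2, 3), (5, 5) by (1, 5)
pts = [(1, 5), (2, 3), (4, 1), (3, 4), (5, 5)]
front = pareto_front(pts)
```

Algorithms such as MOABC, NSGA-II and MOPSO maintain exactly this kind of non-dominated set, so the planner gets the whole trade-off surface between the conflicting RNP objectives rather than a single weighted compromise.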
Meng, Xiaoli; Wang, Heng; Liu, Bingbing
2017-09-18
Precise and robust localization in a large-scale outdoor environment is essential for an autonomous vehicle. In order to improve the performance of the fusion of GNSS (Global Navigation Satellite System)/IMU (Inertial Measurement Unit)/DMI (Distance-Measuring Instruments), a multi-constraint fault detection approach is proposed to smooth the vehicle locations in spite of GNSS jumps. Furthermore, the lateral localization error is compensated by the point cloud-based lateral localization method proposed in this paper. Experimental results verify the proposed algorithms, showing that they are capable of providing precise and robust vehicle localization.
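The abstract does not detail the individual constraints of the fault detector; a single-constraint analogue is a reachability gate that rejects any GNSS fix that jumps farther from the dead-reckoned position than the vehicle could have travelled. Names, values and the margin below are hypothetical:

```python
import math

def gate_gnss(prev_pos, gnss_pos, speed, dt, margin=2.0):
    """Accept a GNSS fix only if it lies within the region reachable from the
    previous position given the odometry/IMU speed (a simple motion constraint).
    `margin` (metres) is a hypothetical allowance for sensor noise."""
    max_travel = abs(speed) * dt + margin
    jump = math.hypot(gnss_pos[0] - prev_pos[0], gnss_pos[1] - prev_pos[1])
    return jump <= max_travel

ok = gate_gnss((0.0, 0.0), (1.0, 1.0), speed=10.0, dt=0.1)     # plausible fix
bad = gate_gnss((0.0, 0.0), (50.0, 0.0), speed=10.0, dt=0.1)   # GNSS jump: reject
```

When a fix is rejected, the fused estimate falls back on IMU/DMI dead reckoning, which is what smooths the trajectory across GNSS jumps; the paper's multi-constraint version combines several such checks.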
Directory of Open Access Journals (Sweden)
Xiaoli Meng
2017-09-01
Full Text Available Precise and robust localization in a large-scale outdoor environment is essential for an autonomous vehicle. In order to improve the performance of the fusion of GNSS (Global Navigation Satellite System)/IMU (Inertial Measurement Unit)/DMI (Distance-Measuring Instruments), a multi-constraint fault detection approach is proposed to smooth the vehicle locations in spite of GNSS jumps. Furthermore, the lateral localization error is compensated by the point cloud-based lateral localization method proposed in this paper. Experimental results verify the proposed algorithms, showing that they are capable of providing precise and robust vehicle localization.