WorldWideScience

Sample records for robust format optimized

  1. Scope Oriented Thermoeconomic analysis of energy systems. Part II: Formation Structure of Optimality for robust design

    International Nuclear Information System (INIS)

    Piacentino, Antonio; Cardona, Ennio

    2010-01-01

    This paper is Part II of a two-part work. In Part I the fundamentals of Scope Oriented Thermoeconomics were introduced, showing a scarce potential for the cost accounting of existing plants; in this Part II the same concepts are applied to the optimization of a small set of design variables for a vapour compression chiller. The method overcomes the limitation of most conventional optimization techniques, which are usually based on hermetic algorithms that do not enable the energy analyst to recognize all the margins for improvement. Scope Oriented Thermoeconomic optimization allows us to disassemble the optimization process, thus recognizing the Formation Structure of Optimality, i.e. the specific influence of each thermodynamic and economic parameter on the path toward the optimal design. Finally, the potential applications of such an in-depth understanding of the inner driving forces of the optimization are discussed in the paper, with a particular focus on the sensitivity analysis with respect to variations in energy and capital costs and on actual operation-oriented design.

  2. Robust Optimization Approach for Design of a Dynamic Cell Formation Considering Labor Utilization: Bi-objective Mathematical Model

    Directory of Open Access Journals (Sweden)

    Hiwa Farughi

    2016-05-01

    In this paper, robust optimization of a bi-objective mathematical model for a dynamic cell formation problem considering labor utilization under data uncertainty is carried out. The robust approach is used to reduce the effect of fluctuations of the uncertain parameters with regard to all possible future scenarios. In this research, the cost parameters of cell formation and demand fluctuations are subject to uncertainty, and a mixed-integer programming (MIP) model is developed to formulate the corresponding robust dynamic cell formation problem. The problem is then transformed into a bi-objective linear one. The first objective function seeks to minimize the relevant costs of the problem, including machine procurement and relocation costs, machine variable cost, inter-cell and intra-cell movement costs, overtime cost, labor shifting cost between cells, machine maintenance cost, and part inventory holding cost. The second objective function seeks to minimize the total man-hour deviations between cells, that is, the labor utilization of the model.

  3. Robust Portfolio Optimization Using Pseudodistances.

    Science.gov (United States)

    Toma, Aida; Leoni-Aubin, Samuela

    2015-01-01

    The presence of outliers in financial asset returns is a frequently occurring phenomenon which may lead to unreliable mean-variance optimized portfolios. This fact is due to the unbounded influence that outliers can have on the mean returns and covariance estimators that are inputs in the optimization procedure. In this paper we present robust estimators of the mean and covariance matrix obtained by minimizing an empirical version of a pseudodistance between the assumed model and the true model underlying the data. We prove and discuss theoretical properties of these estimators, such as affine equivariance, B-robustness, asymptotic normality and asymptotic relative efficiency. These estimators can be easily used in place of the classical estimators, thereby providing robust optimized portfolios. A Monte Carlo simulation study and applications to real data show the advantages of the proposed approach. We study both the in-sample and out-of-sample performance of the proposed robust portfolios, comparing them with some other portfolios known in the literature.
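
    A minimal sketch of the plug-in idea described above: robust location/scatter estimates replace the classical ones in a global minimum-variance portfolio. The MinCovDet estimator from scikit-learn is used here only as a readily available robust stand-in, not the pseudodistance-based estimator proposed in the paper, and the return data are simulated.

    ```python
    # Classical vs. robust plug-in estimates for a global minimum-variance portfolio.
    import numpy as np
    from sklearn.covariance import EmpiricalCovariance, MinCovDet

    rng = np.random.default_rng(1)
    returns = rng.normal(0.001, 0.02, size=(250, 4))   # simulated daily returns, 4 assets
    returns[:5] *= 15                                   # inject a few gross outliers

    def min_variance_weights(cov):
        # w = Sigma^{-1} 1 / (1' Sigma^{-1} 1)
        ones = np.ones(cov.shape[0])
        w = np.linalg.solve(cov, ones)
        return w / w.sum()

    w_classic = min_variance_weights(EmpiricalCovariance().fit(returns).covariance_)
    w_robust = min_variance_weights(MinCovDet(random_state=0).fit(returns).covariance_)
    print("classical weights:", w_classic)
    print("robust weights   :", w_robust)
    ```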

  4. Robust Decentralized Formation Flight Control

    Directory of Open Access Journals (Sweden)

    Zhao Weihua

    2011-01-01

    Motivated by the idea of multiplexed model predictive control (MMPC), this paper introduces a new framework for unmanned aerial vehicle (UAV) formation flight and coordination. Formulated using the MMPC approach, the whole centralized formation flight system is considered as a linear periodic system with the control inputs of each UAV subsystem as its periodic inputs. Divided into decentralized subsystems, the whole formation flight system is guaranteed stable if proper terminal costs and terminal constraints are added to each decentralized MPC formulation of the UAV subsystems. The decentralized robust MPC formulation for each UAV subsystem with bounded input disturbances and model uncertainties is also presented. Furthermore, an obstacle avoidance control scheme for obstacles of any shape and size, including those not known a priori, is integrated under the unified MPC framework. The results from simulations demonstrate that the proposed framework can successfully achieve robust collision-free formation flights.

  6. Robust boosting via convex optimization

    Science.gov (United States)

    Rätsch, Gunnar

    2001-12-01

    In this work we consider statistical learning problems. A learning machine aims to extract information from a set of training examples such that it is able to predict the associated label on unseen examples. We consider the case where the resulting classification or regression rule is a combination of simple rules, also called base hypotheses. The so-called boosting algorithms iteratively find a weighted linear combination of base hypotheses that predict well on unseen data. We address the following issues:
    o The statistical learning theory framework for analyzing boosting methods. We study learning-theoretic guarantees on the prediction performance on unseen examples. Recently, large margin classification techniques emerged as a practical result of the theory of generalization, in particular Boosting and Support Vector Machines. A large margin implies good generalization performance. Hence, we analyze how large the margins in boosting are and find an improved algorithm that is able to generate the maximum margin solution.
    o How can boosting methods be related to mathematical optimization techniques? To analyze the properties of the resulting classification or regression rule, it is of high importance to understand whether and under which conditions boosting converges. We show that boosting can be used to solve large-scale constrained optimization problems whose solutions are well characterizable. To show this, we relate boosting methods to methods known from mathematical optimization, and derive convergence guarantees for a quite general family of boosting algorithms.
    o How to make boosting noise robust? One of the problems of current boosting techniques is that they are sensitive to noise in the training sample. In order to make boosting robust, we transfer the soft margin idea from support vector learning to boosting (a generic program of this kind is sketched after this list). We develop theoretically motivated regularized algorithms that exhibit high noise robustness.
    o How to adapt boosting to regression problems.
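
    The soft-margin idea mentioned in the third item can be written, for a fixed pool of base hypotheses h_1, ..., h_T, as a linear program of the following generic (LPBoost-style) form; this is the standard textbook formulation, not necessarily the exact program used in the thesis:

    ```latex
    \begin{aligned}
    \max_{\rho,\,\alpha,\,\xi}\quad & \rho - C \sum_{i=1}^{N} \xi_i \\
    \text{s.t.}\quad & y_i \sum_{t=1}^{T} \alpha_t\, h_t(x_i) \;\ge\; \rho - \xi_i, \qquad i = 1,\dots,N,\\
    & \sum_{t=1}^{T} \alpha_t = 1, \qquad \alpha_t \ge 0, \qquad \xi_i \ge 0 .
    \end{aligned}
    ```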

  7. Evolution strategies for robust optimization

    NARCIS (Netherlands)

    Kruisselbrink, Johannes Willem

    2012-01-01

    Real-world (black-box) optimization problems often involve various types of uncertainties and noise emerging in different parts of the optimization problem. When this is not accounted for, optimization may fail or may yield solutions that are optimal in the classical strict notion of optimality, but

  8. A Comparative Theoretical and Computational Study on Robust Counterpart Optimization: I. Robust Linear Optimization and Robust Mixed Integer Linear Optimization

    Science.gov (United States)

    Li, Zukui; Ding, Ran; Floudas, Christodoulos A.

    2011-01-01

    Robust counterpart optimization techniques for linear optimization and mixed integer linear optimization problems are studied in this paper. Different uncertainty sets, including those studied in the literature (i.e., interval set; combined interval and ellipsoidal set; combined interval and polyhedral set) and new ones (i.e., adjustable box; pure ellipsoidal; pure polyhedral; combined interval, ellipsoidal, and polyhedral set), are studied in this work and their geometric relationship is discussed. For uncertainty in the left-hand side, right-hand side, and objective function of the optimization problems, the robust counterpart optimization formulations induced by those different uncertainty sets are derived. Numerical studies are performed to compare the solutions of the robust counterpart optimization models, and applications to a refinery production planning and a batch process scheduling problem are presented. PMID:21935263
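
    As an illustration of the general recipe, consider the simplest case, the interval (box) uncertainty set: each coefficient a_j of a linear constraint may deviate by at most ±â_j. Protecting against the worst case gives the standard robust counterpart below (the paper derives analogous counterparts for ellipsoidal, polyhedral, and combined sets):

    ```latex
    \sum_j \tilde a_j\, x_j \le b \quad \text{for all } \tilde a_j \in [a_j - \hat a_j,\; a_j + \hat a_j]
    \qquad\Longleftrightarrow\qquad
    \sum_j a_j\, x_j \;+\; \sum_j \hat a_j\, |x_j| \;\le\; b .
    ```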

  9. Primal and dual approaches to adjustable robust optimization

    NARCIS (Netherlands)

    de Ruiter, Frans

    2018-01-01

    Robust optimization has become an important paradigm to deal with optimization under uncertainty. Adjustable robust optimization is an extension that deals with multistage problems. This thesis starts with a short but comprehensive introduction to adjustable robust optimization. Then the two

  10. Robust Optimization of Database Queries

    Indian Academy of Sciences (India)

    JAYANT

    2011-07-06

    Jul 6, 2011 ... Based on first-order logic. Edgar ... Cost-based Query Optimizer's choice of execution plan ... Determines the values of goods shipped between nations in a time period (select ...)

  11. Efficient reanalysis techniques for robust topology optimization

    DEFF Research Database (Denmark)

    Amir, Oded; Sigmund, Ole; Lazarov, Boyan Stefanov

    2012-01-01

    efficient robust topology optimization procedures based on reanalysis techniques. The approach is demonstrated on two compliant mechanism design problems where robust design is achieved by employing either a worst case formulation or a stochastic formulation. It is shown that the time spent on finite...

  12. Robust quantum optimizer with full connectivity.

    Science.gov (United States)

    Nigg, Simon E; Lörch, Niels; Tiwari, Rakesh P

    2017-04-01

    Quantum phenomena have the potential to speed up the solution of hard optimization problems. For example, quantum annealing, based on the quantum tunneling effect, has recently been shown to scale exponentially better with system size than classical simulated annealing. However, current realizations of quantum annealers with superconducting qubits face two major challenges. First, the connectivity between the qubits is limited, excluding many optimization problems from a direct implementation. Second, decoherence degrades the success probability of the optimization. We address both of these shortcomings and propose an architecture in which the qubits are robustly encoded in continuous variable degrees of freedom. By leveraging the phenomenon of flux quantization, all-to-all connectivity with sufficient tunability to implement many relevant optimization problems is obtained without overhead. Furthermore, we demonstrate the robustness of this architecture by simulating the optimal solution of a small instance of the nondeterministic polynomial-time hard (NP-hard) and fully connected number partitioning problem in the presence of dissipation.
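
    A minimal classical illustration of the fully connected number partitioning problem mentioned above, written as an Ising-type cost over spins and checked by brute force (the instance is hypothetical; this is not the paper's continuous-variable quantum architecture, only the cost function such an optimizer would minimize).

    ```python
    # Number partitioning as an Ising-type cost: C(s) = (sum_i n_i * s_i)^2, s_i in {-1, +1}.
    from itertools import product

    numbers = [4, 7, 9, 1, 5]          # hypothetical small instance

    def cost(spins):
        # squared signed sum; zero means a perfectly balanced partition
        return sum(n * s for n, s in zip(numbers, spins)) ** 2

    best = min(product((-1, 1), repeat=len(numbers)), key=cost)
    print("best spin assignment:", best, "cost:", cost(best))
    ```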

  13. Robust Structured Control Design via LMI Optimization

    DEFF Research Database (Denmark)

    Adegas, Fabiano Daher; Stoustrup, Jakob

    2011-01-01

    This paper presents a new procedure for discrete-time robust structured control design. Parameter-dependent nonconvex conditions for stabilizable and induced L2-norm performance controllers are solved by an iterative linear matrix inequality (LMI) optimization. A wide class of controller structures, including decentralized controllers of any order, fixed-order dynamic output feedback, and static output feedback, can be designed robust to polytopic uncertainties. Stability is proven by a parameter-dependent Lyapunov function. Numerical examples on robust stability margins show that the proposed procedure can...

  14. Robust Portfolio Optimization using CAPM Approach

    Directory of Open Access Journals (Sweden)

    mohsen gharakhani

    2013-08-01

    In this paper, a new robust model of the multi-period portfolio problem is developed. One of the key concerns in any asset allocation problem is how to cope with uncertainty about future returns. There are several approaches in the literature for this purpose, including stochastic programming and robust optimization. Applying these techniques to the multi-period portfolio problem may increase the problem size to the point that the resulting model becomes intractable. In this paper, a novel approach is proposed to formulate the multi-period portfolio problem as an uncertain linear program, assuming that asset returns follow the single-index factor model. Robust optimization is then used to solve the problem. In order to evaluate the performance of the proposed model, a numerical example is presented using simulated data.
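
    For reference, the single-index factor model assumed above relates each asset return to a single market factor; in standard textbook notation (not specific to this paper):

    ```latex
    r_i = \alpha_i + \beta_i\, r_M + \varepsilon_i,
    \qquad
    \mathrm{E}[r_i] = \alpha_i + \beta_i\, \mathrm{E}[r_M],
    \qquad
    \mathrm{Cov}(r_i, r_j) = \beta_i \beta_j\, \sigma_M^2 + \delta_{ij}\, \sigma_{\varepsilon_i}^2 .
    ```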

  15. Robustizing Circuit Optimization using Huber Functions

    DEFF Research Database (Denmark)

    Bandler, John W.; Biernacki, Radek M.; Chen, Steve H.

    1993-01-01

    The authors introduce a novel approach to 'robustizing' microwave circuit optimization using Huber functions, both two-sided and one-sided. They compare Huber optimization with l1, l2, and minimax methods in the presence of faults, large and small measurement errors, bad starting points, and statistical uncertainties. They demonstrate FET statistical modeling, multiplexer optimization, analog fault location, and data fitting. They extend the Huber concept by introducing a 'one-sided' Huber function for large-scale optimization. For large-scale problems, the designer often attempts, by intuition, a preliminary optimization by selecting a small number of dominant variables. It is demonstrated, through multiplexer optimization, that the one-sided Huber function can be more effective and efficient than minimax in overcoming a bad starting point.

  16. Robust design optimization using the price of robustness, robust least squares and regularization methods

    Science.gov (United States)

    Bukhari, Hassan J.

    2017-12-01

    In this paper a framework for robust optimization of mechanical design problems and process systems with parametric uncertainty is presented using three different approaches. Robust optimization problems are formulated so that the optimal solution is robust, which means it is minimally sensitive to any perturbations in parameters. The first method uses the price of robustness approach, which assumes the uncertain parameters to be symmetric and bounded; the robustness of the design can be controlled by limiting the number of parameters that are allowed to perturb. The second method uses robust least squares to determine the optimal parameters when the data itself, rather than the parameters, is subject to perturbations. The last method manages uncertainty by restricting the perturbation of the parameters to improve sensitivity, similar to Tikhonov regularization. The methods are implemented on two sets of problems, one linear and the other non-linear. This methodology is compared with a prior method based on multiple Monte Carlo simulation runs, and the comparison shows that the approach presented in this paper results in better performance.
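
    A minimal sketch of the third approach, Tikhonov-style regularization of a least-squares fit; the data and regularization weight below are hypothetical, not the paper's benchmark problems.

    ```python
    # Ordinary least squares vs. Tikhonov (ridge) regularized least squares.
    import numpy as np

    rng = np.random.default_rng(0)
    A = rng.normal(size=(50, 5))
    x_true = np.array([1.0, -2.0, 0.5, 0.0, 3.0])
    b = A @ x_true + 0.1 * rng.normal(size=50)       # noisy observations

    lam = 0.1                                         # regularization weight
    x_ols = np.linalg.lstsq(A, b, rcond=None)[0]      # unregularized solution
    x_reg = np.linalg.solve(A.T @ A + lam * np.eye(5), A.T @ b)  # Tikhonov solution

    print("OLS     :", x_ols)
    print("Tikhonov:", x_reg)
    ```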

  17. Optimal interdependence enhances robustness of complex systems

    OpenAIRE

    Singh, R. K.; Sinha, Sitabhra

    2017-01-01

    While interdependent systems have usually been associated with increased fragility, we show that strengthening the interdependence between dynamical processes on different networks can make them more robust. By coupling the dynamics of networks that in isolation exhibit catastrophic collapse with extinction of nodal activity, we demonstrate system-wide persistence of activity for an optimal range of interdependence between the networks. This is related to the appearance of attractors of the g...

  18. Robust topology optimization accounting for geometric imperfections

    DEFF Research Database (Denmark)

    Schevenels, M.; Jansen, M.; Lombaert, Geert

    2013-01-01

    ... performance. As a consequence, the actual structure may be far from optimal. In this paper, a robust approach to topology optimization is presented, taking into account two types of geometric imperfections: variations of (1) the cross-sections and (2) the locations of structural elements. The first type is modeled by means of a scalar non-Gaussian random field, which is represented as a translation process. The underlying Gaussian field is simulated by means of the EOLE method. The second type of imperfections is modeled as a Gaussian vector-valued random field, which is simulated directly by means of the EOLE method. In each iteration of the optimization process, the relevant statistics of the structural response are evaluated by means of a Monte Carlo simulation. The proposed methodology is successfully applied to a test problem involving the design of a compliant mechanism (for the first type ...

  19. Automatic Synthesis of Robust and Optimal Controllers

    DEFF Research Database (Denmark)

    Cassez, Franck; Jessen, Jan Jacob; Larsen, Kim Guldstrand

    2009-01-01

    In this paper, we show how to apply recent tools for the automatic synthesis of robust and near-optimal controllers to a real industrial case study. We show how to use three different classes of models and their supporting existing tools, Uppaal-TiGA for synthesis, phaver for verification, and Simulink for simulation, in a complementary way. We believe that this case study shows that our tools have reached a level of maturity that allows us to tackle interesting and relevant industrial control problems.

  20. Robust Optimal Design of Quantum Electronic Devices

    Directory of Open Access Journals (Sweden)

    Ociel Morales

    2018-01-01

    We consider the optimal design of a sequence of quantum barriers in order to manufacture an electronic device at the nanoscale such that the dependence of its transmission coefficient on the bias voltage is linear. The technique presented here is easily adaptable to other response characteristics. There are two distinguishing features of our approach. First, the transmission coefficient is determined using a semiclassical approximation, so we can explicitly compute the gradient of the objective function. Second, in contrast with earlier treatments, manufacturing uncertainties are incorporated in the model through random variables; the optimal design problem is formulated in a probabilistic setting and then solved using a stochastic collocation method. As a measure of robustness, a weighted sum of the expectation and the variance of a least-squares performance metric is considered. Several simulations illustrate the proposed technique, which shows an improvement in accuracy of over 69% with respect to brute-force, Monte-Carlo-based methods.
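
    In generic notation (the symbols below are illustrative, not taken from the paper), the robust objective described above is a weighted mean-variance combination of a least-squares metric L over the design d and the random manufacturing parameters ξ, where T is the transmission coefficient at bias voltage V_k:

    ```latex
    J(d) \;=\; \mathrm{E}_{\xi}\!\left[ L(d,\xi) \right] \;+\; w\,\mathrm{Var}_{\xi}\!\left[ L(d,\xi) \right],
    \qquad
    L(d,\xi) \;=\; \sum_{k} \bigl( T(V_k;\, d, \xi) - T_{\mathrm{target}}(V_k) \bigr)^{2} .
    ```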

  1. Kinematically Optimal Robust Control of Redundant Manipulators

    Science.gov (United States)

    Galicki, M.

    2017-12-01

    This work deals with the problem of robust optimal task-space trajectory tracking subject to finite-time convergence. The kinematic and dynamic equations of a redundant manipulator are assumed to be uncertain. Moreover, globally unbounded disturbances are allowed to act on the manipulator when the end-effector tracks the trajectory. Furthermore, the movement is to be accomplished in such a way as to minimize both the manipulator torques and their oscillations, thus eliminating potential robot vibrations. Based on a suitably defined task-space non-singular terminal sliding vector variable and the Lyapunov stability theory, we derive a class of chattering-free, robust, kinematically optimal controllers, based on the estimation of the transpose Jacobian, which seem to be effective in counteracting both uncertain kinematics and dynamics, unbounded disturbances, and (possible) kinematic and/or algorithmic singularities met on the robot trajectory. Numerical simulations carried out for a redundant manipulator of the SCARA type, consisting of three revolute kinematic pairs and operating in a two-dimensional task space, illustrate the performance of the proposed controllers as well as comparisons with other well-known control schemes.

  2. Distributed Robust Optimization in Networked System.

    Science.gov (United States)

    Wang, Shengnan; Li, Chunguang

    2016-10-11

    In this paper, we consider a distributed robust optimization (DRO) problem, where multiple agents in a networked system cooperatively minimize a global convex objective function with respect to a global variable under global constraints. The objective function can be represented by a sum of local objective functions. The global constraints contain some uncertain parameters which are partially known and can be characterized by inequality constraints. After problem transformation, we adopt the Lagrangian primal-dual method to solve this problem. We prove that the primal and dual optimal solutions of the problem are restricted to some specific sets, and we give a method to construct these sets. Then, we propose a DRO algorithm to find the primal-dual optimal solutions of the Lagrangian function, which consists of a subgradient step, a projection step, and a diffusion step; in the projection step of the algorithm, the optimized variables are projected onto the specific sets to guarantee the boundedness of the subgradients. Convergence analysis and numerical simulations verifying the performance of the proposed algorithm are then provided. Furthermore, for the nonconvex DRO problem, the corresponding approach and algorithmic framework are also provided.

  3. Robust optimization based upon statistical theory.

    Science.gov (United States)

    Sobotta, B; Söhn, M; Alber, M

    2010-08-01

    Organ movement is still the biggest challenge in cancer treatment despite advances in online imaging. Due to the resulting geometric uncertainties, the delivered dose cannot be predicted precisely at treatment planning time. Consequently, all associated dose metrics (e.g., EUD and maxDose) are random variables with a patient-specific probability distribution. The method that the authors propose makes these distributions the basis of the optimization and evaluation process. The authors start from a model of motion derived from patient-specific imaging. On a multitude of geometry instances sampled from this model, a dose metric is evaluated. The resulting pdf of this dose metric is termed outcome distribution. The approach optimizes the shape of the outcome distribution based on its mean and variance. This is in contrast to the conventional optimization of a nominal value (e.g., PTV EUD) computed on a single geometry instance. The mean and variance allow for an estimate of the expected treatment outcome along with the residual uncertainty. Besides being applicable to the target, the proposed method also seamlessly includes the organs at risk (OARs). The likelihood that a given value of a metric is reached in the treatment is predicted quantitatively. This information reveals potential hazards that may occur during the course of the treatment, thus helping the expert to find the right balance between the risk of insufficient normal tissue sparing and the risk of insufficient tumor control. By feeding this information to the optimizer, outcome distributions can be obtained where the probability of exceeding a given OAR maximum and that of falling short of a given target goal can be minimized simultaneously. The method is applicable to any source of residual motion uncertainty in treatment delivery. Any model that quantifies organ movement and deformation in terms of probability distributions can be used as basis for the algorithm. Thus, it can generate dose

  4. Oracle-based online robust optimization via online learning

    NARCIS (Netherlands)

    Ben-Tal, A.; Hazan, E.; Koren, T.; Shie, M.

    2015-01-01

    Robust optimization is a common optimization framework under uncertainty when problem parameters are unknown, but it is known that they belong to some given uncertainty set. In the robust optimization framework, a min-max problem is solved wherein a solution is evaluated according to its performance

  5. PARAMETER COORDINATION AND ROBUST OPTIMIZATION FOR MULTIDISCIPLINARY DESIGN

    Institute of Scientific and Technical Information of China (English)

    HU Jie; PENG Yinghong; XIONG Guangleng

    2006-01-01

    A new parameter coordination and robust optimization approach for multidisciplinary design is presented. Firstly, a constraint network model is established to support engineering change, coordination and optimization. In this model, interval boxes are adopted to describe the uncertainty of design parameters quantitatively in order to enhance design robustness. Secondly, a parameter coordination method is presented to solve the constraint network model, monitor potential conflicts due to engineering changes, and obtain the consistent solution space corresponding to the given product specifications. Finally, the robust parameter optimization model is established, and a genetic algorithm is used to obtain the robust optimal parameters. An example of bogie design is analyzed to show that the scheme is effective.

  6. Optimal Robust Fault Detection for Linear Discrete Time Systems

    Directory of Open Access Journals (Sweden)

    Nike Liu

    2008-01-01

    This paper considers robust fault-detection problems for linear discrete time systems. It is shown that the optimal robust detection filters for several well-recognized robust fault-detection problems, such as the ℋ−/ℋ∞, ℋ2/ℋ∞, and ℋ∞/ℋ∞ problems, are the same and can be obtained by solving a standard algebraic Riccati equation. Optimal filters are also derived for many other optimization criteria, and it is shown that some well-studied and seemingly sensible optimization criteria for fault-detection filter design can lead to (optimal but useless) fault-detection filters.

  7. Extending the Scope of Robust Quadratic Optimization

    NARCIS (Netherlands)

    Marandi, Ahmadreza; Ben-Tal, A.; den Hertog, Dick; Melenberg, Bertrand

    In this paper, we derive tractable reformulations of the robust counterparts of convex quadratic and conic quadratic constraints with concave uncertainties for a broad range of uncertainty sets. For quadratic constraints with convex uncertainty, it is well-known that the robust counterpart is, in

  8. Self-optimizing robust nonlinear model predictive control

    NARCIS (Netherlands)

    Lazar, M.; Heemels, W.P.M.H.; Jokic, A.; Thoma, M.; Allgöwer, F.; Morari, M.

    2009-01-01

    This paper presents a novel method for designing robust MPC schemes that are self-optimizing in terms of disturbance attenuation. The method employs convex control Lyapunov functions and disturbance bounds to optimize robustness of the closed-loop system on-line, at each sampling instant - a unique

  9. Robust balance shift control with posture optimization

    NARCIS (Netherlands)

    Kavafoglu, Z.; Kavafoglu, Ersan; Egges, J.

    2015-01-01

    In this paper we present a control framework which creates robust and natural balance shifting behaviours during standing. Given high-level features such as the position of the center of mass projection and the foot configurations, a kinematic posture satisfying these features is synthesized using

  10. Chance constrained uncertain classification via robust optimization

    NARCIS (Netherlands)

    Ben-Tal, A.; Bhadra, S.; Bhattacharayya, C.; Saketha Nat, J.

    2011-01-01

    This paper studies the problem of constructing robust classifiers when the training is plagued with uncertainty. The problem is posed as a Chance-Constrained Program (CCP) which ensures that the uncertain data points are classified correctly with high probability. Unfortunately such a CCP turns out

  11. Robust combined position and formation control for marine surface craft

    DEFF Research Database (Denmark)

    Ihle, Ivar-Andre F.; Jouffroy, Jerome; Fossen, Thor I.

    We consider the robustness properties of a formation control system for marine surface vessels. Intervessel constraint functions are stabilized to achieve the desired formation configuration. We show that the formation dynamics is Input-to-State Stable (ISS) to both environmental perturbations th...

  12. Robust structural optimization using Gauss-type quadrature formula

    International Nuclear Information System (INIS)

    Lee, Sang Hoon; Seo, Ki Seog; Chen, Shikui; Chen, Wei

    2009-01-01

    In robust design, the mean and variance of the design performance are frequently used to measure the design performance and its robustness under uncertainties. In this paper, we present the Gauss-type quadrature formula as a rigorous method for mean and variance estimation involving arbitrary input distributions, and further extend its use to robust design optimization. One-dimensional Gauss-type quadrature formulas are constructed from the input probability distributions and utilized in the construction of multidimensional quadrature formulas such as the Tensor Product Quadrature (TPQ) formula and the Univariate Dimension Reduction (UDR) method. To improve the efficiency of using it for robust design optimization, a semi-analytic design sensitivity analysis with respect to the statistical moments is proposed. The proposed approach is applied to simple benchmark problems and to robust topology optimization of structures considering various types of uncertainty.
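
    A minimal sketch of one-dimensional Gauss-type quadrature for moment estimation with a Gaussian input; the response function and the input distribution below are hypothetical stand-ins, not the paper's benchmark problems.

    ```python
    # Mean and variance of g(X), X ~ N(mu, sigma^2), via a 5-node Gauss-Hermite rule.
    import numpy as np

    def g(x):                      # example performance function
        return np.exp(0.3 * x) + x**2

    mu, sigma = 1.0, 0.5
    t, w = np.polynomial.hermite.hermgauss(5)        # physicists' Gauss-Hermite nodes/weights

    # change of variables x = mu + sqrt(2)*sigma*t; the weights sum to sqrt(pi)
    vals = g(mu + np.sqrt(2.0) * sigma * t)
    mean = np.sum(w * vals) / np.sqrt(np.pi)
    var = np.sum(w * vals**2) / np.sqrt(np.pi) - mean**2
    print(f"E[g(X)] ~ {mean:.4f}, Var[g(X)] ~ {var:.4f}")
    ```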

  16. TARCMO: Theory and Algorithms for Robust, Combinatorial, Multicriteria Optimization

    Science.gov (United States)

    2016-11-28

    ... methods is presented in the book chapter [CG16d]. 4.4 Robust Timetable Information Problems. Timetable information is the process of determining a ... Princeton and Oxford, 2009. [BTN98] A. Ben-Tal and A. Nemirovski. Robust convex optimization. Mathematics of Operations Research, 23(4):769–805 ... Goerigk. A note on upper bounds to the robust knapsack problem with discrete scenarios. Annals of Operations Research, 223(1):461–469, 2014. [GS16] M ...

  17. Robust Optimization in Simulation : Taguchi and Response Surface Methodology

    NARCIS (Netherlands)

    Dellino, G.; Kleijnen, J.P.C.; Meloni, C.

    2008-01-01

    Optimization of simulated systems is tackled by many methods, but most methods assume known environments. This article, however, develops a 'robust' methodology for uncertain environments. This methodology uses Taguchi's view of the uncertain world, but replaces his statistical techniques by

  18. Topology optimization of robust superhydrophobic surfaces

    DEFF Research Database (Denmark)

    Cavalli, Andrea; Bøggild, Peter; Okkels, Fridolin

    2013-01-01

    In this paper we apply topology optimization to micro-structured superhydrophobic surfaces for the first time. It has been experimentally observed that a droplet suspended on a brush of micrometric posts shows a high static contact angle and low roll-off angle. To keep the fluid from penetrating...

  19. Optimal design of robust piezoelectric unimorph microgrippers

    DEFF Research Database (Denmark)

    Ruiz, David; Díaz-Molina, Alex; Sigmund, Ole

    2018-01-01

    Topology optimization can be used to design piezoelectric actuators by simultaneous design of host structure and polarization profile. Subsequent micro-scale fabrication leads us to overcome important manufacturing limitations: difficulties in placing a piezoelectric layer on both top and bottom...

  20. Geometrical framework for robust portfolio optimization

    OpenAIRE

    Bazovkin, Pavel

    2014-01-01

    We consider a vector-valued multivariate risk measure that depends on the user's profile given by the user's utility. It is constructed on the basis of weighted-mean trimmed regions and represents the solution of an optimization problem. The key feature of this measure is convexity. We apply the measure to the portfolio selection problem, employing different measures of performance as objective functions in a common geometrical framework.

  1. Stochastic Robust Mathematical Programming Model for Power System Optimization

    Energy Technology Data Exchange (ETDEWEB)

    Liu, Cong; Lee, Changhyeok; Chen, Haoyong; Mehrotra, Sanjay

    2016-01-01

    This paper presents a stochastic robust framework for two-stage power system optimization problems with uncertainty. The model optimizes the probabilistic expectation of different worst-case scenarios with different uncertainty sets. A case study of unit commitment shows the effectiveness of the proposed model and algorithms.

  2. Linear systems optimal and robust control

    CERN Document Server

    Sinha, Alok

    2007-01-01

    Contents: Introduction; Overview; Contents of the Book; State Space Description of a Linear System; Transfer Function of a Single Input/Single Output (SISO) System; State Space Realizations of a SISO System; SISO Transfer Function from a State Space Realization; Solution of State Space Equations; Observability and Controllability of a SISO System; Some Important Similarity Transformations; Simultaneous Controllability and Observability; Multiinput/Multioutput (MIMO) Systems; State Space Realizations of a Transfer Function Matrix; Controllability and Observability of a MIMO System; Matrix-Fraction Description (MFD); MFD of a Transfer Function Matrix for the Minimal Order of a State Space Realization; Controller Form Realization from a Right MFD; Poles and Zeros of a MIMO Transfer Function Matrix; Stability Analysis; State Feedback Control and Optimization; State Variable Feedback for a Single Input System; Computation of State Feedback Gain Matrix for a Multiinput System; State Feedback Gain Matrix for a Multi...

  3. Robust optimization methods for cardiac sparing in tangential breast IMRT

    Energy Technology Data Exchange (ETDEWEB)

    Mahmoudzadeh, Houra, E-mail: houra@mie.utoronto.ca [Mechanical and Industrial Engineering Department, University of Toronto, Toronto, Ontario M5S 3G8 (Canada); Lee, Jenny [Radiation Medicine Program, UHN Princess Margaret Cancer Centre, Toronto, Ontario M5G 2M9 (Canada); Chan, Timothy C. Y. [Mechanical and Industrial Engineering Department, University of Toronto, Toronto, Ontario M5S 3G8, Canada and Techna Institute for the Advancement of Technology for Health, Toronto, Ontario M5G 1P5 (Canada); Purdie, Thomas G. [Radiation Medicine Program, UHN Princess Margaret Cancer Centre, Toronto, Ontario M5G 2M9 (Canada); Department of Radiation Oncology, University of Toronto, Toronto, Ontario M5S 3S2 (Canada); Techna Institute for the Advancement of Technology for Health, Toronto, Ontario M5G 1P5 (Canada)

    2015-05-15

    Purpose: In left-sided tangential breast intensity modulated radiation therapy (IMRT), the heart may enter the radiation field and receive excessive radiation while the patient is breathing. The patient’s breathing pattern is often irregular and unpredictable. We verify the clinical applicability of a heart-sparing robust optimization approach for breast IMRT. We compare robust optimized plans with clinical plans at free-breathing and clinical plans at deep inspiration breath-hold (DIBH) using active breathing control (ABC). Methods: Eight patients were included in the study with each patient simulated using 4D-CT. The 4D-CT image acquisition generated ten breathing phase datasets. An average scan was constructed using all the phase datasets. Two of the eight patients were also imaged at breath-hold using ABC. The 4D-CT datasets were used to calculate the accumulated dose for robust optimized and clinical plans based on deformable registration. We generated a set of simulated breathing probability mass functions, which represent the fraction of time patients spend in different breathing phases. The robust optimization method was applied to each patient using a set of dose-influence matrices extracted from the 4D-CT data and a model of the breathing motion uncertainty. The goal of the optimization models was to minimize the dose to the heart while ensuring dose constraints on the target were achieved under breathing motion uncertainty. Results: Robust optimized plans were improved or equivalent to the clinical plans in terms of heart sparing for all patients studied. The robust method reduced the accumulated heart dose (D10cc) by up to 801 cGy compared to the clinical method while also improving the coverage of the accumulated whole breast target volume. On average, the robust method reduced the heart dose (D10cc) by 364 cGy and improved the optBreast dose (D99%) by 477 cGy. In addition, the robust method had smaller deviations from the planned dose to the

  4. Optimal Robust Self-Testing by Binary Nonlocal XOR Games

    OpenAIRE

    Miller, Carl A.; Shi, Yaoyun

    2013-01-01

    Self-testing a quantum apparatus means verifying the existence of a certain quantum state as well as the effect of the associated measuring devices based only on the statistics of the measurement outcomes. Robust (i.e., error-tolerant) self-testing quantum apparatuses are critical building blocks for quantum cryptographic protocols that rely on imperfect or untrusted devices. We devise a general scheme for proving optimal robust self-testing properties for tests based on nonlocal binary XOR g...

  5. Robust and optimal control a two-port framework approach

    CERN Document Server

    Tsai, Mi-Ching

    2014-01-01

    A Two-port Framework for Robust and Optimal Control introduces an alternative approach to robust and optimal controller synthesis procedures for linear, time-invariant systems, based on the two-port system widespread in electrical engineering. The novel use of the two-port system in this context allows straightforward engineering-oriented solution-finding procedures to be developed, requiring no mathematics beyond linear algebra. A chain-scattering description provides a unified framework for constructing the stabilizing controller set and for synthesizing H2 optimal and H∞ sub-optimal controllers. Simple yet illustrative examples explain each step. A Two-port Framework for Robust and Optimal Control features: a hands-on, tutorial-style presentation giving the reader the opportunity to repeat the designs presented and easily modify them for their own programs; an abundance of examples illustrating the most important steps in robust and optimal design; and ...

  6. Employing Sensitivity Derivatives for Robust Optimization under Uncertainty in CFD

    Science.gov (United States)

    Newman, Perry A.; Putko, Michele M.; Taylor, Arthur C., III

    2004-01-01

    A robust optimization is demonstrated on a two-dimensional inviscid airfoil problem in subsonic flow. Given uncertainties in statistically independent, random, normally distributed flow parameters (input variables), an approximate first-order statistical moment method is employed to represent the Computational Fluid Dynamics (CFD) code outputs as expected values with variances. These output quantities are used to form the objective function and constraints. The constraints are cast in probabilistic terms; that is, the probability that a constraint is satisfied is greater than or equal to some desired target probability. Gradient-based robust optimization of this stochastic problem is accomplished through use of both first and second-order sensitivity derivatives. For each robust optimization, the effect of increasing both input standard deviations and target probability of constraint satisfaction are demonstrated. This method provides a means for incorporating uncertainty when considering small deviations from input mean values.
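
    The approximate first-order statistical moment method referred to above can be written, in generic notation for a CFD output f of statistically independent inputs x_i with means mu_i and standard deviations sigma_i (an illustrative summary, not the paper's exact formulation):

    ```latex
    \mu_f \;\approx\; f(\mu_1,\dots,\mu_n),
    \qquad
    \sigma_f^2 \;\approx\; \sum_{i=1}^{n} \left( \frac{\partial f}{\partial x_i}\Big|_{\mu} \right)^{\!2} \sigma_i^2 .
    ```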

  7. Design optimization for cost and quality: The robust design approach

    Science.gov (United States)

    Unal, Resit

    1990-01-01

    Designing reliable, low cost, and operable space systems has become the key to future space operations. Designing high quality space systems at low cost is an economic and technological challenge to the designer. A systematic and efficient way to meet this challenge is a new method of design optimization for performance, quality, and cost, called Robust Design. Robust Design is an approach for design optimization. It consists of: making system performance insensitive to material and subsystem variation, thus allowing the use of less costly materials and components; making designs less sensitive to the variations in the operating environment, thus improving reliability and reducing operating costs; and using a new structured development process so that engineering time is used most productively. The objective in Robust Design is to select the best combination of controllable design parameters so that the system is most robust to uncontrollable noise factors. The robust design methodology uses a mathematical tool called an orthogonal array, from design of experiments theory, to study a large number of decision variables with a significantly small number of experiments. Robust design also uses a statistical measure of performance, called a signal-to-noise ratio, from electrical control theory, to evaluate the level of performance and the effect of noise factors. The purpose is to investigate the Robust Design methodology for improving quality and cost, demonstrate its application by the use of an example, and suggest its use as an integral part of space system design process.
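
    A minimal sketch of the Taguchi-style signal-to-noise (S/N) ratios mentioned above, using the standard textbook formulas; the replicated response values are hypothetical.

    ```python
    # Taguchi S/N ratios for larger-the-better, smaller-the-better, nominal-the-best cases.
    import numpy as np

    y = np.array([9.8, 10.1, 10.3, 9.9])   # replicated responses for one design setting

    sn_larger_better  = -10 * np.log10(np.mean(1.0 / y**2))
    sn_smaller_better = -10 * np.log10(np.mean(y**2))
    sn_nominal_best   =  10 * np.log10(np.mean(y)**2 / np.var(y, ddof=1))

    print(sn_larger_better, sn_smaller_better, sn_nominal_best)
    ```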

  8. Robust topology optimization accounting for spatially varying manufacturing errors

    DEFF Research Database (Denmark)

    Schevenels, M.; Lazarov, Boyan Stefanov; Sigmund, Ole

    2011-01-01

    This paper presents a robust approach for the design of macro-, micro-, or nano-structures by means of topology optimization, accounting for spatially varying manufacturing errors. The focus is on structures produced by milling or etching; in this case over- or under-etching may cause parts ... The optimization problem is formulated in a probabilistic way: the objective function is defined as a weighted sum of the mean value and the standard deviation of the structural performance. The optimization problem is solved by means of a Monte Carlo method: in each iteration of the optimization scheme, a Monte...

  9. Robust Design Optimization of an Aerospace Vehicle Propulsion System

    Directory of Open Access Journals (Sweden)

    Muhammad Aamir Raza

    2011-01-01

    This paper proposes a robust design optimization methodology, under design uncertainties, for an aerospace vehicle propulsion system. The approach consists of 3D geometric design coupled with complex internal ballistics, hybrid optimization, worst-case deviation, and an efficient statistical approach. The uncertainties are propagated through worst-case deviation using first-order orthogonal design matrices. The robustness assessment is measured using the framework of the mean-variance and percentile difference approach. A parametric sensitivity analysis is carried out to analyze the effects of design variable variations on the performance parameters. A hybrid simulated annealing and pattern search approach is used as the optimizer. The results show that the objective of optimizing the mean performance and minimizing the variation of the performance parameters, in terms of thrust ratio and total impulse, could be achieved while adhering to the system constraints.

  10. Robust Optimization of Fourth Party Logistics Network Design under Disruptions

    Directory of Open Access Journals (Sweden)

    Jia Li

    2015-01-01

    The Fourth Party Logistics (4PL) network faces disruptions of various sorts in a dynamic and complex environment. In order to explore the robustness of the network, 4PL network design with consideration of random disruptions is studied. The purpose of the research is to construct a 4PL network that can provide satisfactory service to customers at a lower cost when disruptions strike. Based on the definition of β-robustness, a robust optimization model of 4PL network design under disruptions is established. Given the NP-hard character of the problem, an artificial fish swarm algorithm (AFSA) and a genetic algorithm (GA) are developed. The effectiveness of the algorithms is tested and compared through simulation examples. By comparing the optimal solutions of the 4PL network for different robustness levels, it is shown that the robust optimization model can effectively hedge against market risks and save costs to the maximum extent when applied to 4PL network design.

  11. Application of Six Sigma Robust Optimization in Sheet Metal Forming

    International Nuclear Information System (INIS)

    Li, Y.Q.; Cui, Z.S.; Ruan, X.Y.; Zhang, D.J.

    2005-01-01

    Numerical simulation technology and optimization methods have been applied to sheet metal forming processes to improve design quality and shorten the design cycle. However, fluctuations in design variables or operating conditions have a great influence on quality. In addition, the iterative solutions required by numerical simulation and optimization usually take huge computational time or incur expensive experimental cost. In order to eliminate the effect of perturbations in the design and improve design efficiency, a CAE-based six sigma robust design method is developed in this paper. In the six sigma procedure for sheet metal forming, statistical techniques and dual response surface approximate models, as well as the 'Design for Six Sigma (DFSS)' algorithm, are integrated to perform reliability optimization and robustness improvement. A deep drawing process of a rectangular cup is taken as an example to illustrate the method. The optimization solutions show that the proposed procedure not only significantly improves the reliability and robustness of the forming quality, but also increases optimization efficiency through the use of the approximate model.

  12. Robust Optimization for Household Load Scheduling with Uncertain Parameters

    Directory of Open Access Journals (Sweden)

    Jidong Wang

    2018-04-01

    Home energy management systems (HEMS) face many challenges of uncertainty, which have a great impact on the scheduling of home appliances. To handle the uncertain parameters in the household load scheduling problem, this paper uses a robust optimization method to rebuild the household load scheduling model for home energy management. The model proposed in this paper can provide complete robust schedules for customers while considering the disturbance of uncertain parameters. The complete robust schedules can not only guarantee the customers' comfort constraints but also cooperatively schedule the electric devices for cost minimization and load shifting. Moreover, customers can obtain multiple schedules by setting different robustness levels, considering the trade-off between comfort and economy.

  13. Robust Pitch Estimation Using an Optimal Filter on Frequency Estimates

    DEFF Research Database (Denmark)

    Karimian-Azari, Sam; Jensen, Jesper Rindom; Christensen, Mads Græsbøll

    2014-01-01

    of such signals from unconstrained frequency estimates (UFEs). A minimum variance distortionless response (MVDR) method is proposed as an optimal solution to minimize the variance of UFEs considering the constraint of integer harmonics. The MVDR filter is designed based on noise statistics making it robust...
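
    For background, the generic MVDR solution minimizes the output variance of a filter w applied to data with covariance matrix R while passing a constraint (steering) vector a undistorted; the paper's filter additionally encodes the integer-harmonic constraint and the estimated noise statistics:

    ```latex
    \min_{w}\; w^{H} R\, w
    \quad \text{s.t.} \quad w^{H} a = 1
    \qquad\Longrightarrow\qquad
    w_{\mathrm{MVDR}} \;=\; \frac{R^{-1} a}{a^{H} R^{-1} a} .
    ```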

  14. Robust Optimization in Simulation : Taguchi and Krige Combined

    NARCIS (Netherlands)

    Dellino, G.; Kleijnen, Jack P.C.; Meloni, C.

    2009-01-01

    Optimization of simulated systems is the goal of many methods, but most methods as- sume known environments. We, however, develop a `robust' methodology that accounts for uncertain environments. Our methodology uses Taguchi's view of the uncertain world, but replaces his statistical techniques by

  15. Metamodel-based robust simulation-optimization : An overview

    NARCIS (Netherlands)

    Dellino, G.; Meloni, C.; Kleijnen, J.P.C.; Dellino, Gabriella; Meloni, Carlo

    2015-01-01

    Optimization of simulated systems is the goal of many methods, but most methods assume known environments. We, however, develop a "robust" methodology that accounts for uncertain environments. Our methodology uses Taguchi's view of the uncertain world but replaces his statistical techniques by

  16. Towards robust optimal design of storm water systems

    Science.gov (United States)

    Marquez Calvo, Oscar; Solomatine, Dimitri

    2015-04-01

    In this study the focus is on the design of a storm water or combined sewer system. Such a system should be capable of handling most storms properly, in order to minimize the damage caused by flooding due to a lack of capacity to cope with rain water at peak times. This is a multi-objective optimization problem: we have to take into account the minimization of construction costs, the minimization of damage costs due to flooding, and possibly other criteria. One of the most important factors influencing the design of storm water systems is the expected amount of water to deal with. It is common for this infrastructure to be developed with the capacity to cope with events that occur once in, say, 10 or 20 years - so-called design rainfall events. However, rainfall is a random variable, and such uncertainty is typically not taken into account explicitly in optimization. Rainfall design data is based on historical rainfall records, but this data is often based on unreliable measurements or on too little historical information; moreover, rainfall patterns are changing regardless of the historical record. There are also other sources of uncertainty influencing design, for example leakages in the pipes and accumulation of sediments in pipes. In the context of storm water or combined sewer system design or rehabilitation, a robust optimization technique should be able to find the best design (or rehabilitation plan) within the available budget while taking into account uncertainty in the variables that were used to design the system. In this work we consider various approaches to robust optimization proposed by various authors (Gabrel, Murat, Thiele 2013; Beyer, Sendhoff 2007) and test a novel method, ROPAR (Solomatine 2012), to analyze robustness. References: Beyer, H.G., & Sendhoff, B. (2007). Robust optimization - A comprehensive survey. Comput. Methods Appl. Mech. Engrg., 3190-3218. Gabrel, V., Murat, C., & Thiele, A. (2014

  17. Intelligent and robust optimization frameworks for smart grids

    Science.gov (United States)

    Dhansri, Naren Reddy

    A smart grid implies a cyberspace real-time distributed power control system to optimally deliver electricity based on varying consumer characteristics. Although smart grids solve many of the contemporary problems, they give rise to new control and optimization problems with the growing role of renewable energy sources such as wind or solar energy. Under highly dynamic nature of distributed power generation and the varying consumer demand and cost requirements, the total power output of the grid should be controlled such that the load demand is met by giving a higher priority to renewable energy sources. Hence, the power generated from renewable energy sources should be optimized while minimizing the generation from non renewable energy sources. This research develops a demand-based automatic generation control and optimization framework for real-time smart grid operations by integrating conventional and renewable energy sources under varying consumer demand and cost requirements. Focusing on the renewable energy sources, the intelligent and robust control frameworks optimize the power generation by tracking the consumer demand in a closed-loop control framework, yielding superior economic and ecological benefits and circumvent nonlinear model complexities and handles uncertainties for superior real-time operations. The proposed intelligent system framework optimizes the smart grid power generation for maximum economical and ecological benefits under an uncertain renewable wind energy source. The numerical results demonstrate that the proposed framework is a viable approach to integrate various energy sources for real-time smart grid implementations. The robust optimization framework results demonstrate the effectiveness of the robust controllers under bounded power plant model uncertainties and exogenous wind input excitation while maximizing economical and ecological performance objectives. Therefore, the proposed framework offers a new worst-case deterministic

  18. Robust Optimization Model for Production Planning Problem under Uncertainty

    Directory of Open Access Journals (Sweden)

    Pembe GÜÇLÜ

    2017-01-01

    Full Text Available Business conditions change very quickly, and taking into account the uncertainty engendered by these changes has become almost a rule in planning. Robust optimization techniques, which are methods for handling uncertainty, produce results that are less sensitive to changing conditions. Production planning, in its most basic definition, is deciding which products will be produced, when, and in what quantities. The modeling and solution of production planning problems change depending on the structure of the production processes, parameters and variables. In this paper, the aim is to generate and apply a scenario-based robust optimization model for a capacitated two-stage multi-product production planning problem under parameter and demand uncertainty. For this purpose, the production planning problem of a textile company operating in İzmir has been modeled and solved, and then the results of the deterministic scenarios and of the robust method have been compared. The robust method provides a production plan that has a higher cost but will remain close to feasible and optimal for most of the possible future scenarios.
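    A minimal scenario-based robust (min-max) production planning model of the kind described here can be written with CVXPY; the products, capacity, demands and penalties below are invented for illustration and do not come from the textile case study.

```python
import cvxpy as cp
import numpy as np

# two products, three demand scenarios (all numbers illustrative)
cost = np.array([4.0, 6.0])                  # unit production cost
penalty = np.array([15.0, 20.0])             # unit shortage penalty
capacity = 120.0                             # total capacity of the stage
demand = np.array([[40.0, 55.0],             # scenario 1
                   [60.0, 45.0],             # scenario 2
                   [80.0, 70.0]])            # scenario 3

x = cp.Variable(2, nonneg=True)              # production quantities
t = cp.Variable()                            # worst-case cost (epigraph variable)

constraints = [cp.sum(x) <= capacity]
for d in demand:
    scenario_cost = cost @ x + penalty @ cp.pos(d - x)   # production + shortage cost
    constraints.append(t >= scenario_cost)               # t bounds every scenario

cp.Problem(cp.Minimize(t), constraints).solve()
print("robust production plan:", x.value, "worst-case cost:", float(t.value))
```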

  19. Robust output LQ optimal control via integral sliding modes

    CERN Document Server

    Fridman, Leonid; Bejarano, Francisco Javier

    2014-01-01

    Featuring original research from well-known experts in the field of sliding mode control, this monograph presents new design schemes for implementing LQ control solutions in situations where the output system is the only information provided about the state of the plant. This new design works under the restrictions of matched disturbances without losing its desirable features. On the cutting-edge of optimal control research, Robust Output LQ Optimal Control via Integral Sliding Modes is an excellent resource for both graduate students and professionals involved in linear systems, optimal control, observation of systems with unknown inputs, and automatization. In the theory of optimal control, the linear quadratic (LQ) optimal problem plays an important role due to its physical meaning, and its solution is easily given by an algebraic Riccati equation. This solution turns out to be restrictive, however, because of two assumptions: the system must be free from disturbances and the entire state vector must be kn...

  20. Robustness Recipes for Minimax Robust Optimization in Intensity Modulated Proton Therapy for Oropharyngeal Cancer Patients

    Energy Technology Data Exchange (ETDEWEB)

    Voort, Sebastian van der [Department of Radiation Oncology, Erasmus MC Cancer Institute, Rotterdam (Netherlands); Section of Nuclear Energy and Radiation Applications, Department of Radiation, Science and Technology, Delft University of Technology, Delft (Netherlands); Water, Steven van de [Department of Radiation Oncology, Erasmus MC Cancer Institute, Rotterdam (Netherlands); Perkó, Zoltán [Section of Nuclear Energy and Radiation Applications, Department of Radiation, Science and Technology, Delft University of Technology, Delft (Netherlands); Heijmen, Ben [Department of Radiation Oncology, Erasmus MC Cancer Institute, Rotterdam (Netherlands); Lathouwers, Danny [Section of Nuclear Energy and Radiation Applications, Department of Radiation, Science and Technology, Delft University of Technology, Delft (Netherlands); Hoogeman, Mischa, E-mail: m.hoogeman@erasmusmc.nl [Department of Radiation Oncology, Erasmus MC Cancer Institute, Rotterdam (Netherlands)

    2016-05-01

    Purpose: We aimed to derive a “robustness recipe” giving the range robustness (RR) and setup robustness (SR) settings (ie, the error values) that ensure adequate clinical target volume (CTV) coverage in oropharyngeal cancer patients for given Gaussian distributions of systematic setup, random setup, and range errors (characterized by standard deviations of Σ, σ, and ρ, respectively) when used in minimax worst-case robust intensity modulated proton therapy (IMPT) optimization. Methods and Materials: For the analysis, contoured computed tomography (CT) scans of 9 unilateral and 9 bilateral patients were used. An IMPT plan was considered robust if, for at least 98% of the simulated fractionated treatments, 98% of the CTV received 95% or more of the prescribed dose. For fast assessment of the CTV coverage for given error distributions (ie, different values of Σ, σ, and ρ), polynomial chaos methods were used. Separate recipes were derived for the unilateral and bilateral cases using one patient from each group, and all 18 patients were included in the validation of the recipes. Results: Treatment plans for bilateral cases are intrinsically more robust than those for unilateral cases. The required RR depends only on ρ, and SR can be fitted by second-order polynomials in Σ and σ. The formulas for the derived robustness recipes are as follows: Unilateral patients need SR = −0.15Σ² + 0.27σ² + 1.85Σ − 0.06σ + 1.22 and RR = 3% for ρ = 1% and ρ = 2%; bilateral patients need SR = −0.07Σ² + 0.19σ² + 1.34Σ − 0.07σ + 1.17 and RR = 3% and 4% for ρ = 1% and 2%, respectively. For the recipe validation, 2 plans were generated for each of the 18 patients corresponding to Σ = σ = 1.5 mm and ρ = 0% and 2%. Thirty-four plans had adequate CTV coverage in 98% or more of the simulated fractionated treatments; the remaining 2 had adequate coverage in 97.8% and 97.9%. Conclusions: Robustness recipes were derived that can
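    The fitted recipes quoted above can be applied directly; the short Python helper below simply evaluates the published polynomials (Σ and σ in mm, output SR in mm), with the range-robustness setting RR of 3% or 4% taken from the record.

```python
def setup_robustness_mm(sigma_sys, sigma_rand, bilateral=False):
    """Setup-robustness setting (mm) from the fitted recipes quoted above;
    sigma_sys and sigma_rand are the systematic and random setup error SDs in mm."""
    if bilateral:
        return (-0.07 * sigma_sys**2 + 0.19 * sigma_rand**2
                + 1.34 * sigma_sys - 0.07 * sigma_rand + 1.17)
    return (-0.15 * sigma_sys**2 + 0.27 * sigma_rand**2
            + 1.85 * sigma_sys - 0.06 * sigma_rand + 1.22)

# example matching the validation setting (Sigma = sigma = 1.5 mm, unilateral case)
print(round(setup_robustness_mm(1.5, 1.5), 2), "mm setup robustness; RR = 3%")
```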

  1. Robust Optimal Adaptive Control Method with Large Adaptive Gain

    Science.gov (United States)

    Nguyen, Nhan T.

    2009-01-01

    In the presence of large uncertainties, a control system needs to be able to adapt rapidly to regain performance. Fast adaptation refers to the implementation of adaptive control with a large adaptive gain to reduce the tracking error rapidly. However, a large adaptive gain can lead to high-frequency oscillations which can adversely affect the robustness of an adaptive control law. A new adaptive control modification is presented that can achieve robust adaptation with a large adaptive gain without incurring high-frequency oscillations as with the standard model-reference adaptive control. The modification is based on the minimization of the L2 norm of the tracking error, which is formulated as an optimal control problem. The optimality condition is used to derive the modification using the gradient method. The optimal control modification results in stable adaptation and allows a large adaptive gain to be used for better tracking while providing sufficient stability robustness. Simulations were conducted for a damaged generic transport aircraft with both standard adaptive control and the adaptive optimal control modification technique. The results demonstrate the effectiveness of the proposed modification in tracking a reference model while maintaining a sufficient time delay margin.

  2. Reactive Robustness and Integrated Approaches for Railway Optimization Problems

    DEFF Research Database (Denmark)

    Haahr, Jørgen Thorlund

    ... to absorb or withstand unexpected events such as delays. Making robust plans is central in order to maintain a safe and timely railway operation. This thesis focuses on reactive robustness, i.e., the ability to react once a plan is rendered infeasible in operation due to disruptions. In such time... journeys helps the driver to drive efficiently and enhances robustness in a realistic (dynamic) environment. Four international scientific prizes have been awarded for distinct parts of the research during the course of this PhD project. The first prize was awarded for work during the "2014 RAS Problem Solving Competition", where a freight yard optimization problem was considered. The second junior (PhD) prize was awarded for the work performed in the "ROADEF/EURO Challenge 2014: Trains don't vanish!", where the planning of rolling stock movements at a large station was considered. An honorable mention...

  3. Integrating robust timetabling in line plan optimization for railway systems

    DEFF Research Database (Denmark)

    Burggraeve, Sofie; Bull, Simon Henry; Vansteenwegen, Pieter

    2017-01-01

    We propose a heuristic algorithm to build a railway line plan from scratch that minimizes passenger travel time and operator cost and for which a feasible and robust timetable exists. A line planning module and a timetabling module work iteratively and interactively. The line planning module creates an initial line plan. The timetabling module evaluates the line plan and identifies a critical line based on minimum buffer times between train pairs. The line planning module proposes a new line plan in which the time length of the critical line is modified in order to provide more flexibility..., but is constrained by limited shunt capacity. While the operator and passenger cost remain close to those of the initially and (for these costs) optimally built line plan, the timetable corresponding to the finally developed robust line plan significantly improves the minimum buffer time, and thus the robustness...

  4. Optimization of robustness of interdependent network controllability by redundant design.

    Directory of Open Access Journals (Sweden)

    Zenghu Zhang

    Full Text Available Controllability of complex networks has been a hot topic in recent years. Real networks are always coupled together by multiple networks and can be regarded as interdependent networks. The cascading process of interdependent networks, including interdependent failures and overload failures, destroys the robustness of controllability for the whole network. Therefore, the optimization of the robustness of interdependent network controllability is of great importance in the research area of complex networks. In this paper, based on a model of interdependent networks constructed first, we determine the cascading process under different proportions of node attacks. Then, the structural controllability of interdependent networks is measured by the minimum driver nodes. Furthermore, we propose a parameter which can be obtained from the structure and minimum driver set of interdependent networks under different proportions of node attacks and analyze the robustness of interdependent network controllability. Finally, we optimize the robustness of interdependent network controllability by redundant design, including node backup and redundancy edge backup, and improve the redundant design by proposing different strategies according to their cost. Comparisons of redundant design strategies are conducted to find the best strategy. Results show that node backup and redundancy edge backup can indeed decrease the number of nodes suffering from failure and improve the robustness of controllability. Considering the cost of redundant design, we should choose BBS (betweenness-based strategy) or DBS (degree-based strategy) for node backup and HDF (high-degree-first) for redundancy edge backup. Above all, our proposed strategies are feasible and effective at improving the robustness of interdependent network controllability.
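    The minimum-driver-node measure that underlies the robustness analysis can be computed with the standard maximum-matching result for structural controllability; the sketch below (using networkx on a single random directed graph, not the paper's interdependent cascading model) is illustrative only.

```python
import networkx as nx
from networkx.algorithms import bipartite

def minimum_driver_nodes(G):
    """Minimum number of driver nodes of a directed network via maximum matching
    (the standard structural-controllability result; a sketch, not the paper's pipeline)."""
    B = nx.Graph()
    out_nodes = [("out", u) for u in G.nodes]
    in_nodes = [("in", v) for v in G.nodes]
    B.add_nodes_from(out_nodes, bipartite=0)
    B.add_nodes_from(in_nodes, bipartite=1)
    B.add_edges_from((("out", u), ("in", v)) for u, v in G.edges)
    matching = bipartite.maximum_matching(B, top_nodes=out_nodes)
    matched_edges = len(matching) // 2          # dict contains both directions
    return max(G.number_of_nodes() - matched_edges, 1)

G = nx.gnp_random_graph(200, 0.02, directed=True, seed=1)
print("minimum driver nodes:", minimum_driver_nodes(G))
```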

  5. Stochastic simulation and robust design optimization of integrated photonic filters

    Directory of Open Access Journals (Sweden)

    Weng Tsui-Wei

    2016-07-01

    Full Text Available Manufacturing variations are becoming an unavoidable issue in modern fabrication processes; therefore, it is crucial to be able to include stochastic uncertainties in the design phase. In this paper, integrated photonic coupled ring resonator filters are considered as an example of significant interest. The sparsity structure in photonic circuits is exploited to construct a sparse combined generalized polynomial chaos model, which is then used to analyze related statistics and perform robust design optimization. Simulation results show that the optimized circuits are more robust to fabrication process variations and achieve a reduction of 11%–35% in the mean square errors of the 3 dB bandwidth compared to unoptimized nominal designs.
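    As a toy illustration of how a (here dense, one-dimensional) generalized polynomial chaos surrogate yields statistics of a response under a Gaussian fabrication perturbation, consider the following sketch; the response function is a cheap stand-in for the circuit simulation, and the sparse multi-parameter construction of the paper is not reproduced.

```python
import numpy as np
from numpy.polynomial import hermite_e as He
from math import factorial

rng = np.random.default_rng(1)

def response(xi):
    # placeholder for an expensive simulation driven by a standard normal perturbation xi
    return 1.0 + 0.3 * xi + 0.05 * xi**2

deg = 4
xi = rng.standard_normal(2000)          # samples of the uncertain parameter
y = response(xi)
V = He.hermevander(xi, deg)             # probabilists' Hermite basis He_0..He_deg
coeffs, *_ = np.linalg.lstsq(V, y, rcond=None)

mean = coeffs[0]
var = sum(coeffs[k] ** 2 * factorial(k) for k in range(1, deg + 1))
print(f"gPC mean = {mean:.4f}, variance = {var:.4f}")
# analytic check for this toy response: mean = 1.05, variance = 0.3**2 + 2*0.05**2
```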

  6. On projection methods, convergence and robust formulations in topology optimization

    DEFF Research Database (Denmark)

    Wang, Fengwen; Lazarov, Boyan Stefanov; Sigmund, Ole

    2011-01-01

    Mesh convergence and manufacturability of topology optimized designs have previously mainly been assured using density or sensitivity based filtering techniques. The drawback of these techniques has been gray transition regions between solid and void parts, but this problem has recently been alleviated using various projection methods. In this paper we show that simple projection methods do not ensure local mesh-convergence and propose a modified robust topology optimization formulation based on erosion, intermediate and dilation projections that ensures both global and local mesh-convergence.
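    The eroded, intermediate and dilated realizations used in such robust formulations are typically obtained from a smoothed threshold projection of the filtered design field; a commonly used form (assumed here, with illustrative beta and eta values) is sketched below.

```python
import numpy as np

def threshold_projection(x_tilde, beta, eta):
    """Smoothed Heaviside projection of filtered densities x_tilde;
    beta controls sharpness and eta is the threshold."""
    return ((np.tanh(beta * eta) + np.tanh(beta * (x_tilde - eta)))
            / (np.tanh(beta * eta) + np.tanh(beta * (1.0 - eta))))

x_tilde = np.linspace(0.0, 1.0, 5)     # filtered design field (illustrative values)
beta = 8.0
eroded       = threshold_projection(x_tilde, beta, eta=0.7)   # under-etched realization
intermediate = threshold_projection(x_tilde, beta, eta=0.5)   # nominal realization
dilated      = threshold_projection(x_tilde, beta, eta=0.3)   # over-etched realization
# the robust objective then takes the worst case over the three realizations
print(eroded, intermediate, dilated, sep="\n")
```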

  7. Robust optimal self tuning regulator of nuclear reactors

    International Nuclear Information System (INIS)

    Nouri Khajavi, M.; Menhaj, M.B.; Ghofrani, M.B.

    2000-01-01

    Nuclear power reactors are, by nature, nonlinear and time-varying. These characteristics must be considered if large power variations occur in their working regime. In this paper a robust optimal self-tuning regulator for regulating the power of a nuclear reactor has been designed and simulated. The proposed controller is capable of regulating power levels in a wide power range (10% to 100% power levels). The controller achieves a fast and well-behaved transient response. The simulation results show that the proposed controller outperforms the fixed optimal control recently cited in the literature for nuclear power plants.

  8. Cascade-robustness optimization of coupling preference in interconnected networks

    International Nuclear Information System (INIS)

    Zhang, Xue-Jun; Xu, Guo-Qiang; Zhu, Yan-Bo; Xia, Yong-Xiang

    2016-01-01

    Highlights: • A specific memetic algorithm was proposed to optimize coupling links. • A small toy model was investigated to examine the underlying mechanism. • The MA optimized strategy exhibits a moderate assortative pattern. • A novel coupling coefficient index was proposed to quantify coupling preference. - Abstract: Recently, the robustness of interconnected networks has attracted extensive attention, one aspect of which is to investigate the influence of coupling preference. In this paper, the memetic algorithm (MA) is employed to optimize the coupling links of interconnected networks. Afterwards, a comparison is made between the MA optimized coupling strategy and the traditional assortative, disassortative and random coupling preferences. It is found that the MA optimized coupling strategy with a moderate assortative value shows outstanding performance against cascading failures on both synthetic scale-free interconnected networks and real-world networks. We then provide an explanation for this phenomenon from a microscopic point of view and propose a coupling coefficient index to quantify the coupling preference. Our work is helpful for the design of robust interconnected networks.

  9. Robust optimization of a tandem grating solar thermal absorber

    Science.gov (United States)

    Choi, Jongin; Kim, Mingeon; Kang, Kyeonghwan; Lee, Ikjin; Lee, Bong Jae

    2018-04-01

    Ideal solar thermal absorbers need to have a high spectral absorptance over the broad solar spectrum to utilize solar radiation effectively. The majority of recent studies on solar thermal absorbers focus on achieving nearly perfect absorption using nanostructures whose characteristic dimension is smaller than the wavelength of sunlight. However, precise fabrication of such nanostructures is not easy in reality; that is, unavoidable errors always occur to some extent in the dimensions of fabricated nanostructures, causing an undesirable deviation of the absorption performance between the designed structure and the actually fabricated one. In order to minimize the variation in the solar absorptance due to fabrication errors, robust optimization can be performed during the design process. However, the optimization of a solar thermal absorber considering all design variables often requires tremendous computational cost to find an optimum combination of design variables with robustness as well as high performance. To achieve this goal, we apply robust optimization using the Kriging method and the genetic algorithm for designing a tandem grating solar absorber. By constructing a surrogate model through the Kriging method, the computational cost can be substantially reduced because exact calculation of the performance for every combination of variables is not necessary. Using the surrogate model and the genetic algorithm, we successfully design an effective solar thermal absorber exhibiting a low level of performance degradation due to the fabrication uncertainty of the design variables.
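    A stripped-down version of the surrogate-based robust loop described above, using scikit-learn's Gaussian-process (Kriging-type) regressor and SciPy's differential evolution in place of the authors' genetic algorithm, could look like the sketch below; the "absorptance" function, tolerances and bounds are placeholders for the expensive electromagnetic model.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern
from scipy.optimize import differential_evolution

rng = np.random.default_rng(0)

def absorptance(x):
    # cheap analytic stand-in for the expensive electromagnetic simulation
    return np.exp(-np.sum((x - 0.6) ** 2, axis=-1) / 0.05)

# surrogate built from a small design of experiments over two grating parameters
X_train = rng.uniform(0.0, 1.0, size=(60, 2))
y_train = absorptance(X_train)
gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True).fit(X_train, y_train)

offsets = rng.normal(0.0, 0.03, size=(64, 2))     # assumed fabrication tolerance samples

def robust_objective(x):
    mu = gp.predict(x + offsets)                  # surrogate predictions under perturbation
    return -(mu.mean() - 2.0 * mu.std())          # maximize mean minus 2*std (robust score)

result = differential_evolution(robust_objective, bounds=[(0.0, 1.0)] * 2,
                                seed=1, maxiter=30, polish=False)
print("robust design parameters:", result.x)
```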

  10. Group Elevator Peak Scheduling Based on Robust Optimization Model

    Directory of Open Access Journals (Sweden)

    ZHANG, J.

    2013-08-01

    Full Text Available Scheduling of an Elevator Group Control System (EGCS) is a typical combinatorial optimization problem. Uncertain group scheduling under peak traffic flows has recently become a research focus and challenge. Robust Optimization (RO) is a novel and effective way to deal with uncertain scheduling problems. In this paper, a peak scheduling method based on an RO model for multi-elevator systems is proposed. The method is immune to the uncertainty of peak traffic flows, and optimal scheduling is realized without knowing the exact number of waiting passengers on each calling floor. Specifically, an energy-saving oriented multi-objective scheduling price is proposed, and an RO uncertain peak scheduling model is built to minimize this price. Because the uncertain RO model cannot be solved directly, it is transformed into a deterministic model by means of elevator scheduling robust counterparts. Because the solution space of elevator scheduling is enormous, an ant colony algorithm for elevator scheduling is proposed to solve the deterministic model in a short time. Based on this algorithm, optimal scheduling solutions are found quickly, and group elevators are scheduled according to these solutions. Simulation results show that the method can improve scheduling performance effectively in the peak pattern. Efficient operation of group elevators is realized by the RO scheduling method.

  11. On the robust optimization to the uncertain vaccination strategy problem

    International Nuclear Information System (INIS)

    Chaerani, D.; Anggriani, N.; Firdaniza

    2014-01-01

    In order to prevent an epidemic of infectious diseases, the vaccination coverage needs to be minimized while the basic reproduction number is kept below 1. This means that, while keeping the vaccination coverage as low as possible, we still need to confine the epidemic to the small number of people who are already infected. In this paper, we discuss the vaccination strategy problem, in terms of minimizing vaccination coverage, when the basic reproduction number is assumed to be an uncertain parameter that lies between 0 and 1. We refer to the linear optimization model for vaccination strategy proposed by Becker and Starrzak (see [2]). Assuming that parameter uncertainty is involved, Tanner et al (see [9]) propose an optimal solution of the problem using stochastic programming. In this paper we discuss an alternative way of optimizing the uncertain vaccination strategy using Robust Optimization (see [3]). In this approach we assume that the parameter uncertainty lies within an ellipsoidal uncertainty set, so that the obtained result can be computed by a polynomial-time algorithm (as guaranteed by the RO methodology). The robust counterpart model is presented
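    As an illustration of the ellipsoidal robust counterpart mentioned above, the sketch below solves a generic robust covering constraint with CVXPY; the group structure, coefficients and bound are invented for the example and are not the Becker-Starrzak model.

```python
import cvxpy as cp
import numpy as np

n = 3                           # illustrative population groups
c = np.array([1.0, 0.8, 1.2])   # relative coverage weights (assumed)
a0 = np.array([0.6, 0.9, 0.5])  # nominal transmission-reduction coefficients (assumed)
P = 0.05 * np.eye(n)            # shape of the ellipsoidal uncertainty set (assumed)
b = 0.4                         # required total reduction to push R0 below 1 (assumed)

x = cp.Variable(n)              # vaccination fractions per group
constraints = [
    a0 @ x - cp.norm(P.T @ x, 2) >= b,   # robust counterpart of a'x >= b over the ellipsoid
    x >= 0, x <= 1,
]
prob = cp.Problem(cp.Minimize(c @ x), constraints)
prob.solve()
print("robust minimal coverage per group:", x.value)
```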

  12. On the robust optimization to the uncertain vaccination strategy problem

    Energy Technology Data Exchange (ETDEWEB)

    Chaerani, D., E-mail: d.chaerani@unpad.ac.id; Anggriani, N., E-mail: d.chaerani@unpad.ac.id; Firdaniza, E-mail: d.chaerani@unpad.ac.id [Department of Mathematics, Faculty of Mathematics and Natural Sciences, University of Padjadjaran Indonesia, Jalan Raya Bandung Sumedang KM 21 Jatinangor Sumedang 45363 (Indonesia)

    2014-02-21

    In order to prevent an epidemic of infectious diseases, the vaccination coverage needs to be minimized while the basic reproduction number is kept below 1. This means that, while keeping the vaccination coverage as low as possible, we still need to confine the epidemic to the small number of people who are already infected. In this paper, we discuss the vaccination strategy problem, in terms of minimizing vaccination coverage, when the basic reproduction number is assumed to be an uncertain parameter that lies between 0 and 1. We refer to the linear optimization model for vaccination strategy proposed by Becker and Starrzak (see [2]). Assuming that parameter uncertainty is involved, Tanner et al (see [9]) propose an optimal solution of the problem using stochastic programming. In this paper we discuss an alternative way of optimizing the uncertain vaccination strategy using Robust Optimization (see [3]). In this approach we assume that the parameter uncertainty lies within an ellipsoidal uncertainty set, so that the obtained result can be computed by a polynomial-time algorithm (as guaranteed by the RO methodology). The robust counterpart model is presented.

  13. Robust Optimal Adaptive Trajectory Tracking Control of Quadrotor Helicopter

    Directory of Open Access Journals (Sweden)

    M. Navabi

    Full Text Available This paper focuses on a robust optimal adaptive control strategy to deal with the tracking problem of a quadrotor unmanned aerial vehicle (UAV) in the presence of parametric uncertainties, actuator amplitude constraints, and unknown time-varying external disturbances. First, a Lyapunov-based indirect adaptive controller optimized by particle swarm optimization (PSO) is developed for the multi-input multi-output (MIMO) nonlinear quadrotor to prevent input constraint violation, and then a disturbance observer-based control (DOBC) technique is aggregated with the control system to attenuate the effects of disturbances generated by an exogenous system. The performance of the synthesized control method is evaluated by a new time-domain performance index function, and the stability analysis is carried out using Lyapunov theory. Finally, illustrative numerical simulations are conducted to demonstrate the effectiveness of the presented approach in altitude and attitude tracking under several conditions, including large time-varying uncertainty, exogenous disturbance, and control input constraints.

  14. Robust sawtooth period control based on adaptive online optimization

    International Nuclear Information System (INIS)

    Bolder, J.J.; Witvoet, G.; De Baar, M.R.; Steinbuch, M.; Van de Wouw, N.; Haring, M.A.M.; Westerhof, E.; Doelman, N.J.

    2012-01-01

    The systematic design of a robust adaptive control strategy for the sawtooth period using electron cyclotron current drive (ECCD) is presented. Recent developments in extremum seeking control (ESC) are employed to derive an optimized controller structure and offer practical tuning guidelines for its parameters. In this technique a cost function in terms of the desired sawtooth period is optimized online by changing the ECCD deposition location based on online estimations of the gradient of the cost function. The controller design does not require a detailed model of the sawtooth instability. Therefore, the proposed ESC is widely applicable to any sawtoothing plasma or plasma simulation and is inherently robust against uncertainties or plasma variations. Moreover, it can handle a broad class of disturbances. This is demonstrated by time-domain simulations, which show successful tracking of time-varying sawtooth period references throughout the whole operating space, even in the presence of variations in plasma parameters, disturbances and slow launcher mirror dynamics. Due to its simplicity and robustness the proposed ESC is a valuable sawtooth control candidate for any experimental tokamak plasma, and may even be applicable to other fusion-related control problems. (paper)
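    The extremum-seeking idea described above can be illustrated in a few lines of Python: a sinusoidal dither perturbs the input, the measured cost is washed out, demodulated and low-pass filtered to estimate the gradient, and the estimate drives a slow descent step. The cost map and all gains below are toy assumptions, not the ECCD sawtooth controller of the paper.

```python
import numpy as np

def extremum_seeking(cost, u0=0.0, a=0.1, omega=5.0, omega_l=1.0, k=0.8,
                     dt=0.01, t_end=80.0):
    """Perturbation-based extremum seeking on a static cost map (generic sketch)."""
    u_hat, grad, y_lp = u0, 0.0, cost(u0)
    for step in range(int(t_end / dt)):
        t = step * dt
        y = cost(u_hat + a * np.sin(omega * t))          # perturbed measurement
        y_lp += dt * omega_l * (y - y_lp)                # washout (removes the DC part)
        demod = (2.0 / a) * (y - y_lp) * np.sin(omega * t)
        grad += dt * omega_l * (demod - grad)            # low-pass gradient estimate
        u_hat -= dt * k * grad                           # slow gradient-descent integrator
    return u_hat

# toy cost with its minimum at u = 2 (a stand-in for the sawtooth-period mismatch)
print(extremum_seeking(lambda u: (u - 2.0) ** 2))        # converges to roughly 2
```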

  15. Doubly Robust Estimation of Optimal Dynamic Treatment Regimes

    DEFF Research Database (Denmark)

    Barrett, Jessica K; Henderson, Robin; Rosthøj, Susanne

    2014-01-01

    We compare methods for estimating optimal dynamic decision rules from observational data, with particular focus on estimating the regret functions defined by Murphy (in J. R. Stat. Soc., Ser. B, Stat. Methodol. 65:331-355, 2003). We formulate a doubly robust version of the regret-regression approach of Almirall et al. (in Biometrics 66:131-139, 2010) and Henderson et al. (in Biometrics 66:1192-1201, 2010) and demonstrate that it is equivalent to a reduced form of Robins' efficient g-estimation procedure (Robins, in Proceedings of the Second Symposium on Biostatistics. Springer, New York, pp. 189-326, 2004). Simulation studies suggest that while the regret-regression approach is most efficient when there is no model misspecification, in the presence of misspecification the efficient g-estimation procedure is more robust. The g-estimation method can be difficult to apply in complex...

  16. Optimizing the robustness of electrical power systems against cascading failures.

    Science.gov (United States)

    Zhang, Yingrui; Yağan, Osman

    2016-06-21

    Electrical power systems are one of the most important infrastructures that support our society. However, their vulnerabilities have raised great concern recently due to several large-scale blackouts around the world. In this paper, we investigate the robustness of power systems against cascading failures initiated by a random attack. This is done under a simple yet useful model based on global and equal redistribution of load upon failures. We provide a comprehensive understanding of system robustness under this model by (i) deriving an expression for the final system size as a function of the size of initial attacks; (ii) deriving the critical attack size after which system breaks down completely; (iii) showing that complete system breakdown takes place through a first-order (i.e., discontinuous) transition in terms of the attack size; and (iv) establishing the optimal load-capacity distribution that maximizes robustness. In particular, we show that robustness is maximized when the difference between the capacity and initial load is the same for all lines; i.e., when all lines have the same redundant space regardless of their initial load. This is in contrast with the intuitive and commonly used setting where capacity of a line is a fixed factor of its initial load.
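    A minimal simulation of the global, equal load-redistribution model described above, with every line given the same redundant space (as in the optimal allocation the record reports), might look as follows; the load distribution, attack size and free-space values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def final_alive_fraction(n=10_000, attack_frac=0.1, free_space=0.6):
    load = rng.uniform(0.5, 1.5, size=n)          # initial line loads (illustrative)
    cap = load + free_space                       # equal redundant space on every line
    alive = np.ones(n, dtype=bool)
    failed = rng.choice(n, size=int(attack_frac * n), replace=False)
    alive[failed] = False
    shed = load[failed].sum()                     # load to redistribute after the attack
    while shed > 0 and alive.any():
        load[alive] += shed / alive.sum()         # global and equal redistribution
        overloaded = alive & (load > cap)
        shed = load[overloaded].sum()             # newly failed lines shed their load next
        alive[overloaded] = False
    return alive.mean()

for s in (0.2, 0.6, 1.0):
    print(f"free space {s}: surviving fraction = {final_alive_fraction(free_space=s):.3f}")
```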

  17. Enhancing product robustness in reliability-based design optimization

    International Nuclear Information System (INIS)

    Zhuang, Xiaotian; Pan, Rong; Du, Xiaoping

    2015-01-01

    Different types of uncertainties need to be addressed in a product design optimization process. In this paper, the uncertainties in both product design variables and environmental noise variables are considered. The reliability-based design optimization (RBDO) is integrated with robust product design (RPD) to concurrently reduce the production cost and the long-term operation cost, including quality loss, in the process of product design. This problem leads to a multi-objective optimization with probabilistic constraints. In addition, the model uncertainties associated with a surrogate model that is derived from numerical computation methods, such as finite element analysis, are addressed. A hierarchical experimental design approach, augmented by a sequential sampling strategy, is proposed to construct the response surface of the product performance function for finding optimal design solutions. The proposed method is demonstrated through an engineering example. - Highlights: • A unifying framework for integrating RBDO and RPD is proposed. • Implicit product performance function is considered. • The design problem is solved by sequential optimization and reliability assessment. • A sequential sampling technique is developed for improving design optimization. • The comparison with traditional RBDO is provided

  18. Distributionally Robust Return-Risk Optimization Models and Their Applications

    Directory of Open Access Journals (Sweden)

    Li Yang

    2014-01-01

    Full Text Available Based on the risk control of conditional value-at-risk, distributionally robust return-risk optimization models with box constraints on the random vector are proposed. They describe uncertainty in both the distribution form and the moments (mean and covariance matrix) of the random vector. It is difficult to solve them directly. Using conic duality theory and the minimax theorem, the models are reformulated as semidefinite programming problems, which can be solved by interior point algorithms in polynomial time. An important theoretical basis is therefore provided for applications of the models. Moreover, an application of the models to a practical example of portfolio selection is considered, and the example is evaluated using a historical data set of four stocks. Numerical results show that the proposed methods are robust and the investment strategy is safe.

  19. Robust C subroutines for non-linear optimization

    DEFF Research Database (Denmark)

    Brock, Pernille; Madsen, Kaj; Nielsen, Hans Bruun

    2004-01-01

    This report presents a package of robust and easy-to-use C subroutines for solving unconstrained and constrained non-linear optimization problems. The intention is that the routines should use the currently best algorithms available. All routines have standardized calls, and the user does not have...... by changing 1 to 0. The present report is a new and updated version of a previous report NI-91-03 with the same title, [16]. Both the previous and the present report describe a collection of subroutines, which have been translated from Fortran to C. The reason for writing the present report is that some...... of the C subroutines have been replaced by more effective and robust versions translated from the original Fortran subroutines to C by the Bandler Group, see [1]. Also the test examples have been modified to some extent. For a description of the original Fortran subroutines see the report [17]. The software...

  20. An adaptive robust optimization scheme for water-flooding optimization in oil reservoirs using residual analysis

    NARCIS (Netherlands)

    Siraj, M.M.; Van den Hof, P.M.J.; Jansen, J.D.

    2017-01-01

    Model-based dynamic optimization of the water-flooding process in oil reservoirs is a computationally complex problem and suffers from high levels of uncertainty. A traditional way of quantifying uncertainty in robust water-flooding optimization is by considering an ensemble of uncertain model

  1. A robust optimization approach for energy generation scheduling in microgrids

    International Nuclear Information System (INIS)

    Wang, Ran; Wang, Ping; Xiao, Gaoxi

    2015-01-01

    Highlights: • A new uncertainty model is proposed for better describing unstable energy demands. • An optimization problem is formulated to minimize the cost of microgrid operations. • Robust optimization algorithms are developed to transform and solve the problem. • The proposed scheme can prominently reduce energy expenses. • Numerical results provide useful insights for future investment policy making. - Abstract: In this paper, a cost minimization problem is formulated to intelligently schedule energy generation for microgrids equipped with unstable renewable sources and combined heat and power (CHP) generators. In such systems, the fluctuating net demands (i.e., the electricity demands not balanced by renewable energies) and heat demands impose unprecedented challenges. To cope with the uncertain nature of net demand and heat demand, a new flexible uncertainty model is developed. Specifically, we introduce reference distributions according to predictions and field measurements and then define uncertainty sets to confine net and heat demands. The model allows the net demand and heat demand distributions to fluctuate around their reference distributions. Another difficulty in this problem is the uncertain electricity market prices. We develop chance constraint approximations and robust optimization approaches to first transform and then solve the original problem. Numerical results based on real-world data evaluate the impacts of different parameters. It is shown that our energy generation scheduling strategy performs well and the integration of combined heat and power (CHP) generators effectively reduces the system expenditure. Our research also helps shed some light on investment policy making for microgrids.

  2. Robust optimization of supersonic ORC nozzle guide vanes

    Science.gov (United States)

    Bufi, Elio A.; Cinnella, Paola

    2017-03-01

    An efficient Robust Optimization (RO) strategy is developed for the design of 2D supersonic Organic Rankine Cycle turbine expanders. Dense-gas effects are non-negligible for this application and are taken into account by describing the thermodynamics by means of the Peng-Robinson-Stryjek-Vera equation of state. The design methodology combines an Uncertainty Quantification (UQ) loop based on a Bayesian kriging model of the system response to the uncertain parameters, used to approximate statistics (mean and variance) of the uncertain system output, a CFD solver, and a multi-objective non-dominated sorting genetic algorithm (NSGA), also based on a Kriging surrogate of the multi-objective fitness function, along with an adaptive infill strategy for surrogate enrichment at each generation of the NSGA. The objective functions are the average and variance of the isentropic efficiency. The blade shape is parametrized by means of a Free Form Deformation (FFD) approach. The robust optimal blades are compared to the baseline design (based on the Method of Characteristics) and to a blade obtained by means of a deterministic CFD-based optimization.

  3. Robust buckling optimization of laminated composite structures using discrete material optimization considering “worst” shape imperfections

    DEFF Research Database (Denmark)

    Henrichsen, Søren Randrup; Lindgaard, Esben; Lund, Erik

    2015-01-01

    Robust buckling optimal design of laminated composite structures is conducted in this work. Optimal designs are obtained by considering geometric imperfections in the optimization procedure. Discrete Material Optimization is applied to obtain optimal laminate designs. The optimal geometric imperfection is represented by the “worst” shape imperfection. The two optimization problems are combined through the recurrence optimization. Hereby the imperfection sensitivity of the considered structures can be studied. The recurrence optimization is demonstrated through a U-profile and a cylindrical panel example. The imperfection sensitivity of the optimized structure decreases during the recurrence optimization for both examples, hence robust buckling optimal structures are designed.

  4. Robust and fast nonlinear optimization of diffusion MRI microstructure models.

    Science.gov (United States)

    Harms, R L; Fritz, F J; Tobisch, A; Goebel, R; Roebroeck, A

    2017-07-15

    run time, fit, accuracy and precision. Parameter initialization approaches were found to be relevant especially for more complex models, such as those involving several fiber orientations per voxel. For these, a fitting cascade initializing or fixing parameter values in a later optimization step from simpler models in an earlier optimization step further improved run time, fit, accuracy and precision compared to a single step fit. This establishes and makes available standards by which robust fit and accuracy can be achieved in shorter run times. This is especially relevant for the use of diffusion microstructure modeling in large group or population studies and in combining microstructure parameter maps with tractography results. Copyright © 2017 The Authors. Published by Elsevier Inc. All rights reserved.

  5. A Robust Statistics Approach to Minimum Variance Portfolio Optimization

    Science.gov (United States)

    Yang, Liusha; Couillet, Romain; McKay, Matthew R.

    2015-12-01

    We study the design of portfolios under a minimum risk criterion. The performance of the optimized portfolio relies on the accuracy of the estimated covariance matrix of the portfolio asset returns. For large portfolios, the number of available market returns is often of similar order to the number of assets, so that the sample covariance matrix performs poorly as a covariance estimator. Additionally, financial market data often contain outliers which, if not correctly handled, may further corrupt the covariance estimation. We address these shortcomings by studying the performance of a hybrid covariance matrix estimator based on Tyler's robust M-estimator and on Ledoit-Wolf's shrinkage estimator while assuming samples with heavy-tailed distribution. Employing recent results from random matrix theory, we develop a consistent estimator of (a scaled version of) the realized portfolio risk, which is minimized by optimizing online the shrinkage intensity. Our portfolio optimization method is shown via simulations to outperform existing methods both for synthetic and real market data.
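    For comparison with the hybrid estimator studied in this record, the baseline below computes global minimum-variance weights from a Ledoit-Wolf shrinkage covariance estimate using scikit-learn; the return matrix is synthetic and the code is a simple illustration, not the authors' Tyler-type robust M-estimation scheme.

```python
import numpy as np
from sklearn.covariance import LedoitWolf

rng = np.random.default_rng(0)
returns = 0.01 * rng.standard_normal((250, 50))   # placeholder daily returns, T x N

cov = LedoitWolf().fit(returns).covariance_       # shrinkage covariance estimate
ones = np.ones(cov.shape[0])
w = np.linalg.solve(cov, ones)
w /= w.sum()                                      # global minimum-variance weights

risk = float(np.sqrt(w @ cov @ w))
print("estimated portfolio volatility per period:", risk)
```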

  6. Robust approximate optimal guidance strategies for aeroassisted orbital transfer missions

    Science.gov (United States)

    Ilgen, Marc R.

    This thesis presents the application of game theoretic and regular perturbation methods to the problem of determining robust approximate optimal guidance laws for aeroassisted orbital transfer missions with atmospheric density and navigated state uncertainties. The optimal guidance problem is reformulated as a differential game problem with the guidance law designer and Nature as opposing players. The resulting equations comprise the necessary conditions for the optimal closed loop guidance strategy in the presence of worst case parameter variations. While these equations are nonlinear and cannot be solved analytically, the presence of a small parameter in the equations of motion allows the method of regular perturbations to be used to solve the equations approximately. This thesis is divided into five parts. The first part introduces the class of problems to be considered and presents results of previous research. The second part then presents explicit semianalytical guidance law techniques for the aerodynamically dominated region of flight. These guidance techniques are applied to unconstrained and control constrained aeroassisted plane change missions and Mars aerocapture missions, all subject to significant atmospheric density variations. The third part presents a guidance technique for aeroassisted orbital transfer problems in the gravitationally dominated region of flight. Regular perturbations are used to design an implicit guidance technique similar to the second variation technique but that removes the need for numerically computing an optimal trajectory prior to flight. This methodology is then applied to a set of aeroassisted inclination change missions. In the fourth part, the explicit regular perturbation solution technique is extended to include the class of guidance laws with partial state information. This methodology is then applied to an aeroassisted plane change mission using inertial measurements and subject to uncertainties in the initial value

  7. Multidisciplinary Design Optimization for High Reliability and Robustness

    National Research Council Canada - National Science Library

    Grandhi, Ramana

    2005-01-01

    .... Over the last 3 years Wright State University has been applying analysis tools to predict the behavior of critical disciplines to produce highly robust torpedo designs using robust multi-disciplinary...

  8. Dynamic optimization of distributed biological systems using robust and efficient numerical techniques.

    Science.gov (United States)

    Vilas, Carlos; Balsa-Canto, Eva; García, Maria-Sonia G; Banga, Julio R; Alonso, Antonio A

    2012-07-02

    Systems biology allows the analysis of biological system behavior under different conditions through in silico experimentation. The possibility of perturbing biological systems in different manners calls for the design of perturbations to achieve particular goals. Examples would include the design of a chemical stimulation to maximize the amplitude of a given cellular signal or to achieve a desired pattern in pattern formation systems, etc. Such design problems can be mathematically formulated as dynamic optimization problems, which are particularly challenging when the system is described by partial differential equations. This work addresses the numerical solution of such dynamic optimization problems for spatially distributed biological systems. The usual nonlinear and large scale nature of the mathematical models related to this class of systems and the presence of constraints on the optimization problems impose a number of difficulties, such as the presence of suboptimal solutions, which call for robust and efficient numerical techniques. Here, the use of a control vector parameterization approach combined with efficient and robust hybrid global optimization methods and a reduced order model methodology is proposed. The capabilities of this strategy are illustrated by considering the solution of two challenging problems: bacterial chemotaxis and the FitzHugh-Nagumo model. In the chemotaxis process the objective was to efficiently compute the time-varying optimal concentration of chemoattractant on one of the spatial boundaries in order to achieve predefined cell distribution profiles. Results are in agreement with those previously published in the literature. The FitzHugh-Nagumo problem is also efficiently solved and it illustrates very well how dynamic optimization may be used to force a system to evolve from an undesired to a desired pattern with a reduced number of actuators. The presented methodology can be used for the efficient dynamic optimization of
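    The control vector parameterization idea can be sketched compactly: the stimulus is discretized into a few piecewise-constant levels, the model (here the ODE form of FitzHugh-Nagumo rather than the paper's PDE setting) is integrated for each candidate, and a global optimizer searches the levels. Model constants, target and horizon are illustrative assumptions.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import differential_evolution

def fhn(t, y, I_of_t, a=0.7, b=0.8, eps=0.08):
    v, w = y
    dv = v - v**3 / 3.0 - w + I_of_t(t)          # membrane potential with external stimulus
    dw = eps * (v + a - b * w)                   # recovery variable
    return [dv, dw]

def simulate(levels, t_end=50.0):
    edges = np.linspace(0.0, t_end, len(levels) + 1)   # piecewise-constant stimulus grid
    def I_of_t(t):
        idx = min(np.searchsorted(edges, t, side="right") - 1, len(levels) - 1)
        return levels[idx]
    sol = solve_ivp(fhn, (0.0, t_end), [-1.0, 1.0], args=(I_of_t,),
                    t_eval=np.linspace(0.0, t_end, 200), rtol=1e-6)
    return sol.y[0]

def objective(levels, v_target=1.0):
    v = simulate(levels)                         # drive v toward the target over the horizon
    return float(np.mean((v - v_target) ** 2))

result = differential_evolution(objective, bounds=[(0.0, 1.0)] * 4,
                                maxiter=20, seed=0, polish=False)
print("optimal stimulus levels:", result.x, "cost:", result.fun)
```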

  9. A kriging metamodel-assisted robust optimization method based on a reverse model

    Science.gov (United States)

    Zhou, Hui; Zhou, Qi; Liu, Congwei; Zhou, Taotao

    2018-02-01

    The goal of robust optimization methods is to obtain a solution that is both optimum and relatively insensitive to uncertainty factors. Most existing robust optimization approaches use outer-inner nested optimization structures where a large amount of computational effort is required because the robustness of each candidate solution delivered from the outer level should be evaluated in the inner level. In this article, a kriging metamodel-assisted robust optimization method based on a reverse model (K-RMRO) is first proposed, in which the nested optimization structure is reduced into a single-loop optimization structure to ease the computational burden. Ignoring the interpolation uncertainties from kriging, K-RMRO may yield non-robust optima. Hence, an improved kriging-assisted robust optimization method based on a reverse model (IK-RMRO) is presented to take the interpolation uncertainty of kriging metamodel into consideration. In IK-RMRO, an objective switching criterion is introduced to determine whether the inner level robust optimization or the kriging metamodel replacement should be used to evaluate the robustness of design alternatives. The proposed criterion is developed according to whether or not the robust status of the individual can be changed because of the interpolation uncertainties from the kriging metamodel. Numerical and engineering cases are used to demonstrate the applicability and efficiency of the proposed approach.

  10. On the relation between flexibility analysis and robust optimization for linear systems

    KAUST Repository

    Zhang, Qi; Grossmann, Ignacio E.; Lima, Ricardo

    2016-01-01

    Flexibility analysis and robust optimization are two approaches to solving optimization problems under uncertainty that share some fundamental concepts, such as the use of polyhedral uncertainty sets and the worst-case approach to guarantee

  11. Research on robust optimization of emergency logistics network considering the time dependence characteristic

    Science.gov (United States)

    WANG, Qingrong; ZHU, Changfeng; LI, Ying; ZHANG, Zhengkun

    2017-06-01

    Considering the time dependence of emergency logistics networks and the complexity of the environment in which such networks operate, this paper combines time-dependent network optimization theory with robust discrete optimization theory and builds a robust, time-dependent emergency logistics network optimization model that maximizes the timeliness of emergency logistics. On this basis, considering the complexity of the dynamic network and the time dependence of edge weights, an improved ant colony algorithm is proposed to couple the optimization algorithm with the time dependence and robustness of the network. Finally, a case study has been carried out to verify the validity of this robust optimization model and its algorithm, and the effect of different regulation factors was analyzed, given the importance of the control factor's value in solving for the optimal path. The analysis results show that the model and its algorithm have good timeliness and strong robustness.

  12. Robust optimization-based DC optimal power flow for managing wind generation uncertainty

    Science.gov (United States)

    Boonchuay, Chanwit; Tomsovic, Kevin; Li, Fangxing; Ongsakul, Weerakorn

    2012-11-01

    Integrating wind generation into the wider grid causes a number of challenges to traditional power system operation. Given the relatively large wind forecast errors, congestion management tools based on optimal power flow (OPF) need to be improved. In this paper, a robust optimization (RO)-based DCOPF is proposed to determine the optimal generation dispatch and locational marginal prices (LMPs) for a day-ahead competitive electricity market considering the risk of dispatch cost variation. The basic concept is to use the dispatch to hedge against the possibility of reduced or increased wind generation. The proposed RO-based DCOPF is compared with a stochastic non-linear programming (SNP) approach on a modified PJM 5-bus system. Primary test results show that the proposed DCOPF model can provide lower dispatch cost than the SNP approach.

  13. Systematic and robust design of photonic crystal waveguides by topology optimization

    DEFF Research Database (Denmark)

    Wang, Fengwen; Jensen, Jakob Søndergaard; Sigmund, Ole

    2010-01-01

    A robust topology optimization method is presented to consider manufacturing uncertainties in tailoring dispersion properties of photonic crystal waveguides. The under-, normal and over-etching scenarios in the manufacturing process are represented by dilated, intermediate and eroded designs based on a threshold projection. The objective is formulated to minimize the maximum error between actual group indices and a prescribed group index among these three designs. A novel photonic crystal waveguide facilitating slow light with a group index of n(g) = 40 is achieved by the robust optimization approach. The numerical result illustrates that the robust topology optimization provides a systematic and robust design methodology for photonic crystal waveguide design.

  14. The research on optimization of auto supply chain network robust model under macroeconomic fluctuations

    International Nuclear Information System (INIS)

    Guo, Chunxiang; Liu, Xiaoli; Jin, Maozhu; Lv, Zhihan

    2016-01-01

    Considering the uncertainty of the macroeconomic environment, the robust optimization method is studied for constructing and designing the automotive supply chain network; based on the definition of a robust solution, a robust optimization model is built for integrated supply chain network design that consists of a supplier selection problem and a facility location-distribution problem. A tabu search algorithm is proposed for supply chain node configuration, the influence of the level of uncertainty on the robust results is analyzed, and the performance of supply chain network designs obtained through the stochastic programming model and the robust optimization model is compared; on this basis, the rational layout of the supply chain network under macroeconomic fluctuations is determined. Finally, the comparative test results validate that the tabu search algorithm performs outstandingly in terms of convergence and computational time. Meanwhile, it is indicated that the robust optimization model can effectively reduce investment risks when applied to supply chain network design.

  15. Robust formation control of marine surface craft using Lagrange multipliers

    DEFF Research Database (Denmark)

    Ihle, Ivar-Andre F.; Jouffroy, Jerome; Fossen, Thor I.

    2006-01-01

    This paper presents a formation modelling scheme based on a set of inter-body constraint functions and Lagrangian multipliers. Formation control for a fleet of marine craft is achieved by stabilizing the auxiliary constraints such that the desired formation configuration appears. In the proposed fr...

  16. Robust Optimization Using Supremum of the Objective Function for Nonlinear Programming Problems

    International Nuclear Information System (INIS)

    Lee, Se Jung; Park, Gyung Jin

    2014-01-01

    In the robust optimization field, the robustness of the objective function emphasizes an insensitive design. In general, the robustness of the objective function can be achieved by reducing the change of the objective function with respect to the variation of the design variables and parameters. However, in conventional methods, when an insensitive design is emphasized, the performance of the objective function can deteriorate. Moreover, as the number of design variables increases, the computational cost of robust optimization for nonlinear programming problems becomes quite high. In this research, a robustness index for the objective function and a process of robust optimization are proposed. Moreover, a method using the supremum of linearized functions is also proposed to reduce the computational cost. Mathematical examples are solved for the verification of the proposed method and the results are compared with those from conventional methods. The proposed approach improves the performance of the objective function and its efficiency

  17. Robust fluence map optimization via alternating direction method of multipliers with empirical parameter optimization

    International Nuclear Information System (INIS)

    Gao, Hao

    2016-01-01

    For treatment planning in intensity modulated radiation therapy (IMRT) or volumetric modulated arc therapy (VMAT), beam fluence maps can be first optimized via fluence map optimization (FMO) under the given dose prescriptions and constraints to conformally deliver the radiation dose to the targets while sparing the organs-at-risk, and then segmented into deliverable MLC apertures via leaf or arc sequencing algorithms. This work develops an efficient algorithm for FMO based on the alternating direction method of multipliers (ADMM). Here we consider FMO with the least-square cost function and non-negative fluence constraints, and its solution algorithm is based on ADMM, which is efficient and simple to implement. In addition, an empirical method for optimizing the ADMM parameter is developed to improve the robustness of the ADMM algorithm. The ADMM based FMO solver was benchmarked against the quadratic programming method based on the interior-point (IP) method using the CORT dataset. The comparison results suggested that the ADMM solver achieved similar plan quality with a slightly smaller total objective function value than IP. A simple-to-implement ADMM based FMO solver with empirical parameter optimization is proposed for IMRT or VMAT. (paper)
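    A generic ADMM solver for the non-negative least-squares structure mentioned above (not the clinical FMO code, and without dose-volume constraints) can be written in a few lines; the x-update reuses a single Cholesky factorization, which is what makes the iterations cheap.

```python
import numpy as np

def admm_nnls(A, b, rho=1.0, n_iter=200):
    """Minimize ||A x - b||^2 subject to x >= 0 via ADMM (a generic sketch)."""
    m, n = A.shape
    AtA = A.T @ A
    Atb = A.T @ b
    L = np.linalg.cholesky(AtA + rho * np.eye(n))        # factor once, reuse every iteration
    x = np.zeros(n); z = np.zeros(n); u = np.zeros(n)
    for _ in range(n_iter):
        rhs = Atb + rho * (z - u)
        x = np.linalg.solve(L.T, np.linalg.solve(L, rhs))  # x-update (least-squares step)
        z = np.maximum(0.0, x + u)                          # z-update (projection onto x >= 0)
        u = u + x - z                                       # dual update
    return z

rng = np.random.default_rng(0)
A = rng.standard_normal((80, 40))
x_true = np.maximum(0.0, rng.standard_normal(40))
b = A @ x_true + 0.01 * rng.standard_normal(80)
x_hat = admm_nnls(A, b)
print("min entry:", x_hat.min(), "residual:", np.linalg.norm(A @ x_hat - b))
```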

  18. Cloud Optimized Image Format and Compression

    Science.gov (United States)

    Becker, P.; Plesea, L.; Maurer, T.

    2015-04-01

    Cloud based image storage and processing requires re-evaluation of formats and processing methods. For the true value of the massive volumes of earth observation data to be realized, the image data needs to be accessible from the cloud. Traditional file formats such as TIF and NITF were developed in the heyday of the desktop and assumed fast, low-latency file access. Other formats such as JPEG2000 provide streaming protocols for pixel data, but still require a server to have file access. These concepts no longer truly hold in cloud based elastic storage and computation environments. This paper provides details of a newly evolving image storage format (MRF) and compression that is optimized for cloud environments. Although the cost of storage continues to fall for large data volumes, there is still significant value in compression. For imagery data to be used in analysis and to exploit the extended dynamic range of the new sensors, lossless or controlled lossy compression is of high value. Compression decreases the data volumes stored and reduces the data transferred, but the reduced data size must be balanced with the CPU required to decompress. The paper also outlines a new compression algorithm (LERC) for imagery and elevation data that optimizes this balance. Advantages of the compression include its simple-to-implement algorithm that enables it to be efficiently accessed using JavaScript. Combining this new cloud based image storage format and compression will help resolve some of the challenges of big image data on the internet.

  19. A Two-Stage Robust Optimization for Centralized-Optimal Dispatch of Photovoltaic Inverters in Active Distribution Networks

    DEFF Research Database (Denmark)

    Ding, Tao; Li, Cheng; Yang, Yongheng

    2017-01-01

    Optimally dispatching Photovoltaic (PV) inverters is an efficient way to avoid overvoltage in active distribution networks, which may occur when PV generation exceeds the load demand. Typically, the dispatching optimization objective is to identify critical PV inverters that have the most... nature of solar PV energy may affect the selection of the critical PV inverters and also the final optimal objective value. In order to address this issue, a two-stage robust optimization model is proposed in this paper to achieve a robust optimal solution to the PV inverter dispatch, which can hedge against any possible realization within the uncertain PV outputs. In addition, the conic relaxation-based branch flow formulation and the second-order cone programming based column-and-constraint generation algorithm are employed to deal with the proposed robust optimization model. Case studies on a 33-bus

  20. Robust

    DEFF Research Database (Denmark)

    2017-01-01

    ‘Robust – Reflections on Resilient Architecture’ is a scientific publication following the conference of the same name in November 2017. Researchers and PhD Fellows associated with the Masters programme Cultural Heritage, Transformation and Restoration (Transformation) at The Royal Danish

  1. Optimal design of robust piezoelectric microgrippers undergoing large displacements

    DEFF Research Database (Denmark)

    Ruiz, D.; Sigmund, Ole

    2018-01-01

    Topology optimization combined with optimal design of electrodes is used to design piezoelectric microgrippers. Fabrication at micro-scale presents an important challenge: due to non-symmetrical lamination of the structures, out-of-plane bending spoils the behaviour of the grippers. Suppression...

  2. Assuring robustness to noise in optimal quantum control experiments

    International Nuclear Information System (INIS)

    Bartelt, A.F.; Roth, M.; Mehendale, M.; Rabitz, H.

    2005-01-01

    Closed-loop optimal quantum control experiments operate in the inherent presence of laser noise. In many applications, attaining high quality results [i.e., a high signal-to-noise (S/N) ratio for the optimized objective] is as important as producing a high control yield. Enhancement of the S/N ratio will typically be in competition with the mean signal; however, this competition can be balanced by biasing the optimization experiments towards higher mean yields while retaining a good S/N ratio. Other strategies can also direct the optimization to reduce the standard deviation of the statistical signal distribution. The ability to enhance the S/N ratio through an optimized choice of the control is demonstrated for two condensed phase model systems: second harmonic generation in a nonlinear optical crystal and stimulated emission pumping in a dye solution

  3. Approximating the Pareto set of multiobjective linear programs via robust optimization

    NARCIS (Netherlands)

    Gorissen, B.L.; den Hertog, D.

    2012-01-01

    We consider problems with multiple linear objectives and linear constraints and use adjustable robust optimization and polynomial optimization as tools to approximate the Pareto set with polynomials of arbitrarily large degree. The main difference with existing techniques is that we optimize a

  4. Design and Validation of Optimized Feedforward with Robust Feedback Control of a Nuclear Reactor

    International Nuclear Information System (INIS)

    Shaffer, Roman; He Weidong; Edwards, Robert M.

    2004-01-01

    Design applications for robust feedback and optimized feedforward control, with confirming results from experiments conducted on the Pennsylvania State University TRIGA reactor, are presented. The combination of feedforward and feedback control techniques complement each other in that robust control offers guaranteed closed-loop stability in the presence of uncertainties, and optimized feedforward offers an approach to achieving performance that is sometimes limited by overly conservative robust feedback control. The design approach taken in this work combines these techniques by first designing robust feedback control. Alternative methods for specifying a low-order linear model and uncertainty specifications, while seeking as much performance as possible, are discussed and evaluated. To achieve desired performance characteristics, the optimized feedforward control is then computed by using the nominal nonlinear plant model that incorporates the robust feedback control

  5. Simulation-based robust optimization for signal timing and setting.

    Science.gov (United States)

    2009-12-30

    The performance of signal timing plans obtained from traditional approaches for pre-timed (fixed-time or actuated) control systems is often unstable under fluctuating traffic conditions. This report develops a general approach for optimizing the ...

  6. Three Essays on Robust Optimization of Efficient Portfolios

    OpenAIRE

    Liu, Hao

    2013-01-01

    The mean-variance approach was first proposed by Markowitz (1952), and laid the foundation of modern portfolio theory. Despite its theoretical appeal, the practical implementation of optimized portfolios is strongly restricted by the fact that the two inputs, the means and the covariance matrix of asset returns, are unknown and have to be estimated from available historical information. Due to the estimation risk inherited from these inputs, desired properties of estimated optimal portfolios are ...
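
    Since the record above attributes the practical difficulty to estimation risk in the mean and covariance inputs, the following hedged sketch contrasts minimum-variance weights computed from a plain sample covariance with weights from a Ledoit-Wolf shrinkage estimate (scikit-learn); the return data are simulated and purely illustrative, not taken from the thesis.

```python
import numpy as np
from sklearn.covariance import LedoitWolf

def min_variance_weights(cov: np.ndarray) -> np.ndarray:
    """Closed-form minimum-variance weights: w = S^{-1} 1 / (1' S^{-1} 1)."""
    ones = np.ones(cov.shape[0])
    w = np.linalg.solve(cov, ones)
    return w / w.sum()

rng = np.random.default_rng(1)
returns = rng.normal(0.0005, 0.01, size=(250, 8))   # hypothetical daily returns, 8 assets

w_sample = min_variance_weights(np.cov(returns, rowvar=False))
w_shrunk = min_variance_weights(LedoitWolf().fit(returns).covariance_)
print("sample-covariance weights:", np.round(w_sample, 3))
print("Ledoit-Wolf weights:      ", np.round(w_shrunk, 3))
```

    The shrunk covariance typically yields weights that are less sensitive to sampling noise in short return histories, which is one common way to mitigate the estimation risk the abstract refers to.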

  7. APPLICATION OF GENETIC ALGORITHMS FOR ROBUST PARAMETER OPTIMIZATION

    Directory of Open Access Journals (Sweden)

    N. Belavendram

    2010-12-01

    Full Text Available Parameter optimization can be achieved by many methods such as Monte-Carlo, full, and fractional factorial designs. Genetic algorithms (GA) are fairly recent in this respect but afford a novel method of parameter optimization. In GA, there is an initial pool of individuals, each with its own specific phenotypic trait expressed as a ‘genetic chromosome’. Different genes enable individuals with different fitness levels to reproduce according to natural reproductive gene theory. This reproduction is established in terms of selection, crossover and mutation of the reproducing genes. The resulting child generation of individuals has a better fitness level, akin to natural selection, namely evolution. Populations evolve towards the fittest individuals. Such a mechanism has a parallel application in parameter optimization. Factors in a parameter design can be expressed as a genetic analogue in a pool of sub-optimal random solutions. Allowing this pool of sub-optimal solutions to evolve over several generations produces fitter generations converging to a pre-defined engineering optimum. In this paper, a genetic algorithm is used to study a seven-factor non-linear equation for a Wheatstone bridge as the equation to be optimized. A comparison of the full factorial design against the GA method shows that the GA method is about 1200 times faster in finding a comparable solution.
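
    The record describes the standard GA loop of selection, crossover and mutation; the sketch below is a minimal real-coded GA in NumPy applied to a generic seven-variable nonlinear objective, since the actual Wheatstone bridge equation is not given in the abstract and the objective here is a placeholder assumption.

```python
import numpy as np

def ga_minimize(f, bounds, pop_size=60, generations=200, mut_rate=0.1, seed=0):
    """Minimal real-coded GA: tournament selection, blend crossover, Gaussian mutation."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds).T
    pop = rng.uniform(lo, hi, size=(pop_size, len(bounds)))
    for _ in range(generations):
        fit = np.apply_along_axis(f, 1, pop)
        elite = pop[np.argmin(fit)].copy()
        # tournament selection: keep the better of two random individuals, pop_size times
        a, b = rng.integers(pop_size, size=(2, pop_size))
        parents = pop[np.where(fit[a] < fit[b], a, b)]
        # blend (arithmetic) crossover: pair each parent with one from the reversed list
        alpha = rng.random((pop_size, 1))
        children = alpha * parents + (1 - alpha) * parents[::-1]
        # Gaussian mutation, clipped back into the bounds
        mask = rng.random(children.shape) < mut_rate
        children += mask * rng.normal(0.0, 0.05 * (hi - lo), children.shape)
        pop = np.clip(children, lo, hi)
        pop[0] = elite                       # elitism: carry the best individual forward
    fit = np.apply_along_axis(f, 1, pop)
    return pop[np.argmin(fit)], fit.min()

# Usage: a hypothetical 7-factor nonlinear objective (stand-in for the bridge equation)
best_x, best_f = ga_minimize(lambda x: np.sum((x - 0.3) ** 2) + 0.1 * np.sin(10 * x).sum(),
                             bounds=[(0.0, 1.0)] * 7)
print(np.round(best_x, 3), round(best_f, 4))
```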

  8. Optimal strategy analysis based on robust predictive control for inventory system with random demand

    Science.gov (United States)

    Saputra, Aditya; Widowati, Sutrisno

    2017-12-01

    In this paper, the optimal strategy for a single-product, single-supplier inventory system with random demand is analyzed by using robust predictive control with an additive random parameter. We formulate the dynamics of this system as a linear state space model with an additive random parameter. To determine and analyze the optimal strategy for the given inventory system, we use a robust predictive control approach, which gives the optimal strategy, i.e., the optimal product volume that should be purchased from the supplier in each time period so that the expected cost is minimal. A numerical simulation is performed with some generated random inventory data. The simulation is carried out in MATLAB, where the inventory level must be controlled as close as possible to a chosen set point. From the results, the robust predictive control model provides the optimal strategy, i.e., the optimal product volume that should be purchased, and the inventory level followed the given set point.

  9. Robust Topology Optimization Based on Stochastic Collocation Methods under Loading Uncertainties

    Directory of Open Access Journals (Sweden)

    Qinghai Zhao

    2015-01-01

    Full Text Available A robust topology optimization (RTO) approach with consideration of loading uncertainties is developed in this paper. The stochastic collocation method combined with a full tensor product grid and a Smolyak sparse grid transforms the robust formulation into a weighted multiple-loading deterministic problem at the collocation points. The proposed approach is amenable to implementation in existing commercial topology optimization software packages and is thus feasible for practical engineering problems. Numerical examples of two- and three-dimensional topology optimization problems are provided to demonstrate the proposed RTO approach and its applications. The optimal topologies obtained from deterministic and robust topology optimization designs under the tensor product grid and the sparse grid with different levels are compared with one another to investigate the pros and cons of the optimization algorithms with respect to the final topologies, and an extensive Monte Carlo simulation is also performed to verify the proposed approach.
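
    To illustrate how a stochastic collocation rule turns a robust formulation into a weighted multi-load deterministic problem, the sketch below approximates the mean and standard deviation of a hypothetical compliance-versus-load curve with Gauss-Hermite collocation points and forms a mean-plus-k-sigma robust objective; the response function and parameters are assumptions, not taken from the paper.

```python
import numpy as np

def robust_objective(response, mu, sigma, k=3.0, n_points=5):
    """Approximate E[g] and Std[g] of a response g(load) under a Gaussian load
    using Gauss-Hermite collocation, and return the weighted robust objective."""
    # nodes/weights for integrals against exp(-t^2); rescale to N(mu, sigma^2)
    t, w = np.polynomial.hermite.hermgauss(n_points)
    loads = mu + np.sqrt(2.0) * sigma * t
    weights = w / np.sqrt(np.pi)
    g = np.array([response(p) for p in loads])
    mean = np.sum(weights * g)
    var = np.sum(weights * (g - mean) ** 2)
    return mean + k * np.sqrt(var)

# Usage with a hypothetical compliance-vs-load curve
compliance = lambda p: 1.0 + 0.02 * p + 0.001 * p ** 2
print(robust_objective(compliance, mu=100.0, sigma=15.0))
```

    In an actual RTO loop, each collocation point corresponds to one deterministic load case, so the robust objective is simply a weighted combination of a handful of standard analyses.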

  10. Distributed Consensus-Based Robust Adaptive Formation Control for Nonholonomic Mobile Robots with Partial Known Dynamics

    Directory of Open Access Journals (Sweden)

    Zhaoxia Peng

    2014-01-01

    Full Text Available This paper investigates distributed consensus-based robust adaptive formation control for nonholonomic mobile robots with partially known dynamics. Firstly, the multirobot formation control problem is converted into a state consensus problem. Secondly, practical control strategies, which incorporate distributed kinematic controllers and robust adaptive torque controllers, are designed to solve the formation control problem. Thirdly, the specified reference trajectory for the geometric centroid of the formation is taken as the trajectory of a virtual leader, whose information is available to only a subset of the followers. Finally, numerical results are provided to illustrate the effectiveness of the proposed control approaches.

  11. Hybrid Robust Optimization for the Design of a Smartphone Metal Frame Antenna

    Directory of Open Access Journals (Sweden)

    Sungwoo Lee

    2018-01-01

    Full Text Available Hybrid robust optimization that combines a genetical swarm optimization (GSO) scheme with an orthogonal array (OA) is proposed in this paper to design an antenna that is robust to the tolerances arising during the antenna fabrication process. An inverted-F antenna with a metal frame serves as an example to explain the procedure of the proposed method. GSO is adopted to determine the design variables of the antenna, which operates on the GSM850 band (824–894 MHz). The robustness of the antenna is evaluated through a noise test using the OA. The robustness of the optimized antenna is improved by approximately 61.3% relative to that of a conventional antenna. Conventional and optimized antennas are fabricated and measured to validate the experimental results.

  12. LMI–based robust controller design approach in aircraft multidisciplinary design optimization problem

    Directory of Open Access Journals (Sweden)

    Qinghua Zeng

    2015-07-01

    Full Text Available This article proposes a linear matrix inequality-based robust controller design approach to implement the synchronous design of the aircraft control discipline together with other disciplines, in which the variation in design parameters is treated as equivalent perturbations. Because of the complicated mapping between the coefficient arrays of the aircraft motion model and the aircraft design parameters, a robust controller designed directly from the variation in these coefficient arrays is so conservative that the multidisciplinary design optimization problem becomes too difficult to solve, or, even if a solution exists, the robustness of the design result is generally poor. Therefore, this article derives an uncertainty model of the disciplinary design parameters based on response surface approximation, converts the robust controller design problem into the solution of a standard linear matrix inequality, and theoretically gives a less conservative design method for the robust controller based on the variation in the design parameters. Furthermore, the concurrent subspace approach is applied to the multidisciplinary system with this kind of robust controller in the design loop. A multidisciplinary design optimization of a tailless aircraft is given as an example, showing that the control discipline can be designed optimally and synchronously with the other disciplines; in particular, the method greatly reduces the computational effort of multidisciplinary design optimization and makes the results more robust with respect to flight performance.

  13. Robust dynamical pattern formation from a multifunctional minimal genetic circuit

    Directory of Open Access Journals (Sweden)

    Carrera Javier

    2010-04-01

    Full Text Available Abstract Background A practical problem during the analysis of natural networks is their complexity, thus the use of synthetic circuits would allow the natural mechanisms of operation to be unveiled. Autocatalytic gene regulatory networks play an important role in shaping the development of multicellular organisms, whereas oscillatory circuits are used to control gene expression under variable environments such as the light-dark cycle. Results We propose a new mechanism to generate developmental patterns and oscillations using a minimal number of genes. For this, we design a synthetic gene circuit with an antagonistic self-regulation to study the spatio-temporal control of protein expression. Here, we show that our minimal system can behave as a biological clock or memory, and it exhibits an inherent robustness due to a quorum sensing mechanism. We analyze this property by accounting for molecular noise in a heterogeneous population. We also show how the period of the oscillations is tunable by environmental signals, and we study the bifurcations of the system by constructing different phase diagrams. Conclusions As this minimal circuit is based on a single transcriptional unit, it provides a new mechanism based on post-translational interactions to generate targeted spatio-temporal behavior.

  14. Benefits and Challenges when Performing Robust Topology Optimization for Interior Acoustic Problems

    DEFF Research Database (Denmark)

    Christiansen, Rasmus Ellebæk; Jensen, Jakob Søndergaard; Lazarov, Boyan Stefanov

    The objective of this work is to present benefits and challenges of using robust topology optimization techniques for minimizing the sound pressure in interior acoustic problems. The focus is on creating designs which maintain high performance under uniform spatial variations. This work takes offset...... in previous work considering topology optimization for interior acoustic problems, [1]. However in the previous work the robustness of the designs was not considered....

  15. Benefits and Challenges when Performing Robust Topology Optimization for Interior Acoustic Problems

    OpenAIRE

    Christiansen, Rasmus Ellebæk; Jensen, Jakob Søndergaard; Lazarov, Boyan Stefanov; Sigmund, Ole

    2014-01-01

    The objective of this work is to present benefits and challenges of using robust topology optimization techniques for minimizing the sound pressure in interior acoustic problems. The focus is on creating designs which maintain high performance under uniform spatial variations. This work takes offset in previous work considering topology optimization for interior acoustic problems, [1]. However in the previous work the robustness of the designs was not considered.

  16. Robust Optimization for Time-Cost Tradeoff Problem in Construction Projects

    OpenAIRE

    Li, Ming; Wu, Guangdong

    2014-01-01

    Construction projects are generally subject to uncertainty, which influences the realization of time-cost tradeoff in project management. This paper addresses a time-cost tradeoff problem under uncertainty, in which activities in projects can be executed in different construction modes corresponding to specified time and cost with interval uncertainty. Based on multiobjective robust optimization method, a robust optimization model for time-cost tradeoff problem is developed. In order to illus...

  17. Nickel-Cadmium Battery Operation Management Optimization Using Robust Design

    Science.gov (United States)

    Blosiu, Julian O.; Deligiannis, Frank; DiStefano, Salvador

    1996-01-01

    In recent years, following several spacecraft battery anomalies, it was determined that managing the operational factors of NASA flight NiCd rechargeable batteries was very important in order to maintain nominal space flight battery performance. The optimization of existing flight battery operational performance was viewed as something new for a Taguchi Methods application.

  18. Hybrid Robust Multi-Objective Evolutionary Optimization Algorithm

    Science.gov (United States)

    2009-03-10

    xfar by xint. Else, generate a new individual, using the Sobol pseudo-random sequence generator within the upper and lower bounds of the variables...12. Deb, K., Multi-Objective Optimization Using Evolutionary Algorithms, John Wiley & Sons, 2002. 13. Sobol, I. M., "Uniformly Distributed Sequences

  19. Robust optimization in simulation : Taguchi and Krige combined

    NARCIS (Netherlands)

    Dellino, G.; Kleijnen, Jack P.C.; Meloni, C.

    2012-01-01

    Optimization of simulated systems is the goal of many methods, but most methods assume known environments. We, however, develop a “robust” methodology that accounts for uncertain environments. Our methodology uses Taguchi's view of the uncertain world but replaces his statistical techniques by

  20. Decreasing-Rate Pruning Optimizes the Construction of Efficient and Robust Distributed Networks.

    Directory of Open Access Journals (Sweden)

    Saket Navlakha

    2015-07-01

    Full Text Available Robust, efficient, and low-cost networks are advantageous in both biological and engineered systems. During neural network development in the brain, synapses are massively over-produced and then pruned-back over time. This strategy is not commonly used when designing engineered networks, since adding connections that will soon be removed is considered wasteful. Here, we show that for large distributed routing networks, network function is markedly enhanced by hyper-connectivity followed by aggressive pruning and that the global rate of pruning, a developmental parameter not previously studied by experimentalists, plays a critical role in optimizing network structure. We first used high-throughput image analysis techniques to quantify the rate of pruning in the mammalian neocortex across a broad developmental time window and found that the rate is decreasing over time. Based on these results, we analyzed a model of computational routing networks and show using both theoretical analysis and simulations that decreasing rates lead to more robust and efficient networks compared to other rates. We also present an application of this strategy to improve the distributed design of airline networks. Thus, inspiration from neural network formation suggests effective ways to design distributed networks across several domains.
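
    As a loose illustration of pruning schedules in engineered routing networks (not the paper's neural or airline model), the sketch below prunes the least-used edges of a random graph, with usage proxied by edge betweenness, under a decreasing-rate and a constant-rate schedule, and compares the resulting global efficiency with NetworkX; the graph, target size and schedules are illustrative assumptions.

```python
import networkx as nx
import numpy as np

def prune_by_usage(G, target_edges, schedule):
    """Iteratively drop the least-used edges (edge betweenness as a usage proxy);
    schedule(step) gives the fraction of the remaining excess edges to cut."""
    G = G.copy()
    step = 0
    while G.number_of_edges() > target_edges:
        excess = G.number_of_edges() - target_edges
        n_cut = max(1, min(excess, int(schedule(step) * excess)))
        usage = nx.edge_betweenness_centrality(G)
        for edge, _ in sorted(usage.items(), key=lambda kv: kv[1])[:n_cut]:
            G.remove_edge(*edge)
        step += 1
    return G

G0 = nx.erdos_renyi_graph(60, 0.3, seed=1)                  # over-connected starting network
dec = prune_by_usage(G0, 150, lambda s: 0.4 / (s + 1))      # aggressive early, gentle late
con = prune_by_usage(G0, 150, lambda s: 0.15)               # constant rate
print("decreasing-rate efficiency:", round(nx.global_efficiency(dec), 3))
print("constant-rate efficiency:  ", round(nx.global_efficiency(con), 3))
```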

  1. Robust Bayesian decision theory applied to optimal dosage.

    Science.gov (United States)

    Abraham, Christophe; Daurès, Jean-Pierre

    2004-04-15

    We give a model for constructing a utility function u(theta, d) in a dose prescription problem, where theta and d denote, respectively, the patient's state of health and the dose. The construction of u is based on the conditional probabilities of several variables. These probabilities are described by logistic models. Obviously, u is only an approximation of the true utility function, and that is why we investigate the sensitivity of the final decision with respect to the utility function. We construct a class of utility functions from u and approximate the set of all Bayes actions associated with that class. Then, we measure the sensitivity as the greatest difference between the expected utilities of two Bayes actions. Finally, we apply these results to weighing up a chemotherapy treatment of lung cancer. This application emphasizes the importance of measuring robustness through the utility of decisions rather than the decisions themselves. Copyright 2004 John Wiley & Sons, Ltd.

  2. Reliability-Based Robust Design Optimization of Structures Considering Uncertainty in Design Variables

    Directory of Open Access Journals (Sweden)

    Shujuan Wang

    2015-01-01

    Full Text Available This paper investigates structural design optimization covering both reliability and robustness under uncertainty in the design variables. The main objective is to improve the efficiency of the optimization process. To address this problem, a hybrid reliability-based robust design optimization (RRDO) method is proposed. Prior to the design optimization, Sobol sensitivity analysis is used to select the key design variables and to provide the response variance as well, resulting in significantly reduced computational complexity. A single-loop algorithm is employed to guarantee the structural reliability, allowing a fast optimization process. In the case of robust design, a weighting factor balances the response performance and the variance with respect to the uncertainty in the design variables. The main contribution of this paper is that the proposed method applies the RRDO strategy with the use of global approximation and Sobol sensitivity analysis, leading to reduced computational cost. A structural example is given to illustrate the performance of the proposed method.
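
    A hedged sketch of the Sobol screening step described above, assuming SALib's saltelli/sobol interface and a hypothetical three-variable structural response; the first-order indices identify the key design variables, and the response variance can feed a robust objective of the mean-plus-weighted-variance type.

```python
import numpy as np
from SALib.sample import saltelli
from SALib.analyze import sobol

# Hypothetical structural response: tip deflection as a function of three design variables
def response(x):
    width, height, load = x
    return load / (width * height ** 3)

problem = {
    "num_vars": 3,
    "names": ["width", "height", "load"],
    "bounds": [[0.05, 0.15], [0.1, 0.3], [800.0, 1200.0]],
}

X = saltelli.sample(problem, 1024)                 # Saltelli sampling for Sobol indices
Y = np.apply_along_axis(response, 1, X)
Si = sobol.analyze(problem, Y)
print(dict(zip(problem["names"], np.round(Si["S1"], 3))))    # first-order indices
print("response variance available for the robust objective:", Y.var())
```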

  3. The use of singular value gradients and optimization techniques to design robust controllers for multiloop systems

    Science.gov (United States)

    Newsom, J. R.; Mukhopadhyay, V.

    1983-01-01

    A method for designing robust feedback controllers for multiloop systems is presented. Robustness is characterized in terms of the minimum singular value of the system return difference matrix at the plant input. Analytical gradients of the singular values with respect to design variables in the controller are derived. A cumulative measure of the singular values and their gradients with respect to the design variables is used with a numerical optimization technique to increase the system's robustness. Both unconstrained and constrained optimization techniques are evaluated. Numerical results are presented for a two-output drone flight control system.

  4. Creating geometrically robust designs for highly sensitive problems using topology optimization: Acoustic cavity design

    DEFF Research Database (Denmark)

    Christiansen, Rasmus E.; Lazarov, Boyan S.; Jensen, Jakob S.

    2015-01-01

    Resonance and wave-propagation problems are known to be highly sensitive towards parameter variations. This paper discusses topology optimization formulations for creating designs that perform robustly under spatial variations for acoustic cavity problems. For several structural problems, robust...... and limitations are discussed. In addition, a known explicit penalization approach is considered for comparison. For near-uniform spatial variations it is shown that highly robust designs can be obtained using the double filter approach. It is finally demonstrated that taking non-uniform variations into account...... further improves the robustness of the designs....

  5. Optimal robust control strategy of a solid oxide fuel cell system

    Science.gov (United States)

    Wu, Xiaojuan; Gao, Danhui

    2018-01-01

    Optimal control can ensure safe system operation with high efficiency. However, only a few papers discuss optimal control strategies for solid oxide fuel cell (SOFC) systems. Moreover, the existing methods ignore the impact of parameter uncertainty on instantaneous system performance. In real SOFC systems, several parameters, such as the load current, may vary with the operating conditions and cannot be identified exactly. Therefore, a robust optimal control strategy is proposed, which involves three parts: an SOFC model with parameter uncertainty, a robust optimizer and robust controllers. During the model building process, the boundaries of the uncertain parameter are extracted based on a Monte Carlo algorithm. To achieve the maximum efficiency, a two-space particle swarm optimization approach is employed to obtain optimal operating points, which are used as the set points of the controllers. To ensure safe SOFC operation, two feed-forward controllers and a higher-order robust sliding-mode controller are then presented to control the fuel utilization ratio, the air excess ratio and the stack temperature. The results show that the proposed robust optimal control method can maintain safe SOFC system operation at maximum efficiency under load and uncertainty variations.

  6. Hybrid robust predictive optimization method of power system dispatch

    Science.gov (United States)

    Chandra, Ramu Sharat [Niskayuna, NY; Liu, Yan [Ballston Lake, NY; Bose, Sumit [Niskayuna, NY; de Bedout, Juan Manuel [West Glenville, NY

    2011-08-02

    A method of power system dispatch control solves power system dispatch problems by integrating a larger variety of generation, load and storage assets, including without limitation, combined heat and power (CHP) units, renewable generation with forecasting, controllable loads, electric, thermal and water energy storage. The method employs a predictive algorithm to dynamically schedule different assets in order to achieve global optimization and maintain the system normal operation.

  7. Robust Optimization for Time-Cost Tradeoff Problem in Construction Projects

    Directory of Open Access Journals (Sweden)

    Ming Li

    2014-01-01

    Full Text Available Construction projects are generally subject to uncertainty, which influences the realization of time-cost tradeoffs in project management. This paper addresses a time-cost tradeoff problem under uncertainty, in which activities in projects can be executed in different construction modes corresponding to specified times and costs with interval uncertainty. Based on a multiobjective robust optimization method, a robust optimization model for the time-cost tradeoff problem is developed. In order to illustrate the robust model, the nondominated sorting genetic algorithm-II (NSGA-II) is modified to solve the project example. The results show that, by adjusting the time and cost robust coefficients, the robust Pareto sets for the time-cost tradeoff can be obtained according to different acceptable risk levels, from which the decision maker can choose the preferred construction alternative.

  8. Optimal interdependence enhances the dynamical robustness of complex systems

    Science.gov (United States)

    Singh, Rishu Kumar; Sinha, Sitabhra

    2017-08-01

    Although interdependent systems have usually been associated with increased fragility, we show that strengthening the interdependence between dynamical processes on different networks can make them more likely to survive over long times. By coupling the dynamics of networks that in isolation exhibit catastrophic collapse with extinction of nodal activity, we demonstrate system-wide persistence of activity for an optimal range of interdependence between the networks. This is related to the appearance of attractors of the global dynamics comprising disjoint sets ("islands") of stable activity.

  9. Robust topology optimization accounting for misplacement of material

    DEFF Research Database (Denmark)

    Jansen, Miche; Lombaert, Geert; Diehl, Moritz

    2013-01-01

    into account this type of geometric imperfections. A density filter based approach is followed, and translations of material are obtained by adding a small perturbation to the center of the filter kernel. The spatial variation of the geometric imperfections is modeled by means of a vector valued random field....... A sampling method is used to estimate these statistics during the optimization process. The proposed method is successfully applied to three example problems: the minimum compliance design of a slender column-like structure and a cantilever beam and a compliant mechanism design. An extensive Monte Carlo...

  10. Robust Estimation of Diffusion-Optimized Ensembles for Enhanced Sampling

    DEFF Research Database (Denmark)

    Tian, Pengfei; Jónsson, Sigurdur Æ.; Ferkinghoff-Borg, Jesper

    2014-01-01

    The multicanonical, or flat-histogram, method is a common technique to improve the sampling efficiency of molecular simulations. The idea is that free-energy barriers in a simulation can be removed by simulating from a distribution where all values of a reaction coordinate are equally likely......, and subsequently reweight the obtained statistics to recover the Boltzmann distribution at the temperature of interest. While this method has been successful in practice, the choice of a flat distribution is not necessarily optimal. Recently, it was proposed that additional performance gains could be obtained...

  11. Robust optimization methods for chance constrained, simulation-based, and bilevel problems

    NARCIS (Netherlands)

    Yanikoglu, I.

    2014-01-01

    The objective of robust optimization is to find solutions that are immune to the uncertainty of the parameters in a mathematical optimization problem. It requires that the constraints of a given problem should be satisfied for all realizations of the uncertain parameters in a so-called uncertainty

  12. Robust Optimization of Thermal Aspects of Friction Stir Welding Using Manifold Mapping Techniques

    DEFF Research Database (Denmark)

    Larsen, Anders Astrup; Lahaye, Domenico; Schmidt, Henrik Nikolaj Blicher

    2008-01-01

    The aim of this paper is to optimize a friction stir welding process taking robustness into account. The optimization problems are formulated with the goal of obtaining desired mean responses while reducing the variance of the response. We restrict ourselves to a thermal model of the process...

  13. Handling Uncertain Gross Margin and Water Demand in Agricultural Water Resources Management using Robust Optimization

    Science.gov (United States)

    Chaerani, D.; Lesmana, E.; Tressiana, N.

    2018-03-01

    In this paper, an application of Robust Optimization to an agricultural water resource management problem under gross margin and water demand uncertainty is presented. Water resource management is a series of activities that includes planning, developing, distributing and managing the optimal use of water resources. Water resource management for agriculture can be one of the efforts to optimize the benefits of agricultural output. The objective function of the agricultural water resource management problem is to maximize the total benefit from water allocation to the agricultural areas covered by the irrigation network over the planning horizon. Due to gross margin and water demand uncertainty, we assume that the uncertain data lie within an ellipsoidal uncertainty set. We employ the robust counterpart methodology to obtain the robust optimal solution.
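
    Under the ellipsoidal uncertainty assumed above, the standard robust counterpart replaces the uncertain linear benefit with its worst case: a nominal term minus a norm penalty. The CVXPY sketch below applies this to a small, hypothetical water-allocation instance; all data values and variable names are illustrative assumptions, not the paper's case study.

```python
import cvxpy as cp
import numpy as np

# Hypothetical data: 4 crop areas, nominal gross margins, ellipsoid shape P, water use per area
c_nom = np.array([3.0, 2.5, 4.0, 1.8])        # nominal gross margin per unit area
P = 0.3 * np.diag(c_nom)                      # ellipsoid: c = c_nom + P u, with ||u||_2 <= 1
water_use = np.array([1.2, 0.8, 1.5, 0.6])
water_available = 10.0

x = cp.Variable(4, nonneg=True)               # allocated area per crop
# Robust counterpart of the uncertain objective: worst-case margin = c_nom @ x - ||P.T @ x||_2
objective = cp.Maximize(c_nom @ x - cp.norm(P.T @ x, 2))
constraints = [water_use @ x <= water_available, x <= 6.0]
cp.Problem(objective, constraints).solve()
print("robust allocation:", np.round(x.value, 2))
```

    The norm term is what makes the solution immune to any margin realization inside the ellipsoid, at the price of a somewhat lower nominal benefit.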

  14. Robust Homography Estimation Based on Nonlinear Least Squares Optimization

    Directory of Open Access Journals (Sweden)

    Wei Mou

    2014-01-01

    Full Text Available The homography between image pairs is normally estimated by minimizing a suitable cost function given 2D keypoint correspondences. The correspondences are typically established using the descriptor distance of keypoints. However, the correspondences are often incorrect due to ambiguous descriptors, which can introduce errors into the subsequent homography computation step. There have been numerous attempts to filter out these erroneous correspondences, but perfect matching is unlikely to be achieved in all cases. To deal with this problem, we propose a nonlinear least squares optimization approach to compute the homography such that false matches have no or little effect on the computed homography. Unlike normal homography computation algorithms, our method formulates not only the keypoints' geometric relationship but also their descriptor similarity into the cost function. Moreover, the cost function is parametrized in such a way that incorrect correspondences can be simultaneously identified while the homography is computed. Experiments show that the proposed approach can perform well even in the presence of a large number of outliers.
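
    A simplified sketch of the geometric part of such an approach: refining an 8-parameter homography by minimizing reprojection residuals with a robust loss in SciPy. The descriptor-similarity term of the paper is not modeled here, and the correspondences are synthetic.

```python
import numpy as np
from scipy.optimize import least_squares

def residuals(h, src, dst):
    """Transfer residuals: project src points with H (last entry fixed to 1)."""
    H = np.append(h, 1.0).reshape(3, 3)
    p = np.c_[src, np.ones(len(src))] @ H.T
    proj = p[:, :2] / p[:, 2:3]
    return (proj - dst).ravel()

rng = np.random.default_rng(0)
src = rng.uniform(0, 640, (40, 2))
H_true = np.array([[1.05, 0.02, 5.0], [-0.01, 0.98, -3.0], [1e-4, 2e-4, 1.0]])
p = np.c_[src, np.ones(40)] @ H_true.T
dst = p[:, :2] / p[:, 2:3] + rng.normal(0, 0.5, (40, 2))
dst[:5] += rng.uniform(30, 60, (5, 2))            # a few gross outliers from false matches

fit = least_squares(residuals, x0=np.array([1, 0, 0, 0, 1, 0, 0, 0], float),
                    args=(src, dst), loss="soft_l1", f_scale=2.0)
H_est = np.append(fit.x, 1.0).reshape(3, 3)
print(np.round(H_est, 4))
```

    The soft-L1 loss down-weights the large residuals produced by the outliers, which is the same effect the abstract seeks through its joint geometric/descriptor cost.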

  15. Including robustness in multi-criteria optimization for intensity-modulated proton therapy

    Science.gov (United States)

    Chen, Wei; Unkelbach, Jan; Trofimov, Alexei; Madden, Thomas; Kooy, Hanne; Bortfeld, Thomas; Craft, David

    2012-02-01

    We present a method to include robustness in a multi-criteria optimization (MCO) framework for intensity-modulated proton therapy (IMPT). The approach allows one to simultaneously explore the trade-off between different objectives as well as the trade-off between robustness and nominal plan quality. In MCO, a database of plans each emphasizing different treatment planning objectives, is pre-computed to approximate the Pareto surface. An IMPT treatment plan that strikes the best balance between the different objectives can be selected by navigating on the Pareto surface. In our approach, robustness is integrated into MCO by adding robustified objectives and constraints to the MCO problem. Uncertainties (or errors) of the robust problem are modeled by pre-calculated dose-influence matrices for a nominal scenario and a number of pre-defined error scenarios (shifted patient positions, proton beam undershoot and overshoot). Objectives and constraints can be defined for the nominal scenario, thus characterizing nominal plan quality. A robustified objective represents the worst objective function value that can be realized for any of the error scenarios and thus provides a measure of plan robustness. The optimization method is based on a linear projection solver and is capable of handling large problem sizes resulting from a fine dose grid resolution, many scenarios, and a large number of proton pencil beams. A base-of-skull case is used to demonstrate the robust optimization method. It is demonstrated that the robust optimization method reduces the sensitivity of the treatment plan to setup and range errors to a degree that is not achieved by a safety margin approach. A chordoma case is analyzed in more detail to demonstrate the involved trade-offs between target underdose and brainstem sparing as well as robustness and nominal plan quality. The latter illustrates the advantage of MCO in the context of robust planning. For all cases examined, the robust optimization for

  16. Robust Multivariable Optimization and Performance Simulation for ASIC Design

    Science.gov (United States)

    DuMonthier, Jeffrey; Suarez, George

    2013-01-01

    Application-specific-integrated-circuit (ASIC) design for space applications involves multiple challenges of maximizing performance, minimizing power, and ensuring reliable operation in extreme environments. This is a complex multidimensional optimization problem, which must be solved early in a system's development cycle because the time required for testing and qualification severely limits opportunities to modify and iterate. Manual design techniques, which generally involve simulation at one or a small number of corners with a very limited set of simultaneously variable parameters in order to make the problem tractable, are inefficient and not guaranteed to achieve the best possible results within the performance envelope defined by the process and environmental requirements. What is required is a means to automate design parameter variation, allow the designer to specify operational constraints and performance goals, and analyze the results in a way that facilitates identifying the tradeoffs defining the performance envelope over the full set of process and environmental corner cases. The system developed by the Mixed Signal ASIC Group (MSAG) at the Goddard Space Flight Center is implemented as a framework of software modules, templates, and function libraries. It integrates CAD tools and a mathematical computing environment, and can be customized for new circuit designs with only a modest amount of effort as most common tasks are already encapsulated. Customization is required for simulation test benches to determine performance metrics and for cost function computation.

  17. Stochastic analysis and robust optimization for a deck lid inner panel stamping

    International Nuclear Information System (INIS)

    Hou, Bo; Wang, Wurong; Li, Shuhui; Lin, Zhongqin; Xia, Z. Cedric

    2010-01-01

    FE-simulation and optimization are widely used in the stamping process to improve design quality and shorten the development cycle. However, current simulation and optimization may lead to non-robust results because they do not consider the variation of material and process parameters. In this study, a novel stochastic analysis and robust optimization approach is proposed to improve stamping robustness, where the uncertainties are included to reflect manufacturing reality. A meta-model based stochastic analysis method is developed, where FE-simulation, uniform design and response surface methodology (RSM) are used to construct the meta-model, based on which Monte Carlo simulation is performed to predict the influence of input parameter variation on the final product quality. By applying the stochastic analysis, uniform design and RSM, the mean and the standard deviation (SD) of product quality are calculated as functions of the controllable process parameters. The robust optimization model composed of the mean and the SD is constructed and solved, and the result is compared with the deterministic one to show its advantages. It is demonstrated that the product quality variations are reduced significantly and the quality targets (reject rate) are achieved under the robust optimal solution. The developed approach offers rapid and reliable results for engineers to deal with potential stamping problems during the early phase of product and tooling design, saving time and resources.

  18. Robust optimization model and algorithm for railway freight center location problem in uncertain environment.

    Science.gov (United States)

    Liu, Xing-Cai; He, Shi-Wei; Song, Rui; Sun, Yang; Li, Hao-Dong

    2014-01-01

    The railway freight center location problem is an important issue in railway freight transport programming. This paper focuses on the railway freight center location problem in an uncertain environment. Since the expected value model ignores the negative influence of disadvantageous scenarios, a robust optimization model was proposed. The robust optimization model takes the expected cost and the deviation value of the scenarios as the objective. A cloud adaptive clonal selection algorithm (C-ACSA) was presented; it combines an adaptive clonal selection algorithm with the Cloud Model, which can improve the convergence rate. The design of the coding and the process of the algorithm were presented. The result of the example demonstrates that the model and the algorithm are effective. Compared with the expected value cases, the number of disadvantageous scenarios in the robust model is reduced from 163 to 21, which proves that the result of the robust model is more reliable.

  19. Robust Optimization Model and Algorithm for Railway Freight Center Location Problem in Uncertain Environment

    Directory of Open Access Journals (Sweden)

    Xing-cai Liu

    2014-01-01

    Full Text Available The railway freight center location problem is an important issue in railway freight transport programming. This paper focuses on the railway freight center location problem in an uncertain environment. Since the expected value model ignores the negative influence of disadvantageous scenarios, a robust optimization model was proposed. The robust optimization model takes the expected cost and the deviation value of the scenarios as the objective. A cloud adaptive clonal selection algorithm (C-ACSA) was presented; it combines an adaptive clonal selection algorithm with the Cloud Model, which can improve the convergence rate. The design of the coding and the process of the algorithm were presented. The result of the example demonstrates that the model and the algorithm are effective. Compared with the expected value cases, the number of disadvantageous scenarios in the robust model is reduced from 163 to 21, which proves that the result of the robust model is more reliable.
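
    The two railway records above describe an objective that combines expected scenario cost with a deviation penalty for disadvantageous scenarios. The small NumPy sketch below scores hypothetical candidate layouts under simulated demand scenarios with such a criterion; the data, the weighting and the one-sided deviation choice are illustrative assumptions, not the paper's exact model.

```python
import numpy as np

def robust_score(costs, weight=0.5):
    """Expected scenario cost plus a weighted penalty for worse-than-average scenarios."""
    expected = costs.mean()
    deviation = np.maximum(costs - expected, 0.0).mean()   # only disadvantageous scenarios
    return expected + weight * deviation

rng = np.random.default_rng(3)
# Hypothetical: 4 candidate center layouts evaluated under 200 demand scenarios each
scenario_costs = rng.gamma(shape=5.0, scale=20.0, size=(4, 200)) + np.array([[0], [15], [5], [30]])
expected_best = np.argmin(scenario_costs.mean(axis=1))
robust_best = np.argmin([robust_score(c) for c in scenario_costs])
print("expected-value choice:", expected_best, " robust choice:", robust_best)
```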

  20. Optimal control of quantum systems: Origins of inherent robustness to control field fluctuations

    International Nuclear Information System (INIS)

    Rabitz, Herschel

    2002-01-01

    The impact of control field fluctuations on the optimal manipulation of quantum dynamics phenomena is investigated. The quantum system is driven by an optimal control field, with the physical focus on the evolving expectation value of an observable operator. A relationship is shown to exist between the system dynamics and the control field fluctuations, wherein the process of seeking optimal performance assures an inherent degree of system robustness to such fluctuations. The presence of significant field fluctuations breaks down the evolution of the observable expectation value into a sequence of partially coherent robust steps. Robustness occurs because the optimization process reduces sensitivity to noise-driven quantum system fluctuations by taking advantage of the observable expectation value being bilinear in the evolution operator and its adjoint. The consequences of this inherent robustness are discussed in the light of recent experiments and numerical simulations on the optimal control of quantum phenomena. The analysis in this paper bodes well for the future success of closed-loop quantum optimal control experiments, even in the presence of reasonable levels of field fluctuations

  1. Robust design optimization method for centrifugal impellers under surface roughness uncertainties due to blade fouling

    Science.gov (United States)

    Ju, Yaping; Zhang, Chuhua

    2016-03-01

    Blade fouling has been proved to be a great threat to compressor performance in the operating stage. Current research on the fouling-induced performance degradation of centrifugal compressors is based mainly on simplified roughness models that do not take into account realistic factors such as the spatial non-uniformity and randomness of the fouling-induced surface roughness. Moreover, little attention has been paid to the robust design optimization of centrifugal compressor impellers with consideration of blade fouling. In this paper, a multi-objective robust design optimization method is developed for centrifugal impellers under surface roughness uncertainties due to blade fouling. A three-dimensional surface roughness map is proposed to describe the non-uniformity and randomness of realistic fouling accumulations on blades. To lower the computational cost of robust design optimization, the support vector regression (SVR) metamodel is combined with the Monte Carlo simulation (MCS) method to conduct the uncertainty analysis of fouled impeller performance. The results show that the critical fouled region associated with impeller performance degradation lies at the leading edge of the blade tip. The SVR metamodel proves to be an efficient and accurate means of detecting impeller performance variations caused by roughness uncertainties. After design optimization, the robust optimal design is found to be more efficient and less sensitive to fouling uncertainties while maintaining good impeller performance in the clean condition. This research proposes a systematic design optimization method for centrifugal compressors with consideration of blade fouling, providing practical guidance for the design of advanced centrifugal compressors.
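
    A hedged sketch of the surrogate-plus-Monte-Carlo step described above: a scikit-learn SVR metamodel is trained on a few evaluations of a stand-in analytic "CFD" function and then used to propagate roughness uncertainty cheaply. The response function, bounds and distributions are assumptions for illustration, not the paper's impeller model.

```python
import numpy as np
from sklearn.svm import SVR

# Stand-in for an expensive CFD evaluation: efficiency vs. mean roughness and its spread
def cfd_efficiency(r_mean, r_spread):
    return 0.90 - 0.4 * r_mean - 0.15 * r_spread + 0.2 * r_mean * r_spread

rng = np.random.default_rng(2)
X_train = rng.uniform([0.0, 0.0], [0.1, 0.05], size=(40, 2))     # small design of experiments
y_train = np.array([cfd_efficiency(*x) for x in X_train])

surrogate = SVR(kernel="rbf", C=100.0, epsilon=1e-4).fit(X_train, y_train)

# Monte Carlo over fouling-induced roughness uncertainty, evaluated on the cheap surrogate
samples = np.abs(rng.normal([0.05, 0.02], [0.015, 0.008], size=(20000, 2)))
eff = surrogate.predict(samples)
print(f"mean efficiency {eff.mean():.4f}, std {eff.std():.4f}")
```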

  2. Human platelet lysate supports the formation of robust human periodontal ligament cell sheets.

    Science.gov (United States)

    Tian, Bei-Min; Wu, Rui-Xin; Bi, Chun-Sheng; He, Xiao-Tao; Yin, Yuan; Chen, Fa-Ming

    2018-04-01

    The use of stem cell-derived sheets has become increasingly common in a wide variety of biomedical applications. Although substantial evidence has demonstrated that human platelet lysate (PL) can be used for therapeutic cell expansion, either as a substitute for or as a supplement to xenogeneic fetal bovine serum (FBS), its impact on cell sheet production remains largely unexplored. In this study, we manufactured periodontal ligament stem cell (PDLSC) sheets in vitro by incubating PDLSCs in sheet-induction media supplemented with various ratios of PL and FBS, i.e. 10% PL without FBS, 7.5% PL + 2.5% FBS, 5% PL + 5% FBS, 2.5% PL + 7.5% FBS or 10% FBS without PL. Cultures with the addition of all the designed supplements led to successful cell sheet production. In addition, all the resultant cellular materials exhibited similar expression profiles of matrix-related genes and proteins, such as collagen I, fibronectin and integrin β1. Interestingly, the cell components within sheets generated by media containing both PL and FBS exhibited improved osteogenic potential. Following in vivo transplantation, all sheets supported significant new bone formation. Our data suggest that robust PDLSC sheets can be produced by applying PL as either an alternative or an adjuvant to FBS. Further examination of the relevant influences of human PL that benefit cell behaviour and matrix production will pave the way towards optimized and standardized conditions for cell sheet production. Copyright © 2017 John Wiley & Sons, Ltd.

  3. Data-adaptive Robust Optimization Method for the Economic Dispatch of Active Distribution Networks

    DEFF Research Database (Denmark)

    Zhang, Yipu; Ai, Xiaomeng; Fang, Jiakun

    2018-01-01

    Due to the restricted mathematical description of the uncertainty set, the current two-stage robust optimization is usually over-conservative which has drawn concerns from the power system operators. This paper proposes a novel data-adaptive robust optimization method for the economic dispatch...... of active distribution network with renewables. The scenario-generation method and the two-stage robust optimization are combined in the proposed method. To reduce the conservativeness, a few extreme scenarios selected from the historical data are used to replace the conventional uncertainty set....... The proposed extreme-scenario selection algorithm takes advantage of considering the correlations and can be adaptive to different historical data sets. A theoretical proof is given that the constraints will be satisfied under all the possible scenarios if they hold in the selected extreme scenarios, which...

  4. Satellite formation flying relative dynamics, formation design, fuel optimal maneuvers and formation maintenance

    CERN Document Server

    Wang, Danwei; Poh, Eng Kee

    2017-01-01

    This book systematically describes the concepts and principles for multi-satellite relative motion, passive and near passive formation designs, trajectory planning and control for fuel optimal formation maneuvers, and formation flying maintenance control design. As such, it provides a sound foundation for researchers and engineers in this field to develop further theories and pursue their implementations. Though satellite formation flying is widely considered to be a major advance in space technology, there are few systematic treatments of the topic in the literature. Addressing that gap, the book offers a valuable resource for academics, researchers, postgraduate students and practitioners in the field of satellite science and engineering.

  5. Robust D-optimal designs under correlated error, applicable invariantly for some lifetime distributions

    International Nuclear Information System (INIS)

    Das, Rabindra Nath; Kim, Jinseog; Park, Jeong-Soo

    2015-01-01

    In quality engineering, the most commonly used lifetime distributions are log-normal, exponential, gamma and Weibull. Experimental designs are useful for predicting the optimal operating conditions of the process in lifetime improvement experiments. In the present article, invariant robust first-order D-optimal designs are derived for correlated lifetime responses having the above four distributions. Robust designs are developed for some correlated error structures. It is shown that robust first-order D-optimal designs for these lifetime distributions are always robust rotatable but the converse is not true. Moreover, it is observed that these designs depend on the respective error covariance structure but are invariant to the above four lifetime distributions. This article generalizes the results of Das and Lin [7] for the above four lifetime distributions with general (intra-class, inter-class, compound symmetry, and tri-diagonal) correlated error structures. - Highlights: • This paper presents invariant robust first-order D-optimal designs under correlated lifetime responses. • The results of Das and Lin [7] are extended for the four lifetime (log-normal, exponential, gamma and Weibull) distributions. • This paper also generalizes the results of Das and Lin [7] to more general correlated error structures
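
    As a small numerical illustration of the D-criterion under correlated errors, the sketch below evaluates det(X' V^-1 X) for two candidate two-factor first-order designs under an intra-class (compound symmetry) covariance; the designs and correlation values are illustrative, not taken from the article.

```python
import numpy as np

def d_criterion(X, rho):
    """D-optimality value det(X' V^-1 X) under intra-class correlation rho."""
    n = X.shape[0]
    V = (1 - rho) * np.eye(n) + rho * np.ones((n, n))   # compound-symmetry covariance
    return np.linalg.det(X.T @ np.linalg.solve(V, X))

# Two candidate first-order designs in two factors (intercept column included)
full_factorial = np.array([[1, -1, -1], [1, -1, 1], [1, 1, -1], [1, 1, 1]], float)
one_at_a_time = np.array([[1, -1, -1], [1, 1, -1], [1, -1, 1], [1, -1, -1]], float)

for rho in (0.0, 0.3, 0.6):
    print(rho, round(d_criterion(full_factorial, rho), 3),
          round(d_criterion(one_at_a_time, rho), 3))
```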

  6. Dynamic excitatory and inhibitory gain modulation can produce flexible, robust and optimal decision-making.

    Directory of Open Access Journals (Sweden)

    Ritwik K Niyogi

    Full Text Available Behavioural and neurophysiological studies in primates have increasingly shown the involvement of urgency signals during the temporal integration of sensory evidence in perceptual decision-making. Neuronal correlates of such signals have been found in the parietal cortex, and in separate studies, demonstrated attention-induced gain modulation of both excitatory and inhibitory neurons. Although previous computational models of decision-making have incorporated gain modulation, their abstract forms do not permit an understanding of the contribution of inhibitory gain modulation. Thus, the effects of co-modulating both excitatory and inhibitory neuronal gains on decision-making dynamics and behavioural performance remain unclear. In this work, we incorporate time-dependent co-modulation of the gains of both excitatory and inhibitory neurons into our previous biologically based decision circuit model. We base our computational study in the context of two classic motion-discrimination tasks performed in animals. Our model shows that by simultaneously increasing the gains of both excitatory and inhibitory neurons, a variety of the observed dynamic neuronal firing activities can be replicated. In particular, the model can exhibit winner-take-all decision-making behaviour with higher firing rates and within a significantly more robust model parameter range. It also exhibits short-tailed reaction time distributions even when operating near a dynamical bifurcation point. The model further shows that neuronal gain modulation can compensate for weaker recurrent excitation in a decision neural circuit, and support decision formation and storage. Higher neuronal gain is also suggested in the more cognitively demanding reaction time than in the fixed delay version of the task. Using the exact temporal delays from the animal experiments, fast recruitment of gain co-modulation is shown to maximize reward rate, with a timescale that is surprisingly near the

  7. A robust optimization model for agile and build-to-order supply chain planning under uncertainties

    DEFF Research Database (Denmark)

    Lalmazloumian, Morteza; Wong, Kuan Yew; Govindan, Kannan

    2016-01-01

    Supply chain planning as one of the most important processes within the supply chain management concept, has a great impact on firms' success or failure. This paper considers a supply chain planning problem of an agile manufacturing company operating in a build-to-order environment under various....... The formulation is a robust optimization model with the objective of minimizing the expected total supply chain cost while maintaining customer service level. The developed multi-product, multi-period, multi-echelon robust mixed-integer linear programming model is then solved using the CPLEX optimization studio...

  8. Using spatial information about recurrence risk for robust optimization of dose-painting prescription functions

    International Nuclear Information System (INIS)

    Bender, Edward T.

    2012-01-01

    Purpose: To develop a robust method for deriving dose-painting prescription functions using spatial information about the risk for disease recurrence. Methods: Spatial distributions of radiobiological model parameters are derived from distributions of recurrence risk after uniform irradiation. These model parameters are then used to derive optimal dose-painting prescription functions given a constant mean biologically effective dose. Results: An estimate for the optimal dose distribution can be derived based on spatial information about recurrence risk. Dose painting based on imaging markers that are moderately or poorly correlated with recurrence risk are predicted to potentially result in inferior disease control when compared the same mean biologically effective dose delivered uniformly. A robust optimization approach may partially mitigate this issue. Conclusions: The methods described here can be used to derive an estimate for a robust, patient-specific prescription function for use in dose painting. Two approximate scaling relationships were observed: First, the optimal choice for the maximum dose differential when using either a linear or two-compartment prescription function is proportional to R, where R is the Pearson correlation coefficient between a given imaging marker and recurrence risk after uniform irradiation. Second, the predicted maximum possible gain in tumor control probability for any robust optimization technique is nearly proportional to the square of R.

  9. A robust optimization model for distribution and evacuation in the disaster response phase

    Science.gov (United States)

    Fereiduni, Meysam; Shahanaghi, Kamran

    2017-03-01

    Natural disasters, such as earthquakes, affect thousands of people and can cause enormous financial loss. Therefore, an efficient response immediately following a natural disaster is vital to minimize the aforementioned negative effects. This research paper presents a network design model for humanitarian logistics which assists in location and allocation decisions for multiple disaster periods. First, a single-objective optimization model is presented that addresses the response phase of disaster management. This model helps the decision makers to make optimal choices with regard to location, allocation, and evacuation simultaneously. The proposed model also considers emergency tents as temporary medical centers. To cope with the uncertainty and dynamic nature of disasters and their consequences, our multi-period robust model considers the values of critical input data in a set of various scenarios. Second, because of probable disruption in the distribution infrastructure (such as bridges), Monte Carlo simulation is used for generating related random numbers and different scenarios; the p-robust approach is utilized to formulate the new network. The p-robust approach can predict possible damage along pathways and among relief bases. We render a case study of our robust optimization approach for Tehran's plausible earthquake in region 1. Sensitivity analysis experiments are proposed to explore the effects of various problem parameters. These experiments give managerial insights and can guide decision makers under a variety of conditions. Then, the performances of the "robust optimization" approach and the "p-robust optimization" approach are evaluated. Intriguing results and practical insights are demonstrated by our analysis of this comparison.

  10. Design optimization of a robust sleeve antenna for hepatic microwave ablation

    International Nuclear Information System (INIS)

    Prakash, Punit; Webster, John G; Deng Geng; Converse, Mark C; Mahvi, David M; Ferris, Michael C

    2008-01-01

    We describe the application of a Bayesian variable-number sample-path (VNSP) optimization algorithm to yield a robust design for a floating sleeve antenna for hepatic microwave ablation. Finite element models are used to generate the electromagnetic (EM) field and thermal distribution in the liver given a particular design. Dielectric properties of the tissue are assumed to vary within ±10% of the average properties to simulate the variation among individuals. The Bayesian VNSP algorithm yields an optimal design that is a 14.3% improvement over the original design and is more robust in terms of lesion size, shape and efficiency. Moreover, the Bayesian VNSP algorithm finds an optimal solution while saving 68.2% of the simulation evaluations compared to the standard sample-path optimization method

  11. Robust optimization of psychotropic drug mixture separation in hydrophilic interaction liquid chromatography.

    Science.gov (United States)

    Rakić, Tijana; Jovanović, Marko; Dumić, Aleksandra; Pekić, Marina; Ribić, Sanja; Stojanović, Biljana Jancić

    2013-01-01

    This paper presents the multiobjective optimization of a complex mixture separation in hydrophilic interaction liquid chromatography (HILIC). The selected model mixture consisted of five psychotropic drugs: clozapine, thioridazine, sulpiride, pheniramine and lamotrigine. Three factors related to the mobile phase composition (acetonitrile content, pH of the water phase and concentration of ammonium acetate) were optimized in order to achieve the following goals: maximal separation quality, minimal total analysis duration and robustness of the optimum. The consideration of robustness in the early phases of method development provides reliable methods with a low risk of failure in the validation phase. The simultaneous optimization of all goals was achieved by a multiple threshold approach combined with a grid point search. The identified optimal separation conditions (acetonitrile content 83%, pH of the water phase 3.5 and ammonium acetate content in the water phase 14 mM) were experimentally verified.

  12. Dynamic optimization and robust explicit model predictive control of hydrogen storage tank

    KAUST Repository

    Panos, C.

    2010-09-01

    We present a general framework for the optimal design and control of a metal-hydride bed under hydrogen desorption operation. The framework features: (i) a detailed two-dimensional dynamic process model, (ii) a design and operational dynamic optimization step, and (iii) an explicit/multi-parametric model predictive controller design step. For the controller design, a reduced-order approximate model is obtained, based on which nominal and robust multi-parametric controllers are designed. © 2010 Elsevier Ltd.

  13. Robust design of broadband EUV multilayer beam splitters based on particle swarm optimization

    International Nuclear Information System (INIS)

    Jiang, Hui; Michette, Alan G.

    2013-01-01

    A robust design approach for broadband EUV multilayer beam splitters is introduced that decreases the influence of layer thickness errors on optical performance. Such beam splitters can be used in interferometry to determine the quality of EUVL masks by comparison with a reference multilayer. In the optimization, particle swarm techniques were used for the first time in such designs. Compared to conventional genetic algorithms, particle swarm optimization has stronger ergodicity, simpler processing and faster convergence.
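
    As a hedged illustration of robust design by particle swarm optimization, the sketch below runs a generic PSO over a handful of layer thicknesses and scores each particle by its worst-case merit under random thickness perturbations. The merit function is a toy placeholder, not an EUV multilayer reflectivity calculation, and the bounds, swarm parameters and perturbation level are assumptions.

```python
# Generic PSO minimizing a robust (worst-case over thickness errors) figure of
# merit for a 6-parameter layer-thickness design. The merit function is a toy
# placeholder, not an actual EUV multilayer reflectivity computation.
import numpy as np

rng = np.random.default_rng(1)
dim, n_particles, n_iter = 6, 30, 200
lo, hi = 2.0, 6.0                          # layer thickness bounds (nm, notional)

def merit(d):
    return np.sum((d - 4.1) ** 2) + 0.3 * np.sin(5 * d).sum()

def robust_merit(d, n_draws=20, sigma=0.05):
    # worst case over random thickness perturbations (e.g. deposition errors)
    perturbed = d + sigma * rng.standard_normal((n_draws, d.size))
    return max(merit(p) for p in perturbed)

x = rng.uniform(lo, hi, (n_particles, dim))
v = np.zeros_like(x)
pbest, pbest_f = x.copy(), np.array([robust_merit(p) for p in x])
gbest = pbest[np.argmin(pbest_f)].copy()

w, c1, c2 = 0.7, 1.5, 1.5                  # inertia and acceleration coefficients
for _ in range(n_iter):
    r1, r2 = rng.random((2, n_particles, dim))
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    x = np.clip(x + v, lo, hi)
    f = np.array([robust_merit(p) for p in x])
    improved = f < pbest_f
    pbest[improved], pbest_f[improved] = x[improved], f[improved]
    gbest = pbest[np.argmin(pbest_f)].copy()

print("robust design:", np.round(gbest, 3), "worst-case merit:", round(float(pbest_f.min()), 3))
```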

  14. Commitment and dispatch of heat and power units via affinely adjustable robust optimization

    DEFF Research Database (Denmark)

    Zugno, Marco; Morales González, Juan Miguel; Madsen, Henrik

    2016-01-01

    compromising computational tractability. We perform an extensive numerical study based on data from the Copenhagen area in Denmark, which highlights important features of the proposed model. Firstly, we illustrate commitment and dispatch choices that increase conservativeness in the robust optimization...... and conservativeness of the solution. Finally, we perform a thorough comparison with competing models based on deterministic optimization and stochastic programming. (C) 2016 Elsevier Ltd. All rights reserved....

  15. Dynamic optimization and robust explicit model predictive control of hydrogen storage tank

    KAUST Repository

    Panos, C.; Kouramas, K.I.; Georgiadis, M.C.; Pistikopoulos, E.N.

    2010-01-01

    We present a general framework for the optimal design and control of a metal-hydride bed under hydrogen desorption operation. The framework features: (i) a detailed two-dimensional dynamic process model, (ii) a design and operational dynamic optimization step, and (iii) an explicit/multi-parametric model predictive controller design step. For the controller design, a reduced-order approximate model is obtained, based on which nominal and robust multi-parametric controllers are designed. © 2010 Elsevier Ltd.

  16. Machine learning meliorates computing and robustness in discrete combinatorial optimization problems.

    Directory of Open Access Journals (Sweden)

    Fushing Hsieh

    2016-11-01

    Full Text Available Discrete combinatorial optimization problems in the real world are typically defined via an ensemble of potentially high-dimensional measurements pertaining to all subjects of a system under study. We point out that such a data ensemble in fact embeds the system's information content, which is not directly used in defining the combinatorial optimization problem. Can machine learning algorithms extract such information content and make combinatorial optimization tasks more efficient? Would such algorithmic computations bring new perspectives into this classic topic of Applied Mathematics and Theoretical Computer Science? We show that the answers to both questions are positive. One key reason is permutation invariance: the data ensemble of subjects' measurement vectors is permutation invariant when it is represented through a subject-vs-measurement matrix. An unsupervised machine learning algorithm, called Data Mechanics (DM), is applied to find optimal permutations on the row and column axes such that the permuted matrix reveals coupled deterministic and stochastic structures as the system's information content. The deterministic structures are shown to facilitate a geometry-based divide-and-conquer scheme that aids the optimizing task, while the stochastic structures are used to generate an ensemble of mimicries retaining the deterministic structures, which then reveal the robustness of the original optimal solution. Two simulated systems, the Assignment problem and the Traveling Salesman problem, are considered. Beyond demonstrating computational advantages and intrinsic robustness in the two systems, we propose new robust optimal solutions. We believe such robust versions of optimal solutions are potentially more realistic and practical in real-world settings.

  17. A Hybrid Interval-Robust Optimization Model for Water Quality Management.

    Science.gov (United States)

    Xu, Jieyu; Li, Yongping; Huang, Guohe

    2013-05-01

    In water quality management problems, uncertainties may exist in many system components and pollution-related processes (i.e., random nature of hydrodynamic conditions, variability in physicochemical processes, dynamic interactions between pollutant loading and receiving water bodies, and indeterminacy of available water and treated wastewater). These complexities lead to difficulties in formulating and solving the resulting nonlinear optimization problems. In this study, a hybrid interval-robust optimization (HIRO) method was developed through coupling stochastic robust optimization and interval linear programming. HIRO can effectively reflect the complex system features under uncertainty, where implications of water quality/quantity restrictions for achieving regional economic development objectives are studied. By delimiting the uncertain decision space through dimensional enlargement of the original chemical oxygen demand (COD) discharge constraints, HIRO enhances the robustness of the optimization processes and resulting solutions. This method was applied to planning of industry development in association with river-water pollution concern in New Binhai District of Tianjin, China. Results demonstrated that the proposed optimization model can effectively communicate uncertainties into the optimization process and generate a spectrum of potential inexact solutions supporting local decision makers in managing benefit-effective water quality management schemes. HIRO is helpful for analysis of policy scenarios related to different levels of economic penalties, while also providing insight into the tradeoff between system benefits and environmental requirements.

  18. A Hybrid Interval–Robust Optimization Model for Water Quality Management

    Science.gov (United States)

    Xu, Jieyu; Li, Yongping; Huang, Guohe

    2013-01-01

    Abstract In water quality management problems, uncertainties may exist in many system components and pollution-related processes (i.e., random nature of hydrodynamic conditions, variability in physicochemical processes, dynamic interactions between pollutant loading and receiving water bodies, and indeterminacy of available water and treated wastewater). These complexities lead to difficulties in formulating and solving the resulting nonlinear optimization problems. In this study, a hybrid interval–robust optimization (HIRO) method was developed through coupling stochastic robust optimization and interval linear programming. HIRO can effectively reflect the complex system features under uncertainty, where implications of water quality/quantity restrictions for achieving regional economic development objectives are studied. By delimiting the uncertain decision space through dimensional enlargement of the original chemical oxygen demand (COD) discharge constraints, HIRO enhances the robustness of the optimization processes and resulting solutions. This method was applied to planning of industry development in association with river-water pollution concern in New Binhai District of Tianjin, China. Results demonstrated that the proposed optimization model can effectively communicate uncertainties into the optimization process and generate a spectrum of potential inexact solutions supporting local decision makers in managing benefit-effective water quality management schemes. HIRO is helpful for analysis of policy scenarios related to different levels of economic penalties, while also providing insight into the tradeoff between system benefits and environmental requirements. PMID:23922495

  19. A novel non-probabilistic approach using interval analysis for robust design optimization

    International Nuclear Information System (INIS)

    Sun, Wei; Dong, Rongmei; Xu, Huanwei

    2009-01-01

    A technique for formulating the objective and constraint functions under uncertainty plays a crucial role in robust design optimization. This paper presents the first application of interval methods for reformulating the robust optimization problem. Based on interval mathematics, the original real-valued objective and constraint functions are replaced with interval-valued functions, which directly represent the upper and lower bounds of the new functions under uncertainty. The single objective function is converted into two objective functions, minimizing the mean value and the variation, and the constraint functions are reformulated with an acceptable robustness level, resulting in a bi-level mathematical model. Compared with other methods, this method is efficient and does not require a presumed probability distribution of the uncertain factors, nor gradient or continuity information for the constraints. Two numerical examples illustrate the validity and feasibility of the presented method.
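
    One crude way to visualize the interval reformulation is to estimate, for each candidate design, the lower and upper bounds of the objective over the uncertainty box and then trade off the interval midpoint (mean value) against its radius (variation). The sketch below does this by corner enumeration, which is exact only for functions monotone in each uncertain parameter; the objective function and all bounds are invented for illustration.

```python
# Sketch of interval-valued reformulation: for each design x, the uncertain
# parameters p live in a box, and the objective is replaced by its lower/upper
# bounds over that box. Here the bounds are estimated by enumerating the box
# corners, which is exact only for functions monotone in each parameter.
import itertools
import numpy as np
from scipy.optimize import minimize

p_lo, p_hi = np.array([0.9, 1.8]), np.array([1.1, 2.2])   # interval uncertainty
corners = np.array(list(itertools.product(*zip(p_lo, p_hi))))

def f(x, p):
    return p[0] * (x[0] - 2.0) ** 2 + p[1] * (x[1] + 1.0) ** 2 + p[0] * p[1] * x[0] * x[1]

def interval(x):
    vals = np.array([f(x, p) for p in corners])
    return vals.min(), vals.max()

def scalarized(x, w=0.5):
    lo_, hi_ = interval(x)
    mid, rad = 0.5 * (lo_ + hi_), 0.5 * (hi_ - lo_)        # mean value and variation
    return w * mid + (1 - w) * rad                          # weighted bi-objective

res = minimize(scalarized, x0=np.zeros(2), method="Nelder-Mead")
print("robust design:", np.round(res.x, 3), "objective interval:", np.round(interval(res.x), 3))
```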

  20. Robust Fault Detection for a Class of Uncertain Nonlinear Systems Based on Multiobjective Optimization

    Directory of Open Access Journals (Sweden)

    Bingyong Yan

    2015-01-01

    Full Text Available A robust fault detection scheme for a class of nonlinear systems with uncertainty is proposed. The proposed approach utilizes robust control theory and a parameter optimization algorithm to design the gain matrix of the fault tracking approximator (FTA) for fault detection. The gain matrix of the FTA is designed to minimize the effects of system uncertainty on residual signals while maximizing the effects of system faults on residual signals. The design of the gain matrix of the FTA takes into account the robustness of residual signals to system uncertainty and the sensitivity of residual signals to system faults simultaneously, which leads to a multiobjective optimization problem. Then, the detectability of system faults is rigorously analyzed by investigating the threshold of residual signals. Finally, simulation results are provided to show the validity and applicability of the proposed approach.

  1. An Effective, Robust And Parallel Implementation Of An Interior Point Algorithm For Limit State Optimization

    DEFF Research Database (Denmark)

    Dollerup, Niels; Jepsen, Michael S.; Damkilde, Lars

    2013-01-01

    The article describes a robust and effective implementation of the interior point optimization algorithm. The adopted method includes a precalculation step, which reduces the number of variables by fulfilling the equilibrium equations a priori. This work presents an improved implementation of the ...

  2. An effective, robust and parallel implementation of an interior point algorithm for limit state optimization

    DEFF Research Database (Denmark)

    Dollerup, Niels; Jepsen, Michael S.; Frier, Christian

    2014-01-01

    A robust and effective finite element based implementation of lower bound limit state analysis applying an interior point formulation is presented in this paper. The lower bound formulation results in a convex optimization problem consisting of a number of linear constraints from the equilibrium...

  3. Optimizing clinical performance and geometrical robustness of a new electrode device for intracranial tumor electroporation

    DEFF Research Database (Denmark)

    Mahmood, Faisal; Gehl, Julie

    2011-01-01

    and genes to intracranial tumors in humans, and demonstrate a method to optimize the design (i.e. geometry) of the electrode device prototype to improve both clinical performance and geometrical tolerance (robustness). We have employed a semiempirical objective function based on constraints similar to those...... sensitive to random geometrical deviations. The method is readily applicable to other electrode configurations....

  4. Towards a Robuster Interpretive Parsing: learning from overt forms in Optimality Theory

    NARCIS (Netherlands)

    Biró, T.

    2013-01-01

    The input data to grammar learning algorithms often consist of overt forms that do not contain full structural descriptions. This lack of information may contribute to the failure of learning. Past work on Optimality Theory introduced Robust Interpretive Parsing (RIP) as a partial solution to this

  5. Avoiding Optimal Mean ℓ2,1-Norm Maximization-Based Robust PCA for Reconstruction.

    Science.gov (United States)

    Luo, Minnan; Nie, Feiping; Chang, Xiaojun; Yang, Yi; Hauptmann, Alexander G; Zheng, Qinghua

    2017-04-01

    Robust principal component analysis (PCA) is one of the most important dimension-reduction techniques for handling high-dimensional data with outliers. However, most existing robust PCA methods presuppose that the mean of the data is zero and incorrectly utilize the average of the data as the optimal mean of robust PCA. In fact, this assumption holds only for traditional PCA based on the squared ℓ2-norm. In this letter, we equivalently reformulate the objective of conventional PCA and learn the optimal projection directions by maximizing the sum of projected differences between each pair of instances based on the ℓ2,1-norm. The proposed method is robust to outliers and also invariant to rotation. More important, the reformulated objective not only automatically avoids the calculation of the optimal mean and makes the assumption of centered data unnecessary, but also theoretically connects to the minimization of reconstruction error. To solve the proposed nonsmooth problem, we exploit an efficient optimization algorithm to soften the contributions from outliers by reweighting each data point iteratively. We theoretically analyze the convergence and computational complexity of the proposed algorithm. Extensive experimental results on several benchmark data sets illustrate the effectiveness and superiority of the proposed method.
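
    The pairwise ℓ2,1 objective and the reweighting idea can be illustrated with a generic iteratively reweighted scheme: fix per-pair weights from the current projected norms, then update the projection from the top eigenvectors of the weighted scatter of pairwise differences. This is only a plausible sketch of that family of algorithms, not the letter's exact procedure, and the data set is synthetic.

```python
# Illustrative iteratively reweighted scheme for maximizing the pairwise
# l2,1-type objective  sum_{i<j} ||W^T (x_i - x_j)||_2  over orthonormal W.
# This is a generic reweighting sketch, not the algorithm from the letter.
import numpy as np

rng = np.random.default_rng(2)
n, d, k = 60, 10, 2
X = rng.standard_normal((n, d)) @ np.diag(np.linspace(3, 0.3, d))
X[:5] += 20 * rng.standard_normal((5, d))             # a few gross outliers

pairs = [(i, j) for i in range(n) for j in range(i + 1, n)]
D = np.array([X[i] - X[j] for i, j in pairs])          # pairwise differences

W = np.linalg.qr(rng.standard_normal((d, k)))[0]       # random orthonormal start
for _ in range(30):
    proj_norm = np.linalg.norm(D @ W, axis=1)
    s = 1.0 / (2.0 * proj_norm + 1e-8)                 # per-pair reweighting
    M = (D * s[:, None]).T @ D                         # weighted scatter of differences
    eigval, eigvec = np.linalg.eigh(M)
    W = eigvec[:, -k:]                                 # top-k eigenvectors maximize the trace

print("pairwise l2,1 objective:", float(np.linalg.norm(D @ W, axis=1).sum()))
```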

  6. Robust optimization of robotic pick and place operations for deformable objects through simulation

    DEFF Research Database (Denmark)

    Bo Jorgensen, Troels; Debrabant, Kristian; Kruger, Norbert

    2016-01-01

    for the task. The solutions are parameterized in terms of the robot motion and the gripper configuration, and after each simulation various objective scores are determined and combined. This enables the use of various optimization strategies. Based on visual inspection of the most robust solution found...

  7. Approximating the Pareto Set of Multiobjective Linear Programs via Robust Optimization

    NARCIS (Netherlands)

    Gorissen, B.L.; den Hertog, D.

    2012-01-01

    Abstract: The Pareto set of a multiobjective optimization problem consists of the solutions for which one or more objectives can not be improved without deteriorating one or more other objectives. We consider problems with linear objectives and linear constraints and use Adjustable Robust

  8. Optimal and Robust Switching Control Strategies : Theory, and Applications in Traffic Management

    NARCIS (Netherlands)

    Hajiahmadi, M.

    2015-01-01

    Macroscopic modeling, predictive and robust control and route guidance for large-scale freeway and urban traffic networks are the main focus of this thesis. In order to increase the efficiency of our control strategies, we propose several mathematical and optimization techniques. Moreover, in the

  9. Quantifying the robustness of optimal reservoir operation for the Xinanjiang-Fuchunjiang Reservoir Cascade

    NARCIS (Netherlands)

    Vonk, E.; Xu, YuePing; Booij, Martijn J.; Augustijn, Dionysius C.M.

    2016-01-01

    In this research we investigate the robustness of the common implicit stochastic optimization (ISO) method for dam reoperation. As a case study, we focus on the Xinanjiang-Fuchunjiang reservoir cascade in eastern China, for which adapted operating rules were proposed as a means to reduce the impact

  10. Optimization Versus Robustness in Simulation : A Practical Methodology, With a Production-Management Case-Study

    NARCIS (Netherlands)

    Kleijnen, J.P.C.; Gaury, E.G.A.

    2001-01-01

    Whereas Operations Research has always paid much attention to optimization, practitioners judge the robustness of the 'optimum' solution to be of greater importance. Therefore this paper proposes a practical methodology that is a stagewise combination of the following four proven techniques: (1)

  11. SU-E-T-07: 4DCT Robust Optimization for Esophageal Cancer Using Intensity Modulated Proton Therapy

    Energy Technology Data Exchange (ETDEWEB)

    Liao, L [Proton Therapy Center, UT MD Anderson Cancer Center, Houston, TX (United States); Department of Industrial Engineering, University of Houston, Houston, TX (United States); Yu, J; Zhu, X; Li, H; Zhang, X [Proton Therapy Center, UT MD Anderson Cancer Center, Houston, TX (United States); Li, Y [Proton Therapy Center, UT MD Anderson Cancer Center, Houston, TX (United States); Varian Medical Systems, Houston, TX (United States); Lim, G [Department of Industrial Engineering, University of Houston, Houston, TX (United States)

    2015-06-15

    Purpose: To develop a 4DCT robust optimization method to reduce the dosimetric impact of respiratory motion in intensity modulated proton therapy (IMPT) for esophageal cancer. Methods: Four esophageal cancer patients were selected for this study. The different phases of CT from a set of 4DCT were incorporated into the worst-case dose distribution robust optimization algorithm. 4DCT robust treatment plans were designed and compared with conventional non-robust plans. The resulting doses were calculated on the average and maximum inhale/exhale phases of the 4DCT. Dose volume histogram (DVH) band graphics and ΔD95%, ΔD98%, ΔD5%, ΔD2% of the CTV between different phases were used to evaluate the robustness of the plans. Results: Compared to IMPT plans optimized using conventional methods, the 4DCT robust IMPT plans achieve the same quality in nominal cases while yielding better robustness to breathing motion. The mean ΔD95%, ΔD98%, ΔD5% and ΔD2% of the CTV are 6%, 3.2%, 0.9% and 1% for the robustly optimized plans vs. 16.2%, 11.8%, 1.6% and 3.3% for the conventional non-robust plans. Conclusion: A 4DCT robust optimization method was proposed for esophageal cancer using IMPT. We demonstrate that 4DCT robust optimization can mitigate the dose deviation caused by the diaphragm motion.
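
    The worst-case idea behind 4DCT robust optimization can be caricatured as a minimax problem over breathing phases: optimize the spot weights against whichever phase currently yields the largest dose penalty. The sketch below uses random stand-in dose-influence matrices and a plain quadratic penalty; it is not a clinical dose engine and not the authors' algorithm.

```python
# Toy worst-case (minimax) optimization over breathing phases: spot weights w
# are optimized against the worst phase-specific dose-influence matrix. The
# matrices and prescriptions below are random placeholders, not clinical data.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(3)
n_voxels, n_spots, n_phases = 40, 25, 4
D_phases = rng.uniform(0.0, 1.0, (n_phases, n_voxels, n_spots))   # dose-influence per phase
target = np.ones(n_voxels)                                        # prescribed dose (a.u.)

def worst_case_objective(w):
    w = np.maximum(w, 0.0)                       # spot weights must be non-negative
    penalties = [np.sum((D @ w - target) ** 2) for D in D_phases]
    return max(penalties)                        # penalize the worst phase

w0 = np.full(n_spots, 0.05)
res = minimize(worst_case_objective, w0, method="Powell")
w_opt = np.maximum(res.x, 0.0)
print("per-phase RMS dose error:",
      [round(float(np.sqrt(np.mean((D @ w_opt - target) ** 2))), 3) for D in D_phases])
```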

  12. On robust control of uncertain chaotic systems: a sliding-mode synthesis via chaotic optimization

    International Nuclear Information System (INIS)

    Lu Zhao; Shieh Leangsan; Chen GuanRong

    2003-01-01

    This paper presents a novel Lyapunov-based control approach which utilizes a Lyapunov function of the nominal plant for robust tracking control of general multi-input uncertain nonlinear systems. The difficulty of constructing a control Lyapunov function is alleviated by means of predefining an optimal sliding mode. The conventional schemes for constructing sliding modes of nonlinear systems stipulate that the system of interest is canonical-transformable or feedback-linearizable. An innovative approach that exploits a chaotic optimizing algorithm is developed thereby obtaining the optimal sliding manifold for the control purpose. Simulations on the uncertain chaotic Chen's system illustrate the effectiveness of the proposed approach

  13. A mean–variance objective for robust production optimization in uncertain geological scenarios

    DEFF Research Database (Denmark)

    Capolei, Andrea; Suwartadi, Eka; Foss, Bjarne

    2014-01-01

    directly. In the mean–variance bi-criterion objective function risk appears directly, it also considers an ensemble of reservoir models, and has robust optimization as a special extreme case. The mean–variance objective is common for portfolio optimization problems in finance. The Markowitz portfolio...... optimization problem is the original and simplest example of a mean–variance criterion for mitigating risk. Risk is mitigated in oil production by including both the expected NPV (mean of NPV) and the risk (variance of NPV) for the ensemble of possible reservoir models. With the inclusion of the risk...
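
    In its simplest form, the mean–variance criterion described above amounts to maximizing mean(NPV) - lambda * var(NPV) over an ensemble of model realizations. The toy example below uses an invented production response per realization purely to show how the bi-criterion objective is assembled; the ensemble, the controls and the risk weight are all assumptions.

```python
# Minimal mean-variance objective over an ensemble of model realizations:
# J(u) = mean(NPV(u)) - lambda * var(NPV(u)).  The "reservoir" below is a toy
# response per realization, purely to illustrate the criterion.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(4)
n_models, n_controls, lam = 50, 6, 0.5
theta = rng.normal(1.0, 0.3, (n_models, n_controls))     # uncertain model parameters

def npv(u, th):
    return float(th @ u - 0.5 * np.sum(u ** 2))          # toy production response

def neg_mean_variance(u):
    vals = np.array([npv(u, th) for th in theta])        # NPV across the ensemble
    return -(vals.mean() - lam * vals.var())             # maximize mean - lambda*variance

res = minimize(neg_mean_variance, x0=np.zeros(n_controls))
vals = np.array([npv(res.x, th) for th in theta])
print("controls:", np.round(res.x, 2),
      "mean NPV:", round(float(vals.mean()), 2), "std:", round(float(vals.std()), 2))
```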

  14. Multiobjective Robust Design of the Double Wishbone Suspension System Based on Particle Swarm Optimization

    Directory of Open Access Journals (Sweden)

    Xianfu Cheng

    2014-01-01

    Full Text Available The performance of the suspension system is one of the most important factors in the vehicle design. For the double wishbone suspension system, the conventional deterministic optimization does not consider any deviations of design parameters, so design sensitivity analysis and robust optimization design are proposed. In this study, the design parameters of the robust optimization are the positions of the key points, and the random factors are the uncertainties in manufacturing. A simplified model of the double wishbone suspension is established by software ADAMS. The sensitivity analysis is utilized to determine main design variables. Then, the simulation experiment is arranged and the Latin hypercube design is adopted to find the initial points. The Kriging model is employed for fitting the mean and variance of the quality characteristics according to the simulation results. Further, a particle swarm optimization method based on simple PSO is applied and the tradeoff between the mean and deviation of performance is made to solve the robust optimization problem of the double wishbone suspension system.

  15. Multiobjective Robust Design of the Double Wishbone Suspension System Based on Particle Swarm Optimization

    Science.gov (United States)

    Lin, Yuqun

    2014-01-01

    The performance of the suspension system is one of the most important factors in the vehicle design. For the double wishbone suspension system, the conventional deterministic optimization does not consider any deviations of design parameters, so design sensitivity analysis and robust optimization design are proposed. In this study, the design parameters of the robust optimization are the positions of the key points, and the random factors are the uncertainties in manufacturing. A simplified model of the double wishbone suspension is established by software ADAMS. The sensitivity analysis is utilized to determine main design variables. Then, the simulation experiment is arranged and the Latin hypercube design is adopted to find the initial points. The Kriging model is employed for fitting the mean and variance of the quality characteristics according to the simulation results. Further, a particle swarm optimization method based on simple PSO is applied and the tradeoff between the mean and deviation of performance is made to solve the robust optimization problem of the double wishbone suspension system. PMID:24683334

  16. Multiobjective robust design of the double wishbone suspension system based on particle swarm optimization.

    Science.gov (United States)

    Cheng, Xianfu; Lin, Yuqun

    2014-01-01

    The performance of the suspension system is one of the most important factors in the vehicle design. For the double wishbone suspension system, the conventional deterministic optimization does not consider any deviations of design parameters, so design sensitivity analysis and robust optimization design are proposed. In this study, the design parameters of the robust optimization are the positions of the key points, and the random factors are the uncertainties in manufacturing. A simplified model of the double wishbone suspension is established by software ADAMS. The sensitivity analysis is utilized to determine main design variables. Then, the simulation experiment is arranged and the Latin hypercube design is adopted to find the initial points. The Kriging model is employed for fitting the mean and variance of the quality characteristics according to the simulation results. Further, a particle swarm optimization method based on simple PSO is applied and the tradeoff between the mean and deviation of performance is made to solve the robust optimization problem of the double wishbone suspension system.
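
    The Latin hypercube / Kriging / mean-variance pipeline described in the abstracts above can be mimicked with off-the-shelf tools (scipy >= 1.7 for Latin hypercube sampling and scikit-learn for a Gaussian-process surrogate), as in the hedged sketch below. The response function is a toy stand-in for the ADAMS suspension model, and the noise level, weights and sample sizes are assumptions.

```python
# Surrogate-assisted robust design sketch: Latin hypercube sampling of design
# points, a Gaussian-process (Kriging-type) surrogate fitted to a toy response,
# and a simple mean/standard-deviation tradeoff evaluated under manufacturing
# noise. Requires scipy >= 1.7 and scikit-learn.
import numpy as np
from scipy.stats import qmc
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(5)

def response(x):                      # toy "wheel-alignment variation" style response
    return (x[..., 0] - 0.4) ** 2 + 2.0 * (x[..., 1] - 0.6) ** 2 + 0.1 * x[..., 0] * x[..., 1]

# 1) Latin hypercube design of experiments over two key-point coordinates in [0, 1]^2
X_doe = qmc.LatinHypercube(d=2, seed=5).random(n=40)
y_doe = response(X_doe)

# 2) Kriging-type surrogate of the simulation response
gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.3), alpha=1e-6).fit(X_doe, y_doe)

# 3) robust score: mean + weighted std of the surrogate prediction under
#    manufacturing perturbations of the design point
def robust_score(x, sigma=0.02, n_mc=200, w=2.0):
    samples = x + sigma * rng.standard_normal((n_mc, 2))
    pred = gp.predict(np.clip(samples, 0.0, 1.0))
    return pred.mean() + w * pred.std()

candidates = qmc.LatinHypercube(d=2, seed=6).random(n=500)
best = min(candidates, key=robust_score)
print("robust design point:", np.round(best, 3), "score:", round(float(robust_score(best)), 4))
```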

  17. Robust and Reliable Portfolio Optimization Formulation of a Chance Constrained Problem

    Directory of Open Access Journals (Sweden)

    Sengupta Raghu Nandan

    2017-02-01

    Full Text Available We solve a linear chance constrained portfolio optimization problem using the Robust Optimization (RO) method, wherein financial script/asset loss return distributions are considered as extreme valued. The objective function is a convex combination of the portfolio’s CVaR and the expected value of loss return, subject to a set of randomly perturbed chance constraints with specified probability values. The robust deterministic counterpart of the model takes the form of a Second Order Cone Programming (SOCP) problem. Results from extensive simulation runs show the efficacy of our proposed models, as they help the investor to (i) utilize extensive simulation studies to draw insights into the effect of randomness in the portfolio decision making process, (ii) incorporate different risk appetite scenarios to find the optimal solutions for the financial portfolio allocation problem and (iii) compare the risk and return profiles of the investments made in both deterministic as well as uncertain and highly volatile financial markets.
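
    The convex combination of CVaR and expected loss can be written down directly with the Rockafellar-Uryasev linearization, as in the sketch below (cvxpy, synthetic scenario returns). It shows only the nominal scenario problem; the extreme-value modelling and the SOCP robust counterpart developed in the paper are not reproduced.

```python
# Sketch of a convex combination of CVaR and expected loss using the
# Rockafellar-Uryasev linearization, solved with cvxpy on synthetic scenarios.
import cvxpy as cp
import numpy as np

rng = np.random.default_rng(6)
n_assets, n_scen, beta, lam = 8, 1000, 0.95, 0.7
R = rng.normal(0.001, 0.02, (n_scen, n_assets))      # scenario returns

w = cp.Variable(n_assets)                            # portfolio weights
t = cp.Variable()                                    # VaR auxiliary variable
losses = -R @ w                                      # per-scenario loss returns

cvar = t + cp.sum(cp.pos(losses - t)) / ((1 - beta) * n_scen)
expected_loss = cp.sum(losses) / n_scen
objective = cp.Minimize(lam * cvar + (1 - lam) * expected_loss)
constraints = [cp.sum(w) == 1, w >= 0]               # fully invested, long only

cp.Problem(objective, constraints).solve()
print("weights:", np.round(w.value, 3))
print("CVaR(95%):", round(float(cvar.value), 5))
```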

  18. Robust Inventory System Optimization Based on Simulation and Multiple Criteria Decision Making

    Directory of Open Access Journals (Sweden)

    Ahmad Mortazavi

    2014-01-01

    Full Text Available Inventory management in retail is a difficult and complex decision-making process involving conflicting criteria, and the existence of cyclic changes and trends in demand is inevitable in many industries. In this paper, simulation modeling is considered an efficient tool for modeling a retailer's multiproduct inventory system. For simulation model optimization, a novel multicriteria and robust surrogate model is designed based on a multiple attribute decision making (MADM) method, design of experiments (DOE), and principal component analysis (PCA). This approach, as the main contribution of this paper, provides a framework for robust multiple criteria decision making under uncertainty.

  19. Tuning rules for robust FOPID controllers based on multi-objective optimization with FOPDT models.

    Science.gov (United States)

    Sánchez, Helem Sabina; Padula, Fabrizio; Visioli, Antonio; Vilanova, Ramon

    2017-01-01

    In this paper a set of optimally balanced tuning rules for fractional-order proportional-integral-derivative controllers is proposed. The control problem of minimizing at once the integrated absolute error for both the set-point and the load disturbance responses is addressed. The control problem is stated as a multi-objective optimization problem in which a first-order-plus-dead-time process model is considered subject to a robustness constraint based on the maximum sensitivity. A set of Pareto optimal solutions is obtained for different normalized dead times, and then the optimal balance between the competing objectives is obtained by choosing the Nash solution among the Pareto-optimal ones. A curve fitting procedure has then been applied in order to generate suitable tuning rules. Several simulation results show the effectiveness of the proposed approach. Copyright © 2016. Published by Elsevier Ltd.
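
    Choosing the Nash solution from a Pareto front can be illustrated very compactly: taking the disagreement point as the worst value of each objective over the front, the Nash point maximizes the product of the improvements over that point. The front below is synthetic, not the FOPID tuning front of the paper.

```python
# Picking the Nash solution from a set of Pareto-optimal points: with the
# disagreement point taken as the worst value of each objective on the front,
# the Nash point maximizes the product of improvements over that point.
import numpy as np

# columns: IAE for set-point response, IAE for load-disturbance response (both minimized)
pareto = np.array([[1.00, 3.10], [1.15, 2.60], [1.35, 2.20],
                   [1.60, 1.90], [2.00, 1.70], [2.60, 1.60]])

disagreement = pareto.max(axis=0)                 # worst value of each objective
gains = disagreement - pareto                     # improvement of each point per objective
nash_index = int(np.argmax(np.prod(gains, axis=1)))
print("Nash-selected tuning point:", pareto[nash_index])
```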

  20. On the relation between flexibility analysis and robust optimization for linear systems

    KAUST Repository

    Zhang, Qi

    2016-03-05

    Flexibility analysis and robust optimization are two approaches to solving optimization problems under uncertainty that share some fundamental concepts, such as the use of polyhedral uncertainty sets and the worst-case approach to guarantee feasibility. The connection between these two approaches has not been sufficiently acknowledged and examined in the literature. In this context, the contributions of this work are fourfold: (1) a comparison between flexibility analysis and robust optimization from a historical perspective is presented; (2) for linear systems, new formulations for the three classical flexibility analysis problems—flexibility test, flexibility index, and design under uncertainty—based on duality theory and the affinely adjustable robust optimization (AARO) approach are proposed; (3) the AARO approach is shown to be generally more restrictive such that it may lead to overly conservative solutions; (4) numerical examples show the improved computational performance from the proposed formulations compared to the traditional flexibility analysis models. © 2016 American Institute of Chemical Engineers AIChE J, 62: 3109–3123, 2016

  1. SU-E-T-625: Robustness Evaluation and Robust Optimization of IMPT Plans Based on Per-Voxel Standard Deviation of Dose Distributions.

    Science.gov (United States)

    Liu, W; Mohan, R

    2012-06-01

    Proton dose distributions, IMPT in particular, are highly sensitive to setup and range uncertainties. We report a novel method, based on the per-voxel standard deviation (SD) of dose distributions, to evaluate the robustness of proton plans and to robustly optimize IMPT plans to render them less sensitive to uncertainties. For each optimization iteration, nine dose distributions are computed - the nominal one, and one each for ± setup uncertainties along the x, y and z axes and for ± range uncertainty. The SD of dose in each voxel is used to create an SD-volume histogram (SVH) for each structure. The SVH may be considered a quantitative representation of the robustness of the dose distribution. For optimization, the desired robustness may be specified in terms of an SD-volume (SV) constraint on the CTV and incorporated as a term in the objective function. Results of optimization with and without this constraint were compared in terms of plan optimality and robustness using the so-called 'worst case' dose distributions, which are obtained by assigning the lowest among the nine doses to each voxel in the clinical target volume (CTV) and the highest to normal tissue voxels outside the CTV. The SVH curve and the area under it for each structure were used as quantitative measures of robustness. The penalty parameter of the SV constraint may be varied to control the tradeoff between robustness and plan optimality. We applied these methods to one case each of H&N and lung. In both cases, we found that imposing the SV constraint improved plan robustness but at the cost of normal tissue sparing. SVH-based optimization and evaluation is an effective tool for robustness evaluation and robust optimization of IMPT plans. Studies need to be conducted to test the methods for larger cohorts of patients and for other sites. This research is supported by National Cancer Institute (NCI) grant P01CA021239, the University Cancer Foundation via the Institutional Research Grant program at the University of Texas MD
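
    Computing the per-voxel SD and the resulting SD-volume histogram is straightforward once the perturbed dose distributions are available, as the sketch below shows with random stand-in dose arrays; the scenario set, structure mask and dose values are hypothetical.

```python
# Per-voxel standard deviation and SD-volume histogram (SVH) from a set of
# perturbed dose distributions. The nine dose arrays here are random
# placeholders for the nominal, +/- setup (x, y, z) and +/- range scenarios.
import numpy as np

rng = np.random.default_rng(7)
n_scenarios, n_voxels = 9, 5000
doses = 60.0 + rng.normal(0.0, 1.5, (n_scenarios, n_voxels))   # Gy, toy values
ctv_mask = np.zeros(n_voxels, dtype=bool)
ctv_mask[:1500] = True                                          # hypothetical CTV voxels

sd = doses.std(axis=0)                                          # per-voxel SD across scenarios

def svh(sd_values, sd_grid):
    """Fraction of voxels whose SD exceeds each threshold (analogous to a DVH)."""
    return np.array([(sd_values >= s).mean() for s in sd_grid])

sd_grid = np.linspace(0.0, sd.max(), 50)
svh_ctv = svh(sd[ctv_mask], sd_grid)
# area under the SVH curve as a scalar robustness summary (trapezoidal rule)
area = float(np.sum(0.5 * (svh_ctv[1:] + svh_ctv[:-1]) * np.diff(sd_grid)))
print("CTV SVH area (smaller = more robust):", round(area, 3))
```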

  2. A robust optimization model for blood supply chain in emergency situations

    Directory of Open Access Journals (Sweden)

    Meysam Fereiduni

    2016-09-01

    Full Text Available In this paper, a multi-period model for the blood supply chain in emergency situations is presented to optimize decisions related to locating blood facilities and distributing blood products after natural disasters. In disastrous situations, uncertainty is an inseparable part of humanitarian logistics and of the blood supply chain as well. This paper proposes a robust network to capture the uncertain nature of the blood supply chain during and after disasters. The study considers donor points, blood facilities, processing and testing labs, and hospitals as the components of the blood supply chain. In addition, this paper makes location and allocation decisions for multiple post-disaster periods using real data. The study compares the performances of the “p-robust optimization” approach and the “robust optimization” approach, and the results are discussed.

  3. RECOVERY ACT - Robust Optimization for Connectivity and Flows in Dynamic Complex Networks

    Energy Technology Data Exchange (ETDEWEB)

    Balasundaram, Balabhaskar [Oklahoma State Univ., Stillwater, OK (United States); Butenko, Sergiy [Texas A & M Univ., College Station, TX (United States); Boginski, Vladimir [Univ. of Florida, Gainesville, FL (United States); Uryasev, Stan [Univ. of Florida, Gainesville, FL (United States)

    2013-12-25

    The goal of this project was to study robust connectivity and flow patterns of complex multi-scale systems modeled as networks. Networks provide effective ways to study global, system level properties, as well as local, multi-scale interactions at a component level. Numerous applications from power systems, telecommunication, transportation, biology, social science, and other areas have benefited from novel network-based models and their analysis. Modeling and optimization techniques that employ appropriate measures of risk for identifying robust clusters and resilient network designs in networks subject to uncertain failures were investigated in this collaborative multi-university project. In many practical situations one has to deal with uncertainties associated with possible failures of network components, thereby affecting the overall efficiency and performance of the system (e.g., every node/connection has a probability of partial or complete failure). Some extreme examples include power grid component failures, airline hub failures due to weather, or freeway closures due to emergencies. These are also situations in which people, materials, or other resources need to be managed efficiently. Important practical examples include rerouting flow through power grids, adjusting flight plans, and identifying routes for emergency services and supplies, in the event network elements fail unexpectedly. Solutions that are robust under uncertainty, in addition to being economically efficient, are needed. This project has led to the development of novel models and methodologies that can tackle the optimization problems arising in such situations. A number of new concepts, which have not been previously applied in this setting, were investigated in the framework of the project. The results can potentially help decision-makers to better control and identify robust or risk-averse decisions in such situations. Formulations and optimal solutions of the considered problems need

  4. Robustness of IPSA optimized high-dose-rate prostate brachytherapy treatment plans to catheter displacements.

    Science.gov (United States)

    Poder, Joel; Whitaker, May

    2016-06-01

    Inverse planning simulated annealing (IPSA) optimized brachytherapy treatment plans are characterized by large isolated dwell times at the first or last dwell position of each catheter. Potential catheter shifts relative to the target and organs at risk in these plans may lead to a more significant change in delivered dose to the volumes of interest relative to plans with more uniform dwell times. This study aims to determine whether the Nucletron Oncentra dwell time deviation constraint (DTDC) parameter can be optimized to improve the robustness of high-dose-rate (HDR) prostate brachytherapy plans to catheter displacements. A set of 10 clinically acceptable prostate plans were re-optimized with a DTDC parameter of 0 and 0.4. For each plan, catheter displacements of 3, 7, and 14 mm were retrospectively applied and the changes in dose volume histogram (DVH) indices and conformity indices analyzed. The robustness of clinically acceptable prostate plans to catheter displacements in the caudal direction was found to depend on the DTDC parameter. A DTDC value of 0 improves the robustness of planning target volume (PTV) coverage to catheter displacements, whereas a DTDC value of 0.4 improves the robustness of the plans to changes in hotspots. The results indicate that if used in conjunction with a pre-treatment catheter displacement correction protocol and a tolerance of 3 mm, a DTDC value of 0.4 may produce clinically superior plans. However, the effect of the DTDC parameter on plan robustness was not observed to be as strong as initially suspected.

  5. Robust Video Stabilization Using Particle Keypoint Update and l1-Optimized Camera Path

    Directory of Open Access Journals (Sweden)

    Semi Jeon

    2017-02-01

    Full Text Available Acquisition of stabilized video is an important issue for various types of digital cameras. This paper presents an adaptive camera path estimation method using robust feature detection to remove shaky artifacts in a video. The proposed algorithm consists of three steps: (i) robust feature detection using particle keypoints between adjacent frames; (ii) camera path estimation and smoothing; and (iii) rendering to reconstruct a stabilized video. As a result, the proposed algorithm can estimate the optimal homography by redefining important feature points in the flat region using particle keypoints. In addition, stabilized frames with fewer holes can be generated from the optimal, adaptive camera path that minimizes a temporal total variation (TV). The proposed video stabilization method is suitable for enhancing the visual quality of various portable cameras and can be applied to robot vision, driving assistant systems, and visual surveillance systems.
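
    The core of an l1-optimized camera path is a small convex program: fit the smoothed path to the measured, shaky path while penalizing the l1 norm of its first and second differences. The 1-D sketch below (cvxpy, a synthetic pan signal, made-up weights) shows only that core; real stabilizers smooth full homography parameters and add crop-window constraints.

```python
# Toy 1-D camera-path smoothing with l1 (total-variation style) penalties on
# the path differences, in the spirit of l1-optimized camera paths.
import cvxpy as cp
import numpy as np

rng = np.random.default_rng(8)
n_frames = 120
true_pan = np.cumsum(np.where(np.arange(n_frames) < 60, 0.0, 0.8))   # intended motion (px)
shaky = true_pan + rng.normal(0.0, 1.5, n_frames)                    # jittery measured path

p = cp.Variable(n_frames)                                            # smoothed camera path
lam1, lam2 = 10.0, 50.0
objective = cp.Minimize(cp.sum_squares(p - shaky)
                        + lam1 * cp.norm1(cp.diff(p, 1))             # favors constant segments
                        + lam2 * cp.norm1(cp.diff(p, 2)))            # favors linear segments
cp.Problem(objective).solve()

correction = p.value - shaky                                         # per-frame warp to apply
print("max per-frame correction (px):", round(float(np.abs(correction).max()), 2))
```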

  6. Robust state feedback controller design of STATCOM using chaotic optimization algorithm

    Directory of Open Access Journals (Sweden)

    Safari Amin

    2010-01-01

    Full Text Available In this paper, a new technique for the design of a robust state feedback controller for the static synchronous compensator (STATCOM) using the Chaotic Optimization Algorithm (COA) is presented. The design is formulated as an optimization problem which is solved by the COA. Since chaotic planning enjoys reliability, ergodicity and stochastic features, the proposed technique performs chaos mapping using Lozi map chaotic sequences, which increases its convergence rate. To ensure the robustness of the proposed damping controller, the design process takes into account a wide range of operating conditions and system configurations. The simulation results reveal that the proposed controller has an excellent capability in damping power system low frequency oscillations and greatly enhances the dynamic stability of power systems. Moreover, the system performance analysis under different operating conditions shows that the phase-based controller is superior compared to the magnitude-based controller.
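
    A Lozi-map-driven search can be sketched generically: generate a chaotic sequence, rescale it onto the gain range for global exploration, then reuse it for a shrinking local search around the incumbent. The objective below is a toy surrogate for the damping-performance cost, and the map parameters and search schedule are assumptions.

```python
# Lozi-map chaotic sequence generator driving a simple two-phase chaotic search
# on a toy objective (a stand-in for the controller's damping-performance cost).
import numpy as np

def lozi_sequence(n, a=1.7, b=0.5, x0=0.1, y0=0.1):
    """Generate n points of the Lozi map, rescaled to [0, 1]."""
    xs = np.empty(n)
    x, y = x0, y0
    for i in range(n):
        x, y = 1.0 - a * abs(x) + y, b * x
        xs[i] = x
    lo, hi = xs.min(), xs.max()
    return (xs - lo) / (hi - lo)

def cost(k):                                   # toy objective with several local minima
    return (k - 2.3) ** 2 + 0.4 * np.sin(8.0 * k)

k_lo, k_hi = 0.0, 5.0
chaos = lozi_sequence(2000)

# phase 1: global chaotic exploration of the gain range
cand = k_lo + (k_hi - k_lo) * chaos
best = cand[np.argmin(cost(cand))]

# phase 2: shrinking chaotic search around the incumbent
radius = 0.5
for c in lozi_sequence(2000, x0=0.2, y0=0.05):
    trial = np.clip(best + radius * (2.0 * c - 1.0), k_lo, k_hi)
    if cost(trial) < cost(best):
        best = trial
    radius *= 0.999

print("selected gain:", round(float(best), 4), "cost:", round(float(cost(best)), 4))
```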

  7. Robust non-gradient C subroutines for non-linear optimization

    DEFF Research Database (Denmark)

    Brock, Pernille; Madsen, Kaj; Nielsen, Hans Bruun

    2004-01-01

    This report presents a package of robust and easy-to-use C subroutines for solving unconstrained and constrained non-linear optimization problems, where gradient information is not required. The intention is that the routines should use the currently best algorithms available. All routines have...... subroutines are obtained by changing 0 to 1. The present report is a new and updated version of a previous report NI-91-04 with the title Non-gradient c Subroutines for Non- Linear Optimization, [16]. Both the previous and the present report describe a collection of subroutines, which have been translated...... from Fortran to C. The reason for writing the present report is that some of the C subroutines have been replaced by more effective and robust versions translated from the original Fortran subroutines to C by the Bandler Group, see [1]. Also the test examples have been modified to some extent...

  8. The French biofuels mandates under cost uncertainty - an assessment based on robust optimization

    International Nuclear Information System (INIS)

    Lorne, Daphne; Tchung-Ming, Stephane

    2012-01-01

    This paper investigates the impact of primary energy and technology cost uncertainty on the achievement of renewable and especially biofuel policies - mandates and norms - in France by 2030. A robust optimization technique that allows to deal with uncertainty sets of high dimensionality is implemented in a TIMES-based long-term planning model of the French energy transport and electricity sectors. The energy system costs and potential benefits (GHG emissions abatements, diversification) of the French renewable mandates are assessed within this framework. The results of this systemic analysis highlight how setting norms and mandates allows to reduce the variability of CO2 emissions reductions and supply mix diversification when the costs of technological progress and prices are uncertain. Beyond that, we discuss the usefulness of robust optimization in complement of other techniques to integrate uncertainty in large-scale energy models. (authors)

  9. Primal-dual convex optimization in large deformation diffeomorphic metric mapping: LDDMM meets robust regularizers

    Science.gov (United States)

    Hernandez, Monica

    2017-12-01

    This paper proposes a method for primal-dual convex optimization in variational large deformation diffeomorphic metric mapping problems formulated with robust regularizers and robust image similarity metrics. The method is based on the Chambolle and Pock primal-dual algorithm for solving general convex optimization problems. Diagonal preconditioning is used to ensure the convergence of the algorithm to the global minimum. We consider three robust regularizers likely to provide acceptable results in diffeomorphic registration: Huber, V-Huber and total generalized variation. The Huber norm is used in the image similarity term. The primal-dual equations are derived for the stationary and the non-stationary parameterizations of diffeomorphisms. The resulting algorithms have been implemented to run on the GPU using CUDA. For the most memory-consuming methods, we have developed a multi-GPU implementation. The GPU implementations allowed us to perform an exhaustive evaluation study in the NIREP and LPBA40 databases. The experiments showed that, for all the considered regularizers, the proposed method converges to diffeomorphic solutions while better preserving discontinuities at the boundaries of the objects compared to baseline diffeomorphic registration methods. In most cases, the evaluation showed a competitive performance for the robust regularizers, close to the performance of the baseline diffeomorphic registration methods.
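
    The Chambolle-Pock machinery itself is easy to exercise on a toy problem. The sketch below applies the standard primal-dual iteration to TV-L2 (ROF) denoising of a small synthetic image; it shares the algorithmic structure (dual projection, primal prox, over-relaxation) but none of the registration-specific energy of the paper, and the step sizes and regularization weight are assumptions.

```python
# Generic Chambolle-Pock primal-dual iteration for TV-L2 (ROF) denoising of a
# small 2-D image: min_x 0.5*||x - f||^2 + lam*TV(x).
import numpy as np

def grad(u):                                   # forward differences, Neumann boundary
    gx = np.zeros_like(u)
    gy = np.zeros_like(u)
    gx[:-1, :] = u[1:, :] - u[:-1, :]
    gy[:, :-1] = u[:, 1:] - u[:, :-1]
    return gx, gy

def div(px, py):                               # negative adjoint of grad
    dx = np.zeros_like(px)
    dy = np.zeros_like(py)
    dx[0, :] = px[0, :]
    dx[1:-1, :] = px[1:-1, :] - px[:-2, :]
    dx[-1, :] = -px[-2, :]
    dy[:, 0] = py[:, 0]
    dy[:, 1:-1] = py[:, 1:-1] - py[:, :-2]
    dy[:, -1] = -py[:, -2]
    return dx + dy

rng = np.random.default_rng(9)
clean = np.zeros((64, 64))
clean[16:48, 16:48] = 1.0
f = clean + 0.2 * rng.standard_normal(clean.shape)

lam = 0.15
tau = sigma = 1.0 / np.sqrt(8.0)               # step sizes: tau*sigma*||grad||^2 <= 1
x = f.copy()
x_bar = x.copy()
px = np.zeros_like(f)
py = np.zeros_like(f)

for _ in range(300):
    # dual ascent + projection onto the lam-ball (prox of the TV conjugate)
    gx, gy = grad(x_bar)
    px, py = px + sigma * gx, py + sigma * gy
    norm = np.maximum(1.0, np.sqrt(px ** 2 + py ** 2) / lam)
    px, py = px / norm, py / norm
    # primal descent + prox of the data term 0.5*||x - f||^2
    x_old = x
    x = (x + tau * div(px, py) + tau * f) / (1.0 + tau)
    x_bar = 2.0 * x - x_old                    # over-relaxation step

print("RMSE noisy: %.3f  denoised: %.3f"
      % (np.sqrt(np.mean((f - clean) ** 2)), np.sqrt(np.mean((x - clean) ** 2))))
```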

  10. SOCP relaxation bounds for the optimal subset selection problem applied to robust linear regression

    OpenAIRE

    Flores, Salvador

    2015-01-01

    This paper deals with the problem of finding the globally optimal subset of h elements from a larger set of n elements in d space dimensions so as to minimize a quadratic criterion, with an special emphasis on applications to computing the Least Trimmed Squares Estimator (LTSE) for robust regression. The computation of the LTSE is a challenging subset selection problem involving a nonlinear program with continuous and binary variables, linked in a highly nonlinear fashion. The selection of a ...

  11. Robust subspace estimation using low-rank optimization theory and applications

    CERN Document Server

    Oreifej, Omar

    2014-01-01

    Various fundamental applications in computer vision and machine learning require finding the basis of a certain subspace. Examples of such applications include face detection, motion estimation, and activity recognition. An increasing interest has been recently placed on this area as a result of significant advances in the mathematics of matrix rank optimization. Interestingly, robust subspace estimation can be posed as a low-rank optimization problem, which can be solved efficiently using techniques such as the method of Augmented Lagrange Multiplier. In this book, the authors discuss fundame

  12. A robust approach to optimal matched filter design in ultrasonic non-destructive evaluation (NDE)

    Science.gov (United States)

    Li, Minghui; Hayward, Gordon

    2017-02-01

    The matched filter was demonstrated to be a powerful yet efficient technique to enhance defect detection and imaging in ultrasonic non-destructive evaluation (NDE) of coarse grain materials, provided that the filter was properly designed and optimized. In the literature, in order to accurately approximate the defect echoes, the design utilized the real excitation signals, which made it time consuming and less straightforward to implement in practice. In this paper, we present a more robust and flexible approach to optimal matched filter design using simulated excitation signals, where the control parameters are chosen and optimized based on the real scenario of the array transducer, the transmitter-receiver system response, and the test sample; as a result, the filter response is optimized and depends on the material characteristics. Experiments on industrial samples are conducted and the results confirm the great benefits of the method.

  13. Optimal Control for Fast and Robust Generation of Entangled States in Anisotropic Heisenberg Chains

    Science.gov (United States)

    Zhang, Xiong-Peng; Shao, Bin; Zou, Jian

    2017-05-01

    Motivated by some recent results of optimal control (OC) theory, we study anisotropic XXZ Heisenberg spin-1/2 chains with control fields acting on a single spin, with the aim of exploring how a maximally entangled state can be prepared. To achieve this goal, we use a numerical optimization algorithm (e.g., the Krotov algorithm, which was shown to be capable of reaching the quantum speed limit) to search for an optimal set of control parameters, and then obtain OC pulses corresponding to the target fidelity. We find that the minimum time for implementing our target state depends on the anisotropy parameter Δ of the model. Finally, we analyze the robustness of the obtained results for the optimal fidelities and the effectiveness of the Krotov method under some realistic conditions.

  14. EABOT - Energetic analysis as a basis for robust optimization of trigeneration systems by linear programming

    International Nuclear Information System (INIS)

    Piacentino, A.; Cardona, F.

    2008-01-01

    The optimization of synthesis, design and operation in trigeneration systems for building applications is a quite complex task, due to the high number of decision variables, the presence of irregular heat, cooling and electric load profiles and the variable electricity price. Consequently, computer-aided techniques are usually adopted to achieve the optimal solution, based either on iterative techniques, linear or non-linear programming or evolutionary search. Large efforts have been made to improve algorithm efficiency, which have resulted in increasingly rapid convergence to the optimal solution and in reduced calculation time; robust algorithms have also been formulated, assuming stochastic behaviour for energy loads and prices. This paper is based on the assumption that margins for improvement in the optimization of trigeneration systems still exist, which requires an in-depth understanding of the plant's energetic behaviour. Robustness in the optimization of trigeneration systems has more to do with 'correct and comprehensive' modelling than with 'efficient' modelling, demanding larger efforts from energy specialists rather than from experts in efficient algorithms. With reference to a mixed integer linear programming model implemented in MatLab for a trigeneration system including a pressurized (medium temperature) heat storage, the relevant contributions of thermoeconomics and energo-environmental analysis in the phase of mathematical modelling and code testing are shown.

  15. A robust optimization based approach for microgrid operation in deregulated environment

    International Nuclear Information System (INIS)

    Gupta, R.A.; Gupta, Nand Kishor

    2015-01-01

    Highlights: • RO based approach developed for optimal MG operation in deregulated environment. • Wind uncertainty modeled by interval forecasting through ARIMA model. • Proposed approach evaluated using two realistic case studies. • Proposed approach evaluated the impact of degree of robustness. • Proposed approach gives a significant reduction in operation cost of microgrid. - Abstract: Micro Grids (MGs) are clusters of Distributed Energy Resource (DER) units and loads. MGs are self-sustainable and generally operated in two modes: (1) grid connected and (2) grid isolated. In a deregulated environment, the operation of the MG is managed by the Microgrid Operator (MO) with the objective of minimizing the total cost of operation. MG management is crucial in the deregulated power system due to (i) the integration of intermittent renewable sources such as wind and photovoltaic (PV) generation, and (ii) volatile grid prices. This paper presents a robust optimization based approach for optimal MG management considering wind power uncertainty. A time series based Autoregressive Integrated Moving Average (ARIMA) model is used to characterize the wind power uncertainty through interval forecasting. The proposed approach is illustrated through a case study having both dispatchable and non-dispatchable generators under different modes of operation. Further, the impact of the degree of robustness on the total cost of operation of the MG is analyzed in both cases. A comparative analysis between the results obtained using the proposed approach and another existing approach shows the strength of the proposed approach for cost minimization in MG management.

  16. Beyond optimality: Multistakeholder robustness tradeoffs for regional water portfolio planning under deep uncertainty

    Science.gov (United States)

    Herman, Jonathan D.; Zeff, Harrison B.; Reed, Patrick M.; Characklis, Gregory W.

    2014-10-01

    While optimality is a foundational mathematical concept in water resources planning and management, "optimal" solutions may be vulnerable to failure if deeply uncertain future conditions deviate from those assumed during optimization. These vulnerabilities may produce severely asymmetric impacts across a region, making it vital to evaluate the robustness of management strategies as well as their impacts for regional stakeholders. In this study, we contribute a multistakeholder many-objective robust decision making (MORDM) framework that blends many-objective search and uncertainty analysis tools to discover key tradeoffs between water supply alternatives and their robustness to deep uncertainties (e.g., population pressures, climate change, and financial risks). The proposed framework is demonstrated for four interconnected water utilities representing major stakeholders in the "Research Triangle" region of North Carolina, U.S. The utilities supply well over one million customers and have the ability to collectively manage drought via transfer agreements and shared infrastructure. We show that water portfolios for this region that compose optimal tradeoffs (i.e., Pareto-approximate solutions) under expected future conditions may suffer significantly degraded performance with only modest changes in deeply uncertain hydrologic and economic factors. We then use the Patient Rule Induction Method (PRIM) to identify which uncertain factors drive the individual and collective vulnerabilities for the four cooperating utilities. Our framework identifies key stakeholder dependencies and robustness tradeoffs associated with cooperative regional planning, which are critical to understanding the tensions between individual versus regional water supply goals. Cooperative demand management was found to be the key factor controlling the robustness of regional water supply planning, dominating other hydroclimatic and economic uncertainties through the 2025 planning horizon. Results

  17. Optimized Lift for Autonomous Formation Flight

    Data.gov (United States)

    National Aeronautics and Space Administration — Experimental in-flight evaluations have demonstrated that the concept of formation flight can reduce fuel consumption of trailing aircraft by 10 percent. Armstrong...

  18. Designing a Robust Nonlinear Dynamic Inversion Controller for Spacecraft Formation Flying

    Directory of Open Access Journals (Sweden)

    Inseok Yang

    2014-01-01

    Full Text Available The robust nonlinear dynamic inversion (RNDI) control technique is proposed to keep the relative position of spacecraft during formation flying. The proposed RNDI control method is based on nonlinear dynamic inversion (NDI). NDI is a nonlinear control method that replaces the original dynamics with user-selected desired dynamics. Because NDI removes nonlinearities in the model by inverting the original dynamics directly, it also eliminates the need to design suitable controllers for each equilibrium point; that is, NDI works as a self-scheduled controller. Removing the original model also makes it easier to satisfy specific requirements by simply shaping the desired dynamics. Therefore, NDI is simple and has many similarities to classical control. In real applications, however, it is difficult to achieve perfect cancellation of the original dynamics due to uncertainties that lead to performance degradation and can even make the system unstable. This paper proposes a robustness assurance method for NDI. The proposed RNDI is designed by combining NDI and sliding mode control (SMC), which is inherently robust owing to its high-speed switching inputs. This paper first verifies the similarities between NDI and SMC and then proposes the RNDI control method. The performance of the proposed method is evaluated by simulations applied to the spacecraft formation flying problem.

  19. Performance and robustness of optimal fractional fuzzy PID controllers for pitch control of a wind turbine using chaotic optimization algorithms.

    Science.gov (United States)

    Asgharnia, Amirhossein; Shahnazi, Reza; Jamali, Ali

    2018-05-11

    The most studied controller for pitch control of wind turbines is the proportional-integral-derivative (PID) controller. However, due to uncertainties in wind turbine modeling and wind speed profiles, the need for more effective controllers is inevitable. Moreover, the parameters of the PID controller are usually unknown and must be selected by the designer, which is neither a straightforward task nor guaranteed to be optimal. To cope with these drawbacks, in this paper, two advanced controllers called fuzzy PID (FPID) and fractional-order fuzzy PID (FOFPID) are proposed to improve the pitch control performance. Chaotic evolutionary optimization methods are used to find the parameters of the controllers. Using evolutionary optimization methods not only gives us the unknown parameters of the controllers but also guarantees optimality with respect to the chosen objective function; chaotic maps are used to improve the performance of the evolutionary algorithms. All the optimization procedures are applied to the two-mass model of a 5-MW wind turbine. The proposed optimal controllers are validated using the FAST simulator developed by NREL. Simulation results demonstrate that the FOFPID controller achieves better performance and robustness while incurring less fatigue damage at different wind speeds in comparison to the FPID, fractional-order PID (FOPID) and gain-scheduling PID (GSPID) controllers. Copyright © 2018 ISA. Published by Elsevier Ltd. All rights reserved.

  20. An integrated framework of agent-based modelling and robust optimization for microgrid energy management

    International Nuclear Information System (INIS)

    Kuznetsova, Elizaveta; Li, Yan-Fu; Ruiz, Carlos; Zio, Enrico

    2014-01-01

    Highlights: • Microgrid composed of a train station, wind power plant and district is investigated. • Each player is modeled as an individual agent aiming at a particular goal. • Prediction Intervals quantify the uncertain operational and environmental parameters. • Optimal goal-directed actions planning is achieved with robust optimization. • Optimization framework improves system reliability and decreases power imbalances. - Abstract: A microgrid energy management framework for the optimization of individual objectives of microgrid stakeholders is proposed. The framework is exemplified by way of a microgrid that is connected to an external grid via a transformer and includes the following players: a middle-size train station with an integrated photovoltaic power production system, a small energy production plant composed of urban wind turbines, and a surrounding district including residences and small businesses. The system is described by Agent-Based Modelling (ABM), in which each player is modelled as an individual agent aiming at a particular goal, (i) decreasing its expenses for power purchase or (ii) increasing its revenues from power selling. The context in which the agents operate is uncertain due to the stochasticity of operational and environmental parameters and the technical failures of the renewable power generators. The uncertain operational and environmental parameters of the microgrid are quantified in terms of Prediction Intervals (PIs) by a Non-dominated Sorting Genetic Algorithm (NSGA-II)-trained Neural Network (NN). Under these uncertainties, each agent seeks optimal goal-directed action planning by Robust Optimization (RO). The developed framework is shown to lead to an increase in system performance, evaluated in terms of typical reliability (adequacy) indicators for energy systems, such as Loss of Load Expectation (LOLE) and Loss of Expected Energy (LOEE), in comparison with optimal planning based on expected values of

  1. Reliability- and performance-based robust design optimization of MEMS structures considering technological uncertainties

    Science.gov (United States)

    Martowicz, Adam; Uhl, Tadeusz

    2012-10-01

    The paper discusses the applicability of a reliability- and performance-based multi-criteria robust design optimization technique for micro-electromechanical systems, considering their technological uncertainties. Micro-devices are nowadays commonly applied, especially in the automotive industry, since they combine the mechanical structure and the electronic control circuit on one board. Their frequent use motivates the elaboration of virtual prototyping tools that can be applied in design optimization with the introduction of technological uncertainties and reliability. The authors present a procedure for the optimization of micro-devices, which is based on the theory of reliability-based robust design optimization. This takes into consideration the performance of a micro-device and its reliability assessed by means of uncertainty analysis. The procedure assumes that, for each checked design configuration, the assessment of uncertainty propagation is performed with a meta-modeling technique. The described procedure is illustrated with an example of the optimization carried out for a finite element model of a micro-mirror. The multi-physics approach allowed the introduction of several physical phenomena to correctly model the electrostatic actuation and the squeezing effect present between the electrodes. The optimization was preceded by a sensitivity analysis to establish the design and uncertain domains. The genetic algorithms fulfilled the defined optimization task effectively. The best individuals discovered are characterized by a minimized value of the multi-criteria objective function while simultaneously satisfying the constraint on material strength. The restriction on the maximum equivalent stresses was introduced through a conditionally formulated objective function with a penalty component. The results were successfully verified with a global uniform search through the input design domain.

  2. An optimization methodology for identifying robust process integration investments under uncertainty

    International Nuclear Information System (INIS)

    Svensson, Elin; Berntsson, Thore; Stroemberg, Ann-Brith; Patriksson, Michael

    2009-01-01

    Uncertainties in future energy prices and policies strongly affect decisions on investments in process integration measures in industry. In this paper, we present a five-step methodology for the identification of robust investment alternatives incorporating explicitly such uncertainties in the optimization model. Methods for optimization under uncertainty (or, stochastic programming) are thus combined with a deep understanding of process integration and process technology in order to achieve a framework for decision-making concerning the investment planning of process integration measures under uncertainty. The proposed methodology enables the optimization of investments in energy efficiency with respect to their net present value or an environmental objective. In particular, as a result of the optimization approach, complex investment alternatives, allowing for combinations of energy efficiency measures, can be analyzed. Uncertainties as well as time-dependent parameters, such as energy prices and policies, are modelled using a scenario-based approach, enabling the identification of robust investment solutions. The methodology is primarily an aid for decision-makers in industry, but it will also provide insight for policy-makers into how uncertainties regarding future price levels and policy instruments affect the decisions on investments in energy efficiency measures. (author)
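    Illustrative sketch (not part of the record): the methodology screens investment alternatives against price and policy scenarios rather than a single forecast. The Python sketch below, with hypothetical cash-flow models and a simple worst-case criterion, shows how scenario-based robustness of the net present value could be evaluated.

```python
def npv(cash_flows, rate):
    """Net present value of yearly cash flows discounted at the given rate."""
    return sum(cf / (1.0 + rate) ** t for t, cf in enumerate(cash_flows))

def rank_robust_investments(alternatives, scenarios, rate=0.08):
    """Rank investment alternatives by their worst-case NPV over the scenarios.

    alternatives : dict name -> callable(scenario) returning a list of yearly cash flows
    scenarios    : list of scenario dicts (e.g. energy prices, CO2 charges over time)
    """
    summary = {}
    for name, cash_flow_model in alternatives.items():
        npvs = [npv(cash_flow_model(s), rate) for s in scenarios]
        summary[name] = {"worst": min(npvs), "mean": sum(npvs) / len(npvs)}
    # a robust alternative keeps an acceptable NPV even in its worst scenario
    return sorted(summary.items(), key=lambda kv: kv[1]["worst"], reverse=True)
```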

  3. Estimation and robust control of microalgae culture for optimization of biological fixation of CO2

    International Nuclear Information System (INIS)

    Filali, R.

    2012-01-01

    This thesis deals with the optimization of carbon dioxide consumption by microalgae. Indeed, following several current environmental issues primarily related to large emissions of CO2, it is shown that microalgae represent a very promising solution for CO2 mitigation. From this perspective, we are interested in the optimization strategy of CO2 consumption through the development of a robust control law. The main aim is to ensure optimal operating conditions for a Chlorella vulgaris culture in an instrumented photo-bioreactor. The thesis is based on three major axes. The first one concerns growth modeling of the selected species based on a mathematical model reflecting the influence of light and total inorganic carbon concentration. For the control context, the second axis is related to biomass estimation from the real-time measurement of dissolved carbon dioxide. This step is necessary for the control part due to the lack of affordable real-time sensors for this kind of measurement. Three observer structures have been studied and compared: an extended Kalman filter, an asymptotic observer and an interval observer. The last axis deals with the implementation of a non-linear predictive control law coupled to the estimation strategy for the regulation of the cellular concentration around a value which maximizes the CO2 consumption. Performance and robustness of this control law have been validated in simulation and experimentally on a laboratory-scale instrumented photo-bioreactor. This thesis represents a preliminary study for the optimization of the CO2 mitigation strategy by microalgae. (author)

  4. An optimization methodology for identifying robust process integration investments under uncertainty

    Energy Technology Data Exchange (ETDEWEB)

    Svensson, Elin; Berntsson, Thore [Department of Energy and Environment, Division of Heat and Power Technology, Chalmers University of Technology, SE-412 96 Goeteborg (Sweden); Stroemberg, Ann-Brith [Fraunhofer-Chalmers Research Centre for Industrial Mathematics, Chalmers Science Park, SE-412 88 Gothenburg (Sweden); Patriksson, Michael [Department of Mathematical Sciences, Chalmers University of Technology and Department of Mathematical Sciences, University of Gothenburg, SE-412 96 Goeteborg (Sweden)

    2009-02-15

    Uncertainties in future energy prices and policies strongly affect decisions on investments in process integration measures in industry. In this paper, we present a five-step methodology for the identification of robust investment alternatives incorporating explicitly such uncertainties in the optimization model. Methods for optimization under uncertainty (or, stochastic programming) are thus combined with a deep understanding of process integration and process technology in order to achieve a framework for decision-making concerning the investment planning of process integration measures under uncertainty. The proposed methodology enables the optimization of investments in energy efficiency with respect to their net present value or an environmental objective. In particular, as a result of the optimization approach, complex investment alternatives, allowing for combinations of energy efficiency measures, can be analyzed. Uncertainties as well as time-dependent parameters, such as energy prices and policies, are modelled using a scenario-based approach, enabling the identification of robust investment solutions. The methodology is primarily an aid for decision-makers in industry, but it will also provide insight for policy-makers into how uncertainties regarding future price levels and policy instruments affect the decisions on investments in energy efficiency measures. (author)

  5. Possibility-based robust design optimization for the structural-acoustic system with fuzzy parameters

    Science.gov (United States)

    Yin, Hui; Yu, Dejie; Yin, Shengwen; Xia, Baizhan

    2018-03-01

    The conventional engineering optimization problems considering uncertainties are based on the probabilistic model. However, the probabilistic model may be unavailable because of the lack of sufficient objective information to construct the precise probability distribution of uncertainties. This paper proposes a possibility-based robust design optimization (PBRDO) framework for the uncertain structural-acoustic system based on the fuzzy set model, which can be constructed by expert opinions. The objective of robust design is to optimize the expectation and variability of system performance with respect to uncertainties simultaneously. In the proposed PBRDO, the entropy of the fuzzy system response is used as the variability index; the weighted sum of the entropy and expectation of the fuzzy response is used as the objective function, and the constraints are established in the possibility context. The computations for the constraints and objective function of PBRDO are a triple-loop and a double-loop nested problem, respectively, whose computational costs are considerable. To improve the computational efficiency, the target performance approach is introduced to transform the calculation of the constraints into a double-loop nested problem. To further improve the computational efficiency, a Chebyshev fuzzy method (CFM) based on the Chebyshev polynomials is proposed to estimate the objective function, and the Chebyshev interval method (CIM) is introduced to estimate the constraints, thereby the optimization problem is transformed into a single-loop one. Numerical results on a shell structural-acoustic system verify the effectiveness and feasibility of the proposed methods.

  6. Robust Nearfield Wideband Beamforming Design Based on Adaptive-Weighted Convex Optimization

    Directory of Open Access Journals (Sweden)

    Guo Ye-Cai

    2017-01-01

    Full Text Available Nearfield wideband beamformers for microphone arrays have wide applications in multichannel speech enhancement. Nearfield wideband beamformer design based on convex optimization is one of the typical representatives of robust approaches. However, in this approach, the weighting coefficient of the convex optimization is a constant, which does not use all the freedom provided by the weighting efficiently. Therefore, there is still room to improve the performance. To solve this problem, we developed a robust nearfield wideband beamformer design approach based on adaptive-weighted convex optimization. The proposed approach defines an adaptive weighting function based on adaptive array signal processing theory and adjusts its value flexibly, which improves the beamforming performance. During each adaptive update of the weighting function, the convex optimization problem can be formulated as an SOCP (Second-Order Cone Program) problem, which can be solved efficiently using well-established interior-point methods. This method is suitable for the case where the sound source is in the nearfield range, works well in the presence of microphone mismatches, and is applicable to arbitrary array geometries. Several design examples are presented to verify the effectiveness of the proposed approach and the correctness of the theoretical analysis.

  7. Optimal probabilistic energy management in a typical micro-grid based-on robust optimization and point estimate method

    International Nuclear Information System (INIS)

    Alavi, Seyed Arash; Ahmadian, Ali; Aliakbar-Golkar, Masoud

    2015-01-01

    Highlights: • Energy management is necessary in the active distribution network to reduce operation costs. • Uncertainty modeling is essential in energy management studies in active distribution networks. • The point estimate method is suitable for uncertainty modeling due to its low computation time and acceptable accuracy. • In the absence of a Probability Distribution Function (PDF), robust optimization has a good ability for uncertainty modeling. - Abstract: Uncertainty can be defined as the probability of a difference between the forecasted value and the real value. The smaller this probability, the lower the operation cost of the power system. This necessitates modeling the system random variables (such as the output power of renewable resources and the load demand) with appropriate and practicable methods. In this paper, a suitable procedure is proposed for optimal energy management of a typical micro-grid with regard to the relevant uncertainties. The point estimate method is applied to model the wind power and solar power uncertainties, and a robust optimization technique is utilized to model the load demand uncertainty. Finally, a comparison is made between deterministic and probabilistic management in different scenarios, and the results are analyzed and evaluated.

  8. Robust multi-site MR data processing: iterative optimization of bias correction, tissue classification, and registration.

    Science.gov (United States)

    Young Kim, Eun; Johnson, Hans J

    2013-01-01

    A robust multi-modal tool for automated registration, bias correction, and tissue classification has been implemented for large-scale heterogeneous multi-site longitudinal MR data analysis. This work focused on improving an iterative optimization framework between bias correction, registration, and tissue classification, inspired by previous work. The primary contributions are robustness improvements from the incorporation of the following four elements: (1) use of multi-modal and repeated scans, (2) highly deformable registration, (3) an extended set of tissue definitions, and (4) multi-modal-aware intensity-context priors. The benefits of these enhancements were investigated in a series of experiments with both a simulated brain data set (BrainWeb) and highly heterogeneous data from a 32-site imaging study, with quality assessed through expert visual inspection. The implementation of this tool is tailored for, but not limited to, large-scale data processing with great data variation, and offers a flexible interface. In this paper, we describe enhancements to joint registration, bias correction, and tissue classification that improve the generalizability and robustness for processing multi-modal longitudinal MR scans collected at multiple sites. The tool was evaluated using both simulated and human-subject MRI images. With these enhancements, the results showed improved robustness for large-scale heterogeneous MRI processing.

  9. System optimization for HVAC energy management using the robust evolutionary algorithm

    International Nuclear Information System (INIS)

    Fong, K.F.; Hanby, V.I.; Chow, T.T.

    2009-01-01

    For an installed centralized heating, ventilating and air conditioning (HVAC) system, appropriate energy management measures can achieve energy conservation targets through optimal control and operation. The performance optimization of conventional HVAC systems may be handled from operating experience, but this may not cover the different optimization scenarios and parameters arising from the variety of load and weather conditions. In this regard, it is common to apply a suitable simulation-optimization technique to model the system and then determine the required operation parameters. The plant simulation models can be built either from available simulation programs or as a system of mathematical expressions. To handle the simulation models, iterations are involved in the numerical solution methods. Since gradient information is not easily available due to the complex nature of the equations, traditional gradient-based optimization methods are not applicable to this kind of system model. For heuristic optimization methods, a continual search is commonly necessary, and a system function call is required for each search. The frequency of simulation function calls is then a time-determining step, and an efficient optimization method is crucial in order to find the solution through a number of function calls within a reasonable computational period. In this paper, the robust evolutionary algorithm (REA) is presented to tackle this nature of HVAC simulation models. REA is based on one of the paradigms of evolutionary algorithms, the evolution strategy, which is a stochastic population-based searching technique that emphasizes mutation. The REA, which incorporates Cauchy deterministic mutation, tournament selection and arithmetic recombination, provides a synergetic effect for the optimal search. The REA is effective in coping with complex simulation models, as well as those represented by explicit mathematical expressions of
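    Illustrative sketch (not the authors' code): the REA described above combines Cauchy mutation, tournament selection and arithmetic recombination within an evolution-strategy loop. A minimal NumPy version of those three operators, applied to a generic black-box cost function standing in for the HVAC simulation model, could look as follows.

```python
import numpy as np

def rea_minimize(f, lower, upper, pop_size=30, generations=200, step=0.1, seed=None):
    """Toy evolution strategy with Cauchy mutation, tournament selection and
    arithmetic recombination; f is the (expensive) simulation-based cost."""
    rng = np.random.default_rng(seed)
    lower, upper = np.asarray(lower, float), np.asarray(upper, float)
    dim = lower.size
    pop = lower + rng.random((pop_size, dim)) * (upper - lower)
    fit = np.array([f(x) for x in pop])

    def tournament():
        i, j = rng.integers(pop_size, size=2)
        return pop[i] if fit[i] < fit[j] else pop[j]

    for _ in range(generations):
        children = []
        for _ in range(pop_size):
            p1, p2 = tournament(), tournament()
            alpha = rng.random()
            child = alpha * p1 + (1.0 - alpha) * p2                           # arithmetic recombination
            child = child + step * (upper - lower) * rng.standard_cauchy(dim)  # Cauchy mutation
            children.append(np.clip(child, lower, upper))
        children = np.asarray(children)
        child_fit = np.array([f(x) for x in children])
        # elitist replacement: keep the best pop_size individuals overall
        all_pop = np.vstack([pop, children])
        all_fit = np.concatenate([fit, child_fit])
        keep = np.argsort(all_fit)[:pop_size]
        pop, fit = all_pop[keep], all_fit[keep]
    return pop[0], fit[0]
```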

  10. Robust video watermarking via optimization algorithm for quantization of pseudo-random semi-global statistics

    Science.gov (United States)

    Kucukgoz, Mehmet; Harmanci, Oztan; Mihcak, Mehmet K.; Venkatesan, Ramarathnam

    2005-03-01

    In this paper, we propose a novel semi-blind video watermarking scheme, where we use pseudo-random robust semi-global features of video in the three dimensional wavelet transform domain. We design the watermark sequence via solving an optimization problem, such that the features of the mark-embedded video are the quantized versions of the features of the original video. The exact realizations of the algorithmic parameters are chosen pseudo-randomly via a secure pseudo-random number generator, whose seed is the secret key, that is known (resp. unknown) by the embedder and the receiver (resp. by the public). We experimentally show the robustness of our algorithm against several attacks, such as conventional signal processing modifications and adversarial estimation attacks.

  11. A Data-Driven Frequency-Domain Approach for Robust Controller Design via Convex Optimization

    CERN Document Server

    AUTHOR|(CDS)2092751; Martino, Michele

    The objective of this dissertation is to develop data-driven frequency-domain methods for designing robust controllers through the use of convex optimization algorithms. Many of today's industrial processes are becoming more complex, and building accurate physical models for these plants from first principles may be impossible. Even when a model is available, it may be too complex to use for an appropriate controller design. With the increased developments in the computing world, large amounts of measured data can be easily collected and stored for processing purposes. Data can also be collected and used in an on-line fashion. Thus it is very sensible to make full use of this data for controller design, performance evaluation, and stability analysis. The design methods proposed in this work ensure that the dynamics of a system are captured in an experiment and avoid the problem of unmodeled dynamics associated with parametric models. The devised methods consider robust designs...

  12. A case study on robust optimal experimental design for model calibration of ω-Transaminase

    DEFF Research Database (Denmark)

    Van Daele, Timothy; Van Hauwermeiren, Daan; Ringborg, Rolf Hoffmeyer

    Proper calibration of models describing enzyme kinetics can be quite challenging. This is especially the case for more complex models like transaminase models (Shin and Kim, 1998). The latter fitted model parameters, but the confidence on the parameter estimation was not derived. Hence … the experimental space. However, it is expected that more informative experiments can be designed to increase the confidence of the parameter estimates. Therefore, we apply Optimal Experimental Design (OED) to the calibrated model of Shin and Kim (1998). The total number of samples was retained to allow fair … parameter values are not known before finishing the model calibration. However, it is important that the chosen parameter values are close to the real parameter values, otherwise the OED can possibly yield non-informative experiments. To counter this problem, one can use robust OED. The idea of robust OED …

  13. QFT Based Robust Positioning Control of the PMSM Using Automatic Loop Shaping with Teaching Learning Optimization

    Directory of Open Access Journals (Sweden)

    Nitish Katal

    2016-01-01

    Full Text Available Automation of robust control system synthesis for uncertain systems is of great practical interest. In this paper, the loop shaping step for synthesizing a quantitative feedback theory (QFT) based controller for a two-phase permanent magnet stepper motor (PMSM) has been automated using the teaching learning-based optimization (TLBO) algorithm. The QFT controller design problem has been posed as an optimization problem and the TLBO algorithm has been used to minimize the proposed cost function. This facilitates designing a low-order fixed-structure controller, eliminates the need for the manual loop shaping step on the Nichols charts, and prevents overdesign of the controller. A performance comparison of the designed controller has been made with the classical Ziegler-Nichols PID tuning method and with QFT controllers tuned using other optimization algorithms. The simulation results show that the QFT controller designed using TLBO offers robust stability, disturbance rejection, and proper reference tracking over a range of the PMSM's parametric uncertainties as compared to the classical design techniques.
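    Illustrative sketch (not the paper's implementation): TLBO evolves a population through a teacher phase and a learner phase without algorithm-specific tuning parameters. The minimal version below optimizes a generic cost function that would, in this setting, wrap the QFT loop-shaping objective; all bounds and sizes are placeholders.

```python
import numpy as np

def tlbo(cost, lower, upper, pop_size=20, iters=100, seed=None):
    """Bare-bones teaching-learning-based optimization (TLBO) loop."""
    rng = np.random.default_rng(seed)
    lower, upper = np.asarray(lower, float), np.asarray(upper, float)
    X = lower + rng.random((pop_size, lower.size)) * (upper - lower)
    F = np.array([cost(x) for x in X])
    for _ in range(iters):
        # --- teacher phase: move the class toward the best individual ---
        teacher = X[F.argmin()]
        TF = rng.integers(1, 3)                       # teaching factor in {1, 2}
        mean = X.mean(axis=0)
        X_new = np.clip(X + rng.random(X.shape) * (teacher - TF * mean), lower, upper)
        F_new = np.array([cost(x) for x in X_new])
        improved = F_new < F
        X[improved], F[improved] = X_new[improved], F_new[improved]
        # --- learner phase: pairwise interaction between learners ---
        for i in range(pop_size):
            j = rng.integers(pop_size)
            if j == i:
                continue
            direction = X[i] - X[j] if F[i] < F[j] else X[j] - X[i]
            x_try = np.clip(X[i] + rng.random(lower.size) * direction, lower, upper)
            f_try = cost(x_try)
            if f_try < F[i]:
                X[i], F[i] = x_try, f_try
    return X[F.argmin()], F.min()
```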

  14. A critical evaluation of worst case optimization methods for robust intensity-modulated proton therapy planning

    International Nuclear Information System (INIS)

    Fredriksson, Albin; Bokrantz, Rasmus

    2014-01-01

    Purpose: To critically evaluate and compare three worst case optimization methods that have been previously employed to generate intensity-modulated proton therapy treatment plans that are robust against systematic errors. The goal of the evaluation is to identify circumstances when the methods behave differently and to describe the mechanism behind the differences when they occur. Methods: The worst case methods optimize plans to perform as well as possible under the worst case scenario that can physically occur (composite worst case), the combination of the worst case scenarios for each objective constituent considered independently (objectivewise worst case), and the combination of the worst case scenarios for each voxel considered independently (voxelwise worst case). These three methods were assessed with respect to treatment planning for prostate under systematic setup uncertainty. An equivalence with probabilistic optimization was used to identify the scenarios that determine the outcome of the optimization. Results: If the conflict between target coverage and normal tissue sparing is small and no dose-volume histogram (DVH) constraints are present, then all three methods yield robust plans. Otherwise, they all have their shortcomings: Composite worst case led to unnecessarily low plan quality in boundary scenarios that were less difficult than the worst case ones. Objectivewise worst case generally led to nonrobust plans. Voxelwise worst case led to overly conservative plans with respect to DVH constraints, which resulted in excessive dose to normal tissue, and less sharp dose fall-off than the other two methods. Conclusions: The three worst case methods have clearly different behaviors. These behaviors can be understood from which scenarios are active in the optimization. No particular method is superior to the others under all circumstances: composite worst case is suitable if the conflicts are not very severe or there are DVH constraints whereas
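    Illustrative sketch (not from the record): the distinction between the composite and objectivewise formulations can be written down in a few lines. Below, each objective is a callable that maps an error scenario to a penalty and the weights are planner-chosen; the voxelwise variant would push the maximization one level further, down to individual voxel contributions.

```python
def composite_worst_case(objectives, weights, scenarios):
    """One worst scenario for the whole plan: max over scenarios of the weighted sum."""
    return max(
        sum(w * f(s) for w, f in zip(weights, objectives))
        for s in scenarios
    )

def objectivewise_worst_case(objectives, weights, scenarios):
    """Each objective takes its own worst scenario, even if those scenarios
    could never occur together physically."""
    return sum(
        w * max(f(s) for s in scenarios)
        for w, f in zip(weights, objectives)
    )
```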

  15. An intrinsic robust rank-one-approximation approach for currency portfolio optimization

    Directory of Open Access Journals (Sweden)

    Hongxuan Huang

    2018-03-01

    Full Text Available A currency portfolio is a special kind of wealth whose value fluctuates with foreign exchange rates over time, which possesses the 3Vs (volume, variety and velocity) properties of big data in the currency market. In this paper, an intrinsic robust rank one approximation (ROA) approach is proposed to maximize the value of currency portfolios over time. The main results of the paper include four parts: Firstly, under the assumptions about the currency market, the currency portfolio optimization problem is formulated as the basic model, in which there are two types of variables describing currency amounts in portfolios and the amount of each currency exchanged into another, respectively. Secondly, the rank one approximation problem and its variants are also formulated to approximate a foreign exchange rate matrix, whose performance is measured by the Frobenius norm or the 2-norm of a residual matrix. The intrinsic robustness of the rank one approximation is proved together with summarizing properties of the basic ROA problem and designing a modified power method to search for the virtual exchange rates hidden in a foreign exchange rate matrix. Thirdly, a technique for decision variables reduction is presented to attack the currency portfolio optimization. The reduced formulation is referred to as the ROA model, which keeps only variables describing currency amounts in portfolios. The optimal solution to the ROA model also induces a feasible solution to the basic model of the currency portfolio problem by integrating forex operations from the ROA model with practical forex rates. Finally, numerical examples are presented to verify the feasibility and efficiency of the intrinsic robust rank one approximation approach. They also indicate that there exists an objective measure for evaluating and optimizing currency portfolios over time, which is related to the virtual standard currency and independent of any real currency selected specially for measurement.
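    Illustrative sketch (not the paper's algorithm): an arbitrage-free exchange-rate matrix is rank one, so its best rank-one approximation exposes "virtual" exchange rates. The sketch below uses plain power iteration to obtain the dominant singular triple of a rate matrix and measures the residual in the Frobenius norm, the quality criterion named in the abstract; the example matrix is made up.

```python
import numpy as np

def rank_one_fx(R, iters=200):
    """Best rank-one approximation R ~ sigma * outer(u, v) via power iteration.

    For a consistent market, v (up to scaling) plays the role of virtual
    exchange rates against an artificial standard currency.
    """
    R = np.asarray(R, float)
    v = np.ones(R.shape[1])
    for _ in range(iters):
        u = R @ v
        u /= np.linalg.norm(u)
        v = R.T @ u
    sigma = np.linalg.norm(v)
    v /= sigma
    residual = np.linalg.norm(R - sigma * np.outer(u, v), "fro")
    return sigma, u, v, residual

# e.g. a 3-currency matrix R[i, j] = "units of j per unit of i" (made-up rates):
# rank_one_fx(np.array([[1.0, 0.9, 110.0], [1.11, 1.0, 122.0], [0.0091, 0.0082, 1.0]]))
```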

  16. Model averaging, optimal inference and habit formation

    Directory of Open Access Journals (Sweden)

    Thomas H B FitzGerald

    2014-06-01

    Full Text Available Postulating that the brain performs approximate Bayesian inference generates principled and empirically testable models of neuronal function – the subject of much current interest in neuroscience and related disciplines. Current formulations address inference and learning under some assumed and particular model. In reality, organisms are often faced with an additional challenge – that of determining which model or models of their environment are the best for guiding behaviour. Bayesian model averaging – which says that an agent should weight the predictions of different models according to their evidence – provides a principled way to solve this problem. Importantly, because model evidence is determined by both the accuracy and complexity of the model, optimal inference requires that these be traded off against one another. This means an agent’s behaviour should show an equivalent balance. We hypothesise that Bayesian model averaging plays an important role in cognition, given that it is both optimal and realisable within a plausible neuronal architecture. We outline model averaging and how it might be implemented, and then explore a number of implications for brain and behaviour. In particular, we propose that model averaging can explain a number of apparently suboptimal phenomena within the framework of approximate (bounded) Bayesian inference, focussing particularly upon the relationship between goal-directed and habitual behaviour.
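    Illustrative sketch (not from the article): Bayesian model averaging weights each model's prediction by its normalized evidence, which already penalizes complexity through the marginal likelihood. A minimal numerical version, assuming the per-model log evidences and predictions are given, is shown below.

```python
import numpy as np

def bma_predict(log_evidences, model_predictions):
    """Bayesian model averaging with a uniform prior over models.

    log_evidences     : array of log marginal likelihoods, one per model
    model_predictions : array of shape (n_models, n_outputs)
    Returns the posterior model probabilities and the averaged prediction.
    """
    log_e = np.asarray(log_evidences, float)
    w = np.exp(log_e - log_e.max())          # subtract the max for numerical stability
    w /= w.sum()                             # posterior model probabilities
    preds = np.asarray(model_predictions, float)
    return w, w @ preds                      # evidence-weighted average of the predictions

# bma_predict([-10.2, -11.0, -14.5], [[0.8], [0.6], [0.1]])   # made-up values
```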

  17. Multiple shooting applied to robust reservoir control optimization including output constraints on coherent risk measures

    DEFF Research Database (Denmark)

    Codas, Andrés; Hanssen, Kristian G.; Foss, Bjarne

    2017-01-01

    The production life of oil reservoirs starts under significant uncertainty regarding the actual economic return of the recovery process due to the lack of oil field data. Consequently, investors and operators make management decisions based on a limited and uncertain description of the reservoir. … In this work, we propose a new formulation for robust optimization of reservoir well controls. It is inspired by the multiple shooting (MS) method which permits a broad range of parallelization opportunities and output constraint handling. This formulation exploits coherent risk measures, a concept

  18. Optimal and robust control of quantum state transfer by shaping the spectral phase of ultrafast laser pulses.

    Science.gov (United States)

    Guo, Yu; Dong, Daoyi; Shu, Chuan-Cun

    2018-04-04

    Achieving fast and efficient quantum state transfer is a fundamental task in physics, chemistry and quantum information science. However, the successful implementation of the perfect quantum state transfer also requires robustness under practically inevitable perturbative defects. Here, we demonstrate how an optimal and robust quantum state transfer can be achieved by shaping the spectral phase of an ultrafast laser pulse in the framework of frequency domain quantum optimal control theory. Our numerical simulations of a single dibenzoterrylene molecule as well as of atomic rubidium show that optimal and robust quantum state transfer via spectral phase modulated laser pulses can be achieved by incorporating a filtering function of the frequency into the optimization algorithm, which in turn has potential applications for ultrafast robust control of photochemical reactions.

  19. Robust optimal control design using a differential game approach for open-loop linear quadratic descriptor systems

    NARCIS (Netherlands)

    Musthofa, M.W.; Salmah, S.; Engwerda, Jacob; Suparwanto, A.

    This paper studies the robust optimal control problem for descriptor systems. We applied differential game theory to solve the disturbance attenuation problem. The robust control problem was converted into a reduced ordinary zero-sum game. Within a linear quadratic setting, we solved the problem for

  20. Optimal design of modular cogeneration plants for hospital facilities and robustness evaluation of the results

    International Nuclear Information System (INIS)

    Gimelli, A.; Muccillo, M.; Sannino, R.

    2017-01-01

    Highlights: • A specific methodology has been set up based on genetic optimization algorithm. • Results highlight a tradeoff between primary energy savings (TPES) and simple payback (SPB). • Optimized plant configurations show TPES exceeding 18% and SPB of approximately three years. • The study aims to identify the most stable plant solutions through the robust design optimization. • The research shows how a deterministic definition of the decision variables could lead to an overestimation of the results. - Abstract: The widespread adoption of combined heat and power generation is widely recognized as a strategic goal to achieve significant primary energy savings and lower carbon dioxide emissions. In this context, the purpose of this research is to evaluate the potential of cogeneration based on reciprocating gas engines for some Italian hospital buildings. Comparative analyses have been conducted based on the load profiles of two specific hospital facilities and through the study of the cogeneration system-user interaction. To this end, a specific methodology has been set up by coupling a specifically developed calculation algorithm to a genetic optimization algorithm, and a multi-objective approach has been adopted. The results from the optimization problem highlight a clear trade-off between total primary energy savings (TPES) and simple payback period (SPB). Optimized plant configurations and management strategies show TPES exceeding 18% for the reference hospital facilities and multi–gas engine solutions along with a minimum SPB of approximately three years, thereby justifying the European regulation promoting cogeneration. However, designing a CHP plant for a specific energetic, legislative or market scenario does not guarantee good performance when these scenarios change. For this reason, the proposed methodology has been enhanced in order to focus on some innovative aspects. In particular, this study proposes an uncommon and effective approach

  1. Optimal robustness of supervised learning from a noniterative point of view

    Science.gov (United States)

    Hu, Chia-Lun J.

    1995-08-01

    In most artificial neural network applications (e.g. pattern recognition), if the dimension of the input vectors is much larger than the number of patterns to be recognized, a one-layered, hard-limited perceptron is generally sufficient to do the recognition job. As long as the training input-output mapping set is numerically given, and as long as this given set satisfies a special linear-independency relation, the connection matrix meeting the supervised learning requirements can be solved for by a noniterative, one-step, algebraic method. The learning of this noniterative scheme is very fast (close to real-time learning) because it is one-step and noniterative. The recognition of untrained patterns is very robust because a universal geometrical optimization process for selecting the solution can be applied to the learning process. This paper reports the theoretical foundation of this noniterative learning scheme and focuses on the optimal robustness analysis. A real-time character recognition scheme is then designed along this line. This character recognition scheme will be used (in a movie presentation) to demonstrate the experimental results of some theoretical parts reported in this paper.
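    Illustrative sketch (not the author's algorithm): the one-step, noniterative solution of the connection matrix can be illustrated with an ordinary least-squares pseudo-inverse; the hard-limited output is then just the sign of the linear response. The shapes and targets below are hypothetical.

```python
import numpy as np

def one_step_learn(X, T):
    """Solve W X ~ T in one algebraic step (no iterations).

    X : (d, p) matrix whose columns are the p training input vectors, with d >> p
    T : (c, p) matrix of +/-1 target codes, one column per training pattern
    """
    return T @ np.linalg.pinv(X)        # closed-form least-squares connection matrix

def recognize(W, x):
    """Hard-limited perceptron output for a new input pattern x."""
    return np.sign(W @ x)

# rng = np.random.default_rng(0)
# X = rng.normal(size=(100, 4)); T = np.sign(rng.normal(size=(2, 4)))
# W = one_step_learn(X, T)
# np.allclose(recognize(W, X[:, 0]), T[:, 0])   # trained pattern is recalled exactly
```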

  2. Experimental Optimization In Polymer BLEND Composite Preparation Based On Mix Level of Taguchi Robust Design

    International Nuclear Information System (INIS)

    Abdul Aziz Mohamed; Jaafar Abdullah; Dahlan Mohd; Rozaidi Rasid; Megat Harun AlRashid Megat Ahmad; Mahathir Mohamad; Mohd Hamzah Harun

    2012-01-01

    An L18 orthogonal array in the mix-level Taguchi robust design method was used to optimize the experimental conditions for the preparation of a polymer blend composite. Tensile strength and neutron absorption of the composite were the properties of interest. Filler size, filler loading, ball mixing time and dispersion agent concentration were selected as the parameters, or factors, expected to affect the composite properties. According to the Taguchi analysis, filler loading was the parameter with the greatest influence on tensile strength and neutron absorption, and ball mixing time had the least. The optimal conditions were determined using the mix-level Taguchi robust design method, and a polymer composite with a tensile strength of 6.33 MPa was successfully prepared. The composite was found to fully absorb a thermal neutron flux of 1.04 x 10^5 n/cm^2/s at a thickness of only 2 mm. In addition, the filler was also characterized by scanning electron microscopy (SEM) and elemental analysis (EDX). (Author)
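    Illustrative sketch (not part of the record): Taguchi analysis typically compares mean signal-to-noise (S/N) ratios per factor level; for larger-the-better responses such as tensile strength, S/N = -10·log10(mean(1/y^2)). A minimal helper with made-up repeated measurements is shown below.

```python
import numpy as np

def sn_larger_is_better(y):
    """Taguchi S/N ratio for a larger-the-better response (e.g. tensile strength)."""
    y = np.asarray(y, float)
    return -10.0 * np.log10(np.mean(1.0 / y**2))

# repeated measurements for one experimental run of the L18 array (made-up numbers):
# sn_larger_is_better([6.1, 6.4, 6.3])
# the level of each factor with the highest mean S/N across runs is taken as optimal
```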

  3. Robust Optimization-Based Generation Self-Scheduling under Uncertain Price

    Directory of Open Access Journals (Sweden)

    Xiao Luo

    2011-01-01

    Full Text Available This paper considers generation self-scheduling in electricity markets under uncertain prices. Based on the robust optimization (RO) methodology, a new self-scheduling model, which has a complicated max-min optimization structure, is set up. By using optimal duality theory, the proposed model is reformulated as ordinary quadratic and quadratic cone programming problems in the cases of box and ellipsoidal uncertainty, respectively. The IEEE 30-bus system is used to test the new model. Comparisons with other methods are made, and the sensitivity with respect to the uncertainty set is analyzed. Compared with existing uncertain self-scheduling approaches, the new method has two characteristics. First, it does not need the probability distribution of the random variables, only an estimated value and an uncertainty set for the power price. Second, the RO counterpart of the self-scheduling problem is a simple quadratic or quadratic cone program. This indicates that the reformulated problem can be solved by many ordinary optimization algorithms.

  4. Robust electromagnetically guided endoscopic procedure using enhanced particle swarm optimization for multimodal information fusion

    International Nuclear Information System (INIS)

    Luo, Xiongbiao; Wan, Ying; He, Xiangjian

    2015-01-01

    Purpose: Electromagnetically guided endoscopic procedure, which aims at accurately and robustly localizing the endoscope, involves multimodal sensory information during interventions. However, it remains challenging to integrate this information for precise and stable endoscopic guidance. To tackle such a challenge, this paper proposes a new framework on the basis of an enhanced particle swarm optimization method to effectively fuse this information for accurate and continuous endoscope localization. Methods: The authors use the particle swarm optimization method, which is a stochastic evolutionary computation algorithm, to effectively fuse the multimodal information including preoperative information (i.e., computed tomography images) as a frame of reference, endoscopic camera videos, and positional sensor measurements (i.e., electromagnetic sensor outputs). Since the evolutionary computation method usually limits its possible premature convergence and evolutionary factors, the authors introduce the current (endoscopic camera and electromagnetic sensor’s) observation to boost the particle swarm optimization and also adaptively update evolutionary parameters in accordance with spatial constraints and the current observation, resulting in advantageous performance in the enhanced algorithm. Results: The experimental results demonstrate that the authors’ proposed method provides a more accurate and robust endoscopic guidance framework than state-of-the-art methods. The average guidance accuracy of the authors’ framework was about 3.0 mm and 5.6° while the previous methods show at least 3.9 mm and 7.0°. The average position and orientation smoothness of their method was 1.0 mm and 1.6°, which is significantly better than that of the other methods (at least 2.0 mm and 2.6°). Additionally, the average visual quality of the endoscopic guidance was improved to 0.29. Conclusions: A robust electromagnetically guided endoscopy framework was
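    Illustrative sketch (not the authors' enhanced algorithm): at its core the approach relies on particle swarm optimization over the endoscope pose. The plain PSO loop below optimizes a generic fitness function; the paper's enhancement, omitted here, would adapt the inertia and acceleration coefficients from the current camera and sensor observation.

```python
import numpy as np

def pso_minimize(f, lower, upper, n_particles=30, iters=100,
                 w=0.7, c1=1.5, c2=1.5, seed=None):
    """Plain particle swarm optimization of a black-box cost f over a box domain."""
    rng = np.random.default_rng(seed)
    lower, upper = np.asarray(lower, float), np.asarray(upper, float)
    x = lower + rng.random((n_particles, lower.size)) * (upper - lower)
    v = np.zeros_like(x)
    pbest, pbest_f = x.copy(), np.array([f(p) for p in x])
    gbest = pbest[pbest_f.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)   # velocity update
        x = np.clip(x + v, lower, upper)                            # position update
        fx = np.array([f(p) for p in x])
        better = fx < pbest_f
        pbest[better], pbest_f[better] = x[better], fx[better]
        gbest = pbest[pbest_f.argmin()].copy()
    return gbest, pbest_f.min()
```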

  5. Robust electromagnetically guided endoscopic procedure using enhanced particle swarm optimization for multimodal information fusion.

    Science.gov (United States)

    Luo, Xiongbiao; Wan, Ying; He, Xiangjian

    2015-04-01

    Electromagnetically guided endoscopic procedure, which aims at accurately and robustly localizing the endoscope, involves multimodal sensory information during interventions. However, it still remains challenging in how to integrate these information for precise and stable endoscopic guidance. To tackle such a challenge, this paper proposes a new framework on the basis of an enhanced particle swarm optimization method to effectively fuse these information for accurate and continuous endoscope localization. The authors use the particle swarm optimization method, which is one of stochastic evolutionary computation algorithms, to effectively fuse the multimodal information including preoperative information (i.e., computed tomography images) as a frame of reference, endoscopic camera videos, and positional sensor measurements (i.e., electromagnetic sensor outputs). Since the evolutionary computation method usually limits its possible premature convergence and evolutionary factors, the authors introduce the current (endoscopic camera and electromagnetic sensor's) observation to boost the particle swarm optimization and also adaptively update evolutionary parameters in accordance with spatial constraints and the current observation, resulting in advantageous performance in the enhanced algorithm. The experimental results demonstrate that the authors' proposed method provides a more accurate and robust endoscopic guidance framework than state-of-the-art methods. The average guidance accuracy of the authors' framework was about 3.0 mm and 5.6° while the previous methods show at least 3.9 mm and 7.0°. The average position and orientation smoothness of their method was 1.0 mm and 1.6°, which is significantly better than the other methods at least with (2.0 mm and 2.6°). Additionally, the average visual quality of the endoscopic guidance was improved to 0.29. A robust electromagnetically guided endoscopy framework was proposed on the basis of an enhanced particle swarm

  6. Robust electromagnetically guided endoscopic procedure using enhanced particle swarm optimization for multimodal information fusion

    Energy Technology Data Exchange (ETDEWEB)

    Luo, Xiongbiao, E-mail: xluo@robarts.ca, E-mail: Ying.Wan@student.uts.edu.au [Robarts Research Institute, Western University, London, Ontario N6A 5K8 (Canada); Wan, Ying, E-mail: xluo@robarts.ca, E-mail: Ying.Wan@student.uts.edu.au; He, Xiangjian [School of Computing and Communications, University of Technology, Sydney, New South Wales 2007 (Australia)

    2015-04-15

    Purpose: Electromagnetically guided endoscopic procedure, which aims at accurately and robustly localizing the endoscope, involves multimodal sensory information during interventions. However, it still remains challenging in how to integrate these information for precise and stable endoscopic guidance. To tackle such a challenge, this paper proposes a new framework on the basis of an enhanced particle swarm optimization method to effectively fuse these information for accurate and continuous endoscope localization. Methods: The authors use the particle swarm optimization method, which is one of stochastic evolutionary computation algorithms, to effectively fuse the multimodal information including preoperative information (i.e., computed tomography images) as a frame of reference, endoscopic camera videos, and positional sensor measurements (i.e., electromagnetic sensor outputs). Since the evolutionary computation method usually limits its possible premature convergence and evolutionary factors, the authors introduce the current (endoscopic camera and electromagnetic sensor’s) observation to boost the particle swarm optimization and also adaptively update evolutionary parameters in accordance with spatial constraints and the current observation, resulting in advantageous performance in the enhanced algorithm. Results: The experimental results demonstrate that the authors’ proposed method provides a more accurate and robust endoscopic guidance framework than state-of-the-art methods. The average guidance accuracy of the authors’ framework was about 3.0 mm and 5.6° while the previous methods show at least 3.9 mm and 7.0°. The average position and orientation smoothness of their method was 1.0 mm and 1.6°, which is significantly better than the other methods at least with (2.0 mm and 2.6°). Additionally, the average visual quality of the endoscopic guidance was improved to 0.29. Conclusions: A robust electromagnetically guided endoscopy framework was

  7. SU-F-T-187: Quantifying Normal Tissue Sparing with 4D Robust Optimization of Intensity Modulated Proton Therapy

    Energy Technology Data Exchange (ETDEWEB)

    Newpower, M; Ge, S; Mohan, R [UT MD Anderson Cancer Center, Houston, TX (United States)

    2016-06-15

    Purpose: To report an approach to quantify the normal tissue sparing for 4D robustly-optimized versus PTV-optimized IMPT plans. Methods: We generated two sets of 90 DVHs from a patient’s 10-phase 4D CT set; one by conventional PTV-based optimization done in the Eclipse treatment planning system, and the other by an in-house robust optimization algorithm. The 90 DVHs were created for the following scenarios in each of the ten phases of the 4DCT: ± 5mm shift along x, y, z; ± 3.5% range uncertainty and a nominal scenario. A Matlab function written by Gay and Niemierko was modified to calculate EUD for each DVH for the following structures: esophagus, heart, ipsilateral lung and spinal cord. An F-test determined whether or not the variances of each structure’s DVHs were statistically different. Then a t-test determined if the average EUDs for each optimization algorithm were statistically significantly different. Results: T-test results showed each structure had a statistically significant difference in average EUD when comparing robust optimization versus PTV-based optimization. Under robust optimization all structures except the spinal cord received lower EUDs than PTV-based optimization. Using robust optimization the average EUDs decreased 1.45% for the esophagus, 1.54% for the heart and 5.45% for the ipsilateral lung. The average EUD to the spinal cord increased 24.86% but was still well below tolerance. Conclusion: This work has helped quantify a qualitative relationship noted earlier in our work: that robust optimization leads to plans with greater normal tissue sparing compared to PTV-based optimization. Except in the case of the spinal cord all structures received a lower EUD under robust optimization and these results are statistically significant. While the average EUD to the spinal cord increased to 25.06 Gy under robust optimization it is still well under the TD50 value of 66.5 Gy from Emami et al. Supported in part by the NCI U19 CA021239.
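    Illustrative sketch (not the study's modified Matlab function): the EUD referred to above is commonly computed with Niemierko's generalized EUD formula, EUD = (Σ v_i·D_i^a)^(1/a), from a differential DVH. A small Python equivalent with made-up bins is shown below.

```python
import numpy as np

def gEUD(doses, volumes, a):
    """Generalized equivalent uniform dose from a differential DVH.

    doses   : dose bin centers (Gy)
    volumes : volume in each bin (any units; normalized to fractions internally)
    a       : tissue-specific parameter (large for serial organs, ~1 for parallel)
    """
    d = np.asarray(doses, float)
    v = np.asarray(volumes, float)
    v = v / v.sum()
    return float(np.sum(v * d**a) ** (1.0 / a))

# e.g. gEUD([10, 20, 30], [0.2, 0.5, 0.3], a=8)   # made-up DVH bins
```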

  8. An opinion formation based binary optimization approach for feature selection

    Science.gov (United States)

    Hamedmoghadam, Homayoun; Jalili, Mahdi; Yu, Xinghuo

    2018-02-01

    This paper proposes a novel optimization method based on opinion formation in complex network systems. The proposed optimization technique mimics the human-human interaction mechanism, based on a mathematical model derived from the social sciences. Our method encodes a subset of selected features as the opinion of an artificial agent and simulates the opinion formation process among a population of agents to solve the feature selection problem. The agents interact over an underlying interaction network structure and reach consensus in their opinions while finding better solutions to the problem. A number of mechanisms are employed to avoid getting trapped in local minima. We compare the performance of the proposed method with a number of classical population-based optimization methods and a state-of-the-art opinion formation based method. Our experiments on a number of high-dimensional datasets show that the proposed algorithm outperforms the others.

  9. Multi-objective robust optimization method for the modified epoxy resin sheet molding compounds of the impeller

    Directory of Open Access Journals (Sweden)

    Xiaozhang Qu

    2016-07-01

    Full Text Available A kind of modified epoxy resin sheet molding compound (SMC) for the impeller has been designed. Tests show that the non-metal impeller has better environmental aging performance, but a waterproof treatment must be included in the design. In order to improve the stability of the impeller vibration design, the influence of uncertainty factors is considered, and a multi-objective robust optimization method is proposed to reduce the weight of the impeller. Firstly, based on the fluid-structure interaction, the analysis model of the impeller vibration is constructed. Secondly, the optimal approximate model of the impeller is constructed by using the Latin hypercube and radial basis function, and the fitting and optimization accuracy of the approximate model is improved by increasing the sample points. Finally, the micro multi-objective genetic algorithm is applied to the robust optimization of the approximate model, and the Monte Carlo simulation and Sobol sampling techniques are used for reliability analysis. By comparing the deterministic results with those at different sigma levels and for different materials, the multi-objective optimization of the SMC molding impeller is shown to meet the requirements of engineering stability and light weight. The effectiveness of the proposed multi-objective robust optimization method is verified by the error analysis. After the SMC molding and the robust optimization of the impeller, the optimized rate reached 42.5%, which greatly improved the economic benefit and greatly reduced the vibration of the ventilation system.
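    Illustrative sketch (not the paper's model): the approximate model described above can be reproduced in outline with Latin hypercube sampling plus a radial-basis-function surrogate, here using SciPy; the simulate callable, bounds and sample count are placeholders for the fluid-structure impeller analysis.

```python
import numpy as np
from scipy.stats import qmc
from scipy.interpolate import RBFInterpolator

def build_surrogate(simulate, lower, upper, n_samples=40, seed=0):
    """Sample the design space with a Latin hypercube, run the (expensive)
    simulation at each point, and fit a radial-basis-function surrogate."""
    lower, upper = np.asarray(lower, float), np.asarray(upper, float)
    sampler = qmc.LatinHypercube(d=lower.size, seed=seed)
    X = qmc.scale(sampler.random(n_samples), lower, upper)
    y = np.array([simulate(x) for x in X])     # e.g. impeller mass or vibration level
    return RBFInterpolator(X, y)

# surrogate = build_surrogate(fea_model, lower=[1.0, 2.0], upper=[5.0, 8.0])
# surrogate([[2.5, 4.0]])   # cheap prediction used inside the genetic algorithm
```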

  10. Exploratory Study of 4D versus 3D Robust Optimization in Intensity Modulated Proton Therapy for Lung Cancer

    Energy Technology Data Exchange (ETDEWEB)

    Liu, Wei, E-mail: Liu.Wei@mayo.edu [Department of Radiation Oncology, Mayo Clinic Arizona, Phoenix, Arizona (United States); Schild, Steven E. [Department of Radiation Oncology, Mayo Clinic Arizona, Phoenix, Arizona (United States); Chang, Joe Y.; Liao, Zhongxing [Department of Radiation Oncology, the University of Texas MD Anderson Cancer Center, Houston, Texas (United States); Chang, Yu-Hui [Division of Health Sciences Research, Mayo Clinic Arizona, Phoenix, Arizona (United States); Wen, Zhifei [Department of Radiation Physics, the University of Texas MD Anderson Cancer Center, Houston, Texas (United States); Shen, Jiajian; Stoker, Joshua B.; Ding, Xiaoning; Hu, Yanle [Department of Radiation Oncology, Mayo Clinic Arizona, Phoenix, Arizona (United States); Sahoo, Narayan [Department of Radiation Physics, the University of Texas MD Anderson Cancer Center, Houston, Texas (United States); Herman, Michael G. [Department of Radiation Oncology, Mayo Clinic Rochester, Rochester, Minnesota (United States); Vargas, Carlos; Keole, Sameer; Wong, William; Bues, Martin [Department of Radiation Oncology, Mayo Clinic Arizona, Phoenix, Arizona (United States)

    2016-05-01

    Purpose: The purpose of this study was to compare the impact of uncertainties and interplay on 3-dimensional (3D) and 4D robustly optimized intensity modulated proton therapy (IMPT) plans for lung cancer in an exploratory methodology study. Methods and Materials: IMPT plans were created for 11 nonrandomly selected non-small cell lung cancer (NSCLC) cases: 3D robustly optimized plans on average CTs with internal gross tumor volume density overridden to irradiate internal target volume, and 4D robustly optimized plans on 4D computed tomography (CT) to irradiate clinical target volume (CTV). Regular fractionation (66 Gy [relative biological effectiveness; RBE] in 33 fractions) was considered. In 4D optimization, the CTV of individual phases received nonuniform doses to achieve a uniform cumulative dose. The root-mean-square dose-volume histograms (RVH) measured the sensitivity of the dose to uncertainties, and the areas under the RVH curve (AUCs) were used to evaluate plan robustness. Dose evaluation software modeled time-dependent spot delivery to incorporate interplay effect with randomized starting phases of each field per fraction. Dose-volume histogram (DVH) indices comparing CTV coverage, homogeneity, and normal tissue sparing were evaluated using Wilcoxon signed rank test. Results: 4D robust optimization plans led to smaller AUC for CTV (14.26 vs 18.61, respectively; P=.001), better CTV coverage (Gy [RBE]) (D95% CTV: 60.6 vs 55.2, respectively; P=.001), and better CTV homogeneity (D5%-D95% CTV: 10.3 vs 17.7, respectively; P=.002) in the face of uncertainties. With interplay effect considered, 4D robust optimization produced plans with better target coverage (D95% CTV: 64.5 vs 63.8, respectively; P=.0068), comparable target homogeneity, and comparable normal tissue protection. The benefits from 4D robust optimization were most obvious for the 2 typical stage III lung cancer patients. Conclusions: Our exploratory methodology study showed

  11. Comparison of global optimization approaches for robust calibration of hydrologic model parameters

    Science.gov (United States)

    Jung, I. W.

    2015-12-01

    Robustness of the calibrated parameters of hydrologic models is necessary to provide a reliable prediction of future watershed behavior under varying climate conditions. This study investigated calibration performance according to the length of the calibration period, the objective function, the hydrologic model structure and the optimization method. To do this, the combination of three global optimization methods (i.e. SCE-UA, Micro-GA, and DREAM) and four hydrologic models (i.e. SAC-SMA, GR4J, HBV, and PRMS) was tested with different calibration periods and objective functions. Our results showed that the three global optimization methods provided similar calibration performance under different calibration periods, objective functions, and hydrologic models. However, using the index of agreement, the normalized root mean square error, or the Nash-Sutcliffe efficiency as the objective function showed better performance than using the correlation coefficient or percent bias. Calibration performance for calibration periods ranging from one to seven years was hard to generalize, because the four hydrologic models have different levels of complexity and different years carry different information content in the hydrological observations. Acknowledgements: This research was supported by a grant (14AWMP-B082564-01) from the Advanced Water Management Research Program funded by the Ministry of Land, Infrastructure and Transport of the Korean government.
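    Illustrative sketch (not from the record): two of the objective functions named above, the Nash-Sutcliffe efficiency and percent bias, are short enough to write out directly; the sign convention for percent bias below is the common one in which positive values indicate underestimation.

```python
import numpy as np

def nash_sutcliffe(obs, sim):
    """Nash-Sutcliffe efficiency: 1 - SSE / variance of the observations (1 is perfect)."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def percent_bias(obs, sim):
    """Percent bias of simulated versus observed flows."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 100.0 * np.sum(obs - sim) / np.sum(obs)

# nash_sutcliffe([1.0, 2.0, 3.0], [1.1, 1.9, 3.2]), percent_bias([1.0, 2.0, 3.0], [1.1, 1.9, 3.2])
```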

  12. Optimization of controllability and robustness of complex networks by edge directionality

    Science.gov (United States)

    Liang, Man; Jin, Suoqin; Wang, Dingjie; Zou, Xiufen

    2016-09-01

    Recently, controllability of complex networks has attracted enormous attention in various fields of science and engineering. How to optimize structural controllability has also become a significant issue. Previous studies have shown that an appropriate directional assignment can improve structural controllability; however, the evolution of the structural controllability of complex networks under attacks and cascading has always been ignored. To address this problem, this study proposes a new edge orientation method (NEOM) based on residual degree that changes the link direction while conserving topology and directionality. By comparing the results with those of previous methods in two random graph models and several realistic networks, our proposed approach is demonstrated to be an effective and competitive method for improving the structural controllability of complex networks. Moreover, numerical simulations show that our method is near-optimal in optimizing structural controllability. Strikingly, compared to the original network, our method maintains the structural controllability of the network under attacks and cascading, indicating that the NEOM can also enhance the robustness of controllability of networks. These results alter the view of the nature of controllability in complex networks, change the understanding of structural controllability and affect the design of network models to control such networks.

  13. L1-norm kernel discriminant analysis via Bayes error bound optimization for robust feature extraction.

    Science.gov (United States)

    Zheng, Wenming; Lin, Zhouchen; Wang, Haixian

    2014-04-01

    A novel discriminant analysis criterion is derived in this paper under the theoretical framework of Bayes optimality. In contrast to the conventional Fisher discriminant criterion, the major novelty of the proposed one is the use of the L1 norm rather than the L2 norm, which makes it less sensitive to outliers. With the L1-norm discriminant criterion, we propose a new linear discriminant analysis (L1-LDA) method for the linear feature extraction problem. To solve the L1-LDA optimization problem, we propose an efficient iterative algorithm in which a novel surrogate convex function is introduced such that the optimization problem in each iteration reduces to a convex programming problem with a guaranteed closed-form solution. Moreover, we generalize the L1-LDA method to deal with nonlinear robust feature extraction problems via the kernel trick, and hence propose the L1-norm kernel discriminant analysis (L1-KDA) method. Extensive experiments on simulated and real data sets are conducted to evaluate the effectiveness of the proposed method in comparison with state-of-the-art methods.

  14. The Study of an Optimal Robust Design and Adjustable Ordering Strategies in the HSCM.

    Science.gov (United States)

    Liao, Hung-Chang; Chen, Yan-Kwang; Wang, Ya-huei

    2015-01-01

    The purpose of this study was to establish a hospital supply chain management (HSCM) model in which three kinds of drugs in the same class and with the same indications were used in creating an optimal robust design and adjustable ordering strategies to deal with a drug shortage. The main assumption was that although each doctor has his/her own prescription pattern, when there is a shortage of a particular drug, the doctor may choose a similar drug with the same indications as a replacement. Four steps were used to construct and analyze the HSCM model. The computation technology used included a simulation, a neural network (NN), and a genetic algorithm (GA). The mathematical methods of the simulation and the NN were used to construct a relationship between the factor levels and performance, while the GA was used to obtain the optimal combination of factor levels from the NN. A sensitivity analysis was also used to assess the change in the optimal factor levels. Adjustable ordering strategies were also developed to prevent drug shortages.

  15. APPLYING ROBUST RANKING METHOD IN TWO PHASE FUZZY OPTIMIZATION LINEAR PROGRAMMING PROBLEMS (FOLPP)

    Directory of Open Access Journals (Sweden)

    Monalisha Pattnaik

    2014-12-01

    Full Text Available Background: This paper explores solutions to fuzzy optimization linear programming problems (FOLPP) in which some parameters are fuzzy numbers. In practice, there are many problems in which all decision parameters are fuzzy numbers, and such problems are usually solved by either probabilistic programming or multi-objective programming methods. Methods: In this paper, using the concept of comparison of fuzzy numbers, a very effective method is introduced for solving these problems. The paper extends the linear programming problem to a fuzzy environment. Under the stated assumptions, the optimal solution can still be obtained with a two-phase simplex-based method in the fuzzy environment. The fuzzy decision variables are initially generated and then solved and improved sequentially using the fuzzy decision approach, with a robust ranking technique introduced to compare the fuzzy numbers. Results and conclusions: The model is illustrated with an application, and a post-optimal analysis approach is presented. The proposed procedure was programmed with MATLAB (R2009a) software to plot the four-dimensional slice diagram for the application. Finally, a numerical example is presented to illustrate the effectiveness of the theoretical results and to gain additional managerial insights.
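
    The robust ranking technique used to compare fuzzy numbers in work of this kind is typically the Yager-style index R(A) = integral from 0 to 1 of 0.5*(a_alpha^L + a_alpha^U) d(alpha), the integral of the midpoints of the alpha-cuts. The Python sketch below assumes triangular fuzzy numbers, for which the index has the closed form (a + 2b + c)/4; the coefficients are illustrative and not taken from the paper's example.

      def robust_rank_triangular(a, b, c):
          """Yager-style robust ranking index of a triangular fuzzy number (a, b, c):
          R = integral_0^1 0.5*(a_alpha^L + a_alpha^U) d(alpha) = (a + 2b + c) / 4."""
          return (a + 2 * b + c) / 4.0

      # Defuzzify fuzzy objective coefficients before handing the crisp LP to a solver.
      fuzzy_costs = [(2, 3, 4), (4, 5, 8)]            # illustrative triangular numbers
      crisp_costs = [robust_rank_triangular(*t) for t in fuzzy_costs]
      print(crisp_costs)                              # [3.0, 5.5]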

  16. Aerodynamic design applying automatic differentiation and using robust variable fidelity optimization

    Science.gov (United States)

    Takemiya, Tetsushi

    , and that (2) the AMF terminates optimization erroneously when the optimization problems have constraints. The first problem is due to inaccuracy in computing derivatives in the AMF, and the second problem is due to erroneous treatment of the trust region ratio, which sets the size of the domain for an optimization in the AMF. In order to solve the first problem of the AMF, the automatic differentiation (AD) technique, which reads the codes of analysis models and automatically generates new derivative codes based on some mathematical rules, is applied. If derivatives are computed with the generated derivative code, they are analytical, and the required computational time is independent of the number of design variables, which is very advantageous for realistic aerospace engineering problems. However, if analysis models implement iterative computations such as computational fluid dynamics (CFD), which solves systems of partial differential equations iteratively, computing derivatives through the AD requires a massive memory size. The author solved this deficiency by modifying the AD approach and developing a more efficient implementation with CFD, and successfully applied the AD to general CFD software. In order to solve the second problem of the AMF, the governing equation of the trust region ratio, which is very strict against the violation of constraints, is modified so that it can accept the violation of constraints within some tolerance. By accepting violations of constraints during the optimization process, the AMF can continue optimization without terminating prematurely and eventually find the true optimum design point. With these modifications, the AMF is referred to as "Robust AMF," and it is applied to airfoil and wing aerodynamic design problems using Euler CFD software. The former problem has 21 design variables, and the latter 64. In both problems, derivatives computed with the proposed AD method are first compared with those computed with the finite

  17. Optimal robust stabilizer design based on UPFC for interconnected power systems considering time delay

    Directory of Open Access Journals (Sweden)

    Koofigar Hamid Reza

    2017-09-01

    Full Text Available A robust auxiliary wide-area damping controller is proposed for a unified power flow controller (UPFC). The mixed H2/H∞ problem with regional pole placement, solved via linear matrix inequalities (LMI), is applied for controller design. Based on modal analysis, the optimal wide-area input signals for the controller are selected. The time delay of the input signals, due to the electrical distance from the UPFC location, is taken into account in the design procedure. The proposed controller is applied to a multi-machine interconnected power system from the IRAN power grid. It is shown that both transient and dynamic stability are significantly improved despite different disturbances and loading conditions.

  18. Application of Iterative Robust Model-based Optimal Experimental Design for the Calibration of Biocatalytic Models

    DEFF Research Database (Denmark)

    Van Daele, Timothy; Gernaey, Krist V.; Ringborg, Rolf Hoffmeyer

    2017-01-01

    The aim of model calibration is to estimate unique parameter values from available experimental data, here applied to a biocatalytic process. The traditional approach of first gathering data followed by performing a model calibration is inefficient, since the information gathered during...... experimentation is not actively used to optimise the experimental design. By applying an iterative robust model-based optimal experimental design, the limited amount of data collected is used to design additional informative experiments. The algorithm is used here to calibrate the initial reaction rate of an ω......-transaminase catalysed reaction in a more accurate way. The parameter confidence region estimated from the Fisher Information Matrix is compared with the likelihood confidence region, which is a more accurate, but also a computationally more expensive method. As a result, an important deviation between both approaches...

  19. Stochastic algorithm for channel optimized vector quantization: application to robust narrow-band speech coding

    International Nuclear Information System (INIS)

    Bouzid, M.; Benkherouf, H.; Benzadi, K.

    2011-01-01

    In this paper, we propose a stochastic joint source-channel scheme developed for efficient and robust encoding of spectral speech LSF parameters. The encoding system, named LSF-SSCOVQ-RC, is an LSF encoding scheme based on a reduced-complexity stochastic split vector quantizer optimized for a noisy channel. For transmissions over a noisy channel, we first show that our LSF-SSCOVQ-RC encoder outperforms the conventional LSF encoder designed with the split vector quantizer. We then apply the LSF-SSCOVQ-RC encoder (with weighted distance) to the robust encoding of the LSF parameters of the 2.4 Kbits/s MELP speech coder operating over a noisy/noiseless channel. The simulation results show that the proposed LSF encoder, incorporated in the MELP, ensures better performance than the original MELP MSVQ of 25 bits/frame, especially when the transmission channel is highly disturbed. Indeed, we show that the LSF-SSCOVQ-RC yields a significant improvement in LSF encoding performance by ensuring reliable transmissions over a noisy channel.
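
    The defining ingredient of a channel-optimized vector quantizer of this kind is that the encoder minimizes the expected end-to-end distortion over the channel transition probabilities rather than the plain quantization error. The Python sketch below shows only that generic encoding rule for a binary symmetric channel; the codebook, bit-error rate and index size are illustrative assumptions, not the LSF-SSCOVQ-RC design described in the record.

      import numpy as np

      def bsc_index_transitions(n_bits, eps):
          """P[i, j]: probability of receiving index j when i was sent over a binary
          symmetric channel with bit-error rate eps."""
          size = 1 << n_bits
          P = np.empty((size, size))
          for i in range(size):
              for j in range(size):
                  d = bin(i ^ j).count("1")            # Hamming distance of the indices
                  P[i, j] = eps ** d * (1 - eps) ** (n_bits - d)
          return P

      def covq_encode(x, codebook, P):
          """Channel-optimized rule: choose i minimizing sum_j P[i, j] * ||x - c_j||^2."""
          sq_dists = np.sum((codebook - x) ** 2, axis=1)
          return int(np.argmin(P @ sq_dists))

      rng = np.random.default_rng(0)
      codebook = rng.normal(size=(4, 2))               # 4 codewords, 2-D toy vectors
      print(covq_encode(rng.normal(size=2), codebook, bsc_index_transitions(2, 0.05)))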

  20. Energy-scales convergence for optimal and robust quantum transport in photosynthetic complexes

    Energy Technology Data Exchange (ETDEWEB)

    Mohseni, M. [Google Research, Venice, California 90291 (United States); Research Laboratory of Electronics, Massachusetts Institute of Technology, Cambridge, Massachusetts 02139 (United States); Shabani, A. [Department of Chemistry, Princeton University, Princeton, New Jersey 08544 (United States); Department of Chemistry, University of California at Berkeley, Berkeley, California 94720 (United States); Lloyd, S. [Department of Mechanical Engineering, Massachusetts Institute of Technology, Cambridge, Massachusetts 02139 (United States); Rabitz, H. [Department of Chemistry, Princeton University, Princeton, New Jersey 08544 (United States)

    2014-01-21

    Underlying physical principles for the high efficiency of excitation energy transfer in light-harvesting complexes are not fully understood. Notably, the degree of robustness of these systems for transporting energy is not known considering their realistic interactions with vibrational and radiative environments within the surrounding solvent and scaffold proteins. In this work, we employ an efficient technique to estimate energy transfer efficiency of such complex excitonic systems. We observe that the dynamics of the Fenna-Matthews-Olson (FMO) complex leads to optimal and robust energy transport due to a convergence of energy scales among all important internal and external parameters. In particular, we show that the FMO energy transfer efficiency is optimum and stable with respect to important parameters of environmental interactions including reorganization energy λ, bath frequency cutoff γ, temperature T, and bath spatial correlations. We identify the ratio k_BλT/(ℏγg) as a single key parameter governing quantum transport efficiency, where g is the average excitonic energy gap.
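
    For reference, the key dimensionless ratio quoted in the abstract can be typeset as follows; the symbol χ is introduced here purely as a label, and the reading that the convergence of energy scales corresponds to the numerator and denominator being comparable is an interpretation of the abstract, not a statement from the paper.

      % k_B: Boltzmann constant, \lambda: reorganization energy, T: temperature,
      % \hbar: reduced Planck constant, \gamma: bath frequency cutoff, g: mean excitonic gap
      \[
        \chi \;=\; \frac{k_{\mathrm{B}}\,\lambda\,T}{\hbar\,\gamma\,g}
      \]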

  1. Energy-scales convergence for optimal and robust quantum transport in photosynthetic complexes

    International Nuclear Information System (INIS)

    Mohseni, M.; Shabani, A.; Lloyd, S.; Rabitz, H.

    2014-01-01

    Underlying physical principles for the high efficiency of excitation energy transfer in light-harvesting complexes are not fully understood. Notably, the degree of robustness of these systems for transporting energy is not known considering their realistic interactions with vibrational and radiative environments within the surrounding solvent and scaffold proteins. In this work, we employ an efficient technique to estimate energy transfer efficiency of such complex excitonic systems. We observe that the dynamics of the Fenna-Matthews-Olson (FMO) complex leads to optimal and robust energy transport due to a convergence of energy scales among all important internal and external parameters. In particular, we show that the FMO energy transfer efficiency is optimum and stable with respect to important parameters of environmental interactions including reorganization energy λ, bath frequency cutoff γ, temperature T, and bath spatial correlations. We identify the ratio k_BλT/(ℏγg) as a single key parameter governing quantum transport efficiency, where g is the average excitonic energy gap.

  2. Optimal placement and decentralized robust vibration control for spacecraft smart solar panel structures

    International Nuclear Information System (INIS)

    Jiang, Jian-ping; Li, Dong-xu

    2010-01-01

    The decentralized robust vibration control with collocated piezoelectric actuator and strain sensor pairs is considered in this paper for spacecraft solar panel structures. Each actuator is driven individually by the output of the corresponding sensor so that only local feedback control is implemented, with each actuator, sensor and controller operating independently. Firstly, an optimal placement method for the location of the collocated piezoelectric actuator and strain gauge sensor pairs is developed based on the degree of observability and controllability indices for solar panel structures. Secondly, a decentralized robust H∞ controller is designed to suppress the vibration induced by external disturbance. Finally, a numerical comparison between centralized and decentralized control systems is performed in order to investigate their effectiveness in suppressing vibration of the smart solar panel. The simulation results show that the vibration can be significantly suppressed with permitted actuator voltages by the controllers. The decentralized control system has almost the same disturbance attenuation level as the centralized control system, with slightly higher control voltages. More importantly, the decentralized controller composed of four third-order systems is a better practical implementation than a high-order centralized controller.

  3. Robust and efficient multi-frequency temporal phase unwrapping: optimal fringe frequency and pattern sequence selection.

    Science.gov (United States)

    Zhang, Minliang; Chen, Qian; Tao, Tianyang; Feng, Shijie; Hu, Yan; Li, Hui; Zuo, Chao

    2017-08-21

    Temporal phase unwrapping (TPU) is an essential algorithm in fringe projection profilometry (FPP), especially when measuring complex objects with discontinuities and isolated surfaces. Among others, the multi-frequency TPU has been proven to be the most reliable algorithm in the presence of noise. For a practical FPP system, in order to achieve an accurate, efficient, and reliable measurement, one needs to make wise choices about three key experimental parameters: the highest fringe frequency, the phase-shifting steps, and the fringe pattern sequence. However, there has been very little research on how to optimize these parameters quantitatively, especially considering all three aspects from a theoretical and analytical perspective simultaneously. In this work, we propose a new scheme to determine simultaneously the optimal fringe frequency, phase-shifting steps and pattern sequence under multi-frequency TPU, robustly achieving high-accuracy measurement with a minimum number of fringe frames. Firstly, noise models regarding phase-shifting algorithms as well as 3-D coordinates are established under a projector defocusing condition, which leads to the optimal highest fringe frequency for an FPP system. Then, a new concept termed frequency-to-frame ratio (FFR) that evaluates the magnitude of the contribution of each frame for TPU is defined, on which an optimal phase-shifting combination scheme is proposed. Finally, a judgment criterion is established, which can be used to judge whether the ratio between adjacent fringe frequencies is conducive to stably and efficiently unwrapping the phase. The proposed method provides a simple and effective theoretical framework to improve the accuracy, efficiency, and robustness of a practical FPP system in actual measurement conditions. The correctness of the derived models as well as the validity of the proposed schemes have been verified through extensive simulations and experiments. Based on a normal monocular 3-D FPP hardware system

  4. Adaptive GSA-based optimal tuning of PI controlled servo systems with reduced process parametric sensitivity, robust stability and controller robustness.

    Science.gov (United States)

    Precup, Radu-Emil; David, Radu-Codrut; Petriu, Emil M; Radac, Mircea-Bogdan; Preitl, Stefan

    2014-11-01

    This paper suggests a new generation of optimal PI controllers for a class of servo systems characterized by saturation and dead zone static nonlinearities and second-order models with an integral component. The objective functions are expressed as the integral of time multiplied by absolute error plus the weighted sum of the integrals of output sensitivity functions of the state sensitivity models with respect to two process parametric variations. The PI controller tuning conditions applied to a simplified linear process model involve a single design parameter specific to the extended symmetrical optimum (ESO) method which offers the desired tradeoff to several control system performance indices. An original back-calculation and tracking anti-windup scheme is proposed in order to prevent the integrator wind-up and to compensate for the dead zone nonlinearity of the process. The minimization of the objective functions is carried out in the framework of optimization problems with inequality constraints which guarantee the robust stability with respect to the process parametric variations and the controller robustness. An adaptive gravitational search algorithm (GSA) solves the optimization problems focused on the optimal tuning of the design parameter specific to the ESO method and of the anti-windup tracking gain. A tuning method for PI controllers is proposed as an efficient approach to the design of resilient control systems. The tuning method and the PI controllers are experimentally validated by the adaptive GSA-based tuning of PI controllers for the angular position control of a laboratory servo system.

  5. Robust Trajectory Optimization of a Ski Jumper for Uncertainty Influence and Safety Quantification

    Directory of Open Access Journals (Sweden)

    Patrick Piprek

    2018-02-01

    Full Text Available This paper deals with the development of a robust optimal control framework for a previously developed multi-body ski jumper simulation model by the authors. This framework is used to model uncertainties acting on the jumper during his jump, e.g., wind or mass, to enhance the performance, but also to increase the fairness and safety of the competition. For the uncertainty modeling the method of generalized polynomial chaos together with the discrete expansion by stochastic collocation is applied: This methodology offers a very flexible framework to model multiple uncertainties using a small number of required optimizations to calculate an uncertain trajectory. The results are then compared to the results of the Latin-Hypercube sampling method to show the correctness of the applied methods. Finally, the results are examined with respect to two major metrics: First, the influence of the uncertainties on the jumper, his positioning with respect to the air, and his maximal achievable flight distance are examined. Then, the results are used in a further step to quantify the safety of the jumper.

  6. An efficient global energy optimization approach for robust 3D plane segmentation of point clouds

    Science.gov (United States)

    Dong, Zhen; Yang, Bisheng; Hu, Pingbo; Scherer, Sebastian

    2018-03-01

    Automatic 3D plane segmentation is necessary for many applications including point cloud registration, building information model (BIM) reconstruction, simultaneous localization and mapping (SLAM), and point cloud compression. However, most of the existing 3D plane segmentation methods still suffer from low precision and recall, and inaccurate and incomplete boundaries, especially for low-quality point clouds collected by RGB-D sensors. To overcome these challenges, this paper formulates the plane segmentation problem as a global energy optimization because it is robust to high levels of noise and clutter. First, the proposed method divides the raw point cloud into multiscale supervoxels, and considers planar supervoxels and individual points corresponding to nonplanar supervoxels as basic units. Then, an efficient hybrid region growing algorithm is utilized to generate initial plane set by incrementally merging adjacent basic units with similar features. Next, the initial plane set is further enriched and refined in a mutually reinforcing manner under the framework of global energy optimization. Finally, the performances of the proposed method are evaluated with respect to six metrics (i.e., plane precision, plane recall, under-segmentation rate, over-segmentation rate, boundary precision, and boundary recall) on two benchmark datasets. Comprehensive experiments demonstrate that the proposed method obtained good performances both in high-quality TLS point clouds (i.e., http://SEMANTIC3D.NET)

  7. Uncertainty quantification-based robust aerodynamic optimization of laminar flow nacelle

    Science.gov (United States)

    Xiong, Neng; Tao, Yang; Liu, Zhiyong; Lin, Jun

    2018-05-01

    The aerodynamic performance of a laminar flow nacelle is highly sensitive to uncertain working conditions, especially surface roughness. An efficient robust aerodynamic optimization method based on non-deterministic computational fluid dynamics (CFD) simulation and the Efficient Global Optimization (EGO) algorithm was employed. A non-intrusive polynomial chaos method is used in conjunction with an existing well-verified CFD module to quantify the uncertainty propagation in the flow field. This paper investigates the roughness modeling behavior with the γ-Reθ shear stress transport model, which accounts for flow transition and surface roughness effects. The roughness effects are modeled to simulate sand grain roughness. A Class-Shape Transformation-based parametric description of the nacelle contour, as part of an automatic design evaluation process, is presented. A Design-of-Experiments (DoE) study was performed and a surrogate model was built with the Kriging method. The new nacelle design process demonstrates that significant improvements in both the mean and the variance of the efficiency are achieved, and the proposed method can be applied successfully to laminar flow nacelle design.

   8. Robust Optimization of the Self-scheduling and Market Involvement for an Electricity Producer

    KAUST Repository

    Lima, Ricardo

    2015-01-01

    This work addresses the optimization under uncertainty of the self-scheduling, forward contracting, and pool involvement of an electricity producer operating a mixed power generation station, which combines thermal, hydro and wind sources, using a two-stage adaptive robust optimization approach. In this problem the wind power production and the electricity pool price are considered to be uncertain and are described by uncertainty convex sets. Two variants of a constraint generation algorithm are proposed, namely a primal and a dual version, and they are used to solve two case studies based on two different producers. Their market strategies are investigated for three different scenarios, corresponding to as many instances of electricity price forecasts. The effect of the producers' approach, whether conservative or more risk prone, is also investigated by solving each instance for multiple values of the so-called budget parameter. It was possible to conclude that this parameter markedly influences the producers' strategy in terms of scheduling, profit, forward contracting, and pool involvement. Regarding the computational results, these show that for some instances the two variants of the algorithm have a similar performance, while for a particular subset of them one variant has a clear superiority.
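
    Constraint generation for two-stage adaptive robust optimization alternates between a relaxed master problem over the first-stage decisions and an oracle that finds the worst-case uncertainty realization for the current decision. The loop below is only a generic sketch of that idea; master_solve and worst_case_oracle are placeholder callables, and none of the paper's unit commitment model, uncertainty sets or primal/dual variants are reproduced.

      def constraint_generation(master_solve, worst_case_oracle, tol=1e-6, max_iter=50):
          """Generic two-stage robust optimization loop.
          master_solve(scenarios) -> (x, lower_bound): first-stage decision against the
                                     scenarios generated so far.
          worst_case_oracle(x)    -> (scenario, upper_bound): worst-case realization and
                                     its total cost for the fixed first-stage decision x."""
          scenarios, lb, ub = [], -float("inf"), float("inf")
          x = None
          for _ in range(max_iter):
              x, lb = master_solve(scenarios)        # optimistic bound from the relaxation
              scen, ub_x = worst_case_oracle(x)      # pessimistic bound for this decision
              ub = min(ub, ub_x)
              if ub - lb <= tol:                     # bounds meet: x is robust optimal
                  break
              scenarios.append(scen)                 # add the violating scenario and repeat
          return x, lb, ub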

  9. A robust optimization model for green regional logistics network design with uncertainty in future logistics demand

    Directory of Open Access Journals (Sweden)

    Dezhi Zhang

    2015-12-01

    Full Text Available This article proposes a new model to address the design problem of a sustainable regional logistics network with uncertainty in future logistics demand. In the proposed model, the future logistics demand is assumed to be a random variable with a given probability distribution. A set of chance constraints with regard to logistics service capacity and environmental impacts is incorporated to account for the sustainability of the logistics network design. The proposed model is formulated as a two-stage robust optimization problem. The first-stage problem, before the realization of future logistics demand, aims to minimize a risk-averse objective by determining the optimal location and size of logistics parks, with CO2 emission taxes taken into consideration. The second stage, after the uncertain logistics demand has been realized, is a scenario-based stochastic logistics service route choice equilibrium problem. A heuristic solution algorithm, which combines a penalty function method, a genetic algorithm, and a Gauss–Seidel decomposition approach, is developed to solve the proposed model. An illustrative example is given to show the application of the proposed model and solution algorithm. The findings show that the total social welfare of the logistics system depends very much on the level of uncertainty in future logistics demand, the capital budget for logistics parks, and the confidence levels of the chance constraints.
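
    A chance constraint of the kind mentioned here, P(demand <= capacity) >= alpha, has a simple deterministic equivalent when the uncertain demand is modelled as normally distributed: capacity >= mu + z_alpha * sigma, with z_alpha the standard normal quantile. The snippet below illustrates only that textbook reformulation; the service level, mean and standard deviation are made-up numbers, and the paper's full model additionally couples such constraints with emission limits and route-choice equilibrium.

      from scipy.stats import norm

      def min_capacity(mu, sigma, alpha=0.95):
          """Deterministic equivalent of P(demand <= capacity) >= alpha for
          demand ~ Normal(mu, sigma^2): capacity >= mu + z_alpha * sigma."""
          return mu + norm.ppf(alpha) * sigma

      print(round(min_capacity(1000.0, 150.0, 0.95), 1))   # ~1246.7 units of capacity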

  10. Robust Optimization of the Self-scheduling and Market Involvement for an Electricity Producer

    KAUST Repository

    Lima, Ricardo

    2015-01-07

    This work addresses the optimization under uncertainty of the self-scheduling, forward contracting, and pool involvement of an electricity producer operating a mixed power generation station, which combines thermal, hydro and wind sources, using a two-stage adaptive robust optimization approach. In this problem the wind power production and the electricity pool price are considered to be uncertain and are described by uncertainty convex sets. Two variants of a constraint generation algorithm are proposed, namely a primal and a dual version, and they are used to solve two case studies based on two different producers. Their market strategies are investigated for three different scenarios, corresponding to as many instances of electricity price forecasts. The effect of the producers' approach, whether conservative or more risk prone, is also investigated by solving each instance for multiple values of the so-called budget parameter. It was possible to conclude that this parameter markedly influences the producers' strategy in terms of scheduling, profit, forward contracting, and pool involvement. Regarding the computational results, these show that for some instances the two variants of the algorithm have a similar performance, while for a particular subset of them one variant has a clear superiority.

  11. SU-E-J-137: Incorporating Tumor Regression Into Robust Plan Optimization for Head and Neck Radiotherapy

    International Nuclear Information System (INIS)

    Zhang, P; Hu, J; Tyagi, N; Mageras, G; Lee, N; Hunt, M

    2014-01-01

    Purpose: To develop a robust planning paradigm which incorporates a tumor regression model into the optimization process to ensure tumor coverage in head and neck radiotherapy. Methods: Simulation and weekly MR images were acquired for a group of head and neck patients to characterize tumor regression during radiotherapy. For each patient, the tumor and parotid glands were segmented on the MR images and the weekly changes were formulated with an affine transformation, where morphological shrinkage and positional changes are modeled by a scaling factor, and centroid shifts, respectively. The tumor and parotid contours were also transferred to the planning CT via rigid registration. To perform the robust planning, weekly predicted PTV and parotid structures were created by transforming the corresponding simulation structures according to the weekly affine transformation matrix averaged over patients other than him/herself. Next, robust PTV and parotid structures were generated as the union of the simulation and weekly prediction contours. In the subsequent robust optimization process, attainment of the clinical dose objectives was required for the robust PTV and parotids, as well as other organs at risk (OAR). The resulting robust plans were evaluated by looking at the weekly and total accumulated dose to the actual weekly PTV and parotid structures. The robust plan was compared with the original plan based on the planning CT to determine its potential clinical benefit. Results: For four patients, the average weekly change to tumor volume and position was −4% and 1.2 mm laterally-posteriorly. Due to these temporal changes, the robust plans resulted in an accumulated PTV D95 that was, on average, 2.7 Gy higher than the plan created from the planning CT. OAR doses were similar. Conclusion: Integration of a tumor regression model into target delineation and plan robust optimization is feasible and may yield improved tumor coverage. Part of this research is supported

  12. Feasibility and robustness of dose painting by numbers in proton therapy with contour-driven plan optimization

    International Nuclear Information System (INIS)

    Barragán, A. M.; Differding, S.; Lee, J. A.; Sterpin, E.; Janssens, G.

    2015-01-01

    Purpose: To prove the ability of protons to reproduce a dose gradient that matches a dose painting by numbers (DPBN) prescription in the presence of setup and range errors, by using contours and structure-based optimization in a commercial treatment planning system. Methods: For two patients with head and neck cancer, voxel-by-voxel prescription to the target volume (GTV-PET) was calculated from 18FDG-PET images and approximated with several discrete prescription subcontours. Treatments were planned with proton pencil beam scanning. In order to determine the optimal plan parameters to approach the DPBN prescription, the effects of the scanning pattern, number of fields, number of subcontours, and use of range shifter were separately tested on each patient. Different constant scanning grids (i.e., spot spacing = Δx = Δy = 3.5, 4, and 5 mm) and uniform energy layer separation [4 and 5 mm WED (water equivalent distance)] were analyzed versus a dynamic and automatic selection of the spots grid. The number of subcontours was increased from 3 to 11 while the number of beams was set to 3, 5, or 7. Conventional PTV-based and robust clinical target volume (CTV)-based optimization strategies were considered and their robustness against range and setup errors assessed. Because of the nonuniform prescription, ensuring robustness for coverage of GTV-PET inevitably leads to overdosing, which was compared for both optimization schemes. Results: The optimal number of subcontours ranged from 5 to 7 for both patients. All considered scanning grids achieved accurate dose painting (1% average difference between the prescribed and planned doses). PTV-based plans led to nonrobust target coverage while robust-optimized plans improved it considerably (differences between worst-case CTV dose and the clinical constraint were up to 3 Gy for PTV-based plans and did not exceed 1 Gy for robust CTV-based plans). Also, only 15% of the points in the GTV-PET (worst case) were above 5% of DPBN

  13. Research on the robust optimization of the enterprise's decision on the investment to the collaborative innovation: Under the risk constraints

    International Nuclear Information System (INIS)

    Zhou, Qing; Fang, Gang; Wang, Dong-peng; Yang, Wei

    2016-01-01

    Abstract: The robust optimization model is applied to analyze the enterprise's decision on the investment portfolio for collaborative innovation under risk constraints. Through mathematical model deduction and simulation analysis, the research results show that the enterprise's investment in collaborative innovation has a relatively obvious robust effect. For collaborative innovation, the return on the investment coexists with its risk. Under the risk constraints, the robust optimization method can determine the minimum risk, as well as the proportion of each investment scheme in the portfolio, for different target returns on the investment. On this basis, the enterprise can balance investment return against risk and make an optimal decision on the investment scheme.

  14. Robust active combustion control for the optimization of environmental performance and energy efficiency

    Science.gov (United States)

    Demayo, Trevor Nat

    optimization. The active control system was demonstrated and evaluated by optimizing the burners under practical conditions. In most cases, the controller was able to locate, within 10-15 min, a global performance peak that simultaneously minimized emissions and maximized system efficiency within specified stability limits. The active controller demonstrated flexibility and robustness by (a) successfully optimizing different burners for different J functions, initial conditions, and sensor combinations, and (b) successfully reoptimizing a burner under the effect of simulated window fouling and following sudden inlet perturbations, including load cycling and a misaligned fuel injector.

  15. Sensory Cortical Plasticity Participates in the Epigenetic Regulation of Robust Memory Formation

    Science.gov (United States)

    Phan, Mimi L.; Bieszczad, Kasia M.

    2016-01-01

    Neuroplasticity remodels sensory cortex across the lifespan. A function of adult sensory cortical plasticity may be capturing available information during perception for memory formation. The degree of experience-dependent remodeling in sensory cortex appears to determine memory strength and specificity for important sensory signals. A key open question is how plasticity is engaged to induce different degrees of sensory cortical remodeling. Neural plasticity for long-term memory requires the expression of genes underlying stable changes in neuronal function, structure, connectivity, and, ultimately, behavior. Lasting changes in transcriptional activity may depend on epigenetic mechanisms; some of the best studied in behavioral neuroscience are DNA methylation and histone acetylation and deacetylation, which, respectively, promote and repress gene expression. One purpose of this review is to propose epigenetic regulation of sensory cortical remodeling as a mechanism enabling the transformation of significant information from experiences into content-rich memories of those experiences. Recent evidence suggests how epigenetic mechanisms regulate highly specific reorganization of sensory cortical representations that establish a widespread network for memory. Thus, epigenetic mechanisms could initiate events to establish exceptionally persistent and robust memories at a systems-wide level by engaging sensory cortical plasticity for gating what and how much information becomes encoded. PMID:26881129

  16. Sensory Cortical Plasticity Participates in the Epigenetic Regulation of Robust Memory Formation.

    Science.gov (United States)

    Phan, Mimi L; Bieszczad, Kasia M

    2016-01-01

    Neuroplasticity remodels sensory cortex across the lifespan. A function of adult sensory cortical plasticity may be capturing available information during perception for memory formation. The degree of experience-dependent remodeling in sensory cortex appears to determine memory strength and specificity for important sensory signals. A key open question is how plasticity is engaged to induce different degrees of sensory cortical remodeling. Neural plasticity for long-term memory requires the expression of genes underlying stable changes in neuronal function, structure, connectivity, and, ultimately, behavior. Lasting changes in transcriptional activity may depend on epigenetic mechanisms; some of the best studied in behavioral neuroscience are DNA methylation and histone acetylation and deacetylation, which, respectively, promote and repress gene expression. One purpose of this review is to propose epigenetic regulation of sensory cortical remodeling as a mechanism enabling the transformation of significant information from experiences into content-rich memories of those experiences. Recent evidence suggests how epigenetic mechanisms regulate highly specific reorganization of sensory cortical representations that establish a widespread network for memory. Thus, epigenetic mechanisms could initiate events to establish exceptionally persistent and robust memories at a systems-wide level by engaging sensory cortical plasticity for gating what and how much information becomes encoded.

  17. Sensory Cortical Plasticity Participates in the Epigenetic Regulation of Robust Memory Formation

    Directory of Open Access Journals (Sweden)

    Mimi L. Phan

    2016-01-01

    Full Text Available Neuroplasticity remodels sensory cortex across the lifespan. A function of adult sensory cortical plasticity may be capturing available information during perception for memory formation. The degree of experience-dependent remodeling in sensory cortex appears to determine memory strength and specificity for important sensory signals. A key open question is how plasticity is engaged to induce different degrees of sensory cortical remodeling. Neural plasticity for long-term memory requires the expression of genes underlying stable changes in neuronal function, structure, connectivity, and, ultimately, behavior. Lasting changes in transcriptional activity may depend on epigenetic mechanisms; some of the best studied in behavioral neuroscience are DNA methylation and histone acetylation and deacetylation, which, respectively, promote and repress gene expression. One purpose of this review is to propose epigenetic regulation of sensory cortical remodeling as a mechanism enabling the transformation of significant information from experiences into content-rich memories of those experiences. Recent evidence suggests how epigenetic mechanisms regulate highly specific reorganization of sensory cortical representations that establish a widespread network for memory. Thus, epigenetic mechanisms could initiate events to establish exceptionally persistent and robust memories at a systems-wide level by engaging sensory cortical plasticity for gating what and how much information becomes encoded.

  18. Robust optimization for load scheduling of a smart home with photovoltaic system

    International Nuclear Information System (INIS)

    Wang, Chengshan; Zhou, Yue; Jiao, Bingqi; Wang, Yamin; Liu, Wenjian; Wang, Dan

    2015-01-01

    Highlights: • Robust household load scheduling is presented for smart homes with a PV system. • A robust counterpart is formulated to deal with PV output uncertainty. • The robust counterpart is finally transformed into a quadratic programming problem. • Load schedules with different robustness can be made by the proposed method. • Feed-in tariff and PV output would affect the significance of the proposed method. - Abstract: In this paper, a robust approach is developed to tackle the uncertainty of PV power output in the load scheduling of smart homes integrated with a household PV system. Specifically, a robust formulation is proposed and further transformed into an equivalent quadratic programming problem. Day-ahead load schedules with different degrees of robustness can be generated by solving the proposed robust formulation with different predefined parameters. The validity and advantages of the proposed approach have been verified by simulation results. The effects of the feed-in tariff and PV output have also been evaluated.

  19. The role of robust optimization in single-leg airline revenue management

    NARCIS (Netherlands)

    Birbil, S.I.; Frenk, J.B.G.; Gromicho Dos Santos, J.A.; Zhang, S.

    2009-01-01

    In this paper, we introduce robust versions of the classical static and dynamic single-leg seat allocation models. These robust models take into account the inaccurate estimates of the underlying probability distributions. As observed by simulation experiments, it turns out that for these robust

  20. A novel spatial performance metric for robust pattern optimization of distributed hydrological models

    Science.gov (United States)

    Stisen, S.; Demirel, C.; Koch, J.

    2017-12-01

    Evaluation of performance is an integral part of model development and calibration, and it is of paramount importance when communicating modelling results to stakeholders and the scientific community. There exists a comprehensive and well-tested toolbox of metrics to assess temporal model performance in the hydrological modelling community. By contrast, experience in evaluating spatial performance has not kept pace with the wide availability of spatial observations and with the sophisticated model codes that simulate the spatial variability of complex hydrological processes. This study aims to make a contribution towards advancing spatial-pattern-oriented model evaluation for distributed hydrological models. This is achieved by introducing a novel spatial performance metric which provides robust pattern performance during model calibration. The promoted SPAtial EFficiency (SPAEF) metric reflects three equally weighted components: correlation, coefficient of variation and histogram overlap. This multi-component approach is necessary in order to adequately compare spatial patterns. SPAEF, its three components individually, and two alternative spatial performance metrics, i.e. connectivity analysis and the fractions skill score, are tested in a spatial-pattern-oriented calibration of a catchment model in Denmark. The calibration is constrained by a remote-sensing-based spatial pattern of evapotranspiration and by discharge time series at two stations. Our results stress that stand-alone metrics tend to fail to provide holistic pattern information to the optimizer, which underlines the importance of multi-component metrics. The three SPAEF components are independent, which allows them to complement each other in a meaningful way. This study promotes the use of bias-insensitive metrics, which allow comparing variables that are related but may differ in unit, in order to optimally exploit spatial observations made available by remote sensing
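
    In the formulation published by the same group, SPAEF combines the three components as 1 - sqrt((A-1)^2 + (B-1)^2 + (C-1)^2), with A the Pearson correlation, B the ratio of the coefficients of variation and C the overlap of the z-score histograms. The Python sketch below follows that published formulation and should be checked against the original papers before use; the bin count is an assumption.

      import numpy as np

      def spaef(sim, obs, bins=100):
          """SPAtial EFficiency: 1 - sqrt((A-1)^2 + (B-1)^2 + (C-1)^2)."""
          sim = np.asarray(sim, dtype=float).ravel()
          obs = np.asarray(obs, dtype=float).ravel()
          A = np.corrcoef(sim, obs)[0, 1]                                   # pattern correlation
          B = (np.std(sim) / np.mean(sim)) / (np.std(obs) / np.mean(obs))   # CV ratio
          z_sim = (sim - sim.mean()) / sim.std()                            # z-scores remove bias and units
          z_obs = (obs - obs.mean()) / obs.std()
          rng = (min(z_sim.min(), z_obs.min()), max(z_sim.max(), z_obs.max()))
          h_sim, _ = np.histogram(z_sim, bins=bins, range=rng)
          h_obs, _ = np.histogram(z_obs, bins=bins, range=rng)
          C = np.minimum(h_sim, h_obs).sum() / h_obs.sum()                  # histogram intersection
          return 1.0 - np.sqrt((A - 1) ** 2 + (B - 1) ** 2 + (C - 1) ** 2)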

  1. Robust economic optimization and environmental policy analysis for microgrid planning: An application to Taichung Industrial Park, Taiwan

    International Nuclear Information System (INIS)

    Yu, Nan; Kang, Jin-Su; Chang, Chung-Chuan; Lee, Tai-Yong; Lee, Dong-Yup

    2016-01-01

    This study aims to provide economical and environmentally friendly solutions for a microgrid system with distributed energy resources in the design stage, considering multiple uncertainties during operation and conflicting interests among diverse microgrid stakeholders. For the purpose, we develop a multi-objective optimization model for robust microgrid planning, on the basis of an economic robustness measure, i.e. the worst-case cost among possible scenarios, to reduce the variability among scenario costs caused by uncertainties. The efficacy of the model is successfully demonstrated by applying it to Taichung Industrial Park in Taiwan, an industrial complex, where significant amount of greenhouse gases are emitted. Our findings show that the most robust solution, but the highest cost, mainly includes 45% (26.8 MW) of gas engine and 47% (28 MW) of photovoltaic panel with the highest system capacity (59 MW). Further analyses reveal the environmental benefits from the significant reduction of the expected annual CO_2 emission and carbon tax by about half of the current utility facilities in the region. In conclusion, the developed model provides an efficient decision-making tool for robust microgrid planning at the preliminary stage. - Highlights: • Developed robust economic and environmental optimization model for microgrid planning. • Provided Pareto optimal planning solutions for Taichung Industrial Park, Taiwan. • Suggested microgrid configuration with significant economic and environmental benefits. • Identified gas engine and photovoltaic panel as two promising energy sources.

  2. Emergence of robust growth laws from optimal regulation of ribosome synthesis.

    Science.gov (United States)

    Scott, Matthew; Klumpp, Stefan; Mateescu, Eduard M; Hwa, Terence

    2014-08-22

    Bacteria must constantly adapt their growth to changes in nutrient availability; yet despite large-scale changes in protein expression associated with sensing, adaptation, and processing different environmental nutrients, simple growth laws connect the ribosome abundance and the growth rate. Here, we investigate the origin of these growth laws by analyzing the features of ribosomal regulation that coordinate proteome-wide expression changes with cell growth in a variety of nutrient conditions in the model organism Escherichia coli. We identify supply-driven feedforward activation of ribosomal protein synthesis as the key regulatory motif maximizing amino acid flux, and autonomously guiding a cell to achieve optimal growth in different environments. The growth laws emerge naturally from the robust regulatory strategy underlying growth rate control, irrespective of the details of the molecular implementation. The study highlights the interplay between phenomenological modeling and molecular mechanisms in uncovering fundamental operating constraints, with implications for endogenous and synthetic design of microorganisms. © 2014 The Authors. Published under the terms of the CC BY 4.0 license.

  3. RSMDP-based Robust Q-learning for Optimal Path Planning in a Dynamic Environment

    Directory of Open Access Journals (Sweden)

    Yunfei Zhang

    2014-07-01

    Full Text Available This paper presents a robust Q-learning method for path planning in a dynamic environment. The method consists of three steps: first, a regime-switching Markov decision process (RSMDP) is formed to present the dynamic environment; second, a probabilistic roadmap (PRM) is constructed, integrated with the RSMDP and stored as a graph whose nodes correspond to a collision-free world state for the robot; and third, an online Q-learning method with dynamic stepsize, which facilitates robust convergence of the Q-value iteration, is integrated with the PRM to determine an optimal path for reaching the goal. In this manner, the robot is able to use past experience for improving its performance in avoiding not only static obstacles but also moving obstacles, without knowing the nature of the obstacle motion. The use of regime switching in the avoidance of obstacles with unknown motion is particularly innovative. The developed approach is applied to a homecare robot in computer simulation. The results show that the online path planner with Q-learning is able to rapidly and successfully converge to the correct path.
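
    The core of the third step is an ordinary tabular Q-learning update in which the learning rate decays with the visit count of each state-action pair, which is one standard way to obtain the robust convergence the abstract refers to. The sketch below is generic: the environment interface (reset, step, actions) and the 1/(1+n) decay schedule are assumptions for illustration, not details taken from the paper.

      import random
      from collections import defaultdict

      def q_learning_episode(env, Q, visits, gamma=0.95, eps=0.1):
          """One episode of tabular Q-learning with a visit-count-decayed step size.
          Assumes env.reset() -> s, env.step(a) -> (s2, r, done), env.actions(s) -> list."""
          s, done = env.reset(), False
          while not done:
              acts = env.actions(s)
              a = random.choice(acts) if random.random() < eps else \
                  max(acts, key=lambda a_: Q[(s, a_)])        # epsilon-greedy action choice
              s2, r, done = env.step(a)
              visits[(s, a)] += 1
              alpha = 1.0 / (1.0 + visits[(s, a)])            # dynamic (decaying) step size
              best_next = 0.0 if done else max(Q[(s2, a_)] for a_ in env.actions(s2))
              Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
              s = s2
          return Q

      Q, visits = defaultdict(float), defaultdict(int)        # shared across episodes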

  4. Robust optimization on sustainable biodiesel supply chain produced from waste cooking oil under price uncertainty.

    Science.gov (United States)

    Zhang, Yong; Jiang, Yunjian

    2017-02-01

    Waste cooking oil (WCO)-for-biodiesel conversion is regarded as a "waste-to-wealth" industry. This paper addresses the design of a WCO-for-biodiesel supply chain at both the strategic and tactical levels. The supply chain of this problem is based on a typical mode of waste collection (from restaurant kitchens) and conversion in cities. The supply chain comprises three stakeholders: the WCO supplier, the integrated bio-refinery and the demand zone. Three key problems should be addressed for the optimal design of the supply chain: (1) the number, sizes and locations of bio-refineries; (2) the sites and amount of WCO collected; (3) the transportation plans for WCO and biodiesel. A robust mixed integer linear model with multiple objectives (economic, environmental and social) is proposed for these problems. Finally, a large-scale practical case study based on Suzhou, a city in the east of China, is adopted to verify the proposed models. Copyright © 2016 Elsevier Ltd. All rights reserved.

  5. Optimizing Diamond Structured Automobile Supply Chain Network Towards a Robust Business Continuity Management

    Directory of Open Access Journals (Sweden)

    Abednico Montshiwa

    2016-02-01

    Full Text Available This paper presents an optimized diamond-structured automobile supply chain network for a robust Business Continuity Management (BCM) model. The model is necessitated by the nature of the automobile supply chain: companies in tier two are centralized and numerically limited and have to supply multiple tier-one companies with goods and services. The challenge with this supply chain structure is the inherent risk in the supply chain; once a supply chain disruption takes place at the tier-two level, the whole supply chain network suffers huge losses. To address this challenge, the paper replaces Risk Analysis with Risk Ranking and introduces Supply Chain Cooperation (SCC) into the traditional Business Continuity Plan (BCP) concept. The paper employed three statistical analysis techniques (correlation analysis, regression analysis and SmartPLS 3.0 calculations). In this study, correlation and regression analysis results on risk rankings, SCC and Business Impact Analysis (BIA) were significant, ascertaining the value of the model. The multivariate data analysis calculations demonstrated that SCC has a positive total significant effect on risk rankings and BCM, while BIA has the strongest positive effects on all BCP factors. Finally, sensitivity analysis demonstrated that company size plays a role in BCM.

  6. An optimization of robust SMES with specified structure H∞ controller for power system stabilization considering superconducting magnetic coil size

    International Nuclear Information System (INIS)

    Ngamroo, Issarachai

    2011-01-01

    Although the superconducting magnetic energy storage (SMES) is a smart stabilizing device in electric power systems, its installation cost is very high. In particular, the superconducting magnetic coil, which is the critical part of the SMES, must be well sized. At the same time, various system operating conditions result in system uncertainties. The power controller of an SMES designed without taking such uncertainties into account may fail to stabilize the system. By considering both the coil size and the system uncertainties, this paper addresses the optimization of a robust SMES controller. Without the need for exact mathematical equations, the normalized coprime factorization is applied to model the system uncertainties. Based on the normalized integral square error index of the inter-area rotor angle difference and a specified-structure H∞ loop shaping optimization, the robust SMES controller with the smallest coil size can be obtained by a genetic algorithm. The robustness of the proposed SMES with the smallest coil size is confirmed by a simulation study.

  7. DETERMINING A ROBUST D-OPTIMAL DESIGN FOR TESTING FOR DEPARTURE FROM ADDITIVITY IN A MIXTURE OF FOUR PFAAS

    Science.gov (United States)

    Our objective was to determine an optimal experimental design for a mixture of perfluoroalkyl acids (PFAAs) that is robust to the assumption of additivity. Of particular focus to this research project is whether an environmentally relevant mixture of four PFAAs with long half-liv...

  8. Determining a Robust D-Optimal Design for Testing for Departure from Additivity in a Mixture of Four Perfluoroalkyl Acids.

    Science.gov (United States)

    Our objective is to determine an optimal experimental design for a mixture of perfluoroalkyl acids (PFAAs) that is robust to the assumption of additivity. PFAAs are widely used in consumer products and industrial applications. The presence and persistence of PFAAs, especially in ...

  9. On robust multi-period pre-commitment and time-consistent mean-variance portfolio optimization

    NARCIS (Netherlands)

    F. Cong (Fei); C.W. Oosterlee (Kees)

    2017-01-01

    We consider robust pre-commitment and time-consistent mean-variance optimal asset allocation strategies, that are required to perform well also in a worst-case scenario regarding the development of the asset price. We show that worst-case scenarios for both strategies can be found by
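
    A common way to make even a single-period mean-variance allocation robust to a worst-case view of the mean return is to penalize the estimated mean with an ellipsoidal uncertainty term: maximize w'mu_hat - kappa*||Omega^(1/2) w|| - (lambda/2) w'Sigma w subject to the budget constraint. The sketch below shows only that simplified single-period problem; the symbols kappa, lambda, Omega and the toy numbers are assumptions for illustration, and the multi-period pre-commitment and time-consistent treatment is specific to the cited paper.

      import numpy as np
      from scipy.optimize import minimize

      def robust_mean_variance(mu_hat, Sigma, Omega, kappa=1.0, lam=5.0):
          """Worst-case mean-variance weights under an ellipsoidal uncertainty set for
          the mean: maximize w'mu_hat - kappa*||Omega^{1/2} w|| - (lam/2) * w'Sigma w,
          subject to sum(w) = 1 (short selling allowed in this toy version)."""
          L = np.linalg.cholesky(Omega)                     # Omega = L L', so ||L' w|| is the penalty
          neg_obj = lambda w: -(w @ mu_hat - kappa * np.linalg.norm(L.T @ w)
                                - 0.5 * lam * w @ Sigma @ w)
          cons = ({"type": "eq", "fun": lambda w: w.sum() - 1.0},)
          w0 = np.full(len(mu_hat), 1.0 / len(mu_hat))
          return minimize(neg_obj, w0, constraints=cons, method="SLSQP").x

      mu = np.array([0.05, 0.07, 0.06])                     # illustrative expected returns
      Sig = np.diag([0.04, 0.09, 0.05])                     # illustrative covariance matrix
      print(robust_mean_variance(mu, Sig, 0.01 * Sig).round(3))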

  10. Efficacy of robust optimization plan with partial-arc VMAT for photon volumetric-modulated arc therapy: A phantom study.

    Science.gov (United States)

    Miura, Hideharu; Ozawa, Shuichi; Nagata, Yasushi

    2017-09-01

    This study investigated position dependence in planning target volume (PTV)-based and robust optimization plans using full-arc and partial-arc volumetric modulated arc therapy (VMAT). The gantry angles at the periphery, intermediate, and center CTV positions were 181°-180° (full-arc VMAT) and 181°-360° (partial-arc VMAT). A PTV-based optimization plan was defined by a 5 mm margin expansion of the CTV to a PTV volume, on which the dose constraints were applied. The robust optimization plan consisted of a directly optimized dose to the CTV under a maximum setup uncertainty of 5 mm. The prescription dose was normalized to the CTV D99% (the minimum relative dose that covers 99% of the volume of the CTV) as the original plan. The isocenter was rigidly shifted at 1 mm intervals in the anterior-posterior (A-P), superior-inferior (S-I), and right-left (R-L) directions from the original position up to the maximum setup uncertainty of 5 mm in the original plan, yielding recalculated dose distributions. It was found that for the intermediate and center positions, the uncertainties in the D99% doses to the CTV for all directions did not significantly differ when comparing the PTV-based and robust optimization plans (P > 0.05). For the periphery position, uncertainties in the D99% doses to the CTV in the R-L direction for the robust optimization plan were found to be lower than those in the PTV-based optimization plan (P < 0.05). The robust optimization plan's efficacy using partial-arc VMAT depends on the periphery CTV position. © 2017 The Authors. Journal of Applied Clinical Medical Physics published by Wiley Periodicals, Inc. on behalf of American Association of Physicists in Medicine.

  11. Convergence of a Scholtes-type regularization method for cardinality-constrained optimization problems with an application in sparse robust portfolio optimization

    Czech Academy of Sciences Publication Activity Database

    Branda, Martin; Bucher, M.; Červinka, Michal; Schwartz, A.

    2018-01-01

    Roč. 70, č. 2 (2018), s. 503-530 ISSN 0926-6003 R&D Projects: GA ČR GA15-00735S Institutional support: RVO:67985556 Keywords : Cardinality constraints * Regularization method * Scholtes regularization * Strong stationarity * Sparse portfolio optimization * Robust portfolio optimization Subject RIV: BB - Applied Statistics, Operational Research OBOR OECD: Statistics and probability Impact factor: 1.520, year: 2016 http://library.utia.cas.cz/separaty/2018/MTR/branda-0489264.pdf

  12. Impact of Spot Size and Spacing on the Quality of Robustly Optimized Intensity Modulated Proton Therapy Plans for Lung Cancer.

    Science.gov (United States)

    Liu, Chenbin; Schild, Steven E; Chang, Joe Y; Liao, Zhongxing; Korte, Shawn; Shen, Jiajian; Ding, Xiaoning; Hu, Yanle; Kang, Yixiu; Keole, Sameer R; Sio, Terence T; Wong, William W; Sahoo, Narayan; Bues, Martin; Liu, Wei

    2018-06-01

    To investigate how spot size and spacing affect plan quality, robustness, and interplay effects of robustly optimized intensity modulated proton therapy (IMPT) for lung cancer. Two robustly optimized IMPT plans were created for 10 lung cancer patients: first by a large-spot machine with in-air energy-dependent large spot size at isocenter (σ: 6-15 mm) and spacing (1.3 σ), and second by a small-spot machine with in-air energy-dependent small spot size (σ: 2-6 mm) and spacing (5 mm). Both plans were generated by optimizing radiation dose to internal target volume on averaged 4-dimensional computed tomography scans using an in-house-developed IMPT planning system. The dose-volume histograms band method was used to evaluate plan robustness. Dose evaluation software was developed to model time-dependent spot delivery to incorporate interplay effects with randomized starting phases for each field per fraction. Patient anatomy voxels were mapped phase-to-phase via deformable image registration, and doses were scored using in-house-developed software. Dose-volume histogram indices, including internal target volume dose coverage, homogeneity, and organs at risk (OARs) sparing, were compared using the Wilcoxon signed-rank test. Compared with the large-spot machine, the small-spot machine resulted in significantly lower heart and esophagus mean doses, with comparable target dose coverage, homogeneity, and protection of other OARs. Plan robustness was comparable for targets and most OARs. With interplay effects considered, significantly lower heart and esophagus mean doses with comparable target dose coverage and homogeneity were observed using smaller spots. Robust optimization with a small spot-machine significantly improves heart and esophagus sparing, with comparable plan robustness and interplay effects compared with robust optimization with a large-spot machine. A small-spot machine uses a larger number of spots to cover the same tumors compared with a large

  13. Optimization of sources for focusing wave energy in targeted formations

    KAUST Repository

    Jeong, C

    2010-06-08

    We discuss a numerical approach for identifying the surface excitation that is necessary to maximize the response of a targeted subsurface formation. The motivation stems from observations in the aftermath of earthquakes, and from limited field experiments, whereby increased oil production rates were recorded and were solely attributable to the induced reservoir shaking. The observations suggest that focusing wave energy to the reservoir could serve as an effective low-cost enhanced oil recovery method. In this paper, we report on a general method that allows the determination of the source excitation, when provided with a desired maximization outcome at the targeted formation. We discuss, for example, how to construct the excitation that will maximize the kinetic energy in the target zone, while keeping the neighbouring zones silent. To this end, we cast the problem as an inverse-source problem, and use a partial-differential-equation-constrained optimization approach to arrive at an optimized source signal. We seek to satisfy stationarity of an augmented functional, which formally leads to a triplet of state, adjoint and control problems. We use finite elements to resolve the state and adjoint problems, and an iterative scheme to satisfy the control problem to converge to the sought source signal. We report on one-dimensional numerical experiments in the time domain involving a layered medium of semi-infinite extent. The numerical results show that the targeted formation's kinetic energy resulting from an optimized wave source could be several times greater than the one resulting from a blind source choice, and could overcome the mobility threshold of entrapped reservoir oil. © 2010 Nanjing Geophysical Research Institute.

  14. SU-E-T-527: Is CTV-Based Robust Optimized IMPT in Non-Small-Cell Lung Cancer Robust Against Respiratory Motion?

    International Nuclear Information System (INIS)

    Anetai, Y; Mizuno, H; Sumida, I; Ogawa, K; Takegawa, H; Inoue, T; Koizumi, M; Veld, A van’t; Korevaar, E

    2015-01-01

    Purpose: To determine which proton planning technique on average-CT is more vulnerable to respiratory-motion-induced density changes and the interplay effect among (a) IMPT with CTV-based minimax robust optimization considering a 5 mm set-up error, and (b, c) IMPT/SFUD with 5 mm-expanded PTV optimization. Methods: Three planning techniques were optimized in RayStation with a prescription of 60/25 (Gy/fractions) and almost the same OAR constraints/objectives for each of 10 NSCLC patients. 4D dose without/with the interplay effect was recalculated on eight 4D-CT phases and accumulated after deforming the dose of each phase to a reference (exhalation) phase. The change in D98% of each CTV caused by density changes and interplay was determined. In addition, the DVH information vector (D99%, D98%, D95%, Dave, D50%, D2%, D1%), which compares the whole DVH via the score η = (cosine similarity × Pearson correlation coefficient − 0.9) × 1000, quantified the degree of DVH change: a score below 100 indicates a changed DVH. Results: The 3D plans of all three techniques satisfied our clinical goals. The D98% shift (mean±SD, Gy) due to density changes was largest in (c): −0.78±1.1, while (a): −0.11±0.65 and (b): −0.59±0.93. The shift due to the interplay effect was likewise largest in (c): −0.54±0.70, whereas (a): −0.25±0.93 and (b): −0.12±0.13. Moreover, the lowest η score caused by density changes was also found in (c): 69, while (a) and (b) remained around 90. The η score also indicated a smaller effect of interplay than of density changes. Note that the changed DVHs were generally still clinically acceptable. Paired t-tests showed a significantly smaller density-change effect in (a) (p<0.05) than in (b) or (c) and no significant difference in the interplay effect. Conclusion: CTV-based robust optimized IMPT was more robust against respiratory-motion-induced density changes than PTV-based IMPT and SFUD. The interplay effect was smaller than the effect of density changes and similar among the three techniques. The JSPS Core
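
    A small sketch of the DVH-similarity score defined above, η = (cosine similarity × Pearson correlation coefficient − 0.9) × 1000, computed on a DVH information vector; the example dose vectors are invented.

```python
# Sketch of the DVH-similarity score described in the record:
# eta = (cosine similarity x Pearson correlation - 0.9) x 1000,
# computed on a DVH "information vector" such as (D99%, D98%, D95%, Dave, D50%, D2%, D1%).
# The example vectors are invented for illustration.
import numpy as np
from scipy.stats import pearsonr

def eta_score(dvh_ref, dvh_test):
    a, b = np.asarray(dvh_ref, float), np.asarray(dvh_test, float)
    cosine = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    r, _ = pearsonr(a, b)
    return (cosine * r - 0.9) * 1000.0

nominal = [59.1, 58.8, 58.2, 60.3, 60.1, 63.0, 63.4]      # hypothetical doses (Gy)
recalculated = [57.0, 56.5, 55.9, 59.8, 59.9, 63.2, 63.8]
print(f"eta = {eta_score(nominal, recalculated):.0f}")    # values below ~100 flag a changed DVH
```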

  15. An Integrated Approach to Single-Leg Airline Revenue Management: The Role of Robust Optimization

    OpenAIRE

    Birbil, S.I.; Frenk, J.B.G.; Gromicho, J.A.S.; Zhang, S.

    2006-01-01

    In this paper we introduce robust versions of the classical static and dynamic single-leg seat allocation models as analyzed by Wollmer, and by Lautenbacher and Stidham, respectively. These robust models take into account the inaccurate estimates of the underlying probability distributions. As observed in simulation experiments, it turns out that for these robust versions the variability compared to their classical counterparts is considerably reduced with a negligible decrease of av...

  16. Novel Approaches for Spacecraft Formation Robustness and Performance using Distributed Estimation, Control and Communication, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — Formation flight can provide the benefits of a large effective telescope using precision formation flying of smaller, lower cost, collaborating telescopes. A...

  17. Robust breathing signal extraction from cone beam CT projections based on adaptive and global optimization techniques

    International Nuclear Information System (INIS)

    Chao, Ming; Yuan, Yading; Rosenzweig, Kenneth E; Lo, Yeh-Chi; Wei, Jie; Li, Tianfang

    2016-01-01

    We present a study of extracting respiratory signals from cone beam computed tomography (CBCT) projections within the framework of the Amsterdam Shroud (AS) technique. Acquired prior to the radiotherapy treatment, CBCT projections were preprocessed for contrast enhancement by converting the original intensity images to attenuation images, with which the AS image was created. An adaptive robust z-normalization filtering was applied to further augment the weak oscillating structures locally. From the enhanced AS image, the respiratory signal was extracted using a two-step optimization approach to effectively reveal the large-scale regularity of the breathing signals. CBCT projection images from five patients acquired with the Varian Onboard Imager on the Clinac iX System Linear Accelerator (Varian Medical Systems, Palo Alto, CA) were employed to assess the proposed technique. Stable breathing signals could be reliably extracted using the proposed algorithm. Reference waveforms obtained using an air bellows belt (Philips Medical Systems, Cleveland, OH) were exported and compared to the AS-based signals. Across the enrolled patients, the average error between the estimated breaths per minute (bpm) and the reference waveform bpm was as low as −0.07, with a standard deviation of 1.58. The new algorithm outperformed the original AS technique for all patients by 8.5% to 30%. The impact of gantry rotation on the breathing signal was assessed with data acquired with a Quasar phantom (Modus Medical Devices Inc., London, Canada) and found to be minimal with respect to the signal frequency. The new technique developed in this work provides a practical solution for extracting markerless breathing signals from CBCT projections for thoracic and abdominal patients. (paper)
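
    The exact "adaptive robust z-normalization" filter is not spelled out in the record, so the sketch below uses a generic robust z-score (median/MAD) applied column-wise to an Amsterdam Shroud image as an illustrative stand-in; the toy image is random.

```python
# Generic robust z-normalization (median / MAD) applied column by column to an
# Amsterdam Shroud image, as an illustrative stand-in for the adaptive filter
# mentioned in the abstract. The toy image below is random noise.
import numpy as np

def robust_z_normalize(as_image):
    """Normalize each column (projection index) of an AS image with median/MAD."""
    img = np.asarray(as_image, dtype=float)
    med = np.median(img, axis=0, keepdims=True)
    mad = np.median(np.abs(img - med), axis=0, keepdims=True)
    mad = np.where(mad == 0, 1.0, mad)           # avoid division by zero
    return (img - med) / (1.4826 * mad)          # 1.4826: consistency factor for Gaussian data

# toy AS image: rows = superior-inferior position, columns = projection number
toy = np.random.default_rng(0).normal(100, 5, size=(200, 600))
enhanced = robust_z_normalize(toy)
```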

  18. A robust algorithm for optimizing protein structures with NMR chemical shifts

    Energy Technology Data Exchange (ETDEWEB)

    Berjanskii, Mark; Arndt, David; Liang, Yongjie; Wishart, David S., E-mail: david.wishart@ualberta.ca [University of Alberta, Department of Computing Science (Canada)

    2015-11-15

    Over the past decade, a number of methods have been developed to determine the approximate structure of proteins using minimal NMR experimental information such as chemical shifts alone, sparse NOEs alone or a combination of comparative modeling data and chemical shifts. However, there have been relatively few methods that allow these approximate models to be substantively refined or improved using the available NMR chemical shift data. Here, we present a novel method, called Chemical Shift driven Genetic Algorithm for biased Molecular Dynamics (CS-GAMDy), for the robust optimization of protein structures using experimental NMR chemical shifts. The method incorporates knowledge-based scoring functions and structural information derived from NMR chemical shifts via a unique combination of multi-objective MD biasing, a genetic algorithm, and the widely used XPLOR molecular modelling language. Using this approach, we demonstrate that CS-GAMDy is able to refine and/or fold models that are as much as 10 Å (RMSD) away from the correct structure using only NMR chemical shift data. CS-GAMDy is also able to refine a wide range of approximate or mildly erroneous protein structures to more closely match the known/correct structure and the known/correct chemical shifts. We believe CS-GAMDy will allow protein models generated by sparse restraint or chemical-shift-only methods to achieve sufficiently high quality to be considered fully refined and “PDB worthy”. The CS-GAMDy algorithm is explained in detail and its performance is compared over a range of refinement scenarios with several commonly used protein structure refinement protocols. The program has been designed to be easily installed and easily used and is available at http://www.gamdy.ca.

  19. The optimal shape of elastomer mushroom-like fibers for high and robust adhesion

    Directory of Open Access Journals (Sweden)

    Burak Aksak

    2014-05-01

    Full Text Available Over the last decade, significant effort has been put into mimicking the ability of the gecko lizard to strongly and reversibly cling to surfaces, by using synthetic structures. Among these structures, mushroom-like elastomer fiber arrays have demonstrated promising performance on smooth surfaces, matching the adhesive strengths obtained with the natural gecko foot-pads. It is possible to improve the already impressive adhesive performance of mushroom-like fibers provided that the underlying adhesion mechanism is understood. Here, the adhesion mechanism of bio-inspired mushroom-like fibers is investigated by implementing the Dugdale–Barenblatt cohesive zone model into finite element simulations. It is found that the magnitude of pull-off stress depends on the edge angle θ and the ratio of the tip radius to the stalk radius β of the mushroom-like fiber. Pull-off stress is also found to depend on a dimensionless parameter χ, the ratio of the fiber radius to a length-scale related to the dominance of adhesive stress. As an estimate, the optimal parameters are found to be β = 1.1 and θ = 45°. Further, the location of crack initiation is found to depend on χ for given β and θ. An analytical model for pull-off stress, which depends on the location of crack initiation as well as on θ and β, is proposed and found to agree with the simulation results. Results obtained in this work provide a geometrical guideline for designing robust bio-inspired dry fibrillar adhesives.

  20. Decentralized formation of random regular graphs for robust multi-agent networks

    KAUST Repository

    Yazicioglu, A. Yasin; Egerstedt, Magnus; Shamma, Jeff S.

    2014-01-01

    systems. One family of robust graphs is the random regular graphs. In this paper, we present a locally applicable reconfiguration scheme to build random regular graphs through self-organization. For any connected initial graph, the proposed scheme

  1. An integrated approach to single-leg airline revenue management: The role of robust optimization

    NARCIS (Netherlands)

    S.I. Birbil (Ilker); J.B.G. Frenk (Hans); J.A.S. Gromicho (Joaquim); S. Zhang (Shuzhong)

    2006-01-01

    In this paper we introduce robust versions of the classical static and dynamic single leg seat allocation models as analyzed by Wollmer, and Lautenbacher and Stidham, respectively. These robust models take into account the inaccurate estimates of the underlying probability distributions.

  3. Optimization of sources for focusing wave energy in targeted formations

    International Nuclear Information System (INIS)

    Jeong, C; Kallivokas, L F; Huh, C; Lake, L W

    2010-01-01

    We discuss a numerical approach for identifying the surface excitation that is necessary to maximize the response of a targeted subsurface formation. The motivation stems from observations in the aftermath of earthquakes, and from limited field experiments, whereby increased oil production rates were recorded and were solely attributable to the induced reservoir shaking. The observations suggest that focusing wave energy to the reservoir could serve as an effective low-cost enhanced oil recovery method. In this paper, we report on a general method that allows the determination of the source excitation, when provided with a desired maximization outcome at the targeted formation. We discuss, for example, how to construct the excitation that will maximize the kinetic energy in the target zone, while keeping silent the neighbouring zones. To this end, we cast the problem as an inverse-source problem, and use a partial-differential-equation-constrained optimization approach to arrive at an optimized source signal. We seek to satisfy stationarity of an augmented functional, which formally leads to a triplet of state, adjoint and control problems. We use finite elements to resolve the state and adjoint problems, and an iterative scheme to satisfy the control problem to converge to the sought source signal. We report on one-dimensional numerical experiments in the time domain involving a layered medium of semi-infinite extent. The numerical results show that the targeted formation's kinetic energy resulting from an optimized wave source could be several times greater than the one resulting from a blind source choice, and could overcome the mobility threshold of entrapped reservoir oil
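
    The record describes an adjoint-based, PDE-constrained optimization of the full source time signal; that machinery is beyond a short snippet, so the toy below only sweeps the frequency of a monochromatic surface source in a crude 1D finite-difference wave simulation and picks the one depositing the most kinetic energy in a target layer. All material properties, geometry, and the frequency grid are invented.

```python
# Toy 1D illustration of "focusing wave energy in a targeted formation": sweep the
# frequency of a sinusoidal surface excitation and keep the one that maximizes the
# time-integrated kinetic energy inside the target layer. Not the paper's adjoint
# method; every parameter below is made up.
import numpy as np

def target_kinetic_energy(freq, nx=600, nt=3000, dx=5.0, dt=0.001):
    """Explicit finite-difference 1D wave run; returns kinetic energy in the target layer."""
    c = np.full(nx, 2000.0)          # m/s, overburden velocity
    target = slice(400, 500)         # grid indices of the targeted formation
    c[target] = 3000.0               # stiffer target layer
    assert dt <= dx / c.max()        # CFL stability condition
    u_prev = np.zeros(nx)
    u = np.zeros(nx)
    energy = 0.0
    for n in range(nt):
        u_next = np.zeros(nx)
        lap = u[2:] - 2.0 * u[1:-1] + u[:-2]
        u_next[1:-1] = 2.0 * u[1:-1] - u_prev[1:-1] + (c[1:-1] * dt / dx) ** 2 * lap
        u_next[0] = np.sin(2.0 * np.pi * freq * n * dt)   # prescribed surface excitation
        u_next[-1] = 0.0                                  # fixed far boundary (toy choice)
        velocity = (u_next[target] - u_prev[target]) / (2.0 * dt)
        energy += np.sum(velocity ** 2) * dt
        u_prev, u = u, u_next
    return energy

freqs = np.linspace(1.0, 60.0, 30)                        # Hz, candidate monochromatic sources
energies = [target_kinetic_energy(f) for f in freqs]
best = freqs[int(np.argmax(energies))]
print(f"best frequency ~ {best:.1f} Hz, gain over worst choice ~ {max(energies)/min(energies):.1f}x")
```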

  4. Exploitation and Optimization of Reservoir Performance in Hunton Formation, Oklahoma

    Energy Technology Data Exchange (ETDEWEB)

    Mohan Kelkar

    2007-06-30

    Hunton formation in Oklahoma has been the subject of attention for the last ten years. The new interest started with the drilling of the West Carney field in 1995 in Lincoln County. Subsequently, many other operators have expanded the search for oil and gas in Hunton formation in other parts of Oklahoma. These fields exhibit many unique production characteristics, including: (1) decreasing water-oil or water-gas ratio over time; (2) decreasing gas-oil ratio followed by an increase; (3) poor prediction capability of the reserves based on the log data; and (4) low geological connectivity but high hydrodynamic connectivity. The purpose of this investigation is to understand the principal mechanisms affecting the production, and propose methods by which we can optimize the production from fields with similar characteristics.

  5. Development of a method of robust rain gauge network optimization based on intensity-duration-frequency results

    Directory of Open Access Journals (Sweden)

    A. Chebbi

    2013-10-01

    Full Text Available Based on rainfall intensity-duration-frequency (IDF) curves, fitted in several locations of a given area, a robust optimization approach is proposed to identify the best locations to install new rain gauges. The advantage of robust optimization is that the resulting design solutions yield networks which behave acceptably under hydrological variability. Robust optimization can overcome the problem of selecting representative rainfall events when building the optimization process. This paper reports an original approach based on Montana IDF model parameters. The latter are assumed to be geostatistical variables, and their spatial interdependence is taken into account through the adoption of cross-variograms in the kriging process. The problem of optimally locating a fixed number of new monitoring stations based on an existing rain gauge network is addressed. The objective function is based on the mean spatial kriging variance and rainfall variogram structure using a variance-reduction method. Hydrological variability was taken into account by considering and implementing several return periods to define the robust objective function. Variance minimization is performed using a simulated annealing algorithm. In addition, knowledge of the time horizon is needed for the computation of the robust objective function. A short- and a long-term horizon were studied, and optimal networks are identified for each. The method developed is applied to north Tunisia (area = 21 000 km2). Data inputs for the variogram analysis were IDF curves provided by the hydrological bureau and available for 14 tipping bucket type rain gauges. The recording period was from 1962 to 2001, depending on the station. The study concerns an imaginary network augmentation based on the network configuration in 1973, which is a very significant year in Tunisia because there was an exceptional regional flood event in March 1973. This network consisted of 13 stations and did not meet World
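
    The paper's objective is a robust, IDF-based mean kriging variance, which requires the fitted variograms; the sketch below keeps only the combinatorial part, a simulated-annealing selection of new station sites from a candidate grid, and substitutes mean distance to the nearest gauge as a crude stand-in objective. All coordinates and counts are invented.

```python
# Simulated-annealing site selection sketch. The true objective in the paper is the
# robust mean kriging variance; here mean distance to the nearest gauge is used as a
# crude geometric stand-in so the snippet stays self-contained.
import numpy as np

rng = np.random.default_rng(3)
existing = rng.uniform(0, 100, size=(14, 2))          # existing gauges (arbitrary units)
candidates = np.stack(np.meshgrid(np.arange(5, 100, 10),
                                  np.arange(5, 100, 10)), -1).reshape(-1, 2).astype(float)
demand = rng.uniform(0, 100, size=(400, 2))           # points where estimation error matters
k_new = 5                                             # number of new gauges to add

def objective(chosen_idx):
    gauges = np.vstack([existing, candidates[list(chosen_idx)]])
    dists = np.linalg.norm(demand[:, None, :] - gauges[None, :, :], axis=2)
    return dists.min(axis=1).mean()                   # proxy for mean kriging variance

current = set(rng.choice(len(candidates), size=k_new, replace=False))
temp = 1.0
for step in range(3000):
    swap_out = rng.choice(list(current))
    swap_in = rng.integers(len(candidates))
    if swap_in in current:
        continue
    proposal = (current - {swap_out}) | {int(swap_in)}
    delta = objective(proposal) - objective(current)
    if delta < 0 or rng.random() < np.exp(-delta / temp):   # Metropolis acceptance
        current = proposal
    temp *= 0.998                                           # geometric cooling
print("selected candidate sites:", sorted(current), " objective:", round(objective(current), 2))
```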

  6. Automatic spinal cord localization, robust to MRI contrasts using global curve optimization.

    Science.gov (United States)

    Gros, Charley; De Leener, Benjamin; Dupont, Sara M; Martin, Allan R; Fehlings, Michael G; Bakshi, Rohit; Tummala, Subhash; Auclair, Vincent; McLaren, Donald G; Callot, Virginie; Cohen-Adad, Julien; Sdika, Michaël

    2018-02-01

    During the last two decades, MRI has been increasingly used for providing valuable quantitative information about spinal cord morphometry, such as quantification of the spinal cord atrophy in various diseases. However, despite the significant improvement of MR sequences adapted to the spinal cord, automatic image processing tools for spinal cord MRI data are not yet as developed as for the brain. There is nonetheless great interest in fully automatic and fast processing methods to be able to propose quantitative analysis pipelines on large datasets without user bias. The first step of most of these analysis pipelines is to detect the spinal cord, which is challenging to achieve automatically across the broad range of MRI contrasts, field of view, resolutions and pathologies. In this paper, a fully automated, robust and fast method for detecting the spinal cord centerline on MRI volumes is introduced. The algorithm uses a global optimization scheme that attempts to strike a balance between a probabilistic localization map of the spinal cord center point and the overall spatial consistency of the spinal cord centerline (i.e. the rostro-caudal continuity of the spinal cord). Additionally, a new post-processing feature, which aims to automatically split brain and spine regions is introduced, to be able to detect a consistent spinal cord centerline, independently from the field of view. We present data on the validation of the proposed algorithm, known as "OptiC", from a large dataset involving 20 centers, 4 contrasts (T 2 -weighted n = 287, T 1 -weighted n = 120, T 2 ∗ -weighted n = 307, diffusion-weighted n = 90), 501 subjects including 173 patients with a variety of neurologic diseases. Validation involved the gold-standard centerline coverage, the mean square error between the true and predicted centerlines and the ability to accurately separate brain and spine regions. Overall, OptiC was able to cover 98.77% of the gold-standard centerline, with a

  7. Comparison of linear and nonlinear programming approaches for "worst case dose" and "minmax" robust optimization of intensity-modulated proton therapy dose distributions.

    Science.gov (United States)

    Zaghian, Maryam; Cao, Wenhua; Liu, Wei; Kardar, Laleh; Randeniya, Sharmalee; Mohan, Radhe; Lim, Gino

    2017-03-01

    Robust optimization of intensity-modulated proton therapy (IMPT) takes uncertainties into account during spot weight optimization and leads to dose distributions that are resilient to uncertainties. Previous studies demonstrated benefits of linear programming (LP) for IMPT in terms of delivery efficiency by considerably reducing the number of spots required for the same quality of plans. However, a reduction in the number of spots may lead to loss of robustness. The purpose of this study was to evaluate and compare the performance in terms of plan quality and robustness of two robust optimization approaches using LP and nonlinear programming (NLP) models. The so-called "worst case dose" and "minmax" robust optimization approaches and conventional planning target volume (PTV)-based optimization approach were applied to designing IMPT plans for five patients: two with prostate cancer, one with skull-based cancer, and two with head and neck cancer. For each approach, both LP and NLP models were used. Thus, for each case, six sets of IMPT plans were generated and assessed: LP-PTV-based, NLP-PTV-based, LP-worst case dose, NLP-worst case dose, LP-minmax, and NLP-minmax. The four robust optimization methods behaved differently from patient to patient, and no method emerged as superior to the others in terms of nominal plan quality and robustness against uncertainties. The plans generated using LP-based robust optimization were more robust regarding patient setup and range uncertainties than were those generated using NLP-based robust optimization for the prostate cancer patients. However, the robustness of plans generated using NLP-based methods was superior for the skull-based and head and neck cancer patients. Overall, LP-based methods were suitable for the less challenging cancer cases in which all uncertainty scenarios were able to satisfy tight dose constraints, while NLP performed better in more difficult cases in which most uncertainty scenarios were hard to meet
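
    A minimal sketch of the "minmax"-style LP structure discussed above, using scipy.optimize.linprog on random data; the spot counts, dose-influence matrices, and prescription are invented, and real clinical formulations carry many more constraints than this toy.

```python
# Minimal "minmax" (worst-case) robust spot-weight LP on random data. It only
# illustrates the LP structure: minimize the worst-case absolute deviation from
# the prescription over all uncertainty scenarios simultaneously.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(42)
n_spots, n_voxels, n_scenarios = 20, 30, 3
prescription = np.full(n_voxels, 60.0)                        # Gy, toy target dose
D = [rng.uniform(0.5, 2.0, size=(n_voxels, n_spots)) for _ in range(n_scenarios)]

# variables: x = [spot weights w (n_spots), worst-case deviation t]
c = np.zeros(n_spots + 1)
c[-1] = 1.0                                                   # minimize t
A_ub, b_ub = [], []
for Ds in D:
    A_ub.append(np.hstack([Ds, -np.ones((n_voxels, 1))]))     #  Ds w - t <= d
    b_ub.append(prescription)
    A_ub.append(np.hstack([-Ds, -np.ones((n_voxels, 1))]))    # -Ds w - t <= -d
    b_ub.append(-prescription)
A_ub, b_ub = np.vstack(A_ub), np.concatenate(b_ub)

res = linprog(c, A_ub=A_ub, b_ub=b_ub,
              bounds=[(0, None)] * n_spots + [(0, None)])
print(f"worst-case deviation t* = {res.x[-1]:.2f} Gy over {n_scenarios} scenarios")
```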

  8. Optimal Formation of Multirobot Systems Based on a Recurrent Neural Network.

    Science.gov (United States)

    Wang, Yunpeng; Cheng, Long; Hou, Zeng-Guang; Yu, Junzhi; Tan, Min

    2016-02-01

    The optimal formation problem of multirobot systems is solved by a recurrent neural network in this paper. The desired formation is described by the shape theory. This theory can generate a set of feasible formations that share the same relative relation among robots. An optimal formation means finding one formation from the feasible formation set, which has the minimum distance to the initial formation of the multirobot system. Then, the formation problem is transformed into an optimization problem. In addition, the orientation, scale, and admissible range of the formation can also be considered as the constraints in the optimization problem. Furthermore, if all robots are identical, their positions in the system are exchangeable. Then, each robot does not necessarily move to one specific position in the formation. In this case, the optimal formation problem becomes a combinatorial optimization problem, whose optimal solution is very hard to obtain. Inspired by the penalty method, this combinatorial optimization problem can be approximately transformed into a convex optimization problem. Due to the involvement of the Euclidean norm in the distance, the objective functions of these optimization problems are nonsmooth. To solve these nonsmooth optimization problems efficiently, a recurrent neural network approach is employed, owing to its parallel computation ability. Finally, some simulations and experiments are given to validate the effectiveness and efficiency of the proposed optimal formation approach.
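
    Not the paper's recurrent-neural-network solver: a closed-form least-squares sketch of the underlying objective, fitting the desired formation shape (up to rotation, uniform scale, and translation, with correspondences assumed fixed) as closely as possible to the robots' initial positions, using scipy.linalg.orthogonal_procrustes.

```python
# Closed-form least-squares fit of a desired formation shape to the robots' initial
# positions (rotation + uniform scale + translation, fixed correspondences).
# Reflections are not excluded in this sketch; positions below are invented.
import numpy as np
from scipy.linalg import orthogonal_procrustes

def closest_formation(template, initial):
    """Return the formation similar to `template` that is nearest to `initial` (least squares)."""
    X, Y = np.asarray(template, float), np.asarray(initial, float)
    mu_x, mu_y = X.mean(axis=0), Y.mean(axis=0)
    Xc, Yc = X - mu_x, Y - mu_y
    R, sv_sum = orthogonal_procrustes(Xc, Yc)        # orthogonal R minimizing ||Xc R - Yc||
    s = sv_sum / np.square(Xc).sum()                 # optimal uniform scale for that R
    return s * Xc @ R + mu_y                         # translate back onto the initial centroid

square = np.array([[0, 0], [1, 0], [1, 1], [0, 1]], float)            # desired shape
start = np.array([[0.2, -0.1], [2.1, 0.3], [1.8, 2.2], [-0.1, 1.9]])  # initial robot positions
goal = closest_formation(square, start)
print(np.round(goal, 2))
```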

  9. Distributed Time-Varying Formation Robust Tracking for General Linear Multiagent Systems With Parameter Uncertainties and External Disturbances.

    Science.gov (United States)

    Hua, Yongzhao; Dong, Xiwang; Li, Qingdong; Ren, Zhang

    2017-05-18

    This paper investigates the time-varying formation robust tracking problems for high-order linear multiagent systems with a leader of unknown control input in the presence of heterogeneous parameter uncertainties and external disturbances. The followers need to accomplish an expected time-varying formation in the state space and track the state trajectory produced by the leader simultaneously. First, a time-varying formation robust tracking protocol with a totally distributed form is proposed utilizing the neighborhood state information. With the adaptive updating mechanism, neither any global knowledge about the communication topology nor the upper bounds of the parameter uncertainties, external disturbances and leader's unknown input are required in the proposed protocol. Then, in order to determine the control parameters, an algorithm with four steps is presented, where feasible conditions for the followers to accomplish the expected time-varying formation tracking are provided. Furthermore, based on the Lyapunov-like analysis theory, it is proved that the formation tracking error can converge to zero asymptotically. Finally, the effectiveness of the theoretical results is verified by simulation examples.

  10. Efficient and Robust Data Collection Using Compact Micro Hardware, Distributed Bus Architectures and Optimizing Software

    Science.gov (United States)

    Chau, Savio; Vatan, Farrokh; Randolph, Vincent; Baroth, Edmund C.

    2006-01-01

    Future In-Space propulsion systems for exploration programs will invariably require data collection from a large number of sensors. Consider the sensors needed for monitoring several vehicle systems' states of health, including the collection of structural health data, over a large area. This would include the fuel tanks, habitat structure, and science containment of systems required for Lunar, Mars, or deep space exploration. Such a system would consist of several hundred or even thousands of sensors. Conventional avionics system design will require these sensors to be connected to a few Remote Health Units (RHU), which are connected to robust, micro flight computers through a serial bus. This results in a large mass of cabling and unacceptable weight. This paper first gives a survey of several techniques that may reduce the cabling mass for sensors. These techniques can be categorized into four classes: power line communication, serial sensor buses, compound serial buses, and wireless networks. The power line communication approach uses the power line to carry both power and data, so that the conventional data lines can be eliminated. The serial sensor bus approach reduces most of the cabling by connecting all the sensors with a single (or redundant) serial bus. Many standard industrial control and sensor buses can support several hundred nodes; however, they have not been space qualified. Conventional avionics serial buses such as the Mil-Std-1553B bus and IEEE 1394a are space qualified but can support only a limited number of nodes. The third approach is to combine avionics buses to increase their addressability. The reliability, EMI/EMC, and flight qualification issues of wireless networks have to be addressed. Several wireless networks such as the IEEE 802.11 and Ultra Wide Band are surveyed in this paper. The placement of sensors can also affect cable mass. Excessive sensors increase the number of cables unnecessarily. Insufficient number of sensors

  11. A PARTIAL ROBUST OPTIMIZATION APPROACH TO INVENTORY MANAGEMENT FOR THE OFFLINE-TO-ONLINE PROBLEM UNDER DIFFERENT SELLING PRICES

    Institute of Scientific and Technical Information of China (English)

    Hui Yu; Jie Deng

    2017-01-01

    This study examines an optimal inventory strategy when a retailer markets a product at different selling prices through a dual-channel supply chain, comprising an online channel and an offline channel. Using the operating pattern of the offline-to-online (O2O) business model, we develop a partial robust optimization (PRO) model. Then, we provide a closed-form solution when only the mean and standard deviation of the online channel demand distribution are known and the offline channel demand follows a uniform distribution (partial robust). Specifically, owing to the good structural properties of the solution, we obtain a heuristic ordering formula for the general distribution case (i.e., the offline channel demand follows a general distribution). In addition, a series of numerical experiments proves the rationality of our conjecture. Moreover, after comparing our solution with other possible policies, we conclude that the PRO approach improves the performance of incorporating the internet into an existing supply chain and, thus, is able to adjust the level of conservativeness of the solution. Finally, in a degenerated situation, we compare our PRO approach with a combination of information approach. The results show that the PRO approach has more "robust" performance. As a result, a reasonable trade-off between robustness and performance is achieved.
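
    The record does not reproduce the paper's closed-form PRO policy, so the snippet below instead shows Scarf's classical distribution-free newsvendor rule, a much simpler single-channel stand-in for the same idea of ordering when only the demand mean and standard deviation are known; the numbers are illustrative.

```python
# Scarf's (1958) min-max newsvendor order quantity: the textbook answer for ordering
# when only the demand mean and standard deviation are known (single channel, no
# salvage value). Shown as a loosely related stand-in, not the paper's PRO policy.
import math

def scarf_order_quantity(mu, sigma, price, cost):
    """Distribution-free order quantity knowing only mean and standard deviation of demand."""
    if price <= cost:
        return 0.0                          # selling below cost: order nothing
    markup = (price - cost) / cost
    q = mu + (sigma / 2.0) * (math.sqrt(markup) - math.sqrt(1.0 / markup))
    return max(q, 0.0)

print(scarf_order_quantity(mu=100, sigma=30, price=15.0, cost=9.0))
```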

  12. Novel Robust Optimization and Power Allocation of Time Reversal-MIMO-UWB Systems in an Imperfect CSI

    Directory of Open Access Journals (Sweden)

    Sajjad Alizadeh

    2013-03-01

    Full Text Available Time Reversal (TR) technique is an attractive solution for a scenario where the transmission system employs low complexity receivers with multiple antennas at both transmitter and receiver sides. The TR technique can be combined with a high data rate MIMO-UWB system as a TR-MIMO-UWB system. In spite of TR's good performance in MIMO-UWB systems, it suffers from performance degradation under imperfect Channel State Information (CSI). In this paper, a robust TR pre-filter is first designed together with an MMSE equalizer in the TR-MIMO-UWB system, which is robust against imperfect channel conditions. We show that the robust pre-filter optimization technique considerably improves the BER performance of the TR-MIMO-UWB system under imperfect CSI, while the temporal focusing of the TR technique is kept, especially for high SNR values. Then, in order to further improve the system performance, a power loading scheme is developed by minimizing the average symbol error rate under imperfect CSI. Numerical and simulation results are presented to confirm the performance advantage attained by the proposed robust optimization and power loading in an imperfect CSI scenario.

  13. Optimal JPWL Forward Error Correction Rate Allocation for Robust JPEG 2000 Images and Video Streaming over Mobile Ad Hoc Networks

    Directory of Open Access Journals (Sweden)

    Benoit Macq

    2008-07-01

    Full Text Available Based on the analysis of real mobile ad hoc network (MANET) traces, we derive in this paper an optimal wireless JPEG 2000 compliant forward error correction (FEC) rate allocation scheme for robust streaming of images and videos over MANETs. The packet-based proposed scheme has a low complexity and is compliant with JPWL, the 11th part of the JPEG 2000 standard. The effectiveness of the proposed method is evaluated using a wireless Motion JPEG 2000 client/server application, and the ability of the optimal scheme to guarantee quality of service (QoS) to wireless clients is demonstrated.

  14. Integration of CCS, emissions trading and volatilities of fuel prices into sustainable energy planning, and its robust optimization

    International Nuclear Information System (INIS)

    Koo, Jamin; Han, Kyusang; Yoon, En Sup

    2011-01-01

    In this paper, a new approach has been proposed that allows a robust optimization of sustainable energy planning over a period of years. It is based on the modified energy flow optimization model (EFOM) and minimizes total costs in planning capacities of power plants and CCS to be added, stripped or retrofitted. In the process, it reduces risks due to a high volatility in fuel prices; it also provides robustness against infeasibility with respect to meeting the required emission level by adopting a penalty constant that corresponds to the price level of emission allowances. In this manner, the proposed methodology enables decision makers to determine the optimal capacities of power plants and/or CCS, as well as volumes of emissions trading in the future that will meet the required emission level and satisfy energy demand from various user-sections with minimum costs and maximum robustness. They can also gain valuable insights on the effects that the price of emission allowances has on the competitiveness of RES and CCS technologies; it may be used in, for example, setting appropriate subsidies and tax policies for promoting greater use of these technologies. The proposed methodology is applied to a case based on directions and volumes of energy flows in South Korea during the year 2008. (author)

  15. Robust Optimization on Regional WCO-for-Biodiesel Supply Chain under Supply and Demand Uncertainties

    Directory of Open Access Journals (Sweden)

    Yong Zhang

    2016-01-01

    Full Text Available This paper aims to design a robust waste cooking oil (WCO)-for-biodiesel supply chain under WCO supply and price as well as biodiesel demand and price uncertainties, so as to improve biorefineries’ ability to cope with the poor environment. A regional supply chain is first introduced based on the biggest WCO-for-biodiesel company in Changzhou, Jiangsu province, and it comprises three components: WCO supplier, biorefinery, and demand zone. Then a robust mixed integer linear model with multiple objectives (economic, environmental, and social) is proposed for both biorefinery location and transportation plans. After that, a heuristic algorithm based on a genetic algorithm is proposed to solve this model. Finally, the 27 cities in the Yangtze River delta are adopted to verify the proposed models and methods, and the sustainability and robustness of the biodiesel supply are discussed.

  16. Decentralized formation of random regular graphs for robust multi-agent networks

    KAUST Repository

    Yazicioglu, A. Yasin

    2014-12-15

    Multi-agent networks are often modeled via interaction graphs, where the nodes represent the agents and the edges denote direct interactions between the corresponding agents. Interaction graphs have significant impact on the robustness of networked systems. One family of robust graphs is the random regular graphs. In this paper, we present a locally applicable reconfiguration scheme to build random regular graphs through self-organization. For any connected initial graph, the proposed scheme maintains connectivity and the average degree while minimizing the degree differences and randomizing the links. As such, if the average degree of the initial graph is an integer, then connected regular graphs are realized uniformly at random as time goes to infinity.
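
    A small centralized illustration (with networkx) of the link-randomization ingredient of such schemes: connectivity-preserving double edge swaps keep every node's degree fixed while randomizing who is connected to whom. The paper's scheme is decentralized and additionally balances degrees toward a regular graph, which this sketch does not attempt; the initial graph below is arbitrary.

```python
# Degree-preserving, connectivity-preserving link randomization with networkx,
# illustrating only the "randomize the links" ingredient of such reconfiguration schemes.
import networkx as nx

G = nx.cycle_graph(20)                       # a connected, poorly mixed initial graph
nx.add_path(G, [0, 5, 10, 15])               # a few extra links so swaps have room to act
degrees_before = dict(G.degree())

nx.connected_double_edge_swap(G, nswap=100, seed=1)   # randomize links, stay connected

assert dict(G.degree()) == degrees_before    # degree sequence is preserved exactly
assert nx.is_connected(G)
print(f"average shortest path length after mixing: {nx.average_shortest_path_length(G):.2f}")
```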

  17. Formation of Robust Multi-Agent Networks through Self-Organizing Random Regular Graphs

    KAUST Repository

    Yasin Yazicioǧlu, A.; Egerstedt, Magnus; Shamma, Jeff S.

    2015-01-01

    Multi-Agent networks are often modeled as interaction graphs, where the nodes represent the agents and the edges denote some direct interactions. The robustness of a multi-Agent network to perturbations such as failures, noise, or malicious attacks largely depends on the corresponding graph. In many applications, networks are desired to have well-connected interaction graphs with a relatively small number of links. One family of such graphs is the random regular graphs. In this paper, we present a decentralized scheme for transforming any connected interaction graph with a possibly non-integer average degree of k into a connected random m-regular graph for some m ∈ [k, k + 2]. Accordingly, the agents improve the robustness of the network while maintaining a similar number of links as the initial configuration by locally adding or removing some edges. © 2015 IEEE.

  18. Formation of Robust Multi-Agent Networks through Self-Organizing Random Regular Graphs

    KAUST Repository

    Yasin Yazicioǧlu, A.

    2015-11-25

    Multi-Agent networks are often modeled as interaction graphs, where the nodes represent the agents and the edges denote some direct interactions. The robustness of a multi-Agent network to perturbations such as failures, noise, or malicious attacks largely depends on the corresponding graph. In many applications, networks are desired to have well-connected interaction graphs with a relatively small number of links. One family of such graphs is the random regular graphs. In this paper, we present a decentralized scheme for transforming any connected interaction graph with a possibly non-integer average degree of k into a connected random m-regular graph for some m ∈ [k, k + 2]. Accordingly, the agents improve the robustness of the network while maintaining a similar number of links as the initial configuration by locally adding or removing some edges. © 2015 IEEE.

  19. Probabilistic risk assessment for CO2 storage in geological formations: robust design and support for decision making under uncertainty

    Science.gov (United States)

    Oladyshkin, Sergey; Class, Holger; Helmig, Rainer; Nowak, Wolfgang

    2010-05-01

    CO2 storage in geological formations is currently being discussed intensively as a technology for mitigating CO2 emissions. However, any large-scale application requires a thorough analysis of the potential risks. Current numerical simulation models are too expensive for probabilistic risk analysis and for stochastic approaches based on brute-force repeated simulation. Even single deterministic simulations may require parallel high-performance computing. The multiphase flow processes involved are too non-linear for quasi-linear error propagation and other simplified stochastic tools. As an alternative approach, we propose a massive stochastic model reduction based on the probabilistic collocation method. The model response is projected onto an orthogonal basis of higher-order polynomials to approximate dependence on uncertain parameters (porosity, permeability etc.) and design parameters (injection rate, depth etc.). This allows for a non-linear propagation of model uncertainty affecting the predicted risk, ensures fast computation and provides a powerful tool for combining design variables and uncertain variables into one approach based on an integrative response surface. Thus, the design task of finding optimal injection regimes explicitly includes uncertainty, which leads to robust designs of the non-linear system that minimize failure probability and provide valuable support for risk-informed management decisions. We validate our proposed stochastic approach by Monte Carlo simulation using a common 3D benchmark problem (Class et al. Computational Geosciences 13, 2009). A reasonable compromise between computational efforts and precision was reached already with second-order polynomials. In our case study, the proposed approach yields a significant computational speedup by a factor of 100 compared to Monte Carlo simulation. We demonstrate that, due to the non-linearity of the flow and transport processes during CO2 injection, including uncertainty in the analysis
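
    A toy of the surrogate idea only, not the paper's multiphase CO2 model: a stand-in "expensive" simulator is replaced by a second-order polynomial response surface fitted at a few collocation points, and the cheap surrogate is then sampled in a Monte Carlo loop to estimate a failure probability. The model, threshold, and parameter distribution are invented.

```python
# Polynomial response-surface surrogate sketch: few expensive runs at collocation
# points, then cheap Monte Carlo sampling of the fitted low-order polynomial.
import numpy as np

def expensive_model(log_permeability):
    """Stand-in for a costly simulation: 'leakage' response vs. an uncertain input."""
    return 0.3 * log_permeability**2 + 0.5 * log_permeability + 1.0

collocation_points = np.array([-2.0, -1.0, 0.0, 1.0, 2.0])     # a handful of expensive runs
responses = expensive_model(collocation_points)
coeffs = np.polyfit(collocation_points, responses, deg=2)       # second-order surrogate
surrogate = np.poly1d(coeffs)

rng = np.random.default_rng(0)
samples = rng.normal(0.0, 1.0, size=100_000)                    # uncertain parameter samples
failure_prob = np.mean(surrogate(samples) > 3.0)                # threshold is illustrative
print(f"estimated failure probability: {failure_prob:.3f}")
```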

  20. Optimizing edge detectors for robust automatic threshold selection : Coping with edge curvature and noise

    NARCIS (Netherlands)

    Wilkinson, M.H.F.

    The Robust Automatic Threshold Selection algorithm was introduced as a threshold selection based on a simple image statistic. The statistic is an average of the grey levels of the pixels in an image weighted by the response at each pixel of a specific edge detector. Other authors have suggested that

  1. Testing the robustness of deterministic models of optimal dynamic pricing and lot-sizing for deteriorating items under stochastic conditions

    DEFF Research Database (Denmark)

    Ghoreishi, Maryam

    2018-01-01

    Many models within the field of optimal dynamic pricing and lot-sizing models for deteriorating items assume everything is deterministic and develop a differential equation as the core of analysis. Two prominent examples are the papers by Rajan et al. (Manag Sci 38:240–262, 1992) and Abad (Manag......, we will try to expose the model by Abad (1996) and Rajan et al. (1992) to stochastic inputs; however, designing these stochastic inputs such that they as closely as possible are aligned with the assumptions of those papers. We do our investigation through a numerical test where we test the robustness...... of the numerical results reported in Rajan et al. (1992) and Abad (1996) in a simulation model. Our numerical results seem to confirm that the results stated in these papers are indeed robust when being imposed to stochastic inputs....

  2. Robustness analysis of the Zhang neural network for online time-varying quadratic optimization

    International Nuclear Information System (INIS)

    Zhang Yunong; Ruan Gongqin; Li Kene; Yang Yiwen

    2010-01-01

    A general type of recurrent neural network (termed as Zhang neural network, ZNN) has recently been proposed by Zhang et al for the online solution of time-varying quadratic-minimization (QM) and quadratic-programming (QP) problems. Global exponential convergence of the ZNN could be achieved theoretically in an ideal error-free situation. In this paper, with the normal differentiation and dynamics-implementation errors considered, the robustness properties of the ZNN model are investigated for solving these time-varying problems. In addition, linear activation functions and power-sigmoid activation functions could be applied to such a perturbed ZNN model. Both theoretical-analysis and computer-simulation results demonstrate the good ZNN robustness and superior performance for online time-varying QM and QP problem solving, especially when using power-sigmoid activation functions.

  3. An intrinsic robust rank-one-approximation approach for currency portfolio optimization

    OpenAIRE

    Hongxuan Huang; Zhengjun Zhang

    2018-01-01

    A currency portfolio is a special kind of wealth whose value fluctuates with foreign exchange rates over time, which possesses the 3Vs (volume, variety and velocity) properties of big data in the currency market. In this paper, an intrinsic robust rank one approximation (ROA) approach is proposed to maximize the value of currency portfolios over time. The main results of the paper include four parts: Firstly, under the assumptions about the currency market, the currency portfolio optimization problem ...

  4. Tools for Trustworthy Autonomy: Robust Predictions, Intuitive Control, and Optimized Interaction

    OpenAIRE

    Driggs Campbell, Katherine Rose

    2017-01-01

    In the near future, robotics will impact nearly every aspect of life. Yet for technology to smoothly integrate into society, we need interactive systems to be well modeled and predictable; have robust decision making and control; and be trustworthy to improve cooperation and interaction. To achieve these goals, we propose taking a human-centered approach to ease the transition into human-dominated fields. In this work, our modeling methods and control schemes are validated through user stu...

  5. A robust stochastic approach for design optimization of air cooled heat exchangers

    Energy Technology Data Exchange (ETDEWEB)

    Doodman, A.R.; Fesanghary, M.; Hosseini, R. [Department of Mechanical Engineering, Amirkabir University of Technology, 424-Hafez Avenue, 15875-4413 Tehran (Iran)

    2009-07-15

    This study investigates the use of global sensitivity analysis (GSA) and harmony search (HS) algorithm for design optimization of air cooled heat exchangers (ACHEs) from the economic viewpoint. In order to reduce the size of the optimization problem, GSA is performed to examine the effect of the design parameters and to identify the non-influential parameters. Then HS is applied to optimize influential parameters. To demonstrate the ability of the HS algorithm a case study is considered and for validation purpose, genetic algorithm (GA) is also applied to this case study. Results reveal that the HS algorithm converges to optimum solution with higher accuracy in comparison with GA. (author)

  6. A robust stochastic approach for design optimization of air cooled heat exchangers

    International Nuclear Information System (INIS)

    Doodman, A.R.; Fesanghary, M.; Hosseini, R.

    2009-01-01

    This study investigates the use of global sensitivity analysis (GSA) and harmony search (HS) algorithm for design optimization of air cooled heat exchangers (ACHEs) from the economic viewpoint. In order to reduce the size of the optimization problem, GSA is performed to examine the effect of the design parameters and to identify the non-influential parameters. Then HS is applied to optimize influential parameters. To demonstrate the ability of the HS algorithm a case study is considered and for validation purpose, genetic algorithm (GA) is also applied to this case study. Results reveal that the HS algorithm converges to optimum solution with higher accuracy in comparison with GA

  7. SU-E-T-452: Impact of Respiratory Motion On Robustly-Optimized Intensity-Modulated Proton Therapy to Treat Lung Cancers

    International Nuclear Information System (INIS)

    Liu, W; Schild, S; Bues, M; Liao, Z; Sahoo, N; Park, P; Li, H; Li, Y; Li, X; Shen, J; Anand, A; Dong, L; Zhu, X; Mohan, R

    2014-01-01

    Purpose: We compared conventionally optimized intensity-modulated proton therapy (IMPT) treatment plans against the worst-case robustly optimized treatment plans for lung cancer. The comparison of the two IMPT optimization strategies focused on the resulting plans' ability to retain dose objectives under the influence of patient set-up, inherent proton range uncertainty, and dose perturbation caused by respiratory motion. Methods: For each of the 9 lung cancer cases two treatment plans were created accounting for treatment uncertainties in two different ways: the first used the conventional method: delivery of prescribed dose to the planning target volume (PTV) that is geometrically expanded from the internal target volume (ITV). The second employed the worst-case robust optimization scheme that addressed set-up and range uncertainties through beamlet optimization. The plan optimality and plan robustness were calculated and compared. Furthermore, the effects on dose distributions of the changes in patient anatomy due to respiratory motion were investigated for both strategies by comparing the corresponding plan evaluation metrics at the end-inspiration and end-expiration phases and the absolute differences between these phases. The mean plan evaluation metrics of the two groups were compared using two-sided paired t-tests. Results: Without respiratory motion considered, we affirmed that worst-case robust optimization is superior to PTV-based conventional optimization in terms of plan robustness and optimality. With respiratory motion considered, robust optimization still leads to dose distributions that are more robust to respiratory motion for targets and comparable or even better plan optimality [D95% ITV: 96.6% versus 96.1% (p=0.26), D5% - D95% ITV: 10.0% versus 12.3% (p=0.082), D1% spinal cord: 31.8% versus 36.5% (p=0.035)]. Conclusion: Worst-case robust optimization led to superior solutions for lung IMPT. Despite the fact that robust optimization did not explicitly

  8. Robust design of decentralized power system stabilizers using meta-heuristic optimization techniques for multimachine systems

    Directory of Open Access Journals (Sweden)

    Jeevanandham Arumugam

    2009-01-01

    Full Text Available In this paper a classical lead-lag power system stabilizer is used for demonstration. The stabilizer parameters are selected in such a manner as to damp the rotor oscillations. The problem of selecting the stabilizer parameters is converted into a simple optimization problem with an eigenvalue-based objective function, and it is proposed to employ simulated annealing and particle swarm optimization for solving the optimization problem. The objective function allows the selection of the stabilizer parameters to optimally place the closed-loop eigenvalues in the left-hand side of the complex s-plane. The single-machine infinite-bus system and the 10-machine 39-bus system are considered for this study. The effectiveness of the stabilizer tuned using the best technique in enhancing the stability of the power system is examined. Stability is confirmed through eigenvalue analysis and simulation results, and the most suitable heuristic technique is selected for the best performance of the system.
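
    A toy version of the tuning idea, not the multimachine model of the paper: simulated annealing searches stabilizer gains so that the eigenvalues of a small closed-loop state matrix are pushed as far into the left half of the s-plane as possible (minimum spectral abscissa). The plant, gain bounds, and cooling schedule are invented.

```python
# Simulated-annealing gain tuning on a toy second-order plant: the objective is the
# spectral abscissa (largest real part of the closed-loop eigenvalues), to be minimized.
import numpy as np

def closed_loop_matrix(gains):
    k1, k2 = gains
    # lightly damped 2nd-order plant with feedback gains k1 (position) and k2 (rate)
    return np.array([[0.0, 1.0],
                     [-(1.0 + k1), -(0.1 + k2)]])

def spectral_abscissa(gains):
    return np.max(np.linalg.eigvals(closed_loop_matrix(gains)).real)

rng = np.random.default_rng(7)
gains = np.array([0.0, 0.0])
temperature = 1.0
for step in range(2000):                        # simple simulated-annealing loop
    candidate = np.clip(gains + rng.normal(0, 0.2, size=2), 0.0, 10.0)
    delta = spectral_abscissa(candidate) - spectral_abscissa(gains)
    if delta < 0 or rng.random() < np.exp(-delta / temperature):
        gains = candidate
    temperature *= 0.995                        # geometric cooling schedule
print("tuned gains:", np.round(gains, 2), " spectral abscissa:", round(spectral_abscissa(gains), 3))
```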

  9. Robust output observer-based control of neutral uncertain systems with discrete and distributed time delays: LMI optimization approach

    International Nuclear Information System (INIS)

    Chen, J.-D.

    2007-01-01

    In this paper, the robust control problem of output dynamic observer-based control for a class of uncertain neutral systems with discrete and distributed time delays is considered. Linear matrix inequality (LMI) optimization approach is used to design the new output dynamic observer-based controls. Three classes of observer-based controls are proposed and the maximal perturbed bound is given. Based on the results of this paper, the constraint of matrix equality is not necessary for designing the observer-based controls. Finally, a numerical example is given to illustrate the usefulness of the proposed method

  10. An integer optimization algorithm for robust identification of non-linear gene regulatory networks

    Directory of Open Access Journals (Sweden)

    Chemmangattuvalappil Nishanth

    2012-09-01

    Full Text Available Abstract Background Reverse engineering gene networks and identifying regulatory interactions are integral to understanding cellular decision making processes. Advancement in high throughput experimental techniques has initiated innovative data driven analysis of gene regulatory networks. However, inherent noise associated with biological systems requires numerous experimental replicates for reliable conclusions. Furthermore, evidence of robust algorithms directly exploiting basic biological traits is scarce. Such algorithms are expected to be efficient in their performance and robust in their prediction. Results We have developed a network identification algorithm to accurately infer both the topology and strength of regulatory interactions from time series gene expression data in the presence of significant experimental noise and non-linear behavior. In this novel formalism, we have addressed data variability in biological systems by integrating network identification with the bootstrap resampling technique, hence predicting robust interactions from limited experimental replicates subjected to noise. Furthermore, we have incorporated non-linearity in gene dynamics using the S-system formulation. The basic network identification formulation exploits the trait of sparsity of biological interactions. Towards that, the identification algorithm is formulated as an integer-programming problem by introducing binary variables for each network component. The objective function is targeted to minimize the network connections subjected to the constraint of maximal agreement between the experimental and predicted gene dynamics. The developed algorithm is validated using both in silico and experimental data-sets. These studies show that the algorithm can accurately predict the topology and connection strength of the in silico networks, as quantified by high precision and recall, and small discrepancy between the actual and predicted kinetic parameters

  11. The effect of the signalling scheme on the robustness of pattern formation in development

    KAUST Repository

    Kang, H.-W.

    2012-03-21

    Pattern formation in development is a complex process which involves spatially distributed signals called morphogens that influence gene expression and thus the phenotypic identity of cells. Usually different cell types are spatially segregated, and the boundary between them may be determined by a threshold value of some state variable. The question arises as to how sensitive the location of such a boundary is to variations in properties, such as parameter values, that characterize the system. Here, we analyse both deterministic and stochastic reaction-diffusion models of pattern formation with a view towards understanding how the signalling scheme used for patterning affects the variability of boundary determination between cell types in a developing tissue.

  12. Sensory Cortical Plasticity Participates in the Epigenetic Regulation of Robust Memory Formation

    OpenAIRE

    Mimi L. Phan; Kasia M. Bieszczad

    2016-01-01

    Neuroplasticity remodels sensory cortex across the lifespan. A function of adult sensory cortical plasticity may be capturing available information during perception for memory formation. The degree of experience-dependent remodeling in sensory cortex appears to determine memory strength and specificity for important sensory signals. A key open question is how plasticity is engaged to induce different degrees of sensory cortical remodeling. Neural plasticity for long-term memory requires the ...

  13. Robust Drones Formation Control in 5G Wireless Sensor Network Using mmWave

    Directory of Open Access Journals (Sweden)

    Shan Meng

    2018-01-01

    Full Text Available The formation control of drones in a 5G wireless sensor network is discussed. The base station (BS) is used to receive backhaul position signals from the lead drone in the formation and launches a beam to the lead drone as fronthaul flying-signal enhancement. This is a promising approach to raise the formation strength of drones during flight control. The BS can adjust the direction of its antennas and transmit energy to the lead drone, which could greatly enlarge the number of receivers and increase the transmission speed of the data links. The millimeter-wave (mmWave) communication system offers new opportunities to meet this requirement owing to the tremendous amount of available spectrum. However, massive non-line-of-sight (NLoS) transmission and site constraints in urban environments severely challenge the conventional deployment of terrestrial low power nodes (LPNs). Simulation experiments have been performed to verify the availability and effectiveness of mmWave in a 5G wireless sensor network.

  14. Experimental Modeling of Monolithic Resistors for Silicon ICS with a Robust Optimizer-Driving Scheme

    Directory of Open Access Journals (Sweden)

    Philippe Leduc

    2002-06-01

    Full Text Available Today, an exhaustive library of models describing the electrical behavior of integrated passive components in the radio-frequency range is essential for the simulation and optimization of complex circuits. In this work, a preliminary study has been done on Tantalum Nitride (TaN) resistors integrated on silicon, and this leads to a single p-type lumped-element circuit. An efficient extraction technique will be presented to provide a computer-driven optimizer with relevant initial model parameter values (the "guess-timate"). The results show the uniqueness, in most cases, of the lumped-element determination, which leads to a precise simulation of self-resonant frequencies.

  15. Robust optimization of the laser induced damage threshold of dielectric mirrors for high power lasers.

    Science.gov (United States)

    Chorel, Marine; Lanternier, Thomas; Lavastre, Éric; Bonod, Nicolas; Bousquet, Bruno; Néauport, Jérôme

    2018-04-30

    We report on a numerical optimization of the laser induced damage threshold of multi-dielectric high reflection mirrors in the sub-picosecond regime. We highlight the interplay between the electric field distribution, refractive index and intrinsic laser induced damage threshold of the materials on the overall laser induced damage threshold (LIDT) of the multilayer. We describe an optimization method of the multilayer that minimizes the field enhancement in high refractive index materials while preserving a near perfect reflectivity. This method yields a significant improvement of the damage resistance since a maximum increase of 40% can be achieved on the overall LIDT of the multilayer.

  16. Optimizing Linear Functions with Randomized Search Heuristics - The Robustness of Mutation

    DEFF Research Database (Denmark)

    Witt, Carsten

    2012-01-01

    The analysis of randomized search heuristics on classes of functions is fundamental for the understanding of the underlying stochastic process and the development of suitable proof techniques. Recently, remarkable progress has been made in bounding the expected optimization time of the simple (1...
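
    A minimal (1+1) evolutionary algorithm with standard bit mutation maximizing a linear pseudo-Boolean function, the kind of heuristic and function class this line of runtime analysis concerns; the weights and problem size are arbitrary.

```python
# (1+1) EA with standard bit mutation on a linear pseudo-Boolean function
# f(x) = sum_i w_i x_i; the optimum is the all-ones string.
import random

random.seed(0)
n = 50
weights = [random.randint(1, 10) for _ in range(n)]      # arbitrary positive weights

def f(x):
    return sum(w for w, bit in zip(weights, x) if bit)

x = [random.randint(0, 1) for _ in range(n)]
iterations = 0
while f(x) < sum(weights):                                # stop at the optimum
    iterations += 1
    y = [bit ^ (random.random() < 1.0 / n) for bit in x]  # flip each bit with prob. 1/n
    if f(y) >= f(x):                                      # accept if not worse
        x = y
print(f"optimum reached after {iterations} mutations")
```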

  17. Robust optimal control of material flows in demand-driven supply networks

    NARCIS (Netherlands)

    Laumanns, M.; Lefeber, A.A.J.

    2006-01-01

    We develop a model based on stochastic discrete-time controlled dynamical systems in order to derive optimal policies for controlling the material flow in supply networks. Each node in the network is described as a transducer such that the dynamics of the material and information flows within the entire

  18. The Orienteering Problem under Uncertainty Stochastic Programming and Robust Optimization compared

    NARCIS (Netherlands)

    Evers, L.; Glorie, K.; Ster, S. van der; Barros, A.I.; Monsuur, H.

    2012-01-01

    The Orienteering Problem (OP) is a generalization of the well-known traveling salesman problem and has many interesting applications in logistics, tourism and defense. To reflect real-life situations, we focus on an uncertain variant of the OP. Two main approaches that deal with optimization under

  19. Benefits of using an optimization methodology for identifying robust process integration investments under uncertainty-A pulp mill example

    International Nuclear Information System (INIS)

    Svensson, Elin; Berntsson, Thore; Stroemberg, Ann-Brith

    2009-01-01

    This paper presents a case study on the optimization of process integration investments in a pulp mill considering uncertainties in future electricity and biofuel prices and CO2 emissions charges. The work follows the methodology described in Svensson et al. [Svensson, E., Berntsson, T., Stroemberg, A.-B., Patriksson, M., 2008b. An optimization methodology for identifying robust process integration investments under uncertainty. Energy Policy, in press, (doi:10.1016/j.enpol.2008.10.023)] where a scenario-based approach is proposed for the modelling of uncertainties. The results show that the proposed methodology provides a way to handle the time dependence and the uncertainties of the parameters. For the analyzed case, a robust solution is found which turns out to be a combination of two opposing investment strategies. The difference between short-term and strategic views for the investment decision is analyzed and it is found that uncertainties are increasingly important to account for as a more strategic view is employed. Furthermore, the results imply that the obvious effect of policy instruments aimed at decreasing CO2 emissions is, in applications like this, an increased profitability for all energy efficiency investments, and not as much a shift between different alternatives

  20. Benefits of using an optimization methodology for identifying robust process integration investments under uncertainty-A pulp mill example

    Energy Technology Data Exchange (ETDEWEB)

    Svensson, Elin [Department of Energy and Environment, Division of Heat and Power Technology, Chalmers University of Technology, SE-412 96 Goeteborg (Sweden)], E-mail: elin.svensson@chalmers.se; Berntsson, Thore [Department of Energy and Environment, Division of Heat and Power Technology, Chalmers University of Technology, SE-412 96 Goeteborg (Sweden); Stroemberg, Ann-Brith [Fraunhofer-Chalmers Research Centre for Industrial Mathematics, Chalmers Science Park, SE-412 88 Gothenburg (Sweden)

    2009-03-15

    This paper presents a case study on the optimization of process integration investments in a pulp mill considering uncertainties in future electricity and biofuel prices and CO2 emissions charges. The work follows the methodology described in Svensson et al. [Svensson, E., Berntsson, T., Stroemberg, A.-B., Patriksson, M., 2008b. An optimization methodology for identifying robust process integration investments under uncertainty. Energy Policy, in press, (doi:10.1016/j.enpol.2008.10.023)] where a scenario-based approach is proposed for the modelling of uncertainties. The results show that the proposed methodology provides a way to handle the time dependence and the uncertainties of the parameters. For the analyzed case, a robust solution is found which turns out to be a combination of two opposing investment strategies. The difference between short-term and strategic views for the investment decision is analyzed and it is found that uncertainties are increasingly important to account for as a more strategic view is employed. Furthermore, the results imply that the obvious effect of policy instruments aimed at decreasing CO2 emissions is, in applications like this, an increased profitability for all energy efficiency investments, and not as much a shift between different alternatives.

  1. Benefits of using an optimization methodology for identifying robust process integration investments under uncertainty. A pulp mill example

    Energy Technology Data Exchange (ETDEWEB)

    Svensson, Elin; Berntsson, Thore [Department of Energy and Environment, Division of Heat and Power Technology, Chalmers University of Technology, SE-412 96 Goeteborg (Sweden); Stroemberg, Ann-Brith [Fraunhofer-Chalmers Research Centre for Industrial Mathematics, Chalmers Science Park, SE-412 88 Gothenburg (Sweden)

    2009-03-15

    This paper presents a case study on the optimization of process integration investments in a pulp mill considering uncertainties in future electricity and biofuel prices and CO2 emissions charges. The work follows the methodology described in Svensson et al. [Svensson, E., Berntsson, T., Stroemberg, A.-B., Patriksson, M., 2008b. An optimization methodology for identifying robust process integration investments under uncertainty. Energy Policy, in press, doi:10.1016/j.enpol.2008.10.023] where a scenario-based approach is proposed for the modelling of uncertainties. The results show that the proposed methodology provides a way to handle the time dependence and the uncertainties of the parameters. For the analyzed case, a robust solution is found which turns out to be a combination of two opposing investment strategies. The difference between short-term and strategic views for the investment decision is analyzed and it is found that uncertainties are increasingly important to account for as a more strategic view is employed. Furthermore, the results imply that the obvious effect of policy instruments aimed at decreasing CO2 emissions is, in applications like this, an increased profitability for all energy efficiency investments, and not as much a shift between different alternatives. (author)

  2. Dual-loop self-optimizing robust control of wind power generation with Doubly-Fed Induction Generator.

    Science.gov (United States)

    Chen, Quan; Li, Yaoyu; Seem, John E

    2015-09-01

    This paper presents a self-optimizing robust control scheme that can maximize the power generation for a variable speed wind turbine with Doubly-Fed Induction Generator (DFIG) operated in Region 2. A dual-loop control structure is proposed to synergize the conversion from aerodynamic power to rotor power and the conversion from rotor power to electrical power. The outer loop is an Extremum Seeking Control (ESC) based generator torque regulation via the electric power feedback. The ESC can search for the optimal generator torque constant to maximize the rotor power without wind measurement or accurate knowledge of the power map. The inner loop is a vector-control based scheme that can both regulate the generator torque requested by the ESC and maximize the conversion from rotor power to grid power. An ℋ∞ controller is synthesized for this maximization, with performance specifications defined based upon the spectrum of the rotor power obtained by the ESC. The controller is also designed to be robust against variations of some generator parameters. The proposed control strategy is validated via a simulation study based on the synergy of several software packages, including TurbSim and FAST (developed by NREL), Simulink and SimPowerSystems. Copyright © 2015 ISA. Published by Elsevier Ltd. All rights reserved.
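
    As a hedged illustration of the outer-loop idea, the following toy loop implements textbook perturbation-based extremum seeking: a sinusoidal dither is added to the torque-constant estimate, the measured power is demodulated against the dither, and the estimate is integrated up the estimated gradient. The static power map, gains and dither settings are invented stand-ins, not the DFIG model or the controller of the paper.

      # Toy perturbation-based extremum seeking: adapt a torque constant k to
      # maximize a hypothetical static power map P(k).  All gains are illustrative.
      import math

      def power_map(k):                      # stand-in for the unknown rotor-power map
          return -(k - 2.0) ** 2 + 4.0       # maximum at k = 2

      dt, a, w = 0.01, 0.05, 5.0             # time step, dither amplitude, dither freq (rad/s)
      gain, hp = 0.8, 1.0                    # integrator gain, low-pass cut-off
      k_hat, xi = 0.5, 0.0                   # initial estimate, filter state

      for step in range(20000):
          t = step * dt
          k = k_hat + a * math.sin(w * t)    # inject dither
          y = power_map(k)                   # "measured" power
          xi += dt * hp * (y - xi)           # low-pass; (y - xi) is the high-passed signal
          grad_est = (y - xi) * math.sin(w * t)   # demodulate against the dither
          k_hat += dt * gain * grad_est      # gradient ascent on the estimated slope

      print("converged torque constant ~ %.3f (true optimum 2.0)" % k_hat)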

  3. Practical Robust Optimization Method for Unit Commitment of a System with Integrated Wind Resource

    Directory of Open Access Journals (Sweden)

    Yuanchao Yang

    2017-01-01

    Full Text Available Unit commitment, one of the significant tasks in power system operations, faces new challenges as the system uncertainty increases dramatically due to the integration of time-varying resources, such as wind. To address these challenges, we propose the formulation and solution of a generalized unit commitment problem for a system with integrated wind resources. Given prespecified interval information, acquired from a real central wind forecasting system, for the uncertainty representation of nodal wind injections and their correlations, the proposed unit commitment solution is computationally tractable and robust against all uncertain wind power injection realizations. We provide a solution approach to tackle this mathematically complex problem and illustrate the capabilities of the proposed mixed integer solution approach on the large-scale power system of the Northwest China Grid. The numerical results demonstrate that the approach is realistic and not overly conservative in terms of the resulting dispatch cost outcomes.

  4. Using Multi-Objective Optimization to Explore Robust Policies in the Colorado River Basin

    Science.gov (United States)

    Alexander, E.; Kasprzyk, J. R.; Zagona, E. A.; Prairie, J. R.; Jerla, C.; Butler, A.

    2017-12-01

    The long term reliability of water deliveries in the Colorado River Basin has degraded due to the imbalance of growing demand and dwindling supply. The Colorado River meanders 1,450 miles across a watershed that covers seven US states and Mexico and is an important cultural, economic, and natural resource for nearly 40 million people. Its complex operating policy is based on the "Law of the River," which has evolved since the Colorado River Compact in 1922. Recent (2007) refinements to address shortage reductions and coordinated operations of Lakes Powell and Mead were negotiated with stakeholders in which thousands of scenarios were explored to identify operating guidelines that could ultimately be agreed on. This study explores a different approach to searching for robust operating policies to inform the policy making process. The Colorado River Simulation System (CRSS), a long-term water management simulation model implemented in RiverWare, is combined with the Borg multi-objective evolutionary algorithm (MOEA) to solve an eight objective problem formulation. Basin-wide performance metrics are closely tied to system health through incorporating critical reservoir pool elevations, duration, frequency and quantity of shortage reductions in the objective set. For example, an objective to minimize the frequency that Lake Powell falls below the minimum power pool elevation of 3,490 feet for Glen Canyon Dam protects a vital economic and renewable energy source for the southwestern US. The decision variables correspond to operating tiers in Lakes Powell and Mead that drive the implementation of various shortage and release policies, thus affecting system performance. The result will be a set of non-dominated solutions that can be compared with respect to their trade-offs based on the various objectives. These could inform policy making processes by eliminating dominated solutions and revealing robust solutions that could remain hidden under conventional analysis.

  5. Robustness, efficiency, and optimality in the Fenna-Matthews-Olson photosynthetic pigment-protein complex

    Energy Technology Data Exchange (ETDEWEB)

    Baker, Lewis A.; Habershon, Scott, E-mail: S.Habershon@warwick.ac.uk [Department of Chemistry and Centre for Scientific Computing, University of Warwick, Coventry CV4 7AL (United Kingdom)

    2015-09-14

    Pigment-protein complexes (PPCs) play a central role in facilitating excitation energy transfer (EET) from light-harvesting antenna complexes to reaction centres in photosynthetic systems; understanding molecular organisation in these biological networks is key to developing better artificial light-harvesting systems. In this article, we combine quantum-mechanical simulations and a network-based picture of transport to investigate how chromophore organization and protein environment in PPCs impacts on EET efficiency and robustness. In a prototypical PPC model, the Fenna-Matthews-Olson (FMO) complex, we consider the impact on EET efficiency of both disrupting the chromophore network and changing the influence of (local and global) environmental dephasing. Surprisingly, we find a large degree of resilience to changes in both chromophore network and protein environmental dephasing, the extent of which is greater than previously observed; for example, FMO maintains EET when 50% of the constituent chromophores are removed, or when environmental dephasing fluctuations vary over two orders-of-magnitude relative to the in vivo system. We also highlight the fact that the influence of local dephasing can be strongly dependent on the characteristics of the EET network and the initial excitation; for example, initial excitations resulting in rapid coherent decay are generally insensitive to the environment, whereas the incoherent population decay observed following excitation at weakly coupled chromophores demonstrates a more pronounced dependence on dephasing rate as a result of the greater possibility of local exciton trapping. Finally, we show that the FMO electronic Hamiltonian is not particularly optimised for EET; instead, it is just one of many possible chromophore organisations which demonstrate a good level of EET transport efficiency following excitation at different chromophores. Overall, these robustness and efficiency characteristics are attributed to the highly

  6. Robustness, efficiency, and optimality in the Fenna-Matthews-Olson photosynthetic pigment-protein complex

    International Nuclear Information System (INIS)

    Baker, Lewis A.; Habershon, Scott

    2015-01-01

    Pigment-protein complexes (PPCs) play a central role in facilitating excitation energy transfer (EET) from light-harvesting antenna complexes to reaction centres in photosynthetic systems; understanding molecular organisation in these biological networks is key to developing better artificial light-harvesting systems. In this article, we combine quantum-mechanical simulations and a network-based picture of transport to investigate how chromophore organization and protein environment in PPCs impacts on EET efficiency and robustness. In a prototypical PPC model, the Fenna-Matthews-Olson (FMO) complex, we consider the impact on EET efficiency of both disrupting the chromophore network and changing the influence of (local and global) environmental dephasing. Surprisingly, we find a large degree of resilience to changes in both chromophore network and protein environmental dephasing, the extent of which is greater than previously observed; for example, FMO maintains EET when 50% of the constituent chromophores are removed, or when environmental dephasing fluctuations vary over two orders-of-magnitude relative to the in vivo system. We also highlight the fact that the influence of local dephasing can be strongly dependent on the characteristics of the EET network and the initial excitation; for example, initial excitations resulting in rapid coherent decay are generally insensitive to the environment, whereas the incoherent population decay observed following excitation at weakly coupled chromophores demonstrates a more pronounced dependence on dephasing rate as a result of the greater possibility of local exciton trapping. Finally, we show that the FMO electronic Hamiltonian is not particularly optimised for EET; instead, it is just one of many possible chromophore organisations which demonstrate a good level of EET transport efficiency following excitation at different chromophores. Overall, these robustness and efficiency characteristics are attributed to the highly

  7. Robust optimization of the output voltage of nanogenerators by statistical design of experiments

    KAUST Repository

    Song, Jinhui; Xie, Huizhi; Wu, Wenzhuo; Roshan Joseph, V.; Jeff Wu, C. F.; Wang, Zhong Lin

    2010-01-01

    Nanogenerators were first demonstrated by deflecting aligned ZnO nanowires using a conductive atomic force microscopy (AFM) tip. The output of a nanogenerator is affected by three parameters: tip normal force, tip scanning speed, and tip abrasion. In this work, systematic experimental studies have been carried out to examine the combined effects of these three parameters on the output, using statistical design of experiments. A statistical model has been built to analyze the data and predict the optimal parameter settings. For an AFM tip of cone angle 70° coated with Pt, and ZnO nanowires with a diameter of 50 nm and lengths of 600 nm to 1 μm, the optimized parameters for the nanogenerator were found to be a normal force of 137 nN and scanning speed of 40 μm/s, rather than the conventional settings of 120 nN for the normal force and 30 μm/s for the scanning speed. A nanogenerator with the optimized settings has three times the average output voltage of one with the conventional settings. © 2010 Tsinghua University Press and Springer-Verlag Berlin Heidelberg.
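
    The record above reports a statistical design-of-experiments study; the sketch below shows, on synthetic data, the generic workflow it implies: fit a full second-order response surface to factorial voltage measurements and read off the predicted optimum settings. The force/speed levels, the noise level and the "true" optimum placed near 137 nN and 40 μm/s are assumptions made only so the example is self-contained.

      # Sketch: fit a quadratic response surface V(F, s) to synthetic output-voltage
      # data on a small factorial grid, then pick the best setting on a fine grid.
      import numpy as np

      rng = np.random.default_rng(0)
      F = np.array([110.0, 120.0, 130.0, 140.0])  # normal force levels (nN), illustrative
      S = np.array([20.0, 30.0, 40.0, 50.0])      # scanning speed levels (um/s), illustrative
      FF, SS = np.meshgrid(F, S)
      # synthetic "measured" voltages with an optimum placed near (137 nN, 40 um/s)
      V = 8 - 0.004 * (FF - 137) ** 2 - 0.01 * (SS - 40) ** 2 + rng.normal(0, 0.1, FF.shape)

      # design matrix for a full second-order model
      f, s, v = FF.ravel(), SS.ravel(), V.ravel()
      X = np.column_stack([np.ones_like(f), f, s, f * s, f ** 2, s ** 2])
      beta, *_ = np.linalg.lstsq(X, v, rcond=None)

      # evaluate the fitted surface on a fine grid and report the best setting
      fg, sg = np.meshgrid(np.linspace(110, 140, 301), np.linspace(20, 50, 301))
      Xg = np.column_stack([np.ones(fg.size), fg.ravel(), sg.ravel(),
                            (fg * sg).ravel(), (fg ** 2).ravel(), (sg ** 2).ravel()])
      pred = Xg @ beta
      i = np.argmax(pred)
      print("predicted optimum: F = %.0f nN, speed = %.0f um/s" % (fg.ravel()[i], sg.ravel()[i]))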

  8. Robust optimization of the output voltage of nanogenerators by statistical design of experiments

    KAUST Repository

    Song, Jinhui

    2010-09-01

    Nanogenerators were first demonstrated by deflecting aligned ZnO nanowires using a conductive atomic force microscopy (AFM) tip. The output of a nanogenerator is affected by three parameters: tip normal force, tip scanning speed, and tip abrasion. In this work, systematic experimental studies have been carried out to examine the combined effects of these three parameters on the output, using statistical design of experiments. A statistical model has been built to analyze the data and predict the optimal parameter settings. For an AFM tip of cone angle 70° coated with Pt, and ZnO nanowires with a diameter of 50 nm and lengths of 600 nm to 1 μm, the optimized parameters for the nanogenerator were found to be a normal force of 137 nN and scanning speed of 40 μm/s, rather than the conventional settings of 120 nN for the normal force and 30 μm/s for the scanning speed. A nanogenerator with the optimized settings has three times the average output voltage of one with the conventional settings. © 2010 Tsinghua University Press and Springer-Verlag Berlin Heidelberg.

  9. Robust Sliding Mode Control Based on GA Optimization and CMAC Compensation for Lower Limb Exoskeleton

    Directory of Open Access Journals (Sweden)

    Yi Long

    2016-01-01

    Full Text Available A lower limb assistive exoskeleton is designed to help operators walk or carry payloads. The exoskeleton is required to shadow human motion intent accurately and compliantly to prevent incoordination. If the user’s intention is estimated accurately, a precise position control strategy will improve collaboration between the user and the exoskeleton. In this paper, a hybrid position control scheme, combining sliding mode control (SMC) with a cerebellar model articulation controller (CMAC) neural network, is proposed to control the exoskeleton to react appropriately to human motion intent. A genetic algorithm (GA) is utilized to determine the optimal sliding surface and the sliding control law to improve performance of SMC. The proposed control strategy (SMC_GA_CMAC) is compared with three other types of approaches, that is, conventional SMC without optimization, optimal SMC with GA (SMC_GA), and SMC with CMAC compensation (SMC_CMAC), all of which are employed to track the desired joint angular position which is deduced from Clinical Gait Analysis (CGA) data. Position tracking performance is investigated with cosimulation using ADAMS and MATLAB/SIMULINK in two cases, of which the first case is without disturbances while the second case is with a bounded disturbance. The cosimulation results show the effectiveness of the proposed control strategy which can be employed in similar exoskeleton systems.
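
    For readers unfamiliar with the control law being tuned here, the following minimal sketch simulates boundary-layer sliding mode control of a toy second-order joint model tracking a sinusoid; the surface slope and switching gain are exactly the kind of parameters a GA would search over, but the plant, gains and disturbance are illustrative rather than the exoskeleton model of the paper.

      # Minimal sliding mode position controller for a toy 2nd-order joint model
      #   J*q'' + b*q' = u + d(t),  tracking a sinusoidal reference.
      # c (surface slope) and k (switching gain) are the parameters a GA would tune.
      import math

      J, b = 0.5, 0.2                      # inertia, damping (illustrative)
      c, k, phi = 8.0, 6.0, 0.05           # surface slope, switching gain, boundary layer
      q, qd = 0.0, 0.0
      dt = 0.001

      def sat(x):                          # boundary-layer substitute for sign()
          return max(-1.0, min(1.0, x))

      for i in range(20000):
          t = i * dt
          r, rd, rdd = math.sin(t), math.cos(t), -math.sin(t)   # reference trajectory
          e, ed = r - q, rd - qd
          s = c * e + ed                                        # sliding surface
          # equivalent control plus switching term
          u = J * (rdd + c * ed) + b * qd + k * sat(s / phi)
          d = 0.3 * math.sin(5 * t)                             # bounded disturbance
          qdd = (u + d - b * qd) / J
          qd += qdd * dt
          q += qd * dt

      print("final tracking error: %.4f rad" % (math.sin(20.0) - q))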

  10. Robust Sliding Mode Control Based on GA Optimization and CMAC Compensation for Lower Limb Exoskeleton

    Science.gov (United States)

    Long, Yi; Du, Zhi-jiang; Wang, Wei-dong; Dong, Wei

    2016-01-01

    A lower limb assistive exoskeleton is designed to help operators walk or carry payloads. The exoskeleton is required to shadow human motion intent accurately and compliantly to prevent incoordination. If the user's intention is estimated accurately, a precise position control strategy will improve collaboration between the user and the exoskeleton. In this paper, a hybrid position control scheme, combining sliding mode control (SMC) with a cerebellar model articulation controller (CMAC) neural network, is proposed to control the exoskeleton to react appropriately to human motion intent. A genetic algorithm (GA) is utilized to determine the optimal sliding surface and the sliding control law to improve performance of SMC. The proposed control strategy (SMC_GA_CMAC) is compared with three other types of approaches, that is, conventional SMC without optimization, optimal SMC with GA (SMC_GA), and SMC with CMAC compensation (SMC_CMAC), all of which are employed to track the desired joint angular position which is deduced from Clinical Gait Analysis (CGA) data. Position tracking performance is investigated with cosimulation using ADAMS and MATLAB/SIMULINK in two cases, of which the first case is without disturbances while the second case is with a bounded disturbance. The cosimulation results show the effectiveness of the proposed control strategy which can be employed in similar exoskeleton systems. PMID:27069353

  11. Robust source and mask optimization compensating for mask topography effects in computational lithography.

    Science.gov (United States)

    Li, Jia; Lam, Edmund Y

    2014-04-21

    Mask topography effects need to be taken into consideration for a more accurate solution of source mask optimization (SMO) in advanced optical lithography. However, rigorous 3D mask models generally involve intensive computation and conventional SMO fails to manipulate the mask-induced undesired phase errors that degrade the usable depth of focus (uDOF) and process yield. In this work, an optimization approach incorporating pupil wavefront aberrations into SMO procedure is developed as an alternative to maximize the uDOF. We first design the pupil wavefront function by adding primary and secondary spherical aberrations through the coefficients of the Zernike polynomials, and then apply the conjugate gradient method to achieve an optimal source-mask pair under the condition of aberrated pupil. We also use a statistical model to determine the Zernike coefficients for the phase control and adjustment. Rigorous simulations of thick masks show that this approach provides compensation for mask topography effects by improving the pattern fidelity and increasing uDOF.

  12. Using a Robust Design Approach to Optimize Chair Set-up in Wheelchair Sport

    Directory of Open Access Journals (Sweden)

    David S. Haydon

    2018-02-01

    Full Text Available Optimisation of wheelchairs for court sports is currently a difficult and time-consuming process due to the broad range of impairments across athletes, difficulties in monitoring on-court performance, and the trade-off effects that set-up parameters have on key performance variables. A robust design approach to this problem can potentially reduce the amount of testing required, and therefore allow for individual on-court assessments. This study used an orthogonal design with four set-up factors (seat height, depth, and angle, as well as tyre pressure) at three levels (current, decreased, and increased) for three elite wheelchair rugby players. Each player performed two maximal effort sprints from a stationary position in nine different set-ups, allowing for detailed analysis of each factor and level. Whilst statistical significance is difficult to obtain with such a small sample size, meaningful differences aligning with previous research findings were identified, providing support for the use of this approach.

  13. Robust/optimal temperature profile control of a high-speed aerospace vehicle using neural networks.

    Science.gov (United States)

    Yadav, Vivek; Padhi, Radhakant; Balakrishnan, S N

    2007-07-01

    An approximate dynamic programming (ADP)-based suboptimal neurocontroller to obtain desired temperature for a high-speed aerospace vehicle is synthesized in this paper. A 1-D distributed parameter model of a fin is developed from basic thermal physics principles. "Snapshot" solutions of the dynamics are generated with a simple dynamic inversion-based feedback controller. Empirical basis functions are designed using the "proper orthogonal decomposition" (POD) technique and the snapshot solutions. A low-order nonlinear lumped parameter system to characterize the infinite dimensional system is obtained by carrying out a Galerkin projection. An ADP-based neurocontroller with a dual heuristic programming (DHP) formulation is obtained with a single-network-adaptive-critic (SNAC) controller for this approximate nonlinear model. Actual control in the original domain is calculated with the same POD basis functions through a reverse mapping. Further contribution of this paper includes development of an online robust neurocontroller to account for unmodeled dynamics and parametric uncertainties inherent in such a complex dynamic system. A neural network (NN) weight update rule that guarantees boundedness of the weights and relaxes the need for persistence of excitation (PE) condition is presented. Simulation studies show that in a fairly extensive but compact domain, any desired temperature profile can be achieved starting from any initial temperature profile. Therefore, the ADP and NN-based controllers appear to have the potential to become controller synthesis tools for nonlinear distributed parameter systems.
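
    The proper orthogonal decomposition step mentioned above reduces to a singular value decomposition of a snapshot matrix followed by a Galerkin projection; the sketch below carries that out for a toy 1-D diffusion operator. The snapshot set, grid and retained-mode count are placeholders, not the fin model of the paper.

      # Sketch of the POD / Galerkin reduction step: build a reduced model of a
      # 1-D diffusion operator from snapshot data.  Grid size and snapshots are toy.
      import numpy as np

      N, M = 200, 60
      x = np.linspace(0.0, 1.0, N)
      # toy snapshot matrix: decaying sine modes standing in for simulated solutions
      snapshots = np.column_stack([np.exp(-k * 0.05) * np.sin(np.pi * (1 + k % 3) * x)
                                   for k in range(M)])

      # POD basis = left singular vectors of the snapshot matrix
      U, svals, _ = np.linalg.svd(snapshots, full_matrices=False)
      r = 3                                  # number of retained modes
      Phi = U[:, :r]
      energy = (svals[:r] ** 2).sum() / (svals ** 2).sum()

      # Galerkin projection of the discrete Laplacian (second-difference matrix)
      dx = x[1] - x[0]
      A = (np.diag(-2.0 * np.ones(N)) + np.diag(np.ones(N - 1), 1)
           + np.diag(np.ones(N - 1), -1)) / dx ** 2
      A_r = Phi.T @ A @ Phi                  # r x r reduced operator

      print("captured snapshot energy: %.4f" % energy)
      print("reduced operator shape:", A_r.shape)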

  14. Product code optimization for determinate state LDPC decoding in robust image transmission.

    Science.gov (United States)

    Thomos, Nikolaos; Boulgouris, Nikolaos V; Strintzis, Michael G

    2006-08-01

    We propose a novel scheme for error-resilient image transmission. The proposed scheme employs a product coder consisting of low-density parity check (LDPC) codes and Reed-Solomon codes in order to deal effectively with bit errors. The efficiency of the proposed scheme is based on the exploitation of determinate symbols in Tanner graph decoding of LDPC codes and a novel product code optimization technique based on error estimation. Experimental evaluation demonstrates the superiority of the proposed system in comparison to recent state-of-the-art techniques for image transmission.

  15. Robust Weighted Sum Harvested Energy Maximization for SWIPT Cognitive Radio Networks Based on Particle Swarm Optimization.

    Science.gov (United States)

    Tuan, Pham Viet; Koo, Insoo

    2017-10-06

    In this paper, we consider multiuser simultaneous wireless information and power transfer (SWIPT) for cognitive radio systems where a secondary transmitter (ST) with an antenna array provides information and energy to multiple single-antenna secondary receivers (SRs) equipped with a power splitting (PS) receiving scheme when multiple primary users (PUs) exist. The main objective of the paper is to maximize weighted sum harvested energy for SRs while satisfying their minimum required signal-to-interference-plus-noise ratio (SINR), the limited transmission power at the ST, and the interference threshold of each PU. For the perfect channel state information (CSI), the optimal beamforming vectors and PS ratios are achieved by the proposed PSO-SDR in which semidefinite relaxation (SDR) and particle swarm optimization (PSO) methods are jointly combined. We prove that SDR always has a rank-1 solution, and is indeed tight. For the imperfect CSI with bounded channel vector errors, the upper bound of weighted sum harvested energy (WSHE) is also obtained through the S-Procedure. Finally, simulation results demonstrate that the proposed PSO-SDR has fast convergence and better performance as compared to the other baseline schemes.

  16. Robust optimization of the billet for isothermal local loading transitional region of a Ti-alloy rib-web component based on dual-response surface method

    Science.gov (United States)

    Wei, Ke; Fan, Xiaoguang; Zhan, Mei; Meng, Miao

    2018-03-01

    Billet optimization can greatly improve the forming quality of the transitional region in the isothermal local loading forming (ILLF) of large-scale Ti-alloy rib-web components. However, the final quality of the transitional region may be degraded by uncontrollable factors, such as the manufacturing tolerance of the preforming billet, fluctuations of the stroke length, and the friction factor. Thus, a dual-response surface method (RSM)-based robust optimization of the billet was proposed to address the uncontrollable factors in the transitional region of the ILLF. Given that die underfilling and folding defects are two key factors that influence the forming quality of the transitional region, minimizing the mean and standard deviation of the die underfilling rate and avoiding folding defects were defined as the objective function and constraint condition of the robust optimization. Then, a crossed array design was constructed and a dual-RSM model was established for the mean and standard deviation of the die underfilling rate, considering the size parameters of the billet and the uncontrollable factors. Subsequently, an optimum solution was derived to achieve the robust optimization of the billet. A case study on the robust optimization was conducted. Good results were attained in improving the die filling and avoiding folding defects, suggesting that the robust optimization of the billet in the transitional region of the ILLF is efficient and reliable.
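
    The dual-response-surface idea can be summarized in a few lines: fit one surface for the mean of the response and one for its standard deviation, then minimize a weighted combination over the design region. The quadratic coefficients, weight and bounds below are placeholders rather than the fitted billet model, so the sketch only illustrates the structure of the robust optimization step.

      # Generic dual-response-surface step: minimize  mean(x) + w * std(x)  of a
      # response over box-bounded design variables.  The fitted quadratic
      # coefficients below are placeholders, not the billet model of the paper.
      import numpy as np
      from scipy.optimize import minimize

      def rsm_mean(x):          # fitted surface for the mean of the underfilling rate
          x1, x2 = x
          return 5.0 - 1.2 * x1 - 0.8 * x2 + 0.9 * x1 ** 2 + 0.6 * x2 ** 2 + 0.3 * x1 * x2

      def rsm_std(x):           # fitted surface for its standard deviation
          x1, x2 = x
          return 0.8 + 0.2 * x1 - 0.1 * x2 + 0.15 * x1 ** 2 + 0.05 * x2 ** 2

      w = 2.0                   # weight on variability (robustness emphasis)
      objective = lambda x: rsm_mean(x) + w * rsm_std(x)

      res = minimize(objective, x0=np.array([0.0, 0.0]),
                     bounds=[(-1.0, 1.0), (-1.0, 1.0)], method="L-BFGS-B")
      print("robust optimum (coded units):", np.round(res.x, 3),
            " objective: %.3f" % res.fun)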

  17. Fab-based bispecific antibody formats with robust biophysical properties and biological activity.

    Science.gov (United States)

    Wu, Xiufeng; Sereno, Arlene J; Huang, Flora; Lewis, Steven M; Lieu, Ricky L; Weldon, Caroline; Torres, Carina; Fine, Cody; Batt, Micheal A; Fitchett, Jonathan R; Glasebrook, Andrew L; Kuhlman, Brian; Demarest, Stephen J

    2015-01-01

    A myriad of innovative bispecific antibody (BsAb) platforms have been reported. Most require significant protein engineering to be viable from a development and manufacturing perspective. Single-chain variable fragments (scFvs) and diabodies that consist only of antibody variable domains have been used as building blocks for making BsAbs for decades. The drawback with Fv-only moieties is that they lack the native-like interactions with CH1/CL domains that make antibody Fab regions stable and soluble. Here, we utilize a redesigned Fab interface to explore 2 novel Fab-based BsAbs platforms. The redesigned Fab interface designs limit heavy and light chain mixing when 2 Fabs are co-expressed simultaneously, thus allowing the use of 2 different Fabs within a BsAb construct without the requirement of one or more scFvs. We describe the stability and activity of a HER2×HER2 IgG-Fab BsAb, and compare its biophysical and activity properties with those of an IgG-scFv that utilizes the variable domains of the same parental antibodies. We also generated an EGFR × CD3 tandem Fab protein with a similar format to a tandem scFv (otherwise known as a bispecific T cell engager or BiTE). We show that the Fab-based BsAbs have superior biophysical properties compared to the scFv-based BsAbs. Additionally, the Fab-based BsAbs do not simply recapitulate the activity of their scFv counterparts, but are shown to possess unique biological activity.

  18. Classical gas: Hearty prices, robust demand combine to pump breezy optimism through 2005 forecasts

    International Nuclear Information System (INIS)

    Lunan, D.

    2005-01-01

    According to the outlook for natural gas, 2005 is expected to be a watershed year, with a lengthy list of developments that could have a significant effect on the industry for many years to come. In light of continuing high demand and static supply prospects, prices will have to continue to be high in order to ensure the necessary infrastructure investments to keep gas flowing from multiple sources to the consumer. It is predicted that against the backdrop of robust prices several supply initiatives will continue to advance rapidly in 2005, such as the $7 billion Mackenzie Gas Project, on which public hearings are expected to start this summer, along with regulatory clarity about the $20 billion Alaska Highway Natural Gas Pipeline Project to move North Slope gas to southern markets. Drilling of new gas wells will continue to approach or even surpass 18,000 new wells, with an increasing number of these being coal-bed methane wells. Despite the high level of drilling activity, supply is expected to grow by only about 400 MMcf per day. Greater supply increments are expected through continued LNG terminal development, although plans for new LNG terminals have been met with stiff resistance from local residents both in Canada and the United States. Imports of liquefied natural gas into the United States slowed dramatically in 2004 under the severe short-term downward pressure on natural gas prices; nevertheless, these imports are expected to rebound to new record highs in 2005. Capacity is expected to climb from about 2.55 Bcf per day in 2004 to as much as 6.4 Bcf per day by late 2007. At least one Canadian import facility, Anadarko's one Bcf per day Bear Head terminal on Nova Scotia's Strait of Canso, is expected to become operational by late 2007 or early 2008. 6 photos

  19. Optimization of the HyPer sensor for robust real-time detection of hydrogen peroxide in the rice blast fungus.

    Science.gov (United States)

    Huang, Kun; Caplan, Jeff; Sweigard, James A; Czymmek, Kirk J; Donofrio, Nicole M

    2017-02-01

    Reactive oxygen species (ROS) production and breakdown have been studied in detail in plant-pathogenic fungi, including the rice blast fungus, Magnaporthe oryzae; however, the examination of the dynamic process of ROS production in real time has proven to be challenging. We resynthesized an existing ROS sensor, called HyPer, to exhibit optimized codon bias for fungi, specifically Neurospora crassa, and used a combination of microscopy and plate reader assays to determine whether this construct could detect changes in fungal ROS during the plant infection process. Using confocal microscopy, we were able to visualize fluctuating ROS levels during the formation of an appressorium on an artificial hydrophobic surface, as well as during infection on host leaves. Using the plate reader, we were able to ascertain measurements of hydrogen peroxide (H2O2) levels in conidia as detected by the MoHyPer sensor. Overall, by the optimization of codon usage for N. crassa and related fungal genomes, the MoHyPer sensor can be used as a robust, dynamic and powerful tool to both monitor and quantify H2O2 dynamics in real time during important stages of the plant infection process. © 2016 BSPP AND JOHN WILEY & SONS LTD.
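
    For readers curious what "optimized codon bias" means computationally, the toy function below back-translates a peptide by always choosing the organism's most frequent codon; the tiny usage table is invented for the example and is not the Neurospora crassa table used by the authors, and real codon optimization also balances GC content, repeats and restriction sites.

      # Naive codon-optimization sketch: back-translate a peptide by always picking
      # the most frequently used codon for the target organism.  The tiny usage
      # table below is illustrative only, not the N. crassa table used in the paper.
      preferred_codon = {        # amino acid -> (assumed) preferred codon
          "M": "ATG", "G": "GGT", "S": "TCC", "K": "AAG",
          "L": "CTC", "V": "GTC", "E": "GAG", "*": "TAA",
      }

      def codon_optimize(peptide):
          try:
              return "".join(preferred_codon[aa] for aa in peptide)
          except KeyError as missing:
              raise ValueError(f"no codon entry for residue {missing}") from None

      print(codon_optimize("MGSKLVE*"))    # -> ATGGGTTCCAAGCTCGTCGAGTAA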

  20. A Framework for Robust Multivariable Optimization of Integrated Circuits in Space Applications

    Science.gov (United States)

    DuMonthier, Jeffrey; Suarez, George

    2013-01-01

    Application Specific Integrated Circuit (ASIC) design for space applications involves multiple challenges of maximizing performance, minimizing power and ensuring reliable operation in extreme environments. This is a complex multidimensional optimization problem which must be solved early in the development cycle of a system due to the time required for testing and qualification severely limiting opportunities to modify and iterate. Manual design techniques which generally involve simulation at one or a small number of corners with a very limited set of simultaneously variable parameters in order to make the problem tractable are inefficient and not guaranteed to achieve the best possible results within the performance envelope defined by the process and environmental requirements. What is required is a means to automate design parameter variation, allow the designer to specify operational constraints and performance goals, and to analyze the results in a way which facilitates identifying the tradeoffs defining the performance envelope over the full set of process and environmental corner cases. The system developed by the Mixed Signal ASIC Group (MSAG) at the Goddard Space Flight Center is implemented as framework of software modules, templates and function libraries. It integrates CAD tools and a mathematical computing environment, and can be customized for new circuit designs with only a modest amount of effort as most common tasks are already encapsulated. Customization is required for simulation test benches to determine performance metrics and for cost function computation. Templates provide a starting point for both while toolbox functions minimize the code required. Once a test bench has been coded to optimize a particular circuit, it is also used to verify the final design. The combination of test bench and cost function can then serve as a template for similar circuits or be re-used to migrate the design to different processes by re-running it with the

  1. A traditional and a less-invasive robust design: choices in optimizing effort allocation for seabird population studies

    Science.gov (United States)

    Converse, S.J.; Kendall, W.L.; Doherty, P.F.; Naughton, M.B.; Hines, J.E.; Thomson, David L.; Cooch, Evan G.; Conroy, Michael J.

    2009-01-01

    For many animal populations, one or more life stages are not accessible to sampling, and therefore an unobservable state is created. For colonially-breeding populations, this unobservable state could represent the subset of adult breeders that have foregone breeding in a given year. This situation applies to many seabird populations, notably albatrosses, where skipped breeders are either absent from the colony, or are present but difficult to capture or correctly assign to breeding state. Kendall et al. have proposed design strategies for investigations of seabird demography where such temporary emigration occurs, suggesting the use of the robust design to permit the estimation of time-dependent parameters and to increase the precision of estimates from multi-state models. A traditional robust design, where animals are subject to capture multiple times in a sampling season, is feasible in many cases. However, due to concerns that multiple captures per season could cause undue disturbance to animals, Kendall et al. developed a less-invasive robust design (LIRD), where initial captures are followed by an assessment of the ratio of marked-to-unmarked birds in the population or sampled plot. This approach has recently been applied in the Northwestern Hawaiian Islands to populations of Laysan (Phoebastria immutabilis) and black-footed (P. nigripes) albatrosses. In this paper, we outline the LIRD and its application to seabird population studies. We then describe an approach to determining optimal allocation of sampling effort in which we consider a non-robust design option (nRD), and variations of both the traditional robust design (RD), and the LIRD. Variations we considered included the number of secondary sampling occasions for the RD and the amount of total effort allocated to the marked-to-unmarked ratio assessment for the LIRD. We used simulations, informed by early data from the Hawaiian study, to address optimal study design for our example cases. We found that

  2. Robustness of Operational Matrices of Differentiation for Solving State-Space Analysis and Optimal Control Problems

    Directory of Open Access Journals (Sweden)

    Emran Tohidi

    2013-01-01

    Full Text Available The idea of approximation by monomials together with the collocation technique over a uniform mesh for solving state-space analysis and optimal control problems (OCPs) has been proposed in this paper. After imposing Pontryagin's maximum principle on the main OCPs, the problems reduce to a linear or nonlinear boundary value problem. In the linear case we propose a monomial collocation matrix approach, while in the nonlinear case, the general collocation method has been applied. We also show the efficiency of the operational matrices of differentiation with respect to the operational matrices of integration in our numerical examples. These matrices of integration are related to the Bessel, Walsh, Triangular, Laguerre, and Hermite functions.
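
    The operational matrix of differentiation referred to above has a particularly simple form for the monomial basis; the sketch below builds it and checks that it reproduces the exact derivative of a polynomial at uniform collocation points. This is only the basic building block, not the authors' full optimal-control solver.

      # Operational matrix of differentiation for the monomial basis
      #   phi(x) = [1, x, x**2, ..., x**N]^T,  with  d/dx phi(x) = D @ phi(x).
      import numpy as np

      N = 5
      D = np.zeros((N + 1, N + 1))
      for i in range(1, N + 1):
          D[i, i - 1] = i                    # d/dx x**i = i * x**(i-1)

      # check: differentiate y(x) = 1 + 2x + 3x**3 at uniform collocation points
      c = np.array([1.0, 2.0, 0.0, 3.0, 0.0, 0.0])     # coefficients of y in the basis
      xs = np.linspace(-1.0, 1.0, 7)                   # uniform mesh
      phi = np.vander(xs, N + 1, increasing=True).T    # phi[i, j] = xs[j] ** i
      approx = c @ D @ phi                             # y'(x) via the operational matrix
      exact = 2.0 + 9.0 * xs ** 2
      print("max error:", np.max(np.abs(approx - exact)))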

  3. Emergence of robust solutions to 0-1 optimization problems in multi-agent systems

    DEFF Research Database (Denmark)

    constructive application in engineering. The approach is demonstrated by giving two examples: First, time-dependent robot-target assignment problems with several autonomous robots and several targets are considered as model of flexible manufacturing systems. Each manufacturing target has to be served...... of autonomous space robots building a space station by a distributed transportation of several parts from a space shuttle to defined positions at the space station. Second, the suggested approach is used for the design and selection of traffic networks. The topology of the network is optimized with respect...... to an additive quantity like the length of route segments and an upper bound for the number of route segments. For this, the dynamics of the selection processes of the previous example is extended such that for each vertex several choices for the edges can be made simultaneously up to an individually given upper...

  4. Microwave potentials and optimal control for robust quantum gates on an atom chip

    International Nuclear Information System (INIS)

    Treutlein, Philipp; Haensch, Theodor W.; Reichel, Jakob; Negretti, Antonio; Cirone, Markus A.; Calarco, Tommaso

    2006-01-01

    We propose a two-qubit collisional phase gate that can be implemented with available atom chip technology and present a detailed theoretical analysis of its performance. The gate is based on earlier phase gate schemes, but uses a qubit state pair with an experimentally demonstrated, very long coherence lifetime. Microwave near fields play a key role in our implementation as a means to realize the state-dependent potentials required for conditional dynamics. Quantum control algorithms are used to optimize gate performance. We employ circuit configurations that can be built with current fabrication processes and extensively discuss the impact of technical noise and imperfections that characterize an actual atom chip. We find an overall infidelity compatible with requirements for fault-tolerant quantum computation

  5. A Multi-Sensor RSS Spatial Sensing-Based Robust Stochastic Optimization Algorithm for Enhanced Wireless Tethering

    CERN Document Server

    Parasuraman, Ramviyas; Molinari, Luca; Kershaw, Keith; Di Castro, Mario; Masi, Alessandro; Ferre, Manuel

    2014-01-01

    The reliability of wireless communication in a network of mobile wireless robot nodes depends on the received radio signal strength (RSS). When the robot nodes are deployed in hostile environments with ionizing radiations (such as in some scientific facilities), there is a possibility that some electronic components may fail randomly (due to radiation effects), which causes problems in wireless connectivity. The objective of this paper is to maximize robot mission capabilities by maximizing the wireless network capacity and to reduce the risk of communication failure. Thus, in this paper, we consider a multi-node wireless tethering structure called the “server-relay-client” framework that uses (multiple) relay nodes in between a server and a client node. We propose a robust stochastic optimization (RSO) algorithm using a multi-sensor-based RSS sampling method at the relay nodes to efficiently improve and balance the RSS between the source and client nodes to improve the network capacity and to provide red...

  6. MO-FG-CAMPUS-TeP3-04: Deliverable Robust Optimization in IMPT Using Quadratic Objective Function

    Energy Technology Data Exchange (ETDEWEB)

    Shan, J; Liu, W; Bues, M; Schild, S [Mayo Clinic Arizona, Phoenix, AZ (United States)

    2016-06-15

    Purpose: To find and evaluate a way of applying deliverable MU constraints in robust spot intensity optimization in Intensity-Modulated Proton Therapy (IMPT), to prevent plan quality and robustness from degrading due to machine deliverable MU constraints. Methods: Currently, the influence of the deliverable MU constraints is retrospectively evaluated by post-processing immediately following optimization. In this study, we propose a new method based on the quasi-Newton-like L-BFGS-B algorithm with which we turn deliverable MU constraints on and off alternately during optimization. Seven patients with two different machine settings (small and large spot size) were planned with both the conventional and the new method. For each patient, three kinds of plans were generated: a conventional non-deliverable plan (plan A), a conventional deliverable plan with post-processing (plan B), and a new deliverable plan (plan C). We performed this study with both realistic (small) and artificial (large) deliverable MU constraints. Results: With small minimum MU constraints considered, the new method achieved a slightly better plan quality than the conventional method (D95% CTV normalized to the prescription dose: 0.994[0.992∼0.996] (Plan C) vs 0.992[0.986∼0.996] (Plan B)). With large minimum MU constraints considered, the results show that the new method maintains plan quality while plan quality from the conventional method is degraded greatly (D95% CTV normalized to the prescription dose: 0.987[0.978∼0.994] (Plan C) vs 0.797[0.641∼1.000] (Plan B)). Meanwhile, the plan robustness of the two methods' results is comparable. (For all 7 patients, CTV DVH band gap at D95% normalized to the prescription dose: 0.015[0.005∼0.043] (Plan C) vs 0.012[0.006∼0.038] (Plan B) with small MU constraints and 0.019[0.009∼0.039] (Plan C) vs 0.030[0.015∼0.041] (Plan B) with large MU constraints.) Conclusion: A positive correlation has been found between plan quality degeneration and magnitude of
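
    A heavily simplified reading of the alternating scheme described in the Methods is sketched below: optimize spot weights under non-negativity with L-BFGS-B, apply a minimum-MU rule that drops small spots and forces the rest above the threshold, and re-optimize with the corresponding bounds. The random influence matrix, prescription and thresholds are toy stand-ins, not an IMPT dose engine or the authors' implementation.

      # Toy version of the alternating minimum-MU scheme described above: optimize
      # spot weights with L-BFGS-B, apply a minimum-MU rule, then re-optimize with
      # the corresponding bounds.  The random influence matrix is a stand-in only.
      import numpy as np
      from scipy.optimize import minimize

      rng = np.random.default_rng(1)
      n_vox, n_spots, mu_min = 40, 25, 2.0
      A = rng.uniform(0.0, 1.0, (n_vox, n_spots))      # toy dose-influence matrix
      d_presc = np.full(n_vox, 10.0)                   # prescribed voxel doses

      def obj(w):                                      # quadratic dose objective
          r = A @ w - d_presc
          return float(r @ r)

      def grad(w):
          return 2.0 * A.T @ (A @ w - d_presc)

      w0 = np.full(n_spots, 1.0)
      res = minimize(obj, w0, jac=grad, method="L-BFGS-B",
                     bounds=[(0.0, None)] * n_spots)

      # minimum-MU rule: small spots are dropped, the rest must stay above mu_min
      keep = res.x >= 0.5 * mu_min
      bounds = [(mu_min, None) if k else (0.0, 0.0) for k in keep]
      res2 = minimize(obj, np.where(keep, np.maximum(res.x, mu_min), 0.0),
                      jac=grad, method="L-BFGS-B", bounds=bounds)
      print("objective before/after MU rule: %.2f / %.2f" % (res.fun, res2.fun))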

  7. MO-FG-CAMPUS-TeP3-04: Deliverable Robust Optimization in IMPT Using Quadratic Objective Function

    International Nuclear Information System (INIS)

    Shan, J; Liu, W; Bues, M; Schild, S

    2016-01-01

    Purpose: To find and evaluate a way of applying deliverable MU constraints in robust spot intensity optimization in Intensity-Modulated Proton Therapy (IMPT), to prevent plan quality and robustness from degrading due to machine deliverable MU constraints. Methods: Currently, the influence of the deliverable MU constraints is retrospectively evaluated by post-processing immediately following optimization. In this study, we propose a new method based on the quasi-Newton-like L-BFGS-B algorithm with which we turn deliverable MU constraints on and off alternately during optimization. Seven patients with two different machine settings (small and large spot size) were planned with both the conventional and the new method. For each patient, three kinds of plans were generated: a conventional non-deliverable plan (plan A), a conventional deliverable plan with post-processing (plan B), and a new deliverable plan (plan C). We performed this study with both realistic (small) and artificial (large) deliverable MU constraints. Results: With small minimum MU constraints considered, the new method achieved a slightly better plan quality than the conventional method (D95% CTV normalized to the prescription dose: 0.994[0.992∼0.996] (Plan C) vs 0.992[0.986∼0.996] (Plan B)). With large minimum MU constraints considered, the results show that the new method maintains plan quality while plan quality from the conventional method is degraded greatly (D95% CTV normalized to the prescription dose: 0.987[0.978∼0.994] (Plan C) vs 0.797[0.641∼1.000] (Plan B)). Meanwhile, the plan robustness of the two methods' results is comparable. (For all 7 patients, CTV DVH band gap at D95% normalized to the prescription dose: 0.015[0.005∼0.043] (Plan C) vs 0.012[0.006∼0.038] (Plan B) with small MU constraints and 0.019[0.009∼0.039] (Plan C) vs 0.030[0.015∼0.041] (Plan B) with large MU constraints.) Conclusion: A positive correlation has been found between plan quality degeneration and magnitude of

  8. Optimization of sources for focusing wave energy in targeted formations

    KAUST Repository

    Jeong, C; Kallivokas, L F; Huh, C; Lake, L W

    2010-01-01

    that will maximize the kinetic energy in the target zone, while keeping silent the neighbouring zones. To this end, we cast the problem as an inverse-source problem, and use a partial-differential-equation-constrained optimization approach to arrive at an optimized

  9. Intermediate levels of hippocampal activity appear optimal for associative memory formation.

    NARCIS (Netherlands)

    Liu, X.; Qin, S.; Rijpkema, M.J.P.; Luo, J.; Fernandez, G.S.E.

    2010-01-01

    BACKGROUND: It is well established that hippocampal activity is positively related to effective associative memory formation. However, in biological systems often optimal levels of activity are contrasted by both sub- and supra-optimal levels. Sub-optimal levels of hippocampal activity are commonly

  10. Robust and Optimal Control of Magnetic Microparticles inside Fluidic Channels with Time-Varying Flow Rates

    Directory of Open Access Journals (Sweden)

    Islam S.M. Khalil

    2016-06-01

    Full Text Available Targeted therapy using magnetic microparticles and nanoparticles has the potential to mitigate the negative side-effects associated with conventional medical treatment. Major technological challenges still need to be addressed in order to translate these particles into in vivo applications. For example, magnetic particles need to be navigated controllably in vessels against flowing streams of body fluid. This paper describes the motion control of paramagnetic microparticles in the flowing streams of fluidic channels with time-varying flow rates (maximum flow is 35 ml.hr−1). This control is designed using a magnetic-based proportional-derivative (PD) control system to compensate for the time-varying flow inside the channels (with width and depth of 2 mm and 1.5 mm, respectively). First, we achieve point-to-point motion control against and along flow rates of 4 ml.hr−1, 6 ml.hr−1, 17 ml.hr−1, and 35 ml.hr−1. The average speeds of a single microparticle (with average diameter of 100 μm) against flow rates of 6 ml.hr−1 and 30 ml.hr−1 are calculated to be 45 μm.s−1 and 15 μm.s−1, respectively. Second, we implement PD control with disturbance estimation and compensation. This control decreases the steady-state error by 50%, 70%, 73%, and 78% at flow rates of 4 ml.hr−1, 6 ml.hr−1, 17 ml.hr−1, and 35 ml.hr−1, respectively. Finally, we consider the problem of finding the optimal path (minimal kinetic energy) between two points using the calculus of variations, against the mentioned flow rates. Not only do we find that an optimal path between two collinear points along the direction of maximum flow (middle of the fluidic channel) decreases the rise time of the microparticles, but we also decrease the input current that is supplied to the electromagnetic coils by minimizing the kinetic energy of the microparticles, compared to a PD control with disturbance compensation.
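
    A one-dimensional, overdamped caricature of the PD-plus-disturbance-compensation controller is given below: the flow-induced drift is estimated from the difference between the measured particle velocity and the applied input, and the estimate is fed forward. The kinematic model, gains and flow values are illustrative only and much simpler than the magnetic actuation used in the paper.

      # Toy 1-D point-to-point control of a microparticle drifting in a channel:
      # overdamped kinematics  x' = u + d(t),  where d is the flow-induced drift.
      # PD action plus a first-order disturbance estimate compensates the flow.
      # All numbers (gains, flow speed) are illustrative, not the paper's setup.
      dt, T = 0.001, 20.0
      kp, kd, lam = 2.0, 0.2, 10.0            # PD gains, estimator bandwidth
      x, x_ref = 0.0, 500.0                   # positions in micrometres
      d_hat, e_prev, u = 0.0, x_ref, 0.0

      t = 0.0
      while t < T:
          d = -45.0 + 10.0 * (t > 10.0)       # flow drift (um/s), changes at t = 10 s
          x_prev = x
          x += dt * (u + d)                   # overdamped particle kinematics
          v_meas = (x - x_prev) / dt
          d_hat += dt * lam * ((v_meas - u) - d_hat)   # disturbance estimate
          e = x_ref - x
          u = kp * e + kd * (e - e_prev) / dt - d_hat  # PD plus compensation
          e_prev = e
          t += dt

      print("final position error: %.3f um" % (x_ref - x))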

  11. A Robust Bayesian Approach to an Optimal Replacement Policy for Gas Pipelines

    Directory of Open Access Journals (Sweden)

    José Pablo Arias-Nicolás

    2015-06-01

    Full Text Available In the paper, we address Bayesian sensitivity issues when integrating experts’ judgments with available historical data in a case study about strategies for the preventive maintenance of low-pressure cast iron pipelines in an urban gas distribution network. We are interested in replacement priorities, as determined by the failure rates of pipelines deployed under different conditions. We relax the assumptions, made in previous papers, about the prior distributions on the failure rates and study changes in replacement priorities under different choices of generalized moment-constrained classes of priors. We focus on the set of non-dominated actions, and among them, we propose the least sensitive action as the optimal choice to rank different classes of pipelines, providing a sound approach to the sensitivity problem. Moreover, we are also interested in determining which classes have a failure rate exceeding a given acceptable value, considered as the threshold determining no need for replacement. Graphical tools are introduced to help decisionmakers to determine if pipelines are to be replaced and the corresponding priorities.

  12. Generic Community System Specification: A Proposed Format for Reporting the Results of Microgrid Optimization Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Jimenez, Antonio [National Renewable Energy Laboratory (NREL), Golden, CO (United States)

    2018-03-14

    This document provides a proposed format for reporting the results of microgrid optimization analysis. While the proposed format assumes that the modeling is conducted as part of a renewable energy retrofit of an existing diesel micro-grid, the format can certainly be adopted for other situations.

  13. TCSC robust damping controller design based on particle swarm optimization for a multi-machine power system

    Energy Technology Data Exchange (ETDEWEB)

    Shayeghi, H., E-mail: hshayeghi@gmail.co [Technical Engineering Department, University of Mohaghegh Ardabili, Ardabil (Iran, Islamic Republic of); Shayanfar, H.A. [Center of Excellence for Power System Automation and Operation, Electrical Engineering Department, Iran University of Science and Technology, Tehran (Iran, Islamic Republic of); Jalilzadeh, S.; Safari, A. [Technical Engineering Department, Zanjan University, Zanjan (Iran, Islamic Republic of)

    2010-10-15

    In this paper, a new approach based on the particle swarm optimization (PSO) technique is proposed to tune the parameters of the thyristor controlled series capacitor (TCSC) power oscillation damping controller. The design problem of the damping controller is converted to an optimization problem with the time-domain-based objective function which is solved by a PSO technique which has a strong ability to find the most optimistic results. To ensure the robustness of the proposed stabilizers, the design process takes a wide range of operating conditions into account. The performance of the newly designed controller is evaluated in a four-machine power system subjected to the different types of disturbances in comparison with the genetic algorithm based damping controller. The effectiveness of the proposed controller is demonstrated through the nonlinear time-domain simulation and some performance indices studies. The results analysis reveals that the tuned PSO based TCSC damping controller using the proposed fitness function has an excellent capability in damping power system inter-area oscillations and enhances greatly the dynamic stability of the power systems. Moreover, it is superior to the genetic algorithm based damping controller.
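
    The tuning loop described above can be illustrated with a generic particle swarm: each particle is a candidate pair of controller parameters, evaluated by simulating a time-domain response and scoring it with an ITAE-type cost. The toy second-order "plant" below merely stands in for the multi-machine power-system simulation, and the swarm settings are conventional textbook values.

      # Generic PSO loop tuning two controller parameters against a time-domain
      # cost.  The "plant" is a toy 2nd-order oscillator whose damping depends on
      # the parameters; it stands in for the power-system model, not replaces it.
      import numpy as np

      rng = np.random.default_rng(0)

      def itae_cost(params):
          k1, k2 = params
          # toy closed loop: x'' + (0.1 + k1) x' + (1.0 + k2) x = 0,  x(0) = 1
          x, v, dt, cost = 1.0, 0.0, 0.01, 0.0
          for i in range(2000):
              a = -(0.1 + k1) * v - (1.0 + k2) * x
              v += a * dt
              x += v * dt
              cost += (i * dt) * abs(x) * dt          # ITAE
          return cost

      n_part, n_iter, dim = 20, 60, 2
      lb, ub = np.array([0.0, 0.0]), np.array([2.0, 2.0])
      pos = rng.uniform(lb, ub, (n_part, dim))
      vel = np.zeros((n_part, dim))
      pbest, pbest_cost = pos.copy(), np.array([itae_cost(p) for p in pos])
      gbest = pbest[np.argmin(pbest_cost)].copy()

      w, c1, c2 = 0.7, 1.5, 1.5                       # inertia and acceleration weights
      for _ in range(n_iter):
          r1, r2 = rng.random((n_part, dim)), rng.random((n_part, dim))
          vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
          pos = np.clip(pos + vel, lb, ub)
          costs = np.array([itae_cost(p) for p in pos])
          improved = costs < pbest_cost
          pbest[improved], pbest_cost[improved] = pos[improved], costs[improved]
          gbest = pbest[np.argmin(pbest_cost)].copy()

      print("tuned parameters:", np.round(gbest, 3), " cost: %.4f" % pbest_cost.min())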

  14. Robustness in laying hens

    NARCIS (Netherlands)

    Star, L.

    2008-01-01

    The aim of the project ‘The genetics of robustness in laying hens’ was to investigate the nature and regulation of robustness in laying hens under sub-optimal conditions and the possibility of increasing robustness through animal breeding without loss of production. At the start of the project, a robust

  15. Formulation and demonstration of a robust mean variance optimization approach for concurrent airline network and aircraft design

    Science.gov (United States)

    Davendralingam, Navindran

    Conceptual design of aircraft and of the airline network (routes) on which aircraft fly is inextricably linked to passenger-driven demand. Many factors influence passenger demand for various Origin-Destination (O-D) city pairs, including demographics, geographic location, seasonality, socio-economic factors and, naturally, the operations of directly competing airlines. The expansion of airline operations involves the identification of appropriate aircraft to meet projected future demand. The decisions made in incorporating and subsequently allocating these new aircraft to serve air travel demand affect the inherent risk and profit potential as predicted through the airline revenue management systems. Competition between airlines then translates to latent passenger observations of the routes served between O-D pairs and ticket pricing; this in effect reflexively drives future states of demand. This thesis addresses the integrated nature of aircraft design, airline operations and passenger demand in order to maximize future expected profits as new aircraft are brought into service. The goal of this research is to develop an approach that utilizes aircraft design, airline network design and passenger demand as a unified framework to provide better integrated design solutions that maximize the expected profits of an airline. This is investigated through two approaches. The first is a static model that poses the concurrent engineering paradigm above as an investment portfolio problem. Modern financial portfolio optimization techniques are used to leverage the risk of serving future projected demand with a yet-to-be-introduced aircraft against potentially generated future profits. Robust optimization methodologies are incorporated to mitigate model sensitivity and address estimation risks associated with such optimization techniques. The second extends the portfolio approach to include dynamic effects of an airline's operations. A dynamic programming approach is
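
    A minimal sketch of the static, portfolio-style formulation is shown below, assuming toy numbers: allocate a unit budget across candidate routes to maximize expected profit minus a variance penalty and a norm-based robustness penalty on the estimated means. It illustrates the structure of a robust mean-variance problem only and is not the thesis' network or demand model.

      # Minimal robust mean-variance sketch: allocate capacity across candidate
      # routes to maximize  mu'w - lam * w'Sigma w - kappa * ||w||,  the last term
      # penalizing uncertainty in the estimated mean profits.  Numbers are toy.
      import numpy as np
      from scipy.optimize import minimize

      mu = np.array([0.12, 0.10, 0.07, 0.03])        # estimated route profitability
      Sigma = np.array([[0.10, 0.02, 0.01, 0.00],
                        [0.02, 0.08, 0.01, 0.00],
                        [0.01, 0.01, 0.05, 0.01],
                        [0.00, 0.00, 0.01, 0.02]])   # profit covariance (toy)
      lam, kappa = 2.0, 0.02                         # risk aversion, robustness penalty

      def neg_objective(w):
          return -(mu @ w - lam * w @ Sigma @ w - kappa * np.linalg.norm(w))

      cons = ({"type": "eq", "fun": lambda w: np.sum(w) - 1.0},)
      res = minimize(neg_objective, x0=np.full(4, 0.25), method="SLSQP",
                     bounds=[(0.0, 1.0)] * 4, constraints=cons)
      print("robust allocation:", np.round(res.x, 3))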

  16. Intermediate levels of hippocampal activity appear optimal for associative memory formation.

    Directory of Open Access Journals (Sweden)

    Xiao Liu

    Full Text Available BACKGROUND: It is well established that hippocampal activity is positively related to effective associative memory formation. However, in biological systems often optimal levels of activity are contrasted by both sub- and supra-optimal levels. Sub-optimal levels of hippocampal activity are commonly attributed to unsuccessful memory formation, whereas the supra-optimal levels of hippocampal activity related to unsuccessful memory formation have been rarely studied. It is still unclear under what circumstances such supra-optimal levels of hippocampal activity occur. To clarify this issue, we aimed at creating a condition, in which supra-optimal hippocampal activity is associated with encoding failure. We assumed that such supra-optimal activity occurs when task-relevant information is embedded in task-irrelevant, distracting information, which can be considered as noise. METHODOLOGY/PRINCIPAL FINDINGS: In the present fMRI study, we probed neural correlates of associative memory formation in a full-factorial design with associative memory (subsequently remembered versus forgotten) and noise (induced by high versus low distraction) as factors. Results showed that encoding failure was associated with supra-optimal activity in the high-distraction condition and with sub-optimal activity in the low distraction condition. Thus, we revealed evidence for a bell-shape function relating hippocampal activity with associative encoding success. CONCLUSIONS/SIGNIFICANCE: Our findings indicate that intermediate levels of hippocampal activity are optimal while both too low and too high levels appear detrimental for associative memory formation. Supra-optimal levels of hippocampal activity seem to occur when task-irrelevant information is added to task-relevant signal. If such task-irrelevant noise is reduced adequately, hippocampal activity is lower and thus optimal for associative memory formation.

  17. Impact of mechanism vibration characteristics by joint clearance and optimization design of its multi-objective robustness

    Science.gov (United States)

    Zeng, Baoping; Wang, Chao; Zhang, Yu; Gong, Yajun; Hu, Sanbao

    2017-12-01

    Joint clearances and friction characteristics significantly influence the vibration behaviour of a mechanism. Within a clearance joint, the shaft and bearing collide and generate a dynamic normal contact force and a tangential Coulomb friction force while the mechanism operates, so the whole system may vibrate; under these dynamic forces the mechanism passes from free movement into contact-impact with an impact force constraint, and its topology also changes. The constraint relationship between joints is established through a repeatedly occurring, complex nonlinear dynamic process (idle stroke - contact-impact - elastic compression - rebound - impact relief - idle-stroke movement - contact-impact). Analysis of the vibration characteristics of joint parts therefore remains a challenging open task. The dynamic equations of a mechanism with clearance form a set of strongly coupled, high-dimensional, time-varying nonlinear differential equations that are difficult to solve. Moreover, the chaotic motions caused by clearance-induced impact and vibration are very sensitive to initial values, which makes high-precision simulation and prediction of the dynamic behaviour even more difficult; in addition, the subsequent wear of the joints necessarily produces some fluctuation of the clearance parameters, which is a primary source of vibration in the mechanical system. In this study, a dynamic model of the device for opening the deepwater robot cabin door with joint clearance was established using the finite element method, and its vibration characteristics were analyzed. A response model was then built using the DOE method, and a robust optimization design was performed on the joint clearance sizes and the friction coefficient variation range, so that the optimization results may serve as reference data for selecting bearings

  18. Energy-saving management modelling and optimization for lead-acid battery formation process

    Science.gov (United States)

    Wang, T.; Chen, Z.; Xu, J. Y.; Wang, F. Y.; Liu, H. M.

    2017-11-01

    In this context, a typical lead-acid battery production process is introduced. Based on the formation process, an efficiency management method is proposed. An optimization model with the objective of minimizing the formation electricity cost in a single period is established. This optimization model considers several related constraints, together with two influencing factors: the conversion efficiency of the IGBT charge-and-discharge machines and the time-of-use price. An example simulation, solved with a particle swarm optimization (PSO) algorithm, shows that the proposed strategy is effective and readily adoptable for energy saving and efficiency optimization in battery production.
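
    Since the record above only outlines the model, the toy example below shows how a PSO might schedule a fixed formation-energy requirement across hours with time-of-use prices and a conversion efficiency; the prices, efficiency, demand and PSO coefficients are all assumed for illustration and are not taken from the paper.

    # Toy particle swarm optimization (PSO) for time-of-use formation scheduling
    # (illustrative values only; not the paper's actual model).
    import numpy as np

    rng = np.random.default_rng(1)
    price = np.array([0.35, 0.35, 0.35, 0.12, 0.12, 0.12, 0.20, 0.20])  # $/kWh per hour
    eta = 0.92          # assumed IGBT charge/discharge conversion efficiency
    demand = 400.0      # kWh of formation energy that must be delivered
    p_max = 120.0       # kWh deliverable per hour

    def cost(x):
        x = np.clip(x, 0.0, p_max)
        energy_cost = (price * x).sum() / eta        # grid cost after conversion losses
        penalty = 1e3 * abs(x.sum() - demand)        # softly enforce the total-energy requirement
        return energy_cost + penalty

    n_particles, n_dim, iters = 40, len(price), 200
    pos = rng.uniform(0, p_max, (n_particles, n_dim))
    vel = np.zeros_like(pos)
    pbest = pos.copy()
    pbest_val = np.array([cost(p) for p in pos])
    gbest = pbest[pbest_val.argmin()].copy()

    for _ in range(iters):
        r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
        vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, 0.0, p_max)
        vals = np.array([cost(p) for p in pos])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
        gbest = pbest[pbest_val.argmin()].copy()

    print("hourly schedule (kWh):", np.round(gbest, 1), " cost:", round(cost(gbest), 2))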

  19. A Multi-Sensor RSS Spatial Sensing-Based Robust Stochastic Optimization Algorithm for Enhanced Wireless Tethering

    Science.gov (United States)

    Parasuraman, Ramviyas; Fabry, Thomas; Molinari, Luca; Kershaw, Keith; Di Castro, Mario; Masi, Alessandro; Ferre, Manuel

    2014-01-01

    The reliability of wireless communication in a network of mobile wireless robot nodes depends on the received radio signal strength (RSS). When the robot nodes are deployed in hostile environments with ionizing radiations (such as in some scientific facilities), there is a possibility that some electronic components may fail randomly (due to radiation effects), which causes problems in wireless connectivity. The objective of this paper is to maximize robot mission capabilities by maximizing the wireless network capacity and to reduce the risk of communication failure. Thus, in this paper, we consider a multi-node wireless tethering structure called the “server-relay-client” framework that uses (multiple) relay nodes in between a server and a client node. We propose a robust stochastic optimization (RSO) algorithm using a multi-sensor-based RSS sampling method at the relay nodes to efficiently improve and balance the RSS between the source and client nodes to improve the network capacity and to provide redundant networking abilities. We use pre-processing techniques, such as exponential moving averaging and spatial averaging filters on the RSS data for smoothing. We apply a receiver spatial diversity concept and employ a position controller on the relay node using a stochastic gradient ascent method for self-positioning the relay node to achieve the RSS balancing task. The effectiveness of the proposed solution is validated by extensive simulations and field experiments in CERN facilities. For the field trials, we used a youBot mobile robot platform as the relay node, and two stand-alone Raspberry Pi computers as the client and server nodes. The algorithm has been proven to be robust to noise in the radio signals and to work effectively even under non-line-of-sight conditions. PMID:25615734
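
    As a rough illustration of two ingredients named above, exponential moving averaging of noisy RSS samples and stochastic-gradient-ascent relay positioning, the sketch below balances the weaker of two links on a one-dimensional server-client line; the path-loss model, distances and noise level are assumptions, not the authors' setup.

    # Illustrative sketch (not the authors' code): smooth noisy RSS with an exponential
    # moving average and move a relay by stochastic gradient ascent on the weaker link.
    import numpy as np

    rng = np.random.default_rng(2)
    d_total = 50.0                      # assumed server-to-client distance in metres

    def rss(distance_m, noise_db=2.0):
        # Simple log-distance path-loss model with Gaussian noise (assumed values).
        return -40.0 - 20.0 * np.log10(max(distance_m, 1.0)) + rng.normal(0, noise_db)

    def ema(prev, sample, alpha=0.3):
        return sample if prev is None else alpha * sample + (1 - alpha) * prev

    x = 10.0                            # relay position on the server-client line
    ema_srv = ema_cli = None
    step, delta = 0.5, 2.0              # gradient-ascent step and probe offset

    for _ in range(200):
        ema_srv = ema(ema_srv, rss(x))
        ema_cli = ema(ema_cli, rss(d_total - x))
        # Finite-difference estimate of the gradient of the weaker link's RSS.
        f_plus = min(rss(x + delta), rss(d_total - x - delta))
        f_minus = min(rss(x - delta), rss(d_total - x + delta))
        grad = (f_plus - f_minus) / (2 * delta)
        x = float(np.clip(x + step * grad, 1.0, d_total - 1.0))

    print(f"relay settles near x = {x:.1f} m; server {ema_srv:.1f} dBm, client {ema_cli:.1f} dBm")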

  20. A Multi-Sensor RSS Spatial Sensing-Based Robust Stochastic Optimization Algorithm for Enhanced Wireless Tethering

    Directory of Open Access Journals (Sweden)

    Ramviyas Parasuraman

    2014-12-01

    Full Text Available The reliability of wireless communication in a network of mobile wireless robot nodes depends on the received radio signal strength (RSS). When the robot nodes are deployed in hostile environments with ionizing radiations (such as in some scientific facilities), there is a possibility that some electronic components may fail randomly (due to radiation effects), which causes problems in wireless connectivity. The objective of this paper is to maximize robot mission capabilities by maximizing the wireless network capacity and to reduce the risk of communication failure. Thus, in this paper, we consider a multi-node wireless tethering structure called the “server-relay-client” framework that uses (multiple) relay nodes in between a server and a client node. We propose a robust stochastic optimization (RSO) algorithm using a multi-sensor-based RSS sampling method at the relay nodes to efficiently improve and balance the RSS between the source and client nodes to improve the network capacity and to provide redundant networking abilities. We use pre-processing techniques, such as exponential moving averaging and spatial averaging filters on the RSS data for smoothing. We apply a receiver spatial diversity concept and employ a position controller on the relay node using a stochastic gradient ascent method for self-positioning the relay node to achieve the RSS balancing task. The effectiveness of the proposed solution is validated by extensive simulations and field experiments in CERN facilities. For the field trials, we used a youBot mobile robot platform as the relay node, and two stand-alone Raspberry Pi computers as the client and server nodes. The algorithm has been proven to be robust to noise in the radio signals and to work effectively even under non-line-of-sight conditions.

  1. Exploitation and Optimization of Reservoir Performance in Hunton Formation, Oklahoma

    Energy Technology Data Exchange (ETDEWEB)

    Kelkar, Mohan

    2001-05-08

    This report presents the work done so far on the Hunton Formation in the West Carney Field in Lincoln County, Oklahoma. West Carney Field produces oil and gas from the Hunton Formation. The field was developed starting in 1995. Some of the unique characteristics of the field include a decreasing water-oil ratio over time, a decreasing gas-oil ratio at the beginning of production, an inability to calculate oil reserves in the field based on log data, and sustained oil rates over long periods of time.

  2. Anatomical robust optimization to account for nasal cavity filling variation during intensity-modulated proton therapy: a comparison with conventional and adaptive planning strategies

    Science.gov (United States)

    van de Water, Steven; Albertini, Francesca; Weber, Damien C.; Heijmen, Ben J. M.; Hoogeman, Mischa S.; Lomax, Antony J.

    2018-01-01

    The aim of this study is to develop an anatomical robust optimization method for intensity-modulated proton therapy (IMPT) that accounts for interfraction variations in nasal cavity filling, and to compare it with conventional single-field uniform dose (SFUD) optimization and online plan adaptation. We included CT data of five patients with tumors in the sinonasal region. Using the planning CT, we generated for each patient 25 ‘synthetic’ CTs with varying nasal cavity filling. The robust optimization method available in our treatment planning system ‘Erasmus-iCycle’ was extended to also account for anatomical uncertainties by including (synthetic) CTs with varying patient anatomy as error scenarios in the inverse optimization. For each patient, we generated treatment plans using anatomical robust optimization and, for benchmarking, using SFUD optimization and online plan adaptation. Clinical target volume (CTV) and organ-at-risk (OAR) doses were assessed by recalculating the treatment plans on the synthetic CTs, evaluating dose distributions individually and accumulated over an entire fractionated 50 GyRBE treatment, assuming each synthetic CT to correspond to a 2 GyRBE fraction. Treatment plans were also evaluated using actual repeat CTs. Anatomical robust optimization resulted in adequate CTV doses (V95%  ⩾  98% and V107%  ⩽  2%) if at least three synthetic CTs were included in addition to the planning CT. These CTV requirements were also fulfilled for online plan adaptation, but not for the SFUD approach, even when applying a margin of 5 mm. Compared with anatomical robust optimization, OAR dose parameters for the accumulated dose distributions were on average 5.9 GyRBE (20%) higher when using SFUD optimization and on average 3.6 GyRBE (18%) lower for online plan adaptation. In conclusion, anatomical robust optimization effectively accounted for changes in nasal cavity filling during IMPT, providing substantially improved CTV and
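
    To make the scenario-style evaluation above concrete, the following sketch checks the stated CTV criteria (V95% ⩾ 98% and V107% ⩽ 2%) on a set of per-scenario voxel-dose arrays; the dose values and prescription are synthetic placeholders, not data from the study.

    # Sketch of a scenario-wise CTV check (made-up dose arrays for illustration).
    import numpy as np

    rng = np.random.default_rng(3)
    prescribed = 50.0                                   # GyRBE, assumed prescription
    # Hypothetical CTV voxel doses for the nominal plan and 4 synthetic-CT scenarios.
    ctv_dose = prescribed * (1.0 + rng.normal(0, 0.02, size=(5, 2000)))

    def v_percent(dose, level):
        # Fraction of voxels receiving at least `level` times the prescribed dose.
        return float((dose >= level * prescribed).mean() * 100.0)

    for i, dose in enumerate(ctv_dose):
        v95, v107 = v_percent(dose, 0.95), v_percent(dose, 1.07)
        ok = v95 >= 98.0 and v107 <= 2.0
        print(f"scenario {i}: V95%={v95:.1f}%  V107%={v107:.1f}%  adequate={ok}")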

  3. An Optimal Delivery Format for Presentations Targeting Older Adults.

    Science.gov (United States)

    Austin-Wells, Vonnette; Zimmerman, Teena; McDougall, Graham J., Jr.

    2003-01-01

    African-American, Hispanic, and white older adults (n=34) attended three information sessions presented via flipcharts, transparencies, and PowerPoint (one format per session). In focus groups, participants rated accessibility, novelty, and efficiency. They overwhelmingly preferred PowerPoint on all dimensions. (SK)

  4. EXPLOITATION AND OPTIMIZATION OF RESERVOIR PERFORMANCE IN HUNTON FORMATION, OKLAHOMA

    Energy Technology Data Exchange (ETDEWEB)

    Mohan Kelkar

    2005-02-01

    Hunton formation in Oklahoma has displayed some unique production characteristics. These include high initial water-oil and gas-oil ratios, decline in those ratios over time and temporary increase in gas-oil ratio during pressure build up. The formation also displays highly complex geology, but surprising hydrodynamic continuity. This report addresses three key issues related specifically to West Carney Hunton field and, in general, to any other Hunton formation exhibiting similar behavior: (1) What is the primary mechanism by which oil and gas are produced from the field? (2) How can the knowledge gained from studying the existing fields be extended to other fields which have the potential to produce? (3) What can be done to improve the performance of this reservoir? We have developed a comprehensive model to explain the behavior of the reservoir. By using available production, geological, core and log data, we are able to develop a reservoir model which explains the production behavior in the reservoir. Using easily available information, such as log data, we have established the parameters needed for a field to be economically successful. We provide guidelines in terms of what to look for in a new field and how to develop it. Finally, through laboratory experiments, we show that surfactants can be used to improve the hydrocarbon recovery from the field. In addition, injection of CO2 or natural gas will also help recover additional oil from the field.

  5. Limited Impact of Setup and Range Uncertainties, Breathing Motion, and Interplay Effects in Robustly Optimized Intensity Modulated Proton Therapy for Stage III Non-small Cell Lung Cancer

    NARCIS (Netherlands)

    Inoue, Tatsuya; Widder, Joachim; van Dijk, Lisanne V; Takegawa, Hideki; Koizumi, Masahiko; Takashina, Masaaki; Usui, Keisuke; Kurokawa, Chie; Sugimoto, Satoru; Saito, Anneyuko I; Sasai, Keisuke; Van't Veld, Aart A; Langendijk, Johannes A; Korevaar, Erik W

    2016-01-01

    Purpose: To investigate the impact of setup and range uncertainties, breathing motion, and interplay effects using scanning pencil beams in robustly optimized intensity modulated proton therapy (IMPT) for stage III non-small cell lung cancer (NSCLC). Methods and Materials: Three-field IMPT plans

  6. Limited Impact of Setup and Range Uncertainties, Breathing Motion, and Interplay Effects in Robustly Optimized Intensity Modulated Proton Therapy for Stage III Non-small Cell Lung Cancer

    International Nuclear Information System (INIS)

    Inoue, Tatsuya; Widder, Joachim; Dijk, Lisanne V. van; Takegawa, Hideki; Koizumi, Masahiko; Takashina, Masaaki; Usui, Keisuke; Kurokawa, Chie; Sugimoto, Satoru; Saito, Anneyuko I.; Sasai, Keisuke; Veld, Aart A. van't; Langendijk, Johannes A.; Korevaar, Erik W.

    2016-01-01

    Purpose: To investigate the impact of setup and range uncertainties, breathing motion, and interplay effects using scanning pencil beams in robustly optimized intensity modulated proton therapy (IMPT) for stage III non-small cell lung cancer (NSCLC). Methods and Materials: Three-field IMPT plans were created using a minimax robust optimization technique for 10 NSCLC patients. The plans accounted for 5- or 7-mm setup errors with ±3% range uncertainties. The robustness of the IMPT nominal plans was evaluated considering (1) isotropic 5-mm setup errors with ±3% range uncertainties; (2) breathing motion; (3) interplay effects; and (4) a combination of items 1 and 2. The plans were calculated using 4-dimensional and average intensity projection computed tomography images. The target coverage (TC, volume receiving 95% of prescribed dose) and homogeneity index (D2 − D98, where D2 and D98 are the least doses received by 2% and 98% of the volume) for the internal clinical target volume, and dose indexes for lung, esophagus, heart and spinal cord were compared with those of clinical volumetric modulated arc therapy plans. Results: The TC and homogeneity index for all plans were within clinical limits when considering the breathing motion and interplay effects independently. The setup and range uncertainties had a larger effect when considering their combined effect. The TC decreased to <98% (clinical threshold) in 3 of 10 patients for robust 5-mm evaluations, but remained >98% for robust 7-mm evaluations for all patients. The organ at risk dose parameters did not significantly vary between the respective robust 5-mm and robust 7-mm evaluations for the 4 error types. Compared with the volumetric modulated arc therapy plans, the IMPT plans showed better target homogeneity and mean lung and heart dose parameters reduced by about 40% and 60%, respectively. Conclusions: In robustly optimized IMPT for stage III NSCLC, the setup and range uncertainties, breathing motion, and interplay effects have limited impact on target coverage, dose homogeneity, and
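
    The two plan-quality indices used above can be computed directly from a voxel-dose array; the short sketch below does so for a hypothetical internal-target dose sample (the prescription and dose spread are made up).

    # Illustrative computation of target coverage (TC) and homogeneity index D2 - D98.
    import numpy as np

    rng = np.random.default_rng(4)
    prescription = 66.0                                         # Gy, assumed
    dose = prescription * (1.0 + rng.normal(0.0, 0.015, 5000))  # made-up target voxel doses

    tc = (dose >= 0.95 * prescription).mean() * 100.0   # % of volume at >= 95% of prescription
    d2 = np.percentile(dose, 98)    # least dose received by the hottest 2% of the volume
    d98 = np.percentile(dose, 2)    # least dose received by 98% of the volume
    print(f"TC = {tc:.1f}%   homogeneity index D2-D98 = {d2 - d98:.2f} Gy")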

  7. Asymmetric underlap optimization of sub-10nm finfets for realizing energy-efficient logic and robust memories

    Science.gov (United States)

    Akkala, Arun Goud

    Leakage currents in CMOS transistors have risen dramatically with technology scaling, leading to a significant increase in standby power consumption. Among the various transistor candidates, the excellent short channel immunity of silicon double-gate FinFETs has made them the best contender for successful scaling to sub-10nm nodes. For sub-10nm FinFETs, new quantum mechanical leakage mechanisms, such as direct source to drain tunneling (DSDT) of charge carriers through the channel potential energy barrier arising from the proximity of the source/drain regions coupled with the high transport-direction electric field, are expected to dominate overall leakage. To counter the effects of DSDT and worsening short channel effects and to maintain Ion/Ioff, performance and power consumption at reasonable values, device optimization techniques are necessary for deeply scaled transistors. In this work, source/drain underlapping of FinFETs has been explored using quantum mechanical device simulations as a potentially promising method to lower DSDT while maintaining the Ion/Ioff ratio at acceptable levels. By adopting a device/circuit/system level co-design approach, it is shown that asymmetric underlapping, where the drain-side underlap is longer than the source-side underlap, results in optimal energy efficiency for logic circuits in near-threshold as well as standard, super-threshold operating regimes. In addition, the read/write conflict in 6T SRAMs and the degradation in cell noise margins due to the low supply voltage can be mitigated by using optimized asymmetric underlapped n-FinFETs for the access transistor, thereby leading to robust cache memories. When gate-workfunction tuning is possible, using asymmetric underlapped n-FinFETs for both access and pull-down devices in an SRAM bit cell can lead to high-speed and low-leakage caches. Further, it is shown that threshold voltage degradation in the presence of Hot Carrier Injection (HCI) is less severe in asymmetric underlap n-FinFETs. A

  8. EXPLOITATION AND OPTIMIZATION OF RESERVOIR PERFORMANCE IN HUNTON FORMATION, OKLAHOMA

    Energy Technology Data Exchange (ETDEWEB)

    Mohan Kelkar

    2004-10-01

    West Carney field--one of the newest fields discovered in Oklahoma--exhibits many unique production characteristics. These characteristics include: (1) decreasing water-oil ratio; (2) decreasing gas-oil ratio followed by an increase; (3) poor prediction capability of the reserves based on the log data; and (4) low geological connectivity but high hydrodynamic connectivity. The purpose of this investigation is to understand the principal mechanisms affecting the production, and to propose methods by which we can extend the phenomenon to other fields with similar characteristics. In our experimental investigation section, we present the data on surfactant injection in the near-wellbore region. We demonstrate that by injecting the surfactant, the relative permeability of water could be decreased, and that of gas could be increased. This should result in improved gas recovery from the reservoir. Our geological analysis of the reservoir develops a detailed stratigraphic description of the reservoir. Two new stratigraphic units, previously unrecognized, are identified. Additional lithofacies are recognized in new core descriptions. Our engineering analysis has determined that well density is an important parameter in optimally producing Hunton reservoirs. It appears that 160-acre spacing is optimal. The reservoir pressure appears to decline over time; however, recovery per well is only weakly influenced by the pressure. This indicates that additional opportunities to drill wells exist in relatively depleted fields. A simple material balance technique is developed to validate the recovery of gas, oil and water. This technique can be used to further extrapolate recoveries from other fields with similar field characteristics.

  9. TH-CD-209-05: Impact of Spot Size and Spacing On the Quality of Robustly-Optimized Intensity-Modulated Proton Therapy Plans for Lung Cancer

    International Nuclear Information System (INIS)

    Liu, W; Ding, X; Hu, Y; Shen, J; Korte, S; Bues, M; Schild, S; Wong, W; Chang, J; Liao, Z; Sahoo, N; Herman, M

    2016-01-01

    Purpose: To investigate how spot size and spacing affect plan quality, especially plan robustness and the impact of interplay effect, of robustly-optimized intensity-modulated proton therapy (IMPT) plans for lung cancer. Methods: Two robustly-optimized IMPT plans were created for 10 lung cancer patients: (1) one for a proton beam with in-air energy-dependent large spot size at isocenter (σ: 5–15 mm) and spacing (1.53σ); (2) the other for a proton beam with small spot size (σ: 2–6 mm) and spacing (5 mm). Both plans were generated on the average CTs with internal-gross-tumor-volume density overridden to irradiate the internal target volume (ITV). The root-mean-square-dose volume histograms (RVH) measured the sensitivity of the dose to uncertainties, and the areas under RVH curves were used to evaluate plan robustness. Dose evaluation software was developed to model time-dependent spot delivery to incorporate interplay effect with randomized starting phases of each field per fraction. Patient anatomy voxels were mapped from phase to phase via deformable image registration to score doses. Dose-volume-histogram indices including ITV coverage, homogeneity, and organs-at-risk (OAR) sparing were compared using Student's t-test. Results: Compared to large spots, small spots resulted in significantly better OAR sparing with comparable ITV coverage and homogeneity in the nominal plan. Plan robustness was comparable for ITV and most OARs. With interplay effect considered, significantly better OAR sparing with comparable ITV coverage and homogeneity is observed using smaller spots. Conclusion: Robust optimization with smaller spots significantly improves OAR sparing with comparable plan robustness and similar impact of interplay effect compared to larger spots. Small spot size requires the use of a larger number of spots, which gives the optimizer more freedom to render a plan more robust. The ratio between spot size and spacing was found to be more relevant to determine plan
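
    One plausible reading of the RVH metric described above is a per-voxel root-mean-square dose deviation across the uncertainty scenarios, summarized as a cumulative volume curve whose area compares plan robustness. The sketch below implements that reading on synthetic dose arrays; it is an interpretation for illustration, not the authors' evaluation code.

    # Hedged sketch of a root-mean-square-dose volume histogram (RVH) on made-up data.
    import numpy as np

    rng = np.random.default_rng(5)
    n_scenarios, n_voxels = 9, 4000
    nominal = 60.0 + rng.normal(0, 0.5, n_voxels)                      # made-up nominal dose
    scenarios = nominal + rng.normal(0, 1.5, (n_scenarios, n_voxels))  # perturbed scenario doses

    rms_dev = np.sqrt(((scenarios - nominal) ** 2).mean(axis=0))       # Gy, per voxel

    # Cumulative "volume vs. RMS dose deviation" curve and its area (simple Riemann sum).
    bins = np.linspace(0.0, rms_dev.max(), 100)
    volume_fraction = np.array([(rms_dev >= b).mean() for b in bins])
    area_under_rvh = volume_fraction.sum() * (bins[1] - bins[0])
    print(f"area under RVH curve ~ {area_under_rvh:.2f} Gy (smaller = more robust)")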

  10. TH-CD-209-05: Impact of Spot Size and Spacing On the Quality of Robustly-Optimized Intensity-Modulated Proton Therapy Plans for Lung Cancer

    Energy Technology Data Exchange (ETDEWEB)

    Liu, W; Ding, X; Hu, Y; Shen, J; Korte, S; Bues, M [Mayo Clinic Arizona, Phoenix, AZ (United States); Schild, S; Wong, W [Mayo Clinic AZ, Phoenix, AZ (United States); Chang, J [MD Anderson Cancer Center, Houston, TX (United States); Liao, Z; Sahoo, N [UT MD Anderson Cancer Center, Houston, TX (United States); Herman, M [Mayo Clinic, Rochester, MN (United States)

    2016-06-15

    Purpose: To investigate how spot size and spacing affect plan quality, especially plan robustness and the impact of interplay effect, of robustly-optimized intensity-modulated proton therapy (IMPT) plans for lung cancer. Methods: Two robustly-optimized IMPT plans were created for 10 lung cancer patients: (1) one for a proton beam with in-air energy-dependent large spot size at isocenter (σ: 5–15 mm) and spacing (1.53σ); (2) the other for a proton beam with small spot size (σ: 2–6 mm) and spacing (5 mm). Both plans were generated on the average CTs with internal-gross-tumor-volume density overridden to irradiate the internal target volume (ITV). The root-mean-square-dose volume histograms (RVH) measured the sensitivity of the dose to uncertainties, and the areas under RVH curves were used to evaluate plan robustness. Dose evaluation software was developed to model time-dependent spot delivery to incorporate interplay effect with randomized starting phases of each field per fraction. Patient anatomy voxels were mapped from phase to phase via deformable image registration to score doses. Dose-volume-histogram indices including ITV coverage, homogeneity, and organs-at-risk (OAR) sparing were compared using Student's t-test. Results: Compared to large spots, small spots resulted in significantly better OAR sparing with comparable ITV coverage and homogeneity in the nominal plan. Plan robustness was comparable for ITV and most OARs. With interplay effect considered, significantly better OAR sparing with comparable ITV coverage and homogeneity is observed using smaller spots. Conclusion: Robust optimization with smaller spots significantly improves OAR sparing with comparable plan robustness and similar impact of interplay effect compared to larger spots. Small spot size requires the use of a larger number of spots, which gives the optimizer more freedom to render a plan more robust. The ratio between spot size and spacing was found to be more relevant to determine plan

  11. Optimization of RFP formation and sustainment in RFX

    Energy Technology Data Exchange (ETDEWEB)

    Martini, S.; Buffa, A.; Collarin, P.; Lorenzi, A. De; Fiorentin, P.; Innocente, P.; Marchiori, G.; Paccagnella, R.; Piovan, R.; Sonato, P. [Istituto Gas Ionizzati del CNR, EURATOM-ENEA-CNR Association, Padua (Italy)

    1993-12-31

    In the first months of 1993, the RFX experiment (R=2 m, a=0.46 m) has operated at reduced volt-second (6 V·s out of 15) to study the formation and sustainment of the RFP in a relatively safer power input regime, before increasing the parameters to reach the design value of 2 MA plasma current. At present the RFP configuration is obtained, similarly to the ETA-BETA II experiment, in the aided mode: a capacitor bank is discharged into the toroidal winding to produce an initial toroidal flux, Φ_T, then the plasma current, I_T, is induced by varying the poloidal flux stored in the magnetising winding; the free oscillation of the toroidal circuit continues and Φ_T decays during the initial plasma current rise, until the toroidal field at the wall, B_T(a), reverses and the toroidal circuit is crow-barred. The overall performance of the plasma during the RFP sustainment phase is strongly influenced by the control performed on density, toroidal field and plasma position during the formation phase. As soon as the RFP is obtained, a clear improvement of confinement is seen and the plasma current increases again until the applied toroidal voltage V_T, which decreases exponentially, no longer matches the resistive drop. In RFX it is also possible to insert a pre-programmed flat-top power amplifier by which V_T can be sustained and controlled in the range (0-60 V). In this way quasi-steady RFP current flat-top phases lasting ≈ 90 ms can be obtained which terminate only when the amplifiers are switched off and V_T is no longer sustained. (author) 9 refs., 6 figs.

  12. Optimization of RFP formation and sustainment in RFX

    International Nuclear Information System (INIS)

    Martini, S.; Buffa, A.; Collarin, P.; Lorenzi, A. De; Fiorentin, P.; Innocente, P.; Marchiori, G.; Paccagnella, R.; Piovan, R.; Sonato, P.

    1993-01-01

    In the first months of 1993, the RFX experiment (R=2 m, a=0.46 m) has operated at reduced volt-second (6 V·s out of 15) to study the formation and sustainment of the RFP in a relatively safer power input regime, before increasing the parameters to reach the design value of 2 MA plasma current. At present the RFP configuration is obtained, similarly to the ETA-BETA II experiment, in the aided mode: a capacitor bank is discharged into the toroidal winding to produce an initial toroidal flux, Φ_T, then the plasma current, I_T, is induced by varying the poloidal flux stored in the magnetising winding; the free oscillation of the toroidal circuit continues and Φ_T decays during the initial plasma current rise, until the toroidal field at the wall, B_T(a), reverses and the toroidal circuit is crow-barred. The overall performance of the plasma during the RFP sustainment phase is strongly influenced by the control performed on density, toroidal field and plasma position during the formation phase. As soon as the RFP is obtained, a clear improvement of confinement is seen and the plasma current increases again until the applied toroidal voltage V_T, which decreases exponentially, no longer matches the resistive drop. In RFX it is also possible to insert a pre-programmed flat-top power amplifier by which V_T can be sustained and controlled in the range (0-60 V). In this way quasi-steady RFP current flat-top phases lasting ∼ 90 ms can be obtained which terminate only when the amplifiers are switched off and V_T is no longer sustained. (author) 9 refs., 6 figs.

  13. The optimization model for multi-type customers assisting wind power consumptive considering uncertainty and demand response based on robust stochastic theory

    International Nuclear Information System (INIS)

    Tan, Zhongfu; Ju, Liwei; Reed, Brent; Rao, Rao; Peng, Daoxin; Li, Huanhuan; Pan, Ge

    2015-01-01

    Highlights: • Our research focuses on demand response behaviors of multi-type customers. • A wind power simulation method is proposed based on the Brownian motion theory. • Demand response revenue functions are proposed for multi-type customers. • A robust stochastic optimization model is proposed for wind power consumptive. • Models are built to measure the impacts of demand response on wind power consumptive. - Abstract: In order to relieve the influence of wind power uncertainty on power system operation, demand response and robust stochastic theory are introduced to build a stochastic scheduling optimization model. Firstly, this paper presents a simulation method for wind power that considers the external environment, based on Brownian motion theory. Secondly, price-based demand response and incentive-based demand response are introduced to build the demand response model. Thirdly, the paper constructs demand response revenue functions for electric vehicle customers, business customers, industry customers and residential customers. Furthermore, robust stochastic optimization theory is introduced to build a wind power consumption stochastic optimization model. Finally, a simulation analysis is carried out on an IEEE 36-node, 10-unit system connected with 650 MW of wind farms. The results show that robust stochastic optimization is better able to overcome wind power uncertainty. Demand response can improve the system's wind power consumption capability. Besides, price-based demand response can reshape customers' load demand distribution, but its load curtailment capacity is not as pronounced as that of incentive-based demand response. Since price-based demand response cannot shift customers' load demand to the same extent as incentive-based demand response, the best overall optimization effect is reached when incentive-based demand response and price-based demand response are both introduced.
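
    As a rough illustration of the Brownian-motion-based wind simulation mentioned above, the sketch below generates an hourly capacity-factor path for a 650 MW wind fleet using a mean-reverting process driven by Brownian increments; the mean level, reversion speed and volatility are assumed values, not the paper's calibration.

    # Illustrative wind-power path simulation driven by Brownian increments (assumed parameters).
    import numpy as np

    rng = np.random.default_rng(6)
    hours, dt = 24, 1.0
    capacity = 650.0          # MW of installed wind, as in the case study above
    mean_level = 0.35         # assumed long-run capacity factor
    theta, sigma = 0.3, 0.08  # assumed mean-reversion speed and volatility

    cf = np.empty(hours)
    cf[0] = mean_level
    for t in range(1, hours):
        dW = rng.normal(0.0, np.sqrt(dt))                       # Brownian increment
        cf[t] = cf[t-1] + theta * (mean_level - cf[t-1]) * dt + sigma * dW
        cf[t] = min(max(cf[t], 0.0), 1.0)                       # keep a valid capacity factor

    wind_mw = capacity * cf
    print("simulated hourly wind output (MW):", np.round(wind_mw[:6], 1), "...")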

  14. Detecting epileptic seizure with different feature extracting strategies using robust machine learning classification techniques by applying advance parameter optimization approach.

    Science.gov (United States)

    Hussain, Lal

    2018-06-01

    Epilepsy is a neurological disorder produced by abnormal excitability of neurons in the brain. Research shows that brain activity monitored through the electroencephalogram (EEG) of patients suffering from seizures can be used to detect epileptic seizures. The performance of EEG-based epilepsy detection depends on the feature extraction strategy. In this research, we extracted various feature sets based on time- and frequency-domain characteristics, nonlinear measures, wavelet-based entropy and a few statistical features. A deeper study was undertaken using novel machine learning classifiers by considering multiple factors. The support vector machine kernels were evaluated based on multiclass kernel and box constraint level. Likewise, for K-nearest neighbors (KNN) we varied the distance metrics, neighbor weights and number of neighbors. Similarly, for decision trees we tuned the parameters of maximum splits and split criteria, and ensemble classifiers were evaluated with different ensemble methods and learning rates. Tenfold cross-validation was employed for training/testing, and performance was evaluated in the form of TPR, NPR, PPV, accuracy and AUC. In this research, a deeper analysis was performed using diverse feature extraction strategies and robust machine learning classifiers with more advanced optimal options. The support vector machine with a linear kernel and KNN with the city-block distance metric gave the overall highest accuracy of 99.5%, which was higher than that obtained using the default parameters for these classifiers. Moreover, the highest separation (AUC = 0.9991, 0.9990) was obtained at different kernel scales using SVM. Additionally, K-nearest neighbors with inverse squared distance weights gave higher performance at different numbers of neighbors. Moreover, in distinguishing postictal heart rate oscillations from epileptic ictal subjects, the highest performance of 100% was obtained using different machine learning classifiers.
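
    The parameter sweeps described above (SVM kernels and box-constraint levels, KNN distance metrics, weights and neighbor counts, tenfold cross-validation) follow the usual grid-search pattern; the sketch below shows that pattern with scikit-learn on synthetic data standing in for the EEG features.

    # Grid-search sketch over SVM and KNN hyperparameters with 10-fold cross-validation.
    from sklearn.datasets import make_classification
    from sklearn.model_selection import GridSearchCV
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.svm import SVC

    X, y = make_classification(n_samples=500, n_features=30, random_state=0)  # stand-in features

    searches = {
        "SVM": GridSearchCV(SVC(), {"kernel": ["linear", "rbf"],
                                    "C": [0.1, 1, 10]}, cv=10),       # C is the box-constraint level
        "KNN": GridSearchCV(KNeighborsClassifier(),
                            {"n_neighbors": [3, 5, 11],
                             "metric": ["manhattan", "euclidean"],    # city-block vs Euclidean
                             "weights": ["uniform", "distance"]}, cv=10),
    }

    for name, gs in searches.items():
        gs.fit(X, y)
        print(name, "best params:", gs.best_params_, "CV accuracy:", round(gs.best_score_, 3))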

  15. SU-F-BRD-01: A Novel 4D Robust Optimization Mitigates Interplay Effect in Intensity-Modulated Proton Therapy for Lung Cancer

    International Nuclear Information System (INIS)

    Liu, W; Shen, J; Stoker, J; Bues, M; Schild, S; Wong, W; Chang, J; Liao, Z; Wen, Z; Sahoo, N; Herman, M; Mohan, R

    2015-01-01

    Purpose: To compare the impact of interplay effect on 3D and 4D robustly optimized intensity-modulated proton therapy (IMPT) plans to treat lung cancer. Methods: Two IMPT plans were created for 11 non-small-cell-lung-cancer cases with 6–14 mm spots. 3D robust optimization generated plans on average CTs with the internal gross tumor volume density overridden to deliver 66 CGyE in 33 fractions to the internal target volume (ITV). 4D robust optimization generated plans on 4D CTs with the delivery of prescribed dose to the clinical target volume (CTV). In 4D optimization, the CTV of individual 4D CT phases received non-uniform doses to achieve a uniform cumulative dose. Dose evaluation software was developed to model time-dependent spot delivery to incorporate interplay effect with randomized starting phases of each field per fraction. Patient anatomy voxels were mapped from phase to phase via deformable image registration to score doses. Indices from dose-volume histograms were used to compare target coverage, dose homogeneity, and normal-tissue sparing. DVH indices were compared using Wilcoxon test. Results: Given the presence of interplay effect, 4D robust optimization produced IMPT plans with better target coverage and homogeneity, but slightly worse normal tissue sparing compared to 3D robust optimization (unit: Gy) [D95% ITV: 63.5 vs 62.0 (p=0.014), D5% - D95% ITV: 6.2 vs 7.3 (p=0.37), D1% spinal cord: 29.0 vs 29.5 (p=0.52), Dmean total lung: 14.8 vs 14.5 (p=0.12), D33% esophagus: 33.6 vs 33.1 (p=0.28)]. The improvement of target coverage (D95%,4D – D95%,3D) was related to the ratio RMA³/(TV×10⁻⁴), with RMA and TV being respiratory motion amplitude (RMA) and tumor volume (TV), respectively. Peak benefit was observed at ratios between 2 and 10. This corresponds to 125–625 cm³ TV with 0.5-cm RMA. Conclusion: 4D optimization produced more interplay-effect-resistant plans compared to 3D optimization. It is most effective when respiratory motion is modest
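
    As a quick check of the quoted ratio, with RMA = 0.5 cm the numerator RMA³ is 0.125 cm³; a tumor volume of 125 cm³ gives TV×10⁻⁴ = 1.25×10⁻² and a ratio of 10, while 625 cm³ gives 6.25×10⁻² and a ratio of 2, consistent with the stated peak-benefit band of 2 to 10.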

  16. SU-F-BRD-01: A Novel 4D Robust Optimization Mitigates Interplay Effect in Intensity-Modulated Proton Therapy for Lung Cancer

    Energy Technology Data Exchange (ETDEWEB)

    Liu, W; Shen, J; Stoker, J; Bues, M [Mayo Clinic Arizona, Phoenix, AZ (United States); Schild, S; Wong, W [Mayo Clinic, Phoenix, Arizona (United States); Chang, J; Liao, Z; Wen, Z; Sahoo, N [MD Anderson Cancer Center, Houston, TX (United States); Herman, M [Mayo Clinic, Rochester, MN (United States); Mohan, R [UT MD Anderson Cancer Center, Houston, TX (United States)

    2015-06-15

    Purpose: To compare the impact of interplay effect on 3D and 4D robustly optimized intensity-modulated proton therapy (IMPT) plans to treat lung cancer. Methods: Two IMPT plans were created for 11 non-small-cell-lung-cancer cases with 6–14 mm spots. 3D robust optimization generated plans on average CTs with the internal gross tumor volume density overridden to deliver 66 CGyE in 33 fractions to the internal target volume (ITV). 4D robust optimization generated plans on 4D CTs with the delivery of prescribed dose to the clinical target volume (CTV). In 4D optimization, the CTV of individual 4D CT phases received non-uniform doses to achieve a uniform cumulative dose. Dose evaluation software was developed to model time-dependent spot delivery to incorporate interplay effect with randomized starting phases of each field per fraction. Patient anatomy voxels were mapped from phase to phase via deformable image registration to score doses. Indices from dose-volume histograms were used to compare target coverage, dose homogeneity, and normal-tissue sparing. DVH indices were compared using Wilcoxon test. Results: Given the presence of interplay effect, 4D robust optimization produced IMPT plans with better target coverage and homogeneity, but slightly worse normal tissue sparing compared to 3D robust optimization (unit: Gy) [D95% ITV: 63.5 vs 62.0 (p=0.014), D5% - D95% ITV: 6.2 vs 7.3 (p=0.37), D1% spinal cord: 29.0 vs 29.5 (p=0.52), Dmean total lung: 14.8 vs 14.5 (p=0.12), D33% esophagus: 33.6 vs 33.1 (p=0.28)]. The improvement of target coverage (D95%,4D – D95%,3D) was related to the ratio RMA³/(TV×10⁻⁴), with RMA and TV being respiratory motion amplitude (RMA) and tumor volume (TV), respectively. Peak benefit was observed at ratios between 2 and 10. This corresponds to 125–625 cm³ TV with 0.5-cm RMA. Conclusion: 4D optimization produced more interplay-effect-resistant plans compared to 3D optimization. It is most effective when respiratory motion is modest

  17. Robust Control of PEP Formation Rate in the Carbon Fixation Pathway of C4 Plants by a Bi-functional Enzyme

    Directory of Open Access Journals (Sweden)

    Hart Yuval

    2011-10-01

    Full Text Available Abstract Background C4 plants such as corn and sugarcane assimilate atmospheric CO2 into biomass by means of the C4 carbon fixation pathway. We asked how PEP formation rate, a key step in the carbon fixation pathway, might work at a precise rate, regulated by light, despite fluctuations in substrate and enzyme levels constituting and regulating this process. Results We present a putative mechanism for robustness in C4 carbon fixation, involving a key enzyme in the pathway, pyruvate orthophosphate dikinase (PPDK), which is regulated by a bifunctional enzyme, Regulatory Protein (RP). The robust mechanism is based on avidity of the bifunctional enzyme RP to its multimeric substrate PPDK, and on a product-inhibition feedback loop that couples the system output to the activity of the bifunctional regulator. The model provides an explanation for several unusual biochemical characteristics of the system and predicts that the system's output, phosphoenolpyruvate (PEP) formation rate, is insensitive to fluctuations in enzyme levels (PPDK and RP), substrate levels (ATP and pyruvate) and the catalytic rate of PPDK, while remaining sensitive to the system's input (light levels). Conclusions The presented PPDK mechanism is a new way to achieve robustness using product inhibition as a feedback loop on a bifunctional regulatory enzyme. This mechanism exhibits robustness to protein and metabolite levels as well as to catalytic rate changes. At the same time, the output of the system remains tuned to input levels.

  18. Limited Impact of Setup and Range Uncertainties, Breathing Motion, and Interplay Effects in Robustly Optimized Intensity Modulated Proton Therapy for Stage III Non-small Cell Lung Cancer

    Energy Technology Data Exchange (ETDEWEB)

    Inoue, Tatsuya [Department of Radiology, Juntendo University Urayasu Hospital, Chiba (Japan); Widder, Joachim; Dijk, Lisanne V. van [Department of Radiation Oncology, University of Groningen, University Medical Center Groningen, Groningen (Netherlands); Takegawa, Hideki [Department of Radiation Oncology, Kansai Medical University Hirakata Hospital, Osaka (Japan); Koizumi, Masahiko; Takashina, Masaaki [Department of Medical Physics and Engineering, Osaka University Graduate School of Medicine, Osaka (Japan); Usui, Keisuke; Kurokawa, Chie; Sugimoto, Satoru [Department of Radiation Oncology, Juntendo University Graduate School of Medicine, Tokyo (Japan); Saito, Anneyuko I. [Department of Radiology, Juntendo University Urayasu Hospital, Chiba (Japan); Department of Radiation Oncology, Juntendo University Graduate School of Medicine, Tokyo (Japan); Sasai, Keisuke [Department of Radiation Oncology, Juntendo University Graduate School of Medicine, Tokyo (Japan); Veld, Aart A. van' t; Langendijk, Johannes A. [Department of Radiation Oncology, University of Groningen, University Medical Center Groningen, Groningen (Netherlands); Korevaar, Erik W., E-mail: e.w.korevaar@umcg.nl [Department of Radiation Oncology, University of Groningen, University Medical Center Groningen, Groningen (Netherlands)

    2016-11-01

    Purpose: To investigate the impact of setup and range uncertainties, breathing motion, and interplay effects using scanning pencil beams in robustly optimized intensity modulated proton therapy (IMPT) for stage III non-small cell lung cancer (NSCLC). Methods and Materials: Three-field IMPT plans were created using a minimax robust optimization technique for 10 NSCLC patients. The plans accounted for 5- or 7-mm setup errors with ±3% range uncertainties. The robustness of the IMPT nominal plans was evaluated considering (1) isotropic 5-mm setup errors with ±3% range uncertainties; (2) breathing motion; (3) interplay effects; and (4) a combination of items 1 and 2. The plans were calculated using 4-dimensional and average intensity projection computed tomography images. The target coverage (TC, volume receiving 95% of prescribed dose) and homogeneity index (D2 − D98, where D2 and D98 are the least doses received by 2% and 98% of the volume) for the internal clinical target volume, and dose indexes for lung, esophagus, heart and spinal cord were compared with those of clinical volumetric modulated arc therapy plans. Results: The TC and homogeneity index for all plans were within clinical limits when considering the breathing motion and interplay effects independently. The setup and range uncertainties had a larger effect when considering their combined effect. The TC decreased to <98% (clinical threshold) in 3 of 10 patients for robust 5-mm evaluations. However, the TC remained >98% for robust 7-mm evaluations for all patients. The organ at risk dose parameters did not significantly vary between the respective robust 5-mm and robust 7-mm evaluations for the 4 error types. Compared with the volumetric modulated arc therapy plans, the IMPT plans showed better target homogeneity and mean lung and heart dose parameters reduced by about 40% and 60%, respectively. Conclusions: In robustly optimized IMPT for stage III NSCLC, the setup and range

  19. TH-CD-209-04: Fuzzy Robust Optimization in Intensity-Modulated Proton Therapy Planning to Account for Range and Patient Setup Uncertainties

    International Nuclear Information System (INIS)

    An, Y; Bues, M; Schild, S; Liu, W

    2016-01-01

    Purpose: We propose to apply a robust optimization model based on fuzzy-logic constraints in the intensity-modulated proton therapy (IMPT) planning subject to range and patient setup uncertainties. The purpose is to ensure the plan robustness under uncertainty and obtain the best trade-off between tumor dose coverage and organ-at-risk (OAR) sparing. Methods: Two IMPT plans were generated for 3 head-and-neck cancer patients: one used the planning target volume (PTV) method; the other used the fuzzy robust optimization method. In the latter method, nine dose distributions were computed - the nominal one and one each for ±3 mm setup uncertainties along three cardinal axes and for ±3.5% range uncertainty. For tumors, these nine dose distributions were explicitly controlled by adding hard constraints with adjustable parameters. For OARs, fuzzy constraints that allow the dose to vary within a certain range were used so that the tumor dose distribution was guaranteed by minimum compromise of that of OARs. We rendered this model tractable by converting the fuzzy constraints to linear constraints. The plan quality was evaluated using dose-volume histogram (DVH) indices such as tumor dose coverage (D95%), homogeneity (D5%-D95%), plan robustness (DVH band at D95%), and OAR sparing like D1% of brain and D1% of brainstem. Results: Our model could yield clinically acceptable plans. The fuzzy-logic robust optimization method produced IMPT plans with comparable target dose coverage and homogeneity compared to the PTV method (unit: Gy[RBE]; average [min, max]) (CTV D95%: 59 [52.7, 63.5] vs 53.5 [46.4, 60.1], CTV D5% - D95%: 11.1 [5.3, 18.6] vs 14.4 [9.2, 21.5]). It also generated more robust plans (CTV DVH band at D95%: 3.8 [1.2, 5.6] vs 11.5 [6.2, 16.7]). The parameters of tumor constraints could be adjusted to control the tradeoff between tumor coverage and OAR sparing. Conclusion: The fuzzy-logic robust optimization generates superior IMPT with minimum compromise of OAR sparing. This research
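
    To illustrate the step of converting fuzzy OAR constraints into linear ones, the toy linear program below enforces hard target-dose constraints in every uncertainty scenario while letting OAR doses exceed a preferred limit only through bounded slack variables whose total is minimized; the dose-influence numbers, limits and tolerance band are invented for illustration and do not come from the abstract.

    # Toy conversion of fuzzy OAR limits into linear constraints with bounded slack,
    # with hard target-coverage constraints in every scenario (all numbers made up).
    import numpy as np
    from scipy.optimize import linprog

    d_nominal = np.array([[1.0, 0.2, 0.0],       # dose per unit weight: target voxels x beamlets
                          [0.0, 0.3, 1.0]])
    scenarios = [d_nominal, 0.9 * d_nominal]     # nominal and a 10% range-undershoot case
    target_min = 60.0                            # required target dose in every scenario

    oar = np.array([[0.5, 0.1, 0.0],             # dose per unit weight: OAR voxels x beamlets
                    [0.0, 0.1, 0.4]])
    oar_pref, oar_tol = 25.0, 10.0               # preferred OAR limit and fuzzy tolerance band

    n_w, n_s = d_nominal.shape[1], oar.shape[0]  # variables: beamlet weights, then OAR slacks
    A_ub, b_ub = [], []
    for D in scenarios:                          # hard constraints: D w >= target_min
        for row in D:
            A_ub.append(np.concatenate([-row, np.zeros(n_s)]))
            b_ub.append(-target_min)
    for i, row in enumerate(oar):                # fuzzy constraints: oar w - s_i <= oar_pref
        s_vec = np.zeros(n_s)
        s_vec[i] = -1.0
        A_ub.append(np.concatenate([row, s_vec]))
        b_ub.append(oar_pref)

    c = np.concatenate([np.zeros(n_w), np.ones(n_s)])      # minimize total OAR slack used
    bounds = [(0, None)] * n_w + [(0, oar_tol)] * n_s      # slack capped by the tolerance band
    res = linprog(c, A_ub=np.array(A_ub), b_ub=b_ub, bounds=bounds)
    print("feasible:", res.success, " beamlet weights:", np.round(res.x[:n_w], 1),
          " OAR slack used:", np.round(res.x[n_w:], 2))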

  20. TH-CD-209-04: Fuzzy Robust Optimization in Intensity-Modulated Proton Therapy Planning to Account for Range and Patient Setup Uncertainties

    Energy Technology Data Exchange (ETDEWEB)

    An, Y; Bues, M; Schild, S; Liu, W [Mayo Clinic Arizona, Phoenix, AZ (United States)

    2016-06-15

    Purpose: We propose to apply a robust optimization model based on fuzzy-logic constraints in the intensity-modulated proton therapy (IMPT) planning subject to range and patient setup uncertainties. The purpose is to ensure the plan robustness under uncertainty and obtain the best trade-off between tumor dose coverage and organ-at-risk (OAR) sparing. Methods: Two IMPT plans were generated for 3 head-and-neck cancer patients: one used the planning target volume (PTV) method; the other used the fuzzy robust optimization method. In the latter method, nine dose distributions were computed - the nominal one and one each for ±3 mm setup uncertainties along three cardinal axes and for ±3.5% range uncertainty. For tumors, these nine dose distributions were explicitly controlled by adding hard constraints with adjustable parameters. For OARs, fuzzy constraints that allow the dose to vary within a certain range were used so that the tumor dose distribution was guaranteed by minimum compromise of that of OARs. We rendered this model tractable by converting the fuzzy constraints to linear constraints. The plan quality was evaluated using dose-volume histogram (DVH) indices such as tumor dose coverage (D95%), homogeneity (D5%-D95%), plan robustness (DVH band at D95%), and OAR sparing like D1% of brain and D1% of brainstem. Results: Our model could yield clinically acceptable plans. The fuzzy-logic robust optimization method produced IMPT plans with comparable target dose coverage and homogeneity compared to the PTV method (unit: Gy[RBE]; average [min, max]) (CTV D95%: 59 [52.7, 63.5] vs 53.5 [46.4, 60.1], CTV D5% - D95%: 11.1 [5.3, 18.6] vs 14.4 [9.2, 21.5]). It also generated more robust plans (CTV DVH band at D95%: 3.8 [1.2, 5.6] vs 11.5 [6.2, 16.7]). The parameters of tumor constraints could be adjusted to control the tradeoff between tumor coverage and OAR sparing. Conclusion: The fuzzy-logic robust optimization generates superior IMPT with minimum compromise of OAR sparing. This research

  1. Optimization for set-points and robust model predictive control for steam generator in nuclear power plants

    International Nuclear Information System (INIS)

    Osgouee, Ahmad

    2010-01-01

    Despite many advanced control methods proposed for the control of nuclear SG water level, operators are still experiencing difficulties, especially at low powers. Therefore, it seems that a suitable controller to replace the manual operations is still needed. In this paper, the optimization of SGL set-points and the design of a robust model predictive controller for the SGL control system will be discussed.

  2. Reducing regional vulnerabilities and multi-city robustness conflicts using many-objective optimization under deep uncertainty

    Science.gov (United States)

    Reed, Patrick; Trindade, Bernardo; Jonathan, Herman; Harrison, Zeff; Gregory, Characklis

    2016-04-01

    Emerging water scarcity concerns in southeastern US are associated with several deeply uncertain factors, including rapid population growth, limited coordination across adjacent municipalities and the increasing risks for sustained regional droughts. Managing these uncertainties will require that regional water utilities identify regionally coordinated, scarcity-mitigating strategies that trigger the appropriate actions needed to avoid water shortages and financial instabilities. This research focuses on the Research Triangle area of North Carolina, seeking to engage the water utilities within Raleigh, Durham, Cary and Chapel Hill in cooperative and robust regional water portfolio planning. Prior analysis of this region through the year 2025 has identified significant regional vulnerabilities to volumetric shortfalls and financial losses. Moreover, efforts to maximize the individual robustness of any of the mentioned utilities also have the potential to strongly degrade the robustness of the others. This research advances a multi-stakeholder Many-Objective Robust Decision Making (MORDM) framework to better account for deeply uncertain factors when identifying cooperative management strategies. Results show that the sampling of deeply uncertain factors in the computational search phase of MORDM can aid in the discovery of management actions that substantially improve the robustness of individual utilities as well as the overall region to water scarcity. Cooperative water transfers, financial risk mitigation tools, and coordinated regional demand management must be explored jointly to decrease robustness conflicts between the utilities. The insights from this work have general merit for regions where adjacent municipalities can benefit from cooperative regional water portfolio planning.

  3. Performance Analysis of Generating Function Approach for Optimal Reconfiguration of Formation Flying

    Directory of Open Access Journals (Sweden)

    Kwangwon Lee

    2013-03-01

    Full Text Available The use of generating functions for solving optimal rendezvous problems has an advantage in the sense that it does not require one to guess and iterate the initial costate. This paper presents how to apply generating functions to analyze spacecraft optimal reconfiguration between projected circular orbits. The series-based solution obtained by using generating functions demonstrates excellent convergence and approximation to the nonlinear reference solution obtained from a numerical shooting method. These favorable properties are expected to hold for analyzing optimal formation reconfiguration under perturbations and non-circular reference orbits.

  4. AREA 2: Novel Materials for Robust Repair of Leaky Wellbores in CO2 Storage Formations

    Energy Technology Data Exchange (ETDEWEB)

    Balhoff, Matthew [Univ. of Texas, Austin, TX (United States); Tavassoli, Shayan [Univ. of Texas, Austin, TX (United States); Fei Ho, Jostine [Univ. of Texas, Austin, TX (United States)

    2016-01-31

    The potential leakage of hydrocarbon fluids or CO2 out of subsurface formations through wells with fractured cement or debonded microannuli is a primary concern in oil and gas production and CO2 storage. The presence of fractures in a cement annulus with apertures on the order of 10–300 microns can pose a significant leakage danger with effective permeability in the range of 0.1–1 mD (millidarcy). Leakage pathways with small apertures are often difficult to repair using conventional oilfield cement, thus a low-viscosity sealant that can be easily placed into these fractures while providing an effective seal is desired. The development of a novel application using pH-triggered polymeric sealants could potentially be the solution to plugging these fractures, and that was the research aim of this study. The application is based on the transport and reaction of a low-pH poly(acrylic acid) polymer through fractures in strongly alkaline cement. The pH-sensitive microgels viscosify upon neutralization with cement to become highly swollen gels with substantial yield stress that can block fluid flow. Experiments in a cement fracture determined the effects of the viscosification and gel deposition via real-time visual observation and measurements of pressure gradient and effluent pH. While the pH-triggered gelling mechanism and rheology measurements of the neutralized polymer gel show promising results, the polymer solution in contact with cement undergoes an undesirable reaction known as polymer syneresis. Syneresis is caused by the release of calcium cations from cement, which collapse the polymer network. Syneresis produces an unstable calcium-precipitation byproduct that is detrimental to the strength and stability of the gel in place. As a result, gel-sealed leakage pathways that were subjected to various degrees of syneresis often failed to hold back pressures. Several chemicals were studied to inhibit polymer syneresis and tested for pretreatment of

  5. Reducing regional drought vulnerabilities and multi-city robustness conflicts using many-objective optimization under deep uncertainty

    Science.gov (United States)

    Trindade, B. C.; Reed, P. M.; Herman, J. D.; Zeff, H. B.; Characklis, G. W.

    2017-06-01

    Emerging water scarcity concerns in many urban regions are associated with several deeply uncertain factors, including rapid population growth, limited coordination across adjacent municipalities and the increasing risks for sustained regional droughts. Managing these uncertainties will require that regional water utilities identify coordinated, scarcity-mitigating strategies that trigger the appropriate actions needed to avoid water shortages and financial instabilities. This research focuses on the Research Triangle area of North Carolina, seeking to engage the water utilities within Raleigh, Durham, Cary and Chapel Hill in cooperative and robust regional water portfolio planning. Prior analysis of this region through the year 2025 has identified significant regional vulnerabilities to volumetric shortfalls and financial losses. Moreover, efforts to maximize the individual robustness of any of the mentioned utilities also have the potential to strongly degrade the robustness of the others. This research advances a multi-stakeholder Many-Objective Robust Decision Making (MORDM) framework to better account for deeply uncertain factors when identifying cooperative drought management strategies. Our results show that appropriately designing adaptive risk-of-failure action triggers required stressing them with a comprehensive sample of deeply uncertain factors in the computational search phase of MORDM. Search under the new ensemble of states-of-the-world is shown to fundamentally change perceived performance tradeoffs and substantially improve the robustness of individual utilities as well as the overall region to water scarcity. Search under deep uncertainty enhanced the discovery of how cooperative water transfers, financial risk mitigation tools, and coordinated regional demand management must be employed jointly to improve regional robustness and decrease robustness conflicts between the utilities. Insights from this work have general merit for regions where

  6. Model-aided optimization of delta-endotoxin-formation in continuous culture systems

    Energy Technology Data Exchange (ETDEWEB)

    Schulz, V; Schorcht, R; Ignatenko, Yu N; Sakharova, Z V; Khovrychev, M P

    1985-01-01

    A mathematical model of growth, sporulation and delta-endotoxin formation of Bacillus thuringiensis is given. The results of model-aided optimization of steady-state continuous culture systems indicate that the productivity in the one-stage system is 1.9% higher, and in the two-stage system 18.5% higher, than in the batch process.

  7. Quantitative NMR Approach to Optimize the Formation of Chemical Building Blocks from Abundant Carbohydrates

    DEFF Research Database (Denmark)

    Elliot, Samuel Gilbert; Tolborg, Søren; Sádaba, Irantzu

    2017-01-01

    -containing catalysts such as Sn-Beta. These compounds are potential building blocks for polyesters with additional olefin and alcohol functionalities. We employ an NMR approach to identify, quantify and optimize the formation of these building blocks in the chemocatalytic transformation of abundant carbohydrates by Sn

  8. Synthesis of multi-wavelength temporal phase-shifting algorithms optimized for high signal-to-noise ratio and high detuning robustness using the frequency transfer function.

    Science.gov (United States)

    Servin, Manuel; Padilla, Moises; Garnica, Guillermo

    2016-05-02

    Synthesis of single-wavelength temporal phase-shifting algorithms (PSA) for interferometry is well known and firmly based on the frequency transfer function (FTF) paradigm. Here we extend the single-wavelength FTF theory to dual- and multi-wavelength PSA synthesis when several simultaneous laser colors are present. The FTF-based synthesis for dual-wavelength (DW) PSA is optimized for high signal-to-noise ratio and a minimum number of temporal phase-shifted interferograms. The DW-PSA synthesis presented here may be used for interferometric contouring of discontinuous industrial objects. DW-PSA may also be useful for DW shop-testing of deep free-form aspheres. As shown here, using the FTF-based synthesis one may easily find explicit DW-PSA formulae optimized for high signal-to-noise ratio and high detuning robustness. To this date, no general synthesis and analysis for temporal DW-PSAs has been given; only ad hoc DW-PSA formulas have been reported. Consequently, no explicit formulae for their spectra, signal-to-noise ratio, or detuning and harmonic robustness have been given. Here, for the first time, a fully general procedure for designing DW-PSAs (or triple-wavelength PSAs) with the desired spectrum, signal-to-noise ratio and detuning robustness is given. We finally generalize DW-PSAs to temporal PSAs with a higher number of wavelengths.

  9. A Multiobjective Robust Scheduling Optimization Mode for Multienergy Hybrid System Integrated by Wind Power, Solar Photovoltaic Power, and Pumped Storage Power

    Directory of Open Access Journals (Sweden)

    Lihui Zhang

    2017-01-01

    Full Text Available A wind power plant (WPP), photovoltaic generators (PV), cell-gas turbines (CGT), and a pumped storage power station (PHSP) are integrated into a multienergy hybrid system (MEHS). Firstly, this paper presents the MEHS structure and constructs a scheduling model with the objective functions of maximum economic benefit and minimum power output fluctuation. Secondly, in order to relieve the influence of WPP and PV uncertainty on the system, robust stochastic theory is introduced to describe the uncertainty and to propose a multiobjective stochastic scheduling optimization model by transforming the constraint conditions that contain uncertain variables. Finally, a 9.6 MW WPP, a 6.5 MW PV, three CGT units, and an upper reservoir with 10 MW·h equivalent capacity are chosen as the simulation system. The results show that the MEHS can achieve the best operation result by exploiting the multienergy hybrid generation characteristic. The PHSP can shave peaks and fill valleys of the load curve by optimizing its pumping and generating behavior based on the load supply and demand status and the available power of WPP and PV. Setting different robust coefficients can relieve the uncertainty of WPP and PV and provides a flexible scheduling decision tool for decision-makers with different risk attitudes, maximizing economic benefits and minimizing operation risks at the same time.

  10. A Unified Trading Model Based on Robust Optimization for Day-Ahead and Real-Time Markets with Wind Power Integration

    DEFF Research Database (Denmark)

    Jiang, Yuewen; Chen, Meisen; You, Shi

    2017-01-01

    In a conventional electricity market, trading is conducted based on power forecasts in the day-ahead market, while the power imbalance is regulated in the real-time market, which is a separate trading scheme. With large-scale wind power connected into the power grid, power forecast errors increase...... in the day-ahead market which lowers the economic efficiency of the separate trading scheme. This paper proposes a robust unified trading model that includes the forecasts of real-time prices and imbalance power into the day-ahead trading scheme. The model is developed based on robust optimization in view...... of the undefined probability distribution of clearing prices of the real-time market. For the model to be used efficiently, an improved quantum-behaved particle swarm algorithm (IQPSO) is presented in the paper based on an in-depth analysis of the limitations of the static character of quantum-behaved particle...

  11. Robustness Beamforming Algorithms

    Directory of Open Access Journals (Sweden)

    Sajad Dehghani

    2014-04-01

    Full Text Available Adaptive beamforming methods are known to degrade in the presence of steering vector and covariance matrix uncertainty. In this paper, a new approach to robust adaptive minimum variance distortionless response (MVDR) beamforming is presented that is robust against uncertainties in both the steering vector and the covariance matrix. The method solves an optimization problem with a quadratic objective function and a quadratic constraint. The optimization problem is nonconvex but is converted into a convex optimization problem in this paper. It is solved by the interior-point method, and the optimum weight vector for robust beamforming is obtained.
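
    As a concrete illustration of the robust-MVDR idea, the sketch below applies diagonal loading to the sample covariance matrix before forming the MVDR weights. This is a minimal stand-in, not the paper's quadratically constrained convex formulation; the array geometry, loading level, and simulated snapshot data are assumptions chosen only for the example.

```python
import numpy as np

def robust_mvdr_weights(R, a, loading=0.1):
    """Diagonal-loading MVDR: w = (R + eps*I)^-1 a / (a^H (R + eps*I)^-1 a)."""
    n = R.shape[0]
    R_loaded = R + loading * (np.trace(R).real / n) * np.eye(n)
    Ri_a = np.linalg.solve(R_loaded, a)
    return Ri_a / (a.conj() @ Ri_a)

# Hypothetical 8-element half-wavelength-spaced uniform linear array.
n_el, spacing = 8, 0.5
def steer(theta_deg):
    k = np.arange(n_el)
    return np.exp(2j * np.pi * spacing * k * np.sin(np.radians(theta_deg)))

rng = np.random.default_rng(0)
snaps = 200
interf = steer(30)[:, None] * (10 * rng.standard_normal((1, snaps)))   # strong interferer at 30 deg
noise = (rng.standard_normal((n_el, snaps)) + 1j * rng.standard_normal((n_el, snaps))) / np.sqrt(2)
X = interf + noise
R_hat = X @ X.conj().T / snaps          # sample covariance, mismatched as in practice

w = robust_mvdr_weights(R_hat, steer(0))
print("gain toward  0 deg:", abs(w.conj() @ steer(0)))   # near 1: distortionless look direction
print("gain toward 30 deg:", abs(w.conj() @ steer(30)))  # strongly attenuated interferer
```

    With the loaded covariance, the weights keep near-unit gain toward the assumed look direction while still suppressing the interferer, which is the practical effect a robust formulation aims for.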

  12. Solusi Optimal Model Optimisasi Robust Untuk Masalah Traveling Salesman Dengan Ketidaktentuan Kotak Dan Pendekatan Metode Branch And Bound

    Directory of Open Access Journals (Sweden)

    Poppy Amriyati

    2015-12-01

    Full Text Available The Traveling Salesman Problem (TSP) is a route-finding technique that starts from an initial point, visits every city exactly once, and then returns to the origin so that the total travel distance or time is minimized. To cope with uncertainty in travel distances or times, the TSP model needs to be extended. One branch of optimization capable of handling such uncertainty is Robust Optimization. This paper discusses the application of Robust Optimization to the TSP (RTSP) using a box-uncertainty approach, solved with the Branch and Bound method. Numerical simulations in the Maple software are presented for several real cases of the RTSP, such as a construction management problem, the determination of intercity travel distances on Java Island, and routing of the Mandiri Fun Run.
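
    To make the box-uncertainty idea tangible: when every travel time lies independently in an interval and the goal is to minimize the worst case, the robust counterpart can be solved on the upper-bound costs. The sketch below does exactly that with plain enumeration instead of branch and bound; the four-city distance matrix and the 20% deviation are made-up illustration values, not data from the paper.

```python
import itertools
import numpy as np

def robust_tsp_box(nominal, deviation):
    """Worst-case (box-uncertainty) TSP: each travel time lies in
    [nominal, nominal + deviation]; for a minimization problem the robust
    counterpart simply uses the upper bound on every arc."""
    worst = nominal + deviation
    n = len(worst)
    best_tour, best_cost = None, np.inf
    for perm in itertools.permutations(range(1, n)):      # enumerate tours starting at city 0
        tour = (0,) + perm + (0,)
        cost = sum(worst[tour[i], tour[i + 1]] for i in range(n))
        if cost < best_cost:
            best_tour, best_cost = tour, cost
    return best_tour, best_cost

nominal = np.array([[0, 10, 15, 20],
                    [10, 0, 35, 25],
                    [15, 35, 0, 30],
                    [20, 25, 30, 0]], float)
deviation = 0.2 * nominal          # up to 20% longer on every leg
print(robust_tsp_box(nominal, deviation))
```

    For realistic instance sizes the enumeration would of course be replaced by a branch-and-bound search over the same worst-case costs.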

  13. A fast and robust kinematic model for a 12 DoF hyper-redundant robot positioning: An optimization proposal

    Science.gov (United States)

    Lima, José; Pereira, Ana I.; Costa, Paulo; Pinto, Andry; Costa, Pedro

    2017-07-01

    This paper describes an optimization procedure for a robot with 12 degrees of freedom that avoids the inverse kinematics problem, which is a hard task for this type of robot manipulator. The robot can be used for pick-and-place tasks in complex designs. By combining an accurate and fast direct kinematics model with optimization strategies, it is possible to obtain the joint angles for a desired end-effector position and orientation. The stretched simulated annealing algorithm and a genetic algorithm were used as optimization methods. The solutions found were validated using data originating from a real and a simulated robot formed by 12 servomotors with a gripper.
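
    The sketch below illustrates the same idea on a toy 12-joint serial chain: forward kinematics is cheap to evaluate, so a global optimizer searches for joint angles that bring the end effector to a target point. The chain geometry (planar revolute joints with 10 cm links) and the use of SciPy's dual annealing are assumptions for the example, not the kinematic model or the stretched simulated annealing variant used in the paper.

```python
import numpy as np
from scipy.optimize import dual_annealing

LINK_LENGTHS = np.full(12, 0.10)    # hypothetical 10 cm links

def forward_kinematics(angles):
    """Toy 12-joint serial chain: each joint rotates about z and then
    translates along its local x axis by the link length."""
    T = np.eye(4)
    for theta, a in zip(angles, LINK_LENGTHS):
        c, s = np.cos(theta), np.sin(theta)
        joint = np.array([[c, -s, 0.0, a * c],
                          [s,  c, 0.0, a * s],
                          [0.0, 0.0, 1.0, 0.0],
                          [0.0, 0.0, 0.0, 1.0]])
        T = T @ joint
    return T[:3, 3]                  # end-effector position

target = np.array([0.6, 0.4, 0.0])   # reachable point for this planar toy chain

def position_error(angles):
    return np.linalg.norm(forward_kinematics(angles) - target)

result = dual_annealing(position_error, bounds=[(-np.pi, np.pi)] * 12, maxiter=300)
print("residual position error [m]:", position_error(result.x))
```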

  14. An Explicit Example Of Optimal Portfolio-Consumption Choices With Habit Formation And Partial Observations

    OpenAIRE

    Yu, Xiang

    2011-01-01

    We consider a model of optimal investment and consumption with both habit formation and partial observations in an incomplete Itô-process market. The investor chooses his consumption under the addictive habits constraint while observing only the market stock prices but not the instantaneous rate of return. Applying the Kalman-Bucy filtering theorem and Dynamic Programming arguments, we solve the associated Hamilton-Jacobi-Bellman (HJB) equation explicitly for the path dependent stochas...

  15. Efficient Parallel Sorting for Migrating Birds Optimization When Solving Machine-Part Cell Formation Problems

    Directory of Open Access Journals (Sweden)

    Ricardo Soto

    2016-01-01

    Full Text Available The Machine-Part Cell Formation Problem (MPCFP) is an NP-hard optimization problem that consists of grouping machines and parts into a set of cells so that each cell can operate independently and intercell movements are minimized. This problem has largely been tackled in the literature using different techniques, ranging from classic methods such as linear programming to more modern nature-inspired metaheuristics. In this paper, we present an efficient parallel version of the Migrating Birds Optimization metaheuristic for solving the MPCFP. Migrating Birds Optimization is a population metaheuristic based on the V-flight formation of migrating birds, which is proven to be an effective formation for energy saving. This approach is enhanced by the smart incorporation of parallel procedures that notably improve the performance of the several sorting processes performed by the metaheuristic. We perform computational experiments on 1080 benchmarks resulting from the combination of 90 well-known MPCFP instances with 12 sorting configurations with and without threads. We report promising results in which the proposal is able to reach the global optimum in all instances, while the solving time with respect to a nonparallel approach is notably reduced.

  16. A Robust Optimization Based Energy-Aware Virtual Network Function Placement Proposal for Small Cell 5G Networks with Mobile Edge Computing Capabilities

    Directory of Open Access Journals (Sweden)

    Bego Blanco

    2017-01-01

    Full Text Available In the context of cloud-enabled 5G radio access networks with network function virtualization capabilities, we focus on the virtual network function placement problem for a multitenant cluster of small cells that provide mobile edge computing services. Under an emerging distributed network architecture and hardware infrastructure, we employ cloud-enabled small cells that integrate microservers for virtualization execution, equipped with additional hardware appliances. We develop an energy-aware placement solution using a robust optimization approach based on service demand uncertainty in order to minimize the power consumption in the system constrained by network service latency requirements and infrastructure terms. Then, we discuss the results of the proposed placement mechanism in 5G scenarios that combine several service flavours and robust protection values. Once the impact of the service flavour and robust protection on the global power consumption of the system is analyzed, numerical results indicate that our proposal succeeds in efficiently placing the virtual network functions that compose the network services in the available hardware infrastructure while fulfilling service constraints.

  17. Robust high-throughput batch screening method in 384-well format with optical in-line resin quantification.

    Science.gov (United States)

    Kittelmann, Jörg; Ottens, Marcel; Hubbuch, Jürgen

    2015-04-15

    High-throughput batch screening technologies have become an important tool in downstream process development. Although continued miniaturization saves time and sample consumption, no screening process has yet been described in the 384-well microplate format. Several processes are established in the 96-well dimension to investigate protein-adsorbent interactions, utilizing between 6.8 and 50 μL resin per well. However, as sample consumption scales with resin volume and throughput scales with experiments per microplate, these processes are limited in the costs and time they can save. In this work, a new method for in-well resin quantification by optical means, applicable in the 384-well format and to resin volumes as small as 0.1 μL, is introduced. An HTS batch isotherm process is described that utilizes this new method in combination with optical sample volume quantification for screening of isotherm parameters in 384-well microplates. Results are qualified by confidence bounds determined by bootstrap analysis and a comprehensive Monte Carlo study of error propagation. This new approach opens the door to a variety of screening processes in the 384-well format on HTS stations, higher-quality screening data and an increase in throughput. Copyright © 2015 Elsevier B.V. All rights reserved.
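
    The bootstrap-qualified isotherm fit described above can be sketched in a few lines: fit a single-component Langmuir isotherm to batch-uptake data, then refit on resampled data to obtain confidence bounds on the parameters. The Langmuir form, the data points, and the 2000 resamples are assumptions made only for illustration; the paper's actual isotherm model and error-propagation study are more elaborate.

```python
import numpy as np
from scipy.optimize import curve_fit

def langmuir(c, q_max, k):
    """Single-component Langmuir isotherm: q = q_max * k * c / (1 + k * c)."""
    return q_max * k * c / (1.0 + k * c)

# Hypothetical batch-uptake data: liquid concentration c (mg/mL) vs. bound protein q (mg/mL resin).
c = np.array([0.05, 0.1, 0.2, 0.5, 1.0, 2.0, 4.0])
q = np.array([8.0, 14.5, 22.0, 33.0, 40.0, 44.5, 47.0])

popt, _ = curve_fit(langmuir, c, q, p0=[50.0, 5.0])

# Non-parametric bootstrap: refit on resampled (c, q) pairs to get confidence bounds.
rng = np.random.default_rng(0)
boot = []
for _ in range(2000):
    idx = rng.integers(0, len(c), len(c))
    try:
        p, _ = curve_fit(langmuir, c[idx], q[idx], p0=popt, maxfev=5000)
        boot.append(p)
    except RuntimeError:            # skip degenerate resamples that do not converge
        continue
boot = np.array(boot)
lo, hi = np.percentile(boot, [2.5, 97.5], axis=0)
print("q_max = %.1f (95%% CI %.1f-%.1f)" % (popt[0], lo[0], hi[0]))
print("k     = %.2f (95%% CI %.2f-%.2f)" % (popt[1], lo[1], hi[1]))
```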

  18. Enabling School Structure, Collective Responsibility, and a Culture of Academic Optimism: Toward a Robust Model of School Performance in Taiwan

    Science.gov (United States)

    Wu, Jason H.; Hoy, Wayne K.; Tarter, C. John

    2013-01-01

    Purpose: The purpose of this research is twofold: to test a theory of academic optimism in Taiwan elementary schools and to expand the theory by adding new variables, collective responsibility and enabling school structure, to the model. Design/methodology/approach: Structural equation modeling was used to test, refine, and expand an…

  19. Optimal and robust control of a class of nonlinear systems using dynamically re-optimised single network adaptive critic design

    Science.gov (United States)

    Tiwari, Shivendra N.; Padhi, Radhakant

    2018-01-01

    Following the philosophy of adaptive optimal control, a neural network-based state feedback optimal control synthesis approach is presented in this paper. First, accounting for a nominal system model, a single network adaptive critic (SNAC) based multi-layered neural network (called NN1) is synthesised offline. Then, another linear-in-weight neural network (called NN2) is trained online and augmented to NN1 in such a manner that their combined output represents the desired optimal costate for the actual plant. To do this, the nominal model needs to be updated online to adapt to the actual plant, which is done by synthesising yet another linear-in-weight neural network (called NN3) online. Training of NN3 is done by utilising the error information between the nominal and actual states and carrying out the necessary Lyapunov stability analysis using a Sobolev norm based Lyapunov function. This helps in training NN2 successfully to capture the required optimal relationship. The overall architecture is named 'Dynamically Re-optimised Single Network Adaptive Critic (DR-SNAC)'. Numerical results for two motivating illustrative problems are presented, including a comparison study with the closed form solution for one problem, which clearly demonstrates the effectiveness and benefit of the proposed approach.

  20. Outcome based state budget allocation for diabetes prevention programs using multi-criteria optimization with robust weights.

    Science.gov (United States)

    Mehrotra, Sanjay; Kim, Kibaek

    2011-12-01

    We consider the problem of outcome-based budget allocation to chronic disease prevention programs across the United States (US) to achieve greater geographical healthcare equity. We use the Diabetes Prevention and Control Programs (DPCP) of the Centers for Disease Control and Prevention (CDC) as an example. We present a multi-criteria robust weighted sum model for such multi-criteria decision making in a group decision setting. Principal component analysis and an inverse linear programming technique are presented and used to study the actual 2009 budget allocation by the CDC. Our results show that the CDC budget allocation process for the DPCPs is not likely model based. In our empirical study, the relative weights for different prevalence and comorbidity factors and the corresponding budgets obtained under different weight regions are discussed. Parametric analysis suggests that money should be allocated to states to promote diabetes education and to increase patient-healthcare provider interactions to reduce disparity across the US.

  1. Experimental design, modeling and optimization of polyplex formation between DNA oligonucleotides and branched polyethylenimine.

    Science.gov (United States)

    Clima, Lilia; Ursu, Elena L; Cojocaru, Corneliu; Rotaru, Alexandru; Barboiu, Mihail; Pinteala, Mariana

    2015-09-28

    The complexes formed by DNA and polycations have received great attention owing to their potential application in gene therapy. In this study, the binding efficiency between double-stranded oligonucleotides (dsDNA) and branched polyethylenimine (B-PEI) has been quantified by processing the images captured from gel electrophoresis assays. A central composite experimental design has been employed to investigate the effects of controllable factors on the binding efficiency. On the basis of the experimental data and response surface methodology, a multivariate regression model has been constructed and statistically validated. The model has enabled us to predict the binding efficiency depending on experimental factors such as the concentrations of dsDNA and B-PEI as well as the initial pH of the solution. The optimization of the binding process has been performed using simplex and gradient methods. The optimal conditions determined for polyplex formation have yielded a maximal binding efficiency close to 100%. In order to reveal the mechanism of complex formation at the atomic scale, a molecular dynamics simulation has been carried out. According to the computational results, B-PEI amine hydrogen atoms interact with oxygen atoms from dsDNA phosphate groups. These interactions lead to the formation of hydrogen bonds between the macromolecules, stabilizing the polyplex structure.
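
    The general workflow of fitting a second-order response surface to a central composite design and then optimizing it can be sketched as below. The coded factor levels, the response values, and the use of a Nelder-Mead simplex search are placeholders for illustration only; they are not the data or the exact optimizer of the study.

```python
import numpy as np
from itertools import combinations_with_replacement
from scipy.optimize import minimize

def quadratic_features(X):
    """Second-order model terms: intercept, linear, interaction and squared terms."""
    X = np.atleast_2d(X)
    cols = [np.ones(len(X))] + [X[:, i] for i in range(X.shape[1])]
    cols += [X[:, i] * X[:, j] for i, j in combinations_with_replacement(range(X.shape[1]), 2)]
    return np.column_stack(cols)

# Hypothetical coded central composite design: [dsDNA], [B-PEI], pH, plus binding efficiency (%).
X = np.array([[-1, -1, -1], [1, -1, -1], [-1, 1, -1], [1, 1, -1],
              [-1, -1, 1], [1, -1, 1], [-1, 1, 1], [1, 1, 1],
              [0, 0, 0], [0, 0, 0], [1.68, 0, 0], [-1.68, 0, 0],
              [0, 1.68, 0], [0, -1.68, 0], [0, 0, 1.68], [0, 0, -1.68]], float)
y = np.array([52, 61, 70, 88, 49, 66, 75, 92, 83, 81, 78, 45, 90, 50, 72, 64], float)

beta, *_ = np.linalg.lstsq(quadratic_features(X), y, rcond=None)   # fit the response surface

def predicted(x):
    return (quadratic_features(x) @ beta)[0]

# Maximise the fitted response with a Nelder-Mead simplex search over the coded factors.
res = minimize(lambda x: -predicted(x), x0=np.zeros(3), method="Nelder-Mead")
print("coded optimum:", np.round(res.x, 2), " predicted efficiency:", round(predicted(res.x), 1))
```

    A gradient-based search on the fitted polynomial (e.g. BFGS) could replace the simplex step, mirroring the two optimization methods mentioned in the abstract.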

  2. Uncertainty of Blood Alcohol Concentration (BAC) Results as Related to Instrumental Conditions: Optimization and Robustness of BAC Analysis Headspace Parameters

    Directory of Open Access Journals (Sweden)

    Haleigh A. Boswell

    2015-12-01

    Full Text Available Analysis of blood alcohol concentration is a routine analysis performed in many forensic laboratories. This analysis commonly utilizes static headspace sampling followed by gas chromatography combined with flame ionization detection (GC-FID). Studies have shown several “optimal” methods for instrumental operating conditions, which are intended to yield accurate and precise data. Given that different instruments, sampling methods, application-specific columns and parameters are often utilized, it is much less common to find information on the robustness of these reported conditions. A major problem can arise when these “optimal” conditions are not also robust, thus producing data with higher than desired uncertainty or potentially inaccurate results. The goal of this research was to incorporate the principles of quality by design (QBD) in the adjustment and determination of BAC (blood alcohol concentration) instrumental headspace parameters, thereby ensuring that minor instrumental variations, which occur as a matter of normal work, do not appreciably affect the final results of this analysis. This study discusses both the QBD principles and the results of the experiments, which allow for the determination of more favorable instrumental headspace conditions. Additionally, method detection limits are reported in order to determine a reporting threshold and the degree of uncertainty at the common threshold value of 0.08 g/dL. Furthermore, the comparison of two internal standards, n-propanol and t-butanol, is investigated. The study showed that an altered parameter of 85 °C headspace oven temperature and 15 psi headspace vial pressurization produces the lowest percent relative standard deviation of 1.3% when t-butanol is implemented as an internal standard, at least for one very common platform. The study also showed that an altered parameter of 100 °C headspace oven temperature and 15 psi headspace vial pressurization

  3. A hyper-robust sauropodomorph dinosaur ilium from the Upper Triassic-Lower Jurassic Elliot Formation of South Africa: Implications for the functional diversity of basal Sauropodomorpha

    Science.gov (United States)

    McPhee, Blair W.; Choiniere, Jonah N.

    2016-11-01

    It has generally been held that the locomotory habits of sauropodomorph dinosaurs moved in a relatively linear evolutionary progression from bipedal through "semi-bipedal" to the fully quadrupedal gait of Sauropoda. However, there is now a growing appreciation of the range of locomotory strategies practiced amongst contemporaneous taxa of the latest Triassic and earliest Jurassic. Here we report on the anatomy of a hyper-robust basal sauropodomorph ilium from the Late Triassic-Early Jurassic Elliot Formation of South Africa. This element, in addition to highlighting the unexpected range of bauplan diversity throughout basal Sauropodomorpha, also has implications for our understanding of the relevance of "robusticity" to sauropodomorph evolution beyond generalized limb scaling relationships. Possibly representing a unique form of hindlimb stabilization during phases of bipedal locomotion, the autapomorphic morphology of this newly rediscovered ilium provides additional insight into the myriad ways in which basal Sauropodomorpha managed the inherited behavioural and biomechanical challenges of increasing body size, hyper-herbivory, and a forelimb primarily adapted for use in a bipedal context.

  4. The Key Role of the Vector Optimization Algorithm and Robust Design Approach for the Design of Polygeneration Systems

    Directory of Open Access Journals (Sweden)

    Alfredo Gimelli

    2018-04-01

    Full Text Available In recent decades, growing concerns about global warming and climate change effects have led to specific directives, especially in Europe, promoting the use of primary energy-saving techniques and renewable energy systems. The increasingly stringent requirements for carbon dioxide reduction have led to a more widespread adoption of distributed energy systems. In particular, besides renewable energy systems for power generation, one of the most effective techniques used to face the energy-saving challenge has been the adoption of polygeneration plants for combined heating, cooling, and electricity generation. This technique offers the possibility of achieving a considerable enhancement in energy and cost savings as well as a simultaneous reduction of greenhouse gas emissions. However, the use of small-scale polygeneration systems does not ensure the achievement of mandatory, but sometimes conflicting, aims without the proper sizing and operation of the plant. This paper focuses on a methodology based on vector optimization algorithms, developed by the authors for the identification of optimal polygeneration plant solutions. To this aim, a specific calculation algorithm for the study of cogeneration systems has also been developed. After a detailed description of the proposed methodology, the paper provides some specific applications to the study of combined heat and power (CHP) and organic Rankine cycle (ORC) plants, thus highlighting the potential of the proposed techniques and the main results achieved.

  5. Quantitative NMR Approach to Optimize the Formation of Chemical Building Blocks from Abundant Carbohydrates.

    Science.gov (United States)

    Elliot, Samuel G; Tolborg, Søren; Sádaba, Irantzu; Taarning, Esben; Meier, Sebastian

    2017-07-21

    The future role of biomass-derived chemicals relies on the formation of diverse functional monomers in high yields from carbohydrates. Recently, it has become clear that a series of α-hydroxy acids, esters, and lactones can be formed from carbohydrates in alcohol and water solvents using tin-containing catalysts such as Sn-Beta. These compounds are potential building blocks for polyesters bearing additional olefin and alcohol functionalities. An NMR approach was used to identify, quantify, and optimize the formation of these building blocks in the Sn-Beta-catalyzed transformation of abundant carbohydrates. Record yields of the target molecules can be achieved by obstructing competing reactions through solvent selection. © 2017 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.

  6. Optimization study on multiple train formation scheme of urban rail transit

    Science.gov (United States)

    Xia, Xiaomei; Ding, Yong; Wen, Xin

    2018-05-01

    The new organization method, represented by the mixed operation of multi-marshalling trains, can adapt to the characteristics of an unevenly distributed passenger flow, but research on this aspect is still limited. This paper introduces the passenger sharing rate and a congestion penalty coefficient for different train formations. On this basis, an optimization model is established with minimum passenger cost and operation cost as the objectives, and operation frequency and passenger demand as the constraints. The ideal point method is used to solve the model. Compared with the fixed marshalling operation model, the overall cost of this scheme is reduced by 9.24% and 4.43%, respectively. This result not only validates the model but also illustrates the advantages of the multiple train formation scheme.
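
    A compact way to picture the ideal point method for this kind of bi-objective problem is sketched below: enumerate feasible formation/frequency schemes, evaluate both costs, and pick the scheme whose normalized objective vector lies closest to the component-wise ideal point. All capacities, demands, and cost coefficients are invented placeholders, not values from the study.

```python
import numpy as np

# Hypothetical candidate schemes: (frequency of 4-car trains, frequency of 6-car trains) per hour.
schemes = [(f4, f6) for f4 in range(0, 13, 2) for f6 in range(0, 13, 2)
           if 4 * 250 * f4 + 6 * 250 * f6 >= 9000        # capacity must cover peak demand (pax/h)
           and 6 <= f4 + f6 <= 20]                        # headway / line-capacity limits

def objectives(f4, f6):
    passenger_cost = 60.0 / (f4 + f6) * 9000 * 0.5        # ~average wait time * demand * value of time
    operation_cost = 1800.0 * f4 + 2500.0 * f6            # cost per train trip by formation length
    return np.array([passenger_cost, operation_cost])

vals = np.array([objectives(*s) for s in schemes])
ideal = vals.min(axis=0)                                   # best achievable value of each objective
nadir = vals.max(axis=0)
norm = (vals - ideal) / (nadir - ideal)                    # scale both objectives to [0, 1]
best = int(np.argmin(np.linalg.norm(norm, axis=1)))        # scheme closest to the ideal point
print("chosen scheme (4-car/h, 6-car/h):", schemes[best], "objectives:", vals[best])
```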

  7. Neural-Network-Based Robust Optimal Tracking Control for MIMO Discrete-Time Systems With Unknown Uncertainty Using Adaptive Critic Design.

    Science.gov (United States)

    Liu, Lei; Wang, Zhanshan; Zhang, Huaguang

    2018-04-01

    This paper is concerned with a robust optimal tracking control strategy for a class of nonlinear multi-input multi-output discrete-time systems with unknown uncertainty via an adaptive critic design (ACD) scheme. The main purpose is to establish an adaptive actor-critic control method, so that the cost incurred in dealing with the uncertainty is minimal and the closed-loop system is stable. Based on the neural network approximator, an action network is applied to generate the optimal control signal and a critic network is used to approximate the cost function, respectively. In contrast to previous methods, the main features of this paper are: 1) the ACD scheme is integrated into the controllers to cope with the uncertainty and 2) a novel cost function, which is not in quadratic form, is proposed so that the total cost in the design procedure is reduced. It is proved that the optimal control signals and the tracking errors are uniformly ultimately bounded even when the uncertainty exists. Finally, a numerical simulation is presented to show the effectiveness of the proposed approach.

  8. 8-dimensional lattice optimized formats in 25-GBaud/s VCSEL based IM/DD optical interconnections

    DEFF Research Database (Denmark)

    Lu, Xiaofeng; Tafur Monroy, Idelfonso

    2015-01-01

    Temporally combined 4- and 8-dimensional lattice-grid-optimized modulation formats for VCSEL-based IM/DD short-reach optical interconnections have been proposed and investigated numerically together with their conventional counterpart PAM-4. © 2015 OSA.

  9. Robust Co-Optimization to Energy and Reserve Joint Dispatch Considering Wind Power Generation and Zonal Reserve Constraints in Real-Time Electricity Markets

    Directory of Open Access Journals (Sweden)

    Chunlai Li

    2017-07-01

    Full Text Available This paper proposes an energy and reserve joint dispatch model based on a robust optimization approach in real-time electricity markets, considering wind power generation uncertainties as well as zonal reserve constraints under both normal and N-1 contingency conditions. In the proposed model, the operating reserves are classified as regulating reserve and spinning reserve according to the response performance. More specifically, the regulating reserve is usually utilized to reduce the gap due to forecasting errors, while the spinning reserve is commonly adopted to enhance the ability for N-1 contingencies. Since the transmission bottlenecks may inhibit the deliverability of reserve, the zonal placement of spinning reserve is considered in this paper to improve the reserve deliverability under the contingencies. Numerical results on the IEEE 118-bus test system show the effectiveness of the proposed model.

  10. A Unified Trading Model Based on Robust Optimization for Day-Ahead and Real-Time Markets with Wind Power Integration

    Directory of Open Access Journals (Sweden)

    Yuewen Jiang

    2017-04-01

    Full Text Available In a conventional electricity market, trading is conducted based on power forecasts in the day-ahead market, while the power imbalance is regulated in the real-time market, which is a separate trading scheme. With large-scale wind power connected into the power grid, power forecast errors increase in the day-ahead market which lowers the economic efficiency of the separate trading scheme. This paper proposes a robust unified trading model that includes the forecasts of real-time prices and imbalance power into the day-ahead trading scheme. The model is developed based on robust optimization in view of the undefined probability distribution of clearing prices of the real-time market. For the model to be used efficiently, an improved quantum-behaved particle swarm algorithm (IQPSO) is presented in the paper based on an in-depth analysis of the limitations of the static character of quantum-behaved particle swarm algorithm (QPSO). Finally, the impacts of associated parameters on the separate trading and unified trading model are analyzed to verify the superiority of the proposed model and algorithm.

  11. Optimization and validation of an existing, surgical and robust dry eye rat model for the evaluation of therapeutic compounds.

    Science.gov (United States)

    Joossen, Cedric; Lanckacker, Ellen; Zakaria, Nadia; Koppen, Carina; Joossens, Jurgen; Cools, Nathalie; De Meester, Ingrid; Lambeir, Anne-Marie; Delputte, Peter; Maes, Louis; Cos, Paul

    2016-05-01

    The aim of this research was to optimize and validate an animal model for dry eye, adopting clinically relevant evaluation parameters. Dry eye was induced in female Wistar rats by surgical removal of the exorbital lacrimal gland. The clinical manifestations of dry eye were evaluated by tear volume measurements, corneal fluorescein staining, cytokine measurements in tear fluid, MMP-9 mRNA expression and CD3(+) cell infiltration in the conjunctiva. The animal model was validated by treatment with Restasis(®) (4 weeks) and commercial dexamethasone eye drops (2 weeks). Removal of the exorbital lacrimal gland resulted in 50% decrease in tear volume and a gradual increase in corneal fluorescein staining. Elevated levels of TNF-α and IL-1α have been registered in tear fluid together with an increase in CD3(+) cells in the palpebral conjunctiva when compared to control animals. Additionally, an increase in MMP-9 mRNA expression was recorded in conjunctival tissue. Reference treatment with Restasis(®) and dexamethasone eye drops had a positive effect on all evaluation parameters, except on tear volume. This rat dry eye model was validated extensively and judged appropriate for the evaluation of novel compounds and therapeutic preparations for dry eye disease. Copyright © 2016 Elsevier Ltd. All rights reserved.

  12. Combined mask and illumination scheme optimization for robust contact patterning on 45nm technology node flash memory devices

    Science.gov (United States)

    Vaglio Pret, Alessandro; Capetti, Gianfranco; Bollin, Maddalena; Cotti, Gina; De Simone, Danilo; Cantù, Pietro; Vaccaro, Alessandro; Soma, Laura

    2008-03-01

    Immersion lithography is the most important technique for extending optical lithography's capabilities and meeting the requirements of the semiconductor roadmap. The introduction of immersion tools has recently allowed the development of the 45nm technology node in single exposure. Nevertheless, even with hyper-high-NA scanners (NA > 1), some levels still remain very critical to image with sufficient process performance. For memory devices, the contact mask is by far the most challenging layer. The aim of this paper is to present the lithographic assessment of a 193nm contact hole process with a k1 value of ~0.30 using NA 1.20 immersion lithography (the minimum pitch is 100nm). Different issues will be reported, related to mask choices (Binary or Attenuated Phase Shift) and illuminator configurations. The first phase of the work is dedicated to a preliminary experimental screening on a simple test case in order to reduce the variables in the following optimization sections. Based on this analysis we discard X-Y symmetrical illuminators (Annular, C-Quad) due to poor contrast. The second phase is dedicated to a full simulation assessment. Different illuminators are compared, with both mask types and several mask biases. From this study, we identify some general trends in lithography performance that can be used for the fine tuning of the RET settings. The last phase of the work is dedicated to finding the sensitivity trends for one of the analyzed illuminators. In particular, we study the effect of numerical aperture, mask bias in both the X and Y directions, and the poles' sigma ring-width and centre.

  13. Optimal pH in chlorinated swimming pools - balancing formation of by-products

    DEFF Research Database (Denmark)

    Hansen, Kamilla Marie Speht; Albrechtsen, Hans-Jørgen; Andersen, Henrik Rasmus

    2013-01-01

    In order to identify the optimal pH range for chlorinated swimming pools the formation of trihalomethanes, haloacetonitriles and trichloramine was investigated in the pH-range 6.5–7.5 in batch experiments. An artificial body fluid analogue was used to simulate bather load as the precursor for by-products....... The chlorine-to-precursor ratio used in the batch experiments influenced the amounts of by-products formed, but regardless of the ratio the same trends in the effect of pH were observed. Trihalomethane formation was reduced by decreasing pH but haloacetonitrile and trichloramine formation increased....... To evaluate the significance of the increase and decrease of the investigated organic by-products at the different pH values, the genotoxicity was calculated based on literature values. The calculated genotoxicity was approximately at the same level in the pH range 6.8–7.5 and increased when pH was 6...

  14. Optimized Jasmonic Acid Production by Lasiodiplodia theobromae Reveals Formation of Valuable Plant Secondary Metabolites.

    Directory of Open Access Journals (Sweden)

    Felipe Eng

    Full Text Available Jasmonic acid is a plant hormone that can be produced by the fungus Lasiodiplodia theobromae via submerged fermentation. From a biotechnological perspective, jasmonic acid is a valuable feedstock, as its derivatives serve as important ingredients in different cosmetic products, and in the future it may be used for pharmaceutical applications. The objective of this work was to improve the production of jasmonic acid by L. theobromae strain 2334. We observed that jasmonic acid formation is dependent on the culture volume. Moreover, cultures grown in medium containing potassium nitrate as the nitrogen source produced higher amounts of jasmonic acid than analogous cultures supplemented with ammonium nitrate. When cultivated under optimal conditions for jasmonic acid production, L. theobromae secreted several secondary metabolites known from plants into the medium. Among those we found 3-oxo-2-(pent-2-enyl)-cyclopentane-1-butanoic acid (OPC-4) and hydroxy-jasmonic acid derivatives, suggesting that fungal jasmonate metabolism may involve similar reaction steps as that of plants. To characterize fungal growth and jasmonic acid formation, we established a mathematical model describing both processes. This model may form the basis of industrial upscaling attempts. Importantly, it showed that jasmonic acid formation is not associated with fungal growth. Therefore, this finding suggests that jasmonic acid, despite the substantial amounts produced during fungal development, serves merely as a secondary metabolite.
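
    A non-growth-associated production term of the kind described above is commonly written in Luedeking-Piret form with the growth-associated coefficient set to zero; the toy ODE below illustrates that structure. The logistic growth law, the parameter values, and the time horizon are assumptions for the sketch, not the fitted model of the study.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Hypothetical parameters: logistic biomass growth and purely non-growth-associated production.
mu_max, X_max = 0.08, 12.0   # 1/h, g/L
beta = 0.015                 # g product per g biomass per hour (alpha = 0: not growth-associated)

def rhs(t, y):
    X, P = y
    dX = mu_max * X * (1.0 - X / X_max)   # logistic biomass growth
    dP = beta * X                         # production depends on biomass only
    return [dX, dP]

sol = solve_ivp(rhs, (0.0, 120.0), [0.2, 0.0], dense_output=True)
for t in np.linspace(0.0, 120.0, 7):
    X, P = sol.sol(t)
    print(f"t = {t:5.1f} h   biomass = {X:5.2f} g/L   product = {P:5.2f} g/L")
```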

  15. Forecasting the Optimal Factors of Formation of the Population Savings as the Basis for Investment Resources of the Regional Economy

    Directory of Open Access Journals (Sweden)

    Odintsova Tetiana M.

    2017-04-01

    Full Text Available The article aims to study the optimal factors in the formation of population savings as a basis for the investment resources of the regional economy. A factorial (nonlinear) correlation-regression analysis of the formation of savings of the population of Ukraine was carried out. On this basis, the optimal structure and volumes of formation of population incomes were forecast, taking into consideration the impact of fundamental factors on these incomes. This approach makes it possible to identify the marginal volumes of tax burden, population savings, and capital investment directed toward economic growth.

  16. Optimization of Tape Winding Process Parameters to Enhance the Performance of Solid Rocket Nozzle Throat Back Up Liners using Taguchi's Robust Design Methodology

    Science.gov (United States)

    Nath, Nayani Kishore

    2017-08-01

    Throat backup liners are used to protect the nozzle structural members from the severe thermal environment in solid rocket nozzles. The throat backup liner is made with E-glass phenolic prepregs by a tape winding process. The objective of this work is to demonstrate the optimization of the process parameters of the tape winding process to achieve better insulative resistance using Taguchi's robust design methodology. In this method, four control factors (machine speed, roller pressure, tape tension, and tape temperature) were investigated for the tape winding process. The presented work studies the cogency and acceptability of Taguchi's methodology in the manufacturing of throat backup liners. The quality characteristic identified was the back-wall temperature. Experiments were carried out using an L9 (3⁴) orthogonal array with three levels of the four control factors. The test results were analyzed using the smaller-the-better criterion for the signal-to-noise ratio in order to optimize the process. The experimental results were analyzed, confirmed, and successfully used to achieve the minimum back-wall temperature of the throat backup liners. The enhancement in the performance of the throat backup liners was observed by carrying out oxy-acetylene tests. The influence of back-wall temperature on the performance of the throat backup liners was verified by a ground firing test.
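
    For readers unfamiliar with the smaller-the-better analysis, the sketch below computes signal-to-noise ratios for a standard L9(3⁴) layout and picks the level of each factor with the highest mean S/N. The orthogonal array is the standard L9; the back-wall temperatures are invented single-replicate values, not the measured data of the study.

```python
import numpy as np

# Standard L9(3^4) layout: levels (0, 1, 2) of machine speed, roller pressure,
# tape tension and tape temperature, plus a hypothetical back-wall temperature (deg C) per run.
L9 = np.array([[0, 0, 0, 0], [0, 1, 1, 1], [0, 2, 2, 2],
               [1, 0, 1, 2], [1, 1, 2, 0], [1, 2, 0, 1],
               [2, 0, 2, 1], [2, 1, 0, 2], [2, 2, 1, 0]])
back_wall = np.array([182.0, 175.0, 169.0, 171.0, 166.0, 178.0, 163.0, 174.0, 180.0])

# Smaller-the-better signal-to-noise ratio for each run (single replicate assumed):
# S/N = -10 * log10(mean(y^2)).
sn = -10.0 * np.log10(back_wall ** 2)

factors = ["machine speed", "roller pressure", "tape tension", "tape temperature"]
for j, name in enumerate(factors):
    means = [sn[L9[:, j] == level].mean() for level in range(3)]
    print(f"{name:17s} mean S/N by level: {np.round(means, 2)}  -> best level {int(np.argmax(means))}")
```

    The factor levels with the highest mean S/N form the predicted optimal setting, which is what the confirmation experiments in the study would then verify.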

  17. Superconductivity optimization and phase formation kinetics study of internal-Sn Nb3Sn superconducting wires

    International Nuclear Information System (INIS)

    Zhang, Chaowu

    2007-07-01

    Nb3Sn superconducting wires are among the most widely applied cryogenic superconducting materials and the best choice for high-field magnets exceeding 10 T. One of the most significant applications is the ITER project, which is regarded as a hope for a future energy source. High-Cu composite designs with a smaller number of sub-elements and a non-reactive diffusion barrier, together with the RRP (Restacked Rod Process) internal-Sn technology, are usually applied for wire manufacturing. Wires designed and processed in this way were supplied by MSA/Alstom and WST/NIN for this research. The systematic investigation of internal-Sn superconducting wires includes the optimization of heat treatment (HT) conditions, phase formation and its relation to superconductivity, microstructure analysis, and phase formation kinetics. Because of the complexity of the configuration design and metallurgical processing, the MF wires are not sufficient for studying the effect of a single factor on superconductivity. Therefore, four sets of mono-element (ME) wires with different Sn ratios and different third-element additions were designed and fabricated in order to explore the relationship between phase formation and superconducting performance, particularly the A15 layer growth kinetics. Different characterization techniques have been used (magnetization measurements, neutron diffraction and SEM/TEM/EDX analysis). The A15 layer thicknesses of the various ME samples were measured, and linear and non-linear fits were carried out by means of two model equations. The results clearly demonstrate that the phase formation kinetics of the Nb3Sn solid-state reaction follows an n-power relation and that the n value increases with increasing HT temperature and Sn ratio in the wire composite. (author)
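
    The n-power growth law mentioned above is typically fitted as x = k·t^n, either directly with a non-linear least-squares fit or linearly on log-transformed data. The short sketch below shows both routes on made-up thickness data; the heat-treatment times and thicknesses are placeholders, not measurements from the thesis.

```python
import numpy as np
from scipy.optimize import curve_fit

def layer_growth(t, k, n):
    """Power-law growth of the A15 layer thickness: x = k * t**n."""
    return k * t ** n

# Hypothetical data: heat-treatment time (hours) vs. A15 layer thickness (micrometres).
t_hours = np.array([10, 25, 50, 100, 150, 200], float)
thickness = np.array([1.1, 1.9, 2.8, 4.3, 5.5, 6.4], float)

# Non-linear fit of the power law ...
(k, n), _ = curve_fit(layer_growth, t_hours, thickness, p0=[0.5, 0.5])

# ... and the equivalent linear fit of log(x) against log(t) as a cross-check.
slope, intercept = np.polyfit(np.log(t_hours), np.log(thickness), 1)

print(f"non-linear fit : n = {n:.2f}, k = {k:.2f}")
print(f"log-log fit    : n = {slope:.2f}, k = {np.exp(intercept):.2f}")
```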

  18. Equilibrium star formation in a constant Q disc: model optimization and initial tests

    Science.gov (United States)

    Zheng, Zheng; Meurer, Gerhardt R.; Heckman, Timothy M.; Thilker, David A.; Zwaan, Martin A.

    2013-10-01

    We develop a model for the distribution of the interstellar medium (ISM) and star formation in galaxies based on recent studies that indicate that galactic discs stabilize to a constant stability parameter, which we combine with prescriptions of how the phases of the ISM are determined and for the star formation law (SFL). The model predicts the gas surface mass density and star formation intensity of a galaxy given its rotation curve, stellar surface mass density and the gas velocity dispersion. This model is tested on radial profiles of neutral and molecular ISM surface mass density and star formation intensity of 12 galaxies selected from the H I Nearby Galaxy Survey sample. Our tests focus on intermediate radii (0.3 to 1 times the optical radius) because there are insufficient data to test the outer discs and the fits are less accurate in detail in the centre. Nevertheless, the model produces reasonable agreement with the ISM mass and star formation rate integrated over the central region in all but one case. To optimize the model, we evaluate four recipes for the stability parameter, three recipes for apportioning the ISM into molecular and neutral components, and eight versions of the SFL. We find no clear-cut best prescription for the two-fluid (gas and stars) stability parameter Q2f and therefore for simplicity, we use the Wang and Silk approximation (QWS). We found that an empirical scaling between the molecular-to-neutral ISM ratio (Rmol) and the stellar surface mass density proposed by Leroy et al. works marginally better than the other two prescriptions for this ratio in predicting the ISM profiles, and noticeably better in predicting the star formation intensity from the ISM profiles produced by our model with the SFLs we tested. Thus, in the context of our modelled ISM profiles, the linear molecular SFL and the two-component SFL work better than the other prescriptions we tested. We incorporate these relations into our `constant Q disc' model.

  19. Spectroscopic synthetic optimizations monitoring of silver nanoparticles formation from Megaphrynium macrostachyum leaf extract

    Directory of Open Access Journals (Sweden)

    François Eya'ane Meva

    Full Text Available ABSTRACT Nanobiotechnology is one of the most promising areas in modern nanoscience and technology. Metallic nanoparticles have found uses in many applications in different fields, such as catalysis, photonics, electronics, medicine and agriculture. Synthesized nanoparticles through chemical and physical methods are expensive and have low biocompatibility. In the present study, silver nanoparticles have been synthesized from Megaphrynium macrostachyum (Benth. & Hook. f. Milne-Redh., Marantaceae, leaf extract. Megaphrynium macrostachyum is a plant with large leaves found in the rainforest of West and Central Africa. Synthetic optimizations following factors such as incubation time, temperature, pH, extract and silver ion concentration during silver formation are discussed. UV–visible spectra gave surface plasmon resonance for synthesized silver nanoparticles based Megaphrynium macrostachyum peaks at 400–450 nm. X-ray diffraction revealed the average size of pure crystallites composed from Ag and AgCl.

  20. CALIBRATION OF SEMI-ANALYTIC MODELS OF GALAXY FORMATION USING PARTICLE SWARM OPTIMIZATION

    International Nuclear Information System (INIS)

    Ruiz, Andrés N.; Domínguez, Mariano J.; Yaryura, Yamila; Lambas, Diego García; Cora, Sofía A.; Martínez, Cristian A. Vega-; Gargiulo, Ignacio D.; Padilla, Nelson D.; Tecce, Tomás E.; Orsi, Álvaro; Arancibia, Alejandra M. Muñoz

    2015-01-01

    We present a fast and accurate method to select an optimal set of parameters in semi-analytic models of galaxy formation and evolution (SAMs). Our approach compares the results of a model against a set of observables applying a stochastic technique called Particle Swarm Optimization (PSO), a self-learning algorithm for localizing regions of maximum likelihood in multidimensional spaces that outperforms traditional sampling methods in terms of computational cost. We apply the PSO technique to the SAG semi-analytic model combined with merger trees extracted from a standard Lambda Cold Dark Matter N-body simulation. The calibration is performed using a combination of observed galaxy properties as constraints, including the local stellar mass function and the black hole to bulge mass relation. We test the ability of the PSO algorithm to find the best set of free parameters of the model by comparing the results with those obtained using a MCMC exploration. Both methods find the same maximum likelihood region, however, the PSO method requires one order of magnitude fewer evaluations. This new approach allows a fast estimation of the best-fitting parameter set in multidimensional spaces, providing a practical tool to test the consequences of including other astrophysical processes in SAMs
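
    To illustrate the kind of search the PSO calibration performs, the sketch below implements a plain global-best particle swarm and recovers the minimum of a toy objective standing in for the SAM-versus-observations likelihood. The swarm size, inertia and acceleration coefficients, and the quadratic toy objective are assumptions for the example; the actual SAG calibration evaluates the semi-analytic model against observed galaxy statistics at each particle position.

```python
import numpy as np

def pso(objective, bounds, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Plain global-best particle swarm optimizer (a generic stand-in for the
    PSO used to calibrate the semi-analytic model)."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds).T
    x = rng.uniform(lo, hi, (n_particles, len(lo)))
    v = np.zeros_like(x)
    pbest, pbest_val = x.copy(), np.array([objective(p) for p in x])
    gbest = pbest[np.argmin(pbest_val)].copy()
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)   # velocity update
        x = np.clip(x + v, lo, hi)                                  # keep particles inside bounds
        vals = np.array([objective(p) for p in x])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        gbest = pbest[np.argmin(pbest_val)].copy()
    return gbest, pbest_val.min()

# Toy likelihood surface standing in for the model-vs-observations comparison.
target = np.array([0.3, 1.2, -0.5])
best, val = pso(lambda p: np.sum((p - target) ** 2), bounds=[(-2, 2)] * 3)
print("recovered parameters:", np.round(best, 3), "objective:", round(val, 6))
```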

  1. CALIBRATION OF SEMI-ANALYTIC MODELS OF GALAXY FORMATION USING PARTICLE SWARM OPTIMIZATION

    Energy Technology Data Exchange (ETDEWEB)

    Ruiz, Andrés N.; Domínguez, Mariano J.; Yaryura, Yamila; Lambas, Diego García [Instituto de Astronomía Teórica y Experimental, CONICET-UNC, Laprida 854, X5000BGR, Córdoba (Argentina); Cora, Sofía A.; Martínez, Cristian A. Vega-; Gargiulo, Ignacio D. [Consejo Nacional de Investigaciones Científicas y Técnicas, Rivadavia 1917, C1033AAJ Buenos Aires (Argentina); Padilla, Nelson D.; Tecce, Tomás E.; Orsi, Álvaro; Arancibia, Alejandra M. Muñoz, E-mail: andresnicolas@oac.uncor.edu [Instituto de Astrofísica, Pontificia Universidad Católica de Chile, Av. Vicuña Mackenna 4860, Santiago (Chile)

    2015-03-10

    We present a fast and accurate method to select an optimal set of parameters in semi-analytic models of galaxy formation and evolution (SAMs). Our approach compares the results of a model against a set of observables applying a stochastic technique called Particle Swarm Optimization (PSO), a self-learning algorithm for localizing regions of maximum likelihood in multidimensional spaces that outperforms traditional sampling methods in terms of computational cost. We apply the PSO technique to the SAG semi-analytic model combined with merger trees extracted from a standard Lambda Cold Dark Matter N-body simulation. The calibration is performed using a combination of observed galaxy properties as constraints, including the local stellar mass function and the black hole to bulge mass relation. We test the ability of the PSO algorithm to find the best set of free parameters of the model by comparing the results with those obtained using a MCMC exploration. Both methods find the same maximum likelihood region, however, the PSO method requires one order of magnitude fewer evaluations. This new approach allows a fast estimation of the best-fitting parameter set in multidimensional spaces, providing a practical tool to test the consequences of including other astrophysical processes in SAMs.

  2. Optimization of hybrid laser arc welding of 42CrMo steel to suppress pore formation

    Energy Technology Data Exchange (ETDEWEB)

    Zhang, Yan [Hunan University, State Key Laboratory of Advanced Design and Manufacturing for Vehicle Body, Changsha (China); Hunan Institute of Science and Technology, College of Mechanical Engineering, Yueyang (China); Chen, Genyu; Mao, Shuai; Zhou, Cong; Chen, Fei [Hunan University, State Key Laboratory of Advanced Design and Manufacturing for Vehicle Body, Changsha (China)

    2017-06-15

    The hybrid laser arc welding (HLAW) of 42CrMo quenched and tempered steel was conducted. The effect of the processing parameters, such as the relative positions of the laser and the arc, the shielding gas flow rate, the defocusing distance, the laser power, the wire feed rate and the welding speed, on pore formation was analyzed, and the morphological characteristics of the pores were examined using scanning electron microscopy (SEM) and energy dispersive spectroscopy (EDS). The results showed that the majority of the pores were invasive. Fewer pores formed in the laser-leading (LA) welding process than in the arc-leading (AL) welding process. Increasing the shielding gas flow rate could also facilitate the reduction of pores. The laser power and the welding speed were the two key process parameters for reducing pores. The flow of the molten pool, the weld cooling rate and the pore escape rate resulting from different parameters could all affect pore formation. An ideal pore-free weld was obtained for the optimal welding process parameters. (orig.)

  3. Robust sampling-sourced numerical retrieval algorithm for optical energy loss function based on log–log mesh optimization and local monotonicity preserving Steffen spline

    Energy Technology Data Exchange (ETDEWEB)

    Maglevanny, I.I., E-mail: sianko@list.ru [Volgograd State Social Pedagogical University, 27 Lenin Avenue, Volgograd 400131 (Russian Federation); Smolar, V.A. [Volgograd State Technical University, 28 Lenin Avenue, Volgograd 400131 (Russian Federation)

    2016-01-15

    We introduce a new technique for interpolation of the energy-loss function (ELF) in solids sampled by empirical optical spectra. Finding appropriate interpolation methods for ELFs poses several challenges. The sampled ELFs are usually very heterogeneous and can originate from various sources, thus so-called “data gaps” can appear, and significant discontinuities and multiple high outliers can be present. As a result, an interpolation based on those data may not perform well at predicting reasonable physical results. Reliable interpolation tools, suitable for ELF applications, should therefore satisfy several important demands: accuracy and predictive power, robustness and computational efficiency, and ease of use. We examined the effect of different interpolation schemes on the fitting quality, with emphasis on ELF mesh optimization procedures, and we argue that the optimal fitting should be based on a preliminary log–log scaling data transform, by which the non-uniformity of the sampled data distribution may be considerably reduced. The transformed data are then interpolated by a local monotonicity preserving Steffen spline. The result is a piece-wise smooth fitting curve with continuous first-order derivatives that passes through all data points without spurious oscillations. Local extrema can occur only at grid points where they are given by the data, but not in between two adjacent grid points. It is found that the proposed technique gives the most accurate results and that its computational time is short. Thus, it is feasible to use this simple method to address practical problems associated with the interaction between a bulk material and a moving electron. A compact C++ implementation of our algorithm is also presented.
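
    The log-log-then-monotone-spline recipe can be sketched as below. Since a Steffen spline is not readily available in common Python libraries, the example uses SciPy's PCHIP interpolator as a related local monotonicity-preserving stand-in, and the sampled ELF values are invented placeholders, not data from the paper.

```python
import numpy as np
from scipy.interpolate import PchipInterpolator

# Hypothetical sampled energy-loss function: photon energy (eV) vs. Im(-1/eps),
# with heterogeneous spacing as if merged from several optical sources.
energy = np.array([0.5, 1.0, 2.0, 5.0, 10.0, 22.0, 60.0, 150.0, 400.0, 1000.0])
elf = np.array([0.02, 0.05, 0.20, 0.90, 1.60, 0.70, 0.15, 0.04, 0.01, 0.003])

# Log-log transform evens out the strongly non-uniform sampling ...
log_e, log_f = np.log10(energy), np.log10(elf)

# ... then interpolate with a local monotonicity-preserving spline
# (PCHIP here, standing in for the Steffen spline of the paper).
interp = PchipInterpolator(log_e, log_f)

query = np.logspace(np.log10(0.5), 3, 9)
for e, f in zip(query, 10 ** interp(np.log10(query))):
    print(f"E = {e:8.2f} eV   ELF ~ {f:.4f}")
```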

  4. Robust sampling-sourced numerical retrieval algorithm for optical energy loss function based on log–log mesh optimization and local monotonicity preserving Steffen spline

    International Nuclear Information System (INIS)

    Maglevanny, I.I.; Smolar, V.A.

    2016-01-01

    We introduce a new technique for interpolation of the energy-loss function (ELF) in solids sampled by empirical optical spectra. Finding appropriate interpolation methods for ELFs poses several challenges. The sampled ELFs are usually very heterogeneous and can originate from various sources, thus so-called “data gaps” can appear, and significant discontinuities and multiple high outliers can be present. As a result, an interpolation based on those data may not perform well at predicting reasonable physical results. Reliable interpolation tools, suitable for ELF applications, should therefore satisfy several important demands: accuracy and predictive power, robustness and computational efficiency, and ease of use. We examined the effect of different interpolation schemes on the fitting quality, with emphasis on ELF mesh optimization procedures, and we argue that the optimal fitting should be based on a preliminary log–log scaling data transform, by which the non-uniformity of the sampled data distribution may be considerably reduced. The transformed data are then interpolated by a local monotonicity preserving Steffen spline. The result is a piece-wise smooth fitting curve with continuous first-order derivatives that passes through all data points without spurious oscillations. Local extrema can occur only at grid points where they are given by the data, but not in between two adjacent grid points. It is found that the proposed technique gives the most accurate results and that its computational time is short. Thus, it is feasible to use this simple method to address practical problems associated with the interaction between a bulk material and a moving electron. A compact C++ implementation of our algorithm is also presented.

  5. Optimizing and Quantifying CO2 Storage Resource in Saline Formations and Hydrocarbon Reservoirs

    Energy Technology Data Exchange (ETDEWEB)

    Bosshart, Nicholas W. [Univ. of North Dakota, Grand Folks, ND (United States). Energy & Environmental Research Center; Ayash, Scott C. [Univ. of North Dakota, Grand Folks, ND (United States). Energy & Environmental Research Center; Azzolina, Nicholas A. [Univ. of North Dakota, Grand Folks, ND (United States). Energy & Environmental Research Center; Peck, Wesley D. [Univ. of North Dakota, Grand Folks, ND (United States). Energy & Environmental Research Center; Gorecki, Charles D. [Univ. of North Dakota, Grand Folks, ND (United States). Energy & Environmental Research Center; Ge, Jun [Univ. of North Dakota, Grand Folks, ND (United States). Energy & Environmental Research Center; Jiang, Tao [Univ. of North Dakota, Grand Folks, ND (United States). Energy & Environmental Research Center; Burton-Kelly, Matthew E. [Univ. of North Dakota, Grand Folks, ND (United States). Energy & Environmental Research Center; Anderson, Parker W. [Univ. of North Dakota, Grand Folks, ND (United States). Energy & Environmental Research Center; Dotzenrod, Neil W. [Univ. of North Dakota, Grand Folks, ND (United States). Energy & Environmental Research Center; Gorz, Andrew J. [Univ. of North Dakota, Grand Folks, ND (United States). Energy & Environmental Research Center

    2017-06-30

    In an effort to reduce carbon dioxide (CO2) emissions from large stationary sources, carbon capture and storage (CCS) is being investigated as one approach. This work assesses CO2 storage resource estimation methods for deep saline formations (DSFs) and hydrocarbon reservoirs undergoing CO2 enhanced oil recovery (EOR). Project activities were conducted using geologic modeling and simulation to investigate CO2 storage efficiency. CO2 storage rates and efficiencies in DSFs classified by interpreted depositional environment were evaluated at the regional scale over a 100-year time frame. A focus was placed on developing results applicable to future widespread commercial-scale CO2 storage operations in which an array of injection wells may be used to optimize storage in saline formations. The results of this work suggest future investigations of prospective storage resource in closed or semiclosed formations need not have a detailed understanding of the depositional environment of the reservoir to generate meaningful estimates. However, the results of this work also illustrate the relative importance of depositional environment, formation depth, structural geometry, and boundary conditions on the rate of CO2 storage in these types of systems. CO2 EOR occupies an important place in the realm of geologic storage of CO2, as it is likely to be the primary means of geologic CO2 storage during the early stages of commercial implementation, given the lack of a national policy and the viability of the current business case. This work estimates CO2 storage efficiency factors using a unique industry database of CO2 EOR sites and 18 different reservoir simulation models capturing fluvial clastic and shallow shelf carbonate depositional environments for reservoir depths of 1219 and 2438 meters (4000 and 8000 feet) and 7.6-, 20-, and 64-meter (25-, 66

  6. Robust Airline Schedules

    OpenAIRE

    Eggenberg, Niklaus; Salani, Matteo; Bierlaire, Michel

    2010-01-01

    Due to economic pressure, industries tend, when planning, to focus on optimizing the expected profit or the yield. The consequence of highly optimized solutions is an increased sensitivity to uncertainty. This generates additional "operational" costs, incurred by possible modifications of the original plan that have to be performed when reality does not reflect what was expected in the planning phase. The modern research trend focuses on "robustness" of solutions instead of yield or profit. Although ro...

  7. Robustness of structures

    DEFF Research Database (Denmark)

    Vrouwenvelder, T.; Sørensen, John Dalsgaard

    2009-01-01

    After the collapse of the World Trade Centre towers in 2001 and a number of collapses of structural systems in the beginning of the century, robustness of structural systems has gained renewed interest. Despite many significant theoretical, methodical and technological advances, structural...... of robustness for structural design such requirements are not substantiated in more detail, nor has the engineering profession been able to agree on an interpretation of robustness which facilitates its quantification. A European COST action TU 601 on ‘Robustness of structures' has started in 2007...... by a group of members of the CSS. This paper describes the ongoing work in this action, with emphasis on the development of a theoretical and risk based quantification and optimization procedure on the one side and a practical pre-normative guideline on the other....

  8. Optimization of detectors positioning with respect to flying dynamics for future formation flight missions

    Science.gov (United States)

    Civitani, Marta; Djalal, Sophie; Chipaux, Remi

    2009-08-01

    In an X-ray telescope in formation flight configuration, the optics and the focal-plane detectors reside in two different spacecraft. The dynamics of the detector spacecraft (DSC) with respect to the mirror spacecraft (MSC, carrying the mirrors of the telescope) continuously changes the arrival positions of the photons on the detectors. In this paper we analyze this issue for the case of the SIMBOL-X hard X-ray mission, extensively studied by CNES and ASI until spring 2009. Due to the existing gaps between pixels and between detector modules, the dynamics of the system may produce a relevant photometric effect. The aim of this work is to present the optimization study of the control-law algorithm with respect to the detector's geometry. As the photometric effect may vary depending upon the position of the source image on the detector, the analysis, carried out using the simuLOS (INAF, CNES, CEA) simulation tool, is extended over the entire SIMBOL-X field of view.

  9. Optimizing combination of vascular endothelial growth factor and mesenchymal stem cells on ectopic bone formation in SCID mice

    DEFF Research Database (Denmark)

    Dreyer, Chris H; Kjaergaard, Kristian; Ditzel, Nicholas

    2017-01-01

    combined immunodeficient (SCID) mice were used in this study to evaluate optimal time points for VEGF stimulation to increase bone formation. METHODS: Twenty-eight SCID (NOD.CB17-Prkdcscid/J) mice had hydroxyapatite granules seeded with 5 × 10^5 MSCs inserted subcutaneously. Pellets released VEGF on days 1...

  10. Robust Scientists

    DEFF Research Database (Denmark)

    Gorm Hansen, Birgitte

    their core interests, 2) developing a self-supply of industry interests by becoming entrepreneurs and thus creating their own compliant industry partner and 3) balancing resources within a larger collective of researchers, thus countering changes in the influx of funding caused by shifts in political...... knowledge", Danish research policy seems to have helped develop politically and economically "robust scientists". Scientific robustness is acquired by way of three strategies: 1) tasting and discriminating between resources so as to avoid funding that erodes academic profiles and push scientists away from...

  11. Designing Phononic Crystals with Wide and Robust Band Gaps

    Science.gov (United States)

    Jia, Zian; Chen, Yanyu; Yang, Haoxiang; Wang, Lifeng

    2018-04-01

    Phononic crystals (PnCs) engineered to manipulate and control the propagation of mechanical waves have enabled the design of a range of novel devices, such as waveguides, frequency modulators, and acoustic cloaks, for which wide and robust phononic band gaps are highly preferable. While numerous PnCs have been designed in recent decades, to the best of our knowledge, PnCs that possess simultaneously wide and robust band gaps (to randomness and deformations) have not yet been reported. Here, we demonstrate that by combining the band-gap formation mechanisms of Bragg scattering and local resonances (the latter being dominant), PnCs with wide and robust phononic band gaps can be established. The robustness of the phononic band gaps is then discussed from two aspects: robustness to geometric randomness (manufacturing defects) and robustness to deformations (mechanical stimuli). Analytical formulations further predict the optimal design parameters, and an uncertainty analysis quantifies the randomness effect of each design parameter. Moreover, we show that the deformation robustness originates from a local resonance-dominant mechanism together with the suppression of structural instability. Importantly, the proposed PnCs require only a small number of layers of elements (three unit cells) to obtain broad, robust, and strong attenuation bands, which offer great potential in designing flexible and deformable phononic devices.

  12. Designing Phononic Crystals with Wide and Robust Band Gaps

    Energy Technology Data Exchange (ETDEWEB)

    Chen, Yanyu [National Renewable Energy Laboratory (NREL), Golden, CO (United States)]; Jia, Zian [State University of New York at Stony Brook]; Yang, Haoxiang [State University of New York at Stony Brook]; Wang, Lifeng [State University of New York at Stony Brook]

    2018-04-16

    Phononic crystals (PnCs) engineered to manipulate and control the propagation of mechanical waves have enabled the design of a range of novel devices, such as waveguides, frequency modulators, and acoustic cloaks, for which wide and robust phononic band gaps are highly preferable. While numerous PnCs have been designed in recent decades, to the best of our knowledge, PnCs that possess simultaneously wide and robust band gaps (to randomness and deformations) have not yet been reported. Here, we demonstrate that by combining the band-gap formation mechanisms of Bragg scattering and local resonances (the latter being dominant), PnCs with wide and robust phononic band gaps can be established. The robustness of the phononic band gaps is then discussed from two aspects: robustness to geometric randomness (manufacturing defects) and robustness to deformations (mechanical stimuli). Analytical formulations further predict the optimal design parameters, and an uncertainty analysis quantifies the randomness effect of each design parameter. Moreover, we show that the deformation robustness originates from a local resonance-dominant mechanism together with the suppression of structural instability. Importantly, the proposed PnCs require only a small number of layers of elements (three unit cells) to obtain broad, robust, and strong attenuation bands, which offer great potential in designing flexible and deformable phononic devices.

  13. Multifactorial Optimization of Contrast-Enhanced Nanofocus Computed Tomography for Quantitative Analysis of Neo-Tissue Formation in Tissue Engineering Constructs.

    Directory of Open Access Journals (Sweden)

    Maarten Sonnaert

    Full Text Available To progress the fields of tissue engineering (TE) and regenerative medicine, development of quantitative methods for non-invasive three dimensional characterization of engineered constructs (i.e. cells/tissue combined with scaffolds) becomes essential. In this study, we have defined the most optimal staining conditions for contrast-enhanced nanofocus computed tomography for three dimensional visualization and quantitative analysis of in vitro engineered neo-tissue (i.e. extracellular matrix containing cells) in perfusion bioreactor-developed Ti6Al4V constructs. A fractional factorial 'design of experiments' approach was used to elucidate the influence of the staining time and concentration of two contrast agents (Hexabrix and phosphotungstic acid) and the neo-tissue volume on the image contrast and dataset quality. Additionally, the neo-tissue shrinkage that was induced by phosphotungstic acid staining was quantified to determine the operating window within which this contrast agent can be accurately applied. For Hexabrix the staining concentration was the main parameter influencing image contrast and dataset quality. Using phosphotungstic acid the staining concentration had a significant influence on the image contrast while both staining concentration and neo-tissue volume had an influence on the dataset quality. The use of high concentrations of phosphotungstic acid did however introduce significant shrinkage of the neo-tissue indicating that, despite sub-optimal image contrast, low concentrations of this staining agent should be used to enable quantitative analysis. To conclude, design of experiments allowed us to define the most optimal staining conditions for contrast-enhanced nanofocus computed tomography to be used as a routine screening tool of neo-tissue formation in Ti6Al4V constructs, transforming it into a robust three dimensional quality control methodology.

  14. Optimal Distributed Controller Synthesis for Chain Structures: Applications to Vehicle Formations

    OpenAIRE

    Khorsand, Omid; Alam, Assad; Gattami, Ather

    2012-01-01

    We consider optimal distributed controller synthesis for an interconnected system subject to communication constraints, in linear quadratic settings. Motivated by the problem of finite heavy duty vehicle platooning, we study systems composed of interconnected subsystems over a chain graph. By decomposing the system into orthogonal modes, the cost function can be separated into individual components. Thereby, derivation of the optimal controllers in state-space follows immediately. The optimal...

  15. Integrated Design of a Long-Haul Commercial Aircraft Optimized for Formation Flying

    NARCIS (Netherlands)

    Dijkers, H.P.A.; Van Nunen, R.; Bos, D.A.; Gutleb, T.L.M.; Herinckx, L.E.; Radfar, H.; Van Rompuy, E.; Sayin, S.E.; De Wit, J.; Beelaerts van Blokland, W.W.A.

    2011-01-01

    The airline industry is under continuous pressure to reduce emissions and costs. This paper investigates the feasibility for commercial airlines to use formation flight to reduce emissions and fuel burn. To fly in formation, an aircraft needs to benefit from the wake vortices of the preceding

  16. Expression and assembly of largest foreign protein in chloroplasts: oral delivery of human FVIII made in lettuce chloroplasts robustly suppresses inhibitor formation in haemophilia A mice.

    Science.gov (United States)

    Kwon, Kwang-Chul; Sherman, Alexandra; Chang, Wan-Jung; Kamesh, Aditya; Biswas, Moanaro; Herzog, Roland W; Daniell, Henry

    2017-11-06

    Inhibitor formation is a serious complication of factor VIII (FVIII) replacement therapy for the X-linked bleeding disorder haemophilia A and occurs in 20%-30% of patients. No prophylactic tolerance protocol currently exists. Although we reported oral tolerance induction using FVIII domains expressed in tobacco chloroplasts, significant challenges in clinical advancement include expression of the full-length CTB-FVIII sequence to cover the entire patient population, regardless of individual CD4+ T-cell epitope responses. Codon optimization of FVIII heavy chain (HC) and light chain (LC) increased expression 15- to 42-fold higher than the native human genes. Homoplasmic lettuce lines expressed CTB fusion proteins of FVIII-HC (99.3 kDa), LC (91.8 kDa), C2 (31 kDa) or single chain (SC, 178.2 kDa) up to 3622, 263, 3321 and 852 μg/g in lyophilized plant cells, when grown in a cGMP hydroponic facility (Fraunhofer). CTB-FVIII-SC is the largest foreign protein expressed in chloroplasts; despite a large pentamer size (891 kDa), assembly, folding and disulphide bonds were maintained upon lyophilization and long-term storage as revealed by GM1-ganglioside receptor binding assays. Repeated oral gavages (twice/week for 2 months) of CTB-FVIII-HC/CTB-FVIII-LC reduced inhibitor titres ~10-fold (average 44 BU/mL to 4.7 BU/mL) in haemophilia A mice. Most importantly, increase in the frequency of circulating LAP-expressing CD4+CD25+FoxP3+ Treg in tolerized mice could be used as an important cellular biomarker in human clinical trials for plant-based oral tolerance induction. In conclusion, this study reports the first clinical candidate for oral tolerance induction that is urgently needed to protect haemophilia A patients receiving FVIII injections. © 2017 The Authors. Plant Biotechnology Journal published by Society for Experimental Biology and The Association of Applied Biologists and John Wiley & Sons Ltd.

  17. Optimization of Game Formats in U-10 Soccer Using Logistic Regression Analysis

    Directory of Open Access Journals (Sweden)

    Amatria Mario

    2016-12-01

    Full Text Available Small-sided games provide young soccer players with better opportunities to develop their skills and progress as individual and team players. There is, however, little evidence on the effectiveness of different game formats in different age groups, and furthermore, these formats can vary between and even within countries. The Royal Spanish Soccer Association replaced the traditional grassroots 7-a-side format (F-7) with the 8-a-side format (F-8) in the 2011-12 season and the country’s regional federations gradually followed suit. The aim of this observational methodology study was to investigate which of these formats best suited the learning needs of U-10 players transitioning from 5-a-side futsal. We built a multiple logistic regression model to predict the success of offensive moves depending on the game format and the area of the pitch in which the move was initiated. Success was defined as a shot at the goal. We also built two simple logistic regression models to evaluate how the game format influenced the acquisition of technical-tactical skills. It was found that the probability of a shot at the goal was higher in F-7 than in F-8 for moves initiated in the Creation Sector-Own Half (0.08 vs 0.07) and the Creation Sector-Opponent's Half (0.18 vs 0.16). The probability was the same (0.04) in the Safety Sector. Children also had more opportunities to control the ball and pass or take a shot in the F-7 format (0.24 vs 0.20), and these were also more likely to be successful in this format (0.28 vs 0.19).
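
    The kind of multiple logistic regression described above can be sketched as follows; the data frame is synthetic and the variable coding is an assumption, not the study's actual dataset.

```python
# Sketch: model the probability that an offensive move ends in a shot from the
# game format (F-7 vs F-8) and the pitch sector where the move starts.
import pandas as pd
import statsmodels.formula.api as smf

moves = pd.DataFrame({
    "shot":        [1, 0, 0, 1, 0, 1, 0, 0, 1, 0, 0, 0, 1, 0, 1, 1],
    "game_format": ["F7", "F7", "F8", "F7", "F8", "F8", "F7", "F8",
                    "F7", "F8", "F7", "F8", "F7", "F8", "F8", "F7"],
    "sector":      ["creation_own", "safety", "creation_opp", "creation_opp",
                    "safety", "creation_own", "creation_opp", "creation_own",
                    "creation_opp", "creation_opp", "safety", "safety",
                    "creation_own", "creation_own", "creation_opp", "safety"],
})

# logit(P(shot)) = b0 + b_format + b_sector
model = smf.logit("shot ~ C(game_format) + C(sector)", data=moves).fit(disp=0)
print(model.summary())

# predicted shot probability for each format/sector combination
grid = pd.DataFrame({"game_format": ["F7", "F8"] * 3,
                     "sector": ["safety"] * 2 + ["creation_own"] * 2 +
                               ["creation_opp"] * 2})
grid["p_shot"] = model.predict(grid)
print(grid)
```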

  18. Integration of uniform design and quantum-behaved particle swarm optimization to the robust design for a railway vehicle suspension system under different wheel conicities and wheel rolling radii

    Science.gov (United States)

    Cheng, Yung-Chang; Lee, Cheng-Kang

    2017-10-01

    This paper proposes a systematic method, integrating the uniform design (UD) of experiments and quantum-behaved particle swarm optimization (QPSO), to solve the problem of a robust design for a railway vehicle suspension system. Based on the new nonlinear creep model derived from combining Hertz contact theory, Kalker's linear theory and a heuristic nonlinear creep model, the modeling and dynamic analysis of a 24 degree-of-freedom railway vehicle system were investigated. The Lyapunov indirect method was used to examine the effects of suspension parameters, wheel conicities and wheel rolling radii on critical hunting speeds. Generally, the critical hunting speeds of a vehicle system resulting from worn wheels with different wheel rolling radii are lower than those of a vehicle system having original wheels without different wheel rolling radii. Because of worn wheels, the critical hunting speed of a running railway vehicle substantially declines over the long term. For safety reasons, it is necessary to design the suspension system parameters to increase the robustness of the system and decrease its sensitivity to wheel noise. By applying UD and QPSO, the nominal-the-best signal-to-noise ratio of the system was increased from -48.17 to -34.05 dB. The rate of improvement was 29.31%. This study has demonstrated that the integration of UD and QPSO can successfully reveal the optimal solution of suspension parameters for solving the robust design problem of a railway vehicle suspension system.
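
    A generic QPSO loop of the type referred to above is sketched below. The objective is a placeholder; in the study it would be the nominal-the-best signal-to-noise ratio of the suspension design evaluated over the uniform-design noise conditions, which is not reproduced here.

```python
# Generic sketch of quantum-behaved particle swarm optimization (QPSO).
import numpy as np

rng = np.random.default_rng(0)

def objective(x):                      # placeholder objective (minimize)
    return np.sum((x - 1.5) ** 2)

def qpso(func, dim, n_particles=30, iters=200, lb=-10.0, ub=10.0):
    x = rng.uniform(lb, ub, (n_particles, dim))
    pbest = x.copy()
    pbest_val = np.array([func(p) for p in pbest])
    g = pbest[np.argmin(pbest_val)].copy()          # global best
    for t in range(iters):
        beta = 1.0 - 0.5 * t / iters                # contraction-expansion coeff.
        mbest = pbest.mean(axis=0)                  # mean of personal bests
        for i in range(n_particles):
            phi = rng.uniform(size=dim)
            p = phi * pbest[i] + (1 - phi) * g      # local attractor
            u = rng.uniform(1e-12, 1.0, size=dim)
            sign = np.where(rng.uniform(size=dim) < 0.5, 1.0, -1.0)
            x[i] = np.clip(p + sign * beta * np.abs(mbest - x[i]) * np.log(1.0 / u),
                           lb, ub)
            v = func(x[i])
            if v < pbest_val[i]:
                pbest[i], pbest_val[i] = x[i].copy(), v
        g = pbest[np.argmin(pbest_val)].copy()
    return g, func(g)

best_x, best_val = qpso(objective, dim=4)
print(best_x, best_val)
```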

  19. Optimization of CO2 Storage in Saline Aquifers Using Water-Alternating Gas (WAG) Scheme - Case Study for Utsira Formation

    Science.gov (United States)

    Agarwal, R. K.; Zhang, Z.; Zhu, C.

    2013-12-01

    For optimization of CO2 storage and reduced CO2 plume migration in saline aquifers, a genetic algorithm (GA) based optimizer has been developed and combined with the DOE multi-phase flow and heat transfer numerical simulation code TOUGH2. Designated as GA-TOUGH2, this combined solver/optimizer has been verified by performing optimization studies on a number of model problems and comparing the results with brute-force optimization, which requires a large number of simulations. Using GA-TOUGH2, an innovative reservoir engineering technique known as water-alternating-gas (WAG) injection has been investigated to determine the optimal WAG operation for enhanced CO2 storage capacity. The topmost layer (layer #9) of the Utsira formation at the Sleipner Project, Norway, is considered as a case study. A cylindrical domain, which possesses the same characteristics as the detailed 3D Utsira Layer #9 model except for the absence of 3D topography, was used. Topographical details are known to be important in determining the CO2 migration at Sleipner, and are considered in our companion model for the history match of the CO2 plume migration. However, simplifying the topography here, without compromising accuracy, is necessary to analyze the effectiveness of the WAG operation on CO2 migration without incurring excessive computational cost. The selected WAG operation can then be simulated with full topography details later. We consider a cylindrical domain with a thickness of 35 m and a horizontal flat caprock. All hydrogeological properties are retained from the detailed 3D Utsira Layer #9 model, the most important being the horizontal-to-vertical permeability ratio of 10. Constant Gas Injection (CGI) operation with a nine-year average CO2 injection rate of 2.7 kg/s is considered as the baseline case for comparison. The 30-day, 15-day, and 5-day WAG cycle durations are considered for the WAG optimization design. Our computations show that for the simplified Utsira Layer #9 model, the
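
    The coupling idea behind a solver/optimizer of this kind can be illustrated schematically: a genetic algorithm proposes WAG cycle parameters, a black-box simulation returns the resulting objective, and the population evolves toward the best schedule. In the sketch below, `run_simulation` is a toy placeholder, not an interface to TOUGH2.

```python
# Conceptual GA wrapped around a black-box reservoir simulation: candidate
# (water_days, gas_days) WAG cycles are scored and evolved.
import random

random.seed(1)

def run_simulation(water_days, gas_days):
    # placeholder response surface standing in for an actual simulator run
    return 1.0e6 * gas_days / (gas_days + water_days) - 50.0 * abs(gas_days - 15)

def evolve(pop_size=20, generations=40):
    pop = [(random.uniform(1, 30), random.uniform(1, 30)) for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(pop, key=lambda ind: -run_simulation(*ind))
        parents = scored[: pop_size // 2]                 # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            child = [(a[i] + b[i]) / 2 for i in range(2)]              # crossover
            child = [min(30, max(1, c + random.gauss(0, 1))) for c in child]  # mutation
            children.append(tuple(child))
        pop = parents + children
    best = max(pop, key=lambda ind: run_simulation(*ind))
    return best, run_simulation(*best)

(best_water, best_gas), score = evolve()
print(f"best WAG cycle: {best_water:.1f} d water / {best_gas:.1f} d gas, "
      f"objective {score:.0f}")
```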

  20. Robust portfolio selection under norm uncertainty

    Directory of Open Access Journals (Sweden)

    Lei Wang

    2016-06-01

    Full Text Available Abstract In this paper, we consider the robust portfolio selection problem which has a data uncertainty described by the (p, w)-norm in the objective function. We show that the robust formulation of this problem is equivalent to a linear optimization problem. Moreover, we present some numerical results concerning our robust portfolio selection problem.
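
    A schematic analogue of such a reduction is sketched below with a simple box uncertainty set (asset returns in [mu - delta, mu + delta]) instead of the paper's (p, w)-norm set; maximizing the worst-case return then becomes an ordinary linear program.

```python
# Simplified illustration: robust portfolio selection under box return
# uncertainty reduces to a linear program. Numbers are placeholders.
import numpy as np
from scipy.optimize import linprog

mu = np.array([0.08, 0.10, 0.12, 0.07])      # nominal expected returns
delta = np.array([0.02, 0.05, 0.08, 0.02])   # uncertainty half-widths

# maximize worst-case return (mu - delta)^T w  ==  minimize -(mu - delta)^T w
# s.t. sum(w) = 1, 0 <= w <= 0.6 (a simple diversification bound)
res = linprog(c=-(mu - delta),
              A_eq=np.ones((1, 4)), b_eq=[1.0],
              bounds=[(0.0, 0.6)] * 4)
print("robust weights:", res.x, "worst-case return:", -res.fun)
```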

  1. Optimization of Polyplex Formation between DNA Oligonucleotide and Poly(l-Lysine): Experimental Study and Modeling Approach

    Science.gov (United States)

    Vasiliu, Tudor; Cojocaru, Corneliu; Rotaru, Alexandru; Pricope, Gabriela; Pinteala, Mariana; Clima, Lilia

    2017-01-01

    The polyplexes formed by nucleic acids and polycations have received a great attention owing to their potential application in gene therapy. In our study, we report experimental results and modeling outcomes regarding the optimization of polyplex formation between the double-stranded DNA (dsDNA) and poly(l-Lysine) (PLL). The quantification of the binding efficiency during polyplex formation was performed by processing of the images captured from the gel electrophoresis assays. The design of experiments (DoE) and response surface methodology (RSM) were employed to investigate the coupling effect of key factors (pH and N/P ratio) affecting the binding efficiency. According to the experimental observations and response surface analysis, the N/P ratio showed a major influence on binding efficiency compared to pH. Model-based optimization calculations along with the experimental confirmation runs unveiled the maximal binding efficiency (99.4%) achieved at pH 5.4 and N/P ratio 125. To support the experimental data and reveal insights of molecular mechanism responsible for the polyplex formation between dsDNA and PLL, molecular dynamics simulations were performed at pH 5.4 and 7.4. PMID:28629130

  2. Optimization of Polyplex Formation between DNA Oligonucleotide and Poly(ʟ-Lysine): Experimental Study and Modeling Approach.

    Science.gov (United States)

    Vasiliu, Tudor; Cojocaru, Corneliu; Rotaru, Alexandru; Pricope, Gabriela; Pinteala, Mariana; Clima, Lilia

    2017-06-17

    The polyplexes formed by nucleic acids and polycations have received a great attention owing to their potential application in gene therapy. In our study, we report experimental results and modeling outcomes regarding the optimization of polyplex formation between the double-stranded DNA (dsDNA) and poly(ʟ-Lysine) (PLL). The quantification of the binding efficiency during polyplex formation was performed by processing of the images captured from the gel electrophoresis assays. The design of experiments (DoE) and response surface methodology (RSM) were employed to investigate the coupling effect of key factors (pH and N/P ratio) affecting the binding efficiency. According to the experimental observations and response surface analysis, the N/P ratio showed a major influence on binding efficiency compared to pH. Model-based optimization calculations along with the experimental confirmation runs unveiled the maximal binding efficiency (99.4%) achieved at pH 5.4 and N/P ratio 125. To support the experimental data and reveal insights of molecular mechanism responsible for the polyplex formation between dsDNA and PLL, molecular dynamics simulations were performed at pH 5.4 and 7.4.
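
    The response-surface step can be sketched as fitting a full second-order polynomial in pH and N/P ratio and locating the maximum of the fitted surface; the data points below are synthetic placeholders, not the measured binding efficiencies.

```python
# Sketch of a two-factor response surface: quadratic fit and grid search.
import numpy as np

# design points: (pH, N/P ratio, binding efficiency %) -- synthetic values
data = np.array([
    [5.4,  25, 70.0], [5.4,  75, 90.0], [5.4, 125, 99.0],
    [6.4,  25, 60.0], [6.4,  75, 82.0], [6.4, 125, 93.0],
    [7.4,  25, 45.0], [7.4,  75, 70.0], [7.4, 125, 85.0],
])
ph, np_ratio, y = data.T

# full quadratic model: 1, x1, x2, x1^2, x2^2, x1*x2
X = np.column_stack([np.ones_like(ph), ph, np_ratio,
                     ph**2, np_ratio**2, ph * np_ratio])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)

# evaluate the fitted surface on a grid and report the predicted optimum
pg, ng = np.meshgrid(np.linspace(5.4, 7.4, 41), np.linspace(25, 125, 41))
G = np.column_stack([np.ones(pg.size), pg.ravel(), ng.ravel(),
                     pg.ravel()**2, ng.ravel()**2, pg.ravel() * ng.ravel()])
pred = G @ coef
i = np.argmax(pred)
print(f"predicted optimum: pH {pg.ravel()[i]:.1f}, N/P {ng.ravel()[i]:.0f}, "
      f"efficiency {pred[i]:.1f}%")
```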

  3. Spectroscopic analysis of Zirconium plasma in different ambient and optimizing conditions for nanoclusters formation

    International Nuclear Information System (INIS)

    Yadav, Dheerendra; Thareja, Raj K.

    2010-01-01

    The laser-produced zirconium plasma has been studied by emission spectroscopy and fast photography using an intensified charge-coupled device at different ambient pressures of nitrogen (0.1, 1.0 and 10 mbar). Zirconium clusters form at an ambient pressure of 1.0 mbar at the plume periphery due to chemical reactions between the plasma plume and the ambient gas, as confirmed using optical emission spectroscopy. The optimum parameters for cluster formation are reported. The ZrN clusters are deposited on a silicon substrate and characterized by AFM, XRD and EDAX techniques. (author)

  4. Optimization of impedance spectroscopy techniques for measuring cutaneous micropore formation after microneedle treatment in an elderly population.

    Science.gov (United States)

    Kelchen, Megan N; Holdren, Grant O; Farley, Matthew J; Zimmerman, M Bridget; Fairley, Janet A; Brogden, Nicole K

    2014-12-01

    The objective of this study was to optimize a reproducible impedance spectroscopy method in elderly subjects as a means to evaluate the effects of microneedles on aging skin. Human volunteers were treated with microneedles at six sites on the upper arm. Repeated impedance measurements were taken pre- and post-microneedle insertion. Two electrode types were evaluated (dry vs. gel), using either light or direct pressure to maintain contact between the electrode and the skin surface. Transepidermal water loss (TEWL) was measured as a complementary technique. Five control subjects and nine elderly subjects completed the study. Microneedle insertion produced a significant decrease in impedance from baseline in all subjects, consistent with micropore formation. This was supported by a complementary significant increase in TEWL. These measurements confirmed micropore formation in elderly subjects, which will be essential for future studies describing microneedle-assisted transdermal delivery in aging populations.

  5. The multi-criteria optimization for the formation of the multiple-valued logic model of a robotic agent

    International Nuclear Information System (INIS)

    Bykovsky, A Yu; Sherbakov, A A

    2016-01-01

    The C-valued Allen-Givone algebra is the attractive tool for modeling of a robotic agent, but it requires the consensus method of minimization for the simplification of logic expressions. This procedure substitutes some undefined states of the function for the maximal truth value, thus extending the initially given truth table. This further creates the problem of different formal representations for the same initially given function. The multi-criteria optimization is proposed for the deliberate choice of undefined states and model formation. (paper)

  6. Optimizing geologic CO2 sequestration by injection in deep saline formations below oil reservoirs

    International Nuclear Information System (INIS)

    Han, Weon Shik; McPherson, Brian J.

    2009-01-01

    The purpose of this research is to present a best-case paradigm for geologic CO2 storage: CO2 injection and sequestration in saline formations below oil reservoirs. This includes the saline-only section below the oil-water contact (OWC) in oil reservoirs, a storage target neglected in many current storage capacity assessments. This also includes saline aquifers (high porosity and permeability formations) immediately below oil-bearing formations. While this is a very specific injection target, we contend that most, if not all, oil-bearing basins in the US contain a great volume of such strata, which represent a rather large CO2 storage capacity option. We hypothesize that these are the best storage targets in those basins. The purpose of this research is to evaluate this hypothesis. We quantitatively compared CO2 behavior in oil reservoirs and brine formations by examining the thermophysical properties of CO2, CO2-brine, and CO2-oil under various pressure, temperature, and salinity conditions. In addition, we compared the distribution of the gravity number (N), which characterizes the tendency towards buoyancy-driven CO2 migration, and the mobility ratio (M), which characterizes the impeded CO2 migration, in oil reservoirs and brine formations. Our research suggests competing advantages and disadvantages of CO2 injection in oil reservoirs vs. brine formations: (1) CO2 solubility in oil is significantly greater than in brine (over 30 times); (2) the tendency of buoyancy-driven CO2 migration is smaller in oil reservoirs because the density contrast between oil and CO2 is smaller than that between brine and CO2 (the approximate density contrast between CO2 and crude oil is ∼100 kg/m3 and between CO2 and brine is ∼350 kg/m3); (3) the increase in density of oil and brine due to CO2 dissolution is not significant (about 7-15 kg/m3); (4) the viscosity reduction of oil due to CO2 dissolution is significant (from 5790 to 98 mPa s). We compared these competing

  7. Innovative processes impact on the factors optimal number formation of the enterprise location

    OpenAIRE

    Franiv Ihor Andriiovych

    2014-01-01

    The aim of the article. To determine the factors that govern enterprise location when searching for an optimal site; when justifying the location of an enterprise on the basis of one or several factors, innovative development should nowadays also be taken into account. The results of the analysis. The assignment of branches to one or another production group will always be relative and tied to a time period, because the weight of the various factors can change under the influence of innovation. Technologica...

  8. Formation of the Optimal Investment Portfolio as a Precondition for the Bank’s Financial Security

    Directory of Open Access Journals (Sweden)

    Anna Shapovalova

    2015-01-01

    Full Text Available This article analyses the definitions of the bank's financial security and investment activities. It describes several types of models for managing a bank's risks and the CAPM method, which is chosen for use. In support of the chosen CAPM method, we include the mathematical model that allows an optimal investment portfolio to be elaborated. This model stands at the basis of the method and is illustrated with a case study of one Ukrainian bank.

  9. Formation of the Optimal Investment Portfolio as a Precondition for the Bank’s Financial Security

    Directory of Open Access Journals (Sweden)

    Anna Shapovalova

    2016-01-01

    Full Text Available This article analyses the definitions of the bank's financial security and investment activities. It describes several types of models for managing a bank's risks and the CAPM method, which is chosen for use. In support of the chosen CAPM method, we include the mathematical model that allows an optimal investment portfolio to be elaborated. This model stands at the basis of the method and is illustrated with a case study of one Ukrainian bank.
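
    A minimal illustration of the CAPM-based construction mentioned above: expected returns follow the security market line, and weights are then formed under a simplifying diagonal-covariance assumption. All figures are placeholders, not the bank's data, and the weighting rule is only one possible choice.

```python
# CAPM expected returns plus a simple illustrative allocation rule.
import numpy as np

rf = 0.03                                   # risk-free rate (placeholder)
market_premium = 0.06                       # E[Rm] - rf (placeholder)
beta = np.array([0.8, 1.0, 1.3, 1.6])       # asset betas vs. the market
sigma = np.array([0.12, 0.15, 0.22, 0.30])  # assumed asset volatilities

# security market line: E[Ri] = rf + beta_i * (E[Rm] - rf)
expected = rf + beta * market_premium
print("CAPM expected returns:", np.round(expected, 4))

# tangency-style weights under an uncorrelated-assets simplification:
# w_i proportional to (E[Ri] - rf) / sigma_i^2
raw = (expected - rf) / sigma**2
w = raw / raw.sum()
print("illustrative weights:", np.round(w, 3))
print("portfolio beta:", round(float(beta @ w), 3),
      "expected return:", round(float(expected @ w), 4))
```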

  10. MATHEMATICAL SOLUTIONS FOR OPTIMAL DIMENSIONING OF NUMBER AND HEIGHTS OF SOME HYDROTECHNIQUE ON TORRENTIAL FORMATION

    Directory of Open Access Journals (Sweden)

    Nicolae Petrescu

    2010-01-01

    Full Text Available This paper is intended to develop a mathematical model for the optimal dimensioning of the number and heights of dams/thresholds on a torrent: during a downpour a decrease of the water flow rate is obtained and, through the deposition of solid material behind the structures, a new, smaller slope of the valley is obtained, which changes the torrential character the valley had before construction. The choice of the dam and its characteristic dimensions may be treated as an optimization issue, and the location of the dams along the torrent is dictated by the capabilities of the foundation and restraint, so the chosen solutions will have to comply with these sites. Finally, the choice of the optimal solution to limit the torrential (rainfall) character will be based on a calculation in which the number of thresholds/dams is a variable, their heights varying accordingly. The calculation method presented is an attempt to demonstrate the multiple opportunities that mathematics offers for solving a technical problem of protection against soil erosion, which is currently a very topical environmental protection issue.

  11. Fiber Bragg grating based notch filter for bit-rate-transparent NRZ to PRZ format conversion with two-degree-of-freedom optimization

    International Nuclear Information System (INIS)

    Cao, Hui; Zuo, Jun; Xiong, Bangyun; Cheng, Jianqun; Shu, Xuewen; Shen, Fangcheng; Liu, Xin; Atai, Javid

    2015-01-01

    We propose a novel notch-filtering scheme for bit-rate transparent all-optical NRZ-to-PRZ format conversion. The scheme is based on a two-degree-of-freedom optimally designed fiber Bragg grating. It is shown that a notch filter optimized for any specific operating bit rate can be used to realize high-Q-factor format conversion over a wide bit rate range without requiring any tuning. (paper)

  12. A green non-acid-catalyzed process for direct N=N-C group formation: comprehensive study, modeling, and optimization.

    Science.gov (United States)

    Khakyzadeh, Vahid; Zolfigol, Mohammad Ali; Derakhshan-Panah, Fatemeh; Jafarian, Majid; Miri, Mir Vahid; Gilandoust, Maryam

    2018-01-04

    The aim of this work is to introduce, model, and optimize a new non-acid-catalyzed system for direct N=N-C bond formation. By reacting naphthols or phenol with anilines in the presence of sodium nitrite as the nitrosonium (NO+) source and triethylammonium acetate (TEAA), an N=N-C group can be formed in non-acid media. Modeling and optimization of the reaction conditions were investigated by the response surface method. Sodium nitrite, TEAA, and water were chosen as variables, and the reaction yield was monitored. Analysis of variance indicates that a second-order polynomial model with an F value of 35.7, a P value of 0.0001, and a regression coefficient of 0.93 is able to predict the response. Based on the model, the optimum process conditions were 2.2 mmol sodium nitrite, 2.2 mL of TEAA, and 0.5 mL of water at room temperature. A quadratic (second-order) polynomial model, by analysis of variance, was able to predict the response for direct N=N-C group formation. Predicted response values were in good agreement with the experimental values. Electrochemistry studies were carried out to introduce new Michael acceptor moieties. Broad scope, high yields, short reaction time, and mild conditions are some advantages of the presented method.

  13. Depositional features of the Middle Jurassic formation of Field N and their influence on optimal drilling schedule

    International Nuclear Information System (INIS)

    Mishina, D; Rukavishnikov, V; Belozerov, B; Bochkov, A

    2015-01-01

    The Middle Jurassic formation of Field N, represented by 4 hydrodynamically connected layers (J5-6, J4, J3 and J2), contains 42% of the field STOIIP. The J2-6 formation is characterized as a gas-oil-condensate massive lithologically and tectonically screened accumulation with a gas cap (J2, J3 layers) and bottom water (J5-6 layer). Oil is predominantly in the J3 and J4 layers. There is a high risk of early gas coning from gas-bearing layers into oil producing wells, determined on the basis of production test results, which can significantly decrease the life of the well. To select a more optimal drilling schedule, it is necessary to take the risk of early gas coning into account and determine distinctive features within the gas-saturated zone that can reduce it. The presence of a thick shale barrier between the J2 and J3 layers, with thicknesses varying from 0 to 30 m, is recognized as the beginning of a transgression cycle, and if the gas cap is only in the J2 layer, this barrier, where its thickness exceeds 5 m, can largely prevent early gas coning into oil producing wells. The integration of geological information, represented by the constructed probability map, and petrophysical information, represented by the kh map, provides a more precise determination of an optimal drilling schedule

  14. Optimization of palm oil physical refining process for reduction of 3-monochloropropane-1,2-diol (3-MCPD) ester formation.

    Science.gov (United States)

    Zulkurnain, Musfirah; Lai, Oi Ming; Tan, Soo Choon; Abdul Latip, Razam; Tan, Chin Ping

    2013-04-03

    The reduction of 3-monochloropropane-1,2-diol (3-MCPD) ester formation in refined palm oil was achieved by incorporation of additional processing steps in the physical refining process to remove chloroester precursors prior to the deodorization step. The modified refining process was optimized for the least 3-MCPD ester formation and acceptable refined palm oil quality using response surface methodology (RSM) with five processing parameters: water dosage, phosphoric acid dosage, degumming temperature, activated clay dosage, and deodorization temperature. The removal of chloroester precursors was largely accomplished by increasing the water dosage, while the reduction of 3-MCPD esters was a compromise in oxidative stability and color of the refined palm oil because some factors such as acid dosage, degumming temperature, and deodorization temperature showed contradictory effects. The optimization resulted in 87.2% reduction of 3-MCPD esters from 2.9 mg/kg in the conventional refining process to 0.4 mg/kg, with color and oil stability index values of 2.4 R and 14.3 h, respectively.

  15. Optimizing ROP in formations difficult to be drilled; Optimierung des Bohrfortschritts in schlecht bohrbaren Formationen

    Energy Technology Data Exchange (ETDEWEB)

    Engmann, M.; Belohlavek, K.U.; Gloth, H. [Technische Univ. Bergakademie Freiberg (Germany)]; Marx, J.; Luy, R.; Marx, C. [Technische Univ. Clausthal (Germany). Inst. fuer Erdoel- und Erdgastechnik]

    1998-08-01

    In Northern Germany drilling engineers encounter the problem of low rates of penetration (ROP) while drilling the geological formations of the Middle and Lower Bunter and the Keuper. The ROP is quite low in comparison with other regions, e.g. the North Sea. The performance data (ROP, bit run length and cost per meter) of more than 100 wells drilled in Northern Germany during the last 10 years were studied by statistical methods; the data under investigation comprise more than 1,000 bit runs. The results of the statistical analysis, together with novel trend curves relating the different performance data, were used to set up a prognosis of the potential to improve drilling performance in the target horizons through new drilling technology: improved drilling bits, more powerful downhole motors, and increased hydraulic power at the bit. For the two Bunter formations a special impregnated drilling bit was conceived, applied, and improved with noticeable success. (orig.)

  16. Format for description of building envelope components for use in an optimization process

    DEFF Research Database (Denmark)

    Rudbeck, Claus Christian; Svendsen, Sv Aa Højgaard

    1999-01-01

    are decided by the architect or kept within limits due to public regulations, but even when these factors have been decided, some are left open. Aspects like durability and thermal performance are seldom specified by the architect, but might be addressed in national building codes. The national...... building codes specify minimum requirements for the aspects in question, but no trade-offs between the different aspects are allowed, which makes them inflexible. To allow for the use of optimization procedures in the design process a larger degree of flexibility is needed, but first of all there is a need......When designing a building, the number of possible combinations of aspects related to the performance of the building envelope is almost unlimited. Due to the physical laws governing e.g. the static performance of the building, some aspects should be kept within a certain interval. Other aspects...

  17. Superconductivity optimization and phase formation kinetics study of internal-Sn Nb3Sn superconducting wires

    Energy Technology Data Exchange (ETDEWEB)

    Zhang, Chaowu

    2007-07-15

    Nb3Sn superconducting wires are among the most widely applied cryogenic superconducting materials and the best choice for high-field magnets exceeding 10 T. One of the most significant applications is the ITER project, which is regarded as a hope for a future energy source. High-Cu composite designs with a smaller number of sub-elements and a non-reactive diffusion barrier, together with the RRP (Restacked Rod Process) internal-Sn technology, are usually applied for wire manufacturing. Wires designed and processed in this way were supplied by MSA/Alstom and WST/NIN for this research. The systematic investigation of internal-Sn superconducting wires includes the optimization of heat treatment (HT) conditions, phase formation and its relation to superconductivity, microstructure analysis, and the phase formation kinetics. Because of the complexity of the configuration design and metallurgical processing, the MF wires are not sufficient for studying the effect of a single factor on superconductivity. Therefore, four sets of mono-element (ME) wires with different Sn ratios and different third-element additions were designed and fabricated in order to explore the relationship between phase formation and superconducting performance, particularly the A15 layer growth kinetics. Different characterization techniques have been used (magnetization measurements, neutron diffraction and SEM/TEM/EDX analysis). The A15 layer thicknesses of the various ME samples were measured and fitted linearly and non-linearly by means of two model equations. The results clearly demonstrate that the phase formation kinetics of the Nb3Sn solid-state reaction follows an n-power relation and that the n value increases with the HT temperature and the Sn ratio in the wire composite. (author)

  18. On application of vector optimization in the problem of formation of portfolio of counterparties

    Science.gov (United States)

    Gorbich, A. L.; Medvedeva, M. A.; Medvedev, M. A.

    2016-12-01

    For the effective functioning of any enterprise it is necessary to choose the right partners: suppliers of raw materials and buyers of finished products with which the company interacts in the course of its business. However, the presence of a large number of enterprises on the market makes choosing the most appropriate among them very difficult and requires the ability to assess possible partners objectively, based on a multilateral analysis of their activities. This analysis can be carried out by solving multiobjective mathematical programming problems using methods of vector optimization. The work considers existing methods for the selection of counterparties, as well as the theoretical foundations of the proposed methodology. It also describes a computer program that analyzes the raw data on contractors and allows the best portfolio of suppliers for the enterprise to be chosen. A particular feature of counterparty selection is that today's market has a large number of enterprises engaged in similar activities. A successful choice of contractor helps to avoid unpleasant situations and financial losses, as well as to find a reliable partner for implementing the production strategy of the company.
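
    One common way to reduce such a multi-criteria counterparty choice to a single ranking is a weighted-sum scalarization over normalized criteria, sketched below; the criteria, weights and candidate data are illustrative and do not reproduce the program described in the article.

```python
# Weighted-sum scoring of candidate suppliers over several criteria.
import numpy as np

# rows: candidate suppliers; columns: price (lower is better), reliability,
# delivery speed, financial stability (higher is better)
candidates = {
    "Supplier A": [100.0, 0.92, 0.70, 0.80],
    "Supplier B": [ 85.0, 0.80, 0.90, 0.60],
    "Supplier C": [110.0, 0.97, 0.60, 0.90],
}
X = np.array(list(candidates.values()))
benefit = np.array([False, True, True, True])   # is "higher better"?
weights = np.array([0.35, 0.30, 0.15, 0.20])

# min-max normalize each criterion to [0, 1], flipping cost criteria
norm = (X - X.min(axis=0)) / (X.max(axis=0) - X.min(axis=0))
norm[:, ~benefit] = 1.0 - norm[:, ~benefit]

scores = norm @ weights
for name, s in sorted(zip(candidates, scores), key=lambda t: -t[1]):
    print(f"{name}: {s:.3f}")
```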

  19. Application of Response Surface Methodology for Optimization of Paracetamol Particles Formation by RESS Method

    International Nuclear Information System (INIS)

    Sabet, J.K.; Ghotbi, C.; Dorkoosh, F.

    2012-01-01

    Ultrafine particles of paracetamol were produced by Rapid Expansion of Supercritical Solution (RESS). The experiments were conducted to investigate the effects of extraction temperature (313-353 K), extraction pressure (10-18 MPa), pre-expansion temperature (363-403 K), and post-expansion temperature (273-323 K) on the particle size and morphology of paracetamol particles. The characterization of the particles was determined by Scanning Electron Microscopy (SEM), Transmission Electron Microscopy (TEM), and Liquid Chromatography/Mass Spectrometry (LC-MS) analysis. The average particle size of the original paracetamol was 20.8 μm, while the average particle size of paracetamol after nanonization via the RESS process was 0.46 μm, depending on the experimental conditions used. Moreover, the morphology of the processed particles changed to spherical and regular while the virgin particles of paracetamol were needle-shaped and irregular. Response surface methodology (RSM) was used to optimize the process parameters. The extraction temperature, 347 K; extraction pressure, 12 MPa; pre-expansion temperature, 403 K; and post-expansion temperature, 322 K were found to be the optimum conditions to achieve the minimum average particle size of paracetamol.

  20. Ultrathin Co3O4 Layers Realizing Optimized CO2 Electroreduction to Formate.

    Science.gov (United States)

    Gao, Shan; Jiao, Xingchen; Sun, Zhongti; Zhang, Wenhua; Sun, Yongfu; Wang, Chengming; Hu, Qitao; Zu, Xiaolong; Yang, Fan; Yang, Shuyang; Liang, Liang; Wu, Ju; Xie, Yi

    2016-01-11

    Electroreduction of CO2 into hydrocarbons could contribute to alleviating energy crisis and global warming. However, conventional electrocatalysts usually suffer from low energetic efficiency and poor durability. Herein, atomic layers for transition-metal oxides are proposed to address these problems through offering an ultralarge fraction of active sites, high electronic conductivity, and superior structural stability. As a prototype, 1.72 and 3.51 nm thick Co3O4 layers were synthesized through a fast-heating strategy. The atomic thickness endowed Co3O4 with abundant active sites, ensuring a large CO2 adsorption amount. The increased and more dispersed charge density near Fermi level allowed for enhanced electronic conductivity. The 1.72 nm thick Co3O4 layers showed over 1.5 and 20 times higher electrocatalytic activity than 3.51 nm thick Co3O4 layers and bulk counterpart, respectively. Also, 1.72 nm thick Co3O4 layers showed formate Faradaic efficiency of over 60% in 20 h. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
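
    For reference, a formate Faradaic efficiency such as the value above is obtained from the charge that went into the two-electron CO2-to-formate reaction divided by the total charge passed; the measured quantities in the sketch below are placeholders, not data from the paper.

```python
# Faradaic efficiency for formate: FE = z * n_formate * F / Q_total.
F = 96485.0          # C/mol, Faraday constant
z = 2                # electrons per formate molecule

total_charge_C = 180.0        # total charge passed during electrolysis (C)
formate_umol = 560.0          # formate quantified afterwards (micromol)

fe = z * (formate_umol * 1e-6) * F / total_charge_C * 100.0
print(f"formate Faradaic efficiency: {fe:.1f}%")
```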

  1. Formation of SiNx:H by PECVD: optimization of the optical, bulk passivation and structural properties for photovoltaic applications

    International Nuclear Information System (INIS)

    Lelievre, J.F.

    2007-04-01

    Hydrogenated silicon nitride SiNx:H is widely used as an antireflection coating and passivation layer in the manufacture of silicon photovoltaic cells. The aim of this work was to implement a low frequency (440 kHz) PECVD reactor and to characterize the obtained SiN layers. After determining the optimal deposition parameters, the physico-chemical structure of the layers was studied. The optical properties were studied with the aim of improving the antireflection coating of the photovoltaic cells. The surface and bulk passivation properties induced by the SiN layer, in terms of its stoichiometry, were analyzed and revealed the excellent passivating efficiency of this material. Finally, the formation conditions of silicon nano-crystals in the SiN matrix were studied. (O.M.)

  2. Optimizing platelet-rich plasma gel formation by varying time and gravitational forces during centrifugation.

    Science.gov (United States)

    Jo, Chris H; Roh, Young Hak; Kim, Ji Eun; Shin, Sue; Yoon, Kang Sup

    2013-10-01

    Despite the increasing clinical use of topical platelet-rich plasma (PRP) to enhance tissue healing and regeneration, there is no properly standardized method of autologous PRP gel preparation. This study examined the effect of the centrifugation time and gravitational force (g) on the platelet recovery ratio of PRP and determined the most effective centrifugation conditions for preparing PRP. Two-step centrifugation for preparing PRP was used in 39 subjects who had consented prior to the study's start. The separating centrifugation (Step 1, used to separate whole blood into its two main components: red blood cells and plasma) was tested from 500g to 1900g at 200g increments for 5 minutes (min), and from 100g to 1300g at 200g increments for 10 minutes. After the separating centrifugation, the upper plasma layer was transferred to another plain tube for the condensation centrifugation and the remaining lower cell layer was discarded. The condensation centrifugation (Step 2, used to condense the platelets in the separated plasma) was tested at 1000g for 15 min, 1500g for 15 min, 2000g for 5 min and 3000g for 5 min, and additionally at 1000g for 10 min and 1500g for 10 min. Platelet gelation was induced by adding 10% calcium gluconate to the final PRP at a volume ratio of 1:10. The optimal separating centrifugation conditions were found to be 900g for 5 minutes and the optimal condensation conditions 1500g for 15 minutes, with recovery ratios of 92.0 ± 3.1% and 84.3 ± 10.0%, respectively.

  3. Three-factor response surface optimization of nano-emulsion formation using a microfluidizer.

    Science.gov (United States)

    Sadeghpour Galooyak, Saeed; Dabir, Bahram

    2015-05-01

    Emulsification of sunflower oil in water by microfluidization was studied. Response surface methodology (RSM) and the central composite design (CCD) were applied to determine the effects of certain process parameters on the performance of the apparatus and to optimize nano-emulsion fabrication. The influence of pressure, oil content and number of passes on the disruption of emulsions was studied. Quadratic multiple regression models were chosen for the two available responses, namely the Sauter mean diameter (SMD) and the polydispersity index (PdI). Analysis of variance (ANOVA) showed a high coefficient of determination (R(2)) value for both responses, confirming the agreement of the models with the experimental data. The SMD and the PdI decreased as the pressure of emulsification increased from 408 to 762.3 bar for an oil content of 5 vol% and from 408 to 854.4 bar for an oil content of 13 vol%; thereafter, increasing the pressure up to 952 bar led to an increase in both responses. The results implied that laminar elongational flow is an alternative disruption mechanism in addition to inertia in turbulent flow, especially at low treatment pressures. Both responses improved with an increase in the number of passes from 2 to 4 cycles. The oil content had a small effect on the responses; however, the interaction of this parameter with the other regressors had a remarkable impact. The effect of pressure on the Kolmogorov micro-scale was also studied. The results implied that the Kolmogorov equation does not take over-processing into account and is applicable only to the disruption of droplets in the inertial turbulent flow.
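
    The layout of a three-factor central composite design of the kind used above can be generated in coded units as follows; the rotatable alpha and the number of centre points are generic choices, not necessarily those of the study.

```python
# Three-factor central composite design in coded units: 2^3 factorial points,
# 6 axial (star) points at +/- alpha, and repeated centre points.
import itertools
import numpy as np

k = 3                                   # e.g. pressure, oil content, passes
alpha = (2 ** k) ** 0.25                # rotatable design, alpha = F^(1/4)
factorial = np.array(list(itertools.product([-1, 1], repeat=k)), dtype=float)
axial = np.vstack([alpha * np.eye(k), -alpha * np.eye(k)])
center = np.zeros((6, k))
design = np.vstack([factorial, axial, center])
print(design.shape)                     # (8 + 6 + 6, 3) runs
print(np.round(design, 3))

# map the first (pressure) column back to a real range, e.g. 408-952 bar,
# with the axial points at the range limits
lo, hi = 408.0, 952.0
pressure = lo + (design[:, 0] + alpha) / (2 * alpha) * (hi - lo)
print(np.round(pressure[:5], 1))
```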

  4. Optimal formation of genetically modified and functional pancreatic islet spheroids by using hanging-drop strategy.

    Science.gov (United States)

    Kim, H J; Alam, Z; Hwang, J W; Hwang, Y H; Kim, M J; Yoon, S; Byun, Y; Lee, D Y

    2013-03-01

    Rejection and hypoxia are important factors causing islet loss at an early stage after pancreatic islet transplantation. Recently, islets have been dissociated into single cells for reaggregation into so-called islet spheroids. Herein, we used a hanging-drop strategy to form islet spheroids to achieve functional equivalence to intact islets. To obtain single islet cells, we dissociated islets with trypsin-EDTA digestion for 10 minutes. To obtain spheroids, we dropped various numbers of single cells (125, 250, or 500 cells/30 μL drop) onto a Petri dish, which was inverted for incubation in humidified air containing 5% CO(2) at 37 °C for 7 days. The aggregated spheroids in the droplets were harvested for further culture. The size of the aggregated islet spheroids depended on the number of single cells (125-500 cells/30 μL droplet). Their morphology was similar to that of intact islets without any cellular damage. When treated with various concentrations of glucose to evaluate responsiveness, their glucose-mediated stimulation index value was similar to that of intact islets, an observation that was attributed to strong cell-to-cell interactions in the islet spheroids. However, islet spheroids aggregated in general culture dishes showed abnormal glucose responsiveness owing to weak cell-to-cell interactions. Cell-to-cell interactions in islet spheroids were confirmed with an anti-connexin-36 monoclonal antibody. Finally, nonviral poly(ethylene imine)-mediated interleukin-10 cytokine gene delivered beforehand into dissociated single cells before formation of islet spheroids increased the gene transfection efficacy and interleukin-10 secretion from islet spheroids >4-fold compared with intact islets. These results demonstrated the potential application of genetically modified, functional islet spheroids of controlled size and morphology using a hanging-drop technique. Copyright © 2013 Elsevier Inc. All rights reserved.

  5. Synthesis of multi-wavelength temporal phase-shifting algorithms optimized for high signal-to-noise ratio and high detuning robustness using the frequency transfer function

    OpenAIRE

    Servin, Manuel; Padilla, Moises; Garnica, Guillermo

    2016-01-01

    Synthesis of single-wavelength temporal phase-shifting algorithms (PSA) for interferometry is well-known and firmly based on the frequency transfer function (FTF) paradigm. Here we extend the single-wavelength FTF-theory to dual and multi-wavelength PSA-synthesis when several simultaneous laser-colors are present. The FTF-based synthesis for dual-wavelength PSA (DW-PSA) is optimized for high signal-to-noise ratio and minimum number of temporal phase-shifted interferograms. The DW-PSA synthesi...

  6. Optimal Throughput and Self-adaptability of Robust Real-Time IEEE 802.15.4 MAC for AMI Mesh Network

    International Nuclear Information System (INIS)

    Shabani, Hikma; Ahmed, Musse Mohamud; Khan, Sheroz; Hameed, Shahab Ahmed; Habaebi, Mohamed Hadi

    2013-01-01

    A smart grid refers to a modernization of the electricity system that brings intelligence, reliability, efficiency and optimality to the power grid. To provide automated and widely distributed energy delivery, the smart grid will be characterized by a two-way flow of electricity and information between energy suppliers and their customers. Thus, the smart grid is a power grid that integrates data communication networks which provide the collected and analysed data at all levels in real time. The performance of the communication systems is therefore vital for the success of the smart grid. Thanks to their low cost, low power, low data rate, short range, simplicity and licence-free spectrum, ZigBee/IEEE 802.15.4std wireless sensor networks (WSNs) are the most suitable wireless technology for smart grid applications. Unfortunately, almost all ZigBee channels overlap with wireless local area network (WLAN) channels, resulting in severe performance degradation due to interference. In order to improve the performance of the communication systems, this paper proposes an optimal throughput and self-adaptability scheme for ZigBee/IEEE 802.15.4std for the smart grid.

  7. Optimization of Si–C reaction temperature and Ge thickness in C-mediated Ge dot formation

    Energy Technology Data Exchange (ETDEWEB)

    Satoh, Yuhki, E-mail: yu-ki@ecei.tohoku.ac.jp; Itoh, Yuhki; Kawashima, Tomoyuki; Washio, Katsuyoshi

    2016-03-01

    To form Ge dots on a Si substrate, the effect of the thermal reaction temperature of sub-monolayer C with Si (100) was investigated and the deposited Ge thickness was optimized. The samples were prepared by solid-source molecular beam epitaxy with an electron-beam gun for C sublimation and a Knudsen cell for Ge evaporation. C of 0.25 ML was deposited on Si (100) at a substrate temperature of 200 °C, followed by a high-temperature treatment at the reaction temperature (T_R) of 650–1000 °C to create Si-C bonds. Ge equivalent to 2 to 5 nm thick was subsequently deposited at 550 °C. Small and dense dots were obtained for T_R = 750 °C, but the dot density decreased and the dot diameter varied widely in the case of lower and higher T_R. A dot density of about 2 × 10^10 cm^-2 was achieved for Ge deposition equivalent to 3 to 5 nm thick, and the standard deviation of the dot diameter was the lowest, 10 nm, for 5 nm thick Ge. These results mean that C-mediated Ge dot formation was strongly influenced not only by the c(4 × 4) reconstruction condition through the Si-C reaction but also by the relationship between the Ge deposition thickness and the exposed Si (100)-(2 × 1) surface area. - Highlights: • The effect of the Si-C reaction temperature on Ge dot formation was investigated. • Small and dense dots were obtained for T_R = 750 °C. • A dot density of about 2 × 10^10 cm^-2 was achieved for Ge = 3 to 5 nm. • The standard deviation of the dot diameter was the lowest, 10 nm, at Ge = 5 nm.

  8. Methods for robustness programming

    NARCIS (Netherlands)

    Olieman, N.J.

    2008-01-01

    Robustness of an object is defined as the probability that an object will have properties as required. Robustness Programming (RP) is a mathematical approach for Robustness estimation and Robustness optimisation. An example in the context of designing a food product, is finding the best composition

  9. Applying Robust Design in an Industrial Context

    DEFF Research Database (Denmark)

    Christensen, Martin Ebro

    mechanical architectures. Furthermore a set of 15 robust design principles for reducing the variation in functional performance is compiled in a format directly supporting the work of the design engineer. With these foundational methods in place, the existing tools, methods and KPIs of Robust Design...

  10. Uma abordagem de otimização robusta no planejamento agregado de produção na indústria cítrica A robust optimization approach for the aggregate production planning in the citrus industry

    Directory of Open Access Journals (Sweden)

    José Renato Munhoz

    2012-01-01

    Full Text Available In this work, the aggregate production planning of frozen concentrated orange juice is modeled taking into account uncertainty in some parameters, so as to provide an effective tool to support decision making. The robust optimization approach is based on a deterministic linear programming model with multiple products, stages and periods proposed by Munhoz and Morabito (2010). Besides decisions on the production, blending and storage of juices, the model also incorporates the orange harvesting plan, taking the fruit maturation curves into account. A case study was carried out in a company of the sector located in the State of Sao Paulo, involving several plants and a worldwide distribution network. The computational results obtained with the robust optimization approach, using an algebraic modeling language and a state-of-the-art optimization solver, indicate that the approach can be
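
    The multi-period production and inventory structure described here can be written down compactly as a linear program. The sketch below is not the Munhoz and Morabito model; it is a minimal, hedged example (hypothetical costs, capacities and demands, a single product, three periods) showing the inventory-balance structure such models share.

    ```python
    # Minimal multi-period production planning LP (illustrative data, single product).
    # Decision variables per period t: production p_t and ending inventory i_t.
    import numpy as np
    from scipy.optimize import linprog

    T = 3
    prod_cost = [10.0, 12.0, 11.0]      # cost per unit produced in each period
    hold_cost = [1.0, 1.0, 1.0]         # cost per unit held at the end of each period
    capacity = [120.0, 120.0, 120.0]    # production capacity per period
    demand = [100.0, 130.0, 90.0]       # demand per period

    # Decision vector x = [p_0, p_1, p_2, i_0, i_1, i_2]
    c = np.array(prod_cost + hold_cost)

    # Inventory balance: p_t + i_{t-1} - i_t = d_t  (with i_{-1} = 0)
    A_eq = np.zeros((T, 2 * T))
    for t in range(T):
        A_eq[t, t] = 1.0              # + p_t
        A_eq[t, T + t] = -1.0         # - i_t
        if t > 0:
            A_eq[t, T + t - 1] = 1.0  # + i_{t-1}
    b_eq = np.array(demand)

    bounds = [(0, cap) for cap in capacity] + [(0, None)] * T

    res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
    print("optimal cost:", res.fun)
    print("production:", res.x[:T], "inventory:", res.x[T:])
    ```

    A robust counterpart of the kind described in the abstract would typically replace the single demand vector with several demand scenarios and penalize deviations across them, while the balance constraints keep the same shape.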

  12. Robust continuous clustering.

    Science.gov (United States)

    Shah, Sohil Atul; Koltun, Vladlen

    2017-09-12

    Clustering is a fundamental procedure in the analysis of scientific data. It is used ubiquitously across the sciences. Despite decades of research, existing clustering algorithms have limited effectiveness in high dimensions and often require tuning parameters for different domains and datasets. We present a clustering algorithm that achieves high accuracy across multiple domains and scales efficiently to high dimensions and large datasets. The presented algorithm optimizes a smooth continuous objective, which is based on robust statistics and allows heavily mixed clusters to be untangled. The continuous nature of the objective also allows clustering to be integrated as a module in end-to-end feature learning pipelines. We demonstrate this by extending the algorithm to perform joint clustering and dimensionality reduction by efficiently optimizing a continuous global objective. The presented approach is evaluated on large datasets of faces, hand-written digits, objects, newswire articles, sensor readings from the Space Shuttle, and protein expression levels. Our method achieves high accuracy across all datasets, outperforming the best prior algorithm by a factor of 3 in average rank.
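
    To make the idea of "a smooth continuous objective based on robust statistics" concrete, the sketch below is a simplified, hedged rendering of that family of objectives, not the authors' released implementation; the penalty choice, weighting scheme and parameters are assumptions. Each data point gets a representative, representatives of neighbouring points are pulled together through a Geman-McClure penalty, and a standard half-quadratic reweighting turns each update into a linear solve.

    ```python
    # Simplified sketch of a robust-continuous-clustering-style objective (illustrative only):
    #   0.5 * ||X - U||^2 + (lam/2) * sum_{(i,j) in E} rho(||u_i - u_j||),
    # with the Geman-McClure penalty rho(d) = mu * d^2 / (mu + d^2),
    # minimized by alternating half-quadratic weights and a linear solve for U.
    import numpy as np
    from sklearn.neighbors import kneighbors_graph

    def robust_representatives(X, lam=1.0, mu=1.0, k=5, n_iter=50):
        n = X.shape[0]
        A = kneighbors_graph(X, n_neighbors=k, mode="connectivity")
        A = ((A + A.T) > 0).toarray()          # symmetrized k-NN graph defines the edge set E
        edges = np.argwhere(np.triu(A, 1))
        U = X.copy()
        for _ in range(n_iter):
            d2 = np.sum((U[edges[:, 0]] - U[edges[:, 1]]) ** 2, axis=1)
            w = (mu / (mu + d2)) ** 2          # half-quadratic weights for Geman-McClure
            L = np.zeros((n, n))               # weighted graph Laplacian
            for (i, j), wij in zip(edges, w):
                L[i, i] += wij; L[j, j] += wij
                L[i, j] -= wij; L[j, i] -= wij
            U = np.linalg.solve(np.eye(n) + lam * L, X)   # minimizer of the quadratic surrogate
        return U  # nearby representatives collapse onto shared cluster locations

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        X = np.vstack([rng.normal(0, 0.2, (30, 2)), rng.normal(3, 0.2, (30, 2))])
        U = robust_representatives(X)
        print("spread of representatives per true cluster:",
              U[:30].std(axis=0).round(3), U[30:].std(axis=0).round(3))
    ```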

  13. Robust estimation and hypothesis testing

    CERN Document Server

    Tiku, Moti L

    2004-01-01

    In statistical theory and practice, a certain distribution is usually assumed and then optimal solutions sought. Since deviations from an assumed distribution are very common, one cannot feel comfortable with assuming a particular distribution and believing it to be exactly correct. That brings the robustness issue into focus. In this book, we have given statistical procedures which are robust to plausible deviations from an assumed model. The method of modified maximum likelihood estimation is used in formulating these procedures. The modified maximum likelihood estimators are explicit functions of sample observations and are easy to compute. They are asymptotically fully efficient and are as efficient as the maximum likelihood estimators for small sample sizes. The maximum likelihood estimators have computational problems and are, therefore, elusive. A broad range of topics is covered in this book. Solutions are given which are easy to implement and are efficient. The solutions are also robust to data anomali...

  14. Optimal topotactic conversion of layered octosilicate to RWR-type zeolite by separating the formation stages of interlayer condensation and elimination of organic guest molecules.

    Science.gov (United States)

    Asakura, Yusuke; Osada, Shimon; Hosaka, Nami; Terasawa, Taichi; Kuroda, Kazuyuki

    2014-07-21

    We demonstrate that the separation of two stages of interlayer condensation under refluxing and elimination of organic guests provides the optimal conditions for the formation of RWR-type zeolite from layered octosilicate. The obtained RWR-type zeolite has higher quality than any other RWR-type zeolite reported previously.

  15. Perceptual Robust Design

    DEFF Research Database (Denmark)

    Pedersen, Søren Nygaard

    The research presented in this PhD thesis has focused on a perceptual approach to robust design. The results of the research and the original contribution to knowledge is a preliminary framework for understanding, positioning, and applying perceptual robust design. Product quality is a topic...... been presented. Therefore, this study set out to contribute to the understanding and application of perceptual robust design. To achieve this, a state-of-the-art and current practice review was performed. From the review, two main research problems were identified. Firstly, a lack of tools...... for perceptual robustness was found to overlap with the optimum for functional robustness and at most approximately 2.2% out of the 14.74% could be ascribed solely to the perceptual robustness optimisation. In conclusion, the thesis has offered a new perspective on robust design by merging robust design...

  16. Defining robustness protocols: a method to include and evaluate robustness in clinical plans

    International Nuclear Information System (INIS)

    McGowan, S E; Albertini, F; Lomax, A J; Thomas, S J

    2015-01-01

    We aim to define a site-specific robustness protocol to be used during the clinical plan evaluation process. Plan robustness of 16 skull base IMPT plans to systematic range and random set-up errors has been retrospectively and systematically analysed. This was determined by calculating the error-bar dose distribution (ebDD) for all the plans and by defining metrics used to construct protocols aiding plan assessment. Additionally, an example of how to clinically use the defined robustness database is given whereby a plan with sub-optimal brainstem robustness was identified. The advantage of using different beam arrangements to improve the plan robustness was analysed. Using the ebDD, it was found that range errors had a smaller effect on dose distribution than the corresponding set-up error in a single fraction, and that organs at risk were most robust to the range errors, whereas the target was more robust to set-up errors. A database was created to aid planners in terms of plan robustness aims in these volumes. This resulted in the definition of site-specific robustness protocols. The use of robustness constraints allowed for the identification of a specific patient that may have benefited from a treatment of greater individuality. A new beam arrangement was shown to be preferable when balancing conformality and robustness for this case. The ebDD and error-bar volume histogram proved effective in analysing plan robustness. The process of retrospective analysis could be used to establish site-specific robustness planning protocols in proton therapy. These protocols allow the planner to determine plans that, although delivering a dosimetrically adequate dose distribution, have resulted in sub-optimal robustness to these uncertainties. For these cases, the use of different beam start conditions may improve the plan robustness to set-up and range uncertainties. (paper)
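
    The error-bar dose distribution concept can be illustrated with a small array computation. The sketch below is a hedged, generic rendering, not the clinical tool used in the paper: given a nominal dose grid and a set of doses recomputed under error scenarios, it takes a per-voxel error bar as half the scenario spread (one reasonable choice) and summarizes it for a structure as an error-bar volume histogram.

    ```python
    # Hedged sketch: per-voxel error-bar dose and an error-bar volume histogram (EVH).
    # Scenario doses would come from recomputing the plan under shifted/range-scaled
    # conditions; here they are random perturbations for illustration only.
    import numpy as np

    rng = np.random.default_rng(1)
    nominal = rng.uniform(50.0, 60.0, size=(40, 40, 40))          # nominal dose grid [Gy]
    scenarios = np.stack([nominal + rng.normal(0, s, nominal.shape)
                          for s in (0.5, 1.0, 1.5, 2.0)])         # error scenarios

    # Error-bar dose: half the spread across scenarios at each voxel.
    ebdd = 0.5 * (scenarios.max(axis=0) - scenarios.min(axis=0))

    # Error-bar volume histogram for one structure (the mask is illustrative).
    structure = np.zeros_like(nominal, dtype=bool)
    structure[10:20, 10:20, 10:20] = True
    thresholds = np.linspace(0.0, ebdd[structure].max(), 20)
    evh = [(ebdd[structure] >= t).mean() for t in thresholds]     # volume fraction vs error bar

    for t, v in zip(thresholds[::5], evh[::5]):
        print(f"fraction of structure with error bar >= {t:4.2f} Gy: {v:.2f}")
    ```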

  18. Automated Robust Maneuver Design and Optimization

    Data.gov (United States)

    National Aeronautics and Space Administration — NASA is seeking improvements to the current technologies related to Position, Navigation and Timing. In particular, it is desired to automate precise maneuver...

  19. Optimizing isothiocyanate formation during enzymatic glucosinolate breakdown by adjusting pH value, temperature and dilution in Brassica vegetables and Arabidopsis thaliana

    Science.gov (United States)

    Hanschen, Franziska S.; Klopsch, Rebecca; Oliviero, Teresa; Schreiner, Monika; Verkerk, Ruud; Dekker, Matthijs

    2017-01-01

    Consumption of glucosinolate-rich Brassicales vegetables is associated with a decreased risk of cancer with enzymatic hydrolysis of glucosinolates playing a key role. However, formation of health-promoting isothiocyanates is inhibited by the epithiospecifier protein in favour of nitriles and epithionitriles. Domestic processing conditions, such as changes in pH value, temperature or dilution, might also affect isothiocyanate formation. Therefore, the influences of these three factors were evaluated in accessions of Brassica rapa, Brassica oleracea, and Arabidopsis thaliana. Mathematical modelling was performed to determine optimal isothiocyanate formation conditions and to obtain knowledge on the kinetics of the reactions. At 22 °C and endogenous plant pH, nearly all investigated plants formed nitriles and epithionitriles instead of health-promoting isothiocyanates. Response surface models, however, clearly demonstrated that upon change in pH to domestic acidic (pH 4) or basic pH values (pH 8), isothiocyanate formation considerably increases. While temperature also affects this process, the pH value has the greatest impact. Further, a kinetic model showed that isothiocyanate formation strongly increases due to dilution. Finally, the results show that isothiocyanate intake can be strongly increased by optimizing the conditions of preparation of Brassicales vegetables.
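
    Response surface modelling of this kind typically fits a second-order polynomial in the factors and reads the optimum off the fitted surface. The sketch below is purely illustrative and not the models fitted in the study: the data are synthetic and only mimic the reported qualitative trend (higher isothiocyanate yield away from neutral pH), but it shows the mechanics of fitting yield against pH and temperature and locating the best setting on a grid.

    ```python
    # Illustrative second-order response surface fit for isothiocyanate (ITC) yield
    # versus pH and temperature. Data are synthetic; only the workflow is the point.
    import numpy as np

    rng = np.random.default_rng(2)
    ph = rng.uniform(3.0, 9.0, 60)
    temp = rng.uniform(20.0, 60.0, 60)
    # Hypothetical "true" surface with higher yield away from neutral pH, plus noise.
    yield_itc = 5 + 1.2 * (ph - 6) ** 2 - 0.002 * (temp - 35) ** 2 + rng.normal(0, 0.3, 60)

    # Design matrix for a full quadratic model: 1, x1, x2, x1^2, x2^2, x1*x2
    X = np.column_stack([np.ones_like(ph), ph, temp, ph ** 2, temp ** 2, ph * temp])
    beta, *_ = np.linalg.lstsq(X, yield_itc, rcond=None)

    # Evaluate the fitted surface on a grid and report the predicted optimum.
    pg, tg = np.meshgrid(np.linspace(3, 9, 61), np.linspace(20, 60, 41))
    G = np.column_stack([np.ones(pg.size), pg.ravel(), tg.ravel(),
                         pg.ravel() ** 2, tg.ravel() ** 2, pg.ravel() * tg.ravel()])
    pred = G @ beta
    k = np.argmax(pred)
    print(f"predicted optimum: pH {pg.ravel()[k]:.1f}, {tg.ravel()[k]:.0f} °C, yield {pred[k]:.2f}")
    ```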

  20. Adaptive Critic Nonlinear Robust Control: A Survey.

    Science.gov (United States)

    Wang, Ding; He, Haibo; Liu, Derong

    2017-10-01

    Adaptive dynamic programming (ADP) and reinforcement learning are quite relevant to each other when performing intelligent optimization. They are both regarded as promising methods involving important components of evaluation and improvement, against the background of information technology such as artificial intelligence, big data, and deep learning. Although great progress has been achieved and surveyed in addressing nonlinear optimal control problems, the research on robustness of ADP-based control strategies under uncertain environments has not been fully summarized. Hence, this survey reviews the recent main results of adaptive-critic-based robust control design of continuous-time nonlinear systems. The ADP-based nonlinear optimal regulation is reviewed, followed by robust stabilization of nonlinear systems with matched uncertainties, guaranteed cost control design of unmatched plants, and decentralized stabilization of interconnected systems. Additionally, further comprehensive discussions are presented, including event-based robust control design, improvement of the critic learning rule, nonlinear H∞ control design, and several notes on future perspectives. By applying the ADP-based optimal and robust control methods to a practical power system and an overhead crane plant, two typical examples are provided to verify the effectiveness of the theoretical results. Overall, this survey is beneficial to promote the development of adaptive critic control methods with robustness guarantees and the construction of higher-level intelligent systems.

  1. Robustness of Structural Systems

    DEFF Research Database (Denmark)

    Canisius, T.D.G.; Sørensen, John Dalsgaard; Baker, J.W.

    2007-01-01

    The importance of robustness as a property of structural systems has been recognised following several structural failures, such as that at Ronan Point in 1968, where the consequences were deemed unacceptable relative to the initiating damage. A variety of research efforts in the past decades have...... attempted to quantify aspects of robustness such as redundancy and identify design principles that can improve robustness. This paper outlines the progress of recent work by the Joint Committee on Structural Safety (JCSS) to develop comprehensive guidance on assessing and providing robustness in structural...... systems. Guidance is provided regarding the assessment of robustness in a framework that considers potential hazards to the system, vulnerability of system components, and failure consequences. Several proposed methods for quantifying robustness are reviewed, and guidelines for robust design...

  2. Robust multivariate analysis

    CERN Document Server

    J Olive, David

    2017-01-01

    This text presents methods that are robust to the assumption of a multivariate normal distribution or methods that are robust to certain types of outliers. Instead of using exact theory based on the multivariate normal distribution, the simpler and more applicable large sample theory is given. The text develops some of the first practical robust regression and robust multivariate location and dispersion estimators backed by theory. The robust techniques are illustrated for methods such as principal component analysis, canonical correlation analysis, and factor analysis. A simple way to bootstrap confidence regions is also provided. Much of the research on robust multivariate analysis in this book is being published for the first time. The text is suitable for a first course in Multivariate Statistical Analysis or a first course in Robust Statistics. This graduate text is also useful for people who are familiar with the traditional multivariate topics, but want to know more about handling data sets with...

  3. A deficit in optimizing task solution but robust and well-retained speed and accuracy gains in complex skill acquisition in Parkinson's disease: multi-session training on the Tower of Hanoi Puzzle.

    Science.gov (United States)

    Vakil, Eli; Hassin-Baer, Sharon; Karni, Avi

    2014-05-01

    There are inconsistent results in the research literature relating to whether a procedural memory dysfunction exists as a core deficit in Parkinson's disease (PD). To address this issue, we examined the acquisition and long-term retention of a cognitive skill in patients with moderately severe PD. To this end, we used a computerized version of the Tower of Hanoi Puzzle. Sixteen patients with PD (11 males, age 60.9±10.26 years, education 13.8±3.5 years, disease duration 8.6±4.7 years, UPDRS III "On" score 16±5.3) were compared with 20 healthy individuals matched for age, gender, education and MMSE scores. The patients were assessed while taking their anti-Parkinsonian medication. All participants underwent three consecutive practice sessions, 24-48 h apart, and a retention-test session six months later. A computerized version of the Tower of Hanoi Puzzle, with four disks, was used for training. Participants completed the task 18 times in each session. Number of moves (Nom) to solution, and time per move (Tpm), were used as measures of acquisition and retention of the learned skill. Robust learning (a significant reduction in Nom and a concurrent decrease in Tpm) was found across all three training sessions in both groups. Moreover, both patients and controls showed significant savings for both measures at six months post-training. However, while their Tpm was no slower than that of controls, patients with PD required more Nom (in the 3rd and 4th sessions) and tended to stabilize on less-than-optimal solutions. The results do not support the notion of a core deficit in gaining speed (fluency) or generating procedural memory in PD. However, PD patients settled on less-than-optimal solutions of the task, i.e., a less efficient task-solving process. The results are consistent with animal studies of the effects of dopamine depletion on task exploration. Thus, patients with PD may have a problem in exploring for optimal task solution rather than in skill acquisition and
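
    Since the number of moves (Nom) is benchmarked against the puzzle's known optimum, a short sketch makes the reference point explicit: the minimal solution for n disks takes 2^n - 1 moves, i.e., 15 moves for the four-disk version used in training. The recursive solver below is a generic textbook illustration, not the computerized task used in the study.

    ```python
    # Minimal-move solution of the Tower of Hanoi: 2**n - 1 moves for n disks.
    def hanoi(n, source="A", target="C", spare="B", moves=None):
        """Return the optimal move sequence for n disks."""
        if moves is None:
            moves = []
        if n == 0:
            return moves
        hanoi(n - 1, source, spare, target, moves)   # park n-1 disks on the spare peg
        moves.append((source, target))               # move the largest remaining disk
        hanoi(n - 1, spare, target, source, moves)   # restack the n-1 disks on top
        return moves

    for n in (3, 4, 5):
        seq = hanoi(n)
        assert len(seq) == 2 ** n - 1
        print(f"{n} disks: optimal Nom = {len(seq)}")   # 4 disks -> 15 moves
    ```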

  4. Robust control design with MATLAB

    CERN Document Server

    Gu, Da-Wei; Konstantinov, Mihail M

    2013-01-01

    Robust Control Design with MATLAB® (second edition) helps the student to learn how to use well-developed advanced robust control design methods in practical cases. To this end, several realistic control design examples, from teaching-laboratory experiments such as a two-wheeled, self-balancing robot to complex systems like a flexible-link manipulator, are given a detailed presentation. All of these exercises are conducted using MATLAB® Robust Control Toolbox 3, Control System Toolbox and Simulink®. By sharing their experiences in industrial cases with minimum recourse to complicated theories and formulae, the authors convey essential ideas and useful insights into robust industrial control systems design using major H-infinity optimization and related methods, allowing readers quickly to move on with their own challenges. The hands-on tutorial style of this text rests on an abundance of examples and features for the second edition: ·        rewritten and simplified presentation of theoretical and meth...

  5. L-ASPARAGINASE FROM BACILLUS SP. RKS-20: PROCESS OPTIMIZATION AND APPLICATION IN THE INHIBITION OF ACRYLAMIDE FORMATION IN FRIED FOODS

    Directory of Open Access Journals (Sweden)

    Richi V. Mahajan

    2014-08-01

    Full Text Available Reports by Swedish researchers in 2002 of the presence of acrylamide in a wide range of fried and baked foods, most notably potato chips and French fries, have raised worldwide concern. However, the enzyme L-asparaginase reduces the formation of acrylamide in fried foods by hydrolysing the L-asparagine present before frying. In this context, we report the hyper production of L-asparaginase from Bacillus sp. RKS-20 by process optimization involving a statistical modeling approach. A maximum of 15.10 IU/ml of L-asparaginase was obtained in 18 h under statistically optimized conditions, wherein KH2PO4 (3.0 g/L), NaCl (1.0 g/L), L-asparagine (14.0 g/L) and glucose (2.0 g/L) were the influential factors. This was an approximately 10-fold increase as compared to the initial un-optimized activity of 1.50 IU/ml. The potential of this enzyme for the inhibition of acrylamide formation was confirmed when potato slices treated with L-asparaginase (40 IU/mg of dry potatoes) showed a 69.80% reduction in acrylamide formation upon frying as compared to untreated potato slices. Hence, this enzyme is a potential candidate for healthier food production.

  6. Robustness of Structures

    DEFF Research Database (Denmark)

    Faber, Michael Havbro; Vrouwenvelder, A.C.W.M.; Sørensen, John Dalsgaard

    2011-01-01

    In 2005, the Joint Committee on Structural Safety (JCSS) together with Working Commission (WC) 1 of the International Association of Bridge and Structural Engineering (IABSE) organized a workshop on robustness of structures. Two important decisions resulted from this workshop, namely...... ‘COST TU0601: Robustness of Structures’ was initiated in February 2007, aiming to provide a platform for exchanging and promoting research in the area of structural robustness and to provide a basic framework, together with methods, strategies and guidelines enhancing robustness of structures...... the development of a joint European project on structural robustness under the COST (European Cooperation in Science and Technology) programme and the decision to develop a more elaborate document on structural robustness in collaboration between experts from the JCSS and the IABSE. Accordingly, a project titled...

  7. Robust Growth Determinants

    OpenAIRE

    Doppelhofer, Gernot; Weeks, Melvyn

    2011-01-01

    This paper investigates the robustness of determinants of economic growth in the presence of model uncertainty, parameter heterogeneity and outliers. The robust model averaging approach introduced in the paper uses a flexible and parsimonious mixture modeling that allows for fat-tailed errors compared to the normal benchmark case. Applying robust model averaging to growth determinants, the paper finds that eight out of eighteen variables found to be significantly related to economic growth ...

  8. Robust Programming by Example

    OpenAIRE

    Bishop , Matt; Elliott , Chip

    2011-01-01

    Part 2: WISE 7; International audience; Robust programming lies at the heart of the type of coding called “secure programming”. Yet it is rarely taught in academia. More commonly, the focus is on how to avoid creating well-known vulnerabilities. While important, that misses the point: a well-structured, robust program should anticipate where problems might arise and compensate for them. This paper discusses one view of robust programming and gives an example of how it may be taught.

  9. Robust recognition via information theoretic learning

    CERN Document Server

    He, Ran; Yuan, Xiaotong; Wang, Liang

    2014-01-01

    This Springer Brief represents a comprehensive review of information theoretic methods for robust recognition. A variety of information theoretic methods have been proffered in the past decade, in a large variety of computer vision applications; this work brings them together, attempts to impart the theory, optimization and usage of information entropy. The authors resort to a new information theoretic concept, correntropy, as a robust measure and apply it to solve robust face recognition and object recognition problems. For computational efficiency, the brief introduces the additive and multip
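
    Correntropy itself has a compact definition that is easy to state in code: for two signals it is the sample mean of a Gaussian kernel evaluated on their differences, V_sigma(x, y) = (1/N) * sum_i exp(-(x_i - y_i)^2 / (2*sigma^2)), which down-weights large errors and hence behaves robustly. The sketch below is a generic hedged illustration of that similarity measure, not code from the book.

    ```python
    # Hedged sketch: correntropy as a robust similarity measure between two signals.
    # Large, outlier-like differences contribute almost nothing to the score.
    import numpy as np

    def correntropy(x, y, sigma=1.0):
        """Sample estimate V_sigma(x, y) = mean of a Gaussian kernel on the errors."""
        e = np.asarray(x, dtype=float) - np.asarray(y, dtype=float)
        return np.mean(np.exp(-e ** 2 / (2.0 * sigma ** 2)))

    rng = np.random.default_rng(3)
    clean = rng.normal(size=200)
    noisy = clean + rng.normal(0, 0.1, 200)          # small Gaussian noise
    corrupted = noisy.copy()
    corrupted[:10] += 20.0                           # a few gross outliers

    print("correntropy (small noise):   ", round(correntropy(clean, noisy), 3))
    print("correntropy (with outliers): ", round(correntropy(clean, corrupted), 3))
    # Mean squared error, by contrast, is dominated by the outliers:
    print("MSE (with outliers):         ", round(np.mean((clean - corrupted) ** 2), 1))
    ```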

  10. Assessment of a robust model protocol with accelerated throughput for a human recombinant full length estrogen receptor-alpha binding assay: protocol optimization and intralaboratory assay performance as initial steps towards validation.

    Science.gov (United States)

    Freyberger, Alexius; Wilson, Vickie; Weimer, Marc; Tan, Shirlee; Tran, Hoai-Son; Ahr, Hans-Jürgen

    2010-08-01

    Despite about two decades of research in the field of endocrine active compounds, still no validated human recombinant (hr) estrogen receptor-alpha (ERalpha) binding assay is available, although hr-ERalpha is available from several sources. In a joint effort, the US EPA and Bayer Schering Pharma, with funding from the EU-sponsored 6th framework project ReProTect, developed a model protocol for such a binding assay. Important features of this assay are the use of a full-length hr-ERalpha and performance in a 96-well plate format. A full-length hr-ERalpha was chosen, as it was considered to provide the most accurate and human-relevant results, whereas truncated receptors could perform differently. Besides three reference compounds [17beta-estradiol, norethynodrel, dibutylphthalate], nine test compounds with different affinities for the ERalpha [diethylstilbestrol (DES), ethynylestradiol, meso-hexestrol, equol, genistein, o,p'-DDT, nonylphenol, n-butylparaben, and corticosterone] were used to explore the performance of the assay. Three independent experiments per compound were performed on different days, and dilutions of test compounds from deep-frozen stocks, solutions of radiolabeled ligand and receptor preparation were freshly prepared for each experiment. The ERalpha binding properties of reference and test compounds were well detected. As expected, dibutylphthalate and corticosterone were non-binders in this assay. In terms of the relative ranking of binding affinities, there was good agreement with published data obtained from experiments using a human recombinant ERalpha ligand binding domain. Irrespective of the chemical nature of the compound, individual IC(50) values for a given compound varied by not more than a factor of 2.5. Our data demonstrate that the assay was robust and reliably ranked compounds with strong, weak, and no affinity for the ERalpha with high accuracy. It avoids the manipulation and use of animals, i.e., the preparation of uterine cytosol as
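
    Ranking of relative binding affinities in such an assay rests on IC(50) values estimated from competition curves. As a hedged, generic illustration (synthetic data and a standard four-parameter logistic model, not the laboratories' analysis scripts), the sketch below fits percent specific binding against competitor concentration and extracts the IC50.

    ```python
    # Hedged sketch: estimating an IC50 from a competitive binding curve
    # with a four-parameter logistic model. The data points are synthetic.
    import numpy as np
    from scipy.optimize import curve_fit

    def four_pl(conc, top, bottom, ic50, hill):
        """Percent specific binding as a function of competitor concentration."""
        return bottom + (top - bottom) / (1.0 + (conc / ic50) ** hill)

    # Synthetic competition data: binding falls from ~100% to ~5% around 1e-8 M.
    conc = np.logspace(-11, -5, 13)
    true = four_pl(conc, 100.0, 5.0, 1e-8, 1.0)
    rng = np.random.default_rng(4)
    binding = true + rng.normal(0, 2.0, conc.size)

    popt, _ = curve_fit(four_pl, conc, binding,
                        p0=[100.0, 0.0, 1e-8, 1.0], maxfev=10000)
    top, bottom, ic50, hill = popt
    print(f"estimated IC50: {ic50:.2e} M (hill slope {hill:.2f})")
    ```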

  11. Robust procedures in chemometrics

    DEFF Research Database (Denmark)

    Kotwa, Ewelina

    properties of the analysed data. The broad theoretical background of robust procedures was given as a very useful supplement to the classical methods, and a new tool, based on robust PCA, aiming at identifying Rayleigh and Raman scatters in excitation-emission (EEM) data was developed. The results show...

  12. Topology optimization under stochastic stiffness

    Science.gov (United States)

    Asadpoure, Alireza

    for the response quantities allow for efficient and accurate calculation of sensitivities of response statistics with respect to the design variables. The proposed methods are shown to be successful at generating robust optimal topologies. Examples from topology optimization in continuum and discrete domains (truss structures) under uncertainty are presented. It is also shown that the proposed methods lead to significant computational savings when compared to Monte Carlo-based optimization, which involves multiple formations and inversions of the global stiffness matrix, and that results obtained from the proposed method are in excellent agreement with those obtained from a Monte Carlo-based optimization algorithm.

  13. Comparison of response formats and concurrent hedonic measures for optimal use of the EmoSensory® Wheel.

    Science.gov (United States)

    Schouteten, Joachim J; Gellynck, Xavier; De Bourdeaudhuij, Ilse; Sas, Benedikt; Bredie, Wender L P; Perez-Cueto, Federico J A; De Steur, Hans

    2017-03-01

    The study of emotional and sensory profiling with food products is gaining momentum in the field of sensory research. These methods can be applied in order to obtain a broader consumer perspective on product performance beyond traditional hedonic measurements (Jiang, King, & Prinyawiwatkul, 2014; Varela & Ares, 2012). The EmoSensory® Wheel, a recently introduced method which combines emotional and sensory assessment in a wheel questionnaire format is one example of conducting such a task in a consumer-friendly way. However, little is known about its performance compared to a traditional list-based questionnaire format. This comparison is undertaken in this study for two product categories (chocolate and yogurt). Further, two methodological issues are addressed by (i) comparing the use of Check-All-That-Apply (CATA) and rate-all-that-apply (RATA) response formats and (ii) examining whether the method impacts on the concurrent hedonic product assessment for two product categories (chocolate and yogurt). Although both questionnaire formats showed similar findings, more consumers preferred the wheel questionnaire format. Regarding the latter, CATA and RATA scaling yielded similar performance and no influence on the concurrent hedonic assessment was found. This study lends further support for combining emotional and sensory measurements using the EmoSensory® profile, which is of interest for food scientists and the food industry. Copyright © 2016 Elsevier Ltd. All rights reserved.

  14. Global Optimization using Interval Analysis : Interval Optimization for Aerospace Applications

    NARCIS (Netherlands)

    Van Kampen, E.

    2010-01-01

    Optimization is an important element in aerospace related research. It is encountered for example in trajectory optimization problems, such as: satellite formation flying, spacecraft re-entry optimization and airport approach and departure optimization; in control optimization, for example in

  15. Optimized eight-dimensional lattice modulation format for IM-DD 56 Gb/s optical interconnections using 850 nm VCSELs

    DEFF Research Database (Denmark)

    Lu, Xiaofeng; Tatarczak, Anna; Lyubopytov, Vladimir

    2017-01-01

    In this paper a novel eight-dimensional lattice optimized modulation format, Block Based 8-dimensional/8-level (BB8), is proposed, taking into account the tradeoff between high performance and modulation simplicity. We provide an experimental performance comparison with its n-level pulse amplitude...... threshold. A simplified bit-to-symbol mapping and corresponding symbol-to-bit demapping algorithms, together with a hyperspace hard-decision, are designed specifically for applications of short-reach data links. These algorithms are expected to use affordable computational resources with relatively low...

  16. LDRD final report : robust analysis of large-scale combinatorial applications.

    Energy Technology Data Exchange (ETDEWEB)

    Carr, Robert D.; Morrison, Todd (University of Colorado, Denver, CO); Hart, William Eugene; Benavides, Nicolas L. (Santa Clara University, Santa Clara, CA); Greenberg, Harvey J. (University of Colorado, Denver, CO); Watson, Jean-Paul; Phillips, Cynthia Ann

    2007-09-01

    Discrete models of large, complex systems like national infrastructures and complex logistics frameworks naturally incorporate many modeling uncertainties. Consequently, there is a clear need for optimization techniques that can robustly account for risks associated with modeling uncertainties. This report summarizes the progress of the Late-Start LDRD 'Robust Analysis of Largescale Combinatorial Applications'. This project developed new heuristics for solving robust optimization models, and developed new robust optimization models for describing uncertainty scenarios.

  17. Efficient robust conditional random fields.

    Science.gov (United States)

    Song, Dongjin; Liu, Wei; Zhou, Tianyi; Tao, Dacheng; Meyer, David A

    2015-10-01

    Conditional random fields (CRFs) are a flexible yet powerful probabilistic approach and have shown advantages for popular applications in various areas, including text analysis, bioinformatics, and computer vision. Traditional CRF models, however, are incapable of selecting relevant features as well as suppressing noise from noisy original features. Moreover, conventional optimization methods often converge slowly in solving the training procedure of CRFs, and will degrade significantly for tasks with a large number of samples and features. In this paper, we propose robust CRFs (RCRFs) to simultaneously select relevant features and suppress noise. An optimal gradient method (OGM) is further designed to train RCRFs efficiently. Specifically, the proposed RCRFs employ the l1 norm of the model parameters to regularize the objective used by traditional CRFs, therefore enabling discovery of the relevant unary features and pairwise features of CRFs. In each iteration of OGM, the gradient direction is determined jointly by the current gradient together with the historical gradients, and the Lipschitz constant is leveraged to specify the proper step size. We show that an OGM can tackle the RCRF model training very efficiently, achieving the optimal convergence rate O(1/k²) (where k is the number of iterations). This convergence rate is theoretically superior to the convergence rate O(1/k) of previous first-order optimization methods. Extensive experiments performed on three practical image segmentation tasks demonstrate the efficacy of OGM in training our proposed RCRFs.
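
    The convergence claim refers to the classical accelerated (Nesterov-type) first-order scheme: combining the current gradient with momentum built from previous iterates yields an O(1/k²) decrease of the objective gap for smooth convex problems, versus O(1/k) for plain gradient descent. The sketch below illustrates that generic scheme on a smooth convex quadratic; it is not the paper's OGM applied to CRF training.

    ```python
    # Hedged sketch: plain vs. accelerated (Nesterov-type) gradient descent
    # on a smooth convex quadratic f(x) = 0.5 * x^T A x - b^T x.
    import numpy as np

    rng = np.random.default_rng(5)
    M = rng.normal(size=(50, 50))
    A = M.T @ M + 1e-3 * np.eye(50)          # symmetric positive definite
    b = rng.normal(size=50)
    L = np.linalg.eigvalsh(A).max()          # Lipschitz constant of the gradient
    x_star = np.linalg.solve(A, b)
    f = lambda x: 0.5 * x @ A @ x - b @ x
    grad = lambda x: A @ x - b

    def run(accelerated, iters=300):
        x = np.zeros(50)
        y, t = x.copy(), 1.0
        for _ in range(iters):
            if accelerated:
                x_new = y - grad(y) / L                        # gradient step at the look-ahead point
                t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
                y = x_new + ((t - 1.0) / t_new) * (x_new - x)  # momentum extrapolation
                x, t = x_new, t_new
            else:
                x = x - grad(x) / L                            # plain gradient descent
        return f(x) - f(x_star)

    print("objective gap, plain GD:    ", f"{run(False):.3e}")
    print("objective gap, accelerated: ", f"{run(True):.3e}")
    ```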

  18. Robustness Metrics: Consolidating the multiple approaches to quantify Robustness

    DEFF Research Database (Denmark)

    Göhler, Simon Moritz; Eifler, Tobias; Howard, Thomas J.

    2016-01-01

    robustness metrics; 3) Functional expectancy and dispersion robustness metrics; and 4) Probability of conformance robustness metrics. The goal was to give a comprehensive overview of robustness metrics and guidance to scholars and practitioners to understand the different types of robustness metrics...

  19. Robustness of Structures

    DEFF Research Database (Denmark)

    Sørensen, John Dalsgaard

    2008-01-01

    This paper describes the background of the robustness requirements implemented in the Danish Code of Practice for Safety of Structures and in the Danish National Annex to the Eurocode 0, see (DS-INF 146, 2003), (DS 409, 2006), (EN 1990 DK NA, 2007) and (Sørensen and Christensen, 2006). More...... frequent use of advanced types of structures with limited redundancy and serious consequences in case of failure combined with increased requirements to efficiency in design and execution followed by increased risk of human errors has made the need of requirements to robustness of new structures essential....... According to Danish design rules robustness shall be documented for all structures in high consequence class. The design procedure to document sufficient robustness consists of: 1) Review of loads and possible failure modes / scenarios and determination of acceptable collapse extent; 2) Review...

  20. Robust Approaches to Forecasting

    OpenAIRE

    Jennifer Castle; David Hendry; Michael P. Clements

    2014-01-01

    We investigate alternative robust approaches to forecasting, using a new class of robust devices, contrasted with equilibrium correction models. Their forecasting properties are derived facing a range of likely empirical problems at the forecast origin, including measurement errors, impulses, omitted variables, unanticipated location shifts and incorrectly included variables that experience a shift. We derive the resulting forecast biases and error variances, and indicate when the methods ar...