WorldWideScience

Sample records for sample-based stochastic optimal

  1. Stochastic global optimization as a filtering problem

    International Nuclear Information System (INIS)

    Stinis, Panos

    2012-01-01

    We present a reformulation of stochastic global optimization as a filtering problem. The motivation behind this reformulation comes from the fact that for many optimization problems we cannot evaluate exactly the objective function to be optimized. Similarly, we may not be able to evaluate exactly the functions involved in iterative optimization algorithms. For example, we may only have access to noisy measurements of the functions or statistical estimates provided through Monte Carlo sampling. This makes iterative optimization algorithms behave like stochastic maps. Naive global optimization amounts to evolving a collection of realizations of this stochastic map and picking the realization with the best properties. This motivates the use of filtering techniques to allow focusing on realizations that are more promising than others. In particular, we present a filtering reformulation of global optimization in terms of a special case of sequential importance sampling methods called particle filters. The increasing popularity of particle filters is based on the simplicity of their implementation and their flexibility. We utilize the flexibility of particle filters to construct a stochastic global optimization algorithm which can converge to the optimal solution appreciably faster than naive global optimization. Several examples of parametric exponential density estimation are provided to demonstrate the efficiency of the approach.
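
    The filtering view above treats each candidate solution as a particle whose noisy objective value plays the role of a likelihood: particles are weighted, resampled and perturbed, so computation concentrates on the more promising realizations. The sketch below is a minimal illustration of that generic idea, not the authors' algorithm; the inverse temperature, jitter schedule and noisy quadratic test function are assumptions chosen for brevity.

    ```python
    # Minimal sketch of a particle-filter-style global optimizer on a noisy
    # objective. `beta`, the Gaussian jitter and the test function are
    # illustrative choices, not taken from the paper.
    import numpy as np

    rng = np.random.default_rng(0)

    def noisy_objective(x):
        """True objective |x|^2 observed through additive Gaussian noise."""
        return np.sum(x**2, axis=-1) + 0.1 * rng.standard_normal(x.shape[0])

    def particle_filter_optimize(n_particles=200, n_iters=50, dim=2, beta=5.0, step=0.3):
        particles = rng.uniform(-5.0, 5.0, size=(n_particles, dim))
        for _ in range(n_iters):
            values = noisy_objective(particles)
            # Importance weights: focus effort on promising particles.
            weights = np.exp(-beta * (values - values.min()))
            weights /= weights.sum()
            # Resample: keep the realizations with the best properties.
            idx = rng.choice(n_particles, size=n_particles, p=weights)
            particles = particles[idx]
            # Perturb to keep exploring (the "stochastic map" step).
            particles += step * rng.standard_normal(particles.shape)
            step *= 0.95  # slowly reduce exploration
        values = noisy_objective(particles)
        return particles[np.argmin(values)]

    print(particle_filter_optimize())  # should end up near the origin
    ```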

  2. Unit Stratified Sampling as a Tool for Approximation of Stochastic Optimization Problems

    Czech Academy of Sciences Publication Activity Database

    Šmíd, Martin

    2012-01-01

    Vol. 19, No. 30 (2012), pp. 153-169, ISSN 1212-074X. R&D Projects: GA ČR GAP402/11/0150; GA ČR GAP402/10/0956; GA ČR GA402/09/0965. Institutional research plan: CEZ:AV0Z10750506. Institutional support: RVO:67985556. Keywords: stochastic programming * approximation * stratified sampling. Subject RIV: BB - Applied Statistics, Operational Research. http://library.utia.cas.cz/separaty/2013/E/smid-unit stratified sampling as a tool for approximation of stochastic optimization problems.pdf

  3. STOCHASTIC GRADIENT METHODS FOR UNCONSTRAINED OPTIMIZATION

    Directory of Open Access Journals (Sweden)

    Nataša Krejić

    2014-12-01

    Full Text Available This paper presents an overview of gradient-based methods for the minimization of noisy functions. It is assumed that the objective function is either given with error terms of a stochastic nature or given as a mathematical expectation. Such problems arise in the context of simulation-based optimization. The focus of this presentation is on gradient-based Stochastic Approximation and Sample Average Approximation methods. The concept of a stochastic gradient approximation of the true gradient can also be extended successfully to deterministic problems. Methods of this kind are presented for data fitting and machine learning problems.
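
    As a minimal illustration of the stochastic approximation idea surveyed here, the sketch below runs Robbins-Monro-style gradient descent on the toy objective E[(x - xi)^2] using one noisy gradient sample per step; the step-size rule and the toy problem are assumptions, not taken from the paper.

    ```python
    # Stochastic approximation (noisy-gradient descent) for min_x E[(x - xi)^2]
    # with xi ~ N(3, 1); an illustrative toy problem, not from the survey.
    import numpy as np

    rng = np.random.default_rng(1)

    def noisy_gradient(x):
        xi = rng.normal(3.0, 1.0)   # one Monte Carlo sample of the randomness
        return 2.0 * (x - xi)       # unbiased estimate of the true gradient 2(x - 3)

    x = 0.0
    for k in range(1, 5001):
        a_k = 1.0 / k               # diminishing steps: sum a_k = inf, sum a_k^2 < inf
        x -= a_k * noisy_gradient(x)

    print(x)                        # converges to the minimizer x* = 3
    ```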

  4. Stochastic Finite Elements in Reliability-Based Structural Optimization

    DEFF Research Database (Denmark)

    Sørensen, John Dalsgaard; Engelund, S.

    1995-01-01

    Application of stochastic finite elements in structural optimization is considered. It is shown how stochastic fields modelling e.g. the modulus of elasticity can be discretized in stochastic variables and how a sensitivity analysis of the reliability of a structural system with respect to optimization variables can be performed. A computer implementation is described and an illustrative example is given.

  5. Stochastic optimization methods

    CERN Document Server

    Marti, Kurt

    2005-01-01

    Optimization problems arising in practice involve random parameters. For the computation of robust optimal solutions, i.e., optimal solutions being insensitive with respect to random parameter variations, deterministic substitute problems are needed. Based on the distribution of the random data, and using decision theoretical concepts, optimization problems under stochastic uncertainty are converted into deterministic substitute problems. Due to the occurring probabilities and expectations, approximative solution techniques must be applied. Deterministic and stochastic approximation methods and their analytical properties are provided: Taylor expansion, regression and response surface methods, probability inequalities, First Order Reliability Methods, convex approximation/deterministic descent directions/efficient points, stochastic approximation methods, differentiation of probability and mean value functions. Convergence results of the resulting iterative solution procedures are given.

  6. Stochastic Finite Elements in Reliability-Based Structural Optimization

    DEFF Research Database (Denmark)

    Sørensen, John Dalsgaard; Engelund, S.

    Application of stochastic finite elements in structural optimization is considered. It is shown how stochastic fields modelling e.g. the modulus of elasticity can be discretized in stochastic variables and how a sensitivity analysis of the reliability of a structural system with respect to optimization variables can be performed.

  7. Stochastic Optimal Dispatch of Virtual Power Plant considering Correlation of Distributed Generations

    Directory of Open Access Journals (Sweden)

    Jie Yu

    2015-01-01

    Full Text Available Virtual power plant (VPP) is an aggregation of multiple distributed generations, energy storage, and controllable loads. Affected by natural conditions, the uncontrollable distributed generations within a VPP, such as wind and photovoltaic generation, are highly random and mutually correlated. Considering this randomness and correlation, this paper constructs a chance-constrained stochastic optimal dispatch model of the VPP that includes the stochastic variables and their random correlation. The probability distributions of the individual wind and photovoltaic generations are described by empirical distribution functions, and their joint probability density model is established by a Frank copula function. Then, sample average approximation (SAA) is applied to convert the chance-constrained stochastic optimization model into a deterministic optimization model. Simulation cases are calculated based on AIMMS. Simulation results of the proposed model are compared with the results of a deterministic optimization model without stochastic variables and of a stochastic optimization that considers stochastic variables but not their random correlation. Furthermore, this paper analyzes how the SAA sampling frequency and the confidence level influence the results of the stochastic optimization. The numerical results show the effectiveness of the stochastic optimal dispatch of the VPP considering the randomness and correlations of distributed generations.
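
    A small sketch of the two ingredients named in the abstract: scenario generation with correlated marginals, and an SAA check of a chance constraint. For brevity it uses a Gaussian copula and synthetic "historical" data; the paper itself fits a Frank copula to empirical distributions, and the 80 MW limit and rank correlation below are purely illustrative.

    ```python
    # Draw correlated wind/PV scenarios from empirical marginals through a copula,
    # then check a chance constraint by sample average approximation (SAA).
    # Gaussian copula, rho=0.6 and the 80 MW limit are illustrative assumptions.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(2)

    # Pretend these are historical observations defining the empirical marginals.
    wind_hist = rng.weibull(2.0, 1000) * 30.0                   # MW
    pv_hist = np.clip(rng.normal(20.0, 8.0, 1000), 0.0, None)   # MW

    def copula_scenarios(n, rho=0.6):
        z = rng.multivariate_normal([0, 0], [[1, rho], [rho, 1]], size=n)
        u = stats.norm.cdf(z)                    # correlated uniforms
        wind = np.quantile(wind_hist, u[:, 0])   # empirical inverse CDF
        pv = np.quantile(pv_hist, u[:, 1])
        return wind, pv

    wind, pv = copula_scenarios(5000)
    # SAA estimate of P(wind + pv <= 80 MW): the chance constraint is accepted at
    # the 95% level if the empirical frequency is at least 0.95.
    print(np.mean(wind + pv <= 80.0))
    ```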

  8. Optimal Computing Budget Allocation for Particle Swarm Optimization in Stochastic Optimization.

    Science.gov (United States)

    Zhang, Si; Xu, Jie; Lee, Loo Hay; Chew, Ek Peng; Wong, Wai Peng; Chen, Chun-Hung

    2017-04-01

    Particle Swarm Optimization (PSO) is a popular metaheuristic for deterministic optimization. Originating in interpretations of the movement of individuals in a bird flock or fish school, PSO introduces the concepts of personal best and global best to simulate the pattern of searching for food by flocking, and successfully translates this natural phenomenon to the optimization of complex functions. Many real-life applications of PSO cope with stochastic problems. To solve a stochastic problem using PSO, a straightforward approach is to allocate computational effort equally among all particles and obtain the same number of samples of fitness values. This is not an efficient use of the computational budget and leaves considerable room for improvement. This paper proposes a seamless integration of the concept of optimal computing budget allocation (OCBA) into PSO to improve the computational efficiency of PSO for stochastic optimization problems. We derive an asymptotically optimal allocation rule to intelligently determine the number of samples for all particles such that the PSO algorithm can efficiently select the personal best and global best when there is stochastic estimation noise in fitness values. We also propose an easy-to-implement sequential procedure. Numerical tests show that our new approach can obtain much better results using the same amount of computational effort.
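
    For orientation, the sketch below implements the classic OCBA allocation rule (sampling ratios proportional to (sigma_i/delta_i)^2 for non-best designs), which is the concept the paper integrates into PSO; the coupling with personal/global best selection is omitted and the numbers are illustrative.

    ```python
    # Classic OCBA allocation rule (in the spirit of Chen et al.): decide how many
    # extra fitness samples each design/particle receives from a fixed budget.
    # Inputs below are illustrative sample means and standard deviations.
    import numpy as np

    def ocba_allocation(means, stds, budget):
        """Return integer sample counts summing approximately to `budget`."""
        means, stds = np.asarray(means, float), np.asarray(stds, float)
        n = len(means)
        b = int(np.argmin(means))                 # current best design (minimization)
        delta = means - means[b]
        ratio = np.zeros(n)
        ref = next(i for i in range(n) if i != b)
        # N_i / N_ref = (sigma_i / delta_i)^2 / (sigma_ref / delta_ref)^2, i != b
        for i in range(n):
            if i != b:
                ratio[i] = (stds[i] / delta[i]) ** 2 / (stds[ref] / delta[ref]) ** 2
        # N_b = sigma_b * sqrt(sum_{i != b} (N_i / sigma_i)^2)
        others = np.arange(n) != b
        ratio[b] = stds[b] * np.sqrt(np.sum((ratio[others] / stds[others]) ** 2))
        alloc = budget * ratio / ratio.sum()
        return np.maximum(np.round(alloc).astype(int), 1)

    print(ocba_allocation(means=[1.0, 1.2, 2.0, 3.0], stds=[0.8, 0.9, 0.5, 0.4], budget=200))
    ```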

  9. Stochastic optimization-based study of dimerization kinetics

    Indian Academy of Sciences (India)

    To this end, we study the dimerization kinetics of a protein as a model system. We follow the dimerization kinetics using a stochastic simulation algorithm. Keywords: optimization; dimerization kinetics; sensitivity analysis; stochastic simulation.

  10. Sequential stochastic optimization

    CERN Document Server

    Cairoli, Renzo

    1996-01-01

    Sequential Stochastic Optimization provides mathematicians and applied researchers with a well-developed framework in which stochastic optimization problems can be formulated and solved. Offering much material that is either new or has never before appeared in book form, it lucidly presents a unified theory of optimal stopping and optimal sequential control of stochastic processes. This book has been carefully organized so that little prior knowledge of the subject is assumed; its only prerequisites are a standard graduate course in probability theory and some familiarity with discrete-paramet

  11. Optimal condition-based maintenance decisions for systems with dependent stochastic degradation of components

    International Nuclear Information System (INIS)

    Hong, H.P.; Zhou, W.; Zhang, S.; Ye, W.

    2014-01-01

    Components in engineered systems are subjected to stochastic deterioration due to operating environmental conditions and uncertainty in material properties. The components need to be inspected and possibly replaced, based on preventive or failure replacement criteria, to provide the intended and safe operation of the system. In the present study, we investigate the influence of dependent stochastic degradation of multiple components on the optimal maintenance decisions. We use a copula to model the dependent stochastic degradation of components, and formulate the optimal decision problem based on the minimum expected cost rule and stochastic dominance rules. The latter are used to cope with the decision maker's risk attitude. We illustrate the developed probabilistic analysis approach and the influence of the dependency of the stochastic degradation on the preferred decisions through numerical examples.
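
    To make the "minimum expected cost rule" concrete, the sketch below evaluates the long-run cost rate of a threshold-based preventive replacement policy for a single component with gamma-process degradation, using renewal-reward Monte Carlo. All parameters are assumptions for illustration, and the copula-coupled multi-component dependence studied in the paper is not reproduced here.

    ```python
    # Expected long-run cost rate of a periodic-inspection, threshold-based
    # replacement policy for one component with gamma-process degradation.
    # Increments, costs and thresholds are illustrative assumptions.
    import numpy as np

    rng = np.random.default_rng(3)

    def cost_rate(prev_threshold, fail_level=10.0, dt=1.0, shape=0.5, scale=1.0,
                  c_insp=1.0, c_prev=10.0, c_fail=50.0, n_paths=2000):
        total_cost, total_time = 0.0, 0.0
        for _ in range(n_paths):
            x, t = 0.0, 0.0
            while True:
                x += rng.gamma(shape * dt, scale)   # stochastic degradation increment
                t += dt
                total_cost += c_insp                # inspection at every period
                if x >= fail_level:                 # failure replacement
                    total_cost += c_fail
                    break
                if x >= prev_threshold:             # preventive replacement
                    total_cost += c_prev
                    break
            total_time += t
        return total_cost / total_time              # renewal-reward cost rate

    # Pick the preventive threshold with minimum expected cost per unit time.
    thresholds = np.linspace(4.0, 9.5, 12)
    rates = [cost_rate(m) for m in thresholds]
    print(thresholds[int(np.argmin(rates))])
    ```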

  12. An efficient scenario-based stochastic programming framework for multi-objective optimal micro-grid operation

    International Nuclear Information System (INIS)

    Niknam, Taher; Azizipanah-Abarghooee, Rasoul; Narimani, Mohammad Rasoul

    2012-01-01

    Highlights: ► Proposes a stochastic model for optimal energy management. ► Considers uncertainties related to the forecasted values for load demand. ► Considers uncertainties of forecasted values of output power of wind and photovoltaic units. ► Considers uncertainties of forecasted values of market price. ► Presents an improved multi-objective teaching–learning-based optimization. -- Abstract: This paper proposes a stochastic model for optimal energy management with the goal of cost and emission minimization. In this model, the uncertainties related to the forecasted values of load demand, available output power of wind and photovoltaic units, and market price are modeled by scenario-based stochastic programming. In the presented method, scenarios are generated by a roulette wheel mechanism based on the probability distribution functions of the input random variables. Through this method, the inherently stochastic problem is decomposed into a deterministic equivalent problem. An improved multi-objective teaching–learning-based optimization is implemented to yield the best expected Pareto optimal front. In the proposed stochastic optimization method, a novel self-adaptive probabilistic modification strategy is offered to improve the performance of the presented algorithm. Also, a set of non-dominated solutions is stored in a repository during the simulation process. Meanwhile, the size of the repository is controlled by the use of a fuzzy-based clustering technique. The best expected compromise solution stored in the repository is selected via a niching mechanism in such a way that solutions are encouraged to seek the lesser explored regions. The proposed framework is applied to a typical grid-connected micro grid in order to verify its efficiency and feasibility.
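
    The roulette wheel mechanism mentioned above can be sketched as follows: discretize each input distribution, spin a uniform random number against the cumulative probabilities to pick a level, and optionally reduce the scenario set to the most frequent distinct outcomes. The three-level discretization and its probabilities are assumptions for illustration only.

    ```python
    # Roulette-wheel scenario generation from discretized probability
    # distributions, as commonly used in scenario-based stochastic programming.
    import numpy as np
    from collections import Counter

    rng = np.random.default_rng(4)

    # Discretized forecast errors (per-unit deviations) and their probabilities.
    levels = np.array([-0.1, 0.0, 0.1])
    probs = {"load": [0.2, 0.6, 0.2], "wind": [0.3, 0.4, 0.3], "price": [0.25, 0.5, 0.25]}

    def roulette(p):
        """Spin the roulette wheel: return an index drawn according to p."""
        return int(np.searchsorted(np.cumsum(p), rng.random()))

    def generate_scenarios(n):
        return [tuple(levels[roulette(probs[k])] for k in ("load", "wind", "price"))
                for _ in range(n)]

    scenarios = generate_scenarios(1000)
    # Crude reduction: keep the most frequent distinct scenarios and renormalize.
    counts = Counter(scenarios).most_common(5)
    total = sum(c for _, c in counts)
    for s, c in counts:
        print(s, round(c / total, 3))
    ```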

  13. Statistical surrogate model based sampling criterion for stochastic global optimization of problems with constraints

    Energy Technology Data Exchange (ETDEWEB)

    Cho, Su Gil; Jang, Jun Yong; Kim, Ji Hoon; Lee, Tae Hee [Hanyang University, Seoul (Korea, Republic of); Lee, Min Uk [Romax Technology Ltd., Seoul (Korea, Republic of); Choi, Jong Su; Hong, Sup [Korea Research Institute of Ships and Ocean Engineering, Daejeon (Korea, Republic of)

    2015-04-15

    Sequential surrogate model-based global optimization algorithms, such as super-EGO, have been developed to increase the efficiency of commonly used global optimization techniques as well as to ensure the accuracy of the optimization. However, earlier studies have drawbacks because their optimization loops involve three phases and rely on empirical parameters. We propose a united sampling criterion to simplify the algorithm and to achieve the global optimum of problems with constraints without any empirical parameters. The criterion is able to select points located in the feasible region with high model uncertainty as well as points along the constraint boundary at the lowest objective value. The mean squared error determines which criterion is more dominant, the infill sampling criterion or the boundary sampling criterion. Also, the method guarantees the accuracy of the surrogate model because, unlike super-EGO, the sample points are not concentrated within extremely small regions. The performance of the proposed method, such as the solvability of a problem, convergence properties, and efficiency, is validated through nonlinear numerical examples with disconnected feasible regions.
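
    A simplified sketch of surrogate-based infill sampling in the spirit described above: fit a Gaussian-process surrogate, then choose the next expensive evaluation either where the predicted uncertainty (MSE) is largest inside the feasible region or where the predicted objective is lowest among feasible candidates. This is a generic EGO-style stand-in, not the authors' exact united criterion; the test functions and the 0.05 uncertainty switch are assumptions.

    ```python
    # Generic surrogate-based infill step: explore where the GP is uncertain,
    # otherwise exploit the best predicted feasible point. Illustrative only.
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF

    def expensive_objective(x):      # stand-in for a costly simulation
        return (x - 0.6) ** 2

    def constraint(x):               # feasible when g(x) <= 0
        return 0.2 - x

    X = np.array([[0.0], [0.4], [1.0]])
    y = expensive_objective(X).ravel()
    gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.3)).fit(X, y)

    cand = np.linspace(0.0, 1.0, 201).reshape(-1, 1)
    mu, sd = gp.predict(cand, return_std=True)
    feasible = constraint(cand).ravel() <= 0.0

    if sd[feasible].max() > 0.05:            # surrogate still uncertain: explore
        nxt = cand[feasible][np.argmax(sd[feasible])]
    else:                                    # surrogate trusted: exploit best mean
        nxt = cand[feasible][np.argmin(mu[feasible])]
    print(float(nxt))
    ```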

  14. Optimal Control and Optimization of Stochastic Supply Chain Systems

    CERN Document Server

    Song, Dong-Ping

    2013-01-01

    Optimal Control and Optimization of Stochastic Supply Chain Systems examines its subject in the context of the presence of a variety of uncertainties. Numerous examples with intuitive illustrations and tables are provided, to demonstrate the structural characteristics of the optimal control policies in various stochastic supply chains and to show how to make use of these characteristics to construct easy-to-operate sub-optimal policies. In Part I, a general introduction to stochastic supply chain systems is provided. Analytical models for various stochastic supply chain systems are formulated and analysed in Part II. In Part III the structural knowledge of the optimal control policies obtained in Part II is utilized to construct easy-to-operate sub-optimal control policies for various stochastic supply chain systems accordingly. Finally, Part IV discusses the optimisation of threshold-type control policies and their robustness. A key feature of the book is its tying together of ...

  15. Portfolio Optimization with Stochastic Dividends and Stochastic Volatility

    Science.gov (United States)

    Varga, Katherine Yvonne

    2015-01-01

    We consider an optimal investment-consumption portfolio optimization model in which an investor receives stochastic dividends. As a first problem, we allow the drift of stock price to be a bounded function. Next, we consider a stochastic volatility model. In each problem, we use the dynamic programming method to derive the Hamilton-Jacobi-Bellman…

  16. Stochastic optimization: beyond mathematical programming

    CERN Multimedia

    CERN. Geneva

    2015-01-01

    Stochastic optimization, among which bio-inspired algorithms, is gaining momentum in areas where more classical optimization algorithms fail to deliver satisfactory results, or simply cannot be directly applied. This presentation will introduce baseline stochastic optimization algorithms, and illustrate their efficiency in different domains, from continuous non-convex problems to combinatorial optimization problem, to problems for which a non-parametric formulation can help exploring unforeseen possible solution spaces.

  17. Stochastic optimization of a multi-feedstock lignocellulosic-based bioethanol supply chain under multiple uncertainties

    International Nuclear Information System (INIS)

    Osmani, Atif; Zhang, Jun

    2013-01-01

    An integrated multi-feedstock (i.e. switchgrass and crop residue) lignocellulosic-based bioethanol supply chain is studied under jointly occurring uncertainties in switchgrass yield, crop residue purchase price, bioethanol demand and sales price. A two-stage stochastic mathematical model is proposed to maximize expected profit by optimizing the strategic and tactical decisions. A case study based on ND (North Dakota) state in the U.S. demonstrates that in a stochastic environment it is cost effective to meet 100% of ND's annual gasoline demand from bioethanol by using switchgrass as a primary and crop residue as a secondary biomass feedstock. Although results show that the financial performance is degraded as variability of the uncertain parameters increases, the proposed stochastic model increasingly outperforms the deterministic model under uncertainties. The locations of biorefineries (i.e. first-stage integer variables) are insensitive to the uncertainties. Sensitivity analysis shows that “mean” value of stochastic parameters has a significant impact on the expected profit and optimal values of first-stage continuous variables. Increase in level of mean ethanol demand and mean sale price results in higher bioethanol production. When mean switchgrass yield is at low level and mean crop residue price is at high level, all the available marginal land is used for switchgrass cultivation. - Highlights: • Two-stage stochastic MILP model for maximizing profit of a multi-feedstock lignocellulosic-based bioethanol supply chain. • Multiple uncertainties in switchgrass yield, crop residue purchase price, bioethanol demand, and bioethanol sale price. • Proposed stochastic model outperforms the traditional deterministic model under uncertainties. • Stochastic parameters significantly affect marginal land allocation for switchgrass cultivation and bioethanol production. • Location of biorefineries is found to be insensitive to the stochastic environment

  18. Scenario-based stochastic optimal operation of wind, photovoltaic, pump-storage hybrid system in frequency- based pricing

    International Nuclear Information System (INIS)

    Zare Oskouei, Morteza; Sadeghi Yazdankhah, Ahmad

    2015-01-01

    Highlights: • A two-stage objective function is proposed for the optimization problem. • The hourly-based optimal contractual agreement is calculated. • A scenario-based stochastic optimization problem is solved. • System frequency is improved by utilizing the PSH unit. - Abstract: This paper proposes an operating strategy for a micro-grid-connected wind farm, photovoltaic, and pump-storage hybrid system. The strategy consists of two stages. In the first stage, the optimal hourly contractual agreement is determined. The second stage corresponds to maximizing the system's profit by adapting the energy management strategy of the wind and photovoltaic units in coordination with the optimal operating schedule of the storage device under frequency-based pricing for a day-ahead electricity market. The pump-storage hydro plant is utilized to minimize unscheduled interchange flow and maximize the system benefit by participating in frequency control based on the energy price. Because of the uncertainties in the power generation of renewable sources and in market prices, generation scheduling is modeled as a stochastic optimization problem. Uncertainties of parameters are modeled by a scenario generation and scenario reduction method. The optimization problem is solved using the General Algebraic Modeling System (GAMS) with CPLEX. In order to verify the efficiency of the method, the algorithm is applied to various scenarios with different wind and photovoltaic power productions in a day-ahead electricity market. The numerical results demonstrate the effectiveness of the proposed approach.

  19. Optimal Control for Stochastic Delay Evolution Equations

    Energy Technology Data Exchange (ETDEWEB)

    Meng, Qingxin, E-mail: mqx@hutc.zj.cn [Huzhou University, Department of Mathematical Sciences (China); Shen, Yang, E-mail: skyshen87@gmail.com [York University, Department of Mathematics and Statistics (Canada)

    2016-08-15

    In this paper, we investigate a class of infinite-dimensional optimal control problems, where the state equation is given by a stochastic delay evolution equation with random coefficients, and the corresponding adjoint equation is given by an anticipated backward stochastic evolution equation. We first prove the continuous dependence theorems for stochastic delay evolution equations and anticipated backward stochastic evolution equations, and show the existence and uniqueness of solutions to anticipated backward stochastic evolution equations. Then we establish necessary and sufficient conditions for optimality of the control problem in the form of Pontryagin’s maximum principles. To illustrate the theoretical results, we apply stochastic maximum principles to study two examples, an infinite-dimensional linear-quadratic control problem with delay and an optimal control of a Dirichlet problem for a stochastic partial differential equation with delay. Further applications of the two examples to a Cauchy problem for a controlled linear stochastic partial differential equation and an optimal harvesting problem are also considered.

  20. Dynamic stochastic optimization

    CERN Document Server

    Ermoliev, Yuri; Pflug, Georg

    2004-01-01

    Uncertainties and changes are pervasive characteristics of modern systems involving interactions between humans, economics, nature and technology. These systems are often too complex to allow for precise evaluations and, as a result, the lack of proper management (control) may create significant risks. In order to develop robust strategies we need approaches which explicitly deal with uncertainties, risks and changing conditions. One rather general approach is to characterize (explicitly or implicitly) uncertainties by objective or subjective probabilities (measures of confidence or belief). This leads us to stochastic optimization problems which can rarely be solved by using the standard deterministic optimization and optimal control methods. In stochastic optimization the accent is on problems with a large number of decision and random variables, and consequently the focus of attention is directed to efficient solution procedures rather than to (analytical) closed-form solutions. Objective an...

  1. Economic and environmental optimization of a large scale sustainable dual feedstock lignocellulosic-based bioethanol supply chain in a stochastic environment

    International Nuclear Information System (INIS)

    Osmani, Atif; Zhang, Jun

    2014-01-01

    Highlights: • 2-Stage stochastic MILP model for optimizing the performance of a sustainable lignocellulosic-based biofuel supply chain. • Multiple uncertainties in biomass supply, purchase price of biomass, bioethanol demand, and sale price of bioethanol. • Stochastic parameters significantly impact the allocation of biomass processing capacities of biorefineries. • Location of biorefineries and choice of conversion technology is found to be insensitive to the stochastic environment. • Use of Sample Average Approximation (SAA) algorithm as a decomposition technique. - Abstract: This work proposes a two-stage stochastic optimization model to maximize the expected profit and simultaneously minimize carbon emissions of a dual-feedstock lignocellulosic-based bioethanol supply chain (LBSC) under uncertainties in supply, demand and prices. The model decides the optimal first-stage decisions and the expected values of the second-stage decisions. A case study based on a 4-state Midwestern region in the US demonstrates the effectiveness of the proposed stochastic model over a deterministic model under uncertainties. Two regional modes are considered for the geographic scale of the LBSC. Under the co-operation mode the 4 states are considered as a combined region, while under the stand-alone mode each of the 4 states is considered as an individual region. Each state under the co-operation mode achieves better financial and environmental outcomes than under the stand-alone mode. Uncertainty has a significant impact on the biomass processing capacity of biorefineries, whereas the location and the choice of conversion technology for biorefineries, i.e. biochemical vs. thermochemical, are insensitive to the stochastic environment. As the variability of the stochastic parameters increases, the financial and environmental performance is degraded. Sensitivity analysis shows that the levels of tax credit and carbon price have a major impact on the choice of conversion technology for a selected...

  2. A Smoothing Algorithm for a New Two-Stage Stochastic Model of Supply Chain Based on Sample Average Approximation

    Directory of Open Access Journals (Sweden)

    Liu Yang

    2017-01-01

    Full Text Available We construct a new two-stage stochastic model of a supply chain with multiple factories and distributors for a perishable product. By introducing a second-order stochastic dominance (SSD) constraint, we can describe the preference consistency of the risk taker while minimizing the expected cost of the company. To solve this problem, we equivalently convert it into a one-stage stochastic model; then we use the sample average approximation (SAA) method to approximate the expected values of the underlying random functions. A smoothing approach is proposed with which we can obtain the global solution and avoid introducing new variables and constraints. Meanwhile, we investigate the convergence of the optimal value of the transformed model and show that, with probability approaching one at an exponential rate, the optimal value converges to its true counterpart as the sample size increases. Numerical results show the effectiveness of the proposed algorithm and analysis.
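
    The SAA step described above, replacing an expectation by a sample average before optimizing, can be illustrated on a toy two-stage (newsvendor-type) problem; the costs and demand distribution below are assumptions, and the paper's multi-factory model with the SSD constraint is of course much richer.

    ```python
    # Sample average approximation for a toy two-stage problem: first-stage order
    # quantity x, second-stage recourse cost for shortages and leftovers.
    import numpy as np

    rng = np.random.default_rng(5)
    c, p, h = 1.0, 4.0, 0.5          # order cost, shortage penalty, holding cost

    def saa_objective(x, demand_samples):
        shortage = np.maximum(demand_samples - x, 0.0)
        leftover = np.maximum(x - demand_samples, 0.0)
        # Sample average of first-stage + expected second-stage (recourse) cost.
        return c * x + np.mean(p * shortage + h * leftover)

    demand = rng.gamma(shape=4.0, scale=25.0, size=20000)   # N demand scenarios
    grid = np.linspace(0.0, 300.0, 601)
    values = [saa_objective(x, demand) for x in grid]
    x_saa = grid[int(np.argmin(values))]

    # As the sample size grows, the SAA minimizer converges to the true optimum,
    # which for this newsvendor is the (p - c)/(p + h) quantile of demand.
    print(x_saa, np.quantile(demand, (p - c) / (p + h)))
    ```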

  3. Stochastic Optimization of Wind Turbine Power Factor Using Stochastic Model of Wind Power

    DEFF Research Database (Denmark)

    Chen, Peiyuan; Siano, Pierluigi; Bak-Jensen, Birgitte

    2010-01-01

    This paper proposes a stochastic optimization algorithm that aims to minimize the expectation of the system power losses by controlling wind turbine (WT) power factors. This objective of the optimization is subject to the probability constraints of bus voltage and line current requirements. The optimization algorithm utilizes the stochastic models of wind power generation (WPG) and load demand to take into account their stochastic variation. The stochastic model of WPG is developed on the basis of a limited autoregressive integrated moving average (LARIMA) model by introducing a cross-correlation structure to the LARIMA model. The proposed stochastic optimization is carried out on a 69-bus distribution system. Simulation results confirm that, under various combinations of WPG and load demand, the system power losses are considerably reduced with the optimal setting of WT power factor as compared...

  4. The optimization model for multi-type customers assisting wind power consumptive considering uncertainty and demand response based on robust stochastic theory

    International Nuclear Information System (INIS)

    Tan, Zhongfu; Ju, Liwei; Reed, Brent; Rao, Rao; Peng, Daoxin; Li, Huanhuan; Pan, Ge

    2015-01-01

    Highlights: • Our research focuses on demand response behaviors of multi-type customers. • A wind power simulation method is proposed based on Brownian motion theory. • Demand response revenue functions are proposed for multi-type customers. • A robust stochastic optimization model is proposed for wind power consumption. • Models are built to measure the impacts of demand response on wind power consumption. - Abstract: In order to relieve the influence of wind power uncertainty on power system operation, demand response and robust stochastic theory are introduced to build a stochastic scheduling optimization model. Firstly, this paper presents a simulation method for wind power that considers the external environment, based on Brownian motion theory. Secondly, price-based demand response and incentive-based demand response are introduced to build the demand response model. Thirdly, the paper constructs demand response revenue functions for electric vehicle customers, business customers, industry customers and residential customers. Furthermore, robust stochastic optimization theory is introduced to build a wind power consumption stochastic optimization model. Finally, a simulation analysis is carried out on an IEEE 36-node, 10-unit system connected with 650 MW of wind farms. The results show that robust stochastic optimization theory is better able to overcome wind power uncertainty and that demand response can improve the system's wind power consumption capability. Besides, price-based demand response can reshape customers' load demand distribution, but its load curtailment capacity is not as pronounced as that of incentive-based demand response. Since price-based demand response cannot shift customers' load demand to the same extent as incentive-based demand response, the best comprehensive optimization effect is reached when incentive-based demand response and price-based demand response are both introduced.

  5. Computing Optimal Stochastic Portfolio Execution Strategies: A Parametric Approach Using Simulations

    Science.gov (United States)

    Moazeni, Somayeh; Coleman, Thomas F.; Li, Yuying

    2010-09-01

    Computing optimal stochastic portfolio execution strategies under appropriate risk consideration presents a great computational challenge. We investigate a parametric approach for computing optimal stochastic strategies using Monte Carlo simulations. This approach allows a reduction in computational complexity by computing coefficients for a parametric representation of a stochastic dynamic strategy based on static optimization. Using this technique, constraints can be similarly handled using appropriate penalty functions. We illustrate the proposed approach by minimizing the expected execution cost and Conditional Value-at-Risk (CVaR).
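
    The CVaR objective mentioned above is typically estimated from simulated scenarios via the Rockafellar-Uryasev representation CVaR_a(L) = min_t { t + E[(L - t)^+] / (1 - a) }, with the sample average in place of the expectation. The sketch below uses a standard normal loss sample purely as an illustrative check of the numbers.

    ```python
    # Monte Carlo estimate of CVaR via the Rockafellar-Uryasev formula; the loss
    # distribution is an illustrative assumption.
    import numpy as np

    rng = np.random.default_rng(6)
    alpha = 0.95
    losses = rng.normal(0.0, 1.0, 100000)        # simulated execution costs/losses

    t = np.quantile(losses, alpha)               # VaR is the minimizing t
    cvar = t + np.mean(np.maximum(losses - t, 0.0)) / (1 - alpha)
    print(t, cvar)                               # for N(0,1): VaR ~ 1.645, CVaR ~ 2.06
    ```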

  6. Stochastic quasi-gradient based optimization algorithms for dynamic reliability applications

    International Nuclear Information System (INIS)

    Bourgeois, F.; Labeau, P.E.

    2001-01-01

    On one hand, PSA results are increasingly used in decision making, system management and optimization of system design. On the other hand, when severe accidental transients are considered, dynamic reliability appears appropriate to account for the complex interaction between the transitions between hardware configurations, the operator behavior and the dynamic evolution of the system. This paper presents an exploratory work in which the estimation of the system unreliability in a dynamic context is coupled with an optimization algorithm to determine the 'best' safety policy. Because some reliability parameters are likely to be distributed, the cost function to be minimized turns out to be a random variable. Stochastic programming techniques are therefore envisioned to determine an optimal strategy. Monte Carlo simulation is used at all stages of the computations, from the estimation of the system unreliability to that of the stochastic quasi-gradient. The optimization algorithm is illustrated on an HNO3 supply system.

  7. Multistage stochastic optimization

    CERN Document Server

    Pflug, Georg Ch

    2014-01-01

    Multistage stochastic optimization problems appear in many ways in finance, insurance, energy production and trading, logistics and transportation, among other areas. They describe decision situations under uncertainty and with a longer planning horizon. This book contains a comprehensive treatment of today’s state of the art in multistage stochastic optimization.  It covers the mathematical backgrounds of approximation theory as well as numerous practical algorithms and examples for the generation and handling of scenario trees. A special emphasis is put on estimation and bounding of the modeling error using novel distance concepts, on time consistency and the role of model ambiguity in the decision process. An extensive treatment of examples from electricity production, asset liability management and inventory control concludes the book

  8. Essays on variational approximation techniques for stochastic optimization problems

    Science.gov (United States)

    Deride Silva, Julio A.

    This dissertation presents five essays on approximation and modeling techniques, based on variational analysis, applied to stochastic optimization problems. It is divided into two parts, where the first is devoted to equilibrium problems and maxinf optimization, and the second corresponds to two essays in statistics and uncertainty modeling. Stochastic optimization lies at the core of this research as we were interested in relevant equilibrium applications that contain an uncertain component, and the design of a solution strategy. In addition, every stochastic optimization problem relies heavily on the underlying probability distribution that models the uncertainty. We studied these distributions, in particular, their design process and theoretical properties such as their convergence. Finally, the last aspect of stochastic optimization that we covered is the scenario creation problem, in which we described a procedure based on a probabilistic model to create scenarios for the applied problem of power estimation of renewable energies. In the first part, Equilibrium problems and maxinf optimization, we considered three Walrasian equilibrium problems: from economics, we studied a stochastic general equilibrium problem in a pure exchange economy, described in Chapter 3, and a stochastic general equilibrium with financial contracts, in Chapter 4; finally from engineering, we studied an infrastructure planning problem in Chapter 5. We stated these problems as belonging to the maxinf optimization class and, in each instance, we provided an approximation scheme based on the notion of lopsided convergence and non-concave duality. This strategy is the foundation of the augmented Walrasian algorithm, whose convergence is guaranteed by lopsided convergence, that was implemented computationally, obtaining numerical results for relevant examples. The second part, Essays about statistics and uncertainty modeling, contains two essays covering a convergence problem for a sequence

  9. Sparse Learning with Stochastic Composite Optimization.

    Science.gov (United States)

    Zhang, Weizhong; Zhang, Lijun; Jin, Zhongming; Jin, Rong; Cai, Deng; Li, Xuelong; Liang, Ronghua; He, Xiaofei

    2017-06-01

    In this paper, we study Stochastic Composite Optimization (SCO) for sparse learning, which aims to learn a sparse solution from a composite function. Most of the recent SCO algorithms have already reached the optimal expected convergence rate O(1/λT), but they often fail to deliver sparse solutions at the end, either due to the limited sparsity regularization during stochastic optimization (SO) or due to the limitation in online-to-batch conversion. Even when the objective function is strongly convex, their high-probability bounds can only attain O(√(log(1/δ)/T)), where δ is the failure probability, which is much worse than the expected convergence rate. To address these limitations, we propose a simple yet effective two-phase Stochastic Composite Optimization scheme by adding a novel, powerful sparse online-to-batch conversion to general Stochastic Optimization algorithms. We further develop three concrete algorithms, OptimalSL, LastSL and AverageSL, directly under our scheme to prove the effectiveness of the proposed scheme. Both the theoretical analysis and the experimental results show that our methods can outperform the existing methods in sparse learning ability while improving the high-probability bound to approximately O(log(log(T)/δ)/λT).

  10. Optimal Liquidation under Stochastic Liquidity

    OpenAIRE

    Becherer, Dirk; Bilarev, Todor; Frentrup, Peter

    2016-01-01

    We solve explicitly a two-dimensional singular control problem of finite fuel type for infinite time horizon. The problem stems from the optimal liquidation of an asset position in a financial market with multiplicative and transient price impact. Liquidity is stochastic in that the volume effect process, which determines the inter-temporal resilience of the market in spirit of Predoiu, Shaikhet and Shreve (2011), is taken to be stochastic, being driven by own random noise. The optimal contro...

  11. Stochastic optimal control, forward-backward stochastic differential equations and the Schroedinger equation

    Energy Technology Data Exchange (ETDEWEB)

    Paul, Wolfgang; Koeppe, Jeanette [Institut fuer Physik, Martin Luther Universitaet, 06099 Halle (Germany); Grecksch, Wilfried [Institut fuer Mathematik, Martin Luther Universitaet, 06099 Halle (Germany)

    2016-07-01

    The standard approach to solving a non-relativistic quantum problem is through the analytical or numerical solution of the Schroedinger equation. We show a way to go around it. This way is based on the derivation of the Schroedinger equation from conservative diffusion processes and the establishment of (several) stochastic variational principles leading to the Schroedinger equation under the assumption of a kinematics described by Nelson's diffusion processes. Mathematically, the variational principle can be considered as a stochastic optimal control problem linked to the forward-backward stochastic differential equations of Nelson's stochastic mechanics. The Hamilton-Jacobi-Bellman equation of this control problem is the Schroedinger equation. We present the mathematical background and how to turn it into a numerical scheme for analyzing a quantum system without using the Schroedinger equation, and exemplify the approach for a simple 1d problem.

  12. Optimal Control Inventory Stochastic With Production Deteriorating

    Science.gov (United States)

    Affandi, Pardi

    2018-01-01

    In this paper, we use an optimal control approach to determine the optimal production rate. Most inventory production models deal with a single item. We first build the stochastic mathematical inventory model; in this model we also assume that the items are held in the same store. The mathematical model of the inventory problem can be deterministic or stochastic. In this research we discuss how to formulate the stochastic model as well as how to solve the inventory model using optimal control techniques. The main tool in deriving the necessary optimality conditions, in the form of the Pontryagin maximum principle, is the Hamiltonian function. With it we obtain the optimal production rate in a production inventory system where items are subject to deterioration.

  13. Convergence of Sample Path Optimal Policies for Stochastic Dynamic Programming

    National Research Council Canada - National Science Library

    Fu, Michael C; Jin, Xing

    2005-01-01

    These results have practical implications for Monte Carlo simulation-based solution approaches to stochastic dynamic programming problems where it is impractical to extract the explicit transition...

  14. Optimization of stochastic discrete systems and control on complex networks: computational networks

    CERN Document Server

    Lozovanu, Dmitrii

    2014-01-01

    This book presents the latest findings on stochastic dynamic programming models and on solving optimal control problems in networks. It includes the authors' new findings on determining the optimal solution of discrete optimal control problems in networks and on solving game variants of Markov decision problems in the context of computational networks. First, the book studies the finite state space of Markov processes and reviews the existing methods and algorithms for determining the main characteristics in Markov chains, before proposing new approaches based on dynamic programming and combinatorial methods. Chapter two is dedicated to infinite horizon stochastic discrete optimal control models and Markov decision problems with average and expected total discounted optimization criteria, while Chapter three develops a special game-theoretical approach to Markov decision processes and stochastic discrete optimal control problems. In closing, the book's final chapter is devoted to finite horizon stochastic con...

  15. Approximative solutions of stochastic optimization problem

    Czech Academy of Sciences Publication Activity Database

    Lachout, Petr

    2010-01-01

    Vol. 46, No. 3 (2010), pp. 513-523, ISSN 0023-5954. R&D Projects: GA ČR GA201/08/0539. Institutional research plan: CEZ:AV0Z10750506. Keywords: stochastic optimization problem * sensitivity * approximative solution. Subject RIV: BA - General Mathematics. Impact factor: 0.461, year: 2010. http://library.utia.cas.cz/separaty/2010/SI/lachout-approximative solutions of stochastic optimization problem.pdf

  16. Optimal control of stochastic difference Volterra equations an introduction

    CERN Document Server

    Shaikhet, Leonid

    2015-01-01

    This book showcases a subclass of hereditary systems, that is, systems with behaviour depending not only on their current state but also on their past history; it is an introduction to the mathematical theory of optimal control for stochastic difference Volterra equations of neutral type. As such, it will be of much interest to researchers interested in modelling processes in physics, mechanics, automatic regulation, economics and finance, biology, sociology and medicine for all of which such equations are very popular tools. The text deals with problems of optimal control such as meeting given performance criteria, and stabilization, extending them to neutral stochastic difference Volterra equations. In particular, it contrasts the difference analogues of solutions to optimal control and optimal estimation problems for stochastic integral Volterra equations with optimal solutions for corresponding problems in stochastic difference Volterra equations. Optimal Control of Stochastic Difference Volterra Equation...

  17. Risk-Based Two-Stage Stochastic Optimization Problem of Micro-Grid Operation with Renewables and Incentive-Based Demand Response Programs

    Directory of Open Access Journals (Sweden)

    Pouria Sheikhahmadi

    2018-03-01

    Full Text Available The operation problem of a micro-grid (MG in grid-connected mode is an optimization one in which the main objective of the MG operator (MGO is to minimize the operation cost with optimal scheduling of resources and optimal trading energy with the main grid. The MGO can use incentive-based demand response programs (DRPs to pay an incentive to the consumers to change their demands in the peak hours. Moreover, the MGO forecasts the output power of renewable energy resources (RERs and models their uncertainties in its problem. In this paper, the operation problem of an MGO is modeled as a risk-based two-stage stochastic optimization problem. To model the uncertainties of RERs, two-stage stochastic programming is considered and conditional value at risk (CVaR index is used to manage the MGO’s risk-level. Moreover, the non-linear economic models of incentive-based DRPs are used by the MGO to change the peak load. The numerical studies are done to investigate the effect of incentive-based DRPs on the operation problem of the MGO. Moreover, to show the effect of the risk-averse parameter on MGO decisions, a sensitivity analysis is carried out.

  18. COOMA: AN OBJECT-ORIENTED STOCHASTIC OPTIMIZATION ALGORITHM

    Directory of Open Access Journals (Sweden)

    Stanislav Alexandrovich Tavridovich

    2017-09-01

    Full Text Available Stochastic optimization methods such as the genetic algorithm, the particle swarm optimization algorithm, and others are successfully used to solve optimization problems. They are all based on similar ideas and need minimal adaptation when being implemented. But several factors complicate the application of stochastic search methods in practice: multimodality of the objective function, optimization with constraints, finding the best parameter configuration of the algorithm, the growth of the search space, etc. This paper proposes a new Cascade Object Optimization and Modification Algorithm (COOMA) which develops the best ideas of known stochastic optimization methods and can be applied to a wide variety of real-world problems described in terms of object-oriented models with practically any types of parameters, variables, and associations between objects. The objects of different classes are organized in pools, and the pools form a hierarchical structure according to the associations between classes. The algorithm is also executed according to the pool structure: the methods of the upper-level pools, before changing their objects, call the analogous methods of all their subpools. The algorithm starts with an initialization step and then passes through a number of iterations during which the objects are modified until the stop criteria are satisfied. The objects are modified using movement, replication and mutation operations. A two-level version of COOMA realizes a built-in self-adaptive mechanism. The optimization statistics for a number of test problems show that COOMA is able to solve multi-level problems (with objects of different associated classes), problems with multimodal fitness functions, and systems of constraints. The COOMA source code in Java is available on request.
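
    The movement, replication and mutation operations named above are generic population-based search steps. The sketch below shows them on a flat pool of numeric candidates; it is not COOMA itself (no object-oriented pools or cascade structure), and the operator rates and test function are assumptions.

    ```python
    # Generic pool-based stochastic optimizer using movement, replication and
    # mutation operations; pool size, rates and test function are illustrative.
    import numpy as np

    rng = np.random.default_rng(7)

    def fitness(x):                  # multimodal test function (Rastrigin-like)
        return np.sum(x**2 - 10.0 * np.cos(2 * np.pi * x) + 10.0, axis=-1)

    pool = rng.uniform(-5.12, 5.12, size=(40, 2))
    for _ in range(300):
        f = fitness(pool)
        order = np.argsort(f)
        best, worst = pool[order[:10]], order[-10:]
        # Replication: copy the best objects over the worst ones.
        pool[worst] = best
        # Movement: drift every object slightly toward the current best object.
        pool += 0.1 * (pool[order[0]] - pool)
        # Mutation: small random perturbation keeps the pool diverse.
        pool += 0.05 * rng.standard_normal(pool.shape)

    print(pool[np.argmin(fitness(pool))])   # should approach the global minimum at 0
    ```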

  19. A Stochastic Multiobjective Optimization Framework for Wireless Sensor Networks

    Directory of Open Access Journals (Sweden)

    Shibo He

    2010-01-01

    Full Text Available In wireless sensor networks (WSNs, there generally exist many different objective functions to be optimized. In this paper, we propose a stochastic multiobjective optimization approach to solve such kind of problem. We first formulate a general multiobjective optimization problem. We then decompose the optimization formulation through Lagrange dual decomposition and adopt the stochastic quasigradient algorithm to solve the primal-dual problem in a distributed way. We show theoretically that our algorithm converges to the optimal solution of the primal problem by using the knowledge of stochastic programming. Furthermore, the formulation provides a general stochastic multiobjective optimization framework for WSNs. We illustrate how the general framework works by considering an example of the optimal rate allocation problem in multipath WSNs with time-varying channel. Extensive simulation results are given to demonstrate the effectiveness of our algorithm.

  20. Optimizing signal recycling for detecting a stochastic gravitational-wave background

    Science.gov (United States)

    Tao, Duo; Christensen, Nelson

    2018-06-01

    Signal recycling is applied in laser interferometers such as the Advanced Laser Interferometer Gravitational-Wave Observatory (aLIGO) to increase their sensitivity to gravitational waves. In this study, signal recycling configurations for detecting a stochastic gravitational-wave background are optimized based on aLIGO parameters. The optimal transmission of the signal recycling mirror (SRM) and the detuning phase of the signal recycling cavity under a fixed laser power and low-frequency cutoff are calculated. Based on the optimal configurations, the compatibility with a binary neutron star (BNS) search is discussed. Then, different laser powers and low-frequency cutoffs are considered. Two models for the dimensionless energy density of gravitational waves Ω_GW, one of them a flat model, are studied. For a stochastic background search, it is found that an interferometer using signal recycling has a better sensitivity than an interferometer not using it. The optimal stochastic search configurations are typically found when both the SRM transmission and the signal recycling detuning phase are low. In this region, the BNS range mostly lies between 160 and 180 Mpc. When a lower laser power is used, the optimal signal recycling detuning phase increases, the optimal SRM transmission increases and the optimal sensitivity improves. A reduced low-frequency cutoff gives a better sensitivity limit. For both models of Ω_GW, a typical optimal sensitivity limit on the order of 10^-10 is achieved at the reference frequency.

  1. Stochastic Linear Quadratic Optimal Control Problems

    International Nuclear Information System (INIS)

    Chen, S.; Yong, J.

    2001-01-01

    This paper is concerned with the stochastic linear quadratic optimal control problem (LQ problem, for short) for which the coefficients are allowed to be random and the cost functional is allowed to have a negative weight on the square of the control variable. Some intrinsic relations among the LQ problem, the stochastic maximum principle, and the (linear) forward-backward stochastic differential equations are established. Some results involving Riccati equation are discussed as well

  2. Stochastic optimal charging of electric-drive vehicles with renewable energy

    International Nuclear Information System (INIS)

    Pantoš, Miloš

    2011-01-01

    The paper presents a stochastic optimization algorithm that may eventually be used by electric energy suppliers to coordinate the charging of electric-drive vehicles (EDVs) in order to maximize the use of renewable energy in transportation. Due to the stochastic nature of transportation patterns, Monte Carlo simulation is applied to model the uncertainties represented by numerous scenarios. To reduce the problem complexity, the simulated driving patterns are not considered individually in the optimization but are clustered into fleets using the GAMS/SCENRED tool. Uncertainties in the production of renewable energy sources (RESs) are represented by statistical central moments, which are then used in Hong's 2-point + 1 estimation method to define the estimate points considered in the optimization. Case studies illustrate the application of the proposed optimization in achieving maximal exploitation of RESs in transportation by EDVs. -- Highlights: ► Optimization model for EDV charging applying linear programming. ► Formation of EDV fleets based on driving-pattern assessment applying GAMS/SCENRED. ► Consideration of uncertainties of RES production and energy prices in the market. ► Stochastic optimization. ► Application of Hong's 2-point + 1 estimation method.

  3. Annealing evolutionary stochastic approximation Monte Carlo for global optimization

    KAUST Repository

    Liang, Faming

    2010-04-08

    In this paper, we propose a new algorithm, the so-called annealing evolutionary stochastic approximation Monte Carlo (AESAMC) algorithm as a general optimization technique, and study its convergence. AESAMC possesses a self-adjusting mechanism, whose target distribution can be adapted at each iteration according to the current samples. Thus, AESAMC falls into the class of adaptive Monte Carlo methods. This mechanism also makes AESAMC less trapped by local energy minima than nonadaptive MCMC algorithms. Under mild conditions, we show that AESAMC can converge weakly toward a neighboring set of global minima in the space of energy. AESAMC is tested on multiple optimization problems. The numerical results indicate that AESAMC can potentially outperform simulated annealing, the genetic algorithm, annealing stochastic approximation Monte Carlo, and some other metaheuristics in function optimization. © 2010 Springer Science+Business Media, LLC.
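
    For context, the sketch below is plain simulated annealing, one of the baselines AESAMC is compared against: a Metropolis acceptance step with a geometric cooling schedule. AESAMC itself additionally adapts its target distribution from past samples, which is not shown here; the schedule, proposal width and test function are assumptions.

    ```python
    # Plain simulated annealing on a multimodal test function; illustrative only.
    import numpy as np

    rng = np.random.default_rng(8)

    def energy(x):
        return np.sum(x**2 - 10.0 * np.cos(2 * np.pi * x) + 10.0)

    x = rng.uniform(-5.12, 5.12, size=2)
    e = energy(x)
    T = 5.0
    for _ in range(20000):
        y = x + 0.2 * rng.standard_normal(2)      # local proposal
        ey = energy(y)
        if ey < e or rng.random() < np.exp(-(ey - e) / T):
            x, e = y, ey                          # Metropolis acceptance
        T = max(0.999 * T, 1e-3)                  # geometric cooling schedule

    print(x, e)
    ```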

  4. A review on condition-based maintenance optimization models for stochastically deteriorating system

    International Nuclear Information System (INIS)

    Alaswad, Suzan; Xiang, Yisha

    2017-01-01

    Condition-based maintenance (CBM) is a maintenance strategy that collects and assesses real-time information, and recommends maintenance decisions based on the current condition of the system. In recent decades, research on CBM has been rapidly growing due to the rapid development of computer-based monitoring technologies. Research studies have proven that CBM, if planned properly, can be effective in improving equipment reliability at reduced costs. This paper presents a review of CBM literature with emphasis on mathematical modeling and optimization approaches. We focus this review on important aspects of the CBM, such as optimization criteria, inspection frequency, maintenance degree, solution methodology, etc. Since the modeling choice for the stochastic deterioration process greatly influences CBM strategy decisions, this review classifies the literature on CBM models based on the underlying deterioration processes, namely discrete- and continuous-state deterioration, and proportional hazard model. CBM models for multi-unit systems are also reviewed in this paper. This paper provides useful references for CBM management professionals and researchers working on CBM modeling and optimization. - Highlights: • A review on Condition-based maintenance (CBM) optimization models is presented. • The CBM models are classified based on the underlying deterioration processes. • Existing CBM models for both single- and multi-unit systems are reviewed. • Future essential research directions on CBM are identified.

  5. Trip-oriented stochastic optimal energy management strategy for plug-in hybrid electric bus

    International Nuclear Information System (INIS)

    Du, Yongchang; Zhao, Yue; Wang, Qinpu; Zhang, Yuanbo; Xia, Huaicheng

    2016-01-01

    A trip-oriented stochastic optimal energy management strategy for a plug-in hybrid electric bus is presented in this paper, which includes an offline stochastic dynamic programming part and an online implementation part performed by an equivalent consumption minimization strategy. In the offline part, historical driving cycles of the fixed route are divided into segments according to the positions of bus stops, and then a segment-based stochastic driving condition model based on a Markov chain is built. With the segment-based stochastic model obtained, the control set for the real-time implemented equivalent consumption minimization strategy can be achieved by solving the offline stochastic dynamic programming problem. Results of the stochastic dynamic programming are converted into a 3-dimensional lookup table of parameters for the online implemented equivalent consumption minimization strategy. The proposed strategy is verified by both simulation and a hardware-in-the-loop test of a real-world driving cycle on an urban bus route. Simulation results show that the proposed method outperforms both a well-tuned equivalent consumption minimization strategy and a rule-based strategy in terms of fuel economy, and it even proves to be close to the optimal result obtained by dynamic programming. Furthermore, the practical application potential of the proposed control method was proved by the hardware-in-the-loop test. - Highlights: • A stochastic problem was formed based on a stochastic segment-based driving condition model. • Offline stochastic dynamic programming was employed to solve the stochastic problem. • The instant power split decision was made by the online equivalent consumption minimization strategy. • Good performance in fuel economy of the proposed method was verified by simulation results. • Practical application potential of the proposed method was verified by the hardware-in-the-loop test results.
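
    The segment-based Markov-chain driving-condition model mentioned above can be sketched as follows: discretize the recorded speed of one route segment into states, count state-to-state transitions to estimate a transition matrix, and sample synthetic driving conditions from it. The state grid and the synthetic "historical" trace below are assumptions for illustration.

    ```python
    # Estimate a Markov-chain transition matrix over discretized speed states for
    # one route segment and sample a synthetic speed-state sequence from it.
    import numpy as np

    rng = np.random.default_rng(9)

    # Pretend this is the recorded speed trace (m/s) of one segment, many runs.
    hist = np.clip(np.cumsum(rng.normal(0, 0.8, 5000)) % 18, 0, 17.9)
    states = np.digitize(hist, bins=np.arange(2, 18, 2))        # 9 speed bins

    n = states.max() + 1
    P = np.full((n, n), 1e-6)                                   # tiny smoothing avoids empty rows
    for a, b in zip(states[:-1], states[1:]):                   # count transitions
        P[a, b] += 1
    P /= P.sum(axis=1, keepdims=True)                           # row-normalize

    # Sample a synthetic 60-step speed-state sequence for this segment.
    s = [int(states[0])]
    for _ in range(59):
        s.append(int(rng.choice(n, p=P[s[-1]])))
    print(s[:20])
    ```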

  6. Uncertainty Aware Structural Topology Optimization Via a Stochastic Reduced Order Model Approach

    Science.gov (United States)

    Aguilo, Miguel A.; Warner, James E.

    2017-01-01

    This work presents a stochastic reduced order modeling strategy for the quantification and propagation of uncertainties in topology optimization. Uncertainty aware optimization problems can be computationally complex due to the substantial number of model evaluations that are necessary to accurately quantify and propagate uncertainties. This computational complexity is greatly magnified if a high-fidelity, physics-based numerical model is used for the topology optimization calculations. Stochastic reduced order model (SROM) methods are applied here to effectively 1) alleviate the prohibitive computational cost associated with an uncertainty aware topology optimization problem; and 2) quantify and propagate the inherent uncertainties due to design imperfections. A generic SROM framework that transforms the uncertainty aware, stochastic topology optimization problem into a deterministic optimization problem that relies only on independent calls to a deterministic numerical model is presented. This approach facilitates the use of existing optimization and modeling tools to accurately solve the uncertainty aware topology optimization problems in a fraction of the computational demand required by Monte Carlo methods. Finally, an example in structural topology optimization is presented to demonstrate the effectiveness of the proposed uncertainty aware structural topology optimization approach.

  7. Advances in stochastic and deterministic global optimization

    CERN Document Server

    Zhigljavsky, Anatoly; Žilinskas, Julius

    2016-01-01

    Current research results in stochastic and deterministic global optimization including single and multiple objectives are explored and presented in this book by leading specialists from various fields. Contributions include applications to multidimensional data visualization, regression, survey calibration, inventory management, timetabling, chemical engineering, energy systems, and competitive facility location. Graduate students, researchers, and scientists in computer science, numerical analysis, optimization, and applied mathematics will be fascinated by the theoretical, computational, and application-oriented aspects of stochastic and deterministic global optimization explored in this book. This volume is dedicated to the 70th birthday of Antanas Žilinskas who is a leading world expert in global optimization. Professor Žilinskas's research has concentrated on studying models for the objective function, the development and implementation of efficient algorithms for global optimization with single and mu...

  8. Optimal design of distributed energy resource systems based on two-stage stochastic programming

    International Nuclear Information System (INIS)

    Yang, Yun; Zhang, Shijie; Xiao, Yunhan

    2017-01-01

    Highlights: • A two-stage stochastic programming model is built to design DER systems under uncertainties. • Uncertain energy demands have a significant effect on the optimal design. • Uncertain energy prices and renewable energy intensity have little effect on the optimal design. • The economy is overestimated if the system is designed without considering the uncertainties. • The uncertainty in energy prices has the greatest effect on the economy. - Abstract: Multiple uncertainties exist in the optimal design of distributed energy resource (DER) systems. The expected energy, economic, and environmental benefits may not be achieved and a deficit in energy supply may occur if the uncertainties are not handled properly. This study focuses on the optimal design of DER systems with consideration of the uncertainties. A two-stage stochastic programming model is built that accounts for the discreteness of equipment capacities, equipment partial-load operation and output bounds, as well as the influence of ambient temperature on gas turbine performance. The stochastic model is then transformed into its deterministic equivalent and solved. As an illustrative example, the model is applied to a hospital in Lianyungang, China. Comparative studies are performed to evaluate the effect of the uncertainties in load demands, energy prices, and renewable energy intensity separately and simultaneously on the system’s economy and optimal design. Results show that the uncertainties in load demands have a significant effect on the optimal system design, whereas the uncertainties in energy prices and renewable energy intensity have almost no effect. Results regarding the economy show that it is clearly overestimated if the system is designed without considering the uncertainties.
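
    As a toy illustration of a two-stage model of this kind (not the model of the paper), the sketch below sizes a single DER capacity against demand scenarios, with grid purchases as the second-stage recourse; the deterministic equivalent is evaluated by simple enumeration. All capacities, prices and scenario distributions are hypothetical.

```python
import numpy as np

# Hypothetical data: candidate DER capacities, demand scenarios and prices
capacities = np.linspace(0, 200, 81)                   # kW, first-stage decision grid
rng = np.random.default_rng(0)
demand = rng.normal(120, 30, size=1000).clip(min=0)    # kW, scenario demands
prob = np.full(demand.size, 1 / demand.size)

invest_cost = 40.0     # $/kW installed (annualized), illustrative
own_gen_cost = 60.0    # $ per unit of self generation, illustrative
grid_cost = 140.0      # $ per unit of recourse grid purchase, illustrative

def expected_total_cost(cap):
    served = np.minimum(cap, demand)        # second stage: run DER up to capacity
    shortfall = demand - served             # recourse: buy the rest from the grid
    stage2 = own_gen_cost * served + grid_cost * shortfall
    return invest_cost * cap + np.sum(prob * stage2)

costs = [expected_total_cost(c) for c in capacities]
best = capacities[int(np.argmin(costs))]
print(f"optimal capacity ~ {best:.1f} kW, expected cost ~ {min(costs):.0f} $")
```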

  9. Design and analysis of stochastic DSS query optimizers in a distributed database system

    Directory of Open Access Journals (Sweden)

    Manik Sharma

    2016-07-01

    Full Text Available Query optimization is a stimulating task of any database system. A number of heuristics have been proposed in recent times, introducing new algorithms that substantially improve the performance of a query. The hunt for a better solution still continues. The continual developments in the field of Decision Support System (DSS) databases are producing data at an exceptional rate. The massive volume of DSS data is consequential only when it can be accessed and analyzed by researchers. Here, an innovative stochastic DSS query optimizer framework is proposed to further improve the design of existing genetic query optimization approaches. The results of the Entropy Based Restricted Stochastic Query Optimizer (ERSQO) are compared with the results of the Exhaustive Enumeration Query Optimizer (EAQO), the Simple Genetic Query Optimizer (SGQO), the Novel Genetic Query Optimizer (NGQO) and the Restricted Stochastic Query Optimizer (RSQO). In terms of Total Costs, EAQO outperforms SGQO, NGQO, RSQO and ERSQO. However, the stochastic approaches dominate in terms of runtime. The Total Costs produced by ERSQO are better than those of SGQO, NGQO and RSQO by 12%, 8% and 5%, respectively. Moreover, the effect of replicating data on the Total Costs of DSS queries is also examined. In addition, the statistical analysis revealed a 2-tailed significant correlation between the number of join operations and the Total Costs of distributed DSS queries. Finally, with regard to the consistency of the stochastic query optimizers, the results of SGQO, NGQO, RSQO and ERSQO are 96.2%, 97.2%, 97.45% and 97.8% consistent, respectively.

  10. Stochastic Recursive Algorithms for Optimization: Simultaneous Perturbation Methods

    CERN Document Server

    Bhatnagar, S; Prashanth, L A

    2013-01-01

    Stochastic Recursive Algorithms for Optimization presents algorithms for constrained and unconstrained optimization and for reinforcement learning. Efficient perturbation approaches form a thread unifying all the algorithms considered. Simultaneous perturbation stochastic approximation and smooth fractional estimators for gradient- and Hessian-based methods are presented. These algorithms: • are easily implemented; • do not require an explicit system model; and • work with real or simulated data. Chapters on their application in service systems, vehicular traffic control and communications networks illustrate this point. The book is self-contained with necessary mathematical results placed in an appendix. The text provides easy-to-use, off-the-shelf algorithms that are given detailed mathematical treatment so the material presented will be of significant interest to practitioners, academic researchers and graduate students alike. The breadth of applications makes the book appropriate for reader from sim...
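
    As a minimal illustration of the simultaneous perturbation idea treated in the book, the sketch below implements a basic SPSA loop on a noisy quadratic; the gain sequences and the test objective are illustrative textbook-style defaults, not values taken from the book.

```python
import numpy as np

def spsa_minimize(f, theta0, n_iter=2000, a=0.1, c=0.1, alpha=0.602, gamma=0.101, seed=0):
    """SPSA: two (noisy) function evaluations per iteration give a gradient estimate."""
    rng = np.random.default_rng(seed)
    theta = np.asarray(theta0, dtype=float)
    for k in range(1, n_iter + 1):
        ak = a / k**alpha                                    # step-size gain
        ck = c / k**gamma                                    # perturbation gain
        delta = rng.choice([-1.0, 1.0], size=theta.size)     # Rademacher perturbation
        g_hat = (f(theta + ck * delta) - f(theta - ck * delta)) / (2 * ck * delta)
        theta = theta - ak * g_hat
    return theta

# Noisy quadratic as a stand-in for a simulation-based objective
rng = np.random.default_rng(1)
noisy_f = lambda x: np.sum((x - 3.0) ** 2) + 0.01 * rng.normal()
print(spsa_minimize(noisy_f, np.zeros(5)))   # should approach [3, 3, 3, 3, 3]
```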

  11. Dynamic optimization deterministic and stochastic models

    CERN Document Server

    Hinderer, Karl; Stieglitz, Michael

    2016-01-01

    This book explores discrete-time dynamic optimization and provides a detailed introduction to both deterministic and stochastic models. Covering problems with finite and infinite horizon, as well as Markov renewal programs, Bayesian control models and partially observable processes, the book focuses on the precise modelling of applications in a variety of areas, including operations research, computer science, mathematics, statistics, engineering, economics and finance. Dynamic Optimization is a carefully presented textbook which starts with discrete-time deterministic dynamic optimization problems, providing readers with the tools for sequential decision-making, before proceeding to the more complicated stochastic models. The authors present complete and simple proofs and illustrate the main results with numerous examples and exercises (without solutions). With relevant material covered in four appendices, this book is completely self-contained.

  12. Sampling strategies and stopping criteria for stochastic dual dynamic programming: a case study in long-term hydrothermal scheduling

    Energy Technology Data Exchange (ETDEWEB)

    Homem-de-Mello, Tito [University of Illinois at Chicago, Department of Mechanical and Industrial Engineering, Chicago, IL (United States); Matos, Vitor L. de; Finardi, Erlon C. [Universidade Federal de Santa Catarina, LabPlan - Laboratorio de Planejamento de Sistemas de Energia Eletrica, Florianopolis (Brazil)

    2011-03-15

    The long-term hydrothermal scheduling is one of the most important problems to be solved in the power systems area. This problem aims to obtain an optimal policy, under water (energy) resources uncertainty, for hydro and thermal plants over a multi-annual planning horizon. It is natural to model the problem as a multi-stage stochastic program, a class of models for which algorithms have been developed. The original stochastic process is represented by a finite scenario tree and, because of the large number of stages, a sampling-based method such as the Stochastic Dual Dynamic Programming (SDDP) algorithm is required. The purpose of this paper is two-fold. Firstly, we study the application of two alternative sampling strategies to the standard Monte Carlo - namely, Latin hypercube sampling and randomized quasi-Monte Carlo - for the generation of scenario trees, as well as for the sampling of scenarios that is part of the SDDP algorithm. Secondly, we discuss the formulation of stopping criteria for the optimization algorithm in terms of statistical hypothesis tests, which allows us to propose an alternative criterion that is more robust than that originally proposed for the SDDP. We test these ideas on a problem associated with the whole Brazilian power system, with a three-year planning horizon. (orig.)
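
    As a small illustration of the Latin hypercube alternative to plain Monte Carlo discussed above, the sketch below draws stratified uniform samples and maps them through an inverse CDF to obtain scenario inflows; the exponential marginal and its scale are hypothetical and unrelated to the Brazilian system studied in the paper.

```python
import numpy as np

def latin_hypercube(n_samples, n_dims, rng):
    """One point per stratum and dimension: permute the strata, then jitter inside each."""
    u = np.empty((n_samples, n_dims))
    for j in range(n_dims):
        strata = rng.permutation(n_samples)
        u[:, j] = (strata + rng.random(n_samples)) / n_samples
    return u                                  # stratified points in [0, 1)^d

rng = np.random.default_rng(0)
n_scen, n_stages, mean_inflow = 200, 12, 100.0

u_lhs = latin_hypercube(n_scen, n_stages, rng)
u_mc = rng.random((n_scen, n_stages))

# Hypothetical exponential inflow marginals via the inverse CDF
inflows_lhs = -mean_inflow * np.log(1.0 - u_lhs)
inflows_mc = -mean_inflow * np.log(1.0 - u_mc)

# Across repeated runs, the LHS stage means fluctuate less than the Monte Carlo ones
print("LHS stage means:", np.round(inflows_lhs.mean(axis=0)[:3], 1))
print("MC  stage means:", np.round(inflows_mc.mean(axis=0)[:3], 1))
```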

  13. On benchmarking Stochastic Global Optimization Algorithms

    NARCIS (Netherlands)

    Hendrix, E.M.T.; Lancinskas, A.

    2015-01-01

    A multitude of heuristic stochastic optimization algorithms have been described in literature to obtain good solutions of the box-constrained global optimization problem often with a limit on the number of used function evaluations. In the larger question of which algorithms behave well on which

  14. Towards Stochastic Optimization-Based Electric Vehicle Penetration in a Novel Archipelago Microgrid.

    Science.gov (United States)

    Yang, Qingyu; An, Dou; Yu, Wei; Tan, Zhengan; Yang, Xinyu

    2016-06-17

    Due to the advantage of avoiding upstream disturbance and voltage fluctuation from a power transmission system, Islanded Micro-Grids (IMG) have attracted much attention. In this paper, we first propose a novel self-sufficient Cyber-Physical System (CPS) supported by Internet of Things (IoT) techniques, namely the "archipelago micro-grid (MG)", which integrates the power grid and sensor networks to make the grid operation effective and is composed of multiple MGs disconnected from the utility grid. Electric Vehicles (EVs) are used to replace a portion of Conventional Vehicles (CVs) to reduce CO2 emissions and operation cost. Nonetheless, the intermittent nature and uncertainty of Renewable Energy Sources (RESs) remain a challenging issue in managing energy resources in the system. To address these issues, we formalize the optimal EV penetration problem as a two-stage Stochastic Optimal Penetration (SOP) model, which aims to minimize the emission and operation cost in the system. Uncertainties coming from RESs (e.g., wind, solar, and load demand) are considered in the stochastic model, and random parameters to represent those uncertainties are captured by the Monte Carlo-based method. To enable the reasonable deployment of EVs in each MG, we develop two scheduling schemes, namely the Unlimited Coordinated Scheme (UCS) and the Limited Coordinated Scheme (LCS). An extensive simulation study based on a modified 9-bus system with three MGs has been carried out to show the effectiveness of our proposed schemes. The evaluation data indicate that our proposed strategy can reduce both the environmental pollution created by CO2 emissions and the operation costs under UCS and LCS.

  15. NN-Based Implicit Stochastic Optimization of Multi-Reservoir Systems Management

    Directory of Open Access Journals (Sweden)

    Matteo Sangiorgio

    2018-03-01

    Full Text Available Multi-reservoir systems management is complex because of the uncertainty about future events and the variety of purposes, usually conflicting, of the involved actors. An efficient management of these systems can help improve resource allocation, prevent political crises and reduce conflicts between stakeholders. Bellman stochastic dynamic programming (SDP) is the most famous among the many approaches proposed to solve this optimal control problem. Unfortunately, SDP is affected by the curse of dimensionality: computational effort increases exponentially with the complexity of the considered system (i.e., the number of reservoirs), and the problem rapidly becomes intractable. This paper proposes an implicit stochastic optimization approach for the solution of the reservoir management problem. The core idea is to use extremely flexible functions, such as artificial neural networks (NNs), to design release rules which approximate the optimal policies obtained by an open-loop approach. These trained NNs can then be used to make decisions in real time. The approach thus requires a sufficiently long series of historical or synthetic inflows, and the definition of a compromise solution to be approximated. This work analyzes with particular emphasis the importance of the information which represents the input of the control laws, investigating the effects of different degrees of completeness. The methodology is applied to the Nile River basin considering the main management objectives (minimization of the irrigation water deficit and maximization of hydropower production), but can easily be adopted in other cases as well.

  16. Sensory optimization by stochastic tuning.

    Science.gov (United States)

    Jurica, Peter; Gepshtein, Sergei; Tyukin, Ivan; van Leeuwen, Cees

    2013-10-01

    Individually, visual neurons are each selective for several aspects of stimulation, such as stimulus location, frequency content, and speed. Collectively, the neurons implement the visual system's preferential sensitivity to some stimuli over others, manifested in behavioral sensitivity functions. We ask how the individual neurons are coordinated to optimize visual sensitivity. We model synaptic plasticity in a generic neural circuit and find that stochastic changes in strengths of synaptic connections entail fluctuations in parameters of neural receptive fields. The fluctuations correlate with uncertainty of sensory measurement in individual neurons: The higher the uncertainty the larger the amplitude of fluctuation. We show that this simple relationship is sufficient for the stochastic fluctuations to steer sensitivities of neurons toward a characteristic distribution, from which follows a sensitivity function observed in human psychophysics and which is predicted by a theory of optimal allocation of receptive fields. The optimal allocation arises in our simulations without supervision or feedback about system performance and independently of coupling between neurons, making the system highly adaptive and sensitive to prevailing stimulation. PsycINFO Database Record (c) 2013 APA, all rights reserved.

  17. Robust Topology Optimization Based on Stochastic Collocation Methods under Loading Uncertainties

    Directory of Open Access Journals (Sweden)

    Qinghai Zhao

    2015-01-01

    Full Text Available A robust topology optimization (RTO) approach with consideration of loading uncertainties is developed in this paper. The stochastic collocation method combined with a full tensor product grid and a Smolyak sparse grid transforms the robust formulation into a weighted multiple-loading deterministic problem at the collocation points. The proposed approach is amenable to implementation in existing commercial topology optimization software packages and is thus applicable to practical engineering problems. Numerical examples of two- and three-dimensional topology optimization problems are provided to demonstrate the proposed RTO approach and its applications. The optimal topologies obtained from deterministic and robust topology optimization designs under the tensor product grid and the sparse grid with different levels are compared with one another to investigate the pros and cons of the optimization algorithms on the final topologies, and an extensive Monte Carlo simulation is also performed to verify the proposed approach.

  18. Bayesian posterior sampling via stochastic gradient Fisher scoring

    NARCIS (Netherlands)

    Ahn, S.; Korattikara, A.; Welling, M.; Langford, J.; Pineau, J.

    2012-01-01

    In this paper we address the following question: "Can we approximately sample from a Bayesian posterior distribution if we are only allowed to touch a small mini-batch of data-items for every sample we generate?". An algorithm based on the Langevin equation with stochastic gradients (SGLD) was
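
    As a minimal illustration of stochastic gradient Langevin dynamics, the algorithm the abstract refers to, the sketch below samples the posterior mean of a Gaussian with known variance using mini-batch gradients plus injected Gaussian noise; the data, prior, step size and batch size are all illustrative choices, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(2.0, 1.0, size=10_000)     # hypothetical observations, known sigma = 1

def grad_log_prior(theta, tau2=10.0):        # N(0, tau2) prior on the unknown mean
    return -theta / tau2

def grad_log_lik(theta, batch):              # sum of per-datum gradients, sigma = 1
    return np.sum(batch - theta)

def sgld(n_steps=5000, batch_size=100, eps=1e-4):
    N, theta, samples = data.size, 0.0, []
    for _ in range(n_steps):
        batch = data[rng.integers(0, N, batch_size)]
        grad = grad_log_prior(theta) + (N / batch_size) * grad_log_lik(theta, batch)
        theta += 0.5 * eps * grad + rng.normal(0.0, np.sqrt(eps))   # Langevin update
        samples.append(theta)
    return np.array(samples)

draws = sgld()
print("posterior mean estimate ~", draws[1000:].mean())   # should be close to 2.0
```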

  19. Portfolios Dominating Indices: Optimization with Second-Order Stochastic Dominance Constraints vs. Minimum and Mean Variance Portfolios

    Directory of Open Access Journals (Sweden)

    Neslihan Fidan Keçeci

    2016-10-01

    Full Text Available The paper compares portfolio optimization with Second-Order Stochastic Dominance (SSD) constraints to mean-variance and minimum variance portfolio optimization. As a distribution-free decision rule, stochastic dominance takes into account the entire distribution of return rather than some specific characteristic, such as variance. The paper is focused on practical applications of portfolio optimization and uses the Portfolio Safeguard (PSG) package, which has precoded modules for optimization with SSD constraints, mean-variance and minimum variance portfolio optimization. We have done in-sample and out-of-sample simulations for portfolios of stocks from the Dow Jones, S&P 100 and DAX indices. The considered portfolios SSD-dominate the Dow Jones, S&P 100 and DAX indices. The simulations demonstrated a superior performance of the portfolios with SSD constraints versus the mean-variance and minimum variance portfolios.

  20. Modelling on optimal portfolio with exchange rate based on discontinuous stochastic process

    Science.gov (United States)

    Yan, Wei; Chang, Yuwen

    2016-12-01

    Considering the stochastic exchange rate, this paper is concerned with dynamic portfolio selection in a financial market. The optimal investment problem is formulated as a continuous-time mathematical model under the mean-variance criterion. The underlying processes follow jump-diffusion processes (Wiener and Poisson processes). Then the corresponding Hamilton-Jacobi-Bellman (HJB) equation of the problem is presented and its efficient frontier is obtained. Moreover, the optimal strategy is also derived under the safety-first criterion.
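
    As a small illustration of the jump-diffusion dynamics such a model assumes (not the model of the paper), the sketch below simulates one price path driven by a Wiener process plus a compound Poisson jump component using a simple Euler scheme; all parameter values are hypothetical.

```python
import numpy as np

def jump_diffusion_path(s0=1.0, mu=0.05, sigma=0.2, lam=0.5,
                        jump_mu=-0.1, jump_sigma=0.15, T=1.0, n=252, seed=0):
    """Euler scheme for dS/S = mu dt + sigma dW + (e^J - 1) dN, J ~ N(jump_mu, jump_sigma^2)."""
    rng = np.random.default_rng(seed)
    dt = T / n
    s = np.empty(n + 1)
    s[0] = s0
    for k in range(n):
        dW = rng.normal(0.0, np.sqrt(dt))                 # Wiener increment
        n_jumps = rng.poisson(lam * dt)                   # Poisson jump count in (t, t+dt]
        jump = np.exp(rng.normal(jump_mu, jump_sigma, n_jumps).sum()) - 1.0 if n_jumps else 0.0
        s[k + 1] = s[k] * (1.0 + mu * dt + sigma * dW + jump)
    return s

path = jump_diffusion_path()
print(path[:5], "...", path[-1])
```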

  1. Towards Stochastic Optimization-Based Electric Vehicle Penetration in a Novel Archipelago Microgrid

    Directory of Open Access Journals (Sweden)

    Qingyu Yang

    2016-06-01

    Full Text Available Due to the advantage of avoiding upstream disturbance and voltage fluctuation from a power transmission system, Islanded Micro-Grids (IMG) have attracted much attention. In this paper, we first propose a novel self-sufficient Cyber-Physical System (CPS) supported by Internet of Things (IoT) techniques, namely the “archipelago micro-grid (MG)”, which integrates the power grid and sensor networks to make the grid operation effective and is composed of multiple MGs disconnected from the utility grid. Electric Vehicles (EVs) are used to replace a portion of Conventional Vehicles (CVs) to reduce CO2 emissions and operation cost. Nonetheless, the intermittent nature and uncertainty of Renewable Energy Sources (RESs) remain a challenging issue in managing energy resources in the system. To address these issues, we formalize the optimal EV penetration problem as a two-stage Stochastic Optimal Penetration (SOP) model, which aims to minimize the emission and operation cost in the system. Uncertainties coming from RESs (e.g., wind, solar, and load demand) are considered in the stochastic model, and random parameters to represent those uncertainties are captured by the Monte Carlo-based method. To enable the reasonable deployment of EVs in each MG, we develop two scheduling schemes, namely the Unlimited Coordinated Scheme (UCS) and the Limited Coordinated Scheme (LCS). An extensive simulation study based on a modified 9-bus system with three MGs has been carried out to show the effectiveness of our proposed schemes. The evaluation data indicate that our proposed strategy can reduce both the environmental pollution created by CO2 emissions and the operation costs under UCS and LCS.

  2. A combined stochastic programming and optimal control approach to personal finance and pensions

    DEFF Research Database (Denmark)

    Konicz, Agnieszka Karolina; Pisinger, David; Rasmussen, Kourosh Marjani

    2015-01-01

    The paper presents a model that combines a dynamic programming (stochastic optimal control) approach and a multi-stage stochastic linear programming approach (SLP), integrated into one SLP formulation. Stochastic optimal control produces an optimal policy that is easy to understand and implement....

  3. Qualitative and Quantitative Integrated Modeling for Stochastic Simulation and Optimization

    Directory of Open Access Journals (Sweden)

    Xuefeng Yan

    2013-01-01

    Full Text Available The simulation and optimization of an actual physical system are usually constructed based on stochastic models, which inherently have both qualitative and quantitative characteristics. Most modeling specifications and frameworks find it difficult to describe a qualitative model directly. In order to deal with expert knowledge, uncertain reasoning, and other qualitative information, a combined qualitative and quantitative modeling specification is proposed based on a hierarchical model structure framework. The new modeling approach is based on a hierarchical model structure which includes the meta-meta model, the meta-model and the high-level model. A description logic system is defined for the formal definition and verification of the new modeling specification. A stochastic defense simulation was developed to illustrate how to model the system and optimize the result. The result shows that the proposed method can describe the complex system more comprehensively, and the survival probability of the target is higher when qualitative models are introduced into the quantitative simulation.

  4. Switching neuronal state: optimal stimuli revealed using a stochastically-seeded gradient algorithm.

    Science.gov (United States)

    Chang, Joshua; Paydarfar, David

    2014-12-01

    Inducing a switch in neuronal state using energy optimal stimuli is relevant to a variety of problems in neuroscience. Analytical techniques from optimal control theory can identify such stimuli; however, solutions to the optimization problem using indirect variational approaches can be elusive in models that describe neuronal behavior. Here we develop and apply a direct gradient-based optimization algorithm to find stimulus waveforms that elicit a change in neuronal state while minimizing energy usage. We analyze standard models of neuronal behavior, the Hodgkin-Huxley and FitzHugh-Nagumo models, to show that the gradient-based algorithm: (1) enables automated exploration of a wide solution space, using stochastically generated initial waveforms that converge to multiple locally optimal solutions; and (2) finds optimal stimulus waveforms that achieve a physiological outcome condition, without a priori knowledge of the optimal terminal condition of all state variables. Analysis of biological systems using stochastically-seeded gradient methods can reveal salient dynamical mechanisms underlying the optimal control of system behavior. The gradient algorithm may also have practical applications in future work, for example, finding energy optimal waveforms for therapeutic neural stimulation that minimizes power usage and diminishes off-target effects and damage to neighboring tissue.
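
    As a rough illustration of the stochastically seeded strategy (not the authors' implementation, and not the Hodgkin-Huxley or FitzHugh-Nagumo models), the sketch below refines many random initial parameter vectors with a finite-difference gradient descent on a hypothetical multimodal "energy plus outcome penalty" objective and collects the distinct local optima found.

```python
import numpy as np

def numeric_grad(f, x, h=1e-5):
    """Central finite-difference gradient of f at x."""
    g = np.zeros_like(x)
    for i in range(x.size):
        e = np.zeros_like(x)
        e[i] = h
        g[i] = (f(x + e) - f(x - e)) / (2 * h)
    return g

def gradient_descent(f, x0, lr=0.02, n_iter=500):
    x = x0.copy()
    for _ in range(n_iter):
        x -= lr * numeric_grad(f, x)
    return x, f(x)

# Hypothetical multimodal "stimulus energy + outcome penalty" objective
def objective(w):
    return np.sum(w**2) + 2.0 * np.sum(np.cos(3.0 * w))

rng = np.random.default_rng(0)
seeds = [rng.uniform(-2, 2, size=4) for _ in range(20)]     # stochastic initial waveforms
solutions = [gradient_descent(objective, s) for s in seeds]
best_w, best_val = min(solutions, key=lambda t: t[1])
print("distinct local optima found:", len({round(v, 3) for _, v in solutions}))
print("best solution:", np.round(best_w, 2), "value:", round(best_val, 3))
```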

  5. Optimal Stochastic Modeling and Control of Flexible Structures

    Science.gov (United States)

    1988-09-01

    ... [1.37] and McLane [1.18] considered multivariable systems and derived their optimal control characteristics. Kleinman, Gorman and Zaborsky considered ... Leondes [1.72, 1.73] studied various aspects of multivariable linear stochastic, discrete-time systems that are partly deterministic and partly stochastic ... June 1966. 1.8. A.V. Balakrishnan, Applied Functional Analysis, 2nd ed., New York, N.Y.: Springer-Verlag, 1981. 1.9. Peter S. Maybeck, Stochastic ...

  6. Stochastic analysis and robust optimization for a deck lid inner panel stamping

    International Nuclear Information System (INIS)

    Hou, Bo; Wang, Wurong; Li, Shuhui; Lin, Zhongqin; Xia, Z. Cedric

    2010-01-01

    FE-simulation and optimization are widely used in the stamping process to improve design quality and shorten development cycle. However, the current simulation and optimization may lead to non-robust results due to not considering the variation of material and process parameters. In this study, a novel stochastic analysis and robust optimization approach is proposed to improve the stamping robustness, where the uncertainties are involved to reflect manufacturing reality. A meta-model based stochastic analysis method is developed, where FE-simulation, uniform design and response surface methodology (RSM) are used to construct meta-model, based on which Monte-Carlo simulation is performed to predict the influence of input parameters variation on the final product quality. By applying the stochastic analysis, uniform design and RSM, the mean and the standard deviation (SD) of product quality are calculated as functions of the controllable process parameters. The robust optimization model composed of mean and SD is constructed and solved, the result of which is compared with the deterministic one to show its advantages. It is demonstrated that the product quality variations are reduced significantly, and quality targets (reject rate) are achieved under the robust optimal solution. The developed approach offers rapid and reliable results for engineers to deal with potential stamping problems during the early phase of product and tooling design, saving more time and resources.

  7. Topology optimization under stochastic stiffness

    Science.gov (United States)

    Asadpoure, Alireza

    Topology optimization is a systematic computational tool for optimizing the layout of materials within a domain for engineering design problems. It allows variation of structural boundaries and connectivities. This freedom in the design space often enables discovery of new, high performance designs. However, solutions obtained by performing the optimization in a deterministic setting may be impractical or suboptimal when considering real-world engineering conditions with inherent variabilities including (for example) variabilities in fabrication processes and operating conditions. The aim of this work is to provide a computational methodology for topology optimization in the presence of uncertainties associated with structural stiffness, such as uncertain material properties and/or structural geometry. Existing methods for topology optimization under deterministic conditions are first reviewed. Modifications are then proposed to improve the numerical performance of the so-called Heaviside Projection Method (HPM) in continuum domains. Next, two approaches, perturbation and Polynomial Chaos Expansion (PCE), are proposed to account for uncertainties in the optimization procedure. These approaches are intrusive, allowing tight and efficient coupling of the uncertainty quantification with the optimization sensitivity analysis. The work herein develops a robust topology optimization framework aimed at reducing the sensitivity of optimized solutions to uncertainties. The perturbation-based approach combines deterministic topology optimization with a perturbation method for the quantification of uncertainties. The use of perturbation transforms the problem of topology optimization under uncertainty to an augmented deterministic topology optimization problem. The PCE approach combines the spectral stochastic approach for the representation and propagation of uncertainties with an existing deterministic topology optimization technique. The resulting compact representations

  8. Decoding suprathreshold stochastic resonance with optimal weights

    International Nuclear Information System (INIS)

    Xu, Liyan; Vladusich, Tony; Duan, Fabing; Gunn, Lachlan J.; Abbott, Derek; McDonnell, Mark D.

    2015-01-01

    We investigate an array of stochastic quantizers for converting an analog input signal into a discrete output in the context of suprathreshold stochastic resonance. A new optimal weighted decoding is considered for different threshold level distributions. We show that for particular noise levels and choices of the threshold levels, optimally weighting the quantizer responses provides a reduced mean square error in comparison with the original unweighted array. However, there are also many parameter regions where the original array provides near optimal performance, and when this occurs, it offers a much simpler approach than optimally weighting each quantizer's response. - Highlights: • A weighted summing array of independently noisy binary comparators is investigated. • We present an optimal linearly weighted decoding scheme for combining the comparator responses. • We solve for the optimal weights by applying least squares regression to simulated data. • We find that weighted decoding before summation gives lower MSE distortion than unweighted summation of the comparator responses. • For some parameter regions, the decrease in MSE distortion due to weighting is negligible
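
    As a small illustration in the spirit of the highlights above (not the authors' code), the sketch below simulates an array of noisy binary comparators, fits linear readout weights by least squares, and compares the resulting mean square error with a rescaled unweighted sum; the thresholds, noise level and array size are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
n_units, n_samples, noise_sd = 15, 20_000, 0.4
x = rng.uniform(-1, 1, n_samples)                       # analog input signal
thresholds = rng.uniform(-1, 1, n_units)                # fixed threshold levels

# Each unit fires (1) when its independently noisy copy of the input exceeds its threshold
responses = (x[:, None] + rng.normal(0, noise_sd, (n_samples, n_units))
             > thresholds).astype(float)

# Optimal linear decoding weights (with intercept) by least squares regression
A = np.hstack([responses, np.ones((n_samples, 1))])
w, *_ = np.linalg.lstsq(A, x, rcond=None)
mse_weighted = np.mean((A @ w - x) ** 2)

# Unweighted decoding: an affine rescaling of the plain sum of responses
s = responses.sum(axis=1)
B = np.column_stack([s, np.ones(n_samples)])
coef, *_ = np.linalg.lstsq(B, x, rcond=None)
mse_unweighted = np.mean((B @ coef - x) ** 2)

print(f"MSE weighted   : {mse_weighted:.4f}")
print(f"MSE unweighted : {mse_unweighted:.4f}")
```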

  9. Optimal Advertising with Stochastic Demand

    OpenAIRE

    George E. Monahan

    1983-01-01

    A stochastic, sequential model is developed to determine optimal advertising expenditures as a function of product maturity and past advertising. Random demand for the product depends upon an aggregate measure of current and past advertising called "goodwill," and the position of the product in its life cycle measured by sales-to-date. Conditions on the parameters of the model are established that insure that it is optimal to advertise less as the product matures. Additional characteristics o...

  10. Stochastic Averaging and Stochastic Extremum Seeking

    CERN Document Server

    Liu, Shu-Jun

    2012-01-01

    Stochastic Averaging and Stochastic Extremum Seeking develops methods of mathematical analysis inspired by the interest in reverse engineering and analysis of bacterial convergence by chemotaxis and to apply similar stochastic optimization techniques in other environments. The first half of the text presents significant advances in stochastic averaging theory, necessitated by the fact that existing theorems are restricted to systems with linear growth, globally exponentially stable average models, vanishing stochastic perturbations, and prevent analysis over infinite time horizon. The second half of the text introduces stochastic extremum seeking algorithms for model-free optimization of systems in real time using stochastic perturbations for estimation of their gradients. Both gradient- and Newton-based algorithms are presented, offering the user the choice between the simplicity of implementation (gradient) and the ability to achieve a known, arbitrary convergence rate (Newton). The design of algorithms...

  11. Reliability-Based Topology Optimization Using Stochastic Response Surface Method with Sparse Grid Design

    Directory of Open Access Journals (Sweden)

    Qinghai Zhao

    2015-01-01

    Full Text Available A mathematical framework is developed which integrates the reliability concept into topology optimization to solve reliability-based topology optimization (RBTO) problems under uncertainty. Two typical methodologies have been presented and implemented, including the performance measure approach (PMA) and the sequential optimization and reliability assessment (SORA). To enhance the computational efficiency of the reliability analysis, the stochastic response surface method (SRSM) is applied to approximate the true limit state function with respect to the normalized random variables, combined with a reasonable design of experiments generated by sparse grid design (SGD), which has proven to be an effective and special discretization technique. Uncertainties such as material properties and external loads are considered in three numerical examples: a cantilever beam, a loaded knee structure, and a heat conduction problem. Monte-Carlo simulations are also performed to verify the accuracy of the failure probabilities computed by the proposed approach. Based on the results, it is demonstrated that application of SRSM with SGD can produce an efficient reliability analysis in RBTO, which enables a more reliable design than that obtained by deterministic topology optimization (DTO). It is also found that, under identical accuracy, SORA is superior to PMA in view of computational efficiency.

  12. A Multi-Sensor RSS Spatial Sensing-Based Robust Stochastic Optimization Algorithm for Enhanced Wireless Tethering

    CERN Document Server

    Parasuraman, Ramviyas; Molinari, Luca; Kershaw, Keith; Di Castro, Mario; Masi, Alessandro; Ferre, Manuel

    2014-01-01

    The reliability of wireless communication in a network of mobile wireless robot nodes depends on the received radio signal strength (RSS). When the robot nodes are deployed in hostile environments with ionizing radiations (such as in some scientific facilities), there is a possibility that some electronic components may fail randomly (due to radiation effects), which causes problems in wireless connectivity. The objective of this paper is to maximize robot mission capabilities by maximizing the wireless network capacity and to reduce the risk of communication failure. Thus, in this paper, we consider a multi-node wireless tethering structure called the “server-relay-client” framework that uses (multiple) relay nodes in between a server and a client node. We propose a robust stochastic optimization (RSO) algorithm using a multi-sensor-based RSS sampling method at the relay nodes to efficiently improve and balance the RSS between the source and client nodes to improve the network capacity and to provide red...

  13. Stochastic modelling of turbulent combustion for design optimization of gas turbine combustors

    Science.gov (United States)

    Mehanna Ismail, Mohammed Ali

    The present work covers the development and the implementation of an efficient algorithm for the design optimization of gas turbine combustors. The purpose is to explore the possibilities and indicate constructive suggestions for optimization techniques as alternative methods for designing gas turbine combustors. The algorithm is general to the extent that no constraints are imposed on the combustion phenomena or on the combustor configuration. The optimization problem is broken down into two elementary problems: the first is the optimum search algorithm, and the second is the turbulent combustion model used to determine the combustor performance parameters. These performance parameters constitute the objective and physical constraints in the optimization problem formulation. The examination of both turbulent combustion phenomena and the gas turbine design process suggests that the turbulent combustion model represents a crucial part of the optimization algorithm. The basic requirements needed for a turbulent combustion model to be successfully used in a practical optimization algorithm are discussed. In principle, the combustion model should comply with the conflicting requirements of high fidelity, robustness and computational efficiency. To that end, the problem of turbulent combustion is discussed and the current state of the art of turbulent combustion modelling is reviewed. According to this review, turbulent combustion models based on the composition PDF transport equation are found to be good candidates for application in the present context. However, these models are computationally expensive. To overcome this difficulty, two different models based on the composition PDF transport equation were developed: an improved Lagrangian Monte Carlo composition PDF algorithm and the generalized stochastic reactor model. Improvements in the Lagrangian Monte Carlo composition PDF model performance and its computational efficiency were achieved through the

  14. Adaptive optimal stochastic state feedback control of resistive wall modes in tokamaks

    International Nuclear Information System (INIS)

    Sun, Z.; Sen, A.K.; Longman, R.W.

    2006-01-01

    An adaptive optimal stochastic state feedback control is developed to stabilize the resistive wall mode (RWM) instability in tokamaks. The extended least-square method with exponential forgetting factor and covariance resetting is used to identify (experimentally determine) the time-varying stochastic system model. A Kalman filter is used to estimate the system states. The estimated system states are passed on to an optimal state feedback controller to construct control inputs. The Kalman filter and the optimal state feedback controller are periodically redesigned online based on the identified system model. This adaptive controller can stabilize the time-dependent RWM in a slowly evolving tokamak discharge. This is accomplished within a time delay of roughly four times the inverse of the growth rate for the time-invariant model used
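
    As a minimal illustration of the exponentially forgetting recursive least-squares identification step described above (without covariance resetting, the Kalman filter, or the optimal state feedback design), the sketch below tracks the parameters of a hypothetical slowly drifting scalar ARX plant; the plant, noise levels and forgetting factor are invented for illustration.

```python
import numpy as np

def rls_forgetting(phi, y, theta, P, lam=0.98):
    """One recursive least-squares update with exponential forgetting factor lam."""
    phi = phi.reshape(-1, 1)
    k = P @ phi / (lam + phi.T @ P @ phi)              # gain vector
    theta = theta + (k * (y - phi.T @ theta)).ravel()  # parameter update
    P = (P - k @ phi.T @ P) / lam                      # covariance update
    return theta, P

# Hypothetical slowly varying plant: y_t = a*y_{t-1} + b*u_{t-1} + noise
rng = np.random.default_rng(0)
b_true = 0.5
theta, P = np.zeros(2), 1000.0 * np.eye(2)
y_prev, u_prev = 0.0, 0.0
for t in range(2000):
    a_true = 0.9 - 0.0001 * t                          # slow drift to be tracked
    u = rng.normal()
    y = a_true * y_prev + b_true * u_prev + 0.05 * rng.normal()
    theta, P = rls_forgetting(np.array([y_prev, u_prev]), y, theta, P)
    y_prev, u_prev = y, u
print("estimated [a, b] ~", np.round(theta, 3))        # tracks the drifting parameters
```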

  15. Handbook of simulation optimization

    CERN Document Server

    Fu, Michael C

    2014-01-01

    The Handbook of Simulation Optimization presents an overview of the state of the art of simulation optimization, providing a survey of the most well-established approaches for optimizing stochastic simulation models and a sampling of recent research advances in theory and methodology. Leading contributors cover such topics as discrete optimization via simulation, ranking and selection, efficient simulation budget allocation, random search methods, response surface methodology, stochastic gradient estimation, stochastic approximation, sample average approximation, stochastic constraints, variance reduction techniques, model-based stochastic search methods and Markov decision processes. This single volume should serve as a reference for those already in the field and as a means for those new to the field for understanding and applying the main approaches. The intended audience includes researchers, practitioners and graduate students in the business/engineering fields of operations research, management science,...

  16. Importance Sampling for Stochastic Timed Automata

    DEFF Research Database (Denmark)

    Jegourel, Cyrille; Larsen, Kim Guldstrand; Legay, Axel

    2016-01-01

    We present an importance sampling framework that combines symbolic analysis and simulation to estimate the probability of rare reachability properties in stochastic timed automata. By means of symbolic exploration, our framework first identifies states that cannot reach the goal. A state-wise cha...

  17. Stochastic optimal control in infinite dimension dynamic programming and HJB equations

    CERN Document Server

    Fabbri, Giorgio; Święch, Andrzej

    2017-01-01

    Providing an introduction to stochastic optimal control in infinite dimension, this book gives a complete account of the theory of second-order HJB equations in infinite-dimensional Hilbert spaces, focusing on its applicability to associated stochastic optimal control problems. It features a general introduction to optimal stochastic control, including basic results (e.g. the dynamic programming principle) with proofs, and provides examples of applications. A complete and up-to-date exposition of the existing theory of viscosity solutions and regular solutions of second-order HJB equations in Hilbert spaces is given, together with an extensive survey of other methods, with a full bibliography. In particular, Chapter 6, written by M. Fuhrman and G. Tessitore, surveys the theory of regular solutions of HJB equations arising in infinite-dimensional stochastic control, via BSDEs. The book is of interest to both pure and applied researchers working in the control theory of stochastic PDEs, and in PDEs in infinite ...

  18. Semilinear Kolmogorov Equations and Applications to Stochastic Optimal Control

    International Nuclear Information System (INIS)

    Masiero, Federica

    2005-01-01

    Semilinear parabolic differential equations are solved in a mild sense in an infinite-dimensional Hilbert space. Applications to stochastic optimal control problems are studied by solving the associated Hamilton-Jacobi-Bellman equation. These results are applied to some controlled stochastic partial differential equations

  19. Optimal Stochastic Control Problem for General Linear Dynamical Systems in Neuroscience

    Directory of Open Access Journals (Sweden)

    Yan Chen

    2017-01-01

    Full Text Available This paper considers a d-dimensional stochastic optimization problem in neuroscience. Suppose the arm's movement trajectory is modeled by a high-order linear stochastic differential dynamic system in d-dimensional space; the optimal trajectory, velocity, and variance are explicitly obtained by using the stochastic control method, which allows us to analytically establish exact relationships between various quantities. Moreover, the optimal trajectory is almost a straight line for a reaching movement; the optimal velocity is bell-shaped; and the optimal variance is consistent with the experimental Fitts law, that is, the longer the time of a reaching movement, the higher the accuracy of arriving at the target position. The results can be directly applied to designing a reaching movement performed by a robotic arm in a more general environment.

  20. Stochastic bounded consensus tracking of leader-follower multi-agent systems with measurement noises based on sampled data with general sampling delay

    International Nuclear Information System (INIS)

    Wu Zhi-Hai; Peng Li; Xie Lin-Bo; Wen Ji-Wei

    2013-01-01

    In this paper we provide a unified framework for consensus tracking of leader-follower multi-agent systems with measurement noises based on sampled data with a general sampling delay. First, a stochastic bounded consensus tracking protocol based on sampled data with a general sampling delay is presented by employing the delay decomposition technique. Then, necessary and sufficient conditions are derived for guaranteeing leader-follower multi-agent systems with measurement noises and a time-varying reference state to achieve mean square bounded consensus tracking. The obtained results cover no sampling delay, a small sampling delay and a large sampling delay as three special cases. Last, simulations are provided to demonstrate the effectiveness of the theoretical results. (interdisciplinary physics and related areas of science and technology)

  1. Adaptive optics stochastic optical reconstruction microscopy (AO-STORM) by particle swarm optimization.

    Science.gov (United States)

    Tehrani, Kayvan F; Zhang, Yiwen; Shen, Ping; Kner, Peter

    2017-11-01

    Stochastic optical reconstruction microscopy (STORM) can achieve resolutions better than 20 nm when imaging single fluorescently labeled cells. However, when optical aberrations induced by larger biological samples degrade the point spread function (PSF), the localization accuracy and the number of localizations are both reduced, destroying the resolution of STORM. Adaptive optics (AO) can be used to correct the wavefront, restoring the high resolution of STORM. A challenge for AO-STORM microscopy is the development of robust optimization algorithms which can efficiently correct the wavefront from stochastic raw STORM images. Here we present the implementation of a particle swarm optimization (PSO) approach with a Fourier metric for real-time correction of wavefront aberrations during STORM acquisition. We apply our approach to imaging boutons 100 μm deep inside the central nervous system (CNS) of Drosophila melanogaster larvae, achieving a resolution of 146 nm.
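
    As a generic illustration of the particle swarm update (not the authors' implementation, and with a simple stand-in metric rather than the Fourier metric), the sketch below searches for a small coefficient vector that maximizes a hypothetical image-quality function.

```python
import numpy as np

def pso_maximize(metric, dim, n_particles=20, n_iter=100,
                 w=0.7, c1=1.5, c2=1.5, bounds=(-1.0, 1.0), seed=0):
    """Basic particle swarm optimization maximizing `metric` over a box."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    x = rng.uniform(lo, hi, (n_particles, dim))          # e.g. correction coefficients
    v = np.zeros_like(x)
    p_best, p_val = x.copy(), np.array([metric(p) for p in x])
    g_best = p_best[np.argmax(p_val)].copy()
    for _ in range(n_iter):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (p_best - x) + c2 * r2 * (g_best - x)
        x = np.clip(x + v, lo, hi)
        vals = np.array([metric(p) for p in x])
        improved = vals > p_val
        p_best[improved], p_val[improved] = x[improved], vals[improved]
        g_best = p_best[np.argmax(p_val)].copy()
    return g_best, p_val.max()

# Hypothetical stand-in metric: sharpness peaks at a particular coefficient vector
target = np.array([0.3, -0.2, 0.1, 0.4, -0.5])
sharpness = lambda c: -np.sum((c - target) ** 2)
best, val = pso_maximize(sharpness, dim=5)
print(np.round(best, 2), round(val, 4))
```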

  2. Necessary optimality conditions of the second order in a stochastic optimal control problem with delay argument

    Directory of Open Access Journals (Sweden)

    Rashad O. Mastaliev

    2016-12-01

    Full Text Available The optimal control problem for nonlinear stochastic systems whose mathematical model is given by an Ito stochastic differential equation with delay argument is considered. Assuming that the control region is open, we obtain necessary optimality conditions of the first and the second order by means of the first and the second variations (in the classical sense) of the quality functional. In the particular case we obtain the stochastic analog of the Legendre-Clebsch condition and some constructively verifiable consequences of the second-order necessary condition. We investigate the Legendre-Clebsch condition in the degenerate case and obtain necessary optimality conditions for a special control, in the classical sense.

  3. Fuzzy Stochastic Optimization Theory, Models and Applications

    CERN Document Server

    Wang, Shuming

    2012-01-01

    Covering in detail both theoretical and practical perspectives, this book is a self-contained and systematic depiction of current fuzzy stochastic optimization that deploys the fuzzy random variable as a core mathematical tool to model the integrated fuzzy random uncertainty. It proceeds in an orderly fashion from the requisite theoretical aspects of the fuzzy random variable to fuzzy stochastic optimization models and their real-life case studies.   The volume reflects the fact that randomness and fuzziness (or vagueness) are two major sources of uncertainty in the real world, with significant implications in a number of settings. In industrial engineering, management and economics, the chances are high that decision makers will be confronted with information that is simultaneously probabilistically uncertain and fuzzily imprecise, and optimization in the form of a decision must be made in an environment that is doubly uncertain, characterized by a co-occurrence of randomness and fuzziness. This book begins...

  4. A method for stochastic constrained optimization using derivative-free surrogate pattern search and collocation

    International Nuclear Information System (INIS)

    Sankaran, Sethuraman; Audet, Charles; Marsden, Alison L.

    2010-01-01

    Recent advances in coupling novel optimization methods to large-scale computing problems have opened the door to tackling a diverse set of physically realistic engineering design problems. A large computational overhead is associated with computing the cost function for most practical problems involving complex physical phenomena. Such problems are also plagued with uncertainties in a diverse set of parameters. We present a novel stochastic derivative-free optimization approach for tackling such problems. Our method extends the previously developed surrogate management framework (SMF) to allow for uncertainties in both simulation parameters and design variables. The stochastic collocation scheme is employed for stochastic variables whereas Kriging based surrogate functions are employed for the cost function. This approach is tested on four numerical optimization problems and is shown to have significant improvement in efficiency over traditional Monte-Carlo schemes. Problems with multiple probabilistic constraints are also discussed.

  5. Intelligent stochastic optimization routine for in-core fuel cycle design

    International Nuclear Information System (INIS)

    Parks, G.T.

    1988-01-01

    Any reactor fuel management strategy must specify the fuel design, batch sizes, loading configurations, and operational procedures for each cycle. To permit detailed design studies, the complex core characteristics must necessarily be computer modeled. Thus, the identification of an optimal fuel cycle design represents an optimization problem with a nonlinear objective function (OF), nonlinear safety constraints, many control variables, and no direct derivative information. Most available library routines cannot tackle such problems; this paper introduces an intelligent stochastic optimization routine that can. There has been considerable interest recently in the application of stochastic methods to difficult optimization problems, based on the statistical mechanics algorithms originally attributed to Metropolis. Previous work showed that, in optimizing the performance of a British advanced gas-cooled reactor fuel stringer, a rudimentary version of the Metropolis algorithm performed as efficiently as the only suitable routine in the Numerical Algorithms Group library. Since then the performance of the Metropolis algorithm has been considerably enhanced by the introduction of self-tuning capabilities by which the routine adjusts its control parameters and search pattern as it progresses. Both features can be viewed as examples of artificial intelligence, in which the routine uses the accumulation of data, or experience, to guide its future actions
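
    As a generic illustration of the Metropolis-type acceptance rule underlying such routines (without any reactor physics, self-tuning, or the actual fuel-cycle objective), the sketch below anneals a hypothetical discrete ordering problem with a geometric cooling schedule.

```python
import numpy as np

def metropolis_anneal(cost, neighbor, x0, t0=1.0, cooling=0.999, n_iter=20_000, seed=0):
    """Accept worse configurations with probability exp(-dE/T); T is slowly lowered."""
    rng = np.random.default_rng(seed)
    x, e, T = x0, cost(x0), t0
    best_x, best_e = x, e
    for _ in range(n_iter):
        y = neighbor(x, rng)
        de = cost(y) - e
        if de <= 0 or rng.random() < np.exp(-de / T):
            x, e = y, e + de
            if e < best_e:
                best_x, best_e = x, e
        T *= cooling                      # geometric cooling schedule
    return best_x, best_e

# Hypothetical discrete layout problem: order 12 items to minimize a random adjacency cost
rng0 = np.random.default_rng(1)
C = rng0.random((12, 12))
cost = lambda perm: float(sum(C[perm[i], perm[i + 1]] for i in range(len(perm) - 1)))

def neighbor(perm, rng):
    i, j = rng.choice(len(perm), 2, replace=False)   # swap two positions
    p = list(perm)
    p[i], p[j] = p[j], p[i]
    return tuple(p)

best, e = metropolis_anneal(cost, neighbor, tuple(range(12)))
print("best cost ~", round(e, 3))
```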

  6. Sampling from stochastic reservoir models constrained by production data

    Energy Technology Data Exchange (ETDEWEB)

    Hegstad, Bjoern Kaare

    1997-12-31

    When a petroleum reservoir is evaluated, it is important to forecast future production of oil and gas and to assess forecast uncertainty. This is done by defining a stochastic model for the reservoir characteristics, generating realizations from this model and applying a fluid flow simulator to the realizations. The reservoir characteristics define the geometry of the reservoir, initial saturation, petrophysical properties etc. This thesis discusses how to generate realizations constrained by production data, that is to say, the realizations should reproduce the observed production history of the petroleum reservoir within the uncertainty of these data. The topics discussed are: (1) Theoretical framework, (2) History matching, forecasting and forecasting uncertainty, (3) A three-dimensional test case, (4) Modelling transmissibility multipliers by Markov random fields, (5) Up scaling, (6) The link between model parameters, well observations and production history in a simple test case, (7) Sampling the posterior using optimization in a hierarchical model, (8) A comparison of Rejection Sampling and Metropolis-Hastings algorithm, (9) Stochastic simulation and conditioning by annealing in reservoir description, and (10) Uncertainty assessment in history matching and forecasting. 139 refs., 85 figs., 1 tab.

  7. Stochastic Dynamic AC Optimal Power Flow Based on a Multivariate Short-Term Wind Power Scenario Forecasting Model

    Directory of Open Access Journals (Sweden)

    Wenlei Bai

    2017-12-01

    Full Text Available The deterministic methods generally used to solve DC optimal power flow (OPF) do not fully capture the uncertainty information in wind power, and thus their solutions could be suboptimal. However, the stochastic dynamic AC OPF problem can be used to find an optimal solution by fully capturing the uncertainty information of wind power. That uncertainty information of future wind power can be well represented by short-term future wind power scenarios forecasted using the generalized dynamic factor model (GDFM), a novel multivariate statistical wind power forecasting model. Furthermore, the GDFM can accurately represent the spatial and temporal correlations among wind farms through the multivariate stochastic process. Fully capturing the uncertainty information in the spatially and temporally correlated GDFM scenarios can lead to a better AC OPF solution under a high penetration level of wind power. Since the GDFM is a factor analysis based model, the computational time can also be reduced. In order to further reduce the computational time, a modified artificial bee colony (ABC) algorithm is used to solve the AC OPF problem based on the GDFM forecasting scenarios. Using the modified ABC algorithm based on the GDFM forecasting scenarios has resulted in better AC OPF solutions on an IEEE 118-bus system at every hour for 24 h.

  8. Multiscale study on stochastic reconstructions of shale samples

    Science.gov (United States)

    Lili, J.; Lin, M.; Jiang, W. B.

    2016-12-01

    Shales are known to have multiscale pore systems, composed of macroscale fractures, micropores, and nanoscale pores within the gas- or oil-producing organic material. Shales are also fissile and laminated, and the heterogeneity in the horizontal direction is quite different from that in the vertical direction. Stochastic reconstructions are extremely useful in situations where three-dimensional information is costly and time consuming to obtain. Thus the purpose of our paper is to stochastically reconstruct equiprobable 3D models containing information from several scales. In this paper, macroscale and microscale images of shale structure in the Lower Silurian Longmaxi are obtained by X-ray microtomography, and nanoscale images are obtained by scanning electron microscopy. Each image is representative of all given scales and phases. In particular, the macroscale is four times coarser than the microscale, which in turn is four times lower in resolution than the nanoscale image. Secondly, the cross correlation-based simulation method (CCSIM) and the three-step sampling method are combined to generate stochastic reconstructions for each scale. It is important to point out that the boundary points of pore and matrix are selected based on a multiple-point connectivity function in the sampling process, so that the characteristics of the reconstructed image can be controlled indirectly. Thirdly, all images with the same resolution are developed through downscaling and upscaling by interpolation, and then we merge the multiscale categorical spatial data into a single 3D image with a predefined resolution (the microscale image). Thirty realizations are generated using the given images and the proposed method. The results reveal that the proposed method is capable of preserving the multiscale pore structure, both vertically and horizontally, which is necessary for accurate permeability prediction. The variogram curves and pore-size distribution for both the original 3D sample and the generated 3D realizations are compared.

  9. Feasibility of Stochastic Voltage/VAr Optimization Considering Renewable Energy Resources for Smart Grid

    Science.gov (United States)

    Momoh, James A.; Salkuti, Surender Reddy

    2016-06-01

    This paper proposes a stochastic optimization technique for solving the Voltage/VAr control problem including load demand and Renewable Energy Resources (RERs) variation. The RERs introduce stochastic behavior into the system inputs. Voltage/VAr control is one of the most important means of handling power system complexity and reliability, and hence a fundamental requirement for all utility companies. There is a need for a robust and efficient Voltage/VAr optimization technique to meet the peak demand and to reduce system losses. Voltages beyond the limits may damage costly substation devices as well as equipment at the consumer end. Moreover, the RERs introduce additional disturbances, and some of the RERs are not even capable of meeting the VAr demand. Therefore, there is a strong need for Voltage/VAr control in an RER environment. This paper aims at the development of an optimal scheme for Voltage/VAr control involving RERs. In this paper, the Latin Hypercube Sampling (LHS) method is used to cover the full range of variables by maximally satisfying the marginal distributions. A backward scenario reduction technique is used to reduce the number of scenarios effectively while maximally retaining the fitting accuracy of the samples. The developed optimization scheme is tested on the IEEE 24-bus Reliability Test System (RTS) considering the load demand and RERs variation.
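
    As a small illustration of the backward scenario-reduction step mentioned above (the scenarios, distance metric and target scenario count are hypothetical), the sketch below repeatedly deletes the scenario whose probability-weighted distance to its nearest neighbour is smallest and transfers its probability to that neighbour.

```python
import numpy as np

def backward_reduction(scenarios, probs, n_keep):
    """Backward reduction of a discrete scenario set.
    scenarios: (n, d) array, probs: (n,) array summing to 1."""
    keep = list(range(len(scenarios)))
    probs = probs.astype(float).copy()
    dist = np.linalg.norm(scenarios[:, None, :] - scenarios[None, :, :], axis=2)
    while len(keep) > n_keep:
        best_i, best_cost, best_j = None, np.inf, None
        for i in keep:
            others = [j for j in keep if j != i]
            j = min(others, key=lambda j: dist[i, j])   # nearest remaining neighbour
            cost = probs[i] * dist[i, j]                # cost of deleting scenario i
            if cost < best_cost:
                best_i, best_cost, best_j = i, cost, j
        probs[best_j] += probs[best_i]                  # transfer probability to neighbour
        keep.remove(best_i)
    return scenarios[keep], probs[keep]

# Hypothetical joint wind/solar/load scenarios (e.g. produced by Latin hypercube sampling)
rng = np.random.default_rng(0)
scen = rng.normal(size=(50, 3))
p = np.full(50, 1 / 50)
scen_r, p_r = backward_reduction(scen, p, n_keep=8)
print(scen_r.shape, p_r.sum())                          # (8, 3) and probabilities still sum to 1
```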

  10. Stochastic Optimal Control for Online Seller under Reputational Mechanisms

    Directory of Open Access Journals (Sweden)

    Milan Bradonjić

    2015-12-01

    Full Text Available In this work we propose and analyze a model which addresses the pulsing behavior of sellers in an online auction (store). This pulsing behavior is observed when sellers switch between advertising and processing states. We assert that a seller switches her state in order to maximize her profit, and further that this switch can be identified through the seller's reputation. We show that for each seller there is an optimal reputation, i.e., the reputation at which the seller should switch her state in order to maximize her total profit. We design a stochastic behavioral model for an online seller, which incorporates the dynamics of resource allocation and reputation. The design of the model builds on a stochastic advertising model from [1], used effectively in the Stochastic Optimal Control of Advertising [2]. This model of reputation is combined with the effect of online reputation on sales price, empirically verified in [3]. We derive the Hamilton-Jacobi-Bellman (HJB) differential equation, whose solution relates the optimal wealth level to a seller's reputation. We formulate both a full model and a reduced model with fewer parameters, both of which give the same qualitative description of the optimal seller behavior. Coincidentally, the reduced model has a closed-form analytical solution that we construct.

  11. Using genetic algorithm to solve a new multi-period stochastic optimization model

    Science.gov (United States)

    Zhang, Xin-Li; Zhang, Ke-Cun

    2009-09-01

    This paper presents a new asset allocation model based on the CVaR risk measure and transaction costs. Institutional investors manage their strategic asset mix over time to achieve favorable returns subject to various uncertainties, policy and legal constraints, and other requirements. One may use a multi-period portfolio optimization model in order to determine an optimal asset mix. Recently, an alternative stochastic programming model with simulated paths was proposed by Hibiki [N. Hibiki, A hybrid simulation/tree multi-period stochastic programming model for optimal asset allocation, in: H. Takahashi (Ed.), The Japanese Association of Financial Econometrics and Engineering, JAFFE Journal (2001) 89-119 (in Japanese); N. Hibiki, A hybrid simulation/tree stochastic optimization model for dynamic asset allocation, in: B. Scherer (Ed.), Asset and Liability Management Tools: A Handbook for Best Practice, Risk Books, 2003, pp. 269-294], which was called a hybrid model. However, transaction costs were not considered in that paper. In this paper, we improve Hibiki's model in the following aspects: (1) the risk measure CVaR is introduced to control the wealth-loss risk while maximizing the expected utility; (2) typical market imperfections such as short-sale constraints and proportional transaction costs are considered simultaneously; and (3) the application of a genetic algorithm to solve the resulting model is discussed in detail. Numerical results show the suitability and feasibility of our methodology.
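
    The abstract names CVaR as the risk control; as a generic illustration (not the authors' formulation), the empirical CVaR of a set of scenario losses can be estimated as the mean loss beyond the scenario Value-at-Risk, for example in Python:

    ```python
    import numpy as np

    def cvar(losses, alpha=0.95):
        """Empirical CVaR_alpha: mean loss over the worst (1 - alpha) fraction of scenarios."""
        losses = np.asarray(losses)
        var = np.quantile(losses, alpha)      # scenario Value-at-Risk
        return losses[losses >= var].mean()

    # Hypothetical scenario losses: for a standard normal loss, CVaR_0.95 is roughly 2.06.
    rng = np.random.default_rng(0)
    print(cvar(rng.normal(size=100_000), alpha=0.95))
    ```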

  12. Stochastic optimal control of single neuron spike trains

    DEFF Research Database (Denmark)

    Iolov, Alexandre; Ditlevsen, Susanne; Longtin, André

    2014-01-01

    stimulation of a neuron to achieve a target spike train under the physiological constraint to not damage tissue. Approach. We pose a stochastic optimal control problem to precisely specify the spike times in a leaky integrate-and-fire (LIF) model of a neuron with noise assumed to be of intrinsic or synaptic...... origin. In particular, we allow for the noise to be of arbitrary intensity. The optimal control problem is solved using dynamic programming when the controller has access to the voltage (closed-loop control), and using a maximum principle for the transition density when the controller only has access...... to the spike times (open-loop control). Main results. We have developed a stochastic optimal control algorithm to obtain precise spike times. It is applicable in both the supra-threshold and sub-threshold regimes, under open-loop and closed-loop conditions and with an arbitrary noise intensity; the accuracy...
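
    The controlled model described above is a leaky integrate-and-fire neuron driven by noise. As a minimal, hypothetical sketch of such a simulation (Euler-Maruyama discretization, arbitrary parameter values, and a constant stimulation standing in for an optimal control), one could write:

    ```python
    import numpy as np

    def simulate_lif(control, v_th=1.0, v_reset=0.0, tau=0.02, sigma=0.3,
                     dt=1e-4, t_end=1.0, rng=None):
        """Euler-Maruyama simulation of a noisy leaky integrate-and-fire neuron
        dV = (-V/tau + control(t)) dt + sigma dW, with reset at threshold.
        Returns the recorded spike times."""
        rng = np.random.default_rng(rng)
        v, t, spikes = v_reset, 0.0, []
        while t < t_end:
            dw = rng.normal(0.0, np.sqrt(dt))
            v += (-v / tau + control(t)) * dt + sigma * dw
            t += dt
            if v >= v_th:                 # threshold crossing: record a spike and reset
                spikes.append(t)
                v = v_reset
        return spikes

    # Constant stimulation as a placeholder for an optimal open-loop control.
    print(simulate_lif(lambda t: 60.0, rng=1))
    ```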

  13. Stochastic geometry for image analysis

    CERN Document Server

    Descombes, Xavier

    2013-01-01

    This book develops the stochastic geometry framework for image analysis purposes. Two main frameworks are described: marked point process and random closed set models. We derive the main issues involved in defining an appropriate model. The algorithms for sampling and optimizing the models, as well as for estimating parameters, are reviewed. Numerous applications, covering remote sensing images, biological and medical imaging, are detailed. This book provides all the necessary tools for developing an image analysis application based on modern stochastic modeling.

  14. Stochastic Averaging for Constrained Optimization With Application to Online Resource Allocation

    Science.gov (United States)

    Chen, Tianyi; Mokhtari, Aryan; Wang, Xin; Ribeiro, Alejandro; Giannakis, Georgios B.

    2017-06-01

    Existing approaches to resource allocation for today's stochastic networks are challenged to meet fast convergence and tolerable delay requirements. The present paper leverages advances in online learning to facilitate stochastic resource allocation tasks. By recognizing the central role of Lagrange multipliers, the underlying constrained optimization problem is formulated as a machine learning task involving both training and operational modes, with the goal of learning the sought multipliers in a fast and efficient manner. To this end, an order-optimal offline learning approach is developed first for batch training, and it is then generalized to the online setting with a procedure termed learn-and-adapt. The novel resource allocation protocol combines the benefits of stochastic approximation and statistical learning to obtain low-complexity online updates with learning errors close to the statistical accuracy limits, while still preserving adaptation performance, which in the stochastic network optimization context guarantees queue stability. Analysis and simulated tests demonstrate that the proposed data-driven approach improves the delay and convergence performance of existing resource allocation schemes.

  15. Stochastic maintenance optimization at Candu power plants

    International Nuclear Information System (INIS)

    Doyle, E.K.; Duchesne, T.; Lee, C.G.; Cho, D.I.

    2004-01-01

    The use of various innovative maintenance optimization techniques at Bruce has led to cost-effective preventive maintenance applications for complex systems, as previously reported at ICONE 6 in New Orleans (1996). Further refinement of the station maintenance strategy was evaluated via the applicability of statistical analysis of historical failure data. The viability of stochastic methods in Candu maintenance was illustrated at ICONE 10 in Washington DC (2002). The next phase consists of investigating the validity of using subjective elicitation techniques to obtain component lifetime distributions. This technique provides access to the elusive failure statistics, the lack of which is often referred to in the literature as the principal impediment preventing the use of stochastic methods in large industry. At the same time, the technique allows very valuable information to be captured from the fast-retiring 'baby boom generation'. Initial indications have been quite positive. The current reality of global competition necessitates the pursuit of all financial optimizations. The next construction phase in the power generation industry will soon begin on a worldwide basis. With the relatively high initial capital cost of new nuclear generation, all possible avenues of financial optimization must be evaluated and implemented. (authors)

  16. Stochastic Robust Mathematical Programming Model for Power System Optimization

    Energy Technology Data Exchange (ETDEWEB)

    Liu, Cong; Changhyeok, Lee; Haoyong, Chen; Mehrotra, Sanjay

    2016-01-01

    This paper presents a stochastic robust framework for two-stage power system optimization problems with uncertainty. The model optimizes the probabilistic expectation of different worst-case scenarios with different uncertainty sets. A case study of unit commitment shows the effectiveness of the proposed model and algorithms.

  17. Stochastic multi-objective model for optimal energy exchange optimization of networked microgrids with presence of renewable generation under risk-based strategies.

    Science.gov (United States)

    Gazijahani, Farhad Samadi; Ravadanegh, Sajad Najafi; Salehi, Javad

    2018-02-01

    The inherent volatility and unpredictable nature of renewable generation and load demand pose considerable challenges for energy exchange optimization of microgrids (MGs). To address these challenges, this paper proposes a new risk-based multi-objective energy exchange optimization for networked MGs from economic and reliability standpoints, under load consumption and renewable power generation uncertainties. In so doing, three different risk-based strategies are distinguished by using the conditional value at risk (CVaR) approach. The proposed model is specified with two distinct objective functions. The first function minimizes the operation and maintenance costs, the cost of power transactions between the upstream network and MGs, and the power loss cost, whereas the second function minimizes the energy not supplied (ENS) value. Furthermore, a stochastic scenario-based approach is incorporated in order to handle the uncertainty, and the Kantorovich distance scenario reduction method has been implemented to reduce the computational burden. Finally, the non-dominated sorting genetic algorithm (NSGA-II) is applied to minimize the objective functions simultaneously, and the best solution is extracted by a fuzzy satisfying method with respect to the risk-based strategies. To indicate the performance of the proposed model, it is applied to the modified IEEE 33-bus distribution system, and the obtained results show that the presented approach can be considered an efficient tool for optimal energy exchange optimization of MGs. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.

  18. A model based on stochastic dynamic programming for determining China's optimal strategic petroleum reserve policy

    International Nuclear Information System (INIS)

    Zhang Xiaobing; Fan Ying; Wei Yiming

    2009-01-01

    China's Strategic Petroleum Reserve (SPR) is currently being prepared. But how large the optimal stockpile size for China should be, what the best acquisition strategies are, how to release the reserve if a disruption occurs, and other related issues still need to be studied in detail. In this paper, we develop a stochastic dynamic programming model based on a total potential cost function of establishing SPRs to evaluate the optimal SPR policy for China. Using this model, empirical results are presented for the optimal size of China's SPR and the best acquisition and drawdown strategies for a few specific cases. The results show that with comprehensive consideration, the optimal SPR size for China is around 320 million barrels. This size is equivalent to about 90 days of net oil import amount in 2006 and should be reached in the year 2017, three years earlier than the national goal, which implies that the need for China to fill the SPR is probably more pressing; the best stockpile release action in a disruption is related to the disruption levels and expected continuation probabilities. The information provided by the results will be useful for decision makers.
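
    The abstract does not reproduce the recursion itself; the generic finite-horizon stochastic dynamic programming equation on which such a model rests (with assumed notation: s the stockpile level, a the acquisition/drawdown decision, xi_t the random disruption/price state, beta a discount factor) reads:

    ```latex
    V_t(s) \;=\; \min_{a \in A(s)} \; \mathbb{E}_{\xi_t}\!\left[\, c_t(s, a, \xi_t) \;+\; \beta\, V_{t+1}\big(s'(s, a, \xi_t)\big) \right],
    \qquad V_{T+1}(\cdot) \equiv 0 .
    ```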

  19. A numerical scheme for optimal transition paths of stochastic chemical kinetic systems

    International Nuclear Information System (INIS)

    Liu Di

    2008-01-01

    We present a new framework for finding the optimal transition paths of metastable stochastic chemical kinetic systems with large system size. The optimal transition paths are identified as the most probable paths according to the Large Deviation Theory of stochastic processes. Dynamical equations for the optimal transition paths are derived using the variational principle. A modified Minimum Action Method (MAM) is proposed as a numerical scheme to solve for the optimal transition paths. Applications to gene regulatory networks, such as the toggle switch model and the lactose operon model in Escherichia coli, are presented as numerical examples.

  20. Optimal Integration of Intermittent Renewables: A System LCOE Stochastic Approach

    Directory of Open Access Journals (Sweden)

    Carlo Lucheroni

    2018-03-01

    Full Text Available We propose a system-level approach to value the impact on costs of the integration of intermittent renewable generation in a power system, based on expected breakeven cost and breakeven cost risk. To do this, we carefully reconsider the definition of the Levelized Cost of Electricity (LCOE) when extended to non-dispatchable generation, by examining the extra costs and gains originated by the costly management of random power injections. We are thus led to define a ‘system LCOE’ as a system-dependent LCOE that properly takes intermittent generation into account. In order to include breakeven cost risk we further extend this deterministic approach to a stochastic setting, by introducing a ‘stochastic system LCOE’. This extension allows us to discuss the optimal integration of intermittent renewables from a broad, system-level point of view. This paper thus aims to provide power producers and policy makers with a new methodological scheme, still based on the LCOE but updating this valuation technique to current energy system configurations characterized by a large share of non-dispatchable production. Quantifying and optimizing the impact of intermittent renewables integration on power system costs, risk and CO2 emissions, the proposed methodology can be used as a powerful tool of analysis for assessing environmental and energy policies.
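
    For context, the conventional LCOE definition that the paper extends to a ‘system LCOE’ is the ratio of discounted lifetime costs to discounted lifetime energy; in standard (assumed) notation, with I_t, O_t and F_t the investment, operation and fuel costs in year t, E_t the energy produced and r the discount rate:

    ```latex
    \mathrm{LCOE} \;=\;
    \frac{\displaystyle\sum_{t=0}^{T} \frac{I_t + O_t + F_t}{(1+r)^{t}}}
         {\displaystyle\sum_{t=0}^{T} \frac{E_t}{(1+r)^{t}}} .
    ```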

  1. Reliability-Based Shape Optimization using Stochastic Finite Element Methods

    DEFF Research Database (Denmark)

    Enevoldsen, Ib; Sørensen, John Dalsgaard; Sigurdsson, G.

    1991-01-01

    stochastic fields (e.g. loads and material parameters such as Young's modulus and the Poisson ratio). In this case stochastic finite element techniques combined with FORM analysis can be used to obtain measures of the reliability of the structural systems, see Der Kiureghian & Ke (6) and Liu & Der Kiureghian...

  2. NLP model and stochastic multi-start optimization approach for heat exchanger networks

    International Nuclear Information System (INIS)

    Núñez-Serna, Rosa I.; Zamora, Juan M.

    2016-01-01

    Highlights: • An NLP model for the optimal design of heat exchanger networks is proposed. • The NLP model is developed from a stage-wise grid diagram representation. • A two-phase stochastic multi-start optimization methodology is utilized. • Improved network designs are obtained with different heat load distributions. • Structural changes and reductions in the number of heat exchangers are produced.
    Abstract: Heat exchanger network synthesis methodologies frequently identify good network structures, which nevertheless might be accompanied by suboptimal values of design variables. The objective of this work is to develop a nonlinear programming (NLP) model and an optimization approach that aim at identifying the best values for intermediate temperatures, sub-stream flow rate fractions, heat loads and areas for a given heat exchanger network topology. The NLP model that minimizes the total annual cost of the network is constructed based on a stage-wise grid diagram representation. To improve the possibilities of obtaining global optimal designs, a two-phase stochastic multi-start optimization algorithm is utilized for the solution of the developed model. The effectiveness of the proposed optimization approach is illustrated with the optimization of two network designs proposed in the literature for two well-known benchmark problems. Results show that from the addressed base network topologies it is possible to achieve improved network designs, with redistributions in exchanger heat loads that lead to reductions in total annual costs. The results also show that the optimization of a given network design sometimes leads to structural simplifications and reductions in the total number of heat exchangers of the network, thereby exposing alternative viable network topologies initially not anticipated.

  3. Computing the optimal path in stochastic dynamical systems

    International Nuclear Information System (INIS)

    Bauver, Martha; Forgoston, Eric; Billings, Lora

    2016-01-01

    In stochastic systems, one is often interested in finding the optimal path that maximizes the probability of escape from a metastable state or of switching between metastable states. Even for simple systems, it may be impossible to find an analytic form of the optimal path, and in high-dimensional systems, this is almost always the case. In this article, we formulate a constructive methodology that is used to compute the optimal path numerically. The method utilizes finite-time Lyapunov exponents, statistical selection criteria, and a Newton-based iterative minimizing scheme. The method is applied to four examples. The first example is a two-dimensional system that describes a single population with internal noise. This model has an analytical solution for the optimal path. The numerical solution found using our computational method agrees well with the analytical result. The second example is a more complicated four-dimensional system where our numerical method must be used to find the optimal path. The third example, although a seemingly simple two-dimensional system, demonstrates the success of our method in finding the optimal path where other numerical methods are known to fail. In the fourth example, the optimal path lies in six-dimensional space and demonstrates the power of our method in computing paths in higher-dimensional spaces.

  4. A Multi-Sensor RSS Spatial Sensing-Based Robust Stochastic Optimization Algorithm for Enhanced Wireless Tethering

    Science.gov (United States)

    Parasuraman, Ramviyas; Fabry, Thomas; Molinari, Luca; Kershaw, Keith; Di Castro, Mario; Masi, Alessandro; Ferre, Manuel

    2014-01-01

    The reliability of wireless communication in a network of mobile wireless robot nodes depends on the received radio signal strength (RSS). When the robot nodes are deployed in hostile environments with ionizing radiations (such as in some scientific facilities), there is a possibility that some electronic components may fail randomly (due to radiation effects), which causes problems in wireless connectivity. The objective of this paper is to maximize robot mission capabilities by maximizing the wireless network capacity and to reduce the risk of communication failure. Thus, in this paper, we consider a multi-node wireless tethering structure called the “server-relay-client” framework that uses (multiple) relay nodes in between a server and a client node. We propose a robust stochastic optimization (RSO) algorithm using a multi-sensor-based RSS sampling method at the relay nodes to efficiently improve and balance the RSS between the source and client nodes to improve the network capacity and to provide redundant networking abilities. We use pre-processing techniques, such as exponential moving averaging and spatial averaging filters on the RSS data for smoothing. We apply a receiver spatial diversity concept and employ a position controller on the relay node using a stochastic gradient ascent method for self-positioning the relay node to achieve the RSS balancing task. The effectiveness of the proposed solution is validated by extensive simulations and field experiments in CERN facilities. For the field trials, we used a youBot mobile robot platform as the relay node, and two stand-alone Raspberry Pi computers as the client and server nodes. The algorithm has been proven to be robust to noise in the radio signals and to work effectively even under non-line-of-sight conditions. PMID:25615734
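
    One of the pre-processing steps named above is exponential moving averaging of the noisy RSS samples. A minimal sketch of such a filter follows; the smoothing factor and sample values are assumptions for illustration, not values from the paper.

    ```python
    def ema(samples, alpha=0.2):
        """Exponential moving average filter for noisy RSS samples (dBm).
        alpha in (0, 1]: larger values track changes faster but smooth less."""
        smoothed, state = [], None
        for x in samples:
            state = x if state is None else alpha * x + (1.0 - alpha) * state
            smoothed.append(state)
        return smoothed

    # Hypothetical RSS readings in dBm from one antenna of the relay node.
    print(ema([-62.0, -65.5, -61.2, -70.3, -64.8]))
    ```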

  5. A Multi-Sensor RSS Spatial Sensing-Based Robust Stochastic Optimization Algorithm for Enhanced Wireless Tethering

    Directory of Open Access Journals (Sweden)

    Ramviyas Parasuraman

    2014-12-01

    Full Text Available The reliability of wireless communication in a network of mobile wireless robot nodes depends on the received radio signal strength (RSS). When the robot nodes are deployed in hostile environments with ionizing radiations (such as in some scientific facilities), there is a possibility that some electronic components may fail randomly (due to radiation effects), which causes problems in wireless connectivity. The objective of this paper is to maximize robot mission capabilities by maximizing the wireless network capacity and to reduce the risk of communication failure. Thus, in this paper, we consider a multi-node wireless tethering structure called the “server-relay-client” framework that uses (multiple) relay nodes in between a server and a client node. We propose a robust stochastic optimization (RSO) algorithm using a multi-sensor-based RSS sampling method at the relay nodes to efficiently improve and balance the RSS between the source and client nodes to improve the network capacity and to provide redundant networking abilities. We use pre-processing techniques, such as exponential moving averaging and spatial averaging filters on the RSS data for smoothing. We apply a receiver spatial diversity concept and employ a position controller on the relay node using a stochastic gradient ascent method for self-positioning the relay node to achieve the RSS balancing task. The effectiveness of the proposed solution is validated by extensive simulations and field experiments in CERN facilities. For the field trials, we used a youBot mobile robot platform as the relay node, and two stand-alone Raspberry Pi computers as the client and server nodes. The algorithm has been proven to be robust to noise in the radio signals and to work effectively even under non-line-of-sight conditions.

  6. Optimal timing of joint replacement using mathematical programming and stochastic programming models.

    Science.gov (United States)

    Keren, Baruch; Pliskin, Joseph S

    2011-12-01

    The optimal timing for performing radical medical procedures such as joint (e.g., hip) replacement must be seriously considered. In this paper we show that under deterministic assumptions the optimal timing for joint replacement is the solution of a mathematical programming problem, and under stochastic assumptions the optimal timing can be formulated as a stochastic programming problem. We formulate deterministic and stochastic models that can serve as decision support tools. The results show that the benefit from joint replacement surgery is heavily dependent on timing. Moreover, for a special case where the patient's remaining life is normally distributed along with a normally distributed survival of the new joint, the expected benefit function from surgery is completely solved. This enables practitioners to draw the expected benefit graph, to find the optimal timing, to evaluate the benefit for each patient, to set priorities among patients and to decide if and when joint replacement should be performed.

  7. Efficient Multilevel and Multi-index Sampling Methods in Stochastic Differential Equations

    KAUST Repository

    Haji-Ali, Abdul Lateef

    2016-05-22

    of this thesis is the novel Multi-index Monte Carlo (MIMC) method, which is an extension of MLMC to high-dimensional problems with significant computational savings. Under reasonable assumptions on the weak and variance convergence, which are related to the mixed regularity of the underlying problem and the discretization method, the order of the computational complexity of MIMC is, at worst up to a logarithmic factor, independent of the dimensionality of the underlying parametric equation. We also apply the same multi-index methodology to another sampling method, namely the Stochastic Collocation method. Hence, the novel Multi-index Stochastic Collocation method is proposed and is shown to be more efficient in problems with sufficient mixed regularity than our novel MIMC method and other standard methods. Finally, MIMC is applied to approximate quantities of interest of stochastic particle systems in the mean-field limit, as the number of particles tends to infinity. To approximate these quantities of interest up to an error tolerance, TOL, MIMC has a computational complexity of O(TOL^-2 log(TOL)^2). This complexity is achieved by building a hierarchy based on two discretization parameters: the number of time steps in a Milstein scheme and the number of particles in the particle system. Moreover, we use a partitioning estimator to increase the correlation between two stochastic particle systems with different sizes. In comparison, the optimal computational complexity of MLMC in this case is O(TOL^-3) and the computational complexity of Monte Carlo is O(TOL^-4).
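
    To make the multilevel idea concrete, the following toy Python sketch implements a plain MLMC estimator on a scalar SDE with an Euler discretization; it is illustrative only and uses neither the Milstein scheme nor the particle-system hierarchy of the thesis.

    ```python
    import numpy as np

    def mlmc_estimate(sampler, n_per_level):
        """Multilevel Monte Carlo estimator: sum over levels of the sample means of
        coupled fine/coarse differences (a telescoping sum of corrections)."""
        return sum(np.mean(sampler(level, n)) for level, n in enumerate(n_per_level))

    # Toy problem: E[X(1)] for dX = 0.05 X dt + 0.2 X dW, X(0) = 1, Euler scheme with
    # 2**level time steps; the exact answer is exp(0.05), about 1.051.
    R, S = 0.05, 0.2
    rng = np.random.default_rng(0)

    def euler(dw, dt):
        x = np.ones(dw.shape[0])
        for k in range(dw.shape[1]):
            x *= 1.0 + R * dt + S * dw[:, k]
        return x

    def sampler(level, n):
        steps = 2 ** level
        dw = rng.normal(0.0, np.sqrt(1.0 / steps), size=(n, steps))
        if level == 0:
            return euler(dw, 1.0)
        fine = euler(dw, 1.0 / steps)
        # The coarse path re-uses the same Brownian increments summed in pairs,
        # which is what keeps the level differences small.
        coarse = euler(dw.reshape(n, steps // 2, 2).sum(axis=2), 2.0 / steps)
        return fine - coarse

    print(mlmc_estimate(sampler, [20_000, 10_000, 5_000]))
    ```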

  8. Smooth Solutions to Optimal Investment Models with Stochastic Volatilities and Portfolio Constraints

    International Nuclear Information System (INIS)

    Pham, H.

    2002-01-01

    This paper deals with an extension of Merton's optimal investment problem to a multidimensional model with stochastic volatility and portfolio constraints. The classical dynamic programming approach leads to a characterization of the value function as a viscosity solution of the highly nonlinear associated Bellman equation. A logarithmic transformation expresses the value function in terms of the solution to a semilinear parabolic equation with quadratic growth on the derivative term. Using a stochastic control representation and some approximations, we prove the existence of a smooth solution to this semilinear equation. An optimal portfolio is shown to exist, and is expressed in terms of the classical solution to this semilinear equation. This reduction is useful for studying numerical schemes for both the value function and the optimal portfolio. We illustrate our results with several examples of stochastic volatility models popular in the financial literature

  9. Optimal Stochastic Advertising Strategies for the U.S. Beef Industry

    OpenAIRE

    Kun C. Lee; Stanley Schraufnagel; Earl O. Heady

    1982-01-01

    An important decision variable in the promotional strategy for the beef sector is the optimal level of advertising expenditures over time. Optimal stochastic and deterministic advertising expenditures are derived for the U.S. beef industry for the period 1966 through 1980. They are compared with historical levels and gains realized by optimal advertising strategies are measured. Finally, the optimal advertising expenditures in the future are forecasted.

  10. Stochastic search, optimization and regression with energy applications

    Science.gov (United States)

    Hannah, Lauren A.

    models. We evaluate DP-GLM on several data sets, comparing it to modern methods of nonparametric regression like CART, Bayesian trees and Gaussian processes. Compared to existing techniques, the DP-GLM provides a single model (and corresponding inference algorithms) that performs well in many regression settings. Finally, we study convex stochastic search problems where a noisy objective function value is observed after a decision is made. There are many stochastic search problems whose behavior depends on an exogenous state variable which affects the shape of the objective function. Currently, there is no general purpose algorithm to solve this class of problems. We use nonparametric density estimation to take observations from the joint state-outcome distribution and use them to infer the optimal decision for a given query state. We propose two solution methods that depend on the problem characteristics: function-based and gradient-based optimization. We examine two weighting schemes, kernel-based weights and Dirichlet process-based weights, for use with the solution methods. The weights and solution methods are tested on a synthetic multi-product newsvendor problem and the hour-ahead wind commitment problem. Our results show that in some cases Dirichlet process weights offer substantial benefits over kernel based weights and more generally that nonparametric estimation methods provide good solutions to otherwise intractable problems.

  11. A Proposed Stochastic Finite Difference Approach Based on Homogenous Chaos Expansion

    Directory of Open Access Journals (Sweden)

    O. H. Galal

    2013-01-01

    Full Text Available This paper proposes a stochastic finite difference approach based on homogeneous chaos expansion (SFDHC). The approach can handle time-dependent nonlinear as well as linear systems with deterministic or stochastic initial and boundary conditions. In this approach, the included stochastic parameters are modeled as second-order stochastic processes and are expanded using the Karhunen-Loève expansion, while the response function is approximated using homogeneous chaos expansion. Galerkin projection is used to convert the original stochastic partial differential equation (PDE) into a set of coupled deterministic partial differential equations, which are then solved using the finite difference method. Two well-known equations were used to validate the efficiency of the proposed method: the linear diffusion equation with a stochastic parameter, and the nonlinear Burgers' equation with a stochastic parameter and stochastic initial and boundary conditions. In both examples, the probability distribution function of the response showed close conformity to the results obtained from Monte Carlo simulation, at an optimized computational cost.

  12. RES: Regularized Stochastic BFGS Algorithm

    Science.gov (United States)

    Mokhtari, Aryan; Ribeiro, Alejandro

    2014-12-01

    RES, a regularized stochastic version of the Broyden-Fletcher-Goldfarb-Shanno (BFGS) quasi-Newton method, is proposed to solve convex optimization problems with stochastic objectives. The use of stochastic gradient descent algorithms is widespread, but the number of iterations required to approximate optimal arguments can be prohibitive in high-dimensional problems. Application of second-order methods, on the other hand, is impracticable because computation of objective function Hessian inverses incurs excessive computational cost. BFGS modifies gradient descent by introducing a Hessian approximation matrix computed from finite gradient differences. RES utilizes stochastic gradients in lieu of deterministic gradients for both the determination of descent directions and the approximation of the objective function's curvature. Since stochastic gradients can be computed at manageable computational cost, RES is realizable and retains the convergence rate advantages of its deterministic counterparts. Convergence results show that lower and upper bounds on the Hessian eigenvalues of the sample functions are sufficient to guarantee convergence to optimal arguments. Numerical experiments showcase reductions in convergence time relative to stochastic gradient descent algorithms and non-regularized stochastic versions of BFGS. An application of RES to the implementation of support vector machines is developed.
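
    For reference, the deterministic BFGS curvature update that RES adapts is the textbook secant update below; the paper replaces the gradients with stochastic gradients and adds a regularization term, neither of which is shown here.

    ```latex
    B_{k+1} \;=\; B_k \;-\; \frac{B_k s_k s_k^{\top} B_k}{s_k^{\top} B_k s_k} \;+\; \frac{y_k y_k^{\top}}{y_k^{\top} s_k},
    \qquad s_k = x_{k+1} - x_k, \quad y_k = \nabla f(x_{k+1}) - \nabla f(x_k).
    ```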

  13. Estimation of an Optimal Stimulus Amplitude for Using Vestibular Stochastic Stimulation to Improve Balance Function

    Science.gov (United States)

    Goel, R.; Kofman, I.; DeDios, Y. E.; Jeevarajan, J.; Stepanyan, V.; Nair, M.; Congdon, S.; Fregia, M.; Peters, B.; Cohen, H.; hide

    2015-01-01

    Sensorimotor changes such as postural and gait instabilities can affect the functional performance of astronauts when they transition across different gravity environments. We are developing a method, based on stochastic resonance (SR), to enhance information transfer by applying non-zero levels of external noise on the vestibular system (vestibular stochastic resonance, VSR). The goal of this project was to determine optimal levels of stimulation for SR applications by using a defined vestibular threshold of motion detection.

  14. Optimal management strategies in variable environments: Stochastic optimal control methods

    Science.gov (United States)

    Williams, B.K.

    1985-01-01

    Dynamic optimization was used to investigate the optimal defoliation of salt desert shrubs in north-western Utah. Management was formulated in the context of optimal stochastic control theory, with objective functions composed of discounted or time-averaged biomass yields. Climatic variability and community patterns of salt desert shrublands make the application of stochastic optimal control both feasible and necessary. A primary production model was used to simulate shrub responses and harvest yields under a variety of climatic regimes and defoliation patterns. The simulation results then were used in an optimization model to determine optimal defoliation strategies. The latter model encodes an algorithm for finite state, finite action, infinite discrete time horizon Markov decision processes. Three questions were addressed: (i) What effect do changes in weather patterns have on optimal management strategies? (ii) What effect does the discounting of future returns have? (iii) How do the optimal strategies perform relative to certain fixed defoliation strategies? An analysis was performed for the three shrub species, winterfat (Ceratoides lanata), shadscale (Atriplex confertifolia) and big sagebrush (Artemisia tridentata). In general, the results indicate substantial differences among species in optimal control strategies, which are associated with differences in physiological and morphological characteristics. Optimal policies for big sagebrush varied less with variation in climate, reserve levels and discount rates than did either shadscale or winterfat. This was attributed primarily to the overwintering of photosynthetically active tissue and to metabolic activity early in the growing season. Optimal defoliation of shadscale and winterfat generally was more responsive to differences in plant vigor and climate, reflecting the sensitivity of these species to utilization and replenishment of carbohydrate reserves. Similarities could be seen in the influence of both
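
    The optimization model referred to above encodes a finite-state, finite-action, infinite-horizon Markov decision process. A minimal sketch of discounted value iteration for such a process is given below; the two-state, two-action data (e.g. defoliate vs. rest) are hypothetical, not values from the study.

    ```python
    import numpy as np

    def value_iteration(P, R, gamma=0.95, tol=1e-8):
        """Solve a finite MDP by value iteration.
        P[a] is the |S| x |S| transition matrix under action a,
        R[a] is the |S| vector of expected immediate rewards under action a,
        gamma is the discount factor. Returns the optimal values and policy."""
        v = np.zeros(R.shape[1])
        while True:
            q = R + gamma * (P @ v)          # action values, shape (n_actions, n_states)
            v_new = q.max(axis=0)
            if np.max(np.abs(v_new - v)) < tol:
                return v_new, q.argmax(axis=0)
            v = v_new

    # Hypothetical 2-state, 2-action example.
    P = np.array([[[0.7, 0.3], [0.4, 0.6]],     # transitions under action 0
                  [[0.9, 0.1], [0.2, 0.8]]])    # transitions under action 1
    R = np.array([[5.0, 1.0],                   # rewards under action 0
                  [3.0, 2.0]])                  # rewards under action 1
    print(value_iteration(P, R))
    ```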

  15. Generating optimized stochastic power management strategies for electric car components

    Energy Technology Data Exchange (ETDEWEB)

    Fruth, Matthias [TraceTronic GmbH, Dresden (Germany); Bastian, Steve [Technische Univ. Dresden (Germany)

    2012-11-01

    With the increasing prevalence of electric vehicles, reducing the power consumption of car components becomes a necessity. For the example of a novel traffic-light assistance system, which makes speed recommendations based on the expected length of red-light phases, power-management strategies are used to control under which conditions radio communication, positioning systems and other components are switched to low-power (e.g. sleep) or high-power (e.g. idle/busy) states. We apply dynamic power management, an optimization technique well-known from other domains, in order to compute energy-optimal power-management strategies, sometimes resulting in these strategies being stochastic. On the example of the traffic-light assistant, we present a MATLAB/Simulink-implemented framework for the generation, simulation and formal analysis of optimized power-management strategies, which is based on this technique. We study capabilities and limitations of this approach and sketch further applications in the automotive domain. (orig.)

  16. Annealing evolutionary stochastic approximation Monte Carlo for global optimization

    KAUST Repository

    Liang, Faming

    2010-01-01

    outperform simulated annealing, the genetic algorithm, annealing stochastic approximation Monte Carlo, and some other metaheuristics in function optimization. © 2010 Springer Science+Business Media, LLC.

  17. Adaptively Constrained Stochastic Model Predictive Control for the Optimal Dispatch of Microgrid

    Directory of Open Access Journals (Sweden)

    Xiaogang Guo

    2018-01-01

    Full Text Available In this paper, an adaptively constrained stochastic model predictive control (MPC) is proposed to achieve less-conservative coordination between energy storage units and uncertain renewable energy sources (RESs) in a microgrid (MG). Besides the economic objective of MG operation, the limits of state-of-charge (SOC) and discharging/charging power of the energy storage unit are formulated as chance constraints when accommodating uncertainties of RESs, considering that mild violations of these constraints are allowed during long-term operation, and a closed-loop online update strategy is performed to adaptively tighten or relax the constraints according to the actual deviation of the violation level from the desired one as well as the current change rate of the deviation probability. Numerical studies show that the proposed adaptively constrained stochastic MPC for MG optimal operation is much less conservative compared with the scenario-optimization-based robust MPC, and also presents better convergence to the desired constraint violation level than other online update strategies.
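
    The SOC and power limits described above are chance constraints; in generic form (assumed notation, not the paper's exact formulation), with epsilon the tolerated violation probability:

    ```latex
    \Pr\big( \mathrm{SOC}_{\min} \le \mathrm{SOC}_t \le \mathrm{SOC}_{\max} \big) \;\ge\; 1 - \varepsilon,
    \qquad
    \Pr\big( |P^{\mathrm{batt}}_t| \le P_{\max} \big) \;\ge\; 1 - \varepsilon .
    ```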

  18. A penalty guided stochastic fractal search approach for system reliability optimization

    International Nuclear Information System (INIS)

    Mellal, Mohamed Arezki; Zio, Enrico

    2016-01-01

    Modern industry requires components and systems with high reliability levels. In this paper, we address the system reliability optimization problem. A penalty guided stochastic fractal search approach is developed for solving reliability allocation, redundancy allocation, and reliability–redundancy allocation problems. Numerical results of ten case studies are presented as benchmark problems for highlighting the superiority of the proposed approach compared to others from literature. - Highlights: • System reliability optimization is investigated. • A penalty guided stochastic fractal search approach is developed. • Results of ten case studies are compared with previously published methods. • Performance of the approach is demonstrated.

  19. Stochastic Optimal Prediction with Application to Averaged Euler Equations

    Energy Technology Data Exchange (ETDEWEB)

    Bell, John [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Chorin, Alexandre J. [Univ. of California, Berkeley, CA (United States); Crutchfield, William [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States)

    2017-04-24

    Optimal prediction (OP) methods compensate for a lack of resolution in the numerical solution of complex problems through the use of an invariant measure as a prior measure in the Bayesian sense. In first-order OP, unresolved information is approximated by its conditional expectation with respect to the invariant measure. In higher-order OP, unresolved information is approximated by a stochastic estimator, leading to a system of random or stochastic differential equations. We explain the ideas through a simple example, and then apply them to the solution of Averaged Euler equations in two space dimensions.

  20. Optimal configuration of microstructure in ferroelectric materials by stochastic optimization

    Science.gov (United States)

    Jayachandran, K. P.; Guedes, J. M.; Rodrigues, H. C.

    2010-07-01

    An optimization procedure determining the ideal configuration at the microstructural level of ferroelectric (FE) materials is applied to maximize piezoelectricity. Piezoelectricity in ceramic FEs differs significantly from that of single crystals because of the presence of crystallites (grains) possessing crystallographic axes aligned imperfectly. The piezoelectric properties of a polycrystalline (ceramic) FE is inextricably related to the grain orientation distribution (texture). The set of combination of variables, known as solution space, which dictates the texture of a ceramic is unlimited and hence the choice of the optimal solution which maximizes the piezoelectricity is complicated. Thus, a stochastic global optimization combined with homogenization is employed for the identification of the optimal granular configuration of the FE ceramic microstructure with optimum piezoelectric properties. The macroscopic equilibrium piezoelectric properties of polycrystalline FE is calculated using mathematical homogenization at each iteration step. The configuration of grains characterized by its orientations at each iteration is generated using a randomly selected set of orientation distribution parameters. The optimization procedure applied to the single crystalline phase compares well with the experimental data. Apparent enhancement of piezoelectric coefficient d33 is observed in an optimally oriented BaTiO3 single crystal. Based on the good agreement of results with the published data in single crystals, we proceed to apply the methodology in polycrystals. A configuration of crystallites, simultaneously constraining the orientation distribution of the c-axis (polar axis) while incorporating ab-plane randomness, which would multiply the overall piezoelectricity in ceramic BaTiO3 is also identified. The orientation distribution of the c-axes is found to be a narrow Gaussian distribution centered around 45°. The piezoelectric coefficient in such a ceramic is found to

  1. Real-Time Demand Side Management Algorithm Using Stochastic Optimization

    Directory of Open Access Journals (Sweden)

    Moses Amoasi Acquah

    2018-05-01

    Full Text Available A demand side management technique is deployed along with battery energy-storage systems (BESS) to lower the electricity cost by mitigating the peak load of a building. Most of the existing methods rely on manual operation of the BESS, or even an elaborate building energy-management system resorting to a deterministic method that is susceptible to unforeseen growth in demand. In this study, we propose a real-time optimal operating strategy for BESS based on density demand forecast and stochastic optimization. This method takes into consideration uncertainties in demand when accounting for an optimal BESS schedule, making it robust compared to the deterministic case. The proposed method is verified and tested against existing algorithms. Data obtained from a real site in South Korea is used for verification and testing. The results show that the proposed method is effective, even for the cases where the forecasted demand deviates from the observed demand.

  2. Optimal Tax Reduction by Depreciation : A Stochastic Model

    NARCIS (Netherlands)

    Berg, M.; De Waegenaere, A.M.B.; Wielhouwer, J.L.

    1996-01-01

    This paper focuses on the choice of a depreciation method, when trying to minimize the expected value of the present value of future tax payments. In a quite general model that allows for stochastic future cash-flows and a tax structure with tax brackets, we determine the optimal choice between the

  3. Stochastic optimization of loading pattern for PWR

    International Nuclear Information System (INIS)

    Smuc, T.; Pevec, D.

    1994-01-01

    The application of stochastic optimization methods in solving in-core fuel management problems is restrained by the need for a large number of proposed solutions (loading patterns) if a high-quality final solution is wanted. Proposed loading patterns have to be evaluated by a core neutronics simulator, which can impose unrealistic computer time requirements. A new loading pattern optimization code, Monte Carlo Loading Pattern Search, has been developed by coupling the simulated annealing optimization algorithm with a fast one-and-a-half-dimensional core depletion simulator. The structure of the optimization method provides more efficient performance and allows the user to employ precious experience in the search process, thus reducing the search space size. Hereinafter, we discuss the characteristics of the method and illustrate them with the results obtained by solving the PWR reload problem. (authors). 7 refs., 1 tab., 1 fig

  4. A Smoothing Algorithm for a New Two-Stage Stochastic Model of Supply Chain Based on Sample Average Approximation

    OpenAIRE

    Liu Yang; Yao Xiong; Xiao-jiao Tong

    2017-01-01

    We construct a new two-stage stochastic model of a supply chain with multiple factories and distributors for a perishable product. By introducing a second-order stochastic dominance (SSD) constraint, we can describe the preference consistency of the risk taker while minimizing the expected cost of the company. To solve this problem, we equivalently convert it into a one-stage stochastic model; then we use the sample average approximation (SAA) method to approximate the expected values of the underlying r...
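
    The SAA step mentioned here follows the generic pattern of replacing an expectation by an average over N sampled scenarios (generic form, not the authors' specific supply chain model):

    ```latex
    \min_{x \in X} \; \mathbb{E}\big[ F(x, \xi) \big]
    \;\;\approx\;\;
    \min_{x \in X} \; \frac{1}{N} \sum_{i=1}^{N} F\big(x, \xi^{(i)}\big).
    ```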

  5. Decomposition and (importance) sampling techniques for multi-stage stochastic linear programs

    Energy Technology Data Exchange (ETDEWEB)

    Infanger, G.

    1993-11-01

    The difficulty of solving large-scale multi-stage stochastic linear programs arises from the sheer number of scenarios associated with numerous stochastic parameters. The number of scenarios grows exponentially with the number of stages and problems get easily out of hand even for very moderate numbers of stochastic parameters per stage. Our method combines dual (Benders) decomposition with Monte Carlo sampling techniques. We employ importance sampling to efficiently obtain accurate estimates of both expected future costs and gradients and right-hand sides of cuts. The method enables us to solve practical large-scale problems with many stages and numerous stochastic parameters per stage. We discuss the theory of sharing and adjusting cuts between different scenarios in a stage. We derive probabilistic lower and upper bounds, where we use importance path sampling for the upper bound estimation. Initial numerical results turned out to be promising.

  6. Counting, enumerating and sampling of execution plans in a cost-based query optimizer

    NARCIS (Netherlands)

    F. Waas; C.A. Galindo-Legaria

    1999-01-01

    textabstractTesting an SQL database system by running large sets of deterministic or stochastic SQL statements is common practice in commercial database development. However, code defects often remain undetected as the query optimizer's choice of an execution plan is not only depending on

  7. Information-theoretic model selection for optimal prediction of stochastic dynamical systems from data

    Science.gov (United States)

    Darmon, David

    2018-03-01

    In the absence of mechanistic or phenomenological models of real-world systems, data-driven models become necessary. The discovery of various embedding theorems in the 1980s and 1990s motivated a powerful set of tools for analyzing deterministic dynamical systems via delay-coordinate embeddings of observations of their component states. However, in many branches of science, the condition of operational determinism is not satisfied, and stochastic models must be brought to bear. For such stochastic models, the tool set developed for delay-coordinate embedding is no longer appropriate, and a new toolkit must be developed. We present an information-theoretic criterion, the negative log-predictive likelihood, for selecting the embedding dimension for a predictively optimal data-driven model of a stochastic dynamical system. We develop a nonparametric estimator for the negative log-predictive likelihood and compare its performance to a recently proposed criterion based on active information storage. Finally, we show how the output of the model selection procedure can be used to compare candidate predictors for a stochastic system to an information-theoretic lower bound.

  8. Stochastic network optimization with application to communication and queueing systems

    CERN Document Server

    Neely, Michael

    2010-01-01

    This text presents a modern theory of analysis, control, and optimization for dynamic networks. Mathematical techniques of Lyapunov drift and Lyapunov optimization are developed and shown to enable constrained optimization of time averages in general stochastic systems. The focus is on communication and queueing systems, including wireless networks with time-varying channels, mobility, and randomly arriving traffic. A simple drift-plus-penalty framework is used to optimize time averages such as throughput, throughput-utility, power, and distortion. Explicit performance-delay tradeoffs are prov
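
    The drift-plus-penalty rule referred to in this blurb chooses, in every slot, the control action minimizing a weighted penalty plus queue-weighted net arrivals; schematically, with assumed notation (Q_i queue backlogs, a_i arrivals, b_i service opportunities, V a penalty weight):

    ```latex
    \text{choose the feasible action minimizing}\qquad
    V \cdot \mathrm{penalty}(t) \;+\; \sum_{i} Q_i(t)\,\big[ a_i(t) - b_i(t) \big].
    ```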

  9. A two-stage stochastic rule-based model to determine pre-assembly buffer content

    Science.gov (United States)

    Gunay, Elif Elcin; Kula, Ufuk

    2018-01-01

    This study considers the instant decision-making needs of automobile manufacturers for resequencing vehicles before final assembly (FA). We propose a rule-based two-stage stochastic model to determine the number of spare vehicles that should be kept in the pre-assembly buffer to restore the sequence altered by paint defects and upstream department constraints. The first stage of the model decides the spare vehicle quantities, while the second stage recovers the scrambled sequence with respect to pre-defined rules. The problem is solved by the sample average approximation (SAA) algorithm. We conduct a numerical study to compare the solutions of the heuristic model with the optimal ones and provide the following insights: (i) as the mismatch between the paint entrance and scheduled sequences decreases, the rule-based heuristic model recovers the scrambled sequence as well as the optimal resequencing model; (ii) the rule-based model is more sensitive to the mismatch between the paint entrance and scheduled sequences when recovering the scrambled sequence; (iii) as the defect rate increases, the difference in recovery effectiveness between the rule-based heuristic and optimal solutions increases; (iv) as buffer capacity increases, the recovery effectiveness of the optimization model outperforms the heuristic model; and (v) as expected, the rule-based model holds more inventory than the optimization model.
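
    As a generic illustration of the sample average approximation idea used above (on a toy newsvendor problem, not the authors' resequencing model), the SAA problem replaces the expected profit by its average over sampled demand scenarios:

    ```python
    import numpy as np

    def saa_newsvendor(demand_samples, price=10.0, cost=6.0):
        """Sample average approximation for a toy two-stage decision: choose an order
        quantity q, then observe demand and collect revenue. The SAA problem replaces
        E[profit(q, D)] by the average profit over the sampled demands."""
        candidates = np.unique(demand_samples)   # the empirical optimum lies on a sample point
        def avg_profit(q):
            sales = np.minimum(q, demand_samples)
            return np.mean(price * sales - cost * q)
        return max(candidates, key=avg_profit)

    rng = np.random.default_rng(0)
    samples = rng.poisson(100, size=5_000)
    print(saa_newsvendor(samples))   # close to the (1 - cost/price)-quantile of demand
    ```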

  10. A Connection between Singular Stochastic Control and Optimal Stopping

    International Nuclear Information System (INIS)

    Espen Benth, Fred; Reikvam, Kristin

    2003-01-01

    We show that the value function of a singular stochastic control problem is equal to the integral of the value function of an associated optimal stopping problem. The connection is proved for a general class of diffusions using the method of viscosity solutions

  11. Counting, Enumerating and Sampling of Execution Plans in a Cost-Based Query Optimizer

    NARCIS (Netherlands)

    F. Waas; C.A. Galindo-Legaria

    2000-01-01

    textabstractTesting an SQL database system by running large sets of deterministic or stochastic SQL statements is common practice in commercial database development. However, code defects often remain undetected as the query optimizer's choice of an execution plan is not only depending on the query

  12. A Rigorous Temperature-Dependent Stochastic Modelling and Testing for MEMS-Based Inertial Sensor Errors

    Directory of Open Access Journals (Sweden)

    Spiros Pagiatakis

    2009-10-01

    Full Text Available In this paper, we examine the effect of changing the temperature points on MEMS-based inertial sensor random error. We collect static data under different temperature points using a MEMS-based inertial sensor mounted inside a thermal chamber. Rigorous stochastic models, namely Autoregressive-based Gauss-Markov (AR-based GM) models, are developed to describe the random error behaviour. The proposed AR-based GM model is initially applied to short stationary inertial data to develop the stochastic model parameters (correlation times). It is shown that the stochastic model parameters of a MEMS-based inertial unit, namely the ADIS16364, are temperature dependent. In addition, field kinematic test data collected at about 17 °C are used to test the performance of the stochastic models at different temperature points in the filtering stage using the Unscented Kalman Filter (UKF). It is shown that the stochastic model developed at 20 °C provides a more accurate inertial navigation solution than the ones obtained from the stochastic models developed at −40 °C, −20 °C, 0 °C, +40 °C, and +60 °C. The temperature dependence of the stochastic model is significant and should be considered at all times to obtain an optimal navigation solution for MEMS-based INS/GPS integration.
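
    A first-order Gauss-Markov process is the simplest member of the AR-based GM family used above; a minimal simulation sketch with hypothetical correlation time and noise level (not the ADIS16364 values) is:

    ```python
    import numpy as np

    def gauss_markov(n, dt, tau, sigma, rng=None):
        """Simulate a first-order Gauss-Markov process
        x_k = exp(-dt/tau) * x_{k-1} + w_k,  w_k ~ N(0, sigma^2 (1 - exp(-2 dt/tau))),
        commonly used to model correlated inertial sensor bias."""
        rng = np.random.default_rng(rng)
        phi = np.exp(-dt / tau)
        w_std = sigma * np.sqrt(1.0 - phi ** 2)
        x = np.zeros(n)
        for k in range(1, n):
            x[k] = phi * x[k - 1] + rng.normal(0.0, w_std)
        return x

    # Hypothetical gyro bias: 100 s of data at 100 Hz, 300 s correlation time, 0.01 deg/s sigma.
    bias = gauss_markov(n=10_000, dt=0.01, tau=300.0, sigma=0.01, rng=0)
    ```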

  13. A Rigorous Temperature-Dependent Stochastic Modelling and Testing for MEMS-Based Inertial Sensor Errors.

    Science.gov (United States)

    El-Diasty, Mohammed; Pagiatakis, Spiros

    2009-01-01

    In this paper, we examine the effect of changing the temperature points on MEMS-based inertial sensor random error. We collect static data under different temperature points using a MEMS-based inertial sensor mounted inside a thermal chamber. Rigorous stochastic models, namely Autoregressive-based Gauss-Markov (AR-based GM) models are developed to describe the random error behaviour. The proposed AR-based GM model is initially applied to short stationary inertial data to develop the stochastic model parameters (correlation times). It is shown that the stochastic model parameters of a MEMS-based inertial unit, namely the ADIS16364, are temperature dependent. In addition, field kinematic test data collected at about 17 °C are used to test the performance of the stochastic models at different temperature points in the filtering stage using Unscented Kalman Filter (UKF). It is shown that the stochastic model developed at 20 °C provides a more accurate inertial navigation solution than the ones obtained from the stochastic models developed at -40 °C, -20 °C, 0 °C, +40 °C, and +60 °C. The temperature dependence of the stochastic model is significant and should be considered at all times to obtain optimal navigation solution for MEMS-based INS/GPS integration.

  14. Simulated Stochastic Approximation Annealing for Global Optimization With a Square-Root Cooling Schedule

    KAUST Repository

    Liang, Faming

    2014-04-03

    Simulated annealing has been widely used in the solution of optimization problems. As known by many researchers, the global optima cannot be guaranteed to be located by simulated annealing unless a logarithmic cooling schedule is used. However, the logarithmic cooling schedule is so slow that no one can afford to use this much CPU time. This article proposes a new stochastic optimization algorithm, the so-called simulated stochastic approximation annealing algorithm, which is a combination of simulated annealing and the stochastic approximation Monte Carlo algorithm. Under the framework of stochastic approximation, it is shown that the new algorithm can work with a cooling schedule in which the temperature can decrease much faster than in the logarithmic cooling schedule, for example, a square-root cooling schedule, while guaranteeing the global optima to be reached when the temperature tends to zero. The new algorithm has been tested on a few benchmark optimization problems, including feed-forward neural network training and protein-folding. The numerical results indicate that the new algorithm can significantly outperform simulated annealing and other competitors. Supplementary materials for this article are available online.
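
    A minimal sketch of a simulated annealing loop with the square-root cooling schedule T_k = T_0 / sqrt(k) discussed above follows; it is illustrative only, and the article's algorithm additionally embeds stochastic approximation updates that are not shown here.

    ```python
    import numpy as np

    def anneal(f, x0, t0=1.0, step=0.5, n_iter=20_000, rng=None):
        """Simulated annealing with a square-root cooling schedule T_k = t0 / sqrt(k)."""
        rng = np.random.default_rng(rng)
        x, fx = np.asarray(x0, dtype=float), f(x0)
        best, fbest = x.copy(), fx
        for k in range(1, n_iter + 1):
            t = t0 / np.sqrt(k)
            y = x + rng.normal(0.0, step, size=x.shape)              # random proposal
            fy = f(y)
            if fy <= fx or rng.random() < np.exp(-(fy - fx) / t):    # Metropolis acceptance
                x, fx = y, fy
                if fx < fbest:
                    best, fbest = x.copy(), fx
        return best, fbest

    # Toy multimodal test function (2-D Rastrigin), global minimum 0 at the origin.
    rastrigin = lambda x: 10 * len(x) + np.sum(np.asarray(x) ** 2
                                               - 10 * np.cos(2 * np.pi * np.asarray(x)))
    print(anneal(rastrigin, x0=[3.0, -2.5], rng=1))
    ```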

  15. Application of stochastic optimization to nuclear power plant asset management decisions

    International Nuclear Information System (INIS)

    Morton, D.; Koc, A.; Hess, S. M.

    2013-01-01

    We describe the development and application of stochastic optimization models and algorithms to address an issue of critical importance in the strategic allocation of resources; namely, the selection of a portfolio of capital investment projects under the constraints of a limited and uncertain budget. This issue is significant and one that faces decision-makers across all industries. The objective of this strategic decision process is generally self evident - to maximize the value obtained from the portfolio of selected projects (with value usually measured in terms of the portfolio's net present value). However, heretofore, many organizations have developed processes to make these investment decisions using simplistic rule-based rank-ordering schemes. This approach has the significant limitation of not accounting for the (often large) uncertainties in the costs or economic benefits associated with the candidate projects or in the uncertainties in the actual funds available to be expended over the projected period of time. As a result, the simple heuristic approaches that typically are employed in industrial practice generate outcomes that are non-optimal and do not achieve the level of benefits intended. In this paper we describe the results of research performed to utilize stochastic optimization models and algorithms to address this limitation by explicitly incorporating the evaluation of uncertainties in the analysis and decision making process. (authors)

  16. Application of stochastic optimization to nuclear power plant asset management decisions

    Energy Technology Data Exchange (ETDEWEB)

    Morton, D. [Graduate Program in Operations Research and Industrial Engineering, University of Texas at Austin, Austin, TX, 78712 (United States); Koc, A. [IBM T.J. Watson Research Center, Business Analytics and Mathematical Sciences Dept., 1101 Kitchawan Rd., Yorktown Heights, NY, 10598 (United States); Hess, S. M. [Electric Power Research Institute, 300 Baywood Road, West Chester, PA, 19382 (United States)

    2013-07-01

    We describe the development and application of stochastic optimization models and algorithms to address an issue of critical importance in the strategic allocation of resources; namely, the selection of a portfolio of capital investment projects under the constraints of a limited and uncertain budget. This issue is significant and one that faces decision-makers across all industries. The objective of this strategic decision process is generally self evident - to maximize the value obtained from the portfolio of selected projects (with value usually measured in terms of the portfolio's net present value). However, heretofore, many organizations have developed processes to make these investment decisions using simplistic rule-based rank-ordering schemes. This approach has the significant limitation of not accounting for the (often large) uncertainties in the costs or economic benefits associated with the candidate projects or in the uncertainties in the actual funds available to be expended over the projected period of time. As a result, the simple heuristic approaches that typically are employed in industrial practice generate outcomes that are non-optimal and do not achieve the level of benefits intended. In this paper we describe the results of research performed to utilize stochastic optimization models and algorithms to address this limitation by explicitly incorporating the evaluation of uncertainties in the analysis and decision making process. (authors)

  17. Performance improvement of optical CDMA networks with stochastic artificial bee colony optimization technique

    Science.gov (United States)

    Panda, Satyasen

    2018-05-01

    This paper proposes a modified artificial bee colony (ABC) optimization algorithm based on Levy-flight swarm intelligence, referred to as artificial bee colony Levy-flight stochastic walk (ABC-LFSW) optimization, for optical code division multiple access (OCDMA) networks. The ABC-LFSW algorithm is used to solve the asset assignment problem based on signal-to-noise ratio (SNR) optimization in OCDMA networks with quality of service constraints. The proposed optimization using the ABC-LFSW algorithm provides methods for minimizing various noises and interferences, regulating the transmitted power and optimizing the network design, thereby improving the power efficiency of the optical code path (OCP) from source node to destination node. In this regard, an optical system model is proposed for improving the network performance with optimized input parameters. A detailed discussion and simulation results based on transmitted power allocation and power efficiency of OCPs are included. The experimental results demonstrate the superiority of the proposed network in terms of power efficiency and spectral efficiency in comparison to networks without any power allocation approach.
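    The Levy-flight ingredient named in this record is commonly implemented with Mantegna's algorithm; the sketch below draws one such step and uses it to perturb a candidate solution toward the current best. The exponent beta, the scale factor, and the toy vectors are illustrative assumptions, not the ABC-LFSW settings.

```python
# Levy-flight perturbation via Mantegna's algorithm (illustrative sketch).
import numpy as np
from math import gamma, sin, pi

def levy_step(dim, beta=1.5, rng=None):
    """Draw one Levy-distributed step using Mantegna's algorithm."""
    rng = np.random.default_rng() if rng is None else rng
    sigma_u = (gamma(1 + beta) * sin(pi * beta / 2) /
               (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.normal(0.0, sigma_u, dim)
    v = rng.normal(0.0, 1.0, dim)
    return u / np.abs(v) ** (1 / beta)

# A candidate "food source" is perturbed relative to the current best solution.
rng = np.random.default_rng(1)
x, best = rng.uniform(-1, 1, 5), np.zeros(5)
candidate = x + 0.01 * levy_step(5, rng=rng) * (x - best)
print(candidate)
```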

  18. Stochastic Turing Patterns: Analysis of Compartment-Based Approaches

    KAUST Repository

    Cao, Yang; Erban, Radek

    2014-01-01

    © 2014, Society for Mathematical Biology. Turing patterns can be observed in reaction-diffusion systems where chemical species have different diffusion constants. In recent years, several studies investigated the effects of noise on Turing patterns and showed that the parameter regimes, for which stochastic Turing patterns are observed, can be larger than the parameter regimes predicted by deterministic models, which are written in terms of partial differential equations (PDEs) for species concentrations. A common stochastic reaction-diffusion approach is written in terms of compartment-based (lattice-based) models, where the domain of interest is divided into artificial compartments and the number of molecules in each compartment is simulated. In this paper, the dependence of stochastic Turing patterns on the compartment size is investigated. It has previously been shown (for relatively simpler systems) that a modeler should not choose compartment sizes which are too small or too large, and that the optimal compartment size depends on the diffusion constant. Taking these results into account, we propose and study a compartment-based model of Turing patterns where each chemical species is described using a different set of compartments. It is shown that the parameter regions where spatial patterns form are different from the regions obtained by classical deterministic PDE-based models, but they are also different from the results obtained for the stochastic reaction-diffusion models which use a single set of compartments for all chemical species. In particular, it is argued that some previously reported results on the effect of noise on Turing patterns in biological systems need to be reinterpreted.

  19. Stochastic Turing Patterns: Analysis of Compartment-Based Approaches

    KAUST Repository

    Cao, Yang

    2014-11-25

    © 2014, Society for Mathematical Biology. Turing patterns can be observed in reaction-diffusion systems where chemical species have different diffusion constants. In recent years, several studies investigated the effects of noise on Turing patterns and showed that the parameter regimes, for which stochastic Turing patterns are observed, can be larger than the parameter regimes predicted by deterministic models, which are written in terms of partial differential equations (PDEs) for species concentrations. A common stochastic reaction-diffusion approach is written in terms of compartment-based (lattice-based) models, where the domain of interest is divided into artificial compartments and the number of molecules in each compartment is simulated. In this paper, the dependence of stochastic Turing patterns on the compartment size is investigated. It has previously been shown (for relatively simpler systems) that a modeler should not choose compartment sizes which are too small or too large, and that the optimal compartment size depends on the diffusion constant. Taking these results into account, we propose and study a compartment-based model of Turing patterns where each chemical species is described using a different set of compartments. It is shown that the parameter regions where spatial patterns form are different from the regions obtained by classical deterministic PDE-based models, but they are also different from the results obtained for the stochastic reaction-diffusion models which use a single set of compartments for all chemical species. In particular, it is argued that some previously reported results on the effect of noise on Turing patterns in biological systems need to be reinterpreted.

  20. A two-stage stochastic programming model for the optimal design of distributed energy systems

    International Nuclear Information System (INIS)

    Zhou, Zhe; Zhang, Jianyun; Liu, Pei; Li, Zheng; Georgiadis, Michael C.; Pistikopoulos, Efstratios N.

    2013-01-01

    Highlights: ► The optimal design of distributed energy systems under uncertainty is studied. ► A stochastic model is developed using a genetic algorithm and the Monte Carlo method. ► The proposed system possesses inherent robustness under uncertainty. ► The inherent robustness is due to energy storage facilities and grid connection. -- Abstract: A distributed energy system is a multi-input and multi-output energy system with substantial energy, economic and environmental benefits. The optimal design of such a complex system under energy demand and supply uncertainty poses significant challenges in terms of both modelling and corresponding solution strategies. This paper proposes a two-stage stochastic programming model for the optimal design of distributed energy systems. A two-stage decomposition-based solution strategy is used to solve the optimization problem, with a genetic algorithm performing the search on the first-stage variables and a Monte Carlo method dealing with uncertainty in the second stage. The model is applied to the planning of a distributed energy system in a hotel. Detailed computational results are presented and compared with those generated by a deterministic model. The impacts of demand and supply uncertainty on the optimal design of distributed energy systems are systematically investigated using the proposed modelling framework and solution approach.
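    The core two-stage idea described here can be sketched very simply: a first-stage design variable is scored by the sample average of second-stage operating cost over Monte Carlo scenarios, and an outer search (a genetic algorithm in the record, a grid search below) selects the design. The cost model, scenario distributions, and unit prices are illustrative assumptions only.

```python
# Two-stage evaluation sketch: first-stage capacity scored by Monte Carlo second stage.
import numpy as np

def second_stage_cost(capacity_kw, demand_kw, fuel_price):
    """Operating cost for one scenario: own generation plus purchased shortfall."""
    generated = np.minimum(capacity_kw, demand_kw)
    shortfall = demand_kw - generated
    return fuel_price * generated + 0.25 * shortfall      # 0.25 = assumed grid tariff

def expected_total_cost(capacity_kw, n_scenarios=5_000, capex_per_kw=0.8, seed=0):
    rng = np.random.default_rng(seed)
    demands = rng.normal(100.0, 20.0, n_scenarios).clip(min=0.0)   # uncertain demand
    fuels = rng.normal(0.10, 0.02, n_scenarios).clip(min=0.01)     # uncertain fuel price
    op = np.mean([second_stage_cost(capacity_kw, d, f) for d, f in zip(demands, fuels)])
    return capex_per_kw * capacity_kw + op

# A GA would search the first-stage variable; a coarse grid search stands in here.
candidates = np.linspace(0.0, 200.0, 41)
best = min(candidates, key=expected_total_cost)
print("best capacity [kW]:", best)
```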

  1. Optimal processing pathway selection for microalgae-based biorefinery under uncertainty

    DEFF Research Database (Denmark)

    Rizwan, Muhammad; Zaman, Muhammad; Lee, Jay H.

    2015-01-01

    We propose a systematic framework for the selection of optimal processing pathways for a microalgae-based biorefinery under techno-economic uncertainty. The proposed framework promotes robust decision making by taking into account the uncertainties that arise due to inconsistencies among...... and shortage in the available technical information. A stochastic mixed integer nonlinear programming (sMINLP) problem is formulated for determining the optimal biorefinery configurations based on a superstructure model where parameter uncertainties are modeled and included as sampled scenarios. The solution...... the accounting of uncertainty are compared with respect to different objectives. (C) 2015 Elsevier Ltd. All rights reserved....

  2. Local Approximation and Hierarchical Methods for Stochastic Optimization

    Science.gov (United States)

    Cheng, Bolong

    In this thesis, we present local and hierarchical approximation methods for two classes of stochastic optimization problems: optimal learning and Markov decision processes. For the optimal learning problem class, we introduce a locally linear model with radial basis functions for estimating the posterior mean of the unknown objective function. The method uses a compact representation of the function which avoids storing the entire history, as is typically required by nonparametric methods. We derive a knowledge gradient policy with the locally parametric model, which maximizes the expected value of information. We show the policy is asymptotically optimal in theory, and experimental work suggests that the method can reliably find the optimal solution on a range of test functions. For the Markov decision process problem class, we are motivated by an application where we want to co-optimize a battery for multiple revenue streams, in particular energy arbitrage and frequency regulation. The nature of this problem requires the battery to make charging and discharging decisions at different time scales while accounting for stochastic information such as load demand, electricity prices, and regulation signals. Computing the exact optimal policy becomes intractable due to the large state space and the number of time steps. We propose two methods to circumvent the computation bottleneck. First, we propose a nested MDP model that structures the co-optimization problem into smaller sub-problems with reduced state space. This new model allows us to understand how the battery behaves down to the two-second dynamics (that of the frequency regulation market). Second, we introduce a low-rank value function approximation for backward dynamic programming. This new method only requires computing the exact value function for a small subset of the state space and approximates the entire value function via low-rank matrix completion. We test these methods on historical price data from the

  3. Optimization of Aeroengine Shop Visit Decisions Based on Remaining Useful Life and Stochastic Repair Time

    Directory of Open Access Journals (Sweden)

    Jing Cai

    2016-01-01

    Full Text Available Considering the wide application of condition-based maintenance in aeroengine maintenance practice, it becomes possible for aeroengines to carry out their preventive maintenance in a just-in-time (JIT) manner by reasonably planning their shop visits (SVs). In this study, an approach is proposed to make aeroengine SV decisions following the concept of JIT. Firstly, a state space model (SSM) for an aeroengine based on exhaust gas temperature margin is developed to predict the remaining useful life (RUL) of the aeroengine. Secondly, the effect of SV decisions on risk and service level (SL) is analyzed, and an optimization of the aeroengine SV decisions based on RUL and stochastic repair time is performed to realize the JIT manner while meeting safety and SL requirements. Finally, a case study considering two CFM-56 aeroengines is presented to demonstrate the proposed approach. The results show that the predictive accuracy of RUL with the SSM is higher than with linear regression, and that the process of SV decisions is simple and feasible for airlines to improve the inventory management level of their aeroengines.

  4. Determination of optimal samples for robot calibration based on error similarity

    Directory of Open Access Journals (Sweden)

    Tian Wei

    2015-06-01

    Full Text Available Industrial robots are used for automatic drilling and riveting. The absolute position accuracy of an industrial robot is one of the key performance indexes in aircraft assembly, and can be improved through error compensation to meet aircraft assembly requirements. The achievable accuracy and the difficulty of accuracy compensation implementation are closely related to the choice of sampling points. Therefore, based on the error similarity error compensation method, a method for choosing sampling points on a uniform grid is proposed. A simulation is conducted to analyze the influence of the sample point locations on error compensation. In addition, the grid steps of the sampling points are optimized using a statistical analysis method. The method is used to generate grids and optimize the grid steps of a Kuka KR-210 robot. The experimental results show that the method for planning sampling data can be used to effectively optimize the sampling grid. After error compensation, the position accuracy of the robot meets the position accuracy requirements.

  5. Minimal representation of matrix valued white stochastic processes and U–D factorisation of algorithms for optimal control

    NARCIS (Netherlands)

    Willigenburg, van L.G.; Koning, de W.L.

    2013-01-01

    Two different descriptions are used in the literature to formulate the optimal dynamic output feedback control problem for linear dynamical systems with white stochastic parameters and quadratic criteria, called the optimal compensation problem. One describes the matrix valued white stochastic

  6. Synthesis of Optimal Processing Pathway for Microalgae-based Biorefinery under Uncertainty

    DEFF Research Database (Denmark)

    Rizwan, Muhammad; Lee, Jay H.; Gani, Rafiqul

    2015-01-01

    decision making, we propose a systematic framework for the synthesis and optimal design of microalgae-based processing network under uncertainty. By incorporating major uncertainties into the biorefinery superstructure model we developed previously, a stochastic mixed integer nonlinear programming (s......The research in the field of microalgae-based biofuels and chemicals is in early phase of the development, and therefore a wide range of uncertainties exist due to inconsistencies among and shortage of technical information. In order to handle and address these uncertainties to ensure robust......MINLP) problem is formulated for determining the optimal biorefinery structure under given parameter uncertainties modelled as sampled scenarios. The solution to the sMINLP problem determines the optimal decisions with respect to processing technologies, material flows, and product portfolio in the presence...

  7. STOCHASTIC MODELING OF OPTIMIZED CREDIT STRATEGY OF A DISTRIBUTING COMPANY ON THE PHARMACEUTICAL MARKET

    Directory of Open Access Journals (Sweden)

    M. Boychuk

    2015-10-01

    Full Text Available The activity of distribution companies is multifaceted. They establish contacts with producers and consumers, determine the range of prices of medicines, run promotions, hold stocks of pharmaceuticals and take risks in their further selling. Their internal problems are complicated by the political crisis in the country, the decreased purchasing power of the national currency, and the rise in interest rates on loans. Therefore, the use of stochastic models of dynamic systems to optimize the management of pharmaceutical distribution companies, taking credit payments into account, is of great current interest. A stochastic model of the optimal credit strategy of a pharmaceutical distributor in the market of pharmaceutical products is constructed in this article, considering credit payments and income limitations. From the mathematical point of view, the obtained problem is one of stochastic optimal control, where the amount of monetary credit is the control and the amount of pharmaceutical product is the solution curve. The model allows us to identify the optimal cash loan and the corresponding optimal quantity of pharmaceutical product that comply with the differential model of the existing quantity of pharmaceutical products in Itô form; the condition of the existing initial stock of pharmaceutical products; the limitation on the amount of credit and profit received from product selling; and that maximize the average integral income. The study of the stochastic optimal control problem involves the construction of the left crediting process with determination of the shift point of that control, the choice of the right crediting process and the formation of the optimal credit process. It was found that the optimal control of the credit amount and the shift point of that control are deterministic values and do not depend on the coefficient in the Wiener process and the optimal trajectory of the amount of

  8. Generalized Likelihood Uncertainty Estimation (GLUE) Using Multi-Optimization Algorithm as Sampling Method

    Science.gov (United States)

    Wang, Z.

    2015-12-01

    For decades, distributed and lumped hydrological models have furthered our understanding of hydrological systems. The development of hydrological simulation at large scale and high precision has elaborated spatial descriptions and hydrological behaviours. Meanwhile, this trend has been accompanied by increases in model complexity and in the number of parameters, which brings new challenges for uncertainty quantification. Generalized Likelihood Uncertainty Estimation (GLUE), a Monte Carlo method coupled with Bayesian estimation, has been widely used in uncertainty analysis for hydrological models. However, the stochastic sampling method of prior parameters adopted by GLUE appears inefficient, especially in high-dimensional parameter spaces. Heuristic optimization algorithms utilizing iterative evolution show better convergence speed and optimum-searching performance. In light of the features of heuristic optimization algorithms, this study adopted the genetic algorithm, differential evolution, and the shuffled complex evolution algorithm to search the parameter space and obtain parameter sets of large likelihood. Based on the multi-algorithm sampling, hydrological model uncertainty analysis is conducted within the typical GLUE framework. To demonstrate the superiority of the new method, two hydrological models of different complexity are examined. The results show that the adaptive method tends to be efficient in sampling and effective in uncertainty analysis, providing an alternative path for uncertainty quantification.
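    The GLUE step that follows the sampling, however it is generated, can be sketched as follows: behavioural parameter sets are retained, their informal likelihoods normalised into weights, and prediction bounds read off the weighted quantiles. The toy "model", likelihood measure, and behavioural threshold below are illustrative assumptions.

```python
# GLUE-style likelihood weighting and uncertainty bounds (illustrative sketch).
import numpy as np

def glue_bounds(likelihoods, predictions, threshold, quantiles=(0.05, 0.95)):
    """Return weighted uncertainty bounds from behavioural simulations."""
    behavioural = likelihoods > threshold
    w = likelihoods[behavioural]
    w = w / w.sum()                                  # normalised GLUE weights
    preds = predictions[behavioural]
    order = np.argsort(preds)
    cdf = np.cumsum(w[order])
    lo = preds[order][np.searchsorted(cdf, quantiles[0])]
    hi = preds[order][np.searchsorted(cdf, quantiles[1])]
    return lo, hi

rng = np.random.default_rng(0)
params = rng.uniform(0, 1, size=(10_000, 3))         # e.g. sampled by GA/DE/SCE
sim = params.sum(axis=1)                             # toy model output
obs = 1.6
lik = np.exp(-0.5 * ((sim - obs) / 0.2) ** 2)        # informal likelihood measure
print(glue_bounds(lik, sim, threshold=0.1))
```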

  9. Portfolios dominating indices: Optimization with second-order stochastic dominance constraints vs. minimum and mean variance portfolios

    OpenAIRE

    Keçeci, Neslihan Fidan; Kuzmenko, Viktor; Uryasev, Stan

    2016-01-01

    The paper compares portfolio optimization with the Second-Order Stochastic Dominance (SSD) constraints with mean-variance and minimum variance portfolio optimization. As a distribution-free decision rule, stochastic dominance takes into account the entire distribution of return rather than some specific characteristic, such as variance. The paper is focused on practical applications of the portfolio optimization and uses the Portfolio Safeguard (PSG) package, which has precoded modules for op...

  10. Portfolios Dominating Indices: Optimization with Second-Order Stochastic Dominance Constraints vs. Minimum and Mean Variance Portfolios

    OpenAIRE

    Neslihan Fidan Keçeci; Viktor Kuzmenko; Stan Uryasev

    2016-01-01

    The paper compares portfolio optimization with the Second-Order Stochastic Dominance (SSD) constraints with mean-variance and minimum variance portfolio optimization. As a distribution-free decision rule, stochastic dominance takes into account the entire distribution of return rather than some specific characteristic, such as variance. The paper is focused on practical applications of the portfolio optimization and uses the Portfolio Safeguard (PSG) package, which has precoded modules for op...

  11. Exponential Synchronization for Stochastic Neural Networks with Mixed Time Delays and Markovian Jump Parameters via Sampled Data

    Directory of Open Access Journals (Sweden)

    Yingwei Li

    2014-01-01

    Full Text Available The exponential synchronization issue for stochastic neural networks (SNNs) with mixed time delays and Markovian jump parameters using a sampled-data controller is investigated. Based on a novel Lyapunov-Krasovskii functional, stochastic analysis theory, and the linear matrix inequality (LMI) approach, we derive some novel sufficient conditions that guarantee that the master systems exponentially synchronize with the slave systems. The design method of the desired sampled-data controller is also proposed. To reflect the most dynamical behaviors of the system, both Markovian jump parameters and stochastic disturbance are considered, where the stochastic disturbances are given in the form of a Brownian motion. The results obtained in this paper are a little conservative compared with previous results in the literature. Finally, two numerical examples are given to illustrate the effectiveness of the proposed methods.

  12. a Stochastic Approach to Multiobjective Optimization of Large-Scale Water Reservoir Networks

    Science.gov (United States)

    Bottacin-Busolin, A.; Worman, A. L.

    2013-12-01

    A main challenge for the planning and management of water resources is the development of multiobjective strategies for operation of large-scale water reservoir networks. The optimal sequence of water releases from multiple reservoirs depends on the stochastic variability of correlated hydrologic inflows and on various processes that affect water demand and energy prices. Although several methods have been suggested, large-scale optimization problems arising in water resources management are still plagued by the high dimensional state space and by the stochastic nature of the hydrologic inflows. In this work, the optimization of reservoir operation is approached using approximate dynamic programming (ADP) with policy iteration and function approximators. The method is based on an off-line learning process in which operating policies are evaluated for a number of stochastic inflow scenarios, and the resulting value functions are used to design new, improved policies until convergence is attained. A case study is presented of a multi-reservoir system in the Dalälven River, Sweden, which includes 13 interconnected reservoirs and 36 power stations. Depending on the late spring and summer peak discharges, the lowlands adjacent to Dalälven can often be flooded during the summer period, and the presence of stagnating floodwater during the hottest months of the year is the cause of a large proliferation of mosquitos, which is a major problem for the people living in the surroundings. Chemical pesticides are currently being used as a preventive countermeasure, which do not provide an effective solution to the problem and have adverse environmental impacts. In this study, ADP was used to analyze the feasibility of alternative operating policies for reducing the flood risk at a reasonable economic cost for the hydropower companies. To this end, mid-term operating policies were derived by combining flood risk reduction with hydropower production objectives. The performance

  13. Modified Convolutional Neural Network Based on Dropout and the Stochastic Gradient Descent Optimizer

    Directory of Open Access Journals (Sweden)

    Jing Yang

    2018-03-01

    Full Text Available This study proposes a modified convolutional neural network (CNN) algorithm that is based on dropout and the stochastic gradient descent (SGD) optimizer (MCNN-DS), after analyzing the problems of CNNs in extracting convolution features, to improve the feature recognition rate and reduce the time-cost of CNNs. The MCNN-DS has a quadratic CNN structure and adopts the rectified linear unit as the activation function to avoid the gradient problem and accelerate convergence. To address the overfitting problem, the algorithm uses an SGD optimizer, which is implemented by inserting a dropout layer into the fully connected and output layers, to minimize cross entropy. This study used the datasets MNIST, HCL2000, and EnglishHand as the benchmark data, analyzed the performance of the SGD optimizer under different learning parameters, and found that the proposed algorithm exhibited good recognition performance when the learning rate was set to [0.05, 0.07]. The performances of WCNN, MLP-CNN, SVM-ELM, and MCNN-DS were compared. Statistical results showed the following: (1) For the benchmark MNIST, the MCNN-DS exhibited a high recognition rate of 99.97%, and the time-cost of the proposed algorithm was merely 21.95% of that of MLP-CNN and 10.02% of that of SVM-ELM; (2) compared with SVM-ELM, the average improvement in the recognition rate of MCNN-DS was 2.35% for the benchmark HCL2000, and the time-cost of MCNN-DS was only 15.41%; (3) for the EnglishHand test set, the lowest recognition rate of the algorithm was 84.93%, the highest recognition rate was 95.29%, and the average recognition rate was 89.77%.
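    The two ingredients named in the record, dropout and an SGD optimizer, can be combined in a few lines of PyTorch; the sketch below is a minimal illustration on random stand-in data, not the MCNN-DS architecture or its hyper-parameters (only the learning rate follows the [0.05, 0.07] range quoted above).

```python
# Minimal CNN with dropout trained by SGD (illustrative sketch, not MCNN-DS).
import torch
import torch.nn as nn

class SmallCNN(nn.Module):
    def __init__(self, n_classes=10, p_drop=0.5):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 7 * 7, 128), nn.ReLU(),
            nn.Dropout(p_drop),                 # dropout before the output layer
            nn.Linear(128, n_classes),
        )

    def forward(self, x):
        return self.classifier(self.features(x))

model = SmallCNN()
opt = torch.optim.SGD(model.parameters(), lr=0.05, momentum=0.9)
loss_fn = nn.CrossEntropyLoss()

# One training step on random data shaped like MNIST (1x28x28).
x = torch.randn(64, 1, 28, 28)
y = torch.randint(0, 10, (64,))
opt.zero_grad()
loss = loss_fn(model(x), y)
loss.backward()
opt.step()
print(float(loss))
```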

  14. Efficient Output Solution for Nonlinear Stochastic Optimal Control Problem with Model-Reality Differences

    Directory of Open Access Journals (Sweden)

    Sie Long Kek

    2015-01-01

    Full Text Available A computational approach is proposed for solving the discrete time nonlinear stochastic optimal control problem. Our aim is to obtain the optimal output solution of the original optimal control problem through solving the simplified model-based optimal control problem iteratively. In our approach, the adjusted parameters are introduced into the model used such that the differences between the real system and the model used can be computed. Particularly, system optimization and parameter estimation are integrated interactively. On the other hand, the output is measured from the real plant and is fed back into the parameter estimation problem to establish a matching scheme. During the calculation procedure, the iterative solution is updated in order to approximate the true optimal solution of the original optimal control problem despite model-reality differences. For illustration, a wastewater treatment problem is studied and the results show the efficiency of the approach proposed.

  15. Assessing Exhaustiveness of Stochastic Sampling for Integrative Modeling of Macromolecular Structures.

    Science.gov (United States)

    Viswanath, Shruthi; Chemmama, Ilan E; Cimermancic, Peter; Sali, Andrej

    2017-12-05

    Modeling of macromolecular structures involves structural sampling guided by a scoring function, resulting in an ensemble of good-scoring models. By necessity, the sampling is often stochastic, and must be exhaustive at a precision sufficient for accurate modeling and assessment of model uncertainty. Therefore, the very first step in analyzing the ensemble is an estimation of the highest precision at which the sampling is exhaustive. Here, we present an objective and automated method for this task. As a proxy for sampling exhaustiveness, we evaluate whether two independently and stochastically generated sets of models are sufficiently similar. The protocol includes testing 1) convergence of the model score, 2) whether model scores for the two samples were drawn from the same parent distribution, 3) whether each structural cluster includes models from each sample proportionally to its size, and 4) whether there is sufficient structural similarity between the two model samples in each cluster. The evaluation also provides the sampling precision, defined as the smallest clustering threshold that satisfies the third, most stringent test. We validate the protocol with the aid of enumerated good-scoring models for five illustrative cases of binary protein complexes. Passing the proposed four tests is necessary, but not sufficient for thorough sampling. The protocol is general in nature and can be applied to the stochastic sampling of any set of models, not just structural models. In addition, the tests can be used to stop stochastic sampling as soon as exhaustiveness at desired precision is reached, thereby improving sampling efficiency; they may also help in selecting a model representation that is sufficiently detailed to be informative, yet also sufficiently coarse for sampling to be exhaustive. Copyright © 2017 Biophysical Society. Published by Elsevier Inc. All rights reserved.
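    The second test in the protocol above, checking whether the scores of two independently generated samples could come from the same parent distribution, can be illustrated with a two-sample Kolmogorov-Smirnov test; the score arrays below are synthetic stand-ins, not real model scores, and the 0.05 cut-off is an illustrative choice.

```python
# Sketch of a same-parent-distribution check on two independent score samples.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
scores_run1 = rng.normal(loc=-120.0, scale=5.0, size=4_000)   # good-scoring models, run 1
scores_run2 = rng.normal(loc=-120.0, scale=5.0, size=4_000)   # independent run 2

stat, p_value = ks_2samp(scores_run1, scores_run2)
print(f"KS statistic = {stat:.4f}, p-value = {p_value:.3f}")
if p_value > 0.05:
    print("No evidence the score samples differ; proceed to the clustering-based tests.")
else:
    print("Score distributions differ; sampling is likely not yet exhaustive.")
```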

  16. Optimal Design and Operation of In-Situ Chemical Oxidation Using Stochastic Cost Optimization Toolkit

    Science.gov (United States)

    Kim, U.; Parker, J.; Borden, R. C.

    2014-12-01

    In-situ chemical oxidation (ISCO) has been applied at many dense non-aqueous phase liquid (DNAPL) contaminated sites. A stirred reactor-type model was developed that considers DNAPL dissolution using a field-scale mass transfer function, instantaneous reaction of oxidant with aqueous and adsorbed contaminant and with readily oxidizable natural oxidant demand ("fast NOD"), and second-order kinetic reactions with "slow NOD." DNAPL dissolution enhancement as a function of oxidant concentration and inhibition due to manganese dioxide precipitation during permanganate injection are included in the model. The DNAPL source area is divided into multiple treatment zones with different areas, depths, and contaminant masses based on site characterization data. The performance model is coupled with a cost module that involves a set of unit costs representing specific fixed and operating costs. Monitoring of groundwater and/or soil concentrations in each treatment zone is employed to assess ISCO performance and make real-time decisions on oxidant reinjection or ISCO termination. Key ISCO design variables include the oxidant concentration to be injected, the time to begin performance monitoring, the groundwater and/or soil contaminant concentrations that trigger reinjection or terminate ISCO, the number of monitoring wells or geoprobe locations per treatment zone, the number of samples per sampling event and location, and the monitoring frequency. Design variables for each treatment zone may be optimized to minimize the expected cost over a set of Monte Carlo simulations that consider uncertainty in site parameters. The model is incorporated in the Stochastic Cost Optimization Toolkit (SCOToolkit) program, which couples the ISCO model with a dissolved plume transport model and with modules for other remediation strategies. An example problem is presented that illustrates design tradeoffs required to deal with characterization and monitoring uncertainty. Monitoring soil concentration changes during ISCO

  17. Initiating stochastic maintenance optimization at Candu Power Plants

    International Nuclear Information System (INIS)

    Doyle, E.K.

    2003-01-01

    As previously reported at ICONE 6 in New Orleans (1996), the use of various innovative maintenance optimization techniques at Bruce has led to cost-effective preventive maintenance applications for complex systems. Further cost refinement of the station maintenance strategy is being evaluated via the applicability of statistical analysis of historical failure data. Since the statistical evaluation was initiated in 1999, significant progress has been made in demonstrating the viability of stochastic methods in Candu maintenance. Some of the relevant results were presented at ICONE 10 in Washington DC (2002). Success with the graphical displays and the relatively easy-to-implement stochastic computer programs was sufficient to move the program along to the next significant phase. This next phase consists of investigating the validity of using subjective elicitation techniques to obtain component lifetime distributions. This technique provides access to the elusive failure statistics, the lack of which is often referred to in the literature as the principal impediment preventing the use of stochastic methods in large industry. At the same time the technique allows very valuable information to be captured from the fast-retiring 'baby boom generation'. Initial indications have been quite positive. (author)

  18. Stochastic Optimized Relevance Feedback Particle Swarm Optimization for Content Based Image Retrieval

    Directory of Open Access Journals (Sweden)

    Muhammad Imran

    2014-01-01

    Full Text Available One of the major challenges for CBIR is to bridge the gap between low-level features and high-level semantics according to the needs of the user. To overcome this gap, relevance feedback (RF) coupled with a support vector machine (SVM) has been applied successfully. However, when the feedback sample is small, the performance of SVM-based RF is often poor. To improve the performance of RF, this paper proposes a new technique, namely PSO-SVM-RF, which combines SVM-based RF with particle swarm optimization (PSO). The aims of this proposed technique are to enhance the performance of SVM-based RF and also to minimize the user interaction with the system by minimizing the number of RF rounds. The PSO-SVM-RF was tested on the Corel photo gallery containing 10908 images. The results obtained from the experiments showed that the proposed PSO-SVM-RF achieved 100% accuracy in 8 feedback iterations for the top 10 retrievals and 80% accuracy in 6 iterations for the top 100 retrievals. This implies that with the PSO-SVM-RF technique a high accuracy rate is achieved within a small number of iterations.
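    The particle swarm building block used by PSO-based relevance feedback schemes is the standard velocity/position update; the sketch below minimizes a toy function with it. The inertia and acceleration coefficients are common textbook values and the search bounds are illustrative, not the PSO-SVM-RF settings.

```python
# Basic particle swarm optimization loop (illustrative sketch).
import numpy as np

def pso_minimize(f, dim, n_particles=30, n_iter=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5, 5, (n_particles, dim))          # positions
    v = np.zeros_like(x)                                # velocities
    pbest, pbest_f = x.copy(), np.apply_along_axis(f, 1, x)
    g = pbest[np.argmin(pbest_f)]                       # global best
    for _ in range(n_iter):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = x + v
        fx = np.apply_along_axis(f, 1, x)
        improved = fx < pbest_f
        pbest[improved], pbest_f[improved] = x[improved], fx[improved]
        g = pbest[np.argmin(pbest_f)]
    return g, pbest_f.min()

sphere = lambda z: float(np.sum(z**2))
print(pso_minimize(sphere, dim=4))
```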

  19. A stochastic optimization approach to reduce greenhouse gas emissions from buildings and transportation

    International Nuclear Information System (INIS)

    Karan, Ebrahim; Asadi, Somayeh; Ntaimo, Lewis

    2016-01-01

    The magnitude of building- and transportation-related GHG (greenhouse gas) emissions makes the adoption of all-EVs (electric vehicles) powered with renewable energy one of the most effective strategies to reduce GHG emissions. This paper formulates the problem of GHG mitigation strategy under uncertain conditions and optimizes strategies in which EVs are powered by solar energy. Under a pre-specified budget, the objective is to determine the type of EV and the power generation capacity of the solar system so as to maximize GHG emission reductions. The model supports the three primary solar systems: off-grid, grid-tied, and hybrid. First, a stochastic optimization model using probability distributions of the stochastic variables and EV and solar system specifications is developed. The model is then validated by comparing the estimated values of the optimal strategies with actual values. It is found that the mitigation strategies in which EVs are powered by a hybrid solar system lead to the best ratio of cost to expected reduction of CO2 emissions. The results show an accuracy of about 4% for mitigation strategies in which EVs are powered by a grid-tied or hybrid solar system and 11% when applied to estimate the CO2 emission reductions of an off-grid system. - Highlights: • The problem of GHG mitigation is formulated as a stochastic optimization problem. • The objective is to maximize CO2 emission reductions within a specified budget. • The stochastic model is validated using actual data. • The results show an estimation accuracy of 4–11%.

  20. Neural dynamics as sampling: a model for stochastic computation in recurrent networks of spiking neurons.

    Science.gov (United States)

    Buesing, Lars; Bill, Johannes; Nessler, Bernhard; Maass, Wolfgang

    2011-11-01

    The organization of computations in networks of spiking neurons in the brain is still largely unknown, in particular in view of the inherently stochastic features of their firing activity and the experimentally observed trial-to-trial variability of neural systems in the brain. In principle there exists a powerful computational framework for stochastic computations, probabilistic inference by sampling, which can explain a large number of macroscopic experimental data in neuroscience and cognitive science. But it has turned out to be surprisingly difficult to create a link between these abstract models for stochastic computations and more detailed models of the dynamics of networks of spiking neurons. Here we create such a link and show that under some conditions the stochastic firing activity of networks of spiking neurons can be interpreted as probabilistic inference via Markov chain Monte Carlo (MCMC) sampling. Since common methods for MCMC sampling in distributed systems, such as Gibbs sampling, are inconsistent with the dynamics of spiking neurons, we introduce a different approach based on non-reversible Markov chains that is able to reflect inherent temporal processes of spiking neuronal activity through a suitable choice of random variables. We propose a neural network model and show by a rigorous theoretical analysis that its neural activity implements MCMC sampling of a given distribution, both for the case of discrete and continuous time. This provides a step towards closing the gap between abstract functional models of cortical computation and more detailed models of networks of spiking neurons.

  1. Bonus algorithm for large scale stochastic nonlinear programming problems

    CERN Document Server

    Diwekar, Urmila

    2015-01-01

    This book presents the details of the BONUS algorithm and its real-world applications in areas such as sensor placement in large-scale drinking water networks, sensor placement in advanced power systems, water management in power systems, and capacity expansion of energy systems. A generalized method for stochastic nonlinear programming, based on a sampling-based approach for uncertainty analysis and statistical reweighting to obtain probability information, is demonstrated in this book. Stochastic optimization problems are difficult to solve since they involve dealing with optimization and uncertainty loops. There are two fundamental approaches used to solve such problems. The first is decomposition techniques; the second identifies problem-specific structures and transforms the problem into a deterministic nonlinear programming problem. These techniques have significant limitations on either the objective function type or the underlying distributions for the uncertain variables. Moreover, these ...

  2. Stochastic optimized life cycle models for risk mitigation in power system applications

    International Nuclear Information System (INIS)

    Sageder, A.

    1998-01-01

    This work shows the relevance of stochastic optimization in complex power system applications. It was proven that the usual deterministic mean-value models not only predict inaccurate results but are also most often on the risky side. The change in the market affects all kinds of evaluation processes (e.g. fuel type and technology, but especially financial engineering evaluations) in the endeavour of a strict risk-mitigation comparison. Not only IPPs but also traditional utilities seek risk/return-optimized investment opportunities. In this study I developed a two-phase model which can support a decision-maker in finding optimal solutions on investment and profitability. It has to be stated that in this study no objective function is optimized in an algorithmic way. On the one hand, the focus is on finding optimal solutions among different choices (highest return at lowest possible risk); on the other hand, the aim is to provide decision-makers with a better assessment of the likelihood of outcomes of investment considerations. The first (deterministic) phase performs a Total Cost of Ownership (TCO) calculation (life cycle calculation; DCF method). Most of the causal relations (day of operation, escalation of personnel expenses, inflation, depreciation period, etc.) are defined within this phase. The second (stochastic) phase is a totally new way of optimizing risk/return relations. With some decision-theory mathematics an expected value of the stochastic solutions can be calculated. Furthermore, probability functions have to be defined from historical data. The model not only supports profitability analysis (including regression and sensitivity analysis) but also supports a decision-maker in the decision process. Emphasis was laid on risk-return analysis, which can give the decision-maker first-hand information on the type of risk-return problem (risk concave, averse or linear). Five important parameters were chosen which have the characteristics of typical

  3. Efficient approach for reliability-based optimization based on weighted importance sampling approach

    International Nuclear Information System (INIS)

    Yuan, Xiukai; Lu, Zhenzhou

    2014-01-01

    An efficient methodology is presented to perform reliability-based optimization (RBO). It is based on an efficient weighted approach for constructing an approximation of the failure probability as an explicit function of the design variables, which is referred to as the 'failure probability function (FPF)'. It expresses the FPF as a weighted sum of sample values obtained in the simulation-based reliability analysis. The required computational effort for decoupling in each iteration is just a single reliability analysis. After the approximation of the FPF is established, the target RBO problem can be decoupled into a deterministic one. Meanwhile, the proposed weighted approach is combined with a decoupling approach and a sequential approximate optimization framework. Engineering examples are given to demonstrate the efficiency and accuracy of the presented methodology
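    A generic weighted-sample illustration of the failure probability function idea is given below: samples drawn once under a fixed sampling density are reweighted to estimate the failure probability for any design value, so the design search needs no further reliability analyses. The limit-state function, densities, and design variable here are illustrative assumptions, not the formulation of the cited paper.

```python
# Reusing one sample set to estimate P_f as a function of the design (illustrative sketch).
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
N = 200_000
h_mean, h_std = 2.0, 1.5
x = rng.normal(h_mean, h_std, N)             # samples from the fixed sampling density h
g = 5.0 - x                                  # limit state: failure when g <= 0
fail = (g <= 0.0).astype(float)

def failure_probability(design_mean, design_std=1.0):
    """Estimate P_f(design) by reweighting the stored samples."""
    w = norm.pdf(x, design_mean, design_std) / norm.pdf(x, h_mean, h_std)
    return np.mean(fail * w)

for d in (1.0, 2.0, 3.0):
    exact = norm.sf(5.0, loc=d, scale=1.0)
    print(f"design mean {d}: estimate {failure_probability(d):.5f}, exact {exact:.5f}")
```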

  4. Volatile decision dynamics: experiments, stochastic description, intermittency control and traffic optimization

    Science.gov (United States)

    Helbing, Dirk; Schönhof, Martin; Kern, Daniel

    2002-06-01

    The coordinated and efficient distribution of limited resources by individual decisions is a fundamental, unsolved problem. When individuals compete for road capacities, time, space, money, goods, etc., they normally make decisions based on aggregate rather than complete information, such as TV news or stock market indices. In related experiments, we have observed volatile decision dynamics and far-from-optimal payoff distributions. We have also identified methods of information presentation that can considerably improve the overall performance of the system. In order to determine optimal strategies of decision guidance by means of user-specific recommendations, a stochastic behavioural description is developed. These strategies manage to increase the adaptability to changing conditions and to reduce the deviation from the time-dependent user equilibrium, thereby enhancing the average and individual payoffs. Hence, our guidance strategies can increase the performance of all users by reducing overreaction and stabilizing the decision dynamics. These results are highly significant for predicting decision behaviour, for reaching optimal behavioural distributions by decision support systems and for information service providers. One of the promising fields of application is traffic optimization.

  5. LIBRJMCMC: AN OPEN-SOURCE GENERIC C++ LIBRARY FOR STOCHASTIC OPTIMIZATION

    Directory of Open Access Journals (Sweden)

    M. Brédif

    2012-07-01

    Full Text Available The librjmcmc is an open-source C++ library that solves optimization problems using a stochastic framework. The library is primarily intended for, but not limited to, research purposes in computer vision, photogrammetry and remote sensing, as it was initially developed in the context of extracting building footprints from digital elevation models using a marked point process of rectangles. It has been designed to be both highly modular and extensible, and to have computational times comparable to a code specifically designed for a particular application, thanks to the powerful paradigms of metaprogramming and generic programming. The proposed stochastic optimization is built on the coupling of a stochastic Reversible-Jump Markov Chain Monte Carlo (RJMCMC) sampler and a simulated annealing relaxation. This framework allows, with theoretical guarantees, the optimization of an unrestricted objective function without requiring any initial solution. The modularity of our library allows the processing of any kind of input data, whether they are 1D signals (e.g. LiDAR or SAR waveforms), 2D images, 3D point clouds... The library user has only to define a few modules describing its domain-specific context: the encoding of a configuration (e.g. its object type in a marked point process context), reversible jump kernels (e.g. birth, death, modifications...), the optimized energies (e.g. data and regularization terms) and the probabilized search space given by the reference process. Similar to this extensibility in the application domain, concepts are clearly and orthogonally separated such that it is straightforward to customize the convergence test, the temperature schedule, or to add visitors enabling visual feedback during the optimization. The library offers dedicated modules for marked point processes, allowing the user to optimize a Maximum A Posteriori (MAP) criterion with an image data term energy on a marked point process of rectangles.

  6. Analytical study on the criticality of the stochastic optimal velocity model

    International Nuclear Information System (INIS)

    Kanai, Masahiro; Nishinari, Katsuhiro; Tokihiro, Tetsuji

    2006-01-01

    In recent works, we have proposed a stochastic cellular automaton model of traffic flow connecting two exactly solvable stochastic processes, i.e., the asymmetric simple exclusion process and the zero range process, with an additional parameter. It is also regarded as an extended version of the optimal velocity model, and moreover it shows particularly notable properties. In this paper, we report that when the optimal velocity function is taken to be a step function, the entire flux-density graph (i.e. the fundamental diagram) can be estimated. We first find that the fundamental diagram consists of two line segments resembling an inverse-λ form, and next identify their end-points from the microscopic behaviour of vehicles. It is notable that by using a microscopic parameter which indicates a driver's sensitivity to the traffic situation, we give an explicit formula for the critical point at which a traffic jam phase arises. We also compare these analytical results with those of the optimal velocity model, and point out the crucial differences between them
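    A minimal sketch of a stochastic optimal-velocity update of the kind discussed above is given below: each vehicle hops one cell with probability v_i, and v_i relaxes toward a step optimal-velocity function of the headway. The sensitivity a, the step threshold, the ring length, and the flux measure are illustrative assumptions, not the exact model of the record.

```python
# Stochastic optimal-velocity cellular automaton on a ring (illustrative sketch).
import numpy as np

def step_ov(headway, threshold=2):
    """Step optimal-velocity function: desired hopping probability 0 below, 1 above."""
    return 1.0 if headway >= threshold else 0.0

def simulate(n_cells=100, n_cars=40, a=0.3, n_steps=1000, seed=0):
    rng = np.random.default_rng(seed)
    pos = np.sort(rng.choice(n_cells, n_cars, replace=False))
    v = np.full(n_cars, 0.5)                           # hopping probabilities (intentions)
    moved = 0
    for _ in range(n_steps):
        headway = (np.roll(pos, -1) - pos - 1) % n_cells
        v = (1.0 - a) * v + a * np.array([step_ov(h) for h in headway])
        hop = (rng.random(n_cars) < v) & (headway > 0)  # exclusion: need a free cell ahead
        pos = (pos + hop.astype(int)) % n_cells
        moved += hop.sum()
    return moved / (n_steps * n_cells)                  # average flux per site per step

for density in (0.2, 0.4, 0.6):
    print(density, simulate(n_cars=int(100 * density)))
```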

  7. Stochastic Optimization of Supply Chain Risk Measures –a Methodology for Improving Supply Security of Subsidized Fuel Oil in Indonesia

    Directory of Open Access Journals (Sweden)

    Adinda Yuanita

    2015-08-01

    Full Text Available Monte Carlo simulation-based methods for stochastic optimization of risk measures are required to solve complex problems in the supply security of subsidized fuel oil in Indonesia. Indonesia has the fourth largest population in the world (more than 250,000,000 people, 66.5% of them of productive age) spread over more than 17,000 islands, with the population centred around the nation's capital; in order to overcome the resulting constraints on the distribution of subsidized fuel, it is necessary to have a measurable and integrated risk analysis with a monitoring system for the purpose of the supply security of subsidized fuel. Given the complexity of this issue, uncertainty and probability heavily affected this research. Therefore, this research performed Monte Carlo sampling-based stochastic simulation optimization with the state-of-the-art "FIRST" parameters combined with sensitivity analysis to determine the priority of integrated risk mitigation handling, so that the new model design from this research may yield a faster risk mitigation time. The results of the research identified innovative ideas of risk-based audit for supply chain risk management and new FIRST (Fairness, Independence, Reliable, Sustainable, Transparent) parameters for risk measures. In addition, the integration of risk analysis confirmed the innovative level of priority in the sensitivity analysis. Moreover, the findings showed that the new risk mitigation time was 60% faster than the original risk mitigation time.

  8. Solving complex maintenance planning optimization problems using stochastic simulation and multi-criteria fuzzy decision making

    International Nuclear Information System (INIS)

    Tahvili, Sahar; Österberg, Jonas; Silvestrov, Sergei; Biteus, Jonas

    2014-01-01

    One of the most important factors in the operations of many corporations today is to maximize profit, and one important tool to that effect is the optimization of maintenance activities. Maintenance activities are, at the highest level, divided into two major areas, corrective maintenance (CM) and preventive maintenance (PM). When optimizing maintenance activities, by a maintenance plan or policy, we seek to find the best activities to perform at each point in time, be it PM or CM. We explore the use of stochastic simulation, genetic algorithms and other tools for solving complex maintenance planning optimization problems in terms of a suggested framework model based on discrete event simulation

  9. Solving complex maintenance planning optimization problems using stochastic simulation and multi-criteria fuzzy decision making

    Energy Technology Data Exchange (ETDEWEB)

    Tahvili, Sahar [Mälardalen University (Sweden); Österberg, Jonas; Silvestrov, Sergei [Division of Applied Mathematics, Mälardalen University (Sweden); Biteus, Jonas [Scania CV (Sweden)

    2014-12-10

    One of the most important factors in the operations of many corporations today is to maximize profit, and one important tool to that effect is the optimization of maintenance activities. Maintenance activities are, at the highest level, divided into two major areas, corrective maintenance (CM) and preventive maintenance (PM). When optimizing maintenance activities, by a maintenance plan or policy, we seek to find the best activities to perform at each point in time, be it PM or CM. We explore the use of stochastic simulation, genetic algorithms and other tools for solving complex maintenance planning optimization problems in terms of a suggested framework model based on discrete event simulation.

  10. A Unified Pricing of Variable Annuity Guarantees under the Optimal Stochastic Control Framework

    Directory of Open Access Journals (Sweden)

    Pavel V. Shevchenko

    2016-07-01

    Full Text Available In this paper, we review the pricing of the variable annuity living and death guarantees offered to retail investors in many countries. Investors purchase these products to take advantage of market growth and to protect savings. We present the pricing of these products via an optimal stochastic control framework and review the existing numerical methods. We also discuss pricing under complete/incomplete financial market models, stochastic mortality and optimal/sub-optimal policyholder behavior, and in the presence of taxes. For numerical valuation of these contracts in the case of a simple risky asset process, we develop a direct integration method based on Gauss-Hermite quadratures with a one-dimensional cubic spline for calculation of the expected contract value, and a bi-cubic spline interpolation for applying the jump conditions across the contract cashflow event times. This method is easier to implement and faster when compared to partial differential equation methods if the transition density (or its moments) of the risky asset underlying the contract is known in closed form between the event times. We present accurate numerical results for the pricing of a Guaranteed Minimum Accumulation Benefit (GMAB) guarantee available on the market that can serve as a numerical benchmark for practitioners and researchers developing pricing of variable annuity guarantees to assess the accuracy of their numerical implementation.
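    The Gauss-Hermite building block mentioned above can be sketched as follows: the expected value of a payoff of a lognormal asset between event times is computed from quadrature nodes and weights. The GBM parameters and the simple "greater of account value or guarantee" payoff are illustrative assumptions, not a GMAB implementation.

```python
# Expected payoff under GBM via Gauss-Hermite quadrature (illustrative sketch).
import numpy as np

def expected_value_gh(payoff, s0, mu, sigma, dt, n_nodes=32):
    """E[payoff(S_{t+dt}) | S_t = s0] for geometric Brownian motion."""
    nodes, weights = np.polynomial.hermite.hermgauss(n_nodes)
    z = np.sqrt(2.0) * nodes                        # standard normal abscissae
    s = s0 * np.exp((mu - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * z)
    return np.sum(weights * payoff(s)) / np.sqrt(np.pi)

guarantee = 100.0
payoff = lambda s: np.maximum(s, guarantee)         # greater of account value or guarantee
print(expected_value_gh(payoff, s0=100.0, mu=0.03, sigma=0.2, dt=1.0))
```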

  11. Genetic Algorithm Based Framework for Automation of Stochastic Modeling of Multi-Season Streamflows

    Science.gov (United States)

    Srivastav, R. K.; Srinivasan, K.; Sudheer, K.

    2009-05-01

    Synthetic streamflow data generation involves the synthesis of likely streamflow patterns that are statistically indistinguishable from the observed streamflow data. The various kinds of stochastic models adopted for multi-season streamflow generation in hydrology are: i) parametric models, which hypothesize the form of the periodic dependence structure and the distributional form a priori (examples are PAR, PARMA), and disaggregation models that aim to preserve the correlation structure at the periodic level and the aggregated annual level; ii) nonparametric models (examples are bootstrap/kernel-based methods), which characterize the laws of chance describing the streamflow process without recourse to prior assumptions as to the form or structure of these laws (k-nearest neighbor (k-NN), matched block bootstrap (MABB)), and the nonparametric disaggregation model; iii) hybrid models, which blend both parametric and nonparametric models advantageously to model the streamflows effectively. Despite the many developments that have taken place in the field of stochastic modeling of streamflows over the last four decades, accurate prediction of the storage and the critical drought characteristics has been posing a persistent challenge to the stochastic modeler. This is partly because, usually, the stochastic streamflow model parameters are estimated by minimizing a statistically based objective function (such as maximum likelihood (MLE) or least squares (LS) estimation) and subsequently the efficacy of the models is validated based on the accuracy of prediction of the estimates of the water-use characteristics, which requires a large number of trial simulations and inspection of many plots and tables. Even so, accurate prediction of the storage and the critical drought characteristics may not be ensured. In this study a multi-objective optimization framework is proposed to find the optimal hybrid model (blend of a simple parametric model, PAR(1) model and matched block

  12. Dynamic and stochastic multi-project planning

    CERN Document Server

    Melchiors, Philipp

    2015-01-01

    This book deals with dynamic and stochastic methods for multi-project planning. Based on the idea of using queueing networks for the analysis of dynamic-stochastic multi-project environments, this book addresses two problems: detailed scheduling of project activities, and integrated order acceptance and capacity planning. In an extensive simulation study, the book thoroughly investigates existing scheduling policies. To obtain optimal and near-optimal scheduling policies, new models and algorithms are proposed based on the theory of Markov decision processes and approximate dynamic programming.

  13. A proposal on alternative sampling-based modeling method of spherical particles in stochastic media for Monte Carlo simulation

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Song Hyun; Lee, Jae Yong; Kim, Do Hyun; Kim, Jong Kyung [Dept. of Nuclear Engineering, Hanyang University, Seoul (Korea, Republic of); Noh, Jae Man [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2015-08-15

    The chord length sampling method is used in Monte Carlo simulations to model spherical particles in stochastic media with a random sampling technique. It has received attention due to its high calculation efficiency as well as user convenience; however, a technical issue regarding the boundary effect has been noted. In this study, after analyzing the distribution characteristics of spherical particles using an explicit method, an alternative chord length sampling method is proposed. In addition, for modeling in finite media, a correction method for the boundary effect is proposed. Using the proposed method, sample probability distributions and relative errors were estimated and compared with those calculated by the explicit method. The results show that the reconstruction ability and modeling accuracy of the particle probability distribution with the proposed method were considerably high. Also, from the local packing fraction results, the proposed method can successfully solve the boundary effect problem. It is expected that the proposed method can contribute to increasing the modeling accuracy in stochastic media.
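
    For orientation, the sketch below shows the textbook form of chord length sampling in a binary stochastic medium: the distance travelled in the matrix before hitting the next sphere is sampled from an exponential distribution with the standard mean chord length, and the path through a sphere is drawn from the isotropic sphere chord distribution. The mean-chord formula, radius and packing fraction are illustrative assumptions; the boundary-effect correction the paper actually proposes is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_matrix_chord(radius, packing_fraction):
    """Distance travelled in the matrix before entering the next sphere.

    Standard chord length sampling uses an exponential distribution with
    mean 4R(1 - p) / (3p); this is the simple textbook form, not the
    corrected sampling proposed in the paper.
    """
    mean_chord = 4.0 * radius * (1.0 - packing_fraction) / (3.0 * packing_fraction)
    return rng.exponential(mean_chord)

def sample_sphere_chord(radius):
    """Chord length through a sphere for an isotropic ray: f(l) = l / (2 R^2)."""
    return 2.0 * radius * np.sqrt(rng.random())

# Sanity check: the track-length fraction spent inside particles should
# approach the packing fraction in an infinite medium.
R, p = 0.05, 0.1
total, inside = 0.0, 0.0
while total < 1.0e3:
    total += sample_matrix_chord(R, p)
    chord = sample_sphere_chord(R)
    total += chord
    inside += chord
print("track-length packing fraction estimate:", inside / total)
```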

  14. A proposal on alternative sampling-based modeling method of spherical particles in stochastic media for Monte Carlo simulation

    International Nuclear Information System (INIS)

    Kim, Song Hyun; Lee, Jae Yong; Kim, Do Hyun; Kim, Jong Kyung; Noh, Jae Man

    2015-01-01

    The chord length sampling method is used in Monte Carlo simulations to model spherical particles in stochastic media with a random sampling technique. It has received attention due to its high calculation efficiency as well as user convenience; however, a technical issue regarding the boundary effect has been noted. In this study, after analyzing the distribution characteristics of spherical particles using an explicit method, an alternative chord length sampling method is proposed. In addition, for modeling in finite media, a correction method for the boundary effect is proposed. Using the proposed method, sample probability distributions and relative errors were estimated and compared with those calculated by the explicit method. The results show that the reconstruction ability and modeling accuracy of the particle probability distribution with the proposed method were considerably high. Also, from the local packing fraction results, the proposed method can successfully solve the boundary effect problem. It is expected that the proposed method can contribute to increasing the modeling accuracy in stochastic media.

  15. A Stochastic Maximum Principle for a Stochastic Differential Game of a Mean-Field Type

    Energy Technology Data Exchange (ETDEWEB)

    Hosking, John Joseph Absalom, E-mail: j.j.a.hosking@cma.uio.no [University of Oslo, Centre of Mathematics for Applications (CMA) (Norway)

    2012-12-15

    We construct a stochastic maximum principle (SMP) which provides necessary conditions for the existence of Nash equilibria in a certain form of N-agent stochastic differential game (SDG) of a mean-field type. The information structure considered for the SDG is of a possible asymmetric and partial type. To prove our SMP we take an approach based on spike-variations and adjoint representation techniques, analogous to that of S. Peng (SIAM J. Control Optim. 28(4):966-979, 1990) in the optimal stochastic control context. In our proof we apply adjoint representation procedures at three points. The first-order adjoint processes are defined as solutions to certain mean-field backward stochastic differential equations, and second-order adjoint processes of a first type are defined as solutions to certain backward stochastic differential equations. Second-order adjoint processes of a second type are defined as solutions of certain backward stochastic equations of a type that we introduce in this paper, and which we term conditional mean-field backward stochastic differential equations. From the resulting representations, we show that the terms relating to these second-order adjoint processes of the second type are of an order such that they do not appear in our final SMP equations. A comparable situation exists in an article by R. Buckdahn, B. Djehiche, and J. Li (Appl. Math. Optim. 64(2):197-216, 2011) that constructs a SMP for a mean-field type optimal stochastic control problem; however, the approach we take of using these second-order adjoint processes of a second type to deal with the type of terms that we refer to as the second form of quadratic-type terms represents an alternative to a development, to our setting, of the approach used in their article for their analogous type of term.

  16. A Stochastic Maximum Principle for a Stochastic Differential Game of a Mean-Field Type

    International Nuclear Information System (INIS)

    Hosking, John Joseph Absalom

    2012-01-01

    We construct a stochastic maximum principle (SMP) which provides necessary conditions for the existence of Nash equilibria in a certain form of N-agent stochastic differential game (SDG) of a mean-field type. The information structure considered for the SDG is of a possible asymmetric and partial type. To prove our SMP we take an approach based on spike-variations and adjoint representation techniques, analogous to that of S. Peng (SIAM J. Control Optim. 28(4):966–979, 1990) in the optimal stochastic control context. In our proof we apply adjoint representation procedures at three points. The first-order adjoint processes are defined as solutions to certain mean-field backward stochastic differential equations, and second-order adjoint processes of a first type are defined as solutions to certain backward stochastic differential equations. Second-order adjoint processes of a second type are defined as solutions of certain backward stochastic equations of a type that we introduce in this paper, and which we term conditional mean-field backward stochastic differential equations. From the resulting representations, we show that the terms relating to these second-order adjoint processes of the second type are of an order such that they do not appear in our final SMP equations. A comparable situation exists in an article by R. Buckdahn, B. Djehiche, and J. Li (Appl. Math. Optim. 64(2):197–216, 2011) that constructs a SMP for a mean-field type optimal stochastic control problem; however, the approach we take of using these second-order adjoint processes of a second type to deal with the type of terms that we refer to as the second form of quadratic-type terms represents an alternative to a development, to our setting, of the approach used in their article for their analogous type of term.

  17. ALOPEX stochastic optimization for pumping management in fresh water coastal aquifers

    International Nuclear Information System (INIS)

    Stratis, P N; Saridakis, Y G; Zakynthinaki, M S; Papadopoulou, E P

    2014-01-01

    Saltwater intrusion in freshwater aquifers is a problem of increasing significance in areas near the coastline. Apart from natural disastrous phenomena, such as earthquakes or floods, intense human pumping activities over the aquifer areas may change the chemical composition of the freshwater aquifer. Working towards real-time management of freshwater pumping from coastal aquifers, we have considered the deployment of the stochastic optimization Algorithm of Pattern Extraction (ALOPEX), coupled with several penalty strategies that produce convenient management policies. The present study, which further extends recently derived results, considers the analytical solution of a classical model for underground flow and the ALOPEX stochastic optimization technique to produce an efficient approach for pumping management over coastal aquifers. Numerical experimentation also includes a case study in the Vathi area on the Greek island of Kalymnos, to compare with known results in the literature as well as to demonstrate different management strategies.
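
    The ALOPEX family of algorithms updates every decision variable with a term proportional to the correlation between its last change and the last change of the objective, plus exploratory noise. The sketch below is a minimal, generic ALOPEX-style loop on a toy objective; the gains, the noise level and the penalty strategies used for the pumping-management problem above are not reproduced, and all names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

def alopex_maximize(f, x0, steps=2000, gamma=0.1, noise=0.05):
    """Minimal ALOPEX-style correlation update (illustrative, not the
    penalty-augmented variant applied to coastal aquifer management above).
    """
    x = np.asarray(x0, dtype=float)
    f_prev = f(x)
    dx = rng.normal(scale=noise, size=x.shape)
    for _ in range(steps):
        x_new = x + dx
        f_new = f(x_new)
        # Move each variable along the direction correlated with improvement,
        # plus fresh exploratory noise.
        dx = gamma * dx * (f_new - f_prev) + rng.normal(scale=noise, size=x.shape)
        x, f_prev = x_new, f_new
    return x, f_prev

# Toy example: a concave "benefit minus intrusion penalty" surrogate of one pumping rate.
best_x, best_f = alopex_maximize(lambda q: 3.0 - (q[0] - 2.0) ** 2, x0=[0.0])
```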

  18. Optimal control strategy for an impulsive stochastic competition system with time delays and jumps

    Science.gov (United States)

    Liu, Lidan; Meng, Xinzhu; Zhang, Tonghua

    2017-07-01

    Driven by both white and jump noises, a stochastic delayed model with two competitive species in a polluted environment is proposed and investigated. By using the comparison theorem of stochastic differential equations and limit superior theory, sufficient conditions for persistence in mean and extinction of the two species are established. In addition, we show that the system is asymptotically stable in distribution by using the ergodic method. Furthermore, the optimal harvesting effort and the maximum of the expectation of sustainable yield (ESY) are derived from the Hessian matrix method and the optimal harvesting theory of differential equations. Finally, some numerical simulations are provided to illustrate the theoretical results.

  19. Optimal operating rules definition in complex water resource systems combining fuzzy logic, expert criteria and stochastic programming

    Science.gov (United States)

    Macian-Sorribes, Hector; Pulido-Velazquez, Manuel

    2016-04-01

    This contribution presents a methodology for defining optimal seasonal operating rules in multireservoir systems coupling expert criteria and stochastic optimization. Both sources of information are combined using fuzzy logic. The structure of the operating rules is defined based on expert criteria, via a joint expert-technician framework consisting of a series of meetings, workshops and surveys carried out between reservoir managers and modelers. As a result, the decision-making process used by managers can be assessed and expressed using fuzzy logic: fuzzy rule-based systems are employed to represent the operating rules and fuzzy regression procedures are used for forecasting future inflows. Once that is done, a stochastic optimization algorithm can be used to define optimal decisions and transform them into fuzzy rules. Finally, the optimal fuzzy rules and the inflow prediction scheme are combined into a Decision Support System for making seasonal forecasts and simulating the effect of different alternatives in response to the initial system state and the foreseen inflows. The approach presented has been applied to the Jucar River Basin (Spain). Reservoir managers explained how the system is operated, taking into account the reservoirs' states at the beginning of the irrigation season and the inflows previewed during that season. According to the information given by them, the Jucar River Basin operating policies were expressed via two fuzzy rule-based (FRB) systems that estimate the amount of water to be allocated to the users and how the reservoir storages should be balanced to guarantee those deliveries. A stochastic optimization model using Stochastic Dual Dynamic Programming (SDDP) was developed to define optimal decisions, which are transformed into optimal operating rules by embedding them into the two FRBs previously created. As a benchmark, historical records are used to develop alternative operating rules. A fuzzy linear regression procedure was employed to

  20. Sizing for fuel cell/supercapacitor hybrid vehicles based on stochastic driving cycles

    International Nuclear Information System (INIS)

    Feroldi, Diego; Carignano, Mauro

    2016-01-01

    Highlights: • A sizing procedure based on the fulfilment of real driving conditions is proposed. • A methodology to generate long-term stochastic driving cycles is proposed. • A parametric optimization of the real-time EMS is conducted. • A trade-off design is adopted from a Pareto front. • A comparison with optimal consumption via Dynamic Programming is performed. - Abstract: In this article, a methodology for the sizing and analysis of fuel cell/supercapacitor hybrid vehicles is presented. The proposed sizing methodology is based on the fulfilment of power requirements, including sustained speed tests and stochastic driving cycles. The procedure to generate driving cycles is also presented in this paper. The sizing algorithm explicitly accounts for the Equivalent Consumption Minimization Strategy (ECMS). The performance is compared with optimal consumption, which is found using an off-line strategy via Dynamic Programming. The sizing methodology provides guidance for sizing the fuel cell and the supercapacitor number. The results also include analysis on oversizing the fuel cell and varying the parameters of the energy management strategy. The simulation results highlight the importance of integrating sizing and energy management into fuel cell hybrid vehicles.
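
    At its core, ECMS makes an instantaneous decision: split the demanded power between the fuel cell and the buffer so that fuel power plus equivalence-factor-weighted buffer power is minimized, with a correction that discourages drifting away from a reference state of charge. The snippet below is a generic one-step ECMS decision with hypothetical names and numbers; it is not the parametrization optimized in the paper.

```python
import numpy as np

def ecms_split(p_demand, soc, p_fc_grid, fuel_power, s_factor=2.5,
               soc_ref=0.6, k_soc=1.0):
    """One ECMS decision: choose the fuel cell power minimizing instantaneous
    equivalent consumption (fuel power + weighted supercapacitor power).

    p_fc_grid  : candidate fuel cell power levels [W]
    fuel_power : function mapping fuel cell electric power to fuel power [W]
    s_factor   : equivalence factor converting electric to fuel energy
    The SOC-correction form and all numbers are illustrative assumptions.
    """
    p_buffer = p_demand - p_fc_grid                  # supercapacitor covers the rest
    soc_penalty = 1.0 + k_soc * (soc_ref - soc)      # discourage SOC drift
    cost = fuel_power(p_fc_grid) + s_factor * soc_penalty * p_buffer
    return p_fc_grid[np.argmin(cost)]

# Example: 10 kW demand, SOC below its reference, quadratic fuel map.
grid = np.linspace(0.0, 20e3, 201)
p_fc = ecms_split(10e3, 0.55, grid, lambda p: 1.2 * p + 5e-5 * p**2)
```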

  1. Experimental study of the semi-active control of a nonlinear two-span bridge using stochastic optimal polynomial control

    Science.gov (United States)

    El-Khoury, O.; Kim, C.; Shafieezadeh, A.; Hur, J. E.; Heo, G. H.

    2015-06-01

    This study performs a series of numerical simulations and shake-table experiments to design and assess the performance of a nonlinear clipped feedback control algorithm based on optimal polynomial control (OPC) to mitigate the response of a two-span bridge equipped with a magnetorheological (MR) damper. As an extended conventional linear quadratic regulator, OPC provides more flexibility in the control design and further enhances system performance. The challenges encountered in this case are (1) the linearization of the nonlinear behavior of various components and (2) the selection of the weighting matrices in the objective function of OPC. The first challenge is addressed by using stochastic linearization which replaces the nonlinear portion of the system behavior with an equivalent linear time-invariant model considering the stochasticity in the excitation. Furthermore, a genetic algorithm is employed to find optimal weighting matrices for the control design. The input current to the MR damper installed between adjacent spans is determined using a clipped stochastic optimal polynomial control algorithm. The performance of the controlled system is assessed through a set of shake-table experiments for far-field and near-field ground motions. The proposed method showed considerable improvements over passive cases especially for the far-field ground motion.

  2. Experimental study of the semi-active control of a nonlinear two-span bridge using stochastic optimal polynomial control

    International Nuclear Information System (INIS)

    El-Khoury, O; Shafieezadeh, A; Hur, J E; Kim, C; Heo, G H

    2015-01-01

    This study performs a series of numerical simulations and shake-table experiments to design and assess the performance of a nonlinear clipped feedback control algorithm based on optimal polynomial control (OPC) to mitigate the response of a two-span bridge equipped with a magnetorheological (MR) damper. As an extended conventional linear quadratic regulator, OPC provides more flexibility in the control design and further enhances system performance. The challenges encountered in this case are (1) the linearization of the nonlinear behavior of various components and (2) the selection of the weighting matrices in the objective function of OPC. The first challenge is addressed by using stochastic linearization which replaces the nonlinear portion of the system behavior with an equivalent linear time-invariant model considering the stochasticity in the excitation. Furthermore, a genetic algorithm is employed to find optimal weighting matrices for the control design. The input current to the MR damper installed between adjacent spans is determined using a clipped stochastic optimal polynomial control algorithm. The performance of the controlled system is assessed through a set of shake-table experiments for far-field and near-field ground motions. The proposed method showed considerable improvements over passive cases especially for the far-field ground motion. (paper)

  3. Automatic Motion Generation for Robotic Milling Optimizing Stiffness with Sample-Based Planning

    Directory of Open Access Journals (Sweden)

    Julian Ricardo Diaz Posada

    2017-01-01

    Optimal and intuitive robotic machining is still a challenge. One of the main reasons for this is the lack of robot stiffness, which is also dependent on the robot positioning in the Cartesian space. To make up for this deficiency and with the aim of increasing robot machining accuracy, this contribution describes a solution approach for optimizing the stiffness over a desired milling path using the free degree of freedom of the machining process. The optimal motion is computed based on the semantic and mathematical interpretation of the manufacturing process modeled on its components: product, process and resource; and by automatically configuring a sample-based motion problem and the transition-based rapidly-exploring random tree algorithm for computing an optimal motion. The approach is simulated in CAM software for a machining path, revealing its functionality and outlining future potential for optimal motion generation for robotic machining processes.

  4. The ESPAT tool: a general-purpose DSS shell for solving stochastic optimization problems in complex river-aquifer systems

    Science.gov (United States)

    Macian-Sorribes, Hector; Pulido-Velazquez, Manuel; Tilmant, Amaury

    2015-04-01

    Stochastic programming methods are better suited to deal with the inherent uncertainty of inflow time series in water resource management. However, one of the most important hurdles to their use in practical implementations is the lack of generalized Decision Support System (DSS) shells, which are usually based on a deterministic approach. The purpose of this contribution is to present a general-purpose DSS shell, named Explicit Stochastic Programming Advanced Tool (ESPAT), able to build and solve stochastic programming problems for most water resource systems. It implements a hydro-economic approach, optimizing the total system benefits as the sum of the benefits obtained by each user. It has been coded using GAMS, and implements a Microsoft Excel interface with a GAMS-Excel link that allows the user to introduce the required data and recover the results. Therefore, no GAMS skills are required to run the program. The tool is divided into four modules according to its capabilities: 1) the ESPATR module, which performs stochastic optimization procedures in surface water systems using a Stochastic Dual Dynamic Programming (SDDP) approach; 2) the ESPAT_RA module, which optimizes coupled surface-groundwater systems using a modified SDDP approach; 3) the ESPAT_SDP module, capable of performing stochastic optimization procedures in small-size surface systems using a standard SDP approach; and 4) the ESPAT_DET module, which implements a deterministic programming procedure using non-linear programming, able to solve deterministic optimization problems in complex surface-groundwater river basins. The case study of the Mijares river basin (Spain) is used to illustrate the method. It consists of two reservoirs in series, one aquifer and four agricultural demand sites currently managed using historical (XIV century) rights, which give priority to the most traditional irrigation district over the XX century agricultural developments. Its size makes it possible to use either the SDP or

  5. A generic methodology for the optimisation of sewer systems using stochastic programming and self-optimizing control

    DEFF Research Database (Denmark)

    Maurico-Iglesias, Miguel; Castro, Ignacio Montero; Mollerup, Ane Loft

    2015-01-01

    The design of sewer system control is a complex task given the large size of the sewer networks, the transient dynamics of the water flow and the stochastic nature of rainfall. This contribution presents a generic methodology for the design of a self-optimising controller in sewer systems. Such a controller is aimed at keeping the system close to the optimal performance, thanks to an optimal selection of controlled variables. The definition of an optimal performance was carried out by a two-stage optimisation (stochastic and deterministic) to take into account both the overflow during the current...

  6. Learn-and-Adapt Stochastic Dual Gradients for Network Resource Allocation

    OpenAIRE

    Chen, Tianyi; Ling, Qing; Giannakis, Georgios B.

    2017-01-01

    Network resource allocation shows revived popularity in the era of data deluge and information explosion. Existing stochastic optimization approaches fall short in attaining a desirable cost-delay tradeoff. Recognizing the central role of Lagrange multipliers in network resource allocation, a novel learn-and-adapt stochastic dual gradient (LA-SDG) method is developed in this paper to learn the sample-optimal Lagrange multiplier from historical data, and accordingly adapt the upcoming resource...
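
    As background, a plain stochastic dual (sub)gradient loop for a single averaged resource constraint is sketched below: the primal allocation minimizes the instantaneous Lagrangian and the multiplier is pushed by the observed constraint violation. This is only the SDG baseline that LA-SDG builds on; the learn-and-adapt correction using historical samples is not reproduced, and the cost and arrival model are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)

# Baseline stochastic dual gradient for: minimize E[x_t^2] subject to
# E[a_t - x_t] <= 0, with the allocation x_t constrained to [0, x_max].
mu, x_max, lam = 0.05, 2.0, 0.0
for t in range(10_000):
    a_t = rng.uniform(0.5, 1.5)              # random arrival / demand
    # Primal step: x_t = argmin_x x**2 - lam * x on [0, x_max].
    x_t = np.clip(lam / 2.0, 0.0, x_max)
    # Dual step: the multiplier tracks the instantaneous constraint violation.
    lam = max(0.0, lam + mu * (a_t - x_t))
print("steady-state multiplier approx.:", lam)
```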

  7. Optimal stochastic management of renewable MG (micro-grids) considering electro-thermal model of PV (photovoltaic)

    International Nuclear Information System (INIS)

    Najibi, Fatemeh; Niknam, Taher; Kavousi-Fard, Abdollah

    2016-01-01

    This paper reports the results of research conducted on a thermal and electrical model for photovoltaics. Moreover, a probabilistic framework is introduced for considering all uncertainties in the optimal energy management of the Micro-Grid problem. A typical Micro-Grid is studied as a case, including different renewable energy sources, such as Photovoltaic, Micro Turbine and Wind Turbine units, and one battery as a storage device for storing energy. The uncertainties of market price variation, photovoltaic and wind turbine output power change and load demand error are covered by the suggested probabilistic framework. The Micro-Grid problem is of a nonlinear nature because of the stochastic behavior of the renewable energy sources such as the Photovoltaic and Wind Turbine units, and hence there is a need for a powerful tool to solve the problem. Therefore, in addition to the simulated thermal model and suggested probabilistic framework, a new algorithm is also introduced. The Backtracking Search Optimization Algorithm is described as a useful method to optimize the MG (micro-grid) problem. This algorithm has the benefit of escaping from local optima while converging fast. The proposed algorithm is also tested on the typical Micro-Grid. - Highlights: • Proposing an electro-thermal model for PV. • Proposing a new stochastic formulation for optimal operation of renewable MGs. • Introduction of a new optimization method based on BSO to explore the problem search space.

  8. A new unbiased stochastic derivative estimator for discontinuous sample performances with structural parameters

    NARCIS (Netherlands)

    Peng, Yijie; Fu, Michael C.; Hu, Jian Qiang; Heidergott, Bernd

    In this paper, we propose a new unbiased stochastic derivative estimator in a framework that can handle discontinuous sample performances with structural parameters. This work extends the three most popular unbiased stochastic derivative estimators: (1) infinitesimal perturbation analysis (IPA), (2)

  9. Constrained Optimization via Stochastic approximation with a simultaneous perturbation gradient approximation

    DEFF Research Database (Denmark)

    Sadegh, Payman

    1997-01-01

    This paper deals with a projection algorithm for stochastic approximation using a simultaneous perturbation gradient approximation for optimization under inequality constraints, where no direct gradient of the loss function is available and the inequality constraints are given as explicit functions of the optimization parameters. It is shown that, under application of the projection algorithm, the parameter iterate converges almost surely to a Kuhn-Tucker point. The procedure is illustrated by a numerical example. (C) 1997 Elsevier Science Ltd.
  10. A diffusion-based approach to stochastic individual growth and energy budget, with consequences to life-history optimization and population dynamics.

    Science.gov (United States)

    Filin, I

    2009-06-01

    Using diffusion processes, I model stochastic individual growth, given exogenous hazards and starvation risk. By maximizing survival to final size, optimal life histories (e.g. switching size for habitat/dietary shift) are determined by two ratios: mean growth rate over growth variance (diffusion coefficient) and mortality rate over mean growth rate; all are size dependent. For example, switching size decreases with either ratio, if both are positive. I provide examples and compare with previous work on risk-sensitive foraging and the energy-predation trade-off. I then decompose individual size into reversibly and irreversibly growing components, e.g. reserves and structure. I provide a general expression for optimal structural growth, when reserves grow stochastically. I conclude that increased growth variance of reserves delays structural growth (raises threshold size for its commencement) but may eventually lead to larger structures. The effect depends on whether the structural trait is related to foraging or defence. Implications for population dynamics are discussed.

  11. Stochastic Modeling and Optimization in a Microgrid: A Survey

    Directory of Open Access Journals (Sweden)

    Hao Liang

    2014-03-01

    The future smart grid is expected to be an interconnected network of small-scale and self-contained microgrids, in addition to a large-scale electric power backbone. By utilizing microsources, such as renewable energy sources and combined heat and power plants, microgrids can supply electrical and heat loads in local areas in an economic and environmentally friendly way. To better adopt the intermittent and weather-dependent renewable power generation, energy storage devices, such as batteries, heat buffers and plug-in electric vehicles (PEVs) with vehicle-to-grid systems, can be integrated in microgrids. However, significant technical challenges arise in the planning, operation and control of microgrids, due to the randomness in renewable power generation, the buffering effect of energy storage devices and the high mobility of PEVs. The two-way communication functionalities of the future smart grid provide an opportunity to address these challenges, by offering the communication links for microgrid status information collection. However, how to utilize stochastic modeling and optimization tools for efficient, reliable and economic planning, operation and control of microgrids remains an open issue. In this paper, we investigate the key features of microgrids and provide a comprehensive literature survey on the stochastic modeling and optimization tools for a microgrid. Future research directions are also identified.

  12. Stochastic coupled cluster theory: Efficient sampling of the coupled cluster expansion

    Science.gov (United States)

    Scott, Charles J. C.; Thom, Alex J. W.

    2017-09-01

    We consider the sampling of the coupled cluster expansion within stochastic coupled cluster theory. Observing the limitations of previous approaches due to the inherently non-linear behavior of a coupled cluster wavefunction representation, we propose new approaches based on an intuitive, well-defined condition for sampling weights and on sampling the expansion in cluster operators of different excitation levels. We term these modifications even and truncated selections, respectively. Utilising both approaches demonstrates dramatically improved calculation stability as well as reduced computational and memory costs. These modifications are particularly effective at higher truncation levels owing to the large number of terms within the cluster expansion that can be neglected, as demonstrated by the reduction of the number of terms to be sampled when truncating at triple excitations by 77% and hextuple excitations by 98%.

  13. Unit Commitment Towards Decarbonized Network Facing Fixed and Stochastic Resources Applying Water Cycle Optimization

    Directory of Open Access Journals (Sweden)

    Heba-Allah I. ElAzab

    2018-05-01

    This paper presents a trustworthy unit commitment study to schedule both Renewable Energy Resources (RERs) and conventional power plants to potentially decarbonize the electrical network. The study has employed a system with three IEEE thermal (coal-fired) power plants as dispatchable distributed generators, one wind plant and one solar plant as stochastic distributed generators, and Plug-in Electric Vehicles (PEVs), which can work as either loads or generators based on their charging schedule. This paper investigates the unit commitment scheduling objective to minimize the Combined Economic Emission Dispatch (CEED). To reduce combined emission costs by integrating more renewable energy resources (RERs) and PEVs, there is an essential need to decarbonize the existing system. Decarbonizing the system means reducing the percentage of CO2 emissions. The uncertain behavior of wind and solar energies causes imbalance penalty costs. PEVs are proposed to overcome the intermittent nature of wind and solar energies. It is important to optimally integrate and schedule stochastic resources, including the wind and solar energies and the PEV charge and discharge processes, with the dispatchable resources, i.e. the three IEEE thermal (coal-fired) power plants. The Water Cycle Optimization Algorithm (WCOA) is an efficient and intelligent meta-heuristic technique employed to solve the economic emission dispatch problem for scheduling both dispatchable and stochastic resources. The goal of this study is to obtain the solution for unit commitment that minimizes the combined cost function, including CO2 emission costs, applying the Water Cycle Optimization Algorithm (WCOA). To validate the WCOA technique, the results are compared with the results obtained from applying the Dynamic Programming (DP) algorithm, which is considered a conventional numerical technique, and with the Genetic Algorithm (GA) as a meta-heuristic technique.

  14. Optimizing continuous cover management of boreal forest when timber prices and tree growth are stochastic

    Directory of Open Access Journals (Sweden)

    Timo Pukkala

    2015-03-01

    Background: Decisions on forest management are made under risk and uncertainty because the stand development cannot be predicted exactly and future timber prices are unknown. Deterministic calculations may lead to biased advice on optimal forest management. The study optimized continuous cover management of boreal forest in a situation where tree growth, regeneration, and timber prices include uncertainty. Methods: Both anticipatory and adaptive optimization approaches were used. The adaptive approach optimized the reservation price function instead of fixed cutting years. The future prices of different timber assortments were described by cross-correlated auto-regressive models. The high variation around the ingrowth model was simulated using a model that describes the cross- and autocorrelations of the regeneration results of different species and years. Tree growth was predicted with individual tree models, the predictions of which were adjusted on the basis of a climate-induced growth trend, which was stochastic. Residuals of the deterministic diameter growth model were also simulated. They consisted of random tree factors and cross- and autocorrelated temporal terms. Results: Of the analyzed factors, timber price caused the most uncertainty in the calculation of the net present value of a certain management schedule. Ingrowth and climate trend were less significant sources of risk and uncertainty than tree growth. Stochastic anticipatory optimization led to more diverse post-cutting stand structures than obtained in deterministic optimization. The cutting interval was shorter when risk and uncertainty were included in the analyses. Conclusions: Adaptive optimization and management led to 6%–14% higher net present values than obtained in management that was based on anticipatory optimization. Increasing risk aversion of the forest landowner led to earlier cuttings in a mature stand. The effect of risk attitude on optimization results was small.

  15. Comparative analysis of cogeneration power plants optimization based on stochastic method using superstructure and process simulator

    Energy Technology Data Exchange (ETDEWEB)

    Araujo, Leonardo Rodrigues de [Instituto Federal do Espirito Santo, Vitoria, ES (Brazil)], E-mail: leoaraujo@ifes.edu.br; Donatelli, Joao Luiz Marcon [Universidade Federal do Espirito Santo (UFES), Vitoria, ES (Brazil)], E-mail: joaoluiz@npd.ufes.br; Silva, Edmar Alino da Cruz [Instituto Tecnologico de Aeronautica (ITA/CTA), Sao Jose dos Campos, SP (Brazil); Azevedo, Joao Luiz F. [Instituto de Aeronautica e Espaco (CTA/IAE/ALA), Sao Jose dos Campos, SP (Brazil)

    2010-07-01

    Thermal systems are essential in facilities such as thermoelectric plants, cogeneration plants, refrigeration systems and air conditioning, among others, in which much of the energy consumed by humanity is processed. In a world with finite natural sources of fuels and growing energy demand, issues related to thermal system design, such as cost estimation, design complexity, environmental protection and optimization, are becoming increasingly important. Hence the need to understand the mechanisms that degrade energy, improve the use of energy sources, reduce environmental impacts and also reduce project, operation and maintenance costs. In recent years, a consistent development of procedures and techniques for the computational design of thermal systems has occurred. In this context, the fundamental objective of this study is a comparative performance analysis of the structural and parametric optimization of a cogeneration system using stochastic methods: genetic algorithm and simulated annealing. This research work uses a superstructure, modelled in a process simulator (IPSEpro of SimTech), in which the appropriate design options for the case studied are included. Accordingly, the optimal configuration of the cogeneration system is determined as a consequence of the optimization process, restricted to the configuration options included in the superstructure. The optimization routines are written in MS Excel Visual Basic, in order to work perfectly coupled to the process simulator. At the end of the optimization process, the optimal system configuration, given the characteristics of each specific problem, should be defined. (author)

  16. Stochastic Modelling and Optimization of Complex Infrastructure Systems

    DEFF Research Database (Denmark)

    Thoft-Christensen, Palle

    In this paper it is shown that recent progress in stochastic modelling and optimization in combination with advanced computer systems has now made it possible to improve the design and the maintenance strategies for infrastructure systems. The paper concentrates on highway networks and single large bridges. The United States has perhaps the largest highway network in the world, with more than 0.5 million highway bridges; see Chase, S.B. 1999. About 40% of these bridges are considered deficient and more than $50 billion is estimated to be needed to correct the deficiencies; see Roberts, J.E. 2001...

  17. Optimizing ZigBee Security using Stochastic Model Checking

    DEFF Research Database (Denmark)

    Yuksel, Ender; Nielson, Hanne Riis; Nielson, Flemming

    ZigBee is a fairly new but promising wireless sensor network standard that offers the advantages of simple and low-resource communication. Nevertheless, security is of great concern to ZigBee, and enhancements are prescribed in the latest ZigBee specification: ZigBee-2007. In this technical report, we identify an important gap in the specification on key updates, and present a methodology for determining optimal key update policies and security parameters. We exploit the stochastic model checking approach using the probabilistic model checker PRISM, and assess the security needs for realistic...

  18. An Augmented Incomplete Factorization Approach for Computing the Schur Complement in Stochastic Optimization

    KAUST Repository

    Petra, Cosmin G.; Schenk, Olaf; Lubin, Miles; Gä ertner, Klaus

    2014-01-01

    We present a scalable approach and implementation for solving stochastic optimization problems on high-performance computers. In this work we revisit the sparse linear algebra computations of the parallel solver PIPS with the goal of improving the shared-memory performance and decreasing the time to solution. These computations consist of solving sparse linear systems with multiple sparse right-hand sides and are needed in our Schur-complement decomposition approach to compute the contribution of each scenario to the Schur matrix. Our novel approach uses an incomplete augmented factorization implemented within the PARDISO linear solver and an outer BiCGStab iteration to efficiently absorb pivot perturbations occurring during factorization. This approach is capable of both efficiently using the cores inside a computational node and exploiting sparsity of the right-hand sides. We report on the performance of the approach on high-performance computers when solving stochastic unit commitment problems of unprecedented size (billions of variables and constraints) that arise in the optimization and control of electrical power grids. Our numerical experiments suggest that supercomputers can be efficiently used to solve power grid stochastic optimization problems with thousands of scenarios under the strict "real-time" requirements of power grid operators. To our knowledge, this has not been possible prior to the present work. © 2014 Society for Industrial and Applied Mathematics.

  19. Portfolio Optimization under Local-Stochastic Volatility: Coefficient Taylor Series Approximations & Implied Sharpe Ratio

    OpenAIRE

    Lorig, Matthew; Sircar, Ronnie

    2015-01-01

    We study the finite horizon Merton portfolio optimization problem in a general local-stochastic volatility setting. Using model coefficient expansion techniques, we derive approximations for both the value function and the optimal investment strategy. We also analyze the 'implied Sharpe ratio' and derive a series approximation for this quantity. The zeroth-order approximation of the value function and optimal investment strategy correspond to those obtained by Merton (1969) when the risky...

  20. Stochastic Learning of Multi-Instance Dictionary for Earth Mover's Distance based Histogram Comparison

    OpenAIRE

    Fan, Jihong; Liang, Ru-Ze

    2016-01-01

    The dictionary plays an important role in multi-instance data representation. It maps bags of instances to histograms. Earth mover's distance (EMD) is the most effective histogram distance metric for the application of multi-instance retrieval. However, up to now, there are no existing multi-instance dictionary learning methods designed for EMD-based histogram comparison. To fill this gap, we develop the first EMD-optimal dictionary learning method using a stochastic optimization method. In the stoc...

  1. Stochastic goal programming based groundwater remediation management under human-health-risk uncertainty

    International Nuclear Information System (INIS)

    Li, Jing; He, Li; Lu, Hongwei; Fan, Xing

    2014-01-01

    Highlights: • We propose an integrated optimal groundwater remediation design approach. • The approach can address stochasticity in carcinogenic risks. • Goal programming is used to make the system approach ideal operation and remediation effects. • The uncertainty in the slope factor is evaluated under different confidence levels. • Optimal strategies are obtained to support remediation design under uncertainty. - Abstract: An optimal design approach for groundwater remediation is developed through incorporating numerical simulation, health risk assessment, uncertainty analysis and nonlinear optimization within a general framework. Stochastic analysis and goal programming are introduced into the framework to handle uncertainties in real-world groundwater remediation systems. Carcinogenic risks associated with remediation actions are further evaluated at four confidence levels. The differences between ideal and predicted constraints are minimized by goal programming. The approach is then applied to a contaminated site in western Canada for creating a set of optimal remediation strategies. Results from the case study indicate that factors including environmental standards, health risks and technical requirements mutually affected and restricted one another. Stochastic uncertainty existed in the entire process of remediation optimization, and should be taken into consideration in groundwater remediation design.

  2. Adaptive Near-Optimal Multiuser Detection Using a Stochastic and Hysteretic Hopfield Net Receiver

    Directory of Open Access Journals (Sweden)

    Gábor Jeney

    2003-01-01

    This paper proposes a novel adaptive MUD algorithm for a wide variety (practically any kind) of interference-limited systems, for example, code division multiple access (CDMA). The algorithm is based on recently developed neural network techniques and can perform near optimal detection in the case of unknown channel characteristics. The proposed algorithm consists of two main blocks; one estimates the symbols sent by the transmitters, the other identifies each channel of the corresponding communication links. The estimation of symbols is carried out either by a stochastic Hopfield net (SHN) or by a hysteretic neural network (HyNN), or both. The channel identification is based on either the self-organizing feature map (SOM) or the learning vector quantization (LVQ). The combination of these two blocks yields a powerful real-time detector with near optimal performance. The performance is analyzed by extensive simulations.

  3. Optimal Strategy for Integrated Dynamic Inventory Control and Supplier Selection in Unknown Environment via Stochastic Dynamic Programming

    International Nuclear Information System (INIS)

    Sutrisno; Widowati; Solikhin

    2016-01-01

    In this paper, we propose a mathematical model in stochastic dynamic optimization form to determine the optimal strategy for an integrated single product inventory control problem and supplier selection problem where the demand and purchasing cost parameters are random. For each time period, by using the proposed model, we decide the optimal supplier and calculate the optimal product volume purchased from the optimal supplier so that the inventory level will be located at some point as close as possible to the reference point with minimal cost. We use stochastic dynamic programming to solve this problem and give several numerical experiments to evaluate the model. From the results, for each time period, the proposed model generated the optimal supplier and the inventory level tracked the reference point well. (paper)
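
    A toy backward recursion in the spirit of the model above: in each period the supplier and order quantity are chosen to minimize expected purchase cost plus the deviation of the next inventory level from the reference point, with the expectation taken over random demand. All numbers, the cost structure and the deterministic unit costs are illustrative assumptions (the paper also randomizes the purchasing costs).

```python
import numpy as np

# Tiny illustrative instance (numbers are hypothetical, not from the paper).
T = 3                                           # planning periods
levels = np.arange(0, 21)                       # inventory grid
ref = 10                                        # reference inventory level
supplier_cost = [2.0, 2.4]                      # unit purchase cost per supplier
demand_vals = np.array([3, 5, 7])
demand_prob = np.array([0.3, 0.4, 0.3])

V = np.zeros((T + 1, len(levels)))              # terminal value = 0
policy = np.zeros((T, len(levels), 2), dtype=int)    # (supplier, order quantity)

# Backward stochastic dynamic programming.
for t in range(T - 1, -1, -1):
    for i, inv in enumerate(levels):
        best, best_act = np.inf, (0, 0)
        for s, cost in enumerate(supplier_cost):
            for q in range(0, 16):
                exp_cost = 0.0
                for d, pd in zip(demand_vals, demand_prob):
                    nxt = int(np.clip(inv + q - d, levels[0], levels[-1]))
                    exp_cost += pd * (cost * q + abs(nxt - ref) + V[t + 1, nxt])
                if exp_cost < best:
                    best, best_act = exp_cost, (s, q)
        V[t, i], policy[t, i] = best, best_act
```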

  4. Graph-based stochastic control with constraints: A unified approach with perfect and imperfect measurements

    KAUST Repository

    Agha-mohammadi, Ali-akbar

    2013-06-01

    This paper is concerned with the problem of stochastic optimal control (possibly with imperfect measurements) in the presence of constraints. We propose a computationally tractable framework to address this problem. The method lends itself to sampling-based methods where we construct a graph in the state space of the problem, on which a Dynamic Programming (DP) problem is solved and a closed-loop feedback policy is computed. The constraints are seamlessly incorporated into the control policy selection by including their effect on the transition probabilities of the graph edges. We present a unified framework that is applicable both in the state space (with perfect measurements) and in the information space (with imperfect measurements).
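
    The idea of folding constraints into the transition probabilities of a roadmap graph can be illustrated with a few lines of value iteration: every edge carries a traversal cost and a probability of violating a constraint, and violation leads to a terminal failure cost. The toy graph, costs and probabilities below are invented for illustration and are not the framework's actual construction.

```python
import numpy as np

# Toy roadmap: node -> list of (next node, traversal cost, probability that the
# constraint is violated along the edge). Violation ends the run with FAIL_COST.
edges = {
    0: [(1, 1.0, 0.05), (2, 2.0, 0.01)],
    1: [(3, 1.0, 0.10)],
    2: [(3, 1.5, 0.02)],
    3: [],                                    # goal node
}
FAIL_COST = 50.0
V = {n: 0.0 for n in edges}

# Dynamic programming (value iteration) on the graph; constraint effects enter
# only through the transition probabilities, as in the framework above.
for _ in range(100):
    for n, out in edges.items():
        if not out:
            continue                          # goal keeps value 0
        V[n] = min(c + p * FAIL_COST + (1.0 - p) * V[m] for m, c, p in out)
print(V)
```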

  5. Learning-based stochastic object models for characterizing anatomical variations

    Science.gov (United States)

    Dolly, Steven R.; Lou, Yang; Anastasio, Mark A.; Li, Hua

    2018-03-01

    It is widely known that the optimization of imaging systems based on objective, task-based measures of image quality via computer-simulation requires the use of a stochastic object model (SOM). However, the development of computationally tractable SOMs that can accurately model the statistical variations in human anatomy within a specified ensemble of patients remains a challenging task. Previously reported numerical anatomic models lack the ability to accurately model inter-patient and inter-organ variations in human anatomy among a broad patient population, mainly because they are established on image data corresponding to a few of patients and individual anatomic organs. This may introduce phantom-specific bias into computer-simulation studies, where the study result is heavily dependent on which phantom is used. In certain applications, however, databases of high-quality volumetric images and organ contours are available that can facilitate this SOM development. In this work, a novel and tractable methodology for learning a SOM and generating numerical phantoms from a set of volumetric training images is developed. The proposed methodology learns geometric attribute distributions (GAD) of human anatomic organs from a broad patient population, which characterize both centroid relationships between neighboring organs and anatomic shape similarity of individual organs among patients. By randomly sampling the learned centroid and shape GADs with the constraints of the respective principal attribute variations learned from the training data, an ensemble of stochastic objects can be created. The randomness in organ shape and position reflects the learned variability of human anatomy. To demonstrate the methodology, a SOM of an adult male pelvis is computed and examples of corresponding numerical phantoms are created.

  6. Dynamic-Programming Approaches to Single- and Multi-Stage Stochastic Knapsack Problems for Portfolio Optimization

    National Research Council Canada - National Science Library

    Khoo, Wai

    1999-01-01

    .... These problems model stochastic portfolio optimization problems (SPOPs) which assume deterministic unit weight, and normally distributed unit return with known mean and variance for each item type...

  7. Lot Sizing Based on Stochastic Demand and Service Level Constraint

    Directory of Open Access Journals (Sweden)

    hajar shirneshan

    2012-06-01

    Considering its applications, stochastic lot sizing is a significant subject in production planning. Also, the concept of service level is more applicable than shortage cost from the managers' viewpoint. In this paper, the stochastic multi-period multi-item capacitated lot sizing problem has been investigated considering a service level constraint. First, the single-item model has been developed considering the service level and with no capacity constraint, and it has been solved using a dynamic programming algorithm and the optimal solution has been derived. Then the model has been generalized to the multi-item problem with a capacity constraint. The stochastic multi-period multi-item capacitated lot sizing problem is NP-hard, hence the model could not be solved by exact optimization approaches. Therefore, a simulated annealing method has been applied for solving the problem. Finally, in order to evaluate the efficiency of the model, a low-level criterion has been used.
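
    A compressed sketch of how simulated annealing can be applied to such a lot-sizing problem: candidate plans are perturbed one period at a time, infeasible plans (capacity or service-level violations) are penalized, and worse plans are still accepted with a temperature-dependent probability. The service-level constraint is approximated here by a simple safety-stock term, and every number is an illustrative assumption rather than the paper's formulation.

```python
import numpy as np

rng = np.random.default_rng(4)

mean_demand = np.array([20.0, 35.0, 15.0, 40.0, 25.0, 30.0])
z_safety = 1.64 * 5.0 * np.sqrt(np.arange(1, 7))      # crude safety stock per period
SETUP, HOLD, CAPACITY = 100.0, 1.0, 60.0

def total_cost(plan):
    """Setup + holding cost; capacity or service-level violations are penalized."""
    inv, cost = 0.0, 0.0
    for t, q in enumerate(plan):
        if q > CAPACITY:
            return 1e9
        cost += SETUP * (q > 0)
        inv += q - mean_demand[t]
        if inv < z_safety[t]:                          # service level not met
            return 1e9
        cost += HOLD * inv
    return cost

plan = np.full(6, 40.0)                                # feasible starting plan
cost, T = total_cost(plan), 50.0
for _ in range(5000):
    cand = plan.copy()
    t = rng.integers(6)
    cand[t] = max(0.0, cand[t] + rng.normal(scale=5.0))
    c = total_cost(cand)
    if c < cost or rng.random() < np.exp((cost - c) / T):
        plan, cost = cand, c                           # accept (possibly worse) plan
    T *= 0.998                                         # geometric cooling
print(plan, cost)
```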

  8. AN ADAPTIVE OPTIMAL KALMAN FILTER FOR STOCHASTIC VIBRATION CONTROL SYSTEM WITH UNKNOWN NOISE VARIANCES

    Institute of Scientific and Technical Information of China (English)

    Li Shu; Zhuo Jiashou; Ren Qingwen

    2000-01-01

    In this paper, an optimal criterion is presented for an adaptive Kalman filter in a control system with unknown variances of stochastic vibration, by constructing a function of the noise variances and minimizing that function. We solve for the model and measurement variances by using the DFP optimization method to guarantee that the results of the Kalman filter are optimized. Finally, the control of vibration can be implemented by the LQG method.
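
    One concrete way to realize such a criterion is to run the Kalman filter as a function of the unknown noise variances and minimize a scalar fit measure with a quasi-Newton method. The sketch below does that on simulated scalar data, using the innovation log-likelihood as the criterion and SciPy's BFGS as a readily available quasi-Newton stand-in for DFP; the criterion, the scalar model and all numbers are illustrative assumptions rather than the paper's formulation.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(5)

# Simulated scalar vibration-like measurements; the filter does not know Q, R.
a, n = 0.95, 500
x, ys = 0.0, []
for _ in range(n):
    x = a * x + rng.normal(scale=np.sqrt(0.2))       # true process noise Q = 0.2
    ys.append(x + rng.normal(scale=np.sqrt(1.0)))    # true measurement noise R = 1.0
ys = np.array(ys)

def criterion(log_qr):
    """Innovation log-likelihood as a function of the unknown noise variances."""
    q, r = np.exp(log_qr)
    xhat, p, value = 0.0, 1.0, 0.0
    for y in ys:
        xhat, p = a * xhat, a * a * p + q            # predict
        s = p + r                                    # innovation variance
        value += np.log(s) + (y - xhat) ** 2 / s
        k = p / s
        xhat, p = xhat + k * (y - xhat), (1.0 - k) * p   # update
    return value

res = minimize(criterion, x0=np.log([0.5, 0.5]), method="BFGS")
q_hat, r_hat = np.exp(res.x)                          # estimated variances
```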

  9. Optimal harvesting of a stochastic delay tri-trophic food-chain model with Lévy jumps

    Science.gov (United States)

    Qiu, Hong; Deng, Wenmin

    2018-02-01

    In this paper, the optimal harvesting of a stochastic delay tri-trophic food-chain model with Lévy jumps is considered. We introduce two kinds of environmental perturbations in this model. One is called white noise, which is continuous and is described by a stochastic integral with respect to the standard Brownian motion. The other one is jumping noise, which is modeled by a Lévy process. Under some mild assumptions, the critical values between extinction and persistence in the mean of each species are established. The sufficient and necessary criteria for the existence of an optimal harvesting policy are established, and the optimal harvesting effort and the maximum of sustainable yield are also obtained. We utilize the ergodic method to discuss the optimal harvesting problem. The results show that white noises and Lévy noises significantly affect the optimal harvesting policy, while time delays are harmless for the optimal harvesting strategy in some cases. At last, some numerical examples are introduced to show the validity of our results.

  10. Discounted cost model for condition-based maintenance optimization

    International Nuclear Information System (INIS)

    Weide, J.A.M. van der; Pandey, M.D.; Noortwijk, J.M. van

    2010-01-01

    This paper presents methods to evaluate the reliability and optimize the maintenance of engineering systems that are damaged by shocks or transients arriving randomly in time and overall degradation is modeled as a cumulative stochastic point process. The paper presents a conceptually clear and comprehensive derivation of formulas for computing the discounted cost associated with a maintenance policy combining both condition-based and age-based criteria for preventive maintenance. The proposed discounted cost model provides a more realistic basis for optimizing the maintenance policies than those based on the asymptotic, non-discounted cost rate criterion.
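
    A plain Monte Carlo stand-in for the discounted cost of one renewal cycle under a combined condition-based/age-based policy is sketched below, with shocks arriving as a Poisson process and exponentially distributed damage increments. The closed-form expressions derived in the paper are not reproduced, and every parameter value is an illustrative assumption.

```python
import numpy as np

rng = np.random.default_rng(6)

def discounted_cycle_cost(pm_level, pm_age, runs=20_000, rate=1.0, mean_shock=0.5,
                          fail_level=10.0, c_pm=1.0, c_fail=5.0, discount=0.05):
    """Expected discounted cost of one renewal cycle: preventive replacement at
    degradation pm_level or age pm_age, corrective replacement at fail_level."""
    total = 0.0
    for _ in range(runs):
        t, deg = 0.0, 0.0
        while True:
            t += rng.exponential(1.0 / rate)          # next shock arrival
            if t >= pm_age:                           # age-based replacement
                total += c_pm * np.exp(-discount * pm_age)
                break
            deg += rng.exponential(mean_shock)        # cumulative damage
            if deg >= fail_level:                     # failure replacement
                total += c_fail * np.exp(-discount * t)
                break
            if deg >= pm_level:                       # condition-based replacement
                total += c_pm * np.exp(-discount * t)
                break
    return total / runs

# Compare two condition thresholds at the same age limit.
print(discounted_cycle_cost(6.0, 20.0), discounted_cycle_cost(8.0, 20.0))
```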

  11. Stochastic approximation Monte Carlo importance sampling for approximating exact conditional probabilities

    KAUST Repository

    Cheon, Sooyoung

    2013-02-16

    Importance sampling and Markov chain Monte Carlo methods have been used in exact inference for contingency tables for a long time, however, their performances are not always very satisfactory. In this paper, we propose a stochastic approximation Monte Carlo importance sampling (SAMCIS) method for tackling this problem. SAMCIS is a combination of adaptive Markov chain Monte Carlo and importance sampling, which employs the stochastic approximation Monte Carlo algorithm (Liang et al., J. Am. Stat. Assoc., 102(477):305-320, 2007) to draw samples from an enlarged reference set with a known Markov basis. Compared to the existing importance sampling and Markov chain Monte Carlo methods, SAMCIS has a few advantages, such as fast convergence, ergodicity, and the ability to achieve a desired proportion of valid tables. The numerical results indicate that SAMCIS can outperform the existing importance sampling and Markov chain Monte Carlo methods: It can produce much more accurate estimates in much shorter CPU time than the existing methods, especially for the tables with high degrees of freedom. © 2013 Springer Science+Business Media New York.

  12. Stochastic approximation Monte Carlo importance sampling for approximating exact conditional probabilities

    KAUST Repository

    Cheon, Sooyoung; Liang, Faming; Chen, Yuguo; Yu, Kai

    2013-01-01

    Importance sampling and Markov chain Monte Carlo methods have been used in exact inference for contingency tables for a long time, however, their performances are not always very satisfactory. In this paper, we propose a stochastic approximation Monte Carlo importance sampling (SAMCIS) method for tackling this problem. SAMCIS is a combination of adaptive Markov chain Monte Carlo and importance sampling, which employs the stochastic approximation Monte Carlo algorithm (Liang et al., J. Am. Stat. Assoc., 102(477):305-320, 2007) to draw samples from an enlarged reference set with a known Markov basis. Compared to the existing importance sampling and Markov chain Monte Carlo methods, SAMCIS has a few advantages, such as fast convergence, ergodicity, and the ability to achieve a desired proportion of valid tables. The numerical results indicate that SAMCIS can outperform the existing importance sampling and Markov chain Monte Carlo methods: It can produce much more accurate estimates in much shorter CPU time than the existing methods, especially for the tables with high degrees of freedom. © 2013 Springer Science+Business Media New York.

  13. Robust optimization-based DC optimal power flow for managing wind generation uncertainty

    Science.gov (United States)

    Boonchuay, Chanwit; Tomsovic, Kevin; Li, Fangxing; Ongsakul, Weerakorn

    2012-11-01

    Integrating wind generation into the wider grid causes a number of challenges to traditional power system operation. Given the relatively large wind forecast errors, congestion management tools based on optimal power flow (OPF) need to be improved. In this paper, a robust optimization (RO)-based DCOPF is proposed to determine the optimal generation dispatch and locational marginal prices (LMPs) for a day-ahead competitive electricity market considering the risk of dispatch cost variation. The basic concept is to use the dispatch to hedge against the possibility of reduced or increased wind generation. The proposed RO-based DCOPF is compared with a stochastic non-linear programming (SNP) approach on a modified PJM 5-bus system. Primary test results show that the proposed DCOPF model can provide lower dispatch cost than the SNP approach.

  14. Multistage Stochastic Programming and its Applications in Energy Systems Modeling and Optimization

    Science.gov (United States)

    Golari, Mehdi

    Electric energy constitutes one of the most crucial elements to almost every aspect of life of people. The modern electric power systems face several challenges such as efficiency, economics, sustainability, and reliability. Increase in electrical energy demand, distributed generations, integration of uncertain renewable energy resources, and demand side management are among the main underlying reasons of such growing complexity. Additionally, the elements of power systems are often vulnerable to failures because of many reasons, such as system limits, weak conditions, unexpected events, hidden failures, human errors, terrorist attacks, and natural disasters. One common factor complicating the operation of electrical power systems is the underlying uncertainties from the demands, supplies and failures of system components. Stochastic programming provides a mathematical framework for decision making under uncertainty. It enables a decision maker to incorporate some knowledge of the intrinsic uncertainty into the decision making process. In this dissertation, we focus on application of two-stage and multistage stochastic programming approaches to electric energy systems modeling and optimization. Particularly, we develop models and algorithms addressing the sustainability and reliability issues in power systems. First, we consider how to improve the reliability of power systems under severe failures or contingencies prone to cascading blackouts by so called islanding operations. We present a two-stage stochastic mixed-integer model to find optimal islanding operations as a powerful preventive action against cascading failures in case of extreme contingencies. Further, we study the properties of this problem and propose efficient solution methods to solve this problem for large-scale power systems. We present the numerical results showing the effectiveness of the model and investigate the performance of the solution methods. Next, we address the sustainability issue

  15. Tuning of an optimal fuzzy PID controller with stochastic algorithms for networked control systems with random time delay.

    Science.gov (United States)

    Pan, Indranil; Das, Saptarshi; Gupta, Amitava

    2011-01-01

    An optimal PID and an optimal fuzzy PID have been tuned by minimizing the Integral of Time multiplied Absolute Error (ITAE) and squared controller output for a networked control system (NCS). The tuning is attempted for a higher order and a time delay system using two stochastic algorithms viz. the Genetic Algorithm (GA) and two variants of Particle Swarm Optimization (PSO) and the closed loop performances are compared. The paper shows that random variation in network delay can be handled efficiently with fuzzy logic based PID controllers over conventional PID controllers. Copyright © 2010 ISA. Published by Elsevier Ltd. All rights reserved.
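
    The objective being minimized can be written in a few lines: simulate the delayed closed loop for a step reference, accumulate time-weighted absolute error (ITAE) plus squared controller output, and hand that scalar to a stochastic optimizer. The sketch below uses a first-order plant with dead time and SciPy's differential evolution as a readily available stochastic optimizer; the plant, the weights and the optimizer are illustrative stand-ins for the higher-order system and the GA/PSO variants compared in the paper.

```python
import numpy as np
from scipy.optimize import differential_evolution

def itae_cost(gains, dt=0.01, t_end=10.0, delay=0.5, tau=2.0, w_u=1e-4):
    """ITAE plus squared-control-effort cost of a PID loop on a first-order
    plant with dead time, for a unit step reference."""
    kp, ki, kd = gains
    n, d = int(t_end / dt), int(delay / dt)
    y, integ, e_prev, cost = 0.0, 0.0, 1.0, 0.0
    u_buffer = [0.0] * (d + 1)                       # dead-time / network delay buffer
    for k in range(n):
        t = k * dt
        e = 1.0 - y
        integ += e * dt
        u = kp * e + ki * integ + kd * (e - e_prev) / dt
        e_prev = e
        u_buffer.append(u)
        u_delayed = u_buffer.pop(0)
        y += dt * (u_delayed - y) / tau              # first-order plant update
        cost += t * abs(e) * dt + w_u * u ** 2 * dt
    return cost

result = differential_evolution(itae_cost, bounds=[(0.0, 5.0), (0.0, 2.0), (0.0, 1.0)],
                                maxiter=10, seed=0)
kp_opt, ki_opt, kd_opt = result.x
```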

  16. Static vs stochastic optimization: A case study of FTSE Bursa Malaysia sectorial indices

    Science.gov (United States)

    Mamat, Nur Jumaadzan Zaleha; Jaaman, Saiful Hafizah; Ahmad, Rokiah@Rozita

    2014-06-01

    Traditional portfolio optimization methods such as Markowitz's mean-variance model and the semi-variance model utilize static expected return and volatility risk from historical data to generate an optimal portfolio. The optimal portfolio may not truly be optimal in reality because maximum and minimum values in the data may largely influence the expected return and volatility risk values. This paper considers distributions of assets' return and volatility risk to determine a more realistic optimized portfolio. For illustration purposes, the sectorial indices data in FTSE Bursa Malaysia is employed. The results show that stochastic optimization provides a more stable information ratio.
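
    The contrast between a static and a distribution-aware (stochastic) optimization can be illustrated with a small resampling sketch: one minimum-variance portfolio is computed from a single historical covariance estimate, the other by averaging weights over bootstrap resamples of the return history. The synthetic data and the resampling scheme are illustrative assumptions, not the method or data of the paper.

    # Toy contrast between a "static" mean-variance portfolio built from point
    # estimates and a "stochastic" one averaged over resampled return scenarios.
    # The return data are synthetic; this is not the FTSE Bursa Malaysia study.
    import numpy as np

    rng = np.random.default_rng(0)
    returns = rng.normal([0.05, 0.08, 0.12], [0.10, 0.15, 0.25], size=(250, 3))

    def min_variance_weights(cov):
        """Unconstrained minimum-variance weights: w proportional to inv(Cov) @ 1."""
        ones = np.ones(cov.shape[0])
        w = np.linalg.solve(cov, ones)
        return w / w.sum()

    # Static: one covariance estimate from the full history.
    w_static = min_variance_weights(np.cov(returns, rowvar=False))

    # Stochastic: average the weights over bootstrap resamples of the history.
    samples = []
    for _ in range(500):
        idx = rng.integers(0, len(returns), len(returns))
        samples.append(min_variance_weights(np.cov(returns[idx], rowvar=False)))
    w_stochastic = np.mean(samples, axis=0)

    print("static weights:    ", np.round(w_static, 3))
    print("resampled weights: ", np.round(w_stochastic, 3))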

  17. Static vs stochastic optimization: A case study of FTSE Bursa Malaysia sectorial indices

    Energy Technology Data Exchange (ETDEWEB)

    Mamat, Nur Jumaadzan Zaleha; Jaaman, Saiful Hafizah; Ahmad, Rokiah Rozita [School of Mathematical Sciences, Faculty of Science and Technology, Universiti Kebangsaan Malaysia, 43600 Bangi, Selangor (Malaysia)

    2014-06-19

    Traditional portfolio optimization methods such as Markowitz's mean-variance model and the semi-variance model utilize static expected return and volatility risk from historical data to generate an optimal portfolio. The optimal portfolio may not truly be optimal in reality because maximum and minimum values in the data may largely influence the expected return and volatility risk values. This paper considers distributions of assets' return and volatility risk to determine a more realistic optimized portfolio. For illustration purposes, the sectorial indices data in FTSE Bursa Malaysia is employed. The results show that stochastic optimization provides a more stable information ratio.

  18. Static vs stochastic optimization: A case study of FTSE Bursa Malaysia sectorial indices

    International Nuclear Information System (INIS)

    Mamat, Nur Jumaadzan Zaleha; Jaaman, Saiful Hafizah; Ahmad, Rokiah Rozita

    2014-01-01

    Traditional portfolio optimization methods such as Markowitz's mean-variance model and the semi-variance model utilize static expected return and volatility risk from historical data to generate an optimal portfolio. The optimal portfolio may not truly be optimal in reality because maximum and minimum values in the data may largely influence the expected return and volatility risk values. This paper considers distributions of assets' return and volatility risk to determine a more realistic optimized portfolio. For illustration purposes, the sectorial indices data in FTSE Bursa Malaysia is employed. The results show that stochastic optimization provides a more stable information ratio.

  19. Evaluation of the need for stochastic optimization of out-of-core nuclear fuel management decisions

    International Nuclear Information System (INIS)

    Thomas, R.L. Jr.

    1989-01-01

    Work has been completed on utilizing mathematical optimization techniques to optimize out-of-core nuclear fuel management decisions. The objective of such optimization is to minimize the levelized fuel cycle cost over some planning horizon. Typical decision variables include feed enrichments and number of assemblies, burnable poison requirements, and burned fuel to reinsert for every cycle in the planning horizon. Engineering constraints imposed consist of such items as discharge burnup limits, maximum enrichment limit, and target cycle energy productions. Earlier the authors reported on the development of the OCEON code, which employs the integer Monte Carlo Programming method as the mathematical optimization method. The discharge burnups, and feed enrichment and burnable poison requirements, are evaluated initially employing a linear reactivity core physics model and refined using a coarse mesh nodal model. The economic evaluation is completed using a modification of the CINCAS methodology. The interest now is to assess the need for stochastic optimization, which would account for cost component and cycle energy production uncertainties. The implication of the present studies is that stochastic optimization with regard to cost component uncertainties need not be completed, since deterministic optimization will identify nearly the same family of near-optimum cycling schemes.

  20. A genetic-algorithm-aided stochastic optimization model for regional air quality management under uncertainty.

    Science.gov (United States)

    Qin, Xiaosheng; Huang, Guohe; Liu, Lei

    2010-01-01

    A genetic-algorithm-aided stochastic optimization (GASO) model was developed in this study for supporting regional air quality management under uncertainty. The model incorporated genetic algorithm (GA) and Monte Carlo simulation techniques into a general stochastic chance-constrained programming (CCP) framework and allowed uncertainties in simulation and optimization model parameters to be considered explicitly in the design of least-cost strategies. GA was used to seek the optimal solution of the management model by progressively evaluating the performances of individual solutions. Monte Carlo simulation was used to check the feasibility of each solution. A management problem in terms of regional air pollution control was studied to demonstrate the applicability of the proposed method. Results of the case study indicated the proposed model could effectively communicate uncertainties into the optimization process and generate solutions that contained a spectrum of potential air pollutant treatment options with risk and cost information. Decision alternatives could be obtained by analyzing tradeoffs between the overall pollutant treatment cost and the system-failure risk due to inherent uncertainties.
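
    The core GASO mechanism, checking each candidate strategy's chance constraint by Monte Carlo simulation inside an evolutionary loop, can be sketched as follows. The single-pollutant cost and emission model, the confidence level and the crude mutation scheme are all illustrative assumptions, not the study's management model.

    # Minimal sketch of the GASO idea: a population-based stochastic search in
    # which each candidate's chance constraint is checked by Monte Carlo
    # simulation.  All numbers and the one-pollutant model are illustrative.
    import numpy as np

    rng = np.random.default_rng(42)
    alpha = 0.05                     # allowed probability of violating the limit
    emission_limit = 100.0

    def cost(x):                     # treatment cost grows with removal effort x
        return 50.0 * x ** 1.3

    def feasible(x, n_mc=2000):
        """Monte Carlo check of P(residual emission <= limit) >= 1 - alpha."""
        raw = rng.normal(300.0, 50.0, n_mc)        # uncertain raw emission
        residual = raw * (1.0 - 0.9 * x)           # removal efficiency 0.9*x
        return np.mean(residual <= emission_limit) >= 1.0 - alpha

    # Crude evolutionary loop: sample candidates, keep the cheapest feasible one.
    best_x, best_cost = None, np.inf
    population = rng.uniform(0.0, 1.0, 60)
    for _ in range(30):                            # "generations"
        for x in population:
            if feasible(x) and cost(x) < best_cost:
                best_x, best_cost = x, cost(x)
        # mutate around the incumbent (random restart if nothing feasible yet)
        center = best_x if best_x is not None else 0.5
        population = np.clip(rng.normal(center, 0.1, 60), 0.0, 1.0)

    print(f"best removal effort: {best_x:.3f}, cost: {best_cost:.1f}")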

  1. A minimax stochastic optimal semi-active control strategy for uncertain quasi-integrable Hamiltonian systems using magneto-rheological dampers

    DEFF Research Database (Denmark)

    Feng, Ju; Ying, Zu-Guang; Zhu, Wei-Qiu

    2012-01-01

    A minimax stochastic optimal semi-active control strategy for stochastically excited quasi-integrable Hamiltonian systems with parametric uncertainty by using magneto-rheological (MR) dampers is proposed. Firstly, the control problem is formulated as an n-degree-of-freedom (DOF) controlled, uncer...

  2. Multi-objective optimal power flow for active distribution network considering the stochastic characteristic of photovoltaic

    Science.gov (United States)

    Zhou, Bao-Rong; Liu, Si-Liang; Zhang, Yong-Jun; Yi, Ying-Qi; Lin, Xiao-Ming

    2017-05-01

    To mitigate the impact on distribution networks caused by the stochastic characteristics and high penetration of photovoltaics, a multi-objective optimal power flow model is proposed in this paper. The regulation capability of capacitors, photovoltaic inverters and energy storage systems embedded in the active distribution network is considered in this model to minimize the expected value of active power loss and the probability of voltage violation. Firstly, a probabilistic power flow based on the cumulant method is introduced to calculate the values of the objectives. Secondly, the NSGA-II algorithm is adopted for optimization to obtain the Pareto optimal solutions. Finally, the best compromise solution can be achieved through the fuzzy membership degree method. The multi-objective optimization of the IEEE 34-node distribution network shows that the model can effectively improve the voltage security and economy of the distribution network at different levels of photovoltaic penetration.

  3. A Three-Stage Optimization Algorithm for the Stochastic Parallel Machine Scheduling Problem with Adjustable Production Rates

    Directory of Open Access Journals (Sweden)

    Rui Zhang

    2013-01-01

    Full Text Available We consider a parallel machine scheduling problem with random processing/setup times and adjustable production rates. The objective function to be minimized consists of two parts: the first part is related to the due date performance (i.e., the tardiness of the jobs), while the second part is related to the setting of machine speeds. Therefore, the decision variables include both the production schedule (sequences of jobs) and the production rate of each machine. The optimization process, however, is significantly complicated by the stochastic factors in the manufacturing system. To address the difficulty, a simulation-based three-stage optimization framework is presented in this paper for obtaining high-quality robust solutions to the integrated scheduling problem. The first stage (crude optimization) is based on ordinal optimization theory, the second stage (finer optimization) is implemented with a metaheuristic called differential evolution, and the third stage (fine-tuning) is characterized by a perturbation-based local search. Finally, computational experiments are conducted to verify the effectiveness of the proposed approach. Sensitivity analysis and practical implications are also discussed.
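
    As a rough illustration of the middle (finer optimization) stage only, the sketch below runs differential evolution over machine production rates, with each candidate evaluated by a small simulation of random processing times. The toy shop model, the fixed round-robin job sequence and the speed-cost term are assumptions, not the paper's formulation.

    # Hedged sketch of simulation-based differential evolution over machine
    # production rates.  The shop model and all data are invented.
    import numpy as np
    from scipy.optimize import differential_evolution

    rng = np.random.default_rng(7)
    n_jobs, n_machines = 20, 3
    base_work = rng.uniform(2.0, 6.0, n_jobs)      # nominal work content per job
    due_dates = rng.uniform(10.0, 40.0, n_jobs)

    def expected_cost(rates, n_rep=30):
        """Average (tardiness + speed) cost over simulated processing-time noise."""
        total = 0.0
        for _ in range(n_rep):
            work = base_work * rng.lognormal(0.0, 0.2, n_jobs)   # random durations
            finish = np.zeros(n_machines)
            tardiness = 0.0
            for j in range(n_jobs):                 # fixed round-robin sequence
                m = j % n_machines
                finish[m] += work[j] / rates[m]     # faster rate -> shorter job
                tardiness += max(0.0, finish[m] - due_dates[j])
            total += tardiness + 5.0 * np.sum(rates ** 2)   # energy-like speed cost
        return total / n_rep

    bounds = [(0.5, 3.0)] * n_machines
    res = differential_evolution(expected_cost, bounds, seed=7, maxiter=40, polish=False)
    print("optimized machine rates:", np.round(res.x, 2), " cost:", round(res.fun, 1))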

  4. Optimizing Green Computing Awareness for Environmental Sustainability and Economic Security as a Stochastic Optimization Problem

    Directory of Open Access Journals (Sweden)

    Emmanuel Okewu

    2017-10-01

    Full Text Available The role of automation in sustainable development is not in doubt. Computerization in particular has permeated every facet of human endeavour, enhancing the provision of information for decision-making that reduces cost of operation, promotes productivity and socioeconomic prosperity and cohesion. Hence, a new field called information and communication technology for development (ICT4D) has emerged. Nonetheless, the need to ensure environmentally friendly computing has led to this research study, with particular focus on green computing in Africa. This is against the backdrop that the continent is feared to suffer most from the vulnerability of climate change and the impact of environmental risk. Using Nigeria as a test case, this paper gauges the green computing awareness level of Africans via a sample survey. It also attempts to institutionalize a green computing maturity model with a view to optimizing the level of citizens' awareness amid inherent uncertainties like low bandwidth, poor networks and erratic power in an emerging African market. Consequently, we classified the problem as a stochastic optimization problem and applied a metaheuristic search algorithm to determine the best sensitization strategy. Although there are alternative ways of promoting green computing education, the metaheuristic search we conducted indicated that an online real-time solution that not only drives but preserves timely conversations on electronic waste (e-waste) management and energy saving techniques among the citizenry is cutting edge. The authors therefore reviewed literature, gathered requirements, modelled the proposed solution using the Unified Modelling Language (UML) and developed a prototype. The proposed solution is a web-based multi-tier e-Green computing system that educates computer users on innovative techniques of managing computers and accessories in an environmentally friendly way. We found out that such a real-time web-based interactive forum does not

  5. Optimization of environmental management strategies through a dynamic stochastic possibilistic multiobjective program

    Energy Technology Data Exchange (ETDEWEB)

    Zhang, Xiaodong, E-mail: xiaodong.zhang@beg.utexas.edu [Bureau of Economic Geology, Jackson School of Geosciences, The University of Texas at Austin, Austin, TX 78713 (United States); Huang, Gordon [Institute of Energy, Environment and Sustainable Communities, University of Regina, Regina, Saskatchewan S4S 0A2 (Canada)

    2013-02-15

    Highlights: ► A dynamic stochastic possibilistic multiobjective programming model is developed. ► Greenhouse gas emission control is considered. ► Three planning scenarios are analyzed and compared. ► Optimal decision schemes under three scenarios and different p_i levels are obtained. ► Tradeoffs between economics and environment are reflected. -- Abstract: Greenhouse gas (GHG) emissions from municipal solid waste (MSW) management facilities have become a serious environmental issue. In MSW management, not only economic objectives but also environmental objectives should be considered simultaneously. In this study, a dynamic stochastic possibilistic multiobjective programming (DSPMP) model is developed for supporting MSW management and associated GHG emission control. The DSPMP model improves upon the existing waste management optimization methods through incorporation of fuzzy possibilistic programming and chance-constrained programming into a general mixed-integer multiobjective linear programming (MOP) framework where various uncertainties expressed as fuzzy possibility distributions and probability distributions can be effectively reflected. Two conflicting objectives are integrally considered, including minimization of total system cost and minimization of total GHG emissions from waste management facilities. Three planning scenarios are analyzed and compared, representing different preferences of the decision makers for economic development and environmental-impact (i.e. GHG-emission) issues in integrated MSW management. Optimal decision schemes under three scenarios and different p_i levels (representing the probability that the constraints would be violated) are generated for planning waste flow allocation and facility capacity expansions as well as GHG emission control. The results indicate that economic and environmental tradeoffs can be effectively reflected through the proposed DSPMP model. The generated decision variables can help

  6. Understanding Innovation Engines: Automated Creativity and Improved Stochastic Optimization via Deep Learning.

    Science.gov (United States)

    Nguyen, A; Yosinski, J; Clune, J

    2016-01-01

    The Achilles Heel of stochastic optimization algorithms is getting trapped on local optima. Novelty Search mitigates this problem by encouraging exploration in all interesting directions by replacing the performance objective with a reward for novel behaviors. This reward for novel behaviors has traditionally required a human-crafted, behavioral distance function. While Novelty Search is a major conceptual breakthrough and outperforms traditional stochastic optimization on certain problems, it is not clear how to apply it to challenging, high-dimensional problems where specifying a useful behavioral distance function is difficult. For example, in the space of images, how do you encourage novelty to produce hawks and heroes instead of endless pixel static? Here we propose a new algorithm, the Innovation Engine, that builds on Novelty Search by replacing the human-crafted behavioral distance with a Deep Neural Network (DNN) that can recognize interesting differences between phenotypes. The key insight is that DNNs can recognize similarities and differences between phenotypes at an abstract level, wherein novelty means interesting novelty. For example, a DNN-based novelty search in the image space does not explore in the low-level pixel space, but instead creates a pressure to create new types of images (e.g., churches, mosques, obelisks, etc.). Here, we describe the long-term vision for the Innovation Engine algorithm, which involves many technical challenges that remain to be solved. We then implement a simplified version of the algorithm that enables us to explore some of the algorithm's key motivations. Our initial results, in the domain of images, suggest that Innovation Engines could ultimately automate the production of endless streams of interesting solutions in any domain: for example, producing intelligent software, robot controllers, optimized physical components, and art.

  7. A New Methodology for Open Pit Slope Design in Karst-Prone Ground Conditions Based on Integrated Stochastic-Limit Equilibrium Analysis

    Science.gov (United States)

    Zhang, Ke; Cao, Ping; Ma, Guowei; Fan, Wenchen; Meng, Jingjing; Li, Kaihui

    2016-07-01

    Using the Chengmenshan Copper Mine as a case study, a new methodology for open pit slope design in karst-prone ground conditions is presented based on integrated stochastic-limit equilibrium analysis. The numerical modeling and optimization design procedure contains a collection of drill core data, karst cave stochastic model generation, SLIDE simulation and bisection method optimization. Borehole investigations are performed, and the statistical result shows that the length of the karst cave fits a negative exponential distribution model, but the length of carbonatite does not exactly follow any standard distribution. The inverse transform method and acceptance-rejection method are used to reproduce the length of the karst cave and carbonatite, respectively. A code for karst cave stochastic model generation, named KCSMG, is developed. The stability of the rock slope with the karst cave stochastic model is analyzed by combining the KCSMG code and the SLIDE program. This approach is then applied to study the effect of the karst cave on the stability of the open pit slope, and a procedure to optimize the open pit slope angle is presented.
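
    The two sampling techniques mentioned above are generic and easy to illustrate: inverse-transform sampling reproduces a negative-exponential length model, while acceptance-rejection handles a density with no standard sampler. The parameters and the triangular stand-in density below are assumptions for illustration, not fitted to the mine data.

    # Generic illustration of inverse-transform and acceptance-rejection sampling.
    import numpy as np

    rng = np.random.default_rng(3)

    def sample_exponential(mean_length, n):
        """Inverse transform: if U ~ Uniform(0,1), then -mean*ln(1-U) ~ Exp(mean)."""
        u = rng.uniform(0.0, 1.0, n)
        return -mean_length * np.log(1.0 - u)

    def sample_rejection(pdf, x_max, pdf_max, n):
        """Acceptance-rejection against a uniform proposal on [0, x_max]."""
        out = []
        while len(out) < n:
            x = rng.uniform(0.0, x_max)
            if rng.uniform(0.0, pdf_max) <= pdf(x):
                out.append(x)
        return np.array(out)

    cave_lengths = sample_exponential(mean_length=2.5, n=1000)

    # Assumed (non-standard) triangular density on [0, 6] with peak at 3.
    tri_pdf = lambda x: x / 9.0 if x <= 3.0 else (6.0 - x) / 9.0
    carbonatite_lengths = sample_rejection(tri_pdf, x_max=6.0, pdf_max=1.0 / 3.0, n=1000)

    print("mean cave length:", cave_lengths.mean())
    print("mean carbonatite length:", carbonatite_lengths.mean())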

  8. Stochastic search in structural optimization - Genetic algorithms and simulated annealing

    Science.gov (United States)

    Hajela, Prabhat

    1993-01-01

    An account is given of illustrative applications of genetic algorithms and simulated annealing methods in structural optimization. The advantages of such stochastic search methods over traditional mathematical programming strategies are emphasized; it is noted that these methods offer a significantly higher probability of locating the global optimum in a multimodal design space. Both genetic-search and simulated annealing can be effectively used in problems with a mix of continuous, discrete, and integer design variables.
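
    A minimal sketch of the simulated-annealing search loop, on a one-dimensional multimodal test function rather than a structural design problem, shows how the Metropolis acceptance rule lets the search escape local optima:

    # Minimal simulated-annealing sketch on a multimodal test function.
    # Not a structural-design code; the cooling schedule is an assumption.
    import math
    import random

    random.seed(0)

    def objective(x):
        # Rastrigin-like multimodal function in one dimension
        return x * x - 10.0 * math.cos(2.0 * math.pi * x) + 10.0

    x = random.uniform(-5.0, 5.0)
    f = objective(x)
    best_x, best_f = x, f
    temperature = 5.0

    for step in range(5000):
        cand = x + random.gauss(0.0, 0.5)              # random neighbour
        fc = objective(cand)
        # Metropolis rule: always accept improvements, sometimes accept uphill moves
        if fc < f or random.random() < math.exp(-(fc - f) / temperature):
            x, f = cand, fc
            if f < best_f:
                best_x, best_f = x, f
        temperature *= 0.999                           # geometric cooling schedule

    print(f"best x = {best_x:.4f}, objective = {best_f:.4f}")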

  9. Stochastic algorithm for channel optimized vector quantization: application to robust narrow-band speech coding

    International Nuclear Information System (INIS)

    Bouzid, M.; Benkherouf, H.; Benzadi, K.

    2011-01-01

    In this paper, we propose a stochastic joint source-channel scheme developed for efficient and robust encoding of spectral speech LSF parameters. The encoding system, named LSF-SSCOVQ-RC, is an LSF encoding scheme based on a reduced-complexity stochastic split vector quantizer optimized for a noisy channel. For transmissions over a noisy channel, we will show first that our LSF-SSCOVQ-RC encoder outperforms the conventional LSF encoder designed with the split vector quantizer. After that, we applied the LSF-SSCOVQ-RC encoder (with weighted distance) for the robust encoding of the LSF parameters of the 2.4 Kbits/s MELP speech coder operating over a noisy/noiseless channel. The simulation results will show that the proposed LSF encoder, incorporated in the MELP, ensures better performance than the original MELP MSVQ of 25 bits/frame, especially when the transmission channel is highly disturbed. Indeed, we will show that the LSF-SSCOVQ-RC yields a significant improvement in LSF encoding performance by ensuring reliable transmissions over a noisy channel.

  10. Convergence of quasi-optimal Stochastic Galerkin methods for a class of PDES with random coefficients

    KAUST Repository

    Beck, Joakim; Nobile, Fabio; Tamellini, Lorenzo; Tempone, Raul

    2014-01-01

    In this work we consider quasi-optimal versions of the Stochastic Galerkin method for solving linear elliptic PDEs with stochastic coefficients. In particular, we consider the case of a finite number N of random inputs and an analytic dependence of the solution of the PDE with respect to the parameters in a polydisc of the complex plane CN. We show that a quasi-optimal approximation is given by a Galerkin projection on a weighted (anisotropic) total degree space and prove a (sub)exponential convergence rate. As a specific application we consider a thermal conduction problem with non-overlapping inclusions of random conductivity. Numerical results show the sharpness of our estimates. © 2013 Elsevier Ltd. All rights reserved.

  11. Convergence of quasi-optimal Stochastic Galerkin methods for a class of PDES with random coefficients

    KAUST Repository

    Beck, Joakim

    2014-03-01

    In this work we consider quasi-optimal versions of the Stochastic Galerkin method for solving linear elliptic PDEs with stochastic coefficients. In particular, we consider the case of a finite number N of random inputs and an analytic dependence of the solution of the PDE with respect to the parameters in a polydisc of the complex plane CN. We show that a quasi-optimal approximation is given by a Galerkin projection on a weighted (anisotropic) total degree space and prove a (sub)exponential convergence rate. As a specific application we consider a thermal conduction problem with non-overlapping inclusions of random conductivity. Numerical results show the sharpness of our estimates. © 2013 Elsevier Ltd. All rights reserved.

  12. Dealing with equality and benefit for water allocation in a lake watershed: A Gini-coefficient based stochastic optimization approach

    Science.gov (United States)

    Dai, C.; Qin, X. S.; Chen, Y.; Guo, H. C.

    2018-06-01

    A Gini-coefficient based stochastic optimization (GBSO) model was developed by integrating a hydrological model, a water balance model, the Gini coefficient and chance-constrained programming (CCP) into a general multi-objective optimization modeling framework for supporting water resources allocation at a watershed scale. The framework was advantageous in reflecting the conflicting equity and benefit objectives for water allocation, maintaining the water balance of the watershed, and dealing with system uncertainties. GBSO was solved by the non-dominated sorting Genetic Algorithm II (NSGA-II), after the parameter uncertainties of the hydrological model had been quantified into the probability distribution of runoff as the input of the CCP model and the chance constraints had been converted to their deterministic equivalents. The proposed model was applied to identify the Pareto optimal water allocation schemes in the Lake Dianchi watershed, China. The optimal Pareto-front results reflected the tradeoff between system benefit (αSB) and Gini coefficient (αG) under different significance levels (i.e. q) and different drought scenarios, which reveals the conflicting nature of equity and efficiency in water allocation problems. A lower q generally implies a lower risk of violating the system constraints, and a worse drought intensity scenario corresponds to less available water resources, both of which lead to a decreased system benefit and a less equitable water allocation scheme. Thus, the proposed modeling framework could help obtain the Pareto optimal schemes under complexity and ensure that the proposed water allocation solutions are effective for coping with drought conditions, with a proper tradeoff between system benefit and water allocation equity.
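
    The equity side of the tradeoff can be made concrete with the Gini coefficient alone. The sketch below computes it for two hypothetical allocation vectors via the mean-absolute-difference formula; the numbers are invented and do not come from the Lake Dianchi case study.

    # Hedged illustration of the equity objective only: the Gini coefficient of a
    # water allocation vector (0 = perfectly equal, 1 = maximally unequal).
    import numpy as np

    def gini(allocation):
        """Gini coefficient via the mean-absolute-difference formula."""
        a = np.sort(np.asarray(allocation, dtype=float))
        mean_abs_diff = np.abs(a[:, None] - a[None, :]).mean()
        return mean_abs_diff / (2.0 * a.mean())

    equal_scheme = np.array([25.0, 25.0, 25.0, 25.0])       # million m^3 per user
    benefit_scheme = np.array([5.0, 10.0, 25.0, 60.0])      # favours high-benefit users

    print("Gini of equal allocation:   ", round(gini(equal_scheme), 3))
    print("Gini of benefit-driven plan:", round(gini(benefit_scheme), 3))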

  13. A stochastic multi-agent optimization model for energy infrastructure planning under uncertainty and competition.

    Science.gov (United States)

    2017-07-04

    This paper presents a stochastic multi-agent optimization model that supports energy infrastruc- : ture planning under uncertainty. The interdependence between dierent decision entities in the : system is captured in an energy supply chain network, w...

  14. Stochastic simulation and robust design optimization of integrated photonic filters

    Directory of Open Access Journals (Sweden)

    Weng Tsui-Wei

    2016-07-01

    Full Text Available Manufacturing variations are becoming an unavoidable issue in modern fabrication processes; therefore, it is crucial to be able to include stochastic uncertainties in the design phase. In this paper, integrated photonic coupled ring resonator filters are considered as an example of significant interest. The sparsity structure in photonic circuits is exploited to construct a sparse combined generalized polynomial chaos model, which is then used to analyze related statistics and perform robust design optimization. Simulation results show that the optimized circuits are more robust to fabrication process variations and achieve a reduction of 11%–35% in the mean square errors of the 3 dB bandwidth compared to unoptimized nominal designs.

  15. An intelligent stochastic optimization routine for nuclear fuel cycle design

    International Nuclear Information System (INIS)

    Parks, G.T.

    1990-01-01

    A simulated annealing (Metropolis algorithm) optimization routine named AMETROP, which has been developed for use on realistic nuclear fuel cycle problems, is introduced. Each stage of the algorithm is described and the means by which it overcomes or avoids the difficulties posed to conventional optimization routines by such problems are explained. Special attention is given to innovations that enhance AMETROP's performance both through artificial intelligence features, in which the routine uses the accumulation of data to influence its future actions, and through a family of simple performance aids, which allow the designer to use his heuristic knowledge to guide the routine's essentially random search. Using examples from a typical fuel cycle optimization problem, the performance of the stochastic Metropolis algorithm is compared to that of the only suitable deterministic routine in a standard software library, showing AMETROP to have many advantages

  16. A Data-Driven Stochastic Reactive Power Optimization Considering Uncertainties in Active Distribution Networks and Decomposition Method

    DEFF Research Database (Denmark)

    Ding, Tao; Yang, Qingrun; Yang, Yongheng

    2018-01-01

    To address the uncertain output of distributed generators (DGs) for reactive power optimization in active distribution networks, the stochastic programming model is widely used. The model is employed to find an optimal control strategy with minimum expected network loss while satisfying all......, in this paper, a data-driven modeling approach is introduced to assume that the probability distribution from the historical data is uncertain within a confidence set. Furthermore, a data-driven stochastic programming model is formulated as a two-stage problem, where the first-stage variables find the optimal...... control for discrete reactive power compensation equipment under the worst probability distribution of the second stage recourse. The second-stage variables are adjusted to uncertain probability distribution. In particular, this two-stage problem has a special structure so that the second-stage problem...

  17. A stochastic discrete optimization model for designing container terminal facilities

    Science.gov (United States)

    Zukhruf, Febri; Frazila, Russ Bona; Burhani, Jzolanda Tsavalista

    2017-11-01

    As uncertainty essentially affects the total transportation cost, it remains an important consideration in container terminals, which incorporate several modes and transshipment processes. This paper presents a stochastic discrete optimization model for designing a container terminal, which involves decisions on facility improvement actions. The container terminal operation model is constructed by accounting for the variation of demand and facility performance. In addition, to illustrate the conflicting issue that practically arises in terminal operation, the model also takes into account the possible incremental delay of facilities due to the increasing amount of equipment, especially container trucks. These variations are expected to reflect the uncertainty in container terminal operation. A Monte Carlo simulation is invoked to propagate the variations by following the observed distributions. The problem is formulated as a combinatorial optimization problem for investigating the optimal decisions on facility improvement. A new variant of glow-worm swarm optimization (GSO), which is rarely explored in the transportation field, is thus proposed for solving the optimization. The model's applicability is tested by considering the actual characteristics of a container terminal.

  18. Stochastic Optimization Model to Study the Operational Impacts of High Wind Penetrations in Ireland

    DEFF Research Database (Denmark)

    Meibom, Peter; Barth, R.; Hasche, B.

    2011-01-01

    A stochastic mixed integer linear optimization scheduling model minimizing system operation costs and treating load and wind power production as stochastic inputs is presented. The schedules are updated in a rolling manner as more up-to-date information becomes available. This is a fundamental...... change relative to day-ahead unit commitment approaches. The need for reserves dependent on forecast horizon and share of wind power has been estimated with a statistical model combining load and wind power forecast errors with scenarios of forced outages. The model is used to study operational impacts...

  19. Simulation-based optimal Bayesian experimental design for nonlinear systems

    KAUST Repository

    Huan, Xun

    2013-01-01

    The optimal selection of experimental conditions is essential to maximizing the value of data for inference and prediction, particularly in situations where experiments are time-consuming and expensive to conduct. We propose a general mathematical framework and an algorithmic approach for optimal experimental design with nonlinear simulation-based models; in particular, we focus on finding sets of experiments that provide the most information about targeted sets of parameters. Our framework employs a Bayesian statistical setting, which provides a foundation for inference from noisy, indirect, and incomplete data, and a natural mechanism for incorporating heterogeneous sources of information. An objective function is constructed from information theoretic measures, reflecting expected information gain from proposed combinations of experiments. Polynomial chaos approximations and a two-stage Monte Carlo sampling method are used to evaluate the expected information gain. Stochastic approximation algorithms are then used to make optimization feasible in computationally intensive and high-dimensional settings. These algorithms are demonstrated on model problems and on nonlinear parameter inference problems arising in detailed combustion kinetics. © 2012 Elsevier Inc.
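
    The expected-information-gain objective can be illustrated with a plain nested (two-loop) Monte Carlo estimator on a toy one-parameter model; the polynomial chaos surrogate and the stochastic approximation optimizer used in the paper are omitted, and the forward model, prior and noise level below are assumptions.

    # Sketch of the nested (two-stage) Monte Carlo estimator of expected
    # information gain for a toy nonlinear model y = theta**2 * d + noise.
    import numpy as np

    rng = np.random.default_rng(1)
    sigma = 0.3                                   # observation noise std

    def forward(theta, d):
        return theta ** 2 * d                     # assumed nonlinear forward model

    def log_like(y, theta, d):
        r = y - forward(theta, d)
        return -0.5 * (r / sigma) ** 2 - np.log(sigma * np.sqrt(2.0 * np.pi))

    def expected_information_gain(d, n_outer=500, n_inner=500):
        theta_outer = rng.uniform(0.0, 1.0, n_outer)          # prior samples
        y = forward(theta_outer, d) + sigma * rng.normal(size=n_outer)
        theta_inner = rng.uniform(0.0, 1.0, n_inner)
        # log evidence for each simulated y, marginalising theta by inner MC
        log_ev = np.array([np.log(np.mean(np.exp(log_like(yi, theta_inner, d))))
                           for yi in y])
        return np.mean(log_like(y, theta_outer, d) - log_ev)

    for d in [0.5, 1.0, 2.0]:
        print(f"design d = {d}: estimated EIG = {expected_information_gain(d):.3f}")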

  20. Multi-Sensor Optimal Data Fusion Based on the Adaptive Fading Unscented Kalman Filter.

    Science.gov (United States)

    Gao, Bingbing; Hu, Gaoge; Gao, Shesheng; Zhong, Yongmin; Gu, Chengfan

    2018-02-06

    This paper presents a new optimal data fusion methodology based on the adaptive fading unscented Kalman filter for multi-sensor nonlinear stochastic systems. This methodology has a two-level fusion structure: at the bottom level, an adaptive fading unscented Kalman filter based on the Mahalanobis distance is developed and serves as local filters to improve the adaptability and robustness of local state estimations against process-modeling error; at the top level, an unscented transformation-based multi-sensor optimal data fusion for the case of N local filters is established according to the principle of linear minimum variance to calculate the globally optimal state estimation by fusion of local estimations. The proposed methodology effectively suppresses the influence of process-modeling error on the fusion solution, leading to improved adaptability and robustness of data fusion for multi-sensor nonlinear stochastic systems. It also achieves globally optimal fusion results based on the principle of linear minimum variance. Simulation and experimental results demonstrate the efficacy of the proposed methodology for INS/GNSS/CNS (inertial navigation system/global navigation satellite system/celestial navigation system) integrated navigation.
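
    The top-level fusion rule, linear-minimum-variance combination of independent local estimates, reduces to inverse-covariance weighting and can be sketched in a few lines. The three toy "sensor" estimates below stand in for the INS/GNSS/CNS local filters, which are not reproduced here.

    # Sketch of the top-level fusion principle only (linear minimum variance /
    # inverse-covariance weighting of independent local estimates).
    import numpy as np

    def fuse(estimates, covariances):
        """Fuse N independent unbiased estimates x_i with covariances P_i:
           P = (sum P_i^-1)^-1,  x = P * sum(P_i^-1 x_i)."""
        info = sum(np.linalg.inv(P) for P in covariances)
        P_fused = np.linalg.inv(info)
        x_fused = P_fused @ sum(np.linalg.inv(P) @ x
                                for x, P in zip(estimates, covariances))
        return x_fused, P_fused

    # Toy 2D position estimates from three "sensors" (e.g. INS, GNSS, CNS).
    x_ins = np.array([10.2, 4.9]);  P_ins = np.diag([0.50, 0.50])
    x_gnss = np.array([10.0, 5.1]); P_gnss = np.diag([0.10, 0.10])
    x_cns = np.array([9.8, 5.0]);   P_cns = np.diag([0.30, 0.30])

    x, P = fuse([x_ins, x_gnss, x_cns], [P_ins, P_gnss, P_cns])
    print("fused estimate:", np.round(x, 3))
    print("fused covariance diagonal:", np.round(np.diag(P), 3))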

  1. Comparison of Traditional Design Nonlinear Programming Optimization and Stochastic Methods for Structural Design

    Science.gov (United States)

    Patnaik, Surya N.; Pai, Shantaram S.; Coroneos, Rula M.

    2010-01-01

    Structural designs generated by the traditional method, the optimization method and the stochastic design concept are compared. In the traditional method, the constraints are manipulated to obtain the design and the weight is back-calculated. In design optimization, the weight of a structure becomes the merit function, with constraints imposed on failure modes, and an optimization algorithm is used to generate the solution. The stochastic design concept accounts for uncertainties in loads, material properties, and other parameters, and the solution is obtained by solving a design optimization problem for a specified reliability. Acceptable solutions were produced by all three methods. The variation in the weight calculated by the methods was modest. Some variation was noticed in the designs calculated by the methods. The variation may be attributed to structural indeterminacy. It is prudent to develop a design by all three methods prior to its fabrication. The traditional design method can be improved when simplified sensitivities of the behavior constraints are used. Such sensitivities can reduce design calculations and may have the potential to unify the traditional and optimization methods. Weight versus reliability traced out an inverted-S-shaped graph. The center of the graph corresponded to the mean-valued design. A heavy design with weight approaching infinity could be produced for a near-zero rate of failure. Weight can be reduced to a small value for the most failure-prone design. Probabilistic modeling of load and material properties remained a challenge.

  2. Gamut Volume Index: a color preference metric based on meta-analysis and optimized colour samples.

    Science.gov (United States)

    Liu, Qiang; Huang, Zheng; Xiao, Kaida; Pointer, Michael R; Westland, Stephen; Luo, M Ronnier

    2017-07-10

    A novel metric named Gamut Volume Index (GVI) is proposed for evaluating the colour preference of lighting. This metric is based on the absolute gamut volume of optimized colour samples. The optimal colour set of the proposed metric was obtained by optimizing the weighted average correlation between the metric predictions and the subjective ratings for 8 psychophysical studies. The performance of 20 typical colour metrics was also investigated, which included colour difference based metrics, gamut based metrics, memory based metrics as well as combined metrics. It was found that the proposed GVI outperformed the existing counterparts, especially for the conditions where correlated colour temperatures differed.

  3. Stochastic Optimal Control of a Heave Point Wave Energy Converter Based on a Modified LQG Approach

    DEFF Research Database (Denmark)

    Sun, Tao; Nielsen, Søren R. K.

    2018-01-01

    and actuator force are approximately considered by counteracting the absorbed power in the objective quadratic functional. Based on rational approximations to the radiation force and the wave load, the integrated dynamic system can be reformulated as a linear stochastic differential equation which is driven...

  4. A stochastic optimization model under modeling uncertainty and parameter certainty for groundwater remediation design-Part I. Model development

    Energy Technology Data Exchange (ETDEWEB)

    He, L., E-mail: li.he@ryerson.ca [Department of Civil Engineering, Faculty of Engineering, Architecture and Science, Ryerson University, 350 Victoria Street, Toronto, Ontario, M5B 2K3 (Canada); Huang, G.H. [Environmental Systems Engineering Program, Faculty of Engineering, University of Regina, Regina, Saskatchewan, S4S 0A2 (Canada); College of Urban Environmental Sciences, Peking University, Beijing 100871 (China); Lu, H.W. [Environmental Systems Engineering Program, Faculty of Engineering, University of Regina, Regina, Saskatchewan, S4S 0A2 (Canada)

    2010-04-15

    Solving groundwater remediation optimization problems based on proxy simulators can usually yield optimal solutions differing from the 'true' ones of the problem. This study presents a new stochastic optimization model under modeling uncertainty and parameter certainty (SOMUM) and the associated solution method for simultaneously addressing modeling uncertainty associated with simulator residuals and optimizing groundwater remediation processes. This is a new attempt different from the previous modeling efforts. The previous ones focused on addressing uncertainty in physical parameters (i.e. soil porosity) while this one aims to deal with uncertainty in mathematical simulator (arising from model residuals). Compared to the existing modeling approaches (i.e. only parameter uncertainty is considered), the model has the advantages of providing mean-variance analysis for contaminant concentrations, mitigating the effects of modeling uncertainties on optimal remediation strategies, offering confidence level of optimal remediation strategies to system designers, and reducing computational cost in optimization processes.

  5. Optimal control of switching time in switched stochastic systems with multi-switching times and different costs

    Science.gov (United States)

    Liu, Xiaomei; Li, Shengtao; Zhang, Kanjian

    2017-08-01

    In this paper, we solve an optimal control problem for a class of time-invariant switched stochastic systems with multi-switching times, where the objective is to minimise a cost functional with different costs defined on the states. In particular, we focus on problems in which a pre-specified sequence of active subsystems is given and the switching times are the only control variables. Based on the calculus of variations, we derive the gradient of the cost functional with respect to the switching times in an especially simple form, which can be directly used in gradient descent algorithms to locate the optimal switching instants. Finally, a numerical example is given, highlighting the validity of the proposed methodology.
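
    The idea of treating switching times as the only decision variables can be illustrated on a deterministic toy problem with a single switch, using a finite-difference gradient in place of the variational gradient derived in the paper; the two modes, horizon and cost below are assumptions for illustration.

    # Toy illustration of optimizing a single switching time by gradient descent.
    # Mode 1 fills the state at unit rate, mode 2 lets it decay; the cost
    # penalizes deviation from a reference level of 1.
    import numpy as np

    T, dt = 4.0, 1e-3

    def cost(ts):
        """J(ts) = integral of (x(t) - 1)^2 over [0, T] with a switch at ts."""
        x, J = 0.0, 0.0
        for k in range(int(T / dt)):
            t = k * dt
            dx = 1.0 if t < ts else -x          # mode 1: dx/dt = 1, mode 2: dx/dt = -x
            J += (x - 1.0) ** 2 * dt
            x += dx * dt                        # explicit Euler step
        return J

    ts, step, eps = 2.0, 0.1, 0.02              # eps must exceed dt so the switch moves
    for _ in range(150):
        grad = (cost(ts + eps) - cost(ts - eps)) / (2.0 * eps)
        ts = float(np.clip(ts - step * grad, 0.0, T))

    print(f"optimal switching time = {ts:.3f}, cost = {cost(ts):.4f}")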

  6. Semi-blind identification of wideband MIMO channels via stochastic sampling

    OpenAIRE

    Andrieu, Christophe; Piechocki, Robert J.; McGeehan, Joe P.; Armour, Simon M.

    2003-01-01

    In this paper we address the problem of wide-band multiple-input multiple-output (MIMO) channel (multidimensional time-invariant FIR filter) identification using Markov chain Monte Carlo methods. Towards this end we develop a novel stochastic sampling technique that produces a sequence of multidimensional channel samples. The method is semi-blind in the sense that it uses a very short training sequence. In such a framework the problem is no longer analytically tractable; hence we resort to s...

  7. Multi-Index Stochastic Collocation for random PDEs

    KAUST Repository

    Haji Ali, Abdul Lateef

    2016-03-28

    In this work we introduce the Multi-Index Stochastic Collocation method (MISC) for computing statistics of the solution of a PDE with random data. MISC is a combination technique based on mixed differences of spatial approximations and quadratures over the space of random data. We propose an optimization procedure to select the most effective mixed differences to include in the MISC estimator: such optimization is a crucial step and allows us to build a method that, provided with sufficient solution regularity, is potentially more effective than other multi-level collocation methods already available in literature. We then provide a complexity analysis that assumes decay rates of product type for such mixed differences, showing that in the optimal case the convergence rate of MISC is only dictated by the convergence of the deterministic solver applied to a one dimensional problem. We show the effectiveness of MISC with some computational tests, comparing it with other related methods available in the literature, such as the Multi-Index and Multilevel Monte Carlo, Multilevel Stochastic Collocation, Quasi Optimal Stochastic Collocation and Sparse Composite Collocation methods.

  8. Multi-Index Stochastic Collocation for random PDEs

    KAUST Repository

    Haji Ali, Abdul Lateef; Nobile, Fabio; Tamellini, Lorenzo; Tempone, Raul

    2016-01-01

    In this work we introduce the Multi-Index Stochastic Collocation method (MISC) for computing statistics of the solution of a PDE with random data. MISC is a combination technique based on mixed differences of spatial approximations and quadratures over the space of random data. We propose an optimization procedure to select the most effective mixed differences to include in the MISC estimator: such optimization is a crucial step and allows us to build a method that, provided with sufficient solution regularity, is potentially more effective than other multi-level collocation methods already available in literature. We then provide a complexity analysis that assumes decay rates of product type for such mixed differences, showing that in the optimal case the convergence rate of MISC is only dictated by the convergence of the deterministic solver applied to a one dimensional problem. We show the effectiveness of MISC with some computational tests, comparing it with other related methods available in the literature, such as the Multi-Index and Multilevel Monte Carlo, Multilevel Stochastic Collocation, Quasi Optimal Stochastic Collocation and Sparse Composite Collocation methods.

  9. Optimization algorithm based on densification and dynamic canonical descent

    Science.gov (United States)

    Bousson, K.; Correia, S. D.

    2006-07-01

    Stochastic methods have gained some popularity in global optimization in that most of them do not assume the cost functions to be differentiable. They have the capability to avoid being trapped by local optima, and may converge even faster than gradient-based optimization methods on some problems. The present paper proposes an optimization method which reduces the search space by means of densification curves, coupled with the dynamic canonical descent algorithm. The performance of the new method is shown on several problems classically used for testing optimization algorithms, and it proves to outperform competitive algorithms such as simulated annealing and genetic algorithms.

  10. Learning and anticipation in online dynamic optimization with evolutionary algorithms: The stochastic case

    NARCIS (Netherlands)

    P.A.N. Bosman (Peter); J.A. La Poutré (Han); D. Thierens (Dirk)

    2007-01-01

    The focus of this paper is on how to design evolutionary algorithms (EAs) for solving stochastic dynamic optimization problems online, i.e. as time goes by. For a proper design, the EA must not only be capable of tracking shifting optima, it must also take into account the future

  11. Estimation of local concentration from measurements of stochastic adsorption dynamics using carbon nanotube-based sensors

    International Nuclear Information System (INIS)

    Jang, Hong; Lee, Jay H.; Braatz, Richard D.

    2016-01-01

    This paper proposes a maximum likelihood estimation (MLE) method for estimating the time-varying local concentration of the target molecule proximate to the sensor from the time profile of monomolecular adsorption and desorption on the surface of the sensor at the nanoscale. Recently, several carbon nanotube sensors have been developed that can selectively detect target molecules at a trace concentration level. These sensors use light intensity changes mediated by adsorption or desorption phenomena on their surfaces. The molecular events occurring at trace concentration levels are inherently stochastic, posing a challenge for optimal estimation. The stochastic behavior is modeled by the chemical master equation (CME), composed of a set of ordinary differential equations describing the time evolution of probabilities for the possible adsorption states. Given the significant stochastic nature of the underlying phenomena, rigorous stochastic estimation based on the CME should lead to improved accuracy over deterministic estimation formulated based on the continuum model. Motivated by this expectation, we formulate the MLE based on an analytical solution of the relevant CME, both for constant and time-varying local concentrations, with the objective of estimating the analyte concentration field in real time from the adsorption readings of the sensor array. The performances of the MLE and the deterministic least squares are compared using data generated by kinetic Monte Carlo (KMC) simulations of the stochastic process. Some future challenges are described for estimating and controlling the concentration field in a distributed domain using the sensor technology.

  12. K-Minimax Stochastic Programming Problems

    Science.gov (United States)

    Nedeva, C.

    2007-10-01

    The purpose of this paper is a discussion of a numerical procedure based on the simplex method for stochastic optimization problems with partially known distribution functions. The convergence of this procedure is proved under a condition on the dual problems.

  13. Using linear programming to analyze and optimize stochastic flow lines

    DEFF Research Database (Denmark)

    Helber, Stefan; Schimmelpfeng, Katja; Stolletz, Raik

    2011-01-01

    This paper presents a linear programming approach to analyze and optimize flow lines with limited buffer capacities and stochastic processing times. The basic idea is to solve a huge but simple linear program that models an entire simulation run of a multi-stage production process in discrete time...... programming and hence allows us to solve buffer allocation problems. We show under which conditions our method works well by comparing its results to exact values for two-machine models and approximate simulation results for longer lines....

  14. Stochastic control and the second law of thermodynamics

    Science.gov (United States)

    Brockett, R. W.; Willems, J. C.

    1979-01-01

    The second law of thermodynamics is studied from the point of view of stochastic control theory. We find that the feedback control laws which are of interest are those which depend only on average values, and not on sample path behavior. We are led to a criterion which, when satisfied, permits one to assign a temperature to a stochastic system in such a way as to have Carnot cycles be the optimal trajectories of optimal control problems. Entropy is also defined and we are able to prove an equipartition of energy theorem using this definition of temperature. Our formulation allows one to treat irreversibility in a quite natural and completely precise way.

  15. Efficient Estimating Functions for Stochastic Differential Equations

    DEFF Research Database (Denmark)

    Jakobsen, Nina Munkholt

    The overall topic of this thesis is approximate martingale estimating function-based estimation for solutions of stochastic differential equations, sampled at high frequency. Focus lies on the asymptotic properties of the estimators. The first part of the thesis deals with diffusions observed over... a fixed time interval. Rate optimal and efficient estimators are obtained for a one-dimensional diffusion parameter. Stable convergence in distribution is used to achieve a practically applicable Gaussian limit distribution for suitably normalised estimators. In a simulation example, the limit distributions... multidimensional parameter. Conditions for rate optimality and efficiency of estimators of drift-jump and diffusion parameters are given in some special cases. These conditions are found to extend the pre-existing conditions applicable to continuous diffusions, and impose much stronger requirements on the estimating...

  16. Stochastic Primal-Dual Hybrid Gradient Algorithm with Arbitrary Sampling and Imaging Application

    KAUST Repository

    Chambolle, Antonin; Ehrhardt, Matthias J.; Richtarik, Peter; Schönlieb, Carola-Bibiane

    2017-01-01

    We propose a stochastic extension of the primal-dual hybrid gradient algorithm studied by Chambolle and Pock in 2011 to solve saddle point problems that are separable in the dual variable. The analysis is carried out for general convex-concave saddle point problems and problems that are either partially smooth / strongly convex or fully smooth / strongly convex. We perform the analysis for arbitrary samplings of dual variables, and obtain known deterministic results as a special case. Several variants of our stochastic method significantly outperform the deterministic variant on a variety of imaging tasks.

  17. Stochastic Primal-Dual Hybrid Gradient Algorithm with Arbitrary Sampling and Imaging Application

    KAUST Repository

    Chambolle, Antonin

    2017-06-15

    We propose a stochastic extension of the primal-dual hybrid gradient algorithm studied by Chambolle and Pock in 2011 to solve saddle point problems that are separable in the dual variable. The analysis is carried out for general convex-concave saddle point problems and problems that are either partially smooth / strongly convex or fully smooth / strongly convex. We perform the analysis for arbitrary samplings of dual variables, and obtain known deterministic results as a special case. Several variants of our stochastic method significantly outperform the deterministic variant on a variety of imaging tasks.

  18. A review of simheuristics: Extending metaheuristics to deal with stochastic combinatorial optimization problems

    Directory of Open Access Journals (Sweden)

    Angel A. Juan

    2015-12-01

    Full Text Available Many combinatorial optimization problems (COPs) encountered in real-world logistics, transportation, production, healthcare, financial, telecommunication, and computing applications are NP-hard in nature. These real-life COPs are frequently characterized by their large-scale sizes and the need for obtaining high-quality solutions in short computing times, thus requiring the use of metaheuristic algorithms. Metaheuristics benefit from different random-search and parallelization paradigms, but they frequently assume that the problem inputs, the underlying objective function, and the set of optimization constraints are deterministic. However, uncertainty is all around us, which often makes deterministic models oversimplified versions of real-life systems. After completing an extensive review of related work, this paper describes a general methodology that allows for extending metaheuristics through simulation to solve stochastic COPs. ‘Simheuristics’ allow modelers to deal with real-life uncertainty in a natural way by integrating simulation (in any of its variants) into a metaheuristic-driven framework. These optimization-driven algorithms rely on the fact that efficient metaheuristics already exist for the deterministic version of the corresponding COP. Simheuristics also facilitate the introduction of risk and/or reliability analysis criteria during the assessment of alternative high-quality solutions to stochastic COPs. Several examples of applications in different fields illustrate the potential of the proposed methodology.
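
    A minimal simheuristic can be sketched in a few lines: a metaheuristic (here a simple swap-based local search) proposes candidate solutions, and every candidate is scored by Monte Carlo simulation of the stochastic inputs. The single-machine tardiness model and all data below are invented for illustration.

    # Tiny simheuristic sketch: swap-based local search over job sequences, with
    # each candidate evaluated by Monte Carlo simulation of stochastic durations.
    import numpy as np

    rng = np.random.default_rng(11)
    n_jobs = 8
    mean_dur = rng.uniform(2.0, 8.0, n_jobs)
    due = np.sort(rng.uniform(10.0, 40.0, n_jobs))

    def simulate_tardiness(seq, n_rep=200):
        """Expected total tardiness of a sequence under lognormal duration noise."""
        total = 0.0
        for _ in range(n_rep):
            t, tard = 0.0, 0.0
            for j in seq:
                t += mean_dur[j] * rng.lognormal(0.0, 0.25)
                tard += max(0.0, t - due[j])
            total += tard
        return total / n_rep

    # Metaheuristic part: first-improvement local search with random swap moves.
    seq = list(rng.permutation(n_jobs))
    best_val = simulate_tardiness(seq)
    for _ in range(300):
        i, j = rng.integers(0, n_jobs, 2)
        cand = seq.copy()
        cand[i], cand[j] = cand[j], cand[i]
        val = simulate_tardiness(cand)
        if val < best_val:
            seq, best_val = cand, val

    print("best sequence:", seq, " expected tardiness:", round(best_val, 2))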

  19. A stochastic optimization model under modeling uncertainty and parameter certainty for groundwater remediation design--part I. Model development.

    Science.gov (United States)

    He, L; Huang, G H; Lu, H W

    2010-04-15

    Solving groundwater remediation optimization problems based on proxy simulators can usually yield optimal solutions differing from the "true" ones of the problem. This study presents a new stochastic optimization model under modeling uncertainty and parameter certainty (SOMUM) and the associated solution method for simultaneously addressing modeling uncertainty associated with simulator residuals and optimizing groundwater remediation processes. This is a new attempt different from the previous modeling efforts. The previous ones focused on addressing uncertainty in physical parameters (i.e. soil porosity) while this one aims to deal with uncertainty in mathematical simulator (arising from model residuals). Compared to the existing modeling approaches (i.e. only parameter uncertainty is considered), the model has the advantages of providing mean-variance analysis for contaminant concentrations, mitigating the effects of modeling uncertainties on optimal remediation strategies, offering confidence level of optimal remediation strategies to system designers, and reducing computational cost in optimization processes. 2009 Elsevier B.V. All rights reserved.

  20. Relationship between Maximum Principle and Dynamic Programming for Stochastic Recursive Optimal Control Problems and Applications

    Directory of Open Access Journals (Sweden)

    Jingtao Shi

    2013-01-01

    Full Text Available This paper is concerned with the relationship between maximum principle and dynamic programming for stochastic recursive optimal control problems. Under certain differentiability conditions, relations among the adjoint processes, the generalized Hamiltonian function, and the value function are given. A linear quadratic recursive utility portfolio optimization problem in the financial engineering is discussed as an explicitly illustrated example of the main result.

  1. Stochastic samples versus vacuum expectation values in cosmology

    International Nuclear Information System (INIS)

    Tsamis, N.C.; Tzetzias, Aggelos; Woodard, R.P.

    2010-01-01

    Particle theorists typically use expectation values to study the quantum back-reaction on inflation, whereas many cosmologists stress the stochastic nature of the process. While expectation values certainly give misleading results for some things, such as the stress tensor, we argue that operators exist for which there is no essential problem. We quantify this by examining the stochastic properties of a noninteracting, massless, minimally coupled scalar on a locally de Sitter background. The square of the stochastic realization of this field seems to provide an example of great relevance for which expectation values are not misleading. We also examine the frequently expressed concern that significant back-reaction from expectation values necessarily implies large stochastic fluctuations between nearby spatial points. Rather than viewing the stochastic formalism in opposition to expectation values, we argue that it provides a marvelously simple way of capturing the leading infrared logarithm corrections to the latter, as advocated by Starobinsky

  2. Backward Stochastic Riccati Equations and Infinite Horizon L-Q Optimal Control with Infinite Dimensional State Space and Random Coefficients

    International Nuclear Information System (INIS)

    Guatteri, Giuseppina; Tessitore, Gianmario

    2008-01-01

    We study the Riccati equation arising in a class of quadratic optimal control problems with infinite dimensional stochastic differential state equation and infinite horizon cost functional. We allow the coefficients, both in the state equation and in the cost, to be random. In such a context backward stochastic Riccati equations are backward stochastic differential equations on the whole positive real axis that involve quadratic non-linearities and take values in a non-Hilbertian space. We prove existence of a minimal non-negative solution and, under additional assumptions, its uniqueness. We show that such a solution allows one to perform the synthesis of the optimal control and investigate its attractivity properties. Finally the case where the coefficients are stationary is addressed and an example concerning a controlled wave equation in random media is proposed

  3. A stochastic logical system approach to model and optimal control of cyclic variation of residual gas fraction in combustion engines

    International Nuclear Information System (INIS)

    Wu, Yuhu; Kumar, Madan; Shen, Tielong

    2016-01-01

    Highlights: • An in-cylinder pressure based measuring method for the RGF is derived. • A stochastic logical dynamical model is proposed to represent the transient behavior of the RGF. • A receding horizon controller is designed to reduce the variance of the RGF. • The effectiveness of the proposed model and control approach is validated by experimental evidence. - Abstract: In four-stroke internal combustion engines, residual gas from the previous cycle is an important factor influencing the combustion quality of the current cycle, and the residual gas fraction (RGF) is a popular index for monitoring the influence of residual gas. This paper investigates the cycle-to-cycle transient behavior of the RGF from the viewpoint of systems theory and proposes a multi-valued logic-based control strategy for attenuation of RGF fluctuation. First, an in-cylinder pressure sensor-based method for measuring the RGF is provided by following the physics of the in-cylinder transient state of four-stroke internal combustion engines. Then, the stochastic properties of the RGF are examined based on statistical data obtained by conducting experiments on a full-scale gasoline engine test bench. Based on the observations of this examination, a stochastic logical transient model is proposed to represent the cycle-to-cycle transient behavior of the RGF, and with this model an optimal feedback control law, which targets rejection of the RGF fluctuation, is derived in the framework of stochastic logical system theory. Finally, experimental results are presented to show the effectiveness of the proposed model and the control strategy.

  4. A condition-based maintenance policy for stochastically deteriorating systems

    International Nuclear Information System (INIS)

    Grall, A.; Berenguer, C.; Dieulle, L.

    2002-01-01

    We focus on the analytical modeling of a condition-based inspection/replacement policy for a stochastically and continuously deteriorating single-unit system. We consider both the replacement threshold and the inspection schedule as decision variables for this maintenance problem and we propose to implement the maintenance policy using a multi-level control-limit rule. In order to assess the performance of the proposed maintenance policy and to minimize the long run expected maintenance cost per unit time, a mathematical model for the maintained system cost is derived, supported by the existence of a stationary law for the maintained system state. Numerical experiments illustrate the performance of the proposed policy and confirm that the maintenance cost rate on an infinite horizon can be minimized by a joint optimization of the maintenance structure thresholds, or equivalently by a joint optimization of a system replacement threshold and the aperiodic inspection schedule
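
    The cost-rate trade-off described above can also be explored by simulation. The sketch below is a minimal Monte Carlo estimate of the long-run maintenance cost per unit time for a control-limit policy, assuming a gamma-process deterioration model, a periodic inspection interval (the paper optimizes an aperiodic schedule) and invented cost figures; none of the numbers come from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical parameters (not from the paper): gamma-process deterioration,
# inspection interval tau, preventive threshold M, failure level L.
SHAPE_PER_TIME, SCALE = 0.5, 1.0   # deterioration increment ~ Gamma(SHAPE_PER_TIME*tau, SCALE)
C_INSPECT, C_PREV, C_CORR, C_DOWN = 5.0, 50.0, 100.0, 25.0   # event costs / downtime cost rate

def cost_rate(tau, M, L=10.0, horizon=50_000):
    """Estimate the long-run expected maintenance cost per unit time."""
    t, x, cost = 0.0, 0.0, 0.0
    while t < horizon:
        x += rng.gamma(SHAPE_PER_TIME * tau, SCALE)   # deterioration until next inspection
        t += tau
        cost += C_INSPECT
        if x >= L:                          # failure found at inspection: corrective replacement
            cost += C_CORR + C_DOWN * tau / 2   # crude penalty for undetected downtime
            x = 0.0
        elif x >= M:                        # degradation above control limit: preventive replacement
            cost += C_PREV
            x = 0.0
    return cost / t

# Coarse grid search over the two decision variables of the control-limit rule.
best = min(((cost_rate(tau, M), tau, M)
            for tau in (1, 2, 4, 8) for M in (4, 6, 8)), key=lambda r: r[0])
print("best (cost rate, tau, M):", best)
```

    The grid search stands in for the joint optimization of the replacement threshold and the inspection schedule discussed in the abstract.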

  5. A generic methodology for the optimisation of sewer systems using stochastic programming and self-optimizing control.

    Science.gov (United States)

    Mauricio-Iglesias, Miguel; Montero-Castro, Ignacio; Mollerup, Ane L; Sin, Gürkan

    2015-05-15

    The design of sewer system control is a complex task given the large size of the sewer networks, the transient dynamics of the water flow and the stochastic nature of rainfall. This contribution presents a generic methodology for the design of a self-optimising controller in sewer systems. Such a controller is aimed at keeping the system close to optimal performance, thanks to an optimal selection of controlled variables. The definition of optimal performance was carried out by a two-stage optimisation (stochastic and deterministic) to take into account both the overflow during the current rain event and the expected overflow given the probability of a future rain event. The methodology is successfully applied to design an optimising control strategy for a subcatchment area in Copenhagen. The results are promising and expected to contribute to the advance of the operation and control problem of sewer systems. Copyright © 2015 Elsevier Ltd. All rights reserved.

  6. Stochastic learning of multi-instance dictionary for earth mover’s distance-based histogram comparison

    KAUST Repository

    Fan, Jihong

    2016-09-17

    Dictionary plays an important role in multi-instance data representation. It maps bags of instances to histograms. Earth mover’s distance (EMD) is the most effective histogram distance metric for the application of multi-instance retrieval. However, up to now, there have been no multi-instance dictionary learning methods designed for EMD-based histogram comparison. To fill this gap, we develop the first EMD-optimal dictionary learning method using a stochastic optimization method. In the stochastic learning framework, we have one triplet of bags, including one basic bag, one positive bag, and one negative bag. These bags are mapped to histograms using a multi-instance dictionary. We argue that the EMD between the basic histogram and the positive histogram should be smaller than that between the basic histogram and the negative histogram. Based on this condition, we design a hinge loss. By minimizing this hinge loss and some regularization terms of the dictionary, we update the dictionary instances. Experiments on multi-instance retrieval applications show its effectiveness when compared to other dictionary learning methods on the problems of medical image retrieval and natural language relation classification. © 2016 The Natural Computing Applications Forum

  7. Stochastic Power Control for Time-Varying Long-Term Fading Wireless Networks

    Directory of Open Access Journals (Sweden)

    Charalambous Charalambos D

    2006-01-01

    Full Text Available A new time-varying (TV long-term fading (LTF channel model which captures both the space and time variations of wireless systems is developed. The proposed TV LTF model is based on a stochastic differential equation driven by Brownian motion. This model is more realistic than the static models usually encountered in the literature. It allows viewing the wireless channel as a dynamical system, thus enabling well-developed tools of adaptive and nonadaptive estimation and identification techniques to be applied to this class of problems. In contrast with the traditional models, the statistics of the proposed model are shown to be TV, but converge in steady state to their static counterparts. Moreover, optimal power control algorithms (PCAs based on the new model are proposed. A centralized PCA is shown to reduce to a simple linear programming problem if predictable power control strategies (PPCS are used. In addition, an iterative distributed stochastic PCA is used to solve for the optimization problem using stochastic approximations. The latter solely requires each mobile to know its received signal-to-interference ratio. Generalizations of the power control problem based on convex optimization techniques are provided if PPCS are not assumed. Numerical results show that there are potentially large gains to be achieved by using TV stochastic models, and the distributed stochastic PCA provides better power stability and consumption than the distributed deterministic PCA.
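
    To make the "channel as a dynamical system" idea concrete, the sketch below simulates a mean-reverting stochastic differential equation driven by Brownian motion for the shadowing power in dB, in the spirit of the time-varying long-term fading model; the Ornstein-Uhlenbeck form and all parameter values are illustrative assumptions, not the paper's model.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative Ornstein-Uhlenbeck-type SDE for the shadowing process X_t (dB):
#   dX_t = beta * (mu - X_t) dt + sigma dW_t
beta, mu, sigma = 0.5, -80.0, 4.0      # reversion rate, mean path loss (dB), volatility
dt, n_steps = 0.01, 10_000

x = np.empty(n_steps)
x[0] = mu
for k in range(n_steps - 1):
    dW = rng.normal(0.0, np.sqrt(dt))                        # Brownian increment
    x[k + 1] = x[k] + beta * (mu - x[k]) * dt + sigma * dW   # Euler-Maruyama step

# In steady state the process should approach N(mu, sigma^2 / (2*beta)),
# mirroring the convergence to static statistics mentioned in the abstract.
print("sample mean/std:", x[n_steps // 2:].mean(), x[n_steps // 2:].std())
print("theoretical std:", sigma / np.sqrt(2 * beta))
```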

  8. Stochastic resonance a mathematical approach in the small noise limit

    CERN Document Server

    Herrmann, Samuel; Pavlyukevich, Ilya; Peithmann, Dierk

    2013-01-01

    Stochastic resonance is a phenomenon arising in a wide spectrum of areas in the sciences ranging from physics through neuroscience to chemistry and biology. This book presents a mathematical approach to stochastic resonance which is based on a large deviations principle (LDP) for randomly perturbed dynamical systems with a weak inhomogeneity given by an exogenous periodicity of small frequency. Resonance, the optimal tuning between period length and noise amplitude, is explained by optimizing the LDP's rate function. The authors show that not all physical measures of tuning quality are robust with respect to dimension reduction. They propose measures of tuning quality based on exponential transition rates explained by large deviations techniques and show that these measures are robust. The book sheds some light on the shortcomings and strengths of different concepts used in the theory and applications of stochastic resonance without attempting to give a comprehensive overview of the many facets of stochastic ...

  9. Optimal time points sampling in pathway modelling.

    Science.gov (United States)

    Hu, Shiyan

    2004-01-01

    Modelling cellular dynamics based on experimental data is at the heart of systems biology. Considerable progress has been made in dynamic pathway modelling as well as in the related parameter estimation. However, few studies give consideration to the issue of optimal sampling time selection for parameter estimation. Time course experiments in molecular biology rarely produce large and accurate data sets and the experiments involved are usually time consuming and expensive. Therefore, approximating parameters for models with only a few available sampling points is of significant practical value. For signal transduction, the sampling intervals are usually not evenly distributed and are based on heuristics. In this paper, we investigate an approach to guide the process of selecting time points in an optimal way to minimize the variance of parameter estimates. In the method, we first formulate the problem as a nonlinear constrained optimization problem via maximum likelihood estimation. We then modify and apply a quantum-inspired evolutionary algorithm, which combines the advantages of both quantum computing and evolutionary computing, to solve the optimization problem. The new algorithm does not suffer from the difficulties of selecting good initial values and becoming stuck in local optima that usually accompany conventional numerical optimization techniques. The simulation results indicate the soundness of the new method.
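
    As a toy illustration of optimal sampling-time selection, the sketch below uses exhaustive search (instead of the paper's quantum-inspired evolutionary algorithm) to pick the subset of candidate time points that maximizes the determinant of the Fisher information matrix for a simple exponential-decay model; the model and all numbers are assumptions made for illustration.

```python
import itertools
import numpy as np

# Toy model y(t) = a * exp(-b * t); sensitivities w.r.t. (a, b) form the Jacobian rows.
a, b, sigma = 2.0, 0.3, 0.1
candidates = np.linspace(0.5, 20.0, 25)     # candidate sampling times
k = 4                                       # number of time points we can afford

def fim_logdet(times):
    """log det of the Fisher information matrix for the parameters (a, b)."""
    t = np.asarray(times)
    J = np.column_stack([np.exp(-b * t), -a * t * np.exp(-b * t)])  # d y / d(a, b)
    F = J.T @ J / sigma**2
    sign, logdet = np.linalg.slogdet(F)
    return logdet if sign > 0 else -np.inf

# D-optimal design: the time points that make the parameter estimates least uncertain.
best = max(itertools.combinations(candidates, k), key=fim_logdet)
print("D-optimal time points:", np.round(best, 2))
```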

  10. Sequential ensemble-based optimal design for parameter estimation: SEQUENTIAL ENSEMBLE-BASED OPTIMAL DESIGN

    Energy Technology Data Exchange (ETDEWEB)

    Man, Jun [Zhejiang Provincial Key Laboratory of Agricultural Resources and Environment, Institute of Soil and Water Resources and Environmental Science, College of Environmental and Resource Sciences, Zhejiang University, Hangzhou China; Zhang, Jiangjiang [Zhejiang Provincial Key Laboratory of Agricultural Resources and Environment, Institute of Soil and Water Resources and Environmental Science, College of Environmental and Resource Sciences, Zhejiang University, Hangzhou China; Li, Weixuan [Pacific Northwest National Laboratory, Richland Washington USA; Zeng, Lingzao [Zhejiang Provincial Key Laboratory of Agricultural Resources and Environment, Institute of Soil and Water Resources and Environmental Science, College of Environmental and Resource Sciences, Zhejiang University, Hangzhou China; Wu, Laosheng [Department of Environmental Sciences, University of California, Riverside California USA

    2016-10-01

    The ensemble Kalman filter (EnKF) has been widely used in parameter estimation for hydrological models. The focus of most previous studies was to develop more efficient analysis (estimation) algorithms. On the other hand, it is intuitively understandable that a well-designed sampling (data-collection) strategy should provide more informative measurements and subsequently improve the parameter estimation. In this work, a Sequential Ensemble-based Optimal Design (SEOD) method, coupled with EnKF, information theory and sequential optimal design, is proposed to improve the performance of parameter estimation. Based on the first-order and second-order statistics, different information metrics including the Shannon entropy difference (SD), degrees of freedom for signal (DFS) and relative entropy (RE) are used to design the optimal sampling strategy, respectively. The effectiveness of the proposed method is illustrated by synthetic one-dimensional and two-dimensional unsaturated flow case studies. It is shown that the designed sampling strategies can provide more accurate parameter estimation and state prediction compared with conventional sampling strategies. Optimal sampling designs based on various information metrics perform similarly in our cases. The effect of ensemble size on the optimal design is also investigated. Overall, larger ensemble size improves the parameter estimation and convergence of optimal sampling strategy. Although the proposed method is applied to unsaturated flow problems in this study, it can be equally applied in any other hydrological problems.
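
    For orientation, the analysis step that such a sampling design feeds into can be written in a few lines. The following is a minimal perturbed-observation EnKF update for a parameter ensemble; the linear toy forward model and noise levels are assumptions for illustration and are unrelated to the unsaturated-flow cases of the paper.

```python
import numpy as np

rng = np.random.default_rng(2)

def enkf_update(theta_ens, forward, y_obs, obs_std):
    """One perturbed-observation EnKF analysis step.

    theta_ens : (n_ens, n_param) parameter ensemble
    forward   : maps a parameter vector to predicted observations
    y_obs     : (n_obs,) measurement vector
    """
    pred = np.array([forward(th) for th in theta_ens])          # (n_ens, n_obs)
    th_anom = theta_ens - theta_ens.mean(axis=0)
    pr_anom = pred - pred.mean(axis=0)
    n = len(theta_ens) - 1
    C_tp = th_anom.T @ pr_anom / n                              # parameter-prediction cross covariance
    C_pp = pr_anom.T @ pr_anom / n + np.eye(len(y_obs)) * obs_std**2
    K = C_tp @ np.linalg.inv(C_pp)                              # Kalman gain
    perturbed = y_obs + rng.normal(0.0, obs_std, size=pred.shape)
    return theta_ens + (perturbed - pred) @ K.T

# Toy example: estimate two parameters of a linear forward model from noisy data.
true_theta = np.array([1.5, -0.7])
H = rng.normal(size=(5, 2))
data = H @ true_theta + rng.normal(0.0, 0.05, size=5)
ens = rng.normal(0.0, 1.0, size=(200, 2))                       # prior ensemble
ens = enkf_update(ens, lambda th: H @ th, data, 0.05)
print("posterior mean:", ens.mean(axis=0), " true:", true_theta)
```

    In the sequential design of the paper, the information content of candidate measurements would be scored (e.g. by relative entropy) before such an update is applied.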

  11. Stochastic optimal foraging: tuning intensive and extensive dynamics in random searches.

    Directory of Open Access Journals (Sweden)

    Frederic Bartumeus

    Full Text Available Recent theoretical developments have laid down the proper mathematical means to understand how the structural complexity of search patterns may improve foraging efficiency. Under information-deprived scenarios and specific landscape configurations, Lévy walks and flights are known to lead to high search efficiencies. Based on a one-dimensional comparative analysis we show a mechanism by which, at random, a searcher can optimize the encounter with close and distant targets. The mechanism consists of combining an optimal diffusivity (optimally enhanced diffusion) with a minimal diffusion constant. In such a way the search dynamics adequately balances the tension between finding close and distant targets, while, at the same time, shifting the optimal balance towards relatively larger close-to-distant target encounter ratios. We find that introducing a multiscale set of reorientations ensures both a thorough local space exploration without oversampling and a fast spreading dynamics at the large scale. Lévy reorientation patterns account for these properties but other reorientation strategies providing similar statistical signatures can mimic or achieve comparable efficiencies. Hence, the present work unveils general mechanisms underlying efficient random search, beyond the Lévy model. Our results suggest that animals could tune key statistical movement properties (e.g. enhanced diffusivity, minimal diffusion constant) to cope with the very general problem of balancing out intensive and extensive random searching. We believe that theoretical developments to mechanistically understand stochastic search strategies, such as the one here proposed, are crucial to develop an empirically verifiable and comprehensive animal foraging theory.
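
    The following is a rough one-dimensional toy comparison loosely related to the trade-off discussed above: a searcher drawing heavy-tailed (Lévy-like) step lengths is compared with one drawing thin-tailed exponential steps, counting how many scattered targets are swept within a fixed travel budget. It is not the authors' model; the step laws, detection rule and all parameters are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)

def targets_found(step_sampler, n_targets=100, domain=5_000.0, budget=20_000.0, radius=1.0):
    """Count targets detected by a 1D random search with the given step-length law."""
    targets = np.sort(rng.uniform(0.0, domain, n_targets))
    pos, travelled, found = domain / 2.0, 0.0, np.zeros(n_targets, bool)
    while travelled < budget:
        step = step_sampler() * rng.choice([-1.0, 1.0])
        new = np.clip(pos + step, 0.0, domain)
        lo, hi = min(pos, new) - radius, max(pos, new) + radius
        found |= (targets >= lo) & (targets <= hi)     # targets swept by this move
        travelled += abs(new - pos)
        pos = new
    return found.sum()

levy = lambda mu=2.0, x0=1.0: x0 * rng.pareto(mu - 1.0) + x0   # heavy-tailed step lengths ~ l^(-mu)
thin_tailed = lambda scale=5.0: rng.exponential(scale)         # exponential comparison

print("Levy-like search  :", np.mean([targets_found(levy) for _ in range(20)]))
print("exponential steps :", np.mean([targets_found(thin_tailed) for _ in range(20)]))
```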

  12. Topologically determined optimal stochastic resonance responses of spatially embedded networks

    International Nuclear Information System (INIS)

    Gosak, Marko; Marhl, Marko; Korosak, Dean

    2011-01-01

    We have analyzed the stochastic resonance phenomenon on spatial networks of bistable and excitable oscillators, which are connected according to their location and the amplitude of external forcing. By smoothly altering the network topology from a scale-free (SF) network with dominating long-range connections to a network where principally only adjacent oscillators are connected, we reveal that besides an optimal noise intensity, there is also a most favorable interaction topology at which the best correlation between the response of the network and the imposed weak external forcing is achieved. For various distributions of the amplitudes of external forcing, the optimal topology is always found in the intermediate regime between the highly heterogeneous SF network and the strong geometric regime. Our findings thus indicate that a suitable number of hubs and with that an optimal ratio between short- and long-range connections is necessary in order to obtain the best global response of a spatial network. Furthermore, we link the existence of the optimal interaction topology to a critical point indicating the transition from a long-range interactions-dominated network to a more lattice-like network structure.

  13. Stochastic Optimal Wind Power Bidding Strategy in Short-Term Electricity Market

    DEFF Research Database (Denmark)

    Hu, Weihao; Chen, Zhe; Bak-Jensen, Birgitte

    2012-01-01

    Due to the fluctuating nature and non-perfect forecast of the wind power, the wind power owners are penalized for the imbalance costs of the regulation, when they trade wind power in the short-term liberalized electricity market. Therefore, in this paper a formulation of an imbalance cost minimization problem for trading wind power in the short-term electricity market is described, to help the wind power owners optimize their bidding strategy. Stochastic optimization and a Monte Carlo method are adopted to find the optimal bidding strategy for trading wind power in the short-term electricity market in order to deal with the uncertainty of the regulation price, the activated regulation of the power system and the forecasted wind power generation. The Danish short-term electricity market and a wind farm in western Denmark are chosen as study cases due to the high wind power penetration here...
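
    A schematic version of the scenario-based idea (not the authors' exact formulation): for each candidate day-ahead bid, the expected revenue minus imbalance cost is evaluated over Monte Carlo scenarios of wind production, and the bid with the best expectation is kept. All prices and distributions below are made-up placeholders.

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical market data for one hour (EUR/MWh): spot price, up/down regulation prices.
spot, price_up, price_down = 40.0, 55.0, 25.0
n_scen = 10_000
wind = np.clip(rng.normal(120.0, 35.0, n_scen), 0.0, 200.0)   # wind production scenarios (MWh)

def expected_profit(bid):
    """Spot revenue for the bid plus settlement of the deviation at regulation prices."""
    deviation = wind - bid
    surplus = np.maximum(deviation, 0.0)      # produced more than sold: paid the down-regulation price
    shortfall = np.maximum(-deviation, 0.0)   # produced less than sold: buy back at the up-regulation price
    profit = bid * spot + surplus * price_down - shortfall * price_up
    return profit.mean()

bids = np.linspace(0.0, 200.0, 201)
best_bid = max(bids, key=expected_profit)
print("optimal bid (MWh):", best_bid, " expected profit:", round(expected_profit(best_bid), 1))
```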

  14. Efficient stochastic thermostatting of path integral molecular dynamics.

    Science.gov (United States)

    Ceriotti, Michele; Parrinello, Michele; Markland, Thomas E; Manolopoulos, David E

    2010-09-28

    The path integral molecular dynamics (PIMD) method provides a convenient way to compute the quantum mechanical structural and thermodynamic properties of condensed phase systems at the expense of introducing an additional set of high frequency normal modes on top of the physical vibrations of the system. Efficiently sampling such a wide range of frequencies provides a considerable thermostatting challenge. Here we introduce a simple stochastic path integral Langevin equation (PILE) thermostat which exploits an analytic knowledge of the free path integral normal mode frequencies. We also apply a recently developed colored noise thermostat based on a generalized Langevin equation (GLE), which automatically achieves a similar, frequency-optimized sampling. The sampling efficiencies of these thermostats are compared with that of the more conventional Nosé-Hoover chain (NHC) thermostat for a number of physically relevant properties of the liquid water and hydrogen-in-palladium systems. In nearly every case, the new PILE thermostat is found to perform just as well as the NHC thermostat while allowing for a computationally more efficient implementation. The GLE thermostat also proves to be very robust delivering a near-optimum sampling efficiency in all of the cases considered. We suspect that these simple stochastic thermostats will therefore find useful application in many future PIMD simulations.
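
    For orientation, the PILE prescription attaches a white-noise Langevin thermostat to each free ring-polymer normal mode, with the friction of each internal mode set to twice that mode's frequency and the centroid friction set by a user-chosen time constant. The sketch below computes these mode-wise frictions and the corresponding per-step propagation coefficients; it follows the published recipe as we understand it, with illustrative values for the bead number, temperature and time step.

```python
import numpy as np

# Illustrative settings (atomic-style units): n beads, inverse temperature beta, hbar = 1.
n, beta, hbar = 32, 8.0, 1.0
tau0 = 0.1          # user-chosen centroid thermostat time constant (assumption)

beta_n = beta / n
omega_n = 1.0 / (beta_n * hbar)
k = np.arange(n)
omega_k = 2.0 * omega_n * np.sin(np.pi * k / n)   # free ring-polymer normal-mode frequencies

gamma = np.empty(n)
gamma[0] = 1.0 / tau0          # centroid mode: friction from the chosen time constant
gamma[1:] = 2.0 * omega_k[1:]  # internal modes: damped at twice their own frequency

# Per-step coefficients of the Ornstein-Uhlenbeck update applied to the mode momenta:
#   p_k <- c1_k * p_k + c2_k * sqrt(m_k / beta_n) * xi,   xi ~ N(0, 1)
dt = 0.01
c1 = np.exp(-0.5 * dt * gamma)
c2 = np.sqrt(1.0 - c1**2)
print("mode frictions:", np.round(gamma[:5], 3), "...")
```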

  15. Simulation-optimization framework for multi-site multi-season hybrid stochastic streamflow modeling

    Science.gov (United States)

    Srivastav, Roshan; Srinivasan, K.; Sudheer, K. P.

    2016-11-01

    A simulation-optimization (S-O) framework is developed for the hybrid stochastic modeling of multi-site multi-season streamflows. The multi-objective optimization model formulated is the driver and the multi-site, multi-season hybrid matched block bootstrap model (MHMABB) is the simulation engine within this framework. The multi-site multi-season simulation model is the extension of the existing single-site multi-season simulation model. A robust and efficient evolutionary search based technique, namely, the non-dominated sorting based genetic algorithm (NSGA-II), is employed as the solution technique for the multi-objective optimization within the S-O framework. The objective functions employed are related to the preservation of the multi-site critical deficit run sum and the constraints introduced are concerned with the hybrid model parameter space, and the preservation of certain statistics (such as inter-annual dependence and/or skewness of aggregated annual flows). The efficacy of the proposed S-O framework is brought out through a case example from the Colorado River basin. The proposed multi-site multi-season model AMHMABB (whose parameters are obtained from the proposed S-O framework) preserves the temporal as well as the spatial statistics of the historical flows. Also, the other multi-site deficit run characteristics, namely the number of runs, the maximum run length, the mean run sum and the mean run length, are well preserved by the AMHMABB model. Overall, the proposed AMHMABB model is able to show better streamflow modeling performance when compared with the simulation based SMHMABB model, plausibly due to the significant role played by: (i) the objective functions related to the preservation of multi-site critical deficit run sum; (ii) the huge hybrid model parameter space available for the evolutionary search and (iii) the constraint on the preservation of the inter-annual dependence. Split-sample validation results indicate that the AMHMABB model is

  16. Aspects of stochastic models for short-term hydropower scheduling and bidding

    Energy Technology Data Exchange (ETDEWEB)

    Belsnes, Michael Martin [Sintef Energy, Trondheim (Norway); Follestad, Turid [Sintef Energy, Trondheim (Norway); Wolfgang, Ove [Sintef Energy, Trondheim (Norway); Fosso, Olav B. [Dep. of electric power engineering NTNU, Trondheim (Norway)

    2012-07-01

    This report discusses challenges met when turning from deterministic to stochastic decision support models for short-term hydropower scheduling and bidding. The report describes characteristics of the short-term scheduling and bidding problem, different market and bidding strategies, and how a stochastic optimization model can be formulated. A review of approaches for stochastic short-term modelling and stochastic modelling for the input variables inflow and market prices is given. The report discusses methods for approximating the predictive distribution of uncertain variables by scenario trees. Benefits of using a stochastic over a deterministic model are illustrated by a case study, where increased profit is obtained to a varying degree depending on the reservoir filling and price structure. Finally, an approach for assessing the effect of using a size restricted scenario tree to approximate the predictive distribution for stochastic input variables is described. The report is a summary of the findings of Work package 1 of the research project "Optimal short-term scheduling of wind and hydro resources". The project aims at developing a prototype for an operational stochastic short-term scheduling model. Based on the investigations summarized in the report, it is concluded that using a deterministic equivalent formulation of the stochastic optimization problem is convenient and sufficient for obtaining a working prototype. (author)

  17. Efficiency of analytical and sampling-based uncertainty propagation in intensity-modulated proton therapy

    Science.gov (United States)

    Wahl, N.; Hennig, P.; Wieser, H. P.; Bangert, M.

    2017-07-01

    The sensitivity of intensity-modulated proton therapy (IMPT) treatment plans to uncertainties can be quantified and mitigated with robust/min-max and stochastic/probabilistic treatment analysis and optimization techniques. Those methods usually rely on sparse random, importance, or worst-case sampling. Inevitably, this imposes a trade-off between computational speed and accuracy of the uncertainty propagation. Here, we investigate analytical probabilistic modeling (APM) as an alternative for uncertainty propagation and minimization in IMPT that does not rely on scenario sampling. APM propagates probability distributions over range and setup uncertainties via a Gaussian pencil-beam approximation into moments of the probability distributions over the resulting dose in closed form. It supports arbitrary correlation models and allows for efficient incorporation of fractionation effects regarding random and systematic errors. We evaluate the trade-off between run-time and accuracy of APM uncertainty computations on three patient datasets. Results are compared against reference computations facilitating importance and random sampling. Two approximation techniques to accelerate uncertainty propagation and minimization based on probabilistic treatment plan optimization are presented. Runtimes are measured on CPU and GPU platforms, dosimetric accuracy is quantified in comparison to a sampling-based benchmark (5000 random samples). APM accurately propagates range and setup uncertainties into dose uncertainties at competitive run-times (GPU ≤ 5 min). The resulting standard deviation (expectation value) of dose show average global γ(3%/3 mm) pass rates between 94.2% and 99.9% (98.4% and 100.0%). All investigated importance sampling strategies provided less accuracy at higher run-times considering only a single fraction. Considering fractionation, APM uncertainty propagation and treatment plan optimization was proven to be possible at constant time complexity

  18. A stochastic aerodynamic model for stationary blades in unsteady 3D wind fields

    International Nuclear Information System (INIS)

    Fluck, Manuel; Crawford, Curran

    2016-01-01

    Dynamic loads play an important role in the design of wind turbines, but establishing the life-time aerodynamic loads (e.g. extreme and fatigue loads) is a computationally expensive task. Conventional (deterministic) methods to analyze long term loads, which rely on the repeated analysis of multiple different wind samples, are usually too expensive to be included in optimization routines. We present a new stochastic approach, which solves the aerodynamic system equations (Lagrangian vortex model) in the stochastic space, and thus arrives directly at a stochastic description of the coupled loads along a turbine blade. This new approach removes the requirement of analyzing multiple different realizations. Instead, long term loads can be extracted from a single stochastic solution, a procedure that is obviously significantly faster. Despite the reduced analysis time, results obtained from the stochastic approach match deterministic results well for a simple test-case (a stationary blade). In future work, the stochastic method will be extended to rotating blades, thus opening up new avenues to include long term loads into turbine optimization. (paper)

  19. 3rd GAMM/IFIP-Workshop on “Stochastic Optimization: Numerical Methods and Technical Applications” held at the Federal Armed Forces University Munich

    CERN Document Server

    Kall, Peter

    1998-01-01

    Optimization problems arising in practice usually contain several random parameters. Hence, in order to obtain optimal solutions being robust with respect to random parameter variations, the mostly available statistical information about the random parameters should be considered already at the planning phase. The original problem with random parameters must be replaced by an appropriate deterministic substitute problem, and efficient numerical solution or approximation techniques have to be developed for those problems. This proceedings volume contains a selection of papers on modelling techniques, approximation methods, numerical solution procedures for stochastic optimization problems and applications to the reliability-based optimization of concrete technical or economic systems.

  20. Stochastic Watershed Models for Risk Based Decision Making

    Science.gov (United States)

    Vogel, R. M.

    2017-12-01

    Over half a century ago, the Harvard Water Program introduced the field of operational or synthetic hydrology providing stochastic streamflow models (SSMs), which could generate ensembles of synthetic streamflow traces useful for hydrologic risk management. The application of SSMs, based on streamflow observations alone, revolutionized water resources planning activities, yet has fallen out of favor due, in part, to their inability to account for the now nearly ubiquitous anthropogenic influences on streamflow. This commentary advances the modern equivalent of SSMs, termed 'stochastic watershed models' (SWMs), useful as input to nearly all modern risk based water resource decision making approaches. SWMs are deterministic watershed models implemented using stochastic meteorological series, model parameters and model errors, to generate ensembles of streamflow traces that represent the variability in possible future streamflows. SWMs combine deterministic watershed models, which are ideally suited to accounting for anthropogenic influences, with recent developments in uncertainty analysis and principles of stochastic simulation.

  1. A cavitation model based on Eulerian stochastic fields

    Science.gov (United States)

    Magagnato, F.; Dumond, J.

    2013-12-01

    Non-linear phenomena can often be described using probability density functions (pdf) and pdf transport models. Traditionally the simulation of pdf transport requires Monte-Carlo codes based on Lagrangian "particles" or prescribed pdf assumptions including binning techniques. Recently, in the field of combustion, a novel formulation called the stochastic-field method solving pdf transport based on Eulerian fields has been proposed which eliminates the necessity to mix Eulerian and Lagrangian techniques or prescribed pdf assumptions. In the present work, for the first time the stochastic-field method is applied to multi-phase flow and in particular to cavitating flow. To validate the proposed stochastic-field cavitation model, two applications are considered. Firstly, sheet cavitation is simulated in a Venturi-type nozzle. The second application is an innovative fluidic diode which exhibits coolant flashing. Agreement with experimental results is obtained for both applications with a fixed set of model constants. The stochastic-field cavitation model captures the wide range of pdf shapes present at different locations.

  2. Localized Multiple Kernel Learning Via Sample-Wise Alternating Optimization.

    Science.gov (United States)

    Han, Yina; Yang, Kunde; Ma, Yuanliang; Liu, Guizhong

    2014-01-01

    Our objective is to train support vector machines (SVM)-based localized multiple kernel learning (LMKL), using the alternating optimization between the standard SVM solvers with the local combination of base kernels and the sample-specific kernel weights. The advantage of alternating optimization developed from the state-of-the-art MKL is the SVM-tied overall complexity and the simultaneous optimization on both the kernel weights and the classifier. Unfortunately, in LMKL, the sample-specific character makes the updating of kernel weights a difficult quadratic nonconvex problem. In this paper, starting from a new primal-dual equivalence, the canonical objective on which state-of-the-art methods are based is first decomposed into an ensemble of objectives corresponding to each sample, namely, sample-wise objectives. Then, the associated sample-wise alternating optimization method is conducted, in which the localized kernel weights can be independently obtained by solving their exclusive sample-wise objectives, either linear programming (for l1-norm) or with closed-form solutions (for lp-norm). At test time, the learnt kernel weights for the training data are deployed based on the nearest-neighbor rule. Hence, to guarantee their generality among the test part, we introduce the neighborhood information and incorporate it into the empirical loss when deriving the sample-wise objectives. Extensive experiments on four benchmark machine learning datasets and two real-world computer vision datasets demonstrate the effectiveness and efficiency of the proposed algorithm.

  3. Optimization of advanced gas-cooled reactor fuel performance by a stochastic method

    International Nuclear Information System (INIS)

    Parks, G.T.

    1987-01-01

    A brief description is presented of a model representing the in-core behaviour of a single advanced gas-cooled reactor fuel channel, developed specifically for optimization studies. The performances of the only suitable Numerical Algorithms Group (NAG) library package and a Metropolis algorithm routine on this problem are discussed and contrasted. It is concluded that, for the problem in question, the stochastic Metropolis algorithm has distinct advantages over the deterministic NAG routine. (author)
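
    The abstract contrasts a deterministic NAG routine with a Metropolis algorithm. The snippet below is a generic Metropolis/simulated-annealing minimizer of the kind alluded to, applied to a made-up two-variable test objective rather than the AGR fuel-channel model; the cooling schedule and step size are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(5)

def objective(x):
    """Placeholder multimodal objective standing in for the fuel-channel performance model."""
    return np.sum(x**2) + 3.0 * np.sin(3.0 * x).sum()

def metropolis_minimize(x0, n_iter=20_000, step=0.3, t0=2.0):
    x = np.array(x0, float)
    fx = objective(x)
    best_x, best_f = x.copy(), fx
    for i in range(n_iter):
        temp = t0 * (1.0 - i / n_iter) + 1e-3           # slowly cooled temperature
        cand = x + rng.normal(0.0, step, size=x.shape)  # random trial move
        fc = objective(cand)
        if fc < fx or rng.random() < np.exp(-(fc - fx) / temp):  # Metropolis acceptance rule
            x, fx = cand, fc
            if fx < best_f:
                best_x, best_f = x.copy(), fx
    return best_x, best_f

print(metropolis_minimize([4.0, -3.0]))
```

    Unlike a gradient-based routine, the stochastic acceptance of uphill moves lets the search escape local minima of the objective.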

  4. [Application of simulated annealing method and neural network on optimizing soil sampling schemes based on road distribution].

    Science.gov (United States)

    Han, Zong-wei; Huang, Wei; Luo, Yun; Zhang, Chun-di; Qi, Da-cheng

    2015-03-01

    Taking the soil organic matter in eastern Zhongxiang County, Hubei Province, as a research object, thirteen sample sets from different regions were arranged surrounding the road network, the spatial configuration of which was optimized by the simulated annealing approach. The topographic factors of these thirteen sample sets, including slope, plane curvature, profile curvature, topographic wetness index, stream power index and sediment transport index, were extracted by the terrain analysis. Based on the results of optimization, a multiple linear regression model with topographic factors as independent variables was built. At the same time, a multilayer perception model on the basis of neural network approach was implemented. The comparison between these two models was carried out then. The results revealed that the proposed approach was practicable in optimizing soil sampling scheme. The optimal configuration was capable of gaining soil-landscape knowledge exactly, and the accuracy of optimal configuration was better than that of original samples. This study designed a sampling configuration to study the soil attribute distribution by referring to the spatial layout of road network, historical samples, and digital elevation data, which provided an effective means as well as a theoretical basis for determining the sampling configuration and displaying spatial distribution of soil organic matter with low cost and high efficiency.
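
    A simplified sketch of the spatial-configuration step: simulated annealing swaps sampling sites in and out of a candidate pool (random points standing in for road-accessible locations) so as to minimize the mean distance from a prediction grid to its nearest sample. The candidate set, objective and annealing schedule are illustrative assumptions, not the study's data or exact criterion.

```python
import numpy as np

rng = np.random.default_rng(6)

candidates = rng.uniform(0.0, 100.0, size=(300, 2))   # toy road-accessible candidate locations
grid = np.stack(np.meshgrid(np.linspace(0, 100, 25), np.linspace(0, 100, 25)), -1).reshape(-1, 2)
k = 20                                                 # number of soil samples we can afford

def mean_nearest_distance(idx):
    """Mean distance from each prediction point to its nearest selected sample."""
    d = np.linalg.norm(grid[:, None, :] - candidates[idx][None, :, :], axis=2)
    return d.min(axis=1).mean()

current = list(rng.choice(len(candidates), k, replace=False))
f_cur = mean_nearest_distance(current)
for i in range(4000):
    temp = 2.0 * (1.0 - i / 4000) + 1e-3
    prop = current.copy()
    prop[rng.integers(k)] = rng.integers(len(candidates))      # swap one site for another candidate
    if len(set(prop)) < k:
        continue                                               # skip configurations with duplicates
    f_prop = mean_nearest_distance(prop)
    if f_prop < f_cur or rng.random() < np.exp((f_cur - f_prop) / temp):
        current, f_cur = prop, f_prop
print("optimized mean nearest-sample distance:", round(f_cur, 2))
```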

  5. GIS-based approach for optimal siting and sizing of renewables considering techno-environmental constraints and the stochastic nature of meteorological inputs

    Science.gov (United States)

    Daskalou, Olympia; Karanastasi, Maria; Markonis, Yannis; Dimitriadis, Panayiotis; Koukouvinos, Antonis; Efstratiadis, Andreas; Koutsoyiannis, Demetris

    2016-04-01

    Following the legislative EU targets and taking advantage of its high renewable energy potential, Greece can obtain significant benefits from developing its water, solar and wind energy resources. In this context we present a GIS-based methodology for the optimal sizing and siting of solar and wind energy systems at the regional scale, which is tested in the Prefecture of Thessaly. First, we assess the wind and solar potential, taking into account the stochastic nature of the associated meteorological processes (i.e. wind speed and solar radiation, respectively), which is an essential component for both planning (i.e., type selection and sizing of photovoltaic panels and wind turbines) and management purposes (i.e., real-time operation of the system). For the optimal siting, we assess the efficiency and economic performance of the energy system, also accounting for a number of constraints associated with topographic limitations (e.g., terrain slope, proximity to road and electricity grid network, etc.), the environmental legislation and other land use constraints. Based on this analysis, we investigate favorable alternatives using technical, environmental as well as financial criteria. The final outcome is GIS maps that depict the available energy potential and the optimal layout for photovoltaic panels and wind turbines over the study area. We also consider a hypothetical scenario of future development of the study area, in which we assume the combined operation of the above renewables with major hydroelectric dams and pumped-storage facilities, thus providing a unique hybrid renewable system, extended at the regional scale.

  6. Flow injection analysis simulations and diffusion coefficient determination by stochastic and deterministic optimization methods.

    Science.gov (United States)

    Kucza, Witold

    2013-07-25

    Stochastic and deterministic simulations of dispersion in cylindrical channels on the Poiseuille flow have been presented. The random walk (stochastic) and the uniform dispersion (deterministic) models have been used for computations of flow injection analysis responses. These methods coupled with the genetic algorithm and the Levenberg-Marquardt optimization methods, respectively, have been applied for determination of diffusion coefficients. The diffusion coefficients of fluorescein sodium, potassium hexacyanoferrate and potassium dichromate have been determined by means of the presented methods and FIA responses that are available in literature. The best-fit results agree with each other and with experimental data thus validating both presented approaches. Copyright © 2013 The Author. Published by Elsevier B.V. All rights reserved.
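
    The random-walk (stochastic) dispersion model mentioned above can be sketched in a few lines: solute particles are advected by the parabolic Poiseuille velocity profile and take radial and axial diffusive steps, and their arrival times at the detector form the simulated FIA response. The geometry, flow rate and diffusion coefficient below are illustrative values only.

```python
import numpy as np

rng = np.random.default_rng(7)

# Illustrative channel and transport parameters (SI units).
R, L = 0.25e-3, 0.5          # tube radius (m) and distance to the detector (m)
u_mean, D = 1e-2, 5e-10      # mean flow velocity (m/s), molecular diffusion coefficient (m^2/s)
dt, n_particles = 0.01, 1000

x = np.zeros(n_particles)                            # axial positions
r = R * np.sqrt(rng.uniform(0.0, 1.0, n_particles))  # radial positions, uniform over the cross-section
arrival = np.full(n_particles, np.nan)

t = 0.0
while np.isnan(arrival).any() and t < 400.0:
    alive = np.isnan(arrival)
    u = 2.0 * u_mean * (1.0 - (r[alive] / R) ** 2)   # Poiseuille velocity profile
    x[alive] += u * dt + rng.normal(0.0, np.sqrt(2 * D * dt), alive.sum())
    r[alive] = np.abs(r[alive] + rng.normal(0.0, np.sqrt(2 * D * dt), alive.sum()))
    r[alive] = np.where(r[alive] > R, 2 * R - r[alive], r[alive])   # reflect at the tube wall
    t += dt
    arrival[alive & (x >= L)] = t                    # particle passes the detector

# The histogram of 'arrival' approximates the FIA response curve; fitting it against
# measured responses is how a diffusion coefficient would be estimated.
print("mean residence time (s):", np.nanmean(arrival))
```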

  7. Performance for the hybrid method using stochastic and deterministic searching for shape optimization of electromagnetic devices

    International Nuclear Information System (INIS)

    Yokose, Yoshio; Noguchi, So; Yamashita, Hideo

    2002-01-01

    Stochastic and deterministic methods are both used for the shape optimization of electromagnetic devices. Genetic Algorithms (GAs) are used as a stochastic method for multivariable designs, while the deterministic method is a gradient method that uses the sensitivity of the objective function. Each of these two techniques has benefits and drawbacks. In this paper, the characteristics of these techniques are described, and a hybrid technique in which the two methods are used together is evaluated. The results of the comparison are then described by applying each method to electromagnetic devices. (Author)

  8. Event-Triggered Faults Tolerant Control for Stochastic Systems with Time Delays

    Directory of Open Access Journals (Sweden)

    Ling Huang

    2016-01-01

    Full Text Available This paper is concerned with the state-feedback controller design for stochastic networked control systems (NCSs) with random actuator failures and transmission delays. Firstly, an event-triggered scheme is introduced to optimize the performance of the stochastic NCSs. Secondly, stochastic NCSs under the event-triggered scheme are modeled as stochastic time-delay systems. Thirdly, some less conservative delay-dependent stability criteria in terms of linear matrix inequalities for the codesign of both the controller gain and the trigger parameters are obtained by using the delay-decomposition technique and a convex combination approach. Finally, a numerical example is provided to show the reduced sampled-data transmission and reduced conservatism of the proposed theory.

  9. Adaptive logical stochastic resonance in time-delayed synthetic genetic networks

    Science.gov (United States)

    Zhang, Lei; Zheng, Wenbin; Song, Aiguo

    2018-04-01

    In this paper, the concept of logical stochastic resonance is applied to implement logic operation and latch operation in time-delayed synthetic genetic networks derived from a bacteriophage λ. Clear logic operation and latch operation can be obtained when the network is tuned by a modulated periodic force and time-delay. In contrast with previous synthetic genetic networks based on logical stochastic resonance, the proposed system has two advantages. On one hand, adding a modulated periodic force to the background noise can increase the length of the optimal noise plateau over which the desired logic response is obtained and make the system adapt to varying noise intensity. On the other hand, tuning the time-delay can extend the optimal noise plateau to a larger range. The result provides possible help for designing a new genetic regulatory network paradigm based on logical stochastic resonance.

  10. Optimal adaptive control for a class of stochastic systems

    NARCIS (Netherlands)

    Bagchi, Arunabha; Chen, Han-Fu

    1995-01-01

    We study linear-quadratic adaptive tracking problems for a special class of stochastic systems expressed in the state-space form. This is a long-standing problem in the control of aircraft flying through atmospheric turbulence. Using an ELS-based algorithm and introducing dither in the control law

  11. Study on Stochastic Optimal Electric Power Procurement Strategies with Uncertain Market Prices

    Science.gov (United States)

    Sakchai, Siripatanakulkhajorn; Saisho, Yuichi; Fujii, Yasumasa; Yamaji, Kenji

    The players in deregulated electricity markets can be categorized into three groups: GENCOs (Generator Companies), TRANSCOs (Transmission Companies) and DISCOs (Distribution Companies). This research focuses on the role of distribution companies, which purchase electricity from the market at randomly fluctuating prices and provide it to their customers at given fixed prices. Distribution companies therefore bear the risk stemming from the price fluctuation of electricity instead of the customers, which makes it necessary to develop a method for constructing an optimal electricity procurement strategy. In such a circumstance, this research proposes a mathematical method based on stochastic dynamic programming to evaluate the value of a long-term bilateral contract for electricity trade, as well as a project combining the bilateral contract with power generation from the company's own generators, for procuring electric power in the deregulated market.

  12. Automated Flight Routing Using Stochastic Dynamic Programming

    Science.gov (United States)

    Ng, Hok K.; Morando, Alex; Grabbe, Shon

    2010-01-01

    Airspace capacity reduction due to convective weather impedes air traffic flows and causes traffic congestion. This study presents an algorithm based on stochastic dynamic programming that reroutes flights in the presence of winds, enroute convective weather, and congested airspace. A stochastic disturbance model incorporates the capacity uncertainty into the reroute design process. A trajectory-based airspace demand model is employed for calculating current and future airspace demand. The optimal routes minimize the total expected traveling time, weather incursion, and induced congestion costs. They are compared to weather-avoidance routes calculated using deterministic dynamic programming. The stochastic reroutes have smaller deviation probability than the deterministic counterpart when both reroutes have similar total flight distance. The stochastic rerouting algorithm takes into account all convective weather fields with all severity levels while the deterministic algorithm only accounts for convective weather systems exceeding a specified level of severity. When the stochastic reroutes are compared to the actual flight routes, they have similar total flight time, and both have about 1% of travel time crossing congested enroute sectors on average. The actual flight routes induce slightly less traffic congestion than the stochastic reroutes but intercept more severe convective weather.
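
    A toy version of the stochastic dynamic programming recursion: on a small grid of waypoints, each cell has a probability of being blocked by convective weather in the next period, and backward induction computes the expected cost-to-go and the lateral-move policy that minimizes expected travel plus weather-incursion cost. The grid, probabilities and costs are invented for illustration and are far simpler than the paper's trajectory-based demand and disturbance models.

```python
import numpy as np

rng = np.random.default_rng(8)

n_stages, n_rows = 12, 7
p_weather = rng.uniform(0.0, 0.6, size=(n_stages, n_rows))  # P(cell hit by convection) per stage
p_weather[:, n_rows // 2] *= 0.3                            # a corridor that is usually clearer
W = 10.0                                                     # penalty for a weather incursion

V = np.zeros((n_stages, n_rows))            # expected cost-to-go
policy = np.zeros((n_stages, n_rows), int)  # best row offset (-1, 0, +1) at each state
for s in range(n_stages - 2, -1, -1):       # backward induction over stages
    for r in range(n_rows):
        best, arg = np.inf, 0
        for dr in (-1, 0, 1):               # allowed lateral moves per stage
            nr = r + dr
            if 0 <= nr < n_rows:
                cost = 1.0 + abs(dr) * 0.2 + p_weather[s + 1, nr] * W + V[s + 1, nr]
                if cost < best:
                    best, arg = cost, dr
        V[s, r], policy[s, r] = best, arg

# Roll the optimal policy out from the middle row of the first stage.
route, r = [n_rows // 2], n_rows // 2
for s in range(n_stages - 1):
    r += policy[s, r]
    route.append(r)
print("expected cost:", round(V[0, n_rows // 2], 2), " route rows:", route)
```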

  13. The stochastic goodwill problem

    OpenAIRE

    Marinelli, Carlo

    2003-01-01

    Stochastic control problems related to optimal advertising under uncertainty are considered. In particular, we determine the optimal strategies for the problem of maximizing the utility of goodwill at launch time and minimizing the disutility of a stream of advertising costs that extends until the launch time for some classes of stochastic perturbations of the classical Nerlove-Arrow dynamics. We also consider some generalizations such as problems with constrained budget and with discretionar...

  14. The two-regime method for optimizing stochastic reaction-diffusion simulations

    KAUST Repository

    Flegg, M. B.; Chapman, S. J.; Erban, R.

    2011-01-01

    Spatial organization and noise play an important role in molecular systems biology. In recent years, a number of software packages have been developed for stochastic spatio-temporal simulation, ranging from detailed molecular-based approaches

  15. The costs of electricity systems with a high share of fluctuating renewables. A stochastic investment and dispatch optimization model for Europe

    International Nuclear Information System (INIS)

    Nagl, Stephan; Fuersch, Michaela; Lindenberger, Dietmar

    2012-01-01

    Renewable energies are meant to produce a large share of the future electricity demand. However, the availability of wind and solar power depends on local weather conditions and therefore weather characteristics must be considered when optimizing the future electricity mix. In this article we analyze the impact of the stochastic availability of wind and solar energy on the cost-minimal power plant mix and the related total system costs. To determine optimal conventional, renewable and storage capacities for different shares of renewables, we apply a stochastic investment and dispatch optimization model to the European electricity market. The model considers stochastic feed-in structures and full load hours of wind and solar technologies and different correlations between regions and technologies. Key findings include the overestimation of fluctuating renewables and underestimation of total system costs compared to deterministic investment and dispatch models. Furthermore, solar technologies are - relative to wind turbines - underestimated when neglecting negative correlations between wind speeds and solar radiation.

  16. Distributed Fusion Estimation for Multisensor Multirate Systems with Stochastic Observation Multiplicative Noises

    Directory of Open Access Journals (Sweden)

    Peng Fangfang

    2014-01-01

    Full Text Available This paper studies the fusion estimation problem for a class of multisensor multirate systems with observation multiplicative noises. The dynamic system is sampled uniformly. The sampling period of each sensor is uniform and an integer multiple of the state update period. Moreover, different sensors have different sampling rates and the observations of the sensors are subject to the stochastic uncertainties of multiplicative noises. First, local filters at the observation sampling points are obtained based on the observations of each sensor. Further, local estimators at the state update points are obtained by predictions of the local filters at the observation sampling points; they have a reduced computational cost and a good real-time property. Then, the cross-covariance matrices between any two local estimators are derived at the state update points. Finally, using the matrix weighted optimal fusion estimation algorithm in the linear minimum variance sense, the distributed optimal fusion estimator is obtained based on the local estimators and the cross-covariance matrices. An example shows the effectiveness of the proposed algorithms.

  17. Stochastic processes, optimization, and control theory a volume in honor of Suresh Sethi

    CERN Document Server

    Yan, Houmin

    2006-01-01

    This edited volume contains 16 research articles. It presents recent and pressing issues in stochastic processes, control theory, differential games, optimization, and their applications in finance, manufacturing, queueing networks, and climate control. One of the salient features is that the book is highly multi-disciplinary. The book is dedicated to Professor Suresh Sethi on the occasion of his 60th birthday, in view of his distinguished career.

  18. Adaptive methods for stochastic differential equations via natural embeddings and rejection sampling with memory.

    Science.gov (United States)

    Rackauckas, Christopher; Nie, Qing

    2017-01-01

    Adaptive time-stepping with high-order embedded Runge-Kutta pairs and rejection sampling provides efficient approaches for solving differential equations. While many such methods exist for solving deterministic systems, little progress has been made for stochastic variants. One challenge in developing adaptive methods for stochastic differential equations (SDEs) is the construction of embedded schemes with direct error estimates. We present a new class of embedded stochastic Runge-Kutta (SRK) methods with strong order 1.5 which have a natural embedding of strong order 1.0 methods. This allows for the derivation of an error estimate which requires no additional function evaluations. Next we derive a general method to reject the time steps without losing information about the future Brownian path termed Rejection Sampling with Memory (RSwM). This method utilizes a stack data structure to do rejection sampling, costing only a few floating point calculations. We show numerically that the methods generate statistically-correct and tolerance-controlled solutions. Lastly, we show that this form of adaptivity can be applied to systems of equations, and demonstrate that it solves a stiff biological model 12.28x faster than common fixed timestep algorithms. Our approach only requires the solution to a bridging problem and thus lends itself to natural generalizations beyond SDEs.
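
    The key idea of rejecting a step without discarding information about the Brownian path can be illustrated with a much simpler scheme than the paper's embedded SRK pairs: an adaptive Euler-Maruyama step whose error is estimated by comparing one full step with two half steps, and whose rejected increments are subdivided with a Brownian bridge so the retained half stays consistent with what was already sampled. This is only a schematic analogue of RSwM (it does not keep the full stack of future increments), with a toy SDE and made-up tolerances.

```python
import numpy as np

rng = np.random.default_rng(9)

# Toy SDE: geometric Brownian motion dX = mu*X dt + sigma*X dW.
mu, sigma = 0.05, 0.4
f = lambda x: mu * x
g = lambda x: sigma * x

def em_step(x, h, dw):
    return x + f(x) * h + g(x) * dw

def adaptive_em(x0, t_end, tol=1e-3, h0=0.1):
    t, x, h = 0.0, x0, h0
    dw = rng.normal(0.0, np.sqrt(h))
    while True:
        # Error estimate: one full step vs. two half steps over the same Brownian increment.
        dw1 = 0.5 * dw + 0.5 * np.sqrt(h) * rng.normal()   # Brownian-bridge split of dw
        dw2 = dw - dw1
        coarse = em_step(x, h, dw)
        fine = em_step(em_step(x, 0.5 * h, dw1), 0.5 * h, dw2)
        if abs(fine - coarse) <= tol or h < 1e-8:
            t, x = t + h, fine                              # accept (keep the finer solution)
            if t >= t_end - 1e-12:
                return x
            h = min(1.5 * h, t_end - t)                     # grow the step without overshooting
            dw = rng.normal(0.0, np.sqrt(h))
        else:
            # Reject: halve the step but keep the already-sampled first-half increment,
            # so the Brownian path stays consistent with what was drawn.
            h, dw = 0.5 * h, dw1

print("X(1) sample:", adaptive_em(1.0, 1.0))
```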

  19. ARIMA-Based Time Series Model of Stochastic Wind Power Generation

    DEFF Research Database (Denmark)

    Chen, Peiyuan; Pedersen, Troels; Bak-Jensen, Birgitte

    2010-01-01

    This paper proposes a stochastic wind power model based on an autoregressive integrated moving average (ARIMA) process. The model takes into account the nonstationarity and physical limits of stochastic wind power generation. The model is constructed based on wind power measurement of one year from the Nysted offshore wind farm in Denmark. The proposed limited-ARIMA (LARIMA) model introduces a limiter and characterizes the stochastic wind power generation by mean level, temporal correlation and driving noise. The model is validated against the measurement in terms of temporal correlation and probability distribution. The LARIMA model outperforms a first-order transition matrix based discrete Markov model in terms of temporal correlation, probability distribution and model parameter number. The proposed LARIMA model is further extended to include the monthly variation of the stochastic wind power...
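
    To illustrate the "limited" part of the LARIMA idea, the sketch below simulates a low-order ARMA process around a mean level and passes the output through a limiter enforcing the physical bounds of wind power (zero and rated capacity); unlike the actual LARIMA model, the limiter here does not feed back into the state, and the orders, coefficients and capacity are illustrative rather than the fitted Nysted values.

```python
import numpy as np

rng = np.random.default_rng(10)

# Illustrative limited-ARMA(1,1) simulation of hourly wind power (MW).
rated, mean_level = 165.0, 70.0
phi, theta, sigma_e = 0.95, 0.3, 6.0          # AR, MA coefficients and driving-noise std
n = 24 * 30

p = np.empty(n)
x_prev, e_prev = 0.0, 0.0
for t in range(n):
    e = rng.normal(0.0, sigma_e)
    x = phi * x_prev + e + theta * e_prev      # ARMA(1,1) deviation from the mean level
    p[t] = np.clip(mean_level + x, 0.0, rated) # limiter: physical bounds of the wind farm
    x_prev, e_prev = x, e

lag1 = np.corrcoef(p[:-1], p[1:])[0, 1]
print("share of hours at the limits:", np.mean((p == 0.0) | (p == rated)))
print("lag-1 autocorrelation:", round(lag1, 3))
```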

  20. Multi-Index Stochastic Collocation (MISC) for random elliptic PDEs

    KAUST Repository

    Haji Ali, Abdul Lateef; Nobile, Fabio; Tamellini, Lorenzo; Tempone, Raul

    2016-01-01

    In this work we introduce the Multi-Index Stochastic Collocation method (MISC) for computing statistics of the solution of a PDE with random data. MISC is a combination technique based on mixed differences of spatial approximations and quadratures over the space of random data. We propose an optimization procedure to select the most effective mixed differences to include in the MISC estimator: such optimization is a crucial step and allows us to build a method that, provided with sufficient solution regularity, is potentially more effective than other multi-level collocation methods already available in literature. We then provide a complexity analysis that assumes decay rates of product type for such mixed differences, showing that in the optimal case the convergence rate of MISC is only dictated by the convergence of the deterministic solver applied to a one dimensional problem. We show the effectiveness of MISC with some computational tests, comparing it with other related methods available in the literature, such as the Multi-Index and Multilevel Monte Carlo, Multilevel Stochastic Collocation, Quasi Optimal Stochastic Collocation and Sparse Composite Collocation methods.

  2. A stochastic optimal feedforward and feedback control methodology for superagility

    Science.gov (United States)

    Halyo, Nesim; Direskeneli, Haldun; Taylor, Deborah B.

    1992-01-01

    A new control design methodology is developed: Stochastic Optimal Feedforward and Feedback Technology (SOFFT). Traditional design techniques optimize a single cost function (which expresses the design objectives) to obtain both the feedforward and feedback control laws. This approach places conflicting demands on the control law such as fast tracking versus noise attenuation/disturbance rejection. In the SOFFT approach, two cost functions are defined. The feedforward control law is designed to optimize one cost function, the feedback optimizes the other. By separating the design objectives and decoupling the feedforward and feedback design processes, both objectives can be achieved fully. A new measure of command tracking performance, Z-plots, is also developed. By analyzing these plots at off-nominal conditions, the sensitivity or robustness of the system in tracking commands can be predicted. Z-plots provide an important tool for designing robust control systems. The Variable-Gain SOFFT methodology was used to design a flight control system for the F/A-18 aircraft. It is shown that SOFFT can be used to expand the operating regime and provide greater performance (flying/handling qualities) throughout the extended flight regime. This work was performed under the NASA SBIR program. ICS plans to market the software developed as a new module in its commercial CACSD software package: ACET.

  3. Stochastic time-dependent vehicle routing problem: Mathematical models and ant colony algorithm

    Directory of Open Access Journals (Sweden)

    Zhengyu Duan

    2015-11-01

    Full Text Available This article addresses the stochastic time-dependent vehicle routing problem. Two mathematical models, named the robust optimal schedule time model and the minimum expected schedule time model, are proposed for the stochastic time-dependent vehicle routing problem; both can guarantee delivery within the time windows of customers. The robust optimal schedule time model only requires the variation range of link travel time, which can be conveniently derived from historical traffic data. In addition, the robust optimal schedule time model, based on the robust optimization method, can be converted into a time-dependent vehicle routing problem. Moreover, an ant colony optimization algorithm is designed to solve the stochastic time-dependent vehicle routing problem. Owing to improvements in the initial solution and the transition probability, the ant colony optimization algorithm shows good convergence performance. Through computational instances and Monte Carlo simulation tests, the robust optimal schedule time model is shown to be better than the minimum expected schedule time model in computational efficiency and in coping with travel time fluctuations. Therefore, the robust optimal schedule time model is applicable in real road networks.

  4. Economic policy optimization based on both one stochastic model and the parametric control theory

    Science.gov (United States)

    Ashimov, Abdykappar; Borovskiy, Yuriy; Onalbekov, Mukhit

    2016-06-01

    A nonlinear dynamic stochastic general equilibrium model with financial frictions is developed to describe two interacting national economies in the environment of the rest of the world. Parameters of the nonlinear model are estimated by a Bayesian approach based on its log-linearization. The nonlinear model is verified by retroprognosis, by estimation of stability indicators of the mappings specified by the model, and by estimation of the degree of coincidence between the effects of internal and external shocks on macroeconomic indicators obtained from the estimated nonlinear model and from its log-linearization. On the basis of the nonlinear model, the parametric control problems of economic growth and volatility of macroeconomic indicators of Kazakhstan are formulated and solved for two exchange rate regimes (free floating and managed floating exchange rates).

  5. Adaptive grid based multi-objective Cauchy differential evolution for stochastic dynamic economic emission dispatch with wind power uncertainty.

    Science.gov (United States)

    Zhang, Huifeng; Lei, Xiaohui; Wang, Chao; Yue, Dong; Xie, Xiangpeng

    2017-01-01

    Since wind power is integrated into the thermal power operation system, dynamic economic emission dispatch (DEED) has become a new challenge due to its uncertain characteristics. This paper proposes an adaptive grid based multi-objective Cauchy differential evolution (AGB-MOCDE) for solving stochastic DEED with wind power uncertainty. To properly deal with wind power uncertainty, some scenarios are generated to simulate possible situations by dividing the uncertainty domain into different intervals; the probability of each interval can be calculated using the cumulative distribution function, and a stochastic DEED model can be formulated under different scenarios. To enhance optimization efficiency, a Cauchy mutation operation is utilized to improve differential evolution by adjusting the population diversity during the population evolution process, and an adaptive grid is constructed for retaining the diversity distribution of the Pareto front. In consideration of the large number of generated scenarios, a reduction mechanism is carried out to decrease the number of scenarios using covariance relationships, which can greatly decrease the computational complexity. Moreover, a constraint-handling technique is utilized to deal with the system load balance while considering transmission losses among thermal units and wind farms; all the constraint limits can be satisfied within the permitted accuracy. After the proposed method is simulated on three test systems, the obtained results reveal that, in comparison with other alternatives, the proposed AGB-MOCDE can optimize the DEED problem while handling all constraint limits, and the optimal scheme of stochastic DEED can decrease the conservatism of interval optimization, which can provide a more valuable optimal scheme for real-world applications.

  6. Optimal stochastic energy management of retailer based on selling price determination under smart grid environment in the presence of demand response program

    International Nuclear Information System (INIS)

    Nojavan, Sayyad; Zare, Kazem; Mohammadi-Ivatloo, Behnam

    2017-01-01

    Highlights: • Stochastic energy management of retailer under smart grid environment is proposed. • Optimal selling price is determined in the smart grid environment. • Fixed, time-of-use and real-time pricing are determined for selling to customers. • Charge/discharge of ESS is determined to increase the expected profit of retailer. • Demand response program is proposed to increase the expected profit of retailer. - Abstract: In this paper, bilateral contracting and selling price determination problems for an electricity retailer in the smart grid environment under uncertainties have been considered. Multiple energy procurement sources containing pool market (PM), bilateral contracts (BCs), distributed generation (DG) units, renewable energy sources (photovoltaic (PV) system and wind turbine (WT)) and energy storage system (ESS) as well as demand response program (DRP) as virtual generation unit are considered. The scenario-based stochastic framework is used for uncertainty modeling of pool market prices, client group demand and variable climate condition containing temperature, irradiation and wind speed. In the proposed model, the selling price is determined and compared by the retailer in the smart grid in three cases containing fixed pricing, time-of-use (TOU) pricing and real-time pricing (RTP). It is shown that the selling price determination based on RTP by the retailer leads to higher expected profit. Furthermore, demand response program (DRP) has been implemented to flatten the load profile to minimize the cost for end-user customers as well as increasing the retailer profit. To validate the proposed model, three case studies are used and the results are compared.

  7. Optimal harvesting policy of a stochastic two-species competitive model with Lévy noise in a polluted environment

    Science.gov (United States)

    Zhao, Yu; Yuan, Sanling

    2017-07-01

    As well known that the sudden environmental shocks and toxicant can affect the population dynamics of fish species, a mechanistic understanding of how sudden environmental change and toxicant influence the optimal harvesting policy requires development. This paper presents the optimal harvesting of a stochastic two-species competitive model with Lévy noise in a polluted environment, where the Lévy noise is used to describe the sudden climate change. Due to the discontinuity of the Lévy noise, the classical optimal harvesting methods based on the explicit solution of the corresponding Fokker-Planck equation are invalid. The object of this paper is to fill up this gap and establish the optimal harvesting policy. By using of aggregation and ergodic methods, the approximation of the optimal harvesting effort and maximum expectation of sustainable yields are obtained. Numerical simulations are carried out to support these theoretical results. Our analysis shows that the Lévy noise and the mean stress measure of toxicant in organism may affect the optimal harvesting policy significantly.

  8. Stochastic Optimization in The Power Management of Bottled Water Production Planning

    Science.gov (United States)

    Antoro, Budi; Nababan, Esther; Mawengkang, Herman

    2018-01-01

    This paper reviews a model developed to minimize production costs in bottled water production planning through stochastic optimization. Planning is a management activity aimed at achieving the goals that have been set, and every management level in an organization needs planning activities. The model built is a two-stage stochastic model that aims to minimize the cost of bottled water production while accounting for whether or not interruptions occur during the production process. The model was developed to minimize production cost, assuming that the availability of packaging raw materials is sufficient for each kind of bottle. The minimum cost for each kind of bottled water production is expressed as the expectation over production scenarios with associated probabilities. The uncertainty probabilities represent the number of production runs and the timing of power supply interruptions. This ensures that the number of interruptions that occur does not exceed the limit of the contract agreement the company has made with its power suppliers.

  9. The two-regime method for optimizing stochastic reaction-diffusion simulations

    KAUST Repository

    Flegg, M. B.

    2011-10-19

    Spatial organization and noise play an important role in molecular systems biology. In recent years, a number of software packages have been developed for stochastic spatio-temporal simulation, ranging from detailed molecular-based approaches to less detailed compartment-based simulations. Compartment-based approaches yield quick and accurate mesoscopic results, but lack the level of detail that is characteristic of the computationally intensive molecular-based models. Often microscopic detail is only required in a small region (e.g. close to the cell membrane). Currently, the best way to achieve microscopic detail is to use a resource-intensive simulation over the whole domain. We develop the two-regime method (TRM) in which a molecular-based algorithm is used where desired and a compartment-based approach is used elsewhere. We present easy-to-implement coupling conditions which ensure that the TRM results have the same accuracy as a detailed molecular-based model in the whole simulation domain. Therefore, the TRM combines strengths of previously developed stochastic reaction-diffusion software to efficiently explore the behaviour of biological models. Illustrative examples and the mathematical justification of the TRM are also presented.

  10. Stochastic optimization of energy hub operation with consideration of thermal energy market and demand response

    International Nuclear Information System (INIS)

    Vahid-Pakdel, M.J.; Nojavan, Sayyad; Mohammadi-ivatloo, B.; Zare, Kazem

    2017-01-01

    Highlights: • Studying heating market impact on energy hub operation considering price uncertainty. • Investigating impact of implementation of heat demand response on hub operation. • Presenting stochastic method to consider wind generation and price uncertainties. - Abstract: Multi carrier energy systems, or energy hubs, have provided more flexibility for energy management systems. On the other hand, due to the mutual impact of different energy carriers in energy hubs, energy management studies become more challenging. The initial patterns of energy demands, from the grids' point of view, can be modified by optimal scheduling of energy hubs. In this work, the optimal operation of a multi carrier energy system has been studied in the presence of a wind farm, electrical and thermal storage systems, electrical and thermal demand response programs, an electricity market and a thermal energy market. Stochastic programming is implemented for modeling the system uncertainties such as demands, market prices and wind speed. It is shown that adding a new source of heat energy for providing consumer demand through a market mechanism changes the optimal operation point of the multi carrier energy system. The presented mixed integer linear formulation of the problem has been solved by executing the CPLEX solver of the GAMS optimization software. Simulation results show that the hub’s operation cost is reduced by up to 4.8% by enabling the option of using the thermal energy market for meeting heat demand.

  11. Optimal Rules for Single Machine Scheduling with Stochastic Breakdowns

    Directory of Open Access Journals (Sweden)

    Jinwei Gu

    2014-01-01

    Full Text Available This paper studies the problem of scheduling a set of jobs on a single machine subject to stochastic breakdowns, where jobs have to be restarted if preemptions occur because of breakdowns. The breakdown process of the machine is independent of the jobs processed on the machine. The processing times required to complete the jobs are constants if no breakdown occurs. The machine uptimes are independently and identically distributed (i.i.d.) and follow a uniform distribution. It is proved that the Longest Processing Time first (LPT) rule minimizes the expected makespan. For the large-scale problem, it is also shown that the Shortest Processing Time first (SPT) rule is optimal for minimizing the expected total completion time of all jobs.
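    The two priority rules themselves are straightforward to state in code; the job processing times below are illustrative.

```python
def lpt_order(processing_times):
    """Longest Processing Time first: minimizes the expected makespan under
    the breakdown model described above."""
    return sorted(range(len(processing_times)), key=lambda j: -processing_times[j])

def spt_order(processing_times):
    """Shortest Processing Time first: minimizes the expected total completion time."""
    return sorted(range(len(processing_times)), key=lambda j: processing_times[j])

jobs = [7.0, 2.0, 5.0, 3.0]
print("LPT sequence:", lpt_order(jobs))
print("SPT sequence:", spt_order(jobs))
```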

  12. Stochastic models, estimation, and control

    CERN Document Server

    Maybeck, Peter S

    1982-01-01

    This volume builds upon the foundations set in Volumes 1 and 2. Chapter 13 introduces the basic concepts of stochastic control and dynamic programming as the fundamental means of synthesizing optimal stochastic control laws.

  13. Cloud computing-based energy optimization control framework for plug-in hybrid electric bus

    International Nuclear Information System (INIS)

    Yang, Chao; Li, Liang; You, Sixiong; Yan, Bingjie; Du, Xian

    2017-01-01

    Considering the complicated characteristics of traffic flow on a city bus route and the nonlinear vehicle dynamics, optimal energy management integrated with clustering and recognition of driving conditions for a plug-in hybrid electric bus is still a challenging problem. Motivated by this issue, this paper presents an innovative energy optimization control framework based on cloud computing for the plug-in hybrid electric bus. This framework, which includes an offline part and an online part, realizes driving-condition clustering offline and energy management online. In the offline part, utilizing the operating data transferred from a bus to the remote monitoring center, the K-means algorithm is adopted to cluster the driving conditions, and Markov probability transfer matrixes are then generated to predict the possible operating demand of the bus driver. In the online part, the current driving condition is identified in real time by a well-trained support vector machine, and Markov chains-based driving behaviors are selected accordingly. With the stochastic inputs, a stochastic receding horizon control method is adopted to obtain the optimized energy management of the hybrid powertrain. Simulations and a hardware-in-the-loop test are carried out on a real-world city bus route, and the results show that the presented strategy can greatly improve the vehicle fuel economy; as the traffic flow data feedback increases, the fuel consumption of every plug-in hybrid electric bus running on a specific bus route tends toward a stable minimum. - Highlights: • Cloud computing-based energy optimization control framework is proposed. • Driving cycles are clustered into 6 types by the K-means algorithm. • A support vector machine is employed to realize online recognition of the driving condition. • A stochastic receding horizon control-based energy management strategy is designed for the plug-in hybrid electric bus. • The proposed framework is verified by simulation and a hardware-in-the-loop test.
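    A minimal sketch of how the Markov probability transfer matrixes mentioned above could be estimated from recorded operating data; binning a speed trace into six discrete states and the synthetic trace are illustrative assumptions, since the paper builds the matrixes from K-means-clustered driving conditions.

```python
import numpy as np

def markov_transition_matrix(speed_trace, n_states=6):
    """Estimate a Markov transition matrix from a sampled speed trace by
    binning speeds into discrete states and counting state-to-state moves."""
    edges = np.linspace(min(speed_trace), max(speed_trace), n_states + 1)[1:-1]
    states = np.digitize(speed_trace, edges)
    counts = np.zeros((n_states, n_states))
    for s, s_next in zip(states[:-1], states[1:]):
        counts[s, s_next] += 1
    row_sums = counts.sum(axis=1, keepdims=True)
    row_sums[row_sums == 0] = 1.0        # avoid division by zero for unused states
    return counts / row_sums

trace = np.abs(np.cumsum(np.random.randn(1000)))  # stand-in for a recorded bus speed profile
print(markov_transition_matrix(trace).round(2))
```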

  14. International Diversification Versus Domestic Diversification: Mean-Variance Portfolio Optimization and Stochastic Dominance Approaches

    Directory of Open Access Journals (Sweden)

    Fathi Abid

    2014-05-01

    Full Text Available This paper applies the mean-variance portfolio optimization (PO) approach and the stochastic dominance (SD) test to examine preferences for international diversification versus domestic diversification from American investors’ viewpoints. Our PO results imply that the domestic diversification strategy dominates the international diversification strategy at a lower risk level, and the reverse is true at a higher risk level. Our SD analysis shows that there is no arbitrage opportunity between international and domestic stock markets; domestically diversified portfolios with smaller risk dominate internationally diversified portfolios with larger risk and vice versa; and at the same risk level, there is no difference between the domestically and internationally diversified portfolios. Nonetheless, we cannot find any domestically diversified portfolios that stochastically dominate all internationally diversified portfolios, but we find some internationally diversified portfolios with small risk that dominate all the domestically diversified portfolios.

  15. Stationary and related stochastic processes sample function properties and their applications

    CERN Document Server

    Cramér, Harald

    2004-01-01

    This graduate-level text offers a comprehensive account of the general theory of stationary processes, with special emphasis on the properties of sample functions. Assuming a familiarity with the basic features of modern probability theory, the text develops the foundations of the general theory of stochastic processes, examines processes with a continuous-time parameter, and applies the general theory to procedures key to the study of stationary processes. Additional topics include analytic properties of the sample functions and the problem of time distribution of the intersections between a

  16. Modeling and Optimization of the Multiobjective Stochastic Joint Replenishment and Delivery Problem under Supply Chain Environment

    Directory of Open Access Journals (Sweden)

    Lin Wang

    2013-01-01

    Full Text Available As a practical inventory and transportation problem, it is important to synthesize several objectives for the joint replenishment and delivery (JRD) decision. In this paper, a new multiobjective stochastic JRD (MSJRD) of the one-warehouse, n-retailer system, considering the balance of service level and total cost simultaneously, is proposed. The goal of this problem is to decide the reasonable replenishment interval, safety stock factor, and traveling routing. Secondly, two approaches are designed to handle this complex multi-objective optimization problem. The linear programming (LP) approach converts the multi-objective problem to a single-objective one, while a multi-objective evolutionary algorithm (MOEA) solves the multi-objective problem directly. Thirdly, three intelligent optimization algorithms, the differential evolution algorithm (DE), hybrid DE (HDE), and genetic algorithm (GA), are utilized in the LP-based and MOEA-based approaches. Results of the MSJRD with the LP-based and MOEA-based approaches are compared in a contrastive numerical example. To analyze the nondominated solutions of the MOEA, a metric is also used to measure the distribution of the last-generation solutions. Results show that HDE outperforms DE and GA whether LP or MOEA is adopted.

  17. Optimal reducibility of all W states equivalent under stochastic local operations and classical communication

    Energy Technology Data Exchange (ETDEWEB)

    Rana, Swapan; Parashar, Preeti [Physics and Applied Mathematics Unit, Indian Statistical Institute, 203 BT Road, Kolkata (India)

    2011-11-15

    We show that all multipartite pure states that are stochastic local operation and classical communication (SLOCC) equivalent to the N-qubit W state can be uniquely determined (among arbitrary states) from their bipartite marginals. We also prove that only (N-1) of the bipartite marginals are sufficient and that this is also the optimal number. Thus, contrary to the Greenberger-Horne-Zeilinger (GHZ) class, W-type states preserve their reducibility under SLOCC. We also study the optimal reducibility of some larger classes of states. The generic Dicke states |GD_N^l> are shown to be optimally determined by their (l+1)-partite marginals. The class of ''G'' states (superpositions of W and W̄) are shown to be optimally determined by just two (N-2)-partite marginals.

  18. Optimal updating magnitude in adaptive flat-distribution sampling.

    Science.gov (United States)

    Zhang, Cheng; Drake, Justin A; Ma, Jianpeng; Pettitt, B Montgomery

    2017-11-07

    We present a study on the optimization of the updating magnitude for a class of free energy methods based on flat-distribution sampling, including the Wang-Landau (WL) algorithm and metadynamics. These methods rely on adaptive construction of a bias potential that offsets the potential of mean force by histogram-based updates. The convergence of the bias potential can be improved by decreasing the updating magnitude with an optimal schedule. We show that while the asymptotically optimal schedule for the single-bin updating scheme (commonly used in the WL algorithm) is given by the known inverse-time formula, that for the Gaussian updating scheme (commonly used in metadynamics) is often more complex. We further show that the single-bin updating scheme is optimal for very long simulations, and it can be generalized to a class of bandpass updating schemes that are similarly optimal. These bandpass updating schemes target only a few long-range distribution modes and their optimal schedule is also given by the inverse-time formula. Constructed from orthogonal polynomials, the bandpass updating schemes generalize the WL and Langfeld-Lucini-Rago algorithms as an automatic parameter tuning scheme for umbrella sampling.
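    A minimal sketch of the inverse-time schedule discussed above for the single-bin (Wang-Landau-style) updating scheme; the initial magnitude f0, the bin count and the exact prefactor are illustrative assumptions and may differ from the paper's formulation.

```python
def wl_updating_magnitude(t, f0=1.0, n_bins=100):
    """Inverse-time schedule for single-bin updates: hold the initial
    magnitude f0 until the n_bins / t curve drops below it, then decay
    following the asymptotically optimal inverse-time formula."""
    return min(f0, n_bins / float(t))

# the updating magnitude stays at f0 early on, then follows the 1/t decay
for t in (10, 100, 10_000, 1_000_000):
    print(t, wl_updating_magnitude(t))
```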

  19. Delayed Stochastic Linear-Quadratic Control Problem and Related Applications

    Directory of Open Access Journals (Sweden)

    Li Chen

    2012-01-01

    stochastic differential equations (FBSDEs) with Itô’s stochastic delay equations as forward equations and anticipated backward stochastic differential equations as backward equations. In particular, we present the optimal feedback regulator for the time-delay system via a new type of Riccati equation and also apply it to a population optimal control problem.

  20. PERFORMANCE COMPARISON OF SCENARIO-GENERATION METHODS APPLIED TO A STOCHASTIC OPTIMIZATION ASSET-LIABILITY MANAGEMENT MODEL

    Directory of Open Access Journals (Sweden)

    Alan Delgado de Oliveira

    Full Text Available ABSTRACT In this paper, we provide an empirical discussion of the differences among some scenario tree-generation approaches for stochastic programming. We consider classical Monte Carlo sampling and Moment matching methods. Moreover, we test the Resampled average approximation, which is an adaptation of Monte Carlo sampling, and use Monte Carlo with a naive allocation strategy as the benchmark. We test the empirical effects of each approach on the stability of the problem objective function and of the initial portfolio allocation, using a multistage stochastic chance-constrained asset-liability management (ALM) model as the application. The Moment matching and Resampled average approximation are more stable than the other two strategies.

  1. Reflecting metallic metasurfaces designed with stochastic optimization as waveplates for manipulating light polarization

    Science.gov (United States)

    Haberko, Jakub; Wasylczyk, Piotr

    2018-03-01

    We demonstrate that a stochastic optimization algorithm with a properly chosen, weighted fitness function, with a global variation of parameters at each step, can be used to effectively design reflective polarizing optical elements. Two sub-wavelength metallic metasurfaces, corresponding to broadband half- and quarter-waveplates, are demonstrated, featuring a simple structure topology, a uniform metallic coating and a design suited to currently available microfabrication techniques, such as ion milling or 3D printing.

  2. Risk averse optimal operation of a virtual power plant using two stage stochastic programming

    International Nuclear Information System (INIS)

    Tajeddini, Mohammad Amin; Rahimi-Kian, Ashkan; Soroudi, Alireza

    2014-01-01

    A VPP (Virtual Power Plant) is defined as a cluster of energy conversion/storage units which are centrally operated in order to improve technical and economic performance. This paper addresses the optimal operation of a VPP considering the risk factors affecting its daily operation profits. The optimal operation is modelled in both the day-ahead and balancing markets as a two-stage stochastic mixed integer linear program in order to maximize the GenCo's (generation company's) expected profit. Furthermore, the CVaR (Conditional Value at Risk) is used as a risk measure in order to control the risk of low-profit scenarios. The uncertain parameters, including the PV power output, wind power output and day-ahead market prices, are modelled through scenarios. The proposed model is successfully applied to a real case study to show its applicability, and the results are presented and thoroughly discussed. - Highlights: • Virtual power plant modelling considering a set of energy generating and conversion units. • Uncertainty modelling using a two-stage stochastic programming technique. • Risk modelling using conditional value at risk. • Flexible operation of renewable energy resources. • Electricity price uncertainty in day-ahead energy markets
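    The trade-off between expected profit and CVaR that the model optimizes can be illustrated by evaluating both quantities on a set of profit scenarios. This is only a post hoc evaluation sketch with made-up scenario data; in the actual two-stage program CVaR is imposed with auxiliary variables and constraints rather than by sorting.

```python
def expected_profit_and_cvar(profits, probs, alpha=0.9):
    """Expected profit and Conditional Value at Risk (CVaR) of the profit
    distribution: CVaR is the expected profit over the worst (1 - alpha) tail,
    the quantity a risk-averse VPP operator trades off against expected profit."""
    expected = sum(p * pr for p, pr in zip(probs, profits))
    order = sorted(range(len(profits)), key=lambda s: profits[s])  # worst first
    tail_mass, tail_profit = 0.0, 0.0
    for s in order:
        take = min(probs[s], (1 - alpha) - tail_mass)
        if take <= 0:
            break
        tail_profit += take * profits[s]
        tail_mass += take
    return expected, tail_profit / (1 - alpha)

profits = [120.0, 80.0, -40.0, 60.0]   # per-scenario operation profit (illustrative)
probs = [0.4, 0.3, 0.1, 0.2]
print(expected_profit_and_cvar(profits, probs, alpha=0.9))
```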

  3. Stochastic Optical Reconstruction Microscopy (STORM).

    Science.gov (United States)

    Xu, Jianquan; Ma, Hongqiang; Liu, Yang

    2017-07-05

    Super-resolution (SR) fluorescence microscopy, a class of optical microscopy techniques with spatial resolution below the diffraction limit, has revolutionized the way we study biology, as recognized by the Nobel Prize in Chemistry in 2014. Stochastic optical reconstruction microscopy (STORM), a widely used SR technique, is based on the principle of single molecule localization. STORM routinely achieves a spatial resolution of 20 to 30 nm, a ten-fold improvement compared to conventional optical microscopy. Among all SR techniques, STORM offers a high spatial resolution with simple optical instrumentation and standard organic fluorescent dyes, but it is also prone to image artifacts and degraded image resolution due to improper sample preparation or imaging conditions. It requires careful optimization of all three aspects (sample preparation, image acquisition, and image reconstruction) to ensure a high-quality STORM image, which will be extensively discussed in this unit. © 2017 by John Wiley & Sons, Inc.

  4. A stochastic framework for clearing of reactive power market

    International Nuclear Information System (INIS)

    Amjady, N.; Rabiee, A.; Shayanfar, H.A.

    2010-01-01

    This paper presents a new stochastic framework for clearing of the day-ahead reactive power market. The uncertainty of generating units, in the form of system contingencies, is considered in the reactive power market-clearing procedure by a stochastic model in two steps. Monte-Carlo Simulation (MCS) is first used to generate random scenarios. Then, in the second step, the stochastic market-clearing procedure is implemented as a series of deterministic optimization problems (scenarios) including the non-contingent scenario and different post-contingency states. In each of these deterministic optimization problems, the objective function is the total payment function (TPF) of generators, which refers to the payment paid to the generators for their reactive power compensation. The effectiveness of the proposed model is examined on the IEEE 24-bus Reliability Test System (IEEE 24-bus RTS). (author)

  5. Optimal cross-sectional sampling for river modelling with bridges: An information theory-based method

    Energy Technology Data Exchange (ETDEWEB)

    Ridolfi, E.; Napolitano, F., E-mail: francesco.napolitano@uniroma1.it [Sapienza Università di Roma, Dipartimento di Ingegneria Civile, Edile e Ambientale (Italy); Alfonso, L. [Hydroinformatics Chair Group, UNESCO-IHE, Delft (Netherlands); Di Baldassarre, G. [Department of Earth Sciences, Program for Air, Water and Landscape Sciences, Uppsala University (Sweden)

    2016-06-08

    The description of river topography has a crucial role in accurate one-dimensional (1D) hydraulic modelling. Specifically, cross-sectional data define the riverbed elevation, the flood-prone area, and thus, the hydraulic behavior of the river. Here, the problem of the optimal cross-sectional spacing is solved through an information theory-based concept. The optimal subset of locations is the one with the maximum information content and the minimum amount of redundancy. The original contribution is the introduction of a methodology to sample river cross sections in the presence of bridges. The approach is tested on the Grosseto River (IT) and is compared to existing guidelines. The results show that the information theory-based approach can support traditional methods to estimate rivers’ cross-sectional spacing.

  6. Optimal cross-sectional sampling for river modelling with bridges: An information theory-based method

    International Nuclear Information System (INIS)

    Ridolfi, E.; Napolitano, F.; Alfonso, L.; Di Baldassarre, G.

    2016-01-01

    The description of river topography has a crucial role in accurate one-dimensional (1D) hydraulic modelling. Specifically, cross-sectional data define the riverbed elevation, the flood-prone area, and thus, the hydraulic behavior of the river. Here, the problem of the optimal cross-sectional spacing is solved through an information theory-based concept. The optimal subset of locations is the one with the maximum information content and the minimum amount of redundancy. The original contribution is the introduction of a methodology to sample river cross sections in the presence of bridges. The approach is tested on the Grosseto River (IT) and is compared to existing guidelines. The results show that the information theory-based approach can support traditional methods to estimate rivers’ cross-sectional spacing.

  7. The Optimal Strategy to Research Pension Funds in China Based on the Loss Function

    OpenAIRE

    Gao, Jian-wei; Guo, Hong-zhen; Ye, Yan-cheng

    2007-01-01

    Based on the theory of actuarial present value, a pension fund investment goal can be formulated as an objective function. The mean-variance model is extended by defining the objective loss function. Furthermore, using the theory of stochastic optimal control, an optimal investment model is established under the minimum expectation of loss function. In the light of the Hamilton-Jacobi-Bellman (HJB) equation, the analytic solution of the optimal investment strategy problem is derived.

  8. Optimal exploitation of a renewable resource with stochastic nonconvex technology: An analysis of extinction and survival

    International Nuclear Information System (INIS)

    Mitra, Tapan; Roy, Santanu

    1992-11-01

    This paper analyzes the possibilities of extinction and survival of a renewable resource whose technology of reproduction is both stochastic and nonconvex. In particular, the production function is subject to random shocks over time and is allowed to be nonconcave, though it eventually exhibits bounded growth. The existence of a minimum biomass below which the resource can only decrease, is allowed for. Society harvests a part of the current stock every time period over an infinite horizon so as to maximize the expected discounted sum of one period social utilities from the harvested resource. The social utility function is strictly concave. The stochastic process of optimal stocks generated by the optimal stationary policy is analyzed. The nonconvexity in the optimization problem implies that the optimal policy functions are not 'well behaved'. The behaviour of the probability of extinction (and the expected time to extinction), as a function of initial stock, is characterized for various possible configurations of the optimal policy and the technology. Sufficient conditions on the utility and production functions and the rate of impatience, are specified in order to ensure survival of the resource with probability one from some stock level (the minimum safe standard of conservation). Sufficient conditions for almost sure extinction and almost sure survival from all stock levels are also specified. These conditions are related to the corresponding conditions derived in models with deterministic and/or convex technology. 4 figs., 29 refs

  9. Optimal exploitation of a renewable resource with stochastic nonconvex technology: An analysis of extinction and survival

    Energy Technology Data Exchange (ETDEWEB)

    Mitra, Tapan [Department of Economics, Cornell University, Ithaca, NY (United States); Roy, Santanu [Econometric Institute, Erasmus University, Rotterdam (Netherlands)

    1992-11-01

    This paper analyzes the possibilities of extinction and survival of a renewable resource whose technology of reproduction is both stochastic and nonconvex. In particular, the production function is subject to random shocks over time and is allowed to be nonconcave, though it eventually exhibits bounded growth. The existence of a minimum biomass below which the resource can only decrease, is allowed for. Society harvests a part of the current stock every time period over an infinite horizon so as to maximize the expected discounted sum of one period social utilities from the harvested resource. The social utility function is strictly concave. The stochastic process of optimal stocks generated by the optimal stationary policy is analyzed. The nonconvexity in the optimization problem implies that the optimal policy functions are not `well behaved`. The behaviour of the probability of extinction (and the expected time to extinction), as a function of initial stock, is characterized for various possible configurations of the optimal policy and the technology. Sufficient conditions on the utility and production functions and the rate of impatience, are specified in order to ensure survival of the resource with probability one from some stock level (the minimum safe standard of conservation). Sufficient conditions for almost sure extinction and almost sure survival from all stock levels are also specified. These conditions are related to the corresponding conditions derived in models with deterministic and/or convex technology. 4 figs., 29 refs.

  10. CMOS-based Stochastically Spiking Neural Network for Optimization under Uncertainties

    Science.gov (United States)

    2017-03-01

    [Only figure-related fragments of this abstract survive extraction.] Samples of the cost function/constraint variables are generated by applying the inverse transform to the CDF: for a uniformly distributed random number u in [0, 1], F^-1(u) yields a random sample of the variable x. The recovered captions describe (Fig. 5) the inverse transform on the CDF used to extract a random sample of the variable x and a histogram of the resulting samples, and (Fig. 6) a successive-approximation (SA) circuit for inverse-transform evaluation on the CDF together with its transients: a random number generator (RNG) produces a sample value u, and the SA circuit evaluates F(x_in).
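    The record above survives only as figure-caption fragments, but the sampling scheme it describes is ordinary inverse-transform sampling. A minimal software sketch follows; the exponential distribution and its rate parameter are illustrative assumptions, not taken from the source, which implements the transform in hardware with a successive-approximation circuit.

```python
import math
import random

def sample_by_inverse_transform(inverse_cdf, n=5):
    """Draw samples of a variable x by evaluating F^-1(u) at uniform u in [0, 1],
    mirroring the RNG + successive-approximation scheme described above."""
    return [inverse_cdf(random.random()) for _ in range(n)]

# Example: exponential distribution with rate lam, F^-1(u) = -ln(1 - u) / lam
lam = 2.0
print(sample_by_inverse_transform(lambda u: -math.log(1.0 - u) / lam))
```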

  11. Stochastic energy management of renewable micro-grids in the correlated environment using unscented transformation

    International Nuclear Information System (INIS)

    Tabatabaee, Sajad; Mortazavi, Seyed Saeedallah; Niknam, Taher

    2016-01-01

    This paper addresses the optimal stochastic scheduling of the distributed generation units in a micro-grid. To this end, it introduces a new, sufficient stochastic framework to model the correlated uncertainties in the micro-grid, which includes different types of RESs such as photovoltaics, wind turbines, a micro-turbine and a fuel cell, as well as a battery as the storage device. The proposed stochastic method makes use of unscented transforms to model correlated uncertain parameters. The ability of the unscented transform method to model correlated uncertain variables is particularly appealing in the context of power systems, wherein noticeable inherent correlation exists. Due to the highly complex nature of the problem, a new optimization method based on the harmony search algorithm along with an intelligent modification method is devised to solve the proposed optimization problem efficiently. The proposed optimization algorithm is equipped with powerful search mechanisms that make it suitable for solving both discrete and continuous problems. In comparison with the original harmony search algorithm, the proposed modified optimization algorithm has few setting parameters. The new modified harmony search algorithm provides a proper balance between the local and global searches. The feasibility and satisfactory performance of the proposed method are examined on two typical grid-connected MGs. - Highlights: • Introducing a new artificial optimization algorithm based on the HS evolutionary technique. • Introducing a new stochastic framework based on the unscented transform to model the uncertainties of the problem. • Proposing a new modification method for HS to improve its total search ability.

  12. Optimization of Decision-Making for Spatial Sampling in the North China Plain, Based on Remote-Sensing a Priori Knowledge

    Science.gov (United States)

    Feng, J.; Bai, L.; Liu, S.; Su, X.; Hu, H.

    2012-07-01

    In this paper, MODIS remote sensing data, characterized by low cost, high timeliness and moderate/low spatial resolution, were first used in the North China Plain (NCP) study region to carry out mixed-pixel spectral decomposition and extract a useful regionalized indicator parameter (RIP) (i.e., the ratio, that is, fraction/percentage, of winter wheat planting area in each pixel) as a regionalized indicator variable (RIV) for spatial sampling from the initially selected indicators. Then, the RIV values were spatially analyzed, and the spatial structure characteristics (i.e., spatial correlation and variation) of the NCP were obtained and further processed into scale-fitting, valid a priori knowledge or information for spatial sampling. Subsequently, founded upon the idea of rationally integrating probability-based and model-based sampling techniques and effectively utilizing the obtained a priori knowledge or information, spatial sampling models and design schemes and their optimization and optimal selection were developed, providing a scientific basis for improving and optimizing the existing spatial sampling schemes of large-scale cropland remote sensing monitoring. Additionally, an adaptive analysis and decision strategy enabled the optimal local spatial prediction and the gridded system of extrapolation results to implement an adaptive reporting pattern of spatial sampling in accordance with report-covering units, so as to satisfy the actual needs of sampling surveys.

  13. Optimal design and planning of glycerol-based biorefinery supply chains under uncertainty

    DEFF Research Database (Denmark)

    Loureiro da Costa Lira Gargalo, Carina; Carvalho, Ana; Gernaey, Krist V.

    2017-01-01

    The optimal design and planning of glycerol-based biorefinery supply chains is critical for the development and implementation of this concept in a sustainable manner. To achieve this, a decision-making framework is proposed in this work, to holistically optimize the design and planning......-echelon mixed integer linear programming problem is proposed based upon a previous model, GlyThink. In the new formulation, market uncertainties are taken into account at the strategic planning level. The robustness of the supply chain structures is analyzed based on statistical data provided...... by the implementation of the Monte Carlo method, where a deterministic optimization problem is solved for each scenario. Furthermore, the solution of the stochastic multi-objective optimization model points to the Pareto set of trade-off solutions obtained when maximizing the NPV and minimizing environmental......

  14. On the use of stochastic approximation Monte Carlo for Monte Carlo integration

    KAUST Repository

    Liang, Faming

    2009-01-01

    The stochastic approximation Monte Carlo (SAMC) algorithm has recently been proposed as a dynamic optimization algorithm in the literature. In this paper, we show in theory that the samples generated by SAMC can be used for Monte Carlo integration

  15. The role of stochasticity in an information-optimal neural population code

    International Nuclear Information System (INIS)

    Stocks, N G; Nikitin, A P; McDonnell, M D; Morse, R P

    2009-01-01

    In this paper we consider the optimisation of Shannon mutual information (MI) in the context of two model neural systems. The first is a stochastic pooling network (population) of McCulloch-Pitts (MP) type neurons (logical threshold units) subject to stochastic forcing; the second is (in a rate coding paradigm) a population of neurons that each display Poisson statistics (the so-called 'Poisson neuron'). The mutual information is optimised as a function of a parameter that characterises the 'noise level': in the MP array this parameter is the standard deviation of the noise; in the population of Poisson neurons it is the window length used to determine the spike count. In both systems we find that the emergent neural architecture and, hence, the code that maximises the MI is strongly influenced by the noise level. Low noise levels lead to a heterogeneous distribution of neural parameters (diversity), whereas medium to high noise levels result in the clustering of neural parameters into distinct groups that can be interpreted as subpopulations. In both cases the number of subpopulations increases with a decrease in noise level. Our results suggest that subpopulations are a generic feature of an information-optimal neural population.

  16. Copula-based modeling of stochastic wind power in Europe and implications for the Swiss power grid

    International Nuclear Information System (INIS)

    Hagspiel, Simeon; Papaemannouil, Antonis; Schmid, Matthias; Andersson, Göran

    2012-01-01

    Highlights: ► We model stochastic wind power using copula theory. ► Stochastic wind power is integrated in a European system adequacy evaluation. ► The Swiss power grid is put at risk by further integrating wind power in Europe. ► System elements located at or close to Swiss borders are affected the most. ► A criticality indicator allows prioritizing expansion plans on a probabilistic basis. -- Abstract: Large scale integration of wind energy poses new challenges to the European power system due to its stochastic nature and often remote location. In this paper a multivariate uncertainty analysis problem is formulated for the integration of stochastic wind energy in the European grid. By applying copula theory a synthetic set of data is generated from scarce wind speed reanalysis data in order to achieve the increased sample size for the subsequent Monte Carlo simulation. In the presented case study, European wind power samples are generated from the modeled stochastic process. Under the precondition of a modeled perfect market environment, wind power impacts dispatch decisions and therefore leads to alterations in power balances. Stochastic power balances are implemented in a detailed model of the European electricity network, based on the generated samples. Finally, a Monte Carlo method is used to determine power flows and contingencies in the system. An indicator is elaborated in order to analyze risk of overloading and to prioritize necessary grid reinforcements. Implications for the Swiss power grid are investigated in detail, revealing that the current system is significantly put at risk in certain areas by the further integration of wind power in Europe. It is the first time that the results of a probabilistic model for wind energy are further deployed within a power system analysis of the interconnected European grid. The method presented in this paper allows to account for stochastic wind energy in a load flow analysis and to evaluate
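    A common way to realize the copula-based sample generation described above is a Gaussian copula with parametric (here Weibull) margins. The sketch below uses NumPy and SciPy; the correlation matrix, the Weibull parameters and the choice of a Gaussian copula are illustrative assumptions, since the paper fits copulas to reanalysis wind-speed data.

```python
import numpy as np
from scipy.stats import norm, weibull_min

def gaussian_copula_wind_samples(corr, weibull_params, n_samples=1000, seed=0):
    """Draw correlated wind-speed samples for several sites: sample a
    multivariate normal with the target correlation, push each margin through
    the normal CDF to get uniforms, then through each site's Weibull inverse CDF."""
    rng = np.random.default_rng(seed)
    z = rng.multivariate_normal(np.zeros(corr.shape[0]), corr, size=n_samples)
    u = norm.cdf(z)
    return np.column_stack([
        weibull_min.ppf(u[:, i], k, scale=c) for i, (k, c) in enumerate(weibull_params)
    ])

corr = np.array([[1.0, 0.6], [0.6, 1.0]])          # correlation between two sites
samples = gaussian_copula_wind_samples(corr, [(2.0, 8.0), (2.2, 7.0)])
print(samples.mean(axis=0))
```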

  17. Energy Management Strategy in Consideration of Battery Health for PHEV via Stochastic Control and Particle Swarm Optimization Algorithm

    Directory of Open Access Journals (Sweden)

    Yuying Wang

    2017-11-01

    Full Text Available This paper presents an energy management strategy for plug-in hybrid electric vehicles (PHEVs) that not only tries to minimize energy consumption, but also considers battery health. First, a battery model that can be applied to energy management optimization is given. In this model, battery health damage can be estimated for different states of charge (SOC) and temperatures of the battery pack. Then, because limiting battery health degradation inevitably increases energy consumption, a Pareto energy management optimization problem is formed. This multi-objective optimal control problem is solved numerically by using stochastic dynamic programming (SDP) and particle swarm optimization (PSO), satisfying the vehicle power demand while trading off energy consumption against battery health. The optimization solution is obtained offline by utilizing real historical traffic data and formed as mappings on the system operating states so that it can be implemented online in actual driving conditions. Finally, simulation results carried out on the GT-SUITE-based PHEV test platform demonstrate that the proposed multi-objective optimal control strategy effectively yields benefits.

  18. Material discovery by combining stochastic surface walking global optimization with a neural network.

    Science.gov (United States)

    Huang, Si-Da; Shang, Cheng; Zhang, Xiao-Jie; Liu, Zhi-Pan

    2017-09-01

    While the underlying potential energy surface (PES) determines the structure and other properties of a material, it has been frustrating to predict new materials from theory even with the advent of supercomputing facilities. The accuracy of the PES and the efficiency of PES sampling are two major bottlenecks, not least because of the great complexity of the material PES. This work introduces a "Global-to-Global" approach for material discovery by combining for the first time a global optimization method with neural network (NN) techniques. The novel global optimization method, named the stochastic surface walking (SSW) method, is carried out massively in parallel to generate a global training data set, the fitting of which by an atom-centered NN produces a multi-dimensional global PES; the subsequent SSW exploration of large systems with the analytical NN PES can provide key information on the thermodynamic and kinetic stability of unknown phases identified from global PESs. We describe in detail the current implementation of the SSW-NN method with a particular focus on the size of the global data set and the simultaneous energy/force/stress NN training procedure. An important functional material, TiO2, is utilized as an example to demonstrate the automated global data set generation, the improved NN training procedure and the application in material discovery. Two new TiO2 porous crystal structures are identified, which have thermodynamic stability similar to the common TiO2 rutile phase, and the kinetic stability of one of them is further proved by SSW pathway sampling. As a general tool for material simulation, the SSW-NN method provides an efficient and predictive platform for large-scale computational material screening.

  19. Synchronization of a Class of Memristive Stochastic Bidirectional Associative Memory Neural Networks with Mixed Time-Varying Delays via Sampled-Data Control

    Directory of Open Access Journals (Sweden)

    Manman Yuan

    2018-01-01

    Full Text Available The paper addresses the issue of synchronization of memristive bidirectional associative memory neural networks (MBAMNNs) with mixed time-varying delays and stochastic perturbation via a sampled-data controller. First, we propose a new model of MBAMNNs with mixed time-varying delays. In the proposed approach, the mixed delays include time-varying distributed delays and discrete delays. Second, we design a new sampled-data control method for the stochastic MBAMNNs. Traditional control methods lack the capability of reflecting variable synaptic weights; here, the methods are carefully designed so that the synchronization processes are suitable for the features of the memristor. Third, sufficient criteria guaranteeing the synchronization of the systems are derived based on the drive-response concept. Finally, the effectiveness of the proposed mechanism is validated with numerical experiments.

  20. On the efficiency of chaos optimization algorithms for global optimization

    International Nuclear Information System (INIS)

    Yang Dixiong; Li Gang; Cheng Gengdong

    2007-01-01

    Chaos optimization algorithms, as a novel method of global optimization, have attracted much attention; these have all been based on the Logistic map. However, we have noticed that the probability density function of the chaotic sequences derived from the Logistic map is of Chebyshev type, which may considerably affect the global searching capacity and computational efficiency of chaos optimization algorithms. Considering the statistical properties of the chaotic sequences of the Logistic map and the Kent map, an improved hybrid chaos-BFGS optimization algorithm and a Kent map based hybrid chaos-BFGS algorithm are proposed. Five typical nonlinear functions with multimodal characteristics are tested to compare the performance of five hybrid optimization algorithms: the conventional Logistic map based chaos-BFGS algorithm, the improved Logistic map based chaos-BFGS algorithm, the Kent map based chaos-BFGS algorithm, a Monte Carlo-BFGS algorithm, and a mesh-BFGS algorithm. The computational performance of the five algorithms is compared, and the numerical results make us question the high efficiency of the chaos optimization algorithms claimed in some references. It is concluded that the efficiency of the hybrid optimization algorithms is influenced by the statistical properties of the chaotic/stochastic sequences generated from chaotic/stochastic algorithms, and by the location of the global optimum of the nonlinear functions. In addition, it is inappropriate to advocate the high efficiency of global optimization algorithms depending only on a few numerical examples of low-dimensional functions.
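    The density difference that the abstract attributes to the two maps is easy to reproduce: the sketch below iterates both maps and counts how many iterates fall in the middle of the unit interval (the initial value and the Kent parameter are illustrative assumptions).

```python
def logistic_map(x):
    return 4.0 * x * (1.0 - x)

def kent_map(x, a=0.7):
    return x / a if x < a else (1.0 - x) / (1.0 - a)

def chaotic_sequence(step, x0=0.31, n=10000):
    seq, x = [], x0
    for _ in range(n):
        x = step(x)
        seq.append(x)
    return seq

# The Logistic iterates pile up near 0 and 1 (Chebyshev-type density, ~1/3 in
# the middle half), whereas the Kent map is roughly uniform on (0, 1) (~1/2).
for name, step in (("logistic", logistic_map), ("kent", kent_map)):
    seq = chaotic_sequence(step)
    in_middle = sum(0.25 <= v <= 0.75 for v in seq) / len(seq)
    print(f"{name}: fraction of iterates in [0.25, 0.75] = {in_middle:.2f}")
```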

  1. Stochastic Generalized Method of Moments

    KAUST Repository

    Yin, Guosheng; Ma, Yanyuan; Liang, Faming; Yuan, Ying

    2011-01-01

    The generalized method of moments (GMM) is a very popular estimation and inference procedure based on moment conditions. When likelihood-based methods are difficult to implement, one can often derive various moment conditions and construct the GMM objective function. However, minimization of the objective function in the GMM may be challenging, especially over a large parameter space. Due to the special structure of the GMM, we propose a new sampling-based algorithm, the stochastic GMM sampler, which replaces the multivariate minimization problem by a series of conditional sampling procedures. We develop the theoretical properties of the proposed iterative Monte Carlo method, and demonstrate its superior performance over other GMM estimation procedures in simulation studies. As an illustration, we apply the stochastic GMM sampler to a Medfly life longevity study. Supplemental materials for the article are available online. © 2011 American Statistical Association.

  2. Stochastic Generalized Method of Moments

    KAUST Repository

    Yin, Guosheng

    2011-08-16

    The generalized method of moments (GMM) is a very popular estimation and inference procedure based on moment conditions. When likelihood-based methods are difficult to implement, one can often derive various moment conditions and construct the GMM objective function. However, minimization of the objective function in the GMM may be challenging, especially over a large parameter space. Due to the special structure of the GMM, we propose a new sampling-based algorithm, the stochastic GMM sampler, which replaces the multivariate minimization problem by a series of conditional sampling procedures. We develop the theoretical properties of the proposed iterative Monte Carlo method, and demonstrate its superior performance over other GMM estimation procedures in simulation studies. As an illustration, we apply the stochastic GMM sampler to a Medfly life longevity study. Supplemental materials for the article are available online. © 2011 American Statistical Association.

  3. The optimal sampling of outsourcing product

    International Nuclear Information System (INIS)

    Yang Chao; Pei Jiacheng

    2014-01-01

    In order to improve quality and reduce cost, c = 0 sampling has been introduced for the inspection of outsourced product. According to the current quality level (p = 0.4%), we confirmed the optimal sampling plan, namely: Ac = 0; if N ≤ 3000, n = 55; if 3001 ≤ N ≤ 10000, n = 86; if N ≥ 10001, n = 108. Through analysis of the OC curve, we conclude that when N ≤ 3000, the ability of the optimal sampling plan to protect product quality is stronger than that of the current sampling plan. For the same 'consumer risk', the product quality under the optimal sampling plan is superior to that under the current plan. (authors)
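    For a c = 0 plan the operating-characteristic curve has a closed form under the binomial (large-lot) approximation; the sketch below evaluates it at the quoted quality level and sample sizes. The binomial approximation is an assumption: the paper's OC curves may be computed hypergeometrically for finite lots.

```python
def accept_probability(p, n):
    """Operating-characteristic value of a c = 0 (Ac = 0) plan: the lot is
    accepted only if all n sampled items conform, so P(accept) = (1 - p)^n."""
    return (1.0 - p) ** n

for n in (55, 86, 108):          # the three sample sizes quoted above
    print(n, round(accept_probability(0.004, n), 3))
```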

  4. β-NMR sample optimization

    CERN Document Server

    Zakoucka, Eva

    2013-01-01

    During my summer student programme I was working on sample optimization for a new β-NMR project at the ISOLDE facility. The β-NMR technique is well-established in solid-state physics and just recently it is being introduced for applications in biochemistry and life sciences. The β-NMR collaboration will be applying for beam time to the INTC committee in September for three nuclei: Cu, Zn and Mg. Sample optimization for Mg was already performed last year during the summer student programme. Therefore sample optimization for Cu and Zn had to be completed as well for the project proposal. My part in the project was to perform thorough literature research on techniques studying Cu and Zn complexes in native conditions, search for relevant binding candidates for Cu and Zn applicable for β-NMR and eventually evaluate selected binding candidates using UV-VIS spectrometry.

  5. Boosting Lyα and He II λ1640 Line Fluxes from Population III Galaxies: Stochastic IMF Sampling and Departures from Case-B

    Science.gov (United States)

    Mas-Ribas, Lluís; Dijkstra, Mark; Forero-Romero, Jaime E.

    2016-12-01

    We revisit calculations of nebular hydrogen Lyα and He II λ1640 line strengths for Population III (Pop III) galaxies, undergoing continuous, and bursts of, star formation. We focus on initial mass functions (IMFs) motivated by recent theoretical studies, which generally span a lower range of stellar masses than earlier works. We also account for case-B departures and the stochastic sampling of the IMF. In agreement with previous work, we find that departures from case-B can enhance the Lyα flux by a factor of a few, but we argue that this enhancement is driven mainly by collisional excitation and ionization, and not by photoionization from the n = 2 state of atomic hydrogen. The increased sensitivity of the Lyα flux to the high-energy end of the galaxy spectrum makes it more subject to stochastic sampling of the IMF. The latter introduces a dispersion in the predicted nebular line fluxes around the deterministic value by as much as a factor of ˜4. In contrast, the stochastic sampling of the IMF has less impact on the emerging Lyman-Werner photon flux. When case-B departures and stochasticity effects are combined, nebular line emission from Pop III galaxies can be up to one order of magnitude brighter than predicted by “standard” calculations that do not include these effects. This enhances the prospects for detection with future facilities such as the James Webb Space Telescope and large, ground-based telescopes.
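    Stochastic sampling of the IMF, as invoked above, can be sketched as drawing stellar masses from a power-law IMF until a target burst mass is reached; the Salpeter-like slope, the mass limits and the cluster mass below are illustrative assumptions rather than the Pop III IMFs used in the paper.

```python
import numpy as np

def sample_imf(total_mass, m_min=1.0, m_max=100.0, slope=-2.35, seed=0):
    """Stochastically sample a power-law IMF (dN/dm ~ m**slope) by drawing
    stellar masses via inverse-transform sampling until the target total
    stellar mass of the burst is reached."""
    rng = np.random.default_rng(seed)
    a = slope + 1.0
    masses, m_sum = [], 0.0
    while m_sum < total_mass:
        u = rng.uniform()
        m = (m_min**a + u * (m_max**a - m_min**a)) ** (1.0 / a)
        masses.append(m)
        m_sum += m
    return np.array(masses)

# Small bursts sampled this way scatter strongly in their most massive star,
# which drives the dispersion in the ionizing (hence nebular line) output.
for seed in range(3):
    stars = sample_imf(1.0e3, seed=seed)
    print(f"{stars.size} stars, most massive = {stars.max():.1f} Msun")
```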

  6. The Optimal Strategy to Research Pension Funds in China Based on the Loss Function

    Directory of Open Access Journals (Sweden)

    Jian-wei Gao

    2007-10-01

    Full Text Available Based on the theory of actuarial present value, a pension fund investment goal can be formulated as an objective function. The mean-variance model is extended by defining the objective loss function. Furthermore, using the theory of stochastic optimal control, an optimal investment model is established under the minimum expectation of the loss function. In the light of the Hamilton-Jacobi-Bellman (HJB) equation, the analytic solution of the optimal investment strategy problem is derived.

  7. Modeling stochastic frontier based on vine copulas

    Science.gov (United States)

    Constantino, Michel; Candido, Osvaldo; Tabak, Benjamin M.; da Costa, Reginaldo Brito

    2017-11-01

    This article models a production function and analyzes the technical efficiency of listed companies in the United States, Germany and England between 2005 and 2012 based on the vine copula approach. Traditional estimates of the stochastic frontier assume that data is multivariate normally distributed and there is no source of asymmetry. The proposed method based on vine copulas allow us to explore different types of asymmetry and multivariate distribution. Using data on product, capital and labor, we measure the relative efficiency of the vine production function and estimate the coefficient used in the stochastic frontier literature for comparison purposes. This production vine copula predicts the value added by firms with given capital and labor in a probabilistic way. It thereby stands in sharp contrast to the production function, where the output of firms is completely deterministic. The results show that, on average, S&P500 companies are more efficient than companies listed in England and Germany, which presented similar average efficiency coefficients. For comparative purposes, the traditional stochastic frontier was estimated and the results showed discrepancies between the coefficients obtained by the application of the two methods, traditional and frontier-vine, opening new paths of non-linear research.

  8. A novel two-stage stochastic programming model for uncertainty characterization in short-term optimal strategy for a distribution company

    International Nuclear Information System (INIS)

    Ahmadi, Abdollah; Charwand, Mansour; Siano, Pierluigi; Nezhad, Ali Esmaeel; Sarno, Debora; Gitizadeh, Mohsen; Raeisi, Fatima

    2016-01-01

    In order to supply the demands of end users in a competitive market, a distribution company purchases energy from the wholesale market, while further options are available if it owns distributed generation units and interruptible loads. In this regard, this study presents a two-stage stochastic programming model of a distribution company's energy acquisition, used to manage the involvement of different electric energy resources characterized by uncertainties at minimum cost. In particular, the distribution company's operations planning over a day-ahead horizon is modeled as a stochastic mathematical optimization with the objective of minimizing costs. By this, distribution company decisions on grid purchase, owned distributed generation units and interruptible load scheduling are determined. Then, these decisions are considered as boundary constraints in a second step, which deals with the distribution company's operations in the hour-ahead market with the objective of minimizing the short-term cost. The uncertainties in spot market prices and wind speed are modeled by means of probability distribution functions of their forecast errors, and the roulette wheel mechanism and lattice Monte Carlo simulation are used to generate scenarios. Numerical results show the capability of the proposed method. - Highlights: • Proposing a new stochastic two-stage operations framework in retail competitive markets. • Proposing a mixed integer non-linear stochastic program. • Employing the roulette wheel mechanism and lattice Monte Carlo simulation.
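    A minimal sketch of the roulette wheel mechanism used for scenario generation; the discretized forecast-error levels and their probabilities are illustrative assumptions, and the paper combines this mechanism with lattice Monte Carlo simulation.

```python
import random

def roulette_wheel(outcomes, probs, n_draws, seed=42):
    """Roulette-wheel mechanism: each outcome occupies a slice of the wheel
    proportional to its probability; a uniform draw picks the slice it lands in."""
    random.seed(seed)
    cumulative, acc = [], 0.0
    for p in probs:
        acc += p
        cumulative.append(acc)
    draws = []
    for _ in range(n_draws):
        r = random.random() * acc
        draws.append(next(o for o, c in zip(outcomes, cumulative) if r <= c))
    return draws

# discretized price-forecast-error levels and their probabilities (illustrative)
levels = [-0.10, -0.05, 0.0, 0.05, 0.10]
probs = [0.1, 0.2, 0.4, 0.2, 0.1]
print(roulette_wheel(levels, probs, n_draws=8))
```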

  9. Stochastic Optimization of Supply Chain Risk Measures –a Methodology for Improving Supply Security of Subsidized Fuel Oil in Indonesia

    OpenAIRE

    Adinda Yuanita; Andi Noorsaman Sommeng; Anondho Wijonarko

    2015-01-01

    Monte Carlo simulation-based methods for stochastic optimization of risk measures are required to solve complex problems in the supply security of subsidized fuel oil in Indonesia. In order to overcome constraints in the distribution of subsidized fuel in Indonesia, which has the fourth largest population in the world—more than 250,000,000 people with 66.5% of productive population, and has more than 17,000 islands with its population centered around the nation's capital only—it is necessary to have a...

  10. Estimating Stochastic Volatility Models using Prediction-based Estimating Functions

    DEFF Research Database (Denmark)

    Lunde, Asger; Brix, Anne Floor

    In this paper prediction-based estimating functions (PBEFs), introduced in Sørensen (2000), are reviewed and PBEFs for the Heston (1993) stochastic volatility model are derived. The finite sample performance of the PBEF based estimator is investigated in a Monte Carlo study, and compared to the performance of the GMM estimator based on conditional moments of integrated volatility from Bollerslev and Zhou (2002). The case where the observed log-price process is contaminated by i.i.d. market microstructure (MMS) noise is also investigated. First, the impact of MMS noise on the parameter estimates from ... to correctly account for the noise are investigated. Our Monte Carlo study shows that the estimator based on PBEFs outperforms the GMM estimator, both in the setting with and without MMS noise. Finally, an empirical application investigates the possible challenges and general performance of applying the PBEF...

  11. Trajectory averaging for stochastic approximation MCMC algorithms

    KAUST Repository

    Liang, Faming

    2010-10-01

    The subject of stochastic approximation was founded by Robbins and Monro [Ann. Math. Statist. 22 (1951) 400-407]. After five decades of continual development, it has grown into an important area in systems control and optimization, and it has also served as a prototype for the development of adaptive algorithms for on-line estimation and control of stochastic systems. Recently, it has been used in statistics with Markov chain Monte Carlo for solving maximum likelihood estimation problems and for general simulation and optimization. In this paper, we first show that the trajectory averaging estimator is asymptotically efficient for the stochastic approximation MCMC (SAMCMC) algorithm under mild conditions, and then apply this result to the stochastic approximation Monte Carlo algorithm [Liang, Liu and Carroll J. Amer. Statist. Assoc. 102 (2007) 305-320]. The application of the trajectory averaging estimator to other stochastic approximation MCMC algorithms, for example, a stochastic approximation MLE algorithm for missing data problems, is also considered in the paper. © Institute of Mathematical Statistics, 2010.
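
    The record above concerns trajectory averaging for stochastic approximation; the sketch below shows the generic Robbins-Monro recursion with Polyak-Ruppert (trajectory) averaging on a toy noisy-gradient problem. It is not the SAMCMC algorithm itself, and the step-size schedule and toy objective are assumptions chosen only for illustration.

```python
import numpy as np

def robbins_monro_with_averaging(grad_noisy, theta0, n_iter=5000, a=1.0, alpha=0.7, seed=0):
    """Robbins-Monro recursion theta_{k+1} = theta_k - a_k * H(theta_k, xi_k)
    with Polyak-Ruppert (trajectory) averaging of the iterates."""
    rng = np.random.default_rng(seed)
    theta = np.asarray(theta0, dtype=float)
    running_sum = np.zeros_like(theta)
    for k in range(1, n_iter + 1):
        step = a / k**alpha                     # slowly decreasing gains, alpha in (1/2, 1)
        theta = theta - step * grad_noisy(theta, rng)
        running_sum += theta
    return theta, running_sum / n_iter          # last iterate vs. trajectory average

# Toy target: minimize f(theta) = 0.5*||theta - 3||^2 observed through noisy gradients
noisy_grad = lambda th, rng: (th - 3.0) + rng.normal(scale=1.0, size=th.shape)
last, averaged = robbins_monro_with_averaging(noisy_grad, theta0=np.zeros(2))
print(last, averaged)   # the averaged estimate is typically closer to [3, 3]
```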

  12. Optimizing basin-scale coupled water quantity and water quality management with stochastic dynamic programming

    DEFF Research Database (Denmark)

    Davidsen, Claus; Liu, Suxia; Mo, Xingguo

    2015-01-01

    Few studies address water quality in hydro-economic models, which often focus primarily on optimal allocation of water quantities. Water quality and water quantity are closely coupled, and optimal management with focus solely on either quantity or quality may cause large costs in terms of the other component. In this study, we couple water quality and water quantity in a joint hydro-economic catchment-scale optimization problem. Stochastic dynamic programming (SDP) is used to minimize the basin-wide total costs arising from water allocation, water curtailment and water treatment. The simple water quality module can handle conservative pollutants, first order depletion and non-linear reactions. For demonstration purposes, we model pollutant releases as biochemical oxygen demand (BOD) and use the Streeter-Phelps equation for oxygen deficit to compute the resulting minimum dissolved oxygen...
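
    The abstract mentions the Streeter-Phelps oxygen-deficit equation; a minimal sketch of that classical relation is given below. The parameter values (BOD, deoxygenation and reaeration rates, saturation DO) are illustrative and not taken from the study.

```python
import numpy as np

def streeter_phelps_deficit(t, L0, D0, kd, ka):
    """Classical Streeter-Phelps oxygen deficit D(t) downstream of a BOD release.
    L0: initial BOD [mg/L], D0: initial deficit [mg/L],
    kd: deoxygenation rate [1/day], ka: reaeration rate [1/day] (ka != kd)."""
    return (kd * L0 / (ka - kd)) * (np.exp(-kd * t) - np.exp(-ka * t)) + D0 * np.exp(-ka * t)

# Illustrative parameters (not from the study)
t = np.linspace(0.0, 10.0, 101)          # travel time [days]
deficit = streeter_phelps_deficit(t, L0=20.0, D0=1.0, kd=0.35, ka=0.7)
do_sat = 9.0                              # saturation dissolved oxygen [mg/L]
dissolved_oxygen = do_sat - deficit
print(f"minimum DO = {dissolved_oxygen.min():.2f} mg/L "
      f"at t = {t[dissolved_oxygen.argmin()]:.1f} d")
```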

  13. Universal resources for approximate and stochastic measurement-based quantum computation

    International Nuclear Information System (INIS)

    Mora, Caterina E.; Piani, Marco; Miyake, Akimasa; Van den Nest, Maarten; Duer, Wolfgang; Briegel, Hans J.

    2010-01-01

    We investigate which quantum states can serve as universal resources for approximate and stochastic measurement-based quantum computation in the sense that any quantum state can be generated from a given resource by means of single-qubit (local) operations assisted by classical communication. More precisely, we consider the approximate and stochastic generation of states, resulting, for example, from a restriction to finite measurement settings or from possible imperfections in the resources or local operations. We show that entanglement-based criteria for universality obtained in M. Van den Nest et al. [New J. Phys. 9, 204 (2007)] for the exact, deterministic case can be lifted to the much more general approximate, stochastic case. This allows us to move from the idealized situation (exact, deterministic universality) considered in previous works to the practically relevant context of nonperfect state preparation. We find that any entanglement measure fulfilling some basic requirements needs to reach its maximum value on some element of an approximate, stochastic universal family of resource states, as the resource size grows. This allows us to rule out various families of states as being approximate, stochastic universal. We prove that approximate, stochastic universality is in general a weaker requirement than deterministic, exact universality and provide resources that are efficient approximate universal, but not exact deterministic universal. We also study the robustness of universal resources for measurement-based quantum computation under realistic assumptions about the (imperfect) generation and manipulation of entangled states, giving an explicit expression for the impact that errors made in the preparation of the resource have on the possibility to use it for universal approximate and stochastic state preparation. Finally, we discuss the relation between our entanglement-based criteria and recent results regarding the uselessness of states with a high

  14. Dynamic electricity pricing for electric vehicles using stochastic programming

    International Nuclear Information System (INIS)

    Soares, João; Ghazvini, Mohammad Ali Fotouhi; Borges, Nuno; Vale, Zita

    2017-01-01

    Electric Vehicles (EVs) are an important source of uncertainty due to their variable demand, departure time and location. In smart grids, the electricity demand can be controlled via Demand Response (DR) programs. Smart charging and vehicle-to-grid seem highly promising methods for EV control. However, high capital costs remain a barrier to implementation. Meanwhile, incentive and price-based schemes that do not require a high level of control can be implemented to influence the EVs' demand. Having effective tools to deal with the increasing level of uncertainty is of growing importance for players such as energy aggregators. This paper formulates a stochastic model for day-ahead energy resource scheduling, integrated with dynamic electricity pricing for EVs, to address the challenges brought by demand and renewable-source uncertainty. A two-stage stochastic programming approach is used to obtain the optimal electricity pricing for EVs. A realistic case study projected for 2030 is presented based on the Zaragoza network. The results demonstrate that the stochastic model is more effective than the deterministic model and that the optimal pricing is preferable. This study indicates that adequate DR schemes like the proposed one are promising for increasing customer satisfaction in addition to improving the profitability of the energy aggregation business. - Highlights: • A stochastic model for energy scheduling tackling several uncertainty sources. • A two-stage stochastic programming approach is used to solve the developed model. • Optimal EV electricity pricing seems to improve profits. • The proposed results suggest an increase in customer satisfaction.

  15. Energy Optimal Path Planning: Integrating Coastal Ocean Modelling with Optimal Control

    Science.gov (United States)

    Subramani, D. N.; Haley, P. J., Jr.; Lermusiaux, P. F. J.

    2016-02-01

    A stochastic optimization methodology is formulated for computing energy-optimal paths from among time-optimal paths of autonomous vehicles navigating in a dynamic flow field. To set up the energy optimization, the relative vehicle speed and headings are considered to be stochastic, and new stochastic Dynamically Orthogonal (DO) level-set equations that govern their stochastic time-optimal reachability fronts are derived. Their solution provides the distribution of time-optimal reachability fronts and corresponding distribution of time-optimal paths. An optimization is then performed on the vehicle's energy-time joint distribution to select the energy-optimal paths for each arrival time, among all stochastic time-optimal paths for that arrival time. The accuracy and efficiency of the DO level-set equations for solving the governing stochastic level-set reachability fronts are quantitatively assessed, including comparisons with independent semi-analytical solutions. Energy-optimal missions are studied in wind-driven barotropic quasi-geostrophic double-gyre circulations, and in realistic data-assimilative re-analyses of multiscale coastal ocean flows. The latter re-analyses are obtained from multi-resolution 2-way nested primitive-equation simulations of tidal-to-mesoscale dynamics in the Middle Atlantic Bight and Shelbreak Front region. The effects of tidal currents, strong wind events, coastal jets, and shelfbreak fronts on the energy-optimal paths are illustrated and quantified. Results showcase the opportunities for longer-duration missions that intelligently utilize the ocean environment to save energy, rigorously integrating ocean forecasting with optimal control of autonomous vehicles.

  16. American option pricing with stochastic volatility processes

    Directory of Open Access Journals (Sweden)

    Ping LI

    2017-12-01

    Full Text Available In order to solve the problem of option pricing more accurately, the option pricing problem with the Heston stochastic volatility model is considered. The optimal exercise boundary of the American option and the conditions for its early exercise are analyzed and discussed. Since there is no analytical American option pricing formula, the stochastic partial differential equation satisfied by American options with Heston stochastic volatility is transformed, through space discretization, into a corresponding system of differential equations, and numerical solutions for the option price are then obtained using a high-order compact finite difference method. Numerical experiments are carried out to verify the theoretical results and the simulation. The optimal exercise boundaries under constant volatility and under stochastic volatility are compared, and the results show that the optimal exercise boundary is also affected by the stochastic volatility. Under the chosen parameter settings, the behavior and nature of the volatility are analyzed, the volatility curve is simulated, the results of the high-order compact difference method are compared, and the numerical option solution is obtained, so that the method is verified. The research results provide a reference for solving option pricing problems under stochastic volatility, such as multi-asset option pricing and barrier option pricing.
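
    As background to the Heston dynamics discussed above, the following sketch simulates Heston paths with a full-truncation Euler-Maruyama scheme and prices a European put as a sanity check. It is not the high-order compact finite-difference method of the record, it does not handle American early exercise, and all parameter values are illustrative.

```python
import numpy as np

def heston_paths(S0, v0, r, kappa, theta, xi, rho, T, n_steps, n_paths, seed=0):
    """Euler-Maruyama (full-truncation) simulation of terminal prices under the Heston model."""
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    S = np.full(n_paths, S0, dtype=float)
    v = np.full(n_paths, v0, dtype=float)
    for _ in range(n_steps):
        z1 = rng.standard_normal(n_paths)
        z2 = rho * z1 + np.sqrt(1 - rho**2) * rng.standard_normal(n_paths)
        v_pos = np.maximum(v, 0.0)                      # full truncation keeps variance usable
        S *= np.exp((r - 0.5 * v_pos) * dt + np.sqrt(v_pos * dt) * z1)
        v += kappa * (theta - v_pos) * dt + xi * np.sqrt(v_pos * dt) * z2
    return S

# Illustrative parameters; a European put price as a Monte Carlo sanity check
S_T = heston_paths(S0=100, v0=0.04, r=0.03, kappa=2.0, theta=0.04,
                   xi=0.5, rho=-0.7, T=1.0, n_steps=252, n_paths=100_000)
K = 100.0
put_price = np.exp(-0.03 * 1.0) * np.maximum(K - S_T, 0.0).mean()
print(f"European put (Monte Carlo): {put_price:.3f}")
```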

  17. Stochastic Differential Equation-Based Flexible Software Reliability Growth Model

    Directory of Open Access Journals (Sweden)

    P. K. Kapur

    2009-01-01

    Full Text Available Several software reliability growth models (SRGMs) have been developed by software developers for tracking and measuring the growth of reliability. When the software system is large and the number of faults detected during the testing phase becomes large, the change in the number of faults detected and removed through each debugging becomes small compared with the initial fault content at the beginning of the testing phase. In such a situation, we can model the software fault detection process as a stochastic process with a continuous state space. In this paper, we propose a new software reliability growth model based on an Itô-type stochastic differential equation. We consider an SDE-based generalized Erlang model with a logistic error detection function. The model is estimated and validated on real-life data sets cited in the literature to show its flexibility. The proposed model, integrated with the concept of a stochastic differential equation, performs comparatively better than the existing NHPP-based models.

  18. Asymptotically optimal production policies in dynamic stochastic jobshops with limited buffers

    Science.gov (United States)

    Hou, Yumei; Sethi, Suresh P.; Zhang, Hanqin; Zhang, Qing

    2006-05-01

    We consider a production planning problem for a jobshop with unreliable machines producing a number of products. There are upper and lower bounds on intermediate parts and an upper bound on finished parts. The machine capacities are modelled as finite state Markov chains. The objective is to choose the rate of production so as to minimize the total discounted cost of inventory and production. Finding an optimal control policy for this problem is difficult. Instead, we derive an asymptotic approximation by letting the rates of change of the machine states approach infinity. The asymptotic analysis leads to a limiting problem in which the stochastic machine capacities are replaced by their equilibrium mean capacities. The value function for the original problem is shown to converge to the value function of the limiting problem. The convergence rate of the value function together with the error estimate for the constructed asymptotic optimal production policies are established.

  19. Stochastic Synapses Enable Efficient Brain-Inspired Learning Machines

    Science.gov (United States)

    Neftci, Emre O.; Pedroni, Bruno U.; Joshi, Siddharth; Al-Shedivat, Maruan; Cauwenberghs, Gert

    2016-01-01

    Recent studies have shown that synaptic unreliability is a robust and sufficient mechanism for inducing the stochasticity observed in cortex. Here, we introduce Synaptic Sampling Machines (S2Ms), a class of neural network models that uses synaptic stochasticity as a means to Monte Carlo sampling and unsupervised learning. Similar to the original formulation of Boltzmann machines, these models can be viewed as a stochastic counterpart of Hopfield networks, but where stochasticity is induced by a random mask over the connections. Synaptic stochasticity plays the dual role of an efficient mechanism for sampling, and a regularizer during learning akin to DropConnect. A local synaptic plasticity rule implementing an event-driven form of contrastive divergence enables the learning of generative models in an on-line fashion. S2Ms perform equally well using discrete-timed artificial units (as in Hopfield networks) or continuous-timed leaky integrate and fire neurons. The learned representations are remarkably sparse and robust to reductions in bit precision and synapse pruning: removal of more than 75% of the weakest connections followed by cursory re-learning causes a negligible performance loss on benchmark classification tasks. The spiking neuron-based S2Ms outperform existing spike-based unsupervised learners, while potentially offering substantial advantages in terms of power and complexity, and are thus promising models for on-line learning in brain-inspired hardware. PMID:27445650

  20. Deterministic and stochastic approach for safety and reliability optimization of captive power plant maintenance scheduling using GA/SA-based hybrid techniques: A comparison of results

    International Nuclear Information System (INIS)

    Mohanta, Dusmanta Kumar; Sadhu, Pradip Kumar; Chakrabarti, R.

    2007-01-01

    This paper presents a comparison of results for the optimization of captive power plant maintenance scheduling using a genetic algorithm (GA) as well as hybrid GA/simulated annealing (SA) techniques. Because utilities served by captive power plants are very sensitive to power failure, both deterministic and stochastic reliability objective functions have been considered to incorporate statutory safety regulations for the maintenance of boilers, turbines and generators. A significant contribution of this paper is the incorporation of the stochastic features of generating units and of load using the levelized risk method. Another significant contribution is the evaluation of a confidence interval for the loss of load probability (LOLP), because some variations from the optimum schedule are anticipated while executing maintenance schedules due to different real-life unforeseen exigencies. Such exigencies are incorporated in terms of near-optimum schedules obtained from the hybrid GA/SA technique during the final stages of convergence. Case studies corroborate that the same optimum schedules are obtained using GA and hybrid GA/SA for the respective deterministic and stochastic formulations. The comparison of results in terms of the confidence interval for LOLP indicates that the levelized risk method adequately incorporates the stochastic nature of the power system as compared with the levelized reserve method. Also, the confidence interval for LOLP denotes the possible risk in a quantified manner, and it is of immense use from the perspective of captive power plants intended for quality power

  1. Periodic modulation-based stochastic resonance algorithm applied to quantitative analysis for weak liquid chromatography-mass spectrometry signal of granisetron in plasma

    Science.gov (United States)

    Xiang, Suyun; Wang, Wei; Xiang, Bingren; Deng, Haishan; Xie, Shaofei

    2007-05-01

    The periodic modulation-based stochastic resonance algorithm (PSRA) was used to amplify and detect the weak liquid chromatography-mass spectrometry (LC-MS) signal of granisetron in plasma. In the algorithm, stochastic resonance (SR) is achieved by introducing an external periodic force into the nonlinear system. The optimization of parameters was carried out in two steps to give attention to both the signal-to-noise ratio (S/N) and the peak shape of the output signal. By applying PSRA with the optimized parameters, the signal-to-noise ratio of the LC-MS peak was enhanced significantly, and the distorted peak shape that often appears with the traditional stochastic resonance algorithm was corrected by the added periodic force. Using the signals enhanced by PSRA, this method lowered the limit of detection (LOD) and limit of quantification (LOQ) of granisetron in plasma from 0.05 and 0.2 ng/mL to 0.01 and 0.02 ng/mL, respectively, and exhibited good linearity, accuracy and precision, which ensure accurate determination of the target analyte.

  2. Optimization based tuning approach for offset free MPC

    DEFF Research Database (Denmark)

    Olesen, Daniel Haugård; Huusom, Jakob Kjøbsted; Jørgensen, John Bagterp

    2012-01-01

    We present an optimization based tuning procedure with certain robustness properties for an offset free Model Predictive Controller (MPC). The MPC is designed for multivariate processes that can be represented by an ARX model. The advantage of ARX model representations is that standard system identification techniques using convex optimization can be used for identification of such models from input-output data. The stochastic model of the ARX model identified from input-output data is modified with an ARMA model designed as part of the MPC-design procedure to ensure offset-free control. The ARMAX model description resulting from the extension can be realized as a state space model in innovation form. The MPC is designed and implemented based on this state space model in innovation form. Expressions for the closed-loop dynamics of the unconstrained system are used to derive the sensitivity...

  3. A new approach to developing and optimizing organization strategy based on stochastic quantitative model of strategic performance

    Directory of Open Access Journals (Sweden)

    Marko Hell

    2014-03-01

    Full Text Available This paper presents a highly formalized approach to strategy formulation and optimization of strategic performance through proper resource allocation. A stochastic quantitative model of strategic performance (SQMSP) is used to evaluate the efficiency of the strategy developed. The SQMSP follows the theoretical notions of the balanced scorecard (BSC) and strategy map methodologies, initially developed by Kaplan and Norton. Parameters of the SQMSP are suggested to be random variables, evaluated by experts who give two-point (optimistic and pessimistic) and three-point (optimistic, most probable and pessimistic) evaluations. The Monte-Carlo method is used to simulate strategic performance. Having been implemented within a computer application and applied to a real problem (planning of an IT-strategy at the Faculty of Economics, University of Split), the proposed approach demonstrated its high potential as a basis for the development of decision support tools related to strategic planning.
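
    A minimal sketch of Monte-Carlo propagation of expert three-point estimates is shown below, assuming triangular distributions built from the (pessimistic, most probable, optimistic) values. The weighted aggregation and all numbers are placeholders for illustration only, not the SQMSP itself.

```python
import numpy as np

def simulate_strategic_performance(estimates, weights, n_sim=10_000, seed=0):
    """Monte Carlo propagation of expert three-point estimates.
    estimates: list of (pessimistic, most_probable, optimistic) triples;
    weights:   contribution of each parameter to an aggregate performance score."""
    rng = np.random.default_rng(seed)
    samples = np.column_stack([
        rng.triangular(left=p, mode=m, right=o, size=n_sim)
        for (p, m, o) in estimates
    ])
    score = samples @ np.asarray(weights)
    return score

# Three illustrative strategic-objective parameters scored on [0, 1]
estimates = [(0.3, 0.6, 0.9), (0.2, 0.5, 0.7), (0.4, 0.7, 0.95)]
score = simulate_strategic_performance(estimates, weights=[0.5, 0.3, 0.2])
print(f"mean = {score.mean():.3f}, 5th-95th percentile = "
      f"{np.percentile(score, 5):.3f}-{np.percentile(score, 95):.3f}")
```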

  4. Stochastic optimal control of non-stationary response of a single-degree-of-freedom vehicle model

    Science.gov (United States)

    Narayanan, S.; Raju, G. V.

    1990-09-01

    An active suspension system to control the non-stationary response of a single-degree-of-freedom (sdf) vehicle model with variable velocity traverse over a rough road is investigated. The suspension is optimized with respect to ride comfort and road holding, using stochastic optimal control theory. The ground excitation is modelled as a spatial homogeneous random process, being the output of a linear shaping filter to white noise. The effect of the rolling contact of the tyre is considered by an additional filter in cascade. The non-stationary response with active suspension is compared with that of a passive system.

  5. Reliability-based trajectory optimization using nonintrusive polynomial chaos for Mars entry mission

    Science.gov (United States)

    Huang, Yuechen; Li, Haiyang

    2018-06-01

    This paper presents the reliability-based sequential optimization (RBSO) method to solve the trajectory optimization problem with parametric uncertainties in entry dynamics for a Mars entry mission. First, the deterministic entry trajectory optimization model is reviewed, and then the reliability-based optimization model is formulated. In addition, a modified sequential optimization method, in which the nonintrusive polynomial chaos expansion (PCE) method and the most probable point (MPP) searching method are employed, is proposed to solve the reliability-based optimization problem efficiently. The nonintrusive PCE method contributes to the transformation between the stochastic optimization (SO) and the deterministic optimization (DO) and to the efficient approximation of the trajectory solution. The MPP method, which assesses the reliability of constraint satisfaction only up to the necessary level, is employed to further improve the computational efficiency. The cycle including SO, reliability assessment and constraint update is repeated in the RBSO until the reliability requirements of constraint satisfaction are met. Finally, the RBSO is compared with the traditional DO and with traditional sequential optimization based on Monte Carlo (MC) simulation in a specific Mars entry mission to demonstrate the effectiveness and the efficiency of the proposed method.
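
    To illustrate the nonintrusive PCE idea mentioned above, the sketch below fits a one-dimensional polynomial chaos expansion by least-squares regression on probabilists' Hermite polynomials and reads off the mean and variance. The toy response function stands in for a trajectory constraint and is not from the paper.

```python
import numpy as np
from numpy.polynomial import hermite_e as He
from math import factorial

def fit_pce(model, degree=4, n_samples=200, seed=0):
    """Nonintrusive (regression-based) polynomial chaos expansion of a scalar model
    with a single standard-normal input, using probabilists' Hermite polynomials."""
    rng = np.random.default_rng(seed)
    xi = rng.standard_normal(n_samples)                 # samples of the uncertain input
    Psi = He.hermevander(xi, degree)                    # design matrix of He_k(xi)
    y = model(xi)
    coeffs, *_ = np.linalg.lstsq(Psi, y, rcond=None)    # least-squares PCE coefficients
    mean = coeffs[0]
    variance = sum(coeffs[k]**2 * factorial(k) for k in range(1, degree + 1))
    return coeffs, mean, variance

# Toy "trajectory constraint" response with an uncertain entry parameter xi
model = lambda xi: 1.0 + 0.5 * xi + 0.2 * xi**2
coeffs, mean, var = fit_pce(model)
print(f"PCE mean = {mean:.3f} (exact 1.2), variance = {var:.3f} (exact 0.33)")
```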

  6. Optimal design of the heat pipe using TLBO (teaching–learning-based optimization) algorithm

    International Nuclear Information System (INIS)

    Rao, R.V.; More, K.C.

    2015-01-01

    A heat pipe is a highly efficient and reliable heat transfer component. It is a closed container designed to transfer a large amount of heat in a system. Since the heat pipe operates on a closed two-phase cycle, the heat transfer capacity is greater than for solid conductors, and the thermal response time is shorter. The three major elemental parts of the rotating heat pipe are a cylindrical evaporator, a truncated cone condenser, and a fixed amount of working fluid. In this paper, a recently proposed stochastic advanced optimization algorithm called TLBO (Teaching–Learning-Based Optimization) is used for single-objective as well as multi-objective design optimization of the heat pipe. It is easy to implement, does not make use of derivatives, and can be applied to unconstrained or constrained problems. Two heat pipe examples are presented in this paper. The results of applying the TLBO algorithm to the design optimization of the heat pipe are compared with the NPGA (Niched Pareto Genetic Algorithm), GEM (Grenade Explosion Method) and GEO (Generalized Extremal Optimization). It is found that the TLBO algorithm produces better results than those obtained using the NPGA, GEM and GEO algorithms. - Highlights: • The TLBO (Teaching–Learning-Based Optimization) algorithm is used for the design and optimization of a heat pipe. • Two examples of heat pipe design and optimization are presented. • The TLBO algorithm proved better than the other optimization algorithms in terms of results and convergence
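
    A minimal sketch of the basic TLBO loop (teacher phase followed by learner phase) is given below, run on a sphere function as a stand-in for the heat-pipe objective. The actual heat-pipe design variables and constraints from the paper are not reproduced.

```python
import numpy as np

def tlbo(obj, lb, ub, pop_size=20, n_iter=100, seed=0):
    """Teaching-Learning-Based Optimization (minimization), basic unconstrained form."""
    rng = np.random.default_rng(seed)
    dim = len(lb)
    X = rng.uniform(lb, ub, size=(pop_size, dim))
    f = np.apply_along_axis(obj, 1, X)
    for _ in range(n_iter):
        # Teacher phase: move the class towards the best learner, away from the class mean
        teacher = X[f.argmin()]
        TF = rng.integers(1, 3)                           # teaching factor, 1 or 2
        X_new = np.clip(X + rng.random((pop_size, dim)) * (teacher - TF * X.mean(axis=0)), lb, ub)
        f_new = np.apply_along_axis(obj, 1, X_new)
        improved = f_new < f
        X[improved], f[improved] = X_new[improved], f_new[improved]
        # Learner phase: pairwise interaction with a randomly chosen partner
        for i in range(pop_size):
            j = rng.choice([k for k in range(pop_size) if k != i])
            direction = X[i] - X[j] if f[i] < f[j] else X[j] - X[i]
            cand = np.clip(X[i] + rng.random(dim) * direction, lb, ub)
            fc = obj(cand)
            if fc < f[i]:
                X[i], f[i] = cand, fc
    best = f.argmin()
    return X[best], f[best]

# Sphere function as a stand-in for a heat-pipe objective (e.g. mass or thermal resistance)
best_x, best_f = tlbo(lambda x: np.sum(x**2), lb=np.array([-5.0, -5.0]), ub=np.array([5.0, 5.0]))
print(best_x, best_f)
```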

  7. Monthly Optimal Reservoirs Operation for Multicrop Deficit Irrigation under Fuzzy Stochastic Uncertainties

    Directory of Open Access Journals (Sweden)

    Liudong Zhang

    2014-01-01

    Full Text Available An uncertain monthly reservoir operation and multicrop deficit irrigation model was proposed under conjunctive use of groundwater and surface water for water resources optimization management. The objective is to maximize the total crop yield of the entire irrigation districts, while ecological water is retained for downstream demand. Because of the shortage of water resources, the monthly crop water production function was adopted for multiperiod deficit irrigation management. The model reflects the characteristics of repetitive water resources transformation in typical inland river irrigation systems. The model was applied, as an example, to water resources optimization management in the Shiyang River Basin, China. Uncertainties in reservoir management, expressed as fuzzy probabilities, were treated through chance-constrained parameters for decision makers. Necessity of dominance (ND) was used to analyse the advantages of the method. The optimization results, including the real-time reservoir operation policy, deficit irrigation management and the allocation of available water resources, could be used to provide decision support for local irrigation management. In addition, the strategies obtained could support stochastic risk analysis of reservoir operation.

  8. Correlative Stochastic Optical Reconstruction Microscopy and Electron Microscopy

    Science.gov (United States)

    Kim, Doory; Deerinck, Thomas J.; Sigal, Yaron M.; Babcock, Hazen P.; Ellisman, Mark H.; Zhuang, Xiaowei

    2015-01-01

    Correlative fluorescence light microscopy and electron microscopy allows the imaging of spatial distributions of specific biomolecules in the context of cellular ultrastructure. Recent development of super-resolution fluorescence microscopy allows the location of molecules to be determined with nanometer-scale spatial resolution. However, correlative super-resolution fluorescence microscopy and electron microscopy (EM) still remains challenging because the optimal specimen preparation and imaging conditions for super-resolution fluorescence microscopy and EM are often not compatible. Here, we have developed several experimental protocols for correlative stochastic optical reconstruction microscopy (STORM) and EM methods, both for un-embedded samples, by applying EM-specific sample preparations after STORM imaging, and for embedded and sectioned samples, by optimizing the fluorescence under EM fixation, staining and embedding conditions. We demonstrated these methods using a variety of cellular targets. PMID:25874453

  9. Optimal sampling strategies for detecting zoonotic disease epidemics.

    Directory of Open Access Journals (Sweden)

    Jake M Ferguson

    2014-06-01

    Full Text Available The early detection of disease epidemics reduces the chance of successful introductions into new locales, minimizes the number of infections, and reduces the financial impact. We develop a framework to determine the optimal sampling strategy for disease detection in zoonotic host-vector epidemiological systems when a disease goes from below detectable levels to an epidemic. We find that if the time of disease introduction is known then the optimal sampling strategy can switch abruptly between sampling only from the vector population to sampling only from the host population. We also construct time-independent optimal sampling strategies when conducting periodic sampling that can involve sampling both the host and the vector populations simultaneously. Both time-dependent and -independent solutions can be useful for sampling design, depending on whether the time of introduction of the disease is known or not. We illustrate the approach with West Nile virus, a globally-spreading zoonotic arbovirus. Though our analytical results are based on a linearization of the dynamical systems, the sampling rules appear robust over a wide range of parameter space when compared to nonlinear simulation models. Our results suggest some simple rules that can be used by practitioners when developing surveillance programs. These rules require knowledge of transition rates between epidemiological compartments, which population was initially infected, and of the cost per sample for serological tests.

  10. Optimal sampling strategies for detecting zoonotic disease epidemics.

    Science.gov (United States)

    Ferguson, Jake M; Langebrake, Jessica B; Cannataro, Vincent L; Garcia, Andres J; Hamman, Elizabeth A; Martcheva, Maia; Osenberg, Craig W

    2014-06-01

    The early detection of disease epidemics reduces the chance of successful introductions into new locales, minimizes the number of infections, and reduces the financial impact. We develop a framework to determine the optimal sampling strategy for disease detection in zoonotic host-vector epidemiological systems when a disease goes from below detectable levels to an epidemic. We find that if the time of disease introduction is known then the optimal sampling strategy can switch abruptly between sampling only from the vector population to sampling only from the host population. We also construct time-independent optimal sampling strategies when conducting periodic sampling that can involve sampling both the host and the vector populations simultaneously. Both time-dependent and -independent solutions can be useful for sampling design, depending on whether the time of introduction of the disease is known or not. We illustrate the approach with West Nile virus, a globally-spreading zoonotic arbovirus. Though our analytical results are based on a linearization of the dynamical systems, the sampling rules appear robust over a wide range of parameter space when compared to nonlinear simulation models. Our results suggest some simple rules that can be used by practitioners when developing surveillance programs. These rules require knowledge of transition rates between epidemiological compartments, which population was initially infected, and of the cost per sample for serological tests.

  11. Population Pharmacokinetics and Optimal Sampling Strategy for Model-Based Precision Dosing of Melphalan in Patients Undergoing Hematopoietic Stem Cell Transplantation.

    Science.gov (United States)

    Mizuno, Kana; Dong, Min; Fukuda, Tsuyoshi; Chandra, Sharat; Mehta, Parinda A; McConnell, Scott; Anaissie, Elias J; Vinks, Alexander A

    2018-05-01

    High-dose melphalan is an important component of conditioning regimens for patients undergoing hematopoietic stem cell transplantation. The current dosing strategy based on body surface area results in a high incidence of oral mucositis and gastrointestinal and liver toxicity. Pharmacokinetically guided dosing will individualize exposure and help minimize overexposure-related toxicity. The purpose of this study was to develop a population pharmacokinetic model and an optimal sampling strategy. A population pharmacokinetic model was developed with NONMEM using 98 observations collected from 15 adult patients given the standard dose of 140 or 200 mg/m² by intravenous infusion. The determinant-optimal sampling strategy was explored with PopED software. Individual area under the curve estimates were generated by Bayesian estimation using the full and the proposed sparse sampling data. The predictive performance of the optimal sampling strategy was evaluated based on bias and precision estimates. The feasibility of the optimal sampling strategy was tested using pharmacokinetic data from five pediatric patients. A two-compartment model best described the data. The final model included body weight and creatinine clearance as predictors of clearance. The determinant-optimal sampling strategies (and windows) were identified at 0.08 (0.08-0.19), 0.61 (0.33-0.90), 2.0 (1.3-2.7), and 4.0 (3.6-4.0) h post-infusion. An excellent correlation was observed between area under the curve estimates obtained with the full and the proposed four-sample strategy (R² = 0.98). The proposed sampling strategy promises to achieve the target area under the curve as part of precision dosing.

  12. The role of stochasticity in an information-optimal neural population code

    Energy Technology Data Exchange (ETDEWEB)

    Stocks, N G; Nikitin, A P [School of Engineering, University of Warwick, Coventry CV4 7AL (United Kingdom); McDonnell, M D [Institute for Telecommunications Research, University of South Australia, SA 5095 (Australia); Morse, R P, E-mail: n.g.stocks@warwick.ac.u [School of Life and Health Sciences, Aston University, Birmingham B4 7ET (United Kingdom)

    2009-12-01

    In this paper we consider the optimisation of Shannon mutual information (MI) in the context of two model neural systems. The first is a stochastic pooling network (population) of McCulloch-Pitts (MP) type neurons (logical threshold units) subject to stochastic forcing; the second is (in a rate coding paradigm) a population of neurons that each displays Poisson statistics (the so-called 'Poisson neuron'). The mutual information is optimised as a function of a parameter that characterises the 'noise level': in the MP array this parameter is the standard deviation of the noise; in the population of Poisson neurons it is the window length used to determine the spike count. In both systems we find that the emergent neural architecture and, hence, the code that maximises the MI is strongly influenced by the noise level. Low noise levels lead to a heterogeneous distribution of neural parameters (diversity), whereas medium to high noise levels result in the clustering of neural parameters into distinct groups that can be interpreted as subpopulations. In both cases the number of subpopulations increases with a decrease in noise level. Our results suggest that subpopulations are a generic feature of an information-optimal neural population.
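
    As a toy version of the setting described above, the sketch below computes the exact mutual information between a subthreshold binary signal and the spike count of a pool of identical threshold (McCulloch-Pitts) units with independent Gaussian noise. The signal levels, threshold and population size are assumptions chosen only to show the dependence on the noise level; the heterogeneous, information-optimal populations studied in the paper are not reproduced.

```python
import numpy as np
from scipy.stats import norm, binom

def entropy_bits(p):
    """Shannon entropy (bits) of a discrete distribution, ignoring zero-probability outcomes."""
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def pooling_network_mi(n_units, signal_levels, threshold, noise_std):
    """Exact mutual information I(S;N) between an equiprobable binary input S and the
    spike count N of n_units independent McCulloch-Pitts (threshold) units, each
    corrupted by independent Gaussian noise with standard deviation noise_std."""
    counts = np.arange(n_units + 1)
    p_fire = [norm.sf(threshold - s, scale=noise_std) for s in signal_levels]
    pmf_given_s = np.array([binom.pmf(counts, n_units, p) for p in p_fire])
    pmf_n = pmf_given_s.mean(axis=0)                    # marginal over the two signal levels
    return entropy_bits(pmf_n) - np.mean([entropy_bits(p) for p in pmf_given_s])

# Subthreshold binary signal: I(S;N) is near zero at very low and very high noise
# and peaks at an intermediate noise level (stochastic-resonance-like behaviour).
for sigma in (0.05, 0.2, 0.5, 1.0, 2.0):
    mi = pooling_network_mi(n_units=15, signal_levels=(0.2, 0.8), threshold=1.0, noise_std=sigma)
    print(f"sigma = {sigma:4.2f}: I(S;N) = {mi:.3f} bits")
```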

  13. On capital allocation for stochastic arrangement increasing actuarial risks

    Directory of Open Access Journals (Sweden)

    Pan Xiaoqing

    2017-01-01

    Full Text Available This paper studies the increasing convex ordering of the optimal discounted capital allocations for stochastic arrangement increasing risks with stochastic arrangement decreasing occurrence times. The application to optimal allocation of policy limits is presented as an illustration as well.

  14. Boosting Lyα and He II λ1640 line fluxes from Population III galaxies: Stochastic IMF sampling and departures from case-B

    International Nuclear Information System (INIS)

    Mas-Ribas, Lluís; Dijkstra, Mark; Forero-Romero, Jaime E.

    2016-01-01

    We revisit calculations of nebular hydrogen Lyα and He II λ1640 line strengths for Population III (Pop III) galaxies undergoing continuous, and bursts of, star formation. We focus on initial mass functions (IMFs) motivated by recent theoretical studies, which generally span a lower range of stellar masses than earlier works. We also account for case-B departures and the stochastic sampling of the IMF. In agreement with previous work, we find that departures from case-B can enhance the Lyα flux by a factor of a few, but we argue that this enhancement is driven mainly by collisional excitation and ionization, and not due to photoionization from the n = 2 state of atomic hydrogen. The increased sensitivity of the Lyα flux to the high-energy end of the galaxy spectrum makes it more subject to stochastic sampling of the IMF. The latter introduces a dispersion in the predicted nebular line fluxes around the deterministic value by as much as a factor of ∼4. In contrast, the stochastic sampling of the IMF has less impact on the emerging Lyman-Werner photon flux. When case-B departures and stochasticity effects are combined, nebular line emission from Pop III galaxies can be up to one order of magnitude brighter than predicted by “standard” calculations that do not include these effects. This enhances the prospects for detection with future facilities such as the James Webb Space Telescope and large, ground-based telescopes.

  15. Stochastic network interdiction optimization via capacitated network reliability modeling and probabilistic solution discovery

    International Nuclear Information System (INIS)

    Ramirez-Marquez, Jose Emmanuel; Rocco S, Claudio M.

    2009-01-01

    This paper introduces an evolutionary optimization approach that can be readily applied to solve stochastic network interdiction problems (SNIP). The network interdiction problem solved considers the minimization of the cost associated with an interdiction strategy such that the maximum flow that can be transmitted between a source node and a sink node for a fixed network design is greater than or equal to a given reliability requirement. Furthermore, the model assumes that the nominal capacity of each network link and the cost associated with their interdiction can change from link to link and that such interdiction has a probability of being successful. This version of the SNIP is for the first time modeled as a capacitated network reliability problem, allowing for the implementation of computation and solution techniques previously unavailable. The solution process is based on an evolutionary algorithm that implements: (1) Monte-Carlo simulation to generate potential network interdiction strategies, (2) capacitated network reliability techniques to analyze strategies' source-sink flow reliability, and (3) an evolutionary optimization technique to define, in probabilistic terms, how likely a link is to appear in the final interdiction strategy. Examples for different sizes of networks are used throughout the paper to illustrate the approach

  16. A framework for model-based optimization of bioprocesses under uncertainty: Lignocellulosic ethanol production case

    DEFF Research Database (Denmark)

    Morales Rodriguez, Ricardo; Meyer, Anne S.; Gernaey, Krist

    2012-01-01

    of up to 0.13 USD/gal-ethanol. Further stochastic optimization demonstrated the options for further reduction of the production costs with different processing configurations, reaching a reduction of up to 28% in the production cost in the SHCF configuration compared to the base case operation. Further...

  17. Multi-objective stochastic scheduling optimization model for connecting a virtual power plant to wind-photovoltaic-electric vehicles considering uncertainties and demand response

    International Nuclear Information System (INIS)

    Ju, Liwei; Li, Huanhuan; Zhao, Junwei; Chen, Kangting; Tan, Qingkun; Tan, Zhongfu

    2016-01-01

    Highlights: • Our research focuses on the virtual power plant. • An electric vehicle group and demand response are integrated into the virtual power plant. • Stochastic chance-constrained planning is applied to handle uncertainties. • A multi-objective stochastic scheduling model is proposed for the virtual power plant. • A three-stage hybrid intelligent solution algorithm is proposed for solving the model. - Abstract: A stochastic chance-constrained planning method is applied to build a multi-objective optimization model for virtual power plant scheduling. Firstly, the implementation cost of demand response is calculated using the system income difference. Secondly, a wind power plant, photovoltaic power, an electric vehicle group and a conventional power plant are aggregated into a virtual power plant. A stochastic scheduling model is proposed for the virtual power plant, considering uncertainties under three objective functions. Thirdly, a three-stage hybrid intelligent solution algorithm is proposed, featuring the particle swarm optimization algorithm, the entropy weight method and fuzzy satisfaction theory. Finally, the Yunnan distributed power demonstration project in China is used as an illustrative example. Simulation results demonstrate that when uncertainties are considered, the system will reduce the grid connection of the wind power plant and photovoltaic power to decrease the power shortage punishment cost. The average reduction of the system power shortage punishment cost and of the operation revenue of the virtual power plant are 61.5% and 1.76%, respectively, while the average increase of the system abandoned energy cost is 40.4%. The output of the virtual power plant exhibits a reverse distribution with the confidence degree of the uncertainty variable. The proposed algorithm rapidly calculates a global optimal set. The electric vehicle group could provide spinning reserve to ensure the stability of the output of the virtual power plant. Demand response could

  18. Inherently stochastic spiking neurons for probabilistic neural computation

    KAUST Repository

    Al-Shedivat, Maruan

    2015-04-01

    Neuromorphic engineering aims to design hardware that efficiently mimics neural circuitry and provides the means for emulating and studying neural systems. In this paper, we propose a new memristor-based neuron circuit that uniquely complements the scope of neuron implementations and follows the stochastic spike response model (SRM), which plays a cornerstone role in spike-based probabilistic algorithms. We demonstrate that the switching of the memristor is akin to the stochastic firing of the SRM. Our analysis and simulations show that the proposed neuron circuit satisfies a neural computability condition that enables probabilistic neural sampling and spike-based Bayesian learning and inference. Our findings constitute an important step towards memristive, scalable and efficient stochastic neuromorphic platforms. © 2015 IEEE.

  19. An inexact two-stage stochastic robust programming for residential micro-grid management-based on random demand

    International Nuclear Information System (INIS)

    Ji, L.; Niu, D.X.; Huang, G.H.

    2014-01-01

    In this paper a stochastic robust optimization problem for residential micro-grid energy management is presented. Combined cooling, heating and power (CCHP) technology is introduced to satisfy various energy demands. Two-stage programming is utilized to find the optimal installed capacity investment and the operational control of the CCHP system. Moreover, interval programming and robust stochastic optimization methods are exploited to obtain interval robust solutions under different robustness levels that remain feasible for uncertain data. The obtained results can help micro-grid managers minimize the investment and operation cost with lower system failure risk when facing a fluctuating energy market and uncertain technology parameters. The different robustness levels reflect the risk preference of the micro-grid manager. The proposed approach is applied to residential area energy management in North China. Detailed computational results under different robustness levels are presented and analyzed to support investment decisions and operation strategies. - Highlights: • An inexact two-stage stochastic robust programming model for CCHP management. • Uncertainties in the energy market and technical parameters were considered. • Investment decisions, operation cost and system safety were analyzed. • Uncertainties expressed as discrete intervals and probability distributions

  20. Portfolio Optimization and Mortgage Choice

    Directory of Open Access Journals (Sweden)

    Maj-Britt Nordfang

    2017-01-01

    Full Text Available This paper studies the optimal mortgage choice of an investor in a simple bond market with a stochastic interest rate and access to term life insurance. The study is based on advances in stochastic control theory, which provides analytical solutions to portfolio problems with a stochastic interest rate. We derive the optimal portfolio of a mortgagor in a simple framework and formulate stylized versions of mortgage products offered in the market today. This allows us to analyze the optimal investment strategy in terms of optimal mortgage choice. We conclude that certain extreme investors optimally choose either a traditional fixed rate mortgage or an adjustable rate mortgage, while investors with moderate risk aversion and income prefer a mix of the two. By matching specific investor characteristics to existing mortgage products, our study provides a better understanding of the complex and yet restricted mortgage choice faced by many household investors. In addition, the simple analytical framework enables a detailed analysis of how changes to market, income and preference parameters affect the optimal mortgage choice.

  1. Stochastic Stability of Endogenous Growth: Theory and Applications

    OpenAIRE

    Boucekkine, Raouf; Pintus, Patrick; Zou, Benteng

    2015-01-01

    We examine the issue of stability of stochastic endogenous growth. First, stochastic stability concepts are introduced and applied to the stochastic linear homogeneous differential equations to which several stochastic endogenous growth models reduce. Second, we apply the mathematical theory to two models, starting with the stochastic AK model. It is shown that in this case exponential balanced paths, which characterize optimal trajectories in the absence of uncertainty, are not robust to uncerta...

  2. Improving Sensorimotor Function and Adaptation using Stochastic Vestibular Stimulation

    Science.gov (United States)

    Galvan, R. C.; Bloomberg, J. J.; Mulavara, A. P.; Clark, T. K.; Merfeld, D. M.; Oman, C. M.

    2014-01-01

    Astronauts experience sensorimotor changes during adaptation to the G-transitions that occur when entering and exiting microgravity. Post space flight, these sensorimotor disturbances can include postural and gait instability, visual performance changes, manual control disruptions, spatial disorientation, and motion sickness, all of which can hinder the operational capabilities of the astronauts. Crewmember safety would be significantly increased if sensorimotor changes brought on by gravitational changes could be mitigated and adaptation could be facilitated. The goal of this research is to investigate and develop the use of electrical stochastic vestibular stimulation (SVS) as a countermeasure to augment sensorimotor function and facilitate adaptation. For this project, SVS will be applied via electrodes on the mastoid processes at imperceptible amplitude levels. We hypothesize that SVS will improve sensorimotor performance through the phenomenon of stochastic resonance, which occurs when the response of a nonlinear system to a weak input signal is optimized by the application of a particular nonzero level of noise. In line with the theory of stochastic resonance, a specific optimal level of SVS will be found and tested for each subject [1]. Three experiments are planned to investigate the use of SVS in sensory-dependent tasks and performance. The first experiment will aim to demonstrate stochastic resonance in the vestibular system through perception-based motion recognition thresholds obtained using a 6-degree-of-freedom Stewart platform in the Jenks Vestibular Laboratory at the Massachusetts Eye and Ear Infirmary. A range of SVS amplitudes will be applied to each subject, and the subject-specific optimal SVS level will be identified as that which results in the lowest motion recognition threshold, using previously established, well-developed methods [2,3,4]. The second experiment will investigate the use of optimal SVS in facilitating sensorimotor adaptation to system

  3. Introduction to stochastic dynamic programming

    CERN Document Server

    Ross, Sheldon M; Lukacs, E

    1983-01-01

    Introduction to Stochastic Dynamic Programming presents the basic theory and examines the scope of applications of stochastic dynamic programming. The book begins with a chapter on various finite-stage models, illustrating the wide range of applications of stochastic dynamic programming. Subsequent chapters study infinite-stage models: discounting future returns, minimizing nonnegative costs, maximizing nonnegative returns, and maximizing the long-run average return. Each of these chapters first considers whether an optimal policy need exist (providing counterexamples where appropriate) and the
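
    For a flavor of the finite-stage models that such a treatment begins with, the sketch below runs backward induction on a small stochastic inventory problem. The cost figures and demand distribution are invented purely for illustration and are not from the book.

```python
import numpy as np

def finite_stage_inventory_dp(T=5, max_stock=10, order_cost=2.0, hold_cost=1.0,
                              penalty=6.0, demand_pmf=((0, 0.3), (1, 0.4), (2, 0.3))):
    """Backward induction for a finite-stage stochastic inventory model.
    State: stock on hand; decision: units to order (delivered immediately);
    random element: demand with a known pmf; objective: minimize expected cost."""
    V = np.zeros(max_stock + 1)                       # terminal value V_T = 0
    policy = np.zeros((T, max_stock + 1), dtype=int)
    for t in reversed(range(T)):
        V_next = V.copy()                             # value-to-go from stage t+1
        for s in range(max_stock + 1):
            best_cost, best_a = np.inf, 0
            for a in range(max_stock - s + 1):        # feasible order quantities
                cost = order_cost * a
                for d, p in demand_pmf:
                    sold = min(s + a, d)
                    leftover = s + a - sold
                    cost += p * (hold_cost * leftover
                                 + penalty * (d - sold)      # lost-sales penalty
                                 + V_next[leftover])
                if cost < best_cost:
                    best_cost, best_a = cost, a
            V[s], policy[t, s] = best_cost, best_a    # safe: reads use V_next, not V
    return V, policy

V, policy = finite_stage_inventory_dp()
print("expected cost from empty stock:", round(V[0], 2))
print("first-stage order policy by stock level:", policy[0])
```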

  4. Global Optimization Based on the Hybridization of Harmony Search and Particle Swarm Optimization Methods

    Directory of Open Access Journals (Sweden)

    A. P. Karpenko

    2014-01-01

    Full Text Available We consider a class of stochastic search algorithms of global optimization which in various publications are called behavioural, intellectual, metaheuristic, inspired by the nature, swarm, multi-agent, population, etc. We use the last term. Experience in using population algorithms to solve challenging global optimization problems shows that the application of a single such algorithm may not always be effective. Therefore, great attention is now paid to the hybridization of population algorithms of global optimization. Hybrid algorithms unite various algorithms, or identical algorithms with various values of free parameters; thus the efficiency of one algorithm can compensate for the weakness of another. The purposes of this work are the development of a hybrid global optimization algorithm based on the known harmony search (HS) and particle swarm optimization (PSO) algorithms, its software implementation, the study of its efficiency on a number of known benchmark problems, and a problem of dimensional optimization of a truss structure. We state the global optimization problem, consider the basic HS and PSO algorithms, give a flow chart of the proposed hybrid algorithm called PSO HS, present results of computational experiments with the developed algorithm and software, and formulate the main results of the work and prospects for its development.
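
    A minimal sketch of the PSO half of such a hybrid is shown below: global-best PSO with an inertia weight applied to the Rastrigin benchmark. The harmony-search component and the actual PSO HS hybridization scheme of the paper are not reproduced, and all parameter values are generic assumptions.

```python
import numpy as np

def pso(obj, lb, ub, n_particles=30, n_iter=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Basic global-best particle swarm optimization (minimization)."""
    rng = np.random.default_rng(seed)
    dim = len(lb)
    x = rng.uniform(lb, ub, size=(n_particles, dim))
    v = np.zeros_like(x)
    pbest, pbest_f = x.copy(), np.apply_along_axis(obj, 1, x)
    gbest, gbest_f = pbest[pbest_f.argmin()].copy(), pbest_f.min()
    for _ in range(n_iter):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)   # velocity update
        x = np.clip(x + v, lb, ub)                                   # position update
        f = np.apply_along_axis(obj, 1, x)
        better = f < pbest_f
        pbest[better], pbest_f[better] = x[better], f[better]
        if pbest_f.min() < gbest_f:
            gbest, gbest_f = pbest[pbest_f.argmin()].copy(), pbest_f.min()
    return gbest, gbest_f

# Rastrigin function as a standard global-optimization benchmark
rastrigin = lambda x: 10 * len(x) + np.sum(x**2 - 10 * np.cos(2 * np.pi * x))
best_x, best_f = pso(rastrigin, lb=np.full(5, -5.12), ub=np.full(5, 5.12))
print(best_x, best_f)
```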

  5. Stochastic dynamics of resistive switching: fluctuations lead to optimal particle number

    International Nuclear Information System (INIS)

    Radtke, Paul K; Schimansky-Geier, Lutz; Hazel, Andrew L; Straube, Arthur V

    2017-01-01

    Resistive switching (RS) is one of the foremost candidates for building novel types of non-volatile random access memories. Any practical implementation of such a memory cell calls for a strong miniaturization, at which point fluctuations start playing a role that cannot be neglected. A detailed understanding of switching mechanisms and reliability is essential. For this reason, we formulate a particle model based on the stochastic motion of oxygen vacancies. It allows us to investigate fluctuations in the resistance states of a switch with two active zones. The vacancies’ dynamics are governed by a master equation. Upon the application of a voltage pulse, the vacancies travel collectively through the switch. By deriving a generalized Burgers equation we can interpret this collective motion as nonlinear traveling waves, and numerically verify this result. Further, we define binary logical states by means of the underlying vacancy distributions, and establish a framework of writing and reading such memory element with voltage pulses. Considerations about the discriminability of these operations under fluctuations together with the markedness of the RS effect itself lead to the conclusion, that an intermediate vacancy number is optimal for performance. (paper)

  6. Stochastic dynamics of resistive switching: fluctuations lead to optimal particle number

    Science.gov (United States)

    Radtke, Paul K.; Hazel, Andrew L.; Straube, Arthur V.; Schimansky-Geier, Lutz

    2017-09-01

    Resistive switching (RS) is one of the foremost candidates for building novel types of non-volatile random access memories. Any practical implementation of such a memory cell calls for a strong miniaturization, at which point fluctuations start playing a role that cannot be neglected. A detailed understanding of switching mechanisms and reliability is essential. For this reason, we formulate a particle model based on the stochastic motion of oxygen vacancies. It allows us to investigate fluctuations in the resistance states of a switch with two active zones. The vacancies’ dynamics are governed by a master equation. Upon the application of a voltage pulse, the vacancies travel collectively through the switch. By deriving a generalized Burgers equation we can interpret this collective motion as nonlinear traveling waves, and numerically verify this result. Further, we define binary logical states by means of the underlying vacancy distributions, and establish a framework of writing and reading such memory element with voltage pulses. Considerations about the discriminability of these operations under fluctuations together with the markedness of the RS effect itself lead to the conclusion, that an intermediate vacancy number is optimal for performance.

  7. Optimal stochastic scheduling of CHP-PEMFC, WT, PV units and hydrogen storage in reconfigurable micro grids considering reliability enhancement

    International Nuclear Information System (INIS)

    Bornapour, Mosayeb; Hooshmand, Rahmat-Allah; Khodabakhshian, Amin; Parastegari, Moein

    2017-01-01

    Highlights: • A stochastic model is proposed for the coordinated scheduling of renewable energy sources. • The effect of combined heat and power is considered. • Uncertainties in wind speed, solar radiation and electricity market price are considered. • Profit maximization and emission and AENS minimization are considered as objective functions. • A modified firefly algorithm is employed to solve the problem. - Abstract: Nowadays the operation of renewable energy sources and combined heat and power (CHP) units is increasing in micro grids; therefore, to reach optimal performance, optimal scheduling of these units is required. In this regard, in this paper a micro grid consisting of proton exchange membrane fuel cell-combined heat and power (PEMFC-CHP), wind turbine (WT) and photovoltaic (PV) units is modeled to determine the optimal scheduling state of these units while considering the uncertain behavior of renewable energy resources. For this purpose, a scenario-based method is used for modeling the uncertainties of the electricity market price, the wind speed, and the solar irradiance. It should be noted that a hydrogen storage strategy is also applied in this study for the PEMFC-CHP units. Market profit, total emission production, and average energy not supplied (AENS) are the objective functions considered in this paper simultaneously. Consideration of the above-mentioned objective functions converts the proposed problem to a mixed-integer nonlinear programming problem. To solve this problem, a multi-objective firefly algorithm is used. The uncertainties of the parameters convert the mixed-integer nonlinear programming problem to a stochastic mixed-integer nonlinear programming problem. Moreover, optimal coordinated scheduling of renewable energy resources and thermal units in micro-grids improves the values of the objective functions. Simulation results obtained from a modified 33-bus distribution network operated as a micro grid illustrate the effectiveness of the proposed method.

  8. Stochastic programming framework for Lithuanian pension payout modelling

    Directory of Open Access Journals (Sweden)

    Audrius Kabašinskas

    2014-12-01

    Full Text Available The paper provides a scientific approach to the problem of selecting a pension fund by taking into account some specific characteristics of the Lithuanian Republic (LR) pension accumulation system. A decision-making model that can be used to plan the long-term pension accrual of LR citizens in an optimal way is presented. This model focuses on factors that influence the sustainability of the pension system selection under macroeconomic, social and demographic uncertainty. The model is formalized as a single-stage stochastic optimization problem in which the long-term optimal strategy can be obtained based on the possible scenarios generated for a particular participant. Stochastic programming methods make it possible to include the pension fund rebalancing moment and direction of investment, and to take into account possible changes in personal income, in society and in the global financial market. The collection of methods used to generate scenario trees was found useful for solving strategic planning problems.

  9. H∞ Filtering for Networked Markovian Jump Systems with Multiple Stochastic Communication Delays

    Directory of Open Access Journals (Sweden)

    Hui Dong

    2015-01-01

    Full Text Available This paper is concerned with the H∞ filtering for a class of networked Markovian jump systems with multiple communication delays. Due to the existence of communication constraints, the measurement signal cannot arrive at the filter completely on time, and the stochastic communication delays are considered in the filter design. Firstly, a set of stochastic variables is introduced to model the occurrence probabilities of the delays. Then based on the stochastic system approach, a sufficient condition is obtained such that the filtering error system is stable in the mean-square sense and with a prescribed H∞ disturbance attenuation level. The optimal filter gain parameters can be determined by solving a convex optimization problem. Finally, a simulation example is given to show the effectiveness of the proposed filter design method.

  10. Biological Inspired Stochastic Optimization Technique (PSO) for DOA and Amplitude Estimation of Antenna Arrays Signal Processing in RADAR Communication System

    Directory of Open Access Journals (Sweden)

    Khurram Hammed

    2016-01-01

    Full Text Available This paper presents a stochastic global optimization technique known as Particle Swarm Optimization (PSO) for joint estimation of amplitude and direction of arrival of targets in a RADAR communication system. The proposed scheme is an excellent optimization methodology and a promising approach for solving the DOA problems in communication systems. Moreover, PSO is quite suitable for real-time scenarios and easy to implement in hardware. In this study, a uniform linear array is used and targets are assumed to be in the far field of the array. Formulation of the fitness function is based on mean square error and this function requires a single snapshot to obtain the best possible solution. To check the accuracy of the algorithm, all of the results are taken by varying the number of antenna elements and targets. Finally, these results are compared with existing heuristic techniques to show the accuracy of PSO.
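
    The following minimal sketch (not the authors' implementation) illustrates the general PSO recipe referred to above on a stand-in objective; the swarm size, inertia weight and acceleration coefficients are assumed placeholder values, and the true single-snapshot DOA/amplitude mean-square-error fitness would replace the dummy function.

        import numpy as np

        rng = np.random.default_rng(1)

        def fitness(x):
            # Stand-in objective; in the paper this would be the single-snapshot
            # mean-square error between measured and modeled array outputs.
            return np.sum((x - 0.7) ** 2, axis=-1)

        n_particles, dim, iters = 30, 4, 200
        w, c1, c2 = 0.7, 1.5, 1.5                      # assumed PSO coefficients

        pos = rng.uniform(-1, 1, (n_particles, dim))
        vel = np.zeros_like(pos)
        pbest = pos.copy()
        pbest_val = fitness(pbest)
        gbest = pbest[np.argmin(pbest_val)].copy()

        for _ in range(iters):
            r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
            vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
            pos += vel
            val = fitness(pos)
            improved = val < pbest_val
            pbest[improved], pbest_val[improved] = pos[improved], val[improved]
            gbest = pbest[np.argmin(pbest_val)].copy()

        print("best solution found:", gbest)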

  11. Classical and Impulse Stochastic Control on the Optimization of Dividends with Residual Capital at Bankruptcy

    Directory of Open Access Journals (Sweden)

    Peimin Chen

    2017-01-01

    Full Text Available In this paper, we consider the optimization problem of dividends for the terminal bankruptcy model, in which some money would be returned to shareholders at the state of terminal bankruptcy, while accounting for the tax rate and transaction cost for dividend payout. Maximization of both expected total discounted dividends before bankruptcy and expected discounted returned money at the state of terminal bankruptcy becomes a mixed classical-impulse stochastic control problem. In order to solve this problem, we reduce it to quasi-variational inequalities with a nonzero boundary condition. We explicitly construct and verify solutions of these inequalities and present the value function together with the optimal policy.

  12. Aperiodic signals processing via parameter-tuning stochastic resonance in a photorefractive ring cavity

    Directory of Open Access Journals (Sweden)

    Xuefeng Li

    2014-04-01

    Full Text Available Based on numerically solving the generalized nonlinear Langevin equation describing the nonlinear dynamics of stochastic resonance by the fourth-order Runge-Kutta method, an aperiodic stochastic resonance based on an optical bistable system is numerically investigated. The numerical results show that a parameter-tuning stochastic resonance system can be realized by choosing the appropriate optical bistable parameters, which performs well in reconstructing aperiodic signals from a background with a very high noise level. The influences of optical bistable parameters on the stochastic resonance effect are numerically analyzed via cross-correlation, and a maximum cross-correlation gain of 8 is obtained by optimizing optical bistable parameters. This provides a prospective method for reconstructing noise-hidden weak signals in all-optical signal processing systems.
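
    A toy sketch of the aperiodic-stochastic-resonance idea (not the photorefractive cavity equations): a generic double-well Langevin model is driven by an aperiodic signal plus noise and integrated with a simple Euler-Maruyama step instead of the paper's Runge-Kutta scheme, and the input-output cross-correlation is computed; all parameter values are arbitrary.

        import numpy as np

        rng = np.random.default_rng(2)

        dt, n = 1e-3, 50_000
        t = np.arange(n) * dt
        # aperiodic-ish binary drive built from two incommensurate sinusoids (placeholder)
        signal = 0.3 * np.sign(np.sin(2 * np.pi * 0.5 * t) + 0.3 * np.sin(2 * np.pi * 0.13 * t))
        D = 0.5                                     # noise intensity (assumed)

        x = np.zeros(n)
        for k in range(n - 1):
            drift = x[k] - x[k] ** 3 + signal[k]    # generic bistable (double-well) dynamics
            x[k + 1] = x[k] + drift * dt + np.sqrt(2 * D * dt) * rng.standard_normal()

        # cross-correlation coefficient between input signal and system response
        print("cross-correlation:", np.corrcoef(signal, x)[0, 1])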

  13. On Optimal, Minimal BRDF Sampling for Reflectance Acquisition

    DEFF Research Database (Denmark)

    Nielsen, Jannik Boll; Jensen, Henrik Wann; Ramamoorthi, Ravi

    2015-01-01

    The bidirectional reflectance distribution function (BRDF) is critical for rendering, and accurate material representation requires data-driven reflectance models. However, isotropic BRDFs are 3D functions, and measuring the reflectance of a flat sample can require a million incident and outgoing direction pairs, making the use of measured BRDFs impractical. In this paper, we address the problem of reconstructing a measured BRDF from a limited number of samples. We present a novel mapping of the BRDF space, allowing for extraction of descriptive principal components from measured databases, such as the MERL BRDF database. We optimize for the best sampling directions, and explicitly provide the optimal set of incident and outgoing directions in the Rusinkiewicz parameterization for n = {1, 2, 5, 10, 20} samples. Based on the principal components, we describe a method for accurately reconstructing BRDF...

  14. Neutral Backward Stochastic Functional Differential Equations and Their Application

    OpenAIRE

    Wei, Wenning

    2013-01-01

    In this paper we are concerned with a new type of backward equations with anticipation which we call neutral backward stochastic functional differential equations. We obtain the existence and uniqueness and prove a comparison theorem. As an application, we discuss the optimal control of neutral stochastic functional differential equations, establish a Pontryagin maximum principle, and give an explicit optimal value for the linear optimal control.

  15. Initialization and Restart in Stochastic Local Search: Computing a Most Probable Explanation in Bayesian Networks

    Science.gov (United States)

    Mengshoel, Ole J.; Wilkins, David C.; Roth, Dan

    2010-01-01

    For hard computational problems, stochastic local search has proven to be a competitive approach to finding optimal or approximately optimal problem solutions. Two key research questions for stochastic local search algorithms are: Which algorithms are effective for initialization? When should the search process be restarted? In the present work we investigate these research questions in the context of approximate computation of most probable explanations (MPEs) in Bayesian networks (BNs). We introduce a novel approach, based on the Viterbi algorithm, to explanation initialization in BNs. While the Viterbi algorithm works on sequences and trees, our approach works on BNs with arbitrary topologies. We also give a novel formalization of stochastic local search, with focus on initialization and restart, using probability theory and mixture models. Experimentally, we apply our methods to the problem of MPE computation, using a stochastic local search algorithm known as Stochastic Greedy Search. By carefully optimizing both initialization and restart, we reduce the MPE search time for application BNs by several orders of magnitude compared to using uniform at random initialization without restart. On several BNs from applications, the performance of Stochastic Greedy Search is competitive with clique tree clustering, a state-of-the-art exact algorithm used for MPE computation in BNs.
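
    A bare-bones sketch of stochastic local search with explicit initialization and restart phases, on a stand-in objective (this is a generic hill-climbing loop, not the paper's Viterbi-based initialization or Stochastic Greedy Search); the variable count, flip budget and restart count are placeholders.

        import random

        random.seed(0)

        def score(assignment):
            # Stand-in objective; for MPE this would be the joint probability of the
            # assignment under the Bayesian network.
            target = [1, 0, 1, 1, 0, 1, 0, 0]
            return sum(a == t for a, t in zip(assignment, target))

        def local_search(n_vars=8, max_flips=50, restarts=10):
            best, best_score = None, float("-inf")
            for _ in range(restarts):                               # restart loop
                a = [random.randint(0, 1) for _ in range(n_vars)]   # (re)initialization
                for _ in range(max_flips):
                    i = random.randrange(n_vars)                    # stochastic neighbour choice
                    flipped = a.copy()
                    flipped[i] ^= 1
                    if score(flipped) >= score(a):                  # greedy acceptance with plateau moves
                        a = flipped
                if score(a) > best_score:
                    best, best_score = a, score(a)
            return best, best_score

        print(local_search())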

  16. Global synchronization of general delayed complex networks with stochastic disturbances

    International Nuclear Information System (INIS)

    Tu Li-Lan

    2011-01-01

    In this paper, global synchronization of general delayed complex networks with stochastic disturbances, which is a zero-mean real scalar Wiener process, is investigated. The networks under consideration are continuous-time networks with time-varying delay. Based on the stochastic Lyapunov stability theory, Ito's differential rule and the linear matrix inequality (LMI) optimization technique, several delay-dependent synchronous criteria are established, which guarantee the asymptotical mean-square synchronization of drive networks and response networks with stochastic disturbances. The criteria are expressed in terms of LMI, which can be easily solved using the Matlab LMI Control Toolbox. Finally, two examples show the effectiveness and feasibility of the proposed synchronous conditions. (general)

  17. Stochastic models: theory and simulation.

    Energy Technology Data Exchange (ETDEWEB)

    Field, Richard V., Jr.

    2008-03-01

    Many problems in applied science and engineering involve physical phenomena that behave randomly in time and/or space. Examples are diverse and include turbulent flow over an aircraft wing, Earth climatology, material microstructure, and the financial markets. Mathematical models for these random phenomena are referred to as stochastic processes and/or random fields, and Monte Carlo simulation is the only general-purpose tool for solving problems of this type. The use of Monte Carlo simulation requires methods and algorithms to generate samples of the appropriate stochastic model; these samples then become inputs and/or boundary conditions to established deterministic simulation codes. While numerous algorithms and tools currently exist to generate samples of simple random variables and vectors, no cohesive simulation tool yet exists for generating samples of stochastic processes and/or random fields. There are two objectives of this report. First, we provide some theoretical background on stochastic processes and random fields that can be used to model phenomena that are random in space and/or time. Second, we provide simple algorithms that can be used to generate independent samples of general stochastic models. The theory and simulation of random variables and vectors is also reviewed for completeness.

  18. Rational risk-based decision support for drinking water well managers by optimized monitoring designs

    Science.gov (United States)

    Enzenhöfer, R.; Geiges, A.; Nowak, W.

    2011-12-01

    Advection-based well-head protection zones are commonly used to manage the contamination risk of drinking water wells. Considering the insufficient knowledge about hazards and transport properties within the catchment, current Water Safety Plans recommend that catchment managers and stakeholders know, control and monitor all possible hazards within the catchments and perform rational risk-based decisions. Our goal is to supply catchment managers with the required probabilistic risk information, and to generate tools that allow for optimal and rational allocation of resources between improved monitoring versus extended safety margins and risk mitigation measures. To support risk managers with the indispensable information, we address the epistemic uncertainty of advective-dispersive solute transport and well vulnerability (Enzenhoefer et al., 2011) within a stochastic simulation framework. Our framework can separate between uncertainty of contaminant location and actual dilution of peak concentrations by resolving heterogeneity with high-resolution Monte-Carlo simulation. To keep computational costs low, we solve the reverse temporal moment transport equation. Only in post-processing, we recover the time-dependent solute breakthrough curves and the deduced well vulnerability criteria from temporal moments by non-linear optimization. Our first step towards optimal risk management is optimal positioning of sampling locations and optimal choice of data types to reduce best the epistemic prediction uncertainty for well-head delineation, using the cross-bred Likelihood Uncertainty Estimator (CLUE, Leube et al., 2011) for optimal sampling design. Better monitoring leads to more reliable and realistic protection zones and thus helps catchment managers to better justify smaller, yet conservative safety margins. In order to allow an optimal choice in sampling strategies, we compare the trade-off in monitoring versus the delineation costs by accounting for ill

  19. Joint market clearing in a stochastic framework considering power system security

    International Nuclear Information System (INIS)

    Aghaei, J.; Shayanfar, H.A.; Amjady, N.

    2009-01-01

    This paper presents a new stochastic framework for provision of reserve requirements (spinning and non-spinning reserves) as well as energy in day-ahead simultaneous auctions by pool-based aggregated market scheme. The uncertainty of generating units in the form of system contingencies are considered in the market clearing procedure by the stochastic model. The solution methodology consists of two stages, which firstly, employs Monte-Carlo Simulation (MCS) for random scenario generation. Then, the stochastic market clearing procedure is implemented as a series of deterministic optimization problems (scenarios) including non-contingent scenario and different post-contingency states. The objective function of each of these deterministic optimization problems consists of offered cost function (including both energy and reserves offer costs), Lost Opportunity Cost (LOC) and Expected Interruption Cost (EIC). Each optimization problem is solved considering AC power flow and security constraints of the power system. The model is applied to the IEEE 24-bus Reliability Test System (IEEE 24-bus RTS) and simulation studies are carried out to examine the effectiveness of the proposed method.

  20. Statistical inference for discrete-time samples from affine stochastic delay differential equations

    DEFF Research Database (Denmark)

    Küchler, Uwe; Sørensen, Michael

    2013-01-01

    Statistical inference for discrete time observations of an affine stochastic delay differential equation is considered. The main focus is on maximum pseudo-likelihood estimators, which are easy to calculate in practice. A more general class of prediction-based estimating functions is investigated...

  1. Optimism and self-esteem are related to sleep. Results from a large community-based sample.

    Science.gov (United States)

    Lemola, Sakari; Räikkönen, Katri; Gomez, Veronica; Allemand, Mathias

    2013-12-01

    There is evidence that positive personality characteristics, such as optimism and self-esteem, are important for health. Less is known about possible determinants of positive personality characteristics. The aim was to test the relationship of optimism and self-esteem with insomnia symptoms and sleep duration. Sleep parameters, optimism, and self-esteem were assessed by self-report in a community-based sample of 1,805 adults aged between 30 and 84 years in the USA. Moderation of the relation between sleep and positive characteristics by gender and age as well as potential confounding of the association by depressive disorder was tested. Individuals with insomnia symptoms scored lower on optimism and self-esteem largely independent of age and sex, controlling for symptoms of depression and sleep duration. Short sleep duration (below the 7-8 h reference range) was related to low optimism and self-esteem when compared to individuals sleeping 7-8 h, controlling for depressive symptoms. Long sleep duration (>9 h) was also related to low optimism and self-esteem independent of age and sex. Good and sufficient sleep is associated with positive personality characteristics. This relationship is independent of the association between poor sleep and depression.

  2. Minimizing the stochasticity of halos in large-scale structure surveys

    Science.gov (United States)

    Hamaus, Nico; Seljak, Uroš; Desjacques, Vincent; Smith, Robert E.; Baldauf, Tobias

    2010-08-01

    In recent work (Seljak, Hamaus, and Desjacques 2009) it was found that weighting central halo galaxies by halo mass can significantly suppress their stochasticity relative to the dark matter, well below the Poisson model expectation. This is useful for constraining relations between galaxies and the dark matter, such as the galaxy bias, especially in situations where sampling variance errors can be eliminated. In this paper we extend this study with the goal of finding the optimal mass-dependent halo weighting. We use N-body simulations to perform a general analysis of halo stochasticity and its dependence on halo mass. We investigate the stochasticity matrix, defined as Cij≡⟨(δi-biδm)(δj-bjδm)⟩, where δm is the dark matter overdensity in Fourier space, δi the halo overdensity of the i-th halo mass bin, and bi the corresponding halo bias. In contrast to the Poisson model predictions we detect nonvanishing correlations between different mass bins. We also find the diagonal terms to be sub-Poissonian for the highest-mass halos. The diagonalization of this matrix results in one large and one low eigenvalue, with the remaining eigenvalues close to the Poisson prediction 1/n¯, where n¯ is the mean halo number density. The eigenmode with the lowest eigenvalue contains most of the information and the corresponding eigenvector provides an optimal weighting function to minimize the stochasticity between halos and dark matter. We find this optimal weighting function to match linear mass weighting at high masses, while at the low-mass end the weights approach a constant whose value depends on the low-mass cut in the halo mass function. This weighting further suppresses the stochasticity as compared to the previously explored mass weighting. Finally, we employ the halo model to derive the stochasticity matrix and the scale-dependent bias from an analytical perspective. It is remarkably successful in reproducing our numerical results and predicts that the
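
    As a schematic of the eigen-analysis described above (a toy computation, not the authors' N-body pipeline), the sketch below builds a stochasticity matrix from mock halo and matter overdensity modes and extracts the weighting associated with the lowest eigenvalue; the mock biases, noise level and bin count are placeholders.

        import numpy as np

        rng = np.random.default_rng(3)

        n_modes, n_bins = 5000, 4
        delta_m = rng.standard_normal(n_modes)                 # mock matter overdensity modes
        b_true = np.array([1.0, 1.3, 1.8, 2.5])                # mock biases per mass bin
        delta_h = b_true[:, None] * delta_m + 0.3 * rng.standard_normal((n_bins, n_modes))

        # measured bias b_i = <delta_i delta_m> / <delta_m delta_m>
        b_hat = delta_h @ delta_m / (delta_m @ delta_m)

        # stochasticity matrix C_ij = <(delta_i - b_i delta_m)(delta_j - b_j delta_m)>
        resid = delta_h - b_hat[:, None] * delta_m
        C = resid @ resid.T / n_modes

        eigval, eigvec = np.linalg.eigh(C)
        print("eigenvalues of the stochasticity matrix:", eigval)
        print("weighting from the lowest-eigenvalue eigenvector:", eigvec[:, 0])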

  3. Reliability Based Optimal Design of Vertical Breakwaters Modelled as a Series System Failure

    DEFF Research Database (Denmark)

    Christiani, E.; Burcharth, H. F.; Sørensen, John Dalsgaard

    1996-01-01

    Reliability based design of monolithic vertical breakwaters is considered. Probabilistic models of important failure modes such as sliding and rupture failure in the rubble mound and the subsoil are described. Characterisation of the relevant stochastic parameters is presented, relevant design variables are identified and an optimal system reliability formulation is presented. An illustrative example is given.

  4. A Stochastic Inversion Method for Potential Field Data: Ant Colony Optimization

    Science.gov (United States)

    Liu, Shuang; Hu, Xiangyun; Liu, Tianyou

    2014-07-01

    Simulating natural ants' foraging behavior, the ant colony optimization (ACO) algorithm performs excellently in combinatorial optimization problems, for example the traveling salesman problem and the quadratic assignment problem. However, the ACO is seldom used to invert gravitational and magnetic data. On the basis of the continuous and multi-dimensional objective function for potential field data optimization inversion, we present the node partition strategy ACO (NP-ACO) algorithm for inversion of model variables of fixed shape and recovery of physical property distributions of complicated shape models. We divide the continuous variables into discrete nodes and ants directionally tour the nodes by use of transition probabilities. We update the pheromone trails by use of Gaussian mapping between the objective function value and the quantity of pheromone. It can analyze the search results in real time and promote the rate of convergence and precision of inversion. Traditional mappings, including the ant-cycle system, weaken the differences between ant individuals and lead to premature convergence. We tested our method by use of synthetic data and real data from scenarios involving gravity and magnetic anomalies. The inverted model variables and recovered physical property distributions were in good agreement with the true values. The ACO algorithm for binary representation imaging and full imaging can recover sharper physical property distributions than traditional linear inversion methods. The ACO has good optimization capability and some excellent characteristics, for example robustness, parallel implementation, and portability, compared with other stochastic metaheuristics.
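
    A highly simplified sketch of the pheromone-guided node-selection idea (not the NP-ACO implementation): each of a few continuous model parameters is discretized into nodes, ants pick one node per parameter according to pheromone-weighted transition probabilities, and pheromone is reinforced according to the misfit; the objective, grid sizes and rates are all assumed placeholders.

        import numpy as np

        rng = np.random.default_rng(4)

        def objective(x):
            # Stand-in misfit; in practice this is the potential-field data misfit.
            return np.sum((x - np.array([0.2, -0.5, 0.8])) ** 2)

        n_params, n_nodes, n_ants, iters = 3, 21, 20, 100
        grid = np.linspace(-1, 1, n_nodes)            # node partition of each parameter
        tau = np.ones((n_params, n_nodes))            # pheromone trails
        rho = 0.1                                     # evaporation rate (assumed)

        best_x, best_f = None, np.inf
        for _ in range(iters):
            for _ in range(n_ants):
                p = tau / tau.sum(axis=1, keepdims=True)          # transition probabilities
                idx = [rng.choice(n_nodes, p=p[k]) for k in range(n_params)]
                x = grid[idx]
                f = objective(x)
                if f < best_f:
                    best_x, best_f = x, f
                # reward decays with misfit (a stand-in for the paper's Gaussian mapping)
                tau[np.arange(n_params), idx] += np.exp(-f)
            tau *= (1 - rho)                                       # evaporation

        print("best model:", best_x, "misfit:", best_f)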

  5. Stochastic optimization of subprime residential mortgage loan funding and its risks / by B. de Waal

    OpenAIRE

    De Waal, Bernadine

    2010-01-01

    The subprime mortgage crisis (SMC) is an ongoing housing and financial crisis that was triggered by a marked increase in mortgage delinquencies and foreclosures in the U.S. It has had major adverse consequences for banks and financial markets around the globe since it became apparent in 2007. In our research, we examine an originator's (OR's) nonlinear stochastic optimal control problem related to choices regarding deposit inflow rates and marketable securities allocation. Here, ...

  6. Stochastic Funding of a Defined Contribution Pension Plan with Proportional Administrative Costs and Taxation under Mean-Variance Optimization Approach

    Directory of Open Access Journals (Sweden)

    Charles I Nkeki

    2014-11-01

    Full Text Available This paper aims at studying a mean-variance portfolio selection problem with stochastic salary, proportional administrative costs and taxation in the accumulation phase of a defined contribution (DC) pension scheme. The fund process is subjected to taxation while the contribution of the pension plan member (PPM) is tax exempt. It is assumed that the flow of contributions of a PPM is invested into a market that is characterized by a cash account and a stock. The optimal portfolio processes and expected wealth for the PPM are established. The efficient and parabolic frontiers of the PPM's portfolios in the mean-variance sense are obtained. It was found that the capital market line can be attained when the initial fund and the contribution rate are zero. It was also found that the optimal portfolio process involved an inter-temporal hedging term that will offset any shocks to the stochastic salary of the PPM.

  7. Stochastic PSO-based heat and power dispatch under environmental constraints incorporating CHP and wind power units

    Energy Technology Data Exchange (ETDEWEB)

    Piperagkas, G.S.; Anastasiadis, A.G.; Hatziargyriou, N.D. [National Technical University of Athens, School of Electrical and Computer Engineering, Electric Power Division, 9, Iroon Polytechneiou Str., GR-15773 Zografou, Athens (Greece)

    2011-01-15

    In this paper an extended stochastic multi-objective model for economic dispatch (ED) is proposed that incorporates in the optimization process heat and power from CHP units and expected wind power. Stochastic restrictions for the CO2, SO2 and NOx emissions are used as inequality constraints. The ED problem is solved using a multi-objective particle swarm optimization technique. The available wind power is estimated from a transformation of the wind speed, considered as a random variable, into wind power. Simulations are performed on the modified IEEE 30 bus network with 2 cogeneration units and actual wind data. Results concerning minimum cost and emissions reduction options are finally drawn. (author)

  8. Optimization of environmental management strategies through a dynamic stochastic possibilistic multiobjective program.

    Science.gov (United States)

    Zhang, Xiaodong; Huang, Gordon

    2013-02-15

    Greenhouse gas (GHG) emissions from municipal solid waste (MSW) management facilities have become a serious environmental issue. In MSW management, not only economic objectives but also environmental objectives should be considered simultaneously. In this study, a dynamic stochastic possibilistic multiobjective programming (DSPMP) model is developed for supporting MSW management and associated GHG emission control. The DSPMP model improves upon the existing waste management optimization methods through incorporation of fuzzy possibilistic programming and chance-constrained programming into a general mixed-integer multiobjective linear programming (MOP) framework where various uncertainties expressed as fuzzy possibility distributions and probability distributions can be effectively reflected. Two conflicting objectives are integrally considered, including minimization of total system cost and minimization of total GHG emissions from waste management facilities. Three planning scenarios are analyzed and compared, representing different preferences of the decision makers for economic development and environmental-impact (i.e. GHG-emission) issues in integrated MSW management. Optimal decision schemes under three scenarios and different p(i) levels (representing the probability that the constraints would be violated) are generated for planning waste flow allocation and facility capacity expansions as well as GHG emission control. The results indicate that economic and environmental tradeoffs can be effectively reflected through the proposed DSPMP model. The generated decision variables can help the decision makers justify and/or adjust their waste management strategies based on their implicit knowledge and preferences. Copyright © 2012 Elsevier B.V. All rights reserved.

  9. Optimal design of cluster-based ad-hoc networks using probabilistic solution discovery

    International Nuclear Information System (INIS)

    Cook, Jason L.; Ramirez-Marquez, Jose Emmanuel

    2009-01-01

    The reliability of ad-hoc networks is gaining popularity in two areas: as a topic of academic interest and as a key performance parameter for defense systems employing this type of network. The ad-hoc network is dynamic and scalable and these descriptions are what attract its users. However, these descriptions are also synonymous with undefined and unpredictable when considering the impacts on the reliability of the system. The configuration of an ad-hoc network changes continuously and this fact implies that no single mathematical expression or graphical depiction can describe the system reliability-wise. Previous research has used mobility and stochastic models to address this challenge successfully. In this paper, the authors leverage the stochastic approach and build upon it a probabilistic solution discovery (PSD) algorithm to optimize the topology for a cluster-based mobile ad-hoc wireless network (MAWN). Specifically, the membership of nodes within the back-bone network or networks will be assigned in such a way as to maximize reliability subject to a constraint on cost. The constraint may also be considered as a non-monetary cost, such as weight, volume, power, or the like. When a cost is assigned to each component, a maximum cost threshold is assigned to the network, and the method is run; the result is an optimized allocation of the radios enabling back-bone network(s) to provide the most reliable network possible without exceeding the allowable cost. The method is intended for use directly as part of the architectural design process of a cluster-based MAWN to efficiently determine an optimal or near-optimal design solution. It is capable of optimizing the topology based upon all-terminal reliability (ATR), all-operating terminal reliability (AoTR), or two-terminal reliability (2TR).

  10. Data Assimilation with Optimal Maps

    Science.gov (United States)

    El Moselhy, T.; Marzouk, Y.

    2012-12-01

    We present a new approach to Bayesian inference that entirely avoids Markov chain simulation and sequential importance resampling, by constructing a map that pushes forward the prior measure to the posterior measure. Existence and uniqueness of a suitable measure-preserving map is established by formulating the problem in the context of optimal transport theory. The map is written as a multivariate polynomial expansion and computed efficiently through the solution of a stochastic optimization problem. While our previous work [1] focused on static Bayesian inference problems, we now extend the map-based approach to sequential data assimilation, i.e., nonlinear filtering and smoothing. One scheme involves pushing forward a fixed reference measure to each filtered state distribution, while an alternative scheme computes maps that push forward the filtering distribution from one stage to the other. We compare the performance of these schemes and extend the former to problems of smoothing, using a map implementation of the forward-backward smoothing formula. Advantages of a map-based representation of the filtering and smoothing distributions include analytical expressions for posterior moments and the ability to generate arbitrary numbers of independent uniformly-weighted posterior samples without additional evaluations of the dynamical model. Perhaps the main advantage, however, is that the map approach inherently avoids issues of sample impoverishment, since it explicitly represents the posterior as the pushforward of a reference measure, rather than with a particular set of samples. The computational complexity of our algorithm is comparable to state-of-the-art particle filters. Moreover, the accuracy of the approach is controlled via the convergence criterion of the underlying optimization problem. We demonstrate the efficiency and accuracy of the map approach via data assimilation in

  11. Testing the robustness of deterministic models of optimal dynamic pricing and lot-sizing for deteriorating items under stochastic conditions

    DEFF Research Database (Denmark)

    Ghoreishi, Maryam

    2018-01-01

    Many models within the field of optimal dynamic pricing and lot-sizing for deteriorating items assume everything is deterministic and develop a differential equation as the core of analysis. Two prominent examples are the papers by Rajan et al. (Manag Sci 38:240–262, 1992) and Abad (1996). In this study, we will try to expose the models by Abad (1996) and Rajan et al. (1992) to stochastic inputs, while designing these stochastic inputs so that they are aligned as closely as possible with the assumptions of those papers. We do our investigation through a numerical test where we test the robustness of the numerical results reported in Rajan et al. (1992) and Abad (1996) in a simulation model. Our numerical results seem to confirm that the results stated in these papers are indeed robust when exposed to stochastic inputs.

  12. Optimization of observation plan based on the stochastic characteristics of the geodetic network

    Directory of Open Access Journals (Sweden)

    Pachelski Wojciech

    2016-06-01

    Full Text Available Optimal design of a geodetic network is a basic subject of many engineering projects. An observation plan is a concluding part of the process. Through the adjustment, any particular observation within the network makes a different contribution to, and has a different impact on, the values and accuracy characteristics of the unknowns. The problem of optimal design can be solved by means of computer simulation. This paper presents a new method of simulation based on sequential estimation of individual observations in a step-by-step manner, by means of the so-called filtering equations. The algorithm aims at satisfying different criteria of accuracy according to various interpretations of the covariance matrix. Apart from these, the amount of effort, defined as the minimum number of observations required, is also used as an optimization criterion.
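
    A schematic of the step-by-step idea (not the paper's full filtering formulation): candidate observations are added one at a time, the parameter covariance is updated sequentially with a filtering-type formula, and at each step the candidate that shrinks the covariance trace the most is selected; the mock design-matrix rows, noise level and counts are placeholders.

        import numpy as np

        rng = np.random.default_rng(5)

        n_params = 3
        candidates = rng.standard_normal((12, n_params))   # rows a_i of the design matrix (mock observations)
        sigma2 = 1.0                                        # observation variance (assumed)

        def update(C, a):
            # Sequential covariance update for a single observation a:
            # C_new = C - C a a^T C / (sigma2 + a^T C a)
            Ca = C @ a
            return C - np.outer(Ca, Ca) / (sigma2 + a @ Ca)

        C = np.eye(n_params) * 1e3                          # diffuse prior covariance of the unknowns
        plan, remaining = [], list(range(len(candidates)))
        for _ in range(5):                                  # choose 5 observations sequentially
            best = min(remaining, key=lambda i: np.trace(update(C, candidates[i])))
            C = update(C, candidates[best])
            plan.append(best)
            remaining.remove(best)

        print("selected observations:", plan)
        print("trace of the final covariance:", np.trace(C))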

  13. Probabilistic numerical methods for high-dimensional stochastic control and valuation problems on electricity markets

    International Nuclear Information System (INIS)

    Langrene, Nicolas

    2014-01-01

    This thesis deals with the numerical solution of general stochastic control problems, with notable applications for electricity markets. We first propose a structural model for the price of electricity, allowing for price spikes well above the marginal fuel price under strained market conditions. This model allows us to price and partially hedge electricity derivatives, using fuel forwards as hedging instruments. Then, we propose an algorithm, which combines Monte-Carlo simulations with local basis regressions, to solve general optimal switching problems. A comprehensive rate of convergence of the method is provided. Moreover, we manage to make the algorithm parsimonious in memory (and hence suitable for high dimensional problems) by generalizing to this framework a memory reduction method that avoids the storage of the sample paths. We illustrate this on the problem of investments in new power plants (our structural power price model allowing the new plants to impact the price of electricity). Finally, we study more general stochastic control problems (the control can be continuous and impact the drift and volatility of the state process), the solutions of which belong to the class of fully nonlinear Hamilton-Jacobi-Bellman equations, and can be handled via constrained Backward Stochastic Differential Equations, for which we develop a backward algorithm based on control randomization and parametric optimizations. A rate of convergence between the constrained BSDE and its discrete version is provided, as well as an estimate of the optimal control. This algorithm is then applied to the problem of super replication of options under uncertain volatilities (and correlations). (author)

  14. Discrete least squares polynomial approximation with random evaluations − application to parametric and stochastic elliptic PDEs

    KAUST Repository

    Chkifa, Abdellah

    2015-04-08

    Motivated by the numerical treatment of parametric and stochastic PDEs, we analyze the least-squares method for polynomial approximation of multivariate functions based on random sampling according to a given probability measure. Recent work has shown that in the univariate case, the least-squares method is quasi-optimal in expectation in [A. Cohen, M A. Davenport and D. Leviatan. Found. Comput. Math. 13 (2013) 819–834] and in probability in [G. Migliorati, F. Nobile, E. von Schwerin, R. Tempone, Found. Comput. Math. 14 (2014) 419–456], under suitable conditions that relate the number of samples with respect to the dimension of the polynomial space. Here “quasi-optimal” means that the accuracy of the least-squares approximation is comparable with that of the best approximation in the given polynomial space. In this paper, we discuss the quasi-optimality of the polynomial least-squares method in arbitrary dimension. Our analysis applies to any arbitrary multivariate polynomial space (including tensor product, total degree or hyperbolic crosses), under the minimal requirement that its associated index set is downward closed. The optimality criterion only involves the relation between the number of samples and the dimension of the polynomial space, independently of the anisotropic shape and of the number of variables. We extend our results to the approximation of Hilbert space-valued functions in order to apply them to the approximation of parametric and stochastic elliptic PDEs. As a particular case, we discuss “inclusion type” elliptic PDE models, and derive an exponential convergence estimate for the least-squares method. Numerical results confirm our estimate, yet pointing out a gap between the condition necessary to achieve optimality in the theory, and the condition that in practice yields the optimal convergence rate.
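
    A small one-dimensional illustration of the setting (not the paper's analysis): random samples are drawn from the uniform measure, a polynomial is fitted by least squares in a space whose dimension is kept small relative to the number of samples, and the error against the underlying function is checked; the target function, degree and sample size are placeholders.

        import numpy as np

        rng = np.random.default_rng(6)

        f = lambda x: np.exp(x) * np.sin(3 * x)       # stand-in target function
        n_samples, degree = 200, 8                    # number of samples well above the polynomial dimension

        x = rng.uniform(-1, 1, n_samples)             # random evaluations w.r.t. the uniform measure
        y = f(x)

        # least-squares fit in the Legendre basis (orthogonal for the uniform measure)
        coeffs = np.polynomial.legendre.legfit(x, y, degree)

        x_test = np.linspace(-1, 1, 1000)
        err = np.max(np.abs(np.polynomial.legendre.legval(x_test, coeffs) - f(x_test)))
        print("max error of the least-squares approximation:", err)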

  15. Analyses of Methods and Algorithms for Modelling and Optimization of Biotechnological Processes

    Directory of Open Access Journals (Sweden)

    Stoyan Stoyanov

    2009-08-01

    Full Text Available A review of the problems in modeling, optimization and control of biotechnological processes and systems is given in this paper. An analysis of existing and some new practical optimization methods for searching for the global optimum, based on various advanced strategies (heuristic, stochastic, genetic and combined), is presented in the paper. Methods based on the sensitivity theory, stochastic and mixed strategies for optimization with partial knowledge about kinetic, technical and economic parameters in optimization problems are discussed. Several approaches for the multi-criteria optimization tasks are analyzed. The problems concerning optimal controls of biotechnological systems are also discussed.

  16. A proposal of optimal sampling design using a modularity strategy

    Science.gov (United States)

    Simone, A.; Giustolisi, O.; Laucelli, D. B.

    2016-08-01

    In real water distribution networks (WDNs) are present thousands nodes and optimal placement of pressure and flow observations is a relevant issue for different management tasks. The planning of pressure observations in terms of spatial distribution and number is named sampling design and it was faced considering model calibration. Nowadays, the design of system monitoring is a relevant issue for water utilities e.g., in order to manage background leakages, to detect anomalies and bursts, to guarantee service quality, etc. In recent years, the optimal location of flow observations related to design of optimal district metering areas (DMAs) and leakage management purposes has been faced considering optimal network segmentation and the modularity index using a multiobjective strategy. Optimal network segmentation is the basis to identify network modules by means of optimal conceptual cuts, which are the candidate locations of closed gates or flow meters creating the DMAs. Starting from the WDN-oriented modularity index, as a metric for WDN segmentation, this paper proposes a new way to perform the sampling design, i.e., the optimal location of pressure meters, using newly developed sampling-oriented modularity index. The strategy optimizes the pressure monitoring system mainly based on network topology and weights assigned to pipes according to the specific technical tasks. A multiobjective optimization minimizes the cost of pressure meters while maximizing the sampling-oriented modularity index. The methodology is presented and discussed using the Apulian and Exnet networks.

  17. A hybrid evolutionary algorithm for multi-objective anatomy-based dose optimization in high-dose-rate brachytherapy

    International Nuclear Information System (INIS)

    Lahanas, M; Baltas, D; Zamboglou, N

    2003-01-01

    Multiple objectives must be considered in anatomy-based dose optimization for high-dose-rate brachytherapy and a large number of parameters must be optimized to satisfy often competing objectives. For objectives expressed solely in terms of dose variances, deterministic gradient-based algorithms can be applied and a weighted sum approach is able to produce a representative set of non-dominated solutions. As the number of objectives increases, or non-convex objectives are used, local minima can be present and deterministic or stochastic algorithms such as simulated annealing either cannot be used or are not efficient. In this case we employ a modified hybrid version of the multi-objective optimization algorithm NSGA-II. This, in combination with the deterministic optimization algorithm, produces a representative sample of the Pareto set. This algorithm can be used with any kind of objectives, including non-convex, and does not require artificial importance factors. A representation of the trade-off surface can be obtained with more than 1000 non-dominated solutions in 2-5 min. An analysis of the solutions provides information on the possibilities available using these objectives. Simple decision making tools allow the selection of a solution that provides a best fit for the clinical goals. We show an example with a prostate implant and compare results obtained by variance and dose-volume histogram (DVH) based objectives

  18. Sample Adaptive Offset Optimization in HEVC

    Directory of Open Access Journals (Sweden)

    Yang Zhang

    2014-11-01

    Full Text Available As the next-generation video coding standard, High Efficiency Video Coding (HEVC) adopted many useful tools to improve coding efficiency. Sample Adaptive Offset (SAO) is a technique to reduce sample distortion by providing offsets to pixels in the in-loop filter. In SAO, pixels in a Largest Coding Unit (LCU) are classified into several categories, and then categories and offsets are assigned based on Rate-Distortion Optimization (RDO) of the reconstructed pixels in the LCU. Pixels in an LCU are operated on by the same SAO process; however, the transform and inverse transform make the distortion of pixels at the Transform Unit (TU) edge larger than the distortion inside the TU, even after deblocking filtering (DF) and SAO. The SAO categories can also be refined, since they are not appropriate in many cases. This paper proposes a TU edge offset mode and a category refinement for SAO in HEVC. Experimental results show that these two optimizations achieve gains of -0.13 and -0.2, respectively, compared with the SAO in HEVC. The proposed algorithm, which uses both optimizations, achieves a -0.23 BD-rate gain compared with the SAO in HEVC, a 47 % improvement, with nearly no increase in coding time.

  19. Momentum and Stochastic Momentum for Stochastic Gradient, Newton, Proximal Point and Subspace Descent Methods

    KAUST Repository

    Loizou, Nicolas

    2017-12-27

    In this paper we study several classes of stochastic optimization algorithms enriched with heavy ball momentum. Among the methods studied are: stochastic gradient descent, stochastic Newton, stochastic proximal point and stochastic dual subspace ascent. This is the first time momentum variants of several of these methods are studied. We choose to perform our analysis in a setting in which all of the above methods are equivalent. We prove global non-asymptotic linear convergence rates for all methods and various measures of success, including primal function values, primal iterates (in the L2 sense), and dual function values. We also show that the primal iterates converge at an accelerated linear rate in the L1 sense. This is the first time a linear rate is shown for the stochastic heavy ball method (i.e., stochastic gradient descent method with momentum). Under somewhat weaker conditions, we establish a sublinear convergence rate for Cesaro averages of primal iterates. Moreover, we propose a novel concept, which we call stochastic momentum, aimed at decreasing the cost of performing the momentum step. We prove linear convergence of several stochastic methods with stochastic momentum, and show that in some sparse data regimes and for sufficiently small momentum parameters, these methods enjoy better overall complexity than methods with deterministic momentum. Finally, we perform extensive numerical testing on artificial and real datasets, including data coming from average consensus problems.
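
    A minimal sketch of the heavy-ball (momentum) update discussed above, applied to a consistent least-squares toy problem rather than the paper's experiments; the step size, momentum parameter and problem size are arbitrary placeholder values.

        import numpy as np

        rng = np.random.default_rng(7)

        A = rng.standard_normal((200, 10))
        x_true = rng.standard_normal(10)
        b = A @ x_true                      # consistent linear system (toy problem)

        alpha, beta = 0.005, 0.9            # step size and heavy-ball momentum parameter (assumed)
        x, x_prev = np.zeros(10), np.zeros(10)

        for _ in range(5000):
            i = rng.integers(len(b))                        # one random row: a stochastic gradient
            g = (A[i] @ x - b[i]) * A[i]
            x_next = x - alpha * g + beta * (x - x_prev)    # stochastic heavy ball update
            x_prev, x = x, x_next

        print("distance to the solution:", np.linalg.norm(x - x_true))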

  20. Momentum and Stochastic Momentum for Stochastic Gradient, Newton, Proximal Point and Subspace Descent Methods

    KAUST Repository

    Loizou, Nicolas; Richtarik, Peter

    2017-01-01

    In this paper we study several classes of stochastic optimization algorithms enriched with heavy ball momentum. Among the methods studied are: stochastic gradient descent, stochastic Newton, stochastic proximal point and stochastic dual subspace ascent. This is the first time momentum variants of several of these methods are studied. We choose to perform our analysis in a setting in which all of the above methods are equivalent. We prove global non-asymptotic linear convergence rates for all methods and various measures of success, including primal function values, primal iterates (in the L2 sense), and dual function values. We also show that the primal iterates converge at an accelerated linear rate in the L1 sense. This is the first time a linear rate is shown for the stochastic heavy ball method (i.e., stochastic gradient descent method with momentum). Under somewhat weaker conditions, we establish a sublinear convergence rate for Cesaro averages of primal iterates. Moreover, we propose a novel concept, which we call stochastic momentum, aimed at decreasing the cost of performing the momentum step. We prove linear convergence of several stochastic methods with stochastic momentum, and show that in some sparse data regimes and for sufficiently small momentum parameters, these methods enjoy better overall complexity than methods with deterministic momentum. Finally, we perform extensive numerical testing on artificial and real datasets, including data coming from average consensus problems.

  1. A complementarity model for solving stochastic natural gas market equilibria

    International Nuclear Information System (INIS)

    Jifang Zhuang; Gabriel, S.A.

    2008-01-01

    This paper presents a stochastic equilibrium model for deregulated natural gas markets. Each market participant (pipeline operators, producers, etc.) solves a stochastic optimization problem whose optimality conditions, when combined with market-clearing conditions give rise to a certain mixed complementarity problem (MiCP). The stochastic aspects are depicted by a recourse problem for each player in which the first-stage decisions relate to long-term contracts and the second-stage decisions relate to spot market activities for three seasons. Besides showing that such a market model is an instance of a MiCP, we provide theoretical results concerning long-term and spot market prices and solve the resulting MiCP for a small yet representative market. We also note an interesting observation for the value of the stochastic solution for non-optimization problems. (author)

  2. A complementarity model for solving stochastic natural gas market equilibria

    International Nuclear Information System (INIS)

    Zhuang Jifang; Gabriel, Steven A.

    2008-01-01

    This paper presents a stochastic equilibrium model for deregulated natural gas markets. Each market participant (pipeline operators, producers, etc.) solves a stochastic optimization problem whose optimality conditions, when combined with market-clearing conditions give rise to a certain mixed complementarity problem (MiCP). The stochastic aspects are depicted by a recourse problem for each player in which the first-stage decisions relate to long-term contracts and the second-stage decisions relate to spot market activities for three seasons. Besides showing that such a market model is an instance of a MiCP, we provide theoretical results concerning long-term and spot market prices and solve the resulting MiCP for a small yet representative market. We also note an interesting observation for the value of the stochastic solution for non-optimization problems

  3. Determination of total concentration of chemically labeled metabolites as a means of metabolome sample normalization and sample loading optimization in mass spectrometry-based metabolomics.

    Science.gov (United States)

    Wu, Yiman; Li, Liang

    2012-12-18

    For mass spectrometry (MS)-based metabolomics, it is important to use the same amount of starting materials from each sample to compare the metabolome changes in two or more comparative samples. Unfortunately, for biological samples, the total amount or concentration of metabolites is difficult to determine. In this work, we report a general approach of determining the total concentration of metabolites based on the use of chemical labeling to attach a UV absorbent to the metabolites to be analyzed, followed by rapid step-gradient liquid chromatography (LC) UV detection of the labeled metabolites. It is shown that quantification of the total labeled analytes in a biological sample facilitates the preparation of an appropriate amount of starting materials for MS analysis as well as the optimization of the sample loading amount to a mass spectrometer for achieving optimal detectability. As an example, dansylation chemistry was used to label the amine- and phenol-containing metabolites in human urine samples. LC-UV quantification of the labeled metabolites could be optimally performed at the detection wavelength of 338 nm. A calibration curve established from the analysis of a mixture of 17 labeled amino acid standards was found to have the same slope as that from the analysis of the labeled urinary metabolites, suggesting that the labeled amino acid standard calibration curve could be used to determine the total concentration of the labeled urinary metabolites. A workflow incorporating this LC-UV metabolite quantification strategy was then developed in which all individual urine samples were first labeled with (12)C-dansylation and the concentration of each sample was determined by LC-UV. The volumes of urine samples taken for producing the pooled urine standard were adjusted to ensure an equal amount of labeled urine metabolites from each sample was used for the pooling. The pooled urine standard was then labeled with (13)C-dansylation. Equal amounts of the (12)C
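
    A schematic of the normalization arithmetic described above (illustrative numbers only, not data from the paper): a calibration slope from the labeled standards converts a UV peak area into a total labeled-metabolite concentration, from which the per-sample volume needed to contribute an equal amount to the pool is computed.

        # slope of the labeled-standard calibration curve, in mM per unit peak area (assumed)
        slope = 2.5e-3
        # hypothetical UV peak areas of three (12)C-labeled urine samples
        peak_areas = {"urine_A": 4.0e4, "urine_B": 2.5e4, "urine_C": 5.5e4}

        # total concentration of labeled metabolites per sample (mM = nmol/uL)
        concentrations = {s: slope * a for s, a in peak_areas.items()}

        target_amount = 500.0      # nmol of labeled metabolites wanted from each sample (assumed)
        volumes_uL = {s: target_amount / c for s, c in concentrations.items()}   # nmol / (nmol/uL) = uL

        for s in peak_areas:
            print(f"{s}: {concentrations[s]:.1f} mM total labeled metabolites, "
                  f"take {volumes_uL[s]:.2f} uL for pooling")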

  4. Evolving Stochastic Learning Algorithm based on Tsallis entropic index

    Science.gov (United States)

    Anastasiadis, A. D.; Magoulas, G. D.

    2006-03-01

    In this paper, inspired by our previous algorithm, which was based on the theory of Tsallis statistical mechanics, we develop a new evolving stochastic learning algorithm for neural networks. The new algorithm combines deterministic and stochastic search steps by employing a different adaptive stepsize for each network weight, and applies a form of noise that is characterized by the nonextensive entropic index q, regulated by a weight decay term. The behavior of the learning algorithm can be made more stochastic or deterministic depending on the trade-off between the temperature T and the q values. This is achieved by introducing a formula that defines a time-dependent relationship between these two important learning parameters. Our experimental study verifies that there are indeed improvements in the convergence speed of this new evolving stochastic learning algorithm, which makes learning faster than using the original Hybrid Learning Scheme (HLS). In addition, experiments are conducted to explore the influence of the entropic index q and temperature T on the convergence speed and stability of the proposed method.

  5. A Parallel Particle Swarm Optimizer

    National Research Council Canada - National Science Library

    Schutte, J. F; Fregly, B .J; Haftka, R. T; George, A. D

    2003-01-01

    .... Motivated by a computationally demanding biomechanical system identification problem, we introduce a parallel implementation of a stochastic population based global optimizer, the Particle Swarm...

  6. Topology optimization based on the harmony search method

    International Nuclear Information System (INIS)

    Lee, Seung-Min; Han, Seog-Young

    2017-01-01

    A new topology optimization scheme based on a Harmony search (HS) as a metaheuristic method was proposed and applied to static stiffness topology optimization problems. To apply the HS to topology optimization, the variables in HS were transformed to those in topology optimization. Compliance was used as an objective function, and harmony memory was defined as the set of the optimized topology. Also, a parametric study for Harmony memory considering rate (HMCR), Pitch adjusting rate (PAR), and Bandwidth (BW) was performed to find the appropriate range for topology optimization. Various techniques were employed such as a filtering scheme, simple average scheme and harmony rate. To provide a robust optimized topology, the concept of the harmony rate update rule was also implemented. Numerical examples are provided to verify the effectiveness of the HS by comparing the optimal layouts of the HS with those of Bidirectional evolutionary structural optimization (BESO) and Artificial bee colony algorithm (ABCA). The following conclusions could be made: (1) The proposed topology scheme is very effective for static stiffness topology optimization problems in terms of stability, robustness and convergence rate. (2) The suggested method provides a symmetric optimized topology despite the fact that the HS is a stochastic method like the ABCA. (3) The proposed scheme is applicable and practical in manufacturing since it produces a solid-void design of the optimized topology. (4) The suggested method appears to be very effective for large scale problems like topology optimization.
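
    A bare-bones harmony search sketch on a stand-in objective (not the topology-optimization variant described above, which works on compliance and element densities); the HMCR, PAR, bandwidth, memory size and iteration count are assumed placeholder values.

        import numpy as np

        rng = np.random.default_rng(8)

        def objective(x):
            # Stand-in for compliance; the paper minimizes structural compliance.
            return np.sum(x ** 2) + np.sum(np.sin(5 * x))

        dim, hm_size, iters = 5, 20, 2000
        hmcr, par, bw = 0.9, 0.3, 0.05        # memory considering rate, pitch adjusting rate, bandwidth

        hm = rng.uniform(-1, 1, (hm_size, dim))                 # harmony memory
        hm_val = np.array([objective(h) for h in hm])

        for _ in range(iters):
            new = np.empty(dim)
            for j in range(dim):
                if rng.random() < hmcr:                         # draw a value from memory ...
                    new[j] = hm[rng.integers(hm_size), j]
                    if rng.random() < par:                      # ... with occasional pitch adjustment
                        new[j] += bw * rng.uniform(-1, 1)
                else:                                           # or improvise a random value
                    new[j] = rng.uniform(-1, 1)
            f = objective(new)
            worst = np.argmax(hm_val)
            if f < hm_val[worst]:                               # replace the worst harmony
                hm[worst], hm_val[worst] = new, f

        print("best value found:", hm_val.min())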

  7. Topology optimization based on the harmony search method

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Seung-Min; Han, Seog-Young [Hanyang University, Seoul (Korea, Republic of)

    2017-06-15

    A new topology optimization scheme based on a Harmony search (HS) as a metaheuristic method was proposed and applied to static stiffness topology optimization problems. To apply the HS to topology optimization, the variables in HS were transformed to those in topology optimization. Compliance was used as an objective function, and harmony memory was defined as the set of the optimized topology. Also, a parametric study for Harmony memory considering rate (HMCR), Pitch adjusting rate (PAR), and Bandwidth (BW) was performed to find the appropriate range for topology optimization. Various techniques were employed such as a filtering scheme, simple average scheme and harmony rate. To provide a robust optimized topology, the concept of the harmony rate update rule was also implemented. Numerical examples are provided to verify the effectiveness of the HS by comparing the optimal layouts of the HS with those of Bidirectional evolutionary structural optimization (BESO) and Artificial bee colony algorithm (ABCA). The following conclusions could be made: (1) The proposed topology scheme is very effective for static stiffness topology optimization problems in terms of stability, robustness and convergence rate. (2) The suggested method provides a symmetric optimized topology despite the fact that the HS is a stochastic method like the ABCA. (3) The proposed scheme is applicable and practical in manufacturing since it produces a solid-void design of the optimized topology. (4) The suggested method appears to be very effective for large scale problems like topology optimization.

  8. A bi-level stochastic scheduling optimization model for a virtual power plant connected to a wind–photovoltaic–energy storage system considering the uncertainty and demand response

    International Nuclear Information System (INIS)

    Ju, Liwei; Tan, Zhongfu; Yuan, Jinyun; Tan, Qingkun; Li, Huanhuan; Dong, Fugui

    2016-01-01

    Highlights: • Our research focuses on Virtual Power Plant (VPP). • Virtual Power Plant consists of WPP, PV, CGT, ESSs and DRPs. • Robust optimization theory is introduced to analyze uncertainties. • A bi-level stochastic scheduling optimization model is proposed for VPP. • Models are built to measure the impacts of ESSs and DERPs on VPP operation. - Abstract: To reduce the uncertain influence of wind power and solar photovoltaic power on virtual power plant (VPP) operation, robust optimization theory (ROT) is introduced to build a stochastic scheduling model for VPP considering the uncertainty, price-based demand response (PBDR) and incentive-based demand response (IBDR). First, the VPP components are described including the wind power plant (WPP), photovoltaic generators (PV), convention gas turbine (CGT), energy storage systems (ESSs) and demand resource providers (DRPs). Then, a scenario generation and reduction frame is proposed for analyzing and simulating output stochastics based on the interval method and the Kantorovich distance. Second, a bi-level robust scheduling model is proposed with a double robust coefficient for WPP and PV. In the upper layer model, the maximum VPP operation income is taken as the optimization objective for building the scheduling model with the day-ahead prediction output of WPP and PV. In the lower layer model, the day-ahead scheduling scheme is revised with the actual output of the WPP and PV under the objectives of the minimum system net load and the minimum system operation cost. Finally, the independent micro-grid in a coastal island in eastern China is used for the simulation analysis. The results illustrate that the model can overcome the influence of uncertainty on VPP operations and reduce the system power shortage cost by connecting the day-ahead scheduling with the real-time scheduling. ROT could provide a flexible decision tool for decision makers, effectively addressing system uncertainties. ESSs could

  9. Driving-behavior-aware stochastic model predictive control for plug-in hybrid electric buses

    International Nuclear Information System (INIS)

    Li, Liang; You, Sixiong; Yang, Chao; Yan, Bingjie; Song, Jian; Chen, Zheng

    2016-01-01

    Highlights: • The novel approximated global optimal energy management strategy has been proposed for hybrid powertrains. • Eight typical driving behaviors have been classified with K-means to deal with the multiplicative traffic conditions. • The stochastic driver models of different driving behaviors were established based on Markov chains. • ECMS was used to modify the SMPC-based energy management strategy to improve its fuel economy. • The approximated global optimal energy management strategy for plug-in hybrid electric buses has been verified and analyzed. - Abstract: The driving cycles of a city bus are statistically characterized by repetitive features, which makes a predictive energy management strategy very desirable for obtaining near-optimal fuel economy of a plug-in hybrid electric bus. However, dealing with complicated traffic conditions and finding an approximated global optimal strategy applicable to the plug-in hybrid electric bus remains challenging. To solve this problem, a novel driving-behavior-aware modified stochastic model predictive control method is proposed for the plug-in hybrid electric bus. Firstly, K-means clustering is employed to classify driving behaviors, and driver models based on Markov chains are obtained for the different kinds of driving behaviors. With the obtained driver behaviors regarded as stochastic disturbance inputs, a locally minimal fuel consumption can be obtained with traditional stochastic model predictive control at each step, taking the tracking of the reference battery state-of-charge trajectory into consideration over the finite prediction horizons. However, this technique is still accompanied by some working points with reduced/worsened fuel economy. Thus, the stochastic model predictive control is modified with the equivalent consumption minimization strategy to eliminate these undesirable working points. The results on real-world city bus routes show that the
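
    To make the Markov-chain driver-modeling step concrete, the sketch below estimates a first-order velocity-state transition matrix from a single driving trace; the state discretization and synthetic data are illustrative assumptions, not the paper's calibration for the eight driving-behavior clusters.

```python
import numpy as np

def markov_transition_matrix(velocity_trace, bin_edges):
    """Estimate a first-order Markov transition matrix over discretized velocity states."""
    states = np.digitize(velocity_trace, bin_edges)          # map speeds to state indices
    n = len(bin_edges) + 1
    counts = np.zeros((n, n))
    for s, s_next in zip(states[:-1], states[1:]):
        counts[s, s_next] += 1
    row_sums = counts.sum(axis=1, keepdims=True)
    row_sums[row_sums == 0] = 1.0                             # avoid division by zero for unvisited states
    return counts / row_sums

# Toy usage: one driving-behavior cluster's velocity trace (m/s), 2 m/s bins up to 20 m/s.
rng = np.random.default_rng(0)
v = np.clip(np.cumsum(rng.normal(0, 0.5, size=1000)) + 8.0, 0.0, 20.0)
P = markov_transition_matrix(v, bin_edges=np.arange(2.0, 20.0, 2.0))
```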

  10. A primal-dual decomposition based interior point approach to two-stage stochastic linear programming

    NARCIS (Netherlands)

    A.B. Berkelaar (Arjan); C.L. Dert (Cees); K.P.B. Oldenkamp; S. Zhang (Shuzhong)

    1999-01-01

    textabstractDecision making under uncertainty is a challenge faced by many decision makers. Stochastic programming is a major tool developed to deal with optimization with uncertainties that has found applications in, e.g. finance, such as asset-liability and bond-portfolio management.

  11. A Sequential Optimization Sampling Method for Metamodels with Radial Basis Functions

    Science.gov (United States)

    Pan, Guang; Ye, Pengcheng; Yang, Zhidong

    2014-01-01

    Metamodels have been widely used in engineering design to facilitate the analysis and optimization of complex systems that involve computationally expensive simulation programs. The accuracy of metamodels is strongly affected by the sampling methods. In this paper, a new sequential optimization sampling method is proposed. Based on the new sampling method, metamodels are constructed repeatedly through the addition of sampling points, namely, the extremum points of the metamodel and the minimum points of a density function. More accurate metamodels are then obtained by repeating this procedure. The validity and effectiveness of the proposed sampling method are examined by studying typical numerical examples. PMID:25133206
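
    The core loop suggested by the abstract (fit a radial basis function metamodel, then add the metamodel's extremum as a new sample point) can be sketched as follows; the Gaussian kernel, shape parameter and test function are assumptions for illustration only, not the paper's exact sampling scheme.

```python
import numpy as np
from scipy.optimize import minimize

def fit_rbf(X, y, eps=1.0):
    """Fit a Gaussian RBF interpolant: weights w solve Phi w = y."""
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
    w = np.linalg.solve(np.exp(-(eps * d) ** 2), y)
    return lambda x: float(np.exp(-(eps * np.linalg.norm(X - x, axis=1)) ** 2) @ w)

def sequential_sampling(f, bounds, n_init=6, n_add=10, seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = bounds[:, 0], bounds[:, 1]
    X = rng.uniform(lo, hi, size=(n_init, len(lo)))
    y = np.array([f(x) for x in X])
    for _ in range(n_add):
        surrogate = fit_rbf(X, y)
        # New sample at the metamodel minimum, searched from the best point found so far.
        res = minimize(surrogate, X[np.argmin(y)], bounds=list(zip(lo, hi)))
        X = np.vstack([X, res.x])
        y = np.append(y, f(res.x))
    return X, y

# Toy usage on an inexpensive stand-in for a costly simulation.
bounds = np.array([[-2.0, 2.0], [-2.0, 2.0]])
X, y = sequential_sampling(lambda x: float((x[0] - 1.0) ** 2 + (x[1] + 0.5) ** 2), bounds)
```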

  12. A Stochastic Programming Approach with Improved Multi-Criteria Scenario-Based Solution Method for Sustainable Reverse Logistics Design of Waste Electrical and Electronic Equipment (WEEE)

    Directory of Open Access Journals (Sweden)

    Hao Yu

    2016-12-01

    Full Text Available Today, the increased public concern about sustainable development and more stringent environmental regulations have become important driving forces for value recovery from end-of-life and end-of-use products through reverse logistics. Waste electrical and electronic equipment (WEEE) contains both valuable components that need to be recycled and hazardous substances that have to be properly treated or disposed of, so the design of a reverse logistics system for sustainable treatment of WEEE is of paramount importance. This paper presents a stochastic mixed integer programming model for designing and planning a generic multi-source, multi-echelon, capacitated, and sustainable reverse logistics network for WEEE management under uncertainty. The model takes into account both economic efficiency and environmental impacts in decision-making, and the environmental impacts are evaluated in terms of carbon emissions. A multi-criteria two-stage scenario-based solution method is employed and further developed in this study for generating the optimal solution for the stochastic optimization problem. The proposed model and solution method are validated through a numerical experiment and sensitivity analyses presented later in this paper, and an analysis of the results is also given to provide a deep managerial insight into the application of the proposed stochastic optimization model.

  13. Bond-based linear indices of the non-stochastic and stochastic edge-adjacency matrix. 1. Theory and modeling of ChemPhys properties of organic molecules.

    Science.gov (United States)

    Marrero-Ponce, Yovani; Martínez-Albelo, Eugenio R; Casañola-Martín, Gerardo M; Castillo-Garit, Juan A; Echevería-Díaz, Yunaimy; Zaldivar, Vicente Romero; Tygat, Jan; Borges, José E Rodriguez; García-Domenech, Ramón; Torrens, Francisco; Pérez-Giménez, Facundo

    2010-11-01

    Novel bond-level molecular descriptors are proposed, based on linear maps similar to the ones defined in algebra theory. The kth edge-adjacency matrix (E(k)) denotes the matrix of bond linear indices (non-stochastic) with regard to the canonical basis set. The kth stochastic edge-adjacency matrix, ES(k), is here proposed as a new molecular representation easily calculated from E(k). Then, the kth stochastic bond linear indices are calculated using ES(k) as operators of linear transformations. In both cases, the bond-type formalism is developed. The kth non-stochastic and stochastic total linear indices are calculated by adding the kth non-stochastic and stochastic bond linear indices, respectively, of all bonds in the molecule. First, the new bond-based molecular descriptors (MDs) are tested for suitability for QSPR by analyzing regressions of the novel indices against selected physicochemical properties of octane isomers (first round). The general performance of the new descriptors in these QSPR studies is evaluated with regard to the well-known sets of 2D/3D MDs. From the analysis, we can conclude that the non-stochastic and stochastic bond-based linear indices have an overall good modeling capability, proving their usefulness in QSPR studies. Later, the novel bond-level MDs are also used for the description and prediction of the boiling point of 28 alkyl-alcohols (second round), and for the modeling of the specific rate constant (log k), partition coefficient (log P), as well as the antibacterial activity of 34 derivatives of 2-furylethylenes (third round). The comparison with other approaches (edge- and vertices-based connectivity indices, total and local spectral moments, and quantum chemical descriptors as well as E-state/biomolecular encounter parameters) shows the good behavior of our method in these QSPR studies. Finally, the approach described in this study appears to be a very promising structural invariant, useful not only for QSPR studies but also for similarity

  14. Noncausal stochastic calculus

    CERN Document Server

    Ogawa, Shigeyoshi

    2017-01-01

    This book presents an elementary introduction to the theory of noncausal stochastic calculus that arises as a natural alternative to the standard theory of stochastic calculus founded in 1944 by Professor Kiyoshi Itô. As is generally known, Itô Calculus is essentially based on the "hypothesis of causality", asking random functions to be adapted to a natural filtration generated by Brownian motion or more generally by square integrable martingale. The intention in this book is to establish a stochastic calculus that is free from this "hypothesis of causality". To be more precise, a noncausal theory of stochastic calculus is developed in this book, based on the noncausal integral introduced by the author in 1979. After studying basic properties of the noncausal stochastic integral, various concrete problems of noncausal nature are considered, mostly concerning stochastic functional equations such as SDE, SIE, SPDE, and others, to show not only the necessity of such theory of noncausal stochastic calculus but ...

  15. Two-stage stochastic day-ahead optimal resource scheduling in a distribution network with intensive use of distributed energy resources

    DEFF Research Database (Denmark)

    Sousa, Tiago; Ghazvini, Mohammad Ali Fotouhi; Morais, Hugo

    2015-01-01

    The integration of renewable sources and electric vehicles will introduce new uncertainties to the optimal resource scheduling, namely at the distribution level. These uncertainties mainly originate from the power generated by renewable sources and from the electric vehicles' charging requirements.... This paper proposes a two-stage stochastic programming approach to solve the day-ahead optimal resource scheduling problem. The case study considers a 33-bus distribution network with 66 distributed generation units and 1000 electric vehicles....

  16. Optimal base-stock policy for the inventory system with periodic review, backorders and sequential lead times

    DEFF Research Database (Denmark)

    Johansen, Søren Glud; Thorstenson, Anders

    2008-01-01

    We extend well-known formulae for the optimal base stock of the inventory system with continuous review and constant lead time to the case with periodic review and stochastic, sequential lead times. Our extension uses the notion of the 'extended lead time'. The derived performance measures...

  17. Mesh Denoising based on Normal Voting Tensor and Binary Optimization.

    Science.gov (United States)

    Yadav, Sunil Kumar; Reitebuch, Ulrich; Polthier, Konrad

    2017-08-17

    This paper presents a two-stage mesh denoising algorithm. Unlike other traditional averaging approaches, our approach uses an element-based normal voting tensor to compute smooth surfaces. By introducing a binary optimization on the proposed tensor together with a local binary neighborhood concept, our algorithm better retains sharp features and produces smoother umbilical regions than previous approaches. On top of that, we provide a stochastic analysis on the different kinds of noise based on the average edge length. The quantitative results demonstrate that the performance of our method is better compared to state-of-the-art smoothing approaches.

  18. Rate-distortion optimization for compressive video sampling

    Science.gov (United States)

    Liu, Ying; Vijayanagar, Krishna R.; Kim, Joohee

    2014-05-01

    The recently introduced compressed sensing (CS) framework enables low-complexity video acquisition via sub-Nyquist rate sampling. In practice, the resulting CS samples are quantized and indexed by finitely many bits (bit-depth) for transmission. In applications where the bit-budget for video transmission is constrained, rate-distortion optimization (RDO) is essential for quality video reconstruction. In this work, we develop a double-level RDO scheme for compressive video sampling, where frame-level RDO is performed by adaptively allocating the fixed bit-budget per frame to each video block based on block-sparsity, and block-level RDO is performed by modelling the block reconstruction peak-signal-to-noise ratio (PSNR) as a quadratic function of the quantization bit-depth. The optimal bit-depth and the number of CS samples are then obtained by setting the first derivative of the function to zero. In the experimental studies, the model parameters are initialized with a small set of training data, which are then updated with local information in the model testing stage. Simulation results presented herein show that the proposed double-level RDO significantly enhances the reconstruction quality for a bit-budget constrained CS video transmission system.
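
    As a worked illustration of the block-level step (PSNR modelled as a quadratic function of bit-depth, with the optimum found by setting the derivative to zero), the snippet below fits such a quadratic and solves it in closed form; the training pairs are invented placeholders, not values from the paper.

```python
import numpy as np

# Hypothetical (bit-depth, measured PSNR) training pairs for one video block.
depths = np.array([2, 4, 6, 8, 10], dtype=float)
psnr   = np.array([24.1, 31.5, 35.2, 36.4, 36.0])

# Fit PSNR(b) ~ a*b^2 + c*b + d (a < 0 for a concave fit).
a, c, d = np.polyfit(depths, psnr, deg=2)

# Setting dPSNR/db = 2*a*b + c = 0 gives the unconstrained optimum bit-depth.
b_opt = -c / (2.0 * a)

# In practice the result is rounded and clipped to the admissible bit-depth range.
b_opt_int = int(np.clip(round(b_opt), depths.min(), depths.max()))
```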

  19. A novel integrated condition-based maintenance and stochastic flexible job shop scheduling problem

    DEFF Research Database (Denmark)

    Rahmati, Seyed Habib A.; Ahmadi, Abbas; Govindan, Kannan

    2018-01-01

    Integrated consideration of production planning and maintenance processes is a real-world assumption. Specifically, by improving the monitoring equipment such as various sensors or product-embedded information devices in recent years, joint assessment of these processes is inevitable for enhancing the level of the system optimization. By means of this equipment, managers can benefit from condition-based maintenance (CBM) for monitoring and managing their system. The chief aim of the paper is to develop a stochastic maintenance problem based on CBM activities engaged with a complex applied production problem called the flexible job shop scheduling problem (FJSP). This integrated problem considers two maintenance scenarios in terms of corrective maintenance (CM) and preventive maintenance (PM). The activation of a scenario is done by monitoring the degradation condition of the system and comparing

  20. Optimal and centralized reservoir management for drought and flood protection via Stochastic Dual Dynamic Programming on the Upper Seine-Aube River system

    Science.gov (United States)

    Chiavico, Mattia; Raso, Luciano; Dorchies, David; Malaterre, Pierre-Olivier

    2015-04-01

    The Seine river region is an extremely important logistic and economic junction for France and Europe. The hydraulic protection of most of the region relies on four controlled reservoirs, managed by EPTB Seine-Grands Lacs. Presently, reservoir operation is not centrally coordinated, and release rules are based on empirical filling curves. In this study, we analyze how a centralized release policy can face flood and drought risks, optimizing water system efficiency. The optimal and centralized decision problem is solved by the Stochastic Dual Dynamic Programming (SDDP) method, minimizing an operational indicator for each planning objective. SDDP allows us to include in the system: 1) the hydrological discharge, specifically a stochastic semi-distributed auto-regressive model, 2) the hydraulic transfer model, represented by a linear lag and route model, and 3) reservoirs and diversions. The novelty of this study lies in the combination of reservoir and hydraulic models in SDDP for flood and drought protection problems. The case study covers the Seine basin up to the confluence with the Aube River: this system includes two reservoirs, the city of Troyes, and the nuclear power plant of Nogent-Sur-Seine. The conflict between the interests of flood protection, drought protection, water use and ecology leads us to analyze the environmental system from a multi-objective perspective.

  1. Feedback optimal control of dynamic stochastic two-machine flowshop with a finite buffer

    Directory of Open Access Journals (Sweden)

    Thang Diep

    2010-06-01

    Full Text Available This paper examines the optimization of production in a tandem two-machine system producing a single part type, with each machine being subject to random breakdowns and repairs. An analytical model is formulated with a view to solving an optimal stochastic production problem for a system whose machines have non-exponential uptime and downtime distributions. The model is obtained by using a dynamic programming approach and a semi-Markov process. The control problem aims to find the production rates needed by the machines to meet the demand rate, through a minimization of the inventory/shortage cost. Using the Bellman principle, the optimality conditions obtained satisfy the Hamilton-Jacobi-Bellman equation, which depends on time and the system states and ultimately leads to a feedback control. Consequently, the new model enables the coefficient of variation of up/down times (CV up/down) to be less than one, whereas it is equal to one in the Markov model. Heuristic methods are used to solve the problem, because of the difficulty of the analytical model with several states, and to show what control law should be used in each system state (i.e., Kanban, feedback and CONWIP control). Numerical methods are used to solve the optimality conditions and to show how a machine should produce.

  2. H∞ state estimation of stochastic memristor-based neural networks with time-varying delays.

    Science.gov (United States)

    Bao, Haibo; Cao, Jinde; Kurths, Jürgen; Alsaedi, Ahmed; Ahmad, Bashir

    2018-03-01

    This paper addresses the problem of H∞ state estimation for a class of stochastic memristor-based neural networks with time-varying delays. Under the framework of Filippov solutions, the stochastic memristor-based neural networks are transformed into systems with interval parameters. The present paper is the first to investigate the H∞ state estimation problem for continuous-time Itô-type stochastic memristor-based neural networks. By means of Lyapunov functionals and stochastic analysis techniques, sufficient conditions are derived to ensure that the estimation error system is asymptotically stable in the mean square with a prescribed H∞ performance. An explicit expression of the state estimator gain is given in terms of linear matrix inequalities (LMIs). Compared with other results, our results reduce the control gain and control cost effectively. Finally, numerical simulations are provided to demonstrate the efficiency of the theoretical results. Copyright © 2018 Elsevier Ltd. All rights reserved.

  3. Bionic optimization in structural design stochastically based methods to improve the performance of parts and assemblies

    CERN Document Server

    Gekeler, Simon

    2016-01-01

    The book provides suggestions on how to start using bionic optimization methods, including pseudo-code examples of each of the important approaches and outlines of how to improve them. The most efficient methods for accelerating the studies are discussed. These include the selection of size and generations of a study’s parameters, modification of these driving parameters, switching to gradient methods when approaching local maxima, and the use of parallel working hardware. Bionic Optimization means finding the best solution to a problem using methods found in nature. As Evolutionary Strategies and Particle Swarm Optimization seem to be the most important methods for structural optimization, we primarily focus on them. Other methods such as neural nets or ant colonies are more suited to control or process studies, so their basic ideas are outlined in order to motivate readers to start using them. A set of sample applications shows how Bionic Optimization works in practice. From academic studies on simple fra...

  4. Stochastic linear programming models, theory, and computation

    CERN Document Server

    Kall, Peter

    2011-01-01

    This new edition of Stochastic Linear Programming: Models, Theory and Computation has been brought completely up to date, either dealing with or at least referring to new material on models and methods, including DEA with stochastic outputs modeled via constraints on special risk functions (generalizing chance constraints, ICC’s and CVaR constraints), material on Sharpe-ratio, and Asset Liability Management models involving CVaR in a multi-stage setup. To facilitate use as a text, exercises are included throughout the book, and web access is provided to a student version of the authors’ SLP-IOR software. Additionally, the authors have updated the Guide to Available Software, and they have included newer algorithms and modeling systems for SLP. The book is thus suitable as a text for advanced courses in stochastic optimization, and as a reference to the field. From Reviews of the First Edition: "The book presents a comprehensive study of stochastic linear optimization problems and their applications. … T...

  5. A Simulation-Based Dynamic Stochastic Route Choice Model for Evacuation

    Directory of Open Access Journals (Sweden)

    Xing Zhao

    2012-01-01

    Full Text Available This paper establishes a dynamic stochastic route choice model for evacuation to simulate the propagation process of traffic flow and estimate the stochastic route choice under evacuation situations. The model contains a lane-group-based cell transmission model (CTM), which sets different outflow capacities for links with different turning movements in an evacuation situation; an actual impedance model, which obtains the impedance of each route in time units at each time interval; and a stochastic route choice model according to the probit-based stochastic user equilibrium. In this model, vehicles loading at each origin at each time interval are assumed to choose an evacuation route under a given road network, signal design, and OD demand. As a case study, the proposed model is validated on the network near the Nanjing Olympic Center after the opening ceremony of the 10th National Games of the People's Republic of China. The traffic volumes and clearing times at five exit points of the evacuation zone are calculated by the model and compared with survey data. The results show that this model can appropriately simulate the dynamic route choice and the evolution process of the traffic flow on the network in an evacuation situation.
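
    A hedged sketch of the basic cell transmission model (CTM) flow update underlying the simulation component is given below; the cell capacities, storage and demand are illustrative assumptions rather than the paper's lane-group calibration.

```python
import numpy as np

def ctm_step(n, q_max, n_max, v=1.0, w=0.5):
    """One CTM update on a single link of cells.
    n: vehicles per cell, q_max: capacity flow per step, n_max: jam storage per cell."""
    sending   = np.minimum(v * n, q_max)                 # what each cell can send downstream
    receiving = np.minimum(q_max, w * (n_max - n))       # what each cell can accept
    flow = np.minimum(sending[:-1], receiving[1:])       # flow between consecutive cells
    n_new = n.copy()
    n_new[:-1] -= flow
    n_new[1:]  += flow
    return n_new

# Toy usage: 10 cells, a platoon entering the first cell, simulated for 20 time steps.
n = np.zeros(10); n[0] = 30.0
for _ in range(20):
    n = ctm_step(n, q_max=5.0, n_max=40.0)
```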

  6. International Conference Modern Stochastics: Theory and Applications III

    CERN Document Server

    Limnios, Nikolaos; Mishura, Yuliya; Sakhno, Lyudmyla; Shevchenko, Georgiy; Modern Stochastics and Applications

    2014-01-01

    This volume presents an extensive overview of all major modern trends in applications of probability and stochastic analysis. It will be a great source of inspiration for designing new algorithms, modeling procedures, and experiments. Accessible to researchers, practitioners, as well as graduate and postgraduate students, this volume presents a variety of new tools, ideas, and methodologies in the fields of optimization, physics, finance, probability, hydrodynamics, reliability, decision making, mathematical finance, mathematical physics, and economics. Contributions to this work include those of selected speakers from the international conference entitled “Modern Stochastics: Theory and Applications III,” held on September 10–14, 2012 at Taras Shevchenko National University of Kyiv, Ukraine. The conference covered the following areas of research in probability theory and its applications: stochastic analysis, stochastic processes and fields, random matrices, optimization methods in probability, st...

  7. Interval-value Based Particle Swarm Optimization algorithm for cancer-type specific gene selection and sample classification

    Directory of Open Access Journals (Sweden)

    D. Ramyachitra

    2015-09-01

    Full Text Available Microarray technology allows simultaneous measurement of the expression levels of thousands of genes within a biological tissue sample. The fundamental power of microarrays lies in the ability to conduct parallel surveys of gene expression using microarray data. The classification of tissue samples based on gene expression data is an important problem in the medical diagnosis of diseases such as cancer. In gene expression data, the number of genes is usually very high compared to the number of data samples. Thus the difficulty lies in the high dimensionality of the data and the small sample size. This research work addresses the problem by classifying the resultant dataset using existing algorithms such as Support Vector Machine (SVM), K-nearest neighbor (KNN) and Interval Valued Classification (IVC), and the improvised Interval Value based Particle Swarm Optimization (IVPSO) algorithm. The results show that the IVPSO algorithm outperformed the other algorithms under several performance evaluation functions.

  8. Interval-value Based Particle Swarm Optimization algorithm for cancer-type specific gene selection and sample classification.

    Science.gov (United States)

    Ramyachitra, D; Sofia, M; Manikandan, P

    2015-09-01

    Microarray technology allows simultaneous measurement of the expression levels of thousands of genes within a biological tissue sample. The fundamental power of microarrays lies in the ability to conduct parallel surveys of gene expression using microarray data. The classification of tissue samples based on gene expression data is an important problem in the medical diagnosis of diseases such as cancer. In gene expression data, the number of genes is usually very high compared to the number of data samples. Thus the difficulty lies in the high dimensionality of the data and the small sample size. This research work addresses the problem by classifying the resultant dataset using existing algorithms such as Support Vector Machine (SVM), K-nearest neighbor (KNN) and Interval Valued Classification (IVC), and the improvised Interval Value based Particle Swarm Optimization (IVPSO) algorithm. The results show that the IVPSO algorithm outperformed the other algorithms under several performance evaluation functions.

  9. Note: Optimal base-stock policy for the inventory system with periodic review, backorders and sequential lead times

    DEFF Research Database (Denmark)

    Johansen, Søren Glud; Thorstenson, Anders

    We show that well-known textbook formulae for determining the optimal base stock of the inventory system with continuous review and constant lead time can easily be extended to the case with periodic review and stochastic, sequential lead times. The provided performance measures and conditions...

  10. PTree: pattern-based, stochastic search for maximum parsimony phylogenies

    Directory of Open Access Journals (Sweden)

    Ivan Gregor

    2013-06-01

    Full Text Available Phylogenetic reconstruction is vital to analyzing the evolutionary relationship of genes within and across populations of different species. Nowadays, with next generation sequencing technologies producing sets comprising thousands of sequences, robust identification of the tree topology, which is optimal according to standard criteria such as maximum parsimony, maximum likelihood or posterior probability, with phylogenetic inference methods is a computationally very demanding task. Here, we describe a stochastic search method for a maximum parsimony tree, implemented in a software package we named PTree. Our method is based on a new pattern-based technique that enables us to infer intermediate sequences efficiently where the incorporation of these sequences in the current tree topology yields a phylogenetic tree with a lower cost. Evaluation across multiple datasets showed that our method is comparable to the algorithms implemented in PAUP* or TNT, which are widely used by the bioinformatics community, in terms of topological accuracy and runtime. We show that our method can process large-scale datasets of 1,000–8,000 sequences. We believe that our novel pattern-based method enriches the current set of tools and methods for phylogenetic tree inference. The software is available under: http://algbio.cs.uni-duesseldorf.de/webapps/wa-download/.

  11. PTree: pattern-based, stochastic search for maximum parsimony phylogenies.

    Science.gov (United States)

    Gregor, Ivan; Steinbrück, Lars; McHardy, Alice C

    2013-01-01

    Phylogenetic reconstruction is vital to analyzing the evolutionary relationship of genes within and across populations of different species. Nowadays, with next generation sequencing technologies producing sets comprising thousands of sequences, robust identification of the tree topology, which is optimal according to standard criteria such as maximum parsimony, maximum likelihood or posterior probability, with phylogenetic inference methods is a computationally very demanding task. Here, we describe a stochastic search method for a maximum parsimony tree, implemented in a software package we named PTree. Our method is based on a new pattern-based technique that enables us to infer intermediate sequences efficiently where the incorporation of these sequences in the current tree topology yields a phylogenetic tree with a lower cost. Evaluation across multiple datasets showed that our method is comparable to the algorithms implemented in PAUP* or TNT, which are widely used by the bioinformatics community, in terms of topological accuracy and runtime. We show that our method can process large-scale datasets of 1,000-8,000 sequences. We believe that our novel pattern-based method enriches the current set of tools and methods for phylogenetic tree inference. The software is available under: http://algbio.cs.uni-duesseldorf.de/webapps/wa-download/.

  12. Monte Carlo importance sampling optimization for system reliability applications

    International Nuclear Information System (INIS)

    Campioni, Luca; Vestrucci, Paolo

    2004-01-01

    This paper focuses on the reliability analysis of multicomponent systems by the importance sampling technique and, in particular, it tackles the optimization aspect. A methodology based on the minimization of the variance at the component level is proposed for the class of systems consisting of independent components. The claim is that, by means of such a methodology, the optimal biasing can be achieved without resorting to the typical trial-and-error approach.
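
    For context, the snippet below shows a generic component-level importance-sampling estimate of a small system failure probability, with biased component failure probabilities and likelihood-ratio weights; the series structure, rates and bias factor are illustrative assumptions, not the variance-minimizing biasing derived in the paper.

```python
import numpy as np

def series_system_failure_prob(p_fail, bias=10.0, n_samples=100_000, seed=0):
    """Importance-sampling estimate of the failure probability of a series system
    of independent components, biasing each component failure probability upward."""
    rng = np.random.default_rng(seed)
    p_fail = np.asarray(p_fail)
    q = np.clip(bias * p_fail, 0.0, 0.5)                 # biased sampling distribution
    u = rng.random((n_samples, len(p_fail)))
    fails = u < q                                        # component states sampled under q
    # Likelihood ratio of the true distribution p w.r.t. the biased distribution q.
    lr = np.where(fails, p_fail / q, (1 - p_fail) / (1 - q)).prod(axis=1)
    system_fails = fails.any(axis=1)                     # a series system fails if any component fails
    return float(np.mean(system_fails * lr))

# Toy usage: three components with rare failures.
est = series_system_failure_prob([1e-4, 5e-5, 2e-4])
```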

  13. Mean Field Games for Stochastic Growth with Relative Utility

    Energy Technology Data Exchange (ETDEWEB)

    Huang, Minyi, E-mail: mhuang@math.carleton.ca [Carleton University, School of Mathematics and Statistics (Canada); Nguyen, Son Luu, E-mail: sonluu.nguyen@upr.edu [University of Puerto Rico, Department of Mathematics (United States)

    2016-12-15

    This paper considers continuous time stochastic growth-consumption optimization in a mean field game setting. The individual capital stock evolution is determined by a Cobb–Douglas production function, consumption and stochastic depreciation. The individual utility functional combines an own utility and a relative utility with respect to the population. The use of the relative utility reflects human psychology, leading to a natural pattern of mean field interaction. The fixed point equation of the mean field game is derived with the aid of some ordinary differential equations. Due to the relative utility interaction, our performance analysis depends on some ratio based approximation error estimate.

  14. Mean Field Games for Stochastic Growth with Relative Utility

    International Nuclear Information System (INIS)

    Huang, Minyi; Nguyen, Son Luu

    2016-01-01

    This paper considers continuous time stochastic growth-consumption optimization in a mean field game setting. The individual capital stock evolution is determined by a Cobb–Douglas production function, consumption and stochastic depreciation. The individual utility functional combines an own utility and a relative utility with respect to the population. The use of the relative utility reflects human psychology, leading to a natural pattern of mean field interaction. The fixed point equation of the mean field game is derived with the aid of some ordinary differential equations. Due to the relative utility interaction, our performance analysis depends on some ratio based approximation error estimate.

  15. Sampling optimization for printer characterization by direct search.

    Science.gov (United States)

    Bianco, Simone; Schettini, Raimondo

    2012-12-01

    Printer characterization usually requires many printer inputs and corresponding color measurements of the printed outputs. In this brief, a sampling optimization for printer characterization on the basis of direct search is proposed to maintain high color accuracy with a reduction in the number of characterization samples required. The proposed method is able to match a given level of color accuracy requiring, on average, a characterization set cardinality which is almost one-fourth of that required by the uniform sampling, while the best method in the state of the art needs almost one-third. The number of characterization samples required can be further reduced if the proposed algorithm is coupled with a sequential optimization method that refines the sample values in the device-independent color space. The proposed sampling optimization method is extended to deal with multiple substrates simultaneously, giving statistically better colorimetric accuracy (at the α = 0.05 significance level) than sampling optimization techniques in the state of the art optimized for each individual substrate, thus allowing use of a single set of characterization samples for multiple substrates.

  16. Racing Sampling Based Microimmune Optimization Approach Solving Constrained Expected Value Programming

    Directory of Open Access Journals (Sweden)

    Kai Yang

    2016-01-01

    Full Text Available This work investigates a bioinspired microimmune optimization algorithm to solve a general kind of single-objective nonlinear constrained expected value programming without any prior distribution. In the study of the algorithm, two lower-bound sample estimates of random variables are theoretically developed to estimate the empirical values of individuals. Two adaptive racing sampling schemes are designed to identify those competitive individuals in a given population, by which high-quality individuals can obtain a large sampling size. An immune evolutionary mechanism, along with a local search approach, is constructed to evolve the current population. The comparative experiments have shown that the proposed algorithm can effectively solve higher-dimensional benchmark problems and has potential for further applications.

  17. A Newton-Based Extremum Seeking MPPT Method for Photovoltaic Systems with Stochastic Perturbations

    Directory of Open Access Journals (Sweden)

    Heng Li

    2014-01-01

    Full Text Available Microcontroller-based maximum power point tracking (MPPT) has been the most popular MPPT approach in photovoltaic systems due to its high flexibility and efficiency in different photovoltaic systems. It is well known that PV systems typically operate under a range of uncertain environmental parameters and disturbances, which implies that MPPT controllers generally suffer from some unknown stochastic perturbations. To address this issue, a novel Newton-based stochastic extremum seeking MPPT method is proposed. Treating stochastic perturbations as excitation signals, the proposed MPPT controller is inherently tolerant of stochastic perturbations. Different from the conventional gradient-based extremum seeking MPPT algorithm, the convergence rate of the proposed controller is fully user-assignable rather than determined by the unknown power map. The stability and convergence of the proposed controller are rigorously proved. We further discuss the effects of partial shading and PV module ageing on the proposed controller. Numerical simulations and experiments are conducted to show the effectiveness of the proposed MPPT algorithm.
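
    The gradient-estimation idea behind perturbation-based extremum seeking MPPT can be illustrated with the hedged sketch below, which dithers the duty cycle, correlates the measured power with the dither and climbs the power curve; the toy PV power map, gains and noise model are invented for illustration and do not reproduce the paper's Newton-based stochastic scheme.

```python
import numpy as np

def pv_power(d, rng, noise_std=0.5):
    """Toy concave power-vs-duty-cycle map with measurement noise (stand-in for a real PV array)."""
    return 100.0 - 400.0 * (d - 0.62) ** 2 + rng.normal(0.0, noise_std)

def extremum_seeking_mppt(steps=3000, a=0.02, k=2e-3, omega=0.9, d0=0.30, seed=0):
    rng = np.random.default_rng(seed)
    d_hat = d0
    p_avg = pv_power(d_hat, rng)                       # washout baseline for the measured power
    for t in range(steps):
        dither = a * np.sin(omega * t)                 # small periodic perturbation of the duty cycle
        p = pv_power(d_hat + dither, rng)
        p_avg = 0.95 * p_avg + 0.05 * p                # slow average removes the DC power component
        grad_est = (p - p_avg) * np.sin(omega * t)     # demodulation: correlate power with the dither
        d_hat = float(np.clip(d_hat + k * grad_est, 0.05, 0.95))
    return d_hat

d_opt = extremum_seeking_mppt()   # drifts toward the toy optimum near d = 0.62
```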

  18. A pseudo-optimal inexact stochastic interval T2 fuzzy sets approach for energy and environmental systems planning under uncertainty: A case study for Xiamen City of China

    International Nuclear Information System (INIS)

    Jin, L.; Huang, G.H.; Fan, Y.R.; Wang, L.; Wu, T.

    2015-01-01

    Highlights: • Propose a new energy PIS-IT2FSLP model for Xiamen City under uncertainties. • Analyze the energy supply, demand, and flow structure of this city. • Use real energy statistics to prove the superiority of the PIS-IT2FSLP method. • Obtain optimal solutions that reflect environmental requirements. • Help local authorities devise an optimal energy strategy for this local area. - Abstract: In this study, a new Pseudo-optimal Inexact Stochastic Interval Type-2 Fuzzy Sets Linear Programming (PIS-IT2FSLP) energy model is developed to support energy system planning and environmental requirements under uncertainties for Xiamen City. The PIS-IT2FSLP model is based on an integration of interval Type 2 (T2) Fuzzy Sets (FS) boundary programming and stochastic linear programming techniques, which enables it to robustly tackle uncertainties expressed as T2 FS intervals and probabilistic distributions within a general optimization framework. This new model can facilitate sophisticated system analysis of energy supply and energy conversion processes and environmental requirements, as well as provide capacity expansion options over multiple periods. The PIS-IT2FSLP model was applied to a real case study of the Xiamen energy system. Based on a robust two-step solution algorithm, reasonable solutions have been obtained, which reflect tradeoffs between economic and environmental requirements, and among the seasonally volatile energy demands in the right-hand-side constraints of the Xiamen energy system. Thus, the lower and upper solutions of the PIS-IT2FSLP would help local energy authorities adjust current energy patterns, and discover an optimal energy strategy for the development of Xiamen City.

  19. Mitigating Observation Perturbation Sampling Errors in the Stochastic EnKF

    KAUST Repository

    Hoteit, Ibrahim

    2015-03-17

    The stochastic ensemble Kalman filter (EnKF) updates its ensemble members with observations perturbed with noise sampled from the distribution of the observational errors. This was shown to introduce noise into the system and may become pronounced when the ensemble size is smaller than the rank of the observational error covariance, which is often the case in real oceanic and atmospheric data assimilation applications. This work introduces an efficient serial scheme to mitigate the impact of observations’ perturbations sampling in the analysis step of the EnKF, which should provide more accurate ensemble estimates of the analysis error covariance matrices. The new scheme is simple to implement within the serial EnKF algorithm, requiring only the approximation of the EnKF sample forecast error covariance matrix by a matrix with one rank less. The new EnKF scheme is implemented and tested with the Lorenz-96 model. Results from numerical experiments are conducted to compare its performance with the EnKF and two standard deterministic EnKFs. This study shows that the new scheme enhances the behavior of the EnKF and may lead to better performance than the deterministic EnKFs even when implemented with relatively small ensembles.
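
    For reference, a minimal stochastic EnKF analysis step with perturbed observations, the kind of update whose sampling noise the paper seeks to mitigate, can be sketched as follows; the toy state dimension, observation operator and error statistics are assumptions, and this is the standard perturbed-observation update rather than the serial scheme proposed here.

```python
import numpy as np

def enkf_analysis(Xf, y, H, R, rng):
    """Stochastic EnKF update with perturbed observations.
    Xf: (n, N) forecast ensemble, y: (p,) observation, H: (p, n) operator, R: (p, p) obs covariance."""
    n, N = Xf.shape
    A = Xf - Xf.mean(axis=1, keepdims=True)              # ensemble anomalies
    Pf = A @ A.T / (N - 1)                               # sample forecast error covariance
    K = Pf @ H.T @ np.linalg.inv(H @ Pf @ H.T + R)       # Kalman gain
    # Each member assimilates the observation perturbed with noise sampled from R.
    Y = y[:, None] + rng.multivariate_normal(np.zeros(len(y)), R, size=N).T
    return Xf + K @ (Y - H @ Xf)

# Toy usage: 40-variable state, 10 observed components, ensemble of 20 members.
rng = np.random.default_rng(0)
Xf = rng.normal(size=(40, 20))
H = np.eye(40)[::4]                                      # observe every 4th state variable
R = 0.25 * np.eye(10)
y = rng.normal(size=10)
Xa = enkf_analysis(Xf, y, H, R, rng)
```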

  20. Mitigating Observation Perturbation Sampling Errors in the Stochastic EnKF

    KAUST Repository

    Hoteit, Ibrahim; Pham, D.-T.; El Gharamti, Mohamad; Luo, X.

    2015-01-01

    The stochastic ensemble Kalman filter (EnKF) updates its ensemble members with observations perturbed with noise sampled from the distribution of the observational errors. This was shown to introduce noise into the system and may become pronounced when the ensemble size is smaller than the rank of the observational error covariance, which is often the case in real oceanic and atmospheric data assimilation applications. This work introduces an efficient serial scheme to mitigate the impact of observations’ perturbations sampling in the analysis step of the EnKF, which should provide more accurate ensemble estimates of the analysis error covariance matrices. The new scheme is simple to implement within the serial EnKF algorithm, requiring only the approximation of the EnKF sample forecast error covariance matrix by a matrix with one rank less. The new EnKF scheme is implemented and tested with the Lorenz-96 model. Results from numerical experiments are conducted to compare its performance with the EnKF and two standard deterministic EnKFs. This study shows that the new scheme enhances the behavior of the EnKF and may lead to better performance than the deterministic EnKFs even when implemented with relatively small ensembles.

  1. Brownian motion and stochastic calculus

    CERN Document Server

    Karatzas, Ioannis

    1998-01-01

    This book is designed as a text for graduate courses in stochastic processes. It is written for readers familiar with measure-theoretic probability and discrete-time processes who wish to explore stochastic processes in continuous time. The vehicle chosen for this exposition is Brownian motion, which is presented as the canonical example of both a martingale and a Markov process with continuous paths. In this context, the theory of stochastic integration and stochastic calculus is developed. The power of this calculus is illustrated by results concerning representations of martingales and change of measure on Wiener space, and these in turn permit a presentation of recent advances in financial economics (option pricing and consumption/investment optimization). This book contains a detailed discussion of weak and strong solutions of stochastic differential equations and a study of local time for semimartingales, with special emphasis on the theory of Brownian local time. The text is complemented by a large num...

  2. A Bayesian optimal design for degradation tests based on the inverse Gaussian process

    Energy Technology Data Exchange (ETDEWEB)

    Peng, Weiwen; Liu, Yu; Li, Yan Feng; Zhu, Shun Peng; Huang, Hong Zhong [University of Electronic Science and Technology of China, Chengdu (China)

    2014-10-15

    The inverse Gaussian process is recently introduced as an attractive and flexible stochastic process for degradation modeling. This process has been demonstrated as a valuable complement for models that are developed on the basis of the Wiener and gamma processes. We investigate the optimal design of the degradation tests on the basis of the inverse Gaussian process. In addition to an optimal design with pre-estimated planning values of model parameters, we also address the issue of uncertainty in the planning values by using the Bayesian method. An average pre-posterior variance of reliability is used as the optimization criterion. A trade-off between sample size and number of degradation observations is investigated in the degradation test planning. The effects of priors on the optimal designs and on the value of prior information are also investigated and quantified. The degradation test planning of a GaAs Laser device is performed to demonstrate the proposed method.

  3. A hybrid reliability algorithm using PSO-optimized Kriging model and adaptive importance sampling

    Science.gov (United States)

    Tong, Cao; Gong, Haili

    2018-03-01

    This paper aims to reduce the computational cost of reliability analysis. A new hybrid algorithm is proposed based on a PSO-optimized Kriging model and an adaptive importance sampling method. Firstly, the particle swarm optimization (PSO) algorithm is used to optimize the parameters of the Kriging model. A typical function is fitted to validate the improvement by comparing the results of the PSO-optimized Kriging model with those of the original Kriging model. Secondly, a hybrid algorithm for reliability analysis combining the optimized Kriging model and adaptive importance sampling is proposed. Two cases from the literature are given to validate its efficiency and correctness. The comparison results show that the proposed method is more efficient because it requires only a small number of sample points.

  4. A termination criterion for parameter estimation in stochastic models in systems biology.

    Science.gov (United States)

    Zimmer, Christoph; Sahle, Sven

    2015-11-01

    Parameter estimation procedures are a central aspect of modeling approaches in systems biology. They are often computationally expensive, especially when the models take stochasticity into account. Typically parameter estimation involves the iterative optimization of an objective function that describes how well the model fits some measured data with a certain set of parameter values. In order to limit the computational expenses it is therefore important to apply an adequate stopping criterion for the optimization process, so that the optimization continues at least until a reasonable fit is obtained, but not much longer. In the case of stochastic modeling, at least some parameter estimation schemes involve an objective function that is itself a random variable. This means that plain convergence tests are not a priori suitable as stopping criteria. This article suggests a termination criterion suited to optimization problems in parameter estimation arising from stochastic models in systems biology. The termination criterion is developed for optimization algorithms that involve populations of parameter sets, such as particle swarm or evolutionary algorithms. It is based on comparing the variance of the objective function over the whole population of parameter sets with the variance of repeated evaluations of the objective function at the best parameter set. The performance is demonstrated for several different algorithms. To test the termination criterion we choose polynomial test functions as well as systems biology models such as an Immigration-Death model and a bistable genetic toggle switch. The genetic toggle switch is an especially challenging test case as it shows a stochastic switching between two steady states which is qualitatively different from the model behavior in a deterministic model. Copyright © 2015. Published by Elsevier Ireland Ltd.
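
    The criterion described above (stop when the spread of the objective over the population is no larger than the spread of repeated noisy evaluations at the best parameter set) can be sketched as below; the threshold ratio, the noisy objective and the population are illustrative assumptions, not the article's exact formulation.

```python
import numpy as np

def should_terminate(objective, population, n_repeats=20, ratio=1.0, rng=None):
    """Population-based stopping test for noisy objectives.
    Stop when the variance of the objective across the population no longer exceeds
    the variance of repeated evaluations at the best parameter set."""
    rng = rng or np.random.default_rng()
    pop_values = np.array([objective(theta, rng) for theta in population])
    best = population[int(np.argmin(pop_values))]
    repeat_values = np.array([objective(best, rng) for _ in range(n_repeats)])
    return pop_values.var(ddof=1) <= ratio * repeat_values.var(ddof=1)

# Toy usage with a stochastic objective (noisy quadratic standing in for a stochastic-model fit).
def noisy_objective(theta, rng):
    return float(np.sum(theta**2) + rng.normal(0.0, 0.1))

rng = np.random.default_rng(0)
population = [rng.normal(0.0, 0.05, size=2) for _ in range(30)]   # a nearly converged swarm
stop = should_terminate(noisy_objective, population, rng=rng)
```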

  5. Effects of Risk Aversion on Market Outcomes: A Stochastic Two-Stage Equilibrium Model

    DEFF Research Database (Denmark)

    Kazempour, Jalal; Pinson, Pierre

    2016-01-01

    This paper evaluates how different risk preferences of electricity producers alter the market-clearing outcomes. Toward this goal, we propose a stochastic equilibrium model for electricity markets with two settlements, i.e., day-ahead and balancing, in which a number of conventional and stochastic...... by its optimality conditions, resulting in a mixed complementarity problem. Numerical results from a case study based on the IEEE one-area reliability test system are derived and discussed....

  6. Susceptibility of optimal train schedules to stochastic disturbances of process times

    DEFF Research Database (Denmark)

    Larsen, Rune; Pranzo, Marco; D’Ariano, Andrea

    2013-01-01

    … and dwell times). In fact, the objective of railway traffic management is to reduce delay propagation and to increase disturbance robustness of train schedules at a network scale. We present a quantitative study of traffic disturbances and their effects on the schedules computed by simple and advanced … In this study, an advanced branch and bound algorithm, on average, outperforms a First In First Out scheduling rule both in deterministic and stochastic traffic scenarios. However, the characteristic of the stochastic processes and the way a stochastic instance is handled turn out to have a serious impact …

  7. Analytic continuation of quantum Monte Carlo data. Stochastic sampling method

    Energy Technology Data Exchange (ETDEWEB)

    Ghanem, Khaldoon; Koch, Erik [Institute for Advanced Simulation, Forschungszentrum Juelich, 52425 Juelich (Germany)

    2016-07-01

    We apply Bayesian inference to the analytic continuation of quantum Monte Carlo (QMC) data from the imaginary axis to the real axis. Demanding a proper functional Bayesian formulation of any analytic continuation method leads naturally to the stochastic sampling method (StochS) as the Bayesian method with the simplest prior, while it excludes the maximum entropy method and Tikhonov regularization. We present a new efficient algorithm for performing StochS that reduces computational times by orders of magnitude in comparison to earlier StochS methods. We apply the new algorithm to a wide variety of typical test cases: spectral functions and susceptibilities from DMFT and lattice QMC calculations. Results show that StochS performs well and is able to resolve sharp features in the spectrum.

  8. Optimal stochastic reactive power scheduling in a microgrid considering voltage droop scheme of DGs and uncertainty of wind farms

    International Nuclear Information System (INIS)

    Khorramdel, Benyamin; Raoofat, Mahdi

    2012-01-01

    Distributed Generators (DGs) in a microgrid may operate under three different reactive power control strategies, namely PV, PQ and voltage droop schemes. This paper proposes a new stochastic programming approach for reactive power scheduling of a microgrid, considering the uncertainty of wind farms. The proposed algorithm first finds the expected optimal operating point of each DG in the V-Q plane while the wind speed is a probabilistic variable. A multi-objective function with the goals of loss minimization, reactive power reserve maximization and voltage security margin maximization is optimized using four-stage multi-objective nonlinear programming. Then, using Monte Carlo simulation enhanced by a scenario reduction technique, the proposed algorithm simulates actual conditions and finds the optimal operating strategy of the DGs. Also, if any DGs are scheduled to operate in the voltage droop scheme, the optimum droop is determined. In the second part of the research, to enhance the optimality of the results, the PSO algorithm is used for the multi-objective optimization problem. Numerical examples on the IEEE 34-bus test system including two wind turbines are studied. The results show the benefits of the voltage droop scheme for mitigating the impacts of the uncertainty of wind, as well as the advantage of the PSO method in the proposed approach. -- Highlights: ► Reactive power scheduling in a microgrid considering loss and voltage security. ► The stochastic nature of wind farms affects reactive power scheduling and is considered. ► Advantages of using the voltage droop characteristics of DGs for voltage security are shown. ► Power loss, voltage security and VAR reserve are the three goals of a multi-objective optimization. ► The Monte Carlo method with scenario reduction is used to determine the optimal control strategy of DGs.

  9. A probabilistic graphical model based stochastic input model construction

    International Nuclear Information System (INIS)

    Wan, Jiang; Zabaras, Nicholas

    2014-01-01

    Model reduction techniques have been widely used in modeling of high-dimensional stochastic input in uncertainty quantification tasks. However, the probabilistic modeling of random variables projected into reduced-order spaces presents a number of computational challenges. Due to the curse of dimensionality, the underlying dependence relationships between these random variables are difficult to capture. In this work, a probabilistic graphical model based approach is employed to learn the dependence by running a number of conditional independence tests using observation data. Thus a probabilistic model of the joint PDF is obtained and the PDF is factorized into a set of conditional distributions based on the dependence structure of the variables. The estimation of the joint PDF from data is then transformed to estimating conditional distributions under reduced dimensions. To improve the computational efficiency, a polynomial chaos expansion is further applied to represent the random field in terms of a set of standard random variables. This technique is combined with both linear and nonlinear model reduction methods. Numerical examples are presented to demonstrate the accuracy and efficiency of the probabilistic graphical model based stochastic input models. - Highlights: • Data-driven stochastic input models without the assumption of independence of the reduced random variables. • The problem is transformed to a Bayesian network structure learning problem. • Examples are given in flows in random media

  10. Parameter estimation in stochastic rainfall-runoff models

    DEFF Research Database (Denmark)

    Jonsdottir, Harpa; Madsen, Henrik; Palsson, Olafur Petur

    2006-01-01

    A parameter estimation method for stochastic rainfall-runoff models is presented. The model considered in the paper is a conceptual stochastic model, formulated in continuous-discrete state space form. The model is small and a fully automatic optimization is, therefore, possible for estimating all...... the parameter values are optimal for simulation or prediction. The data originates from Iceland and the model is designed for Icelandic conditions, including a snow routine for mountainous areas. The model demands only two input data series, precipitation and temperature and one output data series...

  11. On the optimal polynomial approximation of stochastic PDEs by galerkin and collocation methods

    KAUST Repository

    Beck, Joakim; Tempone, Raul; Nobile, Fabio; Tamellini, Lorenzo

    2012-01-01

    In this work we focus on the numerical approximation of the solution u of a linear elliptic PDE with stochastic coefficients. The problem is rewritten as a parametric PDE and the functional dependence of the solution on the parameters is approximated by multivariate polynomials. We first consider the stochastic Galerkin method, and rely on sharp estimates for the decay of the Fourier coefficients of the spectral expansion of u on an orthogonal polynomial basis to build a sequence of polynomial subspaces that features better convergence properties, in terms of error versus number of degrees of freedom, than standard choices such as Total Degree or Tensor Product subspaces. We consider then the Stochastic Collocation method, and use the previous estimates to introduce a new class of Sparse Grids, based on the idea of selecting a priori the most profitable hierarchical surpluses, that, again, features better convergence properties compared to standard Smolyak or tensor product grids. Numerical results show the effectiveness of the newly introduced polynomial spaces and sparse grids. © 2012 World Scientific Publishing Company.

  12. On the optimal polynomial approximation of stochastic PDEs by galerkin and collocation methods

    KAUST Repository

    Beck, Joakim

    2012-09-01

    In this work we focus on the numerical approximation of the solution u of a linear elliptic PDE with stochastic coefficients. The problem is rewritten as a parametric PDE and the functional dependence of the solution on the parameters is approximated by multivariate polynomials. We first consider the stochastic Galerkin method, and rely on sharp estimates for the decay of the Fourier coefficients of the spectral expansion of u on an orthogonal polynomial basis to build a sequence of polynomial subspaces that features better convergence properties, in terms of error versus number of degrees of freedom, than standard choices such as Total Degree or Tensor Product subspaces. We consider then the Stochastic Collocation method, and use the previous estimates to introduce a new class of Sparse Grids, based on the idea of selecting a priori the most profitable hierarchical surpluses, that, again, features better convergence properties compared to standard Smolyak or tensor product grids. Numerical results show the effectiveness of the newly introduced polynomial spaces and sparse grids. © 2012 World Scientific Publishing Company.

  13. Dynamic Asset Allocation with Stochastic Income and Interest Rates

    DEFF Research Database (Denmark)

    Munk, Claus; Sørensen, Carsten

    2010-01-01

    We solve for optimal portfolios when interest rates and labor income are stochastic with the expected income growth being affine in the short-term interest rate in order to encompass business cycle variations in wages. Our calibration based on the Panel Study of Income Dynamics (PSID) data supports...

  14. Fast and robust estimation of spectro-temporal receptive fields using stochastic approximations.

    Science.gov (United States)

    Meyer, Arne F; Diepenbrock, Jan-Philipp; Ohl, Frank W; Anemüller, Jörn

    2015-05-15

    The receptive field (RF) represents the signal preferences of sensory neurons and is the primary analysis method for understanding sensory coding. While it is essential to estimate a neuron's RF, finding numerical solutions to increasingly complex RF models can become computationally intensive, in particular for high-dimensional stimuli or when many neurons are involved. Here we propose an optimization scheme based on stochastic approximations that facilitate this task. The basic idea is to derive solutions on a random subset rather than computing the full solution on the available data set. To test this, we applied different optimization schemes based on stochastic gradient descent (SGD) to both the generalized linear model (GLM) and a recently developed classification-based RF estimation approach. Using simulated and recorded responses, we demonstrate that RF parameter optimization based on state-of-the-art SGD algorithms produces robust estimates of the spectro-temporal receptive field (STRF). Results on recordings from the auditory midbrain demonstrate that stochastic approximations preserve both predictive power and tuning properties of STRFs. A correlation of 0.93 with the STRF derived from the full solution may be obtained in less than 10% of the full solution's estimation time. We also present an on-line algorithm that allows simultaneous monitoring of STRF properties of more than 30 neurons on a single computer. The proposed approach may not only prove helpful for large-scale recordings but also provides a more comprehensive characterization of neural tuning in experiments than standard tuning curves. Copyright © 2015 Elsevier B.V. All rights reserved.
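
    A hedged illustration of the stochastic-approximation idea (optimizing an RF model on random mini-batches rather than on the full data set) is given below for a simple linear receptive-field model with squared-error loss; the stimulus dimensions, learning rate and synthetic data are placeholders, and the snippet is not the authors' GLM or classification-based estimator.

```python
import numpy as np

def sgd_strf(stimuli, responses, lr=0.01, batch=64, epochs=20, seed=0):
    """Mini-batch SGD for a linear receptive-field model minimizing squared error."""
    rng = np.random.default_rng(seed)
    n, d = stimuli.shape
    w = np.zeros(d)
    for _ in range(epochs):
        for idx in np.array_split(rng.permutation(n), max(1, n // batch)):
            X, y = stimuli[idx], responses[idx]
            grad = X.T @ (X @ w - y) / len(idx)      # gradient of 0.5*||Xw - y||^2 on the mini-batch
            w -= lr * grad                           # stochastic gradient step on the random subset
    return w

# Toy usage: 20x10 spectro-temporal stimulus patches flattened to 200-dimensional vectors.
rng = np.random.default_rng(1)
true_w = rng.normal(size=200)
X = rng.normal(size=(5000, 200))
y = X @ true_w + rng.normal(0.0, 1.0, size=5000)
w_hat = sgd_strf(X, y)
```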

  15. An inexact log-normal distribution-based stochastic chance-constrained model for agricultural water quality management

    Science.gov (United States)

    Wang, Yu; Fan, Jie; Xu, Ye; Sun, Wei; Chen, Dong

    2018-05-01

    In this study, an inexact log-normal-based stochastic chance-constrained programming model was developed for solving the non-point source pollution issues caused by agricultural activities. Compared to the general stochastic chance-constrained programming model, the main advantage of the proposed model is that it allows random variables to be expressed as a log-normal distribution, rather than a general normal distribution. Possible deviations in solutions caused by irrational parameter assumptions were avoided. The agricultural system management in the Erhai Lake watershed was used as a case study, where critical system factors, including rainfall and runoff amounts, show characteristics of a log-normal distribution. Several interval solutions were obtained under different constraint-satisfaction levels, which were useful in evaluating the trade-off between system economy and reliability. The applied results show that the proposed model could help decision makers to design optimal production patterns under complex uncertainties. The successful application of this model is expected to provide a good example for agricultural management in many other watersheds.
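
    The core device, replacing a probabilistic constraint on a log-normally distributed quantity with its deterministic quantile equivalent, can be sketched as follows. The pollutant-load numbers and the 90% satisfaction level are invented for illustration and are unrelated to the Erhai Lake case study.

```python
import numpy as np
from scipy.stats import lognorm

# Chance constraint:  P( runoff * load_rate <= capacity ) >= alpha
# with runoff ~ LogNormal(mu, sigma).  The deterministic equivalent bounds
# load_rate by capacity divided by the alpha-quantile of the runoff.
mu, sigma = 2.0, 0.4          # log-scale parameters of the runoff (illustrative)
capacity = 50.0               # allowable pollutant load (illustrative units)
alpha = 0.90                  # required constraint-satisfaction level

runoff_q = lognorm(s=sigma, scale=np.exp(mu)).ppf(alpha)   # alpha-quantile of runoff
max_load_rate = capacity / runoff_q
print(f"{alpha:.0%} quantile of runoff: {runoff_q:.2f}")
print(f"maximum admissible load rate: {max_load_rate:.3f}")

# Monte Carlo check that the constraint holds with probability >= alpha
samples = lognorm(s=sigma, scale=np.exp(mu)).rvs(100_000, random_state=1)
print("empirical satisfaction:", np.mean(samples * max_load_rate <= capacity))
```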

  16. Optimal sampling designs for large-scale fishery sample surveys in Greece

    Directory of Open Access Journals (Sweden)

    G. BAZIGOS

    2007-12-01

    The paper deals with the optimization of the following three large-scale sample surveys: the biological sample survey of commercial landings (BSCL), the experimental fishing sample survey (EFSS), and the commercial landings and effort sample survey (CLES).

  17. Stochastic resonance based on modulation instability in spatiotemporal chaos.

    Science.gov (United States)

    Han, Jing; Liu, Hongjun; Huang, Nan; Wang, Zhaolu

    2017-04-03

    A novel dynamic of stochastic resonance in spatiotemporal chaos is presented, which is based on the modulation instability of a perturbed partially coherent wave. The noise immunity of chaos can be reinforced through this effect and used to restore the coherent signal information buried in chaotic perturbation. A theoretical model with a fluctuation term is derived from the complex Ginzburg-Landau equation via the Wigner transform. It shows that through weakening the nonlinear threshold and triggering energy redistribution, the coherent component dominates the instability damped by the incoherent component. The spatiotemporal output showing the properties of stochastic resonance may provide a potential application in signal encryption and restoration.

  18. Fuzzy stochastic damage mechanics (FSDM) based on fuzzy auto-adaptive control theory

    Directory of Open Access Journals (Sweden)

    Ya-jun Wang

    2012-06-01

    Full Text Available In order to fully interpret and describe damage mechanics, the origin and development of fuzzy stochastic damage mechanics were introduced based on the analysis of the harmony of damage, probability, and fuzzy membership in the interval of [0,1]. In a complete normed linear space, it was proven that a generalized damage field can be simulated through a β probability distribution. Three kinds of fuzzy behaviors of damage variables were formulated and explained through analysis of the generalized uncertainty of damage variables and the establishment of a fuzzy functional expression. Corresponding fuzzy mapping distributions, namely, the half-depressed distribution, swing distribution, and combined swing distribution, which can simulate varying fuzzy evolution in diverse stochastic damage situations, were set up. Furthermore, through demonstration of the generalized probabilistic characteristics of damage variables, the cumulative distribution function and probability density function of fuzzy stochastic damage variables, which follow a β probability distribution, were modified according to the expansion principle. The three-dimensional fuzzy stochastic damage mechanical behaviors of the Longtan rolled-concrete dam were examined with the self-developed fuzzy stochastic damage finite element program. The statistical correlation and non-normality of random field parameters were considered comprehensively in the fuzzy stochastic damage model described in this paper. The results show that an initial damage field based on the comprehensive statistical evaluation helps to avoid many difficulties in the establishment of experiments and numerical algorithms for damage mechanics analysis.

  19. System Entropy Measurement of Stochastic Partial Differential Systems

    Directory of Open Access Journals (Sweden)

    Bor-Sen Chen

    2016-03-01

    Full Text Available System entropy describes the dispersal of a system’s energy and is an indication of the disorder of a physical system. Several system entropy measurement methods have been developed for dynamic systems. However, most real physical systems are modeled using stochastic partial differential dynamic equations in the spatio-temporal domain. No efficient method currently exists that can calculate the system entropy of stochastic partial differential systems (SPDSs) in consideration of the effects of intrinsic random fluctuation and compartment diffusion. In this study, a novel indirect measurement method is proposed for calculating the system entropy of SPDSs using a Hamilton–Jacobi integral inequality (HJII)-constrained optimization method. In other words, we solve a nonlinear HJII-constrained optimization problem for measuring the system entropy of nonlinear stochastic partial differential systems (NSPDSs). To simplify the system entropy measurement of NSPDSs, the global linearization technique and a finite difference scheme were employed to approximate the nonlinear stochastic spatial state space system. This allows the nonlinear HJII-constrained optimization problem for the system entropy measurement to be transformed into an equivalent linear matrix inequalities (LMIs)-constrained optimization problem, which can be easily solved using the MATLAB LMI toolbox (MATLAB R2014a, version 8.3). Finally, several examples are presented to illustrate the system entropy measurement of SPDSs.
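
    As a minimal illustration of an LMI-constrained optimization problem in code, the sketch below solves a generic Lyapunov feasibility problem with the Python library cvxpy; it stands in for, and is unrelated to, the entropy-specific LMIs derived in the paper, which were solved with the MATLAB LMI toolbox.

```python
import numpy as np
import cvxpy as cp

# Generic LMI feasibility problem: find P = P^T > 0 with A^T P + P A < 0,
# which certifies stability of dx/dt = A x.  This stands in for the kind of
# LMI-constrained problem obtained after linearization, not the paper's LMIs.
A = np.array([[-1.0, 2.0],
              [0.0, -3.0]])
n = A.shape[0]
P = cp.Variable((n, n), symmetric=True)
eps = 1e-6
constraints = [P >> eps * np.eye(n),
               A.T @ P + P @ A << -eps * np.eye(n)]
prob = cp.Problem(cp.Minimize(cp.trace(P)), constraints)  # any objective works for feasibility
prob.solve()
print("status:", prob.status)
print("P =\n", P.value)
```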

  20. Stochastic fluctuations and the detectability limit of network communities.

    Science.gov (United States)

    Floretta, Lucio; Liechti, Jonas; Flammini, Alessandro; De Los Rios, Paolo

    2013-12-01

    We have analyzed the detectability limits of network communities in the framework of the popular Girvan and Newman benchmark. By carefully taking into account the inevitable stochastic fluctuations that affect the construction of each and every instance of the benchmark, we come to the conclusion that the native, putative partition of the network is completely lost even before the in-degree/out-degree ratio becomes equal to that of a structureless Erdös-Rényi network. We develop a simple iterative scheme, analytically well described by an infinite branching process, to provide an estimate of the true detectability limit. Using various algorithms based on modularity optimization, we show that all of them behave (semiquantitatively) in the same way, with the same functional form of the detectability threshold as a function of the network parameters. Because the same behavior has also been found by further modularity-optimization methods and for methods based on different heuristics implementations, we conclude that indeed a correct definition of the detectability limit must take into account the stochastic fluctuations of the network construction.

  1. Efficient Estimating Functions for Stochastic Differential Equations

    DEFF Research Database (Denmark)

    Jakobsen, Nina Munkholt

    The overall topic of this thesis is approximate martingale estimating function-based estimation for solutions of stochastic differential equations, sampled at high frequency. Focus lies on the asymptotic properties of the estimators. The first part of the thesis deals with diffusions observed over...

  2. An Optimization of (Q,r) Inventory Policy Based on Health Care Apparel Products with Compound Poisson Demands

    Directory of Open Access Journals (Sweden)

    An Pan

    2014-01-01

    Full Text Available Addressing the problems of a health care center which produces tailor-made clothes for specific people, the paper proposes a single-product continuous review model and establishes an optimal policy for the center based on the (Q,r) control policy to minimize the expected average cost over an order cycle. A generic mathematical model to compute cost on real-time inventory level is developed to generate the optimal order quantity under stochastic stock variation. The customer demands are described as a compound Poisson process. Comparisons of cost between the optimization method and experience-based decisions on Q are made through numerical studies conducted for the inventory system of the center.
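
    A rough sketch of the kind of comparison described, estimating the expected cost of a (Q,r) policy under compound Poisson demand by simulation and scanning Q for the cheapest value, is given below; all cost coefficients and demand parameters are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(42)

def average_cost(Q, r, lam=2.0, order_cost=50.0, hold=0.5, short=5.0,
                 horizon=200, n_rep=200):
    """Average per-period cost of a continuous-review (Q, r) policy with
    compound Poisson demand (Poisson arrivals, geometric order sizes)."""
    costs = []
    for _ in range(n_rep):
        inv, total = Q + r, 0.0
        for _ in range(horizon):
            demand = rng.geometric(0.5, rng.poisson(lam)).sum()
            inv -= demand
            if inv <= r:                 # reorder point reached: place an order
                inv += Q
                total += order_cost
            total += hold * max(inv, 0) + short * max(-inv, 0)
        costs.append(total / horizon)
    return np.mean(costs)

r = 5
for Q in (5, 10, 20, 40):
    print(f"Q={Q:3d}  average cost ~ {average_cost(Q, r):7.2f}")
```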

  3. Application of Stochastic Sensitivity Analysis to Integrated Force Method

    Directory of Open Access Journals (Sweden)

    X. F. Wei

    2012-01-01

    Full Text Available As a new formulation in structural analysis, the Integrated Force Method (IFM) has been successfully applied to many structures in civil, mechanical, and aerospace engineering due to its accurate estimate of forces in computation. It is now being further extended to the probabilistic domain. For the assessment of uncertainty effects in system optimization and identification, the probabilistic sensitivity analysis of IFM was further investigated in this study. A stochastic sensitivity analysis formulation of the Integrated Force Method was developed using the perturbation method. Numerical examples are presented to illustrate its application. Its efficiency and accuracy were also substantiated with direct Monte Carlo simulations and the reliability-based sensitivity method. The numerical algorithm was shown to be readily adaptable to the existing program since the models of stochastic finite element and stochastic design sensitivity are almost identical.
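
    A toy comparison of the two routes mentioned above, first-order perturbation of a response with respect to a random parameter versus direct Monte Carlo simulation, is sketched below for a single spring; the stiffness statistics and load are invented and unrelated to the IFM formulation itself.

```python
import numpy as np

rng = np.random.default_rng(2)

# Single spring: displacement u = F / k, with random stiffness k ~ N(mu_k, sd_k)
F, mu_k, sd_k = 10.0, 100.0, 8.0

# First-order perturbation: u(k) ~ u(mu_k) + du/dk * (k - mu_k)
u0 = F / mu_k
dudk = -F / mu_k**2
mean_pert, std_pert = u0, abs(dudk) * sd_k

# Direct Monte Carlo
k = rng.normal(mu_k, sd_k, 200_000)
u = F / k
print(f"perturbation: mean {mean_pert:.4f}, std {std_pert:.4f}")
print(f"Monte Carlo : mean {u.mean():.4f}, std {u.std():.4f}")
```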

  4. A New Optimization Framework To Solve The Optimal Feeder Reconfiguration And Capacitor Placement Problems

    Directory of Open Access Journals (Sweden)

    Mohammad-Reza Askari

    2015-07-01

    Full Text Available This paper introduces a new stochastic optimization framework based on the bat algorithm (BA) to solve the optimal distribution feeder reconfiguration (DFR) as well as the shunt capacitor placement and sizing problems in distribution systems. The objective functions to be investigated are minimization of the active power losses and minimization of the total network costs. In order to consider the uncertainties of the active and reactive loads in the problem, the point estimate method (PEM) with the 2m scheme is employed as the stochastic tool. The feasibility and good performance of the proposed method are examined on the IEEE 69-bus test system.
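
    A minimal sketch of Hong's 2m point estimate scheme, the stochastic tool named above, is given below for a generic function of m uncertain inputs; the quadratic "network loss" model and its input statistics are illustrative only.

```python
import numpy as np

def pem_2m(func, means, stds, skews=None):
    """Hong's 2m point estimate method: approximate the mean and standard
    deviation of func(x) for m uncertain inputs using only 2m evaluations."""
    means, stds = np.asarray(means, float), np.asarray(stds, float)
    m = len(means)
    skews = np.zeros(m) if skews is None else np.asarray(skews, float)
    e_y, e_y2 = 0.0, 0.0
    for k in range(m):
        half = skews[k] / 2.0
        xi1, xi2 = half + np.sqrt(m + half**2), half - np.sqrt(m + half**2)
        w1 = -xi2 / (m * (xi1 - xi2))
        w2 = xi1 / (m * (xi1 - xi2))
        for xi, w in ((xi1, w1), (xi2, w2)):
            x = means.copy()
            x[k] = means[k] + xi * stds[k]   # perturb only the k-th input
            y = func(x)
            e_y += w * y
            e_y2 += w * y**2
    return e_y, np.sqrt(max(e_y2 - e_y**2, 0.0))

# Illustrative "network loss" model of two uncertain loads
loss = lambda p: 0.02 * p[0]**2 + 0.015 * p[1]**2 + 0.01 * p[0] * p[1]
mean_loss, std_loss = pem_2m(loss, means=[1.0, 0.8], stds=[0.1, 0.2])
print(f"loss estimate: {mean_loss:.4f} +/- {std_loss:.4f}")
```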

  5. Pavement maintenance optimization model using Markov Decision Processes

    Science.gov (United States)

    Mandiartha, P.; Duffield, C. F.; Razelan, I. S. b. M.; Ismail, A. b. H.

    2017-09-01

    This paper presents an optimization model for the selection of pavement maintenance interventions using the theory of Markov Decision Processes (MDP). Some particular characteristics of the MDP developed in this paper distinguish it from other similar studies or optimization models intended for pavement maintenance policy development. These unique characteristics include the direct inclusion of constraints in the formulation of the MDP, the use of an average-cost MDP, and a policy development process based on the dual linear programming solution. The limited information and discussion available on these matters for stochastic optimization models in road network management motivates this study. This paper uses a data set acquired from the road authority of the state of Victoria, Australia, to test the model and recommends steps in the computation of the MDP-based stochastic optimization model, leading to the development of an optimum pavement maintenance policy.
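
    A compact sketch of the occupation-measure (dual) linear program for an average-cost MDP, the kind of formulation referred to above, is shown below on a toy two-state, two-action pavement example; the transition probabilities and costs are invented for illustration.

```python
import numpy as np
from scipy.optimize import linprog

# Toy pavement MDP: states {good, poor}, actions {do nothing, maintain}.
# P[a][s, s'] transition probabilities, C[s, a] per-period costs (illustrative).
P = {0: np.array([[0.7, 0.3], [0.0, 1.0]]),    # do nothing
     1: np.array([[0.95, 0.05], [0.6, 0.4]])}  # maintain
C = np.array([[0.0, 4.0],     # cost in the "good" state
              [10.0, 6.0]])   # cost in the "poor" state (user cost vs. maintenance)
nS, nA = 2, 2

# Variables x[s, a] = long-run frequency of being in state s and taking action a.
c = C.flatten()                      # minimise average cost sum_{s,a} C[s,a] x[s,a]
A_eq = np.zeros((nS + 1, nS * nA))
for s2 in range(nS):                 # balance: outflow of s2 equals inflow to s2
    for s in range(nS):
        for a in range(nA):
            A_eq[s2, s * nA + a] = (s == s2) - P[a][s, s2]
A_eq[nS, :] = 1.0                    # frequencies sum to one
b_eq = np.append(np.zeros(nS), 1.0)

res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=[(0, None)] * (nS * nA))
x = res.x.reshape(nS, nA)
policy = x.argmax(axis=1)            # deterministic policy read off the occupation measure
print("average cost:", res.fun, " policy (0=nothing, 1=maintain):", policy)
```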

  6. Stochastic Properties of Plasticity Based Constitutive Law for Concrete

    DEFF Research Database (Denmark)

    Frier, Christian; Sørensen, John Dalsgaard

    The purpose of this paper is to obtain a stochastic model for the parameters in a constitutive model for concrete based on associated plasticity theory and with emphasis placed on the pre-failure range. The constitutive model is based on a Drucker-Prager yield surface augmented by a Rankine cut-o...

  7. Event-based state estimation a stochastic perspective

    CERN Document Server

    Shi, Dawei; Chen, Tongwen

    2016-01-01

    This book explores event-based estimation problems. It shows how several stochastic approaches are developed to maintain estimation performance when sensors perform their updates at slower rates only when needed. The self-contained presentation makes this book suitable for readers with no more than a basic knowledge of probability analysis, matrix algebra and linear systems. The introduction and literature review provide information, while the main content deals with estimation problems from four distinct angles in a stochastic setting, using numerous illustrative examples and comparisons. The text elucidates both theoretical developments and their applications, and is rounded out by a review of open problems. This book is a valuable resource for researchers and students who wish to expand their knowledge and work in the area of event-triggered systems. At the same time, engineers and practitioners in industrial process control will benefit from the event-triggering technique that reduces communication costs ...

  8. Memristor-based neural networks: Synaptic versus neuronal stochasticity

    KAUST Repository

    Naous, Rawan

    2016-11-02

    In neuromorphic circuits, stochasticity in the cortex can be mapped into the synaptic or neuronal components. The hardware emulation of these stochastic neural networks is currently being extensively studied using resistive memories or memristors. The ionic process involved in the underlying switching behavior of the memristive elements is considered the main source of stochasticity in its operation. Building on its inherent variability, the memristor is incorporated into abstract models of stochastic neurons and synapses. Two approaches to stochastic neural networks are investigated. Aside from size and area considerations, the main points of comparison are the impact of these two approaches on system performance, in terms of accuracy, recognition rates, and learning, and where the memristor fits into each.

  9. Stochastic resonance in small-world neuronal networks with hybrid electrical–chemical synapses

    International Nuclear Information System (INIS)

    Wang, Jiang; Guo, Xinmeng; Yu, Haitao; Liu, Chen; Deng, Bin; Wei, Xile; Chen, Yingyuan

    2014-01-01

    Highlights: • We study stochastic resonance in small-world neural networks with hybrid synapses. • The resonance effect depends largely on the probability of chemical synapse. • An optimal chemical synapse probability exists to evoke network resonance. • Network topology affects the stochastic resonance in hybrid neuronal networks. - Abstract: The dependence of stochastic resonance in small-world neuronal networks with hybrid electrical–chemical synapses on the probability of chemical synapse and the rewiring probability is investigated. A subthreshold periodic signal is imposed on one single neuron within the neuronal network as a pacemaker. It is shown that, irrespective of the probability of chemical synapse, there exists a moderate intensity of external noise optimizing the response of neuronal networks to the pacemaker. Moreover, the effect of pacemaker-driven stochastic resonance of the system depends largely on the probability of chemical synapse. A high probability of chemical synapse will need lower noise intensity to evoke the phenomenon of stochastic resonance in the networked neuronal systems. In addition, for fixed noise intensity, there is an optimal chemical synapse probability, which can promote the propagation of the localized subthreshold pacemaker across neural networks. The optimal chemical synapse probability becomes even larger as the coupling strength decreases. Furthermore, the small-world topology has a significant impact on the stochastic resonance in hybrid neuronal networks. It is found that increasing the rewiring probability can always enhance the stochastic resonance until it approaches the random-network limit.
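
    The core phenomenon, an intermediate noise intensity maximizing the response of a threshold element to a subthreshold periodic drive, can be reproduced with a few lines; the simple threshold-crossing model below is a generic illustration, not the hybrid-synapse network model of the paper.

```python
import numpy as np

rng = np.random.default_rng(5)

def response_correlation(noise_std, freq=5.0, amp=0.9, thresh=1.0,
                         dt=1e-3, T=20.0):
    """Correlation between a subthreshold sinusoidal drive and the output of a
    simple threshold-crossing detector, as a function of noise intensity."""
    t = np.arange(0, T, dt)
    drive = amp * np.sin(2 * np.pi * freq * t)          # subthreshold: amp < thresh
    spikes = (drive + noise_std * rng.standard_normal(t.size)) > thresh
    return np.corrcoef(spikes.astype(float), drive)[0, 1]

# The correlation should peak at a moderate noise level (stochastic resonance).
for s in (0.1, 0.3, 0.6, 1.0, 2.0, 4.0):
    print(f"noise std {s:4.1f} -> output/signal correlation {response_correlation(s):.3f}")
```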

  10. Stationary stochastic processes theory and applications

    CERN Document Server

    Lindgren, Georg

    2012-01-01

    Some Probability and Process Background; Sample space, sample function, and observables; Random variables and stochastic processes; Stationary processes and fields; Gaussian processes; Four historical landmarks; Sample Function Properties; Quadratic mean properties; Sample function continuity; Derivatives, tangents, and other characteristics; Stochastic integration; An ergodic result; Exercises; Spectral Representations; Complex-valued stochastic processes; Bochner's theorem and the spectral distribution; Spectral representation of a stationary process; Gaussian processes; Stationary counting processes; Exercises; Linear Filters - General Properties; Linear time invariant filters; Linear filters and differential equations; White noise in linear systems; Long range dependence, non-integrable spectra, and unstable systems; The ARMA-family; Linear Filters - Special Topics; The Hilbert transform and the envelope; The sampling theorem; Karhunen-Loève expansion; Classical Ergodic Theory and Mixing; The basic ergodic theorem in L2; Stationarity and transformations; The ergodic th...

  11. Inverse problem for particle size distributions of atmospheric aerosols using stochastic particle swarm optimization

    International Nuclear Information System (INIS)

    Yuan Yuan; Yi Hongliang; Shuai Yong; Wang Fuqiang; Tan Heping

    2010-01-01

    As a part of resolving optical properties in atmospheric radiative transfer calculations, this paper focuses on obtaining aerosol optical thicknesses (AOTs) in the visible and near-infrared wave band through an indirect method, by retrieving the values of aerosol particle size distribution parameters. Although various inverse techniques have been applied to obtain values for these parameters, we choose a stochastic particle swarm optimization (SPSO) algorithm to perform the inverse calculation. Computational performances of different inverse methods are investigated, and the influence of swarm size on the inverse computation is examined. Next, computational efficiencies for various particle size distributions and the influences of measurement errors on computational accuracy are compared. Finally, we recover particle size distributions for atmospheric aerosols over Beijing using the measured AOT data (at wavelengths λ=0.400, 0.690, 0.870, and 1.020 μm) obtained from AERONET at different times and then calculate other AOT values for this band based on the inverse results. With calculations agreeing with measured data, the SPSO algorithm shows good practicability.
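
    A simplified sketch of a particle swarm with randomized updates inverting a parametric model from AOT measurements is shown below; here the retrieved quantities are Ångström-law parameters rather than a full size distribution obtained through Mie theory, and the "measured" values are synthetic, so everything beyond the basic PSO update rule is an illustrative assumption.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic "measured" AOT at the four wavelengths mentioned in the abstract,
# generated from an Angstrom law tau = beta * lambda**(-alpha) plus noise.
wavelengths = np.array([0.400, 0.690, 0.870, 1.020])
tau_meas = 0.35 * wavelengths**-1.3 + rng.normal(0, 0.005, 4)

def misfit(params):
    beta, alpha = params
    return np.sum((beta * wavelengths**-alpha - tau_meas) ** 2)

# Basic particle swarm optimization with a randomized (stochastic) inertia weight.
n_particles, n_iter = 30, 200
lo, hi = np.array([0.0, 0.0]), np.array([1.0, 3.0])
pos = rng.uniform(lo, hi, size=(n_particles, 2))
vel = np.zeros_like(pos)
pbest, pbest_val = pos.copy(), np.array([misfit(p) for p in pos])
gbest = pbest[pbest_val.argmin()].copy()

for _ in range(n_iter):
    r1, r2 = rng.random((2, n_particles, 1))
    w = rng.uniform(0.4, 0.9)                      # stochastic inertia weight
    vel = w * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, lo, hi)
    val = np.array([misfit(p) for p in pos])
    better = val < pbest_val
    pbest[better], pbest_val[better] = pos[better], val[better]
    gbest = pbest[pbest_val.argmin()].copy()

print("retrieved (beta, alpha):", np.round(gbest, 3))
```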

  12. Investments in the LNG Value Chain: A Multistage Stochastic Optimization Model focusing on Floating Liquefaction Units

    OpenAIRE

    Røstad, Lars Dybsjord; Erichsen, Jeanette Christine

    2012-01-01

    In this thesis, we have developed a strategic optimization model of investments in infrastructure in the LNG value chain. The focus is on floating LNG production units: when they are a viable solution and what value they add to the LNG value chain. First a deterministic model is presented with focus on describing the value chain, before it is expanded to a multistage stochastic model with uncertain field sizes and gas prices. The objective is to maximize expected discounted profits through op...

  13. Population stochastic modelling (PSM)--an R package for mixed-effects models based on stochastic differential equations.

    Science.gov (United States)

    Klim, Søren; Mortensen, Stig Bousgaard; Kristensen, Niels Rode; Overgaard, Rune Viig; Madsen, Henrik

    2009-06-01

    The extension from ordinary to stochastic differential equations (SDEs) in pharmacokinetic and pharmacodynamic (PK/PD) modelling is an emerging field and has been motivated in a number of articles [N.R. Kristensen, H. Madsen, S.H. Ingwersen, Using stochastic differential equations for PK/PD model development, J. Pharmacokinet. Pharmacodyn. 32 (February(1)) (2005) 109-141; C.W. Tornøe, R.V. Overgaard, H. Agersø, H.A. Nielsen, H. Madsen, E.N. Jonsson, Stochastic differential equations in NONMEM: implementation, application, and comparison with ordinary differential equations, Pharm. Res. 22 (August(8)) (2005) 1247-1258; R.V. Overgaard, N. Jonsson, C.W. Tornøe, H. Madsen, Non-linear mixed-effects models with stochastic differential equations: implementation of an estimation algorithm, J. Pharmacokinet. Pharmacodyn. 32 (February(1)) (2005) 85-107; U. Picchini, S. Ditlevsen, A. De Gaetano, Maximum likelihood estimation of a time-inhomogeneous stochastic differential model of glucose dynamics, Math. Med. Biol. 25 (June(2)) (2008) 141-155]. PK/PD models are traditionally based on ordinary differential equations (ODEs) with an observation link that incorporates noise. This state-space formulation only allows for observation noise and not for system noise. Extending to SDEs allows for a Wiener noise component in the system equations. This additional noise component enables handling of autocorrelated residuals originating from natural variation or systematic model error. Autocorrelated residuals are often partly ignored in PK/PD modelling although they violate the assumptions of many standard statistical tests. This article presents a package for the statistical program R that is able to handle SDEs in a mixed-effects setting. The estimation method implemented is the FOCE(1) approximation to the population likelihood, which is generated from the individual likelihoods that are approximated using the Extended Kalman Filter's one-step predictions.
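
    The building block of the estimation method, one-step state predictions from a Kalman filter applied to an SDE with both system and observation noise (for this linear example the extended and ordinary filters coincide), can be sketched as follows; the Ornstein-Uhlenbeck model and all parameter values are illustrative and are not the PK/PD models shipped with the package.

```python
import numpy as np

rng = np.random.default_rng(7)

# Scalar SDE (Ornstein-Uhlenbeck):  dx = theta*(mu - x) dt + sigma_w dW
# observed as                       y_k = x(t_k) + e_k,  e_k ~ N(0, sigma_e^2)
theta, mu, sigma_w, sigma_e, dt, n = 1.5, 2.0, 0.3, 0.1, 0.1, 200

# Simulate a sample path (Euler-Maruyama) and noisy observations
x = np.empty(n)
x[0] = 0.0
for k in range(1, n):
    x[k] = x[k-1] + theta * (mu - x[k-1]) * dt + sigma_w * np.sqrt(dt) * rng.standard_normal()
y = x + sigma_e * rng.standard_normal(n)

def kalman_loglik(theta, mu, sigma_w, sigma_e, y, dt):
    """Log-likelihood built from one-step predictions of the discretised OU model."""
    a = np.exp(-theta * dt)
    q = sigma_w**2 / (2 * theta) * (1 - np.exp(-2 * theta * dt))
    m, P, ll = y[0], sigma_e**2, 0.0          # crude initialisation for the sketch
    for obs in y[1:]:
        m_pred = mu + a * (m - mu)            # one-step state prediction
        P_pred = a**2 * P + q                 # its variance (system noise enters here)
        S = P_pred + sigma_e**2               # innovation variance
        ll += -0.5 * (np.log(2 * np.pi * S) + (obs - m_pred)**2 / S)
        K = P_pred / S                        # Kalman gain; measurement update
        m = m_pred + K * (obs - m_pred)
        P = (1 - K) * P_pred
    return ll

print("log-likelihood at the true parameters:", kalman_loglik(theta, mu, sigma_w, sigma_e, y, dt))
```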

  14. Global optimization and simulated annealing

    NARCIS (Netherlands)

    Dekkers, A.; Aarts, E.H.L.

    1988-01-01

    In this paper we are concerned with global optimization, which can be defined as the problem of finding points on a bounded subset of R^n in which some real-valued function f assumes its optimal (i.e. maximal or minimal) value. We present a stochastic approach which is based on the simulated annealing...
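
    A minimal simulated-annealing sketch for global minimization of a real-valued function on a bounded subset of R^n is given below; the test function, cooling schedule, and proposal scale are illustrative choices rather than the algorithm variant analysed in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulated_annealing(f, lower, upper, n_iter=20_000, T0=1.0, cooling=0.9995):
    """Minimise f over the box [lower, upper] with a Metropolis acceptance rule."""
    lower, upper = np.asarray(lower, float), np.asarray(upper, float)
    x = rng.uniform(lower, upper)
    fx, best, fbest, T = f(x), x.copy(), f(x), T0
    for _ in range(n_iter):
        cand = np.clip(x + T * rng.normal(size=x.size), lower, upper)  # smaller steps as T falls
        fc = f(cand)
        if fc < fx or rng.random() < np.exp(-(fc - fx) / T):  # accept uphill moves with prob e^(-dE/T)
            x, fx = cand, fc
            if fx < fbest:
                best, fbest = x.copy(), fx
        T *= cooling
    return best, fbest

# Rastrigin function: many local minima, global minimum 0 at the origin.
rastrigin = lambda x: 10 * x.size + np.sum(x**2 - 10 * np.cos(2 * np.pi * x))
xb, fb = simulated_annealing(rastrigin, [-5.12] * 2, [5.12] * 2)
print("best point:", np.round(xb, 3), " value:", round(fb, 4))
```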

  15. Stochastic programming and market equilibrium analysis of microgrids energy management systems

    International Nuclear Information System (INIS)

    Hu, Ming-Che; Lu, Su-Ying; Chen, Yen-Haw

    2016-01-01

    Microgrids facilitate optimum utilization of distributed renewable energy, provide better local energy supply, and reduce transmission losses and greenhouse gas emissions. Because uncertainty in energy demand affects the energy demand and supply system, the aim of this research is to develop a stochastic optimization model and its market equilibrium analysis for microgrids in the electricity market. Therefore, a two-stage stochastic programming model for microgrids and a market competition model are derived in this paper. In the stochastic model, energy demand and supply uncertainties are considered. Furthermore, a case study of the stochastic model is conducted to simulate the uncertainties on the INER microgrids in the Taiwanese market. The optimal investment in generator and battery installation and the operating strategies are determined under energy demand and supply uncertainties for the INER microgrids. The results show that optimal investment and operating strategies for the current INER microgrids are also determined by the proposed two-stage stochastic model in the market. In addition, the trade-off between battery capacity and microgrid performance is investigated. Battery usage and power trading between the microgrids and the main grid system are functions of battery capacity. - Highlights: • A two-stage stochastic programming model is developed for microgrids. • Market equilibrium analysis of microgrids is conducted. • A case study of the stochastic model is conducted for the INER microgrids.
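
    A tiny deterministic-equivalent sketch of a two-stage stochastic program, a here-and-now capacity decision plus per-scenario recourse purchases from the main grid, is shown below; the scenario values, probabilities, and costs are invented and bear no relation to the INER data.

```python
import numpy as np
from scipy.optimize import linprog

# Demand scenarios (kW) and probabilities for the second stage (illustrative).
demand = np.array([80.0, 120.0, 160.0])
prob = np.array([0.3, 0.5, 0.2])
c_invest, c_buy = 60.0, 100.0          # per-kW capacity cost vs. per-kW grid purchase

# Variables: x = [g, p_1, p_2, p_3]  (installed capacity, recourse purchases)
c = np.concatenate(([c_invest], c_buy * prob))        # expected total cost
# Constraint g + p_s >= d_s  ->  -g - p_s <= -d_s  for every scenario s
A_ub = np.hstack((-np.ones((3, 1)), -np.eye(3)))
b_ub = -demand

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 4)
g, p = res.x[0], res.x[1:]
print(f"install {g:.1f} kW; scenario purchases {np.round(p, 1)}; expected cost {res.fun:.0f}")
```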

  16. Classifier-Guided Sampling for Complex Energy System Optimization

    Energy Technology Data Exchange (ETDEWEB)

    Backlund, Peter B. [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States); Eddy, John P. [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States)

    2015-09-01

    This report documents the results of a Laboratory Directed Research and Development (LDRD) effort entitled "Classifier-Guided Sampling for Complex Energy System Optimization" that was conducted during FY 2014 and FY 2015. The goal of this project was to develop, implement, and test major improvements to the classifier-guided sampling (CGS) algorithm. CGS is a type of evolutionary algorithm for performing search and optimization over a set of discrete design variables in the face of one or more objective functions. Existing evolutionary algorithms, such as genetic algorithms, may require a large number of objective function evaluations to identify optimal or near-optimal solutions. Reducing the number of evaluations can result in significant time savings, especially if the objective function is computationally expensive. CGS reduces the evaluation count by using a Bayesian network classifier to filter out non-promising candidate designs, prior to evaluation, based on their posterior probabilities. In this project, both the single-objective and multi-objective versions of CGS are developed and tested on a set of benchmark problems. As a domain-specific case study, CGS is used to design a microgrid for use in islanded mode during an extended bulk power grid outage.
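
    A highly simplified sketch of the filtering idea, training a classifier on already-evaluated designs and evaluating only the candidates it deems most promising, appears below; it uses a naive Bayes classifier in place of the Bayesian network classifier from the report, and the cheap binary test problem is invented.

```python
import numpy as np
from sklearn.naive_bayes import BernoulliNB

rng = np.random.default_rng(1)
n_bits = 20
objective = lambda x: -x.sum() + 3 * (x[0] ^ x[1])   # cheap stand-in for an expensive simulation

# Seed archive of evaluated designs
X = rng.integers(0, 2, size=(40, n_bits))
y = np.array([objective(x) for x in X])
evals = len(X)

for _ in range(15):
    labels = (y <= np.median(y)).astype(int)          # 1 = "promising" (low objective)
    parents = X[np.argsort(y)[:10]]                   # propose by mutating the best designs
    cand = parents[rng.integers(0, 10, 200)].copy()
    cand[rng.random(cand.shape) < 0.1] ^= 1           # bit-flip mutation
    if labels.min() != labels.max():
        clf = BernoulliNB().fit(X, labels)
        post = clf.predict_proba(cand)[:, 1]          # posterior P(promising | design)
        keep = cand[np.argsort(-post)][:20]           # evaluate only the most promising
    else:
        keep = cand[:20]                              # degenerate split: skip filtering
    y_new = np.array([objective(x) for x in keep])
    evals += len(keep)
    X, y = np.vstack([X, keep]), np.concatenate([y, y_new])

print("best objective:", y.min(), "after", evals, "evaluations")
```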

  17. Understanding and Optimizing Asynchronous Low-Precision Stochastic Gradient Descent

    Science.gov (United States)

    De Sa, Christopher; Feldman, Matthew; Ré, Christopher; Olukotun, Kunle

    2018-01-01

    Stochastic gradient descent (SGD) is one of the most popular numerical algorithms used in machine learning and other domains. Since this is likely to continue for the foreseeable future, it is important to study techniques that can make it run fast on parallel hardware. In this paper, we provide the first analysis of a technique called Buckwild! that uses both asynchronous execution and low-precision computation. We introduce the DMGC model, the first conceptualization of the parameter space that exists when implementing low-precision SGD, and show that it provides a way to both classify these algorithms and model their performance. We leverage this insight to propose and analyze techniques to improve the speed of low-precision SGD. First, we propose software optimizations that can increase throughput on existing CPUs by up to 11×. Second, we propose architectural changes, including a new cache technique we call an obstinate cache, that increase throughput beyond the limits of current-generation hardware. We also implement and analyze low-precision SGD on the FPGA, which is a promising alternative to the CPU for future SGD systems. PMID:29391770
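
    A toy illustration of one ingredient discussed in the paper, stochastic rounding of low-precision weight updates during SGD, is given below; the 8-bit fixed-point format, the linear-regression task, and the hyper-parameters are all illustrative and unrelated to the Buckwild! implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

SCALE = 2 ** 8      # 8 fractional bits of fixed-point precision

def stochastic_round(v):
    """Round v to the fixed-point grid 1/SCALE, rounding up or down at random
    with probability proportional to the distance (keeps updates unbiased)."""
    scaled = v * SCALE
    low = np.floor(scaled)
    return (low + (rng.random(v.shape) < scaled - low)) / SCALE

# Low-precision SGD on a linear regression problem
n, d = 2000, 10
X = rng.standard_normal((n, d))
w_true = rng.standard_normal(d)
y = X @ w_true + 0.01 * rng.standard_normal(n)

w = np.zeros(d)
for step in range(5000):
    i = rng.integers(n)
    grad = (X[i] @ w - y[i]) * X[i]
    w = stochastic_round(w - 0.01 * grad)   # weights stay on the low-precision grid

print("error of low-precision solution:", np.linalg.norm(w - w_true))
```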

  18. Dynamic Programming and Error Estimates for Stochastic Control Problems with Maximum Cost

    International Nuclear Information System (INIS)

    Bokanowski, Olivier; Picarelli, Athena; Zidani, Hasnaa

    2015-01-01

    This work is concerned with stochastic optimal control for a running maximum cost. A direct approach based on dynamic programming techniques is studied leading to the characterization of the value function as the unique viscosity solution of a second order Hamilton–Jacobi–Bellman (HJB) equation with an oblique derivative boundary condition. A general numerical scheme is proposed and a convergence result is provided. Error estimates are obtained for the semi-Lagrangian scheme. These results can apply to the case of lookback options in finance. Moreover, optimal control problems with maximum cost arise in the characterization of the reachable sets for a system of controlled stochastic differential equations. Some numerical simulations on examples of reachable analysis are included to illustrate our approach
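
    Up to sign conventions, the structure described above can be written out as follows; the drift b, diffusion σ, running cost g, and terminal cost φ are generic placeholders, and the precise statement in the paper may differ.

```latex
% Value function of a controlled diffusion with a running-maximum cost,
%   v(t,x,y) = \inf_u \mathbb{E}\big[\varphi\big(X_T,\ \max(y,\ \max_{s\in[t,T]} g(X_s))\big)\big],
% formally satisfies an HJB equation with an oblique derivative boundary condition:
\begin{align*}
  -\partial_t v + \sup_{u\in U}\Big\{-b(x,u)\cdot D_x v
      - \tfrac12\,\mathrm{tr}\big(\sigma\sigma^{\top}(x,u)\,D_x^2 v\big)\Big\} &= 0,
      && t<T,\ y > g(x),\\
  -\partial_y v &= 0, && t<T,\ y = g(x),\\
  v(T,x,y) &= \varphi(x,y).
\end{align*}
```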

  19. Dynamic Programming and Error Estimates for Stochastic Control Problems with Maximum Cost

    Energy Technology Data Exchange (ETDEWEB)

    Bokanowski, Olivier, E-mail: boka@math.jussieu.fr [Laboratoire Jacques-Louis Lions, Université Paris-Diderot (Paris 7) UFR de Mathématiques - Bât. Sophie Germain (France); Picarelli, Athena, E-mail: athena.picarelli@inria.fr [Projet Commands, INRIA Saclay & ENSTA ParisTech (France); Zidani, Hasnaa, E-mail: hasnaa.zidani@ensta.fr [Unité de Mathématiques appliquées (UMA), ENSTA ParisTech (France)

    2015-02-15

    This work is concerned with stochastic optimal control for a running maximum cost. A direct approach based on dynamic programming techniques is studied leading to the characterization of the value function as the unique viscosity solution of a second order Hamilton–Jacobi–Bellman (HJB) equation with an oblique derivative boundary condition. A general numerical scheme is proposed and a convergence result is provided. Error estimates are obtained for the semi-Lagrangian scheme. These results can apply to the case of lookback options in finance. Moreover, optimal control problems with maximum cost arise in the characterization of the reachable sets for a system of controlled stochastic differential equations. Some numerical simulations on examples of reachable analysis are included to illustrate our approach.

  20. Stochastic scheduling of local distribution systems considering high penetration of plug-in electric vehicles and renewable energy sources

    International Nuclear Information System (INIS)

    Tabatabaee, Sajad; Mortazavi, Seyed Saeedallah; Niknam, Taher

    2017-01-01

    This paper investigates the optimal scheduling of electric power units in renewable-based local distribution systems considering plug-in electric vehicles (PEVs). The appearance of PEVs in the electric grid can create new challenges for the operation of distributed generations and power units inside the network. In order to deal with this issue, a new stochastic optimization method is devised to let the central controller manage the power units and the charging behavior of PEVs. The problem formulation aims to minimize the total cost of the network, including the cost of power supply for loads and PEVs as well as the cost of energy not supplied (ENS) as the reliability cost. In order to make PEVs an opportunity for the grid, vehicle-to-grid (V2G) technology is employed to reduce the operational costs. To model the highly uncertain behavior of wind turbines, photovoltaics, and the charging and discharging patterns of PEVs, a new stochastic power flow based on the unscented transform is proposed. Finally, a new optimization algorithm based on the bat algorithm (BA) is proposed to solve the problem optimally. The satisfying performance of the proposed stochastic method is tested on a grid-connected local distribution system. - Highlights: • Introduction of a stochastic method to assess the effects of plug-in electric vehicles on the microgrid. • Assessing the role of V2G technology on battery aging and degradation costs. • Use of BA for solving the proposed problem. • Introduction of a new modification method for the BA.
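
    A small sketch of the unscented transform, propagating the mean and covariance of uncertain injections through a nonlinear function via sigma points, is shown below; the quadratic "loss" function standing in for the power flow equations and all numbers are illustrative.

```python
import numpy as np

def unscented_transform(func, mean, cov, alpha=1.0, beta=2.0, kappa=0.0):
    """Propagate (mean, cov) through func using 2n+1 sigma points."""
    n = len(mean)
    lam = alpha**2 * (n + kappa) - n
    sqrt_cov = np.linalg.cholesky((n + lam) * cov)
    sigma = np.vstack([mean, mean + sqrt_cov.T, mean - sqrt_cov.T])   # 2n+1 points
    wm = np.full(2 * n + 1, 1.0 / (2 * (n + lam)))
    wc = wm.copy()
    wm[0] = lam / (n + lam)
    wc[0] = wm[0] + (1 - alpha**2 + beta)
    ys = np.array([func(s) for s in sigma])
    y_mean = wm @ ys
    y_var = wc @ (ys - y_mean) ** 2
    return y_mean, y_var

# Uncertain wind / PV / PEV injections (correlated) through a nonlinear "loss" model
mean = np.array([1.0, 0.6, 0.4])
cov = np.array([[0.04, 0.01, 0.0],
                [0.01, 0.02, 0.0],
                [0.0,  0.0,  0.03]])
loss = lambda p: 0.05 * p @ p + 0.02 * p[0] * p[1]
m, v = unscented_transform(loss, mean, cov)
print(f"expected loss: {m:.4f}, std: {np.sqrt(v):.4f}")
```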