WorldWideScience

Sample records for simulate optimal adjustments

  1. CPU time optimization and precise adjustment of the Geant4 physics parameters for a VARIAN 2100 C/D gamma radiotherapy linear accelerator simulation using GAMOS

    Arce, Pedro; Lagares, Juan Ignacio

    2018-02-01

We have verified the GAMOS/Geant4 simulation model of a 6 MV VARIAN Clinac 2100 C/D linear accelerator by the procedure of adjusting the initial beam parameters to fit the percentage depth dose and cross-profile dose experimental data at different depths in a water phantom. Thanks to the use of a wide range of field sizes, from 2 × 2 cm² to 40 × 40 cm², a small phantom voxel size and high statistics, fine precision in the determination of the beam parameters has been achieved. This precision has allowed us to make a thorough study of the different physics models and parameters that Geant4 offers. The three Geant4 electromagnetic physics sets of models, i.e. Standard, Livermore and Penelope, have been compared to the experiment, testing the four different models of angular bremsstrahlung distributions as well as the three available multiple-scattering models, and optimizing the most relevant Geant4 electromagnetic physics parameters. Before the fitting, a comprehensive CPU time optimization has been done, using several of the Geant4 efficiency improvement techniques plus a few more developed in GAMOS.
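The tuning procedure described above, adjusting beam parameters until the simulated dose curves match measurement, can be sketched as a least-squares parameter search. The toy depth-dose model, the attenuation parameter and the "measured" data below are illustrative assumptions, not values from the paper:

```python
import math

def depth_dose(depth, mu):
    """Toy percentage-depth-dose model: fast buildup followed by exponential falloff."""
    return 100.0 * (1.0 - math.exp(-8.0 * depth)) * math.exp(-mu * depth)

# Hypothetical 'measured' curve generated with mu = 0.05 per cm of water
depths = [d / 10 for d in range(1, 301)]          # 0.1 .. 30.0 cm
measured = [depth_dose(d, 0.05) for d in depths]

# Fit the attenuation parameter by scanning candidates and minimizing
# the sum of squared residuals against the measured curve
candidates = [i / 1000 for i in range(1, 101)]
sse = lambda mu: sum((depth_dose(d, mu) - m) ** 2 for d, m in zip(depths, measured))
best_mu = min(candidates, key=sse)
print(best_mu)   # recovers 0.05
```

Real beam-parameter fitting optimizes several parameters at once against noisy Monte Carlo output, but the structure, simulate, compare, adjust, is the same.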

  2. Primal and dual approaches to adjustable robust optimization

    de Ruiter, Frans

    2018-01-01

    Robust optimization has become an important paradigm to deal with optimization under uncertainty. Adjustable robust optimization is an extension that deals with multistage problems. This thesis starts with a short but comprehensive introduction to adjustable robust optimization. Then the two

  3. Handbook of simulation optimization

    Fu, Michael C

    2014-01-01

    The Handbook of Simulation Optimization presents an overview of the state of the art of simulation optimization, providing a survey of the most well-established approaches for optimizing stochastic simulation models and a sampling of recent research advances in theory and methodology. Leading contributors cover such topics as discrete optimization via simulation, ranking and selection, efficient simulation budget allocation, random search methods, response surface methodology, stochastic gradient estimation, stochastic approximation, sample average approximation, stochastic constraints, variance reduction techniques, model-based stochastic search methods and Markov decision processes. This single volume should serve as a reference for those already in the field and as a means for those new to the field for understanding and applying the main approaches. The intended audience includes researchers, practitioners and graduate students in the business/engineering fields of operations research, management science,...
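One approach surveyed in the handbook, sample average approximation, replaces the expectation in a stochastic objective with an average over a fixed batch of sampled scenarios and then optimizes the resulting deterministic function. A minimal sketch for a newsvendor-style problem follows; the demand model and prices are assumptions for illustration:

```python
import random

def saa_newsvendor(price, cost, demand_sampler, n_scenarios=5000,
                   q_grid=range(0, 201), seed=42):
    """Sample average approximation: freeze one batch of demand scenarios,
    then choose the order quantity that maximizes the averaged profit."""
    rng = random.Random(seed)
    demands = [demand_sampler(rng) for _ in range(n_scenarios)]

    def avg_profit(q):
        # profit in one scenario: revenue on units actually sold minus purchase cost
        return sum(price * min(q, d) - cost * q for d in demands) / n_scenarios

    return max(q_grid, key=avg_profit)

# Demand uniform on [50, 150]; the critical-fractile optimum is
# q* = 50 + 100 * (price - cost) / price = 110
best_q = saa_newsvendor(price=10.0, cost=4.0,
                        demand_sampler=lambda r: r.uniform(50.0, 150.0))
print(best_q)
```

Because the scenario batch is frozen, the inner problem is deterministic and any ordinary optimizer (here a grid scan) can be applied to it.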

  4. Towards Optimal PDE Simulations

    Keyes, David

    2009-01-01

    The Terascale Optimal PDE Solvers (TOPS) Integrated Software Infrastructure Center (ISIC) was created to develop and implement algorithms and support scientific investigations performed by DOE-sponsored researchers. These simulations often involve the solution of partial differential equations (PDEs) on terascale computers. The TOPS Center researched, developed and deployed an integrated toolkit of open-source, optimal complexity solvers for the nonlinear partial differential equations that arise in many DOE application areas, including fusion, accelerator design, global climate change and reactive chemistry. The algorithms created as part of this project were also designed to reduce current computational bottlenecks by orders of magnitude on terascale computers, enabling scientific simulation on a scale heretofore impossible.

  5. Terascale Optimal PDE Simulations

    David Keyes

    2009-07-28

    The Terascale Optimal PDE Solvers (TOPS) Integrated Software Infrastructure Center (ISIC) was created to develop and implement algorithms and support scientific investigations performed by DOE-sponsored researchers. These simulations often involve the solution of partial differential equations (PDEs) on terascale computers. The TOPS Center researched, developed and deployed an integrated toolkit of open-source, optimal complexity solvers for the nonlinear partial differential equations that arise in many DOE application areas, including fusion, accelerator design, global climate change and reactive chemistry. The algorithms created as part of this project were also designed to reduce current computational bottlenecks by orders of magnitude on terascale computers, enabling scientific simulation on a scale heretofore impossible.

  6. Magnetic measurement, sorting optimization and adjustment of SDUV-FEL hybrid undulator

    Wang Tao; Jia Qika

    2007-01-01

Construction of an undulator includes magnet block measurement, sorting, field measurement and adjustment. By optimizing the sorting of the SDUV-FEL undulator with a simulated annealing algorithm, using Helmholtz-coil measurements of the magnet blocks made before the undulator magnets are installed, the cost function can be reduced by three orders of magnitude. After adjustment of the magnetic field, the practical parameters of one segment meet the design specifications. (authors)

  7. Competition Leverage : How the Demand Side Affects Optimal Risk Adjustment

    Bijlsma, M.; Boone, J.; Zwart, Gijsbert

    2011-01-01

    We study optimal risk adjustment in imperfectly competitive health insurance markets when high-risk consumers are less likely to switch insurer than low-risk consumers. First, we find that insurers still have an incentive to select even if risk adjustment perfectly corrects for cost differences

  8. Global optimization and simulated annealing

    Dekkers, A.; Aarts, E.H.L.

    1988-01-01

    In this paper we are concerned with global optimization, which can be defined as the problem of finding points on a bounded subset of Rn in which some real valued functionf assumes its optimal (i.e. maximal or minimal) value. We present a stochastic approach which is based on the simulated annealing
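The stochastic approach sketched in this abstract, accepting occasional uphill moves with a temperature-dependent probability, can be illustrated in a few lines. The cooling schedule, step size and test function below are assumptions for illustration, not the paper's:

```python
import math
import random

def simulated_annealing(f, lower, upper, n_iter=20000, t0=1.0, step=0.1, seed=0):
    """Minimize f on the box [lower, upper]^n with simulated annealing."""
    rng = random.Random(seed)
    n = len(lower)
    x = [rng.uniform(lower[i], upper[i]) for i in range(n)]
    fx = f(x)
    best, fbest = x[:], fx
    for k in range(1, n_iter + 1):
        t = t0 / math.log(k + 1)   # slow (logarithmic) cooling schedule
        # propose a neighbour: perturb one coordinate, clipped to the box
        y = x[:]
        i = rng.randrange(n)
        y[i] = min(upper[i], max(lower[i], y[i] + rng.gauss(0.0, step)))
        fy = f(y)
        # always accept improvements; accept uphill moves with Boltzmann probability
        if fy <= fx or rng.random() < math.exp(-(fy - fx) / t):
            x, fx = y, fy
            if fx < fbest:
                best, fbest = x[:], fx
    return best, fbest

# Smooth test function on [-5, 5]^2 with minimum value 0 at the origin;
# in practice the interest is in multimodal f, where the uphill moves matter.
xstar, fstar = simulated_annealing(lambda x: sum(v * v for v in x),
                                   [-5.0, -5.0], [5.0, 5.0])
print(round(fstar, 4))
```

The uphill-acceptance rule is what distinguishes the method from plain random descent: early on, when the temperature is high, the walk can escape poor local minima.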

  9. NUMERICAL SIMULATION AND OPTIMIZATION OF ...

30 June 2011 ... This article aims at the study and simulation of photovoltaic cells based on CdTe materials, contributing to the development of renewable energies and able to power houses and shelters, as well as ... and the energy conversion efficiency is 18.26%. Optimization is carried out according to the ...

  10. Concurrently adjusting interrelated control parameters to achieve optimal engine performance

    Jiang, Li; Lee, Donghoon; Yilmaz, Hakan; Stefanopoulou, Anna

    2015-12-01

    Methods and systems for real-time engine control optimization are provided. A value of an engine performance variable is determined, a value of a first operating condition and a value of a second operating condition of a vehicle engine are detected, and initial values for a first engine control parameter and a second engine control parameter are determined based on the detected first operating condition and the detected second operating condition. The initial values for the first engine control parameter and the second engine control parameter are adjusted based on the determined value of the engine performance variable to cause the engine performance variable to approach a target engine performance variable. In order to cause the engine performance variable to approach the target engine performance variable, adjusting the initial value for the first engine control parameter necessitates a corresponding adjustment of the initial value for the second engine control parameter.
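The coupled adjustment described in this record can be caricatured as a proportional-control loop in which every change to the first parameter triggers a corresponding change to the second. The performance model and the coupling rule below are hypothetical stand-ins, not details from the record:

```python
def adjust_coupled(perf, target, p1, p2, couple, gain=0.1, tol=1e-3, max_iter=200):
    """Drive a performance variable toward a target by adjusting a primary
    control parameter; each change to p1 necessitates a coupled change to p2."""
    for _ in range(max_iter):
        error = target - perf(p1, p2)
        if abs(error) < tol:
            break
        dp1 = gain * error          # proportional adjustment of the first parameter
        p1 += dp1
        p2 += couple(dp1)           # corresponding adjustment of the second parameter
    return p1, p2

# Hypothetical toy model: performance rises with p1 but is penalized unless
# p2 tracks half of p1; the coupling rule maintains that relationship.
perf = lambda p1, p2: p1 - (p2 - 0.5 * p1) ** 2
p1, p2 = adjust_coupled(perf, target=2.0, p1=0.0, p2=0.0, couple=lambda d: 0.5 * d)
print(round(perf(p1, p2), 3))
```

Updating only p1 in this model would leave a growing penalty term; the coupled update is what lets the performance variable actually reach the target.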

  11. Adjustment and Optimization of the Cropping Systems under Water Constraint

    Pingli An

    2016-11-01

The water constraint on agricultural production is receiving growing attention as the contradiction between demand for and supply of water resources sharpens. How to mitigate and adapt to potential water constraints is one of the key issues for ensuring food security and achieving sustainable agriculture in the context of climate change. It has been suggested that adjustment and optimization of cropping systems could be an effective measure to improve water management and ensure food security. However, a knowledge gap still exists in how to quantify potential water constraints and how to select appropriate cropping systems. Here, we propose a concept of water constraint risk and develop an approach for evaluating the water constraint risks for agricultural production through a case study in Daxing District, Beijing, China. The results show that, over the whole growth period, the water constraint risks of the crops, from high to low, were ranked as wheat, rice, broomcorn, foxtail millet, summer soybean, summer peanut, spring corn, and summer corn, and the water constraint risks of the cropping systems, from high to low, were ranked as winter wheat-summer grain crops, rice, broomcorn, foxtail millet, and spring corn. Our results are consistent with the actual evolution of the cropping systems. This indicates that our proposed method is practicable for adjusting and optimizing cropping systems to mitigate and adapt to potential water risks. This study provides insight into the adjustment and optimization of cropping systems under resource constraints.

  12. MODELLING, SIMULATING AND OPTIMIZING BOILERS

    Sørensen, Kim; Condra, Thomas Joseph; Houbak, Niels

    2004-01-01

In the present work a framework for optimizing the design of boilers for dynamic operation has been developed. A cost function to be minimized during the optimization has been formulated and for the present design variables related to the Boiler Volume and the Boiler Load Gradient (i.e. firing rate...... on the boiler) have been defined. Furthermore a number of constraints related to: minimum and maximum boiler load gradient, minimum boiler size, Shrinking and Swelling and Steam Space Load have been defined. For defining the constraints related to the required boiler volume a dynamic model for simulating the boiler...... performance has been developed. Outputs from the simulations are shrinking and swelling of water level in the drum during for example a start-up of the boiler; these figures combined with the requirements with respect to allowable water level fluctuations in the drum define the requirements with respect to drum...

  13. Cogeneration system simulation/optimization

    Puppa, B.A.; Chandrashekar, M.

    1992-01-01

Companies are increasingly turning to computer software programs to improve and streamline the analysis of cogeneration systems. This paper introduces a computer program which originated with research at the University of Waterloo. The program can simulate and optimize any type of layout of cogeneration plant. An application of the program to a cogeneration feasibility study for a university campus is described. The Steam and Power Plant Optimization System (SAPPOS) is a PC software package which allows users to model any type of steam/power plant on a component-by-component basis. Individual energy/steam balances can be done quickly to model any scenario. A typical days per month cogeneration simulation can also be carried out to provide a detailed monthly cash flow and energy forecast. This paper reports that SAPPOS can be used for scoping, feasibility, and preliminary design work, along with financial studies, gas contract studies, and optimizing the operation of completed plants. In the feasibility study presented, SAPPOS is used to evaluate both diesel engine and gas turbine combined cycle options

  14. Humans make near-optimal adjustments of control to initial body configuration in vertical squat jumping.

    Bobbert, Maarten F; Richard Casius, L J; Kistemaker, Dinant A

    2013-05-01

    We investigated adjustments of control to initial posture in squat jumping. Eleven male subjects jumped from three initial postures: preferred initial posture (PP), a posture in which the trunk was rotated 18° more backward (BP) and a posture in which it was rotated 15° more forward (FP) than in PP. Kinematics, ground reaction forces and electromyograms (EMG) were collected. EMG was rectified and smoothed to obtain smoothed rectified EMG (srEMG). Subjects showed adjustments in srEMG histories, most conspicuously a shift in srEMG-onset of rectus femoris (REC): from early in BP to late in FP. Jumps from the subjects' initial postures were simulated with a musculoskeletal model comprising four segments and six Hill-type muscles, which had muscle stimulation (STIM) over time as input. STIM of each muscle changed from initial to maximal at STIM-onset, and STIM-onsets were optimized using jump height as criterion. Optimal simulated jumps from BP, PP and FP were similar to jumps of the subjects. Optimal solutions primarily differed in STIM-onset of REC: from early in BP to late in FP. Because the subjects' adjustments in srEMG-onsets were similar to adjustments of the model's optimal STIM-onsets, it was concluded that the former were near-optimal. With the model we also showed that near-maximum jumps from BP, PP and FP could be achieved when STIM-onset of REC depended on initial hip joint angle and STIM-onsets of the other muscles were posture-independent. A control theory that relies on a mapping from initial posture to STIM-onsets seems a parsimonious alternative to theories relying on internal optimal control models. Copyright © 2013 IBRO. Published by Elsevier Ltd. All rights reserved.

  15. Simulation-based optimization parametric optimization techniques and reinforcement learning

    Gosavi, Abhijit

    2003-01-01

    Simulation-Based Optimization: Parametric Optimization Techniques and Reinforcement Learning introduces the evolving area of simulation-based optimization. The book's objective is two-fold: (1) It examines the mathematical governing principles of simulation-based optimization, thereby providing the reader with the ability to model relevant real-life problems using these techniques. (2) It outlines the computational technology underlying these methods. Taken together these two aspects demonstrate that the mathematical and computational methods discussed in this book do work. Broadly speaking, the book has two parts: (1) parametric (static) optimization and (2) control (dynamic) optimization. Some of the book's special features are: *An accessible introduction to reinforcement learning and parametric-optimization techniques. *A step-by-step description of several algorithms of simulation-based optimization. *A clear and simple introduction to the methodology of neural networks. *A gentle introduction to converg...

  16. Validation of a novel laparoscopic adjustable gastric band simulator.

    Sankaranarayanan, Ganesh; Adair, James D; Halic, Tansel; Gromski, Mark A; Lu, Zhonghua; Ahn, Woojin; Jones, Daniel B; De, Suvranu

    2011-04-01

    Morbid obesity accounts for more than 90,000 deaths per year in the United States. Laparoscopic adjustable gastric banding (LAGB) is the second most common weight loss procedure performed in the US and the most common in Europe and Australia. Simulation in surgical training is a rapidly advancing field that has been adopted by many to prepare surgeons for surgical techniques and procedures. The aim of our study was to determine face, construct, and content validity for a novel virtual reality laparoscopic adjustable gastric band simulator. Twenty-eight subjects were categorized into two groups (expert and novice), determined by their skill level in laparoscopic surgery. Experts consisted of subjects who had at least 4 years of laparoscopic training and operative experience. Novices consisted of subjects with medical training but with less than 4 years of laparoscopic training. The subjects used the virtual reality laparoscopic adjustable band surgery simulator. They were automatically scored according to various tasks. The subjects then completed a questionnaire to evaluate face and content validity. On a 5-point Likert scale (1 = lowest score, 5 = highest score), the mean score for visual realism was 4.00 ± 0.67 and the mean score for realism of the interface and tool movements was 4.07 ± 0.77 (face validity). There were significant differences in the performances of the two subject groups (expert and novice) based on total scores (p virtual reality laparoscopic adjustable gastric band simulator. Our initial results demonstrate excellent face, construct, and content validity findings. To our knowledge, this is the first virtual reality simulator with haptic feedback for training residents and surgeons in the laparoscopic adjustable gastric banding procedure.

  17. Simulation Based Optimization for World Line Card Production System

    Sinan APAK

    2012-07-01

A simulation-based decision support system is one of the tools commonly used to examine complex production systems. The simulation approach provides process modules that can be adjusted with certain parameters, using data relatively easily obtainable from the production process. A World Line Card production system simulation is developed to evaluate the optimality of the existing production line, using a discrete event simulation model with a variety of alternative proposals. The current production system is analysed by a simulation model that highlights the bottlenecks and the poorly utilized parts of the production line. Our analysis identified improvements and efficient solutions for the existing system.

  18. The Study of an Optimal Robust Design and Adjustable Ordering Strategies in the HSCM.

    Liao, Hung-Chang; Chen, Yan-Kwang; Wang, Ya-huei

    2015-01-01

    The purpose of this study was to establish a hospital supply chain management (HSCM) model in which three kinds of drugs in the same class and with the same indications were used in creating an optimal robust design and adjustable ordering strategies to deal with a drug shortage. The main assumption was that although each doctor has his/her own prescription pattern, when there is a shortage of a particular drug, the doctor may choose a similar drug with the same indications as a replacement. Four steps were used to construct and analyze the HSCM model. The computation technology used included a simulation, a neural network (NN), and a genetic algorithm (GA). The mathematical methods of the simulation and the NN were used to construct a relationship between the factor levels and performance, while the GA was used to obtain the optimal combination of factor levels from the NN. A sensitivity analysis was also used to assess the change in the optimal factor levels. Adjustable ordering strategies were also developed to prevent drug shortages.
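The GA step described above, searching for the best combination of factor levels once a surrogate model maps levels to performance, can be sketched as follows. The surrogate fitness function and the level grid are assumptions for illustration, not the study's NN or data:

```python
import random

def genetic_search(fitness, levels, pop_size=30, generations=60, mut_rate=0.1, seed=1):
    """Minimal genetic algorithm over discrete factor levels:
    keep the better half, recombine with one-point crossover, mutate per gene."""
    rng = random.Random(seed)
    n = len(levels)
    pop = [[rng.choice(levels[i]) for i in range(n)] for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(pop, key=fitness, reverse=True)
        elite = scored[: pop_size // 2]              # survivors
        children = []
        while len(children) < pop_size - len(elite):
            a, b = rng.sample(elite, 2)
            cut = rng.randrange(1, n)                # one-point crossover
            child = a[:cut] + b[cut:]
            for i in range(n):                       # per-gene mutation
                if rng.random() < mut_rate:
                    child[i] = rng.choice(levels[i])
            children.append(child)
        pop = elite + children
    return max(pop, key=fitness)

# Stand-in for the trained surrogate: performance peaks at level 2 for every factor.
levels = [[0, 1, 2, 3]] * 5
fitness = lambda x: -sum((xi - 2) ** 2 for xi in x)
best = genetic_search(fitness, levels)
print(best)
```

In the study's pipeline the fitness call would be the trained neural network rather than a closed-form function; the GA itself is unchanged.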

  19. Optimism, Social Support, and Adjustment in African American Women with Breast Cancer

    Shelby, Rebecca A.; Crespin, Tim R.; Wells-Di Gregorio, Sharla M.; Lamdan, Ruth M.; Siegel, Jamie E.; Taylor, Kathryn L.

    2013-01-01

    Past studies show that optimism and social support are associated with better adjustment following breast cancer treatment. Most studies have examined these relationships in predominantly non-Hispanic White samples. The present study included 77 African American women treated for nonmetastatic breast cancer. Women completed measures of optimism, social support, and adjustment within 10-months of surgical treatment. In contrast to past studies, social support did not mediate the relationship between optimism and adjustment in this sample. Instead, social support was a moderator of the optimism-adjustment relationship, as it buffered the negative impact of low optimism on psychological distress, well-being, and psychosocial functioning. Women with high levels of social support experienced better adjustment even when optimism was low. In contrast, among women with high levels of optimism, increasing social support did not provide an added benefit. These data suggest that perceived social support is an important resource for women with low optimism. PMID:18712591

  20. Simulated annealing algorithm for optimal capital growth

    Luo, Yong; Zhu, Bo; Tang, Yong

    2014-08-01

We investigate the problem of dynamic optimal capital growth of a portfolio. A general framework was developed in which one strives to maximize the expected logarithmic utility of the long-term growth rate. Exact optimization algorithms run into difficulties in this framework, which motivates the investigation of applying a simulated annealing algorithm to optimize the capital growth of a given portfolio. Empirical results with real financial data indicate that the approach is promising for capital growth portfolios.
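The criterion in this abstract, maximizing the expected logarithm of wealth growth, can be illustrated with a Monte Carlo sketch over a two-outcome return model. The return distribution and the grid search below are stand-ins for the paper's real data and simulated annealing:

```python
import math
import random

def expected_log_growth(frac, returns):
    """Average log growth of wealth when a fraction `frac` is held in the
    risky asset and the remainder in cash (gross return 1.0)."""
    return sum(math.log(1.0 + frac * r) for r in returns) / len(returns)

rng = random.Random(7)
# Simulated risky-asset returns: +60% or -40% with equal probability
returns = [rng.choice([0.6, -0.4]) for _ in range(10000)]

# Scan candidate fractions; for this model the Kelly-optimal fraction is
# f* = 0.5/0.4 - 0.5/0.6 ≈ 0.42
fracs = [i / 100 for i in range(0, 100)]
best_frac = max(fracs, key=lambda f: expected_log_growth(f, returns))
print(best_frac)
```

Maximizing the log of wealth rather than wealth itself is what makes the objective additive across periods, so long-run growth rate is what gets optimized.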

  1. An optimal multivariable controller for transcritical CO2 refrigeration cycle with an adjustable ejector

    He, Yang; Deng, Jianqiang; Yang, Fusheng; Zhang, Zaoxiao

    2017-01-01

Highlights: • Dynamic model for transcritical CO2 ejector refrigeration system is developed. • A model-driven optimal multivariable controller is proposed. • Gas cooler pressure and cooling capacity are tracked independently. • Maximal performance for a given load is achieved by the optimal controller. - Abstract: The fixed ejector has to work under a restricted operating condition to keep its positive effect on the transcritical CO2 refrigeration cycle, so a controllable ejector would be helpful. In this paper, an optimal multivariable controller based on a dynamic model is proposed to improve the transcritical CO2 refrigeration cycle with an adjustable ejector (TCRAE). A nonlinear dynamic model is first developed to capture the dynamic characteristics of the TCRAE. The corresponding model linearization is carried out and the simulation results reproduce the transient behavior of the nonlinear model very well. Based on the developed model, an optimal multivariable controller is designed, with a tracker based on a linear quadratic state feedback algorithm and a predictor using the steepest descent method. The controller is finally applied on the experimental apparatus and its performance verified. Using the tracker only, the gas cooler pressure and chilled water outlet temperature (cooling capacity) are well tracked, rejecting the disturbances from each other. Furthermore, by the predictor, the optimal gas cooler pressure for a constant cooling capacity is approached on the experimental apparatus with a settling time of about 700 s.

  2. Coupled multiscale simulation and optimization in nanoelectronics

    2015-01-01

    Designing complex integrated circuits relies heavily on mathematical methods and calls for suitable simulation and optimization tools. The current design approach involves simulations and optimizations in different physical domains (device, circuit, thermal, electromagnetic) and in a range of electrical engineering disciplines (logic, timing, power, crosstalk, signal integrity, system functionality). COMSON was a Marie Curie Research Training Network created to meet these new scientific and training challenges by (a) developing new descriptive models that take these mutual dependencies into account, (b) combining these models with existing circuit descriptions in new simulation strategies, and (c) developing new optimization techniques that will accommodate new designs. The book presents the main project results in the fields of PDAE modeling and simulation, model order reduction techniques and optimization, based on merging the know-how of three major European semiconductor companies with the combined expe...

  3. Westinghouse waste simulation and optimization software tool

    Mennicken, Kim; Aign, Jorg

    2013-01-01

Applications for dynamic simulation can be found in virtually all areas of process engineering. The tangible benefits of using dynamic simulation can be seen in tighter design, smoother start-ups and optimized operation. Thus, proper implementation of dynamic simulation can deliver substantial benefits. These benefits are typically derived from improved process understanding. Simulation gives confidence in evidence-based decisions and enables users to try out many 'what if' scenarios until one is sure that a decision is the right one. In radioactive waste treatment, different kinds of waste with different volumes and properties have to be treated, e.g. from NPP operation or D and D activities. Finding a commercially and technically optimized waste treatment concept is a time-consuming and difficult task. The Westinghouse Waste Simulation and Optimization Software Tool enables the user to quickly generate reliable simulation models of various process applications based on equipment modules. These modules can be built with ease and integrated into the simulation model. This capability ensures that the tool is applicable to typical waste treatment tasks. The identified waste streams and the selected treatment methods are the basis of the simulation and optimization software. After implementing suitable equipment data into the model, process requirements and waste treatment data are fed into the simulation to generate primary simulation results. A sensitivity analysis using the automated optimization features of the software yields the lowest possible lifecycle cost for the simulated waste stream. In combination with proven waste management equipment and integrated waste management solutions, this tool provides reliable qualitative results that lead to effective planning and minimize the total project planning risk of any waste management activity. It is thus the ideal tool for designing a waste treatment facility in an optimum manner

  5. MODELLING, SIMULATING AND OPTIMIZING BOILERS

    Sørensen, K.; Condra, T.; Houbak, Niels

    2003-01-01

, and the total stress level (i.e. stresses introduced due to internal pressure plus stresses introduced due to temperature gradients) must always be kept below the allowable stress level. In this way, the increased water-/steam space that should allow for better dynamic performance in the end causes limited...... freedom with respect to dynamic operation of the plant. By means of an objective function including both the price of the plant and a quantification of the value of dynamic operation of the plant, an optimization is carried out. The dynamic model of the boiler plant is applied to define parts......

  6. Multiphysics simulation electromechanical system applications and optimization

    Dede, Ercan M; Nomura, Tsuyoshi

    2014-01-01

    This book highlights a unique combination of numerical tools and strategies for handling the challenges of multiphysics simulation, with a specific focus on electromechanical systems as the target application. Features: introduces the concept of design via simulation, along with the role of multiphysics simulation in today's engineering environment; discusses the importance of structural optimization techniques in the design and development of electromechanical systems; provides an overview of the physics commonly involved with electromechanical systems for applications such as electronics, ma

  7. Performance Optimization of the ATLAS Detector Simulation

    AUTHOR|(CDS)2091018

    In the thesis at hand the current performance of the ATLAS detector simulation, part of the Athena framework, is analyzed and possible optimizations are examined. For this purpose the event based sampling profiler VTune Amplifier by Intel is utilized. As the most important metric to measure improvements, the total execution time of the simulation of $t\\bar{t}$ events is also considered. All efforts are focused on structural changes, which do not influence the simulation output and can be attributed to CPU specific issues, especially front end stalls and vectorization. The most promising change is the activation of profile guided optimization for Geant4, which is a critical external dependency of the simulation. Profile guided optimization gives an average improvement of $8.9\\%$ and $10.0\\%$ for the two considered cases at the cost of one additional compilation (instrumented binaries) and execution (training to obtain profiling data) at build time.

  8. Purex optimization by computer simulation

    Campbell, T.G.; McKibben, J.M.

    1980-08-01

    For the past 2 years computer simulation has been used to study the performance of several solvent extraction banks in the Purex facility at the Savannah River Plant in Aiken, South Carolina. Individual process parameters were varied about their normal base case values to determine their individual effects on concentration profiles and end-stream compositions. The data are presented in graphical form to show the extent to which product losses, decontamination factors, solvent extraction bank inventories of fissile materials, and other key properties are affected by process changes. Presented in this way, the data are useful for adapting flowsheet conditions to a particular feed material or product specification, and for evaluating nuclear safety as related to bank inventories

  9. Simulation-based optimization of thermal systems

    Jaluria, Yogesh

    2009-01-01

    This paper considers the design and optimization of thermal systems on the basis of the mathematical and numerical modeling of the system. Many complexities are often encountered in practical thermal processes and systems, making the modeling challenging and involved. These include property variations, complicated regions, combined transport mechanisms, chemical reactions, and intricate boundary conditions. The paper briefly presents approaches that may be used to accurately simulate these systems. Validation of the numerical model is a particularly critical aspect and is discussed. It is important to couple the modeling with the system performance, design, control and optimization. This aspect, which has often been ignored in the literature, is considered in this paper. Design of thermal systems based on concurrent simulation and experimentation is also discussed in terms of dynamic data-driven optimization methods. Optimization of the system and of the operating conditions is needed to minimize costs and improve product quality and system performance. Different optimization strategies that are currently used for thermal systems are outlined, focusing on new and emerging strategies. Of particular interest is multi-objective optimization, since most thermal systems involve several important objective functions, such as heat transfer rate and pressure in electronic cooling systems. A few practical thermal systems are considered in greater detail to illustrate these approaches and to present typical simulation, design and optimization results
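Multi-objective optimization of the kind mentioned above, e.g. trading heat transfer rate against pressure in electronic cooling, is often handled by scalarizing the objectives with a weight and sweeping that weight to trace an approximate Pareto front. The two objective functions below are illustrative assumptions, not models from the paper:

```python
# Hypothetical objectives for a cooling channel of width w (arbitrary units):
# heat transfer improves with w, pressure drop grows with it.
def heat_transfer(w):
    return 1.0 - 1.0 / (1.0 + w)      # to be maximized

def pressure_drop(w):
    return 0.2 * w ** 2               # to be minimized

def weighted_optimum(alpha, grid):
    """Scalarize the two objectives with weight alpha and pick the best w."""
    return max(grid, key=lambda w: alpha * heat_transfer(w)
                                   - (1.0 - alpha) * pressure_drop(w))

grid = [i / 100 for i in range(1, 301)]
# Sweeping the weight from pressure-dominated to heat-transfer-dominated
# traces points along the Pareto front
front = [round(weighted_optimum(a / 10, grid), 2) for a in range(1, 10)]
print(front)
```

Each weight yields one Pareto-optimal design; presenting the whole front, rather than a single "optimum", is what lets the designer see the trade-off explicitly.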

  10. Site utility system optimization with operation adjustment under uncertainty

    Sun, Li; Gai, Limei; Smith, Robin

    2017-01-01

    Highlights: • Uncertainties are classified into time-based and probability-based uncertain factors. • Multi-period operation and recourses deal with uncertainty implementation. • Operation scheduling is specified at the design stage to deal with uncertainties. • Steam mains superheating affects steam distribution and power generation in the system. - Abstract: Utility systems must satisfy process energy and power demands under varying conditions. The system performance is determined by the system configuration and the operating load of individual equipment such as boilers, gas turbines, steam turbines, condensers, and let-down valves. Steam mains conditions, in terms of steam pressures and steam superheating, also play important roles in steam distribution in the system and in power generation by steam expansion in steam turbines, and should be included in the system optimization. Uncertainties such as changes in process steam and power demand and electricity price fluctuations should also be included in the system optimization, to eliminate as far as possible the production loss caused by steam and power deficits. In this paper, uncertain factors are classified into time-based and probability-based uncertain factors, and operation scheduling, containing multi-period equipment load sharing, redundant equipment start-up, and electricity import to compensate for power deficits, is presented to deal with uncertainties as they occur; it is formulated as a multi-period item and a recourse item in the optimization model. There are two case studies in this paper. One illustrates system design, determining the system configuration, equipment selection, and system operation scheduling at the design stage to deal with uncertainties. The other provides operational optimization scenarios for an existing system, especially when the steam superheating varies. The proposed method can provide practical guidance for improving system energy efficiency.

  11. Collateral Optimization : Liquidity & Funding Value Adjustments, - Best Practices -

    Genest, Benoit; Rego, David; Freon, Helene

    2013-01-01

    The purpose of this paper is to understand how the current financial landscape, shaped by the crises and new regulations, impacts the Investment Banking business model. We will focus on quantitative implications, i.e. valuation, modeling and pricing issues, as well as qualitative implications, i.e. best practices to manage the quantitative aspects and to integrate these functions into the current Investment Banking organization. We considered two pillars to shape our vision of collateral optimization: ...

  12. Adjusting process count on demand for petascale global optimization

    Sosonkina, Masha; Watson, Layne T.; Radcliffe, Nicholas R.; Haftka, Rafael T.; Trosset, Michael W.

    2013-01-01

    There are many challenges that need to be met before efficient and reliable computation at the petascale is possible. Many scientific and engineering codes running at the petascale are likely to be memory intensive, which makes thrashing a serious problem for many petascale applications. One way to overcome this challenge is to use a dynamic number of processes, so that the total amount of memory available for the computation can be increased on demand. This paper describes modifications made to the massively parallel global optimization code pVTdirect in order to allow for a dynamic number of processes. In particular, the modified version of the code monitors memory use and spawns new processes if the amount of available memory is determined to be insufficient. The primary design challenges are discussed, and performance results are presented and analyzed.
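    The spawn-on-demand policy described above can be reduced to a small decision rule: add worker processes only while free memory, outstanding work, and the process ceiling all allow it. The sketch below is an illustrative stand-in, not code from pVTdirect; all names and inputs are assumptions.

```python
def workers_to_spawn(avail_bytes, per_worker_bytes, pending_tasks,
                     current_workers, max_workers):
    """How many new worker processes to spawn, limited by free memory,
    outstanding work, and a hard process ceiling (all inputs hypothetical)."""
    by_memory = avail_bytes // per_worker_bytes   # what RAM allows
    by_work = pending_tasks                       # no point spawning idle workers
    by_limit = max_workers - current_workers      # scheduler/hardware ceiling
    return max(0, min(by_memory, by_work, by_limit))
```

    In an MPI setting the returned count would drive the actual spawning call; here it only illustrates the decision logic.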

  13. Modeling, simulation and optimization of bipedal walking

    Berns, Karsten

    2013-01-01

    The model-based investigation of motions of anthropomorphic systems is an important interdisciplinary research topic involving specialists from many fields such as Robotics, Biomechanics, Physiology, Orthopedics, Psychology, Neurosciences, Sports, Computer Graphics and Applied Mathematics. This book presents a study of basic locomotion forms such as walking and running, which are of particular interest due to the high demands on dynamic coordination, actuator efficiency and balance control. Mathematical models and numerical simulation and optimization techniques are explained, in combination with experimental data, which can help to better understand the basic underlying mechanisms of these motions and to improve them. Example topics treated in this book are: modeling techniques for anthropomorphic bipedal walking systems; optimized walking motions for different objective functions; identification of objective functions from measurements; simulation and optimization approaches for humanoid robots; biologically inspired con...

  14. Simulation-based optimization of sustainable national energy systems

    Batas Bjelić, Ilija; Rajaković, Nikola

    2015-01-01

    The goals of the EU2030 energy policy should be achieved cost-effectively by employing the optimal mix of supply- and demand-side technical measures, including energy efficiency, renewable energy and structural measures. In this paper, the achievement of these goals is modeled by introducing an innovative method of soft-linking EnergyPLAN with the generic optimization program GenOpt. This soft-link enables simulation-based optimization, guided by the chosen optimization algorithm, rather than manual adjustment of the decision vectors. In order to run EnergyPLAN simulations within the optimization loop of GenOpt, the decision vectors must be chosen and declared in GenOpt for the scenarios created in EnergyPLAN. The result of the optimization loop is an optimal national energy master plan (as a case study, energy policy in Serbia was taken), followed by a sensitivity analysis of the exogenous assumptions and a focus on the contribution of the smart electricity grid to the achievement of the EU2030 goals. It is shown that the increase in the policy-induced total costs of less than 3% is not significant. This general method could be further improved and used worldwide in the optimal planning of sustainable national energy systems. - Highlights: • An innovative method of soft-linking EnergyPLAN with GenOpt has been introduced. • An optimal national energy master plan has been developed (case study for Serbia). • Sensitivity analysis of the exogenous world energy and emission price development outlook. • Focus on the contribution of smart energy systems to the EU2030 goals. • The innovative soft-linking methodology could be further improved and used worldwide.
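    The essence of such a soft-link is an optimization loop in which the optimizer proposes a decision vector, the simulator evaluates it, and the objective is read back. A minimal sketch under stated assumptions: the quadratic `simulate` stand-in replaces what would in practice be a subprocess call to EnergyPLAN, and its cost coefficients are invented for illustration.

```python
import itertools

def simulate(decision):
    """Stand-in for one EnergyPLAN run (in the real soft-link this would be a
    subprocess call that writes the decision vector and parses the output).
    Toy annual-cost model: wind/PV capacity cuts fuel cost but adds capital cost."""
    wind, pv = decision
    return 100 - 8*wind - 6*pv + 1.5*wind**2 + pv**2

def optimize(grid):
    """Exhaustive search over the decision grid, standing in for GenOpt's
    chosen optimization algorithm."""
    best = min(itertools.product(grid, grid), key=simulate)
    return best, simulate(best)
```

    GenOpt itself would use a smarter search (e.g. a pattern search) than the exhaustive sweep shown here; the loop structure is the point.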

  15. Interrelations of stress, optimism and control in older people's psychological adjustment.

    Bretherton, Susan Jane; McLean, Louise Anne

    2015-06-01

    To investigate the influence of perceived stress, optimism and perceived control of internal states on the psychological adjustment of older adults. The sample consisted of 212 older adults, aged between 58 and 103 (M = 80.42 years, SD = 7.31 years), living primarily in retirement villages in Melbourne, Victoria. Participants completed the Perceived Stress Scale, Life Orientation Test-Revised, Perceived Control of Internal States Scale and the World Health Organisation Quality of Life-Bref. Optimism significantly mediated the relationship between older people's perceived stress and psychological health, and perceived control of internal states mediated the relationships among stress, optimism and psychological health. The variables explained 49% of the variance in older people's psychological adjustment. It is suggested that strategies to improve optimism and perceived control may improve the psychological adjustment of older people struggling to adapt to life's stressors. © 2014 ACOTA.

  16. Kanban simulation model for production process optimization

    Golchev Riste

    2015-01-01

    A long time has passed since the KANBAN system was established as an efficient method for coping with excessive inventory. Still, the possibilities for its improvement through integration with other approaches should be investigated further. The basic research challenge of this paper is to present the benefits of KANBAN implementation supported by Discrete Event Simulation (DES). In that direction, the basics of the KANBAN system are first presented, with emphasis on the information and material flow, together with a methodology for implementation of a KANBAN system. An analysis of combining simulation with this methodology is presented. The paper concludes with a practical example which shows that, through understanding the philosophy of the KANBAN implementation methodology and the simulation methodology, a simulation model can be created which can serve as a basis for a variety of experiments that can be conducted within a short period of time, resulting in production process optimization.
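    A toy discrete-time sketch can convey the core KANBAN mechanism: inventory capped by the number of cards, with production authorised only when a card is freed. This is a deliberately minimal stand-in for a full DES model, with invented arrival and replenishment probabilities.

```python
import random

def simulate_kanban(n_cards, p_demand, p_replenish, horizon, seed=1):
    """Discrete-time toy: inventory is capped by the number of kanban cards;
    a freed card is the production authorisation for one new container."""
    rng = random.Random(seed)
    inventory, lost = n_cards, 0          # start with every card on a full container
    for _ in range(horizon):
        if rng.random() < p_demand:       # a customer arrives
            if inventory > 0:
                inventory -= 1            # consume; the card returns upstream
            else:
                lost += 1                 # no container available: lost sale
        if inventory < n_cards and rng.random() < p_replenish:
            inventory += 1                # an authorised container is completed
    return lost
```

    Experiments such as varying `n_cards` against the lost-sales count are exactly the kind of short-turnaround studies the abstract describes.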

  17. Conventional treatment planning optimization using simulated annealing

    Morrill, S.M.; Langer, M.; Lane, R.G.

    1995-01-01

    Purpose: Simulated annealing (SA) allows for the implementation of realistic biological and clinical cost functions into treatment plan optimization. However, a drawback to the clinical implementation of SA optimization is that large numbers of beams appear in the final solution, some with insignificant weights, preventing the delivery of these optimized plans using conventional (limited to a few coplanar beams) radiation therapy. A preliminary study suggested two promising algorithms for restricting the number of beam weights. The purpose of this investigation was to compare these two algorithms using our current SA algorithm, with the aim of producing an algorithm to allow clinically useful radiation therapy treatment planning optimization. Method: Our current SA algorithm, Variable Stepsize Generalized Simulated Annealing (VSGSA), was modified with two algorithms to restrict the number of beam weights in the final solution. The first algorithm selected combinations of a fixed number of beams from the complete solution space at each iterative step of the optimization process. The second reduced the allowed number of beams by a factor of two at periodic steps during the optimization process until only the specified number of beams remained. Results of optimization of beam weights and angles using these algorithms were compared using a standard set of abdominal cases. The solution space was defined as a set of 36 custom-shaped open and wedge-filtered fields at 10 deg. increments with a constant target volume margin of 1.2 cm. For each case a clinically accepted cost function, minimum tumor dose, was maximized subject to a set of normal tissue binary dose-volume constraints. For this study, the optimized plan was restricted to four (4) fields suitable for delivery with conventional therapy equipment. Results: The table gives the mean value of the minimum target dose obtained for each algorithm averaged over 5 different runs and the comparable manual treatment
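    The accept/reject core of simulated annealing is compact. The sketch below is a generic version with an invented one-dimensional test cost, not the VSGSA code; it only illustrates the Metropolis acceptance rule and geometric cooling that such beam-weight searches build on.

```python
import math, random

def anneal(cost, x0, step, t0=1.0, cooling=0.995, iters=5000, seed=0):
    """Generic simulated annealing: Metropolis acceptance, geometric cooling."""
    rng = random.Random(seed)
    x, fx, t = list(x0), cost(x0), t0
    for _ in range(iters):
        y = [xi + rng.uniform(-step, step) for xi in x]
        fy = cost(y)
        # accept improvements always, uphill moves with Boltzmann probability
        if fy < fx or rng.random() < math.exp((fx - fy) / t):
            x, fx = y, fy
        t *= cooling
    return x, fx
```

    In the planning context, `cost` would score a candidate beam-weight vector against the dose-volume constraints; restricting the number of beams amounts to constraining which components of `x` may be nonzero.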

  18. Optimal simulation of a perfect entangler

    Yu Nengkun; Duan Runyao; Ying Mingsheng

    2010-01-01

    A 2 × 2 unitary operation is called a perfect entangler if it can generate a maximally entangled state from some unentangled input. We study the following question: How many runs of a given two-qubit entangling unitary operation are required to simulate some perfect entangler with one-qubit unitary operations as free resources? We completely solve this problem by presenting an analytical formula for the optimal number of runs of the entangling operation. Our result reveals an entanglement strength of two-qubit unitary operations.

  19. Simulation and optimization of fractional crystallization processes

    Thomsen, Kaj; Rasmussen, Peter; Gani, Rafiqul

    1998-01-01

    A general method for the calculation of various types of phase diagrams for aqueous electrolyte mixtures is outlined. It is shown how the thermodynamic equilibrium precipitation process can be used to satisfy the operational needs of industrial crystallizer/centrifuge units. Examples of simulation and optimization of fractional crystallization processes are shown. In one of these examples, a process with multiple steady states is analyzed. The thermodynamic model applied for describing the highly non-ideal aqueous electrolyte systems is the Extended UNIQUAC model.

  20. Ring rolling process simulation for geometry optimization

    Franchi, Rodolfo; Del Prete, Antonio; Donatiello, Iolanda; Calabrese, Maurizio

    2017-10-01

    Ring rolling is a complex hot forming process where different rolls are involved in the production of seamless rings. Since each roll must be independently controlled, different speed laws must be set; usually, in the industrial environment, a milling curve is introduced to monitor the shape of the workpiece during deformation in order to ensure correct ring production. In the present paper a ring rolling process has been studied and optimized in order to obtain annular components to be used in aerospace applications. In particular, the influence of process input parameters (feed rate of the mandrel and angular speed of the main roll) on geometrical features of the final ring has been evaluated. For this purpose, a three-dimensional finite element model for HRR (Hot Ring Rolling) has been implemented in SFTC DEFORM V11. The FEM model has been used to formulate a proper optimization problem. The optimization procedure has been implemented in the commercial software DS Isight in order to find the combination of process parameters that minimizes the percentage error of each obtained dimension with respect to its nominal value. The software finds the relationship between input and output parameters by applying Response Surface Methodology (RSM), using the exact values of the output parameters in the control points of the design space explored through FEM simulation. Once this relationship is known, the values of the output parameters can be calculated for each combination of the input parameters. After the calculation of the response surfaces for the selected output parameters, an optimization procedure based on Genetic Algorithms has been applied. In the end, the error between each obtained dimension and its nominal value has been minimized. The constraints imposed were the maximum values of the standard deviations of the dimensions obtained for the final ring.
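    The RSM step, fitting an explicit surface to simulated control points and then optimizing the cheap surface instead of the expensive FEM model, can be illustrated in one dimension. The example below is a hypothetical sketch assuming a quadratic surface and noise-free samples.

```python
import numpy as np

def fit_quadratic(x, y):
    """Least-squares quadratic response surface y ~ a + b*x + c*x**2,
    fitted to simulated control points."""
    A = np.column_stack([np.ones_like(x), x, x**2])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coef

def surface_minimum(coef):
    """Analytic minimizer of the fitted parabola (assumes upward curvature)."""
    a, b, c = coef
    return -b / (2 * c)
```

    In the paper's setting the surface is multi-dimensional and the subsequent search uses a genetic algorithm rather than the closed-form vertex used here.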

  1. Opportunities for Improving Army Modeling and Simulation Development: Making Fundamental Adjustments and Borrowing Commercial Business Practices

    Lee, John

    2000-01-01

    ...; requirements which span the conflict spectrum. The Army's current staff training simulation development process could better support all possible scenarios by making some fundamental adjustments and borrowing commercial business practices...

  2. CALCULATION METHODS OF OPTIMAL ADJUSTMENT OF CONTROL SYSTEM THROUGH DISTURBANCE CHANNEL

    I. M. Golinko

    2014-01-01

    In the process of debugging automatic control systems, great attention is paid to determining the parameters of optimal dynamic controller tuning, taking the dynamics of the controlled objects into account. Most known formulas are oriented towards designing the control system through the setpoint ("input-output") channel, yet in practically all continuous processes the main task of the controllers is stabilization of output parameters. Methods for calculating dynamic controller tuning parameters have been developed that optimize analog and digital controllers while minimizing the effect of disturbances; it is suggested to use the detuning factor and the maximum value of the disturbance response. Since optimizing a control system with proportional-plus-reset controllers on the disturbance channel is a unimodal task, the main optimization algorithm is realized by the Hooke-Jeeves method. For controller optimization through the external disturbance channel, functional dependences have been obtained that calculate the dynamic tuning parameters of proportional-plus-reset controllers from the dynamic characteristics of the controlled object. These dependences improve the behaviour of automatic control on the external disturbance channel and hence the quality of regulation of transient processes. The calculation formulas provide high accuracy and are convenient to use. The suggested method requires no nomographs, which removes subjectivity in determining the dynamic tuning parameters of proportional-plus-reset controllers. The functional dependences can be used to calculate controller settings over a wide range of controlled-object dynamic characteristics.
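    The Hooke-Jeeves method named above is a derivative-free pattern search. The sketch below keeps only the exploratory moves and mesh refinement (the pattern-move acceleration of the full method is omitted), applied to an invented quadratic test function rather than a disturbance-response criterion.

```python
def hooke_jeeves(f, x0, step=1.0, shrink=0.5, tol=1e-6):
    """Exploratory moves along each axis; halve the step when no move helps."""
    x, fx = list(x0), f(list(x0))
    while step > tol:
        improved = False
        for i in range(len(x)):
            for d in (step, -step):
                y = list(x)
                y[i] += d
                fy = f(y)
                if fy < fx:             # keep the first improving axial move
                    x, fx, improved = y, fy, True
                    break
        if not improved:
            step *= shrink              # no gain anywhere: refine the mesh
    return x, fx
```

    For controller tuning, `x` would hold the proportional gain and reset time, and `f` would simulate the closed-loop disturbance response and return the chosen quality criterion.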

  3. Method for optimum determination of adjustable parameters in the boiling water reactor core simulator using operating data on flux distribution

    Kiguchi, T.; Kawai, T.

    1975-01-01

    A method has been developed to optimally and automatically determine the adjustable parameters of the boiling water reactor three-dimensional core simulator FLARE. The steepest gradient method is adopted for the optimization. The parameters are adjusted to best fit the operating data on power distribution measured by traversing in-core probes (TIP). The average error in the calculated TIP readings normalized by the core average is 0.053 at the rated power. The k-infinity correction term has also been derived theoretically to reduce the relatively large error in the calculated TIP readings near the tips of control rods, which is induced by the coarseness of mesh points. By introducing this correction, the average error decreases to 0.047. The void-quality relation is recognized as a function of coolant flow rate. The relation is estimated to fit the measured distributions of TIP reading at the partial power states
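    The steepest gradient step can be sketched with a toy model in place of the FLARE simulator: adjust parameters down the numerical gradient of the squared error between calculated and measured readings. The linear model and data below are invented for illustration; the real code fits full three-dimensional TIP distributions.

```python
def sum_sq_error(params, times, measured):
    """Squared error between a toy linear 'simulator' a + b*t and measurements."""
    a, b = params
    return sum((a + b*t - m)**2 for t, m in zip(times, measured))

def numerical_grad(f, p, h=1e-6):
    """Forward-difference gradient of f at p."""
    fp = f(p)
    g = []
    for i in range(len(p)):
        q = list(p)
        q[i] += h
        g.append((f(q) - fp) / h)
    return g

def steepest_descent(f, p0, lr=0.05, iters=500):
    """Adjust parameters along the negative gradient of the fit error."""
    p = list(p0)
    for _ in range(iters):
        g = numerical_grad(f, p)
        p = [pi - lr*gi for pi, gi in zip(p, g)]
    return p
```

    The fixed learning rate is an illustrative simplification; a line search along the steepest direction would be closer to the classical method.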

  4. A Three-Stage Optimization Algorithm for the Stochastic Parallel Machine Scheduling Problem with Adjustable Production Rates

    Rui Zhang

    2013-01-01

    We consider a parallel machine scheduling problem with random processing/setup times and adjustable production rates. The objective function to be minimized consists of two parts: the first is related to due date performance (i.e., the tardiness of the jobs), while the second is related to the setting of machine speeds. Therefore, the decision variables include both the production schedule (sequences of jobs) and the production rate of each machine. The optimization process, however, is significantly complicated by the stochastic factors in the manufacturing system. To address the difficulty, a simulation-based three-stage optimization framework is presented in this paper for high-quality robust solutions to the integrated scheduling problem. The first stage (crude optimization) is based on the ordinal optimization theory, the second stage (finer optimization) is implemented with a metaheuristic called differential evolution, and the third stage (fine-tuning) is characterized by a perturbation-based local search. Finally, computational experiments are conducted to verify the effectiveness of the proposed approach. Sensitivity analysis and practical implications are also discussed.
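    The second-stage metaheuristic, differential evolution, is easy to sketch. The version below is a simplified DE/rand/1/bin (it does not force at least one mutated component, as canonical DE does), applied to an invented continuous test function rather than the stochastic scheduling objective.

```python
import random

def differential_evolution(f, bounds, pop_size=20, F=0.6, CR=0.9, gens=100, seed=3):
    """Simplified DE/rand/1/bin with greedy one-to-one replacement."""
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    fit = [f(x) for x in pop]
    for _ in range(gens):
        for i in range(pop_size):
            a, b, c = rng.sample([j for j in range(pop_size) if j != i], 3)
            # mutate three distinct donors, crossover with the current member
            trial = [pop[a][d] + F * (pop[b][d] - pop[c][d])
                     if rng.random() < CR else pop[i][d] for d in range(dim)]
            trial = [min(max(t, lo), hi) for t, (lo, hi) in zip(trial, bounds)]
            ft = f(trial)
            if ft <= fit[i]:
                pop[i], fit[i] = trial, ft
    best = min(range(pop_size), key=fit.__getitem__)
    return pop[best], fit[best]
```

    In the paper's framework, `f` would be a (noisy) simulation of tardiness plus speed-setting cost, evaluated with enough replications to make comparisons meaningful.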

  5. Optimizing a Water Simulation based on Wavefront Parameter Optimization

    Lundgren, Martin

    2017-01-01

    DICE, a Swedish game company, wanted a more realistic water simulation. Currently, most large scale water simulations used in games are based upon ocean simulation technology. These techniques falter when used in other scenarios, such as coastlines. In order to produce a more realistic simulation, a new one was created based upon the water simulation technique "Wavefront Parameter Interpolation". This technique involves a rather extensive preprocess that enables ocean simulations to have inte...

  6. Automatic efficiency optimization of an axial compressor with adjustable inlet guide vanes

    Li, Jichao; Lin, Feng; Nie, Chaoqun; Chen, Jingyi

    2012-04-01

    The inlet attack angle of the rotor blade can be suitably adjusted by changing the stagger angle of the inlet guide vane (IGV), which affects the efficiency at each operating condition. To improve efficiency, a DSP (Digital Signal Processor) controller is designed to adjust the stagger angle of the IGV automatically in order to optimize the efficiency at any operating condition. The A/D signals collected include inlet static pressure, outlet static pressure, outlet total pressure, rotor speed and torque; the efficiency is calculated in the DSP, and the angle signal for the stepping motor that controls the IGV is sent out from the D/A. Experimental investigations are performed on a three-stage, low-speed axial compressor with variable inlet guide vanes. It is demonstrated that the DSP can adjust the stagger angle of the IGV online and that the efficiency under different conditions can be optimized. This online adjustment scheme may provide a practical solution for improving the performance of a multi-stage axial flow compressor when its operating condition varies.
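    The efficiency computed in the DSP can be sketched, under a low-speed incompressible assumption, as the ratio of aerodynamic power (volume flow times total-pressure rise) to shaft power (torque times angular speed). The formula and inputs below are an illustrative assumption, not the paper's exact definition.

```python
import math

def compressor_efficiency(q_flow, dp_total, torque, rpm):
    """Illustrative low-speed (incompressible) compressor efficiency:
    aerodynamic power = volume flow [m^3/s] * total-pressure rise [Pa],
    shaft power       = torque [N*m] * angular speed [rad/s]."""
    omega = 2.0 * math.pi * rpm / 60.0
    return (q_flow * dp_total) / (torque * omega)
```

    In the rig, flow and pressure rise would themselves be derived from the measured static and total pressures before this ratio is formed.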

  7. A design of calibration single star simulator with adjustable magnitude and optical spectrum output system

    Hu, Guansheng; Zhang, Tao; Zhang, Xuan; Shi, Gentai; Bai, Haojie

    2018-03-01

    In order to achieve multi-color-temperature and multi-magnitude output with real-time adjustment of magnitude and temperature, a new type of calibration single star simulator with adjustable magnitude and optical spectrum output was designed in this article. A xenon lamp and a tungsten halogen lamp were used as light sources. Control of the spectral band and the color temperature of the simulated star was realized by combining multiple narrow-band spectral beams of varying intensity. When light with different spectral characteristics and color temperatures enters the magnitude regulator, the attenuation of the light energy is controlled by adjusting the luminosity. This method satisfies the requirements of a calibration single star simulator with adjustable magnitude and optical spectrum output, achieving the goal of adjustable magnitude and spectrum.

  8. Optimization of Simulated Inventory Systems : OptQuest and Alternatives

    Kleijnen, J.P.C.; Wan, J.

    2006-01-01

    This article illustrates simulation optimization through an (s, S) inventory management system. In this system, the goal function to be minimized is the expected value of specific inventory costs. Moreover, specific constraints must be satisfied for some random simulation responses, namely the service or fill rate, and for some deterministic simulation inputs, namely the constraint s < S. Several optimization methods are compared, including the popular OptQuest method. The optimal...

  9. Bandwidth Optimization of Normal Equation Matrix in Bundle Block Adjustment in Multi-baseline Rotational Photography

    WANG Xiang

    2016-02-01

    A new bandwidth optimization method for the normal equation matrix in bundle block adjustment in multi-baseline rotational close range photography, based on image index re-sorting, is proposed. The equivalent exposure station of each image is calculated from its object space coverage and its relationship with other adjacent images. Then, according to the coordinate relations between equivalent exposure stations, new logical indices for all images are computed, from which the optimized bandwidth value can be obtained. Experimental results show that the bandwidth determined by the proposed method is significantly smaller than its original value; thus the operational efficiency, as well as the memory consumption, of multi-baseline rotational close range photography in real-data applications is improved to a certain extent.
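    The quantity being optimized, the half-bandwidth of the normal matrix, depends only on the largest index difference between connected images, which is why re-sorting the image indices helps. A minimal sketch with an invented four-image ring:

```python
def bandwidth(pairs):
    """Half-bandwidth of the normal matrix implied by pairs of connected images."""
    return max(abs(i - j) for i, j in pairs) if pairs else 0

def relabel(pairs, order):
    """Re-sort image indices: position in 'order' becomes the new logical index."""
    pos = {img: k for k, img in enumerate(order)}
    return [(pos[i], pos[j]) for i, j in pairs]
```

    Here a ring of four overlapping images labelled in shooting order 0-1-2-3 gives bandwidth 3, because images 0 and 3 also overlap; relabelling them in the order 0, 1, 3, 2 reduces the bandwidth to 2. The paper's method chooses the ordering from the equivalent exposure stations rather than by inspection.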

  10. Numerical simulation and optimal design of Segmented Planar Imaging Detector for Electro-Optical Reconnaissance

    Chu, Qiuhui; Shen, Yijie; Yuan, Meng; Gong, Mali

    2017-12-01

    Segmented Planar Imaging Detector for Electro-Optical Reconnaissance (SPIDER) is a cutting-edge electro-optical imaging technology to realize miniaturization and complanation of imaging systems. In this paper, the principle of SPIDER has been numerically demonstrated based on the partially coherent light theory, and a novel concept of adjustable baseline pairing SPIDER system has further been proposed. Based on the results of simulation, it is verified that the imaging quality could be effectively improved by adjusting the Nyquist sampling density, optimizing the baseline pairing method and increasing the spectral channel of demultiplexer. Therefore, an adjustable baseline pairing algorithm is established for further enhancing the image quality, and the optimal design procedure in SPIDER for arbitrary targets is also summarized. The SPIDER system with adjustable baseline pairing method can broaden its application and reduce cost under the same imaging quality.

  11. Concrete Plant Operations Optimization Using Combined Simulation and Genetic Algorithms

    Cao, Ming; Lu, Ming; Zhang, Jian-Ping

    2004-01-01

    This work presents a new approach for concrete plant operations optimization by combining a ready mixed concrete (RMC) production simulation tool (called HKCONSIM) with a genetic algorithm (GA) based optimization procedure. A revamped HKCONSIM computer system can be used to automate the simulation

  12. Optimizing Chromatographic Separation: An Experiment Using an HPLC Simulator

    Shalliker, R. A.; Kayillo, S.; Dennis, G. R.

    2008-01-01

    Optimization of a chromatographic separation within the time constraints of a laboratory session is practically impossible. However, by employing a HPLC simulator, experiments can be designed that allow students to develop an appreciation of the complexities involved in optimization procedures. In the present exercise, a HPLC simulator from "JCE…

  13. GPU-accelerated CFD Simulations for Turbomachinery Design Optimization

    Aissa, M.H.

    2017-01-01

    Design optimization relies heavily on time-consuming simulations, especially when using gradient-free optimization methods. These methods require a large number of simulations in order to get a remarkable improvement over reference designs, which are nowadays based on the accumulated engineering

  14. Optimal Scheme Selection of Agricultural Production Structure Adjustment - Based on DEA Model; Punjab (Pakistan)

    Zeeshan Ahmad; Meng Jun; Muhammad Abdullah; Mazhar Nadeem Ishaq; Majid Lateef; Imran Khan

    2015-01-01

    This paper used the modern evaluation method of DEA (Data Envelopment Analysis) to assess comparative efficiency and, on that basis, to choose the optimal scheme of agricultural production structure adjustment from among multiple candidate schemes. Based on the results of the DEA model, we analysed the scale advantages of each candidate scheme and investigated in depth the underlying reasons why some schemes are not DEA-efficient, which clarified the approach and methodology for improving them. Finally, another method was proposed to rank the schemes and select the optimal one. The research provides practical guidance for carrying out the adjustment of the agricultural production structure.

  15. Constrained optimization via simulation models for new product innovation

    Pujowidianto, Nugroho A.

    2017-11-01

    We consider the problem of constrained optimization where decision makers aim to optimize the primary performance measure while constraining the secondary performance measures. This paper provides a brief overview of stochastically constrained optimization via discrete event simulation. Most review papers tend to be methodology-based; this review attempts to be problem-based, as decision makers may have already decided on the problem formulation. We consider constrained optimization models because there are usually constraints on secondary performance measures as a trade-off in new product development. The paper starts by laying out the different possible methods and the reasons for using constrained optimization via simulation models. It is then followed by a review of different simulation optimization approaches to constrained optimization, depending on the number of decision variables, the type of constraints, and the risk preferences of the decision makers in handling uncertainties.

  16. Optimization of Operations Resources via Discrete Event Simulation Modeling

    Joshi, B.; Morris, D.; White, N.; Unal, R.

    1996-01-01

    The resource levels required for operation and support of reusable launch vehicles are typically defined through discrete event simulation modeling. Minimizing these resources constitutes an optimization problem involving discrete variables and simulation. Conventional approaches to solve such optimization problems involving integer valued decision variables are the pattern search and statistical methods. However, in a simulation environment that is characterized by search spaces of unknown topology and stochastic measures, these optimization approaches often prove inadequate. In this paper, we have explored the applicability of genetic algorithms to the simulation domain. Genetic algorithms provide a robust search strategy that does not require continuity and differentiability of the problem domain. The genetic algorithm successfully minimized the operation and support activities for a space vehicle, through a discrete event simulation model. The practical issues associated with simulation optimization, such as stochastic variables and constraints, were also taken into consideration.
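    A genetic algorithm over integer resource levels can be sketched with truncation selection, one-point crossover, and point mutation, with infeasible resource levels discouraged by a penalty in the cost function. All parameters and the cost model below are invented for illustration; the paper's cost comes from a discrete event simulation run.

```python
import random

def genetic_search(cost, n_vars, low, high, pop=30, gens=60, pm=0.2, seed=7):
    """Truncation selection + one-point crossover + point mutation over
    integer resource-level vectors (all settings illustrative)."""
    rng = random.Random(seed)
    P = [[rng.randint(low, high) for _ in range(n_vars)] for _ in range(pop)]
    for _ in range(gens):
        P.sort(key=cost)                       # cheapest resource plans first
        elite = P[:pop // 2]                   # keep the better half
        children = []
        while len(elite) + len(children) < pop:
            a, b = rng.sample(elite, 2)
            cut = rng.randrange(1, n_vars)     # one-point crossover
            child = a[:cut] + b[cut:]
            if rng.random() < pm:              # point mutation
                child[rng.randrange(n_vars)] = rng.randint(low, high)
            children.append(child)
        P = elite + children
    return min(P, key=cost)
```

    With a stochastic simulation as the cost, each evaluation would be averaged over replications; the penalty approach shown in the test is one common way to fold constraints into the fitness.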

  17. Multipacting Simulations of Tuner-adjustable waveguide coupler (TaCo) with CST

    Shafqat, Nuaman; Wegner, Rolf

    2015-01-01

    Tuner-adjustable waveguide couplers (TaCo) are used to feed microwave power to different RF structures of LINAC4. This paper studies the multipacting phenomenon for TaCo using the PIC solver of CST Particle Studio. Simulations are performed for complete field sweeps and the results are analysed.

  18. Sequential use of simulation and optimization in analysis and planning

    Hans R. Zuuring; Jimmie D. Chew; J. Greg Jones

    2000-01-01

    Management activities are analyzed at landscape scales employing both simulation and optimization. SIMPPLLE, a stochastic simulation modeling system, is initially applied to assess the risks associated with a specific natural process occurring on the current landscape without management treatments, but with fire suppression. These simulation results are input into...

  19. CASTING IMPROVEMENT BASED ON METAHEURISTIC OPTIMIZATION AND NUMERICAL SIMULATION

    Radomir Radiša

    2017-12-01

    This paper presents the use of metaheuristic optimization techniques to support the improvement of the casting process. Genetic Algorithm (GA), Ant Colony Optimization (ACO), Simulated Annealing (SA) and Particle Swarm Optimization (PSO) have been considered as optimization tools to define the geometry of the casting part's feeder. The proposed methodology has been demonstrated in the design of the feeder for casting a Pelton turbine bucket. The results of the optimization are the dimensional characteristics of the feeder, and the best result from all the implemented optimization processes has been adopted. Numerical simulation has been used to verify the validity of the presented design methodology and of the feeding system optimization in the casting system of the Pelton turbine bucket.

  20. Beam Delivery Simulation - Recent Developments and Optimization

    AUTHOR|(INSPIRE)INSPIRE-00232566; Boogert, Stewart Takashi; Garcia-Morales, H; Gibson, Stephen; Kwee-Hinzmann, Regina; Nevay, Laurence James; Deacon, Lawrence Charles

    2015-01-01

    Beam Delivery Simulation (BDSIM) is a particle tracking code that simulates the passage of particles through the magnetic accelerator lattice as well as their interaction with the material of the accelerator itself. The Geant4 toolkit is used to provide the full range of physics processes needed to simulate both the interaction of primary particles and the production and subsequent propagation of secondaries. BDSIM has already been used to simulate linear accelerators such as the International Linear Collider (ILC) and the Compact Linear Collider (CLIC), but it has recently been adapted to simulate circular accelerators as well, producing loss maps for the Large Hadron Collider (LHC). In this paper the most recent developments, which extend BDSIM’s functionality as well as improve its efficiency, are presented. Improvement and refactorisation of the tracking algorithms are presented alongside improved automatic geometry construction for increased particle tracking speed.

  1. Multiobjective Shape Optimization for Deployment and Adjustment Properties of Cable-Net of Deployable Antenna

    Guoqiang You

    2015-01-01

    Full Text Available Based on the structural features of the cable-net of a deployable antenna, a multiobjective shape optimization method is proposed to help engineer a cable-net structure with better deployment and adjustment properties. In this method, the multiobjective optimization model takes the locations of the lower nodes of the cable-net as variables, the average stress ratio of the cable elements and the strain energy as objectives, and the surface precision and natural frequency of the cable-net as constraints. The sequential quadratic programming method is used to solve this nonlinear mathematical model under different weighting coefficients, and the results show the validity and effectiveness of the proposed method and model.
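
    The weighted-sum treatment of the two objectives can be illustrated with a one-variable toy problem. The quadratic objective stand-ins below are hypothetical (the paper's objectives come from a cable-net finite element model), and dense sampling replaces sequential quadratic programming for brevity.

```python
def f1(h):
    return (h - 2.0) ** 2        # toy stand-in: stress-ratio uniformity, optimum at h = 2

def f2(h):
    return 0.5 * (h - 3.0) ** 2  # toy stand-in: strain energy, optimum at h = 3

def weighted_sum_minimize(w1, w2, lo=0.0, hi=5.0, steps=10000):
    """Scalarize the two objectives with weighting coefficients and minimize
    by dense sampling over the design interval."""
    best_h, best_f = lo, float("inf")
    for i in range(steps + 1):
        h = lo + (hi - lo) * i / steps
        f = w1 * f1(h) + w2 * f2(h)
        if f < best_f:
            best_h, best_f = h, f
    return best_h

# different weighting coefficients trace out different compromise designs
pareto = [weighted_sum_minimize(w, 1.0 - w) for w in (0.2, 0.5, 0.8)]
```

    Sweeping the weights moves the solution between the two single-objective optima, which is exactly the role of the weighting coefficients in the paper's formulation.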

  2. Optimal assembly line balancing using simulation techniques

    user

    Department of Mechanical Engineering ... perspective on how the business operates, and ... Process simulation allows management ... improvement and change since it would be a costly ... The work content performed on an assembly line.

  3. Conceptual Model for Simulating the Adjustments of Bankfull Characteristics in the Lower Yellow River, China

    Yuanjian Wang

    2014-01-01

    Full Text Available We present a conceptual model for simulating the temporal adjustments in the banks of the Lower Yellow River (LYR). Basic conservation equations for mass, friction, and sediment transport capacity and the Exner equation were adopted to simulate the hydrodynamics underlying fluvial processes. The relationship between the rates of change in bankfull width and depth, derived from quasi-universal hydraulic geometries, was used as a closure for the hydrodynamic equations. On inputting the daily flow discharge and sediment load, the conceptual model successfully simulated the 30-year adjustments in the bankfull geometries of typical reaches of the LYR. The square of the correlation coefficient reached 0.74 for Huayuankou Station in the multiple-thread reach and exceeded 0.90 for Lijin Station in the meandering reach. The proposed model allows multiple dependent variables and the input of daily hydrological data for long-term simulations. It links the hydrodynamic and geomorphic processes in a fluvial river and is potentially applicable to fluvial rivers undergoing significant adjustments.

  4. Applied simulation and optimization : in logistics, industrial and aeronautical practice

    Mujica Mota, Miguel; De la Mota, Idalia Flores; Guimarans Serrano, Daniel

    2015-01-01

    Presenting techniques, case studies and methodologies that combine simulation approaches with optimization techniques for tackling problems in manufacturing, logistics and aeronautics, this book provides solutions to common industrial problems in several fields, which range from

  5. Robust Optimization in Simulation : Taguchi and Response Surface Methodology

    Dellino, G.; Kleijnen, J.P.C.; Meloni, C.

    2008-01-01

    Optimization of simulated systems is tackled by many methods, but most methods assume known environments. This article, however, develops a 'robust' methodology for uncertain environments. This methodology uses Taguchi's view of the uncertain world, but replaces his statistical techniques by

  6. Simulation and Optimization of Foam EOR Processes

    Namdar Zanganeh, M.

    2011-01-01

    Chemical enhanced oil recovery (EOR) is relatively expensive due to the high cost of the injected chemicals such as surfactants. Excessive use of these chemicals leads to processes that are not economically feasible. Therefore, optimizing the volume of these injected chemicals is of extreme

  7. Optimal Simulations by Butterfly Networks: Extended Abstract,

    1987-11-01

    Typescript, Univ. of Massachusetts; submitted for publication. (1987): An optimal mapping of the FFT algorithm onto the hypercube architecture. Typescript, Univ. of Massachusetts; submitted for publication. (HR I

  8. Study on Gas Field Optimization Distribution with Parameters Adjustment of the Air Duct Outlet for Mechanized Heading Face in Coal Mine

    Gong, Xiao-Yan; Zhang, Xin-Yi; Wu, Yue; Xia, Zhi-Xin; Li, Ying

    2017-12-01

    As drilling dimensions in coal mines increase, with larger cross-sections and longer drivage distances, gas accumulation at the mechanized heading face becomes a severe problem. In this paper, the gas distribution was optimized by adjusting the parameters of the air duct outlet, namely its angle, caliber, and front-to-rear distance. Taking the mechanized heading face of the Ningtiaota coal mine as the research object, the problems of the original gas field were simulated and analysed, and the reasonable ranges of the air duct outlet parameters were determined according to the allowable range of wind speed and the gas dilution effect. On this basis, the gas field distribution under different parameter adjustments of the air duct outlet was simulated. The specific parameters for different distances between the air duct outlet and the mechanized heading face were obtained, providing a new method for optimizing the gas distribution by adjusting the parameters of the air duct outlet.

  9. Optimization of forging processes using finite element simulations

    Bonte, M.H.A.; Fourment, Lionel; Do, Tien-tho; van den Boogaard, Antonius H.; Huetink, Han

    2010-01-01

    During the last decades, simulation software based on the Finite Element Method (FEM) has significantly contributed to the design of feasible forming processes. Coupling FEM to mathematical optimization algorithms offers a promising opportunity to design optimal metal forming processes rather than

  10. Optimization Model for Web Based Multimodal Interactive Simulations.

    Halic, Tansel; Ahn, Woojin; De, Suvranu

    2015-07-15

    This paper presents a technique for optimizing the performance of web based multimodal interactive simulations. For such applications where visual quality and the performance of simulations directly influence user experience, overloading of hardware resources may result in unsatisfactory reduction in the quality of the simulation and user satisfaction. However, optimization of simulation performance on individual hardware platforms is not practical. Hence, we present a mixed integer programming model to optimize the performance of graphical rendering and simulation performance while satisfying application specific constraints. Our approach includes three distinct phases: identification, optimization and update . In the identification phase, the computing and rendering capabilities of the client device are evaluated using an exploratory proxy code. This data is utilized in conjunction with user specified design requirements in the optimization phase to ensure best possible computational resource allocation. The optimum solution is used for rendering (e.g. texture size, canvas resolution) and simulation parameters (e.g. simulation domain) in the update phase. Test results are presented on multiple hardware platforms with diverse computing and graphics capabilities to demonstrate the effectiveness of our approach.
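
    The optimization phase described above can be mimicked with a tiny discrete model: enumerate candidate settings and keep the highest-quality combination that fits the device's frame-time budget. All tables and parameter names below are invented for illustration; the paper's model is a general mixed integer program, not this brute-force enumeration.

```python
from itertools import product

# Hypothetical per-device costs and quality scores for two discrete knobs
# (texture size and simulated particle count); values are illustrative.
render_cost = {256: 1.0, 512: 2.5, 1024: 6.0}        # texture size -> ms/frame
sim_cost    = {1000: 2.0, 5000: 6.0, 20000: 18.0}    # particle count -> ms/step
quality     = {256: 1, 512: 2, 1024: 3}              # rendering quality score
fidelity    = {1000: 1, 5000: 2, 20000: 3}           # simulation fidelity score

def best_settings(budget_ms=10.0):
    """Enumerate the discrete design space and keep the feasible
    combination with the highest total quality score."""
    best, best_score = None, -1
    for tex, particles in product(render_cost, sim_cost):
        if render_cost[tex] + sim_cost[particles] <= budget_ms:
            score = quality[tex] + fidelity[particles]
            if score > best_score:
                best, best_score = (tex, particles), score
    return best, best_score

choice = best_settings(10.0)
```

    In the paper's workflow the costs would come from the identification phase (the exploratory proxy code run on the client device) rather than from fixed tables.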

  11. OPTIMIZING THE DISTRIBUTION OF TIE POINTS FOR THE BUNDLE ADJUSTMENT OF HRSC IMAGE MOSAICS

    J. Bostelmann

    2017-07-01

    Full Text Available For a systematic mapping of the Martian surface, the Mars Express orbiter is equipped with a multi-line scanner: since the beginning of 2004 the High Resolution Stereo Camera (HRSC) regularly acquires long image strips. By now more than 4,000 strips covering nearly the whole planet are available. Due to the nine channels, each with a different viewing direction, and partly with different optical filters, each strip provides 3D and color information and allows the generation of digital terrain models (DTMs) and orthophotos. To map larger regions, neighboring HRSC strips can be combined to build DTM and orthophoto mosaics. The global mapping scheme Mars Chart 30 is used to define the extent of these mosaics. In order to avoid unreasonably large data volumes, each MC-30 tile is divided into two parts, combining about 90 strips each. To ensure a seamless fit of these strips, several radiometric and geometric corrections are applied in the photogrammetric process. A simultaneous bundle adjustment of all strips as a block is carried out to estimate their precise exterior orientation. Because the size, position, resolution and image quality of the strips in these blocks are heterogeneous, the quality and distribution of the tie points vary as well. In the absence of ground control points, heights of a global terrain model are used as reference information, and for this task a regular distribution of these tie points is preferable. In addition, their total number should be limited for computational reasons. In this paper, we present an algorithm which optimizes the distribution of tie points under these constraints. A large number of tie points used as input is reduced without affecting the geometric stability of the block by preserving connections between strips. This stability is achieved by using a regular grid in object space and discarding, for each grid cell, points which are redundant for the block adjustment. The set of tie points, filtered by the
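
    The grid-based reduction idea can be sketched as follows; the quota, cell size, and redundancy test are simplified assumptions, not the paper's exact criterion.

```python
def thin_tie_points(points, cell=10.0, max_per_cell=2):
    """Reduce tie points on a regular object-space grid: per cell, keep a
    point only if it connects a strip pair not yet covered there, or if
    the cell is still under its quota. This preserves connections between
    strips while discarding redundant points."""
    pairs_in_cell = {}   # grid cell -> strip pairs already connected there
    count_in_cell = {}   # grid cell -> number of points kept there
    kept = []
    for x, y, strips in points:
        key = (int(x // cell), int(y // cell))
        pair = frozenset(strips)
        seen = pairs_in_cell.setdefault(key, set())
        n = count_in_cell.get(key, 0)
        if pair not in seen or n < max_per_cell:
            seen.add(pair)
            count_in_cell[key] = n + 1
            kept.append((x, y, strips))
    return kept

# five candidate points: (x, y, ids of the two strips observing the point)
tie_points = [(1.0, 1.0, (0, 1)), (2.0, 2.0, (0, 1)), (3.0, 3.0, (0, 1)),
              (4.0, 4.0, (1, 2)), (15.0, 1.0, (0, 1))]
reduced = thin_tie_points(tie_points)
```

    Here the third point is discarded as redundant, while the point connecting the new strip pair (1, 2) is kept even though its cell has reached the quota.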

  12. Coded aperture optimization using Monte Carlo simulations

    Martineau, A.; Rocchisani, J.M.; Moretti, J.L.

    2010-01-01

    Coded apertures using Uniformly Redundant Arrays (URA) have been unsuccessfully evaluated for two-dimensional and three-dimensional imaging in Nuclear Medicine. The images reconstructed from coded projections contain artifacts and suffer from poor spatial resolution in the longitudinal direction. We introduce a Maximum-Likelihood Expectation-Maximization (MLEM) algorithm for three-dimensional coded aperture imaging which uses a projection matrix calculated by Monte Carlo simulations. The aim of the algorithm is to reduce artifacts and improve the three-dimensional spatial resolution in the reconstructed images. Firstly, we present the validation of GATE (Geant4 Application for Emission Tomography) for Monte Carlo simulations of a coded mask installed on a clinical gamma camera. The coded mask modelling was validated by comparison between experimental and simulated data in terms of energy spectra, sensitivity and spatial resolution. In the second part of the study, we use the validated model to calculate the projection matrix with Monte Carlo simulations. A three-dimensional thyroid phantom study was performed to compare the performance of the three-dimensional MLEM reconstruction with conventional correlation method. The results indicate that the artifacts are reduced and three-dimensional spatial resolution is improved with the Monte Carlo-based MLEM reconstruction.
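
    The MLEM update used in such reconstructions has a compact form: each voxel estimate is scaled by the back-projected ratio of measured to predicted counts, normalized by the voxel sensitivity. Below is a toy two-bin, two-voxel example with an invented system matrix (the paper's projection matrix comes from GATE Monte Carlo simulation).

```python
# Tiny MLEM iteration; A[i][j] is the probability that activity in voxel j
# is detected in projection bin i (values are illustrative only).
A = [[0.8, 0.2],
     [0.3, 0.7]]
true_x = [4.0, 6.0]
# noiseless projections of the known activity distribution
y = [sum(A[i][j] * true_x[j] for j in range(2)) for i in range(2)]

x = [1.0, 1.0]   # uniform initial estimate
for _ in range(200):
    # forward projection of the current estimate
    proj = [sum(A[i][j] * x[j] for j in range(2)) for i in range(2)]
    # multiplicative MLEM update with sensitivity normalization
    x = [x[j] * sum(A[i][j] * y[i] / proj[i] for i in range(2))
              / sum(A[i][j] for i in range(2))
         for j in range(2)]
```

    With noiseless data and a well-conditioned system matrix the iteration recovers the true activity; with real coded-aperture data the same update reduces artifacts by modelling the mask in the projection matrix.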

  13. Conditional simulation for efficient global optimization

    Kleijnen, Jack P.C.; Mehdad, E.; Pasupathy, R.; Kim, S.-H.; Tolk, A.; Hill, R.; Kuhl, M.E.

    2013-01-01

    A classic Kriging or Gaussian process (GP) metamodel estimates the variance of its predictor by plugging in the estimated GP (hyper)parameters; namely, the mean, variance, and covariances. The problem is that this predictor variance is biased. To solve this problem for deterministic simulations, we

  14. Optimized multi area AGC simulation in restructured power systems

    Bhatt, Praghnesh; Roy, Ranjit; Ghoshal, S.P.

    2010-01-01

    In this paper, the traditional automatic generation control loop, with modifications, is used to simulate automatic generation control (AGC) in a restructured power system. The Federal Energy Regulatory Commission (FERC) encourages an open market system for price-based operation and has issued a notice of proposed rulemaking for various ancillary services. One of these ancillary services is load following with frequency control, which falls broadly under automatic generation control in the deregulated regime. The concept of the DISCO participation matrix is used to simulate bilateral contracts in three-area and four-area systems. Hybrid particle swarm optimization is used to obtain optimal gain parameters for optimal transient performance. (author)

  15. Dr. Mainte. Integrated simulator of maintenance optimization of LWRs

    Isobe, Yoshihiro; Sagisaka, Mitsuyuki; Etoh, Junji; Matsunaga, Takashi; Kosaka, Toru; Matsumoto, Satoshi; Yoshimura, Shinobu

    2014-01-01

    Dr. Mainte, an integrated simulator for maintenance optimization of LWRs (Light Water Reactors), has been developed based on PFM (Probabilistic Fracture Mechanics) analyses. The concept of the simulator is to provide a decision-making system that optimizes maintenance activities for representative components and piping systems in nuclear power plants, totally and quantitatively, in terms of safety, availability, economic efficiency, environmental impact and social acceptance. For further improvement of safety and availability, the effect of human error on the optimization of plant maintenance activities, and approaches to reducing it, have been studied. (author)

  16. Simulated annealing algorithm for reactor in-core design optimizations

    Zhong Wenfa; Zhou Quan; Zhong Zhaopeng

    2001-01-01

    A nuclear reactor must be optimized for in-core fuel management to make full use of the fuel, reduce the operation cost, and reasonably flatten the power distribution. The authors present a simulated annealing algorithm in which an objective function and a penalty function are defined for optimizing the reactor physics design; the penalty function is used to enforce design constraints within the simulated annealing search. The practical design of the NHR-200 was calculated. The results show that K-eff can be increased by 2.5% and the power distribution can be flattened
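
    A generic sketch of simulated annealing with a penalty function is given below; the objective, constraint, and cooling schedule are toy assumptions, not the NHR-200 physics model.

```python
import math
import random

random.seed(3)

def objective(x):
    return (x[0] - 2.0) ** 2 + (x[1] + 1.0) ** 2      # toy design objective

def penalty(x):
    # quadratic penalty enforcing the constraint x0 + x1 <= 1.5
    return 100.0 * max(0.0, x[0] + x[1] - 1.5) ** 2

def anneal(x0=(0.0, 0.0), t0=5.0, cooling=0.95, steps=2000):
    x = list(x0)
    f = objective(x) + penalty(x)
    best, best_f = x[:], f
    t = t0
    for _ in range(steps):
        cand = [xi + random.gauss(0.0, 0.3) for xi in x]
        fc = objective(cand) + penalty(cand)
        # Metropolis acceptance: always take improvements, occasionally take
        # uphill moves, with probability shrinking as the temperature drops
        if fc < f or random.random() < math.exp(-(fc - f) / t):
            x, f = cand, fc
            if f < best_f:
                best, best_f = x[:], f
        t = max(1e-3, t * cooling)
    return best, best_f

solution, value = anneal()
```

    In an in-core design code the candidate move would be a fuel-assembly swap and the objective a core physics calculation; the penalized-acceptance structure is the same.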

  17. Uncertainty assessment in geodetic network adjustment by combining GUM and Monte-Carlo-simulations

    Niemeier, Wolfgang; Tengen, Dieter

    2017-06-01

    In this article first ideas are presented to extend the classical concept of geodetic network adjustment by introducing a new method for uncertainty assessment as two-step analysis. In the first step the raw data and possible influencing factors are analyzed using uncertainty modeling according to GUM (Guidelines to the Expression of Uncertainty in Measurements). This approach is well established in metrology, but rarely adapted within Geodesy. The second step consists of Monte-Carlo-Simulations (MC-simulations) for the complete processing chain from raw input data and pre-processing to adjustment computations and quality assessment. To perform these simulations, possible realizations of raw data and the influencing factors are generated, using probability distributions for all variables and the established concept of pseudo-random number generators. Final result is a point cloud which represents the uncertainty of the estimated coordinates; a confidence region can be assigned to these point clouds, as well. This concept may replace the common concept of variance propagation and the quality assessment of adjustment parameters by using their covariance matrix. It allows a new way for uncertainty assessment in accordance with the GUM concept for uncertainty modelling and propagation. As practical example the local tie network in "Metsähovi Fundamental Station", Finland is used, where classical geodetic observations are combined with GNSS data.
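
    The second step, Monte-Carlo propagation through the adjustment, can be illustrated with a deliberately tiny example: one unknown height estimated from two levelled observations whose values and GUM-style standard uncertainties are invented for illustration.

```python
import random
import statistics

random.seed(4)

def adjust(obs):
    """A minimal 'adjustment': the least-squares estimate of one unknown
    from repeated direct observations is simply their mean."""
    return sum(obs) / len(obs)

def monte_carlo(n=5000):
    """Propagate the input uncertainty budgets through the adjustment by
    simulation, returning the mean and spread of the estimates."""
    estimates = []
    for _ in range(n):
        o1 = 100.000 + random.gauss(0.0, 0.002)  # levelling from benchmark A [m]
        o2 = 100.004 + random.gauss(0.0, 0.003)  # levelling from benchmark B [m]
        estimates.append(adjust([o1, o2]))
    return statistics.mean(estimates), statistics.stdev(estimates)

height, uncertainty = monte_carlo()
```

    The spread of the simulated estimates plays the role of the point cloud described above; in a real network the `adjust` step would be a full least-squares adjustment of the complete processing chain.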

  18. CLIC Telescope optimization with ALLPIX simulation

    Qi, Wu

    2015-01-01

    A simulation study of the CLIC-EUDET telescope resolution with MIMOSA 26 reference sensors under DESY (5.6 GeV electron beam) and CERN-SPS (120-180 GeV pion⁻ beam) conditions. In this study, a virtual DUT sensor with a cylindrical sensing area was defined and used with the ALLPIX software. By changing the telescope configuration, results for DESY's setup were found to agree with the theoretical calculation.

  19. Optimized Loading for Particle-in-cell Gyrokinetic Simulations

    Lewandowski, J.L.V.

    2004-01-01

    The problem of particle loading in particle-in-cell gyrokinetic simulations is addressed using a quadratic optimization algorithm. Optimized loading in configuration space dramatically reduces the short-wavelength modes in the electrostatic potential that are partly responsible for the non-conservation of total energy; further, the long-wavelength modes are resolved with good accuracy. As a result, energy conservation for the optimized loading is much better than that for random loading. The method is valid for any geometry and can be coupled to optimization algorithms in velocity space

  20. Simulated annealing method for electronic circuits design: adaptation and comparison with other optimization methods

    Berthiau, G.

    1995-10-01

    The circuit design problem consists in determining acceptable parameter values (resistors, capacitors, transistor geometries ...) which allow the circuit to meet various user-given operational criteria (DC consumption, AC bandwidth, transient times ...). This task is equivalent to a multidimensional and/or multiobjective optimization problem: n-variable functions have to be minimized in a hyper-rectangular domain; equality constraints can optionally be specified. A similar problem consists in fitting component models: the optimization variables are the model parameters, and one aims at minimizing a cost function built on the error between the model response and the data measured on the component. The optimization method chosen for this kind of problem is simulated annealing. This method, which originates in combinatorial optimization, has been adapted and compared with other global optimization methods for continuous-variable problems. An efficient strategy of variable discretization and a set of complementary stopping criteria have been proposed. The different parameters of the method have been tuned on analytical functions whose minima are known, classically used in the literature. Our simulated annealing algorithm has been coupled with an open electrical simulator, SPICE-PAC, whose modular structure allows the chaining of simulations required by the circuit optimization process. We proposed, for high-dimensional problems, a partitioning technique which ensures proportionality between CPU time and the number of variables. To compare our method with others, we adapted three other methods from the combinatorial optimization domain: the threshold method, a genetic algorithm and the Tabu search method. The tests have been performed on the same set of test functions and the results allow a first comparison between these methods applied to continuous optimization variables. Finally, our simulated annealing program

  1. Beam Delivery Simulation: BDSIM - Development & Optimization

    Nevay, Laurence James; Garcia-Morales, H; Gibson, S M; Kwee-Hinzmann, R; Snuverink, J; Deacon, L C

    2014-01-01

    Beam Delivery Simulation (BDSIM) is a Geant4 and C++ based particle tracking code that seamlessly tracks particles through accelerators and detectors, including the full range of particle interaction physics processes from Geant4. BDSIM has been successfully used to model beam loss and background conditions for many current and future linear accelerators such as the Accelerator Test Facility 2 (ATF2) and the International Linear Collider (ILC). Current developments extend its application for use with storage rings, in particular for the Large Hadron Collider (LHC) and the High Luminosity upgrade project (HL-LHC). This paper presents the latest results from using BDSIM to model the LHC as well as the developments underway to improve performance.

  2. Embedded FPGA Design for Optimal Pixel Adjustment Process of Image Steganography

    Chiung-Wei Huang

    2018-01-01

    Full Text Available We propose a prototype of field programmable gate array (FPGA) implementation for the optimal pixel adjustment process (OPAP) algorithm of image steganography. In the proposed scheme, the cover image and the secret message are transmitted from a personal computer (PC) to an FPGA board using an RS232 interface for hardware processing. We first embed the k-bit secret message into each pixel of the cover image by the least-significant-bit (LSB) substitution method, followed by executing the associated OPAP calculations to construct a stego pixel. After all pixels of the cover image have been embedded, a stego image is created, transmitted from the FPGA back to the PC, and stored there. Moreover, we have extended the basic pixel-wise structure to a parallel structure which can fully use the hardware devices to speed up the embedding process and embed several bits of the secret message at the same time. Through the parallel mechanism of the hardware-based design, the data hiding process can be completed in a few clock cycles to produce the steganographic outcome. Experimental results show the effectiveness and correctness of the proposed scheme.
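
    The OPAP step itself is short: after plain LSB substitution, the retained high bits are shifted by ±2^k whenever that reduces the embedding error. A minimal software sketch follows (the paper's contribution is the FPGA realization of this logic, not this code).

```python
def embed_opap(pixel, bits, k):
    """Embed k secret bits in one 8-bit pixel: LSB substitution followed by
    the optimal pixel adjustment process (OPAP) error-reduction step."""
    stego = (pixel & ~((1 << k) - 1)) | bits   # replace the k lowest bits
    d = pixel - stego
    # OPAP: shift the retained high bits by +/- 2^k when that shrinks the
    # embedding error, provided the result stays within [0, 255]
    if d > (1 << (k - 1)) and stego + (1 << k) <= 255:
        stego += 1 << k
    elif d < -(1 << (k - 1)) and stego - (1 << k) >= 0:
        stego -= 1 << k
    return stego
```

    For example, embedding the bits 00 into pixel value 7 with k = 2 yields a stego value of 8 (error 1) instead of the plain-LSB value 4 (error 3), while the receiver still recovers the secret bits from the two lowest bits of the stego pixel.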

  3. Structure optimization and simulation analysis of the quartz micromachined gyroscope

    Xuezhong Wu

    2014-02-01

    Full Text Available Structure optimization and simulation analysis of the quartz micromachined gyroscope are reported in this paper. The relationships between the structure parameters and the frequencies of the working modes were analysed by finite element analysis. The structure parameters of the quartz micromachined gyroscope were optimized to reduce the difference between the frequencies of the drive mode and the sense mode. The simulation results were verified by testing the prototype gyroscope, which was fabricated by micro-electromechanical systems (MEMS) technology. The frequencies of the drive mode and the sense mode can thus be matched through structure optimization and simulation analysis, which is helpful in the design of a high-sensitivity quartz micromachined gyroscope.

  4. A Thermodynamic Library for Simulation and Optimization of Dynamic Processes

    Ritschel, Tobias Kasper Skovborg; Gaspar, Jozsef; Jørgensen, John Bagterp

    2017-01-01

    Process system tools, such as simulation and optimization of dynamic systems, are widely used in the process industries for the development of operational strategies and control for process systems. These tools rely on thermodynamic models, and many thermodynamic models have been developed for different compounds and mixtures. However, rigorous thermodynamic models are generally computationally intensive and not available as open-source libraries for process simulation and optimization. In this paper, we describe the application of a novel open-source rigorous thermodynamic library, ThermoLib, which is designed for dynamic simulation and optimization of vapor-liquid processes. ThermoLib is implemented in Matlab and C and uses cubic equations of state to compute vapor and liquid phase thermodynamic properties. The novelty of ThermoLib is that it provides analytical first and second order derivatives...

  5. Optimization and Simulation in the Danish Fishing Industry

    Jensen, Toke Koldborg; Clausen, Jens

    We consider the Danish fishing industry from a holistic viewpoint, and give a review of the main aspects and the important actors. We also consider supply chain theory, and identify, both theoretically and based on other application areas, e.g. other fresh food industries, how optimization and simulation can be applied in a holistic modeling framework. Using the insights into supply chain theory and the Danish fishing industry, we investigate how the fishing industry as a whole may benefit from the formulation and use of mathematical optimization and simulation models. Finally, an appendix...

  6. Response Adjusted for Days of Antibiotic Risk (RADAR): evaluation of a novel method to compare strategies to optimize antibiotic use.

    Schweitzer, V A; van Smeden, M; Postma, D F; Oosterheert, J J; Bonten, M J M; van Werkhoven, C H

    2017-12-01

    The Response Adjusted for Days of Antibiotic Risk (RADAR) statistic was proposed to improve the efficiency of trials comparing antibiotic stewardship strategies to optimize antibiotic use. We studied the behaviour of RADAR in a non-inferiority trial in which a β-lactam monotherapy strategy (n = 656) was non-inferior to fluoroquinolone monotherapy (n = 888) for patients with moderately severe community-acquired pneumonia. Patients were ranked according to clinical outcome, using five or eight categories, and antibiotic use. RADAR was calculated as the probability that the β-lactam group had a more favourable ranking than the fluoroquinolone group. To investigate the sensitivity of RADAR to detrimental clinical outcome we simulated increasing rates of 90-day mortality in the β-lactam group and performed the RADAR and non-inferiority analysis. The RADAR of the β-lactam group compared with the fluoroquinolone group was 60.3% (95% CI 57.9%-62.7%) using five and 58.4% (95% CI 56.0%-60.9%) using eight clinical outcome categories, all in favour of β-lactam. Sample sizes for RADAR were 38% (250/653) and 89% (580/653) of the non-inferiority sample size calculation, using five or eight clinical outcome categories, respectively. With simulated mortality rates, loss of non-inferiority of the β-lactam group occurred at a relative risk of 1.125 in the conventional analysis, whereas using RADAR the β-lactam group lost superiority at a relative risk of mortality of 1.25 and 1.5, with eight and five clinical outcome categories, respectively. RADAR favoured β-lactam over fluoroquinolone therapy for community-acquired pneumonia. Although RADAR required fewer patients than conventional non-inferiority analysis, the statistic was less sensitive to detrimental outcomes. Copyright © 2017 European Society of Clinical Microbiology and Infectious Diseases. Published by Elsevier Ltd. All rights reserved.
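
    The core of the RADAR idea, ranking patients by clinical outcome first and antibiotic exposure second and then comparing the two groups pairwise, can be sketched as a win-fraction statistic. This is a generic illustration with invented data, not the trial's exact ranking procedure.

```python
def radar(group_a, group_b):
    """Win-fraction estimate: the probability that a patient from group A
    has a more favourable (clinical outcome, antibiotic days) ranking than
    one from group B, with ties counted one half."""
    wins = ties = 0
    for a in group_a:
        for b in group_b:
            if a < b:        # lexicographic: outcome category first, then days
                wins += 1
            elif a == b:
                ties += 1
    return (wins + 0.5 * ties) / (len(group_a) * len(group_b))

# patients encoded as (outcome_category, days_of_antibiotics); lower is better
beta_lactam     = [(1, 5), (1, 7), (2, 5)]
fluoroquinolone = [(1, 6), (2, 7), (3, 5)]
score = radar(beta_lactam, fluoroquinolone)
```

    A score above 0.5 favours the first group, mirroring the 60.3% and 58.4% values reported above; note that coarse outcome categories make the statistic depend mostly on antibiotic days, which is the source of the reduced sensitivity to detrimental outcomes discussed in the abstract.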

  7. Terascale Optimal PDE Simulations (TOPS) Center

    Professor Olof B. Widlund

    2007-07-09

    Our work has focused on the development and analysis of domain decomposition algorithms for a variety of problems arising in continuum mechanics modeling. In particular, we have extended and analyzed FETI-DP and BDDC algorithms; these iterative solvers were first introduced and studied by Charbel Farhat and his collaborators, see [11, 45, 12], and by Clark Dohrmann of SANDIA, Albuquerque, see [43, 2, 1], respectively. These two closely related families of methods are of particular interest since they are used more extensively than other iterative substructuring methods to solve very large and difficult problems. Thus, the FETI algorithms are part of the SALINAS system developed by the SANDIA National Laboratories for very large scale computations, and as already noted, BDDC was first developed by a SANDIA scientist, Dr. Clark Dohrmann. The FETI algorithms are also making inroads in commercial engineering software systems. We also note that the analysis of these algorithms poses very real mathematical challenges. The success in developing this theory has, in several instances, led to significant improvements in the performance of these algorithms. A very desirable feature of these iterative substructuring and other domain decomposition algorithms is that they respect the memory hierarchy of modern parallel and distributed computing systems, which is essential for approaching peak floating point performance. The development of improved methods, together with more powerful computer systems, is making it possible to carry out simulations in three dimensions, with quite high resolution, relatively easily. This work is supported by high quality software systems, such as Argonne's PETSc library, which facilitates code development as well as the access to a variety of parallel and distributed computer systems. The success in finding scalable and robust domain decomposition algorithms for very large number of processors and very large finite element problems is, e

  8. Numerical simulation and optimization of nickel-hydrogen batteries

    Yu, Li-Jun; Qin, Ming-Jun; Zhu, Peng; Yang, Li

    2008-05-01

    A three-dimensional, transient numerical model of an individual pressure vessel (IPV) nickel-hydrogen battery has been developed based on energy conservation law, mechanisms of heat and mass transfer, and electrochemical reactions in the battery. The model, containing all components of a battery including the battery shell, was utilized to simulate the transient temperature of the battery, using computational fluid dynamics (CFD) technology. The comparison of the model prediction and experimental data shows a good agreement, which means that the present model can be used for the engineering design and parameter optimization of nickel-hydrogen batteries in aerospace power systems. Two kinds of optimization schemes were provided and evaluated by the simulated temperature field. Based on the model, the temperature simulation during five successive periods in a designed space battery was conducted and the simulation results meet the requirement of safe operation.

  9. Logic hybrid simulation-optimization algorithm for distillation design

    Caballero Suárez, José Antonio

    2014-01-01

    In this paper, we propose a novel algorithm for the rigorous design of distillation columns that integrates a process simulator in a generalized disjunctive programming formulation. The optimal distillation column, or column sequence, is obtained by selecting, for each column section, among a set of column sections with different number of theoretical trays. The selection of thermodynamic models, properties estimation etc., are all in the simulation environment. All the numerical issues relat...

  10. Optimization and Simulation in Drug Development - Review and Analysis

    Schjødt-Eriksen, Jens; Clausen, Jens

    2003-01-01

    We give a review of pharmaceutical R&D and the mathematical simulation and optimization methods used to support decision making within the pharmaceutical development process. The complex nature of drug development is pointed out through a description of the various phases of the pharmaceutical development process. A part of the paper is dedicated to the use of simulation techniques to support clinical trials. The paper ends with a section describing portfolio modelling methods in the context of the pharmaceutical industry.

  11. Stochastic search in structural optimization - Genetic algorithms and simulated annealing

    Hajela, Prabhat

    1993-01-01

    An account is given of illustrative applications of genetic algorithms and simulated annealing methods in structural optimization. The advantages of such stochastic search methods over traditional mathematical programming strategies are emphasized; it is noted that these methods offer a significantly higher probability of locating the global optimum in a multimodal design space. Both genetic search and simulated annealing can be effectively used in problems with a mix of continuous, discrete, and integer design variables.
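
    A minimal simulated-annealing sketch of the kind of stochastic search described above; the multimodal objective, the neighbourhood move, and the cooling schedule are illustrative choices, not the paper's:

```python
import math
import random

# Simulated annealing on a one-dimensional multimodal function: accept all
# downhill moves and occasionally accept uphill moves, with the acceptance
# probability shrinking as the "temperature" cools.

def objective(x):
    # Multimodal: global minimum at x = 0, local minima near other integers.
    return x * x + 10.0 * (1.0 - math.cos(2.0 * math.pi * x))

def anneal(x0, t_start=50.0, t_end=1e-3, alpha=0.99, seed=1):
    rng = random.Random(seed)
    x, fx = x0, objective(x0)
    best_x, best_f = x, fx
    t = t_start
    while t > t_end:
        cand = x + rng.gauss(0.0, 1.0)            # random neighbour
        fc = objective(cand)
        # Accept downhill always, uphill with Boltzmann probability.
        if fc < fx or rng.random() < math.exp((fx - fc) / t):
            x, fx = cand, fc
            if fx < best_f:
                best_x, best_f = x, fx
        t *= alpha                                # geometric cooling
    return best_x, best_f

best_x, best_f = anneal(x0=4.3)
```

A gradient-based method started at x0 = 4.3 would stall in a local basin; the stochastic search escapes it at high temperature, which is the advantage the abstract emphasizes.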

  12. Validation of Multibody Program to Optimize Simulated Trajectories II Parachute Simulation with Interacting Forces

    Raiszadeh, Behzad; Queen, Eric M.; Hotchko, Nathaniel J.

    2009-01-01

    A capability to simulate trajectories of multiple interacting rigid bodies has been developed, tested and validated. This capability uses the Program to Optimize Simulated Trajectories II (POST 2). The standard version of POST 2 allows trajectory simulation of multiple bodies without force interaction. In the current implementation, the force interaction between the parachute and the suspended bodies has been modeled using flexible lines, allowing accurate trajectory simulation of the individual bodies in flight. The POST 2 multibody capability is intended to be general purpose and applicable to any parachute entry trajectory simulation. This research paper explains the motivation for multibody parachute simulation, discusses implementation methods, and presents validation of this capability.


  14. Automatic CT simulation optimization for radiation therapy: A general strategy

    Li, Hua, E-mail: huli@radonc.wustl.edu; Chen, Hsin-Chen; Tan, Jun; Gay, Hiram; Michalski, Jeff M.; Mutic, Sasa [Department of Radiation Oncology, Washington University, St. Louis, Missouri 63110 (United States); Yu, Lifeng [Department of Radiology, Mayo Clinic, Rochester, Minnesota 55905 (United States); Anastasio, Mark A. [Department of Biomedical Engineering, Washington University, St. Louis, Missouri 63110 (United States); Low, Daniel A. [Department of Radiation Oncology, University of California Los Angeles, Los Angeles, California 90095 (United States)

    2014-03-15

    Purpose: In radiation therapy, x-ray computed tomography (CT) simulation protocol specifications should be driven by the treatment planning requirements in lieu of duplicating diagnostic CT screening protocols. The purpose of this study was to develop a general strategy that allows for automatically, prospectively, and objectively determining the optimal patient-specific CT simulation protocols based on radiation-therapy goals, namely, maintenance of contouring quality and integrity while minimizing patient CT simulation dose. Methods: The authors proposed a general prediction strategy that provides automatic optimal CT simulation protocol selection as a function of patient size and treatment planning task. The optimal protocol is the one that delivers the minimum dose required to provide a CT simulation scan that yields accurate contours. Accurate treatment plans depend on accurate contours in order to conform the dose to actual tumor and normal organ positions. An image quality index, defined to characterize how simulation scan quality affects contour delineation, was developed and used to benchmark the contouring accuracy and treatment plan quality within the prediction strategy. A clinical workflow was developed to select the optimal CT simulation protocols incorporating patient size, target delineation, and radiation dose efficiency. An experimental study using an anthropomorphic pelvis phantom with added-bolus layers was used to demonstrate how the proposed prediction strategy could be implemented and how the optimal CT simulation protocols could be selected for prostate cancer patients based on patient size and treatment planning task. Clinical IMRT prostate treatment plans for seven CT scans with varied image quality indices were separately optimized and compared to verify the trace of target and organ dosimetry coverage. Results: Based on the phantom study, the optimal image quality index for accurate manual prostate contouring was 4.4. 
The optimal tube ...
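
    The selection rule this record describes, deliver the minimum dose whose predicted image quality still supports accurate contouring, can be sketched as a simple lookup; the protocol table and its quality predictions are entirely hypothetical:

```python
# Among candidate scan protocols, pick the lowest-dose one whose predicted
# image quality index meets the contouring threshold (4.4 for manual
# prostate contouring in the study). The protocol table and the predicted
# quality values are invented for illustration.

PROTOCOLS = [  # (name, relative dose, predicted quality index, medium patient)
    ("80kV_lowmAs", 0.4, 2.9),
    ("100kV_midmAs", 0.7, 4.6),
    ("120kV_midmAs", 1.0, 5.8),
    ("140kV_highmAs", 1.6, 7.1),
]

def select_protocol(protocols, quality_threshold=4.4):
    """Return the minimum-dose protocol meeting the quality threshold."""
    feasible = [p for p in protocols if p[2] >= quality_threshold]
    if not feasible:
        raise ValueError("no protocol meets the quality threshold")
    return min(feasible, key=lambda p: p[1])

chosen = select_protocol(PROTOCOLS)
# chosen -> ("100kV_midmAs", 0.7, 4.6)
```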

  15. A regulatory adjustment process for the determination of the optimal percentage requirement in an electricity market with Tradable Green Certificates

    Currier, Kevin M.

    2013-01-01

    A system of Tradable Green Certificates (TGCs) is a market-based subsidy scheme designed to promote electricity generation from renewable energy sources such as wind power. Under a TGC system, the principal policy instrument is the “percentage requirement,” which stipulates the percentage of total electricity production (“green” plus “black”) that must be obtained from renewable sources. In this paper, we propose a regulatory adjustment process that a regulator can employ to determine the socially optimal percentage requirement, explicitly accounting for environmental damages resulting from black electricity generation. - Highlights: • A Tradable Green Certificate (TGC) system promotes energy production from renewable sources. • We consider an electricity oligopoly operated under a TGC system. • Welfare analysis must account for damages from “black” electricity production. • We characterize the welfare-maximizing (optimal) “percentage requirement.” • We present a regulatory adjustment process that computes the optimal percentage requirement iteratively.
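
    The iterative flavour of such a regulatory adjustment process can be sketched as a gradient-style update of the percentage requirement; the quadratic welfare function and its peak at a 30% requirement are purely hypothetical stand-ins for the paper's oligopoly model:

```python
# Sketch of an iterative regulatory adjustment: raise the percentage
# requirement while estimated marginal welfare is positive, lower it
# otherwise, until the welfare-maximizing requirement is reached.

def welfare(alpha):
    # Hypothetical social welfare, net of environmental damages, as a
    # function of the percentage requirement alpha in [0, 1].
    return -(alpha - 0.30) ** 2

def adjust_requirement(alpha0=0.05, step=0.02, gain=0.3, iters=200):
    alpha = alpha0
    for _ in range(iters):
        # Finite-difference estimate of marginal welfare at the current alpha.
        marginal = (welfare(alpha + step) - welfare(alpha - step)) / (2 * step)
        alpha = min(1.0, max(0.0, alpha + gain * marginal))
    return alpha

alpha_star = adjust_requirement()
# Converges to the welfare-maximizing requirement, here 0.30.
```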

  16. Simulation and OR (operations research) in combination for practical optimization

    van Dijk, N.; van der Sluis, E.; Haijema, R.; Al-Ibrahim, A.; van der Wal, J.; Kuhl, M.E.; Steiger, N.M.; Armstrong, F.B.; Joines, J.A.

    2005-01-01

    Should we pool capacities or not? This is a question that one can regularly be confronted with in operations and service management. It is a question that necessarily requires a combination of queueing (as OR discipline) and simulation (as evaluative tool) and further steps for optimization. It will ...
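
    The pooling question can be illustrated with a small queueing simulation; the arrival and service rates are illustrative, and the heap-based FCFS logic is a generic multi-server simulation, not the paper's model:

```python
import heapq
import random

# Should we pool capacities? Compare one of two separate M/M/1 queues
# against a single pooled M/M/2 queue with the same total demand and
# capacity. FCFS multi-server waits use a heap of server-free times.

def mean_wait(interarrivals, services, servers):
    free = [0.0] * servers            # times at which each server becomes free
    heapq.heapify(free)
    t, total_wait = 0.0, 0.0
    for ia, s in zip(interarrivals, services):
        t += ia                       # arrival time of this customer
        start = max(t, heapq.heappop(free))
        total_wait += start - t
        heapq.heappush(free, start + s)
    return total_wait / len(interarrivals)

rng = random.Random(42)
n, lam, mu = 20000, 0.8, 1.0          # per-stream arrival rate, service rate
ia1 = [rng.expovariate(lam) for _ in range(n)]
sv1 = [rng.expovariate(mu) for _ in range(n)]
ia2 = [rng.expovariate(2 * lam) for _ in range(n)]   # pooled stream: double rate
sv2 = [rng.expovariate(mu) for _ in range(n)]

w_separate = mean_wait(ia1, sv1, servers=1)   # one of the two identical M/M/1s
w_pooled = mean_wait(ia2, sv2, servers=2)     # pooled M/M/2
# Queueing theory predicts pooling roughly halves the mean wait here
# (about 4.0 vs about 1.8 time units at 80% utilization).
```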

  17. Simulation-based optimization for product and process design

    Driessen, L.

    2006-01-01

    The design of products and processes has gradually shifted from a purely physical process towards a process that heavily relies on computer simulations (virtual prototyping). To optimize this virtual design process in terms of speed and final product quality, statistical methods and mathematical ...

  18. Robust Optimization in Simulation : Taguchi and Krige Combined

    Dellino, G.; Kleijnen, Jack P.C.; Meloni, C.

    2009-01-01

    Optimization of simulated systems is the goal of many methods, but most methods assume known environments. We, however, develop a `robust' methodology that accounts for uncertain environments. Our methodology uses Taguchi's view of the uncertain world, but replaces his statistical techniques by ...

  19. Metamodel-based robust simulation-optimization : An overview

    Dellino, Gabriella; Meloni, Carlo; Kleijnen, J.P.C.

    2015-01-01

    Optimization of simulated systems is the goal of many methods, but most methods assume known environments. We, however, develop a "robust" methodology that accounts for uncertain environments. Our methodology uses Taguchi's view of the uncertain world but replaces his statistical techniques by ...

  20. Robust optimization in simulation : Taguchi and Krige combined

    Dellino, G.; Kleijnen, Jack P.C.; Meloni, C.

    2012-01-01

    Optimization of simulated systems is the goal of many methods, but most methods assume known environments. We, however, develop a “robust” methodology that accounts for uncertain environments. Our methodology uses Taguchi's view of the uncertain world but replaces his statistical techniques by ...
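
    The Taguchian idea shared by these robust simulation-optimization papers (evaluate each candidate decision under a crossed array of noise scenarios, then trade mean performance against variability) can be sketched as follows; a toy cost function is evaluated directly here instead of through the Kriging metamodels the papers use:

```python
import statistics

# Taguchi-style crossed array: evaluate every (decision, noise) pair and
# pick the decision with the best mean-plus-variability trade-off.

def cost(x, noise):
    # Illustrative simulated cost: x is the decision, noise the environment.
    return (x - 2.0) ** 2 + noise * x

decisions = [0.5 * i for i in range(9)]          # candidate settings 0.0 .. 4.0
noises = [-0.4, -0.2, 0.0, 0.2, 0.4]             # noise scenarios

def robust_score(x, weight=2.0):
    outcomes = [cost(x, n) for n in noises]
    # Penalize spread across the uncertain environment, not just the mean.
    return statistics.mean(outcomes) + weight * statistics.pstdev(outcomes)

robust_x = min(decisions, key=robust_score)
nominal_x = min(decisions, key=lambda x: cost(x, 0.0))
# The robust choice (1.5) accepts a slightly worse nominal cost than the
# nominal optimum (2.0) in exchange for less variability under noise.
```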

  1. Modelling, simulating and optimizing boiler heating surfaces and evaporator circuits

    Sørensen, K.; Condra, T.; Houbak, Niels

    2003-01-01

    A model for optimizing the dynamic performance of a boiler has been developed. Design variables related to the size of the boiler and its dynamic performance have been defined. The objective function to be optimized takes the weight of the boiler and its dynamic capability into account. As constraints for the optimization, a dynamic model for the boiler is applied. Furthermore, a function for the value of the dynamic performance is included in the model. The dynamic models for simulating boiler performance consist of a model for the flue gas side, a model for the evaporator circuit and a model for the drum. The dynamic model has been developed for the purpose of determining boiler material temperatures and the heat transfer from the flue gas side to the water/steam side, in order to simulate the circulation in the evaporator circuit and hereby the water level fluctuations in the drum. The dynamic model has been ...

  2. Simulation and optimization of an industrial PSA unit

    Barg C.

    2000-01-01

    Pressure Swing Adsorption (PSA) units have been used as a low-cost alternative to the usual gas separation processes. Their largest commercial application is in hydrogen purification systems. Several studies have addressed the simulation of pressure swing adsorption units, but there are only a few reports on the optimization of such processes. The objective of this study is to simulate and optimize an industrial PSA unit for hydrogen purification. This unit consists of six beds, each of which has three layers of different kinds of adsorbents. The main impurities are methane, carbon monoxide and hydrogen sulfide. The product stream has 99.99% purity in hydrogen, and the recovery is around 90%. A mathematical model for a commercial PSA unit is developed, and the cycle time and the pressure swing steps are optimized. All the features of complex commercial processes are considered.

  3. Applied simulation and optimization in logistics, industrial and aeronautical practice

    Mota, Idalia; Serrano, Daniel

    2015-01-01

    Presenting techniques, case studies and methodologies that combine simulation approaches with optimization techniques, this book provides solutions to common industrial problems in several fields, ranging from manufacturing and logistics to aeronautics, where the common denominator is the combination of simulation's flexibility with optimization's robustness. Providing readers with a comprehensive guide to tackling similar issues in industrial environments, this text explores novel ways to approach industrial problems through hybrid (simulation-optimization) approaches that benefit from the advantages of both paradigms, in order to solve important problems in the service industry, production processes and supply chains, such as scheduling, routing and resource allocation, among others.

  4. A Simulation Approach to Statistical Estimation of Multiperiod Optimal Portfolios

    Hiroshi Shiraishi

    2012-01-01

    This paper discusses a simulation-based method for solving discrete-time multiperiod portfolio choice problems under an AR(1) process. The method is applicable even if the distributions of the return processes are unknown. We first generate simulation sample paths of the random returns by using the AR bootstrap. Then, for each sample path and each investment time, we obtain an optimal portfolio estimator, which optimizes a constant relative risk aversion (CRRA) utility function. When an investor considers an optimal investment strategy with portfolio rebalancing, it is convenient to introduce a value function. The most important difference between single-period and multiperiod portfolio choice problems is that the value function is time dependent. Our method takes care of this time dependency by using bootstrapped sample paths. Numerical studies are provided to examine the validity of our method. The results show the necessity of accounting for the time dependency of the value function.
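
    A stripped-down sketch of the approach: fit an AR(1) model, bootstrap its residuals to generate return paths, and grid-search a portfolio weight that maximizes average CRRA utility of terminal wealth. The synthetic return history, the parameter values, and the single-constant-weight simplification (instead of the paper's full time-dependent value function) are all assumptions for illustration:

```python
import random

# AR(1) bootstrap for portfolio choice: fit r_t = c + phi*r_{t-1} + e_t by
# least squares, resample residuals to build return paths, then pick the
# risky-asset weight with the best average CRRA utility over those paths.

def fit_ar1(r):
    x, y = r[:-1], r[1:]
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    phi = (sum((a - mx) * (b - my) for a, b in zip(x, y))
           / sum((a - mx) ** 2 for a in x))
    c = my - phi * mx
    resid = [b - (c + phi * a) for a, b in zip(x, y)]
    return c, phi, resid

def bootstrap_paths(c, phi, resid, r_last, horizon, n_paths, rng):
    paths = []
    for _ in range(n_paths):
        r, path = r_last, []
        for _ in range(horizon):
            r = c + phi * r + rng.choice(resid)   # AR(1) step, bootstrapped shock
            path.append(r)
        paths.append(path)
    return paths

def crra(w, gamma=3.0):
    return w ** (1.0 - gamma) / (1.0 - gamma)

rng = random.Random(7)
history = [0.05]
for _ in range(300):                              # synthetic return history
    history.append(0.02 + 0.3 * history[-1] + rng.gauss(0.0, 0.05))

c, phi, resid = fit_ar1(history)
paths = bootstrap_paths(c, phi, resid, history[-1], horizon=12, n_paths=500, rng=rng)

def expected_utility(weight, rf=0.01):
    total = 0.0
    for path in paths:
        wealth = 1.0
        for r in path:                            # rebalance to 'weight' each period
            wealth *= 1.0 + weight * r + (1.0 - weight) * rf
        total += crra(wealth)
    return total / len(paths)

weights = [i / 10.0 for i in range(11)]
best_w = max(weights, key=expected_utility)
```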

  5. Qualitative and Quantitative Integrated Modeling for Stochastic Simulation and Optimization

    Xuefeng Yan

    2013-01-01

    The simulation and optimization of an actual physical system are usually constructed on the basis of stochastic models, which inherently have both qualitative and quantitative characteristics. Most modeling specifications and frameworks find it difficult to describe the qualitative model directly. In order to deal with expert knowledge, uncertain reasoning, and other qualitative information, a combined qualitative and quantitative modeling specification is proposed, based on a hierarchical model structure framework comprising the meta-meta model, the meta-model and the high-level model. A description logic system is defined for the formal definition and verification of the new modeling specification. A stochastic defense simulation was developed to illustrate how to model such a system and optimize the result. The results show that the proposed method can describe a complex system more comprehensively, and that the survival probability of the target is higher when qualitative models are introduced into the quantitative simulation.

  6. Simulation, optimization and control of a thermal cracking furnace

    Masoumi, M.E.; Sadrameli, S.M.; Towfighi, J.; Niaei, A.

    2006-01-01

    The ethylene production process is one of the most important aspects of a petrochemical plant, and the cracking furnace is the heart of the process. Since ethylene is one of the basic raw materials of the chemical industry, and the market situation of the feed, the product and the utilities is changing rapidly, optimal operation and control of the plant are important. A mathematical model describing the static and dynamic operation of a pilot-plant furnace was developed. The static simulation was used to predict the steady-state profiles of temperature, pressure and product yields. The dynamic simulation was used to predict the transient behaviour of the thermal cracking reactor. Using a dynamic programming technique, an optimal temperature profile was developed along the reactor. The performance of the temperature control loop was tested for different controller parameters and disturbances. The simulation results were validated experimentally in a computer-controlled pilot plant.

  7. An Indirect Simulation-Optimization Model for Determining Optimal TMDL Allocation under Uncertainty

    Feng Zhou

    2015-11-01

    An indirect simulation-optimization model framework with enhanced computational efficiency and risk-based decision-making capability was developed to determine optimal total maximum daily load (TMDL) allocations under uncertainty. To convert the traditional direct simulation-optimization model into our indirect equivalent model framework, we propose a two-step strategy: (1) application of interval regression equations derived by a Bayesian recursive regression tree (BRRT v2) algorithm, which approximates the original hydrodynamic and water-quality simulation models and accurately quantifies the inherent nonlinear relationship between nutrient load reductions and the credible interval of algal biomass at a given confidence level; and (2) incorporation of the calibrated interval regression equations into an uncertain optimization framework, which is further converted to our indirect equivalent framework by the enhanced-interval linear programming (EILP) method and provides approximately optimal solutions at various risk levels. The proposed strategy was applied to the Swift Creek Reservoir's nutrient TMDL allocation (Chesterfield County, VA) to identify the minimum nutrient load allocations required from eight sub-watersheds to ensure compliance with user-specified chlorophyll criteria. Our results indicated that the BRRT-EILP model could identify critical sub-watersheds faster than the traditional model and required smaller reductions in nutrient loadings than traditional stochastic simulation and trial-and-error (TAE) approaches. This suggests that the proposed framework performs better in optimal TMDL development than traditional simulation-optimization models, and it provides extreme and non-extreme tradeoff analysis under uncertainty for risk-based decision making.

  8. Modeling, simulation, and optimization of a front-end system for acetylene hydrogenation reactors

    R. Gobbo

    2004-12-01

    The modeling, simulation, and dynamic optimization of an industrial reaction system for acetylene hydrogenation are discussed in the present work. The process consists of three adiabatic fixed-bed reactors in series, with interstage cooling. These reactors are located after the compression and caustic scrubbing sections of an ethylene plant, characterizing a front-end system, in contrast to the tail-end system, where the reactors are placed after the de-ethanizer unit. The acetylene conversion and selectivity profiles for the reactors are optimized, taking into account catalyst deactivation and process constraints. A dynamic optimal temperature profile that maximizes ethylene production and meets product specifications is obtained by controlling the feed and intercooler temperatures. An industrial acetylene hydrogenation system is used to provide the data necessary to adjust kinetic and transport parameters and to validate the approach.

  9. Model-data fusion across ecosystems: from multisite optimizations to global simulations

    Kuppel, S.; Peylin, P.; Maignan, F.; Chevallier, F.; Kiely, G.; Montagnani, L.; Cescatti, A.

    2014-11-01

    This study uses a variational data assimilation framework to simultaneously constrain a global ecosystem model with eddy covariance measurements of daily net ecosystem exchange (NEE) and latent heat (LE) fluxes from a large number of sites grouped in seven plant functional types (PFTs). It is an attempt to bridge the gap between the numerous site-specific parameter optimization works found in the literature and the generic parameterization used by most land surface models within each PFT. The present multisite approach allows deriving PFT-generic sets of optimized parameters enhancing the agreement between measured and simulated fluxes at most of the sites considered, with performances often comparable to those of the corresponding site-specific optimizations. Besides reducing the PFT-averaged model-data root-mean-square difference (RMSD) and the associated daily output uncertainty, the optimization improves the simulated CO2 balance at tropical and temperate forest sites. The major site-level NEE adjustments at the seasonal scale are reduced amplitude in C3 grasslands and boreal forests, increased seasonality in temperate evergreen forests, and better model-data phasing in temperate deciduous broadleaf forests. Conversely, the poorer performance in tropical evergreen broadleaf forests points to deficiencies in the modelling of phenology and soil water stress for this PFT. An evaluation with data-oriented estimates of photosynthesis (GPP, gross primary productivity) and ecosystem respiration (Reco) rates indicates distinctly improved simulations of both gross fluxes. The multisite parameter sets are then tested against CO2 concentrations measured at 53 locations around the globe, showing significant adjustments of the modelled seasonality of atmospheric CO2 concentration, whose relevance seems PFT-dependent, along with an improved interannual variability. 
Lastly, a global-scale evaluation is conducted with remote-sensing NDVI (normalized difference vegetation index) ...
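
    The contrast between site-specific and multisite (PFT-generic) parameter optimization can be sketched with a one-parameter toy model; the linear flux model and the data below are invented for illustration:

```python
# One shared parameter fitted against all sites jointly (the "multisite"
# fit), compared with per-site fits. The model and data are toy stand-ins
# for the full variational assimilation system.

def model(p, x):
    return p * x                       # illustrative flux model

SITES = {                              # (driver x, observed flux) pairs per site
    "site_a": [(1.0, 2.1), (2.0, 4.0), (3.0, 6.2)],
    "site_b": [(1.0, 1.8), (2.0, 3.7), (3.0, 5.9)],
    "site_c": [(1.0, 2.2), (2.0, 4.3), (3.0, 6.1)],
}

def rmsd(p, data):
    return (sum((model(p, x) - y) ** 2 for x, y in data) / len(data)) ** 0.5

def fit(datasets):
    """Least-squares slope through the origin over the pooled data."""
    num = sum(x * y for data in datasets for x, y in data)
    den = sum(x * x for data in datasets for x, y in data)
    return num / den

p_multi = fit(list(SITES.values()))                  # one PFT-generic parameter
p_site = {name: fit([data]) for name, data in SITES.items()}

multi_rmsd = {n: rmsd(p_multi, d) for n, d in SITES.items()}
site_rmsd = {n: rmsd(p_site[n], d) for n, d in SITES.items()}
# Site-specific fits are never worse at their own site, but the single
# multisite parameter stays close, which is the trade-off the study quantifies.
```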

  10. Optimization of reconstruction algorithms using Monte Carlo simulation

    Hanson, K.M.

    1989-01-01

    A method for optimizing reconstruction algorithms is presented that is based on how well a specified task can be performed using the reconstructed images. Task performance is numerically assessed by a Monte Carlo simulation of the complete imaging process including the generation of scenes appropriate to the desired application, subsequent data taking, reconstruction, and performance of the stated task based on the final image. The use of this method is demonstrated through the optimization of the Algebraic Reconstruction Technique (ART), which reconstructs images from their projections by an iterative procedure. The optimization is accomplished by varying the relaxation factor employed in the updating procedure. In some of the imaging situations studied, it is found that the optimization of constrained ART, in which a non-negativity constraint is invoked, can vastly increase the detectability of objects. There is little improvement attained for unconstrained ART. The general method presented may be applied to the problem of designing neutron-diffraction spectrometers. (author)
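
    A toy version of the procedure, with ART written as cyclic Kaczmarz updates, a non-negativity constraint, and a scan over the relaxation factor; the 3x3 system stands in for the full projection-reconstruction problem:

```python
# ART (cyclic Kaczmarz) on a tiny consistent system A x = b, with an
# optional non-negativity constraint and a scan over the relaxation
# factor, echoing the paper's tuning of that factor.

A = [[1.0, 1.0, 0.0],
     [0.0, 1.0, 1.0],
     [1.0, 0.0, 1.0]]                # toy projection geometry
x_true = [2.0, 0.0, 3.0]
b = [sum(a * x for a, x in zip(row, x_true)) for row in A]

def art(A, b, relax, sweeps=50, nonneg=True):
    x = [0.0] * len(A[0])
    for _ in range(sweeps):
        for row, bi in zip(A, b):    # one update per projection ray
            norm = sum(a * a for a in row)
            corr = relax * (bi - sum(a * xi for a, xi in zip(row, x))) / norm
            x = [xi + corr * a for xi, a in zip(x, row)]
            if nonneg:               # constrained ART: clip negatives
                x = [max(0.0, xi) for xi in x]
    return x

def error(x):
    return max(abs(xi - ti) for xi, ti in zip(x, x_true))

relaxes = [0.25, 0.5, 1.0, 1.5]
errors = {r: error(art(A, b, r)) for r in relaxes}
best_relax = min(errors, key=errors.get)
```

In the paper the relaxation factor is judged by task performance on Monte Carlo-simulated scenes rather than by reconstruction error on a known phantom; the error scan above is a simplified proxy for that outer loop.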

  11. Simulation platform to model, optimize and design wind turbines

    Iov, F.; Hansen, A.D.; Soerensen, P.; Blaabjerg, F.

    2004-03-01

    This report gives a general overview of the results obtained in the project 'Electrical Design and Control. Simulation Platform to Model, Optimize and Design Wind Turbines'. The motivation for the research project is the ever-increasing penetration of wind energy into the power network. The project therefore has the main goal of creating a model database in different simulation tools for system optimization of wind turbine systems. Using this model database, a simultaneous optimization of the aerodynamic, mechanical, electrical and control systems over the whole range of wind speeds and grid characteristics can be achieved. The report is structured in six chapters. First, the background of the project, its main goals and the structure of the simulation platform are given. The main wind turbine topologies considered during the project are briefly presented. The simulation tools used in the platform, namely HAWC, DIgSILENT, Saber and Matlab/Simulink, are then described, with the focus on modelling and simulation time-scale aspects. The abilities of these tools are complementary, and together they can cover all the modelling aspects of wind turbines, e.g. mechanical loads, power quality, switching, control and grid faults; other simulation packages, e.g. PSCAD/EMTDC, can easily be added to the platform. New models and new control algorithms for wind turbine systems have been developed and tested in these tools. All these models are collected in dedicated libraries in Matlab/Simulink as well as in Saber. Simulation results from the considered tools are presented for MW wind turbines, focusing on fixed-speed and variable-speed/pitch wind turbines. A good agreement with the real behaviour of these systems is obtained for each simulation tool. These models can easily be extended to model different kinds of wind turbines or large wind ...

  12. A Simulation Framework for Optimal Energy Storage Sizing

    Carlos Suazo-Martínez

    2014-05-01

    Despite the increasing interest in Energy Storage Systems (ESS), quantification of their technical and economic benefits remains a challenge. To assess the use of ESS, a simulation approach for optimal ESS sizing is presented. The algorithm is based on an adapted unit commitment, including ESS operational constraints, and the use of high-performance computing (HPC). Multiple short-term simulations are carried out within a multiple-year horizon. The evaluation is performed for Chile's Northern Interconnected Power System (SING). The authors show that a single-year evaluation can lead to sub-optimal results when determining the optimal ESS size; hence, it is advisable to perform long-term evaluations of ESS. Additionally, the importance of detailed simulation for adequate assessment of ESS contributions, and for fully capturing storage value, is discussed. Furthermore, the robustness of the optimal sizing approach is evaluated by means of sensitivity analyses. The results suggest that regulatory frameworks should recognize multiple value streams from storage in order to encourage greater ESS integration.
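
    The sizing logic can be sketched with a much simpler dispatch simulation than an adapted unit commitment; the load shape, prices, efficiency, and capital cost below are illustrative assumptions, not SING data:

```python
import math

# Simulation-based ESS sizing sketch: simulate a year of hourly
# peak-shaving dispatch for each candidate capacity, value the avoided
# peak-price generation, and compare savings with capital cost over the
# chosen evaluation horizon.

def hourly_load(h):
    return 100.0 + 40.0 * math.sin(2.0 * math.pi * ((h % 24) - 6) / 24.0)

def annual_savings(cap_mwh, peak_price=80.0, offpeak_price=30.0, eff=0.9):
    power_mw = cap_mwh / 4.0           # assumed 4-hour storage
    soc, savings = 0.0, 0.0
    for h in range(24 * 365):
        if hourly_load(h) < 100.0:     # off-peak: buy energy and charge
            e = min(power_mw, cap_mwh - soc)
            soc += e * eff
            savings -= e * offpeak_price
        else:                          # peak: discharge, displacing costly supply
            e = min(power_mw, soc)
            soc -= e
            savings += e * peak_price
    return savings

def net_benefit(cap_mwh, years, capex_per_mwh=100_000.0):
    return years * annual_savings(cap_mwh) - capex_per_mwh * cap_mwh

sizes = [0.0, 10.0, 20.0, 40.0, 80.0]
one_year_best = max(sizes, key=lambda c: net_benefit(c, years=1))
ten_year_best = max(sizes, key=lambda c: net_benefit(c, years=10))
# Echoing the paper's point: the single-year evaluation selects no storage
# at all, while the ten-year horizon justifies a positive size.
```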

  13. Fuzzy Simulation-Optimization Model for Waste Load Allocation

    Motahhare Saadatpour

    2006-01-01

    This paper presents simulation-optimization models for waste load allocation from multiple point sources that include uncertainty due to the vagueness of the parameters and goals. The model employs fuzzy sets with appropriate membership functions to deal with these uncertainties. The fuzzy waste load allocation model (FWLAM) incorporates QUAL2E as the water quality simulation model and a Genetic Algorithm (GA) as the optimization tool to find the optimal combination of fraction removal levels for the dischargers and the pollution control agency (PCA). Penalty functions are employed to control violations in the system. The results demonstrate that the goal of the PCA to achieve the best water quality and the goal of the dischargers to use the full assimilative capacity of the river cannot both be satisfied completely, and a compromise solution between these goals is provided. The fuzzy optimization model with a genetic algorithm has been applied to a hypothetical problem, and the results demonstrate very suitable convergence of the proposed optimization algorithm to the global optimum.
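
    The max-min fuzzy formulation can be sketched as follows; the membership functions and the plain random search (standing in for QUAL2E plus the GA) are illustrative assumptions:

```python
import random

# Max-min fuzzy waste load allocation, sketched: the PCA's satisfaction
# rises with the fraction of waste removed, each discharger's falls with
# it, and we search for removal levels maximizing the smallest membership.

def mu_pca(removals):
    # PCA satisfaction grows with the average removal fraction.
    avg = sum(removals) / len(removals)
    return max(0.0, min(1.0, (avg - 0.3) / 0.6))     # 0 at 30%, 1 at 90%

def mu_discharger(r):
    # Each discharger prefers using the assimilative capacity (low removal).
    return max(0.0, min(1.0, (0.9 - r) / 0.6))       # 1 at 30%, 0 at 90%

def overall(removals):
    return min([mu_pca(removals)] + [mu_discharger(r) for r in removals])

rng = random.Random(3)
best, best_lambda = None, -1.0
for _ in range(20000):                               # random search over 3 sources
    cand = [rng.uniform(0.3, 0.9) for _ in range(3)]
    lam = overall(cand)
    if lam > best_lambda:
        best, best_lambda = cand, lam
# The compromise sits near the crossing point of the two membership
# shapes (around 60% removal), where neither goal is fully satisfied.
```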

  14. When teams shift among processes: insights from simulation and optimization.

    Kennedy, Deanna M; McComb, Sara A

    2014-09-01

    This article introduces process shifts to study the temporal interplay among transition and action processes espoused in the recurring phase model proposed by Marks, Mathieu, and Zaccaro (2001). Process shifts are those points in time when teams complete a focal process and change to another process. By using team communication patterns to measure process shifts, this research explores (a) when teams shift among different transition processes and initiate action processes and (b) the potential of different interventions, such as communication directives, to manipulate process shift timing and order and, ultimately, team performance. Virtual experiments are employed to compare data from observed laboratory teams not receiving interventions, simulated teams receiving interventions, and optimal simulated teams generated using genetic algorithm procedures. Our results offer insights about the potential for different interventions to affect team performance. Moreover, certain interventions may promote discussions about key issues (e.g., tactical strategies) and facilitate shifting among transition processes in a manner that emulates optimal simulated teams' communication patterns. Thus, we contribute to theory regarding team processes in 2 important ways. First, we present process shifts as a way to explore the timing of when teams shift from transition to action processes. Second, we use virtual experimentation to identify those interventions with the greatest potential to affect performance by changing when teams shift among processes. Additionally, we employ computational methods including neural networks, simulation, and optimization, thereby demonstrating their applicability in conducting team research. PsycINFO Database Record (c) 2014 APA, all rights reserved.

  15. Simulation study on heterogeneous variance adjustment for observations with different measurement error variance

    Pitkänen, Timo; Mäntysaari, Esa A; Nielsen, Ulrik Sander

    2013-01-01

    The Nordic Holstein yield evaluation model describes all available milk, protein and fat test-day yields from Denmark, Finland and Sweden. In its current form, all variance components are estimated from observations recorded under conventional milking systems, and the model for heterogeneity of variance correction is developed for the same observations. As automated milking systems are becoming more popular, the current evaluation model needs to be enhanced to account for the different measurement error variances of observations from automated milking systems. In this simulation study, different models and different approaches to account for heterogeneous variance when observations have different measurement error variances were investigated. Based on the results, we propose to upgrade the currently applied models and to calibrate the heterogeneous variance adjustment method to yield the same genetic ...

  16. Optimization of the particle pusher in a diode simulation code

    Theimer, M.M.; Quintenz, J.P.

    1979-09-01

    The particle pusher in Sandia's particle-in-cell diode simulation code has been rewritten to reduce the run time of a typical simulation. The resulting new version of the code has been found to run up to three times as fast as the original, with comparable accuracy. The cost of this optimization was an increase in storage requirements of about 15%. The new version has also been written to run efficiently on a CRAY-1 computing system. The steps taken to achieve this reduced run time are described, and various test cases are detailed.
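
    The report gives no code, but the general flavour of optimizing a particle pusher (hoisting loop-invariant factors out of the per-particle update at the price of a little extra storage, much as the rewrite traded ~15% more storage for up to 3x less run time) can be sketched with a simple leapfrog push; the field and particle data are illustrative:

```python
# Leapfrog (kick-drift) push for a batch of particles in a uniform field E.
# The loop-invariant acceleration kick q/m * E * dt is computed once,
# outside the per-step, per-particle loop.

def push(xs, vs, e_field, q_over_m, dt, steps):
    """Advance particle positions and velocities; returns updated lists."""
    a_dt = q_over_m * e_field * dt     # hoisted loop-invariant velocity kick
    for _ in range(steps):
        vs = [v + a_dt for v in vs]                # velocity update (kick)
        xs = [x + v * dt for x, v in zip(xs, vs)]  # position update (drift)
    return xs, vs

xs0 = [0.0, 1.0, 2.0]
vs0 = [0.0, 0.0, 0.0]
xs, vs = push(xs0, vs0, e_field=2.0, q_over_m=0.5, dt=0.1, steps=100)
# With constant acceleration a = q/m * E = 1.0, each velocity ends near
# a * t = 10.0 after 100 steps of dt = 0.1.
```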

  17. Optimal Results and Numerical Simulations for Flow Shop Scheduling Problems

    Tao Ren

    2012-01-01

    This paper considers the m-machine flow shop problem with two objectives: makespan with release dates, and total quadratic completion time. For Fm|rj|Cmax, we prove the asymptotic optimality of any dense schedule when the problem scale is large enough. For Fm||ΣCj², an improvement strategy with local search is presented to improve on the performance of the classical SPT heuristic. At the end of the paper, simulations show the effectiveness of the improvement strategy.
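
    A small illustration of the improvement strategy: build the SPT order, then apply pairwise-swap local search on total squared completion time; the 4-job, 2-machine instance is invented:

```python
import itertools

# SPT seed plus pairwise-swap local search for total squared completion
# time on a tiny permutation flow shop.

PT = [  # processing times PT[job][machine]: 4 jobs on 2 machines
    [3, 2],
    [1, 4],
    [2, 1],
    [4, 3],
]

def total_sq_completion(seq, pt=PT):
    m = len(pt[0])
    finish = [0.0] * m                 # latest completion time on each machine
    total = 0.0
    for j in seq:
        for k in range(m):             # the job flows through machines in order
            start = max(finish[k], finish[k - 1] if k else 0.0)
            finish[k] = start + pt[j][k]
        total += finish[m - 1] ** 2
    return total

def spt_order(pt=PT):
    return sorted(range(len(pt)), key=lambda j: sum(pt[j]))

def local_search(seq):
    seq, best = list(seq), total_sq_completion(seq)
    improved = True
    while improved:
        improved = False
        for i, j in itertools.combinations(range(len(seq)), 2):
            seq[i], seq[j] = seq[j], seq[i]       # try a swap
            val = total_sq_completion(seq)
            if val < best:
                best, improved = val, True
            else:
                seq[i], seq[j] = seq[j], seq[i]   # undo
    return seq, best

spt_seq = spt_order()
spt_val = total_sq_completion(spt_seq)            # 375 on this instance
ls_seq, ls_val = local_search(spt_seq)            # improves to 294
# Local search never worsens the SPT value, so ls_val <= spt_val.
```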

  18. Simulation based optimization on automated fibre placement process

    Lei, Shi

    2018-02-01

    In this paper, a software-simulation-based method (using Autodesk TruPlan & TruFiber) is proposed to optimize the automated fibre placement (AFP) process. Different types of manufacturability analysis are introduced to predict potential defects. Advanced fibre path generation algorithms are compared with respect to geometrically different parts. Major manufacturing data are taken into consideration prior to tool-path generation to achieve a high manufacturing success rate.

  19. D1+ Simulator: A cost and risk optimized approach to nuclear power plant simulator modernization

    Wischert, W.

    2006-01-01

    D1-Simulator has been operated by Kraftwerks-Simulator-Gesellschaft (KSG) and Gesellschaft für Simulatorschulung (GfS) at the Simulator Centre in Essen since 1977. The full-scope control room training simulator, used for Kernkraftwerk Biblis (KWB), is based on a PDP-11 hardware platform and is mainly programmed in ASSEMBLER language. The simulator has maintained a continuously high availability throughout the years thanks to specialized hardware and software support from the KSG maintenance team. Nevertheless, D1-Simulator increasingly reveals limitations with respect to computer capacity and spares, and suffers progressively from the non-availability of hardware replacement materials. In order to ensure long-term maintainability within the framework of the consensus on nuclear energy, a two-year refurbishing program focusing on quality and budgetary aspects has been launched by KWB. The so-called D1+ Simulator project is based on the re-use of validated data from existing simulators. Allowing for flexible project management methods, the project outlines a cost- and risk-optimized approach to Nuclear Power Plant (NPP) simulator modernization. The D1+ Simulator is being built by KSG/GfS in close collaboration with KWB and the simulator vendor THALES by re-using a modern hardware and software development environment from the D56 Simulator, used by Kernkraftwerk Obrigheim (KWO) before its decommissioning in 2005. The simulator project, launched in 2004, is expected to be completed by the end of 2006. (author)

  20. Optimal allocation of testing resources for statistical simulations

    Quintana, Carolina; Millwater, Harry R.; Singh, Gulshan; Golden, Patrick

    2015-07-01

    Statistical estimates from simulation involve uncertainty caused by the variability in the input random variables due to limited data. Allocating resources to obtain more experimental data on the input variables, to better characterize their probability distributions, can reduce the variance of statistical estimates. The proposed methodology determines the optimal number of additional experiments required to minimize the variance of the output moments given single or multiple constraints. The method uses the multivariate t-distribution and the Wishart distribution to generate realizations of the population mean and covariance of the input variables, respectively, given an amount of available data. The method handles both independent and correlated random variables. A particle swarm method is used for the optimization. The optimal number of additional experiments per variable depends on the number and variance of the initial data, the influence of the variable on the output function, and the cost of each additional experiment. The methodology is demonstrated using a fretting fatigue example.
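The sampling step described in the abstract can be sketched roughly as follows (the paper's exact parameterization is not given; the conjugate-style scheme below, drawing the covariance from an inverse-Wishart and the mean from a multivariate t, is an assumption):

```python
import numpy as np
from scipy.stats import invwishart, multivariate_t

def sample_population_params(data):
    """Draw one plausible (mean, covariance) of the population given
    limited data. Parameterization is an assumption: covariance from an
    inverse-Wishart, mean from a multivariate t centred on the sample mean."""
    data = np.asarray(data)
    n, d = data.shape                       # requires n > d observations
    xbar = data.mean(axis=0)
    S = np.cov(data, rowvar=False)          # sample covariance
    cov = invwishart(df=n - 1, scale=(n - 1) * S).rvs()
    mean = multivariate_t(loc=xbar, shape=S / n, df=n - 1).rvs()
    return mean, cov
```

Repeating this draw many times propagates the "limited data" uncertainty of the inputs through to the output moments, which is what the resource-allocation optimization then tries to shrink.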

  1. spsann - optimization of sample patterns using spatial simulated annealing

    Samuel-Rosa, Alessandro; Heuvelink, Gerard; Vasques, Gustavo; Anjos, Lúcia

    2015-04-01

    There are many algorithms and computer programs to optimize sample patterns, some private and others publicly available. A few have only been presented in scientific articles and textbooks. This dispersion and somewhat poor availability holds back their wider adoption and further development. We introduce spsann, a new R package for the optimization of sample patterns using spatial simulated annealing. R is the most popular environment for data processing and analysis. Spatial simulated annealing is a well-known method in widespread use for solving optimization problems in the soil and geo-sciences, mainly due to its robustness against local optima and its ease of implementation. spsann offers many optimization criteria for sampling for variogram estimation (number of points or point-pairs per lag distance class - PPL), trend estimation (association/correlation and marginal distribution of the covariates - ACDC), and spatial interpolation (mean squared shortest distance - MSSD). spsann also includes the mean or maximum universal kriging variance (MUKV) as an optimization criterion, which is used when the model of spatial variation is known. PPL, ACDC and MSSD are combined (PAN) for sampling when the model of spatial variation is unknown. spsann solves this multi-objective optimization problem by scaling the objective function values using their maximum absolute value or the mean value computed over 1000 random samples; scaled values are aggregated using the weighted-sum method. A graphical display allows the user to follow how the sample pattern is perturbed during the optimization, as well as the evolution of its energy state. It is possible to start by perturbing many points and exponentially reduce the number of perturbed points. The maximum perturbation distance decreases linearly with the number of iterations, and the acceptance probability decreases exponentially with the number of iterations.
R is memory hungry and spatial simulated annealing is a
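The annealing mechanics described above (a perturbation distance that shrinks with the iterations, a decaying acceptance probability, best-state tracking) can be sketched for the MSSD criterion as follows (a toy illustration in Python rather than R; function names, the unit-square domain and the cooling constants are assumptions):

```python
import math, random

def mssd(samples, grid):
    """Mean squared shortest distance from every grid node to its nearest sample."""
    return sum(min((gx - sx) ** 2 + (gy - sy) ** 2 for sx, sy in samples)
               for gx, gy in grid) / len(grid)

def spatial_sa(samples, grid, n_iter=2000, t0=1.0, cooling=0.995,
               max_shift0=0.2, seed=0):
    rng = random.Random(seed)
    cur = list(samples)
    e_cur = mssd(cur, grid)
    best, e_best = list(cur), e_cur
    t = t0
    for it in range(n_iter):
        # maximum perturbation distance shrinks linearly with the iterations
        max_shift = max_shift0 * (1 - it / n_iter)
        i = rng.randrange(len(cur))
        x, y = cur[i]
        cand = list(cur)
        cand[i] = (min(1.0, max(0.0, x + rng.uniform(-max_shift, max_shift))),
                   min(1.0, max(0.0, y + rng.uniform(-max_shift, max_shift))))
        e_cand = mssd(cand, grid)
        # Metropolis rule: always keep improvements, sometimes keep worsenings
        if e_cand < e_cur or rng.random() < math.exp(-(e_cand - e_cur) / t):
            cur, e_cur = cand, e_cand
            if e_cur < e_best:
                best, e_best = list(cur), e_cur
        t *= cooling          # temperature (hence acceptance) decays exponentially
    return best, e_best
```

Because the best-so-far state is tracked separately, the returned pattern is never worse than the starting one under the chosen criterion.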

  2. Exploiting Expert Knowledge to Enhance Simulation-based Optimization of Environmental Remediation Systems

    Reslink, C. F.; Matott, L. S.

    2012-12-01

    Designing cost-effective systems to safeguard national water supplies from contaminated sites is often aided by simulation-based optimization - where a flow or transport model is linked with an "off-the-shelf" global optimization search algorithm. However, achieving good performance from these types of optimizers within a reasonable computational budget has proven to be difficult. Therefore, this research seeks to boost optimization efficiency by augmenting search procedures with non-traditional information, such as site-specific knowledge and practitioner rules-of-thumb. An example application involving pump-and-treat optimization is presented in which a series of extraction wells are to be installed to intercept pollutants at a contaminated site in Billings, Montana. Selected heuristic algorithms (e.g. Genetic Algorithm) are interfaced with a rules engine that makes inline adjustments to the well locations of candidate pump-and-treat designs. If necessary, the rules engine modifies a given pump-and-treat design so that: (1) wells are placed within plume boundaries; and (2) well placement is biased toward areas where, if left untreated, the plume is predicted to spread most rapidly. Results suggest that incorporating this kind of expert knowledge can significantly increase the search efficiency of many popular global optimizers.
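The two repair rules described above can be sketched on a simple grid as follows (a hypothetical illustration; the actual rules engine, the site data, and all names here are made up):

```python
def repair_design(wells, plume_cells, spread_rate):
    """Rules-engine-style repair of candidate well locations (a sketch).
    plume_cells: set of (row, col) grid cells inside the plume boundary.
    spread_rate: maps cells to a predicted plume spreading rate."""
    repaired = []
    for w in wells:
        if w not in plume_cells:
            # Rule 1: move wells outside the plume to the nearest plume cell
            w = min(plume_cells,
                    key=lambda c: (c[0] - w[0]) ** 2 + (c[1] - w[1]) ** 2)
        # Rule 2: bias placement toward fast-spreading areas by snapping to
        # the highest-spread-rate cell within a one-cell neighbourhood
        neighbours = [c for c in plume_cells
                      if abs(c[0] - w[0]) <= 1 and abs(c[1] - w[1]) <= 1]
        w = max(neighbours, key=lambda c: spread_rate.get(c, 0.0))
        repaired.append(w)
    return repaired
```

Applied inline to each candidate design before the flow/transport model is run, a repair step like this keeps the heuristic search inside the expert-sanctioned region of the design space.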

  3. Integrating Multibody Simulation and CFD: toward Complex Multidisciplinary Design Optimization

    Pieri, Stefano; Poloni, Carlo; Mühlmeier, Martin

    This paper describes the use of integrated multidisciplinary analysis and optimization of a race car model on a predefined circuit. The objective is the definition of the most efficient geometric configuration that can guarantee the lowest lap time. In order to carry out this study it has been necessary to interface the design optimization software modeFRONTIER with the following packages: CATIA v5, a three-dimensional CAD package, used for the definition of the parametric geometry; A.D.A.M.S./Motorsport, a multi-body dynamic simulation package; IcemCFD, a mesh generator, for the automatic generation of the CFD grid; and CFX, a Navier-Stokes code, for the prediction of the fluid-dynamic forces. The process integration makes it possible to compute, for each geometrical configuration, a set of aerodynamic coefficients that are then used in the multibody simulation for the computation of the lap time. Finally, an automatic optimization procedure is started and the lap time is minimized. The whole process is executed on a Linux cluster running CFD simulations in parallel.

  4. Multiobjective generalized extremal optimization algorithm for simulation of daylight illuminants

    Kumar, Srividya Ravindra; Kurian, Ciji Pearl; Gomes-Borges, Marcos Eduardo

    2017-10-01

    Daylight illuminants are widely used as references for color quality testing and optical vision testing applications. Presently used daylight simulators make use of fluorescent bulbs that are not tunable and occupy more space inside the quality testing chambers. By designing a spectrally tunable LED light source with an optimal number of LEDs, cost, space, and energy can be saved. This paper describes an application of the generalized extremal optimization (GEO) algorithm for selecting the appropriate quantity and quality of LEDs that compose the light source. The multiobjective approach of this algorithm seeks the best spectral simulation with minimum fitness error with respect to the target spectrum, a correlated color temperature (CCT) equal to that of the target spectrum, a high color rendering index (CRI), and the luminous flux required for testing applications. GEO is a global search algorithm based on phenomena of natural evolution and is especially designed for complex optimization problems. Several simulations have been conducted to validate the performance of the algorithm. The methodology applied to model the LEDs, together with the theoretical basis for the CCT and CRI calculations, is presented in this paper. A comparative analysis of the results of the M-GEO evolutionary algorithm against the conventional deterministic Levenberg-Marquardt algorithm is also presented.

  5. Simulated Annealing-Based Krill Herd Algorithm for Global Optimization

    Gai-Ge Wang

    2013-01-01

    Recently, Gandomi and Alavi proposed a novel swarm-intelligence method, called krill herd (KH), for global optimization. To enhance the performance of the KH method, in this paper a new improved meta-heuristic, simulated annealing-based krill herd (SKH), is proposed for optimization tasks. A new krill selecting (KS) operator is used to refine krill behavior when updating each krill's position, so as to enhance reliability and robustness in dealing with optimization problems. The introduced KS operator combines a greedy strategy with acceptance of a few not-so-good solutions with a low probability, as originally used in simulated annealing (SA). In addition, a kind of elitism scheme is used to save the best individuals in the population during the krill updating process. The merits of these improvements are verified on fourteen standard benchmark functions, and experimental results show that, in most cases, the performance of the improved meta-heuristic SKH method is superior to, or at least highly competitive with, the standard KH and other optimization methods.
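The KS operator's combination of greedy acceptance with SA-style probabilistic acceptance can be sketched as follows (an illustration of the idea only, not the authors' exact formulation; the function names are assumptions):

```python
import math, random

def krill_select(old_pos, new_pos, fitness, temperature, rng=random):
    """KS-operator-style update for a minimization problem: keep an improved
    position greedily; accept a worse one only with a small,
    temperature-controlled probability, as in simulated annealing."""
    f_old, f_new = fitness(old_pos), fitness(new_pos)
    if f_new <= f_old:                                # greedy: improvement kept
        return new_pos
    p = math.exp(-(f_new - f_old) / temperature)      # SA-style acceptance
    return new_pos if rng.random() < p else old_pos
```

At high temperature the operator behaves like plain KH movement; as the temperature drops it becomes purely greedy, which is what gives the hybrid its robustness against premature convergence.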

  6. QCAD simulation and optimization of semiconductor double quantum dots

    Nielsen, Erik; Gao, Xujiao; Kalashnikova, Irina; Muller, Richard Partain; Salinger, Andrew Gerhard; Young, Ralph Watson

    2013-12-01

    We present the Quantum Computer Aided Design (QCAD) simulator that targets the modeling of quantum devices, particularly silicon double quantum dots (DQDs) developed for quantum qubits. The simulator has three differentiating features: (i) its core contains nonlinear Poisson, effective mass Schrodinger, and Configuration Interaction solvers that have massively parallel capability for high simulation throughput, and can be run individually or combined self-consistently for 1D/2D/3D quantum devices; (ii) the core solvers show superior convergence even at near-zero-Kelvin temperatures, which is critical for modeling quantum computing devices; (iii) it couples with the optimization engine Dakota, which enables optimization of gate voltages in DQDs for multiple desired targets. The Poisson solver includes Maxwell-Boltzmann and Fermi-Dirac statistics, supports Dirichlet, Neumann, interface charge, and Robin boundary conditions, and includes the effect of dopant incomplete ionization. The solver has shown robust nonlinear convergence even in the milli-Kelvin temperature range, and has been extensively used to quickly obtain the semiclassical electrostatic potential in DQD devices. The self-consistent Schrodinger-Poisson solver has achieved robust and monotonic convergence behavior for 1D/2D/3D quantum devices at very low temperatures by using a predictor-corrector iteration scheme. The QCAD simulator enables the calculation of dot-to-gate capacitances, and comparison with experiment and between solvers. It is observed that computed capacitances are in the right ballpark when compared to experiment, and that quantum confinement increases capacitance when the number of electrons is fixed in a quantum dot. In addition, the coupling of QCAD with Dakota makes it possible to rapidly identify which device layouts are most likely to lead to few-electron quantum dots. Very efficient QCAD simulations on a large number of fabricated and proposed Si DQDs have made it possible to provide fast feedback for design

  7. Optimal level of continuous positive airway pressure: auto-adjusting titration versus titration with a predictive equation.

    Choi, Ji Ho; Jun, Young Joon; Oh, Jeong In; Jung, Jong Yoon; Hwang, Gyu Ho; Kwon, Soon Young; Lee, Heung Man; Kim, Tae Hoon; Lee, Sang Hag; Lee, Seung Hoon

    2013-05-01

    The aims of the present study were twofold. We sought to compare two methods of titrating the level of continuous positive airway pressure (CPAP) - auto-adjusting titration and titration using a predictive equation - with full-night manual titration used as the benchmark. We also investigated the reliability of the two methods in patients with obstructive sleep apnea syndrome (OSAS). Twenty consecutive adult patients with OSAS who had successful, full-night manual and auto-adjusting CPAP titration participated in this study. The titration pressure level was calculated with a previously developed predictive equation based on body mass index and apnea-hypopnea index. The mean titration pressure levels obtained with the manual, auto-adjusting, and predictive equation methods were 9.0 +/- 3.6, 9.4 +/- 3.0, and 8.1 +/- 1.6 cm H2O, respectively. There was a significant difference in the concordance within the range of +/- 2 cm H2O (p = 0.019) between both the auto-adjusting titration and the titration using the predictive equation compared to the full-night manual titration. However, there was no significant difference in the concordance within the range of +/- 1 cm H2O (p > 0.999). When compared to full-night manual titration as the standard method, auto-adjusting titration appears to be more reliable than using a predictive equation for determining the optimal CPAP level in patients with OSAS.

  8. Optimization and simulation of MEMS rectilinear ion trap

    Huang Gang

    2015-04-01

    In this paper, the design of a MEMS rectilinear ion trap was optimized under simulated conditions. The electrode sizes of the MEMS rectilinear ion trap studied in this paper are at the micron scale. SIMION software was used to simulate the MEMS rectilinear ion trap with different sizes and different radio-frequency signals. The ion-trapping efficiencies of the ion trap under these different simulation conditions were obtained and compared to determine the performance of the MEMS rectilinear ion trap in different conditions and to find the optimum conditions. The simulation results show that for an ion trap at the micron scale or smaller, the optimized length-width ratio is 0.8, and a higher frequency of the radio-frequency signal is necessary to obtain a higher ion-trapping efficiency. These results provide guidance for the development of MEMS rectilinear ion traps, and have great application prospects in the research fields of the MEMS rectilinear ion trap and the MEMS mass spectrometer.

  9. Classification and optimization of training tools for NPP simulator

    Billoen, G. van

    1994-01-01

    The training cycle of nuclear power plant (NPP) operators has evolved during the last decade in parallel with the evolution of the training tools. The phases of the training cycle can be summarized as follows: (1) basic principle learning, (2) specific functional training, (3) full operating range training, and (4) detailed accident analyses. Progress in simulation technology and man/machine interface (MMI) gives training centers new opportunities to improve their training methods and their effectiveness in the transfer of knowledge. To take advantage of these new opportunities, a significant investment in simulation tools may be required. It is therefore important to propose an optimized approach when dealing with the overall equipment program for these training centers. An overall look at the tools offered on the international simulation market shows that there is a need for a systematic approach in this field. Classification of the different training tools needed for each phase of the training cycle is the basis for an optimized approach in terms of the hardware configuration and software specifications of the equipment to install in training centers. The 'Multi-Function Simulator' is one such approach. (orig.) (3 tabs.)

  10. Humans make near-optimal adjustments of control to initial body configuration in vertical squat jumping

    Bobbert, M.F.; Casius, L.J.R.; Kistemaker, D.A.

    2013-01-01

    We investigated adjustments of control to initial posture in squat jumping. Eleven male subjects jumped from three initial postures: preferred initial posture (PP), a posture in which the trunk was rotated 18° more backward (BP) and a posture in which it was rotated 15° more forward (FP) than in PP.

  11. Commitment and dispatch of heat and power units via affinely adjustable robust optimization

    Zugno, Marco; Morales González, Juan Miguel; Madsen, Henrik

    2016-01-01

    …compromising computational tractability. We perform an extensive numerical study based on data from the Copenhagen area in Denmark, which highlights important features of the proposed model. Firstly, we illustrate commitment and dispatch choices that increase conservativeness in the robust optimization… and conservativeness of the solution. Finally, we perform a thorough comparison with competing models based on deterministic optimization and stochastic programming. (C) 2016 Elsevier Ltd. All rights reserved.

  12. Conceptualizing and measuring illness self-concept: a comparison with self-esteem and optimism in predicting fibromyalgia adjustment.

    Morea, Jessica M; Friend, Ronald; Bennett, Robert M

    2008-12-01

    Illness self-concept (ISC), or the extent to which individuals are consumed by their illness, was theoretically described and evaluated with the Illness Self-Concept Scale (ISCS), a new 23-item scale, to predict adjustment in fibromyalgia. To establish convergent and discriminant validity, illness self-concept was compared to self-esteem and optimism in predicting health status, illness intrusiveness, depression, and life satisfaction. The ISCS demonstrated good reliability (alpha = .94; test-retest r = .80) and was a strong predictor of outcomes, even after controlling for optimism or self-esteem. The ISCS predicted unique variance in health-related outcomes; optimism and self-esteem did not, providing construct validation. Illness self-concept may play a significant role in coping with fibromyalgia and may prove useful in the evaluation of other chronic illnesses. (c) 2008 Wiley Periodicals, Inc.

  13. Simulation and optimal control of wind-farm boundary layers

    Meyers, Johan; Goit, Jay

    2014-05-01

    In large wind farms, the effect of turbine wakes and their interaction leads to a reduction in farm efficiency, with the power generated by turbines in a farm being lower than that of a lone-standing turbine by up to 50%. In very large wind farms or 'deep arrays', this efficiency loss is related to the interaction of the wind farm with the planetary boundary layer, leading to lower wind speeds at turbine level. Moreover, for these cases it has been demonstrated both in simulations and in wind-tunnel experiments that the wind-farm energy extraction is dominated by the vertical turbulent transport of kinetic energy from higher regions in the boundary layer towards the turbine level. In the current study, we investigate the use of optimal control techniques combined with Large-Eddy Simulations (LES) of wind-farm boundary-layer interaction to increase the total energy extraction in very large 'infinite' wind farms. We consider the individual wind turbines as flow actuators whose energy extraction can be dynamically regulated in time so as to optimally influence the turbulent flow field, maximizing the wind-farm power. For the simulation of wind-farm boundary layers we use large-eddy simulations in combination with actuator-disk and actuator-line representations of wind turbines. Simulations are performed in our in-house pseudo-spectral code SP-Wind, which combines Fourier-spectral discretization in the horizontal directions with a fourth-order finite-volume approach in the vertical direction. For the optimal control study, we consider the dynamic control of turbine-thrust coefficients in an actuator-disk model. They represent the effect of turbine blades that can actively pitch in time, changing the lift and drag coefficients of the turbine blades. Optimal model-predictive control (or optimal receding-horizon control) is used, where the model simply consists of the full LES equations, and the time horizon is approximately 280 seconds. The optimization is performed using a

  14. Linearity optimizations of analog ring resonator modulators through bias voltage adjustments

    Hosseinzadeh, Arash; Middlebrook, Christopher T.

    2018-03-01

    The linearity of ring resonator modulators (RRMs) in microwave photonic links is studied in terms of instantaneous bandwidth, fabrication tolerances, and operational bandwidth. A proposed bias-voltage adjustment method is shown to maximize the spur-free dynamic range (SFDR) at the instantaneous bandwidths required by microwave photonic link (MPL) applications, while also mitigating the effects of RRM fabrication tolerances. The proposed bias-voltage adjustment method shows an RRM SFDR improvement of ∼5.8 dB versus common Mach-Zehnder modulators at 500 MHz instantaneous bandwidth. An analysis of operational-bandwidth effects on SFDR shows that RRMs can be promising electro-optic modulators for MPL applications that require high operational frequencies within a limited bandwidth, such as radio-over-fiber 60 GHz wireless network access.

  15. Derivative-free optimization under uncertainty applied to costly simulators

    Pauwels, Benoit

    2016-01-01

    The modeling of complex phenomena encountered in industrial issues can lead to the study of numerical simulation codes. These simulators may require extensive execution time (from hours to days), involve uncertain parameters and even be intrinsically stochastic. Importantly, within the context of simulation-based optimization, the derivatives of the outputs with respect to the inputs may be nonexistent, inaccessible or too costly to approximate reasonably. This thesis is organized in four chapters. The first chapter discusses the state of the art in derivative-free optimization and uncertainty modeling. The next three chapters introduce three independent - although connected - contributions to the field of derivative-free optimization in the presence of uncertainty. The second chapter addresses the emulation of costly stochastic simulation codes - stochastic in the sense that simulations run with the same input parameters may lead to distinct outputs. This was the subject of the CODESTOCH project carried out at the Summer mathematical research center on scientific computing and its applications (CEMRACS) during the summer of 2013, together with two Ph.D. students from Electricity of France (EDF) and the Atomic Energy and Alternative Energies Commission (CEA). We designed four methods to build emulators for functions whose values are probability density functions. These methods were tested on two toy functions and applied to industrial simulation codes concerned with three complex phenomena: the spatial distribution of molecules in a hydrocarbon system (IFPEN), the life cycle of large electric transformers (EDF) and the repercussions of a hypothetical accident in a nuclear plant (CEA). Emulation was a preliminary step towards optimization in the first two cases. In the third chapter we consider the influence of inaccurate objective function evaluations on direct search - a classical derivative-free optimization method. In real settings inaccuracy may never vanish

  16. Towards information-optimal simulation of partial differential equations.

    Leike, Reimar H; Enßlin, Torsten A

    2018-03-01

    Most simulation schemes for partial differential equations (PDEs) focus on minimizing a simple error norm of a discretized version of a field. This paper takes a fundamentally different approach; the discretized field is interpreted as data providing information about a real physical field that is unknown. This information is sought to be conserved by the scheme as the field evolves in time. Such an information theoretic approach to simulation was pursued before by information field dynamics (IFD). In this paper we work out the theory of IFD for nonlinear PDEs in a noiseless Gaussian approximation. The result is an action that can be minimized to obtain an information-optimal simulation scheme. It can be brought into a closed form using field operators to calculate the appearing Gaussian integrals. The resulting simulation schemes are tested numerically in two instances for the Burgers equation. Their accuracy surpasses finite-difference schemes on the same resolution. The IFD scheme, however, has to be correctly informed on the subgrid correlation structure. In certain limiting cases we recover well-known simulation schemes like spectral Fourier-Galerkin methods. We discuss implications of the approximations made.

  17. Genetic algorithms and Monte Carlo simulation for optimal plant design

    Cantoni, M.; Marseguerra, M.; Zio, E.

    2000-01-01

    We present an approach to optimal plant design (choice of system layout and components) under conflicting safety and economic constraints, based upon the coupling of a Monte Carlo evaluation of plant operation with a genetic algorithm maximization procedure. The Monte Carlo simulation model provides a flexible tool which enables one to describe relevant aspects of plant design and operation, such as standby modes and deteriorating repairs, not easily captured by analytical models. The effects of deteriorating repairs are described by means of a modified Brown-Proschan model of imperfect repair, which accounts for the possibility of an increased proneness to failure of a component after a repair. The transitions of a component from standby to active, and vice versa, are simulated using a multiplicative correlation model. The genetic algorithm procedure is tasked with optimizing a profit function which accounts for the plant's safety and economic performance and which is evaluated, for each possible design, by the above Monte Carlo simulation. In order to avoid an overwhelming use of computer time, for each potential solution proposed by the genetic algorithm we perform only a few hundred Monte Carlo histories and then exploit the fact that, during the evolution of the genetic algorithm population, the fit chromosomes appear repeatedly many times, so that the results for the solutions of interest (i.e. the best ones) attain statistical significance
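The coupling described above, with only a few Monte Carlo histories per evaluation but statistics that accumulate as fit chromosomes reappear, can be sketched as follows (a toy profit model stands in for the plant simulation; all names and parameter values are assumptions):

```python
import random
from collections import defaultdict

def mc_profit(design, histories, rng):
    """Toy stand-in for the Monte Carlo plant simulation: a noisy profit
    estimate (the real model simulates failures, standby switching and
    imperfect repairs over each history)."""
    base = sum((i + 1) * b for i, b in enumerate(design))
    return sum(base + rng.gauss(0.0, 2.0) for _ in range(histories)) / histories

def ga_with_mc(n_bits=8, pop_size=20, generations=30, histories=50, seed=1):
    rng = random.Random(seed)
    stats = defaultdict(lambda: [0.0, 0])     # design -> [sum of estimates, count]
    pop = [tuple(rng.randint(0, 1) for _ in range(n_bits)) for _ in range(pop_size)]
    for _ in range(generations):
        for ind in pop:                       # few histories per evaluation...
            est = mc_profit(ind, histories, rng)
            s = stats[ind]
            s[0] += est                       # ...but estimates accumulate as
            s[1] += 1                         # fit designs reappear
        def fit(ind):
            s = stats[ind]
            return s[0] / s[1]
        pop.sort(key=fit, reverse=True)
        parents = pop[:pop_size // 2]         # truncation selection (elitist)
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, n_bits)    # one-point crossover
            child = list(a[:cut] + b[cut:])
            child[rng.randrange(n_bits)] ^= 1 # single-bit mutation
            children.append(tuple(child))
        pop = parents + children
    best = max(stats, key=lambda d: stats[d][0] / stats[d][1])
    return best, stats[best][0] / stats[best][1]
```

The key point of the coupling is in `stats`: each cheap, noisy evaluation is pooled with earlier ones for the same chromosome, so the estimates for the frequently revisited (i.e. best) designs become statistically significant without long runs for every candidate.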

  18. Modeling and Dynamic Simulation of the Adjust and Control System Mechanism for Reactor CAREM-25

    Larreteguy, A.E; Mazufri, C.M

    2000-01-01

    The adjust and control system mechanism, MSAC, is an advanced, and in some senses unique, hydromechanical device. The efforts in modeling this mechanism are aimed at: getting a deep understanding of the physical phenomena involved; identifying the set of parameters relevant to the dynamics of the system; allowing the numerical simulation of the system; predicting the behavior of the mechanism in conditions other than those obtainable within the range of operation of the experimental setup (CEM); and helping to define the design of the CAPEM (a loop for testing the mechanism under high-pressure/high-temperature conditions). Thanks to the close interaction between the mechanics, experimenters, and modelers who compose the MSAC task force, it has been possible to suggest improvements not only in the design of the mechanism, but also in the design and operation of the pulse generator (GDP) and the rest of the CEM. This effort has led to a design mature enough to be tested in a high-pressure loop

  19. Simulation and optimization of a dc SQUID with finite capacitance

    de Waal, V.J.; Schrijner, P.; Llurba, R.

    1984-02-01

    This paper deals with the calculation of the noise and the optimization of the energy resolution of a dc SQUID with finite junction capacitance. Up to now, noise calculations of dc SQUIDs were performed using a model without parasitic capacitances across the Josephson junctions. As the capacitances limit the performance of the SQUID, a good optimization must take them into account. The model consists of two coupled nonlinear second-order differential equations. The equations are very suitable for simulation with an analog circuit, and we implemented the model on a hybrid computer. The noise spectrum from the model is calculated with a fast Fourier transform. A calculation of the energy resolution for one set of parameters takes about 6 min of computer time. Detailed results of the optimization are given for inductance-temperature products of LT = 1.2 and 5 nH K. Within a range of β and βc between 1 and 2, which is optimum, the energy resolution is nearly independent of these variables. In this region the energy resolution is near the value calculated without parasitic capacitances. Results of the optimized energy resolution are given as a function of LT between 1.2 and 10 nH K.
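A pair of coupled nonlinear second-order junction equations of this type can be written in one common normalized form and integrated numerically (sign conventions and normalization vary between papers, and the sketch below is deterministic and noise-free, using SciPy rather than the paper's analog/hybrid computer):

```python
import numpy as np
from scipy.integrate import solve_ivp

def dc_squid_rhs(t, y, i_bias, phi_ext, beta_c, beta_l):
    """One common normalized form of the dc SQUID equations (conventions differ):
        beta_c*d1'' + d1' + sin(d1) = i/2 - j
        beta_c*d2'' + d2' + sin(d2) = i/2 + j
    with circulating current j = (d1 - d2 + 2*pi*phi_ext) / (pi*beta_l)."""
    d1, v1, d2, v2 = y
    j = (d1 - d2 + 2 * np.pi * phi_ext) / (np.pi * beta_l)
    a1 = (i_bias / 2 - j - v1 - np.sin(d1)) / beta_c
    a2 = (i_bias / 2 + j - v2 - np.sin(d2)) / beta_c
    return [v1, a1, v2, a2]

def mean_voltage(i_bias, phi_ext=0.0, beta_c=1.0, beta_l=1.0, t_end=200.0):
    sol = solve_ivp(dc_squid_rhs, (0.0, t_end), [0.0, 0.0, 0.0, 0.0],
                    args=(i_bias, phi_ext, beta_c, beta_l), max_step=0.05)
    # time-averaged normalized voltage = average phase-slip rate,
    # estimated over the second half of the run to skip the transient
    half = sol.t >= t_end / 2
    return float(np.mean((sol.y[1][half] + sol.y[3][half]) / 2))
```

Below the critical current the junction phases lock and the average voltage vanishes; above it the phases run and a finite average voltage appears, which is the working point the noise and energy-resolution calculations are evaluated around.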


  1. Optimal control and quantum simulations in superconducting quantum devices

    Egger, Daniel J.

    2014-10-31

    Quantum optimal control theory is the science of steering quantum systems. In this thesis we show how to overcome the obstacles in implementing optimal control for superconducting quantum bits, a promising candidate for the creation of a quantum computer. Building such a device will require the tools of optimal control. We develop pulse shapes to solve a frequency crowding problem and create controlled-Z gates. A methodology is developed for the optimisation towards a target non-unitary process. We show how to tune up control pulses for a generic quantum system in an automated way using a combination of open- and closed-loop optimal control. This will help the scaling of quantum technologies, since algorithms can calibrate control pulses far more efficiently than humans. Additionally we show how circuit QED can be brought to the novel regime of multi-mode ultrastrong coupling using a left-handed transmission line coupled to a right-handed one. We then propose to use this system as an analogue quantum simulator for the Spin-Boson model to show how dissipation arises in quantum systems.

  2. Optimization of reconstruction algorithms using Monte Carlo simulation

    Hanson, K.M.

    1989-01-01

    A method for optimizing reconstruction algorithms is presented that is based on how well a specified task can be performed using the reconstructed images. Task performance is numerically assessed by a Monte Carlo simulation of the complete imaging process including the generation of scenes appropriate to the desired application, subsequent data taking, reconstruction, and performance of the stated task based on the final image. The use of this method is demonstrated through the optimization of the Algebraic Reconstruction Technique (ART), which reconstructs images from their projections by an iterative procedure. The optimization is accomplished by varying the relaxation factor employed in the updating procedure. In some of the imaging situations studied, it is found that the optimization of constrained ART, in which a nonnegativity constraint is invoked, can vastly increase the detectability of objects. There is little improvement attained for unconstrained ART. The general method presented may be applied to the problem of designing neutron-diffraction spectrometers. 11 refs., 6 figs., 2 tabs
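The relaxation-factor update at the heart of ART can be sketched in a few lines. The system below is a toy 2×2 example, and the relaxation value is illustrative, not taken from the paper; only the general scheme (cycle through projection equations, nudge the estimate toward each hyperplane, optionally clamp to nonnegativity) follows the abstract.

```python
# Illustrative sketch of the Algebraic Reconstruction Technique (ART):
# cycle through projection equations a_i . x = b_i, moving x toward each
# hyperplane by a relaxation factor, optionally clamping to x >= 0
# (the "constrained ART" variant mentioned in the abstract).

def art(A, b, n_iter=200, relax=0.5, nonneg=True):
    n = len(A[0])
    x = [0.0] * n
    for _ in range(n_iter):
        for a_i, b_i in zip(A, b):
            dot = sum(a * xv for a, xv in zip(a_i, x))
            norm2 = sum(a * a for a in a_i)
            step = relax * (b_i - dot) / norm2
            x = [xv + step * a for xv, a in zip(x, a_i)]
            if nonneg:                      # nonnegativity constraint
                x = [max(0.0, xv) for xv in x]
    return x

# Toy consistent system with solution (1, 2):
A = [[1.0, 1.0], [1.0, -1.0]]
b = [3.0, -1.0]
x = art(A, b)
```

Varying `relax` between 0 and 2 is exactly the knob the paper tunes: small values converge slowly but smoothly, values near 2 can oscillate.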

  3. Geometric Optimization of Thermo-electric Coolers Using Simulated Annealing

    Khanh, D V K; Vasant, P M; Elamvazuthi, I; Dieu, V N

    2015-01-01

    The field of thermo-electric coolers (TECs) has grown rapidly in recent years. In extreme environments such as thermal energy and gas drilling operations, a TEC is an effective cooling mechanism for instruments. However, limitations such as the relatively low energy conversion efficiency and the ability to dissipate only a limited amount of heat flux may seriously shorten the lifetime and degrade the performance of the instrument. To date, much research has been conducted to improve the efficiency of TECs. The material parameters are the most significant, but they are restricted by currently available materials and module fabrication technologies. Therefore, the main objective in finding the optimal TEC design is to define a set of design parameters. In this paper, a new method of optimizing the dimensions of TECs using simulated annealing (SA) to maximize the rate of refrigeration (ROR) is proposed. Both equality and inequality constraints are taken into consideration. This work reveals that SA performs better than Cheng's earlier approach. (paper)
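A generic simulated-annealing loop of the kind described, maximizing a rate of refrigeration over geometric variables within bounds, might look like the sketch below. The objective function, bounds, and cooling schedule here are hypothetical placeholders, not the paper's TEC physics model.

```python
import math
import random

random.seed(0)

def ror(leg_length_mm, leg_area_mm2):
    # Hypothetical stand-in for the rate-of-refrigeration model: a smooth
    # function with an interior maximum, NOT the TEC physics from the paper.
    return -((leg_length_mm - 1.5) ** 2) - ((leg_area_mm2 - 4.0) ** 2)

def simulated_annealing(f, bounds, t0=1.0, cooling=0.95, steps=2000):
    x = [random.uniform(lo, hi) for lo, hi in bounds]
    best, t = list(x), t0
    for _ in range(steps):
        # Propose a bounded Gaussian perturbation of the current design.
        cand = [min(hi, max(lo, xi + random.gauss(0, 0.1)))
                for xi, (lo, hi) in zip(x, bounds)]
        delta = f(*cand) - f(*x)
        # Metropolis acceptance: always take improvements, sometimes accept
        # worse moves (more often at high temperature) to escape local optima.
        if delta > 0 or random.random() < math.exp(delta / t):
            x = cand
            if f(*x) > f(*best):
                best = list(x)
        t *= cooling
    return best

best = simulated_annealing(ror, bounds=[(0.5, 3.0), (1.0, 8.0)])
```

The inequality constraints from the paper correspond to the box bounds here; equality constraints would need a penalty term or projection step added to `f`.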

  4. Stochastic simulation and robust design optimization of integrated photonic filters

    Weng Tsui-Wei

    2016-07-01

    Manufacturing variations are becoming an unavoidable issue in modern fabrication processes; therefore, it is crucial to be able to include stochastic uncertainties in the design phase. In this paper, integrated photonic coupled ring resonator filters are considered as an example of significant interest. The sparsity structure in photonic circuits is exploited to construct a sparse combined generalized polynomial chaos model, which is then used to analyze related statistics and perform robust design optimization. Simulation results show that the optimized circuits are more robust to fabrication process variations and achieve a reduction of 11%–35% in the mean square errors of the 3 dB bandwidth compared to unoptimized nominal designs.

  5. Minimizing the Discrepancy between Simulated and Historical Failures in Turbine Engines: A Simulation-Based Optimization Method

    Ahmed Kibria

    2015-01-01

    The reliability modeling of a module in a turbine engine requires knowledge of its failure rate, which can be estimated by identifying statistical distributions describing the percentage of failure per component within the turbine module. The correct definition of the failure statistical behavior per component is highly dependent on the engineer's skills and may present significant discrepancies with respect to the historical data. There is no formal methodology to approach this problem, and a large number of labor hours is spent trying to reduce the discrepancy by manually adjusting the distribution's parameters. This paper addresses this problem and provides a simulation-based optimization method for the minimization of the discrepancy between the simulated and the historical percentage of failures for turbine engine components. The proposed methodology optimizes the parameter values of the component's failure statistical distributions within the component's likelihood confidence bounds. A complete testing of the proposed method is performed on a turbine engine case study. The method can be considered as a decision-making tool for maintenance, repair, and overhaul companies and will potentially reduce the cost of labor associated with finding the appropriate value of the distribution parameters for each component/failure mode in the model and increase the accuracy in the prediction of the mean time to failures (MTTF).
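The general pattern described, simulate failures from a candidate distribution, compare the simulated failure fraction against a historical target, and search the parameter within its confidence bounds, can be sketched as below. The exponential failure model, the bounds, and the historical target are invented for illustration; the paper's actual distributions and turbine-module simulation are far richer.

```python
import random

random.seed(1)

HISTORICAL_FAIL_FRACTION = 0.30   # hypothetical historical share of failures
                                  # attributed to this component
MISSION_TIME = 100.0              # hypothetical operating horizon

def simulated_fraction(rate, n=20000):
    # Fraction of simulated exponential failure times falling within the
    # mission time -- a stand-in for the full turbine-module simulation.
    fails = sum(1 for _ in range(n)
                if random.expovariate(rate) < MISSION_TIME)
    return fails / n

def fit_rate(lo, hi, n_grid=60):
    # Search the failure rate within its (hypothetical) likelihood
    # confidence bounds for the smallest discrepancy with history.
    best_rate, best_err = lo, float("inf")
    for i in range(n_grid + 1):
        rate = lo + (hi - lo) * i / n_grid
        err = abs(simulated_fraction(rate) - HISTORICAL_FAIL_FRACTION)
        if err < best_err:
            best_rate, best_err = rate, err
    return best_rate

rate = fit_rate(1e-4, 1e-2)
```

For this toy model the answer is known in closed form (P(T &lt; 100) = 1 - exp(-100·rate) = 0.30 gives rate ≈ 0.00357), which makes the search easy to sanity-check.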

  6. TH-CD-209-06: LET-Based Adjustment of IMPT Plans Using Prioritized Optimization

    Unkelbach, J; Giantsoudi, D; Paganetti, H [Massachusetts General Hospital, Boston, MA (United States); Botas, P [Massachusetts General Hospital, Boston, MA (United States); Heidelberg University, Heidelberg, DE (Germany); Qin, N; Jia, X [The University of Texas Southwestern Medical Ctr, Dallas, TX (United States)

    2016-06-15

    Purpose: In-vitro experiments suggest an increase in proton relative biological effectiveness (RBE) towards the end of range. However, proton treatment planning and dose reporting for clinical outcome assessment has been based on physical dose and constant RBE. Therefore, treatment planning for intensity-modulated proton therapy (IMPT) is unlikely to transition radically to pure RBE-based planning. We suggest a hybrid approach where treatment plans are initially created based on physical dose constraints and prescriptions, and are subsequently altered to avoid high linear energy transfer (LET) in critical structures while limiting the degradation of the physical dose distribution. Methods: To allow fast optimization based on dose and LET we extended a GPU-based Monte-Carlo code towards providing dose-averaged LET in addition to dose for all pencil beams. After optimizing an initial IMPT plan based on physical dose, a prioritized optimization scheme is used to modify the LET distribution while constraining the physical dose objectives to values close to the initial plan. The LET optimization step is performed based on objective functions evaluated for the product of physical dose and LET (LETxD). To a first approximation, LETxD represents a measure of the additional biological dose that is caused by high LET. Regarding optimization techniques, LETxD has the advantage of being a linear function of the pencil beam intensities. Results: The method is applicable to treatments where serial critical structures with maximum dose constraint are located in or near the target. We studied intra-cranial tumors (high-grade meningiomas, base-of-skull chordomas) where the target (CTV) overlaps with the brainstem and optic structures. Often, high LETxD in critical structures can be avoided while minimally compromising physical dose planning objectives. Conclusion: LET-based re-optimization of IMPT plans represents a pragmatic approach to bridge the gap between purely physical dose
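The prioritized scheme the abstract describes, optimize the primary (dose) objective first, then improve the secondary (LET) objective while holding the primary near its optimum, follows a standard lexicographic pattern. The toy objectives, 1-D grid, and slack below are purely illustrative, not the clinical optimization model.

```python
# Lexicographic two-stage ("prioritized") optimization on a 1-D grid:
# stage 1 minimizes the primary objective; stage 2 minimizes the secondary
# objective among points whose primary value stays within a small slack
# of the stage-1 optimum.

def prioritized_minimize(f_primary, f_secondary, grid, slack=0.05):
    best1 = min(f_primary(x) for x in grid)                  # stage 1
    feasible = [x for x in grid if f_primary(x) <= best1 + slack]
    return min(feasible, key=f_secondary)                    # stage 2

grid = [i / 100.0 for i in range(101)]          # designs x in [0, 1]
dose_obj = lambda x: (x - 0.5) ** 2             # "dose" optimum at x = 0.5
letxd_obj = lambda x: (x - 1.0) ** 2            # "LETxD" prefers larger x

x_star = prioritized_minimize(dose_obj, letxd_obj, grid)
```

The result trades away none of the essential primary objective (it stays within `slack` of optimal) while moving as far as allowed toward the secondary goal, which is the spirit of constraining dose objectives "to values close to the initial plan".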

  7. Applying simulation to optimize plastic molded optical parts

    Jaworski, Matthew; Bakharev, Alexander; Costa, Franco; Friedl, Chris

    2012-10-01

    Optical injection molded parts are used in many different industries including electronics, consumer, medical and automotive due to their cost and performance advantages compared to alternative materials such as glass. The injection molding process, however, induces elastic (residual stress) and viscoelastic (flow orientation stress) deformation in the molded article, which makes the material's refractive index anisotropic in different directions. Being able to predict and correct optical performance issues associated with birefringence early in the design phase is a huge competitive advantage. This paper reviews how to apply simulation analysis of the entire molding process to optimize manufacturability and part performance.

  8. Optimizing Grippers for Compensating Pose Uncertainties by Dynamic Simulation

    Wolniakowski, Adam; Kramberger, Aljaž; Gams, Andrej

    2017-01-01

    The gripper design process is one of the interesting challenges in the context of grasping within industry. Typically, simple parallel-finger grippers, which are easy to install and maintain, are used in platforms for robotic grasping. The context switches in these platforms require frequent exchange of grippers. Previously, we have presented a method to automatically compute the optimal finger shapes for defined task contexts in simulation. In this paper, we show the performance of our method in an industrial grasping scenario. We first analyze the uncertainties of the used vision system, which are the major source…

  9. Adjusted light and dark cycles can optimize photosynthetic efficiency in algae growing in photobioreactors.

    Eleonora Sforza

    Full Text Available Biofuels from algae are highly interesting as renewable energy sources to replace, at least partially, fossil fuels, but great research efforts are still needed to optimize growth parameters to develop competitive large-scale cultivation systems. One factor with a seminal influence on productivity is light availability. Light energy fully supports algal growth, but it leads to oxidative stress if illumination is in excess. In this work, the influence of light intensity on the growth and lipid productivity of Nannochloropsis salina was investigated in a flat-bed photobioreactor designed to minimize cells self-shading. The influence of various light intensities was studied with both continuous illumination and alternation of light and dark cycles at various frequencies, which mimic illumination variations in a photobioreactor due to mixing. Results show that Nannochloropsis can efficiently exploit even very intense light, provided that dark cycles occur to allow for re-oxidation of the electron transporters of the photosynthetic apparatus. If alternation of light and dark is not optimal, algae undergo radiation damage and photosynthetic productivity is greatly reduced. Our results demonstrate that, in a photobioreactor for the cultivation of algae, optimizing mixing is essential in order to ensure that the algae exploit light energy efficiently.

  10. Power-Quality-Oriented Optimization in Multiple Three-Phase Adjustable Speed Drives

    Yang, Yongheng; Davari, Pooya; Blaabjerg, Frede

    2016-01-01

    As an almost standardized configuration, Diode Rectifiers (DRs) and Silicon-Controlled Rectifiers (SCRs) are commonly employed as the front-end topology in three-phase Adjustable Speed Drive (ASD) systems. Features of this ASD configuration include structural and control simplicity, small volume, low cost, and high reliability during operation. Yet, DRs and SCRs introduce harmonic distortions into the mains and thus lower the overall efficiency. Power quality standards/rules have thus been released. For multiple ASD systems, certain harmonics of the total grid current can be mitigated by phase-shifting the currents drawn by the SCR-fed drives, making it much more flexible to reduce the Total Harmonic Distortion (THD) level in such applications. However, the effectiveness of this harmonic mitigation scheme for multiple ASD systems depends on: a) the number of parallel drives, b) the power levels, and c…

  11. A Simulation Study on the Performance of the Simple Difference and Covariance-Adjusted Scores in Randomized Experimental Designs

    Petscher, Yaacov; Schatschneider, Christopher

    2011-01-01

    Research by Huck and McLean (1975) demonstrated that the covariance-adjusted score is more powerful than the simple difference score, yet recent reviews indicate researchers are equally likely to use either score type in two-wave randomized experimental designs. A Monte Carlo simulation was conducted to examine the conditions under which the…

  12. Multipacting Simulations of Tuner-adjustable waveguide coupler (TaCo) with CST Particle Studio®

    Shafqat, N; Wegner, R

    2014-01-01

    Tuner-adjustable waveguide couplers (TaCo) are used to feed microwave power to different RF structures of LINAC4. This paper studies the multipacting phenomenon for TaCo using the PIC solver of CST PS. Simulations are performed for complete field sweeps and results are analysed.

  13. Parametric Optimization Through Numerical Simulation of VCR Diesel Engine

    Ganji, Prabhakara Rao; Mahmood, Al-Qarttani Abdulrahman Shakir; Kandula, Aasrith; Raju, Vysyaraju Rajesh Khana; Rao, Surapaneni Srinivasa

    2017-08-01

    In the present study, the Variable Compression Ratio (VCR) engine was analyzed numerically using the CONVERGE™ Computational Fluid Dynamics code in order to optimize design/operating parameters such as Compression Ratio (CR), Start of Injection (SOI) and Exhaust Gas Recirculation (EGR). The VCR engine was run at 100 % load to test its performance, and the model was validated for the standard configuration. Simulations were performed by varying the design/operating parameters CR (18-14), SOI (17°-26° bTDC) and EGR (0-15 %) at a constant fuel injection pressure of 230 bar and a speed of 1500 rpm. The effect of each of these parameters on pressure, oxides of nitrogen (NOx) and soot is presented. Finally, regression equations were developed for pressure, NOx and soot using the simulation results. The regression equations were solved against multi-objective criteria in order to reduce NOx and soot while maintaining the baseline performance. The optimized configuration was tested for validation and found satisfactory.
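The surrogate-based workflow described, fit regression models to simulation outputs, then search the design variable for the best trade-off, can be sketched as follows. The data points, the choice of linear (rather than the paper's) regression forms, and the weights are all invented placeholders.

```python
# Sketch of the surrogate-based approach: fit simple regression models to
# simulated NOx and soot responses, then search the design variable for the
# best weighted trade-off. Data points and weights are invented placeholders.

def linear_fit(xs, ys):
    # Ordinary least squares for y = a + b * x.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return my - b * mx, b

# Hypothetical simulation results vs. compression ratio (CR):
crs  = [14.0, 15.0, 16.0, 17.0, 18.0]
nox  = [2.0, 2.6, 3.1, 3.7, 4.2]     # rises with CR (arbitrary units)
soot = [1.9, 1.6, 1.2, 0.9, 0.6]     # falls with CR (arbitrary units)

(a_n, b_n), (a_s, b_s) = linear_fit(crs, nox), linear_fit(crs, soot)

def weighted_objective(cr, w_nox=0.5, w_soot=0.5):
    # Scalarized multi-objective criterion over the two surrogates.
    return w_nox * (a_n + b_n * cr) + w_soot * (a_s + b_s * cr)

best_cr = min((14.0 + 0.1 * i for i in range(41)), key=weighted_objective)
```

With linear surrogates the weighted optimum necessarily sits at a bound of the CR range; this is one reason a real study fits richer (nonlinear) regression equations before solving the multi-objective problem.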

  14. Using simulation-optimization techniques to improve multiphase aquifer remediation

    Finsterle, S.; Pruess, K. [Lawrence Berkeley Laboratory, Berkeley, CA (United States)

    1995-03-01

    The T2VOC computer model for simulating the transport of organic chemical contaminants in non-isothermal multiphase systems has been coupled to the ITOUGH2 code which solves parameter optimization problems. This allows one to use linear programming and simulated annealing techniques to solve groundwater management problems, i.e. the optimization of operations for multiphase aquifer remediation. A cost function has to be defined, containing the actual and hypothetical expenses of a cleanup operation which depend - directly or indirectly - on the state variables calculated by T2VOC. Subsequently, the code iteratively determines a remediation strategy (e.g. pumping schedule) which minimizes, for instance, pumping and energy costs, the time for cleanup, and residual contamination. We present an illustrative sample problem to discuss potential applications of the code. The study shows that the techniques developed for estimating model parameters can be successfully applied to the solution of remediation management problems. The resulting optimum pumping scheme depends, however, on the formulation of the remediation goals and the relative weighting between individual terms of the cost function.

  15. Validation, Optimization and Simulation of a Solar Thermoelectric Generator Model

    Madkhali, Hadi Ali; Hamil, Ali; Lee, HoSung

    2017-12-01

    This study explores thermoelectrics as a viable option for small-scale solar thermal applications. Thermoelectric technology is based on the Seebeck effect, which states that a voltage is induced when a temperature gradient is applied to the junctions of two differing materials. This research proposes to analyze, validate, simulate, and optimize a prototype solar thermoelectric generator (STEG) model in order to increase efficiency. The intent is to further develop STEGs as a viable and productive energy source that limits pollution and reduces the cost of energy production. An empirical study (Kraemer et al. in Nat Mater 10:532, 2011) on the solar thermoelectric generator reported a high efficiency performance of 4.6%. The system had a vacuum glass enclosure, a flat panel (absorber), thermoelectric generator and water circulation for the cold side. The theoretical and numerical approach of this current study validated the experimental results from Kraemer's study to a high degree. The numerical simulation process utilizes a two-stage approach in ANSYS software for Fluent and Thermal-Electric Systems. The solar load model technique uses solar radiation under AM 1.5G conditions in Fluent. This analytical model applies Dr. Ho Sung Lee's theory of optimal design to improve the performance of the STEG system by using dimensionless parameters. Applying this theory, using two cover glasses and radiation shields, the STEG model can achieve a maximum efficiency of 7%.

  16. Simulation-based optimal Bayesian experimental design for nonlinear systems

    Huan, Xun

    2013-01-01

    The optimal selection of experimental conditions is essential to maximizing the value of data for inference and prediction, particularly in situations where experiments are time-consuming and expensive to conduct. We propose a general mathematical framework and an algorithmic approach for optimal experimental design with nonlinear simulation-based models; in particular, we focus on finding sets of experiments that provide the most information about targeted sets of parameters. Our framework employs a Bayesian statistical setting, which provides a foundation for inference from noisy, indirect, and incomplete data, and a natural mechanism for incorporating heterogeneous sources of information. An objective function is constructed from information theoretic measures, reflecting expected information gain from proposed combinations of experiments. Polynomial chaos approximations and a two-stage Monte Carlo sampling method are used to evaluate the expected information gain. Stochastic approximation algorithms are then used to make optimization feasible in computationally intensive and high-dimensional settings. These algorithms are demonstrated on model problems and on nonlinear parameter inference problems arising in detailed combustion kinetics. © 2012 Elsevier Inc.
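The expected information gain that such frameworks maximize is commonly estimated with a nested (two-stage) Monte Carlo sum. Below is a minimal version for a toy linear-Gaussian model; the prior, noise level, designs, and sample sizes are invented, and the polynomial-chaos acceleration and stochastic-approximation optimizer from the abstract are not reproduced.

```python
import math
import random

random.seed(2)

def gauss_pdf(x, mu, sigma):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def expected_information_gain(design, sigma=1.0, n_outer=400, n_inner=400):
    # Nested Monte Carlo estimate of E_y[ log p(y|theta,d) - log p(y|d) ]
    # for the toy model y = design * theta + noise,
    # theta ~ N(0, 1), noise ~ N(0, sigma^2).
    inner_thetas = [random.gauss(0, 1) for _ in range(n_inner)]
    total = 0.0
    for _ in range(n_outer):
        theta = random.gauss(0, 1)
        y = design * theta + random.gauss(0, sigma)
        log_lik = math.log(gauss_pdf(y, design * theta, sigma))
        # Inner sum: Monte Carlo estimate of the evidence p(y | design).
        evidence = sum(gauss_pdf(y, design * t, sigma)
                       for t in inner_thetas) / n_inner
        total += log_lik - math.log(evidence)
    return total / n_outer

# A larger |design| amplifies the signal, so it should carry more information:
eig_small = expected_information_gain(0.5)
eig_large = expected_information_gain(2.0)
```

For this model the exact answer is 0.5·ln(1 + design²/σ²), so the estimator's ranking of designs can be checked against theory.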

  17. Directional variance adjustment: bias reduction in covariance matrices based on factor analysis with an application to portfolio optimization.

    Daniel Bartz

    Robust and reliable covariance estimates play a decisive role in financial and many other applications. An important class of estimators is based on factor models. Here, we show by extensive Monte Carlo simulations that covariance matrices derived from the statistical Factor Analysis model exhibit a systematic error, which is similar to the well-known systematic error of the spectrum of the sample covariance matrix. Moreover, we introduce the Directional Variance Adjustment (DVA) algorithm, which diminishes the systematic error. In a thorough empirical study for the US, European, and Hong Kong stock market we show that our proposed method leads to improved portfolio allocation.

  18. Directional variance adjustment: bias reduction in covariance matrices based on factor analysis with an application to portfolio optimization.

    Bartz, Daniel; Hatrick, Kerr; Hesse, Christian W; Müller, Klaus-Robert; Lemm, Steven

    2013-01-01

    Robust and reliable covariance estimates play a decisive role in financial and many other applications. An important class of estimators is based on factor models. Here, we show by extensive Monte Carlo simulations that covariance matrices derived from the statistical Factor Analysis model exhibit a systematic error, which is similar to the well-known systematic error of the spectrum of the sample covariance matrix. Moreover, we introduce the Directional Variance Adjustment (DVA) algorithm, which diminishes the systematic error. In a thorough empirical study for the US, European, and Hong Kong stock market we show that our proposed method leads to improved portfolio allocation.

  19. Directional Variance Adjustment: Bias Reduction in Covariance Matrices Based on Factor Analysis with an Application to Portfolio Optimization

    Bartz, Daniel; Hatrick, Kerr; Hesse, Christian W.; Müller, Klaus-Robert; Lemm, Steven

    2013-01-01

    Robust and reliable covariance estimates play a decisive role in financial and many other applications. An important class of estimators is based on factor models. Here, we show by extensive Monte Carlo simulations that covariance matrices derived from the statistical Factor Analysis model exhibit a systematic error, which is similar to the well-known systematic error of the spectrum of the sample covariance matrix. Moreover, we introduce the Directional Variance Adjustment (DVA) algorithm, which diminishes the systematic error. In a thorough empirical study for the US, European, and Hong Kong stock market we show that our proposed method leads to improved portfolio allocation. PMID:23844016
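The factor-model covariance estimators these records start from can be illustrated with a single observed factor. The sketch below is a one-factor (market-model) estimator on synthetic returns; it is a minimal cousin of the statistical factor-analysis estimators discussed, and the DVA correction itself is not reproduced. All betas and noise levels are invented.

```python
import random

random.seed(4)

def mean(xs):
    return sum(xs) / len(xs)

def var(xs):
    m = mean(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

def cov(xs, ys):
    mx, my = mean(xs), mean(ys)
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / len(xs)

def one_factor_cov(returns, market):
    # Covariance from a single-factor (market) model:
    #   Sigma_ij = beta_i * beta_j * var(m) + [i == j] * residual variance.
    vm = var(market)
    betas = [cov(r, market) / vm for r in returns]
    resid = [var([ri - b * mi for ri, mi in zip(r, market)])
             for r, b in zip(returns, betas)]
    k = len(returns)
    sigma = [[betas[i] * betas[j] * vm + (resid[i] if i == j else 0.0)
              for j in range(k)] for i in range(k)]
    return sigma, betas

# Synthetic returns for two assets with known betas 0.8 and 1.2:
m = [random.gauss(0.0, 1.0) for _ in range(5000)]
rets = [[b * mi + random.gauss(0.0, 0.5) for mi in m] for b in (0.8, 1.2)]
sigma, betas = one_factor_cov(rets, m)
```

The off-diagonal entries come entirely through the factor loadings, which is what makes factor covariances better conditioned than the raw sample covariance in high dimensions, and also what introduces the systematic error the DVA algorithm targets.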

  20. Optimization of startup and shutdown operation of simulated moving bed chromatographic processes.

    Li, Suzhou; Kawajiri, Yoshiaki; Raisch, Jörg; Seidel-Morgenstern, Andreas

    2011-06-24

    This paper presents new multistage optimal startup and shutdown strategies for simulated moving bed (SMB) chromatographic processes. The proposed concept allows transient operating conditions to be adjusted stage-wise, and provides the capability to improve transient performance and to fulfill product quality specifications simultaneously. A specially tailored decomposition algorithm is developed to ensure computational tractability of the resulting dynamic optimization problems. By examining the transient operation of a literature separation example characterized by a nonlinear competitive isotherm, the feasibility of the solution approach is demonstrated, and the performance of the conventional and multistage optimal transient regimes is evaluated systematically. The quantitative results clearly show that the optimal operating policies not only significantly reduce both the duration of the transient phase and desorbent consumption, but also enable on-spec production even during startup and shutdown periods. With the aid of the developed transient procedures, short-term separation campaigns with small batch sizes can be performed more flexibly and efficiently by SMB chromatography. Copyright © 2011 Elsevier B.V. All rights reserved.

  1. Design-Based Comparison of Spine Surgery Simulators: Optimizing Educational Features of Surgical Simulators.

    Ryu, Won Hyung A; Mostafa, Ahmed E; Dharampal, Navjit; Sharlin, Ehud; Kopp, Gail; Jacobs, W Bradley; Hurlbert, R John; Chan, Sonny; Sutherland, Garnette R

    2017-10-01

    Simulation-based education has made its entry into surgical residency training, particularly as an adjunct to hands-on clinical experience. However, one of the ongoing challenges to wide adoption is the capacity of simulators to incorporate educational features required for effective learning. The aim of this study was to identify strengths and limitations of spine simulators to characterize design elements that are essential in enhancing resident education. We performed a mixed qualitative and quantitative cohort study with a focused survey and interviews of stakeholders in spine surgery pertaining to their experiences on 3 spine simulators. Ten participants were recruited spanning all levels of training and expertise until qualitative analysis reached saturation of themes. Participants were asked to perform lumbar pedicle screw insertion on 3 simulators. Afterward, a 10-item survey was administered and a focused interview was conducted to explore topics pertaining to the design features of the simulators. Overall impressions of the simulators were positive with regard to their educational benefit, but our qualitative analysis revealed differing strengths and limitations. Main design strengths of the computer-based simulators were incorporation of procedural guidance and provision of performance feedback. The synthetic model excelled in achieving more realistic haptic feedback and incorporating use of actual surgical tools. Stakeholders from trainees to experts acknowledge the growing role of simulation-based education in spine surgery. However, different simulation modalities have varying design elements that augment learning in distinct ways. Characterization of these design characteristics will allow for standardization of simulation curricula in spinal surgery, optimizing educational benefit. Copyright © 2017 Elsevier Inc. All rights reserved.

  2. Modified Backtracking Search Optimization Algorithm Inspired by Simulated Annealing for Constrained Engineering Optimization Problems

    Hailong Wang

    2018-01-01

    The backtracking search optimization algorithm (BSA) is a population-based evolutionary algorithm for numerical optimization problems. BSA has a powerful global exploration capacity while its local exploitation capability is relatively poor. This affects the convergence speed of the algorithm. In this paper, we propose a modified BSA inspired by simulated annealing (BSAISA) to overcome the deficiency of BSA. In the BSAISA, the amplitude control factor (F) is modified based on the Metropolis criterion in simulated annealing. The redesigned F could be adaptively decreased as the number of iterations increases and it does not introduce extra parameters. A self-adaptive ε-constrained method is used to handle the strict constraints. We compared the performance of the proposed BSAISA with BSA and other well-known algorithms when solving thirteen constrained benchmarks and five engineering design problems. The simulation results demonstrated that BSAISA is more effective than BSA and more competitive with other well-known algorithms in terms of convergence speed.
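The key idea, an amplitude control factor F that decays adaptively as iterations progress, can be caricatured in a BSA-flavoured search loop. This is a schematic of the decaying-amplitude idea only, not the exact BSAISA update rule; the exponential schedule, the sphere objective, and all constants are invented, and BSA's crossover and historical-population swap machinery is heavily simplified.

```python
import math
import random

random.seed(3)

def sphere(x):
    return sum(v * v for v in x)

def bsa_like_search(f, dim=5, pop=20, iters=300, lo=-5.0, hi=5.0):
    # Caricature of backtracking search with an annealing-style amplitude
    # factor: the mutation amplitude F shrinks over the iterations via an
    # exponential (Metropolis-flavoured) schedule.
    P = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(pop)]
    f_initial_best = min(f(p) for p in P)
    hist = [row[:] for row in P]            # "historical" population
    for k in range(iters):
        # Amplitude decays with iteration count; the Gaussian magnitude
        # keeps some step-size diversity, as in BSA's random F.
        F = math.exp(-k / iters) * abs(random.gauss(0.0, 1.0))
        random.shuffle(hist)
        for i in range(pop):
            trial = [min(hi, max(lo, p + F * (h - p)))
                     for p, h in zip(P[i], hist[i])]
            if f(trial) < f(P[i]):          # greedy (elitist) selection
                P[i] = trial
        hist = [row[:] for row in P]
    return f_initial_best, min(f(p) for p in P)

f0, f_final = bsa_like_search(sphere)
```

Early iterations take large exploratory steps; late iterations take small refining ones, which is the exploration-to-exploitation transition the Metropolis-based F is designed to produce without extra tuning parameters.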

  3. Water striders adjust leg movement speed to optimize takeoff velocity for their morphology

    Yang, Eunjin; Son, Jae Hak; Lee, Sang-Im; Jablonski, Piotr G.; Kim, Ho-Young

    2016-12-01

    Water striders are water-walking insects that can jump upwards from the water surface. Quick jumps allow striders to avoid sudden dangers such as predators' attacks, and therefore their jumping is expected to be shaped by natural selection for optimal performance. Related species with different morphological constraints could require different jumping mechanics to successfully avoid predation. Here we show that jumping striders tune their leg rotation speed to reach the maximum jumping speed that water surface allows. We find that the leg stroke speeds of water strider species with different leg morphologies correspond to mathematically calculated morphology-specific optima that maximize vertical takeoff velocity by fully exploiting the capillary force of water. These results improve the understanding of correlated evolution between morphology and leg movements in small jumping insects, and provide a theoretical basis to develop biomimetic technology in semi-aquatic environments.

  4. Productivity simulation model for optimization of maritime container terminals

    Elen TWRDY

    2009-01-01

    This article describes a proposed productivity simulation model enabling container terminal operators to find optimization possibilities. Research on more than forty terminals has been carried out in order to provide a helping tool for maritime container terminals. By applying an adequate simulation model, it is possible to measure and increase the productivity in all subsystems of a maritime container terminal. Management of a maritime container terminal includes a vast number of different financial and operational decisions. Financial decisions are often directly connected with investments in infrastructure and handling equipment. Such investments are very expensive; therefore, they must pay back the invested money as soon as possible. On the other hand, some terminals are limited in their physical extension and are forced to increase annual throughput only with sophisticated equipment on the berth side as well as on the yard. Considering all these important facts in the container and shipping industry, the proposed simulation model provides a helping tool for checking productivity and its time variation, and for monitoring the competitiveness of a given maritime terminal against terminals of the same group.

  5. [Numerical simulation and operation optimization of biological filter].

    Zou, Zong-Sen; Shi, Han-Chang; Chen, Xiang-Qiang; Xie, Xiao-Qing

    2014-12-01

    BioWin software and two sensitivity analysis methods were used to simulate the Denitrification Biological Filter (DNBF) + Biological Aerated Filter (BAF) process in the Yuandang Wastewater Treatment Plant. Based on the BioWin model of the DNBF + BAF process, the operation data of September 2013 were used for sensitivity analysis and model calibration, and the operation data of October 2013 were used for model validation. The results indicated that the calibrated model could accurately simulate practical DNBF + BAF processes, and the most sensitive parameters were those related to the biofilm, OHOs and aeration. After validation and calibration, the model was used for process optimization by simulating operation results under different conditions. The results showed that the best operation condition for discharge standard B was: reflux ratio = 50%, ceasing methanol addition, influent C/N = 4.43; while the best operation condition for discharge standard A was: reflux ratio = 50%, influent COD = 155 mg·L(-1) after methanol addition, influent C/N = 5.10.

  6. On the Structure and Adjustment of Inversion-Capped Neutral Atmospheric Boundary-Layer Flows: Large-Eddy Simulation Study

    Pedersen, Jesper Grønnegaard; Gryning, Sven-Erik; Kelly, Mark C.

    2014-01-01

    A range of large-eddy simulations, with differing free atmosphere stratification and zero or slightly positive surface heat flux, is investigated to improve understanding of the neutral and near-neutral, inversion-capped, horizontally homogeneous, barotropic atmospheric boundary layer, with emphasis on the upper region. We find that an adjustment time of at least 16 h is needed for the simulated flow to reach a quasi-steady state. The boundary layer continues to grow, but at a slow rate that changes little after 8 h of simulation time. A common feature of the neutral simulations is the development of a super-geostrophic jet near the top of the boundary layer. The analytical wind-shear models included do not account for such a jet, and the best agreement with simulated wind shear is seen in cases with weak stratification above the boundary layer. Increasing the surface heat flux decreases the magnitude…

  7. Optimism, Positive and Negative Affect, and Goal Adjustment Strategies: Their Relationship to Activity Patterns in Patients with Chronic Musculoskeletal Pain

    Esteve, Rosa; López-Martínez, Alicia E; Peters, Madelon L; Serrano-Ibáñez, Elena R; Ruiz-Párraga, Gema T; Ramírez-Maestre, Carmen

    2018-01-01

    Objective. Activity patterns are the product of pain and of the self-regulation of current goals in the context of pain. The aim of this study was to investigate the association between goal management strategies and activity patterns while taking into account the role of optimism/pessimism and positive/negative affect. Methods. Two hundred and thirty-seven patients with chronic musculoskeletal pain filled out questionnaires on optimism, positive and negative affect, pain intensity, and the activity patterns they employed in dealing with their pain. Questionnaires were also administered to assess their general goal management strategies: goal persistence, flexible goal adjustment, and disengagement and reengagement with goals. Results. Structural equation modelling showed that higher levels of optimism were related to persistence, flexible goal management, and commitment to new goals. These strategies were associated with higher positive affect, persistence in finishing tasks despite pain, and infrequent avoidance behaviour in the presence or anticipation of pain. Conclusions. The strategies used by the patients with chronic musculoskeletal pain to manage their life goals are related to their activity patterns.

  9. Optimizing nitrogen fertilizer use: Current approaches and simulation models

    Baethgen, W.E.

    2000-01-01

    Nitrogen (N) is the most common limiting nutrient in agricultural systems throughout the world. Crops need sufficient available N to achieve optimum yields and adequate grain-protein content. Consequently, sub-optimal rates of N fertilizers typically yield lower economic benefits for farmers. On the other hand, excessive N fertilizer use may result in environmental problems such as nitrate contamination of groundwater and emission of N2O and NO. In spite of the economic and environmental importance of good N fertilizer management, the development of optimum fertilizer recommendations is still a major challenge in most agricultural systems. This article reviews the approaches most commonly used for making N recommendations: expected yield level, soil testing and plant analysis (including quick tests). The paper introduces the application of simulation models that complement traditional approaches, and includes some examples of current applications in Africa and South America. (author)

  10. Optimized Design of Spacer in Electrodialyzer Using CFD Simulation Method

    Jia, Yuxiang; Yan, Chunsheng; Chen, Lijun; Hu, Yangdong

    2018-06-01

    In this study, the effects of the length-width ratio and diversion trench of the spacer on fluid flow behavior in an electrodialyzer have been investigated through a CFD simulation method. The relevant information, including the pressure drop, velocity vector distribution and shear stress distribution, demonstrates the importance of an optimized spacer design in an electrodialysis process. The results show that the width of the diversion trench has a greater effect on the fluid flow than its length. Increasing the diversion trench width strengthens the fluid flow, but also increases the pressure drop. Secondly, the dead zone of the fluid flow decreases with an increasing length-width ratio of the spacer, but the pressure drop increases with it. The appropriate length-width ratio of the spacer should therefore be moderate.

  11. Use of genetic algorithms for optimization of subchannel simulations

    Nava Dominguez, A.

    2004-01-01

    To facilitate the modeling of a rod fuel bundle, the most commonly used method consists of dividing the complex cross-sectional area into small subsections called subchannels. To close the system equations, a mixture model is used to represent the intersubchannel interactions. These interactions are as follows: diversion cross-flow, turbulent void diffusion, void drift and buoyancy drift. Amongst these mechanisms, turbulent void diffusion and void drift are frequently modelled using diffusion coefficients. In this work, a novel approach has been employed in which an existing subchannel code was coupled to a genetic algorithm code to optimize these coefficients. After several numerical simulations, a new objective function based on the principle of minimum dissipated energy was developed. The use of this function in the genetic algorithm coupled to the subchannel code gave results in good agreement with the experimental data
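
    The coupling described above can be illustrated in miniature. The sketch below is not the authors' code: it replaces the subchannel code with a synthetic quadratic mismatch function and uses a bare-bones genetic algorithm (truncation selection, blend crossover, Gaussian mutation) to recover two hypothetical diffusion coefficients; all numbers are illustrative.

```python
import random

random.seed(0)

# Hypothetical "experiment": coefficient values the subchannel model should recover.
TRUE = (0.8, 0.15)  # (turbulent void diffusion, void drift) -- synthetic stand-ins

def objective(c):
    """Stand-in for the mismatch between subchannel-code output and measured data."""
    return (c[0] - TRUE[0]) ** 2 + (c[1] - TRUE[1]) ** 2

def evolve(pop_size=40, generations=60, bounds=(0.0, 1.0)):
    lo, hi = bounds
    pop = [(random.uniform(lo, hi), random.uniform(lo, hi)) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=objective)
        parents = pop[: pop_size // 2]  # truncation selection: keep the better half
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            # blend crossover plus small Gaussian mutation, clipped to bounds
            child = tuple((x + y) / 2 + random.gauss(0, 0.02) for x, y in zip(a, b))
            children.append(tuple(min(hi, max(lo, v)) for v in child))
        pop = parents + children
    return min(pop, key=objective)

best = evolve()
```

    In the actual work, each objective evaluation would be a full subchannel-code run, which is why the population size and generation count matter for cost.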

  12. Dynamic Simulation and Optimization of Nuclear Hydrogen Production Systems

    Paul I. Barton; Mujid S. Kaximi; Georgios Bollas; Patricio Ramirez Munoz

    2009-07-31

    This project is part of a research effort to design a hydrogen plant and its interface with a nuclear reactor. This project developed a dynamic modeling, simulation and optimization environment for nuclear hydrogen production systems. A hybrid discrete/continuous model captures both the continuous dynamics of the nuclear plant, the hydrogen plant, and their interface, along with discrete events such as major upsets. This hybrid model makes use of accurate thermodynamic sub-models for the description of phase and reaction equilibria in the thermochemical reactor. Use of the detailed thermodynamic models will allow researchers to examine the process in detail and have confidence in the accuracy of the property package they use.

  13. BASIMO - Borehole Heat Exchanger Array Simulation and Optimization Tool

    Schulte, Daniel O.; Welsch, Bastian; Rühaak, Wolfram; Bär, Kristian; Sass, Ingo

    2017-04-01

    Arrays of borehole heat exchangers are an increasingly popular source for renewable energy. Furthermore, they can serve as borehole thermal energy storage (BTES) systems for seasonally fluctuating heat sources like solar thermal energy or district heating grids. The high temperature level of these heat sources prohibits the use of the shallow subsurface for environmental reasons. Therefore, deeper reservoirs have to be accessed instead. The increased depth of the systems results in high investment costs and has hindered the implementation of this technology until now. Therefore, research of medium deep BTES systems relies on numerical simulation models. Current simulation tools cannot - or only to some extent - describe key features like partly insulated boreholes unless they run fully discretized models of the borehole heat exchangers. However, fully discretized models often come at a high computational cost, especially for large arrays of borehole heat exchangers. We give an update on the development of BASIMO: a tool, which uses one dimensional thermal resistance and capacity models for the borehole heat exchangers coupled with a numerical finite element model for the subsurface heat transport in a dual-continuum approach. An unstructured tetrahedral mesh bypasses the limitations of structured grids for borehole path geometries, while the thermal resistance and capacity model is improved to account for borehole heat exchanger properties changing with depth. Thereby, partly insulated boreholes can be considered in the model. Furthermore, BASIMO can be used to improve the design of BTES systems: the tool allows for automated parameter variations and is readily coupled to other code like mathematical optimization algorithms. Optimization can be used to determine the required minimum system size or to increase the system performance.

  14. Optimal Subinterval Selection Approach for Power System Transient Stability Simulation

    Soobae Kim

    2015-10-01

    Power system transient stability analysis requires an appropriate integration time step to avoid numerical instability as well as to reduce computational demands. For fast system dynamics, which vary more rapidly than what the time step covers, a fraction of the time step, called a subinterval, is used. However, the optimal value of this subinterval is not easily determined because analysis of the system dynamics might be required. This selection is usually made from engineering experience, and perhaps trial and error. This paper proposes an optimal subinterval selection approach for power system transient stability analysis, which is based on modal analysis using a single machine infinite bus (SMIB) system. Fast system dynamics are identified with the modal analysis, and the SMIB system is used focusing on fast local modes. An appropriate subinterval time step from the proposed approach can reduce the computational burden and achieve accurate simulation responses as well. The performance of the proposed method is demonstrated with the GSO 37-bus system.
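
    As a rough illustration of the modal idea, the sketch below linearizes classical SMIB swing dynamics with made-up machine constants, extracts the fastest oscillatory mode from the eigenvalues of the state matrix, and sets the subinterval to resolve that mode with about 20 points per period; the paper's actual selection procedure and parameter values are not reproduced here.

```python
import numpy as np

# Linearized SMIB swing dynamics, state x = [delta-deviation, speed-deviation].
# All machine constants below are illustrative assumptions.
H = 3.5                     # inertia constant (s)
omega_s = 2 * np.pi * 60    # synchronous speed (rad/s)
Ks = 1.2                    # synchronizing power coefficient (pu)
D = 0.05                    # damping coefficient (pu)

A = np.array([[0.0, omega_s],
              [-Ks / (2 * H), -D / (2 * H)]])

eigs = np.linalg.eigvals(A)
f_fast = max(abs(eigs.imag)) / (2 * np.pi)   # fastest oscillatory mode (Hz)
dt_sub = 1.0 / (20.0 * f_fast)               # ~20 integration points per period
```

    With these numbers the local mode is near 1.3 Hz, giving a subinterval of a few tens of milliseconds; a real study would apply this to the identified fast local modes of the full system.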

  15. Robust Multivariable Optimization and Performance Simulation for ASIC Design

    DuMonthier, Jeffrey; Suarez, George

    2013-01-01

    Application-specific integrated circuit (ASIC) design for space applications involves multiple challenges of maximizing performance, minimizing power, and ensuring reliable operation in extreme environments. This is a complex multidimensional optimization problem, which must be solved early in the development cycle of a system because the time required for testing and qualification severely limits opportunities to modify and iterate. Manual design techniques, which generally involve simulation at one or a small number of corners with a very limited set of simultaneously variable parameters in order to make the problem tractable, are inefficient and not guaranteed to achieve the best possible results within the performance envelope defined by the process and environmental requirements. What is required is a means to automate design parameter variation, to allow the designer to specify operational constraints and performance goals, and to analyze the results in a way that facilitates identifying the tradeoffs defining the performance envelope over the full set of process and environmental corner cases. The system developed by the Mixed Signal ASIC Group (MSAG) at the Goddard Space Flight Center is implemented as a framework of software modules, templates, and function libraries. It integrates CAD tools and a mathematical computing environment, and can be customized for new circuit designs with only a modest amount of effort, as most common tasks are already encapsulated. Customization is required for simulation test benches to determine performance metrics and for cost function computation.

  16. Optimizing the HLT Buffer Strategy with Monte Carlo Simulations

    AUTHOR|(CDS)2266763

    2017-01-01

    This project aims to optimize the strategy of utilizing the disk buffer for the High Level Trigger (HLT) of the LHCb experiment with the help of Monte-Carlo simulations. A method is developed, which simulates the Event Filter Farm (EFF) -- a computing cluster for the High Level Trigger -- as a compound of nodes with different performance properties. In this way, the behavior of the computing farm can be analyzed at a deeper level than before. It is demonstrated that the current operating strategy might be improved when data taking is reaching a mid-year scheduled stop or the year-end technical stop. The processing time of the buffered data can be lowered by distributing the detector data according to the processing power of the nodes instead of the relative disk size as long as the occupancy level of the buffer is low enough. Moreover, this ensures that data taken and stored on the buffer at the same time is processed by different nodes nearly simultaneously, which reduces load on the infrastructure.
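
    The two distribution strategies compared in the text can be mocked up in a few lines. In this sketch (node counts, disk sizes and speeds are invented), a fixed amount of buffered data is split either in proportion to relative disk size or in proportion to processing power; the time to drain the buffer is set by the slowest node, and the speed-proportional split finishes sooner.

```python
# Hypothetical EFF nodes with heterogeneous disk sizes and processing speeds.
nodes = [
    {"disk": 2.0, "speed": 1.0},
    {"disk": 2.0, "speed": 2.0},
    {"disk": 1.0, "speed": 3.0},
]
data = 60.0  # total buffered data, arbitrary units

def drain_time(key):
    """Time to process the buffer when data is split in proportion to `key`.

    Each node processes its own share at its own speed, so the slowest
    node to finish determines the overall drain time."""
    total = sum(n[key] for n in nodes)
    return max((data * n[key] / total) / n["speed"] for n in nodes)

t_by_disk = drain_time("disk")    # current strategy: split by relative disk size
t_by_speed = drain_time("speed")  # proposed: split by relative processing power
```

    Splitting by speed equalizes the per-node finish times, which is the intuition behind the improvement suggested in the abstract.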

  17. Optimizing switching frequency of the soliton transistor by numerical simulation

    Izadyar, S., E-mail: S_izadyar@yahoo.co [Department of Electronics, Khaje Nasir Toosi University of Technology, Shariati Ave., Tehran (Iran, Islamic Republic of); Niazzadeh, M.; Raissi, F. [Department of Electronics, Khaje Nasir Toosi University of Technology, Shariati Ave., Tehran (Iran, Islamic Republic of)

    2009-10-15

    In this paper, by numerical simulations we have examined different ways to increase the soliton transistor's switching frequency. The speed of the solitons in a soliton transistor depends on various parameters such as the loss of the junction, the applied bias current, and the transmission line characteristics. Three different approaches have been examined: (i) decreasing the size of the transistor without losing the transistor effect; (ii) decreasing the amount of loss of the junction to increase the soliton speed; (iii) optimizing the bias current to obtain the maximum possible speed. We have obtained the shortest possible length to have at least one working soliton inside the transistor. The dimension of the soliton can be decreased by changing the inductance of the transmission line, causing a further decrease in the size of the transistor; however, a trade-off between the size and the inductance is needed to obtain the optimum switching speed. Decreasing the amount of loss can be accomplished by increasing the characteristic tunneling resistance of the device; however, a trade-off is again needed to make soliton and antisoliton annihilation possible. By increasing the bias current, the forces acting on the solitons increase and so does their speed. Due to nonuniform application of the bias current, a self-induced magnetic field is created which can result in the creation of unwanted solitons. Optimum bias current application can result in larger bias currents and larger soliton speed. Simulations have provided us with such an arrangement of bias current paths.

  19. Simulation of Optimal Decision-Making Under the Impacts of Climate Change.

    Møller, Lea Ravnkilde; Drews, Martin; Larsen, Morten Andreas Dahl

    2017-07-01

    Climate change transforms the conditions of existing agricultural practices, prompting farmers to continuously evaluate their agricultural strategies, e.g., towards optimising revenue. In this light, this paper presents a framework for applying Bayesian updating to simulate decision-making, reaction patterns and the updating of beliefs among farmers in a developing country when faced with the complexity of adapting agricultural systems to climate change. We apply the approach to a case study from Ghana, where farmers seek to decide on the most profitable of three agricultural systems (dryland crops, irrigated crops and livestock) by continuously updating beliefs relative to realised trajectories of climate (change), represented by projections of temperature and precipitation. The climate data are based on combinations of output from three global/regional climate model combinations and two future scenarios (RCP4.5 and RCP8.5) representing moderate and unsubstantial greenhouse gas reduction policies, respectively. The results indicate that the climate scenario (input) holds a significant influence on the development of beliefs, net revenues and thereby optimal farming practices. Further, despite uncertainties in the underlying net revenue functions, the study shows that when the beliefs of the farmer (decision-maker) oppose the development of the realised climate, the Bayesian methodology allows for simulating an adjustment of such beliefs when improved information becomes available. The framework can, therefore, help facilitate the optimal choice between agricultural systems considering the influence of climate change.
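
    The Bayesian updating at the core of such a framework can be sketched with a discrete belief vector over the three agricultural systems. The seasonal likelihoods below are invented for illustration (they mimic observations that favour irrigated cropping); the paper's net revenue functions and climate model inputs are not used.

```python
# Discrete Bayesian updating of a farmer's beliefs over three agricultural systems.

def update(prior, likelihood):
    """One Bayes step: posterior proportional to prior * likelihood, normalised."""
    post = {k: prior[k] * likelihood[k] for k in prior}
    z = sum(post.values())
    return {k: v / z for k, v in post.items()}

# Uniform initial beliefs about which system is most profitable.
beliefs = {"dryland": 1 / 3, "irrigated": 1 / 3, "livestock": 1 / 3}

# Hypothetical per-season likelihoods of the observed revenue under each system:
seasons = [
    {"dryland": 0.2, "irrigated": 0.5, "livestock": 0.3},
    {"dryland": 0.1, "irrigated": 0.6, "livestock": 0.3},
    {"dryland": 0.1, "irrigated": 0.5, "livestock": 0.4},
]
for lik in seasons:
    beliefs = update(beliefs, lik)

best_system = max(beliefs, key=beliefs.get)
```

    After three seasons of evidence, the belief mass concentrates on the irrigated system, illustrating how a decision-maker's beliefs adjust as realised climate information accumulates.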

  20. Design and Optimal Research of a Non-Contact Adjustable Magnetic Adhesion Mechanism for a Wall-Climbing Welding Robot

    Minghui Wu

    2013-01-01

    Wall-climbing welding robots (WCWRs) can replace workers in manufacturing and maintaining large unstructured equipment, such as ships. The adhesion mechanism is the key component of WCWRs, as it directly determines the robot's ability to adsorb, move flexibly and pass obstacles. In this paper, a novel non-contact adjustable magnetic adhesion mechanism is proposed. The magnet suckers are mounted under the robot's axils, and the sucker and wall are not in contact. In order to pass obstacles, the sucker and the wheel unit can be pulled up and pushed down by a lifting mechanism. The magnetic adhesion force can be adjusted by using the lifting mechanism to change the height of the gap between the sucker and the wall. In order to increase the adhesion force, the value of the sucker's magnetic energy density (MED) is maximized by optimizing the magnet sucker's structure parameters with a finite element method. Experiments prove that the magnetic adhesion mechanism has sufficient adhesion force and that the WCWR can complete wall-climbing work within a large unstructured environment.

  1. Opportunities for Improving Army Modeling and Simulation Development: Making Fundamental Adjustments and Borrowing Commercial Business Practices

    Lee, John

    2000-01-01

    This paper briefly explores project management principles, leadership theory, and commercial business practices, suggesting improvements to the Army's modeling and simulation development process...

  2. Optimal array factor radiation pattern synthesis for linear antenna array using cat swarm optimization: validation by an electromagnetic simulator

    Gopi RAM; Durbadal MANDAL; Sakti Prasad GHOSHAL; Rajib KAR

    2017-01-01

    In this paper, an optimal design of linear antenna arrays having microstrip patch antenna elements has been carried out. Cat swarm optimization (CSO) has been applied for the optimization of the control parameters of the radiation pattern of an antenna array. The optimal radiation patterns of isotropic antenna elements are obtained by optimizing the current excitation weight of each element and the inter-element spacing. Antenna arrays of 12, 16, and 20 elements are taken as examples. The arrays are designed using MATLAB computation and are validated through Computer Simulation Technology-Microwave Studio (CST-MWS). From the simulation results it is evident that CSO is able to yield the optimal design of linear antenna arrays of patch antenna elements.
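
    The array factor being shaped here can be evaluated directly. The sketch below is not the paper's MATLAB code: it uses plain NumPy with uniform excitation weights and half-wavelength spacing (rather than CSO-optimized values) to compute the normalized pattern of a 12-element linear array; an optimizer such as CSO would then adjust `weights` and the element positions to lower the sidelobes.

```python
import numpy as np

def array_factor_db(weights, positions_wl, theta):
    """Normalized |AF| in dB for a linear array.

    positions_wl: element positions in wavelengths along the array axis;
    theta: observation angles (rad) measured from the array axis."""
    k = 2 * np.pi  # free-space wavenumber with wavelength = 1
    phase = np.exp(1j * k * np.outer(np.cos(theta), positions_wl))
    af = np.abs(phase @ weights)
    af /= af.max()
    return 20 * np.log10(np.maximum(af, 1e-12))

theta = np.linspace(0, np.pi, 1801)
n = 12
positions = 0.5 * np.arange(n)          # uniform half-wavelength spacing
uniform = array_factor_db(np.ones(n), positions, theta)
```

    With uniform weights the main beam sits at broadside and the first sidelobe is near -13 dB; tapering the excitation weights trades beamwidth for lower sidelobes, which is the trade-off the CSO search navigates.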

  3. Simulation Optimization by Genetic Search: A Comprehensive Study with Applications to Production Management

    Yunker, James

    2003-01-01

    In this report, a relatively new simulation optimization technique, the genetic search, is compared to two more established simulation techniques: the pattern search and the response surface methodology search...

  4. Numerical simulation of CICC design based on optimization of ratio of copper to superconductor

    Jiang Huawei; Li Yuan; Yan Shuailing

    2007-01-01

    For cable-in-conduit conductor (CICC) structure design, a numerical simulation is proposed for the conductor configuration based on optimization of the ratio of copper to superconductor. The simulation outcome is in agreement with that of the engineering design. (authors)

  5. The Optimization of a Microfluidic CTC Filtering Chip by Simulation

    Huan Li

    2017-03-01

    The detection and separation of circulating tumor cells (CTCs) are crucial in early cancer diagnosis and cancer prognosis. Filtration through a thin film is one of the size- and deformability-based separation methods, which can isolate rare CTCs from the peripheral blood of cancer patients regardless of their heterogeneity. In this paper, volume of fluid (VOF) multiphase flow models are employed to clarify the cells' filtering processes. The cells may deform significantly when they enter a channel constriction, which will induce cell membrane stress and damage if the area strain is larger than the critical value. Therefore, a cellular damage criterion characterized by membrane area strain is presented in our model, i.e., the lysis limit of the lipid bilayer is taken as the critical area strain. Under this criterion, we discover that microfilters with slit-shaped pores do less damage to cells than those with circular pores. The influence of the contact angle between the microfilters and blood cells on cellular injury is also discussed. Moreover, the optimal film thickness and flux in our simulations are obtained as 0.5 μm and 0.375 mm/s, respectively. These findings will provide constructive guidance for the improvement of next-generation microfilters with higher throughput and less cellular damage.

  6. Optimization and Numerical Simulation of Outlet of Twin Screw Extruder

    Zhang Yuan

    2018-01-01

    In view of the unreasonable design of the non-intermeshing counter-rotating twin screw extruder die, the problem of reduced productivity was discussed. Firstly, a mathematical model of extruder productivity was established and the extruder die model was improved. Secondly, a force analysis of the twin screw extruder physical model was carried out, combining mechanical analysis and numerical simulation. The velocity field, pressure field and viscosity field were calculated by the Mini-Element interpolation method, the linear interpolation method and the Picard iterative convergence method, respectively. The influence of the die model on each field before and after the improvement was analyzed. The results show that the improved model increased the rheological parameters of the flow field, while the leakage and reverse flow decreased. Through post-processing calculation, the productivity of the extruder with the third die was 10% higher than before. The research results provide a theoretical basis for the design and optimization of the die model of the non-intermeshing counter-rotating twin screw extruder.

  7. Minimization for conditional simulation: Relationship to optimal transport

    Oliver, Dean S.

    2014-05-01

    In this paper, we consider the problem of generating independent samples from a conditional distribution when independent samples from the prior distribution are available. Although there are exact methods for sampling from the posterior (e.g. Markov chain Monte Carlo or acceptance/rejection), these methods tend to be computationally demanding when evaluation of the likelihood function is expensive, as it is for most geoscience applications. As an alternative, in this paper we discuss deterministic mappings of variables distributed according to the prior to variables distributed according to the posterior. Although any deterministic mappings might be equally useful, we will focus our discussion on a class of algorithms that obtain implicit mappings by minimization of a cost function that includes measures of data mismatch and model variable mismatch. Algorithms of this type include quasi-linear estimation, randomized maximum likelihood, perturbed observation ensemble Kalman filter, and ensemble of perturbed analyses (4D-Var). When the prior pdf is Gaussian and the observation operators are linear, we show that these minimization-based simulation methods solve an optimal transport problem with a nonstandard cost function. When the observation operators are nonlinear, however, the mapping of variables from the prior to the posterior obtained from those methods is only approximate. Errors arise from neglect of the Jacobian determinant of the transformation and from the possibility of discontinuous mappings.
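
    For the linear-Gaussian case discussed above, the minimization has a closed form and the mapping can be checked numerically. The sketch below is a toy randomized-maximum-likelihood example with a made-up 2-parameter model and a single linear observation: each sample perturbs the prior draw and the data, then applies the closed-form minimizer of the quadratic cost; in this linear case the resulting sample mean matches the exact posterior mean.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy linear-Gaussian inverse problem: prior N(0, Cm), data d = G m_true + noise.
G = np.array([[1.0, 0.5]])
Cm = np.eye(2)
Cd = np.array([[0.1]])
m_true = np.array([1.0, -0.5])
d = G @ m_true + rng.normal(0, np.sqrt(0.1), 1)

def rml_sample():
    """One randomized-maximum-likelihood sample: perturb the prior mean and the
    data, then minimise the quadratic cost (closed form for linear G)."""
    m_p = rng.multivariate_normal(np.zeros(2), Cm)   # draw from the prior
    d_p = d + rng.normal(0, np.sqrt(Cd[0, 0]), 1)    # perturbed observation
    K = Cm @ G.T @ np.linalg.inv(G @ Cm @ G.T + Cd)  # Kalman-type gain
    return m_p + (K @ (d_p - G @ m_p)).ravel()

samples = np.array([rml_sample() for _ in range(4000)])

# Exact posterior mean for the linear-Gaussian problem, for comparison:
K = Cm @ G.T @ np.linalg.inv(G @ Cm @ G.T + Cd)
post_mean = (K @ d).ravel()
```

    With a nonlinear observation operator the same minimization no longer samples the posterior exactly, which is the error source (Jacobian determinant, possible discontinuity) discussed in the paper.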

  8. A Monte Carlo simulation technique to determine the optimal portfolio

    Hassan Ghodrati

    2014-03-01

    During the past few years, there have been several studies on portfolio management. One of the primary concerns on any stock market is to detect the risk associated with various assets. One of the recognized methods to measure, forecast, and manage the existing risk is Value at Risk (VaR), which has drawn much attention from financial institutions in recent years. VaR is a method for recognizing and evaluating risk which uses standard statistical techniques, and it has been used increasingly in other fields. The present study measured the value at risk of 26 companies from the chemical industry in the Tehran Stock Exchange over the period 2009-2011 using the Monte Carlo simulation technique at the 95% confidence level. The variable used in the present study was the daily return resulting from daily stock price changes. Moreover, the optimal investment weight for each of the selected stocks was determined using a hybrid Markowitz and Winker model. The results showed that the maximum loss would not exceed 1259432 Rials at the 95% confidence level on the following day.
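
    The Monte Carlo VaR computation itself is compact. The sketch below uses invented return parameters and a hypothetical portfolio value, not the Tehran Stock Exchange data: one-day returns are simulated from a normal distribution and the 95% VaR is read off as the 95th percentile of the simulated loss distribution.

```python
import numpy as np

rng = np.random.default_rng(42)

# Illustrative (assumed) daily-return parameters and position size.
mu, sigma = 0.0005, 0.02      # mean and volatility of daily returns
portfolio_value = 1_000_000   # hypothetical portfolio value
n_sims = 100_000

returns = rng.normal(mu, sigma, n_sims)   # simulated one-day returns
losses = -portfolio_value * returns       # loss = negative profit-and-loss
var_95 = np.percentile(losses, 95)        # 95% one-day Value at Risk
```

    With these parameters the simulated VaR lands near the analytic value, portfolio_value * (1.645 * sigma - mu); in practice the returns would be drawn from the empirical or fitted distribution of each asset.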

  9. Resource Allocation and Outpatient Appointment Scheduling Using Simulation Optimization

    Carrie Ka Yuk Lin

    2017-01-01

    This paper studies the real-life problems of outpatient clinics having the multiple objectives of minimizing resource overtime, patient waiting time, and waiting area congestion. In the clinic, there are several patient classes, each of which follows different treatment procedure flow paths through a multiphase and multiserver queuing system with scarce staff and limited space. We incorporate the stochastic factors for the probabilities of the patients being diverted into different flow paths, patient punctuality, arrival times, procedure duration, and the number of accompanied visitors. We present a novel two-stage simulation-based heuristic algorithm to assess various tactical and operational decisions for optimizing the multiple objectives. In stage I, we search for a resource allocation plan, and in stage II, we determine a block appointment schedule by patient class and a service discipline for the daily operational level. We also explore the effects of the separate strategies and their integration to identify the best possible combination. The computational experiments are designed on the basis of data from a study of an ophthalmology clinic in a public hospital. Results show that our approach significantly mitigates the undesirable outcomes by integrating the strategies and increasing the resource flexibility at the bottleneck procedures without adding resources.

  10. Humans use social information to adjust their quorum thresholds adaptively in a simulated predator detection experiment

    Kurvers, R.H.J.M.; Wolf, M.; Krause, J.

    2014-01-01

    Quorum sensing is used in many biological systems to increase decision accuracy. In quorum sensing, the probability that an individual adopts a behavior is a nonlinear function of the number of other individuals adopting this behavior. From an optimal decision-making perspective, individuals should

  11. Evaluation of CMIP5 continental precipitation simulations relative to satellite-based gauge-adjusted observations

    Mehran, A.; AghaKouchak, A.; Phillips, T. J.

    2014-02-01

    The objective of this study is to cross-validate 34 Coupled Model Intercomparison Project Phase 5 (CMIP5) historical simulations of precipitation against the Global Precipitation Climatology Project (GPCP) data, quantifying model pattern discrepancies, and biases for both entire distributions and their upper tails. The results of the volumetric hit index (VHI) analysis of the total monthly precipitation amounts show that most CMIP5 simulations are in good agreement with GPCP patterns in many areas but that their replication of observed precipitation over arid regions and certain subcontinental regions (e.g., northern Eurasia, eastern Russia, and central Australia) is problematical. Overall, the VHI of the multimodel ensemble mean and median also are superior to that of the individual CMIP5 models. However, at high quantiles of reference data (75th and 90th percentiles), all climate models display low skill in simulating precipitation, except over North America, the Amazon, and Central Africa. Analyses of total bias (B) in CMIP5 simulations reveal that most models overestimate precipitation over regions of complex topography (e.g., western North and South America and southern Africa and Asia), while underestimating it over arid regions. Also, while most climate model simulations show low biases over Europe, intermodel variations in bias over Australia and Amazonia are considerable. The quantile bias analyses indicate that CMIP5 simulations are even more biased at high quantiles of precipitation. It is found that a simple mean field bias removal improves the overall B and VHI values but does not make a significant improvement at high quantiles of precipitation.

  12. Simulation and Optimization of Silicon Solar Cell Back Surface Field

    Souad TOBBECHE

    2015-11-01

    In this paper, TCAD Silvaco (Technology Computer Aided Design) software has been used to study the Back Surface Field (BSF) effect of a p+ silicon layer for an n+pp+ silicon solar cell. To study this effect, the J-V characteristics and the external quantum efficiency (EQE) are simulated under AM 1.5 illumination for two types of cells. The first solar cell is without BSF (n+p structure) while the second one is with BSF (n+pp+ structure). The creation of the BSF on the rear face of the cell results in an efficiency η of up to 16.06% with a short-circuit current density Jsc = 30.54 mA/cm2, an open-circuit voltage Voc = 0.631 V, a fill factor FF = 0.832 and a clear improvement of the spectral response in the long-wavelength range. An electric field and a potential barrier are created by the BSF and located at the p+/p junction, with maxima of 5800 V/cm and 0.15 V, respectively. The optimization of the BSF layer shows that the cell performance improves with a p+ thickness between 0.35 – 0.39 µm and a p+ doping dose of about 2 × 10^14 cm^-2, with a maximum efficiency of up to 16.19%. The cell efficiency is more sensitive to the value of the back surface recombination velocity above 10^3 cm/s in the n+p than in the n+pp+ solar cell. DOI: http://dx.doi.org/10.5755/j01.ms.21.4.9565

  13. An Optimization Algorithm for Multipath Parallel Allocation for Service Resource in the Simulation Task Workflow

    Zhiteng Wang

    2014-01-01

    Service-oriented modeling and simulation are hot issues in the field of modeling and simulation, and service resources need to be called when a simulation task workflow is running. How to optimize the service resource allocation to ensure that the task is completed effectively is an important issue in this area. In the military modeling and simulation field, it is important to improve the probability of success and the timeliness of the simulation task workflow. Therefore, this paper proposes an optimization algorithm for multipath service resource parallel allocation, in which a multipath service resource parallel allocation model is built and a quantum optimization algorithm with a multiple-chains coding scheme is used for optimization and solution. The multiple-chains coding scheme extends the parallel search space to improve search efficiency. Through simulation experiments, this paper investigates how different optimization algorithms, service allocation strategies, and path numbers affect the probability of success in the simulation task workflow, and the simulation results show that the optimization algorithm for multipath service resource parallel allocation is an effective method to improve the probability of success and timeliness of the simulation task workflow.

  14. Modeling and simulation of M/M/c queuing pharmacy system with adjustable parameters

    Rashida, A. R.; Fadzli, Mohammad; Ibrahim, Safwati; Goh, Siti Rohana

    2016-02-01

    This paper studies discrete event simulation (DES) as a computer-based modelling technique that imitates the real system of a pharmacy unit. M/M/c queuing theory is used to model and analyse the characteristics of the queuing system at the pharmacy unit of Hospital Tuanku Fauziah, Kangar in Perlis, Malaysia. The input of this model is based on statistical data collected over 20 working days in June 2014. Currently, patient waiting time at the pharmacy unit is more than 15 minutes. The actual operation of the pharmacy unit is a mixed queuing server with an M/M/2 queuing model, where the pharmacists are the servers. The DES approach and the ProModel simulation software are used to simulate the queuing model and to propose improvements for the queuing system at this pharmacy. The waiting time for each server is analysed, and Counters 3 and 4 are found to have the highest waiting times, 16.98 and 16.73 minutes respectively. Three scenarios, M/M/3, M/M/4 and M/M/5, are simulated, and the waiting times of the actual and experimental queuing models are compared. The simulation results show that adding a server (pharmacist) reduces patient waiting time by a reasonable margin: almost 50% of the average patient waiting time is eliminated when one pharmacist is added to the counters. However, it is not necessary to fully utilize all counters: even though M/M/4 and M/M/5 produce further reductions in patient waiting time, they are ineffective since Counter 5 is rarely used.
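
    The effect of adding a server in an M/M/c queue can also be checked analytically with the Erlang C formula. The arrival and service rates below are hypothetical placeholders, not the hospital's measured data:

```python
from math import factorial

def erlang_c(c, lam, mu):
    # Probability that an arrival has to wait in an M/M/c queue (Erlang C).
    a = lam / mu                       # offered load
    rho = a / c                        # server utilization (must be < 1)
    top = a**c / (factorial(c) * (1 - rho))
    bottom = sum(a**k / factorial(k) for k in range(c)) + top
    return top / bottom

def mean_wait(c, lam, mu):
    # Mean time in queue: Wq = C(c, a) / (c*mu - lam).
    return erlang_c(c, lam, mu) / (c * mu - lam)
```

    With illustrative rates λ = 1.5 and μ = 1 per minute, moving from c = 2 to c = 3 servers cuts the mean queue wait sharply, mirroring the roughly 50% reduction the simulation reports.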

  15. Indoor environment and energy consumption optimization using field measurements and building energy simulation

    Christensen, Jørgen Erik; Chasapis, Kleanthis; Gazovic, Libor

    2015-01-01

    Modern buildings are usually equipped with advanced climate conditioning systems to ensure the comfort of their occupants. However, analysis of their actual operation usually identifies large potential for improvements with respect to their efficiency. The present study investigated the potential for improve..., which was used for optimization of the building's performance. The proposed optimization scenarios bring a 21-37% reduction in heating consumption and a 7-12% improvement in thermal comfort. The approach (procedure) can help to optimize building operation and shorten the adjustment period.

  16. Parameter identification using optimization techniques in the continuous simulation programs FORSIM and MACKSIM

    Carver, M.B.; Austin, C.F.; Ross, N.E.

    1980-02-01

    This report discusses the mechanics of automated parameter identification in simulation packages, and reviews available integration and optimization algorithms and their interaction within the recently developed optimization options in the FORSIM and MACKSIM simulation packages. In the MACKSIM mass-action chemical kinetics simulation package, the form and structure of the ordinary differential equations involved is known, so the implementation of an optimizing option is relatively straightforward. FORSIM, however, is designed to integrate ordinary and partial differential equations of arbitrary definition. As the form of the equations is not known in advance, the design of the optimizing option is more intricate, but the philosophy could be applied to most simulation packages. In either case, however, the invocation of the optimizing interface is simple and user-oriented. Full details for the use of the optimizing mode for each program are given; specific applications are used as examples. (O.T.)

  17. A Framework for the Optimization of Discrete-Event Simulation Models

    Joshi, B. D.; Unal, R.; White, N. H.; Morris, W. D.

    1996-01-01

    With the growing use of computer modeling and simulation in all aspects of engineering, the scope of traditional optimization has to be extended to include simulation models. Some unique aspects have to be addressed while optimizing via stochastic simulation models. The optimization procedure has to explicitly account for the randomness inherent in the stochastic measures predicted by the model. This paper outlines a general purpose framework for optimization of terminating discrete-event simulation models. The methodology combines a chance constraint approach for problem formulation, together with standard statistical estimation and analysis techniques. The applicability of the optimization framework is illustrated by minimizing the operation and support resources of a launch vehicle, through a simulation model.

  18. Simulation and Optimization of SCR System for Direct-injection Diesel Engine

    Guanqiang Ruan

    2014-11-01

    The turbo diesel SCR system has been researched and analyzed in this paper. Using the CATIA software, a three-dimensional physical model of the SCR system was established, and with the AVL-FIRE software the boundary conditions were set and the system was simulated and optimized. The optimization of the SCR system mainly concerned the spray angle: the NO conversion achieved at different angles was compared to obtain better optimization results. Finally, the optimization results were verified by bench tests, and the experimental results are quite consistent with the simulation.

  19. Optimization and Simulation Modeling of Disaster Relief Supply Chain: A Literature Review

    Feng, Keli; Bizimana, Emmanuel; Agu, Deedee D.; Issac, Tana T.

    2012-01-01

    Recent natural and man-made disasters underscore the need for a resilient and agile disaster relief supply chain to mitigate damage and save people's lives. Optimization and simulation modeling have become powerful and useful tools to help decision makers tackle problems related to the disaster relief supply chain. This paper reviews optimization and simulation models used in the field of disaster relief supply chains. We review the literature of the facility location optimization problems of ...

  20. Optimal design and uncertainty quantification in blood flow simulations for congenital heart disease

    Marsden, Alison

    2009-11-01

    Recent work has demonstrated substantial progress in capabilities for patient-specific cardiovascular flow simulations. Recent advances include increasingly complex geometries, physiological flow conditions, and fluid structure interaction. However, inputs to these simulations, including medical image data, catheter-derived pressures and material properties, can have significant uncertainties associated with them. For simulations to predict clinically useful and reliable output information, it is necessary to quantify the effects of input uncertainties on outputs of interest. In addition, blood flow simulation tools can now be efficiently coupled to shape optimization algorithms for surgery design applications, and these tools should incorporate uncertainty information. We present a unified framework to systematically and efficiently account for uncertainties in simulations using adaptive stochastic collocation. In addition, we present a framework for derivative-free optimization of cardiovascular geometries, and layer these tools to perform optimization under uncertainty. These methods are demonstrated using simulations and surgery optimization to improve hemodynamics in pediatric cardiology applications.

  1. Membrane Modeling, Simulation and Optimization for Propylene/Propane Separation

    Alshehri, Ali

    2015-06-01

    Energy efficiency is critical for sustainable industrial growth and the reduction of environmental impacts. Energy consumption by the industrial sector accounts for more than half of the total global energy usage and, therefore, greater attention is focused on enhancing this sector’s energy efficiency. It is predicted that by 2020, more than 20% of today’s energy consumption can be avoided in countries that have effectively implemented an action plan towards efficient energy utilization. Breakthroughs in material synthesis of high selective membranes have enabled the technology to be more energy efficient. Hence, high selective membranes are increasingly replacing conventional energy intensive separation processes, such as distillation and adsorption units. Moreover, the technology offers more special features (which are essential for special applications) and its small footprint makes membrane technology suitable for platform operations (e.g., nitrogen enrichment for oil and gas offshore sites). In addition, its low maintenance characteristics allow the technology to be applied to remote operations. For these reasons, amongst other, the membrane technology market is forecast to reach $16 billion by 2017. This thesis is concerned with the engineering aspects of membrane technology and covers modeling, simulation and optimization of membranes as a stand-alone process or as a unit operation within a hybrid system. Incorporating the membrane model into a process modeling software simplifies the simulation and optimization of the different membrane processes and hybrid configurations, since all other unit operations are pre-configured. Various parametric analyses demonstrated that only the membrane selectivity and transmembrane pressure ratio parameters define a membrane’s ability to accomplish a certain separation task. Moreover, it was found that both membrane selectivity and pressure ratio exhibit a minimum value that is only defined by the feed composition

  2. SIMULATION AS A TOOL FOR PROCESS OPTIMIZATION OF LOGISTIC SYSTEMS

    Radko Popovič

    2015-09-01

    The paper deals with the simulation of production processes, especially with the Process Simulate module of the Siemens Tecnomatix software. Tecnomatix Process Simulate is designed for building new or modifying existing production processes. A simulation created in this software makes it possible to quickly test planned changes or improvements to the production processes. On the basis of the simulation, one can envision the future state of the real production system. A 3D simulation can reflect the actual status and conditions of the running system and, after some improvements, can show a possible future state of the production system.

  3. Controlled patterns of daytime light exposure improve circadian adjustment in simulated night work.

    Dumont, Marie; Blais, Hélène; Roy, Joanie; Paquet, Jean

    2009-10-01

    Circadian misalignment between the endogenous circadian signal and the imposed rest-activity cycle is one of the main sources of sleep and health troubles in night shift workers. Timed bright light exposure during night work can reduce circadian misalignment in night workers, but this approach is limited by difficulties in incorporating bright light treatment into most workplaces. Controlled light and dark exposure during the daytime also has a significant impact on circadian phase and could be easier to implement in real-life situations. The authors previously described distinctive light exposure patterns in night nurses with and without circadian adaptation. In the present study, the main features of these patterns were used to design daytime light exposure profiles. Profiles were then tested in a laboratory simulation of night work to evaluate their efficacy in reducing circadian misalignment in night workers. The simulation included 2 day shifts followed by 4 consecutive night shifts (2400-0800 h). Healthy subjects (15 men and 23 women; 20-35 years old) were divided into 3 groups to test 3 daytime light exposure profiles designed to produce respectively a phase delay (delay group, n=12), a phase advance (advance group, n=13), or an unchanged circadian phase (stable group, n=13). In all 3 groups, light intensity was set at 50 lux during the nights of simulated night work. Salivary dim light melatonin onset (DLMO) showed a significant phase advance of 2.3 h (+/-1.3 h) in the advance group and a significant phase delay of 4.1 h (+/-1.3 h) in the delay group. The stable group showed a smaller but significant phase delay of 1.7 h (+/-1.6 h). Urinary 6-sulfatoxymelatonin (aMT6s) acrophases were highly correlated to salivary DLMOs. Urinary aMT6s acrophases were used to track daily phase shifts. They showed that phase shifts occurred rapidly and differed between the 3 groups by the 3rd night of simulated night work. These results show that significant phase shifts can...

  4. Optimizing Cognitive Load for Learning from Computer-Based Science Simulations

    Lee, Hyunjeong; Plass, Jan L.; Homer, Bruce D.

    2006-01-01

    How can cognitive load in visual displays of computer simulations be optimized? Middle-school chemistry students (N = 257) learned with a simulation of the ideal gas law. Visual complexity was manipulated by separating the display of the simulations in two screens (low complexity) or presenting all information on one screen (high complexity). The…

  5. Simulation Modeling to Compare High-Throughput, Low-Iteration Optimization Strategies for Metabolic Engineering.

    Heinsch, Stephen C; Das, Siba R; Smanski, Michael J

    2018-01-01

    Increasing the final titer of a multi-gene metabolic pathway can be viewed as a multivariate optimization problem. While numerous multivariate optimization algorithms exist, few are specifically designed to accommodate the constraints posed by genetic engineering workflows. We present a strategy for optimizing expression levels across an arbitrary number of genes that requires few design-build-test iterations. We compare the performance of several optimization algorithms on a series of simulated expression landscapes. We show that optimal experimental design parameters depend on the degree of landscape ruggedness. This work provides a theoretical framework for designing and executing numerical optimization on multi-gene systems.

  6. An optimization method of relativistic backward wave oscillator using particle simulation and genetic algorithms

    Chen, Zaigao; Wang, Jianguo [Key Laboratory for Physical Electronics and Devices of the Ministry of Education, Xi'an Jiaotong University, Xi'an, Shaanxi 710049 (China); Northwest Institute of Nuclear Technology, P.O. Box 69-12, Xi'an, Shaanxi 710024 (China)]; Wang, Yue; Qiao, Hailiang; Zhang, Dianhui [Northwest Institute of Nuclear Technology, P.O. Box 69-12, Xi'an, Shaanxi 710024 (China)]; Guo, Weijie [Key Laboratory for Physical Electronics and Devices of the Ministry of Education, Xi'an Jiaotong University, Xi'an, Shaanxi 710049 (China)]

    2013-11-15

    An optimal design method for high-power microwave sources using particle simulation and parallel genetic algorithms is presented in this paper. The output power of the high-power microwave device, simulated by the fully electromagnetic particle simulation code UNIPIC, is used as the fitness function, and float-encoding genetic algorithms are used to optimize the high-power microwave devices. Using this method, we encode the heights of the non-uniform slow wave structure in the relativistic backward wave oscillator (RBWO) and optimize the parameters on massively parallel processors. Simulation results demonstrate that we can obtain the optimal parameters of the non-uniform slow wave structure in the RBWO, and the output microwave power increases by 52.6% after the device is optimized.
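
    A float-encoding genetic algorithm of the kind described can be sketched as follows. The expensive PIC simulation is replaced here by a cheap analytic fitness function, and the operator choices (arithmetic crossover, uniform mutation, elitism) are illustrative assumptions rather than the paper's exact operators:

```python
import random

def genetic_optimize(fitness, bounds, pop_size=30, generations=60,
                     mutation_rate=0.1, elite=2):
    # Each individual is a float vector (e.g., heights of the slow wave
    # structure); fitness would be the simulated output power in practice.
    pop = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        next_pop = [ind[:] for ind in pop[:elite]]           # elitism
        while len(next_pop) < pop_size:
            p1, p2 = random.sample(pop[:pop_size // 2], 2)   # select from top half
            child = [(a + b) / 2 for a, b in zip(p1, p2)]    # arithmetic crossover
            for i, (lo, hi) in enumerate(bounds):
                if random.random() < mutation_rate:
                    child[i] = random.uniform(lo, hi)        # uniform mutation
            next_pop.append(child)
        pop = next_pop
    return max(pop, key=fitness)
```

    In the massively parallel setting the paper describes, the fitness evaluations within each generation are independent and can be farmed out to separate processors, which is what makes GA-driven PIC optimization practical.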

  7. Computational Approaches to Simulation and Optimization of Global Aircraft Trajectories

    Ng, Hok Kwan; Sridhar, Banavar

    2016-01-01

    This study examines three possible approaches to improving the speed of generating wind-optimal routes for air traffic at the national or global level: (a) using the resources of a supercomputer, (b) running the computations on multiple commercially available computers, and (c) implementing those same algorithms in NASA's Future ATM Concepts Evaluation Tool (FACET); these are compared to a standard implementation run on a single CPU. Wind-optimal aircraft trajectories are computed using global air traffic schedules. The run time and wait time on the supercomputer for trajectory optimization using various numbers of CPUs ranging from 80 to 10,240 units are compared with the total computational time for running the same computation on a single desktop computer and on multiple commercially available computers for potential computational enhancement through parallel processing on the computer clusters. This study also re-implements the trajectory optimization algorithm for further reduction of computational time through algorithm modifications and integrates that with FACET to facilitate the use of the new features which calculate time-optimal routes between worldwide airport pairs in a wind field for use with existing FACET applications. The implementations of trajectory optimization algorithms use the MATLAB, Python, and Java programming languages. The performance evaluations are done by comparing their computational efficiencies and based on the potential application of optimized trajectories. The paper shows that in the absence of special privileges on a supercomputer, a cluster of commercially available computers provides a feasible approach for national and global air traffic system studies.

  8. Rapid Adjustment of Circadian Clocks to Simulated Travel to Time Zones across the Globe.

    Harrison, Elizabeth M; Gorman, Michael R

    2015-12-01

    Daily rhythms in mammalian physiology and behavior are generated by a central pacemaker located in the hypothalamic suprachiasmatic nuclei (SCN), the timing of which is set by light from the environment. When the ambient light-dark cycle is shifted, as occurs with travel across time zones, the SCN and its output rhythms must reset or re-entrain their phases to match the new schedule, a sluggish process requiring about 1 day per hour shift. Using a global assay of circadian resetting to 6 equidistant time-zone meridians, we document this characteristically slow and distance-dependent resetting of Syrian hamsters under typical laboratory lighting conditions, which mimic summer day lengths. The circadian pacemaker, however, is additionally entrainable with respect to its waveform (i.e., the shape of the 24-h oscillation), allowing for tracking of seasonally varying day lengths. We here demonstrate an unprecedented, light exposure-based acceleration in phase resetting following 2 manipulations of circadian waveform. Adaptation of circadian waveforms to long winter nights (8 h light, 16 h dark) doubled the shift response in the first 3 days after the shift. Moreover, a bifurcated waveform induced by exposure to a novel 24-h light-dark-light-dark cycle permitted nearly instant resetting to phase shifts from 4 to 12 h in magnitude, representing a 71% reduction in the mismatch between the activity rhythm and the new photocycle. Thus, a marked enhancement of phase shifting can be induced via nonpharmacological, noninvasive manipulation of the circadian pacemaker waveform in a model species for mammalian circadian rhythmicity. Given the evidence of conserved flexibility in the human pacemaker waveform, these findings raise the promise of flexible resetting applicable to circadian disruption in shift workers, frequent time-zone travelers, and any individual forced to adjust to challenging schedules. © 2015 The Author(s).

  9. Simulation optimization based ant colony algorithm for the uncertain quay crane scheduling problem

    Naoufal Rouky

    2019-01-01

    This work is devoted to the study of the Uncertain Quay Crane Scheduling Problem (QCSP), where the loading/unloading times of containers and the travel times of quay cranes are considered uncertain. The problem is solved with a Simulation Optimization approach, which takes advantage of the great possibilities offered by simulation to model the real details of the problem and the capacity of optimization to find solutions of good quality. An Ant Colony Optimization (ACO) meta-heuristic hybridized with a Variable Neighborhood Descent (VND) local search is proposed to determine the assignments of tasks to quay cranes and the sequences of execution of tasks on each crane. Simulation is used inside the optimization algorithm to generate scenarios in agreement with the probability distributions of the uncertain parameters; thus, we carry out stochastic evaluations of the solutions found by each ant. The proposed optimization algorithm is first tested in the deterministic case on several well-known benchmark instances. Then, in the stochastic case, since no other work has studied exactly the same problem with the same assumptions, the Simulation Optimization approach is compared with the deterministic version. The experimental results show that the optimization algorithm is competitive compared to existing methods and that the solutions found by the Simulation Optimization approach are more robust than those found by the deterministic optimization algorithm.

  10. Optimal Classical Simulation of State-Independent Quantum Contextuality

    Cabello, Adán; Gu, Mile; Gühne, Otfried; Xu, Zhen-Peng

    2018-03-01

    Simulating quantum contextuality with classical systems requires memory. A fundamental yet open question is what is the minimum memory needed and, therefore, the precise sense in which quantum systems outperform classical ones. Here, we make rigorous the notion of classically simulating quantum state-independent contextuality (QSIC) in the case of a single quantum system submitted to an infinite sequence of measurements randomly chosen from a finite QSIC set. We obtain the minimum memory needed to simulate arbitrary QSIC sets via classical systems under the assumption that the simulation should not contain any oracular information. In particular, we show that, while classically simulating two qubits tested with the Peres-Mermin set requires log2 24 ≈ 4.585 bits, simulating a single qutrit tested with the Yu-Oh set requires, at least, 5.740 bits.
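
    The memory bound quoted for the Peres-Mermin set is simply the base-2 logarithm of the number of classical states the simulation must distinguish:

```python
import math

states = 24                # classical states needed for the Peres-Mermin set
bits = math.log2(states)
print(round(bits, 3))      # 4.585 bits, as quoted above
```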

  11. Optimization of Particle Search Algorithm for CFD-DEM Simulations

    G. Baryshev

    2013-09-01

    The discrete element method has numerous applications in particle physics. However, simulating particles as discrete entities can become costly for large systems. In time-driven DEM simulations, most of the computation time is taken by the contact search stage. We propose an efficient collision detection method based on sorting particles by their coordinates. Using multiple sorting criteria minimizes the number of potential neighbours and makes this approach fit for the simulation of massive systems in 3D. This method is compared to a common approach that consists of placing particles onto a grid of cells. An advantage of the new approach is its independence from particle radius and domain size.
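
    A minimal sketch of sort-based contact search along one axis (a sweep-and-prune-style broad phase; the paper's multi-criteria sorting is more elaborate) might look like this, assuming equal-radius particles in 2D:

```python
def find_contacts(particles, radius):
    # Broad phase: sort particle indices by x coordinate, then for each
    # particle only scan forward while the x-gap can still allow contact.
    order = sorted(range(len(particles)), key=lambda i: particles[i][0])
    pairs = []
    for a in range(len(order)):
        i = order[a]
        for b in range(a + 1, len(order)):
            j = order[b]
            if particles[j][0] - particles[i][0] > 2 * radius:
                break  # everything further along x is out of reach
            dx = particles[i][0] - particles[j][0]
            dy = particles[i][1] - particles[j][1]
            if dx * dx + dy * dy <= (2 * radius) ** 2:
                pairs.append((min(i, j), max(i, j)))
    return pairs
```

    The early `break` is what makes the cost depend on local density rather than on domain size or particle radius, which is the advantage claimed over grid-of-cells methods.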

  12. Heat transfer simulation and retort program adjustment for thermal processing of wheat based Haleem in semi-rigid aluminum containers.

    Vatankhah, Hamed; Zamindar, Nafiseh; Shahedi Baghekhandan, Mohammad

    2015-10-01

    A mixed computational strategy was used to simulate and optimize the thermal processing of Haleem, an ancient eastern food, in semi-rigid aluminum containers. Average temperature values from the experiments showed no significant difference (α = 0.05) from the predicted temperatures at the same positions. According to the model, the slowest-heating zone was located at the geometrical center of the container, where F0 was estimated to be 23.8 min. A 19 min decrease in the holding time of the treatment was estimated to optimize the heating operation, since the preferred F0 of some starch- or meat-based fluid foods is about 4.8-7.5 min.
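
    The F0 value mentioned above is the standard lethality integral. A minimal sketch, assuming the conventional reference temperature of 121.1 °C and z = 10 °C:

```python
def f0_value(temps_c, dt_min, t_ref=121.1, z=10.0):
    # F0 = sum of 10**((T - T_ref)/z) * dt over the process time,
    # i.e., equivalent minutes of heating at 121.1 C.
    return sum(10 ** ((t - t_ref) / z) * dt_min for t in temps_c)
```

    Holding the cold spot at exactly 121.1 °C for ten one-minute steps gives F0 = 10 min; temperatures below the reference contribute exponentially less, which is why the slowest-heating zone governs the process.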

  13. Simulation-based robust optimization for signal timing and setting.

    2009-12-30

    The performance of signal timing plans obtained from traditional approaches for pre-timed (fixed-time or actuated) control systems is often unstable under fluctuating traffic conditions. This report develops a general approach for optimizing the ...

  14. Finite Element Multidisciplinary Optimization Simulation of Flight Vehicles, Phase I

    National Aeronautics and Space Administration — The proposed effort is concerned with the development of a novel optimization scheme and computer software for the effective design of advanced aerospace vehicles....

  15. Simulation-based optimal Bayesian experimental design for nonlinear systems

    Huan, Xun; Marzouk, Youssef M.

    2013-01-01

    The optimal selection of experimental conditions is essential to maximizing the value of data for inference and prediction, particularly in situations where experiments are time-consuming and expensive to conduct. We propose a general mathematical

  16. High-speed LWR transients simulation for optimizing emergency response

    Wulff, W.; Cheng, H.S.; Lekach, S.V.; Mallen, A.N.; Stritar, A.

    1984-01-01

    The purpose of computer-assisted emergency response in nuclear power plants, and the requirements for achieving such a response, are presented. An important requirement is the attainment of realistic high-speed plant simulations at the reactor site. Currently pursued development programs for plant simulations are reviewed. Five modeling principles are established and a criterion is presented for selecting numerical procedures and efficient computer hardware to achieve high-speed simulations. A newly developed technology for high-speed power plant simulation is described and results are presented. It is shown that simulation speeds ten times greater than real-time process-speeds are possible, and that plant instrumentation can be made part of the computational loop in a small, on-site minicomputer. Additional technical issues are presented which must still be resolved before the newly developed technology can be implemented in a nuclear power plant

  17. Optimization and simulation of tandem column supercritical fluid chromatography separations using column back pressure as a unique parameter.

    Wang, Chunlei; Tymiak, Adrienne A; Zhang, Yingru

    2014-04-15

    Tandem column supercritical fluid chromatography (SFC) has been demonstrated to be a useful technique for resolving complex mixtures by serially coupling two columns of different selectivity. The overall selectivity of a tandem column separation is the retention-time-weighted average of the selectivity of each coupled column. Currently, method development relies merely on extensive screenings and is often a hit-or-miss process. No attention is paid to independently adjusting the retention and selectivity contributions from the individual columns. In this study, we show how tandem column SFC selectivity can be optimized by changing the relative dimensions (length or inner diameter) of the coupled columns. Moreover, we apply column back pressure as a unique parameter for SFC optimization. Continuous tuning of tandem column SFC selectivity is illustrated, for the first time, through back pressure adjustments of the upstream column. In addition, we show how and why changing the coupling order of the columns can produce dramatically different separations. Using the empirical mathematical equation derived in our previous study, we also demonstrate a simulation of tandem column separations based on a single retention time measurement on each column. The simulation compares well with experimental results and correctly predicts the effects of column order and back pressure on the separations. Finally, considerations on instrument and column hardware requirements are discussed.
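
    The retention-time-weighted average stated in the abstract can be written directly. Treating selectivity as linearly weighted by the time spent on each column is a simplification; the authors' empirical equation is not reproduced here.

```python
def tandem_selectivity(alpha1, t1, alpha2, t2):
    # Overall selectivity as the retention-time-weighted average of the
    # two columns' selectivities (illustrative linear form).
    return (alpha1 * t1 + alpha2 * t2) / (t1 + t2)
```

    Increasing the relative retention on the upstream column, whether by lengthening it or raising its back pressure, shifts the overall selectivity toward that column's value, which is why column dimensions and back pressure act as continuous tuning knobs.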

  18. Reliability-based performance simulation for optimized pavement maintenance

    Chou, Jui-Sheng; Le, Thanh-Son

    2011-01-01

    Roadway pavement maintenance is essential for driver safety and highway infrastructure efficiency. However, regular preventive maintenance and rehabilitation (M and R) activities are extremely costly. Unfortunately, the funds available for the M and R of highway pavement are often given lower priority compared to other national development policies, therefore, available funds must be allocated wisely. Maintenance strategies are typically implemented by optimizing only the cost whilst the reliability of facility performance is neglected. This study proposes a novel algorithm using multi-objective particle swarm optimization (MOPSO) technique to evaluate the cost-reliability tradeoff in a flexible maintenance strategy based on non-dominant solutions. Moreover, a probabilistic model for regression parameters is employed to assess reliability-based performance. A numerical example of a highway pavement project is illustrated to demonstrate the efficacy of the proposed MOPSO algorithms. The analytical results show that the proposed approach can help decision makers to optimize roadway maintenance plans. - Highlights: →A novel algorithm using multi-objective particle swarm optimization technique. → Evaluation of the cost-reliability tradeoff in a flexible maintenance strategy. → A probabilistic model for regression parameters is employed to assess reliability-based performance. → The proposed approach can help decision makers to optimize roadway maintenance plans.

  20. Dose optimization in simulated permanent interstitial implant of prostate brachytherapy

    Faria, Fernando Pereira de

    2006-01-01

    Any cancer treatment that uses some modality of radiotherapy is planned before being executed. In general, the goal in radiotherapy is to irradiate the target while minimizing the incidence of radiation in healthy surrounding tissues. Plans differ according to the modality of radiotherapy, the type of cancer, and where it is located. This work approaches the problem of dose optimization for planning prostate cancer treatment through low dose-rate brachytherapy with Iodine-125 or Palladium-103 seeds. An algorithm for dose calculation and optimization was constructed to find the seed configuration that best fits the relevant clinical criteria, such as the doses tolerated by the urethra and rectum and the dose desired for the prostate. The algorithm automatically finds this configuration from the prostate geometry, established in two or three dimensions using ultrasound, magnetic resonance, or tomography images, and from minimum restrictions on the positions of the seeds in the prostate and of the needles in a template. Six patterns of seed distribution based on clinical criteria were suggested and tested in this work. Each of these patterns generated a space of possible seed configurations for the prostate, which was tested by the dose calculation and optimization algorithm. The configurations that satisfied the clinical criteria were submitted to a test according to an optimization function suggested in this work. The configuration that produced the maximum value of this function was considered the optimized one. (author)
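The selection logic described above, enumerating candidate seed configurations, discarding those that violate clinical dose constraints, and keeping the feasible one that maximizes an optimization function, can be sketched as follows. The inverse-square dose model, the candidate sites, and the dose limits here are toy placeholders, not the author's actual model:

```python
import itertools

# Sketch: constrained search over seed configurations. A configuration is
# feasible if the target receives at least d_min while an organ at risk
# (here "urethra") stays below d_max; among feasible configurations, the one
# maximizing a toy objective (target coverage) is kept. All numbers are
# hypothetical illustrations.

def dose_at(point, seeds):
    # Hypothetical inverse-square dose contribution summed over seeds.
    return sum(1.0 / (0.01 + (point[0] - s[0])**2 + (point[1] - s[1])**2)
               for s in seeds)

def feasible(seeds, target, organ_at_risk, d_min, d_max):
    return (dose_at(target, seeds) >= d_min
            and dose_at(organ_at_risk, seeds) <= d_max)

candidate_sites = [(0, 0), (1, 0), (0, 1), (1, 1), (2, 2)]
target, urethra = (0.5, 0.5), (2.0, 2.0)

best, best_score = None, float("-inf")
for config in itertools.combinations(candidate_sites, 3):
    if not feasible(config, target, urethra, d_min=4.0, d_max=50.0):
        continue  # violates a clinical constraint: discard
    score = dose_at(target, config)  # toy optimization function
    if score > best_score:
        best, best_score = config, score
```

In practice the configuration space is far too large for brute force, which is why the author restricts it with distribution patterns first; the sketch only shows the constrain-then-maximize structure.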

  1. Modeling, simulation and optimization for science and technology

    Kuznetsov, Yuri; Neittaanmäki, Pekka; Pironneau, Olivier

    2014-01-01

    This volume contains thirteen articles on advances in applied mathematics and computing methods for engineering problems. Six papers are on optimization methods and algorithms with emphasis on problems with multiple criteria; four articles are on numerical methods for applied problems modeled with nonlinear PDEs; two contributions are on abstract estimates for error analysis; finally one paper deals with rare events in the context of uncertainty quantification. Applications include aerospace, glaciology and nonlinear elasticity. Herein is a selection of contributions from speakers at two conferences on applied mathematics held in June 2012 at the University of Jyväskylä, Finland. The first conference, “Optimization and PDEs with Industrial Applications” celebrated the seventieth birthday of Professor Jacques Périaux of the University of Jyväskylä and Polytechnic University of Catalonia (Barcelona Tech), and the second conference, “Optimization and PDEs with Applications” celebrated the seventy-fi...

  2. Variable Ratio Hydrostatic Transmission Simulator for Optimal Wind Power Drivetrains

    Jose M. Garcia-Bravo

    2017-01-01

    This work presents a hydromechanical transmission coupled to an electric AC motor and DC generator to simulate a wind power turbine drive train. The goal of this project was to demonstrate and simulate the ability of a hydrostatic variable ratio system to produce constant electric power at varying wind speeds. The experimental results show that the system can maintain a constant voltage when a 40% variation in input speed is produced. An accompanying computer simulation of the system was built and experimentally validated showing a discrete error no larger than 12%. Both the simulation and the experimental results show that the electrical power output can be regulated further if an energy storage device is used to absorb voltage spikes produced by abrupt changes in wind speed or wind direction.

  3. Optimal Rendezvous and Docking Simulator for Elliptical Orbits, Phase I

    National Aeronautics and Space Administration — It is proposed to develop and implement a simulation of spacecraft rendezvous and docking guidance, navigation, and control in elliptical orbit. The foundation of...

  4. Adjusting for treatment switching in randomised controlled trials - A simulation study and a simplified two-stage method.

    Latimer, Nicholas R; Abrams, K R; Lambert, P C; Crowther, M J; Wailoo, A J; Morden, J P; Akehurst, R L; Campbell, M J

    2017-04-01

    Estimates of the overall survival benefit of new cancer treatments are often confounded by treatment switching in randomised controlled trials (RCTs) - whereby patients randomised to the control group are permitted to switch onto the experimental treatment upon disease progression. In health technology assessment, estimates of the unconfounded overall survival benefit associated with the new treatment are needed. Several switching adjustment methods have been advocated in the literature, some of which have been used in health technology assessment. However, it is unclear which methods are likely to produce the least bias in realistic RCT-based scenarios. We simulated RCTs in which switching, associated with patient prognosis, was permitted. Treatment effect size and time dependency, switching proportions and disease severity were varied across scenarios. We assessed the performance of alternative adjustment methods based upon bias, coverage and mean squared error, related to the estimation of the true restricted mean survival in the absence of switching in the control group. We found that when the treatment effect was not time-dependent, rank preserving structural failure time models (RPSFTM) and iterative parameter estimation methods produced low levels of bias. However, in the presence of a time-dependent treatment effect, these methods produced higher levels of bias, similar to those produced by an inverse probability of censoring weights method. The inverse probability of censoring weights and structural nested models produced high levels of bias when switching proportions exceeded 85%. A simplified two-stage Weibull method produced low bias across all scenarios and, provided the treatment switching mechanism is suitable, represents an appropriate adjustment method.
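The estimand the adjustment methods are judged against is restricted mean survival: the area under the survival curve up to a chosen horizon. A minimal sketch with hypothetical, fully observed survival times (no censoring), so the empirical survivor function applies directly:

```python
# Sketch: restricted mean survival time (RMST) up to a horizon t* equals the
# mean of min(T, t*) over subjects, i.e. the area under the empirical
# survival curve truncated at t*. The survival times below are hypothetical.

def rmst(times, horizon):
    """RMST from fully observed survival times (no censoring)."""
    return sum(min(t, horizon) for t in times) / len(times)

control = [2.0, 5.0, 7.5, 10.0, 12.0]
treated = [4.0, 8.0, 11.0, 14.0, 20.0]

# Treatment benefit expressed as the difference in RMST at a 10-unit horizon.
benefit = rmst(treated, horizon=10.0) - rmst(control, horizon=10.0)
```

In the simulation study, each adjustment method's bias is the gap between its estimate of this quantity and the true value computed from the counterfactual no-switching control arm; with real censored data an estimator such as Kaplan-Meier would replace the empirical mean above.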

  5. A Novel Design for Adjustable Stiffness Artificial Tendon for the Ankle Joint of a Bipedal Robot: Modeling & Simulation

    Aiman Omer

    2015-12-01

    Bipedal humanoid robots are expected to play a major role in the future. Performing bipedal locomotion requires high energy due to the high torque that needs to be provided by its legs’ joints. Taking the WABIAN-2R as an example, it uses harmonic gears in its joints to increase the torque. However, using such a mechanism increases the weight of the legs and therefore increases energy consumption. Therefore, the idea of developing a mechanism with adjustable stiffness to be connected to the leg joint is introduced here. The proposed mechanism would have the ability to provide passive and active motion. The mechanism would be attached to the ankle pitch joint as an artificial tendon. Using computer simulations, the dynamical performance of the mechanism is analytically evaluated.

  6. Simulation-Based Optimization for Storage Allocation Problem of Outbound Containers in Automated Container Terminals

    Ning Zhao

    2015-01-01

    Storage allocation of outbound containers is a key factor in the performance of the container handling system in automated container terminals. Improper storage plans of outbound containers make quay crane (QC) waiting inevitable; hence, the vessel handling time will be lengthened. A simulation-based optimization method is proposed in this paper for the storage allocation problem of outbound containers in automated container terminals (SAPOBA). A simulation model is built up with a Timed-Colored-Petri-Net (TCPN), used to evaluate the QC waiting time of storage plans. Two optimization approaches, based on Particle Swarm Optimization (PSO) and Genetic Algorithm (GA), are proposed to form the complete simulation-based optimization method. The effectiveness of this method is verified by experiment through a comparison of the two optimization approaches.
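In simulation-based optimization of this kind, the metaheuristic proposes candidate plans and the simulation model scores them. A minimal sketch of the PSO half of that loop; the quadratic stand-in objective below replaces the TCPN simulation (which in the paper returns the QC waiting time of a plan), and all parameter values are generic defaults, not the paper's settings:

```python
import random

# Sketch: a basic particle swarm optimization (PSO) loop driven by a
# simulation-style black-box objective. Here the "simulation" is a toy
# surrogate with its minimum at x = 3; in the paper each evaluation would
# run the TCPN model of the terminal.

random.seed(0)

def simulate_waiting_time(x):
    return (x - 3.0) ** 2  # hypothetical surrogate objective

n_particles, n_iters, w, c1, c2 = 10, 50, 0.7, 1.5, 1.5
pos = [random.uniform(-10, 10) for _ in range(n_particles)]
vel = [0.0] * n_particles
pbest = pos[:]                                   # per-particle best position
pbest_val = [simulate_waiting_time(p) for p in pos]
gbest = min(pbest, key=simulate_waiting_time)    # swarm-wide best

for _ in range(n_iters):
    for i in range(n_particles):
        r1, r2 = random.random(), random.random()
        # Velocity update: inertia + pull toward personal and global bests.
        vel[i] = w*vel[i] + c1*r1*(pbest[i] - pos[i]) + c2*r2*(gbest - pos[i])
        pos[i] += vel[i]
        val = simulate_waiting_time(pos[i])
        if val < pbest_val[i]:
            pbest[i], pbest_val[i] = pos[i], val
            if val < simulate_waiting_time(gbest):
                gbest = pos[i]
```

A real storage-allocation plan is a discrete assignment rather than a scalar, so the position/velocity update would need a discrete PSO encoding; the sketch only shows the evaluate-and-update structure shared with the GA variant.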

  7. Performance optimization and validation of ADM1 simulations under anaerobic thermophilic conditions

    Atallah, Nabil M.

    2014-12-01

    In this study, two experimental sets of data, each involving two thermophilic anaerobic digesters treating food waste, were simulated using the Anaerobic Digestion Model No. 1 (ADM1). A sensitivity analysis was conducted, using both data sets of one digester, for parameter optimization based on five measured performance indicators: methane generation, pH, acetate, total COD, ammonia, and an equally weighted combination of the five indicators. The simulation results revealed that while optimization with respect to methane alone, a commonly adopted approach, succeeded in simulating methane experimental results, it predicted other intermediary outputs less accurately. On the other hand, the multi-objective optimization has the advantage of providing better results than methane optimization despite not capturing the intermediary output. The results from the parameter optimization were validated through their independent application to the data sets of the second digester.

  8. Performance optimization and validation of ADM1 simulations under anaerobic thermophilic conditions.

    Atallah, Nabil M; El-Fadel, Mutasem; Ghanimeh, Sophia; Saikaly, Pascal; Abou-Najm, Majdi

    2014-12-01

    In this study, two experimental sets of data, each involving two thermophilic anaerobic digesters treating food waste, were simulated using the Anaerobic Digestion Model No. 1 (ADM1). A sensitivity analysis was conducted, using both data sets of one digester, for parameter optimization based on five measured performance indicators: methane generation, pH, acetate, total COD, ammonia, and an equally weighted combination of the five indicators. The simulation results revealed that while optimization with respect to methane alone, a commonly adopted approach, succeeded in simulating methane experimental results, it predicted other intermediary outputs less accurately. On the other hand, the multi-objective optimization has the advantage of providing better results than methane optimization despite not capturing the intermediary output. The results from the parameter optimization were validated through their independent application to the data sets of the second digester. Copyright © 2014 Elsevier Ltd. All rights reserved.

  9. Performance optimization and validation of ADM1 simulations under anaerobic thermophilic conditions

    Atallah, Nabil M.; El-Fadel, Mutasem E.; Ghanimeh, Sophia A.; Saikaly, Pascal; Abou Najm, Majdi R.

    2014-01-01

    In this study, two experimental sets of data, each involving two thermophilic anaerobic digesters treating food waste, were simulated using the Anaerobic Digestion Model No. 1 (ADM1). A sensitivity analysis was conducted, using both data sets of one digester, for parameter optimization based on five measured performance indicators: methane generation, pH, acetate, total COD, ammonia, and an equally weighted combination of the five indicators. The simulation results revealed that while optimization with respect to methane alone, a commonly adopted approach, succeeded in simulating methane experimental results, it predicted other intermediary outputs less accurately. On the other hand, the multi-objective optimization has the advantage of providing better results than methane optimization despite not capturing the intermediary output. The results from the parameter optimization were validated through their independent application to the data sets of the second digester.

  10. Applied simulation and optimization 2 new applications in logistics, industrial and aeronautical practice

    Mota, Idalia

    2017-01-01

    Building on the author’s earlier Applied Simulation and Optimization, this book presents novel methods for solving problems in industry, based on hybrid simulation-optimization approaches that combine the advantages of both paradigms. The book serves as a comprehensive guide to tackling scheduling, routing problems, resource allocations and other issues in industrial environments, the service industry, production processes, or supply chains and aviation. Logistics, manufacturing and operational problems can either be modelled using optimization techniques or approaches based on simulation methodologies. Optimization techniques have the advantage of performing efficiently when the problems are properly defined, but they are often developed through rigid representations that do not include or accurately represent the stochasticity inherent in real systems. Furthermore, important information is lost during the abstraction process to fit each problem into the optimization technique. On the other hand, simulatio...

  11. Method to Simulate and Optimize the Operating Conditions of a Solar-Fuel Heat Supply System

    Anarbaev, A.; Zakhidov, R.

    2011-01-01

    The problem of how to determine the optimal parameters for the solar part of a plant with respect to boiler equipment efficiency is examined. The most efficient condensing boilers are chosen for simulation. (authors)

  12. Adjustment of Turbulent Boundary-Layer Flow to Idealized Urban Surfaces: A Large-Eddy Simulation Study

    Cheng, Wai-Chi; Porté-Agel, Fernando

    2015-05-01

    Large-eddy simulations (LES) are performed to simulate the atmospheric boundary-layer (ABL) flow through idealized urban canopies represented by uniform arrays of cubes in order to better understand atmospheric flow over rural-to-urban surface transitions. The LES framework is first validated with wind-tunnel experimental data. Good agreement between the simulation results and the experimental data is found for the vertical and spanwise profiles of the mean velocities and velocity standard deviations at different streamwise locations. Next, the model is used to simulate ABL flows over surface transitions from a flat homogeneous terrain to aligned and staggered arrays of cubes with height . For both configurations, five different frontal area densities , equal to 0.028, 0.063, 0.111, 0.174 and 0.250, are considered. Within the arrays, the flow is found to adjust quickly and shows similar structure to the wake of the cubes after the second row of cubes. An internal boundary layer is identified above the cube arrays and found to have a similar depth in all different cases. At a downstream location where the flow immediately above the cube array is already adjusted to the surface, the spatially-averaged velocity is found to have a logarithmic profile in the vertical. The values of the displacement height are found to be quite insensitive to the canopy layout (aligned vs. staggered) and increase roughly from to as increases from 0.028 to 0.25. Relatively larger values of the aerodynamic roughness length are obtained for the staggered arrays, compared with the aligned cases, and a maximum value of is found at for both configurations. By explicitly calculating the drag exerted by the cubes on the flow and the drag coefficients of the cubes using our LES results, and comparing the results with existing theoretical expressions, we show that the larger values of for the staggered arrays are related to the relatively larger drag coefficients of the cubes for that
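The logarithmic profile fitted above the adjusted canopy has the standard form u(z) = (u*/κ) ln((z − d)/z0), with displacement height d and aerodynamic roughness length z0. A minimal sketch evaluating it; the friction velocity, d, and z0 values below are illustrative assumptions, not the study's fitted values:

```python
import math

# Sketch: the log-law form fitted to the spatially-averaged velocity above
# the cube canopy: u(z) = (u_star / kappa) * ln((z - d) / z0).
# Parameter values are hypothetical (heights in canopy-height units).

KAPPA = 0.4  # von Karman constant

def log_law(z, u_star, d, z0):
    return (u_star / KAPPA) * math.log((z - d) / z0)

u_star, d, z0 = 0.3, 0.7, 0.05  # illustrative values only
profile = [(z, log_law(z, u_star, d, z0)) for z in (1.0, 2.0, 4.0, 8.0)]
# Velocity increases logarithmically with height above the displaced origin.
```

Fitting d and z0 to a simulated profile inverts this relation: plotting u against ln(z − d) and adjusting d until the points fall on a straight line, whose slope gives u*/κ and whose intercept gives z0.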

  13. Simulating and Optimizing Preparative Protein Chromatography with ChromX

    Hahn, Tobias; Huuk, Thiemo; Heuveline, Vincent; Hubbuch, Jürgen

    2015-01-01

    Industrial purification of biomolecules is commonly based on a sequence of chromatographic processes, which are adapted slightly to new target components, as the time to market is crucial. To improve time and material efficiency, modeling is increasingly used to determine optimal operating conditions, thus providing new challenges for current and…

  14. Lattice Boltzmann Simulation Optimization on Leading Multicore Platforms

    Williams, Samuel; Carter, Jonathan; Oliker, Leonid; Shalf, John; Yelick, Katherine

    2008-02-01

    We present an auto-tuning approach to optimize application performance on emerging multicore architectures. The methodology extends the idea of search-based performance optimizations, popular in linear algebra and FFT libraries, to application-specific computational kernels. Our work applies this strategy to a lattice Boltzmann application (LBMHD) that historically has made poor use of scalar microprocessors due to its complex data structures and memory access patterns. We explore one of the broadest sets of multicore architectures in the HPC literature, including the Intel Clovertown, AMD Opteron X2, Sun Niagara2, STI Cell, as well as the single core Intel Itanium2. Rather than hand-tuning LBMHD for each system, we develop a code generator that allows us to identify a highly optimized version for each platform, while amortizing the human programming effort. Results show that our auto-tuned LBMHD application achieves up to a 14x improvement compared with the original code. Additionally, we present detailed analysis of each optimization, which reveals surprising hardware bottlenecks and software challenges for future multicore systems and applications.
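The core of search-based auto-tuning is simple: generate several functionally equivalent variants of a kernel, time each on the target machine, and keep the fastest. A minimal sketch of that select-by-benchmark loop; the two trivial variants below are stand-ins for generated LBMHD kernels, not code from the paper:

```python
import timeit

# Sketch: empirical variant selection, the heart of an auto-tuner. Each
# variant computes the same result; only its speed on this machine differs.

def variant_loop(data):
    total = 0.0
    for x in data:
        total += x * x
    return total

def variant_builtin(data):
    return sum(x * x for x in data)

data = [float(i) for i in range(10_000)]
variants = {"loop": variant_loop, "builtin": variant_builtin}

# Time each variant on the target platform and select the fastest.
timings = {name: timeit.timeit(lambda f=f: f(data), number=50)
           for name, f in variants.items()}
best = min(timings, key=timings.get)  # variant chosen for this platform
```

A real tuner like the paper's code generator enumerates a much larger space (loop unrolling depths, blocking factors, SIMD and prefetch options) and may prune it with heuristics, but the measure-and-pick structure is the same.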

  15. Lattice Boltzmann simulation optimization on leading multicore platforms

    Williams, S. [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Univ. of California, Berkeley, CA (United States); Carter, J. [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Oliker, L. [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Shalf, J. [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Yelick, K. [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Univ. of California, Berkeley, CA (United States)

    2008-01-01

    We present an auto-tuning approach to optimize application performance on emerging multicore architectures. The methodology extends the idea of search-based performance optimizations, popular in linear algebra and FFT libraries, to application-specific computational kernels. Our work applies this strategy to a lattice Boltzmann application (LBMHD) that historically has made poor use of scalar microprocessors due to its complex data structures and memory access patterns. We explore one of the broadest sets of multicore architectures in the HPC literature, including the Intel Clovertown, AMD Opteron X2, Sun Niagara2, STI Cell, as well as the single core Intel Itanium2. Rather than hand-tuning LBMHD for each system, we develop a code generator that allows us to identify a highly optimized version for each platform, while amortizing the human programming effort. Results show that our auto-tuned LBMHD application achieves up to a 14x improvement compared with the original code. Additionally, we present detailed analysis of each optimization, which reveals surprising hardware bottlenecks and software challenges for future multicore systems and applications.

  16. Nonlinear dynamic simulation of optimal depletion of crude oil in the lower 48 United States

    Ruth, M.; Cleveland, C.J.

    1993-01-01

    This study combines the economic theory of optimal resource use with econometric estimates of demand and supply parameters to develop a nonlinear dynamic model of crude oil exploration, development, and production in the lower 48 United States. The model is simulated with the graphical programming language STELLA, for the years 1985 to 2020. The procedure encourages use of economic theory and econometrics in combination with nonlinear dynamic simulation to enhance our understanding of complex interactions present in models of optimal resource use. (author)

  17. Multi-period mean–variance portfolio optimization based on Monte-Carlo simulation

    F. Cong (Fei); C.W. Oosterlee (Kees)

    2016-01-01

    We propose a simulation-based approach for solving the constrained dynamic mean–variance portfolio management problem. For this dynamic optimization problem, we first consider a sub-optimal strategy, called the multi-stage strategy, which can be utilized in a forward fashion. Then,

  18. Metallic Fuel Casting Development and Parameter Optimization Simulations

    Fielding, Randall S.; Kennedy, J.R.; Crapps, J.; Unal, C.

    2013-01-01

    Conclusions: • Gravity casting is a feasible process for casting of metallic fuels: – May not be as robust as CGIC; more parameter-dependent to find the right “sweet spot” for high-quality castings; – Fluid flow is very important and is affected by mold design, vent size, superheat, etc.; – Pressure differential assist was found to be detrimental. • Simulation found that vent location was important to allow adequate filling of the mold; • Surface tension plays an important role in determining casting quality; • Casting and simulations highlight the need for better-characterized fluid physical and thermal properties; • Results from simulations will be incorporated into the GACS design, such as vent location and physical property characterization.

  19. Simulation to Support Local Search in Trajectory Optimization Planning

    Morris, Robert A.; Venable, K. Brent; Lindsey, James

    2012-01-01

    NASA and the international community are investing in the development of a commercial transportation infrastructure that includes the increased use of rotorcraft, specifically helicopters and civil tilt rotors. However, there is significant concern over the impact of noise on the communities surrounding the transportation facilities. One way to address the rotorcraft noise problem is by exploiting powerful search techniques coming from artificial intelligence coupled with simulation and field tests to design low-noise flight profiles which can be tested in simulation or through field tests. This paper investigates the use of simulation based on predictive physical models to facilitate the search for low-noise trajectories using a class of automated search algorithms called local search. A novel feature of this approach is the ability to incorporate constraints directly into the problem formulation that addresses passenger safety and comfort.

  20. Computing Optimal Stochastic Portfolio Execution Strategies: A Parametric Approach Using Simulations

    Moazeni, Somayeh; Coleman, Thomas F.; Li, Yuying

    2010-09-01

    Computing optimal stochastic portfolio execution strategies under appropriate risk consideration presents a great computational challenge. We investigate a parametric approach for computing optimal stochastic strategies using Monte Carlo simulations. This approach allows reduction in computational complexity by computing coefficients for a parametric representation of a stochastic dynamic strategy based on static optimization. Using this technique, constraints can be similarly handled using appropriate penalty functions. We illustrate the proposed approach to minimize the expected execution cost and Conditional Value-at-Risk (CVaR).
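The CVaR objective mentioned above is straightforward to estimate from Monte Carlo samples: at level α it is the mean cost over the worst (1 − α) fraction of scenarios. A minimal sketch with a hypothetical cost distribution standing in for simulated execution costs:

```python
import random

# Sketch: Monte Carlo estimation of Conditional Value-at-Risk (CVaR) of
# execution cost. CVaR_alpha = mean of the worst (1 - alpha) of outcomes.
# The normal cost distribution below is a placeholder.

random.seed(1)

def cvar(samples, alpha=0.95):
    ordered = sorted(samples)
    tail_start = int(alpha * len(ordered))
    tail = ordered[tail_start:]  # worst (1 - alpha) fraction of scenarios
    return sum(tail) / len(tail)

costs = [random.gauss(100.0, 15.0) for _ in range(10_000)]
risk = cvar(costs, alpha=0.95)
# CVaR is at least the 95% quantile (VaR) and exceeds the mean cost.
```

In the parametric approach, each candidate coefficient vector for the strategy yields a cost sample like `costs`, and the optimizer searches over coefficients to minimize a weighted sum of expected cost and this CVaR estimate.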

  1. Simulation Optimization for Transportation System: A Real Case Application

    Muhammet Enes Akpınar

    2017-02-01

    Simulation applications help decision makers make the right decisions to address problems such as creating a new firm, making changes inside a factory, or improving hospital processes. In this engineering simulation study, there are two points used by students to travel to the university: the initial point is the train station and the final point is the arrival point. Student transportation is provided by buses. The main problem is to decide the number of buses, taking the number of students into consideration. To solve this real-life application, the PROMODEL package software is used.

  2. Optimization of blanking process using neural network simulation

    Hambli, R.

    2005-01-01

    The present work describes a methodology using the finite element method and neural network simulation in order to predict the optimum punch-die clearance during sheet metal blanking processes. A damage model is used in order to describe crack initiation and propagation into the sheet. The proposed approach combines predictive finite element and neural network modeling of the leading blanking parameters. Numerical results obtained by finite element computation, including damage and fracture modeling, were utilized to train the developed simulation environment based on back-propagation neural network modeling. The comparative study between the numerical results and the experimental ones shows good agreement. (author)

  3. Numerical Simulation and Optimization of Performances of a Solar ...

    This article aims to study and simulate photovoltaic cells based on CdTe materials, contributing to the development of renewable energies and capable of powering houses and shelters as well as photovoltaic stations… etc. CdTe is a semiconductor having a band structure with an indirect gap of ...

  4. Optimization of a neutron detector design using adjoint transport simulation

    Yi, C.; Manalo, K.; Huang, M.; Chin, M.; Edgar, C.; Applegate, S.; Sjoden, G.

    2012-01-01

    A synthetic aperture approach has been developed and investigated for Special Nuclear Materials (SNM) detection in vehicles passing a checkpoint at highway speeds. SNM is postulated to be stored in a moving vehicle and detector assemblies are placed on the road-side or in chambers embedded below the road surface. Neutron and gamma spectral awareness, in addition to high efficiency, is important for the detector assembly design, so that different SNMs can be detected and identified under various possible shielding configurations. The detector assembly design is composed of a CsI gamma-ray detector block and five neutron detector blocks, with peak efficiencies targeting different energy ranges determined by adjoint simulations. In this study, formulations are derived using adjoint transport simulations to estimate detector efficiencies. The formulations are applied to investigate several neutron detector designs for Block IV, which has its peak efficiency in the thermal range, and Block V, designed to maximize the total neutron counts over the entire energy spectrum. Other blocks detect different neutron energies. All five neutron detector blocks and the gamma-ray block are assembled in both MCNP and deterministic simulation models, with detector responses calculated to validate the fully assembled design using a 30-group library. The simulation results show that the 30-group library, collapsed from an 80-group library using an adjoint-weighting approach with the YGROUP code, significantly reduced the computational cost while maintaining accuracy. (authors)
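The group collapse mentioned above averages fine-group cross sections into coarse groups using weights built from the forward flux and the adjoint (importance) function. The following is a generic textbook-style sketch of that weighting with toy numbers, not YGROUP's actual implementation:

```python
# Sketch: adjoint-weighted collapse of fine-group cross sections into coarse
# groups. The coarse-group value is the weighted average of the fine-group
# sigmas with weights w_g = phi_g * phi*_g (forward flux times adjoint).
# All numbers are illustrative toy values.

def collapse(sigma, flux, adjoint, coarse_bounds):
    """coarse_bounds: list of (start, end) fine-group index ranges."""
    collapsed = []
    for start, end in coarse_bounds:
        w = [flux[g] * adjoint[g] for g in range(start, end)]
        s = sum(sigma[g] * w[g - start] for g in range(start, end))
        collapsed.append(s / sum(w))
    return collapsed

sigma   = [2.0, 1.5, 1.0, 0.5]   # fine-group cross sections (toy)
flux    = [1.0, 2.0, 2.0, 1.0]   # forward flux spectrum (toy)
adjoint = [0.5, 1.0, 1.0, 2.0]   # adjoint / importance function (toy)

# Collapse 4 fine groups into 2 coarse groups.
coarse = collapse(sigma, flux, adjoint, [(0, 2), (2, 4)])
```

Weighting by flux times importance, rather than flux alone, preserves the detector-response contribution of each fine group, which is why the collapsed 30-group library retained accuracy for the efficiency calculations.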

  5. An Integrated GIS, optimization and simulation framework for optimal PV size and location in campus area environments

    Kucuksari, Sadik; Khaleghi, Amirreza M.; Hamidi, Maryam; Zhang, Ye; Szidarovszky, Ferenc; Bayraksan, Guzin; Son, Young-Jun

    2014-01-01

    Highlights: • The optimal size and locations for PV units for campus environments are achieved. • The GIS module finds the suitable rooftops and their panel capacity. • The optimization module maximizes the long-term profit of PV installations. • The simulation module evaluates the voltage profile of the distribution network. • The proposed work has been successfully demonstrated for a real university campus. - Abstract: Finding the optimal size and locations for Photovoltaic (PV) units has been a major challenge for distribution system planners and researchers. In this study, a framework is proposed to integrate Geographical Information Systems (GIS), mathematical optimization, and simulation modules to obtain the annual optimal placement and size of PV units for the next two decades in a campus area environment. First, a GIS module is developed to find the suitable rooftops and their panel capacity considering the amount of solar radiation, slope, elevation, and aspect. The optimization module is then used to maximize the long-term net profit of PV installations considering various costs of investment, inverter replacement, operation, and maintenance as well as savings from consuming less conventional energy. A voltage profile of the electricity distribution network is then investigated in the simulation module. In the case of voltage limit violation by intermittent PV generation or load fluctuations, two mitigation strategies, reallocation of the PV units or installation of a local storage unit, are suggested. The proposed framework has been implemented in a real campus area, and the results show that it can effectively be used for long-term installation planning of PV panels considering both the cost and power quality.

  6. Greenhouse gases emission assessment in residential sector through buildings simulations and operation optimization

    Stojiljković, Mirko M.; Ignjatović, Marko G.; Vučković, Goran D.

    2015-01-01

    Buildings use a significant amount of primary energy and largely contribute to greenhouse gases emission. Cost optimality and cost effectiveness, including cost-optimal operation, are important for the adoption of energy efficient and environmentally friendly technologies. The long-term assessment of buildings-related greenhouse gases emission might take into account cost-optimal operation of their energy systems. This is often not the case in the literature. Long-term operation optimization problems are often of large scale and computationally intensive and time consuming. This paper formulates a bottom-up methodology relying on an efficient, but precise operation optimization approach, applicable to long-term problems and use with buildings simulations. We suggest moving-horizon short-term optimization to determine near-optimal operation modes and show that this approach, applied to flexible energy systems without seasonal storage, has satisfactory efficiency and accuracy compared with solving the problem for an entire year. We also confirm it as a valuable pre-solve technique. Approach applicability and the importance of energy systems optimization are illustrated with a case study considering buildings envelope improvements and cogeneration and heat storage implementation in an urban residential settlement. EnergyPlus is used for buildings simulations while mixed integer linear programming optimization problems are constructed and solved using the custom-built software and the branch-and-cut solver Gurobi Optimizer. - Highlights: • Bottom-up approach for greenhouse gases emission assessment is presented. • Short-term moving-horizon optimization is used to define operation regimes. • Operation optimization and buildings simulations are connected with modeling tool. • Illustrated optimization method performed efficiently and gave accurate results.
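The moving-horizon (receding-horizon) scheme described above optimizes operation over a short window, commits only the first step, then slides the window forward. A minimal sketch of that loop; the per-window "optimizer" here is a toy dispatch rule over hypothetical hourly prices, standing in for the paper's mixed integer linear programs:

```python
# Sketch: moving-horizon operation optimization. At each hour t, solve a
# short-window problem over hours [t, t + horizon), commit only the first
# hour's decision, then advance one hour. The toy per-window "solver" below
# simply runs the CHP unit when the (hypothetical) electricity price is high;
# a real model would couple the hours, e.g. through heat-storage state.

def optimize_window(prices):
    return ["chp" if p > 0.5 else "boiler" for p in prices]

def moving_horizon(prices, horizon=4):
    schedule = []
    for t in range(len(prices)):
        plan = optimize_window(prices[t:t + horizon])
        schedule.append(plan[0])  # commit only the first hour's decision
    return schedule

prices = [0.3, 0.6, 0.7, 0.4, 0.2, 0.8, 0.9, 0.5]
schedule = moving_horizon(prices)
```

The efficiency gain the paper reports comes from replacing one year-long optimization problem with many small window problems, each of which solves quickly while still "seeing" enough of the future to make near-optimal commitments.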

  7. Optimization of suspension smelting technology by computer simulation

    Lilius, K; Jokilaakso, A; Ahokainen, T; Teppo, O; Yongxiang, Yang [Helsinki Univ. of Technology, Otaniemi (Finland). Lab. of Materials Processing and Powder Metallurgy

    1997-12-31

    An industrial-scale flash smelting furnace and waste-heat boilers have been modelled by using commercial Computational-Fluid-Dynamics software. The work has proceeded from cold gas flow to heat transfer, combustion, and two-phase flow simulations. In the present study, the modelling task has been divided into three sub-models: (1) the concentrate burner, (2) the flash smelting furnace (reaction shaft and uptake shaft), and (3) the waste-heat boiler. For the concentrate burner, the flow of the process gas and distribution air together with the concentrate or a feed mixture was simulated. An Eulerian–Eulerian approach was used for the carrier gas-phase and the dispersed particle-phase. A large parametric study was carried out by simulating a laboratory scale burner with varying turbulence intensities and then extending the simulations to the industrial scale model. For the flash smelting furnace, the simulation work concentrated on gas and gas-particle two-phase flows, as well as the development of a combustion model for sulphide concentrate particles. Both Eulerian and Lagrangian approaches have been utilised in describing the particle phase, and the spreading of the concentrate in the reaction shaft as well as the particle tracks have been obtained. Combustion of sulphides was first approximated with gaseous combustion by using a built-in combustion model of the software. The real oxidation reactions of the concentrate particles were then coded as a user-defined sub-routine and that was tested with industrial flash smelting cases. For the waste-heat boiler, both flow and heat transfer calculations have been carried out for an old boiler and a modified boiler. SULA 2 Research Programme; 23 refs.

  8. Optimization of suspension smelting technology by computer simulation

    Lilius, K.; Jokilaakso, A.; Ahokainen, T.; Teppo, O.; Yang Yongxiang [Helsinki Univ. of Technology, Otaniemi (Finland). Lab. of Materials Processing and Powder Metallurgy

    1996-12-31

    An industrial-scale flash smelting furnace and waste-heat boilers have been modelled using commercial Computational Fluid Dynamics software. The work has proceeded from cold gas flow to heat transfer, combustion, and two-phase flow simulations. In the present study, the modelling task has been divided into three sub-models: (1) the concentrate burner, (2) the flash smelting furnace (reaction shaft and uptake shaft), and (3) the waste-heat boiler. For the concentrate burner, the flow of the process gas and distribution air, together with the concentrate or a feed mixture, was simulated. An Eulerian-Eulerian approach was used for the carrier gas phase and the dispersed particle phase. A large parametric study was carried out by simulating a laboratory-scale burner with varying turbulence intensities and then extending the simulations to the industrial-scale model. For the flash smelting furnace, the simulation work concentrated on gas and gas-particle two-phase flows, as well as the development of a combustion model for sulphide concentrate particles. Both Eulerian and Lagrangian approaches have been utilised to describe the particle phase, and the spreading of the concentrate in the reaction shaft as well as the particle tracks have been obtained. Combustion of sulphides was first approximated with gaseous combustion by using a built-in combustion model of the software. The real oxidation reactions of the concentrate particles were then coded as a user-defined subroutine, which was tested with industrial flash smelting cases. For the waste-heat boiler, both flow and heat transfer calculations have been carried out for an old boiler and a modified boiler. SULA 2 Research Programme; 23 refs.

  9. A framework for simulation-based optimization demonstrated on reconfigurable robot workcells

    Atorf, Linus; Schorn, Christoph; Roßmann, Jürgen

    2017-01-01

    Today's trends towards automation and robotics, fueled by the emerging Industry 4.0 paradigm shift, open up many new kinds of control and optimization problems. At the same time, advances in 3D simulation technology lead to ever-improving simulation models and algorithms in various domains...

  10. Simulation-Based Optimization of Camera Placement in the Context of Industrial Pose Estimation

    Jørgensen, Troels Bo; Iversen, Thorbjørn Mosekjær; Lindvig, Anders Prier

    2018-01-01

    In this paper, we optimize the placement of a camera in simulation in order to achieve a high success rate for a pose estimation problem. This is achieved by simulating 2D images from a stereo camera in a virtual scene. The stereo images are then used to generate 3D point clouds based on two diff...

  11. Solving complex maintenance planning optimization problems using stochastic simulation and multi-criteria fuzzy decision making

    Tahvili, Sahar; Österberg, Jonas; Silvestrov, Sergei; Biteus, Jonas

    2014-01-01

    One of the most important factors in the operations of many corporations today is maximizing profit, and one important tool to that effect is the optimization of maintenance activities. Maintenance activities are, at the highest level, divided into two major areas: corrective maintenance (CM) and preventive maintenance (PM). When optimizing maintenance activities by means of a maintenance plan or policy, we seek the best activities to perform at each point in time, be it PM or CM. We explore the use of stochastic simulation, genetic algorithms and other tools for solving complex maintenance planning optimization problems in terms of a suggested framework model based on discrete event simulation.

  12. Application of Dr. Mainte, integrated simulator of maintenance optimization, to LWRs

    Isobe, Yoshihiro; Sagisaka, Mitsuyuki; Etoh, Junji; Matsunaga, Takashi; Kosaka, Toru; Matsumoto, Satoshi; Yoshimura, Shinobu

    2015-01-01

    Dr. Mainte, an integrated simulator for maintenance optimization of LWRs (Light Water Reactors) is based on PFM (Probabilistic Fracture Mechanics) analyses. The concept of the simulator is to provide a decision-making system to optimize maintenance activities for typical components and piping systems in nuclear power plants totally and quantitatively in terms of safety, availability, economic rationality, environmental impact and social acceptance. For the further improvement of the safety and availability of nuclear power plants, the effect of human error and its reduction on the optimization of maintenance activities have been studied. In addition, an approach of reducing human error is proposed. (author)

  13. Solving complex maintenance planning optimization problems using stochastic simulation and multi-criteria fuzzy decision making

    Tahvili, Sahar [Mälardalen University (Sweden); Österberg, Jonas; Silvestrov, Sergei [Division of Applied Mathematics, Mälardalen University (Sweden); Biteus, Jonas [Scania CV (Sweden)

    2014-12-10

    One of the most important factors in the operations of many corporations today is maximizing profit, and one important tool to that effect is the optimization of maintenance activities. Maintenance activities are, at the highest level, divided into two major areas: corrective maintenance (CM) and preventive maintenance (PM). When optimizing maintenance activities by means of a maintenance plan or policy, we seek the best activities to perform at each point in time, be it PM or CM. We explore the use of stochastic simulation, genetic algorithms and other tools for solving complex maintenance planning optimization problems in terms of a suggested framework model based on discrete event simulation.

  14. The Dynamic Optimization of the Departure Times of Metro Users during Rush Hour in an Agent-Based Simulation: A Case Study in Shenzhen, China

    Yuliang Xi

    2017-10-01

    Full Text Available As serious traffic problems have increased throughout the world, various types of studies, especially traffic simulations, have been conducted to investigate this issue. Activity-based traffic simulation models, such as MATSim (Multi-Agent Transport Simulation), are intended to identify optimal combinations of activities in time and space. It is also necessary to examine commuting-based traffic simulations. Such simulations focus on optimizing travel times by adjusting departure times, travel modes or travel routes to present travel suggestions to the public. This paper examines the optimal departure times of metro users during rush hour using a newly developed simulation tool, and a strategy for identifying relatively optimal departure times is presented. This study examines 103,637 person agents (passengers) in Shenzhen, China, and reports their average departure time, travel time and travel utility, as well as the number of person agents who are late or miss metro trips in each iteration. The results demonstrate that as the number of iterations increases, the average travel time of these person agents decreases by approximately 4 min. Moreover, the latest average departure time with no risk of being late when going to work is approximately 8:04, and the earliest average departure time with no risk of missing metro trips when getting off work is approximately 17:50.
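
The iterative replanning described above can be illustrated with a minimal loop in which late agents shift their departures earlier and agents with too much slack shift later. The travel-time function, shift size and slack window are invented; MATSim-style simulators instead score full daily plans with utility functions.

```python
def adjust_departures(departures, travel_time, work_start, slack=5.0,
                      shift=1.0, iterations=200):
    """Each iteration, every agent compares its arrival with work_start
    (times in minutes) and nudges its departure time by `shift`."""
    for _ in range(iterations):
        updated = []
        for dep in departures:
            arrival = dep + travel_time(dep)
            if arrival > work_start:            # late: leave earlier
                updated.append(dep - shift)
            elif arrival < work_start - slack:  # too much slack: leave later
                updated.append(dep + shift)
            else:                               # inside the slack window
                updated.append(dep)
        departures = updated
    return departures
```

With a constant 30-minute ride and work starting at minute 540 (9:00), agents starting anywhere settle into the departure band that arrives within the 5-minute slack window before 9:00.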

  15. A Simulation Platform To Model, Optimize And Design Wind Turbines. The Matlab/Simulink Toolbox

    Anca Daniela HANSEN

    2002-12-01

    Full Text Available In recent years Matlab / Simulink® has become the most widely used software for modeling and simulation of dynamic systems. Wind energy conversion systems are an example of such systems, containing subsystems with different ranges of time constants: wind, turbine, generator, power electronics, transformer and grid. The electrical generator and the power converter need the smallest simulation step; therefore, these blocks determine the simulation speed. This paper presents a new and integrated simulation platform for modeling, optimizing and designing wind turbines. The platform contains different simulation tools: Matlab / Simulink - used as the basic modeling tool, HAWC, DIgSilent and Saber.

  16. Optimal design of supply chain network under uncertainty environment using hybrid analytical and simulation modeling approach

    Chiadamrong, N.; Piyathanavong, V.

    2017-12-01

    Models that aim to optimize the design of supply chain networks have gained more interest in the supply chain literature. Mixed-integer linear programming and discrete-event simulation are widely used for such an optimization problem. We present a hybrid approach to support decisions for supply chain network design using a combination of analytical and discrete-event simulation models. The proposed approach is based on iterative procedures that repeat until the difference between subsequent solutions satisfies pre-determined termination criteria. The effectiveness of the proposed approach is illustrated by an example, which yields results closer to optimal with a much shorter solving time than those obtained from a conventional simulation-based optimization model. The efficacy of this hybrid approach is promising, and it can be applied as a powerful tool in designing a real supply chain network. It also provides the possibility to model and solve more realistic problems, which incorporate dynamism and uncertainty.
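
The iterate-until-agreement scheme can be sketched as a fixed-point loop: an analytical stand-in designs for an assumed lead time, a simulation stand-in reports the lead time that design actually produces, and the loop repeats until the two agree. Both models, their names (`analytical_design`, `simulated_lead_time`) and all constants are invented for illustration.

```python
def analytical_design(lead_time, demand_rate=10.0, safety=1.2):
    """Stand-in for the analytical optimization model: size stock to cover
    demand over the assumed lead time plus a safety margin."""
    return demand_rate * lead_time * safety

def simulated_lead_time(stock_level):
    """Stand-in for the discrete-event simulation: handling larger stocks
    slightly lengthens the realized lead time."""
    return 2.0 + 0.001 * stock_level

def hybrid_design(tol=1e-9, max_iter=100):
    """Iterate between the two models until their lead times agree."""
    lead = 2.0                               # initial analytical assumption
    stock = analytical_design(lead)
    for _ in range(max_iter):
        stock = analytical_design(lead)
        new_lead = simulated_lead_time(stock)
        if abs(new_lead - lead) < tol:       # models agree: terminate
            break
        lead = new_lead
    return stock, lead
```

Here the simulation feedback is a contraction, so the loop converges geometrically to the consistent design instead of requiring an expensive simulation call inside every optimizer iteration.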

  17. Geometry optimization of zirconium sulfophenylphosphonate layers by molecular simulation methods

    Škoda, J.; Pospíšil, M.; Kovář, P.; Melánová, Klára; Svoboda, J.; Beneš, L.; Zima, Vítězslav

    2018-01-01

    Roč. 24, č. 1 (2018), s. 1-12, č. článku 10. ISSN 1610-2940 R&D Projects: GA ČR(CZ) GA14-13368S; GA ČR(CZ) GA17-10639S Institutional support: RVO:61389013 Keywords : zirconium sulfophenylphosphonate * intercalation * molecular simulation Subject RIV: CA - Inorganic Chemistry OBOR OECD: Inorganic and nuclear chemistry Impact factor: 1.425, year: 2016

  18. Cost effective simulation-based multiobjective optimization in the performance of an internal combustion engine

    Aittokoski, Timo; Miettinen, Kaisa

    2008-07-01

    Solving real-life engineering problems can be difficult because they often have multiple conflicting objectives, the objective functions involved are highly nonlinear and they contain multiple local minima. Furthermore, function values are often produced via a time-consuming simulation process. These facts suggest the need for an automated optimization tool that is efficient (in terms of number of objective function evaluations) and capable of solving global and multiobjective optimization problems. In this article, the requirements on a general simulation-based optimization system are discussed and such a system is applied to optimize the performance of a two-stroke combustion engine. In the example of a simulation-based optimization problem, the dimensions and shape of the exhaust pipe of a two-stroke engine are altered, and values of three conflicting objective functions are optimized. These values are derived from power output characteristics of the engine. The optimization approach involves interactive multiobjective optimization and provides a convenient tool to balance between conflicting objectives and to find good solutions.

  19. Optimal fabrication processes for unidirectional metal-matrix composites: A computational simulation

    Saravanos, D. A.; Murthy, P. L. N.; Morel, M.

    1990-01-01

    A method is proposed for optimizing the fabrication process of unidirectional metal matrix composites. The temperature and pressure histories are optimized such that the residual microstresses of the composite at the end of the fabrication process are minimized and the material integrity throughout the process is ensured. The response of the composite during the fabrication is simulated based on a nonlinear micromechanics theory. The optimal fabrication problem is formulated and solved with non-linear programming. Application cases regarding the optimization of the fabrication cool-down phases of unidirectional ultra-high modulus graphite/copper and silicon carbide/titanium composites are presented.
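
A drastically simplified version of the schedule optimization: treat the intermediate temperatures of the cool-down as design variables and minimize a quadratic stand-in for residual microstress (penalizing uneven cooling steps) by gradient descent with fixed endpoints. The real problem uses a nonlinear micromechanics model solved with nonlinear programming; everything below is illustrative.

```python
def residual_stress(temps):
    """Toy micromechanics surrogate: uneven temperature drops between
    consecutive steps raise the accumulated microstress."""
    return sum((temps[i] - temps[i + 1]) ** 2 for i in range(len(temps) - 1))

def optimize_cooldown(t_start=900.0, t_end=20.0, steps=10,
                      lr=0.2, iters=2000):
    """Gradient descent over the interior temperatures; the endpoints
    (process and room temperature) are held fixed as constraints."""
    temps = [t_start] * steps + [t_end]      # deliberately poor initial guess
    for _ in range(iters):
        grad = [0.0] * (steps + 1)
        for i in range(steps):
            d = temps[i] - temps[i + 1]
            grad[i] += 2.0 * d
            grad[i + 1] -= 2.0 * d
        for i in range(1, steps):            # skip the fixed endpoints
            temps[i] -= lr * grad[i]
    return temps
```

Under this surrogate the minimizer is the even ramp (equal drops of 88 degrees per step), i.e. the smoothest schedule between the fixed endpoints.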

  20. Optimal fabrication processes for unidirectional metal-matrix composites - A computational simulation

    Saravanos, D. A.; Murthy, P. L. N.; Morel, M.

    1990-01-01

    A method is proposed for optimizing the fabrication process of unidirectional metal matrix composites. The temperature and pressure histories are optimized such that the residual microstresses of the composite at the end of the fabrication process are minimized and the material integrity throughout the process is ensured. The response of the composite during the fabrication is simulated based on a nonlinear micromechanics theory. The optimal fabrication problem is formulated and solved with nonlinear programming. Application cases regarding the optimization of the fabrication cool-down phases of unidirectional ultra-high modulus graphite/copper and silicon carbide/titanium composites are presented.

  1. Optical simulations of laser focusing for optimization of laser betatron

    Stanke, Ladislav; Thakur, Anita; Šmíd, Michal; Gu, Yanjun; Falk, Kateřina

    2017-01-01

    Roč. 12, May (2017), 1-14, č. článku P05004. ISSN 1748-0221 R&D Projects: GA MŠk EF15_008/0000162; GA MŠk LQ1606 Grant - others:ELI Beamlines(XE) CZ.02.1.01/0.0/0.0/15_008/0000162 Institutional support: RVO:68378271 Keywords : matter * accelerator modelling and simulations * multi-particle dynamics * single-particle dynamics * Beam Optics Subject RIV: BH - Optics, Masers, Lasers OBOR OECD: Optics (including laser optics and quantum optics) Impact factor: 1.220, year: 2016

  2. PORFLOW Simulations Supporting Saltstone Disposal Unit Design Optimization

    Flach, G. P. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL); Hang, T. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL); Taylor, G. A. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL)

    2015-12-10

    SRNL was requested by SRR to perform PORFLOW simulations to support potential cost-saving design modifications to future Saltstone Disposal Units in Z-Area (SRR-CWDA-2015-00120). The design sensitivity cases are defined in a modeling input specification document SRR-CWDA-2015-00133 Rev. 1. A high-level description of PORFLOW modeling and interpretation of results are provided in SRR-CWDA-2015-00169. The present report focuses on underlying technical issues and details of PORFLOW modeling not addressed by the input specification and results interpretation documents. Design checking of PORFLOW modeling is documented in SRNL-L3200-2015-00146.

  3. Computational modeling, optimization and manufacturing simulation of advanced engineering materials

    2016-01-01

    This volume presents recent research work focused on the development of adequate theoretical and numerical formulations to describe the behavior of advanced engineering materials. Particular emphasis is devoted to applications in the fields of biological tissues, phase-changing and porous materials, polymers and micro/nano-scale modeling. Sensitivity analysis, gradient-based and non-gradient-based optimization procedures are involved in many of the chapters, aiming at the solution of constitutive inverse problems and parameter identification. All these relevant topics are presented by experienced international and inter-institutional research teams, resulting in a high-level compilation. The book is a valuable research reference for scientists, senior undergraduate and graduate students, as well as for engineers working in the area of computational material modeling.

  4. Two-dimensional pixel image lag simulation and optimization in a 4-T CMOS image sensor

    Yu Junting; Li Binqiao; Yu Pingping; Xu Jiangtao [School of Electronics Information Engineering, Tianjin University, Tianjin 300072 (China); Mou Cun, E-mail: xujiangtao@tju.edu.c [Logistics Management Office, Hebei University of Technology, Tianjin 300130 (China)

    2010-09-15

    Pixel image lag in a 4-T CMOS image sensor is analyzed and simulated in a two-dimensional model. Strategies for reducing image lag are discussed, covering transfer gate channel threshold voltage doping adjustment, PPD N-type doping dose/implant tilt adjustment and transfer gate operation voltage adjustment for signal electron transfer. With the computer analysis tool ISE-TCAD, simulation results show that minimum image lag can be obtained at a pinned photodiode n-type doping dose of 7.0 × 10^12 cm^-2, an implant tilt of -2°, a transfer gate channel doping dose of 3.0 × 10^12 cm^-2 and an operation voltage of 3.4 V. The conclusions of this theoretical analysis can serve as a guideline for pixel design to improve the performance of 4-T CMOS image sensors. (semiconductor devices)

  5. Multiobjective optimization with a modified simulated annealing algorithm for external beam radiotherapy treatment planning

    Aubry, Jean-Francois; Beaulieu, Frederic; Sevigny, Caroline; Beaulieu, Luc; Tremblay, Daniel

    2006-01-01

    Inverse planning in external beam radiotherapy often requires a scalar objective function that incorporates importance factors to mimic the planner's preferences between conflicting objectives. Defining those importance factors is not straightforward, and frequently leads to an iterative process in which the importance factors become variables of the optimization problem. In order to avoid this drawback of inverse planning, optimization using algorithms more suited to multiobjective optimization, such as evolutionary algorithms, has been suggested. However, much inverse planning software, including a simulated-annealing-based package developed at our institution, does not include multiobjective-oriented algorithms. This work investigates the performance of a modified simulated annealing algorithm used to drive aperture-based intensity-modulated radiotherapy inverse planning software in a multiobjective optimization framework. For a few test cases involving gastric cancer patients, the use of this new algorithm leads to an increase in optimization speed of a little more than a factor of 2 over a conventional simulated annealing algorithm, while giving a close approximation of the solutions produced by a standard simulated annealing. A simple graphical user interface designed to facilitate the decision-making process that follows an optimization is also presented.
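
One common way to adapt simulated annealing to multiobjective planning (not necessarily the authors' specific modification) is to accept moves on a randomly re-weighted sum of the objectives while maintaining an archive of non-dominated solutions. The objectives, step size and cooling schedule below are illustrative.

```python
import math
import random

def pareto_sa(f1, f2, x0, step=0.1, iters=3000, t0=1.0, seed=1):
    """Simulated annealing over a scalar variable x, archiving every
    accepted solution that no archived solution (weakly) dominates."""
    rng = random.Random(seed)
    x = x0
    archive = [(f1(x), f2(x), x)]
    for k in range(iters):
        temp = t0 * (1.0 - k / iters) + 1e-9          # linear cooling
        cand = x + rng.uniform(-step, step)
        w = rng.random()                              # fresh weights per move
        delta = (w * f1(cand) + (1 - w) * f2(cand)
                 - w * f1(x) - (1 - w) * f2(x))
        if delta < 0 or rng.random() < math.exp(-delta / temp):
            x = cand
            c1, c2 = f1(x), f2(x)
            if not any(a1 <= c1 and a2 <= c2 for a1, a2, _ in archive):
                archive = [(a1, a2, ax) for a1, a2, ax in archive
                           if not (c1 <= a1 and c2 <= a2)]
                archive.append((c1, c2, x))
    return archive
```

By construction no archive member dominates another, so a single annealing run returns an approximation of the Pareto front rather than one scalarized compromise.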

  6. Simulation-Optimization Model for Seawater Intrusion Management at Pingtung Coastal Area, Taiwan

    Huang, P. S.; Chiu, Y.

    2015-12-01

    In the 1970s, agriculture and aquaculture developed rapidly in the Pingtung coastal area of southern Taiwan. The groundwater aquifers were over-pumped, causing seawater intrusion. In order to remediate the contaminated groundwater and find the best strategies for groundwater use, a management model to search for optimal groundwater operational strategies is developed in this study. The objective function minimizes the total amount of injection water, and a set of constraints is applied to ensure that the groundwater levels and concentrations are satisfied. A three-dimensional density-dependent flow and transport simulation model, SEAWAT, developed by the U.S. Geological Survey, is selected to simulate the phenomenon of seawater intrusion. The simulation model is well calibrated against field measurements and replaced by a surrogate model of trained artificial neural networks (ANNs) to reduce the computational time. The ANNs are embedded in the management model to link the simulation and optimization models, and the global optimizer of differential evolution (DE) is applied for solving the management model. The optimal results show that the fully trained ANNs can substitute for the original simulation model and greatly reduce computational time. Under an appropriate setting of the objective function and constraints, DE can find the optimal injection rates at predefined barriers. The concentrations at the target locations decrease by more than 50 percent within the planning horizon of 20 years. Keywords: seawater intrusion, groundwater management, numerical model, artificial neural networks, differential evolution
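
The surrogate-plus-DE pattern can be sketched with a minimal DE/rand/1/bin implementation. In the paper the objective would be the trained ANN surrogate of the SEAWAT model evaluated at candidate injection rates; here a cheap analytic stand-in plays that role, and population size, mutation weight and crossover rate are ordinary illustrative choices.

```python
import random

def differential_evolution(f, bounds, pop_size=15, f_w=0.6, cr=0.9,
                           gens=100, seed=2):
    """Minimal DE/rand/1/bin: mutate with a scaled difference of two random
    members, binomial crossover, greedy selection, box-constrained."""
    rng = random.Random(seed)
    lo, hi = zip(*bounds)
    dim = len(bounds)
    pop = [[rng.uniform(lo[d], hi[d]) for d in range(dim)]
           for _ in range(pop_size)]
    fit = [f(x) for x in pop]
    for _ in range(gens):
        for i in range(pop_size):
            a, b, c = rng.sample([j for j in range(pop_size) if j != i], 3)
            jrand = rng.randrange(dim)           # force one mutated component
            trial = [pop[a][d] + f_w * (pop[b][d] - pop[c][d])
                     if (rng.random() < cr or d == jrand) else pop[i][d]
                     for d in range(dim)]
            trial = [min(max(t, lo[d]), hi[d]) for d, t in enumerate(trial)]
            f_trial = f(trial)
            if f_trial <= fit[i]:                # greedy one-to-one selection
                pop[i], fit[i] = trial, f_trial
    best = min(range(pop_size), key=fit.__getitem__)
    return pop[best], fit[best]
```

Because each DE generation needs only cheap surrogate evaluations, thousands of candidate injection schedules can be screened for the cost of a handful of full density-dependent simulations.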

  7. Multi-model Simulation for Optimal Control of Aeroacoustics.

    Collis, Samuel Scott; Chen, Guoquan

    2005-05-01

    Flow-generated noise, especially rotorcraft noise, has been a serious concern for both commercial and military applications. A particularly important noise source for rotorcraft is Blade-Vortex-Interaction (BVI) noise, a high-amplitude, impulsive sound that often dominates other rotorcraft noise sources. Usually BVI noise is caused by the unsteady flow changes around various rotor blades due to interactions with vortices previously shed by the blades. A promising approach for reducing BVI noise is to use on-blade controls, such as suction/blowing, micro-flaps/jets, and smart structures. Because the design and implementation of experiments to evaluate such systems are very expensive, efficient computational tools coupled with optimal control systems are required to explore the relevant physics and evaluate the feasibility of using various micro-fluidic devices before committing to hardware. In this thesis the research is to formulate and implement efficient computational tools for the development and study of optimal control and design strategies for complex flow and acoustic systems, with emphasis on rotorcraft applications, especially the BVI noise control problem. The main purpose of aeroacoustic computations is to determine the sound intensity and directivity far away from the noise source. However, the computational cost of using a high-fidelity flow-physics model across the full domain is usually prohibitive, and it might also be less accurate because of numerical diffusion and other problems. Taking advantage of the multi-physics and multi-scale structure of this aeroacoustic problem, we develop a multi-model, multi-domain (near-field/far-field) method based on a discontinuous Galerkin discretization. In this approach the coupling of multi-domains and multi-models is achieved by weakly enforcing continuity of normal fluxes across a coupling surface. For our aeroacoustics control problem of interest, the adjoint equations that determine the sensitivity of the cost

  8. Truss Structure Optimization with Subset Simulation and Augmented Lagrangian Multiplier Method

    Feng Du

    2017-11-01

    Full Text Available This paper presents a global optimization method for structural design optimization, which integrates subset simulation optimization (SSO and the dynamic augmented Lagrangian multiplier method (DALMM. The proposed method formulates the structural design optimization as a series of unconstrained optimization sub-problems using DALMM and makes use of SSO to find the global optimum. The combined strategy guarantees that the proposed method can automatically detect active constraints and provide global optimal solutions with finite penalty parameters. The accuracy and robustness of the proposed method are demonstrated by four classical truss sizing problems. The results are compared with those reported in the literature, and show a remarkable statistical performance based on 30 independent runs.
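
The structure of the combined method can be sketched as an augmented-Lagrangian outer loop around a global inner solver. Below, a deterministic pattern search stands in for subset simulation optimization, and the test problem (minimize x1 + x2 subject to x1*x2 >= 1 with variables bounded below, loosely resembling sizing members against a stress constraint) is invented; the multiplier and penalty updates follow the standard augmented-Lagrangian recipe for one inequality constraint.

```python
def pattern_search(obj, x, lo=0.1, step=0.5, tol=1e-4):
    """Simple inner solver (stand-in for SSO): coordinate moves with a
    shrinking step, variables clipped at a lower bound."""
    x = list(x)
    fx = obj(x)
    while step > tol:
        improved = False
        for d in range(len(x)):
            for s in (step, -step):
                cand = list(x)
                cand[d] = max(lo, cand[d] + s)
                fc = obj(cand)
                if fc < fx:
                    x, fx, improved = cand, fc, True
        if not improved:
            step *= 0.5
    return x

def dalmm(f, g, x0, outer=12, mu=1.0):
    """Augmented Lagrangian for one inequality constraint g(x) <= 0: solve
    a sequence of unconstrained subproblems, updating the multiplier lam
    and growing the penalty mu while the iterate stays infeasible."""
    x, lam = list(x0), 0.0
    for _ in range(outer):
        def obj(z, lam=lam, mu=mu):
            return f(z) + 0.5 * mu * max(0.0, lam / mu + g(z)) ** 2
        x = pattern_search(obj, x)
        lam = max(0.0, lam + mu * g(x))      # multiplier update
        if g(x) > 1e-3:
            mu *= 2.0                        # tighten penalty if infeasible
    return x
```

The finite penalty plus multiplier update is what lets the method detect the active constraint without driving mu to infinity, mirroring the DALMM property noted in the abstract.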

  9. A Robust and Fast Method to Compute Shallow States without Adjustable Parameters: Simulations for a Silicon-Based Qubit

    Debernardi, Alberto; Fanciulli, Marco

    Within the framework of the envelope function approximation we have computed - without adjustable parameters and with a reduced computational effort due to analytical expression of relevant Hamiltonian terms - the energy levels of the shallow P impurity in silicon and the hyperfine and superhyperfine splitting of the ground state. We have studied the dependence of these quantities on the applied external electric field along the [001] direction. Our results reproduce correctly the experimental splitting of the impurity ground states detected at zero electric field and provide reliable predictions for values of the field where experimental data are lacking. Further, we have studied the effect of confinement of a shallow state of a P atom at the center of a spherical Si-nanocrystal embedded in a SiO2 matrix. In our simulations the valley-orbit interaction of a realistically screened Coulomb potential and of the core potential are included exactly, within the numerical accuracy due to the use of a finite basis set, while band-anisotropy effects are taken into account within the effective-mass approximation.

  10. Comparative analysis of cogeneration power plants optimization based on stochastic method using superstructure and process simulator

    Araujo, Leonardo Rodrigues de [Instituto Federal do Espirito Santo, Vitoria, ES (Brazil)], E-mail: leoaraujo@ifes.edu.br; Donatelli, Joao Luiz Marcon [Universidade Federal do Espirito Santo (UFES), Vitoria, ES (Brazil)], E-mail: joaoluiz@npd.ufes.br; Silva, Edmar Alino da Cruz [Instituto Tecnologico de Aeronautica (ITA/CTA), Sao Jose dos Campos, SP (Brazil); Azevedo, Joao Luiz F. [Instituto de Aeronautica e Espaco (CTA/IAE/ALA), Sao Jose dos Campos, SP (Brazil)

    2010-07-01

    Thermal systems are essential in facilities such as thermoelectric plants, cogeneration plants, refrigeration systems and air conditioning, among others, in which much of the energy consumed by humanity is processed. In a world with finite natural fuel sources and growing energy demand, issues related to thermal system design, such as cost estimation, design complexity, environmental protection and optimization, are becoming increasingly important, hence the need to understand the mechanisms that degrade energy, improve the use of energy sources, reduce environmental impacts and also reduce project, operation and maintenance costs. In recent years, a consistent development of procedures and techniques for the computational design of thermal systems has occurred. In this context, the fundamental objective of this study is a comparative performance analysis of the structural and parametric optimization of a cogeneration system using two stochastic methods: genetic algorithms and simulated annealing. This research work uses a superstructure, modelled in the process simulator IPSEpro from SimTech, in which the appropriate design options for the case studied are included. Accordingly, the optimal configuration of the cogeneration system is determined as a consequence of the optimization process, restricted to the configuration options included in the superstructure. The optimization routines are written in MS Excel Visual Basic, in order to couple seamlessly with the process simulator. At the end of the optimization process, the optimal system configuration, given the characteristics of each specific problem, should be defined. (author)

  11. Simulation and optimization of continuous extractive fermentation with recycle system

    Widjaja, Tri; Altway, Ali; Rofiqah, Umi; Airlangga, Bramantyo

    2017-05-01

    Extractive fermentation is a continuous fermentation method which is believed to be able to replace the conventional (batch) fermentation method, making the recovery system and ethanol refining easier. A continuous fermentation process increases productivity, although the unconverted sugar in continuous fermentation remains at a high concentration. To make the process more efficient, a recycle stream is used: increasing the recycle flow enhances the probability of sugar being re-fermented. However, this also lets ethanol enter the fermentation column, where the accumulated ethanol inhibits the growth of the microorganisms. This research aims to find the optimum solvent-to-broth ratio (S:B) and recycle-to-fresh-feed ratio (R:F) that produce the best yield and productivity. The study employed optimization by the Hooke-Jeeves method using Matlab 7.8 software. The results indicated that the optimum occurs at S:B = 2.615 and R:F = 1.495 with a yield of 50.2439%.
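
The Hooke-Jeeves method named in this abstract is a derivative-free pattern search: exploratory moves probe each coordinate, and successful explorations are followed by a pattern move that extrapolates along the improving direction. A compact sketch (in Python rather than the authors' Matlab, and with `f` as any callable; the fermentation model itself is not reproduced):

```python
def hooke_jeeves(f, x0, step=0.5, tol=1e-6):
    """Classic Hooke-Jeeves pattern search for unconstrained minimization."""
    def explore(base, fbase, s):
        x, fx = list(base), fbase
        for d in range(len(x)):
            for delta in (s, -s):        # probe each coordinate both ways
                cand = list(x)
                cand[d] += delta
                fc = f(cand)
                if fc < fx:
                    x, fx = cand, fc
                    break
        return x, fx

    x = list(x0)
    fx = f(x)
    while step > tol:
        nx, nfx = explore(x, fx, step)
        if nfx < fx:
            # pattern move: extrapolate along the improving direction
            px = [2.0 * n - o for n, o in zip(nx, x)]
            ex, efx = explore(px, f(px), step)
            if efx < nfx:
                x, fx = ex, efx
            else:
                x, fx = nx, nfx
        else:
            step *= 0.5                  # no improvement: refine the mesh
    return x, fx
```

The test below uses a quadratic whose minimizer sits at the paper's reported optimum (S:B = 2.615, R:F = 1.495) purely as an illustrative target; the search needs no gradients, which is why it suits simulation-backed objectives.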

  12. Parallel Performance Optimizations on Unstructured Mesh-based Simulations

    Sarje, Abhinav; Song, Sukhyun; Jacobsen, Douglas; Huck, Kevin; Hollingsworth, Jeffrey; Malony, Allen; Williams, Samuel; Oliker, Leonid

    2015-01-01

    © The Authors. Published by Elsevier B.V. This paper addresses two key parallelization challenges in the unstructured mesh-based ocean modeling code MPAS-Ocean, which uses a mesh based on Voronoi tessellations: (1) load imbalance across processes, and (2) unstructured data access patterns that inhibit intra- and inter-node performance. Our work analyzes the load imbalance due to naive partitioning of the mesh and develops methods to generate mesh partitionings with better load balance and reduced communication. Furthermore, we present methods that minimize both inter- and intra-node data movement and maximize data reuse. Our techniques include predictive ordering of data elements for higher cache efficiency, as well as communication reduction approaches. We present detailed performance data from runs on thousands of cores of the Cray XC30 supercomputer and show that our optimization strategies can exceed the original performance by over 2×. Additionally, many of these solutions can be broadly applied to a wide variety of unstructured grid-based computations.
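
The load-imbalance metric and the effect of weight-aware partitioning can be illustrated with a tiny greedy partitioner (longest-processing-time rule). Real MPAS-Ocean partitioning also minimizes the communication surface between Voronoi cells, which this sketch ignores; the cell weights in the test are invented.

```python
def greedy_partition(cell_weights, nparts):
    """Assign each (cell, weight) to the currently lightest rank, heaviest
    cells first (the LPT rule)."""
    loads = [0.0] * nparts
    owner = {}
    for cell, w in sorted(cell_weights.items(), key=lambda kv: -kv[1]):
        rank = min(range(nparts), key=loads.__getitem__)
        owner[cell] = rank
        loads[rank] += w
    return owner, loads

def imbalance(loads):
    """Max load over mean load; 1.0 means perfect balance, and the slowest
    rank sets the time step, so this ratio bounds the parallel slowdown."""
    return max(loads) / (sum(loads) / len(loads))
```

Naive partitioning by cell count alone ignores per-cell work differences (e.g. depth-dependent column costs), which is exactly the imbalance the paper measures and removes.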

  13. Improved Power Control Using Optimal Adjustable Coefficients for Three-Phase Photovoltaic Inverter under Unbalanced Grid Voltage

    Wang, Qianggang; Zhou, Niancheng; Lou, Xiaoxuan; Chen, Xu

    2014-01-01

    Unbalanced grid faults will lead to several drawbacks in the output power quality of photovoltaic generation (PV) converters, such as power fluctuation, current amplitude swell, and a large quantity of harmonics. The aim of this paper is to propose a flexible AC current generation method by selecting coefficients to overcome these problems in an optimal way. Three coefficients are introduced to tune the output current reference within the required limits of the power quality (the current harmonic distortion, the AC current peak, the power fluctuation, and the DC voltage fluctuation). Through the optimization algorithm, the coefficients can be determined with the aim of generating the minimum integrated amplitudes of the active and reactive power references, with the constraints of the inverter current and DC voltage fluctuation. A dead-beat controller is utilized to track the optimal current reference within a short period. The method has been verified in PSCAD/EMTDC software. PMID:25243215

  14. Improved power control using optimal adjustable coefficients for three-phase photovoltaic inverter under unbalanced grid voltage.

    Wang, Qianggang; Zhou, Niancheng; Lou, Xiaoxuan; Chen, Xu

    2014-01-01

    Unbalanced grid faults will lead to several drawbacks in the output power quality of photovoltaic generation (PV) converters, such as power fluctuation, current amplitude swell, and a large quantity of harmonics. The aim of this paper is to propose a flexible AC current generation method by selecting coefficients to overcome these problems in an optimal way. Three coefficients are introduced to tune the output current reference within the required limits of the power quality (the current harmonic distortion, the AC current peak, the power fluctuation, and the DC voltage fluctuation). Through the optimization algorithm, the coefficients can be determined with the aim of generating the minimum integrated amplitudes of the active and reactive power references, with the constraints of the inverter current and DC voltage fluctuation. A dead-beat controller is utilized to track the optimal current reference within a short period. The method has been verified in PSCAD/EMTDC software.

  15. Control Optimization of an LHC 18 kW Cryoplant Warm Compression Station Using Dynamic Simulations

    Bradu, B; Niculescu, S I

    2010-01-01

    This paper addresses the control optimization of a 4.5 K refrigerator used in the cryogenic system of the Large Hadron Collider (LHC) at CERN. First, the compressor station and the cold-box were modeled and simulated under PROCOS (Process and Control Simulator), a simulation environment developed at CERN. Next, an appropriate parameter identification was performed on the simulator to obtain a simplified model of the system in order to design an Internal Model Control (IMC) scheme enhancing the regulation of the high pressure. Finally, a floating high-pressure control is proposed, using a cascade control to reduce operational costs.

  16. Optimization of simulated moving bed (SMB) chromatography: a multi-level optimization procedure

    Jørgensen, Sten Bay; Lim, Young-il

    2004-01-01

    objective functions (productivity and desorbent consumption), employing the standing wave analysis, the true moving bed (TMB) model and the simulated moving bed (SMB) model. The procedure is constructed on a non-worse solution property advancing level by level and its solution does not mean a global optimum...

  17. Amundsen Sea simulation with optimized ocean, sea ice, and thermodynamic ice shelf model parameters

    Nakayama, Y.; Menemenlis, D.; Schodlok, M.; Heimbach, P.; Nguyen, A. T.; Rignot, E. J.

    2016-12-01

    Ice shelves and glaciers of the West Antarctic Ice Sheet are thinning and melting rapidly in the Amundsen Sea (AS). This is thought to be caused by warm Circumpolar Deep Water (CDW) that intrudes via submarine glacial troughs located at the continental shelf break. Recent studies, however, point out that the depth of the thermocline, or the thickness of Winter Water (WW, potential temperature below -1 °C, located above the CDW), is critical in determining the melt rate, especially for the Pine Island Glacier (PIG). For example, the 50% decrease in the basal melt rate of PIG during summer 2012 has been attributed to thickening of the WW. Despite the possible importance of WW thickness for ice shelf melting, previous modeling studies in this region have focused primarily on CDW intrusion and have evaluated numerical simulations based on bottom or deep CDW properties. As a result, none of these models have shown a good representation of the WW for the AS. In this study, we adjust a small number of model parameters in a regional Amundsen and Bellingshausen Seas configuration of the Massachusetts Institute of Technology general circulation model (MITgcm) to better fit the available observations during the 2007-2010 period. We choose this time period because summer observations during these years show small interannual variability in the eastern AS. As a result of the adjustments, our model shows a significantly better match with observations than previous modeling studies, especially for the WW. Since the density of sea water depends largely on salinity at low temperatures, this is crucial for assessing the impact of the WW on the PIG melt rate. In addition, we conduct several sensitivity studies, showing the impact of surface heat loss on the thickness and properties of the WW. We also discuss some preliminary results pertaining to further optimization using the adjoint method. Our work is a first step toward improved representation of ice-shelf ocean interactions in the ECCO (Estimating the Circulation and

  18. Hedging Rules for Water Supply Reservoir Based on the Model of Simulation and Optimization

    Yi Ji

    2016-06-01

    Full Text Available This study proposes a hedging rule model composed of a two-period reservoir operation model considering the damage depth and a hedging rule parameter optimization model. The former solves the hedging rules based on a given period's water supply weighting factor and carryover storage target, while the latter optimization model is used to optimize the weighting factor and carryover storage target based on the hedging rules. The coupled model gives the optimal period's water supply weighting factor and carryover storage target to guide release. The conclusions achieved from this study are as follows: (1) the water supply weighting factor and carryover storage target have a direct impact on the three elements of the hedging rule; (2) the parameters can guide reservoirs to supply water reasonably after optimization with the simulation and optimization model; and (3) in order to verify the utility of the hedging rule, the Heiquan reservoir is used as a case study, and a particle swarm optimization algorithm with a simulation model is adopted to optimize the parameters. The results show that the proposed hedging rule can improve the operation performance of the water supply reservoir.

  19. Optimizing grade-control drillhole spacing with conditional simulations

    Adrian Martínez-Vargas

    2017-01-01

    Full Text Available This paper summarizes a method to determine the optimum spacing of grade-control drillholes drilled with reverse circulation. The optimum drillhole spacing was defined as the spacing whose drilling cost equals the cost of misclassifying ore and waste in selection mining units (SMUs). The misclassification cost for a given drillhole spacing is equal to the cost of processing waste misclassified as ore (Type I error) plus the value of the ore misclassified as waste (Type II error). Type I and Type II errors were deduced by comparing true and estimated grades at the SMUs against a cutoff grade value, assuming free ore selection. True grades at the SMUs and grades at drillhole samples were generated with conditional simulations. A set of estimated grades at the SMUs, one per drillhole spacing, was generated with ordinary kriging. This method was used to determine the optimum drillhole spacing in a gold deposit. The results showed that the cost of misclassification is sensitive to extreme block values, which tend to be overrepresented. Capping the SMUs' lost values and implementing diggability constraints were recommended to improve calculations of total misclassification costs.
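    The misclassification-cost calculation described above can be sketched as follows. This is an illustrative toy, not the paper's code: the grades, cutoff, and cost figures are hypothetical, and real workflows would compare full conditionally simulated and kriged SMU grids.

```python
# Sketch: misclassification cost for one candidate drillhole spacing,
# from paired true (simulated) and estimated (kriged) SMU grades.
# All names, prices, and grades below are hypothetical.

def misclassification_cost(true_grades, est_grades, cutoff,
                           processing_cost, metal_value):
    """Cost of Type I errors (waste processed as ore) plus the value
    lost to Type II errors (ore discarded as waste)."""
    type1 = 0.0  # waste estimated above cutoff -> processed needlessly
    type2 = 0.0  # ore estimated below cutoff -> value left behind
    for true_g, est_g in zip(true_grades, est_grades):
        if est_g >= cutoff and true_g < cutoff:
            type1 += processing_cost
        elif est_g < cutoff and true_g >= cutoff:
            type2 += true_g * metal_value
    return type1 + type2

# Hypothetical SMU grades (g/t): "true" from a conditional simulation,
# "estimated" from ordinary kriging at one drillhole spacing.
true_g = [0.2, 1.5, 1.2, 2.1, 0.4]
est_g  = [1.1, 1.4, 0.3, 2.0, 0.5]
cost = misclassification_cost(true_g, est_g, cutoff=1.0,
                              processing_cost=10.0, metal_value=40.0)
# one Type I block (10.0) + one Type II block (1.2 * 40.0) -> 58.0
```

    Repeating this for each candidate spacing and intersecting the cost curve with the drilling-cost curve gives the optimum spacing.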

  20. Simulation and Optimization of Contactless Power Transfer System for Rotary Ultrasonic Machining

    Wang Xinwei

    2016-01-01

    Full Text Available In today’s rotary ultrasonic machining (RUM), the power transfer system is based on a contactless power system (a rotary transformer) rather than a slip ring, which cannot cope with high-speed rotation of the tool. The efficiency of the rotary transformer is vital to the whole rotary ultrasonic machine. This paper focuses on simulating the rotary transformer and enhancing its efficiency by optimizing the three main factors that influence it: the gap between the two ferrite cores, the ratio of length to width of the ferrite core, and the thickness of the ferrite. The finite element model of the rotary transformer was built on the Maxwell platform, and the simulation and optimization work was based on this model. The optimized design showed an improvement of roughly 18 percentage points in efficiency over the initial simulation result, from 77.69% to 95.2%.

  1. Optimization of pressurized water reactor shuffling by simulated annealing with heuristics

    Stevens, J.G.; Smith, K.S.; Rempe, K.R.; Downar, T.J.

    1995-01-01

    Simulated-annealing optimization of reactor core loading patterns is implemented with support for design heuristics during candidate pattern generation. The SIMAN optimization module uses the advanced nodal method of SIMULATE-3 and the full cross-section detail of CASMO-3 to evaluate accurately the neutronic performance of each candidate, resulting in high-quality patterns. The use of heuristics within simulated annealing is explored. Heuristics improve the consistency of optimization results for both fast- and slow-annealing runs with no penalty from the exclusion of unusual candidates. Thus, the heuristic application of designer judgment during automated pattern generation is shown to be effective. The capability of the SIMAN module to find and evaluate families of loading patterns that satisfy design constraints and have good objective performance within practical run times is demonstrated. The use of automated evaluations of successive cycles to explore multicycle effects of design decisions is discussed
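    The idea of embedding design heuristics in candidate generation can be sketched generically. This is not the SIMAN/SIMULATE-3 code; it is a minimal simulated-annealing loop on a toy objective, where a hypothetical `heuristic_ok` predicate plays the role of designer judgment by rejecting unusual candidates before any expensive evaluation.

```python
# Generic simulated annealing with a heuristic candidate filter.
import math
import random

def anneal(objective, neighbor, heuristic_ok, x0,
           t0=1.0, cooling=0.95, steps=2000, seed=0):
    rng = random.Random(seed)
    x, fx, t = x0, objective(x0), t0
    for _ in range(steps):
        cand = neighbor(x, rng)
        if not heuristic_ok(cand):   # designer judgment: skip unusual
            continue                 # candidates before evaluating them
        fc = objective(cand)
        # accept improvements always, worse moves with Boltzmann probability
        if fc < fx or rng.random() < math.exp((fx - fc) / t):
            x, fx = cand, fc
        t *= cooling
    return x, fx

# Toy stand-in for a core-loading evaluation: minimize (x - 3)^2.
best, fbest = anneal(
    objective=lambda x: (x - 3.0) ** 2,
    neighbor=lambda x, rng: x + rng.uniform(-0.5, 0.5),
    heuristic_ok=lambda x: 0.0 <= x <= 10.0,  # hypothetical design constraint
    x0=8.0)
```

    In the reactor setting the objective would be a full nodal-code evaluation of a trial loading pattern, which is exactly why filtering candidates cheaply before evaluation pays off.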

  2. A simulator-independent optimization tool based on genetic algorithm applied to nuclear reactor design

    Abreu Pereira, Claudio Marcio Nascimento do; Schirru, Roberto; Martinez, Aquilino Senra

    1999-01-01

    Here is presented an engineering optimization tool based on a genetic algorithm, implemented according to the method proposed in recent work that demonstrated the feasibility of this technique in nuclear reactor core design. The tool is simulator-independent in the sense that it can be customized to use most simulators that read their input parameters from formatted text files and write their outputs to text files. As nuclear reactor simulators generally use this kind of interface, the proposed tool plays an important role in nuclear reactor design. Research reactors often use non-conventional design approaches, leading to situations in which the nuclear engineer faces new optimization problems. In such cases, a good optimization technique, together with easy customization and a friendly man-machine interface, can be very useful. Here, the tool is described and some of its advantages are outlined. (author)

  3. Simulated parallel annealing within a neighborhood for optimization of biomechanical systems.

    Higginson, J S; Neptune, R R; Anderson, F C

    2005-09-01

    Optimization problems for biomechanical systems have become extremely complex. Simulated annealing (SA) algorithms have performed well in a variety of test problems and biomechanical applications; however, despite advances in computer speed, convergence to optimal solutions for systems of even moderate complexity has remained prohibitive. The objective of this study was to develop a portable parallel version of a SA algorithm for solving optimization problems in biomechanics. The algorithm for simulated parallel annealing within a neighborhood (SPAN) was designed to minimize interprocessor communication time and closely retain the heuristics of the serial SA algorithm. The computational speed of the SPAN algorithm scaled linearly with the number of processors on different computer platforms for a simple quadratic test problem and for a more complex forward dynamic simulation of human pedaling.

  4. Evaluation of a proposed optimization method for discrete-event simulation models

    Alexandre Ferreira de Pinho

    2012-12-01

    Full Text Available Optimization methods combined with computer-based simulation have been utilized in a wide range of manufacturing applications. However, current methods exhibit low performance, typically manipulating only a single decision variable at a time. The objective of this article is therefore to evaluate a proposed optimization method for discrete-event simulation models, based on genetic algorithms, which is more efficient in terms of computational time than software packages on the market. It should be emphasized that the response quality is not altered; that is, the proposed method maintains the solutions' effectiveness. The study thus draws a comparison between the proposed method and a simulation tool already available on the market that has been examined in the academic literature. Conclusions are presented, confirming the proposed optimization method's efficiency.
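    The contrast drawn above, evolving several decision variables simultaneously rather than one at a time, can be illustrated with a minimal genetic algorithm. This is a generic sketch, not the article's implementation: the "simulation" is a stand-in analytic function, and the selection/crossover choices are hypothetical.

```python
# Minimal GA: each candidate carries several decision variables
# that are selected, recombined, and mutated together.
import random

def ga_minimize(simulate, n_vars, bounds, pop_size=30,
                generations=60, mut_rate=0.2, seed=1):
    rng = random.Random(seed)
    lo, hi = bounds
    pop = [[rng.uniform(lo, hi) for _ in range(n_vars)]
           for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(pop, key=simulate)
        parents = scored[:pop_size // 2]       # truncation selection (elitist)
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, n_vars)     # one-point crossover
            child = a[:cut] + b[cut:]
            for i in range(n_vars):            # per-gene mutation
                if rng.random() < mut_rate:
                    child[i] = rng.uniform(lo, hi)
            children.append(child)
        pop = parents + children
    return min(pop, key=simulate)

# Stand-in "simulation response": minimize squared deviation from 5
# in all three decision variables at once.
best = ga_minimize(lambda x: sum((v - 5.0) ** 2 for v in x),
                   n_vars=3, bounds=(0.0, 10.0))
```

    In a real discrete-event setting, `simulate` would launch the simulation model with the candidate's variable values and return the performance measure to be minimized.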

  5. Device simulation and optimization of laterally-contacted-unipolar-nuclear detector

    Lee, E Y

    1999-01-01

    Unipolar gamma-ray detectors offer the possibility of enhanced energy resolution and detection sensitivity over conventional planar detectors. However, these detectors are difficult to understand and to fabricate, due to their three-dimensional geometry and multiple electrodes. Computer simulation offers a powerful way to design and optimize these detectors, by giving the internal electric fields, weighting potentials, and spatially resolved detector responses. Simulation and optimization of a unipolar gamma-ray detector called the laterally-contacted-unipolar-nuclear detector (LUND) are shown. For 662 keV gamma rays from a ¹³⁷Cs source, the simulation and optimization of the LUND improved the energy resolution from 1.6% to 1.3% and increased the active detector volume from 4% to 38% of the total detector volume.

  6. Empirical optimization of undulator tapering at FLASH2 and comparison with numerical simulations

    Mak, Alan; Curbis, Francesca; Werin, Sverker [Lund Univ. (Sweden). MAX IV Laboratory; Faatz, Bart [Deutsches Elektronen-Synchrotron (DESY), Hamburg (Germany)

    2016-08-15

    In a free-electron laser equipped with variable-gap undulator modules, the technique of undulator tapering opens up the possibility to increase the radiation power beyond the initial saturation point, thus enhancing the efficiency of the laser. The effectiveness of the enhancement relies on the proper optimization of the taper profile. In this work, a multidimensional optimization approach is implemented empirically in the X-ray free-electron laser FLASH2. The empirical results are compared with numerical simulations.

  7. Loading pattern optimization by multi-objective simulated annealing with screening technique

    Tong, K. P.; Hyun, C. L.; Hyung, K. J.; Chang, H. K.

    2006-01-01

    This paper presents a new multi-objective function made up of the main objective term as well as penalty terms related to the constraints. All the terms are represented in the same functional form, and the coefficient of each term is normalized so that every term has equal weighting in the subsequent simulated annealing optimization calculations. The screening technique introduced in previous work is also adopted in order to save computer time in the 3-D neutronics evaluation of trial loading patterns. For a numerical test of the new multi-objective function in loading pattern optimization, optimum loading patterns for the initial core and the cycle 7 reload PWR core of Yonggwang Unit 4 are calculated by the simulated annealing algorithm with the screening technique. A total of 10 optimum loading patterns were obtained for the initial core through 10 independent simulated annealing optimization runs. For the cycle 7 reload core, one optimum loading pattern has been obtained from a single simulated annealing optimization run. More SA optimization runs will be conducted to obtain optimum loading patterns for the cycle 7 reload core, and the results will be presented in future work. (authors)

  8. Energy and Delay Optimization of Heterogeneous Multicore Wireless Multimedia Sensor Nodes by Adaptive Genetic-Simulated Annealing Algorithm

    Xing Liu

    2018-01-01

    Full Text Available Energy efficiency and delay optimization are significant for the proliferation of wireless multimedia sensor networks (WMSNs). In this article, an energy-efficient, delay-efficient, hardware and software co-optimization platform is researched to minimize the energy cost while guaranteeing the deadline of real-time WMSN tasks. First, a multicore reconfigurable WMSN hardware platform is designed and implemented. This platform uses both a heterogeneous multicore architecture and the dynamic voltage and frequency scaling (DVFS) technique. By this means, the nodes can adjust their hardware characteristics dynamically according to the software run-time context. Consequently, the software can be executed more efficiently, with less energy cost and shorter execution time. Then, based on this hardware platform, an energy and delay multiobjective optimization algorithm and a DVFS adaptation algorithm are investigated. These algorithms aim to find the global energy-optimal solution within an acceptable calculation time and to strip the time redundancy from the task execution process. Thus, the energy efficiency of the WMSN node can be improved significantly even under a strict execution-time constraint. Simulation and real-world experiments showed that the proposed approaches can decrease the energy cost by more than 29% compared to a traditional single-core WMSN node. Moreover, the node can react quickly to time-sensitive events.

  9. Generating optimal control simulations of musculoskeletal movement using OpenSim and MATLAB.

    Lee, Leng-Feng; Umberger, Brian R

    2016-01-01

    Computer modeling, simulation and optimization are powerful tools that have seen increased use in biomechanics research. Dynamic optimizations can be categorized as either data-tracking or predictive problems. The data-tracking approach has been used extensively to address human movement problems of clinical relevance. The predictive approach also holds great promise, but has seen limited use in clinical applications. Enhanced software tools would facilitate the application of predictive musculoskeletal simulations to clinically-relevant research. The open-source software OpenSim provides tools for generating tracking simulations but not predictive simulations. However, OpenSim includes an extensive application programming interface that permits extending its capabilities with scripting languages such as MATLAB. In the work presented here, we combine the computational tools provided by MATLAB with the musculoskeletal modeling capabilities of OpenSim to create a framework for generating predictive simulations of musculoskeletal movement based on direct collocation optimal control techniques. In many cases, the direct collocation approach can be used to solve optimal control problems considerably faster than traditional shooting methods. Cyclical and discrete movement problems were solved using a simple 1 degree of freedom musculoskeletal model and a model of the human lower limb, respectively. The problems could be solved in reasonable amounts of time (several seconds to 1-2 hours) using the open-source IPOPT solver. The problems could also be solved using the fmincon solver that is included with MATLAB, but the computation times were excessively long for all but the smallest of problems. The performance advantage for IPOPT was derived primarily by exploiting sparsity in the constraints Jacobian. The framework presented here provides a powerful and flexible approach for generating optimal control simulations of musculoskeletal movement using OpenSim and MATLAB. 

  10. Mathematical Modelling, Simulation, and Optimal Control of the 2014 Ebola Outbreak in West Africa

    Amira Rachah

    2015-01-01

    it is crucial to model the virus and simulate it. In this paper, we begin by studying a simple mathematical model that describes the 2014 Ebola outbreak in Liberia. We then use numerical simulations and available data provided by the World Health Organization to validate the model. Moreover, we develop a new mathematical model including vaccination of individuals. We discuss different cases of vaccination in order to predict the effect of vaccination on the infected individuals over time. Finally, we apply optimal control to study the impact of vaccination on the spread of the Ebola virus. The optimal control problem is solved numerically by using a direct multiple shooting method.
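    The kind of compartmental simulation described above can be sketched with a basic SIR system plus a constant vaccination rate, integrated with explicit Euler steps. This is a hedged illustration, not the paper's model: the rate parameters below are hypothetical, not values fitted to WHO data.

```python
# Toy SIR model with vaccination: susceptibles are moved directly to the
# removed compartment at rate v, on top of infection (beta) and recovery
# (gamma). Explicit Euler integration; s, i, r are population fractions.

def simulate_sir_vacc(beta=0.3, gamma=0.1, v=0.05, dt=0.1, steps=2000):
    s, i, r = 0.99, 0.01, 0.0
    for _ in range(steps):
        new_inf = beta * s * i      # new infections per unit time
        vacc = v * s                # vaccinations per unit time
        ds = (-new_inf - vacc) * dt
        di = (new_inf - gamma * i) * dt
        dr = (gamma * i + vacc) * dt
        s, i, r = s + ds, i + di, r + dr
    return s, i, r

s, i, r = simulate_sir_vacc()
# with this vaccination rate the outbreak dies out well before t = 200,
# and the three compartments still sum to the whole population
```

    Varying `v` and re-running reproduces, in miniature, the paper's exercise of predicting the effect of different vaccination intensities on the infected population over time.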

  11. Mathematical exergoeconomic optimization of a complex cogeneration plant aided by a professional process simulator

    Vieira, Leonardo S.; Donatelli, Joao L.; Cruz, Manuel E.

    2006-01-01

    In this work we present the development and implementation of an integrated approach for mathematical exergoeconomic optimization of complex thermal systems. By exploiting the computational power of a professional process simulator, the proposed integrated approach permits the optimization routine to ignore the variables associated with the thermodynamic balance equations and thus deal only with the decision variables. To demonstrate the capabilities of the integrated approach, it is here applied to a complex cogeneration system, which includes all the major components of a typical thermal plant, and requires more than 800 variables for its simulation

  12. Optimized calibration of neutronic-thermodynamic simulator for low power fast reactors

    Jachic, J.; Waintraub, M.

    1986-01-01

    Aiming at a general optimization of the design, with controlled fuel depletion and management, and motivated by the feasibility of applying the SIRZ simulator to this problem, we present here an optimized and systematic calibration of the simulator. The control variables and the corresponding calibration equations for the buckling factors are shown explicitly. After iterative linearizations, the resulting linear programming problems were solved by the simplex method. The results show that the optimum calibration is easily obtained if the convergence control parameters are chosen adequately. (Author) [pt

  13. Minimization of the LCA impact of thermodynamic cycles using a combined simulation-optimization approach

    Brunet, Robert; Cortés, Daniel; Guillén-Gosálbez, Gonzalo; Jiménez, Laureano; Boer, Dieter

    2012-01-01

    This work presents a computational approach for the simultaneous minimization of the total cost and environmental impact of thermodynamic cycles. Our method combines process simulation, multi-objective optimization and life cycle assessment (LCA) within a unified framework that identifies, in a systematic manner, optimal design and operating conditions according to several economic and LCA impacts. Our approach takes advantage of the complementary strengths of process simulation (in which mass and energy balances and thermodynamic calculations are implemented in an easy manner) and rigorous deterministic optimization tools. We demonstrate the capabilities of this strategy by means of two case studies in which we address the design of a 10 MW Rankine cycle modeled in Aspen Hysys and a 90 kW ammonia-water absorption cooling cycle implemented in Aspen Plus. Numerical results show that it is possible to achieve environmental and cost savings using our rigorous approach. - Highlights: ► Novel framework for the optimal design of thermodynamic cycles. ► Combined use of simulation and optimization tools. ► Optimal design and operating conditions according to several economic and LCA impacts. ► Design of a 10 MW Rankine cycle in Aspen Hysys, and a 90 kW absorption cycle in Aspen Plus.

  14. Minimization of the LCA impact of thermodynamic cycles using a combined simulation-optimization approach

    Brunet, Robert; Cortes, Daniel [Departament d' Enginyeria Quimica, Escola Tecnica Superior d' Enginyeria Quimica, Universitat Rovira i Virgili, Campus Sescelades, Avinguda Paisos Catalans 26, 43007 Tarragona (Spain); Guillen-Gosalbez, Gonzalo [Departament d' Enginyeria Quimica, Escola Tecnica Superior d' Enginyeria Quimica, Universitat Rovira i Virgili, Campus Sescelades, Avinguda Paisos Catalans 26, 43007 Tarragona (Spain); Jimenez, Laureano [Departament d' Enginyeria Quimica, Escola Tecnica Superior d' Enginyeria Quimica, Universitat Rovira i Virgili, Campus Sescelades, Avinguda Paisos Catalans 26, 43007 Tarragona (Spain); Boer, Dieter [Departament d' Enginyeria Mecanica, Escola Tecnica Superior d' Enginyeria, Universitat Rovira i Virgili, Campus Sescelades, Avinguda Paisos Catalans 26, 43007, Tarragona (Spain)

    2012-12-15

    This work presents a computational approach for the simultaneous minimization of the total cost and environmental impact of thermodynamic cycles. Our method combines process simulation, multi-objective optimization and life cycle assessment (LCA) within a unified framework that identifies, in a systematic manner, optimal design and operating conditions according to several economic and LCA impacts. Our approach takes advantage of the complementary strengths of process simulation (in which mass and energy balances and thermodynamic calculations are implemented in an easy manner) and rigorous deterministic optimization tools. We demonstrate the capabilities of this strategy by means of two case studies in which we address the design of a 10 MW Rankine cycle modeled in Aspen Hysys and a 90 kW ammonia-water absorption cooling cycle implemented in Aspen Plus. Numerical results show that it is possible to achieve environmental and cost savings using our rigorous approach. - Highlights: ► Novel framework for the optimal design of thermodynamic cycles. ► Combined use of simulation and optimization tools. ► Optimal design and operating conditions according to several economic and LCA impacts. ► Design of a 10 MW Rankine cycle in Aspen Hysys, and a 90 kW absorption cycle in Aspen Plus.

  15. Simulated annealing method for electronic circuits design: adaptation and comparison with other optimization methods; La méthode du recuit simulé pour la conception des circuits électroniques : adaptation et comparaison avec d'autres méthodes d'optimisation

    Berthiau, G

    1995-10-01

    The circuit design problem consists in determining acceptable parameter values (resistors, capacitors, transistor geometries ...) which allow the circuit to meet various user-given operational criteria (DC consumption, AC bandwidth, transient times ...). This task is equivalent to a multidimensional and/or multi-objective optimization problem: functions of n variables have to be minimized in a hyper-rectangular domain; equality constraints may optionally be specified. A similar problem consists in fitting component models: there, the optimization variables are the model parameters, and one aims at minimizing a cost function built on the error between the model response and the data measured on the component. The optimization method chosen for this kind of problem is simulated annealing. This method, which originates in combinatorial optimization, has been adapted and compared with other global optimization methods for continuous-variable problems. An efficient variable-discretization strategy and a set of complementary stopping criteria have been proposed. The different parameters of the method have been tuned on analytical functions whose minima are known and which are classically used in the literature. Our simulated annealing algorithm has been coupled with the open electrical simulator SPICE-PAC, whose modular structure allows the chaining of simulations required by the circuit optimization process. For high-dimensional problems, we proposed a partitioning technique which keeps CPU time proportional to the number of variables. To compare our method with others, we adapted three further methods from the combinatorial optimization domain (the threshold method, a genetic algorithm, and the tabu search method). The tests were performed on the same set of test functions, and the results allow a first comparison between these methods applied to continuous optimization variables. (Abstract Truncated)

  16. Numerical Simulation of a Tumor Growth Dynamics Model Using Particle Swarm Optimization.

    Wang, Zhijun; Wang, Qing

    Tumor cell growth models involve high-dimensional parameter spaces that require computationally tractable methods to solve. To address a proposed tumor growth dynamics mathematical model, an instance of the particle swarm optimization method was implemented to speed up the search through the multi-dimensional parameter space for optimal parameter values that fit experimental data from cancer cells in mice. The fitness function, which measures the difference between calculated results and experimental data, was minimized in the numerical simulation process. The results and search efficiency of the particle swarm optimization method were compared to those of other evolutionary methods such as genetic algorithms.
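    A schematic particle swarm optimizer for parameter fitting, in the spirit of the study above, can be written in a few lines. This is a generic sketch, not the paper's code: the fitness here is a stand-in quadratic misfit with hypothetical target parameters, not the tumor-growth model.

```python
# Minimal particle swarm optimization (PSO) for parameter fitting.
import random

def pso_minimize(fitness, dim, lo, hi, n_particles=20, iters=100, seed=2):
    rng = random.Random(seed)
    pos = [[rng.uniform(lo, hi) for _ in range(dim)]
           for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]             # per-particle best position
    gbest = min(pbest, key=fitness)[:]      # swarm-wide best position
    w, c1, c2 = 0.7, 1.5, 1.5               # inertia / cognitive / social
    for _ in range(iters):
        for k in range(n_particles):
            for d in range(dim):
                vel[k][d] = (w * vel[k][d]
                             + c1 * rng.random() * (pbest[k][d] - pos[k][d])
                             + c2 * rng.random() * (gbest[d] - pos[k][d]))
                pos[k][d] = min(hi, max(lo, pos[k][d] + vel[k][d]))
            if fitness(pos[k]) < fitness(pbest[k]):
                pbest[k] = pos[k][:]
                if fitness(pbest[k]) < fitness(gbest):
                    gbest = pbest[k][:]
    return gbest

# Hypothetical "true" parameters (2.0, 0.5) recovered from a quadratic misfit.
best = pso_minimize(lambda p: (p[0] - 2.0) ** 2 + (p[1] - 0.5) ** 2,
                    dim=2, lo=0.0, hi=5.0)
```

    In the paper's setting, `fitness` would integrate the tumor growth ODEs with the candidate parameters and return the sum of squared deviations from the experimental data.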

  17. Developing a simulation framework for safe and optimal trajectories considering drivers’ driving style

    Gruber, Thierry; Larue, Grégoire S.; Rakotonirainy, Andry

    2017-01-01

    Advanced driving assistance systems (ADAS) have huge potential for improving road safety and travel times. However, their take-up in the market is very slow, and these systems should consider the driver's preferences to increase adoption rates. The aim of this study is to develop a model providing drivers with the optimal trajectory considering the motorist's driving style in real time. Travel duration and safety are the main parameters used to find the optimal trajectory. A simulation framework to determine the optimal trajectory was developed in which the ego car travels in a highway environment...

  18. Simulation of a turbofan engine for evaluation of multivariable optimal control concepts. [(computerized simulation)

    Seldner, K.

    1976-01-01

    The development of control systems for jet engines requires a real-time computer simulation. The simulation provides an effective tool for evaluating control concepts and problem areas prior to actual engine testing. The development and use of a real-time simulation of the Pratt and Whitney F100-PW100 turbofan engine are described. The simulation was used in a multivariable optimal control research program based on linear-quadratic-regulator theory. The simulation is used to generate linear engine models at selected operating points and to evaluate the control algorithm. To reduce the complexity of the design, it is desirable to reduce the order of the linear model; a technique for doing so is discussed, and selected results from high- and low-order models are compared. The LQR control algorithms can be programmed on a digital computer, which will control the engine simulation over the desired flight envelope.

  19. Reliability-based design optimization using a generalized subset simulation method and posterior approximation

    Ma, Yuan-Zhuo; Li, Hong-Shuang; Yao, Wei-Xing

    2018-05-01

    The evaluation of the probabilistic constraints in reliability-based design optimization (RBDO) problems has always been significant and challenging work, which strongly affects the performance of RBDO methods. This article deals with RBDO problems using a recently developed generalized subset simulation (GSS) method and a posterior approximation approach. The posterior approximation approach is used to transform all the probabilistic constraints into ordinary constraints as in deterministic optimization. The assessment of multiple failure probabilities required by the posterior approximation approach is achieved by GSS in a single run at all supporting points, which are selected by a proper experimental design scheme combining Sobol' sequences and Bucher's design. Sequentially, the transformed deterministic design optimization problem can be solved by optimization algorithms, for example, the sequential quadratic programming method. Three optimization problems are used to demonstrate the efficiency and accuracy of the proposed method.

  20. Numerical simulation and optimized design of cased telescoped ammunition interior ballistic

    Jia-gang Wang

    2018-04-01

    Full Text Available In order to achieve an optimized design of cased telescoped ammunition (CTA) interior ballistics, a genetic algorithm was introduced into the optimal design, coupled with the CTA interior ballistic model. Given the interior ballistic characteristics of a CTA gun, the goal of the CTA interior ballistic design is to obtain a projectile velocity as large as possible. The optimal design was carried out using a genetic algorithm by setting the peak pressure and varying the chamber volume and gunpowder charge density. A numerical simulation of the interior ballistics based on a 35 mm CTA firing experimental scheme was conducted, and the genetic algorithm was then used for numerical optimization. The projectile muzzle velocity of the optimized scheme increased from 1168 m/s for the initial experimental scheme to 1182 m/s. Four optimization schemes were then obtained from several independent optimization runs. The schemes were compared with each other, and the differences between them are small: the peak pressure and muzzle velocity of these schemes are almost the same. The results show that the genetic algorithm is effective in the optimal design of CTA interior ballistics. This work lays the foundation for further CTA interior ballistic design. Keywords: Cased telescoped ammunition, Interior ballistics, Gunpowder, Optimization genetic algorithm

  1. Coastal aquifer management under parameter uncertainty: Ensemble surrogate modeling based simulation-optimization

    Janardhanan, S.; Datta, B.

    2011-12-01

    Surrogate models are widely used to develop computationally efficient simulation-optimization models to solve complex groundwater management problems. Artificial-intelligence-based models are most often used for this purpose, trained with predictor-predictand data obtained from a numerical simulation model. Most often this is implemented under the assumption that the parameters and boundary conditions used in the numerical simulation model are perfectly known. However, in most practical situations these values are uncertain, which limits the applicability of such approximation surrogates. In our study we develop a surrogate-model-based coupled simulation-optimization methodology for determining optimal pumping strategies for coastal aquifers considering parameter uncertainty. An ensemble surrogate modeling approach is used along with multiple-realization optimization. The methodology is used to solve a multi-objective coastal aquifer management problem with two conflicting objectives. Hydraulic conductivity and aquifer recharge are treated as uncertain. The three-dimensional coupled flow and transport simulation model FEMWATER is used to simulate the aquifer responses for a number of scenarios corresponding to Latin hypercube samples of pumping and uncertain parameters, generating input-output patterns for training the surrogate models. Non-parametric bootstrap sampling of this original data set is used to generate multiple data sets which belong to different regions in the multi-dimensional decision and parameter space. These data sets are used to train and test multiple surrogate models based on genetic programming. The ensemble of surrogate models is then linked to a multi-objective genetic algorithm to solve the pumping optimization problem. Two conflicting objectives, viz., maximizing total pumping from beneficial wells and minimizing the total pumping from barrier wells for hydraulic control of
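
The ensemble-surrogate idea, training several cheap models on bootstrap resamples of simulator data and averaging their predictions, can be sketched in a few lines; the linear fit below is only a stand-in for the genetic-programming surrogates and the training data are invented:

```python
import random

def bootstrap_ensemble(data, fit, n_models=10, seed=2):
    """Fit surrogates to bootstrap resamples; predict by ensemble averaging."""
    rng = random.Random(seed)
    models = [fit([rng.choice(data) for _ in data]) for _ in range(n_models)]
    return lambda x: sum(m(x) for m in models) / n_models

def fit_line(pts):
    """Least-squares line, standing in for a genetic-programming surrogate."""
    n = len(pts)
    sx = sum(x for x, _ in pts); sy = sum(y for _, y in pts)
    sxx = sum(x * x for x, _ in pts); sxy = sum(x * y for x, y in pts)
    b = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    a = (sy - b * sx) / n
    return lambda x: a + b * x

# Invented noise-free training data from the response y = 2x + 1.
data = [(float(x), 2.0 * x + 1.0) for x in range(10)]
surrogate = bootstrap_ensemble(data, fit_line)
```

In the paper's setting, each resample trains a surrogate valid in a different region of the decision/parameter space, and the ensemble feeds the multi-objective GA.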

  2. Development of free-piston Stirling engine performance and optimization codes based on Martini simulation technique

    Martini, William R.

    1989-01-01

    A FORTRAN computer code is described that could be used to design and optimize a free-displacer, free-piston Stirling engine similar to the RE-1000 engine made by Sunpower. The code contains options for specifying displacer and power-piston motion or for allowing these motions to be calculated by a force balance. The engine load may be a dashpot, inertial compressor, hydraulic pump, or linear alternator. Cycle analysis may be done by isothermal analysis or adiabatic analysis; adiabatic analysis may use either the Martini moving-gas-node analysis or the Rios second-order Runge-Kutta analysis. Flow-loss and heat-loss equations are included, as are graphical displays of engine motions, pressures, and temperatures. Programming for optimizing up to 15 independent dimensions is included. Sample performance results are shown for both specified and unconstrained piston motions; these results are shown as generated by each of the two Martini analyses. Two sample optimization searches are shown using specified-piston-motion isothermal analysis, one with three adjustable inputs and one with four. Two further optimization searches for calculated piston motion are presented, again for three and for four adjustable inputs. The effect of leakage is evaluated, and suggestions for further work are given.

  3. Real-time simulation requirements for study and optimization of power system controls

    Nakra, Harbans; McCallum, David; Gagnon, Charles [Institut de Recherche d'Hydro-Quebec, Quebec, PQ (Canada); Venne, Andre; Gagnon, Julien [Hydro-Quebec, Montreal, PQ (Canada)

    1994-12-31

    At the time of ordering for the multi-terminal dc system linking Hydro-Quebec with New England, Hydro-Quebec also ordered functionally duplicate controls of all the converters and installed these in its real-time simulation laboratory. The Hydro-Quebec ac system was also simulated in detail, and the testing of the controls was thus made possible in a realistic environment. Many field tests were duplicated, and many additional tests were done for correction and optimization. This paper describes some of the features of the real-time simulation carried out for this purpose. (author) 3 figs.

  4. Simulation-Based Multiobjective Optimization of Timber-Glass Residential Buildings in Severe Cold Regions

    Yunsong Han

    2017-12-01

    In the current context of increasing energy demand, timber-glass buildings will become a necessary trend in sustainable architecture. Especially in the severe cold zones of China, the energy consumption and visual comfort of residential buildings have attracted wide attention, and there are always trade-offs between multiple objectives. This paper proposes a simulation-based multiobjective optimization method to improve the daylighting, energy efficiency, and economic performance of timber-glass buildings in severe cold regions. Timber-glass building form variables have been selected as the decision variables, including building width, roof height, south and north window-to-wall ratio (WWR), window height, and orientation. A simulation-based multiobjective optimization model has been developed to optimize these performance objectives simultaneously. The results show that Daylighting Autonomy (DA) presents negative correlations with Energy Use Intensity (EUI) and total cost. Additionally, with an increase in DA, Useful Daylighting Illuminance (UDI) demonstrates a tendency to first increase and then decrease. Using this optimization model, four building performances have been improved from the initial generation to the final generation, which shows that simulation-based multiobjective optimization is a promising approach for improving the daylighting, energy efficiency, and economic performance of timber-glass buildings in severe cold regions.
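
The trade-offs described (DA versus EUI and cost) are exactly what a Pareto front captures. A minimal non-dominated filter, with invented (EUI, total cost) pairs for candidate designs and both objectives minimized:

```python
def pareto_front(points):
    """Non-dominated subset when every objective is minimized."""
    return [p for p in points
            if not any(q != p and all(q[i] <= p[i] for i in range(len(p)))
                       for q in points)]

# Invented (EUI, total cost) pairs for four candidate designs.
designs = [(120.0, 9.0), (100.0, 12.0), (130.0, 8.0), (125.0, 10.0)]
front = pareto_front(designs)
# (125.0, 10.0) loses on both objectives to (120.0, 9.0); the rest trade off.
```

A multiobjective optimizer such as NSGA-II evolves designs toward this front rather than toward a single best point.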

  5. Optimizing a physical security configuration using a highly detailed simulation model

    Marechal, T.M.A.; Smith, A.E.; Ustun, V.; Smith, J.S.; Lefeber, A.A.J.; Badiru, A.B.; Thomas, M.U.

    2009-01-01

    This research is focused on using a highly detailed simulation model to create a physical security system to prevent intrusions in a building. Security consists of guards and security cameras. The problem is represented as a binary optimization problem. A new heuristic is proposed to do the security

  6. Comparison of Lasserre's Measure-based Bounds for Polynomial Optimization to Bounds Obtained by Simulated Annealing

    de Klerk, Etienne; Laurent, Monique

    We consider the problem of minimizing a continuous function f over a compact set K. We compare the hierarchy of upper bounds proposed by Lasserre in [SIAM J. Optim. 21(3) (2011), pp. 864-885] to bounds that may be obtained from simulated annealing. We show that, when f is a polynomial and K a convex

  7. Simulation-Based Planning of Optimal Conditions for Industrial Computed Tomography

    Reisinger, S.; Kasperl, S.; Franz, M.

    2011-01-01

    We present a method to optimise conditions for industrial computed tomography (CT). This optimisation is based on a deterministic simulation. Our algorithm finds task-specific CT equipment settings to achieve optimal exposure parameters by means of an STL-model of the specimen and a raytracing...

  8. Robust optimization of robotic pick and place operations for deformable objects through simulation

    Bo Jorgensen, Troels; Debrabant, Kristian; Kruger, Norbert

    2016-01-01

    for the task. The solutions are parameterized in terms of the robot motion and the gripper configuration, and after each simulation various objective scores are determined and combined. This enables the use of various optimization strategies. Based on visual inspection of the most robust solution found...

  9. Simulation and optimization of stable isotope 13C separation by carbon monoxide cryogenic distillation

    Li Hulin; Ju Yonglin; Li Liangjun; Xu Dagang

    2009-01-01

    A stable isotope 13C separation column was set up based on carbon monoxide (CO) cryogenic distillation. The diameter of the column is 45 mm and the packing height is 17.5 m, of which the enriching section accounts for 15 m and the stripping section for 2.5 m. First, computer simulation results were validated against test results. Second, tests were replaced by computer simulations in order to obtain the optimal operation conditions of the experimental setup. The impacts of column pressure, feeding velocity, reflux ratio, withdrawal velocity, and boiling power on the products were studied comprehensively. The optimized design of the experimental device was then achieved through computer simulations combined with uniform experimental design. The final results show that the optimal operation conditions in the built column are as follows: boiling power, 250 W; column pressure, 54 kPa; reflux ratio, 84. The conclusion is that the combination of computer simulation and experimental design can be applied to 13C industrial design and can be generalized to traditional distillation processes to realize optimized design. (authors)

  10. Next-generation simulation and optimization platform for forest management and analysis

    Antti Makinen; Jouni Kalliovirta; Jussi Rasinmaki

    2009-01-01

    Late developments in the objectives and the data collection methods of forestry create new challenges and possibilities in forest management planning. Tools in forest management and forest planning systems must be able to make good use of novel data sources, use new models, and solve complex forest planning tasks at different scales. The SIMulation and Optimization (...

  11. A proposed simulation optimization model framework for emergency department problems in public hospital

    Ibrahim, Ireen Munira; Liong, Choong-Yeun; Bakar, Sakhinah Abu; Ahmad, Norazura; Najmuddin, Ahmad Farid

    2015-12-01

    The Emergency Department (ED) is a very complex system with limited resources to support increasing demand. ED services are considered to be of good quality if they can meet patients' expectations. Long waiting times and lengths of stay are the main problems faced by the management. The ED management should give greater emphasis to its resource capacity in order to increase the quality of services and thus patient satisfaction. This paper reviews work in progress of a study being conducted in a government hospital in Selangor, Malaysia. It proposes a simulation optimization model framework which is used to study ED operations and problems as well as to find optimal solutions to those problems. It is hoped that the integration of simulation and optimization can assist management in the decision-making process regarding resource capacity planning in order to improve current and future ED operations.
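
A discrete-event queueing sketch shows the kind of simulation such a framework would wrap: a toy M/M/c model of an ED, used to compare mean waiting times under two staffing levels (the arrival and service rates are invented, not from the study):

```python
import heapq, random

def simulate_ed(n_servers, arrival_rate, service_rate, n_patients=5000, seed=3):
    """Toy M/M/c queue standing in for an ED: returns the mean waiting time."""
    rng = random.Random(seed)
    t = 0.0
    free_at = [0.0] * n_servers          # time each server next becomes free
    heapq.heapify(free_at)
    total_wait = 0.0
    for _ in range(n_patients):
        t += rng.expovariate(arrival_rate)     # next patient arrives
        earliest = heapq.heappop(free_at)      # first server to free up
        start = max(t, earliest)
        total_wait += start - t
        heapq.heappush(free_at, start + rng.expovariate(service_rate))
    return total_wait / n_patients

# Crude capacity-planning comparison: does a fourth doctor cut waiting?
w3 = simulate_ed(3, arrival_rate=2.5, service_rate=1.0)
w4 = simulate_ed(4, arrival_rate=2.5, service_rate=1.0)
```

An optimization layer would search over `n_servers` (and shift schedules) against cost and waiting-time objectives returned by runs like these.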

  12. Using multi-disciplinary optimization and numerical simulation on the transiting exoplanet survey satellite

    Stoeckel, Gerhard P.; Doyle, Keith B.

    2017-08-01

    The Transiting Exoplanet Survey Satellite (TESS) is an instrument consisting of four wide field-of-view CCD cameras dedicated to the discovery of exoplanets around the brightest stars, and understanding the diversity of planets and planetary systems in our galaxy. Each camera utilizes a seven-element lens assembly with low-power and low-noise CCD electronics. Advanced multivariable optimization and numerical simulation capabilities accommodating arbitrarily complex objective functions have been added to the internally developed Lincoln Laboratory Integrated Modeling and Analysis Software (LLIMAS) and used to assess system performance. Various optical phenomena are accounted for in these analyses, including full dn/dT spatial distributions in lenses and charge diffusion in the CCD electronics. These capabilities are utilized to design CCD shims for thermal vacuum chamber testing and flight, and to verify comparable performance in both environments across a range of wavelengths, field points and temperature distributions. Additionally, optimizations and simulations are used for model correlation and robustness optimization.

  13. On-Line Optimizing Control of a Simulated Continuous Yeast Fermentation

    Andersen, Maria Y.; Asferg, L.; Brabrand, H.

    1989-01-01

    On-line optimizing control of a simulated fermentation is investigated using a non-segregated dynamic model of aerobic glucose-limited growth of Saccharomyces cerevisiae. The optimization procedure is carried out with an underlying adaptive regulator to stabilize the culture. This stabilization...... is especially important during the setpoint changes specified by the optimizing routine. A linear ARMAX model structure is used for the fermentation process with dilution rate as input and biomass as output variable. The parameters of the linear model structure are estimated using a pseudo-linear regression...... method with bandpass filtering of in- and output variables in order to ensure low-frequency validity of the estimated model. An LQ-regulator is used with iterative solution of the Riccati equation. Simulation results illustrate the tuning of the underlying regulator, and the effect of perturbing...
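
The ARMAX estimation with pseudo-linear regression described above is simplified below to plain recursive least squares on a first-order ARX model (hypothetical plant coefficients, no noise), which shows the recursion such an adaptive regulator relies on:

```python
import random

def rls_arx(us, ys, lam=1.0):
    """Recursive least squares for a first-order ARX model
    y[k] = a*y[k-1] + b*u[k-1], as a hand-rolled 2x2 recursion."""
    theta = [0.0, 0.0]                      # estimates of (a, b)
    P = [[1000.0, 0.0], [0.0, 1000.0]]      # inverse-information matrix
    for k in range(1, len(ys)):
        phi = [ys[k - 1], us[k - 1]]        # regressor vector
        Pphi = [P[0][0] * phi[0] + P[0][1] * phi[1],
                P[1][0] * phi[0] + P[1][1] * phi[1]]
        denom = lam + phi[0] * Pphi[0] + phi[1] * Pphi[1]
        K = [Pphi[0] / denom, Pphi[1] / denom]          # gain vector
        err = ys[k] - (theta[0] * phi[0] + theta[1] * phi[1])
        theta = [theta[0] + K[0] * err, theta[1] + K[1] * err]
        P = [[(P[i][j] - K[i] * Pphi[j]) / lam for j in range(2)]
             for i in range(2)]
    return theta

# Noise-free data from a hypothetical plant y[k] = 0.8 y[k-1] + 0.5 u[k-1].
rng = random.Random(6)
us = [rng.uniform(-1.0, 1.0) for _ in range(200)]
ys = [0.0]
for k in range(1, 200):
    ys.append(0.8 * ys[k - 1] + 0.5 * us[k - 1])
a_hat, b_hat = rls_arx(us, ys)
```

A forgetting factor `lam < 1` and the bandpass filtering mentioned in the abstract would be added for time-varying, noisy cultures.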

  14. Modeling, Simulation and Optimization of Hydrogen Production Process from Glycerol using Steam Reforming

    Park, Jeongpil; Cho, Sunghyun; Kim, Tae-Ok; Shin, Dongil; Lee, Seunghwan; Moon, Dong Ju

    2014-01-01

    For improved sustainability of the biorefinery industry, the biorefinery byproduct glycerol is being investigated as an alternative source for hydrogen production. This research designs and optimizes a hydrogen-production process for small hydrogen stations using steam reforming of purified glycerol as the main reaction, replacing existing processes relying on steam methane reforming. Modeling, simulation and optimization of the proposed process are performed using a commercial process simulator. A mixture of glycerol and steam is used to make syngas in the reforming process; hydrogen is then produced from carbon monoxide and steam through the water-gas shift reaction, and finally separated from carbon dioxide using PSA. This study shows a higher yield than former U.S. DOE and Linde studies. Economic evaluations are performed for optimal planning of constructing a domestic hydrogen energy infrastructure based on the proposed glycerol-based hydrogen station.

  15. Optimization of Gamma Knife treatment planning via guided evolutionary simulated annealing

    Zhang Pengpeng; Dean, David; Metzger, Andrew; Sibata, Claudio

    2001-01-01

    We present a method for generating optimized Gamma Knife™ (Elekta, Stockholm, Sweden) radiosurgery treatment plans. This semiautomatic method produces a highly conformal shot-packing plan for the irradiation of an intracranial tumor. We simulate optimal treatment planning criteria with a probability function that is linked to every voxel in a volumetric (MR or CT) region of interest. This sigmoidal P+ parameter models the requirement of conformality (i.e., tumor ablation and normal tissue sparing). After determination of initial radiosurgery treatment parameters, a guided evolutionary simulated annealing (GESA) algorithm is used to find the optimal size, position, and weight for each shot. The three-dimensional GESA algorithm searches the shot parameter space more thoroughly than is possible during manual shot packing and provides one plan that is suitable to the treatment criteria of the attending neurosurgeon and radiation oncologist. The result is a more conformal plan, which also reduces redundancy and saves treatment administration time.
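
The sigmoidal per-voxel criterion can be sketched directly; the dose values, the 50%-response dose and the slope below are invented for illustration, not the paper's:

```python
import math

def conformality_score(doses, in_tumor, d50=10.0, slope=1.0):
    """Mean sigmoidal per-voxel score: reward dose inside the tumor,
    penalize it in normal tissue (a sketch of a P+-style criterion)."""
    p = lambda d: 1.0 / (1.0 + math.exp(-slope * (d - d50)))
    return sum(p(d) if t else 1.0 - p(d)
               for d, t in zip(doses, in_tumor)) / len(doses)

# Two hypothetical plans over four voxels (two tumor, two normal tissue):
tumor_mask = [True, True, False, False]
conformal = conformality_score([20.0, 20.0, 2.0, 2.0], tumor_mask)
poor = conformality_score([20.0, 2.0, 20.0, 2.0], tumor_mask)
```

A GESA-style optimizer would adjust shot sizes, positions, and weights to maximize a score of this form.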

  16. 3rd International Workshop on Advances in Simulation-Driven Optimization and Modeling

    Leifsson, Leifur; Yang, Xin-She

    2016-01-01

    This edited volume is devoted to the now-ubiquitous use of computational models across most disciplines of engineering and science, led by a trio of world-renowned researchers in the field. Focused on recent advances of modeling and optimization techniques aimed at handling computationally-expensive engineering problems involving simulation models, this book will be an invaluable resource for specialists (engineers, researchers, graduate students) working in areas as diverse as electrical engineering, mechanical and structural engineering, civil engineering, industrial engineering, hydrodynamics, aerospace engineering, microwave and antenna engineering, ocean science and climate modeling, and the automotive industry, where design processes are heavily based on CPU-heavy computer simulations. Various techniques, such as knowledge-based optimization, adjoint sensitivity techniques, and fast replacement models (to name just a few) are explored in-depth along with an array of the latest techniques to optimize the...

  17. Optimization and Simulation of SLM Process for High Density H13 Tool Steel Parts

    Laakso, Petri; Riipinen, Tuomas; Laukkanen, Anssi; Andersson, Tom; Jokinen, Antero; Revuelta, Alejandro; Ruusuvuori, Kimmo

    This paper demonstrates the successful printing and the optimization of processing parameters of high-strength H13 tool steel by selective laser melting (SLM). A D-optimal design of experiments (DOE) approach is used for parameter optimization of laser power, scanning speed and hatch width. With 50 test samples (1 × 1 × 1 cm) we establish parameter windows for these three parameters in relation to part density. The calculated numerical model is found to be in good agreement with the density data obtained from the samples using image analysis. A thermomechanical finite element simulation model of the SLM process is constructed and validated by comparing the calculated densities retrieved from the model with the experimentally determined densities. With the simulation tool one can explore the effect of different parameters on density before printing any samples. Establishing a parameter window gives the user freedom in parameter selection, such as choosing the parameters that yield the fastest print speed.

  18. Comparison of particle swarm optimization and simulated annealing for locating additional boreholes considering combined variance minimization

    Soltani-Mohammadi, Saeed; Safa, Mohammad; Mokhtari, Hadi

    2016-10-01

    One of the most important stages in complementary exploration is the optimal design of the additional drilling pattern, that is, defining the optimum number and locations of additional boreholes. A great deal of research has been carried out in this regard; in most of the proposed algorithms, kriging variance minimization is defined as the objective function, serving as the criterion for uncertainty assessment, and the problem is solved through optimization methods. Although the kriging variance has many advantages in defining the objective function, it is not sensitive to local variability. As a result, the only factors evaluated for locating the additional boreholes are the initial data configuration and the variogram model parameters, and the effects of local variability are omitted. In this paper, with the goal of considering local variability in the uncertainty assessment of boundaries, the application of combined variance is investigated to define the objective function. To verify the applicability of the proposed objective function, it is used to locate additional boreholes in the Esfordi phosphate mine through metaheuristic optimization methods, namely simulated annealing and particle swarm optimization. Comparison of results from the proposed objective function and conventional methods indicates that the changes imposed on the objective function make the algorithm output sensitive to variations of grade, domain boundaries, and the thickness of the mineralization domain. The comparison between the results of the different optimization algorithms shows that, for the presented case, particle swarm optimization is more appropriate than simulated annealing.
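
A minimal particle swarm optimizer of the kind compared in the paper can be written in a few lines; the sphere function below is a toy stand-in for the combined-variance objective:

```python
import random

def pso(f, bounds, n=20, iters=100, seed=4, w=0.7, c1=1.5, c2=1.5):
    """Minimal particle swarm optimizer (minimization)."""
    rng = random.Random(seed)
    dim = len(bounds)
    pos = [[rng.uniform(l, h) for l, h in bounds] for _ in range(n)]
    vel = [[0.0] * dim for _ in range(n)]
    pbest = [p[:] for p in pos]              # personal bests
    gbest = min(pbest, key=f)[:]             # global best
    for _ in range(iters):
        for i in range(n):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            if f(pos[i]) < f(pbest[i]):
                pbest[i] = pos[i][:]
                if f(pos[i]) < f(gbest):
                    gbest = pos[i][:]
    return gbest

# The sphere function stands in for the combined-variance objective.
best_pt = pso(lambda v: sum(x * x for x in v), [(-5.0, 5.0)] * 3)
```

In the borehole application, each particle would encode candidate borehole coordinates and `f` would return the (combined) estimation variance.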

  19. Efficiency of timing delays and electrode positions in optimization of biventricular pacing: a simulation study.

    Miri, Raz; Graf, Iulia M; Dössel, Olaf

    2009-11-01

    Electrode positions and timing delays influence the efficacy of biventricular pacing (BVP). Accordingly, this study focuses on BVP optimization using a detailed 3-D electrophysiological model of the human heart, adapted to patient-specific anatomy and pathophysiology. The research is carried out on ten heart models with left bundle branch block and myocardial infarction derived from magnetic resonance and computed tomography data. Cardiac electrical activity is simulated with the ten Tusscher cell model and an adaptive cellular automaton at physiological and pathological conduction levels. The optimization methods are based on a comparison between the electrical responses of the healthy and diseased heart models, measured in terms of the root mean square error (E(RMS)) of the excitation front and the QRS duration error (E(QRS)). Intra- and intermethod associations of the pacing electrode and timing delay variables were analyzed with statistical methods, i.e., the t-test for dependent data, one-way analysis of variance for electrode pairs, and the Pearson model for equivalent parameters from the two optimization methods. The results indicate that the lateral left ventricle and the upper or middle septal area are frequently (in 60% of cases) the optimal positions of the left and right electrodes, respectively. Statistical analysis shows that the two optimization methods are in good agreement. In conclusion, a noninvasive preoperative BVP optimization strategy based on computer simulations can be used to identify the most beneficial patient-specific electrode configuration and timing delays.

  20. Post-processing of Monte Carlo simulations for rapid BNCT source optimization studies

    Bleuel, D.L.; Chu, W.T.; Donahue, R.J.; Ludewigt, B.A.; Vujic, J.

    2000-01-01

    A great advantage of some neutron sources, such as accelerator-produced sources, is that they can be tuned to produce different spectra. Unfortunately, optimization studies are often time-consuming and difficult, as they require a lengthy Monte Carlo simulation for each source. When multiple characteristics, such as energy, angle, and spatial distribution of a neutron beam are allowed to vary, an overwhelming number of simulations may be required. Many optimization studies, therefore, suffer from a small number of datapoints, restrictive treatment conditions, or poor statistics. By scoring pertinent information from every particle tally in a Monte Carlo simulation, then applying appropriate source variable weight factors in a post-processing algorithm, a single simulation can be used to model any number of multiple sources. Through this method, the response to a new source can be modeled in minutes or seconds, rather than hours or days, allowing for the analysis of truly variable source conditions of much greater resolution than is normally possible when a new simulation must be run for each datapoint in a study. This method has been benchmarked and used to recreate optimization studies in a small fraction of the time spent in the original studies
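
The post-processing idea, reweighting saved per-particle tallies so that one transport run models many candidate sources, is importance reweighting; the events and spectra below are invented for illustration:

```python
def reweight_tally(events, new_pdf, old_pdf):
    """Re-score saved (source_energy, tally) pairs from one transport run,
    sampled from old_pdf, under a new source spectrum new_pdf."""
    return sum((new_pdf(e) / old_pdf(e)) * t for e, t in events) / len(events)

# Invented events from a run with a flat source pdf on [0, 2] MeV.
events = [(0.5, 1.0), (1.0, 2.0), (2.0, 1.0)]
flat = lambda e: 0.5
tilted = lambda e: e / 2.0          # new spectrum favoring higher energies
same_source = reweight_tally(events, flat, flat)
hard_source = reweight_tally(events, tilted, flat)
```

Evaluating a new source then costs one pass over stored events rather than a fresh Monte Carlo run, which is what makes fine-grained source optimization studies feasible.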

  2. Simulated Stochastic Approximation Annealing for Global Optimization With a Square-Root Cooling Schedule

    Liang, Faming

    2014-04-03

    Simulated annealing has been widely used in the solution of optimization problems. As is well known, simulated annealing cannot be guaranteed to locate the global optima unless a logarithmic cooling schedule is used; however, the logarithmic schedule is so slow that the required CPU time is unaffordable. This article proposes a new stochastic optimization algorithm, the so-called simulated stochastic approximation annealing algorithm, which is a combination of simulated annealing and the stochastic approximation Monte Carlo algorithm. Under the framework of stochastic approximation, it is shown that the new algorithm can work with a cooling schedule in which the temperature decreases much faster than in the logarithmic schedule, for example, a square-root cooling schedule, while still guaranteeing that the global optima are reached as the temperature tends to zero. The new algorithm has been tested on a few benchmark optimization problems, including feed-forward neural network training and protein folding. The numerical results indicate that the new algorithm can significantly outperform simulated annealing and other competitors. Supplementary materials for this article are available online.
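
A sketch of plain simulated annealing driven by the square-root cooling schedule T_k = t0/sqrt(k+1) discussed above (the acceptance rule is standard Metropolis, not the article's stochastic-approximation variant; the test objective is invented):

```python
import math, random

def anneal(f, x0, t0=1.0, iters=4000, step=0.5, seed=5):
    """Simulated annealing with a square-root cooling schedule T_k = t0/sqrt(k+1)."""
    rng = random.Random(seed)
    x, fx = x0, f(x0)
    best, fbest = x, fx
    for k in range(iters):
        T = t0 / math.sqrt(k + 1)
        y = x + rng.uniform(-step, step)
        fy = f(y)
        # Standard Metropolis acceptance at temperature T.
        if fy <= fx or rng.random() < math.exp((fx - fy) / T):
            x, fx = y, fy
            if fx < fbest:
                best, fbest = x, fx
    return best

# Invented multimodal test objective; its global minimum lies near x = 2.2.
obj = lambda x: (x - 2.0) ** 2 + 0.5 * math.sin(5.0 * x)
x_best = anneal(obj, x0=-4.0)
```

The article's point is that plain SA with this fast schedule loses its convergence guarantee; coupling it with stochastic approximation restores the guarantee while keeping the speed.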

  3. Optimal design of a composite space shield based on numerical simulations

    Son, Byung Jin; Yoo, Jeong Hoon; Lee, Min Hyung

    2015-01-01

    In this study, an optimal design of a stuffed Whipple shield is proposed by using numerical simulations and a new penetration criterion. The target model was selected based on the shield model used in the Columbus module of the International Space Station. Because experimental results can be obtained only in the low-velocity region below 7 km/s, the ballistic limit curve (BLC) in the high-velocity region above 7 km/s must be derived by numerical simulation. AUTODYN-2D, a commercial hydrocode package, was used for nonlinear transient analysis of the hypervelocity impact. The smoothed particle hydrodynamics (SPH) method was applied to the projectile and bumper modeling to represent the debris cloud generated after the impact. The numerical simulation model and the selected material properties were validated through a quantitative comparison between numerical and experimental results. A new criterion to determine whether penetration occurs is proposed from kinetic energy analysis by numerical simulation in the velocity region above 7 km/s. A parameter optimization was performed to improve the protection ability at a specific condition through the design of experiments (DOE) method and response surface methodology (RSM). The performance of the proposed optimal design was numerically verified.
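
The DOE-plus-RSM step can be illustrated in one dimension: fit a quadratic response surface through three simulated design points, then optimize on the cheap surface instead of re-running the hydrocode (the response values below are invented):

```python
def quadratic_surrogate(p1, p2, p3):
    """Exact quadratic response surface through three (x, y) design points
    (Lagrange form), standing in for a least-squares RSM fit."""
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    def model(x):
        return (y1 * (x - x2) * (x - x3) / ((x1 - x2) * (x1 - x3))
                + y2 * (x - x1) * (x - x3) / ((x2 - x1) * (x2 - x3))
                + y3 * (x - x1) * (x - x2) / ((x3 - x1) * (x3 - x2)))
    return model

# Invented simulated responses at three DOE levels of one shield parameter.
pts = [(1.0, 5.0), (2.0, 2.0), (3.0, 3.0)]
model = quadratic_surrogate(*pts)
# Optimize on the cheap surrogate instead of re-running the simulation.
x_opt = min((1.0 + 0.001 * i for i in range(2001)), key=model)
```

In practice the RSM fit is a least-squares quadratic over several parameters, but the workflow, few expensive runs followed by cheap surrogate optimization, is the same.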

  4. Elektrisk Design og Styring (Electrical Design and Control). Simulation Platform to Model, Optimize and Design Wind Turbines

    Iov, Florin; Hansen, A. D.; Soerensen, P.

    This report is a general overview of the results obtained in the project "Electrical Design and Control. Simulation Platform to Model, Optimize and Design Wind Turbines". The report is structured in six chapters. First, the background of this project and the main goals as well as the structure...... of the simulation platform are given. The main topologies for wind turbines, which have been taken into account during the project, are briefly presented. Then, the simulation tools considered in this platform, namely HAWC, DIgSILENT, Saber and Matlab/Simulink, are described. The focus here...... is on the modelling and simulation time-scale aspects. The abilities of these tools are complementary, and together they can cover all the modelling aspects of wind turbines, e.g. mechanical loads, power quality, switching, control and grid faults. New models and new control algorithms for wind turbine systems have...

  5. Simulative design and process optimization of the two-stage stretch-blow molding process

    Hopmann, Ch.; Rasche, S.; Windeck, C. [Institute of Plastics Processing at RWTH Aachen University (IKV) Pontstraße 49, 52062 Aachen (Germany)

    2015-05-22

    The total production costs of PET bottles are significantly affected by the costs of raw material. Approximately 70 % of the total costs are spent for the raw material. Therefore, stretch-blow molding industry intends to reduce the total production costs by an optimized material efficiency. However, there is often a trade-off between an optimized material efficiency and required product properties. Due to a multitude of complex boundary conditions, the design process of new stretch-blow molded products is still a challenging task and is often based on empirical knowledge. Application of current CAE-tools supports the design process by reducing development time and costs. This paper describes an approach to determine optimized preform geometry and corresponding process parameters iteratively. The wall thickness distribution and the local stretch ratios of the blown bottle are calculated in a three-dimensional process simulation. Thereby, the wall thickness distribution is correlated with an objective function and preform geometry as well as process parameters are varied by an optimization algorithm. Taking into account the correlation between material usage, process history and resulting product properties, integrative coupled simulation steps, e.g. structural analyses or barrier simulations, are performed. The approach is applied on a 0.5 liter PET bottle of Krones AG, Neutraubling, Germany. The investigations point out that the design process can be supported by applying this simulative optimization approach. In an optimization study the total bottle weight is reduced from 18.5 g to 15.5 g. The validation of the computed results is in progress.

  7. Proceedings of the 6. IASTED conference on modelling, simulation, and optimization

    Nyongesa, H. [Botswana Univ., Gaborone (Botswana). Dept. of Computer Science] (ed.)

    2006-07-01

    This conference presented a variety of new optimization and simulation tools for use in several scientific fields. Neural network-based simulation tools were presented, as well as new approaches to optimizing artificial intelligence simulation models. Approaches to image compression were discussed. Control strategies and systems analysis methodologies were presented. Other topics included Gaussian mixture models; helical transformation; fault diagnosis; and stochastic dynamics in economical applications. Decision support system models were also discussed in addition to recursive approaches to virtualization, and intelligent designs for the provision of HIV treatments in Africa. The conference was divided into 8 sessions: (1) scientific applications; (2) system design; (3) environmental applications; (4) economic and financial applications; (5) modelling techniques; (6) general methods; (7) special session; and (8) additional papers. The conference featured 56 presentations, of which 5 have been catalogued separately for inclusion in this database. refs., tabs., figs.

  8. Layout optimization of DRAM cells using rigorous simulation model for NTD

    Jeon, Jinhyuck; Kim, Shinyoung; Park, Chanha; Yang, Hyunjo; Yim, Donggyu; Kuechler, Bernd; Zimmermann, Rainer; Muelders, Thomas; Klostermann, Ulrich; Schmoeller, Thomas; Do, Mun-hoe; Choi, Jung-Hoe

    2014-03-01

    scanning electron microscope (SEM) measurements. High resist impact and difficult model data acquisition demand a simulation model that is capable of extrapolating reliably beyond its calibration dataset. We use rigorous simulation models to provide that predictive performance. We have discussed the need for a rigorous mask optimization process for DRAM contact cell layouts, yielding mask layouts that are optimal in process performance, mask manufacturability and accuracy. In this paper, we have shown the step-by-step process from analytical illumination source derivation, through an NTD- and application-tailored model calibration, to layout optimization such as OPC and SRAF placement. Finally, the work has been verified with simulation and experimental results on wafer.

  9. Optimal Parameters to Determine the Apparent Diffusion Coefficient in Diffusion Weighted Imaging via Simulation

    Perera, Dimuthu

    Diffusion-weighted (DW) imaging is a non-invasive MR technique that probes tissue microstructure through the diffusion of water molecules. The diffusion is generally characterized by the apparent diffusion coefficient (ADC) parametric map. The purpose of this study is to investigate in silico how the calculated ADC is affected by image SNR, b-values, and the true tissue ADC; to provide optimal parameter combinations, in terms of percentage accuracy and precision, for prostate peripheral-zone cancer applications; and to suggest parameter choices for any type of tissue, together with the expected accuracy and precision. In this research, DW images were generated assuming a mono-exponential signal model at two different b-values and for known true ADC values. Rician noise of different levels was added to the DW images to adjust the image SNR. Using the two DW images, ADC was calculated with the mono-exponential model for each set of b-values, SNR, and true ADC. For each parameter setting, 40,000 ADC values were collected to determine the mean and standard deviation of the calculated ADC, as well as the percentage accuracy and precision with respect to the true ADC. The accuracy was calculated from the difference between the known and calculated ADC; the precision was calculated from the standard deviation of the calculated ADC. The optimal parameters for a specific study were determined when both the percentage accuracy and precision errors were minimized. In our study, we simulated two true ADCs (0.00102 mm2/s for tumor and 0.00180 mm2/s for normal prostate peripheral-zone tissue). Image SNR was varied from 2 to 100 and b-values were varied from 0 to 2000 s/mm2. The results show that the percentage accuracy and percentage precision errors decreased with increasing image SNR. To increase SNR, 10 signal averages (NEX) were used, considering the limitation in total scan time. The optimal NEX combination for tumor and normal tissue for prostate
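
    The two-point ADC estimation described above can be sketched as follows (a minimal simulation, not the authors' code; the SNR definition against the b = 0 signal, the baseline signal level and the sample count are assumptions for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

def rician(signal, sigma, rng):
    # Magnitude of a complex Gaussian centered on the true signal (Rician noise).
    re = signal + rng.normal(0.0, sigma, signal.shape)
    im = rng.normal(0.0, sigma, signal.shape)
    return np.hypot(re, im)

def simulate_adc(true_adc, b1, b2, snr, n=40_000, s0=1000.0):
    # Mono-exponential DW signal at two b-values: S(b) = S0 * exp(-b * ADC).
    s1 = np.full(n, s0 * np.exp(-b1 * true_adc))
    s2 = np.full(n, s0 * np.exp(-b2 * true_adc))
    sigma = s0 / snr                     # image SNR defined against the b = 0 signal (assumed)
    m1, m2 = rician(s1, sigma, rng), rician(s2, sigma, rng)
    adc = np.log(m1 / m2) / (b2 - b1)    # two-point mono-exponential fit
    acc = abs(adc.mean() - true_adc) / true_adc * 100.0   # percentage accuracy (bias)
    prec = adc.std() / true_adc * 100.0                   # percentage precision (spread)
    return adc.mean(), acc, prec
```

    At SNR 50 with b-values 0 and 1000 s/mm2, the two-point estimate recovers the tumor ADC of 0.00102 mm2/s to within a few percent; at low SNR the Rician noise floor biases the estimate and widens its spread, which is the accuracy/precision trade-off the study maps.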

  10. Extended Information Ratio for Portfolio Optimization Using Simulated Annealing with Constrained Neighborhood

    Orito, Yukiko; Yamamoto, Hisashi; Tsujimura, Yasuhiro; Kambayashi, Yasushi

    Portfolio optimization determines the proportion-weighted combination of assets in a portfolio in order to achieve investment targets. It is a multi-dimensional combinatorial optimization problem, and it is difficult for a portfolio constructed over a past period to keep its performance in a future period. In order to preserve good portfolio performance, we propose the extended information ratio as an objective function, built from the information ratio, beta, prime beta, or correlation coefficient. We apply simulated annealing (SA) to optimize the portfolio under the proposed ratio. For the SA, we generate neighbors by an operation that changes the structure of the weights in the portfolio. In numerical experiments, we show that our portfolios keep good performance even when the market trend of the future period differs from that of the past period.
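
    The constrained-neighborhood annealing step described above can be sketched as follows (a toy objective stands in for the extended information ratio, which is not given in closed form here; the returns, penalty weight and cooling schedule are illustrative assumptions):

```python
import math, random

random.seed(1)

# Toy expected returns; the objective below is a stand-in for the paper's
# extended information ratio, not its actual formula.
mu = [0.02, 0.05, 0.01, 0.03]

def objective(w):
    ret = sum(wi * m for wi, m in zip(w, mu))
    risk = sum(wi * wi for wi in w)          # simple concentration penalty
    return ret - 0.01 * risk

def neighbor(w, step=0.05):
    # Constrained neighborhood: move weight mass between two assets so the
    # weights stay non-negative and keep summing to one.
    w = list(w)
    i, j = random.sample(range(len(w)), 2)
    d = min(step, w[i])
    w[i] -= d
    w[j] += d
    return w

def anneal(w, t0=1.0, cooling=0.995, iters=4000):
    best, f_best = list(w), objective(w)
    f_cur, t = f_best, t0
    for _ in range(iters):
        cand = neighbor(w)
        f_cand = objective(cand)
        # Metropolis rule: always take improvements, sometimes take worse moves.
        if f_cand > f_cur or random.random() < math.exp((f_cand - f_cur) / t):
            w, f_cur = cand, f_cand
            if f_cur > f_best:
                best, f_best = list(w), f_cur
        t *= cooling
    return best, f_best
```

    Because the neighbor operation only transfers mass between two assets, every candidate portfolio automatically satisfies the budget constraint; that is the essence of the constrained neighborhood.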

  11. Microwave imaging for conducting scatterers by hybrid particle swarm optimization with simulated annealing

    Mhamdi, B.; Grayaa, K.; Aguili, T.

    2011-01-01

    In this paper, a microwave imaging technique for reconstructing the shape of two-dimensional perfectly conducting scatterers by means of a stochastic optimization approach is investigated. Based on the boundary condition and the scattered field measured under transverse magnetic illumination, a set of nonlinear integral equations is obtained and the imaging problem is reformulated into an optimization problem. A hybrid approximation algorithm, called PSO-SA, is developed in this work to solve the inverse scattering problem. In the hybrid algorithm, particle swarm optimization (PSO) combines global and local search to find optimal solutions in reasonable time, while simulated annealing (SA) uses a probabilistic acceptance rule to avoid being trapped in local optima. The hybrid approach thus combines the exploration ability of PSO with the exploitation ability of SA. Reconstruction results are compared with the exact shapes of several conducting cylinders, and good agreement with the original shapes is observed.

  12. Optimization of PWR fuel assembly radial enrichment and burnable poison location based on adaptive simulated annealing

    Rogers, Timothy; Ragusa, Jean; Schultz, Stephen; St Clair, Robert

    2009-01-01

    The focus of this paper is to present a concurrent optimization scheme for the radial pin enrichment and burnable poison location in PWR fuel assemblies. The methodology is based on the Adaptive Simulated Annealing (ASA) technique, coupled with a neutron lattice physics code to update the cost function values. In this work, the variations in the pin U-235 enrichment are variables to be optimized radially, i.e., pin by pin. We consider the optimization of two categories of fuel assemblies, with and without Gadolinium burnable poison pins. When burnable poisons are present, both the radial distribution of enrichment and the poison locations are variables in the optimization process. Results for 15 x 15 PWR fuel assembly designs are provided.

  13. Two-Dimensional IIR Filter Design Using Simulated Annealing Based Particle Swarm Optimization

    Supriya Dhabal

    2014-01-01

    We present a novel hybrid algorithm based on particle swarm optimization (PSO) and simulated annealing (SA) for the design of two-dimensional recursive digital filters. The proposed method, known as SA-PSO, integrates the global search ability of PSO with the local search ability of SA so that each offsets the other's weaknesses. The Metropolis acceptance criterion is included in the basic PSO algorithm to increase the swarm's diversity by occasionally accepting weaker solutions. The experimental results reveal that the performance of the optimal filter designed by the proposed SA-PSO method is improved. Further, the convergence behavior and optimization accuracy of the proposed method are improved significantly, and computational time is reduced. In addition, the proposed SA-PSO method produces the best optimal solution with lower mean and variance, which indicates that the algorithm can be used more efficiently in realizing two-dimensional digital filters.
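
    The SA-PSO hybrid described above can be sketched as follows (a minimal version on a simple test function, not the filter-design objective; the swarm size, coefficients and cooling schedule are conventional defaults assumed for illustration):

```python
import math, random

random.seed(7)

def sphere(x):
    # Simple convex test function standing in for the filter-design error.
    return sum(v * v for v in x)

def sa_pso(dim=2, n=20, iters=300, w=0.7, c1=1.5, c2=1.5, t0=1.0, cooling=0.97):
    pos = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(n)]
    vel = [[0.0] * dim for _ in range(n)]
    pbest = [p[:] for p in pos]
    pbest_f = [sphere(p) for p in pos]
    g = min(range(n), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[g][:], pbest_f[g]
    t = t0
    for _ in range(iters):
        for i in range(n):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * random.random() * (pbest[i][d] - pos[i][d])
                             + c2 * random.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            f = sphere(pos[i])
            # Metropolis criterion: occasionally accept a worse personal best
            # to keep the swarm diverse (the SA ingredient of the hybrid).
            if f < pbest_f[i] or random.random() < math.exp(-(f - pbest_f[i]) / t):
                pbest[i], pbest_f[i] = pos[i][:], f
                if f < gbest_f:
                    gbest, gbest_f = pos[i][:], f
        t *= cooling
    return gbest, gbest_f
```

    The global best is only ever replaced by an improvement, so the Metropolis step adds diversity to the personal bests without losing the best solution found so far.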

  14. NDDP multi-stage flash desalination process simulator design process optimization

    Sashi Kumar, G.N.; Mahendra, A.K.; Sanyal, A.; Gouthaman, G.

    2009-03-01

    The improvement of the NDDP-MSF plant's performance ratio (PR) from the design value of 9.0 to 13.1 was achieved by optimizing the plant's operating parameters within the feasible zone of operation. The plant has 20% excess heat transfer area over the design condition, which allowed a PR of 15.1 after optimization. Thus we obtained: (1) a 45% increase in output over the design value from the optimization carried out with the design heat transfer area, and (2) a 68% increase in output over the design value from the optimization carried out with the increased heat transfer area. This report discusses the approach, methodology and results of the optimization study. A simulator, MSFSIM, which predicts the performance of a multi-stage flash (MSF) desalination plant, has been coupled with a Genetic Algorithm (GA) optimizer. Exhaustive optimization case studies have been conducted on this plant with the objective of increasing the performance ratio. The steady-state optimization was based on obtaining the best stage-wise pressure profile to enhance thermal efficiency, which in turn improves the performance ratio. Apart from this, the recirculating brine flow rate was also optimized. This optimization study increased the PR of the NDDP-MSF plant from the design value of 9.0 to an optimized value of 13.1; optimization with the additional heat transfer area took the PR to 15.1. The desire to maintain equal flashing rates in all stages of the MSF plant (a feature required for long plant life and to avoid the cascading effect of non-flashing triggered by any stage) has also been achieved, and the deviation in flashing rates between stages has been reduced. The startup characteristics of the plant (i.e., the variation of stage pressure and recirculation flow rate with time) have been optimized with a target to minimize the
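
    A GA search for an equal-flashing stage pressure profile can be sketched as follows (a stand-in fitness replaces the MSFSIM simulator; the stage count, pressure bounds and GA settings are illustrative assumptions):

```python
import random, statistics

random.seed(3)

N_STAGES, P_TOP, P_BOT = 20, 2.0, 0.1   # assumed stage count and pressure range

def profile_from_genes(genes):
    # Genes are positive increments; normalize so pressures fall from P_TOP to P_BOT.
    total = sum(genes)
    return [g / total * (P_TOP - P_BOT) for g in genes]

def fitness(genes):
    # Hypothetical objective: penalize unequal stage-to-stage pressure drops,
    # mirroring the report's "equal flashing rates" goal.  In the study itself
    # the fitness would come from the coupled MSFSIM performance-ratio run.
    return -statistics.pvariance(profile_from_genes(genes))

def ga(pop_size=40, gens=120, mut=0.2):
    pop = [[random.uniform(0.1, 1.0) for _ in range(N_STAGES)] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness, reverse=True)
        elite = pop[: pop_size // 2]          # elitist selection keeps the best half
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = random.sample(elite, 2)
            cut = random.randrange(1, N_STAGES)
            child = a[:cut] + b[cut:]          # one-point crossover
            if random.random() < mut:
                child[random.randrange(N_STAGES)] *= random.uniform(0.8, 1.25)
            children.append(child)
        pop = elite + children
    return max(pop, key=fitness)
```

    Because the best individuals are always retained, the best fitness never degrades from generation to generation, which is the property that makes the coupled simulator-GA loop safe to run for a fixed budget of evaluations.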

  15. Simulation and Optimization of Control of Selected Phases of Gyroplane Flight

    Wienczyslaw Stalewski

    2018-02-01

    Optimization methods are increasingly used to solve problems in aeronautical engineering. Typically, optimization methods are utilized in the design of an aircraft airframe or its structure. The presented study focuses on improving aircraft flight control procedures through numerical optimization. The optimization problems concern selected phases of flight of a light gyroplane—a rotorcraft using an unpowered rotor in autorotation to develop lift and an engine-powered propeller to provide thrust. An original methodology for computational simulation of rotorcraft flight was developed and implemented. In this approach the aircraft motion equations are solved step by step, simultaneously with the solution of the Unsteady Reynolds-Averaged Navier–Stokes equations, which is conducted to assess the aerodynamic forces acting on the aircraft. As the numerical optimization method, the BFGS (Broyden–Fletcher–Goldfarb–Shanno) algorithm was adapted. The developed methodology was applied to optimize the flight control procedures in selected stages of gyroplane flight in direct proximity to the ground, where proper control of the aircraft is critical to flight safety and performance. The results of the conducted computational optimizations confirmed the qualitative correctness of the developed methodology. The research results can be helpful in the design of easy-to-control gyroplanes and in the training of pilots for this type of rotorcraft.

  16. The effect of framing on surrogate optimism bias: A simulation study.

    Patel, Dev; Cohen, Elan D; Barnato, Amber E

    2016-04-01

    To explore the effect of emotion priming and physician communication behaviors on optimism bias, we conducted a 5 × 2 between-subject randomized factorial experiment using a Web-based interactive video designed to simulate a family meeting for a critically ill spouse/parent. Eligibility criteria included age of at least 35 years and self-identifying as the surrogate for a spouse/parent. The primary outcome was the surrogate's election of code status. We defined optimism bias as the surrogate's estimate of prognosis with cardiopulmonary resuscitation (CPR) exceeding their recollection of the physician's estimate. Of 373 respondents, 256 (69%) logged in and were randomized, and 220 (86%) had nonmissing data for prognosis. Sixty-seven (30%) of 220 overall, and 56 (32%) of 173 with an accurate recollection of the physician's estimate, had optimism bias. Optimism bias correlated with choosing CPR (P optimism bias. Framing the decision as the patient's vs the surrogate's (25% vs 36%, P = .066) and describing the alternative to CPR as "allow natural death" instead of "do not resuscitate" (25% vs 37%, P = .035) decreased optimism bias. Framing of the CPR choice during code status conversations may influence surrogates' optimism bias. Copyright © 2015 Elsevier Inc. All rights reserved.

  17. Optimization of a centrifugal compressor impeller using CFD: the choice of simulation model parameters

    Neverov, V. V.; Kozhukhov, Y. V.; Yablokov, A. M.; Lebedev, A. A.

    2017-08-01

    Nowadays, optimization using computational fluid dynamics (CFD) plays an important role in the design process of turbomachines. However, for successful and productive optimization it is necessary to define the simulation model correctly and rationally. The article deals with the choice of grid and computational domain parameters for the optimization of centrifugal compressor impellers using computational fluid dynamics. Searching for and applying optimal parameters of the grid model, the computational domain and the solver settings allows engineers to carry out high-accuracy modelling and to use computational capacity effectively. The presented research was conducted using the Numeca Fine/Turbo package with the Spalart-Allmaras and Shear Stress Transport turbulence models. Two radial impellers were investigated: a high-pressure impeller at ψT=0.71 and a low-pressure impeller at ψT=0.43. The following parameters of the computational model were considered: the location of the inlet and outlet boundaries, the type of mesh topology, the mesh size and the mesh parameter y+. Results of the investigation demonstrate that the choice of optimal parameters leads to a significant reduction of computational time. Optimal parameters, in comparison with non-optimal but visually similar parameters, can reduce the calculation time up to 4 times. Besides, it is established that some parameters have a major impact on the result of modelling.

  18. Efficiency optimization of class-D biomedical inductive wireless power transfer systems by means of frequency adjustment.

    Schormans, Matthew; Valente, Virgilio; Demosthenous, Andreas

    2015-01-01

    Inductive powering for implanted medical devices is a commonly employed technique that allows implants to avoid riskier methods such as transcutaneous wires or implanted batteries. However, wireless powering in this way also brings a number of difficulties and conflicting requirements, which are often met by designs based on compromise. In particular, most inductive power links are driven at a fixed frequency, which may not be optimal depending on factors such as coupling and load. In this paper, a method is proposed in which an inductive power link is driven at a frequency that is maintained at an optimum value f(opt), to ensure that the link is in resonance. In order to maintain this resonance, a phase tracking technique is employed at the primary side of the link; this compensates for changes in coil separation and load. The technique is shown to provide significant improvements in maintained secondary voltage and efficiency for a range of loads when the link is overcoupled.
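
    The benefit of keeping the drive at the resonant frequency can be illustrated with a textbook series-series compensated two-coil circuit model (not the authors' class-D implementation; all component values, the 1 MHz design resonance and the coupling coefficient are assumed):

```python
import math

def delivered_power(f, vs=5.0, l1=10e-6, l2=10e-6, c1=None, c2=None,
                    r1=0.5, rload=10.0, k=0.02):
    # Series-series compensated two-coil link, solved from Kirchhoff's laws:
    #   vs = z1*i1 + jwM*i2,   0 = jwM*i1 + z2*i2
    f0 = 1.0e6                               # assumed design resonance, 1 MHz
    c1 = c1 or 1.0 / ((2 * math.pi * f0) ** 2 * l1)
    c2 = c2 or 1.0 / ((2 * math.pi * f0) ** 2 * l2)
    w = 2 * math.pi * f
    m = k * math.sqrt(l1 * l2)               # mutual inductance
    z1 = r1 + 1j * (w * l1 - 1.0 / (w * c1))
    z2 = rload + 1j * (w * l2 - 1.0 / (w * c2))
    i2 = -1j * w * m * vs / (z1 * z2 + (w * m) ** 2)
    return abs(i2) ** 2 * rload              # power delivered to the load

def find_f_opt(fs):
    # Crude stand-in for the phase-tracking loop: pick the frequency on a
    # sweep grid that maximizes delivered power.
    return max(fs, key=delivered_power)
```

    Sweeping the drive frequency shows delivered power peaking at the design resonance and collapsing even 10 % off-resonance; this sensitivity is why the paper's phase-tracking loop continuously re-centers the drive at f(opt) as coupling and load change.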

  19. Simulation-optimization of large agro-hydrosystems using a decomposition approach

    Schuetze, Niels; Grundmann, Jens

    2014-05-01

    In this contribution, a stochastic simulation-optimization framework for decision support for optimal planning and operation of the water supply of large agro-hydrosystems is presented. It is based on a decomposition solution strategy which allows for (i) the use of numerical process models together with efficient Monte Carlo simulations for a reliable estimation of higher quantiles of the minimum agricultural water demand for full and deficit irrigation strategies at small scale (farm level), and (ii) the utilization of the small-scale optimization results for solving water resources management problems at regional scale. As a secondary result of several simulation-optimization runs at the smaller scale, stochastic crop-water production functions (SCWPF) for different crops are derived, which can be used as a basic tool for assessing the impact of climate variability on yield risk. In addition, microeconomic impacts of climate change and the vulnerability of the agro-ecological systems are evaluated. The developed methodology is demonstrated through its application to a real-world case study for the South Al-Batinah region in the Sultanate of Oman, where a coastal aquifer is affected by saltwater intrusion due to excessive groundwater withdrawal for irrigated agriculture.
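
    The Monte Carlo estimation of a higher demand quantile at farm level can be sketched as follows (a crude seasonal water balance stands in for the numerical process models; all magnitudes are assumed for illustration):

```python
import random

random.seed(11)

def demand_sample(crop_need=500.0, rain_mean=300.0, rain_sd=100.0):
    # Hypothetical farm-scale water balance: irrigation must cover whatever
    # effective rainfall does not (values in mm per season, assumed).
    rain = max(0.0, random.gauss(rain_mean, rain_sd))
    return max(0.0, crop_need - rain)

def demand_quantile(q=0.95, n=100_000):
    # Monte Carlo estimate of the q-quantile of seasonal irrigation demand.
    xs = sorted(demand_sample() for _ in range(n))
    return xs[min(int(q * n), n - 1)]
```

    With rainfall roughly N(300, 100) mm and a 500 mm crop requirement, the 95 % quantile of irrigation demand comes out near 365 mm; sizing supply to such a quantile accepts a stated 5 % risk of shortfall, which is the kind of reliability statement the framework passes up to the regional management problem.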

  20. Simulation and optimization of stable isotope 18O separation by water vacuum distillation

    Chen Yuyan; Qin Chuanjiang; Xiao Bin; Xu Jing'an

    2012-01-01

    In this research, a stable isotope 18O separation column with 20 m packing height and 0.1 m column diameter was set up for water vacuum distillation. The self-developed special packing named PAC-18O was packed inside the column. First, a model was created using the Aspen Plus software, and the simulation results were validated against test results. Second, a group of simulation results was generated with Aspen Plus, and the optimal operating conditions were obtained using an artificial neural network (ANN) and the Statistica software. Considering the combined effects of column pressure and withdrawing velocity, conclusions were reached on their impact on the abundance of the isotope 18O. The final results show that the abundance of the isotope 18O increases as the column pressure drops and the withdrawing velocity decreases. In addition, the optimal column pressure and a correlation formula between the abundance of the isotope 18O and the withdrawing velocity were obtained. The conclusion is that this simulation and optimization method can be applied to 18O industrial design and extended to traditional distillation processes to realize optimized design. (authors)

  1. Modeling and analysis of a decentralized electricity market: An integrated simulation/optimization approach

    Sarıca, Kemal; Kumbaroğlu, Gürkan; Or, Ilhan

    2012-01-01

    In this study, a model is developed to investigate the implications of an hourly day-ahead competitive power market on generator profits, electricity prices, availability and supply security. An integrated simulation/optimization approach is employed, integrating a multi-agent simulation model with two alternative optimization models. The simulation model represents interactions between power generator, system operator, power user and power transmitter agents, while the network flow optimization model oversees and optimizes the electricity flows and dispatches generators based on two alternative representations of the underlying transmission network: a linear minimum-cost network flow model and a non-linear alternating current optimal power flow model. Supply, demand, transmission, capacity and other technological constraints are thereby enforced. The transmission network, on which the scenario analyses are carried out, includes 30 buses, 41 lines, 9 generators, and 21 power users. The scenarios examined in the analysis cover various settings of transmission line capacities/fees and hourly learning algorithms. Results provide insight into key behavioral and structural aspects of a decentralized electricity market under network constraints and reveal the importance of using an AC network instead of a simplified linear network flow approach. -- Highlights: ► An agent-based simulation model with an AC transmission environment and a day-ahead market. ► Physical network parameters have dramatic effects on price levels and stability. ► Due to the AC nature of the transmission network, adaptive agents have more local market power than in the minimum-cost network flow model. ► The behavior of the generators has a significant effect on market price formation, as reflected in bidding strategies. ► Transmission line capacity and fee policies are found to be very effective in price formation in the market.
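
    A minimal stand-in for the linear dispatch side of such a model is a merit-order dispatch with a per-generator export limit (an illustration of why transmission constraints move prices, not the paper's network model; all names and numbers are assumed):

```python
def merit_order_dispatch(demand, generators):
    # generators: (name, capacity_mw, marginal_cost, export_limit_mw).
    # export_limit_mw is a crude stand-in for the transmission constraint
    # between the generator's bus and the load bus.
    out, cost, remaining = {}, 0.0, demand
    for name, cap, mc, limit in sorted(generators, key=lambda g: g[2]):
        take = min(cap, limit, remaining)    # cheapest units dispatched first
        out[name] = take
        cost += take * mc
        remaining -= take
    if remaining > 1e-9:
        raise ValueError("demand cannot be met")
    return out, cost
```

    With a cheap 80 MW unit behind a 50 MW line and a 120 MW demand, the line limit forces 70 MW onto the expensive local unit and raises total cost; this is the mechanism behind the local market power noted in the highlights.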

  2. Optimal Spatial Subdivision method for improving geometry navigation performance in Monte Carlo particle transport simulation

    Chen, Zhenping; Song, Jing; Zheng, Huaqing; Wu, Bin; Hu, Liqin

    2015-01-01

    Highlights: • The subdivision combines the advantages of both uniform and non-uniform schemes. • The grid models were shown to be more efficient than traditional CSG models. • Monte Carlo simulation performance was enhanced by Optimal Spatial Subdivision. • Efficiency gains were obtained for realistic whole reactor core models. - Abstract: Geometry navigation is one of the key factors dominating Monte Carlo particle transport simulation performance for large-scale whole-reactor models. In such cases, spatial subdivision is an easily established and high-potential method to improve run-time performance. In this study, a dedicated method, named Optimal Spatial Subdivision, is proposed for generating numerically optimal spatial grid models, which are demonstrated to be more efficient for geometry navigation than traditional Constructive Solid Geometry (CSG) models. The method uses a recursive subdivision algorithm to subdivide a CSG model into non-overlapping grids, which are labeled as totally occupied, partially occupied, or not occupied at all by CSG objects. Most importantly, at each stage of subdivision a quality factor based on a cost estimation function is derived to evaluate the candidate subdivision schemes, and only the scheme with the optimal quality factor is chosen as the final subdivision strategy for generating the grid model. The model built with the optimal quality factor is then efficient for Monte Carlo particle transport simulation. The method has been implemented and integrated into the Super Monte Carlo program SuperMC developed by the FDS Team. Testing cases were used to highlight the performance gains that could be achieved. Results showed that Monte Carlo simulation runtime could be reduced significantly when using the new method, even for cases reaching whole reactor core model sizes.
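
    The idea of a cost-based quality factor can be sketched in two dimensions (axis-aligned boxes stand in for CSG objects; the cost weights are assumptions for illustration, not the paper's calibration):

```python
import itertools, random

random.seed(5)

# 100 small axis-aligned boxes in the unit square stand in for CSG objects.
OBJECTS = []
for _ in range(100):
    x, y = random.uniform(0.0, 0.99), random.uniform(0.0, 0.99)
    OBJECTS.append((x, y, x + 0.01, y + 0.01))

def quality_factor(objects, nx, ny):
    # Cost estimate: traversal/memory work grows with the cell count, while
    # per-step work grows with the objects overlapping each cell.  The 0.1
    # weight is an assumed tuning constant.
    dx, dy = 1.0 / nx, 1.0 / ny
    overlap = 0
    for ix, iy in itertools.product(range(nx), range(ny)):
        cx, cy = ix * dx, iy * dy
        for ox0, oy0, ox1, oy1 in objects:
            if ox0 < cx + dx and ox1 > cx and oy0 < cy + dy and oy1 > cy:
                overlap += 1
    return 0.1 * nx * ny + overlap / (nx * ny)

def best_subdivision(objects, candidates):
    # Keep the candidate scheme with the lowest (best) quality factor.
    return min(candidates, key=lambda s: quality_factor(objects, *s))
```

    For scattered small objects, an intermediate 4 × 4 grid beats both the single-cell model (every query tests all 100 objects) and a very fine 32 × 32 grid (cell overhead dominates); balancing those two costs is exactly what the quality factor is designed to do at each subdivision stage.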

  3. Pareto Optimal Solutions for Network Defense Strategy Selection Simulator in Multi-Objective Reinforcement Learning

    Yang Sun

    2018-01-01

    Using Pareto optimization in Multi-Objective Reinforcement Learning (MORL) leads to better learning results for network defense games. This is particularly useful for network security agents, who must often balance several goals when choosing what action to take in defense of a network. If the defender knows his preferred reward distribution, the advantages of Pareto optimization can be retained by using a scalarization algorithm prior to the implementation of the MORL. In this paper, we simulate a network defense scenario by creating a multi-objective zero-sum game and using Pareto optimization and MORL to determine optimal solutions, and we compare those solutions to different scalarization approaches. We build a Pareto Defense Strategy Selection Simulator (PDSSS) system for assisting network administrators with decision-making, specifically defense strategy selection, and the experimental results show that the Satisficing Trade-Off Method (STOM) scalarization approach performs better than linear scalarization or the GUESS method. The results of this paper can aid network security agents attempting to find an optimal defense policy for network security games.
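
    Why the choice of scalarization matters can be sketched on a small non-convex Pareto front (a hand-made front of three payoff vectors; the Chebyshev form below is a stand-in for the STOM family, and the aspiration point and weights are illustrative assumptions):

```python
def linear_scalarize(point, weights):
    # Weighted sum: larger is better.
    return sum(w * p for w, p in zip(weights, point))

def chebyshev_scalarize(point, aspiration, weights, rho=1e-3):
    # Augmented weighted Chebyshev distance to an aspiration point: smaller is
    # better.  STOM-style methods use this family of achievement functions.
    dev = [w * (a - p) for w, p, a in zip(weights, point, aspiration)]
    return max(dev) + rho * sum(dev)

# Non-convex Pareto front for two maximized objectives: the balanced middle
# point lies below the line joining the two extremes.
pareto = [(1.0, 0.0), (0.45, 0.45), (0.0, 1.0)]

def pick_linear(ws):
    return max(pareto, key=lambda p: linear_scalarize(p, ws))

def pick_chebyshev(ws, aspiration=(1.0, 1.0)):
    return min(pareto, key=lambda p: chebyshev_scalarize(p, aspiration, ws))
```

    Linear scalarization can only ever select the two extreme points of this front, whatever the weights; the Chebyshev form with aspiration (1, 1) selects the balanced point (0.45, 0.45). A defender whose preferred reward distribution is balanced therefore needs a non-linear scalarization to reach it.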

  4. Solving iTOUGH2 simulation and optimization problems using the PEST protocol

    Finsterle, S.A.; Zhang, Y.

    2011-02-01

    The PEST protocol has been implemented into the iTOUGH2 code, allowing the user to link any simulation program (with ASCII-based inputs and outputs) to iTOUGH2's sensitivity analysis, inverse modeling, and uncertainty quantification capabilities. These application models can be pre- or post-processors of the TOUGH2 non-isothermal multiphase flow and transport simulator, or programs that are unrelated to the TOUGH suite of codes. PEST-style template and instruction files are used, respectively, to pass input parameters updated by the iTOUGH2 optimization routines to the model, and to retrieve the model-calculated values that correspond to observable variables. We summarize the iTOUGH2 capabilities and demonstrate the flexibility added by the PEST protocol for the solution of a variety of simulation-optimization problems. In particular, the combination of loosely coupled and tightly integrated simulation and optimization routines provides both the flexibility and control needed to solve challenging inversion problems for the analysis of multiphase subsurface flow and transport systems.
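
    For orientation, a PEST-style template file looks roughly as follows (the contents are illustrative; the parameter names and field widths are invented, and the delimiter character is user-chosen):

```text
ptf #
* model input file: markers delimit parameters that are substituted per run
permeability   #perm    #
porosity       #phi     #
```

    A matching instruction file starts with a line such as `pif @` and uses directives like `@Pressure@ !p1!` to locate text in the model output and read the observation value; iTOUGH2 substitutes updated parameter values into the template before each forward run and extracts the simulated observables afterwards.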

  5. Simulation and Optimization of Air-Cooled PEMFC Stack for Lightweight Hybrid Vehicle Application

    Jingming Liang

    2015-01-01

    A model of a 2 kW air-cooled proton exchange membrane fuel cell (PEMFC) stack has been built for a lightweight hybrid vehicle application, after analyzing the heat transfer characteristics of the air-cooled stack. Different heat dissipation models of the air-cooled stack have been simulated, and an optimal simulation model called the convection heat transfer (CHT) model has been identified by applying computational fluid dynamics (CFD) software. Based on this model, the structure of the air-cooled stack has been optimized by adding irregular cooling fins at the end of the stack. According to the simulation results, the stack temperature is distributed more evenly, reducing the required cooling density and saving energy. Finally, the 2 kW hydrogen-air air-cooled PEMFC stack was manufactured and tested, and the measurements were compared with the simulation data to determine its operating characteristics and further optimize its structure.

  6. Hydrogen production by onboard gasoline processing – Process simulation and optimization

    Bisaria, Vega; Smith, R.J. Byron

    2013-12-15

    Highlights: • A process flow sheet for an onboard fuel processor for 100 kW fuel cell output was simulated. • The gasoline fuel requirement was found to be 30.55 kg/hr. • The fuel processor efficiency was found to be 95.98%. • A heat-integrated optimum flow sheet was developed. - Abstract: Fuel cell vehicles have reached the commercialization stage and hybrid vehicles are already on the road. While hydrogen storage and infrastructure remain critical issues in stand-alone commercialization of the technology, researchers are developing onboard fuel processors, which can convert a variety of fuels into hydrogen to power these fuel cell vehicles. The feasibility study of a 100 kW onboard fuel processor based on gasoline fuel is carried out using process simulation. A steady-state model has been developed with the help of Aspen HYSYS to analyze the fuel processor and total system performance. The components of the fuel processor are the fuel reforming unit, the CO clean-up unit and auxiliary units. Optimization studies were carried out by analyzing the influence of various operating parameters, such as the oxygen-to-carbon ratio, steam-to-carbon ratio, temperature and pressure, on the process equipment. From the steady-state model optimization using Aspen HYSYS, the optimum conditions in terms of hydrogen production and carbon monoxide concentration correspond to an oxygen-to-carbon ratio of 0.5 and a steam-to-carbon ratio of 0.5. A fuel processor efficiency of 95.98% is obtained under these optimized conditions. The heat integration of the system was studied using the composite curve, grand composite curve and utility composite curve. The most appropriate of the generated heat exchanger networks was chosen and incorporated into the optimized flow sheet of the 100 kW fuel processor. A completely heat-integrated 100 kW fuel processor flow sheet using gasoline as fuel was thus successfully simulated and optimized.

  8. Development of an Optimizing Control Concept for Fossil-Fired Boilers using a Simulation Model

    Mortensen, J. H.; Mølbak, T.; Commisso, M.B.

    1997-01-01

    An optimizing control system for improving the load following capabilities of power plant units has been developed. The system is implemented as a complement producing additive control signals to the existing boiler control system, a concept which has various practical advantages in terms of implementation and commissioning. The optimizing control system takes into account the multivariable and nonlinear characteristics of the boiler process, as a gain-scheduled LQG-controller is utilized. For the purpose of facilitating the control concept development, a dynamic simulation model of the boiler process was used; the advantages of such a model when designing a new control concept are discussed.

  9. Analysis and optimization of gyrokinetic toroidal simulations on homogeneous and heterogeneous platforms

    Ibrahim, Khaled Z.; Madduri, Kamesh; Williams, Samuel; Wang, Bei; Oliker, Leonid

    2013-01-01

    The Gyrokinetic Toroidal Code (GTC) uses the particle-in-cell method to efficiently simulate plasma microturbulence. This paper presents novel analysis and optimization techniques to enhance the performance of GTC on large-scale machines. We introduce cell access analysis to better manage locality vs. synchronization tradeoffs on CPU and GPU-based architectures. Finally, our optimized hybrid parallel implementation of GTC uses MPI, OpenMP, and NVIDIA CUDA, achieves up to a 2× speedup over the reference Fortran version on multiple parallel systems, and scales efficiently to tens of thousands of cores.

  10. Simulation and optimization of agricultural product supply chain system based on Witness

    Jiandong Liu

    2017-03-01

    Full Text Available Research on the agricultural product supply chain has important implications for improving the efficiency of agricultural product circulation, strengthening the construction of the agricultural market system, promoting agricultural modernization and addressing the three rural issues. Agricultural product supply chain systems have begun to be optimized through simulation techniques. In this paper, the agricultural product supply chain system is reasonably simplified under stated assumptions. A simulation model was developed using the simulation software Witness to study the agricultural product supply chain. Through analysis of the simulation output data, improvement suggestions were proposed as follows: improving the organization degree of agricultural production, improving agricultural product processing, establishing strategic partnerships and scientifically developing agricultural product logistics.

  11. A Multi-level hierarchic Markov process with Bayesian updating for herd optimization and simulation in dairy cattle

    Demeter, R.M.; Kristensen, A.R.; Dijkstra, J.; Oude Lansink, A.G.J.M.; Meuwissen, M.P.M.; Arendonk, van J.A.M.

    2011-01-01

    Herd optimization models that determine economically optimal insemination and replacement decisions are valuable research tools to study various aspects of farming systems. The aim of this study was to develop a herd optimization and simulation model for dairy cattle. The model determines

  12. Teaching Simulation and Computer-Aided Separation Optimization in Liquid Chromatography by Means of Illustrative Microsoft Excel Spreadsheets

    Fasoula, S.; Nikitas, P.; Pappa-Louisi, A.

    2017-01-01

    A series of Microsoft Excel spreadsheets were developed to simulate the process of separation optimization under isocratic and simple gradient conditions. The optimization procedure is performed in a stepwise fashion using simple macros for an automatic application of this approach. The proposed optimization approach involves modeling of the peak…

  13. Land Surface Model and Particle Swarm Optimization Algorithm Based on the Model-Optimization Method for Improving Soil Moisture Simulation in a Semi-Arid Region.

    Yang, Qidong; Zuo, Hongchao; Li, Weidong

    2016-01-01

    Improving the capability of land-surface process models to simulate soil moisture assists in better understanding the atmosphere-land interaction. In semi-arid regions, due to limited near-surface observational data and large errors in large-scale parameters obtained by the remote sensing method, there exist uncertainties in land surface parameters, which can cause large offsets between the simulated results of land-surface process models and the observational data for the soil moisture. In this study, observational data from the Semi-Arid Climate Observatory and Laboratory (SACOL) station in the semi-arid loess plateau of China were divided into three datasets: summer, autumn, and summer-autumn. By combining the particle swarm optimization (PSO) algorithm and the land-surface process model SHAW (Simultaneous Heat and Water), the soil and vegetation parameters that are related to the soil moisture but difficult to obtain by observations are optimized using three datasets. On this basis, the SHAW model was run with the optimized parameters to simulate the characteristics of the land-surface process in the semi-arid loess plateau. Simultaneously, the default SHAW model was run with the same atmospheric forcing as a comparison test. Simulation results revealed the following: parameters optimized by the particle swarm optimization algorithm in all simulation tests improved simulations of the soil moisture and latent heat flux; differences between simulated results and observational data are clearly reduced, but simulation tests involving the adoption of optimized parameters cannot simultaneously improve the simulation results for the net radiation, sensible heat flux, and soil temperature. Optimized soil and vegetation parameters based on different datasets have the same order of magnitude but are not identical; soil parameters only vary to a small degree, but the variation range of vegetation parameters is large.
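    The calibration loop described above can be sketched as follows. This is a minimal particle swarm minimizer; the two-parameter toy "model" and the short observation series are illustrative stand-ins (an actual application would wrap a SHAW run), and the objective is the RMSE between simulated and observed soil moisture:

```python
import random

def pso(f, bounds, n_particles=15, iters=100, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal particle swarm minimizer over box-bounded parameters."""
    rng = random.Random(seed)
    dim = len(bounds)
    X = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    V = [[0.0] * dim for _ in range(n_particles)]
    pbest = [x[:] for x in X]                  # per-particle best positions
    pcost = [f(x) for x in X]
    g = min(range(n_particles), key=pcost.__getitem__)
    gbest, gcost = pbest[g][:], pcost[g]       # swarm-wide best
    for _ in range(iters):
        for i in range(n_particles):
            for j in range(dim):
                r1, r2 = rng.random(), rng.random()
                V[i][j] = (w * V[i][j] + c1 * r1 * (pbest[i][j] - X[i][j])
                           + c2 * r2 * (gbest[j] - X[i][j]))
                lo, hi = bounds[j]
                X[i][j] = min(max(X[i][j] + V[i][j], lo), hi)
            c = f(X[i])
            if c < pcost[i]:
                pbest[i], pcost[i] = X[i][:], c
                if c < gcost:
                    gbest, gcost = X[i][:], c
    return gbest, gcost

# Hypothetical calibration target: RMSE between observed soil moisture and a
# two-parameter toy wetting trend (a stand-in for a full SHAW simulation).
obs = [0.21, 0.24, 0.26, 0.27]
def rmse(p):
    k, theta0 = p
    sim = [theta0 + k * t for t in range(len(obs))]
    return (sum((s - o) ** 2 for s, o in zip(sim, obs)) / len(obs)) ** 0.5

best, err = pso(rmse, [(0.0, 0.1), (0.1, 0.4)])
```

    The same loop applies unchanged when `rmse` internally launches the land-surface model with the candidate parameters.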

  14. Simulation of a method for determining one-dimensional {sup 137}Cs distribution using multiple gamma spectroscopic measurements with an adjustable cylindrical collimator and center shield

    Whetstone, Z.D.; Dewey, S.C. [Radiological Health Engineering Laboratory, Department of Nuclear Engineering and Radiological Sciences, University of Michigan, 2355 Bonisteel Boulevard, 1906 Cooley Building, Ann Arbor, MI 48109-2104 (United States); Kearfott, K.J., E-mail: kearfott@umich.ed [Radiological Health Engineering Laboratory, Department of Nuclear Engineering and Radiological Sciences, University of Michigan, 2355 Bonisteel Boulevard, 1906 Cooley Building, Ann Arbor, MI 48109-2104 (United States)

    2011-05-15

    With multiple in situ gamma spectroscopic measurements obtained with an adjustable cylindrical collimator and a circular shield, the arbitrary one-dimensional distribution of radioactive material can be determined. The detector responses are theoretically calculated, field measurements obtained, and a system of equations relating detector response to measurement geometry and activity distribution solved to estimate the distribution. This paper demonstrates the method by simulating multiple scenarios and providing analysis of the system conditioning.

  15. Numerical simulation of the modulation transfer function (MTF) in infrared focal plane arrays: simulation methodology and MTF optimization

    Schuster, J.

    2018-02-01

    Military requirements demand both single and dual-color infrared (IR) imaging systems with both high resolution and sharp contrast. To quantify the performance of these imaging systems, a key measure of performance, the modulation transfer function (MTF), describes how well an optical system reproduces an object's contrast in the image plane at different spatial frequencies. At the center of an IR imaging system is the focal plane array (FPA). IR FPAs are hybrid structures consisting of a semiconductor detector pixel array, typically fabricated from HgCdTe, InGaAs or III-V superlattice materials, hybridized with heat/pressure to a silicon read-out integrated circuit (ROIC) with indium bumps on each pixel providing the mechanical and electrical connection. Due to the growing sophistication of the pixel arrays in these FPAs, advanced modeling techniques are required to predict, understand, and benchmark the pixel array MTF that contributes to the total imaging system MTF. To model the pixel array MTF, computationally exhaustive 2D and 3D numerical simulation approaches are required to correctly account for complex architectures and effects such as lateral diffusion from the pixel corners. It is paramount to accurately model the lateral diffusion (pixel crosstalk) as it can become the dominant mechanism limiting the detector MTF if not properly mitigated. Once the detector MTF has been simulated, it is directly decomposed into its constituent contributions to reveal exactly what is limiting the total detector MTF, providing a path for optimization. An overview of the MTF will be given and the simulation approach will be discussed in detail, along with how different simulation parameters affect the MTF calculation. Finally, MTF optimization strategies (crosstalk mitigation) will be discussed.

  16. Influence of the method of optimizing adjustments of ARV-SD on attainable degree of system stability. Vliyaniye metoda optimizatsii nastroyek ARV-SD na dostizhimuyu stepen ustoychivosti sistemy

    Gruzdev, I.A.; Trudospekova, G.Kh.

    1983-01-01

    An examination is made of the efficiency of the methods of successive and simultaneous optimization of adjustments of ARV-SD (ARV of strong action) of several PP. It is shown that with the use of the method of simultaneous optimization for an idealized model of complex EPS, it is possible to attain absolute controllability of the degree of stability.

  17. A New Method Based on Simulation-Optimization Approach to Find Optimal Solution in Dynamic Job-shop Scheduling Problem with Breakdown and Rework

    Farzad Amirkhani

    2017-03-01

    The proposed method is implemented on classical job-shop problems with the objective of makespan, and the results are compared with a mixed integer programming model. Moreover, appropriate dispatching priorities are achieved for the dynamic job-shop problem minimizing multi-objective criteria. The results show that simulation-based optimization is highly capable of capturing the main characteristics of the shop and producing optimal or near-optimal solutions with a high degree of credibility.
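    The core of such simulation-based evaluation is a dispatching simulator that turns a priority rule into a schedule and measures its makespan. The sketch below is a hedged illustration, not the paper's model: a greedy list-scheduling loop on a hypothetical three-job, two-machine instance, with the rule (here shortest processing time) supplied as a key function:

```python
def jobshop_makespan(jobs, rule):
    """Greedy list scheduling: repeatedly dispatch the ready operation with
    the smallest rule(processing_time) value onto its machine."""
    mach_free = {}                       # machine -> time it becomes free
    job_free = [0.0] * len(jobs)         # job -> time its last op finished
    next_op = [0] * len(jobs)            # job -> index of next operation
    remaining = sum(len(ops) for ops in jobs)
    while remaining:
        ready = [(j, jobs[j][next_op[j]]) for j in range(len(jobs))
                 if next_op[j] < len(jobs[j])]
        j, (m, p) = min(ready, key=lambda r: rule(r[1][1]))
        start = max(job_free[j], mach_free.get(m, 0.0))
        job_free[j] = mach_free[m] = start + p
        next_op[j] += 1
        remaining -= 1
    return max(job_free)

# Hypothetical instance: each operation is (machine, processing_time).
jobs = [[(0, 3), (1, 2)], [(0, 2), (1, 4)], [(1, 1), (0, 2)]]
spt = jobshop_makespan(jobs, rule=lambda p: p)   # shortest processing time
```

    Swapping the `rule` lambda (e.g. `lambda p: -p` for longest processing time) is how competing dispatching priorities would be compared in such a study; breakdowns and rework would be added as events inside the loop.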

  18. Simulation and optimization of a coking wastewater biological treatment process by activated sludge models (ASM).

    Wu, Xiaohui; Yang, Yang; Wu, Gaoming; Mao, Juan; Zhou, Tao

    2016-01-01

    Applications of activated sludge models (ASM) in simulating industrial biological wastewater treatment plants (WWTPs) are still difficult due to refractory and complex components in influents as well as diversity in activated sludges. In this study, an ASM3 modeling study was conducted to simulate and optimize a practical coking wastewater treatment plant (CWTP). First, respirometric characterizations of the coking wastewater and CWTP biomasses were conducted to determine the specific kinetic and stoichiometric model parameters for the consecutive aeration-anoxic-aeration (O-A/O) biological process. All ASM3 parameters were further estimated and calibrated through cross validation by the model dynamic simulation procedure. Consequently, an ASM3 model was successfully established to accurately simulate the CWTP performance in removing COD and NH4-N. An optimized CWTP operating condition could be proposed, reducing the operating cost from 6.2 to 5.5 €/m³ wastewater. This study is expected to provide a useful reference for mathematical simulations of practical industrial WWTPs.
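    ASM3 itself tracks many coupled state variables; as a hedged illustration of the underlying dynamic-simulation idea only, the sketch below integrates a single-substrate Monod chemostat with forward Euler. All parameter values are illustrative placeholders, not calibrated ASM3 or coking-wastewater values:

```python
def simulate_cstr(S0=200.0, X0=1500.0, mu_max=4.0, Ks=20.0, Y=0.6, b=0.1,
                  hrt=0.5, days=10.0, dt=0.001):
    """Completely mixed reactor, Monod growth kinetics.
    S: substrate (mg COD/L), X: biomass (mg/L), hrt: hydraulic
    retention time (d), b: decay rate (1/d). Forward-Euler integration."""
    S, X = S0, X0
    for _ in range(int(days / dt)):
        mu = mu_max * S / (Ks + S)          # Monod specific growth rate (1/d)
        dS = (S0 - S) / hrt - mu * X / Y    # inflow/outflow + microbial uptake
        dX = -X / hrt + (mu - b) * X        # washout + net growth
        S += dS * dt
        X += dX * dt
    return S, X

S_out, X_out = simulate_cstr()
```

    At steady state the growth rate balances washout and decay, mu = b + 1/hrt, which for these placeholder values gives S ≈ 22 mg/L; a full ASM3 implementation replaces the single Monod term with the switched aerobic/anoxic process rates of the O-A/O train.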

  19. GATE simulation of a LYSO-based SPECT imager: Validation and detector optimization

    Li, Suying; Zhang, Qiushi; Xie, Zhaoheng; Liu, Qi; Xu, Baixuan; Yang, Kun; Li, Changhui; Ren, Qiushi

    2015-01-01

    This paper presents a small animal SPECT system that is based on cerium-doped lutetium–yttrium oxyorthosilicate (LYSO) scintillation crystal, position sensitive photomultiplier tubes (PSPMTs) and a parallel hole collimator. A spatial resolution test and an animal experiment were performed to demonstrate the imaging performance of the detector. Preliminary results indicated a spatial resolution of 2.5 mm FWHM, which could not meet our design requirement. Therefore, we simulated this gamma camera using GATE (GEANT4 Application for Tomographic Emission), aiming to make the detector spatial resolution less than 2 mm. First, the GATE simulation process was validated through comparison between simulated and experimental data, which also indicates the accuracy and effectiveness of GATE simulation for LYSO-based gamma cameras. Then different detector sampling methods (crystal sizes of 1.5 and 1 mm) and collimator designs (collimator heights of 30, 34.8, 38, and 43 mm) were studied to determine an optimized parameter set. Detector sensitivity changes were also examined, since different parameter sets generate different spatial resolution results. Tradeoff curves of spatial resolution and sensitivity were plotted to determine the optimal collimator height for each sampling method. Simulation results show that a scintillation crystal size of 1 mm and a collimator height of 38 mm, which generate a spatial resolution of ∼1.8 mm and a sensitivity of ∼0.065 cps/kBq, can be an ideal configuration for our SPECT imager design.

  20. Robust Inventory System Optimization Based on Simulation and Multiple Criteria Decision Making

    Ahmad Mortazavi

    2014-01-01

    Full Text Available Inventory management in retail is a difficult and complex decision-making process involving conflicting criteria; moreover, cyclic changes and trends in demand are inevitable in many industries. In this paper, simulation modeling is considered an efficient tool for modeling a retailer's multiproduct inventory system. For simulation model optimization, a novel multicriteria and robust surrogate model is designed based on a multiple attribute decision making (MADM) method, design of experiments (DOE), and principal component analysis (PCA). This approach, the main contribution of this paper, provides a framework for robust multiple criteria decision making under uncertainty.
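    The MADM step scores competing simulated policies against conflicting criteria. As one common MADM method (the paper's exact method may differ), here is a TOPSIS sketch over hypothetical DOE output, where each policy is scored by its closeness to the ideal solution; the criteria values and weights are invented for illustration:

```python
def topsis(matrix, weights, benefit):
    """Rank alternatives by relative closeness to the ideal solution.
    matrix[i][j]: score of alternative i on criterion j;
    benefit[j]: True if criterion j is to be maximized."""
    m, n = len(matrix), len(matrix[0])
    norms = [sum(matrix[i][j] ** 2 for i in range(m)) ** 0.5 for j in range(n)]
    V = [[weights[j] * matrix[i][j] / norms[j] for j in range(n)]
         for i in range(m)]                       # weighted normalized matrix
    ideal = [max(V[i][j] for i in range(m)) if benefit[j]
             else min(V[i][j] for i in range(m)) for j in range(n)]
    worst = [min(V[i][j] for i in range(m)) if benefit[j]
             else max(V[i][j] for i in range(m)) for j in range(n)]
    scores = []
    for i in range(m):
        dp = sum((V[i][j] - ideal[j]) ** 2 for j in range(n)) ** 0.5
        dm = sum((V[i][j] - worst[j]) ** 2 for j in range(n)) ** 0.5
        scores.append(dm / (dp + dm))             # 1 = ideal, 0 = worst
    return scores

# Hypothetical DOE output per policy: (service level %, avg inventory cost).
designs = [[95.0, 120.0], [92.0, 90.0], [88.0, 70.0]]
scores = topsis(designs, weights=[0.5, 0.5], benefit=[True, False])
```

    Robustness is obtained in the paper by scoring each design across uncertain demand scenarios before ranking; PCA would then reduce correlated criteria before this scoring step.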

  1. On the formulation and numerical simulation of distributed-order fractional optimal control problems

    Zaky, M. A.; Machado, J. A. Tenreiro

    2017-11-01

    In a fractional optimal control problem, the integer order derivative is replaced by a fractional order derivative. The fractional derivative embeds implicitly the time delays in an optimal control process. The order of the fractional derivative can be distributed over the unit interval, to capture delays of distinct sources. The purpose of this paper is twofold. Firstly, we derive the generalized necessary conditions for optimal control problems with dynamics described by ordinary distributed-order fractional differential equations (DFDEs). Secondly, we propose an efficient numerical scheme for solving an unconstrained convex distributed optimal control problem governed by the DFDE. We convert the problem under consideration into an optimal control problem governed by a system of DFDEs, using the pseudo-spectral method and the Jacobi-Gauss-Lobatto (J-G-L) integration formula. Next, we present the numerical solutions for a class of optimal control problems of systems governed by DFDEs. The convergence of the proposed method is graphically analyzed showing that the proposed scheme is a good tool for the simulation of distributed control problems governed by DFDEs.

  2. Numerical simulation for optimization of multipole permanent magnets of multicusp ion source

    Hosseinzadeh, M.; Afarideh, H.

    2014-01-01

    A new ion source will be designed and manufactured for the CYCLONE30 commercial cyclotron with much improved performance compared with the previous one. The newly designed ion source provides a higher plasma density and is designed to deliver an H− beam at 30 keV. In this paper, numerical simulation of the magnetic flux density from the permanent magnets used for a multicusp ion source, plasma confinement and trapping of fast electrons by the magnetic field has been performed to optimize the number of magnets confining the plasma. A code has been developed to track electrons in the magnetic field and evaluate the mean lifetime of electrons in the plasma under different magnetic conditions, allowing a better evaluation and comparison of density in the different cases. The purpose of this design is to recapture more energetic electrons with permanent magnets. Performance simulations of the optimized ion source show considerable improvement over the one reported by IBA.

  3. Simulation and optimization of logistics distribution for an engine production line

    Song, L.; Jin, S.; Tang, P.

    2016-07-01

    In order to analyze the factors affecting the logistics distribution system, solve the problem of stock-outs on the production line and improve the efficiency of the assembly line, an optimization scheme for the distribution system was put forward using industrial engineering methods. A simulation model of the logistics distribution system for an engine assembly line was built based on Witness software. The optimization plan is effective in improving logistics distribution efficiency and assembly line productivity, and in reducing production line storage. Based on the modeling and simulation of the engine production logistics distribution system, the results reflect the factors influencing the production logistics system, which has reference value for improving the efficiency of the production line. (Author)

  4. 3D Model Optimization of Four-Facet Drill for 3D Drilling Simulation

    Buranský Ivan

    2016-09-01

    Full Text Available The article is focused on the optimization of a four-facet drill for 3D drilling numerical modelling. For the optimization, the process of reverse engineering with PowerShape software was used. The design of the four-facet drill was created in NumrotoPlus software. The modified 3D model of the drill was used in the numerical analysis of cutting forces. Verification of the accuracy of the 3D models for reverse engineering was implemented using colour deviation maps. The CAD model was in the STEP format. For simulation software, a 3D model in the STEP format is ideal, since STEP describes a solid model. The simulation software automatically splits the 3D model into finite elements. The STEP model was therefore more suitable than the STL model.

  5. Using simulation-based optimization to improve performance at a tire manufacturing company

    Mohamad Darayi

    2013-04-01

    Full Text Available In this paper, a simulation optimization-based decision support tool has been developed to study capacity enhancement scenarios in a tire manufacturing company located in Iran. This company is experiencing challenges in synchronizing production output with customer demand, causing an unbalanced work-in-process (WIP) inventory distribution throughout the tire manufacturing process. However, a new opportunity to increase the supply of raw materials by fifty percent, together with expected growth in market demand, necessitates this study of the current company situation. This research, supported by the company, analyzes whether the ongoing production logistics system can respond to the increased market demand, considering the raw material expansion. Implementation of a proposed hybrid push/pull production control strategy, together with facility capacity enhancement options in bottleneck stations and/or heterogeneous lines within the plant, is investigated by the proposed simulation optimization methodology.

  6. Optimization of source pencil deployment based on plant growth simulation algorithm

    Yang Lei; Liu Yibao; Liu Yujuan

    2009-01-01

    A plant growth simulation algorithm was proposed for optimizing source pencil deployment for a 60Co irradiator. A method to evaluate the calculation results was presented, with the objective function defined as the relative standard deviation of the exposure rate at the reference points, and the method to transform the two kinds of control variables, i.e., the position coordinates x_j and y_j of the source pencils in the source plaque, into proper integer variables was also analyzed and solved. The results show that the plant growth simulation algorithm, which possesses both random and directional search mechanisms, has good global search ability and can be used conveniently. The results are only slightly affected by initial conditions and improve the uniformity of the irradiation field. This provides a dependable basis for optimizing the arrangement of source pencils at irradiation facilities. (authors)
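    The objective described (relative standard deviation of the exposure rate at reference points) can be sketched with a toy inverse-square exposure model. For the tiny hypothetical instance below, brute-force enumeration of slot assignments suffices, whereas the plant growth simulation algorithm targets much larger search spaces; all geometry and counts here are invented for illustration:

```python
from itertools import combinations

def relative_std_dev(rates):
    """Uniformity objective: population RSD of exposure rates."""
    mean = sum(rates) / len(rates)
    return (sum((r - mean) ** 2 for r in rates) / len(rates)) ** 0.5 / mean

def exposure(sources, point):
    # Toy point-source model: exposure rate ~ sum of 1/d^2 over pencils.
    return sum(1.0 / ((sx - point[0]) ** 2 + (sy - point[1]) ** 2)
               for sx, sy in sources)

# Hypothetical 5x5 grid of candidate slots and four reference points.
slots = [(x, y) for x in range(-2, 3) for y in range(-2, 3)]
refs = [(-4.0, 0.0), (4.0, 0.0), (0.0, 4.0), (0.0, -4.0)]

best_layout, best_rsd = min(
    ((layout, relative_std_dev([exposure(layout, p) for p in refs]))
     for layout in combinations(slots, 3)),
    key=lambda t: t[1])
```

    For realistic plaque sizes the number of slot combinations explodes, which is exactly where a heuristic with random plus directional search, such as the algorithm in the paper, replaces the `combinations` loop.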

  7. Structure Optimization and Numerical Simulation of Nozzle for High Pressure Water Jetting

    Shuce Zhang

    2015-01-01

    Full Text Available Three kinds of nozzles normally used in industrial production are numerically simulated, and the structure of the nozzle with the best jetting performance of the three is optimized. Among them, the R90 nozzle displays the best jetting properties, owing to the smooth transition of its inner surface. Simulation results for all sample nozzles in this study show that the helix nozzle ultimately displays the best jetting performance. The jetting velocity magnitude along the Y and Z coordinates is not symmetrical for the helix nozzle. Compared to simply changing the jetting angle, revolving the jet issued from the helix nozzle creates a grinding-wheel effect on the cleaning surface, which produces not only an impact effect but also a shearing action on the cleaning object. This shearing action improves the cleaning process overall and forms a wider effective cleaning range, thus obtaining a broader jet width.

  8. Simulation-optimization model for production planning in the blood supply chain.

    Osorio, Andres F; Brailsford, Sally C; Smith, Honora K; Forero-Matiz, Sonia P; Camacho-Rodríguez, Bernardo A

    2017-12-01

    Production planning in the blood supply chain is a challenging task. Many complex factors such as uncertain supply and demand, blood group proportions, shelf life constraints and different collection and production methods have to be taken into account, and thus advanced methodologies are required for decision making. This paper presents an integrated simulation-optimization model to support both strategic and operational decisions in production planning. Discrete-event simulation is used to represent the flows through the supply chain, incorporating collection, production, storing and distribution. On the other hand, an integer linear optimization model running over a rolling planning horizon is used to support daily decisions, such as the required number of donors, collection methods and production planning. This approach is evaluated using real data from a blood center in Colombia. The results show that, using the proposed model, key indicators such as shortages, outdated units, donors required and cost are improved.
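    The daily decision structure (how many donors to recruit, FIFO issuing, expiry tracking) can be illustrated with a toy rolling-horizon policy. The integer linear program of the paper is replaced here by a simple order-up-to recruitment rule, one unit per donor is assumed, and every number below is hypothetical:

```python
from collections import deque

def rolling_plan(demand, shelf_life=5, lead_time=1, safety=10):
    """Toy rolling-horizon policy: each day recruit enough donors so that
    the inventory position covers today's demand plus a safety stock."""
    stock = deque()                              # unit ages, oldest on the left
    pipeline = deque([0] * lead_time)            # units being collected/tested
    shortages = outdated = donors_total = 0
    for d in demand:
        stock.extend([0] * pipeline.popleft())   # earlier collections arrive
        position = len(stock) + sum(pipeline)    # on-hand + in-pipeline
        recruit = max(0, d + safety - position)  # base-stock recruitment rule
        donors_total += recruit
        pipeline.append(recruit)                 # one unit per donor (assumed)
        need = d
        while need and stock:                    # FIFO issuing
            stock.popleft()
            need -= 1
        shortages += need
        stock = deque(a + 1 for a in stock)      # age the survivors
        outdated += sum(1 for a in stock if a >= shelf_life)
        stock = deque(a for a in stock if a < shelf_life)
    return shortages, outdated, donors_total

shortages, outdated, donors = rolling_plan([12, 8, 15, 10, 9])
```

    In the paper's approach, the `recruit` line is where the integer linear optimization over the planning horizon (collection methods, blood groups, production routes) would sit, while a discrete-event simulation plays the role of the issuing and expiry bookkeeping.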

  9. Intermolecular Force Field Parameters Optimization for Computer Simulations of CH4 in ZIF-8

    Phannika Kanthima

    2016-01-01

    Full Text Available The differential evolution (DE) algorithm is applied to obtain optimized intermolecular interaction parameters between CH4 and 2-methylimidazolate ([C4N2H5]−) using quantum binding energies of CH4–[C4N2H5]− complexes. The initial parameters and their upper/lower bounds are obtained from the general AMBER force field. The DE-optimized and the AMBER parameters are then used in molecular dynamics (MD) simulations of CH4 molecules in the frameworks of ZIF-8. The results show that the DE parameters represent the quantum interaction energies better than the AMBER parameters. The dynamical and structural behaviors obtained from MD simulations with the two sets of parameters also differ notably.
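    The optimization step can be sketched as a minimal DE/rand/1/bin loop. The quadratic misfit below merely stands in for the quantum binding-energy objective, and the "optimum" (0.3, 3.4) is a made-up placeholder, not a fitted force-field value:

```python
import random

def differential_evolution(f, bounds, pop_size=20, F=0.8, CR=0.9,
                           generations=200, seed=1):
    """Minimal DE/rand/1/bin minimizer over box-constrained parameters."""
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    cost = [f(x) for x in pop]
    for _ in range(generations):
        for i in range(pop_size):
            a, b, c = rng.sample([j for j in range(pop_size) if j != i], 3)
            jrand = rng.randrange(dim)           # force at least one mutation
            trial = []
            for j in range(dim):
                if rng.random() < CR or j == jrand:
                    lo, hi = bounds[j]
                    v = pop[a][j] + F * (pop[b][j] - pop[c][j])
                    trial.append(min(max(v, lo), hi))   # clip to bounds
                else:
                    trial.append(pop[i][j])
            fc = f(trial)
            if fc <= cost[i]:                    # greedy selection
                pop[i], cost[i] = trial, fc
    best = min(range(pop_size), key=cost.__getitem__)
    return pop[best], cost[best]

# Toy stand-in for the quantum binding-energy misfit: a shifted quadratic
# with a hypothetical optimum at (0.3, 3.4).
def misfit(p):
    return (p[0] - 0.3) ** 2 + (p[1] - 3.4) ** 2

params, err = differential_evolution(misfit, [(0.0, 1.0), (2.0, 5.0)])
```

    In the paper's setting, `misfit` would be the squared error between force-field and quantum energies over the set of CH4–[C4N2H5]− complex geometries, with bounds taken from the general AMBER force field.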

  10. Effective Energy Simulation and Optimal Design of Side-lit Buildings with Venetian Blinds

    Cheng, Tian

    Venetian blinds are widely used in buildings to control the amount of incoming daylight, improving visual comfort and reducing heat gains in air-conditioning systems. Studies have shown that the proper design and operation of window systems can result in significant energy savings in both lighting and cooling. However, at present there is no convenient computer tool that allows effective and efficient optimization of the envelope of side-lit buildings with blinds. Three computer tools widely used for this purpose, Adeline, DOE2 and EnergyPlus, have been experimentally examined in this study. Results indicate that the two former tools give unacceptable accuracy due to unrealistic assumptions, while the last may generate large errors in certain conditions. Moreover, current computer tools have to conduct hourly energy simulations, which are not necessary for life-cycle energy analysis and optimal design, to provide annual cooling loads. This is not computationally efficient, and is particularly unsuitable for optimally designing a building at the initial stage, because the impacts of many design variations and optional features have to be evaluated. A methodology is therefore developed for efficient and effective thermal and daylighting simulations and optimal design of buildings with blinds. Based on geometric optics and the radiosity method, a mathematical model is developed to reasonably simulate the daylighting behavior of venetian blinds. Indoor illuminance at any reference point can be computed directly and efficiently. The models have been validated with both experiments and simulations with Radiance. Validation results show that indoor illuminances computed by the new models agree well with the measured data, and the accuracy provided by them is equivalent to that of Radiance. The computational efficiency of the new models is much higher than that of Radiance as well as EnergyPlus.
Two new methods are developed for the thermal simulation of buildings. A

  11. An Optimized Parallel FDTD Topology for Challenging Electromagnetic Simulations on Supercomputers

    Shugang Jiang

    2015-01-01

    Full Text Available It may not be a challenge to run a Finite-Difference Time-Domain (FDTD) code for electromagnetic simulations on a supercomputer with more than ten thousand CPU cores; however, making FDTD code work with the highest efficiency is a challenge. In this paper, the performance of parallel FDTD is optimized through MPI (message passing interface) virtual topology, based on which a communication model is established. The general rules of optimal topology are presented according to the model. The performance of the method is tested and analyzed on three high performance computing platforms with different architectures in China. Simulations including an airplane with a 700-wavelength wingspan and a complex microstrip antenna array with nearly 2000 elements are performed very efficiently using a maximum of 10240 CPU cores.
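    The idea of choosing a process topology from a communication model can be illustrated with a simple halo-exchange cost sketch: enumerate the factorizations of the process count and keep the 3D grid with the smallest per-process communication surface. The grid sizes below are hypothetical, and a real FDTD topology optimization (as in the paper) would also weight the model by the machine's network architecture:

```python
from itertools import product

def halo_volume(nx, ny, nz, px, py, pz):
    """Per-process halo-exchange volume (cells) for a block decomposition of
    an nx*ny*nz FDTD grid onto a px*py*pz process topology: six faces,
    each one cell deep."""
    sx, sy, sz = nx / px, ny / py, nz / pz   # local subdomain dimensions
    return 2 * (sx * sy + sy * sz + sx * sz)

def best_topology(nx, ny, nz, nprocs):
    """Enumerate factorizations of nprocs; pick the grid with minimal halo."""
    best, best_cost = None, float("inf")
    for px, py, pz in product(range(1, nprocs + 1), repeat=3):
        if px * py * pz != nprocs:
            continue
        cost = halo_volume(nx, ny, nz, px, py, pz)
        if cost < best_cost:
            best, best_cost = (px, py, pz), cost
    return best, best_cost

# Hypothetical 512 x 512 x 256 Yee grid on 64 processes.
topo, cost = best_topology(512, 512, 256, 64)
```

    The winning grid would then be passed to the MPI Cartesian-topology machinery (e.g. `MPI_Cart_create`) so that neighboring subdomains map to nearby ranks.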

  12. Simulation and optimization of logistics distribution for an engine production line

    Lijun Song

    2016-02-01

    Full Text Available Purpose: To analyze the factors affecting the logistics distribution system, solve the problem of stock-outs on the production line and improve the efficiency of the assembly line. Design/methodology/approach: Using industrial engineering methods, an optimization scheme for the distribution system was put forward. A simulation model of the logistics distribution system for an engine assembly line was built based on Witness software. Findings: The optimization plan is effective in improving logistics distribution efficiency and assembly line productivity, and in reducing production line storage. Originality/value: Based on the modeling and simulation of the engine production logistics distribution system, the results reflect the factors influencing the production logistics system, which has reference value for improving the efficiency of the production line.

  13. Positioning and number of nutritional levels in dose-response trials to estimate the optimal-level and the adjustment of the models

    Fernando Augusto de Souza

    2014-07-01

    Full Text Available The aim of this research was to evaluate the influence of the number and position of nutrient levels used in dose-response trials on the estimation of the optimal level (OL) and the goodness of fit of the models: quadratic polynomial (QP), exponential (EXP), linear response plateau (LRP) and quadratic response plateau (QRP). Data were used from dose-response trials carried out at FCAV-Unesp Jaboticabal, considering homogeneity of variances and normal distribution. The fit of the models was evaluated using the following statistics: adjusted coefficient of determination (R²adj), coefficient of variation (CV) and the sum of squared deviations (SSD). It was verified for the QP and EXP models that small changes in the placement and distribution of the levels caused great changes in the estimation of the OL. The LRP model was deeply influenced by the absence or presence of a level between the response and stabilization phases (the change from the straight line to the plateau). The QRP needed more levels in the response phase and the last level in the stabilization phase to estimate the plateau correctly. It was concluded that the OL and the adjustment of the models depend on the positioning and number of the levels and on the specific characteristics of each model, but levels defined near the true requirement and not too widely spaced are better for estimating the OL.

  14. Temperature Simulation of Greenhouse with CFD Methods and Optimal Sensor Placement

    Yanzheng Liu

    2014-03-01

    Full Text Available The accuracy of information monitoring is significant for increasing the effectiveness of greenhouse environment control. In this paper, taking simulation of the temperature field in the greenhouse as an example, a CFD (Computational Fluid Dynamics) simulation model for the microclimate environment of the greenhouse was established based on the principles of thermal environment formation, and the temperature distributions under mechanical ventilation were also simulated. The results showed that the CFD model and its solution could describe the changing process of the temperature environment within the greenhouse; the most suitable turbulence model was the standard k–ε model. Under mechanical ventilation, the average deviation between the simulated and measured values was 0.6, which was 4.5 percent of the measured value. The distribution of the temperature field had an obvious layered structure, and the temperature in the greenhouse model decreased gradually from the periphery to the center. Based on these results, the sensor number and the optimal sensor placement were determined with the CFD simulation method.

  15. Virtual reality simulation for the optimization of endovascular procedures: current perspectives

    Rudarakanchana N

    2015-03-01

    Full Text Available Nung Rudarakanchana,1 Isabelle Van Herzeele,2 Liesbeth Desender,2 Nicholas JW Cheshire1 1Department of Surgery, Imperial College London, London, UK; 2Department of Thoracic and Vascular Surgery, Ghent University Hospital, Ghent, Belgium. On behalf of EVEREST (European Virtual reality Endovascular RESearch Team). Abstract: Endovascular technologies are rapidly evolving, often requiring coordination and cooperation between clinicians and technicians from diverse specialties. These multidisciplinary interactions lead to challenges that are reflected in the high rate of errors occurring during endovascular procedures. Endovascular virtual reality (VR) simulation has evolved from simple benchtop devices to full physics simulators with advanced haptics, dynamic imaging and physiological controls. The latest developments in this field include the use of fully immersive simulated hybrid angiosuites to train whole endovascular teams in crisis resource management, and novel technologies that enable practitioners to build VR simulations based on patient-specific anatomy. As our understanding of the skills, both technical and nontechnical, required for optimal endovascular performance improves, the requisite tools for objective assessment of these skills are being developed and will further enable the use of VR simulation in the training and assessment of endovascular interventionalists and their entire teams. Simulation training that allows deliberate practice without danger to patients may be key to bridging the gap between new endovascular technology and improved patient outcomes. Keywords: virtual reality, simulation, endovascular, aneurysm

  16. Numerical Simulation of the Francis Turbine and CAD Used to Optimize the Runner Design (2nd).

    Sutikno, Priyono

    2010-06-01

    Hydro power is the most important renewable energy source on earth. The water is free of charge, and the production of greenhouse gases (mainly CO2) during the generation of electric energy in a hydroelectric power station is negligible. Hydro power generation stations are long-term installations that can be used for 50 years and more, so care must be taken to guarantee smooth and safe operation over the years. Maintenance is necessary, and critical parts of the machines have to be replaced when required. In modern engineering, numerical flow simulation plays an important role in optimizing the hydraulic turbine in conjunction with the connected components of the plant. Especially for rehabilitation and upgrading of existing power plants, important points of concern are predicting the power output of the turbine, achieving maximum hydraulic efficiency, and avoiding or minimizing cavitation and vibration over the whole operating range. Flow simulation can help to solve operational problems and to optimize the turbomachinery of hydroelectric generating stations or their components through intuitive optimization, mathematical optimization, parametric design, the reduction of cavitation through design, prediction of the draft tube vortex, and troubleshooting. The classic design by graphic-analytical methods is cumbersome and cannot highlight the positive or negative aspects of the design options, so it became necessary to replace the classical design methods with an adequate design method using CAD software. The many options chosen during the design calculation in a specific step can then be verified, as an ensemble and in detail, from several points of view. The final graphic post-processing would be realized only for the optimal solution, through a 3D representation of the runner as a whole for final approval of the geometric shape. In this article the redesign of the hydraulic turbine's runner

  17. Green Infrastructure Simulation and Optimization to Achieve Combined Sewer Overflow Reductions in Philadelphia's Mill Creek Sewershed

    Cohen, J. S.; McGarity, A. E.

    2017-12-01

    The ability for mass deployment of green stormwater infrastructure (GSI) to intercept significant amounts of urban runoff has the potential to reduce the frequency of a city's combined sewer overflows (CSOs). This study was performed to aid in the Overbrook Environmental Education Center's vision of applying this concept to create a Green Commercial Corridor in Philadelphia's Overbrook Neighborhood, which lies in the Mill Creek Sewershed. In an attempt to further implement physical and social reality into previous work using simulation-optimization techniques to produce GSI deployment strategies (McGarity, et al., 2016), this study's models incorporated land use types and a specific neighborhood in the sewershed. The low impact development (LID) feature in EPA's Storm Water Management Model (SWMM) was used to simulate various geographic configurations of GSI in Overbrook. The results from these simulations were used to obtain formulas describing the annual CSO reduction in the sewershed based on the deployed GSI practices. These non-linear hydrologic response formulas were then implemented into the Storm Water Investment Strategy Evaluation (StormWISE) model (McGarity, 2012), a constrained optimization model used to develop optimal stormwater management practices on the watershed scale. By saturating the avenue with GSI, not only will CSOs from the sewershed into the Schuylkill River be reduced, but ancillary social and economic benefits of GSI will also be achieved. The effectiveness of these ancillary benefits changes based on the type of GSI practice and the type of land use in which the GSI is implemented. Thus, the simulation and optimization processes were repeated while delimiting GSI deployment by land use (residential, commercial, industrial, and transportation). 
The results give a GSI deployment strategy that achieves desired annual CSO reductions at a minimum cost based on the locations of tree trenches, rain gardens, and rain barrels in specified land

  18. Simulation and optimization methods for logistics pooling in the outbound supply chain

    Jesus Gonzalez-Feliu; Carlos Peris-Pla; Dina Rakotonarivo

    2010-01-01

    Logistics pooling and collaborative transportation systems are relatively new concepts in logistics research, but are very popular in practice. This communication proposes a conceptual framework for logistics and transportation pooling systems, as well as a simulation method for strategic planning optimization. This method is based on a two-step constructive heuristic in order to estimate, for large instances, the transportation and storage costs at a macroscopic level. Fou...

  19. Simulation optimizing of n-type HIT solar cells with AFORS-HET

    Yao, Yao; Xiao, Shaoqing; Zhang, Xiumei; Gu, Xiaofeng

    2017-07-01

    This paper presents a study of heterojunction with intrinsic thin layer (HIT) solar cells based on n-type silicon substrates using the simulation software AFORS-HET. We have studied the influence of the thickness and band gap of the intrinsic layer and of the defect densities at each interface. The underlying mechanisms are elaborated as well. The results show that the optimized efficiency reaches more than 23%, which may provide useful guidance for the practical preparation of HIT solar cells in industry.

  20. Optimizing maintenance and repair policies via a combination of genetic algorithms and Monte Carlo simulation

    Marseguerra, M.; Zio, E.

    2000-01-01

    In this paper we present an optimization approach based on the combination of a genetic algorithm maximization procedure with Monte Carlo simulation. The approach is applied within the context of plant logistic management, concerning the choice of maintenance and repair strategies. A stochastic model of plant operation is developed from the standpoint of its reliability/availability behavior, i.e. of the failure/repair/maintenance processes of its components. The model is evaluated by Monte Carlo simulation in terms of economic costs and revenues of operation. The flexibility of the Monte Carlo method allows us to include several practical aspects such as stand-by operation modes, deteriorating repairs, aging, sequences of periodic maintenances, the number of repair teams available for different kinds of repair interventions (mechanical, electronic, hydraulic, etc.), and component priority rankings. A genetic algorithm is then utilized to optimize the component maintenance periods and the number of repair teams. The fitness function object of the optimization is a profit function which inherently accounts for the safety and economic performance of the plant and whose value is computed by the above Monte Carlo simulation model. For an efficient combination of genetic algorithms and Monte Carlo simulation, only a few hundred Monte Carlo histories are performed for each potential solution proposed by the genetic algorithm. Statistical significance of the results for the solutions of interest (i.e. the best ones) is then attained by exploiting the fact that during the population evolution the fit chromosomes appear repeatedly many times. The proposed optimization approach is applied to two case studies of increasing complexity.
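    The record describes the coupling but gives no code; the loop can be sketched in a few dozen lines of Python. Everything numerical below (failure and repair times, costs, GA settings) is hypothetical, and the plant is collapsed to a single component so the sketch stays short:

    ```python
    import random

    random.seed(1)

    def mc_profit(maint_period, histories=50, horizon=500.0, mttf=100.0,
                  repair_time=10.0, maint_time=2.0, revenue_rate=1.0,
                  repair_cost=50.0, maint_cost=5.0):
        """Monte Carlo estimate of the expected operating profit of a single
        component under periodic preventive maintenance (all figures hypothetical)."""
        total = 0.0
        for _ in range(histories):
            t, profit = 0.0, 0.0
            next_maint = maint_period
            while t < horizon:
                ttf = random.expovariate(1.0 / mttf)      # time to next failure
                if t + ttf < next_maint:                  # failure comes first
                    profit += ttf * revenue_rate - repair_cost
                    t += ttf + repair_time
                else:                                     # maintenance comes first
                    profit += (next_maint - t) * revenue_rate - maint_cost
                    t = next_maint + maint_time
                next_maint = t + maint_period             # maintenance clock restarts
            total += profit
        return total / histories

    def ga_optimize(generations=15, pop_size=12, lo=5.0, hi=500.0):
        """Genetic algorithm over the maintenance period; the noisy MC estimate
        above serves directly as the fitness function."""
        pop = [random.uniform(lo, hi) for _ in range(pop_size)]
        for _ in range(generations):
            parents = sorted(pop, key=mc_profit, reverse=True)[:pop_size // 2]
            children = []
            while len(parents) + len(children) < pop_size:
                a, b = random.sample(parents, 2)
                child = 0.5 * (a + b)                     # arithmetic crossover
                if random.random() < 0.2:                 # Gaussian mutation
                    child = min(hi, max(lo, child + random.gauss(0.0, 20.0)))
                children.append(child)
            pop = parents + children
        return max(pop, key=mc_profit)

    best = ga_optimize()  # maintenance period with the highest estimated profit
    ```

    Because each fitness call averages only a few dozen histories the estimate is noisy; as the abstract notes, statistical significance for the best candidates comes from their repeated reappearance across generations.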

  1. Uncertainty-based simulation-optimization using Gaussian process emulation: Application to coastal groundwater management

    Rajabi, Mohammad Mahdi; Ketabchi, Hamed

    2017-12-01

    Combined simulation-optimization (S/O) schemes have long been recognized as a valuable tool in coastal groundwater management (CGM). However, previous applications have mostly relied on deterministic seawater intrusion (SWI) simulations. This is a questionable simplification, knowing that SWI models are inevitably prone to epistemic and aleatory uncertainty, and hence a management strategy obtained through S/O without consideration of uncertainty may result in significantly different real-world outcomes than expected. However, two key issues have hindered the use of uncertainty-based S/O schemes in CGM, which are addressed in this paper. The first issue is how to solve the computational challenges resulting from the need to perform massive numbers of simulations. The second issue is how the management problem is formulated in presence of uncertainty. We propose the use of Gaussian process (GP) emulation as a valuable tool in solving the computational challenges of uncertainty-based S/O in CGM. We apply GP emulation to the case study of Kish Island (located in the Persian Gulf) using an uncertainty-based S/O algorithm which relies on continuous ant colony optimization and Monte Carlo simulation. In doing so, we show that GP emulation can provide an acceptable level of accuracy, with no bias and low statistical dispersion, while tremendously reducing the computational time. Moreover, five new formulations for uncertainty-based S/O are presented based on concepts such as energy distances, prediction intervals and probabilities of SWI occurrence. We analyze the proposed formulations with respect to their resulting optimized solutions, the sensitivity of the solutions to the intended reliability levels, and the variations resulting from repeated optimization runs.

  2. An optimized efficient dual junction InGaN/CIGS solar cell: A numerical simulation

    Farhadi, Bita; Naseri, Mosayeb

    2016-08-01

    The photovoltaic performance of an efficient double junction InGaN/CIGS solar cell including a CdS antireflector top cover layer is studied using Silvaco ATLAS software. In this study, to gain a desired structure, the different design parameters, including the CIGS various band gaps, the doping concentration and the thickness of CdS layer are optimized. The simulation indicates that under current matching condition, an optimum efficiency of 40.42% is achieved.

  3. Optimization of permanent-magnet undulator magnets ordering using simulated annealing algorithm

    Chen Nian; He Duohui; Li Ge; Jia Qika; Zhang Pengfei; Xu Hongliang; Cai Genwang

    2005-01-01

    Pure permanent-magnet undulators consist of many magnet blocks. The unavoidable remanence divergence of these magnets causes undulator magnetic field errors, which affect the functional mode of the storage ring and the quality of the spontaneous emission spectrum. By optimizing the magnet ordering with a simulated annealing algorithm before the undulator magnets are installed, the first field integral can be reduced to 10⁻⁶ T·m, the second integral to 10⁻⁶ T·m², and the peak field error to less than 10⁻⁴. The optimized results are independent of the initial solution. This paper gives the optimizing process in detail and puts forward a method to quickly calculate the peak field error and field integrals from the magnet remanence. (authors)
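    As an illustration of the ordering idea (not the authors' code), a pure-Python simulated annealing sketch that permutes hypothetical block remanence deviations to shrink simple proxies for the first and second field integrals:

    ```python
    import math
    import random

    random.seed(7)

    N = 32
    dev = [random.gauss(0.0, 0.01) for _ in range(N)]   # hypothetical remanence deviations
    sign = [(-1) ** k for k in range(N)]                # alternating block orientation

    def cost(order):
        """Proxies for the first and second field integrals of the error field."""
        i1 = sum(sign[k] * dev[m] for k, m in enumerate(order))
        i2 = sum(sign[k] * dev[m] * (N - k) for k, m in enumerate(order))
        return abs(i1) + abs(i2) / N

    def anneal(t0=1.0, t_end=1e-6, alpha=0.95, sweeps=100):
        order = list(range(N))
        c = cost(order)
        best, best_c = order[:], c
        t = t0
        while t > t_end:
            for _ in range(sweeps):
                i, j = random.sample(range(N), 2)
                order[i], order[j] = order[j], order[i]      # propose a swap
                c_new = cost(order)
                if c_new < c or random.random() < math.exp((c - c_new) / t):
                    c = c_new                                # accept (possibly worse)
                    if c < best_c:
                        best, best_c = order[:], c
                else:
                    order[i], order[j] = order[j], order[i]  # reject: undo the swap
            t *= alpha                                       # geometric cooling
        return best, best_c

    order, final_cost = anneal()   # ordering with much smaller integral proxies
    ```

    The controlled acceptance of worsening swaps at high temperature is what makes the final ordering independent of the starting arrangement, as the abstract observes.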

  4. Simulation and optimization of stable isotope 18O separation by cascade distillation

    Jiang Yongyue; Chen Yuyan; Qin Chuanjiang; Liu Yan; Gu Hongsen

    2011-01-01

    The research started from a design plan of four cascaded distillation towers. Firstly, uniform design was used as the method of experimental design. Then the correlation formula was obtained with the method of binomial stepwise regression. Finally, the optimal operating conditions were obtained using a genetic algorithm. Considering the combined factors of the feed rate and the flow rates drawn between cascade columns, conclusions were reached on their impact on the abundance of the isotope ¹⁸O. Correlation formulas were obtained between the abundance of ¹⁸O and the four operating variables, and between the heat consumption and the four operating variables. Single-factor response diagrams of the four factors are also shown. The results showed that this simulation and optimization method can be applied to ¹⁸O industrial design and could become common in traditional distillation processes to realize optimized design. (authors)

  5. Simulation of microcirculatory hemodynamics: estimation of boundary condition using particle swarm optimization.

    Pan, Qing; Wang, Ruofan; Reglin, Bettina; Fang, Luping; Pries, Axel R; Ning, Gangmin

    2014-01-01

    Estimation of the boundary condition is a critical problem in simulating hemodynamics in microvascular networks. This paper proposes a boundary estimation strategy based on a particle swarm optimization (PSO) algorithm, which aims to minimize the number of vessels whose flow direction is inverted in comparison to the experimental observation. The algorithm takes boundary values as the particle swarm and updates the positions of the particles iteratively to approach the optimization target. The method was tested in a real rat mesenteric network. With random initial boundary values, the method achieved a minimum of 9 segments with an inverted flow direction in a network of 546 vessels. Compared with the reported literature, the current work has the advantage of a better fit with experimental observations and is more suitable for the boundary estimation problem in pulsatile hemodynamic models due to its experiment-based selection of the optimization target.
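    A minimal PSO sketch of the same idea, with the microvascular network replaced by a hypothetical toy model in which each boundary-connected vessel's flow sign follows the pressure difference of a pair of boundary nodes:

    ```python
    import random

    random.seed(3)

    # Hypothetical observed flow directions (+1/-1) on 6 vessels, each driven
    # by the pressure difference of a pair of the 4 boundary nodes.
    pairs = [(0, 1), (0, 2), (1, 3), (2, 3), (1, 2), (0, 3)]
    observed = [1, 1, -1, 1, -1, 1]

    def inverted(p):
        """Count vessels whose simulated flow sign contradicts observation."""
        return sum(1 for (a, b), s in zip(pairs, observed)
                   if (p[a] - p[b]) * s <= 0)

    def pso(n_particles=30, iters=100, dim=4, w=0.7, c1=1.5, c2=1.5):
        pos = [[random.uniform(0, 100) for _ in range(dim)] for _ in range(n_particles)]
        vel = [[0.0] * dim for _ in range(n_particles)]
        pbest = [p[:] for p in pos]
        pbest_f = [inverted(p) for p in pos]
        g = min(range(n_particles), key=lambda i: pbest_f[i])
        gbest, gbest_f = pbest[g][:], pbest_f[g]
        for _ in range(iters):
            for i in range(n_particles):
                for d in range(dim):
                    vel[i][d] = (w * vel[i][d]
                                 + c1 * random.random() * (pbest[i][d] - pos[i][d])
                                 + c2 * random.random() * (gbest[d] - pos[i][d]))
                    pos[i][d] += vel[i][d]
                f = inverted(pos[i])
                if f < pbest_f[i]:          # update personal best
                    pbest[i], pbest_f[i] = pos[i][:], f
                    if f < gbest_f:         # update global best
                        gbest, gbest_f = pos[i][:], f
        return gbest, gbest_f

    boundary, n_inv = pso()   # boundary pressures and residual inversion count
    ```

    The real application replaces `inverted` with a full hemodynamic network solve per candidate boundary set; the swarm bookkeeping is unchanged.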

  6. A study on optimization of hybrid drive train using Advanced Vehicle Simulator (ADVISOR)

    Same, Adam; Stipe, Alex; Grossman, David; Park, Jae Wan [Department of Mechanical and Aeronautical Engineering, University of California, Davis, One Shields Ave, Davis, CA 95616 (United States)

    2010-10-01

    This study investigates the advantages and disadvantages of three hybrid drive train configurations: series, parallel, and "through-the-ground" parallel. Power flow simulations are conducted with the MATLAB/Simulink-based software ADVISOR. These simulations are then applied in an application for the UC Davis SAE Formula Hybrid vehicle. ADVISOR performs simulation calculations for vehicle position using a combined backward/forward method. These simulations are used to study how efficiency and agility are affected by the motor, fuel converter, and hybrid configuration. Three different vehicle models are developed to optimize the drive train of a vehicle for three stages of the SAE Formula Hybrid competition: autocross, endurance, and acceleration. Input cycles are created based on rough estimates of track geometry. The output from these ADVISOR simulations is a series of plots of velocity profile and energy storage State of Charge that provide a good estimate of how the Formula Hybrid vehicle will perform on the given course. The most noticeable discrepancy between the input cycle and the actual velocity profile of the vehicle occurs during deceleration. A weighted ranking system is developed to organize the simulation results and to determine the best drive train configuration for the Formula Hybrid vehicle. Results show that the "through-the-ground" parallel configuration with front-mounted motors achieves an optimal balance of efficiency, simplicity, and cost. ADVISOR is proven to be a useful tool for vehicle power train design for the SAE Formula Hybrid competition. This vehicle model based on ADVISOR simulation is applicable to various studies concerning performance and efficiency of hybrid drive trains. (author)

  7. Numerical simulation and structural optimization of the inclined oil/water separator.

    Liqiong Chen

    Full Text Available Improving the separation efficiency of the inclined oil/water separator, a new type of gravity separation equipment, is of great importance. In order to obtain a comprehensive understanding of the internal flow field of the separation process of oil and water within this separator, a numerical simulation based on Euler multiphase flow analysis and the realizable k-ε two equation turbulence model was executed using Fluent software. The optimal value ranges of the separator's various structural parameters used in the numerical simulation were selected through orthogonal array experiments. A field experiment on the separator was conducted with optimized structural parameters in order to validate the reliability of the numerical simulation results. The research results indicated that the horizontal position of the dispenser, the hole number, and the diameter had significant effects on the oil/water separation efficiency, and that the longitudinal position of the dispenser and the position of the weir plate had insignificant effects on the oil/water separation efficiency. The optimal structural parameters obtained through the orthogonal array experiments resulted in an oil/water separation efficiency of up to 95%, which was 4.996% greater than that realized by the original structural parameters.

  8. Assembly Line Productivity Assessment by Comparing Optimization-Simulation Algorithms of Trajectory Planning for Industrial Robots

    Francisco Rubio

    2015-01-01

    Full Text Available In this paper an analysis of productivity is carried out based on the resolution of the trajectory planning problem for industrial robots. The analysis entails economic considerations, thus overcoming some limitations of the existing literature. Two methodologies based on optimization-simulation procedures are compared to calculate the time needed to perform an industrial robot task. The simulation methodology relies on the use of the robotics and automation software GRASP. The optimization methodology developed in this work is based on the kinematics and dynamics of industrial robots. It allows us to pose a multiobjective optimization problem to assess the trade-offs between the economic variables by means of Pareto fronts. The comparison is carried out for different examples and from a multidisciplinary point of view, in order to determine the impact of using each method. Results have shown the opportunity costs of not using the methodology with optimized time trajectories. Furthermore, it allows companies to stay competitive thanks to quick adaptation to rapidly changing markets.

  9. System design and improvement of an emergency department using Simulation-Based Multi-Objective Optimization

    Uriarte, A Goienetxea; Zúñiga, E Ruiz; Moris, M Urenda; Ng, A H C

    2015-01-01

    Discrete Event Simulation (DES) is nowadays widely used to support decision makers in system analysis and improvement. However, the use of simulation for improving stochastic logistic processes is not common among healthcare providers. The process of improving healthcare systems involves the need to deal with trade-off optimal solutions that take a large number of variables and objectives into consideration. Complementing DES with Multi-Objective Optimization (SMO) creates a superior base for finding these solutions and, in consequence, facilitates the decision-making process. This paper presents how SMO has been applied for system improvement analysis in a Swedish Emergency Department (ED). A significant number of input variables, constraints and objectives were considered when defining the optimization problem. As a result of the project, the decision makers were provided with a range of optimal solutions which considerably reduce the length of stay and waiting times for ED patients. SMO has proved to be an appropriate technique to support healthcare system design and improvement processes. A key factor for the success of this project has been the involvement and engagement of the stakeholders during the whole process. (paper)

  10. Simulation-Optimization Framework for Synthesis and Design of Natural Gas Downstream Utilization Networks

    Saad A. Al-Sobhi

    2018-02-01

    Full Text Available Many potential diversification and conversion options are available for utilization of natural gas resources, and several design configurations and technology choices exist for conversion of natural gas to value-added products. Therefore, a detailed mathematical model is desirable for selection of the optimal configuration and operating mode among the various options available. In this study, we present a simulation-optimization framework for the optimal selection of economic and environmentally sustainable pathways for natural gas downstream utilization networks by optimizing process design and operational decisions. The main processes (e.g., LNG, GTL, and methanol production), along with different design alternatives in terms of flow-sheeting for each main processing unit (namely the syngas preparation, liquefaction, N2 rejection, hydrogen, FT synthesis, methanol synthesis, FT upgrade, and methanol upgrade units), are used for superstructure development. These processes are simulated using ASPEN Plus V7.3 to determine the yields of different processing units under various operating modes. The model has been applied to maximize the total profit of the natural gas utilization system with penalties for environmental impact, represented by the CO2eq emission obtained using ASPEN Plus for each flowsheet configuration and operating mode. The performance of the proposed modeling framework is demonstrated using a case study.

  11. Assessing the applicability of WRF optimal parameters under the different precipitation simulations in the Greater Beijing Area

    Di, Zhenhua; Duan, Qingyun; Wang, Chen; Ye, Aizhong; Miao, Chiyuan; Gong, Wei

    2018-03-01

    Forecasting skill of complex weather and climate models has been improved by tuning the sensitive parameters that exert the greatest impact on simulated results, based on increasingly effective optimization methods. However, whether the optimal parameter values still work when the model simulation conditions vary is a scientific problem deserving study. In this study, a highly effective optimization method, adaptive surrogate model-based optimization (ASMO), was first used to tune nine sensitive parameters from four physical parameterization schemes of the Weather Research and Forecasting (WRF) model to obtain better summer precipitation forecasting over the Greater Beijing Area in China. Then, to assess the applicability of the optimal parameter values, simulation results from the WRF model with default and optimal parameter values were compared across precipitation events, boundary conditions, spatial scales, and physical processes in the Greater Beijing Area. Summer precipitation events from 6 years were used to calibrate and evaluate the optimal parameter values of the WRF model. Three boundary datasets and two spatial resolutions were adopted to evaluate the superiority of the calibrated optimal parameters over the default parameters under WRF simulations with different boundary conditions and spatial resolutions, respectively. Physical interpretations of the optimal parameters, indicating how they improve precipitation simulation results, were also examined. All the results showed that the optimal parameters obtained by ASMO are superior to the default parameters for WRF simulations predicting summer precipitation in the Greater Beijing Area, because the optimal parameters are not constrained by specific precipitation events, boundary conditions, and spatial resolutions. 
The optimal values of the nine parameters were determined from 127 parameter samples using the ASMO method, which showed that the ASMO method is highly efficient for optimizing WRF

  12. Optimal design of wind barriers using 3D computational fluid dynamics simulations

    Fang, H.; Wu, X.; Yang, X.

    2017-12-01

    Desertification is a significant global environmental and ecological problem that requires human-regulated control and management. Wind barriers are commonly used to reduce wind velocity or trap drifting sand in arid or semi-arid areas; optimal design of wind barriers is therefore critical in Aeolian engineering. In the current study, we perform 3D computational fluid dynamics (CFD) simulations of flow passing through wind barriers with different structural parameters. To validate the simulation results, we first inter-compare the simulated flow fields with those from both wind-tunnel experiments and field measurements. Quantitative analyses of the shelter effect are then conducted based on a series of simulations with different structural parameters (such as wind barrier porosity, row number, inter-row spacing, and belt scheme). The results show that wind barriers with a porosity of 0.35 provide the longest shelter distance (i.e., the distance over which the wind velocity reduction exceeds 50%) and are thus recommended in engineering designs. To determine the optimal row number and belt scheme, we introduce a cost function that takes both the wind-velocity reduction effect and economic expense into account. The calculated cost function shows that a 3-row-belt scheme with an inter-row spacing of 6h (h being the height of the wind barriers) and an inter-belt spacing of 12h is the most effective.
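    The record does not give the cost function's form; a toy version with hypothetical coefficients shows how such a trade-off can be scanned over candidate schemes once the CFD runs have supplied the shelter data:

    ```python
    def scheme_cost(rows, spacing_h, shelter_per_row=4.0, diminishing=0.7,
                    row_cost=1.0, alpha=0.5):
        """Toy trade-off between sheltered distance (in barrier heights h) and
        construction expense; every coefficient here is hypothetical."""
        eff = min(1.0, spacing_h / 6.0)      # rows packed too tightly shelter less
        shelter = sum(eff * shelter_per_row * diminishing ** k for k in range(rows))
        return alpha * rows * row_cost - (1.0 - alpha) * shelter

    # Scan candidate (row number, inter-row spacing in h) schemes for the minimum.
    best = min(((r, s) for r in range(1, 6) for s in (4, 6, 8)),
               key=lambda rs: scheme_cost(*rs))
    ```

    In the study the shelter term comes from the simulated velocity-reduction fields rather than a closed formula, but the scan over discrete schemes proceeds the same way.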

  13. Optimizing Availability of a Framework in Series Configuration Utilizing Markov Model and Monte Carlo Simulation Techniques

    Mansoor Ahmed Siddiqui

    2017-06-01

    Full Text Available This research work is aimed at optimizing the availability of a framework comprising two units linked in series configuration, utilizing Markov model and Monte Carlo (MC) simulation techniques. In this article, effort has been made to develop a maintenance model that incorporates three distinct states for each unit, while taking into account their different levels of deterioration. Calculations are carried out using the proposed model for two distinct cases of corrective repair, namely perfect and imperfect repair, with as well as without opportunistic maintenance. Initially, results are obtained using an analytical technique, i.e., the Markov model. Validation of the results is later carried out with the help of MC simulation. In addition, the MC simulation-based codes also work well for frameworks that follow non-exponential failure and repair rates, and thus overcome the limitations of the Markov model.
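    The analytical half of such a study reduces to finding the stationary distribution of a small Markov chain. A sketch with a hypothetical per-unit daily transition matrix (states: good, degraded, failed) for two identical units in series:

    ```python
    # Hypothetical per-unit daily transition probabilities over three states:
    # 0 = good, 1 = degraded, 2 = failed.
    P = [[0.95, 0.04, 0.01],
         [0.10, 0.85, 0.05],   # degraded: repaired, unchanged, or fails
         [0.50, 0.00, 0.50]]   # failed: corrective repair back to good

    def steady_state(P, iters=10_000):
        """Stationary distribution by repeated application of the transition matrix."""
        pi = [1.0, 0.0, 0.0]
        for _ in range(iters):
            pi = [sum(pi[i] * P[i][j] for i in range(3)) for j in range(3)]
        return pi

    pi = steady_state(P)
    unit_avail = 1.0 - pi[2]          # a unit is available unless failed
    system_avail = unit_avail ** 2    # series system of two identical units
    ```

    Imperfect repair or opportunistic maintenance would change individual entries of P; the MC validation mentioned in the record replays the same chain by sampling state transitions, which also accommodates non-exponential holding times.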

  14. Optimization of metabolite detection by quantum mechanics simulations in magnetic resonance spectroscopy.

    Gambarota, Giulio

    2017-07-15

    Magnetic resonance spectroscopy (MRS) is a well established modality for investigating tissue metabolism in vivo. In recent years, many efforts by the scientific community have been directed towards the improvement of metabolite detection and quantitation. Quantum mechanics simulations allow for investigations of the MR signal behaviour of metabolites; thus, they provide an essential tool in the optimization of metabolite detection. In this review, we will examine quantum mechanics simulations based on the density matrix formalism. The density matrix was introduced by von Neumann in 1927 to take into account statistical effects within the theory of quantum mechanics. We will discuss the main steps of the density matrix simulation of an arbitrary spin system and show some examples for the strongly coupled two spin system.
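    For the weakly coupled (rather than strongly coupled) two-spin case the Hamiltonian is diagonal in the product basis, so the density matrix evolves element-wise and a free-induction decay can be simulated in plain Python; the offsets and J below are hypothetical:

    ```python
    import cmath
    import math

    # Product basis |aa>, |ab>, |ba>, |bb>; hypothetical offsets (100 and 250 Hz)
    # and scalar coupling (7 Hz), all converted to rad/s where needed.
    w1, w2, J_hz = 2 * math.pi * 100.0, 2 * math.pi * 250.0, 7.0
    ms = [(0.5, 0.5), (0.5, -0.5), (-0.5, 0.5), (-0.5, -0.5)]
    E = [w1 * m1 + w2 * m2 + 2 * math.pi * J_hz * m1 * m2 for m1, m2 in ms]

    # Total Ix operator; rho(0) is proportional to it after a 90-degree pulse.
    Ix = [[0.0] * 4 for _ in range(4)]
    for i, j in [(0, 1), (0, 2), (1, 3), (2, 3)]:
        Ix[i][j] = Ix[j][i] = 0.5
    rho0 = [row[:] for row in Ix]

    def fid(t):
        """Tr(rho(t) Ix): each coherence just acquires a phase exp(-i(Ei-Ej)t)
        because the Hamiltonian is diagonal in this basis."""
        s = 0.0
        for i in range(4):
            for j in range(4):
                rho_ij = rho0[i][j] * cmath.exp(-1j * (E[i] - E[j]) * t)
                s += (rho_ij * Ix[j][i]).real
            # off-diagonal terms of rho0 oscillate; diagonal terms contribute 0
        return s

    signal = [fid(k * 1e-4) for k in range(512)]  # doublets at w1, w2 split by J
    ```

    A strongly coupled spin system adds off-diagonal Hamiltonian elements, so the propagator no longer factorizes element-wise; that is where the full density-matrix machinery discussed in the review becomes necessary.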

  15. Design Optimization of a Thermoelectric Cooling Module Using Finite Element Simulations

    Abid, Muhammad; Somdalen, Ragnar; Rodrigo, Marina Sancho

    2018-05-01

    The thermoelectric industry is concerned about the size reduction, cooling performance and, ultimately, the production cost of thermoelectric modules. Optimization of the size and performance of a commercially available thermoelectric cooling module is considered using finite element simulations. Numerical simulations are performed on eight different three-dimensional geometries of a single thermocouple, and the results are further extended for a whole module as well. The maximum temperature rise at the hot and cold sides of a thermocouple is determined by altering its height and cross-sectional area. The influence of the soldering layer is analyzed numerically using temperature dependent and temperature independent thermoelectric properties of the solder material and the semiconductor pellets. Experiments are conducted to test the cooling performance of the thermoelectric module and the results are compared with the results obtained through simulations. Finally, cooling rate and maximum coefficient of performance (COPmax) are computed using convective and non-convective boundary conditions.

  16. The Parameters Optimization of MCR-WPT System Based on the Improved Genetic Simulated Annealing Algorithm

    Sheng Lu

    2015-01-01

    Full Text Available To solve the problem of parameter selection during the design of a magnetically coupled resonant wireless power transmission (MCR-WPT) system, this paper proposes an improved genetic simulated annealing algorithm. Firstly, the equivalent circuit of the system is analyzed and a nonlinear programming mathematical model is built. Secondly, in place of the penalty function method in the genetic algorithm, a selection strategy based on the distance between individuals is adopted to select individuals. In this way, excess empirical parameters are reduced. Meanwhile, the convergence rate and the searching ability are improved by calculating the crossover probability and mutation probability according to the variance of the population's fitness. At last, a simulated annealing operator is added to increase the local search ability of the method. The simulation shows that the improved method can break the limit of locally optimal solutions and reach the global optimum faster. The optimized system can meet practical requirements.
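    The variance-driven adaptation can be sketched in a few lines; the mapping from fitness variance to mutation probability below is a hypothetical choice for illustration, not the paper's formula:

    ```python
    import random
    import statistics

    random.seed(5)

    def adaptive_mutation(fitnesses, pm_min=0.01, pm_max=0.3):
        """Mutation probability rises as the population's fitness variance falls,
        countering premature convergence (hypothetical mapping, not the paper's)."""
        var = statistics.pvariance(fitnesses)
        scale = 1.0 / (1.0 + var)            # var -> 0 gives scale -> 1
        return pm_min + (pm_max - pm_min) * scale

    diverse = [random.uniform(0.0, 10.0) for _ in range(20)]
    converged = [5.0 + random.uniform(-0.01, 0.01) for _ in range(20)]
    assert adaptive_mutation(converged) > adaptive_mutation(diverse)
    ```

    A nearly converged population thus mutates aggressively while a diverse one is left to exploit; the crossover probability can be adapted from the same variance in the same spirit.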

  17. Clinical trial optimization: Monte Carlo simulation Markov model for planning clinical trials recruitment.

    Abbas, Ismail; Rovira, Joan; Casanovas, Josep

    2007-05-01

    The patient recruitment process of clinical trials is an essential element which needs to be designed properly. In this paper we describe different simulation models under continuous and discrete time assumptions for the design of recruitment in clinical trials. The results of hypothetical examples of clinical trial recruitments are presented. The recruitment time is calculated and the number of recruited patients is quantified for a given time and probability of recruitment. The expected delay and the effective recruitment durations are estimated using both continuous and discrete time modeling. The proposed type of Monte Carlo simulation Markov models will enable optimization of the recruitment process and the estimation and the calibration of its parameters to aid the proposed clinical trials. A continuous time simulation may minimize the duration of the recruitment and, consequently, the total duration of the trial.
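    A continuous-time sketch of the recruitment model: if each centre recruits as an independent Poisson process, the superposed arrivals are exponential with the summed rate; all rates and the target below are hypothetical:

    ```python
    import random
    import statistics

    random.seed(11)

    def recruitment_time(target=200, rates=(0.8, 1.2, 0.5), runs=2000):
        """Monte Carlo estimate of the days needed to recruit `target` patients
        when each centre recruits as an independent Poisson process."""
        total_rate = sum(rates)          # superposition of the centres' processes
        times = []
        for _ in range(runs):
            t = 0.0
            for _ in range(target):
                t += random.expovariate(total_rate)   # inter-arrival times
            times.append(t)
        return statistics.mean(times), statistics.quantiles(times, n=20)[18]

    mean_t, p95_t = recruitment_time()   # expected duration and 95th percentile
    ```

    With these hypothetical rates the mean duration is target / total_rate = 80 days; the 95th percentile is what a planner would quote as a conservative recruitment horizon, and per-centre delays or dropouts can be layered onto the same loop.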

  18. Virtual reality simulation for the optimization of endovascular procedures: current perspectives.

    Rudarakanchana, Nung; Van Herzeele, Isabelle; Desender, Liesbeth; Cheshire, Nicholas J W

    2015-01-01

    Endovascular technologies are rapidly evolving, often requiring coordination and cooperation between clinicians and technicians from diverse specialties. These multidisciplinary interactions lead to challenges that are reflected in the high rate of errors occurring during endovascular procedures. Endovascular virtual reality (VR) simulation has evolved from simple benchtop devices to full physics simulators with advanced haptics and dynamic imaging and physiological controls. The latest developments in this field include the use of fully immersive simulated hybrid angiosuites to train whole endovascular teams in crisis resource management and novel technologies that enable practitioners to build VR simulations based on patient-specific anatomy. As our understanding of the skills, both technical and nontechnical, required for optimal endovascular performance improves, the requisite tools for objective assessment of these skills are being developed and will further enable the use of VR simulation in the training and assessment of endovascular interventionalists and their entire teams. Simulation training that allows deliberate practice without danger to patients may be key to bridging the gap between new endovascular technology and improved patient outcomes.

  19. Optimization of partial multicanonical molecular dynamics simulations applied to an alanine dipeptide in explicit water solvent.

    Okumura, Hisashi

    2011-01-07

    The partial multicanonical algorithm for molecular dynamics and Monte Carlo simulations samples a wide range of an important part of the potential energy. Although it is a strong technique for structure prediction of biomolecules, the choice of the partial potential energy has not been optimized. In order to find the best choice, partial multicanonical molecular dynamics simulations of an alanine dipeptide in explicit water solvent were performed with 15 trial choices for the partial potential energy. The best choice was found to be the sum of the electrostatic, Lennard-Jones, and torsion-angle potential energies between solute atoms. In this case, the partial multicanonical simulation sampled all of the local-minimum free-energy states P(II), C(5), α(R), α(P), α(L), and C, and visited these states most frequently. Furthermore, the backbone dihedral angles ϕ and ψ rotated very well. It is also found that the most important term among these three is the electrostatic potential energy and that the Lennard-Jones term also helps the simulation to overcome the steric restrictions. On the other hand, the multicanonical simulation sampled all six states, but visited them fewer times. A conventional canonical simulation sampled only four of the six states: the P(II), C(5), α(R), and α(P) states.

  20. Discrete-State Simulated Annealing For Traveling-Wave Tube Slow-Wave Circuit Optimization

    Wilson, Jeffrey D.; Bulson, Brian A.; Kory, Carol L.; Williams, W. Dan (Technical Monitor)

    2001-01-01

    Algorithms based on the global optimization technique of simulated annealing (SA) have proven useful in designing traveling-wave tube (TWT) slow-wave circuits for high RF power efficiency. The characteristic of SA that enables it to determine a globally optimized solution is its ability to accept non-improving moves in a controlled manner. In the initial stages of the optimization, the algorithm moves freely through configuration space, accepting most of the proposed designs. This freedom of movement allows non-intuitive designs to be explored rather than restricting the optimization to local improvement upon the initial configuration. As the optimization proceeds, the rate of acceptance of non-improving moves is gradually reduced until the algorithm converges to the optimized solution. The rate at which the freedom of movement is decreased is known as the annealing or cooling schedule of the SA algorithm. The main disadvantage of SA is that there is not a rigorous theoretical foundation for determining the parameters of the cooling schedule. The choice of these parameters is highly problem dependent and the designer needs to experiment in order to determine values that will provide a good optimization in a reasonable amount of computational time. This experimentation can absorb a large amount of time especially when the algorithm is being applied to a new type of design. In order to eliminate this disadvantage, a variation of SA known as discrete-state simulated annealing (DSSA), was recently developed. DSSA provides the theoretical foundation for a generic cooling schedule which is problem independent, Results of similar quality to SA can be obtained, but without the extra computational time required to tune the cooling parameters. Two algorithm variations based on DSSA were developed and programmed into a Microsoft Excel spreadsheet graphical user interface (GUI) to the two-dimensional nonlinear multisignal helix traveling-wave amplifier analysis program TWA3
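The SA mechanics described above (always accept improving moves, accept non-improving moves with a temperature-controlled probability, then cool) can be sketched generically. The geometric cooling schedule and the 1-D multimodal test function below are illustrative only; they are neither the DSSA schedule nor the TWT circuit objective:

```python
import math
import random

def simulated_annealing(cost, x0, neighbor, t0=20.0, alpha=0.999, steps=8000, seed=0):
    """Generic SA: improving moves are always accepted; a non-improving
    move of size delta is accepted with probability exp(-delta / T).
    T follows a geometric cooling schedule T_k = t0 * alpha**k, so the
    freedom of movement shrinks as the optimization proceeds."""
    rng = random.Random(seed)
    x, fx = x0, cost(x0)
    best, fbest = x, fx
    t = t0
    for _ in range(steps):
        y = neighbor(x, rng)
        fy = cost(y)
        if fy <= fx or rng.random() < math.exp(-(fy - fx) / t):
            x, fx = y, fy
            if fx < fbest:
                best, fbest = x, fx
        t *= alpha  # cooling: acceptance of non-improving moves decays
    return best, fbest

# Multimodal 1-D test function with global minimum 0 at x = 0.
f = lambda v: v * v + 10.0 * (1.0 - math.cos(2.0 * math.pi * v))
x, fx = simulated_annealing(f, 4.0, lambda v, r: v + r.gauss(0.0, 0.5))
# Convex sanity check: plain x**2 from x0 = 3.
x2, fx2 = simulated_annealing(lambda v: v * v, 3.0, lambda v, r: v + r.gauss(0.0, 0.5))
```

The two tunables `t0` and `alpha` are exactly the problem-dependent cooling parameters that DSSA is designed to make unnecessary.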

  1. Leukocyte Motility Models Assessed through Simulation and Multi-objective Optimization-Based Model Selection.

    Mark N Read

    2016-09-01

    Full Text Available The advent of two-photon microscopy now reveals unprecedented, detailed spatio-temporal data on cellular motility and interactions in vivo. Understanding cellular motility patterns is key to gaining insight into the development and possible manipulation of the immune response. Computational simulation has become an established technique for understanding immune processes and evaluating hypotheses in the context of experimental data, and there is clear scope to integrate microscopy-informed motility dynamics. However, determining which motility model best reflects in vivo motility is non-trivial: 3D motility is an intricate process requiring several metrics to characterize. This complicates model selection and parameterization, which must be performed against several metrics simultaneously. Here we evaluate Brownian motion, Lévy walk and several correlated random walks (CRWs) against the motility dynamics of neutrophils and lymph node T cells under inflammatory conditions by simultaneously considering cellular translational and turn speeds, and meandering indices. Heterogeneous cells exhibiting a continuum of inherent translational speeds and directionalities comprise both datasets, a feature that significantly improves capture of in vivo motility when simulated as a CRW. Furthermore, translational and turn speeds are inversely correlated, and the corresponding CRW simulation again improves capture of our in vivo data, albeit to a lesser extent. In contrast, Brownian motion poorly reflects our data. Lévy walk is competitive in capturing some aspects of neutrophil motility, but captures only the directional persistence of T cells, highlighting the importance of evaluating models against several motility metrics simultaneously. This we achieve through novel application of multi-objective optimization, wherein each model is independently implemented and then parameterized to identify optimal trade-offs in performance against each metric. The resultant Pareto
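The model families compared above can be illustrated with a toy 2-D correlated random walk: each step turns by a gaussian angle relative to the previous heading, and a large turn spread approaches uncorrelated (Brownian-like) motion. The meandering index used here (net displacement over path length) is a simplified stand-in for the paper's metrics:

```python
import math
import random

def walk(n_steps, turn_sd, speed=1.0, seed=0):
    """Correlated random walk: each step turns by a gaussian angle with
    standard deviation turn_sd relative to the previous heading.
    Returns the meandering index: net displacement / path length."""
    rng = random.Random(seed)
    x = y = 0.0
    heading = rng.uniform(-math.pi, math.pi)
    path_length = 0.0
    for _ in range(n_steps):
        heading += rng.gauss(0.0, turn_sd)
        x += speed * math.cos(heading)
        y += speed * math.sin(heading)
        path_length += speed
    return math.hypot(x, y) / path_length  # in (0, 1]

mi_persistent = walk(2000, turn_sd=0.2)    # strongly directional CRW
mi_uncorrelated = walk(2000, turn_sd=3.0)  # nearly isotropic turning
```

A persistent walker covers far more ground for the same path length, which is why the meandering index discriminates between the model families.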

  2. Simulation studies for optimizing the trigger generation criteria for the TACTIC telescope

    Koul, M.K.; Tickoo, A.K.; Dhar, V.K.; Venugopal, K.; Chanchalani, K.; Rannot, R.C.; Yadav, K.K.; Chandra, P.; Kothari, M.; Koul, R.

    2011-01-01

    In this paper, we present the results of Monte Carlo simulations of γ-ray and cosmic-ray proton induced extensive air showers as detected by the TACTIC atmospheric Cherenkov imaging telescope for optimizing its trigger field of view and topological trigger generation scheme. The simulation study has been carried out at several zenith angles. The topological trigger generation uses a coincidence of two or three nearest neighbor pixels for producing an event trigger. The results of this study suggest that a trigger field of 11 × 11 pixels (∼3.4° × 3.4°) is quite optimum for achieving maximum effective collection area for γ-rays from a point source. With regard to optimization of topological trigger generation, it is found that both two and three nearest neighbor pixels yield nearly similar results up to a zenith angle of 25° with a threshold energy of ∼1.5 TeV for γ-rays. Beyond a zenith angle of 25°, the results suggest that a two-pixel nearest neighbor trigger should be preferred. Comparison of the simulated integral rates has also been made with corresponding measured values for validating the predictions of the Monte Carlo simulations, especially the effective collection area, so that energy spectra of sources (or flux upper limits in case of no detection) can be determined reliably. Reasonably good matching of the measured trigger rates (on the basis of ∼207 h of data collected with the telescope in NN-2 and NN-3 trigger configurations) with that obtained from simulations reassures that the procedure followed by us in estimating the threshold energy and detection rates is quite reliable. - Highlights: → Optimization of the trigger field of view and topological trigger generation for the TACTIC telescope. → Monte Carlo simulations of extensive air showers carried out using CORSIKA code. → Trigger generation with two or three nearest neighbor pixels yield similar results up to a zenith angle of 25°. → Reasonably good matching of measured trigger
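One plausible reading of the NN-2/NN-3 topological trigger logic is that an event triggers when k fired pixels form a 4-connected group; the exact TACTIC adjacency rules may differ, so this is a simplified sketch:

```python
def has_nn_trigger(fired, k):
    """True if any 4-connected cluster of fired pixels (given as (x, y)
    coordinates) has size >= k. A simplified nearest-neighbour trigger."""
    fired = set(fired)
    seen = set()
    for p in fired:
        if p in seen:
            continue
        stack, cluster = [p], set()  # flood-fill one cluster
        while stack:
            q = stack.pop()
            if q in cluster:
                continue
            cluster.add(q)
            x, y = q
            for nb in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
                if nb in fired and nb not in cluster:
                    stack.append(nb)
        seen |= cluster
        if len(cluster) >= k:
            return True
    return False

# Isolated pixels never trigger; an adjacent pair fires NN-2 but not NN-3.
isolated = [(0, 0), (5, 5)]
pair = [(2, 3), (2, 4)]
```

Requiring connectivity rather than mere multiplicity is what rejects accidental coincidences from night-sky background light.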

  3. Optimization of cladding parameters for resisting corrosion on low carbon steels using simulated annealing algorithm

    Balan, A. V.; Shivasankaran, N.; Magibalan, S.

    2018-04-01

    Low carbon steels used in chemical industries are frequently affected by corrosion. Cladding is a surfacing process in which a thick layer of filler metal is deposited on a base material to achieve corrosion resistance. Flux cored arc welding (FCAW) is preferred for the cladding process due to its augmented efficiency and higher deposition rate. In this cladding process, the effect of corrosion can be minimized by controlling the output responses, namely minimizing dilution and penetration while maximizing bead width, reinforcement and ferrite number. This paper deals with the multi-objective optimization of flux cored arc welding responses by controlling the process parameters, such as wire feed rate, welding speed, nozzle-to-plate distance and welding gun angle, for super duplex stainless steel material using the simulated annealing technique. A regression equation has been developed and validated using the ANOVA technique. The multi-objective optimization of weld bead parameters was carried out using simulated annealing to obtain optimum bead geometry for reducing corrosion. The potentiodynamic polarization test reveals a balanced formation of fine ferrite and austenite particles and a desensitized microstructure in the optimized clad bead.

  4. Development of GEM detector for plasma diagnostics application: simulations addressing optimization of its performance

    Chernyshova, M.; Malinowski, K.; Kowalska-Strzęciwilk, E.; Czarski, T.; Linczuk, P.; Wojeński, A.; Krawczyk, R. D.

    2017-12-01

    The advanced soft X-ray (SXR) diagnostics setup devoted to studies of SXR plasma emissivity is currently highly relevant and important for ITER/DEMO applications, especially in the energy range of tungsten emission lines, as plasma contamination by W and its transport in the plasma must be understood and monitored for W plasma-facing materials. The SXR radiation detection system under development by our group, based on a Gas Electron Multiplier (GEM) with a spatially and energy-resolved photon detecting chamber, may become such a diagnostic setup once many physical, technical and technological aspects are considered and solved. This work presents the results of simulations aimed at optimizing the design of the detector's internal chamber and its performance. The study of the effect of electrode alignment allowed choosing the gap distances which maximize electron transmission and choosing the optimal magnitudes of the applied electric fields. Finally, the optimal readout structure design was identified, suitable to collect the total formed charge effectively, based on the range of the simulated electron cloud at the readout plane, which was on the order of ~2 mm.

  5. CFD Simulation and Optimization of Very Low Head Axial Flow Turbine Runner

    Yohannis Mitiku Tobo

    2015-10-01

    Full Text Available The main objective of this work is Computational Fluid Dynamics (CFD) modelling, simulation and optimization of a very low head axial flow turbine runner to be used to drive the centrifugal pump of a turbine-driven pump. The ultimate goal of the optimization is to produce a power of 1 kW at a head of less than 1 m from a flowing river to drive the centrifugal pump directly through a mechanical coupling (speed multiplier gear). Flow rate, blade number, turbine rotational speed and inlet angle are the parameters used in CFD modelling, simulation and design optimization of the turbine runner. The computed results show that the power developed by a turbine runner increases with increasing flow rate. Pressure inside the turbine runner increases with flow rate, but runner efficiency increases only up to a certain flow rate and is almost constant thereafter. Efficiency and power developed by a runner drop quickly if turbine speed increases, due to higher pressure losses and conversion of pressure energy to kinetic energy inside the runner. Increasing the blade number increases the power developed, but efficiency does not always increase. Efficiency increases up to a certain blade number and then drops because the change in direction of the relative flow vector at the runner exit decreases the net rotational momentum and increases the axial flow velocity.

  6. Optimization of linear and branched alkane interactions with water to simulate hydrophobic hydration

    Ashbaugh, Henry S.; Liu, Lixin; Surampudi, Lalitanand N.

    2011-08-01

    Previous studies of simple gas hydration have demonstrated that the accuracy of molecular simulations at capturing the thermodynamic signatures of hydrophobic hydration is linked both to the fidelity of the water model at replicating the experimental liquid density at ambient pressure and an accounting of polarization interactions between the solute and water. We extend those studies to examine alkane hydration using the transferable potentials for phase equilibria united-atom model for linear and branched alkanes, developed to reproduce alkane phase behavior, and the TIP4P/2005 model for water, which provides one of the best descriptions of liquid water for the available fixed-point charge models. Alkane site/water oxygen Lennard-Jones cross interactions were optimized to reproduce the experimental alkane hydration free energies over a range of temperatures. The optimized model reproduces the hydration free energies of the fitted alkanes with a root mean square difference between simulation and experiment of 0.06 kcal/mol over a wide temperature range, compared to 0.44 kcal/mol for the parent model. The optimized model accurately reproduces the temperature dependence of hydrophobic hydration, as characterized by the hydration enthalpies, entropies, and heat capacities, as well as the pressure response, as characterized by partial molar volumes.

  7. On-line Optimization-Based Simulators for Fractured and Non-fractured Reservoirs

    Milind D. Deo

    2005-08-31

    Oil field development is a multi-million dollar business. Reservoir simulation is often used to guide the field management and development process. Reservoir characterization and geologic modeling tools have become increasingly sophisticated. As a result the geologic models produced are complex. Most reservoirs are fractured to a certain extent. The new geologic characterization methods are making it possible to map features such as faults and fractures field-wide. Significant progress has been made in being able to predict properties of the faults and of the fractured zones. Traditionally, finite difference methods have been employed in discretizing the domains created by geologic means. For complex geometries, finite-element methods of discretization may be more suitable. Since reservoir simulation is a mature science, some of the advances in numerical methods (linear, nonlinear solvers and parallel computing) have not been fully realized in the implementation of most of the simulators. The purpose of this project was to address some of these issues. • One of the goals of this project was to develop a series of finite-element simulators to handle problems of complex geometry, including systems containing faults and fractures. • The idea was to incorporate the most modern computing tools: use of modular object-oriented computer languages, the most sophisticated linear and nonlinear solvers, parallel computing methods and good visualization tools. • One of the tasks of the project was also to demonstrate the construction of fractures and faults in a reservoir using the available data and to assign properties to these features. • Once the reservoir model is in place, it is desirable to find the operating conditions which would provide the best reservoir performance. This can be accomplished by utilizing optimization tools and coupling them with reservoir simulation. Optimization-based reservoir simulation was one of the

  8. Simulation Analysis of China’s Energy and Industrial Structure Adjustment Potential to Achieve a Low-carbon Economy by 2020

    Nan Xiang

    2013-11-01

    Full Text Available To achieve a low-carbon economy, China has committed to reducing its carbon dioxide (CO2) emissions per unit of gross domestic product (GDP) by 40%–45% by 2020 from 2005 levels and increasing the share of non-fossil fuels in primary energy consumption to approximately 15%. It is necessary to investigate whether this plan is suitable and how this target may be reached. This paper verifies the feasibility of achieving the CO2 emission targets by energy and industrial structure adjustments, and proposes applicable measures for further sustainable development by 2020 through comprehensive simulation. The simulation model comprises three sub-models: an energy flow balance model, a CO2 emission model, and a socio-economic model. The model is constructed based on an input-output table and three balances (material, value, and energy flow), and it is written in LINGO, a linear dynamic programming language. The simulation results suggest that China’s carbon intensity reduction promise can be realized and even surpassed (up to 50%) and that economic development (an annual 10% GDP growth rate) can be achieved if the energy and industrial structure are adjusted properly by 2020. However, the total amount of CO2 emissions will reach a relatively high level of 13.68 billion tons, which calls for further sound approaches to realize a low-carbon economy, such as improvement of energy utilization efficiency, technology innovation, and non-fossil energy utilization.
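The tension between the intensity target and absolute emissions noted above is simple arithmetic: with 10% annual GDP growth over 2005–2020, even a 45% intensity cut leaves absolute emissions more than double the base-year level. A quick check:

```python
def emissions_multiplier(gdp_growth, years, intensity_cut):
    """Absolute-emissions multiplier relative to the base year when GDP
    grows at gdp_growth per year for `years` years while CO2 intensity
    (emissions per unit GDP) is cut by intensity_cut."""
    gdp_factor = (1.0 + gdp_growth) ** years
    return gdp_factor * (1.0 - intensity_cut)

# 10% annual GDP growth over 2005-2020 with a 40-45% intensity cut:
low = emissions_multiplier(0.10, 15, 0.45)   # about 2.3x the 2005 level
high = emissions_multiplier(0.10, 15, 0.40)  # about 2.5x the 2005 level
```

This back-of-envelope multiplier is consistent with the abstract's conclusion that emissions still reach a "relatively high level" despite the intensity promise being met.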

  9. Convexity Adjustments

    M. Gaspar, Raquel; Murgoci, Agatha

    2010-01-01

    A convexity adjustment (or convexity correction) in fixed income markets arises when one uses prices of standard (plain vanilla) products plus an adjustment to price nonstandard products. We explain the basic and appealing idea behind the use of convexity adjustments and focus on the situations...

  10. Simulation-based optimization framework for reuse of agricultural drainage water in irrigation.

    Allam, A; Tawfik, A; Yoshimura, C; Fleifle, A

    2016-05-01

    A simulation-based optimization framework for agricultural drainage water (ADW) reuse has been developed through the integration of a water quality model (QUAL2Kw) and a genetic algorithm. This framework was applied to the Gharbia drain in the Nile Delta, Egypt, in summer and winter 2012. First, the water quantity and quality of the drain was simulated using the QUAL2Kw model. Second, uncertainty analysis and sensitivity analysis based on Monte Carlo simulation were performed to assess QUAL2Kw's performance and to identify the most critical variables for determination of water quality, respectively. Finally, a genetic algorithm was applied to maximize the total reuse quantity from seven reuse locations with the condition not to violate the standards for using mixed water in irrigation. The water quality simulations showed that organic matter concentrations are critical management variables in the Gharbia drain. The uncertainty analysis showed the reliability of QUAL2Kw to simulate water quality and quantity along the drain. Furthermore, the sensitivity analysis showed that the 5-day biochemical oxygen demand, chemical oxygen demand, total dissolved solids, total nitrogen and total phosphorous are highly sensitive to point source flow and quality. Additionally, the optimization results revealed that the reuse quantities of ADW can reach 36.3% and 40.4% of the available ADW in the drain during summer and winter, respectively. These quantities meet 30.8% and 29.1% of the drainage basin requirements for fresh irrigation water in the respective seasons. Copyright © 2016 Elsevier Ltd. All rights reserved.
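The optimization step can be sketched with a toy genetic algorithm that maximizes total reuse subject to a blended-water quality limit. The site data, the single BOD-style mixing constraint, and the repair operator below are illustrative stand-ins for the QUAL2Kw-based formulation, not the paper's actual numbers:

```python
import random

AVAIL = [30, 25, 20, 25, 30, 20, 30]   # available ADW at 7 reuse sites
CONC = [20, 35, 15, 40, 25, 50, 30]    # BOD-like concentration per site (mg/L)
FRESH_FLOW, FRESH_CONC, LIMIT = 200.0, 5.0, 12.0  # blend limit for irrigation

def repair(q):
    """Scale withdrawals down so the fresh/ADW blend meets the limit:
    sum(q_i * (c_i - LIMIT)) <= FRESH_FLOW * (LIMIT - FRESH_CONC)."""
    load = sum(qi * (ci - LIMIT) for qi, ci in zip(q, CONC))
    budget = FRESH_FLOW * (LIMIT - FRESH_CONC)
    if load > budget:
        q = [qi * budget / load for qi in q]
    return q

def total(q):
    return sum(q)

def ga(pop_size=40, gens=300, seed=3):
    rng = random.Random(seed)
    pop = [repair([rng.uniform(0, a) for a in AVAIL]) for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=total, reverse=True)
        new = pop[:4]                       # elitism
        while len(new) < pop_size:
            a, b = rng.sample(pop[:20], 2)  # parents from the better half
            child = [ai if rng.random() < 0.5 else bi for ai, bi in zip(a, b)]
            i = rng.randrange(len(child))   # mutate one gene
            child[i] = min(AVAIL[i], max(0.0, child[i] + rng.gauss(0, 3)))
            new.append(repair(child))
        pop = new
    return max(pop, key=total)

best = ga()  # selection implicitly favours reuse from the cleaner sites
```

Because repair always restores feasibility, every individual respects the quality standard, and the GA's only job is to shift the withdrawals toward the sites that "cost" the least blending budget.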

  11. Optimal Acceleration-Velocity-Bounded Trajectory Planning in Dynamic Crowd Simulation

    Fu Yue-wen

    2014-01-01

    Full Text Available Creating complex and realistic crowd behaviors, such as pedestrian navigation behavior with dynamic obstacles, is a difficult and time consuming task. In this paper, we study one special type of crowd, which is composed of urgent individuals, normal individuals, and normal groups. We use three steps to construct the crowd simulation in a dynamic environment. The first is that the urgent individuals move forward along a given path around dynamic obstacles and other crowd members. An optimal acceleration-velocity-bounded trajectory planning method is utilized to model their behaviors, which ensures that the durations of the generated trajectories are minimal and the urgent individuals are collision-free with dynamic obstacles (e.g., dynamic vehicles). In the second step, a pushing model is adopted to simulate the interactions between urgent members and normal ones, which ensures that the computational cost of the optimal trajectory planning is acceptable. The third step imitates the interactions among normal members using collision avoidance behavior and flocking behavior. Various simulation results demonstrate that these three steps give realistic crowd phenomena just like the real world.
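For the acceleration-velocity-bounded planning step, the classic rest-to-rest minimum-time profile in one dimension (bang-bang acceleration with an optional cruise phase) conveys the core idea; this is a textbook result, not the paper's full collision-aware planner:

```python
import math

def min_time(d, vmax, amax):
    """Minimum duration of a rest-to-rest 1-D move of distance d under
    acceleration bound amax and velocity bound vmax. The optimal profile
    accelerates at +amax, optionally cruises at vmax, then brakes at -amax."""
    d_ramp = vmax * vmax / amax          # distance consumed by ramp-up + ramp-down
    if d <= d_ramp:                      # triangular profile: vmax never reached
        return 2.0 * math.sqrt(d / amax)
    return d / vmax + vmax / amax        # trapezoidal: cruise + two ramps

# vmax = 2 m/s, amax = 1 m/s^2: the two ramps alone need 4 m.
t_short = min_time(1.0, 2.0, 1.0)   # triangular: 2 * sqrt(1/1) = 2 s
t_long = min_time(10.0, 2.0, 1.0)   # trapezoidal: 10/2 + 2/1 = 7 s
```

The two branches agree exactly at d = vmax²/amax, which is why the resulting time-versus-distance function is continuous and suitable inside an optimizer.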

  12. Simulation Optimization of Search and Rescue in Disaster Relief Based on Distributed Auction Mechanism

    Jian Tang

    2017-11-01

    Full Text Available In this paper, we optimize the search and rescue (SAR) in disaster relief through agent-based simulation. We simulate rescue teams’ search behaviors with the improved Truncated Lévy walks. Then we propose a cooperative rescue plan based on a distributed auction mechanism, and illustrate it with the case of landslide disaster relief. The simulation is conducted in three scenarios, including “fatal”, “serious” and “normal”. Compared with the non-cooperative rescue plan, the proposed rescue plan in this paper would increase victims’ relative survival probability by 7–15%, increase the ratio of survivors getting rescued by 5.3–12.9%, and decrease the average elapsed time for one site getting rescued by 16.6–21.6%. The robustness analysis shows that search radius can affect the rescue efficiency significantly, while the scope of cooperation cannot. The sensitivity analysis shows that the two parameters, the time limit for completing rescue operations in one buried site and the maximum turning angle for the next step, both have a great influence on rescue efficiency, and an optimal value exists for both of them in view of rescue efficiency.

  13. A Simulation-Optimization Model for Seawater Intrusion Management at Pingtung Coastal Area, Taiwan

    Po-Syun Huang

    2018-02-01

    Full Text Available The coastal regions of Pingtung Plain in southern Taiwan rely on groundwater as their main source of fresh water for the aquaculture, agriculture, domestic, and industrial sectors. The availability of fresh groundwater is threatened by unsustainable groundwater extraction, and the over-pumpage leads to the serious problem of seawater intrusion. It is desired to find appropriate management strategies to control groundwater salinity and mitigate seawater intrusion. In this study, a simulation–optimization model is presented to solve the problem of seawater intrusion along the coastal aquifers of Pingtung Plain; the objective is to minimize the total injection rate of injection well barriers at pre-determined locations. The SEAWAT code is used to simulate the process of seawater intrusion, and a surrogate model based on artificial neural networks (ANNs) is used to approximate the seawater intrusion (SWI) numerical model to increase computational efficiency during the optimization process. The heuristic optimization scheme of the differential evolution (DE) algorithm is selected to identify the global optimal management solution. Two different management scenarios, one with the injection barriers located along the coast and the other with the injection barrier located inland, are considered, and the optimized results show that the deployment of injection barriers inland is more effective at reducing total dissolved solids (TDS) concentrations and mitigating seawater intrusion than deployment along the coast. The computational time can be reduced by more than 98% when using ANNs to replace the numerical model, and the DE algorithm has been confirmed as a robust optimization scheme for solving groundwater management problems. The proposed framework can identify the most reliable management strategies and provide a reference tool for decision making with regard to seawater intrusion remediation.
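The DE scheme mentioned above typically follows the standard DE/rand/1/bin recipe (scaled difference of two random members, binomial crossover, greedy selection). A minimal sketch on a toy objective, standing in for the ANN surrogate, might look like:

```python
import random

def differential_evolution(f, bounds, pop_size=30, gens=150, F=0.7, CR=0.9, seed=0):
    """DE/rand/1/bin: mutant = a + F * (b - c) from three distinct random
    members, binomial crossover with rate CR, then greedy replacement."""
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    cost = [f(p) for p in pop]
    for _ in range(gens):
        for i in range(pop_size):
            a, b, c = rng.sample([p for j, p in enumerate(pop) if j != i], 3)
            jrand = rng.randrange(dim)  # at least one mutated coordinate
            trial = []
            for j in range(dim):
                if j == jrand or rng.random() < CR:
                    v = a[j] + F * (b[j] - c[j])
                else:
                    v = pop[i][j]
                lo, hi = bounds[j]
                trial.append(min(hi, max(lo, v)))  # clip to the box
            fc = f(trial)
            if fc <= cost[i]:
                pop[i], cost[i] = trial, fc
    best = min(range(pop_size), key=lambda i: cost[i])
    return pop[best], cost[best]

sphere = lambda x: sum(v * v for v in x)  # toy stand-in for the surrogate
x, fx = differential_evolution(sphere, [(-5.0, 5.0)] * 4)
```

In the surrogate-based workflow, `f` would be the trained ANN evaluated in microseconds, which is exactly where the reported >98% runtime saving comes from.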

  14. Study on the mechanism and efficiency of simulated annealing using an LP optimization benchmark problem - 113

    Qianqian, Li; Xiaofeng, Jiang; Shaohong, Zhang

    2010-01-01

    Simulated Annealing Algorithm (SAA) for solving combinatorial optimization problems is a popular method for loading pattern optimization. The main purpose of this paper is to understand the underlying search mechanism of SAA and to study its efficiency. In this study, a general SAA that employs random pair exchange of fuel assemblies to search for the optimum fuel Loading Pattern (LP) is applied to an exhaustively searched LP optimization benchmark problem. All the possible LPs of the benchmark problem have been enumerated and evaluated via the use of the very fast and accurate Hybrid Harmonics and Linear Perturbation (HHLP) method, such that the mechanism of SA for LP optimization can be explicitly analyzed and its search efficiency evaluated. The generic core geometry itself dictates that only a small number of LPs can be generated by performing random single pair exchanges and that the LPs are necessarily mostly similar to the initial LP. This phase space effect turns out to be the basic mechanism in SAA that can explain its efficiency and good local search ability. A measure of search efficiency is introduced which shows that the stochastic nature of SAA greatly influences the variability of its search efficiency. It is also found that using the fuel assembly k-infinity distribution as a technique to filter the LPs can significantly enhance the SAA search efficiency. (authors)

  15. Optimization Of Thermo-Electric Coolers Using Hybrid Genetic Algorithm And Simulated Annealing

    Khanh Doan V.K.

    2014-06-01

    Full Text Available Thermo-electric Coolers (TECs) are nowadays applied in a wide range of thermal energy systems. This is due to their superior features: no refrigerant or dynamic parts are needed, they generate no electrical or acoustical noise, and they are environmentally friendly. Over the past decades, many studies have sought to improve the efficiency of TECs by enhancing the material parameters and design parameters. The material parameters are restricted by currently available materials and module fabricating technologies. Therefore, the main objective of TEC design is to determine a set of design parameters such as leg area, leg length and the number of legs. Two elements that play an important role when considering the suitability of TECs in applications are the rate of refrigeration (ROR) and the coefficient of performance (COP). In this paper, some previous research is first reviewed to show the diversity of optimization approaches in the design of TECs for enhancing performance and efficiency. After that, single-objective optimization problems (SOPs) are solved using a Genetic Algorithm (GA) and Simulated Annealing (SA) to optimize geometry properties so that TECs will operate at near-optimal conditions. Both equality and inequality constraints were taken into consideration.

  16. Optimal design of minimum mean-square error noise reduction algorithms using the simulated annealing technique.

    Bai, Mingsian R; Hsieh, Ping-Ju; Hur, Kur-Nan

    2009-02-01

    The performance of the minimum mean-square error noise reduction (MMSE-NR) algorithm in conjunction with time-recursive averaging (TRA) for noise estimation is found to be very sensitive to the choice of two recursion parameters. To address this problem in a more systematic manner, this paper proposes an optimization method to efficiently search the optimal parameters of the MMSE-TRA-NR algorithms. The objective function is based on a regression model, whereas the optimization process is carried out with the simulated annealing algorithm, which is well suited for problems with many local optima. Another NR algorithm proposed in the paper employs linear prediction coding as a preprocessor for extracting the correlated portion of human speech. Objective and subjective tests were undertaken to compare the optimized MMSE-TRA-NR algorithm with several conventional NR algorithms. The results of the subjective tests were processed using analysis of variance to justify the statistical significance. A post hoc test, Tukey's Honestly Significant Difference, was conducted to further assess the pairwise differences between the NR algorithms.

  17. Optimization of the GBMV2 implicit solvent force field for accurate simulation of protein conformational equilibria.

    Lee, Kuo Hao; Chen, Jianhan

    2017-06-15

    Accurate treatment of solvent environment is critical for reliable simulations of protein conformational equilibria. Implicit treatment of solvation, such as using the generalized Born (GB) class of models arguably provides an optimal balance between computational efficiency and physical accuracy. Yet, GB models are frequently plagued by a tendency to generate overly compact structures. The physical origins of this drawback are relatively well understood, and the key to a balanced implicit solvent protein force field is careful optimization of physical parameters to achieve a sufficient level of cancellation of errors. The latter has been hampered by the difficulty of generating converged conformational ensembles of non-trivial model proteins using the popular replica exchange sampling technique. Here, we leverage improved sampling efficiency of a newly developed multi-scale enhanced sampling technique to re-optimize the generalized-Born with molecular volume (GBMV2) implicit solvent model with the CHARMM36 protein force field. Recursive optimization of key GBMV2 parameters (such as input radii) and protein torsion profiles (via the CMAP torsion cross terms) has led to a more balanced GBMV2 protein force field that recapitulates the structures and stabilities of both helical and β-hairpin model peptides. Importantly, this force field appears to be free of the over-compaction bias, and can generate structural ensembles of several intrinsically disordered proteins of various lengths that seem highly consistent with available experimental data. © 2017 Wiley Periodicals, Inc.

  18. Self-optimized construction of transition rate matrices from accelerated atomistic simulations with Bayesian uncertainty quantification

    Swinburne, Thomas D.; Perez, Danny

    2018-05-01

    A massively parallel method to build large transition rate matrices from temperature-accelerated molecular dynamics trajectories is presented. Bayesian Markov model analysis is used to estimate the expected residence time in the known state space, providing crucial uncertainty quantification for higher-scale simulation schemes such as kinetic Monte Carlo or cluster dynamics. The estimators are additionally used to optimize where exploration is performed and the degree of temperature acceleration on the fly, giving an autonomous, optimal procedure to explore the state space of complex systems. The method is tested against exactly solvable models and used to explore the dynamics of C15 interstitial defects in iron. Our uncertainty quantification scheme allows for accurate modeling of the evolution of these defects over timescales of several seconds.
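A stripped-down version of building a transition rate matrix from observed trajectories uses the standard maximum-likelihood estimator k_ij = n_ij / T_i (jump counts over time spent in each state); the paper's Bayesian treatment adds uncertainty quantification on top of this point estimate. The counts below are illustrative:

```python
def rate_matrix(counts, residence):
    """Maximum-likelihood transition rates k[i][j] = n_ij / T_i from
    observed jump counts n_ij and total time T_i spent in each state.
    Diagonal entries are set to zero (no self-transitions)."""
    n = len(residence)
    return [[counts[i][j] / residence[i] if i != j else 0.0
             for j in range(n)] for i in range(n)]

def mean_lifetime(k, i):
    """Expected residence time of state i: 1 / (total escape rate)."""
    return 1.0 / sum(k[i])

counts = [[0, 4, 1], [2, 0, 2], [1, 3, 0]]  # observed i -> j jumps
times = [10.0, 4.0, 8.0]                    # total time spent in each state
K = rate_matrix(counts, times)
tau0 = mean_lifetime(K, 0)                  # 5 jumps in 10 time units -> 2.0
```

Matrices of this form are exactly what higher-scale schemes such as kinetic Monte Carlo consume, which is why credible intervals on the entries matter.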

  19. Spectral optimization simulation of white light based on the photopic eye-sensitivity curve

    Dai, Qi, E-mail: qidai@tongji.edu.cn [College of Architecture and Urban Planning, Tongji University, 1239 Siping Road, Shanghai 200092 (China); Institute for Advanced Study, Tongji University, 1239 Siping Road, Shanghai 200092 (China); Key Laboratory of Ecology and Energy-saving Study of Dense Habitat (Tongji University), Ministry of Education, 1239 Siping Road, Shanghai 200092 (China); Hao, Luoxi; Lin, Yi; Cui, Zhe [College of Architecture and Urban Planning, Tongji University, 1239 Siping Road, Shanghai 200092 (China); Key Laboratory of Ecology and Energy-saving Study of Dense Habitat (Tongji University), Ministry of Education, 1239 Siping Road, Shanghai 200092 (China)

    2016-02-07

    Spectral optimization simulation of white light is studied to boost maximum attainable luminous efficacy of radiation at high color-rendering index (CRI) and various color temperatures. The photopic eye-sensitivity curve V(λ) is utilized as the dominant portion of white light spectra. Emission spectra of a blue InGaN light-emitting diode (LED) and a red AlInGaP LED are added to the spectrum of V(λ) to match white color coordinates. It is demonstrated that at the condition of color temperature from 2500 K to 6500 K and CRI above 90, such white sources can achieve spectral efficacy of 330–390 lm/W, which is higher than the previously reported theoretical maximum values. We show that this eye-sensitivity-based approach also has advantages on component energy conversion efficiency compared with previously reported optimization solutions.
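The figure of merit maximized here, luminous efficacy of radiation, is the V(λ)-weighted fraction of radiant power times 683 lm/W. A rough sketch of the calculation, using a common Gaussian approximation to V(λ) instead of the tabulated CIE curve (the fit constants are an assumption for illustration):

```python
import math

def v_lambda(lam_nm):
    """Gaussian approximation to the CIE photopic sensitivity V(lambda),
    peaking near 555 nm; an illustrative fit, not the tabulated CIE data."""
    lam_um = lam_nm / 1000.0
    return 1.019 * math.exp(-285.4 * (lam_um - 0.559) ** 2)

def luminous_efficacy(spectrum, lo=380.0, hi=780.0, n=4000):
    """Luminous efficacy of radiation (lm/W) for a spectral power distribution
    `spectrum` (W per nm, arbitrary scale): 683 * int(V*S) / int(S),
    evaluated by trapezoidal integration (the step h cancels in the ratio)."""
    num = den = 0.0
    for i in range(n + 1):
        lam = lo + i * (hi - lo) / n
        w = 0.5 if i in (0, n) else 1.0
        s = spectrum(lam)
        num += w * v_lambda(lam) * s
        den += w * s
    return 683.0 * num / den

# A source shaped exactly like V(lambda), as in the paper's construction:
ler = luminous_efficacy(v_lambda)
```

Under this fit a pure V(λ)-shaped source evaluates to roughly 490 lm/W; the 330-390 lm/W figures above are lower because the blue and red LED components needed for white color coordinates and CRI above 90 dilute the V(λ) core.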

  1. Numerical Simulation and Optimization of Hole Spacing for Cement Grouting in Rocks

    Ping Fu

    2013-01-01

    The fine fissures of V-diabase were the main stratigraphic unit affecting the effectiveness of the foundation grout curtain at the Dagang Mountain Hydropower Station. Specialized in situ grouting tests were therefore conducted to determine reasonable hole spacing and other parameters. Considering the time variation of the rheological parameters of the grout, the variation of the grouting pressure gradient, and the evolution of the fracture opening, numerical simulations were performed on the diffusion process of cement grout in the fissures of the rock mass. The distribution of permeability after grouting was obtained from the analysis results, and the grouting hole spacing was discussed on the basis of reliability analysis. A finer optimization precision of 0.1 m could be adopted, compared with the 0.5 m accuracy commonly used. The results provide a useful reference for choosing reasonable grouting hole spacing in similar projects.

  2. Predictive simulations and optimization of nanowire field-effect PSA sensors including screening

    Baumgartner, Stefan; Heitzinger, Clemens; Vacic, Aleksandar; Reed, Mark A

    2013-01-01

    We apply our self-consistent PDE model for the electrical response of field-effect sensors to the 3D simulation of nanowire PSA (prostate-specific antigen) sensors. The charge concentration in the biofunctionalized boundary layer at the semiconductor-electrolyte interface is calculated using the propka algorithm, and the screening of the biomolecules by the free ions in the liquid is modeled by a sensitivity factor. This comprehensive approach yields excellent agreement with experimental current-voltage characteristics without any fitting parameters. Having verified the numerical model in this manner, we study the sensitivity of nanowire PSA sensors by changing device parameters, making it possible to optimize the devices and revealing the attributes of the optimal field-effect sensor. © 2013 IOP Publishing Ltd.

  3. Operational Excellence through Schedule Optimization and Production Simulation of Application Specific Integrated Circuits.

    Flory, John Andrew [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Padilla, Denise D. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Gauthier, John H. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Zwerneman, April Marie [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Miller, Steven P [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2016-05-01

    Upcoming weapon programs require an aggressive increase in Application Specific Integrated Circuit (ASIC) production at Sandia National Laboratories (SNL). SNL has developed unique modeling and optimization tools that have been instrumental in improving ASIC production productivity and efficiency, identifying optimal operational and tactical execution plans under resource constraints, and providing confidence in successful mission execution. With ten products and unprecedented levels of demand, a single set of shared resources, highly variable processes, and the need for external supplier task synchronization, scheduling is an integral part of successful manufacturing. The scheduler uses an iterative multi-objective genetic algorithm and a multi-dimensional performance evaluator. Schedule feasibility is assessed using a discrete event simulation (DES) that incorporates operational uncertainty, variability, and resource availability. The tools provide rapid scenario assessments and responses to variances in the operational environment, and have been used to inform major equipment investments and workforce planning decisions in multiple SNL facilities.

  5. On performing of interference technique based on self-adjusting Zernike filters (SA-AVT method) to investigate flows and validate 3D flow numerical simulations

    Pavlov, Al. A.; Shevchenko, A. M.; Khotyanovsky, D. V.; Pavlov, A. A.; Shmakov, A. S.; Golubev, M. P.

    2017-10-01

    We present a method for and results of determination of the field of integral density in the structure of flow corresponding to the Mach interaction of shock waves at Mach number M = 3. The optical diagnostics of flow was performed using an interference technique based on self-adjusting Zernike filters (SA-AVT method). Numerical simulations were carried out using the CFS3D program package for solving the Euler and Navier-Stokes equations. Quantitative data on the distribution of integral density on the path of probing radiation in one direction of 3D flow transillumination in the region of Mach interaction of shock waves were obtained for the first time.

  6. A concept for optimizing avalanche rescue strategies using a Monte Carlo simulation approach.

    Ingrid Reiweger

    Recent technical and strategic developments have increased the survival chances for avalanche victims. Still, hundreds of people, primarily recreationists, get caught and buried by snow avalanches every year. About 100 die each year in the European Alps, and many more worldwide. Refining concepts for avalanche rescue means optimizing the procedures so that survival chances are maximized and the greatest possible number of lives is saved. Avalanche rescue involves several parameters related to terrain, natural hazards, the people affected by the event, the rescuers, and the applied search and rescue equipment. The numerous parameters and their complex interaction make it unrealistic for a rescuer to take, in the urgency of the situation, the best possible decisions without clearly structured, easily applicable decision support systems. In order to analyse which measures lead to the best possible survival outcome in the complex environment of an avalanche accident, we present a numerical approach, namely a Monte Carlo simulation. We demonstrate the application of Monte Carlo simulations for two typical yet tricky questions in avalanche rescue: (1) calculating how deep one should probe in the first passage of a probe line depending on search area, and (2) determining for how long resuscitation should be performed on a specific patient while others are still buried. In both cases, we demonstrate that optimized strategies can be calculated with the Monte Carlo method, provided that the necessary input data are available. Our Monte Carlo simulations also suggest that with a strict focus on the "greatest good for the greatest number", today's rescue strategies can be further optimized in the best interest of patients involved in an avalanche accident.
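The strategy-comparison idea can be illustrated with a heavily simplified Monte Carlo experiment: draw random burial depths, apply a toy time-cost and survival model, and compare two first-pass probing depths. Every distribution and constant below is an invented placeholder, not one of the study's inputs:

```python
import random

def simulate(strategy_depth_cm, n=10000, seed=1):
    """Toy Monte Carlo comparison of first-pass probing depths.
    The burial-depth distribution, timing model, and survival curve are
    illustrative stand-ins, not the parameters used in the study."""
    rng = random.Random(seed)
    found_alive = 0
    for _ in range(n):
        burial = rng.lognormvariate(4.6, 0.5)  # burial depth in cm, ~100 cm median
        # A deeper probe stroke costs more time per probe position.
        t_first_pass = 5.0 + strategy_depth_cm / 30.0  # minutes, toy model
        if burial <= strategy_depth_cm:
            t_rescue = t_first_pass
        else:
            t_rescue = t_first_pass + 20.0  # second, deeper pass needed
        p_survive = max(0.0, 0.9 - 0.03 * t_rescue)  # toy survival curve
        if rng.random() < p_survive:
            found_alive += 1
    return found_alive / n

shallow = simulate(120)  # probe to 120 cm on the first pass
deep = simulate(250)     # probe to 250 cm on the first pass
```

Comparing `shallow` and `deep` under many such draws is exactly the kind of trade-off (speed of the first pass versus probability of finding the victim in it) that the full simulation quantifies with real input data.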

  7. Optimal Protection of Reactor Hall Under Nuclear Fuel Container Drop Using Simulation Methods

    Králik Juraj

    2014-12-01

    This paper presents the optimal design of damping devices for a reactor hall cover under the impact of a nuclear fuel container drop of type TK C30. The finite element idealization of the nuclear power plant structure is built in the software ANSYS. A steel pipe damper system is proposed for dissipating the kinetic energy of the container free fall, and is compared with experimental results. The probabilistic and sensitivity analysis of the damping devices was carried out on the basis of simulation methods in the program AntHill using the Monte Carlo method.

  8. The System of Simulation and Multi-objective Optimization for the Roller Kiln

    Huang, He; Chen, Xishen; Li, Wugang; Li, Zhuoqiu

    Obtaining the building parameters of a ceramic roller kiln simulation model is a difficult research problem. A system integrating evolutionary algorithms (PSO, DE and DEPSO) with computational fluid dynamics (CFD) is proposed to solve it. The temperature field uniformity and the environmental impact are studied in this paper. With the help of efficient parallel calculation, the temperature field uniformity and the NOx emissions field of the ceramic roller kiln are investigated in the system at the same time. A multi-objective optimization example of an industrial roller kiln shows that the system has excellent parameter exploration capability.
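Of the integrated evolutionary algorithms, PSO is the simplest to sketch. The version below minimizes a toy surrogate objective in place of a CFD-evaluated kiln metric; the swarm parameters are common textbook choices, not values from this work:

```python
import random

def pso(cost, dim, n_particles=20, iters=200, seed=5):
    """Minimal particle swarm optimizer. Here it minimizes a toy surrogate
    rather than a CFD-evaluated temperature-uniformity objective."""
    rng = random.Random(seed)
    w, c1, c2 = 0.7, 1.5, 1.5  # inertia and attraction weights (textbook values)
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [list(p) for p in pos]
    pbest_f = [cost(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_f[i])
    gbest, gbest_f = list(pbest[g]), pbest_f[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            f = cost(pos[i])
            if f < pbest_f[i]:
                pbest[i], pbest_f[i] = list(pos[i]), f
                if f < gbest_f:
                    gbest, gbest_f = list(pos[i]), f
    return gbest, gbest_f

# Toy objective standing in for a simulation-derived kiln metric.
best, best_f = pso(lambda x: sum(xi * xi for xi in x), dim=2)
```

In the actual system each `cost` evaluation would be a parallel CFD run, which is why efficient parallel calculation matters.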

  9. Optimal Research and Numerical Simulation for Scheduling No-Wait Flow Shop in Steel Production

    Huawei Yuan

    2013-01-01

    This paper considers the m-machine flow shop scheduling problem with the no-wait constraint to minimize total completion time, a typical model in steel production. First, the asymptotic optimality of the Shortest Processing Time (SPT) first rule is proven for this problem. To further evaluate the performance of the algorithm, a new lower bound with a performance guarantee is designed. Finally, numerical simulations show the effectiveness of the proposed algorithm and the lower bound.
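The SPT rule itself is a one-line sort. A sketch for the single-stage analogue of the total-completion-time objective (the paper proves asymptotic optimality in the harder m-machine no-wait setting, where a job's "time" would be its aggregate processing requirement):

```python
def spt_schedule(jobs):
    """Shortest-Processing-Time-first sequencing for total completion time.
    `jobs` maps job id -> processing time. Single-stage illustration only."""
    order = sorted(jobs, key=jobs.get)  # shortest jobs first
    t, total = 0, 0
    for j in order:
        t += jobs[j]   # completion time of job j
        total += t     # accumulate sum of completion times
    return order, total

order, total = spt_schedule({"A": 4, "B": 1, "C": 3})
# SPT order B, C, A gives completion times 1, 4, 8, so total = 13
```

Any other sequence of these three jobs yields a larger total, which is the exchange-argument intuition behind SPT's optimality for this objective.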

  10. Optimization of GEANT4 settings for Proton Pencil Beam Scanning simulations using GATE

    Grevillot, Loic, E-mail: loic.grevillot@gmail.co [Universite de Lyon, F-69622 Lyon (France); Creatis, CNRS UMR 5220, F-69622 Villeurbanne (France); Centre de Lutte Contre le Cancer Leon Berard, F-69373 Lyon (France); IBA, B-1348 Louvain-la-Neuve (Belgium); Frisson, Thibault [Universite de Lyon, F-69622 Lyon (France); Creatis, CNRS UMR 5220, F-69622 Villeurbanne (France); Centre de Lutte Contre le Cancer Leon Berard, F-69373 Lyon (France); Zahra, Nabil [Universite de Lyon, F-69622 Lyon (France); IPNL, CNRS UMR 5822, F-69622 Villeurbanne (France); Centre de Lutte Contre le Cancer Leon Berard, F-69373 Lyon (France); Bertrand, Damien; Stichelbaut, Frederic [IBA, B-1348 Louvain-la-Neuve (Belgium); Freud, Nicolas [Universite de Lyon, F-69622 Lyon (France); CNDRI, INSA-Lyon, F-69621 Villeurbanne Cedex (France); Sarrut, David [Universite de Lyon, F-69622 Lyon (France); Creatis, CNRS UMR 5220, F-69622 Villeurbanne (France); Centre de Lutte Contre le Cancer Leon Berard, F-69373 Lyon (France)

    2010-10-15

    This study reports the investigation of different GEANT4 settings for proton therapy applications in the context of Treatment Planning System comparisons. The GEANT4.9.2 release was used through the GATE platform. We focused on the Pencil Beam Scanning delivery technique, which allows for intensity modulated proton therapy applications. The most relevant options and parameters (range cut, step size, database binning) of the simulation that influence the dose deposition were investigated, in order to determine a robust, accurate and efficient simulation environment. In this perspective, simulations of depth-dose profiles and transverse profiles at different depths and energies between 100 and 230 MeV were assessed against reference measurements in water and PMMA. These measurements were performed in Essen, Germany, with the IBA dedicated Pencil Beam Scanning system, using Bragg-peak chambers and radiochromic films. GEANT4 simulations were also compared to the PHITS.2.14 and MCNPX.2.5.0 Monte Carlo codes. Depth-dose simulations reached 0.3 mm range accuracy compared to NIST CSDA ranges, with a dose agreement of about 1% over a set of five different energies. The transverse profiles simulated using the different Monte Carlo codes showed discrepancies, with up to 15% difference in beam widening between GEANT4 and MCNPX in water. An 8% difference between the GEANT4 multiple scattering and single scattering algorithms was observed. The simulations were unable to reproduce the measured transverse dose spreading with depth in PMMA, corroborating the fact that GEANT4 underestimates the lateral dose spreading. GATE was found to be a very convenient simulation environment for performing this study. A reference physics list and an optimized parameter list have been proposed. Satisfactory agreement with depth-dose profile measurements was obtained. The simulation of transverse profiles using the different Monte Carlo codes showed significant deviations.

  11. Making an "Attitude Adjustment": Using a Simulation-Enhanced Interprofessional Education Strategy to Improve Attitudes Toward Teamwork and Communication.

    Wong, Ambrose Hon-Wai; Gang, Maureen; Szyld, Demian; Mahoney, Heather

    2016-04-01

    Health care providers must function effectively in highly skilled teams in a collaborative manner, but there are few interprofessional training strategies in place. Interprofessional education (IPE) using simulation technology has gained popularity to address this need because of its inherent ability to impact learners' cognitive frames and promote peer-to-peer dialog. Provider attitudes toward teamwork have been directly linked to the quality of patient care. Investigators implemented a simulation-enhanced IPE intervention to improve staff attitudes toward teamwork and interprofessional communication in the emergency department setting. The 3-hour course consisted of a didactic session highlighting teamwork and communication strategies, 2 simulation scenarios on septic shock and cardiac arrest, and structured debriefing directed at impacting participant attitudes to teamwork and communication. This was a survey-based observational study. We used the TeamSTEPPS Teamwork Attitudes Questionnaire immediately before and after the session as a measurement of attitude change, as well as the Hospital Survey on Patient Safety Culture before the session and 1 year after the intervention to assess program impact at the behavior level. Seventy-two emergency department nurses and resident physicians participated in the course from July to September 2012. Of the 5 constructs in the TeamSTEPPS Teamwork Attitudes Questionnaire, 4 had a significant improvement in scores: 6.4%, 2.8%, 4.0%, and 4.0% for team structure, leadership, situation monitoring, and mutual support, respectively. Measures related to teamwork and communication showed a significant improvement: 20.6%, 20.5%, and 23.9% for frequency of event reporting, teamwork within hospital units, and hospital handoffs and transitions, respectively (P = 0.028, P = 0.035, and P = 0.024). A simulation-enhanced IPE curriculum was successful in improving participant attitudes toward teamwork and components of patient safety culture related to

  12. Efficiency optimization of a fast Poisson solver in beam dynamics simulation

    Zheng, Dawei; Pöplau, Gisela; van Rienen, Ursula

    2016-01-01

    Calculating the solution of Poisson's equation relating to space charge force is still the major time consumption in beam dynamics simulations and calls for further improvement. In this paper, we summarize a classical fast Poisson solver in beam dynamics simulations: the integrated Green's function method. We introduce three optimization steps of the classical Poisson solver routine: using the reduced integrated Green's function instead of the integrated Green's function; using the discrete cosine transform instead of discrete Fourier transform for the Green's function; using a novel fast convolution routine instead of an explicitly zero-padded convolution. The new Poisson solver routine preserves the advantages of fast computation and high accuracy. This provides a fast routine for high performance calculation of the space charge effect in accelerators.
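The third optimization step rests on a standard identity: an open-boundary (aperiodic) convolution of length n equals a circular convolution once both arrays are zero-padded to length 2n, which is what an FFT evaluates in O(n log n). A small pure-Python check of that identity, with an invented stand-in for the integrated Green's function:

```python
# Stand-in discretized Green's function and charge distribution in 1D.
# Values are illustrative; a real solver would use the integrated
# Green's function of the Poisson kernel on a 3D grid.
n = 8
rho = [0.0] * n
rho[2], rho[5] = 1.0, -0.5
g = [1.0 / (abs(k) + 1.0) for k in range(-(n - 1), n)]  # g(k), k = -(n-1)..n-1

# Direct open-boundary convolution: phi[i] = sum_j g(i-j) * rho[j], O(n^2).
phi_direct = [sum(g[(i - j) + (n - 1)] * rho[j] for j in range(n))
              for i in range(n)]

# Same result via a *circular* convolution on arrays zero-padded to length 2n,
# which is exactly what an FFT-based solver computes in O(n log n).
m = 2 * n
rho_p = rho + [0.0] * n
# Wrap g into length-2n periodic order: index 0..n-1 holds k = 0..n-1,
# index n..2n-1 holds k = -n..-1 (k = -n is outside the support, so 0).
g_p = [g[k + (n - 1)] if -(n - 1) <= k <= n - 1 else 0.0
       for k in [*range(0, n), *range(-n, 0)]]
phi_circ = [sum(rho_p[j] * g_p[(i - j) % m] for j in range(m)) for i in range(m)]
phi_fft_style = phi_circ[:n]

assert all(abs(a - b) < 1e-12 for a, b in zip(phi_direct, phi_fft_style))
```

The "novel fast convolution routine" mentioned above avoids forming the explicitly zero-padded arrays, but it must reproduce this same open-boundary result.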

  13. Automatic Optimization for Large-Scale Real-Time Coastal Water Simulation

    Shunli Wang

    2016-01-01

    We introduce an automatic optimization approach for the simulation of large-scale coastal water. To solve the singularity problem of water waves obtained with the traditional model, a hybrid deep-shallow-water model is estimated by using an automatic coupling algorithm. It can handle arbitrary water depth and different underwater terrain. As a characteristic feature of coastal terrain, the coastline is detected with collision detection technology. Then, unnecessary water grid cells are simplified by the automatic simplification algorithm according to the depth. Finally, the model is calculated on the Central Processing Unit (CPU) and the simulation is implemented on the Graphics Processing Unit (GPU). We show the effectiveness of our method with various results which achieve real-time rendering on a consumer-level computer.

  14. Optimization of the SNS magnetism reflectometer neutron-guide optics using Monte Carlo simulations

    Klose, F

    2002-01-01

    The magnetism reflectometer at the spallation neutron source SNS will employ advanced neutron optics to achieve high data rate, improved resolution, and extended dynamic range. Optical components utilized will include a multi-channel polygonal curved bender and a tapered neutron-focusing guide section. The results of a neutron beam interacting with these devices are rather complex. Additional complexity arises due to the spectral/time-emission profile of the moderator and non-perfect neutron optical coatings. While analytic formulae for the individual components provide some design guidelines, a realistic performance assessment of the whole instrument can only be achieved by advanced simulation methods. In this contribution, we present guide optics optimizations for the magnetism reflectometer using Monte Carlo simulations. We compare different instrument configurations and calculate the resulting data rates. (orig.)

  15. Use of simulated annealing in standardization and optimization of the acerola wine production

    Sheyla dos Santos Almeida

    2014-06-01

    In this study, seven wine samples were prepared, varying the amount of acerola fruit pulp and the sugar content, using the simulated annealing technique to obtain the optimal sensory qualities and cost for the wine produced. S. cerevisiae yeast was used in the fermentation process and the sensory attributes were evaluated using a hedonic scale. Acerola wines were classified as sweet, with 11°GL alcohol concentration and with the aroma, taste, and color characteristics of the acerola fruit. The simulated annealing experiments showed that the best conditions were found at a mass ratio between 1/7.5 and 1/6 and total soluble solids between 28.6 and 29.0 °Brix, for which sensory acceptance scores of 6.9, 6.8, and 8.8 were obtained for color, aroma, and flavor, respectively, with a production cost 43-45% lower than that of traditional wines commercialized in Brazil.
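A generic simulated annealing loop of the kind used here is short. The sketch below minimizes a toy two-variable cost standing in for the study's sensory-score/cost objective; the surface and its optimum location are invented for illustration:

```python
import math
import random

def simulated_annealing(cost, x0, step, t0=1.0, cooling=0.995, iters=5000, seed=7):
    """Generic simulated annealing minimizer: accept any improvement,
    and accept worsening moves with probability exp(-delta / temperature)."""
    rng = random.Random(seed)
    x, fx = list(x0), cost(x0)
    best, fbest = list(x), fx
    t = t0
    for _ in range(iters):
        cand = [xi + rng.uniform(-s, s) for xi, s in zip(x, step)]
        fc = cost(cand)
        if fc < fx or rng.random() < math.exp(-(fc - fx) / t):
            x, fx = cand, fc
            if fx < fbest:
                best, fbest = list(x), fx
        t *= cooling  # geometric cooling schedule
    return best, fbest

# Toy objective with optimum near mass ratio 0.15 (about 1/6.7) and 28.8 Brix;
# the real study optimized measured sensory scores and production cost.
cost = lambda v: (v[0] - 0.15) ** 2 + 0.01 * (v[1] - 28.8) ** 2
best, fbest = simulated_annealing(cost, x0=[0.3, 25.0], step=[0.02, 0.5])
```

The cooling schedule controls the trade-off between exploring the formulation space early and refining the best recipe late.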

  16. Three dimensional design, simulation and optimization of a novel, universal diabetic foot offloading orthosis

    Sukumar, Chand; Ramachandran, K. I.

    2016-09-01

    Leg amputation is a major consequence of aggravated foot ulceration in diabetic patients. A common-sense treatment approach for diabetic foot ulceration is foot offloading, where the patient is required to wear a foot offloading orthosis during the entire course of treatment. The removable walker is an excellent foot offloading modality compared with the gold standard solutions, the total contact cast and felt padding. Commercially available foot offloaders are generally customized, with high cost and poor patient compliance. This work proposes an optimized 3D model of a new type of lightweight removable foot offloading orthosis for diabetic patients. The device has simple adjustable features which make it suitable for a wide range of patients, with weights of 35 to 74 kg and heights of 137 to 180 cm. The foot plate of the orthosis is unisex, with size adjustability of (US size) 6 to 10. Materials such as aluminum alloy 6061-T6, acrylonitrile butadiene styrene (ABS) and polyurethane were key to reducing the weight of the device to 0.804 kg. Static analysis indicated that the maximum stress developed in the device under a load of 1000 N is only 37.8 MPa, with a small deflection of 0.150 cm and a factor of safety of 3.28, within the safety limits, whereas the dynamic analysis results assure the load bearing capacity of the device. Thus, the proposed device can be safely used as an orthosis for offloading the diabetic ulcerated foot.

  17. Scenario analysis of carbon emissions' anti-driving effect on Qingdao's energy structure adjustment with an optimization model, Part II: Energy system planning and management.

    Wu, C B; Huang, G H; Liu, Z P; Zhen, J L; Yin, J G

    2017-03-01

    In this study, an inexact multistage stochastic mixed-integer programming (IMSMP) method was developed for supporting regional-scale energy system planning (ESP) associated with multiple uncertainties presented as discrete intervals, probability distributions and their combinations. An IMSMP-based energy system planning (IMSMP-ESP) model was formulated for Qingdao to demonstrate its applicability. Solutions that provide optimal patterns of energy resource generation, conversion, transmission and allocation as well as facility capacity expansion schemes have been obtained. The results can help local decision makers generate cost-effective energy system management schemes and reach a comprehensive tradeoff between economic objectives and environmental requirements. Moreover, taking the CO2 emissions scenarios mentioned in Part I into consideration, the anti-driving effect of carbon emissions on energy structure adjustment was studied based on the developed model and scenario analysis. Several suggestions can be drawn from the results: (a) to ensure the smooth realization of low-carbon and sustainable development, appropriate price controls and fiscal subsidies for high-cost energy resources should be considered by decision makers; (b) compared with coal, natural gas utilization should be strongly encouraged in order to ensure that Qingdao can reach its peak carbon emissions in 2020; (c) to guarantee Qingdao's future power supply security, the construction of new power plants should be emphasised instead of enhancing the transmission capacity of grid infrastructure. Copyright © 2016 Elsevier Ltd. All rights reserved.

  18. Body measurements and the variability of sitting postures at preschool age as preconditions for an optimal adjustment of chairs and tables.

    Voigt, Andrea; Greil, Holle

    2009-03-01

    Preschool age is a biological stage of intensive longitudinal growth, with high plasticity of the growing body and of body postures. It is the period when children learn to persist in a sitting posture for a longer time and to use furniture such as chairs or other body-supporting systems. The growing body shows a special sensitivity to the manifestation of inappropriate postures. In this study, the development of body measurements and sitting behaviour of preschool-age children is investigated as a precondition for an optimal adjustment of seats and desks to the growing body. According to the instructions of Knussmann (1988) and Jürgens (1988), 6 body measurements were taken from 122 German children aged 3 to 7 years from Potsdam, Province of Brandenburg. Additionally, every child was videotaped for 10 minutes while crayoning in a sitting position of its own choice, using a chair and a desk. The tapes were analysed picture by picture with the software Noldus Observer to define the different types of sitting postures as well as the duration of persistence in a posture and the number of changes of posture. The chairs and desks used were also measured. Furthermore, the data of the furniture guideline DIN ISO 5970 (DIN, 1981), which regulates the dimensions of furniture for sitting in educational institutions, were compared with the results of the body measurements and with the dimensions of the furniture used by the children.

  19. A computational fluid dynamics simulation framework for ventricular catheter design optimization.

    Weisenberg, Sofy H; TerMaath, Stephanie C; Barbier, Charlotte N; Hill, Judith C; Killeffer, James A

    2017-11-10

    OBJECTIVE Cerebrospinal fluid (CSF) shunts are the primary treatment for patients suffering from hydrocephalus. While proven effective in symptom relief, these shunt systems are plagued by high failure rates and often require repeated revision surgeries to replace malfunctioning components. One of the leading causes of CSF shunt failure is obstruction of the ventricular catheter by aggregations of cells, proteins, blood clots, or fronds of choroid plexus that occlude the catheter's small inlet holes or even the full internal catheter lumen. Such obstructions can disrupt CSF diversion out of the ventricular system or impede it entirely. Previous studies have suggested that altering the catheter's fluid dynamics may help to reduce the likelihood of complete ventricular catheter failure caused by obstruction. However, systematic correlation between a ventricular catheter's design parameters and its performance, specifically its likelihood to become occluded, still remains unknown. Therefore, an automated, open-source computational fluid dynamics (CFD) simulation framework was developed for use in the medical community to determine optimized ventricular catheter designs and to rapidly explore parameter influence for a given flow objective. METHODS The computational framework was developed by coupling a 3D CFD solver and an iterative optimization algorithm and was implemented in a high-performance computing environment. The capabilities of the framework were demonstrated by computing an optimized ventricular catheter design that provides uniform flow rates through the catheter's inlet holes, a common design objective in the literature. The baseline computational model was validated using 3D nuclear imaging to provide flow velocities at the inlet holes and through the catheter. 
RESULTS The optimized catheter design achieved through use of the automated simulation framework improved significantly on previous attempts to reach a uniform inlet flow rate distribution using
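The simulate-then-optimize coupling at the heart of such a framework can be sketched abstractly. Below, `simulate_inlet_flows` is an invented toy stand-in for the 3D CFD solve, and a simple coordinate-descent sweep stands in for the framework's iterative optimization algorithm; none of the constants come from the paper:

```python
def simulate_inlet_flows(hole_diameters):
    """Invented stand-in for the CFD solve: returns per-hole flow rates.
    Flow through each hole scales with d**4 (Poiseuille-like), and holes
    nearer the tip starve the ones behind them."""
    flows, remaining = [], 1.0
    for d in hole_diameters:
        q = remaining * d ** 4 / (d ** 4 + 0.5)
        flows.append(q)
        remaining -= 0.3 * q
    return flows

def uniformity_cost(flows):
    """Flow objective: squared deviation from a uniform inlet distribution."""
    mean = sum(flows) / len(flows)
    return sum((f - mean) ** 2 for f in flows)

# Coordinate descent standing in for the framework's optimizer:
# nudge each hole diameter and keep any change that improves uniformity.
d = [1.0, 1.0, 1.0, 1.0]
cost = uniformity_cost(simulate_inlet_flows(d))
for sweep in range(50):
    for i in range(len(d)):
        for delta in (0.05, -0.05):
            trial = d[:]
            trial[i] = max(0.1, trial[i] + delta)
            c = uniformity_cost(simulate_inlet_flows(trial))
            if c < cost:
                d, cost = trial, c
```

The real framework replaces the stub with a full 3D CFD solver on a high-performance cluster, which is why an automated, scriptable loop matters.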

  20. Integration of electromagnetic induction sensor data in soil sampling scheme optimization using simulated annealing.

    Barca, E; Castrignanò, A; Buttafuoco, G; De Benedetto, D; Passarella, G

    2015-07-01

    Soil survey is generally time-consuming, labor-intensive, and costly. Optimization of the sampling scheme allows one to reduce the number of sampling points without decreasing, or even while increasing, the accuracy of the investigated attribute. Maps of bulk soil electrical conductivity (ECa) recorded with electromagnetic induction (EMI) sensors can be effectively used to direct soil sampling design for assessing the spatial variability of soil moisture. A protocol using a field-scale bulk ECa survey has been applied in an agricultural field in the Apulia region (southeastern Italy). Spatial simulated annealing was used as a method to optimize the spatial soil sampling scheme, taking into account sampling constraints, field boundaries, and preliminary observations. Three optimization criteria were used: the first criterion (minimization of mean of the shortest distances, MMSD) optimizes the spreading of the point observations over the entire field by minimizing the expectation of the distance between an arbitrarily chosen point and its nearest observation; the second criterion (minimization of weighted mean of the shortest distances, MWMSD) is a weighted version of the MMSD, which uses the digital gradient of the grid ECa data as weighting function; and the third criterion (mean of average ordinary kriging variance, MAOKV) minimizes the mean kriging estimation variance of the target variable. The last criterion utilizes the variogram model of soil water content estimated in a previous trial. The procedures, or a combination of them, were tested and compared in a real case. Simulated annealing was implemented with the software MSANOS, which is able to define or redesign any sampling scheme by increasing or decreasing the original sampling locations. The output consists of the computed sampling scheme, the convergence time, and the cooling law, which can be an invaluable support to the process of sampling design. The proposed approach has found the optimal solution in a reasonable computation time.
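The MMSD criterion is straightforward to evaluate and to improve by local moves. A sketch on an invented toy field, using a zero-temperature (purely greedy) variant of spatial simulated annealing rather than the full MSANOS procedure:

```python
import math
import random

def mmsd(samples, domain_points):
    """Mean of shortest distances (MMSD): average distance from each point
    of the study area to its nearest sampling location."""
    total = 0.0
    for p in domain_points:
        total += min(math.dist(p, s) for s in samples)
    return total / len(domain_points)

# Toy field: a 10 x 10 unit grid of evaluation points, standing in for the
# EMI-surveyed field in the study.
grid = [(i + 0.5, j + 0.5) for i in range(10) for j in range(10)]

rng = random.Random(3)
samples = [(rng.uniform(0, 10), rng.uniform(0, 10)) for _ in range(8)]
initial = mmsd(samples, grid)
best_val = initial

# Greedy relocation sweep: jitter one sampling point at a time and keep
# any move that lowers the criterion (the zero-temperature limit of SSA).
for _ in range(500):
    i = rng.randrange(len(samples))
    old = samples[i]
    samples[i] = (min(10, max(0, old[0] + rng.uniform(-1, 1))),
                  min(10, max(0, old[1] + rng.uniform(-1, 1))))
    trial = mmsd(samples, grid)
    if trial >= best_val:
        samples[i] = old  # revert worsening moves
    else:
        best_val = trial
```

The full algorithm additionally accepts some worsening moves at nonzero temperature, and the MWMSD variant would weight each grid point by the local ECa gradient.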

  1. Simulation-optimization framework for multi-site multi-season hybrid stochastic streamflow modeling

    Srivastav, Roshan; Srinivasan, K.; Sudheer, K. P.

    2016-11-01

    A simulation-optimization (S-O) framework is developed for the hybrid stochastic modeling of multi-site multi-season streamflows. The multi-objective optimization model formulated is the driver and the multi-site, multi-season hybrid matched block bootstrap model (MHMABB) is the simulation engine within this framework. The multi-site multi-season simulation model is the extension of the existing single-site multi-season simulation model. A robust and efficient evolutionary search based technique, namely, the non-dominated sorting based genetic algorithm (NSGA-II), is employed as the solution technique for the multi-objective optimization within the S-O framework. The objective functions employed are related to the preservation of the multi-site critical deficit run sum and the constraints introduced are concerned with the hybrid model parameter space, and the preservation of certain statistics (such as inter-annual dependence and/or skewness of aggregated annual flows). The efficacy of the proposed S-O framework is brought out through a case example from the Colorado River basin. The proposed multi-site multi-season model AMHMABB (whose parameters are obtained from the proposed S-O framework) preserves the temporal as well as the spatial statistics of the historical flows. Also, the other multi-site deficit run characteristics, namely, the number of runs, the maximum run length, the mean run sum and the mean run length are well preserved by the AMHMABB model. Overall, the proposed AMHMABB model is able to show better streamflow modeling performance when compared with the simulation based SMHMABB model, plausibly due to the significant role played by: (i) the objective functions related to the preservation of multi-site critical deficit run sum; (ii) the huge hybrid model parameter space available for the evolutionary search and (iii) the constraint on the preservation of the inter-annual dependence. Split-sample validation results indicate that the AMHMABB model is
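The block-bootstrap idea underlying the simulation engine can be sketched in a few lines. The matching step that conditions each block on the end of the previous one (the "matched" in MHMABB) is omitted here for brevity, and the flow values are invented:

```python
import random

def block_bootstrap(series, block_len, n_out, seed=11):
    """Plain (non-matched) block bootstrap: resample contiguous blocks of a
    time series so that short-range dependence within blocks is preserved.
    The paper's matched variant additionally selects each block conditional
    on how the previous block ended; that step is omitted in this sketch."""
    rng = random.Random(seed)
    out = []
    while len(out) < n_out:
        start = rng.randrange(0, len(series) - block_len + 1)
        out.extend(series[start:start + block_len])
    return out[:n_out]

history = [10, 12, 9, 14, 20, 18, 11, 8, 13, 16, 22, 19]  # toy seasonal flows
synthetic = block_bootstrap(history, block_len=3, n_out=24)
```

In the S-O framework, parameters such as the block length would be tuned by NSGA-II against the run-sum preservation objectives rather than fixed by hand.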

  2. SQUEEZE-E: The Optimal Solution for Molecular Simulations with Periodic Boundary Conditions.

    Wassenaar, Tsjerk A; de Vries, Sjoerd; Bonvin, Alexandre M J J; Bekker, Henk

    2012-10-09

    In molecular simulations of macromolecules, it is desirable to limit the amount of solvent in the system to avoid spending computational resources on uninteresting solvent-solvent interactions. As a consequence, periodic boundary conditions are commonly used, with a simulation box chosen as small as possible for a given minimal distance between images. Here, we describe how such a simulation cell can be set up for ensembles, taking into account a priori available or estimable information regarding conformational flexibility. Doing so ensures that any conformation present in the input ensemble will satisfy the distance criterion during the simulation, which helps avoid periodicity artifacts due to conformational changes. The method introduces three new approaches in computational geometry: (1) the derivation of an optimal packing of ensembles, for which the mathematical framework is described; (2) a new method for approximating the α-hull and the contact body for single bodies and ensembles, which is orders of magnitude faster than existing routines, allowing the calculation of packings of large ensembles and/or large bodies; and (3) a routine for searching for a combination of three vectors on a discretized contact body forming a reduced base for a lattice with minimal cell volume. The new algorithms reduce the time required to calculate packings of single bodies from minutes or hours to seconds. The use and efficacy of the method is demonstrated for ensembles obtained from NMR, MD simulations, and elastic network modeling. An implementation of the method has been made available online at http://haddock.chem.uu.nl/services/SQUEEZE/ and as an option for running simulations through the weNMR GRID MD server at http://haddock.science.uu.nl/enmr/services/GROMACS/main.php.
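
    The minimal-cell-volume search in step (3) reduces, for each candidate triple of lattice vectors, to evaluating the volume of the spanned cell, |det([a; b; c])|. A toy brute-force sketch (the candidate vectors are hypothetical, not SQUEEZE-E output):

```python
import numpy as np
from itertools import combinations

def cell_volume(a, b, c):
    """Volume of the triclinic cell spanned by lattice vectors a, b, c."""
    return abs(np.linalg.det(np.array([a, b, c])))

def min_volume_basis(candidates):
    """Brute-force the triple of candidate vectors whose cell has
    the smallest nonzero volume; returns (volume, (a, b, c))."""
    best = None
    for a, b, c in combinations(candidates, 3):
        v = cell_volume(a, b, c)
        if v > 1e-9 and (best is None or v < best[0]):
            best = (v, (a, b, c))
    return best

# Hypothetical discretized contact-body vectors (nm)
vecs = [(4.0, 0.0, 0.0), (0.0, 5.0, 0.0), (0.0, 0.0, 6.0), (4.0, 5.0, 0.0)]
vol, basis = min_volume_basis(vecs)
print(vol)  # 120.0
```

    The actual routine searches a discretized contact body with far more candidates, so pruning rather than exhaustive enumeration is essential there.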

  3. A simulation-optimization model for effective water resources management in the coastal zone

    Spanoudaki, Katerina; Kampanis, Nikolaos

    2015-04-01

    -diffusion equation describing the fate and transport of contaminants introduced in a 3D turbulent flow field to the partial differential equation describing the fate and transport of contaminants in 3D transient groundwater flow systems. The model has been further developed to include the effects of density variations on surface water and groundwater flow, while the already built-in solute transport capabilities are used to simulate salinity interactions. The refined model is based on the finite volume method using a cell-centred structured grid, thus providing flexibility and accuracy in simulating irregular boundary geometries. For addressing water resources management problems, simulation models are usually externally coupled with optimisation-based management models. However, this usually requires a very large number of iterations between the optimisation and simulation models in order to obtain the optimal management solution. As an alternative approach, for improved computational efficiency, an Artificial Neural Network (ANN) is trained as an approximate simulator of IRENE. The trained ANN is then linked to a Genetic Algorithm (GA) based optimisation model for managing salinisation problems in the coastal zone. The linked simulation-optimisation model is applied to a hypothetical study area for performance evaluation. Acknowledgement: The work presented in this paper has been funded by the Greek State Scholarships Foundation (IKY), Fellowships of Excellence for Postdoctoral Studies (Siemens Program), 'A simulation-optimization model for assessing the best practices for the protection of surface water and groundwater in the coastal zone' (2013-2015). References: Spanoudaki, K., Stamou, A.I. and Nanou-Giannarou, A. (2009). Development and verification of a 3-D integrated surface water-groundwater model. Journal of Hydrology, 375(3-4), 410-427. Spanoudaki, K. (2010). Integrated numerical modelling of surface water-groundwater systems (in Greek). Ph.D.
Thesis, National Technical
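
    The surrogate-plus-GA coupling described above can be sketched as follows. To keep the sketch self-contained, a cheap analytic stand-in replaces the trained ANN; the `surrogate` function and the pumping-rate decision variable are illustrative assumptions, not the IRENE model:

```python
import random

def surrogate(pump_rate):
    """Stand-in for the trained ANN: maps a management decision (a normalized
    pumping rate) to a predicted salinity penalty. Purely illustrative."""
    return (pump_rate - 0.6) ** 2 + 0.1

def ga_minimize(f, bounds, pop_size=30, generations=60, seed=1):
    """Minimal real-coded GA: tournament selection, blend crossover, mutation."""
    random.seed(seed)
    lo, hi = bounds
    pop = [random.uniform(lo, hi) for _ in range(pop_size)]
    for _ in range(generations):
        def tourney():
            a, b = random.sample(pop, 2)
            return a if f(a) < f(b) else b
        children = []
        for _ in range(pop_size):
            p1, p2 = tourney(), tourney()
            child = 0.5 * (p1 + p2) + random.gauss(0, 0.05 * (hi - lo))
            children.append(min(hi, max(lo, child)))
        pop = children
    return min(pop, key=f)

best = ga_minimize(surrogate, (0.0, 1.0))
print(round(best, 2))  # close to 0.6
```

    Because each surrogate evaluation is microseconds rather than a full 3D simulation run, the GA can afford the thousands of evaluations it needs.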

  4. An optimization algorithm for simulation-based planning of low-income housing projects

    Mohamed M. Marzouk

    2010-10-01

    Construction of low-income housing projects is a highly repetitive process and is associated with uncertainties that arise from the unavailability of resources. Government agencies and/or contractors have to select a construction system that meets low-income housing project constraints, including project conditions and technical, financial and time constraints. This research presents a framework, using computer simulation, which aids government authorities and contractors in the planning of low-income housing projects. The proposed framework estimates the time and cost required for the construction of low-income housing using pre-cast hollow core slabs with hollow-block bearing walls. Five main components constitute the proposed framework: a network builder module, a construction alternative selection module, a simulation module, an optimization module and a reporting module. The optimization module, which utilizes a genetic algorithm, enables defining different options and ranges of parameters associated with low-income housing projects that influence the duration and total cost of the pre-cast hollow core with hollow-block bearing walls method. A computer prototype, named LIHouse_Sim, was developed in MS Visual Basic 6.0 as proof of concept for the proposed framework. A numerical example is presented to demonstrate the use of the developed framework and to illustrate its essential features.
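
    The simulation module estimates project duration under uncertainty. A minimal sketch of that idea, Monte Carlo sampling serial activity durations from triangular distributions; the activity names and day values are invented for illustration, not taken from the paper:

```python
import random

# Hypothetical activity durations in days: (optimistic, most likely, pessimistic)
activities = {
    "precast_hollow_core": (10, 14, 20),
    "bearing_walls": (8, 10, 15),
    "finishing": (5, 7, 12),
}

def simulate_duration(n=10000, seed=42):
    """Monte Carlo estimate of the mean total duration of serial activities,
    each sampled from a triangular distribution."""
    random.seed(seed)
    totals = [sum(random.triangular(a, c, b) for a, b, c in activities.values())
              for _ in range(n)]
    return sum(totals) / n

print(round(simulate_duration(), 1))  # near (44 + 33 + 24) / 3 ≈ 33.7 days
```

    A cost estimate follows the same pattern with unit costs per activity; the GA in the optimization module would then search over construction alternatives that change these distributions.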

  5. Measurements and simulation for design optimization for low NOx coal-firing system

    E. Bar-Ziv; Y. Yasur; B. Chudnovsky; L. Levin; A. Talanker [Ben-Gurion University of Negev, Beer-Sheva (Israel)

    2003-07-01

    The information required to design a utility steam generator comprises the heat balance, fuel analysis and emissions. These establish the furnace wall configuration, the heat release rates, and the firing technology. The furnace must be sized for (1) sufficient residence time for complete combustion with low NOx, and (2) reduction of the flue gas temperature to minimize ash deposition. To meet these requirements, computational fluid dynamics (CFD) simulations of the combustion process in the furnace were performed and proved to be a powerful tool for this purpose. Still, reliable numerical simulations require careful interpretation and comparison with measurements. We report numerical results and measurements for a 575 MW pulverized coal tangential firing boiler of the Hadera power plant of Israel Electric Corporation (IEC). Measured and calculated values were found to be in reasonable agreement. We used the simulations for optimization and investigated temperature distribution, heat fluxes and concentrations of chemical species. We optimized both the furnace flue gas temperature entering the convective path and the staged residence time for low NOx. We tested mass flow rates through close-coupled and separate overfire air ports, their arrangement, and the coal powder fineness. These parameters can control the mixing rate between the fuel and oxidizer streams and can affect the most important characteristics of the boiler such as temperature regimes, coal burning rate and nitrogen oxidation/reduction. Following this effort, IEC began improving boiler performance by replacing the existing tangential burners with a low-NOx firing system to ensure compliance with current emission regulations.

  6. Simulating quantum search algorithm using vibronic states of I2 manipulated by optimally designed gate pulses

    Ohtsuki, Yukiyoshi

    2010-01-01

    In this paper, molecular quantum computation is numerically studied with the quantum search algorithm (Grover's algorithm) by means of optimal control simulation. Qubits are implemented in the vibronic states of I2, while gate operations are realized by optimally designed laser pulses. The methodological aspects of the simulation are discussed in detail. We show that the algorithm for solving a gate pulse-design problem has the same mathematical form as a state-to-state control problem in the density matrix formalism, which provides monotonically convergent algorithms as an alternative to the Krotov method. The sequential irradiation of separately designed gate pulses leads to the population distribution predicted by Grover's algorithm. The computational accuracy is reduced by the imperfect quality of the pulse design and by the electronic decoherence processes that are modeled by the non-Markovian master equation. However, as long as we focus on the population distribution of the vibronic qubits, we can search for a target state with high probability without introducing error-correction processes during the computation. A generalized gate pulse-design scheme that explicitly includes decoherence effects is outlined, in which we propose a new objective functional together with a solution algorithm that guarantees monotonic convergence.

  7. Optimization of a general-purpose, actively scanned proton beamline for ocular treatments: Geant4 simulations.

    Piersimoni, Pierluigi; Rimoldi, Adele; Riccardi, Cristina; Pirola, Michele; Molinelli, Silvia; Ciocca, Mario

    2015-03-08

    The Italian National Center for Hadrontherapy (CNAO, Centro Nazionale di Adroterapia Oncologica), a synchrotron-based hospital facility, started the treatment of patients within selected clinical trials in late 2011 and 2012 with actively scanned proton and carbon ion beams, respectively. The activation of a new clinical protocol for the irradiation of uveal melanoma using the existing general-purpose proton beamline is foreseen for late 2014. Beam characteristics and patient treatment setup need to be tuned to meet the specific requirements of this treatment technique. The aim of this study is to optimize the CNAO transport beamline by adding passive components and minimizing the air gap to achieve the optimal conditions for ocular tumor irradiation. The CNAO setup with the active and passive components along the transport beamline, as well as a human eye-modeled detector also including a realistic target volume, were simulated using the Monte Carlo Geant4 toolkit. Strongly reducing the air gap between the nozzle and the patient's skin, and inserting a range shifter plus a patient-specific brass collimator at a short distance from the eye, were found to be effective measures to implement. In perspective, this simulation toolkit could also be used as a benchmark for future developments and testing purposes on commercial treatment planning systems.

  8. SIMULATION AND OPTIMIZATION OF THE HYDRAULIC FRACTURING OPERATION IN A HEAVY OIL RESERVOIR IN SOUTHERN IRAN

    REZA MASOOMI

    2017-01-01

    Extraction of oil from some Iranian reservoirs is unsatisfactory because of the high viscosity of their oil or because of formation permeability reduced by asphaltene precipitation and other problems. Hydraulic fracturing increases production in viscous oil reservoirs where the production rate is low, so the method is very important for Iranian reservoirs with these characteristics. In this study, hydraulic fracturing has been compositionally simulated for a heavy oil reservoir in southern Iran. The parameters considered are the fracture half-length, the propagation direction of the cracks and the depth of fracturing. The aim of this study is to find the scenario with the highest recovery factor for this oil reservoir; for this purpose, the length, propagation direction and depth of fracturing have been optimized. The cumulative oil production over the next 10 years has been evaluated with the compositional simulation. Finally, the increase in the ultimate production of this oil reservoir achieved by the optimized hydraulic fracturing is evaluated.

  9. Diffuser Optimization in an Exhaust System with Catalytic Converter for a 110 cc Moped with Fluid Flow CFD Simulation

    Tresna Soemardi

    2010-10-01

    CFD simulation is used to obtain the behavior of the exhaust gas flowing through the catalyst; the results are used to optimize the diffuser geometry so as to achieve a uniform flow distribution over the catalyst, and CFD simulation is also used to analyze the backpressure occurring in the model.

  10. Geometry optimization of a fibrous scaffold based on mathematical modelling and CFD simulation of a dynamic cell culture

    Tajsoleiman, Tannaz; J. Abdekhodaie, Mohammad; Gernaey, Krist

    2016-01-01

    simulation of cartilage cell culture under a perfusion flow, which allows not only to characterize the supply of nutrients and metabolic products inside a fibrous scaffold, but also to assess the overall culture condition and predict the cell growth rate. Afterwards, the simulation results supported finding an optimized design of the scaffold within a new mathematical optimization algorithm that is proposed. The main concept of this optimization routine is to maintain a large effective surface while simultaneously keeping the shear stress level in an operating range that is expected to be supporting growth. Therewith, it should be possible to gradually reach improved culture efficiency as defined in the objective function.

  11. Simulation of neuro-fuzzy model for optimization of combine header setting

    S Zareei

    2016-09-01

    of the reel tine bar from the cutter bar and vertical distance of the reel tine bar from the cutter bar could be recommended so as to minimize header loss. Conclusions In the final step, the designed controller was simulated in SIMULINK. The controller can change the settings of the header components according to their impact on gathering loss; in each step it compares the gathering loss with the optimal value and, if the loss exceeds the optimum, changes the settings again. The simulation results were evaluated as satisfactory.

  12. Simulation and Optimization of the Heat Exchanger for Automotive Exhaust-Based Thermoelectric Generators

    Su, C. Q.; Huang, C.; Deng, Y. D.; Wang, Y. P.; Chu, P. Q.; Zheng, S. J.

    2016-03-01

    In order to enhance the exhaust waste heat recovery efficiency of the automotive exhaust-based thermoelectric generator (TEG) system, a three-segment heat exchanger with a folded-shaped internal structure for the TEG system is investigated in this study. The surface temperature and thermal uniformity of the heat exchanger, the major factors affecting the performance of the TEG system, are analyzed, and the pressure drop along the heat exchanger is also considered. Based on computational fluid dynamics simulations and the temperature distribution, the pressure drop along the heat exchanger is obtained. By treating the length and thickness of the folded plates in each segment of the heat exchanger as variables, response surface methodology and optimization by a multi-objective genetic algorithm are applied to the surface temperature, thermal uniformity, and pressure drop of the folded-shaped heat exchanger. An optimum design based on the optimization is proposed to improve the overall performance of the TEG system. The performance of the optimized heat exchanger under different engine conditions is discussed.
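
    Response surface methodology fits a low-order polynomial to a handful of simulation runs and then optimizes on the cheap fitted surface. A minimal sketch of the fitting step; the design variables, sample points and response values are invented for illustration, not CFD results from the paper:

```python
import numpy as np

# Hypothetical samples: (plate_length_mm, plate_thickness_mm) -> pressure drop (Pa)
X = np.array([[20, 1], [20, 2], [30, 1], [30, 2], [25, 1.5]], dtype=float)
y = np.array([110.0, 140.0, 150.0, 190.0, 145.0])

def fit_quadratic_rsm(X, y):
    """Least-squares fit of a quadratic response surface
    y ~ b0 + b1*x1 + b2*x2 + b3*x1*x2 + b4*x1^2 + b5*x2^2."""
    x1, x2 = X[:, 0], X[:, 1]
    A = np.column_stack([np.ones_like(x1), x1, x2, x1 * x2, x1**2, x2**2])
    coeffs, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coeffs

def predict(coeffs, x1, x2):
    """Evaluate the fitted surface at a new design point."""
    return coeffs @ np.array([1.0, x1, x2, x1 * x2, x1**2, x2**2])

coeffs = fit_quadratic_rsm(X, y)
print(float(predict(coeffs, 25.0, 1.5)))
```

    The multi-objective GA then searches these fitted surfaces (one per response) instead of rerunning CFD for every candidate design.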

  13. Simulation-optimization model of reservoir operation based on target storage curves

    Hong-bin Fang

    2014-10-01

    This paper proposes a new storage allocation rule based on target storage curves. Joint operating rules are also proposed to solve the operation problems of a multi-reservoir system with joint demands and water transfer-supply projects. The joint operating rules include a water diversion rule to determine the amount of diverted water in a period, a hedging rule based on an aggregated reservoir to determine the total release from the system, and a storage allocation rule to specify the release from each reservoir. A simulation-optimization model was established to optimize the key points of the water diversion curves, the hedging rule curves, and the target storage curves using the improved particle swarm optimization (IPSO) algorithm. The multi-reservoir water supply system located in Liaoning Province, China, including a water transfer-supply project, was employed as a case study to verify the effectiveness of the proposed joint operating rules and target storage curves. The results indicate that the proposed operating rules are suitable for the complex system. The storage allocation rule based on target storage curves shows an improved performance with regard to system storage distribution.
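
    The plain particle swarm optimizer that IPSO refines can be sketched in a few lines; the one-dimensional objective below (squared deviation of a storage value from a hypothetical target) is illustrative only, not the paper's rule-curve parametrization:

```python
import random

def pso_minimize(f, bounds, n_particles=20, iters=100, seed=7):
    """Minimal PSO: inertia plus cognitive and social attraction terms."""
    random.seed(seed)
    lo, hi = bounds
    x = [random.uniform(lo, hi) for _ in range(n_particles)]
    v = [0.0] * n_particles
    pbest = x[:]                      # each particle's best position so far
    gbest = min(x, key=f)             # swarm's best position so far
    w, c1, c2 = 0.7, 1.5, 1.5
    for _ in range(iters):
        for i in range(n_particles):
            v[i] = (w * v[i]
                    + c1 * random.random() * (pbest[i] - x[i])
                    + c2 * random.random() * (gbest - x[i]))
            x[i] = min(hi, max(lo, x[i] + v[i]))
            if f(x[i]) < f(pbest[i]):
                pbest[i] = x[i]
            if f(x[i]) < f(gbest):
                gbest = x[i]
    return gbest

# Hypothetical objective: deviation of a storage level from a target curve value
best = pso_minimize(lambda s: (s - 42.0) ** 2, (0.0, 100.0))
print(round(best, 1))
```

    In the paper's setting each particle would encode the full set of curve key points rather than a single scalar, and IPSO adds improvements to this baseline.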

  14. Design, simulation, and optimization of an RGB polarization independent transmission volume hologram

    Mahamat, Adoum Hassan

    Volume phase holographic (VPH) gratings have been designed for use in many areas of science and technology such as optical communication, medical imaging, spectroscopy and astronomy. The goal of this dissertation is to design a volume phase holographic grating that provides diffraction efficiencies of at least 70% across the entire visible spectrum and higher than 90% for red, green, and blue light when the incident light is unpolarized. First, the complete design, simulation and optimization of the volume hologram are presented. The optimization is done using a Monte Carlo analysis to solve for the index modulation needed to provide the higher diffraction efficiencies. The solutions are determined by solving the diffraction efficiency equations of Kogelnik's two-wave coupled-wave theory. The hologram is further optimized using rigorous coupled-wave analysis to correct for the effects of absorption omitted by Kogelnik's method. Second, the fabrication or recording process of the volume hologram is described in detail. The active region of the volume hologram is created by interference of two coherent beams within the thin film. Third, the experimental setup and the measurement of properties including the diffraction efficiencies of the volume hologram and the thickness of the active region are presented. Fourth, the polarimetric response of the volume hologram is investigated. The polarization study is developed to provide insight into the effect of the refractive index modulation on the polarization state and diffraction efficiency of incident light.
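
    For a lossless transmission hologram at Bragg incidence, Kogelnik's theory gives the diffraction efficiency as η = sin²(πΔn·d / (λ·cosθ)), which is the kind of relation the Monte Carlo analysis inverts for Δn. A small sketch; the film parameters below are illustrative, not the dissertation's design values:

```python
import math

def kogelnik_efficiency(delta_n, thickness, wavelength, theta):
    """Diffraction efficiency of a lossless transmission volume hologram at
    Bragg incidence (Kogelnik): eta = sin^2(pi * dn * d / (lambda * cos(theta)))."""
    nu = math.pi * delta_n * thickness / (wavelength * math.cos(theta))
    return math.sin(nu) ** 2

# Efficiency peaks (eta = 1) when nu = pi/2, i.e. dn * d = lambda * cos(theta) / 2
wl = 532e-9          # green wavelength, m
theta = 0.0          # Bragg angle inside the medium, rad (illustrative)
d = 10e-6            # film thickness, m
dn = wl / (2 * d)    # index modulation giving nu = pi/2
print(kogelnik_efficiency(dn, d, wl, theta))  # 1.0
```

    Requiring high η simultaneously at red, green and blue wavelengths is what makes the Δn search nontrivial, since a single (Δn, d) pair sets a different ν at each wavelength.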

  15. Optimization of MBR hydrodynamics for cake layer fouling control through CFD simulation and RSM design.

    Yang, Min; Yu, Dawei; Liu, Mengmeng; Zheng, Libing; Zheng, Xiang; Wei, Yuansong; Wang, Fang; Fan, Yaobo

    2017-03-01

    Membrane fouling is an important issue in membrane bioreactor (MBR) operation. This paper investigates and controls reversible membrane fouling due to cake layer formation and foulant deposition by optimizing MBR hydrodynamics through the combination of computational fluid dynamics (CFD) and design of experiments (DOE). The model was validated by comparing simulations with measurements of liquid velocity and dissolved oxygen (DO) concentration in a lab-scale submerged MBR. The results demonstrated that the sludge concentration is the most influential factor for responses including shear stress, particle deposition propensity (PDP), sludge viscosity and strain rate. A medium sludge concentration of 8820 mg L⁻¹ is optimal for the reduction of reversible fouling in this submerged MBR. The bubble diameter is more decisive than the air flowrate for membrane shear stress due to its role in sludge viscosity. The optimal bubble diameter was around 4.8 mm for both shear stress and PDP.

  16. Genetic algorithm for design and manufacture optimization based on numerical simulations applied to aeronautic composite parts

    Mouton, S.; Ledoux, Y.; Teissandier, D.; Sebastian, P.

    2010-01-01

    A key challenge for the future is to drastically reduce the human impact on the environment. In the aeronautic field, this challenge translates into optimizing the design of the aircraft to decrease its global mass, which leads to the optimization of every constitutive part of the plane. The operation is even more delicate when the material used is a composite. In this case, it is necessary to find a compromise between the strength, the mass and the manufacturing cost of the component. Because of these different kinds of design constraints, it is necessary to assist engineers with a decision support system to determine feasible solutions. In this paper, an approach is proposed based on the coupling of the different key characteristics of the design process and on the consideration of the failure risk of the component. The originality of this work is that the manufacturing deviations due to the RTM process are integrated into the simulation of the assembly process. Two kinds of deviations are identified: volume impregnation (injection phase of the RTM process) and geometrical deviations (curing and cooling phases). The quantification of these deviations and the related failure risk calculation are based on finite element simulations (Pam RTM® and Samcef® software). The use of a genetic algorithm allows the impact of the design choices and their consequences on the failure risk of the component to be estimated. The main focus of the paper is the optimization of tool design. In the framework of decision support systems, the failure risk calculation is used to compare possible industrialization alternatives. The method is applied to a particular part of the airplane structure: a spar unit made of carbon fiber/epoxy composite.

  17. Cross Layer Optimization and Simulation of Smart Grid Home Area Network

    Lipi K. Chhaya

    2018-01-01

    An electrical "Grid" is a network that carries electricity from power plants to customer premises. The Smart Grid is an assimilation of electrical and communication infrastructure, characterized by a bidirectional flow of electricity and information. The Smart Grid is a complex network with a hierarchical architecture, and realization of the complete Smart Grid architecture necessitates a diverse set of communication standards and protocols. Communication network protocols are engineered and established on the basis of a layered approach, with each layer designed to provide an explicit functionality in association with the other layers. The layered approach can be modified with a cross-layer approach for performance enhancement. The complex and heterogeneous architecture of the Smart Grid demands a departure from the primitive approach and the development of an innovative one. This paper describes a joint, or cross-layer, optimization of a Smart Grid home/building area network based on the IEEE 802.11 standard using the RIVERBED OPNET network design and simulation tool. The network performance can be improved by selecting various parameters pertaining to different layers. Simulation results are obtained for parameters such as WLAN throughput, delay, media access delay, and retransmission attempts. The graphical results show that various parameters have divergent effects on network performance. For example, frame aggregation decreases overall delay but also reduces network throughput; to overcome this effect, frame aggregation is used in combination with RTS and fragmentation mechanisms, and the results show that this combination notably improves network performance. A higher buffer size considerably increases throughput but also the delay, so choosing the optimum buffer size is essential for network performance optimization. Parameter optimization significantly enhances the performance of a designed network. This paper is expected to serve

  18. Multi-Objective Patch Optimization with Integrated Kinematic Draping Simulation for Continuous–Discontinuous Fiber-Reinforced Composite Structures

    Benedikt Fengler

    2018-03-01

    Discontinuous fiber-reinforced polymers (DiCoFRP) in combination with local continuous fiber-reinforced polymers (CoFRP) provide both high design freedom and high weight-specific mechanical properties. For the optimization of CoFRP patches on complexly shaped DiCoFRP structures, an optimization strategy is needed that considers manufacturing constraints during the optimization procedure. Therefore, a genetic algorithm is combined with a kinematic draping simulation. To determine the optimal patch position with regard to structural performance and overall material consumption, a multi-objective optimization strategy is used. The resulting Pareto front and a corresponding heat map of the patch position are useful tools for the design engineer to choose the right amount of reinforcement. The proposed patch optimization procedure is applied to two example structures and the effect of different optimization setups is demonstrated.

  19. Optimizing load transfer in multiwall nanotubes through interwall coupling: Theory and simulation

    Byrne, E.M.; Letertre, A.; McCarthy, M.A.; Curtin, W.A.; Xia, Z.

    2010-01-01

    An analytical model is developed to determine the length scales over which load is transferred from outer to inner walls of multiwall carbon nanotubes (MWCNTs) as a function of the amount of bonding between walls. The model predicts that the characteristic length for load transfer scales as l ∼ t√(E/μ̄), where t is the CNT wall spacing, E is the effective wall Young's modulus, and μ̄ is the average interwall shear modulus due to interwall coupling. Molecular dynamics simulations for MWCNTs with up to six walls, and with interwall coupling achieved by interwall sp³ bonding at various densities, provide data against which the model is tested. For interwall bonding having a uniform axial distribution, the analytic and simulation models agree well, showing that continuum mechanics concepts apply down to the atomic scale in this problem. The simulation models show, however, that load transfer is sensitive to natural statistical fluctuations in the spatial distribution of the interwall bonding between pairs of walls, and such fluctuations generally increase the net load transfer length needed to fully load an MWCNT. Optimal load transfer is achieved when bonding is uniformly distributed axially, and all interwall regions have the same shear stiffness, implying a linear decrease in the number of interwall bonds with distance from the outer wall. Optimal load transfer into an n-wall MWCNT is shown to occur over a length of ∼1.5nl. The model can be used to design MWCNTs for structural materials, and to interpret load transfer characteristics deduced from experiments on individual MWCNTs.
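
    The scaling relation l ∼ t√(E/μ̄) can be evaluated directly; the numbers below are illustrative order-of-magnitude values (wall spacing ≈ 0.34 nm, wall modulus ≈ 1 TPa, interwall shear modulus ≈ 1 GPa), not the paper's fitted parameters:

```python
import math

def load_transfer_length(t, E, mu_bar):
    """Characteristic load-transfer length l ~ t * sqrt(E / mu_bar)
    from the shear-lag scaling in the abstract (prefactor of order 1 omitted)."""
    return t * math.sqrt(E / mu_bar)

# Illustrative values in SI units
t = 0.34e-9      # wall spacing, m
E = 1e12         # effective wall Young's modulus, Pa
mu_bar = 1e9     # average interwall shear modulus, Pa
l = load_transfer_length(t, E, mu_bar)
print(l)  # about 1.1e-8 m, i.e. roughly 10 nm
```

    The ∼1.5nl result then says that fully loading all n walls requires a tube segment only a few tens of characteristic lengths long, which is the design-relevant quantity.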

  20. Stochastic resource allocation in emergency departments with a multi-objective simulation optimization algorithm.

    Feng, Yen-Yi; Wu, I-Chin; Chen, Tzu-Li

    2017-03-01

    The number of emergency cases or emergency room visits rapidly increases annually, thus leading to an imbalance in supply and demand and to the long-term overcrowding of hospital emergency departments (EDs). However, current solutions to increase medical resources and improve the handling of patient needs are either impractical or infeasible in the Taiwanese environment. Therefore, EDs must optimize resource allocation given limited medical resources to minimize the average length of stay of patients and medical resource waste costs. This study constructs a multi-objective mathematical model for medical resource allocation in EDs in accordance with emergency flow or procedure. The proposed mathematical model is complex and difficult to solve because its performance value is stochastic; furthermore, the model considers both objectives simultaneously. Thus, this study develops a multi-objective simulation optimization algorithm by integrating a non-dominated sorting genetic algorithm II (NSGA II) with multi-objective computing budget allocation (MOCBA) to address the challenges of multi-objective medical resource allocation. NSGA II is used to investigate plausible solutions for medical resource allocation, and MOCBA identifies effective sets of feasible Pareto (non-dominated) medical resource allocation solutions in addition to effectively allocating simulation or computation budgets. The discrete event simulation model of ED flow is inspired by a Taiwan hospital case and is constructed to estimate the expected performance values of each medical allocation solution as obtained through NSGA II. Finally, computational experiments are performed to verify the effectiveness and performance of the integrated NSGA II and MOCBA method, as well as to derive non-dominated medical resource allocation solutions from the algorithms.

  1. Constellations of Next Generation Gravity Missions: Simulations regarding optimal orbits and mitigation of aliasing errors

    Hauk, M.; Pail, R.; Gruber, T.; Purkhauser, A.

    2017-12-01

    The CHAMP and GRACE missions have demonstrated the tremendous potential for observing mass changes in the Earth system from space. In order to fulfil future user needs, monitoring of mass distribution and mass transport with higher spatial and temporal resolution is required. This can be achieved by a Bender-type Next Generation Gravity Mission (NGGM) consisting of a constellation of satellite pairs flying in (near-)polar and inclined orbits, respectively. For these satellite pairs, the observation concept of the GRACE Follow-on mission, with a laser-based low-low satellite-to-satellite tracking (ll-SST) system, more precise accelerometers and state-of-the-art star trackers, is adopted. By choosing optimal orbit constellations for these satellite pairs, high-frequency mass variations will become observable and temporal aliasing errors from under-sampling will no longer be the limiting factor. As part of the European Space Agency (ESA) study "ADDCON" (ADDitional CONstellation and Scientific Analysis Studies of the Next Generation Gravity Mission), a variety of mission design parameters for such constellations are investigated by full numerical simulations. These simulations aim at investigating the impact of several orbit design choices and at the mitigation of aliasing errors in the gravity field retrieval by co-parametrization for various constellations of Bender-type NGGMs. Choices of orbit design parameters such as altitude profiles during mission lifetime, length of the retrieval period, value of sub-cycles and prograde versus retrograde orbits are investigated as well. Results of these simulations are presented and optimal constellations for NGGMs are identified. Finally, a short outlook towards new geophysical applications, like a near-real-time service for hydrology, is given.

  2. Enhanced nonlinearity interval mapping scheme for high-performance simulation-optimization of watershed-scale BMP placement

    Zou, Rui; Riverson, John; Liu, Yong; Murphy, Ryan; Sim, Youn

    2015-03-01

    Integrated continuous simulation-optimization models can be effective predictors of process-based responses for cost-benefit optimization of best management practice (BMP) selection and placement. However, practical application of simulation-optimization models is computationally prohibitive for large-scale systems. This study proposes an enhanced Nonlinearity Interval Mapping Scheme (NIMS) to solve large-scale watershed simulation-optimization problems several orders of magnitude faster than other commonly used algorithms. An efficient interval response coefficient (IRC) derivation method was incorporated into the NIMS framework to overcome a computational bottleneck. The proposed algorithm was evaluated using a case study watershed in the Los Angeles County Flood Control District. Using a continuous simulation watershed/stream-transport model, Loading Simulation Program in C++ (LSPC), three nested in-stream compliance points (CP)—each with multiple Total Maximum Daily Load (TMDL) targets—were selected to derive optimal treatment levels for each of the 28 subwatersheds, so that the TMDL targets at all the CP were met with the lowest possible BMP implementation cost. A Genetic Algorithm (GA) and NIMS were both applied and compared. The results showed that NIMS took 11 iterations (about 11 min) to complete, with the resulting optimal solution having a total cost of 67.2 million, while each of the multiple GA executions took 21-38 days to reach near-optimal solutions. The best solution obtained among all the GA executions had a minimized cost of 67.7 million, marginally higher but approximately equal to that of the NIMS solution. The results highlight the utility of the approach for decision making in large-scale watershed simulation-optimization formulations.

  3. A two-parameter preliminary optimization study for a fluidized-bed boiler through a comprehensive mathematical simulator

    Rabi, Jose A.; Souza-Santos, Marcio L. de [Universidade Estadual de Campinas, SP (Brazil). Faculdade de Engenharia Mecanica. Dept. de Energia]. E-mails: jrabi@fem.unicamp.br; dss@fem.unicamp.br

    2000-07-01

    Modeling and simulation of fluidized-bed equipment have demonstrated their importance as tools for the design and optimization of industrial equipment. Accordingly, this work carries out an optimization study of a fluidized-bed boiler with the aid of a comprehensive mathematical simulator. The configuration data of the boiler are based on a particular Babcock and Wilcox Co. (USA) test unit. Due to their importance, the number of tubes in the bed section and the air excess are chosen as the parameters upon which the optimization study is based. In turn, the fixed-carbon conversion factor and the boiler efficiency are chosen as two distinct optimization objectives. The results of the two preliminary searches are compared. The present work is intended only as a study of possible routes for future optimization of larger boilers. Nonetheless, the discussion may give some insight into the equipment's behavior. (author)

  4. Optimization of a hydrometric network extension using specific flow, kriging and simulated annealing

    Chebbi, Afef; Kebaili Bargaoui, Zoubeida; Abid, Nesrine; da Conceição Cunha, Maria

    2017-12-01

    At hydrometric stations, water levels are continuously observed and discharge rating curves are regularly updated to obtain accurate river level and discharge observations. An adequate spatial distribution of hydrological gauging stations is of considerable interest for river regime characterization, water infrastructure design, water resources management and ecological surveys. Due to the growth of riverside populations and the associated flood risk, hydrological networks constantly need to be extended. This paper takes advantage of kriging approaches to improve the design of a hydrometric network. The context is the application of an optimization approach using ordinary kriging and simulated annealing (SA) to identify the best locations for new hydrometric gauges. The task at hand is to extend an existing hydrometric network in order to estimate, at ungauged sites, the average specific annual discharge, which is a key basin descriptor. The methodology is developed for the hydrometric network of the transboundary Medjerda River in northern Tunisia. A Geographic Information System (GIS) is adopted to delineate basin limits and centroids; the centroids are used to assign basin locations in the kriging computations. Scenarios in which the size of an existing 12-station network is increased by 1, 2, 3, 4 or 5 new station(s) are investigated using geo-regression and minimization of the variance of kriging errors. Analysis of the optimized locations from one scenario to another shows perfect agreement in the locations of the new sites. The new locations ensure better spatial coverage of the study area, as seen in the increase of both the average and the maximum inter-station distance after optimization. The optimization procedure selects the basins that shift the mean drainage area towards higher specific discharges.
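The core of the approach above — choosing new gauge locations by simulated annealing so as to minimize the mean ordinary-kriging variance — can be sketched as follows. All coordinates, the exponential variogram, and its parameters are assumed for illustration; the real study works with basin centroids and specific annual discharges.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical coordinates (km) of 12 existing stations, 30 candidate sites,
# and a grid of prediction points where estimation variance is evaluated.
existing = rng.uniform(0, 100, size=(12, 2))
candidates = rng.uniform(0, 100, size=(30, 2))
grid = np.array([[x, y] for x in range(5, 100, 10) for y in range(5, 100, 10)], float)

SILL, RANGE = 1.0, 40.0  # assumed exponential variogram parameters

def gamma(h):
    return SILL * (1.0 - np.exp(-h / RANGE))

def mean_ok_variance(stations):
    """Average ordinary-kriging variance over the prediction grid."""
    d = np.linalg.norm(stations[:, None] - stations[None, :], axis=2)
    n = len(stations)
    A = np.ones((n + 1, n + 1)); A[:n, :n] = gamma(d); A[n, n] = 0.0
    total = 0.0
    for p in grid:
        g = gamma(np.linalg.norm(stations - p, axis=1))
        sol = np.linalg.solve(A, np.append(g, 1.0))
        total += sol[:n] @ g + sol[n]        # OK variance = sum(lambda*gamma) + mu
    return total / len(grid)

def anneal(k=3, steps=400, t0=0.05):
    """Pick k new sites among the candidates by simulated annealing."""
    chosen = list(rng.choice(len(candidates), k, replace=False))
    cur = mean_ok_variance(np.vstack([existing, candidates[chosen]]))
    best, best_val = chosen[:], cur
    for step in range(steps):
        t = t0 * (1 - step / steps)          # linear cooling
        new = chosen[:]
        new[rng.integers(k)] = int(rng.integers(len(candidates)))
        if len(set(new)) < k:
            continue                         # skip duplicate-site moves
        val = mean_ok_variance(np.vstack([existing, candidates[new]]))
        if val < cur or rng.random() < np.exp((cur - val) / max(t, 1e-9)):
            chosen, cur = new, val
            if val < best_val:
                best, best_val = new[:], val
    return best, best_val

base = mean_ok_variance(existing)
sites, val = anneal()
```

Adding stations can only add information, so the optimized variance should fall below the 12-station baseline.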

  5. Minimizing patient waiting time in emergency department of public hospital using simulation optimization approach

    Ibrahim, Ireen Munira; Liong, Choong-Yeun; Bakar, Sakhinah Abu; Ahmad, Norazura; Najmuddin, Ahmad Farid

    2017-04-01

    The emergency department (ED) is the main unit of a hospital that provides emergency treatment. Operating 24 hours a day with a limited number of resources adds to the already strained situation in some hospitals in Malaysia. Delays in receiving treatment, which force patients to wait for long periods, are among the most frequent complaints against government hospitals. The ED management therefore needs a model for examining and understanding resource capacity that can assist hospital managers in reducing patient waiting times. A simulation model was developed based on 24 hours of data collection. The model, developed in Arena, replicates the actual ED operations of a public hospital in Selangor, Malaysia. The OptQuest optimization tool in Arena is used to find combinations of resource numbers that minimize patient waiting time while increasing the number of patients served. The simulation model was then modified for improvement based on the OptQuest results. The improved model significantly increases the ED's efficiency, with an average 32% reduction in patient waiting times and a 25% increase in the total number of patients served.
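A much-simplified version of such a model can be written without Arena: a single-queue, multiple-server sketch with exponential inter-arrival and treatment times. The rates and doctor counts below are assumptions for illustration; an OptQuest-style search would simply loop over the resource counts and pick the cheapest configuration meeting a waiting-time target.

```python
import heapq, random

def simulate_ed(n_doctors, n_patients=5000, arrival_rate=8.0, service_rate=3.0, seed=7):
    """Toy single-queue ED model: exponential inter-arrivals and treatment
    times (per hour), first-come first-served, n_doctors parallel servers.
    Returns the average patient waiting time in hours."""
    random.seed(seed)
    t = 0.0
    free_at = [0.0] * n_doctors          # next-free time per doctor (min-heap)
    heapq.heapify(free_at)
    total_wait = 0.0
    for _ in range(n_patients):
        t += random.expovariate(arrival_rate)        # next arrival
        doctor_free = heapq.heappop(free_at)
        start = max(t, doctor_free)                  # wait if all doctors busy
        total_wait += start - t
        heapq.heappush(free_at, start + random.expovariate(service_rate))
    return total_wait / n_patients

wait_3 = simulate_ed(3)   # heavily loaded: utilization about 0.89
wait_5 = simulate_ed(5)   # extra capacity cuts the queue sharply
```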

  6. The Optimization of the Local Public Policies’ Development Process Through Modeling And Simulation

    Minodora URSĂCESCU

    2012-06-01

    Full Text Available The development of local public policies in Romania is carried out empirically: strategic management practice in this domain is not based on a scientific instrument capable of anticipating and evaluating the results of implementing a local public policy in a needs-policies-effects logic. Starting from this motivation, the purpose of the paper is to reconceptualize the public policy process on the functioning principles of dynamic systems with feedback, by means of mathematical modeling and simulation techniques. The research is therefore directed towards developing an optimization method for the local public policy development process, using mathematical modeling and simulation techniques as instruments. The main results of the research are, on the one hand, a new conception of the local public policy process and, on the other, the conceptual model of a complex software product that will permit parameterized modeling of the policy development process in a virtual environment. The purpose of this software product is to model and simulate each type of local public policy, taking into account the characteristics of the respective policy as well as the values of the parameters of its application environment at a given moment.

  7. Dynamic Simulation and Exergo-Economic Optimization of a Hybrid Solar–Geothermal Cogeneration Plant

    Francesco Calise

    2015-04-01

    Full Text Available This paper presents a dynamic simulation model and a parametric analysis of a solar-geothermal hybrid cogeneration plant based on an Organic Rankine Cycle (ORC powered by a medium-enthalpy geothermal resource and a Parabolic Trough Collector solar field. The fluid temperature supplying heat to the ORC varies continuously as a function of the solar irradiation, affecting both the electrical and thermal energies produced by the system. Thus, a dynamic simulation was performed. The ORC model, developed in Engineering Equation Solver, is based on zero-dimensional energy and mass balances and includes specific algorithms to evaluate the off-design system performance. The overall simulation model of the solar-geothermal cogenerative plant was implemented in the TRNSYS environment. Here, the ORC model is imported, whereas the models of the other components of the system are developed on the basis of literature data. Results are analyzed on different time bases presenting energetic, economic and exergetic performance data. Finally, a rigorous optimization has been performed to determine the set of system design/control parameters minimizing simple payback period and exergy destruction rate. The system is profitable when a significant amount of the heat produced is consumed. The highest irreversibilities are due to the solar field and to the heat exchangers.

  8. Design and Optimization of Large Accelerator Systems through High-Fidelity Electromagnetic Simulations

    Ng, Cho; Akcelik, Volkan; Candel, Arno; Chen, Sheng; Ge, Lixin; Kabel, Andreas; Lee, Lie-Quan; Li, Zenghai; Prudencio, Ernesto; Schussman, Greg; Uplenchwar, Ravi; Xiao, Liling; Ko, Kwok; Austin, T.; Cary, J.R.; Ovtchinnikov, S.; Smith, D.N.; Werner, G.R.; Bellantoni, L.; TechX Corp.; Fermilab

    2008-01-01

    SciDAC-1, with its support for the 'Advanced Computing for 21st Century Accelerator Science and Technology' (AST) project, witnessed dramatic advances in electromagnetic (EM) simulations for the design and optimization of important accelerators across the Office of Science. In SciDAC-2, EM simulations continue to play an important role in the 'Community Petascale Project for Accelerator Science and Simulation' (ComPASS), through close collaborations with SciDAC CETs/Institutes in computational science. Existing codes will be improved and new multi-physics tools will be developed to model large accelerator systems with unprecedented realism and high accuracy using computing resources at the petascale. These tools target the most challenging problems facing the ComPASS project. Supported by advances in computational science research, they have been successfully applied to the International Linear Collider (ILC) and the Large Hadron Collider (LHC) in High Energy Physics (HEP), the JLab 12-GeV Upgrade in Nuclear Physics (NP), as well as the Spallation Neutron Source (SNS) and the Linac Coherent Light Source (LCLS) in Basic Energy Sciences (BES).

  9. Monte Carlo simulations for optimal light delivery in photodynamic therapy of non-melanoma skin cancer

    Valentine, R M; Ibbotson, S H; Moseley, H; Wood, K; Brown, C T A

    2012-01-01

    The choice of light source is important for the efficacy of photodynamic therapy (PDT) of non-melanoma skin cancer. We simulated the photodynamic dose (PDD) delivered to a tumour during PDT using theoretical radiation transfer simulations performed via our 3D Monte Carlo radiation transfer (MCRT) model for a range of light sources with light doses up to 75 J cm⁻². The PDD delivered following superficial irradiation from (A) non-laser light sources, (B) monochromatic light, (C) alternate beam diameters and (D) re-positioning of the tumour within the tissue was computed. (A) The final PDD deposited to the tumour at a depth of 2 mm by the Paterson light source was 2.75, 2.50 and 1.04 times greater than the Waldmann 1200, Photocure and Aktilite, respectively. (B) Tumour necrosis occurred at a depth of 2.23 mm and increased to 3.81 mm for wavelengths 405 and 630 nm, respectively. (C) Increasing the beam diameter from 10 to 50 mm had very little effect on depth of necrosis. (D) As expected, necrosis depths were reduced when the tumour was re-positioned deeper into the tissue. These MCRT simulations show clearly the importance of choosing the correct light source to ensure optimal light delivery to achieve tumour necrosis. (paper)
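The depth-of-necrosis calculation can be illustrated with a toy Monte Carlo sketch in which scattering is folded into a single effective attenuation coefficient. The coefficients, threshold dose, and photon counts below are assumptions for illustration only and are far cruder than a full 3D MCRT model; the sketch simply reproduces the qualitative result that 630 nm light necroses deeper than 405 nm.

```python
import random

random.seed(42)

MU_EFF = {405: 2.2, 630: 0.9}   # assumed effective attenuation in skin (mm^-1)

def simulated_dose(depth_mm, wavelength, photons=50_000, surface_dose=75.0):
    """Fraction of launched photons surviving to depth_mm, scaled by the
    surface light dose (J cm^-2). Each photon samples an exponential path
    length, so this is Beer-Lambert attenuation estimated by Monte Carlo."""
    mu = MU_EFF[wavelength]
    reached = sum(1 for _ in range(photons) if random.expovariate(mu) >= depth_mm)
    return surface_dose * reached / photons

def necrosis_depth(wavelength, threshold=8.0, step=0.05):
    """Deepest point where the delivered dose still exceeds an assumed
    photodynamic threshold dose (J cm^-2)."""
    depth = 0.0
    while simulated_dose(depth + step, wavelength) >= threshold:
        depth += step
    return depth

d405 = necrosis_depth(405)
d630 = necrosis_depth(630)
```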

  10. Design and optimization of large accelerator systems through high-fidelity electromagnetic simulations

    Ng, C; Akcelik, V; Candel, A; Chen, S; Ge, L; Kabel, A; Lee, Lie-Quan; Li, Z; Prudencio, E; Schussman, G; Uplenchwar, R; Xiao, L; Ko, K; Austin, T; Cary, J R; Ovtchinnikov, S; Smith, D N; Werner, G R; Bellantoni, L

    2008-01-01

    SciDAC-1, with its support for the 'Advanced Computing for 21st Century Accelerator Science and Technology' project, witnessed dramatic advances in electromagnetic (EM) simulations for the design and optimization of important accelerators across the Office of Science. In SciDAC-2, EM simulations continue to play an important role in the 'Community Petascale Project for Accelerator Science and Simulation' (ComPASS), through close collaborations with SciDAC Centers and Institutes in computational science. Existing codes will be improved and new multi-physics tools will be developed to model large accelerator systems with unprecedented realism and high accuracy using computing resources at the petascale. These tools target the most challenging problems facing the ComPASS project. Supported by advances in computational science research, they have been successfully applied to the International Linear Collider and the Large Hadron Collider in high energy physics, the JLab 12-GeV Upgrade in nuclear physics, and the Spallation Neutron Source and the Linac Coherent Light Source in basic energy sciences.

  11. Optimization of Multiple Traveling Salesman Problem Based on Simulated Annealing Genetic Algorithm

    Xu Mingji

    2017-01-01

    Full Text Available Hierarchical genetic algorithms are very effective for solving multi-variable optimization problems. This thesis analyzes the advantages and disadvantages of the hierarchical genetic algorithm and puts forward an improved simulated annealing genetic algorithm. The new algorithm is applied to the multiple traveling salesman problem, improving solution quality. First, it improves the hierarchical chromosome structure of the redundant hierarchical algorithm and proposes a suffix chromosome design. Second, to address the premature convergence of genetic algorithms, it proposes self-identifying crossover and mutation operators. Third, to strengthen the weak local search ability of genetic algorithms, it stretches the fitness function by hybridizing the genetic algorithm with simulated annealing. Fourth, it simulates problems with N traveling salesmen and M cities to verify the algorithm's feasibility. The simulations show that the improved algorithm converges quickly to a good global solution, which makes it promising for practical use.
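The simulated-annealing ingredient of such a hybrid can be sketched on a plain single-salesman TSP. The random city coordinates, 2-opt neighborhood, and cooling schedule below are assumptions of this sketch, not the paper's design.

```python
import math, random

random.seed(3)

# Hypothetical city coordinates in the unit square.
cities = [(random.random(), random.random()) for _ in range(30)]

def tour_length(tour):
    # Closed tour: tour[i-1] wraps around to the last city when i == 0.
    return sum(math.dist(cities[tour[i]], cities[tour[i - 1]])
               for i in range(len(tour)))

def anneal(t0=1.0, cooling=0.999, steps=20_000):
    tour = list(range(len(cities)))
    random.shuffle(tour)
    cur = tour_length(tour)
    best, best_len, t = tour[:], cur, t0
    for _ in range(steps):
        i, j = sorted(random.sample(range(len(cities)), 2))
        cand = tour[:i] + tour[i:j][::-1] + tour[j:]   # 2-opt style reversal
        cand_len = tour_length(cand)
        # Metropolis rule: always accept improvements; accept worse tours
        # with a probability that decays as the temperature cools.
        if cand_len < cur or random.random() < math.exp((cur - cand_len) / t):
            tour, cur = cand, cand_len
            if cur < best_len:
                best, best_len = tour[:], cur
        t *= cooling
    return best, best_len

best, best_len = anneal()
```

The fitness-stretching idea in the abstract corresponds to the temperature-scaled acceptance probability above.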

  12. Control of conducting polymer actuators without physical feedback: simulated feedback control approach with particle swarm optimization

    Xiang, Xingcan; Mutlu, Rahim; Alici, Gursel; Li, Weihua

    2014-01-01

    Conducting polymer actuators have shown significant potential for articulating micro instruments, manipulation devices, and robots. However, implementing a feedback control strategy to enhance their positioning ability and accuracy in any application requires a feedback sensor, which is extremely large compared to the actuators themselves. This paper therefore proposes a new sensorless control scheme that does not use a position feedback sensor. With the help of system identification techniques and particle swarm optimization, the control scheme, which we call the simulated feedback control system, showed satisfactory command tracking performance for the conducting polymer actuator's step and dynamic displacement responses, especially under a disturbance, by using a simulated rather than a physical feedback loop. The primary contribution of this study is to propose and experimentally evaluate the simulated feedback control scheme for a class of conducting polymer actuators known as tri-layer polymer actuators, which can operate both in dry and wet media. This control approach can also be extended to other smart actuators or systems for which feedback control based on external sensing is impractical. (paper)
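The role of particle swarm optimization in system identification can be illustrated with a toy sketch that fits a first-order step-response model to synthetic "measured" displacement data. The model, its true parameters, the noise level, and the PSO settings are all assumptions of this sketch, not the paper's identified actuator model.

```python
import math, random

random.seed(5)

# Synthetic step response of an actuator-like first-order system with
# assumed true gain K = 2.0 and time constant tau = 0.8, plus noise.
ts = [0.1 * i for i in range(50)]
measured = [2.0 * (1 - math.exp(-t / 0.8)) + random.gauss(0, 0.02) for t in ts]

def error(params):
    K, tau = params
    return sum((K * (1 - math.exp(-t / tau)) - y) ** 2
               for t, y in zip(ts, measured))

def pso(n=20, iters=100, w=0.7, c1=1.5, c2=1.5):
    pos = [[random.uniform(0.1, 5), random.uniform(0.1, 5)] for _ in range(n)]
    vel = [[0.0, 0.0] for _ in range(n)]
    pbest = [p[:] for p in pos]              # each particle's best position
    gbest = min(pos, key=error)[:]           # swarm's best position
    for _ in range(iters):
        for i in range(n):
            for d in range(2):
                vel[i][d] = (w * vel[i][d]
                             + c1 * random.random() * (pbest[i][d] - pos[i][d])
                             + c2 * random.random() * (gbest[d] - pos[i][d]))
                pos[i][d] = max(0.05, pos[i][d] + vel[i][d])  # keep params positive
            if error(pos[i]) < error(pbest[i]):
                pbest[i] = pos[i][:]
                if error(pos[i]) < error(gbest):
                    gbest = pos[i][:]
    return gbest

K_fit, tau_fit = pso()
```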

  13. A Comparison of Simulated Annealing, Genetic Algorithm and Particle Swarm Optimization in Optimal First-Order Design of Indoor TLS Networks

    Jia, F.; Lichti, D.

    2017-09-01

    The optimal network design problem has been well addressed in geodesy and photogrammetry but has not received the same attention for terrestrial laser scanner (TLS) networks. The goal of this research is to develop a complete design system that can automatically provide an optimal plan for high-accuracy, large-volume scanning networks. The aim in this paper is to use three heuristic optimization methods, simulated annealing (SA), genetic algorithm (GA) and particle swarm optimization (PSO), to solve the first-order design (FOD) problem for a small-volume indoor network and to compare their performance. The room is simplified as discretized wall segments and possible viewpoints. Each possible viewpoint is evaluated with a score table representing the wall segments visible from it under scanning geometry constraints. The goal is to find a minimum number of viewpoints that achieve complete coverage of all wall segments with a minimal sum of incidence angles. The different methods have been implemented and compared in terms of solution quality, runtime and repeatability. The experimental environment was simulated from a room on the University of Calgary campus where multiple scans are required due to occlusions from interior walls. The results show that PSO and GA provide similar solutions, while SA does not guarantee an optimal solution within a limited number of iterations. Overall, GA is considered the best choice for this problem, based on its capability of providing an optimal solution with fewer parameters to tune.
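A greedy baseline for the same first-order design problem (not one of the three heuristics compared in the paper) can be sketched as a set-cover: at each step, pick the viewpoint covering the most still-uncovered wall segments under range and incidence-angle constraints. The rectangular room, segment midpoints, candidate viewpoints, and both constraint values below are invented for illustration.

```python
import math

# Hypothetical room (10 x 8 m): wall-segment midpoints and candidate viewpoints.
segments = [(x, 0.0) for x in range(10)] + [(x, 8.0) for x in range(10)] \
         + [(0.0, y) for y in range(1, 8)] + [(10.0, y) for y in range(1, 8)]
viewpoints = [(x, y) for x in (2, 5, 8) for y in (2, 4, 6)]
MAX_RANGE, MAX_INCIDENCE = 5.0, math.radians(65)

def visible(vp, seg):
    """A segment counts as covered if it is within range and the beam meets
    the wall at an acceptable incidence angle (walls are axis-aligned)."""
    dx, dy = seg[0] - vp[0], seg[1] - vp[1]
    dist = math.hypot(dx, dy)
    if dist > MAX_RANGE or dist == 0:
        return False
    # Wall normal: horizontal walls -> (0,1), vertical walls -> (1,0).
    normal = (0, 1) if seg[1] in (0.0, 8.0) else (1, 0)
    cos_inc = abs(dx * normal[0] + dy * normal[1]) / dist
    return math.acos(min(1.0, cos_inc)) <= MAX_INCIDENCE

def greedy_plan():
    uncovered = set(range(len(segments)))
    plan = []
    while uncovered:
        best = max(viewpoints,
                   key=lambda v: sum(visible(v, segments[s]) for s in uncovered))
        gained = {s for s in uncovered if visible(best, segments[s])}
        if not gained:
            break  # remaining segments are unreachable from any viewpoint
        plan.append(best)
        uncovered -= gained
    return plan, uncovered

plan, uncovered = greedy_plan()
```

The heuristics in the paper search the same score table globally instead of committing to locally best viewpoints.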

  14. Optimization of parameters in the simulation of the interdiffusion layer growth in Al-U couples

    Kniznik, Laura; Alonso, Paula R.; Gargano, Pablo H.; Rubiolo, Gerardo H.

    2009-01-01

    U-Mo alloy dispersed in aluminum is considered as a high-U-density fuel for research reactors. In-pile and out-of-pile experiments showed a reaction layer at the U-Mo/Al interface with formation of the intermetallic compounds Al₂U, Al₃U and Al₄U. Under irradiation, porosity causes unacceptable swelling of the fuel plate. The growth kinetics of the intermetallic compounds at the U-Mo/Al interface is treated in the Al₃U/Al couple as a planar moving-boundary problem driven by diffusion of Al and U atoms in the direction perpendicular to the interface. Using data from the literature, we built a thermodynamic database to be read by the Thermocalc code to calculate phase equilibria. The diffusion problem was solved with the DICTRA simulation package, which combines data evaluated by Thermocalc with a mobility database. In a previous work we built preliminary databases for both free energies and mobilities. In the present work, we adjust the parameters against experimental thermodynamic equilibria and concentration profiles from the literature, and we satisfactorily simulate the growth of the Al₄U phase. (author)

  15. Silicon alleviates simulated acid rain stress of Oryza sativa L. seedlings by adjusting physiology activity and mineral nutrients.

    Ju, Shuming; Wang, Liping; Yin, Ningning; Li, Dan; Wang, Yukun; Zhang, Cuiying

    2017-11-01

    Silicon (Si) acts as a modulator in plants under abiotic stresses such as acid rain. To understand how silicon affects rice (Oryza sativa L.) exposed to simulated acid rain (SAR) stress, the growth, physiological activity, and mineral nutrient content in rice leaves were investigated. The results showed that combined treatments with Si (1.0, 2.0, or 4.0 mM) and SAR (pH 4.0, 3.0, or 2.0) markedly improved rice growth compared with SAR treatment alone. Incorporating Si into the SAR treatment decreased malondialdehyde (MDA) content; increased soluble protein and proline contents; promoted CAT, POD, SOD, and APX activities; and maintained the balance of K, Ca, Mg, Fe, Zn, and Cu contents in leaves of rice seedlings under SAR stress. The moderate Si concentration (2.0 mM) was more effective than the low and high concentrations (1.0 and 4.0 mM). Application of Si could therefore be a useful strategy for maintaining crop productivity in acid rain regions.

  16. Optimization of the energy production for the Baghdara hydropower plant in Afghanistan using simulated annealing; Optimierung der Energieerzeugung fuer das Wasserkraftwerk Baghdara in Afghanistan mit simulated annealing

    Ayros, E.; Hildebrandt, H.; Peissner, K. [Fichtner GmbH und Co. KG, Stuttgart (Germany). Wasserbau und Wasserkraftwerke; Bardossy, A. [Stuttgart Univ. (Germany). Inst. fuer Wasserbau

    2008-07-01

    Simulated annealing (SA) is an optimization method analogous to thermodynamic annealing and a new alternative for optimizing the energy production of hydropower systems with storage capability. The SA algorithm is presented here and applied to maximize the energy production of the Baghdara hydropower plant in Afghanistan. The results were also compared with those of a non-linear programming (NLP) optimization method. (orig.)

  17. Adjustment Criterion and Algorithm in Adjustment Model with Uncertainty

    SONG Yingchun

    2015-02-01

    Full Text Available Uncertainty often exists in the process of obtaining measurement data, which affects the reliability of parameter estimation. This paper establishes a new adjustment model in which uncertainty is incorporated into the functional model as a parameter. A new adjustment criterion and its iterative algorithm are given, based on the uncertainty propagation law for the residual errors, in which the maximum possible uncertainty is minimized. The paper also analyzes, with examples, the different adjustment criteria and the features of the optimal solutions of least-squares adjustment, uncertainty adjustment and total least-squares adjustment. Existing error theory is thus extended with a new method for processing observational data with uncertainty.
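For contrast with the uncertainty criterion proposed above, the classical least-squares adjustment it generalizes can be shown in a few lines. The toy leveling network, its observed values, and the weights are assumptions for illustration.

```python
import numpy as np

# Toy leveling network: unknown heights h1, h2 with three observations:
# h1 = 10.02, h2 = 19.95, h2 - h1 = 9.98 (invented values), where the
# height-difference observation is twice as precise (weight 2).
A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [-1.0, 1.0]])
l = np.array([10.02, 19.95, 9.98])
P = np.diag([1.0, 1.0, 2.0])          # weight matrix

# Gauss-Markov least-squares adjustment: x = (A^T P A)^{-1} A^T P l
N = A.T @ P @ A                        # normal-equation matrix
x = np.linalg.solve(N, A.T @ P @ l)    # adjusted parameters
residuals = A @ x - l                  # adjusted minus observed
```

Here the weighted sum of squared residuals is minimized; the uncertainty adjustment in the paper instead minimizes the maximum possible uncertainty propagated into the residuals.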

  18. Optimizing clinical trial supply requirements: simulation of computer-controlled supply chain management.

    Peterson, Magnus; Byrom, Bill; Dowlman, Nikki; McEntegart, Damian

    2004-01-01

    Computer-controlled systems are commonly used in clinical trials to control dispensing and manage site inventories of trial supplies. Typically such systems are used with an interactive telephone or web system that provides an interface with the study site. Realizing the maximum medication savings associated with this approach has, in the past, been problematic, as it has been difficult to fully estimate medication requirements due to the complexity of these algorithms and the inherent variation in the clinical trial recruitment process. We describe the traditional and automated methods of supplying sites. We detail a simulation approach that models the automated system, and design a number of simulation experiments with this model to investigate the supply strategy properties that influence medication overage and other strategy performance metrics. The computer-controlled medication system gave superior performance to the traditional method. In one example, a 75% overage of wasted medication in the traditional system was associated with a higher supply failure rate than an automated system strategy with an overage of 47%. In a further example, we demonstrate that using a country-stratified as opposed to a site-stratified scheme affects the number of deliveries and the probability of supply failure more than the amount of drug wasted, with respective increases of 20%, 2300% and 4%. Medication savings with automated systems are particularly significant in repeat-dispensing designs: we show that the number of packs required can fall by as much as 50% if one uses a predictive medication algorithm. We conclude that a computer-controlled supply chain enables medication savings to be realized and that the distribution of these savings can be quantified using a simulation model. The simulation model can be used to optimize the pre-study medication supply strategy and for mid-study monitoring using real-time data contained in the study database.
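The trigger-based resupply logic such systems implement can be sketched with a toy single-site Monte Carlo. The demand distribution, trigger levels, resupply quantities, and one-week lead time below are invented, and the model is far simpler than the one described in the paper; it only illustrates the tradeoff between shipment frequency and on-site inventory.

```python
import random

def simulate_site(trigger, resupply_to, n_weeks=52, seed=11):
    """One site's pack inventory under an automated trigger scheme: when
    end-of-week stock falls to `trigger` or below, a shipment arrives at
    the start of the next week and tops stock back up to `resupply_to`.
    Returns (unmet demand, total packs shipped, number of deliveries)."""
    random.seed(seed)
    stock, pending = resupply_to, False
    stockouts = deliveries = 0
    shipped = resupply_to                        # initial seeding stock
    for _ in range(n_weeks):
        if pending:                              # last week's order arrives
            shipped += resupply_to - stock
            deliveries += 1
            stock, pending = resupply_to, False
        demand = random.choice([0, 1, 1, 2, 2, 3])   # packs dispensed this week
        if demand > stock:
            stockouts += demand - stock          # patients we failed to supply
            stock = 0
        else:
            stock -= demand
        if stock <= trigger and not pending:
            pending = True                       # place a resupply order
    return stockouts, shipped, deliveries

lean = simulate_site(trigger=3, resupply_to=8)    # small, frequent shipments
bulky = simulate_site(trigger=8, resupply_to=20)  # large, infrequent shipments
```

Because the trigger level is at least the maximum weekly demand and the lead time is one week, both strategies avoid supply failures here; they differ in how often they ship.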

  19. Experimental and simulation optimization analysis of the Whipple shields against shaped charge

    Hussain, G.; Hameed, A.; Horsfall, I.; Barton, P.; Malik, A. Q.

    2012-06-01

    Whipple shields are occasionally used to protect space stations and satellites against meteoroids and orbital debris. Each layer of a Whipple shield dissipates part of the high-speed projectile's energy, either by breaking up the projectile or by absorbing its energy. Similarly, this investigation uses Whipple shields, with slight design modifications, against shaped charges to protect light armour such as infantry fighting vehicles. The unsteady multiple interactions of the shaped charge jet with the Whipple shield package, relative to a monolithic homogeneous target, are analysed to optimize the shield thickness. Simulations indicate that a shield thickness of 0.75 mm offers an optimum configuration against the shaped charge, and experiments support this finding.

  20. SAFTAC, Monte-Carlo Fault Tree Simulation for System Design Performance and Optimization

    Crosetti, P.A.; Garcia de Viedma, L.

    1976-01-01

    1 - Description of problem or function: SAFTAC is a Monte Carlo fault tree simulation program that provides a systematic approach for analyzing system design, performing trade-off studies, and optimizing system changes or additions. 2 - Method of solution: SAFTAC assumes an exponential failure distribution for basic input events and a choice of either Gaussian distributed or constant repair times. The program views the system represented by the fault tree as a statistical assembly of independent basic input events, each characterized by an exponential failure distribution and, if used, a constant or normal repair distribution. 3 - Restrictions on the complexity of the problem: The program is dimensioned to handle 1100 basic input events and 1100 logical gates. It can be re-dimensioned to handle up to 2000 basic input events and 2000 logical gates within the existing core memory.
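The sampling at the heart of such a Monte Carlo fault-tree simulation can be sketched for a three-event tree. The gate structure, failure rates, and mission time are invented for illustration, and repair is omitted (SAFTAC additionally models constant or Gaussian repair times); an analytic cross-check is included since the toy tree is small enough to solve exactly.

```python
import math, random

random.seed(13)

# Toy fault tree: TOP = (pump_A AND pump_B) OR valve, with exponentially
# distributed times to failure (rates per hour, invented for illustration).
RATES = {"pump_A": 1e-3, "pump_B": 1e-3, "valve": 1e-4}
MISSION_TIME = 1000.0  # hours

def top_event_occurs():
    # Sample each basic event's failure time; it "fails" if within the mission.
    failed = {name: random.expovariate(rate) <= MISSION_TIME
              for name, rate in RATES.items()}
    return (failed["pump_A"] and failed["pump_B"]) or failed["valve"]

N = 100_000
p_top = sum(top_event_occurs() for _ in range(N)) / N

# Analytic cross-check: P(fail by T) = 1 - exp(-rate * T) per basic event,
# combined through the AND/OR gates assuming independence.
q = {k: 1 - math.exp(-r * MISSION_TIME) for k, r in RATES.items()}
q_and = q["pump_A"] * q["pump_B"]
p_exact = q_and + q["valve"] - q_and * q["valve"]
```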

  1. Optimization of droplets for UV-NIL using coarse-grain simulation of resist flow

    Sirotkin, Vadim; Svintsov, Alexander; Zaitsev, Sergey

    2009-03-01

    A mathematical model and numerical method are described that make it possible to simulate the ultraviolet ("step and flash") nanoimprint lithography (UV-NIL) process adequately, even on standard personal computers. The model is derived from the 3D Navier-Stokes equations, with the understanding that the resist motion is largely directed along the substrate surface and characterized by ultra-low Reynolds numbers. For the numerical approximation of the model, a special coarse-grain finite difference method is applied. A coarse-grain modeling tool for detailed analysis of resist spreading in UV-NIL at the structure-scale level is tested. The results demonstrate the tool's ability to calculate the optimal droplet dispensing for a given stamp design and process parameters. This dispensing provides uniformly filled areas and a homogeneous residual layer thickness in UV-NIL.

  2. Flow injection analysis simulations and diffusion coefficient determination by stochastic and deterministic optimization methods.

    Kucza, Witold

    2013-07-25

    Stochastic and deterministic simulations of dispersion in cylindrical channels with Poiseuille flow are presented. The random walk (stochastic) and uniform dispersion (deterministic) models have been used to compute flow injection analysis responses. These methods, coupled with the genetic algorithm and the Levenberg-Marquardt optimization method, respectively, have been applied to the determination of diffusion coefficients. The diffusion coefficients of fluorescein sodium, potassium hexacyanoferrate and potassium dichromate have been determined by means of the presented methods and FIA responses available in the literature. The best-fit results agree with each other and with experimental data, validating both approaches.
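The stochastic (random-walk) model can be sketched as follows: particles advect with the local parabolic velocity profile while taking radial diffusive steps, so a larger diffusion coefficient averages the velocity profile more effectively and narrows the axial spread of the injected plug. The channel dimensions, velocity, time step, and diffusion coefficients below are assumptions for illustration.

```python
import math, random

random.seed(17)

R = 0.4e-3   # channel radius (m), invented for illustration
U = 1.0e-2   # mean flow velocity (m/s)

def axial_spread(D, n=400, T=5.0, dt=2.5e-3):
    """Random-walk dispersion on Poiseuille flow: each particle advects with
    the local parabolic velocity and takes radial diffusive steps, reflecting
    at the wall and at the axis. Returns the axial standard deviation (m)."""
    xs = []
    for _ in range(n):
        r, x = random.uniform(0.0, R), 0.0
        for _ in range(int(T / dt)):
            x += 2.0 * U * (1.0 - (r / R) ** 2) * dt        # local axial velocity
            r = abs(r + random.gauss(0.0, math.sqrt(2.0 * D * dt)))
            if r > R:
                r = 2.0 * R - r                             # reflect at the wall
        xs.append(x)
    mean = sum(xs) / n
    return math.sqrt(sum((x - mean) ** 2 for x in xs) / n)

sigma_slow = axial_spread(5e-10)   # fluorescein-like diffusion coefficient (m^2/s)
sigma_fast = axial_spread(5e-9)    # ten times faster radial mixing
```

This sensitivity of the response width to D is exactly what lets an optimizer back out diffusion coefficients from measured FIA peaks.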

  3. Railway optimal network simulation for the development of regional transport-logistics system

    Mikhail Borisovich Petrov

    2013-12-01

    Full Text Available The dependence of logistics on mineral fuel is a stable tendency of regional development; nevertheless, when making strategic logistics plans for the regions, it is necessary to provide for alternative power-supply sources together with population density, the peculiarities of the transport infrastructure, and forecasts of demographic change. Using the example of the timber processing complex of the Sverdlovsk region, the authors suggest an algorithm for deciding the optimal allocation of logistics infrastructure. The problem of organizing the regional railway network at the stage of slow transition from prolonged stagnation to new development is addressed. The configurations of the transport networks of Pacific Rim countries that are developing successfully today are analyzed. The authors present results of regional transport network simulation based on artificial intelligence methods, which make it possible to solve the task with incomplete data. Ways to improve the transport network in the Sverdlovsk region are proposed.

  4. Fast Bound Methods for Large Scale Simulation with Application for Engineering Optimization

    Patera, Anthony T.; Peraire, Jaime; Zang, Thomas A. (Technical Monitor)

    2002-01-01

    In this work, we have focused on fast bound methods for large-scale simulation with application to engineering optimization. The emphasis is on the development of techniques that provide both very fast turnaround and a certificate of fidelity; these attributes ensure that the results are indeed relevant to, and trustworthy within, the engineering context. The bound methodology that underlies this work has many different instantiations: finite element approximation, iterative solution techniques, and reduced-basis (parameter) approximation. In this grant we have, in fact, treated all three, but most of our effort has been concentrated on the first and third. We describe these below briefly, with a pointer to an Appendix that describes, in some detail, the current "state of the art."

  5. Simulated evolution applied to study the genetic code optimality using a model of codon reassignments.

    Santos, José; Monteagudo, Angel

    2011-02-21

    As the canonical code is not universal, different theories about its origin and organization have appeared. The optimization or level of adaptation of the canonical genetic code was measured taking into account the harmful consequences resulting from point mutations leading to the replacement of one amino acid for another. There are two basic theories to measure the level of optimization: the statistical approach, which compares the canonical genetic code with many randomly generated alternative ones, and the engineering approach, which compares the canonical code with the best possible alternative. Here we used a genetic algorithm to search for better adapted hypothetical codes and as a method to gauge the difficulty of finding such alternative codes, allowing us to clearly situate the canonical code in the fitness landscape. This novel proposal of the use of evolutionary computing provides a new perspective in the open debate between the use of the statistical approach, which postulates that the genetic code conserves amino acid properties far better than expected from a random code, and the engineering approach, which tends to indicate that the canonical genetic code is still far from optimal. We used two models of hypothetical codes: one that reflects the known examples of codon reassignment and the model most used in the two approaches, which reflects the current genetic code translation table. Although the standard code is far from a possible optimum considering both models, when the more realistic model of the codon reassignments was used, the evolutionary algorithm had more difficulty overcoming the efficiency of the canonical genetic code. Simulated evolution clearly reveals that the canonical genetic code is far from optimal. Nevertheless, the efficiency of the canonical code increases when mistranslations are taken into account with the two models, as indicated by the fact that the best possible codes show the patterns of the
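The statistical approach described above can be sketched by permuting the amino-acid assignments of the canonical code's synonymous blocks and scoring each code by the mean squared property change over all single-point mutations. Using Kyte-Doolittle hydropathy as the conserved property, and keeping stop codons fixed, is an assumption of this sketch; the paper's models are richer.

```python
import random
from statistics import mean

random.seed(23)

BASES = "TCAG"
CODONS = [a + b + c for a in BASES for b in BASES for c in BASES]
# Standard genetic code (NCBI table 1, '*' = stop), codons in TTT, TTC, ... order.
AA = "FFLLSSSSYY**CC*WLLLLPPPPHHQQRRRRIIIMTTTTNNKKSSRRVVVVAAAADDEEGGGG"
CODE = dict(zip(CODONS, AA))

# Kyte-Doolittle hydropathy values for the 20 amino acids.
HYDRO = {"I": 4.5, "V": 4.2, "L": 3.8, "F": 2.8, "C": 2.5, "M": 1.9, "A": 1.8,
         "G": -0.4, "T": -0.7, "S": -0.8, "W": -0.9, "Y": -1.3, "P": -1.6,
         "H": -3.2, "E": -3.5, "Q": -3.5, "D": -3.5, "N": -3.5, "K": -3.9,
         "R": -4.5}

def code_cost(code):
    """Mean squared hydropathy change over all single-point mutations,
    ignoring mutations into or out of stop codons."""
    diffs = []
    for codon in CODONS:
        for pos in range(3):
            for b in BASES:
                if b == codon[pos]:
                    continue
                mutant = codon[:pos] + b + codon[pos + 1:]
                a1, a2 = code[codon], code[mutant]
                if a1 != "*" and a2 != "*":
                    diffs.append((HYDRO[a1] - HYDRO[a2]) ** 2)
    return mean(diffs)

def shuffled_code():
    """Random alternative code: keep the block structure (which codons are
    synonymous, and where the stops sit) but permute the amino acids."""
    aas = sorted(set(AA) - {"*"})
    perm = dict(zip(aas, random.sample(aas, len(aas))))
    return {c: ("*" if a == "*" else perm[a]) for c, a in CODE.items()}

canonical = code_cost(CODE)
random_costs = [code_cost(shuffled_code()) for _ in range(100)]
better_fraction = sum(c < canonical for c in random_costs) / len(random_costs)
```

Under this score the canonical code typically outperforms the large majority of random block-permutation codes, which is the statistical approach's central observation.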

  6. Simulated evolution applied to study the genetic code optimality using a model of codon reassignments

    Monteagudo Ángel

    2011-02-01

    Full Text Available Abstract Background As the canonical code is not universal, different theories about its origin and organization have appeared. The optimization or level of adaptation of the canonical genetic code was measured taking into account the harmful consequences resulting from point mutations leading to the replacement of one amino acid by another. There are two basic approaches to measuring the level of optimization: the statistical approach, which compares the canonical genetic code with many randomly generated alternative ones, and the engineering approach, which compares the canonical code with the best possible alternative. Results Here we used a genetic algorithm to search for better adapted hypothetical codes and as a method to gauge the difficulty of finding such alternative codes, allowing us to clearly situate the canonical code in the fitness landscape. This novel use of evolutionary computing provides a new perspective in the open debate between the statistical approach, which postulates that the genetic code conserves amino acid properties far better than expected from a random code, and the engineering approach, which tends to indicate that the canonical genetic code is still far from optimal. We used two models of hypothetical codes: one that reflects the known examples of codon reassignment, and the model most used in the two approaches, which reflects the current genetic code translation table. Although the standard code is far from a possible optimum considering both models, when the more realistic model of codon reassignments was used, the evolutionary algorithm had more difficulty overcoming the efficiency of the canonical genetic code. Conclusions Simulated evolution clearly reveals that the canonical genetic code is far from optimal. Nevertheless, the efficiency of the canonical code increases when mistranslations are taken into account with the two models, as indicated by the fact that the best possible codes show the patterns of the
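
    The GA-based engineering approach above can be sketched in miniature: evolve assignments of amino acids to codons so as to minimize the mean squared change of a property (here a made-up polarity scale) under single-point mutations. The 16-codon alphabet, the 8 hypothetical amino acids, and the (1+1) hill-climbing variant of the search are all simplifying assumptions, not the paper's actual setup.

```python
import random

random.seed(1)

# Toy setup (assumed): 16 two-letter codons over a 4-letter alphabet,
# each assigned one of 8 hypothetical amino acids with a polarity value.
BASES = "ACGU"
CODONS = [a + b for a in BASES for b in BASES]
POLARITY = {aa: random.uniform(0.0, 13.0) for aa in range(8)}

def mutation_cost(code):
    """Mean squared polarity change over all single-point mutations."""
    total, n = 0.0, 0
    for codon in CODONS:
        for pos in range(2):
            for base in BASES:
                if base == codon[pos]:
                    continue
                mutant = codon[:pos] + base + codon[pos + 1:]
                d = POLARITY[code[codon]] - POLARITY[code[mutant]]
                total += d * d
                n += 1
    return total / n

def random_code():
    aas = [i % 8 for i in range(len(CODONS))]
    random.shuffle(aas)
    return dict(zip(CODONS, aas))

# (1+1) evolutionary search: swap two codon assignments, keep improvements.
code = random_code()
start = best = mutation_cost(code)
for _ in range(2000):
    child = dict(code)
    a, b = random.sample(CODONS, 2)
    child[a], child[b] = child[b], child[a]
    c = mutation_cost(child)
    if c < best:
        code, best = child, c
```

    A canonical-style code would then be located in this fitness landscape by comparing its cost against both the random starting codes and the evolved optimum.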

  7. Simulation of optimal arctic routes using a numerical sea ice model based on an ice-coupled ocean circulation method

    Jong-Ho Nam

    2013-06-01

    Full Text Available Ever since the Arctic region opened its passage to mankind, continuous attempts have been made to take advantage of the fastest routes across the region. The Arctic is still covered by thick ice, and thus finding a feasible navigation route is essential for an economical voyage. To find the optimal route, it is necessary to establish an efficient transit model that enables us to simulate every possible route in advance. In this work, an enhanced algorithm to determine the optimal route in the Arctic region is introduced. A transit model based on simulated sea ice and environmental data numerically modeled for the Arctic is developed. By integrating the simulated data into the transit model, further applications such as route simulation, cost estimation or hindcasting can easily be performed. An interactive simulation system that determines the optimal Arctic route using the transit model is developed. The simulation of optimal routes is carried out and the validity of the results is discussed.
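
    A transit model of this kind reduces, at its core, to a least-cost path search over gridded ice data. The sketch below runs Dijkstra's algorithm on a toy ice-concentration grid; the linear speed penalty and the impassability threshold are assumed placeholders, not the paper's transit formula.

```python
import heapq

def optimal_route(ice, start, goal, max_ice=0.8):
    """Least-cost route on a grid of ice concentrations (0..1).
    Cell transit cost grows with ice; cells above max_ice are impassable."""
    rows, cols = len(ice), len(ice[0])
    dist = {start: 0.0}
    prev = {}
    pq = [(0.0, start)]
    while pq:
        d, node = heapq.heappop(pq)
        if node == goal:
            break
        if d > dist.get(node, float("inf")):
            continue
        r, c = node
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if not (0 <= nr < rows and 0 <= nc < cols):
                continue
            if ice[nr][nc] > max_ice:
                continue  # too much ice to navigate
            nd = d + 1.0 + 5.0 * ice[nr][nc]  # assumed speed penalty
            if nd < dist.get((nr, nc), float("inf")):
                dist[(nr, nc)] = nd
                prev[(nr, nc)] = node
                heapq.heappush(pq, (nd, (nr, nc)))
    path, node = [], goal
    while node in prev:
        path.append(node)
        node = prev[node]
    return [start] + path[::-1], dist.get(goal)

# A thick-ice wall in the middle column forces a detour around it:
ice = [[0.0, 0.9, 0.1],
       [0.2, 0.9, 0.1],
       [0.3, 0.4, 0.2]]
route, cost = optimal_route(ice, (0, 0), (0, 2))
```

    Real applications would replace the toy penalty with an ice-class-dependent speed model and run the search over the numerically simulated ice fields.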

  8. Optimization of tagged MRI for quantification of liver stiffness using computer simulated data.

    Serena Monti

    Full Text Available The heartbeat has been proposed as an intrinsic source of motion that can be used in combination with tagged Magnetic Resonance Imaging (MRI) to measure displacements induced in the liver as an index of liver stiffness. Optimizing a tagged MRI acquisition protocol in terms of sensitivity to these displacements, which are in the order of pixel size, is necessary to develop the method as a quantification tool for staging fibrosis. We reproduced a study of cardiac-induced strain in the liver at 3T and simulated tagged MR images with different grid tag patterns to evaluate the performance of the Harmonic Phase (HARP) image analysis method and its dependence on the parameters of tag spacing and grid angle. The Partial Volume Effect (PVE), T1 relaxation, and different levels of noise were taken into account. Four displacement fields of increasing intensity were created and applied to the tagged MR images of the liver. These fields simulated the deformation at different liver stiffnesses. An Error Index (EI) was calculated to evaluate the estimation accuracy for various parameter values. In the absence of noise, the estimation accuracy of the displacement fields increased as tag spacings decreased. EIs for each of the four displacement fields were lower at 0° and the local minima of the EI were found to correspond to multiples of pixel size. The accuracy of the estimation decreased for increasing levels of added noise; as the level increased, the improved estimation caused by decreasing the tag spacing tended to zero. The optimal tag spacing turned out to be a compromise between the smallest tag period that is a multiple of the pixel size and is achievable in a real acquisition and the tag spacing that guarantees an accurate liver displacement measure in the presence of realistic levels of noise.
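
    The HARP idea of recovering sub-pixel displacement from tag phase can be illustrated in one dimension: shift a sinusoidal tag pattern and read the displacement off the phase of its fundamental harmonic. The pixel count, tag spacing, and noise-free signal below are assumptions for illustration, not the paper's 2-D protocol.

```python
import cmath
import math

N = 64          # pixels (assumed)
T = 8.0         # tag spacing in pixels (assumed)
d_true = 0.37   # sub-pixel displacement to recover

def harmonic_phase(signal, period):
    """Phase of the harmonic at frequency 1/period via a single DFT bin."""
    k = len(signal) / period  # bin index of the tag frequency
    acc = sum(s * cmath.exp(-2j * math.pi * k * n / len(signal))
              for n, s in enumerate(signal))
    return cmath.phase(acc)

# Reference tag pattern and the same pattern displaced by d_true pixels:
ref = [math.cos(2 * math.pi * n / T) for n in range(N)]
mov = [math.cos(2 * math.pi * (n - d_true) / T) for n in range(N)]

# A shift of +d produces a phase change of -2*pi*d/T at the tag harmonic:
dphi = harmonic_phase(mov, T) - harmonic_phase(ref, T)
d_est = -dphi * T / (2 * math.pi)
```

    With noise and partial-volume blurring added, the estimate degrades, which is exactly the trade-off the study quantifies via its Error Index.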

  9. Optimization of accelerator target and detector for portal imaging using Monte Carlo simulation and experiment

    Flampouri, S.; Evans, P.M.; Partridge, M.; Nahum, A.E.; Verhaegen, A.E.; Spezi, E.

    2002-01-01

    Megavoltage portal images suffer from poor quality compared to those produced with kilovoltage x-rays. Several authors have shown that the image quality can be improved by modifying the linear accelerator to generate more low-energy photons. This work addresses the problem of using Monte Carlo simulation and experiment to optimize the beam and detector combination to maximize image quality for a given patient thickness. A simple model of the whole imaging chain was developed for investigation of the effect of the target parameters on the quality of the image. The optimum targets (6 mm thick aluminium and 1.6 mm copper) were installed in an Elekta SL25 accelerator. The first beam will be referred to as Al6 and the second as Cu1.6. A tissue-equivalent contrast phantom was imaged with the 6 MV standard photon beam and the experimental beams with standard radiotherapy and mammography film/screen systems. The arrangement with a thin Al target/mammography system improved the contrast from 1.4 cm bone in 5 cm water to 19% compared with 2% for the standard arrangement of a thick, high-Z target/radiotherapy verification system. The linac/phantom/detector system was simulated with the BEAM/EGS4 Monte Carlo code. Contrast calculated from the predicted images was in good agreement with the experiment (to within 2.5%). The use of MC techniques to predict images accurately, taking into account the whole imaging system, is a powerful new method for portal imaging system design optimization. (author)
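
    The contrast gain from a softer beam can be illustrated with a Beer-Lambert calculation for the phantom geometry quoted above (1.4 cm bone in 5 cm water). The attenuation coefficients below are assumed round numbers chosen only to contrast a low- and a high-energy beam, not measured values for the Al6 or Cu1.6 beams.

```python
import math

MU = {  # linear attenuation coefficients (1/cm): (water, bone) -- assumed
    "low_energy":  (0.20, 0.50),
    "high_energy": (0.05, 0.09),
}

def contrast(mu_water, mu_bone, water_cm=5.0, bone_cm=1.4):
    """Relative signal difference between the water-only background
    and the path where bone replaces part of the water."""
    bg = math.exp(-mu_water * water_cm)
    obj = math.exp(-mu_water * (water_cm - bone_cm) - mu_bone * bone_cm)
    return (bg - obj) / bg

c_low = contrast(*MU["low_energy"])
c_high = contrast(*MU["high_energy"])
```

    Because the water/bone attenuation gap widens at low photon energies, the soft beam yields markedly higher contrast, mirroring the 19% versus 2% result reported above.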

  10. A simulation-optimization model for Stone column-supported embankment stability considering rainfall effect

    Deb, Kousik; Dhar, Anirban; Purohit, Sandip

    2016-01-01

    Landslide due to rainfall has been, and continues to be, one of the most important concerns of geotechnical engineering. The paper presents the variation of the factor of safety of a stone column-supported embankment constructed over soft soil due to change in water level for an incessant period of rainfall. A combined simulation-optimization based methodology has been proposed to predict the critical failure surface of the embankment and to optimize the corresponding factor of safety under rainfall conditions using the evolutionary genetic algorithm NSGA-II (Non-Dominated Sorted Genetic Algorithm-II). It has been observed that the position of the water table can be reliably estimated for varying periods of infiltration using the developed numerical method. A parametric study is presented to examine the optimum factor of safety of the embankment and its corresponding critical failure surface under the steady-state infiltration condition. Results show that in the case of floating stone columns, the period of infiltration has no effect on the factor of safety. Even the critical failure surfaces for a particular floating column length remain the same irrespective of rainfall duration.
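
    The core ranking step of NSGA-II is non-dominated sorting. A minimal sketch for two minimized objectives follows; the (factor-of-safety deficit, cost) pairs are invented example numbers, not the study's designs.

```python
def dominates(a, b):
    """a dominates b if it is no worse in every objective and strictly
    better in at least one (both objectives minimized)."""
    return (all(x <= y for x, y in zip(a, b))
            and any(x < y for x, y in zip(a, b)))

def pareto_front(points):
    """First non-dominated front: the rank-1 set NSGA-II selects first."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q != p)]

# Hypothetical (factor-of-safety deficit, cost) pairs for candidate designs:
designs = [(0.2, 5.0), (0.3, 3.0), (0.25, 4.0), (0.4, 3.5), (0.2, 6.0)]
front = pareto_front(designs)
```

    NSGA-II additionally sorts the remaining points into successive fronts and breaks ties by crowding distance, but the dominance test above is the heart of the method.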

  11. Multi-Objective Optimization for Pure Permanent-Magnet Undulator Magnets Ordering Using Modified Simulated Annealing

    Chen Nian; Li, Ge

    2004-01-01

    Undulator field errors influence the electron beam trajectories and lower the radiation quality. The angular deflection of the electron beam is determined by the first field integral, the orbital displacement of the electron beam is determined by the second field integral, and the radiation quality can be evaluated by the rms field error or phase error. Appropriate ordering of magnets can greatly reduce these errors. We apply a modified simulated annealing algorithm to this multi-objective optimization problem, taking the first field integral, second field integral and rms field error as objective functions. An undulator with small field errors can be designed by this method within a reasonable calculation time, even for the case of hundreds of magnets (first field integral reduced to 10⁻⁶ T·m, second integral to 10⁻⁶ T·m² and rms field error to 0.01%). Thus, the field correction after assembly of the undulator is greatly simplified. This paper gives the optimization process in detail and puts forward a new method to quickly calculate the rms field e...
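
    Magnet ordering by simulated annealing can be sketched as follows: permute a list of magnet strength errors so that the running sums (stand-ins for the first and second field integrals) stay small, accepting occasional uphill moves under a cooling temperature. The error magnitudes, objective weights, and cooling schedule are assumptions, not the paper's values.

```python
import math
import random

random.seed(2)

# Hypothetical magnet strength errors (deviation from nominal):
errors = [random.gauss(0.0, 1e-3) for _ in range(40)]

def cost(order):
    """Weighted sum of the two field-integral analogues: the worst running
    sum of errors (first integral) and the worst double running sum
    (second integral)."""
    s1 = s2 = worst1 = worst2 = 0.0
    for i in order:
        s1 += errors[i]
        s2 += s1
        worst1 = max(worst1, abs(s1))
        worst2 = max(worst2, abs(s2))
    return worst1 + 0.1 * worst2

order = list(range(len(errors)))
cur = best = start = cost(order)
temp = 0.01
for _ in range(5000):
    i, j = random.sample(range(len(order)), 2)
    order[i], order[j] = order[j], order[i]  # propose a swap
    new = cost(order)
    if new < cur or random.random() < math.exp((cur - new) / temp):
        cur = new
        best = min(best, cur)
    else:
        order[i], order[j] = order[j], order[i]  # reject: undo the swap
    temp *= 0.999  # geometric cooling
```

    A multi-objective variant, as in the paper, would either weight the objectives as above or keep an archive of non-dominated orderings instead of a single scalar cost.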

  12. A simplified counter diffusion method combined with a 1D simulation program for optimizing crystallization conditions.

    Tanaka, Hiroaki; Inaka, Koji; Sugiyama, Shigeru; Takahashi, Sachiko; Sano, Satoshi; Sato, Masaru; Yoshitomi, Susumu

    2004-01-01

    A new protein crystallization method has been developed using a simplified counter-diffusion arrangement for optimizing crystallization conditions. It is composed of only a single capillary, a gel in a silicone tube, and a screw-top test tube, all of which are readily available in the laboratory. The single capillary can continuously scan a wide range of crystallization conditions (combinations of the concentrations of the precipitant and the protein) until crystallization occurs, meaning that it corresponds to many drops in the vapor-diffusion method. The amounts of precipitant and protein solution can be much less than in conventional methods. In this study, lysozyme and alpha-amylase were used as model proteins for demonstrating the efficiency of the method. In addition, one-dimensional (1-D) simulations of the crystal growth were performed based on a 1-D diffusion model. The optimized conditions can be applied as initial crystallization conditions both for other counter-diffusion methods with the Granada Crystallization Box (GCB) and, after some modification, for the vapor-diffusion method.
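
    A 1-D simulation of this kind amounts to marching the diffusion equation along the capillary. The sketch below uses an explicit finite-difference scheme with a fixed reservoir concentration at one end and a closed far end; the diffusion coefficient, grid, and boundary values are illustrative, not the paper's parameters.

```python
def diffuse_1d(c, d_coef, dx, dt, steps):
    """March c(x) forward in time with an explicit finite-difference
    scheme; left end held at the reservoir value, right end closed
    (zero flux)."""
    r = d_coef * dt / dx ** 2
    assert r <= 0.5, "explicit scheme stability limit violated"
    c = list(c)
    for _ in range(steps):
        new = c[:]
        for i in range(1, len(c) - 1):
            new[i] = c[i] + r * (c[i - 1] - 2 * c[i] + c[i + 1])
        new[-1] = new[-2]  # zero-flux far end
        new[0] = c[0]      # fixed reservoir concentration
        c = new
    return c

# Capillary initially free of precipitant, reservoir at concentration 1.0:
profile = diffuse_1d([1.0] + [0.0] * 49,
                     d_coef=1e-9, dx=1e-4, dt=4.0, steps=500)
```

    The resulting precipitant front sweeping along the capillary is what lets one tube sample many effective supersaturation conditions at once.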

  13. A simulation-optimization model for Stone column-supported embankment stability considering rainfall effect

    Deb, Kousik, E-mail: kousik@civil.iitkgp.ernet.in [Associate Professor, Department of Civil Engineering, IIT Kharagpur, Kharagpur-721302 (India); Dhar, Anirban, E-mail: anirban@civil.iitkgp.ernet.in [Assistant Professor, Department of Civil Engineering, IIT Kharagpur, Kharagpur-721302 (India); Purohit, Sandip, E-mail: sandip.purohit91@gmail.com [Former B.Tech Student, Department of Civil Engineering, NIT Rourkela, Rourkela (India)

    2016-02-01

    Landslide due to rainfall has been, and continues to be, one of the most important concerns of geotechnical engineering. The paper presents the variation of the factor of safety of a stone column-supported embankment constructed over soft soil due to change in water level for an incessant period of rainfall. A combined simulation-optimization based methodology has been proposed to predict the critical failure surface of the embankment and to optimize the corresponding factor of safety under rainfall conditions using the evolutionary genetic algorithm NSGA-II (Non-Dominated Sorted Genetic Algorithm-II). It has been observed that the position of the water table can be reliably estimated for varying periods of infiltration using the developed numerical method. A parametric study is presented to examine the optimum factor of safety of the embankment and its corresponding critical failure surface under the steady-state infiltration condition. Results show that in the case of floating stone columns, the period of infiltration has no effect on the factor of safety. Even the critical failure surfaces for a particular floating column length remain the same irrespective of rainfall duration.

  14. Determination of Optimal Public Transport Routes with the Transport Network Simulator (TRANETSIM) in Tuban City

    Any Riaya Nikita Ratriaga

    2015-12-01

    Full Text Available With ongoing economic development, the population of Tuban City continues to increase. This condition affects activity on several road segments in the city. Expansive residential development toward the outskirts of Tuban City has also created new trip generation points. The existing circulation of public transport in Tuban City does not yet cover all of the zones that generate and attract trips. This study aims to determine the optimal public transport routes for Tuban City. Three stages were carried out to achieve this goal. The first stage was to measure the trip generation and attraction of each zone with an origin-destination matrix. The next stage was to weight the factors that determine public transport routes with the Analytical Hierarchy Process (AHP) technique using the Expert Choice software. The final stage was to determine the optimal public transport routes using the Transport Network Simulator (TRANETSIM) software. Based on the analyses used in these stages, the resulting routes are Terminal Kambang Putih – Desa Tunah (round trip), Desa Tunah – Terminal Kambang Putih (round trip), Terminal Kambang Putih – Desa Semanding (round trip), and Desa Semanding – Desa Tunah (round trip).
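
    The AHP weighting step can be sketched as extracting the principal eigenvector of a pairwise-comparison matrix by power iteration. The 3x3 matrix below is a made-up example of three route-choice criteria, not the study's Expert Choice data.

```python
def ahp_weights(m, iters=100):
    """Priority weights = normalized principal eigenvector of the
    pairwise-comparison matrix, found by power iteration."""
    n = len(m)
    w = [1.0 / n] * n
    for _ in range(iters):
        w = [sum(m[i][j] * w[j] for j in range(n)) for i in range(n)]
        s = sum(w)
        w = [x / s for x in w]
    return w

# Assumed judgments: criterion A is 3x as important as B and 5x as
# important as C; B is 2x as important as C. Reciprocals below the diagonal.
pairwise = [[1.0,     3.0, 5.0],
            [1.0 / 3, 1.0, 2.0],
            [1.0 / 5, 0.5, 1.0]]
weights = ahp_weights(pairwise)
```

    In practice one would also check the consistency ratio of the judgments before trusting the weights; tools like Expert Choice do this automatically.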

  15. Impact of optimized mixing heights on simulated regional atmospheric transport of CO2

    R. Kretschmer

    2014-07-01

    Full Text Available The mixing height (MH) is a crucial parameter in commonly used transport models that proportionally affects air concentrations of trace gases with sources/sinks near the ground and on diurnal scales. Past synthetic data experiments indicated the possibility to improve tracer transport by minimizing errors of simulated MHs. In this paper we evaluate a method to constrain the Lagrangian particle dispersion model STILT (Stochastic Time-Inverted Lagrangian Transport) with MH diagnosed from radiosonde profiles using a bulk Richardson method. The same method was used to obtain hourly MHs for the period September/October 2009 from the Weather Research and Forecasting (WRF) model, which covers the European continent at 10 km horizontal resolution. Kriging with external drift (KED) was applied to estimate optimized MHs from observed and modelled MHs, which were used as input for STILT to assess the impact on CO2 transport. Special care has been taken to account for uncertainty in MH retrieval in this estimation process. MHs and CO2 concentrations were compared to vertical profiles from aircraft in situ data. We put an emphasis on testing the consistency of estimated MHs to observed vertical mixing of CO2. Modelled CO2 was also compared with continuous measurements made at Cabauw and Heidelberg stations. WRF MHs were significantly biased by ~10–20% during day and ~40–60% during night. Optimized MHs reduced this bias to ~5% with additional slight improvements in random errors. The KED MHs were generally more consistent with observed CO2 mixing. The use of optimized MHs had in general a favourable impact on CO2 transport, with bias reductions of 5–45% (day) and 60–90% (night). This indicates that a large part of the found CO2 model–data mismatch was indeed due to MH errors. Other causes for CO2 mismatch are discussed. Applicability of our method is discussed in the context of CO2 inversions at regional scales.
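
    A bulk Richardson diagnosis of the mixing height, as used above for the radiosonde profiles, can be sketched as scanning a profile for the first level where the bulk Richardson number exceeds a critical value. The profile data and the critical value of 0.25 are assumptions for illustration.

```python
G = 9.81  # gravitational acceleration (m/s^2)

def bulk_richardson_mh(z, theta, u, v, ri_crit=0.25):
    """Mixing height: lowest level where the bulk Richardson number,
    computed against the surface level (index 0), exceeds ri_crit.
    z: heights (m), theta: potential temperature (K), u, v: wind (m/s)."""
    for k in range(1, len(z)):
        shear2 = (u[k] - u[0]) ** 2 + (v[k] - v[0]) ** 2
        ri = (G / theta[0]) * (theta[k] - theta[0]) * (z[k] - z[0]) \
             / max(shear2, 1e-6)  # guard against zero shear
        if ri > ri_crit:
            return z[k]
    return z[-1]  # no critical level found within the profile

# Hypothetical daytime profile with an inversion near 1000 m:
z     = [10, 100, 300, 600, 1000, 1500]
theta = [295.0, 295.1, 295.2, 295.3, 297.0, 299.0]
u     = [2.0, 4.0, 6.0, 7.0, 7.5, 8.0]
v     = [0.0, 1.0, 1.5, 2.0, 2.0, 2.0]
mh = bulk_richardson_mh(z, theta, u, v)
```

    Operational implementations refine this with virtual potential temperature, interpolation between levels, and excess-temperature corrections for convective conditions.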

  16. Impact of optimized mixing heights on simulated regional atmospheric transport of CO2

    Kretschmer, R.; Gerbig, C.; Karstens, U.; Biavati, G.; Vermeulen, A.; Vogel, E.; Hammer, S.; Totsche, K.U.

    2014-01-01

    The mixing height (MH) is a crucial parameter in commonly used transport models that proportionally affects air concentrations of trace gases with sources/sinks near the ground and on diurnal scales. Past synthetic data experiments indicated the possibility to improve tracer transport by minimizing errors of simulated MHs. In this paper we evaluate a method to constrain the Lagrangian particle dispersion model STILT (Stochastic Time-Inverted Lagrangian Transport) with MH diagnosed from radiosonde profiles using a bulk Richardson method. The same method was used to obtain hourly MHs for the period September/October 2009 from the Weather Research and Forecasting (WRF) model, which covers the European continent at 10 km horizontal resolution. Kriging with external drift (KED) was applied to estimate optimized MHs from observed and modelled MHs, which were used as input for STILT to assess the impact on CO2 transport. Special care has been taken to account for uncertainty in MH retrieval in this estimation process. MHs and CO2 concentrations were compared to vertical profiles from aircraft in situ data. We put an emphasis on testing the consistency of estimated MHs to observed vertical mixing of CO2. Modelled CO2 was also compared with continuous measurements made at Cabauw and Heidelberg stations. WRF MHs were significantly biased by 10-20% during day and 40-60% during night. Optimized MHs reduced this bias to 5% with additional slight improvements in random errors. The KED MHs were generally more consistent with observed CO2 mixing. The use of optimized MHs had in general a favourable impact on CO2 transport, with bias reductions of 5-45% (day) and 60-90% (night). This indicates that a large part of the found CO2 model-data mismatch was indeed due to MH errors. Other causes for CO2 mismatch are discussed. Applicability of our method is discussed in the context of CO2 inversions at regional scales. (authors)

  17. A fuzzy-stochastic simulation-optimization model for planning electric power systems considering peak-electricity demand: A case study of Qingdao, China

    Yu, L.; Li, Y.P.; Huang, G.H.

    2016-01-01

    In this study, a FSSOM (fuzzy-stochastic simulation-optimization model) is developed for planning EPS (electric power systems) considering peak demand under uncertainty. FSSOM integrates techniques of SVR (support vector regression), Monte Carlo simulation, and FICMP (fractile interval chance-constrained mixed-integer programming). In FSSOM, uncertainties expressed as fuzzy boundary intervals and random variables can be effectively tackled. In addition, an SVR-coupled Monte Carlo technique is used for predicting the peak-electricity demand. The FSSOM is applied to planning EPS for the City of Qingdao, China. Solutions for the electricity generation pattern satisfying the city's peak demand under different probability levels and p-necessity levels have been generated. Results reveal that the city's electricity supply from renewable energies would be low (occupying only 8.3% of the total electricity generation). Compared with an energy model that does not consider peak demand, the FSSOM can better guarantee the city's power supply and thus reduce the system failure risk. The findings can help decision makers not only adjust the existing electricity generation/supply pattern but also coordinate the conflicting interactions among system cost, energy supply security, pollutant mitigation, and constraint-violation risk. - Highlights: • FSSOM (fuzzy-stochastic simulation-optimization model) is developed for planning EPS. • It can address uncertainties as fuzzy-boundary intervals and random variables. • FSSOM can satisfy peak-electricity demand and optimize power allocation. • Solutions under different probability levels and p-necessity levels are analyzed. • Results create a tradeoff between system cost and peak-electricity demand violation risk.
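
    The peak-demand side of such a model can be caricatured with a plain Monte Carlo step: sample uncertain peak loads and size capacity to a high fractile. The Gaussian load model and its parameters are assumptions; the paper's SVR-based predictor is not reproduced here.

```python
import random

random.seed(0)

def peak_capacity(mean_gw, sd_gw, fractile=0.95, n=20000):
    """Capacity needed to cover the peak load at the given fractile,
    estimated from Monte Carlo samples of an assumed Gaussian load."""
    samples = sorted(random.gauss(mean_gw, sd_gw) for _ in range(n))
    return samples[int(fractile * n)]

# Hypothetical peak load of 8.0 GW with 0.6 GW uncertainty:
cap = peak_capacity(8.0, 0.6)
```

    Chance-constrained formulations such as FICMP fold the same fractile idea directly into the optimization constraints rather than computing it as a separate pre-processing step.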

  18. Molecular dynamics simulation of nano-indentation of (111) cubic boron nitride with optimized Tersoff potential

    Zhao, Yinbo; Peng, Xianghe; Fu, Tao; Huang, Cheng; Feng, Chao; Yin, Deqiang; Wang, Zhongchang

    2016-01-01

    Highlights: • We optimize the Tersoff potential to better simulate cBN under nanoindentation. • Dislocations slip more easily along and directions on the {111} plane. • Shuffle-set dislocation slip along direction on {111} plane first. • A tetrahedron structure is found in the initial stage of the indentation. - Abstract: We conduct molecular dynamics simulation of nanoindentation on (111) surface of cubic boron nitride and find that shuffle-set dislocations slip along direction on {111} plane at the initial stage of the indentation. The shuffle-set dislocations are then found to meet together, forming surfaces of a tetrahedron. We also find that the surfaces are stacking-fault zones, which intersect with each other, forming edges of stair-rod dislocations along direction. Moreover, we also calculate the generalized stacking fault (GSF) energies along various gliding directions on several planes and find that the GSF energies of the {111} and {111} systems are relatively smaller, indicating that dislocations slip more easily along and directions on the {111} plane.

  19. Integrating biology, field logistics, and simulations to optimize parameter estimation for imperiled species

    Lanier, Wendy E.; Bailey, Larissa L.; Muths, Erin L.

    2016-01-01

    Conservation of imperiled species often requires knowledge of vital rates and population dynamics. However, these can be difficult to estimate for rare species and small populations. This problem is further exacerbated when individuals are not available for detection during some surveys due to limited access, delaying surveys and creating mismatches between the breeding behavior and survey timing. Here we use simulations to explore the impacts of this issue using four hypothetical boreal toad (Anaxyrus boreas boreas) populations, representing combinations of logistical access (accessible, inaccessible) and breeding behavior (synchronous, asynchronous). We examine the bias and precision of survival and breeding probability estimates generated by survey designs that differ in effort and timing for these populations. Our findings indicate that the logistical access of a site and mismatch between the breeding behavior and survey design can greatly limit the ability to yield accurate and precise estimates of survival and breeding probabilities. Simulations similar to what we have performed can help researchers determine an optimal survey design for their system before initiating sampling efforts.

  20. Dynamic optimization of walker-assisted FES-activated paraplegic walking: simulation and experimental studies.

    Nekoukar, Vahab; Erfanian, Abbas

    2013-11-01

    In this paper, we propose a musculoskeletal model of walker-assisted FES-activated paraplegic walking for the generation of muscle stimulation patterns and characterization of the causal relationships between muscle excitations, multi-joint movement, and handle reaction force (HRF). The model consists of the lower extremities, trunk, hands, and a walker. The simulation of walking is performed using particle swarm optimization to minimize the tracking errors from the desired trajectories for the lower extremity joints, to reduce the stimulations of the muscle groups acting around the hip, knee, and ankle joints, and to minimize the HRF. The results of the simulation studies using data recorded from healthy subjects performing walker-assisted walking indicate that the model-generated muscle stimulation patterns are in agreement with the EMG patterns that have been reported in the literature. The experimental results on two paraplegic subjects demonstrate that the proposed methodology can improve walking performance, reduce HRF, and increase walking speed when compared to the conventional FES-activated paraplegic walking. Copyright © 2013 IPEM. Published by Elsevier Ltd. All rights reserved.
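
    Particle swarm optimization of a tracking objective can be sketched on a toy problem: minimize the squared deviation from a desired trajectory. The swarm coefficients and the five-point "trajectory" are assumptions; the paper's full cost also penalizes muscle stimulation levels and handle reaction force.

```python
import random

random.seed(3)

# Desired joint trajectory samples (hypothetical):
target = [0.1, 0.4, 0.7, 0.5, 0.2]

def tracking_error(x):
    """Sum of squared deviations from the desired trajectory."""
    return sum((a - b) ** 2 for a, b in zip(x, target))

n_particles, dim, iters = 20, len(target), 200
pos = [[random.uniform(-1, 1) for _ in range(dim)] for _ in range(n_particles)]
vel = [[0.0] * dim for _ in range(n_particles)]
pbest = [p[:] for p in pos]                      # personal bests
gbest = min(pbest, key=tracking_error)           # global best

for _ in range(iters):
    for i in range(n_particles):
        for d in range(dim):
            r1, r2 = random.random(), random.random()
            # inertia + cognitive pull + social pull (assumed coefficients)
            vel[i][d] = (0.7 * vel[i][d]
                         + 1.5 * r1 * (pbest[i][d] - pos[i][d])
                         + 1.5 * r2 * (gbest[d] - pos[i][d]))
            pos[i][d] += vel[i][d]
        if tracking_error(pos[i]) < tracking_error(pbest[i]):
            pbest[i] = pos[i][:]
    gbest = min(pbest, key=tracking_error)

final_error = tracking_error(gbest)
```

    In the paper's setting the decision variables would be muscle stimulation parameters and the objective would be evaluated by running the musculoskeletal simulation, making each cost evaluation far more expensive than this toy.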