WorldWideScience

Sample records for simulate optimal adjustments

  1. Magnetic measurement, sorting optimization and adjustment of SDUV-FEL hybrid undulator

    International Nuclear Information System (INIS)

    Wang Tao; Jia Qika

    2007-01-01

    Construction of an undulator includes magnet block measurement, sorting, field measurement and adjustment. By optimizing the sorting of the SDUV-FEL undulator with a simulated annealing algorithm, using Helmholtz-coil measurements of the magnet blocks taken before the undulator magnets are installed, the cost function can be reduced by three orders of magnitude. After adjustment of the magnetic field, the practical parameters of one segment meet the design specifications. (authors)
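
    A minimal sketch of the sorting idea described above: simulated annealing searches over permutations of magnet blocks to reduce a measurement-based cost function. The per-block errors and the running-sum cost used here are placeholders, not the authors' Helmholtz-coil-based model.

    ```python
    import math
    import random

    random.seed(0)
    block_errors = [random.gauss(0.0, 1.0) for _ in range(40)]  # hypothetical per-block field errors

    def cost(order):
        # Penalize the running accumulation of block errors along the undulator,
        # a common surrogate for trajectory/phase errors (placeholder, not the paper's cost).
        total, running = 0.0, 0.0
        for idx in order:
            running += block_errors[idx]
            total += running ** 2
        return total

    order = list(range(len(block_errors)))
    current = cost(order)
    temperature = 1.0
    for _ in range(20000):
        i, j = random.sample(range(len(order)), 2)
        order[i], order[j] = order[j], order[i]            # propose swapping two blocks
        candidate = cost(order)
        if candidate < current or random.random() < math.exp((current - candidate) / temperature):
            current = candidate                            # accept the move
        else:
            order[i], order[j] = order[j], order[i]        # reject: undo the swap
        temperature *= 0.9995                              # geometric cooling schedule

    print(f"final cost: {current:.3f}")
    ```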

  2. Primal and dual approaches to adjustable robust optimization

    NARCIS (Netherlands)

    de Ruiter, Frans

    2018-01-01

    Robust optimization has become an important paradigm to deal with optimization under uncertainty. Adjustable robust optimization is an extension that deals with multistage problems. This thesis starts with a short but comprehensive introduction to adjustable robust optimization. Then the two

  3. The Study of an Optimal Robust Design and Adjustable Ordering Strategies in the HSCM.

    Science.gov (United States)

    Liao, Hung-Chang; Chen, Yan-Kwang; Wang, Ya-huei

    2015-01-01

    The purpose of this study was to establish a hospital supply chain management (HSCM) model in which three kinds of drugs in the same class and with the same indications were used in creating an optimal robust design and adjustable ordering strategies to deal with a drug shortage. The main assumption was that although each doctor has his/her own prescription pattern, when there is a shortage of a particular drug, the doctor may choose a similar drug with the same indications as a replacement. Four steps were used to construct and analyze the HSCM model. The computation technology used included a simulation, a neural network (NN), and a genetic algorithm (GA). The mathematical methods of the simulation and the NN were used to construct a relationship between the factor levels and performance, while the GA was used to obtain the optimal combination of factor levels from the NN. A sensitivity analysis was also used to assess the change in the optimal factor levels. Adjustable ordering strategies were also developed to prevent drug shortages.

  4. Humans make near-optimal adjustments of control to initial body configuration in vertical squat jumping.

    Science.gov (United States)

    Bobbert, Maarten F; Richard Casius, L J; Kistemaker, Dinant A

    2013-05-01

    We investigated adjustments of control to initial posture in squat jumping. Eleven male subjects jumped from three initial postures: preferred initial posture (PP), a posture in which the trunk was rotated 18° more backward (BP) and a posture in which it was rotated 15° more forward (FP) than in PP. Kinematics, ground reaction forces and electromyograms (EMG) were collected. EMG was rectified and smoothed to obtain smoothed rectified EMG (srEMG). Subjects showed adjustments in srEMG histories, most conspicuously a shift in srEMG-onset of rectus femoris (REC): from early in BP to late in FP. Jumps from the subjects' initial postures were simulated with a musculoskeletal model comprising four segments and six Hill-type muscles, which had muscle stimulation (STIM) over time as input. STIM of each muscle changed from initial to maximal at STIM-onset, and STIM-onsets were optimized using jump height as criterion. Optimal simulated jumps from BP, PP and FP were similar to jumps of the subjects. Optimal solutions primarily differed in STIM-onset of REC: from early in BP to late in FP. Because the subjects' adjustments in srEMG-onsets were similar to adjustments of the model's optimal STIM-onsets, it was concluded that the former were near-optimal. With the model we also showed that near-maximum jumps from BP, PP and FP could be achieved when STIM-onset of REC depended on initial hip joint angle and STIM-onsets of the other muscles were posture-independent. A control theory that relies on a mapping from initial posture to STIM-onsets seems a parsimonious alternative to theories relying on internal optimal control models. Copyright © 2013 IBRO. Published by Elsevier Ltd. All rights reserved.

  5. Simulation Based Optimization for World Line Card Production System

    Directory of Open Access Journals (Sweden)

    Sinan APAK

    2012-07-01

    Full Text Available A simulation-based decision support system is one of the tools commonly used to examine complex production systems. The simulation approach provides process modules which can be adjusted through certain parameters, using data relatively easily obtainable from the production process. A simulation of the World Line Card production system is developed to evaluate the optimality of the existing production line, using a discrete event simulation model with a variety of alternative proposals. The current production system is analysed by a simulation model emphasizing the bottlenecks and the poorly utilized parts of the production line. Our analysis identified several improvements and efficient solutions for the existing system.

  6. Optimism, Social Support, and Adjustment in African American Women with Breast Cancer

    Science.gov (United States)

    Shelby, Rebecca A.; Crespin, Tim R.; Wells-Di Gregorio, Sharla M.; Lamdan, Ruth M.; Siegel, Jamie E.; Taylor, Kathryn L.

    2013-01-01

    Past studies show that optimism and social support are associated with better adjustment following breast cancer treatment. Most studies have examined these relationships in predominantly non-Hispanic White samples. The present study included 77 African American women treated for nonmetastatic breast cancer. Women completed measures of optimism, social support, and adjustment within 10 months of surgical treatment. In contrast to past studies, social support did not mediate the relationship between optimism and adjustment in this sample. Instead, social support was a moderator of the optimism-adjustment relationship, as it buffered the negative impact of low optimism on psychological distress, well-being, and psychosocial functioning. Women with high levels of social support experienced better adjustment even when optimism was low. In contrast, among women with high levels of optimism, increasing social support did not provide an added benefit. These data suggest that perceived social support is an important resource for women with low optimism. PMID:18712591
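
    The moderation finding above can be illustrated with a small regression sketch: an interaction term between optimism and social support captures the buffering effect. The data below are synthetic and the variable names are assumptions, not the study's dataset.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    n = 77
    optimism = rng.normal(0.0, 1.0, n)
    support = rng.normal(0.0, 1.0, n)
    # Synthetic outcome with a buffering interaction: support lowers distress most when optimism is low.
    distress = 2.0 - 0.5 * optimism - 0.3 * support + 0.4 * optimism * support + rng.normal(0.0, 1.0, n)

    X = np.column_stack([np.ones(n), optimism, support, optimism * support])
    beta, *_ = np.linalg.lstsq(X, distress, rcond=None)
    print(dict(zip(["intercept", "optimism", "support", "optimism_x_support"], np.round(beta, 3))))
    # A non-zero coefficient on the interaction term is the statistical signature of moderation:
    # the effect of support on distress depends on the level of optimism.
    ```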

  7. Study on Gas Field Optimization Distribution with Parameters Adjustment of the Air Duct Outlet for Mechanized Heading Face in Coal Mine

    Science.gov (United States)

    Gong, Xiao-Yan; Zhang, Xin-Yi; Wu, Yue; Xia, Zhi-Xin; Li, Ying

    2017-12-01

    As drilling dimensions in coal mines increase, with larger cross-sections and longer headings, gas accumulation at the mechanized heading face is becoming a severe problem. In this paper, the gas distribution was optimized by adjusting parameters of the air duct outlet, including its angle, caliber, and the distance between the outlet and the heading face. Taking the mechanized heading face of the Ningtiaota coal mine as the research object, the problems of the original gas field were simulated and analyzed, and the reasonable range of the air duct outlet parameters was preliminarily determined according to the allowable range of wind speed and the gas dilution effect. On this basis, the gas field distribution under different air duct outlet parameter adjustments was simulated. Specific parameter values were obtained for different distances between the air duct outlet and the mechanized heading face, providing a new method for optimizing the gas distribution by adjusting the parameters of the air duct outlet.

  8. Handbook of simulation optimization

    CERN Document Server

    Fu, Michael C

    2014-01-01

    The Handbook of Simulation Optimization presents an overview of the state of the art of simulation optimization, providing a survey of the most well-established approaches for optimizing stochastic simulation models and a sampling of recent research advances in theory and methodology. Leading contributors cover such topics as discrete optimization via simulation, ranking and selection, efficient simulation budget allocation, random search methods, response surface methodology, stochastic gradient estimation, stochastic approximation, sample average approximation, stochastic constraints, variance reduction techniques, model-based stochastic search methods and Markov decision processes. This single volume should serve as a reference for those already in the field and as a means for those new to the field for understanding and applying the main approaches. The intended audience includes researchers, practitioners and graduate students in the business/engineering fields of operations research, management science,...

  9. Simulation-based optimization parametric optimization techniques and reinforcement learning

    CERN Document Server

    Gosavi, Abhijit

    2003-01-01

    Simulation-Based Optimization: Parametric Optimization Techniques and Reinforcement Learning introduces the evolving area of simulation-based optimization. The book's objective is two-fold: (1) It examines the mathematical governing principles of simulation-based optimization, thereby providing the reader with the ability to model relevant real-life problems using these techniques. (2) It outlines the computational technology underlying these methods. Taken together these two aspects demonstrate that the mathematical and computational methods discussed in this book do work. Broadly speaking, the book has two parts: (1) parametric (static) optimization and (2) control (dynamic) optimization. Some of the book's special features are: *An accessible introduction to reinforcement learning and parametric-optimization techniques. *A step-by-step description of several algorithms of simulation-based optimization. *A clear and simple introduction to the methodology of neural networks. *A gentle introduction to converg...

  10. Model-data fusion across ecosystems: from multisite optimizations to global simulations

    Science.gov (United States)

    Kuppel, S.; Peylin, P.; Maignan, F.; Chevallier, F.; Kiely, G.; Montagnani, L.; Cescatti, A.

    2014-11-01

    This study uses a variational data assimilation framework to simultaneously constrain a global ecosystem model with eddy covariance measurements of daily net ecosystem exchange (NEE) and latent heat (LE) fluxes from a large number of sites grouped in seven plant functional types (PFTs). It is an attempt to bridge the gap between the numerous site-specific parameter optimization works found in the literature and the generic parameterization used by most land surface models within each PFT. The present multisite approach allows deriving PFT-generic sets of optimized parameters enhancing the agreement between measured and simulated fluxes at most of the sites considered, with performances often comparable to those of the corresponding site-specific optimizations. Besides reducing the PFT-averaged model-data root-mean-square difference (RMSD) and the associated daily output uncertainty, the optimization improves the simulated CO2 balance at tropical and temperate forests sites. The major site-level NEE adjustments at the seasonal scale are reduced amplitude in C3 grasslands and boreal forests, increased seasonality in temperate evergreen forests, and better model-data phasing in temperate deciduous broadleaf forests. Conversely, the poorer performances in tropical evergreen broadleaf forests points to deficiencies regarding the modelling of phenology and soil water stress for this PFT. An evaluation with data-oriented estimates of photosynthesis (GPP - gross primary productivity) and ecosystem respiration (Reco) rates indicates distinctively improved simulations of both gross fluxes. The multisite parameter sets are then tested against CO2 concentrations measured at 53 locations around the globe, showing significant adjustments of the modelled seasonality of atmospheric CO2 concentration, whose relevance seems PFT-dependent, along with an improved interannual variability. Lastly, a global-scale evaluation with remote sensing NDVI (normalized difference vegetation index

  11. Interrelations of stress, optimism and control in older people's psychological adjustment.

    Science.gov (United States)

    Bretherton, Susan Jane; McLean, Louise Anne

    2015-06-01

    To investigate the influence of perceived stress, optimism and perceived control of internal states on the psychological adjustment of older adults. The sample consisted of 212 older adults, aged between 58 and 103 (M = 80.42 years, SD = 7.31 years), living primarily in retirement villages in Melbourne, Victoria. Participants completed the Perceived Stress Scale, Life Orientation Test-Revised, Perceived Control of Internal States Scale and the World Health Organisation Quality of Life-Bref. Optimism significantly mediated the relationship between older people's perceived stress and psychological health, and perceived control of internal states mediated the relationships among stress, optimism and psychological health. The variables explained 49% of the variance in older people's psychological adjustment. It is suggested that strategies to improve optimism and perceived control may improve the psychological adjustment of older people struggling to adapt to life's stressors. © 2014 ACOTA.

  12. Simulation-based optimization of sustainable national energy systems

    International Nuclear Information System (INIS)

    Batas Bjelić, Ilija; Rajaković, Nikola

    2015-01-01

    The goals of the EU2030 energy policy should be achieved cost-effectively by employing the optimal mix of supply and demand side technical measures, including energy efficiency, renewable energy and structural measures. In this paper, the achievement of these goals is modeled by introducing an innovative method of soft-linking of EnergyPLAN with the generic optimization program (GenOpt). This soft-link enables simulation-based optimization, guided with the chosen optimization algorithm, rather than manual adjustments of the decision vectors. In order to obtain EnergyPLAN simulations within the optimization loop of GenOpt, the decision vectors should be chosen and explained in GenOpt for scenarios created in EnergyPLAN. The result of the optimization loop is an optimal national energy master plan (as a case study, energy policy in Serbia was taken), followed with sensitivity analysis of the exogenous assumptions and with focus on the contribution of the smart electricity grid to the achievement of EU2030 goals. It is shown that the increase in the policy-induced total costs of less than 3% is not significant. This general method could be further improved and used worldwide in the optimal planning of sustainable national energy systems. - Highlights: • Innovative method of soft-linking of EnergyPLAN with GenOpt has been introduced. • Optimal national energy master plan has been developed (the case study for Serbia). • Sensitivity analysis on the exogenous world energy and emission price development outlook. • Focus on the contribution of smart energy systems to the EU2030 goals. • Innovative soft-linking methodology could be further improved and used worldwide.
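
    A sketch of the soft-linking pattern described above, under stated assumptions: an external optimizer repeatedly rewrites a simulator's input file, runs it, and reads a cost back. The executable name, file formats and the simple coordinate search are placeholders, not EnergyPLAN's or GenOpt's actual interfaces or algorithms.

    ```python
    import subprocess
    from pathlib import Path

    def run_simulation(decision_vector):
        # Write the candidate decision vector to the simulator's input file (hypothetical format),
        # call the external simulator, and read back a scalar cost. The executable name, file
        # layout and cost parsing are placeholders, not EnergyPLAN's or GenOpt's real interfaces.
        Path("scenario.txt").write_text("\n".join(f"{x:.4f}" for x in decision_vector))
        subprocess.run(["./energy_simulator", "scenario.txt", "result.txt"], check=True)
        return float(Path("result.txt").read_text())

    def coordinate_search(x0, step=0.1, iterations=50):
        # A very simple derivative-free loop standing in for the optimizer's search algorithm.
        x, best = list(x0), run_simulation(x0)
        for _ in range(iterations):
            improved = False
            for i in range(len(x)):
                for delta in (+step, -step):
                    trial = list(x)
                    trial[i] += delta
                    cost = run_simulation(trial)
                    if cost < best:
                        x, best, improved = trial, cost, True
            if not improved:
                step /= 2.0  # refine the mesh when no neighbouring point improves the cost
        return x, best

    # Example (requires the external simulator to exist):
    # best_vector, best_cost = coordinate_search([0.3, 0.5, 0.2])
    ```

    In the paper's setup, GenOpt plays the role of the search loop and EnergyPLAN plays the role of run_simulation.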

  13. Competition Leverage : How the Demand Side Affects Optimal Risk Adjustment

    NARCIS (Netherlands)

    Bijlsma, M.; Boone, J.; Zwart, Gijsbert

    2011-01-01

    We study optimal risk adjustment in imperfectly competitive health insurance markets when high-risk consumers are less likely to switch insurer than low-risk consumers. First, we find that insurers still have an incentive to select even if risk adjustment perfectly corrects for cost differences

  14. Numerical simulation and optimal design of Segmented Planar Imaging Detector for Electro-Optical Reconnaissance

    Science.gov (United States)

    Chu, Qiuhui; Shen, Yijie; Yuan, Meng; Gong, Mali

    2017-12-01

    Segmented Planar Imaging Detector for Electro-Optical Reconnaissance (SPIDER) is a cutting-edge electro-optical imaging technology to realize miniaturization and complanation of imaging systems. In this paper, the principle of SPIDER has been numerically demonstrated based on the partially coherent light theory, and a novel concept of adjustable baseline pairing SPIDER system has further been proposed. Based on the results of simulation, it is verified that the imaging quality could be effectively improved by adjusting the Nyquist sampling density, optimizing the baseline pairing method and increasing the spectral channel of demultiplexer. Therefore, an adjustable baseline pairing algorithm is established for further enhancing the image quality, and the optimal design procedure in SPIDER for arbitrary targets is also summarized. The SPIDER system with adjustable baseline pairing method can broaden its application and reduce cost under the same imaging quality.

  15. Validation of a novel laparoscopic adjustable gastric band simulator.

    Science.gov (United States)

    Sankaranarayanan, Ganesh; Adair, James D; Halic, Tansel; Gromski, Mark A; Lu, Zhonghua; Ahn, Woojin; Jones, Daniel B; De, Suvranu

    2011-04-01

    Morbid obesity accounts for more than 90,000 deaths per year in the United States. Laparoscopic adjustable gastric banding (LAGB) is the second most common weight loss procedure performed in the US and the most common in Europe and Australia. Simulation in surgical training is a rapidly advancing field that has been adopted by many to prepare surgeons for surgical techniques and procedures. The aim of our study was to determine face, construct, and content validity for a novel virtual reality laparoscopic adjustable gastric band simulator. Twenty-eight subjects were categorized into two groups (expert and novice), determined by their skill level in laparoscopic surgery. Experts consisted of subjects who had at least 4 years of laparoscopic training and operative experience. Novices consisted of subjects with medical training but with less than 4 years of laparoscopic training. The subjects used the virtual reality laparoscopic adjustable band surgery simulator. They were automatically scored according to various tasks. The subjects then completed a questionnaire to evaluate face and content validity. On a 5-point Likert scale (1 = lowest score, 5 = highest score), the mean score for visual realism was 4.00 ± 0.67 and the mean score for realism of the interface and tool movements was 4.07 ± 0.77 (face validity). There were significant differences in the performances of the two subject groups (expert and novice) based on total scores, supporting the construct validity of the virtual reality laparoscopic adjustable gastric band simulator. Our initial results demonstrate excellent face, construct, and content validity findings. To our knowledge, this is the first virtual reality simulator with haptic feedback for training residents and surgeons in the laparoscopic adjustable gastric banding procedure.

  16. CALCULATION METHODS OF OPTIMAL ADJUSTMENT OF CONTROL SYSTEM THROUGH DISTURBANCE CHANNEL

    Directory of Open Access Journals (Sweden)

    I. M. Golinko

    2014-01-01

    Full Text Available When commissioning automatic control systems, considerable attention is paid to formulas for the optimal dynamic tuning of controllers that take the dynamics of the controlled objects into account. In most cases the known formulas are oriented toward designing the control system through the setpoint (input-output) channel, yet in practically all continuous processes the main task of the controllers is the stabilization of output parameters against disturbances. Methods were developed for calculating the dynamic tuning parameters of controllers; these methods allow analog and digital controllers to be optimized with the minimization of the disturbance response taken into account. It is suggested to use the detuning and the maximum value of the controlled variable's response to the disturbance as criteria. Since the optimization of an automatic control system with proportional plus reset (PI) controllers on the disturbance channel is a unimodal problem, the main optimization algorithm is implemented with the Hooke-Jeeves method. For controller optimization through the external disturbance channel, functional dependences were obtained that express the dynamic tuning parameters of proportional plus reset controllers in terms of the dynamic characteristics of the controlled object. The obtained dependences improve the behaviour of the controllers on the external disturbance channel and thus improve the quality of regulation of transient processes. The calculation formulas provide high accuracy and are convenient to use. The suggested method requires no nomographs, which removes subjectivity from the determination of the dynamic tuning parameters of proportional plus reset controllers. The functional dependences can be used to calculate the tuning of PI controllers over a wide range of controlled-object dynamic characteristics.
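
    A minimal sketch of the Hooke-Jeeves pattern search mentioned above, applied to tuning two PI controller parameters against a disturbance-rejection criterion. The first-order process, the Euler integration and all numerical values are illustrative assumptions, not the paper's formulas.

    ```python
    # Objective: integral of squared error while a PI controller rejects a unit step disturbance
    # acting on a first-order process (toy model, not the paper's object dynamics).
    def disturbance_ise(params, dt=0.05, horizon=40.0):
        kp, ti = params
        if kp <= 0 or ti <= 0:
            return float("inf")
        y, integral, ise, t = 0.0, 0.0, 0.0, 0.0
        while t < horizon:
            error = 0.0 - y                      # setpoint is zero; the disturbance pushes y away
            integral += error * dt
            u = kp * (error + integral / ti)     # PI control law
            dydt = (-y + u + 1.0) / 2.0          # first-order process, unit disturbance, tau = 2
            y += dydt * dt
            ise += y * y * dt                    # accumulate squared error
            t += dt
        return ise

    def hooke_jeeves(f, x0, step=0.5, tol=1e-3):
        base, fbase = list(x0), f(x0)
        while step > tol:
            # Exploratory moves around the current base point, one coordinate at a time.
            trial, ftrial = list(base), fbase
            for i in range(len(trial)):
                for delta in (+step, -step):
                    cand = list(trial)
                    cand[i] += delta
                    fc = f(cand)
                    if fc < ftrial:
                        trial, ftrial = cand, fc
                        break
            if ftrial < fbase:
                base, fbase = trial, ftrial      # accept the improved point (no extrapolation step here)
            else:
                step /= 2.0                      # shrink the step when no move helps
        return base, fbase

    print(hooke_jeeves(disturbance_ise, [1.0, 5.0]))
    ```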

  17. A Three-Stage Optimization Algorithm for the Stochastic Parallel Machine Scheduling Problem with Adjustable Production Rates

    Directory of Open Access Journals (Sweden)

    Rui Zhang

    2013-01-01

    Full Text Available We consider a parallel machine scheduling problem with random processing/setup times and adjustable production rates. The objective function to be minimized consists of two parts: the first is related to due date performance (i.e., the tardiness of the jobs), while the second is related to the setting of machine speeds. Therefore, the decision variables include both the production schedule (sequences of jobs) and the production rate of each machine. The optimization process, however, is significantly complicated by the stochastic factors in the manufacturing system. To address this difficulty, a simulation-based three-stage optimization framework is presented in this paper to obtain high-quality robust solutions to the integrated scheduling problem. The first stage (crude optimization) is based on ordinal optimization theory, the second stage (finer optimization) is implemented with a metaheuristic called differential evolution, and the third stage (fine-tuning) is characterized by a perturbation-based local search. Finally, computational experiments are conducted to verify the effectiveness of the proposed approach. Sensitivity analysis and practical implications are also discussed.
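
    A sketch of the second ("finer optimization") stage under stated assumptions: differential evolution applied to a noisy, simulation-style objective, with each candidate scored as the average of several replications. The toy cost model (machine-speed cost plus expected tardiness) stands in for the paper's scheduling simulation.

    ```python
    import numpy as np
    from scipy.optimize import differential_evolution

    rng = np.random.default_rng(42)

    def simulated_cost(speeds, replications=20):
        speeds = np.asarray(speeds)
        total = 0.0
        for _ in range(replications):
            proc_times = rng.exponential(10.0, size=speeds.size) / speeds  # random processing times
            tardiness = np.maximum(proc_times - 12.0, 0.0).sum()           # due date of 12 per machine
            speed_cost = (speeds ** 2).sum()                                # faster machines cost more
            total += tardiness + 0.5 * speed_cost
        return total / replications

    bounds = [(0.5, 2.0)] * 3  # production-rate limits for three machines
    result = differential_evolution(simulated_cost, bounds, maxiter=50, seed=1, polish=False)
    print(result.x.round(3), round(result.fun, 2))
    ```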

  18. Simulated annealing method for electronic circuits design: adaptation and comparison with other optimization methods

    International Nuclear Information System (INIS)

    Berthiau, G.

    1995-10-01

    The circuit design problem consists in determining acceptable parameter values (resistors, capacitors, transistor geometries ...) which allow the circuit to meet various user-given operational criteria (DC consumption, AC bandwidth, transient times ...). This task is equivalent to a multidimensional and/or multi-objective optimization problem: n-variable functions have to be minimized in a hyper-rectangular domain; equality constraints can optionally be specified. A similar problem consists in fitting component models; in that case the optimization variables are the model parameters and one aims at minimizing a cost function built on the error between the model response and the data measured on the component. The optimization method chosen for this kind of problem is the simulated annealing method. This method, which comes from the combinatorial optimization domain, has been adapted and compared with other global optimization methods for continuous-variable problems. An efficient variable-discretization strategy and a set of complementary stopping criteria have been proposed. The different parameters of the method have been tuned on analytical functions whose minima are known, classically used in the literature. Our simulated annealing algorithm has been coupled with the open electrical simulator SPICE-PAC, whose modular structure allows the chaining of simulations required by the circuit optimization process. For high-dimensional problems, we proposed a partitioning technique which ensures proportionality between CPU time and the number of variables. To compare our method with others, we adapted three other methods from the combinatorial optimization domain - the threshold method, a genetic algorithm and the Tabu search method. The tests have been performed on the same set of test functions and the results allow a first comparison between these methods applied to continuous optimization variables. Finally, our simulated annealing program

  19. An optimal multivariable controller for transcritical CO2 refrigeration cycle with an adjustable ejector

    International Nuclear Information System (INIS)

    He, Yang; Deng, Jianqiang; Yang, Fusheng; Zhang, Zaoxiao

    2017-01-01

    Highlights: • Dynamic model for transcritical CO2 ejector refrigeration system is developed. • A model-driven optimal multivariable controller is proposed. • Gas cooler pressure and cooling capacity are tracked independently. • Maximal performance for a given load is achieved by the optimal controller. - Abstract: The fixed ejector has to work under a restricted operating condition to keep its positive effectiveness on the transcritical CO2 refrigeration cycle, and a controllable ejector will be helpful. In this paper, an optimal multivariable controller based on the dynamic model is proposed to improve transcritical CO2 refrigeration cycle with an adjustable ejector (TCRAE). A nonlinear dynamic model is first developed to model the dynamic characteristic of TCRAE. The corresponding model linearization is carried out and the simulation results reproduce transient behavior of the nonlinear model very well. Based on the developed model, an optimal multivariable controller with a tracker based linear quadratic state feedback algorithm and a predictor using steepest descent method is designed. The controller is finally applied on the experimental apparatus and the performance is verified. Using the tracker only, the gas cooler pressure and chilled water outlet temperature (cooling capacity) are well tracked rejecting the disturbances from each other. Furthermore, by the predictor, the optimal gas cooler pressure for a constant cooling capacity is actually approached on the experimental apparatus with a settling time about 700 s.
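
    A minimal sketch of the linear quadratic state-feedback idea behind the tracker: solve the continuous-time Riccati equation for a linearized model and form the feedback gain. The 2x2 system below is a hypothetical stand-in, not the TCRAE model identified in the paper.

    ```python
    import numpy as np
    from scipy.linalg import solve_continuous_are

    A = np.array([[-0.5, 0.2],
                  [0.1, -1.0]])          # hypothetical linearized plant (pressure and temperature states)
    B = np.array([[0.4, 0.0],
                  [0.0, 0.8]])           # two inputs, e.g. ejector adjustment and valve/compressor command
    Q = np.diag([10.0, 1.0])             # weight tracking of the gas cooler pressure more heavily
    R = np.diag([1.0, 1.0])              # control effort weights

    P = solve_continuous_are(A, B, Q, R)
    K = np.linalg.solve(R, B.T @ P)      # optimal state-feedback gain: u = -K x
    print(np.round(K, 3))
    ```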

  20. CPU time optimization and precise adjustment of the Geant4 physics parameters for a VARIAN 2100 C/D gamma radiotherapy linear accelerator simulation using GAMOS

    Science.gov (United States)

    Arce, Pedro; Lagares, Juan Ignacio

    2018-02-01

    We have verified the GAMOS/Geant4 simulation model of a 6 MV VARIAN Clinac 2100 C/D linear accelerator by the procedure of adjusting the initial beam parameters to fit the percentage depth dose and cross-profile dose experimental data at different depths in a water phantom. Thanks to the use of a wide range of field sizes, from 2  ×  2 cm2 to 40  ×  40 cm2, a small phantom voxel size and high statistics, fine precision in the determination of the beam parameters has been achieved. This precision has allowed us to make a thorough study of the different physics models and parameters that Geant4 offers. The three Geant4 electromagnetic physics sets of models, i.e. Standard, Livermore and Penelope, have been compared to the experiment, testing the four different models of angular bremsstrahlung distributions as well as the three available multiple-scattering models, and optimizing the most relevant Geant4 electromagnetic physics parameters. Before the fitting, a comprehensive CPU time optimization has been done, using several of the Geant4 efficiency improvement techniques plus a few more developed in GAMOS.

  1. Automatic efficiency optimization of an axial compressor with adjustable inlet guide vanes

    Science.gov (United States)

    Li, Jichao; Lin, Feng; Nie, Chaoqun; Chen, Jingyi

    2012-04-01

    The inlet attack angle of the rotor blades can be adjusted by changing the stagger angle of the inlet guide vanes (IGV), so the efficiency at each operating condition is affected. To improve efficiency, a DSP (Digital Signal Processor) controller is designed that adjusts the IGV stagger angle automatically in order to optimize the efficiency at any operating condition. The A/D signals collected include inlet static pressure, outlet static pressure, outlet total pressure, rotor speed and torque; the efficiency is calculated in the DSP, and the angle command for the stepping motor which controls the IGV is sent out through the D/A. Experimental investigations are performed on a three-stage, low-speed axial compressor with variable inlet guide vanes. It is demonstrated that the designed DSP can adjust the IGV stagger angle well online, and the efficiency under different conditions can be optimized. This DSP online adjustment scheme may provide a practical solution for improving the performance of multi-stage axial flow compressors when the operating condition varies.
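
    An illustrative sketch of the efficiency computation such a DSP loop might perform: shaft power from torque and rotor speed, aerodynamic power from volume flow and total-pressure rise. Estimating the volume flow from the inlet static-pressure depression, and all numerical values, are assumptions rather than the paper's instrumentation details.

    ```python
    import math

    RHO_AIR = 1.2          # kg/m^3
    INLET_AREA = 0.3       # m^2, hypothetical inlet annulus area
    P_AMBIENT = 101325.0   # Pa

    def efficiency(p_in_static, p_out_total, rpm, torque_nm):
        # Volume flow estimated from the inlet static-pressure depression (assumed measurement method).
        velocity = math.sqrt(max(2.0 * (P_AMBIENT - p_in_static) / RHO_AIR, 0.0))
        volume_flow = velocity * INLET_AREA                       # m^3/s
        aero_power = volume_flow * (p_out_total - P_AMBIENT)      # W, rise in total pressure
        shaft_power = torque_nm * rpm * 2.0 * math.pi / 60.0      # W, torque times angular speed
        return aero_power / shaft_power if shaft_power > 0 else 0.0

    print(round(efficiency(p_in_static=101000.0, p_out_total=102000.0, rpm=1500.0, torque_nm=40.0), 3))
    ```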

  2. Adjustment and Optimization of the Cropping Systems under Water Constraint

    Directory of Open Access Journals (Sweden)

    Pingli An

    2016-11-01

    Full Text Available The water constraint on agricultural production is receiving growing attention as the contradiction between the demand for and supply of water resources becomes increasingly sharp. How to mitigate and adapt to potential water constraints is one of the key issues for ensuring food security and achieving sustainable agriculture in the context of climate change. It has been suggested that adjustment and optimization of cropping systems could be an effective measure to improve water management and ensure food security. However, a knowledge gap still exists in how to quantify the potential water constraint and how to select appropriate cropping systems. Here, we proposed a concept of water constraint risk and developed an approach for evaluating the water constraint risks for agricultural production, performing a case study in Daxing District, Beijing, China. The results show that, over the whole growth period, the order of the water constraint risks of crops from high to low was wheat, rice, broomcorn, foxtail millet, summer soybean, summer peanut, spring corn, and summer corn, and the order of the water constraint risks of the cropping systems from high to low was winter wheat-summer grain crops, rice, broomcorn, foxtail millet, and spring corn. Our results are consistent with the actual evolution of the cropping systems, which indicates that our proposed method is a practicable way to adjust and optimize cropping systems to mitigate and adapt to potential water risks. This study provides insight into the adjustment and optimization of cropping systems under resource constraints.

  3. Towards Optimal PDE Simulations

    International Nuclear Information System (INIS)

    Keyes, David

    2009-01-01

    The Terascale Optimal PDE Solvers (TOPS) Integrated Software Infrastructure Center (ISIC) was created to develop and implement algorithms and support scientific investigations performed by DOE-sponsored researchers. These simulations often involve the solution of partial differential equations (PDEs) on terascale computers. The TOPS Center researched, developed and deployed an integrated toolkit of open-source, optimal complexity solvers for the nonlinear partial differential equations that arise in many DOE application areas, including fusion, accelerator design, global climate change and reactive chemistry. The algorithms created as part of this project were also designed to reduce current computational bottlenecks by orders of magnitude on terascale computers, enabling scientific simulation on a scale heretofore impossible.

  4. Terascale Optimal PDE Simulations

    Energy Technology Data Exchange (ETDEWEB)

    David Keyes

    2009-07-28

    The Terascale Optimal PDE Solvers (TOPS) Integrated Software Infrastructure Center (ISIC) was created to develop and implement algorithms and support scientific investigations performed by DOE-sponsored researchers. These simulations often involve the solution of partial differential equations (PDEs) on terascale computers. The TOPS Center researched, developed and deployed an integrated toolkit of open-source, optimal complexity solvers for the nonlinear partial differential equations that arise in many DOE application areas, including fusion, accelerator design, global climate change and reactive chemistry. The algorithms created as part of this project were also designed to reduce current computational bottlenecks by orders of magnitude on terascale computers, enabling scientific simulation on a scale heretofore impossible.

  5. Optimization Model for Web Based Multimodal Interactive Simulations.

    Science.gov (United States)

    Halic, Tansel; Ahn, Woojin; De, Suvranu

    2015-07-15

    This paper presents a technique for optimizing the performance of web-based multimodal interactive simulations. For such applications, where visual quality and the performance of simulations directly influence user experience, overloading of hardware resources may result in an unsatisfactory reduction in the quality of the simulation and in user satisfaction. However, optimization of simulation performance on individual hardware platforms is not practical. Hence, we present a mixed integer programming model to optimize the performance of graphical rendering and simulation performance while satisfying application-specific constraints. Our approach includes three distinct phases: identification, optimization and update. In the identification phase, the computing and rendering capabilities of the client device are evaluated using an exploratory proxy code. This data is utilized in conjunction with user-specified design requirements in the optimization phase to ensure the best possible computational resource allocation. The optimum solution is used for rendering (e.g. texture size, canvas resolution) and simulation parameters (e.g. simulation domain) in the update phase. Test results are presented on multiple hardware platforms with diverse computing and graphics capabilities to demonstrate the effectiveness of our approach.
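
    A sketch of the optimization phase under stated assumptions: choose discrete rendering and simulation settings that maximize a quality score while keeping the estimated per-frame cost within the budget measured on the client. The option lists, scores and cost model are placeholders, and brute-force enumeration stands in for the paper's mixed integer programming model.

    ```python
    from itertools import product

    # (value, quality score, estimated per-frame cost in ms) - all hypothetical
    texture_sizes = [(256, 1.0, 1.0), (512, 2.0, 2.5), (1024, 3.0, 6.0)]
    canvas_resolutions = [(480, 1.0, 1.5), (720, 2.0, 3.5), (1080, 3.0, 7.0)]
    sim_domains = [(16, 1.0, 2.0), (32, 2.0, 5.0), (64, 3.0, 11.0)]

    def best_configuration(budget):
        best = None
        for tex, canvas, dom in product(texture_sizes, canvas_resolutions, sim_domains):
            quality = tex[1] + canvas[1] + 2.0 * dom[1]     # weight simulation fidelity more heavily
            cost = tex[2] + canvas[2] + dom[2]              # estimated total per-frame cost
            if cost <= budget and (best is None or quality > best[0]):
                best = (quality, tex[0], canvas[0], dom[0], cost)
        return best

    # The budget would come from the identification phase (proxy-code benchmark on the client).
    print(best_configuration(budget=16.0))
    ```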

  6. Coupled multiscale simulation and optimization in nanoelectronics

    CERN Document Server

    2015-01-01

    Designing complex integrated circuits relies heavily on mathematical methods and calls for suitable simulation and optimization tools. The current design approach involves simulations and optimizations in different physical domains (device, circuit, thermal, electromagnetic) and in a range of electrical engineering disciplines (logic, timing, power, crosstalk, signal integrity, system functionality). COMSON was a Marie Curie Research Training Network created to meet these new scientific and training challenges by (a) developing new descriptive models that take these mutual dependencies into account, (b) combining these models with existing circuit descriptions in new simulation strategies, and (c) developing new optimization techniques that will accommodate new designs. The book presents the main project results in the fields of PDAE modeling and simulation, model order reduction techniques and optimization, based on merging the know-how of three major European semiconductor companies with the combined expe...

  7. Westinghouse waste simulation and optimization software tool

    International Nuclear Information System (INIS)

    Mennicken, Kim; Aign, Jorg

    2013-01-01

    Applications for dynamic simulation can be found in virtually all areas of process engineering. The tangible benefits of using dynamic simulation can be seen in tighter design, smoother start-ups and optimized operation. Thus, proper implementation of dynamic simulation can deliver substantial benefits. These benefits are typically derived from improved process understanding. Simulation gives confidence in evidence based decisions and enables users to try out lots of 'what if' scenarios until one is sure that a decision is the right one. In radioactive waste treatment tasks different kinds of waste with different volumes and properties have to be treated, e.g. from NPP operation or D and D activities. Finding a commercially and technically optimized waste treatment concept is a time consuming and difficult task. The Westinghouse Waste Simulation and Optimization Software Tool will enable the user to quickly generate reliable simulation models of various process applications based on equipment modules. These modules can be built with ease and be integrated into the simulation model. This capability ensures that this tool is applicable to typical waste treatment tasks. The identified waste streams and the selected treatment methods are the basis of the simulation and optimization software. After implementing suitable equipment data into the model, process requirements and waste treatment data are fed into the simulation to finally generate primary simulation results. A sensitivity analysis of automated optimization features of the software generates the lowest possible lifecycle cost for the simulated waste stream. In combination with proven waste management equipments and integrated waste management solutions, this tool provides reliable qualitative results that lead to an effective planning and minimizes the total project planning risk of any waste management activity. It is thus the ideal tool for designing a waste treatment facility in an optimum manner

  8. Westinghouse waste simulation and optimization software tool

    Energy Technology Data Exchange (ETDEWEB)

    Mennicken, Kim; Aign, Jorg [Westinghouse Electric Germany GmbH, Hamburg (Germany)

    2013-07-01

    Applications for dynamic simulation can be found in virtually all areas of process engineering. The tangible benefits of using dynamic simulation can be seen in tighter design, smoother start-ups and optimized operation. Thus, proper implementation of dynamic simulation can deliver substantial benefits. These benefits are typically derived from improved process understanding. Simulation gives confidence in evidence based decisions and enables users to try out lots of 'what if' scenarios until one is sure that a decision is the right one. In radioactive waste treatment tasks different kinds of waste with different volumes and properties have to be treated, e.g. from NPP operation or D and D activities. Finding a commercially and technically optimized waste treatment concept is a time consuming and difficult task. The Westinghouse Waste Simulation and Optimization Software Tool will enable the user to quickly generate reliable simulation models of various process applications based on equipment modules. These modules can be built with ease and be integrated into the simulation model. This capability ensures that this tool is applicable to typical waste treatment tasks. The identified waste streams and the selected treatment methods are the basis of the simulation and optimization software. After implementing suitable equipment data into the model, process requirements and waste treatment data are fed into the simulation to finally generate primary simulation results. A sensitivity analysis of automated optimization features of the software generates the lowest possible lifecycle cost for the simulated waste stream. In combination with proven waste management equipments and integrated waste management solutions, this tool provides reliable qualitative results that lead to an effective planning and minimizes the total project planning risk of any waste management activity. It is thus the ideal tool for designing a waste treatment facility in an optimum manner

  9. Simulation-based optimization of thermal systems

    International Nuclear Information System (INIS)

    Jaluria, Yogesh

    2009-01-01

    This paper considers the design and optimization of thermal systems on the basis of the mathematical and numerical modeling of the system. Many complexities are often encountered in practical thermal processes and systems, making the modeling challenging and involved. These include property variations, complicated regions, combined transport mechanisms, chemical reactions, and intricate boundary conditions. The paper briefly presents approaches that may be used to accurately simulate these systems. Validation of the numerical model is a particularly critical aspect and is discussed. It is important to couple the modeling with the system performance, design, control and optimization. This aspect, which has often been ignored in the literature, is considered in this paper. Design of thermal systems based on concurrent simulation and experimentation is also discussed in terms of dynamic data-driven optimization methods. Optimization of the system and of the operating conditions is needed to minimize costs and improve product quality and system performance. Different optimization strategies that are currently used for thermal systems are outlined, focusing on new and emerging strategies. Of particular interest is multi-objective optimization, since most thermal systems involve several important objective functions, such as heat transfer rate and pressure in electronic cooling systems. A few practical thermal systems are considered in greater detail to illustrate these approaches and to present typical simulation, design and optimization results

  10. Method for optimum determination of adjustable parameters in the boiling water reactor core simulator using operating data on flux distribution

    International Nuclear Information System (INIS)

    Kiguchi, T.; Kawai, T.

    1975-01-01

    A method has been developed to optimally and automatically determine the adjustable parameters of the boiling water reactor three-dimensional core simulator FLARE. The steepest gradient method is adopted for the optimization. The parameters are adjusted to best fit the operating data on power distribution measured by traversing in-core probes (TIP). The average error in the calculated TIP readings normalized by the core average is 0.053 at the rated power. The k-infinity correction term has also been derived theoretically to reduce the relatively large error in the calculated TIP readings near the tips of control rods, which is induced by the coarseness of mesh points. By introducing this correction, the average error decreases to 0.047. The void-quality relation is recognized as a function of coolant flow rate. The relation is estimated to fit the measured distributions of TIP reading at the partial power states
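
    A minimal sketch of the fitting idea described above: adjust a simulator's free parameters by steepest descent on the mismatch between computed and measured detector readings. The analytic "power shape" model and the hypothetical TIP readings below stand in for the FLARE simulator and the plant data.

    ```python
    import numpy as np

    measured = np.array([0.92, 1.05, 1.10, 0.98, 0.95])   # hypothetical normalized TIP readings

    def run_simulator(params):
        # Toy stand-in for a core simulator: a smooth axial power shape with two free parameters.
        a, b = params
        x = np.linspace(0.0, 1.0, measured.size)
        return 1.0 + a * np.sin(np.pi * x) + b * (x - 0.5)

    def rms_error(params):
        return float(np.sqrt(np.mean((run_simulator(params) - measured) ** 2)))

    def steepest_descent(params, rate=0.1, h=1e-4, iterations=500):
        params = np.asarray(params, dtype=float)
        for _ in range(iterations):
            grad = np.zeros_like(params)
            for i in range(params.size):          # forward-difference gradient of the error
                bumped = params.copy()
                bumped[i] += h
                grad[i] = (rms_error(bumped) - rms_error(params)) / h
            params -= rate * grad                  # step against the gradient
        return params

    fit = steepest_descent([0.0, 0.0])
    print(fit.round(4), round(rms_error(fit), 4))
    ```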

  11. An Indirect Simulation-Optimization Model for Determining Optimal TMDL Allocation under Uncertainty

    Directory of Open Access Journals (Sweden)

    Feng Zhou

    2015-11-01

    Full Text Available An indirect simulation-optimization model framework with enhanced computational efficiency and risk-based decision-making capability was developed to determine optimal total maximum daily load (TMDL) allocation under uncertainty. To convert the traditional direct simulation-optimization model into our indirect equivalent model framework, we proposed a two-step strategy: (1) application of interval regression equations derived by a Bayesian recursive regression tree (BRRT v2) algorithm, which approximates the original hydrodynamic and water-quality simulation models and accurately quantifies the inherent nonlinear relationship between nutrient load reductions and the credible interval of algal biomass with a given confidence interval; and (2) incorporation of the calibrated interval regression equations into an uncertain optimization framework, which is further converted to our indirect equivalent framework by the enhanced-interval linear programming (EILP) method and provides approximate-optimal solutions at various risk levels. The proposed strategy was applied to the Swift Creek Reservoir's nutrient TMDL allocation (Chesterfield County, VA) to identify the minimum nutrient load allocations required from eight sub-watersheds to ensure compliance with user-specified chlorophyll criteria. Our results indicated that the BRRT-EILP model could identify critical sub-watersheds faster than the traditional one and requires lower reduction of nutrient loadings compared to traditional stochastic simulation and trial-and-error (TAE) approaches. This suggests that our proposed framework performs better in optimal TMDL development compared to the traditional simulation-optimization models and provides extreme and non-extreme tradeoff analysis under uncertainty for risk-based decision making.

  12. Cogeneration system simulation/optimization

    International Nuclear Information System (INIS)

    Puppa, B.A.; Chandrashekar, M.

    1992-01-01

    Companies are increasingly turning to computer software programs to improve and streamline the analysis of cogeneration systems. This paper introduces a computer program which originated with research at the University of Waterloo. The program can simulate and optimize any type of layout of cogeneration plant. An application of the program to a cogeneration feasibility study for a university campus is described. The Steam and Power Plant Optimization System (SAPPOS) is a PC software package which allows users to model any type of steam/power plant on a component-by-component basis. Individual energy/steam balances can be done quickly to model any scenario. A typical days per month cogeneration simulation can also be carried out to provide a detailed monthly cash flow and energy forecast. This paper reports that SAPPOS can be used for scoping, feasibility, and preliminary design work, along with financial studies, gas contract studies, and optimizing the operation of completed plants. In the feasibility study presented, SAPPOS is used to evaluate both diesel engine and gas turbine combined cycle options.

  13. Concurrently adjusting interrelated control parameters to achieve optimal engine performance

    Science.gov (United States)

    Jiang, Li; Lee, Donghoon; Yilmaz, Hakan; Stefanopoulou, Anna

    2015-12-01

    Methods and systems for real-time engine control optimization are provided. A value of an engine performance variable is determined, a value of a first operating condition and a value of a second operating condition of a vehicle engine are detected, and initial values for a first engine control parameter and a second engine control parameter are determined based on the detected first operating condition and the detected second operating condition. The initial values for the first engine control parameter and the second engine control parameter are adjusted based on the determined value of the engine performance variable to cause the engine performance variable to approach a target engine performance variable. In order to cause the engine performance variable to approach the target engine performance variable, adjusting the initial value for the first engine control parameter necessitates a corresponding adjustment of the initial value for the second engine control parameter.

  14. Modeling, simulation, and optimization of a front-end system for acetylene hydrogenation reactors

    Directory of Open Access Journals (Sweden)

    R. Gobbo

    2004-12-01

    Full Text Available The modeling, simulation, and dynamic optimization of an industrial reaction system for acetylene hydrogenation are discussed in the present work. The process consists of three adiabatic fixed-bed reactors, in series, with interstage cooling. These reactors are located after the compression and the caustic scrubbing sections of an ethylene plant, characterizing a front-end system; in contrast to the tail-end system where the reactors are placed after the de-ethanizer unit. The acetylene conversion and selectivity profiles for the reactors are optimized, taking into account catalyst deactivation and process constraints. A dynamic optimal temperature profile that maximizes ethylene production and meets product specifications is obtained by controlling the feed and intercoolers temperatures. An industrial acetylene hydrogenation system is used to provide the necessary data to adjust kinetics and transport parameters and to validate the approach.

  15. Global optimization and simulated annealing

    NARCIS (Netherlands)

    Dekkers, A.; Aarts, E.H.L.

    1988-01-01

    In this paper we are concerned with global optimization, which can be defined as the problem of finding points on a bounded subset of Rn in which some real valued function f assumes its optimal (i.e. maximal or minimal) value. We present a stochastic approach which is based on the simulated annealing

  16. Optimization of startup and shutdown operation of simulated moving bed chromatographic processes.

    Science.gov (United States)

    Li, Suzhou; Kawajiri, Yoshiaki; Raisch, Jörg; Seidel-Morgenstern, Andreas

    2011-06-24

    This paper presents new multistage optimal startup and shutdown strategies for simulated moving bed (SMB) chromatographic processes. The proposed concept allows transient operating conditions to be adjusted stage-wise, and provides the capability to improve transient performance and to fulfill product quality specifications simultaneously. A specially tailored decomposition algorithm is developed to ensure computational tractability of the resulting dynamic optimization problems. By examining the transient operation of a literature separation example characterized by a nonlinear competitive isotherm, the feasibility of the solution approach is demonstrated, and the performance of the conventional and multistage optimal transient regimes is evaluated systematically. The quantitative results clearly show that the optimal operating policies not only significantly reduce both the duration of the transient phase and the desorbent consumption, but also enable on-spec production even during startup and shutdown periods. With the aid of the developed transient procedures, short-term separation campaigns with small batch sizes can be performed more flexibly and efficiently by SMB chromatography. Copyright © 2011 Elsevier B.V. All rights reserved.

  17. GPU-accelerated CFD Simulations for Turbomachinery Design Optimization

    NARCIS (Netherlands)

    Aissa, M.H.

    2017-01-01

    Design optimization relies heavily on time-consuming simulations, especially when using gradient-free optimization methods. These methods require a large number of simulations in order to get a remarkable improvement over reference designs, which are nowadays based on the accumulated engineering

  18. Performance Optimization of the ATLAS Detector Simulation

    CERN Document Server

    AUTHOR|(CDS)2091018

    In the thesis at hand the current performance of the ATLAS detector simulation, part of the Athena framework, is analyzed and possible optimizations are examined. For this purpose the event based sampling profiler VTune Amplifier by Intel is utilized. As the most important metric to measure improvements, the total execution time of the simulation of $t\\bar{t}$ events is also considered. All efforts are focused on structural changes, which do not influence the simulation output and can be attributed to CPU specific issues, especially front end stalls and vectorization. The most promising change is the activation of profile guided optimization for Geant4, which is a critical external dependency of the simulation. Profile guided optimization gives an average improvement of $8.9\\%$ and $10.0\\%$ for the two considered cases at the cost of one additional compilation (instrumented binaries) and execution (training to obtain profiling data) at build time.

  19. A Framework for the Optimization of Discrete-Event Simulation Models

    Science.gov (United States)

    Joshi, B. D.; Unal, R.; White, N. H.; Morris, W. D.

    1996-01-01

    With the growing use of computer modeling and simulation, in all aspects of engineering, the scope of traditional optimization has to be extended to include simulation models. Some unique aspects have to be addressed while optimizing via stochastic simulation models. The optimization procedure has to explicitly account for the randomness inherent in the stochastic measures predicted by the model. This paper outlines a general purpose framework for optimization of terminating discrete-event simulation models. The methodology combines a chance constraint approach for problem formulation, together with standard statistical estimation and analyses techniques. The applicability of the optimization framework is illustrated by minimizing the operation and support resources of a launch vehicle, through a simulation model.
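
    A small sketch of the chance-constraint idea: a design counts as feasible only if the probability of meeting a stochastic requirement, estimated from simulation replications, clears a target level. The toy turnaround-time simulation and all numbers are assumptions for illustration.

    ```python
    import random

    random.seed(7)

    def simulate_turnaround(crews):
        # Hypothetical terminating simulation: processing time shrinks with more crews but is noisy.
        return random.gauss(40.0 / crews + 2.0, 1.5)

    def meets_chance_constraint(crews, limit=12.0, alpha=0.90, replications=200):
        hits = sum(simulate_turnaround(crews) <= limit for _ in range(replications))
        p_hat = hits / replications
        std_err = (p_hat * (1.0 - p_hat) / replications) ** 0.5
        # Require the estimated probability, minus one standard error, to clear the target level.
        return p_hat - std_err >= alpha

    feasible = [c for c in range(2, 9) if meets_chance_constraint(c)]
    print("smallest feasible resource level:", min(feasible) if feasible else "none found")
    ```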

  20. Optimization of Operations Resources via Discrete Event Simulation Modeling

    Science.gov (United States)

    Joshi, B.; Morris, D.; White, N.; Unal, R.

    1996-01-01

    The resource levels required for operation and support of reusable launch vehicles are typically defined through discrete event simulation modeling. Minimizing these resources constitutes an optimization problem involving discrete variables and simulation. Conventional approaches to solve such optimization problems involving integer valued decision variables are the pattern search and statistical methods. However, in a simulation environment that is characterized by search spaces of unknown topology and stochastic measures, these optimization approaches often prove inadequate. In this paper, we have explored the applicability of genetic algorithms to the simulation domain. Genetic algorithms provide a robust search strategy that does not require continuity and differentiability of the problem domain. The genetic algorithm successfully minimized the operation and support activities for a space vehicle, through a discrete event simulation model. The practical issues associated with simulation optimization, such as stochastic variables and constraints, were also taken into consideration.
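
    A minimal genetic-algorithm sketch in the spirit described above: integer resource levels are evolved against a stochastic simulation objective averaged over replications. The fitness model (resource cost plus a noisy delay penalty) is a placeholder, not the launch-vehicle model.

    ```python
    import random

    random.seed(3)
    N_RESOURCES, LOW, HIGH = 4, 1, 10

    def simulate_cost(levels, replications=10):
        total = 0.0
        for _ in range(replications):
            delay_penalty = sum(random.expovariate(level) * 50.0 for level in levels)
            total += sum(levels) * 5.0 + delay_penalty        # resource cost + random delay cost
        return total / replications

    def evolve(pop_size=30, generations=40, mutation_rate=0.2):
        population = [[random.randint(LOW, HIGH) for _ in range(N_RESOURCES)] for _ in range(pop_size)]
        for _ in range(generations):
            scored = sorted(population, key=simulate_cost)
            parents = scored[: pop_size // 2]                  # truncation selection
            children = []
            while len(children) < pop_size - len(parents):
                a, b = random.sample(parents, 2)
                cut = random.randint(1, N_RESOURCES - 1)       # one-point crossover
                child = a[:cut] + b[cut:]
                if random.random() < mutation_rate:            # random-reset mutation of one gene
                    child[random.randrange(N_RESOURCES)] = random.randint(LOW, HIGH)
                children.append(child)
            population = parents + children
        return min(population, key=simulate_cost)

    best = evolve()
    print(best, round(simulate_cost(best, replications=200), 1))
    ```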

  1. Exploiting Expert Knowledge to Enhance Simulation-based Optimization of Environmental Remediation Systems

    Science.gov (United States)

    Reslink, C. F.; Matott, L. S.

    2012-12-01

    Designing cost-effective systems to safeguard national water supplies from contaminated sites is often aided by simulation-based optimization - where a flow or transport model is linked with an "off-the-shelf" global optimization search algorithm. However, achieving good performance from these types of optimizers within a reasonable computational budget has proven to be difficult. Therefore, this research seeks to boost optimization efficiency by augmenting search procedures with non-traditional information, such as site-specific knowledge and practitioner rules-of-thumb. An example application involving pump-and-treat optimization is presented in which a series of extraction wells are to be installed to intercept pollutants at a contaminated site in Billings, Montana. Selected heuristic algorithms (e.g. Genetic Algorithm) are interfaced with a rules engine that makes inline adjustments to the well locations of candidate pump-and-treat designs. If necessary, the rules engine modifies a given pump-and-treat design so that: (1) wells are placed within plume boundaries; and (2) well placement is biased toward areas where, if left untreated, the plume is predicted to spread most rapidly. Results suggest that incorporating this kind of expert knowledge can significantly increase the search efficiency of many popular global optimizers.
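
    A sketch of the rules-engine idea under stated assumptions: before a candidate design is simulated, simple domain rules repair it, moving wells inside the plume footprint and nudging them toward the area where the plume spreads fastest. The plume bounds, hotspot and coordinates are hypothetical.

    ```python
    def repair_design(wells, plume_bounds, hotspot, pull=0.25):
        (xmin, xmax), (ymin, ymax) = plume_bounds
        repaired = []
        for x, y in wells:
            # Rule 1: clamp the well back inside the plume bounding box.
            x = min(max(x, xmin), xmax)
            y = min(max(y, ymin), ymax)
            # Rule 2: bias placement toward the area where the plume spreads most rapidly.
            x += pull * (hotspot[0] - x)
            y += pull * (hotspot[1] - y)
            repaired.append((round(x, 1), round(y, 1)))
        return repaired

    candidate = [(5.0, 120.0), (42.0, 55.0), (80.0, 10.0)]
    print(repair_design(candidate, plume_bounds=((20.0, 70.0), (30.0, 90.0)), hotspot=(45.0, 60.0)))
    ```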

  2. Constrained optimization via simulation models for new product innovation

    Science.gov (United States)

    Pujowidianto, Nugroho A.

    2017-11-01

    We consider the problem of constrained optimization, where decision makers aim to optimize a primary performance measure while constraining secondary performance measures. This paper provides a brief overview of stochastically constrained optimization via discrete event simulation. Most review papers tend to be methodology-based; this review attempts to be problem-based, as decision makers may have already decided on the problem formulation. We consider constrained optimization models because there are usually constraints on secondary performance measures as trade-offs in new product development. The paper starts by laying out the different possible methods and the reasons for using constrained optimization via simulation models. It then reviews different simulation optimization approaches to constrained optimization, depending on the number of decision variables, the type of constraints, and the risk preferences of the decision makers in handling uncertainties.

  3. Optimization of Simulated Inventory Systems : OptQuest and Alternatives

    OpenAIRE

    Kleijnen, J.P.C.; Wan, J.

    2006-01-01

    This article illustrates simulation optimization through an (s, S) inventory management system. In this system, the goal function to be minimized is the expected value of specific inventory costs. Moreover, specific constraints must be satisfied for some random simulation responses, namely the service or fill rate, and for some deterministic simulation inputs, namely the constraint s optimization methods, including the popular OptQuest method. The optimal...
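
    As an illustration only (the article's own model and the OptQuest settings are not reproduced here), a minimal (s, S) inventory simulation with a grid search over integer s < S and a fill-rate constraint might look like the following sketch; the demand distribution, costs and the instantaneous-replenishment assumption are all invented.

      # Minimal sketch of (s, S) simulation optimization: grid search over integer
      # s < S, minimizing average inventory cost subject to a fill-rate constraint.
      import random

      def simulate(s, S, periods=200, hold=1.0, order_cost=30.0):
          inv, cost, demand_total, demand_met = S, 0.0, 0, 0
          for _ in range(periods):
              demand = random.randint(0, 10)
              demand_total += demand
              demand_met += min(inv, demand)
              inv = max(0, inv - demand)
              cost += hold * inv
              if inv < s:                  # reorder up to S (instantaneous delivery)
                  cost += order_cost
                  inv = S
          return cost / periods, demand_met / demand_total   # avg cost, fill rate

      best = None
      for S_level in range(10, 60):
          for s_level in range(1, S_level):    # deterministic constraint s < S
              cost, fill = simulate(s_level, S_level)
              if fill >= 0.95 and (best is None or cost < best[0]):
                  best = (cost, s_level, S_level)
      print("best (cost, s, S):", best)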

  4. Minimizing the Discrepancy between Simulated and Historical Failures in Turbine Engines: A Simulation-Based Optimization Method

    Directory of Open Access Journals (Sweden)

    Ahmed Kibria

    2015-01-01

    Full Text Available The reliability modeling of a module in a turbine engine requires knowledge of its failure rate, which can be estimated by identifying statistical distributions describing the percentage of failure per component within the turbine module. The correct definition of the failure statistical behavior per component is highly dependent on the engineer's skills and may present significant discrepancies with respect to the historical data. There is no formal methodology to approach this problem, and a large number of labor hours are spent trying to reduce the discrepancy by manually adjusting the distribution's parameters. This paper addresses this problem and provides a simulation-based optimization method for the minimization of the discrepancy between the simulated and the historical percentage of failures for turbine engine components. The proposed methodology optimizes the parameter values of the component's failure statistical distributions within the component's likelihood confidence bounds. A complete testing of the proposed method is performed on a turbine engine case study. The method can be considered as a decision-making tool for maintenance, repair, and overhaul companies and will potentially reduce the cost of labor associated with finding the appropriate value of the distribution parameters for each component/failure mode in the model and increase the accuracy in the prediction of the mean time to failures (MTTF).
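
    The idea of tuning distribution parameters within confidence bounds to match a historical failure percentage can be sketched as a small bounded minimization. The example below is not the paper's model: the Weibull choice, the bounds standing in for likelihood confidence limits, and all numerical values are assumptions.

      # Illustrative sketch: tune a component's Weibull parameters, within assumed
      # confidence bounds, so the simulated failure percentage matches history.
      import numpy as np
      from scipy.optimize import minimize
      from scipy.stats import weibull_min

      historical_pct = 0.30          # assumed: 30% of failures from this component
      mission_time = 1000.0          # assumed observation window (hours)
      other_failure_rate = 1.2e-3    # assumed combined rate of other failure modes

      def simulated_pct(params):
          shape, scale = params
          p_comp = weibull_min.cdf(mission_time, shape, scale=scale)
          p_other = 1.0 - np.exp(-other_failure_rate * mission_time)
          return p_comp / (p_comp + p_other)

      def discrepancy(params):
          return (simulated_pct(params) - historical_pct) ** 2

      # Bounds stand in for the likelihood confidence bounds of the fitted distribution.
      result = minimize(discrepancy, x0=[1.5, 4000.0],
                        bounds=[(1.0, 3.0), (2000.0, 8000.0)], method="L-BFGS-B")
      print("tuned shape/scale:", result.x, "discrepancy:", result.fun)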

  5. A design of calibration single star simulator with adjustable magnitude and optical spectrum output system

    Science.gov (United States)

    Hu, Guansheng; Zhang, Tao; Zhang, Xuan; Shi, Gentai; Bai, Haojie

    2018-03-01

    In order to achieve multi-color-temperature and multi-magnitude output, with both magnitude and temperature adjustable in real time, a new type of calibration single star simulator with adjustable magnitude and optical spectrum output was designed in this article. A xenon lamp and a halogen tungsten lamp were used as light sources. The control of the star's spectral band and color temperature was realized by combining multiple narrow-band spectral beams of varying intensity. When light with different spectral characteristics and color temperatures enters the magnitude regulator, the light energy attenuation is controlled by adjusting the luminosity. This method fully satisfies the requirements of a calibration single star simulator with adjustable magnitude and optical spectrum output, achieving the desired adjustability of magnitude and spectrum.

  6. CASTING IMPROVEMENT BASED ON METAHEURISTIC OPTIMIZATION AND NUMERICAL SIMULATION

    Directory of Open Access Journals (Sweden)

    Radomir Radiša

    2017-12-01

    Full Text Available This paper presents the use of metaheuristic optimization techniques to support the improvement of the casting process. Genetic Algorithm (GA), Ant Colony Optimization (ACO), Simulated Annealing (SA) and Particle Swarm Optimization (PSO) have been considered as optimization tools to define the geometry of the casting part's feeder. The proposed methodology has been demonstrated in the design of the feeder for casting a Pelton turbine bucket. The results of the optimization are the dimensional characteristics of the feeder, and the best result from all the implemented optimization processes has been adopted. Numerical simulation has been used to verify the validity of the presented design methodology and the feeding system optimization in the casting system of the Pelton turbine bucket.

  7. A Simulation Framework for Optimal Energy Storage Sizing

    Directory of Open Access Journals (Sweden)

    Carlos Suazo-Martínez

    2014-05-01

    Full Text Available Despite the increasing interest in Energy Storage Systems (ESS), quantification of their technical and economical benefits remains a challenge. To assess the use of ESS, a simulation approach for ESS optimal sizing is presented. The algorithm is based on an adapted Unit Commitment, including ESS operational constraints, and the use of high performance computing (HPC). Multiple short-term simulations are carried out within a multiple-year horizon. Evaluation is performed for Chile's Northern Interconnected Power System (SING). The authors show that a single-year evaluation could lead to sub-optimal results when evaluating optimal ESS size. Hence, it is advisable to perform long-term evaluations of ESS. Additionally, the importance of detailed simulation for adequate assessment of ESS contributions and to fully capture storage value is also discussed. Furthermore, the robustness of the optimal sizing approach is evaluated by means of a sensitivity analysis. The results suggest that regulatory frameworks should recognize multiple value streams from storage in order to encourage greater ESS integration.

  8. Conceptual Model for Simulating the Adjustments of Bankfull Characteristics in the Lower Yellow River, China

    Directory of Open Access Journals (Sweden)

    Yuanjian Wang

    2014-01-01

    Full Text Available We present a conceptual model for simulating the temporal adjustments in the banks of the Lower Yellow River (LYR). Basic conservation equations for mass, friction, and sediment transport capacity and the Exner equation were adopted to simulate the hydrodynamics underlying fluvial processes. The relationship between the changing rates in bankfull width and depth, derived from quasi-universal hydraulic geometries, was used as a closure for the hydrodynamic equations. On inputting the daily flow discharge and sediment load, the conceptual model successfully simulated the 30-year adjustments in the bankfull geometries of typical reaches of the LYR. The square of the correlation coefficient reached 0.74 for Huayuankou Station in the multiple-thread reach and exceeded 0.90 for Lijin Station in the meandering reach. This proposed model allows multiple dependent variables and the input of daily hydrological data for long-term simulations. It links the hydrodynamic and geomorphic processes in a fluvial river and has potential applicability to fluvial rivers undergoing significant adjustments.

  9. Simulated annealing algorithm for optimal capital growth

    Science.gov (United States)

    Luo, Yong; Zhu, Bo; Tang, Yong

    2014-08-01

    We investigate the problem of dynamic optimal capital growth of a portfolio. A general framework was developed in which one strives to maximize the expected logarithmic utility of the long-term growth rate. Exact optimization algorithms run into difficulties in this framework, and this motivates the application of a simulated annealing algorithm to optimize the capital growth of a given portfolio. Empirical results with real financial data indicate that the approach is promising for capital growth portfolios.
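
    A minimal sketch of the idea, on synthetic data rather than the paper's empirical data set, is a simulated annealing search over portfolio weights that maximizes the mean logarithm of portfolio growth across sampled return scenarios; the asset count, return model and cooling schedule are assumptions.

      # Minimal sketch of simulated annealing for capital growth: search portfolio
      # weights that maximize the mean log of portfolio growth on synthetic returns.
      import numpy as np

      rng = np.random.default_rng(0)
      returns = rng.normal(0.005, 0.04, size=(1000, 4))   # assumed 4 assets, 1000 periods

      def log_growth(w):
          return np.mean(np.log1p(returns @ w))

      def project(w):
          w = np.clip(w, 0.0, None)                       # long-only weights
          return w / w.sum() if w.sum() > 0 else np.full(len(w), 1.0 / len(w))

      w = np.full(4, 0.25)
      best_w, best_val, temperature = w, log_growth(w), 1.0
      for step in range(5000):
          candidate = project(w + rng.normal(0.0, 0.05, size=4))
          delta = log_growth(candidate) - log_growth(w)
          if delta > 0 or rng.random() < np.exp(delta / temperature):
              w = candidate
              if log_growth(w) > best_val:
                  best_w, best_val = w, log_growth(w)
          temperature *= 0.999                            # geometric cooling schedule
      print("best weights:", np.round(best_w, 3), "mean log growth:", best_val)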

  10. MODELLING, SIMULATING AND OPTIMIZING BOILERS

    DEFF Research Database (Denmark)

    Sørensen, Kim; Condra, Thomas Joseph; Houbak, Niels

    2004-01-01

    In the present work a framework for optimizing the design of boilers for dynamic operation has been developed. A cost function to be minimized during the optimization has been formulated and for the present design variables related to the Boiler Volume and the Boiler load Gradient (i.e. firing rate...... on the boiler) have been defined. Furthermore a number of constraints related to: minimum and maximum boiler load gradient, minimum boiler size, Shrinking and Swelling and Steam Space Load have been defined. For defining the constraints related to the required boiler volume a dynamic model for simulating the boiler...... performance has been developed. Outputs from the simulations are shrinking and swelling of water level in the drum during for example a start-up of the boiler, these figures combined with the requirements with respect to allowable water level fluctuations in the drum defines the requirements with respect to drum...

  11. Optimizing Chromatographic Separation: An Experiment Using an HPLC Simulator

    Science.gov (United States)

    Shalliker, R. A.; Kayillo, S.; Dennis, G. R.

    2008-01-01

    Optimization of a chromatographic separation within the time constraints of a laboratory session is practically impossible. However, by employing an HPLC simulator, experiments can be designed that allow students to develop an appreciation of the complexities involved in optimization procedures. In the present exercise, an HPLC simulator from "JCE…

  12. A Micro Dynamically Tuned Gyroscope with Adjustable Static Capacitance

    Directory of Open Access Journals (Sweden)

    Lun Kong

    2013-02-01

    Full Text Available This paper presents a novel micro dynamically tuned gyroscope (MDTG) with adjustable static capacitance. First, the principle of the MDTG is theoretically analyzed. Next, some simulations under the optimized structure parameters are given as a reference for the mask design of the rotor wafer and electrode plates. As two key components, the process flows of the rotor wafer and electrode plates are described in detail. All the scanning electron microscopy (SEM) photos show that the fabrication process is effective and optimized. Then, an assembly model is designed for the static-capacitance-adjustable MDTG, whose static capacitance can be changed by rotating the lower electrode plate support and substituting gasket rings of different thicknesses. Thus, the scale factor is easily changeable. Afterwards, the digitalized closed-loop measurement circuit is simulated. The discrete correction and decoupling modules are designed to make the closed loop stable and the cross-coupling effect small. The dual-axis closed-loop system bandwidths can reach more than 60 Hz and the dual-axis scale factors are completely symmetrical. All the simulation results demonstrate that the proposed fabrication of the MDTG can meet the application requirements. Finally, the paper presents the test results of static and dynamic capacitance values, which are consistent with the simulation values.

  13. Modeling, simulation and optimization of bipedal walking

    CERN Document Server

    Berns, Karsten

    2013-01-01

    The model-based investigation of motions of anthropomorphic systems is an important interdisciplinary research topic involving specialists from many fields such as Robotics, Biomechanics, Physiology, Orthopedics, Psychology, Neurosciences, Sports, Computer Graphics and Applied Mathematics. This book presents a study of basic locomotion forms such as walking and running, which are of particular interest due to the high demands on dynamic coordination, actuator efficiency and balance control. Mathematical models and numerical simulation and optimization techniques are explained, in combination with experimental data, which can help to better understand the basic underlying mechanisms of these motions and to improve them. Example topics treated in this book are: modeling techniques for anthropomorphic bipedal walking systems; optimized walking motions for different objective functions; identification of objective functions from measurements; simulation and optimization approaches for humanoid robots; biologically inspired con...

  14. Optimization and simulation of tandem column supercritical fluid chromatography separations using column back pressure as a unique parameter.

    Science.gov (United States)

    Wang, Chunlei; Tymiak, Adrienne A; Zhang, Yingru

    2014-04-15

    Tandem column supercritical fluid chromatography (SFC) has been demonstrated to be a useful technique for resolving complex mixtures by serially coupling two columns of different selectivity. The overall selectivity of a tandem column separation is the retention-time-weighted average of the selectivity from each coupled column. Currently, method development merely relies on extensive screenings and is often a hit-or-miss process. No attention is paid to independently adjusting retention and selectivity contributions from individual columns. In this study, we show how tandem column SFC selectivity can be optimized by changing the relative dimensions (length or inner diameter) of the coupled columns. Moreover, we apply column back pressure as a unique parameter for SFC optimization. Continuous tuning of tandem column SFC selectivity is illustrated through column back pressure adjustments of the upstream column, for the first time. In addition, we show how and why changing the coupling order of the columns can produce dramatically different separations. Using the empirical mathematical equation derived in our previous study, we also demonstrate a simulation of tandem column separations based on a single retention time measurement on each column. The simulation compares well with experimental results and correctly predicts column order and back pressure effects on the separations. Finally, considerations on instrument and column hardware requirements are discussed.

  15. Multipacting Simulations of Tuner-adjustable waveguide coupler (TaCo) with CST

    CERN Document Server

    Shafqat, Nuaman; Wegner, Rolf

    2015-01-01

    Tuner-adjustable waveguide couplers (TaCo) are used to feed microwave power to different RF structures of LINAC4. This paper studies the multipacting phenomenon for TaCo using PIC solver of CST PS. Simulations are performed for complete field sweeps and results are analysed.

  16. Control parameter optimization for AP1000 reactor using Particle Swarm Optimization

    International Nuclear Information System (INIS)

    Wang, Pengfei; Wan, Jiashuang; Luo, Run; Zhao, Fuyu; Wei, Xinyu

    2016-01-01

    Highlights: • The PSO algorithm is applied for control parameter optimization of AP1000 reactor. • Key parameters of the MSHIM control system are optimized. • Optimization results are evaluated though simulations and quantitative analysis. - Abstract: The advanced mechanical shim (MSHIM) core control strategy is implemented in the AP1000 reactor for core reactivity and axial power distribution control simultaneously. The MSHIM core control system can provide superior reactor control capabilities via automatic rod control only. This enables the AP1000 to perform power change operations automatically without the soluble boron concentration adjustments. In this paper, the Particle Swarm Optimization (PSO) algorithm has been applied for the parameter optimization of the MSHIM control system to acquire better reactor control performance for AP1000. System requirements such as power control performance, control bank movement and AO control constraints are reflected in the objective function. Dynamic simulations are performed based on an AP1000 reactor simulation platform in each iteration of the optimization process to calculate the fitness values of particles in the swarm. The simulation platform is developed in Matlab/Simulink environment with implementation of a nodal core model and the MSHIM control strategy. Based on the simulation platform, the typical 10% step load decrease transient from 100% to 90% full power is simulated and the objective function used for control parameter tuning is directly incorporated in the simulation results. With successful implementation of the PSO algorithm in the control parameter optimization of AP1000 reactor, four key parameters of the MSHIM control system are optimized. It has been demonstrated by the calculation results that the optimized MSHIM control system parameters can improve the reactor power control capability and reduce the control rod movement without compromising AO control. Therefore, the PSO based optimization
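
    As a generic illustration of PSO-based controller tuning against a simulated objective (not the AP1000 simulation platform used in the paper), the sketch below tunes two gains of a PI controller on a toy first-order plant; the plant model, parameter bounds and PSO coefficients are assumptions.

      # Generic sketch of PSO tuning controller gains against a simulated objective
      # (a toy first-order plant stands in for the reactor simulation platform).
      import numpy as np

      rng = np.random.default_rng(1)

      def objective(gains):
          kp, ki = gains
          y, integral, cost, setpoint, dt = 0.0, 0.0, 0.0, 1.0, 0.1
          for _ in range(200):                    # simulate a step response
              error = setpoint - y
              integral += error * dt
              u = kp * error + ki * integral
              y += dt * (-y + u)                  # toy first-order plant
              cost += dt * error ** 2             # integral of squared error
          return cost

      n_particles, dim = 20, 2
      x = rng.uniform(0.0, 5.0, size=(n_particles, dim))
      v = np.zeros_like(x)
      pbest, pbest_val = x.copy(), np.array([objective(p) for p in x])
      gbest = pbest[pbest_val.argmin()].copy()

      for it in range(100):
          r1, r2 = rng.random((n_particles, dim)), rng.random((n_particles, dim))
          v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (gbest - x)
          x = np.clip(x + v, 0.0, 5.0)
          vals = np.array([objective(p) for p in x])
          improved = vals < pbest_val
          pbest[improved], pbest_val[improved] = x[improved], vals[improved]
          gbest = pbest[pbest_val.argmin()].copy()
      print("tuned gains (kp, ki):", np.round(gbest, 3))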

  17. Automatic CT simulation optimization for radiation therapy: A general strategy

    Energy Technology Data Exchange (ETDEWEB)

    Li, Hua, E-mail: huli@radonc.wustl.edu; Chen, Hsin-Chen; Tan, Jun; Gay, Hiram; Michalski, Jeff M.; Mutic, Sasa [Department of Radiation Oncology, Washington University, St. Louis, Missouri 63110 (United States); Yu, Lifeng [Department of Radiology, Mayo Clinic, Rochester, Minnesota 55905 (United States); Anastasio, Mark A. [Department of Biomedical Engineering, Washington University, St. Louis, Missouri 63110 (United States); Low, Daniel A. [Department of Radiation Oncology, University of California Los Angeles, Los Angeles, California 90095 (United States)

    2014-03-15

    Purpose: In radiation therapy, x-ray computed tomography (CT) simulation protocol specifications should be driven by the treatment planning requirements in lieu of duplicating diagnostic CT screening protocols. The purpose of this study was to develop a general strategy that allows for automatically, prospectively, and objectively determining the optimal patient-specific CT simulation protocols based on radiation-therapy goals, namely, maintenance of contouring quality and integrity while minimizing patient CT simulation dose. Methods: The authors proposed a general prediction strategy that provides automatic optimal CT simulation protocol selection as a function of patient size and treatment planning task. The optimal protocol is the one that delivers the minimum dose required to provide a CT simulation scan that yields accurate contours. Accurate treatment plans depend on accurate contours in order to conform the dose to actual tumor and normal organ positions. An image quality index, defined to characterize how simulation scan quality affects contour delineation, was developed and used to benchmark the contouring accuracy and treatment plan quality within the prediction strategy. A clinical workflow was developed to select the optimal CT simulation protocols incorporating patient size, target delineation, and radiation dose efficiency. An experimental study using an anthropomorphic pelvis phantom with added-bolus layers was used to demonstrate how the proposed prediction strategy could be implemented and how the optimal CT simulation protocols could be selected for prostate cancer patients based on patient size and treatment planning task. Clinical IMRT prostate treatment plans for seven CT scans with varied image quality indices were separately optimized and compared to verify the trace of target and organ dosimetry coverage. Results: Based on the phantom study, the optimal image quality index for accurate manual prostate contouring was 4.4. The optimal tube

  18. Automatic CT simulation optimization for radiation therapy: A general strategy.

    Science.gov (United States)

    Li, Hua; Yu, Lifeng; Anastasio, Mark A; Chen, Hsin-Chen; Tan, Jun; Gay, Hiram; Michalski, Jeff M; Low, Daniel A; Mutic, Sasa

    2014-03-01

    In radiation therapy, x-ray computed tomography (CT) simulation protocol specifications should be driven by the treatment planning requirements in lieu of duplicating diagnostic CT screening protocols. The purpose of this study was to develop a general strategy that allows for automatically, prospectively, and objectively determining the optimal patient-specific CT simulation protocols based on radiation-therapy goals, namely, maintenance of contouring quality and integrity while minimizing patient CT simulation dose. The authors proposed a general prediction strategy that provides automatic optimal CT simulation protocol selection as a function of patient size and treatment planning task. The optimal protocol is the one that delivers the minimum dose required to provide a CT simulation scan that yields accurate contours. Accurate treatment plans depend on accurate contours in order to conform the dose to actual tumor and normal organ positions. An image quality index, defined to characterize how simulation scan quality affects contour delineation, was developed and used to benchmark the contouring accuracy and treatment plan quality within the prediction strategy. A clinical workflow was developed to select the optimal CT simulation protocols incorporating patient size, target delineation, and radiation dose efficiency. An experimental study using an anthropomorphic pelvis phantom with added-bolus layers was used to demonstrate how the proposed prediction strategy could be implemented and how the optimal CT simulation protocols could be selected for prostate cancer patients based on patient size and treatment planning task. Clinical IMRT prostate treatment plans for seven CT scans with varied image quality indices were separately optimized and compared to verify the trace of target and organ dosimetry coverage. Based on the phantom study, the optimal image quality index for accurate manual prostate contouring was 4.4. The optimal tube potentials for patient sizes

  19. Concrete Plant Operations Optimization Using Combined Simulation and Genetic Algorithms

    NARCIS (Netherlands)

    Cao, Ming; Lu, Ming; Zhang, Jian-Ping

    2004-01-01

    This work presents a new approach for concrete plant operations optimization by combining a ready mixed concrete (RMC) production simulation tool (called HKCONSIM) with a genetic algorithm (GA) based optimization procedure. A revamped HKCONSIM computer system can be used to automate the simulation

  20. Simulation of Optimal Decision-Making Under the Impacts of Climate Change.

    Science.gov (United States)

    Møller, Lea Ravnkilde; Drews, Martin; Larsen, Morten Andreas Dahl

    2017-07-01

    Climate change causes transformations to the conditions of existing agricultural practices, compelling farmers to continuously evaluate their agricultural strategies, e.g., towards optimising revenue. In this light, this paper presents a framework for applying Bayesian updating to simulate decision-making, reaction patterns and updating of beliefs among farmers in a developing country, when faced with the complexity of adapting agricultural systems to climate change. We apply the approach to a case study from Ghana, where farmers seek to decide on the most profitable of three agricultural systems (dryland crops, irrigated crops and livestock) by a continuous updating of beliefs relative to realised trajectories of climate (change), represented by projections of temperature and precipitation. The climate data is based on combinations of output from three global/regional climate model combinations and two future scenarios (RCP4.5 and RCP8.5) representing moderate and unsubstantial greenhouse gas reduction policies, respectively. The results indicate that the climate scenario (input) holds a significant influence on the development of beliefs, net revenues and thereby optimal farming practices. Further, despite uncertainties in the underlying net revenue functions, the study shows that when the beliefs of the farmer (decision-maker) oppose the development of the realised climate, the Bayesian methodology allows for simulating an adjustment of such beliefs when improved information becomes available. The framework can, therefore, help facilitate the optimal choice between agricultural systems considering the influence of climate change.
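
    The Bayesian updating idea can be sketched with a toy model in which a farmer holds beliefs over two climate trajectories, updates them yearly from an observed temperature anomaly, and then picks the system with the highest posterior-expected revenue; every number and revenue function below is invented for illustration and is not taken from the Ghana case study.

      # Illustrative sketch (all numbers invented): beliefs over two warming
      # trajectories are updated each year and drive the choice of system.
      import numpy as np
      from scipy.stats import norm

      scenarios = {"moderate warming": 0.02, "strong warming": 0.05}  # degC per year
      belief = {name: 0.5 for name in scenarios}                      # prior belief

      def revenues(anomaly):
          """Assumed revenue of each system as a function of the warming anomaly."""
          return {"dryland crops": 10 - 4 * anomaly,
                  "irrigated crops": 12 - 1 * anomaly,
                  "livestock": 9 + 0.5 * anomaly}

      rng = np.random.default_rng(2)
      true_rate, obs_noise = 0.05, 0.15          # realized climate and obs. noise
      for year in range(1, 31):
          observed = true_rate * year + rng.normal(0.0, obs_noise)
          # Bayes update: likelihood of the observation under each scenario
          like = {name: norm.pdf(observed, loc=rate * year, scale=obs_noise)
                  for name, rate in scenarios.items()}
          total = sum(belief[n] * like[n] for n in scenarios)
          belief = {n: belief[n] * like[n] / total for n in scenarios}

      horizon_anomaly = sum(belief[n] * scenarios[n] for n in scenarios) * 40
      rev = revenues(horizon_anomaly)
      print("posterior beliefs:", {n: round(b, 2) for n, b in belief.items()})
      print("preferred system over a 40-year horizon:", max(rev, key=rev.get))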

  1. Simulated annealing algorithm for reactor in-core design optimizations

    International Nuclear Information System (INIS)

    Zhong Wenfa; Zhou Quan; Zhong Zhaopeng

    2001-01-01

    A nuclear reactor must be optimized for in-core fuel management to make full use of the fuel, to reduce the operation cost and to flatten the power distribution reasonably. The author presents a simulated annealing algorithm. The objective function to be optimized and the penalty function were provided for the reactor physics design. The penalty function was used within the simulated annealing algorithm. The practical design of the NHR-200 was calculated. The results show that the Keff can be increased by 2.5% and the power distribution can be flattened

  2. Optimization of forging processes using finite element simulations

    NARCIS (Netherlands)

    Bonte, M.H.A.; Fourment, Lionel; Do, Tien-tho; van den Boogaard, Antonius H.; Huetink, Han

    2010-01-01

    During the last decades, simulation software based on the Finite Element Method (FEM) has significantly contributed to the design of feasible forming processes. Coupling FEM to mathematical optimization algorithms offers a promising opportunity to design optimal metal forming processes rather than

  3. Numerical simulation and optimization of nickel-hydrogen batteries

    Science.gov (United States)

    Yu, Li-Jun; Qin, Ming-Jun; Zhu, Peng; Yang, Li

    2008-05-01

    A three-dimensional, transient numerical model of an individual pressure vessel (IPV) nickel-hydrogen battery has been developed based on energy conservation law, mechanisms of heat and mass transfer, and electrochemical reactions in the battery. The model, containing all components of a battery including the battery shell, was utilized to simulate the transient temperature of the battery, using computational fluid dynamics (CFD) technology. The comparison of the model prediction and experimental data shows a good agreement, which means that the present model can be used for the engineering design and parameter optimization of nickel-hydrogen batteries in aerospace power systems. Two kinds of optimization schemes were provided and evaluated by the simulated temperature field. Based on the model, the temperature simulation during five successive periods in a designed space battery was conducted and the simulation results meet the requirement of safe operation.

  4. NUMERICAL SIMULATION AND OPTIMIZATION OF ...

    African Journals Online (AJOL)

    30 June 2011 ... This article aims at the study and simulation of photovoltaic cells based on CdTe materials, contributing to the development of renewable energies and able to supply houses and shelters, as well as ... and the energy conversion efficiency is 18.26%. Optimization is made according to the...

  5. Dr. Mainte. Integrated simulator of maintenance optimization of LWRs

    International Nuclear Information System (INIS)

    Isobe, Yoshihiro; Sagisaka, Mitsuyuki; Etoh, Junji; Matsunaga, Takashi; Kosaka, Toru; Matsumoto, Satoshi; Yoshimura, Shinobu

    2014-01-01

    Dr. Mainte, an integrated simulator for maintenance optimization of LWRs (Light Water Reactors), has been developed based on PFM (Probabilistic Fracture Mechanics) analyses. The concept of the simulator is to provide a decision-making system to optimize maintenance activities for representative components and piping systems in nuclear power plants totally and quantitatively in terms of safety, availability, economic efficiency, environmental impact and social acceptance. For the further improvement of safety and availability, the effect of human error on the optimization of plant maintenance activities, and approaches to reducing it, have been studied. (author)

  6. Optimization and Simulation in the Danish Fishing Industry

    DEFF Research Database (Denmark)

    Jensen, Toke Koldborg; Clausen, Jens

    We consider the Danish fishing industry from a holistic viewpoint, and give a review of the main aspects and the important actors. We also consider supply chain theory, and identify both theoretically, and based on other application areas, e.g. other fresh food industries, how optimization...... and simulation can be applied in a holistic modeling framework. Using the insights into supply chain theory and the Danish fishing industry, we investigate how the fishing industry as a whole may benefit from the formulation and use of mathematical optimization and simulation models. Finally, an appendix......

  7. Fuzzy Simulation-Optimization Model for Waste Load Allocation

    Directory of Open Access Journals (Sweden)

    Motahhare Saadatpour

    2006-01-01

    Full Text Available This paper presents simulation-optimization models for waste load allocation from multiple point sources, which include uncertainty due to vagueness of the parameters and goals. This model employs fuzzy sets with appropriate membership functions to deal with uncertainties due to vagueness. The fuzzy waste load allocation model (FWLAM) incorporates QUAL2E as a water quality simulation model and a Genetic Algorithm (GA) as an optimization tool to find the optimal combination of the fraction removal levels for the dischargers and the pollution control agency (PCA). Penalty functions are employed to control the violations in the system. The results demonstrate that the goal of the PCA to achieve the best water quality and the goal of the dischargers to use the full assimilative capacity of the river have not been satisfied completely, and a compromise solution between these goals is provided. This fuzzy optimization model with genetic algorithm has been used for a hypothetical problem. Results demonstrate a very suitable convergence of the proposed optimization algorithm to the global optima.

  8. The Dynamic Optimization of the Departure Times of Metro Users during Rush Hour in an Agent-Based Simulation: A Case Study in Shenzhen, China

    Directory of Open Access Journals (Sweden)

    Yuliang Xi

    2017-10-01

    Full Text Available As serious traffic problems have increased throughout the world, various types of studies, especially traffic simulations, have been conducted to investigate this issue. Activity-based traffic simulation models, such as MATSim (Multi-Agent Transport Simulation), are intended to identify optimal combinations of activities in time and space. It is also necessary to examine commuting-based traffic simulations. Such simulations focus on optimizing travel times by adjusting departure times, travel modes or travel routes to present travel suggestions to the public. This paper examines the optimal departure times of metro users during rush hour using a newly developed simulation tool. A strategy for identifying relatively optimal departure times is identified. This study examines 103,637 person agents (passengers) in Shenzhen, China, and reports their average departure time, travel time and travel utility, as well as the numbers of person agents who are late and who miss metro trips in every iteration. The results demonstrate that as the number of iterations increases, the average travel time of these person agents decreases by approximately 4 min. Moreover, the latest average departure time with no risk of being late when going to work is approximately 8:04, and the earliest average departure time with no risk of missing metro trips when getting off work is approximately 17:50.

  9. Uncertainty assessment in geodetic network adjustment by combining GUM and Monte-Carlo-simulations

    Science.gov (United States)

    Niemeier, Wolfgang; Tengen, Dieter

    2017-06-01

    In this article first ideas are presented to extend the classical concept of geodetic network adjustment by introducing a new method for uncertainty assessment as a two-step analysis. In the first step the raw data and possible influencing factors are analyzed using uncertainty modeling according to GUM (Guidelines to the Expression of Uncertainty in Measurements). This approach is well established in metrology, but rarely adapted within Geodesy. The second step consists of Monte-Carlo simulations (MC simulations) for the complete processing chain from raw input data and pre-processing to adjustment computations and quality assessment. To perform these simulations, possible realizations of the raw data and the influencing factors are generated, using probability distributions for all variables and the established concept of pseudo-random number generators. The final result is a point cloud which represents the uncertainty of the estimated coordinates; a confidence region can be assigned to these point clouds as well. This concept may replace the common concept of variance propagation and the quality assessment of adjustment parameters by using their covariance matrix. It allows a new way for uncertainty assessment in accordance with the GUM concept for uncertainty modelling and propagation. As a practical example the local tie network in "Metsähovi Fundamental Station", Finland is used, where classical geodetic observations are combined with GNSS data.
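
    The two-step idea can be sketched as follows: sample raw observations from GUM-style uncertainty distributions, run a least-squares adjustment for each sample, and inspect the resulting point cloud of estimated parameters. The toy 1-D leveling network below is an assumption for illustration, not the Metsähovi local tie network.

      # Sketch of the two-step idea: sample observations from their (GUM-style)
      # uncertainty distributions, adjust per sample, collect the point cloud.
      import numpy as np

      rng = np.random.default_rng(3)
      # Design matrix: heights h1, h2 observed from a fixed benchmark and between each other.
      A = np.array([[1.0, 0.0],       # obs1 = h1
                    [0.0, 1.0],       # obs2 = h2
                    [-1.0, 1.0]])     # obs3 = h2 - h1
      true_heights = np.array([10.00, 12.50])
      obs_mean = A @ true_heights

      # Uncertainty model per observation: normal noise plus a rectangular
      # instrument offset (a typical GUM-style combination of contributions).
      def sample_observations():
          return (obs_mean
                  + rng.normal(0.0, 0.002, size=3)          # random noise, 2 mm
                  + rng.uniform(-0.001, 0.001, size=3))     # rectangular offset, 1 mm

      cloud = []
      for _ in range(5000):                                  # Monte-Carlo runs
          obs = sample_observations()
          x, *_ = np.linalg.lstsq(A, obs, rcond=None)        # adjustment per sample
          cloud.append(x)
      cloud = np.array(cloud)
      print("mean heights:", cloud.mean(axis=0))
      print("empirical covariance:\n", np.cov(cloud.T))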

  10. Simulation and Optimization of SCR System for Direct-injection Diesel Engine

    Directory of Open Access Journals (Sweden)

    Guanqiang Ruan

    2014-11-01

    Full Text Available The turbo diesel SCR system has been researched and analyzed in this paper. Using the software CATIA, a three-dimensional physical model of the SCR system has been established, and with the software AVL-FIRE, the boundary conditions have been set, simulated and optimized. In the process of optimizing the SCR system, the spray angle was the main parameter optimized. The effects of NO processing were compared to obtain better optimization results. Finally, the optimization results are verified by bench tests, and the experimental results are quite consistent with the simulation.

  11. Multiphysics simulation electromechanical system applications and optimization

    CERN Document Server

    Dede, Ercan M; Nomura, Tsuyoshi

    2014-01-01

    This book highlights a unique combination of numerical tools and strategies for handling the challenges of multiphysics simulation, with a specific focus on electromechanical systems as the target application. Features: introduces the concept of design via simulation, along with the role of multiphysics simulation in today's engineering environment; discusses the importance of structural optimization techniques in the design and development of electromechanical systems; provides an overview of the physics commonly involved with electromechanical systems for applications such as electronics, ma

  12. Optimized multi area AGC simulation in restructured power systems

    International Nuclear Information System (INIS)

    Bhatt, Praghnesh; Roy, Ranjit; Ghoshal, S.P.

    2010-01-01

    In this paper, the traditional automatic generation control loop with modifications is incorporated for simulating automatic generation control (AGC) in a restructured power system. The Federal Energy Regulatory Commission (FERC) encourages an open market system for price-based operation. FERC has issued a notice of proposed rulemaking for various ancillary services. One of these ancillary services is load following with frequency control, which comes broadly under automatic generation control in the deregulated regime. The concept of the DISCO participation matrix is used to simulate the bilateral contracts in the three-area and four-area diagrams. Hybrid particle swarm optimization is used to obtain optimal gain parameters for optimal transient performance. (author)

  13. Adjustment, error analysis and modular strategy for Space Solar Power Station

    International Nuclear Information System (INIS)

    Meng, Xian-Long; Xia, Xin-Lin; Sun, Chuang; Hou, Xin-Bin

    2014-01-01

    Highlights: • The optimal adjustment method for the SSPS when it travels on orbit is determined. • Two solutions for the arrangement of the transverse truss are proposed. • The effect of and regulating method for the tracking error are investigated. • The mathematical partition model of a flat hexagon module concept is built. • The flux distributions on the solar panel based on different numbers of modules are simulated. - Abstract: The Space Solar Power Station (SSPS) is a very promising candidate for supplying abundant electrical energy. The symmetrical two-stage flat reflected concentrator (STFC) has many advantages when used in the SSPS. However, its steady performance and control method on orbit remain a major problem, which is discussed in this paper. The actual posture of the entire station is analyzed in detail due to the requirements of good flux uniformity, a circular concentrated spot and a controlled concentration ratio. Here two regulating directions are studied, and the optimal method in the multidimensional space of adjusting parameters is developed. In order to verify its correctness and reliability, the concentrating characteristics in different cases are simulated by the Monte-Carlo ray tracing method (MCRTM). Based on the optimal adjusting parameters, solutions for the arrangement of the transverse truss are proposed. After that, the effect of and regulating method for the tracking error are investigated to improve the tolerance performance as much as possible. Finally, the construction of the concentrators is very important to the realizability, cost and working performance. A flat hexagon module concept and its regular pattern are investigated to build the optical model. The flux distribution on the solar panel based on different large numbers of modules is simulated, which provides a certain reference for the construction of the SSPS

  14. Optimized Loading for Particle-in-cell Gyrokinetic Simulations

    International Nuclear Information System (INIS)

    Lewandowski, J.L.V.

    2004-01-01

    The problem of particle loading in particle-in-cell gyrokinetic simulations is addressed using a quadratic optimization algorithm. Optimized loading in configuration space dramatically reduces the short-wavelength modes in the electrostatic potential that are partly responsible for the non-conservation of total energy; further, the long-wavelength modes are resolved with good accuracy. As a result, the conservation of energy for the optimized loading is much better than the conservation of energy for the random loading. The method is valid for any geometry and can be coupled to optimization algorithms in velocity space

  15. Structure optimization and simulation analysis of the quartz micromachined gyroscope

    Directory of Open Access Journals (Sweden)

    Xuezhong Wu

    2014-02-01

    Full Text Available Structure optimization and simulation analysis of the quartz micromachined gyroscope are reported in this paper. The relationships between the structure parameters and the frequencies of the working modes were analysed by finite element analysis. The structure parameters of the quartz micromachined gyroscope were optimized to reduce the difference between the frequencies of the drive mode and the sense mode. The simulation results were verified by testing the prototype gyroscope, which was fabricated by micro-electromechanical systems (MEMS) technology. Therefore, the frequencies of the drive mode and the sense mode can be matched through the structure optimization and simulation analysis of the quartz micromachined gyroscope, which is helpful in the design of a high-sensitivity quartz micromachined gyroscope.

  16. A Thermodynamic Library for Simulation and Optimization of Dynamic Processes

    DEFF Research Database (Denmark)

    Ritschel, Tobias Kasper Skovborg; Gaspar, Jozsef; Jørgensen, John Bagterp

    2017-01-01

    Process system tools, such as simulation and optimization of dynamic systems, are widely used in the process industries for development of operational strategies and control for process systems. These tools rely on thermodynamic models and many thermodynamic models have been developed for different...... compounds and mixtures. However, rigorous thermodynamic models are generally computationally intensive and not available as open-source libraries for process simulation and optimization. In this paper, we describe the application of a novel open-source rigorous thermodynamic library, ThermoLib, which...... is designed for dynamic simulation and optimization of vapor-liquid processes. ThermoLib is implemented in Matlab and C and uses cubic equations of state to compute vapor and liquid phase thermodynamic properties. The novelty of ThermoLib is that it provides analytical first and second order derivatives...

  17. Device simulation and optimization of laterally-contacted-unipolar-nuclear detector

    CERN Document Server

    Lee, E Y

    1999-01-01

    Unipolar gamma-ray detectors offer the possibility of enhanced energy resolution and detection sensitivity over conventional planar detectors. However, these detectors are difficult to understand and to fabricate, due to their three-dimensional geometry and multiple electrodes. Computer simulation offers a powerful way to design and to optimize these detectors, by giving the internal electric fields, weighting potentials, and spatially resolved detector responses. Simulation and optimization of a unipolar gamma-ray detector called the laterally-contacted-unipolar-nuclear detector (LUND) are shown. For 662 keV gamma-rays from a 137Cs source, the simulation and optimization of the LUND resulted in improvement in the energy resolution from 1.6% to 1.3% and improvement in the active detector volume from 4% to 38% of the total detector volume.

  18. Loading pattern optimization by multi-objective simulated annealing with screening technique

    International Nuclear Information System (INIS)

    Tong, K. P.; Hyun, C. L.; Hyung, K. J.; Chang, H. K.

    2006-01-01

    This paper presents a new multi-objective function which is made up of the main objective term as well as penalty terms related to the constraints. All the terms are represented in the same functional form and the coefficient of each term is normalized so that each term has equal weighting in the subsequent simulated annealing optimization calculations. The screening technique introduced in previous work is also adopted in order to save computer time in the 3-D neutronics evaluation of trial loading patterns. For a numerical test of the new multi-objective function in loading pattern optimization, the optimum loading patterns for the initial core and the cycle 7 reload PWR core of Yonggwang Unit 4 are calculated by the simulated annealing algorithm with the screening technique. A total of 10 optimum loading patterns were obtained for the initial core through 10 independent simulated annealing optimization runs. For the cycle 7 reload core, one optimum loading pattern has been obtained from a single simulated annealing optimization run. More SA optimization runs will be conducted to obtain optimum loading patterns for the cycle 7 reload core, and the results will be presented in further work. (authors)
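
    One possible reading of the normalized multi-objective function (an illustration, not the authors' exact formulation) is a main objective term plus constraint penalties of the same functional form, each divided by a reference value so that all terms weigh in equally; the constraint names and reference values below are assumptions.

      # Sketch of a normalized multi-objective function for SA loading-pattern
      # search: one main term plus penalty terms, each scaled by a reference value.
      # Constraint names and reference values are illustrative assumptions.

      def multi_objective(cycle_length, peaking_factor, assembly_burnup,
                          ref_length=500.0, limit_peaking=1.55, limit_burnup=60.0):
          main_term = -cycle_length / ref_length                 # maximize cycle length
          penalty_peaking = max(0.0, peaking_factor - limit_peaking) / limit_peaking
          penalty_burnup = max(0.0, assembly_burnup - limit_burnup) / limit_burnup
          # Equal weighting after normalization; SA minimizes the combined value.
          return main_term + penalty_peaking + penalty_burnup

      print(multi_objective(480.0, 1.60, 58.0))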

  19. Simulation and optimization of an industrial PSA unit

    Directory of Open Access Journals (Sweden)

    Barg C.

    2000-01-01

    Full Text Available Pressure Swing Adsorption (PSA) units have been used as a low-cost alternative to the usual gas separation processes. Their largest commercial application is in hydrogen purification systems. Several studies have been made on the simulation of pressure swing adsorption units, but there are only a few reports on the optimization of such processes. The objective of this study is to simulate and optimize an industrial PSA unit for hydrogen purification. This unit consists of six beds, each of which has three layers of different kinds of adsorbents. The main impurities are methane, carbon monoxide and hydrogen sulfide. The product stream has 99.99% purity in hydrogen, and the recovery is around 90%. A mathematical model for a commercial PSA unit is developed. The cycle time and the pressure swing steps are optimized. All the features concerning complex commercial processes are considered.

  20. Optimization and simulation of MEMS rectilinear ion trap

    Directory of Open Access Journals (Sweden)

    Huang Gang

    2015-04-01

    Full Text Available In this paper, the design of a MEMS rectilinear ion trap was optimized under simulated conditions. The sizes of the MEMS rectilinear ion trap's electrodes studied in this paper are at the micron scale. SIMION software was used to simulate the MEMS rectilinear ion trap with different sizes and different radio-frequency signals. The ion-trapping efficiencies of the ion trap under these different simulation conditions were obtained. The ion-trapping efficiencies were compared to determine the performance of the MEMS rectilinear ion trap in different conditions and to find the optimum conditions. The simulation results show that for an ion trap at the micron scale or smaller, the optimized length–width ratio is 0.8, and a higher frequency of the radio-frequency signal is necessary to obtain a higher ion-trapping efficiency. These results have a guiding role in the process of developing MEMS rectilinear ion traps, and great application prospects in the research fields of the MEMS rectilinear ion trap and the MEMS mass spectrometer.

  1. Simulated annealing method for electronic circuits design: adaptation and comparison with other optimization methods; La methode du recuit simule pour la conception des circuits electroniques: adaptation et comparaison avec d`autres methodes d`optimisation

    Energy Technology Data Exchange (ETDEWEB)

    Berthiau, G

    1995-10-01

    The circuit design problem consists in determining acceptable parameter values (resistors, capacitors, transistor geometries ...) which allow the circuit to meet various user-given operational criteria (DC consumption, AC bandwidth, transient times ...). This task is equivalent to a multidimensional and/or multi-objective optimization problem: n-variable functions have to be minimized in a hyper-rectangular domain; equality constraints can eventually be specified. A similar problem consists in fitting component models. In this case, the optimization variables are the model parameters and one aims at minimizing a cost function built on the error between the model response and the data measured on the component. The optimization method chosen for this kind of problem is the simulated annealing method. This method, coming from the combinatorial optimization domain, has been adapted and compared with other global optimization methods for continuous-variable problems. An efficient strategy of variable discretization and a set of complementary stopping criteria have been proposed. The different parameters of the method have been adjusted using analytical functions whose minima are known, classically used in the literature. Our simulated annealing algorithm has been coupled with an open electrical simulator, SPICE-PAC, whose modular structure allows the chaining of simulations required by the circuit optimization process. We proposed, for high-dimensional problems, a partitioning technique which ensures proportionality between CPU time and the number of variables. To compare our method with others, we have adapted three other methods coming from the combinatorial optimization domain - the threshold method, a genetic algorithm and the Tabu search method - The tests have been performed on the same set of test functions and the results allow a first comparison between these methods applied to continuous optimization variables. (Abstract Truncated)

  2. Simulation optimization based ant colony algorithm for the uncertain quay crane scheduling problem

    Directory of Open Access Journals (Sweden)

    Naoufal Rouky

    2019-01-01

    Full Text Available This work is devoted to the study of the Uncertain Quay Crane Scheduling Problem (QCSP), where the loading/unloading times of containers and the travel times of quay cranes are considered uncertain. The problem is solved with a simulation-optimization approach which takes advantage of the great possibilities offered by simulation to model the real details of the problem and the capacity of optimization to find solutions of good quality. An Ant Colony Optimization (ACO) meta-heuristic hybridized with a Variable Neighborhood Descent (VND) local search is proposed to determine the assignments of tasks to quay cranes and the sequences of execution of tasks on each crane. Simulation is used inside the optimization algorithm to generate scenarios in agreement with the probability distributions of the uncertain parameters; thus, we carry out stochastic evaluations of the solutions found by each ant. The proposed optimization algorithm is tested first for the deterministic case on several well-known benchmark instances. Then, in the stochastic case, since no other work has studied exactly the same problem with the same assumptions, the simulation-optimization approach is compared with the deterministic version. The experimental results show that the optimization algorithm is competitive compared to the existing methods and that the solutions found by the simulation-optimization approach are more robust than those found by the deterministic optimization algorithm.
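
    The simulation side of such a simulation-optimization loop can be sketched as the stochastic evaluation of a candidate task-to-crane assignment over sampled scenarios of uncertain handling times; the ACO/VND construction of candidates is omitted and the task data below are invented.

      # Sketch of the simulation side of the loop: score a candidate assignment of
      # tasks to quay cranes over sampled scenarios of uncertain handling times.
      import random

      tasks = {"t1": (20, 5), "t2": (35, 8), "t3": (15, 4), "t4": (25, 6)}  # mean, sd

      def stochastic_makespan(assignment, n_scenarios=200):
          """Average makespan of a {task: crane} assignment over sampled scenarios."""
          total = 0.0
          for _ in range(n_scenarios):
              crane_time = {}
              for task, crane in assignment.items():
                  mean, sd = tasks[task]
                  crane_time[crane] = crane_time.get(crane, 0.0) + random.gauss(mean, sd)
              total += max(crane_time.values())
          return total / n_scenarios

      # An ACO/VND search would call this evaluation for every candidate solution.
      candidate = {"t1": "QC1", "t2": "QC2", "t3": "QC1", "t4": "QC2"}
      print("expected makespan:", round(stochastic_makespan(candidate), 1))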

  3. Parameter identification using optimization techniques in the continuous simulation programs FORSIM and MACKSIM

    International Nuclear Information System (INIS)

    Carver, M.B.; Austin, C.F.; Ross, N.E.

    1980-02-01

    This report discusses the mechanics of automated parameter identification in simulation packages, and reviews available integration and optimization algorithms and their interaction within the recently developed optimization options in the FORSIM and MACKSIM simulation packages. In the MACKSIM mass-action chemical kinetics simulation package, the form and structure of the ordinary differential equations involved is known, so the implementation of an optimizing option is relatively straightforward. FORSIM, however, is designed to integrate ordinary and partial differential equations of arbitrary definition. As the form of the equations is not known in advance, the design of the optimizing option is more intricate, but the philosophy could be applied to most simulation packages. In either case, however, the invocation of the optimizing interface is simple and user-oriented. Full details for the use of the optimizing mode for each program are given; specific applications are used as examples. (O.T.)

  4. An Optimization Algorithm for Multipath Parallel Allocation for Service Resource in the Simulation Task Workflow

    Directory of Open Access Journals (Sweden)

    Zhiteng Wang

    2014-01-01

    Full Text Available Service-oriented modeling and simulation are hot issues in the field of modeling and simulation, and there is a need to call service resources when a simulation task workflow is running. How to optimize the service resource allocation to ensure that the task is completed effectively is an important issue in this area. In the military modeling and simulation field, it is important to improve the probability of success and timeliness in the simulation task workflow. Therefore, this paper proposes an optimization algorithm for multipath parallel allocation of service resources, in which a multipath service resource parallel allocation model is built and a multiple-chains coding scheme quantum optimization algorithm is used for optimization and solution. The multiple-chains coding scheme quantum optimization algorithm extends the parallel search space to improve search efficiency. Through simulation experiments, this paper investigates the effect of different optimization algorithms, service allocation strategies, and path numbers on the probability of success in the simulation task workflow, and the simulation results show that the optimization algorithm for multipath parallel allocation of service resources is an effective method to improve the probability of success and timeliness in the simulation task workflow.

  5. Adjustment Criterion and Algorithm in Adjustment Model with Uncertain

    Directory of Open Access Journals (Sweden)

    SONG Yingchun

    2015-02-01

    Full Text Available Uncertainty often exists in the process of obtaining measurement data, which affects the reliability of parameter estimation. This paper establishes a new adjustment model in which uncertainty is incorporated into the function model as a parameter. A new adjustment criterion and its iterative algorithm are given based on the uncertainty propagation law in the residual error, in which the maximum possible uncertainty is minimized. This paper also analyzes, with examples, the different adjustment criteria and the features of the optimal solutions of the least-squares adjustment, the uncertainty adjustment and the total least-squares adjustment. Existing error theory is extended with a new observational data processing method for uncertainty.

  6. Performance optimization and validation of ADM1 simulations under anaerobic thermophilic conditions

    KAUST Repository

    Atallah, Nabil M.

    2014-12-01

    In this study, two experimental sets of data each involving two thermophilic anaerobic digesters treating food waste, were simulated using the Anaerobic Digestion Model No. 1 (ADM1). A sensitivity analysis was conducted, using both data sets of one digester, for parameter optimization based on five measured performance indicators: methane generation, pH, acetate, total COD, ammonia, and an equally weighted combination of the five indicators. The simulation results revealed that while optimization with respect to methane alone, a commonly adopted approach, succeeded in simulating methane experimental results, it predicted other intermediary outputs less accurately. On the other hand, the multi-objective optimization has the advantage of providing better results than methane optimization despite not capturing the intermediary output. The results from the parameter optimization were validated upon their independent application on the data sets of the second digester.

  7. Performance optimization and validation of ADM1 simulations under anaerobic thermophilic conditions

    KAUST Repository

    Atallah, Nabil M.; El-Fadel, Mutasem E.; Ghanimeh, Sophia A.; Saikaly, Pascal; Abou Najm, Majdi R.

    2014-01-01

    In this study, two experimental sets of data each involving two thermophilic anaerobic digesters treating food waste, were simulated using the Anaerobic Digestion Model No. 1 (ADM1). A sensitivity analysis was conducted, using both data sets of one digester, for parameter optimization based on five measured performance indicators: methane generation, pH, acetate, total COD, ammonia, and an equally weighted combination of the five indicators. The simulation results revealed that while optimization with respect to methane alone, a commonly adopted approach, succeeded in simulating methane experimental results, it predicted other intermediary outputs less accurately. On the other hand, the multi-objective optimization has the advantage of providing better results than methane optimization despite not capturing the intermediary output. The results from the parameter optimization were validated upon their independent application on the data sets of the second digester.

  8. Simulation and optimal control of wind-farm boundary layers

    Science.gov (United States)

    Meyers, Johan; Goit, Jay

    2014-05-01

    In large wind farms, the effect of turbine wakes, and their interaction leads to a reduction in farm efficiency, with power generated by turbines in a farm being lower than that of a lone-standing turbine by up to 50%. In very large wind farms or `deep arrays', this efficiency loss is related to interaction of the wind farms with the planetary boundary layer, leading to lower wind speeds at turbine level. Moreover, for these cases it has been demonstrated both in simulations and wind-tunnel experiments that the wind-farm energy extraction is dominated by the vertical turbulent transport of kinetic energy from higher regions in the boundary layer towards the turbine level. In the current study, we investigate the use of optimal control techniques combined with Large-Eddy Simulations (LES) of wind-farm boundary layer interaction for the increase of total energy extraction in very large `infinite' wind farms. We consider the individual wind turbines as flow actuators, whose energy extraction can be dynamically regulated in time so as to optimally influence the turbulent flow field, maximizing the wind farm power. For the simulation of wind-farm boundary layers we use large-eddy simulations in combination with actuator-disk and actuator-line representations of wind turbines. Simulations are performed in our in-house pseudo-spectral code SP-Wind that combines Fourier-spectral discretization in horizontal directions with a fourth-order finite-volume approach in the vertical direction. For the optimal control study, we consider the dynamic control of turbine-thrust coefficients in an actuator-disk model. They represent the effect of turbine blades that can actively pitch in time, changing the lift- and drag coefficients of the turbine blades. Optimal model-predictive control (or optimal receding horizon control) is used, where the model simply consists of the full LES equations, and the time horizon is approximately 280 seconds. The optimization is performed using a

  9. Optimal design and uncertainty quantification in blood flow simulations for congenital heart disease

    Science.gov (United States)

    Marsden, Alison

    2009-11-01

    Recent work has demonstrated substantial progress in capabilities for patient-specific cardiovascular flow simulations. Recent advances include increasingly complex geometries, physiological flow conditions, and fluid structure interaction. However, inputs to these simulations, including medical image data, catheter-derived pressures and material properties, can have significant uncertainties associated with them. For simulations to predict clinically useful and reliable output information, it is necessary to quantify the effects of input uncertainties on outputs of interest. In addition, blood flow simulation tools can now be efficiently coupled to shape optimization algorithms for surgery design applications, and these tools should incorporate uncertainty information. We present a unified framework to systematically and efficiently account for uncertainties in simulations using adaptive stochastic collocation. In addition, we present a framework for derivative-free optimization of cardiovascular geometries, and layer these tools to perform optimization under uncertainty. These methods are demonstrated using simulations and surgery optimization to improve hemodynamics in pediatric cardiology applications.

  10. An optimization method of relativistic backward wave oscillator using particle simulation and genetic algorithms

    Energy Technology Data Exchange (ETDEWEB)

    Chen, Zaigao; Wang, Jianguo [Key Laboratory for Physical Electronics and Devices of the Ministry of Education, Xi'an Jiaotong University, Xi'an, Shaanxi 710049 (China); Northwest Institute of Nuclear Technology, P.O. Box 69-12, Xi'an, Shaanxi 710024 (China); Wang, Yue; Qiao, Hailiang; Zhang, Dianhui [Northwest Institute of Nuclear Technology, P.O. Box 69-12, Xi'an, Shaanxi 710024 (China); Guo, Weijie [Key Laboratory for Physical Electronics and Devices of the Ministry of Education, Xi'an Jiaotong University, Xi'an, Shaanxi 710049 (China)

    2013-11-15

    An optimal design method for high-power microwave sources using particle simulation and parallel genetic algorithms is presented in this paper. The output power of the high-power microwave device, simulated by the fully electromagnetic particle simulation code UNIPIC, is used as the fitness function, and float-encoding genetic algorithms are used to optimize the high-power microwave devices. Using this method, we encode the heights of the non-uniform slow wave structure in the relativistic backward wave oscillators (RBWO) and optimize the parameters on massively parallel processors. Simulation results demonstrate that we can obtain the optimal parameters of the non-uniform slow wave structure in the RBWO, and the output microwave power is enhanced by 52.6% after the device is optimized.
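    The pattern described above (a particle-in-cell code supplying the fitness for a float-encoded genetic algorithm) can be illustrated with a minimal sketch. The UNIPIC solver is not available here, so a placeholder analytic function stands in for the simulated output power; the number of encoded heights, their bounds and the GA settings below are illustrative assumptions, not values from the paper.

```python
# Minimal sketch of a float-encoded genetic algorithm of the kind described
# above. The real fitness would come from a UNIPIC particle-in-cell run; here
# a placeholder analytic function stands in for the simulated output power.
import random

N_HEIGHTS = 8            # number of slow-wave-structure heights being encoded
BOUNDS = (1.0, 2.0)      # assumed height range (illustrative only)

def output_power(heights):
    """Placeholder for the PIC-simulated output power (to be maximized)."""
    return -sum((h - 1.6) ** 2 for h in heights)

def random_individual():
    return [random.uniform(*BOUNDS) for _ in range(N_HEIGHTS)]

def crossover(a, b):
    # Blend (arithmetic) crossover, common with float encodings.
    w = random.random()
    return [w * x + (1 - w) * y for x, y in zip(a, b)]

def mutate(ind, rate=0.1, sigma=0.05):
    return [min(max(h + random.gauss(0, sigma), BOUNDS[0]), BOUNDS[1])
            if random.random() < rate else h for h in ind]

def evolve(pop_size=30, generations=50):
    pop = [random_individual() for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(pop, key=output_power, reverse=True)
        parents = scored[:pop_size // 2]              # truncation selection
        children = [mutate(crossover(random.choice(parents),
                                     random.choice(parents)))
                    for _ in range(pop_size - len(parents))]
        pop = parents + children
    return max(pop, key=output_power)

if __name__ == "__main__":
    best = evolve()
    print("best heights:", [round(h, 3) for h in best])
```

    In the actual study each fitness evaluation is a full electromagnetic simulation, which is why the evaluations are distributed over massively parallel processors.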

  11. Bandwidth Optimization of Normal Equation Matrix in Bundle Block Adjustment in Multi-baseline Rotational Photography

    Directory of Open Access Journals (Sweden)

    WANG Xiang

    2016-02-01

    A new bandwidth optimization method for the normal equation matrix in bundle block adjustment for multi-baseline rotational close-range photography, based on image index re-sorting, is proposed. The equivalent exposure station of each image is calculated from its object space coverage and its relationship with adjacent images. Then, according to the coordinate relations between equivalent exposure stations, new logical indices of all images are computed, from which the optimized bandwidth value can be obtained. Experimental results show that the bandwidth determined by the proposed method is significantly better than its original value; thus the operational efficiency, as well as the memory consumption, of multi-baseline rotational close-range photography in real-data applications is improved to a certain extent.

  12. Metamodel-based robust simulation-optimization : An overview

    NARCIS (Netherlands)

    Dellino, G.; Meloni, C.; Kleijnen, J.P.C.; Dellino, Gabriella; Meloni, Carlo

    2015-01-01

    Optimization of simulated systems is the goal of many methods, but most methods assume known environments. We, however, develop a "robust" methodology that accounts for uncertain environments. Our methodology uses Taguchi's view of the uncertain world but replaces his statistical techniques by

  13. Sequential use of simulation and optimization in analysis and planning

    Science.gov (United States)

    Hans R. Zuuring; Jimmie D. Chew; J. Greg Jones

    2000-01-01

    Management activities are analyzed at landscape scales employing both simulation and optimization. SIMPPLLE, a stochastic simulation modeling system, is initially applied to assess the risks associated with a specific natural process occurring on the current landscape without management treatments, but with fire suppression. These simulation results are input into...

  14. Performance optimization and validation of ADM1 simulations under anaerobic thermophilic conditions.

    Science.gov (United States)

    Atallah, Nabil M; El-Fadel, Mutasem; Ghanimeh, Sophia; Saikaly, Pascal; Abou-Najm, Majdi

    2014-12-01

    In this study, two experimental sets of data, each involving two thermophilic anaerobic digesters treating food waste, were simulated using the Anaerobic Digestion Model No. 1 (ADM1). A sensitivity analysis was conducted, using both data sets of one digester, for parameter optimization based on five measured performance indicators (methane generation, pH, acetate, total COD, and ammonia) as well as an equally weighted combination of the five indicators. The simulation results revealed that while optimization with respect to methane alone, a commonly adopted approach, succeeded in simulating methane experimental results, it predicted other intermediary outputs less accurately. On the other hand, the multi-objective optimization has the advantage of providing better results than methane optimization despite not capturing the intermediary output. The results from the parameter optimization were validated upon their independent application on the data sets of the second digester. Copyright © 2014 Elsevier Ltd. All rights reserved.

  15. Optimal Scheme Selection of Agricultural Production Structure Adjustment - Based on DEA Model; Punjab (Pakistan)

    Institute of Scientific and Technical Information of China (English)

    Zeeshan Ahmad; Meng Jun; Muhammad Abdullah; Mazhar Nadeem Ishaq; Majid Lateef; Imran Khan

    2015-01-01

    This paper used the modern evaluation method of DEA (Data Envelopment Analysis) to assess the comparative efficiency of multiple candidate schemes and, on that basis, to choose the optimal scheme of agricultural production structure adjustment. Based on the results of the DEA model, we analyzed the scale advantages of each candidate scheme, examined in depth the underlying reasons why some schemes were not DEA-efficient, and outlined ways to improve these candidate plans. Finally, a further method was proposed to rank the schemes and select the optimal one. The research provides guidance for practice when the adjustment of the agricultural production structure is carried out.

  16. Optimal array factor radiation pattern synthesis for linear antenna array using cat swarm optimization: validation by an electromagnetic simulator

    Institute of Scientific and Technical Information of China (English)

    Gopi RAM; Durbadal MANDAL; Sakti Prasad GHOSHAL; Rajib KAR

    2017-01-01

    In this paper, an optimal design of linear antenna arrays having microstrip patch antenna elements has been carried out. Cat swarm optimization (CSO) has been applied for the optimization of the control parameters of the radiation pattern of an antenna array. The optimal radiation patterns of isotropic antenna elements are obtained by optimizing the current excitation weight of each element and the inter-element spacing. The antenna arrays of 12, 16, and 20 elements are taken as examples. The arrays are designed by using MATLAB computation and are validated through Computer Simulation Technology-Microwave Studio (CST-MWS). From the simulation results it is evident that CSO is able to yield the optimal design of linear antenna arrays of patch antenna elements.

  17. Applied simulation and optimization in logistics, industrial and aeronautical practice

    CERN Document Server

    Mota, Idalia; Serrano, Daniel

    2015-01-01

    Presenting techniques, case-studies and methodologies that combine the use of simulation approaches with optimization techniques for facing problems in manufacturing, logistics, and aeronautics, this book provides solutions to common industrial problems in several fields, which range from manufacturing to aviation, where the common denominator is the combination of simulation’s flexibility with optimization techniques’ robustness. Providing readers with a comprehensive guide to tackle similar issues in industrial environments, this text explores novel ways to face industrial problems through hybrid approaches (simulation-optimization) that benefit from the advantages of both paradigms, in order to give solutions to important problems in the service industry, production processes, or supply chains, such as scheduling, routing problems and resource allocations, among others.

  18. Simulation-optimization of large agro-hydrosystems using a decomposition approach

    Science.gov (United States)

    Schuetze, Niels; Grundmann, Jens

    2014-05-01

    In this contribution, a stochastic simulation-optimization framework for decision support for optimal planning and operation of water supply of large agro-hydrosystems is presented. It is based on a decomposition solution strategy which allows for (i) the usage of numerical process models together with efficient Monte Carlo simulations for a reliable estimation of higher quantiles of the minimum agricultural water demand for full and deficit irrigation strategies at small scale (farm level), and (ii) the utilization of the optimization results at small scale for solving water resources management problems at regional scale. As a secondary result of several simulation-optimization runs at the smaller scale, stochastic crop-water production functions (SCWPF) for different crops are derived, which can be used as a basic tool for assessing the impact of climate variability on risk for potential yield. In addition, microeconomic impacts of climate change and the vulnerability of the agro-ecological systems are evaluated. The developed methodology is demonstrated through its application on a real-world case study for the South Al-Batinah region in the Sultanate of Oman where a coastal aquifer is affected by saltwater intrusion due to excessive groundwater withdrawal for irrigated agriculture.

  19. Conventional treatment planning optimization using simulated annealing

    International Nuclear Information System (INIS)

    Morrill, S.M.; Langer, M.; Lane, R.G.

    1995-01-01

    Purpose: Simulated annealing (SA) allows for the implementation of realistic biological and clinical cost functions into treatment plan optimization. However, a drawback to the clinical implementation of SA optimization is that large numbers of beams appear in the final solution, some with insignificant weights, preventing the delivery of these optimized plans using conventional (limited to a few coplanar beams) radiation therapy. A preliminary study suggested two promising algorithms for restricting the number of beam weights. The purpose of this investigation was to compare these two algorithms using our current SA algorithm with the aim of producing an algorithm to allow clinically useful radiation therapy treatment planning optimization. Method: Our current SA algorithm, Variable Stepsize Generalized Simulated Annealing (VSGSA), was modified with two algorithms to restrict the number of beam weights in the final solution. The first algorithm selected combinations of a fixed number of beams from the complete solution space at each iterative step of the optimization process. The second reduced the allowed number of beams by a factor of two at periodic steps during the optimization process until only the specified number of beams remained. Results of optimization of beam weights and angles using these algorithms were compared using a standard cadre of abdominal cases. The solution space was defined as a set of 36 custom-shaped open and wedged-filtered fields at 10 deg. increments with a constant target volume margin of 1.2 cm. For each case, a clinically accepted cost function, the minimum tumor dose, was maximized subject to a set of normal tissue binary dose-volume constraints. For this study, the optimized plan was restricted to four (4) fields suitable for delivery with conventional therapy equipment. Results: The table gives the mean value of the minimum target dose obtained for each algorithm averaged over 5 different runs and the comparable manual treatment
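    The second restriction strategy described above, periodically halving the set of allowed beams during annealing, can be sketched in a few lines. The sketch below is not the authors' VSGSA code: a toy quadratic objective stands in for the clinical, dose-based cost function, and the beam count, step counts and temperatures are illustrative assumptions.

```python
# Illustrative sketch of simulated annealing over beam weights in which the
# set of allowed beams is halved at periodic steps until only the requested
# number of beams remains. A toy objective stands in for the dose-based cost.
import math
import random

def toy_cost(weights):
    """Stand-in for the clinical cost (lower is better here)."""
    return sum((w - 1.0) ** 2 for w in weights.values()) + 0.1 * len(weights)

def anneal(n_beams=36, keep=4, steps=4000, t0=1.0):
    beams = list(range(n_beams))
    weights = {b: random.random() for b in beams}
    cost = toy_cost(weights)
    halving_period = steps // max(1, int(math.log2(n_beams / keep)) + 1)
    for step in range(1, steps + 1):
        temp = t0 * (1.0 - step / steps) + 1e-6
        # Propose a small perturbation of one randomly chosen beam weight.
        b = random.choice(beams)
        trial = dict(weights)
        trial[b] = max(0.0, trial[b] + random.gauss(0, 0.1))
        new_cost = toy_cost(trial)
        if new_cost < cost or random.random() < math.exp((cost - new_cost) / temp):
            weights, cost = trial, new_cost
        # Periodically halve the allowed beam set, dropping the lowest weights.
        if step % halving_period == 0 and len(beams) > keep:
            beams = sorted(beams, key=lambda k: weights[k], reverse=True)
            beams = beams[:max(keep, len(beams) // 2)]
            weights = {k: weights[k] for k in beams}
    return weights

if __name__ == "__main__":
    final = anneal()
    print("beams kept:", sorted(final))
```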

  20. Validation of Multibody Program to Optimize Simulated Trajectories II Parachute Simulation with Interacting Forces

    Science.gov (United States)

    Raiszadeh, Behzad; Queen, Eric M.; Hotchko, Nathaniel J.

    2009-01-01

    A capability to simulate trajectories of multiple interacting rigid bodies has been developed, tested and validated. This capability uses the Program to Optimize Simulated Trajectories II (POST 2). The standard version of POST 2 allows trajectory simulation of multiple bodies without force interaction. In the current implementation, the force interaction between the parachute and the suspended bodies has been modeled using flexible lines, allowing accurate trajectory simulation of the individual bodies in flight. The POST 2 multibody capability is intended to be general purpose and applicable to any parachute entry trajectory simulation. This research paper explains the motivation for multibody parachute simulation, discusses implementation methods, and presents validation of this capability.

  1. Modified Backtracking Search Optimization Algorithm Inspired by Simulated Annealing for Constrained Engineering Optimization Problems

    Directory of Open Access Journals (Sweden)

    Hailong Wang

    2018-01-01

    The backtracking search optimization algorithm (BSA) is a population-based evolutionary algorithm for numerical optimization problems. BSA has a powerful global exploration capacity while its local exploitation capability is relatively poor. This affects the convergence speed of the algorithm. In this paper, we propose a modified BSA inspired by simulated annealing (BSAISA) to overcome the deficiency of BSA. In the BSAISA, the amplitude control factor (F) is modified based on the Metropolis criterion in simulated annealing. The redesigned F could be adaptively decreased as the number of iterations increases and it does not introduce extra parameters. A self-adaptive ε-constrained method is used to handle the strict constraints. We compared the performance of the proposed BSAISA with BSA and other well-known algorithms when solving thirteen constrained benchmarks and five engineering design problems. The simulation results demonstrated that BSAISA is more effective than BSA and more competitive with other well-known algorithms in terms of convergence speed.
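    The core idea summarized above, an amplitude factor that decays as the search proceeds combined with Metropolis-style acceptance of occasional worse solutions, can be illustrated generically. The sketch below does not reproduce the exact BSAISA update rule; the decay form of F, the temperature schedule and the test function are assumptions made for illustration.

```python
# Generic analogue of the Metropolis-style idea described above: the amplitude
# control factor F shrinks adaptively over iterations, and worse trial
# solutions are accepted with a temperature-dependent probability.
import math
import random

def metropolis_accept(delta, temperature):
    """Accept improvements always; accept deteriorations with SA probability."""
    if delta <= 0:
        return True
    return random.random() < math.exp(-delta / max(temperature, 1e-12))

def amplitude_factor(iteration, max_iter, f0=3.0):
    """Amplitude factor that decays as the search progresses (assumed form)."""
    return f0 * math.exp(-iteration / max_iter)

def sphere(x):
    return sum(v * v for v in x)

def search(dim=5, max_iter=500):
    current = [random.uniform(-5, 5) for _ in range(dim)]
    best = list(current)
    for it in range(max_iter):
        f = amplitude_factor(it, max_iter)
        temperature = 1.0 - it / max_iter + 1e-3
        trial = [v + f * random.gauss(0, 1) for v in current]
        delta = sphere(trial) - sphere(current)
        if metropolis_accept(delta, temperature):
            current = trial
            if sphere(current) < sphere(best):
                best = list(current)
    return best

if __name__ == "__main__":
    b = search()
    print("best value:", round(sphere(b), 6))
```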

  2. Simulated Annealing-Based Krill Herd Algorithm for Global Optimization

    Directory of Open Access Journals (Sweden)

    Gai-Ge Wang

    2013-01-01

    Recently, Gandomi and Alavi proposed a novel swarm intelligent method, called krill herd (KH), for global optimization. To enhance the performance of the KH method, in this paper, a new improved meta-heuristic simulated annealing-based krill herd (SKH) method is proposed for optimization tasks. A new krill selecting (KS) operator is used to refine krill behavior when updating krill’s position so as to enhance its reliability and robustness dealing with optimization problems. The introduced KS operator involves greedy strategy and accepting few not-so-good solutions with a low probability originally used in simulated annealing (SA). In addition, a kind of elitism scheme is used to save the best individuals in the population in the process of the krill updating. The merits of these improvements are verified by fourteen standard benchmarking functions and experimental results show that, in most cases, the performance of this improved meta-heuristic SKH method is superior to, or at least highly competitive with, the standard KH and other optimization methods.

  3. Optimizing Cognitive Load for Learning from Computer-Based Science Simulations

    Science.gov (United States)

    Lee, Hyunjeong; Plass, Jan L.; Homer, Bruce D.

    2006-01-01

    How can cognitive load in visual displays of computer simulations be optimized? Middle-school chemistry students (N = 257) learned with a simulation of the ideal gas law. Visual complexity was manipulated by separating the display of the simulations in two screens (low complexity) or presenting all information on one screen (high complexity). The…

  4. Simulated parallel annealing within a neighborhood for optimization of biomechanical systems.

    Science.gov (United States)

    Higginson, J S; Neptune, R R; Anderson, F C

    2005-09-01

    Optimization problems for biomechanical systems have become extremely complex. Simulated annealing (SA) algorithms have performed well in a variety of test problems and biomechanical applications; however, despite advances in computer speed, convergence to optimal solutions for systems of even moderate complexity has remained prohibitive. The objective of this study was to develop a portable parallel version of a SA algorithm for solving optimization problems in biomechanics. The algorithm for simulated parallel annealing within a neighborhood (SPAN) was designed to minimize interprocessor communication time and closely retain the heuristics of the serial SA algorithm. The computational speed of the SPAN algorithm scaled linearly with the number of processors on different computer platforms for a simple quadratic test problem and for a more complex forward dynamic simulation of human pedaling.

  5. Optimization and Simulation Modeling of Disaster Relief Supply Chain: A Literature Review

    OpenAIRE

    Feng, Keli; Bizimana, Emmanuel; Agu, Deedee D.; Issac, Tana T.

    2012-01-01

    Recent natural and man-made disasters underscore the need of a resilient and agile disaster relief supply chain to mitigate the damages and save people’s lives. Optimization and simulation modeling have become powerful and useful tools to help decision makers tackle problems related to disaster relief supply chain. This paper reviews optimization and simulation models used in the field of disaster relief supply chain. We review the literature of the facility location optimization problems of ...

  6. Application of Dr. Mainte, integrated simulator of maintenance optimization, to LWRs

    International Nuclear Information System (INIS)

    Isobe, Yoshihiro; Sagisaka, Mitsuyuki; Etoh, Junji; Matsunaga, Takashi; Kosaka, Toru; Matsumoto, Satoshi; Yoshimura, Shinobu

    2015-01-01

    Dr. Mainte, an integrated simulator for maintenance optimization of LWRs (Light Water Reactors), is based on PFM (Probabilistic Fracture Mechanics) analyses. The concept of the simulator is to provide a decision-making system to optimize maintenance activities for typical components and piping systems in nuclear power plants totally and quantitatively in terms of safety, availability, economic rationality, environmental impact and social acceptance. For the further improvement of the safety and availability of nuclear power plants, the effect of human error and its reduction on the optimization of maintenance activities have been studied. In addition, an approach for reducing human error is proposed. (author)

  7. Simulation platform to model, optimize and design wind turbines

    Energy Technology Data Exchange (ETDEWEB)

    Iov, F.; Hansen, A.D.; Soerensen, P.; Blaabjerg, F.

    2004-03-01

    This report is a general overview of the results obtained in the project 'Electrical Design and Control. Simulation Platform to Model, Optimize and Design Wind Turbines'. The motivation for this research project is the ever-increasing wind energy penetration into the power network. Therefore, the main goal of the project is to create a model database in different simulation tools for a system optimization of the wind turbine systems. Using this model database, a simultaneous optimization of the aerodynamic, mechanical, electrical and control systems over the whole range of wind speeds and grid characteristics can be achieved. The report is structured in six chapters. First, the background of this project and the main goals as well as the structure of the simulation platform is given. The main topologies for wind turbines, which have been taken into account during the project, are briefly presented. Then, the simulation tools used in this simulation platform, namely HAWC, DIgSILENT, Saber and Matlab/Simulink, are described. The focus here is on the modelling and simulation time scale aspects. The abilities of these tools are complementary and they can together cover all the modelling aspects of the wind turbines e.g. mechanical loads, power quality, switching, control and grid faults. However, other simulation packages e.g. PSCAD/EMTDC can easily be added in the simulation platform. New models and new control algorithms for wind turbine systems have been developed and tested in these tools. All these models are collected in dedicated libraries in Matlab/Simulink as well as in Saber. Some simulation results from the considered tools are presented for MW wind turbines. These simulation results focus on fixed-speed and variable speed/pitch wind turbines. A good agreement with the real behaviour of these systems is obtained for each simulation tool. These models can easily be extended to model different kinds of wind turbines or large wind

  8. Logic hybrid simulation-optimization algorithm for distillation design

    OpenAIRE

    Caballero Suárez, José Antonio

    2014-01-01

    In this paper, we propose a novel algorithm for the rigorous design of distillation columns that integrates a process simulator in a generalized disjunctive programming formulation. The optimal distillation column, or column sequence, is obtained by selecting, for each column section, among a set of column sections with different numbers of theoretical trays. The selection of thermodynamic models, property estimation, etc. is all handled in the simulation environment. All the numerical issues relat...

  9. Applied simulation and optimization 2 new applications in logistics, industrial and aeronautical practice

    CERN Document Server

    Mota, Idalia

    2017-01-01

    Building on the author’s earlier Applied Simulation and Optimization, this book presents novel methods for solving problems in industry, based on hybrid simulation-optimization approaches that combine the advantages of both paradigms. The book serves as a comprehensive guide to tackling scheduling, routing problems, resource allocations and other issues in industrial environments, the service industry, production processes, or supply chains and aviation. Logistics, manufacturing and operational problems can either be modelled using optimization techniques or approaches based on simulation methodologies. Optimization techniques have the advantage of performing efficiently when the problems are properly defined, but they are often developed through rigid representations that do not include or accurately represent the stochasticity inherent in real systems. Furthermore, important information is lost during the abstraction process to fit each problem into the optimization technique. On the other hand, simulatio...

  10. Simulation-Optimization Model for Seawater Intrusion Management at Pingtung Coastal Area, Taiwan

    Science.gov (United States)

    Huang, P. S.; Chiu, Y.

    2015-12-01

    In the 1970s, agriculture and aquaculture developed rapidly in the Pingtung coastal area in southern Taiwan. The groundwater aquifers were over-pumped, causing seawater intrusion. In order to remedy the contaminated groundwater and find the best strategies of groundwater usage, a management model to search for the optimal groundwater operational strategies is developed in this study. The objective function is to minimize the total amount of injection water, and a set of constraints is applied to ensure the groundwater levels and concentrations are satisfied. A three-dimensional density-dependent flow and transport simulation model, SEAWAT, developed by the U.S. Geological Survey, is selected to simulate the phenomenon of seawater intrusion. The simulation model is well calibrated against field measurements and replaced by a surrogate model of trained artificial neural networks (ANNs) to reduce the computational time. The ANNs are embedded in the management model to link the simulation and optimization models, and the global optimizer of differential evolution (DE) is applied for solving the management model. The optimal results show that the fully trained ANNs could substitute for the original simulation model and greatly reduce computational time. Under an appropriate setting of the objective function and constraints, DE can find the optimal injection rates at predefined barriers. The concentrations at the target locations could decrease by more than 50 percent within the planning horizon of 20 years. Keywords: seawater intrusion, groundwater management, numerical model, artificial neural networks, differential evolution
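    The surrogate-in-the-loop pattern described above can be shown with a deliberately small sketch. A toy analytic function stands in for the SEAWAT flow and transport model, a small neural network is trained on sampled input-output pairs, and differential evolution then searches the surrogate; the decision variables, bounds, salinity threshold and objective weights are illustrative assumptions, and scikit-learn and SciPy are assumed to be available.

```python
# Minimal sketch of ANN-surrogate-based simulation-optimization: sample a
# stand-in "simulator", train a surrogate, then optimize on the surrogate.
import numpy as np
from sklearn.neural_network import MLPRegressor
from scipy.optimize import differential_evolution

rng = np.random.default_rng(0)

def fake_seawat(pumping):
    """Stand-in for the numerical model: returns salinity at a target well."""
    barrier, beneficial = pumping
    return 50.0 - 8.0 * barrier + 3.0 * beneficial + rng.normal(0, 0.5)

# 1) Sample the "simulator" over the decision space.
X = rng.uniform([0, 0], [5, 10], size=(200, 2))
y = np.array([fake_seawat(x) for x in X])

# 2) Train the surrogate.
surrogate = MLPRegressor(hidden_layer_sizes=(20, 20), max_iter=5000,
                         random_state=0).fit(X, y)

# 3) Optimize on the surrogate: favor beneficial pumping over injection,
#    with a penalty when predicted salinity exceeds a threshold.
def objective(x):
    barrier, beneficial = x
    salinity = surrogate.predict([x])[0]
    penalty = 100.0 * max(0.0, salinity - 30.0)
    return -(beneficial - 0.5 * barrier) + penalty

result = differential_evolution(objective, bounds=[(0, 5), (0, 10)], seed=1)
print("optimal (barrier, beneficial) rates:", np.round(result.x, 2))
```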

  11. Generating optimal control simulations of musculoskeletal movement using OpenSim and MATLAB.

    Science.gov (United States)

    Lee, Leng-Feng; Umberger, Brian R

    2016-01-01

    Computer modeling, simulation and optimization are powerful tools that have seen increased use in biomechanics research. Dynamic optimizations can be categorized as either data-tracking or predictive problems. The data-tracking approach has been used extensively to address human movement problems of clinical relevance. The predictive approach also holds great promise, but has seen limited use in clinical applications. Enhanced software tools would facilitate the application of predictive musculoskeletal simulations to clinically-relevant research. The open-source software OpenSim provides tools for generating tracking simulations but not predictive simulations. However, OpenSim includes an extensive application programming interface that permits extending its capabilities with scripting languages such as MATLAB. In the work presented here, we combine the computational tools provided by MATLAB with the musculoskeletal modeling capabilities of OpenSim to create a framework for generating predictive simulations of musculoskeletal movement based on direct collocation optimal control techniques. In many cases, the direct collocation approach can be used to solve optimal control problems considerably faster than traditional shooting methods. Cyclical and discrete movement problems were solved using a simple 1 degree of freedom musculoskeletal model and a model of the human lower limb, respectively. The problems could be solved in reasonable amounts of time (several seconds to 1-2 hours) using the open-source IPOPT solver. The problems could also be solved using the fmincon solver that is included with MATLAB, but the computation times were excessively long for all but the smallest of problems. The performance advantage for IPOPT was derived primarily by exploiting sparsity in the constraints Jacobian. The framework presented here provides a powerful and flexible approach for generating optimal control simulations of musculoskeletal movement using OpenSim and MATLAB. This
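    A toy version of the direct collocation transcription mentioned above is sketched below. It is not the OpenSim/MATLAB framework itself: a 1-degree-of-freedom point mass replaces the musculoskeletal model, SciPy's SLSQP solver replaces IPOPT, and trapezoidal defect constraints enforce the dynamics at the collocation nodes; the time horizon, node count and effort objective are illustrative choices.

```python
# Toy direct collocation: a unit point mass must move from rest at q=0 to
# rest at q=1 in 1 s while minimizing control effort. States and controls at
# all nodes are decision variables; defect constraints impose the dynamics.
import numpy as np
from scipy.optimize import minimize

N = 21                      # number of collocation nodes
T = 1.0
h = T / (N - 1)

def unpack(z):
    return z[:N], z[N:2*N], z[2*N:]          # positions q, velocities v, controls u

def objective(z):
    _, _, u = unpack(z)
    return h * np.sum(u**2)                  # approximate integral of u^2

def defects(z):
    q, v, u = unpack(z)
    dq = q[1:] - q[:-1] - 0.5 * h * (v[1:] + v[:-1])   # q_dot = v
    dv = v[1:] - v[:-1] - 0.5 * h * (u[1:] + u[:-1])   # v_dot = u (unit mass)
    return np.concatenate([dq, dv])

def boundary(z):
    q, v, _ = unpack(z)
    return np.array([q[0], v[0], q[-1] - 1.0, v[-1]])

z0 = np.zeros(3 * N)
z0[:N] = np.linspace(0.0, 1.0, N)            # straight-line initial guess

res = minimize(objective, z0, method="SLSQP",
               constraints=[{"type": "eq", "fun": defects},
                            {"type": "eq", "fun": boundary}],
               options={"maxiter": 500})
q_opt, v_opt, u_opt = unpack(res.x)
print("success:", res.success, " final position:", round(q_opt[-1], 4))
```

    The sparsity of the defect constraints with respect to the decision vector is what large-scale solvers such as IPOPT exploit to keep these transcriptions tractable.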

  12. Integrating Multibody Simulation and CFD: toward Complex Multidisciplinary Design Optimization

    Science.gov (United States)

    Pieri, Stefano; Poloni, Carlo; Mühlmeier, Martin

    This paper describes the use of integrated multidisciplinary analysis and optimization of a race car model on a predefined circuit. The objective is the definition of the most efficient geometric configuration that can guarantee the lowest lap time. In order to carry out this study it has been necessary to interface the design optimization software modeFRONTIER with the following software packages: CATIA v5, a three-dimensional CAD package, used for the definition of the parametric geometry; A.D.A.M.S./Motorsport, a multi-body dynamic simulation package; IcemCFD, a mesh generator, for the automatic generation of the CFD grid; CFX, a Navier-Stokes code, for the prediction of the fluid-dynamic forces. The process integration makes it possible to compute, for each geometrical configuration, a set of aerodynamic coefficients that are then used in the multibody simulation for the computation of the lap time. Finally, an automatic optimization procedure is started and the lap time minimized.

  13. Cost effective simulation-based multiobjective optimization in the performance of an internal combustion engine

    Science.gov (United States)

    Aittokoski, Timo; Miettinen, Kaisa

    2008-07-01

    Solving real-life engineering problems can be difficult because they often have multiple conflicting objectives, the objective functions involved are highly nonlinear and they contain multiple local minima. Furthermore, function values are often produced via a time-consuming simulation process. These facts suggest the need for an automated optimization tool that is efficient (in terms of number of objective function evaluations) and capable of solving global and multiobjective optimization problems. In this article, the requirements on a general simulation-based optimization system are discussed and such a system is applied to optimize the performance of a two-stroke combustion engine. In the example of a simulation-based optimization problem, the dimensions and shape of the exhaust pipe of a two-stroke engine are altered, and values of three conflicting objective functions are optimized. These values are derived from power output characteristics of the engine. The optimization approach involves interactive multiobjective optimization and provides a convenient tool to balance between conflicting objectives and to find good solutions.

  14. Multipacting Simulations of Tuner-adjustable waveguide coupler (TaCo) with CST Particle Studio®

    CERN Document Server

    Shafqat, N; Wegner, R

    2014-01-01

    Tuner-adjustable waveguide couplers (TaCo) are used to feed microwave power to different RF structures of LINAC4. This paper studies the multipacting phenomenon for TaCo using the PIC solver of CST PS. Simulations are performed for complete field sweeps and results are analysed.

  15. Derivative-free optimization under uncertainty applied to costly simulators

    International Nuclear Information System (INIS)

    Pauwels, Benoit

    2016-01-01

    The modeling of complex phenomena encountered in industrial issues can lead to the study of numerical simulation codes. These simulators may require extensive execution time (from hours to days), involve uncertain parameters and even be intrinsically stochastic. Importantly, within the context of simulation-based optimization, the derivatives of the outputs with respect to the inputs may be nonexistent, inaccessible or too costly to approximate reasonably. This thesis is organized in four chapters. The first chapter discusses the state of the art in derivative-free optimization and uncertainty modeling. The next three chapters introduce three independent - although connected - contributions to the field of derivative-free optimization in the presence of uncertainty. The second chapter addresses the emulation of costly stochastic simulation codes - stochastic in the sense that simulations run with the same input parameters may lead to distinct outputs. This was the subject of the CODESTOCH project carried out at the Summer mathematical research center on scientific computing and its applications (CEMRACS) during the summer of 2013, together with two Ph.D. students from Electricity of France (EDF) and the Atomic Energy and Alternative Energies Commission (CEA). We designed four methods to build emulators for functions whose values are probability density functions. These methods were tested on two toy functions and applied to industrial simulation codes concerned with three complex phenomena: the spatial distribution of molecules in a hydrocarbon system (IFPEN), the life cycle of large electric transformers (EDF) and the repercussions of a hypothetical accident in a nuclear plant (CEA). Emulation was a preliminary process towards optimization in the first two cases. In the third chapter, we consider the influence of inaccurate objective function evaluations on direct search - a classical derivative-free optimization method. In real settings inaccuracy may never vanish

  16. On-Line Optimizing Control of a Simulated Continuous Yeast Fermentation

    DEFF Research Database (Denmark)

    Andersen, Maria Y.; Asferg, L.; Brabrand, H.

    1989-01-01

    On-line optimizing control of a simulated fermentation is investigated using a non-segregated dynamic model of aerobic glucose-limited growth of Saccharomyces cerevisiae. The optimization procedure is carried out with an underlying adaptive regulator to stabilize the culture. This stabilization is especially important during the setpoint changes specified by the optimizing routine. A linear ARMAX model structure is used for the fermentation process with dilution rate as input and biomass as output variable. The parameters of the linear model structure are estimated using a pseudo-linear regression method with bandpass filtering of in- and output variables in order to ensure low frequency validity of the estimated model. An LQ-regulator is used with iterative solution of the Riccati equation. Simulation results illustrate the tuning of the underlying regulator, and the effect of perturbing...

  17. Optimization of pressurized water reactor shuffling by simulated annealing with heuristics

    International Nuclear Information System (INIS)

    Stevens, J.G.; Smith, K.S.; Rempe, K.R.; Downar, T.J.

    1995-01-01

    Simulated-annealing optimization of reactor core loading patterns is implemented with support for design heuristics during candidate pattern generation. The SIMAN optimization module uses the advanced nodal method of SIMULATE-3 and the full cross-section detail of CASMO-3 to evaluate accurately the neutronic performance of each candidate, resulting in high-quality patterns. The use of heuristics within simulated annealing is explored. Heuristics improve the consistency of optimization results for both fast- and slow-annealing runs with no penalty from the exclusion of unusual candidates. Thus, the heuristic application of designer judgment during automated pattern generation is shown to be effective. The capability of the SIMAN module to find and evaluate families of loading patterns that satisfy design constraints and have good objective performance within practical run times is demonstrated. The use of automated evaluations of successive cycles to explore multicycle effects of design decisions is discussed

  18. Evaluation of a proposed optimization method for discrete-event simulation models

    Directory of Open Access Journals (Sweden)

    Alexandre Ferreira de Pinho

    2012-12-01

    Optimization methods combined with computer-based simulation have been utilized in a wide range of manufacturing applications. However, in terms of current technology, these methods exhibit low performance and are only able to manipulate a single decision variable at a time. Thus, the objective of this article is to evaluate a proposed optimization method for discrete-event simulation models, based on genetic algorithms, which is more efficient in terms of computational time than software packages on the market. It should be emphasized that the response quality of the variables will not be altered; that is, the proposed method will maintain the solutions' effectiveness. Thus, the study draws a comparison between the proposed method and a simulation tool already available on the market that has been examined in the academic literature. Conclusions are presented, confirming the proposed optimization method's efficiency.

  19. Simulation, optimization and control of a thermal cracking furnace

    International Nuclear Information System (INIS)

    Masoumi, M.E.; Sadrameli, S.M.; Towfighi, J.; Niaei, A.

    2006-01-01

    The ethylene production process is one of the most important aspects of a petrochemical plant, and the cracking furnace is the heart of the process. Since ethylene is one of the raw materials in the chemical industry and the market situation of not only the feed and the product but also the utility is rapidly changing, the optimal operation and control of the plant are important. A mathematical model, which describes the static and dynamic operations of a pilot plant furnace, was developed. The static simulation was used to predict the steady-state profiles of temperature, pressure and product yields. The dynamic simulation of the process was used to predict the transient behavior of the thermal cracking reactor. Using a dynamic programming technique, an optimal temperature profile was developed along the reactor. The performance of the temperature control loop was tested for different controller parameters and disturbances. The results of the simulation were tested experimentally in a computer-controlled pilot plant

  20. Simulation-Based Optimization for Storage Allocation Problem of Outbound Containers in Automated Container Terminals

    Directory of Open Access Journals (Sweden)

    Ning Zhao

    2015-01-01

    Storage allocation of outbound containers is a key factor in the performance of the container handling system in automated container terminals. Improper storage plans of outbound containers make QC waiting inevitable; hence, the vessel handling time will be lengthened. A simulation-based optimization method is proposed in this paper for the storage allocation problem of outbound containers in automated container terminals (SAPOBA). A simulation model is built with a Timed Colored Petri Net (TCPN) and used to evaluate the QC waiting time of storage plans. Two optimization approaches, based on Particle Swarm Optimization (PSO) and Genetic Algorithm (GA), are proposed to form the complete simulation-based optimization method. The effectiveness of this method is verified by experiments, including a comparison of the two optimization approaches.

  1. Opportunities for Improving Army Modeling and Simulation Development: Making Fundamental Adjustments and Borrowing Commercial Business Practices

    National Research Council Canada - National Science Library

    Lee, John

    2000-01-01

    ...; requirements which span the conflict spectrum. The Army's current staff training simulation development process could better support all possible scenarios by making some fundamental adjustments and borrowing commercial business practices...

  2. Optimal design of a composite space shield based on numerical simulations

    International Nuclear Information System (INIS)

    Son, Byung Jin; Yoo, Jeong Hoon; Lee, Min Hyung

    2015-01-01

    In this study, an optimal design of a stuffed Whipple shield is proposed by using numerical simulations and a new penetration criterion. The target model was selected based on the shield model used in the Columbus module of the International Space Station. Because experimental results can be obtained only in the low velocity region below 7 km/s, it is required to derive the ballistic limit curve (BLC) in the high velocity region above 7 km/s by numerical simulation. AUTODYN-2D, the commercial hydro-code package, was used to simulate the nonlinear transient analysis for the hypervelocity impact. The smoothed particle hydrodynamics (SPH) method was applied to projectile and bumper modeling to represent the debris cloud generated after the impact. The numerical simulation model and selected material properties were validated through a quantitative comparison between numerical and experimental results. A new criterion to determine whether penetration occurs or not is proposed from kinetic energy analysis by numerical simulation in the velocity region over 7 km/s. The parameter optimization process was performed to improve the protection ability at a specific condition through the design of experiments (DOE) method and response surface methodology (RSM). The performance of the proposed optimal design was numerically verified.

  3. Simulation-based optimization for product and process design

    NARCIS (Netherlands)

    Driessen, L.

    2006-01-01

    The design of products and processes has gradually shifted from a purely physical process towards a process that heavily relies on computer simulations (virtual prototyping). To optimize this virtual design process in terms of speed and final product quality, statistical methods and mathematical

  4. Robust Optimization in Simulation : Taguchi and Krige Combined

    NARCIS (Netherlands)

    Dellino, G.; Kleijnen, Jack P.C.; Meloni, C.

    2009-01-01

    Optimization of simulated systems is the goal of many methods, but most methods as- sume known environments. We, however, develop a `robust' methodology that accounts for uncertain environments. Our methodology uses Taguchi's view of the uncertain world, but replaces his statistical techniques by

  5. Robust optimization in simulation : Taguchi and Krige combined

    NARCIS (Netherlands)

    Dellino, G.; Kleijnen, Jack P.C.; Meloni, C.

    2012-01-01

    Optimization of simulated systems is the goal of many methods, but most methods assume known environments. We, however, develop a “robust” methodology that accounts for uncertain environments. Our methodology uses Taguchi's view of the uncertain world but replaces his statistical techniques by

  6. Indoor environment and energy consumption optimization using field measurements and building energy simulation

    DEFF Research Database (Denmark)

    Christensen, Jørgen Erik; Chasapis, Kleanthis; Gazovic, Libor

    2015-01-01

    Modern buildings are usually equipped with advanced climate conditioning systems to ensure the comfort of their occupants. However, analysis of their actual operation usually identifies large potential for improvements with respect to their efficiency. The present study investigated the potential for improve..., which was used for optimization of the building's performance. Proposed optimization scenarios bring a 21-37% reduction in heating consumption and a 7-12% improvement in thermal comfort. The approach (procedure) can help to optimize building operation and shorten the adjustment period.

  7. Classification and optimization of training tools for NPP simulator

    International Nuclear Information System (INIS)

    Billoen, G. van

    1994-01-01

    The training cycle of nuclear power plant (NPP) operators has evolved during the last decade in parallel with the evolution of the training tools. The phases of the training cycle can be summarized as follows: (1) basic principle learning, (2) specific functional training, (3) full operating range training, and (4) detailed accident analyses. The progress in simulation technology and man/machine interface (MMI) gives the training centers new opportunities to improve their training methods and effectiveness in the transfer of knowledge. To take advantage of these new opportunities a significant investment in simulation tools may be required. It is therefore important to propose an optimized approach when dealing with the overall equipment program for these training centers. An overall look of tools proposed on the international simulation market shows that there is a need for systematic approach in this field. Classification of the different training tools needed for each training cycle is the basis for an optimized approach in terms of hardware configuration and software specifications of the equipment to install in training centers. The 'Multi-Function Simulator' is one of the approaches. (orig.) (3 tabs.)

  8. Depletion calculations of adjuster rods in Darlington

    Energy Technology Data Exchange (ETDEWEB)

    Arsenault, B.; Tsang, K., E-mail: benoit.arsenault@amecfw.com, E-mail: kwok.tsang@amecfw.com [AMEC Foster Wheeler, Toronto, ON (Canada)

    2015-07-01

    This paper describes the simulation methodology and reactivity worth calculated for aged adjuster rods in the Darlington core. ORIGEN-S IST was applied to simulate the isotope transmutation process of the stainless steel and titanium adjusters. The compositions were used in DRAGON-IST to calculate the change in incremental properties of aged adjusters. Pre-simulations of the reactivity worth of the stainless steel and titanium adjusters in Darlington were performed using RFSP-IST and the results showed that the titanium adjuster rods exhibit faster reactivity-worth drop than that of stainless steel rods. (author)

  9. QCAD simulation and optimization of semiconductor double quantum dots

    Energy Technology Data Exchange (ETDEWEB)

    Nielsen, Erik; Gao, Xujiao; Kalashnikova, Irina; Muller, Richard Partain; Salinger, Andrew Gerhard; Young, Ralph Watson

    2013-12-01

    We present the Quantum Computer Aided Design (QCAD) simulator that targets modeling quantum devices, particularly silicon double quantum dots (DQDs) developed for quantum qubits. The simulator has three differentiating features: (i) its core contains nonlinear Poisson, effective mass Schrodinger, and Configuration Interaction solvers that have massively parallel capability for high simulation throughput, and can be run individually or combined self-consistently for 1D/2D/3D quantum devices; (ii) the core solvers show superior convergence even at near-zero-Kelvin temperatures, which is critical for modeling quantum computing devices; (iii) it couples with an optimization engine Dakota that enables optimization of gate voltages in DQDs for multiple desired targets. The Poisson solver includes Maxwell-Boltzmann and Fermi-Dirac statistics, supports Dirichlet, Neumann, interface charge, and Robin boundary conditions, and includes the effect of dopant incomplete ionization. The solver has shown robust nonlinear convergence even in the milli-Kelvin temperature range, and has been extensively used to quickly obtain the semiclassical electrostatic potential in DQD devices. The self-consistent Schrodinger-Poisson solver has achieved robust and monotonic convergence behavior for 1D/2D/3D quantum devices at very low temperatures by using a predictor-corrector iteration scheme. The QCAD simulator enables the calculation of dot-to-gate capacitances, and comparison with experiment and between solvers. It is observed that computed capacitances are in the right ballpark when compared to experiment, and quantum confinement increases capacitance when the number of electrons is fixed in a quantum dot. In addition, the coupling of QCAD with Dakota allows one to rapidly identify which device layouts are more likely to lead to few-electron quantum dots. Very efficient QCAD simulations on a large number of fabricated and proposed Si DQDs have made it possible to provide fast feedback for design

  10. Kanban simulation model for production process optimization

    Directory of Open Access Journals (Sweden)

    Golchev Riste

    2015-01-01

    A long time has passed since the KANBAN system was established as an efficient method for coping with excessive inventory. Still, the possibilities for its improvement through its integration with other approaches should be investigated further. The basic research challenge of this paper is to present the benefits of KANBAN implementation supported with Discrete Event Simulation (DES). In that direction, at the beginning, the basics of the KANBAN system are presented with emphasis on the information and material flow, together with a methodology for implementation of the KANBAN system. An analysis of combining the simulation with this methodology is presented. The paper is concluded with a practical example which shows that, through understanding the philosophy of the implementation methodology of the KANBAN system and the simulation methodology, a simulation model can be created which can serve as a basis for a variety of experiments that can be conducted within a short period of time, resulting in production process optimization.

  11. Coastal aquifer management under parameter uncertainty: Ensemble surrogate modeling based simulation-optimization

    Science.gov (United States)

    Janardhanan, S.; Datta, B.

    2011-12-01

    Surrogate models are widely used to develop computationally efficient simulation-optimization models to solve complex groundwater management problems. Artificial intelligence based models are most often used for this purpose where they are trained using predictor-predictand data obtained from a numerical simulation model. Most often this is implemented with the assumption that the parameters and boundary conditions used in the numerical simulation model are perfectly known. However, in most practical situations these values are uncertain. Under these circumstances the application of such approximation surrogates becomes limited. In our study we develop a surrogate model based coupled simulation optimization methodology for determining optimal pumping strategies for coastal aquifers considering parameter uncertainty. An ensemble surrogate modeling approach is used along with multiple realization optimization. The methodology is used to solve a multi-objective coastal aquifer management problem considering two conflicting objectives. Hydraulic conductivity and the aquifer recharge are considered as uncertain values. Three dimensional coupled flow and transport simulation model FEMWATER is used to simulate the aquifer responses for a number of scenarios corresponding to Latin hypercube samples of pumping and uncertain parameters to generate input-output patterns for training the surrogate models. Non-parametric bootstrap sampling of this original data set is used to generate multiple data sets which belong to different regions in the multi-dimensional decision and parameter space. These data sets are used to train and test multiple surrogate models based on genetic programming. The ensemble of surrogate models is then linked to a multi-objective genetic algorithm to solve the pumping optimization problem. Two conflicting objectives, viz, maximizing total pumping from beneficial wells and minimizing the total pumping from barrier wells for hydraulic control of
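    The bootstrap-ensemble idea summarized above can be sketched compactly. The sketch replaces the genetic programming surrogates of the study with small regression trees purely for brevity, uses a toy analytic function in place of a FEMWATER run, and assumes scikit-learn is available; the sample sizes and ensemble size are illustrative.

```python
# Sketch of ensemble surrogates: several models trained on bootstrap
# resamples of the same simulation data, with predictions combined.
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(3)

def fake_simulation(x):
    """Stand-in for a FEMWATER run: pumping rates -> salinity response."""
    return 4.0 * x[:, 0] - 2.0 * x[:, 1] + 0.5 * x[:, 0] * x[:, 1]

X = rng.uniform(0, 10, size=(300, 2))
y = fake_simulation(X) + rng.normal(0, 0.2, 300)

def train_ensemble(X, y, n_models=25):
    models = []
    for _ in range(n_models):
        idx = rng.integers(0, len(X), len(X))     # bootstrap resample
        models.append(DecisionTreeRegressor(max_depth=6).fit(X[idx], y[idx]))
    return models

def ensemble_predict(models, X_new):
    preds = np.column_stack([m.predict(X_new) for m in models])
    return preds.mean(axis=1), preds.std(axis=1)   # mean and spread

models = train_ensemble(X, y)
mean, spread = ensemble_predict(models, np.array([[5.0, 2.0]]))
print("predicted response:", round(mean[0], 2), "+/-", round(spread[0], 2))
```

    The spread across ensemble members gives a rough indication of surrogate disagreement in regions of the decision space that were sparsely sampled.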

  12. Greenhouse gases emission assessment in residential sector through buildings simulations and operation optimization

    International Nuclear Information System (INIS)

    Stojiljković, Mirko M.; Ignjatović, Marko G.; Vučković, Goran D.

    2015-01-01

    Buildings use a significant amount of primary energy and largely contribute to greenhouse gases emission. Cost optimality and cost effectiveness, including cost-optimal operation, are important for the adoption of energy efficient and environmentally friendly technologies. The long-term assessment of buildings-related greenhouse gases emission might take into account cost-optimal operation of their energy systems. This is often not the case in the literature. Long-term operation optimization problems are often large-scale, computationally intensive and time-consuming. This paper formulates a bottom-up methodology relying on an efficient, but precise, operation optimization approach, applicable to long-term problems and usable with building simulations. We suggest moving-horizon short-term optimization to determine near-optimal operation modes and show that this approach, applied to flexible energy systems without seasonal storage, has satisfactory efficiency and accuracy compared with solving the problem for an entire year. We also confirm it as a valuable pre-solve technique. Approach applicability and the importance of energy systems optimization are illustrated with a case study considering buildings envelope improvements and cogeneration and heat storage implementation in an urban residential settlement. EnergyPlus is used for buildings simulations while mixed integer linear programming optimization problems are constructed and solved using the custom-built software and the branch-and-cut solver Gurobi Optimizer. - Highlights: • Bottom-up approach for greenhouse gases emission assessment is presented. • Short-term moving-horizon optimization is used to define operation regimes. • Operation optimization and buildings simulations are connected with modeling tool. • Illustrated optimization method performed efficiently and gave accurate results.
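    The moving-horizon (receding-horizon) mechanism referred to above can be illustrated with a deliberately small heat-dispatch toy: at every hour a short-horizon linear program chooses grid heat versus CHP heat plus a storage level, only the first hour's decision is committed, and the window rolls forward. The sketch below is not the mixed integer model of the paper; all demands, prices, capacities and the SciPy solver are illustrative assumptions.

```python
# Receding-horizon dispatch sketch: solve a small LP per window, commit the
# first hour, roll the window forward with the updated storage level.
import numpy as np
from scipy.optimize import linprog

H = 6                 # optimization window (hours)
CHP_CAP, S_MAX = 4.0, 3.0
CHP_COST = 1.0

def solve_window(demand, grid_price, s0):
    n = len(demand)
    # x = [grid_0..grid_{n-1}, chp_0..chp_{n-1}, s_1..s_n]
    c = np.concatenate([grid_price, CHP_COST * np.ones(n), np.zeros(n)])
    A_eq = np.zeros((n, 3 * n))
    b_eq = np.array(demand, dtype=float)
    for t in range(n):
        A_eq[t, t] = 1.0            # grid heat
        A_eq[t, n + t] = 1.0        # CHP heat
        A_eq[t, 2 * n + t] = -1.0   # storage level at end of hour t
        if t == 0:
            b_eq[t] -= s0           # known initial storage level
        else:
            A_eq[t, 2 * n + t - 1] = 1.0
    bounds = [(0, None)] * n + [(0, CHP_CAP)] * n + [(0, S_MAX)] * n
    res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=bounds, method="highs")
    grid, chp, storage = res.x[:n], res.x[n:2*n], res.x[2*n:]
    return grid[0], chp[0], storage[0]     # commit only the first hour

demand = [2, 3, 5, 6, 4, 3, 2, 2, 5, 6, 4, 3]
price = [1.5, 1.5, 2.5, 3.0, 2.0, 1.2, 1.0, 1.0, 2.8, 3.0, 2.0, 1.5]
s_level, schedule = 1.0, []
for hour in range(len(demand) - H + 1):
    g, p, s_level = solve_window(demand[hour:hour+H],
                                 np.array(price[hour:hour+H]), s_level)
    schedule.append((round(g, 2), round(p, 2), round(s_level, 2)))
print(schedule)
```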

  13. An Integrated GIS, optimization and simulation framework for optimal PV size and location in campus area environments

    International Nuclear Information System (INIS)

    Kucuksari, Sadik; Khaleghi, Amirreza M.; Hamidi, Maryam; Zhang, Ye; Szidarovszky, Ferenc; Bayraksan, Guzin; Son, Young-Jun

    2014-01-01

    Highlights: • The optimal size and locations for PV units for campus environments are achieved. • The GIS module finds the suitable rooftops and their panel capacity. • The optimization module maximizes the long-term profit of PV installations. • The simulation module evaluates the voltage profile of the distribution network. • The proposed work has been successfully demonstrated for a real university campus. - Abstract: Finding the optimal size and locations for Photovoltaic (PV) units has been a major challenge for distribution system planners and researchers. In this study, a framework is proposed to integrate Geographical Information Systems (GIS), mathematical optimization, and simulation modules to obtain the annual optimal placement and size of PV units for the next two decades in a campus area environment. First, a GIS module is developed to find the suitable rooftops and their panel capacity considering the amount of solar radiation, slope, elevation, and aspect. The optimization module is then used to maximize the long-term net profit of PV installations considering various costs of investment, inverter replacement, operation, and maintenance as well as savings from consuming less conventional energy. A voltage profile of the electricity distribution network is then investigated in the simulation module. In the case of voltage limit violation by intermittent PV generations or load fluctuations, two mitigation strategies, reallocation of the PV units or installation of a local storage unit, are suggested. The proposed framework has been implemented in a real campus area, and the results show that it can effectively be used for long-term installation planning of PV panels considering both the cost and power quality

  14. Optimized calibration of neutronic-thermodynamic simulator for low power fast reactors

    International Nuclear Information System (INIS)

    Jachic, J.; Waintraub, M.

    1986-01-01

    Aiming at a general optimization of the project and at controlled fuel depletion and management, and motivated by the feasibility of applying the SIRZ simulator to such problems, we present here an optimized and systematic calibration of this simulator. The control variables and the corresponding calibration equations for the buckling factors are shown explicitly. After iterative linearizations, the resultant linear programming problems were solved by the SIMPLEX method. The results show that the optimum calibration is easily obtained if the convergence control parameters are adequately chosen. (Author) [pt

  15. Land Surface Model and Particle Swarm Optimization Algorithm Based on the Model-Optimization Method for Improving Soil Moisture Simulation in a Semi-Arid Region.

    Science.gov (United States)

    Yang, Qidong; Zuo, Hongchao; Li, Weidong

    2016-01-01

    Improving the capability of land-surface process models to simulate soil moisture assists in better understanding the atmosphere-land interaction. In semi-arid regions, due to limited near-surface observational data and large errors in large-scale parameters obtained by the remote sensing method, there exist uncertainties in land surface parameters, which can cause large offsets between the simulated results of land-surface process models and the observational data for the soil moisture. In this study, observational data from the Semi-Arid Climate Observatory and Laboratory (SACOL) station in the semi-arid loess plateau of China were divided into three datasets: summer, autumn, and summer-autumn. By combining the particle swarm optimization (PSO) algorithm and the land-surface process model SHAW (Simultaneous Heat and Water), the soil and vegetation parameters that are related to the soil moisture but difficult to obtain by observations are optimized using the three datasets. On this basis, the SHAW model was run with the optimized parameters to simulate the characteristics of the land-surface process in the semi-arid loess plateau. Simultaneously, the default SHAW model was run with the same atmospheric forcing as a comparison test. Simulation results revealed the following: parameters optimized by the particle swarm optimization algorithm in all simulation tests improved simulations of the soil moisture and latent heat flux; differences between simulated results and observational data are clearly reduced, but simulation tests involving the adoption of optimized parameters cannot simultaneously improve the simulation results for the net radiation, sensible heat flux, and soil temperature. Optimized soil and vegetation parameters based on different datasets have the same order of magnitude but are not identical; soil parameters only vary to a small degree, but the variation range of vegetation parameters is large.
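    The model-optimization coupling described above, calibrating hard-to-observe parameters by minimizing the mismatch between simulated and observed soil moisture, can be illustrated with a small sketch. It does not use the SHAW model or the SACOL data: a toy exponential drying curve with two unknown parameters is fitted to synthetic observations with a hand-written particle swarm optimizer, and all bounds and PSO coefficients are illustrative assumptions.

```python
# Toy PSO calibration: fit two parameters of a crude soil-moisture "model"
# to synthetic observations by minimizing the root-mean-square error.
import numpy as np

rng = np.random.default_rng(42)
t = np.linspace(0, 30, 60)                       # days
true_params = (0.30, 0.08)                       # (initial wetness, decay rate)

def toy_soil_moisture(params, t):
    theta0, k = params
    return theta0 * np.exp(-k * t) + 0.10        # crude drying curve

obs = toy_soil_moisture(true_params, t) + rng.normal(0, 0.005, t.size)

def rmse(params):
    return np.sqrt(np.mean((toy_soil_moisture(params, t) - obs) ** 2))

def pso(n_particles=20, iters=100, bounds=((0.1, 0.5), (0.01, 0.2))):
    lo = np.array([b[0] for b in bounds])
    hi = np.array([b[1] for b in bounds])
    x = rng.uniform(lo, hi, (n_particles, 2))
    v = np.zeros_like(x)
    pbest = x.copy()
    pbest_val = np.array([rmse(p) for p in x])
    gbest = pbest[np.argmin(pbest_val)].copy()
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        vals = np.array([rmse(p) for p in x])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        gbest = pbest[np.argmin(pbest_val)].copy()
    return gbest, pbest_val.min()

params, err = pso()
print("calibrated parameters:", np.round(params, 3), " RMSE:", round(err, 4))
```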

  16. Multiobjective optimization with a modified simulated annealing algorithm for external beam radiotherapy treatment planning

    International Nuclear Information System (INIS)

    Aubry, Jean-Francois; Beaulieu, Frederic; Sevigny, Caroline; Beaulieu, Luc; Tremblay, Daniel

    2006-01-01

    Inverse planning in external beam radiotherapy often requires a scalar objective function that incorporates importance factors to mimic the planner's preferences between conflicting objectives. Defining those importance factors is not straightforward, and frequently leads to an iterative process in which the importance factors become variables of the optimization problem. In order to avoid this drawback of inverse planning, optimization using algorithms more suited to multiobjective optimization, such as evolutionary algorithms, has been suggested. However, much inverse planning software, including one based on simulated annealing developed at our institution, does not include multiobjective-oriented algorithms. This work investigates the performance of a modified simulated annealing algorithm used to drive aperture-based intensity-modulated radiotherapy inverse planning software in a multiobjective optimization framework. For a few test cases involving gastric cancer patients, the use of this new algorithm leads to an increase in optimization speed of a little more than a factor of 2 over a conventional simulated annealing algorithm, while giving a close approximation of the solutions produced by standard simulated annealing. A simple graphical user interface designed to facilitate the decision-making process that follows an optimization is also presented.

  17. A regulatory adjustment process for the determination of the optimal percentage requirement in an electricity market with Tradable Green Certificates

    International Nuclear Information System (INIS)

    Currier, Kevin M.

    2013-01-01

    A system of Tradable Green Certificates (TGCs) is a market-based subsidy scheme designed to promote electricity generation from renewable energy sources such as wind power. Under a TGC system, the principal policy instrument is the “percentage requirement,” which stipulates the percentage of total electricity production (“green” plus “black”) that must be obtained from renewable sources. In this paper, we propose a regulatory adjustment process that a regulator can employ to determine the socially optimal percentage requirement, explicitly accounting for environmental damages resulting from black electricity generation. - Highlights: • A Tradable Green Certificate (TGC) system promotes energy production from renewable sources. • We consider an electricity oligopoly operated under a TGC system. • Welfare analysis must account for damages from “black” electricity production. • We characterize the welfare maximizing (optimal) “percentage requirement.” • We present a regulatory adjustment process that computes the optimal percentage requirement iteratively.

  18. Multi-period mean–variance portfolio optimization based on Monte-Carlo simulation

    NARCIS (Netherlands)

    F. Cong (Fei); C.W. Oosterlee (Kees)

    2016-01-01

    We propose a simulation-based approach for solving the constrained dynamic mean–variance portfolio management problem. For this dynamic optimization problem, we first consider a sub-optimal strategy, called the multi-stage strategy, which can be utilized in a forward fashion. Then,

  19. Post-processing of Monte Carlo simulations for rapid BNCT source optimization studies

    International Nuclear Information System (INIS)

    Bleuel, D.L.; Chu, W.T.; Donahue, R.J.; Ludewigt, B.A.; Vujic, J.

    2000-01-01

    A great advantage of some neutron sources, such as accelerator-produced sources, is that they can be tuned to produce different spectra. Unfortunately, optimization studies are often time-consuming and difficult, as they require a lengthy Monte Carlo simulation for each source. When multiple characteristics, such as energy, angle, and spatial distribution of a neutron beam are allowed to vary, an overwhelming number of simulations may be required. Many optimization studies, therefore, suffer from a small number of data points, restrictive treatment conditions, or poor statistics. By scoring pertinent information from every particle tally in a Monte Carlo simulation, then applying appropriate source variable weight factors in a post-processing algorithm, a single simulation can be used to model any number of multiple sources. Through this method, the response to a new source can be modeled in minutes or seconds, rather than hours or days, allowing for the analysis of truly variable source conditions of much greater resolution than is normally possible when a new simulation must be run for each data point in a study. This method has been benchmarked and used to recreate optimization studies in a small fraction of the time spent in the original studies.
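
    The core idea of such post-processing is importance reweighting: each tallied particle keeps its sampled source variables and its detector contribution, and a new source is emulated by weighting the contributions with the ratio of new to original source probabilities. The sketch below is illustrative only; the tallies, spectra, and normalizations are toy stand-ins, not the authors' code.

```python
# Sketch of the post-processing idea (illustrative, not the authors' code):
# each tallied particle keeps its source energy and its contribution to a
# detector; a new source spectrum is modeled by reweighting those
# contributions with the ratio of new to original source probabilities.
import numpy as np

rng = np.random.default_rng(1)
n = 100_000
source_energy = rng.uniform(0.0, 2.0, n)          # MeV, sampled from the original (flat) source
tally = np.exp(-source_energy) * rng.random(n)    # per-particle detector contribution (stand-in)

def respond(new_pdf):
    """Estimate the detector response for a new source energy spectrum."""
    original_pdf = 0.5                              # flat pdf on [0, 2] MeV
    weights = new_pdf(source_energy) / original_pdf
    return np.mean(tally * weights)

# Response to a softer, exponentially weighted source -- computed in
# milliseconds instead of rerunning the Monte Carlo transport.
soft_source = lambda e: 2.0 * np.exp(-2.0 * e) / (1.0 - np.exp(-4.0))
print("reweighted response:", respond(soft_source))
```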

  20. Solving iTOUGH2 simulation and optimization problems using the PEST protocol

    Energy Technology Data Exchange (ETDEWEB)

    Finsterle, S.A.; Zhang, Y.

    2011-02-01

    The PEST protocol has been implemented into the iTOUGH2 code, allowing the user to link any simulation program (with ASCII-based inputs and outputs) to iTOUGH2's sensitivity analysis, inverse modeling, and uncertainty quantification capabilities. These application models can be pre- or post-processors of the TOUGH2 non-isothermal multiphase flow and transport simulator, or programs that are unrelated to the TOUGH suite of codes. PEST-style template and instruction files are used, respectively, to pass input parameters updated by the iTOUGH2 optimization routines to the model, and to retrieve the model-calculated values that correspond to observable variables. We summarize the iTOUGH2 capabilities and demonstrate the flexibility added by the PEST protocol for the solution of a variety of simulation-optimization problems. In particular, the combination of loosely coupled and tightly integrated simulation and optimization routines provides both the flexibility and control needed to solve challenging inversion problems for the analysis of multiphase subsurface flow and transport systems.

  1. When teams shift among processes: insights from simulation and optimization.

    Science.gov (United States)

    Kennedy, Deanna M; McComb, Sara A

    2014-09-01

    This article introduces process shifts to study the temporal interplay among transition and action processes espoused in the recurring phase model proposed by Marks, Mathieu, and Zacarro (2001). Process shifts are those points in time when teams complete a focal process and change to another process. By using team communication patterns to measure process shifts, this research explores (a) when teams shift among different transition processes and initiate action processes and (b) the potential of different interventions, such as communication directives, to manipulate process shift timing and order and, ultimately, team performance. Virtual experiments are employed to compare data from observed laboratory teams not receiving interventions, simulated teams receiving interventions, and optimal simulated teams generated using genetic algorithm procedures. Our results offer insights about the potential for different interventions to affect team performance. Moreover, certain interventions may promote discussions about key issues (e.g., tactical strategies) and facilitate shifting among transition processes in a manner that emulates optimal simulated teams' communication patterns. Thus, we contribute to theory regarding team processes in 2 important ways. First, we present process shifts as a way to explore the timing of when teams shift from transition to action processes. Second, we use virtual experimentation to identify those interventions with the greatest potential to affect performance by changing when teams shift among processes. Additionally, we employ computational methods including neural networks, simulation, and optimization, thereby demonstrating their applicability in conducting team research. PsycINFO Database Record (c) 2014 APA, all rights reserved.

  2. Optimization of reconstruction algorithms using Monte Carlo simulation

    International Nuclear Information System (INIS)

    Hanson, K.M.

    1989-01-01

    A method for optimizing reconstruction algorithms is presented that is based on how well a specified task can be performed using the reconstructed images. Task performance is numerically assessed by a Monte Carlo simulation of the complete imaging process including the generation of scenes appropriate to the desired application, subsequent data taking, reconstruction, and performance of the stated task based on the final image. The use of this method is demonstrated through the optimization of the Algebraic Reconstruction Technique (ART), which reconstructs images from their projections by an iterative procedure. The optimization is accomplished by varying the relaxation factor employed in the updating procedure. In some of the imaging situations studied, it is found that the optimization of constrained ART, in which a nonnegativity constraint is invoked, can vastly increase the detectability of objects. There is little improvement attained for unconstrained ART. The general method presented may be applied to the problem of designing neutron-diffraction spectrometers. 11 refs., 6 figs., 2 tabs

  3. Optimization of reconstruction algorithms using Monte Carlo simulation

    International Nuclear Information System (INIS)

    Hanson, K.M.

    1989-01-01

    A method for optimizing reconstruction algorithms is presented that is based on how well a specified task can be performed using the reconstructed images. Task performance is numerically assessed by a Monte Carlo simulation of the complete imaging process including the generation of scenes appropriate to the desired application, subsequent data taking, reconstruction, and performance of the stated task based on the final image. The use of this method is demonstrated through the optimization of the Algebraic Reconstruction Technique (ART), which reconstructs images from their projections by an iterative procedure. The optimization is accomplished by varying the relaxation factor employed in the updating procedure. In some of the imaging situations studied, it is found that the optimization of constrained ART, in which a non-negativity constraint is invoked, can vastly increase the detectability of objects. There is little improvement attained for unconstrained ART. The general method presented may be applied to the problem of designing neutron-diffraction spectrometers. (author)
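
    For concreteness, a minimal ART iteration with the relaxation factor discussed above and an optional non-negativity clip is sketched below. The system matrix and data are toy stand-ins, not real projection data, and the loop structure is a generic Kaczmarz-style sweep rather than the authors' implementation.

```python
# Minimal ART sketch (illustrative): Kaczmarz-style updates with a relaxation
# factor `lam`, optionally clipping to enforce the non-negativity constraint
# discussed above. A and b are a toy system, not real projection data.
import numpy as np

rng = np.random.default_rng(2)
x_true = np.abs(rng.standard_normal(16))        # "image" to recover
A = rng.random((40, 16))                         # projection (system) matrix
b = A @ x_true                                   # measured projections

def art(A, b, lam=0.5, sweeps=50, nonneg=True):
    x = np.zeros(A.shape[1])
    for _ in range(sweeps):
        for a_i, b_i in zip(A, b):
            # project x toward the hyperplane of ray i, scaled by the relaxation factor
            x += lam * (b_i - a_i @ x) / (a_i @ a_i) * a_i
            if nonneg:
                x = np.clip(x, 0.0, None)
    return x

for lam in (0.2, 0.5, 1.0):
    err = np.linalg.norm(art(A, b, lam) - x_true)
    print(f"relaxation factor {lam}: reconstruction error {err:.3e}")
```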

  4. Enhanced nonlinearity interval mapping scheme for high-performance simulation-optimization of watershed-scale BMP placement

    Science.gov (United States)

    Zou, Rui; Riverson, John; Liu, Yong; Murphy, Ryan; Sim, Youn

    2015-03-01

    Integrated continuous simulation-optimization models can be effective predictors of process-based responses for cost-benefit optimization of best management practices (BMPs) selection and placement. However, practical application of simulation-optimization models is computationally prohibitive for large-scale systems. This study proposes an enhanced Nonlinearity Interval Mapping Scheme (NIMS) to solve large-scale watershed simulation-optimization problems several orders of magnitude faster than other commonly used algorithms. An efficient interval response coefficient (IRC) derivation method was incorporated into the NIMS framework to overcome a computational bottleneck. The proposed algorithm was evaluated using a case study watershed in the Los Angeles County Flood Control District. Using a continuous simulation watershed/stream-transport model, Loading Simulation Program in C++ (LSPC), three nested in-stream compliance points (CP)—each with multiple Total Maximum Daily Loads (TMDL) targets—were selected to derive optimal treatment levels for each of the 28 subwatersheds, so that the TMDL targets at all the CP were met with the lowest possible BMP implementation cost. Genetic Algorithm (GA) and NIMS were both applied and compared. The results showed that the NIMS took 11 iterations (about 11 min) to complete with the resulting optimal solution having a total cost of 67.2 million, while each of the multiple GA executions took 21-38 days to reach near optimal solutions. The best solution obtained among all the GA executions compared had a minimized cost of 67.7 million—marginally higher, but approximately equal to that of the NIMS solution. The results highlight the utility of the approach for decision making in large-scale watershed simulation-optimization formulations.

  5. Optimal simulation of a perfect entangler

    International Nuclear Information System (INIS)

    Yu Nengkun; Duan Runyao; Ying Mingsheng

    2010-01-01

    A 2 x 2 unitary operation is called a perfect entangler if it can generate a maximally entangled state from some unentangled input. We study the following question: How many runs of a given two-qubit entangling unitary operation are required to simulate some perfect entangler with one-qubit unitary operations as free resources? We completely solve this problem by presenting an analytical formula for the optimal number of runs of the entangling operation. Our result reveals an entanglement strength of two-qubit unitary operations.

  6. spsann - optimization of sample patterns using spatial simulated annealing

    Science.gov (United States)

    Samuel-Rosa, Alessandro; Heuvelink, Gerard; Vasques, Gustavo; Anjos, Lúcia

    2015-04-01

    There are many algorithms and computer programs to optimize sample patterns, some private and others publicly available. A few have only been presented in scientific articles and text books. This dispersion and somewhat poor availability holds back their wider adoption and further development. We introduce spsann, a new R-package for the optimization of sample patterns using spatial simulated annealing. R is the most popular environment for data processing and analysis. Spatial simulated annealing is a well-known method with widespread use to solve optimization problems in the soil and geo-sciences. This is mainly due to its robustness against local optima and ease of implementation. spsann offers many optimizing criteria for sampling for variogram estimation (number of points or point-pairs per lag distance class - PPL), trend estimation (association/correlation and marginal distribution of the covariates - ACDC), and spatial interpolation (mean squared shortest distance - MSSD). spsann also includes the mean or maximum universal kriging variance (MUKV) as an optimizing criterion, which is used when the model of spatial variation is known. PPL, ACDC and MSSD were combined (PAN) for sampling when we are ignorant about the model of spatial variation. spsann solves this multi-objective optimization problem by scaling the objective function values using their maximum absolute value or the mean value computed over 1000 random samples. Scaled values are aggregated using the weighted sum method. A graphical display allows the user to follow how the sample pattern is being perturbed during the optimization, as well as the evolution of its energy state. It is possible to start perturbing many points and exponentially reduce the number of perturbed points. The maximum perturbation distance reduces linearly with the number of iterations. The acceptance probability also reduces exponentially with the number of iterations. R is memory hungry and spatial simulated annealing is a
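
    To illustrate the annealing loop described above (perturbation distance shrinking with iteration, acceptance probability decaying over time), here is a simplified sketch that optimizes a sample pattern for the MSSD criterion. It is written in Python for consistency with the other sketches in this collection (spsann itself is an R package), and the grid, cooling, and schedule constants are illustrative assumptions.

```python
# Simplified spatial simulated annealing sketch (illustrative; spsann itself
# is an R package): optimize a sample pattern for the MSSD criterion, with
# the perturbation distance shrinking linearly and the acceptance probability
# decaying over iterations, as described above.
import numpy as np

rng = np.random.default_rng(3)
grid = np.array([(i, j) for i in range(20) for j in range(20)], dtype=float)

def mssd(samples):
    d = np.linalg.norm(grid[:, None, :] - samples[None, :, :], axis=2)
    return np.mean(d.min(axis=1) ** 2)          # mean squared shortest distance

samples = rng.uniform(0, 19, size=(10, 2))
energy = mssd(samples)
n_iter, max_shift0, temp0 = 2000, 5.0, 1.0

for k in range(n_iter):
    max_shift = max_shift0 * (1 - k / n_iter)    # perturbation distance decreases linearly
    temp = temp0 * np.exp(-5.0 * k / n_iter)     # acceptance probability decays exponentially
    cand = samples.copy()
    i = rng.integers(len(cand))
    cand[i] = np.clip(cand[i] + rng.uniform(-max_shift, max_shift, 2), 0, 19)
    e_new = mssd(cand)
    if e_new < energy or rng.random() < np.exp((energy - e_new) / max(temp, 1e-9)):
        samples, energy = cand, e_new

print("final MSSD:", energy)
```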

  7. Optimal level of continuous positive airway pressure: auto-adjusting titration versus titration with a predictive equation.

    Science.gov (United States)

    Choi, Ji Ho; Jun, Young Joon; Oh, Jeong In; Jung, Jong Yoon; Hwang, Gyu Ho; Kwon, Soon Young; Lee, Heung Man; Kim, Tae Hoon; Lee, Sang Hag; Lee, Seung Hoon

    2013-05-01

    The aims of the present study were twofold. We sought to compare two methods of titrating the level of continuous positive airway pressure (CPAP) - auto-adjusting titration and titration using a predictive equation - with full-night manual titration used as the benchmark. We also investigated the reliability of the two methods in patients with obstructive sleep apnea syndrome (OSAS). Twenty consecutive adult patients with OSAS who had successful, full-night manual and auto-adjusting CPAP titration participated in this study. The titration pressure level was calculated with a previously developed predictive equation based on body mass index and apnea-hypopnea index. The mean titration pressure levels obtained with the manual, auto-adjusting, and predictive equation methods were 9.0 +/- 3.6, 9.4 +/- 3.0, and 8.1 +/- 1.6 cm H2O, respectively. There was a significant difference in the concordance within the range of +/- 2 cm H2O (p = 0.019) between both the auto-adjusting titration and the titration using the predictive equation compared to the full-night manual titration. However, there was no significant difference in the concordance within the range of +/- 1 cm H2O (p > 0.999). When compared to full-night manual titration as the standard method, auto-adjusting titration appears to be more reliable than using a predictive equation for determining the optimal CPAP level in patients with OSAS.

  8. Modelling, simulating and optimizing boiler heating surfaces and evaporator circuits

    DEFF Research Database (Denmark)

    Sørensen, K.; Condra, T.; Houbak, Niels

    2003-01-01

    A model for optimizing the dynamic performance of a boiler has been developed. Design variables related to the size of the boiler and its dynamic performance have been defined. The object function to be optimized takes the weight of the boiler and its dynamic capability into account. As constraints for the optimization, a dynamic model for the boiler is applied. Furthermore, a function for the value of the dynamic performance is included in the model. The dynamic models for simulating boiler performance consist of a model for the flue gas side, a model for the evaporator circuit and a model for the drum. The dynamic model has been developed for the purpose of determining boiler material temperatures and heat transfer from the flue gas side to the water-/steam side in order to simulate the circulation in the evaporator circuit and hereby the water level fluctuations in the drum. The dynamic model has been...

  9. A Simulation Approach to Statistical Estimation of Multiperiod Optimal Portfolios

    Directory of Open Access Journals (Sweden)

    Hiroshi Shiraishi

    2012-01-01

    This paper discusses a simulation-based method for solving discrete-time multiperiod portfolio choice problems under an AR(1) process. The method is applicable even if the distributions of return processes are unknown. We first generate simulation sample paths of the random returns by using an AR bootstrap. Then, for each sample path and each investment time, we obtain an optimal portfolio estimator, which optimizes a constant relative risk aversion (CRRA) utility function. When an investor considers an optimal investment strategy with portfolio rebalancing, it is convenient to introduce a value function. The most important difference between single-period portfolio choice problems and multiperiod ones is that the value function is time dependent. Our method takes care of the time dependency by using bootstrapped sample paths. Numerical studies are provided to examine the validity of our method. The result shows the necessity to take care of the time dependency of the value function.
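
    A minimal sketch of the AR(1) bootstrap step used to generate return sample paths: fit an AR(1) by least squares, then resample the fitted residuals to simulate forward paths. The return series, coefficients, and horizons below are synthetic illustrations, not the paper's data or full estimation procedure.

```python
# Sketch of an AR(1) bootstrap for generating return sample paths
# (illustrative; the return series here is synthetic). Fit an AR(1) by least
# squares, then resample the fitted residuals to build bootstrap paths.
import numpy as np

rng = np.random.default_rng(4)
T = 500
returns = np.empty(T)
returns[0] = 0.01
for t in range(1, T):                      # synthetic "observed" AR(1) returns
    returns[t] = 0.005 + 0.3 * returns[t - 1] + 0.02 * rng.standard_normal()

# Least-squares AR(1) fit: r_t = c + phi * r_{t-1} + eps_t
X = np.column_stack([np.ones(T - 1), returns[:-1]])
c, phi = np.linalg.lstsq(X, returns[1:], rcond=None)[0]
resid = returns[1:] - X @ np.array([c, phi])

def bootstrap_paths(n_paths, horizon):
    paths = np.empty((n_paths, horizon))
    r = np.full(n_paths, returns[-1])
    for h in range(horizon):
        eps = rng.choice(resid, size=n_paths, replace=True)   # resample residuals
        r = c + phi * r + eps
        paths[:, h] = r
    return paths

paths = bootstrap_paths(n_paths=10_000, horizon=12)
print("mean 12-period cumulative return:", paths.sum(axis=1).mean())
```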

  10. Simulation and Optimization of Contactless Power Transfer System for Rotary Ultrasonic Machining

    Directory of Open Access Journals (Sweden)

    Wang Xinwei

    2016-01-01

    In today’s rotary ultrasonic machining (RUM), the power transfer system is based on a contactless power system (a rotary transformer) rather than a slip ring, which cannot cope with high-speed rotation of the tool. The efficiency of the rotary transformer is vital to the whole rotary ultrasonic machine. This paper focuses on simulating the rotary transformer and enhancing its efficiency by optimizing three main factors that influence it: the gap between the two ferrite cores, the ratio of length to width of the ferrite core, and the thickness of the ferrite. The finite element model of the rotary transformer was built on the Maxwell platform. Simulation and optimization work was based on the finite element model. The optimization results, compared with the initial simulation result, showed an approximately 18% enhancement in terms of efficiency, from 77.69% to 95.2%.

  11. Multiobjective generalized extremal optimization algorithm for simulation of daylight illuminants

    Science.gov (United States)

    Kumar, Srividya Ravindra; Kurian, Ciji Pearl; Gomes-Borges, Marcos Eduardo

    2017-10-01

    Daylight illuminants are widely used as references for color quality testing and optical vision testing applications. Presently used daylight simulators make use of fluorescent bulbs that are not tunable and occupy more space inside the quality testing chambers. By designing a spectrally tunable LED light source with an optimal number of LEDs, cost, space, and energy can be saved. This paper describes an application of the generalized extremal optimization (GEO) algorithm for selection of the appropriate quantity and quality of LEDs that compose the light source. The multiobjective approach of this algorithm seeks the best spectral simulation with minimum fitness error relative to the target spectrum, a correlated color temperature (CCT) equal to that of the target spectrum, a high color rendering index (CRI), and the luminous flux required for testing applications. GEO is a global search algorithm based on phenomena of natural evolution and is especially designed to be used in complex optimization problems. Several simulations have been conducted to validate the performance of the algorithm. The methodology applied to model the LEDs, together with the theoretical basis for CCT and CRI calculation, is presented in this paper. A comparative result analysis of the M-GEO evolutionary algorithm with the Levenberg-Marquardt conventional deterministic algorithm is also presented.

  12. Post-processing of Monte Carlo simulations for rapid BNCT source optimization studies

    International Nuclear Information System (INIS)

    Bleuel, D.L.; Chu, W.T.; Donahue, R.J.; Ludewigt, B.A.; Vujic, J.

    2000-01-01

    A great advantage of some neutron sources, such as accelerator-produced sources, is that they can be tuned to produce different spectra. Unfortunately, optimization studies are often time-consuming and difficult, as they require a lengthy Monte Carlo simulation for each source. When multiple characteristics, such as energy, angle, and spatial distribution of a neutron beam are allowed to vary, an overwhelming number of simulations may be required. Many optimization studies, therefore, suffer from a small number of data points, restrictive treatment conditions, or poor statistics. By scoring pertinent information from every particle tally in a Monte Carlo simulation, then applying appropriate source variable weight factors in a post-processing algorithm, a single simulation can be used to model any number of multiple sources. Through this method, the response to a new source can be modeled in minutes or seconds, rather than hours or days, allowing for the analysis of truly variable source conditions of much greater resolution than is normally possible when a new simulation must be run for each data point in a study. This method has been benchmarked and used to recreate optimization studies in a small fraction of the time spent in the original studies. (author)

  13. Conceptualizing and measuring illness self-concept: a comparison with self-esteem and optimism in predicting fibromyalgia adjustment.

    Science.gov (United States)

    Morea, Jessica M; Friend, Ronald; Bennett, Robert M

    2008-12-01

    Illness self-concept (ISC), or the extent to which individuals are consumed by their illness, was theoretically described and evaluated with the Illness Self-Concept Scale (ISCS), a new 23-item scale, to predict adjustment in fibromyalgia. To establish convergent and discriminant validity, illness self-concept was compared to self-esteem and optimism in predicting health status, illness intrusiveness, depression, and life satisfaction. The ISCS demonstrated good reliability (alpha = .94; test-retest r = .80) and was a strong predictor of outcomes, even after controlling for optimism or self-esteem. The ISCS predicted unique variance in health-related outcomes; optimism and self-esteem did not, providing construct validation. Illness self-concept may play a significant role in coping with fibromyalgia and may prove useful in the evaluation of other chronic illnesses. (c) 2008 Wiley Periodicals, Inc.

  14. A framework for simulation-based optimization demonstrated on reconfigurable robot workcells

    DEFF Research Database (Denmark)

    Atorf, Linus; Schorn, Christoph; Roßmann, Jürgen

    2017-01-01

    Today's trends towards automation and robotics, fueled by the emerging Industry 4.0 paradigm shift, open up many new kinds of control and optimization problems. At the same time, advances in 3D simulation technology lead to ever-improving simulation models and algorithms in various domains...

  15. Qualitative and Quantitative Integrated Modeling for Stochastic Simulation and Optimization

    Directory of Open Access Journals (Sweden)

    Xuefeng Yan

    2013-01-01

    The simulation and optimization of an actual physics system are usually constructed based on the stochastic models, which have both qualitative and quantitative characteristics inherently. Most modeling specifications and frameworks find it difficult to describe the qualitative model directly. In order to deal with the expert knowledge, uncertain reasoning, and other qualitative information, a qualitative and quantitative combined modeling specification was proposed based on a hierarchical model structure framework. The new modeling approach is based on a hierarchical model structure which includes the meta-meta model, the meta-model and the high-level model. A description logic system is defined for formal definition and verification of the new modeling specification. A stochastic defense simulation was developed to illustrate how to model the system and optimize the result. The result shows that the proposed method can describe the complex system more comprehensively, and the survival probability of the target is higher by introducing qualitative models into quantitative simulation.

  16. Robust Optimization in Simulation : Taguchi and Response Surface Methodology

    NARCIS (Netherlands)

    Dellino, G.; Kleijnen, J.P.C.; Meloni, C.

    2008-01-01

    Optimization of simulated systems is tackled by many methods, but most methods assume known environments. This article, however, develops a 'robust' methodology for uncertain environments. This methodology uses Taguchi's view of the uncertain world, but replaces his statistical techniques by

  17. Simulation and Optimization of Air-Cooled PEMFC Stack for Lightweight Hybrid Vehicle Application

    Directory of Open Access Journals (Sweden)

    Jingming Liang

    2015-01-01

    A model of a 2 kW air-cooled proton exchange membrane fuel cell (PEMFC) stack has been built for lightweight hybrid vehicle applications after analyzing the heat transfer characteristics of the air-cooled stack. Different heat dissipation models of the air-cooled stack have been simulated, and an optimal simulation model for the air-cooled stack, called the convection heat transfer (CHT) model, has been identified by applying computational fluid dynamics (CFD) software, based on which the structure of the air-cooled stack has been optimized by adding irregular cooling fins at the end of the stack. According to the simulation result, the temperature of the stack is distributed evenly, reducing the cooling density and saving energy. Finally, the 2 kW hydrogen-air air-cooled PEMFC stack is manufactured and tested, and the test data are compared with the simulation results to determine its operating characteristics in order to further optimize its structure.

  18. Simulative design and process optimization of the two-stage stretch-blow molding process

    Energy Technology Data Exchange (ETDEWEB)

    Hopmann, Ch.; Rasche, S.; Windeck, C. [Institute of Plastics Processing at RWTH Aachen University (IKV) Pontstraße 49, 52062 Aachen (Germany)

    2015-05-22

    The total production costs of PET bottles are significantly affected by the costs of raw material. Approximately 70 % of the total costs are spent for the raw material. Therefore, the stretch-blow molding industry intends to reduce the total production costs by an optimized material efficiency. However, there is often a trade-off between an optimized material efficiency and required product properties. Due to a multitude of complex boundary conditions, the design process of new stretch-blow molded products is still a challenging task and is often based on empirical knowledge. Application of current CAE-tools supports the design process by reducing development time and costs. This paper describes an approach to determine optimized preform geometry and corresponding process parameters iteratively. The wall thickness distribution and the local stretch ratios of the blown bottle are calculated in a three-dimensional process simulation. Thereby, the wall thickness distribution is correlated with an objective function and preform geometry as well as process parameters are varied by an optimization algorithm. Taking into account the correlation between material usage, process history and resulting product properties, integrative coupled simulation steps, e.g. structural analyses or barrier simulations, are performed. The approach is applied on a 0.5 liter PET bottle of Krones AG, Neutraubling, Germany. The investigations point out that the design process can be supported by applying this simulative optimization approach. In an optimization study the total bottle weight is reduced from 18.5 g to 15.5 g. The validation of the computed results is in progress.

  19. Simulative design and process optimization of the two-stage stretch-blow molding process

    International Nuclear Information System (INIS)

    Hopmann, Ch.; Rasche, S.; Windeck, C.

    2015-01-01

    The total production costs of PET bottles are significantly affected by the costs of raw material. Approximately 70 % of the total costs are spent for the raw material. Therefore, the stretch-blow molding industry intends to reduce the total production costs by an optimized material efficiency. However, there is often a trade-off between an optimized material efficiency and required product properties. Due to a multitude of complex boundary conditions, the design process of new stretch-blow molded products is still a challenging task and is often based on empirical knowledge. Application of current CAE-tools supports the design process by reducing development time and costs. This paper describes an approach to determine optimized preform geometry and corresponding process parameters iteratively. The wall thickness distribution and the local stretch ratios of the blown bottle are calculated in a three-dimensional process simulation. Thereby, the wall thickness distribution is correlated with an objective function and preform geometry as well as process parameters are varied by an optimization algorithm. Taking into account the correlation between material usage, process history and resulting product properties, integrative coupled simulation steps, e.g. structural analyses or barrier simulations, are performed. The approach is applied on a 0.5 liter PET bottle of Krones AG, Neutraubling, Germany. The investigations point out that the design process can be supported by applying this simulative optimization approach. In an optimization study the total bottle weight is reduced from 18.5 g to 15.5 g. The validation of the computed results is in progress.

  20. Truss Structure Optimization with Subset Simulation and Augmented Lagrangian Multiplier Method

    Directory of Open Access Journals (Sweden)

    Feng Du

    2017-11-01

    This paper presents a global optimization method for structural design optimization, which integrates subset simulation optimization (SSO) and the dynamic augmented Lagrangian multiplier method (DALMM). The proposed method formulates the structural design optimization as a series of unconstrained optimization sub-problems using the DALMM and makes use of SSO to find the global optimum. The combined strategy guarantees that the proposed method can automatically detect active constraints and provide global optimal solutions with finite penalty parameters. The accuracy and robustness of the proposed method are demonstrated by four classical truss sizing problems. The results are compared with those reported in the literature, and show a remarkable statistical performance based on 30 independent runs.
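
    The augmented Lagrangian outer loop alluded to above can be sketched as follows: a constrained problem min f(x) s.t. g(x) <= 0 is turned into a sequence of unconstrained sub-problems with a penalty parameter and multiplier updates. The toy objective, constraint, and plain random-search inner solver below are assumptions for illustration; they stand in for the truss sizing problem and for subset simulation optimization, respectively.

```python
# Sketch of an augmented-Lagrangian outer loop (illustrative): a constrained
# problem min f(x) s.t. g(x) <= 0 is solved via unconstrained sub-problems;
# here the inner solver is plain random search standing in for SSO.
import numpy as np

rng = np.random.default_rng(5)

def f(x):                       # "weight" to minimize (toy)
    return np.sum(x)

def g(x):                       # constraint g(x) <= 0 (toy stress limit)
    return 1.0 - x[0] * x[1]

def augmented_lagrangian(x, lam, rho):
    return f(x) + (rho / 2.0) * max(0.0, g(x) + lam / rho) ** 2

def inner_solve(lam, rho, n_samples=20_000):
    cands = rng.uniform(0.1, 3.0, size=(n_samples, 2))
    vals = [augmented_lagrangian(c, lam, rho) for c in cands]
    return cands[int(np.argmin(vals))]

lam, rho = 0.0, 1.0
for _ in range(10):             # outer multiplier/penalty updates
    x = inner_solve(lam, rho)
    lam = max(0.0, lam + rho * g(x))
    rho *= 2.0

print("solution:", x, "objective:", f(x), "constraint:", g(x))
```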

  1. Assessing the applicability of WRF optimal parameters under the different precipitation simulations in the Greater Beijing Area

    Science.gov (United States)

    Di, Zhenhua; Duan, Qingyun; Wang, Chen; Ye, Aizhong; Miao, Chiyuan; Gong, Wei

    2018-03-01

    Forecasting skills of complex weather and climate models have been improved by tuning the sensitive parameters that exert the greatest impact on simulated results, based on more effective optimization methods. However, whether the optimal parameter values still work when the model simulation conditions vary is a scientific question deserving of study. In this study, a highly effective optimization method, adaptive surrogate model-based optimization (ASMO), was first used to tune nine sensitive parameters from four physical parameterization schemes of the Weather Research and Forecasting (WRF) model to obtain better summer precipitation forecasting over the Greater Beijing Area in China. Then, to assess the applicability of the optimal parameter values, simulation results from the WRF model with default and optimal parameter values were compared across precipitation events, boundary conditions, spatial scales, and physical processes in the Greater Beijing Area. The summer precipitation events from 6 years were used to calibrate and evaluate the optimal parameter values of the WRF model. Three boundary datasets and two spatial resolutions were adopted to evaluate the superiority of the calibrated optimal parameters over the default parameters under WRF simulations with different boundary conditions and spatial resolutions, respectively. Physical interpretations of the optimal parameters indicating how to improve precipitation simulation results were also examined. All the results showed that the optimal parameters obtained by ASMO are superior to the default parameters for WRF simulations for predicting summer precipitation in the Greater Beijing Area because the optimal parameters are not constrained by specific precipitation events, boundary conditions, and spatial resolutions. The optimal values of the nine parameters were determined from 127 parameter samples using the ASMO method, which showed that the ASMO method is highly efficient for optimizing WRF
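
    The general pattern of adaptive surrogate-based optimization can be sketched in a few lines: fit a cheap surrogate to the expensive model evaluations collected so far, evaluate the true model at the surrogate's minimizer over a candidate set, and refit. The "expensive model" and quadratic surrogate below are toy assumptions, not WRF or the specific ASMO algorithm used in the study.

```python
# Sketch of adaptive surrogate model-based optimization (illustrative; the
# "expensive model" is a toy function, not WRF): fit a quadratic surrogate to
# the evaluations so far, evaluate the true model at the surrogate minimizer
# over a candidate grid, and refit.
import numpy as np

rng = np.random.default_rng(6)

def expensive_model(theta):                 # stand-in for a model run scored against observations
    return (theta[0] - 0.3) ** 2 + 2.0 * (theta[1] + 0.5) ** 2

def features(X):                            # quadratic polynomial basis for the surrogate
    x1, x2 = X[:, 0], X[:, 1]
    return np.column_stack([np.ones(len(X)), x1, x2, x1 * x2, x1 ** 2, x2 ** 2])

X = rng.uniform(-1, 1, size=(10, 2))        # initial design
y = np.array([expensive_model(t) for t in X])
candidates = rng.uniform(-1, 1, size=(5000, 2))

for _ in range(20):                          # adaptive refinement loop
    coef = np.linalg.lstsq(features(X), y, rcond=None)[0]
    surrogate = features(candidates) @ coef
    new = candidates[int(np.argmin(surrogate))]
    X = np.vstack([X, new])
    y = np.append(y, expensive_model(new))

best = X[int(np.argmin(y))]
print("best parameters found:", best, "score:", y.min())
```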

  2. Optimal velocity difference model for a car-following theory

    International Nuclear Information System (INIS)

    Peng, G.H.; Cai, X.H.; Liu, C.Q.; Cao, B.F.; Tuo, M.X.

    2011-01-01

    In this Letter, we present a new optimal velocity difference model for a car-following theory based on the full velocity difference model. The linear stability condition of the new model is obtained by using the linear stability theory. The unrealistically high deceleration does not appear in the OVDM. Numerical simulation of traffic dynamics shows that the new model can avoid the disadvantage of negative velocity that occurs at small sensitivity coefficient λ in the full velocity difference model by adjusting the coefficient of the optimal velocity difference, which shows that collisions can disappear in the improved model. -- Highlights: → A new optimal velocity difference car-following model is proposed. → The effects of the optimal velocity difference on the stability of traffic flow have been explored. → The starting and braking processes were carried out through simulation. → The effects of the optimal velocity difference can avoid the disadvantage of negative velocity.
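
    As a minimal illustration of the kind of numerical simulation used in such car-following studies, the sketch below integrates a full-velocity-difference-type model, where each vehicle accelerates toward an optimal velocity of its headway plus a velocity-difference term. The optimal-velocity function, sensitivity constants, and the clamp forbidding negative velocities are illustrative assumptions; the paper's exact OVDM term is not reproduced here.

```python
# Minimal car-following simulation sketch of the full velocity difference
# (FVD) family of models (illustrative; the paper's exact OVDM term is not
# reproduced). Each vehicle accelerates toward an optimal velocity of its
# headway plus a velocity-difference term.
import numpy as np

def optimal_velocity(headway, v_max=30.0, h_c=25.0):
    return 0.5 * v_max * (np.tanh(0.1 * (headway - h_c)) + np.tanh(0.1 * h_c))

def simulate(n_cars=20, a=0.8, lam=0.3, dt=0.1, steps=5000):
    x = np.arange(n_cars)[::-1] * 30.0       # initial positions, leader first
    v = np.full(n_cars, optimal_velocity(30.0))
    for _ in range(steps):
        headway = np.empty(n_cars)
        dv = np.empty(n_cars)
        headway[0], dv[0] = 1e9, 0.0          # free-flowing leader
        headway[1:] = x[:-1] - x[1:]
        dv[1:] = v[:-1] - v[1:]
        acc = a * (optimal_velocity(headway) - v) + lam * dv
        v = np.maximum(v + acc * dt, 0.0)     # forbid negative velocities
        x = x + v * dt
    return x, v

x, v = simulate()
print("final speeds (m/s):", np.round(v, 2))
```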

  3. Simulation and optimization of stable isotope 13C separation by carbon monoxide cryogenic distillation

    International Nuclear Information System (INIS)

    Li Hulin; Ju Yonglin; Li Liangjun; Xu Dagang

    2009-01-01

    A stable isotope 13 C separation column was set up using carbon monoxide (CO) cryogenic distillation. The diameter of the column is 45 mm and the packing height is 17.5 m, of which the enriching section is 15 m and the stripping section is 2.5 m. Firstly, computer simulation results were validated against test results. Secondly, tests were replaced by computer simulations in order to obtain the optimal operation conditions in the experimental setup. The combined impacts of column pressure, feeding velocity, reflux ratio, withdrawing velocity, and boiling power on the products were studied. Then, optimization design of the experimental device was achieved through computer simulations combined with uniform experimental design. The final results show that the optimal operation conditions in the built column are as follows: boiling power, 250 W; column pressure, 54 kPa; reflux ratio, 84. The conclusion is that the combination of computer simulation and experimental design could be applied to industrial 13 C separation design and could be extended to traditional distillation processes to realize optimized design. (authors)

  4. Optimization and Simulation in Drug Development - Review and Analysis

    DEFF Research Database (Denmark)

    Schjødt-Eriksen, Jens; Clausen, Jens

    2003-01-01

    We give a review of pharmaceutical R&D and mathematical simulation and optimization methods used to support decision making within the pharmaceutical development process. The complex nature of drug development is pointed out through a description of the various phases of the pharmaceutical development process. A part of the paper is dedicated to the use of simulation techniques to support clinical trials. The paper ends with a section describing portfolio modelling methods in the context of the pharmaceutical industry.

  5. Numerical simulation and structural optimization of the inclined oil/water separator.

    Directory of Open Access Journals (Sweden)

    Liqiong Chen

    Improving the separation efficiency of the inclined oil/water separator, a new type of gravity separation equipment, is of great importance. In order to obtain a comprehensive understanding of the internal flow field of the separation process of oil and water within this separator, a numerical simulation based on Euler multiphase flow analysis and the realizable k-ε two equation turbulence model was executed using Fluent software. The optimal value ranges of the separator's various structural parameters used in the numerical simulation were selected through orthogonal array experiments. A field experiment on the separator was conducted with optimized structural parameters in order to validate the reliability of the numerical simulation results. The research results indicated that the horizontal position of the dispenser, the hole number, and the diameter had significant effects on the oil/water separation efficiency, and that the longitudinal position of the dispenser and the position of the weir plate had insignificant effects on the oil/water separation efficiency. The optimal structural parameters obtained through the orthogonal array experiments resulted in an oil/water separation efficiency of up to 95%, which was 4.996% greater than that realized by the original structural parameters.

  6. Simulated Stochastic Approximation Annealing for Global Optimization With a Square-Root Cooling Schedule

    KAUST Repository

    Liang, Faming

    2014-04-03

    Simulated annealing has been widely used in the solution of optimization problems. As known by many researchers, the global optima cannot be guaranteed to be located by simulated annealing unless a logarithmic cooling schedule is used. However, the logarithmic cooling schedule is so slow that no one can afford to use this much CPU time. This article proposes a new stochastic optimization algorithm, the so-called simulated stochastic approximation annealing algorithm, which is a combination of simulated annealing and the stochastic approximation Monte Carlo algorithm. Under the framework of stochastic approximation, it is shown that the new algorithm can work with a cooling schedule in which the temperature can decrease much faster than in the logarithmic cooling schedule, for example, a square-root cooling schedule, while guaranteeing the global optima to be reached when the temperature tends to zero. The new algorithm has been tested on a few benchmark optimization problems, including feed-forward neural network training and protein-folding. The numerical results indicate that the new algorithm can significantly outperform simulated annealing and other competitors. Supplementary materials for this article are available online.
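
    To make the cooling-schedule trade-off concrete, the sketch below runs a plain simulated-annealing loop on a multimodal test function with a logarithmic schedule and with a square-root schedule. This is generic simulated annealing for illustration only, not the stochastic-approximation annealing algorithm itself; the test function, step size, and temperature constants are assumptions.

```python
# Sketch comparing cooling schedules in a plain simulated-annealing loop on a
# multimodal test function (illustrative; this is generic SA, not the
# stochastic-approximation annealing algorithm itself).
import numpy as np

rng = np.random.default_rng(7)

def rastrigin(x):
    return 10 * len(x) + np.sum(x ** 2 - 10 * np.cos(2 * np.pi * x))

def anneal(schedule, n_iter=50_000, t0=10.0):
    x = rng.uniform(-5.12, 5.12, 2)
    fx = rastrigin(x)
    best = fx
    for k in range(1, n_iter + 1):
        t = schedule(t0, k)
        cand = np.clip(x + 0.5 * rng.standard_normal(2), -5.12, 5.12)
        fc = rastrigin(cand)
        if fc < fx or rng.random() < np.exp((fx - fc) / t):   # Metropolis acceptance
            x, fx = cand, fc
            best = min(best, fx)
    return best

log_schedule = lambda t0, k: t0 / np.log(k + 1.0)   # classical, provably convergent but slow
sqrt_schedule = lambda t0, k: t0 / np.sqrt(k)       # square-root schedule discussed above
print("log cooling best:", anneal(log_schedule))
print("sqrt cooling best:", anneal(sqrt_schedule))
```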

  7. Simulation Optimization by Genetic Search: A Comprehensive Study with Applications to Production Management

    National Research Council Canada - National Science Library

    Yunker, James

    2003-01-01

    In this report, a relatively new simulation optimization technique, the genetic search, is compared to two more established simulation techniques-the pattern search and the response surface methodology search...

  8. Optimal fabrication processes for unidirectional metal-matrix composites: A computational simulation

    Science.gov (United States)

    Saravanos, D. A.; Murthy, P. L. N.; Morel, M.

    1990-01-01

    A method is proposed for optimizing the fabrication process of unidirectional metal matrix composites. The temperature and pressure histories are optimized such that the residual microstresses of the composite at the end of the fabrication process are minimized and the material integrity throughout the process is ensured. The response of the composite during the fabrication is simulated based on a nonlinear micromechanics theory. The optimal fabrication problem is formulated and solved with non-linear programming. Application cases regarding the optimization of the fabrication cool-down phases of unidirectional ultra-high modulus graphite/copper and silicon carbide/titanium composites are presented.

  9. Optimal fabrication processes for unidirectional metal-matrix composites - A computational simulation

    Science.gov (United States)

    Saravanos, D. A.; Murthy, P. L. N.; Morel, M.

    1990-01-01

    A method is proposed for optimizing the fabrication process of unidirectional metal matrix composites. The temperature and pressure histories are optimized such that the residual microstresses of the composite at the end of the fabrication process are minimized and the material integrity throughout the process is ensured. The response of the composite during the fabrication is simulated based on a nonlinear micromechanics theory. The optimal fabrication problem is formulated and solved with nonlinear programming. Application cases regarding the optimization of the fabrication cool-down phases of unidirectional ultra-high modulus graphite/copper and silicon carbide/titanium composites are presented.

  10. Computing Optimal Stochastic Portfolio Execution Strategies: A Parametric Approach Using Simulations

    Science.gov (United States)

    Moazeni, Somayeh; Coleman, Thomas F.; Li, Yuying

    2010-09-01

    Computing optimal stochastic portfolio execution strategies under appropriate risk consideration presents great computational challenge. We investigate a parametric approach for computing optimal stochastic strategies using Monte Carlo simulations. This approach allows reduction in computational complexity by computing coefficients for a parametric representation of a stochastic dynamic strategy based on static optimization. Using this technique, constraints can be similarly handled using appropriate penalty functions. We illustrate the proposed approach to minimize the expected execution cost and Conditional Value-at-Risk (CVaR).
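
    A minimal sketch of the Monte Carlo evaluation step for a parametric execution strategy: a single parameter shapes the trade schedule, simulated price paths produce a cost distribution, and expected cost and CVaR are read off that distribution. The linear price-impact cost model, parameter names, and numbers below are toy assumptions, not the paper's formulation.

```python
# Sketch of Monte Carlo evaluation of a parametric execution strategy
# (illustrative; the cost model below is a toy stand-in). The parameter
# `kappa` shapes how quickly shares are sold; expected cost and CVaR are
# estimated from simulated price paths.
import numpy as np

rng = np.random.default_rng(8)
n_paths, n_periods, shares, sigma, impact = 50_000, 10, 1.0, 0.02, 0.05

def execution_cost(kappa):
    t = np.arange(1, n_periods + 1)
    trades = np.exp(-kappa * t)
    trades = shares * trades / trades.sum()                  # parametric trade schedule
    noise = sigma * rng.standard_normal((n_paths, n_periods))
    price_drift = np.cumsum(noise, axis=1)                   # random-walk price changes
    return -(price_drift * trades).sum(axis=1) + impact * np.sum(trades ** 2)

def cvar(costs, alpha=0.95):
    var = np.quantile(costs, alpha)
    return costs[costs >= var].mean()                        # mean cost in the worst (1-alpha) tail

for kappa in (0.0, 0.3, 1.0):
    costs = execution_cost(kappa)
    print(f"kappa={kappa}: expected cost={costs.mean():.5f}, CVaR95={cvar(costs):.5f}")
```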

  11. Parametric Optimization Through Numerical Simulation of VCR Diesel Engine

    Science.gov (United States)

    Ganji, Prabhakara Rao; Mahmood, Al-Qarttani Abdulrahman Shakir; Kandula, Aasrith; Raju, Vysyaraju Rajesh Khana; Rao, Surapaneni Srinivasa

    2017-08-01

    In the present study, the Variable Compression Ratio (VCR) engine was analyzed numerically using the CONVERGE™ Computational Fluid Dynamics code in order to optimize the design/operating parameters such as Compression Ratio (CR), Start of Injection (SOI) and Exhaust Gas Recirculation (EGR). The VCR engine was run at 100 % load to test its performance and was validated for the standard configuration. Simulations were performed by varying the design/operating parameters such as CR (18-14), SOI (17°-26° bTDC) and EGR (0-15 %) at a constant fuel injection pressure of 230 bar and speed of 1500 rpm. The effect of each of these parameters on pressure, oxides of nitrogen (NOx) and soot is presented. Finally, regression equations were developed for pressure, NOx and soot by using the simulation results. The regression equations were solved for multi-objective criteria in order to reduce the NOx and soot while maintaining the baseline performance. The optimized configuration was tested for validation and found satisfactory.

  12. Simulation-Based Multiobjective Optimization of Timber-Glass Residential Buildings in Severe Cold Regions

    Directory of Open Access Journals (Sweden)

    Yunsong Han

    2017-12-01

    In the current context of increasing energy demand, timber-glass buildings will become a necessary trend in sustainable architecture in the future. Especially in severe cold zones of China, the energy consumption and visual comfort of residential buildings have attracted wide attention, and there are always trade-offs between multiple objectives. This paper aims to propose a simulation-based multiobjective optimization method to improve the daylighting, energy efficiency, and economic performance of timber-glass buildings in severe cold regions. Timber-glass building form variables have been selected as the decision variables, including building width, roof height, south and north window-to-wall ratio (WWR), window height, and orientation. A simulation-based multiobjective optimization model has been developed to optimize these performance objectives simultaneously. The results show that Daylighting Autonomy (DA) presents negative correlations with Energy Use Intensity (EUI) and total cost. Additionally, with an increase in DA, Useful Daylighting Illuminance (UDI) demonstrates a tendency to first increase and then decrease. Using this optimization model, four building performances have been improved from the initial generation to the final generation, which proves that simulation-based multiobjective optimization is a promising approach to improve the daylighting, energy efficiency, and economic performances of timber-glass buildings in severe cold regions.

  13. Proceedings of the 6. IASTED conference on modelling, simulation, and optimization

    Energy Technology Data Exchange (ETDEWEB)

    Nyongesa, H. [Botswana Univ., Gaborone (Botswana). Dept. of Computer Science] (ed.)

    2006-07-01

    This conference presented a variety of new optimization and simulation tools for use in several scientific fields. Neural network-based simulation tools were presented, as well as new approaches to optimizing artificial intelligence simulation models. Approaches to image compression were discussed. Control strategies and systems analysis methodologies were presented. Other topics included Gaussian mixture models; helical transformation; fault diagnosis; and stochastic dynamics in economical applications. Decision support system models were also discussed in addition to recursive approaches to virtualization, and intelligent designs for the provision of HIV treatments in Africa. The conference was divided into 8 sessions: (1) scientific applications; (2) system design; (3) environmental applications; (4) economic and financial applications; (5) modelling techniques; (6) general methods; (7) special session; and (8) additional papers. The conference featured 56 presentations, 5 of which have been catalogued separately for inclusion in this database. refs., tabs., figs.

  14. Lipid and Creatinine Adjustment to Evaluate Health Effects of Environmental Exposures.

    Science.gov (United States)

    O'Brien, Katie M; Upson, Kristen; Buckley, Jessie P

    2017-03-01

    Urine- and serum-based biomarkers are useful for assessing individuals' exposure to environmental factors. However, variations in urinary creatinine (a measure of dilution) or serum lipid levels, if not adequately corrected for, can directly impact biomarker concentrations and bias exposure-disease association measures. Recent methodological literature has considered the complex relationships between creatinine or serum lipid levels, exposure biomarkers, outcomes, and other potentially relevant factors using directed acyclic graphs and simulation studies. The optimal measures of urinary dilution and serum lipids have also been investigated. Existing evidence supports the use of covariate-adjusted standardization plus creatinine adjustment for urinary biomarkers and standardization plus serum lipid adjustment for lipophilic, serum-based biomarkers. It is unclear which urinary dilution measure is best, but all serum lipid measures performed similarly. Future research should assess methods for pooled biomarkers and for studying diseases and exposures that affect creatinine or serum lipids directly.

  15. Design-Based Comparison of Spine Surgery Simulators: Optimizing Educational Features of Surgical Simulators.

    Science.gov (United States)

    Ryu, Won Hyung A; Mostafa, Ahmed E; Dharampal, Navjit; Sharlin, Ehud; Kopp, Gail; Jacobs, W Bradley; Hurlbert, R John; Chan, Sonny; Sutherland, Garnette R

    2017-10-01

    Simulation-based education has made its entry into surgical residency training, particularly as an adjunct to hands-on clinical experience. However, one of the ongoing challenges to wide adoption is the capacity of simulators to incorporate educational features required for effective learning. The aim of this study was to identify strengths and limitations of spine simulators to characterize design elements that are essential in enhancing resident education. We performed a mixed qualitative and quantitative cohort study with a focused survey and interviews of stakeholders in spine surgery pertaining to their experiences on 3 spine simulators. Ten participants were recruited spanning all levels of training and expertise until qualitative analysis reached saturation of themes. Participants were asked to perform lumbar pedicle screw insertion on 3 simulators. Afterward, a 10-item survey was administered and a focused interview was conducted to explore topics pertaining to the design features of the simulators. Overall impressions of the simulators were positive with regard to their educational benefit, but our qualitative analysis revealed differing strengths and limitations. Main design strengths of the computer-based simulators were incorporation of procedural guidance and provision of performance feedback. The synthetic model excelled in achieving more realistic haptic feedback and incorporating use of actual surgical tools. Stakeholders from trainees to experts acknowledge the growing role of simulation-based education in spine surgery. However, different simulation modalities have varying design elements that augment learning in distinct ways. Characterization of these design characteristics will allow for standardization of simulation curricula in spinal surgery, optimizing educational benefit. Copyright © 2017 Elsevier Inc. All rights reserved.

  16. A fuzzy-stochastic simulation-optimization model for planning electric power systems with considering peak-electricity demand: A case study of Qingdao, China

    International Nuclear Information System (INIS)

    Yu, L.; Li, Y.P.; Huang, G.H.

    2016-01-01

    In this study, an FSSOM (fuzzy-stochastic simulation-optimization model) is developed for planning EPS (electric power systems) with consideration of peak demand under uncertainty. FSSOM integrates techniques of SVR (support vector regression), Monte Carlo simulation, and FICMP (fractile interval chance-constrained mixed-integer programming). In FSSOM, uncertainties expressed as fuzzy boundary intervals and random variables can be effectively tackled. In addition, an SVR-coupled Monte Carlo technique is used for predicting the peak electricity demand. The FSSOM is applied to planning the EPS for the City of Qingdao, China. Solutions for the electricity generation pattern to satisfy the city's peak demand under different probability levels and p-necessity levels have been generated. Results reveal that the city's electricity supply from renewable energies would be low (only occupying 8.3% of the total electricity generation). Compared with the energy model without considering peak demand, the FSSOM can better guarantee the city's power supply and thus reduce the system failure risk. The findings can help decision makers not only adjust the existing electricity generation/supply pattern but also coordinate the conflicting interactions among system cost, energy supply security, pollutant mitigation, as well as constraint-violation risk. - Highlights: • FSSOM (Fuzzy-stochastic simulation-optimization model) is developed for planning EPS. • It can address uncertainties as fuzzy-boundary intervals and random variables. • FSSOM can satisfy peak-electricity demand and optimize power allocation. • Solutions under different probability levels and p-necessity levels are analyzed. • Results reveal the tradeoff between system cost and peak-electricity demand violation risk.

  17. Layout optimization of DRAM cells using rigorous simulation model for NTD

    Science.gov (United States)

    Jeon, Jinhyuck; Kim, Shinyoung; Park, Chanha; Yang, Hyunjo; Yim, Donggyu; Kuechler, Bernd; Zimmermann, Rainer; Muelders, Thomas; Klostermann, Ulrich; Schmoeller, Thomas; Do, Mun-hoe; Choi, Jung-Hoe

    2014-03-01

    scanning electron microscope (SEM) measurements. High resist impact and difficult model data acquisition demand a simulation model that is capable of extrapolating reliably beyond its calibration dataset. We use rigorous simulation models to provide that predictive performance. We have discussed the need for a rigorous mask optimization process for DRAM contact cell layout, yielding mask layouts that are optimal in process performance, mask manufacturability and accuracy. In this paper, we have shown the step-by-step process from analytical illumination source derivation, through NTD- and application-tailored model calibration, to layout optimization such as OPC and SRAF placement. Finally the work has been verified with simulation and experimental results on wafer.

  18. Node Depth Adjustment Based Target Tracking in UWSNs Using Improved Harmony Search

    Directory of Open Access Journals (Sweden)

    Meiqin Liu

    2017-12-01

    Underwater wireless sensor networks (UWSNs) can provide a promising solution to underwater target tracking. Due to the limited computation and bandwidth resources, only a small part of the nodes are selected to track the target at each interval. How to improve tracking accuracy with a small number of nodes is a key problem. In recent years, a node depth adjustment system has been developed and applied to issues of network deployment and routing protocol. As far as we know, all existing tracking schemes keep underwater nodes static or moving with water flow, and node depth adjustment has not been utilized for underwater target tracking yet. This paper studies a node depth adjustment method for target tracking in UWSNs. Firstly, since a Fisher Information Matrix (FIM) can quantify the estimation accuracy, its relation to node depth is derived as a metric. Secondly, we formulate node depth adjustment as an optimization problem to determine the moving depth of the activated nodes: under the constraint of the moving range, the value of the FIM is used as the objective function, which is minimized over the moving distances of the nodes. Thirdly, to efficiently solve the optimization problem, an improved Harmony Search (HS) algorithm is proposed, in which the generating probability is modified to improve searching speed and accuracy. Finally, simulation results are presented to verify the performance of our scheme.
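
    A minimal harmony search loop is sketched below to show the mechanics the abstract refers to: new solutions are drawn from harmony memory with a memory-considering probability, pitch-adjusted with some probability, or randomly re-initialized, and the worst harmony is replaced when the new one improves on it. The objective is a toy stand-in for the FIM-based depth criterion, and all constants (hmcr, par, bandwidth) are illustrative assumptions, not the paper's modified generating probability.

```python
# Minimal harmony search sketch (illustrative; the objective below is a toy
# stand-in for the FIM-based depth criterion). New solutions are drawn from
# harmony memory with probability `hmcr`, pitch-adjusted with probability
# `par`, or sampled at random otherwise.
import numpy as np

rng = np.random.default_rng(9)
dim, lower, upper = 4, -50.0, 50.0          # e.g. depth adjustments of 4 activated nodes

def objective(depths):                       # toy surrogate for the tracking-accuracy metric
    return np.sum((depths - np.array([10.0, -5.0, 20.0, 0.0])) ** 2)

hm_size, hmcr, par, bandwidth, n_iter = 20, 0.9, 0.3, 2.0, 5000
memory = rng.uniform(lower, upper, size=(hm_size, dim))
fitness = np.array([objective(h) for h in memory])

for _ in range(n_iter):
    new = np.empty(dim)
    for d in range(dim):
        if rng.random() < hmcr:                         # pick from harmony memory
            new[d] = memory[rng.integers(hm_size), d]
            if rng.random() < par:                      # pitch adjustment
                new[d] += rng.uniform(-bandwidth, bandwidth)
        else:                                           # random re-initialization
            new[d] = rng.uniform(lower, upper)
    new = np.clip(new, lower, upper)
    worst = int(np.argmax(fitness))
    val = objective(new)
    if val < fitness[worst]:                            # replace the worst harmony
        memory[worst], fitness[worst] = new, val

print("best depth adjustment:", memory[int(np.argmin(fitness))])
```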

  19. Stochastic search in structural optimization - Genetic algorithms and simulated annealing

    Science.gov (United States)

    Hajela, Prabhat

    1993-01-01

    An account is given of illustrative applications of genetic algorithms and simulated annealing methods in structural optimization. The advantages of such stochastic search methods over traditional mathematical programming strategies are emphasized; it is noted that these methods offer a significantly higher probability of locating the global optimum in a multimodal design space. Both genetic search and simulated annealing can be effectively used in problems with a mix of continuous, discrete, and integer design variables.
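
    For readers unfamiliar with the second method, a generic simulated-annealing loop for a small sizing problem is sketched below; the objective, penalty, move rule and cooling schedule are illustrative assumptions rather than anything taken from the paper.

```python
# Generic simulated annealing for a toy structural sizing problem (illustrative only).
import math, random

def simulated_annealing(objective, x0, step=0.1, t0=1.0, cooling=0.995, iters=5000, seed=1):
    random.seed(seed)
    x, fx = list(x0), objective(x0)
    best, fbest = list(x), fx
    t = t0
    for _ in range(iters):
        cand = [xi + random.uniform(-step, step) for xi in x]       # perturb design variables
        fc = objective(cand)
        if fc < fx or random.random() < math.exp(-(fc - fx) / t):   # Metropolis acceptance
            x, fx = cand, fc
            if fx < fbest:
                best, fbest = list(x), fx
        t *= cooling                                                # geometric cooling schedule
    return best, fbest

# Hypothetical 3-member sizing example: minimize a weight proxy with a penalty if any area < 1.
weight = lambda a: sum(a) + 100.0 * max(0.0, 1.0 - min(a))
print(simulated_annealing(weight, [2.0, 2.0, 2.0]))
```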

  20. Optimization of Gamma Knife treatment planning via guided evolutionary simulated annealing

    International Nuclear Information System (INIS)

    Zhang Pengpeng; Dean, David; Metzger, Andrew; Sibata, Claudio

    2001-01-01

    We present a method for generating optimized Gamma Knife™ (Elekta, Stockholm, Sweden) radiosurgery treatment plans. This semiautomatic method produces a highly conformal shot packing plan for the irradiation of an intracranial tumor. We simulate optimal treatment planning criteria with a probability function that is linked to every voxel in a volumetric (MR or CT) region of interest. This sigmoidal P+ parameter models the requirement of conformality (i.e., tumor ablation and normal tissue sparing). After determination of initial radiosurgery treatment parameters, a guided evolutionary simulated annealing (GESA) algorithm is used to find the optimal size, position, and weight for each shot. The three-dimensional GESA algorithm searches the shot parameter space more thoroughly than is possible during manual shot packing and provides one plan that is suitable to the treatment criteria of the attending neurosurgeon and radiation oncologist. The result is a more conformal plan, which also reduces redundancy and saves treatment administration time

  1. Optimal allocation of testing resources for statistical simulations

    Science.gov (United States)

    Quintana, Carolina; Millwater, Harry R.; Singh, Gulshan; Golden, Patrick

    2015-07-01

    Statistical estimates from simulation involve uncertainty caused by the variability in the input random variables due to limited data. Allocating resources to obtain more experimental data on the input variables, to better characterize their probability distributions, can reduce the variance of statistical estimates. The proposed methodology determines the optimal number of additional experiments required to minimize the variance of the output moments given single or multiple constraints. The method uses the multivariate t-distribution and the Wishart distribution to generate realizations of the population mean and covariance of the input variables, respectively, given an amount of available data. This method handles independent and correlated random variables. A particle swarm method is used for the optimization. The optimal number of additional experiments per variable depends on the number and variance of the initial data, the influence of the variable on the output function and the cost of each additional experiment. The methodology is demonstrated using a fretting fatigue example.
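
    A minimal sketch of this sampling step using SciPy (assumes SciPy ≥ 1.6): the degrees-of-freedom and scale choices below are common textbook ones rather than necessarily those of the paper, and the output function g is a hypothetical stand-in for the fretting-fatigue response.

```python
# Draw plausible population means and covariances for two correlated inputs given only
# n observed samples, then propagate through a hypothetical output function.
import numpy as np
from scipy.stats import multivariate_t, wishart

rng = np.random.default_rng(0)
data = rng.multivariate_normal([10.0, 5.0], [[1.0, 0.3], [0.3, 0.5]], size=15)  # limited data
n, xbar, S = len(data), data.mean(axis=0), np.cov(data, rowvar=False)

# Realizations of the population mean (multivariate t) and covariance (Wishart).
mean_draws = multivariate_t(loc=xbar, shape=S / n, df=n - 1).rvs(size=1000, random_state=rng)
cov_draws = wishart(df=n - 1, scale=S / (n - 1)).rvs(size=1000, random_state=rng)

# Hypothetical output g(x) = x1 - 2*x2 evaluated at the sampled means: the spread of g
# reflects how much the limited data inflate the variance of the output-moment estimate.
g = mean_draws[:, 0] - 2.0 * mean_draws[:, 1]
print(f"output-mean variance with n={n}: {g.var():.4f}")
print(f"mean sampled variance of x1: {cov_draws[:, 0, 0].mean():.3f}")
```

    Repeating the calculation with larger hypothetical n shows how additional experiments on a given variable shrink the output variance, which is the quantity the particle swarm allocates against.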

  2. Towards information-optimal simulation of partial differential equations.

    Science.gov (United States)

    Leike, Reimar H; Enßlin, Torsten A

    2018-03-01

    Most simulation schemes for partial differential equations (PDEs) focus on minimizing a simple error norm of a discretized version of a field. This paper takes a fundamentally different approach; the discretized field is interpreted as data providing information about a real physical field that is unknown. This information is sought to be conserved by the scheme as the field evolves in time. Such an information theoretic approach to simulation was pursued before by information field dynamics (IFD). In this paper we work out the theory of IFD for nonlinear PDEs in a noiseless Gaussian approximation. The result is an action that can be minimized to obtain an information-optimal simulation scheme. It can be brought into a closed form using field operators to calculate the appearing Gaussian integrals. The resulting simulation schemes are tested numerically in two instances for the Burgers equation. Their accuracy surpasses finite-difference schemes on the same resolution. The IFD scheme, however, has to be correctly informed on the subgrid correlation structure. In certain limiting cases we recover well-known simulation schemes like spectral Fourier-Galerkin methods. We discuss implications of the approximations made.

  3. Minimization of the LCA impact of thermodynamic cycles using a combined simulation-optimization approach

    International Nuclear Information System (INIS)

    Brunet, Robert; Cortés, Daniel; Guillén-Gosálbez, Gonzalo; Jiménez, Laureano; Boer, Dieter

    2012-01-01

    This work presents a computational approach for the simultaneous minimization of the total cost and environmental impact of thermodynamic cycles. Our method combines process simulation, multi-objective optimization and life cycle assessment (LCA) within a unified framework that identifies in a systematic manner optimal design and operating conditions according to several economic and LCA impacts. Our approach takes advantage of the complementary strengths of process simulation (in which mass and energy balances and thermodynamic calculations are implemented in an easy manner) and rigorous deterministic optimization tools. We demonstrate the capabilities of this strategy by means of two case studies in which we address the design of a 10 MW Rankine cycle modeled in Aspen Hysys, and a 90 kW ammonia-water absorption cooling cycle implemented in Aspen Plus. Numerical results show that it is possible to achieve environmental and cost savings using our rigorous approach. - Highlights: ► Novel framework for the optimal design of thermodynamic cycles. ► Combined use of simulation and optimization tools. ► Optimal design and operating conditions according to several economic and LCA impacts. ► Design of a 10 MW Rankine cycle in Aspen Hysys, and a 90 kW absorption cycle in Aspen Plus.

  4. Minimization of the LCA impact of thermodynamic cycles using a combined simulation-optimization approach

    Energy Technology Data Exchange (ETDEWEB)

    Brunet, Robert; Cortes, Daniel [Departament d'Enginyeria Quimica, Escola Tecnica Superior d'Enginyeria Quimica, Universitat Rovira i Virgili, Campus Sescelades, Avinguda Paisos Catalans 26, 43007 Tarragona (Spain); Guillen-Gosalbez, Gonzalo [Departament d'Enginyeria Quimica, Escola Tecnica Superior d'Enginyeria Quimica, Universitat Rovira i Virgili, Campus Sescelades, Avinguda Paisos Catalans 26, 43007 Tarragona (Spain); Jimenez, Laureano [Departament d'Enginyeria Quimica, Escola Tecnica Superior d'Enginyeria Quimica, Universitat Rovira i Virgili, Campus Sescelades, Avinguda Paisos Catalans 26, 43007 Tarragona (Spain); Boer, Dieter [Departament d'Enginyeria Mecanica, Escola Tecnica Superior d'Enginyeria, Universitat Rovira i Virgili, Campus Sescelades, Avinguda Paisos Catalans 26, 43007, Tarragona (Spain)]

    2012-12-15

    This work presents a computational approach for the simultaneous minimization of the total cost and environmental impact of thermodynamic cycles. Our method combines process simulation, multi-objective optimization and life cycle assessment (LCA) within a unified framework that identifies in a systematic manner optimal design and operating conditions according to several economic and LCA impacts. Our approach takes advantage of the complementary strengths of process simulation (in which mass and energy balances and thermodynamic calculations are implemented in an easy manner) and rigorous deterministic optimization tools. We demonstrate the capabilities of this strategy by means of two case studies in which we address the design of a 10 MW Rankine cycle modeled in Aspen Hysys, and a 90 kW ammonia-water absorption cooling cycle implemented in Aspen Plus. Numerical results show that it is possible to achieve environmental and cost savings using our rigorous approach. - Highlights: ► Novel framework for the optimal design of thermodynamic cycles. ► Combined use of simulation and optimization tools. ► Optimal design and operating conditions according to several economic and LCA impacts. ► Design of a 10 MW Rankine cycle in Aspen Hysys, and a 90 kW absorption cycle in Aspen Plus.

  5. Motor models and transient analysis for high-temperature, superconductor switch-based adjustable speed drive applications. Final report

    International Nuclear Information System (INIS)

    Bailey, J.M.

    1996-06-01

    New high-temperature superconductor (HTSC) technology may allow development of an energy-efficient power electronics switch for adjustable speed drive (ASD) applications involving variable-speed motors, superconducting magnetic energy storage systems, and other power conversion equipment. This project developed a motor simulation module for determining optimal applications of HTSC-based power switches in ASD systems

  6. Multi-time scale Climate Informed Stochastic Hybrid Simulation-Optimization Model (McISH model) for Multi-Purpose Reservoir System

    Science.gov (United States)

    Lu, M.; Lall, U.

    2013-12-01

    decadal flow simulations are re-initialized every year with updated climate projections to improve the reliability of the operation rules for the next year, within which the seasonal operation strategies are nested. The multi-level structure can be repeated for monthly operation with weekly subperiods to take advantage of evolving weather forecasts and seasonal climate forecasts. As a result of the hierarchical structure, updates and adjustments at sub-seasonal and even weather time scales can be achieved. Given an ensemble of these scenarios, the McISH reservoir simulation-optimization model is able to derive the desired reservoir storage levels, including minimum and maximum, as a function of calendar date, and the associated release patterns. The multi-time scale approach allows adaptive management of water supplies acknowledging the changing risks, meeting the objectives over the decade in expected value while controlling the near-term and planning-period risk through probabilistic reliability constraints. For the applications presented, the target season is the monsoon season from June to September. The model also includes a monthly flood volume forecast model, based on a Copula density fit to the monthly flow and the flood volume flow. This is used to guide dynamic allocation of the flood control volume given the forecasts.

  7. Optimization analysis of thermal management system for electric vehicle battery pack

    Science.gov (United States)

    Gong, Huiqi; Zheng, Minxin; Jin, Peng; Feng, Dong

    2018-04-01

    Temperature rise in an electric vehicle battery pack affects the power battery system's cycle life, chargeability, power, energy, safety and reliability. Computational Fluid Dynamics simulations and experiments on the charging and discharging process of the battery pack were carried out for the thermal management system under continuous charging. The simulation results and the experimental data were used to verify the rationality of the Computational Fluid Dynamics calculation model. In view of the large temperature difference across the battery module in a high-temperature environment, three optimization methods for the existing thermal management system of the battery pack were put forward: adjusting the installation position of the fan, optimizing the arrangement of the battery pack, and reducing the fan opening temperature threshold. The feasibility of the optimization methods is proved by simulation and experiment on the thermal management system of the optimized battery pack.

  8. Multiobjective Shape Optimization for Deployment and Adjustment Properties of Cable-Net of Deployable Antenna

    Directory of Open Access Journals (Sweden)

    Guoqiang You

    2015-01-01

    Full Text Available Based on the structural features of the cable-net of a deployable antenna, a multiobjective shape optimization method is proposed to help engineer a cable-net structure with better deployment and adjustment properties. In this method, the multiobjective optimization model is built with the locations of the lower nodes of the cable-net as variables, the average stress ratio of the cable elements and the strain energy as objectives, and the surface precision and natural frequency of the cable-net as constraints. A sequential quadratic programming method is used to solve this nonlinear model under different weighting coefficients, and the results show the validity and effectiveness of the proposed method and model.
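
    A minimal sketch of the weighted-sum formulation solved with an SQP-type method (SciPy's SLSQP stands in for the SQP solver); the stress-ratio, strain-energy and precision functions, the node count, bounds and weights below are hypothetical placeholders for the real cable-net model.

```python
# Weighted-sum SQP sketch for a two-objective, constrained shape optimization (illustrative).
import numpy as np
from scipy.optimize import minimize

def avg_stress_ratio(z):        # placeholder for the average cable stress ratio
    return np.mean((z - 1.0) ** 2)

def strain_energy(z):           # placeholder for the cable-net strain energy
    return np.sum(z ** 2)

def surface_precision(z):       # placeholder constraint: RMS error must stay below 0.05
    return 0.05 - np.std(z)

w1, w2 = 0.6, 0.4               # weighting coefficients of the two objectives
z0 = np.full(6, 0.5)            # initial lower-node heights (hypothetical, 6 nodes)

res = minimize(lambda z: w1 * avg_stress_ratio(z) + w2 * strain_energy(z),
               z0, method="SLSQP",
               bounds=[(0.0, 2.0)] * len(z0),
               constraints=[{"type": "ineq", "fun": surface_precision}])
print(res.x, res.fun)
```

    Sweeping the weighting coefficients and re-solving traces out an approximation of the Pareto front between the two objectives.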

  9. Simulation and optimization of stable isotope 18O separation by water vacuum distillation

    International Nuclear Information System (INIS)

    Chen Yuyan; Qin Chuanjiang; Xiao Bin; Xu Jing'an

    2012-01-01

    In this research, a stable isotope 18O separation column was set up for water vacuum distillation, with a packing height of 20 m and a column diameter of 0.1 m. The self-developed special packing, named PAC-18O, was packed inside the column. Firstly, a model was created using the Aspen Plus software, and the simulation results were validated against test results. Secondly, a group of simulation results was generated with Aspen Plus, and the optimal operating conditions were obtained by using an artificial neural network (ANN) and the Statistica software. Considering the combined effects of column pressure and withdrawing velocity, conclusions were reached on their impact on the abundance of the isotope 18O. The final results show that the abundance of the isotope 18O increases as the column pressure drops and the withdrawing velocity decreases. Besides, the optimal column pressure and the relation between the abundance of the isotope 18O and the withdrawing velocity were obtained. The conclusion is that this simulation and optimization method can be applied to 18O industrial design and could become common practice in traditional distillation processes to realize optimization design. (authors)

  10. Applied simulation and optimization : in logistics, industrial and aeronautical practice

    NARCIS (Netherlands)

    Mujica Mota, Miguel; De la Mota, Idalia Flores; Guimarans Serrano, Daniel

    2015-01-01

    Presenting techniques, case studies and methodologies that combine simulation approaches with optimization techniques for tackling problems in manufacturing, logistics, or aeronautics, this book provides solutions to common industrial problems in several fields, which range from

  11. Hydrogen production by onboard gasoline processing – Process simulation and optimization

    Energy Technology Data Exchange (ETDEWEB)

    Bisaria, Vega; Smith, R.J. Byron,

    2013-12-15

    Highlights: • Process flow sheet for an onboard fuel processor for 100 kW fuel cell output was simulated. • Gasoline fuel requirement was found to be 30.55 kg/hr. • The fuel processor efficiency was found to be 95.98%. • A heat-integrated optimum flow sheet was developed. - Abstract: Fuel cell vehicles have reached the commercialization stage and hybrid vehicles are already on the road. While hydrogen storage and infrastructure remain critical issues in stand-alone commercialization of the technology, researchers are developing onboard fuel processors, which can convert a variety of fuels into hydrogen to power these fuel cell vehicles. The feasibility study of a 100 kW onboard fuel processor based on gasoline fuel is carried out using process simulation. The steady-state model has been developed with the help of Aspen HYSYS to analyze the fuel processor and total system performance. The components of the fuel processor are the fuel reforming unit, the CO clean-up unit and auxiliary units. Optimization studies were carried out by analyzing the influence of various operating parameters, such as the oxygen-to-carbon ratio, the steam-to-carbon ratio, temperature and pressure, on the process equipment. From the steady-state model optimization using Aspen HYSYS, an optimized reaction composition in terms of hydrogen production and carbon monoxide concentration corresponds to an oxygen-to-carbon ratio of 0.5 and a steam-to-carbon ratio of 0.5. A fuel processor efficiency of 95.98% is obtained under these optimized conditions. The heat integration of the system using the composite curve, grand composite curve and utility composite curve was studied. The most appropriate heat exchanger network was chosen from the generated ones and incorporated into the optimized flow sheet of the 100 kW fuel processor. A completely heat-integrated 100 kW fuel processor flow sheet using gasoline as fuel was thus successfully simulated and optimized.

  12. Hydrogen production by onboard gasoline processing – Process simulation and optimization

    International Nuclear Information System (INIS)

    Bisaria, Vega; Smith, R.J. Byron

    2013-01-01

    Highlights: • Process flow sheet for an onboard fuel processor for 100 kW fuel cell output was simulated. • Gasoline fuel requirement was found to be 30.55 kg/hr. • The fuel processor efficiency was found to be 95.98%. • A heat-integrated optimum flow sheet was developed. - Abstract: Fuel cell vehicles have reached the commercialization stage and hybrid vehicles are already on the road. While hydrogen storage and infrastructure remain critical issues in stand-alone commercialization of the technology, researchers are developing onboard fuel processors, which can convert a variety of fuels into hydrogen to power these fuel cell vehicles. The feasibility study of a 100 kW onboard fuel processor based on gasoline fuel is carried out using process simulation. The steady-state model has been developed with the help of Aspen HYSYS to analyze the fuel processor and total system performance. The components of the fuel processor are the fuel reforming unit, the CO clean-up unit and auxiliary units. Optimization studies were carried out by analyzing the influence of various operating parameters, such as the oxygen-to-carbon ratio, the steam-to-carbon ratio, temperature and pressure, on the process equipment. From the steady-state model optimization using Aspen HYSYS, an optimized reaction composition in terms of hydrogen production and carbon monoxide concentration corresponds to an oxygen-to-carbon ratio of 0.5 and a steam-to-carbon ratio of 0.5. A fuel processor efficiency of 95.98% is obtained under these optimized conditions. The heat integration of the system using the composite curve, grand composite curve and utility composite curve was studied. The most appropriate heat exchanger network was chosen from the generated ones and incorporated into the optimized flow sheet of the 100 kW fuel processor. A completely heat-integrated 100 kW fuel processor flow sheet using gasoline as fuel was thus successfully simulated and optimized

  13. 3rd International Workshop on Advances in Simulation-Driven Optimization and Modeling

    CERN Document Server

    Leifsson, Leifur; Yang, Xin-She

    2016-01-01

    This edited volume is devoted to the now-ubiquitous use of computational models across most disciplines of engineering and science, led by a trio of world-renowned researchers in the field. Focused on recent advances of modeling and optimization techniques aimed at handling computationally-expensive engineering problems involving simulation models, this book will be an invaluable resource for specialists (engineers, researchers, graduate students) working in areas as diverse as electrical engineering, mechanical and structural engineering, civil engineering, industrial engineering, hydrodynamics, aerospace engineering, microwave and antenna engineering, ocean science and climate modeling, and the automotive industry, where design processes are heavily based on CPU-heavy computer simulations. Various techniques, such as knowledge-based optimization, adjoint sensitivity techniques, and fast replacement models (to name just a few) are explored in-depth along with an array of the latest techniques to optimize the...

  14. Simulation based optimization on automated fibre placement process

    Science.gov (United States)

    Lei, Shi

    2018-02-01

    In this paper, a software-simulation-based method (Autodesk TruPlan & TruFiber) is proposed to optimize the automated fibre placement (AFP) process. Different types of manufacturability analysis are introduced to predict potential defects. Advanced fibre path generation algorithms are compared with respect to geometrically different parts. Major manufacturing data have been taken into consideration prior to tool path generation to achieve a high success rate of manufacturing.

  15. Simulation and optimization of logistics distribution for an engine production line

    Energy Technology Data Exchange (ETDEWEB)

    Song, L.; Jin, S.; Tang, P.

    2016-07-01

    In order to analyze the factors affecting the logistics distribution system, solve out-of-stock problems on the production line and improve the efficiency of the assembly line, an optimization scheme for the distribution system was put forward using industrial engineering methods. The simulation model of the logistics distribution system for the engine assembly line was built based on Witness software. The optimization plan is effective in improving logistics distribution efficiency and assembly line efficiency and in reducing production line storage. Based on the study of the modeling and simulation of the engine production logistics distribution system, the results reflect several influence factors of the production logistics system, which has reference value for improving the efficiency of the production line. (Author)

  16. Energy and Delay Optimization of Heterogeneous Multicore Wireless Multimedia Sensor Nodes by Adaptive Genetic-Simulated Annealing Algorithm

    Directory of Open Access Journals (Sweden)

    Xing Liu

    2018-01-01

    Full Text Available Energy efficiency and delay optimization are significant for the proliferation of wireless multimedia sensor networks (WMSNs). In this article, an energy-efficient, delay-efficient, hardware and software co-optimization platform is researched to minimize the energy cost while guaranteeing the deadline of real-time WMSN tasks. First, a multicore reconfigurable WMSN hardware platform is designed and implemented. This platform uses both a heterogeneous multicore architecture and the dynamic voltage and frequency scaling (DVFS) technique. By this means, the nodes can adjust the hardware characteristics dynamically in terms of the software run-time contexts. Consequently, the software can be executed more efficiently with less energy cost and shorter execution time. Then, based on this hardware platform, an energy and delay multiobjective optimization algorithm and a DVFS adaptation algorithm are investigated. These algorithms aim to search out the global energy optimization solution within an acceptable calculation time and to strip the time redundancy from the task execution process. Thus, the energy efficiency of the WMSN node can be improved significantly even under strict constraints on the execution time. Simulation and real-world experiments proved that the proposed approaches can decrease the energy cost by more than 29% compared to a traditional single-core WMSN node. Moreover, the node can react quickly to time-sensitive events.

  17. Two-dimensional pixel image lag simulation and optimization in a 4-T CMOS image sensor

    Energy Technology Data Exchange (ETDEWEB)

    Yu Junting; Li Binqiao; Yu Pingping; Xu Jiangtao [School of Electronics Information Engineering, Tianjin University, Tianjin 300072 (China); Mou Cun, E-mail: xujiangtao@tju.edu.c [Logistics Management Office, Hebei University of Technology, Tianjin 300130 (China)

    2010-09-15

    Pixel image lag in a 4-T CMOS image sensor is analyzed and simulated in a two-dimensional model. Strategies for reducing image lag are discussed in terms of transfer gate channel threshold voltage doping adjustment, PPD N-type doping dose/implant tilt adjustment and transfer gate operation voltage adjustment for signal electron transfer. With the computer analysis tool ISE-TCAD, simulation results show that minimum image lag can be obtained at a pinned photodiode N-type doping dose of 7.0 × 10¹² cm⁻², an implant tilt of −2°, a transfer gate channel doping dose of 3.0 × 10¹² cm⁻² and an operation voltage of 3.4 V. The conclusions of this theoretical analysis can be a guideline for pixel design to improve the performance of 4-T CMOS image sensors. (semiconductor devices)

  18. Optimal Acceleration-Velocity-Bounded Trajectory Planning in Dynamic Crowd Simulation

    Directory of Open Access Journals (Sweden)

    Fu Yue-wen

    2014-01-01

    Full Text Available Creating complex and realistic crowd behaviors, such as pedestrian navigation behavior with dynamic obstacles, is a difficult and time-consuming task. In this paper, we study one special type of crowd, which is composed of urgent individuals, normal individuals, and normal groups. We use three steps to construct the crowd simulation in a dynamic environment. In the first step, the urgent individuals move forward along a given path around dynamic obstacles and other crowd members. An optimal acceleration-velocity-bounded trajectory planning method is utilized to model their behaviors, which ensures that the durations of the generated trajectories are minimal and that the urgent individuals are collision-free with dynamic obstacles (e.g., dynamic vehicles). In the second step, a pushing model is adopted to simulate the interactions between urgent members and normal ones, which ensures that the computational cost of the optimal trajectory planning is acceptable. The third step imitates the interactions among normal members using collision avoidance behavior and flocking behavior. Various simulation results demonstrate that these three steps produce realistic crowd phenomena just like the real world.

  19. Using simulation-optimization techniques to improve multiphase aquifer remediation

    Energy Technology Data Exchange (ETDEWEB)

    Finsterle, S.; Pruess, K. [Lawrence Berkeley Laboratory, Berkeley, CA (United States)

    1995-03-01

    The T2VOC computer model for simulating the transport of organic chemical contaminants in non-isothermal multiphase systems has been coupled to the ITOUGH2 code, which solves parameter optimization problems. This allows one to use linear programming and simulated annealing techniques to solve groundwater management problems, i.e. the optimization of operations for multiphase aquifer remediation. A cost function has to be defined, containing the actual and hypothetical expenses of a cleanup operation, which depend - directly or indirectly - on the state variables calculated by T2VOC. Subsequently, the code iteratively determines a remediation strategy (e.g. pumping schedule) which minimizes, for instance, pumping and energy costs, the time for cleanup, and residual contamination. We present an illustrative sample problem to discuss potential applications of the code. The study shows that the techniques developed for estimating model parameters can be successfully applied to the solution of remediation management problems. The resulting optimum pumping scheme depends, however, on the formulation of the remediation goals and the relative weighting between individual terms of the cost function.

  20. Simulation Modeling to Compare High-Throughput, Low-Iteration Optimization Strategies for Metabolic Engineering.

    Science.gov (United States)

    Heinsch, Stephen C; Das, Siba R; Smanski, Michael J

    2018-01-01

    Increasing the final titer of a multi-gene metabolic pathway can be viewed as a multivariate optimization problem. While numerous multivariate optimization algorithms exist, few are specifically designed to accommodate the constraints posed by genetic engineering workflows. We present a strategy for optimizing expression levels across an arbitrary number of genes that requires few design-build-test iterations. We compare the performance of several optimization algorithms on a series of simulated expression landscapes. We show that optimal experimental design parameters depend on the degree of landscape ruggedness. This work provides a theoretical framework for designing and executing numerical optimization on multi-gene systems.

  1. [Numerical simulation and operation optimization of biological filter].

    Science.gov (United States)

    Zou, Zong-Sen; Shi, Han-Chang; Chen, Xiang-Qiang; Xie, Xiao-Qing

    2014-12-01

    BioWin software and two sensitivity analysis methods were used to simulate the Denitrification Biological Filter (DNBF) + Biological Aerated Filter (BAF) process in the Yuandang Wastewater Treatment Plant. Based on the BioWin model of the DNBF + BAF process, operation data from September 2013 were used for sensitivity analysis and model calibration, and operation data from October 2013 were used for model validation. The results indicated that the calibrated model could accurately simulate practical DNBF + BAF processes, and that the most sensitive parameters were those related to biofilm, OHOs and aeration. After calibration and validation, the model was used for process optimization by simulating operation results under different conditions. The results showed that the best operating condition for discharge standard B was: reflux ratio = 50%, no methanol addition, influent C/N = 4.43; while the best operating condition for discharge standard A was: reflux ratio = 50%, influent COD = 155 mg·L⁻¹ after methanol addition, influent C/N = 5.10.

  2. Simulation and optimization of fractional crystallization processes

    DEFF Research Database (Denmark)

    Thomsen, Kaj; Rasmussen, Peter; Gani, Rafiqul

    1998-01-01

    A general method for the calculation of various types of phase diagrams for aqueous electrolyte mixtures is outlined. It is shown how the thermodynamic equilibrium precipitation process can be used to satisfy the operational needs of industrial crystallizer/centrifuge units. Examples of simulation and optimization of fractional crystallization processes are shown. In one of these examples, a process with multiple steady states is analyzed. The thermodynamic model applied for describing the highly non-ideal aqueous electrolyte systems is the Extended UNIQUAC model. (C) 1998 Published by Elsevier Science Ltd.

  3. Hedging Rules for Water Supply Reservoir Based on the Model of Simulation and Optimization

    Directory of Open Access Journals (Sweden)

    Yi Ji

    2016-06-01

    Full Text Available This study proposes a hedging rule model which is composed of a two-period reservoir operation model considering the damage depth and a hedging rule parameter optimization model. The former solves the hedging rules based on a given period's water supply weighting factor and carryover storage target, while the latter optimization model is used to optimize the weighting factor and carryover storage target based on the hedging rules. The coupled model gives the optimal period's water supply weighting factor and carryover storage target to guide release. The conclusions achieved from this study are as follows: (1) the water supply weighting factor and carryover storage target have a direct impact on the three elements of the hedging rule; (2) the parameters can guide reservoirs to supply water reasonably after optimization with the simulation and optimization model; and (3) in order to verify the utility of the hedging rule, the Heiquan reservoir is used as a case study and a particle swarm optimization algorithm with a simulation model is adopted for optimizing the parameters. The results show that the proposed hedging rule can improve the operational performance of the water supply reservoir.
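
    A bare-bones particle-swarm sketch of the parameter-optimization step, with a stand-in objective in place of the reservoir simulation model; the two decision variables (weighting factor and carryover storage target), their bounds and the PSO settings are illustrative assumptions, not values from the study.

```python
# Toy PSO loop for tuning two hedging-rule parameters (illustrative only).
import numpy as np

def simulate_shortage_index(params):        # placeholder for the reservoir simulation model
    w, s = params
    return (w - 0.7) ** 2 + (s - 120.0) ** 2 / 1e4

def pso(obj, lo, hi, n=30, iters=200, w_in=0.7, c1=1.5, c2=1.5, seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = np.array(lo, float), np.array(hi, float)
    x = rng.uniform(lo, hi, (n, len(lo)))                   # particle positions
    v = np.zeros_like(x)                                    # particle velocities
    pbest, pval = x.copy(), np.array([obj(p) for p in x])
    g = pbest[np.argmin(pval)]                              # global best
    for _ in range(iters):
        v = w_in * v + c1 * rng.random(x.shape) * (pbest - x) + c2 * rng.random(x.shape) * (g - x)
        x = np.clip(x + v, lo, hi)
        val = np.array([obj(p) for p in x])
        better = val < pval
        pbest[better], pval[better] = x[better], val[better]
        g = pbest[np.argmin(pval)]
    return g, pval.min()

# Hypothetical bounds: weighting factor in [0, 1], carryover target in [50, 200] (volume units).
print(pso(simulate_shortage_index, lo=[0.0, 50.0], hi=[1.0, 200.0]))
```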

  4. Simulation-Based Planning of Optimal Conditions for Industrial Computed Tomography

    DEFF Research Database (Denmark)

    Reisinger, S.; Kasperl, S.; Franz, M.

    2011-01-01

    We present a method to optimise conditions for industrial computed tomography (CT). This optimisation is based on a deterministic simulation. Our algorithm finds task-specific CT equipment settings to achieve optimal exposure parameters by means of an STL-model of the specimen and a raytracing...

  5. Effective Energy Simulation and Optimal Design of Side-lit Buildings with Venetian Blinds

    Science.gov (United States)

    Cheng, Tian

    Venetian blinds are popularly used in buildings to control the amount of incoming daylight for improving visual comfort and reducing heat gains in air-conditioning systems. Studies have shown that proper design and operation of window systems can result in significant energy savings in both lighting and cooling. However, there is currently no convenient computer tool that allows effective and efficient optimization of the envelope of side-lit buildings with blinds. Three computer tools widely used for this purpose, Adeline, DOE2 and EnergyPlus, have been experimentally examined in this study. Results indicate that the first two tools give unacceptable accuracy due to the unrealistic assumptions adopted, while the last one may generate large errors in certain conditions. Moreover, current computer tools have to conduct hourly energy simulations, which are not necessary for life-cycle energy analysis and optimal design, to provide annual cooling loads. This is not computationally efficient and is particularly unsuitable for optimal design of a building at the initial stage, because the impacts of many design variations and optional features have to be evaluated. A methodology is therefore developed for efficient and effective thermal and daylighting simulations and optimal design of buildings with blinds. Based on geometric optics and the radiosity method, a mathematical model is developed to reasonably simulate the daylighting behavior of venetian blinds. Indoor illuminance at any reference point can be directly and efficiently computed. The models have been validated with both experiments and simulations with Radiance. Validation results show that indoor illuminances computed by the new models agree well with the measured data, and the accuracy provided by them is equivalent to that of Radiance. The computational efficiency of the new models is much higher than that of Radiance as well as EnergyPlus. Two new methods are developed for the thermal simulation of buildings. A

  6. Optimal control and quantum simulations in superconducting quantum devices

    Energy Technology Data Exchange (ETDEWEB)

    Egger, Daniel J.

    2014-10-31

    Quantum optimal control theory is the science of steering quantum systems. In this thesis we show how to overcome the obstacles in implementing optimal control for superconducting quantum bits, a promising candidate for the creation of a quantum computer. Building such a device will require the tools of optimal control. We develop pulse shapes to solve a frequency crowding problem and create controlled-Z gates. A methodology is developed for the optimisation towards a target non-unitary process. We show how to tune up control pulses for a generic quantum system in an automated way using a combination of open- and closed-loop optimal control. This will help the scaling of quantum technologies, since algorithms can calibrate control pulses far more efficiently than humans. Additionally, we show how circuit QED can be brought to the novel regime of multi-mode ultrastrong coupling using a left-handed transmission line coupled to a right-handed one. We then propose to use this system as an analogue quantum simulator for the spin-boson model to show how dissipation arises in quantum systems.

  7. An Optimized Parallel FDTD Topology for Challenging Electromagnetic Simulations on Supercomputers

    Directory of Open Access Journals (Sweden)

    Shugang Jiang

    2015-01-01

    Full Text Available It may not be a challenge to run a Finite-Difference Time-Domain (FDTD) code for electromagnetic simulations on a supercomputer with more than 10 thousand CPU cores; however, making the FDTD code work with the highest efficiency is a challenge. In this paper, the performance of parallel FDTD is optimized through the MPI (message passing interface) virtual topology, based on which a communication model is established. The general rules of optimal topology are presented according to the model. The performance of the method is tested and analyzed on three high-performance computing platforms with different architectures in China. Simulations including an airplane with a 700-wavelength wingspan and a complex microstrip antenna array with nearly 2000 elements are performed very efficiently using a maximum of 10240 CPU cores.
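
    A minimal mpi4py sketch of setting up an MPI Cartesian virtual topology for a 3-D FDTD domain decomposition; MPI.Compute_dims only gives MPI's default process grid here, whereas the paper derives the optimal grid from its communication model, and the halo-exchange step is merely indicated in a comment.

```python
# Build a 3-D Cartesian virtual topology and look up face neighbours for halo exchange.
from mpi4py import MPI

comm = MPI.COMM_WORLD
dims = MPI.Compute_dims(comm.Get_size(), [0, 0, 0])      # default split, e.g. 10240 ranks -> 3-D grid
cart = comm.Create_cart(dims=dims, periods=[False] * 3, reorder=True)

coords = cart.Get_coords(cart.Get_rank())                # this rank's position in the grid
neighbours = {axis: cart.Shift(axis, 1) for axis in range(3)}   # (source, dest) per axis

if cart.Get_rank() == 0:
    print("process grid:", dims)
# Each rank would exchange E/H-field halos with neighbours[axis] every FDTD time step.
```

    The optimization described in the abstract amounts to choosing the dims (and rank reordering) so that the per-step halo traffic matches the machine's network architecture rather than accepting the default split.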

  8. Simulation and optimization of logistics distribution for an engine production line

    Directory of Open Access Journals (Sweden)

    Lijun Song

    2016-02-01

    Full Text Available Purpose: To analyze and study the factors affecting the logistics distribution system, solve out-of-stock problems on the production line and improve the efficiency of the assembly line. Design/methodology/approach: Using industrial engineering methods, an optimization scheme for the distribution system was put forward. The simulation model of the logistics distribution system for the engine assembly line was built based on Witness software. Findings: The optimization plan is effective in improving logistics distribution efficiency and assembly line efficiency and in reducing production line storage. Originality/value: Based on the study of the modeling and simulation of the engine production logistics distribution system, the results reflect several influence factors of the production logistics system, which has reference value for improving the efficiency of the production line.

  9. A Depth-Adjustment Deployment Algorithm Based on Two-Dimensional Convex Hull and Spanning Tree for Underwater Wireless Sensor Networks

    Directory of Open Access Journals (Sweden)

    Peng Jiang

    2016-07-01

    Full Text Available Most of the existing node depth-adjustment deployment algorithms for underwater wireless sensor networks (UWSNs) only consider how to optimize the network coverage and connectivity rate. However, these studies do not discuss full network connectivity, while optimization of network energy efficiency and network reliability are vital topics for UWSN deployment. Therefore, in this study, a depth-adjustment deployment algorithm based on a two-dimensional (2D) convex hull and spanning tree (NDACS) for UWSNs is proposed. First, the proposed algorithm uses the geometric characteristics of a 2D convex hull and empty circle to find the optimal location of a sleep node and activate it, minimizing the network coverage overlaps in the 2D plane, and then increases the coverage rate until the first-layer coverage threshold is reached. Second, the sink node acts as the root node of all active nodes on the 2D convex hull and gradually forms a small spanning tree. Finally, a depth-adjustment strategy based on a time marker is used to achieve the three-dimensional overall network deployment. Compared with existing depth-adjustment deployment algorithms, the simulation results show that the NDACS algorithm can maintain full network connectivity with a high network coverage rate, as well as an improved network average node degree, thus increasing network reliability.

  10. Time-optimal control of reactor power

    International Nuclear Information System (INIS)

    Bernard, J.A.

    1987-01-01

    Control laws that permit adjustments in reactor power to be made in minimum time and without overshoot have been formulated and demonstrated. These control laws, which are derived from the standard and alternate dynamic period equations, are closed-form expressions of general applicability. They were deduced by noting that if a system is subject to one or more operating constraints, then the time-optimal response is to move the system along these constraints. Given that nuclear reactors are subject to limitations on the allowed reactor period, a time-optimal control law would step the period from infinity to the minimum allowed value, hold the period at that value for the duration of the transient, and then step the period back to infinity. The change in reactor power would therefore be accomplished in minimum time. The resulting control laws are superior to other forms of time-optimal control because they are general-purpose, closed-form expressions that are both mathematically tractable and readily implemented. Moreover, these laws include provisions for the use of feedback. The results of simulation studies and actual experiments on the 5 MWt MIT Research Reactor, in which these time-optimal control laws were used successfully to adjust the reactor power, are presented

  11. Modeling and analysis of a decentralized electricity market: An integrated simulation/optimization approach

    International Nuclear Information System (INIS)

    Sarıca, Kemal; Kumbaroğlu, Gürkan; Or, Ilhan

    2012-01-01

    In this study, a model is developed to investigate the implications of an hourly day-ahead competitive power market on generator profits, electricity prices, availability and supply security. An integrated simulation/optimization approach is employed, integrating a multi-agent simulation model with two alternative optimization models. The simulation model represents interactions between power generator, system operator, power user and power transmitter agents, while the network flow optimization model oversees and optimizes the electricity flows and dispatches generators based on two alternative approaches used in the modeling of the underlying transmission network: a linear minimum-cost network flow model and a non-linear alternating current optimal power flow model. Supply, demand, transmission, capacity and other technological constraints are thereby enforced. The transmission network, on which the scenario analyses are carried out, includes 30 buses, 41 lines, 9 generators, and 21 power users. The scenarios examined in the analysis cover various settings of transmission line capacities/fees and hourly learning algorithms. Results provide insight into key behavioral and structural aspects of a decentralized electricity market under network constraints and reveal the importance of using an AC network instead of a simplified linear network flow approach. -- Highlights: ► An agent-based simulation model with an AC transmission environment and a day-ahead market. ► Physical network parameters have dramatic effects on price levels and stability. ► Due to the AC nature of the transmission network, adaptive agents have more local market power than under the minimum-cost network flow model. ► The behavior of the generators has a significant effect on market price formation, as pointed out by the bidding strategies. ► Transmission line capacity and fee policies are found to be very effective in price formation in the market.

  12. On-line Optimization-Based Simulators for Fractured and Non-fractured Reservoirs

    Energy Technology Data Exchange (ETDEWEB)

    Milind D. Deo

    2005-08-31

    Oil field development is a multi-million dollar business. Reservoir simulation is often used to guide the field management and development process. Reservoir characterization and geologic modeling tools have become increasingly sophisticated. As a result, the geologic models produced are complex. Most reservoirs are fractured to a certain extent. The new geologic characterization methods are making it possible to map features such as faults and fractures field-wide. Significant progress has been made in being able to predict properties of the faults and of the fractured zones. Traditionally, finite-difference methods have been employed in discretizing the domains created by geologic means. For complex geometries, finite-element methods of discretization may be more suitable. Since reservoir simulation is a mature science, some of the advances in numerical methods (linear and nonlinear solvers and parallel computing) have not been fully realized in the implementation of most of the simulators. The purpose of this project was to address some of these issues. • One of the goals of this project was to develop a series of finite-element simulators to handle problems of complex geometry, including systems containing faults and fractures. • The idea was to incorporate the most modern computing tools: modular object-oriented computer languages, the most sophisticated linear and nonlinear solvers, parallel computing methods and good visualization tools. • One of the tasks of the project was also to demonstrate the construction of fractures and faults in a reservoir using the available data and to assign properties to these features. • Once the reservoir model is in place, it is desirable to find the operating conditions which would provide the best reservoir performance. This can be accomplished by utilizing optimization tools and coupling them with reservoir simulation. Optimization-based reservoir simulation was one of the

  13. D1+ Simulator: A cost and risk optimized approach to nuclear power plant simulator modernization

    International Nuclear Information System (INIS)

    Wischert, W.

    2006-01-01

    The D1-Simulator has been operated by Kraftwerks-Simulator-Gesellschaft (KSG) and Gesellschaft für Simulatorschulung (GfS) at the Simulator Centre in Essen since 1977. The full-scope control room training simulator, used for Kernkraftwerk Biblis (KWB), is based on a PDP-11 hardware platform and is mainly programmed in ASSEMBLER language. The simulator has maintained a continuously high availability of operation throughout the years due to specialized hardware and software support from the KSG maintenance team. Nevertheless, the D1-Simulator increasingly reveals limitations with respect to computer capacity and spares and suffers progressively from the non-availability of hardware replacement materials. In order to ensure long-term maintainability within the framework of the consensus on nuclear energy, a 2-year refurbishing program has been launched by KWB focusing on quality and budgetary aspects. The so-called D1+ Simulator project is based on the re-use of validated data from existing simulators. Allowing for flexible project management methods, the project outlines a cost- and risk-optimized approach to Nuclear Power Plant (NPP) simulator modernization. The D1+ Simulator is being built by KSG/GfS in close collaboration with KWB and the simulator vendor THALES by re-using a modern hardware and software development environment from the D56-Simulator, used by Kernkraftwerk Obrigheim (KWO) before its decommissioning in 2005. The simulator project, launched in 2004, is expected to be completed by the end of 2006. (author)

  14. Development of free-piston Stirling engine performance and optimization codes based on Martini simulation technique

    Science.gov (United States)

    Martini, William R.

    1989-01-01

    A FORTRAN computer code is described that could be used to design and optimize a free-displacer, free-piston Stirling engine similar to the RE-1000 engine made by Sunpower. The code contains options for specifying displacer and power piston motion or for allowing these motions to be calculated by a force balance. The engine load may be a dashpot, inertial compressor, hydraulic pump or linear alternator. Cycle analysis may be done by isothermal analysis or adiabatic analysis. Adiabatic analysis may be done using the Martini moving gas node analysis or the Rios second-order Runge-Kutta analysis. Flow loss and heat loss equations are included. Graphical displays of engine motions, pressures and temperatures are included. Programming for optimizing up to 15 independent dimensions is included. Sample performance results are shown for both specified and unconstrained piston motions; these results are shown as generated by each of the two Martini analyses. Two sample optimization searches are shown using specified piston motion isothermal analysis: one for three adjustable inputs and one for four. Also, two optimization searches for calculated piston motion are presented, for three and for four adjustable inputs. The effect of leakage is evaluated. Suggestions for further work are given.

  15. Simulation studies for optimizing the trigger generation criteria for the TACTIC telescope

    International Nuclear Information System (INIS)

    Koul, M.K.; Tickoo, A.K.; Dhar, V.K.; Venugopal, K.; Chanchalani, K.; Rannot, R.C.; Yadav, K.K.; Chandra, P.; Kothari, M.; Koul, R.

    2011-01-01

    In this paper, we present the results of Monte Carlo simulations of γ-ray and cosmic-ray proton induced extensive air showers as detected by the TACTIC atmospheric Cherenkov imaging telescope, for optimizing its trigger field of view and topological trigger generation scheme. The simulation study has been carried out at several zenith angles. The topological trigger generation uses a coincidence of two or three nearest-neighbor pixels for producing an event trigger. The results of this study suggest that a trigger field of 11×11 pixels (∼3.4°×3.4°) is quite optimum for achieving maximum effective collection area for γ-rays from a point source. With regard to optimization of topological trigger generation, it is found that both two and three nearest-neighbor pixel triggers yield nearly similar results up to a zenith angle of 25°, with a threshold energy of ∼1.5 TeV for γ-rays. Beyond a zenith angle of 25°, the results suggest that a two-pixel nearest-neighbor trigger should be preferred. Comparison of the simulated integral rates has also been made with corresponding measured values for validating the predictions of the Monte Carlo simulations, especially the effective collection area, so that energy spectra of sources (or flux upper limits in case of no detection) can be determined reliably. Reasonably good matching of the measured trigger rates (on the basis of ∼207 h of data collected with the telescope in NN-2 and NN-3 trigger configurations) with those obtained from simulations reassures us that the procedure followed in estimating the threshold energy and detection rates is quite reliable. - Highlights: → Optimization of the trigger field of view and topological trigger generation for the TACTIC telescope. → Monte Carlo simulations of extensive air showers carried out using the CORSIKA code. → Trigger generation with two or three nearest-neighbor pixels yields similar results up to a zenith angle of 25°. → Reasonably good matching of measured trigger

  16. Simulation and OR (operations research) in combination for practical optimization

    NARCIS (Netherlands)

    van Dijk, N.; van der Sluis, E.; Haijema, R.; Al-Ibrahim, A.; van der Wal, J.; Kuhl, M.E.; Steiger, N.M.; Armstrong, F.B.; Joines, J.A.

    2005-01-01

    Should we pool capacities or not? This is a question that one can regularly be confronted with in operations and service management. It is a question that necessarily requires a combination of queueing (as an OR discipline) and simulation (as an evaluative tool), plus further steps for optimization. It will

  17. Optimization of the particle pusher in a diode simulation code

    International Nuclear Information System (INIS)

    Theimer, M.M.; Quintenz, J.P.

    1979-09-01

    The particle pusher in Sandia's particle-in-cell diode simulation code has been rewritten to reduce the required run time of a typical simulation. The resulting new version of the code has been found to run up to three times as fast as the original with comparable accuracy. The cost of this optimization was an increase in storage requirements of about 15%. The new version has also been written to run efficiently on a CRAY-1 computing system. The steps taken to effect this reduced run time are described. Various test cases are detailed

  18. A simulator-independent optimization tool based on genetic algorithm applied to nuclear reactor design

    International Nuclear Information System (INIS)

    Abreu Pereira, Claudio Marcio Nascimento do; Schirru, Roberto; Martinez, Aquilino Senra

    1999-01-01

    Here we present an engineering optimization tool based on a genetic algorithm, implemented according to the method proposed in recent work that has demonstrated the feasibility of using this technique in nuclear reactor core design. The tool is simulator-independent in the sense that it can be customized to use most simulators that read their input parameters from formatted text files and write their outputs to text files. As nuclear reactor simulators generally use this kind of interface, the proposed tool plays an important role in nuclear reactor design. Research reactors may often use non-conventional design approaches, causing different situations that may lead the nuclear engineer to face new optimization problems. In this case, a good optimization technique, together with its customizing facility and a friendly man-machine interface, could be very valuable. Here, the tool is described and some advantages are outlined. (author)
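
    As an illustration of the file-based coupling this record describes, the sketch below writes candidate parameters into a formatted input file, runs the simulator as an external program and parses a fitness value from its text output; the file names, the placeholder tags, the "keff" output line and the simulator command are all hypothetical, not the authors' interface.

```python
# Sketch of text-file coupling between an optimizer and a black-box simulator (hypothetical names).
import re
import subprocess

def evaluate(params, template="core.inp.tpl"):
    text = open(template).read()
    for name, value in params.items():                  # substitute e.g. {ENRICH} -> 3.2000
        text = text.replace("{" + name + "}", f"{value:.4f}")
    open("core.inp", "w").write(text)                   # formatted input file for the simulator
    subprocess.run(["./reactor_sim", "core.inp", "core.out"], check=True)
    out = open("core.out").read()
    keff = float(re.search(r"keff\s*=\s*([\d.Ee+-]+)", out).group(1))
    return keff                                         # fitness value parsed from text output

# A genetic algorithm would call evaluate() as its fitness function for each individual, e.g.:
# fitness = evaluate({"ENRICH": 3.2, "PITCH": 1.26})
```

    Because the coupling only touches text files, swapping in a different simulator means changing the template, the command line and the output regular expression, not the optimizer itself.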

  19. Numerical simulation of CICC design based on optimization of ratio of copper to superconductor

    International Nuclear Information System (INIS)

    Jiang Huawei; Li Yuan; Yan Shuailing

    2007-01-01

    For cable-in-conduit conductor (CICC) structure design, a numerical simulation of the conductor configuration based on optimization of the copper-to-superconductor ratio is proposed. The simulation outcome is in agreement with the engineering design. (authors)

  20. On the Structure and Adjustment of Inversion-Capped Neutral Atmospheric Boundary-Layer Flows: Large-Eddy Simulation Study

    DEFF Research Database (Denmark)

    Pedersen, Jesper Grønnegaard; Gryning, Sven-Erik; Kelly, Mark C.

    2014-01-01

    A range of large-eddy simulations, with differing free-atmosphere stratification and zero or slightly positive surface heat flux, is investigated to improve understanding of the neutral and near-neutral, inversion-capped, horizontally homogeneous, barotropic atmospheric boundary layer, with emphasis on the upper region. We find that an adjustment time of at least 16 h is needed for the simulated flow to reach a quasi-steady state. The boundary layer continues to grow, but at a slow rate that changes little after 8 h of simulation time. A common feature of the neutral simulations is the development of a super-geostrophic jet near the top of the boundary layer. The analytical wind-shear models included do not account for such a jet, and the best agreement with simulated wind shear is seen in cases with weak stratification above the boundary layer. Increasing the surface heat flux decreases the magnitude

  1. A Novel Structure and Design Optimization of Compact Spline-Parameterized UWB Slot Antenna

    Directory of Open Access Journals (Sweden)

    Koziel Slawomir

    2016-12-01

    Full Text Available In this paper, a novel structure for a compact UWB slot antenna and its design optimization procedure are presented. In order to achieve the number of degrees of freedom necessary to obtain a considerable size reduction rate, the slot is parameterized using spline curves. All antenna dimensions are simultaneously adjusted using numerical optimization procedures. The fundamental bottleneck here is the high cost of the electromagnetic (EM) simulation model of the structure, which (for reliability) includes an SMA connector. Another problem is the large number of geometry parameters (nineteen). For the sake of computational efficiency, the optimization process is therefore performed using variable-fidelity EM simulations and surrogate-assisted algorithms. The optimization process is oriented towards explicit reduction of the antenna size and leads to a compact footprint of 199 mm2 as well as acceptable matching within the entire UWB band. The simulation results are validated using physical measurements of the fabricated antenna prototype.

  2. Optimization of source pencil deployment based on plant growth simulation algorithm

    International Nuclear Information System (INIS)

    Yang Lei; Liu Yibao; Liu Yujuan

    2009-01-01

    A plant growth simulation algorithm was proposed for optimizing source pencil deployment in a 60Co irradiator. A method to evaluate the calculation results was presented, with the objective function defined as the relative standard deviation of the exposure rate at the reference points; the transformation of the two kinds of control variables, i.e., the position coordinates x_j and y_j of the source pencils in the source plaque, into proper integer variables was also analyzed and solved. The results show that the plant growth simulation algorithm, which possesses both random and directional search mechanisms, has good global search ability and can be used conveniently. The results are only slightly affected by the initial conditions and improve the uniformity of the irradiation field. This provides a dependable basis for optimizing the arrangement of source bars at an irradiation facility. (authors)
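
    To make the objective concrete, here is a minimal Python sketch of the kind of fitness function such an algorithm would minimize: the relative standard deviation of the exposure rate at a set of reference points, computed with a simple inverse-square kernel. The dose model, geometry and numbers are illustrative assumptions only; a real evaluation would include attenuation and build-up factors.

        import numpy as np

        def exposure_rate(ref_points, pencil_xy, activity=1.0, z_offset=20.0):
            """Simplified exposure-rate model: inverse-square sum over source pencils.
            Attenuation and build-up factors are omitted for brevity."""
            rates = []
            for rx, ry, rz in ref_points:
                r2 = (pencil_xy[:, 0] - rx) ** 2 + (pencil_xy[:, 1] - ry) ** 2 + (rz - z_offset) ** 2
                rates.append(np.sum(activity / r2))
            return np.array(rates)

        def objective(pencil_xy, ref_points):
            """Relative standard deviation of the exposure rate at the reference points,
            i.e. the uniformity measure being minimized."""
            rates = exposure_rate(ref_points, pencil_xy)
            return rates.std() / rates.mean()

        # Hypothetical example: 8 source pencils on a plaque, 5 reference points (cm)
        pencils = np.array([[x, y] for x in (-15, -5, 5, 15) for y in (-10, 10)], dtype=float)
        refs = [(0, 0, 100), (50, 0, 100), (-50, 0, 100), (0, 50, 100), (0, -50, 100)]
        print(objective(pencils, refs))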

  3. Routing Optimization of Intelligent Vehicle in Automated Warehouse

    Directory of Open Access Journals (Sweden)

    Yan-cong Zhou

    2014-01-01

    Full Text Available Routing optimization is a key technology in intelligent warehouse logistics. In order to obtain an optimal route for a warehouse intelligent vehicle, routing optimization in a complex global dynamic environment is studied. A new evolutionary ant colony algorithm based on RFID and knowledge refinement is proposed. The new algorithm obtains environmental information in a timely manner through RFID technology and updates the environment map at the same time. It adopts elite-ant retention, fallback, and pheromone-limitation adjustment strategies. The current optimal route in the population space is further refined based on experiential knowledge. The experimental results show that the new algorithm has a higher convergence speed and can easily escape U-type or V-type obstacle traps. It can also find the global optimal route, or an approximately optimal one, with higher probability in a complex dynamic environment. The new algorithm is proved feasible and effective by simulation results.
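
    The pheromone-limitation and elite-ant ideas mentioned above can be sketched in a few lines. The Python fragment below is a generic max-min-style update, not the paper's exact algorithm; the route encoding and the constants (rho, tau_min, tau_max) are illustrative assumptions.

        import numpy as np

        def update_pheromones(tau, best_route, best_length, rho=0.1, tau_min=0.01, tau_max=5.0):
            """One pheromone update step with evaporation, elite reinforcement of the
            current best route, and max-min limits to avoid premature convergence."""
            tau *= (1.0 - rho)                       # evaporation on every edge
            for i, j in zip(best_route[:-1], best_route[1:]):
                tau[i, j] += 1.0 / best_length       # elite (best-route) reinforcement
            np.clip(tau, tau_min, tau_max, out=tau)  # pheromone limitation
            return tau

        # Hypothetical 5-node map fragment: pheromone matrix and a current best route
        tau = np.ones((5, 5))
        tau = update_pheromones(tau, best_route=[0, 2, 4], best_length=3.2)
        print(tau)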

  4. Modeling, Simulation and Optimization of Hydrogen Production Process from Glycerol using Steam Reforming

    International Nuclear Information System (INIS)

    Park, Jeongpil; Cho, Sunghyun; Kim, Tae-Ok; Shin, Dongil; Lee, Seunghwan; Moon, Dong Ju

    2014-01-01

    For improved sustainability of the biorefinery industry, the biorefinery byproduct glycerol is being investigated as an alternative source for hydrogen production. This research designs and optimizes a hydrogen-production process for small hydrogen stations using steam reforming of purified glycerol as the main reaction, replacing existing processes relying on steam methane reforming. Modeling, simulation and optimization of the proposed hydrogen production process from glycerol are performed using a commercial process simulator. A mixture of glycerol and steam is used to produce syngas in the reforming process. Hydrogen is then produced from carbon monoxide and steam through the water-gas shift reaction. Finally, hydrogen is separated from carbon dioxide using PSA. This study shows a higher yield than earlier U.S. DOE and Linde studies. Economic evaluations are performed for optimal planning of the construction of a domestic hydrogen energy infrastructure based on the proposed glycerol-based hydrogen station.

  5. A simulation-based optimization approach for free distributed repairable multi-state availability-redundancy allocation problems

    International Nuclear Information System (INIS)

    Attar, Ahmad; Raissi, Sadigh; Khalili-Damghani, Kaveh

    2017-01-01

    A simulation-based optimization (SBO) method is proposed to handle the multi-objective joint availability-redundancy allocation problem (JARAP). The method places no restrictions on the probability distributions of times to failure and repair times for the multi-state, multi-component series-parallel configuration under active, cold and hot standby strategies. Under such conditions, estimation of availability is not a trivial task. First, an efficient computer simulation model is proposed to estimate the availability of the aforementioned system. Then, the estimated availability values are used in a repetitive manner as parameters of a two-objective joint availability-redundancy allocation optimization model through the SBO mechanism. The optimization model is then solved using two well-known multi-objective evolutionary computation algorithms, i.e., the non-dominated sorting genetic algorithm (NSGA-II) and the Strength Pareto Evolutionary Algorithm (SPEA2). The proposed SBO approach is tested using a non-exponential numerical example with multi-state repairable components. The results are presented and discussed for different demand scenarios under cold and hot standby strategies. Furthermore, the performance of NSGA-II and SPEA2 is statistically compared with regard to multi-objective accuracy and diversity metrics. - Highlights: • A Simulation-Based Optimization (SBO) procedure is introduced for JARAP. • The proposed SBO works for any given failure and repair times. • An efficient simulation procedure is developed to estimate availability. • Customized NSGA-II and SPEA2 are proposed to solve the bi-objective JARAP. • Statistical analysis is employed to test the performance of optimization methods.
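
    The core idea of estimating availability by simulation, free of any distributional assumption, can be illustrated with a crude single-component Monte Carlo sketch in Python. The Weibull failure and lognormal repair laws below are arbitrary examples of the non-exponential case; the paper's model additionally handles multi-state series-parallel systems and standby strategies.

        import random

        def simulate_availability(ttf_sampler, ttr_sampler, horizon=10_000.0, runs=200):
            """Crude Monte Carlo estimate of availability for a single repairable
            component with arbitrary time-to-failure / time-to-repair laws."""
            up_fraction = 0.0
            for _ in range(runs):
                t, up_time = 0.0, 0.0
                while t < horizon:
                    ttf = ttf_sampler()
                    up_time += min(ttf, horizon - t)
                    t += ttf
                    if t >= horizon:
                        break
                    t += ttr_sampler()          # component is down during repair
                up_fraction += up_time / horizon
            return up_fraction / runs

        # Hypothetical non-exponential laws: Weibull failures, lognormal repairs
        avail = simulate_availability(
            ttf_sampler=lambda: random.weibullvariate(120.0, 1.5),
            ttr_sampler=lambda: random.lognormvariate(1.0, 0.5),
        )
        print(f"estimated availability = {avail:.3f}")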

  6. Energy-Dissipation Performance of Combined Low Yield Point Steel Plate Damper Based on Topology Optimization and Its Application in Structural Control

    Directory of Open Access Journals (Sweden)

    Haoxiang He

    2016-01-01

    Full Text Available In view of disadvantages such as higher yield stress and inadequate adjustability, a combined low yield point steel plate damper involving low yield point steel plates and common steel plates is proposed. Three types of combined plate dampers with new hollow shapes are proposed, with the specific forms including interior hollow, boundary hollow, and ellipse hollow. “Maximum stiffness” and “full stress state” are used as the optimization objectives, and topology optimization of the different hollow forms is performed with an alternating optimization method to obtain the optimal shape. Various combined steel plate dampers are analyzed by finite element simulation; the results indicate that the initial stiffness of the boundary-optimized damper and the interior-optimized damper is larger, the hysteresis curves are full, and there is no stress concentration. These two types of optimized models, made with different material ratios, are studied by numerical simulation, and the adjustability of the yield stress of these combined dampers is verified. The nonlinear dynamic responses, seismic capacity, and damping effect of steel frame structures with the different combined dampers are analyzed. The results show that the boundary-optimized damper has better energy-dissipation capacity and is suitable for engineering application.

  7. Uncertainty-based simulation-optimization using Gaussian process emulation: Application to coastal groundwater management

    Science.gov (United States)

    Rajabi, Mohammad Mahdi; Ketabchi, Hamed

    2017-12-01

    Combined simulation-optimization (S/O) schemes have long been recognized as a valuable tool in coastal groundwater management (CGM). However, previous applications have mostly relied on deterministic seawater intrusion (SWI) simulations. This is a questionable simplification, knowing that SWI models are inevitably prone to epistemic and aleatory uncertainty, and hence a management strategy obtained through S/O without consideration of uncertainty may result in significantly different real-world outcomes than expected. However, two key issues have hindered the use of uncertainty-based S/O schemes in CGM, which are addressed in this paper. The first issue is how to solve the computational challenges resulting from the need to perform massive numbers of simulations. The second issue is how the management problem is formulated in presence of uncertainty. We propose the use of Gaussian process (GP) emulation as a valuable tool in solving the computational challenges of uncertainty-based S/O in CGM. We apply GP emulation to the case study of Kish Island (located in the Persian Gulf) using an uncertainty-based S/O algorithm which relies on continuous ant colony optimization and Monte Carlo simulation. In doing so, we show that GP emulation can provide an acceptable level of accuracy, with no bias and low statistical dispersion, while tremendously reducing the computational time. Moreover, five new formulations for uncertainty-based S/O are presented based on concepts such as energy distances, prediction intervals and probabilities of SWI occurrence. We analyze the proposed formulations with respect to their resulting optimized solutions, the sensitivity of the solutions to the intended reliability levels, and the variations resulting from repeated optimization runs.
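
    A minimal sketch of the emulation step in Python with scikit-learn: an expensive simulator is replaced, after training on a small design of experiments, by a Gaussian process that is cheap enough to call inside Monte Carlo or optimization loops. The stand-in "seawater-intrusion model" and the two decision variables below are purely illustrative and are not from the paper.

        import numpy as np
        from sklearn.gaussian_process import GaussianProcessRegressor
        from sklearn.gaussian_process.kernels import RBF, ConstantKernel

        # Hypothetical stand-in for an expensive seawater-intrusion simulator:
        # maps two decision variables (e.g. pumping rates) to a scalar response.
        def expensive_swi_model(x):
            return np.sin(3 * x[:, 0]) + 0.5 * x[:, 1] ** 2

        # 1) Train the emulator on a small design of experiments
        rng = np.random.default_rng(0)
        X_train = rng.uniform(0, 1, size=(40, 2))
        y_train = expensive_swi_model(X_train)
        gp = GaussianProcessRegressor(ConstantKernel() * RBF(length_scale=[0.2, 0.2]),
                                      normalize_y=True).fit(X_train, y_train)

        # 2) Use the cheap emulator inside a Monte Carlo / optimization loop
        X_mc = rng.uniform(0, 1, size=(10_000, 2))
        mean, std = gp.predict(X_mc, return_std=True)   # predictions with uncertainty
        print(mean.mean(), std.mean())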

  8. Simulation of optimal arctic routes using a numerical sea ice model based on an ice-coupled ocean circulation method

    Directory of Open Access Journals (Sweden)

    Jong-Ho Nam

    2013-06-01

    Full Text Available Ever since the Arctic region opened its mysterious passage to mankind, continuous attempts to take advantage of the fastest route across the region have been made. The Arctic region is still covered by thick ice, and thus finding a feasible navigation route is essential for an economical voyage. To find the optimal route, it is necessary to establish an efficient transit model that enables us to simulate every possible route in advance. In this work, an enhanced algorithm to determine the optimal route in the Arctic region is introduced. A transit model based on the simulated sea ice and environmental data numerically modeled in the Arctic is developed. By integrating the simulated data into the transit model, further applications such as route simulation, cost estimation or hindcasting can easily be performed. An interactive simulation system that determines the optimal Arctic route using the transit model is developed. The simulation of optimal routes is carried out and the validity of the results is discussed.

  9. Mathematical exergoeconomic optimization of a complex cogeneration plant aided by a professional process simulator

    International Nuclear Information System (INIS)

    Vieira, Leonardo S.; Donatelli, Joao L.; Cruz, Manuel E.

    2006-01-01

    In this work we present the development and implementation of an integrated approach for mathematical exergoeconomic optimization of complex thermal systems. By exploiting the computational power of a professional process simulator, the proposed integrated approach permits the optimization routine to ignore the variables associated with the thermodynamic balance equations and thus deal only with the decision variables. To demonstrate the capabilities of the integrated approach, it is here applied to a complex cogeneration system, which includes all the major components of a typical thermal plant, and requires more than 800 variables for its simulation

  10. Pareto Optimal Solutions for Network Defense Strategy Selection Simulator in Multi-Objective Reinforcement Learning

    Directory of Open Access Journals (Sweden)

    Yang Sun

    2018-01-01

    Full Text Available Using Pareto optimization in Multi-Objective Reinforcement Learning (MORL) leads to better learning results for network defense games. This is particularly useful for network security agents, who must often balance several goals when choosing what action to take in defense of a network. If the defender knows his preferred reward distribution, the advantages of Pareto optimization can be retained by using a scalarization algorithm prior to the implementation of the MORL. In this paper, we simulate a network defense scenario by creating a multi-objective zero-sum game and using Pareto optimization and MORL to determine optimal solutions and compare those solutions to different scalarization approaches. We build a Pareto Defense Strategy Selection Simulator (PDSSS) system for assisting network administrators on decision-making, specifically, on defense strategy selection, and the experiment results show that the Satisficing Trade-Off Method (STOM) scalarization approach performs better than linear scalarization or GUESS method. The results of this paper can aid network security agents attempting to find an optimal defense policy for network security games.

  11. Solving complex maintenance planning optimization problems using stochastic simulation and multi-criteria fuzzy decision making

    International Nuclear Information System (INIS)

    Tahvili, Sahar; Österberg, Jonas; Silvestrov, Sergei; Biteus, Jonas

    2014-01-01

    One of the most important factors in the operations of many corporations today is maximizing profit, and one important tool to that effect is the optimization of maintenance activities. Maintenance activities are, at the highest level, divided into two major areas: corrective maintenance (CM) and preventive maintenance (PM). When optimizing maintenance activities by a maintenance plan or policy, we seek to find the best activities to perform at each point in time, be it PM or CM. We explore the use of stochastic simulation, genetic algorithms and other tools for solving complex maintenance planning optimization problems in terms of a suggested framework model based on discrete event simulation.

  12. Solving complex maintenance planning optimization problems using stochastic simulation and multi-criteria fuzzy decision making

    Energy Technology Data Exchange (ETDEWEB)

    Tahvili, Sahar [Mälardalen University (Sweden); Österberg, Jonas; Silvestrov, Sergei [Division of Applied Mathematics, Mälardalen University (Sweden); Biteus, Jonas [Scania CV (Sweden)

    2014-12-10

    One of the most important factors in the operations of many corporations today is maximizing profit, and one important tool to that effect is the optimization of maintenance activities. Maintenance activities are, at the highest level, divided into two major areas: corrective maintenance (CM) and preventive maintenance (PM). When optimizing maintenance activities by a maintenance plan or policy, we seek to find the best activities to perform at each point in time, be it PM or CM. We explore the use of stochastic simulation, genetic algorithms and other tools for solving complex maintenance planning optimization problems in terms of a suggested framework model based on discrete event simulation.

  13. Embedded FPGA Design for Optimal Pixel Adjustment Process of Image Steganography

    Directory of Open Access Journals (Sweden)

    Chiung-Wei Huang

    2018-01-01

    Full Text Available We propose a prototype field programmable gate array (FPGA) implementation of the optimal pixel adjustment process (OPAP) algorithm for image steganography. In the proposed scheme, the cover image and the secret message are transmitted from a personal computer (PC) to an FPGA board using an RS232 interface for hardware processing. We first embed k bits of the secret message into each pixel of the cover image by the least-significant-bit (LSB) substitution method, followed by the associated OPAP calculations to construct a stego pixel. After all pixels of the cover image have been processed, a stego image is created, transmitted from the FPGA back to the PC and stored there. Moreover, we have extended the basic pixel-wise structure to a parallel structure which can fully use the hardware devices to speed up the embedding process and embed several bits of the secret message at the same time. Through the parallel mechanism of the hardware-based design, the data hiding process can be completed in a few clock cycles to produce the steganography outcome. Experimental results show the effectiveness and correctness of the proposed scheme.
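
    For reference, the OPAP rule itself is only a few lines when written in software. The Python sketch below embeds k secret bits by LSB substitution and then adjusts the stego pixel by ±2^k whenever that reduces the embedding error without altering the embedded bits; it mirrors the arithmetic a hardware pipeline would implement, but the function name and example values are illustrative, not taken from the paper.

        def embed_opap(pixel, bits, k):
            """Embed k secret bits into one 8-bit pixel by LSB substitution, then apply
            the optimal pixel adjustment process (OPAP) to reduce the embedding error
            without changing the k embedded LSBs."""
            secret = int("".join(map(str, bits)), 2)        # k bits -> integer in [0, 2^k)
            stego = (pixel & ~((1 << k) - 1)) | secret      # plain LSB substitution
            delta = stego - pixel                           # embedding error in (-2^k, 2^k)
            if delta > (1 << (k - 1)) and stego >= (1 << k):
                stego -= 1 << k                             # adjust down: same LSBs, smaller error
            elif delta < -(1 << (k - 1)) and stego < 256 - (1 << k):
                stego += 1 << k                             # adjust up
            return stego

        # Example: hide the bits 1,0,1 (k = 3) in a pixel of value 100
        print(embed_opap(100, [1, 0, 1], 3))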

  14. Optimization of observation plan based on the stochastic characteristics of the geodetic network

    Directory of Open Access Journals (Sweden)

    Pachelski Wojciech

    2016-06-01

    Full Text Available Optimal design of a geodetic network is a basic element of many engineering projects. An observation plan is the concluding part of the process. Through the adjustment, each observation in the network makes a different contribution to, and has a different impact on, the values and accuracy characteristics of the unknowns. The problem of optimal design can be solved by means of computer simulation. This paper presents a new method of simulation based on sequential estimation of individual observations in a step-by-step manner, by means of the so-called filtering equations. The algorithm aims at satisfying different accuracy criteria according to various interpretations of the covariance matrix. Apart from these, the amount of effort, defined as the minimum number of observations required, is also used as an optimization criterion.
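
    The step-by-step filtering idea can be sketched as a sequential least-squares update in Python: each candidate observation is folded into the current estimate and its covariance, so accuracy measures (for example the covariance trace) can be monitored as the plan grows. The two-unknown example and the observation rows below are hypothetical and stand in for the geodetic design equations.

        import numpy as np

        def add_observation(x, Q, a, obs, sigma2=1.0):
            """Sequential (filtering) update: incorporate a single scalar observation
            obs = a @ x_true + noise into the current estimate x with covariance Q."""
            a = a.reshape(-1, 1)
            gain = Q @ a / (a.T @ Q @ a + sigma2)        # Kalman-type gain vector
            x_new = x + (gain * (obs - a.T @ x)).ravel()
            Q_new = Q - gain @ a.T @ Q                   # covariance after the observation
            return x_new, Q_new

        # Hypothetical 2-unknown network: start from a diffuse prior and add candidate
        # observations one by one, watching the covariance trace shrink.
        x, Q = np.zeros(2), np.eye(2) * 100.0
        for a_row, l in [(np.array([1.0, 0.0]), 2.1), (np.array([1.0, 1.0]), 5.0)]:
            x, Q = add_observation(x, Q, a_row, l)
            print(np.trace(Q))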

  15. Optimal design of supply chain network under uncertainty environment using hybrid analytical and simulation modeling approach

    Science.gov (United States)

    Chiadamrong, N.; Piyathanavong, V.

    2017-12-01

    Models that aim to optimize the design of supply chain networks have gained increasing interest in the supply chain literature. Mixed-integer linear programming and discrete-event simulation are widely used for such optimization problems. We present a hybrid approach to support decisions for supply chain network design using a combination of analytical and discrete-event simulation models. The proposed approach is based on iterative procedures that continue until the difference between subsequent solutions satisfies a pre-determined termination criterion. The effectiveness of the proposed approach is illustrated by an example, which yields results closer to optimal, with much faster solving times, than those obtained from a conventional simulation-based optimization model. The efficacy of this proposed hybrid approach is promising and it can be applied as a powerful tool in designing a real supply chain network. It also provides the possibility to model and solve more realistic problems, which incorporate dynamism and uncertainty.

  16. Simulation-optimization model for production planning in the blood supply chain.

    Science.gov (United States)

    Osorio, Andres F; Brailsford, Sally C; Smith, Honora K; Forero-Matiz, Sonia P; Camacho-Rodríguez, Bernardo A

    2017-12-01

    Production planning in the blood supply chain is a challenging task. Many complex factors such as uncertain supply and demand, blood group proportions, shelf life constraints and different collection and production methods have to be taken into account, and thus advanced methodologies are required for decision making. This paper presents an integrated simulation-optimization model to support both strategic and operational decisions in production planning. Discrete-event simulation is used to represent the flows through the supply chain, incorporating collection, production, storing and distribution. On the other hand, an integer linear optimization model running over a rolling planning horizon is used to support daily decisions, such as the required number of donors, collection methods and production planning. This approach is evaluated using real data from a blood center in Colombia. The results show that, using the proposed model, key indicators such as shortages, outdated units, donors required and cost are improved.

  17. A New Method Based on Simulation-Optimization Approach to Find Optimal Solution in Dynamic Job-shop Scheduling Problem with Breakdown and Rework

    Directory of Open Access Journals (Sweden)

    Farzad Amirkhani

    2017-03-01

    The proposed method is implemented on classical job-shop problems with the objective of makespan, and the results are compared with a mixed-integer programming model. Moreover, appropriate dispatching priorities are obtained for the dynamic job-shop problem by minimizing a multi-objective criterion. The results show that simulation-based optimization is highly capable of capturing the main characteristics of the shop and produces optimal or near-optimal solutions with a high degree of credibility.

  18. The effect of framing on surrogate optimism bias: A simulation study.

    Science.gov (United States)

    Patel, Dev; Cohen, Elan D; Barnato, Amber E

    2016-04-01

    To explore the effect of emotion priming and physician communication behaviors on optimism bias. We conducted a 5 × 2 between-subject randomized factorial experiment using a Web-based interactive video designed to simulate a family meeting for a critically ill spouse/parent. Eligibility included age at least 35 years and self-identifying as the surrogate for a spouse/parent. The primary outcome was the surrogate's election of code status. We defined optimism bias as the surrogate's estimate of prognosis with cardiopulmonary resuscitation (CPR) > their recollection of the physician's estimate. Of 373 respondents, 256 (69%) logged in and were randomized, and 220 (86%) had nonmissing data for prognosis. Sixty-seven (30%) of 220 overall, and 56 (32%) of 173 with an accurate recollection of the physician's estimate, had optimism bias. Optimism bias was correlated with choosing CPR. Framing the decision as the patient's vs the surrogate's (25% vs 36%, P = .066) and describing the alternative to CPR as "allow natural death" instead of "do not resuscitate" (25% vs 37%, P = .035) decreased optimism bias. Framing of CPR choice during code status conversations may influence surrogates' optimism bias. Copyright © 2015 Elsevier Inc. All rights reserved.

  19. A proposed simulation optimization model framework for emergency department problems in public hospital

    Science.gov (United States)

    Ibrahim, Ireen Munira; Liong, Choong-Yeun; Bakar, Sakhinah Abu; Ahmad, Norazura; Najmuddin, Ahmad Farid

    2015-12-01

    The Emergency Department (ED) is a very complex system with limited resources to support increasing demand. ED services are considered to be of good quality if they can meet patients' expectations. Long waiting times and length of stay are always the main problems faced by management. The management of the ED should place greater emphasis on its resource capacity in order to increase the quality of services, which contributes to patient satisfaction. This paper reviews work in progress of a study being conducted in a government hospital in Selangor, Malaysia. It proposes a simulation optimization model framework used to study ED operations and problems as well as to find an optimal solution to those problems. The integration of simulation and optimization is expected to assist management in the decision-making process regarding resource capacity planning in order to improve current and future ED operations.

  20. Numerical Simulation of a Tumor Growth Dynamics Model Using Particle Swarm Optimization.

    Science.gov (United States)

    Wang, Zhijun; Wang, Qing

    Tumor cell growth models involve high-dimensional parameter spaces that require computationally tractable methods to solve. To address a proposed tumor growth dynamics mathematical model, an instance of the particle swarm optimization method was implemented to speed up the search process in the multi-dimensional parameter space and find optimal parameter values that fit experimental data from mouse cancer cells. The fitness function, which measures the difference between calculated results and experimental data, was minimized in the numerical simulation process. The results and search efficiency of the particle swarm optimization method were compared to those from other evolutionary methods such as genetic algorithms.
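
    A generic particle swarm optimizer for this kind of least-squares parameter fitting can be sketched in a few lines of Python. The logistic growth model, synthetic data and swarm constants below are illustrative assumptions, not the model or data used in the paper.

        import numpy as np

        def pso_fit(residual_sse, bounds, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5):
            """Minimal particle swarm optimization of a sum-of-squared-errors fitness
            over a box-bounded parameter space."""
            rng = np.random.default_rng(1)
            lo, hi = np.array(bounds).T
            x = rng.uniform(lo, hi, size=(n_particles, len(bounds)))
            v = np.zeros_like(x)
            pbest, pbest_f = x.copy(), np.array([residual_sse(p) for p in x])
            gbest = pbest[pbest_f.argmin()].copy()
            for _ in range(iters):
                r1, r2 = rng.random(x.shape), rng.random(x.shape)
                v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
                x = np.clip(x + v, lo, hi)
                f = np.array([residual_sse(p) for p in x])
                better = f < pbest_f
                pbest[better], pbest_f[better] = x[better], f[better]
                gbest = pbest[pbest_f.argmin()].copy()
            return gbest, pbest_f.min()

        # Hypothetical fit of a logistic growth curve N(t) = K / (1 + exp(-r (t - t0)))
        t = np.linspace(0, 20, 15)
        data = 100 / (1 + np.exp(-0.6 * (t - 10))) + np.random.default_rng(2).normal(0, 2, t.size)
        def sse(p):
            K, r, t0 = p
            return np.sum((K / (1 + np.exp(-r * (t - t0))) - data) ** 2)
        best, best_sse = pso_fit(sse, bounds=[(50, 200), (0.1, 2.0), (0, 20)])
        print(best, best_sse)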

  1. Genetic algorithms and Monte Carlo simulation for optimal plant design

    International Nuclear Information System (INIS)

    Cantoni, M.; Marseguerra, M.; Zio, E.

    2000-01-01

    We present an approach to optimal plant design (choice of system layout and components) under conflicting safety and economic constraints, based upon the coupling of a Monte Carlo evaluation of plant operation with a genetic algorithm maximization procedure. The Monte Carlo simulation model provides a flexible tool, which enables one to describe relevant aspects of plant design and operation, such as standby modes and deteriorating repairs, not easily captured by analytical models. The effects of deteriorating repairs are described by means of a modified Brown-Proschan model of imperfect repair which accounts for the possibility of an increased proneness to failure of a component after a repair. The transitions of a component from standby to active, and vice versa, are simulated using a multiplicative correlation model. The genetic algorithm procedure is then used to optimize a profit function which accounts for plant safety and economic performance and which is evaluated, for each possible design, by the above Monte Carlo simulation. In order to avoid an overwhelming use of computer time, for each potential solution proposed by the genetic algorithm we perform only a few hundred Monte Carlo histories and then exploit the fact that, during the evolution of the genetic algorithm population, the fittest chromosomes appear repeatedly many times, so that the results for the solutions of interest (i.e. the best ones) attain statistical significance.

  2. Comparative analysis of cogeneration power plants optimization based on stochastic method using superstructure and process simulator

    Energy Technology Data Exchange (ETDEWEB)

    Araujo, Leonardo Rodrigues de [Instituto Federal do Espirito Santo, Vitoria, ES (Brazil)], E-mail: leoaraujo@ifes.edu.br; Donatelli, Joao Luiz Marcon [Universidade Federal do Espirito Santo (UFES), Vitoria, ES (Brazil)], E-mail: joaoluiz@npd.ufes.br; Silva, Edmar Alino da Cruz [Instituto Tecnologico de Aeronautica (ITA/CTA), Sao Jose dos Campos, SP (Brazil); Azevedo, Joao Luiz F. [Instituto de Aeronautica e Espaco (CTA/IAE/ALA), Sao Jose dos Campos, SP (Brazil)

    2010-07-01

    Thermal systems are essential in facilities such as thermoelectric plants, cogeneration plants, refrigeration systems and air conditioning, among others, in which much of the energy consumed by humanity is processed. In a world with finite natural fuel sources and growing energy demand, issues related to thermal system design, such as cost estimation, design complexity, environmental protection and optimization, are becoming increasingly important; hence the need to understand the mechanisms that degrade energy, to improve the use of energy sources, to reduce environmental impacts and to reduce project, operation and maintenance costs. In recent years, a consistent development of procedures and techniques for the computational design of thermal systems has occurred. In this context, the fundamental objective of this study is a comparative performance analysis of the structural and parametric optimization of a cogeneration system using two stochastic methods: genetic algorithms and simulated annealing. This research work uses a superstructure, modelled in a process simulator, IPSEpro of SimTech, in which the design options appropriate to the case studied are included. Accordingly, the optimal configuration of the cogeneration system is determined as a result of the optimization process, restricted to the configuration options included in the superstructure. The optimization routines are written in MS Excel Visual Basic so that they work tightly coupled to the process simulator. At the end of the optimization process, the optimal system configuration, given the characteristics of each specific problem, is defined. (author)

  3. Wind turbine optimal control during storms

    International Nuclear Information System (INIS)

    Petrović, V; Bottasso, C L

    2014-01-01

    This paper proposes a control algorithm that enables wind turbine operation in high winds. With this objective, an online optimization procedure is formulated that, based on the wind turbine state, estimates those extremal wind speed variations that would produce maximal allowable wind turbine loads. Optimization results are compared to the actual wind speed and, if there is a danger of excessive loading, the wind turbine power reference is adjusted to ensure that loads stay within allowed limits. This way, the machine can operate safely even above the cut-out wind speed, thereby realizing a soft envelope-protecting cut-out. The proposed control strategy is tested and verified using a high-fidelity aeroservoelastic simulation model

  4. Simulation-Based Optimization of Camera Placement in the Context of Industrial Pose Estimation

    DEFF Research Database (Denmark)

    Jørgensen, Troels Bo; Iversen, Thorbjørn Mosekjær; Lindvig, Anders Prier

    2018-01-01

    In this paper, we optimize the placement of a camera in simulation in order to achieve a high success rate for a pose estimation problem. This is achieved by simulating 2D images from a stereo camera in a virtual scene. The stereo images are then used to generate 3D point clouds based on two diff...

  5. Nonlinear dynamic simulation of optimal depletion of crude oil in the lower 48 United States

    International Nuclear Information System (INIS)

    Ruth, M.; Cleveland, C.J.

    1993-01-01

    This study combines the economic theory of optimal resource use with econometric estimates of demand and supply parameters to develop a nonlinear dynamic model of crude oil exploration, development, and production in the lower 48 United States. The model is simulated with the graphical programming language STELLA, for the years 1985 to 2020. The procedure encourages use of economic theory and econometrics in combination with nonlinear dynamic simulation to enhance our understanding of complex interactions present in models of optimal resource use. (author)

  6. Numerical simulation and optimized design of cased telescoped ammunition interior ballistic

    Directory of Open Access Journals (Sweden)

    Jia-gang Wang

    2018-04-01

    Full Text Available In order to achieve an optimized cased telescoped ammunition (CTA) interior ballistic design, a genetic algorithm coupled with a CTA interior ballistic model was introduced. Given the interior ballistic characteristics of a CTA gun, the goal of the design is to obtain as high a projectile velocity as possible. The optimal design of the CTA interior ballistics is carried out with the genetic algorithm by constraining the peak pressure and varying the chamber volume and gunpowder charge density. A numerical simulation of the interior ballistics based on a 35 mm CTA firing experimental scheme was conducted, and the genetic algorithm was then used for numerical optimization. The projectile muzzle velocity of the optimized scheme is increased from 1168 m/s for the initial experimental scheme to 1182 m/s. Four optimized schemes were then obtained from several independent optimization runs; comparison shows that the differences between these schemes are small, and their peak pressures and muzzle velocities are almost the same. The result shows that the genetic algorithm is effective in the optimal design of CTA interior ballistics. This work lays the foundation for further CTA interior ballistic design. Keywords: Cased telescoped ammunition, Interior ballistics, Gunpowder, Optimization genetic algorithm

  7. Optimization of a middle atmosphere diagnostic scheme

    Science.gov (United States)

    Akmaev, Rashid A.

    1997-06-01

    A new assimilative diagnostic scheme based on the use of a spectral model was recently tested on the CIRA-86 empirical model. It reproduced the observed climatology with an annual global rms temperature deviation of 3.2 K in the 15-110 km layer. The most important new component of the scheme is that the zonal forcing necessary to maintain the observed climatology is diagnosed from empirical data and subsequently substituted into the simulation model at the prognostic stage of the calculation in an annual cycle mode. The simulation results are then quantitatively compared with the empirical model, and the above mentioned rms temperature deviation provides an objective measure of the `distance' between the two climatologies. This quantitative criterion makes it possible to apply standard optimization procedures to the whole diagnostic scheme and/or the model itself. The estimates of the zonal drag have been improved in this study by introducing a nudging (Newtonian-cooling) term into the thermodynamic equation at the diagnostic stage. A proper optimal adjustment of the strength of this term makes it possible to further reduce the rms temperature deviation of simulations down to approximately 2.7 K. These results suggest that direct optimization can successfully be applied to atmospheric model parameter identification problems of moderate dimensionality.

  8. Ring rolling process simulation for geometry optimization

    Science.gov (United States)

    Franchi, Rodolfo; Del Prete, Antonio; Donatiello, Iolanda; Calabrese, Maurizio

    2017-10-01

    Ring Rolling is a complex hot forming process where different rolls are involved in the production of seamless rings. Since each roll must be independently controlled, different speed laws must be set; usually, in the industrial environment, a milling curve is introduced to monitor the shape of the workpiece during the deformation in order to ensure correct ring production. In the present paper a ring rolling process has been studied and optimized in order to obtain annular components to be used in aerospace applications. In particular, the influence of the process input parameters (feed rate of the mandrel and angular speed of the main roll) on the geometrical features of the final ring has been evaluated. For this purpose, a three-dimensional finite element model for HRR (Hot Ring Rolling) has been implemented in SFTC DEFORM V11. The FEM model has been used to formulate a proper optimization problem. The optimization procedure has been implemented in the commercial software DS ISight in order to find the combination of process parameters which minimizes the percentage error of each obtained dimension with respect to its nominal value. The software finds the relationship between input and output parameters by applying Response Surface Methodology (RSM), using the exact values of the output parameters at the control points of the design space explored through FEM simulation. Once this relationship is known, the values of the output parameters can be calculated for each combination of the input parameters. After the calculation of the response surfaces for the selected output parameters, an optimization procedure based on Genetic Algorithms has been applied. At the end, the error between each obtained dimension and its nominal value has been minimized. The constraints imposed were the maximum values of the standard deviations of the dimensions obtained for the final ring.

  9. Correlational indicators of psychosocial adjustment among senior ...

    African Journals Online (AJOL)

    There was a significant joint contribution of the independent variables (sense of coherence, optimism and self-efficacy) to the prediction of psychosocial adjustment. This suggested that the three independent variables combined accounted for 30.4% (Adj. R² = .304) of the variation in psychosocial adjustment.

  10. Optimization of permanent-magnet undulator magnets ordering using simulated annealing algorithm

    International Nuclear Information System (INIS)

    Chen Nian; He Duohui; Li Ge; Jia Qika; Zhang Pengfei; Xu Hongliang; Cai Genwang

    2005-01-01

    A pure permanent-magnet undulator consists of many magnets. The unavoidable scatter in the remanence of these magnets causes undulator magnetic field errors, which affect the operating mode of the storage ring and the quality of the spontaneous emission spectrum. By optimizing the ordering of the permanent-magnet undulator magnets with a simulated annealing algorithm before the magnets are installed, the first field integral can be reduced to 10⁻⁶ T·m, the second integral to 10⁻⁶ T·m² and the peak field error to less than 10⁻⁴. The optimized results are independent of the initial solution. This paper describes the optimization process in detail and puts forward a method to quickly calculate the peak field error and field integrals from the magnet remanence. (authors)
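
    The ordering problem lends itself to a compact simulated annealing sketch: swap two magnets, recompute the cost from the measured remanences, and accept or reject with a temperature-dependent probability. In the Python fragment below the cost is a toy proxy for the first field integral; the real calculation from block measurements, and the cooling schedule, would of course differ.

        import math, random

        def anneal_ordering(remanence, cost, t0=1.0, t_min=1e-4, alpha=0.97, moves=200):
            """Simulated annealing over magnet orderings: propose swaps of two magnets
            and accept worse orderings with a Boltzmann probability that shrinks as the
            temperature is lowered."""
            order = list(range(len(remanence)))
            best, best_c = order[:], cost(order, remanence)
            cur_c, t = best_c, t0
            while t > t_min:
                for _ in range(moves):
                    i, j = random.sample(range(len(order)), 2)
                    order[i], order[j] = order[j], order[i]          # propose a swap
                    new_c = cost(order, remanence)
                    if new_c < cur_c or random.random() < math.exp((cur_c - new_c) / t):
                        cur_c = new_c
                        if new_c < best_c:
                            best, best_c = order[:], new_c
                    else:
                        order[i], order[j] = order[j], order[i]      # reject: swap back
                t *= alpha                                           # cool down
            return best, best_c

        # Toy cost: alternating-sign sum of remanence deviations as a stand-in for the
        # first field integral computed from block measurements.
        def first_integral_proxy(order, rem):
            return abs(sum(((-1) ** k) * rem[idx] for k, idx in enumerate(order)))

        rem = [1.0 + random.gauss(0, 0.01) for _ in range(40)]       # measured remanences (T)
        print(anneal_ordering(rem, first_integral_proxy)[1])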

  11. Temperature Simulation of Greenhouse with CFD Methods and Optimal Sensor Placement

    Directory of Open Access Journals (Sweden)

    Yanzheng Liu

    2014-03-01

    Full Text Available The accuracy of information monitoring is important for increasing the effectiveness of greenhouse environment control. In this paper, taking simulation of the temperature field in the greenhouse as an example, a CFD (Computational Fluid Dynamics) simulation model of the greenhouse microclimate was established from the principles of thermal environment formation, and the temperature distribution under mechanical ventilation was simulated. The results showed that the CFD model and its solution for the greenhouse thermal environment could describe the evolution of the temperature field within the greenhouse; the most suitable turbulence model was the standard k-ε model. Under mechanical ventilation, the average deviation between the simulated and measured values was 0.6, which was 4.5 percent of the measured value. The temperature field distribution showed obvious layering structures, and the temperature in the greenhouse model decreased gradually from the periphery to the center. Based on these results, the number of sensors and the optimal sensor placement were determined with the CFD simulation method.

  12. Simulation-based optimal Bayesian experimental design for nonlinear systems

    KAUST Repository

    Huan, Xun

    2013-01-01

    The optimal selection of experimental conditions is essential to maximizing the value of data for inference and prediction, particularly in situations where experiments are time-consuming and expensive to conduct. We propose a general mathematical framework and an algorithmic approach for optimal experimental design with nonlinear simulation-based models; in particular, we focus on finding sets of experiments that provide the most information about targeted sets of parameters. Our framework employs a Bayesian statistical setting, which provides a foundation for inference from noisy, indirect, and incomplete data, and a natural mechanism for incorporating heterogeneous sources of information. An objective function is constructed from information theoretic measures, reflecting expected information gain from proposed combinations of experiments. Polynomial chaos approximations and a two-stage Monte Carlo sampling method are used to evaluate the expected information gain. Stochastic approximation algorithms are then used to make optimization feasible in computationally intensive and high-dimensional settings. These algorithms are demonstrated on model problems and on nonlinear parameter inference problems arising in detailed combustion kinetics. © 2012 Elsevier Inc.
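
    The two-stage (nested) Monte Carlo evaluation of expected information gain mentioned above can be sketched as follows for a simple Gaussian likelihood. The one-parameter exponential-decay model and the two candidate designs are illustrative stand-ins for the problems treated in the paper, and no polynomial chaos acceleration or stochastic approximation is included.

        import numpy as np

        def expected_information_gain(prior_sampler, simulate, noise_sd, design,
                                      n_outer=500, n_inner=500):
            """Nested Monte Carlo estimator of the expected information gain
            EIG(d) = E_{y,theta}[ log p(y|theta,d) - log p(y|d) ] for a Gaussian likelihood."""
            rng = np.random.default_rng(0)
            thetas = prior_sampler(n_outer, rng)
            inner = prior_sampler(n_inner, rng)              # inner sample, reused for every y
            eig = 0.0
            for theta in thetas:
                y = simulate(theta, design) + rng.normal(0, noise_sd)
                log_lik = -0.5 * ((y - simulate(theta, design)) / noise_sd) ** 2
                evid = np.mean(np.exp(-0.5 * ((y - simulate(inner, design)) / noise_sd) ** 2))
                eig += log_lik - np.log(evid)                # Gaussian constants cancel in the difference
            return eig / n_outer

        # Hypothetical 1-parameter model y = exp(-theta * d) observed with noise:
        # compare two candidate designs (measurement times d).
        simulate = lambda theta, d: np.exp(-theta * d)
        prior = lambda n, rng: rng.uniform(0.5, 2.0, n)
        for d in (0.5, 2.0):
            print(d, expected_information_gain(prior, simulate, 0.05, d))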

  13. a Comparison of Simulated Annealing, Genetic Algorithm and Particle Swarm Optimization in Optimal First-Order Design of Indoor Tls Networks

    Science.gov (United States)

    Jia, F.; Lichti, D.

    2017-09-01

    The optimal network design problem has been well addressed in geodesy and photogrammetry but has not received the same attention for terrestrial laser scanner (TLS) networks. The goal of this research is to develop a complete design system that can automatically provide an optimal plan for high-accuracy, large-volume scanning networks. The aim in this paper is to use three heuristic optimization methods, simulated annealing (SA), genetic algorithm (GA) and particle swarm optimization (PSO), to solve the first-order design (FOD) problem for a small-volume indoor network and make a comparison of their performances. The room is simplified as discretized wall segments and possible viewpoints. Each possible viewpoint is evaluated with a score table representing the wall segments visible from each viewpoint based on scanning geometry constraints. The goal is to find a minimum number of viewpoints that can obtain complete coverage of all wall segments with a minimal sum of incidence angles. The different methods have been implemented and compared in terms of the quality of the solutions, runtime and repeatability. The experiment environment was simulated from a room located on University of Calgary campus where multiple scans are required due to occlusions from interior walls. The results obtained in this research show that PSO and GA provide similar solutions while SA doesn't guarantee an optimal solution within limited iterations. Overall, GA is considered as the best choice for this problem based on its capability of providing an optimal solution and fewer parameters to tune.

  14. Using multi-disciplinary optimization and numerical simulation on the transiting exoplanet survey satellite

    Science.gov (United States)

    Stoeckel, Gerhard P.; Doyle, Keith B.

    2017-08-01

    The Transiting Exoplanet Survey Satellite (TESS) is an instrument consisting of four wide field-of-view CCD cameras dedicated to the discovery of exoplanets around the brightest stars, and understanding the diversity of planets and planetary systems in our galaxy. Each camera utilizes a seven-element lens assembly with low-power and low-noise CCD electronics. Advanced multivariable optimization and numerical simulation capabilities accommodating arbitrarily complex objective functions have been added to the internally developed Lincoln Laboratory Integrated Modeling and Analysis Software (LLIMAS) and used to assess system performance. Various optical phenomena are accounted for in these analyses including full dn/dT spatial distributions in lenses and charge diffusion in the CCD electronics. These capabilities are utilized to design CCD shims for thermal vacuum chamber testing and flight, and verify comparable performance in both environments across a range of wavelengths, field points and temperature distributions. Additionally, optimizations and simulations are used for model correlation and robustness optimizations.

  15. A Simulation Study on the Performance of the Simple Difference and Covariance-Adjusted Scores in Randomized Experimental Designs

    Science.gov (United States)

    Petscher, Yaacov; Schatschneider, Christopher

    2011-01-01

    Research by Huck and McLean (1975) demonstrated that the covariance-adjusted score is more powerful than the simple difference score, yet recent reviews indicate researchers are equally likely to use either score type in two-wave randomized experimental designs. A Monte Carlo simulation was conducted to examine the conditions under which the…

  16. An Evaluation of the Use of Simulated Annealing to Optimize Thinning Rates for Single Even-Aged Stands

    Directory of Open Access Journals (Sweden)

    Kai Moriguchi

    2015-01-01

    Full Text Available We evaluated the potential of simulated annealing as a reliable method for optimizing thinning rates for single even-aged stands. Four types of yield models were used as benchmark models to examine the algorithm's versatility. Thinning rate, which was constrained to 0–50% every 5 years at stand ages of 10–45 years, was optimized to maximize the net present value for one fixed rotation term (50 years). The best parameters for the simulated annealing were chosen from 113 patterns, using the mean of the net present value from 39 runs to ensure the best performance. We compared the solutions with those from coarse full enumeration to evaluate the method's reliability and with 39 runs of random search to evaluate its efficiency. In contrast to random search, the best run of simulated annealing for each of the four yield models resulted in a better solution than coarse full enumeration. However, variations in the objective function for two yield models obtained with simulated annealing were significantly larger than those of random search. In conclusion, simulated annealing with optimized parameters is more efficient for optimizing thinning rates than random search. However, it is necessary to execute multiple runs to obtain reliable solutions.

  17. Simulation and optimization of agricultural product supply chain system based on Witness

    Directory of Open Access Journals (Sweden)

    Jiandong Liu

    2017-03-01

    Full Text Available Research on the agricultural product supply chain has important implications for improving the efficiency of agricultural product circulation, strengthening the construction of the agricultural market system, promoting agricultural modernization and solving the three rural issues. The agricultural product supply chain system has begun to be optimized through simulation techniques. In this paper, the agricultural product supply chain system is reasonably simplified and appropriate assumptions are made. A simulation model was developed using the simulation software Witness to study the agricultural product supply chain. Through analysis of the simulation output data, improvement suggestions were proposed as follows: improving the degree of organization of agricultural products, improving agricultural product processing, establishing strategic partnerships and scientifically developing agricultural product logistics.

  18. Optimizing a physical security configuration using a highly detailed simulation model

    NARCIS (Netherlands)

    Marechal, T.M.A.; Smith, A.E.; Ustun, V.; Smith, J.S.; Lefeber, A.A.J.; Badiru, A.B.; Thomas, M.U.

    2009-01-01

    This research is focused on using a highly detailed simulation model to create a physical security system to prevent intrusions in a building. Security consists of guards and security cameras. The problem is represented as a binary optimization problem. A new heuristic is proposed to do the security

  19. Simulation and Optimization of Control of Selected Phases of Gyroplane Flight

    Directory of Open Access Journals (Sweden)

    Wienczyslaw Stalewski

    2018-02-01

    Full Text Available Optimization methods are increasingly used to solve problems in aeronautical engineering. Typically, optimization methods are utilized in the design of an aircraft airframe or its structure. The presented study is focused on improvement of aircraft flight control procedures through numerical optimization. The optimization problems concern selected phases of flight of a light gyroplane—a rotorcraft using an unpowered rotor in autorotation to develop lift and an engine-powered propeller to provide thrust. An original methodology of computational simulation of rotorcraft flight was developed and implemented. In this approach the aircraft motion equations are solved step-by-step, simultaneously with the solution of the Unsteady Reynolds-Averaged Navier–Stokes equations, which is conducted to assess aerodynamic forces acting on the aircraft. As a numerical optimization method, the BFGS (Broyden–Fletcher–Goldfarb–Shanno) algorithm was adapted. The developed methodology was applied to optimize the flight control procedures in selected stages of gyroplane flight in direct proximity to the ground, where proper control of the aircraft is critical to ensure flight safety and performance. The results of conducted computational optimizations proved the qualitative correctness of the developed methodology. The research results can be helpful in the design of easy-to-control gyroplanes and also in the training of pilots for this type of rotorcraft.
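
    To illustrate how a gradient-based optimizer such as BFGS can shape a control schedule, the sketch below optimizes a piecewise-constant vertical-control profile of a toy point-mass model with scipy. The dynamics, cost weights and 12-step parameterization are invented for the example; the paper instead couples the optimizer to a full flight simulation with URANS aerodynamics.

        import numpy as np
        from scipy.optimize import minimize

        def simulate_landing(controls, dt=0.5):
            """Toy point-mass stand-in for the flight simulation: integrate vertical
            speed under a piecewise-constant control schedule and return the final state."""
            h, w = 30.0, -4.0                      # initial height [m] and sink rate [m/s]
            for u in controls:                     # u: commanded vertical acceleration
                w += u * dt
                h += w * dt
            return h, w

        def cost(controls):
            """Penalize remaining height and touchdown sink rate plus control effort."""
            h, w = simulate_landing(controls)
            return h ** 2 + 10.0 * w ** 2 + 0.1 * np.sum(np.asarray(controls) ** 2)

        res = minimize(cost, x0=np.zeros(12), method="BFGS")   # optimize the control schedule
        print(res.fun, res.x.round(2))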

  20. Developing a simulation framework for safe and optimal trajectories considering drivers’ driving style

    DEFF Research Database (Denmark)

    Gruber, Thierry; Larue, Grégoire S.; Rakotonirainy, Andry

    2017-01-01

    Advanced driving assistance systems (ADAS) have huge potential for improving road safety and travel times. However, their take-up in the market is very slow, and these systems should consider drivers' preferences to increase adoption rates. The aim of this study is to develop a model providing drivers with the optimal trajectory considering the motorist's driving style in real time. Travel duration and safety are the main parameters used to find the optimal trajectory. A simulation framework to determine the optimal trajectory was developed in which the ego car travels in a highway environment...

  1. Real-time simulation requirements for study and optimization of power system controls

    Energy Technology Data Exchange (ETDEWEB)

    Nakra, Harbans; McCallum, David; Gagnon, Charles [Institut de Recherche d` Hydro-Quebec, Quebec, PQ (Canada); Venne, Andre; Gagnon, Julien [Hydro-Quebec, Montreal, PQ (Canada)

    1994-12-31

    At the time of ordering the multi-terminal dc system linking Hydro-Quebec with New England, Hydro-Quebec also ordered functionally duplicate controls of all the converters and installed these in its real-time simulation laboratory. The Hydro-Quebec ac system was also simulated in detail, and the testing of the controls was thus made possible in a realistic environment. Many field tests were duplicated and many additional tests were done for correction and optimization. This paper describes some of the features of the real-time simulation carried out for this purpose. (author) 3 figs.

  2. GATE simulation of a LYSO-based SPECT imager: Validation and detector optimization

    International Nuclear Information System (INIS)

    Li, Suying; Zhang, Qiushi; Xie, Zhaoheng; Liu, Qi; Xu, Baixuan; Yang, Kun; Li, Changhui; Ren, Qiushi

    2015-01-01

    This paper presents a small animal SPECT system that is based on a cerium-doped lutetium–yttrium oxyorthosilicate (LYSO) scintillation crystal, position sensitive photomultiplier tubes (PSPMTs) and a parallel-hole collimator. A spatial resolution test and an animal experiment were performed to demonstrate the imaging performance of the detector. Preliminary results indicated a spatial resolution of 2.5 mm FWHM, which did not meet our design requirement. Therefore, we simulated this gamma camera using GATE (GEANT 4 Application for Tomographic Emission), aiming to achieve a detector spatial resolution of less than 2 mm. First, the GATE simulation process was validated through comparison between simulated and experimental data. This also indicates the accuracy and effectiveness of GATE simulation for a LYSO-based gamma camera. Then different detector sampling methods (crystal sizes of 1.5 and 1 mm) and collimator designs (collimator heights of 30, 34.8, 38, and 43 mm) were studied to find an optimized parameter set. Changes in detector sensitivity with the different parameter sets, which produced different spatial resolutions, were also examined. Trade-off curves of spatial resolution and sensitivity were plotted to determine the optimal collimator height for each sampling method. Simulation results show that a scintillation crystal size of 1 mm and a collimator height of 38 mm, which yield a spatial resolution of ∼1.8 mm and a sensitivity of ∼0.065 cps/kBq, constitute an ideal configuration for our SPECT imager design.

  3. Stochastic simulation and robust design optimization of integrated photonic filters

    Directory of Open Access Journals (Sweden)

    Weng Tsui-Wei

    2016-07-01

    Full Text Available Manufacturing variations are becoming an unavoidable issue in modern fabrication processes; therefore, it is crucial to be able to include stochastic uncertainties in the design phase. In this paper, integrated photonic coupled ring resonator filters are considered as an example of significant interest. The sparsity structure in photonic circuits is exploited to construct a sparse combined generalized polynomial chaos model, which is then used to analyze related statistics and perform robust design optimization. Simulation results show that the optimized circuits are more robust to fabrication process variations and achieve a reduction of 11%–35% in the mean square errors of the 3 dB bandwidth compared to unoptimized nominal designs.

  4. A two-parameter preliminary optimization study for a fluidized-bed boiler through a comprehensive mathematical simulator

    Energy Technology Data Exchange (ETDEWEB)

    Rabi, Jose A.; Souza-Santos, Marcio L. de [Universidade Estadual de Campinas, SP (Brazil). Faculdade de Engenharia Mecanica. Dept. de Energia]. E-mails: jrabi@fem.unicamp.br; dss@fem.unicamp.br

    2000-07-01

    Modeling and simulation of fluidized-bed equipment have demonstrated their importance as tools for the design and optimization of industrial equipment. Accordingly, this work carries out an optimization study of a fluidized-bed boiler with the aid of a comprehensive mathematical simulator. The configuration data of the boiler are based on a particular Babcock and Wilcox Co. (USA) test unit. Due to their importance, the number of tubes in the bed section and the air excess are chosen as the parameters upon which the optimization study is based. In turn, the fixed-carbon conversion factor and the boiler efficiency are chosen as two distinct optimization objectives. The results from both preliminary searches are compared. The present work is intended to be just a study of possible routes for the future optimization of larger boilers. Nonetheless, the present discussion might give some insight into the equipment behavior. (author)

  5. Cross-entropy optimization for neuromodulation.

    Science.gov (United States)

    Brar, Harleen K; Yunpeng Pan; Mahmoudi, Babak; Theodorou, Evangelos A

    2016-08-01

    This study presents a reinforcement learning approach for the optimization of the proportional-integral gains of the feedback controller represented in a computational model of epilepsy. The chaotic oscillator model provides a feedback control systems view of the dynamics of an epileptic brain with an internal feedback controller representative of the natural seizure suppression mechanism within the brain circuitry. Normal and pathological brain activity is simulated in this model by adjusting the feedback gain values of the internal controller. With insufficient gains, the internal controller cannot provide enough feedback to the brain dynamics causing an increase in correlation between different brain sites. This increase in synchronization results in the destabilization of the brain dynamics, which is representative of an epileptic seizure. To provide compensation for an insufficient internal controller an external controller is designed using proportional-integral feedback control strategy. A cross-entropy optimization algorithm is applied to the chaotic oscillator network model to learn the optimal feedback gains for the external controller instead of hand-tuning the gains to provide sufficient control to the pathological brain and prevent seizure generation. The correlation between the dynamics of neural activity within different brain sites is calculated for experimental data to show similar dynamics of epileptic neural activity as simulated by the network of chaotic oscillators.
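
    A bare-bones cross-entropy loop for tuning PI gains looks like the Python sketch below: sample candidate gains from a Gaussian, score them by a closed-loop cost, and refit the distribution to the elite samples. The first-order toy plant and the integrated-squared-error cost stand in for the chaotic oscillator network used in the study; all constants are illustrative.

        import numpy as np

        def cross_entropy_pi(closed_loop_cost, n_iter=30, pop=50, elite_frac=0.2):
            """Cross-entropy optimization of proportional-integral gains (Kp, Ki):
            sample gains from a Gaussian, keep the elite fraction with the lowest
            closed-loop cost, and refit the sampling distribution to the elites."""
            rng = np.random.default_rng(0)
            mean, std = np.array([1.0, 1.0]), np.array([1.0, 1.0])
            n_elite = max(1, int(pop * elite_frac))
            for _ in range(n_iter):
                gains = rng.normal(mean, std, size=(pop, 2)).clip(min=0.0)   # nonnegative gains
                costs = np.array([closed_loop_cost(kp, ki) for kp, ki in gains])
                elites = gains[np.argsort(costs)[:n_elite]]
                mean, std = elites.mean(axis=0), elites.std(axis=0) + 1e-6   # refit distribution
            return mean

        def closed_loop_cost(kp, ki, setpoint=0.0, steps=300, dt=0.01):
            """Toy first-order plant with PI feedback; cost is the integrated squared error."""
            x, integ, cost = 1.0, 0.0, 0.0
            for _ in range(steps):
                err = setpoint - x
                integ += err * dt
                u = kp * err + ki * integ
                x += (-0.5 * x + u) * dt          # plant dynamics (stand-in for the oscillator model)
                cost += err ** 2 * dt
            return cost

        print(cross_entropy_pi(closed_loop_cost))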

  6. Optimal Bidding Strategy for Renewable Microgrid with Active Network Management

    Directory of Open Access Journals (Sweden)

    Seung Wan Kim

    2016-01-01

    Full Text Available Active Network Management (ANM) enables a microgrid to optimally dispatch the active/reactive power of its Renewable Distributed Generation (RDG) and Battery Energy Storage System (BESS) units in real time. Thus, a microgrid with high penetration of RDGs can handle their uncertainties and variabilities to achieve the stable operation using ANM. However, the actual power flow in the line connecting the main grid and microgrid may deviate significantly from the day-ahead bids if the bids are determined without consideration of the real-time adjustment through ANM, which will lead to a substantial imbalance cost. Therefore, this study proposes a formulation for obtaining an optimal bidding which reflects the change of power flow in the connecting line by real-time adjustment using ANM. The proposed formulation maximizes the expected profit of the microgrid considering various network and physical constraints. The effectiveness of the proposed bidding strategy is verified through the simulations with a 33-bus test microgrid. The simulation results show that the proposed bidding strategy improves the expected operating profit by reducing the imbalance cost to a greater degree compared to the basic bidding strategy without consideration of ANM.

  7. Improved Vegetation Profiles with GOCI Imagery Using Optimized BRDF Composite

    Directory of Open Access Journals (Sweden)

    Sang-il Kim

    2016-01-01

    The purpose of this study was to optimize a composite method for the Geostationary Ocean Color Imager (GOCI), the first geostationary ocean color sensor in the world. Before interpreting the sensitivity of each composite with ground measurements, we evaluated the accuracy of the bidirectional reflectance distribution function (BRDF) performance by comparing modeled surface reflectance from the BRDF simulation with GOCI-measured surface reflectance for each composite period. The root mean square error values for modeled and measured surface reflectance showed reasonable accuracy for all composite periods, since each BRDF composite period includes at least seven cloud-free angular samples for all BRDF performances. GOCI-BRDF-adjusted NDVIs with four different composite periods were then compared with field-observed NDVI, and the sensitivity of the GOCI-BRDF-adjusted NDVIs to temporal crop dynamics was interpreted. The results showed that the seasonal vegetation index profiles were similar to the vegetation growth curves in both the field observations from crop scans and the GOCI normalized difference vegetation index (NDVI) data. Finally, we showed that a 12-day composite period was optimal in terms of BRDF simulation accuracy, surface coverage, and real-time sensitivity.
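    As a toy illustration of the NDVI comparison described above (not the paper's actual GOCI processing chain), the sketch below computes NDVI from hypothetical BRDF-adjusted red/NIR reflectances for several composite periods and scores each period by RMSE against field-observed NDVI; all numbers are invented.

```python
import numpy as np

def ndvi(nir, red):
    """Normalized difference vegetation index from surface reflectances."""
    return (nir - red) / (nir + red)

def rmse(predicted, observed):
    return float(np.sqrt(np.mean((np.asarray(predicted) - np.asarray(observed)) ** 2)))

# Hypothetical BRDF-adjusted reflectances for four composite periods (in days)
composites = {
    4:  {"nir": np.array([0.32, 0.35, 0.40]), "red": np.array([0.08, 0.07, 0.06])},
    8:  {"nir": np.array([0.33, 0.36, 0.41]), "red": np.array([0.07, 0.06, 0.05])},
    12: {"nir": np.array([0.34, 0.37, 0.42]), "red": np.array([0.06, 0.05, 0.045])},
    16: {"nir": np.array([0.30, 0.33, 0.38]), "red": np.array([0.09, 0.08, 0.07])},
}
field_ndvi = np.array([0.68, 0.73, 0.79])  # hypothetical crop-scan observations

for period, bands in composites.items():
    score = rmse(ndvi(bands["nir"], bands["red"]), field_ndvi)
    print(f"{period:>2}-day composite: NDVI RMSE vs field = {score:.3f}")
```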

  8. NDDP multi-stage flash desalination process simulator design process optimization

    International Nuclear Information System (INIS)

    Sashi Kumar, G.N.; Mahendra, A.K.; Sanyal, A.; Gouthaman, G.

    2009-03-01

    The improvement of the NDDP-MSF plant's performance ratio (PR) from the design value of 9.0 to 13.1 was achieved by optimizing the plant's operating parameters within the feasible zone of operation. The plant has 20% excess heat transfer area over the design condition, which made it possible to reach a PR of 15.1 after optimization. Thus we obtained (1) a 45% increase in output over the design value from the optimization carried out with the design heat transfer area, and (2) a 68% increase in output over the design value from the optimization carried out with the increased heat transfer area. This report discusses the approach, methodology and results of the optimization study. A simulator, MSFSIM, which predicts the performance of a multi-stage flash (MSF) desalination plant, has been coupled with a Genetic Algorithm (GA) optimizer. Exhaustive optimization case studies have been conducted on this plant with the objective of increasing the performance ratio (PR). The steady-state optimization was based on obtaining the best stage-wise pressure profile to enhance thermal efficiency, which in turn improves the performance ratio. Apart from this, the recirculating brine flow rate was also optimized. This optimization study enabled us to increase the PR of the NDDP-MSF plant from the design value of 9.0 to an optimized value of 13.1. The actual plant is provided with 20% additional heat transfer area over and above the design heat transfer area; optimization with this additional area raised the PR to 15.1. A desire to maintain equal flashing rates in all stages of the MSF plant (a feature required for long plant life and to avoid a cascading effect of non-flashing triggered by any stage) has also been achieved, and the deviation in flashing rates between stages has been reduced. The startup characteristics of the plant (i.e., the variation of stage pressure and of recirculation flow rate with time) have been optimized with a target to minimize the

  9. Virtual reality simulation for the optimization of endovascular procedures: current perspectives.

    Science.gov (United States)

    Rudarakanchana, Nung; Van Herzeele, Isabelle; Desender, Liesbeth; Cheshire, Nicholas J W

    2015-01-01

    Endovascular technologies are rapidly evolving, often requiring coordination and cooperation between clinicians and technicians from diverse specialties. These multidisciplinary interactions lead to challenges that are reflected in the high rate of errors occurring during endovascular procedures. Endovascular virtual reality (VR) simulation has evolved from simple benchtop devices to full physics simulators with advanced haptics, dynamic imaging and physiological controls. The latest developments in this field include the use of fully immersive simulated hybrid angiosuites to train whole endovascular teams in crisis resource management, and novel technologies that enable practitioners to build VR simulations based on patient-specific anatomy. As our understanding of the skills, both technical and nontechnical, required for optimal endovascular performance improves, the requisite tools for objective assessment of these skills are being developed and will further enable the use of VR simulation in the training and assessment of endovascular interventionalists and their entire teams. Simulation training that allows deliberate practice without danger to patients may be key to bridging the gap between new endovascular technology and improved patient outcomes.

  10. 3D Model Optimization of Four-Facet Drill for 3D Drilling Simulation

    Directory of Open Access Journals (Sweden)

    Buranský Ivan

    2016-09-01

    The article focuses on the optimization of a four-facet drill for 3D drilling numerical modelling. For the optimization, reverse engineering with PowerShape software was used. The design of the four-facet drill was created in NumrotoPlus software. The modified 3D model of the drill was used in the numerical analysis of cutting forces. Verification of the accuracy of the 3D models obtained by reverse engineering was carried out using colour deviation maps. The CAD model was in the STEP format, which is ideal for the simulation software: STEP is a solid-model format, and the simulation software automatically splits the 3D model into finite elements. The STEP model was therefore more suitable than the STL model.

  11. Generalized DSS shell for developing simulation and optimization hydro-economic models of complex water resources systems

    Science.gov (United States)

    Pulido-Velazquez, Manuel; Lopez-Nicolas, Antonio; Harou, Julien J.; Andreu, Joaquin

    2013-04-01

    Hydrologic-economic models allow integrated analysis of water supply, demand and infrastructure management at the river basin scale. These models simultaneously analyze the engineering, hydrologic and economic aspects of water resources management. Two new tools have been designed to develop models within this approach: a simulation tool (SIM_GAMS), for models in which water is allocated each month based on supply priorities to competing uses and system operating rules, and an optimization tool (OPT_GAMS), in which water resources are allocated optimally following economic criteria. The characterization of the water resource network system requires a connectivity matrix representing the topology of the elements, generated using HydroPlatform. HydroPlatform, an open-source software platform for network (node-link) models, makes it possible to store, display and export all the information needed to characterize the system. Two generic non-linear models have been programmed in GAMS to use the inputs from HydroPlatform in simulation and optimization models. The simulation model allocates water resources on a monthly basis according to different targets (demands, storage, environmental flows, hydropower production, etc.), priorities and other system operating rules (such as reservoir operating rules). The optimization model's objective function is designed so that the system meets the operational targets (ranked according to priorities) each month while following the system operating rules. This function is analogous to the one used in the simulation module of the DSS AQUATOOL. Each element of the system has its own contribution to the objective function through unit cost coefficients that preserve the relative priority rank and the system operating rules. The model incorporates groundwater and stream-aquifer interaction (allowing conjunctive use simulation) with a wide range of modeling options, from lumped and analytical approaches to parameter-distributed models (eigenvalue approach).

  12. Value for money in particle-mesh plasma simulations

    International Nuclear Information System (INIS)

    Eastwood, J.W.

    1976-01-01

    The established particle-mesh method of simulating a collisionless plasma is discussed. Problems are outlined, and it is stated that, given constraints on mesh size and particle number, the only way to adjust the compromise between dispersive forces, collision time and heating time is by altering the force-calculation cycle. In 'value for money' schemes, the matching of the parts of the force-calculation cycle is optimized. Interparticle forces are considered, and optimized combinations of elements of the force-calculation cycle are compared. Following sections cover the dispersion relation and comparisons with other schemes. (U.K.)

  13. A Multi-level hierarchic Markov process with Bayesian updating for herd optimization and simulation in dairy cattle

    NARCIS (Netherlands)

    Demeter, R.M.; Kristensen, A.R.; Dijkstra, J.; Oude Lansink, A.G.J.M.; Meuwissen, M.P.M.; Arendonk, van J.A.M.

    2011-01-01

    Herd optimization models that determine economically optimal insemination and replacement decisions are valuable research tools to study various aspects of farming systems. The aim of this study was to develop a herd optimization and simulation model for dairy cattle. The model determines

  14. Efficiency of particle swarm optimization applied on fuzzy logic DC motor speed control

    Directory of Open Access Journals (Sweden)

    Allaoua Boumediene

    2008-01-01

    This paper presents the application of fuzzy logic for DC motor speed control using Particle Swarm Optimization (PSO). Firstly, the controller designed according to fuzzy logic rules is such that the system is fundamentally robust. Secondly, the Fuzzy Logic Controller (FLC) used earlier is optimized with PSO so as to obtain an optimal adjustment of the membership functions only. Finally, the FLC is completely optimized by swarm intelligence algorithms. Digital simulation results demonstrate that, in comparison with the FLC, the designed FLC-PSO speed controller achieves better dynamic behavior and superior performance of the DC motor, as well as perfect speed tracking with no overshoot.
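    The following minimal PSO sketch illustrates the kind of swarm search used above; the fitness function `tracking_error` is only a placeholder for the closed-loop speed-tracking error of the fuzzy controller, and all parameter values are assumptions rather than the authors' settings.

```python
import numpy as np

def tracking_error(params):
    """Hypothetical fitness: integrated speed-tracking error of a fuzzy
    controller whose membership-function parameters are `params`."""
    target = np.array([0.5, 1.0, 1.5, 2.0])       # placeholder optimum
    return float(np.sum((params - target) ** 2))

def pso(fitness, dim=4, n_particles=30, n_iter=100, w=0.7, c1=1.5, c2=1.5, seed=1):
    rng = np.random.default_rng(seed)
    x = rng.uniform(0.0, 3.0, size=(n_particles, dim))   # particle positions
    v = np.zeros_like(x)                                  # particle velocities
    pbest, pbest_val = x.copy(), np.array([fitness(p) for p in x])
    gbest = pbest[np.argmin(pbest_val)].copy()
    for _ in range(n_iter):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = x + v
        vals = np.array([fitness(p) for p in x])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        gbest = pbest[np.argmin(pbest_val)].copy()
    return gbest, float(min(pbest_val))

if __name__ == "__main__":
    best, err = pso(tracking_error)
    print("best membership parameters:", best, "error:", err)
```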

  15. A Water Hammer Protection Method for Mine Drainage System Based on Velocity Adjustment of Hydraulic Control Valve

    Directory of Open Access Journals (Sweden)

    Yanfei Kou

    2016-01-01

    Water hammer analysis is a fundamental part of the pipeline system design process for water distribution networks. The main characteristics of a mine drainage system are the limited space and the high cost of changing equipment and pipelines. In order to solve the problem of protecting a mine drainage system against valve-closing water hammer, a protection method based on velocity adjustment of the HCV (Hydraulic Control Valve) is proposed in this paper. The mathematical model of water hammer fluctuations is established based on the characteristic line method. Then, boundary conditions for water hammer control of the mine drainage system are determined and a simplified model is established. The optimal adjustment strategy is solved from the mathematical model of multistage valve closing. Taking a mine drainage system as an example, comparison between simulations and experiments shows that the proposed method and the optimized valve-closing strategy are effective.
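    A minimal sketch of the characteristic-line (method-of-characteristics) computation for valve-closing water hammer is given below, assuming a single pipe with an upstream reservoir and a downstream valve; the pipe data and the two-stage closure law are illustrative placeholders, not the paper's optimized strategy.

```python
import numpy as np

# Method-of-characteristics (MOC) sketch for valve-closing water hammer.
a, L, D, f = 1000.0, 500.0, 0.3, 0.02          # wave speed [m/s], length, diameter, friction
g, H_res, Q0 = 9.81, 60.0, 0.15                # gravity, reservoir head [m], initial flow [m3/s]
N = 20                                         # number of reaches
dx, dt = L / N, (L / N) / a                    # MOC grid (Courant number = 1)
A = np.pi * D**2 / 4
B = a / (g * A)                                # characteristic impedance
R = f * dx / (2 * g * D * A**2)                # friction coefficient per reach

def tau(t):
    """Dimensionless valve opening: illustrative two-stage closure
    (fast to 30% in 0.5 s, then slow to fully closed at 2.0 s)."""
    if t < 0.5:
        return 1.0 - 1.4 * t
    if t < 2.0:
        return 0.3 - 0.2 * (t - 0.5)
    return 0.0

H = H_res - np.arange(N + 1) * R * Q0 * abs(Q0)   # steady-state head line
Q = np.full(N + 1, Q0)
H0_valve = H[-1]

t, t_end, max_head = 0.0, 6.0, H[-1]
while t < t_end:
    Hn, Qn = H.copy(), Q.copy()
    Cp = H[:-1] + B * Q[:-1] - R * Q[:-1] * np.abs(Q[:-1])     # C+ characteristics
    Cm = H[1:] - B * Q[1:] + R * Q[1:] * np.abs(Q[1:])         # C- characteristics
    Hn[1:-1] = 0.5 * (Cp[:-1] + Cm[1:])                        # interior nodes
    Qn[1:-1] = (Cp[:-1] - Cm[1:]) / (2 * B)
    Hn[0], Qn[0] = H_res, (H_res - Cm[0]) / B                  # upstream reservoir
    Cv = (tau(t) * Q0) ** 2 / (2 * H0_valve)                   # downstream valve to atmosphere
    Qn[-1] = -B * Cv + np.sqrt((B * Cv) ** 2 + 2 * Cv * Cp[-1]) if Cv > 0 else 0.0
    Hn[-1] = Cp[-1] - B * Qn[-1]
    H, Q, t = Hn, Qn, t + dt
    max_head = max(max_head, H[-1])

print(f"maximum head at the valve: {max_head:.1f} m")
```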

  16. Applying simulation to optimize plastic molded optical parts

    Science.gov (United States)

    Jaworski, Matthew; Bakharev, Alexander; Costa, Franco; Friedl, Chris

    2012-10-01

    Optical injection molded parts are used in many different industries, including electronics, consumer, medical and automotive, due to their cost and performance advantages compared with alternative materials such as glass. The injection molding process, however, induces elastic (residual stress) and viscoelastic (flow orientation stress) deformation in the molded article, which makes the material's refractive index anisotropic. Being able to predict and correct optical performance issues associated with birefringence early in the design phase is a huge competitive advantage. This paper reviews how to apply simulation analysis of the entire molding process to optimize manufacturability and part performance.

  17. A parameters optimization method for planar joint clearance model and its application for dynamics simulation of reciprocating compressor

    Science.gov (United States)

    Hai-yang, Zhao; Min-qiang, Xu; Jin-dong, Wang; Yong-bo, Li

    2015-05-01

    In order to improve the accuracy of dynamic response simulation for mechanisms with joint clearance, a parameter optimization method for the planar joint clearance contact force model is presented in this paper, and the optimized parameters are applied to the dynamic response simulation of a mechanism with an oversized joint clearance fault. By studying the effect of increased clearance on the parameters of the joint clearance contact force model, the relation between the model parameters for different clearances was derived. Then the dynamic equation of a two-stage reciprocating compressor with four joint clearances was developed using the Lagrange method, and a multi-body dynamic model built in ADAMS software was used to solve this equation. To obtain a simulated dynamic response much closer to that of the experimental tests, the parameters of the joint clearance model, instead of the designed values, were optimized by a genetic algorithm approach. Finally, the optimized parameters were applied to simulate the dynamic response of the model with an oversized joint clearance fault according to the derived parameter relation. The dynamic response of the experimental tests verified the effectiveness of this application.

  18. A Simulation Platform To Model, Optimize And Design Wind Turbines. The Matlab/Simulink Toolbox

    Directory of Open Access Journals (Sweden)

    Anca Daniela HANSEN

    2002-12-01

    In recent years Matlab/Simulink® has become the most widely used software for modeling and simulation of dynamic systems. Wind energy conversion systems are examples of such systems, containing subsystems with different ranges of time constants: wind, turbine, generator, power electronics, transformer and grid. The electrical generator and the power converter need the smallest simulation step, and therefore these blocks determine the simulation speed. This paper presents a new and integrated simulation platform for modeling, optimizing and designing wind turbines. The platform contains different simulation tools: Matlab/Simulink, used as the basic modeling tool, HAWC, DIgSilent and Saber.

  19. Optimal design of wind barriers using 3D computational fluid dynamics simulations

    Science.gov (United States)

    Fang, H.; Wu, X.; Yang, X.

    2017-12-01

    Desertification is a significant global environmental and ecological problem that requires human-regulated control and management. Wind barriers are commonly used to reduce wind velocity or trap drifting sand in arid and semi-arid areas, so the optimal design of wind barriers is critical in aeolian engineering. In the current study, we perform 3D computational fluid dynamics (CFD) simulations of flow passing through wind barriers with different structural parameters. To validate the simulation results, we first compare the simulated flow fields with both wind-tunnel experiments and field measurements. Quantitative analyses of the shelter effect are then conducted based on a series of simulations with different structural parameters (such as wind barrier porosity, row number, inter-row spacing and belt scheme). The results show that wind barriers with a porosity of 0.35 provide the longest shelter distance (i.e., the distance over which the wind velocity reduction exceeds 50%) and are thus recommended in engineering designs. To determine the optimal row number and belt scheme, we introduce a cost function that takes both the wind-velocity reduction effect and the economic expense into account. The calculated cost function shows that a 3-row-belt scheme with an inter-row spacing of 6h (h being the height of the wind barriers) and an inter-belt spacing of 12h is the most effective.
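    The paper's actual cost function is not given in the record; the sketch below shows one plausible form that weighs shelter distance against construction expense, with all weights, unit costs and candidate schemes chosen purely for illustration.

```python
# Illustrative cost function trading shelter effect against construction expense.
def barrier_cost(shelter_distance_h, n_rows, n_belts, unit_row_cost=1.0,
                 w_shelter=1.0, w_expense=0.5):
    """Lower is better: penalize expense, reward shelter distance (in barrier heights h)."""
    expense = unit_row_cost * n_rows * n_belts
    return w_expense * expense - w_shelter * shelter_distance_h

# Hypothetical candidate schemes: (shelter distance in h, rows per belt, number of belts)
candidates = {
    "2-row belt, 6h spacing": (14.0, 2, 3),
    "3-row belt, 6h spacing": (20.0, 3, 3),
    "4-row belt, 6h spacing": (21.0, 4, 3),
}
for name, args in candidates.items():
    print(f"{name}: cost = {barrier_cost(*args):.2f}")
best = min(candidates, key=lambda k: barrier_cost(*candidates[k]))
print("selected scheme:", best)
```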

  20. CLFs-based optimization control for a class of constrained visual servoing systems.

    Science.gov (United States)

    Song, Xiulan; Miaomiao, Fu

    2017-03-01

    In this paper, we use the control Lyapunov function (CLF) technique to present an optimized visual servo control method for constrained eye-in-hand robot visual servoing systems. With knowledge of the camera intrinsic parameters and of the depth changes of the target, visual servo control laws (i.e., translation speed) with adjustable parameters are derived from image point features and a known CLF of the visual servoing system. The Fibonacci method is employed to compute online the optimal values of those adjustable parameters, which yields an optimized control law satisfying the constraints of the visual servoing system. Lyapunov's theorem and the properties of the CLF are used to establish closed-loop stability of the constrained visual servoing system with the optimized control law. One merit of the presented method is that there is no need to compute online the pseudo-inverse of the image Jacobian matrix or the homography matrix. Simulation and experimental results illustrate the effectiveness of the proposed method. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.
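    As an illustration of the Fibonacci method mentioned above, the following sketch performs a Fibonacci-ratio line search on a hypothetical one-dimensional cost for an adjustable control parameter; for brevity it re-evaluates both interior points at each stage instead of reusing one evaluation, and the cost function is an invented placeholder.

```python
def fibonacci_search(f, a, b, n=20):
    """Minimize a unimodal f on [a, b] using n Fibonacci-ratio interval reductions.
    (This simplified sketch re-evaluates both interior points at every step.)"""
    fib = [1, 1]
    while len(fib) < n + 2:
        fib.append(fib[-1] + fib[-2])
    for k in range(n, 1, -1):
        r = fib[k - 1] / fib[k]          # Fibonacci reduction ratio for this stage
        x1 = b - r * (b - a)             # left interior point
        x2 = a + r * (b - a)             # right interior point
        if f(x1) <= f(x2):
            b = x2                        # minimum cannot lie right of x2
        else:
            a = x1                        # minimum cannot lie left of x1
    return 0.5 * (a + b)

# Hypothetical 1-D cost for the adjustable servo parameter
cost = lambda lam: (lam - 0.8) ** 2 + 0.1
print("optimal adjustable parameter:", fibonacci_search(cost, 0.0, 2.0))
```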

  1. Optimal Spatial Subdivision method for improving geometry navigation performance in Monte Carlo particle transport simulation

    International Nuclear Information System (INIS)

    Chen, Zhenping; Song, Jing; Zheng, Huaqing; Wu, Bin; Hu, Liqin

    2015-01-01

    Highlights: • The subdivision combines the advantages of uniform and non-uniform schemes. • The grid models were shown to be more efficient than traditional CSG models. • Monte Carlo simulation performance was enhanced by Optimal Spatial Subdivision. • Efficiency gains were obtained for realistic whole reactor core models. - Abstract: Geometry navigation is one of the key factors dominating Monte Carlo particle transport simulation performance for large-scale whole-reactor models. In such cases, spatial subdivision is an easily established and high-potential method to improve run-time performance. In this study, a dedicated method, named Optimal Spatial Subdivision, is proposed for generating numerically optimal spatial grid models, which are demonstrated to be more efficient for geometry navigation than traditional Constructive Solid Geometry (CSG) models. The method uses a recursive subdivision algorithm to subdivide a CSG model into non-overlapping grid cells, which are labeled as totally occupied, partially occupied, or not occupied at all by CSG objects. Most importantly, at each stage of subdivision, a quality factor based on a cost estimation function is derived to evaluate the candidate subdivision schemes; only the scheme with the optimal quality factor is chosen as the final subdivision strategy for generating the grid model. Eventually, the model built with the optimal quality factors is efficient for Monte Carlo particle transport simulation. The method has been implemented and integrated into the Super Monte Carlo program SuperMC developed by the FDS Team. Test cases were used to highlight the performance gains that could be achieved. Results showed that Monte Carlo simulation runtime could be reduced significantly when using the new method, even for cases reaching whole reactor core model sizes.
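    The sketch below illustrates the general idea of cost-guided recursive subdivision in a simplified 2D setting: a cell is split further while a crude cost estimate (standing in for the paper's quality factor) remains too high. The geometry, thresholds and cost model are hypothetical, not SuperMC's.

```python
from dataclasses import dataclass

@dataclass
class Circle:
    """Stand-in for a CSG object."""
    cx: float
    cy: float
    r: float

    def overlaps(self, x0, y0, x1, y1):
        nx = min(max(self.cx, x0), x1)      # closest point of the box to the center
        ny = min(max(self.cy, y0), y1)
        return (nx - self.cx) ** 2 + (ny - self.cy) ** 2 <= self.r ** 2

def navigation_cost(cell, objects):
    """Crude cost estimate: cell area times number of overlapping objects."""
    x0, y0, x1, y1 = cell
    n = sum(obj.overlaps(*cell) for obj in objects)
    return (x1 - x0) * (y1 - y0) * n

def subdivide(cell, objects, max_cost=0.05, max_depth=6, depth=0):
    """Return leaf cells; split a cell while its estimated cost is too high."""
    if depth >= max_depth or navigation_cost(cell, objects) <= max_cost:
        return [cell]
    x0, y0, x1, y1 = cell
    xm, ym = 0.5 * (x0 + x1), 0.5 * (y0 + y1)
    leaves = []
    for child in [(x0, y0, xm, ym), (xm, y0, x1, ym),
                  (x0, ym, xm, y1), (xm, ym, x1, y1)]:
        leaves += subdivide(child, objects, max_cost, max_depth, depth + 1)
    return leaves

objects = [Circle(0.3, 0.3, 0.2), Circle(0.7, 0.6, 0.25)]
grid = subdivide((0.0, 0.0, 1.0, 1.0), objects)
print(f"{len(grid)} grid cells generated")
```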

  2. Simulation-Based Early Prediction of Rocket, Artillery, and Mortar Trajectories and Real-Time Optimization for Counter-RAM Systems

    Directory of Open Access Journals (Sweden)

    Arash Ramezani

    2017-01-01

    The threat imposed by terrorist attacks is a major hazard for military installations, for example in Iraq and Afghanistan. The large numbers of rockets, artillery projectiles, and mortar grenades (RAM) that are available pose serious threats to military forces. An important task for international research and development is to protect military installations and implement an accurate early warning system against RAM threats on conventional computer systems in out-of-area field camps. This work presents a method for determining the trajectory, caliber, and type of a projectile based on the estimation of the ballistic coefficient. A simulation-based optimization process is presented that enables iterative adjustment of predicted trajectories in real time. Analytical and numerical methods are used to reduce computing time for out-of-area missions and low-end computer systems. A GUI is programmed to present the results and allows comparison between predicted and actual trajectories. Finally, different aspects of and restrictions on measuring the quality of the results are discussed.
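    A minimal sketch of ballistic-coefficient estimation is shown below, assuming a point-mass trajectory model with quadratic drag and a simple grid search over candidate coefficients against synthetic 'radar' observations; the projectile data and fitting scheme are illustrative, not the paper's method.

```python
import numpy as np

g, rho = 9.81, 1.225   # gravity [m/s^2], air density [kg/m^3]

def simulate(beta, v0=300.0, angle_deg=45.0, dt=0.01):
    """Integrate a point-mass trajectory with quadratic drag until impact;
    beta = m / (Cd * A) is the ballistic coefficient. Returns (t, x, y) rows."""
    vx = v0 * np.cos(np.radians(angle_deg))
    vy = v0 * np.sin(np.radians(angle_deg))
    x = y = t = 0.0
    out = [(t, x, y)]
    while y >= 0.0:
        v = np.hypot(vx, vy)
        ax = -0.5 * rho * v * vx / beta          # drag deceleration, x component
        ay = -g - 0.5 * rho * v * vy / beta      # gravity plus drag, y component
        vx, vy = vx + ax * dt, vy + ay * dt
        x, y, t = x + vx * dt, y + vy * dt, t + dt
        out.append((t, x, y))
    return np.array(out)

def fit_beta(observed, candidates=np.linspace(500, 3000, 26)):
    """Pick the ballistic coefficient whose early trajectory best matches the track."""
    best, best_err = None, np.inf
    for beta in candidates:
        traj = simulate(beta)
        n = min(len(traj), len(observed))
        err = np.sum((traj[:n, 1:] - observed[:n, 1:]) ** 2)
        if err < best_err:
            best, best_err = beta, err
    return best

truth = simulate(beta=1500.0)                    # synthetic "radar" track
beta_hat = fit_beta(truth[:300])                 # use only the first few seconds
print(f"estimated ballistic coefficient: {beta_hat:.0f} kg/m^2")
print(f"predicted impact range: {simulate(beta_hat)[-1, 1]:.0f} m")
```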

  3. Structure Optimization and Numerical Simulation of Nozzle for High Pressure Water Jetting

    Directory of Open Access Journals (Sweden)

    Shuce Zhang

    2015-01-01

    Three kinds of nozzles normally used in industrial production are numerically simulated, and the structure of the nozzle with the best jetting performance among the three is optimized. The R90 nozzle displays the best jetting properties of the three, owing in part to the smooth transition of its inner surface. Simulation results for all sample nozzles in this study show that the helix nozzle ultimately displays the best jetting performance. The jetting velocity magnitude along the Y and Z coordinates is not symmetrical for the helix nozzle. Compared to simply changing the jetting angle, revolving the jet issued from the helix nozzle creates a grinding-wheel effect on the cleaning surface, which produces not only an impact effect but also a shearing action on the cleaning object. This shearing action improves the cleaning process overall and forms a wider effective cleaning range, thus yielding a broader jet width.

  4. Using simulation-based optimization to improve performance at a tire manufacturing company

    Directory of Open Access Journals (Sweden)

    Mohamad Darayi

    2013-04-01

    In this paper, a simulation-optimization-based decision support tool has been developed to study capacity enhancement scenarios at a tire manufacturing company located in Iran. The company is experiencing challenges in synchronizing production output with customer demand, causing an unbalanced work-in-process (WIP) inventory distribution throughout the tire manufacturing process. A new opportunity to increase the supply of raw materials by fifty percent, together with the expected growth in market demand, necessitates this study of the current company situation. This research, supported by the company, analyzes whether the current production logistics system can respond to the increased market demand, considering the raw material expansion. Implementation of a proposed hybrid push/pull production control strategy, together with facility capacity enhancement options in bottleneck stations and/or heterogeneous lines within the plant, is investigated with the proposed simulation-optimization methodology.

  5. Simulation-Optimization Framework for Synthesis and Design of Natural Gas Downstream Utilization Networks

    Directory of Open Access Journals (Sweden)

    Saad A. Al-Sobhi

    2018-02-01

    Many potential diversification and conversion options are available for the utilization of natural gas resources, and several design configurations and technology choices exist for the conversion of natural gas to value-added products. Therefore, a detailed mathematical model is desirable for selecting the optimal configuration and operating mode among the various options available. In this study, we present a simulation-optimization framework for the optimal selection of economically and environmentally sustainable pathways for natural gas downstream utilization networks by optimizing process design and operational decisions. The main processes (e.g., LNG, GTL, and methanol production), along with different design alternatives in terms of flow-sheeting for each main processing unit (namely syngas preparation, liquefaction, N2 rejection, hydrogen, FT synthesis, methanol synthesis, FT upgrade, and methanol upgrade units), are used for superstructure development. These processes are simulated using ASPEN Plus V7.3 to determine the yields of different processing units under various operating modes. The model has been applied to maximize the total profit of the natural gas utilization system with penalties for environmental impact, represented by the CO2eq emissions obtained using ASPEN Plus for each flowsheet configuration and operating mode option. The performance of the proposed modeling framework is demonstrated using a case study.

  6. Influence of the method of optimizing adjustments of ARV-SD on attainable degree of system stability. Vliyaniye metoda optimizatsii nastroyek ARV-SD na dostizhimuyu stepen ustoychivosti sistemy

    Energy Technology Data Exchange (ETDEWEB)

    Gruzdev, I.A.; Trudospekova, G.Kh.

    1983-01-01

    An examination is made of the efficiency of the methods of successive and simultaneous optimization of adjustments of ARV-SD (ARV of strong action) of several PP. It is shown that with the use of the method of simultaneous optimization for an idealized model of complex EPS, it is possible to attain absolute controllability of the degree of stability.

  7. Optimizing maintenance and repair policies via a combination of genetic algorithms and Monte Carlo simulation

    International Nuclear Information System (INIS)

    Marseguerra, M.; Zio, E.

    2000-01-01

    In this paper we present an optimization approach based on the combination of a genetic algorithm maximization procedure with Monte Carlo simulation. The approach is applied within the context of plant logistic management, specifically to the choice of maintenance and repair strategies. A stochastic model of plant operation is developed from the standpoint of its reliability/availability behavior, i.e. of the failure/repair/maintenance processes of its components. The model is evaluated by Monte Carlo simulation in terms of the economic costs and revenues of operation. The flexibility of the Monte Carlo method allows us to include several practical aspects such as stand-by operation modes, deteriorating repairs, aging, sequences of periodic maintenances, the number of repair teams available for different kinds of repair interventions (mechanical, electronic, hydraulic, etc.), and component priority rankings. A genetic algorithm is then utilized to optimize the component maintenance periods and the number of repair teams. The fitness function object of the optimization is a profit function which inherently accounts for the safety and economic performance of the plant and whose value is computed by the above Monte Carlo simulation model. For an efficient combination of genetic algorithms and Monte Carlo simulation, only a few hundred Monte Carlo histories are performed for each potential solution proposed by the genetic algorithm. Statistical significance of the results for the solutions of interest (i.e. the best ones) is then attained by exploiting the fact that, during the population evolution, fit chromosomes appear repeatedly many times. The proposed optimization approach is applied to two case studies of increasing complexity.
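    The following sketch illustrates the GA/Monte Carlo coupling described above on a toy plant-profit model: each candidate (maintenance period, number of repair teams) is scored with only a few hundred Monte Carlo histories. All failure rates, costs and GA settings are invented for illustration, not the authors' model.

```python
import random

def monte_carlo_profit(maintenance_period, repair_teams, histories=200, horizon=1000.0):
    """Average profit over a few hundred Monte Carlo histories of a toy plant."""
    profit = 0.0
    for _ in range(histories):
        t, revenue = 0.0, 0.0
        while t < horizon:
            ttf = random.expovariate(1.0 / 500.0)            # sampled time to failure
            up = min(ttf, maintenance_period)                 # run until failure or maintenance
            revenue += up * 1.0                               # revenue while operating
            downtime = (8.0 if ttf < maintenance_period else 2.0) / repair_teams
            revenue -= downtime * 5.0 + repair_teams * 0.5    # repair cost + team cost
            t += up + downtime
        profit += revenue
    return profit / histories

def genetic_search(pop_size=20, generations=30):
    pop = [(random.uniform(50, 800), random.randint(1, 4)) for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(pop, key=lambda c: -monte_carlo_profit(*c))
        parents = scored[: pop_size // 2]                     # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            period = 0.5 * (a[0] + b[0]) * random.uniform(0.9, 1.1)   # crossover + mutation
            teams = max(1, min(4, random.choice([a[1], b[1]]) + random.choice([-1, 0, 1])))
            children.append((period, teams))
        pop = parents + children
    return max(pop, key=lambda c: monte_carlo_profit(*c))

if __name__ == "__main__":
    best = genetic_search()
    print(f"best maintenance period: {best[0]:.0f} h with {best[1]} repair team(s)")
```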

  8. Simulation-based optimization framework for reuse of agricultural drainage water in irrigation.

    Science.gov (United States)

    Allam, A; Tawfik, A; Yoshimura, C; Fleifle, A

    2016-05-01

    A simulation-based optimization framework for agricultural drainage water (ADW) reuse has been developed through the integration of a water quality model (QUAL2Kw) and a genetic algorithm. This framework was applied to the Gharbia drain in the Nile Delta, Egypt, in summer and winter 2012. First, the water quantity and quality of the drain was simulated using the QUAL2Kw model. Second, uncertainty analysis and sensitivity analysis based on Monte Carlo simulation were performed to assess QUAL2Kw's performance and to identify the most critical variables for determination of water quality, respectively. Finally, a genetic algorithm was applied to maximize the total reuse quantity from seven reuse locations with the condition not to violate the standards for using mixed water in irrigation. The water quality simulations showed that organic matter concentrations are critical management variables in the Gharbia drain. The uncertainty analysis showed the reliability of QUAL2Kw to simulate water quality and quantity along the drain. Furthermore, the sensitivity analysis showed that the 5-day biochemical oxygen demand, chemical oxygen demand, total dissolved solids, total nitrogen and total phosphorous are highly sensitive to point source flow and quality. Additionally, the optimization results revealed that the reuse quantities of ADW can reach 36.3% and 40.4% of the available ADW in the drain during summer and winter, respectively. These quantities meet 30.8% and 29.1% of the drainage basin requirements for fresh irrigation water in the respective seasons. Copyright © 2016 Elsevier Ltd. All rights reserved.

  9. Covariate-adjusted measures of discrimination for survival data

    DEFF Research Database (Denmark)

    White, Ian R; Rapsomaniki, Eleni; Frikke-Schmidt, Ruth

    2015-01-01

    ... by the study design (e.g. age and sex) influence discrimination and can make it difficult to compare model discrimination between studies. Although covariate adjustment is a standard procedure for quantifying disease-risk factor associations, there are no covariate adjustment methods for discrimination statistics in censored survival data. OBJECTIVE: To develop extensions of the C-index and D-index that describe the prognostic ability of a model adjusted for one or more covariate(s). METHOD: We define a covariate-adjusted C-index and D-index for censored survival data, propose several estimators, and investigate their performance in simulation studies and in data from a large individual participant data meta-analysis, the Emerging Risk Factors Collaboration. RESULTS: The proposed methods perform well in simulations. In the Emerging Risk Factors Collaboration data, the age-adjusted C-index and D-index were ...

  10. Adjustment of Turbulent Boundary-Layer Flow to Idealized Urban Surfaces: A Large-Eddy Simulation Study

    Science.gov (United States)

    Cheng, Wai-Chi; Porté-Agel, Fernando

    2015-05-01

    Large-eddy simulations (LES) are performed to simulate atmospheric boundary-layer (ABL) flow through idealized urban canopies represented by uniform arrays of cubes, in order to better understand atmospheric flow over rural-to-urban surface transitions. The LES framework is first validated with wind-tunnel experimental data. Good agreement between the simulation results and the experimental data is found for the vertical and spanwise profiles of the mean velocities and velocity standard deviations at different streamwise locations. Next, the model is used to simulate ABL flows over surface transitions from flat homogeneous terrain to aligned and staggered arrays of cubes. For both configurations, five different frontal area densities, equal to 0.028, 0.063, 0.111, 0.174 and 0.250, are considered. Within the arrays, the flow is found to adjust quickly, showing a structure similar to the wake of the cubes after the second row of cubes. An internal boundary layer is identified above the cube arrays and found to have a similar depth in all cases. At a downstream location where the flow immediately above the cube array has already adjusted to the surface, the spatially averaged velocity is found to have a logarithmic profile in the vertical. The values of the displacement height are found to be quite insensitive to the canopy layout (aligned vs. staggered) and increase as the frontal area density increases from 0.028 to 0.25. Relatively larger values of the aerodynamic roughness length are obtained for the staggered arrays compared with the aligned cases, and a maximum value is found for both configurations. By explicitly calculating the drag exerted by the cubes on the flow and the drag coefficients of the cubes from our LES results, and comparing the results with existing theoretical expressions, we show that the larger roughness-length values for the staggered arrays are related to the relatively larger drag coefficients of the cubes in that configuration.

  11. Simulation and optimization of a coking wastewater biological treatment process by activated sludge models (ASM).

    Science.gov (United States)

    Wu, Xiaohui; Yang, Yang; Wu, Gaoming; Mao, Juan; Zhou, Tao

    2016-01-01

    Applications of activated sludge models (ASM) in simulating industrial biological wastewater treatment plants (WWTPs) remain difficult due to refractory and complex components in the influents as well as diversity in the activated sludges. In this study, an ASM3 modeling study was conducted to simulate and optimize a practical coking wastewater treatment plant (CWTP). First, respirometric characterizations of the coking wastewater and CWTP biomasses were conducted to determine the specific kinetic and stoichiometric model parameters for the consecutive aeration-anoxic-aeration (O-A/O) biological process. All ASM3 parameters were then further estimated and calibrated through cross-validation by the model dynamic simulation procedure. Consequently, an ASM3 model was successfully established to accurately simulate the CWTP performance in removing COD and NH4-N. An optimized CWTP operating condition could be proposed, reducing the operation cost from 6.2 to 5.5 €/m³ of wastewater. This study is expected to provide a useful reference for the mathematical simulation of practical industrial WWTPs. Copyright © 2015 Elsevier Ltd. All rights reserved.

  12. Reliability-based design optimization using a generalized subset simulation method and posterior approximation

    Science.gov (United States)

    Ma, Yuan-Zhuo; Li, Hong-Shuang; Yao, Wei-Xing

    2018-05-01

    The evaluation of the probabilistic constraints in reliability-based design optimization (RBDO) problems has always been significant and challenging work, which strongly affects the performance of RBDO methods. This article deals with RBDO problems using a recently developed generalized subset simulation (GSS) method and a posterior approximation approach. The posterior approximation approach is used to transform all the probabilistic constraints into ordinary constraints as in deterministic optimization. The assessment of multiple failure probabilities required by the posterior approximation approach is achieved by GSS in a single run at all supporting points, which are selected by a proper experimental design scheme combining Sobol' sequences and Bucher's design. Sequentially, the transformed deterministic design optimization problem can be solved by optimization algorithms, for example, the sequential quadratic programming method. Three optimization problems are used to demonstrate the efficiency and accuracy of the proposed method.

  13. Mathematical Modelling, Simulation, and Optimal Control of the 2014 Ebola Outbreak in West Africa

    Directory of Open Access Journals (Sweden)

    Amira Rachah

    2015-01-01

    It is crucial to model the virus and simulate it. In this paper, we begin by studying a simple mathematical model that describes the 2014 Ebola outbreak in Liberia. Then, we use numerical simulations and available data provided by the World Health Organization to validate the obtained mathematical model. Moreover, we develop a new mathematical model that includes vaccination of individuals. We discuss different cases of vaccination in order to predict the effect of vaccination on the number of infected individuals over time. Finally, we apply optimal control to study the impact of vaccination on the spread of the Ebola virus. The optimal control problem is solved numerically by using a direct multiple shooting method.
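    As a rough illustration of the type of compartmental model with vaccination discussed above (not the authors' fitted Liberia model), the sketch below integrates an SIR system with a constant vaccination rate and compares the peak infected fraction with and without vaccination; all parameter values are placeholders.

```python
import numpy as np

def simulate_ebola(beta=0.27, gamma=0.18, v=0.02, days=200, dt=0.1,
                   s0=0.999, i0=0.001, r0=0.0):
    """Forward-Euler integration of an SIR model with vaccination rate v."""
    s, i, r = s0, i0, r0
    history = []
    for step in range(int(days / dt)):
        ds = -beta * s * i - v * s          # infection + vaccination remove susceptibles
        di = beta * s * i - gamma * i       # new infections minus removals
        dr = gamma * i + v * s              # removed plus vaccinated
        s, i, r = s + ds * dt, i + di * dt, r + dr * dt
        history.append((step * dt, s, i, r))
    return np.array(history)

no_vax = simulate_ebola(v=0.0)
with_vax = simulate_ebola(v=0.02)
print(f"peak infected fraction without vaccination: {no_vax[:, 2].max():.3f}")
print(f"peak infected fraction with vaccination:    {with_vax[:, 2].max():.3f}")
```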

  14. Next-generation simulation and optimization platform for forest management and analysis

    Science.gov (United States)

    Antti Makinen; Jouni Kalliovirta; Jussi Rasinmaki

    2009-01-01

    Recent developments in the objectives and data collection methods of forestry create new challenges and possibilities for forest management planning. Tools in forest management and forest planning systems must be able to make good use of novel data sources, use new models, and solve complex forest planning tasks at different scales. The SIMulation and Optimization (...

  15. Analysis and optimization of gyrokinetic toroidal simulations on homogenous and heterogenous platforms

    International Nuclear Information System (INIS)

    Ibrahim, Khaled Z.; Madduri, Kamesh; Williams, Samuel; Wang, Bei; Oliker, Leonid

    2013-01-01

    The Gyrokinetic Toroidal Code (GTC) uses the particle-in-cell method to efficiently simulate plasma microturbulence. This paper presents novel analysis and optimization techniques to enhance the performance of GTC on large-scale machines. We introduce cell access analysis to better manage locality vs. synchronization tradeoffs on CPU- and GPU-based architectures. Our optimized hybrid parallel implementation of GTC uses MPI, OpenMP, and NVIDIA CUDA, achieves up to a 2× speedup over the reference Fortran version on multiple parallel systems, and scales efficiently to tens of thousands of cores.

  16. Simulation of a coal-fired power plant using mathematical programming algorithms in order to optimize its efficiency

    International Nuclear Information System (INIS)

    Tzolakis, G.; Papanikolaou, P.; Kolokotronis, D.; Samaras, N.; Tourlidakis, A.; Tomboulides, A.

    2012-01-01

    Since most of the world's electric energy production is based on fossil fuels, the need for better efficiency of energy conversion systems is pressing. Mathematical programming algorithms were therefore applied for the simulation and optimization of a detailed model of an existing lignite-fired power plant in Kozani, Greece (KARDIA IV). The optimization of its overall thermal efficiency, using as control variables the mass flow rates of the steam turbine extractions and the fuel consumption, was performed with the simulation and optimization software gPROMS. The mathematical models of the power plant components were implemented in the software by the authors, and the results showed that a further increase of the overall thermal efficiency of the plant can be achieved (a 0.55% absolute increase) through reduction of the HP turbine's and increase of the LP turbine's extraction mass flow rates, together with a parallel reduction of the fuel consumption by 2.05%, which also results in an equivalent reduction of greenhouse gases. The setup of the mathematical model and the flexibility of gPROMS make this software applicable to various power plants. - Highlights: ► Modeling and simulation of the flue gas circuit of a specific plant. ► Design of modules in gPROMS FO (Foreign Objects). ► Simulation of the complete detailed plant with gPROMS. ► Optimization of the plant's efficiency using a non-linear optimization algorithm.

  17. Teaching Simulation and Computer-Aided Separation Optimization in Liquid Chromatography by Means of Illustrative Microsoft Excel Spreadsheets

    Science.gov (United States)

    Fasoula, S.; Nikitas, P.; Pappa-Louisi, A.

    2017-01-01

    A series of Microsoft Excel spreadsheets were developed to simulate the process of separation optimization under isocratic and simple gradient conditions. The optimization procedure is performed in a stepwise fashion using simple macros for an automatic application of this approach. The proposed optimization approach involves modeling of the peak…

  18. Optimization and Simulation of SLM Process for High Density H13 Tool Steel Parts

    Science.gov (United States)

    Laakso, Petri; Riipinen, Tuomas; Laukkanen, Anssi; Andersson, Tom; Jokinen, Antero; Revuelta, Alejandro; Ruusuvuori, Kimmo

    This paper demonstrates the successful printing and optimization of processing parameters of high-strength H13 tool steel by Selective Laser Melting (SLM). D-Optimal Design of Experiments (DOE) approach is used for parameter optimization of laser power, scanning speed and hatch width. With 50 test samples (1×1×1cm) we establish parameter windows for these three parameters in relation to part density. The calculated numerical model is found to be in good agreement with the density data obtained from the samples using image analysis. A thermomechanical finite element simulation model is constructed of the SLM process and validated by comparing the calculated densities retrieved from the model with the experimentally determined densities. With the simulation tool one can explore the effect of different parameters on density before making any printed samples. Establishing a parameter window provides the user with freedom for parameter selection such as choosing parameters that result in fastest print speed.

  19. Discrete-State Simulated Annealing For Traveling-Wave Tube Slow-Wave Circuit Optimization

    Science.gov (United States)

    Wilson, Jeffrey D.; Bulson, Brian A.; Kory, Carol L.; Williams, W. Dan (Technical Monitor)

    2001-01-01

    Algorithms based on the global optimization technique of simulated annealing (SA) have proven useful in designing traveling-wave tube (TWT) slow-wave circuits for high RF power efficiency. The characteristic of SA that enables it to determine a globally optimized solution is its ability to accept non-improving moves in a controlled manner. In the initial stages of the optimization, the algorithm moves freely through configuration space, accepting most of the proposed designs. This freedom of movement allows non-intuitive designs to be explored, rather than restricting the optimization to local improvement upon the initial configuration. As the optimization proceeds, the rate of acceptance of non-improving moves is gradually reduced until the algorithm converges to the optimized solution. The rate at which the freedom of movement is decreased is known as the annealing or cooling schedule of the SA algorithm. The main disadvantage of SA is that there is no rigorous theoretical foundation for determining the parameters of the cooling schedule. The choice of these parameters is highly problem dependent, and the designer needs to experiment in order to determine values that will provide a good optimization in a reasonable amount of computational time. This experimentation can absorb a large amount of time, especially when the algorithm is being applied to a new type of design. In order to eliminate this disadvantage, a variation of SA known as discrete-state simulated annealing (DSSA) was recently developed. DSSA provides the theoretical foundation for a generic cooling schedule which is problem independent. Results of similar quality to SA can be obtained, but without the extra computational time required to tune the cooling parameters. Two algorithm variations based on DSSA were developed and programmed into a Microsoft Excel spreadsheet graphical user interface (GUI) to the two-dimensional nonlinear multisignal helix traveling-wave amplifier analysis program TWA3.
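    A generic simulated-annealing sketch with a geometric cooling schedule is given below to make the role of the cooling parameters concrete; the objective is a toy multimodal function, not a TWT slow-wave circuit model, and the schedule parameters are arbitrary assumptions.

```python
import math
import random

def objective(x):
    """Toy multimodal landscape standing in for a circuit design objective."""
    return x * x + 10.0 * math.sin(3.0 * x)

def simulated_annealing(t0=10.0, alpha=0.98, steps=5000, step_size=0.5, seed=2):
    random.seed(seed)
    x = random.uniform(-5.0, 5.0)
    fx, t = objective(x), t0
    best_x, best_f = x, fx
    for _ in range(steps):
        x_new = x + random.uniform(-step_size, step_size)
        f_new = objective(x_new)
        # Accept improving moves always; non-improving moves with Boltzmann probability.
        if f_new <= fx or random.random() < math.exp(-(f_new - fx) / t):
            x, fx = x_new, f_new
            if fx < best_f:
                best_x, best_f = x, fx
        t *= alpha                                  # geometric cooling schedule
    return best_x, best_f

print("optimized design variable and objective:", simulated_annealing())
```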

  20. System design and improvement of an emergency department using Simulation-Based Multi-Objective Optimization

    International Nuclear Information System (INIS)

    Uriarte, A Goienetxea; Zúñiga, E Ruiz; Moris, M Urenda; Ng, A H C

    2015-01-01

    Discrete Event Simulation (DES) is nowadays widely used to support decision makers in system analysis and improvement. However, the use of simulation for improving stochastic logistic processes is not common among healthcare providers. The process of improving healthcare systems involves dealing with trade-off optimal solutions that take a large number of variables and objectives into consideration. Complementing DES with Multi-Objective Optimization (SMO) creates a superior basis for finding these solutions and, in consequence, facilitates the decision-making process. This paper presents how SMO has been applied for system improvement analysis in a Swedish Emergency Department (ED). A significant number of input variables, constraints and objectives were considered when defining the optimization problem. As a result of the project, the decision makers were provided with a range of optimal solutions which considerably reduce the length of stay and waiting times for ED patients. SMO has proved to be an appropriate technique to support healthcare system design and improvement processes. A key factor for the success of this project has been the involvement and engagement of the stakeholders during the whole process. (paper)

  1. OPTIMIZING THE DISTRIBUTION OF TIE POINTS FOR THE BUNDLE ADJUSTMENT OF HRSC IMAGE MOSAICS

    Directory of Open Access Journals (Sweden)

    J. Bostelmann

    2017-07-01

    For a systematic mapping of the Martian surface, the Mars Express orbiter is equipped with a multi-line scanner: since the beginning of 2004 the High Resolution Stereo Camera (HRSC) has regularly acquired long image strips. By now more than 4,000 strips covering nearly the whole planet are available. Thanks to its nine channels, each with a different viewing direction and partly with different optical filters, each strip provides 3D and color information and allows the generation of digital terrain models (DTMs) and orthophotos. To map larger regions, neighboring HRSC strips can be combined into DTM and orthophoto mosaics. The global mapping scheme Mars Chart 30 is used to define the extent of these mosaics. In order to avoid unreasonably large data volumes, each MC-30 tile is divided into two parts, each combining about 90 strips. To ensure a seamless fit of these strips, several radiometric and geometric corrections are applied in the photogrammetric process. A simultaneous bundle adjustment of all strips as a block is carried out to estimate their precise exterior orientation. Because the size, position, resolution and image quality of the strips in these blocks are heterogeneous, the quality and distribution of the tie points also vary. In the absence of ground control points, heights of a global terrain model are used as reference information, and for this task a regular distribution of these tie points is preferable. In addition, their total number should be limited for computational reasons. In this paper, we present an algorithm which optimizes the distribution of tie points under these constraints. A large number of tie points used as input is reduced without affecting the geometric stability of the block, by preserving connections between strips. This stability is achieved by using a regular grid in object space and discarding, for each grid cell, points which are redundant for the block adjustment. The set of tie points, filtered by the
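    The record does not give the algorithm's details beyond the grid idea; the following sketch shows one simplified way to thin tie points on a regular object-space grid while preserving at least one point per strip pair in each cell. The data structure, quality score and thresholds are assumptions for illustration only.

```python
import numpy as np

def thin_tie_points(points, cell_size=2000.0, max_per_cell=3):
    """points: list of dicts with object-space 'x', 'y', the pair of strips
    ('strips') the tie point connects, and an optional 'quality' score.
    Returns the reduced list."""
    kept, cells = [], {}
    for p in sorted(points, key=lambda p: -p.get("quality", 0.0)):   # best points first
        cell = (int(p["x"] // cell_size), int(p["y"] // cell_size))
        bucket = cells.setdefault(cell, {"count": 0, "pairs": set()})
        pair = tuple(sorted(p["strips"]))
        # Keep the point if the cell is not yet full, or if it is the only point
        # in this cell connecting this particular pair of strips.
        if bucket["count"] < max_per_cell or pair not in bucket["pairs"]:
            kept.append(p)
            bucket["count"] += 1
            bucket["pairs"].add(pair)
    return kept

rng = np.random.default_rng(0)
pts = [{"x": float(rng.uniform(0, 10000)), "y": float(rng.uniform(0, 10000)),
        "strips": tuple(rng.choice(["h0905", "h0927", "h0949"], size=2, replace=False)),
        "quality": float(rng.random())} for _ in range(500)]
print(f"{len(pts)} tie points reduced to {len(thin_tie_points(pts))}")
```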

  2. A study on optimization of hybrid drive train using Advanced Vehicle Simulator (ADVISOR)

    Energy Technology Data Exchange (ETDEWEB)

    Same, Adam; Stipe, Alex; Grossman, David; Park, Jae Wan [Department of Mechanical and Aeronautical Engineering, University of California, Davis, One Shields Ave, Davis, CA 95616 (United States)

    2010-10-01

    This study investigates the advantages and disadvantages of three hybrid drive train configurations: series, parallel, and "through-the-ground" parallel. Power flow simulations are conducted with the MATLAB/Simulink-based software ADVISOR. These simulations are then applied to the UC Davis SAE Formula Hybrid vehicle. ADVISOR performs simulation calculations for vehicle position using a combined backward/forward method. These simulations are used to study how efficiency and agility are affected by the motor, fuel converter, and hybrid configuration. Three different vehicle models are developed to optimize the drive train of the vehicle for three stages of the SAE Formula Hybrid competition: autocross, endurance, and acceleration. Input cycles are created based on rough estimates of track geometry. The output from these ADVISOR simulations is a series of plots of velocity profile and energy storage state of charge that provide a good estimate of how the Formula Hybrid vehicle will perform on the given course. The most noticeable discrepancy between the input cycle and the actual velocity profile of the vehicle occurs during deceleration. A weighted ranking system is developed to organize the simulation results and to determine the best drive train configuration for the Formula Hybrid vehicle. Results show that the through-the-ground parallel configuration with front-mounted motors achieves an optimal balance of efficiency, simplicity, and cost. ADVISOR is proven to be a useful tool for vehicle power train design for the SAE Formula Hybrid competition. This vehicle model based on ADVISOR simulation is applicable to various studies concerning the performance and efficiency of hybrid drive trains. (author)

  3. Extended Information Ratio for Portfolio Optimization Using Simulated Annealing with Constrained Neighborhood

    Science.gov (United States)

    Orito, Yukiko; Yamamoto, Hisashi; Tsujimura, Yasuhiro; Kambayashi, Yasushi

    Portfolio optimization determines the proportion-weighted combination of assets in a portfolio in order to achieve investment targets. It is a multi-dimensional combinatorial optimization problem, and it is difficult for a portfolio constructed in a past period to keep its performance in a future period. In order to maintain the good performance of portfolios, in this paper we propose the extended information ratio as an objective function, based on the information ratio, beta, prime beta, or correlation coefficient. We apply simulated annealing (SA) to optimize the portfolio employing the proposed ratio. For the SA, the neighbor is generated by an operation that changes the structure of the weights in the portfolio. In the numerical experiments, we show that our portfolios keep good performance even when the market trend of the future period differs from that of the past period.
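    The sketch below computes the ordinary information ratio (the basis of the proposed extended ratio, which is not reproduced here) and shows a simple neighbor move that perturbs one portfolio weight and renormalizes, of the kind an SA search could use; all return data are randomly generated placeholders.

```python
import numpy as np

def information_ratio(portfolio_returns, benchmark_returns):
    """Mean active return divided by tracking error (sample standard deviation)."""
    active = np.asarray(portfolio_returns) - np.asarray(benchmark_returns)
    return active.mean() / active.std(ddof=1)

def neighbor(weights, rng, scale=0.05):
    """Perturb one asset weight and renormalize so the weights sum to one."""
    w = weights.copy()
    i = rng.integers(len(w))
    w[i] = max(0.0, w[i] + rng.normal(0.0, scale))
    return w / w.sum()

rng = np.random.default_rng(3)
asset_returns = rng.normal(0.001, 0.01, size=(250, 5))       # hypothetical daily returns
benchmark = rng.normal(0.0008, 0.01, size=250)                # hypothetical benchmark returns
w = np.full(5, 0.2)
print("IR of starting portfolio :", information_ratio(asset_returns @ w, benchmark))
print("IR of a neighbor portfolio:", information_ratio(asset_returns @ neighbor(w, rng), benchmark))
```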

  4. Numerical simulation for optimization of multipole permanent magnets of multicusp ion source

    International Nuclear Information System (INIS)

    Hosseinzadeh, M.; Afarideh, H.

    2014-01-01

    A new ion source will be designed and manufactured for the CYCLONE30 commercial cyclotron with much improved performance compared with the previous one. The newly designed ion source has a higher plasma density and is designed to deliver an H⁻ beam at 30 keV. In this paper, numerical simulations of the magnetic flux density of the permanent magnets used for a multicusp ion source, of the plasma confinement, and of the trapping of fast electrons by the magnetic field have been performed to optimize the number of magnets confining the plasma. A code has been developed to track electrons in the magnetic field and evaluate their mean lifetime in the plasma under different magnetic conditions, allowing a better evaluation and comparison of the plasma density in the different cases. The purpose of this design is to recapture more energetic electrons with the permanent magnets. Performance simulations of the optimized ion source show considerable improvement over the one reported by IBA.

  5. Empirical optimization of undulator tapering at FLASH2 and comparison with numerical simulations

    Energy Technology Data Exchange (ETDEWEB)

    Mak, Alan; Curbis, Francesca; Werin, Sverker [Lund Univ. (Sweden). MAX IV Laboratory; Faatz, Bart [Deutsches Elektronen-Synchrotron (DESY), Hamburg (Germany)

    2016-08-15

    In a free-electron laser equipped with variable-gap undulator modules, the technique of undulator tapering opens up the possibility to increase the radiation power beyond the initial saturation point, thus enhancing the efficiency of the laser. The effectiveness of the enhancement relies on the proper optimization of the taper profile. In this work, a multidimensional optimization approach is implemented empirically in the X-ray free-electron laser FLASH2. The empirical results are compared with numerical simulations.

  6. Optimal control of building storage systems using both ice storage and thermal mass – Part I: Simulation environment

    International Nuclear Information System (INIS)

    Hajiah, Ali; Krarti, Moncef

    2012-01-01

    Highlights: ► A simulation environment is described that accounts for both passive and active thermal energy storage (TES) systems. ► Laboratory testing results have been used to validate the predictions from the simulation environment. ► Optimal control strategies for TES systems have been developed as part of the simulation environment. - Abstract: This paper presents a simulation environment that can evaluate the benefits of simultaneously using building thermal capacitance and an ice storage system to reduce total operating costs, including energy and demand charges, while maintaining adequate occupant comfort conditions within commercial buildings. The building thermal storage is controlled through pre-cooling strategies by setting space indoor air temperatures. The ice storage system is controlled by charging the ice tank and operating the chiller during low electrical charge periods and melting the ice during on-peak periods. Optimal controls for both building thermal storage and ice storage are developed to minimize energy charges, demand charges, or combined energy and demand charges. The results obtained from the simulation environment are validated using laboratory testing of an optimal controller.

  7. Automatic Optimization for Large-Scale Real-Time Coastal Water Simulation

    Directory of Open Access Journals (Sweden)

    Shunli Wang

    2016-01-01

    We introduce an automatic optimization approach for the simulation of large-scale coastal water. To solve the singularity problem of water waves obtained with the traditional model, a hybrid deep-shallow-water model is established by using an automatic coupling algorithm. It can handle arbitrary water depths and different underwater terrain. As a characteristic feature of coastal terrain, the coastline is detected with collision detection technology. Then, unnecessary water grid cells are simplified by an automatic simplification algorithm according to the depth. Finally, the model is calculated on the Central Processing Unit (CPU) and the simulation is implemented on the Graphics Processing Unit (GPU). We show the effectiveness of our method with various results, which achieve real-time rendering on a consumer-level computer.

  8. Blade pitch optimization methods for vertical-axis wind turbines

    Science.gov (United States)

    Kozak, Peter

    Vertical-axis wind turbines (VAWTs) offer an inherently simpler design than horizontal-axis machines, while their lower blade speed mitigates safety and noise concerns, potentially allowing for installation closer to populated and ecologically sensitive areas. While VAWTs do offer significant operational advantages, development has been hampered by the difficulty of modeling the aerodynamics involved, further complicated by their rotating geometry. This thesis presents results from a simulation of a baseline VAWT computed using Star-CCM+, a commercial finite-volume (FVM) code. VAWT aerodynamics are shown to be dominated at low tip-speed ratios by dynamic stall phenomena and at high tip-speed ratios by wake-blade interactions. Several optimization techniques have been developed for the adjustment of blade pitch based on finite-volume simulations and streamtube models. The effectiveness of the optimization procedure is evaluated and the basic architecture for a feedback control system is proposed. Implementation of variable blade pitch is shown to increase a baseline turbine's power output between 40%-100%, depending on the optimization technique, improving the turbine's competitiveness when compared with a commercially-available horizontal-axis turbine.

  9. Simulation and optimization of a dc SQUID with finite capacitance

    Energy Technology Data Exchange (ETDEWEB)

    de Waal, V.J.; Schrijner, P.; Llurba, R.

    1984-02-01

    This paper deals with the calculations of the noise and the optimization of the energy resolution of a dc SQUID with finite junction capacitance. Up to now noise calculations of dc SQUIDs were performed using a model without parasitic capacitances across the Josephson junctions. As the capacitances limit the performance of the SQUID, for a good optimization one must take them into account. The model consists of two coupled nonlinear second-order differential equations. The equations are very suitable for simulation with an analog circuit. We implemented the model on a hybrid computer. The noise spectrum from the model is calculated with a fast Fourier transform. A calculation of the energy resolution for one set of parameters takes about 6 min of computer time. Detailed results of the optimization are given for products of inductance and temperature of LT = 1.2 and 5 nH K. Within a range of β and β_c between 1 and 2, which is optimum, the energy resolution is nearly independent of these variables. In this region the energy resolution is near the value calculated without parasitic capacitances. Results of the optimized energy resolution are given as a function of LT between 1.2 and 10 nH K.

  10. Simulation and optimization of a dc SQUID with finite capacitance

    Science.gov (United States)

    de Waal, V. J.; Schrijner, P.; Llurba, R.

    1984-02-01

    This paper deals with the calculations of the noise and the optimization of the energy resolution of a dc SQUID with finite junction capacitance. Up to now noise calculations of dc SQUIDs were performed using a model without parasitic capacitances across the Josephson junctions. As the capacitances limit the performance of the SQUID, for a good optimization one must take them into account. The model consists of two coupled nonlinear second-order differential equations. The equations are very suitable for simulation with an analog circuit. We implemented the model on a hybrid computer. The noise spectrum from the model is calculated with a fast Fourier transform. A calculation of the energy resolution for one set of parameters takes about 6 min of computer time. Detailed results of the optimization are given for products of inductance and temperature of LT=1.2 and 5 nH K. Within a range of β and β_c between 1 and 2, which is optimum, the energy resolution is nearly independent of these variables. In this region the energy resolution is near the value calculated without parasitic capacitances. Results of the optimized energy resolution are given as a function of LT between 1.2 and 10 nH K.

  11. Control Optimization of a LHC 18 KW Cryoplant Warm Compression Station Using Dynamic Simulations

    CERN Document Server

    Bradu, B; Niculescu, S I

    2010-01-01

    This paper addresses the control optimization of a 4.5 K refrigerator used in the cryogenic system of the Large Hadron Collider (LHC) at CERN. First, the compressor station together with the cold-box has been modeled and simulated under PROCOS (Process and Control Simulator), a simulation environment developed at CERN. Next, an appropriate parameter identification has been performed on the simulator to obtain a simplified model of the system in order to design an Internal Model Control (IMC) enhancing the regulation of the high pressure. Finally, a floating high pressure control is proposed using a cascade control to reduce operational costs.

  12. Reduced-order modeling (ROM) for simulation and optimization powerful algorithms as key enablers for scientific computing

    CERN Document Server

    Milde, Anja; Volkwein, Stefan

    2018-01-01

    This edited monograph collects research contributions and addresses the advancement of efficient numerical procedures in the area of model order reduction (MOR) for simulation, optimization and control. The topical scope includes, but is not limited to, new out-of-the-box algorithmic solutions for scientific computing, e.g. reduced basis methods for industrial problems and MOR approaches for electrochemical processes. The target audience comprises research experts and practitioners in the field of simulation, optimization and control, but the book may also be beneficial for graduate students.

  13. Optimizing Grippers for Compensating Pose Uncertainties by Dynamic Simulation

    DEFF Research Database (Denmark)

    Wolniakowski, Adam; Kramberger, Aljaž; Gams, Andrej

    2017-01-01

    Gripper design process is one of the interesting challenges in the context of grasping within industry. Typically, simple parallel-finger grippers, which are easy to install and maintain, are used in platforms for robotic grasping. The context switches in these platforms require frequent exchange......, we have presented a method to automatically compute the optimal finger shapes for defined task contexts in simulation. In this paper, we show the performance of our method in an industrial grasping scenario. We first analyze the uncertainties of the used vision system, which are the major source...

  14. Assembly Line Productivity Assessment by Comparing Optimization-Simulation Algorithms of Trajectory Planning for Industrial Robots

    Directory of Open Access Journals (Sweden)

    Francisco Rubio

    2015-01-01

    Full Text Available In this paper an analysis of productivity is carried out based on the resolution of the trajectory planning problem for industrial robots. The analysis entails economic considerations, thus overcoming some limitations of the existing literature. Two methodologies based on optimization-simulation procedures are compared to calculate the time needed to perform an industrial robot task. The simulation methodology relies on the use of the robotics and automation software GRASP. The optimization methodology developed in this work is based on the kinematics and the dynamics of industrial robots. It allows us to pose a multiobjective optimization problem to assess the trade-offs between the economic variables by means of Pareto fronts. The comparison is carried out for different examples and from a multidisciplinary point of view, in order to determine the impact of using each method. The results show the opportunity costs of not using the methodology with optimized time trajectories. Furthermore, it allows companies to stay competitive thanks to quick adaptation to rapidly changing markets.

  15. Optimization of low temperature solar thermal electric generation with Organic Rankine Cycle in different areas

    International Nuclear Information System (INIS)

    Jing, Li; Gang, Pei; Jie, Ji

    2010-01-01

    The presented low temperature solar thermal electric generation system mainly consists of compound parabolic concentrators (CPC) and the Organic Rankine Cycle (ORC) working with HCFC-123. A novel design is proposed to reduce heat transfer irreversibility between conduction oil and HCFC-123 in the heat exchangers while maintaining the stability of electricity output. Mathematical formulations are developed to study the heat transfer and energy conversion processes and the numerical simulation is carried out based on distributed parameters. Annual performances of the proposed system in different areas of Canberra, Singapore, Bombay, Lhasa, Sacramento and Berlin are simulated. The influences of the collector tilt angle adjustment, the connection between the heat exchangers and the CPC collectors, and the ORC evaporation temperature on the system performance are investigated. The results indicate that the three factors have a major impact on the annual electricity output and should be the key points of optimization. The optimized system shows that: (1) The annual received direct irradiance can be significantly increased by two or three optimal tilt-angle adjustments per year, even when the CPC concentration ratio is smaller than 3.0. (2) Compared with traditional single-stage collectors, two-stage collectors connected with the heat exchangers by two thermal oil cycles can improve the collector efficiency by 8.1-20.9% in the simultaneous processes of heat collection and power generation. (3) With the collectors available on the market, the optimal ORC evaporation temperatures in most of the simulated areas are around 120 °C. (author)

  16. Response surface method applied to the thermoeconomic optimization of a complex cogeneration system modeled in a process simulator

    International Nuclear Information System (INIS)

    Pires, Thiago S.; Cruz, Manuel E.; Colaço, Marcelo J.

    2013-01-01

    This work presents the application of a surrogate model – a response surface – to replace the objective function to be minimized in the thermoeconomic optimization of a complex thermal system modeled with the aid of an expert process simulator. The objective function accounts for fuel, capital, operation and maintenance costs of the thermal system, and depends on nine decision variables. The minimization task is performed through the computational integration of two professional programs, a process simulator and a mathematical platform. Five algorithms are used to perform the optimization: the pattern search and genetic algorithms, both available in the mathematical platform, plus three custom-coded algorithms, differential evolution, particle swarm and simulated annealing. A comparative analysis of the performance of all five methods is presented, together with a critical appraisal of the surrogate model effectiveness. In the course of the optimization procedure, the process simulator computes the thermodynamic properties of all flows of the thermal system and solves the mass and energy balances each time the objective function has to be evaluated. By handling a set of radial basis functions as an approximation model to the original computationally expensive objective function, it is found here that the number of function evaluations can be appreciably reduced without significant deviation of the optimal value. The present study indicates that, for a thermoeconomic system optimization problem with a large number of decision variables and/or a costly objective function, the application of the response surface surrogate may prove more efficient than the original simulation model, reducing substantially the computational time involved in the optimization. - Highlights: ► A successful response surface method was proposed. ► The surrogate model may be more efficient than the original simulation model. ► Relative differences of less than 5% were found for the
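
    To make the surrogate idea above concrete, the following is a minimal sketch of surrogate-assisted optimization using a radial basis function interpolant; the nine-variable objective, the sample size and the bounds are illustrative placeholders, not the authors' thermoeconomic model or the process-simulator coupling described in the paper.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator
from scipy.optimize import differential_evolution

# Placeholder for the expensive objective; in the paper each evaluation would
# trigger a full process-simulator run (mass/energy balances, cost model).
def expensive_objective(x):
    return np.sum((x - 0.3) ** 2) + 0.1 * np.sin(5.0 * x).sum()

dim = 9                                      # nine decision variables
rng = np.random.default_rng(0)
X = rng.uniform(0.0, 1.0, size=(60, dim))    # initial design of experiments
y = np.array([expensive_objective(x) for x in X])

surrogate = RBFInterpolator(X, y)            # radial basis function surrogate

# Optimize the cheap surrogate instead of the expensive model.
res = differential_evolution(lambda x: surrogate(x.reshape(1, -1))[0],
                             bounds=[(0.0, 1.0)] * dim, seed=0)
print("surrogate optimum:", res.x)
print("true objective there:", expensive_objective(res.x))
```

    Because the surrogate is fitted once from a limited sample and then queried many times, the number of expensive objective evaluations drops, which is the efficiency gain reported above.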

  17. Double-adjustment in propensity score matching analysis: choosing a threshold for considering residual imbalance.

    Science.gov (United States)

    Nguyen, Tri-Long; Collins, Gary S; Spence, Jessica; Daurès, Jean-Pierre; Devereaux, P J; Landais, Paul; Le Manach, Yannick

    2017-04-28

    Double-adjustment can be used to remove confounding if imbalance exists after propensity score (PS) matching. However, it is not always possible to include all covariates in adjustment. We aimed to find the optimal imbalance threshold for entering covariates into regression. We conducted a series of Monte Carlo simulations on virtual populations of 5,000 subjects. We performed PS 1:1 nearest-neighbor matching on each sample. We calculated standardized mean differences across groups to detect any remaining imbalance in the matched samples. We examined 25 thresholds (from 0.01 to 0.25, stepwise 0.01) for considering residual imbalance. The treatment effect was estimated using logistic regression that contained only those covariates considered to be unbalanced by these thresholds. We showed that regression adjustment could dramatically remove residual confounding bias when it included all of the covariates with a standardized difference greater than 0.10. The additional benefit was negligible when we also adjusted for covariates with less imbalance. We found that the mean squared error of the estimates was minimized under the same conditions. If covariate balance is not achieved, we recommend reiterating PS modeling until standardized differences below 0.10 are achieved on most covariates. In case of remaining imbalance, a double adjustment might be worth considering.
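
    As an illustration of the recommended workflow, the sketch below computes standardized mean differences in an already-matched sample and enters only the covariates exceeding a 0.10 threshold into the outcome regression; the DataFrame layout, column names and helper functions are assumptions made for the example, not the authors' analysis code.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

def standardized_mean_difference(x_treated, x_control):
    """SMD of one covariate across matched groups (pooled-SD denominator)."""
    pooled_sd = np.sqrt((x_treated.var(ddof=1) + x_control.var(ddof=1)) / 2.0)
    return abs(x_treated.mean() - x_control.mean()) / pooled_sd

def double_adjust(matched: pd.DataFrame, covariates, treatment="treated",
                  outcome="outcome", threshold=0.10):
    """Adjust the outcome regression only for covariates that remain imbalanced
    after propensity score matching (SMD above the chosen threshold)."""
    t = matched[matched[treatment] == 1]
    c = matched[matched[treatment] == 0]
    unbalanced = [cov for cov in covariates
                  if standardized_mean_difference(t[cov], c[cov]) > threshold]
    X = sm.add_constant(matched[[treatment] + unbalanced])
    model = sm.Logit(matched[outcome], X).fit(disp=0)
    return model, unbalanced
```

    The 0.10 default mirrors the threshold recommended in the abstract; lowering it adds more covariates to the regression with, per the simulations above, little additional benefit.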

  18. Productivity simulation model for optimization of maritime container terminals

    Directory of Open Access Journals (Sweden)

    Elen TWRDY

    2009-01-01

    Full Text Available This article describes a proposed productivity simulation model enabling container terminal operators to find optimization possibilities. Research on more than forty terminals has been carried out in order to provide a helping tool for maritime container terminals. By applying an adequate simulation model, it is possible to measure and increase the productivity in all subsystems of the maritime container terminal. Management of a maritime container terminal involves a vast number of different financial and operational decisions. Financial decisions are often directly connected with investments in infrastructure and handling equipment. Such investments are very expensive and must therefore pay back the invested money as soon as possible. On the other hand, some terminals are limited in their physical extent and are forced to increase annual throughput only with sophisticated equipment on the berth side as well as in the yard. Considering all these important facts in the container and shipping industry, the proposed simulation model provides a helping tool for checking productivity and its variation over time, and for monitoring the competitiveness of a given maritime terminal against terminals of the same group.

  19. Geometric Optimization of Thermo-electric Coolers Using Simulated Annealing

    International Nuclear Information System (INIS)

    Khanh, D V K; Vasant, P M; Elamvazuthi, I; Dieu, V N

    2015-01-01

    The field of thermo-electric coolers (TECs) has grown drastically in recent years. In extreme environments such as thermal energy and gas drilling operations, a TEC is an effective cooling mechanism for instrumentation. However, limitations such as the relatively low energy conversion efficiency and the ability to dissipate only a limited amount of heat flux may seriously reduce the lifetime and performance of the instrument. Until now, much research has been conducted to improve the efficiency of TECs. The material parameters are the most significant, but they are restricted by currently available materials and module fabrication technologies. Therefore, the main objective in finding the optimal TEC design is to define a set of design parameters. In this paper, a new method of optimizing the dimensions of TECs using simulated annealing (SA) to maximize the rate of refrigeration (ROR) is proposed. Both equality and inequality constraints were taken into consideration. This work reveals that SA shows better performance than Cheng's work. (paper)
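
    A minimal sketch of the simulated-annealing loop described above is given below; the two-variable rate-of-refrigeration function, the bounds and the cooling schedule are placeholders rather than the paper's actual TEC model and constraints.

```python
import math
import random

def rate_of_refrigeration(x):
    # Placeholder objective: in the paper this would be the TEC model
    # evaluating ROR from geometric design parameters (e.g. leg length, area).
    leg_length, leg_area = x
    return -((leg_length - 1.2) ** 2 + (leg_area - 0.8) ** 2)

def simulated_annealing(bounds, n_iter=5000, t0=1.0, cooling=0.999):
    x = [random.uniform(lo, hi) for lo, hi in bounds]
    best, best_val = x[:], rate_of_refrigeration(x)
    t = t0
    for _ in range(n_iter):
        # Propose a neighbour by perturbing one dimension within its bounds.
        cand = x[:]
        i = random.randrange(len(bounds))
        lo, hi = bounds[i]
        cand[i] = min(hi, max(lo, cand[i] + random.gauss(0.0, 0.05 * (hi - lo))))
        delta = rate_of_refrigeration(cand) - rate_of_refrigeration(x)
        # Metropolis criterion: accept improvements, sometimes accept worse moves.
        if delta > 0 or random.random() < math.exp(delta / t):
            x = cand
            if rate_of_refrigeration(x) > best_val:
                best, best_val = x[:], rate_of_refrigeration(x)
        t *= cooling
    return best, best_val

print(simulated_annealing(bounds=[(0.5, 2.0), (0.2, 1.5)]))
```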

  20. Temperature simulations in hyperthermia treatment planning of the head and neck region. Rigorous optimization of tissue properties

    International Nuclear Information System (INIS)

    Verhaart, Rene F.; Rijnen, Zef; Verduijn, Gerda M.; Paulides, Margarethus M.; Fortunati, Valerio; Walsum, Theo van; Veenland, Jifke F.

    2014-01-01

    Hyperthermia treatment planning (HTP) is used in the head and neck region (H and N) for pretreatment optimization, decision making, and real-time HTP-guided adaptive application of hyperthermia. In current clinical practice, HTP is based on power-absorption predictions, but thermal dose-effect relationships advocate its extension to temperature predictions. Exploitation of temperature simulations requires region- and temperature-specific thermal tissue properties due to the strong thermoregulatory response of H and N tissues. The purpose of our work was to develop a technique for patient group-specific optimization of thermal tissue properties based on invasively measured temperatures, and to evaluate the accuracy achievable. Data from 17 treated patients were used to optimize the perfusion and thermal conductivity values for the Pennes bioheat equation-based thermal model. A leave-one-out approach was applied to accurately assess the difference between measured and simulated temperature (ΔT). The improvement in ΔT for optimized thermal property values was assessed by comparison with the ΔT for values from the literature, i.e., baseline and under thermal stress. The optimized perfusion and conductivity values of tumor, muscle, and fat led to an improvement in simulation accuracy (ΔT: 2.1 ± 1.2 °C) compared with the accuracy for baseline (ΔT: 12.7 ± 11.1 °C) or thermal stress (ΔT: 4.4 ± 3.5 °C) property values. The presented technique leads to patient group-specific temperature property values that effectively improve simulation accuracy for the challenging H and N region, thereby making simulations an elegant addition to invasive measurements. The rigorous leave-one-out assessment indicates that improvements in accuracy are required to rely only on temperature-based HTP in the clinic. (orig.)

  1. Response Adjusted for Days of Antibiotic Risk (RADAR): evaluation of a novel method to compare strategies to optimize antibiotic use.

    Science.gov (United States)

    Schweitzer, V A; van Smeden, M; Postma, D F; Oosterheert, J J; Bonten, M J M; van Werkhoven, C H

    2017-12-01

    The Response Adjusted for Days of Antibiotic Risk (RADAR) statistic was proposed to improve the efficiency of trials comparing antibiotic stewardship strategies to optimize antibiotic use. We studied the behaviour of RADAR in a non-inferiority trial in which a β-lactam monotherapy strategy (n = 656) was non-inferior to fluoroquinolone monotherapy (n = 888) for patients with moderately severe community-acquired pneumonia. Patients were ranked according to clinical outcome, using five or eight categories, and antibiotic use. RADAR was calculated as the probability that the β-lactam group had a more favourable ranking than the fluoroquinolone group. To investigate the sensitivity of RADAR to detrimental clinical outcome we simulated increasing rates of 90-day mortality in the β-lactam group and performed the RADAR and non-inferiority analysis. The RADAR of the β-lactam group compared with the fluoroquinolone group was 60.3% (95% CI 57.9%-62.7%) using five and 58.4% (95% CI 56.0%-60.9%) using eight clinical outcome categories, all in favour of β-lactam. Sample sizes for RADAR were 38% (250/653) and 89% (580/653) of the non-inferiority sample size calculation, using five or eight clinical outcome categories, respectively. With simulated mortality rates, loss of non-inferiority of the β-lactam group occurred at a relative risk of 1.125 in the conventional analysis, whereas using RADAR the β-lactam group lost superiority at a relative risk of mortality of 1.25 and 1.5, with eight and five clinical outcome categories, respectively. RADAR favoured β-lactam over fluoroquinolone therapy for community-acquired pneumonia. Although RADAR required fewer patients than conventional non-inferiority analysis, the statistic was less sensitive to detrimental outcomes. Copyright © 2017 European Society of Clinical Microbiology and Infectious Diseases. Published by Elsevier Ltd. All rights reserved.
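
    The ranking idea behind RADAR can be illustrated with a small sketch: patients are compared pairwise across arms, ordered first by clinical outcome category and then by days of antibiotic use, and the statistic is estimated as the probability that a patient from one arm ranks more favourably. The tuple encoding, tie handling and example data below are assumptions for illustration, not the trial's analysis code.

```python
import itertools

def radar_like_probability(group_a, group_b):
    """Estimate P(a patient in A has a more favourable ranking than one in B).

    Each patient is a tuple (outcome_category, antibiotic_days), where a lower
    outcome category is better and, within the same category, fewer antibiotic
    days is better. Ties count as half a win.
    """
    wins = 0.0
    for a, b in itertools.product(group_a, group_b):
        if a < b:        # tuple comparison: outcome category first, then days
            wins += 1.0
        elif a == b:
            wins += 0.5
    return wins / (len(group_a) * len(group_b))

# Illustrative data only (category, antibiotic days).
beta_lactam     = [(1, 7), (1, 5), (2, 10), (3, 14)]
fluoroquinolone = [(1, 7), (2, 10), (2, 12), (4, 10)]
print(radar_like_probability(beta_lactam, fluoroquinolone))
```

    A value above 50% favours the first group, which is how the 60.3% and 58.4% results quoted above should be read.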

  2. Shanghai Futures Exchange Incorporates Nickel & Tin to Finish Adjustment of Nonferrous Metal Index

    Institute of Scientific and Technical Information of China (English)

    2015-01-01

    After the close of trading on August 12, the nonferrous metal futures price index adjustment and optimization work of the Shanghai Futures Exchange was completed and formally took effect. The new compilation plan incorporated two newly listed varieties, nickel and tin, and made adjustments and optimizations in

  3. Intermolecular Force Field Parameters Optimization for Computer Simulations of CH4 in ZIF-8

    Directory of Open Access Journals (Sweden)

    Phannika Kanthima

    2016-01-01

    Full Text Available The differential evolution (DE) algorithm is applied for obtaining the optimized intermolecular interaction parameters between CH4 and 2-methylimidazolate ([C4N2H5]−) using quantum binding energies of CH4-[C4N2H5]− complexes. The initial parameters and their upper/lower bounds are obtained from the general AMBER force field. The DE optimized and the AMBER parameters are then used in the molecular dynamics (MD) simulations of CH4 molecules in the frameworks of ZIF-8. The results show that the DE parameters are better for representing the quantum interaction energies than the AMBER parameters. The dynamical and structural behaviors obtained from MD simulations with the two sets of parameters also show notable differences.
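
    A minimal sketch of the fitting step is shown below, using SciPy's differential_evolution to adjust Lennard-Jones parameters against reference binding energies; the separations, reference energies and bounds are synthetic placeholders, not the quantum-chemical data or the AMBER-derived bounds used in the paper.

```python
import numpy as np
from scipy.optimize import differential_evolution

def lennard_jones(r, epsilon, sigma):
    return 4.0 * epsilon * ((sigma / r) ** 12 - (sigma / r) ** 6)

# Hypothetical training data: separations (nm) and reference binding energies
# (kJ/mol); in the paper these come from quantum-chemical calculations of
# CH4-[C4N2H5]- complexes.
r = np.linspace(0.30, 0.80, 25)
e_ref = lennard_jones(r, 0.9, 0.36)          # synthetic stand-in for QM data

def objective(params):
    epsilon, sigma = params
    return np.sum((lennard_jones(r, epsilon, sigma) - e_ref) ** 2)

# Bounds would normally be taken from the general AMBER force field values.
result = differential_evolution(objective, bounds=[(0.1, 2.0), (0.25, 0.50)], seed=1)
print("optimized epsilon, sigma:", result.x)
```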

  4. Simulation study on heterogeneous variance adjustment for observations with different measurement error variance

    DEFF Research Database (Denmark)

    Pitkänen, Timo; Mäntysaari, Esa A; Nielsen, Ulrik Sander

    2013-01-01

    The Nordic Holstein yield evaluation model describes all available milk, protein and fat test-day yields from Denmark, Finland and Sweden. In its current form all variance components are estimated from observations recorded under conventional milking systems. Also the model for heterogeneity of variance correction is developed for the same observations. As automated milking systems are becoming more popular the current evaluation model needs to be enhanced to account for the different measurement error variances of observations from automated milking systems. In this simulation study different models and different approaches to account for heterogeneous variance when observations have different measurement error variances were investigated. Based on the results we propose to upgrade the currently applied models and to calibrate the heterogeneous variance adjustment method to yield same genetic...

  5. Two-Dimensional IIR Filter Design Using Simulated Annealing Based Particle Swarm Optimization

    Directory of Open Access Journals (Sweden)

    Supriya Dhabal

    2014-01-01

    Full Text Available We present a novel hybrid algorithm based on particle swarm optimization (PSO) and simulated annealing (SA) for the design of two-dimensional recursive digital filters. The proposed method, known as SA-PSO, integrates the global search ability of PSO with the local search ability of SA and offsets the weaknesses of each. The Metropolis acceptance criterion is included in the basic PSO algorithm to increase the swarm's diversity by sometimes also accepting weaker solutions. The experimental results reveal that the performance of the optimal filter designed by the proposed SA-PSO method is improved. Further, the convergence behavior and the optimization accuracy of the proposed method have been improved significantly, and the computational time is also reduced. In addition, the proposed SA-PSO method produces the best optimal solution with lower mean and variance, which indicates that the algorithm can be used more efficiently in realizing two-dimensional digital filters.
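
    The hybrid described above can be sketched as a standard PSO loop in which personal bests are updated with a Metropolis acceptance test; the sphere objective, swarm size and cooling factor below are placeholders, since the paper's actual objective is a two-dimensional IIR filter design error.

```python
import math
import random

def sphere(x):                       # placeholder objective; the paper minimizes
    return sum(xi * xi for xi in x)  # a 2-D IIR filter approximation error

def sa_pso(dim=4, n_particles=20, n_iter=200, w=0.7, c1=1.5, c2=1.5, t0=1.0):
    pos = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [sphere(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    temp = t0
    for _ in range(n_iter):
        for i in range(n_particles):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * random.random() * (pbest[i][d] - pos[i][d])
                             + c2 * random.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = sphere(pos[i])
            delta = val - pbest_val[i]
            # Metropolis criterion: occasionally accept a worse personal best
            # to keep the swarm diverse, as in the hybrid described above.
            if delta < 0 or random.random() < math.exp(-delta / temp):
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
        temp *= 0.98                 # simple geometric cooling schedule
    return gbest, gbest_val

print(sa_pso())
```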

  6. Globally-Optimized Local Pseudopotentials for (Orbital-Free) Density Functional Theory Simulations of Liquids and Solids.

    Science.gov (United States)

    Del Rio, Beatriz G; Dieterich, Johannes M; Carter, Emily A

    2017-08-08

    The accuracy of local pseudopotentials (LPSs) is one of two major determinants of the fidelity of orbital-free density functional theory (OFDFT) simulations. We present a global optimization strategy for LPSs that enables OFDFT to reproduce solid and liquid properties obtained from Kohn-Sham DFT. Our optimization strategy can fit arbitrary properties from both solid and liquid phases, so the resulting globally optimized local pseudopotentials (goLPSs) can be used in solid and/or liquid-phase simulations depending on the fitting process. We show three test cases proving that we can (1) improve solid properties compared to our previous bulk-derived local pseudopotential generation scheme; (2) refine predicted liquid and solid properties by adding force matching data; and (3) generate a from-scratch, accurate goLPS from the local channel of a non-local pseudopotential. The proposed scheme therefore serves as a full and improved LPS construction protocol.

  7. Optimization of multiple-module thermoelectric coolers using artificial-intelligence techniques

    Energy Technology Data Exchange (ETDEWEB)

    Chen, K. [University of Utah (United States). Dept. of Mechanical Engineering; Lin, G.T. [National Taiwan University of Science and Technology, Taipei (China). Dept. of Mechanical Engineering

    2002-07-01

    Genetic algorithm (GA) and simulated annealing (SA) methods were employed to optimize the current distribution of a cooler made up of a large number of thermoelectric (TE) modules. The TE modules were grouped into several clusters in the flow direction, and the electric currents supplied to different clusters were adjusted separately to achieve maximum energy efficiency or minimum refrigeration temperature for different operating conditions and cooling requirements. Optimization results based on the design parameters of a large TE cooler showed considerable improvements in energy efficiency and refrigeration temperature when compared to the results of uniform current for the parallel-flow arrangement. On the other hand, results of the counter-flow arrangement showed only slight differences between uniform- and non-uniform-current optimizations. The optimization results of GA and SA were very close to each other. SA converged faster and was more computationally economical than GA for TE system optimization. (author)

  8. Optimal Energy Management of Multi-Microgrids with Sequentially Coordinated Operations

    Directory of Open Access Journals (Sweden)

    Nah-Oak Song

    2015-08-01

    Full Text Available We propose an optimal electric energy management of a cooperative multi-microgrid community with sequentially coordinated operations. The sequentially coordinated operations are suggested to distribute the computational burden and yet make the optimal 24-hour energy management of multi-microgrids possible. The sequential operations are mathematically modeled to find the optimal operation conditions and illustrated with a physical interpretation of how to achieve optimal energy management in the cooperative multi-microgrid community. This global electric energy optimization of the cooperative community is realized by ancillary internal trading between the microgrids in the cooperative community, which reduces the extra cost of unnecessary external trading by adjusting the electric energy production amounts of combined heat and power (CHP) generators and the amounts of both internal and external electric energy trading of the cooperative community. A simulation study is also conducted to validate the proposed mathematical energy management models.

  9. Using Virtual Reality in K-12 Education: A Simulation of Shooting Bottle Rockets for Distance

    Directory of Open Access Journals (Sweden)

    Charles Nippert

    2012-10-01

    Full Text Available Typically, it is more challenging to shoot bottle rockets for distance than to shoot them straight up and measure altitude, as is often done. Using a device made from pipe and wood to launch bottle rockets and control the launch angle creates a much more interesting problem for students who are attempting to optimize launch conditions. Plans are presented for a launcher that allows students to adjust the launch angle. To help embellish the exercise, we supplement the bottle rocket with a model using virtual reality and a photorealistic simulation of the launch that allows the students to appreciate the optimization problems associated with water and air pressure and launch angle. Our usage data indicates that students easily adapt to the virtual reality simulation and use it for intuitive experiments on their own to optimize launch conditions.

  10. Design and Optimal Research of a Non-Contact Adjustable Magnetic Adhesion Mechanism for a Wall-Climbing Welding Robot

    Directory of Open Access Journals (Sweden)

    Minghui Wu

    2013-01-01

    Full Text Available Wall-climbing welding robots (WCWRs) can replace workers in manufacturing and maintaining large unstructured equipment, such as ships. The adhesion mechanism is the key component of WCWRs, as it is directly related to the robot's ability to adhere, move flexibly and pass obstacles. In this paper, a novel non-contact adjustable magnetic adhesion mechanism is proposed. The magnet suckers are mounted under the robot's axils, and the sucker and the wall are not in contact. In order to pass obstacles, the sucker and the wheel unit can be pulled up and pushed down by a lifting mechanism. The magnetic adhesion force can be adjusted by changing the height of the gap between the sucker and the wall via the lifting mechanism. In order to increase the adhesion force, the value of the sucker's magnetic energy density (MED) is maximized by optimizing the magnet sucker's structure parameters with a finite element method. Experiments prove that the magnetic adhesion mechanism provides sufficient adhesion force and that the WCWR can complete wall-climbing work within a large unstructured environment.

  11. On the formulation and numerical simulation of distributed-order fractional optimal control problems

    Science.gov (United States)

    Zaky, M. A.; Machado, J. A. Tenreiro

    2017-11-01

    In a fractional optimal control problem, the integer order derivative is replaced by a fractional order derivative. The fractional derivative embeds implicitly the time delays in an optimal control process. The order of the fractional derivative can be distributed over the unit interval, to capture delays of distinct sources. The purpose of this paper is twofold. Firstly, we derive the generalized necessary conditions for optimal control problems with dynamics described by ordinary distributed-order fractional differential equations (DFDEs). Secondly, we propose an efficient numerical scheme for solving an unconstrained convex distributed optimal control problem governed by the DFDE. We convert the problem under consideration into an optimal control problem governed by a system of DFDEs, using the pseudo-spectral method and the Jacobi-Gauss-Lobatto (J-G-L) integration formula. Next, we present the numerical solutions for a class of optimal control problems of systems governed by DFDEs. The convergence of the proposed method is graphically analyzed showing that the proposed scheme is a good tool for the simulation of distributed control problems governed by DFDEs.
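
    For reference, a common form of the distributed-order operator alluded to above averages the Caputo derivative over the unit interval with a non-negative weight function; this is the standard textbook definition and not necessarily the exact formulation adopted by the authors.

```latex
% Distributed-order Caputo derivative (standard form; weight omega(alpha) >= 0)
\begin{equation}
  \mathcal{D}^{\omega}_{t}\, x(t)
    = \int_{0}^{1} \omega(\alpha)\, {}^{C}\!D^{\alpha}_{t}\, x(t)\, \mathrm{d}\alpha ,
  \qquad
  \omega(\alpha) \ge 0, \quad \int_{0}^{1} \omega(\alpha)\, \mathrm{d}\alpha > 0 .
\end{equation}
```

    When the weight function collapses to a Dirac delta at a single order, the usual single-order fractional dynamics are recovered, which is why the distributed order can capture delays of distinct sources as stated above.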

  12. Virtual reality simulation for the optimization of endovascular procedures: current perspectives

    Directory of Open Access Journals (Sweden)

    Rudarakanchana N

    2015-03-01

    Full Text Available Nung Rudarakanchana,1 Isabelle Van Herzeele,2 Liesbeth Desender,2 Nicholas JW Cheshire1 1Department of Surgery, Imperial College London, London, UK; 2Department of Thoracic and Vascular Surgery, Ghent University Hospital, Ghent, Belgium. On behalf of EVEREST (European Virtual reality Endovascular RESearch Team). Abstract: Endovascular technologies are rapidly evolving, often requiring coordination and cooperation between clinicians and technicians from diverse specialties. These multidisciplinary interactions lead to challenges that are reflected in the high rate of errors occurring during endovascular procedures. Endovascular virtual reality (VR) simulation has evolved from simple benchtop devices to full-physics simulators with advanced haptics, dynamic imaging and physiological controls. The latest developments in this field include the use of fully immersive simulated hybrid angiosuites to train whole endovascular teams in crisis resource management, and novel technologies that enable practitioners to build VR simulations based on patient-specific anatomy. As our understanding of the skills, both technical and nontechnical, required for optimal endovascular performance improves, the requisite tools for objective assessment of these skills are being developed and will further enable the use of VR simulation in the training and assessment of endovascular interventionalists and their entire teams. Simulation training that allows deliberate practice without danger to patients may be key to bridging the gap between new endovascular technology and improved patient outcomes. Keywords: virtual reality, simulation, endovascular, aneurysm

  13. CFD Simulation and Optimization of Very Low Head Axial Flow Turbine Runner

    Directory of Open Access Journals (Sweden)

    Yohannis Mitiku Tobo

    2015-10-01

    Full Text Available The main objective of this work is the Computational Fluid Dynamics (CFD) modelling, simulation and optimization of a very low head axial flow turbine runner to be used to drive the centrifugal pump of a turbine-driven pump. The ultimate goal of the optimization is to produce a power of 1 kW at a head of less than 1 m from a flowing river, driving the centrifugal pump directly through a mechanical coupling (speed multiplier gear). Flow rate, blade number, turbine rotational speed and inlet angle are the parameters used in the CFD modelling, simulation and design optimization of the turbine runner. The computed results show that the power developed by a turbine runner increases with increasing flow rate. Pressure inside the turbine runner increases with flow rate, but runner efficiency increases only up to a certain flow rate and is almost constant thereafter. Efficiency and power developed by a runner drop quickly if turbine speed increases, due to higher pressure losses and the conversion of pressure energy to kinetic energy inside the runner. Increasing the blade number increases the power developed, but efficiency does not always increase. Efficiency increases up to a certain blade number and then drops, due to the change in direction of the relative flow vector at the runner exit, which decreases the net rotational momentum and increases the axial flow velocity.

  14. Development of GEM detector for plasma diagnostics application: simulations addressing optimization of its performance

    Science.gov (United States)

    Chernyshova, M.; Malinowski, K.; Kowalska-Strzęciwilk, E.; Czarski, T.; Linczuk, P.; Wojeński, A.; Krawczyk, R. D.

    2017-12-01

    An advanced soft X-ray (SXR) diagnostics setup devoted to studies of SXR plasma emissivity is currently highly relevant and important for ITER/DEMO applications, especially in the energy range of tungsten emission lines, as plasma contamination by W and its transport in the plasma must be understood and monitored for W plasma-facing materials. The Gas Electron Multiplier (GEM) based SXR radiation detection system under development by our group, with a spatially and energy-resolved photon detecting chamber, may become such a diagnostic setup once many physical, technical and technological aspects are considered and solved. This work presents the results of simulations aimed at optimizing the design of the detector's internal chamber and its performance. The study of the effect of electrode alignment allowed choosing the gap distances that maximize electron transmission, as well as the optimal magnitudes of the applied electric fields. Finally, an optimal readout structure design, suitable for collecting the total formed charge effectively, was identified based on the range of the simulated electron cloud at the readout plane, which was on the order of ~2 mm.

  15. Validation, Optimization and Simulation of a Solar Thermoelectric Generator Model

    Science.gov (United States)

    Madkhali, Hadi Ali; Hamil, Ali; Lee, HoSung

    2017-12-01

    This study explores thermoelectrics as a viable option for small-scale solar thermal applications. Thermoelectric technology is based on the Seebeck effect, which states that a voltage is induced when a temperature gradient is applied to the junctions of two differing materials. This research proposes to analyze, validate, simulate, and optimize a prototype solar thermoelectric generator (STEG) model in order to increase efficiency. The intent is to further develop STEGs as a viable and productive energy source that limits pollution and reduces the cost of energy production. An empirical study (Kraemer et al. in Nat Mater 10:532, 2011) on the solar thermoelectric generator reported a high efficiency performance of 4.6%. The system had a vacuum glass enclosure, a flat panel (absorber), a thermoelectric generator and water circulation for the cold side. The theoretical and numerical approach of the current study validated the experimental results from Kraemer's study to a high degree. The numerical simulation process utilizes a two-stage approach in ANSYS software for Fluent and Thermal-Electric Systems. The solar load model technique uses solar radiation under AM 1.5G conditions in Fluent. This analytical model applies Dr. Ho Sung Lee's theory of optimal design to improve the performance of the STEG system by using dimensionless parameters. Applying this theory, using two cover glasses and radiation shields, the STEG model can achieve a maximum efficiency of 7%.

  16. Rapid Simulation of Flat Knitting Loops Based On the Yarn Texture and Loop Geometrical Model

    Directory of Open Access Journals (Sweden)

    Lu Zhiwen

    2017-06-01

    Full Text Available In order to create realistic loop primitives suitable for the fast computer-aided design (CAD) of flat knitted fabric, we study the geometric model of the loop as well as the variation of the loop surface. We establish a texture variation model based on the changing process from normal yarn to a loop, which provides the realistic texture of the simulated loop, and then optimize the simulated loop based on illumination variation. This paper develops a computer program with the optimization algorithm and achieves the loop simulation of different yarns to verify the feasibility of the proposed algorithm. Our work provides a fast CAD of flat knitted fabric with loop simulation that is not only more realistic but also adjustable to the yarn material, and it also provides theoretical value for the computer simulation of flat knitted fabric.

  17. Optimism, Positive and Negative Affect, and Goal Adjustment Strategies: Their Relationship to Activity Patterns in Patients with Chronic Musculoskeletal Pain

    Directory of Open Access Journals (Sweden)

    Rosa Esteve

    2018-01-01

    Full Text Available Objective. Activity patterns are the product of pain and of the self-regulation of current goals in the context of pain. The aim of this study was to investigate the association between goal management strategies and activity patterns while taking into account the role of optimism/pessimism and positive/negative affect. Methods. Two hundred and thirty-seven patients with chronic musculoskeletal pain filled out questionnaires on optimism, positive and negative affect, pain intensity, and the activity patterns they employed in dealing with their pain. Questionnaires were also administered to assess their general goal management strategies: goal persistence, flexible goal adjustment, and disengagement and reengagement with goals. Results. Structural equation modelling showed that higher levels of optimism were related to persistence, flexible goal management, and commitment to new goals. These strategies were associated with higher positive affect, persistence in finishing tasks despite pain, and infrequent avoidance behaviour in the presence or anticipation of pain. Conclusions. The strategies used by the patients with chronic musculoskeletal pain to manage their life goals are related to their activity patterns.

  18. Optimism, Positive and Negative Affect, and Goal Adjustment Strategies: Their Relationship to Activity Patterns in Patients with Chronic Musculoskeletal Pain.

    Science.gov (United States)

    Esteve, Rosa; López-Martínez, Alicia E; Peters, Madelon L; Serrano-Ibáñez, Elena R; Ruiz-Párraga, Gema T; Ramírez-Maestre, Carmen

    2018-01-01

    Activity patterns are the product of pain and of the self-regulation of current goals in the context of pain. The aim of this study was to investigate the association between goal management strategies and activity patterns while taking into account the role of optimism/pessimism and positive/negative affect. Two hundred and thirty-seven patients with chronic musculoskeletal pain filled out questionnaires on optimism, positive and negative affect, pain intensity, and the activity patterns they employed in dealing with their pain. Questionnaires were also administered to assess their general goal management strategies: goal persistence, flexible goal adjustment, and disengagement and reengagement with goals. Structural equation modelling showed that higher levels of optimism were related to persistence, flexible goal management, and commitment to new goals. These strategies were associated with higher positive affect, persistence in finishing tasks despite pain, and infrequent avoidance behaviour in the presence or anticipation of pain. The strategies used by the patients with chronic musculoskeletal pain to manage their life goals are related to their activity patterns.

  19. Robust Inventory System Optimization Based on Simulation and Multiple Criteria Decision Making

    Directory of Open Access Journals (Sweden)

    Ahmad Mortazavi

    2014-01-01

    Full Text Available Inventory management in retail is a difficult and complex decision-making process involving conflicting criteria; moreover, cyclic changes and trends in demand are inevitable in many industries. In this paper, simulation modeling is considered an efficient tool for modeling a retailer's multiproduct inventory system. For simulation model optimization, a novel multicriteria and robust surrogate model is designed based on the multiple attribute decision making (MADM) method, design of experiments (DOE), and principal component analysis (PCA). This approach, the main contribution of this paper, provides a framework for robust multiple criteria decision making under uncertainty.

  20. Optimal Results and Numerical Simulations for Flow Shop Scheduling Problems

    Directory of Open Access Journals (Sweden)

    Tao Ren

    2012-01-01

    Full Text Available This paper considers the m-machine flow shop problem with two objectives: makespan with release dates and total quadratic completion time, respectively. For Fm|rj|Cmax, we prove the asymptotic optimality of any dense schedule when the problem scale is large enough. For Fm‖ΣCj², an improvement strategy with local search is presented to improve the performance of the classical SPT heuristic. At the end of the paper, simulations show the effectiveness of the improvement strategy.
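
    A minimal sketch of an SPT-seeded local search for the total quadratic completion time objective is given below; the completion-time computation is the standard permutation flow shop recursion, while the random processing times and the pairwise-swap neighbourhood are illustrative choices, not the authors' exact improvement strategy.

```python
import random

def completion_times(order, p):
    """p[j][i]: processing time of job j on machine i (permutation flow shop)."""
    m = len(p[0])
    c = [0.0] * m          # running completion time per machine
    totals = []
    for j in order:
        for i in range(m):
            c[i] = max(c[i], c[i - 1] if i > 0 else 0.0) + p[j][i]
        totals.append(c[-1])  # completion time of job j on the last machine
    return totals

def total_quadratic_completion_time(order, p):
    return sum(cj ** 2 for cj in completion_times(order, p))

def spt_with_swap_local_search(p):
    # SPT seed: order jobs by total processing time over all machines.
    order = sorted(range(len(p)), key=lambda j: sum(p[j]))
    best = total_quadratic_completion_time(order, p)
    improved = True
    while improved:
        improved = False
        for a in range(len(order) - 1):
            for b in range(a + 1, len(order)):
                order[a], order[b] = order[b], order[a]
                val = total_quadratic_completion_time(order, p)
                if val < best:
                    best, improved = val, True
                else:
                    order[a], order[b] = order[b], order[a]  # undo the swap
    return order, best

random.seed(0)
p = [[random.randint(1, 9) for _ in range(3)] for _ in range(6)]  # 6 jobs, 3 machines
print(spt_with_swap_local_search(p))
```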

  1. An adaptive multi-spline refinement algorithm in simulation based sailboat trajectory optimization using onboard multi-core computer systems

    Directory of Open Access Journals (Sweden)

    Dębski Roman

    2016-06-01

    Full Text Available A new dynamic programming based parallel algorithm adapted to on-board heterogeneous computers for simulation based trajectory optimization is studied in the context of “high-performance sailing”. The algorithm uses a new discrete space of continuously differentiable functions called multi-splines as its search space representation. A basic version of the algorithm is presented in detail (pseudo-code, time and space complexity, search space auto-adaptation properties). Possible extensions of the basic algorithm are also described. The presented experimental results show that contemporary heterogeneous on-board computers can be effectively used for solving simulation based trajectory optimization problems. These computers can be considered micro high performance computing (HPC) platforms: they offer high performance while remaining energy and cost efficient. The simulation based approach can potentially give highly accurate results since the mathematical model that the simulator is built upon may be as complex as required. The approach described is applicable to many trajectory optimization problems due to its black-box represented performance measure and use of OpenCL.

  2. Simulation of microcirculatory hemodynamics: estimation of boundary condition using particle swarm optimization.

    Science.gov (United States)

    Pan, Qing; Wang, Ruofan; Reglin, Bettina; Fang, Luping; Pries, Axel R; Ning, Gangmin

    2014-01-01

    Estimation of the boundary condition is a critical problem in simulating hemodynamics in microvascular networks. This paper proposes a boundary estimation strategy based on a particle swarm optimization (PSO) algorithm, which aims to minimize the number of vessels whose flow direction is inverted in comparison to the experimental observation. The algorithm takes boundary values as the particle swarm and updates the positions of the particles iteratively to approach the optimization target. The method was tested in a real rat mesenteric network. With random initial boundary values, the method achieved a minimum of 9 segments with inverted flow direction in a network of 546 vessels. Compared with the reported literature, the current work has the advantage of a better fit with experimental observations and is more suitable for the boundary estimation problem in pulsatile hemodynamic models due to the experiment-based selection of the optimization target.

  3. A computational fluid dynamics simulation framework for ventricular catheter design optimization.

    Science.gov (United States)

    Weisenberg, Sofy H; TerMaath, Stephanie C; Barbier, Charlotte N; Hill, Judith C; Killeffer, James A

    2017-11-10

    OBJECTIVE Cerebrospinal fluid (CSF) shunts are the primary treatment for patients suffering from hydrocephalus. While proven effective in symptom relief, these shunt systems are plagued by high failure rates and often require repeated revision surgeries to replace malfunctioning components. One of the leading causes of CSF shunt failure is obstruction of the ventricular catheter by aggregations of cells, proteins, blood clots, or fronds of choroid plexus that occlude the catheter's small inlet holes or even the full internal catheter lumen. Such obstructions can disrupt CSF diversion out of the ventricular system or impede it entirely. Previous studies have suggested that altering the catheter's fluid dynamics may help to reduce the likelihood of complete ventricular catheter failure caused by obstruction. However, systematic correlation between a ventricular catheter's design parameters and its performance, specifically its likelihood to become occluded, still remains unknown. Therefore, an automated, open-source computational fluid dynamics (CFD) simulation framework was developed for use in the medical community to determine optimized ventricular catheter designs and to rapidly explore parameter influence for a given flow objective. METHODS The computational framework was developed by coupling a 3D CFD solver and an iterative optimization algorithm and was implemented in a high-performance computing environment. The capabilities of the framework were demonstrated by computing an optimized ventricular catheter design that provides uniform flow rates through the catheter's inlet holes, a common design objective in the literature. The baseline computational model was validated using 3D nuclear imaging to provide flow velocities at the inlet holes and through the catheter. RESULTS The optimized catheter design achieved through use of the automated simulation framework improved significantly on previous attempts to reach a uniform inlet flow rate distribution using

  4. Geometry optimization of a fibrous scaffold based on mathematical modelling and CFD simulation of a dynamic cell culture

    DEFF Research Database (Denmark)

    Tajsoleiman, Tannaz; J. Abdekhodaie, Mohammad; Gernaey, Krist

    2016-01-01

    simulation of cartilage cell culture under a perfusion flow, which allows not only to characterize the supply of nutrients and metabolic products inside a fibrous scaffold, but also to assess the overall culture condition and predict the cell growth rate. Afterwards, the simulation results supported finding...... an optimized design of the scaffold within a new mathematical optimization algorithm that is proposed. The main concept of this optimization routine is to maintain a large effective surface while simultaneously keeping the shear stress level in an operating range that is expected to be supporting growth....... Therewith, it should be possible to gradually reach improved culture efficiency as defined in the objective function....

  5. Efficiency of timing delays and electrode positions in optimization of biventricular pacing: a simulation study.

    Science.gov (United States)

    Miri, Raz; Graf, Iulia M; Dössel, Olaf

    2009-11-01

    Electrode positions and timing delays influence the efficacy of biventricular pacing (BVP). Accordingly, this study focuses on BVP optimization, using a detailed 3-D electrophysiological model of the human heart, which is adapted to patient-specific anatomy and pathophysiology. The research is carried out on ten heart models with left bundle branch block and myocardial infarction derived from magnetic resonance and computed tomography data. Cardiac electrical activity is simulated with the ten Tusscher cell model and an adaptive cellular automaton at physiological and pathological conduction levels. The optimization methods are based on a comparison between the electrical response of the healthy and diseased heart models, measured in terms of the root mean square error (E(RMS)) of the excitation front and the QRS duration error (E(QRS)). Intra- and intermethod associations of the pacing electrode and timing delay variables were analyzed with statistical methods, i.e., the t-test for dependent data, one-way analysis of variance for electrode pairs, and the Pearson model for equivalent parameters from the two optimization methods. The results indicate that the lateral left ventricle and the upper or middle septal area are frequently (60% of cases) the optimal positions of the left and right electrodes, respectively. Statistical analysis proves that the two optimization methods are in good agreement. In conclusion, a noninvasive preoperative BVP optimization strategy based on computer simulations can be used to identify the most beneficial patient-specific electrode configuration and timing delays.

  6. Simulation-optimization framework for multi-site multi-season hybrid stochastic streamflow modeling

    Science.gov (United States)

    Srivastav, Roshan; Srinivasan, K.; Sudheer, K. P.

    2016-11-01

    A simulation-optimization (S-O) framework is developed for the hybrid stochastic modeling of multi-site multi-season streamflows. The multi-objective optimization model formulated is the driver and the multi-site, multi-season hybrid matched block bootstrap model (MHMABB) is the simulation engine within this framework. The multi-site multi-season simulation model is the extension of the existing single-site multi-season simulation model. A robust and efficient evolutionary search based technique, namely, non-dominated sorting based genetic algorithm (NSGA - II) is employed as the solution technique for the multi-objective optimization within the S-O framework. The objective functions employed are related to the preservation of the multi-site critical deficit run sum and the constraints introduced are concerned with the hybrid model parameter space, and the preservation of certain statistics (such as inter-annual dependence and/or skewness of aggregated annual flows). The efficacy of the proposed S-O framework is brought out through a case example from the Colorado River basin. The proposed multi-site multi-season model AMHMABB (whose parameters are obtained from the proposed S-O framework) preserves the temporal as well as the spatial statistics of the historical flows. Also, the other multi-site deficit run characteristics namely, the number of runs, the maximum run length, the mean run sum and the mean run length are well preserved by the AMHMABB model. Overall, the proposed AMHMABB model is able to show better streamflow modeling performance when compared with the simulation based SMHMABB model, plausibly due to the significant role played by: (i) the objective functions related to the preservation of multi-site critical deficit run sum; (ii) the huge hybrid model parameter space available for the evolutionary search and (iii) the constraint on the preservation of the inter-annual dependence. Split-sample validation results indicate that the AMHMABB model is

  7. Simulation Analysis of China’s Energy and Industrial Structure Adjustment Potential to Achieve a Low-carbon Economy by 2020

    Directory of Open Access Journals (Sweden)

    Nan Xiang

    2013-11-01

    Full Text Available To achieve a low-carbon economy, China has committed to reducing its carbon dioxide (CO2) emissions per unit of gross domestic product (GDP) by 40%–45% by 2020 from 2005 levels and to increasing the share of non-fossil fuels in primary energy consumption to approximately 15%. It is necessary to investigate whether this plan is suitable and how this target may be reached. This paper verifies the feasibility of achieving the CO2 emission targets by energy and industrial structure adjustments, and proposes applicable measures for further sustainable development by 2020 through comprehensive simulation. The simulation model comprises three sub-models: an energy flow balance model, a CO2 emission model, and a socio-economic model. The model is constructed based on an input-output table and three balances (material, value, and energy flow), and it is written in LINGO, a linear dynamic programming language. The simulation results suggest that China's carbon intensity reduction promise can be realized and even surpassed to 50%, and that economic development (an annual 10% GDP growth rate) can be achieved if the energy and industrial structure are adjusted properly by 2020. However, the total amount of CO2 emission will reach a relatively high level—13.68 billion tons—which calls for further sound approaches to realize a low-carbon economy, such as energy utilization efficiency improvement, technology innovation, and the utilization of non-fossil energy.

  8. Optimizing a Water Simulation based on Wavefront Parameter Optimization

    OpenAIRE

    Lundgren, Martin

    2017-01-01

    DICE, a Swedish game company, wanted a more realistic water simulation. Currently, most large scale water simulations used in games are based upon ocean simulation technology. These techniques falter when used in other scenarios, such as coastlines. In order to produce a more realistic simulation, a new one was created based upon the water simulation technique "Wavefront Parameter Interpolation". This technique involves a rather extensive preprocess that enables ocean simulations to have inte...

  9. A multi-level hierarchic Markov process with Bayesian updating for herd optimization and simulation in dairy cattle.

    Science.gov (United States)

    Demeter, R M; Kristensen, A R; Dijkstra, J; Oude Lansink, A G J M; Meuwissen, M P M; van Arendonk, J A M

    2011-12-01

    Herd optimization models that determine economically optimal insemination and replacement decisions are valuable research tools to study various aspects of farming systems. The aim of this study was to develop a herd optimization and simulation model for dairy cattle. The model determines economically optimal insemination and replacement decisions for individual cows and simulates whole-herd results that follow from optimal decisions. The optimization problem was formulated as a multi-level hierarchic Markov process, and a state space model with Bayesian updating was applied to model variation in milk yield. Methodological developments were incorporated in 2 main aspects. First, we introduced an additional level to the model hierarchy to obtain a more tractable and efficient structure. Second, we included a recently developed cattle feed intake model. In addition to methodological developments, new parameters were used in the state space model and other biological functions. Results were generated for Dutch farming conditions, and outcomes were in line with actual herd performance in the Netherlands. Optimal culling decisions were sensitive to variation in milk yield but insensitive to energy requirements for maintenance and feed intake capacity. We anticipate that the model will be applied in research and extension. Copyright © 2011 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.

  10. Elektrisk Design og Styring. Simulation Platform to Model, Optimize and Design Wind Turbines

    DEFF Research Database (Denmark)

    Iov, Florin; Hansen, A. D.; Soerensen, P.

    This report is a general overview of the results obtained in the project "Electrical Design and Control. Simulation Platform to Model, Optimize and Design Wind Turbines". The report is structured in six chapters. First, the background of this project and the main goals as well as the structure...... of the simulation platform is given. The main topologies for wind turbines, which have been taken into account during the project, are briefly presented. Then, the considered simulation tools, namely HAWC, DIgSILENT, Saber and Matlab/Simulink, which have been used in this simulation platform, are described. The focus here...... is on the modelling and simulation time scale aspects. The abilities of these tools are complementary and they can together cover all the modelling aspects of the wind turbines, e.g. mechanical loads, power quality, switching, control and grid faults. New models and new control algorithms for wind turbine systems have......

  11. Robust optimization of robotic pick and place operations for deformable objects through simulation

    DEFF Research Database (Denmark)

    Bo Jorgensen, Troels; Debrabant, Kristian; Kruger, Norbert

    2016-01-01

    for the task. The solutions are parameterized in terms of the robot motion and the gripper configuration, and after each simulation various objective scores are determined and combined. This enables the use of various optimization strategies. Based on visual inspection of the most robust solution found...

  12. Optimization of metabolite detection by quantum mechanics simulations in magnetic resonance spectroscopy.

    Science.gov (United States)

    Gambarota, Giulio

    2017-07-15

    Magnetic resonance spectroscopy (MRS) is a well established modality for investigating tissue metabolism in vivo. In recent years, many efforts by the scientific community have been directed towards the improvement of metabolite detection and quantitation. Quantum mechanics simulations allow for investigations of the MR signal behaviour of metabolites; thus, they provide an essential tool in the optimization of metabolite detection. In this review, we will examine quantum mechanics simulations based on the density matrix formalism. The density matrix was introduced by von Neumann in 1927 to take into account statistical effects within the theory of quantum mechanics. We will discuss the main steps of the density matrix simulation of an arbitrary spin system and show some examples for the strongly coupled two spin system. Copyright © 2016 Elsevier Inc. All rights reserved.

  13. Optimization of linear and branched alkane interactions with water to simulate hydrophobic hydration

    Science.gov (United States)

    Ashbaugh, Henry S.; Liu, Lixin; Surampudi, Lalitanand N.

    2011-08-01

    Previous studies of simple gas hydration have demonstrated that the accuracy of molecular simulations at capturing the thermodynamic signatures of hydrophobic hydration is linked both to the fidelity of the water model at replicating the experimental liquid density at ambient pressure and an accounting of polarization interactions between the solute and water. We extend those studies to examine alkane hydration using the transferable potentials for phase equilibria united-atom model for linear and branched alkanes, developed to reproduce alkane phase behavior, and the TIP4P/2005 model for water, which provides one of the best descriptions of liquid water for the available fixed-point charge models. Alkane site/water oxygen Lennard-Jones cross interactions were optimized to reproduce the experimental alkane hydration free energies over a range of temperatures. The optimized model reproduces the hydration free energies of the fitted alkanes with a root mean square difference between simulation and experiment of 0.06 kcal/mol over a wide temperature range, compared to 0.44 kcal/mol for the parent model. The optimized model accurately reproduces the temperature dependence of hydrophobic hydration, as characterized by the hydration enthalpies, entropies, and heat capacities, as well as the pressure response, as characterized by partial molar volumes.

  14. Optical fringe-reflection deflectometry with bundle adjustment

    Science.gov (United States)

    Xiao, Yong-Liang; Li, Sikun; Zhang, Qican; Zhong, Jianxin; Su, Xianyu; You, Zhisheng

    2018-06-01

    Liquid crystal display (LCD) screens are located outside of a camera's field of view in fringe-reflection deflectometry. Therefore, fringes that are displayed on LCD screens are obtained through specular reflection by a fixed camera. Thus, the pose calibration between the camera and LCD screen is one of the main challenges in fringe-reflection deflectometry. A markerless planar mirror is used to reflect the LCD screen more than three times, and the fringes are mapped into the fixed camera. The geometrical calibration can be accomplished by estimating the pose between the camera and the virtual image of fringes. Considering the relation between their pose, the incidence and reflection rays can be unified in the camera frame, and a forward triangulation intersection can be operated in the camera frame to measure three-dimensional (3D) coordinates of the specular surface. In the final optimization, constraint-bundle adjustment is operated to refine simultaneously the camera intrinsic parameters, including distortion coefficients, estimated geometrical pose between the LCD screen and camera, and 3D coordinates of the specular surface, with the help of the absolute phase collinear constraint. Simulation and experiment results demonstrate that the pose calibration with planar mirror reflection is simple and feasible, and the constraint-bundle adjustment can enhance the 3D coordinate measurement accuracy in fringe-reflection deflectometry.

  15. Adaptive adjustment of interval predictive control based on combined model and application in shell brand petroleum distillation tower

    Science.gov (United States)

    Sun, Chao; Zhang, Chunran; Gu, Xinfeng; Liu, Bin

    2017-10-01

    Constraints of the optimization objective often cannot be met when predictive control is applied to an industrial production process; the online predictive controller will then fail to find a feasible solution or a global optimal solution. To solve this problem, based on a Back Propagation-Auto Regressive with exogenous inputs (BP-ARX) combined control model, the nonlinear programming method is used to discuss the feasibility of constrained predictive control, a feasibility decision theorem for the optimization objective is proposed, and a solution method for the soft-constraint slack variables is given for the case in which the optimization objective is not feasible. On this basis, for the interval control requirements of the controlled variables, the solved slack variables are introduced and an adaptive weighted interval predictive control algorithm is proposed, achieving adaptive regulation of the optimization objective and automatic adjustment of the infeasible interval range, expanding the scope of the feasible region, and ensuring the feasibility of the interval optimization objective. Finally, the feasibility and effectiveness of the algorithm are validated through comparative simulation experiments.

  16. Optimized Ultrasound-Assisted Oxidative Desulfurization Process of Simulated Fuels over Activated Carbon-Supported Phosphotungstic Acid

    Directory of Open Access Journals (Sweden)

    Peniel Jean Gildo

    2018-01-01

    Full Text Available Recent technological advancements respond to the call to minimize or eliminate emissions to the atmosphere. However, on average, fuel oil, which is one of the major raw materials, is found to be increasing in sulfur concentration due to a phenomenon called thermal maturation. As such, a deeper desulfurization process is needed to obtain low/ultra-low sulfur fuel oils. In the present study, the ultrasound-assisted oxidative desulfurization (UAOD) processes using the H2O2 and HPW-AC oxidizing system, applied to simulated fuel (~2800 ppm sulfur in the form of dibenzothiophene, benzothiophene, and thiophene dissolved in toluene), were optimized. After pre-saturation of the HPW-AC with the simulated fuel, H2O2 was added just before the reaction was commenced under ultrasonic irradiation. After the application of both a 2^k-factorial design of experiments for screening and a face-centered design of experiments for optimization, it was found that 25.52 wt% H2O2 concentration, 983.9 mg of catalyst dose, 9.52 mL of aqueous phase per 10 mL of the organic phase and 76.36 minutes of ultrasonication time would render 94.74% oxidation of the sulfur compounds in the simulated fuel. After applying the optimized parameters to kerosene and employing a 4-cycle extraction using acetonitrile, 99% of the original sulfur content was removed from the kerosene using the UAOD optimized parameters. The desulfurization process resulted in a low-sulfur kerosene which retained its basic fuel properties such as density, viscosity and calorific value.

  17. Selecting and optimizing eco-physiological parameters of Biome-BGC to reproduce observed woody and leaf biomass growth of Eucommia ulmoides plantation in China using Dakota optimizer

    Science.gov (United States)

    Miyauchi, T.; Machimura, T.

    2013-12-01

    In simulations using an ecosystem process model, the adjustment of parameters is indispensable for improving the accuracy of prediction. This procedure, however, requires much time and effort to bring the simulation results close to the measurements for models consisting of various ecosystem processes. In this study, we tried to apply a general-purpose optimization tool to the parameter optimization of an ecosystem model, and examined its validity by comparing the simulated and measured biomass growth of a woody plantation. A biometric survey of tree biomass growth was performed in 2009 in an 11-year-old Eucommia ulmoides plantation in Henan Province, China. The climate of the site is dry temperate. Leaf, above- and below-ground woody biomass were measured from three cut trees and converted into carbon mass per area by measured carbon contents and stem density. Yearly woody biomass growth of the plantation was calculated according to allometric relationships determined by tree-ring analysis of seven cut trees. We used Biome-BGC (Thornton, 2002) to reproduce the biomass growth of the plantation. Air temperature and humidity from 1981 to 2010 were used as the input climate conditions. The plant functional type was deciduous broadleaf, and non-optimized parameters were left at their defaults. 11-year-long normal simulations were performed following a spin-up run. In order to select the parameters to optimize, we analyzed the sensitivity of leaf, above- and below-ground woody biomass to the eco-physiological parameters. Following the selection, optimization of the parameters was performed using the Dakota optimizer. Dakota is an optimizer developed by Sandia National Laboratories to provide a systematic and rapid means of obtaining optimal designs using simulation-based models. As the objective function, we calculated the sum of relative errors between simulated and measured leaf, above- and below-ground woody carbon at each of eleven years. In an alternative run, errors at the last year (at the
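
    A hedged sketch of the calibration objective this record describes: the sum of relative errors between simulated and measured leaf, above- and below-ground woody carbon over the eleven years. run_biome_bgc() below is a hypothetical stand-in for the actual Biome-BGC run (a toy growth curve so the sketch runs end to end); Dakota would drive such an objective.

      import numpy as np

      def run_biome_bgc(params):
          # Placeholder for the actual Biome-BGC simulation: returns 11 years of
          # carbon pools from a toy growth curve (hypothetical, for illustration).
          years = np.arange(1, 12)
          growth = params["growth_rate"] * years
          return {"leaf_c": 0.1 * growth, "above_wood_c": growth, "below_wood_c": 0.3 * growth}

      def objective(params, measured):
          # Sum of relative errors over the three pools and eleven years.
          simulated = run_biome_bgc(params)
          total = 0.0
          for pool in ("leaf_c", "above_wood_c", "below_wood_c"):
              total += np.sum(np.abs(simulated[pool] - measured[pool]) / measured[pool])
          return total

      measured = run_biome_bgc({"growth_rate": 1.0})   # synthetic "observations"
      print(objective({"growth_rate": 0.8}, measured))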

  18. Portfolio balancing and risk adjusted values under constrained budget conditions

    International Nuclear Information System (INIS)

    MacKay, J.A.; Lerche, I.

    1996-01-01

    For a given hydrocarbon exploration opportunity, the influences of value, cost, success probability and corporate risk tolerance provide an optimal working interest that should be taken in the opportunity in order to maximize the risk adjusted value. When several opportunities are available, but when the total budget is insufficient to take optimal working interest in each, an analytic procedure is given for optimizing the risk adjusted value of the total portfolio; the relevant working interests are also derived based on a cost exposure constraint. Several numerical illustrations are provided to exhibit the use of the method under different budget conditions, and with different numbers of available opportunities. When value, cost, success probability, and risk tolerance are uncertain for each and every opportunity, the procedure is generalized to allow determination of probable optimal risk adjusted value for the total portfolio and, at the same time, the range of probable working interest that should be taken in each opportunity is also provided. The result is that the computations of portfolio balancing can be done quickly in either deterministic or probabilistic manners on a small calculator, thereby providing rapid assessments of opportunities and their worth to a corporation. (Author)
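
    The record does not reproduce its formulas; purely as an illustration, a Cozzolino-type exponential-utility risk-adjusted value for a single opportunity can be maximized numerically over the working interest W. All dollar figures below are hypothetical, and the portfolio-level budget constraint of the paper is not shown.

      import numpy as np
      from scipy.optimize import minimize_scalar

      def rav(w, value, cost, p_success, risk_tolerance):
          # Exponential-utility certainty equivalent of taking working interest w
          # (a common Cozzolino-style form; assumption, not the paper's derivation).
          rt = risk_tolerance
          return -rt * np.log(p_success * np.exp(-w * value / rt)
                              + (1.0 - p_success) * np.exp(w * cost / rt))

      value, cost, p, rt = 50.0, 10.0, 0.3, 20.0       # hypothetical $MM figures
      res = minimize_scalar(lambda w: -rav(w, value, cost, p, rt),
                            bounds=(0.0, 1.0), method="bounded")
      print("optimal working interest:", round(res.x, 3),
            "RAV:", round(rav(res.x, value, cost, p, rt), 3))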

  19. Diffuser Optimization of the Exhaust System with Catalytic Converter for a 110 cc Moped with Fluid Flow CFD Simulation

    Directory of Open Access Journals (Sweden)

    Tresna Soemardi

    2010-10-01

    Full Text Available CFD simulation is used to obtain the behavior of the exhaust gas through the catalyst; this result is then used to optimize the geometry to achieve a uniform flow distribution over the catalyst, and CFD simulation is also used to analyze the backpressure that occurs in the model.

  20. Design Optimization of a Thermoelectric Cooling Module Using Finite Element Simulations

    Science.gov (United States)

    Abid, Muhammad; Somdalen, Ragnar; Rodrigo, Marina Sancho

    2018-05-01

    The thermoelectric industry is concerned about the size reduction, cooling performance and, ultimately, the production cost of thermoelectric modules. Optimization of the size and performance of a commercially available thermoelectric cooling module is considered using finite element simulations. Numerical simulations are performed on eight different three-dimensional geometries of a single thermocouple, and the results are further extended for a whole module as well. The maximum temperature rise at the hot and cold sides of a thermocouple is determined by altering its height and cross-sectional area. The influence of the soldering layer is analyzed numerically using temperature dependent and temperature independent thermoelectric properties of the solder material and the semiconductor pellets. Experiments are conducted to test the cooling performance of the thermoelectric module and the results are compared with the results obtained through simulations. Finally, cooling rate and maximum coefficient of performance (COPmax) are computed using convective and non-convective boundary conditions.

  1. Efficiency optimization of a fast Poisson solver in beam dynamics simulation

    Science.gov (United States)

    Zheng, Dawei; Pöplau, Gisela; van Rienen, Ursula

    2016-01-01

    Calculating the solution of Poisson's equation relating to space charge force is still the major time consumption in beam dynamics simulations and calls for further improvement. In this paper, we summarize a classical fast Poisson solver in beam dynamics simulations: the integrated Green's function method. We introduce three optimization steps of the classical Poisson solver routine: using the reduced integrated Green's function instead of the integrated Green's function; using the discrete cosine transform instead of discrete Fourier transform for the Green's function; using a novel fast convolution routine instead of an explicitly zero-padded convolution. The new Poisson solver routine preserves the advantages of fast computation and high accuracy. This provides a fast routine for high performance calculation of the space charge effect in accelerators.
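
    A minimal 1-D illustration of the baseline step this record optimizes: an open-boundary (zero-padded) convolution of a charge density with a free-space Green's function via FFTs. The reduced integrated Green's function and DCT variants of the paper are not reproduced, and the density, grid and Green's function here are toy 1-D stand-ins for the real 2-D/3-D space-charge solve.

      import numpy as np
      from scipy.signal import fftconvolve

      n, h = 128, 0.1
      x = np.arange(n) * h
      rho = np.exp(-((x - x.mean()) ** 2) / 0.5)       # toy line-charge density

      # 1-D free-space Green's function for phi'' = -rho, sampled at all grid offsets
      offsets = np.arange(-(n - 1), n) * h
      green = -0.5 * np.abs(offsets)

      # Zero-padded (open-boundary) FFT convolution: potential at the n grid points
      phi = fftconvolve(rho, green, mode="valid") * h
      print(phi.shape)   # (n,)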

  2. A concept for optimizing avalanche rescue strategies using a Monte Carlo simulation approach.

    Directory of Open Access Journals (Sweden)

    Ingrid Reiweger

    Full Text Available Recent technical and strategic developments have increased the survival chances for avalanche victims. Still, hundreds of people, primarily recreationists, get caught and buried by snow avalanches every year. About 100 die each year in the European Alps, and many more worldwide. Refining concepts for avalanche rescue means optimizing the procedures such that the survival chances are maximized in order to save the greatest possible number of lives. Avalanche rescue includes several parameters related to terrain, natural hazards, the people affected by the event, the rescuers, and the applied search and rescue equipment. The numerous parameters and their complex interaction make it unrealistic for a rescuer to make, in the urgency of the situation, the best possible decisions without clearly structured, easily applicable decision support systems. In order to analyse which measures lead to the best possible survival outcome in the complex environment of an avalanche accident, we present a numerical approach, namely a Monte Carlo simulation. We demonstrate the application of Monte Carlo simulations for two typical, yet tricky questions in avalanche rescue: (1) calculating how deep one should probe in the first passage of a probe line depending on search area, and (2) determining for how long resuscitation should be performed on a specific patient while others are still buried. In both cases, we demonstrate that optimized strategies can be calculated with the Monte Carlo method, provided that the necessary input data are available. Our Monte Carlo simulations also suggest that with a strict focus on the "greatest good for the greatest number", today's rescue strategies can be further optimized in the best interest of patients involved in an avalanche accident.
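
    A toy Monte Carlo in the spirit of question (1) above: comparing candidate probing depths by the expected fraction of reachable burials per hour of probing. The burial-depth distribution, probe spacing and probing-time model below are illustrative assumptions, not the paper's data.

      import numpy as np

      rng = np.random.default_rng(0)
      burial_depth = rng.lognormal(mean=0.0, sigma=0.5, size=100_000)   # metres (assumed)

      def expected_finds_per_hour(probe_depth, area=5000.0, spacing=0.5):
          probes = area / spacing**2                     # probes needed to cover the area
          seconds_per_probe = 4.0 * probe_depth          # deeper probing is slower (assumed)
          hours = probes * seconds_per_probe / 3600.0
          p_found = np.mean(burial_depth <= probe_depth) # fraction of burials reachable
          return p_found / hours

      for d in (1.0, 1.5, 2.0, 2.5, 3.0):
          print(f"probe to {d:.1f} m -> relative yield {expected_finds_per_hour(d):.4f}")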

  3. BASIMO - Borehole Heat Exchanger Array Simulation and Optimization Tool

    Science.gov (United States)

    Schulte, Daniel O.; Bastian, Welsch; Wolfram, Rühaak; Kristian, Bär; Ingo, Sass

    2017-04-01

    Arrays of borehole heat exchangers are an increasingly popular source for renewable energy. Furthermore, they can serve as borehole thermal energy storage (BTES) systems for seasonally fluctuating heat sources like solar thermal energy or district heating grids. The high temperature level of these heat sources prohibits the use of the shallow subsurface for environmental reasons. Therefore, deeper reservoirs have to be accessed instead. The increased depth of the systems results in high investment costs and has hindered the implementation of this technology until now. Therefore, research of medium deep BTES systems relies on numerical simulation models. Current simulation tools cannot - or only to some extent - describe key features like partly insulated boreholes unless they run fully discretized models of the borehole heat exchangers. However, fully discretized models often come at a high computational cost, especially for large arrays of borehole heat exchangers. We give an update on the development of BASIMO: a tool, which uses one dimensional thermal resistance and capacity models for the borehole heat exchangers coupled with a numerical finite element model for the subsurface heat transport in a dual-continuum approach. An unstructured tetrahedral mesh bypasses the limitations of structured grids for borehole path geometries, while the thermal resistance and capacity model is improved to account for borehole heat exchanger properties changing with depth. Thereby, partly insulated boreholes can be considered in the model. Furthermore, BASIMO can be used to improve the design of BTES systems: the tool allows for automated parameter variations and is readily coupled to other code like mathematical optimization algorithms. Optimization can be used to determine the required minimum system size or to increase the system performance.

  4. Optimizing human activity patterns using global sensitivity analysis.

    Science.gov (United States)

    Fairchild, Geoffrey; Hickmann, Kyle S; Mniszewski, Susan M; Del Valle, Sara Y; Hyman, James M

    2014-12-01

    Implementing realistic activity patterns for a population is crucial for modeling, for example, disease spread, supply and demand, and disaster response. Using the dynamic activity simulation engine, DASim, we generate schedules for a population that capture regular (e.g., working, eating, and sleeping) and irregular activities (e.g., shopping or going to the doctor). We use the sample entropy (SampEn) statistic to quantify a schedule's regularity for a population. We show how to tune an activity's regularity by adjusting SampEn, thereby making it possible to realistically design activities when creating a schedule. The tuning process sets up a computationally intractable high-dimensional optimization problem. To reduce the computational demand, we use Bayesian Gaussian process regression to compute global sensitivity indices and identify the parameters that have the greatest effect on the variance of SampEn. We use the harmony search (HS) global optimization algorithm to locate global optima. Our results show that HS combined with global sensitivity analysis can efficiently tune the SampEn statistic with few search iterations. We demonstrate how global sensitivity analysis can guide statistical emulation and global optimization algorithms to efficiently tune activities and generate realistic activity patterns. Though our tuning methods are applied to dynamic activity schedule generation, they are general and represent a significant step in the direction of automated tuning and optimization of high-dimensional computer simulations.
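
    A reference implementation of the sample entropy (SampEn) statistic that this record tunes; this direct O(N^2) version illustrates the metric only and omits the Gaussian-process emulation and harmony-search machinery described above. The m and r defaults are common textbook choices, not the paper's settings.

      import numpy as np

      def sample_entropy(x, m=2, r=0.2):
          # SampEn = -ln(A/B), with A and B the counts of template pairs of length
          # m+1 and m, respectively, matching within tolerance r (Chebyshev distance).
          x = np.asarray(x, dtype=float)
          r *= np.std(x)

          def count(length):
              templates = np.array([x[i:i + length] for i in range(len(x) - length + 1)])
              d = np.max(np.abs(templates[:, None, :] - templates[None, :, :]), axis=2)
              n = len(templates)
              return (np.sum(d <= r) - n) / 2            # unordered pairs, no self-matches

          b, a = count(m), count(m + 1)
          return -np.log(a / b)

      rng = np.random.default_rng(1)
      print("white noise:", sample_entropy(rng.normal(size=400)))
      print("sine wave  :", sample_entropy(np.sin(np.linspace(0, 20 * np.pi, 400))))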

  5. Simulation optimizing of n-type HIT solar cells with AFORS-HET

    Science.gov (United States)

    Yao, Yao; Xiao, Shaoqing; Zhang, Xiumei; Gu, Xiaofeng

    2017-07-01

    This paper presents a study of heterojunction with intrinsic thin layer (HIT) solar cells based on n-type silicon substrates, using the simulation software AFORS-HET. We have studied the influence of the thickness and band gap of the intrinsic layer and of the defect densities at each interface. The underlying mechanisms are elaborated as well. The results show that the optimized efficiency exceeds 23%, which may provide useful guidance for the practical preparation of HIT solar cells in industry.

  6. Numerical simulation of the modulation transfer function (MTF) in infrared focal plane arrays: simulation methodology and MTF optimization

    Science.gov (United States)

    Schuster, J.

    2018-02-01

    Military requirements demand both single- and dual-color infrared (IR) imaging systems with both high resolution and sharp contrast. To quantify the performance of these imaging systems, a key measure of performance, the modulation transfer function (MTF), describes how well an optical system reproduces an object's contrast in the image plane at different spatial frequencies. At the center of an IR imaging system is the focal plane array (FPA). IR FPAs are hybrid structures consisting of a semiconductor detector pixel array, typically fabricated from HgCdTe, InGaAs or III-V superlattice materials, hybridized with heat/pressure to a silicon read-out integrated circuit (ROIC), with indium bumps on each pixel providing the mechanical and electrical connection. Due to the growing sophistication of the pixel arrays in these FPAs, sophisticated modeling techniques are required to predict, understand, and benchmark the pixel array MTF that contributes to the total imaging system MTF. To model the pixel array MTF, computationally exhaustive 2D and 3D numerical simulation approaches are required to correctly account for complex architectures and effects such as lateral diffusion from the pixel corners. It is paramount to accurately model the lateral diffusion (pixel crosstalk), as it can become the dominant mechanism limiting the detector MTF if not properly mitigated. Once the detector MTF has been simulated, it is directly decomposed into its constituent contributions to reveal exactly what is limiting the total detector MTF, providing a path for optimization. An overview of the MTF will be given and the simulation approach will be discussed in detail, along with how different simulation parameters affect the MTF calculation. Finally, MTF optimization strategies (crosstalk mitigation) will be discussed.

  7. Leukocyte Motility Models Assessed through Simulation and Multi-objective Optimization-Based Model Selection.

    Directory of Open Access Journals (Sweden)

    Mark N Read

    2016-09-01

    Full Text Available The advent of two-photon microscopy now reveals unprecedented, detailed spatio-temporal data on cellular motility and interactions in vivo. Understanding cellular motility patterns is key to gaining insight into the development and possible manipulation of the immune response. Computational simulation has become an established technique for understanding immune processes and evaluating hypotheses in the context of experimental data, and there is clear scope to integrate microscopy-informed motility dynamics. However, determining which motility model best reflects in vivo motility is non-trivial: 3D motility is an intricate process requiring several metrics to characterize. This complicates model selection and parameterization, which must be performed against several metrics simultaneously. Here we evaluate Brownian motion, Lévy walk and several correlated random walks (CRWs) against the motility dynamics of neutrophils and lymph node T cells under inflammatory conditions by simultaneously considering cellular translational and turn speeds, and meandering indices. Heterogeneous cells exhibiting a continuum of inherent translational speeds and directionalities comprise both datasets, a feature significantly improving capture of in vivo motility when simulated as a CRW. Furthermore, translational and turn speeds are inversely correlated, and the corresponding CRW simulation again improves capture of our in vivo data, albeit to a lesser extent. In contrast, Brownian motion poorly reflects our data. Lévy walk is competitive in capturing some aspects of neutrophil motility, but T cell directional persistence only, therein highlighting the importance of evaluating models against several motility metrics simultaneously. This we achieve through novel application of multi-objective optimization, wherein each model is independently implemented and then parameterized to identify optimal trade-offs in performance against each metric. The resultant Pareto
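
    An illustrative 2-D correlated random walk (CRW) with a heterogeneous per-cell speed, in the spirit of the models evaluated above; the parameter values are invented for the sketch and are not fitted to the two-photon data.

      import numpy as np

      rng = np.random.default_rng(2)

      def simulate_crw(steps=200, mean_speed=5.0, turn_sd=0.6):
          speed = rng.lognormal(np.log(mean_speed), 0.4)     # cell-specific speed
          heading = rng.uniform(0, 2 * np.pi)
          pos = np.zeros((steps + 1, 2))
          for t in range(steps):
              heading += rng.normal(0.0, turn_sd)            # correlated direction change
              pos[t + 1] = pos[t] + speed * np.array([np.cos(heading), np.sin(heading)])
          return pos

      track = simulate_crw()
      net = np.linalg.norm(track[-1] - track[0])
      path = np.sum(np.linalg.norm(np.diff(track, axis=0), axis=1))
      print("meandering index (net/path displacement):", net / path)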

  8. Optimization Analysis of Supply Chain Resource Allocation in Customized Online Shopping Service Mode

    Directory of Open Access Journals (Sweden)

    Jianming Yao

    2015-01-01

    Full Text Available For an online-shopping company, whether it can provide its customers with customized service is key to enhancing its customers’ experience value and its own competitiveness. A good customized service requires effective integration and reasonable allocation of the company’s supply chain resources running in the background. Based on an analysis of the allocation of supply chain resources in the customized online shopping service mode and its operational characteristics, this paper puts forward an optimization model for the resource allocation and builds an improved ant algorithm to solve it. Finally, the effectiveness and feasibility of the optimization method and algorithm are demonstrated by a numerical simulation. This paper finds that the special online shopping environment leads to many dynamic and uncertain characteristics of the service demands. Different customized service patterns and their combinations should be matched with different supply chain resource allocations. The optimization model not only reflects the required service cost and delivery time in the objective function, but also considers the optimization of service scale effects and the relation between integration benefits and risks. The improved ant algorithm has obvious advantages in flexibly balancing the multi-objective optimization, adjusting the convergence speed, and adjusting the operation parameters.

  9. Multilevel criticality computations in AREVA NP's core simulation code ARTEMIS - 195

    International Nuclear Information System (INIS)

    Van Geemert, R.

    2010-01-01

    This paper discusses the multi-level critical boron iteration approach that is applied per default in AREVA NP's whole-core neutronics and thermal hydraulics core simulation program ARTEMIS. This multi-level approach is characterized by the projection of variational boron concentration adjustments to the coarser mesh levels in a multi-level re-balancing hierarchy that is associated with the nodal flux equations to be solved in steady-state core simulation. At each individual re-balancing mesh level, optimized variational criticality tuning formulas are applied. The latter drive the core model to a numerically highly accurate self-sustaining state (i.e. with the neutronic eigenvalue being 1 up to a very high numerical precision) by continuous adjustment of the boron concentration as a system-wide scalar criticality parameter. Due to the default application of this approach in ARTEMIS reactor cycle simulations, an accuracy of all critical boron concentration estimates better than 0.001 ppm is enabled for all burnup time steps in a computationally efficient way. This high accuracy is relevant for precision optimization in industrial core simulation as well as for enabling accurate reactivity perturbation assessments. The developed approach is presented from a numerical methodology point of view with an emphasis on the multi-grid aspect of the concept. Furthermore, an application-relevant verification is presented in terms of achieved coupled iteration convergence efficiency for an application-representative industrial core cycle computation. (authors)

  10. Microwave imaging for conducting scatterers by hybrid particle swarm optimization with simulated annealing

    International Nuclear Information System (INIS)

    Mhamdi, B.; Grayaa, K.; Aguili, T.

    2011-01-01

    In this paper, a microwave imaging technique for reconstructing the shape of two-dimensional perfectly conducting scatterers by means of a stochastic optimization approach is investigated. Based on the boundary condition and the measured scattered field derived from transverse magnetic illuminations, a set of nonlinear integral equations is obtained and the imaging problem is reformulated into an optimization problem. A hybrid approximation algorithm, called PSO-SA, is developed in this work to solve the inverse scattering problem. In the hybrid algorithm, particle swarm optimization (PSO) combines global search and local search to find the optimal solution in reasonable time, and simulated annealing (SA) uses a certain acceptance probability to avoid being trapped in a local optimum. The hybrid approach elegantly combines the exploration ability of PSO with the exploitation ability of SA. Reconstruction results are compared with the exact shapes of some conducting cylinders, and good agreement with the original shapes is observed.
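
    A hedged sketch of a PSO-SA hybrid of the kind described above, demonstrated on a simple multimodal test function rather than the electromagnetic inverse problem; the inertia, acceleration and cooling coefficients are generic textbook choices, not the paper's settings.

      import numpy as np

      rng = np.random.default_rng(3)

      def f(x):                                            # toy multimodal objective (Rastrigin)
          return np.sum(x**2 - 10 * np.cos(2 * np.pi * x) + 10, axis=-1)

      n, dim, iters, temp = 30, 4, 200, 1.0
      pos = rng.uniform(-5, 5, (n, dim))
      vel = np.zeros((n, dim))
      pbest, pbest_val = pos.copy(), f(pos)
      gbest = pbest[np.argmin(pbest_val)]

      for it in range(iters):
          r1, r2 = rng.random((n, dim)), rng.random((n, dim))
          vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
          pos = np.clip(pos + vel, -5, 5)
          val = f(pos)
          # SA-style acceptance: a worse personal best may still be accepted with
          # probability exp(-delta/T), helping particles escape local minima.
          delta = val - pbest_val
          accept = (delta < 0) | (rng.random(n) < np.exp(-np.maximum(delta, 0.0) / temp))
          pbest[accept], pbest_val[accept] = pos[accept], val[accept]
          gbest = pbest[np.argmin(pbest_val)]
          temp *= 0.98                                     # cooling schedule

      print("best value found:", f(gbest))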

  11. Simulation of a turbofan engine for evaluation of multivariable optimal control concepts. [(computerized simulation)]

    Science.gov (United States)

    Seldner, K.

    1976-01-01

    The development of control systems for jet engines requires a real-time computer simulation. The simulation provides an effective tool for evaluating control concepts and problem areas prior to actual engine testing. The development and use of a real-time simulation of the Pratt and Whitney F100-PW100 turbofan engine are described. The simulation was used in a multi-variable optimal controls research program using linear quadratic regulator theory. The simulation is used to generate linear engine models at selected operating points and to evaluate the control algorithm. To reduce the complexity of the design, it is desirable to reduce the order of the linear model. A technique to reduce the order of the model is discussed. Selected results from the high- and low-order models are compared. The LQR control algorithms can be programmed on a digital computer. This computer will control the engine simulation over the desired flight envelope.

  12. Application of Nontraditional Optimization Techniques for Airfoil Shape Optimization

    Directory of Open Access Journals (Sweden)

    R. Mukesh

    2012-01-01

    Full Text Available The choice of optimization algorithm is one of the most important factors that strongly influence the fidelity of the solution in an aerodynamic shape optimization problem. Nowadays, various optimization methods, such as the genetic algorithm (GA), simulated annealing (SA), and particle swarm optimization (PSO), are widely employed to solve aerodynamic shape optimization problems. In addition to the optimization method, the geometry parameterization is an important factor to be considered during the aerodynamic shape optimization process. The objective of this work is to describe general airfoil geometry using twelve parameters, representing its shape as a polynomial function, and to couple this approach with a flow solution and optimization algorithms. An aerodynamic shape optimization problem is formulated for the NACA 0012 airfoil and solved using the methods of simulated annealing and the genetic algorithm for a 5.0 deg angle of attack. The results show that the simulated annealing optimization scheme is more effective in finding the optimum solution among the various possible solutions. It is also found that the SA shows more exploitation characteristics compared to the GA, which is considered to be the more effective explorer.

  13. Optimizing the HLT Buffer Strategy with Monte Carlo Simulations

    CERN Document Server

    AUTHOR|(CDS)2266763

    2017-01-01

    This project aims to optimize the strategy of utilizing the disk buffer for the High Level Trigger (HLT) of the LHCb experiment with the help of Monte-Carlo simulations. A method is developed, which simulates the Event Filter Farm (EFF) -- a computing cluster for the High Level Trigger -- as a compound of nodes with different performance properties. In this way, the behavior of the computing farm can be analyzed at a deeper level than before. It is demonstrated that the current operating strategy might be improved when data taking is reaching a mid-year scheduled stop or the year-end technical stop. The processing time of the buffered data can be lowered by distributing the detector data according to the processing power of the nodes instead of the relative disk size as long as the occupancy level of the buffer is low enough. Moreover, this ensures that data taken and stored on the buffer at the same time is processed by different nodes nearly simultaneously, which reduces load on the infrastructure.

  14. Optimal Subinterval Selection Approach for Power System Transient Stability Simulation

    Directory of Open Access Journals (Sweden)

    Soobae Kim

    2015-10-01

    Full Text Available Power system transient stability analysis requires an appropriate integration time step to avoid numerical instability as well as to reduce computational demands. For fast system dynamics, which vary more rapidly than what the time step covers, a fraction of the time step, called a subinterval, is used. However, the optimal value of this subinterval is not easily determined because an analysis of the system dynamics might be required. This selection is usually made from engineering experience, and perhaps trial and error. This paper proposes an optimal subinterval selection approach for power system transient stability analysis, which is based on modal analysis using a single machine infinite bus (SMIB) system. Fast system dynamics are identified with the modal analysis, and the SMIB system is used with a focus on fast local modes. An appropriate subinterval time step from the proposed approach can reduce the computational burden and achieve accurate simulation responses as well. The performance of the proposed method is demonstrated with the GSO 37-bus system.
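
    A toy version of the idea above: linearize a single-machine infinite-bus (SMIB) system, read off its fastest local mode, and derive a subinterval from it. The machine constants and the "20 points per oscillation" rule of thumb are illustrative assumptions, not the paper's procedure.

      import numpy as np

      H, D, f0 = 3.0, 0.01, 60.0                      # inertia (s), damping (pu), system frequency (Hz)
      M = 2.0 * H / (2.0 * np.pi * f0)                # per-unit inertia coefficient
      Pmax, delta0 = 2.0, np.deg2rad(30.0)
      K = Pmax * np.cos(delta0)                       # synchronizing-power coefficient

      A = np.array([[0.0, 1.0],
                    [-K / M, -D / M]])                # linearized swing dynamics
      eigs = np.linalg.eigvals(A)
      f_fast = np.max(np.abs(eigs.imag)) / (2.0 * np.pi)

      subinterval = 1.0 / (f_fast * 20.0)             # ~20 points per fastest oscillation (assumed)
      print(f"fastest mode {f_fast:.2f} Hz -> subinterval {subinterval*1000:.1f} ms")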

  15. Control Parameters Optimization Based on Co-Simulation of a Mechatronic System for an UA-Based Two-Axis Inertially Stabilized Platform

    Directory of Open Access Journals (Sweden)

    Xiangyang Zhou

    2015-08-01

    Full Text Available This paper presents a method based on co-simulation of a mechatronic system to optimize the control parameters of a two-axis inertially stabilized platform system (ISP) applied in an unmanned airship (UA), by which high control performance and reliability of the ISP system are achieved. First, a three-dimensional structural model of the ISP is built using the three-dimensional parametric CAD software SOLIDWORKS®; then, to analyze the system’s kinematic and dynamic characteristics under operating conditions, dynamics modeling is conducted using the multi-body dynamics software ADAMS™, and the main dynamic parameters such as displacement, velocity, acceleration and reaction curves are obtained through simulation analysis. Then, those dynamic parameters were input into the established MATLAB® SIMULINK® controller to simulate and test the performance of the control system. By these means, the ISP control parameters are optimized. To verify the methods, experiments were carried out by applying the optimized parameters to the control system of a two-axis ISP. The results show that the co-simulation using virtual prototyping (VP) is effective in obtaining optimized ISP control parameters, eventually leading to high ISP control performance.

  16. Simulation Optimization of Search and Rescue in Disaster Relief Based on Distributed Auction Mechanism

    Directory of Open Access Journals (Sweden)

    Jian Tang

    2017-11-01

    Full Text Available In this paper, we optimize search and rescue (SAR) in disaster relief through agent-based simulation. We simulate rescue teams’ search behaviors with improved truncated Lévy walks. Then we propose a cooperative rescue plan based on a distributed auction mechanism, and illustrate it with the case of landslide disaster relief. The simulation is conducted in three scenarios: “fatal”, “serious” and “normal”. Compared with the non-cooperative rescue plan, the proposed rescue plan would increase the victims’ relative survival probability by 7–15%, increase the ratio of survivors getting rescued by 5.3–12.9%, and decrease the average elapsed time for one site to get rescued by 16.6–21.6%. The robustness analysis shows that the search radius affects the rescue efficiency significantly, while the scope of cooperation does not. The sensitivity analysis shows that two parameters, the time limit for completing rescue operations at one buried site and the maximum turning angle for the next step, both have a great influence on rescue efficiency, and there exist optimal values for both of them with respect to rescue efficiency.
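
    A toy one-shot auction round in the spirit of the distributed auction mechanism above: each buried site is awarded to the team bidding the lowest estimated arrival time. The positions, walking speed, and the absence of capacity or time-limit constraints are all simplifying assumptions made for the sketch.

      import numpy as np

      rng = np.random.default_rng(5)
      teams = rng.uniform(0, 1000, (4, 2))           # team positions (m), hypothetical
      sites = rng.uniform(0, 1000, (6, 2))           # buried-site positions (m), hypothetical
      speed = 1.2                                    # walking speed on debris (m/s), assumed

      assignment = {}
      for s, site in enumerate(sites):
          bids = np.linalg.norm(teams - site, axis=1) / speed   # bid = estimated travel time (s)
          winner = int(np.argmin(bids))
          assignment[s] = (winner, float(bids[winner]))

      for s, (team, eta) in assignment.items():
          print(f"site {s}: team {team}, ETA {eta:.0f} s")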

  17. Green Infrastructure Simulation and Optimization to Achieve Combined Sewer Overflow Reductions in Philadelphia's Mill Creek Sewershed

    Science.gov (United States)

    Cohen, J. S.; McGarity, A. E.

    2017-12-01

    The ability for mass deployment of green stormwater infrastructure (GSI) to intercept significant amounts of urban runoff has the potential to reduce the frequency of a city's combined sewer overflows (CSOs). This study was performed to aid in the Overbrook Environmental Education Center's vision of applying this concept to create a Green Commercial Corridor in Philadelphia's Overbrook Neighborhood, which lies in the Mill Creek Sewershed. In an attempt to further implement physical and social reality into previous work using simulation-optimization techniques to produce GSI deployment strategies (McGarity, et al., 2016), this study's models incorporated land use types and a specific neighborhood in the sewershed. The low impact development (LID) feature in EPA's Storm Water Management Model (SWMM) was used to simulate various geographic configurations of GSI in Overbrook. The results from these simulations were used to obtain formulas describing the annual CSO reduction in the sewershed based on the deployed GSI practices. These non-linear hydrologic response formulas were then implemented into the Storm Water Investment Strategy Evaluation (StormWISE) model (McGarity, 2012), a constrained optimization model used to develop optimal stormwater management practices on the watershed scale. By saturating the avenue with GSI, not only will CSOs from the sewershed into the Schuylkill River be reduced, but ancillary social and economic benefits of GSI will also be achieved. The effectiveness of these ancillary benefits changes based on the type of GSI practice and the type of land use in which the GSI is implemented. Thus, the simulation and optimization processes were repeated while delimiting GSI deployment by land use (residential, commercial, industrial, and transportation). The results give a GSI deployment strategy that achieves desired annual CSO reductions at a minimum cost based on the locations of tree trenches, rain gardens, and rain barrels in specified land

  18. Multi-Objective Patch Optimization with Integrated Kinematic Draping Simulation for Continuous–Discontinuous Fiber-Reinforced Composite Structures

    Directory of Open Access Journals (Sweden)

    Benedikt Fengler

    2018-03-01

    Full Text Available Discontinuous fiber-reinforced polymers (DiCoFRP) in combination with local continuous fiber-reinforced polymers (CoFRP) provide both a high design freedom and high weight-specific mechanical properties. For the optimization of CoFRP patches on complexly shaped DiCoFRP structures, an optimization strategy is needed which considers manufacturing constraints during the optimization procedure. Therefore, a genetic algorithm is combined with a kinematic draping simulation. To determine the optimal patch position with regard to structural performance and overall material consumption, a multi-objective optimization strategy is used. The resulting Pareto front and a corresponding heat-map of the patch position are useful tools for the design engineer to choose the right amount of reinforcement. The proposed patch optimization procedure is applied to two example structures and the effect of different optimization setups is demonstrated.

  19. On-orbit servicing system assessment and optimization methods based on lifecycle simulation under mixed aleatory and epistemic uncertainties

    Science.gov (United States)

    Yao, Wen; Chen, Xiaoqian; Huang, Yiyong; van Tooren, Michel

    2013-06-01

    To assess the on-orbit servicing (OOS) paradigm and optimize its utilities by taking advantage of its inherent flexibility and responsiveness, the OOS system assessment and optimization methods based on lifecycle simulation under uncertainties are studied. The uncertainty sources considered in this paper include both the aleatory (random launch/OOS operation failure and on-orbit component failure) and the epistemic (the unknown trend of the end-used market price) types. Firstly, the lifecycle simulation under uncertainties is discussed. The chronological flowchart is presented. The cost and benefit models are established, and the uncertainties thereof are modeled. The dynamic programming method to make optimal decision in face of the uncertain events is introduced. Secondly, the method to analyze the propagation effects of the uncertainties on the OOS utilities is studied. With combined probability and evidence theory, a Monte Carlo lifecycle Simulation based Unified Uncertainty Analysis (MCS-UUA) approach is proposed, based on which the OOS utility assessment tool under mixed uncertainties is developed. Thirdly, to further optimize the OOS system under mixed uncertainties, the reliability-based optimization (RBO) method is studied. To alleviate the computational burden of the traditional RBO method which involves nested optimum search and uncertainty analysis, the framework of Sequential Optimization and Mixed Uncertainty Analysis (SOMUA) is employed to integrate MCS-UUA, and the RBO algorithm SOMUA-MCS is developed. Fourthly, a case study on the OOS system for a hypothetical GEO commercial communication satellite is investigated with the proposed assessment tool. Furthermore, the OOS system is optimized with SOMUA-MCS. Lastly, some conclusions are given and future research prospects are highlighted.

  20. Method to Simulate and Optimize the Operating Conditions of a Solar-Fuel Heat Supply System

    International Nuclear Information System (INIS)

    Anarbaev, A.; Zakhidov, R.

    2011-01-01

    The problem of how to determine the optimal parameters for the solar part of a plant with respect to boiler equipment efficiency is examined. The most efficient condensing boilers are chosen for simulation. (authors)

  1. Controlling chaos based on an adaptive adjustment mechanism

    International Nuclear Information System (INIS)

    Zheng Yongai

    2006-01-01

    In this paper, we extend the ideas and techniques developed by Huang [Huang W. Stabilizing nonlinear dynamical systems by an adaptive adjustment mechanism. Phys Rev E 2000;61:R1012-5] for controlling discrete-time chaotic systems by an adaptive adjustment mechanism to continuous-time chaotic systems. Two control approaches, namely the adaptive adjustment mechanism (AAM) and the modified adaptive adjustment mechanism (MAAM), are investigated. In both cases, sufficient conditions for the stabilization of chaotic systems are given analytically. Simulation results on the Chen chaotic system verify the effectiveness of the proposed techniques.

  2. Simulation and optimization of stable isotope 18O separation by cascade distillation

    International Nuclear Information System (INIS)

    Jiang Yongyue; Chen Yuyan; Qin Chuanjiang; Liu Yan; Gu Hongsen

    2011-01-01

    The research was carried out starting from a design plan of four cascaded towers. First, the experiments were designed using the uniform design method. Then, a regression formula was obtained by the method of binomial stepwise regression. Finally, the optimal operating conditions were obtained using a genetic algorithm. Considering comprehensively the product draw rates, the feed rate and the flow rates between the cascaded columns, conclusions were reached on their impact on the abundance of the isotope 18O. Regression formulas were obtained relating the abundance of the isotope 18O, and the corresponding heat consumption, to the four operating variables. In addition, single-factor response diagrams of the four factors are shown. The results showed that this simulation and optimization method can be applied to 18O industrial design and could be widely adopted in traditional distillation processes to realize optimized design. (authors)

  3. Directional variance adjustment: bias reduction in covariance matrices based on factor analysis with an application to portfolio optimization.

    Science.gov (United States)

    Bartz, Daniel; Hatrick, Kerr; Hesse, Christian W; Müller, Klaus-Robert; Lemm, Steven

    2013-01-01

    Robust and reliable covariance estimates play a decisive role in financial and many other applications. An important class of estimators is based on factor models. Here, we show by extensive Monte Carlo simulations that covariance matrices derived from the statistical Factor Analysis model exhibit a systematic error, which is similar to the well-known systematic error of the spectrum of the sample covariance matrix. Moreover, we introduce the Directional Variance Adjustment (DVA) algorithm, which diminishes the systematic error. In a thorough empirical study for the US, European, and Hong Kong stock market we show that our proposed method leads to improved portfolio allocation.
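
    A sketch of the kind of factor-structured covariance estimate this record starts from, here using a simple PCA surrogate for statistical factor analysis; the Directional Variance Adjustment correction itself is not reproduced. All data are synthetic.

      import numpy as np

      rng = np.random.default_rng(6)
      T, N, k = 500, 50, 3
      true_B = rng.normal(size=(N, k))
      factors = rng.normal(size=(T, k))
      returns = factors @ true_B.T + rng.normal(scale=0.5, size=(T, N))

      # Estimate loadings from the top-k eigenpairs of the sample covariance
      # (a crude factor-analysis stand-in), then rebuild a structured covariance.
      sample_cov = np.cov(returns, rowvar=False)
      vals, vecs = np.linalg.eigh(sample_cov)
      B = vecs[:, -k:] * np.sqrt(vals[-k:])
      specific = np.clip(np.diag(sample_cov) - np.sum(B**2, axis=1), 1e-6, None)
      factor_cov = B @ B.T + np.diag(specific)

      print("condition numbers: sample", np.linalg.cond(sample_cov),
            "factor", np.linalg.cond(factor_cov))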

  4. Optimized broad-histogram simulations for strong first-order phase transitions: droplet transitions in the large-Q Potts model

    International Nuclear Information System (INIS)

    Bauer, Bela; Troyer, Matthias; Gull, Emanuel; Trebst, Simon; Huse, David A

    2010-01-01

    The numerical simulation of strongly first-order phase transitions has remained a notoriously difficult problem even for classical systems due to the exponentially suppressed (thermal) equilibration in the vicinity of such a transition. In the absence of efficient update techniques, a common approach for improving equilibration in Monte Carlo simulations is broadening the sampled statistical ensemble beyond the bimodal distribution of the canonical ensemble. Here we show how a recently developed feedback algorithm can systematically optimize such broad-histogram ensembles and significantly speed up equilibration in comparison with other extended ensemble techniques such as flat-histogram, multicanonical and Wang–Landau sampling. We simulate, as a prototypical example of a strong first-order transition, the two-dimensional Potts model with up to Q = 250 different states in large systems. The optimized histogram develops a distinct multi-peak structure, thereby resolving entropic barriers and their associated phase transitions in the phase coexistence region—such as droplet nucleation and annihilation, and droplet–strip transitions for systems with periodic boundary conditions. We characterize the efficiency of the optimized histogram sampling by measuring round-trip times τ(N, Q) across the phase transition for samples comprised of N spins. While we find power-law scaling of τ versus N for small Q ≲ 50 and N ≲ 40², we observe a crossover to exponential scaling for larger Q. These results demonstrate that despite the ensemble optimization, broad-histogram simulations cannot fully eliminate the supercritical slowing down at strongly first-order transitions

  5. Optimized broad-histogram simulations for strong first-order phase transitions: droplet transitions in the large-Q Potts model

    Science.gov (United States)

    Bauer, Bela; Gull, Emanuel; Trebst, Simon; Troyer, Matthias; Huse, David A.

    2010-01-01

    The numerical simulation of strongly first-order phase transitions has remained a notoriously difficult problem even for classical systems due to the exponentially suppressed (thermal) equilibration in the vicinity of such a transition. In the absence of efficient update techniques, a common approach for improving equilibration in Monte Carlo simulations is broadening the sampled statistical ensemble beyond the bimodal distribution of the canonical ensemble. Here we show how a recently developed feedback algorithm can systematically optimize such broad-histogram ensembles and significantly speed up equilibration in comparison with other extended ensemble techniques such as flat-histogram, multicanonical and Wang-Landau sampling. We simulate, as a prototypical example of a strong first-order transition, the two-dimensional Potts model with up to Q = 250 different states in large systems. The optimized histogram develops a distinct multi-peak structure, thereby resolving entropic barriers and their associated phase transitions in the phase coexistence region—such as droplet nucleation and annihilation, and droplet-strip transitions for systems with periodic boundary conditions. We characterize the efficiency of the optimized histogram sampling by measuring round-trip times τ(N, Q) across the phase transition for samples comprised of N spins. While we find power-law scaling of τ versus N for small Q ≲ 50 and N ≲ 40², we observe a crossover to exponential scaling for larger Q. These results demonstrate that despite the ensemble optimization, broad-histogram simulations cannot fully eliminate the supercritical slowing down at strongly first-order transitions.

  6. Automatic Parameter Tuning for the Morpheus Vehicle Using Particle Swarm Optimization

    Science.gov (United States)

    Birge, B.

    2013-01-01

    A high-fidelity simulation using a PC-based Trick framework has been developed for Johnson Space Center's Morpheus test bed flight vehicle. There is an iterative development loop of refining and testing the hardware, refining the software, comparing the software simulation to hardware performance, and adjusting either or both the hardware and the simulation to extract the best performance from the hardware as well as the most realistic representation of the hardware from the software. A Particle Swarm Optimization (PSO) based technique has been developed that increases the speed and accuracy of the iterative development cycle. Parameters in software can be automatically tuned to make the simulation match real-world subsystem data from test flights. Special considerations for scale, linearity, and discontinuities can be all but ignored with this technique, allowing fast turnaround both for simulation tune-up to match hardware changes as well as during the test and validation phase to help identify hardware issues. Software models with insufficient control authority to match hardware test data can be immediately identified, and using this technique requires very little to no specialized knowledge of optimization, freeing model developers to concentrate on spacecraft engineering. Integration of the PSO into the Morpheus development cycle will be discussed, as well as a case study highlighting the tool's effectiveness.

  7. Optimization of GATE and PHITS Monte Carlo code parameters for uniform scanning proton beam based on simulation with FLUKA general-purpose code

    Energy Technology Data Exchange (ETDEWEB)

    Kurosu, Keita [Department of Medical Physics and Engineering, Osaka University Graduate School of Medicine, Suita, Osaka 565-0871 (Japan); Department of Radiation Oncology, Osaka University Graduate School of Medicine, Suita, Osaka 565-0871 (Japan); Takashina, Masaaki; Koizumi, Masahiko [Department of Medical Physics and Engineering, Osaka University Graduate School of Medicine, Suita, Osaka 565-0871 (Japan); Das, Indra J. [Department of Radiation Oncology, Indiana University School of Medicine, Indianapolis, IN 46202 (United States); Moskvin, Vadim P., E-mail: vadim.p.moskvin@gmail.com [Department of Radiation Oncology, Indiana University School of Medicine, Indianapolis, IN 46202 (United States)

    2014-10-01

    Although three general-purpose Monte Carlo (MC) simulation tools: Geant4, FLUKA and PHITS have been used extensively, differences in calculation results have been reported. The major causes are the implementation of the physical model, preset value of the ionization potential or definition of the maximum step size. In order to achieve artifact free MC simulation, an optimized parameters list for each simulation system is required. Several authors have already proposed the optimized lists, but those studies were performed with a simple system such as only a water phantom. Since particle beams have a transport, interaction and electromagnetic processes during beam delivery, establishment of an optimized parameters-list for whole beam delivery system is therefore of major importance. The purpose of this study was to determine the optimized parameters list for GATE and PHITS using proton treatment nozzle computational model. The simulation was performed with the broad scanning proton beam. The influences of the customizing parameters on the percentage depth dose (PDD) profile and the proton range were investigated by comparison with the result of FLUKA, and then the optimal parameters were determined. The PDD profile and the proton range obtained from our optimized parameters list showed different characteristics from the results obtained with simple system. This led to the conclusion that the physical model, particle transport mechanics and different geometry-based descriptions need accurate customization in planning computational experiments for artifact-free MC simulation.

  8. Optimization of GATE and PHITS Monte Carlo code parameters for uniform scanning proton beam based on simulation with FLUKA general-purpose code

    International Nuclear Information System (INIS)

    Kurosu, Keita; Takashina, Masaaki; Koizumi, Masahiko; Das, Indra J.; Moskvin, Vadim P.

    2014-01-01

    Although three general-purpose Monte Carlo (MC) simulation tools: Geant4, FLUKA and PHITS have been used extensively, differences in calculation results have been reported. The major causes are the implementation of the physical model, preset value of the ionization potential or definition of the maximum step size. In order to achieve artifact free MC simulation, an optimized parameters list for each simulation system is required. Several authors have already proposed the optimized lists, but those studies were performed with a simple system such as only a water phantom. Since particle beams have a transport, interaction and electromagnetic processes during beam delivery, establishment of an optimized parameters-list for whole beam delivery system is therefore of major importance. The purpose of this study was to determine the optimized parameters list for GATE and PHITS using proton treatment nozzle computational model. The simulation was performed with the broad scanning proton beam. The influences of the customizing parameters on the percentage depth dose (PDD) profile and the proton range were investigated by comparison with the result of FLUKA, and then the optimal parameters were determined. The PDD profile and the proton range obtained from our optimized parameters list showed different characteristics from the results obtained with simple system. This led to the conclusion that the physical model, particle transport mechanics and different geometry-based descriptions need accurate customization in planning computational experiments for artifact-free MC simulation

  9. Optimism and Adaptation to Multiple Sclerosis: What Does Optimism Mean?

    NARCIS (Netherlands)

    Fournier, M.; Ridder, D.T.D. de; Bensing, J.

    1999-01-01

    The aim of the present study was to determine the meaning of optimism by explicating the dimensions underlying the notion and their links to adjustment to MS. Seventy-three patients responded to optimism questionnaires (i.e., the LOT and the Generalized Self-Efficacy Scale) and outcome questionnaires.

  10. Optimism and adaptation to multiple sclerosis: what does optimism mean?

    NARCIS (Netherlands)

    Fournier, M.; Ridder, D. de; Bensing, J.

    1999-01-01

    The aim of the present study was to determine the meaning of optimism by explicating the dimensions underlying the notion and their links to adjustment to MS. Seventy-three patients responded to optimism questionnaires (i.e., the LOT and the Generalized Self-Efficacy Scale) and outcome questionnaires.

  11. Cross Layer Optimization and Simulation of Smart Grid Home Area Network

    Directory of Open Access Journals (Sweden)

    Lipi K. Chhaya

    2018-01-01

    Full Text Available An electrical “Grid” is a network that carries electricity from power plants to customer premises. A Smart Grid is an assimilation of electrical and communication infrastructure, characterized by bidirectional flow of electricity and information. It is a complex network with a hierarchical architecture, and realization of a complete Smart Grid architecture necessitates a diverse set of communication standards and protocols. Communication network protocols are engineered and established on the basis of a layered approach, in which each layer is designed to provide an explicit functionality in association with the other layers. The layered approach can be modified with a cross-layer approach for performance enhancement. The complex and heterogeneous architecture of the Smart Grid demands a departure from this primitive approach and the development of an innovative one. This paper describes a joint, or cross-layer, optimization of a Smart Grid home/building area network based on the IEEE 802.11 standard using the RIVERBED OPNET network design and simulation tool. Network performance can be improved by selecting various parameters pertaining to different layers. Simulation results are obtained for parameters such as WLAN throughput, delay, media access delay, and retransmission attempts. The graphical results show that the parameters have divergent effects on network performance. For example, frame aggregation decreases overall delay, but network throughput is also reduced; to overcome this effect, frame aggregation is used in combination with RTS and fragmentation mechanisms, and the results show that this combination notably improves network performance. A higher buffer size considerably increases throughput but also increases delay, so the choice of an optimum buffer size is essential for network performance optimization. Parameter optimization significantly enhances the performance of a designed network. This paper is expected to serve

  12. Using Green's Functions to initialize and adjust a global, eddying ocean biogeochemistry general circulation model

    Science.gov (United States)

    Brix, H.; Menemenlis, D.; Hill, C.; Dutkiewicz, S.; Jahn, O.; Wang, D.; Bowman, K.; Zhang, H.

    2015-11-01

    The NASA Carbon Monitoring System (CMS) Flux Project aims to attribute changes in the atmospheric accumulation of carbon dioxide to spatially resolved fluxes by utilizing the full suite of NASA data, models, and assimilation capabilities. For the oceanic part of this project, we introduce ECCO2-Darwin, a new ocean biogeochemistry general circulation model based on combining the following pre-existing components: (i) a full-depth, eddying, global-ocean configuration of the Massachusetts Institute of Technology general circulation model (MITgcm), (ii) an adjoint-method-based estimate of ocean circulation from the Estimating the Circulation and Climate of the Ocean, Phase II (ECCO2) project, (iii) the MIT ecosystem model "Darwin", and (iv) a marine carbon chemistry model. Air-sea gas exchange coefficients and initial conditions of dissolved inorganic carbon, alkalinity, and oxygen are adjusted using a Green's Functions approach in order to optimize modeled air-sea CO2 fluxes. Data constraints include observations of carbon dioxide partial pressure (pCO2) for 2009-2010, global air-sea CO2 flux estimates, and the seasonal cycle of the Takahashi et al. (2009) Atlas. The model sensitivity experiments (or Green's Functions) include simulations that start from different initial conditions as well as experiments that perturb air-sea gas exchange parameters and the ratio of particulate inorganic to organic carbon. The Green's Functions approach yields a linear combination of these sensitivity experiments that minimizes model-data differences. The resulting initial conditions and gas exchange coefficients are then used to integrate the ECCO2-Darwin model forward. Despite the small number (six) of control parameters, the adjusted simulation is significantly closer to the data constraints (37% cost function reduction, i.e., reduction in the model-data difference, relative to the baseline simulation) and to independent observations (e.g., alkalinity). The adjusted air-sea gas
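
    The estimation step of a Green's Functions approach can be sketched as an ordinary least-squares problem. In the toy code below, baseline_misfit is the model-minus-data vector of the unperturbed run and each column of G is the change in that misfit produced by one sensitivity experiment; the observational weighting and the specific CMS-Flux control parameters are omitted, so this is an illustrative simplification rather than the ECCO2-Darwin procedure itself.

        import numpy as np

        def greens_function_update(baseline_misfit, G):
            """Least-squares weights eta minimizing || baseline_misfit + G @ eta ||^2."""
            eta, *_ = np.linalg.lstsq(G, -baseline_misfit, rcond=None)
            adjusted = baseline_misfit + G @ eta
            cost_reduction = 1.0 - np.linalg.norm(adjusted) ** 2 / np.linalg.norm(baseline_misfit) ** 2
            return eta, cost_reduction

        # eta then scales the perturbations applied to the initial conditions and gas exchange
        # coefficients before the adjusted model is integrated forward.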

  13. Annealing evolutionary stochastic approximation Monte Carlo for global optimization

    KAUST Repository

    Liang, Faming

    2010-04-08

    In this paper, we propose a new algorithm, the so-called annealing evolutionary stochastic approximation Monte Carlo (AESAMC) algorithm, as a general optimization technique, and study its convergence. AESAMC possesses a self-adjusting mechanism whose target distribution can be adapted at each iteration according to the current samples. Thus, AESAMC falls into the class of adaptive Monte Carlo methods. This mechanism also makes AESAMC less prone to becoming trapped in local energy minima than nonadaptive MCMC algorithms. Under mild conditions, we show that AESAMC can converge weakly toward a neighboring set of global minima in the space of energy. AESAMC is tested on multiple optimization problems. The numerical results indicate that AESAMC can potentially outperform simulated annealing, the genetic algorithm, annealing stochastic approximation Monte Carlo, and some other metaheuristics in function optimization. © 2010 Springer Science+Business Media, LLC.

  14. Adjustment model of thermoluminescence experimental data

    International Nuclear Information System (INIS)

    Moreno y Moreno, A.; Moreno B, A.

    2002-01-01

    This model fits thermoluminescence experimental data according to the equation I(T) = Σ_i a_i·exp(-(T - c_i)²/b_i), where a_i, b_i and c_i are the parameters of the i-th peak fitted to a Gaussian curve. The curve adjustments can be performed manually or analytically using the macro function and the Solver.xla add-in previously installed in the computer system. This work shows: 1. Experimental data from a LiF glow curve obtained from the Physics Institute of UNAM, for which the data adjustment model is operated in macro form. 2. A four-peak LiF curve based on Harshaw data, simulated in Microsoft Excel and discussed in previous works, as a reference without macros. (Author)
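
    A minimal re-implementation of the peak-fitting step, assuming the Gaussian peak shape stated in the abstract; the data arrays and initial guesses are placeholders rather than the LiF values used in the work, and SciPy's curve_fit stands in for the Excel Solver.

        import numpy as np
        from scipy.optimize import curve_fit

        def glow_curve(T, *params):
            """Sum of Gaussian peaks; params = (a1, b1, c1, a2, b2, c2, ...)."""
            I = np.zeros_like(T, dtype=float)
            for a, b, c in zip(params[0::3], params[1::3], params[2::3]):
                I += a * np.exp(-((T - c) ** 2) / b)
            return I

        # T_data, I_data: measured temperature and TL intensity arrays (placeholders)
        # p0: one (amplitude, width, centre) triple per expected peak, e.g. 4 peaks -> 12 values
        # popt, pcov = curve_fit(glow_curve, T_data, I_data, p0=p0)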

  15. A study on optimal control of the aero-propulsion system acceleration process under the supersonic state

    Directory of Open Access Journals (Sweden)

    Fengyong Sun

    2017-04-01

    Full Text Available In order to solve the aero-propulsion system acceleration optimization problem, the necessity of inlet control is discussed, and a fully new acceleration process control design for the aero-propulsion system, covering the inlet, engine, and nozzle, is proposed in this paper. In the proposed propulsion system control scheme, the inlet, engine, and nozzle are adjusted simultaneously through the FSQP method. To implement the control scheme design, an aero-propulsion system component-level model is built to simulate the inlet working performance and the matching problems between the inlet and the engine, and a stabilizing inlet control scheme is designed to solve the inlet control problems. In the optimal control of the acceleration process, the inlet is an emphasized control unit in the optimal acceleration control system. Two inlet control patterns are discussed in the simulation. The simulation results prove that by taking the inlet ramp angle as an active control variable, instead of modulating it passively, the acceleration performance can be clearly enhanced, and the acceleration objectives can be reached with an acceleration time about 5% faster.

  16. Dynamic Simulation and Optimization of Nuclear Hydrogen Production Systems

    Energy Technology Data Exchange (ETDEWEB)

    Paul I. Barton; Mujid S. Kaximi; Georgios Bollas; Patricio Ramirez Munoz

    2009-07-31

    This project is part of a research effort to design a hydrogen plant and its interface with a nuclear reactor. The project developed a dynamic modeling, simulation and optimization environment for nuclear hydrogen production systems. A hybrid discrete/continuous model captures the continuous dynamics of the nuclear plant, the hydrogen plant, and their interface, along with discrete events such as major upsets. The hybrid model makes use of accurate thermodynamic sub-models for the description of phase and reaction equilibria in the thermochemical reactor. Use of the detailed thermodynamic models allows researchers to examine the process in detail and to have confidence in the accuracy of the property package they use.

  17. Simulation-Based Design for Wearable Robotic Systems: An Optimization Framework for Enhancing a Standing Long Jump.

    Science.gov (United States)

    Ong, Carmichael F; Hicks, Jennifer L; Delp, Scott L

    2016-05-01

    Technologies that augment human performance are the focus of intensive research and development, driven by advances in wearable robotic systems. Success has been limited by the challenge of understanding human-robot interaction. To address this challenge, we developed an optimization framework to synthesize a realistic human standing long jump and used the framework to explore how simulated wearable robotic devices might enhance jump performance. A planar, five-segment, seven-degree-of-freedom model with physiological torque actuators, which have variable torque capacity depending on joint position and velocity, was used to represent human musculoskeletal dynamics. An active augmentation device was modeled as a torque actuator that could apply a single pulse of up to 100 Nm of extension torque. A passive design was modeled as rotational springs about each lower limb joint. Dynamic optimization searched for physiological and device actuation patterns to maximize jump distance. Optimization of the nominal case yielded a 2.27 m jump that captured salient kinematic and kinetic features of human jumps. When the active device was added to the ankle, knee, or hip, jump distance increased to between 2.49 and 2.52 m. Active augmentation of all three joints increased the jump distance to 3.10 m. The passive design increased jump distance to 3.32 m by adding torques of 135, 365, and 297 Nm to the ankle, knee, and hip, respectively. Dynamic optimization can be used to simulate a standing long jump and investigate human-robot interaction. Simulation can aid in the design of performance-enhancing technologies.

  18. Predictive simulations and optimization of nanowire field-effect PSA sensors including screening

    KAUST Repository

    Baumgartner, Stefan; Heitzinger, Clemens; Vacic, Aleksandar; Reed, Mark A

    2013-01-01

    We apply our self-consistent PDE model for the electrical response of field-effect sensors to the 3D simulation of nanowire PSA (prostate-specific antigen) sensors. The charge concentration in the biofunctionalized boundary layer at the semiconductor-electrolyte interface is calculated using the propka algorithm, and the screening of the biomolecules by the free ions in the liquid is modeled by a sensitivity factor. This comprehensive approach yields excellent agreement with experimental current-voltage characteristics without any fitting parameters. Having verified the numerical model in this manner, we study the sensitivity of nanowire PSA sensors by changing device parameters, making it possible to optimize the devices and revealing the attributes of the optimal field-effect sensor. © 2013 IOP Publishing Ltd.

  19. Predictive simulations and optimization of nanowire field-effect PSA sensors including screening

    KAUST Repository

    Baumgartner, Stefan

    2013-05-03

    We apply our self-consistent PDE model for the electrical response of field-effect sensors to the 3D simulation of nanowire PSA (prostate-specific antigen) sensors. The charge concentration in the biofunctionalized boundary layer at the semiconductor-electrolyte interface is calculated using the propka algorithm, and the screening of the biomolecules by the free ions in the liquid is modeled by a sensitivity factor. This comprehensive approach yields excellent agreement with experimental current-voltage characteristics without any fitting parameters. Having verified the numerical model in this manner, we study the sensitivity of nanowire PSA sensors by changing device parameters, making it possible to optimize the devices and revealing the attributes of the optimal field-effect sensor. © 2013 IOP Publishing Ltd.

  20. Real-Time Optimization for use in a Control Allocation System to Recover from Pilot Induced Oscillations

    Science.gov (United States)

    Leonard, Michael W.

    2013-01-01

    Integration of the Control Allocation technique to recover from Pilot Induced Oscillations (CAPIO) System into the control system of a Short Takeoff and Landing Mobility Concept Vehicle simulation presents a challenge because the CAPIO formulation requires that constrained optimization problems be solved at the controller operating frequency. We present a solution that utilizes a modified version of the well-known L-BFGS-B solver. Despite the iterative nature of the solver, the method is seen to converge in real time with sufficient reliability to support three weeks of piloted runs at the NASA Ames Vertical Motion Simulator (VMS) facility. The results of the optimization are seen to be excellent in the vast majority of real-time frames. Deficiencies in the quality of the results in some frames are shown to be improvable with simple termination criteria adjustments, though more real-time optimization iterations would be required.
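
    The per-frame computation described above can be illustrated with SciPy's standard L-BFGS-B solver (not the modified real-time variant used in the report): a small bound-constrained allocation problem is solved with a hard cap on iterations so each control frame finishes on time, warm-started from the previous frame. The objective, bounds and iteration limit are illustrative assumptions.

        import numpy as np
        from scipy.optimize import minimize

        def allocate(v_desired, B, u_prev, bounds, max_iter=10):
            """Find actuator commands u such that B @ u approximates the desired pseudo-command."""
            def objective(u):
                r = B @ u - v_desired
                return 0.5 * float(r @ r)

            res = minimize(objective, u_prev, method="L-BFGS-B",
                           bounds=bounds, options={"maxiter": max_iter})
            return res.x  # warm start for the next frame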

  1. Simulation-optimization model of reservoir operation based on target storage curves

    Directory of Open Access Journals (Sweden)

    Hong-bin Fang

    2014-10-01

    Full Text Available This paper proposes a new storage allocation rule based on target storage curves. Joint operating rules are also proposed to solve the operation problems of a multi-reservoir system with joint demands and water transfer-supply projects. The joint operating rules include a water diversion rule to determine the amount of diverted water in a period, a hedging rule based on an aggregated reservoir to determine the total release from the system, and a storage allocation rule to specify the release from each reservoir. A simulation-optimization model was established to optimize the key points of the water diversion curves, the hedging rule curves, and the target storage curves using the improved particle swarm optimization (IPSO) algorithm. The multi-reservoir water supply system located in Liaoning Province, China, including a water transfer-supply project, was employed as a case study to verify the effectiveness of the proposed joint operating rules and target storage curves. The results indicate that the proposed operating rules are suitable for the complex system. The storage allocation rule based on target storage curves shows an improved performance with regard to system storage distribution.

  2. Simulation and optimization methods for logistics pooling in the outbound supply chain

    OpenAIRE

    Jesus Gonzalez-Feliu; Carlos Peris-Pla; Dina Rakotonarivo

    2010-01-01

    International audience; Logistics pooling and collaborative transportation systems are relatively new concepts in logistics research, but are very popular in practice. This communication proposes a conceptual framework for logistics and transportation pooling systems, as well as a simulation method for strategic planning optimization. The method is based on a two-step constructive heuristic in order to estimate, for large instances, the transportation and storage costs at a macroscopic level. Fou...

  3. Optimizing load transfer in multiwall nanotubes through interwall coupling: Theory and simulation

    International Nuclear Information System (INIS)

    Byrne, E.M.; Letertre, A.; McCarthy, M.A.; Curtin, W.A.; Xia, Z.

    2010-01-01

    An analytical model is developed to determine the length scales over which load is transferred from outer to inner walls of multiwall carbon nanotubes (MWCNTs) as a function of the amount of bonding between walls. The model predicts that the characteristic length for load transfer scales as l ∼ t√(E/μ-bar), where t is the CNT wall spacing, E is the effective wall Young's modulus, and μ-bar is the average interwall shear modulus due to interwall coupling. Molecular dynamics simulations for MWCNTs with up to six walls, and with interwall coupling achieved by interwall sp3 bonding at various densities, provide data against which the model is tested. For interwall bonding having a uniform axial distribution, the analytic and simulation models agree well, showing that continuum mechanics concepts apply down to the atomic scale in this problem. The simulation models show, however, that load transfer is sensitive to natural statistical fluctuations in the spatial distribution of the interwall bonding between pairs of walls, and such fluctuations generally increase the net load transfer length needed to fully load an MWCNT. Optimal load transfer is achieved when bonding is uniformly distributed axially, and all interwall regions have the same shear stiffness, implying a linear decrease in the number of interwall bonds with distance from the outer wall. Optimal load transfer into an n-wall MWCNT is shown to occur over a length of ∼1.5nl. The model can be used to design MWCNTs for structural materials, and to interpret load transfer characteristics deduced from experiments on individual MWCNTs.
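
    A quick worked example of the scaling relation, using assumed values for the wall spacing, wall modulus and interwall shear modulus (not those of the simulations in the paper):

        import math

        t_nm = 0.34          # CNT wall spacing (nm), assumed
        E_GPa = 1000.0       # effective wall Young's modulus (GPa), assumed
        mu_bar_GPa = 1.0     # average interwall shear modulus from sp3 coupling (GPa), assumed

        l_nm = t_nm * math.sqrt(E_GPa / mu_bar_GPa)
        print(f"characteristic load-transfer length: {l_nm:.1f} nm")  # ~10.8 nm for these inputs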

  4. Site utility system optimization with operation adjustment under uncertainty

    International Nuclear Information System (INIS)

    Sun, Li; Gai, Limei; Smith, Robin

    2017-01-01

    Highlights: • Uncertainties are classified into time-based and probability-based uncertain factors. • Multi-period operation and recourse actions deal with the realization of uncertainty. • Operation scheduling is specified at the design stage to deal with uncertainties. • Steam mains superheating affects steam distribution and power generation in the system. - Abstract: Utility systems must satisfy process energy and power demands under varying conditions. The system performance is decided by the system configuration and the operating load of individual equipment: boilers, gas turbines, steam turbines, condensers, and let-down valves. Steam mains conditions, in terms of steam pressures and steam superheating, also play an important role in steam distribution in the system and in power generation by steam expansion in steam turbines, and should be included in the system optimization. Uncertainties such as changes in process steam and power demand and fluctuations in electricity price should be included in the system optimization to eliminate as far as possible the production loss caused by steam and power deficits due to uncertainties. In this paper, uncertain factors are classified into time-based and probability-based uncertain factors, and operation scheduling, comprising multi-period equipment load sharing, redundant equipment start-up, and electricity import to compensate for power deficits, is presented to deal with the occurrence of uncertainties; it is formulated as a multi-period item and a recourse item in the optimization model. There are two case studies in this paper. One illustrates system design, determining the system configuration, equipment selection, and system operation scheduling at the design stage to deal with uncertainties. The other provides operational optimization scenarios for an existing system, especially when the steam superheating varies. The proposed method can provide practical guidance for improving system energy efficiency.

  5. Optimized Assistive Human-Robot Interaction Using Reinforcement Learning.

    Science.gov (United States)

    Modares, Hamidreza; Ranatunga, Isura; Lewis, Frank L; Popa, Dan O

    2016-03-01

    An intelligent human-robot interaction (HRI) system with adjustable robot behavior is presented. The proposed HRI system assists the human operator to perform a given task with minimum workload demands and optimizes the overall human-robot system performance. Motivated by human factor studies, the presented control structure consists of two control loops. First, a robot-specific neuro-adaptive controller is designed in the inner loop to make the unknown nonlinear robot behave like a prescribed robot impedance model as perceived by a human operator. In contrast to existing neural network and adaptive impedance-based control methods, no information of the task performance or the prescribed robot impedance model parameters is required in the inner loop. Then, a task-specific outer-loop controller is designed to find the optimal parameters of the prescribed robot impedance model to adjust the robot's dynamics to the operator skills and minimize the tracking error. The outer loop includes the human operator, the robot, and the task performance details. The problem of finding the optimal parameters of the prescribed robot impedance model is transformed into a linear quadratic regulator (LQR) problem which minimizes the human effort and optimizes the closed-loop behavior of the HRI system for a given task. To obviate the requirement of the knowledge of the human model, integral reinforcement learning is used to solve the given LQR problem. Simulation results on an x - y table and a robot arm, and experimental implementation results on a PR2 robot confirm the suitability of the proposed method.
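
    For readers unfamiliar with the LQR step mentioned above, the sketch below shows the standard model-based Riccati solution of a continuous-time LQR problem. In the paper this step is solved model-free by integral reinforcement learning, so the closed-form solve is only the baseline it approximates, and the matrices A, B, Q, R are assumed to describe a hypothetical linearized human-robot task.

        import numpy as np
        from scipy.linalg import solve_continuous_are

        def lqr_gain(A, B, Q, R):
            """Continuous-time LQR: feedback gain K minimizing the integral of x'Qx + u'Ru."""
            P = solve_continuous_are(A, B, Q, R)
            K = np.linalg.solve(R, B.T @ P)
            return K  # optimal state feedback u = -K @ x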

  6. A Modified Recession Vector Method Based on the Optimization-Simulation Approach to Design Problems of Information Security Systems

    Directory of Open Access Journals (Sweden)

    A. Yu. Bykov

    2015-01-01

    Full Text Available Modern techniques for designing information security systems in automated systems of various purposes involve solving optimization problems when choosing the different elements of a security system. Mathematical programming formulations are used rather often, but in practical problems it is not always possible to specify the objective function and/or the constraints analytically in explicit form. Sometimes, calculating the value of the objective function, or checking the constraints for a candidate solution, can only be reduced to carrying out experiments on a simulation model of the system. Such problems are considered within the optimization-simulation approach and require special-purpose optimization methods that take into account the potentially high computational effort of simulation. The article offers a modified recession vector method, which is used in discrete optimization, to solve such problems. The method is applied to the problem of minimizing the cost of the selected information security tools under a constraint on the maximum possible damage. The cost index is a linear function of the Boolean variables that specify the selected security tools, with the constraint set by an example simulator; constraints can thus actually be specified implicitly, and the validity of a candidate solution is checked using a simulation model of the system. The proposed algorithm takes the features of the objective into account. Its main advantage is that it requires at most m+1 steps, where m is the dimensionality of the required vector of Boolean variables. The algorithm finds a local minimum in the Hamming metric on the discrete space, with a neighborhood radius equal to 1; these statements are proved. The paper presents the results of choosing security tools for the specified basic data.
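
    The flavor of such a simulation-in-the-loop descent can be conveyed by the toy sketch below, which is not the authors' exact recession vector rule: starting from the all-ones selection (every security tool installed), it repeatedly drops the most expensive tool whose removal still passes a simulation-based damage check. The function damage_ok stands for a run of the simulation model and is assumed to be supplied by the caller.

        def greedy_hamming_descent(costs, damage_ok):
            """Minimize total cost of selected tools; neighbours differ in one bit (Hamming radius 1)."""
            x = [1] * len(costs)                 # feasible start: all tools selected
            improved = True
            while improved:
                improved = False
                for i in sorted(range(len(costs)), key=lambda i: -costs[i]):
                    if x[i] == 1:
                        x[i] = 0                 # tentatively drop tool i
                        if damage_ok(x):         # simulation confirms the damage constraint still holds
                            improved = True
                            break                # accept the move and rescan the neighbourhood
                        x[i] = 1                 # revert an infeasible move
            return x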

  7. Simulation of high-resolution X-ray microscopic images for improved alignment

    International Nuclear Information System (INIS)

    Song Xiangxia; Zhang Xiaobo; Liu Gang; Cheng Xianchao; Li Wenjie; Guan Yong; Liu Ying; Xiong Ying; Tian Yangchao

    2011-01-01

    The introduction of precision optical elements to X-ray microscopes necessitates fine realignment to achieve optimal high-resolution imaging. In this paper, we demonstrate a numerical method for simulating image formation that facilitates alignment of the source, condenser, objective lens, and CCD camera. This algorithm, based on ray-tracing and Rayleigh-Sommerfeld diffraction theory, is applied to simulate the X-ray microscope beamline U7A of the National Synchrotron Radiation Laboratory (NSRL). The simulations and imaging experiments show that the algorithm is useful for guiding experimental adjustments. Our alignment simulation method is an essential tool for the transmission X-ray microscope (TXM) with optical elements and may also be useful for the alignment of optical components in other modes of microscopy.

  8. Optimization of a centrifugal compressor impeller using CFD: the choice of simulation model parameters

    Science.gov (United States)

    Neverov, V. V.; Kozhukhov, Y. V.; Yablokov, A. M.; Lebedev, A. A.

    2017-08-01

    Nowadays optimization using computational fluid dynamics (CFD) plays an important role in the design process of turbomachines. However, for successful and productive optimization it is necessary to define the simulation model correctly and rationally. The article deals with the choice of the grid and computational domain parameters for the optimization of centrifugal compressor impellers using computational fluid dynamics. Searching for and applying optimal parameters of the grid model, the computational domain and the solver settings allows engineers to carry out high-accuracy modelling and to use computational capability effectively. The presented research was conducted using the Numeca Fine/Turbo package with the Spalart-Allmaras and Shear Stress Transport turbulence models. Two radial impellers were investigated: a high-pressure impeller at ψT = 0.71 and a low-pressure impeller at ψT = 0.43. The following parameters of the computational model were considered: the location of the inlet and outlet boundaries, the type of mesh topology, the mesh size and the mesh parameter y+. The results of the investigation demonstrate that the choice of optimal parameters leads to a significant reduction of the computational time. Optimal parameters, in comparison with non-optimal but visually similar parameters, can reduce the calculation time by up to a factor of 4. Besides, it is established that some parameters have a major impact on the result of the modelling.

  9. Optimization of cladding parameters for resisting corrosion on low carbon steels using simulated annealing algorithm

    Science.gov (United States)

    Balan, A. V.; Shivasankaran, N.; Magibalan, S.

    2018-04-01

    Low carbon steels used in chemical industries are frequently affected by corrosion. Cladding is a surfacing process used for depositing a thick layer of filler metal on highly corrosion-prone materials to achieve corrosion resistance. Flux cored arc welding (FCAW) is preferred for the cladding process due to its augmented efficiency and higher deposition rate. In this cladding process, the effect of corrosion can be minimized by controlling the output responses: minimizing dilution and penetration while maximizing bead width, reinforcement and ferrite number. This paper deals with the multi-objective optimization of flux cored arc welding responses by controlling process parameters such as wire feed rate, welding speed, nozzle-to-plate distance and welding gun angle for super duplex stainless steel material using the simulated annealing technique. A regression equation has been developed and validated using the ANOVA technique. The multi-objective optimization of weld bead parameters was carried out using simulated annealing to obtain the optimum bead geometry for reducing corrosion. The potentiodynamic polarization test reveals the balanced formation of fine particles of ferrite and austenite content with the desensitized nature of the microstructure in the optimized clad bead.

  10. An optimization algorithm for simulation-based planning of low-income housing projects

    Directory of Open Access Journals (Sweden)

    Mohamed M. Marzouk

    2010-10-01

    Full Text Available Construction of low-income housing projects is a replicated process and is associated with uncertainties that arise from the unavailability of resources. Government agencies and/or contractors have to select a construction system that meets low-income housing project constraints, including project conditions and technical, financial and time constraints. This research presents a framework, using computer simulation, which aids government authorities and contractors in the planning of low-income housing projects. The proposed framework estimates the time and cost required for the construction of low-income housing using pre-cast hollow core with hollow-block bearing walls. Five main components constitute the proposed framework: a network builder module, a construction alternative selection module, a simulation module, an optimization module and a reporting module. The optimization module, utilizing a genetic algorithm, enables the definition of different options and ranges of parameters associated with low-income housing projects that influence the duration and total cost of the pre-cast hollow core with hollow-block bearing walls method. A computer prototype, named LIHouse_Sim, was developed in MS Visual Basic 6.0 as a proof of concept for the proposed framework. A numerical example is presented to demonstrate the use of the developed framework and to illustrate its essential features.

  11. The empty wagons adjustment algorithm of Chinese heavy-haul railway

    International Nuclear Information System (INIS)

    Zhang, Jinchuan; Yang, Hao; Wei, Yuguang; Shang, Pan

    2016-01-01

    The paper studies the problem of empty wagon adjustment on Chinese heavy-haul railways. First, on the basis of existing studies of empty wagon adjustment for heavy-haul railways worldwide, Chinese heavy-haul railways are analyzed, in particular the mode of transportation organization and the characteristics of empty wagon adjustment. Second, an optimization model is set up to solve the empty wagon adjustment of heavy-haul railways, taking the minimum idling period as the objective function. Finally, through the application and solution of one case, the validity and practicability of the model and algorithm are demonstrated, so the model can offer decision support to transport enterprises on adjusting empty wagons.

  12. Comparison of particle swarm optimization and simulated annealing for locating additional boreholes considering combined variance minimization

    Science.gov (United States)

    Soltani-Mohammadi, Saeed; Safa, Mohammad; Mokhtari, Hadi

    2016-10-01

    One of the most important stages in complementary exploration is optimal design of the additional drilling pattern, that is, defining the optimum number and locations of additional boreholes. A great deal of research has been carried out in this regard, and in most of the proposed algorithms kriging variance minimization is used as the uncertainty-assessment criterion defining the objective function, so that the problem can be solved through optimization methods. Although kriging variance is known to have many advantages in objective-function definition, it is not sensitive to local variability. As a result, the only factors evaluated when locating the additional boreholes are the initial data configuration and the variogram model parameters, and the effects of local variability are omitted. In this paper, with the goal of considering local variability in the uncertainty assessment of domain boundaries, the application of combined variance is investigated to define the objective function. To verify the applicability of the proposed objective function, it is used to locate additional boreholes in the Esfordi phosphate mine through metaheuristic optimization methods, namely simulated annealing and particle swarm optimization. Comparison of the results from the proposed objective function and conventional methods indicates that the changes imposed on the objective function make the algorithm output sensitive to variations in grade, domain boundaries and the thickness of the mineralization domain. The comparison between the results of the different optimization algorithms shows that, for the presented case, particle swarm optimization is more appropriate than simulated annealing.
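
    A bare-bones particle swarm optimization loop of the kind referenced above is sketched below over an arbitrary objective cost (e.g. the combined-variance criterion evaluated at a candidate borehole location). The swarm size, inertia and acceleration constants are illustrative defaults, not the settings used in the study.

        import numpy as np

        def pso(cost, lower, upper, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
            """Minimize cost(x) for x within box bounds [lower, upper]."""
            rng = np.random.default_rng(seed)
            lo, hi = np.asarray(lower, float), np.asarray(upper, float)
            x = rng.uniform(lo, hi, size=(n_particles, lo.size))
            v = np.zeros_like(x)
            pbest, pbest_val = x.copy(), np.array([cost(p) for p in x])
            g = pbest[np.argmin(pbest_val)].copy()
            for _ in range(iters):
                r1, r2 = rng.random(x.shape), rng.random(x.shape)
                v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)   # inertia + cognitive + social
                x = np.clip(x + v, lo, hi)
                vals = np.array([cost(p) for p in x])
                better = vals < pbest_val
                pbest[better], pbest_val[better] = x[better], vals[better]
                g = pbest[np.argmin(pbest_val)].copy()
            return g, float(pbest_val.min())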

  13. Directional Variance Adjustment: Bias Reduction in Covariance Matrices Based on Factor Analysis with an Application to Portfolio Optimization

    Science.gov (United States)

    Bartz, Daniel; Hatrick, Kerr; Hesse, Christian W.; Müller, Klaus-Robert; Lemm, Steven

    2013-01-01

    Robust and reliable covariance estimates play a decisive role in financial and many other applications. An important class of estimators is based on factor models. Here, we show by extensive Monte Carlo simulations that covariance matrices derived from the statistical Factor Analysis model exhibit a systematic error, which is similar to the well-known systematic error of the spectrum of the sample covariance matrix. Moreover, we introduce the Directional Variance Adjustment (DVA) algorithm, which diminishes the systematic error. In a thorough empirical study for the US, European, and Hong Kong stock market we show that our proposed method leads to improved portfolio allocation. PMID:23844016

  14. Directional variance adjustment: bias reduction in covariance matrices based on factor analysis with an application to portfolio optimization.

    Directory of Open Access Journals (Sweden)

    Daniel Bartz

    Full Text Available Robust and reliable covariance estimates play a decisive role in financial and many other applications. An important class of estimators is based on factor models. Here, we show by extensive Monte Carlo simulations that covariance matrices derived from the statistical Factor Analysis model exhibit a systematic error, which is similar to the well-known systematic error of the spectrum of the sample covariance matrix. Moreover, we introduce the Directional Variance Adjustment (DVA) algorithm, which diminishes the systematic error. In a thorough empirical study for the US, European, and Hong Kong stock markets we show that our proposed method leads to improved portfolio allocation.

  15. Integration of electromagnetic induction sensor data in soil sampling scheme optimization using simulated annealing.

    Science.gov (United States)

    Barca, E; Castrignanò, A; Buttafuoco, G; De Benedetto, D; Passarella, G

    2015-07-01

    Soil survey is generally time-consuming, labor-intensive, and costly. Optimization of the sampling scheme allows one to reduce the number of sampling points without decreasing, or even while increasing, the accuracy of the investigated attribute. Maps of bulk soil electrical conductivity (ECa) recorded with electromagnetic induction (EMI) sensors could be effectively used to direct soil sampling design for assessing the spatial variability of soil moisture. A protocol using a field-scale bulk ECa survey has been applied in an agricultural field in the Apulia region (southeastern Italy). Spatial simulated annealing was used to optimize the spatial soil sampling scheme, taking into account sampling constraints, field boundaries, and preliminary observations. Three optimization criteria were used: the first criterion (minimization of the mean of the shortest distances, MMSD) optimizes the spreading of the point observations over the entire field by minimizing the expected distance between an arbitrarily chosen point and its nearest observation; the second criterion (minimization of the weighted mean of the shortest distances, MWMSD) is a weighted version of the MMSD, which uses the digital gradient of the gridded ECa data as the weighting function; and the third criterion (mean of the average ordinary kriging variance, MAOKV) minimizes the mean kriging estimation variance of the target variable. The last criterion uses the variogram model of soil water content estimated in a previous trial. The procedures, or a combination of them, were tested and compared in a real case. Simulated annealing was implemented using the software MSANOS, which is able to define or redesign any sampling scheme by increasing or decreasing the original sampling locations. The output consists of the computed sampling scheme, the convergence time, and the cooling law, which can be an invaluable support to the process of sampling design. The proposed approach found the optimal solution in a reasonable computation time.
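
    To make the MMSD criterion concrete, the sketch below embeds it in a plain spatial simulated annealing loop: one sampling point is nudged at a time and the move is accepted with the usual Metropolis rule. Field geometry, the sampling constraints and the MSANOS-specific features are omitted; design is an (n, 2) array of candidate sampling coordinates and grid is a dense (m, 2) array of prediction points covering the field, both assumed inputs.

        import numpy as np

        def mmsd(design, grid):
            """Mean, over grid points, of the distance to the nearest sampling location."""
            d = np.linalg.norm(grid[:, None, :] - design[None, :, :], axis=2)
            return float(d.min(axis=1).mean())

        def anneal_design(design, grid, t0=1.0, cooling=0.99, steps=2000, jump=5.0, seed=0):
            rng = np.random.default_rng(seed)
            obj, t = mmsd(design, grid), t0
            for _ in range(steps):
                cand = design.copy()
                cand[rng.integers(len(cand))] += rng.uniform(-jump, jump, size=2)  # move one point
                cand_obj = mmsd(cand, grid)
                if cand_obj < obj or rng.random() < np.exp(-(cand_obj - obj) / t):
                    design, obj = cand, cand_obj                                   # accept the move
                t *= cooling                                                       # geometric cooling law
            return design, obj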

  16. Manufacturing enterprise’s logistics operational cost simulation and optimization from the perspective of inter-firm network

    Directory of Open Access Journals (Sweden)

    Chun Fu

    2015-05-01

    Full Text Available Purpose: By studying the case of a Changsha engineering machinery manufacturing firm, this paper aims to identify optimization tactics to reduce the enterprise's logistics operational cost. Design/methodology/approach: The paper builds a structural model of the manufacturing enterprise's logistics operational costs from the perspective of the inter-firm network and simulates the model using system dynamics. Findings: It concludes that applying system dynamics to the study of manufacturing enterprises' logistics cost control can better reflect the relationships among the factors in the system, and that the case firm can optimize its logistics costs by implementing joint distribution. Research limitations/implications: The study still lacks comprehensive consideration of the number and quantification of the control variables. In the future, we should strengthen the collection of data and information about engineering machinery manufacturing firms and improve the logistics operational cost model. Practical implications: The study puts forward optimization tactics to reduce the enterprise's logistics operational cost, which are of great significance for the optimization of enterprises' supply chain management and logistics cost control. Originality/value: Differing from the existing literature, this paper builds a structural model of the manufacturing enterprise's logistics operational costs from the perspective of the inter-firm network and simulates the model using system dynamics.

  17. Optimization Of Thermo-Electric Coolers Using Hybrid Genetic Algorithm And Simulated Annealing

    Directory of Open Access Journals (Sweden)

    Khanh Doan V.K.

    2014-06-01

    Full Text Available Thermo-electric Coolers (TECs) are nowadays applied in a wide range of thermal energy systems. This is due to their superior features: no refrigerant or moving parts are needed, they generate no electrical or acoustical noise, and they are environmentally friendly. Over the past decades, much research has been devoted to improving the efficiency of TECs by enhancing the material parameters and design parameters. The material parameters are restricted by currently available materials and module fabrication technologies. Therefore, the main objective of TEC design is to determine a set of design parameters such as leg area, leg length and the number of legs. Two elements that play an important role when considering the suitability of TECs in applications are the rate of refrigeration (ROR) and the coefficient of performance (COP). In this paper, a review of some previous research is conducted to show the diversity of optimization approaches in the design of TECs for enhancing performance and efficiency. After that, single-objective optimization problems (SOPs) are tested first by using a Genetic Algorithm (GA) and Simulated Annealing (SA) to optimize the geometric properties so that TECs will operate at near-optimal conditions. Equality and inequality constraints were taken into consideration.
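
    A toy genetic algorithm over a real-valued design vector (for example leg length, leg area and number of legs) is sketched below; the encoding, bounds and GA settings are illustrative, and the hybrid GA/SA coupling discussed in the paper is not reproduced.

        import random

        def genetic_algorithm(fitness, bounds, pop_size=40, generations=100,
                              crossover_rate=0.9, mutation_rate=0.1):
            """Maximize fitness(x) for x within the per-variable bounds."""
            pop = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
            for _ in range(generations):
                ranked = sorted(pop, key=fitness, reverse=True)
                next_pop = [ranked[0][:], ranked[1][:]]                 # elitism: keep the two best
                while len(next_pop) < pop_size:
                    p1, p2 = random.sample(ranked[:pop_size // 2], 2)   # truncation selection
                    if random.random() < crossover_rate:                # uniform crossover
                        child = [a if random.random() < 0.5 else b for a, b in zip(p1, p2)]
                    else:
                        child = p1[:]
                    for j, (lo, hi) in enumerate(bounds):               # bounded mutation
                        if random.random() < mutation_rate:
                            child[j] = random.uniform(lo, hi)
                    next_pop.append(child)
                pop = next_pop
            return max(pop, key=fitness)

        # Example use with a hypothetical COP model and bounds:
        # best = genetic_algorithm(cop_of_design, bounds=[(1e-7, 1e-5), (1e-4, 5e-3), (10, 200)])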

  18. Development of an Optimizing Control Concept for Fossil-Fired Boilers using a Simulation Model

    DEFF Research Database (Denmark)

    Mortensen, J. H.; Mølbak, T.; Commisso, M.B.

    1997-01-01

    An optimizing control system for improving the load-following capabilities of power plant units has been developed. The system is implemented as a complement producing additive control signals to the existing boiler control system, a concept which has various practical advantages in terms of implementation and commissioning. The optimizing control system takes into account the multivariable and nonlinear characteristics of the boiler process, as a gain-scheduled LQG controller is utilized. For the purpose of facilitating the control concept development, a dynamic simulation model of the boiler process was applied; the benefits of using such a model when designing a new control concept are discussed.

  19. Linearity optimizations of analog ring resonator modulators through bias voltage adjustments

    Science.gov (United States)

    Hosseinzadeh, Arash; Middlebrook, Christopher T.

    2018-03-01

    The linearity of the ring resonator modulator (RRM) in microwave photonic links is studied in terms of instantaneous bandwidth, fabrication tolerances, and operational bandwidth. A proposed bias voltage adjustment method is shown to maximize the spur-free dynamic range (SFDR) at the instantaneous bandwidths required by microwave photonic link (MPL) applications while also mitigating the effects of RRM fabrication tolerances. The proposed bias voltage adjustment method shows an RRM SFDR improvement of ∼5.8 dB versus common Mach-Zehnder modulators at 500 MHz instantaneous bandwidth. Analysis of the operational bandwidth effects on SFDR shows that RRMs can be promising electro-optic modulators for MPL applications which require high operational frequencies within a limited bandwidth, such as radio-over-fiber 60 GHz wireless network access.

  20. A novel approach in optimization problem for research reactors fuel plate using a synergy between cellular automata and quasi-simulated annealing methods

    International Nuclear Information System (INIS)

    Barati, Ramin

    2014-01-01

    Highlights: • An innovative optimization technique for multi-objective optimization is presented. • The technique utilizes a combination of CA and quasi-simulated annealing. • Mass and deformation of the fuel plate are considered as objective functions. • The computational burden is significantly reduced compared to classic tools. - Abstract: This paper presents a new and innovative optimization technique utilizing a combination of cellular automata (CA) and quasi-simulated annealing (QSA) as the solver for conceptual design optimization, which is indeed a multi-objective optimization problem. Integrating CA and QSA into a unified optimizer tool has great potential for solving multi-objective optimization problems. Simulating neighborhood effects while taking local information into account, from CA, and accepting transitions based on the decrease of the objective function and the Boltzmann distribution, from QSA, as the transition rule make this tool effective in multi-objective optimization. Optimization of the fuel plate safety design, while taking into account the major goals of conceptual design such as improving reliability and lifetime, which are the most significant elements during shutdown, is a major multi-objective optimization problem. Due to the hugeness of the search space in the fuel plate optimization problem, finding the optimum solution with classical methods requires a huge amount of calculation and CPU time. The CA models, utilizing local information, require considerably less computation. In this study, minimizing both the mass and the deformation of the fuel plate of a multipurpose research reactor (MPRR) are considered as objective functions. The results, speed, and quality of the proposed method are comparable with those of the genetic algorithm and neural network methods applied to this problem before.

  1. Clinical trial optimization: Monte Carlo simulation Markov model for planning clinical trials recruitment.

    Science.gov (United States)

    Abbas, Ismail; Rovira, Joan; Casanovas, Josep

    2007-05-01

    The patient recruitment process of clinical trials is an essential element which needs to be designed properly. In this paper we describe different simulation models under continuous and discrete time assumptions for the design of recruitment in clinical trials. The results of hypothetical examples of clinical trial recruitments are presented. The recruitment time is calculated and the number of recruited patients is quantified for a given time and probability of recruitment. The expected delay and the effective recruitment durations are estimated using both continuous and discrete time modeling. The proposed type of Monte Carlo simulation Markov models will enable optimization of the recruitment process and the estimation and the calibration of its parameters to aid the proposed clinical trials. A continuous time simulation may minimize the duration of the recruitment and, consequently, the total duration of the trial.
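
    A minimal discrete-time Monte Carlo sketch of multicentre recruitment is given below: each centre recruits a Poisson number of patients per week and the trial stops when a target sample size is reached. The centre count, rates and target are placeholder values, not those of the hypothetical trials discussed in the paper.

        import numpy as np

        def recruitment_time(target_n=300, n_centres=20, rate_per_week=0.5, n_runs=10000, seed=0):
            """Return the mean and 95th percentile of the simulated recruitment duration (weeks)."""
            rng = np.random.default_rng(seed)
            durations = np.empty(n_runs)
            for r in range(n_runs):
                recruited, week = 0, 0
                while recruited < target_n:
                    recruited += int(rng.poisson(rate_per_week, size=n_centres).sum())
                    week += 1
                durations[r] = week
            return float(durations.mean()), float(np.percentile(durations, 95))

        # mean_weeks, p95_weeks = recruitment_time()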

  2. Optimal set values of zone modeling in the simulation of a walking beam type reheating furnace on the steady-state operating regime

    International Nuclear Information System (INIS)

    Yang, Zhi; Luo, Xiaochuan

    2016-01-01

    Highlights: • The adjoint equation is introduced into the PDE optimal control problem. • Lipschitz continuity of the gradient of the cost functional is derived. • The simulation time and number of iterations are reduced by a large margin in the simulations. • Model validation and comparison are made to verify the proposed mathematical model. - Abstract: In this paper, a new method is proposed to solve the PDE optimal control problem by introducing the adjoint problem into the optimization model; it is used to obtain the reference values for the optimal furnace zone temperatures and the optimal temperature distribution of steel slabs in a reheating furnace under the steady-state operating regime. It is proved that the gradient of the cost functional can be written via the weak solution of this adjoint problem, and Lipschitz continuity of the gradient is then derived. Model validation and comparison between the mathematical model and the experimental results indicate that the present heat transfer model works well for predicting the thermal behavior of a slab in the reheating furnace. The number of iterations and the simulation time show a significant decline in the simulations of a 20MnSi slab, and numerical simulations for 0.4 m thick slabs show that the proposed method is well suited to medium and heavy plate plants, leading to better performance in terms of productivity, energy efficiency and other features of reheating furnaces.

  3. A transmission power optimization with a minimum node degree for energy-efficient wireless sensor networks with full-reachability.

    Science.gov (United States)

    Chen, Yi-Ting; Horng, Mong-Fong; Lo, Chih-Cheng; Chu, Shu-Chuan; Pan, Jeng-Shyang; Liao, Bin-Yih

    2013-03-20

    Transmission power optimization is the most significant factor in prolonging the lifetime and maintaining the connection quality of wireless sensor networks. Un-optimized transmission power of nodes either interferes with or fails to link neighboring nodes. The optimization of transmission power depends on the expected node degree and node distribution. In this study, an optimization approach to an energy-efficient and full reachability wireless sensor network is proposed. In the proposed approach, an adjustment model of the transmission range with a minimum node degree is proposed that focuses on topology control and optimization of the transmission range according to node degree and node density. The model adjusts the tradeoff between energy efficiency and full reachability to obtain an ideal transmission range. In addition, connectivity and reachability are used as performance indices to evaluate the connection quality of a network. The two indices are compared to demonstrate the practicability of framework through simulation results. Furthermore, the relationship between the indices under the conditions of various node degrees is analyzed to generalize the characteristics of node densities. The research results on the reliability and feasibility of the proposed approach will benefit the future real deployments.
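
    As a back-of-the-envelope illustration of how a transmission range can be tied to an expected node degree (a common rule of thumb assumed here, not the adjustment model of the paper): with nodes spread uniformly at density rho, a node expects about rho·π·r² neighbours, so the range needed for an expected degree k is r = sqrt(k/(rho·π)).

        import math

        def required_range(k_min, density_per_m2):
            """Transmission range giving an expected node degree of k_min at uniform node density."""
            return math.sqrt(k_min / (density_per_m2 * math.pi))

        # e.g. 100 nodes over a 500 m x 500 m field, expected degree of at least 6
        rho = 100 / (500.0 * 500.0)
        print(f"transmission range ≈ {required_range(6, rho):.0f} m")  # about 69 m for these inputs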

  4. A Transmission Power Optimization with a Minimum Node Degree for Energy-Efficient Wireless Sensor Networks with Full-Reachability

    Science.gov (United States)

    Chen, Yi-Ting; Horng, Mong-Fong; Lo, Chih-Cheng; Chu, Shu-Chuan; Pan, Jeng-Shyang; Liao, Bin-Yih

    2013-01-01

    Transmission power optimization is the most significant factor in prolonging the lifetime and maintaining the connection quality of wireless sensor networks. Un-optimized transmission power of nodes either interferes with or fails to link neighboring nodes. The optimization of transmission power depends on the expected node degree and node distribution. In this study, an optimization approach to an energy-efficient and full reachability wireless sensor network is proposed. In the proposed approach, an adjustment model of the transmission range with a minimum node degree is proposed that focuses on topology control and optimization of the transmission range according to node degree and node density. The model adjusts the tradeoff between energy efficiency and full reachability to obtain an ideal transmission range. In addition, connectivity and reachability are used as performance indices to evaluate the connection quality of a network. The two indices are compared to demonstrate the practicability of framework through simulation results. Furthermore, the relationship between the indices under the conditions of various node degrees is analyzed to generalize the characteristics of node densities. The research results on the reliability and feasibility of the proposed approach will benefit the future real deployments. PMID:23519351

  5. Design and optimization analysis of dual material gate on DG-IMOS

    Science.gov (United States)

    Singh, Sarabdeep; Raman, Ashish; Kumar, Naveen

    2017-12-01

    An impact ionization MOSFET (IMOS) has been developed to overcome the limitation of the conventional MOSFET, whose sub-threshold slope (SS) cannot be less than 60 mV/decade at room temperature. In this work, first, the device performance of the p-type double-gate impact ionization MOSFET (DG-IMOS) is optimized by adjusting the device design parameters: the ratio of gate to intrinsic length, the gate dielectric thickness and the gate work function. Secondly, the dual material gate (DMG) DG-IMOS is proposed and investigated, and is further optimized to obtain the best possible performance parameters. Simulation results reveal that the DMG DG-IMOS, when compared with the DG-IMOS, shows better I_ON, I_ON/I_OFF ratio, and RF parameters. The results show that by properly tuning the lengths of the two gate materials at a ratio of 1.5, the DMG DG-IMOS achieves optimized performance, including an I_ON/I_OFF ratio of 2.87 × 10⁹ with I_ON of 11.87 × 10⁻⁴ A/μm and a transconductance of 1.06 × 10⁻³ S/μm. The analysis shows that the length of the drain-side material should be greater than that of the source-side material to attain a higher transconductance in the DMG DG-IMOS.

  6. Contact angle adjustment in equation-of-state-based pseudopotential model.

    Science.gov (United States)

    Hu, Anjie; Li, Longjian; Uddin, Rizwan; Liu, Dong

    2016-05-01

    The single component pseudopotential lattice Boltzmann model has been widely applied in multiphase simulation due to its simplicity and stability. In many studies, it has been claimed that this model can be stable for density ratios larger than 1000. However, the application of the model is still limited to small density ratios when the contact angle is considered. The reason is that the original contact angle adjustment method influences the stability of the model. Moreover, simulation results in the present work show that, by applying the original contact angle adjustment method, the density distribution near the wall is artificially changed, and the contact angle is dependent on the surface tension. Hence, it is very inconvenient to apply this method with a fixed contact angle, and the accuracy of the model cannot be guaranteed. To solve these problems, a contact angle adjustment method based on the geometry analysis is proposed and numerically compared with the original method. Simulation results show that, with our contact angle adjustment method, the stability of the model is highly improved when the density ratio is relatively large, and it is independent of the surface tension.

  7. Optimization of PWR fuel assembly radial enrichment and burnable poison location based on adaptive simulated annealing

    International Nuclear Information System (INIS)

    Rogers, Timothy; Ragusa, Jean; Schultz, Stephen; St Clair, Robert

    2009-01-01

    The focus of this paper is to present a concurrent optimization scheme for the radial pin enrichment and burnable poison location in PWR fuel assemblies. The methodology is based on the Adaptive Simulated Annealing (ASA) technique, coupled with a neutron lattice physics code to update the cost function values. In this work, the variations in the pin U-235 enrichment are variables to be optimized radially, i.e., pin by pin. We consider the optimization of two categories of fuel assemblies, with and without Gadolinium burnable poison pins. When burnable poisons are present, both the radial distribution of enrichment and the poison locations are variables in the optimization process. Results for 15 x 15 PWR fuel assembly designs are provided.

  8. Research on a Micro-Grid Frequency Modulation Strategy Based on Optimal Utilization of Air Conditioners

    Directory of Open Access Journals (Sweden)

    Qingzhu Wan

    2016-12-01

    Full Text Available With the proportion of air conditioners increasing gradually, they can provide a certain amount of frequency-controlled reserve for a micro-grid. To optimize the utilization of air conditioners while considering load response characteristics and customer comfort, a frequency adjustment model is provided in the form of a quadratic function between the trigger temperature of the air conditioner compressor and the frequency variation; it can be used to regulate the trigger temperature of the air conditioners when the micro-grid frequency rises and falls. This frequency adjustment model is combined with the primary and secondary frequency modulation methods of the energy storage system in order to optimize the frequency of the micro-grid. The simulation results show that the frequency modulation strategy for air conditioners can effectively improve the frequency modulation ability of the air conditioners and the frequency modulation effect of the micro-grid in coordination with an energy storage system.

  9. Machine concept optimization for pumped-storage plants through combined dispatch simulation for wholesale and reserve markets

    International Nuclear Information System (INIS)

    Engels, Klaus; Harasta, Michaela; Braitsch, Werner; Moser, Albert; Schaefer, Andreas

    2012-01-01

    In Germany's energy markets of today, pumped-storage power plants offer excellent business opportunities due to their outstanding flexibility. However, the energy-economic simulation of pumped-storage plants, which is necessary to base the investment decision on a sound business case, is a highly complex matter since the plant's capacity must be optimized in a given plant portfolio and between two relevant markets: the scheduled wholesale and the reserve market. This mathematical optimization problem becomes even more complex when the question is raised as to which type of machine should be used for a pumped-storage new build option. For the first time, it has been proven possible to simulate the optimum dispatch of different pumped-storage machine concepts within two relevant markets - the scheduled wholesale and the reserve market - thereby greatly supporting the investment decision process. The methodology and findings of a cooperation study between E.ON and RWTH Aachen University in respect of the German pumped-storage extension project 'Waldeck 2+' are described, showing the latest development in dispatch simulation for generation portfolios. (authors)

  10. Design and verification of controllers for longitudinal oscillations using optimal control theory and numerical simulation: Predictions for PEP-II

    International Nuclear Information System (INIS)

    Hindi, H.; Prabhakar, S.; Fox, J.; Teytelman, D.

    1997-12-01

    The authors present a technique for the design and verification of efficient bunch-by-bunch controllers for damping longitudinal multibunch instabilities. The controllers attempt to optimize the use of available feedback amplifier power--one of the most expensive components of a feedback system--and define the limits of closed loop system performance. The design technique alternates between analytic computation of single bunch optimal controllers and verification on a multibunch numerical simulator. The simulator identifies unstable coupled bunch modes and predicts their growth and damping rates. The results from the simulator are shown to be in reasonable agreement with analytical calculations based on the single bunch model. The technique is then used to evaluate the performance of a variety of controllers proposed for PEP-II

  11. Optimal laser control of ultrafast photodissociation of I2- in water: Mixed quantum/classical molecular dynamics simulation

    International Nuclear Information System (INIS)

    Nishiyama, Yoshikazu; Kato, Tsuyoshi; Ohtsuki, Yukiyoshi; Fujimura, Yuichi

    2004-01-01

    A linearized optimal control method in combination with mixed quantum/classical molecular dynamics simulation is used for numerically investigating the possibility of controlling photodissociation wave packets of I2- in water. Optimal pulses are designed using an ensemble of photodissociation samples, aiming at the creation of localized dissociation wave packets. Numerical results clearly show the effectiveness of the control, although the control achievement is reduced with an increase in the internuclear distance associated with a target region. We introduce effective optimal pulses that are designed using a statistically averaged effective dissociation potential, and show that they semiquantitatively reproduce the control achievements calculated by using optimal pulses. The control mechanisms are interpreted from the time- and frequency-resolved spectra of the effective optimal pulses.

  12. Net Stable Funding Ratio: Impact on Funding Value Adjustment

    OpenAIRE

    Siadat, Medya; Hammarlid, Ola

    2017-01-01

    In this paper we investigate the relationship between Funding Value Adjustment (FVA) and the Net Stable Funding Ratio (NSFR). FVA is defined in a way consistent with NSFR, such that the new FVA framework also monitors the costs of keeping NSFR at an acceptable level. In addition, the problem of choosing the optimal funding strategy is formulated as a shortest path problem in which the proposed FVA framework is applied in the optimization process. The solution provides us with the optimal f...

  13. Simulation and Optimization of the Heat Exchanger for Automotive Exhaust-Based Thermoelectric Generators

    Science.gov (United States)

    Su, C. Q.; Huang, C.; Deng, Y. D.; Wang, Y. P.; Chu, P. Q.; Zheng, S. J.

    2016-03-01

    In order to enhance the exhaust waste heat recovery efficiency of the automotive exhaust-based thermoelectric generator (TEG) system, a three-segment heat exchanger with a folded-shaped internal structure for the TEG system is investigated in this study. Surface temperature and thermal uniformity of the heat exchanger, the major factors affecting the performance of the TEG system, are analyzed in this research, and the pressure drop along the heat exchanger is also considered. Based on computational fluid dynamics simulations, the temperature distribution and the pressure drop along the heat exchanger are obtained. By considering variable length and thickness of the folded plates in each segment of the heat exchanger, response surface methodology and optimization by a multi-objective genetic algorithm are applied to the surface temperature, thermal uniformity, and pressure drop of the folded-shaped heat exchanger. An optimum design based on the optimization is proposed to improve the overall performance of the TEG system. The performance of the optimized heat exchanger in different engine conditions is discussed.

  14. Optimization of the parameters of HEMT GaN/AlN/AlGaN heterostructures for microwave transistors using numerical simulation

    Energy Technology Data Exchange (ETDEWEB)

    Tikhomirov, V. G., E-mail: VV11111@yandex.ru [Saint Petersburg Electrotechnical University “LETI” (Russian Federation); Zemlyakov, V. E.; Volkov, V. V.; Parnes, Ya. M.; Vyuginov, V. N. [Joint Stock Company “Svetlana-Electronpribor” (Russian Federation); Lundin, W. V.; Sakharov, A. V.; Zavarin, E. E.; Tsatsulnikov, A. F. [Russian Academy of Sciences, Submicron Heterostructures for Microelectronics Research and Engineering Center (Russian Federation); Cherkashin, N. A. [CEMES-CNRS-Université de Toulouse (France); Mizerov, M. N. [Russian Academy of Sciences, Submicron Heterostructures for Microelectronics Research and Engineering Center (Russian Federation); Ustinov, V. M. [Russian Academy of Sciences, Ioffe Physical–Technical Institute (Russian Federation)

    2016-02-15

    The numerical simulation, and theoretical and experimental optimization of field-effect microwave high-electron-mobility transistors (HEMTs) based on GaN/AlN/AlGaN heterostructures are performed. The results of the study showed that the optimal thicknesses and compositions of the heterostructure layers, allowing high microwave power implementation, are in relatively narrow ranges. It is shown that numerical simulation can be efficiently applied to the development of microwave HEMTs, taking into account basic physical phenomena and features of actual device structures.

  15. Simulation and Automation of Microwave Frequency Control in Dynamic Nuclear Polarization for Solid Polarized Targets

    Science.gov (United States)

    Perera, Gonaduwage; Johnson, Ian; Keller, Dustin

    2017-09-01

    Dynamic Nuclear Polarization (DNP) is used in most solid polarized target scattering experiments. The target materials must be irradiated using microwaves at a frequency determined by the difference between the nuclear Larmor and electron paramagnetic resonance (EPR) frequencies. But the resonance frequency changes with time as a result of radiation damage, so the microwave frequency must be adjusted accordingly. Manually adjusting the frequency can be difficult, and improper adjustments negatively impact the polarization. In order to overcome these difficulties, two controllers were developed which automate the process of seeking and maintaining the optimal frequency: one a standalone controller for a traditional DC motor and the other a LabVIEW VI for a stepper motor configuration. Further, a Monte Carlo simulation was developed which can accurately model the polarization over time as a function of microwave frequency. In this talk, analysis of the simulated data and recent improvements to the automated system will be presented. DOE.
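
    Independently of the actual DC-motor controller or LabVIEW VI described above, the seek-and-hold logic can be illustrated with a short hill-climbing sketch in Python; the polarization read-out, step size and frequency bounds below are hypothetical placeholders.

```python
import random

def read_polarization(freq_ghz):
    """Placeholder for the polarization (or NMR signal) measurement at a
    given microwave frequency; a real system would query the DAQ here."""
    # Illustrative response: a broad peak whose centre slowly drifts.
    centre = 140.15 + 0.0005 * random.random()
    return max(0.0, 1.0 - 400.0 * (freq_ghz - centre) ** 2)

def seek_optimal_frequency(freq_ghz, step=0.001, n_iter=200,
                           f_min=140.0, f_max=140.4):
    """Simple hill-climbing controller: probe both sides of the current
    frequency and step toward the larger polarization reading."""
    for _ in range(n_iter):
        p_down = read_polarization(freq_ghz - step)
        p_up = read_polarization(freq_ghz + step)
        if p_up > p_down:
            freq_ghz = min(freq_ghz + step, f_max)
        else:
            freq_ghz = max(freq_ghz - step, f_min)
    return freq_ghz

print(seek_optimal_frequency(140.10))
```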

  16. Numerical Simulation of the Francis Turbine and CAD used to Optimized the Runner Design (2nd).

    Science.gov (United States)

    Sutikno, Priyono

    2010-06-01

    Hydro power is the most important renewable energy source on earth. The water is free of charge, and the production of greenhouse gases (mainly CO2) during the generation of electric energy in a hydroelectric power station is negligible. Hydro power generation stations are long-term installations that can be used for 50 years and more, so care must be taken to guarantee smooth and safe operation over the years. Maintenance is necessary, and critical parts of the machines have to be replaced if required. Within modern engineering, numerical flow simulation plays an important role in optimizing the hydraulic turbine in conjunction with the connected components of the plant. Especially for the rehabilitation and upgrading of existing power plants, the important points of concern are to predict the power output of the turbine, to achieve maximum hydraulic efficiency, to avoid or minimize cavitation, and to avoid or minimize vibrations over the whole operating range. Flow simulation can help to solve operational problems and to optimize the turbomachinery of hydroelectric generating stations or their components through intuitive optimization, mathematical optimization, parametric design, the reduction of cavitation through design, prediction of the draft tube vortex, and troubleshooting by using the simulation. The classic design by the graphic-analytical method is cumbersome and cannot bring out the positive or negative aspects of the design options, so it became necessary to move from the classical design methods to an adequate design method using CAD software. The many options chosen during the design calculation at a specific design step can be verified, both as an ensemble and in detail. The final graphic post-processing is carried out only for the optimal solution, through a 3D representation of the runner as a whole for final approval of the geometric shape. In this article the redesign of the hydraulic turbine's runner is investigated.

  17. iFit: a new data analysis framework. Applications for data reduction and optimization of neutron scattering instrument simulations with McStas

    DEFF Research Database (Denmark)

    Farhi, E.; Y., Debab,; Willendrup, Peter Kjær

    2014-01-01

    and noisy problems. These optimizers can then be used to fit models onto data objects, and optimize McStas instrument simulations. As an application, we propose a methodology to analyse neutron scattering measurements in a pure Monte Carlo optimization procedure using McStas and iFit. As opposed...

  18. Collateral Optimization: Liquidity & Funding Value Adjustments - Best Practices

    OpenAIRE

    Genest, Benoit; Rego, David; Freon, Helene

    2013-01-01

    The purpose of this paper is to understand how the current financial landscape, shaped by the crises and new regulations, impacts Investment Banking’s business model. We will focus on quantitative implications, i.e. valuation, modeling and pricing issues, as well as qualitative implications, i.e. best practices to manage the quantitative aspects and fit these functions into the current Investment Banking organization. We considered two pillars to shape our vision of collateral optimization: ...

  19. Optimization of the Penelope code in F language for the simulation of the X-ray spectrum in radiodiagnosis

    International Nuclear Information System (INIS)

    Ballon P, C. I.; Quispe V, N. Y.; Vega R, J. L. J.

    2017-10-01

    The computational simulation of the X-ray spectrum in the radiodiagnostic range allows the study and advancement of knowledge of the X-ray transport process in the interaction with matter using the Monte Carlo method. From the X-ray spectra we can determine the dose that the patient receives when undergoing a radiographic or CT study, improving the quality of the obtained image. The objective of the present work was to implement and optimize the open-source code Penelope (a Monte Carlo code for the simulation of the transport of electrons and photons in matter), 2008 version, by programming extra code in the functional language F, managing to double the processing speed and thus reducing the simulation time and the errors of the software initially programmed in Fortran 77. The results were compared with those of Penelope, obtaining good agreement. We also simulated a PDD (percentage depth dose) curve for a Theratron Equinox cobalt-60 teletherapy device, also validating the implemented software for high energies. (Author)

  20. Research of Dust Field Optimization Distribution Based on Parameters Change of Air Duct Outlet in Fully Mechanized Excavation Face of Coal Mine

    Science.gov (United States)

    Gong, Xiao-Yan; Xia, Zhi-Xin; Wu, Yue; Mo, Jin-Ming; Zhang, Xin-Yi

    2017-12-01

    Aiming at the problem of dust accumulation and the sharply rising pollution risk in the fully mechanized excavation face, caused by unreasonable air duct outlet airflow during long-distance driving, this paper proposes a new idea to optimize the dust distribution by changing the angle, caliber, and front-to-rear distance of the air duct outlet. Taking the fully mechanized excavation face of the Ningtiaota coal mine, located in Shaanxi province, as the research object, a numerical simulation scheme of the dust field was established, the safety hazards of the original dust field distribution were simulated and analyzed, numerical simulation and optimization analyses of the dust distribution obtained by changing the angle, caliber, and front-to-rear distance of the air duct outlet were carried out, and an adjustment scheme for the optimized dust distribution was obtained, which provides a theoretical basis for reducing the probability of dust explosion and the degree of pollution.

  1. Design and Optimization of Large Accelerator Systems through High-Fidelity Electromagnetic Simulations

    International Nuclear Information System (INIS)

    Ng, Cho; Akcelik, Volkan; Candel, Arno; Chen, Sheng; Ge, Lixin; Kabel, Andreas; Lee, Lie-Quan; Li, Zenghai; Prudencio, Ernesto; Schussman, Greg; Uplenchwar, Ravi; Xiao, Liling; Ko, Kwok; Austin, T.; Cary, J.R.; Ovtchinnikov, S.; Smith, D.N.; Werner, G.R.; Bellantoni, L.; TechX Corp.; Fermilab

    2008-01-01

    SciDAC1, with its support for the 'Advanced Computing for 21st Century Accelerator Science and Technology' (AST) project, witnessed dramatic advances in electromagnetic (EM) simulations for the design and optimization of important accelerators across the Office of Science. In SciDAC2, EM simulations continue to play an important role in the 'Community Petascale Project for Accelerator Science and Simulation' (ComPASS), through close collaborations with SciDAC CETs/Institutes in computational science. Existing codes will be improved and new multi-physics tools will be developed to model large accelerator systems with unprecedented realism and high accuracy using computing resources at petascale. These tools aim at targeting the most challenging problems facing the ComPASS project. Supported by advances in computational science research, they have been successfully applied to the International Linear Collider (ILC) and the Large Hadron Collider (LHC) in High Energy Physics (HEP), the JLab 12-GeV Upgrade in Nuclear Physics (NP), as well as the Spallation Neutron Source (SNS) and the Linac Coherent Light Source (LCLS) in Basic Energy Sciences (BES)

  2. Design and optimization of large accelerator systems through high-fidelity electromagnetic simulations

    International Nuclear Information System (INIS)

    Ng, C; Akcelik, V; Candel, A; Chen, S; Ge, L; Kabel, A; Lee, Lie-Quan; Li, Z; Prudencio, E; Schussman, G; Uplenchwar, R; Xiao, L; Ko, K; Austin, T; Cary, J R; Ovtchinnikov, S; Smith, D N; Werner, G R; Bellantoni, L

    2008-01-01

    SciDAC-1, with its support for the 'Advanced Computing for 21st Century Accelerator Science and Technology' project, witnessed dramatic advances in electromagnetic (EM) simulations for the design and optimization of important accelerators across the Office of Science. In SciDAC2, EM simulations continue to play an important role in the 'Community Petascale Project for Accelerator Science and Simulation' (ComPASS), through close collaborations with SciDAC Centers and Institutes in computational science. Existing codes will be improved and new multi-physics tools will be developed to model large accelerator systems with unprecedented realism and high accuracy using computing resources at petascale. These tools aim at targeting the most challenging problems facing the ComPASS project. Supported by advances in computational science research, they have been successfully applied to the International Linear Collider and the Large Hadron Collider in high energy physics, the JLab 12-GeV Upgrade in nuclear physics, and the Spallation Neutron Source and the Linac Coherent Light Source in basic energy sciences.

  3. Optimization of Multiple Traveling Salesman Problem Based on Simulated Annealing Genetic Algorithm

    Directory of Open Access Journals (Sweden)

    Xu Mingji

    2017-01-01

    It is very effective to solve multivariable optimization problems by using a hierarchical genetic algorithm. This thesis analyzes both the advantages and disadvantages of the hierarchical genetic algorithm and puts forward an improved simulated annealing genetic algorithm. The new algorithm is applied to solve the multiple traveling salesman problem, and it improves the quality of the solution. First, it improves the design of the hierarchical chromosome structure with respect to the redundancy of the hierarchical algorithm and suggests a suffix design for the chromosomes. Second, concerning the premature convergence problems of genetic algorithms, it proposes a self-identifying crossover operator and mutation. Third, to address the weak local search ability of genetic algorithms, it stretches the fitness by combining the genetic algorithm with a simulated annealing algorithm. Fourth, it simulates problems with N traveling salesmen and M cities so as to verify its feasibility. The simulation and calculation show that this improved algorithm quickly converges to the best global solution, which means the algorithm is encouraging for practical use.
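
    For readers unfamiliar with the annealing component, the following sketch shows a bare-bones simulated annealing loop for a single travelling salesman tour; the city coordinates, cooling schedule and reversal move are illustrative only, and the paper's hierarchical chromosome design and crossover operators are not reproduced.

```python
import math
import random

def tour_length(tour, cities):
    """Total length of a closed tour over the given city coordinates."""
    return sum(math.dist(cities[tour[i]], cities[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

def simulated_annealing_tsp(cities, t0=10.0, cooling=0.995, n_iter=20000):
    tour = list(range(len(cities)))
    random.shuffle(tour)
    best, best_len = tour[:], tour_length(tour, cities)
    t = t0
    for _ in range(n_iter):
        i, j = sorted(random.sample(range(len(cities)), 2))
        cand = tour[:i] + tour[i:j + 1][::-1] + tour[j + 1:]   # 2-opt style reversal
        delta = tour_length(cand, cities) - tour_length(tour, cities)
        # Accept improvements always, worsenings with Boltzmann probability.
        if delta < 0 or random.random() < math.exp(-delta / t):
            tour = cand
        if tour_length(tour, cities) < best_len:
            best, best_len = tour[:], tour_length(tour, cities)
        t *= cooling                                            # cool down
    return best, best_len

cities = [(random.random(), random.random()) for _ in range(30)]
print(simulated_annealing_tsp(cities)[1])
```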

  4. The simulation of stationary and non-stationary regime operation of heavy water production facilities

    International Nuclear Information System (INIS)

    Peculea, M.; Beca, T.; Constantinescu, D.M.; Dumitrescu, M.; Dimulescu, A.; Isbasescu, G.; Stefanescu, I.; Mihai, M.; Dogaru, C.; Marinescu, M.; Olariu, S.; Constantin, T.; Necula, A.

    1995-01-01

    This paper refers to procedures for testing the production capacity of pilot- and industrial-scale heavy water production plants and of heavy water reconcentration facilities. Simulation codes taking into account the mass and heat transfer inside the exchange columns were developed. These codes provided valuable insight into the isotope build-up of the installation, which allowed the time needed to reach the stationary regime to be estimated. Transient regimes following perturbations of the operating parameters of the installation (i.e. temperature, pressure, fluid rates) were also simulated, and an optimal rate of routine inspections and adjustments was thus established.

  5. A New Approach to Reducing Search Space and Increasing Efficiency in Simulation Optimization Problems via the Fuzzy-DEA-BCC

    Directory of Open Access Journals (Sweden)

    Rafael de Carvalho Miranda

    2014-01-01

    The development of discrete-event simulation software was one of the most successful interfaces between operational research and computation. As a result, research has been focused on the development of new methods and algorithms with the purpose of increasing simulation optimization efficiency and reliability. This study aims to define optimum variation intervals for each decision variable through a proposed approach which combines data envelopment analysis with Fuzzy logic (Fuzzy-DEA-BCC), seeking to improve the distinction of decision-making units in the face of uncertainty. In this study, Taguchi's orthogonal arrays were used to generate the necessary quantity of DMUs, and the output variables were generated by the simulation. Two study objects were utilized as examples of mono- and multiobjective problems. Results confirmed the reliability and applicability of the proposed method, as it enabled a significant reduction in search space and computational demand when compared to conventional simulation optimization techniques.

  6. NEMO. A novel techno-economic tool suite for simulating and optimizing solutions for grid integration of electric vehicles and charging stations

    Energy Technology Data Exchange (ETDEWEB)

    Erge, Thomas; Stillahn, Thies; Dallmer-Zerbe, Kilian; Wille-Haussmann, Bernhard [Fraunhofer Institute for Solar Energy Systems ISE, Freiburg (Germany)]

    2013-07-01

    With an increasing use of electric vehicles (EV), grid operators need to predict energy flows depending on electromobility use profiles in order to adjust grid infrastructure and operation control accordingly. Tools and methodologies are required to characterize grid problems resulting from the interconnection of EV with the grid. The simulation and optimization tool suite NEMO (Novel E-MObility grid model) was developed within a European research project and is currently being tested using realistic showcases. It is a combination of three professional tools. One of the tools aims at a combined techno-economic design and operation, primarily modeling plants on contracts or the spot market while at the same time participating in balancing markets. The second tool is designed for planning grid extension or reinforcement, while the third tool is mainly used to quickly discover potential conflicts of grid operation approaches through load flow analysis. The tool suite is used to investigate real showcases in Denmark, Germany and the Netherlands. First studies show that significant alleviation of stress on distribution grid lines could be achieved by few but intelligent restrictions to EV charging procedures.

  7. NEMO. A novel techno-economic tool suite for simulating and optimizing solutions for grid integration of electric vehicles and charging stations

    International Nuclear Information System (INIS)

    Erge, Thomas; Stillahn, Thies; Dallmer-Zerbe, Kilian; Wille-Haussmann, Bernhard

    2013-01-01

    With an increasing use of electric vehicles (EV), grid operators need to predict energy flows depending on electromobility use profiles in order to adjust grid infrastructure and operation control accordingly. Tools and methodologies are required to characterize grid problems resulting from the interconnection of EV with the grid. The simulation and optimization tool suite NEMO (Novel E-MObility grid model) was developed within a European research project and is currently being tested using realistic showcases. It is a combination of three professional tools. One of the tools aims at a combined techno-economic design and operation, primarily modeling plants on contracts or the spot market while at the same time participating in balancing markets. The second tool is designed for planning grid extension or reinforcement, while the third tool is mainly used to quickly discover potential conflicts of grid operation approaches through load flow analysis. The tool suite is used to investigate real showcases in Denmark, Germany and the Netherlands. First studies show that significant alleviation of stress on distribution grid lines could be achieved by few but intelligent restrictions to EV charging procedures.

  8. Adjusting for treatment switching in randomised controlled trials - A simulation study and a simplified two-stage method.

    Science.gov (United States)

    Latimer, Nicholas R; Abrams, K R; Lambert, P C; Crowther, M J; Wailoo, A J; Morden, J P; Akehurst, R L; Campbell, M J

    2017-04-01

    Estimates of the overall survival benefit of new cancer treatments are often confounded by treatment switching in randomised controlled trials (RCTs) - whereby patients randomised to the control group are permitted to switch onto the experimental treatment upon disease progression. In health technology assessment, estimates of the unconfounded overall survival benefit associated with the new treatment are needed. Several switching adjustment methods have been advocated in the literature, some of which have been used in health technology assessment. However, it is unclear which methods are likely to produce least bias in realistic RCT-based scenarios. We simulated RCTs in which switching, associated with patient prognosis, was permitted. Treatment effect size and time dependency, switching proportions and disease severity were varied across scenarios. We assessed the performance of alternative adjustment methods based upon bias, coverage and mean squared error, related to the estimation of true restricted mean survival in the absence of switching in the control group. We found that when the treatment effect was not time-dependent, rank preserving structural failure time models (RPSFTM) and iterative parameter estimation methods produced low levels of bias. However, in the presence of a time-dependent treatment effect, these methods produced higher levels of bias, similar to those produced by an inverse probability of censoring weights method. The inverse probability of censoring weights and structural nested models produced high levels of bias when switching proportions exceeded 85%. A simplified two-stage Weibull method produced low bias across all scenarios and, provided the treatment switching mechanism is suitable, represents an appropriate adjustment method.

  9. Stochastic resource allocation in emergency departments with a multi-objective simulation optimization algorithm.

    Science.gov (United States)

    Feng, Yen-Yi; Wu, I-Chin; Chen, Tzu-Li

    2017-03-01

    The number of emergency cases or emergency room visits rapidly increases annually, thus leading to an imbalance in supply and demand and to the long-term overcrowding of hospital emergency departments (EDs). However, current solutions to increase medical resources and improve the handling of patient needs are either impractical or infeasible in the Taiwanese environment. Therefore, EDs must optimize resource allocation given limited medical resources to minimize the average length of stay of patients and medical resource waste costs. This study constructs a multi-objective mathematical model for medical resource allocation in EDs in accordance with emergency flow or procedure. The proposed mathematical model is complex and difficult to solve because its performance value is stochastic; furthermore, the model considers both objectives simultaneously. Thus, this study develops a multi-objective simulation optimization algorithm by integrating a non-dominated sorting genetic algorithm II (NSGA II) with multi-objective computing budget allocation (MOCBA) to address the challenges of multi-objective medical resource allocation. NSGA II is used to investigate plausible solutions for medical resource allocation, and MOCBA identifies effective sets of feasible Pareto (non-dominated) medical resource allocation solutions in addition to effectively allocating simulation or computation budgets. The discrete event simulation model of ED flow is inspired by a Taiwan hospital case and is constructed to estimate the expected performance values of each medical allocation solution as obtained through NSGA II. Finally, computational experiments are performed to verify the effectiveness and performance of the integrated NSGA II and MOCBA method, as well as to derive non-dominated medical resource allocation solutions from the algorithms.
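
    The NSGA II component rests on non-dominated sorting of candidate resource allocations. A minimal sketch of that ranking step for a two-objective minimization (for example, average length of stay versus resource waste cost) is given below; the objective values are invented and the MOCBA budget-allocation logic is not reproduced.

```python
def dominates(a, b):
    """True if solution a dominates b (minimization of every objective)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def non_dominated_fronts(objectives):
    """Return the Pareto fronts (lists of indices), best front first."""
    remaining = set(range(len(objectives)))
    fronts = []
    while remaining:
        front = [i for i in remaining
                 if not any(dominates(objectives[j], objectives[i])
                            for j in remaining if j != i)]
        fronts.append(front)
        remaining -= set(front)
    return fronts

# Each tuple: (average length of stay in hours, resource waste cost) - illustrative.
solutions = [(4.2, 120.0), (3.8, 150.0), (4.5, 110.0), (3.8, 140.0), (5.0, 160.0)]
print(non_dominated_fronts(solutions))
```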

  10. Heating analysis of cobalt adjusters in reactor core

    International Nuclear Information System (INIS)

    Mei Qiliang; Li Kang; Fu Yaru

    2011-01-01

    In order to produce 60Co sources for industrial and medical applications in a CANDU-6 reactor, the stainless steel adjusters were replaced with cobalt adjusters. A cobalt rod generates heat when it is irradiated by neutrons and γ rays. In addition, 59Co is activated and becomes 60Co, and the rays released by 60Co decay are absorbed by the adjusters, so the adjusters also generate heat. The change in the heating rate of the adjusters during normal operation must therefore be studied and provided as input data for analyzing the temperature field of the cobalt adjusters and the corresponding heat load on the moderator. The MCNP code was used to simulate the whole core geometric configuration in detail, including reactor fuel, control rods, adjusters, coolant and moderator, and to analyze the heating rates of the stainless steel adjusters and the cobalt adjusters. The maximum heating rates of the different cobalt adjusters based on the above results will be provided for the steady-state thermal hydraulic and accident analyses, to make sure that the reactor is safe from the thermal hydraulic point of view. (authors)

  11. Amundsen Sea simulation with optimized ocean, sea ice, and thermodynamic ice shelf model parameters

    Science.gov (United States)

    Nakayama, Y.; Menemenlis, D.; Schodlok, M.; Heimbach, P.; Nguyen, A. T.; Rignot, E. J.

    2016-12-01

    Ice shelves and glaciers of the West Antarctic Ice Sheet are thinning and melting rapidly in the Amundsen Sea (AS). This is thought to be caused by warm Circumpolar Deep Water (CDW) that intrudes via submarine glacial troughs located at the continental shelf break. Recent studies, however, point out that the depth of thermocline, or thickness of Winter Water (WW, potential temperature below -1 °C located above CDW) is critical in determining the melt rate, especially for the Pine Island Glacier (PIG). For example, the basal melt rate of PIG, which decreased by 50% during summer 2012, has been attributed to thickening of WW. Despite the possible importance of WW thickness on ice shelf melting, previous modeling studies in this region have focused primarily on CDW intrusion and have evaluated numerical simulations based on bottom or deep CDW properties. As a result, none of these models have shown a good representation of WW for the AS. In this study, we adjust a small number of model parameters in a regional Amundsen and Bellingshausen Seas configuration of the Massachusetts Institute of Technology general circulation model (MITgcm) to better fit the available observations during the 2007-2010 period. We choose this time period because summer observations during these years show small interannual variability in the eastern AS. As a result of adjustments, our model shows significantly better match with observations than previous modeling studies, especially for WW. Since density of sea water depends largely on salinity at low temperature, this is crucial for assessing the impact of WW on PIG melt rate. In addition, we conduct several sensitivity studies, showing the impact of surface heat loss on the thickness and properties of WW. We also discuss some preliminary results pertaining to further optimization using the adjoint method. Our work is a first step toward improved representation of ice-shelf ocean interactions in the ECCO (Estimating the Circulation and

  12. Design of Simulation Product for Stability of Electric Power System Using Power System Stabilizer and Optimal Control

    Science.gov (United States)

    Junaidi, Agus; Hamid, K. Abdul

    2018-03-01

    This paper discusses the use of optimal control and a Power System Stabilizer (PSS) in improving the oscillations of an electric power system. Oscillations in the electric power system can occur due to the sudden release of load (switching off). The oscillation of an unstable system over a long time causes the equipment to operate with interruptions. To overcome this problem, a control device is required that can work effectively in damping the oscillations. The power system is modeled as a Single Machine Infinite Bus (SMIB) system, and the state-space equations are used to model the SMIB mathematically. The SMIB system, which is the plant, is formulated in terms of its state variables (state space), and the Riccati equation is then used to determine the optimal gain for the plant controller. The plant is also controlled by a Power System Stabilizer using the phase compensation method. Using MATLAB-based simulation, the responses of the rotor speed change and rotor angle change are observed for each of the two control methods. The simulation results, obtained with the Simulink-MATLAB 6.1 software, compare the plant in the open-loop state and with the controllers. The simulation responses show that the optimal control and the PSS can improve the stability of the power system in terms of faster settling time and reduced overshoot, and the results of both methods are able to improve the system performance.

  13. The (1+λ) evolutionary algorithm with self-adjusting mutation rate

    DEFF Research Database (Denmark)

    Doerr, Benjamin; Witt, Carsten; Gießen, Christian

    2017-01-01

    We propose a new way to self-adjust the mutation rate in population-based evolutionary algorithms. Roughly speaking, it consists of creating half the offspring with a mutation rate that is twice the current mutation rate and the other half with half the current rate. The mutation rate is then updated to the rate used in that subpopulation which contains the best offspring. We analyze how the (1 + λ) evolutionary algorithm with this self-adjusting mutation rate optimizes the OneMax test function. We prove that this dynamic version of the (1 + λ) EA finds the optimum in an expected optimization time (number of fitness evaluations) of O(nλ/log λ + n log n). This time is asymptotically smaller than the optimization time of the classic (1 + λ) EA. Previous work shows that this performance is best-possible among all λ-parallel mutation-based unbiased black-box algorithms. This result shows...
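
    A compact sketch of the self-adjusting scheme on OneMax is given below; the population size, initial rate and clamping bounds are illustrative, and only the halving/doubling rule described in the abstract is reproduced.

```python
import random

def onemax(x):
    return sum(x)

def one_plus_lambda_self_adjusting(n=100, lam=8, seed=1):
    rng = random.Random(seed)
    parent = [rng.randint(0, 1) for _ in range(n)]
    rate = 2.0 / n                       # initial mutation rate
    evals = 0
    while onemax(parent) < n:
        best, best_fit, best_rate = None, -1, rate
        for i in range(lam):
            # First half of the offspring uses rate/2, second half uses 2*rate.
            r = rate / 2 if i < lam // 2 else rate * 2
            child = [1 - b if rng.random() < r else b for b in parent]
            evals += 1
            f = onemax(child)
            if f > best_fit:
                best, best_fit, best_rate = child, f, r
        if best_fit >= onemax(parent):   # (1+lambda): keep the best individual
            parent = best
        # Adopt the rate of the winning subpopulation, clamped to sane bounds.
        rate = min(max(best_rate, 2.0 / n), 0.25)
    return evals

print(one_plus_lambda_self_adjusting())  # number of fitness evaluations used
```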

  14. Simulating carbon and water fluxes at Arctic and boreal ecosystems in Alaska by optimizing the modified BIOME-BGC with eddy covariance data

    Science.gov (United States)

    Ueyama, M.; Kondo, M.; Ichii, K.; Iwata, H.; Euskirchen, E. S.; Zona, D.; Rocha, A. V.; Harazono, Y.; Nakai, T.; Oechel, W. C.

    2013-12-01

    To better predict carbon and water cycles in Arctic ecosystems, we modified a process-based ecosystem model, BIOME-BGC, by introducing new processes: the change in active layer depth on permafrost and the phenology of tundra vegetation. The parameters of the modified BIOME-BGC were then calibrated with an optimization method. The model was constrained using gross primary productivity (GPP) and net ecosystem exchange (NEE) at 23 eddy covariance sites in Alaska, and vegetation/soil carbon from a literature survey. The model was used to simulate the regional carbon and water fluxes of Alaska from 1900 to 2011. Simulated regional fluxes were validated with upscaled GPP, ecosystem respiration (RE), and NEE based on two methods: (1) a machine learning technique and (2) a top-down model. Our initial simulation suggests that the original BIOME-BGC with default ecophysiological parameters substantially underestimated GPP and RE for tundra and overestimated those fluxes for boreal forests. We will discuss how the optimization using the eddy covariance data impacts the historical simulation by comparing the new version of the model with results simulated by the original BIOME-BGC with default ecophysiological parameters. This suggests that incorporating the active layer depth and plant phenology processes is important when simulating carbon and water fluxes in Arctic ecosystems.

  15. Wireless Sensor Network Congestion Control Based on Standard Particle Swarm Optimization and Single Neuron PID.

    Science.gov (United States)

    Yang, Xiaoping; Chen, Xueying; Xia, Riting; Qian, Zhihong

    2018-04-19

    Aiming at the problem of network congestion caused by the large number of data transmissions in wireless routing nodes of a wireless sensor network (WSN), this paper puts forward an algorithm based on standard particle swarm-neural PID congestion control (PNPID). Firstly, PID control theory was applied to the queue management of wireless sensor nodes. Then, the self-learning and self-organizing ability of neurons was used to achieve online adjustment of the weights, which adjust the proportional, integral and differential parameters of the PID controller. Finally, standard particle swarm optimization was applied to the neural PID (NPID) algorithm for online optimization of the initial values of the proportional, integral and differential parameters and of the neuron learning rates. This paper describes experiments and simulations which show that the PNPID algorithm effectively stabilized the queue length near the expected value. At the same time, network performance, such as throughput and packet loss rate, was greatly improved, which alleviated network congestion and improved network QoS.
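
    The single-neuron PID at the core of the scheme can be illustrated with a short sketch in which the three neuron weights play the role of the proportional, integral and differential gains and are adapted online by a supervised Hebb-style rule; the toy queue dynamics, learning rates and output gain below are invented, and the particle-swarm initialization of these values is omitted.

```python
def neuron_pid_queue_control(setpoint=50.0, steps=300,
                             eta=(1e-4, 1e-4, 1e-4), gain=0.1):
    """Single-neuron PID regulating a toy queue length toward `setpoint`.

    The three weights w act as adaptive P, I, D gains (incremental PID form)
    and are adjusted online with a supervised Hebb-style rule.
    """
    w = [0.3, 0.3, 0.3]
    queue, u = 80.0, 0.0
    e1 = e2 = 0.0                                  # previous two errors
    for _ in range(steps):
        err = queue - setpoint                     # positive when queue too long
        x = [err - e1, err, err - 2.0 * e1 + e2]   # incremental PID inputs
        s = sum(abs(v) for v in w) or 1.0          # weight normalization
        u += gain * sum(wi / s * xi for wi, xi in zip(w, x))
        # Toy dynamics: constant arrivals, service rate increased by u.
        queue = max(0.0, queue + 10.0 - (8.0 + u))
        # Online weight adjustment (supervised Hebb rule).
        w = [wi + e * err * u * xi for wi, e, xi in zip(w, eta, x)]
        e2, e1 = e1, err
    return round(queue, 2)

print(neuron_pid_queue_control())  # queue length after adaptation
```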

  16. Wireless Sensor Network Congestion Control Based on Standard Particle Swarm Optimization and Single Neuron PID

    Science.gov (United States)

    Yang, Xiaoping; Chen, Xueying; Xia, Riting; Qian, Zhihong

    2018-01-01

    Aiming at the problem of network congestion caused by the large number of data transmissions in wireless routing nodes of a wireless sensor network (WSN), this paper puts forward an algorithm based on standard particle swarm-neural PID congestion control (PNPID). Firstly, PID control theory was applied to the queue management of wireless sensor nodes. Then, the self-learning and self-organizing ability of neurons was used to achieve online adjustment of the weights, which adjust the proportional, integral and differential parameters of the PID controller. Finally, standard particle swarm optimization was applied to the neural PID (NPID) algorithm for online optimization of the initial values of the proportional, integral and differential parameters and of the neuron learning rates. This paper describes experiments and simulations which show that the PNPID algorithm effectively stabilized the queue length near the expected value. At the same time, network performance, such as throughput and packet loss rate, was greatly improved, which alleviated network congestion and improved network QoS. PMID:29671822

  17. Optimization of the energy production for the Baghdara hydropower plant in Afghanistan using simulated annealing; Optimierung der Energieerzeugung fuer das Wasserkraftwerk Baghdara in Afghanistan mit simulated annealing

    Energy Technology Data Exchange (ETDEWEB)

    Ayros, E.; Hildebrandt, H.; Peissner, K. [Fichtner GmbH und Co. KG, Stuttgart (Germany). Wasserbau und Wasserkraftwerke; Bardossy, A. [Stuttgart Univ. (Germany). Inst. fuer Wasserbau

    2008-07-01

    Simulated Annealing (SA) is an optimization method analogous to the thermodynamic annealing process and is a new alternative for optimizing the energy production of hydropower systems with storage capabilities. The SA algorithm is presented here, and it was applied to maximize the energy production of the Baghdara hydropower plant in Afghanistan. The results were also compared with a non-linear programming (NLP) optimization method. (orig.)

  18. Optimization of FIBMOS Through 2D Silvaco ATLAS and 2D Monte Carlo Particle-based Device Simulations

    OpenAIRE

    Kang, J.; He, X.; Vasileska, D.; Schroder, D. K.

    2001-01-01

    Focused Ion Beam MOSFETs (FIBMOS) demonstrate large enhancements in core device performance areas such as output resistance, hot electron reliability and voltage stability upon channel length or drain voltage variation. In this work, we describe an optimization technique for FIBMOS threshold voltage characterization using the 2D Silvaco ATLAS simulator. Both ATLAS and 2D Monte Carlo particle-based simulations were used to show that FIBMOS devices exhibit enhanced current drive ...

  19. Control Systems with Normalized and Covariance Adaptation by Optimal Control Modification

    Science.gov (United States)

    Nguyen, Nhan T. (Inventor); Burken, John J. (Inventor); Hanson, Curtis E. (Inventor)

    2016-01-01

    Disclosed is a novel adaptive control method and system called optimal control modification with normalization and covariance adjustment. The invention specifically addresses current challenges with adaptive control in these areas: 1) persistent excitation, 2) complex nonlinear input-output mapping, 3) large inputs and persistent learning, and 4) the lack of stability analysis tools for certification. The invention has been subjected to many simulations and flight tests. The results substantiate the effectiveness of the invention and demonstrate the technical feasibility for use in modern aircraft flight control systems.

  20. A multi-objective simulation-optimization model for in situ bioremediation of groundwater contamination: Application of bargaining theory

    Science.gov (United States)

    Raei, Ehsan; Nikoo, Mohammad Reza; Pourshahabi, Shokoufeh

    2017-08-01

    In the present study, a BIOPLUME III simulation model is coupled with a non-dominated sorting genetic algorithm (NSGA-II)-based model for the optimal design of an in situ groundwater bioremediation system, considering the preferences of stakeholders. The Ministry of Energy (MOE), the Department of Environment (DOE), and the National Disaster Management Organization (NDMO) are the three stakeholders in the groundwater bioremediation problem in Iran. Based on the preferences of these stakeholders, the multi-objective optimization model tries to minimize: (1) cost; (2) the sum of contaminant concentrations that violate the standard; (3) contaminant plume fragmentation. The NSGA-II multi-objective optimization method gives Pareto-optimal solutions. A compromise solution is determined using fallback bargaining with impasse to achieve a consensus among the stakeholders, as sketched below. In this study, two different approaches are investigated and compared based on two different domains for the locations of injection and extraction wells. In the first approach, a limited number of predefined locations is considered according to previous similar studies. In the second approach, all possible points in the study area are investigated to find the optimal locations, arrangement, and flow rates of injection and extraction wells. The involvement of the stakeholders, investigating all possible points instead of a limited number of locations for wells, and minimizing the contaminant plume fragmentation during bioremediation are the new innovations in this research. Besides, the simulation period is divided into smaller time intervals for more efficient optimization. The image processing toolbox in MATLAB® software is utilized for the calculation of the third objective function. In comparison with previous studies, cost is reduced using the proposed methodology. The dispersion of the contaminant plume is reduced in both presented approaches using the third objective function. Considering all possible points in the study area for determining the optimal locations
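
    Fallback bargaining picks the Pareto alternative that every stakeholder accepts at the smallest possible preference depth. The sketch below illustrates that selection step with invented preference orderings of the three stakeholders over five hypothetical Pareto solutions; the impasse variant and the tie-breaking rule used in the paper may differ.

```python
def fallback_bargaining(rankings):
    """Select the alternative whose worst rank across all bargainers is best.

    `rankings` maps each stakeholder to a list of alternatives ordered from
    most to least preferred. At depth d every bargainer accepts their top d
    alternatives; the first alternative accepted by everyone wins
    (unanimity fallback bargaining).
    """
    alternatives = rankings[next(iter(rankings))]
    for depth in range(1, len(alternatives) + 1):
        accepted = [set(prefs[:depth]) for prefs in rankings.values()]
        common = set.intersection(*accepted)
        if common:
            # Break ties by the sum of ranks (lower is better).
            return min(common, key=lambda a: sum(prefs.index(a)
                                                 for prefs in rankings.values()))
    return None

# Hypothetical preference orderings of the three stakeholders over
# Pareto solutions S1..S5 (e.g. ranked by cost, violation and fragmentation).
rankings = {
    "MOE": ["S1", "S3", "S2", "S5", "S4"],
    "DOE": ["S4", "S2", "S3", "S1", "S5"],
    "NDMO": ["S2", "S3", "S4", "S1", "S5"],
}
print(fallback_bargaining(rankings))   # compromise solution
```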

  1. Modelling, simulation and optimization of solarthermal systems in an object-oriented simulation environment; Modellierung, Simulation und Optimierung solarthermischer Anlagen in einer objektorientierten Simulationsumgebung

    Energy Technology Data Exchange (ETDEWEB)

    Schrag, T.

    2001-07-01

    The simulation environment SMILE 1.0 and its new possibilities for modelling, simulating and optimizing are described and demonstrated using three different examples of solar-thermally assisted energy supply systems. These examples have in common that they deal with rather large systems and that they are all related to actual research projects. As the results obtained through the simulations are not only of exemplary character but also represent new insights, their scientific background is given, too. The structure of the SMILE system is explained, whose free software architecture enables extensibility and adaptation to special demands. The structuring of an object-oriented component library for solar and building models is shown, and the advantages of object orientation for modelling and validation are described. The integration of numerical optimization methods into the simulation environment allows automatic parameter studies and design calculations. To reduce the calculation time, different optimization strategies are studied, as well as the reduction of the input weather data with neural networks. With this data reduction, an acceleration of the design calculation of solar domestic hot water systems can be achieved that, in contrast to ordinary statistical methods, still leads to acceptable accuracy. The examples differ not only in their relevance for the energy market, but also in the features of the simulation environment they demonstrate. First, large hot water buffer systems are studied. Two different designs for the discharging of a buffer are compared, and it is shown how object orientation supports a gradual specification of the models for a detailed investigation of the heat exchangers. The parameters relevant for the discharging are numerically optimized and compared with the results of a parameter study. The focus of the second example is the combined examination of a multicomponent energy supply system and a building. The effects of thick insulation

  2. Risk Selection, Risk Adjustment and Choice: Concepts and Lessons from the Americas

    Science.gov (United States)

    Ellis, Randall P.; Fernandez, Juan Gabriel

    2013-01-01

    Interest has grown worldwide in risk adjustment and risk sharing due to their potential to contain costs, improve fairness, and reduce selection problems in health care markets. Significant steps have been made in the empirical development of risk adjustment models, and in the theoretical foundations of risk adjustment and risk sharing. This literature has often modeled the effects of risk adjustment without highlighting the institutional setting, regulations, and diverse selection problems that risk adjustment is intended to fix. Perhaps because of this, the existing literature and their recommendations for optimal risk adjustment or optimal payment systems are sometimes confusing. In this paper, we present a unified way of thinking about the organizational structure of health care systems, which enables us to focus on two key dimensions of markets that have received less attention: what choices are available that may lead to selection problems, and what financial or regulatory tools other than risk adjustment are used to influence these choices. We specifically examine the health care systems, choices, and problems in four countries: the US, Canada, Chile, and Colombia, and examine the relationship between selection-related efficiency and fairness problems and the choices that are allowed in each country, and discuss recent regulatory reforms that affect choices and selection problems. In this sample, countries and insurance programs with more choices have more selection problems. PMID:24284351

  3. Risk Selection, Risk Adjustment and Choice: Concepts and Lessons from the Americas

    Directory of Open Access Journals (Sweden)

    Randall P. Ellis

    2013-10-01

    Interest has grown worldwide in risk adjustment and risk sharing due to their potential to contain costs, improve fairness, and reduce selection problems in health care markets. Significant steps have been made in the empirical development of risk adjustment models, and in the theoretical foundations of risk adjustment and risk sharing. This literature has often modeled the effects of risk adjustment without highlighting the institutional setting, regulations, and diverse selection problems that risk adjustment is intended to fix. Perhaps because of this, the existing literature and their recommendations for optimal risk adjustment or optimal payment systems are sometimes confusing. In this paper, we present a unified way of thinking about the organizational structure of health care systems, which enables us to focus on two key dimensions of markets that have received less attention: what choices are available that may lead to selection problems, and what financial or regulatory tools other than risk adjustment are used to influence these choices. We specifically examine the health care systems, choices, and problems in four countries: the US, Canada, Chile, and Colombia, and examine the relationship between selection-related efficiency and fairness problems and the choices that are allowed in each country, and discuss recent regulatory reforms that affect choices and selection problems. In this sample, countries and insurance programs with more choices have more selection problems.

  4. A Novel Idea for Optimizing Condition-Based Maintenance Using Genetic Algorithms and Continuous Event Simulation Techniques

    Directory of Open Access Journals (Sweden)

    Mansoor Ahmed Siddiqui

    2017-01-01

    Effective maintenance strategies are of utmost significance for systems engineering due to their direct linkage with the financial aspects and safety of plant operation. Where the state of a system, for instance the level of its deterioration, can be constantly observed, a strategy based on condition-based maintenance (CBM) may be effected, wherein upkeep of the system is done progressively on the basis of the monitored state of the system. In this article, a multicomponent framework is considered that is continuously kept under observation. In order to decide an optimal deterioration stage for the said system at which its preventive maintenance should be carried out, the Genetic Algorithm (GA) technique has been utilized. The system is configured as a multiobjective problem aimed at optimizing the two desired objectives, namely profitability and accessibility. For the sake of realism, a prognostic model portraying the advancement of the deteriorating system has been employed, based on the utilization of continuous event simulation techniques. In this regard, Monte Carlo (MC) simulation has been shortlisted, as it can take into account a wide range of probable options that can help in reducing uncertainty. The inherent benefits proffered by the said simulation technique are fully utilized to display various elements of a deteriorating system working in a stressed environment. The proposed synergic model (GA and MC) is considered to be more effective due to the employment of a 'drop-by-drop approach' that permits a successful drive of the related search process toward the best optimal solutions.
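
    The Monte Carlo evaluation at the heart of the coupling can be illustrated with the sketch below: each candidate preventive-maintenance threshold is scored by simulating many random deterioration histories, while the GA search loop is replaced here by a simple scan over candidate thresholds to keep the sketch short. The degradation increments, cost figures and threshold grid are all invented.

```python
import random

def simulate_lifecycle(pm_threshold, horizon=200, failure_level=10.0,
                       pm_cost=1.0, cm_cost=5.0, rng=None):
    """One Monte Carlo history of a degrading unit.

    Degradation grows by random increments; preventive maintenance (PM) is
    done when the level crosses `pm_threshold`, corrective maintenance (CM)
    when it reaches `failure_level`. Returns the total maintenance cost.
    """
    rng = rng or random.Random()
    level, cost = 0.0, 0.0
    for _ in range(horizon):
        level += rng.expovariate(1.0 / 0.3)          # random wear increment
        if level >= failure_level:
            cost += cm_cost                           # failure: corrective action
            level = 0.0
        elif level >= pm_threshold:
            cost += pm_cost                           # planned preventive action
            level = 0.0
    return cost

def expected_cost(pm_threshold, n_runs=200, seed=0):
    rng = random.Random(seed)
    return sum(simulate_lifecycle(pm_threshold, rng=rng)
               for _ in range(n_runs)) / n_runs

# Simple scan over candidate PM thresholds, standing in for the GA loop.
candidates = [2.0 + 0.5 * i for i in range(15)]
best = min(candidates, key=expected_cost)
print(best, expected_cost(best))
```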

  5. Use of genetic algorithms for optimization of subchannel simulations

    International Nuclear Information System (INIS)

    Nava Dominguez, A.

    2004-01-01

    To facilitate the modeling of a rod fuel bundle, the most commonly used method consists in dividing the complex cross-sectional area into small subsections called subchannels. To close the system of equations, a mixture model is used to represent the inter-subchannel interactions. These interactions are as follows: diversion cross-flow, turbulent void diffusion, void drift and buoyancy drift. Amongst these mechanisms, the turbulent void diffusion and void drift are frequently modelled using diffusion coefficients. In this work, a novel approach has been employed in which an existing subchannel code was coupled to a genetic algorithm code, and the combination was used to optimize these coefficients. After several numerical simulations, a new objective function based on the principle of minimum dissipated energy was developed. The use of this function in the genetic algorithm coupled to the subchannel code gave results in good agreement with the experimental data

  6. Bioprocess iterative batch-to-batch optimization based on hybrid parametric/nonparametric models.

    Science.gov (United States)

    Teixeira, Ana P; Clemente, João J; Cunha, António E; Carrondo, Manuel J T; Oliveira, Rui

    2006-01-01

    This paper presents a novel method for iterative batch-to-batch dynamic optimization of bioprocesses. The relationship between process performance and control inputs is established by means of hybrid grey-box models combining parametric and nonparametric structures. The bioreactor dynamics are defined by material balance equations, whereas the cell population subsystem is represented by an adjustable mixture of nonparametric and parametric models. Thus optimizations are possible without detailed mechanistic knowledge concerning the biological system. A clustering technique is used to supervise the reliability of the nonparametric subsystem during the optimization. Whenever the nonparametric outputs are unreliable, the objective function is penalized. The technique was evaluated with three simulation case studies. The overall results suggest that the convergence to the optimal process performance may be achieved after a small number of batches. The model unreliability risk constraint along with sampling scheduling are crucial to minimize the experimental effort required to attain a given process performance. In general terms, it may be concluded that the proposed method broadens the application of the hybrid parametric/nonparametric modeling technique to "newer" processes with higher potential for optimization.

  7. Mechanism of laser micro-adjustment

    International Nuclear Information System (INIS)

    Shen Hong

    2008-01-01

    Miniaturization is a requirement in engineering to produce competitive products in the optical and electronic industries. Laser micro-adjustment is a new and promising technology for sheet metal actuator systems. Efforts have been made to understand the mechanisms of metal plate forming using a laser heating source. Three mechanisms have been proposed for describing the laser forming processes in different scenarios, namely the temperature gradient mechanism (TGM), the buckling mechanism and the upsetting mechanism (UM). However, none of these mechanisms can fully describe the deformation mechanisms involved in laser micro-adjustment. Based on thermal and elastoplastic analyses, a coupled TGM and UM is presented in this paper to illustrate the thermo-mechanical behaviour of two-bridge actuators during the laser forming process. To validate the proposed coupled mechanism, numerical simulations are carried out, and the corresponding results demonstrate the proposed mechanism. The mechanism of laser micro-adjustment could be taken as a supplement to the laser forming process.

  8. A Novel Scheme for Optimal Control of a Nonlinear Delay Differential Equations Model to Determine Effective and Optimal Administrating Chemotherapy Agents in Breast Cancer.

    Science.gov (United States)

    Ramezanpour, H R; Setayeshi, S; Akbari, M E

    2011-01-01

    Determining the optimal and effective scheme for administering the chemotherapy agents in breast cancer is the main goal of this scientific research. The most important issue here is the amount of drug or radiation administered in chemotherapy and radiotherapy to increase the patient's survival. This is because in these cases the therapy not only kills the tumor cells, but also kills some of the healthy tissue and causes serious damage. In this paper we investigate the effect of optimal drug scheduling for a breast cancer model which consists of nonlinear ordinary time-delay differential equations. A mathematical model of breast cancer tumors is discussed, and then optimal control theory is applied to find the optimal drug adjustment as an input control of the system. Finally, we use the Sensitivity Approach (SA) to solve the optimal control problem. The goal of this paper is to determine an optimal and effective scheme for administering the chemotherapy agent, so that the tumor is eradicated while the immune system remains above a suitable level. Simulation results confirm the effectiveness of our proposed procedure. In this paper a new scheme is proposed to design a therapy protocol for chemotherapy in breast cancer. In contrast to traditional pulse drug delivery, a continuous process is offered and optimized according to the optimal control theory for time-delay systems.

  9. Optimal operating conditions for external cavity semiconductor laser optical chaos communication system

    International Nuclear Information System (INIS)

    Priyadarshi, S; Pierce, I; Hong, Y; Shore, K A

    2012-01-01

    In optical chaos communications a message is masked in the noise-like broadband output of a chaotic transmitter laser, and message recovery is enabled through the synchronization of the transmitter and the (chaotic) receiver laser. Key issues are to identify the laser operating conditions which provide the highest-quality synchronization and those which provide optimized message extraction. In general such operating conditions are not coincident. In this paper numerical simulations are performed with the aim of identifying a regime of operation where the highest-quality synchronization and optimized message extraction efficiency are achieved simultaneously. Use of such an operating regime will facilitate the practical deployment of optical chaos communications systems without the need for re-adjustment of laser operating conditions in the field. (paper)

  10. Heat transfer simulation and retort program adjustment for thermal processing of wheat based Haleem in semi-rigid aluminum containers.

    Science.gov (United States)

    Vatankhah, Hamed; Zamindar, Nafiseh; Shahedi Baghekhandan, Mohammad

    2015-10-01

    A mixed computational strategy was used to simulate and optimize the thermal processing of Haleem, an ancient eastern food, in semi-rigid aluminum containers. Average temperature values from the experiments showed no significant difference (α = 0.05) from the predicted temperatures at the same positions. According to the model, the slowest-heating zone was located at the geometrical center of the container, where F0 was estimated to be 23.8 min. A 19 min decrease in the holding time of the treatment was estimated to optimize the heating operation, since the preferred F0 of starch- or meat-based fluid foods is about 4.8-7.5 min.
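
    The F0 value quoted above is conventionally the lethality integral F0 = ∫ 10^((T − 121.1 °C)/z) dt with z = 10 °C. The sketch below evaluates this integral by trapezoidal quadrature over a hypothetical cold-spot temperature history; the profile values are illustrative, not the paper's data.

```python
# Sketch of the standard sterilization value F0 = ∫ 10^((T - 121.1)/z) dt with z = 10 °C,
# evaluated by trapezoidal integration over a hypothetical cold-spot temperature history.
import numpy as np

def f0_value(time_min, temp_c, t_ref=121.1, z=10.0):
    lethality = 10.0 ** ((np.asarray(temp_c, dtype=float) - t_ref) / z)
    return np.trapz(lethality, np.asarray(time_min, dtype=float))

# Hypothetical retort cold-spot profile: heat-up, holding, cool-down (minutes, °C)
t = np.array([0.0, 10.0, 20.0, 40.0, 60.0, 70.0, 80.0])
T = np.array([25.0, 80.0, 110.0, 121.0, 121.0, 100.0, 60.0])
print(f"F0 = {f0_value(t, T):.1f} min")
```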

  11. Simulation and Optimization of an Innovative Dual Mixed Component Refrigerant Cycle (DMRC) for Natural Gas Offshore Liquefaction Plants

    International Nuclear Information System (INIS)

    SHAHBA, L.A.; Fahmy, M.F.M.

    2004-01-01

    Simulation and optimization of an innovative liquefaction process for LNG production, namely the Dual Mixed Refrigerant Process (DMRC), has been conducted using the HYSYS simulator. This new process is especially suitable for offshore natural gas liquefaction plants. A numerical optimization technique has been used to determine the optimum conditions for an Egyptian natural gas feed source. The effects of different compositions of the mixed refrigerants and of the cooling-water temperature were investigated, and the optimum conditions for the DMRC process were determined. The results reveal that the DMRC process can be successfully applied as a promising technique for offshore natural gas liquefaction plants.

  12. Linear study and bundle adjustment data fusion; Application to vision localization

    International Nuclear Information System (INIS)

    Michot, J.

    2010-01-01

    The works presented in this manuscript are in the field of computer vision and tackle the problem of real-time vision-based localization and 3D reconstruction. In this context, the trajectory of a camera and the 3D structure of the filmed scene are initially estimated by linear algorithms and then optimized by a nonlinear algorithm, bundle adjustment. The thesis first presents a new line-search technique dedicated to the nonlinear minimization algorithms used in Structure-from-Motion. The proposed technique is not iterative and can be quickly integrated into traditional bundle adjustment frameworks. This technique, called Global Algebraic Line Search (G-ALS), and its two-dimensional variant (Two way-ALS), accelerate the convergence of the bundle adjustment algorithm. Approximating the re-projection error by an algebraic distance enables the analytical calculation of an effective displacement amplitude (or two amplitudes for the Two way-ALS variant) by solving a degree-3 (G-ALS) or degree-5 (Two way-ALS) polynomial. Our experiments, conducted on simulated and real data, show that this amplitude, which is optimal for the algebraic distance, is also efficient for the Euclidean distance and reduces the convergence time of the minimization. One difficulty of real-time tracking algorithms (monocular SLAM) is that the estimated trajectory is often affected by drifts in absolute orientation, position and scale. Since these algorithms are incremental, errors and approximations accumulate along the trajectory and cause global drift. In addition, a vision tracking system can always be dazzled or used under conditions which temporarily prevent the location of the system from being computed. To solve these problems, we propose to use an additional sensor measuring the displacement of the camera. The type of sensor will vary depending on the targeted application (an odometer for a vehicle, a lightweight inertial navigation system for a person). We propose to
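
    The following sketch illustrates the idea behind an algebraic line search of the G-ALS type, without claiming to reproduce the thesis' exact formulation: if the residual along a search direction is modelled as quadratic in the step amplitude, the squared algebraic error is a quartic whose stationary points are the real roots of a degree-3 polynomial, and the best real root can be selected directly.

```python
# Illustrative sketch of an algebraic line search: model the residual along a search
# direction as r(s) ≈ a + b*s + c*s**2, so ||r(s)||^2 is a quartic in s whose stationary
# points are the real roots of a cubic (degree-3) polynomial.
import numpy as np

def algebraic_line_search(a, b, c):
    """a, b, c: residual value, first and second directional derivatives (vectors)."""
    # ||r(s)||^2 = q4 s^4 + q3 s^3 + q2 s^2 + q1 s + q0
    q4 = c @ c
    q3 = 2.0 * (b @ c)
    q2 = b @ b + 2.0 * (a @ c)
    q1 = 2.0 * (a @ b)
    q0 = a @ a
    # d/ds ||r(s)||^2 = 4 q4 s^3 + 3 q3 s^2 + 2 q2 s + q1  -> cubic polynomial
    roots = np.roots([4.0 * q4, 3.0 * q3, 2.0 * q2, q1])
    candidates = [s.real for s in roots if abs(s.imag) < 1e-9]
    cost = lambda s: np.polyval([q4, q3, q2, q1, q0], s)
    return min(candidates, key=cost) if candidates else 0.0

rng = np.random.default_rng(0)
a, b, c = rng.normal(size=(3, 6))          # toy residual model in 6 dimensions
print("optimal step amplitude:", algebraic_line_search(a, b, c))
```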

  13. Support of the operation of an agricultural biogas plants with dynamic simulation; Unterstuetzung des Betriebs einer landwirtschaftlichen Biogasanlage mit dynamischer Simulation

    Energy Technology Data Exchange (ETDEWEB)

    Seick, Ingolf; Gebhardt, Sebastian [Hochschule Magdeburg-Stendal, Magdeburg (Germany). Fachbereich Wasser- und Kreislaufwirtschaft; Tschepetzki, Ralf [ifak system GmbH, Magdeburg (Germany)

    2012-07-01

    Mathematical models for the dynamic simulation can be useful for agricultural biogas plants, but are not state of the art. Presented in the following text is a dynamic simulation model of a typical plant. This is based on the Anaerobic Digestion Model No. 1 (ADM1) and parameterized and compared with relevant system data. The results were incorporated into the development of a system for the direct, model-based operational support of biogas plants. Integrated is an operation diary for data acquisition and a simulation system. It supports the biogas plant operation through analysis and evaluation of complex biological processes, forecasting (e.g. the gas yield) and optimization of biology in conjunction with the process technology. Based on the above biogas plant, a practical method and exemplary results of an automatic model adjustment will be shown and example forecasts for the stabilization of the biological process are presented. (orig.)

  14. Numerical Simulation and Optimization of Hole Spacing for Cement Grouting in Rocks

    Directory of Open Access Journals (Sweden)

    Ping Fu

    2013-01-01

    Full Text Available The fine fissures of V-diabase were the main stratigraphic feature affecting the effectiveness of the foundation grout curtain at the Dagang Mountain Hydropower Station, so specialized in situ grouting tests were conducted to determine reasonable hole spacing and other parameters. Considering the time variation of the rheological parameters of the grout, the variation of the grouting pressure gradient, and the evolution law of the fracture opening, numerical simulations were performed on the diffusion process of cement grout in the fissures of the rock mass. The distribution of permeability after grouting was obtained from the analysis results, and the grouting hole spacing was discussed based on a reliability analysis. An optimization precision as fine as 0.1 m could be adopted, compared with the accuracy of 0.5 m that is commonly used. The results provide a useful reference for choosing reasonable grouting hole spacing in similar projects.

  15. Radar adjusted data versus modelled precipitation: a case study over Cyprus

    Directory of Open Access Journals (Sweden)

    M. Casaioli

    2006-01-01

    Full Text Available In the framework of the European VOLTAIRE project (Fifth Framework Programme), simulations of relatively heavy precipitation events, which occurred over the island of Cyprus, were performed by means of numerical atmospheric models. One of the aims of the project was the comparison of modelled rainfall fields with multi-sensor observations. Thus, for the 5 March 2003 event, the 24-h accumulated precipitation forecast of the BOlogna Limited Area Model (BOLAM) was compared with the available observations reconstructed from ground-based radar data and estimated from rain gauge data. Since radar data may be affected by errors depending on the distance from the radar, these data can be range-adjusted by using other sensors. In this case, the Precipitation Radar aboard the Tropical Rainfall Measuring Mission (TRMM) satellite was used to adjust the ground-based radar data with a two-parameter scheme. Thus, in this work, two observational fields were employed: the rain gauge gridded analysis and the observational analysis obtained by merging the range-adjusted radar and rain gauge fields. In order to verify the modelled precipitation, both non-parametric skill scores and the contiguous rain area (CRA) analysis were applied. Skill score results show some differences when using the two observational fields. CRA results are instead in good agreement, showing that in general a 0.27° eastward shift optimizes the forecast with respect to both observational analyses. This result is also supported by a subjective inspection of the shifted forecast field, whose gross features agree with the analysis pattern better than the non-shifted forecast. However, some questions, especially regarding the effect of other range-adjustment techniques, remain open and need to be addressed in future work.
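
    The displacement part of a CRA-type verification can be pictured as the small search below: the forecast field is translated over a range of grid offsets and the shift minimizing the mean-squared difference with the observed analysis is retained. The fields and grid here are synthetic stand-ins, not the BOLAM or radar data.

```python
# Sketch of the displacement step in a CRA-type verification: shift the forecast field
# over candidate (east, north) grid offsets and keep the shift minimizing the MSE against
# the observed analysis. The fields below are synthetic placeholders.
import numpy as np

def best_shift(forecast, observed, max_shift=5):
    best, best_mse = (0, 0), np.inf
    for dx in range(-max_shift, max_shift + 1):      # east-west grid offset
        for dy in range(-max_shift, max_shift + 1):  # north-south grid offset
            shifted = np.roll(np.roll(forecast, dx, axis=1), dy, axis=0)
            mse = np.mean((shifted - observed) ** 2)
            if mse < best_mse:
                best, best_mse = (dx, dy), mse
    return best, best_mse

rng = np.random.default_rng(1)
obs = rng.gamma(2.0, 2.0, size=(40, 40))                              # synthetic rainfall analysis
fcst = np.roll(obs, -3, axis=1) + rng.normal(0.0, 0.1, size=obs.shape)  # displaced, noisy forecast
print(best_shift(fcst, obs))    # expect roughly a (+3, 0) grid-cell eastward shift
```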

  16. Water-Energy Nexus: Examining The Crucial Connection Through Simulation Based Optimization

    Science.gov (United States)

    Erfani, T.; Tan, C. C.

    2014-12-01

    With growing urbanisation and the emergence of climate change, the world is facing a more water-constrained future. This phenomenon will have direct impacts on the resilience and performance of the energy sector, as water plays a key role in electricity generation processes. As energy is becoming a thirstier resource and the pressure on finite water sources is increasing, modelling and analysing this closely interlinked and interdependent loop, called the 'water-energy nexus', is becoming an important cross-disciplinary challenge. Conflict often arises in transboundary rivers where several countries share the same source of water to be used in productive sectors for economic growth. From the perspective of the upstream users, it would be ideal to store the water for hydropower generation and protect the city against drought, whereas the downstream users need the supply of water for growth. This research uses a case study of the transboundary Blue Nile River basin, where the Ethiopian government decided to invest in building a new dam to store water and generate hydropower. This has led to opposition by downstream users, who believe that the introduction of the dam would reduce the amount of water available downstream. This calls for a compromise management approach in which the reservoir operating rules are derived considering the interdependencies between the resources available and the requirements of all users. For this, we link a multiobjective optimization algorithm to a water-energy use simulation model to achieve effective management of the transboundary reservoir operating strategies. The objective functions aim to attain social and economic welfare by minimizing the deficit of water supply and maximizing hydropower generation. The study helps to improve policies by understanding the value of water and energy in their alternative uses. The results show how different optimal reservoir release rules generate different

  17. Advanced automated gain adjustments for in-vivo ultrasound imaging

    DEFF Research Database (Denmark)

    Moshavegh, Ramin; Hemmsen, Martin Christian; Martins, Bo

    2015-01-01

    Automatic gain adjustments are necessary on state-of-the-art ultrasound scanners to obtain optimal scan quality while reducing unnecessary user interaction with the scanner. However, when large anechoic regions exist in the scan plane, the sudden and drastic variation of attenuations in... A Wilcoxon signed-rank test was then applied to the ratings provided by radiologists. The average VAS score was highly positive at 12.16 (p-value: 2.09 x 10^-23), favoring the scans gain-adjusted with the proposed algorithm...

  18. Optimization of the SNS magnetism reflectometer neutron-guide optics using Monte Carlo simulations

    CERN Document Server

    Klose, F

    2002-01-01

    The magnetism reflectometer at the spallation neutron source SNS will employ advanced neutron optics to achieve high data rate, improved resolution, and extended dynamic range. Optical components utilized will include a multi-channel polygonal curved bender and a tapered neutron-focusing guide section. The results of a neutron beam interacting with these devices are rather complex. Additional complexity arises due to the spectral/time-emission profile of the moderator and non-perfect neutron optical coatings. While analytic formulae for the individual components provide some design guidelines, a realistic performance assessment of the whole instrument can only be achieved by advanced simulation methods. In this contribution, we present guide optics optimizations for the magnetism reflectometer using Monte Carlo simulations. We compare different instrument configurations and calculate the resulting data rates. (orig.)

  19. Comparison of Lasserre's Measure-based Bounds for Polynomial Optimization to Bounds Obtained by Simulated Annealing

    NARCIS (Netherlands)

    de Klerk, Etienne; Laurent, Monique

    We consider the problem of minimizing a continuous function f over a compact set K. We compare the hierarchy of upper bounds proposed by Lasserre in [SIAM J. Optim. 21(3) (2011), pp. 864-885] to bounds that may be obtained from simulated annealing. We show that, when f is a polynomial and K a convex

  20. Measurements and simulation-based optimization of TIGRESS HPGe detector array performance

    International Nuclear Information System (INIS)

    Schumaker, M.A.

    2005-01-01

    TIGRESS is a new γ-ray detector array being developed for installation at the new ISAC-II facility at TRIUMF in Vancouver. When complete, it will consist of twelve large-volume segmented HPGe clover detectors, fitted with segmented Compton suppression shields. The combined operation of prototypes of both a TIGRESS detector and a suppression shield has been tested. Peak-to-total ratios, relative photopeak efficiencies, and energy resolution functions have been determined in order to characterize the performance of TIGRESS. This information was then used to refine a GEANT4 simulation of the full detector array. Using this simulation, methods to overcome the degradation of the photopeak efficiency and peak-to-total response that occurs with high γ-ray multiplicity events were explored. These methods take advantage of the high segmentation of both the HPGe clovers and the suppression shields to suppress or sum detector interactions selectively. For a range of γ-ray energies and multiplicities, optimal analysis methods have been determined, which has resulted in significant gains in the expected performance of TIGRESS. (author)

  1. A statistical data assimilation method for seasonal streamflow forecasting to optimize hydropower reservoir management in data-scarce regions

    Science.gov (United States)

    Arsenault, R.; Mai, J.; Latraverse, M.; Tolson, B.

    2017-12-01

    Probabilistic ensemble forecasts generated by the ensemble streamflow prediction (ESP) methodology are subject to biases due to errors in the hydrological model's initial states. In day-to-day operations, hydrologists must compensate for discrepancies between observed and simulated states such as streamflow. However, in data-scarce regions, little to no information is available to guide the streamflow assimilation process. The manual assimilation process can then lead to more uncertainty due to the numerous options available to the forecaster. Furthermore, the model's mass balance may be compromised and could affect future forecasts. In this study we propose a data-driven approach in which specific variables that may be adjusted during assimilation are defined. The underlying principle was to identify key variables that would be the most appropriate to modify during streamflow assimilation depending on the initial conditions, such as the time period of the assimilation, the snow water equivalent of the snowpack and the meteorological conditions. The variables to adjust were determined by performing an automatic variational data assimilation on individual (or combinations of) model state variables and meteorological forcings. The assimilation aimed to simultaneously optimize: (1) the error between the observed and simulated streamflow at the time point where the forecast starts and (2) the bias between medium- to long-term observed and simulated flows, which were simulated by running the model with the observed meteorological data over a hindcast period. The optimal variables were then classified according to the initial conditions at the time period where the forecast is initiated. The proposed method was evaluated by measuring the average electricity generation of a hydropower complex in Québec, Canada, driven by this method. A test-bed which simulates the real-world assimilation, forecasting, water release optimization and decision-making of a hydropower cascade was

  2. Frequency adjustable MEMS vibration energy harvester

    Science.gov (United States)

    Podder, P.; Constantinou, P.; Amann, A.; Roy, S.

    2016-10-01

    Ambient mechanical vibrations offer an attractive solution for powering the wireless sensor nodes of the emerging “Internet-of-Things”. However, the wide-ranging variability of the ambient vibration frequencies poses a significant challenge to the efficient transduction of vibration into usable electrical energy. This work reports the development of a MEMS electromagnetic vibration energy harvester in which the resonance frequency of the oscillator can be adjusted or tuned to adapt to the ambient vibration frequency. Micro-fabricated silicon springs and double-layer planar micro-coils along with sintered NdFeB micro-magnets are used to construct the electromagnetic transduction mechanism. Furthermore, another NdFeB magnet is adjustably assembled to induce a variable magnetic interaction with the transducing magnet, leading to a significant change in the spring stiffness and resonance frequency. Finite element analysis and numerical simulations exhibit a substantial frequency tuning range (25% of the natural resonance frequency) through appropriate adjustment of the repulsive magnetic interaction between the tuning and transducing magnet pair. This demonstrated method of frequency adjustment or tuning has potential applications in other MEMS vibration energy harvesters and micromechanical oscillators.
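
    The reported tuning range follows from the linear-oscillator relation f = (1/2π)·sqrt(k/m): adding a magnetic stiffness contribution to the mechanical spring stiffness shifts the resonance. The sketch below illustrates this with hypothetical stiffness and mass values chosen only to reproduce a shift of roughly 25%.

```python
# Sketch of resonance tuning in a spring-mass harvester: f = (1/2π)·sqrt((k_mech + k_mag)/m).
# Adding a magnetic stiffness k_mag shifts the resonance; all values below are hypothetical.
import math

def resonance_hz(k_mech, k_mag, mass):
    return math.sqrt((k_mech + k_mag) / mass) / (2.0 * math.pi)

k_mech = 25.0    # N/m, mechanical stiffness of the silicon spring (assumed)
mass = 2.0e-4    # kg, proof mass including magnets (assumed)
f0 = resonance_hz(k_mech, 0.0, mass)
for k_mag in (0.0, 5.0, 10.0, 14.0):      # added magnetic stiffness, N/m
    f = resonance_hz(k_mech, k_mag, mass)
    print(f"k_mag = {k_mag:4.1f} N/m -> f = {f:6.1f} Hz ({100 * (f - f0) / f0:+.1f} %)")
```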

  3. Frequency adjustable MEMS vibration energy harvester

    International Nuclear Information System (INIS)

    Podder, P; Constantinou, P; Roy, S; Amann, A

    2016-01-01

    Ambient mechanical vibrations offer an attractive solution for powering the wireless sensor nodes of the emerging “Internet-of-Things”. However, the wide-ranging variability of the ambient vibration frequencies poses a significant challenge to the efficient transduction of vibration into usable electrical energy. This work reports the development of a MEMS electromagnetic vibration energy harvester in which the resonance frequency of the oscillator can be adjusted or tuned to adapt to the ambient vibration frequency. Micro-fabricated silicon springs and double-layer planar micro-coils along with sintered NdFeB micro-magnets are used to construct the electromagnetic transduction mechanism. Furthermore, another NdFeB magnet is adjustably assembled to induce a variable magnetic interaction with the transducing magnet, leading to a significant change in the spring stiffness and resonance frequency. Finite element analysis and numerical simulations exhibit a substantial frequency tuning range (25% of the natural resonance frequency) through appropriate adjustment of the repulsive magnetic interaction between the tuning and transducing magnet pair. This demonstrated method of frequency adjustment or tuning has potential applications in other MEMS vibration energy harvesters and micromechanical oscillators. (paper)

  4. Optimizing nitrogen fertilizer use: Current approaches and simulation models

    International Nuclear Information System (INIS)

    Baethgen, W.E.

    2000-01-01

    Nitrogen (N) is the most common limiting nutrient in agricultural systems throughout the world. Crops need sufficient available N to achieve optimum yields and adequate grain-protein content. Consequently, sub-optimal rates of N fertilizers typically lower the economic benefits for farmers. On the other hand, excessive N fertilizer use may result in environmental problems such as nitrate contamination of groundwater and emission of N2O and NO. In spite of the economic and environmental importance of good N fertilizer management, the development of optimum fertilizer recommendations is still a major challenge in most agricultural systems. This article reviews the approaches most commonly used for making N recommendations: expected yield level, soil testing and plant analysis (including quick tests). The paper introduces the application of simulation models that complement traditional approaches, and includes some examples of current applications in Africa and South America. (author)

  5. Automatic, unstructured mesh optimization for simulation and assessment of tide- and surge-driven hydrodynamics in a longitudinal estuary: St. Johns River

    Science.gov (United States)

    Bacopoulos, Peter

    2018-05-01

    A localized truncation error analysis with complex derivatives (LTEA+CD) is applied recursively with advanced circulation (ADCIRC) simulations of tides and storm surge for finite element mesh optimization. Mesh optimization is demonstrated with two iterations of LTEA+CD for tidal simulation in the lower 200 km of the St. Johns River, located in northeast Florida, and achieves more than a 50% decrease in the number of mesh nodes, corresponding to a twofold increase in efficiency, at no cost to model accuracy. The recursively generated meshes using LTEA+CD lead to successive reductions in the global cumulative truncation error associated with the model mesh. Tides are simulated with root mean square error (RMSE) of 0.09-0.21 m and index of agreement (IA) values generally in the 80s and 90s percentage ranges. Tidal currents are simulated with RMSE of 0.09-0.23 m/s and IA values of 97% and greater. Storm tide due to Hurricane Matthew (2016) is simulated with RMSE of 0.09-0.33 m and IA values of 75-96%. Analysis of the LTEA+CD results shows the M2 constituent to dominate the node spacing requirement in the St. Johns River, with the M4 and M6 overtides and the STEADY constituent contributing somewhat. Friction is the predominant physical factor influencing the target element size distribution, especially along the main river stem, while frequency (inertia) and Coriolis (rotation) are supplementary contributing factors. The combination of interior- and boundary-type computational molecules, providing near-full coverage of the model domain, renders LTEA+CD an attractive mesh generation/optimization tool for complex coastal and estuarine domains. The mesh optimization procedure using LTEA+CD is automatic and extensible to other finite element-based numerical models. Discussion is provided on the scope of LTEA+CD, the starting point (mesh) of the procedure, the user-specified scaling of the LTEA+CD results, and the iteration (termination) of LTEA+CD for mesh optimization.
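
    The two skill measures quoted above can be reproduced from paired observed and simulated series; the sketch below computes the RMSE and an index of agreement in the form popularized by Willmott, which is assumed (not confirmed by the abstract) to be the IA definition used.

```python
# Sketch of the two skill metrics quoted above: root-mean-square error (RMSE) and an index
# of agreement in the Willmott form (assumed to be the IA definition used in the study).
import numpy as np

def rmse(sim, obs):
    sim, obs = np.asarray(sim), np.asarray(obs)
    return np.sqrt(np.mean((sim - obs) ** 2))

def index_of_agreement(sim, obs):
    sim, obs = np.asarray(sim), np.asarray(obs)
    num = np.sum((sim - obs) ** 2)
    den = np.sum((np.abs(sim - obs.mean()) + np.abs(obs - obs.mean())) ** 2)
    return 1.0 - num / den

obs = np.array([0.12, 0.35, 0.58, 0.41, 0.22, -0.05, -0.31])   # observed water levels (m), hypothetical
sim = np.array([0.10, 0.40, 0.55, 0.45, 0.18, -0.02, -0.35])   # simulated water levels (m), hypothetical
print(f"RMSE = {rmse(sim, obs):.3f} m, IA = {100 * index_of_agreement(sim, obs):.1f} %")
```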

  6. Optimization of the Fabrication Route of Ferritic/Martensitic ODS Cladding Tubes: Metallurgical Approach and Pilgering Numerical Modeling

    International Nuclear Information System (INIS)

    Logé, R.E.; Vanegas-Marques, E.; Mocellin, K.; Toualbi, L.; Carlan, Y. de

    2013-01-01

    Conclusions:
    • The fabrication route of 9Cr-ODS (martensitic) alloys is well controlled.
    • The fabrication route of 14Cr-ODS (ferritic) alloys should be further optimized.
    • The choice between a ferritic and a martensitic grade has not yet been made; it will also depend on the behaviour under irradiation, the corrosion resistance …
    • Part of the optimization can rely on numerical simulation of pilgering:
      - The constitutive behaviour is an essential ingredient for process optimization: appropriate cyclic laws must be used.
      - The numerical analysis can address cracking risks, final yield stress, and even the residual stress state or surface roughness.
      - HPTR laboratory approaches can be transferred to the (industrial) VMR process provided some additional adjustments in the numerical code

  7. Optimal design of minimum mean-square error noise reduction algorithms using the simulated annealing technique.

    Science.gov (United States)

    Bai, Mingsian R; Hsieh, Ping-Ju; Hur, Kur-Nan

    2009-02-01

    The performance of the minimum mean-square error noise reduction (MMSE-NR) algorithm in conjunction with time-recursive averaging (TRA) for noise estimation is found to be very sensitive to the choice of two recursion parameters. To address this problem in a more systematic manner, this paper proposes an optimization method to efficiently search for the optimal parameters of the MMSE-TRA-NR algorithm. The objective function is based on a regression model, whereas the optimization process is carried out with the simulated annealing algorithm, which is well suited to problems with many local optima. Another NR algorithm proposed in the paper employs linear prediction coding as a preprocessor for extracting the correlated portion of human speech. Objective and subjective tests were undertaken to compare the optimized MMSE-TRA-NR algorithm with several conventional NR algorithms. The results of the subjective tests were processed using analysis of variance to justify the statistical significance. A post hoc test, Tukey's Honestly Significant Difference, was conducted to further assess the pairwise differences between the NR algorithms.
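
    The parameter search described above can be pictured with the generic simulated-annealing loop below, which explores two recursion parameters in (0, 1) and occasionally accepts worse moves to escape local optima. The objective is a stand-in with several local minima, not the paper's regression model.

```python
# Sketch of simulated annealing over two recursion parameters in (0, 1); the objective is a
# stand-in with several local minima, not the paper's regression-based objective function.
import math
import random

def objective(alpha, beta):
    return (alpha - 0.7) ** 2 + (beta - 0.9) ** 2 + 0.05 * math.sin(40 * alpha) * math.cos(40 * beta)

def anneal(n_iter=5000, t0=1.0, cooling=0.999, step=0.05, seed=0):
    rng = random.Random(seed)
    x = [rng.random(), rng.random()]              # initial (alpha, beta)
    fx, temp = objective(*x), t0
    best, best_f = list(x), fx
    for _ in range(n_iter):
        cand = [min(1.0, max(0.0, xi + rng.gauss(0.0, step))) for xi in x]
        fc = objective(*cand)
        if fc < fx or rng.random() < math.exp(-(fc - fx) / temp):   # sometimes accept worse moves
            x, fx = cand, fc
            if fx < best_f:
                best, best_f = list(x), fx
        temp *= cooling
    return best, best_f

print(anneal())   # expect a solution near the global basin around (0.7, 0.9)
```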

  8. Case‐mix adjustment in non‐randomised observational evaluations: the constant risk fallacy

    OpenAIRE

    Nicholl, Jon

    2007-01-01

    Observational studies comparing groups or populations to evaluate services or interventions usually require case‐mix adjustment to account for imbalances between the groups being compared. Simulation studies have, however, shown that case‐mix adjustment can make any bias worse.

  9. Simulation optimization of filament parameters for uniform depositions of diamond films on surfaces of ultra-large circular holes

    Energy Technology Data Exchange (ETDEWEB)

    Wang, Xinchang, E-mail: wangxinchangz@163.com; Shen, Xiaotian; Sun, Fanghong; Shen, Bin

    2016-12-01

    Highlights:
    • A verified simulation model using a novel filament arrangement is constructed.
    • Influences of filament parameters are clarified.
    • A coefficient between simulated and experimental results is proposed.
    • Orthogonal simulations are adopted to optimize filament parameters.
    • A general filament arrangement suitable for different conditions is determined.
    Abstract: Chemical vapor deposition (CVD) diamond films have been widely applied as protective coatings on a variety of anti-frictional and wear-resistant components, owing to their excellent mechanical and tribological properties, close to those of natural diamond. In some components, the inner hole surface serves as the working surface and suffers severe frictional or erosive wear. It is difficult to achieve uniform deposition of diamond films on inner hole surfaces, especially ultra-large inner holes. Adopting a SiC compact die with an aperture of 80 mm as an example, a novel filament arrangement with a certain number of filaments evenly distributed on a circle is designed, and the specific effects of the filament parameters, including the filament number, arrangement direction, filament temperature, filament diameter, circumradius and downward translation, on the substrate temperature distribution are studied by computational fluid dynamics (CFD) simulations based on the finite volume method (FVM), adopting a modified computational model consistent with the actual deposition environment. Corresponding temperature measurement experiments are also conducted to verify the rationality of the computational model. From the aspect of depositing a uniform boron-doped micro-crystalline, undoped micro-crystalline and undoped fine-grained composite diamond (BDM-UMC-UFGCD) film on such an inner hole surface, the filament parameters mentioned above are accurately optimized and compensated by orthogonal simulations. Moreover, deposition experiments adopting compensated optimized

  10. An Optimization Scheme for ProdMod

    International Nuclear Information System (INIS)

    Gregory, M.V.

    1999-01-01

    A general-purpose dynamic optimization scheme has been devised in conjunction with the ProdMod simulator. The optimization scheme is suitable for Savannah River Site (SRS) High Level Waste (HLW) complex operations and is able to handle different types of optimization, such as linear and nonlinear. The optimization is performed in a stand-alone FORTRAN-based optimization driver, while the optimizer is interfaced with the ProdMod simulator for the flow of information between the two.

  11. Planning intensive care unit design using computer simulation modeling: optimizing integration of clinical, operational, and architectural requirements.

    Science.gov (United States)

    O'Hara, Susan

    2014-01-01

    Nurses have increasingly been regarded as critical members of the planning team as architects recognize their knowledge and value. But the nurses' role as knowledge experts can be expanded to leading efforts to integrate the clinical, operational, and architectural expertise through simulation modeling. Simulation modeling allows for the optimal merge of multifactorial data to understand the current state of the intensive care unit and predict future states. Nurses can champion the simulation modeling process and reap the benefits of a cost-effective way to test new designs, processes, staffing models, and future programming trends prior to implementation. Simulation modeling is an evidence-based planning approach, a standard, for integrating the sciences with real client data, to offer solutions for improving patient care.

  12. Swarm size and iteration number effects to the performance of PSO algorithm in RFID tag coverage optimization

    Science.gov (United States)

    Prathabrao, M.; Nawawi, Azli; Sidek, Noor Azizah

    2017-04-01

    Radio Frequency Identification (RFID) systems have multiple benefits that can improve the operational efficiency of an organization. The advantages are the ability to record data systematically and quickly, reducing human and system errors, and updating the database automatically and efficiently. Often more than one reader is needed when installing an RFID system, which makes the system more complex. As a result, an RFID network planning process is needed to ensure the RFID system works properly. The planning process is also an optimization and power-adjustment process, because the coordinates of each RFID reader have to be determined. Therefore, nature-inspired algorithms are often used. In this study, the PSO algorithm is used because it has a small number of parameters, a fast simulation time, and is easy to use and very practical. However, PSO parameters must be adjusted correctly for robust and efficient use of PSO; failure to do so may degrade the performance and results of the PSO optimization. To ensure the efficiency of PSO, this study examines the effects of two parameters on the performance of the PSO algorithm in RFID tag coverage optimization: the swarm size and the number of iterations. In addition, the study recommends the most suitable setting for both parameters, namely 200 for the number of iterations and 800 for the swarm size. Finally, the results of this study will enable PSO to operate more efficiently in order to optimize RFID network planning.
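
    A standard global-best PSO exposing the two parameters examined in this study (swarm size and iteration number) is sketched below; the objective is a stand-in, not an RFID coverage model, and the defaults merely echo the recommended settings of 200 iterations and a swarm of 800.

```python
# Sketch of a standard global-best PSO exposing the two parameters examined above
# (swarm size and number of iterations); the objective is a stand-in, not an RFID model.
import numpy as np

def pso(objective, dim, bounds, n_swarm=800, n_iter=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    x = rng.uniform(lo, hi, size=(n_swarm, dim))            # particle positions
    v = np.zeros_like(x)                                    # particle velocities
    pbest = x.copy()
    pbest_f = np.apply_along_axis(objective, 1, x)          # personal best costs
    g = pbest[np.argmin(pbest_f)].copy()                    # global best position
    for _ in range(n_iter):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        f = np.apply_along_axis(objective, 1, x)
        improved = f < pbest_f
        pbest[improved], pbest_f[improved] = x[improved], f[improved]
        g = pbest[np.argmin(pbest_f)].copy()
    return g, pbest_f.min()

# Stand-in objective (sphere function); an RFID planning objective would instead score tag
# coverage, interference and transmit power for candidate reader coordinates.
best_x, best_f = pso(lambda p: float(np.sum(p ** 2)), dim=4, bounds=(-5.0, 5.0))
print(best_x, best_f)
```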

  13. A Novel Adjustable Concept for Permeable Gas/Vapor Protective Clothing: Balancing Protection and Thermal Strain.

    Science.gov (United States)

    Bogerd, Cornelis Peter; Langenberg, Johannes Pieter; DenHartog, Emiel A

    2018-02-13

    Armed forces typically have personal protective clothing (PPC) in place to offer protection against chemical, biological, radiological and nuclear (CBRN) agents. The regular soldier is equipped with permeable CBRN-PPC. However, depending on the operational task, these PPCs pose too much thermal strain to the wearer, which results in a higher risk of uncompensable heat stress. This study investigates the possibilities of adjustable CBRN-PPC, consisting of different layers that can be worn separately or in combination with each other. This novel concept aims to achieve optimization between protection and thermal strain during operations. Two CBRN-PPC (protective) layers were obtained from two separate manufacturers: (i) a next-to-skin (NTS) and (ii) a low-burden battle dress uniform (protective BDU). In addition to these layers, a standard (non-CBRN protective) BDU (sBDU) was also made available. The effect of combining clothing layers on the levels of protection were investigated with a Man-In-Simulant Test. Finally, a mechanistic numerical model was employed to give insight into the thermal burden of the evaluated CBRN-PPC concepts. Combining layers results in substantially higher protection that is more than the sum of the individual layers. Reducing the airflow on the protective layer closest to the skin seems to play an important role in this, since combining the NTS with the sBDU also resulted in substantially higher protection. As expected, the thermal strain posed by the different clothing layer combinations decreases as the level of protection decreases. This study has shown that the concept of adjustable protection and thermal strain through multiple layers of CBRN-PPC works. Adjustable CBRN-PPC allows for optimization of the CBRN-PPC in relation to the threat level, thermal environment, and tasks at hand in an operational setting. © The Author(s) 2017. Published by Oxford University Press on behalf of the British Occupational Hygiene Society.

  14. Constraining neutron guide optimizations with phase-space considerations

    Energy Technology Data Exchange (ETDEWEB)

    Bertelsen, Mads, E-mail: mads.bertelsen@gmail.com; Lefmann, Kim

    2016-09-11

    We introduce a method named the Minimalist Principle that serves to reduce the parameter space for neutron guide optimization when the required beam divergence is limited. The reduced parameter space restricts the optimization to guides with a minimal neutron intake that are still theoretically able to deliver the maximal possible performance. The geometrical constraints are derived using phase-space propagation from moderator to guide and from guide to sample, while assuming that the optimized guides will achieve perfect transport of the limited neutron intake. Guide systems optimized using these constraints are shown to provide performance close to guides optimized without any constraints; however, the divergence received at the sample is limited to the desired interval, even when the neutron transport is not limited by the supermirrors used in the guide. As the constraints strongly limit the parameter space for the optimizer, two control parameters are introduced that can be used to adjust the selected subspace, effectively balancing between maximizing neutron transport and avoiding background from unnecessary neutrons. One parameter describes the expected focusing abilities of the guide to be optimized, ranging from perfectly focusing to no correlation between position and velocity. The second parameter controls the neutron intake into the guide, so that one can select exactly how aggressively the background should be limited. We show examples of guides optimized using these constraints, which demonstrate higher signal-to-noise ratios than conventional optimizations. Furthermore, exploration of the parameter controlling neutron intake shows that the simulated optimal neutron intake is close to the analytically predicted value, when assuming that the guide is dominated by multiple scattering events.

  15. Post Pareto optimization-A case

    Science.gov (United States)

    Popov, Stoyan; Baeva, Silvia; Marinova, Daniela

    2017-12-01

    Simulation performance may be evaluated according to multiple quality measures that are in competition and their simultaneous consideration poses a conflict. In the current study we propose a practical framework for investigating such simulation performance criteria, exploring the inherent conflicts amongst them and identifying the best available tradeoffs, based upon multi-objective Pareto optimization. This approach necessitates the rigorous derivation of performance criteria to serve as objective functions and undergo vector optimization. We demonstrate the effectiveness of our proposed approach by applying it with multiple stochastic quality measures. We formulate performance criteria of this use-case, pose an optimization problem, and solve it by means of a simulation-based Pareto approach. Upon attainment of the underlying Pareto Frontier, we analyze it and prescribe preference-dependent configurations for the optimal simulation training.
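
    The post-Pareto step begins with extracting the non-dominated configurations from the table of simulated performance criteria. A minimal sketch, assuming all criteria are to be minimized, is given below with hypothetical data.

```python
# Sketch of extracting the Pareto frontier from simulated performance criteria,
# assuming every criterion is to be minimized (flip signs for criteria to be maximized).
import numpy as np

def pareto_front(points):
    """Return indices of non-dominated rows of an (n_configs, n_criteria) array."""
    pts = np.asarray(points, dtype=float)
    keep = []
    for i, p in enumerate(pts):
        # p is dominated if some row is <= p in every criterion and < p in at least one
        dominated = np.any(np.all(pts <= p, axis=1) & np.any(pts < p, axis=1))
        if not dominated:
            keep.append(i)
    return keep

# Hypothetical results: columns = (training time, error rate) for candidate configurations
results = np.array([[10.0, 0.30], [12.0, 0.22], [16.0, 0.28], [15.0, 0.21], [20.0, 0.20]])
print("Pareto-optimal configurations:", pareto_front(results))   # configuration 2 is dominated
```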

  16. SQUEEZE-E: The Optimal Solution for Molecular Simulations with Periodic Boundary Conditions.

    Science.gov (United States)

    Wassenaar, Tsjerk A; de Vries, Sjoerd; Bonvin, Alexandre M J J; Bekker, Henk

    2012-10-09

    In molecular simulations of macromolecules, it is desirable to limit the amount of solvent in the system to avoid spending computational resources on uninteresting solvent-solvent interactions. As a consequence, periodic boundary conditions are commonly used, with a simulation box chosen as small as possible, for a given minimal distance between images. Here, we describe how such a simulation cell can be set up for ensembles, taking into account a priori available or estimable information regarding conformational flexibility. Doing so ensures that any conformation present in the input ensemble will satisfy the distance criterion during the simulation. This helps avoid periodicity artifacts due to conformational changes. The method introduces three new approaches in computational geometry: (1) The first is the derivation of an optimal packing of ensembles, for which the mathematical framework is described. (2) A new method for approximating the α-hull and the contact body for single bodies and ensembles is presented, which is orders of magnitude faster than existing routines, allowing the calculation of packings of large ensembles and/or large bodies. 3. A routine is described for searching a combination of three vectors on a discretized contact body forming a reduced base for a lattice with minimal cell volume. The new algorithms reduce the time required to calculate packings of single bodies from minutes or hours to seconds. The use and efficacy of the method is demonstrated for ensembles obtained from NMR, MD simulations, and elastic network modeling. An implementation of the method has been made available online at http://haddock.chem.uu.nl/services/SQUEEZE/ and has been made available as an option for running simulations through the weNMR GRID MD server at http://haddock.science.uu.nl/enmr/services/GROMACS/main.php .

  17. Measurements and simulation for design optimization for low NOx coal-firing system

    Energy Technology Data Exchange (ETDEWEB)

    E. Bar-Ziv; Y. Yasur; B. Chudnovsky; L. Levin; A. Talanker [Ben-Gurion University of Negev, Beer-Sheva (Israel)

    2003-07-01

    The information required to design a utility steam generator comprises the heat balance, the fuel analysis and the emission requirements. These establish the furnace wall configuration, the heat release rates, and the firing technology. The furnace must be sized for (1) residence time for complete combustion with low NOx, and (2) reduction of flue gas temperature to minimize ash deposition. To meet these requirements, computational fluid dynamics (CFD) simulations of the combustion process in the furnace were performed and proved to be a powerful tool for this purpose. Still, reliable numerical simulations require careful interpretation and comparison with measurements. We report numerical results and measurements for a 575 MW pulverized coal tangentially fired boiler of the Hadera power plant of Israel Electric Corporation (IEC). Measured and calculated values were found to be in reasonable agreement. We used the simulations for optimization and investigated temperature distribution, heat fluxes and concentrations of chemical species. We optimized both the furnace flue gas temperature entering the convective path and the staged residence time for low NOx. We tested mass flow rates through close-coupled and separate overfire air ports and their arrangement, as well as the coal powder fineness. These parameters can control the mixing rate between the fuel and the oxidizer streams and can affect the most important characteristics of the boiler, such as temperature regimes, coal burning rate and nitrogen oxidation/reduction. Based on this effort, IEC started to improve boiler performance by replacing the existing tangential burners with a low-NOx firing system to meet current emission regulations.

  18. Optimization of tissue physical parameters for accurate temperature estimation from finite-element simulation of radiofrequency ablation

    International Nuclear Information System (INIS)

    Subramanian, Swetha; Mast, T Douglas

    2015-01-01

    Computational finite element models are commonly used for the simulation of radiofrequency ablation (RFA) treatments. However, the accuracy of these simulations is limited by the lack of precise knowledge of tissue parameters. In this technical note, an inverse solver based on the unscented Kalman filter (UKF) is proposed to optimize values for specific heat, thermal conductivity, and electrical conductivity resulting in accurately simulated temperature elevations. A total of 15 RFA treatments were performed on ex vivo bovine liver tissue. For each RFA treatment, 15 finite-element simulations were performed using a set of deterministically chosen tissue parameters to estimate the mean and variance of the resulting tissue ablation. The UKF was implemented as an inverse solver to recover the specific heat, thermal conductivity, and electrical conductivity corresponding to the measured area of the ablated tissue region, as determined from gross tissue histology. These tissue parameters were then employed in the finite element model to simulate the position- and time-dependent tissue temperature. Results show good agreement between simulated and measured temperature. (note)
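
    A minimal sketch of the estimation idea, assuming a toy forward model in place of the finite-element RFA simulation, is given below: one unscented-transform measurement update adjusts the three tissue parameters toward a measured ablation area, and the update is iterated to tighten the estimate. The surrogate model and all parameter values are hypothetical.

```python
# Minimal sketch of an unscented-transform measurement update for static parameter
# estimation (the spirit of a UKF inverse solver); the forward model below is a toy
# stand-in, not a finite-element RFA simulation.
import numpy as np

def forward_model(theta):
    """Toy surrogate: maps (specific heat, thermal cond., electrical cond.) to an ablation area."""
    c, k, sigma = theta
    return 80.0 * sigma / (c * 1e-3) * (1.0 + 0.5 * k)     # mm^2, purely illustrative

def ukf_update(mean, cov, measured, meas_var, alpha=1.0, beta=2.0, kappa=0.0):
    n = len(mean)
    lam = alpha ** 2 * (n + kappa) - n
    sqrt_cov = np.linalg.cholesky((n + lam) * cov)
    sigmas = np.vstack([mean, mean + sqrt_cov.T, mean - sqrt_cov.T])   # 2n+1 sigma points
    wm = np.full(2 * n + 1, 1.0 / (2 * (n + lam)))
    wc = wm.copy()
    wm[0] = lam / (n + lam)
    wc[0] = wm[0] + (1.0 - alpha ** 2 + beta)
    z = np.array([forward_model(s) for s in sigmas])                   # predicted measurements
    z_mean = wm @ z
    s_zz = wc @ (z - z_mean) ** 2 + meas_var                           # innovation variance
    p_xz = (wc * (z - z_mean)) @ (sigmas - mean)                       # cross-covariance
    gain = p_xz / s_zz
    new_mean = mean + gain * (measured - z_mean)
    new_cov = cov - np.outer(gain, gain) * s_zz
    return new_mean, new_cov

mean = np.array([3600.0, 0.5, 0.3])           # initial guess: c (J/kg/K), k (W/m/K), sigma (S/m)
cov = np.diag([400.0 ** 2, 0.1 ** 2, 0.05 ** 2])
measured_area = 9.5                            # mm^2, hypothetical measured ablation area
for _ in range(8):                             # iterate the update to tighten the estimate
    mean, cov = ukf_update(mean, cov, measured_area, meas_var=0.25)
print("estimated parameters:", mean)
```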

  19. Optimization of tissue physical parameters for accurate temperature estimation from finite-element simulation of radiofrequency ablation.

    Science.gov (United States)

    Subramanian, Swetha; Mast, T Douglas

    2015-10-07

    Computational finite element models are commonly used for the simulation of radiofrequency ablation (RFA) treatments. However, the accuracy of these simulations is limited by the lack of precise knowledge of tissue parameters. In this technical note, an inverse solver based on the unscented Kalman filter (UKF) is proposed to optimize values for specific heat, thermal conductivity, and electrical conductivity resulting in accurately simulated temperature elevations. A total of 15 RFA treatments were performed on ex vivo bovine liver tissue. For each RFA treatment, 15 finite-element simulations were performed using a set of deterministically chosen tissue parameters to estimate the mean and variance of the resulting tissue ablation. The UKF was implemented as an inverse solver to recover the specific heat, thermal conductivity, and electrical conductivity corresponding to the measured area of the ablated tissue region, as determined from gross tissue histology. These tissue parameters were then employed in the finite element model to simulate the position- and time-dependent tissue temperature. Results show good agreement between simulated and measured temperature.

  20. A Simulation-Optimization Model for Seawater Intrusion Management at Pingtung Coastal Area, Taiwan

    Directory of Open Access Journals (Sweden)

    Po-Syun Huang

    2018-02-01

    Full Text Available The coastal regions of the Pingtung Plain in southern Taiwan rely on groundwater as their main source of fresh water for the aquaculture, agriculture, domestic, and industrial sectors. The availability of fresh groundwater is threatened by unsustainable groundwater extraction, and the over-pumping leads to a serious seawater intrusion problem. It is therefore desirable to find appropriate management strategies to control groundwater salinity and mitigate seawater intrusion. In this study, a simulation-optimization model is presented to address seawater intrusion along the coastal aquifers of the Pingtung Plain; the objective is to minimize the total injection rate of injection-well barriers placed at pre-determined locations. The SEAWAT code is used to simulate the process of seawater intrusion, and a surrogate model based on artificial neural networks (ANNs) is used to approximate the seawater intrusion (SWI) numerical model to increase computational efficiency during the optimization process. The heuristic differential evolution (DE) algorithm is selected to identify the global optimal management solution. Two management scenarios, one with injection barriers located along the coast and the other with injection barriers located inland, are considered. The optimized results show that deploying injection barriers inland is more effective at reducing total dissolved solids (TDS) concentrations and mitigating seawater intrusion than deploying them along the coast. The computational time can be reduced by more than 98% when using ANNs to replace the numerical model, and the DE algorithm is confirmed as a robust optimization scheme for solving groundwater management problems. The proposed framework can identify the most reliable management strategies and provide a reference tool for decision making with regard to seawater intrusion remediation.
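
    The optimization layer can be sketched with SciPy's differential evolution minimizing the total injection rate under a penalty whenever a surrogate-predicted TDS concentration exceeds a target; the analytic surrogate below merely stands in for the trained ANN, and the well count, bounds and coefficients are hypothetical.

```python
# Sketch of the optimization layer: differential evolution searching injection rates for a
# fixed set of barrier wells, with an analytic surrogate standing in for the SEAWAT/ANN model.
import numpy as np
from scipy.optimize import differential_evolution

def surrogate_tds(rates):
    """Stand-in for the ANN surrogate: predicted TDS (mg/L) at a control point,
    decreasing as total injection increases (purely illustrative)."""
    return 3000.0 * np.exp(-0.002 * np.sum(rates)) + 250.0

def management_cost(rates, tds_limit=500.0, penalty=1e3):
    total_injection = np.sum(rates)                        # m^3/day, quantity to minimize
    excess = max(0.0, surrogate_tds(rates) - tds_limit)    # constraint violation
    return total_injection + penalty * excess

bounds = [(0.0, 800.0)] * 4                                # four hypothetical injection wells
result = differential_evolution(management_cost, bounds, seed=0, tol=1e-6)
print("optimal injection rates (m^3/day):", np.round(result.x, 1))
print("total injection:", round(float(result.x.sum()), 1),
      " predicted TDS:", round(float(surrogate_tds(result.x)), 1))
```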