Optimal Control and Coordination of Connected and Automated Vehicles at Urban Traffic Intersections
Zhang, Yue J. [Boston University; Malikopoulos, Andreas [ORNL; Cassandras, Christos G. [Boston University
2016-01-01
We address the problem of coordinating online a continuous flow of connected and automated vehicles (CAVs) crossing two adjacent intersections in an urban area. We present a decentralized optimal control framework whose solution yields for each vehicle the optimal acceleration/deceleration at any time in the sense of minimizing fuel consumption. The solu- tion, when it exists, allows the vehicles to cross the intersections without the use of traffic lights, without creating congestion on the connecting road, and under the hard safety constraint of collision avoidance. The effectiveness of the proposed solution is validated through simulation considering two intersections located in downtown Boston, and it is shown that coordination of CAVs can reduce significantly both fuel consumption and travel time.
On Reaction Coordinate Optimality.
Krivov, Sergei V
2013-01-01
The following question is addressed: how to establish that a constructed reaction coordinate is optimal, i.e., that it provides an accurate description of dynamics. It is shown that the reaction coordinate is optimal if its cut free energy profile, determined using length-weighted transitions, is constant, i.e., it is position and sampling interval independent. The observation leads to a number of interesting results. In particular, the equilibrium flux between two boundary states can be computed exactly as diffusion on a free energy profile associated with the coordinate. The mean square displacement, for the trajectory projected onto the coordinate, grows linear with time. That for the same trajectory projected onto a suboptimal coordinate grows slower than linear with time. The results are illustrated on a number of model systems, Sierpinski gasket, FIP35 protein, and beta3s peptide. PMID:26589017
Nonparametric variational optimization of reaction coordinates
Banushkina, Polina V.; Krivov, Sergei V.
2015-11-01
State of the art realistic simulations of complex atomic processes commonly produce trajectories of large size, making the development of automated analysis tools very important. A popular approach aimed at extracting dynamical information consists of projecting these trajectories into optimally selected reaction coordinates or collective variables. For equilibrium dynamics between any two boundary states, the committor function also known as the folding probability in protein folding studies is often considered as the optimal coordinate. To determine it, one selects a functional form with many parameters and trains it on the trajectories using various criteria. A major problem with such an approach is that a poor initial choice of the functional form may lead to sub-optimal results. Here, we describe an approach which allows one to optimize the reaction coordinate without selecting its functional form and thus avoiding this source of error.
Nonparametric variational optimization of reaction coordinates
Banushkina, Polina V.; Krivov, Sergei V., E-mail: s.krivov@leeds.ac.uk [Astbury Center for Structural Molecular Biology, Faculty of Biological Sciences, University of Leeds, Leeds LS2 9JT (United Kingdom)
2015-11-14
State of the art realistic simulations of complex atomic processes commonly produce trajectories of large size, making the development of automated analysis tools very important. A popular approach aimed at extracting dynamical information consists of projecting these trajectories into optimally selected reaction coordinates or collective variables. For equilibrium dynamics between any two boundary states, the committor function also known as the folding probability in protein folding studies is often considered as the optimal coordinate. To determine it, one selects a functional form with many parameters and trains it on the trajectories using various criteria. A major problem with such an approach is that a poor initial choice of the functional form may lead to sub-optimal results. Here, we describe an approach which allows one to optimize the reaction coordinate without selecting its functional form and thus avoiding this source of error.
Optimal Coordination of Automatic Line Switches for Distribution Systems
Jyh-Cherng Gu
2012-04-01
Full Text Available For the Taiwan Power Company (Taipower, the margins of coordination times between the lateral circuit breakers (LCB of underground 4-way automatic line switches and the protection equipment of high voltage customers are often too small. This could lead to sympathy tripping by the feeder circuit breaker (FCB of the distribution feeder and create difficulties in protection coordination between upstream and downstream protection equipment, identification of faults, and restoration operations. In order to solve the problem, it is necessary to reexamine the protection coordination between LCBs and high voltage customers’ protection equipment, and between LCBs and FCBs, in order to bring forth new proposals for settings and operations. This paper applies linear programming to optimize the protection coordination of protection devices, and proposes new time current curves (TCCs for the overcurrent (CO and low-energy overcurrent (LCO relays used in normally open distribution systems by performing simulations in the Electrical Transient Analyzer Program (ETAP environment. The simulation results show that the new TCCs solve the coordination problems among high voltage customer, lateral, feeder, bus-interconnection, and distribution transformer. The new proposals also satisfy the requirements of Taipower on protection coordination of the distribution feeder automation system (DFAS. Finally, the authors believe that the system configuration, operation experience, and relevant criteria mentioned in this paper may serve as valuable references for other companies or utilities when building DFAS of their own.
Automated selection of LEDs by luminance and chromaticity coordinate
Fischer, Ulrich H P; Reinboth, Christian
2010-01-01
The increased use of LEDs for lighting purposes has led to the development of numerous applications requiring a pre-selection of LEDs by their luminance and / or their chromaticity coordinate. This paper demonstrates how a manual pre-selection process can be realized using a relatively simple configuration. Since a manual selection service can only be commercially viable as long as only small quantities of LEDs need to be sorted, an automated solution suggests itself. This paper introduces such a solution, which has been developed by Harzoptics in close cooperation with Rundfunk Gernrode. The paper also discusses current challenges in measurement technology as well as market trends.
Optimal coordinated voltage control of power systems
LI Yan-jun; HILL David J.; WU Tie-jun
2006-01-01
An immune algorithm solution is proposed in this paper to deal with the problem of optimal coordination of local physically based controllers in order to preserve or retain mid and long term voltage stability. This problem is in fact a global coordination control problem which involves not only sequencing and timing different control devices but also tuning the parameters of controllers. A multi-stage coordinated control scheme is presented, aiming at retaining good voltage levels with minimal control efforts and costs after severe disturbances in power systems. A self-pattern-recognized vaccination procedure is developed to transfer effective heuristic information into the new generation of solution candidates to speed up the convergence of the search procedure to global optima. An example of four bus power system case study is investigated to show the effectiveness and efficiency of the proposed algorithm, compared with several existing approaches such as differential dynamic programming and tree-search.
Optimization-based Method for Automated Road Network Extraction
Xiong, D
2001-09-18
Automated road information extraction has significant applicability in transportation. It provides a means for creating, maintaining, and updating transportation network databases that are needed for purposes ranging from traffic management to automated vehicle navigation and guidance. This paper is to review literature on the subject of road extraction and to describe a study of an optimization-based method for automated road network extraction.
Optimization-based Method for Automated Road Network Extraction
Automated road information extraction has significant applicability in transportation. It provides a means for creating, maintaining, and updating transportation network databases that are needed for purposes ranging from traffic management to automated vehicle navigation and guidance. This paper is to review literature on the subject of road extraction and to describe a study of an optimization-based method for automated road network extraction
Applications of Intelligent Evolutionary Algorithms in Optimal Automation System Design
Tung-Kuan Liu; Jyh-Horng Chou
2011-01-01
This paper proposes an intelligent evolutionary algorithm that can be applied in the design of optimal automation systems, and employs a multimodal six-bar mechanism optimization design, job shop production scheduling for the fishing equipment industry, and dynamic real-time production scheduling system design cases to show how the technique developed in this paper is highly effective at resolving optimal automation system design problems. Major breakthroughs in artificial intelligence contin...
Tomahawk strike coordinator predesignation : optimizing firing platform and weapon allocation
Kirk, Brian D.
1999-01-01
Ships, submarines, and missiles are currently manually allocated for naval strike warfare tasking. Naval Surface Warfare Center Dahlgren Division has proposed to the Office of Naval Research to develop automated "predesignation" aids that automatically allocate the Tomahawk Land Attack Missile, at both the Tomahawk Strike Coordinator level, and at the individual firing, platform level. A mixed integer program is introduced for Tomahawk Strike Coordinator predesignation, and is implemented in ...
Automated Cache Performance Analysis And Optimization
Mohror, Kathryn [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)
2013-12-23
While there is no lack of performance counter tools for coarse-grained measurement of cache activity, there is a critical lack of tools for relating data layout to cache behavior to application performance. Generally, any nontrivial optimizations are either not done at all, or are done ”by hand” requiring significant time and expertise. To the best of our knowledge no tool available to users measures the latency of memory reference instructions for partic- ular addresses and makes this information available to users in an easy-to-use and intuitive way. In this project, we worked to enable the Open|SpeedShop performance analysis tool to gather memory reference latency information for specific instructions and memory ad- dresses, and to gather and display this information in an easy-to-use and intuitive way to aid performance analysts in identifying problematic data structures in their codes. This tool was primarily designed for use in the supercomputer domain as well as grid, cluster, cloud-based parallel e-commerce, and engineering systems and middleware. Ultimately, we envision a tool to automate optimization of application cache layout and utilization in the Open|SpeedShop performance analysis tool. To commercialize this soft- ware, we worked to develop core capabilities for gathering enhanced memory usage per- formance data from applications and create and apply novel methods for automatic data structure layout optimizations, tailoring the overall approach to support existing supercom- puter and cluster programming models and constraints. In this Phase I project, we focused on infrastructure necessary to gather performance data and present it in an intuitive way to users. With the advent of enhanced Precise Event-Based Sampling (PEBS) counters on recent Intel processor architectures and equivalent technology on AMD processors, we are now in a position to access memory reference information for particular addresses. Prior to the introduction of PEBS counters
Optimization Tools For Automated Vehicle Systems
Shiller, Zvi
1995-01-01
This work focuses on computing time-optimal maneuvers which might be used to develop strategies for emergency maneuvers and establishing the vehicle' s performance envelope. The problem of emergency maneuvers is addressed in the context of time optimal control. Time optimal trajectories are computed along specified paths for a nonlinear vehicle model, which considers both lateral and longitudinal motions.
Automated firewall analytics design, configuration and optimization
Al-Shaer, Ehab
2014-01-01
This book provides a comprehensive and in-depth study of automated firewall policy analysis for designing, configuring and managing distributed firewalls in large-scale enterpriser networks. It presents methodologies, techniques and tools for researchers as well as professionals to understand the challenges and improve the state-of-the-art of managing firewalls systematically in both research and application domains. Chapters explore set-theory, managing firewall configuration globally and consistently, access control list with encryption, and authentication such as IPSec policies. The author
Automated beam steering using optimal control
We present a steering algorithm which, with the aid of a model, allows the user to specify beam behavior throughout a beamline, rather than just at specified beam position monitor (BPM) locations. The model is used primarily to compute the values of the beam phase vectors from BPM measurements, and to define cost functions that describe the steering objectives. The steering problem is formulated as constrained optimization problem; however, by applying optimal control theory we can reduce it to an unconstrained optimization whose dimension is the number of control signals.
Optimization and Coordination of HAFDV PINN Control by Improved PSO
Bin Huang; Nengling Tai; Wentao Huang
2013-01-01
The new hybrid active filter (HAF) is composed of the larger-capacity passive filter banks and the smaller-capacity active filter. It is difficult to tune the parameters of a PI controller using the DC capacitor voltage control. In this paper, the improved particle swarm optimization (improved PSO) algorithm is proposed to solve the coordinated design problem, and the neural network weights as the particle swarm optimization are adopted to optimize the system parameters. Comparing with the co...
Agent Technology Application in Automating the Coordination and Decision-Making in Supply Chain
JIE Hui; JI Jian-hua
2005-01-01
Coordinating all the activities among all the parties involved in supply chain can be a daunting task. This paper put forth the viewpoint of applying agent technology to automate the coordination and decision-making tasks in a typical home PC industry supply chain. The main features of the proposed approach, which differentiate it cesses and issues faced by parties in the supply chain. A prototype and the overall process flow were also described.
Automated global optimization of commercial SAGD operations
The economic optimization of steam assisted gravity drainage (SAGD) operations has been largely conducted through the use of simulations to identify optimal steam use approaches. In this study, the cumulative steam to oil ratio (CSOR) was optimized by altering the steam injection pressure throughout the evolution of the process in a detailed, 3-d reservoir model. A generic Athabasca simulation model was used along with a thermal reservoir simulator which used a corner point grid. A line heater was specified in the grid cells containing the well bores to mimic steam circulation. During heating, the injection and production locations were allowed to produce reservoir fluids from the reservoir to relieve pressure associated with the thermal expansion of oil sand. After steam circulation, the well bores were switched to an SAGD operation. At the producer well the operating constraint imposed a maximum temperature difference between the saturation temperature corresponding to the pressure of the fluids and the temperature in the wellbore equal to 5 degrees C. At the injection well, the steam injection pressure was specified according to the optimizer. A response surface was constructed by fitting the parameter sets and corresponding cost functions to a biquadratic function. After the minimum from the cost function was determined, a new set of parameters was selected to complete the iterations. Results indicated that optimization of SAGD is feasible with complex and detailed reservoir models by using parallel calculations. The general trend determined by the optimization algorithm developed in the research indicated that before the steam chamber contacts the overburden, the operating pressure should be relatively high. After contact is made, the injection pressure should be lowered to reduce heat losses. 17 refs., 1 tab., 5 figs
Optimal Protection Coordination for Microgrid under Different Operating Modes
Ming-Ta Yang; Li-Feng Chang
2013-01-01
Significant consequences result when a microgrid is connected to a distribution system. This study discusses the impacts of bolted three-phase faults and bolted single line-to-ground faults on the protection coordination of a distribution system connected by a microgrid which operates in utility-only mode or in grid-connected mode. The power system simulation software is used to build the test system. The linear programming method is applied to optimize the coordination of relays, and the rel...
Learning the optimal control of coordinated eye and head movements.
Sohrab Saeb
2011-11-01
Full Text Available Various optimality principles have been proposed to explain the characteristics of coordinated eye and head movements during visual orienting behavior. At the same time, researchers have suggested several neural models to underly the generation of saccades, but these do not include online learning as a mechanism of optimization. Here, we suggest an open-loop neural controller with a local adaptation mechanism that minimizes a proposed cost function. Simulations show that the characteristics of coordinated eye and head movements generated by this model match the experimental data in many aspects, including the relationship between amplitude, duration and peak velocity in head-restrained and the relative contribution of eye and head to the total gaze shift in head-free conditions. Our model is a first step towards bringing together an optimality principle and an incremental local learning mechanism into a unified control scheme for coordinated eye and head movements.
Coordination and Emergence in the Cellular Automated Fashion Game
Cao, Zhigang; Qu, Xinglong; Yang, Mingmin; Yang, Xiaoguang
2012-01-01
We investigate a heterogeneous cellular automaton, where there are two types of agents, conformists and rebels. Each agent has to choose between two actions, 0 and 1. A conformist likes to choose an action that most of her neighbors choose, while in contrast a rebel wants to be different with most of her neighbors. Theoretically, this model is equivalent to the matching pennies game on regular networks. We study the dynamical process by assuming that each agent takes a myopic updating rule. An uniform updating probability is also introduced for each agent to study the whole spectrum from synchronous updating to asynchronous updating. Our model characterizes the phenomenon of fashion very well and has a great potential in the study of the finance and stock markets. A large number of simulations show that in most case agents can reach extraordinarily high degree of coordination. This process is also quite fast and steady. Considering that these dynamics are really simple, agents are selfish, myopic, and have ve...
Towards Automated Design, Analysis and Optimization of Declarative Curation Workflows
Tianhong Song
2014-10-01
Full Text Available Data curation is increasingly important. Our previous work on a Kepler curation package has demonstrated advantages that come from automating data curation pipelines by using workflow systems. However, manually designed curation workflows can be error-prone and inefficient due to a lack of user understanding of the workflow system, misuse of actors, or human error. Correcting problematic workflows is often very time-consuming. A more proactive workflow system can help users avoid such pitfalls. For example, static analysis before execution can be used to detect the potential problems in a workflow and help the user to improve workflow design. In this paper, we propose a declarative workflow approach that supports semi-automated workflow design, analysis and optimization. We show how the workflow design engine helps users to construct data curation workflows, how the workflow analysis engine detects different design problems of workflows and how workflows can be optimized by exploiting parallelism.
Automated Multivariate Optimization Tool for Energy Analysis: Preprint
Ellis, P. G.; Griffith, B. T.; Long, N.; Torcellini, P. A.; Crawley, D.
2006-07-01
Building energy simulations are often used for trial-and-error evaluation of ''what-if'' options in building design--a limited search for an optimal solution, or ''optimization''. Computerized searching has the potential to automate the input and output, evaluate many options, and perform enough simulations to account for the complex interactions among combinations of options. This paper describes ongoing efforts to develop such a tool. The optimization tool employs multiple modules, including a graphical user interface, a database, a preprocessor, the EnergyPlus simulation engine, an optimization engine, and a simulation run manager. Each module is described and the overall application architecture is summarized.
Optimal design of coordination control strategy for distributed generation system
WANG Ai-hua; Norapon Kanjanapadit
2009-01-01
This paper presents a novel design procedure for optimizing the power distribution strategy in distributed generation system. A coordinating controller, responsible to distribute the total load power request among multiple DG units, is suggested based on the conception of hierarchical control structure in the dynamic system.The optimal control problem was formulated as a nonlinear optimization problem subject to set of constraints.The resulting problem was solved using the Kutm-Tucker method. Computer simulation results demonstrate that the proposed method can provide better efficiency in terms of reducing total costs compared to existing methods.In addition, the proposed optimal load distribution strategy can be easily implemented in real-time thanks to the simplicity of closed-form solutions.
Block-Coordinate Frank-Wolfe Optimization for Structural SVMs
Lacoste-Julien, Simon; Jaggi, Martin; Schmidt, Mark; Pletscher, Patrick
2012-01-01
We propose a randomized block-coordinate variant of the classic Frank-Wolfe algorithm for convex optimization with block-separable constraints. Despite its lower iteration cost, we show that it achieves a similar convergence rate in duality gap as the full Frank-Wolfe algorithm. We also show that, when applied to the dual structural support vector machine (SVM) objective, this yields an online algorithm that has the same low iteration complexity as primal stochastic subgradient methods. Howev...
Coordinated parallelizing compiler optimizations and high-level synthesis
S Gupta; Gupta, R. K.; Dutt, N D; Nicolau, A.
2004-01-01
We present a high-level synthesis methodology that applies a coordinated set of coarse-grain and fine-grain parallelizing transformations. The transformations are applied both during a presynthesis phase and during scheduling, with the objective of optimizing the results of synthesis and reducing the impact of control flow constructs on the quality of results. We first apply a set of source level presynthesis transformations that include common sub-expression elimination (CSE), copy propagati...
Optimal Protection Coordination for Microgrid under Different Operating Modes
Ming-Ta Yang
2013-01-01
Full Text Available Significant consequences result when a microgrid is connected to a distribution system. This study discusses the impacts of bolted three-phase faults and bolted single line-to-ground faults on the protection coordination of a distribution system connected by a microgrid which operates in utility-only mode or in grid-connected mode. The power system simulation software is used to build the test system. The linear programming method is applied to optimize the coordination of relays, and the relays coordination simulation software is used to verify if the coordination time intervals (CTIs of the primary/backup relay pairs are adequate. In addition, this study also proposes a relays protection coordination strategy when the microgrid operates in islanding mode during a utility power outage. Because conventional CO/LCO relays are not capable of detecting high impedance fault, intelligent electrical device (IED combined with wavelet transformer and neural network is proposed to accurately detect high impedance fault and identify the fault phase.
An Liu
2012-01-01
Full Text Available Coordination optimization of directional overcurrent relays (DOCRs is an important part of an efficient distribution system. This optimization problem involves obtaining the time dial setting (TDS and pickup current (Ip values of each DOCR. The optimal results should have the shortest primary relay operating time for all fault lines. Recently, the particle swarm optimization (PSO algorithm has been considered an effective tool for linear/nonlinear optimization problems with application in the protection and coordination of power systems. With a limited runtime period, the conventional PSO considers the optimal solution as the final solution, and an early convergence of PSO results in decreased overall performance and an increase in the risk of mistaking local optima for global optima. Therefore, this study proposes a new hybrid Nelder-Mead simplex search method and particle swarm optimization (proposed NM-PSO algorithm to solve the DOCR coordination optimization problem. PSO is the main optimizer, and the Nelder-Mead simplex search method is used to improve the efficiency of PSO due to its potential for rapid convergence. To validate the proposal, this study compared the performance of the proposed algorithm with that of PSO and original NM-PSO. The findings demonstrate the outstanding performance of the proposed NM-PSO in terms of computation speed, rate of convergence, and feasibility.
Optimizing Watershed Management by Coordinated Operation of Storing Facilities
Anghileri, Daniela; Castelletti, Andrea; Pianosi, Francesca; Soncini-Sessa, Rodolfo; Weber, Enrico
2013-04-01
Water storing facilities in a watershed are very often operated independently one to another to meet specific operating objectives, with no information sharing among the operators. This uncoordinated approach might result in upstream-downstream disputes and conflicts among different water users, or inefficiencies in the watershed management, when looked at from the viewpoint of an ideal central decision-maker. In this study, we propose an approach in two steps to design coordination mechanisms at the watershed scale with the ultimate goal of enlarging the space for negotiated agreements between competing uses and improve the overall system efficiency. First, we compute the multi-objective centralized solution to assess the maximum potential benefits of a shift from a sector-by-sector to an ideal fully coordinated perspective. Then, we analyze the Pareto-optimal operating policies to gain insight into suitable strategies to foster cooperation or impose coordination among the involved agents. The approach is demonstrated on an Alpine watershed in Italy where a long lasting conflict exists between upstream hydropower production and downstream irrigation water users. Results show that a coordination mechanism can be designed that drive the current uncoordinated structure towards the performance of the ideal centralized operation.
Optimal deadlock avoidance Petri net supervisors for automated manufacturing systems
Keyi XING; Feng TIAN; Xiaojun YANG
2007-01-01
Deadlock avoidance problems are investigated for automated manufacturing systems with flexible routings.Based on the Petri net models of the systems, this paper proposes, for the first time, the concept of perfect maximal resourcetransition circuits and their saturated states. The concept facilitates the development of system liveness characterization and deadlock avoidance Petri net supervisors. Deadlock is characterized as some perfect maximal resource-transition circuits reaching their saturated states. For a large class of manufacturing systems, which do not contain center resources, the optimal deadlock avoidance Petri net supervisors are presented. For a general manufacturing system, a method is proposed for reducing the system Petri net model so that the reduced model does not contain center resources and, hence, has optimal deadlock avoidance Petri net supervisor. The controlled reduced Petri net model can then be used as the liveness supervisor of the system.
Automated assay optimization with integrated statistics and smart robotics.
Taylor, P B; Stewart, F P; Dunnington, D J; Quinn, S T; Schulz, C K; Vaidya, K S; Kurali, E; Lane, T R; Xiong, W C; Sherrill, T P; Snider, J S; Terpstra, N D; Hertzberg, R P
2000-08-01
The transition from manual to robotic high throughput screening (HTS) in the last few years has made it feasible to screen hundreds of thousands of chemical entities against a biological target in less than a month. This rate of HTS has increased the visibility of bottlenecks, one of which is assay optimization. In many organizations, experimental methods are generated by therapeutic teams associated with specific targets and passed on to the HTS group. The resulting assays frequently need to be further optimized to withstand the rigors and time frames inherent in robotic handling. Issues such as protein aggregation, ligand instability, and cellular viability are common variables in the optimization process. The availability of robotics capable of performing rapid random access tasks has made it possible to design optimization experiments that would be either very difficult or impossible for a person to carry out. Our approach to reducing the assay optimization bottleneck has been to unify the highly specific fields of statistics, biochemistry, and robotics. The product of these endeavors is a process we have named automated assay optimization (AAO). This has enabled us to determine final optimized assay conditions, which are often a composite of variables that we would not have arrived at by examining each variable independently. We have applied this approach to both radioligand binding and enzymatic assays and have realized benefits in both time and performance that we would not have predicted a priori. The fully developed AAO process encompasses the ability to download information to a robot and have liquid handling methods automatically created. This evolution in smart robotics has proven to be an invaluable tool for maintaining high-quality data in the context of increasing HTS demands. PMID:10992042
Hu, Jie; Peng, Yinghong; Xiong, Guangleng
2007-01-01
Abstract This study presents a parameter coordination and robust optimization approach based on knowledge network modeling. The method allows multidisciplinary designer to synthetically coordinate and optimize parameter considering multidisciplinary knowledge. First, a knowledge network model is established, including design knowledge from assembly, manufacture, performance, and simulation. Second, the parameter coordination method is presented to solve the knowledge network model,...
Optimizing wireless LAN for longwall coal mine automation
Hargrave, C.O.; Ralston, J.C.; Hainsworth, D.W. [Exploration & Mining Commonwealth Science & Industrial Research Organisation, Pullenvale, Qld. (Australia)
2007-01-15
A significant development in underground longwall coal mining automation has been achieved with the successful implementation of wireless LAN (WLAN) technology for communication on a longwall shearer. WIreless-FIdelity (Wi-Fi) was selected to meet the bandwidth requirements of the underground data network, and several configurations were installed on operating longwalls to evaluate their performance. Although these efforts demonstrated the feasibility of using WLAN technology in longwall operation, it was clear that new research and development was required in order to establish optimal full-face coverage. By undertaking an accurate characterization of the target environment, it has been possible to achieve great improvements in WLAN performance over a nominal Wi-Fi installation. This paper discusses the impact of Fresnel zone obstructions and multipath effects on radio frequency propagation and reports an optimal antenna and system configuration. Many of the lessons learned in the longwall case are immediately applicable to other underground mining operations, particularly wherever there is a high degree of obstruction from mining equipment.
Development of An Optimization Method for Determining Automation Rate in Nuclear Power Plants
Since automation was introduced in various industrial fields, it has been known that automation provides positive effects like greater efficiency and fewer human errors, and negative effect defined as out-of-the-loop (OOTL). Thus, before introducing automation in nuclear field, the estimation of the positive and negative effects of automation on human operators should be conducted. In this paper, by focusing on CPS, the optimization method to find an appropriate proportion of automation is suggested by integrating the suggested cognitive automation rate and the concepts of the level of ostracism. The cognitive automation rate estimation method was suggested to express the reduced amount of human cognitive loads, and the level of ostracism was suggested to express the difficulty in obtaining information from the automation system and increased uncertainty of human operators' diagnosis. The maximized proportion of automation that maintains the high level of attention for monitoring the situation is derived by an experiment, and the automation rate is estimated by the suggested automation rate estimation method. It is expected to derive an appropriate inclusion proportion of the automation system avoiding the OOTL problem and having maximum efficacy at the same time
Optimal Coordinated Strategy Analysis for the Procurement Logistics of a Steel Group
2014-01-01
This paper focuses on the optimization of an internal coordinated procurement logistics system in a steel group and the decision on the coordinated procurement strategy by minimizing the logistics costs. Considering the coordinated procurement strategy and the procurement logistics costs, the aim of the optimization model was to maximize the degree of quality satisfaction and to minimize the procurement logistics costs. The model was transformed into a single-objective model and solved using ...
A New View on Geometry Optimization: the Quasi-Independent Curvilinear Coordinate Approximation
Németh, Károly; Challacombe, Matt
2004-01-01
This article presents a new and efficient alternative to well established algorithms for molecular geometry optimization. The new approach exploits the approximate decoupling of molecular energetics in a curvilinear internal coordinate system, allowing separation of the 3N-dimensional optimization problem into an O(N) set of quasi-independent one-dimensional problems. Each uncoupled optimization is developed by a weighted least squares fit of energy gradients in the internal coordinate system...
Mohamed Zellagui; Rabah Benabid; Mohamed Boudour; Abdelaziz Chaghi
2015-01-01
Optimal coordination of Inverse Definite Minimum Time (IDMT) direction overcurrent relays in the power systems in the presence of multiple Thyristor Controller Series Capacitor (TCSC) on inductive and capacitive operation mode on meshed power system is studied in this paper. The coordination problem is formulated as a non-linear constrained mono-objective optimization problem. The objective function of this optimization problem is the minimization of the operation time (T) of the associated r...
Optimal Coordinated Strategy Analysis for the Procurement Logistics of a Steel Group
Lianbo Deng
2014-01-01
Full Text Available This paper focuses on the optimization of an internal coordinated procurement logistics system in a steel group and the decision on the coordinated procurement strategy by minimizing the logistics costs. Considering the coordinated procurement strategy and the procurement logistics costs, the aim of the optimization model was to maximize the degree of quality satisfaction and to minimize the procurement logistics costs. The model was transformed into a single-objective model and solved using a simulated annealing algorithm. In the algorithm, the supplier of each subsidiary was selected according to the evaluation result for independent procurement. Finally, the effect of different parameters on the coordinated procurement strategy was analysed. The results showed that the coordinated strategy can clearly save procurement costs; that the strategy appears to be more cooperative when the quality requirement is not stricter; and that the coordinated costs have a strong effect on the coordinated procurement strategy.
Axdahl, Erik L.
2015-01-01
Removing human interaction from design processes by using automation may lead to gains in both productivity and design precision. This memorandum describes efforts to incorporate high fidelity numerical analysis tools into an automated framework and applying that framework to applications of practical interest. The purpose of this effort was to integrate VULCAN-CFD into an automated, DAKOTA-enabled framework with a proof-of-concept application being the optimization of supersonic test facility nozzles. It was shown that the optimization framework could be deployed on a high performance computing cluster with the flow of information handled effectively to guide the optimization process. Furthermore, the application of the framework to supersonic test facility nozzle flowpath design and optimization was demonstrated using multiple optimization algorithms.
Optimal Coordination of Automatic Line Switches for Distribution Systems
Jyh-Cherng Gu; Ming-Ta Yang
2012-01-01
For the Taiwan Power Company (Taipower), the margins of coordination times between the lateral circuit breakers (LCB) of underground 4-way automatic line switches and the protection equipment of high voltage customers are often too small. This could lead to sympathy tripping by the feeder circuit breaker (FCB) of the distribution feeder and create difficulties in protection coordination between upstream and downstream protection equipment, identification of faults, and restoration operations....
Automated analysis of dUT1 with VieVS using new post-earthquake coordinates for Tsukuba
Kareinen, N.; Uunila, M.
2013-08-01
The automated analysis of dUT1 from intensive sessions performed in Aalto University Metsähovi Radio Observatory include IVS-INT2 and IVS-INT3 sessions. These sessions are sensitive to a priori positions of the stations due to the small number of baselines. We analyze IVS-R1 sessions to estimate the new a priori coordinates for Tsukuba affected by the March 2011 Tohoku Earthquake in order to include IVS-INT2 and to improve the accuracy of IVS-INT3 sessions in the analysis. The procedure for utilising new a priori coordinates is automated and included in the dUT1 analysis. It can be utilised in case of another event distrupting the stations in the observation network.
Application of Advanced Particle Swarm Optimization Techniques to Wind-thermal Coordination
Singh, Sri Niwas; Østergaard, Jacob; Yadagiri, J.
wind-thermal coordination algorithm is necessary to determine the optimal proportion of wind and thermal generator capacity that can be integrated into the system. In this paper, four versions of Particle Swarm Optimization (PSO) techniques are proposed for solving wind-thermal coordination problem. A...... pseudo code based algorithm is suggested to deal with the equality constraints of the problem for accelerating the optimization process. The simulation results show that the proposed PSO methods are capable of obtaining higher quality solutions efficiently in wind-thermal coordination problems....
Nabil Mancer
2015-01-01
Full Text Available The integration of system compensation such as Series Compensator (SC into the transmission line makes the coordination of directional overcurrent in a practical power system important and complex. This article presents an efficient variant of Particle Swarm Optimization (PSO algorithm based on Time-Varying Acceleration Coefficients (PSO-TVAC for optimal coordination of directional overcurrent relays (DOCRs considering the integration of series compensation. Simulation results are compared to other methods to confirm the efficiency of the proposed variant PSO in solving the optimal coordination of directional overcurrent relay in the presence of series compensation.
Review of Automated Design and Optimization of MEMS
Achiche, Sofiane; Fan, Zhun; Bolognini, Francesca
2007-01-01
In recent years MEMS saw a very rapid development. Although many advances have been reached, due to the multiphysics nature of MEMS, their design is still a difficult task carried on mainly by hand calculation. In order to help to overtake such difficulties, attempts to automate MEMS design were...... carried out. This paper presents a review of these techniques. The design task of MEMS is usually divided into four main stages: System Level, Device Level, Physical Level and the Process Level. The state of the art o automated MEMS design in each of these levels is investigated....
Optimal Sampling of a Reaction Coordinate in Molecular Dynamics
Pohorille, Andrew
2005-01-01
Estimating how free energy changes with the state of a system is a central goal in applications of statistical mechanics to problems of chemical or biological interest. From these free energy changes it is possible, for example, to establish which states of the system are stable, what are their probabilities and how the equilibria between these states are influenced by external conditions. Free energies are also of great utility in determining kinetics of transitions between different states. A variety of methods have been developed to compute free energies of condensed phase systems. Here, I will focus on one class of methods - those that allow for calculating free energy changes along one or several generalized coordinates in the system, often called reaction coordinates or order parameters . Considering that in almost all cases of practical interest a significant computational effort is required to determine free energy changes along such coordinates it is hardly surprising that efficiencies of different methods are of great concern. In most cases, the main difficulty is associated with its shape along the reaction coordinate. If the free energy changes markedly along this coordinate Boltzmann sampling of its different values becomes highly non-uniform. This, in turn, may have considerable, detrimental effect on the performance of many methods for calculating free energies.
Fursin, Grigori
2009-01-01
International audience Computing systems rarely deliver best possible performance due to ever increasing hardware and software complexity and limitations of the current optimization technology. Additional code and architecture optimizations are often required to improve execution time, size, power consumption, reliability and other important characteristics of computing systems. However, it is often a tedious, repetitive, isolated and time consuming process. In order to automate, simplify ...
Lyapunov-based Low-thrust Optimal Orbit Transfer: An approach in Cartesian coordinates
Zhang, Hantian; Cao, Qingjie
2014-01-01
This paper presents a simple approach to low-thrust optimal-fuel and optimal-time transfer problems between two elliptic orbits using the Cartesian coordinates system. In this case, an orbit is described by its specific angular momentum and Laplace vectors with a free injection point. Trajectory optimization with the pseudospectral method and nonlinear programming are supported by the initial guess generated from the Chang-Chichka-Marsden Lyapunov-based transfer controller. This approach successfully solves several low-thrust optimal problems. Numerical results show that the Lyapunov-based initial guess overcomes the difficulty in optimization caused by the strong oscillation of variables in the Cartesian coordinates system. Furthermore, a comparison of the results shows that obtaining the optimal transfer solution through the polynomial approximation by utilizing Cartesian coordinates is easier than using orbital elements, which normally produce strongly nonlinear equations of motion. In this paper, the Eart...
L. Abramova; Chernobaev, N.
2007-01-01
A short review of main methods of traffic flow control is represented, great attention is paid to methods of coordinated control and quality characteristics of traffic control. The problem of parameter optimization of traffic coordinated control on the basis of vehicle delay minimizing at highway intersections has been defined.
COORDINATION MECHANISM COMBINING SUPPLY CHAIN OPTIMIZATION AND RULE IN EXCHANGE
JINSHI ZHAO; JIAZHEN HUO
2013-01-01
There are two kinds of option pricing. The option pricing in exchange follows the Black–Scholes rule but does not consider the optimizing of supply chain. The traditional supply chain option contract can optimize supply chain but does not meet the Black–Scholes rule. We integrate the assumption of above two kinds of option pricing, and design a model to combine the Black–Scholes rule and traditional option contract of optimizing in a supplier-led supply chain. Our combined model can guide the...
Towards Automated Design, Analysis and Optimization of Declarative Curation Workflows
Tianhong Song; Sven Köhler; Bertram Ludäscher; James Hanken; Maureen Kelly; David Lowery; Macklin, James A.; Morris, Paul J.; Morris, Robert A.
2014-01-01
Data curation is increasingly important. Our previous work on a Kepler curation package has demonstrated advantages that come from automating data curation pipelines by using workflow systems. However, manually designed curation workflows can be error-prone and inefficient due to a lack of user understanding of the workflow system, misuse of actors, or human error. Correcting problematic workflows is often very time-consuming. A more proactive workflow system can help users avoid such pitfal...
Xing Wu; Peihuang Lou; Dunbing Tang
2011-01-01
This paper presents a multi-objective genetic algorithm (MOGA) with Pareto optimality and elitist tactics for the control system design of automated guided vehicle (AGV). The MOGA is used to identify AGV driving system model and optimize its servo control system sequentially. In system identification, the model identified by least square method is adopted as an evolution tutor who selects the individuals having balanced performances in all objectives as elitists. In controller optimization, t...
Optimization of composite carriage for a coordinate measurement machine
Lombardi, Marco
1994-01-01
The growing need for high quality and reliability of products requires the control of the accuracy of dimensions and shape of product components. Coordinate Measurement Machines (CMM) are now able to measure the dimensions and/or the shape of objects with submicron precision. The desire for high-speed measurement, has stimulated the interest of CMM manufacturers in the use of composite materials for the structure of their machines. Composites are lighter than conventional mater...
A PLM-based automated inspection planning system for coordinate measuring machine
Zhao, Haibin; Wang, Junying; Wang, Boxiong; Wang, Jianmei; Chen, Huacheng
2006-11-01
With rapid progress of Product Lifecycle Management (PLM) in manufacturing industry, automatic generation of inspection planning of product and the integration with other activities in product lifecycle play important roles in quality control. But the techniques for these purposes are laggard comparing with techniques of CAD/CAM. Therefore, an automatic inspection planning system for Coordinate Measuring Machine (CMM) was developed to improve the automatization of measuring based on the integration of inspection system in PLM. Feature information representation is achieved based on a PLM canter database; measuring strategy is optimized through the integration of multi-sensors; reasonable number and distribution of inspection points are calculated and designed with the guidance of statistic theory and a synthesis distribution algorithm; a collision avoidance method is proposed to generate non-collision inspection path with high efficiency. Information mapping is performed between Neutral Interchange Files (NIFs), such as STEP, DML, DMIS, XML, etc., to realize information integration with other activities in the product lifecycle like design, manufacturing and inspection execution, etc. Simulation was carried out to demonstrate the feasibility of the proposed system. As a result, the inspection process is becoming simpler and good result can be got based on the integration in PLM.
A sensitivity-based coordination method for optimization of product families
Zou, Jun; Yao, Wei-Xing; Xia, Tian-Xiang
2016-07-01
This article provides an introduction to a decomposition-based method for the optimization of product families with predefined platforms. To improve the efficiency of the system coordinator, a new sensitivity-based coordination method (SCM) is proposed. The key idea in SCM is that the system level coordinates share variables by using sensitivity information to make trade-offs between the product subsystems. The coordinated shared variables are determined by minimizing the performance deviation with respect to the optimal design of subproblems and constraint violation incurred by sharing. Each subproblem has a significant degree of independence and can be solved in a simultaneous way. The numerical performance of SCM is investigated, and the results suggest that the new approach is robust and leads to a substantial reduction in computational effort compared with the analytical target cascading method. Then, the proposed methodology is applied to the structural optimization of a family of automotive body side-frames.
An Automated Tool for Optimizing Waste Transportation Routing and Scheduling
An automated software tool has been developed and implemented to increase the efficiency and overall life-cycle productivity of site cleanup by scheduling vehicle and container movement between waste generators and disposal sites on the Department of Energy's Oak Ridge Reservation. The software tool identifies the best routes or accepts specifically requested routes and transit times, looks at fleet availability, selects the most cost effective route for each waste stream, and creates a transportation schedule in advance of waste movement. This tool was accepted by the customer and has been implemented. (authors)
Axon Membrane Skeleton Structure is Optimized for Coordinated Sodium Propagation
Zhang, Yihao; Li, He; Tzingounis, Anastasios V; Lykotrafitis, George
2016-01-01
Axons transmit action potentials with high fidelity and minimal jitter. This unique capability is likely the result of the spatiotemporal arrangement of sodium channels along the axon. Super-resolution microscopy recently revealed that the axon membrane skeleton is structured as a series of actin rings connected by spectrin filaments that are held under entropic tension. Sodium channels also exhibit a periodic distribution pattern, as they bind to ankyrin G, which associates with spectrin. Here, we elucidate the relationship between the axon membrane skeleton structure and the function of the axon. By combining cytoskeletal dynamics and continuum diffusion modeling, we show that spectrin filaments under tension minimize the thermal fluctuations of sodium channels and prevent overlap of neighboring channel trajectories. Importantly, this axon skeletal arrangement allows for a highly reproducible band-like activation of sodium channels leading to coordinated sodium propagation along the axon.
WANG Ya-lin; MA Jie; GUI Wei-hua; YANG Chun-hua; ZHANG Chuan-fu
2006-01-01
A multi-objective intelligent coordinating optimization strategy based on qualitative and quantitative synthetic model for Pb-Zn sintering blending process was proposed to obtain optimal mixture ratio. The mechanism and neural network quantitative models for predicting compositions and rule models for expert reasoning were constructed based on statistical data and empirical knowledge. An expert reasoning method based on these models were proposed to solve blending optimization problem, including multi-objective optimization for the first blending process and area optimization for the second blending process, and to determine optimal mixture ratio which will meet the requirement of intelligent coordination. The results show that the qualified rates of agglomerate Pb, Zn and S compositions are increased by 7.1%, 6.5% and 6.9%, respectively, and the fluctuation of sintering permeability is reduced by 7.0 %, which effectively stabilizes the agglomerate compositions and the permeability.
apsis - Framework for Automated Optimization of Machine Learning Hyper Parameters
Diehl, Frederik; Jauch, Andreas
2015-01-01
The apsis toolkit presented in this paper provides a flexible framework for hyperparameter optimization and includes both random search and a bayesian optimizer. It is implemented in Python and its architecture features adaptability to any desired machine learning code. It can easily be used with common Python ML frameworks such as scikit-learn. Published under the MIT License other researchers are heavily encouraged to check out the code, contribute or raise any suggestions. The code can be ...
Zheping Yan; Chao Deng; Benyin Li; Jiajia Zhou
2014-01-01
A novel improved particle swarm algorithm named competition particle swarm optimization (CPSO) is proposed to calibrate the Underwater Transponder coordinates. To improve the performance of the algorithm, TVAC algorithm is introduced into CPSO to present an extension competition particle swarm optimization (ECPSO). The proposed method is tested with a set of 10 standard optimization benchmark problems and the results are compared with those obtained through existing PSO algorithms, basic par...
Advanced Coordinating Control System for Power Plant
WU Peng; WEI Shuangying
2006-01-01
The coordinating control system is popular used in power plant. This paper describes the advanced coordinating control by control methods and optimal operation, introduces their principals and features by using the examples of power plant operation. It is wealthy for automation application in optimal power plant operation.
Birkholz, Adam B; Schlegel, H Bernhard
2016-05-14
Reaction path optimization is being used more frequently as an alternative to the standard practice of locating a transition state and following the path downhill. The Variational Reaction Coordinate (VRC) method was proposed as an alternative to chain-of-states methods like nudged elastic band and string method. The VRC method represents the path using a linear expansion of continuous basis functions, allowing the path to be optimized variationally by updating the expansion coefficients to minimize the line integral of the potential energy gradient norm, referred to as the Variational Reaction Energy (VRE) of the path. When constraints are used to control the spacing of basis functions and to couple the minimization of the VRE with the optimization of one or more individual points along the path (representing transition states and intermediates), an approximate path as well as the converged geometries of transition states and intermediates along the path are determined in only a few iterations. This algorithmic efficiency comes at a high per-iteration cost due to numerical integration of the VRE derivatives. In the present work, methods for incorporating redundant internal coordinates and potential energy surface interpolation into the VRC method are described. With these methods, the per-iteration cost, in terms of the number of potential energy surface evaluations, of the VRC method is reduced while the high algorithmic efficiency is maintained. PMID:27179465
Birkholz, Adam B.; Schlegel, H. Bernhard
2016-05-01
Reaction path optimization is being used more frequently as an alternative to the standard practice of locating a transition state and following the path downhill. The Variational Reaction Coordinate (VRC) method was proposed as an alternative to chain-of-states methods like nudged elastic band and string method. The VRC method represents the path using a linear expansion of continuous basis functions, allowing the path to be optimized variationally by updating the expansion coefficients to minimize the line integral of the potential energy gradient norm, referred to as the Variational Reaction Energy (VRE) of the path. When constraints are used to control the spacing of basis functions and to couple the minimization of the VRE with the optimization of one or more individual points along the path (representing transition states and intermediates), an approximate path as well as the converged geometries of transition states and intermediates along the path are determined in only a few iterations. This algorithmic efficiency comes at a high per-iteration cost due to numerical integration of the VRE derivatives. In the present work, methods for incorporating redundant internal coordinates and potential energy surface interpolation into the VRC method are described. With these methods, the per-iteration cost, in terms of the number of potential energy surface evaluations, of the VRC method is reduced while the high algorithmic efficiency is maintained.
Automated Finite Element Modeling of Wing Structures for Shape Optimization
Harvey, Michael Stephen
1993-01-01
The displacement formulation of the finite element method is the most general and most widely used technique for structural analysis of airplane configurations. Modem structural synthesis techniques based on the finite element method have reached a certain maturity in recent years, and large airplane structures can now be optimized with respect to sizing type design variables for many load cases subject to a rich variety of constraints including stress, buckling, frequency, stiffness and aeroelastic constraints (Refs. 1-3). These structural synthesis capabilities use gradient based nonlinear programming techniques to search for improved designs. For these techniques to be practical a major improvement was required in computational cost of finite element analyses (needed repeatedly in the optimization process). Thus, associated with the progress in structural optimization, a new perspective of structural analysis has emerged, namely, structural analysis specialized for design optimization application, or.what is known as "design oriented structural analysis" (Ref. 4). This discipline includes approximation concepts and methods for obtaining behavior sensitivity information (Ref. 1), all needed to make the optimization of large structural systems (modeled by thousands of degrees of freedom and thousands of design variables) practical and cost effective.
Suleimanov, Yury V.; Green, William H.
2015-01-01
We present a simple protocol which allows fully automated discovery of elementary chemical reaction steps using in cooperation single- and double-ended transition-state optimization algorithms - the freezing string and Berny optimization methods, respectively. To demonstrate the utility of the proposed approach, the reactivity of several systems of combustion and atmospheric chemistry importance is investigated. The proposed algorithm allowed us to detect without any human intervention not on...
Automated Optimization of Walking Parameters for the Nao Humanoid Robot
N. Girardi; C. Kooijman; A.J. Wiggers; A. Visser
2013-01-01
This paper describes a framework for optimizing walking parameters for a Nao humanoid robot. In this case an omnidirectional walk is learned. The parameters are learned in simulation with an evolutionary approach. The best performance was obtained for a combination of a low mutation rate and a high
Automated Optimization of Walking Parameters for the Nao Humanoid Robot
Girardi, N.; Kooijman, C.; Wiggers, A.J.; de Visser, A.
2013-01-01
This paper describes a framework for optimizing walking parameters for a Nao humanoid robot. In this case an omnidirectional walk is learned. The parameters are learned in simulation with an evolutionary approach. The best performance was obtained for a combination of a low mutation rate and a high crossover rate.
Optimizing Supply Chain Performance in China with Country-Specific Supply Chain Coordination
Herczeg, András; Vastag, Gyula
2012-01-01
The implementation of country-specific supply chain coordination techniques ensures an optimal global supply chain performance. This paper looks at success factors going with a supply chain coordination strategy within the global supply chain of a successful, medium-sized, privately-owned company, one having locations in North America (USA), Europe (Hungary) and Asia (China). Through the shown GSL’s Chinese plant, we will endeavor to argue that increased collaboration in the supply network wi...
Qiu-Yu Lu; Wei Hu; Le Zheng; Yong Min; Miao Li; Xiao-Ping Li; Wei-Chun Ge; Zhi-Ming Wang
2012-01-01
Automatic Generation Control (AGC) and Automatic Voltage Control (AVC) are key approaches to frequency and voltage regulation in power systems. However, based on the assumption of decoupling of active and reactive power control, the existing AGC and AVC systems work independently without any coordination. In this paper, a concept and method of hybrid control is introduced to set up an Integrated Coordinated Optimization Control (ICOC) system for AGC and AVC. Concerning the diversity of contro...
Determinants of the Optimal Network Configuration and the Implications for Coordination
Patricia Deflorin; Helmut M. Dietl; Markus Lang; Eric Lucas
2011-01-01
This paper develops a simulation model to compare the performance of two stylized manufacturing networks: the lead factory network (LFN) and the archetype network (AN). The model identifies the optimal network configuration and its implications for coordination mechanisms. Using an NK simulation model to diffierentiate between exogenous factors (configuration) and endogenous factors (coordination), we find low complexity of the production process, low transfer costs and high search costs, as ...
Mohamed Zellagui
2015-01-01
Full Text Available Optimal coordination of Inverse Definite Minimum Time (IDMT direction overcurrent relays in the power systems in the presence of multiple Thyristor Controller Series Capacitor (TCSC on inductive and capacitive operation mode on meshed power system is studied in this paper. The coordination problem is formulated as a non-linear constrained mono-objective optimization problem. The objective function of this optimization problem is the minimization of the operation time (T of the associated relays in the systems, and the decision variables are: the time dial setting (TDS and the pickup current setting (IP of each relay. To solve this complex non linear optimization problem, a variant of evolutionary optimization techniques named Biogeography Based Optimization (BBO is used. The proposed algorithm is validated on IEEE 14-bus transmission network test system considering various scenarios. The obtained results show a high efficiency of the proposed method to solve such complex optimization problem, in such a way the relays coordination is guaranteed for all simulation scenarios with minimum operating time. The results of new relay setting are compared to other optimization algorithms.
Micro-simulation Modeling of Coordination of Automated Guided Vehicles at Intersection
Makarem, Laleh; Pham, Minh Hai; Dumont, André-Gilles; Gillet, Denis
2012-01-01
One of the challenging problems with autonomous vehicles is their performance at intersections. This paper shows an alternative control method for the coordination of autonomous vehicles at intersections. The proposed approach is grounded in multi-robot coordination and it also takes into account vehicle dynamics as well as realistic communication constraints. The existing concept of decentralized navigation functions is combined with a sensing model and a crossing strategy is developed. It i...
AMMOS: Automated Molecular Mechanics Optimization tool for in silico Screening
Pajeva Ilza
2008-10-01
Full Text Available Abstract Background Virtual or in silico ligand screening combined with other computational methods is one of the most promising methods to search for new lead compounds, thereby greatly assisting the drug discovery process. Despite considerable progresses made in virtual screening methodologies, available computer programs do not easily address problems such as: structural optimization of compounds in a screening library, receptor flexibility/induced-fit, and accurate prediction of protein-ligand interactions. It has been shown that structural optimization of chemical compounds and that post-docking optimization in multi-step structure-based virtual screening approaches help to further improve the overall efficiency of the methods. To address some of these points, we developed the program AMMOS for refining both, the 3D structures of the small molecules present in chemical libraries and the predicted receptor-ligand complexes through allowing partial to full atom flexibility through molecular mechanics optimization. Results The program AMMOS carries out an automatic procedure that allows for the structural refinement of compound collections and energy minimization of protein-ligand complexes using the open source program AMMP. The performance of our package was evaluated by comparing the structures of small chemical entities minimized by AMMOS with those minimized with the Tripos and MMFF94s force fields. Next, AMMOS was used for full flexible minimization of protein-ligands complexes obtained from a mutli-step virtual screening. Enrichment studies of the selected pre-docked complexes containing 60% of the initially added inhibitors were carried out with or without final AMMOS minimization on two protein targets having different binding pocket properties. AMMOS was able to improve the enrichment after the pre-docking stage with 40 to 60% of the initially added active compounds found in the top 3% to 5% of the entire compound collection
邹涛; 李少远
2005-01-01
In this paper, the feasibility and objectives coordination of real-time optimization (RTO) are systemically investigated under soft constraints. The reason for requiring soft constraints adjustment and objective relaxation simultaneously is that the result is not satisfactory when the feasible region is apart from the desired working point or the optimization problem is infeasible. The mixed logic method is introduced to describe the priority of the constraints and objectives, thereby the soft constraints adjustment and objectives coordination are solved together in RTO. A case study on the Shell heavy oil fractionators benchmark problem illustrating the method is finally presented.
Automation for pattern library creation and in-design optimization
Deng, Rock; Zou, Elain; Hong, Sid; Wang, Jinyan; Zhang, Yifan; Sweis, Jason; Lai, Ya-Chieh; Ding, Hua; Huang, Jason
2015-03-01
contain remedies built in so that fixing happens either automatically or in a guided manner. Building a comprehensive library of patterns is a very difficult task especially when a new technology node is being developed or the process keeps changing. The main dilemma is not having enough representative layouts to use for model simulation where pattern locations can be marked and extracted. This paper will present an automatic pattern library creation flow by using a few known yield detractor patterns to systematically expand the pattern library and generate optimized patterns. We will also look at the specific fixing hints in terms of edge movements, additive, or subtractive changes needed during optimization. Optimization will be shown for both the digital physical implementation and custom design methods.
Optimization of Fuse-Recloser Coordination and Dispersed Generation Capacity in Distribution Systems
Morteza Nojavan; Heresh Seyedi,; Kazem Zare; Arash Mahari
2014-01-01
In this paper, a novel protection of coordinating optimization algorithm is proposed. Maximizing the penetration‘s dispersed generation and at the same time minimizing the fuse’s operating time are the targets. A novel optimization technique, the Imperialistic Competition Algorithm (ICA), is applied to solve the problem. The results of simulations confirm that the proposed method leads to lower operating times of protective devices and higher possible DG penetration, compared with the tra...
Ming-Ta Yang; An Liu
2013-01-01
In power systems, determining the values of time dial setting (TDS) and the plug setting (PS) for directional overcurrent relays (DOCRs) is an extremely constrained optimization problem that has been previously described and solved as a nonlinear programming problem. Optimization coordination problems of near-end faults and far-end faults occurring simultaneously in circuits with various topologies, including fixed and variable network topologies, are considered in this study. The aim of thi...
Optimal Stochastic Coordinated Beamforming for Wireless Cooperative Networks with CSI Uncertainty
Shi, Yuanming; Zhang, Jun; Letaief, Khaled B.
2013-01-01
Transmit optimization and resource allocation for wireless cooperative networks with channel state information (CSI) uncertainty are important but challenging problems in terms of both the uncertainty modeling and performance op- timization. In this paper, we establish a generic stochastic coordinated beamforming (SCB) framework that provides flex- ibility in the channel uncertainty modeling, while guaranteeing optimality in the transmission strategies. We adopt a general stochastic model for...
Gamma knife treatments are usually planned manually, requiring much expertise and time. We describe a new, fully automatic method of treatment planning. The treatment volume to be planned is first compared with a database of past treatments to find volumes closely matching in size and shape. The treatment parameters of the closest matches are used as starting points for the new treatment plan. Further optimization is performed with the Nelder-Mead simplex method: the coordinates and weight of the isocenters are allowed to vary until a maximally conformal plan specific to the new treatment volume is found. The method was tested on a randomly selected set of 10 acoustic neuromas and 10 meningiomas. Typically, matching a new volume took under 30 seconds. The time for simplex optimization, on a 3 GHz Xeon processor, ranged from under a minute for small volumes (30 000 cubic mm,>20 isocenters). In 8/10 acoustic neuromas and 8/10 meningiomas, the automatic method found plans with conformation number equal or better than that of the manual plan. In 4/10 acoustic neuromas and 5/10 meningiomas, both overtreatment and undertreatment ratios were equal or better in automated plans. In conclusion, data-mining of past treatments can be used to derive starting parameters for treatment planning. These parameters can then be computer optimized to give good plans automatically
Application of multi-objective nonlinear optimization technique for coordinated ramp-metering
This paper aims at developing a multi-objective nonlinear optimization algorithm applied to coordinated motorway ramp metering. The multi-objective function includes two components: traffic and safety. Off-line simulation studies were performed on A4 France Motorway including 4 on-ramps
Application of multi-objective nonlinear optimization technique for coordinated ramp-metering
Haj Salem, Habib; Farhi, Nadir; Lebacque, Jean Patrick, E-mail: abib.haj-salem@ifsttar.fr, E-mail: nadir.frahi@ifsttar.fr, E-mail: jean-patrick.lebacque@ifsttar.fr [IFSTTAR/GRETITA, 14-20, Bd Newton, 77447 Marne-La-Vallée Cedex2 (France)
2015-03-10
This paper aims at developing a multi-objective nonlinear optimization algorithm applied to coordinated motorway ramp metering. The multi-objective function includes two components: traffic and safety. Off-line simulation studies were performed on A4 France Motorway including 4 on-ramps.
Xinlei Liu
2012-08-01
Full Text Available On the basis of the shifting process of automated mechanical transmissions (AMTs for traditional hybrid electric vehicles (HEVs, and by combining the features of electric machines with fast response speed, the dynamic model of the hybrid electric AMT vehicle powertrain is built up, the dynamic characteristics of each phase of shifting process are analyzed, and a control strategy in which torque and speed of the engine and electric machine are coordinatively controlled to achieve AMT shifting control for a plug-in hybrid electric vehicle (PHEV without clutch is proposed. In the shifting process, the engine and electric machine are well controlled, and the shift jerk and power interruption and restoration time are reduced. Simulation and real car test results show that the proposed control strategy can more efficiently improve the shift quality for PHEVs equipped with AMTs.
Yang, Rui; Zhang, Yingchen
2016-08-01
Distributed energy resources (DERs) and smart loads have the potential to provide flexibility to the distribution system operation. A coordinated optimization approach is proposed in this paper to actively manage DERs and smart loads in distribution systems to achieve the optimal operation status. A three-phase unbalanced Optimal Power Flow (OPF) problem is developed to determine the output from DERs and smart loads with respect to the system operator's control objective. This paper focuses on coordinating PV systems and smart loads to improve the overall voltage profile in distribution systems. Simulations have been carried out in a 12-bus distribution feeder and results illustrate the superior control performance of the proposed approach.
Rohe Peter
2012-10-01
Full Text Available Abstract Background High-throughput methods are widely-used for strain screening effectively resulting in binary information regarding high or low productivity. Nevertheless achieving quantitative and scalable parameters for fast bioprocess development is much more challenging, especially for heterologous protein production. Here, the nature of the foreign protein makes it impossible to predict the, e.g. best expression construct, secretion signal peptide, inductor concentration, induction time, temperature and substrate feed rate in fed-batch operation to name only a few. Therefore, a high number of systematic experiments are necessary to elucidate the best conditions for heterologous expression of each new protein of interest. Results To increase the throughput in bioprocess development, we used a microtiter plate based cultivation system (Biolector which was fully integrated into a liquid-handling platform enclosed in laminar airflow housing. This automated cultivation platform was used for optimization of the secretory production of a cutinase from Fusarium solani pisi with Corynebacterium glutamicum. The online monitoring of biomass, dissolved oxygen and pH in each of the microtiter plate wells enables to trigger sampling or dosing events with the pipetting robot used for a reliable selection of best performing cutinase producers. In addition to this, further automated methods like media optimization and induction profiling were developed and validated. All biological and bioprocess parameters were exclusively optimized at microtiter plate scale and showed perfect scalable results to 1 L and 20 L stirred tank bioreactor scale. Conclusions The optimization of heterologous protein expression in microbial systems currently requires extensive testing of biological and bioprocess engineering parameters. This can be efficiently boosted by using a microtiter plate cultivation setup embedded into a liquid-handling system, providing more throughput
A Novel Optimization Tool for Automated Design of Integrated Circuits based on MOSGA
Maryam Dehbashian
2011-11-01
Full Text Available In this paper a novel optimization method based on Multi-Objective Gravitational Search Algorithm (MOGSA is presented for automated design of analog integrated circuits. The recommended method firstly simulates a selected circuit using a simulator and then simulated results are optimized by MOGSA algorithm. Finally this process continues to meet its optimum result. The main programs of the proposed method have been implemented in MATLAB while analog circuits are simulated by HSPICE software. To show the capability of this method, its proficiency will be examined in the optimization of analog integrated circuits design. In this paper, an analog circuit sizing scheme -Optimum Automated Design of a Temperature independent Differential Op-amp using Widlar Current Source- is illustrated as a case study. The computer results obtained from implementing this method indicate that the design specifications are closely met. Moreover, according to various design criteria, this tool by proposing a varied set of answers can give more options to designers to choose a desirable scheme among other suggested results. MOGSA, the proposal algorithm, introduces a novel method in multi objective optimization on the basis of Gravitational Search Algorithm in which the concept of “Pareto-optimality” is used to determine “non-dominated” positions as well as an external repository to keep these positions. To ensure the accuracy of MOGSA performance, this algorithm is validated using several standard test functions from some specialized literatures. Final results indicate that our method is highly competitive with current multi objective optimization algorithms.
Ali, Mohd.Hasan; Murata, Toshiaki; Tamura, Junji
2006-01-01
This paper analyzes the effect of the coordination of optimal reclosing and fuzzy logic-controlled braking resistor on the transient stability of a multimachine power system in case of an unsuccessful reclosing of circuit breakers. The transient stability performance of the coordinated operation of optimal reclosing and fuzzy controlled braking resistor is compared to that of the coordinated operation of conventional auto-reclosing and fuzzy controlled braking resistor. The effectiveness of t...
Automation refers to the use of a device or a system to perform a function previously performed by a human operator. It is introduced to reduce the human errors and to enhance the performance in various industrial fields, including the nuclear industry. However, these positive effects are not always achieved in complex systems such as nuclear power plants (NPPs). An excessive introduction of automation can generate new roles for human operators and change activities in unexpected ways. As more automation systems are accepted, the ability of human operators to detect automation failures and resume manual control is diminished. This disadvantage of automation is called the Out-of-the- Loop (OOTL) problem. We should consider the positive and negative effects of automation at the same time to determine the appropriate level of the introduction of automation. Thus, in this paper, we suggest an estimation method to consider the positive and negative effects of automation at the same time to determine the appropriate introduction of automation. This concept is limited in that it does not consider the effects of automation on human operators. Thus, a new estimation method for automation rate was suggested to overcome this problem
Highlights: • We propose an estimation method of the automation rate by taking the advantages of automation as the estimation measures. • We conduct the experiments to examine the validity of the suggested method. • The higher the cognitive automation rate is, the greater the decreased rate of the working time will be. • The usefulness of the suggested estimation method is proved by statistical analyses. - Abstract: Since automation was introduced in various industrial fields, the concept of the automation rate has been used to indicate the inclusion proportion of automation among all work processes or facilities. Expressions of the inclusion proportion of automation are predictable, as is the ability to express the degree of the enhancement of human performance. However, many researchers have found that a high automation rate does not guarantee high performance. Therefore, to reflect the effects of automation on human performance, this paper proposes a new estimation method of the automation rate that considers the effects of automation on human operators in nuclear power plants (NPPs). Automation in NPPs can be divided into two types: system automation and cognitive automation. Some general descriptions and characteristics of each type of automation are provided, and the advantages of automation are investigated. The advantages of each type of automation are used as measures of the estimation method of the automation rate. One advantage was found to be a reduction in the number of tasks, and another was a reduction in human cognitive task loads. The system and the cognitive automation rate were proposed as quantitative measures by taking advantage of the aforementioned benefits. To quantify the required human cognitive task loads and thus suggest the cognitive automation rate, Conant’s information-theory-based model was applied. The validity of the suggested method, especially as regards the cognitive automation rate, was proven by conducting
Optimal Coordinated Control of Power Extraction in LES of a Wind Farm with Entrance Effects
Jay P. Goit
2016-01-01
Full Text Available We investigate the use of optimal coordinated control techniques in large eddy simulations of wind farm boundary layer interaction with the aim of increasing the total energy extraction in wind farms. The individual wind turbines are considered as flow actuators, and their energy extraction is dynamically regulated in time, so as to optimally influence the flow field. We extend earlier work on wind farm optimal control in the fully-developed regime (Goit and Meyers 2015, J. Fluid Mech. 768, 5–50 to a ‘finite’ wind farm case, in which entrance effects play an important role. For the optimal control, a receding horizon framework is employed in which turbine thrust coefficients are optimized in time and per turbine. Optimization is performed with a conjugate gradient method, where gradients of the cost functional are obtained using adjoint large eddy simulations. Overall, the energy extraction is increased 7% by the optimal control. This increase in energy extraction is related to faster wake recovery throughout the farm. For the first row of turbines, the optimal control increases turbulence levels and Reynolds stresses in the wake, leading to better wake mixing and an inflow velocity for the second row that is significantly higher than in the uncontrolled case. For downstream rows, the optimal control mainly enhances the sideways mean transport of momentum. This is different from earlier observations by Goit and Meyers (2015 in the fully-developed regime, where mainly vertical transport was enhanced.
Rhythm Suren Wadhwa
2011-11-01
Full Text Available The paper presents a comparison and application of metaheuristic population-based optimization algorithms to a flexible manufacturing automation scenario in a metacasting foundry. It presents a novel application and comparison of Bee Colony Algorithm (BCA with variations of Particle Swarm Optimization (PSO and Ant Colony Optimization (ACO for object recognition problem in a robot material handling system. To enable robust pick and place activity of metalcasted parts by a six axis industrial robot manipulator, it is important that the correct orientation of the parts is input to the manipulator, via the digital image captured by the vision system. This information is then used for orienting the robot gripper to grip the part from a moving conveyor belt. The objective is to find the reference templates on the manufactured parts from the target landscape picture which may contain noise. The Normalized cross-correlation (NCC function is used as an objection function in the optimization procedure. The ultimate goal is to test improved algorithms that could prove useful in practical manufacturing automation scenarios.
Optimal operation of EDF generation system using decomposition-coordination methods
The French electricity generation system comprises several dozens nuclear units (mostly Pressurized Water Reactors), coal and oil-fired units, and several hundreds smaller hydro units, with a total capacity of about 90 000 MW. The various optimal operation problems of that system, whether they cover a few hours (unit scheduling), or several years (unit refuelling), have many characteristics in common: - unit dynamics are decoupled, - technical constraints of one unit are complex, and their formulation sometimes requires integer variables, - there are only a little number of coupling constraints (load-demand equilibrium...). Decomposition-coordination methods (sometimes called Lagrangian relaxation) appear particularly adequate to solve such problems: they make it possible to handle very precisely the technical constraints of each unit, at the decomposition level (each local problem being solved by dynamic programming or linear programming). Global optimization is achieved at the coordination level
OPTIMAL SUBSTRUCTURE OF SET-VALUED SOLUTIONS OF NORMAL-FORM GAMES AND COORDINATION
Norimasa KOBAYASHI; Kyoichi KIJIMA
2009-01-01
A number of solution concepts of normal-form games have been proposed in the literature on subspaces of action profiles that have Nash type stability. While the literature mainly focuses on the minimal of such stable subspaces, this paper argues that non-minimal stable subspaces represent well the multi-agent situations to which neither Nash equilibrium nor rationalizability may be applied with satisfaction. As a theoretical support, the authors prove the optimal substructure of stable subspaces regarding the restriction of a game. It is further argued that the optimal substructure characterizes hierarchical diversity of coordination and interim phases in learning.
Energy Coordinative Optimization of Wind-Storage-Load Microgrids Based on Short-Term Prediction
Changbin Hu; Shanna Luo; Zhengxi Li; Xin Wang; Li Sun
2015-01-01
According to the topological structure of wind-storage-load complementation microgrids, this paper proposes a method for energy coordinative optimization which focuses on improvement of the economic benefits of microgrids in the prediction framework. First of all, the external characteristic mathematical model of distributed generation (DG) units including wind turbines and storage batteries are established according to the requirements of the actual constraints. Meanwhile, using the minimum ...
Kia, Solmaz S.; Cortes, Jorge; Martinez, Sonia
2014-01-01
This paper proposes a novel class of distributed continuous-time coordination algorithms to solve network optimization problems whose cost function is a sum of local cost functions associated to the individual agents. We establish the exponential convergence of the proposed algorithm under (i) strongly connected and weight-balanced digraph topologies when the local costs are strongly convex with globally Lipschitz gradients, and (ii) connected graph topologies when the local costs are strongl...
Ming-Ta Yang
2013-01-01
Full Text Available In power systems, determining the values of time dial setting (TDS and the plug setting (PS for directional overcurrent relays (DOCRs is an extremely constrained optimization problem that has been previously described and solved as a nonlinear programming problem. Optimization coordination problems of near-end faults and far-end faults occurring simultaneously in circuits with various topologies, including fixed and variable network topologies, are considered in this study. The aim of this study was to apply the Nelder-Mead (NM simplex search method and particle swarm optimization (PSO to solve this optimization problem. The proposed NM-PSO method has the advantage of NM algorithm, with a quicker movement toward optimal solution, as well as the advantage of PSO algorithm in the ability to obtain globally optimal solution. Neither a conventional PSO nor the proposed NM-PSO method is capable of dealing with constrained optimization problems. Therefore, we use the gradient-based repair method embedded in a conventional PSO and the proposed NM-PSO. This study used an IEEE 8-bus test system as a case study to compare the convergence performance of the proposed NM-PSO method and a conventional PSO approach. The results demonstrate that a robust and optimal solution can be obtained efficiently by implementing the proposal.
Energy Coordinative Optimization of Wind-Storage-Load Microgrids Based on Short-Term Prediction
Changbin Hu
2015-02-01
Full Text Available According to the topological structure of wind-storage-load complementation microgrids, this paper proposes a method for energy coordinative optimization which focuses on improvement of the economic benefits of microgrids in the prediction framework. First of all, the external characteristic mathematical model of distributed generation (DG units including wind turbines and storage batteries are established according to the requirements of the actual constraints. Meanwhile, using the minimum consumption costs from the external grid as the objective function, a grey prediction model with residual modification is introduced to output the predictive wind turbine power and load at specific periods. Second, based on the basic framework of receding horizon optimization, an intelligent genetic algorithm (GA is applied to figure out the optimum solution in the predictive horizon for the complex non-linear coordination control model of microgrids. The optimum results of the GA are compared with the receding solution of mixed integer linear programming (MILP. The obtained results show that the method is a viable approach for energy coordinative optimization of microgrid systems for energy flow and reasonable schedule. The effectiveness and feasibility of the proposed method is verified by examples.
Automated Software Testing Using Metahurestic Technique Based on An Ant Colony Optimization
Srivastava, Praveen Ranjan
2011-01-01
Software testing is an important and valuable part of the software development life cycle. Due to time, cost and other circumstances, exhaustive testing is not feasible that's why there is a need to automate the software testing process. Testing effectiveness can be achieved by the State Transition Testing (STT) which is commonly used in real time, embedded and web-based type of software systems. Aim of the current paper is to present an algorithm by applying an ant colony optimization technique, for generation of optimal and minimal test sequences for behavior specification of software. Present paper approach generates test sequence in order to obtain the complete software coverage. This paper also discusses the comparison between two metaheuristic techniques (Genetic Algorithm and Ant Colony optimization) for transition based testing
Salkuti, Surender Reddy; Bijwe, P. R.; Abhyankar, A. R.
2016-04-01
This paper proposes an optimal dynamic reserve activation plan after the occurrence of an emergency situation (generator/transmission line outage, load increase or both). An optimal plan is developed to handle the emergency situation, using coordinated action of fast and slow reserves, for secure operation with minimum overall cost. This paper considers the reserves supplied by generators (spinning reserves) and loads (demand-side reserves). The optimal backing down of costly/fast reserves and bringing up of slow reserves in each sub-interval in an integrated manner is proposed. The simulation studies are performed on IEEE 30, 57 and 300 bus test systems to demonstrate the advantage of proposed integrated/dynamic reserve activation plan over the conventional/sequential approach.
Garland, Joshua; James, Ryan G.; Bradley, Elizabeth
2016-02-01
Delay-coordinate reconstruction is a proven modeling strategy for building effective forecasts of nonlinear time series. The first step in this process is the estimation of good values for two parameters, the time delay and the embedding dimension. Many heuristics and strategies have been proposed in the literature for estimating these values. Few, if any, of these methods were developed with forecasting in mind, however, and their results are not optimal for that purpose. Even so, these heuristics—intended for other applications—are routinely used when building delay coordinate reconstruction-based forecast models. In this paper, we propose an alternate strategy for choosing optimal parameter values for forecast methods that are based on delay-coordinate reconstructions. The basic calculation involves maximizing the shared information between each delay vector and the future state of the system. We illustrate the effectiveness of this method on several synthetic and experimental systems, showing that this metric can be calculated quickly and reliably from a relatively short time series, and that it provides a direct indication of how well a near-neighbor based forecasting method will work on a given delay reconstruction of that time series. This allows a practitioner to choose reconstruction parameters that avoid any pathologies, regardless of the underlying mechanism, and maximize the predictive information contained in the reconstruction.
Garland, Joshua; James, Ryan G; Bradley, Elizabeth
2016-02-01
Delay-coordinate reconstruction is a proven modeling strategy for building effective forecasts of nonlinear time series. The first step in this process is the estimation of good values for two parameters, the time delay and the embedding dimension. Many heuristics and strategies have been proposed in the literature for estimating these values. Few, if any, of these methods were developed with forecasting in mind, however, and their results are not optimal for that purpose. Even so, these heuristics-intended for other applications-are routinely used when building delay coordinate reconstruction-based forecast models. In this paper, we propose an alternate strategy for choosing optimal parameter values for forecast methods that are based on delay-coordinate reconstructions. The basic calculation involves maximizing the shared information between each delay vector and the future state of the system. We illustrate the effectiveness of this method on several synthetic and experimental systems, showing that this metric can be calculated quickly and reliably from a relatively short time series, and that it provides a direct indication of how well a near-neighbor based forecasting method will work on a given delay reconstruction of that time series. This allows a practitioner to choose reconstruction parameters that avoid any pathologies, regardless of the underlying mechanism, and maximize the predictive information contained in the reconstruction. PMID:26986345
Automated Portfolio Optimization Based on a New Test for Structural Breaks
Tobias Berens
2014-04-01
Full Text Available We present a completely automated optimization strategy which combines the classical Markowitz mean-variance portfolio theory with a recently proposed test for structural breaks in covariance matrices. With respect to equity portfolios, global minimum-variance optimizations, which base solely on the covariance matrix, yield considerable results in previous studies. However, financial assets cannot be assumed to have a constant covariance matrix over longer periods of time. Hence, we estimate the covariance matrix of the assets by respecting potential change points. The resulting approach resolves the issue of determining a sample for parameter estimation. Moreover, we investigate if this approach is also appropriate for timing the reoptimizations. Finally, we apply the approach to two datasets and compare the results to relevant benchmark techniques by means of an out-of-sample study. It is shown that the new approach outperforms equally weighted portfolios and plain minimum-variance portfolios on average.
Suleimanov, Yury V
2015-01-01
We present a simple protocol which allows fully automated discovery of elementary chemical reaction steps using in cooperation single- and double-ended transition-state optimization algorithms - the freezing string and Berny optimization methods, respectively. To demonstrate the utility of the proposed approach, the reactivity of several systems of combustion and atmospheric chemistry importance is investigated. The proposed algorithm allowed us to detect without any human intervention not only "known" reaction pathways, manually detected in the previous studies, but also new, previously "unknown", reaction pathways which involve significant atom rearrangements. We believe that applying such a systematic approach to elementary reaction path finding will greatly accelerate the possibility of discovery of new chemistry and will lead to more accurate computer simulations of various chemical processes.
Suleimanov, Yury V; Green, William H
2015-09-01
We present a simple protocol which allows fully automated discovery of elementary chemical reaction steps using in cooperation double- and single-ended transition-state optimization algorithms--the freezing string and Berny optimization methods, respectively. To demonstrate the utility of the proposed approach, the reactivity of several single-molecule systems of combustion and atmospheric chemistry importance is investigated. The proposed algorithm allowed us to detect without any human intervention not only "known" reaction pathways, manually detected in the previous studies, but also new, previously "unknown", reaction pathways which involve significant atom rearrangements. We believe that applying such a systematic approach to elementary reaction path finding will greatly accelerate the discovery of new chemistry and will lead to more accurate computer simulations of various chemical processes. PMID:26575920
Many computer-aided diagnosis (CAD) systems use neural networks (NNs) for either detection or classification of abnormalities. Currently, most NNs are 'optimized' by manual search in a very limited parameter space. In this work, we evaluated the use of automated optimization methods for selecting an optimal convolution neural network (CNN) architecture. Three automated methods, the steepest descent (SD), the simulated annealing (SA), and the genetic algorithm (GA), were compared. We used as an example the CNN that classifies true and false microcalcifications detected on digitized mammograms by a prescreening algorithm. Four parameters of the CNN architecture were considered for optimization, the numbers of node groups and the filter kernel sizes in the first and second hidden layers, resulting in a search space of 432 possible architectures. The area Az under the receiver operating characteristic (ROC) curve was used to design a cost function. The SA experiments were conducted with four different annealing schedules. Three different parent selection methods were compared for the GA experiments. An available data set was split into two groups with approximately equal number of samples. By using the two groups alternately for training and testing, two different cost surfaces were evaluated. For the first cost surface, the SD method was trapped in a local minimum 91% (392/432) of the time. The SA using the Boltzman schedule selected the best architecture after evaluating, on average, 167 architectures. The GA achieved its best performance with linearly scaled roulette-wheel parent selection; however, it evaluated 391 different architectures, on average, to find the best one. The second cost surface contained no local minimum. For this surface, a simple SD algorithm could quickly find the global minimum, but the SA with the very fast reannealing schedule was still the most efficient. The same SA scheme, however, was trapped in a local minimum on the first cost surface
Qiu-Yu Lu
2012-09-01
Full Text Available Automatic Generation Control (AGC and Automatic Voltage Control (AVC are key approaches to frequency and voltage regulation in power systems. However, based on the assumption of decoupling of active and reactive power control, the existing AGC and AVC systems work independently without any coordination. In this paper, a concept and method of hybrid control is introduced to set up an Integrated Coordinated Optimization Control (ICOC system for AGC and AVC. Concerning the diversity of control devices and the characteristics of discrete control interaction with a continuously operating power system, the ICOC system is designed in a hierarchical structure and driven by security, quality and economic events, consequently reducing optimization complexity and realizing multi-target quasi-optimization. In addition, an innovative model of Loss Minimization Control (LMC taking into consideration active and reactive power regulation is proposed to achieve a substantial reduction in network losses and a cross iterative method for AGC and AVC instructions is also presented to decrease negative interference between control systems. The ICOC system has already been put into practice in some provincial regional power grids in China. Open-looping operation tests have proved the validity of the presented control strategies.
Gauri Joshi
2011-05-01
Full Text Available Limited hardware capabilities and very limited battery power supply are the two main constraints thatarise because of small size and low cost of the wireless sensor nodes. Power optimization is highlydesired at all the levels in order to have a long lived Wireless Sensor Network (WSN. Prolonging the lifespan of the network is the prime focus in highly energy constrained wireless sensor networks. Sufficientnumber of active nodes can only ensure proper coverage of the sensing field and connectivity of thenetwork. If large number of wireless sensor nodes get their batteries depleted over a short time span thenit is not possible to maintain the network. In order to have long lived network it is mandatory to havelong lived sensor nodes and hence power optimization at node level becomes equally important as poweroptimization at network level. In this paper need for a dynamically adaptive sensor node is signified inorder to optimize power at individual nodes along with the reduction in data loss due to buffercongestion.We have analyzed a sensor node with fixed service rate (processing rate and transmission rate and asensor node with variable service rates for its power consumption and data loss in small sized buffersunder varying traffic (workload conditions. For variable processing rate Dynamic Voltage FrequencyScaling (DVFS technique is considered and for variable transmission rate Dynamic Modulation Scaling(DMS technique is considered. Comparing the results of a dynamically adaptive sensor node with thatof a fixed service rate sensor node shows improvement in the lifetime of node as well as reduction in thedata loss due to buffer congestion. Further we have tried to coordinate the service rates of computationunit and communication unit on a sensor node which give rise to Coordinated Adaptive Power (CAPmanagement. The main objective of CAP Management is to save the power during normal periods andreduce the data loss due to buffer congestion (overflow
Hutter, Frank; Bartz-Beielstein, Thomas; Hoos, Holger H.; Leyton-Brown, Kevin; Murphy, Kevin P.
This work experimentally investigates model-based approaches for optimizing the performance of parameterized randomized algorithms. Such approaches build a response surface model and use this model for finding good parameter settings of the given algorithm. We evaluated two methods from the literature that are based on Gaussian process models: sequential parameter optimization (SPO) (Bartz-Beielstein et al. 2005) and sequential Kriging optimization (SKO) (Huang et al. 2006). SPO performed better "out-of-the-box," whereas SKO was competitive when response values were log transformed. We then investigated key design decisions within the SPO paradigm, characterizing the performance consequences of each. Based on these findings, we propose a new version of SPO, dubbed SPO+, which extends SPO with a novel intensification procedure and a log-transformed objective function. In a domain for which performance results for other (modelfree) parameter optimization approaches are available, we demonstrate that SPO+ achieves state-of-the-art performance. Finally, we compare this automated parameter tuning approach to an interactive, manual process that makes use of classical
Xing Wu
2011-07-01
Full Text Available This paper presents a multi-objective genetic algorithm (MOGA with Pareto optimality and elitist tactics for the control system design of automated guided vehicle (AGV. The MOGA is used to identify AGV driving system model and optimize its servo control system sequentially. In system identification, the model identified by least square method is adopted as an evolution tutor who selects the individuals having balanced performances in all objectives as elitists. In controller optimization, the velocity regulating capability required by AGV path tracking is employed as decision-making preferences which select Pareto optimal solutions as elitists. According to different objectives and elitist tactics, several sub-populations are constructed and they evolve concurrently by using independent reproduction, neighborhood mutation and heuristic crossover. The lossless finite precision method and the multi-objective normalized increment distance are proposed to keep the population diversity with a low computational complexity. Experiment results show that the cascaded MOGA have the capability to make the system model consistent with AGV driving system both in amplitude and phase, and to make its servo control system satisfy the requirements on dynamic performance and steady-state accuracy in AGV path tracking.
Optimizing RF gun cavity geometry within an automated injector design system
Alicia Hofler ,Pavel Evtushenko
2011-03-28
RF guns play an integral role in the success of several light sources around the world, and properly designed and optimized cw superconducting RF (SRF) guns can provide a path to higher average brightness. As the need for these guns grows, it is important to have automated optimization software tools that vary the geometry of the gun cavity as part of the injector design process. This will allow designers to improve existing designs for present installations, extend the utility of these guns to other applications, and develop new designs. An evolutionary algorithm (EA) based system can provide this capability because EAs can search in parallel a large parameter space (often non-linear) and in a relatively short time identify promising regions of the space for more careful consideration. The injector designer can then evaluate more cavity design parameters during the injector optimization process against the beam performance requirements of the injector. This paper will describe an extension to the APISA software that allows the cavity geometry to be modified as part of the injector optimization and provide examples of its application to existing RF and SRF gun designs.
Yang, Weizhu; Yue, Zhufeng; Li, Lei; Wang, Peiyan
2016-01-01
An optimization procedure combining an automated finite element modelling (AFEM) technique with a ground structure approach (GSA) is proposed for structural layout and sizing design of aircraft wings. The AFEM technique, based on CATIA VBA scripting and PCL programming, is used to generate models automatically considering the arrangement of inner systems. GSA is used for local structural topology optimization. The design procedure is applied to a high-aspect-ratio wing. The arrangement of the integral fuel tank, landing gear and control surfaces is considered. For the landing gear region, a non-conventional initial structural layout is adopted. The positions of components, the number of ribs and local topology in the wing box and landing gear region are optimized to obtain a minimum structural weight. Constraints include tank volume, strength, buckling and aeroelastic parameters. The results show that the combined approach leads to a greater weight saving, i.e. 26.5%, compared with three additional optimizations based on individual design approaches.
Zhi-song CHEN
2012-12-01
Full Text Available The South-to-North Water Diversion (SNWD Project is a significant engineering project meant to solve water shortage problems in North China. Faced with market operations management of the water diversion system, this study defined the supply chain system for the SNWD Project, considering the actual project conditions, built a decentralized decision model and a centralized decision model with strategic customer behavior (SCB using a floating pricing mechanism (FPM, and constructed a coordination mechanism via a revenue-sharing contract. The results suggest the following: (1 owing to water shortage supplements and the excess water sale policy provided by the FPM, the optimal ordering quantity of water resources is less than that without the FPM, and the optimal profits of the whole supply chain, supplier, and external distributor are higher than they would be without the FPM; (2 wholesale pricing and supplementary wholesale pricing with SCB are higher than those without SCB, and the optimal profits of the whole supply chain, supplier, and external distributor are higher than they would be without SCB; and (3 considering SCB and introducing the FPM help increase the optimal profits of the whole supply chain, supplier, and external distributor, and improve the efficiency of water resources usage.
Bogaarts, J G; Gommer, E D; Hilkman, D M W; van Kranen-Mastenbroek, V H J M; Reulen, J P H
2016-08-01
Automated seizure detection is a valuable asset to health professionals, which makes adequate treatment possible in order to minimize brain damage. Most research focuses on two separate aspects of automated seizure detection: EEG feature computation and classification methods. Little research has been published regarding optimal training dataset composition for patient-independent seizure detection. This paper evaluates the performance of classifiers trained on different datasets in order to determine the optimal dataset for use in classifier training for automated, age-independent, seizure detection. Three datasets are used to train a support vector machine (SVM) classifier: (1) EEG from neonatal patients, (2) EEG from adult patients and (3) EEG from both neonates and adults. To correct for baseline EEG feature differences among patients feature, normalization is essential. Usually dedicated detection systems are developed for either neonatal or adult patients. Normalization might allow for the development of a single seizure detection system for patients irrespective of their age. Two classifier versions are trained on all three datasets: one with feature normalization and one without. This gives us six different classifiers to evaluate using both the neonatal and adults test sets. As a performance measure, the area under the receiver operating characteristics curve (AUC) is used. With application of FBC, it resulted in performance values of 0.90 and 0.93 for neonatal and adult seizure detection, respectively. For neonatal seizure detection, the classifier trained on EEG from adult patients performed significantly worse compared to both the classifier trained on EEG data from neonatal patients and the classier trained on both neonatal and adult EEG data. For adult seizure detection, optimal performance was achieved by either the classifier trained on adult EEG data or the classifier trained on both neonatal and adult EEG data. Our results show that age
Blommaert, Maarten; Reiter, Detlev [Institute of Energy and Climate Research (IEK-4), FZ Juelich GmbH, D-52425 Juelich (Germany); Heumann, Holger [Centre de Recherche INRIA Sophia Antipolis, BP 93 06902 Sophia Antipolis (France); Baelmans, Martine [KU Leuven, Department of Mechanical Engineering, 3001 Leuven (Belgium); Gauger, Nicolas Ralph [TU Kaiserslautern, Chair for Scientific Computing, 67663 Kaiserslautern (Germany)
2015-05-01
At present, several plasma boundary codes exist that attempt to describe the complex interactions in the divertor SOL (Scrape-Off Layer). The predictive capability of these edge codes is still very limited. Yet, in parallel to major efforts to mature edge codes, we face the design challenges for next step fusion devices. One of them is the design of the helium and heat exhaust system. In past automated design studies, results indicated large potential reductions in peak heat load by an increased magnetic flux divergence towards the target structures. In the present study, a free boundary magnetic equilibrium solver is included into the simulation chain to verify these tendencies. Additionally, we expanded the applicability of the automated design method by introducing advanced ''adjoint'' sensitivity computations. This method, inherited from airfoil shape optimization in aerodynamics, allows for a large number of design variables at no additional computational cost. Results are shown for a design application of the new WEST divertor.
Md. Ahsanul Hoque
2015-09-01
Full Text Available Antenna alignment is very cumbersome in telecommunication industry and it especially affects the MW links due to environmental anomalies or physical degradation over a period of time. While in recent years a more conventional approach of redundancy has been employed but to ensure the LOS link stability, novel automation techniques are needed. The basic principle is to capture the desired Received Signal Level (RSL by means of an outdoor unit installed on tower top and analyzing the RSL in indoor unit by means of a GUI interface. We have proposed a new smart antenna system where automation is initiated when the transceivers receive low signal strength and report the finding to processing comparator unit. Series architecture is used that include loop antenna, RCX Robonics, LabVIEW interface coupled with a tunable external controller. Denavit–Hartenberg parameters are used in analytical modeling and numerous control techniques have been investigated to overcome imminent overshoot problems for the transport link. With this novel approach, a solution has been put forward for the communication industry where any antenna could achieve optimal directivity for desired RSL with low overshoot and fast steady state response.
Choi, Kihwan; Ng, Alphonsus H C; Fobel, Ryan; Chang-Yen, David A; Yarnell, Lyle E; Pearson, Elroy L; Oleksak, Carl M; Fischer, Andrew T; Luoma, Robert P; Robinson, John M; Audet, Julie; Wheeler, Aaron R
2013-10-15
We introduce an automated digital microfluidic (DMF) platform capable of performing immunoassays from sample to analysis with minimal manual intervention. This platform features (a) a 90 Pogo pin interface for digital microfluidic control, (b) an integrated (and motorized) photomultiplier tube for chemiluminescent detection, and (c) a magnetic lens assembly which focuses magnetic fields into a narrow region on the surface of the DMF device, facilitating up to eight simultaneous digital microfluidic magnetic separations. The new platform was used to implement a three-level full factorial design of experiments (DOE) optimization for thyroid-stimulating hormone immunoassays, varying (1) the analyte concentration, (2) the sample incubation time, and (3) the sample volume, resulting in an optimized protocol that reduced the detection limit and sample incubation time by up to 5-fold and 2-fold, respectively, relative to those from previous work. To our knowledge, this is the first report of a DOE optimization for immunoassays in a microfluidic system of any format. We propose that this new platform paves the way for a benchtop tool that is useful for implementing immunoassays in near-patient settings, including community hospitals, physicians' offices, and small clinical laboratories. PMID:23978190
An Automated Tool for Optimization of FMS Scheduling With Meta Heuristic Approach
A. V. S. Sreedhar Kumar
2014-03-01
Full Text Available The evolutions of manufacturing systems have reflected the need and requirement of the market which varies from time to time. Flexible manufacturing systems have contributed a lot to the development of efficient manufacturing process and production of variety of customized limited volume products as per the market demand based on customer needs. Scheduling of FMS is a crucial operation in maximizing throughput, reducing the wastages and increasing the overall efficiency of the manufacturing process. The dynamic nature of the Flexible Manufacturing Systems makes them unique and hence a generalized solution for scheduling is difficult to be abstracted. Any Solution for optimizing the scheduling should take in to account a multitude of parameters before proposing any solution. The primary objective of the proposed research is to design a tool to automate the optimization of scheduling process by searching for solution in the search spaces using Meta heuristic approaches. The research also validates the use of reward as means for optimizing the scheduling by including it as one of the parameters in the Combined Objective Function.
Automated procedure for selection of optimal refueling policies for light water reactors
An automated procedure determining a minimum cost refueling policy has been developed for light water reactors. The procedure is an extension of the equilibrium core approach previously devised for pressurized water reactors (PWRs). Use of 1 1/2-group theory has improved the accuracy of the nuclear model and eliminated tedious fitting of albedos. A simple heuristic algorithm for locating a good starting policy has materially reduced PWR computing time. Inclusion of void effects and use of the Haling principle for axial flux calculations extended the nuclear model to boiling water reactors (BWRs). A good initial estimate of the refueling policy is obtained by recognizing that a nearly uniform distribution of reactivity provides low-power peaking. The initial estimate is improved upon by interchanging groups of four assemblies and is subsequently refined by interchanging individual assemblies. The method yields very favorable results, is simpler than previously proposed BWR fuel optimization schemes, and retains power cost as the objective function
Automated design and optimization of flexible booster autopilots via linear programming, volume 1
Hauser, F. D.
1972-01-01
A nonlinear programming technique was developed for the automated design and optimization of autopilots for large flexible launch vehicles. This technique, which resulted in the COEBRA program, uses the iterative application of linear programming. The method deals directly with the three main requirements of booster autopilot design: to provide (1) good response to guidance commands; (2) response to external disturbances (e.g. wind) to minimize structural bending moment loads and trajectory dispersions; and (3) stability with specified tolerances on the vehicle and flight control system parameters. The method is applicable to very high order systems (30th and greater per flight condition). Examples are provided that demonstrate the successful application of the employed algorithm to the design of autopilots for both single and multiple flight conditions.
Coordination Between Unmanned Aerial and Ground Vehicles: A Taxonomy and Optimization Perspective.
Chen, Jie; Zhang, Xing; Xin, Bin; Fang, Hao
2016-04-01
The coordination between unmanned aerial vehicles (UAVs) and unmanned ground vehicles (UGVs) is a proactive research topic whose great value of application has attracted vast attention. This paper outlines the motivations for studying the cooperative control of UAVs and UGVs, and attempts to make a comprehensive investigation and analysis on recent research in this field. First, a taxonomy for classification of existing unmanned aerial and ground vehicles systems (UAGVSs) is proposed, and a generalized optimization framework is developed to allow the decision-making problems for different types of UAGVSs to be described in a unified way. By following the proposed taxonomy, we show how different types of UAGVSs can be built to realize the goal of a common task, that is target tracking, and how optimization problems can be formulated for a UAGVS to perform specific tasks. This paper presents an optimization perspective to model and analyze different types of UAGVSs, and serves as a guidance and reference for developing UAGVSs. PMID:25898328
Birkholz, Adam B; Schlegel, H Bernhard
2015-12-28
The development of algorithms to optimize reaction pathways between reactants and products is an active area of study. Existing algorithms typically describe the path as a discrete series of images (chain of states) which are moved downhill toward the path, using various reparameterization schemes, constraints, or fictitious forces to maintain a uniform description of the reaction path. The Variational Reaction Coordinate (VRC) method is a novel approach that finds the reaction path by minimizing the variational reaction energy (VRE) of Quapp and Bofill. The VRE is the line integral of the gradient norm along a path between reactants and products and minimization of VRE has been shown to yield the steepest descent reaction path. In the VRC method, we represent the reaction path by a linear expansion in a set of continuous basis functions and find the optimized path by minimizing the VRE with respect to the linear expansion coefficients. Improved convergence is obtained by applying constraints to the spacing of the basis functions and coupling the minimization of the VRE to the minimization of one or more points along the path that correspond to intermediates and transition states. The VRC method is demonstrated by optimizing the reaction path for the Müller-Brown surface and by finding a reaction path passing through 5 transition states and 4 intermediates for a 10 atom Lennard-Jones cluster. PMID:26723645
The development of algorithms to optimize reaction pathways between reactants and products is an active area of study. Existing algorithms typically describe the path as a discrete series of images (chain of states) which are moved downhill toward the path, using various reparameterization schemes, constraints, or fictitious forces to maintain a uniform description of the reaction path. The Variational Reaction Coordinate (VRC) method is a novel approach that finds the reaction path by minimizing the variational reaction energy (VRE) of Quapp and Bofill. The VRE is the line integral of the gradient norm along a path between reactants and products and minimization of VRE has been shown to yield the steepest descent reaction path. In the VRC method, we represent the reaction path by a linear expansion in a set of continuous basis functions and find the optimized path by minimizing the VRE with respect to the linear expansion coefficients. Improved convergence is obtained by applying constraints to the spacing of the basis functions and coupling the minimization of the VRE to the minimization of one or more points along the path that correspond to intermediates and transition states. The VRC method is demonstrated by optimizing the reaction path for the Müller-Brown surface and by finding a reaction path passing through 5 transition states and 4 intermediates for a 10 atom Lennard-Jones cluster
An Arterial Signal Coordination Optimization Model for Trams Based on Modified AM-BAND
Yangfan Zhou
2016-01-01
Full Text Available Modern trams are developing fast because of their characteristics like medium capability and energy saving. Exclusive way is always set in practice to avoid interruption from general vehicles, while trams have to stop at intersections frequently due to signal rules in the road network. Therefore, signal optimization has great effects on operational efficiency of trams system. In this paper, an arterial signal coordination optimization model is proposed for trams progression based on the Asymmetrical Multi-BAND (AM-BAND method. The AM-BAND is modified from the following aspects. Firstly, BAM-BAND is developed by supplementing active bandwidth constraints to AM-BAND. Assisted by the IBM ILOG CPLEX Optimization Studio, two arterial signals plans with eight intersections are achieved from AM-BAND and BAM-BAND for comparison. Secondly, based on the modified BAM-BAND, a BAM-TRAMBAND model is presented, which incorporates three constraints regarding tram operations, including dwell time at stations, active signal priority, and minimum bandwidth value. The case study and VISSIM simulation results show that travel times of trams decrease with signal plan from BAM-TRAMBAND comparing with the original signal plan. Moreover, traffic performance indicators such as stops and delay are improved significantly.
Yu, Long; Druckenbrod, Markus; Greve, Martin; Wang, Ke-qi; Abdel-Maksoud, Moustafa
2015-10-01
A fully automated optimization process is provided for the design of ducted propellers under open water conditions, including 3D geometry modeling, meshing, optimization algorithm and CFD analysis techniques. The developed process allows the direct integration of a RANSE solver in the design stage. A practical ducted propeller design case study is carried out for validation. Numerical simulations and open water tests are fulfilled and proved that the optimum ducted propeller improves hydrodynamic performance as predicted.
Optimized Multi Agent Coordination using Evolutionary Algorithm: Special Impact in Online Education
Subrat P Pattanayak
2012-08-01
Full Text Available Intelligent multi-agent systems are contemporary direction ofartificial intelligence that is being built up as a result ofresearchers in information processing, distributed systems,network technologies for problem solving. Multi agentcoordination is a vital area where agents coordinate amongthemselves to achieve a particular goal, which either can not besolved by a single agent or is not time effective by a singleagent. The agent’s role in education field is rapidly increasing.Information retrieval, students information processing system,Learning Information System, Pedagogical Agents are variedwork done by different agent technology. The novice usersspecifically are the most useful learners of an E-Tutoringsystem. A multi-agent system plays a vital role in this type ofE-tutoring system. Online Education is an emerging field inEducation System. To improve the interaction betweenlearners and tutors with personalized communication, weproposed an Optimized Multi Agent System (OMAS bywhich a learner can get sufficient information to achieve theirobjective. This conceptual framework is based on the idea that,adaptiveness is the best match between a particular learnersprofile and its course contents. We also try to optimize theprocedure using evolutionary process so that style of thelearner and the learning methods with respect to the learner ismatched with high fitness value. The agent technology hasbeen applied in a varied type of applications for education, butthis system may work as a user friendly conceptual systemwhichcan be integrated with any e-learning software. Use ofthe GUI user interface can make the system moreenriched. When a particular request comes from thelearner, the agents coordinate themselves to get the bestpossible solution. The solution can be represented in ananimated way in front of the learner, so that the noviceusers those who are new to the system, can adopt it veryeasily and with ease.
Design And Modeling An Automated Digsilent Power System For Optimal New Load Locations
Mohamed Saad
2015-08-01
Full Text Available Abstract The electric power utilities seek to take advantage of novel approaches to meet growing energy demand. Utilities are under pressure to evolve their classical topologies to increase the usage of distributed generation. Currently the electrical power engineers in many regions of the world are implementing manual methods to measure power consumption for farther assessment of voltage violation. Such process proved to be time consuming costly and inaccurate. Also demand response is a grid management technique where retail or wholesale customers are requested either electronically or manually to reduce their load. Therefore this paper aims to design and model an automated power system for optimal new load locations using DPL DIgSILENT Programming Language. This study is a diagnostic approach that assists system operator about any voltage violation cases that would happen during adding new load to the grid. The process of identifying the optimal bus bar location involves a complicated calculation of the power consumptions at each load bus As a result the DPL program would consider all the IEEE 30 bus internal networks data then a load flow simulation will be executed. To add the new load to the first bus in the network. Therefore the developed model will simulate the new load at each available bus bar in the network and generate three analytical reports for each case that captures the overunder voltage and the loading elements among the grid.
Optimal coordinate operation control for wind–photovoltaic–battery storage power-generation units
Highlights: • Adopt ‘rainflow’ calculation method to establish the battery cycle life model and quantitatively calculate the life wreck. • Minimize unit cost of power generation through enhanced gravitational search algorithm. • Analyze the relationship between renewable resource potential and the economic efficiency of power generation unit. - Abstract: An optimal coordinate operation control method for large-scale wind–photovoltaic (PV)–battery storage power generation units (WPB-PGUs) connected to a power grid with rated power output was proposed to address the challenges of poor stability, lack of decision-making, and low economic benefits. The “rainflow” calculation method was adopted to establish the battery cycle life model and to calculate quantitatively the life expectancy loss in the operation process. To minimize unit cost of power generation, this work optimized the output period of the equipment and strategy of battery charging and discharging with consideration of working conditions, generation equipment characteristics, and load demand by using the enhanced gravitational search algorithm (EGSA). A case study was conducted on the basis of data obtained using WPB-PGU in Zhangbei, China. Results showed that the proposed method could effectively minimize the unit cost of a WPB-PGU under different scenarios and diverse meteorological conditions. The proposed algorithm has high calculation accuracy and fast convergence speed
Automated telescope scheduling
Johnston, Mark D.
1988-08-01
With the ever increasing level of automation of astronomical telescopes the benefits and feasibility of automated planning and scheduling are becoming more apparent. Improved efficiency and increased overall telescope utilization are the most obvious goals. Automated scheduling at some level has been done for several satellite observatories, but the requirements on these systems were much less stringent than on modern ground or satellite observatories. The scheduling problem is particularly acute for Hubble Space Telescope: virtually all observations must be planned in excruciating detail weeks to months in advance. Space Telescope Science Institute has recently made significant progress on the scheduling problem by exploiting state-of-the-art artificial intelligence software technology. What is especially interesting is that this effort has already yielded software that is well suited to scheduling groundbased telescopes, including the problem of optimizing the coordinated scheduling of more than one telescope.
Optimizing human-system interface automation design based on a skill-rule-knowledge framework
This study considers the technological change that has occurred in complex systems within the past 30 years. The role of human operators in controlling and interacting with complex systems following the technological change was also investigated. Modernization of instrumentation and control systems and components leads to a new issue of human-automation interaction, in which human operational performance must be considered in automated systems. The human-automation interaction can differ in its types and levels. A system design issue is usually realized: given these technical capabilities, which system functions should be automated and to what extent? A good automation design can be achieved by making an appropriate human-automation function allocation. To our knowledge, only a few studies have been published on how to achieve appropriate automation design with a systematic procedure. Further, there is a surprising lack of information on examining and validating the influences of levels of automation (LOAs) on instrumentation and control systems in the advanced control room (ACR). The study we present in this paper proposed a systematic framework to help in making an appropriate decision towards types of automation (TOA) and LOAs based on a 'Skill-Rule-Knowledge' (SRK) model. From the evaluating results, it was shown that the use of either automatic mode or semiautomatic mode is insufficient to prevent human errors. For preventing the occurrences of human errors and ensuring the safety in ACR, the proposed framework can be valuable for making decisions in human-automation allocation.
Martin, Thomas Joseph
This dissertation presents the theoretical methodology, organizational strategy, conceptual demonstration and validation of a fully automated computer program for the multi-disciplinary analysis, inverse design and optimization of convectively cooled axial gas turbine blades and vanes. Parametric computer models of the three-dimensional cooled turbine blades and vanes were developed, including the automatic generation of discretized computational grids. Several new analysis programs were written and incorporated with existing computational tools to provide computer models of the engine cycle, aero-thermodynamics, heat conduction and thermofluid physics of the internally cooled turbine blades and vanes. A generalized information transfer protocol was developed to provide the automatic mapping of geometric and boundary condition data between the parametric design tool and the numerical analysis programs. A constrained hybrid optimization algorithm controlled the overall operation of the system and guided the multi-disciplinary internal turbine cooling design process towards the objectives and constraints of engine cycle performance, aerodynamic efficiency, cooling effectiveness and turbine blade and vane durability. Several boundary element computer programs were written to solve the steady-state non-linear heat conduction equation inside the internally cooled and thermal barrier-coated turbine blades and vanes. The boundary element method (BEM) did not require grid generation inside the internally cooled turbine blades and vanes, so the parametric model was very robust. Implicit differentiations of the BEM thermal and thereto-elastic analyses were done to compute design sensitivity derivatives faster and more accurately than via explicit finite differencing. A factor of three savings of computer processing time was realized for two-dimensional thermal optimization problems, and a factor of twenty was obtained for three-dimensional thermal optimization problems
Model of Stochastic Automation Asymptotically Optimal Behavior for Inter-budget Regulation
Elena D. Streltsova
2013-01-01
Full Text Available This paper is focused on the topical issue of inter-budget control in the structure ↔ by applying econometric models. To create the decision-making model, mathematical tool of the theory of stochastic automation, operating in random environments was used. On the basis of the application of this mathematical tool, the adaptive training economic and mathematical model, able to adapt to the environment, maintained by the income from the payment of federal and regional taxes and fees, payable to the budget of the constituent entity of the RF and paid to the budget of a lower level in the form of budget regulation was developed. The authors have developed the structure of the machine, described its behavior in a random environment and introduced the expression for the final probabilities of machine in each of its states. The behavioral aspect of the machine by means of a mathematically rigorous proof of the theorem on the feasibility of behavior and the asymptotic optimality of the proposed design of the machine were presented.
Serag, Ahmed; Wenzel, Fabian; Thiele, Frank; Buchert, Ralph; Young, Stewart
2009-02-01
FDG-PET is increasingly used for the evaluation of dementia patients, as major neurodegenerative disorders, such as Alzheimer's disease (AD), Lewy body dementia (LBD), and Frontotemporal dementia (FTD), have been shown to induce specific patterns of regional hypo-metabolism. However, the interpretation of FDG-PET images of patients with suspected dementia is not straightforward, since patients are imaged at different stages of progression of neurodegenerative disease, and the indications of reduced metabolism due to neurodegenerative disease appear slowly over time. Furthermore, different diseases can cause rather similar patterns of hypo-metabolism. Therefore, classification of FDG-PET images of patients with suspected dementia may lead to misdiagnosis. This work aims to find an optimal subset of features for automated classification, in order to improve classification accuracy of FDG-PET images in patients with suspected dementia. A novel feature selection method is proposed, and performance is compared to existing methods. The proposed approach adopts a combination of balanced class distributions and feature selection methods. This is demonstrated to provide high classification accuracy for classification of FDG-PET brain images of normal controls and dementia patients, comparable with alternative approaches, and provides a compact set of features selected.
Luis Marenco
2014-05-01
Full Text Available This paper describes how DISCO, the data-aggregator that supports the Neuroscience Information Framework (NIF, has been extended to play a central role in automating the complex workflow required to support and coordinate the NIF's data integration capabilities. The NIF is an NIH Neuroscience Blueprint initiative designed to help researchers access the wealth of data related to the neurosciences available via the Internet. A central component is the NIF Federation, a searchable database that currently contains data from 231 data and information resources regularly harvested, updated, and warehoused in the DISCO system. In the past several years, DISCO has greatly extended its functionality and has evolved to play a central role in automating the complex, ongoing process of harvesting, validating, integrating, and displaying neuroscience data from a growing set of participating resources. This paper provides an overview of DISCO's current capabilities and discusses a number of the challenges and future directions related to the process of coordinating the integration of neuroscience data within the NIF Federation.
Optimal voltage control in distribution systems with coordination of distribution installations
Oshiro, Masato; Tanaka, Kenichi; Uehara, Akie; Senjyu, Tomonobu; Miyazato, Yoshitaka; Yona, Atsushi [Faculty of Engineering, University of the Ryukyus, 1 Senbaru, Nishihara-cho, Nakagami, Okinawa 903-0213 (Japan); Funabashi, Toshihisa [Meidensha Corporation, 2-1-1, Osaki, Shinagawa-ku, Tokyo 141-6029 (Japan)
2010-12-15
In recent years, distributed generations based on natural energy or co-generation systems are increasing due to global warming and reduction of fossil fuels. Many of the distributed generations are set up in the vicinity of customers, with the advantage that this decreases transmission losses and transmission capacity. However, output power generated from renewable energy such as wind power and photovoltaics, is influenced by weather conditions. Therefore if the distributed generation increases with conventional control schemes, the voltage variation in a distribution system becomes a serious problem. In this paper, an optimal control method of distribution voltage with coordination of distributed installations, such as On Load Tap Changer (OLTC), Step Voltage Regulator (SVR), Shunt Capacitor (SC), Shunt Reactor (ShR), and Static Var Compensator (SVC), is proposed. In this research, the communication infrastructure is assumed to be widespread in the distribution network. The proposed technique combines genetic algorithm (GA) and Tabu search (TS) to determine the control operation. In order to confirm the validity of the proposed method, simulation results are presented for a distribution network model with distributed (photovoltaic) generation. (author)
Parameter optimization in the process of inverse treatment planning for intensity modulated radiation therapy (IMRT) is mainly conducted by human planners in order to create a plan with the desired dose distribution. To automate this tedious process, an artificial intelligence (AI) guided system was developed and examined. The AI system can automatically accomplish the optimization process based on prior knowledge operated by several fuzzy inference systems (FIS). Prior knowledge, which was collected from human planners during their routine trial-and-error process of inverse planning, has first to be 'translated' to a set of 'if-then rules' for driving the FISs. To minimize subjective error which could be costly during this knowledge acquisition process, it is necessary to find a quantitative method to automatically accomplish this task. A well-developed machine learning technique, based on an adaptive neuro fuzzy inference system (ANFIS), was introduced in this study. Based on this approach, prior knowledge of a fuzzy inference system can be quickly collected from observation data (clinically used constraints). The learning capability and the accuracy of such a system were analyzed by generating multiple FIS from data collected from an AI system with known settings and rules. Multiple analyses showed good agreements of FIS and ANFIS according to rules (error of the output values of ANFIS based on the training data from FIS of 7.77 ± 0.02%) and membership functions (3.9%), thus suggesting that the 'behavior' of an FIS can be propagated to another, based on this process. The initial experimental results on a clinical case showed that ANFIS is an effective way to build FIS from practical data, and analysis of ANFIS and FIS with clinical cases showed good planning results provided by ANFIS. OAR volumes encompassed by characteristic percentages of isodoses were reduced by a mean of between 0 and 28%. The study demonstrated a feasible way
Vysotskiy, Victor P; Boström, Jonas; Veryazov, Valera
2013-11-15
A parallel procedure for an effective optimization of relative position and orientation between two or more fragments has been implemented in the MOLCAS program package. By design, the procedure does not perturb the electronic structure of a system under the study. The original composite system is divided into frozen fragments and internal coordinates linking those fragments are the only optimized parameters. The procedure is capable to handle fully independent (no border atoms) fragments as well as fragments connected by covalent bonds. In the framework of the procedure, the optimization of relative position and orientation of the fragments are carried out in the internal "Z-matrix" coordinates using numerical derivatives. The total number of required single points energy evaluations scales with the number of fragments rather than with the total number of atoms in the system. The accuracy and the performance of the procedure have been studied by test calculations for a representative set of two- and three-fragment molecules with artificially distorted structures. The developed approach exhibits robust and smooth convergence to the reference optimal structures. As only a few internal coordinates are varied during the procedure, the proposed constrained fragment geometry optimization can be afforded even for high level ab initio methods like CCSD(T) and CASPT2. This capability has been demonstrated by applying the method to two larger cases, CCSD(T) and CASPT2 calculations on a positively charged benzene lithium complex and on the oxygen molecule interacting to iron porphyrin molecule, respectively. PMID:24006272
Introduction: Accumulation of β-amyloid (Aβ) aggregates in the brain is linked to the pathogenesis of Alzheimer's disease (AD). Imaging probes targeting these Aβ aggregates in the brain may provide a useful tool to facilitate the diagnosis of AD. Recently, [18F]AV-45 ([18F]5) demonstrated high binding to the Aβ aggregates in AD patients. To improve the availability of this agent for widespread clinical application, a rapid, fully automated, high-yield, cGMP-compliant radiosynthesis was necessary for production of this probe. We report herein an optimal [18F]fluorination, de-protection condition and fully automated radiosynthesis of [18F]AV-45 ([18F]5) on a radiosynthesis module (BNU F-A2). Methods: The preparation of [18F]AV-45 ([18F]5) was evaluated under different conditions, specifically by employing different precursors (-OTs and -Br as the leaving group), reagents (K222/K2CO3 vs. tributylammonium bicarbonate) and deprotection in different acids. With optimized conditions from these experiments, the automated synthesis of [18F]AV-45 ([18F]5) was accomplished by using a computer-programmed, standard operating procedure, and was purified on an on-line solid-phase cartridge (Oasis HLB). Results: The optimized reaction conditions were successfully implemented to an automated nucleophilic fluorination module. The radiochemical purity of [18F]AV-45 ([18F]5) was >95%, and the automated synthesis yield was 33.6±5.2% (no decay corrected, n=4), 50.1±7.9% (decay corrected) in 50 min at a quantity level of 10-100 mCi (370-3700 MBq). Autoradiography studies of [18F]AV-45 ([18F]5) using postmortem AD brain and Tg mouse brain sections in the presence of different concentration of 'cold' AV-136 showed a relatively low inhibition of in vitro binding of [18F]AV-45 ([18F]5) to the Aβ plaques (IC50=1-4 μM, a concentration several order of magnitude higher than the expected pseudo carrier concentration in the brain). Conclusions: Solid-phase extraction purification and
Dongqi Liu
2016-03-01
Full Text Available This paper proposed a optimal strategy for coordinated operation of electric vehicles (EVs charging and discharging with wind-thermal system. By aggregating a large number of EVs, the huge total battery capacity is sufficient to stabilize the disturbance of the transmission grid. Hence, a dynamic environmental dispatch model which coordinates a cluster of charging and discharging controllable EV units with wind farms and thermal plants is proposed. A multi-objective particle swarm optimization (MOPSO algorithm and a fuzzy decision maker are put forward for the simultaneous optimization of grid operating cost, CO2 emissions, wind curtailment, and EV users’ cost. Simulations are done in a 30 node system containing three traditional thermal plants, two carbon capture and storage (CCS thermal plants, two wind farms, and six EV aggregations. Contrast of strategies under different EV charging/discharging price is also discussed. The results are presented to prove the effectiveness of the proposed strategy.
Xinlei Liu; Zhentong Liu; Liming Zhu; Hongwen He
2012-01-01
On the basis of the shifting process of automated mechanical transmissions (AMTs) for traditional hybrid electric vehicles (HEVs), and by combining the features of electric machines with fast response speed, the dynamic model of the hybrid electric AMT vehicle powertrain is built up, the dynamic characteristics of each phase of shifting process are analyzed, and a control strategy in which torque and speed of the engine and electric ma...
Wang, Li; Gao, Yaozong; Shi, Feng; Liao, Shu; Li, Gang [Department of Radiology and BRIC, University of North Carolina at Chapel Hill, North Carolina 27599 (United States); Chen, Ken Chung [Department of Oral and Maxillofacial Surgery, Houston Methodist Hospital Research Institute, Houston, Texas 77030 and Department of Stomatology, National Cheng Kung University Medical College and Hospital, Tainan, Taiwan 70403 (China); Shen, Steve G. F.; Yan, Jin [Department of Oral and Craniomaxillofacial Surgery and Science, Shanghai Ninth People' s Hospital, Shanghai Jiao Tong University College of Medicine, Shanghai, China 200011 (China); Lee, Philip K. M.; Chow, Ben [Hong Kong Dental Implant and Maxillofacial Centre, Hong Kong, China 999077 (China); Liu, Nancy X. [Department of Oral and Maxillofacial Surgery, Houston Methodist Hospital Research Institute, Houston, Texas 77030 and Department of Oral and Maxillofacial Surgery, Peking University School and Hospital of Stomatology, Beijing, China 100050 (China); Xia, James J. [Department of Oral and Maxillofacial Surgery, Houston Methodist Hospital Research Institute, Houston, Texas 77030 (United States); Department of Surgery (Oral and Maxillofacial Surgery), Weill Medical College, Cornell University, New York, New York 10065 (United States); Department of Oral and Craniomaxillofacial Surgery and Science, Shanghai Ninth People' s Hospital, Shanghai Jiao Tong University College of Medicine, Shanghai, China 200011 (China); Shen, Dinggang, E-mail: dgshen@med.unc.edu [Department of Radiology and BRIC, University of North Carolina at Chapel Hill, North Carolina 27599 and Department of Brain and Cognitive Engineering, Korea University, Seoul, 136701 (Korea, Republic of)
2014-04-15
Purpose: Cone-beam computed tomography (CBCT) is an increasingly utilized imaging modality for the diagnosis and treatment planning of the patients with craniomaxillofacial (CMF) deformities. Accurate segmentation of CBCT image is an essential step to generate three-dimensional (3D) models for the diagnosis and treatment planning of the patients with CMF deformities. However, due to the poor image quality, including very low signal-to-noise ratio and the widespread image artifacts such as noise, beam hardening, and inhomogeneity, it is challenging to segment the CBCT images. In this paper, the authors present a new automatic segmentation method to address these problems. Methods: To segment CBCT images, the authors propose a new method for fully automated CBCT segmentation by using patch-based sparse representation to (1) segment bony structures from the soft tissues and (2) further separate the mandible from the maxilla. Specifically, a region-specific registration strategy is first proposed to warp all the atlases to the current testing subject and then a sparse-based label propagation strategy is employed to estimate a patient-specific atlas from all aligned atlases. Finally, the patient-specific atlas is integrated into amaximum a posteriori probability-based convex segmentation framework for accurate segmentation. Results: The proposed method has been evaluated on a dataset with 15 CBCT images. The effectiveness of the proposed region-specific registration strategy and patient-specific atlas has been validated by comparing with the traditional registration strategy and population-based atlas. The experimental results show that the proposed method achieves the best segmentation accuracy by comparison with other state-of-the-art segmentation methods. Conclusions: The authors have proposed a new CBCT segmentation method by using patch-based sparse representation and convex optimization, which can achieve considerably accurate segmentation results in CBCT
Moving Toward an Optimal and Automated Geospatial Network for CCUS Infrastructure
Hoover, Brendan Arthur [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
2016-08-05
Modifications in the global climate are being driven by the anthropogenic release of greenhouse gases (GHG) including carbon dioxide (CO_{2}) (Middleton et al. 2014). CO_{2} emissions have, for example, been directly linked to an increase in total global temperature (Seneviratne et al. 2016). Strategies that limit CO_{2} emissions—like CO_{2} capture, utilization, and storage (CCUS) technology—can greatly reduce emissions by capturing CO_{2} before it is released to the atmosphere. However, to date CCUS technology has not been developed at a large commercial scale despite several promising high profile demonstration projects (Middleton et al. 2015). Current CCUS research has often focused on capturing CO_{2} emissions from coal-fired power plants, but recent research at Los Alamos National Laboratory (LANL) suggests focusing CCUS CO_{2} capture research upon industrial sources might better encourage CCUS deployment. To further promote industrial CCUS deployment, this project builds off current LANL research by continuing the development of a software tool called SimCCS, which estimates a regional system of transport to inject CO_{2} into sedimentary basins. The goal of SimCCS, which was first developed by Middleton and Bielicki (2009), is to output an automated and optimal geospatial industrial CCUS pipeline that accounts for industrial source and sink locations by estimating a Delaunay triangle network which also minimizes topographic and social costs (Middleton and Bielicki 2009). Current development of SimCCS is focused on creating a new version that accounts for spatial arrangements that were not available in the previous version. This project specifically addresses the issue of non-unique Delaunay triangles by adding additional triangles to the network, which can affect how the CCUS network is calculated.
Purpose: Cone-beam computed tomography (CBCT) is an increasingly utilized imaging modality for the diagnosis and treatment planning of the patients with craniomaxillofacial (CMF) deformities. Accurate segmentation of CBCT image is an essential step to generate three-dimensional (3D) models for the diagnosis and treatment planning of the patients with CMF deformities. However, due to the poor image quality, including very low signal-to-noise ratio and the widespread image artifacts such as noise, beam hardening, and inhomogeneity, it is challenging to segment the CBCT images. In this paper, the authors present a new automatic segmentation method to address these problems. Methods: To segment CBCT images, the authors propose a new method for fully automated CBCT segmentation by using patch-based sparse representation to (1) segment bony structures from the soft tissues and (2) further separate the mandible from the maxilla. Specifically, a region-specific registration strategy is first proposed to warp all the atlases to the current testing subject and then a sparse-based label propagation strategy is employed to estimate a patient-specific atlas from all aligned atlases. Finally, the patient-specific atlas is integrated into amaximum a posteriori probability-based convex segmentation framework for accurate segmentation. Results: The proposed method has been evaluated on a dataset with 15 CBCT images. The effectiveness of the proposed region-specific registration strategy and patient-specific atlas has been validated by comparing with the traditional registration strategy and population-based atlas. The experimental results show that the proposed method achieves the best segmentation accuracy by comparison with other state-of-the-art segmentation methods. Conclusions: The authors have proposed a new CBCT segmentation method by using patch-based sparse representation and convex optimization, which can achieve considerably accurate segmentation results in CBCT
Sanyal, Amit K.
2005-01-01
There are several attitude estimation algorithms in existence, all of which use local coordinate representations for the group of rigid body orientations. All local coordinate representations of the group of orientations have associated problems. While minimal coordinate representations exhibit kinematic singularities for large rotations, the quaternion representation requires satisfaction of an extra constraint. This paper treats the attitude estimation and filtering problem as an optimizati...
In recent years, many surgical procedures have increasingly been replaced by interventional procedures that guide catheters into the arteries under X ray fluoroscopic guidance to perform a variety of operations such as ballooning, embolization, implantation of stents etc. The radiation exposure to patients and staff in such procedures is much higher than in simple radiographic examinations like X ray of chest or abdomen such that radiation induced skin injuries to patients and eye lens opacities among workers have been reported in the 1990's and after. Interventional procedures have grown both in frequency and importance during the last decade. This Coordinated Research Project (CRP) and TECDOC were developed within the International Atomic Energy Agency's (IAEA) framework of statutory responsibility to provide for the worldwide application of the standards for the protection of people against exposure to ionizing radiation. The CRP took place between 2003 and 2005 in six countries, with a view of optimizing the radiation protection of patients undergoing interventional procedures. The Fundamental Safety Principles and the International Basic Safety Standards for Protection against Ionizing Radiation (BSS) issued by the IAEA and co-sponsored by the Food and Agriculture Organization of the United Nations (FAO), the International Labour Organization (ILO), the World Health Organization (WHO), the Pan American Health Organization (PAHO) and the Nuclear Energy Agency (NEA), among others, require the radiation protection of patients undergoing medical exposures through justification of the procedures involved and through optimization. In keeping with its responsibility on the application of standards, the IAEA programme on Radiological Protection of Patients encourages the reduction of patient doses. To facilitate this, it has issued specific advice on the application of the BSS in the field of radiology in Safety Reports Series No. 39 and the three volumes on Radiation
Orimoto, Yuuichi; Aoki, Yuriko
2016-07-01
An automated property optimization method was developed based on the ab initio O(N) elongation (ELG) method and applied to the optimization of nonlinear optical (NLO) properties in DNA as a first test. The ELG method mimics a polymerization reaction on a computer, and the reaction terminal of a starting cluster is attacked by monomers sequentially to elongate the electronic structure of the system by solving in each step a limited space including the terminal (localized molecular orbitals at the terminal) and monomer. The ELG-finite field (ELG-FF) method for calculating (hyper-)polarizabilities was used as the engine program of the optimization method, and it was found to show linear scaling efficiency while maintaining high computational accuracy for a random sequenced DNA model. Furthermore, the self-consistent field convergence was significantly improved by using the ELG-FF method compared with a conventional method, and it can lead to more feasible NLO property values in the FF treatment. The automated optimization method successfully chose an appropriate base pair from four base pairs (A, T, G, and C) for each elongation step according to an evaluation function. From test optimizations for the first order hyper-polarizability (β) in DNA, a substantial difference was observed depending on optimization conditions between "choose-maximum" (choose a base pair giving the maximum β for each step) and "choose-minimum" (choose a base pair giving the minimum β). In contrast, there was an ambiguous difference between these conditions for optimizing the second order hyper-polarizability (γ) because of the small absolute value of γ and the limitation of numerical differential calculations in the FF method. It can be concluded that the ab initio level property optimization method introduced here can be an effective step towards an advanced computer aided material design method as long as the numerical limitation of the FF method is taken into account.
Orimoto, Yuuichi; Aoki, Yuriko
2016-07-14
An automated property optimization method was developed based on the ab initio O(N) elongation (ELG) method and applied to the optimization of nonlinear optical (NLO) properties in DNA as a first test. The ELG method mimics a polymerization reaction on a computer, and the reaction terminal of a starting cluster is attacked by monomers sequentially to elongate the electronic structure of the system by solving in each step a limited space including the terminal (localized molecular orbitals at the terminal) and monomer. The ELG-finite field (ELG-FF) method for calculating (hyper-)polarizabilities was used as the engine program of the optimization method, and it was found to show linear scaling efficiency while maintaining high computational accuracy for a random sequenced DNA model. Furthermore, the self-consistent field convergence was significantly improved by using the ELG-FF method compared with a conventional method, and it can lead to more feasible NLO property values in the FF treatment. The automated optimization method successfully chose an appropriate base pair from four base pairs (A, T, G, and C) for each elongation step according to an evaluation function. From test optimizations for the first order hyper-polarizability (β) in DNA, a substantial difference was observed depending on optimization conditions between "choose-maximum" (choose a base pair giving the maximum β for each step) and "choose-minimum" (choose a base pair giving the minimum β). In contrast, there was an ambiguous difference between these conditions for optimizing the second order hyper-polarizability (γ) because of the small absolute value of γ and the limitation of numerical differential calculations in the FF method. It can be concluded that the ab initio level property optimization method introduced here can be an effective step towards an advanced computer aided material design method as long as the numerical limitation of the FF method is taken into account. PMID:27421397
A perturbative extension to optimized coordinate vibrational self-consistent field (oc-VSCF) is proposed based on the quasi-degenerate perturbation theory (QDPT). A scheme to construct the degenerate space (P space) is developed, which incorporates degenerate configurations and alleviates the divergence of perturbative expansion due to localized coordinates in oc-VSCF (e.g., local O–H stretching modes of water). An efficient configuration selection scheme is also implemented, which screens out the Hamiltonian matrix element between the P space configuration (p) and the complementary Q space configuration (q) based on a difference in their quantum numbers (λpq = ∑s|ps − qs|). It is demonstrated that the second-order vibrational QDPT based on optimized coordinates (oc-VQDPT2) smoothly converges with respect to the order of the mode coupling, and outperforms the conventional one based on normal coordinates. Furthermore, an improved, fast algorithm is developed for optimizing the coordinates. First, the minimization of the VSCF energy is conducted in a restricted parameter space, in which only a portion of pairs of coordinates is selectively transformed. A rational index is devised for this purpose, which identifies the important coordinate pairs to mix from others that may remain unchanged based on the magnitude of harmonic coupling induced by the transformation. Second, a cubic force field (CFF) is employed in place of a quartic force field, which bypasses intensive procedures that arise due to the presence of the fourth-order force constants. It is found that oc-VSCF based on CFF together with the pair selection scheme yields the coordinates similar in character to the conventional ones such that the final vibrational energy is affected very little while gaining an order of magnitude acceleration. The proposed method is applied to ethylene and trans-1,3-butadiene. An accurate, multi-resolution potential, which combines the MP2 and coupled-cluster with singles
Hua-Ming Song
2011-01-01
Full Text Available This paper investigates the ordering decisions and coordination mechanism for a distributed short-life-cycle supply chain. The objective is to maximize the whole supply chain's expected profit and meanwhile make the supply chain participants achieve a Pareto improvement. We treat lead time as a controllable variable, thus the demand forecast is dependent on lead time: the shorter lead time, the better forecast. Moreover, optimal decision-making models for lead time and order quantity are formulated and compared in the decentralized and centralized cases. Besides, a three-parameter contract is proposed to coordinate the supply chain and alleviate the double margin in the decentralized scenario. In addition, based on the analysis of the models, we develop an algorithmic procedure to find the optimal ordering decisions. Finally, a numerical example is also presented to illustrate the results.
Jian-Qiang Luo; Hui Yang; Hua-Ming Song
2011-01-01
This paper investigates the ordering decisions and coordination mechanism for a distributed short-life-cycle supply chain. The objective is to maximize the whole supply chain's expected profit and meanwhile make the supply chain participants achieve a Pareto improvement. We treat lead time as a controllable variable, thus the demand forecast is dependent on lead time: the shorter lead time, the better forecast. Moreover, optimal decision-making models for lead time and order quantity are form...
Jip Kim
2016-07-01
Full Text Available The increasing penetration of plug-in electric vehicles (PEVs may cause a low-voltage problem in the distribution network. In particular, the introduction of charging stations where multiple PEVs are simultaneously charged at the same bus can aggravate the low-voltage problem. Unlike a distribution network operator (DNO who has the overall responsibility for stable and reliable network operation, a charging station operator (CSO may schedule PEV charging without consideration for the resulting severe voltage drop. Therefore, there is a need for the DNO to impose a coordination measure to induce the CSO to adjust its charging schedule to help mitigate the voltage problem. Although the current time-of-use (TOU tariff is an indirect coordination measure that can motivate the CSO to shift its charging demand to off-peak time by imposing a high rate at the peak time, it is limited by its rigidity in that the network voltage condition cannot be flexibly reflected in the tariff. Therefore, a flexible penalty contract (FPC for voltage security to be used as a direct coordination measure is proposed. In addition, the optimal coordinated management is formulated. Using the Pacific Gas and Electric Company (PG&E 69-bus test distribution network, the effectiveness of the coordination was verified by comparison with the current TOU tariff.
Teng, Pang-yu; Bagci, Ahmet Murat; Alperin, Noam
2009-02-01
A computer-aided method for finding an optimal imaging plane for simultaneous measurement of the arterial blood inflow through the 4 vessels leading blood to the brain by phase contrast magnetic resonance imaging is presented. The method performance is compared with manual selection by two observers. The skeletons of the 4 vessels for which centerlines are generated are first extracted. Then, a global direction of the relatively less curved internal carotid arteries is calculated to determine the main flow direction. This is then used as a reference direction to identify segments of the vertebral arteries that strongly deviates from the main flow direction. These segments are then used to identify anatomical landmarks for improved consistency of the imaging plane selection. An optimal imaging plane is then identified by finding a plane with the smallest error value, which is defined as the sum of the angles between the plane's normal and the vessel centerline's direction at the location of the intersections. Error values obtained using the automated and the manual methods were then compared using 9 magnetic resonance angiography (MRA) data sets. The automated method considerably outperformed the manual selection. The mean error value with the automated method was significantly lower than the manual method, 0.09+/-0.07 vs. 0.53+/-0.45, respectively (p<.0001, Student's t-test). Reproducibility of repeated measurements was analyzed using Bland and Altman's test, the mean 95% limits of agreements for the automated and manual method were 0.01~0.02 and 0.43~0.55 respectively.
Highlights: • Optimization of large-scale hydropower system in the Yangtze River basin. • Improved decomposition–coordination and discrete differential dynamic programming. • Generating initial solution randomly to reduce generation time. • Proposing relative coefficient for more power generation. • Proposing adaptive bias corridor technology to enhance convergence speed. - Abstract: With the construction of major hydro plants, more and more large-scale hydropower systems are taking shape gradually, which brings up a challenge to optimize these systems. Optimization of large-scale hydropower system (OLHS), which is to determine water discharges or water levels of overall hydro plants for maximizing total power generation when subjecting to lots of constrains, is a high dimensional, nonlinear and coupling complex problem. In order to solve the OLHS problem effectively, an improved decomposition–coordination and discrete differential dynamic programming (IDC–DDDP) method is proposed in this paper. A strategy that initial solution is generated randomly is adopted to reduce generation time. Meanwhile, a relative coefficient based on maximum output capacity is proposed for more power generation. Moreover, an adaptive bias corridor technology is proposed to enhance convergence speed. The proposed method is applied to long-term optimal dispatches of large-scale hydropower system (LHS) in the Yangtze River basin. Compared to other methods, IDC–DDDP has competitive performances in not only total power generation but also convergence speed, which provides a new method to solve the OLHS problem
Cantin, Greg T.; Shock, Teresa R.; Park, Sung Kyu; Madhani, Hiten D.; Yates, John R.
2007-01-01
An automated online multidimensional liquid chromatography system coupled to ESI-based tandem mass spectrometry was used to assess the effectiveness of TiO2 in the enrichment of phosphopeptides from tryptic digests of protein mixtures. By monitoring the enrichment of phosphopeptides, an optimized set of loading, wash, and elution conditions were realized for TiO2. A comparison of TiO2 with other resins used for phosphopeptide enrichment, Fe(III)-IMAC and ZrO2, was also carried out using trypt...
Vlachogiannis, Ioannis (John); Lee, KY
2009-01-01
In this paper an improved coordinated aggregation-based particle swarm optimization (ICA-PSO) algorithm is introduced for solving the optimal economic load dispatch (ELD) problem in power systems. In the ICA-PSO algorithm each particle in the swarm retains a memory of its best position ever...... particles search the decision space with accuracy up to two digit points resulting in the improved convergence of the process. The ICA-PSO algorithm is tested on a number of power systems, including the systems with 6, 13, 15, and 40 generating units, the island power system of Crete in Greece and the...... Hellenic bulk power system, and is compared with other state-of-the-art heuristic optimization techniques (HOTs), demonstrating improved performance over them....
Guiding automated NMR structure determination using a global optimization metric, the NMR DP score
Huang, Yuanpeng Janet, E-mail: yphuang@cabm.rutgers.edu; Mao, Binchen; Xu, Fei; Montelione, Gaetano T., E-mail: gtm@rutgers.edu [Rutgers, The State University of New Jersey, Department of Molecular Biology and Biochemistry, Center for Advanced Biotechnology and Medicine, and Northeast Structural Genomics Consortium (United States)
2015-08-15
ASDP is an automated NMR NOE assignment program. It uses a distinct bottom-up topology-constrained network anchoring approach for NOE interpretation, with 2D, 3D and/or 4D NOESY peak lists and resonance assignments as input, and generates unambiguous NOE constraints for iterative structure calculations. ASDP is designed to function interactively with various structure determination programs that use distance restraints to generate molecular models. In the CASD–NMR project, ASDP was tested and further developed using blinded NMR data, including resonance assignments, either raw or manually-curated (refined) NOESY peak list data, and in some cases {sup 15}N–{sup 1}H residual dipolar coupling data. In these blinded tests, in which the reference structure was not available until after structures were generated, the fully-automated ASDP program performed very well on all targets using both the raw and refined NOESY peak list data. Improvements of ASDP relative to its predecessor program for automated NOESY peak assignments, AutoStructure, were driven by challenges provided by these CASD–NMR data. These algorithmic improvements include (1) using a global metric of structural accuracy, the discriminating power score, for guiding model selection during the iterative NOE interpretation process, and (2) identifying incorrect NOESY cross peak assignments caused by errors in the NMR resonance assignment list. These improvements provide a more robust automated NOESY analysis program, ASDP, with the unique capability of being utilized with alternative structure generation and refinement programs including CYANA, CNS, and/or Rosetta.
Guiding automated NMR structure determination using a global optimization metric, the NMR DP score
ASDP is an automated NMR NOE assignment program. It uses a distinct bottom-up topology-constrained network anchoring approach for NOE interpretation, with 2D, 3D and/or 4D NOESY peak lists and resonance assignments as input, and generates unambiguous NOE constraints for iterative structure calculations. ASDP is designed to function interactively with various structure determination programs that use distance restraints to generate molecular models. In the CASD–NMR project, ASDP was tested and further developed using blinded NMR data, including resonance assignments, either raw or manually-curated (refined) NOESY peak list data, and in some cases 15N–1H residual dipolar coupling data. In these blinded tests, in which the reference structure was not available until after structures were generated, the fully-automated ASDP program performed very well on all targets using both the raw and refined NOESY peak list data. Improvements of ASDP relative to its predecessor program for automated NOESY peak assignments, AutoStructure, were driven by challenges provided by these CASD–NMR data. These algorithmic improvements include (1) using a global metric of structural accuracy, the discriminating power score, for guiding model selection during the iterative NOE interpretation process, and (2) identifying incorrect NOESY cross peak assignments caused by errors in the NMR resonance assignment list. These improvements provide a more robust automated NOESY analysis program, ASDP, with the unique capability of being utilized with alternative structure generation and refinement programs including CYANA, CNS, and/or Rosetta
GENPLAT: an automated platform for biomass enzyme discovery and cocktail optimization.
Walton, Jonathan; Banerjee, Goutami; Car, Suzana
2011-01-01
The high cost of enzymes for biomass deconstruction is a major impediment to the economic conversion of lignocellulosic feedstocks to liquid transportation fuels such as ethanol. We have developed an integrated high throughput platform, called GENPLAT, for the discovery and development of novel enzymes and enzyme cocktails for the release of sugars from diverse pretreatment/biomass combinations. GENPLAT comprises four elements: individual pure enzymes, statistical design of experiments, robotic pipeting of biomass slurries and enzymes, and automated colorimeteric determination of released Glc and Xyl. Individual enzymes are produced by expression in Pichia pastoris or Trichoderma reesei, or by chromatographic purification from commercial cocktails or from extracts of novel microorganisms. Simplex lattice (fractional factorial) mixture models are designed using commercial Design of Experiment statistical software. Enzyme mixtures of high complexity are constructed using robotic pipeting into a 96-well format. The measurement of released Glc and Xyl is automated using enzyme-linked colorimetric assays. Optimized enzyme mixtures containing as many as 16 components have been tested on a variety of feedstock and pretreatment combinations. GENPLAT is adaptable to mixtures of pure enzymes, mixtures of commercial products (e.g., Accellerase 1000 and Novozyme 188), extracts of novel microbes, or combinations thereof. To make and test mixtures of ˜10 pure enzymes requires less than 100 μg of each protein and fewer than 100 total reactions, when operated at a final total loading of 15 mg protein/g glucan. We use enzymes from several sources. Enzymes can be purified from natural sources such as fungal cultures (e.g., Aspergillus niger, Cochliobolus carbonum, and Galerina marginata), or they can be made by expression of the encoding genes (obtained from the increasing number of microbial genome sequences) in hosts such as E. coli, Pichia pastoris, or a filamentous fungus such
Jeong, Bong-Keun
2010-01-01
Digital piracy and the emergence of new distribution channels have changed the dynamics of supply chain coordination and created many interesting problems. There has been increased attention to understanding the phenomenon of consumer piracy behavior and its impact on supply chain profitability. The purpose of this dissertation is to better…
Ezra, Elishai; Maor, Idan; Bavli, Danny; Shalom, Itai; Levy, Gahl; Prill, Sebastian; Jaeger, Magnus S; Nahmias, Yaakov
2015-08-01
Microfluidic applications range from combinatorial synthesis to high throughput screening, with platforms integrating analog perfusion components, digitally controlled micro-valves and a range of sensors that demand a variety of communication protocols. Currently, discrete control units are used to regulate and monitor each component, resulting in scattered control interfaces that limit data integration and synchronization. Here, we present a microprocessor-based control unit, utilizing the MS Gadgeteer open framework that integrates all aspects of microfluidics through a high-current electronic circuit that supports and synchronizes digital and analog signals for perfusion components, pressure elements, and arbitrary sensor communication protocols using a plug-and-play interface. The control unit supports an integrated touch screen and TCP/IP interface that provides local and remote control of flow and data acquisition. To establish the ability of our control unit to integrate and synchronize complex microfluidic circuits we developed an equi-pressure combinatorial mixer. We demonstrate the generation of complex perfusion sequences, allowing the automated sampling, washing, and calibrating of an electrochemical lactate sensor continuously monitoring hepatocyte viability following exposure to the pesticide rotenone. Importantly, integration of an optical sensor allowed us to implement automated optimization protocols that require different computational challenges including: prioritized data structures in a genetic algorithm, distributed computational efforts in multiple-hill climbing searches and real-time realization of probabilistic models in simulated annealing. Our system offers a comprehensive solution for establishing optimization protocols and perfusion sequences in complex microfluidic circuits. PMID:26227212
Xueliang Huang
2013-01-01
Full Text Available As an important component of the smart grid, electric vehicles (EVs could be a good measure against energy shortages and environmental pollution. A main way of energy supply to EVs is to swap battery from the swap station. Based on the characteristics of EV battery swap station, the coordinated charging optimal control strategy is investigated to smooth the load fluctuation. Shuffled frog leaping algorithm (SFLA is an optimization method inspired by the memetic evolution of a group of frogs when seeking food. An improved shuffled frog leaping algorithm (ISFLA with the reflecting method to deal with the boundary constraint is proposed to obtain the solution of the optimal control strategy for coordinated charging. Based on the daily load of a certain area, the numerical simulations including the comparison of PSO and ISFLA are carried out and the results show that the presented ISFLA can effectively lower the peak-valley difference and smooth the load profile with the faster convergence rate and higher convergence precision.
Krejci, I; Piana, C.; Howitz, S.; Wegener, T; Fiedler, S.; ZWANZIG, M.; Schmitt, D.; Daum, N; Meier, K.; Lehr, C. M.; Batista, U; Zemljic, S; Messerschmidt, J.; Franzke, J; M. Wirth
2012-01-01
There is increasing demand for automated cell reprogramming in the fields of cell biology, biotechnology and the biomedical sciences. Microfluidic-based platforms that provide unattended manipulation of adherent cells promise to be an appropriate basis for cell manipulation. In this study we developed a magnetically driven cell carrier to serve as a vehicle within an in vitro environment. To elucidate the impact of the carrier on cells, biocompatibility was estimated using the human adenocarc...
Automated optimal glycaemic control using a physiology based pharmacokinetic, pharmacodynamic model
Schaller, Stephan
2015-01-01
After decades of research, Automated Glucose Control (AGC) is still out of reach for everyday control of blood glucose. The inter- and intra-individual variability of glucose dynamics largely arising from variability in insulin absorption, distribution, and action, and related physiological lag-times remain a core problem in the development of suitable control algorithms. Over the years, model predictive control (MPC) has established itself as the gold standard in AGC systems in research. Mod...
Optimal solutions for protection, control and automation in hydroelectric power systems
Fault statistics and a poll at the electricity network companies show that incorrect functions from protection, control- and automation equipment contribute relatively much to undelivered energy. Yet there is little focus on doing fault analyses and register such faults in FASIT (a Norwegian system for registration of faults and interruption). This is especially true of the distribution network 1 - 22 kV. This is where the potential of reducing the amount of undelivered energy by introducing various automatic means is greatest
Chen, Zhi-Song; Wang, Hui-Min
2012-01-01
The South-to-North Water Diversion (SNWD) Project is a significant engineering project meant to solve water shortage problems in North China. Faced with market operations management of the water diversion system, this study defined the supply chain system for the SNWD Project, considering the actual project conditions, built a decentralized decision model and a centralized decision model with strategic customer behavior (SCB) using a floating pricing mechanism (FPM), and constructed a coordin...
Mohammed Forhad UDDIN; Kazushi SANO
2012-01-01
In this paper, a supply chain with a coordination mechanism consisting of a single vendor and buyeris considered. Further, instead of a price sensitive linear or deterministic demand function, a price-sensitivenon-linear demand function is introduced. To find the inventory cost, penalty cost and transportation cost, it isassumed that the production and shipping functions of the vendor are continuously harmonized and occur at thesame rate. In this integrated supply chain, the Buyer’s Linear Pr...
Juravle I.
2013-02-01
Full Text Available The study related in this paper reflects the opinion of various experts regarding the importance in developing thecoordinative abilities level to improve selection system of elite athletes. These coordinative abilities can be viewed as the ability of aperson that performs actions with a high degree of difficulty, adjusting the movements in time and space and taking into account newsituations that occur.The main research method used in this paper is based on the literature studies in this area of interest, i.e. articles andpublications, manuals, tutorials, etc. Initially, the study began from the hypothesis that significant improvements can be observed inthe selection process of the young athlete’s when is take into account the development of the coordinative abilities.Analyzing the related work in this field, selection process is the decisive factor in creating the assumptions for achievinghigh performances in sport. Also, these researches provide criteria, samples and standards, features and models for initial and primaryselection process, and also for the selection of the Olympic or national athletes groups.The conclusion of this study shows that one of the most important criteria for athletes’ selection process is represented bytheir level of development of coordination abilities. Researches included in this paper also argue the importance of athlete’scoordination abilities development for selection process in different types of team sport games.
Hauser, F. D.; Szollosi, G. D.; Lakin, W. S.
1972-01-01
COEBRA, the Computerized Optimization of Elastic Booster Autopilots, is an autopilot design program. The bulk of the design criteria is presented in the form of minimum allowed gain/phase stability margins. COEBRA has two optimization phases: (1) a phase to maximize stability margins; and (2) a phase to optimize structural bending moment load relief capability in the presence of minimum requirements on gain/phase stability margins.
An Automated Fixed-Point Optimization Tool in MATLAB XSG/SynDSP Environment
Wang, Cheng C.; Changchun Shi; Robert W. Brodersen; Dejan Marković
2011-01-01
This paper presents an automated tool for floating-point to fixed-point conversion. The tool is based on previous work that was built in MATLAB/Simulink environment and Xilinx System Generator support. The tool is now extended to include Synplify DSP blocksets in a seamless way from the users' view point. In addition to FPGA area estimation, the tool now also includes ASIC area estimation for end-users who choose the ASIC flow. The tool minimizes hardware cost subject to mean-squared quantiza...
Optimal Geometrical Set for Automated Marker Placement to Virtualized Real-Time Facial Emotions.
Vasanthan Maruthapillai
Full Text Available In recent years, real-time face recognition has been a major topic of interest in developing intelligent human-machine interaction systems. Over the past several decades, researchers have proposed different algorithms for facial expression recognition, but there has been little focus on detection in real-time scenarios. The present work proposes a new algorithmic method of automated marker placement used to classify six facial expressions: happiness, sadness, anger, fear, disgust, and surprise. Emotional facial expressions were captured using a webcam, while the proposed algorithm placed a set of eight virtual markers on each subject's face. Facial feature extraction methods, including marker distance (distance between each marker to the center of the face and change in marker distance (change in distance between the original and new marker positions, were used to extract three statistical features (mean, variance, and root mean square from the real-time video sequence. The initial position of each marker was subjected to the optical flow algorithm for marker tracking with each emotional facial expression. Finally, the extracted statistical features were mapped into corresponding emotional facial expressions using two simple non-linear classifiers, K-nearest neighbor and probabilistic neural network. The results indicate that the proposed automated marker placement algorithm effectively placed eight virtual markers on each subject's face and gave a maximum mean emotion classification rate of 96.94% using the probabilistic neural network.
Optimal Geometrical Set for Automated Marker Placement to Virtualized Real-Time Facial Emotions.
Maruthapillai, Vasanthan; Murugappan, Murugappan
2016-01-01
In recent years, real-time face recognition has been a major topic of interest in developing intelligent human-machine interaction systems. Over the past several decades, researchers have proposed different algorithms for facial expression recognition, but there has been little focus on detection in real-time scenarios. The present work proposes a new algorithmic method of automated marker placement used to classify six facial expressions: happiness, sadness, anger, fear, disgust, and surprise. Emotional facial expressions were captured using a webcam, while the proposed algorithm placed a set of eight virtual markers on each subject's face. Facial feature extraction methods, including marker distance (distance between each marker to the center of the face) and change in marker distance (change in distance between the original and new marker positions), were used to extract three statistical features (mean, variance, and root mean square) from the real-time video sequence. The initial position of each marker was subjected to the optical flow algorithm for marker tracking with each emotional facial expression. Finally, the extracted statistical features were mapped into corresponding emotional facial expressions using two simple non-linear classifiers, K-nearest neighbor and probabilistic neural network. The results indicate that the proposed automated marker placement algorithm effectively placed eight virtual markers on each subject's face and gave a maximum mean emotion classification rate of 96.94% using the probabilistic neural network. PMID:26859884
Mohammed Forhad UDDIN
2012-01-01
Full Text Available In this paper, a supply chain with a coordination mechanism consisting of a single vendor and buyeris considered. Further, instead of a price sensitive linear or deterministic demand function, a price-sensitivenon-linear demand function is introduced. To find the inventory cost, penalty cost and transportation cost, it isassumed that the production and shipping functions of the vendor are continuously harmonized and occur at thesame rate. In this integrated supply chain, the Buyer’s Linear Program (LP, vendor’s Integer Program (IP andcoordinated Mixed Integer Program (MIP models are formulated. In this research, numerical example ispresented which includes the sensitivity of the key parameters to illustrate the models. The solution proceduresdemonstrate that the individual profit as well as joint profit could be increased by a coordination mechanismeven though the demand function is non-linear. In addition, the results illustrate that Buyer’s selling price, alongwith the consumers purchasing price, could be decreased, which may increase the demand of the end market.Finally, a conclusion
Meyers, Johan; Munters, Wim; Goit, Jay
2015-11-01
We investigate optimal control of wind-farm boundary layers, considering the individual wind turbines as flow actuators. By controlling the thrust coefficients of the turbines as function of time, the energy extraction can be dynamically regulated with the aim to optimally influence the flow field and the vertical energy transport. To this end, we use Large-Eddy Simulations (LES) of wind-farm boundary layers in a receding-horizon optimal control framework. Recently, the approach was applied to fully developed wind-farm boundary layers in a 7D by 6D aligned wind-turbine arrangement. For this case, energy extraction increased up to 16%, related to improved wake mixing by slightly anti-correlating the turbine thrust coefficient with the local wind speed at the turbine level. Here we discuss optimal control results for finite wind farms that are characterized by entrance effects and a developing internal boundary layer above the wind farm. Both aligned and staggered arrangement patterns are considered, and a range of different constraints on the controls is included. The authors acknowledge support from the European Research Council (FP7-Ideas, grant no. 306471). Simulations were performed on the infrastructure of the Flemish Supercomputer Center, funded by the Hercules Foundation and the Flemish Governement.
Mammography is an extremely useful non-invasive imaging technique with unparalleled advantages for the detection of breast cancer. It has played an immense role in the screening of women above a certain age or with a family history of breast cancer. The IAEA has a statutory responsibility to establish standards for the protection of people against exposure to ionizing radiation and to provide for the worldwide application of those standards. A fundamental requirement of the International Basic Safety Standards for Protection Against Ionizing Radiation (BSS) and for the Safety of Radiation Sources, issued by the IAEA and co-sponsored by FAO, ILO, WHO, PAHO and NEA, is the optimization of radiological protection of patients undergoing medical exposure. In keeping with its responsibility on the application of standards, the IAEA programme on Radiological Protection of Patients attempts to reduce radiation doses to patients while balancing quality assurance considerations. IAEA-TECDOC-796, Radiation Doses in Diagnostic Radiology and Methods for Dose Reduction (1995), addresses this aspect. The related IAEA-TECDOC-1423 on Optimization of the Radiological Protection of Patients undergoing Radiography, Fluoroscopy and Computed Tomography, (2004) constitutes the final report of the coordinated research in Africa, Asia and eastern Europe. The preceding publications do not explicitly consider mammography. Mindful of the importance of this imaging technique, the IAEA launched a Coordinated Research Project on Optimization of Protection in Mammography in some eastern European States. The present publication is the outcome of this project: it is aimed at evaluating the situation in a number of countries, identifying variations in the technique, examining the status of the equipment and comparing performance in the light of the norms established by the European Commission. A number of important aspects are covered, including: - quality control of mammography equipment; - imaging
The use of state-of-the-art numerical analysis tools to determine the optimal design of a radioactive material (RAM) transportation container is investigated. The design of a RAM package's components involves a complex coupling of structural, thermal, and radioactive shielding analyses. The final design must adhere to very strict design constraints. The current technique used by cask designers is uncoupled and involves designing each component separately with respect to its driving constraint. With the use of numerical optimization schemes, the complex couplings can be considered directly, and the performance of the integrated package can be maximized with respect to the analysis conditions. This can lead to more efficient package designs. Thermal and structural accident conditions are analyzed in the shape optimization of a simplified cask design. In this paper, details of the integration of numerical analysis tools, development of a process model, nonsmoothness difficulties with the optimization of the cask, and preliminary results are discussed
Vlachogiannis, Ioannis (John); Lee, K. Y.
2010-01-01
In this paper an improved coordinated aggregation-based particle swarm optimization algorithm is introduced for solving the optimal economic load dispatch problem in power systems. In the improved coordinated aggregation-based particle swarm optimization algorithm each particle in the swarm retains...... a memory of its best position ever encountered, and is attracted only by other particles with better achievements than its own with the exception of the particle with the best achievement, which moves randomly.The ICA-PSO algorithm is tested on a number of power systems, including the systems with 6...
Statistical Learning in Automated Troubleshooting: Application to LTE Interference Mitigation
Tiwana, Moazzam Islam; Altman, Zwi
2010-01-01
This paper presents a method for automated healing as part of off-line automated troubleshooting. The method combines statistical learning with constraint optimization. The automated healing aims at locally optimizing radio resource management (RRM) or system parameters of cells with poor performance in an iterative manner. The statistical learning processes the data using Logistic Regression (LR) to extract closed form (functional) relations between Key Performance Indicators (KPIs) and Radio Resource Management (RRM) parameters. These functional relations are then processed by an optimization engine which proposes new parameter values. The advantage of the proposed formulation is the small number of iterations required by the automated healing method to converge, making it suitable for off-line implementation. The proposed method is applied to heal an Inter-Cell Interference Coordination (ICIC) process in a 3G Long Term Evolution (LTE) network which is based on soft-frequency reuse scheme. Numerical simulat...
Nazia Bibi
2011-09-01
Full Text Available Many companies/organizations are trying to move towards automation and provide their workers with the internet facility on their mobile in order to carry out their routine tasks to save time and resources. The proposed system is based on GPRS technology aims to provide a solution to problem faced in carryout routine tasks considering mobility. The system is designed in a way that facilitates Workers/field staff get updates on their mobile phone regarding tasks at hand. This System is beneficial in a sense that it saves resources in term of time, human resources and cuts down the paper work. The proposed system has been developed in view of research study conducted in the software development and telecom industry and provides a high end solution to the customers/fieldworkers that use GPRS technology for transactions updates of databases.
Borot de Battisti, M.; Maenhout, M.; de Senneville, B. Denis; Hautvast, G.; Binnekamp, D.; Lagendijk, J. J. W.; van Vulpen, M.; Moerland, M. A.
2015-10-01
Focal high-dose-rate (HDR) for prostate cancer has gained increasing interest as an alternative to whole gland therapy as it may contribute to the reduction of treatment related toxicity. For focal treatment, optimal needle guidance and placement is warranted. This can be achieved under MR guidance. However, MR-guided needle placement is currently not possible due to space restrictions in the closed MR bore. To overcome this problem, a MR-compatible, single-divergent needle-implant robotic device is under development at the University Medical Centre, Utrecht: placed between the legs of the patient inside the MR bore, this robot will tap the needle in a divergent pattern from a single rotation point into the tissue. This rotation point is just beneath the perineal skin to have access to the focal prostate tumor lesion. Currently, there is no treatment planning system commercially available which allows optimization of the dose distribution with such needle arrangement. The aim of this work is to develop an automatic inverse dose planning optimization tool for focal HDR prostate brachytherapy with needle insertions in a divergent configuration. A complete optimizer workflow is proposed which includes the determination of (1) the position of the center of rotation, (2) the needle angulations and (3) the dwell times. Unlike most currently used optimizers, no prior selection or adjustment of input parameters such as minimum or maximum dose or weight coefficients for treatment region and organs at risk is required. To test this optimizer, a planning study was performed on ten patients (treatment volumes ranged from 8.5 cm3to 23.3 cm3) by using 2-14 needle insertions. The total computation time of the optimizer workflow was below 20 min and a clinically acceptable plan was reached on average using only four needle insertions.
Borot de Battisti, M; Maenhout, M; Denis de Senneville, B; Hautvast, G; Binnekamp, D; Lagendijk, J J W; van Vulpen, M; Moerland, M A
2015-10-01
Focal high-dose-rate (HDR) for prostate cancer has gained increasing interest as an alternative to whole gland therapy as it may contribute to the reduction of treatment related toxicity. For focal treatment, optimal needle guidance and placement is warranted. This can be achieved under MR guidance. However, MR-guided needle placement is currently not possible due to space restrictions in the closed MR bore. To overcome this problem, a MR-compatible, single-divergent needle-implant robotic device is under development at the University Medical Centre, Utrecht: placed between the legs of the patient inside the MR bore, this robot will tap the needle in a divergent pattern from a single rotation point into the tissue. This rotation point is just beneath the perineal skin to have access to the focal prostate tumor lesion. Currently, there is no treatment planning system commercially available which allows optimization of the dose distribution with such needle arrangement. The aim of this work is to develop an automatic inverse dose planning optimization tool for focal HDR prostate brachytherapy with needle insertions in a divergent configuration. A complete optimizer workflow is proposed which includes the determination of (1) the position of the center of rotation, (2) the needle angulations and (3) the dwell times. Unlike most currently used optimizers, no prior selection or adjustment of input parameters such as minimum or maximum dose or weight coefficients for treatment region and organs at risk is required. To test this optimizer, a planning study was performed on ten patients (treatment volumes ranged from 8.5 cm(3)to 23.3 cm(3)) by using 2-14 needle insertions. The total computation time of the optimizer workflow was below 20 min and a clinically acceptable plan was reached on average using only four needle insertions. PMID:26378657
A Unified Approach towards Decomposition and Coordination for Multi-level Optimization
de Wit, A. J.
2009-01-01
Complex systems, such as those encountered in aerospace engineering, can typically be considered as a hierarchy of individual coupled elements. This hierarchy is reflected in the analysis techniques that are used to analyze the physcial characteristics of the system. Consequently, a hierarchy of coupled models is to be used, accounting for different physical scales, components and/or disciplines. Numerical optimization of complex systems with embedded hierarchy is accomplished via multi-level...
Bothe, Wolfgang; Schubert, Harald; Diab, Mahmoud; Faerber, Gloria; Bettag, Christoph; Jiang, Xiaoyan; Fischer, Martin S; Denzler, Joachim; Doenst, Torsten
2016-01-01
Purpose Recently, algorithms were developed to track radiopaque markers in the heart fully automated. However, the methodology did not allow to assign the exact anatomical location to each marker. In this case study we describe the steps from the generation of three-dimensional marker coordinates to quantitative data analyses in an in vivo ovine model. Methods In one adult sheep, twenty silver balls were sutured to the right side of the heart: 10 to the tricuspid annulus, one to the anterior ...
Yan, Wei [State Key Laboratory of Power Transmission Equipment and System Security and New Technology, College of Electrical Engineering, Chongqing University, Chongqing 400030 (China); Wen, Lili [Test and Research Institute of Chongqing Electric Power, Chongqing 401123 (China); Li, W. [British Columbia Transmission Corporation, Vancouver, BC (Canada); Chung, C.Y. [State Key Laboratory of Power Transmission Equipment and System Security and New Technology, College of Electrical Engineering, Chongqing University, Chongqing 400030 (China); Department of Electrical Engineering, The Hong Kong Polytechnic University, Hung Hom, Kowloon (China); Wong, K.P. [Department of Electrical Engineering, The Hong Kong Polytechnic University, Hung Hom, Kowloon (China)
2011-01-15
A decomposition-coordination interior point method (DIPM) is presented and applied to the multi-area optimal reactive power flow (ORPF) problem in this paper. In the method, the area distributed ORPF problem is first formed by introducing duplicated border variables. Then the nonlinear primal dual interior point method (IPM) is directly applied to the distributed ORPF problem in which a Newton system with border-matrix-blocks is formulated. Finally the overall ORPF problem is solved in decomposition iterations with the Newton system being decoupled. The proposed DIPM inherits the good performance of the traditional IPM with a feature appropriate for distributed calculations among multiple areas. It can be easily extended to other distributed optimization problems of power systems. Numeric results of five IEEE Test Systems are demonstrated and comparisons are made with those obtained using the traditional auxiliary problem principle (APP) method. The results show that the DIPM for the multi-area OPRF problem requires less iterations and CPU time, has better stability in convergence, and reaches better optimality compared to the traditional auxiliary problem principle method. (author)
SU-E-J-130: Automating Liver Segmentation Via Combined Global and Local Optimization
Li, Dengwang; Wang, Jie [College of Physics and Electronics, Shandong Normal University, Jinan, Shandong (China); Kapp, Daniel S.; Xing, Lei [Department of Radiation Oncology, Stanford University School of Medicine, Stanford, CA (United States)
2015-06-15
Purpose: The aim of this work is to develop a robust algorithm for accurate segmentation of liver with special attention paid to the problems with fuzzy edges and tumor. Methods: 200 CT images were collected from radiotherapy treatment planning system. 150 datasets are selected as the panel data for shape dictionary and parameters estimation. The remaining 50 datasets were used as test images. In our study liver segmentation was formulated as optimization process of implicit function. The liver region was optimized via local and global optimization during iterations. Our method consists five steps: 1)The livers from the panel data were segmented manually by physicians, and then We estimated the parameters of GMM (Gaussian mixture model) and MRF (Markov random field). Shape dictionary was built by utilizing the 3D liver shapes. 2)The outlines of chest and abdomen were located according to rib structure in the input images, and the liver region was initialized based on GMM. 3)The liver shape for each 2D slice was adjusted using MRF within the neighborhood of liver edge for local optimization. 4)The 3D liver shape was corrected by employing SSR (sparse shape representation) based on liver shape dictionary for global optimization. Furthermore, H-PSO(Hybrid Particle Swarm Optimization) was employed to solve the SSR equation. 5)The corrected 3D liver was divided into 2D slices as input data of the third step. The iteration was repeated within the local optimization and global optimization until it satisfied the suspension conditions (maximum iterations and changing rate). Results: The experiments indicated that our method performed well even for the CT images with fuzzy edge and tumors. Comparing with physician delineated results, the segmentation accuracy with the 50 test datasets (VOE, volume overlap percentage) was on average 91%–95%. Conclusion: The proposed automatic segmentation method provides a sensible technique for segmentation of CT images. This work is
SU-E-J-130: Automating Liver Segmentation Via Combined Global and Local Optimization
Purpose: The aim of this work is to develop a robust algorithm for accurate segmentation of liver with special attention paid to the problems with fuzzy edges and tumor. Methods: 200 CT images were collected from radiotherapy treatment planning system. 150 datasets are selected as the panel data for shape dictionary and parameters estimation. The remaining 50 datasets were used as test images. In our study liver segmentation was formulated as optimization process of implicit function. The liver region was optimized via local and global optimization during iterations. Our method consists five steps: 1)The livers from the panel data were segmented manually by physicians, and then We estimated the parameters of GMM (Gaussian mixture model) and MRF (Markov random field). Shape dictionary was built by utilizing the 3D liver shapes. 2)The outlines of chest and abdomen were located according to rib structure in the input images, and the liver region was initialized based on GMM. 3)The liver shape for each 2D slice was adjusted using MRF within the neighborhood of liver edge for local optimization. 4)The 3D liver shape was corrected by employing SSR (sparse shape representation) based on liver shape dictionary for global optimization. Furthermore, H-PSO(Hybrid Particle Swarm Optimization) was employed to solve the SSR equation. 5)The corrected 3D liver was divided into 2D slices as input data of the third step. The iteration was repeated within the local optimization and global optimization until it satisfied the suspension conditions (maximum iterations and changing rate). Results: The experiments indicated that our method performed well even for the CT images with fuzzy edge and tumors. Comparing with physician delineated results, the segmentation accuracy with the 50 test datasets (VOE, volume overlap percentage) was on average 91%–95%. Conclusion: The proposed automatic segmentation method provides a sensible technique for segmentation of CT images. This work is
Optimization of genes important to production of fuel ethanol from hemicellulosic biomass for use in developing improved commercial yeast strains is necessary to meet the rapidly expanding need for ethanol. The United States Department of Agriculture has developed a fully automated platform for mol...
The optimization of various genes is important in cellulosic fuel ethanol production from S. cerevisiae to meet the rapidly expanding need for ethanol derived from hemicellulosic materials. The United States Department of Agriculture has developed a fully automated platform for molecular biology ro...
Rumbell, Timothy H; Draguljić, Danel; Yadav, Aniruddha; Hof, Patrick R; Luebke, Jennifer I; Weaver, Christina M
2016-08-01
Conductance-based compartment modeling requires tuning of many parameters to fit the neuron model to target electrophysiological data. Automated parameter optimization via evolutionary algorithms (EAs) is a common approach to accomplish this task, using error functions to quantify differences between model and target. We present a three-stage EA optimization protocol for tuning ion channel conductances and kinetics in a generic neuron model with minimal manual intervention. We use the technique of Latin hypercube sampling in a new way, to choose weights for error functions automatically so that each function influences the parameter search to a similar degree. This protocol requires no specialized physiological data collection and is applicable to commonly-collected current clamp data and either single- or multi-objective optimization. We applied the protocol to two representative pyramidal neurons from layer 3 of the prefrontal cortex of rhesus monkeys, in which action potential firing rates are significantly higher in aged compared to young animals. Using an idealized dendritic topology and models with either 4 or 8 ion channels (10 or 23 free parameters respectively), we produced populations of parameter combinations fitting the target datasets in less than 80 hours of optimization each. Passive parameter differences between young and aged models were consistent with our prior results using simpler models and hand tuning. We analyzed parameter values among fits to a single neuron to facilitate refinement of the underlying model, and across fits to multiple neurons to show how our protocol will lead to predictions of parameter differences with aging in these neurons. PMID:27106692
Analytical study on coordinative optimization of convection in tubes with variable heat flux
无
2004-01-01
［1］Guo, Z. Y., Li, D. Y., Wang, B. X., A novel concept for convective heat transfer enhancement, Int. J. Heat Mass Transfer, 1998, 41: 2221-2225.［2］Tao, W. Q., Guo, Z. Y., Wang, B. X., Field synergy principle for enhancing convective heat transfer--extension and numerical verification, Int. J. Heat Mass Transfer, 2002, 45: 3849-3856.［3］Guo, Z. Y., Mechanism and control of convective heat transfer--Coordination of velocity and heat flow fields, Chinese Science Bulletin, 2001, 46(7): 596-599.［4］Sellars, J. R., Tribus, M., Klein, J. S., Heat transfer to laminar flow in a round tubes or flat conduit--The Graetz problem extended, Tras. ASME, 1956, 78: 441-448.［5］Kays, W. M., Crawford, M. E., Convective Heat Transfer, 3rd ed., Chapter 9, New York: McGraw-Hill Inc., 1993.［6］Shah, R. K., London, A. L., Laminar Flow Forced Convection in Ducts, Advances in Heat Transfer, New York: Academic Press, 1978.
Optimization of quantum systems by closed-loop adaptive pulse shaping offers a rich domain for the development and application of specialized evolutionary algorithms. Derandomized evolution strategies (DESs) are presented here as a robust class of optimizers for experimental quantum control. The combination of stochastic and quasi-local search embodied by these algorithms is especially amenable to the inherent topology of quantum control landscapes. Implementation of DES in the laboratory results in efficiency gains of up to ∼9 times that of the standard genetic algorithm, and thus is a promising tool for optimization of unstable or fragile systems. The statistical learning upon which these algorithms are predicated also provide the means for obtaining a control problem's Hessian matrix with no additional experimental overhead. The forced optimal covariance adaptive learning (FOCAL) method is introduced to enable retrieval of the Hessian matrix, which can reveal information about the landscape's local structure and dynamic mechanism. Exploitation of such algorithms in quantum control experiments should enhance their efficiency and provide additional fundamental insights.
SWANS: A Prototypic SCALE Criticality Sequence for Automated Optimization Using the SWAN Methodology
Greenspan, E.
2001-01-11
SWANS is a new prototypic analysis sequence that provides an intelligent, semi-automatic search for the maximum k{sub eff} of a given amount of specified fissile material, or of the minimum critical mass. It combines the optimization strategy of the SWAN code with the composition-dependent resonance self-shielded cross sections of the SCALE package. For a given system composition arrived at during the iterative optimization process, the value of k{sub eff} is as accurate and reliable as obtained using the CSAS1X Sequence of SCALE-4.4. This report describes how SWAN is integrated within the SCALE system to form the new prototypic optimization sequence, describes the optimization procedure, provides a user guide for SWANS, and illustrates its application to five different types of problems. In addition, the report illustrates that resonance self-shielding might have a significant effect on the maximum k{sub eff} value a given fissile material mass can have.
An Optimized Clustering Approach for Automated Detection of White Matter Lesions in MRI Brain Images
M. Anitha
2012-04-01
Full Text Available Settings White Matter lesions (WMLs are small areas of dead cells found in parts of the brain. In general, it is difficult for medical experts to accurately quantify the WMLs due to decreased contrast between White Matter (WM and Grey Matter (GM. The aim of this paper is to
automatically detect the White Matter Lesions which is present in the brains of elderly people. WML detection process includes the following stages: 1. Image preprocessing, 2. Clustering (Fuzzy c-means clustering, Geostatistical Possibilistic clustering and Geostatistical Fuzzy clustering and 3.Optimization using Particle Swarm Optimization (PSO. The proposed system is tested on a database of 208 MRI images. GFCM yields high sensitivity of 89%, specificity of 94% and overall accuracy of 93% over FCM and GPC. The clustered brain images are then subjected to Particle Swarm Optimization (PSO. The optimized result obtained from GFCM-PSO provides sensitivity of 90%, specificity of 94% and accuracy of 95%. The detection results reveals that GFCM and GFCMPSO better localizes the large regions of lesions and gives less false positive rate when compared to GPC and GPC-PSO which captures the largest loads of WMLs only in the upper ventral horns of the brain.
With the increasing need for higher accuracy measurement in computer vision, the precision of camera calibration is a more important factor. The objective of stereo camera calibration is to estimate the intrinsic and extrinsic parameters of each camera. We presented a high-accurate technique to calibrate binocular stereo vision system having been mounted the locations and attitudes, which was realized by combining nonlinear optimization method with accurate calibration points. The calibration points with accurate coordinates, were formed by an infrared LED moved with three-dimensional coordinate measurement machine, which can ensure indeterminacy of measurement is 1/30000. By using bilinear interpolation square-gray weighted centroid location algorithm, the imaging centers of the calibration points can be accurately determined. The accuracy of the calibration is measured in terms of the accuracy in the reconstructing calibration points through triangulation, the mean distance between reconstructing point and given calibration point is 0.039mm. The technique can satisfy the goals of measurement and camera accurate calibration
This TECDOC summarizes the results of a Co-ordinated Research Project (CRP) on the Use of Nuclear Techniques for Optimizing Fertilizer Application under Irrigated Wheat to Increase the Efficient Use of Nitrogen Fertilizer and Consequently Reduce Environmental Pollution. The project was carried out between 1994 and 1998 through the technical co-ordination of the Joint FAO/IAEA Division of Nuclear Techniques in Food and Agriculture. Fourteen Member States of the IAEA and FAO carried out a series of field experiments aimed at improving irrigation water and fertilizer-N uptake efficiencies through integrated management of the complex Interactions involving inputs, soils, climate, and wheat cultivars. Its goals were: to investigate various aspects of fertilizer N uptake efficiency of wheat crops under irrigation through an interregional research network involving countries growing large areas of irrigated wheat; to use 15N and the soil-moisture neutron probe to determine the fate of applied N, to follow water and nitrate movement in the soil, and to determine water balance and water-use efficiency in irrigated wheat cropping systems; to use the data generated to further develop and refine various relationships in the Ceres-Wheat computer simulation model; to use the knowledge generated to produce a N-rate-recommendation package to refine specific management strategies with respect to fertilizer applications and expected yields
Gunduz, Mustafa Emre
Many government agencies and corporations around the world have found the unique capabilities of rotorcraft indispensable. Incorporating such capabilities into rotorcraft design poses extra challenges because it is a complicated multidisciplinary process. The concept of applying several disciplines to the design and optimization processes may not be new, but it does not currently seem to be widely accepted in industry. The reason for this might be the lack of well-known tools for realizing a complete multidisciplinary design and analysis of a product. This study aims to propose a method that enables engineers in some design disciplines to perform a fairly detailed analysis and optimization of a design using commercially available software as well as codes developed at Georgia Tech. The ultimate goal is when the system is set up properly, the CAD model of the design, including all subsystems, will be automatically updated as soon as a new part or assembly is added to the design; or it will be updated when an analysis and/or an optimization is performed and the geometry needs to be modified. Designers and engineers will be involved in only checking the latest design for errors or adding/removing features. Such a design process will take dramatically less time to complete; therefore, it should reduce development time and costs. The optimization method is demonstrated on an existing helicopter rotor originally designed in the 1960's. The rotor is already an effective design with novel features. However, application of the optimization principles together with high-speed computing resulted in an even better design. The objective function to be minimized is related to the vibrations of the rotor system under gusty wind conditions. The design parameters are all continuous variables. Optimization is performed in a number of steps. First, the most crucial design variables of the objective function are identified. With these variables, Latin Hypercube Sampling method is used
Lin, Mai; Ranganathan, David; Mori, Tetsuya; Hagooly, Aviv; Rossin, Raffaella; Welch, Michael J; Lapi, Suzanne E
2012-10-01
Interest in using (68)Ga is rapidly increasing for clinical PET applications due to its favorable imaging characteristics and increased accessibility. The focus of this study was to provide our long-term evaluations of the two TiO(2)-based (68)Ge/(68)Ga generators and develop an optimized automation strategy to synthesize [(68)Ga]DOTATOC by using HEPES as a buffer system. This data will be useful in standardizing the evaluation of (68)Ge/(68)Ga generators and automation strategies to comply with regulatory issues for clinical use. PMID:22897970
Automation of reverse engineering process in aircraft modeling and related optimization problems
Li, W.; Swetits, J.
1994-01-01
During the year of 1994, the engineering problems in aircraft modeling were studied. The initial concern was to obtain a surface model with desirable geometric characteristics. Much of the effort during the first half of the year was to find an efficient way of solving a computationally difficult optimization model. Since the smoothing technique in the proposal 'Surface Modeling and Optimization Studies of Aerodynamic Configurations' requires solutions of a sequence of large-scale quadratic programming problems, it is important to design algorithms that can solve each quadratic program in a few interactions. This research led to three papers by Dr. W. Li, which were submitted to SIAM Journal on Optimization and Mathematical Programming. Two of these papers have been accepted for publication. Even though significant progress has been made during this phase of research and computation times was reduced from 30 min. to 2 min. for a sample problem, it was not good enough for on-line processing of digitized data points. After discussion with Dr. Robert E. Smith Jr., it was decided not to enforce shape constraints in order in order to simplify the model. As a consequence, P. Dierckx's nonparametric spline fitting approach was adopted, where one has only one control parameter for the fitting process - the error tolerance. At the same time the surface modeling software developed by Imageware was tested. Research indicated a substantially improved fitting of digitalized data points can be achieved if a proper parameterization of the spline surface is chosen. A winning strategy is to incorporate Dierckx's surface fitting with a natural parameterization for aircraft parts. The report consists of 4 chapters. Chapter 1 provides an overview of reverse engineering related to aircraft modeling and some preliminary findings of the effort in the second half of the year. Chapters 2-4 are the research results by Dr. W. Li on penalty functions and conjugate gradient methods for
Distributed Co-ordinator Model for Optimal Utilization of Software and Piracy Prevention
S. Zeeshan Hussain
2010-01-01
Full Text Available Today the software technologies have evolved it to the extent that now a customer can have free and open source software available in the market. But with this evolution the menace of software piracy has also evolved. Unlike other things a customer purchases, the software applications and fonts bought don't belong to the specified user. Instead, the customer becomes a licensed user — means the customer purchases the right to use the software on a single computer, and can't put copies on other machines or pass that software along to colleagues. Software piracy is the illegal distribution and/or reproduction of software applications for business or personal use. Whether software piracy is deliberate or not, it is still illegal and punishable by law. The major reasons of piracy include the high cost of software and the rigid licensing structure which is becoming even less popular due to inefficient software utilization. Various software companies are inclined towards the research of techniques to handle this problem of piracy. Many defense mechanisms have been devised till date but the hobbyists or the black market leaders (so called “software pirates” have always found a way out of it. This paper identifies the types of piracies and licensing mechanisms along with the flaws in the existing defense mechanisms and examines social and technical challenges associated with handling software piracy prevention. The goal of this paper is to design, implement and empirically evaluate a comprehensive framework for software piracy prevention and optimal utilization of the software.
Optimal part and module selection for synthetic gene circuit design automation.
Huynh, Linh; Tagkopoulos, Ilias
2014-08-15
An integral challenge in synthetic circuit design is the selection of optimal parts to populate a given circuit topology, so that the resulting circuit behavior best approximates the desired one. In some cases, it is also possible to reuse multipart constructs or modules that have been already built and experimentally characterized. Efficient part and module selection algorithms are essential to systematically search the solution space, and their significance will only increase in the following years due to the projected explosion in part libraries and circuit complexity. Here, we address this problem by introducing a structured abstraction methodology and a dynamic programming-based algorithm that guaranties optimal part selection. In addition, we provide three extensions that are based on symmetry check, information look-ahead and branch-and-bound techniques, to reduce the running time and space requirements. We have evaluated the proposed methodology with a benchmark of 11 circuits, a database of 73 parts and 304 experimentally constructed modules with encouraging results. This work represents a fundamental departure from traditional heuristic-based methods for part and module selection and is a step toward maximizing efficiency in synthetic circuit design and construction. PMID:24933033
The rearing of tsetse flies for the sterile insect technique has been a laborious procedure in the past. The purpose of this co-ordinated research project (CRP) 'Automation for tsetse mass rearing for use in sterile insect technique programmes' was to develop appropriate semiautomated procedures to simplify the rearing, reduce the cost and standardize the product. Two main objectives were accomplished. The first was to simplify the handling of adults at emergence. This was achieved by allowing the adults to emerge directly into the production cages. Selection of the appropriate environmental conditions and timing allowed the manipulation of the emergence pattern to achieve the desired ratio of four females to one male with minimal un-emerged females remaining mixed with the male pupae. Tests demonstrated that putting the sexes together at emergence, leaving the males in the production cages, and using a ratio of 4:1 (3:1 for a few species) did not adversely affect pupal production. This has resulted in a standardized system for the self stocking of production cages. The second was to reduce the labour involved in feeding the flies. Three distinct systems were developed and tested in sequence. The first tsetse production unit (TPU 1) was a fully automated system, but the fly survival and fecundity were unacceptably low. From this a simpler TPU 2 was developed and tested, where 63 large cages were held on a frame that could be moved as a single unit to the feeding location. TPU 2 was tested in various locations, and found to satisfy the basic requirements, and the adoption of Plexiglas pupal collection slopes resolved much of the problem due to light distribution. However the cage holding frame was heavy and difficult to position on the feeding frame and the movement disturbed the flies. TPU 2 was superseded by TPU 3, in which the cages remain stationary at all times, and the blood is brought to the flies. The blood feeding system is mounted on rails to make it
Optimized Energy Management of a Single-House Residential Micro-Grid With Automated Demand Response
Anvari-Moghaddam, Amjad; Monsef, Hassan; Rahimi-Kian, Ashkan;
2015-01-01
In this paper, an intelligent multi-objective energy management system (MOEMS) is proposed for applications in residential LVAC micro-grids where households are equipped with smart appliances, such as washing machine, dishwasher, tumble dryer and electric heating and they have the capability to t...... reduce residential energy use and improve the user’s satisfaction degree by optimal management of demand/generation sides.......In this paper, an intelligent multi-objective energy management system (MOEMS) is proposed for applications in residential LVAC micro-grids where households are equipped with smart appliances, such as washing machine, dishwasher, tumble dryer and electric heating and they have the capability to...... take part in demand response (DR) programs. The superior performance and efficiency of the proposed system is studied through several scenarios and case studies and validated in comparison with the conventional models. The simulation results demonstrate that the proposed MOEMS has the capability to...
A hybrid systems strategy for automated spacecraft tour design and optimization
Stuart, Jeffrey R.
As the number of operational spacecraft increases, autonomous operations is rapidly evolving into a critical necessity. Additionally, the capability to rapidly generate baseline trajectories greatly expands the range of options available to analysts as they explore the design space to meet mission demands. Thus, a general strategy is developed, one that is suitable for the construction of flight plans for both Earth-based and interplanetary spacecraft that encounter multiple objects, where these multiple encounters comprise a ``tour''. The proposed scheme is flexible in implementation and can readily be adjusted to a variety of mission architectures. Heuristic algorithms that autonomously generate baseline tour trajectories and, when appropriate, adjust reference solutions in the presence of rapidly changing environments are investigated. Furthermore, relative priorities for ranking the targets are explicitly accommodated during the construction of potential tour sequences. As a consequence, a priori, as well as newly acquired, knowledge concerning the target objects enhances the potential value of the ultimate encounter sequences. A variety of transfer options are incorporated, from rendezvous arcs enabled by low-thrust engines to more conventional impulsive orbit adjustments via chemical propulsion technologies. When advantageous, trajectories are optimized in terms of propellant consumption via a combination of indirect and direct methods; such a combination of available technologies is an example of hybrid optimization. Additionally, elements of hybrid systems theory, i.e., the blending of dynamical states, some discrete and some continuous, are integrated into the high-level tour generation scheme. For a preliminary investigation, this strategy is applied to mission design scenarios for a Sun-Jupiter Trojan asteroid tour as well as orbital debris removal for near-Earth applications.
Supatchaya Chotayakul
2013-01-01
Full Text Available This research studies a cash inventory problem in an ATM Network to satisfy customerâs cash needs over multiple periods with deterministic demand. The objective is to determine the amount of money to place in Automated Teller Machines (ATMs and cash centers for each period over a given time horizon. The algorithms are designed as a multi-echelon inventory problem with single-item capacitated lot-sizing to minimize total costs of running ATM network. In this study, we formulate the problem as a Mixed Integer Program (MIP and develop an approach based on reformulating the model as a shortest path formulation for finding a near-optimal solution of the problem. This reformulation is the same as the traditional model, except the capacity constraints, inventory balance constraints and setup constraints related to the management of the money in ATMs are relaxed. This new formulation gives more variables and constraints, but has a much tighter linear relaxation than the original and is faster to solve for short term planning. Computational results show its effectiveness, especially for large sized problems.
van 't Klooster, Ronald; Patterson, Andrew J; Young, Victoria E; Gillard, Jonathan H; Reiber, Johan H C; van der Geest, Rob J
2013-01-01
A typical MR imaging protocol to study the status of atherosclerosis in the carotid artery consists of the application of multiple MR sequences. Since scanner time is limited, a balance has to be reached between the duration of the applied MR protocol and the quantity and quality of the resulting images which are needed to assess the disease. In this study an objective method to optimize the MR sequence set for classification of soft plaque in vessel wall images of the carotid artery using automated image segmentation was developed. The automated method employs statistical pattern recognition techniques and was developed based on an extensive set of MR contrast weightings and corresponding manual segmentations of the vessel wall and soft plaque components, which were validated by histological sections. Evaluation of the results from nine contrast weightings showed the tradeoff between scan duration and automated image segmentation performance. For our dataset the best segmentation performance was achieved by selecting five contrast weightings. Similar performance was achieved with a set of three contrast weightings, which resulted in a reduction of scan time by more than 60%. The presented approach can help others to optimize MR imaging protocols by investigating the tradeoff between scan duration and automated image segmentation performance possibly leading to shorter scanning times and better image interpretation. This approach can potentially also be applied to other research fields focusing on different diseases and anatomical regions. PMID:24194941
Zhao, Z. G.; Chen, H. J.; Zhen, Z. X.; Yang, Y. Y.
2014-06-01
As for the self-developed six-speed dry dual-clutch transmission (DCT), the optimal torque-coordinated control strategy between engine and dual clutches is proposed to resolve the problem of launching with twin clutches simultaneously involved based on the minimum value principle. Focusing on the sliding friction phase of the launching process, dynamics equations of dry DCT with two intermediate shafts are firstly established, and then the optimal transmitting torque variation rate and the driven plate's rotating speed of dual clutches are deduced by using the minimum value principle, in which the jerk intensity and friction work are taken as the performance indexes, and the terminal constraints of state variables are determined according to the driver's launching intention. Besides, the separating conditions of non-target gear clutch and the torque distributing relations of twin clutches are derived from the launching control targets that guarantee the approximately equal friction extent of two clutches and no power cycle. After the synchronisation of driving and driven plates of on-coming clutch, the output torque of engine is smoothly switched to the driver's demand level. Furthermore, launching the simulation model of the dry DCT vehicle is set up on the Matlab/Simulink platform. Simulation results indicate that the proposed launching control strategy not only can effectively reflect the driver's intention and extend the life span of twin clutches, but also obtain an excellent launching quality. Finally, the torque control laws of two clutches obtained through the simulation are transformed into clutch position control laws for the future realisation in the real car, and the closed-loop position controls of twin clutches in the launching process are conducted on the test bench with two sets of clutch actuator, obtaining preferable tracking effects.
Highlights: • Formulate probabilistic OPF with VPE, multi-fuel options, POZs, FOR of CHP units. • Propose a new powerful optimization method based on enhanced black hole algorithm. • Coordinate of TUs, WPPs, PVs and CHP units together in the proposed problem. • Evaluate the impacts of inputs’ uncertainties and their correlations on the POPF. • Use the 2m + 1 point estimated method. - Abstract: This paper addresses a novel probabilistic optimisation framework for handling power system uncertainties in the optimal power flow (OPF) problem that considers all the essential factors of great impact in the OPF problem. The object is to study and model the correlation and fluctuation of load demands, photovoltaic (PV) and wind power plants (WPPs) which have an important influence on transmission lines and bus voltages. Moreover, as an important tool of saving waste heat energy in the thermoelectric power plant, the power networks share of combined heat and power (CHP) has increased dramatically in the past decade. So, the probabilistic OPF (POPF) problem considering valve point effects, multi-fuel options and prohibited zones of thermal units (TUs) is firstly formulated. The PV, WPP and CHP units are also modeled. Then, a new method utilizing enhanced binary black hole (EBBH) algorithm and 2m + 1 point estimated method is proposed to solve this problem and to handle the random nature of solar irradiance, wind speed and load of consumers. The correlation between input random variables is considered using a correlation matrix. Finally, numerical results are presented and considered regarding the IEEE 118-busses, including PV, WPP, CHP and TU at several busses. The simulation and comparison results obtained demonstrate the broad advantages and feasibility of the suggested framework in the presence of dependent non-Gaussian distribution of random variables
Chung-Yuan Huang; Tzai-Hung Wen
2014-01-01
Immediate treatment with an automated external defibrillator (AED) increases out-of-hospital cardiac arrest (OHCA) patient survival potential. While considerable attention has been given to determining optimal public AED locations, spatial and temporal factors such as time of day and distance from emergency medical services (EMSs) are understudied. Here we describe a geocomputational genetic algorithm with a new stirring operator (GANSO) that considers spatial and temporal cardiac arrest occu...
Liu Yajing; Zhu Lin [Key Laboratory of Radiopharmaceuticals, Beijing Normal University, Ministry of Education, Beijing, 100875 (China); Department of Radiology, University of Pennsylvania, Philadelphia, PA 19014 (United States); Ploessl, Karl [Department of Radiology, University of Pennsylvania, Philadelphia, PA 19014 (United States); Choi, Seok Rye [Avid Radiopharmaceuticals Inc., Philadelphia, PA 19014 (United States); Qiao Hongwen; Sun Xiaotao; Li Song [Key Laboratory of Radiopharmaceuticals, Beijing Normal University, Ministry of Education, Beijing, 100875 (China); Zha Zhihao [Key Laboratory of Radiopharmaceuticals, Beijing Normal University, Ministry of Education, Beijing, 100875 (China); Department of Radiology, University of Pennsylvania, Philadelphia, PA 19014 (United States); Kung, Hank F., E-mail: kunghf@sunmac.spect.upenn.ed [Key Laboratory of Radiopharmaceuticals, Beijing Normal University, Ministry of Education, Beijing, 100875 (China); Department of Radiology, University of Pennsylvania, Philadelphia, PA 19014 (United States)
2010-11-15
Introduction: Accumulation of {beta}-amyloid (A{beta}) aggregates in the brain is linked to the pathogenesis of Alzheimer's disease (AD). Imaging probes targeting these A{beta} aggregates in the brain may provide a useful tool to facilitate the diagnosis of AD. Recently, [{sup 18}F]AV-45 ([{sup 18}F]5) demonstrated high binding to the A{beta} aggregates in AD patients. To improve the availability of this agent for widespread clinical application, a rapid, fully automated, high-yield, cGMP-compliant radiosynthesis was necessary for production of this probe. We report herein an optimal [{sup 18}F]fluorination, de-protection condition and fully automated radiosynthesis of [{sup 18}F]AV-45 ([{sup 18}F]5) on a radiosynthesis module (BNU F-A2). Methods: The preparation of [{sup 18}F]AV-45 ([{sup 18}F]5) was evaluated under different conditions, specifically by employing different precursors (-OTs and -Br as the leaving group), reagents (K222/K{sub 2}CO{sub 3} vs. tributylammonium bicarbonate) and deprotection in different acids. With optimized conditions from these experiments, the automated synthesis of [{sup 18}F]AV-45 ([{sup 18}F]5) was accomplished by using a computer-programmed, standard operating procedure, and was purified on an on-line solid-phase cartridge (Oasis HLB). Results: The optimized reaction conditions were successfully implemented to an automated nucleophilic fluorination module. The radiochemical purity of [{sup 18}F]AV-45 ([{sup 18}F]5) was >95%, and the automated synthesis yield was 33.6{+-}5.2% (no decay corrected, n=4), 50.1{+-}7.9% (decay corrected) in 50 min at a quantity level of 10-100 mCi (370-3700 MBq). Autoradiography studies of [{sup 18}F]AV-45 ([{sup 18}F]5) using postmortem AD brain and Tg mouse brain sections in the presence of different concentration of 'cold' AV-136 showed a relatively low inhibition of in vitro binding of [{sup 18}F]AV-45 ([{sup 18}F]5) to the A{beta} plaques (IC50=1-4 {mu}M, a concentration several
Two steps integrated optimization algorithm on the basis of the improvement genetic algorithm (GA) was developed for BWR core optimization. It showed good convergence performance keeping with global search capability. When the method was applied to 1356MWe BWR design, optimization was realized by the practical cost. An integer combinatorial optimization using MAA (Multi Agent Algorithm) was developed. MAA was introduced to the first-step part of two-step GA and the convergence performance increased. An idea of MAA proposed by us gets a hint from humane behavior in the group. The reactor design and reactor control of BWR, the coordinative optimization, its application to the practical plant and the next generation reactor control system are explained. (S.Y.)
Chong, C.-Y.; Athans, M.
1975-01-01
The decentralized stochastic control of a linear dynamic system consisting of several subsystems is considered. A two-level approach is used by the introduction of a coordinator who collects measurements from the local controllers periodically and in return transmits coordinating parameters. Two types of coordination are considered: open-loop feedback and closed loop. The resulting control laws are found to be intuitively attractive.
Lai, Pik-Yin
Methods of statistical mechanics are applied to two important NP-complete combinatorial optimization problems. The first is the chromatic number problem which seeks the minimal number of colors necessary to color a graph such that no two sites connected by an edge have the same color. The second is partitioning of a graph into q equal subgraphs so as to minimize inter-subgraph connections. Both models are mapped into a frustrated Potts model which is related to the q-state Potts spin glass. For the first problem, we obtain very good agreement with numerical simulations and theoretical bounds using the annealed approximation. The quenched model is also discussed. For the second problem we obtain analytic and numerical results by evaluating the ground state energy of the q = 3 and 4 Potts spin glass using Parisi's replica symmetry breaking. We also performed some numerical simulations to test the theoretical result and obtained very good agreement. In the second part of the thesis, we simulate the Ising spin-glass model on a random lattice with a finite (average) coordination number and also on the Bethe lattice with various different boundary conditions. In particular, we calculate the overlap function P(q) for two independent samples. For the random lattice, the results are consistent with a spin-glass transition above which P(q) converges to a Dirac delta -function for large N (number of sites) and below which P(q) has in addition a long tail similar to previous results obtained for the infinite ranged model. For the Bethe lattice, we obtain results in the interior by discarding the two outer shells of the Cayley tree when calculating the thermal averages. For fixed (uncorrelated) boundary conditions, P(q) seems to converge to a delta -function even below the spin-glass transition whereas for a "closed" lattice (correlated boundary conditions) P(q) has a long tail similar to its behavior in the random lattice case.
Mohammad Sadegh Payam
2012-01-01
Full Text Available This study presents a new approach for simultaneous coordinated tuning of over current relay for a Power Delivery System (PDS including Distribution Generations (DGs. In the proposed scheme, instead of changing in protection system structure or using new elements, solving of relay coordination problem is done with revising of relays setting in presence of DGs. For this, the relay coordination problem is formulated as the optimization problem by considering two strategies: minimizing the relays operation time and minimizing the number of changes in relays setting. Also, an efficient hybrid algorithm based on Shuffled Frog Leaping (SFL algorithm and Linear Programming (LP is introduced for solving complex and non-convex optimization problem. To investigate the ability of the proposed method, a 30-bus IEEE test system is considered. Three scenarios are examined to evaluate the effectiveness of the proposed approach to solve the directional overcurrent relay coordination problem for a PDS with DGs. Simulation result show the efficiency of proposed method.
Nuclear power has been used for five decades and has been one of the fastest growing energy options. Although the rate at which nuclear power has penetrated the world energy market has declined, it has retained a substantial share, and is expected to continue as a viable option well into the future. Seawater desalination by distillation is much older than nuclear technology. However, the current desalination technology involving large-scale application, has a history comparable to nuclear power, i.e. it spans about five decades. Both nuclear and desalination technologies are mature and proven, and are commercially available from a variety of suppliers. Therefore, there are benefits in combining the two technologies together. Where nuclear energy could be an option for electricity supply, it can also be used as an energy source for seawater desalination. This has been recognized from the early days of the two technologies. However, the main interest during the 1960s and 1970s was directed towards the use of nuclear energy for electricity generation, district heating, and industrial process heat. Renewed interest in nuclear desalination has been growing worldwide since 1989, as indicated by the adoption of a number of resolutions on the subject at the IAEA General Conferences. Responding to this trend, the IAEA reviewed information on desalination technologies and the coupling of nuclear reactors with desalination plants, compared the economic viability of seawater desalination using nuclear energy in various coupling configuration with fossil fuels in a generic assessment, conducted a regional feasibility study on nuclear desalination in the North African Countries and initiated in a two-year Options Identification Programme (OIP) to identify candidate reactor and desalination technologies that could serve as practical demonstrations of nuclear desalination, supplementing the existing expertise and experience. In 1998, the IAEA initiated a Coordinated Research
Automated purchasing: Forecasts to determine stock levels and print orders.
Wilcox, M M; Moore, A N; Hoover, L W
1978-10-01
An automated purchasing system to optimize inventory levels of frozen foods, including meat items, while minimizing stock outages, was developed and implemented. Menu item forecast data were coordinated with on-hand quantities to automate the calculation of order quantities and printing purchase requisitions. The model selected also incorporated: (a) Safety stock level, (b) accumulated forecasts, and (c) accumulated orders already placed. The project was smoothly integrated into an on-going computer-assisted management system. All programs functioned as planned; computer documents were complete and accurate. The system design was retained for use in the foodservice operation. PMID:701670
Ádám Z Lendvai
Full Text Available Studies of animal behavior often rely on human observation, which introduces a number of limitations on sampling. Recent developments in automated logging of behaviors make it possible to circumvent some of these problems. Once verified for efficacy and accuracy, these automated systems can be used to determine optimal sampling regimes for behavioral studies. Here, we used a radio-frequency identification (RFID system to quantify parental effort in a bi-parental songbird species: the tree swallow (Tachycineta bicolor. We found that the accuracy of the RFID monitoring system was similar to that of video-recorded behavioral observations for quantifying parental visits. Using RFID monitoring, we also quantified the optimum duration of sampling periods for male and female parental effort by looking at the relationship between nest visit rates estimated from sampling periods with different durations and the total visit numbers for the day. The optimum sampling duration (the shortest observation time that explained the most variation in total daily visits per unit time was 1h for both sexes. These results show that RFID and other automated technologies can be used to quantify behavior when human observation is constrained, and the information from these monitoring technologies can be useful for evaluating the efficacy of human observation methods.
Koukalova Dagmar
2009-11-01
Full Text Available Abstract Background Rapid, easy, economical and accurate species identification of yeasts isolated from clinical samples remains an important challenge for routine microbiological laboratories, because susceptibility to antifungal agents, probability to develop resistance and ability to cause disease vary in different species. To overcome the drawbacks of the currently available techniques we have recently proposed an innovative approach to yeast species identification based on RAPD genotyping and termed McRAPD (Melting curve of RAPD. Here we have evaluated its performance on a broader spectrum of clinically relevant yeast species and also examined the potential of automated and semi-automated interpretation of McRAPD data for yeast species identification. Results A simple fully automated algorithm based on normalized melting data identified 80% of the isolates correctly. When this algorithm was supplemented by semi-automated matching of decisive peaks in first derivative plots, 87% of the isolates were identified correctly. However, a computer-aided visual matching of derivative plots showed the best performance with average 98.3% of the accurately identified isolates, almost matching the 99.4% performance of traditional RAPD fingerprinting. Conclusion Since McRAPD technique omits gel electrophoresis and can be performed in a rapid, economical and convenient way, we believe that it can find its place in routine identification of medically important yeasts in advanced diagnostic laboratories that are able to adopt this technique. It can also serve as a broad-range high-throughput technique for epidemiological surveillance.
Verrelst, J.; Rivera, J. P.; Leonenko, G.; Alonso, L.; Moreno, J.
2012-04-01
Radiative transfer (RT) modeling plays a key role for earth observation (EO) because it is needed to design EO instruments and to develop and test inversion algorithms. The inversion of a RT model is considered as a successful approach for the retrieval of biophysical parameters because of being physically-based and generally applicable. However, to the broader community this approach is considered as laborious because of its many processing steps and expert knowledge is required to realize precise model parameterization. We have recently developed a radiative transfer toolbox ARTMO (Automated Radiative Transfer Models Operator) with the purpose of providing in a graphical user interface (GUI) essential models and tools required for terrestrial EO applications such as model inversion. In short, the toolbox allows the user: i) to choose between various plant leaf and canopy RT models (e.g. models from the PROSPECT and SAIL family, FLIGHT), ii) to choose between spectral band settings of various air- and space-borne sensors or defining own sensor settings, iii) to simulate a massive amount of spectra based on a look up table (LUT) approach and storing it in a relational database, iv) to plot spectra of multiple models and compare them with measured spectra, and finally, v) to run model inversion against optical imagery given several cost options and accuracy estimates. In this work ARTMO was used to tackle some well-known problems related to model inversion. According to Hadamard conditions, mathematical models of physical phenomena are mathematically invertible if the solution of the inverse problem to be solved exists, is unique and depends continuously on data. This assumption is not always met because of the large number of unknowns and different strategies have been proposed to overcome this problem. Several of these strategies have been implemented in ARTMO and were here analyzed to optimize the inversion performance. Data came from the SPARC-2003 dataset
This report summarizes the results of the first meeting of the Coordinated Research Programme (CRP) on Development of Methodologies for Optimization of Surveillance Testing and Maintenance of Safety Related Equipment at NPPs, held at the Agency Headquarters in Vienna, from 16 to 20 December 1996. The purpose of this Research Coordination Meeting (RCM) was that all Chief Scientific Investigators of the groups participating in the CRP presented an outline of their proposed research projects. Additionally, the participants discussed the objective, scope, work plan and information channels of the CRP in detail. Based on these presentations and discussions, the entire project plan was updated, completed and included in this report. This report represents a common agreed project work plan for the CRP. Refs, figs, tabs
The contaminant analysis automation robot implementation for the automated laboratory
The Contaminant Analysis Automation (CAA) project defines the automated laboratory as a series of standard laboratory modules (SLM) serviced by a robotic standard support module (SSM). These SLMs are designed to allow plug-and-play integration into automated systems that perform standard analysis methods (SAM). While the SLMs are autonomous in the execution of their particular chemical processing task, the SAM concept relies on a high-level task sequence controller (TSC) to coordinate the robotic delivery of materials requisite for SLM operations, initiate an SLM operation with the chemical method dependent operating parameters, and coordinate the robotic removal of materials from the SLM when its commands and events has been established to allow ready them for transport operations as well as performing the Supervisor and Subsystems (GENISAS) software governs events from the SLMs and robot. The Intelligent System Operating Environment (ISOE) enables the inter-process communications used by GENISAS. CAA selected the Hewlett-Packard Optimized Robot for Chemical Analysis (ORCA) and its associated Windows based Methods Development Software (MDS) as the robot SSM. The MDS software is used to teach the robot each SLM position and required material port motions. To allow the TSC to command these SLM motions, a hardware and software implementation was required that allowed message passing between different operating systems. This implementation involved the use of a Virtual Memory Extended (VME) rack with a Force CPU-30 computer running VxWorks; a real-time multitasking operating system, and a Radiuses PC compatible VME computer running MDS. A GENISAS server on The Force computer accepts a transport command from the TSC, a GENISAS supervisor, over Ethernet and notifies software on the RadiSys PC of the pending command through VMEbus shared memory. The command is then delivered to the MDS robot control software using a Windows Dynamic Data Exchange conversation
Automated Methods of Corrosion Measurements
Andersen, Jens Enevold Thaulov
. Mechanical control, recording, and data processing must therefore be automated to a high level of precision and reliability. These general techniques and the apparatus involved have been described extensively. The automated methods of such high-resolution microscopy coordinated with computerized...
Daraio, M. G.; Battagliere, M. L.; Sacco, P.; Fasano, L.; Coletta, A.
2015-10-01
COSMO-SkyMed is a dual-use program for both civilian and defense provides user community (institutional and commercial) with SAR data in several environmental applications. In the context of COSMO-SkyMed data and User management, one of the aspects carefully monitored is the user satisfaction level, it is links to satisfaction of submitted user requests. The operational experience of the first years of operational phase, and the consequent lessons learnt by the COSMO-SkyMed data and user management, have demonstrated that a lot of acquisition rejections are due to conflicts (time conflicts or system conflicts) among two or more civilian user requests, and they can be managed and solved implementing an improved coordination of users and their requests on a daily basis. With this aim a new Service Support Tool (SST) has been designed and developed to support the operators in the User Request coordination. The Tool allow to analyze conflicts among Acquisition Requests (ARs) before the National Rankization phase and to elaborate proposals for conflict resolution. In this paper the most common causes of the occurred rejections will be showed, for example as the impossibility to aggregate different orders, and the SST functionalities will be described, in particular how it works to remove or minimize the conflicts among different orders.
Huang, Chung-Yuan; Wen, Tzai-Hung
2014-01-01
Immediate treatment with an automated external defibrillator (AED) increases out-of-hospital cardiac arrest (OHCA) patient survival potential. While considerable attention has been given to determining optimal public AED locations, spatial and temporal factors such as time of day and distance from emergency medical services (EMSs) are understudied. Here we describe a geocomputational genetic algorithm with a new stirring operator (GANSO) that considers spatial and temporal cardiac arrest occurrence factors when assessing the feasibility of using Taipei 7-Eleven stores as installation locations for AEDs. Our model is based on two AED conveyance modes, walking/running and driving, involving service distances of 100 and 300 meters, respectively. Our results suggest different AED allocation strategies involving convenience stores in urban settings. In commercial areas, such installations can compensate for temporal gaps in EMS locations when responding to nighttime OHCA incidents. In residential areas, store installations can compensate for long distances from fire stations, where AEDs are currently held in Taipei. PMID:25045396
Nuclear plants control: towards an more advanced automation
Reactor operation appears to have a lower automation level than other fields of production industry. Compiled data point out that human factors are implied in about 70 % of un acceptable errors happening in a nuclear plant. But no obvious conclusion can result from these data. Human factor are so numerous and unlike that no mathematical model can be achieved. For instance, instead of assuming a whole automated control, a computerized aided control system can lower human faults probability by lowering attention requirements and hence fatigue. Man-machine system engineering and interactive display devices appear to be the best tools to determine the optimal automation level for the highest safety level. CEA and EDF use them in their ''ESCRIME'' coordinated research program. (D.L.)
Loshchilov, Ilya; Schoenauer, Marc; Sebag, Michèle
2011-01-01
Independence from the coordinate system is one source of efficiency and robustness for the Covariance Matrix Adaptation Evolution Strategy (CMA-ES). The recently proposed Adaptive Encoding (AE) procedure generalizes CMA-ES adaptive mechanism, and can be used together with any optimization algorithm. Adaptive Encoding gradually builds a transformation of the coordinate system such that the new coordinates are as decorrelated as possible with respect to the objective function. But any optimizat...
Xue Gang
2011-01-01
Among all the emission reduction measures, carbon tax is recognized as the most effective way to protect our climate. That is why the Chinese government has recently taken it as a tax reform direction, In the current economic analysis, the design of carbon tax is mostly based on the target to maximize the efficiency However, based on the theory of tax system optimization, we should also consider other policy objectives, such as equity, revenue and cost, and then balance different objectives to achieve the suboptimum reform of carbon tax system in China.
Blirup-Jensen, S
2001-11-01
Quantitative protein determinations in routine laboratories are today most often carried out using automated instruments. However, slight variations in the assay principle, in the programming of the instrument or in the reagents may lead to different results. This has led to the prerequisite of method optimization and standardization. The basic principles of turbidimetry and nephelometry are discussed. The different reading principles are illustrated and investigated. Various problems are identified and a suggestion is made for an integrated, fast and convenient test system for the determination of a number of different proteins on the same instrument. An optimized test system for turbidimetry and nephelometry should comprise high-quality antibodies, calibrators, controls, and buffers and a protocol with detailed parameter settings in order to program the instrument correctly. A good user program takes full advantage of the optimal reading principles for the different instruments. This implies--for all suitable instruments--sample preincubation followed by real sample blanking, which automatically corrects for initial turbidity in the sample. Likewise it is recommended to measure the reagent blank, which represents any turbidity caused by the antibody itself. By correcting all signals with these two blank values the best possible signal is obtained for the specific analyte. An optimized test system should preferably offer a wide measuring range combined with a wide security range, which for the user means few re-runs and maximum security against antigen excess. A non-linear calibration curve based on six standards is obtained using a suitable mathematical fitting model, which normally is part of the instrument software. PMID:11831625
Li Xi; Ji Hong; Zheng Ruiming; Li Ting
2009-01-01
In order to improve the performance of peer-to-peer files sharing system under mobile distributed environments, a novel always-optimally-coordinated (AOC) criterion and corresponding candidate selection algorithm are proposed in this paper. Compared with the traditional min-hops criterion, the new approach introduces a fuzzy knowledge combination theory to investigate several important factors that influence files transfer success rate and efficiency. Whereas the min-hops based protocols only ask the nearest candidate peer for desired files, the selection algorithm based on AOC comprehensively considers users' preference and network requirements with flexible balancing rules. Furthermore, its advantage also expresses in the independence of specified resource discovering protocols, allowing for scalability. The simulation results show that when using the AOC based peer selection algorithm, system performance is much better than the min-hops scheme, with files successful transfer rate improved more than 50% and transfer time reduced at least 20%.
Yu, Dawei; Liu, Jibao; Sui, Qianwen; Wei, Yuansong
2016-03-01
Control of organic loading rate (OLR) is essential for anaerobic digestion treating high COD wastewater, which would cause operation failure by overload or less efficiency by underload. A novel biogas-pH automation control strategy using the combined gas-liquor phase monitoring was developed for an anaerobic membrane bioreactor (AnMBR) treating high COD (27.53 g·L(-1)) starch wastewater. The biogas-pH strategy was proceeded with threshold between biogas production rate >98 Nml·h(-1) preventing overload and pH>7.4 preventing underload, which were determined by methane production kinetics and pH titration of methanogenesis slurry, respectively. The OLR and the effluent COD were doubled as 11.81 kgCOD·kgVSS(-1)·d(-1) and halved as 253.4 mg·L(-1), respectively, comparing with a constant OLR control strategy. Meanwhile COD removal rate, biogas yield and methane concentration were synchronously improved to 99.1%, 312 Nml·gCODin(-1) and 74%, respectively. Using the biogas-pH strategy, AnMBR formed a "pH self-regulation ternary buffer system" which seizes carbon dioxide and hence provides sufficient buffering capacity. PMID:26722804
Suárez, Carlos Gómez; Reigosa, Paula Diaz; Iannuzzo, Francesco;
2016-01-01
An original tool for parameter extraction of PSpice models has been released, enabling a simple parameter identification. A physics-based IGBT model is used to demonstrate that the optimization tool is capable of generating a set of parameters which predicts the steady-state and switching behavior...... of two IGBT modules rated at 1.7 kV / 1 kA and 1.7 kV / 1.4kA....
Villoutreix Bruno O
2009-11-01
Full Text Available Abstract Background Discovery of new bioactive molecules that could enter drug discovery programs or that could serve as chemical probes is a very complex and costly endeavor. Structure-based and ligand-based in silico screening approaches are nowadays extensively used to complement experimental screening approaches in order to increase the effectiveness of the process and facilitating the screening of thousands or millions of small molecules against a biomolecular target. Both in silico screening methods require as input a suitable chemical compound collection and most often the 3D structure of the small molecules has to be generated since compounds are usually delivered in 1D SMILES, CANSMILES or in 2D SDF formats. Results Here, we describe the new open source program DG-AMMOS which allows the generation of the 3D conformation of small molecules using Distance Geometry and their energy minimization via Automated Molecular Mechanics Optimization. The program is validated on the Astex dataset, the ChemBridge Diversity database and on a number of small molecules with known crystal structures extracted from the Cambridge Structural Database. A comparison with the free program Balloon and the well-known commercial program Omega generating the 3D of small molecules is carried out. The results show that the new free program DG-AMMOS is a very efficient 3D structure generator engine. Conclusion DG-AMMOS provides fast, automated and reliable access to the generation of 3D conformation of small molecules and facilitates the preparation of a compound collection prior to high-throughput virtual screening computations. The validation of DG-AMMOS on several different datasets proves that generated structures are generally of equal quality or sometimes better than structures obtained by other tested methods.
Ahmed, Zeeshan
2010-01-01
In this paper I briefly discuss the importance of home automation system. Going in to the details I briefly present a real time designed and implemented software and hardware oriented house automation research project, capable of automating house's electricity and providing a security system to detect the presence of unexpected behavior.
Automated Test-Form Generation
van der Linden, Wim J.; Diao, Qi
2011-01-01
In automated test assembly (ATA), the methodology of mixed-integer programming is used to select test items from an item bank to meet the specifications for a desired test form and optimize its measurement accuracy. The same methodology can be used to automate the formatting of the set of selected items into the actual test form. Three different…
Chung-Yuan Huang
2014-01-01
Full Text Available Immediate treatment with an automated external defibrillator (AED increases out-of-hospital cardiac arrest (OHCA patient survival potential. While considerable attention has been given to determining optimal public AED locations, spatial and temporal factors such as time of day and distance from emergency medical services (EMSs are understudied. Here we describe a geocomputational genetic algorithm with a new stirring operator (GANSO that considers spatial and temporal cardiac arrest occurrence factors when assessing the feasibility of using Taipei 7-Eleven stores as installation locations for AEDs. Our model is based on two AED conveyance modes, walking/running and driving, involving service distances of 100 and 300 meters, respectively. Our results suggest different AED allocation strategies involving convenience stores in urban settings. In commercial areas, such installations can compensate for temporal gaps in EMS locations when responding to nighttime OHCA incidents. In residential areas, store installations can compensate for long distances from fire stations, where AEDs are currently held in Taipei.
Horstkotte, Burkhard; Tovar Sánchez, Antonio; Duarte, Carlos M; Cerdà, Víctor
2010-01-25
A multipurpose analyzer system based on sequential injection analysis (SIA) for the determination of dissolved oxygen (DO) in seawater is presented. Three operation modes were established and successfully applied onboard during a research cruise in the Southern ocean: 1st, in-line execution of the entire Winkler method including precipitation of manganese (II) hydroxide, fixation of DO, precipitate dissolution by confluent acidification, and spectrophotometric quantification of the generated iodine/tri-iodide (I(2)/I(3)(-)), 2nd, spectrophotometric quantification of I(2)/I(3)(-) in samples prepared according the classical Winkler protocol, and 3rd, accurate batch-wise titration of I(2)/I(3)(-) with thiosulfate using one syringe pump of the analyzer as automatic burette. In the first mode, the zone stacking principle was applied to achieve high dispersion of the reagent solutions in the sample zone. Spectrophotometric detection was done at the isobestic wavelength 466 nm of I(2)/I(3)(-). Highly reduced consumption of reagents and sample compared to the classical Winkler protocol, linear response up to 16 mg L(-1) DO, and an injection frequency of 30 per hour were achieved. It is noteworthy that for the offline protocol, sample metering and quantification with a potentiometric titrator lasts in general over 5 min without counting sample fixation, incubation, and glassware cleaning. The modified SIMPLEX methodology was used for the simultaneous optimization of four volumetric and two chemical variables. Vertex calculation and consequent application including in-line preparation of one reagent was carried out in real-time using the software AutoAnalysis. The analytical system featured high signal stability, robustness, and a repeatability of 3% RSD (1st mode) and 0.8% (2nd mode) during shipboard application. PMID:20103088
Although radiography has been an established imaging modality for over a century, continuous developments have led to improvements in technique resulting in improved image quality at reduced patient dose. If one compares the technique used by Roentgen with the methods used today, one finds that a radiograph can now be obtained at a dose which is smaller by a factor of 100 or more. Nonetheless, some national surveys, particularly in the United Kingdom and in the United States of America in the 1980s and 1990s, have indicated large variations in patient doses for the same diagnostic examination, in some cases by a factor of 20 or more. This arises not only owing to the various types of equipment and accessories used by the different health care providers, but also because of operational factors. The IAEA has a statutory responsibility to establish standards for the protection of people against exposure to ionising radiation and to provide for the worldwide application of those standards. A fundamental requirement of the International Basic Safety Standards for Protection against Ionizing Radiation and for the Safety of Radiation Sources (BSS), issued by the IAEA in cooperation with the FAO, ILO, WHO, PAHO and NEA, is the optimization of radiological protection of patients undergoing medical exposure. Towards its responsibility of implementation of standards and under the subprogramme of radiation safety, in 1995, the IAEA launched a coordinated research project (CRP) on radiological protection in diagnostic radiology in some countries in the Eastern European, African and Asian region. Initially, the CRP addressed radiography only and it covered wide aspects of optimisation of radiological protection. Subsequently, the scope of the CRP was extended to fluoroscopy and computed tomography (CT), but it covered primarily situation analysis of patient doses and equipment quality control. It did not cover patient dose reduction aspects in fluoroscopy and CT. The project
Signal optimization is affected during radio links between a transmitter and a receiver located in adjacent material media with differing optical densities. The Optimization is carried out via the automated control theory method. The radio signal obtained after the optimization is coordinated with the two media's electrical characteristics simultaneously; and this enables low power and small antenna of the transmitter, on the one hand, and low sensitivity of the receiver, on the other
Automated Integrated Analog Filter Design Issues
Karolis Kiela; Romualdas Navickas
2015-01-01
An analysis of modern automated integrated analog circuits design methods and their use in integrated filter design is done. Current modern analog circuits automated tools are based on optimization algorithms and/or new circuit generation methods. Most automated integrated filter design methods are only suited to gmC and switched current filter topologies. Here, an algorithm for an active RC integrated filter design is proposed, that can be used in automated filter designs. The algorithm is t...
Asti, Mattia [Nuclear Medicine Department, Santa Maria Nuova Hospital, Reggio Emilia (Italy)], E-mail: asti.mattia@asmn.re.it; Farioli, Daniela; Iori, Michele; Guidotti, Claudio; Versari, Annibale; Salvo, Diana [Nuclear Medicine Department, Santa Maria Nuova Hospital, Reggio Emilia (Italy)
2010-04-15
[{sup 18}F]-labelled choline analogues, such as 2-[{sup 18}F]fluoroethylcholine ({sup 18}FECH), have suggested to be a new class of choline derivatives highly useful for the imaging of prostate and brain tumours. In fact, tumour cells with enhanced proliferation rate usually exhibit an improved choline uptake due to the increased membrane phospholipids biosynthesis. The aim of this study was the development of a high yielding synthesis of {sup 18}FECH. The possibility of shortening the synthesis time by reacting all the reagents in a convenient and rapid one-step reaction was specially considered. Methods: {sup 18}FECH was synthesized by reacting [{sup 18}F]fluoride with 1,2-bis(tosyloxy)ethane and N,N-dimethylaminoethanol. The synthesis was carried out using both a one- and a two-step reaction in order to compare the two procedures. The effects on the radiochemical yield and purity by using different [{sup 18}F]fluoride phase transfer catalysts, reagents amounts and purification methods were assessed. Quality controls on the final products were performed by means of radio-thin-layer chromatography, gas chromatography and high-performance liquid chromatography equipped with conductimetric, ultraviolet and radiometric detectors. Results: In the optimized experimental conditions, {sup 18}FECH was synthesized with a radiochemical yield of 43{+-}3% and 48{+-}1% (not corrected for decay) when the two-step or the one-step approach were used, respectively. The radiochemical purity was higher than 99% regardless of the different synthetic pathways or purification methods adopted. The main chemical impurity was due to N,N-dimethylmorpholinium. The identity of this impurity in {sup 18}FECH preparations was not previously reported. Conclusion: An improved two-step and an innovative one-step reaction for synthesizing {sup 18}FECH in a high yield were reported. The adaptation of a multistep synthesis to a single step process, opens further possibilities for simpler and more
[18F]-labelled choline analogues, such as 2-[18F]fluoroethylcholine (18FECH), have suggested to be a new class of choline derivatives highly useful for the imaging of prostate and brain tumours. In fact, tumour cells with enhanced proliferation rate usually exhibit an improved choline uptake due to the increased membrane phospholipids biosynthesis. The aim of this study was the development of a high yielding synthesis of 18FECH. The possibility of shortening the synthesis time by reacting all the reagents in a convenient and rapid one-step reaction was specially considered. Methods: 18FECH was synthesized by reacting [18F]fluoride with 1,2-bis(tosyloxy)ethane and N,N-dimethylaminoethanol. The synthesis was carried out using both a one- and a two-step reaction in order to compare the two procedures. The effects on the radiochemical yield and purity by using different [18F]fluoride phase transfer catalysts, reagents amounts and purification methods were assessed. Quality controls on the final products were performed by means of radio-thin-layer chromatography, gas chromatography and high-performance liquid chromatography equipped with conductimetric, ultraviolet and radiometric detectors. Results: In the optimized experimental conditions, 18FECH was synthesized with a radiochemical yield of 43±3% and 48±1% (not corrected for decay) when the two-step or the one-step approach were used, respectively. The radiochemical purity was higher than 99% regardless of the different synthetic pathways or purification methods adopted. The main chemical impurity was due to N,N-dimethylmorpholinium. The identity of this impurity in 18FECH preparations was not previously reported. Conclusion: An improved two-step and an innovative one-step reaction for synthesizing 18FECH in a high yield were reported. The adaptation of a multistep synthesis to a single step process, opens further possibilities for simpler and more reliable automations.
Pearce, Charles
2009-01-01
Focuses on mathematical structure, and on real-world applications. This book includes developments in several optimization-related topics such as decision theory, linear programming, turnpike theory, duality theory, convex analysis, and queuing theory.
Dhakne, B. N.; Giri, V. V; Waghmode, S. S.
2010-01-01
New technologies library provides several new materials, media and mode of storing and communicating the information. Library Automation reduces the drudgery of repeated manual efforts in library routine. By use of library automation collection, Storage, Administration, Processing, Preservation and communication etc.
Highlights: • We analyze the relationship between Out-of-the-Loop and the loss of human operators’ situation awareness. • We propose an ostracism rate estimation method by only considering the negative effects of automation. • The ostracism rate reflects how much automation interrupts human operators to receive information. • The higher the ostracism rate is, the lower the accuracy of human operators’ SA will be. - Abstract: With the introduction of automation in various industries including the nuclear field, its side effect, referred to as the Out-of-the-Loop (OOTL) problem, has emerged as a critical issue that needs to be addressed. Many studies have been attempted to analyze and solve the OOTL problem, but this issue still needs a clear solution to provide criteria for introducing automation. Therefore, a quantitative estimation method for identifying negative effects of automation is proposed in this paper. The representative aspect of the OOTL problem in nuclear power plants (NPPs) is that human operators in automated operations are given less information than human operators in manual operations. In other words, human operators have less opportunity to obtain needed information as automation is introduced. From this point of view, the degree of difficulty in obtaining information from automated systems is defined as the Level of Ostracism (LOO). Using the LOO and information theory, we propose the ostracism rate, which is a new estimation method that expresses how much automation interrupts human operators’ situation awareness. We applied production rules to describe the human operators’ thinking processes, Bayesian inference to describe the production rules mathematically, and information theory to calculate the amount of information that human operators receive through observations. The validity of the suggested method was proven by conducting an experiment. The results show that the ostracism rate was significantly related to the accuracy
Perekhodtseva, E. V.
2013-01-01
Development of successful method of automated statistical well-in-advance forecast (from 12 hours to two days) of dangerous phenomena – severe squalls and tornadoes – could allow mitigate the economic losses.
Process automation technology has been pursued in the chemical processing industries and to a very limited extent in nuclear fuel reprocessing. Its effective use has been restricted in the past by the lack of diverse and reliable process instrumentation and the unavailability of sophisticated software designed for process control. The Integrated Equipment Test (IET) facility was developed by the Consolidated Fuel Reprocessing Program (CFRP) in part to demonstrate new concepts for control of advanced nuclear fuel reprocessing plants. A demonstration of fuel reprocessing equipment automation using advanced instrumentation and a modern, microprocessor-based control system is nearing completion in the facility. This facility provides for the synergistic testing of all chemical process features of a prototypical fuel reprocessing plant that can be attained with unirradiated uranium-bearing feed materials. The unique equipment and mission of the IET facility make it an ideal test bed for automation studies. This effort will provide for the demonstration of the plant automation concept and for the development of techniques for similar applications in a full-scale plant. A set of preliminary recommendations for implementing process automation has been compiled. Some of these concepts are not generally recognized or accepted. The automation work now under way in the IET facility should be useful to others in helping avoid costly mistakes because of the underutilization or misapplication of process automation. 6 figs
Sensitivity analysis approach to multibody systems described by natural coordinates
Li, Xiufeng; Wang, Yabin
2014-03-01
The classical natural coordinate modeling method which removes the Euler angles and Euler parameters from the governing equations is particularly suitable for the sensitivity analysis and optimization of multibody systems. However, the formulation has so many principles in choosing the generalized coordinates that it hinders the implementation of modeling automation. A first order direct sensitivity analysis approach to multibody systems formulated with novel natural coordinates is presented. Firstly, a new selection method for natural coordinate is developed. The method introduces 12 coordinates to describe the position and orientation of a spatial object. On the basis of the proposed natural coordinates, rigid constraint conditions, the basic constraint elements as well as the initial conditions for the governing equations are derived. Considering the characteristics of the governing equations, the newly proposed generalized-α integration method is used and the corresponding algorithm flowchart is discussed. The objective function, the detailed analysis process of first order direct sensitivity analysis and related solving strategy are provided based on the previous modeling system. Finally, in order to verify the validity and accuracy of the method presented, the sensitivity analysis of a planar spinner-slider mechanism and a spatial crank-slider mechanism are conducted. The test results agree well with that of the finite difference method, and the maximum absolute deviation of the results is less than 3%. The proposed approach is not only convenient for automatic modeling, but also helpful for the reduction of the complexity of sensitivity analysis, which provides a practical and effective way to obtain sensitivity for the optimization problems of multibody systems.
Components for automated microscopy
Determann, H.; Hartmann, H.; Schade, K. H.; Stankewitz, H. W.
1980-12-01
A number of devices, aiming at automated analysis of microscopic objects as regards their morphometrical parameters or their photometrical values, were developed. These comprise: (1) a device for automatic focusing tuned on maximum contrast; (2) a feedback system for automatic optimization of microscope illumination; and (3) microscope lenses with adjustable pupil distances for usage in the two previous devices. An extensive test program on histological and zytological applications proves the wide application possibilities of the autofocusing device.
The 'renaissance' of the therapeutic applications of radiopharmaceuticals during the last few years was in part due to a greater availability of radionuclides with appropriate nuclear decay properties, as well as to the development of carrier molecules with improved characteristics. Although radionuclides such as 32P, 89Sr and 131I, were used from the early days of nuclear medicine in the late 1930s and early 1940s, the inclusion of other particle emitting radionuclides into the nuclear medicine armamentarium was rather late. Only in the early 1980s did the specialized scientific literature start to show the potential for using other beta emitting nuclear reactor produced radionuclides such as 153Sm, 166 Ho, 165Dy and 186-188Re. Bone seeking agents radiolabelled with the above mentioned beta emitting radionuclides demonstrated clear clinical potential in relieving intense bone pain resulting from metastases of the breast, prostate and lung of cancer patients. Therefore, upon the recommendation of a consultants meeting held in Vienna in 1993, the Co-ordinated Research Project (CRP) on Optimization of the Production and quality control of Radiotherapeutic Radionuclides and Radiopharmaceuticals was established in 1994. The CRP aimed at developing and improving existing laboratory protocols for the production of therapeutic radionuclides using existing nuclear research reactors including the corresponding radiolabelling, quality control procedures; and validation in experimental animals. With the participation of ten scientists from IAEA Member States, several laboratory procedures for preparation and quality control were developed, tested and assessed as potential therapeutic radiopharmaceuticals for bone pain palliation. In particular, the CRP optimised the reactor production of 153Sm and the preparation of the radiopharmaceutical 153Sm-EDTMP (ethylene diamine tetramethylene phosphonate), as well as radiolabelling techniques and quality control methods for the
Durfee, Edmund H.
1999-01-01
To coordinate, intelligent agents might need to know something about themselves, about each other, about how others view themselves and others, about how others think others view themselves and others, and so on. Taken to an extreme, the amount of knowledge an agent might possess to coordinate its interactions with others might outstrip the agent's limited reasoning capacity (its available time, memory, and so on). Much of the work in studying and building multiagent systems has thus been dev...
Design automation, languages, and simulations
Chen, Wai-Kai
2003-01-01
As the complexity of electronic systems continues to increase, the micro-electronic industry depends upon automation and simulations to adapt quickly to market changes and new technologies. Compiled from chapters contributed to CRC's best-selling VLSI Handbook, this volume covers a broad range of topics relevant to design automation, languages, and simulations. These include a collaborative framework that coordinates distributed design activities through the Internet, an overview of the Verilog hardware description language and its use in a design environment, hardware/software co-design, syst
庞龙; 陆金桂
2012-01-01
Optimizing the order picking is a useful way to improve the efficiency of automated warehouses. According to analyzing the process and characteristics of picking in automated warehouses, a new mathematic model is proposed for automated warehouses. Firstly, we make an excellent initial population by the ant colony algorithm, and then optimize and solve the model with the genetic algorithms. The simulation results show that the model is feasible, and the mix of the ant colony and genetic algorithms is not only feasible but also accelerate the speed of the algorithm, and then improve the efficiency of order picking.%合理优化货物的拣选路径是提高自动化立体仓库运行效率的一种有效方法.通过分析自动化立体仓库拣选作业的工作流程与特点,为自动化仓库拣选作业建立优化数学模型,首先利用蚁群算法生成优异的初始种群,然后通过遗传算法对该数学模型进行优化求解.仿真结果表明该模型是可行的,蚁群遗传算法的混合不仅得到更精确的结果而且加速了算法的求解速度,从而能够改善拣选作业的效率.
Relative observability in coordination control
Komenda, Jan; Masopust, Tomáš; van Schuppen, J. H.
Piscataway: IEEE, 2015 - (Lennartson, B.), s. 75-80 ISBN 978-1-4673-8182-6. [IEEE International Conference on Automation Science and Engineering (CASE), 2015. Gothenburg (SE), 24.08.2015-28.08.2015] R&D Projects: GA MŠk LH13012; GA ČR GA15-02532S Institutional support: RVO:67985840 Keywords : supervisory control * coordination control * relative observability Subject RIV: BA - General Mathematics http://ieeexplore.ieee.org/xpl/articleDetails.jsp?arnumber=7294044
董岳昕; 杨洪耕
2011-01-01
Considering coordinated control between provincial and regional automatic voltage control(AVC) systems, a variable-objective optimal method is presented. In coordinated control mode, regional AVC needs to provide reserve gateway reactive power capacity to provincial AVC, and the optimal objective is the maxi mum devoting/removing reactive power. After receiving adjustable range of gateway power factors, regional AVC balances between reactive power flow and voltage quality and then selects feasible-region optimization, convergent optimization or relaxation optimization to accomplish various coordinated control goals. If there is communication interruption or other faults, regional AVC will switch to autonomous decentralized control mode, and stage-by-stage optimization method is introduced. The proposed control methods are applied to a practical grid, and the results indicate that the approach proposed is effective in optimizing the reactive power flow, improving the voltage quality and is practical for projects.%考虑省地自动电压控制系统(AVC)联合协调控制要求,提出一种可变目标函数的优化方法.协调控制模式下,地区AVC向省网提供关口无功备用容量时,以最大可投/切容量为优化目标；接收到省网下发的关口功率因数限值后,地区AVC在省地无功平衡与主网电压合格之间进行综合权衡,选择可行域优化、趋同优化或松弛优化策略,通过改变目标函数实现不同情况下的协调控制效果.在出现通讯中断等故障情况下,地区AVC自动转入自律分散控制模式,并采用逐级优化策略.将所提出的优化方法应用于实际电网,运行结果表明所提策略可以进一步优化无功潮流,改善电压质量,具有工程实用性.
Integrating Test-Form Formatting into Automated Test Assembly
Diao, Qi; van der Linden, Wim J.
2013-01-01
Automated test assembly uses the methodology of mixed integer programming to select an optimal set of items from an item bank. Automated test-form generation uses the same methodology to optimally order the items and format the test form. From an optimization point of view, production of fully formatted test forms directly from the item pool using…