WorldWideScience

Sample records for learning control system

  1. Indirect learning control for nonlinear dynamical systems

    Science.gov (United States)

    Ryu, Yeong Soon; Longman, Richard W.

    1993-01-01

    In a previous paper, learning control algorithms were developed based on adaptive control ideas for linear time-varying systems. The learning control methods were shown to have certain advantages over their adaptive control counterparts, such as the ability to produce zero tracking error in time-varying systems and the ability to eliminate repetitive disturbances. In recent years, certain adaptive control algorithms have been developed for multi-body dynamic systems such as robots, with globally guaranteed convergence to zero tracking error for the nonlinear system equations. In this paper we study the relationship between such adaptive control methods designed for this specific class of nonlinear systems and the learning control problem for such systems, which seeks to converge to zero tracking error while following a specific command repeatedly, starting from the same initial conditions each time. The extension of these methods from the adaptive control problem to the learning control problem is seen to be trivial. The advantages and disadvantages of using learning control based on such adaptive control concepts for nonlinear systems, and the use of other currently available learning control algorithms, are discussed.
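
    As a rough illustration of the trial-to-trial learning idea described above (repeating the same command from the same initial conditions and updating the input from the previous trial's error), the sketch below applies a simple P-type learning update to a first-order discrete-time plant. The plant, learning gain, and horizon are assumptions for illustration; this is not the authors' algorithm.

      import numpy as np

      # P-type learning control sketch: the same command is tracked over repeated
      # trials, and the input is corrected by the previous trial's tracking error.
      def run_trial(u, a=0.8, b=0.5, x0=0.0):
          """Simulate the plant from the same initial state; y[k] = a*x[k] + b*u[k]."""
          x, y = x0, np.zeros(len(u))
          for k in range(len(u)):
              x = a * x + b * u[k]
              y[k] = x
          return y

      N = 50
      y_ref = np.sin(np.linspace(0.0, 2.0 * np.pi, N))   # command repeated every trial
      u = np.zeros(N)
      gain = 0.8                                          # learning gain (assumed)

      for trial in range(30):
          e = y_ref - run_trial(u)
          u += gain * e        # next trial's input corrected by this trial's error

      print("RMS tracking error after 30 trials:", round(float(np.sqrt(np.mean(e ** 2))), 6))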

  2. Learning to Control Advanced Life Support Systems

    Science.gov (United States)

    Subramanian, Devika

    2004-01-01

    Advanced life support systems have many interacting processes and limited resources. Controlling and optimizing advanced life support systems presents unique challenges. In particular, advanced life support systems are nonlinear coupled dynamical systems and it is difficult for humans to take all interactions into account to design an effective control strategy. In this project, we developed several reinforcement learning controllers that actively explore the space of possible control strategies, guided by rewards from a user-specified long-term objective function. We evaluated these controllers using a discrete event simulation of an advanced life support system. This simulation, called BioSim and designed by NASA scientists David Kortenkamp and Scott Bell, has multiple interacting life support modules including crew, food production, air revitalization, water recovery, solid waste incineration and power. They are implemented in a consumer/producer relationship in which certain modules produce resources that are consumed by other modules. Stores hold resources between modules. Control of this simulation is via adjusting flows of resources between modules and into/out of stores. We developed adaptive algorithms that control the flow of resources in BioSim. Our learning algorithms discovered several ingenious strategies for maximizing mission length by controlling the air and water recycling systems as well as crop planting schedules. By exploiting non-linearities in the overall system dynamics, the learned controllers easily outperformed controllers written by human experts. In sum, we accomplished three goals. We (1) developed foundations for learning models of coupled dynamical systems by active exploration of the state space, (2) developed and tested algorithms that learn to efficiently control air and water recycling processes as well as crop scheduling in BioSim, and (3) developed an understanding of the role of machine learning in designing control systems for
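
    BioSim itself is not reproduced here, but the sketch below illustrates the kind of reinforcement learning controller described above: tabular Q-learning that chooses a recycler flow setting to keep a single resource store within bounds. The toy environment, state discretisation, actions, and rewards are invented for illustration.

      import random

      # Tabular Q-learning sketch on a toy resource loop: a single water store is
      # drained by the crew at a fixed rate and replenished by a recycler whose
      # flow setting the agent picks each step. This is not BioSim.
      ACTIONS = [0.0, 0.5, 1.0]               # recycler flow settings
      ALPHA, GAMMA, EPS = 0.1, 0.95, 0.1
      Q = {}                                  # (store bucket, action index) -> value

      def bucket(level):
          """Discretise the store level into ten buckets."""
          return int(min(max(level, 0.0), 99.9) // 10)

      def step(level, flow):
          level = level + 10.0 * flow - 6.0   # replenishment minus crew usage
          reward = 1.0 if 20.0 <= level <= 80.0 else -1.0
          return level, reward

      level = 50.0
      for t in range(20000):
          s = bucket(level)
          if random.random() < EPS:
              a = random.randrange(len(ACTIONS))
          else:
              a = max(range(len(ACTIONS)), key=lambda i: Q.get((s, i), 0.0))
          level, r = step(level, ACTIONS[a])
          s_next = bucket(level)
          target = r + GAMMA * max(Q.get((s_next, i), 0.0) for i in range(len(ACTIONS)))
          Q[(s, a)] = Q.get((s, a), 0.0) + ALPHA * (target - Q.get((s, a), 0.0))

      print("greedy flow at mid level:",
            ACTIONS[max(range(3), key=lambda i: Q.get((bucket(50.0), i), 0.0))])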

  3. Fuzzy self-learning control for magnetic servo system

    Science.gov (United States)

    Tarn, J. H.; Kuo, L. T.; Juang, K. Y.; Lin, C. E.

    1994-01-01

    An effective control system is the key condition for successful implementation of high-performance magnetic servo systems. The major issues in designing such control systems are nonlinearity; unmodeled dynamics, such as secondary effects of copper resistance, stray fields, and saturation; and disturbance rejection, since the load effect acts directly on the servo system without transmission elements. One typical approach to designing control systems under these conditions is a special type of nonlinear feedback called gain scheduling, which accommodates linear regulators whose parameters are changed as a function of operating conditions in a preprogrammed way. In this paper, an on-line learning fuzzy control strategy is proposed. To inherit the wealth of linear control design, relations between linear feedback and fuzzy logic controllers are established, so that the exercise of engineering axioms of linear control design is transformed into the tuning of appropriate fuzzy parameters. Furthermore, fuzzy logic control broadens the domain of candidate control laws from linear to nonlinear and brings new prospects to the design of the local controllers. A self-learning scheme is then used to automatically tune the fuzzy rule base. It is based on a network learning infrastructure; statistical approximation to assign credit; an animal-learning method to update the reinforcement map with a fast learning rate; and a temporal-difference predictive scheme to optimize the control laws. Unlike supervised and statistical unsupervised learning schemes, the proposed method learns on-line from past experience and process information, and forms the rule base of a fuzzy logic controller (FLC) from randomly assigned initial control rules.
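
    The following minimal sketch shows the basic structure of a fuzzy logic controller of the kind discussed above: triangular membership functions over the error, a small rule base, and weighted-average defuzzification. The membership functions, rules, and scaling are illustrative assumptions; the paper's self-learning tuning of these parameters is not reproduced.

      # Minimal fuzzy logic controller sketch: three triangular sets over the
      # error, singleton rule consequents, and weighted-average defuzzification.
      def tri(x, left, center, right):
          """Triangular membership function."""
          if x <= left or x >= right:
              return 0.0
          if x <= center:
              return (x - left) / (center - left)
          return (right - x) / (right - center)

      def fuzzy_control(error):
          # Fuzzify the error into three linguistic sets (ranges are assumed).
          mu_neg  = tri(error, -2.0, -1.0, 0.0)
          mu_zero = tri(error, -1.0,  0.0, 1.0)
          mu_pos  = tri(error,  0.0,  1.0, 2.0)
          weights = [mu_neg, mu_zero, mu_pos]
          outputs = [-1.0, 0.0, 1.0]          # rule consequents (singletons)
          s = sum(weights)
          if s == 0.0:
              return 0.0
          # Weighted average (centroid of singletons) defuzzification.
          return sum(w * u for w, u in zip(weights, outputs)) / s

      print(fuzzy_control(0.4))   # prints 0.4: a small positive corrective action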

  4. Linear System Control Using Stochastic Learning Automata

    Science.gov (United States)

    Ziyad, Nigel; Cox, E. Lucien; Chouikha, Mohamed F.

    1998-01-01

    This paper explains the use of a stochastic learning automaton (SLA) to control switching among three systems to produce the desired output response. The SLA learns the optimal choice of damping ratio for each system to achieve a desired result. We show that the SLA can learn these states for the control of an unknown system with a proper choice of the error criterion. The results of using a single automaton are compared with those of using multiple automata.
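
    A minimal sketch of a stochastic learning automaton with a linear reward-inaction update is given below; it selects among three candidate damping ratios and reinforces whichever choice a toy environment rewards. The reward probabilities and learning rate are assumptions for illustration, not the setup used in the paper.

      import random

      # Stochastic learning automaton sketch with a linear reward-inaction
      # (L_R-I) update over three candidate damping ratios.
      damping_ratios = [0.3, 0.7, 1.2]
      p = [1.0 / 3.0] * 3            # action probabilities
      learning_rate = 0.05

      def environment(choice):
          """Toy environment: the middle (near-critical) damping is rewarded most often."""
          reward_prob = {0: 0.2, 1: 0.9, 2: 0.4}[choice]
          return random.random() < reward_prob

      for step in range(5000):
          a = random.choices(range(3), weights=p)[0]
          if environment(a):
              # Reward: move probability mass toward the chosen action.
              for i in range(3):
                  if i == a:
                      p[i] += learning_rate * (1.0 - p[i])
                  else:
                      p[i] *= (1.0 - learning_rate)
          # Inaction on penalty: probabilities are left unchanged.

      print("learned choice:", damping_ratios[p.index(max(p))])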

  5. Repetitive learning control of continuous chaotic systems

    International Nuclear Information System (INIS)

    Chen Maoyin; Shang Yun; Zhou Donghua

    2004-01-01

    Combining a shift method and the repetitive learning strategy, a repetitive learning controller is proposed to stabilize unstable periodic orbits (UPOs) within chaotic attractors in the sense of least mean square. If the nonlinear parts of the chaotic systems satisfy a Lipschitz condition, the proposed controller can be simplified into a simple proportional repetitive learning controller.
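
    The proportional repetitive learning idea mentioned above can be sketched as u[k] = u[k - N] + kp*e[k]: the input applied one period earlier is reused and corrected by the current tracking error. The sketch below applies this law to a simple stable linear plant tracking a periodic reference; the chaotic systems and UPO stabilization of the paper are not reproduced, and all gains are assumed.

      import numpy as np

      # Proportional repetitive learning control sketch on a first-order plant.
      dt, N = 0.01, 200                  # sample time and samples per period
      periods = 30
      kp, a, b = 1.5, -1.0, 1.0          # learning gain and plant x' = a*x + b*u

      x = 0.0
      u_hist = np.zeros(periods * N)
      peak_err = np.zeros(periods)

      for k in range(periods * N):
          r = np.sin(2 * np.pi * (k % N) / N)         # periodic reference
          e = r - x
          u = (u_hist[k - N] if k >= N else 0.0) + kp * e
          u_hist[k] = u
          x += dt * (a * x + b * u)                   # Euler step of the plant
          peak_err[k // N] = max(peak_err[k // N], abs(e))

      # Peak tracking error shrinks from one period to the next.
      print("peak tracking error per period:", np.round(peak_err[::5], 4))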

  6. Online reinforcement learning control for aerospace systems

    NARCIS (Netherlands)

    Zhou, Y.

    2018-01-01

    Reinforcement Learning (RL) methods are relatively new in the field of aerospace guidance, navigation, and control. This dissertation aims to exploit RL methods to improve the autonomy and online learning of aerospace systems with respect to the a priori unknown system and environment, dynamical

  7. Fixed Point Learning Based Intelligent Traffic Control System

    Science.gov (United States)

    Zongyao, Wang; Cong, Sui; Cheng, Shao

    2017-10-01

    Fixed point learning has become an important tool for analysing large-scale distributed systems such as urban traffic networks. This paper presents a fixed point learning based intelligent traffic network control system. The system applies the convergence property of the fixed point theorem to optimize traffic flow density. The intelligent traffic control system achieves maximum usage of road resources by averaging traffic flow density across the traffic network. The intelligent traffic network control system is built on a decentralized structure and intelligent cooperation; no central controller is needed to manage the system. The proposed system is simple, effective and feasible for practical use. The performance of the system is examined through theoretical analysis and simulations. The results demonstrate that the system can effectively alleviate traffic congestion and increase the vehicles' average speed, and that it is flexible, reliable and feasible for practical use.
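
    A minimal sketch of the density-averaging idea is shown below: each road segment repeatedly averages its density with its neighbours, a contraction whose fixed point is the uniform density, so no central controller is required. The ring network, initial densities, and step size are invented for illustration.

      # Decentralized density-balancing sketch: each segment moves its traffic
      # density toward the average of its neighbours' densities every iteration.
      neighbours = {0: [1, 4], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3, 0]}
      density = [0.9, 0.2, 0.7, 0.1, 0.6]          # vehicles per unit length
      step_size = 0.3                               # local averaging weight

      for iteration in range(100):
          new_density = []
          for i, d in enumerate(density):
              avg_neighbour = sum(density[j] for j in neighbours[i]) / len(neighbours[i])
              new_density.append(d + step_size * (avg_neighbour - d))
          density = new_density

      print([round(d, 3) for d in density])   # approaches the network average 0.5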

  8. Systems control with generalized probabilistic fuzzy-reinforcement learning

    NARCIS (Netherlands)

    Hinojosa, J.; Nefti, S.; Kaymak, U.

    2011-01-01

    Reinforcement learning (RL) is a valuable learning method when the systems require a selection of control actions whose consequences emerge over long periods for which input-output data are not available. In most combinations of fuzzy systems and RL, the environment is considered to be

  9. Cognitive Models for Learning to Control Dynamic Systems

    National Research Council Canada - National Science Library

    Eberhart, Russ; Hu, Xiaohui; Chen, Yaobin

    2008-01-01

    Report developed under STTR contract for topic "Cognitive models for learning to control dynamic systems" demonstrated a swarm intelligence learning algorithm and its application in unmanned aerial vehicle (UAV) mission planning...

  10. Generalized projective synchronization of chaotic systems via adaptive learning control

    International Nuclear Information System (INIS)

    Yun-Ping, Sun; Jun-Min, Li; Hui-Lin, Wang; Jiang-An, Wang

    2010-01-01

    In this paper, a learning control approach is applied to the generalized projective synchronisation (GPS) of different chaotic systems with unknown periodically time-varying parameters. Using the Lyapunov–Krasovskii functional stability theory, a differential-difference mixed parametric learning law and an adaptive learning control law are constructed to make the states of two different chaotic systems asymptotically synchronised. The scheme is successfully applied to the generalized projective synchronisation between the Lorenz system and the Chen system. Moreover, numerical simulation results are used to verify the effectiveness of the proposed scheme. (general)

  11. Recent developments in learning control and system identification for robots and structures

    Science.gov (United States)

    Phan, M.; Juang, J.-N.; Longman, R. W.

    1990-01-01

    This paper reviews recent results in learning control and learning system identification, with particular emphasis on discrete-time formulations and their relation to adaptive theory. Related continuous-time results are also discussed. Among the topics presented are proportional, derivative, and integral learning controllers and the time-domain formulation of discrete learning algorithms. Newly developed techniques are described, including the concept of the repetition domain, the repetition-domain formulation of learning control by linear feedback, model reference learning control, and indirect learning control with parameter estimation, as well as related basic concepts and recursive and non-recursive methods for learning identification.

  12. A Parametric Learning and Identification Based Robust Iterative Learning Control for Time Varying Delay Systems

    Directory of Open Access Journals (Sweden)

    Lun Zhai

    2014-01-01

    A parametric learning based robust iterative learning control (ILC) scheme is applied to time-varying delay multiple-input multiple-output (MIMO) linear systems. The convergence conditions are derived by using the H∞ and linear matrix inequality (LMI) approaches, and the convergence speed is analyzed as well. A practical identification strategy is applied to optimize the learning laws and to improve the robustness and performance of the control system. Numerical simulations are presented to validate the above concepts.

  13. GA-based fuzzy reinforcement learning for control of a magnetic bearing system.

    Science.gov (United States)

    Lin, C T; Jou, C P

    2000-01-01

    This paper proposes a TD (temporal difference) and GA (genetic algorithm)-based reinforcement (TDGAR) learning method and applies it to the control of a real magnetic bearing system. The TDGAR learning scheme is a new hybrid GA, which integrates the TD prediction method and the GA to perform the reinforcement learning task. The TDGAR learning system is composed of two integrated feedforward networks. One neural network acts as a critic network to guide the learning of the other network (the action network) which determines the outputs (actions) of the TDGAR learning system. The action network can be a normal neural network or a neural fuzzy network. Using the TD prediction method, the critic network can predict the external reinforcement signal and provide a more informative internal reinforcement signal to the action network. The action network uses the GA to adapt itself according to the internal reinforcement signal. The key concept of the TDGAR learning scheme is to formulate the internal reinforcement signal as the fitness function for the GA such that the GA can evaluate the candidate solutions (chromosomes) regularly, even during periods without external feedback from the environment. This enables the GA to proceed to new generations regularly without waiting for the arrival of the external reinforcement signal. This can usually accelerate the GA learning since a reinforcement signal may only be available at a time long after a sequence of actions has occurred in the reinforcement learning problem. The proposed TDGAR learning system has been used to control an active magnetic bearing (AMB) system in practice. A systematic design procedure is developed to achieve successful integration of all the subsystems including magnetic suspension, mechanical structure, and controller training. The results show that the TDGAR learning scheme can successfully find a neural controller or a neural fuzzy controller for a self-designed magnetic bearing system.

  14. Consensus-based distributed cooperative learning from closed-loop neural control systems.

    Science.gov (United States)

    Chen, Weisheng; Hua, Shaoyong; Zhang, Huaguang

    2015-02-01

    In this paper, the neural tracking problem is addressed for a group of uncertain nonlinear systems where the system structures are identical but the reference signals are different. This paper focuses on studying the learning capability of neural networks (NNs) during the control process. First, we propose a novel control scheme called the distributed cooperative learning (DCL) control scheme, established by building a communication topology among the adaptive laws of the NN weights to share their learned knowledge online. It is further proved that if the communication topology is undirected and connected, all estimated weights of the NNs converge to small neighborhoods around their optimal values over a domain consisting of the union of all state orbits. Second, as a corollary, it is shown that the conclusion on deterministic learning still holds in the decentralized adaptive neural control scheme, where, however, the estimated weights of the NNs converge to small neighborhoods of the optimal values only along their own state orbits. Thus, the learned controllers obtained by the DCL scheme have better generalization capability than those obtained by the decentralized learning method. A simulation example is provided to verify the effectiveness and advantages of the control schemes proposed in this paper.

  15. Design of fuzzy learning control systems for steam generator water level control

    International Nuclear Information System (INIS)

    Park, Gee Yong

    1996-02-01

    A fuzzy learning algorithm is developed in order to construct useful control rules and tune the membership functions in a fuzzy logic controller used for water level control of a nuclear steam generator. Fuzzy logic controllers have been shown to perform better than conventional controllers for ill-defined or complex processes such as the nuclear steam generator. Whereas a fuzzy logic controller does not need a detailed mathematical model of the plant to be controlled, its structure must be built from the operator's linguistic knowledge of plant operations. This is not easy, and there is no systematic way to translate the operator's linguistic information into quantitative information. When the linguistic information of operators is incomplete, the parameters of the fuzzy controller must be tuned for better control performance. Tuning the structure of a fuzzy logic controller for optimal performance is a time- and effort-consuming procedure for the controller designer, and if the number of control inputs is large and the rule base is constructed in a multidimensional space, tuning the fuzzy controller structure becomes very difficult. Hence, the difficulty in putting experiential knowledge into quantitative (or numerical) form and the difficulty in tuning the rules are the major problems in designing a fuzzy logic controller. In order to overcome these problems, a learning algorithm based on the gradient descent method is included in the fuzzy control system such that the membership functions are tuned and the necessary rules are created automatically for good control performance. For stable learning with the gradient descent method, the range of the learning coefficient that avoids being trapped while not making learning too slow is investigated. Within this range, an optimal value of the learning coefficient is suggested and, with this value, the gradient

  16. Design of intelligent comfort control system with human learning and minimum power control strategies

    International Nuclear Information System (INIS)

    Liang, J.; Du, R.

    2008-01-01

    This paper presents the design of an intelligent comfort control system that combines human learning and minimum power control strategies for a heating, ventilating and air conditioning (HVAC) system. In the system, the predicted mean vote (PMV) is adopted as the control objective to improve the indoor comfort level by considering six comfort-related variables, whilst a direct neural network controller is designed to overcome the nonlinear feature of the PMV calculation for better performance. To achieve the highest comfort level for a specific user, a human learning strategy is designed to tune the user's comfort zone, and a VAV (variable air volume) and minimum power control strategy is then proposed to further minimize energy consumption. In order to validate the system design, a series of computer simulations is performed based on a derived HVAC and thermal space model. The simulation results confirm the design of the intelligent comfort control system. In comparison with a conventional temperature controller, this system provides a higher comfort level and better system performance, so it has great potential for HVAC applications in the future.

  17. Procedural learning during declarative control.

    Science.gov (United States)

    Crossley, Matthew J; Ashby, F Gregory

    2015-09-01

    There is now abundant evidence that human learning and memory are governed by multiple systems. As a result, research is now turning to the next question of how these putative systems interact. For instance, how is overall control of behavior coordinated, and does learning occur independently within systems regardless of what system is in control? Behavioral, neuroimaging, and neuroscience data are somewhat mixed with respect to these questions. Human neuroimaging and animal lesion studies suggest independent learning and are mostly agnostic with respect to control. Human behavioral studies suggest active inhibition of behavioral output but have little to say regarding learning. The results of two perceptual category-learning experiments are described that strongly suggest that procedural learning does occur while the explicit system is in control of behavior and that this learning might be just as good as if the procedural system was controlling the response. These results are consistent with the idea that declarative memory systems inhibit the ability of the procedural system to access motor output systems but do not prevent procedural learning. (c) 2015 APA, all rights reserved.

  18. E-Learning System for Learning Virtual Circuit Making with a Microcontroller and Programming to Control a Robot

    Science.gov (United States)

    Takemura, Atsushi

    2015-01-01

    This paper proposes a novel e-Learning system for learning electronic circuit making and programming a microcontroller to control a robot. The proposed e-Learning system comprises a virtual-circuit-making function for the construction of circuits with a versatile, Arduino microcontroller and an educational system that can simulate behaviors of…

  19. Off-policy integral reinforcement learning optimal tracking control for continuous-time chaotic systems

    International Nuclear Information System (INIS)

    Wei Qing-Lai; Song Rui-Zhuo; Xiao Wen-Dong; Sun Qiu-Ye

    2015-01-01

    This paper develops an off-policy integral reinforcement learning (IRL) algorithm to obtain the optimal tracking control of unknown chaotic systems. Off-policy IRL can learn the solution of the Hamilton–Jacobi–Bellman (HJB) equation from system data generated by an arbitrary control. Moreover, off-policy IRL can be regarded as a direct learning method, which avoids identification of the system dynamics. In this paper, the performance index function is first given based on the system tracking error and control error. To solve the HJB equation, an off-policy IRL algorithm is proposed. It is proven that the iterative control makes the tracking error system asymptotically stable and that the iterative performance index function is convergent. A simulation study demonstrates the effectiveness of the developed tracking control method. (paper)

  20. A new 2-D approach to iterative learning control systems

    International Nuclear Information System (INIS)

    Ashraf, S.; Muhammad, E.; Tasleem, M.

    2004-01-01

    The well-known two-dimensional (2-D) system theory is used to analyze and develop a class of learning control systems. In this paper we first explore and test a method given by Zheng and Jamshidi, in which all the input samples are treated at once. In comparison, our paper presents a scheme in which one sample at a time is treated. The 2-D state-space model of the proposed learning control scheme is given. An important consequence of the proposed scheme is that, given the right choice of gain matrix and sampling time, the system's output can be made to converge to any degree of accuracy. (author)

  1. Learning from neural control.

    Science.gov (United States)

    Wang, Cong; Hill, David J

    2006-01-01

    One of the amazing successes of biological systems is their ability to "learn by doing" and so adapt to their environment. In this paper, first, a deterministic learning mechanism is presented, by which an appropriately designed adaptive neural controller is capable of learning closed-loop system dynamics during tracking control to a periodic reference orbit. Among various neural network (NN) architectures, the localized radial basis function (RBF) network is employed. A property of persistence of excitation (PE) for RBF networks is established, and a partial PE condition of closed-loop signals, i.e., the PE condition of a regression subvector constructed out of the RBFs along a periodic state trajectory, is proven to be satisfied. Accurate NN approximation for closed-loop system dynamics is achieved in a local region along the periodic state trajectory, and a learning ability is implemented during a closed-loop feedback control process. Second, based on the deterministic learning mechanism, a neural learning control scheme is proposed which can effectively recall and reuse the learned knowledge to achieve closed-loop stability and improved control performance. The significance of this paper is that the presented deterministic learning mechanism and the neural learning control scheme provide elementary components toward the development of a biologically-plausible learning and control methodology. Simulation studies are included to demonstrate the effectiveness of the approach.
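
    The sketch below illustrates only the function-approximation ingredient of the scheme described above: a localized Gaussian RBF network fitted, by a simple LMS-style weight update, to samples of an unknown scalar term collected along a periodic trajectory. The closed-loop adaptive law and the persistence-of-excitation analysis of the paper are not reproduced; the target function, centres, and rates are assumptions.

      import numpy as np

      # Localized RBF network sketch: Gaussian basis functions on a grid are fit
      # to samples of an unknown scalar function gathered along a periodic orbit.
      def unknown_dynamics(x):
          return np.sin(x) + 0.3 * x ** 2       # stand-in for the term to be learned

      centers = np.linspace(-2.0, 2.0, 15)      # RBF centres covering the orbit
      width = 0.4
      weights = np.zeros(len(centers))
      rate = 0.05

      def rbf(x):
          return np.exp(-((x - centers) ** 2) / (2.0 * width ** 2))

      # Samples revisit the same periodic orbit x(t) = 2*sin(t) many times.
      for t in np.linspace(0.0, 50.0, 5000):
          x = 2.0 * np.sin(t)
          phi = rbf(x)
          error = unknown_dynamics(x) - weights @ phi
          weights += rate * error * phi         # LMS-style weight update

      test = np.array([-1.5, 0.0, 1.5])
      print("true values :", np.round([unknown_dynamics(v) for v in test], 3))
      print("RBF estimate:", np.round([weights @ rbf(v) for v in test], 3))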

  2. Autonomy supported, learner-controlled or system-controlled learning in hypermedia environments and the influence of academic self-regulation style

    NARCIS (Netherlands)

    Gorissen, Chantal; Kester, Liesbeth; Brand-Gruwel, Saskia; Martens, Rob

    2012-01-01

    This study focuses on learning in three different hypermedia environments that either support autonomous learning, learner-controlled learning or system-controlled learning and explores the mediating role of academic self-regulation style ( ASRS; i.e., a macro level of motivation) on learning. This

  3. Autonomy supported, learner-controlled or system-controlled learning in hypermedia environments and the influence of academic self-regulation style

    NARCIS (Netherlands)

    Gorissen, Chantal J J; Kester, Liesbeth; Brand-Gruwel, Saskia; Martens, Rob

    2015-01-01

    This study focuses on learning in three different hypermedia environments that either support autonomous learning, learner-controlled learning or system-controlled learning and explores the mediating role of academic self-regulation style (ASRS; i.e. a macro level of motivation) on learning. This

  4. The Effectiveness of E-Learning Systems: A Review of the Empirical Literature on Learner Control

    Science.gov (United States)

    Sorgenfrei, Christian; Smolnik, Stefan

    2016-01-01

    E-learning systems are considerably changing education and organizational training. With the advancement of online-based learning systems, learner control over the instructional process has emerged as a decisive factor in technology-based forms of learning. However, conceptual work on the role of learner control in e-learning has not advanced…

  5. Iterative learning control for multi-agent systems coordination

    CERN Document Server

    Yang, Shiping; Li, Xuefang; Shen, Dong

    2016-01-01

    A timely guide using iterative learning control (ILC) as a solution for multi-agent systems (MAS) challenges, this book showcases recent advances and industrially relevant applications. Readers are first given a comprehensive overview of the intersection between ILC and MAS, then introduced to a range of topics that include both basic and advanced theoretical discussions, rigorous mathematics, engineering practice, and both linear and nonlinear systems. Through systematic discussion of network theory and intelligent control, the authors explore future research possibilities, develop new tools, and provide numerous applications such as power grids, communication and sensor networks, intelligent transportation systems, and formation control. Readers will gain a roadmap of the latest advances in the fields and can use their newfound knowledge to design their own algorithms.

  6. Autonomy Supported, Learner-Controlled or System-Controlled Learning in Hypermedia Environments and the Influence of Academic Self-Regulation Style

    Science.gov (United States)

    Gorissen, Chantal J. J.; Kester, Liesbeth; Brand-Gruwel, Saskia; Martens, Rob

    2015-01-01

    This study focuses on learning in three different hypermedia environments that either support autonomous learning, learner-controlled learning or system-controlled learning and explores the mediating role of academic self-regulation style (ASRS; i.e. a macro level of motivation) on learning. This research was performed to gain more insight in the…

  7. Lessons learned on the Ground Test Accelerator control system

    International Nuclear Information System (INIS)

    Kozubal, A.J.; Weiss, R.E.

    1994-01-01

    When we initiated the control system design for the Ground Test Accelerator (GTA), we envisioned a system that would be flexible enough to handle the changing requirements of an experimental project. This control system would use a developers' toolkit to reduce the cost and time to develop applications for GTA, and through the use of open standards, the system would accommodate unforeseen requirements as they arose. Furthermore, we would attempt to demonstrate on GTA a level of automation far beyond that achieved by existing accelerator control systems. How well did we achieve these goals? What were the stumbling blocks to deploying the control system, and what assumptions did we make about requirements that turned out to be incorrect? In this paper we look at the process of developing a control system that evolved into what is now the "Experimental Physics and Industrial Control System" (EPICS). Also, we assess the impact of this system on the GTA project, as well as the impact of GTA on EPICS. The lessons learned on GTA will be valuable for future projects.

  8. Which Management Control System principles and aspects are relevant when deploying a learning machine?

    OpenAIRE

    Martin, Johansson; Mikael, Göthager

    2017-01-01

    How shall a business adapt its management control systems when learning machines enter the arena? Will the control system continue to focus on human aspects and continue to consider a learning machine to be an automation tool like any other historically programmed computer? Learning machines introduce productivity capabilities that achieve very high levels of efficiency and quality. A learning machine can sort through large amounts of data and draw conclusions that would be difficult for a human mind. Howev...

  9. Online Learning Flight Control for Intelligent Flight Control Systems (IFCS)

    Science.gov (United States)

    Niewoehner, Kevin R.; Carter, John (Technical Monitor)

    2001-01-01

    The research accomplishments for the cooperative agreement 'Online Learning Flight Control for Intelligent Flight Control Systems (IFCS)' include the following: (1) previous IFC program data collection and analysis; (2) IFC program support site (configured IFC systems support network, configured Tornado/VxWorks OS development system, made Configuration and Documentation Management Systems Internet accessible); (3) Airborne Research Test Systems (ARTS) II Hardware (developed hardware requirements specification, developing environmental testing requirements, hardware design, and hardware design development); (4) ARTS II software development laboratory unit (procurement of lab style hardware, configured lab style hardware, and designed interface module equivalent to ARTS II faceplate); (5) program support documentation (developed software development plan, configuration management plan, and software verification and validation plan); (6) LWR algorithm analysis (performed timing and profiling on algorithm); (7) pre-trained neural network analysis; (8) Dynamic Cell Structures (DCS) Neural Network Analysis (performing timing and profiling on algorithm); and (9) conducted technical interchange and quarterly meetings to define IFC research goals.

  10. Iterative Learning Control design for uncertain and time-windowed systems

    NARCIS (Netherlands)

    Wijdeven, van de J.J.M.

    2008-01-01

    Iterative Learning Control (ILC) is a control strategy capable of dramatically increasing the performance of systems that perform batch repetitive tasks. This performance improvement is achieved by iteratively updating the command signal, using measured error data from previous trials, i.e., by

  11. Effectiveness of Adaptive Assessment versus Learner Control in a Multimedia Learning System

    Science.gov (United States)

    Chen, Ching-Huei; Chang, Shu-Wei

    2015-01-01

    The purpose of this study was to explore the effectiveness of adaptive assessment versus learner control in a multimedia learning system designed to help secondary students learn science. Unlike other systems, this paper presents a workflow of adaptive assessment following instructional materials that better align with learners' cognitive…

  12. Machine Learning Control For Highly Reconfigurable High-Order Systems

    Science.gov (United States)

    2015-01-02

    Report front matter (extraction fragment): AFRL-OSR-VA-TR-2015-0012, Machine Learning Control for Highly Reconfigurable High-Order Systems, John Valasek, Texas Engineering Experiment Station, Aerospace Engineering; grant FA9550-11-1-0302; period of performance 1 July 2011 – 29 September 2014.

  13. Developing Learning Tool of Control System Engineering Using Matrix Laboratory Software Oriented on Industrial Needs

    Science.gov (United States)

    Isnur Haryudo, Subuh; Imam Agung, Achmad; Firmansyah, Rifqi

    2018-04-01

    The purpose of this research is to develop learning media for control engineering using Matrix Laboratory software with an industry-needs approach. Learning media serve as a tool for creating a better and more effective teaching and learning situation because they can accelerate the learning process and enhance the quality of learning. Control engineering courses using Matrix Laboratory software can increase students' interest and attention, provide real experience, and foster an independent attitude. The research design follows research and development (R & D) methods modified by a multi-disciplinary team of researchers. The research used a computer-based learning method consisting of a computer and Matrix Laboratory software integrated with physical props. Matrix Laboratory can visualize the theory and analysis of control systems, integrating computing, visualization and programming in an easy-to-use environment. The resulting instructional media use Matrix Laboratory software to work through the mathematical equations of a control system application with a DC motor plant and a PID (proportional-integral-derivative) controller. This is relevant because PID control is widely used in industrial production processes implemented on distributed control systems (DCSs), programmable logic controllers (PLCs), and microcontrollers (MCUs).
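
    For reference, the PID-with-DC-motor idea mentioned above can also be sketched without Matrix Laboratory; the plain-Python snippet below runs a discrete PID loop on a first-order approximation of a DC motor speed plant. The motor constants and PID gains are assumed values, not taken from the developed media.

      # Discrete PID speed control of a first-order DC-motor approximation.
      dt = 0.001
      tau, K = 0.1, 2.0                   # motor time constant (s) and gain
      kp, ki, kd = 5.0, 20.0, 0.01        # PID gains (illustrative)

      speed = 0.0
      setpoint = 100.0                    # desired speed (rad/s)
      integral = 0.0
      prev_error = setpoint - speed       # avoids a derivative kick on the first step

      for k in range(3000):               # simulate 3 seconds
          error = setpoint - speed
          integral += error * dt
          derivative = (error - prev_error) / dt
          prev_error = error
          voltage = kp * error + ki * integral + kd * derivative
          # First-order speed model: tau * dw/dt = -w + K * v
          speed += dt * (-speed + K * voltage) / tau

      print("speed after 3 s:", round(speed, 2))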

  14. Patients with Parkinson's disease learn to control complex systems-an indication for intact implicit cognitive skill learning.

    Science.gov (United States)

    Witt, Karsten; Daniels, Christine; Daniel, Victoria; Schmitt-Eliassen, Julia; Volkmann, Jens; Deuschl, Günther

    2006-01-01

    Implicit memory and learning mechanisms are composed of multiple processes and systems. Previous studies demonstrated a basal ganglia involvement in purely cognitive tasks that form stimulus response habits by reinforcement learning such as implicit classification learning. We will test the basal ganglia influence on two cognitive implicit tasks previously described by Berry and Broadbent, the sugar production task and the personal interaction task. Furthermore, we will investigate the relationship between certain aspects of an executive dysfunction and implicit learning. To this end, we have tested 22 Parkinsonian patients and 22 age-matched controls on two implicit cognitive tasks, in which participants learned to control a complex system. They interacted with the system by choosing an input value and obtaining an output that was related in a complex manner to the input. The objective was to reach and maintain a specific target value across trials (dynamic system learning). The two tasks followed the same underlying complex rule but had different surface appearances. Subsequently, participants performed an executive test battery including the Stroop test, verbal fluency and the Wisconsin card sorting test (WCST). The results demonstrate intact implicit learning in patients, despite an executive dysfunction in the Parkinsonian group. They lead to the conclusion that the basal ganglia system affected in Parkinson's disease does not contribute to the implicit acquisition of a new cognitive skill. Furthermore, the Parkinsonian patients were able to reach a specific goal in an implicit learning context despite impaired goal directed behaviour in the WCST, a classic test of executive functions. These results demonstrate a functional independence of implicit cognitive skill learning and certain aspects of executive functions.

  15. Biomechanical Reconstruction Using the Tacit Learning System: Intuitive Control of Prosthetic Hand Rotation.

    Science.gov (United States)

    Oyama, Shintaro; Shimoda, Shingo; Alnajjar, Fady S K; Iwatsuki, Katsuyuki; Hoshiyama, Minoru; Tanaka, Hirotaka; Hirata, Hitoshi

    2016-01-01

    Background: For mechanically reconstructing human biomechanical function, intuitive proportional control, and robustness to unexpected situations are required. Particularly, creating a functional hand prosthesis is a typical challenge in the reconstruction of lost biomechanical function. Nevertheless, currently available control algorithms are in the development phase. The most advanced algorithms for controlling multifunctional prosthesis are machine learning and pattern recognition of myoelectric signals. Despite the increase in computational speed, these methods cannot avoid the requirement of user consciousness and classified separation errors. "Tacit Learning System" is a simple but novel adaptive control strategy that can self-adapt its posture to environment changes. We introduced the strategy in the prosthesis rotation control to achieve compensatory reduction, as well as evaluated the system and its effects on the user. Methods: We conducted a non-randomized study involving eight prosthesis users to perform a bar relocation task with/without Tacit Learning System support. Hand piece and body motions were recorded continuously with goniometers, videos, and a motion-capture system. Findings: Reduction in the participants' upper extremity rotatory compensation motion was monitored during the relocation task in all participants. The estimated profile of total body energy consumption improved in five out of six participants. Interpretation: Our system rapidly accomplished nearly natural motion without unexpected errors. The Tacit Learning System not only adapts human motions but also enhances the human ability to adapt to the system quickly, while the system amplifies compensation generated by the residual limb. The concept can be extended to various situations for reconstructing lost functions that can be compensated.

  16. Composite Intelligent Learning Control of Strict-Feedback Systems With Disturbance.

    Science.gov (United States)

    Xu, Bin; Sun, Fuchun

    2018-02-01

    This paper addresses the dynamic surface control of uncertain nonlinear systems on the basis of composite intelligent learning and a disturbance observer in the presence of unknown system nonlinearity and time-varying disturbance. A serial-parallel estimation model with intelligent approximation and disturbance estimation is built to obtain the prediction error, and in this way the composite law for weight updating is constructed. The nonlinear disturbance observer is developed using the intelligent approximation information, and the disturbance estimate is guaranteed to converge to a bounded compact set. The highlight is that, unlike previous work aimed directly at asymptotic stability, the transparency of the intelligent approximation and disturbance estimation is included in the control scheme. Uniformly ultimately bounded stability is analyzed via the Lyapunov method. Simulation verification shows that the composite intelligent learning with disturbance observer can efficiently estimate the effect caused by system nonlinearity and disturbance, and that the proposed approach achieves better performance with higher accuracy.

  17. A learning flight control system for the F8-DFBW aircraft. [Digital Fly-By-Wire

    Science.gov (United States)

    Montgomery, R. C.; Mekel, R.; Nachmias, S.

    1978-01-01

    This report contains a complete description of a learning control system designed for the F8-DFBW aircraft. The system is parameter-adaptive, with the additional feature that it 'learns' the variation of the control system gains needed over the flight envelope. It thus generates and modifies its gain schedule when suitable data are available. The report emphasizes the novel learning features of the system: the forms of representation of the flight envelope and the process by which identified parameters are used to modify the gain schedule. It contains data taken during piloted real-time six-degree-of-freedom simulations that were used to develop and evaluate the system.
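
    As a rough sketch of what a learned gain schedule can look like, the snippet below stores feedback gains at a few flight-envelope grid points (indexed here by dynamic pressure), interpolates between them at run time, and blends newly identified gains into the nearest grid entry. The grid, gains, and update rule are invented for illustration and are not the F8-DFBW design.

      import bisect

      # Gain-schedule learning sketch: a stored gain table is interpolated at
      # run time and updated from identified gains.
      grid_qbar = [50.0, 150.0, 300.0, 500.0]      # dynamic pressure grid (psf)
      gain_table = [2.0, 1.4, 0.9, 0.6]            # pitch-rate feedback gains

      def scheduled_gain(qbar):
          """Linearly interpolate the stored gain schedule."""
          if qbar <= grid_qbar[0]:
              return gain_table[0]
          if qbar >= grid_qbar[-1]:
              return gain_table[-1]
          i = bisect.bisect_right(grid_qbar, qbar) - 1
          frac = (qbar - grid_qbar[i]) / (grid_qbar[i + 1] - grid_qbar[i])
          return gain_table[i] + frac * (gain_table[i + 1] - gain_table[i])

      def learn(qbar, identified_gain, rate=0.2):
          """Blend a newly identified gain into the nearest grid entry."""
          i = min(range(len(grid_qbar)), key=lambda j: abs(grid_qbar[j] - qbar))
          gain_table[i] += rate * (identified_gain - gain_table[i])

      learn(160.0, 1.2)                   # e.g. an identification result at 160 psf
      print(round(scheduled_gain(200.0), 3))   # prints 1.207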

  18. Controlled Experiment Replication in Evaluation of E-Learning System's Educational Influence

    Science.gov (United States)

    Grubisic, Ani; Stankov, Slavomir; Rosic, Marko; Zitko, Branko

    2009-01-01

    We believe that every effectiveness evaluation should be replicated, at least in order to verify the original results and to indicate the evaluated e-learning system's advantages or disadvantages. This paper presents the methodology for conducting a controlled experiment replication, as well as results of a controlled experiment and an internal…

  19. Alignment Condition-Based Robust Adaptive Iterative Learning Control of Uncertain Robot System

    Directory of Open Access Journals (Sweden)

    Guofeng Tong

    2014-04-01

    This paper proposes an adaptive iterative learning control strategy integrated with saturation-based robust control for an uncertain robot system in the presence of modelling uncertainties, unknown parameters, and external disturbance under the alignment condition. An important merit is that it achieves adaptive switching of the gain matrix both in conventional PD-type feedforward control and in robust adaptive control in the iteration domain simultaneously. The convergence analysis of the proposed control law is based on Lyapunov's direct method under the alignment initial condition. Simulation results demonstrate the faster learning rate and better robust performance of the proposed algorithm in comparison with other existing robust controllers. An actual experiment on a three-DOF robot manipulator confirms its practical effectiveness.

  20. How does a specific learning and memory system in the mammalian brain gain control of behavior?

    Science.gov (United States)

    McDonald, Robert J; Hong, Nancy S

    2013-11-01

    This review addresses a fundamental, yet poorly understood, set of issues in systems neuroscience. The issues revolve around conceptualizations of the organization of learning and memory in the mammalian brain. One intriguing, and somewhat popular, conceptualization is the idea that there are multiple learning and memory systems in the mammalian brain and that they interact in different ways to influence and/or control behavior. This approach has generated interesting empirical and theoretical work supporting this view. One issue that needs to be addressed is how these systems influence or gain control of voluntary behavior. To address this issue, we clearly specify what we mean by a learning and memory system. We then review two types of processes that might influence which memory system gains control of behavior. The first are external factors that can affect which system controls behavior in a given situation, including task parameters such as the kind of information available to the subject, the type of training experience, and the amount of training. The second are brain mechanisms that might influence which memory system controls behavior in a given situation, including executive functions mediated by the prefrontal cortex, switching mechanisms mediated by ascending neurotransmitter systems, and the unique role of the hippocampus during learning. Trait differences in the control of different learning and memory systems will also be considered; such differences in learning and memory function are thought to emerge from differences in the level of prefrontal influence, differences in plasticity processes, differences in ascending neurotransmitter control, and differential access to effector systems such as motivational and motor systems. Finally, we present scenarios in which different mechanisms might interact. This review was conceived to become a jumping off point for new work directed at understanding these issues. The outcome of

  1. Real time reinforcement learning control of dynamic systems applied to an inverted pendulum

    NARCIS (Netherlands)

    van Luenen, W.T.C.; van Luenen, W.T.C.; Stender, J.; Addis, T.

    1990-01-01

    Describes work started in order to investigate the use of neural networks for application in adaptive or learning control systems. Neural networks have learning capabilities and they can be used to realize non-linear mappings. These are attractive features which could make them useful building

  2. Experiential learning in control systems laboratories and engineering project management

    Science.gov (United States)

    Reck, Rebecca Marie

    Experiential learning is a process by which a student creates knowledge through the insights gained from an experience. Kolb's model of experiential learning is a cycle of four modes: (1) concrete experience, (2) reflective observation, (3) abstract conceptualization, and (4) active experimentation. His model is used in each of the three studies presented in this dissertation. Laboratories are a popular way to apply the experiential learning modes in STEM courses. Laboratory kits allow students to take home laboratory equipment to complete experiments on their own time. Although students like laboratory kits, no previous studies compared student learning outcomes on assignments using laboratory kits with existing laboratory equipment. In this study, we examined the similarities and differences between the experiences of students who used a portable laboratory kit and students who used the traditional equipment. During the 2014- 2015 academic year, we conducted a quasi-experiment to compare students' achievement of learning outcomes and their experiences in the instructional laboratory for an introductory control systems course. Half of the laboratory sections in each semester used the existing equipment, while the other sections used a new kit. We collected both quantitative data and qualitative data. We did not identify any major differences in the student experience based on the equipment they used. Course objectives, like research objectives and product requirements, help provide clarity and direction for faculty and students. Unfortunately, course and laboratory objectives are not always clearly stated. Without a clear set of objectives, it can be hard to design a learning experience and determine whether students are achieving the intended outcomes of the course or laboratory. In this study, I identified a common set of laboratory objectives, concepts, and components of a laboratory apparatus for undergraduate control systems laboratories. During the summer of

  3. Using Feedback Error Learning for Control of Electro Hydraulic Servo System by Laguerre

    Directory of Open Access Journals (Sweden)

    Amir Reza Zare Bidaki

    2014-01-01

    In this paper, a new Laguerre controller is proposed to control an electro-hydraulic servo system. The proposed controller uses the feedback error learning method and leads to significantly improved performance, in terms of settling time and the amplitude of the control signal, compared with other controllers. All derived results are validated by simulation of a nonlinear mathematical model of the system. The simulation results show the advantages of the proposed method in terms of both settling time and amplitude of the control signal.
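
    The feedback-error-learning structure mentioned above can be sketched as follows: a conventional feedback controller acts on the tracking error, and its output also serves as the teaching signal for an adaptive feedforward term, which gradually takes over the control effort. In the sketch a simple linear-in-parameters feedforward stands in for the paper's Laguerre parameterisation, and the plant and all gains are assumed.

      import numpy as np

      # Feedback-error-learning sketch: the feedback controller's output drives
      # the adaptation of a feedforward term, so the feedforward slowly takes
      # over and the feedback effort shrinks.
      dt = 0.01
      a, b = -2.0, 1.5                    # plant x' = a*x + b*u
      kp = 4.0                            # feedback gain
      theta = np.zeros(2)                 # feedforward weights on [r, r_dot]
      rate = 0.02

      x = 0.0
      fb_effort = []
      for k in range(60000):
          t = k * dt
          r, r_dot = np.sin(t), np.cos(t)
          phi = np.array([r, r_dot])      # regressor for the feedforward term
          u_ff = theta @ phi
          u_fb = kp * (r - x)             # feedback on the tracking error
          x += dt * (a * x + b * (u_ff + u_fb))
          theta += rate * u_fb * phi * dt # feedback output is the learning signal
          if k >= 50000:
              fb_effort.append(abs(u_fb))

      print("mean feedback effort near the end:", round(float(np.mean(fb_effort)), 4))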

  4. A Robust Cooperated Control Method with Reinforcement Learning and Adaptive H∞ Control

    Science.gov (United States)

    Obayashi, Masanao; Uchiyama, Shogo; Kuremoto, Takashi; Kobayashi, Kunikazu

    This study proposes a robust cooperated control method combining reinforcement learning with robust control. A remarkable characteristic of reinforcement learning is that it does not require a model formula; however, it does not guarantee the stability of the system. On the other hand, a robust control system guarantees stability and robustness, but it requires a model formula. We employ both the actor-critic method, a kind of reinforcement learning requiring only a minimal amount of computation to control continuous-valued actions, and traditional robust control, that is, H∞ control. The proposed method was compared with the conventional control method, that is, the actor-critic method alone, through computer simulation of controlling the angle and position of a crane system, and the simulation results showed the effectiveness of the proposed method.
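
    A minimal actor-critic sketch is given below to illustrate the reinforcement learning half of the method: a Gaussian policy (actor) and a quadratic-feature value function (critic) are both updated from the TD error on a toy scalar regulation task. The crane model and the adaptive H∞ part of the paper are not reproduced, all gains and the plant are assumed, and no convergence guarantee is claimed.

      import numpy as np

      # Minimal actor-critic sketch for a continuous-valued action.
      rng = np.random.default_rng(0)
      gamma, alpha_c, alpha_a, sigma = 0.95, 0.05, 0.005, 0.3

      w_critic = 0.0        # value estimate V(x) ~ w_critic * x^2
      k_actor = 0.0         # policy mean u = k_actor * x, plus Gaussian exploration noise

      x = 1.0
      for step in range(50000):
          u = k_actor * x + sigma * rng.standard_normal()
          x_next = 0.95 * x + 0.1 * u                   # slow first-order toy plant
          reward = -(x_next ** 2) - 0.01 * u ** 2
          td = reward + gamma * w_critic * x_next ** 2 - w_critic * x ** 2
          w_critic += alpha_c * td * x ** 2             # TD(0) critic update
          k_actor += alpha_a * td * (u - k_actor * x) * x / sigma ** 2
          k_actor = max(-10.0, min(10.0, k_actor))      # keep the toy plant stable
          x = x_next
          if step % 100 == 99:                          # periodic reset for exploration
              x = rng.uniform(-1.0, 1.0)

      print("learned feedback gain:", round(k_actor, 3))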

  5. A model reference and sensitivity model-based self-learning fuzzy logic controller as a solution for control of nonlinear servo systems

    NARCIS (Netherlands)

    Kovacic, Z.; Bogdan, S.; Balenovic, M.

    1999-01-01

    In this paper, the design, simulation and experimental verification of a self-learning fuzzy logic controller (SLFLC) suitable for the control of nonlinear servo systems are described. The SLFLC contains a learning algorithm that utilizes a second-order reference model and a sensitivity model

  6. Robust Monotonically Convergent Iterative Learning Control for Discrete-Time Systems via Generalized KYP Lemma

    Directory of Open Access Journals (Sweden)

    Jian Ding

    2014-01-01

    This paper addresses the problem of P-type iterative learning control for a class of multiple-input multiple-output linear discrete-time systems, with the aim of developing a robust monotonically convergent control law design over a finite frequency range. It is shown that the 2-D iterative learning control process can be represented as a 1-D state-space model regardless of relative degree. With the generalized Kalman-Yakubovich-Popov lemma applied, the monotone convergence conditions can be described with the help of the linear matrix inequality technique, and formulas for the design of the control gain matrices can be developed. An extension to robust control law design against systems with structured and polytopic-type uncertainties is also considered. Two numerical examples are provided to validate the feasibility and effectiveness of the proposed method.

  7. Lessons Learned and Flight Results from the F15 Intelligent Flight Control System Project

    Science.gov (United States)

    Bosworth, John

    2006-01-01

    A viewgraph presentation on the lessons learned and flight results from the F15 Intelligent Flight Control System (IFCS) project is shown. The topics include: 1) F-15 IFCS Project Goals; 2) Motivation; 3) IFCS Approach; 4) NASA F-15 #837 Aircraft Description; 5) Flight Envelope; 6) Limited Authority System; 7) NN Floating Limiter; 8) Flight Experiment; 9) Adaptation Goals; 10) Handling Qualities Performance Metric; 11) Project Phases; 12) Indirect Adaptive Control Architecture; 13) Indirect Adaptive Experience and Lessons Learned; 14) Gen II Direct Adaptive Control Architecture; 15) Current Status; 16) Effect of Canard Multiplier; 17) Simulated Canard Failure Stab Open Loop; 18) Canard Multiplier Effect Closed Loop Freq. Resp.; 19) Simulated Canard Failure Stab Open Loop with Adaptation; 20) Canard Multiplier Effect Closed Loop with Adaptation; 21) Gen 2 NN Wts from Simulation; 22) Direct Adaptive Experience and Lessons Learned; and 23) Conclusions

  8. Lessons learned from the MIT Tara control and data system

    International Nuclear Information System (INIS)

    Gaudreau, M.P.J.; Sullivan, J.D.; Fredian, T.W.; Irby, J.H.; Karcher, C.A.; Rameriz, R.A.; Sevillano, E.; Stillerman, J.A.; Thomas, P.

    1987-10-01

    The control and data system of the MIT Tara Tandem Mirror worked successfully throughout the lifetime of the experiment (1983 through 1987). As the Tara project winds down, it is appropriate to summarize the lessons learned from the implementation and operation of the control and data system over the years and in its final form. The control system handled ∼2400 I/O points in real time throughout the 5 to 10 minute shot cycle, while the data system, in near real time, handled ∼1000 signals with a total of 5 to 7 Mbytes of data each shot. The implementation depended upon a consistent approach based on separating physics and engineering functions and on detailed functional diagrams with narrowly defined cross communication. This paper is a comprehensive treatment of the principal successes, residual problems, and dilemmas that arose from the beginning until the final hardware and software implementation. Suggestions for future systems of either similar size or of larger scale, such as CIT, are made in the conclusion. 11 refs., 1 fig

  9. A Controller Design with ANFIS Architecture Attendant Learning Ability for SSSC-Based Damping Controller Applied in Single Machine Infinite Bus System

    Directory of Open Access Journals (Sweden)

    A. Khoshsaadat

    2014-09-01

    The Static Synchronous Series Compensator (SSSC) is a series-compensating Flexible AC Transmission System (FACTS) controller that maintains power flow control on a transmission line by injecting a voltage in quadrature with the line current and in series with the line. In this work, an Adaptive Network-based Fuzzy Inference System controller (ANFISC) is proposed for control of an SSSC-based damping system and applied to a Single Machine Infinite Bus (SMIB) power system. To implement the learning process in this controller, we use the Forward Signal and Backward Error Back-Propagation (FSBEBP) learning method to improve the system efficiency. This artificial-intelligence-based control model leads to a controller with an adaptive structure, improved accuracy, high damping ability and good dynamic performance. System implementation is easy, and it requires 49 fuzzy rules for the inference engine of the system. Compared with other, more complex neuro-fuzzy systems, this controller has a moderate number of fuzzy rules and a low number of layers, but high accuracy. In order to demonstrate the ability of the proposed controller, it is simulated and its output is compared with those of a classic lead-lag-based controller (LLC) and a PI controller.

  10. Development of a Computer-aided Learning System for Graphical Analysis of Continuous-Time Control Systems

    Directory of Open Access Journals (Sweden)

    J. F. Opadiji

    2010-06-01

    We present the development and deployment process of a computer-aided learning tool which serves as a training aid for undergraduate control engineering courses. We show the process of algorithm construction and implementation of the software, which is also aimed at teaching software development at the undergraduate level. The scope of this project is limited to the graphical analysis of continuous-time control systems.

  11. Integrated Programme Control Systems: Lessons Learned

    Energy Technology Data Exchange (ETDEWEB)

    Brown, C. W. [Babcock International Group PLC (formerly UKAEA Ltd) B21 Forss, Thurso, Caithness, Scotland (United Kingdom)

    2013-08-15

    Dounreay was the UK's centre of fast reactor research and development from 1955 until 1994 and is now Scotland's largest nuclear clean-up and demolition project. After four decades of research, Dounreay is now a site of construction, demolition and waste management, designed to return the site to as near as practicable to its original condition. Dounreay has a turnover in the region of £150 million a year and employs approximately 900 people. It subcontracts work to 50 or so companies in the supply chain and this provides employment for a similar number of people. The plan for decommissioning the site anticipates all redundant buildings will be cleared in the short term. The target date to achieve interim end state by 2039 is being reviewed in light of Government funding constraints, and will be subject to change through the NDA-led site management competition. In the longer term, controls will be put in place on the use of contaminated land until 2300. In supporting the planning, management and organisational aspects of this complex decommissioning programme, an integrated programme controls system has been developed and deployed. This consists of a combination of commercial and bespoke tools integrated to support all aspects of programme management, namely scope, schedule, cost, estimating and risk, in order to provide baseline and performance management data based upon the application of earned value management principles. Through system evolution and lessons learned, the main benefits of this approach are management data consistency, rapid communication of live information, and increased granularity of data providing summary and detailed reports which identify performance trends that lead to corrective actions. The challenges of such an approach are effective use of the information to realise positive changes, balancing the annual system support and development costs against the business needs, and maximising system performance. (author)

  12. A neural learning classifier system with self-adaptive constructivism for mobile robot control.

    Science.gov (United States)

    Hurst, Jacob; Bull, Larry

    2006-01-01

    For artificial entities to achieve true autonomy and display complex lifelike behavior, they will need to exploit appropriate adaptable learning algorithms. In this context adaptability implies flexibility guided by the environment at any given time and an open-ended ability to learn appropriate behaviors. This article examines the use of constructivism-inspired mechanisms within a neural learning classifier system architecture that exploits parameter self-adaptation as an approach to realize such behavior. The system uses a rule structure in which each rule is represented by an artificial neural network. It is shown that appropriate internal rule complexity emerges during learning at a rate controlled by the learner and that the structure indicates underlying features of the task. Results are presented in simulated mazes before moving to a mobile robot platform.

  13. Event-Triggered Distributed Control of Nonlinear Interconnected Systems Using Online Reinforcement Learning With Exploration.

    Science.gov (United States)

    Narayanan, Vignesh; Jagannathan, Sarangapani

    2017-09-07

    In this paper, a distributed control scheme for an interconnected system composed of uncertain input-affine nonlinear subsystems with event-triggered state feedback is presented, using a novel hybrid-learning-based approximate dynamic programming scheme with online exploration. First, an approximate solution to the Hamilton-Jacobi-Bellman equation is generated with event-sampled neural network (NN) approximation and, subsequently, a near-optimal control policy for each subsystem is derived. Artificial NNs are utilized as function approximators to develop a suite of identifiers and to learn the dynamics of each subsystem. The NN weight tuning rules for the identifier and the event-triggering condition are derived using Lyapunov stability theory. Taking into account the effects of NN approximation of the system dynamics and of bootstrapping, a novel NN weight update is presented to approximate the optimal value function. Finally, a novel strategy to incorporate exploration into the online control framework, using the identifiers, is introduced to reduce the overall cost at the expense of additional computations during the initial online learning phase. The system states and the NN weight estimation errors are regulated, and local uniformly ultimately bounded results are achieved. The analytical results are substantiated using simulation studies.

  14. Traffic light control by multiagent reinforcement learning systems

    NARCIS (Netherlands)

    Bakker, B.; Whiteson, S.; Kester, L.; Groen, F.C.A.; Babuška, R.; Groen, F.C.A.

    2010-01-01

    Traffic light control is one of the main means of controlling road traffic. Improving traffic control is important because it can lead to higher traffic throughput and reduced traffic congestion. This chapter describes multiagent reinforcement learning techniques for the automatic optimization of traffic light controllers.

  16. Cooperative learning neural network output feedback control of uncertain nonlinear multi-agent systems under directed topologies

    Science.gov (United States)

    Wang, W.; Wang, D.; Peng, Z. H.

    2017-09-01

    Without assuming that the communication topologies among the neural network (NN) weights are undirected or that the states of each agent are measurable, cooperative learning NN output feedback control is addressed for uncertain nonlinear multi-agent systems with identical structures in strict-feedback form. By establishing directed communication topologies among the NN weights so that they share their learned knowledge, NNs with cooperative learning laws are employed to identify the uncertainties. By designing NN-based κ-filter observers to estimate the unmeasurable states, a new cooperative learning output feedback control scheme is proposed to guarantee that the system outputs track nonidentical reference signals with bounded tracking errors. A simulation example is given to demonstrate the effectiveness of the theoretical results.

  17. Rule-bases construction through self-learning for a table-based Sugeno-Takagi fuzzy logic control system

    Directory of Open Access Journals (Sweden)

    C. Boldisor

    2009-12-01

    Full Text Available A self-learning methodology for building the rule base of a fuzzy logic controller (FLC) is presented and verified, with the aim of adding intelligent characteristics to fuzzy logic control systems. The methodology is a simplified version of those presented in the current literature: some aspects are intentionally ignored because they rarely appear in control system engineering, and a SISO process is considered here. The fuzzy inference system obtained is of the table-based Sugeno-Takagi type. The system's desired performance is defined by a reference model, and rules are extracted from recorded data after the correct control actions have been learned. The presented algorithm is tested by constructing the rule base of a fuzzy controller for a DC drive application, and the system's performance and the method's viability are analyzed.

  18. Selected Flight Test Results for Online Learning Neural Network-Based Flight Control System

    Science.gov (United States)

    Williams-Hayes, Peggy S.

    2004-01-01

    The NASA F-15 Intelligent Flight Control System project team developed a series of flight control concepts designed to demonstrate the benefits of neural network-based adaptive controllers, with the objective of developing and flight-testing control systems that use neural network technology to optimize aircraft performance under nominal conditions and stabilize the aircraft under failure conditions. This report presents flight-test results for an adaptive controller using stability and control derivative values from an online learning neural network. A dynamic cell structure neural network is used in conjunction with a real-time parameter identification algorithm to estimate aerodynamic stability and control derivative increments to the baseline aerodynamic derivatives in flight. This open-loop flight test set was performed in preparation for a future phase in which the learning neural network and parameter identification algorithm output would provide the flight controller with aerodynamic stability and control derivative updates in near real time. Two flight maneuvers are analyzed: a pitch frequency sweep and an automated flight-test maneuver designed to optimally excite the parameter identification algorithm in all axes. Frequency responses generated from flight data are compared to those obtained from nonlinear simulation runs. Examination of the flight data shows that adding the flight-identified aerodynamic derivative increments into the simulation improved the aircraft's pitch handling qualities.

  19. Scheduled power tracking control of the wind-storage hybrid system based on the reinforcement learning theory

    Science.gov (United States)

    Li, Ze

    2017-09-01

    To address the intermittency and uncertainty of wind power, an energy storage unit and a wind generator are combined into a hybrid system to improve the controllability of the output power. A scheduled power tracking control method is proposed based on reinforcement learning theory and the Q-learning algorithm. In this method, the state space of the environment is formed from two key factors, namely the state of charge of the energy storage and the difference between the actual wind power and the scheduled power; the feasible action is the output power of the energy storage; and a corresponding immediate reward function is designed to reflect how reasonable each control action is. By interacting with the environment and learning from the immediate reward, the optimal control strategy is gradually formed, after which it can be applied to the scheduled power tracking control of the hybrid system. Finally, the rationality and validity of the method are verified through simulation examples.
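
    A minimal sketch of the tabular Q-learning loop described above: the state combines the discretized storage state of charge and the deviation between actual and scheduled wind power, the action is the storage output power, and the reward penalizes the tracking error. The discretization, toy wind model and all constants are assumptions for illustration only.

```python
# Tabular Q-learning sketch for scheduled power tracking of a wind-storage
# hybrid system (toy model; all quantities in per unit).
import numpy as np

rng = np.random.default_rng(0)
SOC_BINS, DEV_BINS = 10, 10
ACTIONS = np.linspace(-1.0, 1.0, 5)            # storage charge/discharge power
Q = np.zeros((SOC_BINS, DEV_BINS, len(ACTIONS)))
alpha, gamma, eps = 0.1, 0.95, 0.1

def discretize(soc, dev):
    s = min(int(soc * SOC_BINS), SOC_BINS - 1)
    d = min(int((dev + 1.0) / 2.0 * DEV_BINS), DEV_BINS - 1)
    return s, d

def wind():                                     # toy fluctuating wind power
    return float(np.clip(0.6 + 0.2 * rng.standard_normal(), 0.0, 1.0))

soc, scheduled = 0.5, 0.6
w = wind()
for step in range(20000):
    s = discretize(soc, np.clip(w - scheduled, -1.0, 1.0))
    a = rng.integers(len(ACTIONS)) if rng.random() < eps else int(Q[s].argmax())
    reward = -abs(w + ACTIONS[a] - scheduled)   # penalize tracking error
    soc = float(np.clip(soc - 0.05 * ACTIONS[a], 0.0, 1.0))
    w = wind()
    s_next = discretize(soc, np.clip(w - scheduled, -1.0, 1.0))
    Q[s][a] += alpha * (reward + gamma * Q[s_next].max() - Q[s][a])
```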

  20. Group performance and group learning at dynamic system control tasks

    International Nuclear Information System (INIS)

    Drewes, Sylvana

    2013-01-01

    Proper management of dynamic systems (e.g. cooling systems of nuclear power plants or production and warehousing) is important to ensure public safety and economic success. So far, research has provided broad evidence for systematic shortcomings in individuals' control performance of dynamic systems. This research aims to investigate whether groups manifest synergy (Larson, 2010) and outperform individuals and if so, what processes lead to these performance advantages. In three experiments - including simulations of a nuclear power plant and a business setting - I compare the control performance of three-person-groups to the average individual performance and to nominal groups (N = 105 groups per experiment). The nominal group condition captures the statistical advantage of aggregated group judgements not due to social interaction. First, results show a superior performance of groups compared to individuals. Second, a meta-analysis across all three experiments shows interaction-based process gains in dynamic control tasks: Interacting groups outperform the average individual performance as well as the nominal group performance. Third, group interaction leads to stable individual improvements of group members that exceed practice effects. In sum, these results provide the first unequivocal evidence for interaction-based performance gains of groups in dynamic control tasks and imply that employers should rely on groups to provide opportunities for individual learning and to foster dynamic system control at its best.

  1. The information system of learning quality control in higher education institutions: achievements and problems of European universities

    Directory of Open Access Journals (Sweden)

    Orekhova Elena

    2016-01-01

    Full Text Available The article deals with the main trends in the development of the system of learning quality control connected with the European integration of higher education and the democratization of education. The authors analyze the state of information systems of learning quality control existing in European higher education and identify their strong and weak points. The authors show that in the learning process universities actively use innovative analytic methods as well as modern means of collecting, storing and transferring information that ensure the successful management of such a complex object as the university of the 21st century.

  2. An open-closed-loop iterative learning control approach for nonlinear switched systems with application to freeway traffic control

    Science.gov (United States)

    Sun, Shu-Ting; Li, Xiao-Dong; Zhong, Ren-Xin

    2017-10-01

    For nonlinear switched discrete-time systems with input constraints, this paper presents an open-closed-loop iterative learning control (ILC) approach that includes a feedforward ILC part and a feedback control part. Under a given switching rule, mathematical induction is used to prove the convergence of the ILC tracking error in each subsystem. It is demonstrated that the convergence of the ILC tracking error depends on the feedforward control gain, while the feedback control can speed up the convergence process of the ILC through a suitable selection of the feedback control gain. A switched freeway traffic system is used to illustrate the effectiveness of the proposed ILC law.
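
    A minimal sketch of the open-closed-loop idea for a scalar plant: within each trial a feedback term acts on the current tracking error, and between trials a P-type feedforward update uses the stored error of the previous trial. The toy plant, gains and reference are illustrative assumptions, not the switched freeway model of the paper.

```python
# Open-closed-loop iterative learning control sketch for a scalar plant.
import numpy as np

N = 50
ref = np.sin(np.linspace(0, 2 * np.pi, N))   # repeated reference trajectory
L_ff, K_fb = 0.8, 0.6                        # iteration-domain and feedback gains
u_ff = np.zeros(N)                           # learned feedforward input

for trial in range(30):
    x, e = 0.0, np.zeros(N)
    for k in range(N):
        u = u_ff[k] + K_fb * (ref[k] - x)    # closed-loop part: feedback on current error
        x = 0.9 * x + 0.5 * np.tanh(u)       # mildly nonlinear plant step
        e[k] = ref[k] - x                    # tracking error after applying u[k]
    u_ff += L_ff * e                         # open-loop part: P-type update between trials
    print(trial, float(np.abs(e).max()))     # error shrinks over iterations
```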

  3. Lessons learned in digital upgrade projects digital control system implementation at US nuclear power stations

    International Nuclear Information System (INIS)

    Kelley, S.; Bolian, T. W.

    2006-01-01

    AREVA NP has gained significant experience during the past five years in digital upgrades at operating nuclear power stations in the US. Plants are seeking modernization with digital technology to address obsolescence, spare parts availability, vendor support, increasing age-related failures and diminished reliability. New systems offer improved reliability and functionality, and decreased maintenance requirements. Significant lessons learned have been identified relating to the areas of licensing, equipment qualification, software quality assurance and other topics specific to digital controls. Digital control systems have been installed in non safety-related control applications at many utilities within the last 15 years. There have also been a few replacements of small safety-related systems with digital technology. Digital control systems are proving to be reliable, accurate, and easy to maintain. Digital technology is gaining acceptance and momentum with both utilities and regulatory agencies based upon the successes of these installations. Also, new plants are being designed with integrated digital control systems. To support plant life extension and address obsolescence of critical components, utilities are beginning to install digital technology for primary safety-system replacement. AREVA NP analyzed operating experience and lessons learned from its own digital upgrade projects as well as industry-wide experience to identify key issues that should be considered when implementing digital controls in nuclear power stations

  4. Sensitivity-based self-learning fuzzy logic control for a servo system

    NARCIS (Netherlands)

    Balenovic, M.

    1998-01-01

    Describes an experimental verification of a self-learning fuzzy logic controller (SLFLC). The SLFLC contains a learning algorithm that utilizes a second-order reference model and a sensitivity model related to the fuzzy controller parameters. The effectiveness of the proposed controller has been demonstrated experimentally on a servo system.

  5. Feedback error learning controller for functional electrical stimulation assistance in a hybrid robotic system for reaching rehabilitation

    Directory of Open Access Journals (Sweden)

    Francisco Resquín

    2016-07-01

    Full Text Available Hybrid robotic systems represent a novel research field, where functional electrical stimulation (FES) is combined with a robotic device for rehabilitation of motor impairment. Under this approach, the design of robust FES controllers still remains an open challenge. In this work, we aimed at developing a learning FES controller to assist in the performance of reaching movements in a simple hybrid robotic system setting. We implemented a Feedback Error Learning (FEL) control strategy consisting of a feedback PID controller and a feedforward controller based on a neural network. A passive exoskeleton complemented the FES controller by compensating the effects of gravity. We carried out experiments with healthy subjects to validate the performance of the system. Results show that the FEL control strategy is able to adjust the FES intensity to track the desired trajectory accurately without the need of a previous mathematical model.
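
    A minimal sketch of the feedback error learning idea: a fixed PID controller acts on the tracking error while a simple feedforward model (a linear model on hand-picked features, standing in for the neural network) is trained online using the feedback command itself as the teaching signal, so the feedforward gradually takes over the control effort. The toy second-order plant, the features and all gains are illustrative assumptions, not the FES setup of the paper.

```python
# Feedback error learning (FEL) sketch: the feedback (PID) command is used as
# the teaching signal for an online-trained feedforward model of the inverse
# dynamics. Plant, features and gains are toy assumptions.
import numpy as np

dt, N = 0.01, 500
t = np.arange(N) * dt
ref = 0.5 * (1 - np.cos(2 * np.pi * t))      # desired reaching-like trajectory
dref = np.gradient(ref, dt)
ddref = np.gradient(dref, dt)
Kp, Ki, Kd, eta = 8.0, 2.0, 0.4, 0.05        # PID gains and learning rate
w = np.zeros(3)                              # feedforward weights (stand-in for the NN)

for trial in range(20):
    x, v, integ, e_prev, err = 0.0, 0.0, 0.0, 0.0, np.zeros(N)
    for k in range(N):
        e = ref[k] - x
        integ += e * dt
        u_fb = Kp * e + Ki * integ + Kd * (e - e_prev) / dt
        phi = np.array([ddref[k], dref[k], ref[k]])      # features of the desired motion
        u = w @ phi + u_fb                               # feedforward + feedback
        v += dt * (u - 1.5 * v) / 0.5                    # toy mass-damper plant
        x += dt * v
        w += eta * u_fb * phi * dt / (1.0 + phi @ phi)   # FEL: feedback output trains feedforward
        e_prev, err[k] = e, e
    print(trial, float(np.abs(err).mean()))              # tracking error shrinks over trials
```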

  6. A fuzzy controller with a robust learning function

    International Nuclear Information System (INIS)

    Tanji, Jun-ichi; Kinoshita, Mitsuo

    1987-01-01

    A self-organizing fuzzy controller is able to use linguistic decision rules of control strategy and has a strong adaptive property by virtue of its rule learning function. While the simple linguistic description of the learning algorithm first introduced by Procyk et al. has much flexibility for applications to a wide range of different processes, its detailed formulation, in particular with regard to control stability and learning process convergence, is not clear. In this paper, we describe the formulation of an analytical basis for a self-organizing fuzzy controller by using a method of model reference adaptive control systems (MRACS) for which stability in the adaptive loop is theoretically proven. A detailed formulation is given of performance evaluation and rule modification in the rule learning process of the controller. Furthermore, an improved learning algorithm using an adaptive rule is proposed. An adaptive rule gives a modification coefficient for a rule change by estimating the effect of disturbance occurrence on the performance evaluation. The effect of introducing an adaptive rule to improve learning convergence is described using a simple iterative formulation. Simulation tests are presented for an application of the proposed self-organizing fuzzy controller to the pressure control system in a Boiling Water Reactor (BWR) plant. The test results confirm that the improved learning algorithm has strong convergence properties, even in a very disturbed environment. (author)

  7. Predictive Variable Gain Iterative Learning Control for PMSM

    Directory of Open Access Journals (Sweden)

    Huimin Xu

    2015-01-01

    Full Text Available A predictive variable gain strategy for iterative learning control (ILC) is introduced. Predictive variable gain iterative learning control is constructed to improve the performance of trajectory tracking. A scheme based on predictive variable gain iterative learning control for eliminating undesirable vibrations of a PMSM system is proposed. The basic idea is that undesirable vibrations of the PMSM system are eliminated from two aspects, the iteration domain and the time domain. The predictive method is utilized to determine the learning gain in the ILC algorithm. The contraction mapping principle is used to prove the convergence of the algorithm. Simulation results demonstrate that the predictive variable gain is superior to a constant gain and to other variable gains.

  8. Combining Correlation-Based and Reward-Based Learning in Neural Control for Policy Improvement

    DEFF Research Database (Denmark)

    Manoonpong, Poramate; Kolodziejski, Christoph; Wörgötter, Florentin

    2013-01-01

    Classical conditioning (conventionally modeled as correlation-based learning) and operant conditioning (conventionally modeled as reinforcement learning or reward-based learning) have been found in biological systems. Evidence shows that these two mechanisms strongly involve learning about associations. Based on these biological findings, we propose a new learning model to achieve successful control policies for artificial systems. This model combines correlation-based learning using input correlation learning (ICO learning) and reward-based learning using continuous actor–critic reinforcement learning (RL), thereby working as a dual learner system. The model performance is evaluated by simulations of a cart-pole system as a dynamic motion control problem and a mobile robot system as a goal-directed behavior control problem. Results show that the model can strongly improve pole balancing control.

  9. Self-learning fuzzy logic controllers based on reinforcement

    International Nuclear Information System (INIS)

    Wang, Z.; Shao, S.; Ding, J.

    1996-01-01

    This paper proposes a new method for learning and tuning Fuzzy Logic Controllers. The self-learning scheme in this paper is composed of Bucket-Brigade and Genetic Algorithm. The proposed method is tested on the cart-pole system. Simulation results show that our approach has good learning and control performance

  10. Adaptive Trajectory Tracking Control using Reinforcement Learning for Quadrotor

    Directory of Open Access Journals (Sweden)

    Wenjie Lou

    2016-02-01

    Full Text Available Inaccurate system parameters and unpredicted external disturbances affect the performance of non-linear controllers. In this paper, a new adaptive control algorithm within the reinforcement learning framework is proposed to stabilize a quadrotor helicopter. Based on a command-filtered non-linear control algorithm, adaptive elements are added and learned by policy-search methods. To predict the inaccurate system parameters, a new kernel-based regression learning method is provided. In addition, Policy learning by Weighting Exploration with the Returns (PoWER) and Return Weighted Regression (RWR) are utilized to learn the appropriate parameters for the adaptive elements in order to cancel the effect of external disturbance. Furthermore, numerical simulations under several conditions are performed, and the ability of adaptive trajectory-tracking control with reinforcement learning is demonstrated.
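
    A minimal sketch of the return-weighted policy-search idea used for the adaptive elements: perturb the policy parameters, roll out episodes, and update the mean parameters with a return-weighted average of the perturbations (a PoWER/RWR-style update). The toy altitude-hold task and all constants are illustrative assumptions, not the quadrotor model of the paper.

```python
# Return-weighted policy search sketch (PoWER/RWR flavour) on a toy
# 1-D altitude-hold task with an unknown gravity offset.
import numpy as np

rng = np.random.default_rng(6)

def episode_return(theta, steps=100, dt=0.02):
    """Reward for staying near z = 1 with small control effort."""
    z, vz, ret = 0.0, 0.0, 0.0
    for _ in range(steps):
        u = theta[0] * (1.0 - z) - theta[1] * vz + theta[2]   # PD + bias policy
        vz += dt * (u - 9.81)                                  # gravity is not known to the policy
        z += dt * vz
        ret += np.exp(-5.0 * (z - 1.0) ** 2) - 1e-3 * u ** 2
    return ret

theta, sigma, n_samples = np.array([5.0, 2.0, 5.0]), 0.5, 20
for it in range(100):
    eps = sigma * rng.standard_normal((n_samples, 3))          # parameter exploration
    R = np.array([episode_return(theta + e) for e in eps])
    w = np.exp((R - R.max()) / (R.std() + 1e-8))               # return-based weights
    theta = theta + (w[:, None] * eps).sum(0) / w.sum()        # weighted parameter update
print(theta, episode_return(theta))
```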

  11. Application of parsimonious learning feedforward control to mechatronic systems

    NARCIS (Netherlands)

    de Vries, Theodorus J.A.; Velthuis, W.J.R.; Idema, L.J.

    2001-01-01

    For motion control, learning feedforward controllers (LFFCs) should be applied when accurate process modelling is difficult. When controlling such processes with LFFCs in the form of multidimensional B-spline networks, large network sizes and a poor generalising ability may result, a problem known as the curse of dimensionality.

  12. Optimal and Autonomous Control Using Reinforcement Learning: A Survey.

    Science.gov (United States)

    Kiumarsi, Bahare; Vamvoudakis, Kyriakos G; Modares, Hamidreza; Lewis, Frank L

    2018-06-01

    This paper reviews the current state of the art on reinforcement learning (RL)-based feedback control solutions to optimal regulation and tracking of single and multiagent systems. Existing RL solutions to these optimal control problems, as well as to graphical games, are reviewed. RL methods learn the solution to optimal control and game problems online, using measured data along the system trajectories. We discuss Q-learning and the integral RL algorithm as core algorithms for discrete-time (DT) and continuous-time (CT) systems, respectively. Moreover, we discuss a new direction of off-policy RL for both CT and DT systems. Finally, we review several applications.

  13. Gaussian Processes for Data-Efficient Learning in Robotics and Control.

    Science.gov (United States)

    Deisenroth, Marc Peter; Fox, Dieter; Rasmussen, Carl Edward

    2015-02-01

    Autonomous learning has been a promising direction in control and robotics for more than a decade, since data-driven learning reduces the amount of engineering knowledge that is otherwise required. However, autonomous reinforcement learning (RL) approaches typically require many interactions with the system to learn controllers, which is a practical limitation in real systems, such as robots, where many interactions can be impractical and time consuming. To address this problem, current learning approaches typically require task-specific knowledge in the form of expert demonstrations, realistic simulators, pre-shaped policies, or specific knowledge about the underlying dynamics. In this paper, we follow a different approach and speed up learning by extracting more information from data. In particular, we learn a probabilistic, non-parametric Gaussian process transition model of the system. By explicitly incorporating model uncertainty into long-term planning and controller learning, our approach reduces the effects of model errors, a key problem in model-based learning. Compared to state-of-the-art RL, our model-based policy search method achieves an unprecedented speed of learning. We demonstrate its applicability to autonomous learning in real robot and control tasks.
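
    In the same spirit as the model-based approach described above (though not the authors' actual implementation), the sketch below fits a Gaussian process to (state, action) -> state-change pairs collected from a toy pendulum; the predictive standard deviation provides the model uncertainty that long-term planning would then propagate. The pendulum model and kernel choices are assumptions.

```python
# Learning a probabilistic GP transition model from a handful of interactions.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(1)

def pendulum_step(theta, omega, u, dt=0.05):
    """Toy damped pendulum used as the unknown system."""
    omega = omega + dt * (-9.81 * np.sin(theta) - 0.1 * omega + u)
    return theta + dt * omega, omega

X, Y = [], []
theta, omega = 0.1, 0.0
for _ in range(200):                            # small batch of random interactions
    u = rng.uniform(-2.0, 2.0)
    theta_n, omega_n = pendulum_step(theta, omega, u)
    X.append([theta, omega, u])
    Y.append([theta_n - theta, omega_n - omega])   # learn state differences
    theta, omega = theta_n, omega_n

gp = GaussianProcessRegressor(kernel=RBF(length_scale=[1.0, 1.0, 1.0]) + WhiteKernel(1e-4),
                              normalize_y=True)
gp.fit(np.array(X), np.array(Y))
mean, std = gp.predict(np.array([[0.2, 0.0, 1.0]]), return_std=True)
print("predicted state change:", mean, "uncertainty:", std)
```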

  14. Algebraic and adaptive learning in neural control systems

    Science.gov (United States)

    Ferrari, Silvia

    A systematic approach is developed for designing adaptive and reconfigurable nonlinear control systems that are applicable to plants modeled by ordinary differential equations. The nonlinear controller comprising a network of neural networks is taught using a two-phase learning procedure realized through novel techniques for initialization, on-line training, and adaptive critic design. A critical observation is that the gradients of the functions defined by the neural networks must equal corresponding linear gain matrices at chosen operating points. On-line training is based on a dual heuristic adaptive critic architecture that improves control for large, coupled motions by accounting for actual plant dynamics and nonlinear effects. An action network computes the optimal control law; a critic network predicts the derivative of the cost-to-go with respect to the state. Both networks are algebraically initialized based on prior knowledge of satisfactory pointwise linear controllers and continue to adapt on line during full-scale simulations of the plant. On-line training takes place sequentially over discrete periods of time and involves several numerical procedures. A backpropagating algorithm called Resilient Backpropagation is modified and successfully implemented to meet these objectives, without excessive computational expense. This adaptive controller is as conservative as the linear designs and as effective as a global nonlinear controller. The method is successfully implemented for the full-envelope control of a six-degree-of-freedom aircraft simulation. The results show that the on-line adaptation brings about improved performance with respect to the initialization phase during aircraft maneuvers that involve large-angle and coupled dynamics, and parameter variations.

  15. An iterative learning controller for nonholonomic mobile robots

    International Nuclear Information System (INIS)

    Oriolo, G.; Panzieri, S.; Ulivi, G.

    1998-01-01

    The authors present an iterative learning controller that applies to nonholonomic mobile robots, as well as to other systems that can be put in chained form. The learning algorithm exploits the fact that chained-form systems are linear under piecewise-constant inputs. The proposed control scheme requires the execution of a small number of experiments to drive the system to the desired state in finite time, with nice convergence and robustness properties with respect to modeling inaccuracies as well as disturbances. To avoid the necessity of exactly reinitializing the system at each iteration, the basic method is modified so as to obtain a cyclic controller, by which the system is cyclically steered through an arbitrary sequence of states. As a case study, a carlike mobile robot is considered. Both simulation and experimental results are reported to show the performance of the method.

  16. An e-Learning System with MR for Experiments Involving Circuit Construction to Control a Robot

    Science.gov (United States)

    Takemura, Atsushi

    2016-01-01

    This paper proposes a novel e-Learning system for technological experiments involving electronic circuit-construction and controlling robot motion that are necessary in the field of technology. The proposed system performs automated recognition of circuit images transmitted from individual learners and automatically supplies the learner with…

  17. A Reinforcement Learning Approach to Call Admission Control in HAPS Communication System

    Directory of Open Access Journals (Sweden)

    Ni Shu Yan

    2017-01-01

    Full Text Available The large changes in link capacity and number of users caused by the movement of both the platform and the users in a communication system based on a high altitude platform station (HAPS) result in a high handover dropping rate and reduced resource utilization. In order to solve these problems, this paper proposes an adaptive call admission control strategy based on a reinforcement learning approach. The goal of this strategy is to maximize the long-term gains of the system, with the introduction of cross-layer interaction and service downgrading. In order to admit different traffic types adaptively, the access utility of handover traffic and new-call traffic is designed for the different states of the communication system. Numerical simulation results show that the proposed call admission control strategy can enhance bandwidth resource utilization and the performance of handover traffic.

  18. Learning-based position control of a closed-kinematic chain robot end-effector

    Science.gov (United States)

    Nguyen, Charles C.; Zhou, Zhen-Lei

    1990-01-01

    A trajectory control scheme whose design is based on learning theory, for a six-degree-of-freedom (DOF) robot end-effector built to study robotic assembly of NASA hardware in space, is presented. The control scheme consists of two control systems: the feedback control system and the learning control system. The feedback control system is designed using the concept of linearization about a selected operating point and the method of pole placement, so that the closed-loop linearized system is stabilized. The learning control scheme, consisting of PD-type learning controllers, provides additional inputs to improve the end-effector performance after each trial. Experimental studies performed on a 2 DOF end-effector built at CUA, for three tracking cases, show that the actual trajectories approach the desired trajectories as the number of trials increases. The tracking errors are substantially reduced after only five trials.

  19. neural control system

    International Nuclear Information System (INIS)

    Elshazly, A.A.E.

    2002-01-01

    Automatic power stabilization control is the desired objective of any reactor operation, especially in nuclear power plants. A major problem in this area is the inevitable gap between a real plant and the theory of conventional analysis and synthesis of linear time-invariant systems. In particular, the trajectory tracking control of a nonlinear plant is a class of problems in which the classical linear transfer function methods break down, because no transfer function can represent the system over the entire operating region. There is a considerable amount of research on the model-inverse approach using the feedback linearization technique. However, this method requires a precise plant model to implement the exact linearizing feedback; for nuclear reactor systems, this approach is not an easy task because of the uncertainty in the plant parameters and the unmeasurable state variables. Therefore, an artificial neural network (ANN) is used either in self-tuning control or in improving conventional rule-based expert systems. The main objective of this thesis is to suggest an ANN-based self-learning controller structure. This method is capable of online reinforcement learning and control for a nuclear reactor with a totally unknown dynamics model. Previous research was based on the back-propagation algorithm. The back-propagation (BP), fast back-propagation (FBP), and Levenberg-Marquardt (LM) algorithms are discussed and compared for reinforcement learning. It is found that the LM algorithm is quite superior.

  20. Personalised Learning Object System Based on Self-Regulated Learning Theories

    Directory of Open Access Journals (Sweden)

    Ali Alharbi

    2014-06-01

    Full Text Available Self-regulated learning has become an important construct in education research in the last few years. Self-regulated learning in its simplest form is the learner's ability to monitor and control the learning process. There is increasing research in the literature on how to support students in becoming more self-regulated learners. However, advances in information technology have led to paradigm changes in the design and development of educational content. The concept of learning object instructional technology has emerged as a result of this shift in educational technology paradigms. This paper presents the results of a study that investigated the potential educational effectiveness of a pedagogical framework based on self-regulated learning theories to support the design of learning object systems for computer science students. A prototype learning object system was developed based on contemporary research on self-regulated learning. The system was educationally evaluated in a quasi-experimental study over two semesters in a core programming languages concepts course. The evaluation revealed that a learning object system that takes into consideration contemporary research on self-regulated learning can be an effective learning environment to support computer science education.

  1. Turbine Control System Replacement at NPP NEK; System Specifics, Project Experience and Lessons Learned

    International Nuclear Information System (INIS)

    Mandic, D.; Zilavy, M. J.

    2010-01-01

    The HMI for the BG KFSS consists only of soft panels or monitor graphics (the entire MCB - Main Control Board - and its controls are available as graphic images on workstations), while the HMI for the FG KFSS includes a full-scope replica of the NEK MCR and MCB. The new PDEH system was installed on two KFSS platforms (BG and FG) in October-November 2008; pre-outage and on-line field installation work was performed in the January-March 2009 time frame; the old DEH Mod II was decommissioned and the new plant PDEH system was installed during the outage in April 2009 and tested with the plant on line in May 2009. PDEH system improvements and specifics compared with the old DEH system and with other similar references are presented, and the most interesting project experience and lessons learned are also discussed in the paper. (author)

  2. Toward A Dual-Learning Systems Model of Speech Category Learning

    Directory of Open Access Journals (Sweden)

    Bharath eChandrasekaran

    2014-07-01

    Full Text Available More than two decades of work in vision posits the existence of dual-learning systems of category learning. The reflective system uses working memory to develop and test rules for classifying in an explicit fashion, while the reflexive system operates by implicitly associating perception with actions that lead to reinforcement. Dual-learning systems models hypothesize that in learning natural categories, learners initially use the reflective system and, with practice, transfer control to the reflexive system. The role of reflective and reflexive systems in auditory category learning and more specifically in speech category learning has not been systematically examined. In this article we describe a neurobiologically-constrained dual-learning systems theoretical framework that is currently being developed in speech category learning and review recent applications of this framework. Using behavioral and computational modeling approaches, we provide evidence that speech category learning is predominantly mediated by the reflexive learning system. In one application, we explore the effects of normal aging on non-speech and speech category learning. We find an age related deficit in reflective-optimal but not reflexive-optimal auditory category learning. Prominently, we find a large age-related deficit in speech learning. The computational modeling suggests that older adults are less likely to transition from simple, reflective, uni-dimensional rules to more complex, reflexive, multi-dimensional rules. In a second application we summarize a recent study examining auditory category learning in individuals with elevated depressive symptoms. We find a deficit in reflective-optimal and an enhancement in reflexive-optimal auditory category learning. Interestingly, individuals with elevated depressive symptoms also show an advantage in learning speech categories. We end with a brief summary and description of a number of future directions.

  3. Multi Car Elevator Control by using Learning Automaton

    Science.gov (United States)

    Shiraishi, Kazuaki; Hamagami, Tomoki; Hirata, Hironori

    We study an adaptive control technique for multi car elevators (MCEs) that adopts learning automata (LAs). The MCE is a high-performance, near-future elevator system with multiple shafts and multiple cars. A strong point of the system is that it realizes a large carrying capacity in a small shaft area. However, because its operation is very complicated, realizing efficient MCE control is difficult with top-down approaches. For example, cars "bunching up together" is a typical phenomenon in a simple traffic environment like the MCE. Furthermore, adapting to changing configuration requirements is a serious issue in real elevator service. To resolve these issues, the control system of each car in the MCE must behave autonomously, and the learning automaton, as a solution to this requirement, is well suited to this kind of simple traffic control. First, we assign a stochastic automaton (SA) to each car control system. Each SA then varies its stochastic behavior distribution to adapt to the environment, in which its policy is evaluated using passenger waiting times; this is an LA, which learns the environment autonomously. Using the LA-based control technique, the MCE operation efficiency is evaluated through simulation experiments. Results show that the technique reduces waiting times efficiently, and we confirm that the system can adapt to a dynamically changing environment.
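
    A minimal sketch of a linear reward-inaction learning automaton of the kind each car controller could use: a probability vector over candidate actions is reinforced only when the resulting passenger waiting time is acceptable. The toy environment and the reward threshold are illustrative assumptions.

```python
# Linear reward-inaction (L_R-I) learning automaton sketch.
import numpy as np

rng = np.random.default_rng(2)
n_actions = 4                      # e.g. candidate hall calls a car may serve
p = np.full(n_actions, 1.0 / n_actions)
a_lr = 0.05                        # learning (reward) step size

def waiting_time(action):
    """Toy environment: action 2 tends to give the shortest wait."""
    return abs(action - 2) + rng.exponential(0.5)

for step in range(5000):
    a = rng.choice(n_actions, p=p)
    favourable = waiting_time(a) < 1.5
    if favourable:                                 # L_R-I: update only on reward
        p = p + a_lr * (np.eye(n_actions)[a] - p)
print(p)   # probability mass concentrates on the best action
```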

  4. Learning-Based Adaptive Optimal Tracking Control of Strict-Feedback Nonlinear Systems.

    Science.gov (United States)

    Gao, Weinan; Jiang, Zhong-Ping

    2018-06-01

    This paper proposes a novel data-driven control approach to address the problem of adaptive optimal tracking for a class of nonlinear systems taking the strict-feedback form. Adaptive dynamic programming (ADP) and nonlinear output regulation theories are integrated for the first time to compute an adaptive near-optimal tracker without any a priori knowledge of the system dynamics. Fundamentally different from adaptive optimal stabilization problems, the solution to a Hamilton-Jacobi-Bellman (HJB) equation, not necessarily a positive definite function, cannot be approximated through the existing iterative methods. This paper proposes a novel policy iteration technique for solving positive semidefinite HJB equations with rigorous convergence analysis. A two-phase data-driven learning method is developed and implemented online by ADP. The efficacy of the proposed adaptive optimal tracking control methodology is demonstrated via a Van der Pol oscillator with time-varying exogenous signals.

  5. Controlling the chaotic discrete-Hénon system using a feedforward neural network with an adaptive learning rate

    OpenAIRE

    GÖKCE, Kürşad; UYAROĞLU, Yılmaz

    2013-01-01

    This paper proposes a feedforward neural network-based control scheme to control the chaotic trajectories of a discrete-Hénon map so that they stay within an acceptable distance from the stable fixed point. An adaptive-learning-rate back-propagation algorithm with online training is employed to improve the effectiveness of the proposed method. A simulation study carried out on the discrete-Hénon system verifies the validity of the proposed control system.

  6. Application of machine learning and expert systems to Statistical Process Control (SPC) chart interpretation

    Science.gov (United States)

    Shewhart, Mark

    1991-01-01

    Statistical Process Control (SPC) charts are one of several tools used in quality control. Other tools include flow charts, histograms, cause and effect diagrams, check sheets, Pareto diagrams, graphs, and scatter diagrams. A control chart is simply a graph which indicates process variation over time. The purpose of drawing a control chart is to detect any changes in the process signalled by abnormal points or patterns on the graph. The Artificial Intelligence Support Center (AISC) of the Acquisition Logistics Division has developed a hybrid machine learning expert system prototype which automates the process of constructing and interpreting control charts.
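
    The sketch below illustrates the kind of rule-based chart interpretation such a system automates: given a stream of sample values, it flags points beyond the 3-sigma control limits and two common run/trend patterns. The limits and rules follow standard Shewhart-chart conventions; the data are synthetic.

```python
# Control-chart interpretation sketch: 3-sigma limits plus simple run/trend rules.
import numpy as np

rng = np.random.default_rng(3)
data = np.concatenate([rng.normal(10.0, 1.0, 40), rng.normal(12.5, 1.0, 10)])  # shift after sample 40
center, sigma = data[:40].mean(), data[:40].std(ddof=1)   # baseline estimates
ucl, lcl = center + 3 * sigma, center - 3 * sigma

signals = []
for i, x in enumerate(data):
    if x > ucl or x < lcl:
        signals.append((i, "point beyond 3-sigma limit"))
    if i >= 8 and (np.all(data[i - 8:i + 1] > center) or np.all(data[i - 8:i + 1] < center)):
        signals.append((i, "run of 9 points on one side of center"))
    if i >= 5 and (np.all(np.diff(data[i - 5:i + 1]) > 0) or np.all(np.diff(data[i - 5:i + 1]) < 0)):
        signals.append((i, "trend of 6 steadily increasing or decreasing points"))
print(signals)
```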

  7. Reinforcement learning controller design for affine nonlinear discrete-time systems using online approximators.

    Science.gov (United States)

    Yang, Qinmin; Jagannathan, Sarangapani

    2012-04-01

    In this paper, reinforcement learning state- and output-feedback-based adaptive critic controller designs are proposed by using online approximators (OLAs) for general multi-input multi-output affine unknown nonlinear discrete-time systems in the presence of bounded disturbances. The proposed controller design has two entities: an action network that is designed to produce an optimal signal, and a critic network that evaluates the performance of the action network. The critic estimates the cost-to-go function, which is tuned online using recursive equations derived from heuristic dynamic programming. Here, neural networks (NNs) are used for both the action and critic networks, whereas any OLAs, such as radial basis functions, splines, fuzzy logic, etc., can be utilized. For the output-feedback counterpart, an additional NN is designated as the observer to estimate the unavailable system states, and thus the separation principle is not required. The NN weight tuning laws for the controller schemes are also derived while ensuring uniform ultimate boundedness of the closed-loop system using Lyapunov theory. Finally, the effectiveness of the two controllers is tested in simulation on a pendulum balancing system and a two-link robotic arm system.

  8. Discrete-time online learning control for a class of unknown nonaffine nonlinear systems using reinforcement learning.

    Science.gov (United States)

    Yang, Xiong; Liu, Derong; Wang, Ding; Wei, Qinglai

    2014-07-01

    In this paper, a reinforcement-learning-based direct adaptive control is developed to deliver a desired tracking performance for a class of discrete-time (DT) nonlinear systems with unknown bounded disturbances. We investigate multi-input multi-output unknown nonaffine nonlinear DT systems and employ two neural networks (NNs). By using the Implicit Function Theorem, an action NN is used to generate the control signal; it is also designed to cancel the nonlinearity of the unknown DT systems so that feedback linearization methods can be utilized. On the other hand, a critic NN is applied to estimate the cost function, which satisfies the recursive equations derived from heuristic dynamic programming. The weights of both the action NN and the critic NN are directly updated online instead of being trained offline. By utilizing Lyapunov's direct method, the closed-loop tracking errors and the NN estimated weights are shown to be uniformly ultimately bounded. Two numerical examples are provided to show the effectiveness of the present approach. Copyright © 2014 Elsevier Ltd. All rights reserved.

  9. The Roles of Feedback and Feedforward as Humans Learn to Control Unknown Dynamic Systems.

    Science.gov (United States)

    Zhang, Xingye; Wang, Shaoqian; Hoagg, Jesse B; Seigler, T Michael

    2018-02-01

    We present results from an experiment in which human subjects interact with an unknown dynamic system 40 times during a two-week period. During each interaction, subjects are asked to perform a command-following (i.e., pursuit tracking) task. Each subject's performance at that task improves from the first trial to the last trial. For each trial, we use subsystem identification to estimate each subject's feedforward (or anticipatory) control, feedback (or reactive) control, and feedback time delay. Over the 40 trials, the magnitudes of the identified feedback controllers and the identified feedback time delays do not change significantly. In contrast, the identified feedforward controllers do change significantly. By the last trial, the average identified feedforward controller approximates the inverse of the dynamic system. This observation provides evidence that a fundamental component of human learning is updating the anticipatory control until it models the inverse dynamics.

  10. Towards autonomous neuroprosthetic control using Hebbian reinforcement learning.

    Science.gov (United States)

    Mahmoudi, Babak; Pohlmeyer, Eric A; Prins, Noeline W; Geng, Shijia; Sanchez, Justin C

    2013-12-01

    Our goal was to design an adaptive neuroprosthetic controller that could learn the mapping from neural states to prosthetic actions and automatically adjust adaptation using only a binary evaluative feedback as a measure of desirability/undesirability of performance. Hebbian reinforcement learning (HRL) in a connectionist network was used for the design of the adaptive controller. The method combines the efficiency of supervised learning with the generality of reinforcement learning. The convergence properties of this approach were studied using both closed-loop control simulations and open-loop simulations that used primate neural data from robot-assisted reaching tasks. The HRL controller was able to perform classification and regression tasks using its episodic and sequential learning modes, respectively. In our experiments, the HRL controller quickly achieved convergence to an effective control policy, followed by robust performance. The controller also automatically stopped adapting the parameters after converging to a satisfactory control policy. Additionally, when the input neural vector was reorganized, the controller resumed adaptation to maintain performance. By estimating an evaluative feedback directly from the user, the HRL control algorithm may provide an efficient method for autonomous adaptation of neuroprosthetic systems. This method may enable the user to teach the controller the desired behavior using only a simple feedback signal.

  11. Learning in tele-autonomous systems using Soar

    Science.gov (United States)

    Laird, John E.; Yager, Eric S.; Tuck, Christopher M.; Hucka, Michael

    1989-01-01

    Robo-Soar is a high-level robot arm control system implemented in Soar. Robo-Soar learns to perform simple block manipulation tasks using advice from a human. Following learning, the system is able to perform similar tasks without external guidance. It can also learn to correct its knowledge, using its own problem solving in addition to outside guidance. Robo-Soar corrects its knowledge by accepting advice about relevance of features in its domain, using a unique integration of analytic and empirical learning techniques.

  12. Reinforcement learning for optimal control of low exergy buildings

    International Nuclear Information System (INIS)

    Yang, Lei; Nagy, Zoltan; Goffin, Philippe; Schlueter, Arno

    2015-01-01

    Highlights: • Implementation of reinforcement learning control for LowEx Building systems. • Learning allows adaptation to the local environment without prior knowledge. • Presentation of reinforcement learning control for real-life applications. • Discussion of the applicability for real-life situations. - Abstract: Over a third of anthropogenic greenhouse gas (GHG) emissions stem from cooling and heating buildings, due to their fossil fuel based operation. Low exergy building systems are a promising approach to reduce energy consumption as well as GHG emissions. They consist of renewable energy technologies, such as PV, PV/T and heat pumps. Since careful tuning of parameters is required, a manual setup may result in sub-optimal operation. A model predictive control approach is unnecessarily complex due to the required model identification. Therefore, in this work we present a reinforcement learning control (RLC) approach. The studied building consists of a PV/T array for solar heat and electricity generation, as well as geothermal heat pumps. We present RLC for the PV/T array and for the full building model. Two methods, Tabular Q-learning and Batch Q-learning with Memory Replay, are implemented with real building settings and actual weather conditions in a Matlab/Simulink framework. The performance is evaluated against standard rule-based control (RBC). We investigated different neural network structures and found that some outperformed RBC already during the learning phase. Overall, every RLC strategy for the PV/T array outperformed RBC by over 10% after the third year. Likewise, for the full building, RLC outperforms RBC in terms of meeting the heating demand, maintaining the optimal operation temperature and compensating more effectively for ground heat. This allows the engineering costs associated with the setup of these systems to be reduced, as well as decreasing the return-on-investment period, both of which are necessary to create a sustainable, zero-emission building.

  13. Reinforcement-Learning-Based Robust Controller Design for Continuous-Time Uncertain Nonlinear Systems Subject to Input Constraints.

    Science.gov (United States)

    Liu, Derong; Yang, Xiong; Wang, Ding; Wei, Qinglai

    2015-07-01

    The design of a stabilizing controller for uncertain nonlinear systems with control constraints is a challenging problem. The constrained input, coupled with the inability to identify the uncertainties accurately, motivates the design of stabilizing controllers based on reinforcement-learning (RL) methods. In this paper, a novel RL-based robust adaptive control algorithm is developed for a class of continuous-time uncertain nonlinear systems subject to input constraints. The robust control problem is converted to a constrained optimal control problem by appropriately selecting value functions for the nominal system. Distinct from the typical action-critic dual networks employed in RL, only one critic neural network (NN) is constructed to derive the approximate optimal control. Meanwhile, unlike the initial stabilizing control often indispensable in RL, there is no special requirement imposed on the initial control. By utilizing Lyapunov's direct method, the closed-loop optimal control system and the estimated weights of the critic NN are proved to be uniformly ultimately bounded. In addition, the derived approximate optimal control is verified to guarantee that the uncertain nonlinear system is stable in the sense of uniform ultimate boundedness. Two simulation examples are provided to illustrate the effectiveness and applicability of the present approach.

  14. Fuzzy gain scheduling of velocity PI controller with intelligent learning algorithm for reactor control

    International Nuclear Information System (INIS)

    Kim, Dong Yun; Seong, Poong Hyun

    1996-01-01

    In this study, we propose a fuzzy gain scheduler with an intelligent learning algorithm for reactor control. In the proposed algorithm, the gradient descent method is used to learn the rule bases of the fuzzy algorithm. These rule bases are learned toward minimizing an objective function, which is called a performance cost function. The objective of the fuzzy gain scheduler with intelligent learning is the generation of adequate gains that minimize the error of the system. The condition of every plant generally changes over time; that is, the initial gains obtained through analysis of the system are no longer suitable for the changed plant, and new gains must be set that minimize the error stemming from the change in the plant's condition. In this paper, we apply this strategy to the reactor control of a nuclear power plant (NPP), and the results are compared with those of a simple PI controller with fixed gains. The results show that the proposed algorithm is superior to the simple PI controller.
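
    As a simplified stand-in for the scheme above, the sketch below tunes controller gains by gradient descent on a performance cost evaluated over a closed-loop run; for brevity the PI gains are adjusted directly through a finite-difference gradient rather than through the fuzzy rule-base consequents used in the paper. The plant and the cost are toy assumptions.

```python
# Gradient-descent tuning of PI gains against a quadratic performance cost.
import numpy as np

def run_cost(gains, n=300, dt=0.05):
    """Simulate a unit-step response and accumulate the squared tracking error."""
    kp, ki = gains
    x, integ, cost = 0.0, 0.0, 0.0
    for _ in range(n):
        e = 1.0 - x                      # unit step reference
        integ += e * dt
        u = kp * e + ki * integ
        x += dt * (-0.5 * x + 0.4 * u)   # first-order toy plant
        cost += e * e * dt               # quadratic performance cost
    return cost

gains, lr, h = np.array([1.0, 0.1]), 0.1, 1e-4
for it in range(100):
    grad = np.array([(run_cost(gains + h * d) - run_cost(gains - h * d)) / (2 * h)
                     for d in np.eye(2)])
    gains -= lr * grad                   # gradient-descent update of the gains
print(gains, run_cost(gains))
```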

  15. Closed-Loop and Robust Control of Quantum Systems

    Directory of Open Access Journals (Sweden)

    Chunlin Chen

    2013-01-01

    Full Text Available For most practical quantum control systems, it is important and difficult to attain robustness and reliability due to unavoidable uncertainties in the system dynamics or models. Three kinds of typical approaches (e.g., closed-loop learning control, feedback control, and robust control) have been proved to be effective to solve these problems. This work presents a self-contained survey on the closed-loop and robust control of quantum systems, as well as a brief introduction to a selection of basic theories and methods in this research area, to provide interested readers with a general idea for further studies. In the area of closed-loop learning control of quantum systems, we survey and introduce such learning control methods as gradient-based methods, genetic algorithms (GA), and reinforcement learning (RL) methods from a unified point of view of exploring the quantum control landscapes. For the feedback control approach, the paper surveys three control strategies including Lyapunov control, measurement-based control, and coherent-feedback control. Then such topics in the field of quantum robust control as H∞ control, sliding mode control, quantum risk-sensitive control, and quantum ensemble control are reviewed. The paper concludes with a perspective of future research directions that are likely to attract more attention.

  16. Closed-loop and robust control of quantum systems.

    Science.gov (United States)

    Chen, Chunlin; Wang, Lin-Cheng; Wang, Yuanlong

    2013-01-01

    For most practical quantum control systems, it is important and difficult to attain robustness and reliability due to unavoidable uncertainties in the system dynamics or models. Three kinds of typical approaches (e.g., closed-loop learning control, feedback control, and robust control) have been proved to be effective to solve these problems. This work presents a self-contained survey on the closed-loop and robust control of quantum systems, as well as a brief introduction to a selection of basic theories and methods in this research area, to provide interested readers with a general idea for further studies. In the area of closed-loop learning control of quantum systems, we survey and introduce such learning control methods as gradient-based methods, genetic algorithms (GA), and reinforcement learning (RL) methods from a unified point of view of exploring the quantum control landscapes. For the feedback control approach, the paper surveys three control strategies including Lyapunov control, measurement-based control, and coherent-feedback control. Then such topics in the field of quantum robust control as H(∞) control, sliding mode control, quantum risk-sensitive control, and quantum ensemble control are reviewed. The paper concludes with a perspective of future research directions that are likely to attract more attention.

  17. Importance Of Quality Control in Reducing System Risk, a Lesson Learned From The Shuttle and a Recommendation for Future Launch Vehicles

    Science.gov (United States)

    Safie, Fayssal M.; Messer, Bradley P.

    2006-01-01

    This paper presents lessons learned from the Space Shuttle return-to-flight experience and the importance of these lessons in the development of the new NASA Crew Launch Vehicle (CLV). Specifically, the paper discusses the relationship between process control and system risk, and the importance of process control in improving space vehicle flight safety. It uses the External Tank (ET) Thermal Protection System (TPS) experience and the lessons learned from the redesign and process enhancement activities performed in preparation for Return to Flight after the Columbia accident. The paper also discusses, in some detail, the probabilistic, engineering-physics-based risk assessment performed by the Shuttle program to evaluate the impact of TPS failure on system risk, and the application of the methodology to the CLV.

  18. Learning Companion Systems, Social Learning Systems, and the Global Social Learning Club.

    Science.gov (United States)

    Chan, Tak-Wai

    1996-01-01

    Describes the development of learning companion systems and their contributions to the class of social learning systems that integrate artificial intelligence agents and use machine learning to tutor and interact with students. Outlines initial social learning projects, their programming languages, and their weaknesses. Future improvements will include…

  19. A statistical learning strategy for closed-loop control of fluid flows

    Science.gov (United States)

    Guéniat, Florimond; Mathelin, Lionel; Hussaini, M. Yousuff

    2016-12-01

    This work discusses a closed-loop control strategy for complex systems utilizing scarce and streaming data. A discrete embedding space is first built using hash functions applied to the sensor measurements from which a Markov process model is derived, approximating the complex system's dynamics. A control strategy is then learned using reinforcement learning once rewards relevant with respect to the control objective are identified. This method is designed for experimental configurations, requiring no computations nor prior knowledge of the system, and enjoys intrinsic robustness. It is illustrated on two systems: the control of the transitions of a Lorenz'63 dynamical system, and the control of the drag of a cylinder flow. The method is shown to perform well.
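
    A minimal sketch of the first stage described above: streaming sensor measurements are hashed into a small discrete embedding space and a Markov transition matrix between the resulting symbols is estimated from the stream. The hash (a coarse quantisation of two sensor channels) and the toy signal are illustrative assumptions, not the flow sensors of the paper.

```python
# Hash-based discretisation of streaming sensor data plus a Markov model estimate.
import numpy as np

rng = np.random.default_rng(4)

def sensor_hash(measurement, n_bins=4, low=-2.0, high=2.0):
    """Map a two-channel sensor vector to a single discrete symbol."""
    bins = np.clip(((measurement - low) / (high - low) * n_bins).astype(int), 0, n_bins - 1)
    return int(bins[0] * n_bins + bins[1])

n_symbols = 16
counts = np.ones((n_symbols, n_symbols))        # Laplace-smoothed transition counts

theta = 0.0
prev = sensor_hash(np.array([np.sin(theta), np.cos(theta)]))
for _ in range(5000):
    theta += 0.05 + 0.02 * rng.standard_normal()   # toy oscillatory measurement stream
    sym = sensor_hash(np.array([np.sin(theta), np.cos(theta)]))
    counts[prev, sym] += 1
    prev = sym

P = counts / counts.sum(axis=1, keepdims=True)  # estimated Markov transition matrix
print(P.round(2))
```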

  20. Learned parametrized dynamic movement primitives with shared synergies for controlling robotic and musculoskeletal systems

    Directory of Open Access Journals (Sweden)

    Elmar eRückert

    2013-10-01

    Full Text Available A salient feature of human motor skill learning is the ability to exploit similarities across related tasks. In biological motor control, it has been hypothesized that muscle synergies, coherent activations of groups of muscles, allow for exploiting shared knowledge. Recent studies have shown that a rich set of complex motor skills can be generated by a combination of a small number of muscle synergies. In robotics, dynamic movement primitives are commonly used for motor skill learning. This machine learning approach implements a stable attractor system that facilitates learning and can be used in high-dimensional continuous spaces. However, it does not allow for reusing shared knowledge, i.e., for each task an individual set of parameters has to be learned. We propose a novel movement primitive representation that employs parametrized basis functions, which combines the benefits of muscle synergies and dynamic movement primitives. For each task a superposition of synergies modulates a stable attractor system. This approach leads to a compact representation of multiple motor skills and at the same time enables efficient learning in high-dimensional continuous systems. The movement representation supports discrete and rhythmic movements and in particular includes the dynamic movement primitive approach as a special case. We demonstrate the feasibility of the movement representation in three simulated multi-task learning scenarios. First, the characteristics of the proposed representation are illustrated in a point-mass task. Second, in complex humanoid walking experiments, multiple walking patterns with different step heights are learned robustly and efficiently. Finally, in a multi-directional reaching task simulated with a musculoskeletal model of the human arm, we show how the proposed movement primitives can be used to learn appropriate muscle excitation patterns and to generalize effectively to new reaching skills.
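
    A minimal sketch of the underlying idea, assuming a standard Ijspeert-style discrete dynamic movement primitive and treating the proposed representation simply as forcing-term weights formed by a task-specific superposition of shared "synergy" weight vectors; the numbers of basis functions, synergies, and all gains are illustrative assumptions.

```python
# Sketch: a discrete dynamic movement primitive whose forcing-term weights are a
# task-specific superposition of shared "synergy" weight vectors. Basis-function
# counts, gains, and synergy contents are illustrative assumptions.
import numpy as np

n_basis, n_syn = 10, 3
c = np.exp(-np.linspace(0, 1, n_basis) * 3.0)        # basis centres in phase x
h = np.full(n_basis, 50.0)                           # basis widths
synergies = np.random.default_rng(1).standard_normal((n_syn, n_basis))
alpha_z, beta_z, alpha_x, tau = 25.0, 6.25, 3.0, 1.0

def rollout(mix, g=1.0, y0=0.0, dt=0.002, T=1.0):
    """Integrate the DMP; `mix` combines the shared synergies into one weight set."""
    w = mix @ synergies                              # task-specific superposition
    y, z, x = y0, 0.0, 1.0
    traj = []
    for _ in range(int(T / dt)):
        psi = np.exp(-h * (x - c) ** 2)
        f = (psi @ w) / (psi.sum() + 1e-10) * x * (g - y0)
        z += dt / tau * (alpha_z * (beta_z * (g - y) - z) + f)
        y += dt / tau * z
        x += dt / tau * (-alpha_x * x)               # canonical phase system
        traj.append(y)
    return np.array(traj)

trajectory = rollout(mix=np.array([0.5, -0.2, 1.0]))  # one motor skill = one mixture
```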

  1. Changing pulse-shape basis for molecular learning control

    International Nuclear Information System (INIS)

    Cardoza, David; Langhojer, Florian; Trallero-Herrero, Carlos; Weinacht, Thomas; Monti, Oliver L.A.

    2004-01-01

    We interpret the results of a molecular fragmentation learning control experiment. We show that, in the case of a system where control can be related to the structure of the optimal pulse matching the vibrational dynamics of the molecule, a simple change of the pulse-shape basis in which the learning algorithm performs its search can reduce the dimensionality of the search space to one or two degrees of freedom.

  2. Request Stream Control for the Access to Broadband Multimedia Educational Resources in the Distance Learning System

    Directory of Open Access Journals (Sweden)

    Irina Pavlovna Bolodurina

    2013-10-01

    Full Text Available This article presents a model of a queuing system for broadband multimedia educational resources, as well as a model of access to a hybrid cloud storage system. These models are used to enhance the efficiency of computing resources in a distance learning system. An additional OpenStack control module has been developed to distribute request streams and balance the load between cloud nodes.

  3. A New Learning Control System for Basketball Free Throws Based on Real Time Video Image Processing and Biofeedback

    Directory of Open Access Journals (Sweden)

    R. Sarang

    2018-02-01

    Full Text Available Shooting free throws plays an important role in basketball. The major problem in performing a correct free throw seems to be inappropriate training. Training is performed offline and is often not persistent. The aim of this paper is to consciously modify and control the free throw using biofeedback. Elbow and shoulder dynamics are calculated by an image processing technique equipped with a video image acquisition system. The proposed setup, named the learning control system, is able to quantify these parameters and feed them back in real time as audio signals, thereby enabling correct learning and conscious control of shooting. Experimental results showed improvements in free throw shooting style, including the shot pocket and locked position. The mean values of the elbow and shoulder angles were controlled at approximately 89° and 26° for the shot pocket, and tuned to approximately 180° and 47°, respectively, for the locked position (close to the desired pattern of the free throw based on valid FIBA references). Not only did the mean values improve, but the standard deviations of these angles also decreased meaningfully, which shows shooting style convergence and uniformity. Also, in training conditions the average percentage of successful free throws increased from about 64% to 87% after using this setup, and in competition conditions the average percentage of successful free throws improved by about 20%, although using the learning control system may not be the only reason for these outcomes. The proposed system is easy to use, inexpensive, portable and applicable in real time.

  4. Open-closed-loop iterative learning control for a class of nonlinear systems with random data dropouts

    Science.gov (United States)

    Cheng, X. Y.; Wang, H. B.; Jia, Y. L.; Dong, YH

    2018-05-01

    In this paper, an open-closed-loop iterative learning control (ILC) algorithm is constructed for a class of nonlinear systems subject to random data dropouts. The ILC algorithm is implemented over a networked control system (NCS), where only the off-line data are transmitted over the network while the real-time data are delivered point to point. Thus, there are two controllers rather than one in the control system, which makes better use of the stored and current information and thereby improves the performance achieved by open-loop control alone. During the transfer of off-line data between the nonlinear plant and the remote controller, data dropouts occur randomly, and the dropout is modeled as a binary Bernoulli random variable. Both measurement and control data dropouts are taken into consideration simultaneously. The convergence criterion is derived based on rigorous analysis. Finally, the simulation results verify the effectiveness of the proposed method.
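
    A minimal numerical sketch of the idea: a P-type open-closed-loop ILC update in which the off-line (previous-iteration) error is transmitted over a lossy channel modeled as a Bernoulli dropout, while the current-iteration error is always available point to point. The scalar plant, learning gains, and dropout probability are illustrative assumptions, not the paper's exact setting.

```python
# Sketch: open-closed-loop P-type ILC with Bernoulli data dropouts on the
# off-line (networked) channel. Plant, gains, and dropout rate are assumptions.
import numpy as np

rng = np.random.default_rng(0)
T, iters = 50, 30
yd = np.sin(np.linspace(0, 2 * np.pi, T))        # desired trajectory
L_ol, L_cl, p_drop = 0.5, 0.3, 0.2               # open/closed-loop gains, dropout rate

def plant(u):
    """Toy nonlinear plant run over one trial."""
    y = np.zeros(T)
    x = 0.0
    for t in range(T):
        x = 0.8 * np.sin(x) + u[t]
        y[t] = x
    return y

u = np.zeros(T)
e_prev = np.zeros(T)
for k in range(iters):
    y = plant(u)
    e = yd - y                                   # current-trial (closed-loop) error
    received = rng.random(T) >= p_drop           # Bernoulli dropouts on off-line data
    e_ol = np.where(received, e_prev, 0.0)       # lost samples contribute nothing
    u = u + L_ol * e_ol + L_cl * e               # open-closed-loop ILC update
    e_prev = e
    print(f"iteration {k:2d}  tracking error = {np.linalg.norm(e):.4f}")
```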

  5. Machine learning control taming nonlinear dynamics and turbulence

    CERN Document Server

    Duriez, Thomas; Noack, Bernd R

    2017-01-01

    This is the first book on a generally applicable control strategy for turbulence and other complex nonlinear systems. The approach of the book employs powerful methods of machine learning for optimal nonlinear control laws. This machine learning control (MLC) is motivated and detailed in Chapters 1 and 2. In Chapter 3, methods of linear control theory are reviewed. In Chapter 4, MLC is shown to reproduce known optimal control laws for linear dynamics (LQR, LQG). In Chapter 5, MLC detects and exploits a strongly nonlinear actuation mechanism of a low-dimensional dynamical system when linear control methods are shown to fail. Experimental control demonstrations from a laminar shear-layer to turbulent boundary-layers are reviewed in Chapter 6, followed by general good practices for experiments in Chapter 7. The book concludes with an outlook on the vast future applications of MLC in Chapter 8. Matlab codes are provided for easy reproducibility of the presented results. The book includes interviews with leading r...

  6. Computer Simulation Tests of Feedback Error Learning Controller with IDM and ISM for Functional Electrical Stimulation in Wrist Joint Control

    Directory of Open Access Journals (Sweden)

    Takashi Watanabe

    2010-01-01

    Full Text Available A feedforward controller would be useful for hybrid Functional Electrical Stimulation (FES) systems using powered orthotic devices. In this paper, a Feedback Error Learning (FEL) controller for FES (FEL-FES controller) was examined using an inverse statics model (ISM) together with an inverse dynamics model (IDM) to realize a feedforward FES controller. For the FES application, the ISM was trained off line using training data obtained by PID control of very slow movements. Computer simulation tests in controlling wrist joint movements showed that the ISM performed properly in the positioning task and that IDM learning was improved by using the ISM, as shown by an increase in the output power ratio of the feedforward controller. The simple ISM learning method and the FEL-FES controller using the ISM would be useful in controlling the musculoskeletal system, which has nonlinear characteristics in response to electrical stimulation, and are therefore expected to be useful in hybrid FES systems using powered orthotic devices.
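
    The feedback-error-learning principle underlying the controller can be sketched in a few lines: a feedforward model is trained online using the feedback controller's output as its teaching signal, so it gradually takes over the control effort. The toy second-order plant, PD gains, and linear feature model below are illustrative assumptions and not the paper's FES musculoskeletal model or its ISM/IDM structure.

```python
# Sketch of the feedback-error-learning (FEL) principle with a PD servo and a
# simple linear feedforward model. Plant, gains, and features are assumptions.
import numpy as np

T, dt = 2000, 0.005
t = np.arange(T) * dt
yd = 0.5 * np.sin(2 * np.pi * 0.5 * t)           # desired wrist angle (rad)
Kp, Kd = 20.0, 1.0                                # feedback (PD) servo gains
w = np.zeros(3)                                   # feedforward weights on [yd, yd_dot, 1]
eta = 0.02                                        # learning rate
y = y_dot = 0.0

for k in range(T - 1):
    yd_dot = (yd[k + 1] - yd[k]) / dt
    phi = np.array([yd[k], yd_dot, 1.0])          # simple features of the reference
    u_ff = w @ phi                                # feedforward (inverse-model) term
    u_fb = Kp * (yd[k] - y) + Kd * (yd_dot - y_dot)
    u = u_ff + u_fb
    w += eta * u_fb * phi                         # FEL: feedback output is the error signal
    # toy second-order plant standing in for the stimulated wrist joint
    y_ddot = -5.0 * y - 0.8 * y_dot + u
    y_dot += dt * y_ddot
    y += dt * y_dot
```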

  7. Amygdala subsystems and control of feeding behavior by learned cues.

    Science.gov (United States)

    Petrovich, Gorica D; Gallagher, Michela

    2003-04-01

    A combination of behavioral studies and a neural systems analysis approach has proven fruitful in defining the role of the amygdala complex and associated circuits in fear conditioning. The evidence presented in this chapter suggests that this approach is also informative in the study of other adaptive functions that involve the amygdala. In this chapter we present a novel model to study learning in an appetitive context. Furthermore, we demonstrate that long-recognized connections between the amygdala and the hypothalamus play a crucial role in allowing learning to modulate feeding behavior. In the first part we describe a behavioral model for motivational learning. In this model a cue that acquires motivational properties through pairings with food delivery when an animal is hungry can override satiety and promote eating in sated rats. Next, we present evidence that a specific amygdala subsystem (basolateral area) is responsible for allowing such learned cues to control eating (override satiety and promote eating in sated rats). We also show that basolateral amygdala mediates these actions via connectivity with the lateral hypothalamus. Lastly, we present evidence that the amygdalohypothalamic system is specific for the control of eating by learned motivational cues, as it does not mediate another function that depends on intact basolateral amygdala, namely, the ability of a conditioned cue to support new learning based on its acquired value. Knowledge about neural systems through which food-associated cues specifically control feeding behavior provides a defined model for the study of learning. In addition, this model may be informative for understanding mechanisms of maladaptive aspects of learned control of eating that contribute to eating disorders and more moderate forms of overeating.

  8. Fuzzy gain scheduling of velocity PI controller with intelligent learning algorithm for reactor control

    International Nuclear Information System (INIS)

    Dong Yun Kim; Poong Hyun Seong

    1997-01-01

    In this research, we propose a fuzzy gain scheduler (FGS) with an intelligent learning algorithm for reactor control. In the proposed algorithm, the gradient descent method is used to generate the rule bases of the fuzzy algorithm by learning. These rule bases are obtained by minimizing an objective function, called a performance cost function. The objective of the FGS with an intelligent learning algorithm is to generate gains that minimize the error of the system. The proposed algorithm can reduce the time and effort required for obtaining the fuzzy rules through the intelligent learning function. It is applied to reactor control of a nuclear power plant (NPP), and the results are compared with those of a conventional PI controller with fixed gains. As a result, it is shown that the proposed algorithm is superior to the conventional PI controller. (author)

  9. Maze learning by a hybrid brain-computer system.

    Science.gov (United States)

    Wu, Zhaohui; Zheng, Nenggan; Zhang, Shaowu; Zheng, Xiaoxiang; Gao, Liqiang; Su, Lijuan

    2016-09-13

    The combination of biological and artificial intelligence is particularly driven by two major strands of research: one involves the control of mechanical, usually prosthetic, devices by conscious biological subjects, whereas the other involves the control of animal behaviour by stimulating nervous systems electrically or optically. However, to our knowledge, no study has demonstrated that spatial learning in a computer-based system can affect the learning and decision-making behaviour of the biological component, namely a rat, when these two types of intelligence are wired together to form a new intelligent entity. Here, we show how rule operations conducted by computing components enable a novel hybrid brain-computer system, the ratbot, to exhibit superior learning abilities in a maze learning task, even when the rat's vision and whisker sensation are blocked. We anticipate that our study will encourage other researchers to investigate combinations of various rule operations and other artificial intelligence algorithms with the learning and memory processes of organic brains to develop more powerful cyborg intelligence systems. Our results potentially have profound implications for a variety of applications in intelligent systems and neural rehabilitation.

  11. Development of Remote Monitoring and a Control System Based on PLC and WebAccess for Learning Mechatronics

    OpenAIRE

    Wen-Jye Shyr; Te-Jen Su; Chia-Ming Lin

    2013-01-01

    This study develops a novel method for learning mechatronics using remote monitoring and control, based on a programmable logic controller (PLC) and WebAccess. A mechatronics module, a Web‐CAM and a PLC were integrated with WebAccess software to organize a remote laboratory. The proposed system enables users to access the Internet for remote monitoring and control of the mechatronics module via a web browser, thereby enhancing work flexibility by enabling personnel to control mechatronics equ...

  12. Development of Remote Monitoring and a Control System Based on PLC and WebAccess for Learning Mechatronics

    Directory of Open Access Journals (Sweden)

    Wen-Jye Shyr

    2013-02-01

    Full Text Available This study develops a novel method for learning mechatronics using remote monitoring and control, based on a programmable logic controller (PLC) and WebAccess. A mechatronics module, a Web-CAM and a PLC were integrated with WebAccess software to organize a remote laboratory. The proposed system enables users to access the Internet for remote monitoring and control of the mechatronics module via a web browser, thereby enhancing work flexibility by enabling personnel to control mechatronics equipment from a remote location. Mechatronics control and long-distance monitoring were realized by establishing communication between the PLC and WebAccess. Analytical results indicate that the proposed system is feasible. The suitability of this system is demonstrated in the Department of Industrial Education and Technology at National Changhua University of Education, Taiwan. Preliminary evaluation of the system was encouraging and has shown that it has achieved success in helping students understand concepts and master remote monitoring and control techniques.

  13. Intelligent control of an IPMC actuated manipulator using emotional learning-based controller

    Science.gov (United States)

    Shariati, Azadeh; Meghdari, Ali; Shariati, Parham

    2008-08-01

    In this research an intelligent emotional learning controller of the Takagi-Sugeno-Kang (TSK) type is applied to govern the dynamics of a novel Ionic-Polymer Metal Composite (IPMC) actuated manipulator. Ionic-polymer metal composites are active actuators that show very large deformation in the presence of a low applied voltage. In this research, a new IPMC actuator is considered and applied to a 2-DOF miniature manipulator designed for miniature tasks. The control system consists of a neurofuzzy controller, whose parameters are adapted according to the emotional learning rules, and a critic whose task is to assess the present situation resulting from the applied control action in terms of satisfactory achievement of the control goals and to provide the emotional signal (the stress). The controller modifies its characteristics so that the critic's stress is decreased.

  14. From brain synapses to systems for learning and memory: Object recognition, spatial navigation, timed conditioning, and movement control.

    Science.gov (United States)

    Grossberg, Stephen

    2015-09-24

    This article provides an overview of neural models of synaptic learning and memory whose expression in adaptive behavior depends critically on the circuits and systems in which the synapses are embedded. It reviews Adaptive Resonance Theory, or ART, models that use excitatory matching and match-based learning to achieve fast category learning and whose learned memories are dynamically stabilized by top-down expectations, attentional focusing, and memory search. ART clarifies mechanistic relationships between consciousness, learning, expectation, attention, resonance, and synchrony. ART models are embedded in ARTSCAN architectures that unify processes of invariant object category learning, recognition, spatial and object attention, predictive remapping, and eye movement search, and that clarify how conscious object vision and recognition may fail during perceptual crowding and parietal neglect. The generality of learned categories depends upon a vigilance process that is regulated by acetylcholine via the nucleus basalis. Vigilance can get stuck at too high or too low values, thereby causing learning problems in autism and medial temporal amnesia. Similar synaptic learning laws support qualitatively different behaviors: Invariant object category learning in the inferotemporal cortex; learning of grid cells and place cells in the entorhinal and hippocampal cortices during spatial navigation; and learning of time cells in the entorhinal-hippocampal system during adaptively timed conditioning, including trace conditioning. Spatial and temporal processes through the medial and lateral entorhinal-hippocampal system seem to be carried out with homologous circuit designs. Variations of a shared laminar neocortical circuit design have modeled 3D vision, speech perception, and cognitive working memory and learning. A complementary kind of inhibitory matching and mismatch learning controls movement. This article is part of a Special Issue entitled SI: Brain and Memory

  15. Optimization and control of a continuous stirred tank fermenter using learning system

    Energy Technology Data Exchange (ETDEWEB)

    Thibault, J [Dept. of Chemical Engineering, Laval Univ., Quebec City, PQ (Canada); Najim, K [CNRS, URA 192, GRECO SARTA, Ecole Nationale Superieure d' Ingenieurs de Genie Chimique, 31 - Toulouse (France)

    1993-05-01

    A variable-structure learning automaton is used for the optimization and control of a continuous stirred tank fermenter. The algorithm requires no modelling of the process. The use of appropriate learning rules enables the automaton to locate the optimum dilution rate in order to maximize an objective cost function. It is shown that a hierarchical structure of automata can adapt to environmental changes and can also efficiently modify the domain of variation of the control variable in order to encompass the optimum value. (orig.)
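
    A rough sketch of one common variable-structure scheme, a linear reward-inaction automaton, choosing among candidate dilution rates; the "fermenter" objective, the binary environment response, and all parameters below are placeholders rather than the authors' process model or learning rules.

```python
# Sketch: a variable-structure learning automaton choosing among candidate
# dilution rates with a linear reward-inaction update. The "fermenter" objective
# below is a placeholder, not a model of the actual process.
import numpy as np

rng = np.random.default_rng(0)
dilution_rates = np.linspace(0.05, 0.5, 10)       # candidate actions (1/h)
p = np.full(len(dilution_rates), 1.0 / len(dilution_rates))  # action probabilities
a_reward = 0.05                                   # learning (reward) parameter

def productivity(D):
    """Placeholder objective with an interior optimum around D = 0.3."""
    return max(0.0, D * (0.6 - D)) + 0.01 * rng.standard_normal()

avg = 0.0
for step in range(2000):
    i = rng.choice(len(p), p=p)
    J = productivity(dilution_rates[i])
    beta = 1.0 if J > avg else 0.0                # favourable response if above average
    avg += 0.05 * (J - avg)                       # running mean of the objective
    if beta:
        p += a_reward * (np.eye(len(p))[i] - p)   # linear reward-inaction update
print("most probable dilution rate:", dilution_rates[np.argmax(p)])
```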

  16. Digital control systems training on a distance learning platform

    Directory of Open Access Journals (Sweden)

    Jan PIECHA

    2009-01-01

    Full Text Available The paper deals with the development of new training technologies based on a distance-learning website, implemented in the laboratory of the Traffic Engineering study branch at the Faculty of Transport. The computing interface discussed allows students to acquire complete knowledge of traffic controllers' architecture and the fundamentals of machine-language programming. These training facilities are available at home, at a remote terminal. The training resources consist of electronic/computer-based training, guidebooks and software units. The laboratory provides students with an interface to simulation packages and programming interfaces that support the web training facilities. Selecting the courseware complexity is one of the most difficult factors in the development of intelligent training units. The dynamically configured application provides the user with an individually set structure of the training resources. The trainee controls the application structure and complexity from the time he starts. To simplify the training process and study activities, several unifications were provided. The introduced ideas need various standardisations, simplifying the development of e-learning units and application control processes [8], [9]. Further development of the training facilities concerns the organisation of a virtual laboratory environment in the laboratories of the Faculty of Transport.

  17. Tunnel Ventilation Control Using Reinforcement Learning Methodology

    Science.gov (United States)

    Chu, Baeksuk; Kim, Dongnam; Hong, Daehie; Park, Jooyoung; Chung, Jin Taek; Kim, Tae-Hyung

    The main purpose of a tunnel ventilation system is to maintain the CO pollutant concentration and VI (visibility index) at adequate levels to provide drivers with a comfortable and safe driving environment. Moreover, it is necessary to minimize the power consumed to operate the ventilation system. To achieve these objectives, the control algorithm used in this research is the reinforcement learning (RL) method. RL is goal-directed learning of a mapping from situations to actions without relying on exemplary supervision or complete models of the environment. The goal of RL is to maximize a reward, which is an evaluative feedback from the environment. In constructing the reward for the tunnel ventilation system, the two objectives listed above are included, that is, maintaining an adequate level of pollutants and minimizing power consumption. An RL algorithm based on an actor-critic architecture and a gradient-following algorithm is applied to the tunnel ventilation system. Simulation results obtained with real data collected from an existing tunnel ventilation system, together with real experimental verification, are provided in this paper. It is confirmed that with the suggested controller, the pollutant level inside the tunnel was well maintained under the allowable limit and the energy consumption performance was improved compared to the conventional control scheme.
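
    The actor-critic structure with a gradient-following update can be sketched on a deliberately crude one-state tunnel model, with a reward that trades off pollutant level against fan power as described above. The dynamics, thresholds, and all coefficients are illustrative assumptions, not the real tunnel data or controller used in the paper.

```python
# Sketch: actor-critic reinforcement learning for tunnel ventilation with a
# reward combining a pollutant penalty and a fan-power cost. All numbers assumed.
import numpy as np

rng = np.random.default_rng(0)
fan_levels = np.array([0.0, 0.5, 1.0])            # normalized jet-fan settings
theta = np.zeros(len(fan_levels))                  # actor: action preferences
v = 0.0                                            # critic: state value (single state)
alpha_a, alpha_c, gamma = 0.05, 0.1, 0.95
co = 50.0                                          # CO concentration (ppm)

def softmax(x):
    z = np.exp(x - np.max(x))
    return z / z.sum()

for step in range(5000):
    pi = softmax(theta)
    a = rng.choice(len(fan_levels), p=pi)
    fan = fan_levels[a]
    # toy dynamics: traffic adds CO, ventilation removes it
    co = max(0.0, co + 5.0 + 2.0 * rng.standard_normal() - 30.0 * fan)
    reward = -(1.0 * max(0.0, co - 25.0) + 10.0 * fan)   # pollutant penalty + power cost
    td_error = reward + gamma * v - v                    # critic evaluation
    v += alpha_c * td_error
    grad_log_pi = np.eye(len(fan_levels))[a] - pi        # gradient-following actor update
    theta += alpha_a * td_error * grad_log_pi
print("learned fan-setting probabilities:", softmax(theta))
```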

  18. Optimal critic learning for robot control in time-varying environments.

    Science.gov (United States)

    Wang, Chen; Li, Yanan; Ge, Shuzhi Sam; Lee, Tong Heng

    2015-10-01

    In this paper, optimal critic learning is developed for robot control in a time-varying environment. The unknown environment is described as a linear system with time-varying parameters, and impedance control is employed for the interaction control. Desired impedance parameters are obtained in the sense of an optimal realization of the composite of trajectory tracking and force regulation. Q-function-based critic learning is developed to determine the optimal impedance parameters without knowledge of the system dynamics. The simulation results are presented and compared with existing methods, and the efficacy of the proposed method is verified.

  19. Fuzzy gain scheduling of velocity PI controller with intelligent learning algorithm for reactor control

    International Nuclear Information System (INIS)

    Kim, Dong Yun

    1997-02-01

    In this research, we propose a fuzzy gain scheduler (FGS) with an intelligent learning algorithm for reactor control. In the proposed algorithm, the gradient descent method is used to generate the rule bases of the fuzzy algorithm by learning. These rule bases are obtained by minimizing an objective function, called a performance cost function. The objective of the FGS with an intelligent learning algorithm is to generate adequate gains that minimize the error of the system. The proposed algorithm can reduce the time and effort required to obtain the fuzzy rules through the intelligent learning function. The evolutionary programming algorithm is modified and adopted as the method for finding the optimal gains, which are used as the initial gains of the FGS with the learning function. It is applied to reactor control of a nuclear power plant (NPP), and the results are compared with those of a conventional PI controller with fixed gains. As a result, it is shown that the proposed algorithm is superior to the conventional PI controller.
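
    The rule-base tuning by gradient descent described above can be sketched roughly as follows: PI gains are interpolated from a small set of fuzzy rules over the tracking error, and the rule consequents are adjusted to reduce a quadratic performance cost. The first-order plant, Gaussian membership functions, finite-difference gradient, and cost below are illustrative assumptions, not the reactor model or the paper's exact learning law.

```python
# Sketch: a fuzzy gain scheduler whose rule consequents (PI gains) are tuned by
# gradient descent on a quadratic performance cost. Plant and memberships assumed.
import numpy as np

centres = np.array([-1.0, 0.0, 1.0])              # fuzzy sets over the tracking error
sigma = 0.7
kp = np.array([2.0, 1.0, 2.0])                    # rule consequents: proportional gains
ki = np.array([0.5, 0.2, 0.5])                    # rule consequents: integral gains

def scheduled_gains(e, kp, ki):
    mu = np.exp(-((e - centres) / sigma) ** 2)    # rule firing strengths
    mu = mu / mu.sum()
    return mu @ kp, mu @ ki                       # weighted (defuzzified) gains

def run_episode(kp, ki, steps=300, dt=0.05):
    """Track a unit step with a toy first-order plant; return the performance cost."""
    y = integ = 0.0
    cost = 0.0
    for _ in range(steps):
        e = 1.0 - y
        Kp, Ki = scheduled_gains(e, kp, ki)
        integ += e * dt
        u = Kp * e + Ki * integ
        y += dt * (-y + u)                        # first-order plant
        cost += e ** 2 * dt
    return cost

eps, lr = 1e-3, 0.5
for it in range(50):                              # gradient descent on the rule bases
    for params in (kp, ki):
        for j in range(len(params)):
            params[j] += eps
            c_plus = run_episode(kp, ki)
            params[j] -= 2 * eps
            c_minus = run_episode(kp, ki)
            params[j] += eps                      # restore, then take a gradient step
            params[j] -= lr * (c_plus - c_minus) / (2 * eps)
print("tuned gains:", kp, ki)
```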

  20. Online Adaptation and Over-Trial Learning in Macaque Visuomotor Control

    Science.gov (United States)

    Braun, Daniel A.; Aertsen, Ad; Paz, Rony; Vaadia, Eilon; Rotter, Stefan; Mehring, Carsten

    2011-01-01

    When faced with unpredictable environments, the human motor system has been shown to develop optimized adaptation strategies that allow for online adaptation during the control process. Such online adaptation is to be contrasted to slower over-trial learning that corresponds to a trial-by-trial update of the movement plan. Here we investigate the interplay of both processes, i.e., online adaptation and over-trial learning, in a visuomotor experiment performed by macaques. We show that simple non-adaptive control schemes fail to perform in this task, but that a previously suggested adaptive optimal feedback control model can explain the observed behavior. We also show that over-trial learning as seen in learning and aftereffect curves can be explained by learning in a radial basis function network. Our results suggest that both the process of over-trial learning and the process of online adaptation are crucial to understand visuomotor learning. PMID:21720526

  1. Research on intelligent algorithm of electro - hydraulic servo control system

    Science.gov (United States)

    Wang, Yannian; Zhao, Yuhui; Liu, Chengtao

    2017-09-01

    In order to handle the nonlinear characteristics of the electro-hydraulic servo control system and the influence of complex disturbances in the industrial field, a fuzzy PID switching learning algorithm is proposed and a fuzzy PID switching learning controller is designed and applied to the electro-hydraulic servo system. The designed controller not only combines the advantages of fuzzy control and PID control, but also introduces a learning algorithm into the switching function, so that learning the three parameters of the switching function avoids instability of the system during switching between the fuzzy control and PID control algorithms. It also makes the switch between these two control algorithms smoother than in conventional fuzzy PID control.

  2. Interorganizational learning systems

    DEFF Research Database (Denmark)

    Hjalager, Anne-Mette

    1999-01-01

    The occurrence of organizational and interorganizational learning processes is not only the result of management endeavors. Industry structures and market-related issues have substantial spill-over effects. The article reviews the literature and establishes a learning model in which elements from organizational environments are included in a systematic conceptual framework. The model allows four types of learning to be identified: P-learning (professional/craft systems learning), T-learning (technology-embedded learning), D-learning (dualistic learning systems, where part of the labor force is excluded from learning), and S-learning (learning in social networks or clans). The situation related to service industries illustrates the typology.

  3. Biomimetic Hybrid Feedback Feedforward Neural-Network Learning Control.

    Science.gov (United States)

    Pan, Yongping; Yu, Haoyong

    2017-06-01

    This brief presents a biomimetic hybrid feedback feedforward neural-network learning control (NNLC) strategy inspired by the human motor learning control mechanism for a class of uncertain nonlinear systems. The control structure includes a proportional-derivative controller acting as a feedback servo machine and a radial-basis-function (RBF) NN acting as a feedforward predictive machine. Under the sufficient constraints on control parameters, the closed-loop system achieves semiglobal practical exponential stability, such that an accurate NN approximation is guaranteed in a local region along recurrent reference trajectories. Compared with the existing NNLC methods, the novelties of the proposed method include: 1) the implementation of an adaptive NN control to guarantee plant states being recurrent is not needed, since recurrent reference signals rather than plant states are utilized as NN inputs, which greatly simplifies the analysis and synthesis of the NNLC and 2) the domain of NN approximation can be determined a priori by the given reference signals, which leads to an easy construction of the RBF-NNs. Simulation results have verified the effectiveness of this approach.

  4. Reinforcement learning for adaptive optimal control of unknown continuous-time nonlinear systems with input constraints

    Science.gov (United States)

    Yang, Xiong; Liu, Derong; Wang, Ding

    2014-03-01

    In this paper, an adaptive reinforcement learning-based solution is developed for the infinite-horizon optimal control problem of constrained-input continuous-time nonlinear systems in the presence of nonlinearities with unknown structures. Two different types of neural networks (NNs) are employed to approximate the Hamilton-Jacobi-Bellman equation. That is, a recurrent NN is constructed to identify the unknown dynamical system, and two feedforward NNs are used as the actor and the critic to approximate the optimal control and the optimal cost, respectively. Based on this framework, the action NN and the critic NN are tuned simultaneously, without the requirement for the knowledge of system drift dynamics. Moreover, by using Lyapunov's direct method, the weights of the action NN and the critic NN are guaranteed to be uniformly ultimately bounded, while keeping the closed-loop system stable. To demonstrate the effectiveness of the present approach, simulation results are illustrated.

  5. ISO learning approximates a solution to the inverse-controller problem in an unsupervised behavioral paradigm.

    Science.gov (United States)

    Porr, Bernd; von Ferber, Christian; Wörgötter, Florentin

    2003-04-01

    In "Isotropic Sequence Order Learning" (pp. 831-864 in this issue), we introduced a novel algorithm for temporal sequence learning (ISO learning). Here, we embed this algorithm into a formal nonevaluating (teacher free) environment, which establishes a sensor-motor feedback. The system is initially guided by a fixed reflex reaction, which has the objective disadvantage that it can react only after a disturbance has occurred. ISO learning eliminates this disadvantage by replacing the reflex-loop reactions with earlier anticipatory actions. In this article, we analytically demonstrate that this process can be understood in terms of control theory, showing that the system learns the inverse controller of its own reflex. Thereby, this system is able to learn a simple form of feedforward motor control.

  6. Iterative learning control with sampled-data feedback for robot manipulators

    Directory of Open Access Journals (Sweden)

    Delchev Kamen

    2014-09-01

    Full Text Available This paper deals with improving the stability of sampled-data (SD) feedback control for nonlinear multiple-input multiple-output time-varying systems, such as robotic manipulators, by incorporating an off-line, model-based nonlinear iterative learning controller. The proposed scheme of nonlinear iterative learning control (NILC) with SD feedback is applicable to a large class of robots because sampled-data feedback is required for model-based feedback controllers, especially for robotic manipulators with complicated dynamics (6 or 7 DOF, or more), while the feedforward control from the off-line iterative learning controller is assumed to be continuous. The robustness and convergence of the proposed NILC law with SD feedback are proven, and the derived sufficient condition for convergence is the same as the condition for an NILC with a continuous feedback control input. Simulation results for the presented NILC algorithm applied to a virtual PUMA 560 robot are given in order to verify the convergence and applicability of the proposed learning controller with the SD feedback controller attached.

  7. Learning to push and learning to move: The adaptive control of contact forces

    Directory of Open Access Journals (Sweden)

    Maura eCasadio

    2015-11-01

    Full Text Available To be successful at manipulating objects, one needs to simultaneously apply well-controlled movements and contact forces. We present a computational theory of how the brain may successfully generate a vast spectrum of interactive behaviors by combining two independent processes. One process is competent to control movements in free space and the other is competent to control contact forces against rigid constraints. Free space and rigid constraints are singularities at the boundaries of a continuum of mechanical impedance. Within this continuum, forces and motions occur in compatible pairs connected by the equations of Newtonian dynamics. The force applied to an object determines its motion. Conversely, inverse dynamics determine a unique force trajectory from a movement trajectory. In this perspective, we describe motor learning as a process leading to the discovery of compatible force/motion pairs. The learned compatible pairs constitute a local representation of the environment's mechanics. Experiments on force field adaptation have already provided us with evidence that the brain is able to predict and compensate for the forces encountered when one is attempting to generate a motion. Here, we tested the theory in the dual case, i.e. when one attempts to apply a desired contact force against a simulated rigid surface. If the surface becomes unexpectedly compliant, the contact point moves as a function of the applied force and this causes the applied force to deviate from its desired value. We found that, through repeated attempts at generating the desired contact force, subjects discovered the unique compatible hand motion. When, after learning, the rigid contact was unexpectedly restored, subjects displayed aftereffects of learning, consistent with the concurrent operation of a motion control system and a force control system. Together, theory and experiment support a new and broader view of modularity in the coordinated control of forces and

  8. Development of an E-learning System for the Endoscopic Diagnosis of Early Gastric Cancer: An International Multicenter Randomized Controlled Trial.

    Science.gov (United States)

    Yao, K; Uedo, N; Muto, M; Ishikawa, H; Cardona, H J; Filho, E C Castro; Pittayanon, R; Olano, C; Yao, F; Parra-Blanco, A; Ho, S H; Avendano, A G; Piscoya, A; Fedorov, E; Bialek, A P; Mitrakov, A; Caro, L; Gonen, C; Dolwani, S; Farca, A; Cuaresma, L F; Bonilla, J J; Kasetsermwiriya, W; Ragunath, K; Kim, S E; Marini, M; Li, H; Cimmino, D G; Piskorz, M M; Iacopini, F; So, J B; Yamazaki, K; Kim, G H; Ang, T L; Milhomem-Cardoso, D M; Waldbaum, C A; Carvajal, W A Piedra; Hayward, C M; Singh, R; Banerjee, R; Anagnostopoulos, G K; Takahashi, Y

    2016-07-01

    In many countries, gastric cancer is not diagnosed until an advanced stage. An Internet-based e-learning system to improve the ability of endoscopists to diagnose gastric cancer at an early stage was developed and evaluated for its effectiveness. The study was designed as a randomized controlled trial. After receiving a pre-test, participants were randomly allocated to either an e-learning or a non-e-learning group. Only those in the e-learning group gained access to the e-learning system. Two months after the pre-test, both groups received a post-test. The primary endpoint was the difference between the two groups in the rate of improvement of their test results. 515 endoscopists from 35 countries were assessed for eligibility, and 332 were enrolled in the study, with 166 allocated to each group. Of these, 151 participants in the e-learning group and 144 in the non-e-learning group were included in the analysis. The mean improvement rate (standard deviation) in the e-learning and non-e-learning groups was 1·24 (0·26) and 1·00 (0·16), respectively, a statistically significant difference in favour of the e-learning group, demonstrating the potential of this e-learning system to expand knowledge and provide invaluable experience regarding the endoscopic detection of early gastric cancer (R000012039). Copyright © 2016 The Authors. Published by Elsevier B.V. All rights reserved.

  9. Emotional Learning Based Intelligent Controllers for Rotor Flux Oriented Control of Induction Motor

    Science.gov (United States)

    Abdollahi, Rohollah; Farhangi, Reza; Yarahmadi, Ali

    2014-08-01

    This paper presents the design and evaluation of a novel approach based on emotional learning to improve the speed control system of rotor-flux-oriented control of an induction motor. The controller includes a neuro-fuzzy system with the speed error and its derivative as inputs. A fuzzy critic evaluates the present situation and provides the emotional signal (stress). The controller modifies its characteristics so that the critic's stress is reduced. The comparative simulation results show that the proposed controller is more robust and hence a suitable replacement for the conventional PI controller in high-performance industrial drive applications.

  10. Learning feedback and feedforward control in a mirror-reversed visual environment.

    Science.gov (United States)

    Kasuga, Shoko; Telgen, Sebastian; Ushiba, Junichi; Nozaki, Daichi; Diedrichsen, Jörn

    2015-10-01

    When we learn a novel task, the motor system needs to acquire both feedforward and feedback control. Currently, little is known about how the learning of these two mechanisms relate to each other. In the present study, we tested whether feedforward and feedback control need to be learned separately, or whether they are learned as common mechanism when a new control policy is acquired. Participants were trained to reach to two lateral and one central target in an environment with mirror (left-right)-reversed visual feedback. One group was allowed to make online movement corrections, whereas the other group only received visual information after the end of the movement. Learning of feedforward control was assessed by measuring the accuracy of the initial movement direction to lateral targets. Feedback control was measured in the responses to sudden visual perturbations of the cursor when reaching to the central target. Although feedforward control improved in both groups, it was significantly better when online corrections were not allowed. In contrast, feedback control only adaptively changed in participants who received online feedback and remained unchanged in the group without online corrections. Our findings suggest that when a new control policy is acquired, feedforward and feedback control are learned separately, and that there may be a trade-off in learning between feedback and feedforward controllers. Copyright © 2015 the American Physiological Society.

  11. Memory and cognitive control circuits in mathematical cognition and learning

    Science.gov (United States)

    Menon, V.

    2018-01-01

    Numerical cognition relies on interactions within and between multiple functional brain systems, including those subserving quantity processing, working memory, declarative memory, and cognitive control. This chapter describes recent advances in our understanding of memory and control circuits in mathematical cognition and learning. The working memory system involves multiple parietal–frontal circuits which create short-term representations that allow manipulation of discrete quantities over several seconds. In contrast, hippocampal–frontal circuits underlying the declarative memory system play an important role in formation of associative memories and binding of new and old information, leading to the formation of long-term memories that allow generalization beyond individual problem attributes. The flow of information across these systems is regulated by flexible cognitive control systems which facilitate the integration and manipulation of quantity and mnemonic information. The implications of recent research for formulating a more comprehensive systems neuroscience view of the neural basis of mathematical learning and knowledge acquisition in both children and adults are discussed. PMID:27339012

  12. Automatic learning rate adjustment for self-supervising autonomous robot control

    Science.gov (United States)

    Arras, Michael K.; Protzel, Peter W.; Palumbo, Daniel L.

    1992-01-01

    Described is an application in which an Artificial Neural Network (ANN) controls the positioning of a robot arm with five degrees of freedom by using visual feedback provided by two cameras. This application and the specific ANN model, local linear maps, are based on the work of Ritter, Martinetz, and Schulten. We extended their approach by generating a filtered, average positioning error from the continuous camera feedback and by coupling the learning rate to this error. When the network learns to position the arm, the positioning error decreases and so does the learning rate, until the system stabilizes at a minimum error and learning rate. This abolishes the need for a predetermined cooling schedule. The automatic cooling procedure results in a closed-loop control with no distinction between a learning phase and a production phase. If the positioning error suddenly starts to increase due to an internal failure, such as a broken joint, or an environmental change, such as a camera moving, the learning rate increases accordingly. Thus, learning is automatically activated and the network adapts to the new condition, after which the error decreases again and learning is 'shut off'. The automatic cooling is therefore a prerequisite for the autonomy and the fault tolerance of the system.
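
    The automatic cooling idea reduces to a small loop: a low-pass-filtered positioning error drives the learning rate, so learning cools as performance improves and reactivates when the error grows. The one-dimensional "inverse model", filter constant, and coupling gain below are illustrative assumptions, not the local-linear-map network or camera setup of the paper.

```python
# Sketch of the automatic learning-rate adjustment: the learning rate is coupled
# to a low-pass-filtered positioning error, so learning "cools" as the error
# shrinks and reactivates if the error suddenly grows (e.g. a camera moves).
import numpy as np

rng = np.random.default_rng(0)
w = 0.0                                           # crude 1-D "inverse model" parameter
true_gain = 2.0                                   # unknown arm/camera gain
err_filt = 1.0                                    # filtered average positioning error
beta, coupling = 0.95, 0.5

for step in range(3000):
    if step == 1500:
        true_gain = 3.0                           # environmental change (camera moved)
    target = rng.uniform(-1.0, 1.0)
    command = w * target                          # network's positioning command
    reached = command / true_gain                 # position actually reached
    err = abs(target - reached)
    err_filt = beta * err_filt + (1 - beta) * err # filtered positioning error
    lr = coupling * err_filt                      # learning rate tied to the error
    w += lr * (target - reached) * target         # simple corrective update
    if step % 500 == 0:
        print(f"step {step:4d}  filtered error {err_filt:.3f}  learning rate {lr:.3f}")
```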

  13. Formation Learning Control of Multiple Autonomous Underwater Vehicles With Heterogeneous Nonlinear Uncertain Dynamics.

    Science.gov (United States)

    Yuan, Chengzhi; Licht, Stephen; He, Haibo

    2017-09-26

    In this paper, a new concept of formation learning control is introduced to the field of formation control of multiple autonomous underwater vehicles (AUVs), which specifies a joint objective of distributed formation tracking control and learning/identification of nonlinear uncertain AUV dynamics. A novel two-layer distributed formation learning control scheme is proposed, which consists of an upper-layer distributed adaptive observer and a lower-layer decentralized deterministic learning controller. This new formation learning control scheme advances existing techniques in three important ways: 1) the multi-AUV system under consideration has heterogeneous nonlinear uncertain dynamics; 2) the formation learning control protocol can be designed and implemented by each local AUV agent in a fully distributed fashion without using any global information; and 3) in addition to the formation control performance, the distributed control protocol is also capable of accurately identifying the AUVs' heterogeneous nonlinear uncertain dynamics and utilizing experiences to improve formation control performance. Extensive simulations have been conducted to demonstrate the effectiveness of the proposed results.

  14. Reinforcement-learning-based output-feedback control of nonstrict nonlinear discrete-time systems with application to engine emission control.

    Science.gov (United States)

    Shih, Peter; Kaul, Brian C; Jagannathan, Sarangapani; Drallmeier, James A

    2009-10-01

    A novel reinforcement-learning-based output adaptive neural network (NN) controller, which is also referred to as the adaptive-critic NN controller, is developed to deliver the desired tracking performance for a class of nonlinear discrete-time systems expressed in nonstrict feedback form in the presence of bounded and unknown disturbances. The adaptive-critic NN controller consists of an observer, a critic, and two action NNs. The observer estimates the states and output, and the two action NNs provide virtual and actual control inputs to the nonlinear discrete-time system. The critic approximates a certain strategic utility function, and the action NNs minimize the strategic utility function and control inputs. All NN weights adapt online toward minimization of a performance index, utilizing the gradient-descent-based rule, in contrast with iteration-based adaptive-critic schemes. Lyapunov functions are used to show the stability of the closed-loop tracking error, weights, and observer estimates. Separation and certainty equivalence principles, persistency of excitation condition, and linearity in the unknown parameter assumption are not needed. Experimental results on a spark ignition (SI) engine operating lean at an equivalence ratio of 0.75 show a significant (25%) reduction in cyclic dispersion in heat release with control, while the average fuel input changes by less than 1% compared with the uncontrolled case. Consequently, oxides of nitrogen (NO(x)) drop by 30%, and unburned hydrocarbons drop by 16% with control. Overall, NO(x)'s are reduced by over 80% compared with stoichiometric levels.

  15. The role of interactive control systems in obtaining internal consistency in the management control system package

    DEFF Research Database (Denmark)

    Toldbod, Thomas; Israelsen, Poul

    2014-01-01

    Companies rely on multiple Management Control Systems to obtain their short- and long-term objectives. When applying a multifaceted perspective on the Management Control System, the concept of internal consistency has been found to be important in obtaining goal congruency in the company. However, when management is aware of this shortcoming, they use the cybernetic controls more interactively to overcome it, whereby the cybernetic controls are also used as a learning platform and not just for performance control.

  16. Intelligent Control System Taking Account of Cooperativeness Using Weighting Information on System Objective

    Directory of Open Access Journals (Sweden)

    Masaki Takahashi

    2004-08-01

    Full Text Available This study considers an intelligent control system that flexibly integrates its components by using weighted information in which the system evaluation is reflected. Such a system evaluates the information flowing through its components and converts it by weighting according to its degree of importance. Integration of components based on the system evaluation enables a system consisting of them to realize varied, flexible and adaptive control. In this study, the intelligent control method is applied to a swing-up and stabilization control problem for a number of cart-and-pendulum systems on a restricted straight guide. To stabilize the pendulum in a restricted environment, each system should realize not only swing-up and stabilization control of the pendulum, but also position control of the cart to avoid collision or deadlock. The experiment using a real apparatus demonstrated that the controller that learns with lightly weighted interaction acquires an egoistic character, the controller that learns with heavily weighted interaction behaves altruistically, and the controller that considers its own cart and the other cart equally becomes cooperative. In other words, these autonomous decentralized controllers can acquire various characters and flexibility for cooperation.

  17. Learning System Center App Controller

    CERN Document Server

    Naeem, Nasir

    2015-01-01

    This book is intended for IT professionals working with Hyper-V, Azure cloud, VMM, and private cloud technologies who are looking for a quick way to get up and running with System Center 2012 R2 App Controller. To get the most out of this book, you should be familiar with Microsoft Hyper-V technology. Knowledge of Virtual Machine Manager is helpful but not mandatory.

  18. Reinforcement learning design-based adaptive tracking control with less learning parameters for nonlinear discrete-time MIMO systems.

    Science.gov (United States)

    Liu, Yan-Jun; Tang, Li; Tong, Shaocheng; Chen, C L Philip; Li, Dong-Juan

    2015-01-01

    Based on the neural network (NN) approximator, an online reinforcement learning algorithm is proposed for a class of affine multiple input and multiple output (MIMO) nonlinear discrete-time systems with unknown functions and disturbances. In the design procedure, two networks are provided where one is an action network to generate an optimal control signal and the other is a critic network to approximate the cost function. An optimal control signal and adaptation laws can be generated based on two NNs. In the previous approaches, the weights of critic and action networks are updated based on the gradient descent rule and the estimates of the optimal weight vectors are directly adjusted in the design. Consequently, compared with the existing results, the main contributions of this paper are: 1) only two parameters need to be adjusted, and thus the number of the adaptation laws is smaller than in the previous results and 2) the updating parameters do not depend on the number of the subsystems for MIMO systems and the tuning rules are replaced by adjusting the norms on optimal weight vectors in both action and critic networks. It is proven that the tracking errors, the adaptation laws, and the control inputs are uniformly bounded using the Lyapunov analysis method. The simulation examples are employed to illustrate the effectiveness of the proposed algorithm.

  19. Status Checking System of Home Appliances using machine learning

    Directory of Open Access Journals (Sweden)

    Yoon Chi-Yurl

    2017-01-01

    Full Text Available This paper describes a status checking system for home appliances based on machine learning, which can be applied to existing household appliances without networking functions. The designed status checking system consists of sensor modules, a wireless communication module, a cloud server, an Android application and a machine learning algorithm. The developed system, applied to a washing machine, analyses and judges four kinds of appliance status: staying, washing, rinsing and spin-drying. The sensor measurements and the transmission of sensing data are handled by an Arduino board, and the data are transmitted to the cloud server in real time. The collected data are parsed by the Android application and fed into the machine learning algorithm to learn the status of the appliance. The machine learning algorithm compares the stored learning data with real-time data collected from the appliance. Our results are expected to contribute as a base technology for designing automatic control systems for household appliances based on machine learning in real time.
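
    The abstract does not name the learning algorithm or the features, so the sketch below uses a k-nearest-neighbour classifier over assumed (vibration, current) features purely as one plausible illustration of classifying the four appliance statuses from sensor data.

```python
# Sketch: classifying washing-machine status from sensor features. The paper does
# not specify the algorithm or features; k-NN over synthetic (vibration, current)
# readings is used here only as an illustration.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
statuses = ["staying", "washing", "rinsing", "spin-drying"]
# assumed cluster centres for (vibration level, current draw) per status
centres = np.array([[0.1, 0.1], [0.6, 0.8], [0.4, 0.5], [0.9, 0.6]])

def sample(n_per_class=50):
    X = np.vstack([c + 0.05 * rng.standard_normal((n_per_class, 2)) for c in centres])
    y = np.repeat(np.arange(len(statuses)), n_per_class)
    return X, y

X_train, y_train = sample()
clf = KNeighborsClassifier(n_neighbors=5).fit(X_train, y_train)

# a new real-time reading arriving from the Arduino/cloud pipeline
reading = np.array([[0.85, 0.62]])
print("estimated status:", statuses[int(clf.predict(reading)[0])])
```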

  20. Evolutionary Acquisition of the Global Command and Control System: Lessons Learned

    National Research Council Canada - National Science Library

    Wallis, Johnathan

    1998-01-01

    This paper summarizes a "lessons learned" study that reviews DoD's approach to managing the GCCS program on behalf of the Assistant Secretary of Defense for Command, Control, Communications, and Intelligence (ASD/C3I...

  1. A self-learning rule base for command following in dynamical systems

    Science.gov (United States)

    Tsai, Wei K.; Lee, Hon-Mun; Parlos, Alexander

    1992-01-01

    In this paper, a self-learning Rule Base for command following in dynamical systems is presented. The learning is accomplished through reinforcement learning using an associative memory called SAM. The main advantage of SAM is that it is a function approximator with explicit storage of training samples. A learning algorithm patterned after dynamic programming is proposed. Two artificially created, unstable dynamical systems are used for testing, and the Rule Base is used to generate a feedback control to improve the command-following ability of the otherwise uncontrolled systems. The numerical results are very encouraging. The controlled systems exhibit a more stable behavior and a better capability to follow reference commands. The rules resulting from the reinforcement learning are explicitly stored and can be modified or augmented by human experts. Due to the overlapping storage scheme of SAM, the stored rules are similar to fuzzy rules.

  2. Interacting Learning Processes during Skill Acquisition: Learning to control with gradually changing system dynamics.

    Science.gov (United States)

    Ludolph, Nicolas; Giese, Martin A; Ilg, Winfried

    2017-10-16

    There is increasing evidence that sensorimotor learning under real-life conditions relies on a composition of several learning processes. Nevertheless, most studies examine learning behaviour in relation to one specific learning mechanism. In this study, we examined the interaction between reward-based skill acquisition and motor adaptation to changes of object dynamics. Thirty healthy subjects, split into two groups, acquired the skill of balancing a pole on a cart in virtual reality. In one group, we gradually increased the gravity, making the task easier in the beginning and more difficult towards the end. In the second group, subjects had to acquire the skill on the maximum, most difficult gravity level. We hypothesized that the gradual increase in gravity during skill acquisition supports learning despite the necessary adjustments to changes in cart-pole dynamics. We found that the gradual group benefits from the slow increment, although overall improvement was interrupted by the changes in gravity and resulting system dynamics, which caused short-term degradations in performance and timing of actions. In conclusion, our results deliver evidence for an interaction of reward-based skill acquisition and motor adaptation processes, which indicates the importance of both processes for the development of optimized skill acquisition schedules.

  3. Filtering sensory information with XCSF: improving learning robustness and robot arm control performance.

    Science.gov (United States)

    Kneissler, Jan; Stalph, Patrick O; Drugowitsch, Jan; Butz, Martin V

    2014-01-01

    It has been shown previously that the control of a robot arm can be efficiently learned using the XCSF learning classifier system, which is a nonlinear regression system based on evolutionary computation. So far, however, the predictive knowledge about how actual motor activity changes the state of the arm system has not been exploited. In this paper, we utilize the forward velocity kinematics knowledge of XCSF to alleviate the negative effect of noisy sensors for successful learning and control. We incorporate Kalman filtering for estimating successive arm positions, iteratively combining sensory readings with XCSF-based predictions of hand position changes over time. The filtered arm position is used to improve both trajectory planning and further learning of the forward velocity kinematics. We test the approach on a simulated kinematic robot arm model. The results show that the combination can improve learning and control performance significantly. However, it also shows that variance estimates of XCSF prediction may be underestimated, in which case self-delusional spiraling effects can hinder effective learning. Thus, we introduce a heuristic parameter, which can be motivated by theory, and which limits the influence of XCSF's predictions on its own further learning input. As a result, we obtain drastic improvements in noise tolerance, allowing the system to cope with more than 10 times higher noise levels.
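
    The filtering step described above can be illustrated with a scalar Kalman filter that fuses noisy position measurements with a model-based prediction of the position change, the latter standing in here for XCSF's forward velocity kinematics prediction; a full XCSF implementation is beyond this sketch, and the noise levels and motion are assumed values.

```python
# Sketch: a scalar Kalman filter that fuses noisy position measurements with a
# predicted position change (a stand-in for XCSF's forward-velocity prediction).
import numpy as np

rng = np.random.default_rng(0)
dt, steps = 0.01, 500
q, r = 1e-4, 0.05 ** 2            # process and measurement noise variances
x_hat, P = 0.0, 1.0               # filter state estimate and covariance
true_x = 0.0

for k in range(steps):
    v_cmd = np.cos(0.02 * k)                      # commanded velocity for this step
    true_x += v_cmd * dt                          # true arm position
    z = true_x + rng.normal(0.0, np.sqrt(r))      # noisy sensor reading
    # predict: apply the (XCSF-like) velocity prediction
    x_hat += v_cmd * dt
    P += q
    # update: blend the prediction with the sensor reading
    K = P / (P + r)
    x_hat += K * (z - x_hat)
    P *= (1 - K)
print(f"final estimate {x_hat:.4f}  vs true {true_x:.4f}")
```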

  4. Computer-aided auscultation learning system for nursing technique instruction.

    Science.gov (United States)

    Hou, Chun-Ju; Chen, Yen-Ting; Hu, Ling-Chen; Chuang, Chih-Chieh; Chiu, Yu-Hsien; Tsai, Ming-Shih

    2008-01-01

    Pulmonary auscultation is a physical assessment skill learned by nursing students for examining the respiratory system. Generally, a mannequin equipped with a sound simulator is used to teach auscultation techniques to groups via classroom demonstration. However, nursing students cannot readily duplicate this learning environment for self-study. The advancement of electronic and digital signal processing technologies facilitates simulating this learning environment. This study aims to develop a computer-aided auscultation learning system for assisting teachers and nursing students in auscultation teaching and learning. This system provides teachers with signal recording and processing of lung sounds and immediate playback of lung sounds for students. A graphical user interface allows teachers to control the measuring device, draw lung sound waveforms, highlight lung sound segments of interest, and include descriptive text. Effects on learning lung sound auscultation were evaluated to verify the feasibility of the system. Fifteen nursing students voluntarily participated in the repeated experiment. The results of a paired t test showed that the auscultative abilities of the students were significantly improved by using the computer-aided auscultation learning system.

  5. Neurofeedback Control of the Human GABAergic System Using Non-invasive Brain Stimulation.

    Science.gov (United States)

    Koganemaru, Satoko; Mikami, Yusuke; Maezawa, Hitoshi; Ikeda, Satoshi; Ikoma, Katsunori; Mima, Tatsuya

    2018-06-01

    Neurofeedback has been a powerful method for self-regulating brain activities to elicit the latent potential of the human mind. GABA is a major inhibitory neurotransmitter in the central nervous system. Transcranial magnetic stimulation (TMS) is a tool that can evaluate the GABAergic system within the primary motor cortex (M1) using paired-pulse stimuli (short intracortical inhibition, SICI). Herein we investigated whether neurofeedback learning using SICI enables control of the GABAergic system within the M1 area. Forty-five healthy subjects were randomly divided into two groups: those receiving SICI neurofeedback learning and those receiving no neurofeedback (control) learning. During both learning periods, subjects attempted to change the size of a circle, which was altered according to the degree of SICI in the SICI neurofeedback learning group, and independently of the degree of SICI in the control learning group. Results demonstrated that the SICI neurofeedback learning group showed a significant enhancement in SICI. Moreover, this group showed a significant reduction in choice reaction time compared to the control group. Our findings indicate that humans can intrinsically control the intracortical GABAergic system within M1 and can thus improve motor behaviors through SICI neurofeedback learning. SICI neurofeedback learning is a novel and promising approach for controlling our neural system and potentially represents a new therapy for patients with abnormal motor symptoms caused by CNS disorders. Copyright © 2018 IBRO. Published by Elsevier Ltd. All rights reserved.

  6. Automatic learning algorithm for the MD-logic artificial pancreas system.

    Science.gov (United States)

    Miller, Shahar; Nimri, Revital; Atlas, Eran; Grunberg, Eli A; Phillip, Moshe

    2011-10-01

    Applying real-time learning into an artificial pancreas system could effectively track the unpredictable behavior of glucose-insulin dynamics and adjust insulin treatment accordingly. We describe a novel learning algorithm and its performance when integrated into the MD-Logic Artificial Pancreas (MDLAP) system developed by the Diabetes Technology Center, Schneider Children's Medical Center of Israel, Petah Tikva, Israel. The algorithm was designed to establish an initial patient profile using open-loop data (Initial Learning Algorithm component) and then make periodic adjustments during closed-loop operation (Runtime Learning Algorithm component). The MDLAP system, integrated with the learning algorithm, was tested in seven different experiments using the University of Virginia/Padova simulator, comprising adults, adolescents, and children. The experiments included simulations using the open-loop and closed-loop control strategy under nominal and varying insulin sensitivity conditions. The learning algorithm was automatically activated at the end of the open-loop segment and after every day of the closed-loop operation. Metabolic control parameters achieved at selected time points were compared. The percentage of time glucose levels were maintained within 70-180 mg/dL for children and adolescents significantly improved when open-loop was compared with day 6 of closed-loop control; a further outcome measure was significantly reduced by approximately sevenfold; and there was a significant reduction in the Low Blood Glucose Index (P<0.001). The new algorithm was effective in characterizing the patient profiles from open-loop data and in adjusting treatment to provide better glycemic control during closed-loop control in both conditions. These findings warrant corroboratory clinical trials.

  7. Social software: E-learning beyond learning management systems

    DEFF Research Database (Denmark)

    Dalsgaard, Christian

    2006-01-01

    The article argues that it is necessary to move e-learning beyond learning management systems and engage students in an active use of the web as a resource for their self-governed, problem-based and collaborative activities. The purpose of the article is to discuss the potential of social software...... to move e-learning beyond learning management systems. An approach to use of social software in support of a social constructivist approach to e-learning is presented, and it is argued that learning management systems do not support a social constructivist approach which emphasizes self-governed learning...... activities of students. The article suggests a limitation of the use of learning management systems to cover only administrative issues. Further, it is argued that students' self-governed learning processes are supported by providing students with personal tools and engaging them in different kinds of social...

  8. Applications of learning based systems at AREVA group

    International Nuclear Information System (INIS)

    Jeanmart, F.; Leclerc, C.

    2006-01-01

    As part of its work on advanced information systems, AREVA is exploring the use of computerized tools based on 'machine learning' techniques. Some of these studies are being carried out by EURIWARE - continuing on from previous work done by AREVA NC - and are focused on the supervision of complex systems. Systems based on machine learning techniques are one of the possible solutions being investigated by AREVA: the stakes are high, involving better anticipation and control as well as significant financial considerations. (authors)

  9. Learning Control of Fixed-Wing Unmanned Aerial Vehicles Using Fuzzy Neural Networks

    Directory of Open Access Journals (Sweden)

    Erdal Kayacan

    2017-01-01

    Full Text Available A learning control strategy is preferred for the control and guidance of a fixed-wing unmanned aerial vehicle in order to deal with modeling deficiencies and flight uncertainties. To learn the plant model as well as changing working conditions online, a fuzzy neural network (FNN) is used in parallel with a conventional P (proportional) controller. Among the learning algorithms in the literature, a derivative-free one, namely a sliding mode control (SMC) theory-based learning algorithm, is preferred, as it has been proved to be computationally efficient in real-time applications. Its proven robustness and finite-time convergence make the learning algorithm appropriate for controlling an unmanned aerial vehicle, as the computational power is always limited in unmanned aerial vehicles (UAVs). The parameter update rules and stability conditions of the learning are derived, and the stability of the learning algorithm is proved using a candidate Lyapunov function. Intensive simulations are performed to illustrate the applicability of the proposed controller, which include the tracking of a three-dimensional trajectory by the UAV subject to time-varying wind conditions. The simulation results show the efficiency of the proposed control algorithm, especially in real-time control systems, because of its computational efficiency.
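
    The parallel structure described above can be pictured with the toy sketch below: a conventional P controller acts on the tracking error while a small learned term, adapted online with a sign-based rule loosely inspired by SMC-type learning, gradually takes over compensation of an unknown disturbance. The plant, features, gains and update law are all assumptions for illustration, not the FNN or the derived update rules of the paper.

      import numpy as np

      def run(steps=2000, dt=0.01, kp=2.0, lr=0.5):
          # toy first-order plant x' = -x + u + d with an unknown constant disturbance d
          x, d = 0.0, 0.8
          w = np.zeros(3)                          # weights of the learned parallel term
          for k in range(steps):
              ref = np.sin(0.01 * k)
              e = ref - x
              phi = np.array([e, ref, 1.0])        # simple features (stand-in for fuzzy firing strengths)
              u_fb = kp * e                        # conventional P controller
              u_learn = float(w @ phi)             # learned term acting in parallel
              u = u_fb + u_learn
              w += lr * dt * np.sign(e) * phi      # sign-based (SMC-inspired) weight update
              x += dt * (-x + u + d)               # Euler step of the plant
          return abs(ref - x)

      print(run())                                 # tracking error after online adaptation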

  10. New designing of E-Learning systems with using network learning

    OpenAIRE

    Malayeri, Amin Daneshmand; Abdollahi, Jalal

    2010-01-01

    One of the most widely applied forms of learning in virtual spaces is the use of E-Learning systems. Several E-Learning methodologies have been introduced, but the main question is which yields the most positive feedback from E-Learning systems. In this paper, we introduce a new E-Learning methodology entitled "Network Learning", together with a review of other aspects of E-Learning systems. We also present the benefits and advantages of using these systems in education and fast-learning programs. Network Learning can be programmable...

  11. A neural fuzzy controller learning by fuzzy error propagation

    Science.gov (United States)

    Nauck, Detlef; Kruse, Rudolf

    1992-01-01

    In this paper, we describe a procedure for integrating techniques for the adaptation of membership functions in a linguistic-variable-based fuzzy control environment by using neural network learning principles. This is an extension of our previous work. We solve this problem by defining a fuzzy error that is propagated back through the architecture of our fuzzy controller. According to this fuzzy error and the firing strength of its antecedent, each fuzzy rule determines its share of the error. Depending on the current state of the controlled system and the control action derived from the conclusion, each rule tunes the membership functions of its antecedent and its conclusion. In this way we obtain an unsupervised learning technique that enables a fuzzy controller to adapt to a control task by knowing only the global state and the fuzzy error.
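
    As a rough sketch of this idea only (not the authors' procedure), the code below implements a one-input fuzzy controller with Gaussian antecedent membership functions and singleton conclusions, and distributes a scalar 'fuzzy error' to the rules in proportion to their firing strengths in order to tune both parts; the shapes, learning rate and error definition are assumptions.

      import numpy as np

      class TinyFuzzyController:
          def __init__(self):
              self.in_c = np.array([-1.0, 0.0, 1.0])    # antecedent MF centres over the error input
              self.out_c = np.array([1.0, 0.0, -1.0])   # singleton conclusion centres (control action)
              self.sigma = 0.6

          def firing(self, e):
              return np.exp(-((e - self.in_c) / self.sigma) ** 2)

          def control(self, e):
              w = self.firing(e)
              return float(w @ self.out_c / w.sum())    # weighted-average defuzzification

          def adapt(self, e, fuzzy_error, lr=0.05):
              # each rule takes a share of the fuzzy error proportional to its firing strength
              w = self.firing(e)
              share = w / w.sum()
              self.out_c += lr * fuzzy_error * share                          # tune conclusions
              self.in_c += lr * fuzzy_error * share * np.sign(e - self.in_c)  # shift antecedents

      # one control/adaptation step, with the signed tracking error standing in for the fuzzy error
      fc = TinyFuzzyController()
      e = 0.4
      u = fc.control(e)
      fc.adapt(e, fuzzy_error=e)
      print(round(u, 3))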

  12. Memory and cognitive control circuits in mathematical cognition and learning.

    Science.gov (United States)

    Menon, V

    2016-01-01

    Numerical cognition relies on interactions within and between multiple functional brain systems, including those subserving quantity processing, working memory, declarative memory, and cognitive control. This chapter describes recent advances in our understanding of memory and control circuits in mathematical cognition and learning. The working memory system involves multiple parietal-frontal circuits which create short-term representations that allow manipulation of discrete quantities over several seconds. In contrast, hippocampal-frontal circuits underlying the declarative memory system play an important role in formation of associative memories and binding of new and old information, leading to the formation of long-term memories that allow generalization beyond individual problem attributes. The flow of information across these systems is regulated by flexible cognitive control systems which facilitate the integration and manipulation of quantity and mnemonic information. The implications of recent research for formulating a more comprehensive systems neuroscience view of the neural basis of mathematical learning and knowledge acquisition in both children and adults are discussed. © 2016 Elsevier B.V. All rights reserved.

  13. Evaluation of an e-learning system for diagnosis of gastric lesions using magnifying narrow-band imaging: a multicenter randomized controlled study.

    Science.gov (United States)

    Nakanishi, Hiroyoshi; Doyama, Hisashi; Ishikawa, Hideki; Uedo, Noriya; Gotoda, Takuji; Kato, Mototsugu; Nagao, Shigeaki; Nagami, Yasuaki; Aoyagi, Hiroyuki; Imagawa, Atsushi; Kodaira, Junichi; Mitsui, Shinya; Kobayashi, Nozomu; Muto, Manabu; Takatori, Hajime; Abe, Takashi; Tsujii, Masahiko; Watari, Jiro; Ishiyama, Shuhei; Oda, Ichiro; Ono, Hiroyuki; Kaneko, Kazuhiro; Yokoi, Chizu; Ueo, Tetsuya; Uchita, Kunihisa; Matsumoto, Kenshi; Kanesaka, Takashi; Morita, Yoshinori; Katsuki, Shinichi; Nishikawa, Jun; Inamura, Katsuhisa; Kinjo, Tetsu; Yamamoto, Katsumi; Yoshimura, Daisuke; Araki, Hiroshi; Kashida, Hiroshi; Hosokawa, Ayumu; Mori, Hirohito; Yamashita, Haruhiro; Motohashi, Osamu; Kobayashi, Kazuhiko; Hirayama, Michiaki; Kobayashi, Hiroyuki; Endo, Masaki; Yamano, Hiroo; Murakami, Kazunari; Koike, Tomoyuki; Hirasawa, Kingo; Miyaoka, Youichi; Hamamoto, Hidetaka; Hikichi, Takuto; Hanabata, Norihiro; Shimoda, Ryo; Hori, Shinichiro; Sato, Tadashi; Kodashima, Shinya; Okada, Hiroyuki; Mannami, Tomohiko; Yamamoto, Shojiro; Niwa, Yasumasa; Yashima, Kazuo; Tanabe, Satoshi; Satoh, Hiro; Sasaki, Fumisato; Yamazato, Tetsuro; Ikeda, Yoshiou; Nishisaki, Hogara; Nakagawa, Masahiro; Matsuda, Akio; Tamura, Fumio; Nishiyama, Hitoshi; Arita, Keiko; Kawasaki, Keisuke; Hoppo, Kazushige; Oka, Masashi; Ishihara, Shinichi; Mukasa, Michita; Minamino, Hiroaki; Yao, Kenshi

    2017-10-01

    Background and study aim  Magnifying narrow-band imaging (M-NBI) is useful for the accurate diagnosis of early gastric cancer (EGC). However, acquiring skill at M-NBI diagnosis takes substantial effort. An Internet-based e-learning system to teach endoscopic diagnosis of EGC using M-NBI has been developed. This study evaluated its effectiveness. Participants and methods  This study was designed as a multicenter randomized controlled trial. We recruited endoscopists as participants from all over Japan. After completing Test 1, which consisted of M-NBI images of 40 gastric lesions, participants were randomly assigned to the e-learning or non-e-learning groups (198 in the e-learning group and 197 in the non-e-learning group). Only the e-learning group was allowed to access the e-learning system. After the e-learning period, all 395 participants completed Test 2, and the analysis sets comprised 184 participants in each group (e-learning group: n = 184; non-e-learning group: n = 184). The mean Test 1 score was 59.9 % for the e-learning group and 61.7 % for the non-e-learning group. The change in accuracy in Test 2 was significantly higher in the e-learning group than in the non-e-learning group (7.4 points vs. 0.14 points, respectively), demonstrating the effectiveness of the e-learning system in improving practitioners' capabilities to diagnose EGC using M-NBI. Trial registered at University Hospital Medical Information Network Clinical Trials Registry (UMIN000008569). © Georg Thieme Verlag KG Stuttgart · New York.

  14. Experiences with establishing and implementing learning management system and computer-based test system in medical college.

    Science.gov (United States)

    Park, Joo Hyun; Son, Ji Young; Kim, Sun

    2012-09-01

    The purpose of this study was to establish an e-learning system to support learning in medical education and to identify solutions for improving the system. A learning management system (LMS) and computer-based test (CBT) system were established to support e-learning for medical students. A survey of 219 first- and second-year medical students was administered. The questionnaire included 9 forced-choice questions about the usability of the system and 2 open-ended questions about necessary improvements to the system. The LMS consisted of a class management, class evaluation, and class attendance system. The CBT consisted of a test management, item bank, and authoring tool system. The results of the survey showed a high level of satisfaction with all system usability items except for stability. Further, the advantages of the e-learning system were ensuring information accessibility, providing constant feedback, and designing an intuitive interface. Necessary improvements to the system were stability, user control, readability, and diverse device usage. Based on the findings, suggestions are made for developing an e-learning system that improves usability for medical students and supports learning effectively.

  15. Digital control for nuclear reactors - lessons learned

    International Nuclear Information System (INIS)

    Bernard, J.A.; Aviles, B.N.; Lanning, D.D.

    1992-01-01

    Lessons learned during the course of the now decade-old MIT program on the digital control of nuclear reactors are enumerated. Relative to controller structure, these include the importance of a separate safety system, the need for signal validation, the role of supervisory algorithms, the significance of command validation, and the relevance of automated reasoning. Relative to controller implementation, these include the value of nodal methods to the creation of real-time reactor physics and thermal hydraulic models, the advantages to be gained from the use of real-time system models, and the importance of a multi-tiered structure to the simultaneous achievement of supervisory, global, and local control. Block diagrams are presented of proposed controllers and selected experimental and simulation-study results are shown. In addition, a history is given of the MIT program on reactor digital control

  16. Learning-based controller for biotechnology processing, and method of using

    Science.gov (United States)

    Johnson, John A.; Stoner, Daphne L.; Larsen, Eric D.; Miller, Karen S.; Tolle, Charles R.

    2004-09-14

    The present invention relates to process control where some of the controllable parameters are difficult or impossible to characterize. In particular, it relates to, but is not limited to, process control of such systems in biotechnology, including biotechnological minerals processing. In the inventive method, an application of the present invention manipulates a minerals bioprocess to find local extrema (maxima or minima) for selected output variables/process goals by using a learning-based controller for bioprocess oxidation of minerals during hydrometallurgical processing. The learning-based controller operates with or without human supervision and works to find process optima without previously defined optima, due to the non-characterized nature of the process being manipulated.

  17. The organization of an autonomous learning system

    Science.gov (United States)

    Kanerva, Pentti

    1988-01-01

    The organization of systems that learn from experience is examined, human beings and animals being prime examples of such systems. How is their information processing organized? They build an internal model of the world and base their actions on the model. The model is dynamic and predictive, and it includes the systems' own actions and their effects. In modeling such systems, a large pattern of features represents a moment of the system's experience. Some of the features are provided by the system's senses, some control the system's motors, and the rest have no immediate external significance. A sequence of such patterns then represents the system's experience over time. By storing such sequences appropriately in memory, the system builds a world model based on experience. In addition to the essential function of memory, fundamental roles are played by a sensory system that makes raw information about the world suitable for memory storage and by a motor system that affects the world. The relation of sensory and motor systems to the memory is discussed, together with how favorable actions can be learned and unfavorable actions can be avoided. Results in classical learning theory are explained in terms of the model, more advanced forms of learning are discussed, and the relevance of the model to the frame problem of robotics is examined.

  18. A new subspace based approach to iterative learning control

    NARCIS (Netherlands)

    Nijsse, G.; Verhaegen, M.; Doelman, N.J.

    2001-01-01

    This paper presents an iterative learning control (ILC) procedure based on an inverse model of the plant under control. Our first contribution is that we formulate the inversion procedure as a Kalman smoothing problem: based on a compact state space model of a possibly non-minimum phase system,

  19. Iterative learning-based decentralized adaptive tracker for large-scale systems: a digital redesign approach.

    Science.gov (United States)

    Tsai, Jason Sheng-Hong; Du, Yan-Yi; Huang, Pei-Hsiang; Guo, Shu-Mei; Shieh, Leang-San; Chen, Yuhua

    2011-07-01

    In this paper, a digital redesign methodology of the iterative learning-based decentralized adaptive tracker is proposed to improve the dynamic performance of sampled-data linear large-scale control systems consisting of N interconnected multi-input multi-output subsystems, so that the system output will follow any trajectory which may not be presented by the analytic reference model initially. To overcome the interference of each sub-system and simplify the controller design, the proposed model reference decentralized adaptive control scheme constructs a decoupled well-designed reference model first. Then, according to the well-designed model, this paper develops a digital decentralized adaptive tracker based on the optimal analog control and prediction-based digital redesign technique for the sampled-data large-scale coupling system. In order to enhance the tracking performance of the digital tracker at specified sampling instants, we apply the iterative learning control (ILC) to train the control input via continual learning. As a result, the proposed iterative learning-based decentralized adaptive tracker not only has robust closed-loop decoupled property but also possesses good tracking performance at both transient and steady state. Besides, evolutionary programming is applied to search for a good learning gain to speed up the learning process of ILC. Copyright © 2011 ISA. Published by Elsevier Ltd. All rights reserved.
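
    The way ILC refines the control input over repeated trials can be illustrated with the classic first-order update law below, applied to a toy discrete plant rather than the decentralized large-scale system of the paper; the plant, learning gain and trial count are assumptions.

      import numpy as np

      def run_trial(u):
          # toy SISO plant with a one-step delay: x[n+1] = 0.5 x[n] + 0.5 u[n], y[n] = x[n]
          x, y = 0.0, np.zeros(u.size)
          for n in range(u.size):
              y[n] = x
              x = 0.5 * x + 0.5 * u[n]
          return y

      N = 50
      ref = np.sin(np.linspace(0.0, np.pi, N))      # desired trajectory, repeated every trial
      u = np.zeros(N)
      gamma = 0.8                                   # ILC learning gain
      for k in range(30):                           # repeated trials from the same initial condition
          e = ref - run_trial(u)
          u[:-1] += gamma * e[1:]                   # P-type update, shifted to match the plant delay
          if k % 10 == 0 or k == 29:
              print(k, float(np.max(np.abs(e))))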

  20. Practical iterative learning control with frequency domain design and sampled data implementation

    CERN Document Server

    Wang, Danwei; Zhang, Bin

    2014-01-01

    This book is on iterative learning control (ILC), with a focus on design and implementation. We approach the ILC design based on frequency domain analysis and address the ILC implementation based on sampled-data methods. This is the first book on ILC to combine frequency domain and sampled-data methodologies. The frequency domain design methods offer ILC users insights into the convergence performance, which is of practical benefit. This book presents a comprehensive framework with various methodologies to ensure that the learnable bandwidth in the ILC system is set with a balance between learning performance and learning stability. The sampled-data implementation ensures effective execution of ILC in practical dynamic systems. The presented sampled-data ILC methods also ensure the balance of performance and stability of the learning process. Furthermore, the presented theories and methodologies are tested with an ILC-controlled robotic system. The experimental results show that the machines can work in much h...

  1. Supervised Learning for Dynamical System Learning.

    Science.gov (United States)

    Hefny, Ahmed; Downey, Carlton; Gordon, Geoffrey J

    2015-01-01

    Recently there has been substantial interest in spectral methods for learning dynamical systems. These methods are popular since they often offer a good tradeoff between computational and statistical efficiency. Unfortunately, they can be difficult to use and extend in practice: e.g., they can make it difficult to incorporate prior information such as sparsity or structure. To address this problem, we present a new view of dynamical system learning: we show how to learn dynamical systems by solving a sequence of ordinary supervised learning problems, thereby allowing users to incorporate prior knowledge via standard techniques such as L1 regularization. Many existing spectral methods are special cases of this new framework, using linear regression as the supervised learner. We demonstrate the effectiveness of our framework by showing examples where nonlinear regression or lasso let us learn better state representations than plain linear regression does; the correctness of these instances follows directly from our general analysis.
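
    The central reduction, treating dynamical system learning as a sequence of ordinary supervised regressions from past observations to future ones, can be sketched as below with simulated data and ridge regression; swapping in lasso or a nonlinear regressor follows the same pattern. The data, window length and regularization are assumptions, and this is not the authors' spectral formulation.

      import numpy as np

      def make_regression_pairs(y, h=3):
          """Stack the last h observations as the input for predicting the next observation."""
          X = np.column_stack([y[i:len(y) - h + i] for i in range(h)])
          return X, y[h:]

      # simulated trajectory of a noisy second-order linear system (assumed data)
      rng = np.random.default_rng(1)
      y = np.zeros(500)
      y[0], y[1] = 1.0, 0.9
      for t in range(2, 500):
          y[t] = 1.6 * y[t - 1] - 0.64 * y[t - 2] + 0.05 * rng.normal()

      X, target = make_regression_pairs(y, h=3)
      lam = 1e-2                                   # ridge penalty (an L1 penalty would give sparsity)
      w = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ target)
      pred = X @ w
      print("one-step RMSE:", float(np.sqrt(np.mean((pred - target) ** 2))))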

  2. Designing E-learning Model to Learn About Transportation Management System to Support Supply Chain Management with Simulation Problems

    OpenAIRE

    Wiyono, Didiek Sri; Pribadi, Sidigdoyo; Permana, Ryan

    2011-01-01

    The focus of this research is designing a Transportation Management System (TMS) as an e-learning medium for logistics education. E-learning is the use of Internet technologies to enhance knowledge and performance. E-learning technologies offer learners control over content, learning sequence, pace of learning, time, and often media, allowing them to tailor their experiences to meet their personal learning objectives. E-learning appears to be at least as effective as classical lectures. Students do not ...

  3. Computer Simulation Tests of Feedback Error Learning Controller with IDM and ISM for Functional Electrical Stimulation in Wrist Joint Control

    OpenAIRE

    Watanabe, Takashi; Sugi, Yoshihiro

    2010-01-01

    A feedforward controller would be useful for a hybrid Functional Electrical Stimulation (FES) system using powered orthotic devices. In this paper, a Feedback Error Learning (FEL) controller for FES (FEL-FES controller) was examined using an inverse statics model (ISM) together with an inverse dynamics model (IDM) to realize a feedforward FES controller. For the FES application, the ISM was trained offline using training data obtained by PID control of very slow movements. Computer simulation tests ...

  4. Data-Driven H∞ Control for Nonlinear Distributed Parameter Systems.

    Science.gov (United States)

    Luo, Biao; Huang, Tingwen; Wu, Huai-Ning; Yang, Xiong

    2015-11-01

    The data-driven H∞ control problem of nonlinear distributed parameter systems is considered in this paper. An off-policy learning method is developed to learn the H∞ control policy from real system data rather than the mathematical model. First, Karhunen-Loève decomposition is used to compute the empirical eigenfunctions, which are then employed to derive a reduced-order model (ROM) of the slow subsystem based on the singular perturbation theory. The H∞ control problem is reformulated based on the ROM, which can be transformed to solve the Hamilton-Jacobi-Isaacs (HJI) equation, theoretically. To learn the solution of the HJI equation from real system data, a data-driven off-policy learning approach is proposed based on the simultaneous policy update algorithm and its convergence is proved. For implementation purposes, a neural network (NN)-based action-critic structure is developed, where a critic NN and two action NNs are employed to approximate the value function, control, and disturbance policies, respectively. Subsequently, a least-square NN weight-tuning rule is derived with the method of weighted residuals. Finally, the developed data-driven off-policy learning approach is applied to a nonlinear diffusion-reaction process, and the obtained results demonstrate its effectiveness.
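
    The first stage described above, extracting empirical eigenfunctions by Karhunen-Loève (proper orthogonal) decomposition of snapshot data and projecting onto a reduced-order basis for the slow subsystem, can be sketched as follows. The snapshot data, mode-selection threshold and variable names are assumptions, and the subsequent off-policy learning of the H∞ policy is omitted.

      import numpy as np

      # assumed snapshot matrix: each column is a spatial profile of the process state at one time
      rng = np.random.default_rng(0)
      x = np.linspace(0.0, 1.0, 100)
      snapshots = np.column_stack([np.sin(np.pi * x) * np.exp(-0.1 * t)
                                   + 0.3 * np.sin(2 * np.pi * x) * np.cos(0.5 * t)
                                   + 0.01 * rng.normal(size=x.size)
                                   for t in np.linspace(0.0, 10.0, 200)])

      # Karhunen-Loève / POD: the left singular vectors are the empirical eigenfunctions
      U, S, _ = np.linalg.svd(snapshots, full_matrices=False)
      energy = np.cumsum(S ** 2) / np.sum(S ** 2)
      r = int(np.searchsorted(energy, 0.999)) + 1       # keep enough modes for 99.9 % of the energy
      phi = U[:, :r]                                    # dominant empirical eigenfunctions

      # reduced coordinates of each snapshot: the basis for a low-order model of the slow dynamics
      a = phi.T @ snapshots
      err = float(np.linalg.norm(snapshots - phi @ a) / np.linalg.norm(snapshots))
      print("retained modes:", r, "relative reconstruction error:", err)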

  5. Learning Content Management Systems

    Directory of Open Access Journals (Sweden)

    Tache JURUBESCU

    2008-01-01

    Full Text Available The paper explains the evolution of e-Learning and related concepts and tools and its connection with other concepts such as Knowledge Management, Human Resources Management, Enterprise Resource Planning, and Information Technology. The paper also distinguishes Learning Content Management Systems from Learning Management Systems and from Content Management Systems used for general web-based content. The newest Learning Content Management System, very expensive and yet very little implemented, is one of the best tools that helps us cope with the realities of the 21st century where learning is concerned. The debates over how beneficial one or another system is for an organization can be driven by the costs involved, the efficiency envisaged, and the availability of the product on the market.

  6. Research on cultivating medical students' self-learning ability using teaching system integrated with learning analysis technology.

    Science.gov (United States)

    Luo, Hong; Wu, Cheng; He, Qian; Wang, Shi-Yong; Ma, Xiu-Qiang; Wang, Ri; Li, Bing; He, Jia

    2015-01-01

    Along with the advancement of information technology and the era of big data in education, using learning process data to support strategic decision-making in cultivating and improving medical students' self-learning ability has become a trend in educational research. As the educator and futurist Alvin Toffler once said, the illiterates of the future may not be the people unable to read and write, but those who do not know how to learn. Serving as educational institutions cultivating medical students' learning ability, colleges and universities should not only teach specific professional knowledge and skills, but also develop medical students' self-learning ability. In this research, we built a teaching system which can help to reconstruct medical students' self-learning processes and analyze their learning outcomes and behaviors. To evaluate the effectiveness of the system in supporting medical students' self-learning, an experiment was conducted with 116 medical students from two grades. The results indicated that the problems identified in the self-learning process through this system were consistent with the problems raised in traditional classroom teaching. Moreover, the experimental group (using this system) performed better than the control group (using traditional classroom teaching) to some extent. Thus, this system can not only help medical students to develop their self-learning ability, but also enhance the ability of teachers to target medical students' questions quickly, improving the efficiency of answering questions in class.

  7. The control of tonic pain by active relief learning

    Science.gov (United States)

    Mano, Hiroaki; Lee, Michael; Yoshida, Wako; Kawato, Mitsuo; Robbins, Trevor W

    2018-01-01

    Tonic pain after injury characterises a behavioural state that prioritises recovery. Although generally suppressing cognition and attention, tonic pain needs to allow effective relief learning to reduce the cause of the pain. Here, we describe a central learning circuit that supports learning of relief and concurrently suppresses the level of ongoing pain. We used computational modelling of behavioural, physiological and neuroimaging data in two experiments in which subjects learned to terminate tonic pain in static and dynamic escape-learning paradigms. In both studies, we show that active relief-seeking involves a reinforcement learning process manifest by error signals observed in the dorsal putamen. Critically, this system uses an uncertainty (‘associability’) signal detected in pregenual anterior cingulate cortex that both controls the relief learning rate, and endogenously and parametrically modulates the level of tonic pain. The results define a self-organising learning circuit that reduces ongoing pain when learning about potential relief. PMID:29482716

  8. The control of tonic pain by active relief learning.

    Science.gov (United States)

    Zhang, Suyi; Mano, Hiroaki; Lee, Michael; Yoshida, Wako; Kawato, Mitsuo; Robbins, Trevor W; Seymour, Ben

    2018-02-27

    Tonic pain after injury characterises a behavioural state that prioritises recovery. Although generally suppressing cognition and attention, tonic pain needs to allow effective relief learning to reduce the cause of the pain. Here, we describe a central learning circuit that supports learning of relief and concurrently suppresses the level of ongoing pain. We used computational modelling of behavioural, physiological and neuroimaging data in two experiments in which subjects learned to terminate tonic pain in static and dynamic escape-learning paradigms. In both studies, we show that active relief-seeking involves a reinforcement learning process manifest by error signals observed in the dorsal putamen. Critically, this system uses an uncertainty ('associability') signal detected in pregenual anterior cingulate cortex that both controls the relief learning rate, and endogenously and parametrically modulates the level of tonic pain. The results define a self-organising learning circuit that reduces ongoing pain when learning about potential relief. © 2018, Zhang et al.
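
    A generic associability-gated update in the Pearce-Hall tradition conveys the computational idea: an uncertainty ('associability') term tracks recent surprise and scales the relief learning rate. The sketch below is only that generic form with assumed parameters, not the model fitted in the study, and the modulation of ongoing pain intensity itself is omitted.

      import numpy as np

      def relief_learning(outcomes, eta=0.3, kappa=0.5):
          """Associability-gated prediction-error learning (Pearce-Hall style)."""
          V, alpha = 0.0, 1.0                 # relief expectation and associability (uncertainty)
          trace = []
          for r in outcomes:                  # r = 1 if the action produced relief, else 0
              delta = r - V                   # prediction error
              V += kappa * alpha * delta      # learning rate scaled by associability
              alpha = eta * abs(delta) + (1 - eta) * alpha   # associability tracks recent surprise
              trace.append((V, alpha))
          return trace

      # usage: relief becomes reliable halfway through the session
      rng = np.random.default_rng(2)
      outcomes = np.concatenate([rng.random(30) < 0.2, rng.random(30) < 0.8]).astype(float)
      for V, alpha in relief_learning(outcomes)[::10]:
          print(round(V, 2), round(alpha, 2))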

  9. LANSCE personnel access control system (PACS)

    International Nuclear Information System (INIS)

    Sturrock, J.C.; Gallegos, F.R.; Hall, M.J.

    1997-01-01

    The Radiation Security System (RSS) at the Los Alamos Neutron Science Center (LANSCE) provides personnel protection from prompt radiation due to accelerated beam. The Personnel Access Control System (PACS) is a component of the RSS that is designed to prevent personnel access to areas where prompt radiation is a hazard. PACS was designed to replace several older personnel safety systems (PSS) with a single modern unified design. Lessons learned from operation over the last 20 years were incorporated into a redundant-sensor, single-point-failure-safe, fault-tolerant, and tamper-resistant system that prevents access to the beam areas by controlling the access keys and beam stoppers. PACS uses a layered philosophy in the physical and electronic design. The most critical assemblies are battery-backed relay logic circuits; less critical devices use Programmable Logic Controllers (PLCs) for timing functions and communications. Outside reviewers have reviewed the operational safety of the design. The design philosophy, lessons learned, hardware design, software design, operation, and limitations of the device are described

  10. Instructional control of reinforcement learning: a behavioral and neurocomputational investigation.

    Science.gov (United States)

    Doll, Bradley B; Jacobs, W Jake; Sanfey, Alan G; Frank, Michael J

    2009-11-24

    Humans learn how to behave directly through environmental experience and indirectly through rules and instructions. Behavior analytic research has shown that instructions can control behavior, even when such behavior leads to sub-optimal outcomes (Hayes, S. (Ed.). 1989. Rule-governed behavior: cognition, contingencies, and instructional control. Plenum Press.). Here we examine the control of behavior through instructions in a reinforcement learning task known to depend on striatal dopaminergic function. Participants selected between probabilistically reinforced stimuli, and were (incorrectly) told that a specific stimulus had the highest (or lowest) reinforcement probability. Despite experience to the contrary, instructions drove choice behavior. We present neural network simulations that capture the interactions between instruction-driven and reinforcement-driven behavior via two potential neural circuits: one in which the striatum is inaccurately trained by instruction representations coming from prefrontal cortex/hippocampus (PFC/HC), and another in which the striatum learns the environmentally based reinforcement contingencies, but is "overridden" at decision output. Both models capture the core behavioral phenomena but, because they differ fundamentally on what is learned, make distinct predictions for subsequent behavioral and neuroimaging experiments. Finally, we attempt to distinguish between the proposed computational mechanisms governing instructed behavior by fitting a series of abstract "Q-learning" and Bayesian models to subject data. The best-fitting model supports one of the neural models, suggesting the existence of a "confirmation bias" in which the PFC/HC system trains the reinforcement system by amplifying outcomes that are consistent with instructions while diminishing inconsistent outcomes.
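
    The 'confirmation bias' account can be conveyed by an abstract Q-learning sketch in which outcomes consistent with an (inaccurate) instruction are amplified and inconsistent outcomes diminished; the task structure, parameter values and softmax choice rule below are assumptions and do not reproduce the authors' fitted models.

      import numpy as np

      def instructed_q(p_reward=(0.4, 0.6), instructed=0, trials=500,
                       lr=0.1, bias=3.0, beta=5.0, seed=0):
          """Two-option probabilistic task with a biased update for the instructed option."""
          rng = np.random.default_rng(seed)
          Q = np.zeros(2)
          choices = []
          for _ in range(trials):
              p = np.exp(beta * Q) / np.sum(np.exp(beta * Q))   # softmax action selection
              a = int(rng.choice(2, p=p))
              r = float(rng.random() < p_reward[a])
              delta = r - Q[a]
              if a == instructed:
                  # amplify instruction-consistent outcomes, diminish inconsistent ones
                  delta *= bias if r == 1.0 else 1.0 / bias
              Q[a] += lr * delta
              choices.append(a)
          return Q, float(np.mean(np.array(choices[-100:]) == instructed))

      # despite its lower reward probability, the instructed option can come to dominate choice
      print(instructed_q())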

  11. Extracting quantum dynamics from genetic learning algorithms through principal control analysis

    International Nuclear Information System (INIS)

    White, J L; Pearson, B J; Bucksbaum, P H

    2004-01-01

    Genetic learning algorithms are widely used to control ultrafast optical pulse shapes for photo-induced quantum control of atoms and molecules. An unresolved issue is how to use the solutions found by these algorithms to learn about the system's quantum dynamics. We propose a simple method based on covariance analysis of the control space, which can reveal the degrees of freedom in the effective control Hamiltonian. We have applied this technique to stimulated Raman scattering in liquid methanol. A simple model of two-mode stimulated Raman scattering is consistent with the results. (letter to the editor)
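
    The covariance analysis itself is straightforward to sketch: collect the control-parameter vectors of the good solutions found by the genetic learning algorithm and diagonalize their covariance matrix; the dominant eigenvectors indicate the effective control degrees of freedom. The data below are synthetic and the function name is an assumption.

      import numpy as np

      def principal_control_analysis(solutions):
          """PCA of control parameters of good solutions (rows: solutions, columns: parameters)."""
          X = solutions - solutions.mean(axis=0)
          cov = np.cov(X, rowvar=False)
          evals, evecs = np.linalg.eigh(cov)
          order = np.argsort(evals)[::-1]
          return evals[order], evecs[:, order]

      # synthetic example: 200 "good" 16-parameter pulse shapes that really vary along 2 directions
      rng = np.random.default_rng(3)
      basis = rng.normal(size=(2, 16))
      solutions = rng.normal(size=(200, 2)) @ basis + 0.05 * rng.normal(size=(200, 16))
      evals, evecs = principal_control_analysis(solutions)
      print("variance explained by the first 3 components:", np.round(evals[:3] / evals.sum(), 3))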

  12. Learning-based identification and iterative learning control of direct-drive robots

    NARCIS (Netherlands)

    Bukkems, B.H.M.; Kostic, D.; Jager, de A.G.; Steinbuch, M.

    2005-01-01

    A combination of model-based and Iterative Learning Control is proposed as a method to achieve high-quality motion control of direct-drive robots in repetitive motion tasks. We include both model-based and learning components in the total control law, as their individual properties influence the

  13. Mind map learning for advanced engineering study: case study in system dynamics

    Science.gov (United States)

    Woradechjumroen, Denchai

    2018-01-01

    System Dynamics (SD) is one of the subjects used in learning automatic control systems in the dynamics and control field. Mathematical modelling and solving skills for engineering systems are the expected outcomes of the course, which can be further used to study control systems and mechanical vibration efficiently; however, the fundamentals of SD require strong backgrounds in dynamics and differential equations, which suit students in governmental universities who have strong skills in mathematics and science. In private universities, students are weak in these subjects since they obtained a high vocational certificate from a technical college or polytechnic school, which emphasizes learning content in practice. To enhance their learning and improve their backgrounds, this paper applies mind-map-based problem-based learning to relate the essential mathematical and physical equations. With the advantages of mind maps, each student is assigned to design individual mind maps for self-learning development after they attend the class and learn the overall picture of each chapter from the class instructor. Four problem-based mind-map assignments are given to each student. Each assignment is evaluated via mid-term and final examinations, which are issued in terms of learning concepts and applications. In the method testing, thirty students are tested and evaluated against their past learning backgrounds. The results show that well-designed mind maps can improve learning performance based on outcome evaluation. In particular, mind maps can significantly reduce the time spent reviewing mathematics and physics for SD.

  14. INTELLIGENT FRACTIONAL ORDER ITERATIVE LEARNING CONTROL USING FEEDBACK LINEARIZATION FOR A SINGLE-LINK ROBOT

    Directory of Open Access Journals (Sweden)

    Iman Ghasemi

    2017-05-01

    Full Text Available In this paper, iterative learning control (ILC) is combined with an optimal fractional order derivative (BBO-Dα-type ILC) and an optimal fractional proportional-derivative (BBO-PDα-type ILC). In the update law of Arimoto's derivative iterative learning control, a first order derivative of the tracking error signal is used. In the proposed method, a fractional order derivative of the error signal, stated in terms of s^α where α is the fractional order, is used to update the iterative learning control law. Two types of fractional order iterative learning control, namely PDα-type ILC and Dα-type ILC, are obtained for different values of α. In order to improve the performance of the closed-loop control system, the coefficients of both learning laws, i.e. the proportional and derivative gains and the fractional order α, are optimized using the Biogeography-Based Optimization algorithm (BBO). The simulation results are compared with those of conventional fractional order iterative learning control to verify the effectiveness of the BBO-Dα-type ILC and BBO-PDα-type ILC
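
    A minimal Dα-type update can be sketched with a Grünwald-Letnikov approximation of the fractional derivative of the tracking error, applied to a toy first-order plant; the plant, the gains, the value of α and the one-sample shift are assumptions, and the BBO gain optimization of the paper is omitted.

      import numpy as np

      def gl_fractional_derivative(e, alpha, h):
          """Grunwald-Letnikov approximation of the alpha-order derivative of a sampled signal e."""
          w = np.ones(e.size)
          for j in range(1, e.size):
              w[j] = w[j - 1] * (1 - (alpha + 1) / j)      # (-1)^j * binomial(alpha, j)
          return np.array([w[:k + 1][::-1] @ e[:k + 1] for k in range(e.size)]) / h ** alpha

      def run_trial(u, h):
          # toy first-order plant sampled with a one-step input-output delay
          x, y = 0.0, np.zeros(u.size)
          for n in range(u.size):
              y[n] = x
              x += h * (-2.0 * x + u[n])
          return y

      N, h, alpha, gamma = 100, 0.02, 0.8, 0.4
      ref = np.sin(np.linspace(0.0, np.pi, N)) ** 2
      u = np.zeros(N)
      for k in range(40):
          e = ref - run_trial(u, h)
          d_e = gl_fractional_derivative(e, alpha, h)
          u[:-1] += gamma * d_e[1:]            # D^alpha-type update, shifted to match the plant delay
      print("max tracking error after learning:", float(np.max(np.abs(ref - run_trial(u, h)))))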

  15. Lessons Learned from the Crew Health Care System (CHeCS) Rack 1 Environmental Control and Life Support (ECLS) Design

    Science.gov (United States)

    Williams, David E.

    2006-01-01

    This paper will provide an overview of the International Space Station (ISS) Environmental Control and Life Support (ECLS) design of the Crew Health Care System (CHeCS) Rack 1 and it will document some of the lessons that have been learned to date for the ECLS equipment in this rack.

  16. Learning styles: The learning methods of air traffic control students

    Science.gov (United States)

    Jackson, Dontae L.

    In the world of aviation, air traffic controllers are an integral part in the overall level of safety that is provided. With a number of controllers reaching retirement age, the Air Traffic Collegiate Training Initiative (AT-CTI) was created to provide a stronger candidate pool. However, AT-CTI Instructors have found that a number of AT-CTI students are unable to memorize types of aircraft effectively. This study focused on the basic learning styles (auditory, visual, and kinesthetic) of students and created a teaching method to try to increase memorization in AT-CTI students. The participants were asked to take a questionnaire to determine their learning style. Upon knowing their learning styles, participants attended two classroom sessions. The participants were given a presentation in the first class, and divided into a control and experimental group for the second class. The control group was given the same presentation from the first classroom session while the experimental group had a group discussion and utilized Middle Tennessee State University's Air Traffic Control simulator to learn the aircraft types. Participants took a quiz and filled out a survey, which tested the new teaching method. An appropriate statistical analysis was applied to determine if there was a significant difference between the control and experimental groups. The results showed that even though the participants felt that the method increased their learning, there was no significant difference between the two groups.

  17. Chaos control of ferroresonance system based on RBF-maximum entropy clustering algorithm

    International Nuclear Information System (INIS)

    Liu Fan; Sun Caixin; Sima Wenxia; Liao Ruijin; Guo Fei

    2006-01-01

    With regard to the ferroresonance overvoltage of neutral-grounded power systems, a maximum-entropy learning algorithm based on radial basis function neural networks is used to control the chaotic system. The algorithm optimizes the objective function to derive the learning rule for the center vectors and uses the clustering function of the network's hidden layers. It improves the regression and learning ability of the neural network. Numerical experiments on the ferroresonance system demonstrate the effectiveness and feasibility of using the algorithm to control chaos in neutral-grounded systems

  18. The implementation of the situational control concept of information security in automated training systems

    Directory of Open Access Journals (Sweden)

    A. M. Chernih

    2016-01-01

    Full Text Available The main approaches to ensuring information security in automated training systems are considered: the need to apply situational control of information security in automated training systems is justified, a mathematical model and problem statement for situational control are offered, and a technique for situational control of information security is developed. The purpose of the study. The aim of the study is to justify the application of situational control of information security by the control and information protection subsystem in automated learning systems and to develop methods for implementing the situational control concept. Materials and methods. It is assumed that the automated learning system is a fragment of a larger information system that contains several information paths, each of which handles information of a different protection level, from information constituting state secrets to open-access information. It is estimated that technical methods, measures and means of information protection in automated learning systems implement less than half (30%) of the functions of the control and information protection subsystem; the main part of the functions of this subsystem consists of organizational measures to protect information. It is obvious that the task of ensuring information security in automated learning systems is associated with decisions on the rational selection and proper combination of technical methods and organizational arrangements. The conditions of practical application of automated learning systems change over time and transform the situation in which such decisions are made, and this leads to the use of situational control methods. When situational control is implemented, the task of protecting information in the automated learning system is solved by the control and information protection subsystem by distributing the processes ensuring the security of information and resources of

  19. Reinforcement Learning for Ramp Control: An Analysis of Learning Parameters

    Directory of Open Access Journals (Sweden)

    Chao Lu

    2016-08-01

    Full Text Available Reinforcement Learning (RL) has been proposed to deal with ramp control problems under dynamic traffic conditions; however, there is a lack of sufficient research on the behaviour and impacts of different learning parameters. This paper describes a ramp control agent based on the RL mechanism and thoroughly analyzes the influence of three learning parameters, namely the learning rate, the discount rate and the action selection parameter, on the algorithm's performance. Two indices for the learning speed and convergence stability were used to measure the algorithm performance, based on which a series of simulation-based experiments were designed and conducted by using a macroscopic traffic flow model. Simulation results showed that, compared with the discount rate, the learning rate and action selection parameter made more remarkable impacts on the algorithm performance. Based on the analysis, some suggestions about how to select suitable parameter values that can achieve a superior performance are provided.
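
    A minimal epsilon-greedy Q-learning agent makes the three studied parameters explicit; sweeping them and comparing learning speed and convergence stability only requires constructing agents with different values. The class below abstracts away the traffic simulation entirely, and its name and defaults are assumptions.

      import numpy as np

      class QLearningRampAgent:
          """Tabular Q-learning with explicit learning rate, discount rate and epsilon."""
          def __init__(self, n_states, n_actions, alpha=0.1, gamma=0.9, epsilon=0.1, seed=0):
              self.Q = np.zeros((n_states, n_actions))
              self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon
              self.rng = np.random.default_rng(seed)

          def act(self, s):
              # epsilon-greedy action selection (e.g. choosing a metering rate)
              if self.rng.random() < self.epsilon:
                  return int(self.rng.integers(self.Q.shape[1]))
              return int(np.argmax(self.Q[s]))

          def learn(self, s, a, r, s_next):
              # one-step Q-learning update
              target = r + self.gamma * np.max(self.Q[s_next])
              self.Q[s, a] += self.alpha * (target - self.Q[s, a])

      # a parameter sweep is then just a loop over (alpha, gamma, epsilon) settings
      agent = QLearningRampAgent(n_states=20, n_actions=5, alpha=0.2, gamma=0.95, epsilon=0.05)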

  20. Interactive Web-based e-learning for Studying Flexible Manipulator Systems

    Directory of Open Access Journals (Sweden)

    Abul K. M. Azad

    2008-03-01

    Full Text Available This paper presents a web-based e-learning facility for simulation, modeling, and control of flexible manipulator systems. The simulation and modeling part includes finite difference and finite element simulations along with neural network and genetic algorithm based modeling strategies for flexible manipulator systems. The controller part comprises a number of open-loop and closed-loop designs. Closed-loop control designs include classical, adaptive, and neuro-model based strategies. The Matlab software package and its associated toolboxes are used to implement these. The Matlab web server is used as the gateway between the facility and web access. ASP.NET technology and an SQL database are utilized to develop web applications for access control, user account and password maintenance, administrative management, and facility utilization monitoring. The reported facility provides a flexible but effective approach to a web-based interactive e-learning facility for an engineering system. This can be extended to incorporate additional engineering systems within the e-learning framework.

  1. Recurrent fuzzy neural network by using feedback error learning approaches for LFC in interconnected power system

    International Nuclear Information System (INIS)

    Sabahi, Kamel; Teshnehlab, Mohammad; Shoorhedeli, Mahdi Aliyari

    2009-01-01

    In this study, a new adaptive controller based on modified feedback error learning (FEL) approaches is proposed for the load frequency control (LFC) problem. The FEL strategy consists of intelligent and conventional controllers in the feedforward and feedback paths, respectively. In this strategy, a conventional feedback controller (CFC), i.e. a proportional, integral and derivative (PID) controller, is essential to guarantee global asymptotic stability of the overall system, and an intelligent feedforward controller (INFC) is adopted to learn the inverse of the controlled system. Therefore, when the INFC learns the inverse of the controlled system, the reference signal is tracked properly. Generally, the CFC is designed at the nominal operating conditions of the system and, therefore, fails to provide the best control performance as well as global stability over a wide range of changes in the operating conditions of the system. So, in this study a supervised controller (SC), a lookup-table-based controller, is introduced for tuning the CFC. During abrupt changes of the power system parameters, the SC adjusts the PID parameters according to these operating conditions. Moreover, to improve the performance of the overall system, a recurrent fuzzy neural network (RFNN) is adopted in the INFC instead of the conventional neural network used in past studies. The proposed FEL controller has been compared with the conventional feedback error learning controller (CFEL) and the PID controller through several performance indices
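
    The feedback error learning principle itself fits in a few lines: the fixed feedback controller guarantees stability while a feedforward term is trained using the feedback controller's own output as its error signal, so the feedforward part gradually takes over as it approximates the plant inverse. In the sketch below a PI controller stands in for the PID CFC and a linear-in-features model stands in for the RFNN; the plant, features and gains are assumptions.

      import numpy as np

      def fel_demo(steps=4000, dt=0.01, kp=3.0, ki=1.0, lr=0.05):
          x, integ = 0.0, 0.0
          w = np.zeros(3)                              # feedforward (inverse model) weights
          for k in range(steps):
              ref = np.sin(0.005 * k)
              dref = 0.005 * np.cos(0.005 * k) / dt    # reference derivative
              e = ref - x
              integ += e * dt
              u_fb = kp * e + ki * integ               # conventional feedback controller (CFC)
              phi = np.array([ref, dref, 1.0])         # features of the reference signal
              u_ff = float(w @ phi)                    # learned feedforward term (INFC stand-in)
              w += lr * dt * u_fb * phi                # FEL: feedback output is the training error
              x += dt * (-x + u_fb + u_ff)             # toy first-order plant
          return float(abs(ref - x)), np.round(w, 2)

      print(fel_demo())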

  2. Perceptual-motor skill learning in Gilles de la Tourette syndrome. Evidence for multiple procedural learning and memory systems.

    Science.gov (United States)

    Marsh, Rachel; Alexander, Gerianne M; Packard, Mark G; Zhu, Hongtu; Peterson, Bradley S

    2005-01-01

    Procedural learning and memory systems likely comprise several skills that are differentially affected by various illnesses of the central nervous system, suggesting their relative functional independence and reliance on differing neural circuits. Gilles de la Tourette syndrome (GTS) is a movement disorder that involves disturbances in the structure and function of the striatum and related circuitry. Recent studies suggest that patients with GTS are impaired in performance of a probabilistic classification task that putatively involves the acquisition of stimulus-response (S-R)-based habits. Assessing the learning of perceptual-motor skills and probabilistic classification in the same samples of GTS and healthy control subjects may help to determine whether these various forms of procedural (habit) learning rely on the same or differing neuroanatomical substrates and whether those substrates are differentially affected in persons with GTS. Therefore, we assessed perceptual-motor skill learning using the pursuit-rotor and mirror tracing tasks in 50 patients with GTS and 55 control subjects who had previously been compared at learning a task of probabilistic classifications. The GTS subjects did not differ from the control subjects in performance of either the pursuit rotor or mirror-tracing tasks, although they were significantly impaired in the acquisition of a probabilistic classification task. In addition, learning on the perceptual-motor tasks was not correlated with habit learning on the classification task in either the GTS or healthy control subjects. These findings suggest that the differing forms of procedural learning are dissociable both functionally and neuroanatomically. The specific deficits in the probabilistic classification form of habit learning in persons with GTS are likely to be a consequence of disturbances in specific corticostriatal circuits, but not the same circuits that subserve the perceptual-motor form of habit learning.

  3. Control Systems with Normalized and Covariance Adaptation by Optimal Control Modification

    Science.gov (United States)

    Nguyen, Nhan T. (Inventor); Burken, John J. (Inventor); Hanson, Curtis E. (Inventor)

    2016-01-01

    Disclosed is a novel adaptive control method and system called optimal control modification with normalization and covariance adjustment. The invention specifically addresses current challenges with adaptive control in these areas: 1) persistent excitation, 2) complex nonlinear input-output mapping, 3) large inputs and persistent learning, and 4) the lack of stability analysis tools for certification. The invention has been subject to many simulations and flight testing. The results substantiate the effectiveness of the invention and demonstrate its technical feasibility for use in modern aircraft flight control systems.

  4. Reinforcement learning techniques for controlling resources in power networks

    Science.gov (United States)

    Kowli, Anupama Sunil

    As power grids transition towards increased reliance on renewable generation, energy storage and demand response resources, an effective control architecture is required to harness the full functionalities of these resources. There is a critical need for control techniques that recognize the unique characteristics of the different resources and exploit the flexibility afforded by them to provide ancillary services to the grid. The work presented in this dissertation addresses these needs. Specifically, new algorithms are proposed, which allow control synthesis in settings wherein the precise distribution of the uncertainty and its temporal statistics are not known. These algorithms are based on recent developments in Markov decision theory, approximate dynamic programming and reinforcement learning. They impose minimal assumptions on the system model and allow the control to be "learned" based on the actual dynamics of the system. Furthermore, they can accommodate complex constraints such as capacity and ramping limits on generation resources, state-of-charge constraints on storage resources, comfort-related limitations on demand response resources and power flow limits on transmission lines. Numerical studies demonstrating applications of these algorithms to practical control problems in power systems are discussed. Results demonstrate how the proposed control algorithms can be used to improve the performance and reduce the computational complexity of the economic dispatch mechanism in a power network. We argue that the proposed algorithms are eminently suitable to develop operational decision-making tools for large power grids with many resources and many sources of uncertainty.

  5. Development of adaptive control applied to chaotic systems

    Science.gov (United States)

    Rhode, Martin Andreas

    1997-12-01

    Continuous-time derivative control and adaptive map-based recursive feedback control techniques are used to control chaos in a variety of systems and in situations that are of practical interest. The theoretical part of the research includes a review of the fundamental concepts of control theory in the context of its applications to deterministic chaotic systems, the development of a new adaptive algorithm to identify the linear system properties necessary for control, and the extension of the recursive proportional feedback control technique, RPF, to high dimensional systems. Chaos control was applied to models of a thermal pulsed combustor, electro-chemical dissolution and the hyperchaotic Rossler system. Important implications for combustion engineering were suggested by successful control of the model of the thermal pulsed combustor. The system was automatically tracked while maintaining control into regions of parameter and state space where no stable attractors exist. In a simulation of the electrochemical dissolution system, application of derivative control to stabilize a steady state, and adaptive RPF to stabilize a period one orbit, was demonstrated. The high dimensional adaptive control algorithm was applied in a simulation using the Rossler hyperchaotic system, where a period-two orbit with two unstable directions was stabilized and tracked over a wide range of a system parameter. In the experimental part, the electrochemical system was studied in parameter space by scanning the applied potential and the frequency of the rotating copper disk. The automated control algorithm is demonstrated to be effective when applied to stabilize a period-one orbit in the experiment. We show the necessity of small random perturbations applied to the system in order to both learn the dynamics and control the system at the same time. The simultaneous learning and control capability is shown to be an important part of the active feedback control.
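
    The flavour of map-based control of chaos with small parameter perturbations can be shown on the logistic map: the gain is obtained by linearizing around the unstable fixed point, and control is switched on only when the orbit wanders close to it. This is a textbook-style sketch under assumed values, not the adaptive RPF algorithm developed in the dissertation.

      import numpy as np

      # logistic map x' = r x (1 - x) with small feedback perturbations of the parameter r
      r0 = 3.8
      x_star = 1.0 - 1.0 / r0                 # unstable fixed point of the uncontrolled map
      f_x = 2.0 - r0                          # d f / d x  at the fixed point
      f_r = x_star * (1.0 - x_star)           # d f / d r  at the fixed point
      K = -f_x / f_r                          # proportional gain cancelling the local expansion

      x, history = 0.4, []
      for n in range(300):
          dr = 0.0
          if n > 100 and abs(x - x_star) < 0.02:          # engage control only near the target orbit
              dr = float(np.clip(K * (x - x_star), -0.1, 0.1))
          x = (r0 + dr) * x * (1.0 - x)
          history.append(x)

      tail = np.abs(np.array(history[-50:]) - x_star)
      print("mean |x - x*| over the last 50 steps:", float(tail.mean()))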

  6. A Web-Based Learning Support System for Inquiry-Based Learning

    Science.gov (United States)

    Kim, Dong Won; Yao, Jingtao

    The emergence of the Internet and Web technology makes it possible to implement the ideals of inquiry-based learning, in which students seek truth, information, or knowledge by questioning. Web-based learning support systems can provide a good framework for inquiry-based learning. This article presents a study on a Web-based learning support system called Online Treasure Hunt. The Web-based learning support system mainly consists of a teaching support subsystem, a learning support subsystem, and a treasure hunt game. The teaching support subsystem allows instructors to design their own inquiry-based learning environments. The learning support subsystem supports students' inquiry activities. The treasure hunt game enables students to investigate new knowledge, develop ideas, and review their findings. Online Treasure Hunt complies with a treasure hunt model. The treasure hunt model formalizes a general treasure hunt game to contain the learning strategies of inquiry-based learning. This Web-based learning support system, empowered by the online learning game and founded on sound learning strategies, furnishes students with an interactive and collaborative student-centered learning environment.

  7. Intelligent Web-Based Learning System with Personalized Learning Path Guidance

    Science.gov (United States)

    Chen, C. M.

    2008-01-01

    Personalized curriculum sequencing is an important research issue for web-based learning systems because no fixed learning paths will be appropriate for all learners. Therefore, many researchers focused on developing e-learning systems with personalized learning mechanisms to assist on-line web-based learning and adaptively provide learning paths…

  8. Machine learning systems

    Energy Technology Data Exchange (ETDEWEB)

    Forsyth, R

    1984-05-01

    With the dramatic rise of expert systems has come a renewed interest in the fuel that drives them: knowledge. For it is specialist knowledge which gives expert systems their power. But extracting knowledge from human experts in symbolic form has proved arduous and labour-intensive. So the idea of machine learning is enjoying a renaissance. Machine learning is any automatic improvement in the performance of a computer system over time, as a result of experience. Thus a learning algorithm seeks to do one or more of the following: cover a wider range of problems, deliver more accurate solutions, obtain answers more cheaply, and simplify codified knowledge. 6 references.

  9. Do questions help? The impact of audience response systems on medical student learning: a randomised controlled trial.

    Science.gov (United States)

    Mains, Tyler E; Cofrancesco, Joseph; Milner, Stephen M; Shah, Nina G; Goldberg, Harry

    2015-07-01

    Audience response systems (ARSs) are electronic devices that allow educators to pose questions during lectures and receive immediate feedback on student knowledge. The current literature on the effectiveness of ARSs is contradictory, and their impact on student learning remains unclear. This randomised controlled trial was designed to isolate the impact of ARSs on student learning and students' perception of ARSs during a lecture. First-year medical student volunteers at Johns Hopkins were randomly assigned to either (i) watch a recorded lecture on an unfamiliar topic in which three ARS questions were embedded or (ii) watch the same lecture without the ARS questions. Immediately after the lecture on 5 June 2012, and again 2 weeks later, both groups were asked to complete a questionnaire to assess their knowledge of the lecture content and satisfaction with the learning experience. 92 students participated. The mean (95% CI) initial knowledge assessment score was 7.63 (7.17 to 8.09) for the ARS group (N=45) and 6.39 (5.81 to 6.97) for the control group (N=47), p=0.001. Similarly, the second knowledge assessment mean score was 6.95 (6.38 to 7.52) for the ARS group and 5.88 (5.29 to 6.47) for the control group, p=0.001. The ARS group also reported higher levels of engagement and enjoyment. Embedding three ARS questions within a 30 min lecture increased students' knowledge immediately after the lecture and 2 weeks later. We hypothesise that this increase was due to forced information retrieval by students during the learning process, a form of the testing effect.

  10. Prototype learning and dissociable categorization systems in Alzheimer's disease.

    Science.gov (United States)

    Heindel, William C; Festa, Elena K; Ott, Brian R; Landy, Kelly M; Salmon, David P

    2013-08-01

    Recent neuroimaging studies suggest that prototype learning may be mediated by at least two dissociable memory systems depending on the mode of acquisition, with A/Not-A prototype learning dependent upon a perceptual representation system located within posterior visual cortex and A/B prototype learning dependent upon a declarative memory system associated with medial temporal and frontal regions. The degree to which patients with Alzheimer's disease (AD) can acquire new categorical information may therefore critically depend upon the mode of acquisition. The present study examined A/Not-A and A/B prototype learning in AD patients using procedures that allowed direct comparison of learning across tasks. Despite impaired explicit recall of category features in all tasks, patients showed differential patterns of category acquisition across tasks. First, AD patients demonstrated impaired prototype induction along with intact exemplar classification under incidental A/Not-A conditions, suggesting that the loss of functional connectivity within visual cortical areas disrupted the integration processes supporting prototype induction within the perceptual representation system. Second, AD patients demonstrated intact prototype induction but impaired exemplar classification during A/B learning under observational conditions, suggesting that this form of prototype learning is dependent on a declarative memory system that is disrupted in AD. Third, the surprisingly intact classification of both prototypes and exemplars during A/B learning under trial-and-error feedback conditions suggests that AD patients shifted control from their deficient declarative memory system to a feedback-dependent procedural memory system when training conditions allowed. Taken together, these findings serve to not only increase our understanding of category learning in AD, but to also provide new insights into the ways in which different memory systems interact to support the acquisition of

  11. Automatic generation control of multi-area power systems with diverse energy sources using Teaching Learning Based Optimization algorithm

    Directory of Open Access Journals (Sweden)

    Rabindra Kumar Sahu

    2016-03-01

    Full Text Available This paper presents the design and analysis of a Proportional-Integral-Double Derivative (PIDD controller for Automatic Generation Control (AGC of multi-area power systems with diverse energy sources using the Teaching Learning Based Optimization (TLBO algorithm. At first, a two-area reheat thermal power system with appropriate Generation Rate Constraint (GRC is considered. The design problem is formulated as an optimization problem and TLBO is employed to optimize the parameters of the PIDD controller. The superiority of the proposed TLBO based PIDD controller has been demonstrated by comparing the results with recently published optimization techniques such as hybrid Firefly Algorithm and Pattern Search (hFA-PS, Firefly Algorithm (FA, Bacteria Foraging Optimization Algorithm (BFOA, Genetic Algorithm (GA and conventional Ziegler-Nichols (ZN for the same interconnected power system. Also, the proposed approach has been extended to a two-area power system with diverse sources of generation like thermal, hydro, wind and diesel units. The system model includes boiler dynamics, GRC and Governor Dead Band (GDB non-linearity. It is observed from simulation results that the proposed approach provides better dynamic responses than those recently published in the literature. Further, the study is extended to a three unequal-area thermal power system with different controllers in each area and the results are compared with a published FA-optimized PID controller for the same system under study. Finally, sensitivity analysis is performed by varying the system parameters and operating load conditions in the range of ±25% from their nominal values to test the robustness.
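
    For readers unfamiliar with TLBO, the sketch below shows its two phases (teacher and learner) in a generic form. The cost function is only a placeholder; in the paper it would be a time-domain performance index (e.g. ITAE) obtained by simulating the two-area AGC model with the candidate PIDD gains. Population size, bounds and iteration count are illustrative assumptions.

      import random

      def cost(x):
          # stand-in for the AGC performance index evaluated by simulating the system
          return sum(xi ** 2 for xi in x)

      DIM, POP, ITERS = 4, 20, 100             # e.g. 4 controller gains to be tuned
      LOW, HIGH = -2.0, 2.0

      pop = [[random.uniform(LOW, HIGH) for _ in range(DIM)] for _ in range(POP)]

      def clamp(x):
          return [min(max(v, LOW), HIGH) for v in x]

      for _ in range(ITERS):
          scores = [cost(x) for x in pop]
          teacher = pop[scores.index(min(scores))]
          mean = [sum(x[d] for x in pop) / POP for d in range(DIM)]

          # teacher phase: move every learner towards the teacher, away from the class mean
          for i in range(POP):
              tf = random.choice([1, 2])                       # teaching factor
              cand = clamp([pop[i][d] + random.random() * (teacher[d] - tf * mean[d])
                            for d in range(DIM)])
              if cost(cand) < scores[i]:
                  pop[i], scores[i] = cand, cost(cand)

          # learner phase: learn pairwise from a randomly chosen classmate
          for i in range(POP):
              j = random.choice([k for k in range(POP) if k != i])
              sign = 1.0 if scores[j] < scores[i] else -1.0
              cand = clamp([pop[i][d] + sign * random.random() * (pop[j][d] - pop[i][d])
                            for d in range(DIM)])
              if cost(cand) < scores[i]:
                  pop[i], scores[i] = cand, cost(cand)

      best = pop[scores.index(min(scores))]
      print("best gains:", [round(v, 4) for v in best], "cost:", round(min(scores), 6))

    Unlike GA or BFOA, TLBO needs no algorithm-specific tuning parameters beyond population size and iteration count, which is part of its appeal for controller tuning.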

  12. User's manual of self learning gas puffing system for plasma density control

    International Nuclear Information System (INIS)

    Tanahashi, S.

    1989-04-01

    Pre-programmed gas puffing is often used to obtain adequate plasma density waveforms in pulse-operated devices for fusion experiments. This method has the defect that preset values have to be adjusted manually in accordance with changes of the outgassing rate over successive shots. In order to remove this defect, a self-learning system has been developed so as to keep the plasma density close to a given reference waveform. After a few successive shots, it accomplishes self-learning and is ready to keep up with a gradual change of the wall condition. This manual gives the usage of the system and the program list written in BASIC and ASSEMBLER languages. (author)
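
    The shot-to-shot refinement described in the manual can be caricatured in a few lines: the preset puff waveform is corrected after every discharge by the density error against the reference, so it automatically tracks a slow drift of the wall outgassing. The toy particle balance, gain and drift below are assumptions for illustration, not the actual system (whose implementation is in BASIC and ASSEMBLER).

      N_SAMPLES = 50                  # time points in one discharge
      GAIN = 0.4                      # learning gain applied to the density error
      reference = [2.0] * N_SAMPLES   # desired density waveform (arbitrary units)
      preset = [0.0] * N_SAMPLES      # gas-puff command, refined shot by shot

      def run_shot(preset, outgassing):
          # toy plasma response: density follows the puff plus a slowly drifting wall source
          density, d = [], 0.0
          for u in preset:
              d += 0.3 * (u + outgassing) - 0.1 * d     # crude particle balance
              density.append(d)
          return density

      for shot in range(1, 11):
          outgassing = 0.1 + 0.02 * shot                # gradual change of the wall condition
          density = run_shot(preset, outgassing)
          # after the shot, correct each preset point by the local density error
          preset = [u + GAIN * (r - d) for u, r, d in zip(preset, reference, density)]
          max_err = max(abs(r - d) for r, d in zip(reference, density))
          print(f"shot {shot:2d}: max density error = {max_err:.3f}")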

  13. Sensorless speed control of switched reluctance motor using brain emotional learning based intelligent controller

    International Nuclear Information System (INIS)

    Dehkordi, Behzad Mirzaeian; Parsapoor, Amir; Moallem, Mehdi; Lucas, Caro

    2011-01-01

    In this paper, a brain emotional learning based intelligent controller (BELBIC) is developed to control the switched reluctance motor (SRM) speed. Like other intelligent controllers, BELBIC is model-free and suitable for controlling nonlinear systems. Motor parameter changes, operating point changes, measurement noise, open circuit fault in one phase and asymmetric phases in SRM are also simulated to show the robustness and superior performance of BELBIC. To compare the BELBIC performance with other intelligent controllers, a Fuzzy Logic Controller (FLC) is developed. System responses with BELBIC and FLC are compared. Furthermore, by eliminating the position sensor, a method is introduced to estimate the rotor position. This method is based on the Adaptive Neuro Fuzzy Inference System (ANFIS). The estimator inputs are the four phase flux linkages. The suggested rotor position estimator is simulated under different conditions. Simulation results confirm accurate rotor position estimation at different loads and speeds.

  14. Sensorless speed control of switched reluctance motor using brain emotional learning based intelligent controller

    Energy Technology Data Exchange (ETDEWEB)

    Dehkordi, Behzad Mirzaeian, E-mail: mirzaeian@eng.ui.ac.i [Department of Electrical Engineering, Faculty of Engineering, University of Isfahan, Hezar-Jerib St., Postal code 8174673441, Isfahan (Iran, Islamic Republic of); Parsapoor, Amir, E-mail: amirparsapoor@yahoo.co [Department of Electrical Engineering, Faculty of Engineering, University of Isfahan, Hezar-Jerib St., Postal code 8174673441, Isfahan (Iran, Islamic Republic of); Moallem, Mehdi, E-mail: moallem@cc.iut.ac.i [Department of Electrical Engineering, Isfahan University of Technology, Isfahan (Iran, Islamic Republic of); Lucas, Caro, E-mail: lucas@ut.ac.i [Centre of Excellence for Control and Intelligent Processing, Electrical and Computer Engineering Faculty, College of Engineering, University of Tehran, Tehran (Iran, Islamic Republic of)

    2011-01-15

    In this paper, a brain emotional learning based intelligent controller (BELBIC) is developed to control the switched reluctance motor (SRM) speed. Like other intelligent controllers, BELBIC is model-free and suitable for controlling nonlinear systems. Motor parameter changes, operating point changes, measurement noise, open circuit fault in one phase and asymmetric phases in SRM are also simulated to show the robustness and superior performance of BELBIC. To compare the BELBIC performance with other intelligent controllers, a Fuzzy Logic Controller (FLC) is developed. System responses with BELBIC and FLC are compared. Furthermore, by eliminating the position sensor, a method is introduced to estimate the rotor position. This method is based on the Adaptive Neuro Fuzzy Inference System (ANFIS). The estimator inputs are the four phase flux linkages. The suggested rotor position estimator is simulated under different conditions. Simulation results confirm accurate rotor position estimation at different loads and speeds.

  15. Mosaic model for sensorimotor learning and control.

    Science.gov (United States)

    Haruno, M; Wolpert, D M; Kawato, M

    2001-10-01

    Humans demonstrate a remarkable ability to generate accurate and appropriate motor behavior under many different and often uncertain environmental conditions. We previously proposed a new modular architecture, the modular selection and identification for control (MOSAIC) model, for motor learning and control based on multiple pairs of forward (predictor) and inverse (controller) models. The architecture simultaneously learns the multiple inverse models necessary for control as well as how to select the set of inverse models appropriate for a given environment. It combines both feedforward and feedback sensorimotor information so that the controllers can be selected both prior to movement and subsequently during movement. This article extends and evaluates the MOSAIC architecture in the following respects. First, the learning in the architecture was implemented with both the original gradient-descent method and the expectation-maximization (EM) algorithm. Unlike gradient descent, the newly derived EM algorithm is robust to the initial starting conditions and learning parameters. Second, simulations of an object manipulation task prove that the architecture can learn to manipulate multiple objects and switch between them appropriately. Moreover, after learning, the model shows generalization to novel objects whose dynamics lie within the polyhedra of already learned dynamics. Finally, when each of the dynamics is associated with a particular object shape, the model is able to select the appropriate controller before movement execution. When presented with a novel shape-dynamic pairing, inappropriate activation of modules is observed followed by on-line correction.
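
    A stripped-down sketch of the MOSAIC idea is given below, for a toy one-dimensional object with two candidate modules; this is an illustration under assumed values, not the authors' implementation. Each module pairs a forward model, which predicts the sensory consequence of the efference copy of the executed command, with an inverse model that proposes a motor command; a softmax over prediction errors yields responsibility signals that both weight the commands and gate learning.

      import math

      DT, SIGMA, LR = 0.1, 0.2, 0.2
      TRUE_MASS = 2.0                     # mass of the object actually being manipulated
      TARGET_ACC = 1.0                    # desired acceleration of the object

      mass_est = [1.0, 3.0]               # each module's internal model of the object
      resp = [0.5, 0.5]                   # responsibility signals (start uninformed)
      v = 0.0                             # object velocity

      for step in range(300):
          # inverse models propose forces; the responsibility-weighted sum is executed
          forces = [m * TARGET_ACC for m in mass_est]
          u = sum(r * f for r, f in zip(resp, forces))
          v_next = v + DT * u / TRUE_MASS                       # real object dynamics

          # forward models predict the consequence of the executed command
          preds = [v + DT * u / m for m in mass_est]
          lik = [math.exp(-((v_next - p) ** 2) / (2 * SIGMA ** 2)) for p in preds]
          resp = [l / sum(lik) for l in lik]                    # softmax responsibilities

          # responsibility-gated adaptation of each forward model
          for i, p in enumerate(preds):
              grad = -DT * u / mass_est[i] ** 2                 # d(prediction)/d(mass_est)
              mass_est[i] += LR * resp[i] * (v_next - p) * grad
          v = v_next

      print("mass estimates:", [round(m, 2) for m in mass_est],
            "responsibilities:", [round(r, 2) for r in resp])

    In this toy run the module whose estimate is closer to the true object adapts fastest and accumulates responsibility, which is the specialization-through-gating mechanism the abstract describes.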

  16. Recommender Systems for Learning

    CERN Document Server

    Manouselis, Nikos; Verbert, Katrien; Duval, Erik

    2013-01-01

    Technology enhanced learning (TEL) aims to design, develop and test sociotechnical innovations that will support and enhance learning practices of both individuals and organisations. It is therefore an application domain that generally covers technologies that support all forms of teaching and learning activities. Since information retrieval (in terms of searching for relevant learning resources to support teachers or learners) is a pivotal activity in TEL, the deployment of recommender systems has attracted increased interest. This brief attempts to provide an introduction to recommender systems for TEL settings, as well as to highlight their particularities compared to recommender systems for other application domains.

  17. Computer simulation of nuclear reactor control by means of heuristic learning controller

    International Nuclear Information System (INIS)

    Bubak, M.; Moscinski, J.

    1976-01-01

    A trial application of two Artificial Intelligence techniques, heuristic programming and learning machines theory, to nuclear reactor control is presented. Given the complexity of the mathematical models that describe nuclear reactors satisfactorily, the changes in model parameter values during operation, and the limited accuracy with which some parameter values are known, difficulties arise in applying the classical approach to the design of control systems for these objects. The classical approach consists in defining the set of permissible control actions on the basis of a chosen performance index and the mathematical model of the object. Artificial Intelligence methods enable the construction of a control system that acquires, during operation, information that is a priori inaccessible and uses it to change its actions so that the control becomes optimal. Applying these methods, we have elaborated a reactor power control system. The integral of the squared error has been taken as the performance index. Only the set power trajectory, the reactor power and the control rod position are accessible to the control system. The set power trajectory has been divided into time intervals called heuristic intervals. At the beginning of every heuristic interval, on the basis of the experience obtained so far, the control system chooses the optimum control from the control (heuristic) set. The heuristic set is the set of relations between the control rod rate and the state variables, the set and the obtained power, similar to the simplifications applied by nuclear reactor operators. The results obtained for different control rod rates and different reactors (simulated on a digital computer) show the proper operation of the system. (author)

  18. Self-teaching neural network learns difficult reactor control problem

    International Nuclear Information System (INIS)

    Jouse, W.C.

    1989-01-01

    A self-teaching neural network used as an adaptive controller quickly learns to control an unstable reactor configuration. The network models the behavior of a human operator. It is trained by allowing it to operate the reactivity control impulsively. It is punished whenever either the power or the fuel temperature strays outside technical limits. Using a simple paradigm, the network constructs an internal representation of the punishment and of the reactor system. The reactor is constrained to small power orbits.

  19. Optimal Control via Reinforcement Learning with Symbolic Policy Approximation

    NARCIS (Netherlands)

    Kubalík, Jiří; Alibekov, Eduard; Babuska, R.; Dochain, Denis; Henrion, Didier; Peaucelle, Dimitri

    2017-01-01

    Model-based reinforcement learning (RL) algorithms can be used to derive optimal control laws for nonlinear dynamic systems. With continuous-valued state and input variables, RL algorithms have to rely on function approximators to represent the value function and policy mappings. This paper

  20. Reversal Learning in Humans and Gerbils: Dynamic Control Network Facilitates Learning.

    Science.gov (United States)

    Jarvers, Christian; Brosch, Tobias; Brechmann, André; Woldeit, Marie L; Schulz, Andreas L; Ohl, Frank W; Lommerzheim, Marcel; Neumann, Heiko

    2016-01-01

    Biologically plausible modeling of behavioral reinforcement learning tasks has seen great improvements over the past decades. Less work has been dedicated to tasks involving contingency reversals, i.e., tasks in which the original behavioral goal is reversed one or multiple times. The ability to adjust to such reversals is a key element of behavioral flexibility. Here, we investigate the neural mechanisms underlying contingency-reversal tasks. We first conduct experiments with humans and gerbils to demonstrate memory effects, including multiple reversals in which subjects (humans and animals) show a faster learning rate when a previously learned contingency re-appears. Motivated by recurrent mechanisms of learning and memory for object categories, we propose a network architecture which involves reinforcement learning to steer an orienting system that monitors the success in reward acquisition. We suggest that a model sensory system provides feature representations which are further processed by category-related subnetworks which constitute a neural analog of expert networks. Categories are selected dynamically in a competitive field and predict the expected reward. Learning occurs in sequentialized phases to selectively focus the weight adaptation to synapses in the hierarchical network and modulate their weight changes by a global modulator signal. The orienting subsystem itself learns to bias the competition in the presence of continuous monotonic reward accumulation. In case of sudden changes in the discrepancy of predicted and acquired reward the activated motor category can be switched. We suggest that this subsystem is composed of a hierarchically organized network of dis-inhibitory mechanisms, dubbed a dynamic control network (DCN), which resembles components of the basal ganglia. The DCN selectively activates an expert network, corresponding to the current behavioral strategy. The trace of the accumulated reward is monitored such that large sudden

  1. Communication and control tools, systems, and new dimensions

    CERN Document Server

    MacDougall, Robert; Cummings, Kevin

    2015-01-01

    Communication and Control: Tools, Systems, and New Dimensions advocates a systems view of human communication in a time of intelligent, learning machines. This edited collection sheds new light on things as mundane yet still profoundly consequential (and seemingly "low-tech") today as push buttons, pagers and telemarketing systems. Contributors also investigate aspects of "remote control" related to education, organizational design, artificial intelligence, cyberwarfare

  2. Neural network-based model reference adaptive control system.

    Science.gov (United States)

    Patino, H D; Liu, D

    2000-01-01

    In this paper, an approach to model reference adaptive control based on neural networks is proposed and analyzed for a class of first-order continuous-time nonlinear dynamical systems. The controller structure can employ either a radial basis function network or a feedforward neural network to compensate adaptively the nonlinearities in the plant. A stable controller-parameter adjustment mechanism, which is determined using the Lyapunov theory, is constructed using a sigma-modification-type updating law. The evaluation of control error in terms of the neural network learning error is performed. That is, the control error converges asymptotically to a neighborhood of zero, whose size is evaluated and depends on the approximation error of the neural network. In the design and analysis of neural network-based control systems, it is important to take into account the neural network learning error and its influence on the control error of the plant. Simulation results showing the feasibility and performance of the proposed approach are given.
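
    A minimal sketch of this kind of controller is given below for an assumed first-order plant; the nonlinearity, gains and RBF grid are illustrative choices, not taken from the paper. The control combines the reference-model derivative, an error-feedback term and an adaptive RBF compensation, and the weights follow a sigma-modification update law so that they remain bounded in the presence of approximation error.

      import math

      DT, T_END = 0.001, 20.0
      K, GAMMA, SIGMA_MOD = 5.0, 50.0, 0.01
      AM = 2.0                                          # reference-model pole

      centres = [-2.0 + 0.5 * i for i in range(9)]      # RBF centres on [-2, 2]
      W = [0.0] * len(centres)                          # adaptive output weights

      def phi(x):                                       # Gaussian radial basis functions
          return [math.exp(-4.0 * (x - c) ** 2) for c in centres]

      def f_true(x):                                    # plant nonlinearity, unknown to the controller
          return x ** 2 * math.sin(x)

      x, xm, t = 0.5, 0.0, 0.0
      while t < T_END:
          r = math.sin(0.5 * t)                         # reference command
          xm_dot = -AM * (xm - r)                       # reference model
          e = x - xm
          p = phi(x)
          u = xm_dot - K * e - sum(w * pi for w, pi in zip(W, p))
          x += DT * (f_true(x) + u)                     # plant integration (Euler)
          xm += DT * xm_dot
          # sigma-modification update law: W_dot = GAMMA * (e * phi(x) - SIGMA_MOD * W)
          W = [w + DT * GAMMA * (e * pi - SIGMA_MOD * w) for w, pi in zip(W, p)]
          t += DT

      print(f"final tracking error: {x - xm:.4f}")

    The sigma term trades a small residual tracking error for guaranteed boundedness of the weights, which is exactly the role the abstract assigns to the Lyapunov-based adjustment mechanism.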

  3. Modeling learning technology systems as business systems

    NARCIS (Netherlands)

    Avgeriou, Paris; Retalis, Symeon; Papaspyrou, Nikolaos

    2003-01-01

    The design of Learning Technology Systems, and the Software Systems that support them, is largely conducted on an intuitive, ad hoc basis, thus resulting in inefficient systems that defectively support the learning process. There is now justifiable, increasing effort in formalizing the engineering

  4. Synergetic motor control paradigm for optimizing energy efficiency of multijoint reaching via tacit learning.

    Science.gov (United States)

    Hayashibe, Mitsuhiro; Shimoda, Shingo

    2014-01-01

    A human motor system can improve its behavior toward optimal movement. The skeletal system has more degrees of freedom than the task dimensions, which incurs an ill-posed problem. The multijoint system involves complex interaction torques between joints. To produce optimal motion in terms of energy consumption, the so-called cost function based optimization has been commonly used in previous works. Even if it is a fact that an optimal motor pattern is employed phenomenologically, there is no evidence that shows the existence of a physiological process that is similar to such a mathematical optimization in our central nervous system. In this study, we aim to find a more primitive computational mechanism with a modular configuration to realize adaptability and optimality without prior knowledge of system dynamics. We propose a novel motor control paradigm based on tacit learning with task space feedback. The accumulation of motor commands during repetitive environmental interactions plays a major role in the learning process. It is applied to a vertical cyclic reaching which involves complex interaction torques. We evaluated whether the proposed paradigm can learn how to optimize solutions with a 3-joint, planar biomechanical model. The results demonstrate that the proposed method was valid for acquiring motor synergy and resulted in energy-efficient solutions for different load conditions. The case in feedback control is largely affected by the interaction torques. In contrast, the trajectory is corrected over time with tacit learning toward optimal solutions. Energy-efficient solutions were obtained by the emergence of motor synergy. During learning, the contribution from the feedforward controller is augmented and the one from the feedback controller is significantly reduced, down to 12% for no load at hand and 16% for a 0.5 kg load condition. The proposed paradigm could provide an optimization process in a redundant system with a dynamic-model-free and cost-function-free approach.
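
    The core of the tacit-learning rule is easy to state: the executed command is the sum of a feedforward and a feedback term, and the feedforward term simply accumulates the feedback command produced at the same phase of the repeated movement. The 1-DOF cyclic tracking sketch below (an assumed plant, gains and accumulation rate, not the 3-joint model of the paper) shows the feedback effort shrinking over cycles as the accumulated feedforward takes over.

      import math

      DT, CYCLE_STEPS, N_CYCLES = 0.01, 200, 30
      KP, KD, BETA = 40.0, 8.0, 0.1             # PD gains and accumulation rate
      M, B, G = 1.0, 2.0, 5.0                    # mass, damping, constant load

      u_ff = [0.0] * CYCLE_STEPS                 # feedforward command indexed by cycle phase
      q, qd = 0.0, 0.0
      period = CYCLE_STEPS * DT

      for cycle in range(N_CYCLES):
          fb_effort = 0.0
          for k in range(CYCLE_STEPS):
              t = k * DT
              q_ref = 0.5 * math.sin(2 * math.pi * t / period)
              qd_ref = 0.5 * (2 * math.pi / period) * math.cos(2 * math.pi * t / period)
              u_fb = KP * (q_ref - q) + KD * (qd_ref - qd)
              u = u_ff[k] + u_fb
              u_ff[k] += BETA * u_fb             # tacit accumulation into the feedforward term
              qdd = (u - B * qd - G) / M         # simple second-order plant with constant load
              qd += DT * qdd
              q += DT * qd
              fb_effort += abs(u_fb)
          if (cycle + 1) % 5 == 0:
              print(f"cycle {cycle + 1:2d}: feedback effort = {fb_effort:.1f}")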

  5. Synergetic motor control paradigm for optimizing energy efficiency of multijoint reaching via tacit learning

    Science.gov (United States)

    Hayashibe, Mitsuhiro; Shimoda, Shingo

    2014-01-01

    A human motor system can improve its behavior toward optimal movement. The skeletal system has more degrees of freedom than the task dimensions, which incurs an ill-posed problem. The multijoint system involves complex interaction torques between joints. To produce optimal motion in terms of energy consumption, the so-called cost function based optimization has been commonly used in previous works. Even if it is a fact that an optimal motor pattern is employed phenomenologically, there is no evidence that shows the existence of a physiological process that is similar to such a mathematical optimization in our central nervous system. In this study, we aim to find a more primitive computational mechanism with a modular configuration to realize adaptability and optimality without prior knowledge of system dynamics. We propose a novel motor control paradigm based on tacit learning with task space feedback. The accumulation of motor commands during repetitive environmental interactions plays a major role in the learning process. It is applied to a vertical cyclic reaching which involves complex interaction torques. We evaluated whether the proposed paradigm can learn how to optimize solutions with a 3-joint, planar biomechanical model. The results demonstrate that the proposed method was valid for acquiring motor synergy and resulted in energy-efficient solutions for different load conditions. The case in feedback control is largely affected by the interaction torques. In contrast, the trajectory is corrected over time with tacit learning toward optimal solutions. Energy-efficient solutions were obtained by the emergence of motor synergy. During learning, the contribution from the feedforward controller is augmented and the one from the feedback controller is significantly reduced, down to 12% for no load at hand and 16% for a 0.5 kg load condition. The proposed paradigm could provide an optimization process in a redundant system with a dynamic-model-free and cost-function-free approach.

  6. Network Traffic Features for Anomaly Detection in Specific Industrial Control System Network

    Directory of Open Access Journals (Sweden)

    Matti Mantere

    2013-09-01

    Full Text Available The deterministic and restricted nature of industrial control system networks sets them apart from more open networks, such as local area networks in office environments. This improves the usability of network security monitoring approaches that would be less feasible in more open environments. One such approach is machine learning based anomaly detection. Without proper customization for the special requirements of the industrial control system network environment, many existing anomaly or misuse detection systems will perform sub-optimally. A machine learning based approach could reduce the amount of manual customization required for different industrial control system networks. In this paper we analyze a possible set of features to be used in a machine learning based anomaly detection system in the real world industrial control system network environment under investigation. The network under investigation is represented by architectural drawing and results derived from network trace analysis. The network trace is captured from a live running industrial process control network and includes both control data and the data flowing between the control network and the office network. We limit the investigation to the IP traffic in the traces.
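
    As an illustration of the kind of features such a detector might consume, the sketch below computes simple per-flow statistics (packet counts, size uniformity, inter-arrival periodicity, port diversity) from already-parsed packet records. The field names and example values are placeholders, not the feature set or trace used in the paper.

      from collections import defaultdict
      from statistics import mean, pstdev

      # packet records assumed to have been parsed from the capture beforehand
      packets = [
          {"src": "10.0.0.2", "dst": "10.0.0.9", "dport": 502, "size": 64,  "ts": 0.00},
          {"src": "10.0.0.2", "dst": "10.0.0.9", "dport": 502, "size": 64,  "ts": 0.10},
          {"src": "10.0.0.2", "dst": "10.0.0.9", "dport": 502, "size": 64,  "ts": 0.20},
          {"src": "10.0.0.5", "dst": "10.0.0.9", "dport": 80,  "size": 900, "ts": 0.05},
      ]

      def flow_features(packets):
          # group packets into (src, dst) flows and compute simple statistical features
          flows = defaultdict(list)
          for p in packets:
              flows[(p["src"], p["dst"])].append(p)
          features = {}
          for key, pkts in flows.items():
              sizes = [p["size"] for p in pkts]
              times = sorted(p["ts"] for p in pkts)
              gaps = [b - a for a, b in zip(times, times[1:])] or [0.0]
              features[key] = {
                  "pkt_count": len(pkts),
                  "mean_size": mean(sizes),
                  "size_jitter": pstdev(sizes),        # ICS traffic tends to be uniform
                  "mean_interarrival": mean(gaps),     # and highly periodic
                  "distinct_ports": len({p["dport"] for p in pkts}),
              }
          return features

      for flow, feats in flow_features(packets).items():
          print(flow, feats)

    Because control traffic is so regular, even these low-dimensional features tend to separate routine polling from unexpected flows, which is what makes the restricted ICS environment attractive for learning-based detection.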

  7. Iterative learning control an optimization paradigm

    CERN Document Server

    Owens, David H

    2016-01-01

    This book develops a coherent theoretical approach to algorithm design for iterative learning control based on the use of optimization concepts. Concentrating initially on linear, discrete-time systems, the author gives the reader access to theories based on either signal or parameter optimization. Although the two approaches are shown to be related in a formal mathematical sense, the text presents them separately because their relevant algorithm design issues are distinct and give rise to different performance capabilities. Together with algorithm design, the text demonstrates that there are new algorithms that are capable of incorporating input and output constraints, enable the algorithm to reconfigure systematically in order to meet the requirements of different reference signals and also to support new algorithms for local convergence of nonlinear iterative control. Simulation and application studies are used to illustrate algorithm properties and performance in systems like gantry robots and other elect...
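
    In the spirit of the optimization-based formulation, the following sketch applies a gradient (steepest-descent) ILC update in the lifted-system setting, u_{k+1} = u_k + beta * G^T e_k, to an assumed first-order discrete-time plant. The plant, reference and step size are illustrative; the book's norm-optimal and constrained algorithms are not reproduced here.

      N = 50                                     # trial length
      A, B, C = 0.9, 0.1, 1.0                    # simple first-order discrete-time plant

      # lifted plant matrix of Markov parameters: G[i][j] = C * A^(i-j) * B for j <= i
      G = [[C * (A ** (i - j)) * B if j <= i else 0.0 for j in range(N)] for i in range(N)]

      r = [1.0] * N                              # reference trajectory for every trial
      u = [0.0] * N                              # control signal, refined trial by trial
      beta = 0.5                                 # learning step size

      def apply(G, u):
          return [sum(G[i][j] * u[j] for j in range(N)) for i in range(N)]

      for trial in range(1, 31):
          e = [ri - yi for ri, yi in zip(r, apply(G, u))]
          # steepest-descent update in the trial domain: u <- u + beta * G^T e
          Gt_e = [sum(G[i][j] * e[i] for i in range(N)) for j in range(N)]
          u = [uj + beta * gj for uj, gj in zip(u, Gt_e)]
          if trial % 10 == 0:
              err_norm = sum(ei * ei for ei in e) ** 0.5
              print(f"trial {trial}: tracking error norm = {err_norm:.4f}")

    For this choice of beta the trial-to-trial error satisfies e_{k+1} = (I - beta G G^T) e_k, so the error norm decreases monotonically from one trial to the next, which is the convergence behaviour the optimization viewpoint is designed to guarantee.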

  8. A Project-Based Laboratory for Learning Embedded System Design with Industry Support

    Science.gov (United States)

    Lee, Chyi-Shyong; Su, Juing-Huei; Lin, Kuo-En; Chang, Jia-Hao; Lin, Gu-Hong

    2010-01-01

    A project-based laboratory for learning embedded system design with support from industry is presented in this paper. The aim of this laboratory is to motivate students to learn the building blocks of embedded systems and practical control algorithms by constructing a line-following robot using the quadratic interpolation technique to predict the…

  9. Biological Systems Thinking for Control Engineering Design

    Directory of Open Access Journals (Sweden)

    D. J. Murray-Smith

    2004-01-01

    Full Text Available Artificial neural networks and genetic algorithms are often quoted in discussions about the contribution of biological systems thinking to engineering design. This paper reviews work on the neuromuscular system, a field in which biological systems thinking could make specific contributions to the development and design of automatic control systems for mechatronics and robotics applications. The paper suggests some specific areas in which a better understanding of this biological control system could be expected to contribute to control engineering design methods in the future. Particular emphasis is given to the nonlinear nature of elements within the neuromuscular system and to processes of neural signal processing, sensing and system adaptivity. Aspects of the biological system that are of particular significance for engineering control systems include sensor fusion, sensor redundancy and parallelism, together with advanced forms of signal processing for adaptive and learning control

  10. Accelerator control systems without minicomputers

    International Nuclear Information System (INIS)

    Altaber, J.; Beck, F.; Rausch, R.

    1980-01-01

    A paper given last year described in general terms a plan for the control of a large machine using assemblies of microcomputer units which simulate a conventional minicomputer by multiprocessing. In every other way the SPS control philosophy is followed. The design of a model assembly has allowed us to learn something about the protocols needed inside and between assemblies, as well as to assess more accurately what level of technology it is reasonable to apply. In any control system of this kind it would be desirable to allow engineering contributions from a variety of sources, and yet ensure the homogeneity needed for the system to remain reliable and comprehensible. Methods of achieving this are discussed. (Auth.)

  11. Action Control, L2 Motivational Self System, and Motivated Learning Behavior in a Foreign Language Learning Context

    Science.gov (United States)

    Khany, Reza; Amiri, Majid

    2018-01-01

    Theoretical developments in second or foreign language motivation research have led to a better understanding of the convoluted nature of motivation in the process of language acquisition. Among these theories, action control theory has recently shown a good deal of explanatory power in second language learning contexts and in the presence of…

  12. Self-Learning Power Control in Wireless Sensor Networks.

    Science.gov (United States)

    Chincoli, Michele; Liotta, Antonio

    2018-01-27

    Current trends in interconnecting myriad smart objects to monetize on Internet of Things applications have led to high-density communications in wireless sensor networks. This aggravates the already over-congested unlicensed radio bands, calling for new mechanisms to improve spectrum management and energy efficiency, such as transmission power control. Existing protocols are based on simplistic heuristics that often approach interference problems (i.e., packet loss, delay and energy waste) by increasing power, leading to detrimental results. The scope of this work is to investigate how machine learning may be used to bring wireless nodes to the lowest possible transmission power level and, in turn, to respect the quality requirements of the overall network. Lowering transmission power has benefits in terms of both energy consumption and interference. We propose a protocol of transmission power control through a reinforcement learning process that we have set in a multi-agent system. The agents are independent learners using the same exploration strategy and reward structure, leading to an overall cooperative network. The simulation results show that the system converges to an equilibrium where each node transmits at the minimum power while respecting high packet reception ratio constraints. Consequently, the system benefits from low energy consumption and packet delay.
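
    The per-node learning loop can be sketched as follows; the radio model, reward shape and constants are assumptions for illustration, not the protocol of the paper. Each node is an independent learner over a small set of power levels, rewarded for meeting a packet-reception constraint at the lowest possible transmission power, and the independent learners tend to settle at the lowest level that still meets the constraint under mutual interference.

      import random

      POWER_LEVELS = list(range(8))            # discrete transmission power settings
      ALPHA, GAMMA, EPS = 0.2, 0.9, 0.1

      def packet_reception_ratio(power, interference):
          # toy link model: reception improves with power, degrades with interference
          return max(0.0, min(1.0, 0.3 + 0.15 * power - 0.01 * interference))

      class NodeAgent:
          def __init__(self):
              self.q = {p: 0.0 for p in POWER_LEVELS}   # stateless (single-state) learner

          def choose(self):
              if random.random() < EPS:
                  return random.choice(POWER_LEVELS)
              return max(self.q, key=self.q.get)

          def update(self, power, reward):
              best = max(self.q.values())
              self.q[power] += ALPHA * (reward + GAMMA * best - self.q[power])

      agents = [NodeAgent() for _ in range(5)]
      for step in range(5000):
          powers = [a.choose() for a in agents]
          for a, p in zip(agents, powers):
              interference = sum(powers) - p                 # others' transmissions interfere
              prr = packet_reception_ratio(p, interference)
              # reward favours meeting a PRR constraint while penalising transmit power
              reward = (1.0 if prr >= 0.9 else -1.0) - 0.05 * p
              a.update(p, reward)

      print("learned power levels:", [max(a.q, key=a.q.get) for a in agents])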

  13. International Space Station Passive Thermal Control System Analysis, Top Ten Lessons-Learned

    Science.gov (United States)

    Iovine, John

    2011-01-01

    The International Space Station (ISS) has been on-orbit for over 10 years, and there have been numerous technical challenges along the way from design to assembly to on-orbit anomalies and repairs. The Passive Thermal Control System (PTCS) management team has been a key player in successfully dealing with these challenges. The PTCS team performs thermal analysis in support of design and verification, launch and assembly constraints, integration, sustaining engineering, failure response, and model validation. This analysis is a significant body of work and provides a unique opportunity to compile a wealth of real world engineering and analysis knowledge and the corresponding lessons-learned. The analysis lessons encompass the full life cycle of flight hardware from design to on-orbit performance and sustaining engineering. These lessons can provide significant insight for new projects and programs. Key areas to be presented include thermal model fidelity, verification methods, analysis uncertainty, and operations support.

  14. An Improved Reinforcement Learning System Using Affective Factors

    Directory of Open Access Journals (Sweden)

    Takashi Kuremoto

    2013-07-01

    Full Text Available As a powerful and intelligent machine learning method, reinforcement learning (RL has been widely used in many fields such as game theory, adaptive control, multi-agent system, nonlinear forecasting, and so on. The main contribution of this technique is its exploration and exploitation approaches to find the optimal solution or semi-optimal solution of goal-directed problems. However, when RL is applied to multi-agent systems (MASs, problems such as “curse of dimension”, “perceptual aliasing problem”, and uncertainty of the environment constitute high hurdles to RL. Meanwhile, although RL is inspired by behavioral psychology and reward/punishment from the environment is used, higher mental factors such as affects, emotions, and motivations are rarely adopted in the learning procedure of RL. In this paper, to challenge agents learning in MASs, we propose a computational motivation function, which adopts two principle affective factors “Arousal” and “Pleasure” of Russell’s circumplex model of affects, to improve the learning performance of a conventional RL algorithm named Q-learning (QL. Compared with the conventional QL, computer simulations of pursuit problems with static and dynamic preys were carried out, and the results showed that the proposed method results in agents having a faster and more stable learning performance.
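
    The general flavour of coupling affect to the learning loop can be sketched as follows. The concrete motivation function of the paper is not reproduced; the choices below (arousal as a running measure of surprise that raises exploration, pleasure as a running average of reward that scales the learning rate) and the corridor task standing in for the pursuit problem are assumptions for illustration only.

      import random

      LENGTH, GOAL = 10, 9
      ACTIONS = [-1, 1]
      ALPHA0, GAMMA = 0.3, 0.95

      Q = {(s, a): 0.0 for s in range(LENGTH) for a in ACTIONS}
      arousal, pleasure = 0.5, 0.0

      for episode in range(300):
          s = 0
          while s != GOAL:
              eps = 0.05 + 0.3 * arousal                 # high arousal -> explore more
              if random.random() < eps:
                  a = random.choice(ACTIONS)
              else:
                  a = max(ACTIONS, key=lambda act: Q[(s, act)])
              s_next = min(max(s + a, 0), LENGTH - 1)
              r = 1.0 if s_next == GOAL else -0.01
              td = r + GAMMA * max(Q[(s_next, act)] for act in ACTIONS) - Q[(s, a)]
              alpha = ALPHA0 * (1.0 + 0.5 * pleasure)    # pleasant context -> learn faster
              Q[(s, a)] += alpha * td
              # update the affective state from surprise and recent reward
              arousal = 0.9 * arousal + 0.1 * min(1.0, abs(td))
              pleasure = 0.9 * pleasure + 0.1 * r
              s = s_next

      print("greedy action per state:",
            [max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(LENGTH)])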

  15. Theories and control models and motor learning: clinical applications in neuro-rehabilitation.

    Science.gov (United States)

    Cano-de-la-Cuerda, R; Molero-Sánchez, A; Carratalá-Tejada, M; Alguacil-Diego, I M; Molina-Rueda, F; Miangolarra-Page, J C; Torricelli, D

    2015-01-01

    In recent decades there has been a special interest in theories that could explain the regulation of motor control, and their applications. These theories are often based on models of brain function, philosophically reflecting different criteria on how movement is controlled by the brain, each being emphasised in different neural components of the movement. The concept of motor learning, regarded as the set of internal processes associated with practice and experience that produce relatively permanent changes in the ability to produce motor activities through a specific skill, is also relevant in the context of neuroscience. Thus, both motor control and learning are seen as key fields of study for health professionals in the field of neuro-rehabilitation. The major theories of motor control are described, which include motor programming theory, systems theory, the theory of dynamic action, and the theory of parallel distributed processing, as well as the factors that influence motor learning and its applications in neuro-rehabilitation. At present there is no consensus on which theory or model defines the regulations to explain motor control. Theories of motor learning should be the basis for motor rehabilitation. New research should apply the knowledge generated in the fields of control and motor learning in neuro-rehabilitation.

  16. Automation and Control Learning Environment with Mixed Reality Remote Experiments Architecture

    Directory of Open Access Journals (Sweden)

    Frederico M. Schaf

    2007-05-01

    Full Text Available This work aims at the use of remote, web-based experiments to improve the learning process in automation and control systems theory courses. An architecture combining virtual learning environments, remote experiments, student guides and experiment analysis is proposed, based on a wide state-of-the-art study. The validation of the architecture uses state-of-the-art technologies and newly developed simple programs to implement the case studies presented. All implementations presented use an internet-accessible virtual learning environment providing educational resources, guides and learning material to create a distance learning course associated with the remote mixed reality experiment. This work is part of the RExNet consortium, supported by the European Alfa project.

  17. Optimal control in microgrid using multi-agent reinforcement learning.

    Science.gov (United States)

    Li, Fu-Dong; Wu, Min; He, Yong; Chen, Xin

    2012-11-01

    This paper presents an improved reinforcement learning method to minimize electricity costs on the premise of satisfying the power balance and generation limits of units in a microgrid with grid-connected mode. Firstly, the microgrid control requirements are analyzed and the objective function of optimal control for the microgrid is proposed. Then, a state variable "Average Electricity Price Trend", which is used to express the most likely transitions of the system, is developed so as to reduce the complexity and randomness of the microgrid, and a multi-agent architecture including agents, state variables, action variables and reward function is formulated. Furthermore, dynamic hierarchical reinforcement learning, based on the change rate of a key state variable, is established to carry out optimal policy exploration. The analysis shows that the proposed method is beneficial for handling the problem of "curse of dimensionality" and speeds up learning in the unknown large-scale world. Finally, the simulation results under JADE (Java Agent Development Framework) demonstrate the validity of the presented method in optimal control for a microgrid with grid-connected mode.

  18. What Learning Systems do Intelligent Agents Need? Complementary Learning Systems Theory Updated.

    Science.gov (United States)

    Kumaran, Dharshan; Hassabis, Demis; McClelland, James L

    2016-07-01

    We update complementary learning systems (CLS) theory, which holds that intelligent agents must possess two learning systems, instantiated in mammals in the neocortex and hippocampus. The first gradually acquires structured knowledge representations while the second quickly learns the specifics of individual experiences. We broaden the role of replay of hippocampal memories in the theory, noting that replay allows goal-dependent weighting of experience statistics. We also address recent challenges to the theory and extend it by showing that recurrent activation of hippocampal traces can support some forms of generalization and that neocortical learning can be rapid for information that is consistent with known structure. Finally, we note the relevance of the theory to the design of artificial intelligent agents, highlighting connections between neuroscience and machine learning.

  19. Episodic reinforcement learning control approach for biped walking

    Directory of Open Access Journals (Sweden)

    Katić Duško

    2012-01-01

    Full Text Available This paper presents a hybrid dynamic control approach to the realization of humanoid biped robotic walk, focusing on policy gradient episodic reinforcement learning with fuzzy evaluative feedback. The proposed controller structure involves two feedback loops: a conventional computed torque controller and an episodic reinforcement learning controller. The reinforcement learning part includes fuzzy information about Zero-Moment-Point errors. Simulation tests using a medium-size 36-DOF humanoid robot MEXONE were performed to demonstrate the effectiveness of our method.

  20. Investigation of Drive-Reinforcement Learning and Application of Learning to Flight Control

    Science.gov (United States)

    1993-08-01

    WL-TR-93-1153 (AD-A277 442), Investigation of Drive-Reinforcement Learning and Application of Learning to Flight Control, Walter L. Baker (ed.), Stephen C. Atkins. [The remainder of the report documentation page and its reference fragments are not legible in the source record.]

  1. THE DYNAMIC MODEL FOR CONTROL OF STUDENT’S LEARNING INDIVIDUAL TRAJECTORY

    Directory of Open Access Journals (Sweden)

    A. A. Mitsel

    2015-01-01

    Full Text Available In connection with the transition of the educational system to a competence-oriented approach, the problem of assessing learning outcomes and creating an individual learning trajectory for a student has become relevant. Its solution requires the application of modern information technologies. The third generation of the Federal State Educational Standards of Higher Professional Education (FSES HPE) defines the requirements for the results of mastering the basic educational programs (BEP). According to FSES HPE, up to 50% of subjects have a variable character, i.e. depend on the choice of a student. This significantly influences the results of developing various competencies. The problem of forming a student’s learning trajectory is analyzed in general, and the choice of an individual direction is studied in detail. Various methods, models and algorithms for forming a student’s individual learning trajectory are described. The analysis of the model of educational process organization in terms of an individual approach makes it possible to develop a decision support system (DSS). A DSS is a set of interrelated programs and data used for analysis of a situation, development of alternative solutions and selection of the most acceptable alternative. DSSs are often used when building individual learning paths, because this task can be considered as a discrete multi-criteria problem, creating a significant burden on the decision maker. A new method of controlling the learning trajectory has been developed. The article discusses the problem statement and solution of determining a student’s optimal individual educational trajectory as a dynamic model of learning trajectory control, which uses score assessment to construct a sequence of studied subjects. The new model of learning trajectory management is based on dynamic models for tracking a reference trajectory. The task can be converted to an equivalent linear programming model, for which a reliable solution

  2. A Control Systems Concept Inventory Test Design and Assessment

    Science.gov (United States)

    Bristow, M.; Erkorkmaz, K.; Huissoon, J. P.; Jeon, Soo; Owen, W. S.; Waslander, S. L.; Stubley, G. D.

    2012-01-01

    Any meaningful initiative to improve the teaching and learning in introductory control systems courses needs a clear test of student conceptual understanding to determine the effectiveness of proposed methods and activities. The authors propose a control systems concept inventory. Development of the inventory was collaborative and iterative. The…

  3. Prototype Learning and Dissociable Categorization Systems in Alzheimer’s Disease

    Science.gov (United States)

    Heindel, William C.; Festa, Elena K.; Ott, Brian R.; Landy, Kelly M.; Salmon, David P.

    2015-01-01

    Recent neuroimaging studies suggest that prototype learning may be mediated by at least two dissociable memory systems depending on the mode of acquisition, with A/Not-A prototype learning dependent upon a perceptual representation system located within posterior visual cortex and A/B prototype learning dependent upon a declarative memory system associated with medial temporal and frontal regions. The degree to which patients with Alzheimer’s disease (AD) can acquire new categorical information may therefore critically depend upon the mode of acquisition. The present study examined A/Not-A and A/B prototype learning in AD patients using procedures that allowed direct comparison of learning across tasks. Despite impaired explicit recall of category features in all tasks, patients showed differential patterns of category acquisition across tasks. First, AD patients demonstrated impaired prototype induction along with intact exemplar classification under incidental A/Not-A conditions, suggesting that the loss of functional connectivity within visual cortical areas disrupted the integration processes supporting prototype induction within the perceptual representation system. Second, AD patients demonstrated intact prototype induction but impaired exemplar classification during A/B learning under observational conditions, suggesting that this form of prototype learning is dependent on a declarative memory system that is disrupted in AD. Third, the surprisingly intact classification of both prototypes and exemplars during A/B learning under trial-and-error feedback conditions suggests that AD patients shifted control from their deficient declarative memory system to a feedback-dependent procedural memory system when training conditions allowed. Taken together, these findings serve to not only increase our understanding of category learning in AD, but to also provide new insights into the ways in which different memory systems interact to support the acquisition of

  4. A Fuzzy Control System for Inductive Video Games

    OpenAIRE

    Lara-Alvarez, Carlos; Mitre-Hernandez, Hugo; Flores, Juan; Fuentes, Maria

    2017-01-01

    It has been shown that the emotional state of students has an important relationship with learning; for instance, engaged concentration is positively correlated with learning. This paper proposes the Inductive Control (IC) for educational games. Unlike conventional approaches that only modify the game level, the proposed technique also induces emotions in the player for supporting the learning process. This paper explores a fuzzy system that analyzes the players' performance and their emotion...

  5. Critical Points in Distance Learning System

    Directory of Open Access Journals (Sweden)

    Airina Savickaitė

    2013-08-01

    Full Text Available Purpose – This article presents the results of a distance learning system analysis, i.e. the critical elements of the distance learning system. The critical points of distance learning are a part of the distance education online environment interactivity/community process model. Most important is the fact that the critical points are associated with distance learning participants. Design/methodology/approach – Comparative review of articles and analysis of a distance learning module. Findings – A modern person is a lifelong learner, and distance learning is a way to be a modern person. The focus on the learner and on feedback is the most important aspect of a distance learning system. Also, attention should be paid to lecture-appropriate knowledge and the ability to convey information. Adaptation of the distance system is the way to improve the learner’s learning outcomes. Research limitations/implications – Different learning disciplines and learning methods may have different critical points. Practical implications – The information from the analysis could be important for both lecturers and students who study distance education systems. There are familiar critical points which may deteriorate the quality of learning. Originality/value – The study sought to develop remote systems for applications in order to improve the quality of knowledge. Keywords: distance learning, process model, critical points. Research type: review of literature and general overview.

  6. Methods for control over learning individual trajectory

    Science.gov (United States)

    Mitsel, A. A.; Cherniaeva, N. V.

    2015-09-01

    The article discusses models, methods and algorithms for determining a student's optimal individual educational trajectory. A new method of controlling the learning trajectory has been developed as a dynamic model of learning trajectory control, which uses score assessment to construct a sequence of studied subjects.

  7. Implementing Google Apps for Education as Learning Management System in Math Education

    Science.gov (United States)

    Widodo, S.

    2017-09-01

    This study aims to determine the effectiveness of math education using Google Apps for Education (GAFE) as a learning management system to improve the mathematical communication skills of primary school preservice teachers. This research used a quasi-experimental approach, utilizing a control group pre-test/post-test design with two groups of primary school preservice teachers at UPI Kampus Purwakarta. The results of this study showed that the mathematical communication skills of primary school preservice teachers in the experiment group are better than those of the control group. This is because the primary school preservice teachers in the experiment group used GAFE as a tool to communicate their ideas. The students can communicate their ideas because they have read the learning material on the learning management system using GAFE. All in all, it can be concluded that the communication tool is very important, besides the learning material and the choice of learning model, for achieving a better result.

  8. Systems approach for design control at Monitored Retrievable Storage Project

    International Nuclear Information System (INIS)

    Kumar, P.N.; Williams, J.R.

    1994-01-01

    This paper describes the systems approach in establishing design control for the Monitored Retrievable Storage Project design development. Key elements in design control are enumerated and systems engineering aspects are detailed. Application of lessons learned from the Yucca Mountain Project experience is addressed. An integrated approach combining quality assurance and systems engineering requirements is suggested for practicing effective design control.

  9. Impact on learning of an e-learning module on leukaemia: a randomised controlled trial.

    Science.gov (United States)

    Morgulis, Yuri; Kumar, Rakesh K; Lindeman, Robert; Velan, Gary M

    2012-05-28

    e-learning resources may be beneficial for complex or conceptually difficult topics. Leukaemia is one such topic, yet there are no reports on the efficacy of e-learning for leukaemia. This study compared the learning impact on senior medical students of a purpose-built e-learning module on leukaemia, compared with existing online resources. A randomised controlled trial was performed utilising volunteer senior medical students. Participants were randomly allocated to Study and Control groups. Following a pre-test on leukaemia administered to both groups, the Study group was provided with access to the new e-learning module, while the Control group was directed to existing online resources. A post-test and an evaluation questionnaire were administered to both groups at the end of the trial period. Study and Control groups were equivalent in gender distribution, mean academic ability, pre-test performance and time studying leukaemia during the trial. The Study group performed significantly better than the Control group in the post-test, in which the group to which the students had been allocated was the only significant predictor of performance. The Study group's evaluation of the module was overwhelmingly positive. A targeted e-learning module on leukaemia had a significant effect on learning in this cohort, compared with existing online resources. We believe that the interactivity, dialogic feedback and integration with the curriculum offered by the e-learning module contributed to its impact. This has implications for e-learning design in medicine and other disciplines.

  10. Fidelity-Based Ant Colony Algorithm with Q-learning of Quantum System

    Science.gov (United States)

    Liao, Qin; Guo, Ying; Tu, Yifeng; Zhang, Hang

    2018-03-01

    The quantum ant colony algorithm (ACA) has potential applications in quantum information processing, such as solutions of the traveling salesman problem, the zero-one knapsack problem, the robot route planning problem, and so on. To shorten the search time of the ACA, we suggest the fidelity-based ant colony algorithm (FACA) for the control of a quantum system. Motivated by the structure of the Q-learning algorithm, we demonstrate the combination of a FACA with the Q-learning algorithm and suggest the design of a fidelity-based ant colony algorithm with Q-learning to improve the performance of the FACA in a spin-1/2 quantum system. The numerical simulation results show that the FACA with Q-learning can efficiently avoid being trapped in locally optimal policies and increase the speed of the convergence process of the quantum system.
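
    A rough picture of how fidelity can drive an ant-colony search with a Q-learning-style pheromone update is sketched below for the simplest spin-1/2 transfer task (drift-free, single sigma_x control, so the fidelity has a closed form). The discretization, pheromone rule and constants are assumptions for illustration and do not reproduce the authors' FACA.

      import math, random

      N_SLICES, DT = 20, 0.1
      AMPLITUDES = [0.0, 0.5, 1.0]              # candidate control values per time slice
      N_ANTS, N_ITER = 20, 60
      LR, GAMMA, EXPLORE = 0.1, 0.9, 0.1

      def fidelity(pulse):
          # |<1|psi(T)>|^2 for H = u(t) * sigma_x acting on |0>: reduces to sin^2(sum u dt)
          phase = sum(u * DT for u in pulse)
          return math.sin(phase) ** 2

      tau = [[0.1 for _ in AMPLITUDES] for _ in range(N_SLICES)]   # pheromone table

      def build_pulse():
          pulse, choices = [], []
          for k in range(N_SLICES):
              if random.random() < EXPLORE:
                  idx = random.randrange(len(AMPLITUDES))
              else:
                  idx = max(range(len(AMPLITUDES)), key=lambda i: tau[k][i])
              pulse.append(AMPLITUDES[idx])
              choices.append(idx)
          return pulse, choices

      best = 0.0
      for it in range(N_ITER):
          for _ in range(N_ANTS):
              pulse, choices = build_pulse()
              f = fidelity(pulse)
              best = max(best, f)
              # Q-learning-style pheromone reinforcement along the visited path
              for k, idx in enumerate(choices):
                  future = max(tau[k + 1]) if k + 1 < N_SLICES else 0.0
                  tau[k][idx] += LR * (f + GAMMA * future - tau[k][idx])

      print(f"best fidelity found: {best:.4f}")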

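    The record above couples an ant colony search with Q-learning. As a minimal, hypothetical illustration of the Q-learning ingredient alone, the temporal-difference update it refers to can be sketched in Python as below; the chain of placeholder states, actions and rewards is invented for illustration and is not the spin-1/2 control problem of the paper.

      import numpy as np

      # Tabular Q-learning on a toy chain task: the agent advances toward the
      # final state and is rewarded only on arrival.
      n_states, n_actions = 5, 3
      Q = np.zeros((n_states, n_actions))
      alpha, gamma, epsilon = 0.1, 0.9, 0.1
      rng = np.random.default_rng(0)

      def step(state, action):
          next_state = min(state + action, n_states - 1)
          reward = 1.0 if next_state == n_states - 1 else 0.0
          return next_state, reward

      state = 0
      for _ in range(1000):
          # epsilon-greedy exploration over the current value estimates
          if rng.random() < epsilon:
              action = int(rng.integers(n_actions))
          else:
              action = int(np.argmax(Q[state]))
          next_state, reward = step(state, action)
          # temporal-difference update toward the greedy one-step target
          Q[state, action] += alpha * (reward + gamma * np.max(Q[next_state]) - Q[state, action])
          state = 0 if next_state == n_states - 1 else next_state

      print(np.round(Q, 2))
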
  11. Steering the dynamics within reduced space through quantum learning control

    International Nuclear Information System (INIS)

    Kim, Young Sik

    2003-01-01

    In the quantum dynamics of many-body systems, identifying the Hamiltonian rapidly becomes more difficult as the number of degrees of freedom increases. In order to simplify the dynamics and to deduce dynamically relevant Hamiltonian information, it is desirable to control the dynamics to lie within a reduced space. With a judicious choice for the cost functional, closed-loop optimal control experiments can be manipulated efficiently to steer the dynamics to lie within a subspace of the system eigenstates without requiring any prior detailed knowledge about the system Hamiltonian. The procedure is simulated for optimally controlled population transfer experiments in a system with two degrees of freedom. To show the feasibility of steering the dynamics to lie in a specified subspace, the learning algorithms guiding the dynamics are presented along with frequency filtering. The results demonstrate that the optimal control fields drive the system to the desired target state through the desired subspace

  12. Casual Games and Casual Learning About Human Biological Systems

    Science.gov (United States)

    Price, C. Aaron; Gean, Katherine; Christensen, Claire G.; Beheshti, Elham; Pernot, Bryn; Segovia, Gloria; Person, Halcyon; Beasley, Steven; Ward, Patricia

    2016-02-01

    Casual games are everywhere. People play them throughout life to pass the time, to engage in social interactions, and to learn. However, their simplicity and use in distraction-heavy environments can attenuate their potential for learning. This experimental study explored the effects that playing an online, casual game has on awareness of human biological systems. Two hundred and forty-two children were given pretests at a museum and posttests at home after playing either a treatment or a control game. In addition, 41 children were interviewed to explore deeper meanings behind the test results. Results show modest improvements in scientific attitudes, in the ability to identify human biological systems, and in the children's ability to describe how those systems work together in real-world scenarios. Interviews reveal that children drew upon their prior school learning as they played the game. On the surface, they perceived the game as mainly entertainment, but they were easily able to discern learning outcomes when prompted. Implications for the design of casual games and how they can be used to enhance transfer of knowledge from the classroom to everyday life are discussed.

  13. Control of power plants and power systems. Proceedings

    International Nuclear Information System (INIS)

    Canales-Ruiz, R.

    1996-01-01

    The 88 papers in this volume constitute the proceedings of the International Federation of Automatic Control Symposium held in Mexico in 1995. The broad areas which they cover are: self tuning control; power plant operations; dynamic stability; fuzzy logic applications; power plants modelling; artificial intelligence applications; power plants simulation; voltage control; control of hydro electric units; state estimation; fault diagnosis and monitoring systems; system expansion and operation planning; security assessment; economic dispatch and optimal load flow; adaptive control; distribution; transient stability and preventive control; modelling and control of nuclear plant; knowledge data bases for automatic learning methods applied to power system dynamic security assessment; control of combined cycle units; power control centres. Separate abstracts have been prepared for the three papers relating to nuclear power plants. (UK)

  14. Web-based e-learning and virtual lab of human-artificial immune system.

    Science.gov (United States)

    Gong, Tao; Ding, Yongsheng; Xiong, Qin

    2014-05-01

    The human immune system is as important in keeping the body healthy as the brain is in supporting intelligence. However, traditional models of the human immune system are built on mathematical equations, which are not easy for students to understand. To help students understand immune systems, a web-based e-learning approach with a virtual lab was designed for the intelligent system control course using new intelligent educational technology. Compared with the traditional classroom-based graduate educational model, web-based e-learning with the virtual lab proved more inspiring in guiding graduate students to think independently and innovatively, as the students reported. This web-based immune e-learning system with the online virtual lab has been found useful for teaching graduate students to understand immune systems more easily and to design their simulations more creatively and cooperatively. Teaching practice shows that a well-designed web-based e-learning system can increase the learning effectiveness of students.

  15. Impact on learning of an e-learning module on leukaemia: a randomised controlled trial

    Directory of Open Access Journals (Sweden)

    Morgulis Yuri

    2012-05-01

    Full Text Available Abstract Background e-learning resources may be beneficial for complex or conceptually difficult topics. Leukaemia is one such topic, yet there are no reports on the efficacy of e-learning for leukaemia. This study compared the learning impact on senior medical students of a purpose-built e-learning module on leukaemia, compared with existing online resources. Methods A randomised controlled trial was performed utilising volunteer senior medical students. Participants were randomly allocated to Study and Control groups. Following a pre-test on leukaemia administered to both groups, the Study group was provided with access to the new e-learning module, while the Control group was directed to existing online resources. A post-test and an evaluation questionnaire were administered to both groups at the end of the trial period. Results Study and Control groups were equivalent in gender distribution, mean academic ability, pre-test performance and time studying leukaemia during the trial. The Study group performed significantly better than the Control group in the post-test, in which the group to which the students had been allocated was the only significant predictor of performance. The Study group’s evaluation of the module was overwhelmingly positive. Conclusions A targeted e-learning module on leukaemia had a significant effect on learning in this cohort, compared with existing online resources. We believe that the interactivity, dialogic feedback and integration with the curriculum offered by the e-learning module contributed to its impact. This has implications for e-learning design in medicine and other disciplines.

  16. Impact on learning of an e-learning module on leukaemia: a randomised controlled trial

    Science.gov (United States)

    2012-01-01

    Background e-learning resources may be beneficial for complex or conceptually difficult topics. Leukaemia is one such topic, yet there are no reports on the efficacy of e-learning for leukaemia. This study compared the learning impact on senior medical students of a purpose-built e-learning module on leukaemia, compared with existing online resources. Methods A randomised controlled trial was performed utilising volunteer senior medical students. Participants were randomly allocated to Study and Control groups. Following a pre-test on leukaemia administered to both groups, the Study group was provided with access to the new e-learning module, while the Control group was directed to existing online resources. A post-test and an evaluation questionnaire were administered to both groups at the end of the trial period. Results Study and Control groups were equivalent in gender distribution, mean academic ability, pre-test performance and time studying leukaemia during the trial. The Study group performed significantly better than the Control group in the post-test, in which the group to which the students had been allocated was the only significant predictor of performance. The Study group’s evaluation of the module was overwhelmingly positive. Conclusions A targeted e-learning module on leukaemia had a significant effect on learning in this cohort, compared with existing online resources. We believe that the interactivity, dialogic feedback and integration with the curriculum offered by the e-learning module contributed to its impact. This has implications for e-learning design in medicine and other disciplines. PMID:22640463

  17. Project-Based Learning in Programmable Logic Controller

    Science.gov (United States)

    Seke, F. R.; Sumilat, J. M.; Kembuan, D. R. E.; Kewas, J. C.; Muchtar, H.; Ibrahim, N.

    2018-02-01

    Project-based learning is a learning method that uses project activities as the core of learning and requires student creativity in completing the project. The aim of this study is to investigate the influence of project-based learning methods on students with a high level of creativity in learning the Programmable Logic Controller (PLC). This study used experimental methods with an experimental class and a control class consisting of 24 students, with 12 students of high creativity and 12 of low creativity. Applying project-based learning methods to the PLC course, combined with the level of student creativity, enables the students to be directly involved in the work of a PLC project, which gives them experience in utilizing PLCs for the benefit of industry. It is therefore concluded that project-based learning is a superior learning method to apply to highly creative students in PLC courses. This method can be used to improve student learning outcomes and student creativity, and to educate prospective teachers to become educators who are reliable in both theory and practice and who will be tasked with producing the qualified human resources needed to meet future industry needs.

  18. Web-Based Learning Support System

    Science.gov (United States)

    Fan, Lisa

    A web-based learning support system offers many benefits over traditional learning environments and has become very popular. The Web is a powerful environment for distributing information and delivering knowledge to an increasingly wide and diverse audience. Typical Web-based learning environments, such as Web-CT and Blackboard, include course content delivery tools, quiz modules, grade reporting systems, assignment submission components, etc. They are powerful integrated learning management systems (LMS) that support a number of activities performed by teachers and students during the learning process [1]. However, students who study a course on the Internet tend to be more heterogeneously distributed than those found in a traditional classroom situation. In order to achieve optimal efficiency in a learning process, an individual learner needs his or her own personalized assistance. For a web-based open and dynamic learning environment, personalized support for learners becomes even more important. This chapter demonstrates how to realize personalized learning support in dynamic and heterogeneous learning environments by utilizing Adaptive Web technologies. It focuses on course personalization, tailoring contents and teaching materials to each student's needs and capabilities. An example of using Rough Set theory to analyze student personal information in order to assist students with effective learning and to predict student performance is presented.

  19. Magnetic induction of hyperthermia by a modified self-learning fuzzy temperature controller

    Science.gov (United States)

    Wang, Wei-Cheng; Tai, Cheng-Chi

    2017-07-01

    The aim of this study was to develop a temperature controller for magnetic induction hyperthermia (MIH). A closed-loop controller was applied to track a reference model to guarantee a desired temperature response. The MIH system generated an alternating magnetic field to heat a material of high magnetic permeability. This wireless induction heating has few side effects when extensively applied to cancer treatment. The effects of hyperthermia strongly depend on the precise control of temperature; however, during the treatment process, the control performance is degraded by severe perturbations and parameter variations. In this study, a modified self-learning fuzzy logic controller (SLFLC) with a gain-tuning mechanism was implemented to obtain high control performance over a wide range of treatment situations. This was achieved by appropriately altering the output scaling factor of a fuzzy inverse model to adjust the control rules. The proposed SLFLC was compared to the classical self-tuning fuzzy logic controller and fuzzy model reference learning control, and it was further verified by conducting in vitro experiments with porcine liver. The experimental results indicated that the proposed controller showed greater robustness and excellent adaptability with respect to the temperature control of the MIH system.

  20. Development of Advanced Verification and Validation Procedures and Tools for the Certification of Learning Systems in Aerospace Applications

    Science.gov (United States)

    Jacklin, Stephen; Schumann, Johann; Gupta, Pramod; Richard, Michael; Guenther, Kurt; Soares, Fola

    2005-01-01

    Adaptive control technologies that incorporate learning algorithms have been proposed to enable automatic flight control and vehicle recovery, autonomous flight, and to maintain vehicle performance in the face of unknown, changing, or poorly defined operating environments. In order for adaptive control systems to be used in safety-critical aerospace applications, they must be proven to be highly safe and reliable. Rigorous methods for adaptive software verification and validation must be developed to ensure that control system software failures will not occur. Of central importance in this regard is the need to establish reliable methods that guarantee convergent learning, rapid convergence (learning) rate, and algorithm stability. This paper presents the major problems of adaptive control systems that use learning to improve performance. The paper then presents the major procedures and tools presently developed or currently being developed to enable the verification, validation, and ultimate certification of these adaptive control systems. These technologies include the application of automated program analysis methods, techniques to improve the learning process, analytical methods to verify stability, methods to automatically synthesize code, simulation and test methods, and tools to provide on-line software assurance.

  1. Automated Subsystem Control for Life Support System (ASCLSS)

    Science.gov (United States)

    Block, Roger F.

    1987-01-01

    The Automated Subsystem Control for Life Support Systems (ASCLSS) program has successfully developed and demonstrated a generic approach to the automation and control of space station subsystems. The automation system features a hierarchical and distributed real-time control architecture which places maximum control authority at the lowest (process control) level, enhancing system autonomy. The ASCLSS demonstration system pioneered many automation and control concepts currently being considered in the space station data management system (DMS). Heavy emphasis is placed on controls hardware and software commonality implemented in accepted standards. The approach successfully demonstrates real-time process control and places accountability with the subsystem or process developer. The ASCLSS system completely automates a space station subsystem (the air revitalization group of the ASCLSS), moving the crew/operator into a role of supervisory control authority. The ASCLSS program developed over 50 lessons learned which will aid future space station developers in the area of automation and controls.

  2. Implementation of a Surface Electromyography-Based Upper Extremity Exoskeleton Controller Using Learning from Demonstration

    Science.gov (United States)

    Arenas, Ana M.; Sun, Tingxiao

    2018-01-01

    Upper-extremity exoskeletons have demonstrated potential as augmentative, assistive, and rehabilitative devices. Typical control of upper-extremity exoskeletons has relied on switches, force/torque sensors, and surface electromyography (sEMG), but these systems are usually reactionary and/or rely on entirely hand-tuned parameters. sEMG-based systems may be able to provide anticipatory control, since they interface directly with muscle signals, but typically require expert placement of sensors on muscle bodies. We present an implementation of an adaptive sEMG-based exoskeleton controller that learns a mapping between muscle activation and the desired system state during interaction with a user, generating a personalized sEMG feature classifier to allow for anticipatory control. This system is robust to novice placement of sEMG sensors, as well as subdermal muscle shifts. We validate this method with 18 subjects using a thumb exoskeleton to complete a book-placement task. This learning-from-demonstration system for exoskeleton control allows for very short training times, as well as the potential for improvement in intent recognition over time, and adaptation to physiological changes in the user, such as those due to fatigue. PMID:29401754

  3. Implementation of a Surface Electromyography-Based Upper Extremity Exoskeleton Controller Using Learning from Demonstration

    Directory of Open Access Journals (Sweden)

    Ho Chit Siu

    2018-02-01

    Full Text Available Upper-extremity exoskeletons have demonstrated potential as augmentative, assistive, and rehabilitative devices. Typical control of upper-extremity exoskeletons has relied on switches, force/torque sensors, and surface electromyography (sEMG), but these systems are usually reactionary and/or rely on entirely hand-tuned parameters. sEMG-based systems may be able to provide anticipatory control, since they interface directly with muscle signals, but typically require expert placement of sensors on muscle bodies. We present an implementation of an adaptive sEMG-based exoskeleton controller that learns a mapping between muscle activation and the desired system state during interaction with a user, generating a personalized sEMG feature classifier to allow for anticipatory control. This system is robust to novice placement of sEMG sensors, as well as subdermal muscle shifts. We validate this method with 18 subjects using a thumb exoskeleton to complete a book-placement task. This learning-from-demonstration system for exoskeleton control allows for very short training times, as well as the potential for improvement in intent recognition over time, and adaptation to physiological changes in the user, such as those due to fatigue.

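    The two records above describe learning a mapping from sEMG features to desired exoskeleton states. The fragment below is only a generic sketch of that idea, assuming randomly generated placeholder features and labels and a standard scikit-learn classifier; it does not reproduce the paper's sensors, features, or training procedure.

      import numpy as np
      from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

      rng = np.random.default_rng(1)

      # Placeholder demonstration data: each row is one window of sEMG features
      # (e.g. mean absolute value per channel); labels are desired device states.
      X_demo = rng.normal(size=(200, 8))        # 200 windows, 8 feature channels
      y_demo = rng.integers(0, 2, size=200)     # 0 = relax, 1 = assist (illustrative)

      # Fit a personalized classifier from the user's own demonstrations.
      clf = LinearDiscriminantAnalysis()
      clf.fit(X_demo, y_demo)

      # At run time, each new feature window is mapped to a commanded state.
      new_window = rng.normal(size=(1, 8))
      print("commanded state:", clf.predict(new_window)[0])
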
  4. Collective learning for the emergence of social norms in networked multiagent systems.

    Science.gov (United States)

    Yu, Chao; Zhang, Minjie; Ren, Fenghui

    2014-12-01

    Social norms such as social rules and conventions play a pivotal role in sustaining system order by regulating and controlling individual behaviors toward a global consensus in large-scale distributed systems. Systematic studies of efficient mechanisms that can facilitate the emergence of social norms enable us to build and design robust distributed systems, such as electronic institutions and norm-governed sensor networks. This paper studies the emergence of social norms via learning from repeated local interactions in networked multiagent systems. A collective learning framework, which imitates the opinion aggregation process in human decision making, is proposed to study the impact of agent local collective behaviors on the emergence of social norms in a number of different situations. In the framework, each agent interacts repeatedly with all of its neighbors. At each step, an agent first takes a best-response action toward each of its neighbors and then combines all of these actions into a final action using ensemble learning methods. Extensive experiments are carried out to evaluate the framework with respect to different network topologies, learning strategies, numbers of actions, influences of nonlearning agents, and so on. Experimental results reveal some significant insights into the manipulation and control of norm emergence in networked multiagent systems achieved through local collective behaviors.

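    As a rough sketch of the two-step scheme described above (a best-response action toward every neighbour, then an ensemble step that fuses them into one final action), the Python fragment below uses a pure coordination payoff and a simple majority vote; the network, payoffs, and memory structure are invented placeholders rather than the paper's experimental setup.

      from collections import Counter

      def best_response(neighbour_last_action):
          # In a pure coordination game the best response is simply to match.
          return neighbour_last_action

      # The agent remembers the last action observed from each neighbour.
      neighbour_memory = {"n1": 0, "n2": 1, "n3": 1, "n4": 1}

      # Step 1: one best-response action per neighbour.
      candidates = [best_response(a) for a in neighbour_memory.values()]

      # Step 2: combine the candidates into a single final action (majority vote
      # standing in for the ensemble learning step of the framework).
      final_action = Counter(candidates).most_common(1)[0][0]
      print("final action:", final_action)
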
  5. Scheduling lessons learned from the Autonomous Power System

    Science.gov (United States)

    Ringer, Mark J.

    1992-01-01

    The Autonomous Power System (APS) project at NASA LeRC is designed to demonstrate the applications of integrated intelligent diagnosis, control, and scheduling techniques to space power distribution systems. The project consists of three elements: the Autonomous Power Expert System (APEX) for Fault Diagnosis, Isolation, and Recovery (FDIR); the Autonomous Intelligent Power Scheduler (AIPS) to efficiently assign activity start times and resources; and power hardware (Brassboard) to emulate a space-based power system. The AIPS scheduler was tested within the APS system. This scheduler is able to efficiently assign available power to the requesting activities and share this information with other software agents within the APS system in order to implement the generated schedule. The AIPS scheduler is also able to cooperatively recover from fault situations by rescheduling the affected loads on the Brassboard in conjunction with the APEX FDIR system. AIPS served as a learning tool and an initial scheduling testbed for the integration of FDIR and automated scheduling systems. Many lessons were learned from the AIPS scheduler and are now being integrated into a new scheduler called SCRAP (Scheduler for Continuous Resource Allocation and Planning). This paper serves three purposes: an overview of the AIPS implementation, lessons learned from the AIPS scheduler, and a brief section on how these lessons are being applied to the new SCRAP scheduler.

  6. Recommendation System for Adaptive Learning.

    Science.gov (United States)

    Chen, Yunxiao; Li, Xiaoou; Liu, Jingchen; Ying, Zhiliang

    2018-01-01

    An adaptive learning system aims at providing instruction tailored to the current status of a learner, differing from the traditional classroom experience. The latest advances in technology make adaptive learning possible, which has the potential to provide students with high-quality learning benefit at a low cost. A key component of an adaptive learning system is a recommendation system, which recommends the next material (video lectures, practices, and so on, on different skills) to the learner, based on the psychometric assessment results and possibly other individual characteristics. An important question then follows: How should recommendations be made? To answer this question, a mathematical framework is proposed that characterizes the recommendation process as a Markov decision problem, for which decisions are made based on the current knowledge of the learner and that of the learning materials. In particular, two plain vanilla systems are introduced, for which the optimal recommendation at each stage can be obtained analytically.

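    The record above casts recommendation as a Markov decision problem over the learner's knowledge state. The sketch below is a toy value iteration under invented assumptions (two skills, a fixed success probability, reward only for full mastery); it merely illustrates how a greedy policy over such a model yields the next recommended material and is not the paper's model.

      # Knowledge states: (skill A mastered?, skill B mastered?).
      states = [(0, 0), (1, 0), (0, 1), (1, 1)]
      actions = ["teach_A", "teach_B"]
      gamma = 0.9

      def transition(state, action):
          # Teaching a skill succeeds with probability 0.7 (purely illustrative).
          a, b = state
          if action == "teach_A":
              return [(0.7, (1, b)), (0.3, (a, b))]
          return [(0.7, (a, 1)), (0.3, (a, b))]

      reward = {s: (1.0 if s == (1, 1) else 0.0) for s in states}
      V = {s: 0.0 for s in states}

      def q_value(s, a):
          return sum(p * (reward[ns] + gamma * V[ns]) for p, ns in transition(s, a))

      # Value iteration; the greedy action in each state is the recommendation.
      for _ in range(100):
          V = {s: max(q_value(s, a) for a in actions) for s in states}

      policy = {s: max(actions, key=lambda a: q_value(s, a)) for s in states}
      print(policy)
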
  7. Adaptive learning fuzzy control of a mobile robot

    International Nuclear Information System (INIS)

    Tsukada, Akira; Suzuki, Katsuo; Fujii, Yoshio; Shinohara, Yoshikuni

    1989-11-01

    This report studies the problem of constructing a fuzzy controller that enables a mobile robot to move autonomously along a given reference direction curve, with control rules generated and acquired through an adaptive learning process. An adaptive learning fuzzy controller has been developed for a mobile robot, and its good properties are demonstrated through travelling experiments with the mobile robot. (author)

  8. Brain computer interface learning for systems based on electrocorticography and intracortical microelectrode arrays.

    Science.gov (United States)

    Hiremath, Shivayogi V; Chen, Weidong; Wang, Wei; Foldes, Stephen; Yang, Ying; Tyler-Kabara, Elizabeth C; Collinger, Jennifer L; Boninger, Michael L

    2015-01-01

    A brain-computer interface (BCI) system transforms neural activity into control signals for external devices in real time. A BCI user needs to learn to generate specific cortical activity patterns to control external devices effectively. We call this process BCI learning, and it often requires significant effort and time. Therefore, it is important to study this process and develop novel and efficient approaches to accelerate BCI learning. This article reviews major approaches that have been used for BCI learning, including computer-assisted learning, co-adaptive learning, operant conditioning, and sensory feedback. We focus on BCIs based on electrocorticography and intracortical microelectrode arrays for restoring motor function. This article also explores the possibility of brain modulation techniques in promoting BCI learning, such as electrical cortical stimulation, transcranial magnetic stimulation, and optogenetics. Furthermore, as proposed by recent BCI studies, we suggest that BCI learning is in many ways analogous to motor and cognitive skill learning, and therefore skill learning should be a useful metaphor to model BCI learning.

  9. Attentional control of associative learning--a possible role of the central cholinergic system.

    Science.gov (United States)

    Pauli, Wolfgang M; O'Reilly, Randall C

    2008-04-02

    How does attention interact with learning? Kruschke [Kruschke, J.K. (2001). Toward a unified Model of Attention in Associative Learning. J. Math. Psychol. 45, 812-863.] proposed a model (EXIT) that captures Mackintosh's [Mackintosh, N.J. (1975). A theory of attention: Variations in the associability of stimuli with reinforcement. Psychological Review, 82(4), 276-298.] framework for attentional modulation of associative learning. We developed a computational model that showed analogous interactions between selective attention and associative learning, but is significantly simplified and, in contrast to EXIT, is motivated by neurophysiological findings. Competition among input representations in the internal representation layer, which increases the contrast between stimuli, is critical for simulating these interactions in human behavior. Furthermore, this competition is modulated in a way that might be consistent with the phasic activation of the central cholinergic system, which modulates activity in sensory cortices. Specifically, phasic increases in acetylcholine can cause increased excitability of both pyramidal excitatory neurons in cortical layers II/III and cortical GABAergic inhibitory interneurons targeting the same pyramidal neurons. These effects result in increased attentional contrast in our model. This model thus represents an initial attempt to link human attentional learning data with underlying neural substrates.

  10. Experimental Learning of Digital Power Controller for Photovoltaic Module Using Proteus VSM

    Directory of Open Access Journals (Sweden)

    Abhijit V. Padgavhankar

    2014-01-01

    Full Text Available The electric power supplied by a photovoltaic module depends on light intensity and temperature, so it is necessary to control the operating point to draw the maximum power from the photovoltaic module. This paper presents the design and implementation of digital power converters using Proteus software. Its aim is to enhance students' learning of virtual system modelling and of software simulation of the PIC microcontroller alongside the hardware design. The buck and boost converters are designed to interface with the renewable energy source, that is, the PV module. A PIC microcontroller is used as the digital controller; it senses the PV electrical signal for maximum power tracking as well as the output voltage of the dc-dc converter, and accordingly generates the switching pulses for the MOSFET. The implementation of the proposed system is based on the learning platform of Proteus virtual system modelling (VSM), and the experimental results are presented.

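    The study above has a microcontroller adjust the duty cycle of a dc-dc converter so that the PV module operates near its maximum power point. A generic perturb-and-observe loop of the kind commonly used for this task is sketched below in Python; the PV power curve is a toy placeholder and the sketch is not the firmware or Proteus model from the paper.

      def pv_power(duty):
          # Toy stand-in for measured PV power versus duty cycle; in the real
          # system this value comes from the voltage and current sensors.
          return 1.0 - (duty - 0.55) ** 2

      duty, step = 0.30, 0.01
      prev_power = pv_power(duty)

      for _ in range(50):
          duty += step                      # perturb the operating point
          power = pv_power(duty)            # observe the resulting PV power
          if power < prev_power:            # power fell: reverse the perturbation
              step = -step
          prev_power = power

      print(f"duty cycle settled near {duty:.2f}")
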
  11. Research on Open-Closed-Loop Iterative Learning Control with Variable Forgetting Factor of Mobile Robots

    Directory of Open Access Journals (Sweden)

    Hongbin Wang

    2016-01-01

    Full Text Available We propose an iterative learning control (ILC) algorithm that uses a variable forgetting factor to control a mobile robot. The proposed algorithm can be categorized as open-closed-loop iterative learning control, which produces control instructions by using both previous and current data. Introducing a variable forgetting factor weakens the former control output and its variance in the control law while strengthening the robustness of the iterative learning control; applied to the mobile robot, it effectively reduces position errors in trajectory tracking control. In this work, we show that the proposed algorithm guarantees convergence of the tracking error bound to a small neighborhood of the origin under state disturbances, output measurement noise, and fluctuations of the system dynamics. Through simulation, we demonstrate that the controller is effective in realizing perfect tracking.

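    An open-closed-loop ILC law with a forgetting factor, of the general kind discussed above, is often written as shown below; the notation is generic and the exact gains and forgetting schedule of the paper may differ.

      u_{k+1}(t) = \lambda_k \, u_k(t) + \Gamma_1 \, e_k(t+1) + \Gamma_2 \, e_{k+1}(t)

    Here u_k and e_k = y_d - y_k are the input and tracking error of trial k, \Gamma_1 and \Gamma_2 are the open-loop and closed-loop learning gains, and \lambda_k is the iteration-varying forgetting factor that discounts the previous trial's control signal.
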
  12. Oxygen control systems and impurity purification in LBE: Learning from DEMETRA project

    Energy Technology Data Exchange (ETDEWEB)

    Brissonneau, L., E-mail: laurent.brissonneau@cea.fr [CEA/DEN, Cadarache, DTN/STPA/LIPC, F-13108 Saint-Paul-lez-Durance (France); Beauchamp, F.; Morier, O. [CEA/DEN, Cadarache, DTN/STPA/LIPC, F-13108 Saint-Paul-lez-Durance (France); Schroer, C.; Konys, J. [Karlsruher Institut fuer Technologie (KIT), Institut fuer Materialforschung III, Hermann-von-Helmholtz-Platz 1, 76344 Eggenstein-Leopoldshafen (Germany); Kobzova, A.; Di Gabriele, F. [NRI, UJV Husinec-Rez 130, Rez 25068 (Czech Republic); Courouau, J.-L. [CEA/DEN, Saclay, DPC/SCCME/LECNA, F-919191 Gif-sur-Yvette (France)

    2011-08-31

    Operating a system using Lead-Bismuth Eutectic (LBE) requires control of the dissolved oxygen concentration to avoid corrosion of structural materials and oxide build-up in the coolant. Reliable devices are therefore needed to monitor and adjust the oxygen concentration and to remove impurities during operation. In this article, we describe the learning gained from experiments run in the framework of the DEMETRA project (IP-EUROTRANS 6th FP contract) on the oxygen supply in LBE and on impurity filtration and management in different European facilities. An oxygen control device should supply oxygen in LBE at a sufficient rate to compensate for loss by surface oxidation; otherwise, local dissolution of oxide layers might lead to the loss of steel protection against dissolution. Oxygen can be supplied by the gas phase (H{sub 2}O or O{sub 2}) or by the solid phase (PbO dissolution). Each of these systems has substantial advantages and drawbacks. Considerations are given on devices for large-scale facilities. The management of impurities (lead oxides and corrosion products) is also a crucial issue, as their presence in the liquid phase or in the aerosols is likely to impair the facility, instrumentation and mechanical devices. To avoid impurity build-up in the long term, purification of LBE is required to keep the impurity inventory low by trapping oxide and metallic impurities in specific filter units. On the basis of impurity characterisation and experimental results gained through filtration tests in different loops, this paper gives a description of the state-of-the-art knowledge of LBE purification with different filter media. It is now understood that the nature and behaviour of impurities formed in LBE change according to the operating modes, as does the method to be proposed to remove them. This experience can be used to validate the basic filtration process, define the operating procedures and evaluate perspectives for the design of purification units for long

  13. Human-level control through deep reinforcement learning

    Science.gov (United States)

    Mnih, Volodymyr; Kavukcuoglu, Koray; Silver, David; Rusu, Andrei A.; Veness, Joel; Bellemare, Marc G.; Graves, Alex; Riedmiller, Martin; Fidjeland, Andreas K.; Ostrovski, Georg; Petersen, Stig; Beattie, Charles; Sadik, Amir; Antonoglou, Ioannis; King, Helen; Kumaran, Dharshan; Wierstra, Daan; Legg, Shane; Hassabis, Demis

    2015-02-01

    The theory of reinforcement learning provides a normative account, deeply rooted in psychological and neuroscientific perspectives on animal behaviour, of how agents may optimize their control of an environment. To use reinforcement learning successfully in situations approaching real-world complexity, however, agents are confronted with a difficult task: they must derive efficient representations of the environment from high-dimensional sensory inputs, and use these to generalize past experience to new situations. Remarkably, humans and other animals seem to solve this problem through a harmonious combination of reinforcement learning and hierarchical sensory processing systems, the former evidenced by a wealth of neural data revealing notable parallels between the phasic signals emitted by dopaminergic neurons and temporal difference reinforcement learning algorithms. While reinforcement learning agents have achieved some successes in a variety of domains, their applicability has previously been limited to domains in which useful features can be handcrafted, or to domains with fully observed, low-dimensional state spaces. Here we use recent advances in training deep neural networks to develop a novel artificial agent, termed a deep Q-network, that can learn successful policies directly from high-dimensional sensory inputs using end-to-end reinforcement learning. We tested this agent on the challenging domain of classic Atari 2600 games. We demonstrate that the deep Q-network agent, receiving only the pixels and the game score as inputs, was able to surpass the performance of all previous algorithms and achieve a level comparable to that of a professional human games tester across a set of 49 games, using the same algorithm, network architecture and hyperparameters. This work bridges the divide between high-dimensional sensory inputs and actions, resulting in the first artificial agent that is capable of learning to excel at a diverse array of challenging tasks.

  14. Human-level control through deep reinforcement learning.

    Science.gov (United States)

    Mnih, Volodymyr; Kavukcuoglu, Koray; Silver, David; Rusu, Andrei A; Veness, Joel; Bellemare, Marc G; Graves, Alex; Riedmiller, Martin; Fidjeland, Andreas K; Ostrovski, Georg; Petersen, Stig; Beattie, Charles; Sadik, Amir; Antonoglou, Ioannis; King, Helen; Kumaran, Dharshan; Wierstra, Daan; Legg, Shane; Hassabis, Demis

    2015-02-26

    The theory of reinforcement learning provides a normative account, deeply rooted in psychological and neuroscientific perspectives on animal behaviour, of how agents may optimize their control of an environment. To use reinforcement learning successfully in situations approaching real-world complexity, however, agents are confronted with a difficult task: they must derive efficient representations of the environment from high-dimensional sensory inputs, and use these to generalize past experience to new situations. Remarkably, humans and other animals seem to solve this problem through a harmonious combination of reinforcement learning and hierarchical sensory processing systems, the former evidenced by a wealth of neural data revealing notable parallels between the phasic signals emitted by dopaminergic neurons and temporal difference reinforcement learning algorithms. While reinforcement learning agents have achieved some successes in a variety of domains, their applicability has previously been limited to domains in which useful features can be handcrafted, or to domains with fully observed, low-dimensional state spaces. Here we use recent advances in training deep neural networks to develop a novel artificial agent, termed a deep Q-network, that can learn successful policies directly from high-dimensional sensory inputs using end-to-end reinforcement learning. We tested this agent on the challenging domain of classic Atari 2600 games. We demonstrate that the deep Q-network agent, receiving only the pixels and the game score as inputs, was able to surpass the performance of all previous algorithms and achieve a level comparable to that of a professional human games tester across a set of 49 games, using the same algorithm, network architecture and hyperparameters. This work bridges the divide between high-dimensional sensory inputs and actions, resulting in the first artificial agent that is capable of learning to excel at a diverse array of challenging tasks.

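    The two records above describe the deep Q-network (DQN). The fragment below is only a schematic of its core update step, regressing Q(s, a) toward r + gamma * max_a' Q_target(s', a') with a separate target network; it uses a tiny fully connected network on random placeholder transitions (not Atari frames) and omits the replay buffer, exploration, and the loss clipping used in the original work.

      import torch
      import torch.nn as nn

      torch.manual_seed(0)
      state_dim, n_actions, gamma = 4, 2, 0.99

      q_net = nn.Sequential(nn.Linear(state_dim, 32), nn.ReLU(), nn.Linear(32, n_actions))
      target_net = nn.Sequential(nn.Linear(state_dim, 32), nn.ReLU(), nn.Linear(32, n_actions))
      target_net.load_state_dict(q_net.state_dict())   # periodically re-synchronised
      optimizer = torch.optim.Adam(q_net.parameters(), lr=1e-3)

      # Placeholder minibatch of transitions (s, a, r, s', done).
      s = torch.randn(64, state_dim)
      a = torch.randint(0, n_actions, (64, 1))
      r = torch.randn(64, 1)
      s_next = torch.randn(64, state_dim)
      done = torch.zeros(64, 1)

      # TD target computed with the frozen target network.
      with torch.no_grad():
          target = r + gamma * (1 - done) * target_net(s_next).max(dim=1, keepdim=True).values

      q_sa = q_net(s).gather(1, a)                   # Q(s, a) of the actions taken
      loss = nn.functional.mse_loss(q_sa, target)

      optimizer.zero_grad()
      loss.backward()
      optimizer.step()
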
  15. E-learning: controlling costs and increasing value.

    Science.gov (United States)

    Walsh, Kieran

    2015-04-01

    E-learning now accounts for a substantial proportion of medical education provision. This progress has required significant investment, and this investment has in turn come under increasing scrutiny so that the costs of e-learning may be controlled and its returns maximised. There are multiple methods by which the costs of e-learning can be controlled and its returns maximised. This short paper reviews some of the methods that are likely to be most effective and to save costs without compromising quality. These include accessing free or low-cost resources from elsewhere; creating short learning resources that will work on multiple devices; using open source platforms to host content; using in-house faculty to create content; sharing resources between institutions; and promoting resources to ensure high usage. Whatever methods are used to control costs or increase value, it is most important to evaluate their impact.

  16. Agile Design of Sewer System Control

    NARCIS (Netherlands)

    Van Nooijen, R.P.; Kolechkina, A.G.; Van Leeuwen, P.E.R.M.; Van Velzen, E.

    2011-01-01

    We describe the first part of an attempt to include stakeholder participation in the design of a central automatic controller for a sewer system in a small pilot project (five subcatchments) and present lessons learned so far. The pilot is part of a project aimed at the improvement of water quality

  17. ENGINEERING OF UNIVERSITY INTELLIGENT LEARNING SYSTEMS

    Directory of Open Access Journals (Sweden)

    Vasiliy M. Trembach

    2016-01-01

    Full Text Available This article considers issues in the engineering of university intelligent tutoring systems with adaptation. It also dwells on some modern approaches to the engineering of information systems and shows the role of engineering e-learning devices (systems) in system engineering. The article describes the basic principles of system engineering and extends these principles to intelligent information systems. The structure of intelligent learning systems with adaptation of individual learning environments based on services is presented.

  18. Establishment of a Learning Management System

    International Nuclear Information System (INIS)

    Han, K. W.; Kim, Y. T.; Lee, E. J.; Min, B. J.

    2006-01-01

    A web-based learning management system (LMS) has been established to address the need for customized education and training at the Nuclear Training Center (NTC) of KAERI. The LMS is designed to deal with various learning types (e.g. on-line, off-line and blended) and a practically comprehensive learning activity cycle (e.g. course preparation, registration, learning, and post-learning), as well as to be user-friendly. A test with an example course scenario on the established system has shown satisfactory performance. This paper discusses details of the established web-based learning management system in terms of the development approach and the functions of the LMS

  19. Neural-network hybrid control for antilock braking systems.

    Science.gov (United States)

    Lin, Chih-Min; Hsu, C F

    2003-01-01

    The antilock braking systems are designed to maximize wheel traction by preventing the wheels from locking during braking, while also maintaining adequate vehicle steerability; however, the performance is often degraded under harsh road conditions. In this paper, a hybrid control system with a recurrent neural network (RNN) observer is developed for antilock braking systems. This hybrid control system is comprised of an ideal controller and a compensation controller. The ideal controller, containing an RNN uncertainty observer, is the principal controller; and the compensation controller is a compensator for the difference between the system uncertainty and the estimated uncertainty. Since for dynamic response the RNN has capabilities superior to the feedforward NN, it is utilized for the uncertainty observer. The Taylor linearization technique is employed to increase the learning ability of the RNN. In addition, the on-line parameter adaptation laws are derived based on a Lyapunov function, so the stability of the system can be guaranteed. Simulations are performed to demonstrate the effectiveness of the proposed NN hybrid control system for antilock braking control under various road conditions.

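    The antilock braking problem above centres on keeping the wheel slip near the value that maximises traction. Purely as a hypothetical illustration of the quantity being regulated (not the paper's RNN-based hybrid controller), the sketch below computes the longitudinal slip ratio and nudges the brake torque toward an assumed target slip.

      def slip_ratio(vehicle_speed, wheel_speed, wheel_radius):
          # Longitudinal slip during braking: 0 = free rolling, 1 = locked wheel.
          if vehicle_speed <= 0.0:
              return 0.0
          return (vehicle_speed - wheel_speed * wheel_radius) / vehicle_speed

      # Single illustrative adjustment step (real ABS logic is far more involved).
      target_slip, k_p = 0.2, 500.0
      v, omega, r, brake_torque = 20.0, 55.0, 0.3, 1000.0

      current_slip = slip_ratio(v, omega, r)
      brake_torque += k_p * (target_slip - current_slip)   # back off if slip is too high
      print(f"slip = {current_slip:.3f}, adjusted brake torque = {brake_torque:.0f} N*m")
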
  20. Learning effects of dynamic postural control by auditory biofeedback versus visual biofeedback training.

    Science.gov (United States)

    Hasegawa, Naoya; Takeda, Kenta; Sakuma, Moe; Mani, Hiroki; Maejima, Hiroshi; Asaka, Tadayoshi

    2017-10-01

    Augmented sensory biofeedback (BF) for postural control is widely used to improve postural stability. However, the effective sensory information in BF systems of motor learning for postural control is still unknown. The purpose of this study was to investigate the learning effects of visual versus auditory BF training in dynamic postural control. Eighteen healthy young adults were randomly divided into two groups (visual BF and auditory BF). In test sessions, participants were asked to bring the real-time center of pressure (COP) in line with a hidden target by body sway in the sagittal plane. The target moved in seven cycles of sine curves at 0.23Hz in the vertical direction on a monitor. In training sessions, the visual and auditory BF groups were required to change the magnitude of a visual circle and a sound, respectively, according to the distance between the COP and target in order to reach the target. The perceptual magnitudes of visual and auditory BF were equalized according to Stevens' power law. At the retention test, the auditory but not visual BF group demonstrated decreased postural performance errors in both the spatial and temporal parameters under the no-feedback condition. These findings suggest that visual BF increases the dependence on visual information to control postural performance, while auditory BF may enhance the integration of the proprioceptive sensory system, which contributes to motor learning without BF. These results suggest that auditory BF training improves motor learning of dynamic postural control. Copyright © 2017 Elsevier B.V. All rights reserved.

  1. Transformative Learning: Patterns of Psychophysiologic Response and Technology-Enabled Learning and Intervention Systems

    Science.gov (United States)

    2008-09-01

    Report documentation for the research project "Transformative Learning: Patterns of Psychophysiologic Response and Technology-Enabled Learning and Intervention Systems" (Principal Investigator: Leigh W. Jerome, Ph.D.).

  2. Elevator Group Supervisory Control System Using Genetic Network Programming with Macro Nodes and Reinforcement Learning

    Science.gov (United States)

    Zhou, Jin; Yu, Lu; Mabu, Shingo; Hirasawa, Kotaro; Hu, Jinglu; Markon, Sandor

    Elevator Group Supervisory Control System (EGSCS) is a very large scale stochastic dynamic optimization problem. Due to its vast state space, significant uncertainty and numerous resource constraints such as finite car capacities and registered hall/car calls, it is hard to manage EGSCS using conventional control methods. Recently, many solutions for EGSCS using Artificial Intelligence (AI) technologies have been reported. Genetic Network Programming (GNP), which was proposed as a new evolutionary computation method several years ago, has also proved to be efficient when applied to the EGSCS problem. In this paper, we propose an extended algorithm for EGSCS by introducing Reinforcement Learning (RL) into the GNP framework, and an improvement in EGSCS performance is expected, since the efficiency of GNP with RL has been clarified in other studies such as the tile-world problem. Simulation tests using traffic flows in a typical office building have been made, and the results show an actual improvement in EGSCS performance compared to algorithms using the original GNP and conventional control methods. Furthermore, as a further study, an importance weight optimization algorithm based on GNP with RL is employed, and its efficiency is also verified by the improved performance.

  3. Authoring Systems Delivering Reusable Learning Objects

    Directory of Open Access Journals (Sweden)

    George Nicola Sammour

    2009-10-01

    Full Text Available A three-layer e-learning course development model has been defined based on a conceptual model of learning content objects. It starts by decomposing the learning content into small chunks which are initially placed in a hierarchic structure of units and blocks. The raw content components, being the atomic learning objects (ALO), were linked to the blocks and are structured in the database. We set forward a dynamic generation of LOs using reusable e-learning raw materials, or ALOs. In that view, we need an LO authoring/assembling system that fits the requirements of interoperability and reusability and starts from selecting the raw learning content from the learning materials content database. In practice, authoring systems are used to develop e-learning courses. The company EDUWEST has developed an authoring system that is database based and will be SCORM compliant in the near future.

  4. Off-policy reinforcement learning for H∞ control design.

    Science.gov (United States)

    Luo, Biao; Wu, Huai-Ning; Huang, Tingwen

    2015-01-01

    The H∞ control design problem is considered for nonlinear systems with an unknown internal system model. It is known that the nonlinear H∞ control problem can be transformed into solving the so-called Hamilton-Jacobi-Isaacs (HJI) equation, which is a nonlinear partial differential equation that is generally impossible to solve analytically. Even worse, model-based approaches cannot be used for approximately solving the HJI equation when an accurate system model is unavailable or costly to obtain in practice. To overcome these difficulties, an off-policy reinforcement learning (RL) method is introduced to learn the solution of the HJI equation from real system data instead of a mathematical system model, and its convergence is proved. In the off-policy RL method, the system data can be generated with arbitrary policies rather than the evaluating policy, which is extremely important and promising for practical systems. For implementation purposes, a neural network (NN)-based actor-critic structure is employed and a least-squares NN weight update algorithm is derived based on the method of weighted residuals. Finally, the developed NN-based off-policy RL method is tested on a linear F16 aircraft plant, and further applied to a rotational/translational actuator system.

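    For reference, for an affine nonlinear system \dot{x} = f(x) + g(x)u + k(x)w with penalty output h(x), control weight R and attenuation level \gamma, the Hamilton-Jacobi-Isaacs equation mentioned above is commonly written in the form below; the paper's notation may differ in detail.

      0 = \nabla V^{T}(x) f(x) + h^{T}(x) h(x)
          - \tfrac{1}{4} \nabla V^{T}(x) g(x) R^{-1} g^{T}(x) \nabla V(x)
          + \tfrac{1}{4\gamma^{2}} \nabla V^{T}(x) k(x) k^{T}(x) \nabla V(x)

    with the associated control policy u^{*}(x) = -\tfrac{1}{2} R^{-1} g^{T}(x) \nabla V(x) and worst-case disturbance w^{*}(x) = \tfrac{1}{2\gamma^{2}} k^{T}(x) \nabla V(x).
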
  5. Learning and Understanding System Stability Using Illustrative Dynamic Texture Examples

    Science.gov (United States)

    Liu, Huaping; Xiao, Wei; Zhao, Hongyan; Sun, Fuchun

    2014-01-01

    System stability is a basic concept in courses on dynamic system analysis and control for undergraduate students with computer science backgrounds. Typically, this was taught using a simple simulation example of an inverted pendulum. Unfortunately, many difficult issues arise in the learning and understanding of the concepts of stability,…

  6. Mountain Plains Learning Experience Guide: Automotive Repair. Course: Emission Systems.

    Science.gov (United States)

    Schramm, C.; Osland, Walt

    One of twelve individualized courses included in an automotive repair curriculum, this course covers the theory, testing, and servicing of automotive emission control systems. The course is comprised of one unit, Fundamentals of Emission Systems. The unit begins with a Unit Learning Experience Guide that gives directions for unit completion. The…

  7. CLASSIFICATION OF LEARNING MANAGEMENT SYSTEMS

    Directory of Open Access Journals (Sweden)

    Yu. B. Popova

    2016-01-01

    Full Text Available The use of information technologies and, in particular, learning management systems increases the opportunities of teachers and students to reach their goals in education. Such systems provide learning content, help organize and monitor training, collect progress statistics and take into account the individual characteristics of each user. Currently, there is a huge inventory of both paid and free systems, physically located either on college servers or in the cloud, offering different feature sets under different licensing schemes and costs. This creates the problem of choosing the best system, a problem partly due to the lack of a comprehensive classification of such systems. Analysis of more than 30 of the most common automated learning management systems has shown that a classification of such systems should be carried out according to certain criteria under which systems of the same type can be compared. The classification features offered by the author are: cost, functionality, modularity, meeting the customer's requirements, integration of content, the physical location of a system, and adaptability of training. Considering learning management systems within these classifications and taking into account the current trends of their development, it is possible to identify the main requirements for them: functionality, reliability, ease of use, low cost, support for the SCORM standard or Tin Can API, modularity and adaptability. In line with these requirements, the Software Department of FITR BNTU, under the guidance of the author, has been developing, using and continuously improving its own learning management system since 2009.

  8. Bio-inspired spiking neural network for nonlinear systems control.

    Science.gov (United States)

    Pérez, Javier; Cabrera, Juan A; Castillo, Juan J; Velasco, Juan M

    2018-08-01

    Spiking neural networks (SNN) are the third generation of artificial neural networks and the closest approximation to biological neural networks. SNNs make use of temporal spike trains to encode inputs and outputs, allowing faster and more complex computation. As demonstrated by biological organisms, they are a potentially good approach to designing controllers for highly nonlinear dynamic systems in which the performance of controllers developed by conventional techniques is unsatisfactory or difficult to implement. SNN-based controllers exploit their ability for online learning and self-adaptation to evolve when transferred from simulations to the real world. SNNs' inherently binary and temporal way of encoding information facilitates their hardware implementation compared to analog neurons. Biological neural networks often require a lower number of neurons compared to other controllers based on artificial neural networks. In this work, these neuronal systems are imitated to perform the control of non-linear dynamic systems. For this purpose, a control structure based on spiking neural networks has been designed. Particular attention has been paid to optimizing the structure and size of the neural network. The proposed structure is able to control dynamic systems with a reduced number of neurons and connections. A supervised learning process using evolutionary algorithms has been carried out to perform controller training. The efficiency of the proposed network has been verified in two examples of dynamic systems control. Simulations show that the proposed control based on SNN exhibits superior performance compared to other approaches based on Neural Networks and SNNs. Copyright © 2018 Elsevier Ltd. All rights reserved.

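    As a minimal, generic illustration of the spiking neurons referred to above (not the controller architecture or the evolutionary training of the paper), the sketch below simulates a single leaky integrate-and-fire neuron driven by a constant current; all parameter values are arbitrary.

      # Leaky integrate-and-fire neuron: tau * dv/dt = -(v - v_rest) + R * I
      dt, t_end = 1e-4, 0.2                   # time step and duration (s)
      tau, R = 0.02, 1.0                      # membrane time constant (s), resistance
      v_rest, v_threshold, v_reset = -65.0, -50.0, -65.0   # membrane potentials (mV)
      I = 20.0                                # constant input current (arbitrary units)

      v, spike_times = v_rest, []
      for step in range(int(t_end / dt)):
          v += dt * (-(v - v_rest) + R * I) / tau
          if v >= v_threshold:                # crossing the threshold emits a spike
              spike_times.append(step * dt)
              v = v_reset                     # membrane potential is reset
      print(f"{len(spike_times)} spikes in {t_end} s")
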
  9. An H(∞) control approach to robust learning of feedforward neural networks.

    Science.gov (United States)

    Jing, Xingjian

    2011-09-01

    A novel H(∞) robust control approach is proposed in this study to deal with the learning problems of feedforward neural networks (FNNs). The analysis and design of a desired weight update law for the FNN is transformed into a robust controller design problem for a discrete dynamic system in terms of the estimation error. The drawbacks of some existing learning algorithms can therefore be revealed, especially for cases in which the output data changes rapidly with respect to the input or is corrupted by noise. Based on this approach, the optimal learning parameters can be found by utilizing linear matrix inequality (LMI) optimization techniques to achieve a predefined H(∞) "noise" attenuation level. Several existing BP-type algorithms are shown to be special cases of the new H(∞)-learning algorithm. Theoretical analysis and several examples are provided to show the advantages of the new method. Copyright © 2011 Elsevier Ltd. All rights reserved.

  10. Recommendation System Based On Association Rules For Distributed E-Learning Management Systems

    Science.gov (United States)

    Mihai, Gabroveanu

    2015-09-01

    Traditional Learning Management Systems are installed on a single server where learning materials and user data are kept. To increase its performance, the Learning Management System can be installed on multiple servers; learning materials and user data can be distributed across these servers, yielding a Distributed Learning Management System. In this paper, a prototype of a recommendation system based on association rules for a Distributed Learning Management System is proposed. Information from LMS databases is analyzed using distributed data mining algorithms in order to extract association rules, which are then used as inference rules to provide personalized recommendations. The quality of the recommendations provided is improved because the rules used to make the inferences are more accurate, since they aggregate knowledge from all e-Learning systems included in the Distributed Learning Management System.

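    As a toy illustration of the association-rule recommendation idea above (not the distributed mining algorithm of the paper), the sketch below mines simple one-item rules from placeholder access logs and recommends the consequent of the strongest rule that applies to the learner's current material.

      from itertools import combinations
      from collections import Counter

      # Placeholder logs: the set of learning materials each student accessed.
      sessions = [
          {"intro", "algebra", "quiz1"},
          {"intro", "algebra"},
          {"intro", "geometry"},
          {"algebra", "quiz1"},
          {"intro", "algebra", "quiz1"},
      ]
      min_support, min_confidence = 0.4, 0.6
      n = len(sessions)

      item_count = Counter(i for s in sessions for i in s)
      pair_count = Counter(frozenset(p) for s in sessions for p in combinations(sorted(s), 2))

      # Mine rules of the form {a} -> {b} that meet the support/confidence thresholds.
      rules = []
      for pair, cnt in pair_count.items():
          a, b = tuple(pair)
          if cnt / n < min_support:
              continue
          for lhs, rhs in ((a, b), (b, a)):
              confidence = cnt / item_count[lhs]
              if confidence >= min_confidence:
                  rules.append((lhs, rhs, cnt / n, confidence))

      # Recommend for a learner who has just accessed "intro".
      applicable = [rule for rule in rules if rule[0] == "intro"]
      if applicable:
          lhs, rhs, support, confidence = max(applicable, key=lambda rule: rule[3])
          print(f"recommend '{rhs}' (support {support:.2f}, confidence {confidence:.2f})")
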
  11. Minimal-Learning-Parameter Technique Based Adaptive Neural Sliding Mode Control of MEMS Gyroscope

    Directory of Open Access Journals (Sweden)

    Bin Xu

    2017-01-01

    Full Text Available This paper investigates an adaptive neural sliding mode controller for MEMS gyroscopes based on a minimal-learning-parameter technique. Considering the system uncertainty in the dynamics, a neural network is employed for approximation. The minimal-learning-parameter technique is constructed to decrease the number of update parameters, greatly reducing the computational burden. Sliding mode control is designed to cancel the effect of the time-varying disturbance. The closed-loop stability analysis is established via the Lyapunov approach. Simulation results are presented to demonstrate the effectiveness of the method.

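    For context on the sliding mode component above, a generic first-order design for a second-order plant \ddot{x} = f(x, \dot{x}) + u + d with tracking error e = x - x_d (not the specific MEMS gyroscope law of the paper) takes roughly the form

      s = \dot{e} + c\,e, \qquad
      u = -\hat{f}(x, \dot{x}) + \ddot{x}_d - c\,\dot{e} - k\,\operatorname{sgn}(s), \qquad c, k > 0,

    where \hat{f} is the neural-network approximation of the unknown dynamics (with few updated parameters under the minimal-learning-parameter technique) and the switching term -k\,\operatorname{sgn}(s) rejects the bounded time-varying disturbance d.
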
  12. Development and Evaluation of Mechatronics Learning System in a Web-Based Environment

    Science.gov (United States)

    Shyr, Wen-Jye

    2011-01-01

    The development of remote laboratory suitable for the reinforcement of undergraduate level teaching of mechatronics is important. For the reason, a Web-based mechatronics learning system, called the RECOLAB (REmote COntrol LABoratory), for remote learning in engineering education has been developed in this study. The web-based environment is an…

  13. Robust Learning Control Design for Quantum Unitary Transformations.

    Science.gov (United States)

    Wu, Chengzhi; Qi, Bo; Chen, Chunlin; Dong, Daoyi

    2017-12-01

    Robust control design for quantum unitary transformations has been recognized as a fundamental and challenging task in the development of quantum information processing due to unavoidable decoherence or operational errors in the experimental implementation of quantum operations. In this paper, we extend the systematic methodology of the sampling-based learning control (SLC) approach with a gradient flow algorithm for the design of robust quantum unitary transformations. The SLC approach first uses a "training" process to find an optimal control strategy robust against certain ranges of uncertainties. Then a number of randomly selected samples are tested and the performance is evaluated according to their average fidelity. The approach is applied to three typical examples of robust quantum transformation problems including robust quantum transformations in a three-level quantum system, in a superconducting quantum circuit, and in a spin chain system. Numerical results demonstrate the effectiveness of the SLC approach and show its potential applications in various implementations of quantum unitary transformations.
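
    A minimal sketch of the sampling-based idea, on an assumed two-level system with a detuning uncertainty and a NOT-gate target, is given below; the paper's gradient flow is replaced here by a crude finite-difference ascent on the sample-averaged fidelity, and every model detail (Hamiltonian, sample set, step sizes) is an illustrative assumption.

        import numpy as np
        from scipy.linalg import expm

        sx = np.array([[0, 1], [1, 0]], dtype=complex)
        sz = np.array([[1, 0], [0, -1]], dtype=complex)
        U_target = sx                      # target gate: a NOT (X) operation
        N, T = 10, 2.0                     # piecewise-constant control, N slices over time T
        dt = T / N
        eps_samples = np.linspace(-0.2, 0.2, 5)   # sampled multiplicative uncertainty in the drift

        def propagator(u, eps):
            U = np.eye(2, dtype=complex)
            for uk in u:
                H = (1.0 + eps) * sz + uk * sx     # uncertain drift + controlled term
                U = expm(-1j * H * dt) @ U
            return U

        def avg_fidelity(u):
            fids = [abs(np.trace(U_target.conj().T @ propagator(u, e)) / 2) ** 2
                    for e in eps_samples]
            return np.mean(fids)

        u = 0.5 * np.ones(N)               # initial guess for the control amplitudes
        step, h = 0.5, 1e-4
        for _ in range(200):               # finite-difference ascent on the average fidelity
            grad = np.zeros(N)
            for k in range(N):
                up, um = u.copy(), u.copy()
                up[k] += h
                um[k] -= h
                grad[k] = (avg_fidelity(up) - avg_fidelity(um)) / (2 * h)
            u += step * grad

        print("average fidelity over samples:", avg_fidelity(u))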

  14. A Scalable Neuro-inspired Robot Controller Integrating a Machine Learning Algorithm and a Spiking Cerebellar-like Network

    DEFF Research Database (Denmark)

    Baira Ojeda, Ismael; Tolu, Silvia; Lund, Henrik Hautop

    2017-01-01

    Combining the Fable robot, a modular robot, with a neuro-inspired controller, we present the proof of principle of a system that can scale to several neurally controlled compliant modules. The motor control and learning of a robot module are carried out by a Unit Learning Machine (ULM) that embeds...... the Locally Weighted Projection Regression algorithm (LWPR) and a spiking cerebellar-like microcircuit. The LWPR guarantees both an optimized representation of the input space and the learning of the dynamic internal model (IM) of the robot. However, the cerebellar-like sub-circuit integrates LWPR input...

  15. Design strategy for optimal iterative learning control applied on a deep drawing process

    DEFF Research Database (Denmark)

    Endelt, Benny Ørtoft

    2017-01-01

    Metal forming processes in general can be characterised as repetitive processes; this work takes advantage of this characteristic by developing an algorithm or control system which transfers process information from part to part, reducing the impact of repetitive uncertainties, e.g. a gradual...... changes in the material properties. The process is highly non-linear and the system plant is modelled using a non-linear finite element, and the gain factors for the iterative learning controller are identified by solving a non-linear optimal control problem. The optimal control problem is formulated as a non...
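
    The trial-to-trial update underlying iterative learning control can be sketched as follows on a toy linear repetitive process; the paper identifies the learning gains by solving a non-linear optimal control problem against a finite element model, whereas here the gain is simply hand-picked and the plant is an assumed impulse-response matrix.

        import numpy as np

        # Toy discrete-time plant y = G u (lower-triangular impulse-response matrix),
        # standing in for the finite element process model used in the paper.
        N = 50
        g = 0.8 ** np.arange(N)                      # assumed impulse response
        G = np.array([[g[i - j] if i >= j else 0.0 for j in range(N)] for i in range(N)])

        y_ref = np.sin(np.linspace(0, np.pi, N))     # desired trajectory over one "part"
        L_gain = 0.5                                 # hand-tuned learning gain (paper: optimized)

        u = np.zeros(N)
        for trial in range(20):
            y = G @ u + 0.01 * np.random.randn(N)    # repetitive process + small non-repeating noise
            e = y_ref - y
            u = u + L_gain * e                       # P-type ILC update: u_{k+1} = u_k + L * e_k
            if trial % 5 == 0:
                print(f"trial {trial:2d}, RMS error {np.sqrt(np.mean(e**2)):.4f}")

    With the leading impulse-response sample equal to one, the chosen gain satisfies the usual |1 - L*g(0)| < 1 condition, so the tracking error contracts from part to part.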

  16. Cost Estimation and Control for Flight Systems

    Science.gov (United States)

    Hammond, Walter E.; Vanhook, Michael E. (Technical Monitor)

    2002-01-01

    Good program management practices, cost analysis, cost estimation, and cost control for aerospace flight systems are interrelated and depend upon each other. The best cost control process cannot overcome poor design or poor systems trades that lead to the wrong approach. The project needs robust Technical, Schedule, Cost, Risk, and Cost Risk practices before it can incorporate adequate Cost Control. Cost analysis both precedes and follows cost estimation -- the two are closely coupled with each other and with Risk analysis. Parametric cost estimating relationships and computerized models are most often used. NASA has learned some valuable lessons in controlling cost problems, and recommends use of a summary Project Manager's checklist as shown here.

  17. Design issues of a reinforcement-based self-learning fuzzy controller for petrochemical process control

    Science.gov (United States)

    Yen, John; Wang, Haojin; Daugherity, Walter C.

    1992-01-01

    Fuzzy logic controllers have some often-cited advantages over conventional techniques such as PID control, including easier implementation, accommodation to natural language, and the ability to cover a wider range of operating conditions. One major obstacle that hinders the broader application of fuzzy logic controllers is the lack of a systematic way to develop and modify their rules; as a result the creation and modification of fuzzy rules often depends on trial and error or pure experimentation. One of the proposed approaches to address this issue is a self-learning fuzzy logic controller (SFLC) that uses reinforcement learning techniques to learn the desirability of states and to adjust the consequent part of its fuzzy control rules accordingly. Due to the different dynamics of the controlled processes, the performance of a self-learning fuzzy controller is highly contingent on its design. The design issue has not received sufficient attention. The issues related to the design of a SFLC for application to a petrochemical process are discussed, and its performance is compared with that of a PID and a self-tuning fuzzy logic controller.

  18. Exploring Effects of Multi-Touch Tabletop on Collaborative Fraction Learning and the Relationship of Learning Behavior and Interaction with Learning Achievement

    Science.gov (United States)

    Hwang, Wu-Yuin; Shadiev, Rustam; Tseng, Chi-Wei; Huang, Yueh-Min

    2015-01-01

    This study designed a learning system to facilitate elementary school students' fraction learning. An experiment was carried out to investigate how the system, running on a multi-touch tabletop versus a tablet PC, affects fraction learning. Two groups, a control group and an experimental group, were assigned. Students in the control group learned fractions by using tablet…

  19. Learning from authoritarian teachers: Controlling the situation or controlling yourself can sustain motivation

    Directory of Open Access Journals (Sweden)

    Kathryn Everhart Chaffee

    2014-01-01

    Full Text Available Positive psychology encompasses the study of positive outcomes, optimal functioning, and resilience in difficult circumstances. In the context of language learning, positive outcomes include academic engagement, self-determined motivation, persistence in language learning, and eventually becoming a proficient user of the language. These questionnaire studies extend previous research by addressing how these positive outcomes can be achieved even in adverse circumstances. In Study 1, the primary and secondary control scales of interest were validated using 2468 students at a Canadian university. Study 2 examined the capacity of 100 Canadian language learners to adjust themselves to fit in with their environment, termed secondary control, and how it was related to their motivation for and engagement in language learning and their feelings of anxiety speaking in the classroom. Secondary control in the form of adjusting one’s attitude towards language learning challenges through positive reappraisals was positively associated with self-determined motivation, need satisfaction, and engagement. In further analyses, positive reappraisals were also found to buffer the negative effects of having a controlling instructor on students’ engagement and anxiety. These findings suggest that personal characteristics interact with the learning environment to allow students to function optimally in their language courses even when the teacher is controlling.

  20. Application of a fuzzy control algorithm with improved learning speed to nuclear steam generator level control

    International Nuclear Information System (INIS)

    Park, Gee Yong; Seong, Poong Hyun

    1994-01-01

    In order to reduce the burden of trial-and-error tuning required to obtain the best control performance from a conventional fuzzy control algorithm, a fuzzy control algorithm with a learning function is investigated in this work. This fuzzy control algorithm can build its rule base and tune its membership functions automatically by means of a learning function, which uses data from the control actions of the plant operator or of other controllers. The learning process consists of finding the optimal values of the parameters, namely the membership functions and the rule base, by the gradient descent method. The learning speed of gradient descent is significantly improved in this work by the addition of a modified momentum term. The control algorithm is applied to steam generator level control in computer simulations. The simulation results confirm the good level-control performance of the algorithm and show that the fuzzy learning algorithm generalizes the relation between inputs and outputs and also has excellent disturbance rejection capability
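
    The effect of adding a momentum term to gradient descent, which is the mechanism this abstract credits for the improved learning speed, can be sketched on a generic surrogate loss as below; the loss, learning rate, and momentum factor are illustrative stand-ins for the membership-function and rule-base parameters, and the paper's specific modified momentum is not reproduced.

        import numpy as np

        # Generic quadratic surrogate loss over a parameter vector theta, standing in for
        # the membership-function / rule-base parameters tuned in the paper.
        A = np.diag([1.0, 10.0])                    # deliberately ill-conditioned
        theta_star = np.array([2.0, -1.0])

        def grad(theta):
            return A @ (theta - theta_star)

        def train(momentum):
            theta, v = np.zeros(2), np.zeros(2)
            lr = 0.05
            for k in range(200):
                v = momentum * v - lr * grad(theta)  # momentum accumulates past gradients
                theta = theta + v
                if np.linalg.norm(theta - theta_star) < 1e-3:
                    return k                         # iterations needed to converge
            return 200

        print("plain gradient descent :", train(momentum=0.0), "iterations")
        print("with momentum (0.8)    :", train(momentum=0.8), "iterations")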

  1. Student Modelling in Adaptive E-Learning Systems

    Directory of Open Access Journals (Sweden)

    Clemens Bechter

    2011-09-01

    Full Text Available Most e-Learning systems provide web-based learning in which students access the same online courses via the Internet without any adaptation based on each student's profile and behavior. In an e-Learning system, one size does not fit all. Therefore, it is a challenge to make e-Learning systems suitably “adaptive”. The aim of adaptive e-Learning is to provide students with the appropriate content at the right time, meaning that the system is able to determine a student's knowledge level, keep track of usage, and arrange content automatically for each student for the best learning result. This study presents a proposed system which includes major adaptive features based on a student model. The proposed system initializes the student model to determine the knowledge level of a student when the student registers for the course. After a student starts learning the lessons and doing the activities, the system tracks information about the student until he/she takes a test. The student’s knowledge level, based on the test scores, is updated in the system for use in the adaptation process, which combines the student model with the domain model in order to deliver suitable course contents to the students. In this study, the proposed adaptive e-Learning system is implemented for an “Introduction to Java Programming Language” course, using LearnSquare software. After the system was tested, the results showed positive feedback towards the proposed system, especially its adaptive capability.

  2. Controlling changes - lessons learned from waste management facilities

    International Nuclear Information System (INIS)

    Johnson, B.M.; Koplow, A.S.; Stoll, F.E.; Waetje, W.D.

    1995-01-01

    This paper discusses lessons learned about change control at the Waste Reduction Operations Complex (WROC) and Waste Experimental Reduction Facility (WERF) of the Idaho National Engineering Laboratory (INEL). WROC and WERF have developed and implemented change control and an as-built drawing process and have identified structures, systems, and components (SSCS) for configuration management. The operations have also formed an Independent Review Committee to minimize costs and resources associated with changing documents. WROC and WERF perform waste management activities at the INEL. WROC activities include storage, treatment, and disposal of hazardous and mixed waste. WERF provides volume reduction of solid low-level waste through compaction, incineration, and sizing operations. WROC and WERF's efforts aim to improve change control processes that have worked inefficiently in the past

  3. Robust intelligent backstepping tracking control for uncertain non-linear chaotic systems using H∞ control technique

    International Nuclear Information System (INIS)

    Peng, Y.-F.

    2009-01-01

    The cerebellar model articulation controller (CMAC) is a non-linear adaptive system with built-in simple computation, good generalization capability and fast learning property. In this paper, a robust intelligent backstepping tracking control (RIBTC) system combining an adaptive CMAC and the H∞ control technique is proposed for a class of chaotic systems with unknown system dynamics and external disturbance. In the proposed control system, an adaptive backstepping cerebellar model articulation controller (ABCMAC) is used to mimic an ideal backstepping control (IBC), and a robust H∞ controller is designed to attenuate the effect of the residual approximation errors and external disturbances with a desired attenuation level. Moreover, all the adaptation laws of the RIBTC system are derived based on Lyapunov stability analysis, the Taylor linearization technique and H∞ control theory, so that the stability of the closed-loop system and the H∞ tracking performance can be guaranteed. Finally, three application examples, including a Duffing-Holmes chaotic system, a Genesio chaotic system and a Sprott circuit system, are used to demonstrate the effectiveness and performance of the proposed robust control technique.

  4. Measuring strategic control in artificial grammar learning.

    Science.gov (United States)

    Norman, Elisabeth; Price, Mark C; Jones, Emma

    2011-12-01

    In response to concerns with existing procedures for measuring strategic control over implicit knowledge in artificial grammar learning (AGL), we introduce a more stringent measurement procedure. After two separate training blocks which each consisted of letter strings derived from a different grammar, participants either judged the grammaticality of novel letter strings with respect to only one of these two grammars (pure-block condition), or had the target grammar varying randomly from trial to trial (novel mixed-block condition) which required a higher degree of conscious flexible control. Random variation in the colour and font of letters was introduced to disguise the nature of the rule and reduce explicit learning. Strategic control was observed both in the pure-block and mixed-block conditions, and even among participants who did not realise the rule was based on letter identity. This indicated detailed strategic control in the absence of explicit learning. Copyright © 2011 Elsevier Inc. All rights reserved.

  5. Learning feedforward controller for a mobile robot vehicle

    NARCIS (Netherlands)

    Starrenburg, J.G.; van Luenen, W.T.C.; Oelen, W.; van Amerongen, J.

    1996-01-01

    This paper describes the design and realisation of an on-line learning posetracking controller for a three-wheeled mobile robot vehicle. The controller consists of two components. The first is a constant-gain feedback component, designed on the basis of a second-order model. The second is a learning

  6. Control systems for power electronics a practical guide

    CERN Document Server

    Patil, Mahesh

    2015-01-01

    The scope of the book covers most aspects of a primer on power electronics, starting from a simple diode bridge and progressing to a DC-DC converter using PWM control. The thyristor bridge and the mechanism of designing a closed-loop system are discussed in chapters one, two and three. The concepts are applied in the fourth chapter in a case study of a buck converter which uses MOSFETs as switching devices, and the closed-loop system is elaborated in the fifth chapter. Chapter six focuses on embedded system basics and the implementation of controls in the digital domain. Chapter seven is a case study of the application of an embedded control system to a DC motor. With this book, the reader will find it easy to work on practical control systems with microcontroller implementation. The core intent of this book is to provide an accelerated learning path to practical control system engineering and to transform control theory into an implementable control system through electronics. Illustrations are provided for most of...

  7. Switched Two-Level H∞ and Robust Fuzzy Learning Control of an Overhead Crane

    Directory of Open Access Journals (Sweden)

    Kao-Ting Hung

    2013-01-01

    Full Text Available Overhead cranes are typical dynamic systems which can be modeled as a combination of a nominal linear part and a highly nonlinear part. For such kind of systems, we propose a control scheme that deals with each part separately, yet ensures global Lyapunov stability. The former part is readily controllable by the H∞ PDC techniques, and the latter part is compensated by fuzzy mixture of affine constants, leaving the remaining unmodeled dynamics or modeling error under robust learning control using the Nelder-Mead simplex algorithm. Comparison with the adaptive fuzzy control method is given via simulation studies, and the validity of the proposed control scheme is demonstrated by experiments on a prototype crane system.
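
    The role of the Nelder-Mead simplex here, tuning controller parameters against a simulated closed-loop cost without gradients, can be sketched as follows; the toy saturated double integrator and the PD-like gains are stand-ins for the crane model and the fuzzy residual terms actually tuned in the paper.

        import numpy as np
        from scipy.optimize import minimize

        def closed_loop_cost(gains):
            """Simulate a toy double integrator (a stand-in for the crane) under PD-like gains
            and return an integral-squared-error cost; the paper tunes a fuzzy residual term instead."""
            kp, kd = gains
            x, v = 1.0, 0.0                       # initial offset and velocity
            dt, cost = 0.01, 0.0
            for _ in range(1000):
                u = -kp * x - kd * v
                u = np.clip(u, -10.0, 10.0)       # actuator saturation (the "nonlinear" part)
                x, v = x + dt * v, v + dt * u
                cost += dt * (x * x + 0.01 * u * u)
            return cost

        result = minimize(closed_loop_cost, x0=[1.0, 1.0], method="Nelder-Mead")
        print("tuned gains:", result.x, "cost:", result.fun)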

  8. Brain Computer Interface Learning for Systems Based on Electrocorticography and Intracortical Microelectrode Arrays

    Directory of Open Access Journals (Sweden)

    Shivayogi V Hiremath

    2015-06-01

    Full Text Available A brain-computer interface (BCI system transforms neural activity into control signals for external devices in real time. A BCI user needs to learn to generate specific cortical activity patterns to control external devices effectively. We call this process BCI learning, and it often requires significant effort and time. Therefore, it is important to study this process and develop novel and efficient approaches to accelerate BCI learning. This article reviews major approaches that have been used for BCI learning, including computer-assisted learning, co-adaptive learning, operant conditioning, and sensory feedback. We focus on BCIs based on electrocorticography and intracortical microelectrode arrays for restoring motor function. This article also explores the possibility of brain modulation techniques in promoting BCI learning, such as electrical cortical stimulation, transcranial magnetic stimulation, and optogenetics. Furthermore, as proposed by recent BCI studies, we suggest that BCI learning is in many ways analogous to motor and cognitive skill learning, and therefore skill learning should be a useful metaphor to model BCI learning.

  9. Statistical learning methods: Basics, control and performance

    Energy Technology Data Exchange (ETDEWEB)

    Zimmermann, J. [Max-Planck-Institut fuer Physik, Foehringer Ring 6, 80805 Munich (Germany)]. E-mail: zimmerm@mppmu.mpg.de

    2006-04-01

    The basics of statistical learning are reviewed with a special emphasis on general principles and problems for all different types of learning methods. Different aspects of controlling these methods in a physically adequate way will be discussed. All principles and guidelines will be exercised on examples for statistical learning methods in high energy and astrophysics. These examples prove in addition that statistical learning methods very often lead to a remarkable performance gain compared to the competing classical algorithms.

  10. Statistical learning methods: Basics, control and performance

    International Nuclear Information System (INIS)

    Zimmermann, J.

    2006-01-01

    The basics of statistical learning are reviewed with a special emphasis on general principles and problems for all different types of learning methods. Different aspects of controlling these methods in a physically adequate way will be discussed. All principles and guidelines will be exercised on examples for statistical learning methods in high energy and astrophysics. These examples prove in addition that statistical learning methods very often lead to a remarkable performance gain compared to the competing classical algorithms

  11. Micro Learning: A Modernized Education System

    Directory of Open Access Journals (Sweden)

    Omer Jomah

    2016-03-01

    Full Text Available Micro learning is grounded in an understanding of how the human brain is wired for learning, rather than in any particular approach or system. It is one of the best and most frequently used approaches for 21st-century learners. Micro learning is engaging because of the way it teaches and delivers content in small, very specific bursts, with the learners deciding what and when to learn. Content, time, curriculum, form, process, mediality, and learning type are the dimensions of micro learning. This paper discusses micro learning and the micro-content management system. The study reflects the views of different users and analyzes the collected data. Finally, the pros and cons are discussed.

  12. Self-Learning Variable Structure Control for a Class of Sensor-Actuator Systems

    Science.gov (United States)

    Chen, Sanfeng; Li, Shuai; Liu, Bo; Lou, Yuesheng; Liang, Yongsheng

    2012-01-01

    Variable structure strategy is widely used for the control of sensor-actuator systems modeled by Euler-Lagrange equations. However, accurate knowledge of the model structure and model parameters is often required for the control design. In this paper, we consider model-free variable structure control of a class of sensor-actuator systems, where only the online input and output of the system are available while the mathematical model of the system is unknown. The problem is formulated from an optimal control perspective and the implicit form of the control law is obtained analytically by using the principle of optimality. The control law and the optimal cost function are then solved explicitly in an iterative manner. Simulations demonstrate the effectiveness and the efficiency of the proposed method. PMID:22778633

  13. Approaches to Learning to Control Dynamic Uncertainty

    Directory of Open Access Journals (Sweden)

    Magda Osman

    2015-10-01

    Full Text Available In dynamic environments, when faced with a choice of which learning strategy to adopt, do people choose to mostly explore (maximizing their long term gains) or exploit (maximizing their short term gains)? More to the point, how does this choice of learning strategy influence one’s later ability to control the environment? In the present study, we explore whether people’s self-reported learning strategies and levels of arousal (i.e., surprise, stress) correspond to performance measures of controlling a Highly Uncertain or Moderately Uncertain dynamic environment. Generally, self-reports suggest a preference for exploring the environment to begin with. After which, those in the Highly Uncertain environment generally indicated they exploited more than those in the Moderately Uncertain environment; this difference did not impact on performance on later tests of people’s ability to control the dynamic environment. Levels of arousal were also differentially associated with the uncertainty of the environment. Going beyond behavioral data, our model of dynamic decision-making revealed that, in actual fact, there was no difference in exploitation levels between those in the highly uncertain or moderately uncertain environments, but there were differences based on sensitivity to negative reinforcement. We consider the implications of our findings with respect to learning and strategic approaches to controlling dynamic uncertainty.

  14. Learning Management Systems and Comparison of Open Source Learning Management Systems and Proprietary Learning Management Systems

    Directory of Open Access Journals (Sweden)

    Yücel Yılmaz

    2016-04-01

    Full Text Available The concept of learning has been gaining ever greater importance for individuals, businesses and communities in the age of information. At the same time, developments in information and communication technologies strongly affect learning activities. With these technologies, the barriers of time and space that constrain learning activities largely disappear, and it becomes easier to carry out these activities more effectively. Many questions remain regarding the selection of the learning management system (LMS) to be used for managing e-learning processes in organizations that conduct educational practices, including universities, companies, non-profit organizations, etc. The main questions are as follows: Shall we choose an open source LMS or a commercial LMS? Can the selected LMS meet the existing needs and future potential needs of the organization? What are the possibilities for technical support in the management of the LMS? What kinds of problems may be experienced in the use of the LMS, and how can these problems be solved? How effective can officials in the organization be in the management of the LMS? In this study, e-learning and the concept of the LMS are discussed first, and in the next section, as answers to these questions, open source LMSs and centrally developed LMSs are examined and their advantages and disadvantages relative to each other are discussed.

  15. Concurrent Learning of Control in Multi agent Sequential Decision Tasks

    Science.gov (United States)

    2018-04-17

    Concurrent Learning of Control in Multi-agent Sequential Decision Tasks. The overall objective of this project was to develop multi-agent reinforcement... learning (MARL) approaches for intelligent agents to autonomously learn distributed control policies in decentralized partially observable... learning of policies in Dec-POMDPs, established performance bounds, and evaluated these algorithms both theoretically and empirically. The views...

  16. Efficient control of mechatronic systems in dynamic motion tasks

    Directory of Open Access Journals (Sweden)

    Despotova Desislava

    2018-01-01

    Full Text Available Robots and powered exoskeletons often have complex and non-linear dynamics due to friction, elasticity, and changing load. The proposed study addresses various types of robots that have to perform dynamic point-to-point motion tasks (PTPMT). The performance demands are for faster motion, higher positioning accuracy, and lower energy consumption. For a given motion task, it is of primary importance to study the structure and controllability of the corresponding controlled system. The following natural decentralized controllability condition is assumed: the signs of any control input and the corresponding output (the acceleration) are the same, at least when the control input is at its maximum absolute value. Then we find explicit necessary and sufficient conditions on the control transfer matrix that can guarantee robust controllability in the face of arbitrary, but bounded, disturbances. Further on, we propose a generic optimisation approach for control learning synthesis of various types of robotic systems in PTPMT. Our procedure for iterative learning control (LC) has the following main steps: (1) choose a set of appropriate test control functions; (2) define the most relevant input-output pairs; and (3) solve the shooting equations and perform control parameter optimisation. We give several examples to explain our controllability and optimisation concepts.

  17. Active controllers and the time duration to learn a task

    Science.gov (United States)

    Repperger, D. W.; Goodyear, C.

    1986-01-01

    An active controller was used to help train naive subjects involved in a compensatory tracking task. The controller is called active in this context because it moves the subject's hand in a direction that improves tracking. It is of interest here to question whether the active controller helps the subject learn the task more rapidly than the passive controller. Six subjects, inexperienced in compensatory tracking, were run to asymptotic root-mean-square error tracking levels with an active controller or a passive controller. The time required to learn the task was defined in several different ways. The results of the different measures of learning were examined across pools of subjects and across controllers using statistical tests. The comparison between the active controller and the passive controller, as to their ability to accelerate the learning process as well as to reduce levels of asymptotic tracking error, is reported here.

  18. VAR control in distribution systems by using artificial intelligence techniques

    Energy Technology Data Exchange (ETDEWEB)

    Golkar, M.A. [Curtin Univ. of Technology, Sarawak (Malaysia). School of Engineering and Science

    2005-07-01

    This paper reviewed artificial intelligence techniques used in VAR control systems. Reactive power controls in distribution systems were also reviewed. While artificial intelligence methods are widely used in power control systems, the techniques require extensive human knowledge bases and experience in order to operate correctly. Expert systems use knowledge and interface procedures to solve problems that often require human expertise. Expert systems often cause knowledge bottlenecks as they are unable to learn or adapt to new situations. While neural networks possess learning ability, they are computationally expensive. However, test results in recent neural network studies have demonstrated that they work well in a variety of loading conditions. Fuzzy logic techniques are used to accurately represent the operational constraints of power systems. Fuzzy logic has an advantage over other artificial intelligence techniques as it is able to remedy uncertainties in data. Evolutionary computing algorithms use probabilistic transition rules which can search complicated data to determine optimal constraints and parameters. Over 95 per cent of all papers published on power systems use genetic algorithms. It was concluded that hybrid systems using various artificial intelligence techniques are now being used by researchers. 69 refs.

  19. Resolving the Problem of Intelligent Learning Content in Learning Management Systems

    Science.gov (United States)

    Rey-Lopez, Marta; Brusilovsky, Peter; Meccawy, Maram; Diaz-Redondo, Rebeca; Fernandez-Vilas, Ana; Ashman, Helen

    2008-01-01

    Current e-learning standardization initiatives have put much effort into easing interoperability between systems and the reusability of contents. For this to be possible, one of the most relevant areas is the definition of a run-time environment, which allows Learning Management Systems to launch, track and communicate with learning objects.…

  20. Communication and control for networked complex systems

    CERN Document Server

    Peng, Chen; Han, Qing-Long

    2015-01-01

    This book reports on the latest advances in the study of Networked Control Systems (NCSs). It highlights novel research concepts on NCSs; the analysis and synthesis of NCSs with special attention to their networked character; self- and event-triggered communication schemes for conserving limited network resources; and communication and control co-design for improving the efficiency of NCSs. The book will be of interest to university researchers, control and network engineers, and graduate students in the control engineering, communication and network sciences interested in learning the core principles, methods, algorithms and applications of NCSs.

  1. Evaluating Usability of E-Learning Systems in Universities

    OpenAIRE

    Nicholas Kipkurui Kiget; Professor G. Wanyembi; Anselemo Ikoha Peters

    2014-01-01

    The use of e-learning systems has increased significantly in the recent times. E-learning systems are supplementing teaching and learning in universities globally. Kenyan universities have adopted e-learning technologies as means for delivering course content. However despite adoption of these systems, there are considerable challenges facing the usability of the systems. Lecturers and students have different perceptions in regard to the usability of e-learning systems. The aim of this study ...

  2. Analysis of learning curves in the on-the-job training of air traffic controllers

    NARCIS (Netherlands)

    Oprins, E.A.P.B.; Bruggraaff, E.; Roe, R.

    2011-01-01

    This chapter describes a competence-based assessment system, called CBAS, for air traffic control (ATC) simulator and on-the-job training (OJT), developed at Air Traffic Control The Netherlands (LVNL). In contrast with simulator training, learning processes in OJT are difficult to assess, because

  3. Feedback Design Patterns for Math Online Learning Systems

    Science.gov (United States)

    Inventado, Paul Salvador; Scupelli, Peter; Heffernan, Cristina; Heffernan, Neil

    2017-01-01

    Increasingly, computer-based learning systems are used by educators to facilitate learning. Evaluations of several math learning systems show that they result in significant student learning improvements. Feedback provision is one of the key features in math learning systems that contribute to its success. We have recently been uncovering feedback…

  4. The Effect of Contextual Teaching and Learning Combined with Peer Tutoring towards Learning Achievement on Human Digestive System Concept

    Directory of Open Access Journals (Sweden)

    Farhah Abadiyah

    2017-11-01

    Full Text Available This research aims to determine the influence of contextual teaching and learning (CTL) combined with peer tutoring on learning achievement for the human digestive system concept. This research was conducted at a State Senior High School in South Tangerang in the academic year 2016/2017. The research method was a quasi experiment with a nonequivalent pretest-posttest control group design. The sample was taken by simple random sampling. The total sample was 86 students, consisting of 44 students in a control group and 42 students in an experimental group. The research instrument was an objective test consisting of 25 multiple-choice items for each of the pretest and posttest. The research also used observation sheets for teacher and student activity. The result of data analysis using a t-test on the two groups shows that the t-count value was 2.40 and the t-table value was 1.99 at significance level α = 0.05, so that t-count > t-table. This result indicates that there is an influence of contextual teaching and learning (CTL) combined with peer tutoring on learning achievement for the human digestive system concept.

  5. Data-Driven Based Asynchronous Motor Control for Printing Servo Systems

    Science.gov (United States)

    Bian, Min; Guo, Qingyun

    Modern digital printing equipment aims at environmentally friendly operation with high dynamic performance and control precision and low vibration and abrasion. A high-performance motion control system is therefore required for printing servo systems. A control system for the asynchronous motor based on data acquisition is proposed, and an iterative learning control (ILC) algorithm is studied. PID control is widely used in motion control, but it is sensitive to disturbances and to variations in the model parameters. ILC applies the historical error data and the present control signals to approximate the control signal directly, so that the expected trajectory can be tracked fully without knowledge of the system model or structure. The motor control algorithm based on ILC and PID is constructed and simulation results are given. The results show that the data-driven control method deals effectively with bounded disturbances in the motion control of printing servo systems.

  6. The Development of Learning Management System Using Edmodo

    Science.gov (United States)

    Joko; Septia Wulandari, Gayuh

    2018-04-01

    The development of a Learning Management System (LMS) can serve as an online learning medium in which the teacher manages the delivery of material and the assignment of tasks. This study aims to: 1) determine the validity of learning devices using an LMS with Edmodo, 2) determine the students' response to the LMS implementation using Edmodo, and 3) determine the difference in learning outcomes between students taught using the LMS with Edmodo and students taught with the Direct Learning Model (DLM). The research method is quasi-experimental, using a control group pretest-posttest design. The population of the study was the students at SMKN 1 Sidoarjo. The research sample was class X TITL 1 as the control group and class X TITL 2 as the experimental group. Rating scales were used to analyze the validity data and the students' responses, and a t-test was used to examine the difference in learning outcomes at a significance level of 0.05. The results of the research show: 1) the average validity of the learning devices using Edmodo was 88.14%, with lesson plan validity of 92.45%, pretest-posttest validity of 89.15%, learning material validity of 84.64%, and affective and psychomotor-portfolio observation sheet validity of 86.33%, all meeting the 'very good' criteria and thus very suitable for use in research; 2) the students' response questionnaire after learning with the LMS using Edmodo scored 86.03%, in the 'very good' category, and students agreed that Edmodo can be used in learning; and 3) for the learning outcomes of the LMS using Edmodo compared with DLM: a) there is a significant difference in cognitive learning outcomes between students taught using Edmodo and students using DLM, with an average cognitive outcome of 81.69 for the Edmodo group compared to 76.39 for the DLM group; b) there is a difference in affective learning outcomes, with an average affective outcome of 83.50 for the Edmodo group compared to 80.34 for the DLM group; and c) there is

  7. MECAR (Main Ring Excitation Controller and Regulator): A real time learning regulator for the Fermilab Main Ring or the Main Injector synchrotron

    International Nuclear Information System (INIS)

    Flora, R.; Martin, K.; Moibenko, A.; Pfeffer, H.; Wolff, D.; Prieto, P.; Hays, S.

    1995-04-01

    The real time computer for controlling and regulating the FNAL Main Ring power supplies has been upgraded with a new learning control system. The learning time of the system has been reduced by an order of magnitude, mostly through the implementation of a 95 tap FIR filter in the learning algorithm. The magnet system consists of three buses, which must track each other during a ramp from 100 to 1700 amps at a 2.4 second repetition rate. This paper will present the system configuration and the tools used during development and testing
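
    The pattern of learning a feedforward correction from one cycle's error and smoothing it with an FIR filter can be sketched generically as below; the flat 95-tap filter, the toy repetitive plant, and the learning gain are illustrative assumptions and not the actual MECAR regulator design.

        import numpy as np

        N_TAPS = 95
        fir = np.ones(N_TAPS) / N_TAPS           # simple low-pass FIR (the real filter is designed, not flat)
        n = 2400                                 # samples per 2.4 s ramp cycle (illustrative)
        ref = np.linspace(100.0, 1700.0, n)      # commanded current ramp, in amps

        def plant(command):
            """Toy repetitive plant: a gain error plus a fixed, cycle-repeating disturbance."""
            disturbance = 5.0 * np.sin(2 * np.pi * np.arange(n) / n)
            return 0.98 * command + disturbance

        feedforward = np.zeros(n)
        for cycle in range(10):
            measured = plant(ref + feedforward)
            error = ref - measured
            # Learn: add the FIR-filtered error from this cycle to next cycle's feedforward.
            correction = np.convolve(error, fir, mode="same")
            feedforward += 0.5 * correction
            print(f"cycle {cycle}, RMS error {np.sqrt(np.mean(error**2)):8.3f} A")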

  8. Self-learning fuzzy controllers based on temporal back propagation

    Science.gov (United States)

    Jang, Jyh-Shing R.

    1992-01-01

    This paper presents a generalized control strategy that enhances fuzzy controllers with self-learning capability for achieving prescribed control objectives in a near-optimal manner. This methodology, termed temporal back propagation, is model-insensitive in the sense that it can deal with plants that can be represented in a piecewise-differentiable format, such as difference equations, neural networks, GMDH structures, and fuzzy models. Regardless of the numbers of inputs and outputs of the plants under consideration, the proposed approach can either refine the fuzzy if-then rules obtained from human experts, or automatically derive the fuzzy if-then rules if human experts are not available. The inverted pendulum system is employed as a test-bed to demonstrate the effectiveness of the proposed control scheme and the robustness of the acquired fuzzy controller.

  9. Multiobjective Reinforcement Learning for Traffic Signal Control Using Vehicular Ad Hoc Network

    Directory of Open Access Journals (Sweden)

    Houli Duan

    2010-01-01

    Full Text Available We propose a new multiobjective control algorithm based on reinforcement learning for urban traffic signal control, named multi-RL. A multiagent structure is used to describe the traffic system. A vehicular ad hoc network is used for the data exchange among agents. A reinforcement learning algorithm is applied to predict the overall value of the optimization objective given the vehicles' states. The policy which minimizes the cumulative value of the optimization objective is regarded as the optimal one. In order to make the method adaptive to various traffic conditions, we also introduce a multiobjective control scheme in which the optimization objective is selected adaptively according to real-time traffic states. The optimization objectives include the number of vehicle stops, the average waiting time, and the maximum queue length of the next intersection. In addition, we also accommodate priority control for buses and emergency vehicles in our model. The simulation results indicate that our algorithm performs more efficiently than traditional traffic light control methods.
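
    A stripped-down, single-objective version of the underlying reinforcement learning loop (tabular Q-learning on a crude two-phase intersection with Poisson arrivals) is sketched below to show the control structure; the multiobjective switching, the VANET data exchange, and the traffic model of the paper are not reproduced, and all numbers are assumptions.

        import numpy as np

        rng = np.random.default_rng(0)
        MAX_Q = 10                     # queues are capped/discretized at 10 vehicles
        Q = np.zeros((MAX_Q + 1, MAX_Q + 1, 2))
        alpha, gamma_, eps = 0.1, 0.95, 0.1

        def step(q0, q1, a):
            """One signal interval: Poisson arrivals, the green approach discharges up to 3 cars."""
            q0 = min(MAX_Q, q0 + rng.poisson(1.0))
            q1 = min(MAX_Q, q1 + rng.poisson(1.5))
            if a == 0:
                q0 = max(0, q0 - 3)
            else:
                q1 = max(0, q1 - 3)
            reward = -(q0 + q1)        # objective here: minimize total queue length
            return q0, q1, reward

        q0 = q1 = 0
        for t in range(200_000):
            a = int(rng.integers(2)) if rng.random() < eps else int(np.argmax(Q[q0, q1]))
            nq0, nq1, r = step(q0, q1, a)
            Q[q0, q1, a] += alpha * (r + gamma_ * Q[nq0, nq1].max() - Q[q0, q1, a])
            q0, q1 = nq0, nq1

        print("policy when both queues are long:",
              "green to approach 0" if Q[8, 8].argmax() == 0 else "green to approach 1")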

  10. Fuzzy control in robot-soccer, evolutionary learning in the first layer of control

    Directory of Open Access Journals (Sweden)

    Peter J Thomas

    2003-02-01

    Full Text Available In this paper an evolutionary algorithm is developed to learn a fuzzy knowledge base for the control of a soccer-playing micro-robot, driving it from any configuration belonging to a grid of initial configurations to hit the ball along the ball-to-goal line of sight. The knowledge base uses a relative coordinate system that includes the left and right wheel velocities of the robot. Final path positions allow the robot to face the ball either forwards or in reverse, and take its physical dimensions into account.

  11. An Intelligent System for Determining Learning Style

    Science.gov (United States)

    Ozdemir, Ali; Alaybeyoglu, Aysegul; Mulayim, Naciye; Uysal, Muhammed

    2018-01-01

    In this study, an intelligent system which determines learning style of the students is developed to increase success in effective and easy learning. The importance of the proposed software system is to determine convenience degree of the student's learning style. Personal information form and Dunn Learning Style Preference Survey are used to…

  12. Learning Markov models for stationary system behaviors

    DEFF Research Database (Denmark)

    Chen, Yingke; Mao, Hua; Jaeger, Manfred

    2012-01-01

    Establishing an accurate model for formal verification of an existing hardware or software system is often a manual process that is both time consuming and resource demanding. In order to ease the model construction phase, methods have recently been proposed for automatically learning accurate...... to a single long observation sequence, and in these situations existing automatic learning methods cannot be applied. In this paper, we adapt algorithms for learning variable order Markov chains from a single observation sequence of a target system, so that stationary system properties can be verified using...... the learned model. Experiments demonstrate that system properties (formulated as stationary probabilities of LTL formulas) can be reliably identified using the learned model....
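
    The basic step of learning a Markov chain from one long observation sequence and reading off stationary behaviour can be sketched as below for a first-order chain; the paper's variable-order construction and the LTL verification machinery are omitted, and the synthetic sequence is an assumption.

        import numpy as np

        # One long observation sequence from the running system (here: synthetic, 3 symbols).
        rng = np.random.default_rng(1)
        P_true = np.array([[0.8, 0.2, 0.0],
                           [0.1, 0.7, 0.2],
                           [0.3, 0.0, 0.7]])
        seq = [0]
        for _ in range(100_000):
            seq.append(rng.choice(3, p=P_true[seq[-1]]))

        # Estimate a first-order Markov chain by counting transitions (add-one smoothing).
        counts = np.ones((3, 3))
        for a, b in zip(seq, seq[1:]):
            counts[a, b] += 1
        P_hat = counts / counts.sum(axis=1, keepdims=True)

        # Stationary distribution of the learned chain: left eigenvector of P_hat for eigenvalue 1.
        vals, vecs = np.linalg.eig(P_hat.T)
        pi = np.real(vecs[:, np.argmin(np.abs(vals - 1))])
        pi = pi / pi.sum()
        print("estimated stationary distribution:", np.round(pi, 3))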

  13. Organisational reporting and learning systems: Innovating inside and outside of the box.

    Science.gov (United States)

    Sujan, Mark; Furniss, Dominic

    2015-01-01

    Reporting and learning systems are key organisational tools for the management and prevention of clinical risk. However, current approaches, such as incident reporting, are struggling to meet expectations of turning health systems like the UK National Health Service (NHS) into learning organisations. This article aims to open up debate on the potential for novel reporting and learning systems in healthcare, by reflecting on experiences from two recent projects: Proactive Risk Monitoring in Healthcare (PRIMO) and Errordiary in Healthcare. These two approaches demonstrate how paying attention to ordinary, everyday clinical work can derive useful learning and active discussion about clinical risk. We argue that innovations in reporting and learning systems might come from both inside and outside of the box. 'Inside' being along traditional paths of controlled organisational innovation. 'Outside' in the sense that inspiration comes outside of the healthcare domain, or more extremely, outside official channels through external websites and social media (e.g. patient forums, public review sites, whistleblower blogs and Twitter streams). Reporting routes that bypass official channels could empower staff and patient activism, and turn out to be a driver to challenge organisational processes, assumptions and priorities where the organisation is failing and has become unresponsive.

  14. Online and Compositional Learning of Controllers with Application to Floor Heating

    DEFF Research Database (Denmark)

    Larsen, Kim Guldstrand; Mikučionis, Marius; Muniz, Marco

    2016-01-01

    ...... of continuous variables (e.g. temperature readings in the different rooms) and even after digitization, the state-space remains huge and cannot be fully explored. We suggest a general and scalable methodology for controller synthesis for such systems. Instead of off-line synthesis of a controller for all...... possible input temperatures and an arbitrary time horizon, we propose an on-line synthesis methodology, where we periodically compute the controller only for the near future based on the current sensor readings. This computation is itself done by employing machine learning in order to avoid enumeration......

  15. Flyback CCM inverter for AC module applications: iterative learning control and convergence analysis

    Science.gov (United States)

    Lee, Sung-Ho; Kim, Minsung

    2017-12-01

    This paper presents an iterative learning controller (ILC) for an interleaved flyback inverter operating in continuous conduction mode (CCM). The flyback CCM inverter features small output ripple current, high efficiency, and low cost, and hence it is well suited for photovoltaic power applications. However, it exhibits non-minimum phase behaviour, because its transfer function from control duty to output current has a right-half-plane (RHP) zero. Moreover, the flyback CCM inverter suffers from the time-varying grid voltage disturbance. Thus, a conventional control scheme results in inaccurate output tracking. To overcome these problems, an ILC is developed and applied, for the first time, to the flyback inverter operating in CCM. The ILC makes use of both predictive and current learning terms, which help the system output converge to the reference trajectory. We take into account the nonlinear averaged model and use it to construct the proposed controller. It is proven that the system output globally converges to the reference trajectory in the absence of state disturbances, output noise, or initial state errors. Numerical simulations are performed to validate the proposed control scheme, and experiments using a 400-W AC module prototype are carried out to demonstrate its practical feasibility.

  16. VIRTUAL LABORATORY IN DISTANCE LEARNING SYSTEM

    Directory of Open Access Journals (Sweden)

    Е. Kozlovsky

    2011-11-01

    Full Text Available Questions concerning the design and the choice of technologies for creating a virtual laboratory for a distance learning system are considered. The distance learning system «Kherson Virtual University» is used as an illustration.

  17. Learn How to Control Asthma

    Science.gov (United States)

    Asthma is a disease that affects your lungs. ...

  18. Implementation Challenges for Multivariable Control: What You Did Not Learn in School

    Science.gov (United States)

    Garg, Sanjay

    2008-01-01

    Multivariable control allows controller designs that can provide decoupled command tracking and robust performance in the presence of modeling uncertainties. Although the last two decades have seen extensive development of multivariable control theory and example applications to complex systems in software/hardware simulations, there are no production flying systems, aircraft or spacecraft, that use multivariable control. This is because of the tremendous challenges associated with the implementation of such multivariable control designs. Unfortunately, the curriculum in schools does not provide sufficient time to give students an exposure to these implementation challenges. The objective of this paper is to share the lessons learned by a practitioner of multivariable control in the process of applying some of the modern control theory to the Integrated Flight Propulsion Control (IFPC) design for an advanced Short Take-Off Vertical Landing (STOVL) aircraft simulation.

  19. A Computer-Assisted Learning Model Based on the Digital Game Exponential Reward System

    Science.gov (United States)

    Moon, Man-Ki; Jahng, Surng-Gahb; Kim, Tae-Yong

    2011-01-01

    The aim of this research was to construct a motivational model which would stimulate voluntary and proactive learning using digital game methods offering players more freedom and control. The theoretical framework of this research lays the foundation for a pedagogical learning model based on digital games. We analyzed the game reward system, which…

  20. HETDEX tracker control system design and implementation

    Science.gov (United States)

    Beno, Joseph H.; Hayes, Richard; Leck, Ron; Penney, Charles; Soukup, Ian

    2012-09-01

    To enable the Hobby-Eberly Telescope Dark Energy Experiment, The University of Texas at Austin Center for Electromechanics and McDonald Observatory developed a precision tracker and control system - an 18,000 kg robot to position a 3,100 kg payload within 10 microns of a desired dynamic track. Performance requirements to meet science needs and safety requirements that emerged from detailed Failure Modes and Effects Analysis resulted in a system of 13 precision controlled actuators and 100 additional analog and digital devices (primarily sensors and safety limit switches). Due to this complexity, demanding accuracy requirements, and stringent safety requirements, two independent control systems were developed. First, a versatile and easily configurable centralized control system that links with modeling and simulation tools during the hardware and software design process was deemed essential for normal operation including motion control. A second, parallel, control system, the Hardware Fault Controller (HFC) provides independent monitoring and fault control through a dedicated microcontroller to force a safe, controlled shutdown of the entire system in the event a fault is detected. Motion controls were developed in a Matlab-Simulink simulation environment, and coupled with dSPACE controller hardware. The dSPACE real-time operating system collects sensor information; motor commands are transmitted over a PROFIBUS network to servo amplifiers and drive motor status is received over the same network. To interface the dSPACE controller directly to absolute Heidenhain sensors with EnDat 2.2 protocol, a custom communication board was developed. This paper covers details of operational control software, the HFC, algorithms, tuning, debugging, testing, and lessons learned.

  1. U.S. national nuclear material control and accounting system

    International Nuclear Information System (INIS)

    Taylor, S; Terentiev, V G

    1998-01-01

    Issues related to nuclear material control and accounting and illegal dealing in these materials were discussed at the April 19--20, 1996 Moscow summit meeting (G7 + Russia). The declaration from this meeting reaffirmed that governments are responsible for the safety of all nuclear materials in their possession and for the effectiveness of the national control and accounting system for these materials. The Russian delegation at this meeting stated that ''the creation of a nuclear materials accounting, control, and physical protection system has become a government priority''. Therefore, in order to create a government nuclear material control and accounting system for the Russian Federation, it is critical to study the structure, operating principles, and regulations supporting the control and accounting of nuclear materials in the national systems of nuclear powers. In particular, Russian specialists have a definite interest in learning about the National Nuclear Material Control and Accounting System of the US, which has been operating successfully as an automated system since 1968

  2. Maximum power point tracking-based control algorithm for PMSG wind generation system without mechanical sensors

    International Nuclear Information System (INIS)

    Hong, Chih-Ming; Chen, Chiung-Hsing; Tu, Chia-Sheng

    2013-01-01

    Highlights: ► This paper presents MPPT based control for optimal wind energy capture using RBFN. ► MPSO is adopted to adjust the learning rates to improve the learning capability. ► This technique can maintain the system stability and reach the desired performance. ► The EMF in the rotating reference frame is utilized in order to estimate speed. - Abstract: This paper presents maximum-power-point-tracking (MPPT) based control algorithms for optimal wind energy capture using a radial basis function network (RBFN) and a proposed torque observer MPPT algorithm. A high-performance on-line training RBFN, using a back-propagation learning algorithm with a modified particle swarm optimization (MPSO) regulating controller, is designed for the sensorless control of a permanent magnet synchronous generator (PMSG). The MPSO is adopted in this study to adapt the learning rates in the back-propagation process of the RBFN to improve the learning capability. The PMSG is controlled by loss-minimization control with MPPT below the base speed, which corresponds to low and high wind speed, so that the maximum energy can be captured from the wind. The observed disturbance torque is then fed forward to increase the robustness of the PMSG system
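
    The on-line back-propagation training of an RBFN, the core learning element in this abstract, can be sketched in isolation as below; only the output weights are trained, the learning rate is fixed (the paper adapts it with MPSO), and the target mapping is an arbitrary stand-in rather than any wind-turbine characteristic.

        import numpy as np

        rng = np.random.default_rng(2)

        # RBFN: y_hat(x) = sum_j w_j * exp(-(x - c_j)^2 / (2 s^2)); only the output weights w
        # are trained here, one sample at a time, as a stripped-down stand-in for the paper's RBFN.
        centers = np.linspace(-1.0, 1.0, 9)
        width = 0.3
        w = np.zeros_like(centers)
        lr = 0.2                                  # fixed learning rate (the paper adapts this via MPSO)

        def phi(x):
            return np.exp(-((x - centers) ** 2) / (2 * width ** 2))

        def target(x):                            # unknown nonlinear map to be approximated
            return np.sin(2.5 * x) + 0.3 * x

        for step in range(5000):                  # online samples arriving one at a time
            x = rng.uniform(-1.0, 1.0)
            y = target(x) + 0.01 * rng.standard_normal()
            e = y - w @ phi(x)
            w += lr * e * phi(x)                  # gradient step on the instantaneous squared error

        xs = np.linspace(-1, 1, 5)
        print("approximation error at test points:",
              np.round([target(x) - w @ phi(x) for x in xs], 3))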

  3. Continuous residual reinforcement learning for traffic signal control optimization

    NARCIS (Netherlands)

    Aslani, Mohammad; Seipel, Stefan; Wiering, Marco

    2018-01-01

    Traffic signal control can be naturally regarded as a reinforcement learning problem. Unfortunately, it is one of the most difficult classes of reinforcement learning problems owing to its large state space. A straightforward approach to address this challenge is to control traffic signals based on

  4. Teach Them How They Learn: Learning Styles and Information Systems Education

    Science.gov (United States)

    Cegielski, Casey G.; Hazen, Benjamin T.; Rainer, R. Kelly

    2011-01-01

    The rich, interdisciplinary tradition of learning styles is markedly absent in information systems-related research. The current study applies the framework of learning styles to a common educational component of many of today's information systems curricula--object-oriented systems development--in an effort to answer the question as to whether…

  5. A Neuro-Control Design Based on Fuzzy Reinforcement Learning

    DEFF Research Database (Denmark)

    Katebi, S.D.; Blanke, M.

    This paper describes a neuro-control fuzzy critic design procedure based on reinforcement learning. An important component of the proposed intelligent control configuration is the fuzzy credit assignment unit which acts as a critic, and through fuzzy implications provides adjustment mechanisms....... The fuzzy credit assignment unit comprises a fuzzy system with the appropriate fuzzification, knowledge base and defuzzification components. When an external reinforcement signal (a failure signal) is received, sequences of control actions are evaluated and modified by the action applier unit. The desirable...... ones instruct the neuro-control unit to adjust its weights and are simultaneously stored in the memory unit during the training phase. In response to the internal reinforcement signal (set point threshold deviation), the stored information is retrieved by the action applier unit and utilized for re...

  6. Intelligent data analysis for e-learning enhancing security and trustworthiness in online learning systems

    CERN Document Server

    Miguel, Jorge; Xhafa, Fatos

    2016-01-01

    Intelligent Data Analysis for e-Learning: Enhancing Security and Trustworthiness in Online Learning Systems addresses information security within e-Learning based on trustworthiness assessment and prediction. Over the past decade, many learning management systems have appeared in the education market. Security in these systems is essential for protecting against unfair and dishonest conduct-most notably cheating-however, e-Learning services are often designed and implemented without considering security requirements. This book provides functional approaches of trustworthiness analysis, modeling, assessment, and prediction for stronger security and support in online learning, highlighting the security deficiencies found in most online collaborative learning systems. The book explores trustworthiness methodologies based on collective intelligence than can overcome these deficiencies. It examines trustworthiness analysis that utilizes the large amounts of data-learning activities generate. In addition, as proc...

  7. Precursor systems analyses of automated highway systems. Knowledge based systems and learning methods for AHS. Volume 10. Final report, September 1993-February 1995

    Energy Technology Data Exchange (ETDEWEB)

    Schmoltz, J.; Blumer, A.; Noonan, J.; Shedd, D.; Twarog, J.

    1995-06-01

    Managing each AHS vehicle and the AHS system as a whole is an extremely complex undertaking. The authors have investigated and now report on Artificial Intelligence (AI) approaches that can help. In particular, we focus on AI technologies known as Knowledge Based Systems (KBSs) and Learning Methods (LMs). Our primary purpose is to identify opportunities: we identify several problems in AHS and AI technologies that can solve them. Our secondary purpose is to examine in some detail a subset of these opportunities: we examine how KBSs and LMs can help in controlling the high level movements--e.g., keep in lane, change lanes, speed up, slow down--of an automated vehicle. This detailed examination includes the implementation of a prototype system having three primary components. The Tufts Automated Highway System Kit (TAHSK) discrete time micro-level traffic simulator is a generic AHS simulator. TAHSK interfaces with the Knowledge Based Controller (KBCon) knowledge based high level controller, which controls the high level actions of individual AHS vehicles. Finally, TAHSK also interfaces with a reinforcement learning (RL) module that was used to explore the possibilities of RL techniques in an AHS environment.

  8. A Mobile Gamification Learning System for Improving the Learning Motivation and Achievements

    Science.gov (United States)

    Su, C-H.; Cheng, C-H.

    2015-01-01

    This paper aims to investigate how a gamified learning approach influences science learning, achievement and motivation, through a context-aware mobile learning environment, and explains the effects on motivation and student learning. A series of gamified learning activities, based on MGLS (Mobile Gamification Learning System), was developed and…

  9. Game-Theoretic Learning in Distributed Control

    KAUST Repository

    Marden, Jason R.

    2018-01-05

    In distributed architecture control problems, there is a collection of interconnected decision-making components that seek to realize desirable collective behaviors through local interactions and by processing local information. Applications range from autonomous vehicles to energy to transportation. One approach to control of such distributed architectures is to view the components as players in a game. In this approach, two design considerations are the components’ incentives and the rules that dictate how components react to the decisions of other components. In game-theoretic language, the incentives are defined through utility functions, and the reaction rules are online learning dynamics. This chapter presents an overview of this approach, covering basic concepts in game theory, special game classes, measures of distributed efficiency, utility design, and online learning rules, all with the interpretation of using game theory as a prescriptive paradigm for distributed control design.
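
    The chapter treats components as players whose utility functions and online learning rules are design choices. As a loose illustration (not taken from the chapter), the sketch below runs log-linear learning, one common choice of learning dynamics, on a hypothetical three-component channel-selection game; the utilities, neighbourhood graph and temperature are assumptions.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Illustrative only: three "components" each pick one of four channels; a
    # component's (hypothetical) local utility is 1 if no neighbour shares its channel.
    actions = [0, 1, 2, 3]
    neighbours = {0: [1], 1: [0, 2], 2: [1]}

    def utility(player, profile):
        return float(all(profile[n] != profile[player] for n in neighbours[player]))

    def log_linear_step(profile, temperature=0.1):
        """One step of log-linear learning: a randomly chosen player revises its
        action with probability proportional to exp(utility / temperature)."""
        player = int(rng.integers(len(profile)))
        scores = []
        for a in actions:
            trial = dict(profile)
            trial[player] = a
            scores.append(utility(player, trial) / temperature)
        scores = np.array(scores)
        p = np.exp(scores - scores.max())
        p /= p.sum()
        profile[player] = int(rng.choice(actions, p=p))
        return profile

    profile = {i: 0 for i in neighbours}      # every component starts on channel 0
    for _ in range(500):
        profile = log_linear_step(profile)
    print(profile)   # with high probability an interference-free assignment
    ```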

  10. E-learning systems intelligent techniques for personalization

    CERN Document Server

    Klašnja-Milićević, Aleksandra; Ivanović, Mirjana; Budimac, Zoran; Jain, Lakhmi C

    2017-01-01

    This monograph provides a comprehensive research review of intelligent techniques for personalisation of e-learning systems. Special emphasis is given to intelligent tutoring systems as a particular class of e-learning systems, which support and improve the learning and teaching of domain-specific knowledge. A new approach to perform effective personalization based on Semantic web technologies achieved in a tutoring system is presented. This approach incorporates a recommender system based on collaborative tagging techniques that adapts to the interests and level of students' knowledge. These innovations are important contributions of this monograph. Theoretical models and techniques are illustrated on a real personalised tutoring system for teaching the Java programming language. The monograph is directed to students and researchers interested in e-learning and personalization techniques.

  11. Deep Learning-Based Data Forgery Detection in Automatic Generation Control

    Energy Technology Data Exchange (ETDEWEB)

    Zhang, Fengli [Univ. of Arkansas, Fayetteville, AR (United States); Li, Qinghua [Univ. of Arkansas, Fayetteville, AR (United States)

    2017-10-09

    Automatic Generation Control (AGC) is a key control system in the power grid. It is used to calculate the Area Control Error (ACE) based on frequency and tie-line power flow between balancing areas, and then adjust power generation to maintain the power system frequency in an acceptable range. However, attackers might inject malicious frequency or tie-line power flow measurements to mislead AGC into false generation correction, which will harm power grid operation. Such attacks are hard to detect since they do not violate physical power system models. In this work, we propose algorithms based on Neural Networks and the Fourier Transform to detect data forgery attacks in AGC. Different from the few previous works that rely on accurate load prediction to detect data forgery, our solution only uses the ACE data already available in existing AGC systems. In particular, our solution learns the normal patterns of the ACE time series and detects abnormal patterns caused by artificial attacks. Evaluations on the real ACE dataset show that our methods have high detection accuracy.
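
    The record learns the normal patterns of the ACE time series with a neural network and the Fourier transform. The sketch below is only a loose, synthetic-data illustration of that pipeline: FFT-magnitude features are computed over fixed-length ACE windows, and a simple distance-to-normal-profile score (standing in for the paper's neural network) flags windows whose spectrum deviates from the profile learned on attack-free data. The synthetic signal, window length and threshold are assumptions.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    def spectral_features(window):
        """Magnitude spectrum of a detrended ACE window (illustrative features)."""
        w = window - window.mean()
        return np.abs(np.fft.rfft(w))

    def normal_ace(n):
        """Synthetic 'normal' ACE: a slow oscillation plus noise (an assumption)."""
        t = np.arange(n)
        return 0.5 * np.sin(2 * np.pi * t / 60) + 0.1 * rng.standard_normal(n)

    win = 120
    train = normal_ace(win * 200).reshape(200, win)          # attack-free windows
    F = np.array([spectral_features(w) for w in train])
    mu, sigma = F.mean(axis=0), F.std(axis=0) + 1e-9          # learned normal profile

    def anomaly_score(window):
        """Distance of the window's spectrum from the learned normal profile."""
        z = (spectral_features(window) - mu) / sigma
        return float(np.sqrt(np.mean(z ** 2)))

    threshold = np.percentile([anomaly_score(w) for w in train], 99)

    forged = normal_ace(win) + np.linspace(0.0, 2.0, win)     # injected slow ramp
    print(anomaly_score(normal_ace(win)) < threshold)          # expected: True
    print(anomaly_score(forged) > threshold)                   # expected: True
    ```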

  12. A new computational account of cognitive control over reinforcement-based decision-making: Modeling of a probabilistic learning task.

    Science.gov (United States)

    Zendehrouh, Sareh

    2015-11-01

    Recent work in the decision-making field offers an account of dual-system theory for the decision-making process. This theory holds that this process is conducted by two main controllers: a goal-directed system and a habitual system. In the reinforcement learning (RL) domain, habitual behaviors are connected with model-free methods, in which appropriate actions are learned through trial-and-error experiences. However, goal-directed behaviors are associated with model-based methods of RL, in which actions are selected using a model of the environment. Studies on cognitive control also suggest that during processes like decision-making, some cortical and subcortical structures work in concert to monitor the consequences of decisions and to adjust control according to current task demands. Here a computational model is presented based on dual-system theory and the cognitive control perspective of decision-making. The proposed model is used to simulate human performance on a variant of a probabilistic learning task. The basic proposal is that the brain implements a dual controller, while an accompanying monitoring system detects some kinds of conflict, including a hypothetical cost-conflict one. The simulation results address existing theories about two event-related potentials, namely error-related negativity (ERN) and feedback-related negativity (FRN), and explore the best account of them. Based on the results, some testable predictions are also presented. Copyright © 2015 Elsevier Ltd. All rights reserved.

  13. Algorithms for Reinforcement Learning

    CERN Document Server

    Szepesvari, Csaba

    2010-01-01

    Reinforcement learning is a learning paradigm concerned with learning to control a system so as to maximize a numerical performance measure that expresses a long-term objective. What distinguishes reinforcement learning from supervised learning is that only partial feedback is given to the learner about the learner's predictions. Further, the predictions may have long term effects through influencing the future state of the controlled system. Thus, time plays a special role. The goal in reinforcement learning is to develop efficient learning algorithms, as well as to understand the algorithms'
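
    A minimal example of the kind of algorithm the book covers is tabular Q-learning, which improves a control policy from the partial, delayed feedback described above. The toy chain problem, gains and episode handling below are illustrative assumptions, not material from the book.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    # Toy 5-state chain: action 1 moves right (reward on reaching the last state),
    # action 0 moves left. Q-learning improves the policy from partial feedback.
    n_states, n_actions, gamma, alpha, eps = 5, 2, 0.95, 0.1, 0.1

    def step(s, a):
        s2 = min(s + 1, n_states - 1) if a == 1 else max(s - 1, 0)
        return s2, (1.0 if s2 == n_states - 1 else 0.0)

    Q = np.zeros((n_states, n_actions))
    s = 0
    for t in range(20000):
        a = int(rng.integers(n_actions)) if rng.random() < eps else int(np.argmax(Q[s]))
        s2, r = step(s, a)
        # bootstrap on the greedy value of the next state
        Q[s, a] += alpha * (r + gamma * np.max(Q[s2]) - Q[s, a])
        s = 0 if s2 == n_states - 1 else s2        # restart the episode at the goal

    print(np.argmax(Q, axis=1))   # expected: 1 ("move right") in states 0-3;
                                  # the terminal state 4 is never updated
    ```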

  14. Assisted Learning Systems in e-Education

    Directory of Open Access Journals (Sweden)

    Gabriel ZAMFIR

    2014-01-01

    Full Text Available Human society, analyzed as a learning environment, presumes different languages in order to know, to understand or to develop it. This statement results as a default application of the cognitive domain in the educational scientific research, and it highlights a key feature: each essential discovery was available for the entire language-compatible society. E-Society is constructed as an application of E-Science in social services, and it is going to reveal a learning system for each application of the information technology developed for a compatible society. This article is proposed as a conceptual one focused on scientific research and the interrelationship between the building blocks of research, defined as an engine for any designed learning system applied in the cognitive domain. In this approach, educational research becomes a learning system in e-Education. The purpose of this analysis is to configure the teacher assisted learning system and to expose its main principles which could be integrated in standard assisted instruction applications, available in e-Classroom, supporting the design of specific didactic activities.

  15. Personalized E- learning System Based on Intelligent Agent

    Science.gov (United States)

    Duo, Sun; Ying, Zhou Cai

    Lack of personalized learning is the key shortcoming of traditional e-Learning systems. This paper analyzes the personal characteristics in e-Learning activity. In order to provide personalized e-learning, a personalized e-learning system based on an intelligent agent was proposed and realized in the paper. The structure of the system, its work process, the design of the intelligent agent and the realization of the intelligent agent were introduced in the paper. After trial use of the system by a certain network school, we found that the system could improve the learner's initiative participation and provide learners with a personalized knowledge service. Thus, we think it might be a practical solution for realizing self-learning and self-promotion in the lifelong education age.

  16. Barrier Function-Based Neural Adaptive Control With Locally Weighted Learning and Finite Neuron Self-Growing Strategy.

    Science.gov (United States)

    Jia, Zi-Jun; Song, Yong-Duan

    2017-06-01

    This paper presents a new approach to construct neural adaptive control for uncertain nonaffine systems. By integrating locally weighted learning with a barrier Lyapunov function (BLF), a novel control design method is presented to systematically address two critical issues in the neural network (NN) control field: one is how to fulfill the compact-set precondition for NN approximation, and the other is how to use a varying rather than a fixed NN structure to improve the functionality of NN control. A BLF is exploited to ensure that the NN inputs remain bounded during the entire system operation. To account for system nonlinearities, a neuron self-growing strategy is proposed to guide the process of adding new neurons to the system, resulting in a self-adjustable NN structure for better learning capabilities. It is shown that the number of neurons needed to accomplish the control task is finite, and better performance can be obtained with fewer neurons as compared with traditional methods. A salient feature of the proposed method also lies in the continuity of the control action everywhere. Furthermore, the resulting control action is smooth almost everywhere except for a few time instants at which new neurons are added. A numerical example illustrates the effectiveness of the proposed approach.

  17. A Simple and Effective Remedial Learning System with a Fuzzy Expert System

    Science.gov (United States)

    Lin, C.-C.; Guo, K.-H.; Lin, Y.-C.

    2016-01-01

    This study aims at implementing a simple and effective remedial learning system. Based on fuzzy inference, a remedial learning material selection system is proposed for a digital logic course. Two learning concepts of the course have been used in the proposed system: number systems and combinational logic. We conducted an experiment to validate…

  18. Intelligent control of robotic arm/hand systems for the NASA EVA retriever using neural networks

    Science.gov (United States)

    Mclauchlan, Robert A.

    1989-01-01

    Adaptive/general learning algorithms using varying neural network models are considered for the intelligent control of robotic arm plus dextrous hand/manipulator systems. Results are summarized and discussed for the use of the Barto/Sutton/Anderson neuronlike, unsupervised learning controller as applied to the stabilization of an inverted pendulum on a cart system. Recommendations are made for the application of the controller and a kinematic analysis for trajectory planning to simple object retrieval (chase/approach and capture/grasp) scenarios in two dimensions.

  19. Online learning control using adaptive critic designs with sparse kernel machines.

    Science.gov (United States)

    Xu, Xin; Hou, Zhongsheng; Lian, Chuanqiang; He, Haibo

    2013-05-01

    In the past decade, adaptive critic designs (ACDs), including heuristic dynamic programming (HDP), dual heuristic programming (DHP), and their action-dependent ones, have been widely studied to realize online learning control of dynamical systems. However, because neural networks with manually designed features are commonly used to deal with continuous state and action spaces, the generalization capability and learning efficiency of previous ACDs still need to be improved. In this paper, a novel framework of ACDs with sparse kernel machines is presented by integrating kernel methods into the critic of ACDs. To improve the generalization capability as well as the computational efficiency of kernel machines, a sparsification method based on the approximately linear dependence analysis is used. Using the sparse kernel machines, two kernel-based ACD algorithms, that is, kernel HDP (KHDP) and kernel DHP (KDHP), are proposed and their performance is analyzed both theoretically and empirically. Because of the representation learning and generalization capability of sparse kernel machines, KHDP and KDHP can obtain much better performance than previous HDP and DHP with manually designed neural networks. Simulation and experimental results of two nonlinear control problems, that is, a continuous-action inverted pendulum problem and a ball and plate control problem, demonstrate the effectiveness of the proposed kernel ACD methods.
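
    A core ingredient of the record is the sparsification of the kernel machine by approximate linear dependence (ALD) analysis: a visited state is kept as a kernel centre only if its feature-space image cannot be approximated by the centres already stored. The sketch below illustrates that dictionary-construction step in isolation (the critic updates of KHDP/KDHP are omitted); the RBF kernel, threshold and toy state samples are assumptions.

    ```python
    import numpy as np

    rng = np.random.default_rng(4)

    def rbf(x, y, gamma=1.0):
        return np.exp(-gamma * np.sum((x - y) ** 2))

    def build_dictionary(samples, nu=0.1):
        """ALD sparsification: keep a sample only if it cannot be approximated
        (in feature space) by a linear combination of the samples already kept."""
        dictionary = [samples[0]]
        for x in samples[1:]:
            K = np.array([[rbf(a, b) for b in dictionary] for a in dictionary])
            k = np.array([rbf(a, x) for a in dictionary])
            # projection residual of phi(x) onto span{phi(d) : d in dictionary}
            c = np.linalg.solve(K + 1e-8 * np.eye(len(K)), k)
            delta = rbf(x, x) - k @ c
            if delta > nu:
                dictionary.append(x)
        return np.array(dictionary)

    states = rng.uniform(-1, 1, size=(500, 2))     # e.g. visited (angle, velocity) pairs
    D = build_dictionary(states, nu=0.1)
    print(len(D), "of", len(states), "samples kept as kernel centres")
    ```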

  20. Exploring nursing e-learning systems success based on information system success model.

    Science.gov (United States)

    Chang, Hui-Chuan; Liu, Chung-Feng; Hwang, Hsin-Ginn

    2011-12-01

    E-learning is thought of as an innovative approach to enhance nurses' care service knowledge. Extensive research has provided rich information toward system development, course design, and nurses' satisfaction with an e-learning system. However, a comprehensive view in understanding nursing e-learning system success is an important but less focused-on topic. The purpose of this research was to explore the net benefits of nursing e-learning systems based on the updated DeLone and McLean's Information System Success Model. The study used a self-administered questionnaire to collect 208 valid nurses' responses from 21 of Taiwan's medium- and large-scale hospitals that have implemented nursing e-learning systems. The result confirms that the model is sufficient to explore the nurses' use of e-learning systems in terms of intention to use, user satisfaction, and net benefits. However, while the three exogenous quality factors (system quality, information quality, and service quality) were all found to be critical factors affecting user satisfaction, only information quality showed a direct effect on the intention to use. This study provides useful insights for evaluating nursing e-learning system qualities as well as an understanding of nurses' intentions and satisfaction related to performance benefits.

  1. Robust Control Methods for On-Line Statistical Learning

    Directory of Open Access Journals (Sweden)

    Capobianco Enrico

    2001-01-01

    Full Text Available Ensuring that data processing in an experiment is not affected by the presence of outliers is relevant for statistical control and learning studies. Learning schemes should thus be tested for their capacity to handle outliers in the observed training set so as to achieve reliable estimates with respect to the crucial bias and variance aspects. We describe possible ways of endowing neural networks with statistically robust properties by defining feasible error criteria. It is convenient to cast neural nets in state-space representations and apply both Kalman filter and stochastic approximation procedures in order to suggest statistically robustified solutions for on-line learning.
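
    The record argues for robust error criteria so that outliers in the training set do not distort on-line learning. As a minimal illustration only, and not the Kalman-filter or stochastic-approximation schemes the authors propose, the sketch below compares an ordinary least-mean-squares update with one whose error term is passed through a bounded Huber-style influence function; the toy linear model, gains and outlier rate are assumptions.

    ```python
    import numpy as np

    rng = np.random.default_rng(5)

    def huber_psi(e, delta=1.0):
        """Bounded influence function: linear for small errors, clipped for outliers."""
        return np.clip(e, -delta, delta)

    # online learning of y = w.x with 10% gross outliers in the targets
    w_true = np.array([2.0, -1.0])
    w_lms, w_robust, lr = np.zeros(2), np.zeros(2), 0.02
    for t in range(5000):
        x = rng.standard_normal(2)
        y = w_true @ x + 0.1 * rng.standard_normal()
        if rng.random() < 0.1:
            y += rng.choice([-50.0, 50.0])                    # gross outlier
        w_lms += lr * (y - w_lms @ x) * x                     # ordinary LMS (squared error)
        w_robust += lr * huber_psi(y - w_robust @ x) * x      # robustified update

    print("LMS estimate    :", np.round(w_lms, 2))
    print("robust estimate :", np.round(w_robust, 2))         # expected closer to [2, -1]
    ```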

  2. A proof-of-principle simulation for closed-loop control based on preexisting experimental thalamic DBS-enhanced instrumental learning.

    Science.gov (United States)

    Wang, Ching-Fu; Yang, Shih-Hung; Lin, Sheng-Huang; Chen, Po-Chuan; Lo, Yu-Chun; Pan, Han-Chi; Lai, Hsin-Yi; Liao, Lun-De; Lin, Hui-Ching; Chen, Hsu-Yan; Huang, Wei-Chen; Huang, Wun-Jhu; Chen, You-Yin

    Deep brain stimulation (DBS) has been applied as an effective therapy for treating Parkinson's disease or essential tremor. Several open-loop DBS control strategies have been developed for clinical experiments, but they are limited by short battery life and inefficient therapy. Therefore, many closed-loop DBS control systems have been designed to tackle these problems by automatically adjusting the stimulation parameters via feedback from neural signals, which has been reported to reduce the power consumption. However, when the association between the biomarkers of the model and stimulation is unclear, it is difficult to develop an optimal control scheme for other DBS applications, i.e., DBS-enhanced instrumental learning. Furthermore, few studies have investigated the effect of closed-loop DBS control on cognitive function, such as instrumental skill learning, or implemented it in simulation environments. In this paper, we propose a proof-of-principle design for a closed-loop DBS system, cognitive-enhancing DBS (ceDBS), which enhanced skill learning based on in vivo experimental data. The ceDBS acquired the local field potential (LFP) signal from the thalamic central lateral (CL) nuclei of animals through a neural signal processing system. A strong coupling of the theta oscillation (4-7 Hz) and the learning period was found in the water reward-related lever-pressing learning task. Therefore, the theta-band power ratio, defined as the ratio of the averaged theta-band power to the averaged total-band (1-55 Hz) power, could be used as a physiological marker for enhancement of instrumental skill learning. The on-line extraction of the theta-band power ratio was implemented on a field-programmable gate array (FPGA). An autoregressive with exogenous inputs (ARX)-based predictor was designed to construct a CL-thalamic DBS model and forecast the future physiological marker according to the past physiological marker and applied DBS. The prediction could further assist the design of
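
    The physiological marker in the record is the ratio of averaged theta-band (4-7 Hz) power to averaged total-band (1-55 Hz) power of the CL-thalamic LFP, extracted on-line on an FPGA. The fragment below is a hedged off-line illustration of that marker on a synthetic LFP segment; the sampling rate, segment length and signal model are assumptions, and the ARX predictor and stimulation logic are not shown.

    ```python
    import numpy as np

    def band_power_ratio(lfp, fs, band=(4.0, 7.0), total=(1.0, 55.0)):
        """Theta-band to total-band power ratio of an LFP segment (illustrative
        off-line version of the marker; the paper computes it on an FPGA)."""
        spec = np.abs(np.fft.rfft(lfp - lfp.mean())) ** 2
        freqs = np.fft.rfftfreq(len(lfp), d=1.0 / fs)
        theta = spec[(freqs >= band[0]) & (freqs <= band[1])].mean()
        broad = spec[(freqs >= total[0]) & (freqs <= total[1])].mean()
        return theta / broad

    # synthetic 2-second LFP at 1 kHz: theta rhythm plus broadband noise (assumption)
    fs, t = 1000, np.arange(0, 2, 1 / 1000)
    rng = np.random.default_rng(6)
    lfp = 2.0 * np.sin(2 * np.pi * 6 * t) + rng.standard_normal(t.size)
    print(round(band_power_ratio(lfp, fs), 3))   # rises when theta dominates
    ```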

  3. Event-Triggered Distributed Approximate Optimal State and Output Control of Affine Nonlinear Interconnected Systems.

    Science.gov (United States)

    Narayanan, Vignesh; Jagannathan, Sarangapani

    2017-06-08

    This paper presents an approximate optimal distributed control scheme for a known interconnected system composed of input affine nonlinear subsystems using event-triggered state and output feedback via a novel hybrid learning scheme. First, the cost function for the overall system is redefined as the sum of cost functions of individual subsystems. A distributed optimal control policy for the interconnected system is developed using the optimal value function of each subsystem. To generate the optimal control policy forward in time, neural networks are employed to reconstruct the unknown optimal value function at each subsystem online. In order to retain the advantages of event-triggered feedback for an adaptive optimal controller, a novel hybrid learning scheme is proposed to reduce the convergence time for the learning algorithm. The development is based on the observation that, in event-triggered feedback, the sampling instants are dynamic and result in variable inter-event times. To relax the requirement of entire state measurements, an extended nonlinear observer is designed at each subsystem to recover the system internal states from the measurable feedback. Using a Lyapunov-based analysis, it is demonstrated that the system states and the observer errors remain locally uniformly ultimately bounded and the control policy converges to a neighborhood of the optimal policy. Simulation results are presented to demonstrate the performance of the developed controller.

  4. Hybrid Recurrent Laguerre-Orthogonal-Polynomial NN Control System Applied in V-Belt Continuously Variable Transmission System Using Particle Swarm Optimization

    Directory of Open Access Journals (Sweden)

    Chih-Hong Lin

    2015-01-01

    Full Text Available Because the V-belt continuously variable transmission (CVT) system driven by a permanent magnet synchronous motor (PMSM) has many unknown nonlinear and time-varying characteristics, achieving good control performance with a linear controller is a time-consuming design procedure. In order to overcome the difficulties of designing linear controllers, the hybrid recurrent Laguerre-orthogonal-polynomial neural network (NN) control system, which has the online learning ability to respond to the system's nonlinear and time-varying behaviors, is proposed to control the PMSM servo-driven V-belt CVT system under the occurrence of lumped nonlinear load disturbances. The hybrid recurrent Laguerre-orthogonal-polynomial NN control system consists of an inspector control, a recurrent Laguerre-orthogonal-polynomial NN control with an adaptive law, and a recouped control with an estimated law. Moreover, the adaptive law of the online parameters in the recurrent Laguerre-orthogonal-polynomial NN is derived using the Lyapunov stability theorem. Furthermore, an optimal learning rate for the parameters, obtained by means of modified particle swarm optimization (PSO), is proposed to achieve fast convergence. Finally, to show the effectiveness of the proposed control scheme, comparative studies are demonstrated by experimental results.

  5. Recommender Systems in Technology Enhanced Learning

    NARCIS (Netherlands)

    Manouselis, Nikos; Drachsler, Hendrik; Verbert, Katrien; Santos, Olga

    2010-01-01

    Manouselis, N., Drachsler, H., Verbert, K., & Santos, C. S. (Eds.) (2010). Recommender System in Technology Enhanced Learning. Elsevier Procedia Computer Science: Volume 1, Issue 2. Proceedings of the 1st Workshop on Recommender Systems for Technology Enhanced Learning (RecSysTEL). September, 29-30,

  6. Reconceptualizing Learning as a Dynamical System.

    Science.gov (United States)

    Ennis, Catherine D.

    1992-01-01

    Dynamical systems theory can increase our understanding of the constantly evolving learning process. Current research using experimental and interpretive paradigms focuses on describing the attractors and constraints stabilizing the educational process. Dynamical systems theory focuses attention on critical junctures in the learning process as…

  7. Model-based iterative learning control of Parkinsonian state in thalamic relay neuron

    Science.gov (United States)

    Liu, Chen; Wang, Jiang; Li, Huiyan; Xue, Zhiqin; Deng, Bin; Wei, Xile

    2014-09-01

    Although the beneficial effects of chronic deep brain stimulation on Parkinson's disease motor symptoms are now largely confirmed, the underlying mechanisms behind deep brain stimulation remain unclear and under debate. Hence, the selection of stimulation parameters is full of challenges. Additionally, due to the complexity of the neural system, together with omnipresent noise, an accurate model of the thalamic relay neuron is unknown. Thus, iterative learning control of the thalamic relay neuron's Parkinsonian state based on various variables is presented. Combining iterative learning control with a typical proportional-integral control algorithm, a novel and efficient control strategy is proposed, which does not require any particular knowledge of the detailed physiological characteristics of the cortico-basal ganglia-thalamocortical loop and can automatically adjust the stimulation parameters. Simulation results demonstrate the feasibility of the proposed control strategy to restore the fidelity of thalamic relay in the Parkinsonian condition. Furthermore, by changing an important parameter, the maximum ionic conductance density of the low-threshold calcium current, it is further verified that the dominant characteristic of the proposed method is its independence from an accurate model.
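
    The control strategy in the record corrects the stimulation input trial after trial using the previous trial's tracking error, combined with a PI law. Purely as an illustration of that trial-to-trial idea, and applied to a toy first-order discrete plant rather than the thalamic relay-neuron model (with the within-trial PI loop omitted), the sketch below shows a P-type iterative learning update driving the tracking error down across trials; the plant, horizon and gain are assumptions.

    ```python
    import numpy as np

    T = 200
    ref = np.sin(np.linspace(0, 2 * np.pi, T))    # desired per-trial trajectory

    def run_trial(u):
        """Toy first-order low-pass plant y[t] = 0.5*y[t-1] + 0.5*u[t]."""
        y = np.zeros(T)
        for t in range(T):
            y[t] = 0.5 * (y[t - 1] if t else 0.0) + 0.5 * u[t]
        return y

    u, gain = np.zeros(T), 1.0
    for k in range(30):
        e = ref - run_trial(u)
        u = u + gain * e            # reuse this trial's error on the next trial
        if k % 10 == 0:
            print(f"trial {k:2d}  max |error| = {np.max(np.abs(e)):.4f}")
    ```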

  8. Doing learning

    DEFF Research Database (Denmark)

    Mathiasen, John Bang; Koch, Christian

    2014-01-01

    Purpose: To investigate how learning occurs in a systems development project, using a company developing wind turbine control systems in collaboration with customers as the case. Design/methodology/approach: Dewey’s approach to learning is used, emphasising reciprocity between the individual ... learning processes and that the interchanges between materiality and systems developers block the learning processes due to a customer with imprecise demands and unclear system specifications. In the four cases discussed, learning does occur however. Research limitations/implications: A qualitative study focusing on individual systems developers gives limited insight into whether the learning processes found would occur in other systems development processes. Practical implications: Managers should ensure that constitutive means, such as specifications, are available, and that they are sufficiently

  9. Framework for Designing Context-Aware Learning Systems

    Science.gov (United States)

    Tortorella, Richard A. W.; Kinshuk; Chen, Nian-Shing

    2018-01-01

    Today people learn in many diverse locations and contexts, beyond the confines of classical brick and mortar classrooms. This trend is ever increasing, progressing hand-in-hand with the progress of technology. Context-aware learning systems are systems which adapt to the learner's context, providing tailored learning for a particular learning…

  10. Estimating Students’ Satisfaction with Web Based Learning System in Blended Learning Environment

    Directory of Open Access Journals (Sweden)

    Sanja Bauk

    2014-01-01

    Full Text Available Blended learning has become the most popular educational model that universities apply for teaching and learning. This model combines online and face-to-face learning environments in order to enhance learning through the implementation of new web technologies and tools in the learning process. In this paper, principles of the DeLone and McLean information system success model are applied to the Kano two-dimensional model in order to categorize quality attributes related to the satisfaction of students with the web-based learning system used in the blended learning model. Survey results are obtained among the students at "Mediterranean" University in Montenegro. The functional and dysfunctional dimensions of the Kano model, including the Kano basic matrix for assessment of the degree of students' satisfaction, have been considered in some more detail through corresponding numerical, graphical, and statistical analysis.

  11. Simulating closed- and open-loop voluntary movement: a nonlinear control-systems approach.

    Science.gov (United States)

    Davidson, Paul R; Jones, Richard D; Andreae, John H; Sirisena, Harsha R

    2002-11-01

    In many recent human motor control models, including feedback-error learning and adaptive model theory (AMT), feedback control is used to correct errors while an inverse model is simultaneously tuned to provide accurate feedforward control. This popular and appealing hypothesis, based on a combination of psychophysical observations and engineering considerations, predicts that once the tuning of the inverse model is complete the role of feedback control is limited to the correction of disturbances. This hypothesis was tested by looking at the open-loop behavior of the human motor system during adaptation. An experiment was carried out involving 20 normal adult subjects who learned a novel visuomotor relationship on a pursuit tracking task with a steering wheel for input. During learning, the response cursor was periodically blanked, removing all feedback about the external system (i.e., about the relationship between hand motion and response cursor motion). Open-loop behavior was not consistent with a progressive transfer from closed- to open-loop control. Our recently developed computational model of the brain--a novel nonlinear implementation of AMT--was able to reproduce the observed closed- and open-loop results. In contrast, other control-systems models exhibited only minimal feedback control following adaptation, leading to incorrect open-loop behavior. This is because our model continues to use feedback to control slow movements after adaptation is complete. This behavior enhances the internal stability of the inverse model. In summary, our computational model is currently the only motor control model able to accurately simulate the closed- and open-loop characteristics of the experimental response trajectories.
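
    The hypothesis tested in the record is that a feedback controller corrects errors while an inverse model is simultaneously tuned to take over feedforward control. A drastically simplified sketch of that feedback-error-learning idea is given below: on a toy static plant, the feedback command is used as the teaching signal for a one-parameter inverse model, whose gain converges to the true inverse. The plant gain, learning rate and static setting are assumptions and do not reflect the paper's AMT implementation.

    ```python
    import numpy as np

    rng = np.random.default_rng(7)

    # Feedback-error learning on a toy static plant y = p*u (a drastic simplification).
    # The feedback command doubles as the teaching signal for the feedforward
    # inverse model.
    p = 0.4                 # unknown plant gain; the ideal inverse gain is 1/p = 2.5
    w = 0.0                 # feedforward (inverse-model) parameter being learned
    k_fb, lr = 1.0, 0.05

    for step in range(3000):
        r = rng.uniform(-1, 1)            # target (e.g. desired cursor position)
        u_ff = w * r                      # feedforward command from the inverse model
        u_fb = k_fb * (r - p * u_ff)      # feedback corrects what feedforward missed
        y = p * (u_ff + u_fb)             # actual plant output (closer to r than p*u_ff)
        w += lr * u_fb * r                # feedback-error learning rule
    print(round(w, 2))                    # expected: close to 1/p = 2.5
    ```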

  12. Flight Test of an Intelligent Flight-Control System

    Science.gov (United States)

    Davidson, Ron; Bosworth, John T.; Jacobson, Steven R.; Thomson, Michael Pl; Jorgensen, Charles C.

    2003-01-01

    The F-15 Advanced Controls Technology for Integrated Vehicles (ACTIVE) airplane (see figure) was the test bed for a flight test of an intelligent flight control system (IFCS). This IFCS utilizes a neural network to determine critical stability and control derivatives for a control law, the real-time gains of which are computed by an algorithm that solves the Riccati equation. These derivatives are also used to identify the parameters of a dynamic model of the airplane. The model is used in a model-following portion of the control law, in order to provide specific vehicle handling characteristics. The flight test of the IFCS marks the initiation of the Intelligent Flight Control System Advanced Concept Program (IFCS ACP), which is a collaboration between NASA and Boeing Phantom Works. The goals of the IFCS ACP are to (1) develop the concept of a flight-control system that uses neural-network technology to identify aircraft characteristics to provide optimal aircraft performance, (2) develop a self-training neural network to update estimates of aircraft properties in flight, and (3) demonstrate the aforementioned concepts on the F-15 ACTIVE airplane in flight. The activities of the initial IFCS ACP were divided into three Phases, each devoted to the attainment of a different objective. The objective of Phase I was to develop a pre-trained neural network to store and recall the wind-tunnel-based stability and control derivatives of the vehicle. The objective of Phase II was to develop a neural network that can learn how to adjust the stability and control derivatives to account for failures or modeling deficiencies. The objective of Phase III was to develop a flight control system that uses the neural network outputs as a basis for controlling the aircraft. The flight test of the IFCS was performed in stages. In the first stage, the Phase I version of the pre-trained neural network was flown in a passive mode. The neural network software was running using flight data

  13. Digital case-based learning system in school.

    Science.gov (United States)

    Gu, Peipei; Guo, Jiayang

    2017-01-01

    With the continuing growth of multi-media learning resources, it is important to offer methods that help learners explore and acquire relevant learning information effectively. As a service that organizes multi-media learning materials to support programming learning, a digital case-based learning system is needed. In order to create a case-oriented e-learning system, this paper concentrates on the digital case study of multi-media resources and learning processes within an integrated framework. An integration of multi-media resources, testing and learning-strategy recommendation as the learning unit is proposed in the digital case-based learning framework. The learning mechanism of learning guidance, multi-media materials learning and testing feedback is supported in our project. An improved personalized genetic algorithm, which incorporates preference information and usage degree into the crossover and mutation process, is proposed to assemble a personalized test sheet for each learner. A learning-strategy recommendation solution is proposed to recommend learning strategies that help learners learn. Experiments are conducted to show that the proposed approaches are capable of constructing personalized test sheets and to demonstrate the effectiveness of the framework.
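
    The record assembles personalized test sheets with a genetic algorithm whose operators are biased by preference information and usage degree. The sketch below is a generic, much-reduced stand-in: selection plus a preference-biased mutation (crossover and usage degree are omitted) search for a fixed-size sheet matching a target difficulty and the learner's topic preferences; the item bank, preference vector and fitness function are assumptions.

    ```python
    import numpy as np

    rng = np.random.default_rng(8)

    # Illustrative GA assembling a 10-question test sheet from a 100-item bank.
    n_items, sheet_len, pop_size, gens = 100, 10, 40, 60
    difficulty = rng.uniform(0, 1, n_items)
    topic = rng.integers(0, 4, n_items)
    preference = np.array([0.5, 0.3, 0.1, 0.1])       # learner's topic preferences
    target_difficulty = 0.6

    def fitness(sheet):
        diff_gap = abs(difficulty[sheet].mean() - target_difficulty)
        pref = preference[topic[sheet]].mean()
        return pref - diff_gap

    def mutate(sheet):
        """Replace one item, drawing the new item with probability proportional
        to the learner's preference for its topic (skipping items already used)."""
        out = sheet.copy()
        weights = preference[topic].copy()
        weights[out] = 0.0
        out[rng.integers(sheet_len)] = rng.choice(n_items, p=weights / weights.sum())
        return out

    pop = [rng.choice(n_items, sheet_len, replace=False) for _ in range(pop_size)]
    for g in range(gens):
        pop.sort(key=fitness, reverse=True)            # elitist selection
        survivors = pop[: pop_size // 2]
        pop = survivors + [mutate(s) for s in survivors]
    print("best fitness:", round(float(max(fitness(s) for s in pop)), 3))
    ```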

  14. Digital case-based learning system in school.

    Directory of Open Access Journals (Sweden)

    Peipei Gu

    Full Text Available With the continuing growth of multi-media learning resources, it is important to offer methods that help learners explore and acquire relevant learning information effectively. As a service that organizes multi-media learning materials to support programming learning, a digital case-based learning system is needed. In order to create a case-oriented e-learning system, this paper concentrates on the digital case study of multi-media resources and learning processes within an integrated framework. An integration of multi-media resources, testing and learning-strategy recommendation as the learning unit is proposed in the digital case-based learning framework. The learning mechanism of learning guidance, multi-media materials learning and testing feedback is supported in our project. An improved personalized genetic algorithm, which incorporates preference information and usage degree into the crossover and mutation process, is proposed to assemble a personalized test sheet for each learner. A learning-strategy recommendation solution is proposed to recommend learning strategies that help learners learn. Experiments are conducted to show that the proposed approaches are capable of constructing personalized test sheets and to demonstrate the effectiveness of the framework.

  15. Broad Learning System: An Effective and Efficient Incremental Learning System Without the Need for Deep Architecture.

    Science.gov (United States)

    Chen, C L Philip; Liu, Zhulin

    2018-01-01

    The Broad Learning System (BLS), which aims to offer an alternative way of learning to deep structures, is proposed in this paper. Deep structures and deep learning suffer from a time-consuming training process because of the large number of connecting parameters in filters and layers. Moreover, they require a complete retraining process if the structure is not sufficient to model the system. The BLS is established in the form of a flat network, where the original inputs are transferred and placed as "mapped features" in feature nodes and the structure is expanded in the wide sense through "enhancement nodes." Incremental learning algorithms are developed for fast remodeling in broad expansion without a retraining process if the network needs to be expanded. Two incremental learning algorithms are given, one for the increment of the feature nodes (or filters in a deep structure) and one for the increment of the enhancement nodes. The designed model and algorithms are very versatile for selecting a model rapidly. In addition, another incremental learning algorithm is developed for the case in which a system that has already been modeled encounters new incoming inputs. Specifically, the system can be remodeled incrementally without retraining from the beginning. A satisfactory model-reduction result using singular value decomposition is obtained to simplify the final structure. Compared with existing deep neural networks, experimental results on the Modified National Institute of Standards and Technology database and the NYU NORB object recognition benchmark dataset demonstrate the effectiveness of the proposed BLS.
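
    The flat structure described in the record, random mapped-feature nodes, nonlinear enhancement nodes, and output weights obtained in one shot, can be illustrated with a few lines of linear algebra. The sketch below is only a minimal reading of that structure (the incremental node-addition and SVD-based reduction steps are not shown), with ridge regression standing in for the pseudo-inverse solution; the node counts, nonlinearity and toy data are assumptions.

    ```python
    import numpy as np

    rng = np.random.default_rng(9)

    def bls_fit(X, Y, n_feat=40, n_enh=60, reg=1e-3):
        Wf = rng.standard_normal((X.shape[1], n_feat))
        Z = X @ Wf                                   # mapped feature nodes (linear here)
        We = rng.standard_normal((n_feat, n_enh))
        H = np.tanh(Z @ We)                          # enhancement nodes
        A = np.hstack([Z, H])
        # ridge-regularized pseudo-inverse for the output weights
        Wo = np.linalg.solve(A.T @ A + reg * np.eye(A.shape[1]), A.T @ Y)
        return Wf, We, Wo

    def bls_predict(X, Wf, We, Wo):
        Z = X @ Wf
        A = np.hstack([Z, np.tanh(Z @ We)])
        return A @ Wo

    # toy regression problem
    X = rng.uniform(-1, 1, (500, 3))
    Y = np.sin(X[:, :1] * 3) + X[:, 1:2] * X[:, 2:3]
    Wf, We, Wo = bls_fit(X, Y)
    rmse = float(np.sqrt(np.mean((bls_predict(X, Wf, We, Wo) - Y) ** 2)))
    print("train RMSE:", round(rmse, 3))
    ```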

  16. Adaptive e-learning system using ontology

    OpenAIRE

    Yarandi, Maryam; Tawil, Abdel-Rahman; Jahankhani, Hossein

    2011-01-01

    This paper proposes an innovative ontological approach to designing a personalised e-learning system which creates a tailored workflow for each individual learner. Moreover, the learning content and sequencing logic are separated into a content model and a pedagogical model to increase the reusability and flexibility of the system.

  17. Motor skill learning, retention, and control deficits in Parkinson's disease.

    Directory of Open Access Journals (Sweden)

    Lisa Katharina Pendt

    Full Text Available Parkinson's disease, which affects the basal ganglia, is known to lead to various impairments of motor control. Since the basal ganglia have also been shown to be involved in learning processes, motor learning has frequently been investigated in this group of patients. However, results are still inconsistent, mainly due to skill levels and time scales of testing. To bridge across the time scale problem, the present study examined de novo skill learning over a long series of practice sessions that comprised early and late learning stages as well as retention. 19 non-demented, medicated, mild to moderate patients with Parkinson's disease and 19 healthy age and gender matched participants practiced a novel throwing task over five days in a virtual environment where timing of release was a critical element. Six patients and seven control participants came to an additional long-term retention testing after seven to nine months. Changes in task performance were analyzed by a method that differentiates between three components of motor learning prominent in different stages of learning: Tolerance, Noise and Covariation. In addition, kinematic analysis related the influence of skill levels as affected by the specific motor control deficits in Parkinson patients to the process of learning. As a result, patients showed similar learning in early and late stages compared to the control subjects. Differences occurred in short-term retention tests; patients' performance constantly decreased after breaks arising from poorer release timing. However, patients were able to overcome the initial timing problems within the course of each practice session and could further improve their throwing performance. Thus, results demonstrate the intact ability to learn a novel motor skill in non-demented, medicated patients with Parkinson's disease and indicate confounding effects of motor control deficits on retention performance.

  18. Mechatronic Control Engineering: A Problem Oriented And Project Based Learning Curriculum In Mechatronic

    DEFF Research Database (Denmark)

    Pedersen, Henrik Clemmensen; Andersen, Torben Ole; Hansen, Michael Rygaard

    2008-01-01

    Mechatronics is a field of multidisciplinary engineering that not only requires knowledge about different technical areas, but also insight into how to combine technologies optimally to design efficient products and systems. This paper addresses the group project based and problem-oriented learning ... the well established methods from control engineering form very powerful techniques in both analysis and synthesis of mechatronic systems. The necessary skills for mechatronic engineers are outlined, followed by a discussion on how problem oriented project based learning is implemented. A complete curriculum named Mechatronic Control Engineering is presented, which is started at Aalborg University, Denmark, and the content of the semesters and projects is described. The projects are all characterized by the use of simulation and control for the purpose of analyzing and designing complex commercial

  19. Voice over Internet Protocol (VoIP) Technology as a Global Learning Tool: Information Systems Success and Control Belief Perspectives

    Science.gov (United States)

    Chen, Charlie C.; Vannoy, Sandra

    2013-01-01

    Voice over Internet Protocol (VoIP)-enabled online learning service providers are struggling with high attrition rates and low customer loyalty despite VoIP's high degree of system fit for online global learning applications. Effective solutions to this prevalent problem rely on an understanding of system quality, information quality, and…

  20. Component-Based Approach in Learning Management System Development

    Science.gov (United States)

    Zaitseva, Larisa; Bule, Jekaterina; Makarov, Sergey

    2013-01-01

    The paper describes a component-based approach (CBA) to learning management system development. Learning objects as components of e-learning courses, and their metadata, are considered. The learning management system based on CBA being developed at Riga Technical University is described, namely its architecture, elements and possibilities…

  1. A Matlab/Simulink-Based Interactive Module for Servo Systems Learning

    Science.gov (United States)

    Aliane, N.

    2010-01-01

    This paper presents an interactive module for learning both the fundamental and practical issues of servo systems. This module, developed using Simulink in conjunction with the Matlab graphical user interface (Matlab-GUI) tool, is used to supplement conventional lectures in control engineering and robotics subjects. First, the paper introduces the…

  2. Development of Computer-Aided Learning Programs on Nuclear Nonproliferation and Control

    International Nuclear Information System (INIS)

    Kim, Hyun Chul

    2011-01-01

    The fulfillment of international norms for nuclear nonproliferation is indispensable to the promotion of nuclear energy. The education and training of personnel and managers related to nuclear material is one of the crucial factors in avoiding unintended non-compliance with international norms. The Korea Institute of Nuclear Nonproliferation and Control (KINAC) has been providing education and training on nuclear control as its legal duty. One of the legally mandatory courses is 'nuclear control education', performed since 2006 for the observance of the international norms on nuclear nonproliferation and the spread of a nuclear control culture. The other is 'physical protection education', performed since 2010 for maintaining the national physical protection regime effectively and for the spread of a nuclear security culture. The 2010 Nuclear Security Summit was held in Washington, DC to enhance international cooperation to prevent nuclear terrorism. During the Summit, South Korea was chosen to host the second Nuclear Security Summit in 2012. The South Korean President announced that South Korea would share its expertise and support the Summit's mission by setting up an international education and training center on nuclear security in 2014. KINAC is making a full effort to set up the center successfully. An important function of the center is education and training in the subjects of nuclear nonproliferation, nuclear safeguards, nuclear security, and nuclear export/import control. With the increasing importance of education and training on nuclear nonproliferation and control, KINAC has been developing computer-aided learning programs on nuclear nonproliferation and control to overcome the weaknesses of classroom education. This paper presents two learning programs. One is an e-learning system on nuclear nonproliferation and control and the other is a virtual reality program for training in nuclear material accountancy inspection of light water reactor power plants

  3. Development of Computer-Aided Learning Programs on Nuclear Nonproliferation and Control

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Hyun Chul [Korea Institute of Nuclear Nonproliferation and Control, Daejeon (Korea, Republic of)

    2011-10-15

    The fulfillment of international norms for nuclear nonproliferation is indispensable to the promotion of nuclear energy. The education and training of personnel and managers related to nuclear material is one of the crucial factors in avoiding unintended non-compliance with international norms. The Korea Institute of Nuclear Nonproliferation and Control (KINAC) has been providing education and training on nuclear control as its legal duty. One of the legally mandatory courses is 'nuclear control education', performed since 2006 for the observance of the international norms on nuclear nonproliferation and the spread of a nuclear control culture. The other is 'physical protection education', performed since 2010 for maintaining the national physical protection regime effectively and for the spread of a nuclear security culture. The 2010 Nuclear Security Summit was held in Washington, DC to enhance international cooperation to prevent nuclear terrorism. During the Summit, South Korea was chosen to host the second Nuclear Security Summit in 2012. The South Korean President announced that South Korea would share its expertise and support the Summit's mission by setting up an international education and training center on nuclear security in 2014. KINAC is making a full effort to set up the center successfully. An important function of the center is education and training in the subjects of nuclear nonproliferation, nuclear safeguards, nuclear security, and nuclear export/import control. With the increasing importance of education and training on nuclear nonproliferation and control, KINAC has been developing computer-aided learning programs on nuclear nonproliferation and control to overcome the weaknesses of classroom education. This paper presents two learning programs. One is an e-learning system on nuclear nonproliferation and control and the other is a virtual reality program for training in nuclear material accountancy inspection of light water

  4. Object oriented run control for the CEBAF data acquisition system

    International Nuclear Information System (INIS)

    Quarrie, D.R.; Heyes, G.; Jastrzembski, E.; Watson, W.A. III

    1992-01-01

    After an extensive evaluation, the Eiffel object oriented language has been selected for the design and implementation of the run control portion of the CEBAF Data Acquisition System. The OSF/Motif graphical user interface toolkit and the Data Views process control system have been incorporated into this framework. In this paper, the authors discuss the evaluation process, the status of the implementation and the lessons learned, particularly in the use of object oriented techniques

  5. Measuring strategic control in implicit learning: how and why?

    Science.gov (United States)

    Norman, Elisabeth

    2015-01-01

    Several methods have been developed for measuring the extent to which implicitly learned knowledge can be applied in a strategic, flexible manner. Examples include generation exclusion tasks in Serial Reaction Time (SRT) learning (Goschke, 1998; Destrebecqz and Cleeremans, 2001) and 2-grammar classification tasks in Artificial Grammar Learning (AGL; Dienes et al., 1995; Norman et al., 2011). Strategic control has traditionally been used as a criterion for determining whether acquired knowledge is conscious or unconscious, or which properties of knowledge are consciously available. In this paper I first summarize existing methods that have been developed for measuring strategic control in the SRT and AGL tasks. I then address some methodological and theoretical questions. Methodological questions concern choice of task, whether the measurement reflects inhibitory control or task switching, and whether or not strategic control should be measured on a trial-by-trial basis. Theoretical questions concern the rationale for including measurement of strategic control, what form of knowledge is strategically controlled, and how strategic control can be combined with subjective awareness measures.

  6. The Office Software Learning and Examination System Design Based on Fragmented Learning Idea

    Directory of Open Access Journals (Sweden)

    Xu Ling

    2016-01-01

    Full Text Available Fragmented learning means that, through the segmentation of learning content or learning time, learners can use fragmented time to learn fragmented content; it is characterized by time flexibility, targeted learning and high learning efficiency. Based on the fragmented learning idea, combined with the teaching ideas of micro classes and interactive teaching, and making comprehensive use of Flash animation design software, the .NET development platform, VSTO technology, multimedia development technology and so on, a system integrating learning, practice and examination of the Office software is designed and developed, which is not only conducive to effective and personalized learning by students, but also helps teachers understand the students' situation, liberates teachers from heavy mechanical labor, and focuses on promoting the formation of students' knowledge systems.

  7. Genetic algorithms for adaptive real-time control in space systems

    Science.gov (United States)

    Vanderzijp, J.; Choudry, A.

    1988-01-01

    Genetic Algorithms that are used for learning are discussed as one way to control the combinatorial explosion associated with the generation of new rules. The Genetic Algorithm approach tends to work best when it can be applied to a domain-independent knowledge representation. Applications to real-time control in space systems are discussed.

  8. E-Learning Systems, Environments and Approaches

    OpenAIRE

    Isaias, P.; Spector, J.M.; Ifenthaler, D.; Sampson, D.G.

    2015-01-01

    The volume consists of twenty-five chapters selected from among peer-reviewed papers presented at the CELDA (Cognition and Exploratory Learning in the Digital Age) 2013 Conference held in Fort Worth, Texas, USA, in October 2013 and also from world class scholars in e-learning systems, environments and approaches. The following sub-topics are included: Exploratory Learning Technologies (Part I), e-Learning social web design (Part II), Learner communities through e-Learning implementations (Par...

  9. Manifold traversing as a model for learning control of autonomous robots

    Science.gov (United States)

    Szakaly, Zoltan F.; Schenker, Paul S.

    1992-01-01

    This paper describes a recipe for the construction of control systems that support complex machines such as multi-limbed/multi-fingered robots. The robot has to execute a task under varying environmental conditions and it has to react reasonably when previously unknown conditions are encountered. Its behavior should be learned and/or trained as opposed to being programmed. The paper describes one possible method for organizing the data that the robot has learned by various means. This framework can accept useful operator input even if it does not fully specify what to do, and can combine knowledge from autonomous, operator assisted and programmed experiences.

  10. Learning control for batch thermal sterilization of canned foods.

    Science.gov (United States)

    Syafiie, S; Tadeo, F; Villafin, M; Alonso, A A

    2011-01-01

    A control technique based on Reinforcement Learning is proposed for the thermal sterilization of canned foods. The proposed controller has the objective of ensuring a given degree of sterilization during Heating (by providing a minimum temperature inside the cans during a given time) and then a smooth Cooling, avoiding sudden pressure variations. For this, three automatic control valves are manipulated by the controller: a valve that regulates the admission of steam during Heating, and a valve that regulates the admission of air, together with a bleeder valve, during Cooling. As dynamical models of this kind of process are too complex and involve many uncertainties, controllers based on learning are proposed. Thus, based on the control objectives and the constraints on input and output variables, the proposed controllers learn the most adequate control actions by looking up a certain matrix that contains the state-action mapping, starting from a preselected state-action space. This state-action matrix is constantly updated based on the performance obtained with the applied control actions. Experimental results at laboratory scale show the advantages of the proposed technique for this kind of process. Copyright © 2010 ISA. Published by Elsevier Ltd. All rights reserved.

  11. Off-Policy Actor-Critic Structure for Optimal Control of Unknown Systems With Disturbances.

    Science.gov (United States)

    Song, Ruizhuo; Lewis, Frank L; Wei, Qinglai; Zhang, Huaguang

    2016-05-01

    An optimal control method is developed for unknown continuous-time systems with unknown disturbances in this paper. The integral reinforcement learning (IRL) algorithm is presented to obtain the iterative control. Off-policy learning is used to allow the dynamics to be completely unknown. Neural networks are used to construct the critic and action networks. It is shown that if there are unknown disturbances, off-policy IRL may not converge or may be biased. To reduce the influence of unknown disturbances, a disturbance compensation controller is added. It is proven that the weight errors are uniformly ultimately bounded based on Lyapunov techniques. Convergence of the Hamiltonian function is also proven. The simulation study demonstrates the effectiveness of the proposed optimal control method for unknown systems with disturbances.

  12. Intelligent fractions learning system: implementation

    CSIR Research Space (South Africa)

    Smith, Andrew C

    2011-05-01

    Full Text Available Our aim with the current research project is to extend the existing UFractions learning system to incorporate automatic data capturing. "Intelligent UFractions" allows a teacher to remotely monitor the children's progress during...

  13. A Novel Extreme Learning Control Framework of Unmanned Surface Vehicles.

    Science.gov (United States)

    Wang, Ning; Sun, Jing-Chao; Er, Meng Joo; Liu, Yan-Cheng

    2016-05-01

    In this paper, an extreme learning control (ELC) framework using a single-hidden-layer feedforward network (SLFN) with random hidden nodes is proposed for tracking an unmanned surface vehicle suffering from unknown dynamics and external disturbances. By combining tracking errors with their derivatives, an error surface and transformed states are defined to encapsulate the unknown dynamics and disturbances into a lumped vector field of the transformed states. The lumped nonlinearity is further identified accurately by an extreme-learning-machine-based SLFN approximator which requires neither a priori system knowledge nor tuning of the input weights. Only the output weights of the SLFN need to be updated by adaptive projection-based laws derived from the Lyapunov approach. Moreover, an error compensator is incorporated to suppress approximation residuals, thereby contributing to the robustness and global asymptotic stability of the closed-loop ELC system. Simulation studies and comprehensive comparisons demonstrate that the ELC framework achieves high accuracy in both tracking and approximation.
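
    The approximator in the record is a single-hidden-layer network whose input weights are random and fixed, with only the output weights adapted on-line. The sketch below illustrates that division of labour on a toy identification problem, using a simple normalized-gradient update in place of the paper's Lyapunov-derived projection law; the hidden-layer size, target nonlinearity and step size are assumptions, and the vehicle-tracking controller itself is not shown.

    ```python
    import numpy as np

    rng = np.random.default_rng(10)

    # Extreme-learning-machine idea: fixed random hidden layer, adaptive output weights.
    n_in, n_hidden = 2, 50
    Wi = rng.standard_normal((n_in, n_hidden))      # random input weights, never tuned
    b = rng.standard_normal(n_hidden)
    beta = np.zeros(n_hidden)                       # output weights (adapted on-line)

    def hidden(x):
        return np.tanh(x @ Wi + b)

    def unknown_dynamics(x):                        # lumped nonlinearity to identify
        return np.sin(2 * x[0]) + 0.5 * x[1] ** 2

    for k in range(20000):
        x = rng.uniform(-1, 1, n_in)
        h = hidden(x)
        err = unknown_dynamics(x) - h @ beta
        beta += 0.5 * err * h / (1.0 + h @ h)       # normalized adaptive update

    x_test = np.array([0.3, -0.4])
    print(round(unknown_dynamics(x_test), 3), round(float(hidden(x_test) @ beta), 3))
    ```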

  14. Can we (control) Engineer the degree learning process?

    Science.gov (United States)

    White, A. S.; Censlive, M.; Neilsen, D.

    2014-07-01

    This paper investigates how control theory could be applied to learning processes in engineering education. The initial point for the analysis is White's Double Loop learning model of human automation control, modified for the education process, where a set of governing principles is chosen, probably by the course designer. After initial training the student unknowingly decides on a mental map or model. After observing how the real world is behaving, a strategy to achieve the governing variables is chosen and a set of actions selected. This may not be a conscious operation; it may be completely instinctive. These actions will cause some consequences, but only after a certain time delay. The current model is compared with the work of Hollenbeck on goal setting, Nelson's model of self-regulation and that of Abdulwahed, Nagy and Blanchard at Loughborough, who investigated control methods applied to the learning process.

  15. Can we (control) Engineer the degree learning process?

    International Nuclear Information System (INIS)

    White, A S; Censlive, M; Neilsen, D

    2014-01-01

    This paper investigates how control theory could be applied to learning processes in engineering education. The starting point for the analysis is White's Double Loop learning model of human automation control, modified for the education process, in which a set of governing principles is chosen, probably by the course designer. After initial training the student unknowingly settles on a mental map or model. After observing how the real world is behaving, a strategy to achieve the governing variables is chosen and a set of actions selected. This may not be a conscious operation; it may be completely instinctive. These actions will cause some consequences, but only after a certain time delay. The current model is compared with the work of Hollenbeck on goal setting, Nelson's model of self-regulation and that of Abdulwahed, Nagy and Blanchard at Loughborough, who investigated control methods applied to the learning process

  16. Altitude control in honeybees: joint vision-based learning and guidance.

    Science.gov (United States)

    Portelli, Geoffrey; Serres, Julien R; Ruffier, Franck

    2017-08-23

    Studies on insects' visual guidance systems have shed little light on how learning contributes to insects' altitude control system. In this study, honeybees were trained to fly along a double-roofed tunnel after entering it near either the ceiling or the floor of the tunnel. The honeybees trained to hug the ceiling therefore encountered a sudden change in the tunnel configuration midway: i.e. a "dorsal ditch". Thus, the trained honeybees met a sudden increase in the distance to the ceiling, corresponding to a sudden strong change in the visual cues available in their dorsal field of view. Honeybees reacted by rising quickly and hugging the new, higher ceiling, keeping a similar forward speed, distance to the ceiling and dorsal optic flow to those observed during the training step; whereas bees trained to follow the floor kept on following the floor regardless of the change in the ceiling height. When trained honeybees entered the tunnel via the other entry (the lower or upper entry) to that used during the training step, they quickly changed their altitude and hugged the surface they had previously learned to follow. These findings clearly show that trained honeybees control their altitude based on visual cues memorized during training. The memorized visual cues generated by the surfaces followed form a complex optic flow pattern: trained honeybees may attempt to match the visual cues they perceive with this memorized optic flow pattern by controlling their altitude.

  17. An E-learning System based on Affective Computing

    Science.gov (United States)

    Duo, Sun; Song, Lu Xue

    In recent years, e-learning as a learning system has become very popular. However, current e-learning systems cannot instruct students effectively, since they do not consider the student's emotional state in the context of instruction. The emergence of the theory of "affective computing" can solve this problem: it means the computer's intelligence is no longer purely cognitive. In this paper, we construct an emotionally intelligent e-learning system based on affective computing. A dimensional model is put forward to recognize and analyze the student's emotional state, and a virtual teacher's avatar is offered to regulate the student's learning psychology, with consideration of teaching style based on personality traits. A "man-to-man" learning environment is built to simulate the traditional classroom's pedagogy in the system.

  18. LOCUS OF CONTROL AND LEARNED HELPLESSNESS PHENOMENON IN PATIENTS WITH CHRONIC INTERNAL DISEASES

    Directory of Open Access Journals (Sweden)

    Grekhov R.A.

    2016-04-01

    Full Text Available The article presents the concept of locus of control (or the level of subjective control) and the phenomenon of learned helplessness in the framework of psychosomatic medicine, and their impact on the efficacy of the treatment process. Data on the impact of these factors on the daily living and emotional state of patients, their interpersonal and social relationships, and the reasons for the formation of learned helplessness are listed. Alternative psychophysiological treatment methods for emotional and behavioral disorders in psychosomatic diseases are presented, in particular the effectiveness of biofeedback therapy in different types of physical pathology, which opens up the possibility for the patient to engage self-regulation mechanisms. Biofeedback is practically the only evidence-based psychophysiological method of alternative medicine and is regarded as a branch of behavioral therapy, which aims not only at the regulation of the psychophysiological state, but also at shifting the external locus of control to an internal one. During the application of biofeedback, the developed "functional system of self-regulation" achieves its intended result. Biofeedback is the process of achieving greater patient awareness of many physiological functions of his body, primarily with the use of tools that provide him with information on their activity, in order to obtain the possibility to manage the systems of his body at his own discretion. The probable mechanism of therapeutic action is the cognitive effect of biofeedback experiences, that is, learning skills of self-control which patients had never had before. The patient's faith in his ability to control the symptoms of the disease is considered of critical value, rather than the degree of measurable physiological change.

  19. Moving towards Virtual Learning Clouds from Traditional Learning: Higher Educational Systems in India

    Directory of Open Access Journals (Sweden)

    Vasanthi Muniasamy

    2014-10-01

    Full Text Available E-Learning has become an increasingly popular learning approach in higher education institutions due to the rapid growth of communication and information technology (CIT). In recent years it has been integrated in many university programs and is one of the new learning trends, but many Indian universities have not implemented this technology in their educational systems. E-Learning is not intended to replace the traditional classroom setting, but to provide new opportunities and a new virtual environment for interaction and communication between students and teacher. E-Learning through the cloud is now becoming an interesting and very useful revolutionary technology in the field of education. An E-Learning system usually requires a huge amount of hardware and software resources. Due to the cost, many universities in India do not want to implement E-Learning technology in their educational systems and cannot afford such investments. Cloud virtual learning is the only solution for this problem. This paper presents the benefits of using cloud technology in E-Learning systems, along with working modes, services and models. We also discuss the cloud computing educational environment and how higher education may take advantage of clouds not only in terms of cost but also in terms of security, flexibility, portability, efficiency and reliability. We also present some educational clouds introduced by popular cloud providers.

  20. Drive Control Scheme of Electric Power Assisted Wheelchair Based on Neural Network Learning of Human Wheelchair Operation Characteristics

    Science.gov (United States)

    Tanohata, Naoki; Seki, Hirokazu

    This paper describes a novel drive control scheme for electric power assisted wheelchairs based on neural network learning of human wheelchair operation characteristics. The "electric power assisted wheelchair", which enhances the drive force of the operator by employing electric motors, is expected to be widely used as a mobility support system for elderly and disabled people. However, some handicapped people with paralysis of the muscles of one side of the body cannot maneuver the wheelchair as desired because of the difference between the right and left input forces. Therefore, this study proposes a neural network learning system for such human wheelchair operation characteristics and a drive control scheme with variable distribution and assistance ratios. Some driving experiments will be performed to confirm the effectiveness of the proposed control system.

  1. Modeling student's learning styles in web 2.0 learning systems

    Directory of Open Access Journals (Sweden)

    Ramon Cabada Zatarain Cabada, M. L. Barron Estrada, L. Zepeda Sanchez, Guillermo Sandoval, J.M. Osorio Velazquez, J.E. Urias Barrientos

    2009-12-01

    Full Text Available The identification of the best learning style in an Intelligent Tutoring System must be considered essential as part of the success in the teaching process. In many implementations of automatic classifiers, finding the right student learning style represents the hardest assignment. The reason is that most of the techniques work using expert groups or a set of questionnaires which define how the learning styles are assigned to students. This paper presents a novel approach for automatic learning style classification using a Kohonen network. The approach is used by an author tool for building Intelligent Tutoring Systems running under a Web 2.0 collaborative learning platform. The tutoring systems together with the neural network can also be exported to mobile devices. We present different results for the approach working under the author tool.
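
    A minimal Kohonen self-organizing map, sketched below with hypothetical three-feature learner vectors, illustrates the kind of unsupervised classifier such an author tool could rely on; labelling the map units with learning styles would be a separate step outside this sketch.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    grid = np.array([(i, j) for i in range(4) for j in range(4)], dtype=float)  # 4x4 map
    weights = rng.uniform(size=(16, 3))                                         # 3 input features

    def train_som(data, weights, epochs=200, lr0=0.5, sigma0=2.0):
        for t in range(epochs):
            lr = lr0 * np.exp(-t / epochs)
            sigma = sigma0 * np.exp(-t / epochs)
            for x in data:
                bmu = np.argmin(np.linalg.norm(weights - x, axis=1))    # best matching unit
                d = np.linalg.norm(grid - grid[bmu], axis=1)
                h = np.exp(-(d ** 2) / (2 * sigma ** 2))                # neighbourhood function
                weights += lr * h[:, None] * (x - weights)
        return weights

    data = rng.uniform(size=(60, 3))     # stand-in learner feature vectors
    weights = train_som(data, weights)
    print("map unit assigned to first learner:",
          np.argmin(np.linalg.norm(weights - data[0], axis=1)))
    ```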

  2. FY1995 distributed control of man-machine cooperative multi agent systems; 1995 nendo ningen kyochogata multi agent kikai system no jiritsu seigyo

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1997-03-01

    In the near future, distributed autonomous systems will be practical in many situations, e.g., interactive production systems, hazardous environments, nursing homes, and individual houses. The agents which constitute such distributed systems must not damage human beings and should work economically. In this project, man-machine cooperative multi-agent systems are studied from many perspectives, and basic design and control techniques are developed by establishing fundamental theories and by constructing experimental systems. Theoretical and experimental studies are conducted in the following sub-projects: (1) Distributed cooperative control in multi-agent type actuation systems; (2) Control of non-holonomic systems; (3) Man-machine cooperative systems; (4) Robot systems learning human skills; (5) Robust force control of constrained systems. In each sub-project, the cooperative nature between machine agent systems and human beings, interference between artificial multi-agents and the environment, new function emergence in the coordination of multi-agents and the environment, robust force control with respect to the environment, control methods for non-holonomic systems, and robot systems which can mimic and learn human skills were studied. In each sub-project, specific problems were highlighted and solutions were given based on the construction of experimental systems. (NEDO)

  3. A service based adaptive U-learning system using UX.

    Science.gov (United States)

    Jeong, Hwa-Young; Yi, Gangman

    2014-01-01

    In recent years, traditional development techniques for e-learning systems have been changing to become more convenient and efficient. One new technology in the development of application systems includes both cloud and ubiquitous computing. Cloud computing can support learning system processes by using services, while ubiquitous computing can provide system operation and management via a high performance technical process and network. In the cloud computing environment, a learning service application can provide a business module or process to the user via the internet. This research focuses on providing the learning material and processes of courses by learning units using services in a ubiquitous computing environment. We also investigate functions that support users' tailored materials according to their learning style. That is, we analyzed the users' data and their characteristics in accordance with their user experience. We subsequently applied the learning process to fit their learning performance and preferences. Finally, we demonstrate that the proposed system delivers better learning effects to learners than existing techniques.

  4. EBR-II secondary sodium loop Plugging Temperature Indicator control system upgrade

    International Nuclear Information System (INIS)

    Carlson, R.B.; Gehrman, R.L.

    1995-01-01

    The Experimental Breeder Reactor II (EBR-II) secondary sodium coolant loop Plugging Temperature Indicator (PTI) control system was upgraded in 1993 to a real-time computer based system. This was done to improve control, to remove obsolete and high maintenance equipment, and to provide a graphical CRT based operator interface. A goal was to accomplish this inexpensively using small, reliable computer and display hardware with a minimum of purchased software. This paper describes the PTI system, the upgraded control system and its operator interface, and development methods and tools. The paper then assesses how well the system met its goals, discusses lessons learned and operational improvements noted, and provides some recommendations and suggestions on applying small real-time control systems of this type

  5. A Novel Approach for Enhancing Lifelong Learning Systems by Using Hybrid Recommender System

    Science.gov (United States)

    Kardan, Ahmad A.; Speily, Omid R. B.; Modaberi, Somayyeh

    2011-01-01

    The majority of current web-based learning systems are closed learning environments where courses and learning materials are fixed, and the only dynamic aspect is the organization of the material that can be adapted to allow a relatively individualized learning environment. In this paper, we propose an evolving web-based learning system which can…

  6. Efficient model learning methods for actor-critic control.

    Science.gov (United States)

    Grondman, Ivo; Vaandrager, Maarten; Buşoniu, Lucian; Babuska, Robert; Schuitema, Erik

    2012-06-01

    We propose two new actor-critic algorithms for reinforcement learning. Both algorithms use local linear regression (LLR) to learn approximations of the functions involved. A crucial feature of the algorithms is that they also learn a process model, and this, in combination with LLR, provides an efficient policy update for faster learning. The first algorithm uses a novel model-based update rule for the actor parameters. The second algorithm does not use an explicit actor but learns a reference model which represents a desired behavior, from which desired control actions can be calculated using the inverse of the learned process model. The two novel methods and a standard actor-critic algorithm are applied to the pendulum swing-up problem, in which the novel methods achieve faster learning than the standard algorithm.
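
    The building block shared by both algorithms is the local linear regression memory. The sketch below shows one LLR prediction on assumed one-dimensional data: fit an affine model to the k nearest stored samples and evaluate it at the query point; the paper uses memories of this kind to approximate the critic, the actor and the process model.

    ```python
    import numpy as np

    def llr_predict(X_mem, y_mem, x_query, k=10):
        d = np.linalg.norm(X_mem - x_query, axis=1)
        idx = np.argsort(d)[:k]                        # k nearest stored samples
        Xk = np.hstack([X_mem[idx], np.ones((k, 1))])  # affine local model
        beta, *_ = np.linalg.lstsq(Xk, y_mem[idx], rcond=None)
        return np.append(x_query, 1.0) @ beta

    rng = np.random.default_rng(2)
    X_mem = rng.uniform(-np.pi, np.pi, size=(500, 1))  # stored inputs
    y_mem = np.sin(X_mem[:, 0])                        # stored targets
    print("LLR prediction:", llr_predict(X_mem, y_mem, np.array([0.5])),
          "true value:", np.sin(0.5))
    ```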

  7. Informed Systems: Enabling Collaborative Evidence Based Organizational Learning

    Directory of Open Access Journals (Sweden)

    Mary M. Somerville

    2015-12-01

    Full Text Available Objective – In response to unrelenting disruptions in academic publishing and higher education ecosystems, the Informed Systems approach supports evidence based professional activities to make decisions and take actions. This conceptual paper presents two core models, Informed Systems Leadership Model and Collaborative Evidence-Based Information Process Model, whereby co-workers learn to make informed decisions by identifying the decisions to be made and the information required for those decisions. This is accomplished through collaborative design and iterative evaluation of workplace systems, relationships, and practices. Over time, increasingly effective and efficient structures and processes for using information to learn further organizational renewal and advance nimble responsiveness amidst dynamically changing circumstances. Methods – The integrated Informed Systems approach to fostering persistent workplace inquiry has its genesis in three theories that together activate and enable robust information usage and organizational learning. The information- and learning-intensive theories of Peter Checkland in England, which advance systems design, stimulate participants’ appreciation during the design process of the potential for using information to learn. Within a co-designed environment, intentional social practices continue workplace learning, described by Christine Bruce in Australia as informed learning enacted through information experiences. In addition, in Japan, Ikujiro Nonaka’s theories foster information exchange processes and knowledge creation activities within and across organizational units. In combination, these theories promote the kind of learning made possible through evolving and transferable capacity to use information to learn through design and usage of collaborative communication systems with associated professional practices. Informed Systems therein draws from three antecedent theories to create an original

  8. VTA GABA neurons modulate specific learning behaviours through the control of dopamine and cholinergic systems

    Directory of Open Access Journals (Sweden)

    Meaghan C Creed

    2014-01-01

    Full Text Available The mesolimbic reward system is primarily comprised of the ventral tegmental area (VTA) and the nucleus accumbens (NAc) as well as their afferent and efferent connections. This circuitry is essential for learning about stimuli associated with motivationally-relevant outcomes. Moreover, addictive drugs affect and remodel this system, which may underlie their addictive properties. In addition to DA neurons, the VTA also contains approximately 30% γ-aminobutyric acid (GABA) neurons. The task of signalling both rewarding and aversive events from the VTA to the NAc has mostly been ascribed to DA neurons and the role of GABA neurons has been largely neglected until recently. GABA neurons provide local inhibition of DA neurons and also long-range inhibition of projection regions, including the NAc. Here we review studies using a combination of in vivo and ex vivo electrophysiology, pharmacogenetic and optogenetic manipulations that have characterized the functional neuroanatomy of inhibitory circuits in the mesolimbic system, and describe how GABA neurons of the VTA regulate reward and aversion-related learning. We also discuss pharmacogenetic manipulation of this system with benzodiazepines (BDZs), a class of addictive drugs, which act directly on GABAA receptors located on GABA neurons of the VTA. The results gathered with each of these approaches suggest that VTA GABA neurons bi-directionally modulate activity of local DA neurons, underlying reward or aversion at the behavioural level. Conversely, long-range GABA projections from the VTA to the NAc selectively target cholinergic interneurons (CINs) to pause their firing and temporarily reduce cholinergic tone in the NAc, which modulates associative learning. Further characterization of inhibitory circuit function within and beyond the VTA is needed in order to fully understand the function of the mesolimbic system under normal and pathological conditions.

  9. Contribution of expert systems to data processing in non-destructive control

    International Nuclear Information System (INIS)

    Augendre, H.; Perron, M.C.

    1990-01-01

    The increase of non-destructive control in industrial applications requires the development of new data processing methods. The expert system approach is able to provide signal modelling means which are closer to human behaviour. Such methods, used alongside more traditional programs, lead to substantial improvements. These investigations are part of our aim to apply sophisticated methods to industrial non-destructive control. For defect characterization purposes in ultrasonic control, various supervised learning methods have been investigated in an experimental study. The traditional approach is concerned with statistics-based methods, whereas the second one lies in learning logical decision rules valid within a numerical description space

  10. Improving the Critic Learning for Event-Based Nonlinear $H_{\infty}$ Control Design.

    Science.gov (United States)

    Wang, Ding; He, Haibo; Liu, Derong

    2017-10-01

    In this paper, we aim at improving the critic learning criterion to cope with the event-based nonlinear $H_{\infty}$ state feedback control design. First of all, the $H_{\infty}$ control problem is regarded as a two-player zero-sum game and the adaptive critic mechanism is used to achieve the minimax optimization under event-based environment. Then, based on an improved updating rule, the event-based optimal control law and the time-based worst-case disturbance law are obtained approximately by training a single critic neural network. The initial stabilizing control is no longer required during the implementation process of the new algorithm. Next, the closed-loop system is formulated as an impulsive model and its stability issue is handled by incorporating the improved learning criterion. The infamous Zeno behavior of the present event-based design is also avoided through theoretical analysis on the lower bound of the minimal intersample time. Finally, the applications to an aircraft dynamics and a robot arm plant are carried out to verify the efficient performance of the present novel design method.
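
    The event-based ingredient can be illustrated separately from the critic design: control updates are issued only when a triggering condition on the gap between the current and last-sampled state is violated. The sketch below simulates a hypothetical linear plant under a fixed state-feedback gain with a simple relative-threshold trigger; it is not the paper's adaptive-critic law, only the sampling mechanism.

    ```python
    import numpy as np

    A = np.array([[0.0, 1.0], [-2.0, -1.0]])
    B = np.array([[0.0], [1.0]])
    K = np.array([[1.0, 1.0]])        # illustrative stabilizing state-feedback gain

    x = np.array([1.0, 0.0])
    x_s = x.copy()                    # last sampled state
    u = -(K @ x_s)
    dt, events = 0.01, 0
    for _ in range(2000):
        # Trigger rule: recompute the control only when the sampling error is large
        if np.linalg.norm(x - x_s) > 0.05 * np.linalg.norm(x) + 1e-4:
            x_s, u, events = x.copy(), -(K @ x), events + 1
        x = x + dt * (A @ x + B @ u).ravel()   # forward-Euler plant simulation
    print("control updates issued:", events, "out of 2000 steps")
    ```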

  11. [Voluntary postural control learning with a use of visual bio-feedback in patients with spinocerebellar degenerations].

    Science.gov (United States)

    Ustinova, K I; Ioffe, M E; Chernikova, L A; Kulikov, M A; Illarioshkin, S N; Markova, E D

    2004-01-01

    The study aimed to evaluate the possibility and features of voluntary postural control learning using biofeedback from a force platform in patients with spinocerebellar ataxias. Thirty-seven patients with different forms of spinocerebellar degenerations and 13 age-matched healthy subjects were trained to shift the center of pressure (CP) during several stabilographic computer games which tested the ability to learn two different types of voluntary postural control: a general strategy and precise coordination of CP shifting. Despite disturbances of static posture and of the ability for voluntary control of CP position, patients with spinocerebellar degenerations can learn to control vertical posture using biofeedback on the stabilogram. In contrast to healthy subjects, improvement of coordination during training does not exert a significant influence on static posture characteristics, in particular on lateral CP oscillations. The results obtained suggest involvement of the cerebellum in both types of postural control, which distinguishes it from the motor cortex and nigro-striatal system, each involved in only one type of postural control.

  12. Iterative learning control with applications in energy generation, lasers and health care.

    Science.gov (United States)

    Rogers, E; Tutty, O R

    2016-09-01

    Many physical systems make repeated executions of the same finite time duration task. One example is a robot in a factory or warehouse whose task is to collect an object in sequence from a location, transfer it over a finite duration, place it at a specified location or on a moving conveyor and then return for the next one and so on. Iterative learning control was especially developed for systems with this mode of operation and this paper gives an overview of this control design method using relatively recent relevant applications in wind turbines, free-electron lasers and health care, as exemplars to demonstrate its applicability.
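
    The basic update that all of these applications share is easiest to see on a toy example. The sketch below applies a P-type iterative learning control law, u_{k+1}(t) = u_k(t) + L e_k(t+1), to an assumed first-order plant repeating the same finite-duration reference; the learning gain L is chosen so that |1 - Lb| < 1, which gives trial-to-trial contraction of the tracking error.

    ```python
    import numpy as np

    N = 100                                   # samples per trial
    t = np.linspace(0, 1, N)
    r = np.sin(2 * np.pi * t)                 # reference repeated every trial

    a, b = 0.9, 0.5                           # toy first-order plant: x+ = a x + b u
    u = np.zeros(N)
    L = 1.2                                   # learning gain, |1 - L*b| = 0.4 < 1
    for trial in range(30):
        x, y = 0.0, np.zeros(N)
        for k in range(N):                    # run one trial from the same initial state
            y[k] = x
            x = a * x + b * u[k]
        e = r - y                             # tracking error of this trial
        u[:-1] += L * e[1:]                   # u[k] first affects y[k+1] (relative degree 1)
    print("max tracking error after learning:", np.abs(e).max())
    ```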

  13. JACoW Model learning algorithms for anomaly detection in CERN control systems

    CERN Document Server

    Tilaro, Filippo; Gonzalez-Berges, Manuel; Roshchin, Mikhail; Varela, Fernando

    2018-01-01

    The CERN automation infrastructure consists of over 600 heterogeneous industrial control systems with around 45 million deployed sensors, actuators and control objects. It is therefore evident that monitoring such a huge system represents a challenging and complex task. This paper describes three different mathematical approaches that have been designed and developed to detect anomalies in any of the CERN control systems. Specifically, one of these algorithms is purely based on expert knowledge; the other two mine the historical generated data to create a simple model of the system; this model is then used to detect faulty sensor measurements. The presented methods can be categorized as dynamic unsupervised anomaly detection; “dynamic” since the behaviour of the system and the evolution of its attributes are observed and changing in time. They are “unsupervised” because we are trying to predict faulty events without examples in the data history. So, the described strategies involve monitoring t...

  14. Complexity control in statistical learning

    Indian Academy of Sciences (India)

    Then we describe how the method of regularization is used to control complexity in learning. We discuss two examples of regularization, one in which the function space used is finite dimensional, and another in which it is a reproducing kernel Hilbert space. Our exposition follows the formulation of Cucker and Smale.
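
    A concrete finite-dimensional instance of the idea is ridge-penalized polynomial regression: the penalty weight trades data fit against the size (complexity) of the coefficient vector. The sketch below uses assumed noisy samples of a sine function purely for illustration.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    x = np.linspace(-1, 1, 15)
    y = np.sin(np.pi * x) + 0.2 * rng.normal(size=x.size)   # noisy samples

    degree = 12
    Phi = np.vander(x, degree + 1)                          # polynomial features

    def fit(lam):
        # Minimizes ||Phi w - y||^2 + lam * ||w||^2 (regularized least squares)
        return np.linalg.solve(Phi.T @ Phi + lam * np.eye(degree + 1), Phi.T @ y)

    for lam in (0.0, 1e-3, 1e-1):
        w = fit(lam)
        print(f"lambda={lam:g}  coefficient norm={np.linalg.norm(w):.2f}")
    ```

    Larger penalties shrink the coefficient norm, i.e. restrict the effective complexity of the hypothesis, at the cost of a slightly worse fit to the samples.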

  15. A Reactive Blended Learning Proposal for an Introductory Control Engineering Course

    Science.gov (United States)

    Mendez, Juan A.; Gonzalez, Evelio J.

    2010-01-01

    As it happens in other fields of engineering, blended learning is widely used to teach process control topics. In this paper, the inclusion of a reactive element--a Fuzzy Logic based controller--is proposed for a blended learning approach in an introductory control engineering course. This controller has been designed in order to regulate the…

  16. A novel model of motor learning capable of developing an optimal movement control law online from scratch.

    Science.gov (United States)

    Shimansky, Yury P; Kang, Tao; He, Jiping

    2004-02-01

    A computational model of a learning system (LS) is described that acquires knowledge and skill necessary for optimal control of a multisegmental limb dynamics (controlled object or CO), starting from "knowing" only the dimensionality of the object's state space. It is based on an optimal control problem setup different from that of reinforcement learning. The LS solves the optimal control problem online while practicing the manipulation of CO. The system's functional architecture comprises several adaptive components, each of which incorporates a number of mapping functions approximated based on artificial neural nets. Besides the internal model of the CO's dynamics and adaptive controller that computes the control law, the LS includes a new type of internal model, the minimal cost (IM(mc)) of moving the controlled object between a pair of states. That internal model appears critical for the LS's capacity to develop an optimal movement trajectory. The IM(mc) interacts with the adaptive controller in a cooperative manner. The controller provides an initial approximation of an optimal control action, which is further optimized in real time based on the IM(mc). The IM(mc) in turn provides information for updating the controller. The LS's performance was tested on the task of center-out reaching to eight randomly selected targets with a 2DOF limb model. The LS reached an optimal level of performance in a few tens of trials. It also quickly adapted to movement perturbations produced by two different types of external force field. The results suggest that the proposed design of a self-optimized control system can serve as a basis for the modeling of motor learning that includes the formation and adaptive modification of the plan of a goal-directed movement.

  17. Semi-active control of magnetorheological elastomer base isolation system utilising learning-based inverse model

    Science.gov (United States)

    Gu, Xiaoyu; Yu, Yang; Li, Jianchun; Li, Yancheng

    2017-10-01

    Magnetorheological elastomer (MRE) base isolation has attracted considerable attention over the last two decades thanks to its self-adaptability and high-authority controllability in the semi-active control realm. Due to the inherent nonlinearity and hysteresis of the devices, it is challenging to obtain a mathematical model that reasonably describes the inverse dynamics of MRE base isolators and hence to realise control synthesis of the MRE base isolation system. Two aims have been achieved in this paper: i) development of an inverse model for the MRE base isolator based on an optimal general regression neural network (GRNN); ii) numerical and experimental validation of a real-time semi-active controlled MRE base isolation system utilising an LQR controller and the GRNN inverse model. The superiority of the GRNN inverse model lies in requiring fewer input variables, a faster training process and prompt calculation response, which makes it suitable for online training and real-time control. The control system is integrated with a three-storey shear building model, and the control performance of the MRE base isolation system is compared with the bare building, a passive-on isolation system and a passive-off isolation system. Testing results show that the proposed GRNN inverse model is able to reproduce the desired control force accurately and the MRE base isolation system can effectively suppress the structural responses when compared to the passive isolation system.
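
    A GRNN in Specht's sense is a kernel-weighted average of stored training outputs, which is why it trains quickly and needs few design choices. The sketch below shows the prediction rule on hypothetical training pairs; in the paper such a network would be trained on measured isolator data to map desired force (and measured responses) to the command current.

    ```python
    import numpy as np

    def grnn_predict(X_train, y_train, x_query, sigma=0.1):
        d2 = np.sum((X_train - x_query) ** 2, axis=1)
        w = np.exp(-d2 / (2.0 * sigma ** 2))          # Gaussian kernel weights
        return np.sum(w * y_train) / (np.sum(w) + 1e-12)

    rng = np.random.default_rng(4)
    X_train = rng.uniform(0, 1, size=(300, 2))        # e.g. (desired force, displacement), assumed
    y_train = 2.0 * X_train[:, 0] + np.sin(3 * X_train[:, 1])   # stand-in inverse mapping
    x_query = np.array([0.4, 0.6])
    print("predicted control input:", grnn_predict(X_train, y_train, x_query))
    ```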

  18. The NASA F-15 Intelligent Flight Control Systems: Generation II

    Science.gov (United States)

    Buschbacher, Mark; Bosworth, John

    2006-01-01

    The Second Generation (Gen II) control system for the F-15 Intelligent Flight Control System (IFCS) program implements direct adaptive neural networks to demonstrate robust tolerance to faults and failures. The direct adaptive tracking controller integrates learning neural networks (NNs) with a dynamic inversion control law. The term direct adaptive is used because the error between the reference model and the aircraft response is being compensated or directly adapted to minimize error without regard to knowing the cause of the error. No parameter estimation is needed for this direct adaptive control system. In the Gen II design, the feedback errors are regulated with a proportional-plus-integral (PI) compensator. This basic compensator is augmented with an online NN that changes the system gains via an error-based adaptation law to improve aircraft performance at all times, including normal flight, system failures, mispredicted behavior, or changes in behavior resulting from damage.

  19. A Study on High Plant Systems Course with Active Learning in Higher Education Through Outdoor Learning to Increase Student Learning Activities

    OpenAIRE

    Nur Rokhimah Hanik, Anwari Adi Nugroho

    2015-01-01

    Biology learning, especially in high plant systems courses, needs active learning centered on the student (Active Learning in Higher Education) to enhance students' learning activities so that the quality of learning improves. Outdoor Learning is one form of active learning that invites students to learn outside the classroom by exploring the surrounding environment. This research aims to improve the students' learning activities in the course of high plant systems through t...

  20. Metabolic learning and memory formation by the brain influence systemic metabolic homeostasis

    Science.gov (United States)

    Zhang, Yumin; Liu, Gang; Yan, Jingqi; Zhang, Yalin; Li, Bo; Cai, Dongsheng

    2015-01-01

    Metabolic homeostasis is regulated by the brain, but whether this regulation involves learning and memory of metabolic information remains unexplored. Here we use a calorie-based, taste-independent learning/memory paradigm to show that Drosophila form metabolic memories that help balance food choice with caloric intake; however, this metabolic learning or memory is lost under chronic high-calorie feeding. We show that loss of individual learning/memory-regulating genes causes a metabolic learning defect, leading to elevated trehalose and lipid levels. Importantly, this function of metabolic learning requires not only the mushroom body but also the hypothalamus-like pars intercerebralis, while NF-κB activation in the pars intercerebralis mimics chronic overnutrition in that it causes metabolic learning impairment and disorders. Finally, we evaluate this concept of metabolic learning/memory in mice, suggesting that the hypothalamus is involved in a form of nutritional learning and memory, which is critical for determining resistance or susceptibility to obesity. In conclusion, our data indicate that the brain, and potentially the hypothalamus, direct metabolic learning and the formation of memories, which contribute to the control of systemic metabolic homeostasis. PMID:25848677

  1. Enriching Adaptation in E-Learning Systems through a Situation-Aware Ontology Network

    Science.gov (United States)

    Pernas, Ana Marilza; Diaz, Alicia; Motz, Regina; de Oliveira, Jose Palazzo Moreira

    2012-01-01

    Purpose: The broader adoption of the internet along with web-based systems has defined a new way of exchanging information. That advance, combined with the multiplication of mobile devices, has required systems to be even more flexible and personalized. Maybe because of that, the traditional teaching-controlled learning style has given up space to a new…

  2. On equivalence classes in iterative learning control

    NARCIS (Netherlands)

    Verwoerd, M.H.A.; Meinsma, Gjerrit; de Vries, Theodorus J.A.

    2003-01-01

    This paper advocates a new approach to study the relation between causal iterative learning control (ILC) and conventional feedback control. Central to this approach is the introduction of the set of admissible pairs (of operators) defined with respect to a family of iterations. Considered are two

  3. A Service Based Adaptive U-Learning System Using UX

    Directory of Open Access Journals (Sweden)

    Hwa-Young Jeong

    2014-01-01

    Full Text Available In recent years, traditional development techniques for e-learning systems have been changing to become more convenient and efficient. One new technology in the development of application systems includes both cloud and ubiquitous computing. Cloud computing can support learning system processes by using services, while ubiquitous computing can provide system operation and management via a high performance technical process and network. In the cloud computing environment, a learning service application can provide a business module or process to the user via the internet. This research focuses on providing the learning material and processes of courses by learning units using services in a ubiquitous computing environment. We also investigate functions that support users’ tailored materials according to their learning style. That is, we analyzed the users’ data and their characteristics in accordance with their user experience. We subsequently applied the learning process to fit their learning performance and preferences. Finally, we demonstrate that the proposed system delivers better learning effects to learners than existing techniques.

  4. Speed tracking control of pneumatic motor servo systems using observation-based adaptive dynamic sliding-mode control

    Science.gov (United States)

    Chen, Syuan-Yi; Gong, Sheng-Sian

    2017-09-01

    This study aims to develop an adaptive high-precision control system for controlling the speed of a vane-type air motor (VAM) pneumatic servo system. In practice, the rotor speed of a VAM depends on the input mass air flow, which can be controlled by the effective orifice area (EOA) of an electronic throttle valve (ETV). As the control variable of a second-order pneumatic system is the integral of the EOA, an observation-based adaptive dynamic sliding-mode control (ADSMC) system is proposed to derive the differential of the control variable, namely, the EOA control signal. In the ADSMC system, a proportional-integral-derivative fuzzy neural network (PIDFNN) observer is used to achieve an ideal dynamic sliding-mode control (DSMC), and a supervisor compensator is designed to eliminate the approximation error. As a result, the ADSMC incorporates the robustness of a DSMC and the online learning ability of a PIDFNN. To ensure the convergence of the tracking error, a Lyapunov-based analytical method is employed to obtain the adaptive algorithms required to tune the control parameters of the online ADSMC system. Finally, our experimental results demonstrate the precision and robustness of the ADSMC system for highly nonlinear and time-varying VAM pneumatic servo systems.

  5. The Design and Analysis of Learning Effects for a Game-based Learning System

    OpenAIRE

    Wernhuar Tarng; Weichian Tsai

    2010-01-01

    The major purpose of this study is to use network and multimedia technologies to build a game-based learning system for junior high school students to apply in learning "World Geography" through a "role-playing" game approach. This study first investigated the motivation and habits of junior high school students in using the Internet and online games, and then designed a game-based learning system according to situated and game-based learning theories. A teaching experiment was conducted to...

  6. Chaos Synchronization Using Adaptive Dynamic Neural Network Controller with Variable Learning Rates

    Directory of Open Access Journals (Sweden)

    Chih-Hong Kao

    2011-01-01

    Full Text Available This paper addresses the synchronization of chaotic gyros with unknown parameters and external disturbance via an adaptive dynamic neural network control (ADNNC) system. The proposed ADNNC system is composed of a neural controller and a smooth compensator. The neural controller uses a dynamic RBF (DRBF) network to approximate an ideal controller online. The DRBF network can create new hidden neurons online if the input data fall outside the coverage of the hidden layer, and prune insignificant hidden neurons online if they become inappropriate. The smooth compensator is designed to compensate for the approximation error between the neural controller and the ideal controller. Moreover, variable learning rates for the parameter adaptation laws are derived based on a discrete-type Lyapunov function to speed up the convergence rate of the tracking error. Finally, the simulation results verify that two identical nonlinear chaotic gyros can be synchronized using the proposed ADNNC scheme.

  7. Learning-based traffic signal control algorithms with neighborhood information sharing: An application for sustainable mobility

    Energy Technology Data Exchange (ETDEWEB)

    Aziz, H. M. Abdul [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Zhu, Feng [Purdue University, West Lafayette, IN (United States). Lyles School of Civil Engineering; Ukkusuri, Satish V. [Purdue University, West Lafayette, IN (United States). Lyles School of Civil Engineering

    2017-10-04

    Here, this research applies an R-Markov Average Reward Technique based reinforcement learning (RL) algorithm, namely RMART, to the vehicular signal control problem, leveraging information sharing among signal controllers in a connected vehicle environment. We implemented the algorithm in a network of 18 signalized intersections and compared the performance of RMART with fixed, adaptive, and variant RL schemes. Results show significant improvement in system performance for the RMART algorithm with information sharing over both traditional fixed signal timing plans and real-time adaptive control schemes. Additionally, the comparison with reinforcement learning algorithms including Q-learning and SARSA indicates that RMART performs better at higher congestion levels. Further, a multi-reward structure is proposed that dynamically adjusts the reward function with varying congestion states at the intersection. Finally, the results from test networks show significant reductions in emissions (CO, CO2, NOx, VOC, PM10) when RL algorithms are implemented compared to fixed signal timings and adaptive schemes.
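
    For orientation, the sketch below is a plain tabular Q-learning loop on a toy single-intersection model with hypothetical dynamics and rewards; RMART differs in using an average-reward (R-learning) target instead of the discounted one below and in augmenting the state with congestion information shared by neighbouring controllers.

    ```python
    import numpy as np

    rng = np.random.default_rng(5)
    n_states, n_actions = 16, 2          # coarse queue-level states, 2 signal phases
    Q = np.zeros((n_states, n_actions))
    alpha, gamma, eps = 0.1, 0.95, 0.1

    def step(state, action):
        # Hypothetical environment: reward is the negative queue level after the action
        next_state = rng.integers(n_states)
        return next_state, -float(next_state)

    state = rng.integers(n_states)
    for _ in range(5000):
        action = rng.integers(n_actions) if rng.random() < eps else int(np.argmax(Q[state]))
        next_state, reward = step(state, action)
        td_target = reward + gamma * np.max(Q[next_state])        # discounted target
        Q[state, action] += alpha * (td_target - Q[state, action])
        state = next_state
    print("greedy phase choice per state:", np.argmax(Q, axis=1))
    ```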

  8. Automatic Learning of Fine Operating Rules for Online Power System Security Control.

    Science.gov (United States)

    Sun, Hongbin; Zhao, Feng; Wang, Hao; Wang, Kang; Jiang, Weiyong; Guo, Qinglai; Zhang, Boming; Wehenkel, Louis

    2016-08-01

    Fine operating rules for security control and an automatic system for their online discovery were developed to adapt to the development of smart grids. The automatic system uses the real-time system state to determine critical flowgates, and then a continuation power flow-based security analysis is used to compute the initial transfer capability of critical flowgates. Next, the system applies the Monte Carlo simulations to expected short-term operating condition changes, feature selection, and a linear least squares fitting of the fine operating rules. The proposed system was validated both on an academic test system and on a provincial power system in China. The results indicated that the derived rules provide accuracy and good interpretability and are suitable for real-time power system security control. The use of high-performance computing systems enables these fine operating rules to be refreshed online every 15 min.

  9. The Argonne beamline-B telescope control system: A study of adaptability

    International Nuclear Information System (INIS)

    Fuka, M.A.; Clout, P.N.; Conley, A.P.; Hill, J.O.; Rothrock, R.B.; Trease, L.L.; Zander, M.E.

    1987-01-01

    A beam-expanding telescope to study high-precision H- particle optics and beam sensing was designed by the Accelerator Technology Division at Los Alamos National Laboratory and will be installed on beamline-B at Argonne National Laboratory. The control system for this telescope was developed in a relatively short period of time using experience gained from building the Proton Storage Ring (PSR) control system. The designers modified hardware and software to take advantage of new technology as well as to meet the requirements of the new system. This paper discusses lessons learned in the process of adapting hardware and software from an existing control system to one with rather different requirements

  10. Robust iterative learning contouring controller with disturbance observer for machine tool feed drives.

    Science.gov (United States)

    Simba, Kenneth Renny; Bui, Ba Dinh; Msukwa, Mathew Renny; Uchiyama, Naoki

    2018-04-01

    In feed drive systems, particularly machine tools, the contour error is more significant than the individual axial tracking errors from the viewpoint of enhancing precision in manufacturing and production systems. The contour error must be within the permissible tolerance of given products. In machining complex or sharp-corner products, large contour errors occur mainly owing to discontinuous trajectories and the existence of nonlinear uncertainties. Therefore, it is indispensable to design robust controllers that can enhance the tracking ability of feed drive systems. In this study, an iterative learning contouring controller consisting of a classical proportional-derivative (PD) controller and a disturbance observer is proposed. The proposed controller was evaluated experimentally by using a typical sharp-corner trajectory, and its performance was compared with that of conventional controllers. The results revealed that the maximum contour error can be reduced by about 37% on average.

  11. The immune system, adaptation, and machine learning

    Science.gov (United States)

    Farmer, J. Doyne; Packard, Norman H.; Perelson, Alan S.

    1986-10-01

    The immune system is capable of learning, memory, and pattern recognition. By employing genetic operators on a time scale fast enough to observe experimentally, the immune system is able to recognize novel shapes without preprogramming. Here we describe a dynamical model for the immune system that is based on the network hypothesis of Jerne, and is simple enough to simulate on a computer. This model has a strong similarity to an approach to learning and artificial intelligence introduced by Holland, called the classifier system. We demonstrate that simple versions of the classifier system can be cast as a nonlinear dynamical system, and explore the analogy between the immune and classifier systems in detail. Through this comparison we hope to gain insight into the way they perform specific tasks, and to suggest new approaches that might be of value in learning systems.

  12. Quantum Ensemble Classification: A Sampling-Based Learning Control Approach.

    Science.gov (United States)

    Chen, Chunlin; Dong, Daoyi; Qi, Bo; Petersen, Ian R; Rabitz, Herschel

    2017-06-01

    Quantum ensemble classification (QEC) has significant applications in discrimination of atoms (or molecules), separation of isotopes, and quantum information extraction. However, quantum mechanics forbids deterministic discrimination among nonorthogonal states. The classification of inhomogeneous quantum ensembles is very challenging, since there exist variations in the parameters characterizing the members within different classes. In this paper, we recast QEC as a supervised quantum learning problem. A systematic classification methodology is presented by using a sampling-based learning control (SLC) approach for quantum discrimination. The classification task is accomplished via simultaneously steering members belonging to different classes to their corresponding target states (e.g., mutually orthogonal states). First, a new discrimination method is proposed for two similar quantum systems. Then, an SLC method is presented for QEC. Numerical results demonstrate the effectiveness of the proposed approach for the binary classification of two-level quantum ensembles and the multiclass classification of multilevel quantum ensembles.

  13. Multiobjective Optimization Design of a Fractional Order PID Controller for a Gun Control System

    Directory of Open Access Journals (Sweden)

    Qiang Gao

    2013-01-01

    Full Text Available Motion control of gun barrels is an ongoing topic in the development of gun control equipment with excellent performance. In this paper, a typical fractional order PID control strategy is employed for the gun control system. To obtain optimal parameters of the controller, a multiobjective optimization scheme is developed from the loop-shaping perspective. To solve the specified nonlinear optimization problem, a novel Pareto optimal solution based multiobjective differential evolution algorithm is proposed. To enhance the convergence rate of the optimization process, an opposition based learning method is embedded in the chaotic population initialization process. To enhance the robustness of the algorithm for different problems, an adaptive scheme for the mutation operation is further employed. With the assistance of the evolutionary algorithm, the optimal solution for the specified problem is selected. The numerical simulation results show that the control system can rapidly follow the demand signal with high accuracy and high robustness, demonstrating the efficiency of the proposed controller parameter tuning method.
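
    Two of the optimizer's highlighted ingredients, opposition-based initialization and differential-evolution search, are sketched below on a single-objective placeholder cost; in the paper the cost would be the loop-shaping objectives evaluated for a candidate fractional-order PID parameter vector, handled in a Pareto (multiobjective) fashion rather than this simplified single-objective loop.

    ```python
    import numpy as np

    rng = np.random.default_rng(6)

    def f(x):                                    # placeholder objective (assumed)
        return np.sum((x - 0.3) ** 2)

    lo, hi, dim, npop = -1.0, 1.0, 5, 20
    pop = rng.uniform(lo, hi, size=(npop, dim))
    opp = lo + hi - pop                          # opposition-based candidates
    both = np.vstack([pop, opp])
    pop = both[np.argsort([f(x) for x in both])[:npop]]   # keep the fitter half

    F, CR = 0.6, 0.9                             # DE mutation and crossover rates
    for _ in range(200):
        for i in range(npop):
            a, b, c = pop[rng.choice(npop, 3, replace=False)]
            mutant = np.clip(a + F * (b - c), lo, hi)
            cross = rng.random(dim) < CR
            trial = np.where(cross, mutant, pop[i])
            if f(trial) < f(pop[i]):             # greedy selection
                pop[i] = trial
    best = min(pop, key=f)
    print("best parameter vector:", best)
    ```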

  14. Active Learning of Markov Decision Processes for System Verification

    DEFF Research Database (Denmark)

    Chen, Yingke; Nielsen, Thomas Dyhre

    2012-01-01

    ...demanding process, and this shortcoming has motivated the development of algorithms for automatically learning system models from observed system behaviors. Recently, algorithms have been proposed for learning Markov decision process representations of reactive systems based on alternating sequences of input/output observations. While alleviating the problem of manually constructing a system model, the collection/generation of observed system behaviors can also prove demanding. Consequently we seek to minimize the amount of data required. In this paper we propose an algorithm for learning deterministic Markov decision processes from data by actively guiding the selection of input actions. The algorithm is empirically analyzed by learning system models of slot machines, and it is demonstrated that the proposed active learning procedure can significantly reduce the amount of data required...

  15. Design of an eLearning System for Accreditation of Non-formal Learning

    OpenAIRE

    Kovatcheva, Eugenia; Nikolov, Roumen

    2008-01-01

    This paper deals with issues related to non-formal learning in vocational education, and the role of ICT in providing an appropriate accreditation model for such education. The presented conclusions are based on the Leonardo da Vinci project LeoSPAN. The paper emphasises the development of a model and a prototype of an adaptive eLearning system that ensures the pre-defined learner outcomes. One of the advantages of the eLearning system is the flexibility for people who upgrade and improve...

  16. Learning management system and e-learning tools: an experience of medical students' usage and expectations.

    Science.gov (United States)

    Back, David A; Behringer, Florian; Haberstroh, Nicole; Ehlers, Jan P; Sostmann, Kai; Peters, Harm

    2016-08-20

    To investigate medical students' utilization of and problems with a learning management system and its e-learning tools as well as their expectations on future developments. A single-center online survey has been carried out to investigate medical students' (n = 505) usage and perception concerning the learning management system Blackboard, and provided e-learning tools. Data were collected with a standardized questionnaire consisting of 70 items and analyzed by quantitative and qualitative methods. The participants valued lecture notes (73.7%) and Wikipedia (74%) as their most important online sources for knowledge acquisition. Missing integration of e-learning into teaching was seen as the major pitfall (58.7%). The learning management system was mostly used for study information (68.3%), preparation of exams (63.3%) and lessons (54.5%). Clarity (98.3%), teaching-related contexts (92.5%) and easy use of e-learning offers (92.5%) were rated highest. Interactivity was most important in free-text comments (n = 123). It is desired that the contents of a learning management system support efficient learning. Interactivity of tools and their conceptual integration into face-to-face teaching are important for students. The learning management system was especially important for organizational purposes and the provision of learning materials. Teachers should be aware that free online sources such as Wikipedia enjoy high approval as a source of knowledge acquisition. This study provides an empirical basis for medical schools and teachers to improve their offerings in the field of digital learning for their students.

  17. A novel multi-agent decentralized win or learn fast policy hill-climbing with eligibility trace algorithm for smart generation control of interconnected complex power grids

    International Nuclear Information System (INIS)

    Xi, Lei; Yu, Tao; Yang, Bo; Zhang, Xiaoshun

    2015-01-01

    Highlights: • A decentralized smart generation control scheme is proposed for automatic generation control coordination. • A novel multi-agent learning algorithm is developed to resolve stochastic control problems in power systems. • A variable learning rate is introduced based on the framework of stochastic games. • A simulation platform is developed to test the performance of different algorithms. - Abstract: This paper proposes a multi-agent smart generation control scheme for automatic generation control coordination in interconnected complex power systems. A novel multi-agent decentralized win or learn fast policy hill-climbing with eligibility trace algorithm is developed, which can effectively identify the optimal average policies via a variable learning rate under various operating conditions. Based on control performance standards, the proposed approach is implemented in a flexible multi-agent stochastic dynamic game-based smart generation control simulation platform. Based on the mixed strategy and average policy, it is highly adaptive in stochastic non-Markov environments and large time-delay systems, and can fulfill automatic generation control coordination in interconnected complex power systems in the presence of increasing penetration of decentralized renewable energy. Two case studies, on a two-area load–frequency control power system and on the China Southern Power Grid model, have been carried out. Simulation results verify that the multi-agent smart generation control scheme based on the proposed approach can obtain optimal average policies, thus improving closed-loop system performance, and can achieve a fast convergence rate with significant robustness compared with other methods
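
    The core learner is win-or-learn-fast policy hill-climbing (WoLF-PHC). The sketch below shows a simplified stateless, single-agent version on a toy two-action problem: the mixed policy is nudged toward the greedy action with a small step when the agent is "winning" (doing better than its average policy) and a larger step when "losing". The paper's algorithm additionally uses eligibility traces, per-area states and CPS-based rewards, none of which appear in this sketch.

    ```python
    import numpy as np

    rng = np.random.default_rng(7)
    n_actions = 2
    Q = np.zeros(n_actions)
    pi = np.ones(n_actions) / n_actions          # current mixed policy
    pi_bar = pi.copy()                           # running average policy
    alpha, d_win, d_lose = 0.1, 0.01, 0.04       # learn faster when losing
    count = 0

    def reward(a):                               # hypothetical payoffs
        return 1.0 if a == 0 else rng.normal(0.5, 0.1)

    for _ in range(5000):
        a = rng.choice(n_actions, p=pi)
        Q[a] += alpha * (reward(a) - Q[a])       # stateless Q update
        count += 1
        pi_bar += (pi - pi_bar) / count          # update the average policy
        delta = d_win if pi @ Q >= pi_bar @ Q else d_lose
        best = int(np.argmax(Q))
        # Hill-climb towards the greedy action, then renormalize onto the simplex
        for b in range(n_actions):
            pi[b] = min(1.0, pi[b] + delta) if b == best else max(0.0, pi[b] - delta)
        pi /= pi.sum()
    print("learned policy:", pi, "action values:", Q)
    ```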

  18. System Quality Characteristics for Selecting Mobile Learning Applications

    Directory of Open Access Journals (Sweden)

    Mohamed SARRAB

    2015-10-01

    Full Text Available The majority of M-learning (mobile learning) applications available today are developed for the formal learning and education environment. These applications are characterized by the improvement in the interaction between learners and instructors to provide high interaction and flexibility in the learning process. M-learning is gaining increased recognition and adoption by different organizations. With the high number of M-learning applications available today, making the right decision about which application to choose can be quite challenging. To date there is no complete and well defined set of system characteristics for such M-learning applications. This paper presents system quality characteristics for selecting M-learning applications based on the result of a systematic review conducted in this domain.

  19. Intelligent failure-proof control system for structural vibration

    Energy Technology Data Exchange (ETDEWEB)

    Yoshida, Kazuo [Keio Univ., Yokohama (Japan). Faculty of Science and Technology]; Oba, Takahiro [Keio Univ., Tokyo (Japan)]

    2000-11-01

    With the progress of technology in recent years, the gigantism and complexity of structures such as high-rise buildings and nuclear reactors have brought about new problems. In particular, safety and reliability against damage in abnormal situations have become more important. Intelligent control systems which can judge in real time whether the situation is normal or abnormal, and cope with these situations suitably, are in demand. In this study, a Cubic Neural Network (CNN) is adopted, which consists of controllers possessing cubically arranged levels of information abstraction. In addition to the usual quantitative control, qualitative control is used for abnormal situations, and by selecting a suitable controller, the CNN can cope with the abnormal situation. In order to confirm the effectiveness of this system, structural vibration control problems with sensor failure and elasto-plastic response are dealt with. Simulation results demonstrated that the CNN can cope with unexpected abnormal situations which are not considered during learning. (author)

  20. Intelligent failure-proof control system for structural vibration

    International Nuclear Information System (INIS)

    Yoshida, Kazuo

    2000-01-01

    With the progress of technology in recent years, the gigantism and complexity of structures such as high-rise buildings and nuclear reactors have brought about new problems. In particular, safety and reliability against damage in abnormal situations have become more important. Intelligent control systems which can judge in real time whether the situation is normal or abnormal, and cope with these situations suitably, are in demand. In this study, a Cubic Neural Network (CNN) is adopted, which consists of controllers possessing cubically arranged levels of information abstraction. In addition to the usual quantitative control, qualitative control is used for abnormal situations, and by selecting a suitable controller, the CNN can cope with the abnormal situation. In order to confirm the effectiveness of this system, structural vibration control problems with sensor failure and elasto-plastic response are dealt with. Simulation results demonstrated that the CNN can cope with unexpected abnormal situations which are not considered during learning. (author)

  1. Robust iterative learning control for multi-phase batch processes: an average dwell-time method with 2D convergence indexes

    Science.gov (United States)

    Wang, Limin; Shen, Yiteng; Yu, Jingxian; Li, Ping; Zhang, Ridong; Gao, Furong

    2018-01-01

    In order to cope with system disturbances in multi-phase batch processes with different dimensions, a hybrid robust control scheme combining iterative learning control with feedback control is proposed in this paper. First, with a hybrid iterative learning control law designed by introducing the state error, the tracking error and the extended information, the multi-phase batch process is converted into a two-dimensional Fornasini-Marchesini (2D-FM) switched system with different dimensions. Second, a switching signal is designed using the average dwell-time method together with the related switching conditions to give sufficient conditions ensuring stable operation of the system. Finally, the minimum running time of the subsystems and the control law gains are calculated by solving linear matrix inequalities. Meanwhile, a compound 2D controller with robust performance is obtained, which includes a robust extended feedback control that ensures the steady-state tracking error converges rapidly. An application to an injection molding process demonstrates the effectiveness and superiority of the proposed strategy.
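
    The record above describes a hybrid ILC law for a 2D-FM switched system; the fragment below is only a minimal sketch of the underlying batch-to-batch idea, assuming a toy first-order plant and a simple P-type learning update (the plant, gain and trajectory are illustrative assumptions, not the paper's design).

      # Minimal P-type iterative learning control (ILC) sketch on an assumed
      # toy plant; not the paper's hybrid 2D-FM switched controller.
      import numpy as np

      a, b = 0.9, 0.5                              # assumed first-order batch plant
      T = 50                                       # samples per batch
      y_ref = np.sin(np.linspace(0, np.pi, T))     # desired trajectory

      def run_batch(u):
          """Simulate one batch (trial) and return the output trajectory."""
          y = np.zeros(T)
          for t in range(T - 1):
              y[t + 1] = a * y[t] + b * u[t]
          return y

      u = np.zeros(T)            # control input, refined from batch to batch
      L_gain = 0.8               # learning gain (assumed)
      for batch in range(30):
          y = run_batch(u)
          e = y_ref - y                      # tracking error of this batch
          u[:-1] += L_gain * e[1:]           # u_{k+1}(t) = u_k(t) + L * e_k(t+1)
          print(f"batch {batch:2d}  max |e| = {np.max(np.abs(e)):.4f}")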

  2. Panorama of Recommender Systems to Support Learning

    NARCIS (Netherlands)

    Drachsler, Hendrik; Verbert, Katrien; Santos, Olga C.; Manouselis, Nikos

    2015-01-01

    This chapter presents an analysis of recommender systems in Technology-Enhanced Learning over their 15 years of existence (2000-2014). All recommender systems considered for the review aim to support educational stakeholders by personalising the learning process. In this meta-review 82 recommender systems from 35 different countries have been investigated and categorised according to a given classification framework.

  3. An Online Q-learning Based Multi-Agent LFC for a Multi-Area Multi-Source Power System Including Distributed Energy Resources

    Directory of Open Access Journals (Sweden)

    H. Shayeghi

    2017-12-01

    Full Text Available This paper presents an online two-stage Q-learning based multi-agent (MA) controller for load frequency control (LFC) in an interconnected multi-area multi-source power system integrated with distributed energy resources (DERs). The proposed control strategy consists of two stages. The first stage employs a PID controller whose parameters are designed using the sine cosine optimization (SCO) algorithm and then fixed. The second is a reinforcement learning (RL) based supplementary controller that has a flexible structure and improves the output of the first stage adaptively based on the dynamical behavior of the system. Because the RL paradigm is integrated with a PID controller in this strategy, it is called an RL-PID controller. The primary motivation for integrating the RL technique with the PID controller is to remain compatible with the local controllers already existing in industry and thereby reduce control effort and system cost. This novel control strategy combines the advantages of the PID controller with the adaptive behavior of MA to achieve the desired level of robust performance under different kinds of uncertainties caused by the stochastic power generation of DERs, changes in plant operating conditions, and physical nonlinearities of the system. The suggested decentralized controller is composed of autonomous intelligent agents that learn the optimal control policy from interaction with the system. These agents continuously update their knowledge about the system dynamics to achieve good damping of frequency oscillations under various severe disturbances without any prior knowledge of them. This leads to an adaptive control structure for solving the LFC problem in a multi-source power system with stochastic DERs. The performance of the RL-PID controller is verified against traditional PID and fuzzy-PID controllers in a multi-area power system integrated with DERs using several performance indices.
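
    As a rough illustration of the two-stage structure described above, the sketch below pairs a fixed PI loop with a small tabular Q-learning agent that adds a supplementary signal on a toy one-area frequency model; the model, gains, discretisation and reward are assumptions for illustration and do not reproduce the SCO-tuned PID or the multi-area, multi-agent design of the paper.

      # Hedged sketch: fixed PI stage plus a tabular Q-learning supplementary
      # signal on an assumed one-area frequency-deviation model.
      import numpy as np

      rng = np.random.default_rng(0)
      actions = np.array([-0.05, 0.0, 0.05])      # supplementary control choices
      n_bins = 21
      bins = np.linspace(-0.5, 0.5, n_bins - 1)   # discretise frequency deviation
      Q = np.zeros((n_bins, len(actions)))
      alpha, gamma, eps = 0.1, 0.95, 0.1          # learning parameters (assumed)
      Kp, Ki = 0.8, 0.3                           # fixed stage-one PI gains (assumed)

      for episode in range(200):
          x, integ = 0.0, 0.0                     # frequency deviation, PI integral
          s = int(np.digitize(x, bins))
          for k in range(200):
              a_idx = rng.integers(len(actions)) if rng.random() < eps else int(np.argmax(Q[s]))
              d = 0.2 if 50 <= k < 120 else 0.0   # step load disturbance
              integ += x
              u = -Kp * x - Ki * integ + actions[a_idx]   # stage 1 (PI) + stage 2 (RL)
              x = 0.98 * x + 0.05 * (u - d)               # toy closed-loop dynamics
              r = -x * x                                  # penalise frequency deviation
              s2 = int(np.digitize(x, bins))
              Q[s, a_idx] += alpha * (r + gamma * Q[s2].max() - Q[s, a_idx])
              s = s2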

  4. Output Information Based Fault-Tolerant Iterative Learning Control for Dual-Rate Sampling Process with Disturbances and Output Delay

    Directory of Open Access Journals (Sweden)

    Hongfeng Tao

    2018-01-01

    Full Text Available For a class of single-input single-output (SISO) dual-rate sampling processes with disturbances and output delay, this paper presents a robust fault-tolerant iterative learning control algorithm based on output information. Firstly, the dual-rate sampling process with output delay is transformed into a delay-free discrete state-space model at the slow sampling rate by using lifting technology; then an output-information-based fault-tolerant iterative learning control scheme is designed and the control process is turned into an equivalent two-dimensional (2D) repetitive process. Moreover, based on repetitive process stability theory, sufficient conditions for the stability of the system and a design method for the robust controller are given in terms of the linear matrix inequality (LMI) technique. Finally, flow control simulations of two flow tanks in series demonstrate the feasibility and effectiveness of the proposed method.
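
    The lifting step mentioned above can be illustrated for the simplest case: a fast-rate state-space model with the input held constant over p fast samples is rewritten as an equivalent model at the slow output rate. The matrices below are arbitrary assumptions, and the paper's handling of output delay and faults is not reproduced.

      # Hedged sketch of lifting a fast-rate model to the slow sampling rate.
      import numpy as np

      A = np.array([[0.9, 0.1],
                    [0.0, 0.8]])
      B = np.array([[0.0],
                    [0.5]])
      p = 4                                   # fast samples per slow sample

      # Lifted slow-rate matrices for a zero-order-hold input over each frame:
      #   x((j+1)p) = A_bar x(jp) + B_bar u_j,  A_bar = A^p,  B_bar = sum_i A^i B
      A_bar = np.linalg.matrix_power(A, p)
      B_bar = sum(np.linalg.matrix_power(A, i) for i in range(p)) @ B

      # Check the lifted model against p steps of the fast model with a held input.
      x = np.array([[1.0], [0.5]])
      u = np.array([[0.3]])
      x_fast = x.copy()
      for _ in range(p):
          x_fast = A @ x_fast + B @ u
      assert np.allclose(x_fast, A_bar @ x + B_bar @ u)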

  5. USE OF FACIAL EMOTION RECOGNITION IN E-LEARNING SYSTEMS

    Directory of Open Access Journals (Sweden)

    Uğur Ayvaz

    2017-09-01

    Full Text Available Since personal computer usage and internet bandwidth are increasing, e-learning systems are also spreading widely. Although e-learning has some advantages in terms of information accessibility and time and place flexibility compared to formal learning, it does not provide enough face-to-face interactivity between an educator and learners. In this study, we propose a hybrid information system combining computer vision and machine learning technologies for visual and interactive e-learning systems. The proposed information system detects the emotional states of the learners and gives feedback to the educator about their instant and weighted emotional states based on facial expressions. In this way, the educator will be aware of the general emotional state of the virtual classroom and the system will create an interactive environment resembling formal learning. Several classification algorithms were applied to recognize the instant emotional state, and the best accuracy rates were obtained using the kNN and SVM algorithms.
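
    A hedged sketch of the final classification step only, comparing kNN and SVM classifiers on pre-extracted facial-expression feature vectors; the synthetic features and labels generated below stand in for the study's real data, which are not available here.

      # Compare kNN and SVM on placeholder emotion-classification features.
      from sklearn.datasets import make_classification
      from sklearn.model_selection import train_test_split
      from sklearn.neighbors import KNeighborsClassifier
      from sklearn.svm import SVC
      from sklearn.metrics import accuracy_score

      # Placeholder features (e.g. facial landmark distances); 5 emotion classes.
      X, y = make_classification(n_samples=600, n_features=20, n_informative=10,
                                 n_classes=5, random_state=0)
      X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

      for name, clf in [("kNN", KNeighborsClassifier(n_neighbors=5)),
                        ("SVM", SVC(kernel="rbf", C=1.0))]:
          clf.fit(X_tr, y_tr)
          acc = accuracy_score(y_te, clf.predict(X_te))
          print(f"{name}: test accuracy = {acc:.3f}")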

  6. Discrete Learning Control with Application to Hydraulic Actuators

    DEFF Research Database (Denmark)

    Andersen, Torben Ole; Pedersen, Henrik Clemmensen; Hansen, Michael R.

    2015-01-01

    In this paper the robustness of a class of learning control algorithms to state disturbances, output noise, and errors in initial conditions is studied. We present a simple learning algorithm and exhibit, via a concise proof, bounds on the asymptotic trajectory errors for the learned input and the corresponding state and output trajectories. Furthermore, these bounds are continuous functions of the bounds on the initial condition errors, state disturbance, and output noise, and the bounds are zero in the absence of these disturbances.

  7. Adaptive fuzzy trajectory control for biaxial motion stage system

    Directory of Open Access Journals (Sweden)

    Wei-Lung Mao

    2016-04-01

    Full Text Available Motion control is an essential part of industrial machinery and manufacturing systems. In this article, an adaptive fuzzy controller is proposed for precision trajectory tracking control in a biaxial X-Y motion stage system. Theoretical analyses of direct fuzzy control, which is insensitive to parameter uncertainties and external load disturbances, are derived to demonstrate the feasibility of tracking the reference trajectories. The Lyapunov stability theorem is used to establish the asymptotic stability of the whole system, and all the signals are bounded in the closed-loop system. The intelligent position controller combines the merits of adaptive fuzzy control with robust characteristics and learning ability for periodic command tracking of a servo drive mechanism. Simulation and experimental results on square, triangle, star, and circle reference contours are presented to show that the proposed controller indeed achieves better tracking performance under model uncertainties. It is observed that parameter convergence is faster and tracking errors are smaller compared with conventional adaptive fuzzy control, in terms of average tracking error and tracking error standard deviation.

  8. A PEDAGOGICAL CRITICAL REVIEW OF ONLINE LEARNING SYSTEM

    Directory of Open Access Journals (Sweden)

    Dwi SULISWORO

    2016-08-01

    Full Text Available E-learning takes various shapes, such as blogs, classroom learning facilitated by the World Wide Web, a mix of online instruction and class meetings known as supplemental or hybrid models, or the fully online experience, where all assessment and instruction is done electronically. The relationship between learning objects and constructivist educational philosophy confirms that online learning has an orientation which is basically constructivist, where knowledge is combined through inquiry-oriented and authentic activities that also promote the construction of new knowledge. The online learning system in theory and practice can be illustrated by a few examples found in previous research and by new findings obtained in this study, although not everything can be done because of several factors. Note that the components of the online learning system can serve as a learning system that strongly influences learning in the class. The objective of this research is a pedagogical critical review of the online learning system in theory and practice that can be applied by teachers in the teaching process in the classroom. The results obtained in this study were that teachers and students need extra effort to create online and virtual classes. Further research on appropriate strategies is needed in order to determine whether subsequent results are more useful. Some suggestions are offered for future studies that discuss online learning systems in particular subject areas, namely electricity as well as other disciplines such as the social sciences and humanities.

  9. Emotional learning based intelligent controller for a PWR nuclear reactor core during load following operation

    International Nuclear Information System (INIS)

    Khorramabadi, Sima Seidi; Boroushaki, Mehrdad; Lucas, Caro

    2008-01-01

    The design and evaluation of a novel approach to reactor core power control based on emotional learning is described. The controller includes a neuro-fuzzy system with the power error and its derivative as inputs. A fuzzy critic evaluates the present situation and provides the emotional signal (stress). The controller modifies its characteristics so that the critic's stress is reduced. Simulation results show that the controller has good convergence and performance robustness characteristics over a wide range of operational parameters.
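
    The sketch below conveys the general emotional-learning idea only: a critic maps the power error and its derivative to a scalar stress signal, and the controller gain adapts until the stress subsides. The toy plant, stress weighting and update law are assumptions and do not reproduce the paper's neuro-fuzzy design.

      # Loose sketch of an emotional-learning controller on an assumed toy plant.
      dt = 0.1
      ref = 1.0                     # desired relative power (assumed)
      y = 0.0                       # plant output (relative power)
      e_prev = ref - y
      Kp, Kd = 0.0, 0.5             # adaptive gain and fixed damping term (assumed)
      lr = 0.2                      # adaptation rate (assumed)

      for k in range(400):
          e = ref - y
          de = (e - e_prev) / dt
          stress = abs(e) + 0.1 * abs(de)     # critic's emotional signal
          u = Kp * e + Kd * de                # controller output
          Kp += lr * stress * abs(e)          # adapt so that future stress is reduced
          e_prev = e
          y += dt * (-y + 0.8 * u)            # assumed first-order power response
      print(f"final power {y:.3f}, adapted gain Kp = {Kp:.2f}")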

  10. LBS Mobile Learning System Based on Android Platform

    Directory of Open Access Journals (Sweden)

    Zhang Ya-Li

    2017-01-01

    Full Text Available In the era of the mobile internet, PC-based internet services can no longer satisfy people's demands, and the need for apps and services on mobile phones is more urgent than ever. With increasing social competition, the concept of lifelong learning is becoming more popular and widely accepted; making full use of spare time to learn at any time and in any place meets modern people's desire to keep their knowledge up to date. The Location Based System (LBS) mobile learning system based on the Android platform was created against this background. In this paper, the characteristics of mobile location technology and intelligent terminals are introduced and analyzed, and a mobile learning system that fulfills the personalized needs of mobile learners is designed and developed on the basis of location information. In this way, mobile learning can be greatly promoted and new research ideas can be expanded for mobile learning.

  11. Panorama of recommender systems to support learning

    OpenAIRE

    Drachsler, Hendrik; Verbert, Katrien; Santos, Olga; Manouselis, Nikos

    2015-01-01

    This chapter presents an analysis of recommender systems in Technology-Enhanced Learning over their 15 years of existence (2000-2014). All recommender systems considered for the review aim to support educational stakeholders by personalising the learning process. In this meta-review 82 recommender systems from 35 different countries have been investigated and categorised according to a given classification framework. The reviewed systems have been classified into 7 clusters according to their c...

  12. Machine learning algorithms for the creation of clinical healthcare enterprise systems

    Science.gov (United States)

    Mandal, Indrajit

    2017-10-01

    Clinical recommender systems are increasingly becoming popular for improving modern healthcare systems. Enterprise systems are persuasively used for creating effective nurse care plans to provide nurse training, clinical recommendations and clinical quality control. A novel design of a reliable clinical recommender system based on a multiple classifier system (MCS) is implemented. A hybrid machine learning (ML) ensemble based on the random subspace method and random forest is presented. The performance accuracy and robustness of the proposed enterprise architecture are quantitatively estimated to be above 99% and 97%, respectively (above the 95% confidence interval). The study then extends to an experimental analysis of the clinical recommender system with respect to noisy data environments. The ranking of items in the nurse care plan is demonstrated using machine learning algorithms (MLAs) to overcome the drawback of the traditional association rule method. The promising experimental results are compared against state-of-the-art approaches to highlight the advancement in recommendation technology. The proposed recommender system is experimentally validated using five benchmark clinical datasets to reinforce the research findings.
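
    In the spirit of the hybrid ensemble described above, the sketch below combines a random subspace ensemble with a random forest into a single multiple classifier system using scikit-learn; the synthetic data stand in for the clinical benchmark datasets, and the exact ensemble composition is an assumption.

      # Hedged sketch: random subspace ensemble + random forest as one MCS.
      from sklearn.datasets import make_classification
      from sklearn.ensemble import (BaggingClassifier, RandomForestClassifier,
                                    VotingClassifier)
      from sklearn.model_selection import cross_val_score

      X, y = make_classification(n_samples=500, n_features=30, n_informative=12,
                                 random_state=0)

      # Random subspace method: each base tree (the default estimator) sees a
      # random half of the features but all of the samples.
      subspace = BaggingClassifier(n_estimators=50, max_features=0.5,
                                   bootstrap=False, bootstrap_features=False,
                                   random_state=0)
      forest = RandomForestClassifier(n_estimators=100, random_state=0)

      mcs = VotingClassifier([("subspace", subspace), ("forest", forest)],
                             voting="soft")
      print("CV accuracy:", cross_val_score(mcs, X, y, cv=5).mean())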

  13. Understanding Self-Controlled Motor Learning Protocols through the Self-Determination Theory.

    Science.gov (United States)

    Sanli, Elizabeth A; Patterson, Jae T; Bray, Steven R; Lee, Timothy D

    2012-01-01

    The purpose of the present review was to provide a theoretical understanding of the learning advantages underlying a self-controlled practice context through the tenets of the self-determination theory (SDT). Three micro-theories within the macro-theory of SDT (Basic psychological needs theory, Cognitive Evaluation Theory, and Organismic Integration Theory) are used as a framework for examining the current self-controlled motor learning literature. A review of 26 peer-reviewed, empirical studies from the motor learning and medical training literature revealed an important limitation of the self-controlled research in motor learning: that the effects of motivation have been assumed rather than quantified. The SDT offers a basis from which to include measurements of motivation into explanations of changes in behavior. This review suggests that a self-controlled practice context can facilitate such factors as feelings of autonomy and competence of the learner, thereby supporting the psychological needs of the learner, leading to long term changes to behavior. Possible tools for the measurement of motivation and regulation in future studies are discussed. The SDT not only allows for a theoretical reinterpretation of the extant motor learning research supporting self-control as a learning variable, but also can help to better understand and measure the changes occurring between the practice environment and the observed behavioral outcomes.

  14. Understanding self-controlled motor learning protocols through the self determination theory

    Directory of Open Access Journals (Sweden)

    Elizabeth Ann Sanli

    2013-01-01

    Full Text Available The purpose of the present review was to provide a theoretical understanding of the learning advantages underlying a self-controlled practice context through the tenets of the self-determination theory (SDT). Three micro theories within the macro theory of SDT (Basic Psychological Needs Theory, Cognitive Evaluation Theory and Organismic Integration Theory) are used as a framework for examining the current self-controlled motor learning literature. A review of 26 peer-reviewed, empirical studies from the motor learning and medical training literature revealed an important limitation of the self-controlled research in motor learning: that the effects of motivation have been assumed rather than quantified. The SDT offers a basis from which to include measurements of motivation into explanations of changes in behavior. This review suggests that a self-controlled practice context can facilitate such factors as feelings of autonomy and competence of the learner, thereby supporting the psychological needs of the learner, leading to long term changes to behavior. Possible tools for the measurement of motivation and regulation in future studies are discussed. The SDT not only allows for a theoretical reinterpretation of the extant motor learning research supporting self-control as a learning variable, but also can help to better understand and measure the changes occurring between the practice environment and the observed behavioral outcomes.

  15. An Adaptive Supervisory Sliding Fuzzy Cerebellar Model Articulation Controller for Sensorless Vector-Controlled Induction Motor Drive Systems

    Directory of Open Access Journals (Sweden)

    Shun-Yuan Wang

    2015-03-01

    Full Text Available This paper presents the implementation of an adaptive supervisory sliding fuzzy cerebellar model articulation controller (FCMAC) in the speed sensorless vector control of an induction motor (IM) drive system. The proposed adaptive supervisory sliding FCMAC comprised a supervisory controller, an integral sliding surface, and an adaptive FCMAC. The integral sliding surface was employed to eliminate steady-state errors and enhance the responsiveness of the system. The adaptive FCMAC incorporated an FCMAC with a compensating controller to perform a desired control action. The proposed controller was derived using the Lyapunov approach, which guarantees learning-error convergence. Three intelligent control schemes, namely the adaptive supervisory sliding FCMAC, the adaptive sliding FCMAC, and the adaptive sliding CMAC, were experimentally investigated under various conditions in a realistic sensorless vector-controlled IM drive system. The root mean square error (RMSE) was used as a performance index to evaluate the experimental results of each control scheme. The analysis results indicated that the proposed adaptive supervisory sliding FCMAC substantially improved the system performance compared with the other control schemes.

  16. Reinforcement and Systemic Machine Learning for Decision Making

    CERN Document Server

    Kulkarni, Parag

    2012-01-01

    Reinforcement and Systemic Machine Learning for Decision Making There are always difficulties in making machines that learn from experience. Complete information is not always available, or it becomes available in bits and pieces over a period of time. With respect to systemic learning, there is a need to understand the impact of decisions and actions on a system over that period of time. This book takes a holistic approach to addressing that need and presents a new paradigm, creating new learning applications and, ultimately, more intelligent machines. The first book of its kind in this new an

  17. Cultural impacts on e-learning systems' success

    OpenAIRE

    Aparicio, M.; Bação, F.; Oliveira, T.

    2016-01-01

    WOS:000383295100007 (Web of Science Accession Number) E-learning systems are enablers in the learning process, strengthening their importance as part of the educational strategy. Understanding the determinants of e-learning success is crucial for defining instructional strategies. Several authors have studied e-learning implementation and adoption, and various studies have addressed e-learning success from different perspectives. However, none of these studies have verified whether students' c...

  18. Harnessing the Power of Learning Management Systems: An E-Learning Approach for Professional Development.

    Science.gov (United States)

    White, Meagan; Shellenbarger, Teresa

    E-learning provides an alternative approach to traditional professional development activities. A learning management system may help nursing professional development practitioners deliver content more efficiently and effectively; however, careful consideration is needed during planning and implementation. This article provides essential information in the selection and use of a learning management system for professional development.

  19. Speed Sensorless Control of PMSM using Model Reference Adaptive System and RBFN

    OpenAIRE

    Wei Gao; Zhirong Guo

    2013-01-01

    For the speed sensorless vector control system, an improved method of estimating the rotor speed using a model reference adaptive system (MRAS) based on a radial basis function neural network (RBFN) is presented for the PMSM sensorless vector control system. Building on the PI regulator, a radial basis function neural network, which offers superior learning efficiency and performance, is combined with the MRAS. The reference model and the adjustable model are the PMSM itself and the PMSM current, respectively...

  20. New education system for construction of optical holography setup – Tangible learning with Augmented Reality

    International Nuclear Information System (INIS)

    Yamaguchi, Takeshi; Yoshikawa, Hiroshi

    2013-01-01

    When teaching optical system construction, it is difficult to provide the optical components for every attending student. However, tangible learning is very important for mastering optical system construction. An inexpensive learning system that provides optical experiment experiences helps learners understand easily. Therefore, we propose a new education system for the construction of optical setups using augmented reality. With augmented reality, the proposed system can simulate optical system construction through direct hand control. Moreover, this system only requires an inexpensive web camera, printed markers and a personal computer. Since the system requires neither a darkroom nor expensive optical equipment, learners can study anytime and anywhere they want. In this paper, we developed a system that can teach the optical system construction of the Denisyuk hologram and the 2-step transmission type hologram. For tangible learning and easy understanding, the proposed system displays CG objects of the optical components on markers which are controlled by the learner's hands. The proposed system not only displays the CG objects but also displays the light beam, which is manipulated by the optical components. Because the light beam, which is hard to see directly, is displayed, learners can confirm what is happening through their own manipulation. For the construction of the optical holography setup, we arrange a laser, mirrors, a PBS (polarizing beam splitter), lenses, a polarizer, half-wave plates, spatial filters, an optical power meter and a recording plate. After the construction, the proposed system can check whether the optical setup is correct. In comparison with learners who only read a book, learners who use the system can construct the optical holography setup more quickly and correctly.

  1. Learning in Artificial Neural Systems

    Science.gov (United States)

    Matheus, Christopher J.; Hohensee, William E.

    1987-01-01

    This paper presents an overview and analysis of learning in Artificial Neural Systems (ANS's). It begins with a general introduction to neural networks and connectionist approaches to information processing. The basis for learning in ANS's is then described, and compared with classical Machine learning. While similar in some ways, ANS learning deviates from tradition in its dependence on the modification of individual weights to bring about changes in a knowledge representation distributed across connections in a network. This unique form of learning is analyzed from two aspects: the selection of an appropriate network architecture for representing the problem, and the choice of a suitable learning rule capable of reproducing the desired function within the given network. The various network architectures are classified, and then identified with explicit restrictions on the types of functions they are capable of representing. The learning rules, i.e., algorithms that specify how the network weights are modified, are similarly taxonomized, and where possible, the limitations inherent to specific classes of rules are outlined.
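
    As a concrete example of the kind of learning rule surveyed above, the sketch below applies the classic delta rule, which modifies individual connection weights in proportion to the output error of a single linear unit; the data and learning rate are illustrative assumptions.

      # Delta-rule weight updates for a single linear unit on assumed toy data.
      import numpy as np

      rng = np.random.default_rng(0)
      X = rng.normal(size=(100, 3))           # input patterns
      w_true = np.array([0.5, -1.0, 2.0])
      t = X @ w_true                          # target outputs

      w = np.zeros(3)                         # connection weights
      eta = 0.05                              # learning rate
      for epoch in range(50):
          for x, target in zip(X, t):
              y = w @ x                       # unit output
              w += eta * (target - y) * x     # delta rule: dw = eta * (t - y) * x
      print("learned weights:", np.round(w, 3))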

  2. Metabolic learning and memory formation by the brain influence systemic metabolic homeostasis.

    Science.gov (United States)

    Zhang, Yumin; Liu, Gang; Yan, Jingqi; Zhang, Yalin; Li, Bo; Cai, Dongsheng

    2015-04-07

    Metabolic homeostasis is regulated by the brain, but whether this regulation involves learning and memory of metabolic information remains unexplored. Here we use a calorie-based, taste-independent learning/memory paradigm to show that Drosophila form metabolic memories that help in balancing food choice with caloric intake; however, this metabolic learning or memory is lost under chronic high-calorie feeding. We show that loss of individual learning/memory-regulating genes causes a metabolic learning defect, leading to elevated trehalose and lipid levels. Importantly, this function of metabolic learning requires not only the mushroom body but also the hypothalamus-like pars intercerebralis, while NF-κB activation in the pars intercerebralis mimics chronic overnutrition in that it causes metabolic learning impairment and disorders. Finally, we evaluate this concept of metabolic learning/memory in mice, suggesting that the hypothalamus is involved in a form of nutritional learning and memory, which is critical for determining resistance or susceptibility to obesity. In conclusion, our data indicate that the brain, and potentially the hypothalamus, direct metabolic learning and the formation of memories, which contribute to the control of systemic metabolic homeostasis.

  3. Reinforcement learning solution for HJB equation arising in constrained optimal control problem.

    Science.gov (United States)

    Luo, Biao; Wu, Huai-Ning; Huang, Tingwen; Liu, Derong

    2015-11-01

    The constrained optimal control problem depends on the solution of the complicated Hamilton-Jacobi-Bellman equation (HJBE). In this paper, a data-based off-policy reinforcement learning (RL) method is proposed, which learns the solution of the HJBE and the optimal control policy from real system data. One important feature of the off-policy RL is that its policy evaluation can be realized with data generated by other behavior policies, not necessarily the target policy, which solves the insufficient exploration problem. The convergence of the off-policy RL is proved by demonstrating its equivalence to the successive approximation approach. Its implementation procedure is based on the actor-critic neural networks structure, where the function approximation is conducted with linearly independent basis functions. Subsequently, the convergence of the implementation procedure with function approximation is also proved. Finally, its effectiveness is verified through computer simulations. Copyright © 2015 Elsevier Ltd. All rights reserved.
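
    For reference, the sketch below implements the model-based successive approximation (policy iteration) that the off-policy RL scheme is shown to be equivalent to, in the simpler unconstrained linear-quadratic case; the system matrices are assumptions, and the paper's data-based, constrained HJB solution is not reproduced.

      # Successive approximation (policy iteration) for an assumed LQR problem.
      import numpy as np
      from scipy.linalg import solve_continuous_lyapunov

      A = np.array([[0.0, 1.0],
                    [-1.0, -2.0]])            # assumed stable open-loop system
      B = np.array([[0.0],
                    [1.0]])
      Q = np.eye(2)
      R = np.array([[1.0]])

      K = np.zeros((1, 2))                    # initial stabilising policy
      for i in range(10):
          Ac = A - B @ K
          # Policy evaluation: Ac^T P + P Ac + Q + K^T R K = 0
          P = solve_continuous_lyapunov(Ac.T, -(Q + K.T @ R @ K))
          # Policy improvement: K <- R^{-1} B^T P
          K_new = np.linalg.solve(R, B.T @ P)
          if np.allclose(K_new, K, atol=1e-9):
              break
          K = K_new
      print("converged gain K =", K)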

  4. Multichannel sound reinforcement systems at work in a learning environment

    Science.gov (United States)

    Malek, John; Campbell, Colin

    2003-04-01

    Many people have experienced the entertaining benefits of a surround sound system, either in their own home or in a movie theater, but another application exists for multichannel sound that has for the most part gone unused. This is the application of multichannel sound systems to the learning environment. By incorporating a 7.1 surround processor and a touch panel interface programmable control system, the main lecture hall at the University of Michigan Taubman College of Architecture and Urban Planning has been converted from an ordinary lecture hall to a working audiovisual laboratory. The multichannel sound system is used in a wide variety of experiments, including exposure to sounds to test listeners' aural perception of the tonal characteristics of varying pitch, reverberation, speech transmission index, and sound-pressure level. The touch panel's custom interface allows a variety of user groups to control different parts of the AV system and provides preset capability that allows for numerous system configurations.

  5. Modelling, Simulation, Animation, and Real-Time Control (Mosart) for a Class of Electromechanical Systems: A System-Theoretic Approach

    Science.gov (United States)

    Rodriguez, Armando A.; Metzger, Richard P.; Cifdaloz, Oguzhan; Dhirasakdanon, Thanate; Welfert, Bruno

    2004-01-01

    This paper describes an interactive modelling, simulation, animation, and real-time control (MoSART) environment for a class of 'cart-pendulum' electromechanical systems that may be used to enhance learning within differential equations and linear algebra classes. The environment is useful for conveying fundamental mathematical/systems concepts…

  6. Integral reinforcement learning for continuous-time input-affine nonlinear systems with simultaneous invariant explorations.

    Science.gov (United States)

    Lee, Jae Young; Park, Jin Bae; Choi, Yoon Ho

    2015-05-01

    This paper focuses on a class of reinforcement learning (RL) algorithms, named integral RL (I-RL), that solve continuous-time (CT) nonlinear optimal control problems with input-affine system dynamics. First, we extend the concepts of exploration, integral temporal difference, and invariant admissibility to the target CT nonlinear system that is governed by a control policy plus a probing signal called an exploration. Then, we show input-to-state stability (ISS) and invariant admissibility of the closed-loop systems with the policies generated by the integral policy iteration (I-PI) or invariantly admissible PI (IA-PI) methods. Based on these, three online I-RL algorithms named explorized I-PI and integral Q-learning I, II are proposed, all of which generate the same convergent sequences as I-PI and IA-PI under the required excitation condition on the exploration. All the proposed methods are partially or completely model free, and can simultaneously explore the state space in a stable manner during the online learning processes. ISS, invariant admissibility, and convergence properties of the proposed methods are also investigated, and in relation to these, we show the design principles of the exploration for safe learning. Neural-network-based implementation methods for the proposed schemes are also presented in this paper. Finally, several numerical simulations are carried out to verify the effectiveness of the proposed methods.

  7. Exploring Learner Autonomy: Language Learning Locus of Control in Multilinguals

    Science.gov (United States)

    Peek, Ron

    2016-01-01

    By using data from an online language learning beliefs survey (n = 841), defining language learning experience in terms of participants' multilingualism, and using a domain-specific language learning locus of control (LLLOC) instrument, this article examines whether more experienced language learners can also be seen as more autonomous language…

  8. Promoting system-level learning from project-level lessons

    Energy Technology Data Exchange (ETDEWEB)

    Jong, Amos A. de, E-mail: amosdejong@gmail.com [Innovation Management, Utrecht (Netherlands); Runhaar, Hens A.C., E-mail: h.a.c.runhaar@uu.nl [Section of Environmental Governance, Utrecht University, Utrecht (Netherlands); Runhaar, Piety R., E-mail: piety.runhaar@wur.nl [Organisational Psychology and Human Resource Development, University of Twente, Enschede (Netherlands); Kolhoff, Arend J., E-mail: Akolhoff@eia.nl [The Netherlands Commission for Environmental Assessment, Utrecht (Netherlands); Driessen, Peter P.J., E-mail: p.driessen@geo.uu.nl [Department of Innovation and Environment Sciences, Utrecht University, Utrecht (Netherlands)

    2012-02-15

    A growing number of low and middle income nations (LMCs) have adopted some sort of system for environmental impact assessment (EIA). However, generally many of these EIA systems are characterised by a low performance in terms of timely information dissemination, monitoring and enforcement after licencing. Donor actors (such as the World Bank) have attempted to contribute to a higher performance of EIA systems in LMCs by intervening at two levels: the project level (e.g. by providing scoping advice or EIS quality review) and the system level (e.g. by advising on EIA legislation or by capacity building). The aims of these interventions are environmental protection in concrete cases and enforcing the institutionalisation of environmental protection, respectively. Learning by actors involved is an important condition for realising these aims. A relatively underexplored form of learning concerns learning at EIA system-level via project level donor interventions. This 'indirect' learning potentially results in system changes that better fit the specific context(s) and hence contribute to higher performances. Our exploratory research in Ghana and the Maldives shows that thus far, 'indirect' learning only occurs incidentally and that donors play a modest role in promoting it. Barriers to indirect learning are related to the institutional context rather than to individual characteristics. Moreover, 'indirect' learning seems to flourish best in large projects where donors achieved a position of influence that they can use to evoke reflection upon system malfunctions. In order to enhance learning at all levels donors should thereby present the outcomes of the intervention elaborately (i.e. discuss the outcomes with a large audience), include practical suggestions about post-EIS activities such as monitoring procedures and enforcement options and stimulate the use of their advisory reports to generate organisational memory and ensure a better

  9. Promoting system-level learning from project-level lessons

    International Nuclear Information System (INIS)

    Jong, Amos A. de; Runhaar, Hens A.C.; Runhaar, Piety R.; Kolhoff, Arend J.; Driessen, Peter P.J.

    2012-01-01

    A growing number of low and middle income nations (LMCs) have adopted some sort of system for environmental impact assessment (EIA). However, generally many of these EIA systems are characterised by a low performance in terms of timely information dissemination, monitoring and enforcement after licencing. Donor actors (such as the World Bank) have attempted to contribute to a higher performance of EIA systems in LMCs by intervening at two levels: the project level (e.g. by providing scoping advice or EIS quality review) and the system level (e.g. by advising on EIA legislation or by capacity building). The aims of these interventions are environmental protection in concrete cases and enforcing the institutionalisation of environmental protection, respectively. Learning by actors involved is an important condition for realising these aims. A relatively underexplored form of learning concerns learning at EIA system-level via project level donor interventions. This ‘indirect’ learning potentially results in system changes that better fit the specific context(s) and hence contribute to higher performances. Our exploratory research in Ghana and the Maldives shows that thus far, ‘indirect’ learning only occurs incidentally and that donors play a modest role in promoting it. Barriers to indirect learning are related to the institutional context rather than to individual characteristics. Moreover, ‘indirect’ learning seems to flourish best in large projects where donors achieved a position of influence that they can use to evoke reflection upon system malfunctions. In order to enhance learning at all levels donors should thereby present the outcomes of the intervention elaborately (i.e. discuss the outcomes with a large audience), include practical suggestions about post-EIS activities such as monitoring procedures and enforcement options and stimulate the use of their advisory reports to generate organisational memory and ensure a better information

  10. The roles of the olivocerebellar pathway in motor learning and motor control. A consensus paper

    Science.gov (United States)

    Lang, Eric J.; Apps, Richard; Bengtsson, Fredrik; Cerminara, Nadia L.; De Zeeuw, Chris I.; Ebner, Timothy J.; Heck, Detlef H.; Jaeger, Dieter; Jörntell, Henrik; Kawato, Mitsuo; Otis, Thomas S.; Ozyildirim, Ozgecan; Popa, Laurentiu S.; Reeves, Alexander M.B.; Schweighofer, Nicolas; Sugihara, Izumi; Xiao, Jianqiang

    2016-01-01

    For many decades the predominant view in the cerebellar field has been that the olivocerebellar system's primary function is to induce plasticity in the cerebellar cortex, specifically, at the parallel fiber-Purkinje cell synapse. However, it has also long been proposed that the olivocerebellar system participates directly in motor control by helping to shape ongoing motor commands being issued by the cerebellum. Evidence consistent with both hypotheses exists; however, they are often investigated as mutually exclusive alternatives. In contrast, here we take the perspective that the olivocerebellar system can contribute to both the motor learning and motor control functions of the cerebellum, and might also play a role in development. We then consider the potential problems and benefits of its having multiple functions. Moreover, we discuss how its distinctive characteristics (e.g., low firing rates, synchronization, variable complex spike waveform) make it more or less suitable for one or the other of these functions, and why its having a dual role makes sense from an evolutionary perspective. We did not attempt to reach a consensus on the specific role(s) the olivocerebellar system plays in different types of movements, as that will ultimately be determined experimentally; however, collectively, the various contributions highlight the flexibility of the olivocerebellar system, and thereby suggest it has the potential to act in both the motor learning and motor control functions of the cerebellum. PMID:27193702

  11. Learning and Control Model of the Arm for Loading

    Science.gov (United States)

    Kim, Kyoungsik; Kambara, Hiroyuki; Shin, Duk; Koike, Yasuharu

    We propose a learning and control model of the arm for a loading task in which an object is loaded onto one hand with the other hand, in the sagittal plane. Postural control during object interactions offers important insights for motor control theories in terms of how humans handle changes in dynamics and use predictive information and sensory feedback. For the learning and control model, we coupled a feedback-error-learning scheme with an Actor-Critic method used as a feedback controller. To overcome sensory delays, a feedforward dynamics model (FDM) was used in the sensory feedback path. We tested the proposed model in simulation using a two-joint arm with six muscles, each with time delays in muscle force generation. By applying the proposed model to the loading task, we showed that motor commands started increasing before the object was loaded in order to stabilize arm posture. We also found that the FDM contributes to the stabilization by predicting how the hand state changes based on the context of the object and the efferent signals. For comparison with other computational models, we present the simulation results of a minimum-variance model.
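
    The feedback-error-learning scheme mentioned above can be sketched as follows: a feedforward controller is trained online using the feedback controller's output as its error signal, so that feedback effort shrinks as the inverse model is learned. The one-degree-of-freedom plant and gains below are assumptions, not the paper's six-muscle arm with delays.

      # Hedged feedback-error-learning sketch on an assumed one-DOF plant.
      import numpy as np

      dt = 0.01
      m = 1.0                                 # toy one-DOF "arm" inertia
      Kp, Kd = 100.0, 20.0                    # feedback (PD) gains (assumed)
      w = np.zeros(3)                         # feedforward weights on [q_ref'', q_ref', q_ref]
      eta = 0.01                              # learning rate (assumed)

      q, dq = 0.0, 0.0
      t = np.arange(0, 5, dt)
      q_ref = np.sin(t); dq_ref = np.cos(t); ddq_ref = -np.sin(t)

      for k in range(len(t)):
          phi = np.array([ddq_ref[k], dq_ref[k], q_ref[k]])
          u_ff = w @ phi                              # feedforward (inverse model)
          u_fb = Kp * (q_ref[k] - q) + Kd * (dq_ref[k] - dq)
          u = u_ff + u_fb
          w += eta * u_fb * phi                       # feedback error trains the model
          ddq = u / m                                 # plant dynamics
          dq += ddq * dt
          q += dq * dt
      print("learned feedforward weights:", np.round(w, 2), "(ideal ~ [m, 0, 0])")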

  12. Towards a lessons learned system for critical software

    International Nuclear Information System (INIS)

    Andrade, J.; Ares, J.; Garcia, R.; Pazos, J.; Rodriguez, S.; Rodriguez-Paton, A.; Silva, A.

    2007-01-01

    Failure can be a major driver for the advance of any engineering discipline and Software Engineering is no exception. But failures are useful only if lessons are learned from them. In this article we aim to make a strong defence of, and set the requirements for, lessons learned systems for safety-critical software. We also present a prototype lessons learned system that includes many of the features discussed here. We emphasize that, apart from individual organizations, lessons learned systems should target industrial sectors and even the Software Engineering community. We would like to encourage the Software Engineering community to use this kind of system as another tool in the toolbox, one that complements or enhances other approaches such as standards and checklists.

  13. Towards a lessons learned system for critical software

    Energy Technology Data Exchange (ETDEWEB)

    Andrade, J. [University of A Coruna. Campus de Elvina, s/n. 15071, A Coruna (Spain)]. E-mail: jag@udc.es; Ares, J. [University of A Coruna. Campus de Elvina, s/n. 15071, A Coruna (Spain)]. E-mail: juanar@udc.es; Garcia, R. [University of A Coruna. Campus de Elvina, s/n. 15071, A Coruna (Spain)]. E-mail: rafael@udc.es; Pazos, J. [Technical University of Madrid. Campus de Montegancedo, s/n. 28660, Boadilla del Monte, Madrid (Spain)]. E-mail: jpazos@fi.upm.es; Rodriguez, S. [University of A Coruna. Campus de Elvina, s/n. 15071, A Coruna (Spain)]. E-mail: santi@udc.es; Rodriguez-Paton, A. [Technical University of Madrid. Campus de Montegancedo, s/n. 28660, Boadilla del Monte, Madrid (Spain)]. E-mail: arpaton@fi.upm.es; Silva, A. [Technical University of Madrid. Campus de Montegancedo, s/n. 28660, Boadilla del Monte, Madrid (Spain)]. E-mail: asilva@fi.upm.es

    2007-07-15

    Failure can be a major driver for the advance of any engineering discipline and Software Engineering is no exception. But failures are useful only if lessons are learned from them. In this article we aim to make a strong defence of, and set the requirements for, lessons learned systems for safety-critical software. We also present a prototype lessons learned system that includes many of the features discussed here. We emphasize that, apart from individual organizations, lessons learned systems should target industrial sectors and even the Software Engineering community. We would like to encourage the Software Engineering community to use this kind of system as another tool in the toolbox, one that complements or enhances other approaches such as standards and checklists.

  14. Biomimetic approach to tacit learning based on compound control.

    Science.gov (United States)

    Shimoda, Shingo; Kimura, Hidenori

    2010-02-01

    The remarkable capability of living organisms to adapt to unknown environments is due to learning mechanisms that are totally different from the current artificial machine-learning paradigm. Computational media composed of identical elements that have simple activity rules play a major role in biological control, such as the activities of neurons in brains and the molecular interactions in intracellular control. As a result of the integration of the individual activities of the computational media, new behavioral patterns emerge to adapt to changing environments. We previously implemented this feature of biological control in a form of machine learning and succeeded in realizing bipedal walking without a robot model or trajectory planning. Despite the success of bipedal walking, it was a puzzle as to why the individual activities of the computational media could achieve the global behavior. In this paper, we answer this question by taking a statistical approach that connects the individual activities of the computational media to global network behaviors. We show that the individual activities can generate optimized behaviors from a particular global viewpoint, i.e., autonomous rhythm generation and learning of balanced postures, without using global performance indices.

  15. UAV Controller Based on Adaptive Neuro-Fuzzy Inference System and PID

    Directory of Open Access Journals (Sweden)

    Ali Moltajaei Farid

    2013-01-01

    Full Text Available ANFIS combines a neural network with a fuzzy system, resulting in a hybrid neuro-fuzzy system capable of reasoning and learning in an uncertain and imprecise environment. In this paper, an adaptive neuro-fuzzy inference system (ANFIS) is employed to control an unmanned aerial vehicle (UAV). First, the autopilot structure is defined, and then the ANFIS controller is applied to control the UAV's lateral position. The results of the ANFIS and PID lateral controllers are compared, showing that the two controllers yield similar results. The ANFIS controller is capable of adaptation under nonlinear conditions, while the PID controller has to be retuned to preserve proper control in some conditions. The simulation results are generated in Matlab using the Aerosim Aeronautical Simulation Block Set, which provides a complete set of tools for the development of six degree-of-freedom models. A nonlinear Aerosonde unmanned aerial vehicle model with the ANFIS controller is simulated to verify the capability of the system. Moreover, the results are validated with the FlightGear flight simulator.

  16. Java simulations of embedded control systems.

    Science.gov (United States)

    Farias, Gonzalo; Cervin, Anton; Arzén, Karl-Erik; Dormido, Sebastián; Esquembre, Francisco

    2010-01-01

    This paper introduces a new Open Source Java library suited for the simulation of embedded control systems. The library is based on the ideas and architecture of TrueTime, a Matlab toolbox devoted to this topic, and allows Java programmers to simulate the performance of control processes that run in a real-time environment. Such simulations can considerably improve the learning and design of multitasking real-time systems. The choice of Java considerably increases the usability of our library, because many educators already program in this language, and also because the library can be easily used by Easy Java Simulations (EJS), a popular modeling and authoring tool that is increasingly used in the field of Control Education. EJS allows instructors, students, and researchers with less programming experience to create advanced interactive simulations in Java. The paper describes the ideas, implementation, and sample use of the new library both for pure Java programmers and for EJS users. The JTT library and some examples are available online at http://lab.dia.uned.es/jtt.

  17. Nonlinear Control of an Active Magnetic Bearing System Achieved Using a Fuzzy Control with Radial Basis Function Neural Network

    Directory of Open Access Journals (Sweden)

    Seng-Chi Chen

    2014-01-01

    Full Text Available Studies on active magnetic bearing (AMB) systems are increasing in popularity and practical applications. Magnetic bearings cause less noise, friction, and vibration than conventional mechanical bearings; however, the control of AMB systems requires further investigation. The magnetic force has a highly nonlinear relation to the control current and the air gap. This paper proposes an intelligent control method for positioning an AMB system that uses a neural fuzzy controller (NFC). The mathematical model of the AMB system is obtained by identification following the collection of information from the system. A fuzzy logic controller (FLC), the parameters of which are adjusted using a radial basis function neural network (RBFNN), is applied to the unbalanced vibration in the AMB system. The AMB system exhibited satisfactory control performance, with low overshoot, and produced improved transient and steady-state responses under various operating conditions. The NFC has been verified on a prototype AMB system. The proposed controller can be feasibly applied to AMB systems exposed to various external disturbances, demonstrating the effectiveness of the NFC with its self-learning and self-improving capabilities.

  18. 3D Game-Based Learning System for Improving Learning Achievement in Software Engineering Curriculum

    Science.gov (United States)

    Su, Chung-Ho; Cheng, Ching-Hsue

    2013-01-01

    The advancement of game-based learning has encouraged many related studies, such that students can better learn the curriculum through 3-dimensional virtual reality. To enhance software engineering learning, this paper develops a 3D game-based learning system to assist teaching and assess the students' motivation, satisfaction and learning achievement. A…

  19. Learning to Support Learning Together: An Experience with the Soft Systems Methodology

    Science.gov (United States)

    Sanchez, Adolfo; Mejia, Andres

    2008-01-01

    An action research approach called soft systems methodology (SSM) was used to foster organisational learning in a school regarding the role of the learning support department within the school and its relation with the normal teaching-learning activities. From an initial situation of lack of coordination as well as mutual misunderstanding and…

  20. Leadership Perspectives on Operationalizing the Learning Health Care System in an Integrated Delivery System.

    Science.gov (United States)

    Psek, Wayne; Davis, F Daniel; Gerrity, Gloria; Stametz, Rebecca; Bailey-Davis, Lisa; Henninger, Debra; Sellers, Dorothy; Darer, Jonathan

    2016-01-01

    Healthcare leaders need operational strategies that support organizational learning for continued improvement and value generation. The learning health system (LHS) model may provide leaders with such strategies; however, little is known about leaders' perspectives on the value and application of system-wide operationalization of the LHS model. The objective of this project was to solicit and analyze senior health system leaders' perspectives on the LHS and learning activities in an integrated delivery system. A series of interviews were conducted with 41 system leaders from a broad range of clinical and administrative areas across an integrated delivery system. Leaders' responses were categorized into themes. Ten major themes emerged from our conversations with leaders. While leaders generally expressed support for the concept of the LHS and enhanced system-wide learning, their concerns and suggestions for operationalization were strongly aligned with their functional area and strategic goals. Our findings suggest that leaders tend to adopt a very pragmatic approach to learning. Leaders expressed a dichotomy between the operational imperative to execute operational objectives efficiently and the need for rigorous evaluation. Alignment of learning activities with system-wide strategic and operational priorities is important to gain leadership support and resources. Practical approaches to addressing opportunities and challenges identified in the themes are discussed. Continuous learning is an ongoing, multi-disciplinary function of a health care delivery system. Findings from this and other research may be used to inform and prioritize system-wide learning objectives and strategies which support reliable, high value care delivery.

  1. Development Of Electronic Digestive System Module For Effective Teaching And Learning

    Directory of Open Access Journals (Sweden)

    Liman Aminu Doko

    2017-07-01

    Full Text Available The digestive system, and hence the digestion of food, is usually one of the topics taught at the secondary and tertiary levels of education. Often this topic is taught using teaching aids in the form of diagrams or charts drawn on plain paper. The inanimate nature of these teaching aids makes learning less interesting and comprehension difficult. This paper presents the design and construction of a semi-animated digestive system module with a remote control that visualizes the movement and process of food digestion in the body. Basically, the system consists of carved wooden digestive organs with light emitting diodes (LEDs) carefully fixed along the path of digestion. A remote control is also built to aid remote access to the module. The LEDs blink in sequence, indicating swallowing from the mouth down to the anus and illustrating the process of digestion, which also involves the production of enzymes. A comparison of the improved teaching aid with conventional types showed that it aroused student interest during the teaching and learning process. It also reduced excessive abstract explanation, thus making teaching more efficient.

  2. PERSO: Towards an Adaptive e-Learning System

    Science.gov (United States)

    Chorfi, Henda; Jemni, Mohamed

    2004-01-01

    In today's information technology society, members are increasingly required to stay up to date on new technologies, particularly computing, regardless of their social background. In this context, our aim is to design and develop an adaptive hypermedia e-learning system, called PERSO (PERSOnalizing e-learning system), where learners…

  3. Divulging Personal Information within Learning Analytics Systems

    Science.gov (United States)

    Ifenthaler, Dirk; Schumacher, Clara

    2015-01-01

    The purpose of this study was to investigate if students are prepared to release any personal data in order to inform learning analytics systems. Besides the well-documented benefits of learning analytics, serious concerns and challenges are associated with the application of these data driven systems. Most notably, empirical evidence regarding…

  4. Measuring strategic control in implicit learning: how and why?

    OpenAIRE

    Norman, Elisabeth

    2015-01-01

    Several methods have been developed for measuring the extent to which implicitly learned knowledge can be applied in a strategic, flexible manner. Examples include generation exclusion tasks in Serial Reaction Time (SRT) learning (Goschke, 1998; Destrebecqz and Cleeremans, 2001) and 2-grammar classification tasks in Artificial Grammar Learning (AGL; Dienes et al., 1995; Norman et al., 2011). Strategic control has traditionally been used as a criterion for determining whether acquired knowledg...

  5. Review of Recommender Systems Algorithms Utilized in Social Networks based e-Learning Systems & Neutrosophic System

    Directory of Open Access Journals (Sweden)

    A. A. Salama

    2015-03-01

    Full Text Available In this paper, we present a review of different recommender system algorithms that are utilized in social-network-based e-Learning systems. Future research will include our proposed e-Learning system that utilizes a recommender system and a social network. Since the world is full of indeterminacy, the neutrosophics have found their place in contemporary research. The fundamental concepts of the neutrosophic set were introduced by Smarandache in [21, 22, 23] and Salama et al. in [24-66]. The purpose of this paper is to utilize a neutrosophic set to analyze social network data collected through learning activities.

  6. The Role of Corticostriatal Systems in Speech Category Learning.

    Science.gov (United States)

    Yi, Han-Gyol; Maddox, W Todd; Mumford, Jeanette A; Chandrasekaran, Bharath

    2016-04-01

    One of the most difficult category learning problems for humans is learning nonnative speech categories. While feedback-based category training can enhance speech learning, the mechanisms underlying these benefits are unclear. In this functional magnetic resonance imaging study, we investigated neural and computational mechanisms underlying feedback-dependent speech category learning in adults. Positive feedback activated a large corticostriatal network including the dorsolateral prefrontal cortex, inferior parietal lobule, middle temporal gyrus, caudate, putamen, and the ventral striatum. Successful learning was contingent upon the activity of domain-general category learning systems: the fast-learning reflective system, involving the dorsolateral prefrontal cortex that develops and tests explicit rules based on the feedback content, and the slow-learning reflexive system, involving the putamen in which the stimuli are implicitly associated with category responses based on the reward value in feedback. Computational modeling of response strategies revealed significant use of reflective strategies early in training and greater use of reflexive strategies later in training. Reflexive strategy use was associated with increased activation in the putamen. Our results demonstrate a critical role for the reflexive corticostriatal learning system as a function of response strategy and proficiency during speech category learning. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  7. Parent Perception of Two Eye-Gaze Control Technology Systems in Young Children with Cerebral Palsy: Pilot Study.

    Science.gov (United States)

    Karlsson, Petra; Wallen, Margaret

    2017-01-01

    Eye-gaze control technology enables people with significant physical disability to access computers for communication, play, learning and environmental control. This pilot study used a multiple case study design with repeated baseline assessment and parents' evaluations to compare two eye-gaze control technology systems to identify any differences in factors such as ease of use and impact of the systems for their young children. Five children, aged 3 to 5 years, with dyskinetic cerebral palsy, and their families participated. Overall, families were satisfied with both the Tobii PCEye Go and myGaze® eye tracker, found them easy to position and use, and children learned to operate them quickly. This technology provides young children with important opportunities for learning, play, leisure, and developing communication.

  8. Evaluating a learning management system for blended learning in Greek higher education.

    Science.gov (United States)

    Kabassi, Katerina; Dragonas, Ioannis; Ntouzevits, Alexandra; Pomonis, Tzanetos; Papastathopoulos, Giorgos; Vozaitis, Yiannis

    2016-01-01

    This paper focuses on the usage of a learning management system in an educational institution for higher education in Greece. More specifically, the paper examines the literature on the use of different learning management systems for blended learning in higher education in Greek Universities and Technological Educational Institutions and reviews the advantages and disadvantages. Moreover, the paper describes the usage of the Open eClass platform in a Technological Educational Institution, the TEI of Ionian Islands, and the effort to improve the educational material by organizing it and adding video-lectures. The platform has been evaluated by the students of the TEI of Ionian Islands along six dimensions: namely the student, teacher, course, technology, system design, and environmental dimensions. The results of this evaluation revealed that Open eClass has been successfully used for blended learning in the TEI of Ionian Islands. Despite the instructors' initial worries that students would stop attending their courses if the educational material, and especially video lectures, was made available online, blended learning did not reduce the physical presence of the students in the classroom. Instead, it was used only as a supplementary tool that helps students study further, watch missed lectures, etc.

  9. Adaptive Control of Nonlinear Discrete-Time Systems by Using OS-ELM Neural Networks

    Directory of Open Access Journals (Sweden)

    Xiao-Li Li

    2014-01-01

    Full Text Available As a kind of novel feedforward neural network with a single hidden layer, ELM (extreme learning machine) neural networks are studied for the identification and control of nonlinear dynamic systems. The simple structure and fast convergence of ELM can be shown clearly. In this paper, we are interested in adaptive control of nonlinear dynamic plants by using OS-ELM (online sequential extreme learning machine) neural networks. Based on data-scope division, the problem that the training process of an ELM neural network is sensitive to the initial training data is also solved. According to the output range of the controlled plant, the data corresponding to this range will be used to initialize the ELM. Furthermore, due to the drawback of conventional adaptive control, when the OS-ELM neural network is used for adaptive control of a system with jumping parameters, the topological structure of the neural network can be adjusted dynamically by using a multiple model switching strategy, and MMAC (multiple model adaptive control) will be used to improve the control performance. Simulation results are included to complement the theoretical results.
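
    A minimal sketch of the online sequential ELM update described above, assuming random fixed hidden-layer weights and a recursive least-squares update of the output weights; the class and variable names are illustrative and not taken from the paper:

      import numpy as np

      # OS-ELM sketch: random hidden layer, output weights updated recursively.
      class OSELM:
          def __init__(self, n_in, n_hidden, n_out, rng=np.random.default_rng(0)):
              self.W = rng.standard_normal((n_in, n_hidden))   # fixed random input weights
              self.b = rng.standard_normal(n_hidden)           # fixed random biases
              self.beta = np.zeros((n_hidden, n_out))          # learned output weights
              self.P = None                                    # inverse correlation matrix

          def _hidden(self, X):
              return np.tanh(X @ self.W + self.b)

          def init_batch(self, X0, T0):
              # Initialization on a first chunk of data (e.g. the data whose output
              # range matches the controlled plant, as described above).
              H0 = self._hidden(X0)
              self.P = np.linalg.inv(H0.T @ H0 + 1e-6 * np.eye(H0.shape[1]))
              self.beta = self.P @ H0.T @ T0

          def update(self, X, T):
              # Recursive least-squares update for each new chunk of data.
              H = self._hidden(X)
              S = np.linalg.inv(np.eye(H.shape[0]) + H @ self.P @ H.T)
              self.P = self.P - self.P @ H.T @ S @ H @ self.P
              self.beta = self.beta + self.P @ H.T @ (T - H @ self.beta)

          def predict(self, X):
              return self._hidden(X) @ self.beta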

  10. Learning Management Systems and E-Learning within Cyprus Universities

    Directory of Open Access Journals (Sweden)

    Amirkhanpour, Monaliz

    2011-01-01

    Full Text Available This paper presents an extensive research study and results on the use of existing open-source Learning Management Systems, or LMS, within the public and private universities of Cyprus. The most significant objective of this research is the identification of the different types of E-Learning, i.e. Computer-Based Training (CBT), Technology-Based Learning (TBL), and Web-Based Training (WBT), within Cyprus universities. The paper identifies the benefits and limitations of the main learning approaches used in higher educational institutions, i.e. synchronous and asynchronous learning, investigates the open-source LMS used in the Cypriot universities and compares their features with regards to students’ preferences for a collaborative E-Learning environment. The required data for this research study were collected from undergraduate and graduate students, alumni, faculty members, and IT professionals who currently work and/or study at the public and private universities of Cyprus. The most noteworthy recommendation of this study is the clear indication that most of the undergraduate students that extensively use the specific E-Learning platform of their university do not have a clear picture of the differences between an LMS and a VLE. This gap has to be gradually diminished in order to make optimum use of the different features offered by the specific E-Learning platform.

  11. Multidimensional Learner Model In Intelligent Learning System

    Science.gov (United States)

    Deliyska, B.; Rozeva, A.

    2009-11-01

    The learner model in an intelligent learning system (ILS) has to ensure the personalization (individualization) and the adaptability of e-learning in an online learner-centered environment. An ILS is a distributed e-learning system whose modules can be independent and located in different nodes (servers) on the Web. This kind of e-learning is achieved through the resources of the Semantic Web and is designed and developed around a course, group of courses or specialty. An essential part of an ILS is the learner model database, which contains structured data about the learner profile and temporal status in the learning process of one or more courses. In the paper, the position of the learner model in the ILS is considered and a relational database is designed from the learner's domain ontology. A multidimensional modeling agent for the source database is designed and the resulting learner data cube is presented. The agent's modules are proposed with corresponding algorithms and procedures. Guidelines for multidimensional (OLAP) analysis of the resulting learner cube for designing a dynamic learning strategy are highlighted.
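
    As a rough illustration of the learner data cube referred to above, the sketch below aggregates a hypothetical learner activity log along course, topic and week dimensions with pandas; all column names and measures are illustrative assumptions, not the paper's schema:

      import pandas as pd

      # Hypothetical flat activity log exported from the learner model database.
      log = pd.DataFrame({
          "learner": ["a01", "a01", "a02", "a02", "a03"],
          "course":  ["math", "math", "math", "bio", "bio"],
          "topic":   ["algebra", "calculus", "algebra", "cells", "cells"],
          "week":    [1, 2, 1, 1, 2],
          "minutes": [30, 45, 20, 60, 25],
          "score":   [0.7, 0.8, 0.5, 0.9, 0.6],
      })

      # A simple "data cube": measures aggregated over a dimension hierarchy.
      cube = pd.pivot_table(
          log,
          values=["minutes", "score"],
          index=["course", "topic"],            # slice/dice dimensions
          columns="week",
          aggfunc={"minutes": "sum", "score": "mean"},
      )
      print(cube)

      # OLAP-style roll-up along the course dimension.
      print(log.groupby("course")["minutes"].sum())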

  12. Adaptive Landmark-Based Navigation System Using Learning Techniques

    DEFF Research Database (Denmark)

    Zeidan, Bassel; Dasgupta, Sakyasingha; Wörgötter, Florentin

    2014-01-01

    The goal-directed navigational ability of animals is an essential prerequisite for them to survive. They can learn to navigate to a distal goal in a complex environment. During this long-distance navigation, they exploit environmental features, like landmarks, to guide them towards their goal. Inspired by this, we develop an adaptive landmark-based navigation system based on sequential reinforcement learning. In addition, correlation-based learning is also integrated into the system to improve learning performance. The proposed system has been applied to simulated simple wheeled and more complex hexapod robots. As a result, it allows the robots to successfully learn to navigate to distal goals in complex environments.

  13. Impaired learning of punishments in Parkinson's disease with and without impulse control disorder.

    Science.gov (United States)

    Leplow, Bernd; Sepke, Maria; Schönfeld, Robby; Pohl, Johannes; Oelsner, Henriette; Latzko, Lea; Ebersbach, Georg

    2017-02-01

    To document specific learning mechanisms in patients with Parkinson's disease (PD) with and without impulse control disorder (ICD). Thirty-two PD patients receiving dopamine replacement therapy (DRT) were investigated. Sixteen were diagnosed with ICD (ICD+), and 16 PD patients matched for levodopa equivalence dosage, DRT duration and severity of disease did not show impulsive behavior (non-ICD). Short-term learning of inhibitory control was assessed by an experimental procedure intended to mimic everyday life. In particular, correct inhibition had to be learned without reward (passive avoidance), and the failure to inhibit a response was punished (punishment learning). Results were compared to 16 healthy controls (HC) matched for age and sex. In ICD+ patients, within-session learning of non-rewarded inhibition was at chance levels. Whereas healthy controls rapidly developed behavioral inhibition, non-ICD patients were also significantly impaired compared to HC, but gradually developed some degree of control. Both patient groups showed significantly decreased learning if the failure to withhold a response was punished. PD patients receiving DRT show an impaired ability to acquire both punishment learning and passive avoidance learning, irrespective of whether or not ICD was developed. In ICD+ PD patients, behavioral inhibition is nearly absent. The results demonstrate that, by means of subtle learning paradigms, it is possible to identify PD-DRT patients who show subtle alterations of punishment learning. This may be a behavioral measure for the identification of PD patients who are prone to develop ICD if DRT is continued.

  14. Assessing the Value of E-Learning Systems

    Science.gov (United States)

    Levy, Yair

    2006-01-01

    "Assessing the Value of E-Learning Systems" provides an extensive literature review pulling theories from the field of information systems, psychology and cognitive sciences, distance and online learning, as well as marketing and decision sciences. This book provides empirical evidence for the power of measuring value in the context of e-learning…

  15. Multiple systems for motor skill learning.

    Science.gov (United States)

    Clark, Dav; Ivry, Richard B

    2010-07-01

    Motor learning is a ubiquitous feature of human competence. This review focuses on two particular classes of model tasks for studying skill acquisition. The serial reaction time (SRT) task is used to probe how people learn sequences of actions, while adaptation in the context of visuomotor or force field perturbations serves to illustrate how preexisting movements are recalibrated in novel environments. These tasks highlight important issues regarding the representational changes that occur during the course of motor learning. One important theme is that distinct mechanisms vary in their information processing costs during learning and performance. Fast learning processes may require few trials to produce large changes in performance but impose demands on cognitive resources. Slower processes are limited in their ability to integrate complex information but minimally demanding in terms of attention or processing resources. The representations derived from fast systems may be accessible to conscious processing and provide a relatively greater measure of flexibility, while the representations derived from slower systems are more inflexible and automatic in their behavior. In exploring these issues, we focus on how multiple neural systems may interact and compete during the acquisition and consolidation of new behaviors. Copyright © 2010 John Wiley & Sons, Ltd. This article is categorized under: Psychology > Motor Skill and Performance.

  16. A presentation system for just-in-time learning in radiology.

    Science.gov (United States)

    Kahn, Charles E; Santos, Amadeu; Thao, Cheng; Rock, Jayson J; Nagy, Paul G; Ehlers, Kevin C

    2007-03-01

    There is growing interest in bringing medical educational materials to the point of care. We sought to develop a system for just-in-time learning in radiology. A database of 34 learning modules was derived from previously published journal articles. Learning objectives were specified for each module, and multiple-choice test items were created. A web-based system, called TEMPO, was developed to allow radiologists to select and view the learning modules. Web services were used to exchange clinical context information between TEMPO and the simulated radiology work station. Preliminary evaluation was conducted using the System Usability Scale (SUS) questionnaire. TEMPO identified learning modules that were relevant to the age, sex, imaging modality, and body part or organ system of the patient being viewed by the radiologist on the simulated clinical work station. Users expressed a high degree of satisfaction with the system's design and user interface. TEMPO enables just-in-time learning in radiology, and can be extended to create a fully functional learning management system for point-of-care learning in radiology.

  17. Adaptive polymeric system for Hebbian type learning

    OpenAIRE

    2011-01-01

    Abstract We present the experimental realization of an adaptive polymeric system displaying a 'learning behaviour'. The system consists of a statistically organized network of memristive elements (memory-resistors) based on polyaniline. In such a network, the path followed by the current increments its conductivity, a property which makes the system able to mimic Hebbian-type learning and gives it application in hardware neural networks. After discussing the working principle of ...

  18. Adaptive E-learning System in Secondary Education

    Directory of Open Access Journals (Sweden)

    Sofija Tosheva

    2012-02-01

    Full Text Available In this paper we describe an adaptive web application, E-school, where students can adjust some features according to their preferences and learning style. This e-learning environment enables monitoring of students' progress, the total time students have spent in the system, their activity on the forums, and their overall achievements in lessons learned, tests performed and solutions to given projects. The personalized assistance that a teacher provides in a traditional classroom is not easy to implement. Students have regular contact with teachers using e-mail and conversation tools, so the teacher takes on a mentoring role for each student. The results of exploitation of the e-learning system show a positive impact on acquiring the material and an improvement in students' achievements.

  19. Deep learning and model predictive control for self-tuning mode-locked lasers

    Science.gov (United States)

    Baumeister, Thomas; Brunton, Steven L.; Nathan Kutz, J.

    2018-03-01

    Self-tuning optical systems are of growing importance in technological applications such as mode-locked fiber lasers. Such self-tuning paradigms require intelligent algorithms capable of inferring approximate models of the underlying physics and discovering appropriate control laws in order to maintain robust performance for a given objective. In this work, we demonstrate the first integration of a deep learning (DL) architecture with model predictive control (MPC) in order to self-tune a mode-locked fiber laser. Not only can our DL-MPC algorithmic architecture approximate the unknown fiber birefringence, it also builds a dynamical model of the laser and appropriate control law for maintaining robust, high-energy pulses despite a stochastically drifting birefringence. We demonstrate the effectiveness of this method on a fiber laser which is mode-locked by nonlinear polarization rotation. The method advocated can be broadly applied to a variety of optical systems that require robust controllers.
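
    The sketch below shows the generic shape of model predictive control driven by a learned dynamics model, in the spirit of (but not reproducing) the DL-MPC loop described above; the model and objective callables, the control dimension, and the random-shooting optimizer are illustrative assumptions:

      import numpy as np

      def mpc_step(model, objective, x, horizon=10, n_candidates=256, n_controls=4,
                   rng=np.random.default_rng(0)):
          # model(x, u) -> predicted next state (e.g. a trained neural-network surrogate)
          # objective(x) -> scalar score of a state (e.g. pulse energy / quality)
          best_u, best_score = None, -np.inf
          for _ in range(n_candidates):
              u_seq = rng.uniform(-1.0, 1.0, size=(horizon, n_controls))  # candidate controls
              x_pred, score = x, 0.0
              for u in u_seq:
                  x_pred = model(x_pred, u)     # roll the learned model forward
                  score += objective(x_pred)
              if score > best_score:
                  best_u, best_score = u_seq[0], score
          return best_u                         # apply only the first action, then re-plan

      # Toy usage with stand-in model and objective (purely illustrative):
      u0 = mpc_step(lambda x, u: 0.9 * x + 0.1 * u.sum(), lambda x: -abs(x - 1.0), x=0.0)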

  20. Intelligent e-Learning Systems: An Educational Paradigm Shift

    Directory of Open Access Journals (Sweden)

    Suman Bhattacharya

    2016-12-01

    Full Text Available Learning is the long process of transforming information and experience into knowledge, skills, attitudes and behaviors. To bridge the wide gap between the increasing demand for higher education and comparatively limited resources, more and more educational institutes are looking into instructional technology. The use of online resources not only reduces the cost of education but also meets the needs of society. Intelligent e-learning has become one of the important channels for reaching out to students across geographic boundaries. Besides this, the characteristics of e-learning have complicated the process of education and have brought challenges to both instructors and students. This paper focuses on different disciplines of intelligent e-learning, such as scaffolding-based e-learning, personalized e-learning, confidence-based e-learning, intelligent tutoring systems, etc., to illuminate the educational paradigm shift toward intelligent e-learning systems.

  1. Simulation of noisy dynamical system by Deep Learning

    Science.gov (United States)

    Yeo, Kyongmin

    2017-11-01

    Deep learning has attracted huge attention due to its powerful representation capability. However, most studies on deep learning have focused on visual analytics or language modeling, and the capability of deep learning for modeling dynamical systems is not well understood. In this study, we use a recurrent neural network to model noisy nonlinear dynamical systems. In particular, we use a long short-term memory (LSTM) network, which constructs an internal nonlinear dynamical system. We propose a cross-entropy loss with spatial ridge regularization to learn a non-stationary conditional probability distribution from a noisy nonlinear dynamical system. A Monte Carlo procedure to perform time-marching simulations by using the LSTM is presented. The behavior of the LSTM is studied by using a noisy, forced Van der Pol oscillator and the Ikeda equation.
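
    A compact sketch of the general approach described above, assuming the next state is discretized into bins so that the LSTM outputs a categorical distribution; the bin count, penalty weight and network sizes are assumptions, and the penalty below is one plausible reading of a 'spatial ridge' term rather than the paper's exact formulation:

      import torch
      import torch.nn as nn

      class ProbLSTM(nn.Module):
          # LSTM that maps a scalar time series to logits over discretized next states.
          def __init__(self, n_bins=64, hidden=128):
              super().__init__()
              self.lstm = nn.LSTM(input_size=1, hidden_size=hidden, batch_first=True)
              self.head = nn.Linear(hidden, n_bins)

          def forward(self, x, state=None):          # x: (batch, seq, 1)
              h, state = self.lstm(x, state)
              return self.head(h), state             # logits: (batch, seq, n_bins)

      def loss_fn(logits, targets, lam=1e-3):
          # Cross-entropy on integer bin indices plus a smoothness ("ridge") penalty
          # on the predicted probabilities across neighbouring bins.
          ce = nn.functional.cross_entropy(logits.reshape(-1, logits.size(-1)),
                                           targets.reshape(-1))
          p = torch.softmax(logits, dim=-1)
          ridge = ((p[..., 1:] - p[..., :-1]) ** 2).mean()
          return ce + lam * ridge

      @torch.no_grad()
      def monte_carlo_rollout(model, x0, bin_centers, steps=100):
          # Time-marching by repeatedly sampling the next state from the predicted
          # distribution (one Monte Carlo sample path).
          x, state, path = torch.as_tensor(float(x0)), None, []
          for _ in range(steps):
              logits, state = model(x.view(1, 1, 1), state)
              idx = torch.distributions.Categorical(logits=logits[0, -1]).sample()
              x = bin_centers[idx]
              path.append(x.item())
          return path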

  2. Courseware Development with Animated Pedagogical Agents in Learning System to Improve Learning Motivation

    Science.gov (United States)

    Chin, Kai-Yi; Hong, Zeng-Wei; Huang, Yueh-Min; Shen, Wei-Wei; Lin, Jim-Min

    2016-01-01

    The addition of animated pedagogical agents (APAs) in computer-assisted learning (CAL) systems could successfully enhance students' learning motivation and engagement in learning activities. Conventionally, APA-incorporated multimedia materials are constructed through the cooperation of teachers and software programmers. However, the thinking…

  3. Causal Learning in Gambling Disorder: Beyond the Illusion of Control.

    Science.gov (United States)

    Perales, José C; Navas, Juan F; Ruiz de Lara, Cristian M; Maldonado, Antonio; Catena, Andrés

    2017-06-01

    Causal learning is the ability to progressively incorporate raw information about dependencies between events, or between one's behavior and its outcomes, into beliefs of the causal structure of the world. In spite of the fact that some cognitive biases in gambling disorder can be described as alterations of causal learning involving gambling-relevant cues, behaviors, and outcomes, general causal learning mechanisms in gamblers have not been systematically investigated. In the present study, we compared gambling disorder patients against controls in an instrumental causal learning task. Evidence of illusion of control, namely, overestimation of the relationship between one's behavior and an uncorrelated outcome, showed up only in gamblers with strong current symptoms. Interestingly, this effect was part of a more complex pattern, in which gambling disorder patients manifested a poorer ability to discriminate between null and positive contingencies. Additionally, anomalies were related to gambling severity and current gambling disorder symptoms. Gambling-related biases, as measured by a standard psychometric tool, correlated with performance in the causal learning task, but not in the expected direction. Indeed, performance of gamblers with stronger biases tended to resemble the one of controls, which could imply that anomalies of causal learning processes play a role in gambling disorder, but do not seem to underlie gambling-specific biases, at least in a simple, direct way.

  4. The more you learn, the less you store : Memory-controlled incremental SVM for visual place recognition

    OpenAIRE

    Pronobis, Andrzej; Jie, Luo; Caputo, Barbara

    2010-01-01

    The capability to learn from experience is a key property for autonomous cognitive systems working in realistic settings. To this end, this paper presents an SVM-based algorithm, capable of learning model representations incrementally while keeping memory requirements under control. We combine an incremental extension of SVMs [43] with a method reducing the number of support vectors needed to build the decision function without any loss in performance [15], introducing a parameter which permit...
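
    A rough approximation of the memory-controlled incremental idea, not the authors' algorithm: after each batch, keep only the current support vectors (capped by a fixed budget) and retrain on them plus the new data. The class name, the budget rule and the binary-classification assumption are all illustrative:

      import numpy as np
      from sklearn.svm import SVC

      class BudgetedIncrementalSVM:
          def __init__(self, budget=500, **svc_kwargs):
              self.budget = budget
              self.svc_kwargs = svc_kwargs
              self.X_mem, self.y_mem, self.model = None, None, None

          def partial_fit(self, X_new, y_new):
              # Train on the retained support vectors plus the incoming batch.
              if self.X_mem is None:
                  X, y = X_new, y_new
              else:
                  X = np.vstack([self.X_mem, X_new])
                  y = np.concatenate([self.y_mem, y_new])
              self.model = SVC(**self.svc_kwargs).fit(X, y)
              sv = self.model.support_                      # indices of support vectors
              if len(sv) > self.budget:
                  # Keep the support vectors closest to the margin (binary case assumed).
                  margin = np.abs(self.model.decision_function(X[sv]))
                  sv = sv[np.argsort(margin)[:self.budget]]
              self.X_mem, self.y_mem = X[sv], y[sv]
              return self

          def predict(self, X):
              return self.model.predict(X)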

  5. Patterns for Designing Learning Management Systems

    NARCIS (Netherlands)

    Avgeriou, Paris; Retalis, Symeon; Papasalouros, Andreas

    2003-01-01

    Learning Management Systems are sophisticated web-based applications that are being engineered today in increasing numbers by numerous institutions and companies that want to get involved in e-learning either for providing services to third parties, or for educating and training their own people.

  6. The Influence of Learning Management Technology to Student’s Learning Outcome

    Directory of Open Access Journals (Sweden)

    Taufiq Lilo Adi Sucipto

    2017-02-01

    Full Text Available The study examines the influence of learning management systems on the implementation of the flipped classroom model in a vocational school in Indonesia. The flipped classroom is a relatively new educational model that inverts students' time spent on lectures and time spent on homework. Although studies have been conducted on the model, few have addressed the impact of the use of a learning management system on the performance of students involved in such a learning model, particularly within the context of Indonesian educational systems. A quasi-experimental approach was applied to an experiment class and a control class. The analysis reinforced previously reported research outcomes: the use of the Edmodo learning management system enhanced students' performance in the experiment class relative to the control class.

  7. Which Recommender System Can Best Fit Social Learning Platforms?

    NARCIS (Netherlands)

    Fazeli, Soude; Loni, Babak; Drachsler, Hendrik; Sloep, Peter

    2014-01-01

    This study aims to develop a recommender system for social learning platforms that combine traditional learning management systems with commercial social networks like Facebook. We therefore take into account social interactions of users to make recommendations on learning resources. We propose to

  8. SYSTEM APPROACH TO THE BLENDED LEARNING

    Directory of Open Access Journals (Sweden)

    Vladimir Kukharenko

    2015-10-01

    Full Text Available Currently, much attention is paid to the development of blended learning – a combination of traditional and distance learning (30-70% of the training). Such training is sometimes called hybrid and is referred to as a disruptive technology. The purpose is to show that the use of a systems approach in blended learning provides high-quality education, and that the technology can indeed be disruptive. The subject of the study is blended learning; the object of study is the blended learning process. The analysis results show that blended training increases the motivation of students and the qualification of teachers, and personalizes the learning process. At the same time, there are no reliable methods for assessing the quality of education, and training standards are lacking. It is important that the blended learning strategy supports institutional goals and has an effective organizational model for support.

  9. Could a Mobile-Assisted Learning System Support Flipped Classrooms for Classical Chinese Learning?

    Science.gov (United States)

    Wang, Y.-H.

    2016-01-01

    In this study, the researcher aimed to develop a mobile-assisted learning system and to investigate whether it could promote teenage learners' classical Chinese learning through the flipped classroom approach. The researcher first proposed the structure of the Cross-device Mobile-Assisted Classical Chinese (CMACC) system according to the pilot…

  10. Functional Based Adaptive and Fuzzy Sliding Controller for Non-Autonomous Active Suspension System

    Science.gov (United States)

    Huang, Shiuh-Jer; Chen, Hung-Yi

    In this paper, an adaptive sliding controller is developed for controlling a vehicle active suspension system. The functional approximation technique is employed to substitute for the unknown non-autonomous functions of the suspension system and to remove the model-based requirement of the sliding mode control algorithm. In order to improve the control performance and reduce the implementation problem, a fuzzy strategy with online learning ability is added to compensate for the functional approximation error. The update laws of the functional approximation coefficients and the fuzzy tuning parameters are derived from the Lyapunov theorem to guarantee the system stability. The proposed controller is implemented on a quarter-car hydraulic actuating active suspension system test-rig. The experimental results show that the proposed controller suppresses the oscillation amplitude of the suspension system effectively.
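
    A one-degree-of-freedom sketch of the functional-approximation sliding control idea; the hydraulic suspension dynamics, the fuzzy compensator and all gains are replaced here by a toy double integrator with an 'unknown' disturbance, so the numbers are illustrative assumptions only:

      import numpy as np

      def simulate(T=10.0, dt=1e-3, lam=5.0, eta=10.0, gamma=50.0, n=10):
          def basis(t):
              # Truncated Fourier basis used to approximate the unknown term.
              k = np.arange(1, n + 1)
              return np.concatenate(([1.0], np.sin(2*np.pi*k*t/T), np.cos(2*np.pi*k*t/T)))

          w = np.zeros(2*n + 1)            # adaptive functional-approximation coefficients
          x, xd = 0.0, 0.0                 # plant position and velocity
          for i in range(int(T/dt)):
              t = i * dt
              x_ref, xd_ref, xdd_ref = np.sin(t), np.cos(t), -np.sin(t)
              e, ed = x - x_ref, xd - xd_ref
              s = ed + lam * e                                      # sliding surface
              phi = basis(t)
              f_hat = phi @ w                                       # estimate of the unknown term
              u = xdd_ref - lam*ed - f_hat - eta*np.tanh(s/0.05)    # smoothed switching control
              w += gamma * phi * s * dt                             # Lyapunov-derived adaptation
              f_true = 2.0*np.sin(3*t) + 0.5*xd                     # the "unknown" dynamics
              xd += (u + f_true) * dt                               # toy double-integrator plant
              x += xd * dt
          return e                         # final tracking error (stays within a boundary layer)

      print(simulate())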

  11. IMPROVING CAUSE DETECTION SYSTEMS WITH ACTIVE LEARNING

    Data.gov (United States)

    National Aeronautics and Space Administration — IMPROVING CAUSE DETECTION SYSTEMS WITH ACTIVE LEARNING ISAAC PERSING AND VINCENT NG Abstract. Active learning has been successfully applied to many natural language...

  12. LONS: Learning Object Negotiation System

    Science.gov (United States)

    García, Antonio; García, Eva; de-Marcos, Luis; Martínez, José-Javier; Gutiérrez, José-María; Gutiérrez, José-Antonio; Barchino, Roberto; Otón, Salvador; Hilera, José-Ramón

    This system arises as a result of the growth of e-learning systems. It manages all the relevant modules in this context, such as the association of digital rights with the contents (courses), and the management and processing of payments on those rights. There are three blocks:

  13. [Multi-course web-learning system for supporting students of medical technology].

    Science.gov (United States)

    Honma, Satoru; Wakamatsu, Hidetoshi; Kurihara, Yuriko; Yoshida, Shoko; Sakai, Nobue

    2013-05-01

    A Web-Learning system was developed to support students' self-learning for the national qualification examination and for medical engineering practice. The results from small tests in various situations suggest that the unit-learning systems are more effective, especially in the early stage of self-learning. In addition, responses to a questionnaire suggest that students' motivation has a certain relation with the number of questions in the system; that is, the fewer the questions, the more easily students work through them and the higher their learning motivation. Thus, the system was extended to enable students to study various subjects and/or units by themselves. The system enables them to obtain learning effects more easily through exercises during lectures. The effectiveness of the system was investigated on the medicine-related subjects installed in it. The questions on medical engineering and pathological histology were divided into several groups, from which sixteen Web-Learning subsystems were composed for practical application. The various unit-learning systems were confirmed to be more useful to most students than the overall Web-Learning system.

  14. Building machine learning systems with Python

    CERN Document Server

    Richert, Willi

    2013-01-01

    This is a tutorial-driven and practical, but well-grounded book showcasing good Machine Learning practices. There will be an emphasis on using existing technologies instead of showing how to write your own implementations of algorithms. This book is a scenario-based, example-driven tutorial. By the end of the book you will have learnt critical aspects of Machine Learning Python projects and experienced the power of ML-based systems by actually working on them. This book primarily targets Python developers who want to learn about and build Machine Learning into their projects, or who want to pro

  15. Expert Students in Social Learning Management Systems

    Science.gov (United States)

    Avogadro, Paolo; Calegari, Silvia; Dominoni, Matteo Alessandro

    2016-01-01

    Purpose: A social learning management system (social LMS) is a tool which favors social interactions and allows scholastic institutions to supervise and guide the learning process. The inclusion of the social feature to a "normal" LMS leads to the creation of educational social networks (EduSN), where the students interact and learn. The…

  16. Preliminary Test of Adaptive Neuro-Fuzzy Inference System Controller for Spacecraft Attitude Control

    Directory of Open Access Journals (Sweden)

    Sung-Woo Kim

    2012-12-01

    Full Text Available The problem of spacecraft attitude control is solved using an adaptive neuro-fuzzy inference system (ANFIS). An ANFIS produces a control signal for one of the three axes of a spacecraft's body frame, so in total three ANFISs are constructed for 3-axis attitude control. The fuzzy inference system of the ANFIS is initialized using a subtractive clustering method. The ANFIS is trained by a hybrid learning algorithm using data obtained from attitude control simulations with a state-dependent Riccati equation controller. The training data set for each axis is composed of the state errors for the 3 axes (roll, pitch, and yaw) and a control signal for one of the 3 axes. The stability region of the ANFIS controller is estimated numerically based on Lyapunov stability theory, using a numerical method to calculate the Jacobian matrix. To measure the performance of the ANFIS controller, root mean square error and correlation factor are used as performance indicators. The performance is tested on two ANFIS controllers trained in different conditions. The test results show that the performance indicators are proper in the sense that the ANFIS controller with the larger stability region provides better performance according to the performance indicators.
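
    A stripped-down sketch of the Sugeno-type fuzzy inference underlying ANFIS-style controllers, with Gaussian memberships and consequents fitted by least squares; in the setting above the rule centres and widths would come from subtractive clustering, whereas here they are simply given, and none of this reproduces the authors' implementation:

      import numpy as np

      def firing_strengths(X, centres, sigmas):
          # Product of per-input Gaussian memberships for each rule, then normalised.
          d = X[:, None, :] - centres[None, :, :]                # (N, rules, inputs)
          w = np.exp(-0.5 * np.sum((d / sigmas) ** 2, axis=2))   # (N, rules)
          return w / w.sum(axis=1, keepdims=True)

      def fit_consequents(X, y, centres, sigmas):
          # First-order Sugeno: each rule's output is a linear function of the inputs;
          # with fixed premises the consequents follow from one least-squares solve
          # (the LSE half of ANFIS "hybrid learning").
          wn = firing_strengths(X, centres, sigmas)
          Xa = np.hstack([X, np.ones((X.shape[0], 1))])          # augment with bias
          A = (wn[:, :, None] * Xa[:, None, :]).reshape(X.shape[0], -1)
          theta, *_ = np.linalg.lstsq(A, y, rcond=None)
          return theta.reshape(wn.shape[1], -1)                  # (rules, inputs + 1)

      def predict(X, centres, sigmas, theta):
          wn = firing_strengths(X, centres, sigmas)
          Xa = np.hstack([X, np.ones((X.shape[0], 1))])
          return np.sum(wn * (Xa @ theta.T), axis=1)

      # Tiny illustrative example: two rules on a 1-D input.
      X = np.linspace(-1, 1, 50)[:, None]
      y = np.sin(2 * X[:, 0])
      centres, sigmas = np.array([[-0.5], [0.5]]), np.array([[0.5], [0.5]])
      theta = fit_consequents(X, y, centres, sigmas)
      print(np.max(np.abs(predict(X, centres, sigmas, theta) - y)))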

  17. Intensity of E-learning System User Behavior Based on the UTAUT Model

    OpenAIRE

    Sari, Fatma; Purnamasari, Susan Dian

    2013-01-01

    This study aims to determine behavioral intention in the use of an e-learning system using the UTAUT model. The phenomena underlying the research are that the use of e-learning by information systems students in the learning process is not yet optimal, that the existence of e-learning has not been sufficiently promoted, and that the impact of using e-learning on lecturers has not yet been measured. This study is limited in its scope: analysis of the influence of performance expectanc...

  18. Application of a repetitive process setting to design of monotonically convergent iterative learning control

    Science.gov (United States)

    Boski, Marcin; Paszke, Wojciech

    2015-11-01

    This paper deals with the problem of designing an iterative learning control algorithm for discrete linear systems using repetitive process stability theory. The resulting design produces a stabilizing output feedback controller in the time domain and a feedforward controller that guarantees monotonic convergence in the trial-to-trial domain. The results are also extended to a limited frequency range design specification. The new design procedure is formulated in terms of linear matrix inequality (LMI) representations, which guarantee the prescribed performance of the ILC scheme. A simulation example is given to illustrate the theoretical developments.
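
    For orientation, the sketch below runs a plain P-type iterative learning control update on a toy discrete SISO plant; the LMI-based synthesis of the feedback and feedforward gains described above is replaced by a single hand-picked learning gain, and the plant matrices are arbitrary illustrative values:

      import numpy as np

      def run_ilc(A, B, C, r, trials=20, L=5.0):
          # u_{k+1}(t) = u_k(t) + L * e_k(t+1): trial-to-trial update of the input.
          N = len(r)
          u = np.zeros(N)
          errors = []
          for _ in range(trials):
              x = np.zeros(A.shape[0])
              y = np.zeros(N)
              for t in range(N):                 # run one trial from the same initial state
                  y[t] = C @ x
                  x = A @ x + B.flatten() * u[t]
              e = r - y
              errors.append(np.linalg.norm(e))
              u = u + L * np.roll(e, -1)         # shift the error forward by one sample
          return errors                          # should shrink trial-to-trial for a suitable L

      A = np.array([[1.0, 0.1], [0.0, 0.9]])
      B = np.array([[0.1], [0.1]])
      C = np.array([1.0, 0.0])
      r = np.sin(np.linspace(0, 2*np.pi, 100))
      print(run_ilc(A, B, C, r)[:5])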

  19. Indicators for successful learning in air traffic control training

    NARCIS (Netherlands)

    Van Meeuwen, Ludo; Brand-Gruwel, Saskia; Van Merriënboer, Jeroen; De Bock, Jeano; Kirschner, Paul A.

    2011-01-01

    Van Meeuwen, L. W., Brand-Gruwel, S., Van Merriënboer, J. J. G., De Bock, J. J. P. R., & Kirschner, P. A. (2010, August). Indicators for successful learning in air traffic control training. Paper presented at the 5th EARLI SIG 14 Learning and Professional Development Conference. Munich, Germany.

  20. Adaptive critic learning techniques for engine torque and air-fuel ratio control.

    Science.gov (United States)

    Liu, Derong; Javaherian, Hossein; Kovalenko, Olesia; Huang, Ting

    2008-08-01

    A new approach for engine calibration and control is proposed. In this paper, we present our research results on the implementation of adaptive critic designs for self-learning control of automotive engines. A class of adaptive critic designs that can be classified as (model-free) action-dependent heuristic dynamic programming is used in this research project. The goals of the present learning control design for automotive engines include improved performance, reduced emissions, and maintained optimum performance under various operating conditions. Using the data from a test vehicle with a V8 engine, we developed a neural network model of the engine and neural network controllers based on the idea of approximate dynamic programming to achieve optimal control. We have developed and simulated self-learning neural network controllers for both engine torque (TRQ) and exhaust air-fuel ratio (AFR) control. The goal of TRQ control and AFR control is to track the commanded values. For both control problems, excellent neural network controller transient performance has been achieved.
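
    The toy loop below sketches the shape of action-dependent heuristic dynamic programming: a critic for the action value trained by temporal differences, and an actor nudged along the critic's gradient with respect to the control. The scalar plant, features, gains and cost are illustrative assumptions, not the engine models used in the paper:

      import numpy as np

      rng = np.random.default_rng(0)

      def features(x, u):
          # Quadratic features for a linear-in-parameters critic J(x, u).
          return np.array([1.0, x, u, x*u, x*x, u*u])

      wc = np.zeros(6)                 # critic weights
      ka = 0.0                         # linear actor: u = -ka * x
      alpha_c, alpha_a, gamma = 0.05, 0.01, 0.9

      x = 1.0
      for step in range(5000):
          u = -ka * x + 0.1 * rng.standard_normal()     # exploratory action
          x_next = 0.9 * x + 0.2 * u                    # stand-in "unknown" plant
          cost = x_next**2 + 0.1 * u**2                 # utility to be minimised
          u_next = -ka * x_next
          # Critic TD update: J(x, u) should approximate cost + gamma * J(x', u').
          td = cost + gamma * wc @ features(x_next, u_next) - wc @ features(x, u)
          wc += alpha_c * td * features(x, u)
          # Actor update: descend dJ/dka = dJ/du * du/dka, with du/dka = -x.
          dJ_du = wc @ np.array([0.0, 0.0, 1.0, x, 0.0, 2*u])
          ka += alpha_a * dJ_du * x
          x = x_next if abs(x_next) < 10 else 1.0       # crude reset guard
      print("learned feedback gain:", ka)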