Christensen, David Johan
In this thesis, we study several central elements of autonomous self-reconfigurable modular robots. Unlike conventional robots, such robots are: i) Modular, since robots are assembled from numerous robotic modules. ii) Reconfigurable, since the modules can be combined in a variety of ways. iii) Self-reconfigurable, since the modules themselves are able to change how they are combined. iv) Autonomous, since robots control themselves without human guidance. Such robots are attractive to study since they, in theory, have several desirable characteristics, such as versatility, reliability and cheapness. In practice, however, it is challenging to realize such characteristics, since state-of-the-art systems and solutions suffer from several inherent technical and theoretical problems and limitations. In this thesis, we address these challenges by exploring four central elements of autonomous self-reconfigurable modular...
Dvinge, Nicolai; Schultz, Ulrik Pagh; Christensen, David Johan
A self-reconfigurable robot is a robotic device that can change its own shape. Self-reconfigurable robots are commonly built from multiple identical modules that can manipulate each other to change the shape of the robot. The robot can also perform tasks such as locomotion without changing shape......., significantly simplifying the task of programming self-reconfigurable robots. Our language fully supports programming the ATRON self-reconfigurable robot, and has been used to implement several controllers running both on the physical modules and in simulation......., self-reconfigurable robots, we have developed a declarative, role-based language that allows the programmer to associate roles and behavior to structural elements in a modular robot. Based on the role declarations, a dedicated middleware for high-level distributed communication is generated...
Self-reconfigurable robots are built from robotic modules typically organised in a lattice. The robotic modules themselves are complete, although simple, robots and have onboard batteries, actuators, sensors, processing power, and communication capabilities. The modules can automatically connect to and disconnect from neighbour modules and move around in the lattice of modules. The self-reconfigurable robot as a whole can, through this automatic rearrangement of modules, change its own shape to adapt to the environment or as a response to new tasks. Potential advantages of self-reconfigurable robots are extreme versatility and robustness. The organisation of self-reconfigurable robots in a lattice structure and the emphasis on local communication between modules mean that lattice automata are a useful basis for control of self-reconfigurable robots. However, there are significant differences...
Docking design of self-reconfigurable robots is studied. Firstly, the self-reconfigurable robot is presented. Its basic module is designed, which is composed of a central cube and six rotary arms. Then, the novel docking mechanism of each module is designed. It is critical for the self-reconfigurable robot to be able to discard any faulty modules for self-repairing actions. The docking process is analyzed with the geometric method. The docking forces between two modules are d...
FEI Yanqiong; DONG Qinglei; ZHAO Xifang
This paper proposes a novel, hermaphroditic, lattice-type self-reconfigurable modular robot. Each module is composed of a center body (a cubic part) and six sides that can rotate independently. There are two holes and two extensible pegs on each side. The rotary motion of each side and the extensible motion of the pegs are generated by a motor connected to a reducer, using a cone-shaped gear, belt, clutch, etc. The structure of the module is compact and has space to extend further.
Self-reconfigurable modular robots are composed of modules which are able to autonomously change the way they are connected. An appropriate control algorithm enables the modular robots to change their shape in order to adapt to their immediate environment. In this paper, we propose an algorithm for adaptive transformation of the modular robots to the load condition. The algorithm is based on the simple idea that modules have a tendency to gather around stress-concentrated parts and reinforce those parts. As a result of this self-reconfiguration rule, the modular robots form an appropriate structure to withstand the load condition. Applying the algorithm to our modular robot named “CHOBIE II,” we show by computer simulation that the modules are able to construct a cantilever structure while avoiding overstressed states.
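The gathering rule summarized in this abstract can be illustrated with a toy 2-D lattice simulation. Everything below (the stress proxy, the single-module move rule, the lattice shape) is a hypothetical simplification for illustration, not the actual CHOBIE II algorithm:

```python
# Toy 2-D sketch of a stress-driven self-reconfiguration rule:
# modules tend to gather around stress-concentrated parts.

def toy_stress(modules):
    """Assumed stress proxy: number of modules stacked above each cell."""
    return {(x, y): sum(1 for (mx, my) in modules if mx == x and my > y)
            for (x, y) in modules}

def reinforce_step(modules):
    """Relocate one low-stress module next to the most stressed module."""
    stress = toy_stress(modules)
    hot = max(modules, key=lambda m: stress[m])    # most stressed site
    cold = min(modules, key=lambda m: stress[m])   # least stressed site
    # Free cells neighbouring the hot spot (4-connectivity).
    frees = [(hot[0] + dx, hot[1] + dy)
             for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))
             if (hot[0] + dx, hot[1] + dy) not in modules]
    if cold != hot and frees:
        modules = (modules - {cold}) | {frees[0]}
    return modules

modules = {(0, 0), (1, 0), (2, 0), (0, 1), (0, 2)}  # small L-shape
modules = reinforce_step(modules)
```

A real controller would run such a rule concurrently on every module, with connectivity checks; this sketch only shows the "gather where stress is high" tendency for a single step.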
徐威; 王高中; 李倩; 王石刚
A self-reconfigurable robot is a non-linear complex system composed of a large number of modules. The complexity caused by non-linearity makes it difficult to solve the problem of module motion planning and shape-changing control with traditional algorithms. In this paper, a full-discrete metamorphic algorithm is proposed. The modules concurrently process their local sensing information, update their eigenvectors, and act by the same predetermined logical rules. A reasonable motion sequence for the modules, and hence the global metamorphosis, can then be obtained. Therefore, the complexity of the metamorphic algorithm is reduced, the metamorphic procedure is simplified, and self-organizing metamorphosis is achieved. Several typical systems are studied as algorithm cases and evaluated through a simulation program for 2-D planar homogeneous modular systems.
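The local-rule scheme described above can be sketched as follows. The 2-D lattice, the 4-neighbour eigenvector encoding, and the two example rules are invented for illustration and are not the paper's actual rule set:

```python
# Hypothetical sketch: each module encodes its neighbourhood occupancy
# as an eigenvector (bit pattern) and looks up its next action in a
# shared, predetermined rule table.

NEIGHBOURS = ((1, 0), (0, 1), (-1, 0), (0, -1))  # E, N, W, S

def eigenvector(pos, occupied):
    """4-bit neighbourhood occupancy code for a module in a 2-D lattice."""
    return tuple(1 if (pos[0] + dx, pos[1] + dy) in occupied else 0
                 for dx, dy in NEIGHBOURS)

# Assumed rule table: eigenvector -> move offset (missing = stay put).
# A real system would derive these rules from the desired metamorphosis.
RULES = {
    (1, 0, 0, 0): (0, 1),  # only an east neighbour: climb north
    (0, 0, 1, 0): (0, 1),  # only a west neighbour: climb north
}

def step(occupied):
    """All modules sense locally, then act by the same shared rules."""
    moves = {}
    for pos in occupied:
        offset = RULES.get(eigenvector(pos, occupied))
        if offset:
            moves[pos] = (pos[0] + offset[0], pos[1] + offset[1])
    # Apply the moves whose destination cell is free.
    for src, dst in moves.items():
        if dst not in occupied:
            occupied = (occupied - {src}) | {dst}
    return occupied
```

Because sensing and rule lookup are purely local, every module runs the identical program; global shape change emerges from the concurrent application of the table.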
This paper presents the design and implementation of a new modular self-reconfigurable robot. The single module has three joints and can perform rectilinear motion, lateral shift, lateral rolling, and rotation. A flexible pin-hole-based docking mechanism is designed for self-assembly. With the proposed infrared-sensor-based docking method, multiple modules can be self-assembled to form versatile configurations. The modules communicate with each other through ZigBee protocols. The locomotion planning and geometry analysis of the single module are presented in detail and the efficiency of the single module’s mobility is also demonstrated by experimental results. In automatic docking experiments with two modules, the proposed method is shown to be able to achieve an average success rate of 78% within the effective region. The average time of the docking process is reduced to 75 s. The maximum velocity of the I-shaped robot is up to 3.6 cm/s and the maximum velocity of the X-shaped robot is 4.8 cm/s. The detach-dock method for I-to-X transformation planning is also verified. The ZigBee-based communication system can achieve 100% receiving rate at 55 ms transformation interval.
Schultz, Ulrik Pagh; Christensen, David Johan; Støy, Kasper
Programming a modular, self-reconfigurable robot is however a complicated task: the robot is essentially a real-time, distributed embedded system, where control and communication paths often are tightly coupled to the current physical configuration of the robot. To facilitate the task of programming modular, self-reconfigurable robots, we have developed a declarative, role-based language that allows the programmer to define roles and behavior independently of the concrete physical structure of the robot. Roles are compiled to mobile code fragments that distribute themselves over the physical structure...
LIU JinGuo; MA ShuGen; WANG YueChao; LI Bin
This paper presents a network-based analysis approach for the reconfiguration problem of a self-reconfigurable robot. The self-reconfigurable modular robot named "AMOEBA-I" has nine kinds of non-isomorphic configurations that constitute a configuration network. Each configuration of the robot is defined to be a node in the weighted and directed configuration network. The transformation from one configuration to another is represented by a directed path with nonnegative weight. Graph theory is applied in the reconfiguration analysis, where the reconfiguration route, reconfigurable matrix and route matrix are defined according to the topological information of these configurations. Algorithms from graph theory have been used in enumerating the available reconfiguration routes and deciding the best reconfiguration route. Numerical analysis and experimental simulation results prove the validity of the proposed approach, which is also potentially suitable for configuration control and reconfiguration planning of other self-reconfigurable robots.
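The route-selection idea above (configurations as nodes of a weighted, directed network; best route found by graph search) can be sketched with Dijkstra's algorithm. The node names and edge weights below are invented stand-ins, not AMOEBA-I's actual nine configurations:

```python
# Cheapest reconfiguration route over a weighted, directed
# configuration network, via Dijkstra's algorithm.
import heapq

def best_route(graph, start, goal):
    """Return (total_cost, route) over a dict-of-dicts digraph."""
    queue, seen = [(0, start, [start])], set()
    while queue:
        cost, node, route = heapq.heappop(queue)
        if node == goal:
            return cost, route
        if node in seen:
            continue
        seen.add(node)
        for nxt, w in graph.get(node, {}).items():
            if nxt not in seen:
                heapq.heappush(queue, (cost + w, nxt, route + [nxt]))
    return float("inf"), []

# Hypothetical fragment of a configuration network.
net = {"line": {"arc": 2, "triangle": 5},
       "arc": {"triangle": 1, "line": 2},
       "triangle": {"line": 5}}
cost, route = best_route(net, "line", "triangle")
# cost 3 via line -> arc -> triangle, beating the direct edge of weight 5
```

The paper's reconfigurable matrix and route matrix play the role of the adjacency structure here; with nine configurations the network is small enough that exhaustive enumeration of routes is equally feasible.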
Moghadam, Mikael; Johan Christensen, David; Brandt, David;
This paper explores the role of operating system and high-level languages in the development of software and domain-specific languages (DSLs) for self-reconfigurable robotics. We review some of the current trends in self-reconfigurable robotics and describe the development of a software system for ATRON II which utilizes Linux and Python to significantly improve software abstraction and portability while providing some basic features which could prove useful when using Python, either stand-alone or via a DSL, on a self-reconfigurable robot system. These features include transparent socket communication, module identification, easy software transfer and reliable module-to-module communication. The end result is a software platform for modular robots that, where appropriate, builds on existing work in operating systems, virtual machines, middleware and high-level languages.
Elian, Carrillo; Duhaut, Dominique
Bioinspiration and Robotics Walking and Climbing Robots, Book edited by: Maki K. Habib, ISBN: 978-3-902613-15-8, Publisher: I-Tech Education and Publishing, Austria. Collective displacement is a very useful behaviour for living creatures. This behaviour can appear in a flock of birds, a school of fish, or a swarm of insects. Flocking behaviour is a common demonstration of the power of simple rules in the emergence of collective displacement (Reynolds, 2007). The study of the displacement of a...
Zhang Liping; Ma Shugen; Li Bin; Zhang Zheng; Cao Binggang
Based on the design of a docking mechanism, this paper thoroughly investigates the automatic space docking of a self-reconfigurable modular exploration robot system (RMERS). A method that guides the robot to achieve space docking using a two-dimensional PSD is put forward for this medium-size robot system. In order to enlarge the detection range and precision of the PSD and reduce its dependence on the lighting signal, the PSD was modified by adding an optical device over its light-sensitive surface. The emission board and the LED light scheduling were designed according to the docking algorithm, and the operating principle of the docking process was analyzed on this basis. Simulation experiments were carried out and their results are presented.
WU Qiu-xuan; CAO Guang-yi; TIAN Hua-ying; FEI Yan-qiong
The eigenvector of a module, encoding the states of its six adjacent modules, was constructed for the self-reconfigurable robot M-Cubes, and the configuration of the system was expressed with the eigenvectors of all modules. According to the configuration and motion characteristics of the modules, a 3-dimensional motion rule set was provided. After the motion direction of the system was decided and the motion rules were selected, the rule set of each module was run according to the module's eigenvector. In this way, rapid and effective motion and metamorphosis of the system were realized. The rule sets were run on three systems and fully distributed motion was realized. Simulation results show that the 3-dimensional motion rule sets have good applicability and extensibility. The motion steps and the communication load of the modules increase linearly with the number of modules.
Christensen, David Johan; Schultz, Ulrik Pagh; Stoy, Kasper
In this paper, we present a distributed reinforcement learning strategy for morphology-independent lifelong gait learning for modular robots. All modules run identical controllers that locally and independently optimize their action selection based on the robot’s velocity as a global, shared reward...... physical robots with a comparable performance, (iii) can be applied to learn simple gait control tables for both M-TRAN and ATRON robots, (iv) enables an 8-module robot to adapt to faults and changes in its morphology, and (v) can learn gaits for up to 60-module robots, but a divergence effect becomes substantial from 20–30 modules. These experiments demonstrate the advantages of a distributed learning strategy for modular robots, such as simplicity in implementation, low resource requirements, morphology independence, reconfigurability, and fault tolerance.
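The distributed strategy described above (identical, independent learners per module, all driven by one global, shared reward) can be sketched as a bandit-style learner. The three-action space, the constant learning rate, and the toy velocity model below are assumptions for illustration, not the paper's actual gait-table learner:

```python
# Independent epsilon-greedy learners per module, one shared reward.
import random

random.seed(1)

class ModuleLearner:
    """One identical, independent learner per module."""
    def __init__(self, n_actions, eps=0.1, step=0.2):
        self.q = [0.0] * n_actions
        self.eps, self.step = eps, step
        self.last = 0

    def act(self):
        if random.random() < self.eps:                 # explore
            self.last = random.randrange(len(self.q))
        else:                                          # exploit
            self.last = max(range(len(self.q)), key=lambda a: self.q[a])
        return self.last

    def learn(self, reward):
        # Update only the action this module actually took, using the
        # global shared reward (the robot's velocity in the paper).
        self.q[self.last] += self.step * (reward - self.q[self.last])

def velocity(actions):
    """Toy stand-in for the shared reward: best when all pick action 1."""
    return sum(a == 1 for a in actions) / len(actions)

modules = [ModuleLearner(n_actions=3) for _ in range(8)]
for _ in range(500):
    acts = [m.act() for m in modules]
    reward = velocity(acts)        # one global reward, shared by all
    for m in modules:
        m.learn(reward)

greedy = [max(range(3), key=lambda a: m.q[a]) for m in modules]
```

Because each module's own choice contributes directly to the shared reward, the independent learners coordinate without any communication, which is exactly what makes the scheme morphology-independent and fault-tolerant; the divergence the paper reports at larger module counts comes from each module's contribution shrinking relative to the reward noise.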
Kuksenok, Olga; Balazs, Anna C.
Human motion is enabled by the concerted expansion and contraction of interconnected muscles that are powered by inherent biochemical reactions. One of the challenges in the field of biomimicry is eliciting this form of motion from purely synthetic materials, which typically do not generate internalized reactions to drive mechanical action. Moreover, for practical applications, this bio-inspired motion must be readily controllable. Herein, we develop a computational model to design a new class of polymer gels where structural reconfigurations and internalized reactions are intimately linked to produce autonomous motion, which can be directed with light. These gels contain both spirobenzopyran (SP) chromophores and the ruthenium catalysts that drive the oscillatory Belousov-Zhabotinsky (BZ) reaction. Importantly, both the SP moieties and the BZ reaction are photosensitive. When these dual-functionalized gels are exposed to non-uniform illumination, the localized contraction of the gel (due to the SP moieties) in the presence of traveling chemical waves (due to the BZ reaction) leads to new forms of spontaneous, self-sustained movement, which cannot be achieved by either of the mono-functionalized networks. PMID:25924823
Vo, Van Thanh
The objective of the autonomous packaging robot application is to replace manual product packaging in the food industry with a fully automatic robot. The objective is achieved by using a combination of machine vision, a central computer, sensors, a microcontroller and a typical ABB robot. The method is to equip the robot with different sensors: a camera as the “eyes” of the robot, a distance sensor and microcontroller as its “sense of touch”, and a central computer as its “brain”. Because the ro...
Agah, Arvin; Bekey, George A.
This paper describes autonomous mobile robot teams performing tasks in unstructured environments. The behavior and the intelligence of the group are distributed, and the system does not include a central command base or leader. The novel concept of the Tropism-Based Cognitive Architecture is introduced, which is used by the robots in order to produce behavior transforming their sensory information into proper action. The results of a number of simulation experiments are presented. These experiments include worlds where the robot teams must locate, decompose, and gather objects, and defend themselves against hostile predators, while navigating around stationary and mobile obstacles.
Addanki Purna Ramesh,
This paper focuses on the design and implementation of a six-legged robot that is capable of monitoring and performing household work independently. The Autonomous Home Automated Hexapod is developed with three AT89C52 microcontrollers, which function as the brain of the robot and in which all operating functions of each module are chronologically programmed. The legs of the robot were developed with 2 servo motors each, providing two degrees of freedom per leg. Several additional sensors, such as the TSOP1738 (IR), an RF transmitter and receiver, and the DS1307 (Real-Time Clock), have been embedded into the robot in modular form to make it work autonomously.
An autonomous robot which can move and find its own route to a destination by means of fuzzy control is under development. An AI technique is utilized to determine the route to a destination from geographical information gathered through an ITV camera mounted on the robot. Information on robot location is also gained through an ITV camera, and, by applying fuzzy inference operation, the robot's movement is controlled. This paper describes the methods that are used for finding a route and controlling movement. Effectiveness of the proposed methods has been confirmed through actual robot movement tests and through computer simulations.
This SpringerBrief reveals the latest techniques in computer vision and machine learning on robots that are designed as accurate and efficient military snipers. Militaries around the world are investigating this technology to simplify the time, cost and safety measures necessary for training human snipers. These robots are developed by combining crucial aspects of computer science research areas including image processing, robotic kinematics and learning algorithms. The authors explain how a new humanoid robot, the iCub, uses high-speed cameras and computer vision algorithms to track the objec
RoboCup is an international research and education initiative, which aims to foster artificial intelligence and robotics research by using competitive soccer as a standard problem. This paper presents a detailed engineering design process and the outcome for an omni-directional mobile robot platform for the Robocup Middle Size League competition. A prototype that can move omnidirectionally with kicking capability was designed, built, and tested by a group of senior students. The design included a mechanical base, pneumatic kicking mechanism, a DSP microcontroller-based control system, various sensor interfacing units, and the analysis of omnidirectional motions. The testing results showed that the system was able to move omnidirectionally with a speed of ∼2 m/s and able to kick a size 5 FIFA soccer ball for a distance of at least 5 meters.
This thesis presents the research work the author carried on during his PhD on the topic of robotic perception for autonomous navigation. In particular, the efforts focus on the Self-Localization, Scene Understanding and Object Detection and Tracking problems, proposing for each of these three topics one or more approaches that present an improvement over the state-of-the-art. In some cases the proposed approaches mutually exploit the generated information to improve the quality of the final ...
Nonami, Kenzo; Suzuki, Satoshi; Wang, Wei; Nakazawa, Daisuke
Worldwide demand for robotic aircraft such as unmanned aerial vehicles (UAVs) and micro aerial vehicles (MAVs) is surging. Not only military but especially civil applications are being developed at a rapid pace. Unmanned vehicles offer major advantages when used for aerial surveillance, reconnaissance, and inspection in complex and inhospitable environments. UAVs are better suited for dirty or dangerous missions than manned aircraft and are more cost-effective. UAVs can operate in contaminated environments, for example, and at altitudes both lower and higher than those typically traversed by m
Nowadays, due to busy routine life, people forget to water their plants. In this paper, we present a completely autonomous and cost-effective system for watering indoor potted plants placed on an even surface. The system comprises a mobile robot and a temperature-humidity sensing module. The system is fully adaptive to any environment and takes into account the watering needs of the plants using the temperature-humidity sensing module. The paper describes the hardware architecture of the fully automated watering system, which uses wireless communication to communicate between the mobile robot and the sensing module. This gardening robot is completely portable and is equipped with a Radio Frequency Identification (RFID) module, a microcontroller, an on-board water reservoir and an attached water pump. It is capable of sensing the watering needs of the plants, locating them and finally watering them autonomously without any human intervention. Mobilization of the robot to the potted plant is achieved by using a predefined path. For identification, an RFID tag is attached to each potted plant. The paper also discusses the detailed implementation of the system supported with complete circuitry. Finally, the paper concludes with system performance including the analysis of the water carrying capacity and time requirements to water a set of plants.
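The watering decision in the system above can be sketched as a simple threshold rule keyed by RFID tag. The tag IDs, thresholds, and sensor readings below are invented for illustration; the actual system's thresholds and protocol are not specified in the abstract:

```python
# Hypothetical watering decision: visit each RFID-tagged pot and water
# only when its sensed humidity falls below the plant's threshold.

PLANTS = {                  # RFID tag -> humidity threshold (%)
    "tag-01": 40,
    "tag-02": 55,
}

def plants_to_water(readings, plants=PLANTS):
    """readings: RFID tag -> humidity (%) from the sensing module."""
    return [tag for tag, humidity in readings.items()
            if tag in plants and humidity < plants[tag]]

needs_water = plants_to_water({"tag-01": 35, "tag-02": 60})
# only tag-01 is below its threshold (35 < 40), so it gets watered
```

The robot would then follow its predefined path to each tag in `needs_water` and run the pump.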
To operate in rich, dynamic environments, autonomous robots must be able to effectively utilize and coordinate their limited physical and computational resources. As complexity increases, it becomes necessary to impose explicit constraints on the control of planning, perception, and action to ensure that unwanted interactions between behaviors do not occur. This paper advocates developing complex robot systems by layering reactive behaviors onto deliberative components. In this structured control approach, the deliberative components handle normal situations and the reactive behaviors, which are explicitly constrained as to when and how they are activated, handle exceptional situations. The Task Control Architecture (TCA) has been developed to support this approach. TCA provides an integrated set of control constructs useful for implementing deliberative and reactive behaviors. The control constructs facilitate modular and evolutionary system development: they are used to integrate and coordinate planning, perception, and execution, and to incrementally improve the efficiency and robustness of the robot systems. To date, TCA has been used in implementing a half-dozen mobile robot systems, including an autonomous six-legged rover and an indoor mobile manipulator.
Protopapadakis, E.; Stentoumis, C.; Doulamis, N.; Doulamis, A.; Loupos, K.; Makantasis, K.; Kopsiaftis, G.; Amditis, A.
In this paper, an automatic robotic inspector for tunnel assessment is presented. The proposed platform is able to autonomously navigate within civil infrastructures, grab stereo images and process/analyse them in order to identify defect types. First, crack detection is performed via deep learning approaches. Then, a detailed 3D model of the cracked area is created, utilizing photogrammetric methods. Finally, a laser profiling of the tunnel's lining is performed for a narrow region close to the detected crack, allowing for the deduction of potential deformations. The robotic platform consists of an autonomous mobile vehicle and a crane arm, guided by the computer-vision-based crack detector, carrying the ultrasound sensors, the stereo cameras and the laser scanner. Visual inspection is based on convolutional neural networks, which support the creation of high-level discriminative features for complex non-linear pattern classification. Real-time 3D information is then accurately calculated, and the crack position and orientation are passed to the robotic platform. The entire system has been evaluated in railway and road tunnels, i.e. in the Egnatia Highway and London Underground infrastructure.
Kumar, Akash; Ganesh, Shashikiran
Physical Research Laboratory operates a 50 cm robotic observatory at Mount Abu. This Automated Telescope for Variability Studies (ATVS) makes use of the Remote Telescope System 2 (RTS2) for autonomous operations. The observatory uses a 3.5 m dome from Sirius Observatories. We have developed electronics using Arduino circuit boards with home-grown logic and software to control the dome operations. We are in the process of completing the drivers to link our Arduino-based dome controller with RTS2. This document is a short description of the various phases of the development and their integration to achieve the required objective.
Although the robotics community has done a great deal of research in the field of autonomous mobile robotics, there are still many unsolved challenges. In this context, the European Robotics Challenges (EUROC) aim at enhancing mobile robotics research by building concrete projects with industrial applications. During my final-year internship for the Télécom Physique Strasbourg engineering degree, which took place in the Robotics and Mechatronics Institute at DLR Oberpfaffenhofen (Germany),...
Garcia de Marina Peinado, Hector Jesús
This thesis addresses several theoretical and practical problems related to formation-control of autonomous robots. Formation-control aims to simultaneously accomplish the tasks of forming a desired shape by the robots and controlling their coordinated collective motion. This kind of robot performan
Epstein, Susan L; Aroor, Anoop; Evanusa, Matthew; Sklar, Elizabeth I; Parsons, Simon
Optimal navigation for a simulated robot relies on a detailed map and explicit path planning, an approach problematic for real-world robots that are subject to noise and error. This paper reports on autonomous robots that rely on local spatial perception, learning, and commonsense rationales instead. Despite realistic actuator error, learned spatial abstractions form a model that supports effective travel. PMID:26227680
ZHENG Chang-e; HUANG Qiang; HUANG Yuan-can
The small size of miniature robots poses great challenges for the mechanical and electrical design and the implementation of autonomous capabilities. In this paper, the mechanical and electrical design for a two-wheeled cylindrical miniature autonomous robot ("BMS-1", BIT MicroScout-1) is presented, and some autonomous capabilities are implemented by multiple sensors and some arithmetic models. Several experimental results show that BMS-1 is useful for surveillance in confined spaces and suitable for large-scale surveillance due to its autonomous capabilities.
Wilhelmsen, K.C.; Hurd, R.L.; Couture, S.
A tele-robotic and autonomous controller architecture for waste handling and sorting has been developed which uses tele-robotics, autonomous grasping and image processing. As a starting point, prior work from LLNL and ORNL was restructured and ported to a special real-time development environment. Significant improvements in collision avoidance, force compliance, and shared control were then developed. Improvements of several orders of magnitude were made in some areas to meet the speed and robustness requirements of the application.
In order to engage and help in our daily life, autonomous robots are to operate in dynamic and unstructured environments and interact with people. As the robot's environment and its behaviour are getting more complex, so are the robot's software and the knowledge that the robot needs to carry out its operations. In collaborating with a human to bake a cake, for instance, the robot needs a large number of components to perceive and manipulate the objects and to communicate and coordinate the t...
Andersen, Jens Christian; Andersen, Nils Axel; Ravn, Ole; Blas, Morten Rufus
This extended abstract describes a project to make a robot travel autonomously across a public nature park. The challenge is to detect and follow the right path across junctions and open squares avoiding people and obstacles. The robot is equipped with a laser scanner, a (low accuracy) GPS, wheel...
This paper examines control algorithm requirements for autonomous robot navigation outside laboratory environments. Three aspects of navigation are considered: navigation control in explored terrain, environment interactions with robot sensors, and navigation control in unanticipated situations. Major navigation methods are presented and the relevance of traditional human learning theory is discussed. A new navigation technique linking graph theory and incidental learning is introduced.
Backes, Paul G.; Tso, Kam S.
Fail-safe tele/autonomous robotic system makes it unnecessary for human technicians to enter nuclear-fuel-reprocessing facilities and other high-radiation or otherwise hazardous industrial environments. Used to carry out such experiments as exchanging equipment modules, turning bolts, cleaning surfaces, and grappling and turning objects, by use of a mixture of autonomous actions and teleoperation with either a single arm or two cooperating arms. System capable of fully autonomous operation, teleoperation, or shared control.
Jacoff, Adam; Messina, Elena; Evans, John
One approach to measuring the performance of intelligent systems is to develop standardized or reproducible tests. These tests may be in a simulated environment or in a physical test course. The National Institute of Standards and Technology has developed a test course for evaluating the performance of mobile autonomous robots operating in an urban search and rescue mission. The test course is designed to simulate a collapsed building structure at various levels of fidelity. The course will be used in robotic competitions, such as the American Association for Artificial Intelligence (AAAI) Mobile Robot Competition and the RoboCup Rescue. Designed to be repeatable and highly reconfigurable, the test course challenges a robot's cognitive capabilities such as perception, knowledge representation, planning, autonomy and collaboration. The goal of the test course is to help define useful performance metrics for autonomous mobile robots which, if widely accepted, could accelerate development of advanced robotic capabilities by promoting the re-use of algorithms and system components. The course may also serve as a prototype for further development of performance testing environments which enable robot developers and purchasers to objectively evaluate robots for a particular application. In this paper we discuss performance metrics for autonomous mobile robots, the use of representative urban search and rescue scenarios as a challenge domain, and the design criteria for the test course.
Březina, Tomáš; Ehrenberger, Zdeněk; Houška, P.; Singule, V.
Brno : VUT, 2003 - (Březina, T.; Ehrenberger, Z.; Houška, P.; Singule, V.), pp. 1-2 ISBN 80-214-2312-9. [Mechatronics, robotics and biomechanics 2003. Hrotovice (CZ), 24.03.2003-27.03.2003] Institutional research plan: CEZ:AV0Z2076919 Keywords: mobile robots * autonomous operation * control Subject RIV: JD - Computer Applications, Robotics
The use of landmarks for robot navigation is a popular alternative to having a geometrical model of the environment through which to navigate and monitor self-localization. If the landmarks are defined as special visual structures already in the environment, then we have the possibility of fully autonomous navigation and self-localization using automatically selected landmarks. The thesis investigates autonomous robot navigation and proposes a new method which benefits from the potential of the visual sensor to provide accuracy and reliability to the navigation process while relying on naturally ... update of the estimated robot position while the robot is moving. In order to make the system autonomous, both acquisition and observation of landmarks have to be carried out automatically. The thesis consequently proposes a method for learning and navigation of a working environment and it explores ...
This work aims at demonstrating the inherent advantages of embracing a strong notion of social embodiment in designing a real-world robot control architecture with explicit 'intelligent' social behaviour between a collective of robots. It develops the current thinking on embodiment beyond the physical by demonstrating the importance of social embodiment. A social framework develops the fundamental social attributes found when more than one robot co-inhabit a physical space. The social metaphors of identity, character, stereotypes and roles are presented and implemented within a real-world social robot paradigm in order to facilitate the realisation of explicit social goals.
Kudoh, Hiroyuki; Fujimoto, Keisuke; Nakayama, Yasuichi
The ability to find and grasp target items in an unknown environment is important for working robots. We developed an autonomous navigating and grasping robot. The operations are locating a requested item, moving to where the item is placed, finding the item on a shelf or table, and picking the item up from the shelf or table. To achieve these operations, we designed the robot with three functions: an autonomous navigating function that generates a map and a route in an unknown environment, an item position recognizing function, and a grasping function. We tested this robot in an unknown environment. It achieved a series of operations: moving to a destination, recognizing the positions of items on a shelf, picking up an item, placing it on a cart with its hand, and returning to the starting location. The results of this experiment show the potential of such robots to reduce manual workloads.
Robotic swarms that take inspiration from nature are becoming a fascinating topic for multi-robot researchers. The aim is to control a large number of simple robots in order to solve common complex tasks. Due to the hardware complexities and cost of robot platforms, current research in swarm robotics is mostly performed by simulation software. The simulation of large numbers of these robots in robotic swarm applications is extremely complex and often inaccurate due to the poor modelling of external conditions. In this paper, we present the design of a low-cost, open-platform, autonomous micro-robot (Colias) for robotic swarm applications. Colias employs a circular platform with a diameter of 4 cm. It has a maximum speed of 35 cm/s, which enables it to traverse large arenas quickly in swarm scenarios. Long-range infrared modules with an adjustable output power allow the robot to communicate with its direct neighbours at a range of 0.5 cm to 2 m. Colias has been designed as a complete platform with supporting software development tools for robotics education and research. It has been tested in both individual and swarm scenarios, and the observed results demonstrate its feasibility for use as a micro-sized mobile robot and as a low-cost platform for robot swarm applications.
Ballantyne, James; Johns, Edward; Valibeik, Salman; Wong, Charence; Yang, Guang-Zhong
Dynamic and complex indoor environments present a challenge for mobile robot navigation. The robot must be able to map an environment that often has repetitive features whilst keeping track of its own pose and location. This chapter introduces some of the key considerations for human-guided navigation. Rather than letting the robot explore the environment fully autonomously, we consider the use of human guidance for progressively building up the environment map and establishing scene association, learning, as well as navigation and planning. After the guide has taken the robot through the environment and indicated the points of interest via hand gestures, the robot is then able to use the geometric map and scene descriptors captured during the tour to create a high-level plan for subsequent autonomous navigation within the environment. Issues related to gesture recognition, multi-cue integration, tracking, target pursuing, scene association and navigation planning are discussed.
This paper describes the design of an autonomous, sonar-based world mapping system for collision prevention in robotic systems. Obstacle detection and mapping is performed as a task that competes with higher-level tasks for the robot's attention. All tasks are integrated within a hierarchy, organized and co-ordinated by schemes analogous to biological reflexes and fixed action patterns. It is illustrated how the existence of low-level reflex behaviours can enhance the survivability and autonomy of complex systems and simplify the design of complex higher-level controls like our autonomous sonar-based world mapping system
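The reflex-over-task arbitration described here can be sketched as priority-based behaviour selection, where a low-level reflex pre-empts the higher-level mapping task. A minimal illustration (behaviour names and priorities are invented, not from the paper):

```python
def arbitrate(behaviours):
    """Return the command of the highest-priority behaviour that fires."""
    active = [b for b in behaviours if b["triggered"]]
    if not active:
        return "idle"
    return max(active, key=lambda b: b["priority"])["command"]

# A collision reflex outranks the ongoing sonar-mapping task.
behaviours = [
    {"name": "bump_reflex",   "priority": 10, "triggered": True, "command": "stop"},
    {"name": "sonar_mapping", "priority": 1,  "triggered": True, "command": "map_sweep"},
]
print(arbitrate(behaviours))  # -> stop
```

Because the reflex only needs a priority and a trigger, adding it does not complicate the design of the higher-level mapping controller, which is the point the paper makes.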
Federal Laboratory Consortium — FUNCTION: Provides an environment for developing and evaluating intelligent software for both actual and simulated autonomous vehicles. Laboratory computers provide...
Sparkes, Andrew; Aubrey, Wayne; Byrne, Emma; Clare, Amanda; Khan, Muhammed N; Liakata, Maria; Markham, Magdalena; Rowland, Jem; Soldatova, Larisa N.; Whelan, Kenneth E; Young, Michael; King, Ross D.
We review the main components of autonomous scientific discovery, and how they lead to the concept of a Robot Scientist. This is a system which uses techniques from artificial intelligence to automate all aspects of the scientific discovery process: it generates hypotheses from a computer model of the domain, designs experiments to test these hypotheses, runs the physical experiments using robotic systems, analyses and interprets the resulting data, and repeats the cycle. We describe our two ...
This Springer Brief examines the combination of computer vision techniques and machine learning algorithms necessary for humanoid robots to develop "true consciousness." It illustrates the critical first step towards reaching "deep learning," long considered the holy grail for machine learning scientists worldwide. Using the example of the iCub, a humanoid robot which learns to solve 3D mazes, the book explores the challenges to create a robot that can perceive its own surroundings. Rather than relying solely on human programming, the robot uses physical touch to develop a neural map of its en
Zornetzer, Steve; Gage, Douglas
Contents include the following: Introduction. Process, Mission Drivers, Deliverables, and Interfaces. Autonomy. Crew-Centered and Remote Operations. Integrated Systems Health Management. Autonomous Vehicle Control. Autonomous Process Control. Robotics. Robotics for Solar System Exploration. Robotics for Lunar and Planetary Habitation. Robotics for In-Space Operations. Computing Systems. Conclusion.
Martins, Alfredo; Amaral, Guilherme; Dias, André; Almeida, Carlos; Almeida, José; Silva, Eduardo
13th International Conference on Autonomous Robot Systems (Robotica), 2013 In this paper we present an autonomous ground robot developed for outdoor applications in unstructured scenarios. The robot was developed as a versatile robotics platform for development, test and validation of research in navigation, control, perception and multiple robot coordination on all terrain scenarios. The hybrid systems approach to the control architecture is discussed in the context of multiple robot coor...
Lam, Raymond K.; Doshi, Rajkumar S.; Atkinson, David J.; Lawson, Denise M.
A major requirement for an autonomous robot is the capability to diagnose faults during plan execution in an uncertain environment. Many diagnostic researches concentrate only on hardware failures within an autonomous robot. Taking a different approach, the implementation of a Telerobot Diagnostic System that addresses, in addition to the hardware failures, failures caused by unexpected event changes in the environment or failures due to plan errors, is described. One feature of the system is the utilization of task-plan knowledge and context information to deduce fault symptoms. This forward deduction provides valuable information on past activities and the current expectations of a robotic event, both of which can guide the plan-execution inference process. The inference process adopts a model-based technique to recreate the plan-execution process and to confirm fault-source hypotheses. This technique allows the system to diagnose multiple faults due to either unexpected plan failures or hardware errors. This research initiates a major effort to investigate relationships between hardware faults and plan errors, relationships which were not addressed in the past. The results of this research will provide a clear understanding of how to generate a better task planner for an autonomous robot and how to recover the robot from faults in a critical environment.
Shademan, Azad; Decker, Ryan S; Opfermann, Justin D; Leonard, Simon; Krieger, Axel; Kim, Peter C W
The current paradigm of robot-assisted surgeries (RASs) depends entirely on an individual surgeon's manual capability. Autonomous robotic surgery-removing the surgeon's hands-promises enhanced efficacy, safety, and improved access to optimized surgical techniques. Surgeries involving soft tissue have not been performed autonomously because of technological limitations, including lack of vision systems that can distinguish and track the target tissues in dynamic surgical environments and lack of intelligent algorithms that can execute complex surgical tasks. We demonstrate in vivo supervised autonomous soft tissue surgery in an open surgical setting, enabled by a plenoptic three-dimensional and near-infrared fluorescent (NIRF) imaging system and an autonomous suturing algorithm. Inspired by the best human surgical practices, a computer program generates a plan to complete complex surgical tasks on deformable soft tissue, such as suturing and intestinal anastomosis. We compared metrics of anastomosis-including the consistency of suturing informed by the average suture spacing, the pressure at which the anastomosis leaked, the number of mistakes that required removing the needle from the tissue, completion time, and lumen reduction in intestinal anastomoses-between our supervised autonomous system, manual laparoscopic surgery, and clinically used RAS approaches. Despite dynamic scene changes and tissue movement during surgery, we demonstrate that the outcome of supervised autonomous procedures is superior to surgery performed by expert surgeons and RAS techniques in ex vivo porcine tissues and in living pigs. These results demonstrate the potential for autonomous robots to improve the efficacy, consistency, functional outcome, and accessibility of surgical techniques. PMID:27147588
The Robotics Development Group at the Savannah River Site is developing an autonomous robot (SIMON) to perform radiological surveys of potentially contaminated floors. The robot scans floors at a speed of one inch per second and stops, sounds an alarm, and flashes lights when contamination in a certain area is detected. The contamination of interest here is primarily alpha and beta-gamma. The robot, a Cybermotion K2A base, is radio controlled, uses dead reckoning to determine vehicle position, and docks with a charging station to replenish its batteries and calibrate its position. It uses an ultrasonic ranging system for collision avoidance. In addition, two safety bumpers located at the front and back of the robot will stop the robot's motion when they are depressed. Paths for the robot are preprogrammed, and the robot's motion can be monitored on a remote screen which shows a graphical map of the environment. The radiation instrument being used is an Eberline RM22A monitor. This monitor is microcomputer based with a serial I/O interface for remote operation. Up to 30 detectors may be configured with the RM22A.
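Dead reckoning of the kind SIMON relies on can be sketched as a differential-drive pose update from wheel displacements. A minimal illustration (the wheel-base value and function layout are assumptions, not taken from the SIMON system):

```python
import math

def dead_reckon(x, y, theta, d_left, d_right, wheel_base):
    """Advance a differential-drive pose estimate (x, y, heading)
    given the distances travelled by the left and right wheels."""
    d = (d_left + d_right) / 2.0              # distance moved by the centre
    dtheta = (d_right - d_left) / wheel_base  # change in heading
    # Integrate along the mean heading over the step.
    x += d * math.cos(theta + dtheta / 2.0)
    y += d * math.sin(theta + dtheta / 2.0)
    return x, y, theta + dtheta

# Straight-line step: both wheels advance 1.0 m, heading unchanged.
pose = dead_reckon(0.0, 0.0, 0.0, 1.0, 1.0, wheel_base=0.5)
```

Such estimates drift with distance travelled, which is why the robot re-calibrates its position each time it docks with the charging station.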
Dudar, Aed M.; Wagner, David G.; Teese, Gregory D.
An apparatus for conducting radiologic surveys. The apparatus comprises in the main a robot capable of following a preprogrammed path through an area, a radiation monitor adapted to receive input from a radiation detector assembly, ultrasonic transducers for navigation and collision avoidance, and an on-board computer system including an integrator for interfacing the radiation monitor and the robot. Front and rear bumpers are attached to the robot by bumper mounts. The robot may be equipped with memory boards for the collection and storage of radiation survey information. The on-board computer system is connected to a remote host computer via a UHF radio link. The apparatus is powered by a rechargeable 24-volt DC battery, and is stored at a docking station when not in use and/or for recharging. A remote host computer contains a stored database defining paths between points in the area where the robot is to operate, including but not limited to the locations of walls, doors, stationary furniture and equipment, and sonic markers if used. When a program consisting of a series of paths is downloaded to the on-board computer system, the robot conducts a floor survey autonomously at any preselected rate. When the radiation monitor detects contamination, the robot resurveys the area at reduced speed and resumes its preprogrammed path if the contamination is not confirmed. If the contamination is confirmed, the robot stops and sounds an alarm.
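The detect/resurvey/confirm behaviour described above amounts to a small state machine; a hedged sketch (state and function names are illustrative, not from the patent):

```python
def survey_step(state, contamination_detected):
    """One transition of the survey logic: a first detection triggers a
    reduced-speed resurvey; a second detection confirms and alarms."""
    if state == "patrol":
        return "resurvey" if contamination_detected else "patrol"
    if state == "resurvey":
        return "alarm" if contamination_detected else "patrol"
    return state  # 'alarm' is terminal until an operator intervenes

# First hit is not confirmed on the reduced-speed second pass.
s = "patrol"
for reading in [True, False]:
    s = survey_step(s, reading)
print(s)  # -> patrol
```

The same two calls with readings `[True, True]` would end in the `alarm` state, matching the confirmed-contamination branch of the description.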
Virágh, Csaba; Vásárhelyi, Gábor; Tarcai, Norbert; Szörényi, Tamás; Somorjai, Gergő; Nepusz, Tamás; Vicsek, Tamás
Animal swarms displaying a variety of typical flocking patterns would not exist without the underlying safe, optimal and stable dynamics of the individuals. The emergence of these universal patterns can be efficiently reconstructed with agent-based models. If we want to reproduce these patterns with artificial systems, such as autonomous aerial robots, agent-based models can also be used in their control algorithms. However, finding the proper algorithms and thus understanding the essential characteristics of the emergent collective behaviour requires thorough and realistic modeling of the robot and also the environment. In this paper, we first present an abstract mathematical model of an autonomous flying robot. The model takes into account several realistic features, such as time delay and locality of communication, inaccuracy of the on-board sensors and inertial effects. We present two decentralized control algorithms. One is based on a simple self-propelled flocking model of animal collective motion, the other is a collective target tracking algorithm. Both algorithms contain a viscous friction-like term, which aligns the velocities of neighbouring agents parallel to each other. We show that this term can be essential for reducing the inherent instabilities of such a noisy and delayed realistic system. We discuss simulation results on the stability of the control algorithms, and perform real experiments to show the applicability of the algorithms on a group of autonomous quadcopters. In our case, bio-inspiration works in two ways. On the one hand, the whole idea of trying to build and control a swarm of robots comes from the observation that birds tend to flock to optimize their behaviour as a group. On the other hand, by using a realistic simulation framework and studying the group behaviour of autonomous robots we can learn about the major factors influencing the flight of bird flocks. PMID:24852272
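The viscous friction-like alignment term can be sketched per agent as a pull toward the mean velocity of its neighbours; an illustrative reconstruction, not the authors' code (the gain and vector layout are assumptions):

```python
def alignment_term(v_self, v_neighbours, gain):
    """Viscous-friction-like velocity correction: steer an agent's
    velocity toward the mean velocity of its neighbours, per component."""
    n = len(v_neighbours)
    mean = [sum(v[i] for v in v_neighbours) / n for i in range(len(v_self))]
    return [gain * (m - v) for m, v in zip(mean, v_self)]

# An agent moving along +x among neighbours moving along +y is
# steered toward the group direction.
dv = alignment_term([1.0, 0.0], [[0.0, 1.0], [0.0, 1.0]], gain=0.5)
```

Damping velocity differences in this way is what the paper credits with suppressing the oscillations that delays and sensor noise otherwise excite.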
Létourneau, Dominic; Michaud, François; Valin, Jean-Marc
The ability to read would surely contribute to increased autonomy of mobile robots operating in the real world. The process seems fairly simple: the robot must be capable of acquiring an image of a message to read, extract the characters, and recognize them as symbols, characters, and words. Using an optical Character Recognition algorithm on a mobile robot however brings additional challenges: the robot has to control its position in the world and its pan-tilt-zoom camera to find textual messages to read, potentially having to compensate for its viewpoint of the message, and use the limited onboard processing capabilities to decode the message. The robot also has to deal with variations in lighting conditions. In this paper, we present our approach demonstrating that it is feasible for an autonomous mobile robot to read messages of specific colors and font in real-world conditions. We outline the constraints under which the approach works and present results obtained using a Pioneer 2 robot equipped with a Pentium 233 MHz and a Sony EVI-D30 pan-tilt-zoom camera.
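The character-recognition stage can be illustrated with a toy nearest-template classifier; this is a deliberately simplified stand-in for the OCR algorithm actually used on the Pioneer 2 (the glyph bitmaps and templates are invented):

```python
def recognize(glyph, templates):
    """Classify a binary glyph bitmap by counting mismatching pixels
    against each known template and taking the closest match."""
    def dist(a, b):
        return sum(x != y for ra, rb in zip(a, b) for x, y in zip(ra, rb))
    return min(templates, key=lambda ch: dist(glyph, templates[ch]))

templates = {
    "I": [[0, 1, 0], [0, 1, 0], [0, 1, 0]],
    "L": [[1, 0, 0], [1, 0, 0], [1, 1, 1]],
}
noisy_L = [[1, 0, 0], [1, 0, 0], [1, 1, 0]]  # one pixel corrupted
print(recognize(noisy_L, templates))  # -> L
```

Even this toy version shows why lighting variation matters: thresholding the camera image into a clean binary glyph is the fragile step that precedes matching.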
In the past, notions of embodiment have been applied to robotics mainly in the realm of very simple robots, and supporting low-level mechanisms such as dynamics and navigation. In contrast, most human-like, interactive, and socially adept robotic systems turn away from embodiment and use amodal, symbolic, and modular approaches to cognition and interaction. At the same time, recent research in Embodied Cognition (EC) is spanning an increasing number of complex cognitive processes, including language, nonverbal communication, learning, and social behavior. This article suggests adopting a modern EC approach for autonomous robots interacting with humans. In particular, we present three core principles from EC that may be applicable to such robots: (a) modal perceptual representation, (b) action/perception and action/cognition integration, and (c) a simulation-based model of top-down perceptual biasing. We describe a computational framework based on these principles, and its implementation on two physical robots. This could provide a new paradigm for embodied human-robot interaction based on recent psychological and neurological findings. PMID:22893571
Myers, Scott D.
This paper discusses the requirements and preliminary design of robotic vehicle designed for performing autonomous exterior perimeter security patrols around warehouse areas, ammunition supply depots, and industrial parks for the U.S. Department of Defense. The preliminary design allows for the operation of up to eight vehicles in a six kilometer by six kilometer zone with autonomous navigation and obstacle avoidance. In addition to detection of crawling intruders at 100 meters, the system must perform real-time inventory checking and database comparisons using a microwave tags system.
This work, a joint research between ENEA (the Italian National Agency for Energy, New Technologies and the Environment) and DIGITAL, presents the layout of the ROBERT project, ROBot with Environmental Recognizing Tools, under development in ENEA laboratories. This project aims at the development of an autonomous mobile vehicle able to navigate in a known indoor environment through the use of artificial vision. The general architecture of the robot is shown together with the data and control flow among the various subsystems. The inner structure of the latter, complete with its functionalities, is also given in detail.
Ehrenberger, Zdeněk; Kratochvíl, Ctirad
Vol. 1. Warsaw : Meander S.C., 2000 - (Jablonski, R.), pp. 63-66 ISBN 83-914366-0-8. [International conference Mechatronics 2000. Warsaw (PL), 05.11.2000-07.11.2000] Other grants: ÚT AV ČR(XC) 11/1U Keywords: modelling * robots Subject RIV: JA - Electronics; Optoelectronics, Electrical Engineering
An autonomous learning device must solve the example bounding problem, i.e., it must divide the continuous universe into discrete examples from which to learn. We describe an architecture which incorporates an example bounder for learning. The architecture is implemented in the GPAL program. An example run with a real mobile robot shows that the program learns and uses new causal, qualitative, and quantitative relationships.
The robotics development group at the Savannah River Laboratory (SRL) is developing a mobile autonomous robot that performs radiological surveys of potentially contaminated floors. The robot is called SIMON, which stands for Semi-Intelligent Mobile Observing Navigator. Certain areas of SRL are classified as radiologically controlled areas (RCAs). In an RCA, radioactive materials are frequently handled by workers, and thus, the potential for contamination is ever present. Current methods used for floor radiological surveying include labor-intensive manual scanning or random smearing of certain floor locations. An autonomous robot such as SIMON performs the surveying task in a much more efficient manner and will track down contamination before it is contacted by humans. SIMON scans floors at a speed of 1 in./s and stops and alarms upon encountering contamination. Its environment is well defined, consisting of smooth building floors with wide corridors. The kinds of contamination that SIMON is capable of detecting are alpha and beta-gamma. The contamination levels of interest are low to moderate.
Hamid D. Taghirad
“Autonomous Mobile Robots: Past, Present and Future of SLAM”, 2013. In: Workshop at the First RSI/ISM International Conference on Robotics and Mechatronics, Sharif University of Technology. Presenter: Prof. Hamid D. Taghirad, 2013.
Baumgartner, Eric Thomas
This dissertation describes estimation and control methods for use in the development of an autonomous mobile robot for structured environments. The navigation of the mobile robot is based on precise estimates of the position and orientation of the robot within its environment. The extended Kalman filter algorithm is used to combine information from the robot's drive wheels with periodic observations of small, wall-mounted, visual cues to produce the precise position and orientation estimates. The visual cues are reliably detected by at least one video camera mounted on the mobile robot. Typical position estimates are accurate to within one inch. A path tracking algorithm is also developed to follow desired reference paths which are taught by a human operator. Because of the time-independence of the tracking algorithm, the speed that the vehicle travels along the reference path is specified independent from the tracking algorithm. The estimation and control methods have been applied successfully to two experimental vehicle systems. Finally, an analysis of the linearized closed-loop control system is performed to study the behavior and the stability of the system as a function of various control parameters.
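The sensor-fusion idea, combining drive-wheel odometry with periodic observations of wall-mounted visual cues, reduces in the scalar case to the standard Kalman measurement update. A one-dimensional sketch for illustration only (the dissertation uses a full extended Kalman filter over position and orientation):

```python
def kalman_update(x, p, z, r):
    """Scalar Kalman measurement update: fuse a predicted position x
    (variance p) with a landmark observation z (variance r)."""
    k = p / (p + r)          # Kalman gain: trust the less uncertain source
    x_new = x + k * (z - x)  # corrected estimate
    p_new = (1.0 - k) * p    # reduced uncertainty after the observation
    return x_new, p_new

# Odometry predicts 10.0 (variance 4.0); a visual cue reads 12.0 (variance 1.0).
x, p = kalman_update(10.0, 4.0, 12.0, 1.0)
```

Because the cue observation is the more certain source here, the fused estimate lands closer to 12.0 than to 10.0, and the posterior variance is smaller than either input's.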
Huntsberger, Terrance; Aghazarian, Hrand
ROAMAN is a computer program for autonomous navigation of a mobile robot on a long (as much as hundreds of meters) traversal of terrain. Developed for use aboard a robotic vehicle (rover) exploring the surface of a remote planet, ROAMAN could also be adapted to similar use on terrestrial mobile robots. ROAMAN implements a combination of algorithms for (1) long-range path planning based on images acquired by mast-mounted, wide-baseline stereoscopic cameras, and (2) local path planning based on images acquired by body-mounted, narrow-baseline stereoscopic cameras. The long-range path-planning algorithm autonomously generates a series of waypoints that are passed to the local path-planning algorithm, which plans obstacle-avoiding legs between the waypoints. Both the long- and short-range algorithms use an occupancy-grid representation in computations to detect obstacles and plan paths. Maps that are maintained by the long- and short-range portions of the software are not shared because substantial localization errors can accumulate during any long traverse. ROAMAN is not guaranteed to generate an optimal shortest path, but does maintain the safety of the rover.
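Path planning over an occupancy-grid representation, which both ROAMAN planners use, can be illustrated with a breadth-first search between two waypoints. A minimal sketch (ROAMAN's actual planners are more sophisticated; the grid and coordinates are invented):

```python
from collections import deque

def grid_path(grid, start, goal):
    """Shortest 4-connected path on an occupancy grid
    (0 = free cell, 1 = obstacle), or None if unreachable."""
    rows, cols = len(grid), len(grid[0])
    prev = {start: None}            # also serves as the visited set
    q = deque([start])
    while q:
        r, c = q.popleft()
        if (r, c) == goal:          # reconstruct path back to start
            node, path = (r, c), []
            while node is not None:
                path.append(node)
                node = prev[node]
            return path[::-1]
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols \
                    and grid[nr][nc] == 0 and (nr, nc) not in prev:
                prev[(nr, nc)] = (r, c)
                q.append((nr, nc))
    return None

grid = [[0, 0, 0],
        [1, 1, 0],   # a wall forces a detour around the right side
        [0, 0, 0]]
path = grid_path(grid, (0, 0), (2, 0))
```

The hierarchical trick in ROAMAN is then to run a coarse long-range search like this to produce waypoints, and a finer local search between consecutive waypoints.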
Task-level control refers to the integration and coordination of planning, perception, and real-time control to achieve given high-level goals. Autonomous mobile robots need task-level control to effectively achieve complex tasks in uncertain, dynamic environments. This paper describes the Task Control Architecture (TCA), an implemented system that provides commonly needed constructs for task-level control. Facilities provided by TCA include distributed communication, task decomposition and sequencing, resource management, monitoring and exception handling. TCA supports a design methodology in which robot systems are developed incrementally, starting first with deliberative plans that work in nominal situations, and then layering them with reactive behaviors that monitor plan execution and handle exceptions. To further support this approach, design and analysis tools are under development to provide ways of graphically viewing the system and validating its behavior.
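The layering of reactive monitors over deliberative plans that TCA supports can be sketched as a plan executor with exception handling; the step names and recovery policy below are invented for illustration:

```python
def execute(plan, monitor, recover):
    """Run plan steps in order; a monitor checks each step and a
    recovery routine injects corrective steps when one fails."""
    trace = []
    for step in plan:
        trace.append(step)
        if not monitor(step):            # reactive layer catches the failure
            trace.extend(recover(step))  # exception handling: patch the plan
    return trace

trace = execute(
    ["move_to_door", "open_door", "enter"],
    monitor=lambda s: s != "open_door",          # the door turns out to be stuck
    recover=lambda s: ["request_help", "retry_" + s],
)
```

The nominal deliberative plan stays simple; robustness comes from the monitor/recovery layer added around it, which mirrors TCA's incremental design methodology.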
Jeong, Kil-Woong; Cho, Ik-Jin; Lee, Yun-Jung
Most recently developed robots are human-friendly robots which imitate animals or humans, such as entertainment robots, biomimetic robots, and humanoid robots. Interest in these robots is increasing because social trends are focused on health, welfare, and aging. Autonomous eating is the most unique and inherent behavior of pets and animals. Most entertainment robots and pet robots make use of internal batteries, and robots with internal batteries are not able to operate while charging. Therefore, if a robot has an autonomous function for eating batteries as its feed, the robot is not only able to operate while recharging but also becomes more human friendly, like a pet. Here, a new autonomous eating mechanism is introduced for a biomimetic robot called ELIRO-II (Eating LIzard RObot version 2). The ELIRO-II is able to find food (a small battery), eat, and evacuate by itself. This work describes the sub-parts of the developed mechanism, such as the head-part, mouth-part, and stomach-part. In addition, the control system of the autonomous eating mechanism is described.
Tilden, M.; Hasslacher, B.; Mainieri, R.; Moses, J.
The idea of building autonomous robots that can carry out complex and nonrepetitive tasks is an old one, so far unrealized in any meaningful hardware. Tilden has shown recently that there are simple, processor-free solutions to building autonomous mobile machines that continuously adapt to unknown and hostile environments, are designed primarily to survive, and are extremely resistant to damage. These devices use smart mechanics and simple (low component count) electronic neuron control structures having the functionality of biological organisms from simple invertebrates to sophisticated members of the insect and crab family. These devices are paradigms for the development of autonomous machines that can carry out directed goals. The machine then becomes a robust survivalist platform that can carry sensors or instruments. These autonomous roving machines, now in an early stage of development (several proof-of-concept prototype walkers have been built), can be developed so that they are inexpensive, robust, and versatile carriers for a variety of instrument packages. Applications are immediate and many, in areas as diverse as prosthetics, medicine, space, construction, nanoscience, defense, remote sensing, environmental cleanup, and biotechnology.
Wong, Andrew K. C.
This paper presents a computer vision system being developed at the Pattern Analysis and Machine Intelligence (PAMI) Lab of the University of Waterloo and at the Vision, Intelligence and Robotics Technologies Corporation (VIRTEK) in support of the Canadian Space Autonomous Robotics Project. This system was originally developed for flexible manufacturing and guidance of autonomous roving vehicles. In the last few years, it has been engineered to support the operations of the Mobile Service System (MSS) (or its equivalence) for the Space Station Project. In the near term, this vision system will provide vision capability for the recognition, location and tracking of payloads as well as for relating the spatial information to the manipulator for capturing, manipulating and berthing payloads. In the long term, it will serve in the role of inspection, surveillance and servicing of the Station. Its technologies will be continually expanded and upgraded to meet the demand as the needs of the Space Station evolve and grow. Its spin-off technologies will benefit the industrial sectors as well.
Kjærgaard, Morten; Andersen, Nils Axel; Ravn, Ole
This article presents a framework for configuring the individual components used in component-based robot control systems. Using smart parameters that adapt to the respective robot system makes it possible to obtain optimal parameter values while reusing the software components, without expert knowledge about the underlying algorithms. The framework also makes it possible for the robot to autonomously calibrate itself, resulting in higher stability of the robot and less development time required. The work is a result of an industrial research project aimed at lowering development costs and improving robustness of autonomous robot applications.
Grolinger, Katarina; Jerbic, Bojan; Vranjes, Bozo
The purpose of an autonomous robot is to solve various tasks while adapting its behavior to a variable environment; it is expected to navigate much like a human would, including handling uncertain and unexpected obstacles. To achieve this, the robot has to be able to find solutions to unknown situations, to learn from experience (that is, to acquire action procedures together with the corresponding knowledge of the workspace structure), and to recognize its working environment. The planning of intelligent robot behavior presented in this paper implements reinforcement learning based on strategic and random attempts for finding solutions, and a neural network approach for memorizing and recognizing the workspace structure (the structural assignment problem). Some well-known neural networks based on unsupervised learning are considered with regard to the structural assignment problem. An adaptive fuzzy shadowed neural network is developed. It has an additional shadowed hidden layer, a specific learning rule and an initialization phase. The developed neural network combines the advantages of networks based on Adaptive Resonance Theory and, using the shadowed hidden layer, provides the ability to recognize slightly translated or rotated obstacles in any direction.
Taraglio, S.; Nanni, V. [ENEA, Robotics and Information Technology Division, Rome (Italy)
In this book are summarised some of the results of the PRASSI project, as presented by the different partners of the effort. PRASSI is an acronym which stands for Autonomous Robotic Platform for the Security and Surveillance of plants; the Italian for it is 'Piattaforma Robotica per la Sorveglianza e Sicurezza d'Impianto'. This project has been funded by the Italian Ministry for Education, University and Research (MIUR) in the framework of the project High Performance Computing Applied to Robotics (Calcolo Parallelo con Applicazioni alla Robotica) of law 95/1995. The idea behind such an initiative is to foster the knowledge, and possibly the use, of high performance computing in the research and industrial community. In other words, robotic scientists are always simplifying their algorithms or using particular approaches (e.g. soft computing) in order to use standard processors for difficult sensor data processing; well, what if an embedded parallel computer were available, with at least an order of magnitude more computing power?
Programming of autonomous mobile robots is subject to a set of unique requirements, which differ significantly from those of pure software projects and of programming stationary robots. Despite severe constraints on the payload, and thereby limited available computational power, real-time constraints for the physical interaction of the robot with its environment must be satisfied. Furthermore, the complexity of robots, the uncertainties in sensors and the interaction with the environment and the cooperation...
Sanfeliu Cortés, Alberto
In this paper we present a summary of some of the research that we are developing in the Institute of Robotics of the CSIC-UPC, in the field of Learning and Robot Vision for autonomous mobile robots. We describe the problems that we have found and some solutions that have been applied in two issues: tracking objects and learning and recognition of 3D objects in robotic environments. We will explain some of the results accomplished.
Prabuwono, Anton Satria; Said, Samsi; Burhanuddin; Sulaiman, Riza
In this study, performance evaluations of an autonomous contour-following task with three different algorithms have been performed for an Adept SCARA robot. A prototype of a smart tool integrated with a sensor has been designed. It can be attached to and detached from the robot gripper and interfaced through the I/O pins of the Adept robot controller for automated robot teaching operation. The algorithms developed were tested on a semicircular object of 40 millimeter radius. The semicircular object was selected bec...
Exploring autonomy in robotics is a meaningful task. The intuitive definition of autonomy is the capability of a robot to make a decision based on its own knowledge, acquired by its distributed sensors, without any human interference. Throughout this framework we discuss some algorithms and techniques underlying the subjects of adaptive navigation and motion planning for autonomous mobile robots. Mobile Robots will play an important role in many future applications, such as ...
Autonomous mobile robots must respond to external challenges and threats in real time. One way to satisfy this requirement is to use a fast low level intelligence to react to local environment changes. A fast reactive controller has been implemented which performs the task of real time local navigation by integrating primitive elements of perception, planning, and control. Competing achievement and constraint behaviors are used to allow abstract qualitative specification of navigation goals. An interface is provided to allow a higher level deliberative intelligence with a more global perspective to set local goals for the reactive controller. The reactive controller's simplistic strategies may not always succeed, so a means to monitor and redirect the reactive controller is provided.
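The integration of competing achievement and constraint behaviors described above can be illustrated with a deliberately small 1-D sketch (the gains, the safety distance, and the weighted-sum arbitration are illustrative assumptions, not the paper's actual reactive controller):

```python
def reactive_command(robot, goal, obstacle, safe_dist=1.0):
    """One arbitration step between an achievement behaviour
    (attraction toward the goal) and a constraint behaviour
    (repulsion from an obstacle inside `safe_dist`); the two
    contributions sum into a single 1-D velocity command."""
    attract = 0.5 * (goal - robot)              # achievement behaviour
    d = robot - obstacle
    repel = 0.0
    if abs(d) < safe_dist:                      # constraint behaviour
        repel = (1.0 if d >= 0 else -1.0) * (safe_dist - abs(d)) * 2.0
    return attract + repel

# Far from the obstacle the goal behaviour dominates.
print(reactive_command(robot=0.0, goal=4.0, obstacle=10.0))   # 2.0
# Near the obstacle the constraint behaviour pushes back.
print(reactive_command(robot=9.5, goal=14.0, obstacle=10.0))  # 1.25
```

A higher-level deliberative layer, as in the paper, would then adjust the goal passed to such a controller rather than the low-level command itself.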
Martius, Georg; Olbrich, Eckehard
Quantifying behaviors of robots which were generated autonomously from task-independent objective functions is an important prerequisite for objective comparisons of algorithms and movements of animals. The temporal sequence of such a behavior can be considered as a time series and hence complexity measures developed for time series are natural candidates for its quantification. The predictive information and the excess entropy are such complexity measures. They measure the amount of information the past contains about the future and thus quantify the nonrandom structure in the temporal sequence. However, when using these measures for systems with continuous states one has to deal with the fact that their values will depend on the resolution with which the systems states are observed. For deterministic systems both measures will diverge with increasing resolution. We therefore propose a new decomposition of the excess entropy in resolution dependent and resolution independent parts and discuss how they depend on the dimensionality of the dynamics, correlations and the noise level. For the practical estimation we propose to use estimates based on the correlation integral instead of the direct estimation of the mutual information using the algorithm by Kraskov et al. (2004) which is based on next neighbor statistics because the latter allows less control of the scale dependencies. Using our algorithm we are able to show how autonomous learning generates behavior of increasing complexity with increasing learning duration.
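As a concrete illustration of the quantities involved, the mutual information between past and future blocks of a discrete time series can be estimated with a simple plug-in estimator (a generic sketch of the predictive-information idea at one fixed resolution, not the correlation-integral estimator the authors propose):

```python
from collections import Counter
from math import log2

def predictive_information(seq, k=1):
    """Plug-in estimate of I(past_k; future_k): the mutual information
    between the previous k symbols and the next k symbols of a
    discrete time series (a finite-resolution stand-in for the
    excess entropy discussed above)."""
    pasts, futures, joints = Counter(), Counter(), Counter()
    n = 0
    for i in range(k, len(seq) - k + 1):
        p = tuple(seq[i - k:i])
        f = tuple(seq[i:i + k])
        pasts[p] += 1; futures[f] += 1; joints[(p, f)] += 1
        n += 1
    mi = 0.0
    for (p, f), c in joints.items():
        pj = c / n
        mi += pj * log2(pj / ((pasts[p] / n) * (futures[f] / n)))
    return mi

# For a deterministic alternating sequence the past fully determines
# the future, so I(past; future) = H(future) = 1 bit.
seq = [0, 1] * 500
print(round(predictive_information(seq, k=1), 3))  # 1.0
```

For continuous robot states the value of such an estimate depends on the discretisation, which is exactly the resolution dependence the paper decomposes.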
The requirements of autonomous robotic vehicles demand highly efficient algorithms as well as software. Today's advanced computer hardware technology does not provide these types of extensive processing capabilities, so there is still a major space and time limitation in the technologies available for autonomous robotic applications. Nowadays, small to miniature mobile robots are required for investigation, surveillance and hazardous-material detection in military and industrial applications. But these small-sized robots have limited power capacity as well as limited memory and processing resources. A number of algorithms exist for producing optimal paths under dynamically changing costs. This paper presents a new ant colony based approach which is helpful in solving the path planning problem for autonomous robotic applications. Simulation experiments verified the validity of the algorithm in terms of time.
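The ant-colony idea such path planners build on can be sketched as follows (a toy implementation with assumed evaporation and deposit constants, not the authors' algorithm): ants perform pheromone-biased random walks, and edges on short successful paths accumulate pheromone faster than it evaporates.

```python
import random

def aco_shortest_path(graph, start, goal, ants=50, iters=30,
                      evaporation=0.5, q=1.0, seed=0):
    """Minimal ant colony optimisation sketch: ants walk from start
    to goal, depositing pheromone inversely proportional to path
    length; pheromone evaporates each iteration, so short paths
    are progressively reinforced."""
    rng = random.Random(seed)
    tau = {(u, v): 1.0 for u in graph for v in graph[u]}
    best = None
    for _ in range(iters):
        paths = []
        for _ in range(ants):
            node, path, visited = start, [start], {start}
            while node != goal:
                choices = [v for v in graph[node] if v not in visited]
                if not choices:
                    break  # dead end: abandon this ant
                weights = [tau[(node, v)] for v in choices]
                node = rng.choices(choices, weights=weights)[0]
                path.append(node); visited.add(node)
            if node == goal:
                paths.append(path)
                if best is None or len(path) < len(best):
                    best = path
        for edge in tau:            # evaporation
            tau[edge] *= (1.0 - evaporation)
        for path in paths:          # deposit, stronger on short paths
            for u, v in zip(path, path[1:]):
                tau[(u, v)] += q / len(path)
    return best

# Small graph where A-B-D (3 nodes) beats the detour A-C-E-D (4 nodes).
graph = {'A': ['B', 'C'], 'B': ['A', 'D'], 'C': ['A', 'E'],
         'E': ['C', 'D'], 'D': ['B', 'E']}
print(aco_shortest_path(graph, 'A', 'D'))  # ['A', 'B', 'D']
```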
Corominas Murtra, Andreu; Mirats-Tur, Josep M.
This technical report defines the spatial representation and the map file format used in a mobile robot map-based autonomous navigation system designed to be deployed in urban areas. After a discussion of common requirements of spatial representations for map-based mobile robot autonomous navigation, a proposed environment model that fulfills the previously discussed requirements is formally presented. An example of a map representing an outdoor area of a university campus of about 10,000 m2 is...
WU Er-yong; ZHOU Wen-hui; ZHANG Li; DAI Guo-jun
This paper presents a software framework for an off-road autonomous robot navigation system. Given the requirements of accurate terrain perception and instantaneous obstacle detection, a navigation software framework was developed based on the principles of the "three-layer architecture" of intelligent systems. Utilizing technologies of distributed systems, machine learning and multiple-sensor fusion, each functional module is discussed. This paper aims to provide a framework reference for autonomous robot navigation system design.
Fujimoto, Katsuharu; Kaji, Hirotaka; Negoro, Masanori; Yoshida, Makoto; Mizutani, Hiroyuki; Saitou, Tomoya; Nakamura, Katsu
“Tsukuba Challenge” is the only technical trial of its kind to require mobile robots to work autonomously and safely on public walkways. In this paper, we introduce the outline of our robot “JW-Future”, developed for this experiment based on an electric wheelchair. Additionally, the significance of participating in such a technical trial is discussed from the viewpoint of industry.
Věchet, Stanislav; Chen, K.-S.; Krejsa, Jiří
Vol. 3, No. 4 (2013), pp. 273-277. ISSN 2223-9766 Institutional support: RVO:61388998 Keywords: particle filters * autonomous mobile robots * mixed potential fields Subject RIV: JD - Computer Applications, Robotics http://www.ausmt.org/index.php/AUSMT/article/view/214/239
Distributed robotics is a rapidly growing and maturing interdisciplinary research area lying at the intersection of computer science, network science, control theory, and electrical and mechanical engineering. The goal of the Symposium on Distributed Autonomous Robotic Systems (DARS) is to exchange and stimulate research ideas to realize advanced distributed robotic systems. This volume of proceedings includes 31 original contributions presented at the 2012 International Symposium on Distributed Autonomous Robotic Systems (DARS 2012) held in November 2012 at the Johns Hopkins University in Baltimore, MD USA. The selected papers in this volume are authored by leading researchers from Asia, Europe, and the Americas, thereby providing a broad coverage and perspective of the state-of-the-art technologies, algorithms, system architectures, and applications in distributed robotic systems. The book is organized into five parts, representative of critical long-term and emerging research thrusts in the multi-robot com...
This work focuses on Monte Carlo registration methods and their application with autonomous robots. A streaming and an offline variant are developed, both based on a particle filter. The streaming registration is performed in real-time during data acquisition with a laser striper allowing for on-the-fly pose estimation. Thus, the acquired data can be instantly utilized, for example, for object modeling or robot manipulation, and the laser scan can be aborted after convergence. Curvature features are calculated online and the estimated poses are optimized in the particle weighting step. For sampling the pose particles, uniform, normal, and Bingham distributions are compared. The methods are evaluated with a high-precision laser striper attached to an industrial robot and with a noisy Time-of-Flight camera attached to service robots. The shown applications range from robot assisted teleoperation, over autonomous object modeling, to mobile robot localization.
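The particle-filter machinery underlying such Monte Carlo methods can be sketched in one dimension (a generic predict-weight-resample cycle with an assumed Gaussian range model, not the authors' registration pipeline or their Bingham sampling):

```python
import random, math

def particle_filter_step(particles, control, measurement, landmark,
                         motion_noise=0.1, sense_noise=0.5, rng=random):
    """One predict-weight-resample cycle of a 1-D particle filter.
    Each particle is a hypothesised robot position; weights come from
    how well the predicted distance to a known landmark matches the
    range measurement."""
    # Predict: apply the control with additive motion noise.
    moved = [p + control + rng.gauss(0, motion_noise) for p in particles]
    # Weight: Gaussian likelihood of the range measurement.
    weights = [math.exp(-((abs(landmark - p) - measurement) ** 2)
                        / (2 * sense_noise ** 2)) for p in moved]
    # Resample: draw particles in proportion to their weights.
    return rng.choices(moved, weights=weights, k=len(moved))

rng = random.Random(1)
particles = [rng.uniform(0, 10) for _ in range(2000)]  # uniform prior
# True robot at 2.0 moves +1.0; the landmark is at 8.0, so it
# measures a range of about 5.0.
particles = particle_filter_step(particles, 1.0, 5.0, 8.0, rng=rng)
estimate = sum(particles) / len(particles)
print(round(estimate, 1))  # the posterior concentrates near 3.0
```

The registration variants in the paper do the same with 6-DoF poses and curvature features in place of the scalar position and range.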
Arrabales, Raúl; Sanchis, Araceli
In this paper we argue that machine consciousness can be successfully modelled to be the base of a control system for an autonomous mobile robot. Such a bio-inspired system provides the robot with cognitive benefits the same way that consciousness does for humans and other higher mammals. The key functions of consciousness are identified and partially applied to an original computational model, which is implemented in a software simulated mobile robot. We use a simulator to prove our assumpti...
Ribeiro, António Fernando; Monteiro, Jorge; Silva, Pedro; Silva, Victor; Braga, Paulo
Eurobot is a European robotics challenge for the young generation (universities and technical schools) which is held annually, with a different challenge in every edition, and in which around 200 teams participate every year. Each game comprises two teams competing against each other and does not allow draws. This work describes the design, development and building of an autonomous mobile robot to meet this challenge. This paper includes the challenge description, robot design, sensors...
Slušný, Stanislav; Vidnerová, Petra; Neruda, Roman
Seňa: PONT, 2007 - (Vojtáš, P.), pp. 103-108. ISBN 978-80-969184-7-8. [ITAT 2007. Conference on Theory and Practice of Information Theory. Poľana (SK), 21.09.2007-27.09.2007] Other grants: GA UK(CZ) 184/2002 Institutional research plan: CEZ:AV0Z10300504 Keywords: evolutionary robotics * neural networks * autonomous robot * robot control Subject RIV: IN - Informatics, Computer Science
This paper deals with the path planning and sensing planning expert system with learning functions for the pipeline inspection and maintenance robot, Mark IV. The robot can carry out inspection tasks to autonomously detect malfunctions in a plant pipeline system. Furthermore, the robot becomes more intelligent by adding the following functions: (1) the robot, Mark IV, is capable of inspecting the surfaces of storage tanks as well as pipeline outer surfaces; (2) in path planning, the robot has a learning function using information generated in the past, such as the moving path, task level and control commands of the robot; (3) in inspecting a pipeline system with plant equipment such as valves, flanges, and T- and L-joints, the robot is capable of inspecting continuous surfaces in the pipeline. Thus, together with the improved path planning expert system (PPES) and the sensing planning expert system (SPES), the Mark IV robot becomes intelligent enough to automatically carry out given inspection tasks. (author)
Wang, P. K. C.
The problem of deriving navigation strategies for a fleet of autonomous mobile robots moving in formation is considered. Here, each robot is represented by a particle with a spherical effective spatial domain and a specified cone of visibility. The global motion of each robot in the world space is described by the equations of motion of the robot's center of mass. First, methods for formation generation are discussed. Then, simple navigation strategies for robots moving in formation are derived. A sufficient condition for the stability of a desired formation pattern for a fleet of robots each equipped with the navigation strategy based on nearest neighbor tracking is developed. The dynamic behavior of robot fleets consisting of three or more robots moving in formation in a plane is studied by means of computer simulation.
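The nearest-neighbor tracking strategy can be sketched in one dimension (a toy discrete-time version with an assumed gain and spacing, not the paper's continuous-time dynamics or stability analysis): each robot servoes toward a fixed offset behind the robot ahead of it, and the column converges to a uniformly spaced formation.

```python
def step_formation(positions, spacing=1.0, gain=0.5):
    """One update of a 1-D nearest-neighbour tracking rule: every
    robot except the leader moves a fraction `gain` of the way toward
    a point `spacing` behind the robot immediately ahead of it."""
    new = [positions[0]]  # robot 0 is the leader and holds position
    for i in range(1, len(positions)):
        target = positions[i - 1] - spacing   # track the robot ahead
        new.append(positions[i] + gain * (target - positions[i]))
    return new

# Start from irregular spacing; iterate until the column settles.
pos = [10.0, 7.0, 6.5, 2.0]
for _ in range(40):
    pos = step_formation(pos)
print([round(p, 2) for p in pos])  # [10.0, 9.0, 8.0, 7.0]
```

The tracking errors propagate down the chain but decay geometrically, which mirrors the stability condition the paper derives for nearest-neighbor tracking.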
Rogers, Erika; Murphy, Robin R.
This paper describes a new approach in semi-autonomous mobile robots. In this approach the robot has sufficient computerized intelligence to function autonomously under a certain set of conditions, while the local system is a cooperative decision making unit that combines human and machine intelligence. Communication is then allowed to take place in a common mode and in a common language. A number of exception-handling scenarios that were constructed as a result of experiments with actual sensor data collected from two mobile robots were presented.
Herianto; Toshiki Sakakibara; Daisuke Kurabayashi
Navigation systems based on animal behavior have received growing attention in the past few years, yet navigation systems using artificial pheromones are still few. For this reason, this paper presents our research, which aims to implement autonomous navigation with an artificial pheromone system. By introducing an artificial pheromone system composed of data carriers and autonomous robots, the robotic system creates a potential field to navigate the group. We have developed a pheromone density model to realize the function of pheromones with the help of data carriers. We show the effectiveness of the proposed system by performing simulations and through realization using a modified mobile robot. The pheromone potential field system can be used for the navigation of autonomous robots.
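A minimal sketch of such a pheromone potential field follows (the evaporation, diffusion, and deposit constants are assumptions, and the data carriers are abstracted to a plain grid; this is not the authors' density model):

```python
def diffuse(field, evaporation=0.1, diffusion=0.2):
    """One update of a grid pheromone model: each cell loses pheromone
    to evaporation and exchanges a fraction with its four neighbours."""
    rows, cols = len(field), len(field[0])
    new = [[0.0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            nbrs = [(i, j) for i, j in [(r-1, c), (r+1, c), (r, c-1), (r, c+1)]
                    if 0 <= i < rows and 0 <= j < cols]
            inflow = sum(field[i][j] for i, j in nbrs) * diffusion / 4
            new[r][c] = (1 - evaporation) * (field[r][c] * (1 - diffusion) + inflow)
    return new

def climb(field, start):
    """Greedy navigation: step to the neighbouring cell with the most
    pheromone until no neighbour improves on the current cell."""
    r, c = start
    rows, cols = len(field), len(field[0])
    while True:
        nbrs = [(i, j) for i, j in [(r-1, c), (r+1, c), (r, c-1), (r, c+1)]
                if 0 <= i < rows and 0 <= j < cols]
        best = max(nbrs, key=lambda p: field[p[0]][p[1]])
        if field[best[0]][best[1]] <= field[r][c]:
            return (r, c)
        r, c = best

# A beacon at (2, 2) deposits pheromone; after diffusion a robot at
# (0, 0) climbs the gradient to the beacon.
field = [[0.0] * 5 for _ in range(5)]
for _ in range(30):
    field[2][2] += 10.0   # deposition at the source
    field = diffuse(field)
print(climb(field, (0, 0)))  # (2, 2)
```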
Mondada, Francesco; Correll, Nikolaus; Mermoud, Grégory; Egerstedt, Magnus; Hsieh, M; Parker, Lynne; Støy, Kasper
Distributed robotics is a rapidly growing, interdisciplinary research area lying at the intersection of computer science, communication and control systems, and electrical and mechanical engineering. The goal of the Symposium on Distributed Autonomous Robotic Systems (DARS) is to exchange and stimulate research ideas to realize advanced distributed robotic systems. This volume of proceedings includes 43 original contributions presented at the Tenth International Symposium on Distributed Autonomous Robotic Systems (DARS 2010), which was held in November 2010 at the École Polytechnique Fédérale de Lausanne (EPFL), Switzerland. The selected papers in this volume are authored by leading researchers from Asia, Australia, Europe, and the Americas, thereby providing a broad coverage and perspective of the state-of-the-art technologies, algorithms, system architectures, and applications in distributed robotic systems. The book is organized into four parts, each representing one critical and long-term research thru...
We review the current state of research in autonomous mobile robots and conclude that there is an inadequate basis for predicting the reliability and behavior of robots operating in unengineered environments. We present a new approach to the study of autonomous mobile robot performance based on formal statistical analysis of independently reproducible experiments conducted on real robots. Simulators serve as models rather than experimental surrogates. We demonstrate three new results: 1) Two commonly used performance metrics (time and distance) are not as well correlated as is often tacitly assumed. 2) The probability distributions of these performance metrics are exponential rather than normal, and 3) a modular, object-oriented simulation accurately predicts the behavior of the real robot in a statistically significant manner.
A design and implementation method for a robot soccer system with three vision-based autonomous robots is proposed in this paper. A hierarchical architecture with four independent layers: (a) information layer, (b) strategy layer, (c) tactics layer, and (d) execution layer, is proposed to construct a flexible and robust vision-based autonomous robot soccer system efficiently. Five mechanisms, including (a) a two-dimensional neck mechanism, (b) a dribbling mechanism, (c) a shooting mechanism, (d) an aiming mechanism, and (e) a flexible movement mechanism, are proposed so that a robot with multiple functions can win the game. A method based on data obtained from a compass and a vision sensor is proposed to determine the location of the robot on the field. In the strategy design, a hierarchical architecture of decisions based on a finite-state transition mechanism for the field players and the goalkeeper is proposed to handle the varied situations in a robot soccer game. Three vision-based robots are implemented and some real competition results from the FIRA Cup are presented to illustrate the validity and feasibility of the proposed method in autonomous robot soccer system design.
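The finite-state transition mechanism mentioned for the players can be illustrated with a deliberately small sketch (the states 'wait', 'track', 'clear' and their triggering conditions are hypothetical and much simpler than the paper's strategy design):

```python
def next_state(state, ball_near, ball_in_zone):
    """Finite-state transition sketch for a goalkeeper: the keeper
    waits, tracks the ball when it approaches, and clears it when
    it enters the goal zone; unlisted observations leave the state
    unchanged."""
    transitions = {
        ('wait',  (False, False)): 'wait',
        ('wait',  (True,  False)): 'track',
        ('wait',  (True,  True)):  'clear',
        ('track', (True,  False)): 'track',
        ('track', (True,  True)):  'clear',
        ('track', (False, False)): 'wait',
        ('clear', (False, False)): 'wait',
    }
    return transitions.get((state, (ball_near, ball_in_zone)), state)

# The ball approaches, enters the goal zone, and is cleared.
state = 'wait'
for obs in [(False, False), (True, False), (True, True), (False, False)]:
    state = next_state(state, *obs)
    print(state)  # prints: wait, track, clear, wait
```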
Amir A. F. Nassiraei; Kazuo Ishii
The concept of Intelligent Mechanical Design (IMD) is presented to show how a mechanical structure can be designed to affect robot controllability, simplification and task performance. Exploring this concept produces landmarks in the territory of mechanical robot design in the form of seven design principles. The design principles, which we call the Mecha-Telligence Principles (MTP), provide guidance on how to design mechanics for autonomous mobile robots. These principles guide us to ask the right questions when investigating issues concerning self-controllable, reliable, feasible, and compatible mechanics for autonomous mobile robots. To show how MTP can be applied in the design process, we propose a novel methodology, named the Mecha-Telligence Methodology (MTM). Mechanical design by the proposed methodology is based on preference classification of the robot specification, described by the interaction of the robot with its environment and the physical parameters of the robot mechatronics. After defining new terms, we investigate the applicability of the proposed methodology to the mechanical design of an autonomous mobile sewer inspection robot. In this industrial project we show how a passive-active intelligent moving mechanism can be designed using the MTM and employed in the field.
Fredy Hernán Martínez Sarmiento
Our motivation focuses on answering a simple question: what is the minimum robotic structure necessary to solve a navigation problem? Our research deals with environments that are unknown, dynamic, and denied to sensors. In particular, the paper addresses how to coordinate the navigation of multiple autonomous mobile robots without requiring system identification, geometric map building, localization or state estimation. The proposed navigation algorithm uses the gradient of the environment to set the navigation control. This gradient is continuously modified by all the robots, acting as a form of local communication. The design scheme, both for the algorithm and for its implementation on robots, searches for a minimal approximation that minimizes the requirements on the robot (processing power, communication and kind of sensors). Moreover, our approach achieves autonomous navigation for each robot and scales the system to any number of agents. The navigation algorithm is formulated for a grouping task, in which the robots form autonomous groups without any external interaction, prior information about the environment, or information from other robots. Finally, task performance is verified through simulation for the laboratory prototypes of the group.
Hornby, Gregory S.; Takamura, Seichi; Yamamoto, Takashi; Fujita, Masahiro
A challenging task that must be accomplished for every legged robot is creating the walking and running behaviors needed for it to move. In this paper we describe our system for autonomously evolving dynamic gaits on two of Sony's quadruped robots. Our evolutionary algorithm runs on board the robot and uses the robot's sensors to compute the quality of a gait without assistance from the experimenter. First we show the evolution of pace and trot gaits on the OPEN-R prototype robot. With the fastest gait, the robot moves at over 10 m/min, which is more than forty body-lengths/min. While these first gaits are somewhat sensitive to the robot and environment in which they are evolved, we then show the evolution of robust dynamic gaits, one of which is used on the ERS-110, the first consumer version of AIBO.
Nickerson, S. B.; Jasiobedzki, P.; Jenkin, M.; Jepson, A.; Milios, E.; Down, B.; Service, J. R. R.; Terzopoulos, D.; Tsotsos, J.; Wilkes, D.
This paper describes research on the ARK (Autonomous Mobile Robot in a Known Environment) project. The technical objective of the project is to build a robot that can navigate in a complex industrial environment using maps with permanent structures. The environment is not altered in any way by adding easily identifiable beacons; the robot relies on naturally occurring objects to use as visual landmarks for navigation. The robot is equipped with various sensors that can detect unmapped obstacles, landmarks and objects. In this paper we describe the robot's industrial environment, its architecture, a novel combined range and vision sensor, and our recent results in controlling the robot in the real-time detection of objects using their color and in the processing of the robot's range and vision sensor data for navigation.
The present paper considers issues related to navigation by autonomous mobile robots in overcrowded dynamic indoor environments (e.g., shopping malls, exhibition halls or convention centers). For robots moving among potentially unaware bystanders, safety is a key issue. A navigation method based on mixed potential field path planning is proposed, in cooperation with localization based on active artificial landmarks, in particular the bearings of infrared beacons placed at known coordinates, processed via particle filters. Simulation experiments and tests in unmodified real-world environments with the actual robot show that the proposed navigation system allows the robot to navigate safely among bystanders.
Hernádez Juan, Sergi; Herrero Cotarelo, Fernando
This technical report describes the work done to develop a new navigation scheme for an autonomous car-like robot available at the Mobile Robotics Laboratory at IRI. To plan the general path the robot should follow (i.e. the global planner), a search based planner algorithm, with motion primitives which take into account the kinematic constraints of the robot, is used. To actually execute the path and avoid dynamic obstacles (i.e the local planner) a modification of the DWA algorithm is used,...
In this paper we demonstrate the feasibility of autonomous robotic inspection and manipulation in an unstructured environment, using information coming from a multisensory integrated system. The task is to perform a real operation, such as adjusting a valve, on a testbed representing a hydraulic circuit. The robotic system, made up of a mobile crawling robot and an anthropomorphic, six-degree-of-freedom industrial robot, can achieve the goal through the joint use of vision, range and force/torque sensors. (author)
Subhranil Som; Arjun Shome
The main aim of this paper is to study the development of an obstacle-avoiding spy robot, which can be operated manually when the operator wants to take control of the robot himself, and which can also act autonomously, intelligently moving by detecting the obstacles in front of it with the help of an obstacle-detection circuit. The robot is in the form of a vehicle mounted with a web cam, which acquires and sends video from the robot's eye view to a TV or PC via ...
Autonomous underwater robots have in the past few years been designed according to the individual concepts and experiences of the researchers. Designing a robot which meets all the requirements of potential users is a demanding task; hence, a systematic design method that can include users' preferences and requirements is needed. This paper presents the quality function deployment (QFD) technique to design an autonomous underwater robot focusing on the Thai Navy military mission. Important user requirements extracted from the QFD method are the ability to record videos, operation at depths up to 10 meters, the ability to operate remotely with a cable, and safety concerns related to water leakage. Less important user requirements include beauty, using renewable energy, operating remotely by radio, and the ability to work at night. The important design parameters derived from the user requirements are a low-cost controller, an autonomous control algorithm, a compass sensor and vertical gyroscope, and a depth sensor. Low-importance design parameters include the module design, the use of clean energy, a low-noise electric motor, a remote surveillance design, a pressure hull, and a beautiful hull form design. The study results show the feasibility of using QFD techniques to systematically design an autonomous underwater robot to meet user requirements. A mapping between the design and expected parameters and a conceptual draft design of an autonomous underwater robot are also presented.
Létourneau Dominic; Michaud François; Valin Jean-Marc
The ability to read would surely contribute to increased autonomy of mobile robots operating in the real world. The process seems fairly simple: the robot must be capable of acquiring an image of a message to read, extract the characters, and recognize them as symbols, characters, and words. Using an optical Character Recognition algorithm on a mobile robot however brings additional challenges: the robot has to control its position in the world and its pan-tilt-zoom camera to find textual me...
This paper deals with the new mechanism of a new maintenance robot, Mark IV, following the previous reports on the pipeline inspection and maintenance robots Mark I, II, and III. The Mark IV has a mechanism capable of inspecting the surfaces of storage tanks as well as the outer surfaces of pipelines, a capability that distinguishes it from the previous maintenance robots. The main features of Mark IV are as follows. (i) The robot has a multijoint structure, so that it adapts better to the curvatures of pipelines and storage tanks. (ii) The joints of the robot have SMA actuators to make the robot lighter in weight; some actuator shape characteristics are also examined for the robot structure and control. (iii) The robot has suckers at both ends so that it can climb up a wall from the ground. (iv) The robot's inchworm mechanism provides many functional motions, allowing it to pass over flanges and T-joints and to transfer to adjacent pipelines across a wide range of pipe diameters. (v) A control method is given for mobile motion control. Thus, the functional level of the maintenance robot has been greatly improved by the introduction of the Mark IV robot. (author)
An interactive computer graphics program has been developed which allows an operator to more readily control robot motions in two distinct modes; viz., man-controlled and autonomous. In man-controlled mode, the robot is guided by a joystick or similar device. As the robot moves, actual joint angle information is measured and supplied to a graphics system which accurately duplicates the robot motion. Obstacles are placed in the actual and animated workspace and the operator is warned of imminent collisions by sight and sound via the graphics system. Operation of the system in man-controlled mode is shown. In autonomous mode, a collision-free path between specified points is obtained by previewing robot motions on the graphics system. Once a satisfactory path is selected, the path characteristics are transmitted to the actual robot and the motion is executed. The telepresence system developed at the University of Florida has been successful in demonstrating that the concept of controlling a robot manipulator with the aid of an interactive computer graphics system is feasible and practical. The clarity of images coupled with real-time interaction and real-time determination of imminent collision with obstacles has resulted in improved operator performance. Furthermore, the ability for an operator to preview and supervise autonomous operations is a significant attribute when operating in a hazardous environment
Wehner, Walter S.
The advanced inspection system is an autonomous control and analysis system that improves the inspection and remediation operations for ground and surface systems. It uses optical imaging technology with intelligent computer vision algorithms to analyze physical features of the real-world environment to make decisions and learn from experience. The advanced inspection system plans to control a robotic manipulator arm, an unmanned ground vehicle and cameras remotely, automatically and autonomously. There are many computer vision, image processing and machine learning techniques available as open source for using vision as a sensory feedback in decision-making and autonomous robotic movement. My responsibilities for the advanced inspection system are to create a software architecture that integrates and provides a framework for all the different subsystem components; identify open-source algorithms and techniques; and integrate robot hardware.
Chatterjee, Amitava; Nirmal Singh, N
This book is devoted to the theory and development of autonomous navigation of mobile robots using a computer vision based sensing mechanism. Conventional robot navigation systems, utilizing traditional sensors like ultrasonic, IR, GPS, or laser sensors, suffer from several drawbacks related either to the physical limitations of the sensors or to high cost. Vision sensing has emerged as a popular alternative where cameras can be used to reduce the overall cost while maintaining a high degree of intelligence, flexibility, and robustness. This book includes a detailed description of several new approaches for real-life vision based autonomous navigation algorithms and SLAM. It presents the concept of how subgoal based goal-driven navigation can be carried out using vision sensing. The development concept of vision based robots for path/line tracking using fuzzy logic is presented, as well as how a low-cost robot can be indigenously developed in the laboratory with microcontroller based sensor systems. The book descri...
Muhammad Adil Ansari
Autonomous robots are intelligent machines that are capable of performing desired tasks by themselves, without explicit human control. This paper presents the design and implementation of ASVR (Autonomous Sonar-Based Vehicle Robot). ASVR is a microcontroller based, programmable mobile robot that can sense and react to its environment and can work in partially known and unpredictable environments. A novel algorithm based on ultrasonic sensors and simple calculations for real-time obstacle detection and avoidance, intended for mobile robots, is also outlined. A novel technique is also proposed and implemented for steering referencing of the vehicle. The design is implemented in air using ultrasonic sensors but can be adapted, using sonar, to underwater environments, where it has important applications such as deep sea maintenance and reconnaissance tasks. The paper also presents performance results of a prototype developed to prove the design concept.
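The abstract does not spell out the ASVR algorithm, so the following is only a minimal sketch of the kind of "simple calculations" it describes: three hypothetical ultrasonic readings (left, centre, right) are mapped to a steering command. The sensor layout and the 30 cm threshold are assumptions for illustration, not the paper's values.

```python
# Illustrative sketch only: sensor layout, threshold, and decision rule
# are assumptions, not ASVR's actual algorithm.

SAFE_DISTANCE_CM = 30  # hypothetical clearance threshold

def avoid(left_cm, center_cm, right_cm):
    """Map three ultrasonic range readings to a steering command."""
    if center_cm > SAFE_DISTANCE_CM:
        return "forward"        # path ahead is clear
    # obstacle ahead: turn toward the side with more free space
    if left_cm >= right_cm:
        return "turn_left"
    return "turn_right"

print(avoid(100, 100, 100))  # forward
print(avoid(20, 10, 80))     # turn_right
```

In a real robot this decision would run inside the sensing loop, with readings debounced over several pings to suppress ultrasonic noise.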
Ramis Trubat, Àfrica
The rapid development of 3D computer graphics and virtual environments has allowed researchers to avoid working with physical robotic systems, which require specialised knowledge and very complex construction, consume a great deal of time, and may not be financially feasible. An alternative approach, therefore, is to use robot simulations, which allow researchers to carry out experiments on the computer. Ideally one would first prototype a robot, then control its algorithms ...
Miranda Neto, Arthur; Corrêa Victorino, Alessandro; Fantoni, Isabelle; Zampieri, Douglas Eduardo; Ferreira, Janito Vaqueiro; Lima, Danilo Alves
Autonomous robots have motivated researchers from different groups due to the challenge that they represent. Many applications for the control of autonomous platforms are being developed, and one important aspect is the excess of information, frequently redundant, that imposes a great computational cost in data processing. Taking into account the temporal coherence between consecutive frames, we have proposed a set of tools based on Pearson's Correlation Coefficient (PCC): ...
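As an illustration of how temporal coherence can cut processing cost, here is a hedged sketch (not the authors' actual toolset): it computes Pearson's Correlation Coefficient between two consecutive frames, flattened to pixel vectors, and skips further processing when the frames are nearly identical. The 0.95 threshold is an arbitrary assumption.

```python
import math

def pearson(a, b):
    """Pearson's correlation coefficient between two equal-length pixel vectors."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    var_a = sum((x - ma) ** 2 for x in a)
    var_b = sum((y - mb) ** 2 for y in b)
    return cov / math.sqrt(var_a * var_b)

def should_process(prev_frame, frame, threshold=0.95):
    """Skip costly vision processing when consecutive frames are highly correlated."""
    return pearson(prev_frame, frame) < threshold
```

Frames that are linear transforms of each other (e.g. a global brightness change) also give PCC near 1, which is one reason PCC is attractive as a cheap change detector.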
Vásárhelyi, Gábor; Virágh, Csaba; Somorjai, Gergő; Tarcai, Norbert; Szörényi, Tamás; Nepusz, Tamás; Vicsek, Tamás
We present the first decentralized multi-copter flock that performs stable autonomous outdoor flight with up to 10 flying agents. By decentralized and autonomous we mean that all members navigate themselves based on the dynamic information received from other robots in the vicinity. We do not use central data processing or control; instead, all the necessary computations are carried out by miniature on-board computers. The only global information the system exploits is from GPS receivers, whi...
Ribeiro, Paulo Rogério de Almeida; Ribeiro, António Fernando; Lopes, Gil
This work presents an application of the Microsoft Kinect camera for an autonomous mobile robot. In order to drive autonomously one main issue is the ability to recognize signalling panels positioned overhead. The Kinect camera can be applied in this task due to its double integrated sensor, namely vision and distance. The vision sensor is used to perceive the signalling panel, while the distance sensor is applied as a segmentation filter, by eliminating pixels by their depth in the object’s ...
Trulls, Eduard; Corominas Murtra, Andreu; Pérez-Ibarz, J.; Ferrer, Gonzalo; Vasquez, D.; Mirats-Tur, Josep M.; Sanfeliu, Alberto
This paper presents a fully autonomous navigation solution for urban, pedestrian environments. The task at hand, undertaken within the context of the European project URUS, was to enable two urban service robots, based on Segway RMP200 platforms and using planar lasers as primary sensors, to navigate around a known, large (10,000 m2), pedestrian-only environment with poor global positioning system coverage. Special consideration is given to the nature of our robots, highly mobile but two-whee...
This volume of proceedings includes 32 original contributions presented at the 12th International Symposium on Distributed Autonomous Robotic Systems (DARS 2014), held in November 2014. The selected papers in this volume are authored by leading researchers from Asia, Europe, and the Americas, thereby providing broad coverage and perspective on state-of-the-art technologies, algorithms, system architectures, and applications in distributed robotic systems.
An autonomous mobile robot is being developed to perform remote surveillance and inspection tasks on large numbers of stored radioactive waste drums. The robot will guide itself through narrow storage aisles and record the visual image of each viewable drum for subsequent off-line analysis and archiving. The system will remove personnel from potential exposure to radiation, perform the required inspections, and improve the ability to assess long-term trends in drum conditions.
Gribov, Vladislav; Voos, Holger
In this paper, a safety-oriented, model-based software engineering process for autonomous robots is proposed. The main focus is on modeling the safety case based on the standard ISO/DIS 13482. Combined with a safe multilayer robot software architecture, it allows the safety requirements to be traced and safety-relevant properties to be modeled at the early design stages in order to build a reliable chain of evidence. The introduced engineering process consists of the Domain Engineering, ...
In this paper I examine the issues related to robots with minds. Creating a robot with a mind aims to recreate neural function through engineering. A robot with a mind is expected not only to process external information with its built-in program and behave accordingly, but also to attain conscious activity responding to multiple conditions, and flexible, interactive communication skills for coping with unknown situations. That prospect is based on the development of artificial intelligence, in which self-organizing and self-emergent functions have become available in recent years. To date, the controllable aspects of robotics have been restricted to data making and the programming of cognitive abilities, while conscious activity and communication skills have been regarded as uncontrollable due to their contingency and uncertainty. However, some researchers in robotics claim that every activity of the mind can be recreated by engineering and is therefore controllable. Based on the development of the cognitive abilities of children and the findings of neuroscience, researchers have attempted to produce the latest artificial intelligence with autonomous learning systems. I conclude that controllability is inconsistent with autonomy in the genuine sense, and that autonomous robots recreated by engineering cannot be autonomous partners of humans. PMID:24558734
The paper addresses the rationale of a process that produces artworks made by a swarm of robots. This process relies on the interaction, through the environment, of a set of robots designed to create spatiotemporal patterns from an initially homogeneous medium (the canvas). Inspired by social insect societies, the approach presented here exploits robot-robot and robot-environment interactions to develop emergent behaviour. The swarm intelligence concept is crucial to this approach because the viability of the team (the group of robots) is required in order to achieve the viability of the individual. Without any central coordination or plan, the group of robots produces its artworks on the basis of a data-driven (bottom-up) process. Moreover, each robot can be viewed as an autonomous agent because it has on board all the resources required to produce the global outcome of the experiment, including sensors, actuators, and the controller, which exhibits a reactive behaviour by reinforcing a previously made signal (positive feedback). The process is also presented in the context of Machine Art, and a detailed technical description of each robot is given, as well as an example of artworks produced by the collective behaviour of the set of robots.
Anderson, J. D.; Lee, D. J.; Archibald, J. K.
The use of on-board vision with small autonomous robots has been made possible by the advances in the field of Field Programmable Gate Array (FPGA) technology. By connecting a CMOS camera to an FPGA board, on-board vision has been used to reduce the computation time inherent in vision algorithms. The FPGA board allows the user to create custom hardware in a faster, safer, and more easily verifiable manner that decreases the computation time and allows the vision to be done in real-time. Real-time vision tasks for small autonomous robots include object tracking, obstacle detection and avoidance, and path planning. Competitions were created to demonstrate that our algorithms work with our small autonomous vehicles in dealing with these problems. These competitions include Mouse-Trapped-in-a-Box, where the robot has to detect the edges of a box that it is trapped in and move towards them without touching them; Obstacle Avoidance, where an obstacle is placed at any arbitrary point in front of the robot and the robot has to navigate itself around the obstacle; Canyon Following, where the robot has to move to the center of a canyon and follow the canyon walls trying to stay in the center; the Grand Challenge, where the robot had to navigate a hallway and return to its original position in a given amount of time; and Stereo Vision, where a separate robot had to catch tennis balls launched from an air powered cannon. Teams competed on each of these competitions that were designed for a graduate-level robotic vision class, and each team had to develop their own algorithm and hardware components. This paper discusses one team's approach to each of these problems.
The main aim of this paper is to study the development of an obstacle-avoiding spy robot that can be operated manually when the operator wants to take control himself, and can also act autonomously, moving intelligently by detecting the obstacles in front of it with its obstacle-detection circuit. The robot takes the form of a vehicle mounted with a webcam, which acquires and sends video from the robot's point of view to a TV or PC via a TV tuner card. The microcontroller chip ATMEGA 328 on the ARDUINO microcontroller board controls the movements of the robot. In manual operation the user has a radio transmitter (Tx) with which to send signals to the radio receiver (Rx) inside the robot; the receiver passes the signals on to the microcontroller board, and the robot moves according to the signal signatures programmed into the microcontroller chip. In autonomous operation the user has no control over the robot: it cannot be operated via any external controls and functions only according to the data delivered by the obstacle-detection circuits to the microcontroller, which drives the robot's motors as per the code written in it. The idea is to make a robot that can tackle hostage situations and cope with the worst conditions, which can be quite risky for a human being to handle.
Krejsa, Jiří; Věchet, Stanislav; Ondroušek, V.
Praha: Institute of Thermomechanics AS CR, v. v. i., 2007 - (Zolotarev, I.), s. 139-140 ISBN 978-80-87012-06-2. [Engineering Mechanics 2007: national conference with international participation. Svratka (CZ), 14.05.2007-17.05.2007] Institutional research plan: CEZ:AV0Z20760514 Keywords: mobile robot * navigation * localization Subject RIV: JD - Computer Applications, Robotics
Multi-sensor data fusion is a broad area of constant research which is applied to a wide variety of fields, such as the field of mobile robots. Mobile robots are complex systems where the design and implementation of sensor fusion is a complex task, but research applications are explored constantly. The scope of the thesis is limited to building a map for a laboratory robot by fusing range readings from a sonar array with landmarks extracted from stereo vision images using the Scale Invariant Feature Transform (SIFT) algorithm.
Jin-Dong Liu; Huosheng Hu
The behaviour-based approach plays a key role in enabling mobile robots to operate safely in unknown or dynamically changing environments. We have developed a hybrid control architecture for our autonomous robotic fish that consists of three layers: cognitive, behaviour, and swim pattern. In this paper, we describe the main design issues of the behaviour layer, which is the centre of the layered control architecture of our robotic fish. Fuzzy logic control (FLC) is adopted to design the individual behaviours. Simulations and real experiments are presented to show the feasibility and performance of the designed behaviour layer.
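To make the fuzzy logic control idea concrete, here is a toy sketch (not the robotic fish's actual controller): one obstacle-avoidance behaviour with two fuzzy sets, "near" and "far", over an assumed 0-60 cm range, defuzzified by a weighted average of the rule outputs. The ranges and rule consequents are illustrative assumptions.

```python
def near(d_cm):
    """'Near' membership: 1 below 20 cm, fading linearly to 0 at 60 cm (assumed range)."""
    if d_cm <= 20:
        return 1.0
    if d_cm >= 60:
        return 0.0
    return (60 - d_cm) / 40

def far(d_cm):
    """'Far' is the complement of 'near' on this toy universe."""
    return 1.0 - near(d_cm)

def turn_rate(d_cm):
    """Rules: IF near THEN turn hard (1.0); IF far THEN go straight (0.0).
    Defuzzify by the weighted average of the rule outputs."""
    w_near, w_far = near(d_cm), far(d_cm)
    return (w_near * 1.0 + w_far * 0.0) / (w_near + w_far)

print(turn_rate(40))  # 0.5: halfway between the two rules
```

A real FLC would have several inputs (distance, bearing) and overlapping membership functions per input, but the fuzzify-infer-defuzzify pipeline is the same.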
Bourbakis, N. G.; Maas, M.; Tascillo, A.; Vandewinckel, C.
ODYSSEUS is an autonomous walking robot which makes use of three wheels and three legs for its movement in the free navigation space. More specifically, it uses its wheels to move around in environments where the surface is smooth and even. When there are low obstacles, stairs, or slight unevenness in the navigation environment, the robot uses both wheels and legs to travel efficiently. In this paper we present the detailed hardware design and the simulated behavior of the extended leg/arm part of the robot, since it plays a very significant role in the robot's actions (movements, selection of objects, etc.). The leg/arm consists of three major parts. The first part is a pipe attached to the robot base with a flexible 3-D joint; this pipe has a rotating bar as an extended part, which terminates in a 3-D flexible joint. The second part of the leg/arm is a pipe similar to the first, whose extended bar ends at a 2-D joint. The last part of the leg/arm is a clip-hand. It is used for picking up various objects of small weight and size, and when it is in 'closed' mode it serves as a supporting part of the robot leg. The entire leg/arm is controlled and synchronized by a microcontroller (68HC11) attached to the robot base.
Parish, David W.; Grabbe, Robert D.; Marzwell, Neville I.
A Modular Autonomous Robotic System (MARS) is being developed, consisting of a modular autonomous vehicle control system that can be retrofitted onto any vehicle to convert it to autonomous control, supporting a modular payload for multiple applications. The MARS design is scalable, reconfigurable, and cost effective due to the use of modern open-system-architecture design methodologies, including serial control bus technology to simplify system wiring and enhance scalability. The design is augmented with modular, object-oriented (C++) software implementing a hierarchy of five levels of control: teleoperated, continuous guidepath following, periodic guidepath following, absolute-position autonomous navigation, and relative-position autonomous navigation. The present effort is focused on producing a system that is commercially viable for routine autonomous patrolling of known, semistructured environments, such as environmental monitoring of chemical and petroleum refineries, exterior physical security and surveillance, perimeter patrolling, and intrafacility transport applications.
Problem statement: Research into robot motion control offers opportunities that will challenge scientists and engineers for years to come. Autonomous robots are increasingly evident in many aspects of industry and everyday life, and robust robot motion control can be used for homeland security and many consumer applications. This study discusses an adaptive fuzzy knowledge-based controller for robot motion control in indoor and outdoor environments. Approach: The proposed method consists of two components: a process monitor that detects changes in the process characteristics, and an adaptation mechanism that uses information passed to it by the process monitor to update the controller parameters. Results: Experimental evaluation was carried out in both indoor and outdoor environments, where the robot communicates with the base station through its Wi-Fi antenna and the performance monitor uses a set of five performance criteria to assess the fuzzy knowledge-based controller. Conclusion: The proposed method was found to be robust.
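The two-component structure described above (a process monitor plus an adaptation mechanism) can be sketched as follows. The drift statistic, window size, and gain-scaling rule are illustrative assumptions, not the study's actual design.

```python
class ProcessMonitor:
    """Flags a change when the recent average tracking error drifts from the baseline."""
    def __init__(self, window=5, drift=0.5):
        self.window, self.drift, self.errors = window, drift, []

    def update(self, error):
        self.errors.append(error)
        if len(self.errors) < 2 * self.window:
            return False  # not enough history to compare windows
        baseline = self.errors[:self.window]
        recent = self.errors[-self.window:]
        return abs(sum(recent) / self.window - sum(baseline) / self.window) > self.drift

class AdaptiveController:
    """Proportional controller whose gain is retuned when the monitor fires."""
    def __init__(self, gain=1.0):
        self.gain, self.monitor = gain, ProcessMonitor()

    def step(self, error):
        if self.monitor.update(error):
            self.gain *= 1.2  # hypothetical adaptation rule
        return self.gain * error
```

The point of the split is that monitoring and adaptation can be tuned independently: the monitor decides *when* to adapt, the mechanism decides *how*.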
Nickerson, S. B.; Camacho, F.; Mader, D. L.; Milios, E. E.; Jenkin, M. R. M.; Bains, N.; Braun, P.; Green, D.; Hung, S.; Korba, L.
The main goal of the project is to build a mobile robot that can navigate in a known indoor environment using computer vision as its main sensor, with the aid of an internal geometric model of its environment. A second goal is to explore the technology in such a way as to best illustrate its usefulness and commercial potential. The theory will focus on the development and testing of computer vision algorithms as aids for robot navigation. Two robots will be built: ARK-1 (autonomous robot for a known environment); and ARK-2. ARK-1 will be tethered and will be used to test the vision algorithms. ARK-2 will be untethered, will use other sensors in addition to vision, will have a real-time operating system and will operate in an industrial environment. The platforms for both ARK- 1 and ARK-2 will be the same as that of a robot being developed at NRC for industrial applications.
Ayad Mohammed Jabbar
The autonomous navigation of robots is an important area of research: a robot can intelligently navigate itself from source to target within an environment without human interaction. Recently, algorithms and techniques have been developed to improve the performance of robots, which now carry out tasks more effectively and with higher precision than before. This work proposes solving a maze using a flood fill algorithm, based on a real-time camera monitoring movement in the environment. Live video streaming sends the obtained data to be processed by the server, and the server sends the resulting information back to the robot via wireless radio. The robot acts as a client device, moving from point to point according to the server's information. Using a camera in this work avoids the considerable time the robot would otherwise need to determine the route by itself.
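A minimal sketch of the flood fill idea: a breadth-first distance map computed from the goal, which the robot then descends one neighbour at a time. The grid encoding and maze size below are assumptions, since the abstract does not give the maze representation.

```python
from collections import deque

def flood_fill(walls, size, goal):
    """BFS distance map over a size x size grid; `walls` is a set of blocked cells."""
    dist = {goal: 0}
    q = deque([goal])
    while q:
        x, y = q.popleft()
        for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if 0 <= nx < size and 0 <= ny < size \
                    and (nx, ny) not in walls and (nx, ny) not in dist:
                dist[(nx, ny)] = dist[(x, y)] + 1
                q.append((nx, ny))
    return dist

def next_step(dist, cell):
    """Move to the reachable neighbour with the smallest distance to the goal."""
    x, y = cell
    neigh = [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]
    return min((c for c in neigh if c in dist), key=dist.get)
```

In the camera-and-server setup described above, the server would recompute this distance map as walls are observed and radio only `next_step` back to the robot.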
Hansen, Søren Tranberg; Bak, Thomas; Risager, Claus
This paper presents a field study of a physical ball game for the elderly based on an autonomous mobile robot. The game algorithm is based on Case-Based Reasoning and adjusts the game's challenge to the player's mobility skills by registering the spatio-temporal behaviour of the player using an on boa...
van Hoof, Herke; van der Zant, Tijn; Wiering, Marco
Perception is an essential ability for autonomous robots in non-standardized conditions. However, the appearance of objects can change between different conditions. A system visually tracking a target based on its appearance could lose its target in those cases. A tracker learning the appearance of
A robot can perform a given task through a policy that maps its sensed state to appropriate actions. We assume that a hand-coded controller can achieve such a mapping only for the basic cases of the task; refining the controller becomes harder, more tedious, and more error prone as the complexity of the task increases. In this paper, we present a new learning-from-demonstration approach to improve the robot's performance through the use of corrective human feedback as a complement to an existing hand-coded algorithm. The human teacher observes the robot as it performs the task using the hand-coded algorithm and takes over control to correct the behavior when the robot selects a wrong action to execute. Corrections are captured as new state-action pairs, and during autonomous execution the default controller output is replaced by the demonstrated corrections whenever the current state of the robot is judged to be similar to a previously corrected state in the correction database. The proposed approach is applied to a complex ball dribbling task performed against stationary defender robots in a robot soccer scenario, where physical Aldebaran Nao humanoid robots are used. The results of our experiments show an improvement in the robot's performance when the default hand-coded controller is augmented with corrective human demonstration.
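The correction-replay mechanism described above can be sketched as a nearest-state lookup that falls back to the hand-coded controller. The state encoding, Euclidean similarity measure, and radius threshold are illustrative assumptions rather than the paper's actual similarity criterion.

```python
def euclidean(a, b):
    """Euclidean distance between two state vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def select_action(state, default_controller, corrections, radius=0.5):
    """Replay a demonstrated correction when the current state is close to a
    previously corrected one; otherwise fall back to the hand-coded controller.
    `corrections` maps corrected states to the demonstrated actions."""
    if corrections:
        near_state = min(corrections, key=lambda s: euclidean(s, state))
        if euclidean(near_state, state) <= radius:
            return corrections[near_state]
    return default_controller(state)
```

The radius trades off generalization against safety: too large and corrections fire in states they were never demonstrated for, too small and they never fire at all.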
This paper deals with sensing planning for a pipeline inspection and maintenance robot, by which the robot can autonomously carry out inspection tasks to detect a malfunction location in a plant pipeline system. For this purpose, the robot needs knowledge of the plant map, plant function, and plant diagnosis. In a previous report, the path planning expert system (PPES) was described; given the plant map, the robot can automatically produce the path from one location to another at the robot task level. In this paper, PPES is modified into the sensing planning expert system (SPES), which generates executable robot commands for motion control. In addition, the plant knowledge system requires more information concerning plant operation states, such as standard values/statuses and up/down streams. Furthermore, the robot needs knowledge of inspection/repair and diagnosis, so that it can estimate the malfunction candidates and select one individually after some inspection trials. Together, PPES and SPES make the robot intelligent enough to carry out given inspection tasks automatically. (author)
Overholt, James L.; Hudas, Greg R.; Gerhart, Grant R.
Proprioception is a sense of body position and movement that supports the control of many automatic motor functions such as posture and locomotion. This concept, normally relegated to the fields of neural physiology and kinesiology, is being utilized in the field of unmanned mobile robotics. This paper looks at developing proprioceptive behaviors for use in controlling an unmanned ground vehicle. First, we will discuss the field of behavioral control of mobile robots. Next, a discussion of proprioception and the development of proprioceptive sensors will be presented. We will then focus on the development of a unique neural-fuzzy architecture that will be used to incorporate the control behaviors coming directly from the proprioceptive sensors. Finally we will present a simulation experiment where a simple multi-sensor robot, utilizing both external and proprioceptive sensors, is presented with the task of navigating an unknown terrain to a known target position. Results of the mobile robot utilizing this unique fusion methodology will be discussed.
Tedder, Maurice; Chung, Chan-Jin
The purpose of this paper is to introduce a cost-effective way to design robot vision and control software using Matlab for an autonomous robot designed to compete in the 2004 Intelligent Ground Vehicle Competition (IGVC). The goal of the autonomous challenge event is for the robot to autonomously navigate an outdoor obstacle course bounded by solid and dashed lines on the ground. Visual input data is provided by a DV camcorder at 160 x 120 pixel resolution. The design of this system involved writing an image-processing algorithm using hue, saturation, and brightness (HSB) color filtering and Matlab image-processing functions to extract the centroid, area, and orientation of the connected regions in the scene. These feature vectors are then mapped to linguistic variables that describe the objects in the world environment model. The linguistic variables act as inputs to a fuzzy logic controller designed using the Matlab fuzzy logic toolbox, which provides the knowledge and intelligence component necessary to achieve the desired goal. Java provides the central interface to the robot motion control and image acquisition components. Field test results indicate that the Matlab-based solution allows for rapid software design, development, and modification of our robot system.
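The HSB filtering and centroid extraction steps can be sketched in a few lines. This is not the team's Matlab code: the tuple image encoding, hue range, and function names are assumptions for illustration.

```python
def color_mask(image, hue_range):
    """Binary mask of pixels whose hue falls inside hue_range.
    `image` is a 2-D grid of (hue, saturation, brightness) tuples."""
    lo, hi = hue_range
    return [[lo <= h <= hi for (h, s, b) in row] for row in image]

def centroid(mask):
    """Centroid (row, col) of the True pixels in a binary mask."""
    pts = [(r, c) for r, row in enumerate(mask)
           for c, v in enumerate(row) if v]
    n = len(pts)
    return (sum(r for r, _ in pts) / n, sum(c for _, c in pts) / n)
```

Hue-based masking is attractive outdoors because hue is less sensitive than RGB to the brightness changes caused by shadows and sun angle.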
Simmons, Reid G.
The Task Control Architecture (TCA) provides communication and coordination facilities to construct distributed, concurrent robotic systems. The use of TCA in a system that walks a legged robot through rugged terrain is described. The walking system, as originally implemented, had a sequential sense-plan-act control cycle. Utilizing TCA features for task sequencing and monitoring, the system was modified to concurrently plan and execute steps. Walking speed improved by over 30 percent, with only a relatively modest conversion effort.
Wei Hongxing; Li Ning; Liu Miao; Tan Jindong
Swarm intelligence embodied by many species, such as ants and bees, has inspired scholars in swarm robotics research. This paper presents a novel autonomous self-assembling distributed swarm flying robot, the DSFR, which can drive on the ground, autonomously accomplish self-assembly, and then fly in the air in a coordinated manner. The mechanical and electrical designs of a DSFR module, as well as its kinematics and dynamics, are investigated in detail. Meanwhile, this paper introduces a generalized adjacency matrix to describe the configurations of DSFR structures. A distributed flight control model is also established for vertical take-off and horizontal hovering, which can be applied to the control of DSFR systems with arbitrary configurations. Finally, experiments are carried out to test and validate the DSFR design, the autonomous self-assembly strategy, and the distributed flight control laws.
One of the goals of robotics is to protect human personnel who work in dangerous or difficult-to-access areas. This is the case in the nuclear industry, where there are areas that by their very nature are inaccessible to human personnel, such as areas with high radiation levels or high temperatures. In these cases an inspection system is indispensable: one able to sample the area in order to determine whether it can be made accessible to human personnel. In this situation it is possible to use an inspection system based on a mobile robot, preferably with autonomous navigation, to carry out the inspection and thereby avoid exposing human personnel. The present work proposes an autonomous navigation model for a Pioneer 2-DXe mobile robot based on a wall-following algorithm using the paradigm of fuzzy logic. (Author)
Di Nuovo, Alessandro G; Marocco, Davide; Di Nuovo, Santo; Cangelosi, Angelo
In this paper we focus on modeling autonomous learning to improve the performance of a humanoid robot through a modular artificial neural network architecture. A model of a neural controller is presented which allows the humanoid robot iCub to autonomously improve its sensorimotor skills. This is achieved by endowing the neural controller with a secondary neural system that, by exploiting the sensorimotor skills already acquired by the robot, is able to generate additional imaginary examples that the controller itself can use to improve its performance through simulated mental training. Results and analysis presented in the paper provide evidence of the viability of the proposed approach and help to clarify the rationale behind the chosen model and its implementation. PMID:23122490
Barhen, J.; Dress, W. B.; Jorgensen, C. C.
This article provides an overview of studies at the Oak Ridge National Laboratory (ORNL) of neural networks running on parallel machines applied to the problems of autonomous robotics. The first section provides the motivation for our work in autonomous robotics and introduces the computational hardware in use. Section 2 presents two theorems concerning the storage capacity and stability of neural networks. Section 3 presents a novel load-balancing algorithm implemented with a neural network. Section 4 introduces the robotics test bed now in place. Section 5 concerns navigation issues in the test-bed system. Finally, Section 6 presents a frequency-coded network model and shows how Darwinian techniques are applied to issues of parameter optimization and on-line design.
Analysis of the safety of operating and maintaining the Stored Waste Autonomous Mobile Inspector (SWAMI) II in a hazardous environment at the Fernald Environmental Management Project (FEMP) was completed. The SWAMI II is a version of a commercial robot, the HelpMate trademark robot produced by the Transitions Research Corporation, which is being updated to incorporate the systems required for inspecting mixed toxic chemical and radioactive waste drums at the FEMP. It also has modified obstacle detection and collision avoidance subsystems. The robot will autonomously travel down the aisles in storage warehouses to record images of containers and collect other data which are transmitted to an inspector at a remote computer terminal. A previous study showed the SWAMI II has economic feasibility. The SWAMI II will more accurately locate radioactive contamination than human inspectors. This thesis includes a System Safety Hazard Analysis and a quantitative Fault Tree Analysis (FTA). The objectives of the analyses are to prevent potentially serious events and to derive a comprehensive set of safety requirements from which the safety of the SWAMI II and other autonomous mobile robots can be evaluated. The Computer-Aided Fault Tree Analysis (CAFTA copyright) software is utilized for the FTA. The FTA shows that more than 99% of the safety risk occurs during maintenance, and that when the derived safety requirements are implemented the rate of serious events is reduced to below one event per million operating hours. Training and procedures in SWAMI II operation and maintenance provide an added safety margin. This study will promote the safe use of the SWAMI II and other autonomous mobile robots in the emerging technology of mobile robotic inspection
Xue, Shuwan; Deligeorges, Socrates; Soloway, Aaron; Lichtenstein, Lee; Gore, Tyler; Hubbard, Allyn
Limited autonomous behaviors are fast becoming a critical capability in the field of robotics as robotic applications are used in more complicated and interactive environments. As additional sensory capabilities are added to robotic platforms, sensor fusion to enhance and facilitate autonomous behavior becomes increasingly important. Using biology as a model, the equivalent of a vestibular system needs to be created in order to orient the system within its environment and allow multi-modal sensor fusion. In mammals, the vestibular system plays a central role in physiological homeostasis and sensory information integration (Fuller et al, Neuroscience 129 (2004) 461-471). At the level of the Superior Colliculus in the brain, there is multimodal sensory integration across visual, auditory, somatosensory, and vestibular inputs (Wallace et al, J Neurophysiol 80 (1998) 1006-1010), with the vestibular component contributing a strong reference frame gating input. Using a simple model for the deep layers of the Superior Colliculus, an off-the-shelf 3-axis solid state gyroscope and accelerometer was used as the equivalent representation of the vestibular system. The acceleration and rotational measurements are used to determine the relationship between a local reference frame of a robotic platform (an iRobot Packbot®) and the inertial reference frame (the outside world), with the simulated vestibular input tightly coupled with the acoustic and optical inputs. Field testing of the robotic platform using acoustics to cue optical sensors coupled through a biomimetic vestibular model for "slew to cue" gunfire detection have shown great promise.
Husain, Ammar; Jones, Heather; Kannan, Balajee; Wong, Uland; Pimentel, Tiago; Tang, Sarah; Daftry, Shreyansh; Huber, Steven; Whittaker, William L.
Caves on other planetary bodies offer sheltered habitat for future human explorers and numerous clues to a planet's past for scientists. While recent orbital imagery provides exciting new details about cave entrances on the Moon and Mars, the interiors of these caves are still unknown and not observable from orbit. Multi-robot teams offer unique solutions for exploration and modeling subsurface voids during precursor missions. Robot teams that are diverse in terms of size, mobility, sensing, and capability can provide great advantages, but this diversity, coupled with inherently distinct low-level behavior architectures, makes coordination a challenge. This paper presents a framework that consists of an autonomous frontier and capability-based task generator, a distributed market-based strategy for coordinating and allocating tasks to the different team members, and a communication paradigm for seamless interaction between the different robots in the system. Robots have different sensors, (in the representative robot team used for testing: 2D mapping sensors, 3D modeling sensors, or no exteroceptive sensors), and varying levels of mobility. Tasks are generated to explore, model, and take science samples. Based on an individual robot's capability and associated cost for executing a generated task, a robot is autonomously selected for task execution. The robots create coarse online maps and store collected data for high resolution offline modeling. The coordination approach has been field tested at a mock cave site with highly-unstructured natural terrain, as well as an outdoor patio area. Initial results are promising for applicability of the proposed multi-robot framework to exploration and modeling of planetary caves.
A novel distributed hunting approach for multiple autonomous robots in unstructured, model-free environments, based on effective sectors and local sensing, is proposed in this paper. Visual information, encoder data, and sonar data are integrated in the robot's local frame, and the effective sector is introduced. The hunting task is modelled as three states: a search state, a round-obstacle state, and a hunting state, and the corresponding switching conditions and control strategies are given. A form of cooperation emerges in which the robots interact only locally with each other. The evader, whose motion is a priori unknown to the robots, adopts an escape strategy to avoid being captured. The approach is scalable and copes with problems of communication and wheel slippage. The effectiveness of the proposed approach is verified through experiments with a team of wheeled robots.
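The three-state switching logic described in that abstract can be sketched as a minimal transition function; the state names, sensing predicates, and priority ordering are illustrative assumptions, not the authors' exact switching conditions:

```python
def next_state(evader_visible, obstacle_ahead):
    """Pick the controller state from local sensing; obstacle handling takes
    priority, so the robot rounds the obstacle before closing in."""
    if obstacle_ahead:
        return "round-obstacle"
    if evader_visible:
        return "hunting"
    return "search"

print(next_state(evader_visible=True, obstacle_ahead=False))  # -> hunting
```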
Defigueiredo, R.; Ciscon, L.; Berberian, D.
The Rice-obot I is the first in a series of Intelligent Autonomous Mobile Robots (IAMRs) being developed at Rice University's Cooperative Intelligent Mobile Robots (CIMR) lab. The Rice-obot I is mainly designed to be a testbed for various robotic and AI techniques, and a platform for developing intelligent control systems for exploratory robots. Researchers present the need for a generalized environment capable of combining all of the control, sensory and knowledge systems of an IAMR. They introduce Lisp-Nodes as such a system, and develop the basic concepts of nodes, messages and classes. Furthermore, they show how the control system of the Rice-obot I is implemented as sub-systems in Lisp-Nodes.
Donato Di Paola
The development of intelligent surveillance systems is an active research area. In this context, mobile and multi-functional robots are generally adopted as a means to reduce the structuring of the environment and the number of devices needed to cover a given area. Nevertheless, the number of different sensors mounted on the robot and the number of complex tasks related to exploration, monitoring, and surveillance make the design of the overall system extremely challenging. In this paper, we present our autonomous mobile robot for surveillance of indoor environments. We propose a system able to handle general-purpose tasks and complex surveillance issues simultaneously and autonomously. It is shown that the proposed robotic surveillance scheme successfully addresses a number of basic problems related to environment mapping, localization, and autonomous navigation, as well as surveillance tasks such as scene processing to detect abandoned or removed objects and people detection and following. The feasibility of the approach is demonstrated through experimental tests using a multisensor platform equipped with a monocular camera, a laser scanner, and an RFID device. Real-world applications of the proposed system include surveillance of wide areas and buildings (e.g., airports and museums) and monitoring of safety equipment.
A mobile robotic system is described that conducts radiological surveys to map alpha, beta, and gamma radiation on surfaces in relatively level open areas or areas containing obstacles such as stored containers, hallways, equipment, walls, and support columns. The invention incorporates improved radiation monitoring methods using multiple scintillation detectors, laser scanners for maneuvering in open areas, ultrasound pulse generators and receptors for collision avoidance in confined areas or hallways, methods to trigger visible alarms when radiation is detected, and methods to transmit location data for real-time reporting and mapping of radiation locations on computer monitors at a host station. A multitude of high-performance scintillation detectors detect radiation while the on-board system controls the direction and speed of the robot according to pre-programmed paths. The operators may revise the preselected movements of the robotic system via Ethernet communications to re-monitor areas of radiation or to avoid walls, columns, equipment, or containers. The robotic system is capable of floor survey speeds of from 1/2 inch per second up to about 30 inches per second, while the on-board processor collects, stores, and transmits information for real-time mapping of radiation intensity and radiation locations for real-time display on computer monitors at a central command console. 4 figs
Mast, Marcus; Burmester, Michael; Graf, Birgit; Weisshardt, Florian; Arbeiter, Georg; Španel, Michal; Zdenek, Materna; Smrz, Pavel; Kronreif, Gernot
Service robots could support elderly people's activities of daily living and enable them to live in their own residences independently as long as possible. Current robot technology does not allow reliable fully autonomous operation of service robots with manipulation capabilities in the heterogeneous environments of private homes. We developed and evaluated a usage concept for semi-autonomous robot control as well as user interfaces for three user groups. Elderly people are provided with simp...
Badger, Julia; Nguyen, Vienny; Mehling, Joshua; Hambuchen, Kimberly; Diftler, Myron; Luna, Ryan; Baker, William; Joyce, Charles
The Robonaut project has been conducting research in robotics technology on board the International Space Station (ISS) since 2012. Recently, the original upper body humanoid robot was upgraded by the addition of two climbing manipulators ("legs"), more capable processors, and new sensors, as shown in Figure 1. While Robonaut 2 (R2) has been working through checkout exercises on orbit following the upgrade, technology development on the ground has continued to advance. Through the Active Reduced Gravity Offload System (ARGOS), the Robonaut team has been able to develop technologies that will enable full operation of the robotic testbed on orbit using similar robots located at the Johnson Space Center. Once these technologies have been vetted in this way, they will be implemented and tested on the R2 unit on board the ISS. The goal of this work is to create a fully-featured robotics research platform on board the ISS to increase the technology readiness level of technologies that will aid in future exploration missions. Technology development has thus far followed two main paths, autonomous climbing and efficient tool manipulation. Central to both technologies has been the incorporation of a human robotic interaction paradigm that involves the visualization of sensory and pre-planned command data with models of the robot and its environment. Figure 2 shows screenshots of these interactive tools, built in rviz, that are used to develop and implement these technologies on R2. Robonaut 2 is designed to move along the handrails and seat track around the US lab inside the ISS. This is difficult for many reasons, namely the environment is cluttered and constrained, the robot has many degrees of freedom (DOF) it can utilize for climbing, and remote commanding for precision tasks such as grasping handrails is time-consuming and difficult. Because of this, it is important to develop the technologies needed to allow the robot to reach operator-specified positions as
Stoeter, Sascha A; Papanikolopoulos, Nikolaos
The problem of vision-guided control of miniature mobile robots is investigated. Untethered mobile robots with small physical dimensions of around 10 cm or less do not permit powerful onboard computers because of size and power constraints. These challenges have, in the past, reduced the functionality of such devices to that of a complex remote control vehicle with fancy sensors. With the help of a computationally more powerful entity such as a larger companion robot, the control loop can be closed. Using the miniature robot's video transmission or that of an observer to localize it in the world, control commands can be computed and relayed to the inept robot. The result is a system that exhibits autonomous capabilities. The framework presented here solves the problem of climbing stairs with the miniature Scout robot. The robot's unique locomotion mode, the jump, is employed to hop one step at a time. Methods for externally tracking the Scout are developed. A large number of real-world experiments are conducted and the results discussed. PMID:15828659
A new type of mobile robot with a looping movement mechanism in the lateral and circumferential directions of a pipeline is presented in this paper for pipeline maintenance operations. This robot has four degrees of freedom and more flexibility than the first and second prototype robots, which have a wheel-type mobile mechanism for horizontally located pipelines. The robot can pass over obstacles such as flanges and T-joint pipelines, as the previously reported robots can, and furthermore offers greater adaptability for pipeline maintenance: it can move along vertically located pipelines and can transfer to an adjacently located pipeline. The control is therefore necessarily complicated, and a dual-mode control is introduced by employing a coordinate transformation matrix. To detect flanges, T-joints, and pipelines in the neighbourhood, ultrasonic sensors and infrared sensors are installed as short- and long-range sensors, so that the robot can autonomously move along pipelines. (author)
Charles V. Smith III
Control systems driven by voice recognition software have been implemented before but lacked a context-driven approach to generating relevant responses and actions. A partially voice-activated control system for mobile robotics is presented that allows an autonomous robot to interact with people and the environment in a meaningful way while dynamically creating customized tours. Many existing control systems also require substantial training for voice applications. The system proposed requires little to no training and is adaptable to chaotic environments. The traversable area is mapped once, and from that map a fully customized route is generated for the user.
Wolff, J. Gerard
This article is about how the "SP theory of intelligence" and its realisation in the "SP machine" (both outlined in the article) may help to solve computer-related problems in the design of autonomous robots, meaning robots that do not depend on external intelligence or power supplies, are mobile, and are designed to exhibit as much human-like intelligence as possible. The article is about: how to increase the computational and energy efficiency of computers and reduce their bulk; how to achi...
Simmons, Reid; Mitchell, Tom
An architecture is presented for controlling robots that have multiple tasks, operate in dynamic domains, and require a fair degree of autonomy. The architecture is built on several layers of functionality, including a distributed communication layer, a behavior layer for querying sensors, expanding goals, and executing commands, and a task level for managing the temporal aspects of planning and achieving goals, coordinating tasks, allocating resources, monitoring, and recovering from errors. Application to a legged planetary rover and an indoor mobile manipulator is described.
Witkowski, Ulf; Sitte, Joaquin; Herbrechtsmeier, Stefan; Rückert, Ulrich
AMiRESot is a new robot soccer league that is played with small autonomous miniature robots. Team sizes are defined with one, two, and three robots per team. Special to the AMiRESot league are the fully autonomous behavior of the robots and their small size. For the matches, the rules mainly follow the FIFA laws with some modifications being useful for robot soccer. The new AMiRESot soccer robot is small in size (maximum 110 mm diameter) but a powerful vehicle, equipped with a differential drive system. For sensing, the robots in their basic configuration are equipped with active infrared sensors and a color image sensor. For information processing a powerful mobile processor and reconfigurable hardware resources (FPGA) are available. Due to the robot’s modular structure it can be easily extended by additional sensing and processing resources. This paper gives an overview of the AMiRESot rules and presents details of the new robot platform used for AMiRESot.
ZHANG Guo-wei; LU Qiu-hong
Using stereo vision for autonomous mobile robot path planning is an active research topic. The environment-mapping and path-planning algorithms were introduced and applied on an autonomous mobile robot experimental platform. Through experiments on the robot platform, the effectiveness of these algorithms was verified.
Magallón Hernández, Ignacio
This TCC (Undergraduate Course Final Project) aims to develop a solution for intelligent autonomous navigation with mobile robots using computer vision. Using C language and OpenCV, an image processing library, the generated code applies different filters and convolutions in the input image obtained by webcam in order to reduce input noise, homogenize regions and detect borders. The program, which can be adapted to different environments by regulating four parameters, allows th...
Orosz, Gábor; Moehlis, Jeff; Bullo, Francesco
Fundamental design principles are presented for vehicle systems governed by autonomous cruise control devices. By analyzing the corresponding delay differential equations, it is shown that for any car-following model short-wavelength oscillations can appear due to robotic reaction times, and that there are tradeoffs between the time delay and the control gains. The analytical findings are demonstrated on an optimal velocity model using numerical continuation and numerical simulation. PMID:20365620
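The delay differential equations referred to above can be illustrated with a minimal Euler simulation of an optimal-velocity car-following model with a reaction delay; the optimal-velocity function and all parameter values below are illustrative assumptions, not those of the paper:

```python
import math

def simulate(tau=0.5, alpha=1.0, dt=0.01, steps=2000, headway0=10.0):
    """Single follower behind a leader cruising at its equilibrium speed.
    The follower relaxes toward the optimal velocity of the headway it
    sensed one reaction time (tau) ago."""
    V = lambda h: math.tanh(h - 2.0) + math.tanh(2.0)  # optimal-velocity function
    delay = int(round(tau / dt))
    h = [headway0] * (delay + 1)  # constant headway history for t <= 0
    v = 0.0
    lead_v = V(headway0)          # leader holds its equilibrium speed
    for _ in range(steps):
        v += dt * alpha * (V(h[-1 - delay]) - v)  # delayed driver response
        h.append(h[-1] + dt * (lead_v - v))       # headway kinematics
    return v, lead_v

v, lead_v = simulate()
```

Here the single follower simply settles onto the leader's speed; the short-wavelength oscillations analyzed in the abstract arise from the interaction of many such delayed vehicles as the gains and delay cross a stability boundary.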
Mingjun Wang; Jun Zhou; Jun Tu; Chengliang Liu
Long-range terrain perception is of high value for efficient autonomous navigation and risky intervention tasks for field robots, enabling earlier recognition of hazards, better path planning, and higher speeds. However, stereo-based navigation systems can only perceive near-field terrain due to the nearsightedness of stereo vision. Many near-to-far learning methods, based on regions' appearance features, have been proposed to predict far-field terrain. We p...
Andersen, Jens Christian; Ravn, Ole; Andersen, Nils Axel
Orchard navigation using sensor-based localization and flexible mission management facilitates successful missions independent of the Global Positioning System (GPS). This is especially important while driving between tight tree rows, where GPS coverage is poor. This paper suggests localization ..., obstacle avoidance, path planning and drive control. The system is tested successfully using a Hako 20 kW tractor during autonomous missions in both cherry and apple orchards, with mission lengths of up to 2.3 km including the headland turns.
Yen, John; Pfluger, Nathan
The ability of a mobile robot system to plan and move intelligently in a dynamic environment is needed if robots are to be useful in areas other than controlled environments. An example use for this system is to control an autonomous mobile robot in a space station, or another isolated area where it is hard or impossible for human life to exist for long periods of time (e.g., Mars). The system would allow the robot to be programmed to carry out the duties normally accomplished by a human being. Some of the duties that could be accomplished include operating instruments, transporting objects, and maintenance of the environment. The main focus of our early work has been on developing a fuzzy controller that takes a path and adapts it to a given environment. The robot only uses information gathered from the sensors, but retains the ability to avoid dynamically placed obstacles near and along the path. Our fuzzy logic controller is based on the following algorithm: (1) determine the desired direction of travel; (2) determine the allowed direction of travel; and (3) combine the desired and allowed directions in order to determine a direction that is both desired and allowed. The desired direction of travel is determined by projecting ahead to a point along the path that is closer to the goal. This gives a local direction of travel for the robot and helps to avoid obstacles.
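The three-step fuzzy combination in that algorithm can be sketched as follows; the candidate headings and membership values are illustrative assumptions, not values from the work:

```python
def combine_directions(headings, desired_mu, allowed_mu):
    """Step 3 of the algorithm: fuzzy AND (min) of the 'desired' and
    'allowed' memberships, then take the best-supported heading."""
    combined = [min(d, a) for d, a in zip(desired_mu, allowed_mu)]
    return headings[combined.index(max(combined))]

headings = [-90, -45, 0, 45, 90]      # candidate directions (degrees)
desired  = [0.1, 0.4, 1.0, 0.6, 0.2]  # goal lies roughly straight ahead
allowed  = [1.0, 0.3, 0.2, 0.9, 1.0]  # an obstacle blocks 'straight ahead'
print(combine_directions(headings, desired, allowed))  # -> 45
```

The min/max pattern is one common fuzzy conjunction; the heading chosen is both reasonably desired and reasonably allowed, steering the robot around the obstacle while still progressing toward the goal.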
Cao, Zuoliang; Hu, Jun; Cao, Jin; Hall, Ernest L.
As a laboratory demonstration platform, the TUT-I mobile robot provides various experimentation modules to demonstrate robotics technologies involved in remote control, computer programming, and teach-and-playback operations. Typically, teach-and-playback operation has proved to be an effective solution, especially in structured environments. The path generated in the teach mode, and real-time path correction using path-error detection in the playback mode, are demonstrated. A vision-based image database is generated as the given path representation in the teaching procedure. An online image-positioning algorithm is performed for path following. Advanced sensory capability is employed to provide environment perception. A unique omnidirectional vision (omni-vision) system is used for localization and navigation. The omnidirectional vision involves an extremely wide-angle lens, so that a dynamic omni-vision image is processed in real time to provide the widest view during movement. Beacon guidance is realized by observing the locations of points derived from overhead features, such as predefined light arrays in a building. The navigation approach is based upon the omni-vision characteristics. A group of ultrasonic sensors is employed for obstacle avoidance.
This contribution addresses computer vision algorithms for mobile robot localization in indoor and outdoor agricultural environments. The main aim of this work was to design, create, verify, and evaluate the speed and functionality of a computer vision localization algorithm. Input colour camera data and depth data were captured by an MS Kinect sensor mounted on a 6-wheel-drive mobile robot chassis. The design of the localization algorithm focused on the most significant blobs and points (landmarks) in the colour picture. The actual coordinates of the autonomous mobile robot were calculated from measured distances (depth sensor) and calculated angles (RGB camera) with respect to the landmark points. A time-measurement script was used to compare the speed of the landmark-finding algorithm for localization with one landmark versus several landmarks in the picture. The main source code was written in the MS Visual Studio C# programming language with Microsoft.Kinect.1.7.dll on a Windows-based PC. The algorithms described in this article were created for the future development of autonomous agronomical mobile robot localization and control.
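The localization step, computing robot coordinates from a measured distance (depth sensor) and angle (camera) to a known landmark, can be sketched as below; the landmark coordinates and the assumption of a known robot heading are illustrative, not details from the article:

```python
import math

def robot_position(landmark_xy, distance, bearing_deg, heading_deg=0.0):
    """The landmark lies at 'distance' along the ray at (heading + bearing),
    so the robot sits that far back along the same ray."""
    angle = math.radians(heading_deg + bearing_deg)
    lx, ly = landmark_xy
    return lx - distance * math.cos(angle), ly - distance * math.sin(angle)

# Landmark at (5, 0); a robot heading east sees it dead ahead at 5 m,
# so it must be at the origin.
x, y = robot_position((5.0, 0.0), distance=5.0, bearing_deg=0.0)
print(round(x, 6), round(y, 6))  # -> 0.0 0.0
```

With two or more landmarks, the same geometry over-determines the pose, so heading can be solved for as well rather than assumed.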
This paper suggests an alternative to the current approach to visual feedback for common robotic tasks in the nuclear industry, particularly those under the direct supervision of an operator. The concept depends on the use of head-mounted displays (HMDs) capable of presenting real-time video imagery from gimbaled cameras whose pointing direction is slaved to the head position of the operator wearing the HMD. Tasks ranging from simple inspection to visualization of extreme, unexpected situations could benefit from greatly improved flexibility through this concept; this natural, autonomous visual feedback loop allows the operator to concentrate on the actual robotic manipulation, in addition to improving positional awareness of his robotic tools with respect to their surroundings. 1 fig
Peters, R. A., II; Sarkar, N.; Bodenheimer, R. E.; Brown, E.; Campbell, C.; Hambuchen, K.; Johnson, C.; Koku, A. B.; Nilas, P.; Peng, J.
Our research achievements under the NASA-JSC grant contributed significantly in the following areas. Multi-agent based robot control architecture called the Intelligent Machine Architecture (IMA) : The Vanderbilt team received a Space Act Award for this research from NASA JSC in October 2004. Cognitive Control and the Self Agent : Cognitive control in human is the ability to consciously manipulate thoughts and behaviors using attention to deal with conflicting goals and demands. We have been updating the IMA Self Agent towards this goal. If opportunity arises, we would like to work with NASA to empower Robonaut to do cognitive control. Applications 1. SES for Robonaut, 2. Robonaut Fault Diagnostic System, 3. ISAC Behavior Generation and Learning, 4. Segway Research.
During 1993, the activity at the University was split into two primary groups. One group provided direct support for the development and testing of the RVIR vehicle. This effort culminated in a demonstration of the vehicle at ORNL during December. The second group of researchers focused attention on pushing the technology forward in the areas of radiation imaging, navigation, and sensing modalities. A major effort in technology transfer took place during this year. All of these efforts reflected in the periodic progress reports which are attached. During 1994, our attention will change from the Nuclear Energy program to the Environmental Restoration and Waste Management office. The immediate needs of the Robotics Technology Development Program within the Office of Technology Development of EM drove this change in target applications. The University will be working closely with the national laboratories to further develop and transfer existing technologies to mobile platforms which are currently being designed and employed in seriously hazardous environments
Larouche, Benoit P.
The doctoral research is to develop an autonomous intelligent robotic manipulator technology for on-orbit servicing (OOS). More specifically, the research is focused on one of the most critical tasks in OOS: the capture of a non-cooperative object whilst minimizing impact forces and accelerations. The objective of the research is the development of a vision-based control theory, and the implementation and testing of the developed theory by designing and constructing a custom non-redundant holonomic robotic manipulator. The research validated the newly developed control theory and its ability to (i) capture a moving target autonomously and (ii) minimize unfavourable contact dynamics during the most critical parts of the capture operation between the capture satellite and a non-cooperative/tumbling object. A custom robotic manipulator functional prototype has been designed, assembled, constructed, and programmed from concept to completion in order to provide full customizability and controllability in both hardware and software. Based on this test platform, a thorough experimental investigation has been conducted to validate the newly developed control methodologies governing the behaviour of the robotic manipulator (RM) in an autonomous capture. The capture itself is effected on non-cooperative targets in a simulated zero-gravity environment. The RM employs a vision system, force sensors, and encoders to sense its environment. Control is effected through position and pseudo-torque inputs to three stepper motors and three servo motors. The controller is a modified hybrid force/neural-network impedance controller based on N. Hogan's original work. The experimental results demonstrate that the set objectives of this thesis have been successfully achieved.
Cheng, Linfu; Mckendrick, John D.; Liu, Jeffrey
Ongoing applied research is focused on developing guidance system for robot vehicles. Problems facing the basic research needed to support this development (e.g., scene understanding, real-time vision processing, etc.) are major impediments to progress. Due to the complexity and the unpredictable nature of a vehicle's area of operation, more advanced vehicle control systems must be able to learn about obstacles within the range of its sensor(s). A better understanding of the basic exploration process is needed to provide critical support to developers of both sensor systems and intelligent control systems which can be used in a wide spectrum of autonomous vehicles. Elcee Computek, Inc. has been working under contract to the Flight Dynamics Laboratory, Wright Research and Development Center, Wright-Patterson AFB, Ohio to develop a Knowledge/Geometry-based Mobile Autonomous Robot Simulator (KMARS). KMARS has two parts: a geometry base and a knowledge base. The knowledge base part of the system employs the expert-system shell CLIPS ('C' Language Integrated Production System) and necessary rules that control both the vehicle's use of an obstacle detecting sensor and the overall exploration process. The initial phase project has focused on the simulation of a point robot vehicle operating in a 2D environment.
“Autonomous manipulation” is a challenge in robotic technologies. It refers to the capability of a mobile robot system with one or more manipulators to perform intervention tasks that require physical contact in unstructured environments without continuous human supervision. Achieving autonomous manipulation capability is a quantum leap in robotic technologies, as it is currently beyond the state of the art in robotics. This book addresses the complexity of the problems encountered in autonomous manipulation, including representation and modeling of robotic structures, kinematic and dynamic robotic control, kinematic and algorithmic singularity avoidance, dynamic task priority, workspace optimization and environment perception. Further development in autonomous manipulation should provide robust improvements to the solutions for all of the above issues. The book provides an extensive tract on sensory-based autonomous manipulation for intervention tasks in unstructured environments...
Bhandari, Susmita; Mathis, Allison; Mohiuddin, Kashif; Pietrocola, David; Restrepo, Maria; Ahlgren, David J.
ALVIN-VII is an autonomous vehicle designed to compete in the AUVSI Intelligent Ground Vehicle Competition (IGVC). The competition consists of two events, the Autonomous Challenge and the Navigation Challenge. Using a tri-processor control architecture, the information from sonar sensors, cameras, GPS and compass is effectively integrated to map out the path of the robot. In the Autonomous Challenge, the real-time data from two FireWire web cameras and an array of four sonar sensors are plotted on a custom-defined polar grid to identify the position of the robot with respect to the obstacles in its path. Depending on the position of the obstacles in the grid, a state number is determined and a command of action is retrieved from the state table. The image-processing algorithm comprises a series of steps involving plane extraction, morphological analysis, edge extraction and interpolation, all of which are statistically based, allowing optimum operation under varying ambient conditions. In the Navigation Challenge, data from GPS and sonar sensors are integrated on a polar grid with flexible distance thresholds, and a state-table approach is used to drive the vehicle to the next waypoint while avoiding obstacles. Both algorithms are developed and implemented using National Instruments (NI) hardware and LabVIEW software. The task of collecting and processing information in real time can be time-consuming and hence not reactive enough for moving robots. With three controllers, the image processing is done separately on one controller per camera while the third controller integrates the data received through an Ethernet connection.
Alexander, Harold L.
A program of research embracing teleoperator and automatic navigational control of freely flying satellite robots is presented. Current research goals include: (1) developing visual operator interfaces for improved vehicle teleoperation; (2) determining the effects of different visual interface system designs on operator performance; and (3) achieving autonomous vision-based vehicle navigation and control. This research program combines virtual-environment teleoperation studies and neutral-buoyancy experiments using a space-robot simulator vehicle currently under development. Visual-interface design options under investigation include monoscopic versus stereoscopic displays and cameras, helmet-mounted versus panel-mounted display monitors, head-tracking versus fixed or manually steerable remote cameras, and the provision of vehicle-fixed visual cues, or markers, in the remote scene for improved sensing of vehicle position, orientation, and motion.
This thesis deals with the navigation and piloting of an autonomous robot in a known or weakly known two-dimensional environment. This amounts to generating an optimal path to a given goal and then computing the commands to follow that path. Several constraints are taken into account (obstacles, geometry and kinematics of the robot, dynamic effects). The first part defines the problem and presents the state of the art. The three following parts present a set of complementary solutions according to the level of knowledge of the environment and the space constraints: - Case of a known environment: generation and following of a trajectory through given path points. - Case of a weakly known environment: coupling of a command module, interacting with environment perception, with a path planner; this allows fast motion of the robot. - Case of a constrained environment: a planner that takes into account many constraints, such as the robot's shape, turning-radius limitation, backward motion and orientation. (author)
Rao, N. S. V.; Iyengar, S. S.; Weisbin, C. R.
The following problem is considered: A point robot is placed in a terrain populated by an unknown number of polyhedral obstacles of varied sizes and locations in two/three dimensions. The robot is equipped with a sensor capable of detecting all the obstacle vertices and edges that are visible from the present location of the robot. The robot is required to autonomously navigate and build the complete terrain model using the sensor information. It is established that the necessary number of scanning operations needed for complete terrain model acquisition by any algorithm based on the scan-from-vertices strategy is given by Σ_{i=1}^{n} N(O_i) − n and Σ_{i=1}^{n} N(O_i) − 2n in two- and three-dimensional terrains respectively, where O = {O_1, O_2, ..., O_n} is the set of obstacles in the terrain and N(O_i) is the number of vertices of obstacle O_i.
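As a concrete check, the stated lower bound can be evaluated for a hypothetical terrain; the obstacle vertex counts below are invented for illustration.

```python
def min_scans(vertex_counts, dim=2):
    """Lower bound on the number of scanning operations for complete
    terrain model acquisition under the scan-from-vertices strategy:
    sum(N(O_i)) - n in 2D, sum(N(O_i)) - 2n in 3D."""
    n = len(vertex_counts)
    total = sum(vertex_counts)
    return total - n if dim == 2 else total - 2 * n

# Hypothetical 2D terrain: a triangle, a square and a pentagon.
print(min_scans([3, 4, 5], dim=2))  # 12 - 3 = 9 scans
```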
Zheng, Will Hua; Marzwell, Neville I.; Chau, Savio N.
Mission-critical systems typically employ multi-string redundancy to cope with possible hardware failure. Such systems are only as fault-tolerant as the number of redundant strings they carry: once a particular critical component exhausts its redundant spares, the multi-string architecture cannot tolerate any further hardware failure. This paper aims at addressing such catastrophic faults through the use of 'Self-Reconfigurable Chips' as a last-resort effort to 'repair' a faulty critical component.
Cochrane, W. A.; Luo, X.; Lim, T.; Taylor, W. D.; Schnetler, H.
A Micro-Autonomous Positioning System (MAPS) has been developed using micro-autonomous robots for the deployment of small mirrors within multi-object astronomical instruments for use on the next generation ground-based telescopes. The micro-autonomous robot is a two-wheel differential drive robot with a footprint of approximately 20 × 20 mm. The robot uses two brushless DC Smoovy motors with 125:1 planetary gearheads for positioning the mirror. This article describes the various elements of the overall system and in more detail the various robot designs. Also described in this article is the build and test of the most promising design, proving that micro-autonomous robot technology can be used in precision controlled applications.
Martinez, Dominique; Arhidi, Lotfi; Demondion, Elodie; Masson, Jean-Baptiste; Lucas, Philippe
Robots designed to track chemical leaks in hazardous industrial facilities or explosive traces in landmine fields face the same problem as insects foraging for food or searching for mates: the olfactory search is constrained by the physics of turbulent transport. The concentration landscape of wind-borne odors is discontinuous and consists of sporadically located patches. A prerequisite to olfactory search is that intermittent odor patches are detected. Because of its high speed and sensitivity, the olfactory organ of insects provides a unique opportunity for detection. Insect antennae have been used in the past to detect not only sex pheromones but also chemicals that are relevant to humans, e.g., volatile compounds emanating from cancer cells or toxic and illicit substances. We describe here a protocol for using insect antennae on autonomous robots and present a proof of concept for tracking odor plumes to their source. The global response of olfactory neurons is recorded in situ in the form of electroantennograms (EAGs). Our experimental design, based on a whole-insect preparation, allows stable recordings within a working day. In comparison, EAGs on excised antennae have a lifetime of 2 hr. A custom hardware/software interface was developed between the EAG electrodes and a robot. The measurement system resolves individual odor patches up to 10 Hz, which exceeds the time scale of artificial chemical sensors. The efficiency of EAG sensors for olfactory searches is further demonstrated in driving the robot toward a source of pheromone. By using identical olfactory stimuli and sensors as in real animals, our robotic platform provides a direct means for testing biological hypotheses about olfactory coding and search strategies. It may also prove beneficial for detecting other odorants of interest by combining EAGs from different insect species in a bioelectronic-nose configuration or using nanostructured gas sensors that mimic insect antennae. PMID:25145980
Taraglio, S.; Zanela, S.; Santini, A.; Nanni, V. [ENEA, Centro Ricerche Casaccia, Rome (Italy). Div. Robotica e Informatica Avanzata
The article presents some results of the Terpsichore project, aimed at developing and testing algorithms and applications for autonomous robotics. Four applications are described: dynamic mapping of a building's interior through the use of ultrasonic sensors; visual drive of an autonomous robot via a neural-network controller; a neural-network-based stereo vision system that steers a robot through unknown indoor environments; and the evolution of intelligent behaviours via the genetic-algorithm approach.
With the development of humanoid robots, autonomous stair climbing has become an important capability. Humanoid robots will play an important role in helping people tackle basic problems in the future. The main contribution of this thesis is that the NAO humanoid robot can climb a spiral staircase autonomously. In the vision module, the image-filtering and stair-contour detection algorithms contribute to calculating the location of the stairs accurately. Additionally, the st...
Jia, Bao-Zhi; Zhu, Ming
This paper describes a method of human guidance for the autonomous cruise of an indoor robot. A low-cost robot follows a person in a room and records the path for autonomous cruise using its monocular vision. A video-based object detection and tracking method is used to detect the target in the video received from the robot's camera. The validity of the human-guidance method is proved by experiment.
Christensen, David Johan; Andersen, Jens Christian; Blanke, Mogens;
This paper provides a brief overview of an underwater robotic system for autonomous inspection of confined offshore underwater structures. The system, which is currently in development, consists of heterogeneous modular robots able to physically dock and communicate with other robots, transport tools and robots, and recharge their batteries while underwater. These properties will provide the system, when fully developed, with unique capabilities such as the ability to adapt robotic morphology and function to the current task and to tolerate failures, enabling long-term autonomous operations.
The development of human-robot collaborative navigation for autonomous maintenance management of a nuclear installation has been conducted. The human-robot collaborative system operates by switching commands between autonomous navigation and manual navigation that incorporates human intervention. The autonomous navigation path is generated using a novel MLG algorithm based on Lozano-Pérez's visibility graph; the MLG optimizes the shortest distance under safety constraints. Manual navigation is performed using robot tele-operation tools. The MLG autonomous navigation experiment was conducted six times with varied 3-D starting-point and destination-point coordinates. The experiment shows good performance of autonomous robot maneuvering to avoid collision with obstacles. The switching navigation is interpreted using open or close commands over RS-232C, implemented in LabVIEW.
Dragone, Mauro; O'Donaghue, Ruadhan; Leonard, John J.; O'Hare, G. M. P.; Duffy, Brian R.; Patrikalakis, Andrew; Leederkerken, Jacques
The paper describes an ongoing effort to enable autonomous mobile robots to play soccer in unstructured, everyday environments. Unlike conventional robot soccer competitions that are usually held on purpose-built robot soccer "fields", in our work we seek to develop the capability for robots to demonstrate aspects of soccer-playing in more diverse environments, such as schools, hospitals, or shopping malls, with static obstacles (furniture) and dynamic natural obstacles (people). This problem...
Harris, C.; Evans, R.; Tidey, E.
A system has been developed to enable a robot vehicle to autonomously explore and map an indoor environment using only visual sensors. The vehicle is equipped with a single camera, whose output is wirelessly transmitted to an off-board standard PC for processing. Visual features within the camera imagery are extracted and tracked, and their 3D positions are calculated using a Structure from Motion algorithm. As the vehicle travels, obstacles in its surroundings are identified and a map of the explored region is generated. This paper discusses suitable criteria for assessing the performance of the system by computer-based simulation and practical experiments with a real vehicle. Performance measures identified include the positional accuracy of the 3D map and the vehicle's location, the efficiency and completeness of the exploration and the system reliability. Selected results are presented and the effect of key system parameters and algorithms on performance is assessed. This work was funded by the Systems Engineering for Autonomous Systems (SEAS) Defence Technology Centre established by the UK Ministry of Defence.
ARIES (Autonomous Robotic Inspection Experimental System) is under development for the Department of Energy (DOE) to survey and inspect drums containing low-level radioactive waste stored in warehouses at DOE facilities. This paper focuses on the mechanical deployment system, referred to as the camera positioning system (CPS), used in the project. The CPS positions four identical but separate camera packages consisting of vision cameras and other required sensors such as bar-code readers and light-stripe projectors. The CPS is attached to the top of a mobile robot and consists of two mechanisms. The first is a lift mechanism composed of five interlocking rail elements, which starts from a retracted position and extends upward to simultaneously position three separate camera packages to inspect the top three drums of a column of four. The second is a parallelogram, a special case of the Grashof four-bar mechanism, which is used for positioning a camera package on drums on the floor. Both mechanisms are the subject of this paper, with the lift mechanism discussed in detail.
Given the difficulty of hand-coding task schemes, a behavior-based intelligent architecture for an autonomous micro mobile robot for fault repair is presented. By integrating reinforcement learning with group-behavior evolution, simulating human learning and evolution, the autonomous micro mobile robot automatically generates actions suited to its environment. The designer only devises some basic behaviors, which decreases the designer's workload and reduces the robot's cognitive deficiency with respect to the environment. Simulation results have shown that the architecture endows the micro robot with the abilities of learning, adaptation and robustness, as well as the ability to accomplish the given task.
Dias, Bruno Miguel Morais
This dissertation aims to guarantee the integration of a mobile autonomous robot, equipped with many sensors, into a multi-agent distributed and georeferenced surveillance system. The integration of a mobile autonomous robot into this system leads to new features that clients of the surveillance system may use. These features may be of two types: using the robot as an agent that acts in the environment, or using the robot as a mobile set of sensors. As an agent in the syst...
A robot system consists of autonomous mobile robots, each of which repeats Look-Compute-Move cycles: the robot observes the positions of the other robots (Look phase), computes the track to its next location (Compute phase), and moves along the track (Move phase). In this survey, we focus on the self-organization of mobile robots, especially their power of forming patterns. The formation power of a robot system is the class of patterns that the robots can form, and existing results show that this power is determined by the robots' asynchrony, obliviousness, and visibility. We briefly survey existing results, covering both impossibility results and pattern-formation algorithms. Finally, we present several open problems related to the pattern-formation problem for mobile robots.
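The cycle can be sketched as a synchronous round in which every robot looks, computes, and moves; the centroid-based gathering rule below is purely illustrative and is not taken from the survey.

```python
import math

def lcm_cycle(positions, step=0.5):
    """One synchronous round of Look-Compute-Move for every robot.
    Each robot observes the others (Look), picks the centroid of the
    observed positions as its next location (Compute; an illustrative
    gathering rule), and advances along the track toward it (Move)."""
    new_positions = []
    for i, pos in enumerate(positions):
        others = [p for j, p in enumerate(positions) if j != i]   # Look
        cx = sum(p[0] for p in others) / len(others)              # Compute
        cy = sum(p[1] for p in others) / len(others)
        dx, dy = cx - pos[0], cy - pos[1]                         # Move
        d = math.hypot(dx, dy)
        if d <= step:
            new_positions.append((cx, cy))
        else:
            new_positions.append((pos[0] + step * dx / d,
                                  pos[1] + step * dy / d))
    return new_positions

# Three robots repeatedly running the cycle converge toward a point.
pts = [(0.0, 0.0), (4.0, 0.0), (2.0, 3.0)]
for _ in range(20):
    pts = lcm_cycle(pts)
```

Under this rule the robots gather; replacing the Compute phase with a different target rule yields other formation behaviors.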
Wang, P. K. C.
A general navigation strategy for multiple autonomous robots in a bounded domain is developed analytically. Each robot is modeled as a spherical particle (i.e., an effective spatial domain about the center of mass); its interactions with other robots or with obstacles and domain boundaries are described in terms of the classical many-body problem; and a collision-avoidance strategy is derived and combined with homing, robot-robot, and robot-obstacle collision-avoidance strategies. Results from homing simulations involving (1) a single robot in a circular domain, (2) two robots in a circular domain, and (3) one robot in a domain with an obstacle are presented in graphs and briefly characterized.
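A minimal sketch of such a combined homing and collision-avoidance law under the particle model might look as follows; the gains and force laws are invented for illustration and are not those derived in the paper.

```python
import math

def nav_step(pos, home, others, dt=0.01, k_home=1.0, k_rep=0.5, r_safe=1.0):
    """One integration step for a spherical-particle robot: attraction
    toward the home position plus short-range repulsion from the other
    robots (illustrative potential-field law, not the paper's)."""
    fx = k_home * (home[0] - pos[0])          # homing attraction
    fy = k_home * (home[1] - pos[1])
    for ox, oy in others:
        dx, dy = pos[0] - ox, pos[1] - oy
        d = math.hypot(dx, dy)
        if 0 < d < r_safe:                    # repel inside safety radius
            mag = k_rep * (1.0 / d - 1.0 / r_safe) / d ** 2
            fx += mag * dx
            fy += mag * dy
    return (pos[0] + dt * fx, pos[1] + dt * fy)

# A single robot with no neighbors simply homes in on its target.
p = (0.0, 0.0)
for _ in range(2000):
    p = nav_step(p, (1.0, 0.0), [])
```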
Casals, Alicia; Fernández Caballero, Antonio
The special issue on "Robotics and Autonomous Systems in the 50th Anniversary of Artificial Intelligence" collects a subset of the best papers in the fields of Robotics and Autonomous Systems presented at the Campus Multidisciplinary in Perception and Intelligence, CMPI-2006. The CMPI-2006 international conference, held in Albacete, Spain, from July 10 to 14, 2006, resulted in a forum for scientists in commemoration of the 50th Anniversary of Artificial Intelligence, which successfully report...
Luo, Chaomin; Krishnan, Mohan; Paulik, Mark; Jan, Gene Eu
This paper addresses a trace-guided real-time navigation and map-building approach for an autonomous mobile robot. A wave-front based global path planner is developed to generate a global trajectory for the robot. A Modified Vector Field Histogram (M-VFH) is employed, based on LIDAR sensor information, to guide the robot locally so that it traverses autonomously with obstacle avoidance while following the traces provided by the global path planner. A local map composed of square grids is created by the local navigator as the robot traverses with limited LIDAR sensory information. From the measured sensory information, a map of the robot's immediate surroundings is dynamically built for robot navigation. The real-time wave-front based navigation and map-building methodology has been successfully demonstrated in a Player/Stage simulation environment. With the wave-front based global path planner and the M-VFH local navigator, a safe, short, and reasonable trajectory is successfully planned in a majority of situations without any templates, without explicitly optimizing any global cost functions, and without any learning procedures. The effectiveness, feasibility, efficiency and simplicity of the proposed real-time navigation and map building for an autonomous mobile robot have been validated by simulation and comparison studies, which demonstrate that the proposed method plans more reasonable and shorter collision-free trajectories autonomously than the other path-planning approaches considered.
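A wave-front planner of the kind described can be sketched as a breadth-first expansion from the goal over an occupancy grid, with the path read off by descending the distance labels; the grid below is a toy example, not taken from the paper.

```python
from collections import deque

def wavefront(grid, goal):
    """Wave-front expansion on a 4-connected occupancy grid: BFS from
    the goal labels each free cell with its distance to the goal."""
    rows, cols = len(grid), len(grid[0])
    dist = [[None] * cols for _ in range(rows)]
    dist[goal[0]][goal[1]] = 0
    q = deque([goal])
    while q:
        r, c = q.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols \
                    and grid[nr][nc] == 0 and dist[nr][nc] is None:
                dist[nr][nc] = dist[r][c] + 1
                q.append((nr, nc))
    return dist

def extract_path(dist, start):
    """Follow strictly decreasing distance labels from start to goal."""
    if dist[start[0]][start[1]] is None:
        return None                         # start unreachable
    path = [start]
    r, c = start
    while dist[r][c] != 0:
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < len(dist) and 0 <= nc < len(dist[0]) \
                    and dist[nr][nc] is not None \
                    and dist[nr][nc] == dist[r][c] - 1:
                r, c = nr, nc
                break
        path.append((r, c))
    return path

grid = [[0, 0, 0, 0],
        [0, 1, 1, 0],
        [0, 0, 0, 0]]                       # 1 = obstacle cell
d = wavefront(grid, goal=(0, 3))
path = extract_path(d, start=(2, 0))
```

The resulting path hugs the free cells around the obstacle block; M-VFH-style local avoidance would then be layered on top of this global trace.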
Rufus Blas, Morten; Riisgaard, Søren; Ravn, Ole;
Interpreting laser data to allow autonomous robot navigation on paved as well as dirt roads using a fixed-angle 2D laser scanner is a daunting task. This paper introduces an algorithm for terrain classification that fuses four distinctly different classifiers: raw height, step size, slope, and...... with a department-developed Medium Mobile Robot and tests conducted in a national park environment....
Tripathi, G. N.; Rihani, V.
The paper presents the electronic design and motion planning of a robot whose decisions about straight motion and precise turns are made using an Artificial Neural Network (ANN). The ANN helps the robot learn so that it performs its motion autonomously. The calculated weights are implemented in a microcontroller. The performance has been tested and found to be excellent.
LUND, Henrik Hautop; Pagliarini, Luigi
Distributed robotics takes many forms, for instance, multirobots, modular robots, and self-reconfigurable robots. The understanding and development of such advanced robotic systems demand extensive knowledge in engineering and computer science. In this paper, we describe the concept of a distributed educational system as a valuable tool for introducing students to interactive parallel and distributed processing programming as the foundation for distributed robotics and human-robot interaction...
Stroupe, Ashley; Huntsberger, Terry; Okon, Avi; Aghazarian, Hrand; Robinson, Matthew
The Robot Construction Crew (RCC) is a heterogeneous multi-robot system for the autonomous construction of a structure through the assembly of long components. The two-robot team demonstrates component placement into an existing structure in a realistic environment. The task requires component acquisition, cooperative transport, and cooperative precision manipulation. A behavior-based architecture provides adaptability. The RCC approach minimizes computation, power, communication, and sensing for applicability to space-related construction efforts, but the techniques are also applicable to terrestrial construction tasks.
Bicho, E.; Ribeiro, Fernando; Louro, Luis, ed. lit.; International Conference on Autonomous Robot Systems and Competitions, 12, Guimarães, 2012; Robotica’2012
This is the 2012 edition of the scientific meeting of the Portuguese Robotics Open (ROBOTICA’2012). It aims to disseminate scientific contributions and to promote discussion of theories, methods and experiences in areas of relevance to Autonomous Robotics and Robotic Competitions. All accepted contributions are included in this proceedings book. The conference program also included an invited talk by Dr.ir. Raymond H. Cuijpers, from the Department of Human Technology Interacti...
The Regolith Advanced Surface Systems Operations Robot (RASSOR) Phase 2 is an excavation robot for mining regolith on a planet such as Mars. The robot is programmed using the Robot Operating System (ROS), and it also uses a physical simulation program called Gazebo. This internship focused on various functions of the program in order to make the robot more professional and efficient. During the internship, another project, the Smart Autonomous Sand-Swimming Excavator, was also worked on. This is a robot designed to dig through sand and extract sample material. The intern worked on programming the Sand-Swimming robot and designing the electrical system to power and control it.
A biomimetic robot inspired by Cyanea capillata, termed as ‘Cyro’, was developed to meet the functional demands of underwater surveillance in defense and civilian applications. The vehicle was designed to mimic the morphology and swimming mechanism of the natural counterpart. The body of the vehicle consists of a rigid support structure with linear DC motors which actuate eight mechanical arms. The mechanical arms in conjunction with artificial mesoglea create the hydrodynamic force required for propulsion. The full vehicle measures 170 cm in diameter and has a total mass of 76 kg. An analytical model of the mechanical arm kinematics was developed. The analytical and experimental bell kinematics were analyzed and compared to the C. capillata. Cyro was found to reach the water surface untethered and autonomously from a depth of 182 cm in five actuation cycles. It achieved an average velocity of 8.47 cm s−1 while consuming an average power of 70 W. A two-axis thrust stand was developed to calculate the thrust directly from a single bell segment yielding an average thrust of 27.9 N for the whole vehicle. Steady state velocity during Cyro's swimming test was not reached but the measured performance during its last swim cycle resulted in a cost of transport of 10.9 J (kg ⋅ m)−1 and total efficiency of 0.03. (paper)
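The reported cost of transport follows directly from the reported power, mass and velocity:

```python
# Cost of transport COT = P / (m * v): energy expended per unit mass
# per unit distance, using the values reported for Cyro.
P = 70.0        # average power, W
m = 76.0        # vehicle mass, kg
v = 0.0847      # average velocity, m/s (8.47 cm/s)

cot = P / (m * v)
print(round(cot, 1))  # ≈ 10.9 J/(kg·m), matching the reported value
```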
Belyakov, Vladimir; Makarov, Vladimir; Zezyulin, Denis; Kurkin, Andrey; Pelinovsky, Efim
Hazardous phenomena in the coastal zone lead to topographic changes that are difficult to inspect by traditional methods. This is why autonomous robots are used for the collection of nearshore topographic and hydrodynamic measurements. The robot RTS-Hanna is well known (Wubbold, F., Hentschel, M., Vousdoukas, M., and Wagner, B. Application of an autonomous robot for the collection of nearshore topographic and hydrodynamic measurements. Coastal Engineering Proceedings, 2012, vol. 33, paper 53). We describe here several mobile-system designs developed in the Laboratory "Transported Machines and Transported Complexes", Nizhny Novgorod State Technical University. They can be used in field surveys and in monitoring nearshore wave regimes.
J. de Hoog; S. Cameron; A. Visser
Teams of communicating robots are likely to be used for a wide range of applications in the near future, such as robotic search and rescue or robotic exploration of hostile and remote environments. In such scenarios, environments are likely to contain significant interference, and multi-robot systems must be able to cope with loss of communication. We propose a novel multi-robot exploration approach, role-based exploration, in which members of the team explicitly plan to explore beyond communi...
Joaquin, Sitte; Felix, Werner
Autonomous robots must carry out useful tasks all by themselves, relying entirely on their own perception of their environment. The cognitive abilities required for autonomous action are largely independent of robot size, which makes mini robots attractive as artefacts for research, education and entertainment. Autonomous mini robots must be small enough for experimentation on a desktop or in a small laboratory. They must be easy to carry and safe for interaction with humans, and they must not be expensive. Mini-robot designers have to work at the leading edge of technology so that their creations can carry out purposeful autonomous action under these constraints. Since 2001, researchers have met every two years at an international symposium to report on the advances achieved in Autonomous Mini Robots for Research and Edutainment (AMiRE). The AMiRE Symposium is a single-track conference that offers ample opportunities for discussion and exchange of ideas. This volume contains the contributed papers of the 2011 AM...
Luo, Chaomin; Krishnan, Mohan; Paulik, Mark; Fallouh, Samer
In this paper, the development of a low-cost PID controller with an intelligent behavior-coordination system is described for an autonomous mobile robot equipped with IR sensors, ultrasonic sensors, a regulator, and RC filters, on a robot platform based on an HCS12 microcontroller and embedded systems. A novel hybrid PID controller and behavior-coordination system is developed for wall-following navigation and obstacle avoidance. The adaptive control used in this robot is a hybrid PID algorithm associated with template and behavior-coordination models. Software development covers motor control, the behavior-coordination intelligent system and sensor fusion. In addition, a module-based programming technique is adopted to improve the efficiency of integrating the hybrid PID, template and behavior-coordination algorithms. The hybrid model synthesizes PID control with template and behavior-coordination techniques for wall-following navigation with obstacle avoidance. The motor-control, obstacle-avoidance, and wall-following navigation algorithms propel and steer the autonomous mobile robot. Experiments validate how this PID controller and behavior-coordination system directs the robot to perform wall-following navigation with obstacle avoidance. The hardware configuration and module-based technique are described, and experimental results demonstrate that the robot is successfully guided by the hybrid PID controller and behavior-coordination system.
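A minimal sketch of a discrete PID loop for wall following, assuming a side-facing range sensor; the gains, set distance, and sign convention below are invented for illustration and are not the paper's tuned values.

```python
class PID:
    """Discrete PID controller (illustrative gains, not the paper's)."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, error):
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return (self.kp * error
                + self.ki * self.integral
                + self.kd * derivative)

# Wall following: keep the side-facing range sensor at a set distance.
TARGET = 0.30                       # desired wall distance, m (assumed)
pid = PID(kp=2.0, ki=0.1, kd=0.5, dt=0.05)

def steering_command(side_distance):
    # Positive output steers toward the wall when the robot is too far
    # from it (sign convention is an assumption of this sketch).
    return pid.update(side_distance - TARGET)
```

In the paper's system, the output of such a loop would be arbitrated by the behavior-coordination layer against the obstacle-avoidance behavior.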
Research on humanoid robotics in the Mechatronics and Automation Laboratory, Electrical and Computer Engineering, Islamic Azad University, Khorasgan branch (Isfahan), Iran, was started at the beginning of this decade. Various research prototypes of humanoid robots have been designed and have gone through evolution over these years. This paper describes the hardware and software design of the kid-size humanoid robot systems of the PERSIA Team in 2009. The robot has 20 actuated degrees of freedom based on the Hitec HSR898. In this paper we focus on areas such as the mechanical structure, the image-processing unit, the robot controller, robot AI and behavior learning. In 2009, our developments for the kid-size humanoid robot included: (1) the design and construction of our new humanoid robots; (2) the design and construction of new hardware and software controllers to be used in our robots. The project is described in two main parts: hardware and software. The software is a robot application which consists of a walking controller, autonomous robot motion, self-localization based on vision and a particle filter, local AI, trajectory planning, a motion controller and networking. The hardware consists of the mechanical structure and the driver circuit board. Each robot is able to walk, fast-walk, pass, kick and dribble when it catches the ball. These humanoids have successfully participated in various robotic soccer competitions. This project is still in progress, and some new methods of interest are described in the current report.
Gross, Anthony R.; Briggs, Geoffrey A.; Glass, Brian J.; Pedersen, Liam; Kortenkamp, David M.; Wettergreen, David S.; Nourbakhsh, I.; Clancy, Daniel J.; Zornetzer, Steven (Technical Monitor)
Space exploration missions are evolving toward more complex architectures involving more capable robotic systems, new levels of human and robotic interaction, and increasingly autonomous systems. How this evolving mix of advanced capabilities will be utilized in the design of new missions is a subject of much current interest. Cost and risk constraints also play a key role in the development of new missions, resulting in a complex interplay of a broad range of factors in the mission development and planning of new missions. This paper will discuss how human, robotic, and autonomous systems could be used in advanced space exploration missions. In particular, a recently completed survey of the state of the art and the potential future of robotic systems, as well as new experiments utilizing human and robotic approaches will be described. Finally, there will be a discussion of how best to utilize these various approaches for meeting space exploration goals.
Kurkin, Andrey; Zeziulin, Denis; Makarov, Vladimir; Belyakov, Vladimir; Tyugin, Dmitry; Pelinovsky, Efim
The project covers the development of a technology for monitoring and forecasting the state of the coastal-zone environment using radar equipment transported by autonomous mobile robotic systems (AMRS). Sought-after areas of application are the eastern and northern coasts of Russia, where continuous collection of information on topographic changes of the coastal zone, and hydrodynamic measurements in environments inaccessible to humans, are needed. The intensity of the wave reflections received by the surveillance radar is directly related to wave height. Mathematical models and algorithms have been developed for processing experimental data (signal selection, spectral analysis, wavelet analysis), for recalculating landwash from offshore wave heights, and for determining threshold values of offshore wave heights. A software suite for operating the experimental AMRS prototype has been developed, comprising the following modules: data loading, reporting, georeferencing, data analysis, monitoring, hardware control, and a graphical user interface. Further work will involve testing the manufactured experimental prototype along selected coastline routes on Sakhalin Island. Field tests will reveal shortcomings of the development and identify ways to optimize the structure and functioning algorithms of the AMRS, as well as the operation of the measuring equipment. The presented results were obtained at Nizhny Novgorod State Technical University n.a. R. Alekseev in the framework of the Federal Target Program «Research and development on priority directions of scientific-technological complex of Russia for 2014 - 2020 years» (agreement № 14.574.21.0089; unique identifier of agreement - RFMEFI57414X0089).
Nassiraei, Amir Ali Forough
(Abstract) During the 21st century, it is expected that robots with different degrees of autonomy and mobility will play an increasingly important role in all aspects of human life. These kinds of robots will thus become much more complex than today's, and the development of such robots presents a great challenge for researchers. However, the drawbacks of robot complexity, and the necessity of more complex hardware, software and mechanical structure, may lead to low reliability and increasing...
Leitner, Jürgen; Schmidhuber, Jürgen; Förster, Alexander
Although robotics research has seen advances over the last decades, robots are still not in widespread use outside industrial applications. Yet a range of proposed scenarios have robots working together with, helping, and coexisting with humans in daily life. All of these scenarios create a clear need to deal with a more unstructured, changing environment. I herein present a system that aims to overcome the limitations of highly complex robotic systems in terms of autonomy and adaptation. The main focus...
Wehner, Michael; Truby, Ryan L; Fitzgerald, Daniel J; Mosadegh, Bobak; Whitesides, George M; Lewis, Jennifer A; Wood, Robert J
Soft robots possess many attributes that are difficult, if not impossible, to achieve with conventional robots composed of rigid materials. Yet, despite recent advances, soft robots must still be tethered to hard robotic control systems and power sources. New strategies for creating completely soft robots, including soft analogues of these crucial components, are needed to realize their full potential. Here we report the untethered operation of a robot composed solely of soft materials. The robot is controlled with microfluidic logic that autonomously regulates fluid flow and, hence, catalytic decomposition of an on-board monopropellant fuel supply. Gas generated from the fuel decomposition inflates fluidic networks downstream of the reaction sites, resulting in actuation. The body and microfluidic logic of the robot are fabricated using moulding and soft lithography, respectively, and the pneumatic actuator networks, on-board fuel reservoirs and catalytic reaction chambers needed for movement are patterned within the body via a multi-material, embedded 3D printing technique. The fluidic and elastomeric architectures required for function span several orders of magnitude from the microscale to the macroscale. Our integrated design and rapid fabrication approach enables the programmable assembly of multiple materials within this architecture, laying the foundation for completely soft, autonomous robots. PMID:27558065
Nishiwaki, Koichi; Kuffner, James; Kagami, Satoshi; Inaba, Masayuki; Inoue, Hirochika
This paper gives an overview of the humanoid robot 'H7', which was developed over several years as an experimental platform for walking, autonomous behaviour and human interaction research at the University of Tokyo. H7 was designed to be a human-sized robot capable of operating autonomously in indoor environments designed for humans. The hardware is relatively simple to operate and conduct research on, particularly with respect to the hierarchical design of its control architecture. We describe the overall design goals and methodology, along with a summary of its online walking capabilities, autonomous vision-based behaviours and automatic motion planning. We show experimental results obtained by implementations running within a simulation environment as well as on the actual robot hardware. PMID:17148051
Soft robotics offers the unique promise of creating inherently safe and adaptive systems. These systems bring man-made machines closer to the natural capabilities of biological systems. An important requirement to enable self-contained soft mobile robots is an on-board power source. In this paper, we present an approach to create a bio-inspired soft robotic snake that can undulate in a similar way to its biological counterpart using pressure for actuation power, without human intervention. With this approach, we develop an autonomous soft snake robot with on-board actuation, power, computation and control capabilities. The robot consists of four bidirectional fluidic elastomer actuators in series to create a traveling curvature wave from head to tail along its body. Passive wheels between segments generate the necessary frictional anisotropy for forward locomotion. It takes 14 h to build the soft robotic snake, which can attain an average locomotion speed of 19 mm s−1. (paper)
Sinha, Aakash; Bhardwaj, Prashant; Vaibhav, Bipul; Mohommad, Noor
Ro-Boat is an autonomous, intelligent river-cleaning robot whose mechanical design and computer vision algorithms aim to achieve autonomous river cleaning and a sustainable environment. Ro-Boat is designed in a modular fashion, with design details covering mechanical structural design, hydrodynamic design and vibrational analysis. It combines a stable mechanical system with air and water propulsion, robotic arms and a solar energy source, and is made autonomous using computer vision. Both HSV colour-space features and SURF keypoints are proposed as measurements for a Kalman filter, resulting in extremely robust pollutant tracking. The system has been tested with successful results in the Yamuna River in New Delhi. We foresee that a fleet of Ro-Boats working autonomously 24x7 could clean a major river in a city in about six months' time, which is unmatched by alternative methods of river cleaning.
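The tracking pipeline described above pairs a colour/feature detector with a Kalman filter. A minimal constant-velocity filter over detected pollutant centroids might look like the sketch below; the state layout, matrices and noise levels are illustrative assumptions, not values from the paper, and the detections are simulated rather than taken from HSV/SURF measurements.

```python
import numpy as np

# Constant-velocity Kalman filter tracking a pollutant patch in the image
# plane. In the paper's pipeline the measurement (cx, cy) would come from
# HSV colour segmentation or SURF matching; here it is simulated.

dt = 1.0
F = np.array([[1, 0, dt, 0],   # state: [x, y, vx, vy]
              [0, 1, 0, dt],
              [0, 0, 1, 0],
              [0, 0, 0, 1]], dtype=float)
H = np.array([[1, 0, 0, 0],    # only position is measured
              [0, 1, 0, 0]], dtype=float)
Q = 0.01 * np.eye(4)           # process noise (assumed)
R = 4.0 * np.eye(2)            # measurement noise, pixels^2 (assumed)

x = np.zeros(4)                # initial state estimate
P = np.eye(4) * 100.0          # initial uncertainty

def kf_step(x, P, z):
    # Predict
    x = F @ x
    P = F @ P @ F.T + Q
    # Update with centroid measurement z = (cx, cy)
    y = z - H @ x
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ y
    P = (np.eye(4) - K @ H) @ P
    return x, P

# Feed a straight-line track of noisy centroid detections.
rng = np.random.default_rng(0)
for t in range(50):
    z = np.array([2.0 * t, 1.0 * t]) + rng.normal(0, 2, size=2)
    x, P = kf_step(x, P, z)

print(x[:2])   # estimated position, near (98, 49)
print(x[2:])   # estimated velocity, near (2, 1)
```

The velocity component of the state is what makes the tracker robust to missed detections: a predict-only step can coast across frames where the detector fails.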
Andersen, Nils Axel; Braithwaite, Ian David; Blanke, Mogens;
Cleaning of livestock buildings is the single most health-threatening task in the agricultural industry, and a transition to robot-based cleaning would be instrumental in improving working conditions for employees. Present cleaning robots fall short on cleaning quality, as they cannot perform condition-based cleaning. This paper describes how a novel sensor, developed for the purpose, and algorithms for classification and learning are combined with a commercial robot to obtain an autonomous system that meets the necessary quality attributes. These include selective cleaning, where dirty areas are detected, and calling for operator assistance only when a cleanness hypothesis cannot be made with confidence. The paper describes the design of a system in which learning from experience maps and operator instructions are combined to obtain a smart, autonomous cleaning robot.
For practical industrial applications, the development of trainable robots is an important and immediate objective. Therefore, the development of flexible intelligence directly applicable to training is emphasized. It is generally agreed upon by the AI community that the fusion of expert systems, neural networks, and conventionally programmed modules (e.g., a trajectory generator) is promising in the quest for autonomous robotic intelligence. Autonomous robot development is hindered by integration and architectural problems. Some obstacles to the construction of more general robot control systems are: (1) the growth problem; (2) software generation; (3) interaction with the environment; (4) reliability; and (5) resource limitation. Neural networks can be successfully applied to some of these problems. However, current implementations of neural networks are hampered by the resource limitation problem and must be trained extensively to produce computationally accurate output. A generalization of conventional neural nets is proposed, and an architecture is offered in an attempt to address the above problems.
Daemi, Ali; Pena, Edward; Ferguson, Paul
This paper describes the design and development of a platform for research in cooperative mobile robotics. The structure and mechanics of the vehicles are based on R/C cars. The vehicle is rendered mobile by a DC motor and servo motor. The perception of the robot's environment is achieved using IR sensors and a central vision system. A laptop computer processes images from a CCD camera located above the testing area to determine the position of objects in sight. This information is sent to each robot via RF modem. Each robot is operated by a Motorola 68HC11E micro-controller, and all actions of the robots are realized through the connections of IR sensors, modem, and motors. The intelligent behavior of each robot is based on a hierarchical fuzzy-rule based approach.
Andersen, Jens Christian; Blas, Morten Rufus; Andersen, Nils Axel; Ravn, Ole; Blanke, Mogens
Interpreting laser data to allow autonomous robot navigation on paved as well as dirt roads using a fixed-angle 2D laser scanner is a daunting task. This paper introduces an algorithm for terrain classification that fuses seven distinctly different classifiers: raw height, roughness, step size, curvature, slope, width and invalid data. These are then used to extract road borders and traversable terrain and to identify obstacles. Experimental results are shown and discussed. The results were obtained using a DTU-developed mobile robot, and the autonomous tests were conducted in a national park...
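A toy version of such a multi-classifier fusion can be sketched as follows. The thresholds, feature values and voting rule are invented for illustration (the abstract does not give the paper's actual decision rule), and the seventh cue, road width, is treated here as a border-extraction step rather than a per-cell vote.

```python
# Illustrative fusion of per-cell terrain classifiers like those named in
# the abstract. All thresholds below are invented placeholders.

THRESHOLDS = {
    "raw_height": 0.30,   # m above local ground plane
    "roughness":  0.05,   # m, spread of residuals to a local plane fit
    "step_size":  0.15,   # m, max height jump between neighbouring returns
    "curvature":  0.20,
    "slope":      0.30,   # rad
}

def classify_cell(features):
    """Return 'obstacle' or 'traversable' for one laser-scan cell.

    `features` maps classifier name -> measured value; 'invalid' flags
    cells with too few laser returns, which are never trusted as road.
    """
    if features.get("invalid", False):
        return "obstacle"
    votes_obstacle = sum(
        features[name] > thr for name, thr in THRESHOLDS.items()
    )
    # Conservative rule (assumed): any two classifiers agreeing -> obstacle.
    return "obstacle" if votes_obstacle >= 2 else "traversable"

flat_road = {"raw_height": 0.02, "roughness": 0.01, "step_size": 0.03,
             "curvature": 0.05, "slope": 0.04, "invalid": False}
curb      = {"raw_height": 0.18, "roughness": 0.06, "step_size": 0.20,
             "curvature": 0.10, "slope": 0.10, "invalid": False}

print(classify_cell(flat_road))  # traversable
print(classify_cell(curb))       # obstacle (roughness and step size both vote)
```

The appeal of fusing several weak cues this way is that no single noisy feature can flip the classification on its own.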
Richardson, Al; Rodgers, Michael H.
Sensors and computational capability have not reached the point of enabling small robots to navigate autonomously in unconstrained outdoor environments at tactically useful speeds. This problem is greatly reduced, however, if a soldier can lead the robot through terrain that he knows it can traverse. An application of this concept is a small pack-mule robot that follows a foot soldier over outdoor terrain. The soldier would be responsible for avoiding situations beyond the robot's limitations when they are encountered. Having learned the route, the robot could autonomously retrace the path carrying supplies and munitions. This would greatly reduce the soldier's workload under normal conditions. This paper presents a description of a developmental robot sensor system using low-cost commercial 3D vision and inertial sensors to address this application. The robot moves at fast walking speed and requires only short-range perception to accomplish its task. 3D-feature information is recorded on a composite route map that the robot uses to negotiate its local environment and retrace the path taught by the soldier leader.
Yamauchi, Brian; Moseley, Mark; Brookshire, Jonathan
As part of the TARDEC-funded CANINE (Cooperative Autonomous Navigation in a Networked Environment) Program, iRobot developed LABRADOR (Learning Autonomous Behavior-based Robot for Adaptive Detection and Object Retrieval). LABRADOR was based on the rugged, man-portable, iRobot PackBot unmanned ground vehicle (UGV) equipped with an explosives ordnance disposal (EOD) manipulator arm and a custom gripper. For LABRADOR, we developed a vision-based object learning and recognition system that combined a TLD (track-learn-detect) filter based on object shape features with a color-histogram-based object detector. Our vision system was able to learn in real-time to recognize objects presented to the robot. We also implemented a waypoint navigation system based on fused GPS, IMU (inertial measurement unit), and odometry data. We used this navigation capability to implement autonomous behaviors capable of searching a specified area using a variety of robust coverage strategies - including outward spiral, random bounce, random waypoint, and perimeter following behaviors. While the full system was not integrated in time to compete in the CANINE competition event, we developed useful perception, navigation, and behavior capabilities that may be applied to future autonomous robot systems.
Lopez de Meneses Novosilzov, Yuri; Nicoud, Jean-Daniel
Since the inception of robotics, visual information has been incorporated to allow robots to perform tasks that require interaction with their environment, particularly when it is a changing environment. Depth perception is among the most useful information for a mobile robot navigating its environment and interacting with its surroundings. Among the different methods capable of measuring the distance to objects in the scene, stereo vision is the most advantageous for a small, mo...
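For context, the geometric relation underlying stereo depth measurement is compact enough to state directly. The focal length and baseline below are invented figures for a small robot head, not parameters from this work.

```python
# Stereo triangulation for parallel cameras: a feature whose image
# positions differ by disparity d (pixels) between the two views, with
# focal length f (pixels) and baseline B (metres), lies at depth
#     Z = f * B / d.

def depth_from_disparity(f_px, baseline_m, disparity_px):
    if disparity_px <= 0:
        raise ValueError("feature must have positive disparity")
    return f_px * baseline_m / disparity_px

# Assumed example rig: 6 cm baseline, 500-pixel focal length.
print(depth_from_disparity(500.0, 0.06, 10.0))  # 3.0 m
```

The inverse relation is why stereo favours a short range on a small platform: depth resolution degrades quadratically as disparity shrinks toward the one-pixel matching limit.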
Tomatis, Nicola; Philippsen, Roland; Jensen, Björn; Arras, Kai Oliver; Terrien, G.; Piguet, Ralph; Siegwart, Roland
This paper presents the effort undertaken in designing and building both the hardware and software for a fully autonomous navigating vehicle aimed at a tour-guide application. The challenge in such a project is to combine industrial high-quality production of the mobile platforms with the mobile robot navigation and interaction techniques currently best available in academic research. For this, the experience and technology of the Autonomous Systems Lab at EPFL has been ...
Jun, Hyun Il.
An articulated arm with three degrees of freedom is implemented and tested on an autonomous robot. Kinematic equations of motion for the arm are modeled and tested. A communication architecture is successfully implemented for wireless manual control of the arm. Visual and thermal cues are realized with an onboard camera and a collocated thermal sensor. Future work suggests investigating fully autonomous arm control, without manual operator intervention, based on sensor cues and visual s...
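Forward kinematics for a generic 3-DOF articulated arm (base yaw, shoulder pitch, elbow pitch) can be written in a few lines. The joint layout and link lengths below are assumptions for the sketch; the thesis arm's actual geometry is not given in the abstract.

```python
import math

# Forward kinematics for an assumed 3-DOF arm: base yaw q1, shoulder
# pitch q2, elbow pitch q3, with link lengths L1 (upper arm) and L2
# (forearm). Angles in radians, lengths in metres.

def forward_kinematics(q1, q2, q3, L1=0.3, L2=0.25):
    """Return the (x, y, z) end-effector position in the base frame."""
    # Reach and height in the vertical plane of the arm:
    r = L1 * math.cos(q2) + L2 * math.cos(q2 + q3)
    z = L1 * math.sin(q2) + L2 * math.sin(q2 + q3)
    # Rotate that plane about the vertical base axis:
    return (r * math.cos(q1), r * math.sin(q1), z)

# Arm straight out along x: all joints at zero, reach = L1 + L2.
print(forward_kinematics(0.0, 0.0, 0.0))            # ~(0.55, 0.0, 0.0)
# Elbow bent 90 degrees: forearm points straight up.
print(forward_kinematics(0.0, 0.0, math.pi / 2))    # ~(0.30, 0.0, 0.25)
```

Inverting these equations (the harder direction, needed for autonomous control) reduces for this geometry to an `atan2` for the base yaw plus a planar two-link solution via the law of cosines.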
Remias, Leonard V.
Approved for public release; distribution is unlimited. Yamabico-11 is an autonomous mobile robot used as a research platform, with one research area being image understanding. Previous work focused on edge-detection analysis on a Silicon Graphics Iris (SGI) workstation, with no method for implementation on the robot. Yamabico-11 does not have an onboard image-processing capability to detect straight edges in a grayscale image, nor a method for allowing the user to analyze the data. The approach taken fo...
Aravind, G; Gautham, Vasan; Kumar, T. S. B Gowtham; Naresh, Balaji
Accumulation of dust on the surface of solar panels reduces the amount of radiation reaching them. This leads to loss of generated electric power and the formation of hotspots that would permanently damage the solar panel. This project aims at developing an autonomous vacuum-cleaning method which can be used on a regular basis to maximize the lifetime and efficiency of a solar panel. The system is implemented using two subsystems, namely a Robotic Vacuum Cleaner and a Docking Station. The Robotic ...
Youjin Shin; Donghyeon Kim; Hyunsuk Lee; Jooyoung Park; Woojin Chung
This paper deals with the autonomous navigation problem of a mobile robot in outdoor road environments. The target application is surveillance in petroleum storage bases. Although there have been remarkable technological achievements recently in the area of outdoor navigation, robotic systems are still expensive due to a large number of high cost sensors. This paper proposes the reliable extraction algorithm of traversable regions using a single onboard Laser Range Finder (LRF) in outdoor roa...
Falker, John; Zeitlin, Nancy; Leucht, Kurt; Stolleis, Karl
Kennedy Space Center has teamed up with the Biological Computation Lab at the University of New Mexico to create a swarm of small, low-cost, autonomous robots, called Swarmies, to be used as a ground-based research platform for in-situ resource utilization missions. The behavior of the robot swarm mimics the central-place foraging strategy of ants to find and collect resources in an unknown environment and return those resources to a central site.
Managing the movements of an autonomous mobile robot in its environment is a problem that has been tackled since the early integration of artificial intelligence and robotics. However, this problem remains difficult and no general solution has been devised. Among existing navigation strategies, we will focus on those that use a map to represent the spatial layout of the environment and that allow planning movements toward distant goals. Map-building and self-positioning within these maps are...
Watanabe, Yutaka; Yairi, Takehisa; Machida, Kazuo
Space robots will be needed in future space missions. Many types of space robots have been developed so far, but in particular, Intra-Vehicular Activity (IVA) space robots that support human activities should be developed to reduce human risk in space. In this paper, we study a motion-learning method for an IVA space robot with a multi-link mechanism. The advantage is that this space robot moves using the reaction forces of its multi-link mechanism and contact forces from the wall, like an astronaut space-walking, without using propulsion. The control approach is based on reinforcement learning with the actor-critic algorithm. We demonstrate the effectiveness of this approach using a 5-link space robot model in simulation. First, we simulate a space robot learning motion control, including the contact phase, in the two-dimensional case. Next, we simulate a space robot learning motion control while changing base attitude in the three-dimensional case.
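The actor-critic scheme named above can be illustrated on a toy problem. The sketch below is a minimal tabular actor-critic (TD(0) critic, softmax policy-gradient actor) on a 5-state corridor, which stands in for the 5-link robot whose dynamics the abstract does not specify; learning rates and episode counts are assumptions.

```python
import numpy as np

# Minimal tabular actor-critic: the critic learns state values V by TD(0),
# and the TD error also drives a policy-gradient update of the softmax
# actor. Toy task: walk right along states 0..4 to reach the goal.

rng = np.random.default_rng(0)
n_states, n_actions = 5, 2               # actions: 0 = left, 1 = right
GOAL = 4
alpha_v, alpha_pi, gamma = 0.1, 0.1, 0.95

V = np.zeros(n_states)                   # critic: state values
theta = np.zeros((n_states, n_actions))  # actor: action preferences

def policy(s):
    p = np.exp(theta[s] - theta[s].max())
    return p / p.sum()

for episode in range(2000):
    s = 0
    while s != GOAL:
        probs = policy(s)
        a = rng.choice(n_actions, p=probs)
        s_next = max(0, s - 1) if a == 0 else s + 1
        r = 1.0 if s_next == GOAL else 0.0
        # TD error: how much better/worse the transition was than expected.
        target = r if s_next == GOAL else r + gamma * V[s_next]
        delta = target - V[s]
        V[s] += alpha_v * delta          # critic update
        grad = -probs                    # softmax policy-gradient ...
        grad[a] += 1.0                   # ... d log pi(a|s) / d theta[s]
        theta[s] += alpha_pi * delta * grad
        s = s_next

# After training, the actor should prefer 'right' in every non-goal state.
print([int(np.argmax(theta[s])) for s in range(GOAL)])  # [1, 1, 1, 1]
```

In the paper's setting the discrete states and actions would be replaced by the robot's joint configuration and torques, but the critic/actor division of labour is the same.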
Andersen, Jens Christian; Andersen, Nils Axel; Ravn, Ole
This paper describes a navigation method based on road detection using both a laser scanner and a vision sensor. The method classifies the surface in front of the robot into traversable segments (road) and obstacles using the laser scanner; this classifies the area just in front of the robot (2...
Hansen, Karl Damkjær; Garcia Ruiz, Francisco Jose; Kazmi, Wajahat;
The ASETA project develops theory and methods for robotic agricultural systems. In ASETA, unmanned aircraft and unmanned ground vehicles are used to automate the task of identifying and removing weeds in sugar beet fields. The framework for a working automatic robotic weeding system is presented, along with the implemented computer vision systems.
Akin, H. Levent; Meriçli, Çetin; Meriçli, Tekin
Teaching the fundamentals of robotics to computer science undergraduates requires designing a well-balanced curriculum that is complemented with hands-on applications on a platform that allows rapid construction of complex robots, and implementation of sophisticated algorithms. This paper describes such an elective introductory course where the…
Panati, Subbash; Baasandorj, Bayanjargal; Chong, Kil To
Mobile robot navigation is an area of robotics which has gained massive attention among researchers in the robotics community. Path planning and obstacle avoidance are the key aspects of mobile robot navigation. This paper presents a harmonic-potential-field-based navigation algorithm for mobile robots. The harmonic potential field method overcomes the issue of local minima, which is a major bottleneck of the artificial potential field method. The harmonic potential field is calculated using harmonic functions, and Dirichlet boundary conditions are used for the obstacles, the goal and the initial position. The simulation results show that the proposed method overcomes the local minima issue and navigates successfully from the initial position to the goal without colliding with obstacles in a static environment.
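The method's two ingredients (a harmonic function with Dirichlet boundary conditions, then descent toward the goal) can be sketched on a small grid. The grid size, obstacle layout, relaxation scheme and descent rule below are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

# Harmonic potential field on a 20x20 grid: solve Laplace's equation by
# Jacobi relaxation with Dirichlet conditions (walls/obstacles held at 1,
# goal held at 0). Because the solution is harmonic, it has no interior
# local minima, so greedy descent from any free cell reaches the goal.

N = 20
phi = np.ones((N, N))
obstacle = np.zeros((N, N), dtype=bool)
obstacle[0, :] = obstacle[-1, :] = obstacle[:, 0] = obstacle[:, -1] = True
obstacle[5:15, 10] = True            # an interior wall with gaps above/below
goal = (10, 17)
free = ~obstacle
free[goal] = False                   # the goal value is also held fixed
phi[goal] = 0.0

for _ in range(5000):                # Jacobi sweeps until well converged
    avg = 0.25 * (np.roll(phi, 1, 0) + np.roll(phi, -1, 0) +
                  np.roll(phi, 1, 1) + np.roll(phi, -1, 1))
    phi[free] = avg[free]            # only free interior cells are updated

def descend(start, max_steps=400):
    path, cell = [start], start
    while cell != goal and len(path) < max_steps:
        i, j = cell
        nbrs = [(i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1)]
        cell = min(nbrs, key=lambda c: phi[c])   # steepest-descent neighbour
        path.append(cell)
    return path

path = descend((10, 2))              # start on the far side of the wall
print(path[-1])                      # reaches the goal: (10, 17)
```

Note how the path must detour around the wall: with an artificial (non-harmonic) potential, the concave pocket behind such a wall is exactly where a local minimum would trap the robot.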
Qadir, Raja Humza
This book describes how the principle of self-sufficiency can be applied to a reconfigurable modular robotic organism. It shows the design considerations for a novel REPLICATOR robotic platform, both hardware and software, featuring the behavioral characteristics of social insect colonies. Following a comprehensive overview of some of the bio-inspired techniques already available, and of the state-of-the-art in re-configurable modular robotic systems, the book presents a novel power management system with fault-tolerant energy sharing, as well as its implementation in the REPLICATOR robotic modules. In addition, the book discusses, for the first time, the concept of “artificial energy homeostasis” in the context of a modular robotic organism, and shows its verification on a custom-designed simulation framework in different dynamic power distribution and fault tolerance scenarios. This book offers an ideal reference guide for both hardware engineers and software developers involved in the design and implem...
Szakaly, Zoltan F.; Schenker, Paul S.
This paper describes a recipe for the construction of control systems that support complex machines such as multi-limbed/multi-fingered robots. The robot has to execute a task under varying environmental conditions and it has to react reasonably when previously unknown conditions are encountered. Its behavior should be learned and/or trained as opposed to being programmed. The paper describes one possible method for organizing the data that the robot has learned by various means. This framework can accept useful operator input even if it does not fully specify what to do, and can combine knowledge from autonomous, operator assisted and programmed experiences.
Squillante, M. R.; Derochemont, L. P.; Cirignano, L.; Lieberman, P.; Soller, M. S.
The goal of this program was to develop new sensing capabilities for autonomous robots operating in space. Information gained by the robot using these new capabilities would be combined with other information gained through more traditional capabilities, such as video, to help the robot characterize its environment as well as to identify known or unknown objects that it encounters. Several sensing capabilities using nuclear radiation detectors and backscatter technology were investigated. The result of this research has been the construction and delivery to NASA of a prototype system with three capabilities for use by autonomous robots. The primary capability was the use of beta particle backscatter measurements to determine the average atomic number (Z) of an object. This gives the robot a powerful tool to differentiate objects which may look the same, such as objects made out of different plastics or other lightweight materials. In addition, the same nuclear sensor used in the backscatter measurement can be used as a nuclear spectrometer to identify sources of nuclear radiation that may be encountered by the robot, such as nuclear powered satellites. A complete nuclear analysis system is included in the software and hardware of the prototype system built in phase 2 of this effort. Finally, a method to estimate the radiation dose in the environment of the robot has been included as a third capability. Again, the same nuclear sensor is used in a different operating mode and with different analysis software. Each of these capabilities is described.
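The average-Z measurement works because the beta backscatter fraction rises monotonically with atomic number, so an unknown material's Z can be read off a calibration curve built from known standards. The sketch below shows that inversion step; the calibration numbers are invented placeholders, not the instrument's data.

```python
import numpy as np

# Invert a monotone backscatter-vs-Z calibration curve by linear
# interpolation between measured standards. Values are illustrative only.

cal_Z = np.array([6.0, 13.0, 26.0, 29.0, 82.0])             # C, Al, Fe, Cu, Pb
cal_backscatter = np.array([0.07, 0.15, 0.27, 0.30, 0.50])  # assumed count ratios

def estimate_Z(backscatter_ratio):
    # np.interp requires ascending x-values, which a monotone curve gives us.
    return float(np.interp(backscatter_ratio, cal_backscatter, cal_Z))

print(estimate_Z(0.15))   # 13.0 (an unknown matching the Al standard)
```

With such a curve, two visually identical plastic parts with different fillers would land at measurably different backscatter ratios, which is the discrimination capability the prototype provides.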
Antonelo, Eric; Baerveldt, Albert-Jan; Rögnvaldsson, Thorsteinn; Figueiredo, Mauricio
Classical reinforcement learning mechanisms and a modular neural network are unified to conceive an intelligent autonomous system for mobile robot navigation. The design aims at inhibiting two common navigation deficiencies: the generation of unsuitable cyclic trajectories and ineffectiveness in risky configurations. Distinct design apparatuses are considered for tackling these navigation difficulties, for instance: 1) a neuron parameter for memorizing neuron activities (also functioning as ...
Matthes, Larry; Belluta, Paolo; McHenry, Michael
Four methods of detection of bodies of water are under development as means to enable autonomous robotic ground vehicles to avoid water hazards when traversing off-road terrain. The methods involve processing of digitized outputs of optoelectronic sensors aboard the vehicles. It is planned to implement these methods in hardware and software that would operate in conjunction with the hardware and software for navigation and for avoidance of solid terrain obstacles and hazards. The first method, intended for use during the day, is based on the observation that, under most off-road conditions, reflections of sky from water are easily discriminated from the adjacent terrain by their color and brightness, regardless of the weather and of the state of surface waves on the water. Accordingly, this method involves collection of color imagery by a video camera and processing of the image data by an algorithm that classifies each pixel as soil, water, or vegetation according to its color and brightness values (see figure). Among the issues that arise is the fact that in the presence of reflections of objects on the opposite shore, it is difficult to distinguish water by color and brightness alone. Another issue is that once a body of water has been identified by means of color and brightness, its boundary must be mapped for use in navigation. Techniques for addressing these issues are under investigation. The second method, which is not limited by time of day, is based on the observation that ladar returns from bodies of water are usually too weak to be detected. In this method, ladar scans of the terrain are analyzed for returns and the absence thereof. In appropriate regions, the presence of water can be inferred from the absence of returns. Under some conditions in which reflections from the bottom are detectable, ladar returns could, in principle, be used to determine depth. The third method involves the recognition of bodies of water as dark areas in short
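The first (daytime colour) method above amounts to per-pixel classification into soil, water, or vegetation from colour and brightness. A toy threshold version is sketched below; the hue/saturation/value cut-offs are invented for illustration, whereas the method described above classifies by colour and brightness values learned for the deployment conditions.

```python
import numpy as np

# Label each pixel soil, vegetation, or water from HSV colour and
# brightness. All thresholds are illustrative assumptions.

def classify_pixels(hue, sat, val):
    """hue in [0, 360); sat and val in [0, 1]; arrays of equal shape."""
    labels = np.full(hue.shape, "soil", dtype=object)
    # Sky reflected from water: blue hue, bright, weakly saturated.
    water = (hue > 180) & (hue < 260) & (val > 0.5) & (sat < 0.6)
    # Vegetation: green hue, moderately saturated.
    veg = (hue > 70) & (hue < 160) & (sat > 0.3)
    labels[water] = "water"
    labels[veg] = "vegetation"
    return labels

hue = np.array([220.0, 100.0, 30.0])   # blue sky-reflection, green, brown
sat = np.array([0.3, 0.6, 0.5])
val = np.array([0.8, 0.4, 0.4])
print(classify_pixels(hue, sat, val))  # ['water' 'vegetation' 'soil']
```

The failure mode the passage raises is visible in this formulation: a pixel reflecting a green far shore rather than blue sky would fall into the vegetation bin, which is why colour alone cannot close the problem.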
High-level intelligence allows a mobile robot to create and interpret complex world models, but without a precise control system, the accuracy of the world model and the robot's ability to interact with its surroundings are greatly diminished. This problem is amplified when the environment is hostile, such as in a battlefield situation where an error in movement or a slow response may lead to destruction of the robot. As the presence of robots on the battlefield continues to escalate and the trend toward relieving the human of the low-level control burden advances, the ability to combine the functionalities of several critical control systems on a single platform becomes imperative.
Arras, Michael K.; Protzel, Peter W.; Palumbo, Daniel L.
The topics are presented in viewgraph form and include: neural network controller for robot arm positioning with visual feedback; initial training of the arm; automatic recovery from cumulative fault scenarios; and error reduction by iterative fine movements.