WorldWideScience

Sample records for robot vision robot

  1. Robot vision

    International Nuclear Information System (INIS)

    Hall, E.L.

    1984-01-01

    Almost all industrial robots use internal sensors such as shaft encoders which measure rotary position, or tachometers which measure velocity, to control their motions. Most controllers also provide interface capabilities so that signals from conveyors, machine tools, and the robot itself may be used to accomplish a task. However, advanced external sensors, such as visual sensors, can provide a much greater degree of adaptability for robot control as well as add automatic inspection capabilities to the industrial robot. Visual and other sensors are now being used in fundamental operations such as material processing with immediate inspection, material handling with adaption, arc welding, and complex assembly tasks. A new industry of robot vision has emerged. The application of these systems is an area of great potential

  2. Robot Vision Library

    Science.gov (United States)

    Howard, Andrew B.; Ansar, Adnan I.; Litwin, Todd E.; Goldberg, Steven B.

    2009-01-01

    The JPL Robot Vision Library (JPLV) provides real-time robot vision algorithms for developers who are not vision specialists. The package includes algorithms for stereo ranging, visual odometry, and unsurveyed camera calibration, and has unique support for very wide-angle lenses.

  3. Robot vision for nuclear advanced robot

    International Nuclear Information System (INIS)

    Nakayama, Ryoichi; Okano, Hideharu; Kuno, Yoshinori; Miyazawa, Tatsuo; Shimada, Hideo; Okada, Satoshi; Kawamura, Astuo

    1991-01-01

    This paper describes a robot vision and operation system for a nuclear advanced robot. The robot vision system consists of robot position detection, obstacle detection and object recognition. With these vision techniques, a mobile robot can plan a path and move autonomously along it. The authors implemented the above robot vision system on the 'Advanced Robot for Nuclear Power Plant' and tested it in an environment mocked up as nuclear power plant facilities. Since the operation system for this robot consists of an operator's console and a large stereo monitor, it can be easily operated by one person. Experimental tests were made using the Advanced Robot (nuclear robot). The results indicate that the proposed operation system is very useful and can be operated by a single person. (author)

  4. Vision servo of industrial robot: A review

    Science.gov (United States)

    Zhang, Yujin

    2018-04-01

    Robot technology has been applied to various areas of production and everyday life. With the continuous development of robot applications, the requirements placed on robots are also increasing. To give robots better perception, vision sensors have been widely used in industrial robots. In this paper, application directions of industrial robots are reviewed. The development, classification and application of robot vision servo technology are discussed, and the development prospects of industrial robot vision servo technology are outlined.

  5. Active Vision for Sociable Robots

    National Research Council Canada - National Science Library

    Breazeal, Cynthia; Edsinger, Aaron; Fitzpatrick, Paul; Scassellati, Brian

    2001-01-01

    In humanoid robotic systems, or in any animate vision system that interacts with people, social dynamics provide additional levels of constraint and provide additional opportunities for processing economy...

  6. Machine Learning for Robotic Vision

    OpenAIRE

    Drummond, Tom

    2018-01-01

    Machine learning is a crucial enabling technology for robotics, in particular for unlocking the capabilities afforded by visual sensing. This talk will present research within Prof Drummond’s lab that explores how machine learning can be developed and used within the context of Robotic Vision.

  7. New development in robot vision

    CERN Document Server

    Behal, Aman; Chung, Chi-Kit

    2015-01-01

    The field of robotic vision has advanced dramatically recently with the development of new range sensors.  Tremendous progress has been made resulting in significant impact on areas such as robotic navigation, scene/environment understanding, and visual learning. This edited book provides a solid and diversified reference source for some of the most recent important advancements in the field of robotic vision. The book starts with articles that describe new techniques to understand scenes from 2D/3D data such as estimation of planar structures, recognition of multiple objects in the scene using different kinds of features as well as their spatial and semantic relationships, generation of 3D object models, approach to recognize partially occluded objects, etc. Novel techniques are introduced to improve 3D perception accuracy with other sensors such as a gyroscope, positioning accuracy with a visual servoing based alignment strategy for microassembly, and increasing object recognition reliability using related...

  8. Robotics

    Science.gov (United States)

    Popov, E. P.; Iurevich, E. I.

    The history and the current status of robotics are reviewed, as are the design, operation, and principal applications of industrial robots. Attention is given to programmable robots, robots with adaptive control and elements of artificial intelligence, and remotely controlled robots. The applications of robots discussed include mechanical engineering, cargo handling during transportation and storage, mining, and metallurgy. The future prospects of robotics are briefly outlined.

  9. Computer vision for an autonomous mobile robot

    CSIR Research Space (South Africa)

    Withey, Daniel J

    2015-10-01

    Computer vision systems are essential for practical, autonomous, mobile robots – machines that employ artificial intelligence and control their own motion within an environment. As with biological systems, computer vision systems include the vision...

  10. Robotics

    International Nuclear Information System (INIS)

    Scheide, A.W.

    1983-01-01

    This article reviews some of the technical areas and history associated with robotics, provides information relative to the formation of a Robotics Industry Committee within the Industry Applications Society (IAS), and describes how all activities relating to robotics will be coordinated within the IEEE. Industrial robots are being used for material handling, processes such as coating and arc welding, and some mechanical and electronics assembly. An industrial robot is defined as a programmable, multifunctional manipulator designed to move material, parts, tools, or specialized devices through variable programmed motions for a variety of tasks. The initial focus of the Robotics Industry Committee will be on the application of robotics systems to the various industries that are represented within the IAS

  11. Physics Based Vision Systems for Robotic Manipulation

    Data.gov (United States)

    National Aeronautics and Space Administration — With the increase of robotic manipulation tasks (TA4.3), specifically dexterous manipulation tasks (TA4.3.2), more advanced computer vision algorithms will be...

  12. Control of multiple robots using vision sensors

    CERN Document Server

    Aranda, Miguel; Sagüés, Carlos

    2017-01-01

    This monograph introduces novel methods for the control and navigation of mobile robots using multiple-1-d-view models obtained from omni-directional cameras. This approach overcomes field-of-view and robustness limitations, simultaneously enhancing accuracy and simplifying application on real platforms. The authors also address coordinated motion tasks for multiple robots, exploring different system architectures, particularly the use of multiple aerial cameras in driving robot formations on the ground. Again, this has benefits of simplicity, scalability and flexibility. Coverage includes details of: a method for visual robot homing based on a memory of omni-directional images a novel vision-based pose stabilization methodology for non-holonomic ground robots based on sinusoidal-varying control inputs an algorithm to recover a generic motion between two 1-d views and which does not require a third view a novel multi-robot setup where multiple camera-carrying unmanned aerial vehicles are used to observe and c...

  13. Robotics

    Energy Technology Data Exchange (ETDEWEB)

    Lorino, P; Altwegg, J M

    1985-05-01

    This article, which is aimed at the general reader, examines the latest developments in, and the role of, modern robotics. The 7 main sections are sub-divided into 27 papers presented by 30 authors. The sections are as follows: 1) The role of robotics, 2) Robotics in the business world and what it can offer, 3) Study and development, 4) Utilisation, 5) Wages, 6) Conditions for success, and 7) Technological dynamics.

  14. Vision Guided Intelligent Robot Design And Experiments

    Science.gov (United States)

    Slutzky, G. D.; Hall, E. L.

    1988-02-01

    The concept of an intelligent robot is an important topic combining sensors, manipulators, and artificial intelligence to design a useful machine. Vision systems, tactile sensors, proximity switches and other sensors provide the elements necessary for simple game playing as well as industrial applications. These sensors permit adaptation to a changing environment. The AI techniques permit advanced forms of decision making, adaptive responses, and learning, while the manipulator provides the ability to perform various tasks. Computer languages such as LISP and OPS5 have been utilized to achieve expert systems approaches in solving real world problems. The purpose of this paper is to describe several examples of visually guided intelligent robots including both stationary and mobile robots. Demonstrations will be presented of a system for constructing and solving a popular peg game, a robot lawn mower, and a box stacking robot. The experience gained from these and other systems provides insight into what may be realistically expected from the next generation of intelligent machines.

  15. Vision-based mapping with cooperative robots

    Science.gov (United States)

    Little, James J.; Jennings, Cullen; Murray, Don

    1998-10-01

    Two stereo-vision-based mobile robots navigate and autonomously explore their environment safely while building occupancy grid maps of the environment. The robots maintain position estimates within a global coordinate frame using landmark recognition. This allows them to build a common map by sharing position information and stereo data. Stereo vision processing and map updates are done at 3 Hz and the robots move at speeds of 200 cm/s. Cooperative mapping is achieved through autonomous exploration of unstructured and dynamic environments. The map is constructed conservatively, so as to be useful for collision-free path planning. Each robot maintains a separate copy of a shared map, and then posts updates to the common map when it returns to observe a landmark at home base. Issues include synchronization, mutual localization, navigation, exploration, registration of maps, merging repeated views (fusion), centralized vs decentralized maps.
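
    The conservative occupancy-grid construction described above can be illustrated with a small sketch. This is a minimal, hypothetical log-odds grid update written for this summary, not the authors' code; the grid resolution, inverse sensor model values, and function names are assumptions.

```python
import numpy as np

# Minimal log-odds occupancy-grid update (illustrative, not the authors' code).
# Cells along the ray to a stereo range hit are marked free; the hit cell occupied.
L_FREE, L_OCC = -0.4, 0.85      # assumed inverse sensor model (log-odds increments)
RES = 0.05                      # assumed grid resolution in metres

def to_cell(p):
    return int(round(p[0] / RES)), int(round(p[1] / RES))

def ray_cells(c0, c1):
    """Cells on the segment from c0 to c1 (simple sampling, enough for a sketch)."""
    n = max(abs(c1[0] - c0[0]), abs(c1[1] - c0[1]), 1)
    return [(c0[0] + round(i * (c1[0] - c0[0]) / n),
             c0[1] + round(i * (c1[1] - c0[1]) / n)) for i in range(n + 1)]

def update_grid(logodds, robot_xy, hit_xy):
    """Fuse one stereo range measurement into the shared log-odds grid."""
    cells = ray_cells(to_cell(robot_xy), to_cell(hit_xy))
    for c in cells[:-1]:
        logodds[c] += L_FREE    # space traversed by the ray is likely free
    logodds[cells[-1]] += L_OCC # the cell containing the range hit is occupied
    return logodds

grid = np.zeros((400, 400))     # a 20 m x 20 m map at 5 cm resolution
grid = update_grid(grid, robot_xy=(1.0, 1.0), hit_xy=(3.5, 2.0))
print(grid.max(), grid.min())
```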

  16. International Conference on Computational Vision and Robotics

    CERN Document Server

    2015-01-01

    Computer vision and robotics are among the most challenging areas of the 21st century. Their applications range from agriculture to medicine, household appliances to humanoids, deep-sea exploration to space, and industrial automation to unmanned plants. Today's technologies demand intelligent machines, which are enabling applications in various domains and services. Robotics is one such area; it encompasses a number of technologies and its applications are widespread. Computational vision, or machine vision, is one of the most challenging tools for making a robot intelligent.   This volume covers chapters from various areas of computational vision such as Image and Video Coding and Analysis, Image Watermarking, Noise Reduction and Cancellation, Block Matching and Motion Estimation, Tracking of Deformable Object using Steerable Pyramid Wavelet Transformation, Medical Image Fusion, CT and MRI Image Fusion based on Stationary Wavelet Transform. The book also covers articles from applicati...

  17. Manifold learning in machine vision and robotics

    Science.gov (United States)

    Bernstein, Alexander

    2017-02-01

    Smart algorithms are used in machine vision and robotics to organize or extract high-level information from the available data. Nowadays, machine learning is an essential and ubiquitous tool for automating the extraction of patterns or regularities from data (images in machine vision; camera, laser, and sonar sensor data in robotics) in order to solve various subject-oriented tasks such as understanding and classification of image content, navigation of autonomous mobile robots in uncertain environments, robot manipulation in medical robotics and computer-assisted surgery, and others. Usually such data have high dimensionality; however, due to various dependencies between their components and constraints caused by physical reasons, all "feasible and usable data" occupy only a very small part of the high-dimensional "observation space" and have a smaller intrinsic dimensionality. The generally accepted model of such data is the manifold model, according to which the data lie on or near an unknown manifold (surface) of lower dimensionality embedded in an ambient high-dimensional observation space; real-world high-dimensional data obtained from "natural" sources meet this model as a rule. The use of manifold learning techniques in machine vision and robotics, which discover the low-dimensional structure of high-dimensional data and result in effective algorithms for solving a large number of subject-oriented tasks, is the subject of the conference plenary speech, some topics of which are presented in this paper.
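
    As a toy illustration of the manifold-model idea above, the sketch below recovers a two-dimensional structure from synthetic 256-dimensional observations using scikit-learn's Isomap. The data, dimensions, and parameters are invented for the example and are not from the paper.

```python
import numpy as np
from sklearn.manifold import Isomap

# Illustrative only: recover a low-dimensional structure from high-dimensional
# "observations" that actually depend on two latent factors (e.g. a robot pose).
rng = np.random.default_rng(0)
latent = rng.uniform(0, 1, size=(500, 2))      # hidden low-dimensional factors
mixing = rng.normal(size=(2, 256))             # lift to a 256-D "feature" space
X = np.tanh(latent @ mixing) + 0.01 * rng.normal(size=(500, 256))

embedding = Isomap(n_neighbors=10, n_components=2).fit_transform(X)
print(embedding.shape)   # (500, 2): intrinsic dimensionality recovered from 256-D data
```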

  18. Beyond speculative robot ethics: A vision assessment study on the future of the robotic caretaker

    NARCIS (Netherlands)

    Plas, A.P. van der; Smits, M.; Wehrmann, C.

    2010-01-01

    In this article we develop a dialogue model for robot technology experts and designated users to discuss visions on the future of robotics in long-term care. Our vision assessment study aims for more distinguished and more informed visions on future robots. Surprisingly, our experiment also led to

  19. Utilizing Robot Operating System (ROS) in Robot Vision and Control

    Science.gov (United States)

    2015-09-01

    Utilizing Robot Operating System (ROS) in Robot Vision and Control, by Joshua S. Lum, September 2015. Thesis Advisor: Xiaoping Yun; Co-Advisor: Zac Staples.

  20. A Fast Vision System for Soccer Robot

    Directory of Open Access Journals (Sweden)

    Tianwu Yang

    2012-01-01

    This paper proposes a fast colour-based object recognition and localization method for soccer robots. The traditional HSL colour model is modified for better colour segmentation and edge detection in a colour-coded environment. The object recognition is based only on the edge pixels to speed up the computation. The edge pixels are detected by intelligently scanning a small fraction of the image pixels, distributed over the whole image. A fast method for line and circle centre detection is also discussed. For object localization, 26 key points are defined on the soccer field. When two or more key points can be seen from the robot's camera view, the three rotation angles are adjusted to achieve precise localization of robots and other objects. If no key point is detected, the robot position is estimated according to the history of robot movement and the feedback from the motors and sensors. The experiments on NAO and RoboErectus teen-size humanoid robots show that the proposed vision system is robust and accurate under different lighting conditions and can effectively and precisely locate robots and other objects.
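
    A minimal sketch of colour segmentation with sparse scanning in the spirit of the description above, using OpenCV's HLS colour space. The thresholds, scan step, and file name are illustrative assumptions, not the paper's values (the paper uses its own modified HSL model and scan pattern).

```python
import cv2
import numpy as np

def colour_edge_candidates(bgr, lo=(35, 40, 60), hi=(85, 255, 255), step=8):
    """Detect colour-class transitions on a sparse scan grid.

    `lo`/`hi` bound one colour class in HLS space and `step` sets how sparse
    the scan is; these values are illustrative, not the paper's.
    """
    hls = cv2.cvtColor(bgr, cv2.COLOR_BGR2HLS)
    mask = cv2.inRange(hls, np.array(lo, np.uint8), np.array(hi, np.uint8))
    edges = []
    for y in range(0, mask.shape[0], step):            # scan every step-th row
        row = mask[y, ::step]                          # and every step-th column
        change = np.nonzero(np.diff(row.astype(np.int16)))[0]
        edges.extend((int(x) * step, y) for x in change)
    return edges

frame = cv2.imread("field.png")                        # hypothetical test image
if frame is not None:
    print(len(colour_edge_candidates(frame)), "edge candidates")
```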

  1. Robotics

    Indian Academy of Sciences (India)

    netic induction to detect an object. The development of ... end effector, inclination of object, magnetic and electric fields, etc. The sensors described ... In the case of a robot, the various actuators and motors have to be modelled. The major ...

  2. Beyond speculative robot ethics: a vision assessment study on the future of the robotic caretaker.

    Science.gov (United States)

    van der Plas, Arjanna; Smits, Martijntje; Wehrmann, Caroline

    2010-11-01

    In this article we develop a dialogue model for robot technology experts and designated users to discuss visions on the future of robotics in long-term care. Our vision assessment study aims for more distinguished and more informed visions on future robots. Surprisingly, our experiment also led to some promising co-designed robot concepts in which jointly articulated moral guidelines are embedded. With our model, we believe we have designed an interesting response to a recent call for a less speculative ethics of technology by encouraging discussions about the quality of positive and negative visions on the future of robotics.

  3. Applications of AI, machine vision and robotics

    CERN Document Server

    Boyer, Kim; Bunke, H

    1995-01-01

    This text features a broad array of research efforts in computer vision including low level processing, perceptual organization, object recognition and active vision. The volume's nine papers specifically report on topics such as sensor confidence, low level feature extraction schemes, non-parametric multi-scale curve smoothing, integration of geometric and non-geometric attributes for object recognition, design criteria for a four degree-of-freedom robot head, a real-time vision system based on control of visual attention and a behavior-based active eye vision system. The scope of the book pr

  4. ROBERT autonomous navigation robot with artificial vision

    International Nuclear Information System (INIS)

    Cipollini, A.; Meo, G.B.; Nanni, V.; Rossi, L.; Taraglio, S.; Ferjancic, C.

    1993-01-01

    This work, a joint research effort between ENEA (the Italian National Agency for Energy, New Technologies and the Environment) and DIGITAL, presents the layout of the ROBERT project, ROBot with Environmental Recognizing Tools, under development in ENEA laboratories. This project aims at the development of an autonomous mobile vehicle able to navigate in a known indoor environment through the use of artificial vision. The general architecture of the robot is shown together with the data and control flow among the various subsystems. The inner structure of the latter, complete with its functionalities, is also given in detail.

  5. Robotic vision system for random bin picking with dual-arm robots

    Directory of Open Access Journals (Sweden)

    Kang Sangseung

    2016-01-01

    Random bin picking is one of the most challenging industrial robotics applications available. It constitutes a complicated interaction between the vision system, robot, and control system. For a packaging operation requiring a pick-and-place task, the robot system utilized should be able to perform certain functions for recognizing the applicable target object from randomized objects in a bin. In this paper, we introduce a robotic vision system for bin picking using industrial dual-arm robots. The proposed system recognizes the best object from randomized target candidates based on stereo vision, and estimates the position and orientation of the object. It then sends the result to the robot control system. The system was developed for use in the packaging process of cell phone accessories using dual-arm robots.

  6. A robotic vision system to measure tree traits

    Science.gov (United States)

    The autonomous measurement of tree traits, such as branching structure, branch diameters, branch lengths, and branch angles, is required for tasks such as robotic pruning of trees as well as structural phenotyping. We propose a robotic vision system called the Robotic System for Tree Shape Estimati...

  7. System and method for controlling a vision guided robot assembly

    Energy Technology Data Exchange (ETDEWEB)

    Lin, Yhu-Tin; Daro, Timothy; Abell, Jeffrey A.; Turner, III, Raymond D.; Casoli, Daniel J.

    2017-03-07

    A method includes the following steps: actuating a robotic arm to perform an action at a start position; moving the robotic arm from the start position toward a first position; determining from a vision process method if a first part from the first position will be ready to be subjected to a first action by the robotic arm once the robotic arm reaches the first position; commencing the execution of the visual processing method for determining the position deviation of the second part from the second position and the readiness of the second part to be subjected to a second action by the robotic arm once the robotic arm reaches the second position; and performing a first action on the first part using the robotic arm with the position deviation of the first part from the first position predetermined by the vision process method.

  8. Advanced robot vision system for nuclear power plants

    International Nuclear Information System (INIS)

    Onoguchi, Kazunori; Kawamura, Atsuro; Nakayama, Ryoichi.

    1991-01-01

    We have developed a robot vision system for advanced robots used in nuclear power plants, under a contract with the Agency of Industrial Science and Technology of the Ministry of International Trade and Industry. This work is part of the large-scale 'advanced robot technology' project. The robot vision system consists of self-location measurement, obstacle detection, and object recognition subsystems, which are activated by a total control subsystem. This paper presents details of these subsystems and the experimental results obtained. (author)

  9. VIP - A Framework-Based Approach to Robot Vision

    Directory of Open Access Journals (Sweden)

    Gerd Mayer

    2008-11-01

    For robot perception, video cameras are very valuable sensors, but the computer vision methods applied to extract information from camera images are usually computationally expensive. Integrating computer vision methods into a robot control architecture requires balancing the exploitation of camera images with the need to preserve reactivity and robustness. We claim that better software support is needed in order to facilitate and simplify the application of computer vision and image processing methods on autonomous mobile robots. In particular, such support must address a simplified specification of image processing architectures, control and synchronization issues of image processing steps, and the integration of the image processing machinery into the overall robot control architecture. This paper introduces the video image processing (VIP) framework, a software framework for multithreaded control flow modeling in robot vision.

  10. VIP - A Framework-Based Approach to Robot Vision

    Directory of Open Access Journals (Sweden)

    Hans Utz

    2006-03-01

    For robot perception, video cameras are very valuable sensors, but the computer vision methods applied to extract information from camera images are usually computationally expensive. Integrating computer vision methods into a robot control architecture requires balancing the exploitation of camera images with the need to preserve reactivity and robustness. We claim that better software support is needed in order to facilitate and simplify the application of computer vision and image processing methods on autonomous mobile robots. In particular, such support must address a simplified specification of image processing architectures, control and synchronization issues of image processing steps, and the integration of the image processing machinery into the overall robot control architecture. This paper introduces the video image processing (VIP) framework, a software framework for multithreaded control flow modeling in robot vision.

  11. A Vision-Based Wireless Charging System for Robot Trophallaxis

    Directory of Open Access Journals (Sweden)

    Jae-O Kim

    2015-12-01

    The need to recharge the batteries of a mobile robot has presented an important challenge for a long time. In this paper, a vision-based wireless charging method for robot energy trophallaxis between two robots is presented. Even though wireless power transmission allows more positional error between receiver-transmitter coils than with a contact-type charging system, both coils have to be aligned as accurately as possible for efficient power transfer. To align the coils, a transmitter robot recognizes the coarse pose of a receiver robot via a camera image and the ambiguity of the estimated pose is removed with a Bayesian estimator. The precise pose of the receiver coil is calculated using a marker image attached to a receiver robot. Experiments with several types of receiver robots have been conducted to verify the proposed method.

  12. Vision-Based Robot Following Using PID Control

    OpenAIRE

    Chandra Sekhar Pati; Rahul Kala

    2017-01-01

    Applications like robots which are employed for shopping, porter services, assistive robotics, etc., require a robot to continuously follow a human or another robot. This paper presents a mobile robot following another tele-operated mobile robot based on a PID (Proportional–Integral-Differential) controller. Here, we use two differential wheel drive robots; one is a master robot and the other is a follower robot. The master robot is manually controlled and the follower robot is programmed to ...

  13. Intelligent Vision System for Door Sensing Mobile Robot

    Directory of Open Access Journals (Sweden)

    Jharna Majumdar

    2012-08-01

    Wheeled mobile robots find numerous applications in indoor, man-made structured environments. In order to operate effectively, the robots must be capable of sensing their surroundings. Computer vision is one of the prime research areas directed towards achieving these sensing capabilities. In this paper, we present a door-sensing mobile robot capable of navigating in an indoor environment. A robust and inexpensive approach for recognition and classification of doors, based on a monocular vision system, helps the mobile robot in decision making. To prove the efficacy of the algorithm we have designed and developed a differentially driven mobile robot. A wall-following behavior using ultrasonic range sensors is employed by the mobile robot for navigation in corridors. Field Programmable Gate Arrays (FPGAs) have been used for the implementation of a PD controller for wall following and a PID controller to control the speed of the geared DC motor.
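
    A minimal sketch of a PD wall-following rule of the kind described above, assuming two side-facing ultrasonic sensors. The gains, target distance, and sensor geometry are invented for the example; the paper's FPGA implementation is not reproduced.

```python
# Minimal PD wall-following sketch (illustrative gains and geometry, not the paper's).
# Two side-facing ultrasonic sensors give front/rear distances to the wall; the
# controller keeps the robot parallel to the wall at a desired offset.

KP, KD = 1.2, 0.4          # assumed proportional / derivative gains
TARGET = 0.40              # desired distance to the wall in metres (assumed)
DT = 0.05                  # control period in seconds

def pd_wall_following(front_dist, rear_dist, prev_error):
    """Return (steering_command, error) from two side ultrasonic readings."""
    distance = 0.5 * (front_dist + rear_dist)      # average offset from the wall
    heading = front_dist - rear_dist               # >0 means drifting away from wall
    error = (TARGET - distance) + heading          # combined lateral + heading error
    steer = KP * error + KD * (error - prev_error) / DT
    return steer, error

steer, err = pd_wall_following(front_dist=0.45, rear_dist=0.43, prev_error=0.0)
print(f"steer command: {steer:.3f}")
```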

  14. A Practical Solution Using A New Approach To Robot Vision

    Science.gov (United States)

    Hudson, David L.

    1984-01-01

    Up to now, robot vision systems have been designed to serve both application development and operational needs in inspection, assembly and material handling. This universal approach to robot vision is too costly for many practical applications. A new industrial vision system separates the function of application program development from on-line operation. A Vision Development System (VDS) is equipped with facilities designed to simplify and accelerate the application program development process. A complementary but lower-cost Target Application System (TASK) runs the application program developed with the VDS. This concept is presented in the context of an actual robot vision application that improves inspection and assembly for a manufacturer of electronic terminal keyboards. Applications developed with a VDS experience lower development cost when compared with conventional vision systems. Since the TASK processor is not burdened with development tools, it can be installed at a lower cost than comparable "universal" vision systems that are intended to be used for both development and on-line operation. The VDS/TASK approach opens more industrial applications to robot vision that previously were not practical because of the high cost of vision systems. Although robot vision is a new technology, it has been applied successfully to a variety of industrial needs in inspection, manufacturing, and material handling. New developments in robot vision technology are creating practical, cost effective solutions for a variety of industrial needs. A year or two ago, researchers and robot manufacturers interested in implementing a robot vision application could take one of two approaches. The first approach was to purchase all the necessary vision components from various sources. That meant buying an image processor from one company, a camera from another and lens and light sources from yet others. The user then had to assemble the pieces, and in most instances he had to write

  15. Vision Based Autonomous Robot Navigation Algorithms and Implementations

    CERN Document Server

    Chatterjee, Amitava; Nirmal Singh, N

    2013-01-01

    This book is devoted to the theory and development of autonomous navigation of mobile robots using a computer vision based sensing mechanism. Conventional robot navigation systems, utilizing traditional sensors such as ultrasonic, IR, GPS, and laser sensors, suffer several drawbacks related either to the physical limitations of the sensors or to their high cost. Vision sensing has emerged as a popular alternative where cameras can be used to reduce the overall cost while maintaining a high degree of intelligence, flexibility and robustness. This book includes a detailed description of several new approaches for real-life vision-based autonomous navigation algorithms and SLAM. It presents the concept of how subgoal-based, goal-driven navigation can be carried out using vision sensing. The development concept of vision-based robots for path/line tracking using fuzzy logic is presented, as well as how a low-cost robot can be indigenously developed in the laboratory with microcontroller-based sensor systems. The book descri...

  16. Vision Assisted Laser Scanner Navigation for Autonomous Robots

    DEFF Research Database (Denmark)

    Andersen, Jens Christian; Andersen, Nils Axel; Ravn, Ole

    2008-01-01

    This paper describes a navigation method based on road detection using both a laser scanner and a vision sensor. The method classifies the surface in front of the robot into traversable segments (road) and obstacles using the laser scanner; this classification covers the area just in front of the robot ...

  17. Vision Based Tracker for Dart-Catching Robot

    OpenAIRE

    Linderoth, Magnus; Robertsson, Anders; Åström, Karl; Johansson, Rolf

    2009-01-01

    This paper describes how high-speed computer vision can be used in a motion control application. The specific application investigated is a dart catching robot. Computer vision is used to detect a flying dart and a filtering algorithm predicts its future trajectory. This will give data to a robot controller allowing it to catch the dart. The performance of the implemented components indicates that the dart catching application can be made to work well. Conclusions are also made about what fea...

  18. Robotic Arm Control Algorithm Based on Stereo Vision Using RoboRealm Vision

    Directory of Open Access Journals (Sweden)

    SZABO, R.

    2015-05-01

    The goal of this paper is to present a stereo computer vision algorithm intended to control a robotic arm. Specific points on the robot joints are marked and recognized in the software. Using a dedicated set of mathematical equations, the movement of the robot is continuously computed and monitored with webcams. Positioning error is finally analyzed.
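
    The joint positions marked and recognized in both webcams can be turned into 3D coordinates by standard rectified-stereo triangulation, sketched below. The focal length, baseline, principal point, and pixel coordinates are placeholder values, not the paper's calibration.

```python
import numpy as np

# Minimal rectified-stereo triangulation sketch (parameters are placeholders):
# a marker seen at pixel (uL, v) in the left camera and (uR, v) in the right
# camera is located in 3D camera coordinates.

F_PX = 800.0              # assumed focal length in pixels
BASELINE = 0.12           # assumed distance between the two webcams in metres
CX, CY = 320.0, 240.0     # assumed principal point for 640x480 images

def triangulate(uL, uR, v):
    disparity = uL - uR
    if disparity <= 0:
        raise ValueError("marker must appear further left in the left image")
    Z = F_PX * BASELINE / disparity          # depth from disparity
    X = (uL - CX) * Z / F_PX                 # lateral offset
    Y = (v - CY) * Z / F_PX                  # vertical offset
    return np.array([X, Y, Z])

print(triangulate(uL=352.0, uR=328.0, v=250.0))   # e.g. a joint marker position
```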

  19. Ping-Pong Robotics with High-Speed Vision System

    DEFF Research Database (Denmark)

    Li, Hailing; Wu, Haiyan; Lou, Lei

    2012-01-01

    The performance of vision-based control is usually limited by the low sampling rate of the visual feedback. We address Ping-Pong robotics as a widely studied example which requires high-speed vision for highly dynamic motion control. In order to detect a flying ball accurately and robustly...... of the manipulator are updated iteratively with decreasing error. Experiments are conducted on a 7-degree-of-freedom humanoid robot arm. Successful Ping-Pong playing between the robot arm and a human is achieved with a high success rate of 88%....

  20. Robotics, vision and control fundamental algorithms in Matlab

    CERN Document Server

    Corke, Peter

    2017-01-01

    Robotic vision, the combination of robotics and computer vision, involves the application of computer algorithms to data acquired from sensors. The research community has developed a large body of such algorithms but for a newcomer to the field this can be quite daunting. For over 20 years the author has maintained two open-source MATLAB® Toolboxes, one for robotics and one for vision. They provide implementations of many important algorithms and allow users to work with real problems, not just trivial examples. This book makes the fundamental algorithms of robotics, vision and control accessible to all. It weaves together theory, algorithms and examples in a narrative that covers robotics and computer vision separately and together. Using the latest versions of the Toolboxes the author shows how complex problems can be decomposed and solved using just a few simple lines of code. The topics covered are guided by real problems observed by the author over many years as a practitioner of both robotics and compu...

  1. A lightweight, inexpensive robotic system for insect vision.

    Science.gov (United States)

    Sabo, Chelsea; Chisholm, Robert; Petterson, Adam; Cope, Alex

    2017-09-01

    Designing hardware for miniaturized robotics which mimics the capabilities of flying insects is of interest, because they share similar constraints (i.e. small size, low weight, and low energy consumption). Research in this area aims to enable robots with similarly efficient flight and cognitive abilities. Visual processing is important to flying insects' impressive flight capabilities, but currently, embodiment of insect-like visual systems is limited by the hardware systems available. Suitable hardware is either prohibitively expensive, difficult to reproduce, unable to accurately simulate insect vision characteristics, and/or too heavy for small robotic platforms. These limitations hamper the development of platforms for embodiment, which in turn hampers progress on understanding how biological systems fundamentally work. To address this gap, this paper proposes an inexpensive, lightweight robotic system for modelling insect vision. The system is mounted and tested on a robotic platform for mobile applications, and then the camera and insect vision models are evaluated. We analyse the potential of the system for use in embodiment of higher-level visual processes (i.e. motion detection) and also for development of navigation based on vision for robotics in general. Optic flow from sample camera data is calculated and compared to a perfect, simulated bee world showing an excellent resemblance. Copyright © 2017 The Authors. Published by Elsevier Ltd. All rights reserved.
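
    Optic flow of the kind evaluated above can be computed from two consecutive frames with OpenCV's Farneback method, as in the sketch below. File names and parameters are placeholders; this is not the insect vision model used in the paper.

```python
import cv2
import numpy as np

# Dense optic flow between two consecutive camera frames (placeholder file names).
prev = cv2.imread("frame_000.png", cv2.IMREAD_GRAYSCALE)
curr = cv2.imread("frame_001.png", cv2.IMREAD_GRAYSCALE)

if prev is not None and curr is not None:
    # args: prev, next, flow, pyr_scale, levels, winsize, iterations, poly_n, poly_sigma, flags
    flow = cv2.calcOpticalFlowFarneback(prev, curr, None, 0.5, 3, 15, 3, 5, 1.2, 0)
    magnitude = np.linalg.norm(flow, axis=2)
    print("mean flow magnitude:", float(magnitude.mean()), "pixels/frame")
```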

  2. Robotic anesthesia - A vision for the future of anesthesia

    OpenAIRE

    Hemmerling, Thomas M.; Taddei, Riccardo; Wehbe, Mohamad; Morse, Joshua; Cyr, Shantale; Zaouter, Cedrick

    2011-01-01

    This narrative review describes a rationale for robotic anesthesia. It offers a first classification of robotic anesthesia by separating it into pharmacological robots and robots for aiding or replacing manual gestures. Developments in closed loop anesthesia are outlined. First attempts to perform manual tasks using robots are described. A critical analysis of the delayed development and introduction of robots in anesthesia is delivered.

  3. Vision Based Autonomous Robotic Control for Advanced Inspection and Repair

    Science.gov (United States)

    Wehner, Walter S.

    2014-01-01

    The advanced inspection system is an autonomous control and analysis system that improves the inspection and remediation operations for ground and surface systems. It uses optical imaging technology with intelligent computer vision algorithms to analyze physical features of the real-world environment to make decisions and learn from experience. The advanced inspection system plans to control a robotic manipulator arm, an unmanned ground vehicle and cameras remotely, automatically and autonomously. There are many computer vision, image processing and machine learning techniques available as open source for using vision as a sensory feedback in decision-making and autonomous robotic movement. My responsibilities for the advanced inspection system are to create a software architecture that integrates and provides a framework for all the different subsystem components; identify open-source algorithms and techniques; and integrate robot hardware.

  4. Learning Spatial Object Localization from Vision on a Humanoid Robot

    Directory of Open Access Journals (Sweden)

    Jürgen Leitner

    2012-12-01

    We present a combined machine learning and computer vision approach for robots to localize objects. It allows our iCub humanoid to quickly learn to provide accurate 3D position estimates (in the centimetre range) of objects seen. Biologically inspired approaches, such as Artificial Neural Networks (ANN) and Genetic Programming (GP), are trained to provide these position estimates using the two cameras and the joint encoder readings. No camera calibration or explicit knowledge of the robot's kinematic model is needed. We find that ANN and GP are not just faster and of lower complexity than traditional techniques, but also learn without the need for extensive calibration procedures. In addition, the approach localizes objects robustly when they are placed in the robot's workspace at arbitrary positions, even while the robot is moving its torso, head and eyes.
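
    A toy stand-in for the learning setup described above: a small neural network regressor maps stereo pixel coordinates plus joint-encoder readings to a 3D position. The data are synthetic and the network is scikit-learn's MLPRegressor, not the ANN/GP models or the iCub kinematics from the paper.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Synthetic demonstration only: the "true" mapping below is made up.
rng = np.random.default_rng(0)
n = 2000
inputs = rng.uniform(-1, 1, size=(n, 7))     # (uL, vL, uR, vR, 3 joint encoders)
true_map = rng.normal(size=(7, 3))
targets = np.tanh(inputs @ true_map)         # pretend 3D positions in metres

model = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0)
model.fit(inputs[:1500], targets[:1500])
error = np.abs(model.predict(inputs[1500:]) - targets[1500:]).mean()
print(f"mean absolute error on held-out samples: {error:.3f}")
```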

  5. Monocular Vision-Based Robot Localization and Target Tracking

    Directory of Open Access Journals (Sweden)

    Bing-Fei Wu

    2011-01-01

    This paper presents a vision-based technology for localizing targets in a 3D environment. It is achieved by combining different types of sensors, including optical wheel encoders, an electrical compass, and visual observations with a single camera. Based on the robot motion model and image sequences, an extended Kalman filter is applied to estimate target locations and the robot pose simultaneously. The proposed localization system is applicable in practice because it does not require an initialization procedure that starts the system from artificial landmarks of known size. The technique is especially suitable for navigation and target tracking for an indoor robot and has high potential for extension to surveillance and monitoring for Unmanned Aerial Vehicles with aerial odometry sensors. The experimental results show centimetre-level accuracy in localizing targets in an indoor environment under high-speed robot movement.
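
    A compact sketch of the sensor-fusion idea above: an extended Kalman filter that predicts the planar robot pose from wheel-encoder odometry and corrects the heading with an electronic-compass reading. Noise values are assumed, and the paper's full filter (which also estimates target positions from camera observations) is not reproduced.

```python
import numpy as np

x = np.zeros(3)                      # state: [x, y, theta]
P = np.eye(3) * 0.01                 # state covariance
Q = np.diag([0.02, 0.02, 0.01])**2   # assumed process noise
R_compass = np.array([[0.05**2]])    # assumed compass noise (rad^2)

def predict(x, P, v, w, dt):
    """Propagate the pose with linear/angular velocity from the wheel encoders."""
    th = x[2]
    x = x + np.array([v * np.cos(th) * dt, v * np.sin(th) * dt, w * dt])
    F = np.array([[1, 0, -v * np.sin(th) * dt],
                  [0, 1,  v * np.cos(th) * dt],
                  [0, 0, 1]])
    return x, F @ P @ F.T + Q

def update_compass(x, P, heading):
    """Correct the heading with an electronic-compass measurement."""
    H = np.array([[0.0, 0.0, 1.0]])
    y = np.array([heading - x[2]])
    S = H @ P @ H.T + R_compass
    K = P @ H.T @ np.linalg.inv(S)
    return x + (K @ y).ravel(), (np.eye(3) - K @ H) @ P

x, P = predict(x, P, v=0.5, w=0.1, dt=0.1)
x, P = update_compass(x, P, heading=0.012)
print(x)
```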

  6. Remote-controlled vision-guided mobile robot system

    Science.gov (United States)

    Ande, Raymond; Samu, Tayib; Hall, Ernest L.

    1997-09-01

    Automated guided vehicles (AGVs) have many potential applications in manufacturing, medicine, space and defense. The purpose of this paper is to describe exploratory research on the design of the remote-controlled emergency stop and vision systems for an autonomous mobile robot. The remote control provides human supervision and emergency stop capabilities for the autonomous vehicle. The vision guidance provides automatic operation. A mobile robot test-bed has been constructed using a golf cart base. The mobile robot (Bearcat) was built for the Association for Unmanned Vehicle Systems (AUVS) 1997 competition. The mobile robot has full speed control with guidance provided by a vision system and an obstacle avoidance system using ultrasonic sensor systems. Vision guidance is accomplished using two CCD cameras with zoom lenses. The vision data are processed by a high-speed tracking device that communicates the X, Y coordinates of blobs along the lane markers to the computer. The system also has three emergency stop switches and a remote-controlled emergency stop switch that can disable the traction motor and set the brake. Testing of these systems has been done in the lab as well as on an outside test track with positive results that show that at five mph the vehicle can follow a line and at the same time avoid obstacles.

  7. 3D vision system for intelligent milking robot automation

    Science.gov (United States)

    Akhloufi, M. A.

    2013-12-01

    In a milking robot, the correct localization and positioning of milking teat cups is of very high importance. Milking robot technology has not changed in a decade and is based primarily on laser profiles for approximate teat position estimation. This technology has reached its limit and does not allow optimal positioning of the milking cups. Also, in the presence of occlusions, the milking robot fails to milk the cow. These problems have economic consequences for producers and for animal health (e.g. development of mastitis). To overcome the limitations of current robots, we have developed a new system based on 3D vision, capable of efficiently positioning the milking cups. A prototype of an intelligent robot system based on 3D vision for real-time positioning of a milking robot has been built and tested under various conditions on a synthetic udder model (in static and moving scenarios). Experimental tests were performed using 3D Time-Of-Flight (TOF) and RGBD cameras. The proposed algorithms permit the online segmentation of teats by combining 2D and 3D visual information. The obtained results permit computation of the 3D teat positions. This information is then sent to the milking robot for teat cup positioning. The vision system has real-time performance and monitors the optimal positioning of the cups even in the presence of motion. The obtained results, with both TOF and RGBD cameras, show the good performance of the proposed system. The best performance was obtained with RGBD cameras. This latter technology will be used in future real-life experimental tests.
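
    A minimal sketch of combining 2D and 3D information as described above: keep pixels that pass a colour mask and lie in an expected depth band, then back-project them to a 3D centroid. The thresholds and camera intrinsics are placeholder values, not the prototype's calibration.

```python
import numpy as np

def segment_teats(color_mask, depth_m, z_min=0.3, z_max=0.8):
    """color_mask: HxW boolean from a 2D detector; depth_m: HxW metres from a
    TOF/RGBD camera. Returns a boolean mask of candidate teat pixels."""
    depth_ok = (depth_m > z_min) & (depth_m < z_max)
    return color_mask & depth_ok

def centroid_3d(mask, depth_m, fx=525.0, fy=525.0, cx=319.5, cy=239.5):
    """Back-project masked pixels with assumed pinhole intrinsics and average."""
    v, u = np.nonzero(mask)
    z = depth_m[v, u]
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.array([x.mean(), y.mean(), z.mean()])

# Synthetic example: a flat depth map and a small colour-detected patch.
depth = np.full((480, 640), 0.6)
cmask = np.zeros((480, 640), bool)
cmask[200:220, 300:320] = True
print(centroid_3d(segment_teats(cmask, depth), depth))
```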

  8. Vision-Based Robot Following Using PID Control

    Directory of Open Access Journals (Sweden)

    Chandra Sekhar Pati

    2017-06-01

    Applications such as robots employed for shopping, porter services, assistive robotics, etc., require a robot to continuously follow a human or another robot. This paper presents a mobile robot following another tele-operated mobile robot based on a PID (Proportional-Integral-Differential) controller. Here, we use two differential wheel drive robots; one is a master robot and the other is a follower robot. The master robot is manually controlled and the follower robot is programmed to follow the master robot. For the master robot, a Bluetooth module receives the user's command from an Android application, and this command is processed by the master robot's controller to move the robot. The follower robot receives the image from the Kinect sensor mounted on it and recognizes the master robot. The follower robot identifies the x, y positions by employing the camera and the depth by using the Kinect depth sensor. By identifying the x, y, and z locations of the master robot, the follower robot finds the angle and distance between the master and follower robots, which is given as the error term of a PID controller. Using this, the follower robot follows the master robot. A PID controller is based on feedback and tries to minimize the error. Experiments are conducted for two indigenously developed robots; one depicting a humanoid and the other a small mobile robot. It was observed that the follower robot was easily able to follow the master robot using well-tuned PID parameters.
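
    A minimal sketch of the follower's control loop described above: the master's position from the Kinect gives angle and distance errors, which feed two PID controllers whose outputs are mixed into differential-drive wheel commands. The gains, set-point, and function names are assumptions, not the paper's tuned values.

```python
import math

class PID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev = 0.0

    def step(self, error):
        self.integral += error * self.dt
        derivative = (error - self.prev) / self.dt
        self.prev = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

DT = 0.05
angle_pid = PID(kp=1.5, ki=0.0, kd=0.2, dt=DT)       # assumed gains
dist_pid = PID(kp=0.8, ki=0.05, kd=0.1, dt=DT)
FOLLOW_DISTANCE = 1.0                                 # metres behind the master (assumed)

def follower_step(x, z):
    """x: lateral offset of the master, z: forward distance (both in metres)."""
    angle_error = math.atan2(x, z)                    # bearing to the master
    dist_error = z - FOLLOW_DISTANCE
    turn = angle_pid.step(angle_error)
    forward = dist_pid.step(dist_error)
    left, right = forward - turn, forward + turn      # differential-drive mix
    return left, right

print(follower_step(x=0.2, z=1.6))
```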

  9. 3D vision upgrade kit for TALON robot

    Science.gov (United States)

    Edmondson, Richard; Vaden, Justin; Hyatt, Brian; Morris, James; Pezzaniti, J. Larry; Chenault, David B.; Tchon, Joe; Barnidge, Tracy; Kaufman, Seth; Pettijohn, Brad

    2010-04-01

    In this paper, we report on the development of a 3D vision field upgrade kit for the TALON robot consisting of a replacement flat panel stereoscopic display and multiple stereo camera systems. An assessment of the system's use for robotic driving, manipulation, and surveillance operations was conducted. The 3D vision system was integrated onto a TALON IV Robot and Operator Control Unit (OCU) such that stock components could be electrically disconnected and removed, and upgrade components coupled directly to the mounting and electrical connections. A replacement display, replacement mast camera with zoom, auto-focus, and variable convergence, and a replacement gripper camera with fixed focus and zoom comprise the upgrade kit. The stereo mast camera allows for improved driving and situational awareness as well as scene survey. The stereo gripper camera allows for improved manipulation in typical TALON missions.

  10. Robot path planning using expert systems and machine vision

    Science.gov (United States)

    Malone, Denis E.; Friedrich, Werner E.

    1992-02-01

    This paper describes a system developed for the robotic processing of naturally variable products. In order to plan the robot motion path it was necessary to use a sensor system, in this case a machine vision system, to observe the variations occurring in workpieces and interpret this with a knowledge based expert system. The knowledge base was acquired by carrying out an in-depth study of the product using examination procedures not available in the robotic workplace and relates the nature of the required path to the information obtainable from the machine vision system. The practical application of this system to the processing of fish fillets is described and used to illustrate the techniques.

  11. Multiple Moving Obstacles Avoidance of Service Robot using Stereo Vision

    Directory of Open Access Journals (Sweden)

    Achmad Jazidie

    2011-12-01

    In this paper, we propose a method for multiple moving obstacle avoidance using stereo vision for service robots in indoor environments. We assume that this model of service robot is used to deliver a cup to a recognized customer from a starting point to a destination. The contribution of this research is a new method for multiple moving obstacle avoidance with a Bayesian approach using a stereo camera. We have developed and introduced three main modules: to recognize faces, to identify multiple moving obstacles, and to maneuver the robot. A group of walking people is tracked as multiple moving obstacles, and the speed, direction, and distance of the moving obstacles are estimated by the stereo camera so that the robot can maneuver to avoid collision. To overcome the inaccuracies of the vision sensor, a Bayesian approach is used to estimate the absence and direction of obstacles. We present the results of experiments with the service robot Srikandi III, which uses our proposed method, and we also evaluate its performance. Experiments show that our proposed method works well, and the Bayesian approach proved to increase the estimation performance for the absence and direction of moving obstacles.
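
    The Bayesian idea described above can be illustrated with a minimal recursive update that fuses noisy per-frame stereo detections into a belief that an obstacle is present. The detector hit and false-alarm rates are assumed values, not measurements from Srikandi III.

```python
# Minimal recursive Bayesian update for obstacle presence in a given direction.
P_DETECT_GIVEN_PRESENT = 0.85    # assumed stereo detector hit rate
P_DETECT_GIVEN_ABSENT = 0.10     # assumed false-alarm rate

def bayes_update(prior, detected):
    """Update P(obstacle present) after one frame's detection result."""
    if detected:
        likelihood_present = P_DETECT_GIVEN_PRESENT
        likelihood_absent = P_DETECT_GIVEN_ABSENT
    else:
        likelihood_present = 1.0 - P_DETECT_GIVEN_PRESENT
        likelihood_absent = 1.0 - P_DETECT_GIVEN_ABSENT
    numerator = likelihood_present * prior
    return numerator / (numerator + likelihood_absent * (1.0 - prior))

belief = 0.5                                # uninformed prior
for observation in [True, True, False, True]:
    belief = bayes_update(belief, observation)
    print(f"P(obstacle) = {belief:.3f}")
```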

  12. Computer Vision for Artificially Intelligent Robotic Systems

    Science.gov (United States)

    Ma, Chialo; Ma, Yung-Lung

    1987-04-01

    In this paper, an Acoustic Imaging Recognition System (AIRS) is introduced, which is installed on an intelligent robotic system and can recognize different types of hand tools by dynamic pattern recognition. The dynamic pattern recognition is approached by a look-up table method in this case; the method saves a great deal of calculation time and is practicable. The Acoustic Imaging Recognition System (AIRS) consists of four parts: a position control unit, a pulse-echo signal processing unit, a pattern recognition unit and a main control unit. The position control of AIRS can rotate through an angle of ±5 degrees horizontally and vertically separately; the purpose of the rotation is to find the area of maximum reflection intensity. From the distance, angles and intensity of the target we can determine the characteristics of the target, and all of these decisions are processed by the main control unit. In the pulse-echo signal processing unit, we utilize the correlation method to overcome the limitation of short ultrasonic bursts, because the correlation system can transmit large time-bandwidth signals and obtain their resolution and increased intensity through pulse compression in the correlation receiver. The output of the correlator is sampled and transferred into digital data by a u-law coding method, and these data, together with the delay time T and the angle information θH, θV, are sent to the main control unit for further analysis. For the recognition process in this paper, we use a dynamic look-up table method: first, several recognition pattern tables are set up, and then the new pattern scanned by the transducer array is divided into several stages and compared with the sampled tables. The comparison is implemented by dynamic programming and a Markovian process. All the hardware control signals, such as the optimum delay time for the correlator receiver and the horizontal and vertical rotation angles for the transducer plate, are controlled by the Main Control Unit, the Main
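
    Pulse compression by correlation, as used in the pulse-echo signal processing unit above, can be sketched as follows. The sample rate, burst, noise level, and delay are invented for the demonstration.

```python
import numpy as np

# Correlate the received signal with the transmitted burst and take the peak
# to estimate the echo delay (pulse compression).
fs = 1.0e6                                    # sample rate in Hz (assumed)
burst = np.sin(2 * np.pi * 40e3 * np.arange(200) / fs)   # short 40 kHz burst

rng = np.random.default_rng(1)
received = 0.05 * rng.normal(size=5000)       # background noise
true_delay = 1234                             # samples until the echo returns
received[true_delay:true_delay + burst.size] += 0.3 * burst

correlation = np.correlate(received, burst, mode="valid")
estimated_delay = int(np.argmax(correlation))
print("estimated delay:", estimated_delay, "samples (true:", true_delay, ")")
```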

  13. Facilitating Programming of Vision-Equipped Robots through Robotic Skills and Projection Mapping

    DEFF Research Database (Denmark)

    Andersen, Rasmus Skovgaard

    The field of collaborative industrial robots is currently developing fast, both in industry and in the scientific community. Companies such as Rethink Robotics and Universal Robots are redefining the concept of an industrial robot, and entirely new markets and use cases are becoming relevant for ...

  14. Vision-Based Interfaces Applied to Assistive Robots

    Directory of Open Access Journals (Sweden)

    Elisa Perez

    2013-02-01

    This paper presents two vision-based interfaces for disabled people to command a mobile robot for personal assistance. The developed interfaces can be subdivided according to the image processing algorithm implemented for the detection and tracking of two different body regions. The first interface detects and tracks movements of the user's head, and these movements are transformed into linear and angular velocities in order to command a mobile robot. The second interface detects and tracks movements of the user's hand, and these movements are similarly transformed. This paper also presents the control laws for the robot. The experimental results demonstrate good performance and a balance between complexity and feasibility for real-time applications.

  15. A remote assessment system with a vision robot and wearable sensors.

    Science.gov (United States)

    Zhang, Tong; Wang, Jue; Ren, Yumiao; Li, Jianjun

    2004-01-01

    This paper describes a remote rehabilitation assessment system, under ongoing research, that has a six-degree-of-freedom binocular vision robot to capture visual information and a group of wearable sensors to acquire biomechanical signals. A server computer is fixed on the robot to provide services to the robot's controller and all the sensors. The robot is connected to the Internet by a wireless channel, as are the sensors to the robot. Rehabilitation professionals can semi-automatically conduct an assessment program via the Internet. The preliminary results show that the smart device, including the robot and the sensors, can improve the quality of remote assessment and reduce the complexity of operation at a distance.

  16. Augmented models for improving vision control of a mobile robot

    DEFF Research Database (Denmark)

    Andersen, Gert Lysgaard; Christensen, Anders C.; Ravn, Ole

    1994-01-01

    This paper describes the modelling phases for the design of a path tracking vision controller for a three-wheeled mobile robot. It is shown that, by including the dynamic characteristics of vision and encoder sensors and implementing the total system in one multivariable control loop, one can obtain good performance even when using standard low-cost equipment and a comparatively low sampling rate. The plant model is a compound of kinematic, dynamic and sensor submodels, all integrated into a discrete state space representation. An intelligent strategy is applied for the vision sensor...

  17. Design And Implementation Of Integrated Vision-Based Robotic Workcells

    Science.gov (United States)

    Chen, Michael J.

    1985-01-01

    Reports have been sparse on large-scale, intelligent integration of complete robotic systems for automating the microelectronics industry. This paper describes the application of state-of-the-art computer-vision technology for manufacturing of miniaturized electronic components. The concepts of FMS - Flexible Manufacturing Systems, work cells, and work stations and their control hierarchy are illustrated in this paper. Several computer-controlled work cells used in the production of thin-film magnetic heads are described. These cells use vision for in-process control of head-fixture alignment and real-time inspection of production parameters. The vision sensor and other optoelectronic sensors, coupled with transport mechanisms such as steppers, x-y-z tables, and robots, have created complete sensorimotor systems. These systems greatly increase the manufacturing throughput as well as the quality of the final product. This paper uses these automated work cells to exemplify the underlying design philosophy and principles in the fabrication of vision-based robotic systems.

  18. Research on robot navigation vision sensor based on grating projection stereo vision

    Science.gov (United States)

    Zhang, Xiaoling; Luo, Yinsheng; Lin, Yuchi; Zhu, Lei

    2016-10-01

    A novel visual navigation method based on grating projection stereo vision for mobile robots in dark environments is proposed. This method combines the grating projection profilometry of plane structured light with stereo vision technology. It can be employed to realize obstacle detection, SLAM (Simultaneous Localization and Mapping) and visual odometry for mobile robot navigation in dark environments, without the image matching of stereo vision and without the phase unwrapping of grating projection profilometry. First, we study the new vision sensor theoretically and build a geometric and mathematical model of the grating projection stereo vision system. Second, the computation of the 3D coordinates of obstacles in the robot's visual field is studied, and the obstacles in the field are then located accurately. The results of simulation experiments and analysis show that this research helps to address the problem of autonomous navigation of mobile robots in dark environments, and provides a theoretical basis and exploration direction for further study on the navigation of space exploration robots in dark, GPS-denied environments.

  19. 3D vision in a virtual reality robotics environment

    Science.gov (United States)

    Schutz, Christian L.; Natonek, Emerico; Baur, Charles; Hugli, Heinz

    1996-12-01

    Virtual reality robotics (VRR) needs sensing feedback from the real environment. To show how advanced 3D vision provides new perspectives to fulfill these needs, this paper presents an architecture and system that integrates hybrid 3D vision and VRR and reports about experiments and results. The first section discusses the advantages of virtual reality in robotics, the potential of a 3D vision system in VRR and the contribution of a knowledge database, robust control and the combination of intensity and range imaging to build such a system. Section two presents the different modules of a hybrid 3D vision architecture based on hypothesis generation and verification. Section three addresses the problem of the recognition of complex, free-form 3D objects and shows how and why the newer approaches based on geometric matching solve the problem. This free-form matching can be efficiently integrated in a VRR system as a hypothesis generation knowledge-based 3D vision system. In the fourth part, we introduce the hypothesis verification based on intensity images which checks object pose and texture. Finally, we show how this system has been implemented and operates in a practical VRR environment used for an assembly task.

  20. A real time tracking vision system and its application to robotics

    International Nuclear Information System (INIS)

    Inoue, Hirochika

    1994-01-01

    Among the various sensing channels, vision is the most important for making a robot intelligent. If the robot is provided with a high-speed visual tracking capability, the robot-environment interaction becomes dynamic instead of static, and thus the potential repertoire of robot behavior becomes very rich. For this purpose we developed a real-time tracking vision system. The fundamental operation on which our system is based is the calculation of correlation between local images. The use of a special chip for correlation and a multi-processor configuration enables the robot to track more than a hundred cues at full video rate. In addition to the fundamental visual performance, applications for robot behavior control are also introduced. (author)
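    As an illustration of the correlation operation this record describes, the following minimal sketch tracks a single cue by normalized cross-correlation over a small search window, using an assumed OpenCV-style API in software rather than the paper's dedicated correlation chip and multi-processor hardware; the template size, search radius and camera index are illustrative assumptions.

```python
# Minimal software sketch of correlation-based cue tracking (assumed OpenCV API);
# the original system performs the same local-image correlations on special
# hardware at full video rate for hundreds of cues.
import cv2

def track_cue(frame_gray, template, prev_xy, search_radius=32):
    """Locate `template` near `prev_xy` via normalized cross-correlation."""
    x, y = prev_xy
    th, tw = template.shape
    x0, y0 = max(x - search_radius, 0), max(y - search_radius, 0)
    x1 = min(x + search_radius + tw, frame_gray.shape[1])
    y1 = min(y + search_radius + th, frame_gray.shape[0])
    window = frame_gray[y0:y1, x0:x1]
    result = cv2.matchTemplate(window, template, cv2.TM_CCOEFF_NORMED)
    _, score, _, max_loc = cv2.minMaxLoc(result)
    return (x0 + max_loc[0], y0 + max_loc[1]), score

cap = cv2.VideoCapture(0)                 # assumed camera index
ok, frame = cap.read()
if not ok:
    raise RuntimeError("no camera frame")
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
pos = (200, 150)                          # assumed initial cue position
template = gray[pos[1]:pos[1] + 16, pos[0]:pos[0] + 16]   # 16x16 local patch
while ok:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    pos, score = track_cue(gray, template, pos)
```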

  1. Exploratorium: Robots.

    Science.gov (United States)

    Brand, Judith, Ed.

    2002-01-01

    This issue of Exploratorium Magazine focuses on the topic robotics. It explains how to make a vibrating robotic bug and features articles on robots. Contents include: (1) "Where Robot Mice and Robot Men Run Round in Robot Towns" (Ray Bradbury); (2) "Robots at Work" (Jake Widman); (3) "Make a Vibrating Robotic Bug" (Modesto Tamez); (4) "The Robot…

  2. Laws on Robots, Laws by Robots, Laws in Robots : Regulating Robot Behaviour by Design

    NARCIS (Netherlands)

    Leenes, R.E.; Lucivero, F.

    2015-01-01

    Speculation about robot morality is almost as old as the concept of a robot itself. Asimov’s three laws of robotics provide an early and well-discussed example of moral rules robots should observe. Despite the widespread influence of the three laws of robotics and their role in shaping visions of

  3. Robot Control for Dynamic Environment Using Vision and Autocalibration

    DEFF Research Database (Denmark)

    Larsen, Thomas Dall; Lildballe, Jacob; Andersen, Nils Axel

    1997-01-01

    To enhance flexibility and extend the area of applications for robotic systems, it is important that the systems are capable of handling uncertainties and respond to (random) human behaviour. A vision system must very often be able to work in a dynamical ``noisy'' world where the placement of objects...... can vary within certain restrictions. Furthermore it would be useful if the system is able to recover automatically after serious changes have been applied, for instance if the camera has been moved. In this paper an implementation of such a system is described. The system is a robot capable of playing......

  4. A cognitive approach to vision for a mobile robot

    Science.gov (United States)

    Benjamin, D. Paul; Funk, Christopher; Lyons, Damian

    2013-05-01

    We describe a cognitive vision system for a mobile robot. This system works in a manner similar to the human vision system, using saccadic, vergence and pursuit movements to extract information from visual input. At each fixation, the system builds a 3D model of a small region, combining information about distance, shape, texture and motion. These 3D models are embedded within an overall 3D model of the robot's environment. This approach turns the computer vision problem into a search problem, with the goal of constructing a physically realistic model of the entire environment. At each step, the vision system selects a point in the visual input to focus on. The distance, shape, texture and motion information are computed in a small region and used to build a mesh in a 3D virtual world. Background knowledge is used to extend this structure as appropriate, e.g. if a patch of wall is seen, it is hypothesized to be part of a large wall and the entire wall is created in the virtual world, or if part of an object is recognized, the whole object's mesh is retrieved from the library of objects and placed into the virtual world. The difference between the input from the real camera and from the virtual camera is compared using local Gaussians, creating an error mask that indicates the main differences between them. This is then used to select the next points to focus on. This approach permits us to use very expensive algorithms on small localities, thus generating very accurate models. It also is task-oriented, permitting the robot to use its knowledge about its task and goals to decide which parts of the environment need to be examined. The software components of this architecture include PhysX for the 3D virtual world, OpenCV and the Point Cloud Library for visual processing, and the Soar cognitive architecture, which controls the perceptual processing and robot planning. The hardware is a custom-built pan-tilt stereo color camera. We describe experiments using both
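    A hedged sketch of the real-versus-virtual comparison step described above, assuming that the "local Gaussians" amount to comparing Gaussian-smoothed versions of the camera image and the rendered virtual-camera view; the kernel size and threshold are illustrative, not the authors' values.

```python
# Sketch: build an error mask marking where the rendered virtual view disagrees
# with the real camera image; large regions of the mask would become candidate
# fixation points for the next saccade. Kernel size and threshold are assumed.
import cv2
import numpy as np

def error_mask(real_gray, virtual_gray, ksize=15, thresh=25.0):
    real_blur = cv2.GaussianBlur(real_gray.astype(np.float32), (ksize, ksize), 0)
    virt_blur = cv2.GaussianBlur(virtual_gray.astype(np.float32), (ksize, ksize), 0)
    diff = np.abs(real_blur - virt_blur)
    return (diff > thresh).astype(np.uint8) * 255
```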

  5. A Novel Bioinspired Vision System: A Step toward Real-Time Human-Robot Interactions

    Directory of Open Access Journals (Sweden)

    Abdul Rahman Hafiz

    2011-01-01

    Full Text Available Building a human-like robot that could be involved in our daily lives is a dream of many scientists. Achieving a sophisticated vision system for the robot, which can enhance its ability to interact with humans in real time, is one of the main keys toward realizing such an autonomous robot. In this work, we suggest a bioinspired vision system that helps to develop advanced human-robot interaction in an autonomous humanoid robot. First, we enhance the robot's vision accuracy online by applying a novel dynamic edge detection algorithm abstracted from the role that the horizontal cells play in the mammalian retina. Second, in order to support the first algorithm, we improve the robot's tracking ability by designing a variant photoreceptor distribution corresponding to what exists in the human vision system. The experimental results verified the validity of the model. The robot could have a clear vision in real time and build a mental map that assisted it to be aware of the frontal users and to develop a positive interaction with them.

  6. Robot Actors, Robot Dramaturgies

    DEFF Research Database (Denmark)

    Jochum, Elizabeth

    This paper considers the use of tele-operated robots in live performance. Robots and performance have long been linked, from the working androids and automata staged in popular exhibitions during the nineteenth century and the robots featured at Cybernetic Serendipity (1968) and the World Expo...

  7. Beyond Speculative Robot Ethics

    NARCIS (Netherlands)

    Smits, M.; Van der Plas, A.

    2010-01-01

    In this article we develop a dialogue model for robot technology experts and designated users to discuss visions on the future of robotics in long-term care. Our vision assessment study aims for more distinguished and more informed visions on future robots. Surprisingly, our experiment also led to

  8. Robotic architectures

    CSIR Research Space (South Africa)

    Mtshali, M

    2010-01-01

    Full Text Available In the development of mobile robotic systems, a robotic architecture plays a crucial role in interconnecting all the sub-systems and controlling the system. The design of robotic architectures for mobile autonomous robots is a challenging...

  9. Vision-Based Recognition of Activities by a Humanoid Robot

    Directory of Open Access Journals (Sweden)

    Mounîm A. El-Yacoubi

    2015-12-01

    Full Text Available We present an autonomous assistive robotic system for human activity recognition from video sequences. Due to the large variability inherent to video capture from a non-fixed robot (as opposed to a fixed camera), as well as the robot's limited computing resources, implementation has been guided by robustness to this variability and by memory and computing speed efficiency. To accommodate motion speed variability across users, we encode motion using dense interest point trajectories. Our recognition model harnesses the dense interest point bag-of-words representation through an intersection kernel-based SVM that better accommodates the large intra-class variability stemming from a robot operating in different locations and conditions. To contextually assess the engine as implemented in the robot, we compare it with the most recent approaches to human action recognition performed on public datasets (non-robot-based), including a novel approach of our own that is based on a two-layer SVM-hidden conditional random field sequential recognition model. The latter's performance is among the best within the recent state of the art. We show that our robot-based recognition engine, while less accurate than the sequential model, nonetheless shows good performance, especially given the adverse test conditions of the robot, relative to those of a fixed camera.

  10. Motion based segmentation for robot vision using adapted EM algorithm

    NARCIS (Netherlands)

    Zhao, Wei; Roos, Nico

    2016-01-01

    Robots operate in a dynamic world in which objects are often moving. The movement of objects may help the robot to segment the objects from the background. The result of the segmentation can subsequently be used to identify the objects. This paper investigates the possibility of segmenting objects

  11. Vision-aided inertial navigation system for robotic mobile mapping

    Science.gov (United States)

    Bayoud, Fadi; Skaloud, Jan

    2008-04-01

    A mapping system by vision-aided inertial navigation was developed for areas where GNSS signals are unreachable. In this framework, a methodology for the integration of vision and inertial sensors is presented, analysed and tested. The system employs the method of “SLAM: Simultaneous Localisation And Mapping” where the only external input available to the system at the beginning of the mapping mission is a number of features with known coordinates. SLAM is a term used in the robotics community to describe the problem of mapping the environment and at the same time using this map to determine the location of the mapping device. Differing from the robotics approach, the presented development stems from the frameworks of photogrammetry and kinematic geodesy that are merged in two filters that run in parallel: the Least-Squares Adjustment (LSA) for feature-coordinate determination and the Kalman filter (KF) for navigation correction. To test this approach, a mapping system-prototype comprising two CCD cameras and one Inertial Measurement Unit (IMU) is introduced. Conceptually, the outputs of the LSA photogrammetric resection are used as the external measurements for the KF that corrects the inertial navigation. The filtered position and orientation are subsequently employed in the photogrammetric intersection to map the surrounding features that are used as control points for the resection in the next epoch. We confirm empirically the dependency of navigation performance on the quality of the images and the number of tracked features, as well as on the geometry of the stereo-pair. Due to its autonomous nature, the SLAM's performance is further affected by the quality of the IMU initialisation and the a priori assumptions on error distribution. Using the example of the presented system we show that centimetre accuracy can be achieved in both navigation and mapping when the image geometry is optimal.
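    To make the parallel-filter idea concrete, the sketch below shows a generic Kalman-filter measurement update in which a position fix from the photogrammetric resection corrects an inertially propagated state; the state layout, matrices and noise levels are illustrative assumptions, not the authors' actual models.

```python
# Generic Kalman-filter correction of an inertial state by a photogrammetric
# position fix; all matrices and noise values below are assumed for illustration.
import numpy as np

def kf_update(x, P, z, H, R):
    """Correct state x (covariance P) with measurement z = H x + noise(R)."""
    y = z - H @ x                         # innovation
    S = H @ P @ H.T + R                   # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)        # Kalman gain
    x_new = x + K @ y
    P_new = (np.eye(len(x)) - K @ H) @ P
    return x_new, P_new

# Hypothetical 6-state [position, velocity] filter corrected by a 3D fix from
# the least-squares photogrammetric resection.
x = np.zeros(6)
P = np.eye(6)
H = np.hstack([np.eye(3), np.zeros((3, 3))])   # position-only measurement
R = np.eye(3) * 0.01 ** 2                      # ~1 cm fix uncertainty (assumed)
z = np.array([0.10, -0.05, 0.02])              # hypothetical resection output
x, P = kf_update(x, P, z, H, R)
```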

  12. Vision Sensor-Based Road Detection for Field Robot Navigation

    Directory of Open Access Journals (Sweden)

    Keyu Lu

    2015-11-01

    Full Text Available Road detection is an essential component of field robot navigation systems. Vision sensors play an important role in road detection for their great potential in environmental perception. In this paper, we propose a hierarchical vision sensor-based method for robust road detection in challenging road scenes. More specifically, for a given road image captured by an on-board vision sensor, we introduce a multiple population genetic algorithm (MPGA)-based approach for efficient road vanishing point detection. Superpixel-level seeds are then selected in an unsupervised way using a clustering strategy. Then, according to the GrowCut framework, the seeds proliferate and iteratively try to occupy their neighbors. After convergence, the initial road segment is obtained. Finally, in order to achieve a globally-consistent road segment, the initial road segment is refined using the conditional random field (CRF) framework, which integrates high-level information into road detection. We perform several experiments to evaluate the overall performance, scale sensitivity and noise sensitivity of the proposed method. The experimental results demonstrate that the proposed method exhibits high robustness compared to the state of the art.

  13. Development of Vision Control Scheme of Extended Kalman filtering for Robot's Position Control

    International Nuclear Information System (INIS)

    Jang, W. S.; Kim, K. S.; Park, S. I.; Kim, K. Y.

    2003-01-01

    It is very important to reduce the computational time needed to estimate the parameters of a vision control algorithm for robot position control in real time. Unfortunately, the commonly used batch estimation requires too much computational time because it is an iterative method, which makes it difficult to use for robot position control in real time. On the other hand, the Extended Kalman Filter (EKF) has many advantages for calculating the parameters of a vision system in that it is a simple and efficient recursive procedure. Thus, this study develops an EKF algorithm for robot vision control in real time. The vision system model used in this study involves six parameters to account for the inner (orientation, focal length, etc.) and outer (the relative location between robot and camera) parameters of the camera. The EKF has first been applied to estimate these parameters, and then, with these estimated parameters, to estimate the robot's joint angles used for the robot's operation. Finally, the practicality of the vision control scheme based on the EKF has been experimentally verified by performing the robot's position control
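    The recursion that makes the EKF attractive here can be sketched as follows, assuming a generic measurement model h and its Jacobian in place of the paper's six-parameter camera model, which is not reproduced in this record.

```python
# Generic EKF update for recursive parameter estimation from image measurements;
# h() and H_jac() are placeholders for the paper's six-parameter vision model.
import numpy as np

def ekf_update(theta, P, z, h, H_jac, R):
    """Refine parameter estimate `theta` (covariance P) with one image measurement z."""
    z_pred = h(theta)                     # predicted image coordinates
    H = H_jac(theta)                      # measurement Jacobian at current estimate
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    theta_new = theta + K @ (z - z_pred)
    P_new = (np.eye(len(theta)) - K @ H) @ P
    return theta_new, P_new
```

    Unlike a batch estimator, each new image point refines the estimate with a fixed amount of work, which is what makes real-time operation feasible.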

  14. Vision-based robotic system for object agnostic placing operations

    DEFF Research Database (Denmark)

    Rofalis, Nikolaos; Nalpantidis, Lazaros; Andersen, Nils Axel

    2016-01-01

    Industrial robots are part of almost all modern factories. Even though industrial robots nowadays manipulate objects of a huge variety in different environments, exact knowledge about both of them is generally assumed. The aim of this work is to investigate the ability of a robotic system to operate within an unknown environment manipulating unknown objects. The developed system detects objects, finds matching compartments in a placing box, and ultimately grasps and places the objects there. The developed system exploits 3D sensing and visual feature extraction. No prior knowledge is provided to the system, neither for the objects nor for the placing box. The experimental evaluation of the developed robotic system shows that a combination of seemingly simple modules and strategies can provide an effective solution to the targeted problem.

  15. Experiments on mobile robot stereo vision system calibration under hardware imperfection

    Directory of Open Access Journals (Sweden)

    Safin Ramil

    2018-01-01

    Full Text Available Calibration is essential for any robot vision system to achieve high accuracy in deriving metric information about objects. A typical requirement for a stereo vision system, in order to obtain better calibration results, is to guarantee that both cameras keep the same vertical level. However, cameras may be displaced due to severe conditions of robot operation or some other circumstances. This paper presents our experimental approach to the problem of mobile robot stereo vision system calibration under a hardware imperfection. In our experiments, we used the crawler-type mobile robot «Servosila Engineer». The stereo cameras of the robot were displaced relative to each other, causing loss of surrounding environment information. We implemented and verified checkerboard and circle grid based calibration methods. The comparison of the two methods demonstrated that a circle grid based calibration should be preferred over a classical checkerboard calibration approach.

  16. Robotics Vision-based Heuristic Reasoning for Underwater Target Tracking and Navigation

    OpenAIRE

    Kia, Chua; Arshad, Mohd Rizal

    2006-01-01

    This paper presents a robotics vision-based heuristic reasoning system for underwater target tracking and navigation. This system is introduced to improve the level of automation of underwater Remote Operated Vehicles (ROVs) operations. A prototype which combines computer vision with an underwater robotics system is successfully designed and developed to perform target tracking and intelligent navigation. This study focuses on developing image processing algorithms and fuzzy inference system ...

  17. Robot engineering

    International Nuclear Information System (INIS)

    Jung, Seul

    2006-02-01

    This book deals with robot engineering, giving descriptions of the history of robots, current trends in the robotics field, the work and characteristics of industrial robots, essential merit and vector, application of matrices, analysis of basic vectors, the Denavit-Hartenberg representation, robot kinematics such as forward kinematics, inverse kinematics, cases of MATLAB programs, and motion kinematics, robot kinetics such as moment of inertia, centrifugal force and Coriolis force, the Euler-Lagrange equation, a course plan, and SIMULINK position control of robots.

  18. Robot engineering

    Energy Technology Data Exchange (ETDEWEB)

    Jung, Seul

    2006-02-15

    This book deals with robot engineering, giving descriptions of the history of robots, current trends in the robotics field, the work and characteristics of industrial robots, essential merit and vector, application of matrices, analysis of basic vectors, the Denavit-Hartenberg representation, robot kinematics such as forward kinematics, inverse kinematics, cases of MATLAB programs, and motion kinematics, robot kinetics such as moment of inertia, centrifugal force and Coriolis force, the Euler-Lagrange equation, a course plan, and SIMULINK position control of robots.

  19. An assembly system based on industrial robot with binocular stereo vision

    Science.gov (United States)

    Tang, Hong; Xiao, Nanfeng

    2017-01-01

    This paper proposes an electronic part and component assembly system based on an industrial robot with binocular stereo vision. First, binocular stereo vision with a visual attention mechanism model is used to quickly locate the image regions that contain the electronic parts and components. Second, a deep neural network is adopted to recognize the features of the electronic parts and components. Third, in order to control the end-effector of the industrial robot to grasp the electronic parts and components, a genetic algorithm (GA) is proposed to compute the transition matrix and the inverse kinematics of the industrial robot (end-effector), which plays a key role in bridging the binocular stereo vision and the industrial robot. Finally, the proposed assembly system is tested in LED component assembly experiments, and the results show that it has high efficiency and good applicability.
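    The step that bridges the binocular stereo vision and the robot is the recovery of a metric 3D grasp point from a matched pair of image detections; a minimal sketch, assuming calibrated 3x4 projection matrices from a prior stereo calibration (the intrinsics and baseline below are made up), is:

```python
# Triangulate a matched left/right detection of a part into a 3D grasp point;
# the projection matrices here are illustrative, not a real calibration.
import cv2
import numpy as np

def part_position_3d(P_left, P_right, uv_left, uv_right):
    pts_l = np.asarray(uv_left, dtype=np.float64).reshape(2, 1)
    pts_r = np.asarray(uv_right, dtype=np.float64).reshape(2, 1)
    X_h = cv2.triangulatePoints(P_left, P_right, pts_l, pts_r)   # 4x1 homogeneous
    return (X_h[:3] / X_h[3]).ravel()                            # 3D point, left-camera frame

K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])                                  # assumed intrinsics
P_left = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P_right = K @ np.hstack([np.eye(3), np.array([[-0.1], [0.0], [0.0]])])  # 10 cm baseline
grasp_xyz = part_position_3d(P_left, P_right, (512.0, 384.0), (498.0, 384.0))
```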

  20. Modeling and Implementation of Omnidirectional Soccer Robot with Wide Vision Scope Applied in Robocup-MSL

    Directory of Open Access Journals (Sweden)

    Mohsen Taheri

    2010-04-01

    Full Text Available The purpose of this paper is to design and implement a middle-size soccer robot that conforms to the RoboCup MSL league. First, according to the rules of RoboCup, we designed the middle-size soccer robot. The proposed autonomous soccer robot consists of the mechanical platform, motion control module, omni-directional vision module, front vision module, image processing and recognition module, target object positioning and real coordinate reconstruction, robot path planning, competition strategies, and obstacle avoidance. The soccer robot is equipped with a laptop computer system and interface circuits to make decisions. The omnidirectional vision sensor of the vision system deals with the image processing and positioning for obstacle avoidance and target tracking. The boundary-following algorithm (BFA) is applied to find the important features of the field. We utilize a sensor data fusion method for the control system parameters, self-localization and world modeling. A vision-based self-localization and the conventional odometry systems are fused for robust self-localization. The localization algorithm includes filtering, sharing and integration of the data for different types of objects recognized in the environment. In the control strategies, we present three state modes, which include the Attack Strategy, Defense Strategy and Intercept Strategy. The methods have been tested on middle-size robots in many RoboCup competition fields.

  1. KNOWLEDGE-BASED ROBOT VISION SYSTEM FOR AUTOMATED PART HANDLING

    Directory of Open Access Journals (Sweden)

    J. Wang

    2012-01-01

    Full Text Available

    ENGLISH ABSTRACT: This paper discusses an algorithm incorporating a knowledge-based vision system into an industrial robot system for handling parts intelligently. A continuous fuzzy controller was employed to extract boundary information in a computationally efficient way. The developed algorithm for on-line part recognition using fuzzy logic is shown to be an effective solution to extract the geometric features of objects. The proposed edge vector representation method provides enough geometric information and facilitates the object geometric reconstruction for gripping planning. Furthermore, a part-handling model was created by extracting the grasp features from the geometric features.

    AFRIKAANSE OPSOMMING (translated): This article describes a knowledge-based vision system algorithm that is incorporated into an industrial robot system in order to achieve intelligent component handling. A continuous fuzzy controller was used to determine various object information by means of an efficient computational method. The developed algorithm for on-line component recognition makes use of fuzzy logic and is shown to be an effective method for determining the geometric information of objects. The proposed edge vector method provides sufficient information and makes geometric reconstruction of the object possible for grip planning. Furthermore, a component-handling model was developed by deriving the grasp features from the geometric properties.

  2. Robot soccer anywhere: achieving persistent autonomous navigation, mapping, and object vision tracking in dynamic environments

    Science.gov (United States)

    Dragone, Mauro; O'Donoghue, Ruadhan; Leonard, John J.; O'Hare, Gregory; Duffy, Brian; Patrikalakis, Andrew; Leederkerken, Jacques

    2005-06-01

    The paper describes an ongoing effort to enable autonomous mobile robots to play soccer in unstructured, everyday environments. Unlike conventional robot soccer competitions that are usually held on purpose-built robot soccer "fields", in our work we seek to develop the capability for robots to demonstrate aspects of soccer-playing in more diverse environments, such as schools, hospitals, or shopping malls, with static obstacles (furniture) and dynamic natural obstacles (people). This problem of "Soccer Anywhere" presents numerous research challenges including: (1) Simultaneous Localization and Mapping (SLAM) in dynamic, unstructured environments, (2) software control architectures for decentralized, distributed control of mobile agents, (3) integration of vision-based object tracking with dynamic control, and (4) social interaction with human participants. In addition to the intrinsic research merit of these topics, we believe that this capability would prove useful for outreach activities, in demonstrating robotics technology to primary and secondary school students, to motivate them to pursue careers in science and engineering.

  3. Vision guided robot bin picking of cylindrical objects

    DEFF Research Database (Denmark)

    Christensen, Georg Kronborg; Dyhr-Nielsen, Carsten

    1997-01-01

    In order to achieve increased flexibility on robotic production lines, an investigation of the robot bin-picking problem is presented. In the paper, the limitations related to previous attempts to solve the problem are pointed out and a set of innovative methods is presented. The main elements...

  4. Developing operation algorithms for vision subsystems in autonomous mobile robots

    Science.gov (United States)

    Shikhman, M. V.; Shidlovskiy, S. V.

    2018-05-01

    The paper analyzes algorithms for selecting keypoints on the image for the subsequent automatic detection of people and obstacles. The algorithm is based on the histogram of oriented gradients and the support vector method. The combination of these methods allows successful selection of dynamic and static objects. The algorithm can be applied in various autonomous mobile robots.
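    For reference, the histogram-of-oriented-gradients plus SVM combination mentioned above is available off the shelf; the sketch below uses OpenCV's pretrained pedestrian detector as a stand-in for the detector described in the paper, with illustrative detection parameters.

```python
# People detection with a HOG descriptor and a linear SVM, using OpenCV's
# pretrained pedestrian model as a stand-in for the paper's detector.
import cv2

hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

def detect_people(frame_bgr):
    """Return bounding boxes (x, y, w, h) of detected people."""
    boxes, _weights = hog.detectMultiScale(frame_bgr, winStride=(8, 8), scale=1.05)
    return [tuple(int(v) for v in b) for b in boxes]

cap = cv2.VideoCapture(0)                 # assumed on-board camera
ok, frame = cap.read()
if ok:
    for (x, y, w, h) in detect_people(frame):
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
```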

  5. A novel method of robot location using RFID and stereo vision

    Science.gov (United States)

    Chen, Diansheng; Zhang, Guanxin; Li, Zhen

    2012-04-01

    This paper proposes a new global localization method for mobile robots based on RFID (Radio Frequency Identification Devices) and stereo vision, which enables the robot to obtain global coordinates with good accuracy when quickly adapting to an unfamiliar and new environment. This method uses RFID tags as artificial landmarks; the 3D coordinates of the tags under the global coordinate system are written in the IC memory. The robot can read them through an RFID reader; meanwhile, using stereo vision, the 3D coordinates of the tags under the robot coordinate system are measured. Combined with the robot's attitude coordinate-system transformation matrix from the pose measuring system, the translation of the robot coordinate system to the global coordinate system is obtained, which is also the coordinate of the robot's current location under the global coordinate system. The average error of our method is 0.11 m in an experiment conducted in a 7 m × 7 m lobby; the result is much more accurate than that of other localization methods.
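    The coordinate transformation at the heart of this method can be sketched as below, assuming the tag's global position read from its IC memory, its position in the robot frame measured by stereo, and the robot's attitude as a rotation matrix; the variable names and example values are assumptions.

```python
# Recover the robot's global position from one RFID tag:
#   p_tag_global = R_robot_to_global @ p_tag_robot + p_robot_global
# so the robot position follows by rearranging; values below are illustrative.
import numpy as np

def robot_global_position(tag_global, tag_in_robot, R_robot_to_global):
    return np.asarray(tag_global) - R_robot_to_global @ np.asarray(tag_in_robot)

yaw = np.pi / 2                              # robot rotated 90 deg about vertical
R = np.array([[np.cos(yaw), -np.sin(yaw), 0.0],
              [np.sin(yaw),  np.cos(yaw), 0.0],
              [0.0,          0.0,         1.0]])
tag_global = [5.0, 3.0, 2.5]                 # written in the tag's IC memory
tag_in_robot = [1.2, 0.4, 2.5]               # measured by the stereo head
print(robot_global_position(tag_global, tag_in_robot, R))
```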

  6. A Framework for Obstacles Avoidance of Humanoid Robot Using Stereo Vision

    Directory of Open Access Journals (Sweden)

    Widodo Budiharto

    2013-04-01

    Full Text Available In this paper, we propose a framework for a multiple moving obstacle avoidance strategy using stereo vision for a humanoid robot in an indoor environment. We assume that this humanoid robot is used as a service robot to deliver a cup to a customer from a starting point to a destination point. We have successfully developed and introduced three main modules: to recognize faces, to identify multiple moving obstacles, and to initiate a maneuver. A group of people who are walking will be tracked as multiple moving obstacles. A predefined maneuver to avoid obstacles is applied to the robot because of the limited view angle of the stereo camera for detecting multiple obstacles. The contribution of this research is a new method for a multiple moving obstacle avoidance strategy with a Bayesian approach using stereo vision, based on the direction and speed of obstacles. Depth estimation is used to obtain the distance between the obstacles and the robot. We present the results of experiments with the humanoid robot called Gatotkoco II, which uses our proposed method, and evaluate its performance. The proposed moving obstacle avoidance strategy was tested empirically and proved effective for the humanoid robot.

  7. A Vision-Based Self-Calibration Method for Robotic Visual Inspection Systems

    Science.gov (United States)

    Yin, Shibin; Ren, Yongjie; Zhu, Jigui; Yang, Shourui; Ye, Shenghua

    2013-01-01

    A vision-based robot self-calibration method is proposed in this paper to evaluate the kinematic parameter errors of a robot using a visual sensor mounted on its end-effector. This approach can be performed in the industrial field without external, expensive apparatus or an elaborate setup. A robot Tool Center Point (TCP) is defined in the structural model of a line-structured laser sensor, and aligned to a reference point fixed in the robot workspace. A mathematical model is established to formulate the misalignment errors with kinematic parameter errors and TCP position errors. Based on the fixed-point constraints, the kinematic parameter errors and TCP position errors are identified with an iterative algorithm. Compared to conventional methods, this proposed method eliminates the need for robot base-frame and hand-to-eye calibrations, shortens the error propagation chain, and makes the calibration process more accurate and convenient. A validation experiment is performed on an ABB IRB2400 robot. An optimal configuration for the number and distribution of fixed points in the robot workspace is obtained based on the experimental results. Comparative experiments reveal that there is a significant improvement in the measuring accuracy of the robotic visual inspection system. PMID:24300597

  8. Active vision via extremum seeking for robots in unstructured environments : Applications in object recognition and manipulation

    NARCIS (Netherlands)

    Calli, B.; Caarls, W.; Wisse, M.; Jonker, P.P.

    2018-01-01

    In this paper, a novel active vision strategy is proposed for optimizing the viewpoint of a robot's vision sensor for a given success criterion. The strategy is based on extremum seeking control (ESC), which introduces two main advantages: 1) Our approach is model free: It does not require an

  9. Method of mobile robot indoor navigation by artificial landmarks with use of computer vision

    Science.gov (United States)

    Glibin, E. S.; Shevtsov, A. A.; Enik, O. A.

    2018-05-01

    The article describes an algorithm for mobile robot indoor navigation based on the use of visual odometry. The results of an experiment identifying errors in the calculated distance traveled due to wheel slip are presented. It is shown that the use of computer vision allows one to correct erroneous coordinates of the robot with the help of artificial landmarks. The control system utilizing the proposed method has been realized on the basis of an Arduino Mega 2560 controller and a single-board computer Raspberry Pi 3. The results of the experiment on mobile robot navigation with the use of this control system are presented.

  10. Vision-based Navigation and Reinforcement Learning Path Finding for Social Robots

    OpenAIRE

    Pérez Sala, Xavier

    2010-01-01

    We propose a robust system for automatic Robot Navigation in uncontrolled environments. The system is composed by three main modules: the Artificial Vision module, the Reinforcement Learning module, and the behavior control module. The aim of the system is to allow a robot to automatically find a path that arrives to a prefixed goal. Turn and straight movements in uncontrolled environments are automatically estimated and controlled using the proposed modules. The Artificial Vi...

  11. Evaluation of Two Robot Vision Control Algorithms Developed Based on N-R and EKG Methods for Slender Bar Placement

    Energy Technology Data Exchange (ETDEWEB)

    Son, Jae Kyung; Jang, Wan Shik; Hong, Sung Mun [Gwangju (Korea, Republic of)

    2013-04-15

    Many problems need to be solved before vision systems can actually be applied in industry, such as the precision of the kinematics model of the robot control algorithm based on visual information, active compensation of the camera's focal length and orientation during the movement of the robot, and understanding the mapping of the physical 3-D space into 2-D camera coordinates. An algorithm is proposed to enable the robot to move actively even if the relative position between the camera and the robot is unknown. To solve the correction problem, this study proposes a vision system model with six camera parameters. To develop the robot vision control algorithm, the N-R and EKG methods are applied to the vision system model. Finally, the position accuracy and processing time of the two algorithms developed based on the EKG and the N-R methods are compared experimentally by making the robot perform a slender bar placement task.

  12. Evaluation of Two Robot Vision Control Algorithms Developed Based on N-R and EKG Methods for Slender Bar Placement

    International Nuclear Information System (INIS)

    Son, Jae Kyung; Jang, Wan Shik; Hong, Sung Mun

    2013-01-01

    Many problems need to be solved before vision systems can actually be applied in industry, such as the precision of the kinematics model of the robot control algorithm based on visual information, active compensation of the camera's focal length and orientation during the movement of the robot, and understanding the mapping of the physical 3-D space into 2-D camera coordinates. An algorithm is proposed to enable the robot to move actively even if the relative position between the camera and the robot is unknown. To solve the correction problem, this study proposes a vision system model with six camera parameters. To develop the robot vision control algorithm, the N-R and EKG methods are applied to the vision system model. Finally, the position accuracy and processing time of the two algorithms developed based on the EKG and the N-R methods are compared experimentally by making the robot perform a slender bar placement task.

  13. Evolutionary robotics

    Indian Academy of Sciences (India)

    In evolutionary robotics, a suitable robot control system is developed automatically through evolution due to the interactions between the robot and its environment. It is a complicated task, as the robot and the environment constitute a highly dynamical system. Several methods have been tried by various investigators to ...

  14. Robot Aesthetics

    DEFF Research Database (Denmark)

    Jochum, Elizabeth Ann; Putnam, Lance Jonathan

    This paper considers art-based research practice in robotics through a discussion of our course and relevant research projects in autonomous art. The undergraduate course integrates basic concepts of computer science, robotic art, live performance and aesthetic theory. Through practice...... in robotics research (such as aesthetics, culture and perception), we believe robot aesthetics is an important area for research in contemporary aesthetics....

  15. Filigree Robotics

    DEFF Research Database (Denmark)

    Tamke, Martin; Evers, Henrik Leander; Clausen Nørgaard, Esben

    2016-01-01

    Filigree Robotics experiments with the combination of traditional ceramic craft with robotic fabrication in order to generate a new narrative of fine three-dimensional ceramic ornament for architecture.

  16. Incremental inverse kinematics based vision servo for autonomous robotic capture of non-cooperative space debris

    Science.gov (United States)

    Dong, Gangqi; Zhu, Z. H.

    2016-04-01

    This paper proposes a new incremental inverse kinematics based vision servo approach for robotic manipulators to capture a non-cooperative target autonomously. The target's pose and motion are estimated by a vision system using an integrated photogrammetry and EKF algorithm. Based on the estimated pose and motion of the target, the instantaneous desired position of the end-effector is predicted by inverse kinematics and the robotic manipulator is moved incrementally from its current configuration subject to the joint speed limits. This approach effectively eliminates the multiple solutions in the inverse kinematics and increases the robustness of the control algorithm. The proposed approach is validated by a hardware-in-the-loop simulation, where the pose and motion of the non-cooperative target are estimated by a real vision system. The simulation results demonstrate the effectiveness and robustness of the proposed estimation approach for the target and the incremental control strategy for the robotic manipulator.
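    A minimal sketch of one incremental step in the spirit of this approach is given below; the forward-kinematics and Jacobian functions are placeholders for a specific manipulator model, and the clamping illustrates the joint-speed limits mentioned in the abstract. Because each step starts from the current configuration, the ambiguity among multiple closed-form inverse-kinematics solutions does not arise.

```python
# One incremental inverse-kinematics step with per-joint speed clamping;
# fk() and jacobian() are placeholders for a concrete manipulator model.
import numpy as np

def incremental_ik_step(q, x_desired, fk, jacobian, dt, qdot_max):
    """Move joints q a small, feasible step toward the predicted target x_desired."""
    dx = x_desired - fk(q)                  # Cartesian error this control cycle
    J = jacobian(q)                         # 6xN (or 3xN) manipulator Jacobian
    dq = np.linalg.pinv(J) @ dx             # least-squares joint increment
    dq = np.clip(dq, -qdot_max * dt, qdot_max * dt)   # enforce joint speed limits
    return q + dq
```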

  17. 75 FR 36456 - Channel America Television Network, Inc., EquiMed, Inc., Kore Holdings, Inc., Robotic Vision...

    Science.gov (United States)

    2010-06-25

    ... SECURITIES AND EXCHANGE COMMISSION [File No. 500-1] Channel America Television Network, Inc., EquiMed, Inc., Kore Holdings, Inc., Robotic Vision Systems, Inc. (n/k/a Acuity Cimatrix, Inc.), Security... accurate information concerning the securities of Robotic Vision Systems, Inc. (n/k/a Acuity Cimatrix, Inc...

  18. State of the art of robotic surgery related to vision: brain and eye applications of newly available devices

    Directory of Open Access Journals (Sweden)

    Nuzzi R

    2018-02-01

    Full Text Available Raffaele Nuzzi, Luca Brusasco Department of Surgical Sciences, Eye Clinic, University of Torino, Turin, Italy Background: Robot-assisted surgery has revolutionized many surgical subspecialties, mainly where procedures have to be performed in confined, difficult-to-visualize spaces. Despite advances in general surgery and neurosurgery, in vivo application of robotics to ocular surgery is still in its infancy, owing to the particular complexities of microsurgery. The use of robotic assistance and feedback guidance on surgical maneuvers could improve the technical performance of expert surgeons during the initial phase of the learning curve. Evidence acquisition: We analyzed the advantages and disadvantages of surgical robots, as well as the present applications and future outlook of robotics in neurosurgery in brain areas related to vision and ophthalmology. Discussion: Limitations to robotic assistance remain that need to be overcome before it can be more widely applied in ocular surgery. Conclusion: There is heightened interest in studies documenting computerized systems that filter out hand tremor and optimize speed of movement, control of force, and direction and range of movement. Further research is still needed to validate robot-assisted procedures. Keywords: robotic surgery related to vision, robots, ophthalmological applications of robotics, eye and brain robots, eye robots

  19. Development of a Vision-Based Robotic Follower Vehicle

    Science.gov (United States)

    2009-02-01


  20. An FPGA Implementation of a Robot Control System with an Integrated 3D Vision System

    Directory of Open Access Journals (Sweden)

    Yi-Ting Chen

    2015-05-01

    Full Text Available Robot decision making and motion control are commonly based on visual information in various applications. Position-based visual servo is a technique for vision-based robot control, which operates in the 3D workspace, uses real-time image processing to perform feature extraction, and returns the pose of the object for positioning control. In order to handle the computational burden of the vision sensor feedback, we design an FPGA-based motion-vision integrated system that employs dedicated hardware circuits for processing vision and motion control functions. This research conducts a preliminary study to explore the integration of 3D vision and robot motion control system design based on a single field programmable gate array (FPGA) chip. The implemented motion-vision embedded system performs the following functions: filtering, image statistics, binary morphology, binary object analysis, object 3D position calculation, robot inverse kinematics, velocity profile generation, feedback counting, and multiple-axis position feedback control.

  1. Estimation of visual maps with a robot network equipped with vision sensors.

    Science.gov (United States)

    Gil, Arturo; Reinoso, Óscar; Ballesta, Mónica; Juliá, Miguel; Payá, Luis

    2010-01-01

    In this paper we present an approach to the Simultaneous Localization and Mapping (SLAM) problem using a team of autonomous vehicles equipped with vision sensors. The SLAM problem considers the case in which a mobile robot is equipped with a particular sensor, moves along the environment, obtains measurements with its sensors and uses them to construct a model of the space where it evolves. In this paper we focus on the case where several robots, each equipped with its own sensor, are distributed in a network and view the space from different vantage points. In particular, each robot is equipped with a stereo camera that allows the robots to extract visual landmarks and obtain relative measurements to them. We propose an algorithm that uses the measurements obtained by the robots to build a single accurate map of the environment. The map is represented by the three-dimensional positions of the visual landmarks. In addition, we consider that each landmark is accompanied by a visual descriptor that encodes its visual appearance. The solution is based on a Rao-Blackwellized particle filter that estimates the paths of the robots and the positions of the visual landmarks. The validity of our proposal is demonstrated by means of experiments with a team of real robots in an office-like indoor environment.
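    A skeleton of the Rao-Blackwellized particle filter loop described above is sketched below; the motion, landmark-update and observation-likelihood functions are placeholders (in the full method each particle would maintain small per-landmark filters), so this only illustrates the weighting and resampling structure.

```python
# Skeleton of a Rao-Blackwellized particle filter step: propagate particle poses,
# update per-particle landmark estimates, reweight by observation likelihood and
# resample when the effective sample size drops. Callbacks are placeholders.
import numpy as np

def rbpf_step(particles, weights, odometry, observations,
              sample_motion, update_landmarks, observation_likelihood):
    particles = [sample_motion(p, odometry) for p in particles]
    new_weights = []
    for p, w in zip(particles, weights):
        update_landmarks(p, observations)
        new_weights.append(w * observation_likelihood(p, observations))
    new_weights = np.asarray(new_weights)
    new_weights /= new_weights.sum()
    n_eff = 1.0 / np.sum(new_weights ** 2)            # effective sample size
    if n_eff < 0.5 * len(particles):
        idx = np.random.choice(len(particles), size=len(particles), p=new_weights)
        particles = [particles[i] for i in idx]
        new_weights = np.full(len(particles), 1.0 / len(particles))
    return particles, new_weights
```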

  2. Estimation of Visual Maps with a Robot Network Equipped with Vision Sensors

    Directory of Open Access Journals (Sweden)

    Arturo Gil

    2010-05-01

    Full Text Available In this paper we present an approach to the Simultaneous Localization and Mapping (SLAM) problem using a team of autonomous vehicles equipped with vision sensors. The SLAM problem considers the case in which a mobile robot is equipped with a particular sensor, moves along the environment, obtains measurements with its sensors and uses them to construct a model of the space where it evolves. In this paper we focus on the case where several robots, each equipped with its own sensor, are distributed in a network and view the space from different vantage points. In particular, each robot is equipped with a stereo camera that allows the robots to extract visual landmarks and obtain relative measurements to them. We propose an algorithm that uses the measurements obtained by the robots to build a single accurate map of the environment. The map is represented by the three-dimensional positions of the visual landmarks. In addition, we consider that each landmark is accompanied by a visual descriptor that encodes its visual appearance. The solution is based on a Rao-Blackwellized particle filter that estimates the paths of the robots and the positions of the visual landmarks. The validity of our proposal is demonstrated by means of experiments with a team of real robots in an office-like indoor environment.

  3. 9th International Conference on Robotics, Vision, Signal Processing & Power Applications

    CERN Document Server

    Iqbal, Shahid; Teoh, Soo; Mustaffa, Mohd

    2017-01-01

    The proceedings are a collection of research papers presented at the 9th International Conference on Robotics, Vision, Signal Processing & Power Applications (ROVISP 2016) by researchers, scientists, engineers, academicians as well as industrial professionals from all around the globe, presenting their research results and development activities as oral or poster presentations. The topics of interest are as follows but are not limited to: • Robotics, Control, Mechatronics and Automation • Vision, Image, and Signal Processing • Artificial Intelligence and Computer Applications • Electronic Design and Applications • Telecommunication Systems and Applications • Power System and Industrial Applications • Engineering Education.

  4. 8th International Conference on Robotic, Vision, Signal Processing & Power Applications

    CERN Document Server

    Mustaffa, Mohd

    2014-01-01

    The proceedings are a collection of research papers presented at the 8th International Conference on Robotics, Vision, Signal Processing and Power Applications (ROVISP 2013) by researchers, scientists, engineers, academicians as well as industrial professionals from all around the globe. The topics of interest are as follows but are not limited to: • Robotics, Control, Mechatronics and Automation • Vision, Image, and Signal Processing • Artificial Intelligence and Computer Applications • Electronic Design and Applications • Telecommunication Systems and Applications • Power System and Industrial Applications

  5. Gain-scheduling control of a monocular vision-based human-following robot

    CSIR Research Space (South Africa)

    Burke, Michael G

    2011-08-01

    Full Text Available ... environment, in a passive manner, at relatively high speeds and low cost. The control of mobile robots using vision in the feedback loop falls into the well-studied field of visual servo control. Two primary approaches are used: image-based visual...

  6. Estimating Position of Mobile Robots From Omnidirectional Vision Using an Adaptive Algorithm.

    Science.gov (United States)

    Li, Luyang; Liu, Yun-Hui; Wang, Kai; Fang, Mu

    2015-08-01

    This paper presents a novel and simple adaptive algorithm for estimating the position of a mobile robot with high accuracy in an unknown and unstructured environment by fusing images of an omnidirectional vision system with measurements of odometry and inertial sensors. Based on a new derivation where the omnidirectional projection can be linearly parameterized by the positions of the robot and natural feature points, we propose a novel adaptive algorithm, which is similar to the Slotine-Li algorithm in model-based adaptive control, to estimate the robot's position by using the tracked feature points in image sequence, the robot's velocity, and orientation angles measured by odometry and inertial sensors. It is proved that the adaptive algorithm leads to global exponential convergence of the position estimation errors to zero. Simulations and real-world experiments are performed to demonstrate the performance of the proposed algorithm.

  7. Design and Development of Vision Based Blockage Clearance Robot for Sewer Pipes

    Directory of Open Access Journals (Sweden)

    Krishna Prasad Nesaian

    2012-03-01

    Full Text Available Robotic technology is one of the advanced technologies that is capable of completing tasks in situations where humans are unable to reach, see or survive. Underground sewer pipelines are the major means for the transportation of effluent water. Blockages in sewer pipes lead to overflow of effluent water and sanitation problems. Therefore, a robotic vehicle is developed that is capable of traveling underneath effluent water, detecting blockages by means of ultrasonic sensors and clearing them by means of a drilling mechanism. In addition, a wireless camera is fitted, which acts as the robot's vision and through which we can monitor video and capture images using the MATLAB tool. Thus, in this project, a prototype model of an underground sewer pipe blockage clearance robot of the drilling type will be developed

  8. Compensation for positioning error of industrial robot for flexible vision measuring system

    Science.gov (United States)

    Guo, Lei; Liang, Yajun; Song, Jincheng; Sun, Zengyu; Zhu, Jigui

    2013-01-01

    The positioning error of the robot is a main factor affecting the accuracy of a flexible coordinate measuring system consisting of a universal industrial robot and a visual sensor. Present compensation methods for positioning error based on the kinematic model of the robot have a significant limitation: they are not effective in the whole measuring space. A new compensation method for the positioning error of the robot based on vision measuring techniques is presented. One approach is setting global control points in the measured field and attaching an orientation camera to the vision sensor. The global control points are then measured by the orientation camera to calculate the transformation from the current position of the sensor system to the global coordinate system, and the positioning error of the robot is compensated. Another approach is setting control points on the vision sensor and two large-field cameras behind the sensor. The three-dimensional coordinates of the control points are then measured, and the pose and position of the sensor are calculated in real time. Experimental results show that the RMS of spatial positioning is 3.422 mm with a single camera and 0.031 mm with dual cameras. It is concluded that the algorithm of the single-camera method needs to be improved for higher accuracy, whereas the accuracy of the dual-camera method is suitable for application.

  9. Mutual Visibility by Robots with Persistent Memory

    OpenAIRE

    Bhagat, Subhash; Mukhopadhyaya, Krishnendu

    2017-01-01

    This paper addresses the mutual visibility problem for a set of semi-synchronous, opaque robots occupying distinct positions in the Euclidean plane. Since robots are opaque, if three robots lie on a line, the middle robot obstructs the vision of the two other robots. The mutual visibility problem asks the robots to coordinate their movements to form a configuration, within finite time and without collision, in which no three robots are collinear. Robots are endowed with a constant number of bits of pe...

  10. A Vision-Based Approach for Estimating Contact Forces: Applications to Robot-Assisted Surgery

    Directory of Open Access Journals (Sweden)

    C. W. Kennedy

    2005-01-01

    Full Text Available The primary goal of this paper is to provide force feedback to the user using vision-based techniques. The approach presented in this paper can be used to provide force feedback to the surgeon for robot-assisted procedures. As proof of concept, we have developed a linear elastic finite element model (FEM of a rubber membrane whereby the nodal displacements of the membrane points are measured using vision. These nodal displacements are the input into our finite element model. In the first experiment, we track the deformation of the membrane in real-time through stereovision and compare it with the actual deformation computed through forward kinematics of the robot arm. On the basis of accurate deformation estimation through vision, we test the physical model of a membrane developed through finite element techniques. The FEM model accurately reflects the interaction forces on the user console when the interaction forces of the robot arm with the membrane are compared with those experienced by the surgeon on the console through the force feedback device. In the second experiment, the PHANToM haptic interface device is used to control the Mitsubishi PA-10 robot arm and interact with the membrane in real-time. Image data obtained through vision of the deformation of the membrane is used as the displacement input for the FEM model to compute the local interaction forces which are then displayed on the user console for providing force feedback and hence closing the loop.
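    The core of the force-estimation idea is that, for a linear elastic finite-element model, the interaction forces follow directly from the vision-measured nodal displacements as f = K u; the toy stiffness matrix and displacements below are illustrative, not the membrane model from the paper.

```python
# Linear-FEM force estimation from vision-measured nodal displacements: f = K u.
# The 2-DOF stiffness matrix and displacements are toy values for illustration.
import numpy as np

def estimate_forces(K, u):
    """Nodal forces (N) from measured nodal displacements (m) under a linear FEM."""
    return K @ u

K = np.array([[1200.0, -400.0],
              [-400.0,  900.0]])     # assumed stiffness matrix (N/m)
u = np.array([0.002, -0.001])        # displacements tracked by stereo vision (m)
print(estimate_forces(K, u))
```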

  11. CRV 2008: Fifth Canadian Conference on Computerand Robot Vision, Windsor, ON, Canada, May 2008

    DEFF Research Database (Denmark)

    Fihl, Preben

    This technical report will cover the participation in the fifth Canadian Conference on Computer and Robot Vision in May 2008. The report will give a concise description of the topics presented at the conference, focusing on the work related to the HERMES project and human motion and action...

  12. A Miniature Robot for Retraction Tasks under Vision Assistance in Minimally Invasive Surgery

    Directory of Open Access Journals (Sweden)

    Giuseppe Tortora

    2014-03-01

    Full Text Available Minimally Invasive Surgery (MIS) is one of the main aims of modern medicine. It enables surgery to be performed with a lower number and severity of incisions. Medical robots have been developed worldwide to offer a robotic alternative to traditional medical procedures. New approaches aimed at a substantial decrease of visible scars have been explored, such as Natural Orifice Transluminal Endoscopic Surgery (NOTES). Simple surgical tasks such as the retraction of an organ can be a challenge when performed from narrow access ports. For this reason, there is a continuous need to develop new robotic tools for performing dedicated tasks. This article illustrates the design and testing of a new robotic tool for retraction tasks under vision assistance for NOTES. The retraction robot integrates brushless motors to enable additional degrees of freedom beyond those provided by magnetic anchoring, thus improving the dexterity of the overall platform. The retraction robot can be easily controlled to reach the target organ and apply a retraction force of up to 1.53 N. Additional degrees of freedom can be used for smooth manipulation and grasping of the organ.

  13. Robotic environments

    NARCIS (Netherlands)

    Bier, H.H.

    2011-01-01

    Technological and conceptual advances in fields such as artificial intelligence, robotics, and material science have enabled robotic architectural environments to be implemented and tested in the last decade in virtual and physical prototypes. These prototypes are incorporating sensing-actuating

  14. Dense range map reconstruction from a versatile robotic sensor system with an active trinocular vision and a passive binocular vision.

    Science.gov (United States)

    Kim, Min Young; Lee, Hyunkee; Cho, Hyungsuck

    2008-04-10

    One major research issue associated with 3D perception by robotic systems is the creation of efficient sensor systems that can generate dense range maps reliably. A visual sensor system for robotic applications is developed that is inherently equipped with two types of sensor, an active trinocular vision and a passive stereo vision. Unlike conventional active vision systems that use a large number of images with variations of projected patterns for dense range map acquisition, or conventional passive vision systems that work well only in specific environments with sufficient feature information, a cooperative bidirectional sensor fusion method for this visual sensor system enables us to acquire a reliable dense range map using active and passive information simultaneously. The fusion algorithms are composed of two parts, one in which the passive stereo vision helps active vision and the other in which the active trinocular vision helps the passive one. The first part matches the laser patterns in stereo laser images with the help of intensity images; the second part utilizes an information fusion technique using the dynamic programming method in which image regions between laser patterns are matched pixel-by-pixel with the help of the fusion results obtained in the first part. To determine how the proposed sensor system and fusion algorithms can work in real applications, the sensor system is implemented on a robotic system, and the proposed algorithms are applied. A series of experimental tests is performed for a variety of configurations of robot and environments. The performance of the sensor system is discussed in detail.

  15. Healthcare Robotics

    OpenAIRE

    Riek, Laurel D.

    2017-01-01

    Robots have the potential to be a game changer in healthcare: improving health and well-being, filling care gaps, supporting care givers, and aiding health care workers. However, before robots are able to be widely deployed, it is crucial that both the research and industrial communities work together to establish a strong evidence-base for healthcare robotics, and surmount likely adoption barriers. This article presents a broad contextualization of robots in healthcare by identifying key sta...

  16. Industrial Robots.

    Science.gov (United States)

    Reed, Dean; Harden, Thomas K.

    Robots are mechanical devices that can be programmed to perform some task of manipulation or locomotion under automatic control. This paper discusses: (1) early developments of the robotics industry in the United States; (2) the present structure of the industry; (3) noneconomic factors related to the use of robots; (4) labor considerations…

  17. Autonomous military robotics

    CERN Document Server

    Nath, Vishnu

    2014-01-01

    This SpringerBrief reveals the latest techniques in computer vision and machine learning on robots that are designed as accurate and efficient military snipers. Militaries around the world are investigating this technology to simplify the time, cost and safety measures necessary for training human snipers. These robots are developed by combining crucial aspects of computer science research areas including image processing, robotic kinematics and learning algorithms. The authors explain how a new humanoid robot, the iCub, uses high-speed cameras and computer vision algorithms to track the objec

  18. Vision-based control of robotic arm with 6 degrees of freedom

    OpenAIRE

    Versleegers, Wim

    2014-01-01

    This paper studies the procedure to program a vertically articulated robot with six degrees of freedom, the Mitsubishi Melfa RV-2SD, with Matlab. A major drawback of the programming software provided by Mitsubishi is that it barely allows the use of vision-based programming. The number of usable cameras is limited and, moreover, the cameras are very expensive. Using Matlab, these limitations could be overcome. However, there is no direct way to control the robot with Matlab. The goal of this p...

  19. Robot vision system R and D for ITER blanket remote-handling system

    International Nuclear Information System (INIS)

    Maruyama, Takahito; Aburadani, Atsushi; Takeda, Nobukazu; Kakudate, Satoshi; Nakahira, Masataka; Tesini, Alessandro

    2014-01-01

    For regular maintenance of the International Thermonuclear Experimental Reactor (ITER), a system called the ITER blanket remote-handling system is necessary to remotely handle the blanket modules because of the high levels of gamma radiation. Modules will be handled by robotic power manipulators and they must have a non-contact-sensing system for installing and grasping to avoid contact with other modules. A robot vision system that uses cameras was adopted for this non-contact-sensing system. Experiments for grasping modules were carried out in a dark room to simulate the environment inside the vacuum vessel and the robot vision system's measurement errors were studied. As a result, the accuracy of the manipulator's movements was within 2.01 mm and 0.31°, which satisfies the system requirements. Therefore, it was concluded that this robot vision system is suitable for the non-contact-sensing system of the ITER blanket remote-handling system

  20. Robot vision system R and D for ITER blanket remote-handling system

    Energy Technology Data Exchange (ETDEWEB)

    Maruyama, Takahito, E-mail: maruyama.takahito@jaea.go.jp [Japan Atomic Energy Agency, Fusion Research and Development Directorate, Naka, Ibaraki-ken 311-0193 (Japan); Aburadani, Atsushi; Takeda, Nobukazu; Kakudate, Satoshi; Nakahira, Masataka [Japan Atomic Energy Agency, Fusion Research and Development Directorate, Naka, Ibaraki-ken 311-0193 (Japan); Tesini, Alessandro [ITER Organization, Route de Vinon sur Verdon, 13115 St Paul Lez Durance (France)

    2014-10-15

    For regular maintenance of the International Thermonuclear Experimental Reactor (ITER), a system called the ITER blanket remote-handling system is necessary to remotely handle the blanket modules because of the high levels of gamma radiation. Modules will be handled by robotic power manipulators and they must have a non-contact-sensing system for installing and grasping to avoid contact with other modules. A robot vision system that uses cameras was adopted for this non-contact-sensing system. Experiments for grasping modules were carried out in a dark room to simulate the environment inside the vacuum vessel and the robot vision system's measurement errors were studied. As a result, the accuracy of the manipulator's movements was within 2.01 mm and 0.31°, which satisfies the system requirements. Therefore, it was concluded that this robot vision system is suitable for the non-contact-sensing system of the ITER blanket remote-handling system.

  1. Design of an Embedded Multi-Camera Vision System—A Case Study in Mobile Robotics

    Directory of Open Access Journals (Sweden)

    Valter Costa

    2018-02-01

    Full Text Available The purpose of this work is to explore the design principles for a Real-Time Robotic Multi-Camera Vision System, in a case study involving a real-world competition of autonomous driving. Design practices from the vision and real-time research areas are applied to a Real-Time Robotic Vision application, thus exemplifying good algorithm design practices, the advantages of employing the “zero copy one pass” methodology and the associated trade-offs leading to the selection of a controller platform. The vision tasks under study are: (i) recognition of a “flat” signal; and (ii) track following, requiring 3D reconstruction. This research first improves the algorithms used for these tasks and then selects the controller hardware. Optimization of the presented algorithms yielded improvements of 1.5 to 190 times, always with acceptable quality for the target application, with algorithm optimization being more important on lower computing power platforms. Results also include 3-cm and five-degree accuracy for lane tracking and 100% accuracy for signalling panel recognition, which are better than most results found in the literature for this application. Clear results comparing different PC platforms for the mentioned Robotic Vision tasks are also shown, demonstrating trade-offs between accuracy and computing power, leading to the proper choice of control platform. The presented design principles are portable to other applications where Real-Time constraints exist.

  2. Stereo-vision and 3D reconstruction for nuclear mobile robots

    International Nuclear Information System (INIS)

    Lecoeur-Taibi, I.; Vacherand, F.; Rivallin, P.

    1991-01-01

    In order to perceive the geometric structure of the surrounding environment of a mobile robot, a 3D reconstruction system has been developed. Its main purpose is to provide geometric information to an operator who has to telepilot the vehicle in a nuclear power plant. The perception system is split into two parts: the vision part and the map building part. Vision is enhanced with a fusion process that rejects bad samples over space and time. The vision is based on trinocular stereo-vision which provides a range image of the image contours. It performs line contour correlation on horizontal image pairs and vertical image pairs. The results are then spatially fused in order to have one distance image, with a quality independent of the orientation of the contour. The 3D reconstruction is based on grid-based sensor fusion. As the robot moves and perceives its environment, distance data is accumulated onto a regular square grid, taking into account the uncertainty of the sensor through a sensor measurement statistical model. This approach allows both spatial and temporal fusion. Uncertainty due to sensor position and robot position is also integrated into the absolute local map. This system is modular and generic and can integrate a 2D laser range finder and active vision. (author)
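
    The grid-based fusion step can be sketched as an evidence grid that accumulates each range hit with a Gaussian spread combining sensor and robot-pose uncertainty, so that repeated observations fuse over space and time. The cell size and noise figures below are illustrative assumptions, not values from the paper.

        import numpy as np

        def update_grid(grid, hit_xy, sensor_sigma=0.05, pose_sigma=0.1, cell=0.1):
            """Accumulate one range measurement into a square evidence grid.

            The detected obstacle point (hit_xy, world coordinates in metres) is
            spread over neighbouring cells with a Gaussian whose width combines
            sensor noise and robot-pose uncertainty, so repeated observations
            fuse both spatially and over time.
            """
            sigma = np.hypot(sensor_sigma, pose_sigma)            # combined std. dev. (m)
            n = grid.shape[0]
            xs = (np.arange(n) + 0.5) * cell                      # cell centre coordinates
            gx, gy = np.meshgrid(xs, xs, indexing="ij")
            d2 = (gx - hit_xy[0]) ** 2 + (gy - hit_xy[1]) ** 2
            grid += np.exp(-d2 / (2.0 * sigma ** 2))              # evidence accumulation
            return grid

        grid = np.zeros((50, 50))                                 # 5 m x 5 m map, 10 cm cells
        for k in range(10):                                       # ten noisy observations of one wall point
            noisy_hit = np.array([2.5, 3.0]) + np.random.normal(0, 0.05, 2)
            update_grid(grid, hit_xy=noisy_hit)
        print("most likely obstacle cell:", np.unravel_index(grid.argmax(), grid.shape))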

  3. Robot Mechanisms

    CERN Document Server

    Lenarcic, Jadran; Stanišić, Michael M

    2013-01-01

    This book provides a comprehensive introduction to the area of robot mechanisms, primarily considering industrial manipulators and humanoid arms. The book is intended for both teaching and self-study. Emphasis is given to the fundamentals of kinematic analysis and the design of robot mechanisms. The coverage of topics is untypical. The focus is on robot kinematics. The book creates a balance between theoretical and practical aspects in the development and application of robot mechanisms, and includes the latest achievements and trends in robot science and technology.

  4. Development of a teaching system for an industrial robot using stereo vision

    Science.gov (United States)

    Ikezawa, Kazuya; Konishi, Yasuo; Ishigaki, Hiroyuki

    1997-12-01

    The teaching and playback method is mainly a teaching technique for industrial robots. However, this technique takes time and effort in order to teach. In this study, a new teaching algorithm using stereo vision based on human demonstrations in front of two cameras is proposed. In the proposed teaching algorithm, a robot is controlled repetitively according to angles determined by the fuzzy sets theory until it reaches an instructed teaching point, which is relayed through cameras by an operator. The angles are recorded and used later in playback. The major advantage of this algorithm is that no calibrations are needed. This is because the fuzzy sets theory, which is able to express qualitatively the control commands to the robot, is used instead of conventional kinematic equations. Thus, a simple and easy teaching operation is realized with this teaching algorithm. Simulations and experiments have been performed on the proposed teaching system, and data from testing has confirmed the usefulness of our design.
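
    A minimal sketch of the calibration-free idea is given below: a handful of fuzzy rules map the signed image-plane error of the tool relative to the taught point directly to a qualitative joint-angle increment, applied repeatedly until the error vanishes. The membership functions, rule outputs and the one-joint "camera" model are invented for illustration and are not the paper's rule base.

        import numpy as np

        def tri(x, a, b, c):
            """Triangular membership function with peak at b and support [a, c]."""
            return max(min((x - a) / (b - a + 1e-9), (c - x) / (c - b + 1e-9)), 0.0)

        def fuzzy_step(error_px):
            """Map a signed image-plane error (pixels) to a joint-angle increment (deg).

            Three qualitative rules (negative / zero / positive error) are combined
            by a weighted average of their output angles; no camera calibration or
            kinematic model is used.
            """
            rules = [  # (membership of the error, output increment in degrees)
                (tri(error_px, -200, -100, 0), -2.0),
                (tri(error_px, -50, 0, 50), 0.0),
                (tri(error_px, 0, 100, 200), +2.0),
            ]
            num = sum(mu * out for mu, out in rules)
            den = sum(mu for mu, _ in rules) + 1e-9
            return num / den

        # drive a one-joint "robot" until the observed error vanishes
        angle, target_px, gain = 0.0, 120.0, 10.0               # 1 deg of joint motion ~ 10 px
        for _ in range(200):
            error = target_px - gain * angle                    # error seen by the cameras
            if abs(error) < 2.0:
                break
            angle += fuzzy_step(error)
        print(f"taught joint angle: {angle:.1f} deg")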

  5. Autonomous Kinematic Calibration of the Robot Manipulator with a Linear Laser-Vision Sensor

    Science.gov (United States)

    Kang, Hee-Jun; Jeong, Jeong-Woo; Shin, Sung-Weon; Suh, Young-Soo; Ro, Young-Schick

    This paper presents a new autonomous kinematic calibration technique using a laser-vision sensor called the "Perceptron TriCam Contour". Because the sensor measures by capturing the image of a projected laser line on the surface of an object, we set up a long, straight line of very fine string inside the robot workspace, and then allow the sensor mounted on the robot to measure the intersection point of the string and the projected laser line. The data collected by changing the robot configuration and measuring the intersection points are constrained to lie on a single straight line, so that the closed-loop calibration method can be applied. The resulting calibration method is simple and accurate and is also suitable for on-site calibration in an industrial environment. The method is implemented on a Hyundai VORG-35 robot to demonstrate its effectiveness.

  6. Robot Futures

    DEFF Research Database (Denmark)

    Christoffersen, Anja; Grindsted Nielsen, Sally; Jochum, Elizabeth Ann

    Robots are increasingly used in health care settings, e.g., as homecare assistants and personal companions. One challenge for personal robots in the home is acceptance. We describe an innovative approach to influencing the acceptance of care robots using theatrical performance. Live performance...... is a useful testbed for developing and evaluating what makes robots expressive; it is also a useful platform for designing robot behaviors and dialogue that result in believable characters. Therefore theatre is a valuable testbed for studying human-robot interaction (HRI). We investigate how audiences...... perceive social robots interacting with humans in a future care scenario through a scripted performance. We discuss our methods and initial findings, and outline future work....

  7. Robotics education

    International Nuclear Information System (INIS)

    Benton, O.

    1984-01-01

    Robotics education courses are rapidly spreading throughout the nation's colleges and universities. Engineering schools are offering robotics courses as part of their mechanical or manufacturing engineering degree program. Two year colleges are developing an Associate Degree in robotics. In addition to regular courses, colleges are offering seminars in robotics and related fields. These seminars draw excellent participation at costs running up to $200 per day for each participant. The last one drew 275 people from Texas to Virginia. Seminars are also offered by trade associations, private consulting firms, and robot vendors. IBM, for example, has the Robotic Assembly Institute in Boca Raton and charges about $1,000 per week for course. This is basically for owners of IBM robots. Education (and training) can be as short as one day or as long as two years. Here is the educational pattern that is developing now

  8. A Collaborative Approach for Surface Inspection Using Aerial Robots and Computer Vision

    Directory of Open Access Journals (Sweden)

    Martin Molina

    2018-03-01

    Full Text Available Aerial robots with cameras on board can be used in surface inspection to observe areas that are difficult to reach by other means. In this type of problem, it is desirable for aerial robots to have a high degree of autonomy. A way to provide more autonomy would be to use computer vision techniques to automatically detect anomalies on the surface. However, the performance of automated visual recognition methods is limited in uncontrolled environments, so that in practice it is not possible to perform a fully automatic inspection. This paper presents a solution for visual inspection that increases the degree of autonomy of aerial robots following a semi-automatic approach. The solution is based on human-robot collaboration in which the operator delegates tasks to the drone for exploration and visual recognition and the drone requests assistance in the presence of uncertainty. We validate this proposal with the development of an experimental robotic system using the software framework Aerostack. The paper describes the technical challenges that we had to solve to develop such a system and the impact of this solution on the degree of autonomy in detecting anomalies on the surface.

  9. Examples of design and achievement of vision systems for mobile robotics applications

    Science.gov (United States)

    Bonnin, Patrick J.; Cabaret, Laurent; Raulet, Ludovic; Hugel, Vincent; Blazevic, Pierre; M'Sirdi, Nacer K.; Coiffet, Philippe

    2000-10-01

    Our goal is to design and achieve a multiple-purpose vision system for various robotics applications: wheeled robots (such as cars for autonomous driving), legged robots (six-legged, four-legged such as SONY's AIBO, and humanoid), and flying robots (to inspect bridges, for example) in various conditions: indoor or outdoor. Considering that the constraints depend on the application, we propose an edge segmentation implemented either in software or in hardware using CPLDs (ASICs or FPGAs could be used too). After discussing the criteria of our choice, we propose a chain of image processing operators constituting an edge segmentation. Although this chain is quite simple and very fast to perform, the results appear satisfactory. We propose a software implementation of it. Its temporal optimization is based on: its implementation under the pixel data-flow programming model, the gathering of local processing where possible, the simplification of computations, and the use of fast-access data structures. Then, we describe a first dedicated hardware implementation of the first part, which requires 9 CPLDs in this low-cost version. It is technically possible, but more expensive, to implement these algorithms using only a single FPGA.
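
    The software side of such a chain of local operators can be sketched as follows: smoothing, Sobel gradients and magnitude thresholding, each expressed as array operations that could map onto a pixel data-flow implementation. The operators and the threshold value are standard stand-ins; the paper's exact chain is not reproduced here.

        import numpy as np

        def edge_segmentation(img, thresh=60):
            """A minimal software edge-segmentation chain: 3x3 smoothing, Sobel
            gradients and magnitude thresholding, all written as whole-array
            operations (one pass per operator over the image)."""
            img = img.astype(np.float32)
            # 3x3 box smoothing (local operator)
            pad = np.pad(img, 1, mode="edge")
            smooth = sum(pad[i:i + img.shape[0], j:j + img.shape[1]]
                         for i in range(3) for j in range(3)) / 9.0
            # Sobel gradients expressed with shifted views (no explicit loops)
            p = np.pad(smooth, 1, mode="edge")
            gx = (p[0:-2, 2:] + 2 * p[1:-1, 2:] + p[2:, 2:]
                  - p[0:-2, :-2] - 2 * p[1:-1, :-2] - p[2:, :-2])
            gy = (p[2:, 0:-2] + 2 * p[2:, 1:-1] + p[2:, 2:]
                  - p[:-2, 0:-2] - 2 * p[:-2, 1:-1] - p[:-2, 2:])
            mag = np.hypot(gx, gy)
            return (mag > thresh).astype(np.uint8)               # binary edge map

        # synthetic test image: a bright square on a dark background
        img = np.zeros((64, 64), dtype=np.uint8)
        img[16:48, 16:48] = 200
        edges = edge_segmentation(img)
        print("edge pixels found:", int(edges.sum()))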

  10. Deviation from Trajectory Detection in Vision based Robotic Navigation using SURF and Subsequent Restoration by Dynamic Auto Correction Algorithm

    Directory of Open Access Journals (Sweden)

    Ray Debraj

    2015-01-01

    Full Text Available Speeded Up Robust Features (SURF) is used to position a robot with respect to its environment and to aid vision-based robotic navigation. During the course of navigation, irregularities in the terrain, especially in an outdoor environment, may deviate the robot from its track. Another reason for deviation can be unequal speeds of the left and right robot wheels. Hence it is essential to detect such deviations and perform corrective operations to bring the robot back on track. In this paper we propose a novel algorithm that uses image matching with SURF to detect deviation of the robot from its trajectory and performs subsequent restoration by corrective operations. This algorithm is executed in parallel with the positioning and navigation algorithms by distributing tasks among different CPU cores using the Open Multi-Processing (OpenMP) API.
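
    A rough Python sketch of the deviation-detection step is shown below. It matches local features between a stored on-track reference frame and the current frame and takes the median horizontal keypoint displacement as the deviation; ORB is used instead of SURF because SURF lives in OpenCV's non-free contrib module. The correction gain, wheel-command convention and synthetic test images are assumptions for illustration, not the paper's parameters.

        import numpy as np
        import cv2

        def lateral_deviation(reference, current, max_matches=50):
            """Estimate how far the robot has drifted sideways by matching local
            features between a reference (on-track) frame and the current frame.
            The median horizontal displacement of the matched keypoints is taken
            as the deviation in pixels."""
            orb = cv2.ORB_create(nfeatures=500)
            k1, d1 = orb.detectAndCompute(reference, None)
            k2, d2 = orb.detectAndCompute(current, None)
            if d1 is None or d2 is None:
                return 0.0
            matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
            matches = sorted(matcher.match(d1, d2), key=lambda m: m.distance)[:max_matches]
            if not matches:
                return 0.0
            dx = [k2[m.trainIdx].pt[0] - k1[m.queryIdx].pt[0] for m in matches]
            return float(np.median(dx))

        def correction_command(deviation_px, gain=0.002):
            """Turn the pixel deviation into a differential wheel-speed command
            (hypothetical convention: positive means speed up the left wheel)."""
            return np.clip(gain * deviation_px, -0.2, 0.2)

        # synthetic check: the "current" view is the reference shifted 12 px to the right
        reference = np.random.randint(0, 255, (240, 320), dtype=np.uint8)
        current = np.roll(reference, 12, axis=1)
        dev = lateral_deviation(reference, current)
        print(f"estimated deviation: {dev:.1f} px, command: {correction_command(dev):+.3f}")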

  11. Robust tracking of dexterous continuum robots: Fusing FBG shape sensing and stereo vision.

    Science.gov (United States)

    Rumei Zhang; Hao Liu; Jianda Han

    2017-07-01

    Robust and efficient tracking of continuum robots is important for improving patient safety during space-confined minimally invasive surgery; however, it has been a particularly challenging task for researchers. In this paper, we present a novel tracking scheme that fuses fiber Bragg grating (FBG) shape sensing and stereo vision to estimate the position of continuum robots. Previous visual tracking easily suffers from a lack of robustness and leads to failure, while the FBG shape sensor can only reconstruct the local shape with integral cumulative error. The proposed fusion is anticipated to compensate for their shortcomings and improve the tracking accuracy. To verify its effectiveness, the robot's centerline is recognized by morphological operations and reconstructed by a stereo matching algorithm. The shape obtained by the FBG sensor is transformed into the distal tip position with respect to the camera coordinate system through previously calibrated registration matrices. An experimental platform was set up and repeated tracking experiments were carried out. The accuracy, estimated by averaging the absolute positioning errors between shape sensing and stereo vision, is 0.67±0.65 mm, 0.41±0.25 mm, and 0.72±0.43 mm for x, y and z, respectively. The results indicate that the proposed fusion is feasible and can be used for closed-loop control of continuum robots.
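
    The fusion step can be sketched as follows: the FBG-derived tip position is mapped into the camera frame through a calibrated registration matrix and then combined with the stereo estimate by a covariance-weighted average. The registration matrix, covariances and positions below are made-up numbers, and the paper does not specify this particular fusion rule; it is only one plausible way to combine the two estimates.

        import numpy as np

        def fuse_tip_estimates(p_fbg_sensor, T_cam_from_fbg, p_stereo_cam,
                               cov_fbg, cov_stereo):
            """Fuse the continuum-robot tip position reconstructed from FBG shape
            sensing with the tip position triangulated by stereo vision.

            The FBG estimate (given in the shape-sensor frame) is first mapped into
            the camera frame through a 4x4 registration matrix, then the two
            estimates are combined with an information-form weighted average.
            """
            p_h = np.append(p_fbg_sensor, 1.0)                   # homogeneous coordinates
            p_fbg_cam = (T_cam_from_fbg @ p_h)[:3]
            W1, W2 = np.linalg.inv(cov_fbg), np.linalg.inv(cov_stereo)
            fused_cov = np.linalg.inv(W1 + W2)
            fused = fused_cov @ (W1 @ p_fbg_cam + W2 @ p_stereo_cam)
            return fused, fused_cov

        T = np.eye(4)
        T[:3, 3] = [10.0, -5.0, 250.0]                           # hypothetical registration (mm)
        p_fbg = np.array([1.2, 0.4, 30.0])                       # tip from FBG, sensor frame (mm)
        p_stereo = np.array([11.5, -4.5, 280.5])                 # tip from stereo, camera frame (mm)
        fused, _ = fuse_tip_estimates(p_fbg, T, p_stereo,
                                      cov_fbg=np.diag([0.2, 0.2, 1.0]),
                                      cov_stereo=np.diag([0.5, 0.5, 0.5]))
        print("fused tip position (camera frame, mm):", np.round(fused, 2))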

  12. Robotics Vision-based Heuristic Reasoning for Underwater Target Tracking and Navigation

    Directory of Open Access Journals (Sweden)

    Chua Kia

    2005-09-01

    Full Text Available This paper presents a robotics vision-based heuristic reasoning system for underwater target tracking and navigation. This system is introduced to improve the level of automation of underwater Remote Operated Vehicles (ROVs) operations. A prototype which combines computer vision with an underwater robotics system is successfully designed and developed to perform target tracking and intelligent navigation. This study focuses on developing image processing algorithms and a fuzzy inference system for the analysis of the terrain. The vision system developed is capable of interpreting the underwater scene by extracting subjective uncertainties of the object of interest. Subjective uncertainties are further processed as multiple inputs of a fuzzy inference system that is capable of making crisp decisions concerning where to navigate. The important part of the image analysis is morphological filtering. The applications focus on binary images with the extension of gray-level concepts. An open-loop fuzzy control system is developed for classifying the traverse of terrain. The great achievement is the system's capability to recognize and perform target tracking of the object of interest (a pipeline) in perspective view based on the perceived condition. The effectiveness of this approach is demonstrated by computer and prototype simulations. This work originated from the desire to develop a robotics vision system with the ability to mimic the human expert's judgement and reasoning when maneuvering an ROV in the traverse of the underwater terrain.

  13. Robotics Vision-based Heuristic Reasoning for Underwater Target Tracking and Navigation

    Directory of Open Access Journals (Sweden)

    Chua Kia

    2008-11-01

    Full Text Available This paper presents a robotics vision-based heuristic reasoning system for underwater target tracking and navigation. This system is introduced to improve the level of automation of underwater Remote Operated Vehicles (ROVs) operations. A prototype which combines computer vision with an underwater robotics system is successfully designed and developed to perform target tracking and intelligent navigation. This study focuses on developing image processing algorithms and a fuzzy inference system for the analysis of the terrain. The vision system developed is capable of interpreting the underwater scene by extracting subjective uncertainties of the object of interest. Subjective uncertainties are further processed as multiple inputs of a fuzzy inference system that is capable of making crisp decisions concerning where to navigate. The important part of the image analysis is morphological filtering. The applications focus on binary images with the extension of gray-level concepts. An open-loop fuzzy control system is developed for classifying the traverse of terrain. The great achievement is the system's capability to recognize and perform target tracking of the object of interest (a pipeline) in perspective view based on the perceived condition. The effectiveness of this approach is demonstrated by computer and prototype simulations. This work originated from the desire to develop a robotics vision system with the ability to mimic the human expert's judgement and reasoning when maneuvering an ROV in the traverse of the underwater terrain.

  14. Robots and lattice automata

    CERN Document Server

    Adamatzky, Andrew

    2015-01-01

    The book gives a comprehensive overview of the state-of-the-art research and engineering in the theory and application of Lattice Automata in the design and control of autonomous robots. Automata and robots share the same notional meaning. Automata (from the Latinization of the Greek word “αυτόματον”), self-operating autonomous machines invented in antiquity, can easily be considered the first steps of robotic-like efforts. Automata are mathematical models of robots and are also integral parts of robotic control systems. A Lattice Automaton is a regular array or collective of finite state machines, or automata. The automata update their states by the same rules, depending on the states of their immediate neighbours. In the context of this book, Lattice Automata are used in developing modular reconfigurable robotic systems, path planning and map exploration for robots, robot controllers, synchronisation of robot collectives, robot vision, and parallel robotic actuators. All chapters are...

  15. Robotic buildings(s)

    NARCIS (Netherlands)

    Bier, H.H.

    2014-01-01

    Technological and conceptual advances in fields such as artificial intelligence, robotics, and material science have enabled robotic building to be prototypically implemented in the last decade. In this context, robotic building implies both physically built robotic environments and robotically

  16. Computer vision system R&D for EAST Articulated Maintenance Arm robot

    Energy Technology Data Exchange (ETDEWEB)

    Lin, Linglong, E-mail: linglonglin@ipp.ac.cn; Song, Yuntao, E-mail: songyt@ipp.ac.cn; Yang, Yang, E-mail: yangy@ipp.ac.cn; Feng, Hansheng, E-mail: hsfeng@ipp.ac.cn; Cheng, Yong, E-mail: chengyong@ipp.ac.cn; Pan, Hongtao, E-mail: panht@ipp.ac.cn

    2015-11-15

    Highlights: • We discuss the image preprocessing, object detection and pose estimation algorithms under the poor lighting conditions of the inner vessel of the EAST tokamak. • The main pipeline, including contour detection, contour filtering, MER extraction, object location and pose estimation, is described in detail. • The technical issues encountered during the research are discussed. - Abstract: The Experimental Advanced Superconducting Tokamak (EAST) is the first fully superconducting tokamak device; it was constructed at the Institute of Plasma Physics, Chinese Academy of Sciences (ASIPP). The EAST Articulated Maintenance Arm (EAMA) robot provides the means of in-vessel maintenance such as inspection and picking up fragments of the first wall. This paper presents a method to identify and locate the fragments semi-automatically by using computer vision. The use of computer vision for identification and location faces difficult challenges such as shadows, poor contrast, low illumination levels, little texture and so on. The method developed in this paper enables credible identification of objects with shadows through an invariant image and edge detection. The proposed algorithms are validated through our ASIPP robotics and computer vision platform (ARVP). The results show that the method can provide a 3D pose with reference to the robot base so that objects with different shapes and sizes can be picked up successfully.
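
    The contour-based part of such a pipeline (contour detection, contour filtering, MER extraction, object location) can be sketched with standard OpenCV calls, as below. Otsu thresholding stands in for the paper's illumination-invariant preprocessing, and only the 2D location and orientation are computed; the 3D pose with respect to the robot base would additionally require depth information and hand-eye calibration, which are not shown here.

        import numpy as np
        import cv2

        def locate_fragments(gray, min_area=150):
            """Threshold the image, find external contours, filter them by area,
            and extract each remaining contour's minimum enclosing rectangle (MER),
            giving an image-plane position and orientation per candidate object."""
            _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
            # [-2] keeps this working across OpenCV 3.x/4.x return conventions
            contours = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)[-2]
            detections = []
            for c in contours:
                if cv2.contourArea(c) < min_area:                # contour filter
                    continue
                (cx, cy), (w, h), angle = cv2.minAreaRect(c)     # MER: centre, size, angle
                detections.append({"centre_px": (cx, cy), "size_px": (w, h), "angle_deg": angle})
            return detections

        # synthetic in-vessel image: a dark background with one bright, tilted fragment
        img = np.zeros((240, 320), dtype=np.uint8)
        box = cv2.boxPoints(((160, 120), (80, 30), 25.0)).astype(np.int32)
        cv2.fillPoly(img, [box], 255)
        for d in locate_fragments(img):
            print(d)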

  17. Soft Robotics.

    Science.gov (United States)

    Whitesides, George M

    2018-04-09

    This description of "soft robotics" is not intended to be a conventional review, in the sense of a comprehensive technical summary of a developing field. Rather, its objective is to describe soft robotics as a new field-one that offers opportunities to chemists and materials scientists who like to make "things" and to work with macroscopic objects that move and exert force. It will give one (personal) view of what soft actuators and robots are, and how this class of soft devices fits into the more highly developed field of conventional "hard" robotics. It will also suggest how and why soft robotics is more than simply a minor technical "tweak" on hard robotics and propose a unique role for chemistry, and materials science, in this field. Soft robotics is, at its core, intellectually and technologically different from hard robotics, both because it has different objectives and uses and because it relies on the properties of materials to assume many of the roles played by sensors, actuators, and controllers in hard robotics. © 2018 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.

  18. Research into the Architecture of CAD Based Robot Vision Systems

    Science.gov (United States)

    1988-02-09

    Vision '86 and "Automatic Generation of Recognition Features for Computer Vision," Mudge, Turney and Volz, published in Robotica (1987). All of the..."Occluded Parts," (T.N. Mudge, J.L. Turney, and R.A. Volz), Robotica, vol. 5, 1987, pp. 117-127. 5. "Vision Algorithms for Hypercube Machines," (T.N. Mudge

  19. Grasping in Robotics

    CERN Document Server

    2013-01-01

    Grasping in Robotics contains original contributions in the field of grasping in robotics with a broad multidisciplinary approach. This gives the possibility of addressing all the major issues related to robotized grasping, including milestones in grasping through the centuries, mechanical design issues, control issues, modelling achievements and issues, formulations and software for simulation purposes, sensors and vision integration, applications in industrial field and non-conventional applications (including service robotics and agriculture).   The contributors to this book are experts in their own diverse and wide ranging fields. This multidisciplinary approach can help make Grasping in Robotics of interest to a very wide audience. In particular, it can be a useful reference book for researchers, students and users in the wide field of grasping in robotics from many different disciplines including mechanical design, hardware design, control design, user interfaces, modelling, simulation, sensors and hum...

  20. VisGraB: A Benchmark for Vision-Based Grasping. Paladyn Journal of Behavioral Robotics

    DEFF Research Database (Denmark)

    Kootstra, Gert; Popovic, Mila; Jørgensen, Jimmy Alison

    2012-01-01

    We present a database and a software tool, VisGraB, for benchmarking of methods for vision-based grasping of unknown objects with no prior object knowledge. The benchmark is a combined real-world and simulated experimental setup. Stereo images of real scenes containing several objects in different...... that a large number of grasps can be executed and evaluated while dealing with dynamics and the noise and uncertainty present in the real world images. VisGraB enables a fair comparison among different grasping methods. The user furthermore does not need to deal with robot hardware, focusing on the vision...

  1. A Novel Identification Methodology for the Coordinate Relationship between a 3D Vision System and a Legged Robot

    Directory of Open Access Journals (Sweden)

    Xun Chai

    2015-04-01

    Full Text Available Coordinate identification between vision systems and robots is quite a challenging issue in the field of intelligent robotic applications, involving steps such as perceiving the immediate environment, building the terrain map and planning the locomotion automatically. It is now well established that current identification methods have non-negligible limitations such as difficult feature matching, the requirement of external tools and the intervention of multiple people. In this paper, we propose a novel methodology to identify the geometric parameters of 3D vision systems mounted on robots without involving other people or additional equipment. In particular, our method focuses on legged robots, which have complex body structures and excellent locomotion ability compared to their wheeled/tracked counterparts. The parameters can be identified simply by moving the robot on relatively flat ground. Concretely, an estimation approach is provided to calculate the ground plane. In addition, the relationship between the robot and the ground is modeled. The parameters are obtained by formulating the identification problem as an optimization problem. The methodology is integrated on a legged robot called “Octopus”, which can traverse rough terrain with high stability after obtaining the identification parameters of its mounted vision system using the proposed method. Diverse experiments in different environments demonstrate that our novel method is accurate and robust.

  2. A Novel Identification Methodology for the Coordinate Relationship between a 3D Vision System and a Legged Robot.

    Science.gov (United States)

    Chai, Xun; Gao, Feng; Pan, Yang; Qi, Chenkun; Xu, Yilin

    2015-04-22

    Coordinate identification between vision systems and robots is quite a challenging issue in the field of intelligent robotic applications, involving steps such as perceiving the immediate environment, building the terrain map and planning the locomotion automatically. It is now well established that current identification methods have non-negligible limitations such as difficult feature matching, the requirement of external tools and the intervention of multiple people. In this paper, we propose a novel methodology to identify the geometric parameters of 3D vision systems mounted on robots without involving other people or additional equipment. In particular, our method focuses on legged robots, which have complex body structures and excellent locomotion ability compared to their wheeled/tracked counterparts. The parameters can be identified simply by moving the robot on relatively flat ground. Concretely, an estimation approach is provided to calculate the ground plane. In addition, the relationship between the robot and the ground is modeled. The parameters are obtained by formulating the identification problem as an optimization problem. The methodology is integrated on a legged robot called "Octopus", which can traverse rough terrain with high stability after obtaining the identification parameters of its mounted vision system using the proposed method. Diverse experiments in different environments demonstrate that our novel method is accurate and robust.
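
    Part of the identification can be illustrated with a short sketch: depth points of the floor observed while the robot stands on relatively flat ground are fitted with a plane (centroid plus SVD), from which a camera height and tilt follow in closed form. The full method formulates the identification as an optimization over all geometric parameters; the simulated camera pose and noise below are assumptions for the example only.

        import numpy as np

        def fit_plane(points):
            """Least-squares plane fit (centroid + SVD).  Returns a unit normal
            and the centroid; `points` is an (N, 3) array in the camera frame."""
            centroid = points.mean(axis=0)
            _, _, vt = np.linalg.svd(points - centroid)
            normal = vt[-1]
            if normal[1] < 0:                                    # make the normal point "up" in camera y
                normal = -normal
            return normal, centroid

        def camera_height_and_tilt(points):
            """Recover two vision-system parameters from the fitted ground plane:
            the camera height above the ground and its tilt with respect to the
            ground normal (closed-form part only, assuming a flat floor)."""
            normal, centroid = fit_plane(points)
            height = abs(np.dot(normal, centroid))               # distance of camera origin to plane
            tilt = np.degrees(np.arccos(np.clip(abs(normal[1]), -1.0, 1.0)))
            return height, tilt

        # simulated depth points of a flat floor seen by a camera 0.6 m high, tilted 20 deg
        rng = np.random.default_rng(0)
        xz = rng.uniform([-1.0, 0.5], [1.0, 3.0], size=(500, 2))      # floor coordinates (m)
        floor = np.column_stack([xz[:, 0], np.zeros(500), xz[:, 1]])  # y = 0 is the ground
        tilt_rad = np.radians(20.0)
        R = np.array([[1, 0, 0],
                      [0, np.cos(tilt_rad), -np.sin(tilt_rad)],
                      [0, np.sin(tilt_rad),  np.cos(tilt_rad)]])
        cam_points = (floor - np.array([0.0, 0.6, 0.0])) @ R.T        # world -> camera (hypothetical pose)
        cam_points += rng.normal(0, 0.005, cam_points.shape)          # depth noise
        print("estimated height (m), tilt (deg):", np.round(camera_height_and_tilt(cam_points), 2))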

  3. Vision-based obstacle recognition system for automated lawn mower robot development

    Science.gov (United States)

    Mohd Zin, Zalhan; Ibrahim, Ratnawati

    2011-06-01

    Digital image processing techniques (DIP) have been widely used in various types of applications recently. Classification and recognition of a specific object using a vision system require some challenging tasks in the field of image processing and artificial intelligence. The ability and efficiency of a vision system to capture and process images is very important for any intelligent system such as an autonomous robot. This paper gives attention to the development of a vision system that could contribute to the development of an automated vision-based lawn mower robot. The work involves the implementation of DIP techniques to detect and recognize three different types of obstacles that usually exist on a football field. The focus was given to the study of different types and sizes of obstacles, the development of a vision-based obstacle recognition system and the evaluation of the system's performance. Image processing techniques such as image filtering, segmentation, enhancement and edge detection have been applied in the system. The results have shown that the developed system is able to detect and recognize various types of obstacles on a football field with a recognition rate of more than 80%.

  4. Robotics 101

    Science.gov (United States)

    Sultan, Alan

    2011-01-01

    Robots are used in all kinds of industrial settings. They are used to rivet bolts to cars, to move items from one conveyor belt to another, to gather information from other planets, and even to perform some very delicate types of surgery. Anyone who has watched a robot perform its tasks cannot help but be impressed by how it works. This article…

  5. Vitruvian Robot

    DEFF Research Database (Denmark)

    Hasse, Cathrine

    2017-01-01

    future. A real version of Ava would not last long in a human world because she is basically a solipsist, who does not really care about humans. She cannot co-create the line humans walk along. The robots created as ‘perfect women’ (sex robots) today are very far from the ideal image of Ava...

  6. Endoscopic vision-based tracking of multiple surgical instruments during robot-assisted surgery.

    Science.gov (United States)

    Ryu, Jiwon; Choi, Jaesoon; Kim, Hee Chan

    2013-01-01

    Robot-assisted minimally invasive surgery is effective for operations in limited space. Enhancing safety based on automatic tracking of surgical instrument position to prevent inadvertent harmful events such as tissue perforation or instrument collisions could be a meaningful augmentation to current robotic surgical systems. A vision-based instrument tracking scheme as a core algorithm to implement such functions was developed in this study. An automatic tracking scheme is proposed as a chain of computer vision techniques, including classification of metallic properties using k-means clustering and instrument movement tracking using similarity measures, Euclidean distance calculations, and a Kalman filter algorithm. The implemented system showed satisfactory performance in tests using actual robot-assisted surgery videos. Trajectory comparisons of automatically detected data and ground truth data obtained by manually locating the center of mass of each instrument were used to quantitatively validate the system. Instruments and collisions could be well tracked through the proposed methods. The developed collision warning system could provide valuable information to clinicians for safer procedures. © 2012, Copyright the Authors. Artificial Organs © 2012, International Center for Artificial Organs and Transplantation and Wiley Periodicals, Inc.
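
    The filtering part of such a scheme can be sketched with a constant-velocity Kalman filter on the instrument-tip image coordinates, as below. The k-means-based metallic classification and the similarity-based association named in the abstract are omitted, and the noise settings are arbitrary placeholders rather than values from the paper.

        import numpy as np

        class InstrumentTrack:
            """Constant-velocity Kalman filter for one surgical-instrument tip in
            image coordinates.  State: [x, y, vx, vy]; measurement: [x, y] in pixels."""

            def __init__(self, xy, dt=1.0):
                self.x = np.array([xy[0], xy[1], 0.0, 0.0])
                self.P = np.eye(4) * 100.0
                self.F = np.array([[1, 0, dt, 0], [0, 1, 0, dt],
                                   [0, 0, 1, 0], [0, 0, 0, 1]], float)
                self.H = np.array([[1, 0, 0, 0], [0, 1, 0, 0]], float)
                self.Q = np.eye(4) * 0.5
                self.R = np.eye(2) * 4.0

            def predict(self):
                self.x = self.F @ self.x
                self.P = self.F @ self.P @ self.F.T + self.Q
                return self.x[:2]

            def update(self, z):
                y = np.asarray(z, float) - self.H @ self.x
                S = self.H @ self.P @ self.H.T + self.R
                K = self.P @ self.H.T @ np.linalg.inv(S)
                self.x = self.x + K @ y
                self.P = (np.eye(4) - K @ self.H) @ self.P
                return self.x[:2]

        # track a tip moving diagonally at ~3 px/frame with noisy detections
        rng = np.random.default_rng(1)
        track = InstrumentTrack((50.0, 60.0))
        for t in range(1, 30):
            detection = np.array([50 + 3 * t, 60 + 3 * t]) + rng.normal(0, 2, 2)
            track.predict()
            track.update(detection)
        print("filtered tip position:", np.round(track.x[:2], 1),
              "velocity:", np.round(track.x[2:], 2))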

  7. Robot Teachers

    DEFF Research Database (Denmark)

    Nørgård, Rikke Toft; Ess, Charles Melvin; Bhroin, Niamh Ni

    The world's first robot teacher, Saya, was introduced to a classroom in Japan in 2009. Saya, had the appearance of a young female teacher. She could express six basic emotions, take the register and shout orders like 'be quiet' (The Guardian, 2009). Since 2009, humanoid robot technologies have...... developed. It is now suggested that robot teachers may become regular features in educational settings, and may even 'take over' from human teachers in ten to fifteen years (cf. Amundsen, 2017 online; Gohd, 2017 online). Designed to look and act like a particular kind of human; robot teachers mediate human...... existence and roles, while also aiming to support education through sophisticated, automated, human-like interaction. Our paper explores the design and existential implications of ARTIE, a robot teacher at Oxford Brookes University (2017, online). Drawing on an initial empirical exploration we propose...

  8. Social Robots

    DEFF Research Database (Denmark)

    Social robotics is a cutting-edge research area gathering researchers and stakeholders from various disciplines and organizations. The transformational potential that these machines, in the form of, for example, caregiving, entertainment or partner robots, pose to our societies and to us as individuals seems to be limited by our technical limitations and fantasy alone. This collection contributes to the field of social robotics by exploring its boundaries from a philosophically informed standpoint. It constructively outlines central potentials and challenges and thereby also provides a stable...

  9. Robotic seeding

    DEFF Research Database (Denmark)

    Pedersen, Søren Marcus; Fountas, Spyros; Sørensen, Claus Aage Grøn

    2017-01-01

    Agricultural robotics has received attention for approximately 20 years, but today there are only a few examples of the application of robots in agricultural practice. The lack of uptake may be (at least partly) because in many cases there is either no compelling economic benefit......, or there is a benefit but it is not recognized. The aim of this chapter is to quantify the economic benefits from the application of agricultural robots under a specific condition where such a benefit is assumed to exist, namely the case of early seeding and re-seeding in sugar beet. With some predefined assumptions...... with regard to speed, capacity and seed mapping, we found that among these two technical systems both early seeding with a small robot and re-seeding using a robot for a smaller part of the field appear to be financially viable solutions in sugar beet production....

  10. Micro intelligence robot

    International Nuclear Information System (INIS)

    Jeon, Yon Ho

    1991-07-01

    This book describes micro robots, covering the concept of robots and micro robots, the match rules of micro robot conferences, maze-search methods, and the future and prospects of robots. It also explains the making and design of an 8-bit robot, including making techniques, software, the sensor board circuit, the stepping motor catalog, Speedy 3, and Mr. Black and Mr. White, as well as the making and design of 16-bit robots, such as the micro robot artist and Jerry 2, and the "magic art of shortening distances" algorithm for robot simulation.

  11. An Intelligent Robot Programing

    Energy Technology Data Exchange (ETDEWEB)

    Hong, Seong Yong

    2012-01-15

    This book introduces intelligent robot programing, covering background on the beginning, an introduction to VPL and SPL, building an environment for the robot platform, starting robot programing, design of the simulation environment, robot autonomous drive control programing, and simulation graphics, including SPL graphic programing (graphical images, graphical shapes and graphical method application), application of procedures for robot control, robot multiprogramming, robot bumper sensor programing, robot LRF sensor programing and robot color sensor programing.

  12. An Intelligent Robot Programing

    International Nuclear Information System (INIS)

    Hong, Seong Yong

    2012-01-01

    This book introduces intelligent robot programing, covering background on the beginning, an introduction to VPL and SPL, building an environment for the robot platform, starting robot programing, design of the simulation environment, robot autonomous drive control programing, and simulation graphics, including SPL graphic programing (graphical images, graphical shapes and graphical method application), application of procedures for robot control, robot multiprogramming, robot bumper sensor programing, robot LRF sensor programing and robot color sensor programing.

  13. Humanlike Robots - The Upcoming Revolution in Robotics

    Science.gov (United States)

    Bar-Cohen, Yoseph

    2009-01-01

    Humans have always sought to imitate the human appearance, functions and intelligence. Human-like robots, which for many years have been science fiction, are increasingly becoming an engineering reality resulting from the many advances in biologically inspired technologies. These biomimetic technologies include artificial intelligence, artificial vision and hearing, as well as artificial muscles, also known as electroactive polymers (EAP). Robots that do not have a human shape, such as the Roomba vacuum cleaner and the robotic lawnmower, are already finding growing use in homes worldwide. As opposed to other human-made machines and devices, this technology also raises various questions and concerns, and they need to be addressed as the technology advances. These include the need to prevent accidents, deliberate harm, or their use in crime. In this paper the state-of-the-art of the ultimate goal of biomimetics, the development of humanlike robots, and the potentials and the challenges are reviewed.

  14. Humanlike robots: the upcoming revolution in robotics

    Science.gov (United States)

    Bar-Cohen, Yoseph

    2009-08-01

    Humans have always sought to imitate the human appearance, functions and intelligence. Human-like robots, which for many years have been science fiction, are increasingly becoming an engineering reality resulting from the many advances in biologically inspired technologies. These biomimetic technologies include artificial intelligence, artificial vision and hearing, as well as artificial muscles, also known as electroactive polymers (EAP). Robots that do not have a human shape, such as the Roomba vacuum cleaner and the robotic lawnmower, are already finding growing use in homes worldwide. As opposed to other human-made machines and devices, this technology also raises various questions and concerns, and they need to be addressed as the technology advances. These include the need to prevent accidents, deliberate harm, or their use in crime. In this paper the state-of-the-art of the ultimate goal of biomimetics, the development of humanlike robots, and the potentials and the challenges are reviewed.

  15. Range-Image Acquisition for Discriminated Objects in a Range-gated Robot Vision System

    Energy Technology Data Exchange (ETDEWEB)

    Park, Seung-Kyu; Ahn, Yong-Jin; Park, Nak-Kyu; Baik, Sung-Hoon; Choi, Young-Soo; Jeong, Kyung-Min [KAERI, Daejeon (Korea, Republic of)

    2015-05-15

    The imaging capability of a surveillance vision system in harsh low-visibility environments, such as fire and detonation areas, is a key function for monitoring the safety of facilities. 2D and range image data acquired from a low-visibility environment are important data for assessing safety and preparing appropriate countermeasures. Passive vision systems, such as conventional cameras and binocular stereo vision systems, usually cannot acquire image information when the reflected light is highly scattered and absorbed by airborne particles such as fog. In addition, the image resolution captured through low-density airborne particles is decreased because the image is blurred and dimmed by scattering, emission and absorption. Active vision systems, such as structured light vision and projected stereo vision, are usually more robust in harsh environments than passive vision systems. However, their performance decreases considerably in proportion to the density of the particles. The RGI system provides 2D and range image data from several RGI images and, moreover, provides clear images of low-visibility fog and smoke environments by using the sum of time-sliced images. Nowadays, range-gated (RG) imaging is an emerging technology in the field of surveillance for security applications, especially for the visualization of invisible night and fog environments. Although RGI viewing was discovered in the 1960's, this technology is nowadays becoming more applicable by virtue of the rapid development of optical and sensor technologies. In particular, this system can be adopted in a robot-vision system by virtue of its compact, portable configuration. In contrast to passive vision systems, this technology enables operation even in harsh environments like fog and smoke. During the past decades, several applications of this technology have been applied in target recognition and in harsh environments, such as fog and underwater vision. Also, this technology has been

  16. Range-Image Acquisition for Discriminated Objects in a Range-gated Robot Vision System

    International Nuclear Information System (INIS)

    Park, Seung-Kyu; Ahn, Yong-Jin; Park, Nak-Kyu; Baik, Sung-Hoon; Choi, Young-Soo; Jeong, Kyung-Min

    2015-01-01

    The imaging capability of a surveillance vision system in harsh low-visibility environments, such as fire and detonation areas, is a key function for monitoring the safety of facilities. 2D and range image data acquired from a low-visibility environment are important data for assessing safety and preparing appropriate countermeasures. Passive vision systems, such as conventional cameras and binocular stereo vision systems, usually cannot acquire image information when the reflected light is highly scattered and absorbed by airborne particles such as fog. In addition, the image resolution captured through low-density airborne particles is decreased because the image is blurred and dimmed by scattering, emission and absorption. Active vision systems, such as structured light vision and projected stereo vision, are usually more robust in harsh environments than passive vision systems. However, their performance decreases considerably in proportion to the density of the particles. The RGI system provides 2D and range image data from several RGI images and, moreover, provides clear images of low-visibility fog and smoke environments by using the sum of time-sliced images. Nowadays, range-gated (RG) imaging is an emerging technology in the field of surveillance for security applications, especially for the visualization of invisible night and fog environments. Although RGI viewing was discovered in the 1960's, this technology is nowadays becoming more applicable by virtue of the rapid development of optical and sensor technologies. In particular, this system can be adopted in a robot-vision system by virtue of its compact, portable configuration. In contrast to passive vision systems, this technology enables operation even in harsh environments like fog and smoke. During the past decades, several applications of this technology have been applied in target recognition and in harsh environments, such as fog and underwater vision. Also, this technology has been
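
    The two RGI products mentioned above (a clear 2D image and a coarse range image) can be sketched from a stack of gated slices: the 2D image is the sum over slices, and the per-pixel range follows from the index of the strongest gate. The gate timing, slice count and synthetic fog statistics below are assumptions, not the parameters of the KAERI system.

        import numpy as np

        GATE_START_M = 5.0      # range of the first gate (hypothetical timing)
        GATE_STEP_M = 0.5       # range increment per time slice

        def fuse_gated_slices(slices):
            """Combine a stack of range-gated image slices (shape: gates x H x W).

            The 2D visibility image is the sum of the time-sliced images; a coarse
            range image is obtained by taking, per pixel, the gate index with the
            strongest return and converting it to metres.
            """
            stack = np.asarray(slices, dtype=np.float32)
            intensity = stack.sum(axis=0)                         # clear 2D image
            best_gate = stack.argmax(axis=0)                      # per-pixel strongest slice
            range_map = GATE_START_M + GATE_STEP_M * best_gate
            return intensity, range_map

        # simulate 8 gates viewing a small target that returns light only in gate 5
        slices = np.random.poisson(2.0, size=(8, 60, 80)).astype(np.float32)  # fog backscatter
        slices[5, 20:40, 30:50] += 40.0                                        # object return
        intensity, range_map = fuse_gated_slices(slices)
        print("estimated range at target centre: %.1f m" % range_map[30, 40])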

  17. Toward The Robot Eye: Isomorphic Representation For Machine Vision

    Science.gov (United States)

    Schenker, Paul S.

    1981-10-01

    This paper surveys some issues confronting the conception of models for general purpose vision systems. We draw parallels to requirements of human performance under visual transformations naturally occurring in the ecological environment. We argue that successful real world vision systems require a strong component of analogical reasoning. We propose a course of investigation into appropriate models, and illustrate some of these proposals by a simple example. Our study emphasizes the potential importance of isomorphic representations - models of image and scene which embed a metric of their respective spaces, and whose topological structure facilitates identification of scene descriptors that are invariant under viewing transformations.

  18. Machine vision for a selective broccoli harvesting robot

    NARCIS (Netherlands)

    Blok, Pieter M.; Barth, Ruud; Berg, Van Den Wim

    2016-01-01

    The selective hand-harvest of fresh market broccoli is labor-intensive and comprises about 35% of the total production costs. This research was conducted to determine whether machine vision can be used to detect broccoli heads, as a first step in the development of a fully autonomous selective

  19. Performance evaluation of 3D vision-based semi-autonomous control method for assistive robotic manipulator.

    Science.gov (United States)

    Ka, Hyun W; Chung, Cheng-Shiu; Ding, Dan; James, Khara; Cooper, Rory

    2018-02-01

    We developed a 3D vision-based semi-autonomous control interface for assistive robotic manipulators. It was implemented on one of the most popular commercially available assistive robotic manipulators, combined with a low-cost depth-sensing camera mounted on the robot base. To perform a manipulation task with the 3D vision-based semi-autonomous control interface, a user starts operating with a manual control method available to him/her. When detecting objects within a set range, the control interface automatically stops the robot and provides the user with possible manipulation options through audible text output, based on the detected object characteristics. Then, the system waits until the user states a voice command. Once the user command is given, the control interface drives the robot autonomously until the given command is completed. In the empirical evaluations conducted with human subjects from two different groups, it was shown that the semi-autonomous control can be used as an alternative control method to enable individuals with impaired motor control to operate the robot arm more efficiently by facilitating their fine motion control. The advantage of semi-autonomous control was not so obvious for simple tasks, but for relatively complex real-life tasks the 3D vision-based semi-autonomous control showed significantly faster performance. Implications for Rehabilitation: A 3D vision-based semi-autonomous control interface will improve clinical practice by providing an alternative control method that is less demanding physically as well as cognitively. A 3D vision-based semi-autonomous control provides the user with task-specific intelligent semi-autonomous manipulation assistance. A 3D vision-based semi-autonomous control gives the user the feeling that he or she is still in control at any moment. A 3D vision-based semi-autonomous control is compatible with different types of new and existing manual control methods for ARMs.

  20. Night Vision Image De-Noising of Apple Harvesting Robots Based on the Wavelet Fuzzy Threshold

    Directory of Open Access Journals (Sweden)

    Chengzhi Ruan

    2015-12-01

    Full Text Available In this paper, the de-noising problem of night vision images is studied for apple harvesting robots working at night. The wavelet threshold method is applied to the de-noising of night vision images. Because the choice of the wavelet threshold function restricts the effect of the wavelet threshold method, fuzzy theory is introduced to construct a fuzzy threshold function. We then propose a de-noising algorithm based on the wavelet fuzzy threshold. This new method can reduce image noise interference, which is conducive to further image segmentation and recognition. To demonstrate the performance of the proposed method, we conducted simulation experiments and compared it with the median filtering and wavelet soft-threshold de-noising methods. It is shown that this new method achieves the highest relative PSNR: compared with the original images, the median filtering de-noising method and the classical wavelet threshold de-noising method, the relative PSNR increases by 24.86%, 13.95%, and 11.38%, respectively. We carry out comparisons from various aspects, such as intuitive visual evaluation, objective data evaluation, edge evaluation and artificial light evaluation. The experimental results show that the proposed method has unique advantages for the de-noising of night vision images, which lays the foundation for apple harvesting robots working at night.
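
    A compact sketch of wavelet-domain de-noising with a fuzzified threshold is given below using PyWavelets. A sigmoid membership function stands in for the paper's fuzzy threshold function, and the universal threshold with a median-based noise estimate is a common default rather than the authors' choice; the synthetic night image is invented for the demonstration.

        import numpy as np
        import pywt

        def fuzzy_shrink(c, thr, beta=2.0):
            """Smooth ('fuzzified') shrinkage: coefficients well below the threshold
            are suppressed, those well above are kept, with a sigmoid membership in
            between."""
            membership = 1.0 / (1.0 + np.exp(-beta * (np.abs(c) - thr) / (thr + 1e-9)))
            return c * membership

        def denoise(img, wavelet="db4", level=2):
            coeffs = pywt.wavedec2(img.astype(np.float32), wavelet, level=level)
            # robust noise estimate from the finest diagonal detail band
            sigma = np.median(np.abs(coeffs[-1][-1])) / 0.6745
            thr = sigma * np.sqrt(2.0 * np.log(img.size))         # universal threshold
            new_coeffs = [coeffs[0]] + [tuple(fuzzy_shrink(band, thr) for band in detail)
                                        for detail in coeffs[1:]]
            out = pywt.waverec2(new_coeffs, wavelet)
            return out[:img.shape[0], :img.shape[1]]

        # noisy synthetic night image: dark background, one bright "apple"
        rng = np.random.default_rng(2)
        clean = np.zeros((128, 128), dtype=np.float32)
        yy, xx = np.mgrid[:128, :128]
        clean[(yy - 64) ** 2 + (xx - 64) ** 2 < 400] = 180.0
        noisy = clean + rng.normal(0, 25, clean.shape)
        restored = denoise(noisy)
        mse = lambda a, b: float(np.mean((a - b) ** 2))
        print("MSE noisy: %.1f  ->  MSE denoised: %.1f" % (mse(noisy, clean), mse(restored, clean)))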

  1. A Robust Vision Module for Humanoid Robotic Ping-Pong Game

    Directory of Open Access Journals (Sweden)

    Xiaopeng Chen

    2015-04-01

    Full Text Available Developing a vision module for a humanoid ping-pong game is challenging due to the spin and the non-linear rebound of the ping-pong ball. In this paper, we present a robust predictive vision module to overcome these problems. The hardware of the vision module is composed of two stereo camera pairs, with each pair detecting the 3D positions of the ball on one half of the ping-pong table. The software of the vision module divides the trajectory of the ball into four parts and uses the perceived trajectory in the first part to predict the other parts. In particular, the software of the vision module uses an aerodynamic model to predict the trajectories of the ball in the air and a novel non-linear rebound model to predict the change of the ball's motion during rebound. The average prediction error of our vision module at the ball returning point is less than 50 mm - a value small enough for standard-sized ping-pong rackets. Its average processing speed is 120 fps. The precision and efficiency of our vision module enable two humanoid robots to play ping-pong continuously for more than 200 rounds.
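
    The prediction step can be sketched by integrating a simple aerodynamic model (gravity plus quadratic drag) and applying a linear rebound map at the table, as below. The drag and rebound coefficients, and the omission of spin/Magnus effects, are simplifying assumptions; the paper fits a non-linear rebound model from data, which is not reproduced here.

        import numpy as np

        G = np.array([0.0, 0.0, -9.81])      # gravity (m/s^2)
        KD = 0.12                            # drag coefficient / mass (1/m), illustrative value

        def predict_trajectory(p, v, table_z=0.0, dt=0.002, t_max=2.0):
            """Integrate gravity + quadratic drag, apply a linear rebound at the
            table, and return the first post-bounce point low enough to hit."""
            bounced = False
            for _ in range(int(t_max / dt)):
                a = G - KD * np.linalg.norm(v) * v               # aerodynamic drag
                v = v + a * dt
                p = p + v * dt
                if not bounced and p[2] <= table_z and v[2] < 0:
                    # rebound: restitution on the normal axis, friction on the tangent
                    v = np.array([0.75 * v[0], 0.75 * v[1], -0.88 * v[2]])
                    p[2] = table_z
                    bounced = True
                if bounced and p[2] <= table_z + 0.25 and v[2] < 0:
                    return p                                     # candidate returning point
            return p

        p0 = np.array([0.0, -1.3, 0.35])                         # ball position when first seen (m)
        v0 = np.array([0.1, 4.0, 1.2])                           # estimated velocity (m/s)
        print("predicted returning point (m):", np.round(predict_trajectory(p0, v0), 3))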

  2. Hexapod Robot

    Science.gov (United States)

    Begody, Ericka

    2016-01-01

    The project I am working on at NASA-Johnson Space Center in Houston, TX is a hexapod robot. This project was started by various engineers at the Trick Lab. The goal of this project is to have the hexapod track a yellow ball, or possibly another object, from left to right and up/down. The purpose is to have it track an object like a real creature would. The project will consist of using software and hardware. This project started with a hexapod robot which uses a sensor bar to track a yellow ball, but with a limited field of vision. The sensor bar acts as the robot's "head." Two servos will be added to the hexapod to create flexion and extension of the head. The neck and head servos will have to be programmed to be added to the original memory map of the existing servos. I will be using preexisting code. The main programming language that will be used to add to the preexisting code is C++. The Trick modeling and simulation software will also be used in the process to improve its tracking and movement. This project will use a trial and error approach, basically seeing what works and what does not. The first step is to initially understand how the hexapod works: to get a general understanding of how the hexapod maneuvers and to plan how to add neck and head servos that work with the rest of the body. The second step would be configuring the head and neck servos with the leg servos. During this step, limits will be programmed specifically for each servo. By doing this, each servo is limited in how far it can rotate both clockwise and counterclockwise, which prevents hardware damage. The hexapod will have two modes in which it works. The first mode will be if the sensor bar does not detect an object. If the object it is programmed to look for is not in its view, it will automatically scan from left to right 3 times, then up and down once. The second mode will be if the sensor bar does detect the object. In this mode the hexapod will track the object from left to

  3. Real-time stereo generation for surgical vision during minimal invasive robotic surgery

    Science.gov (United States)

    Laddi, Amit; Bhardwaj, Vijay; Mahapatra, Prasant; Pankaj, Dinesh; Kumar, Amod

    2016-03-01

    This paper proposes a framework for 3D surgical vision for minimally invasive robotic surgery. It presents an approach for generating a three-dimensional view of in-vivo live surgical procedures from two images captured by a very small, full-resolution camera sensor rig. A pre-processing scheme is employed to enhance the image quality and equalize the color profiles of the two images. Polarized projection using interlacing of the two images gives a smooth and strain-free three-dimensional view. The algorithm runs in real time with good speed at full HD resolution.

  4. Space Robotics Challenge

    Data.gov (United States)

    National Aeronautics and Space Administration — The Space Robotics Challenge seeks to infuse robot autonomy from the best and brightest research groups in the robotics community into NASA robots for future...

  5. Robotic arm

    International Nuclear Information System (INIS)

    Kwech, H.

    1989-01-01

    A robotic arm positionable within a nuclear vessel by access through a small diameter opening and having a mounting tube supported within the vessel and mounting a plurality of arm sections for movement lengthwise of the mounting tube as well as for movement out of a window provided in the wall of the mounting tube is disclosed. An end effector, such as a grinding head or welding element, at an operating end of the robotic arm, can be located and operated within the nuclear vessel through movement derived from six different axes of motion provided by mounting and drive connections between arm sections of the robotic arm. The movements are achieved by operation of remotely-controllable servo motors, all of which are mounted at a control end of the robotic arm to be outside the nuclear vessel. 23 figs

  6. Robotic surgery

    Science.gov (United States)

    ... with this type of surgery give it some advantages over standard endoscopic techniques. The surgeon can make ... Elsevier Saunders; 2015:chap 87. Muller CL, Fried GM. Emerging technology in surgery: Informatics, electronics, robotics. In: ...

  7. Robotic parathyroidectomy.

    Science.gov (United States)

    Okoh, Alexis Kofi; Sound, Sara; Berber, Eren

    2015-09-01

    Robotic parathyroidectomy has recently been described. Although the procedure eliminates the neck scar, it is technically more demanding than the conventional approaches. This report is a review of the patients' selection criteria, technique, and outcomes. © 2015 Wiley Periodicals, Inc.

  8. Light Robotics

    DEFF Research Database (Denmark)

    Glückstad, Jesper; Palima, Darwin

    Light Robotics - Structure-Mediated Nanobiophotonics covers the latest means of sculpting of both light and matter for achieving bioprobing and manipulation at the smallest scales. The synergy between photonics, nanotechnology and biotechnology spans the rapidly growing field of nanobiophotonics...

  9. Robotic arm

    Science.gov (United States)

    Kwech, Horst

    1989-04-18

    A robotic arm positionable within a nuclear vessel by access through a small diameter opening and having a mounting tube supported within the vessel and mounting a plurality of arm sections for movement lengthwise of the mounting tube as well as for movement out of a window provided in the wall of the mounting tube. An end effector, such as a grinding head or welding element, at an operating end of the robotic arm, can be located and operated within the nuclear vessel through movement derived from six different axes of motion provided by mounting and drive connections between arm sections of the robotic arm. The movements are achieved by operation of remotely-controllable servo motors, all of which are mounted at a control end of the robotic arm to be outside the nuclear vessel.

  10. Self-localization for an autonomous mobile robot based on an omni-directional vision system

    Science.gov (United States)

    Chiang, Shu-Yin; Lin, Kuang-Yu; Chia, Tsorng-Lin

    2013-12-01

    In this study, we designed an autonomous mobile robot based on the rules of the Federation of International Robot-soccer Association (FIRA) RoboSot category, integrating the techniques of computer vision, real-time image processing, dynamic target tracking, wireless communication, self-localization, motion control, path planning, and control strategy to achieve the contest goal. The self-localization scheme of the mobile robot is based on algorithms applied to the images from its omni-directional vision system. In previous works, we used the image colors of the field goals as reference points, combining either dual-circle or trilateration positioning of the reference points to achieve self-localization of the autonomous mobile robot. However, because the image of the game field is easily affected by ambient light, positioning systems exclusively based on color-model algorithms cause errors. To reduce environmental effects and achieve self-localization of the robot, the proposed algorithm assesses the corners of field lines by using an omni-directional vision system. Particularly in the mid-size league of the RoboCup soccer competition, self-localization algorithms based on extracting white lines from the soccer field have become increasingly popular. Moreover, white lines are less influenced by light than the color model of the goals. Therefore, we propose an algorithm that transforms the omni-directional image into an unwrapped transformed image, enhancing feature extraction. The process is described as follows: First, radial scan-lines were used to process omni-directional images, reducing the computational load and improving system efficiency. The lines were radially arranged around the center of the omni-directional camera image, resulting in a shorter computational time compared with the traditional Cartesian coordinate system. However, the omni-directional image is a distorted image, which makes it difficult to recognize the
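
    The unwrapping step described above can be sketched as a polar-to-Cartesian resampling along radial scan lines; the center, radii and nearest-neighbor sampling below are assumptions for illustration.

        import numpy as np

        def unwrap_omni(img, center, r_min, r_max, n_angles=360, n_radii=100):
            """Sample the omnidirectional image along radial scan lines and stack the
            samples into an unwrapped (angle x radius) panorama."""
            cy, cx = center
            thetas = np.linspace(0.0, 2.0 * np.pi, n_angles, endpoint=False)
            radii = np.linspace(r_min, r_max, n_radii)
            rr, tt = np.meshgrid(radii, thetas)          # shape (n_angles, n_radii)
            ys = np.clip((cy + rr * np.sin(tt)).astype(int), 0, img.shape[0] - 1)
            xs = np.clip((cx + rr * np.cos(tt)).astype(int), 0, img.shape[1] - 1)
            return img[ys, xs]

        # Example: unwrap a dummy 480x480 mirror image around its center.
        panorama = unwrap_omni(np.random.rand(480, 480), (240, 240), 40, 230)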

  11. State of the art of robotic surgery related to vision: brain and eye applications of newly available devices

    Science.gov (United States)

    Nuzzi, Raffaele

    2018-01-01

    Background Robot-assisted surgery has revolutionized many surgical subspecialties, mainly where procedures have to be performed in confined, difficult-to-visualize spaces. Despite advances in general surgery and neurosurgery, in vivo application of robotics to ocular surgery is still in its infancy, owing to the particular complexities of microsurgery. The use of robotic assistance and feedback guidance on surgical maneuvers could improve the technical performance of expert surgeons during the initial phase of the learning curve. Evidence acquisition We analyzed the advantages and disadvantages of surgical robots, as well as the present applications and future outlook of robotics in neurosurgery in brain areas related to vision and in ophthalmology. Discussion Limitations to robotic assistance remain that need to be overcome before it can be more widely applied in ocular surgery. Conclusion There is heightened interest in studies documenting computerized systems that filter out hand tremor and optimize speed of movement, control of force, and direction and range of movement. Further research is still needed to validate robot-assisted procedures. PMID:29440943

  12. Recent advances in robotics

    International Nuclear Information System (INIS)

    Beni, G.; Hackwood, S.

    1984-01-01

    Featuring 10 contributions, this volume offers a state-of-the-art report on robotic science and technology. It covers robots in modern industry, robotic control to help the disabled, kinematics and dynamics, six-legged walking robots, a vector analysis of robot manipulators, tactile sensing in robots, and more

  13. Negative Affect in Human Robot Interaction

    DEFF Research Database (Denmark)

    Rehm, Matthias; Krogsager, Anders

    2013-01-01

    The vision of social robotics sees robots moving more and more into unrestricted social environments, where robots interact closely with users in their everyday activities, maybe even establishing relationships with the user over time. In this paper we present a field trial with a robot in a semi...

  14. Special Issue on Intelligent Robots

    Directory of Open Access Journals (Sweden)

    Genci Capi

    2013-08-01

    Full Text Available The research on intelligent robots will produce robots that are able to operate in everyday life environments, to adapt their program according to environment changes, and to cooperate with other team members and humans. Operating in human environments, robots need to process, in real time, a large amount of sensory data—such as vision, laser, microphone—in order to determine the best action. Intelligent algorithms have been successfully applied to link complex sensory data to robot action. This editorial briefly summarizes recent findings in the field of intelligent robots as described in the articles published in this special issue.

  15. A State-of-the-Art Review on Mapping and Localization of Mobile Robots Using Omnidirectional Vision Sensors

    Directory of Open Access Journals (Sweden)

    L. Payá

    2017-01-01

    Full Text Available Nowadays, the field of mobile robotics is experiencing a quick evolution, and a variety of autonomous vehicles is available to solve different tasks. Advances in computer vision have led to a substantial increase in the use of cameras as the main sensors in mobile robots. They can be used as the only source of information or in combination with other sensors such as odometry or laser. Among vision systems, omnidirectional sensors stand out due to the richness of the information they provide the robot with, and an increasing number of works about them have been published over the last few years, leading to a wide variety of frameworks. In this review, some of the most important works are analysed. One of the key problems the scientific community is currently addressing is the improvement of the autonomy of mobile robots. To this end, building robust models of the environment, localization, and navigation are three important abilities that any mobile robot must have. Taking this into account, the review concentrates on these problems: how researchers have addressed them by means of omnidirectional vision; the main frameworks they have proposed; and how they have evolved in recent years.

  16. A cost-effective intelligent robotic system with dual-arm dexterous coordination and real-time vision

    Science.gov (United States)

    Marzwell, Neville I.; Chen, Alexander Y. K.

    1991-01-01

    Dexterous coordination of manipulators based on the use of redundant degrees of freedom, multiple sensors, and built-in robot intelligence represents a critical breakthrough in development of advanced manufacturing technology. A cost-effective approach for achieving this new generation of robotics has been made possible by the unprecedented growth of the latest microcomputer and network systems. The resulting flexible automation offers the opportunity to improve the product quality, increase the reliability of the manufacturing process, and augment the production procedures for optimizing the utilization of the robotic system. Moreover, the Advanced Robotic System (ARS) is modular in design and can be upgraded by closely following technological advancements as they occur in various fields. This approach to manufacturing automation enhances the financial justification and ensures the long-term profitability and most efficient implementation of robotic technology. The new system also addresses a broad spectrum of manufacturing demand and has the potential to address both complex jobs as well as highly labor-intensive tasks. The ARS prototype employs the decomposed optimization technique in spatial planning. This technique is implemented to the framework of the sensor-actuator network to establish the general-purpose geometric reasoning system. The development computer system is a multiple microcomputer network system, which provides the architecture for executing the modular network computing algorithms. The knowledge-based approach used in both the robot vision subsystem and the manipulation control subsystems results in the real-time image processing vision-based capability. The vision-based task environment analysis capability and the responsive motion capability are under the command of the local intelligence centers. An array of ultrasonic, proximity, and optoelectronic sensors is used for path planning. The ARS currently has 18 degrees of freedom made up by two

  17. Soft Robotics Week

    CERN Document Server

    Rossiter, Jonathan; Iida, Fumiya; Cianchetti, Matteo; Margheri, Laura

    2017-01-01

    This book offers a comprehensive, timely snapshot of current research, technologies and applications of soft robotics. The different chapters, written by international experts across multiple fields of soft robotics, cover innovative systems and technologies for soft robot legged locomotion, soft robot manipulation, underwater soft robotics, biomimetic soft robotic platforms, plant-inspired soft robots, flying soft robots, soft robotics in surgery, as well as methods for their modeling and control. Based on the results of the second edition of the Soft Robotics Week, held on April 25 – 30, 2016, in Livorno, Italy, the book reports on the major research lines and novel technologies presented and discussed during the event.

  18. Rehabilitation robotics.

    Science.gov (United States)

    Krebs, H I; Volpe, B T

    2013-01-01

    This chapter focuses on rehabilitation robotics which can be used to augment the clinician's toolbox in order to deliver meaningful restorative therapy for an aging population, as well as on advances in orthotics to augment an individual's functional abilities beyond neurorestoration potential. The interest in rehabilitation robotics and orthotics is increasing steadily with marked growth in the last 10 years. This growth is understandable in view of the increased demand for caregivers and rehabilitation services escalating apace with the graying of the population. We provide an overview on improving function in people with a weak limb due to a neurological disorder who cannot properly control it to interact with the environment (orthotics); we then focus on tools to assist the clinician in promoting rehabilitation of an individual so that s/he can interact with the environment unassisted (rehabilitation robotics). We present a few clinical results occurring immediately poststroke as well as during the chronic phase that demonstrate superior gains for the upper extremity when employing rehabilitation robotics instead of usual care. These include the landmark VA-ROBOTICS multisite, randomized clinical study which demonstrates clinical gains for chronic stroke that go beyond usual care at no additional cost. Copyright © 2013 Elsevier B.V. All rights reserved.

  19. Medical robotics.

    Science.gov (United States)

    Ferrigno, Giancarlo; Baroni, Guido; Casolo, Federico; De Momi, Elena; Gini, Giuseppina; Matteucci, Matteo; Pedrocchi, Alessandra

    2011-01-01

    Information and communication technology (ICT) and mechatronics play a basic role in medical robotics and computer-aided therapy. In the last three decades, in fact, ICT has strongly entered the health-care field, bringing in new techniques to support therapy and rehabilitation. In this frame, medical robotics is an expansion of service and professional robotics as well as of other technologies, as surgical navigation has been introduced especially in minimally invasive surgery. Localization systems also provide treatments in radiotherapy and radiosurgery with high precision. Virtual or augmented reality plays a role both for surgical training and planning and for safe rehabilitation in the first stage of recovery from neurological diseases. Also, in the chronic phase of motor diseases, robotics helps with special assistive devices and prostheses. Although, in the past, the actual need and advantage of navigation, localization, and robotics in surgery and therapy have been in doubt, today the availability of better hardware (e.g., microrobots) and more sophisticated algorithms (e.g., machine learning and other cognitive approaches) has largely increased the field of applications of these technologies, making it more likely that, in the near future, their presence will be dramatically increased, taking advantage of the generational change of end users and the increasing demand for quality in health-care delivery and management.

  20. Generic robot architecture

    Science.gov (United States)

    Bruemmer, David J [Idaho Falls, ID; Few, Douglas A [Idaho Falls, ID

    2010-09-21

    The present invention provides methods, computer readable media, and apparatuses for a generic robot architecture providing a framework that is easily portable to a variety of robot platforms and is configured to provide hardware abstractions, abstractions for generic robot attributes, environment abstractions, and robot behaviors. The generic robot architecture includes a hardware abstraction level and a robot abstraction level. The hardware abstraction level is configured for developing hardware abstractions that define, monitor, and control hardware modules available on a robot platform. The robot abstraction level is configured for defining robot attributes and provides a software framework for building robot behaviors from the robot attributes. Each of the robot attributes includes hardware information from at least one hardware abstraction. In addition, each robot attribute is configured to substantially isolate the robot behaviors from the at least one hardware abstraction.

  1. 'Filigree Robotics'

    DEFF Research Database (Denmark)

    2016-01-01

    -scale 3D printed ceramics accompanied by prints, videos and ceramic probes, which introduce the material and design processes of the project.'Filigree Robotics' experiments with a combination of the traditional ceramic technique of ‘Overforming’ with 3d Laserscan and Robotic extrusion technique...... application of reflectivity after an initial 3d print. The consideration and integration of this material practice into a digital workflow took place in an interdisciplinary collaboration of Ceramicist Flemming Tvede Hansen from KADK Superformlab and architectural researchers from CITA (Martin Tamke, Henrik...... to the creation of the form and invites for experimentation. In Filigree Robotics we combine the crafting of the mold with a parallel running generative algorithm, which is fed by a constant laserscan of the 3d surface. This algorithm, analyses the topology of the mold, identifies high and low points and uses...

  2. Automated rose cutting in greenhouses with 3D vision and robotics : analysis of 3D vision techniques for stem detection

    NARCIS (Netherlands)

    Noordam, J.C.; Hemming, J.; Heerde, van C.J.E.; Golbach, F.B.T.F.; Soest, van R.; Wekking, E.

    2005-01-01

    The reduction of labour cost is the major motivation to develop a system for robot harvesting of roses in greenhouses that at least can compete with manual harvesting. Due to overlapping leaves, one of the most complicated tasks in robotic rose cutting is to locate the stem and trace the stem down

  3. Cloud Robotics Platforms

    Directory of Open Access Journals (Sweden)

    Busra Koken

    2015-01-01

    Full Text Available Cloud robotics is a rapidly evolving field that allows robots to offload computation-intensive and storage-intensive jobs into the cloud. Robots are limited in terms of computational capacity, memory and storage. Cloud provides unlimited computation power, memory, storage and especially collaboration opportunity. Cloud-enabled robots are divided into two categories as standalone and networked robots. This article surveys cloud robotic platforms, standalone and networked robotic works such as grasping, simultaneous localization and mapping (SLAM and monitoring.

  4. A new technique for robot vision in autonomous underwater vehicles using the color shift in underwater imaging

    Science.gov (United States)

    2017-06-01

    Thesis by Jake A. Jones, Lieutenant Commander, United States Navy, June 2017: "A New Technique for Robot Vision in Autonomous Underwater Vehicles Using the Color Shift in Underwater Imaging." The technique uses the color shift in underwater imaging to determine the distance from each pixel to the camera. Subject terms: unmanned undersea vehicles (UUVs), autonomous underwater vehicles.

  5. Medical robotics

    CERN Document Server

    Troccaz, Jocelyne

    2013-01-01

    In this book, we present medical robotics, its evolution over the last 30 years in terms of architecture, design and control, and the main scientific and clinical contributions to the field. For more than two decades, robots have been part of hospitals and have progressively become a common tool for the clinician. Because this domain has now reached a certain level of maturity it seems important and useful to provide a state of the scientific, technological and clinical achievements and still open issues. This book describes the short history of the domain, its specificity and constraints, and

  6. Service Robots

    DEFF Research Database (Denmark)

    Clemmensen, Torkil; Nielsen, Jeppe Agger; Andersen, Kim Normann

    The position presented in this paper is that in order to understand how service robots shape, and are being shaped by, the physical and social contexts in which they are used, we need to consider both work/organizational analysis and interaction design. We illustrate this with qualitative data...... and personal experiences to generate discussion about how to link these two traditions. This paper presents selected results from a case study that investigated the implementation and use of robot vacuum cleaners in Danish eldercare. The study demonstrates interpretive flexibility with variation...

  7. Robot Choreography

    DEFF Research Database (Denmark)

    Jochum, Elizabeth Ann; Heath, Damith

    2016-01-01

    We propose a robust framework for combining performance paradigms with human robot interaction (HRI) research. Following an analysis of several case studies that combine the performing arts with HRI experiments, we propose a methodology and “best practices” for implementing choreography and other...... performance paradigms in HRI experiments. Case studies include experiments conducted in laboratory settings, “in the wild”, and live performance settings. We consider the technical and artistic challenges of designing and staging robots alongside humans in these various settings, and discuss how to combine...

  8. Cultural Robotics: The Culture of Robotics and Robotics in Culture

    Directory of Open Access Journals (Sweden)

    Hooman Samani

    2013-12-01

    Full Text Available In this paper, we have investigated the concept of “Cultural Robotics” with regard to the evolution of social into cultural robots in the 21st Century. By defining the concept of culture, the potential development of a culture between humans and robots is explored. Based on the cultural values of the robotics developers, and the learning ability of current robots, cultural attributes in this regard are in the process of being formed, which would define the new concept of cultural robotics. According to the importance of the embodiment of robots in the sense of presence, the influence of robots in communication culture is anticipated. The sustainability of robotics culture based on diversity for cultural communities for various acceptance modalities is explored in order to anticipate the creation of different attributes of culture between robots and humans in the future.

  9. Vision-based Nano Robotic System for High-throughput Non-embedded Cell Cutting.

    Science.gov (United States)

    Shang, Wanfeng; Lu, Haojian; Wan, Wenfeng; Fukuda, Toshio; Shen, Yajing

    2016-03-04

    Cell cutting is a significant task in biology study, but highly productive non-embedded cell cutting is still a big challenge for current techniques. This paper proposes a vision-based nano robotic system and then realizes automatic non-embedded cell cutting with this system. First, the nano robotic system is developed and integrated with a nanoknife inside an environmental scanning electron microscope (ESEM). Then, the positions of the nanoknife and the single cell are recognized, and the distance between them is calculated dynamically based on image processing. To guarantee the positioning accuracy and the working efficiency, we propose a distance-regulated speed adapting strategy, in which the moving speed is adjusted intelligently based on the distance between the nanoknife and the target cell. The results indicate that automatic non-embedded cutting can be achieved within 1-2 minutes with low invasion, benefiting from the highly precise nanorobot system and the sharp edge of the nanoknife. This research paves the way for high-throughput cell cutting at the cell's natural condition, which is expected to make a significant impact on biology studies, especially in-situ analysis at the cellular and subcellular scale, such as cell interaction investigation, neural signal transduction and low-invasive cell surgery.
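
    The distance-regulated speed adapting strategy can be illustrated with a simple rule that moves fast far from the cell and slows down inside an approach zone; the speeds and zone size below are invented placeholders, not the paper's values.

        def regulated_speed(distance_um, v_max=50.0, v_min=0.5, d_slow=20.0):
            """Distance-regulated speed: move fast while the nanoknife is far from the
            cell, then slow down proportionally inside the approach zone (values illustrative)."""
            if distance_um >= d_slow:
                return v_max
            return max(v_min, v_max * distance_um / d_slow)

        # Example: full speed at 35 um, reduced speed at 5 um from the cell.
        speeds = [regulated_speed(d) for d in (35.0, 5.0)]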

  10. Vision-based Nano Robotic System for High-throughput Non-embedded Cell Cutting

    Science.gov (United States)

    Shang, Wanfeng; Lu, Haojian; Wan, Wenfeng; Fukuda, Toshio; Shen, Yajing

    2016-03-01

    Cell cutting is a significant task in biology study, but highly productive non-embedded cell cutting is still a big challenge for current techniques. This paper proposes a vision-based nano robotic system and then realizes automatic non-embedded cell cutting with this system. First, the nano robotic system is developed and integrated with a nanoknife inside an environmental scanning electron microscope (ESEM). Then, the positions of the nanoknife and the single cell are recognized, and the distance between them is calculated dynamically based on image processing. To guarantee the positioning accuracy and the working efficiency, we propose a distance-regulated speed adapting strategy, in which the moving speed is adjusted intelligently based on the distance between the nanoknife and the target cell. The results indicate that automatic non-embedded cutting can be achieved within 1-2 minutes with low invasion, benefiting from the highly precise nanorobot system and the sharp edge of the nanoknife. This research paves the way for high-throughput cell cutting at the cell's natural condition, which is expected to make a significant impact on biology studies, especially in-situ analysis at the cellular and subcellular scale, such as cell interaction investigation, neural signal transduction and low-invasive cell surgery.

  11. Robotic Surgery

    Science.gov (United States)

    Childress, Vincent W.

    2007-01-01

    The medical field has many uses for automated and remote-controlled technology. For example, if a tissue sample is only handled in the laboratory by a robotic handling system, then it will never come into contact with a human. Such a system not only helps to automate the medical testing process, but it also helps to reduce the chances of…

  12. Robot Vision to Monitor Structures in Invisible Fog Environments Using Active Imaging Technology

    Energy Technology Data Exchange (ETDEWEB)

    Park, Seungkyu; Park, Nakkyu; Baik, Sunghoon; Choi, Youngsoo; Jeong, Kyungmin [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2014-05-15

    Active vision is a direct visualization technique using a highly sensitive image sensor and a high-intensity illuminant. The range-gated imaging (RGI) technique, which provides 2D and 3D images, is one of the emerging active vision technologies. The RGI technique extracts vision information by summing time-sliced vision images. In the RGI system, objects are illuminated for an ultra-short time by a high-intensity illuminant, and the light reflected from the objects is then captured by a highly sensitive image sensor with an ultra-short exposure time. The RGI system provides 2D and 3D image data from several images; moreover, it provides clear images in invisible fog and smoke environments by summing time-sliced images. Nowadays, range-gated (RG) imaging is an emerging technology in the field of surveillance for security applications, especially for visualization at night and in fog. Although RGI viewing was discovered in the 1960s, this technology is nowadays more and more applicable by virtue of the rapid development of optical and sensor technologies, such as highly sensitive imaging sensors and ultra-short pulse lasers. In contrast to passive vision systems, this technology enables operation even in harsh environments like fog and smoke. During the past decades, this technology has been applied to target recognition and to harsh environments, such as fog and underwater vision. 3D imaging based on range gating has also been demonstrated. In this paper, a robot system to monitor structures in invisible fog environments is developed using an active range-gated imaging technique. The system consists of an ultra-short pulse laser device and a highly sensitive imaging sensor. The developed vision system was used to monitor objects in an invisible fog environment. The experimental results of this new vision system are described in this paper. To see invisible objects in fog
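
    A rough sketch of the slice-summing idea: gated slices are accumulated into a single 2D view, and a coarse per-pixel range is taken from the strongest-responding gate delay; the actual RGI reconstruction is more involved, and the gate timings here are illustrative.

        import numpy as np

        def accumulate_gated_slices(slices, gate_delays_ns):
            """Sum time-sliced gated images into one 2D view and estimate a coarse
            per-pixel range from the best-responding gate (range = c * t / 2)."""
            stack = np.asarray(slices, dtype=np.float64)       # (n_gates, H, W)
            composite = stack.sum(axis=0)                      # 2D image through the fog
            best_gate = stack.argmax(axis=0)                   # strongest return per pixel
            c = 0.299792458                                    # meters per nanosecond
            range_map = c * np.asarray(gate_delays_ns)[best_gate] / 2.0
            return composite, range_map

        # Example with synthetic 8-gate data.
        composite, range_map = accumulate_gated_slices(
            np.random.rand(8, 120, 160), gate_delays_ns=np.arange(8) * 10.0 + 100.0)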

  13. Robot Vision to Monitor Structures in Invisible Fog Environments Using Active Imaging Technology

    International Nuclear Information System (INIS)

    Park, Seungkyu; Park, Nakkyu; Baik, Sunghoon; Choi, Youngsoo; Jeong, Kyungmin

    2014-01-01

    Active vision is a direct visualization technique using a highly sensitive image sensor and a high-intensity illuminant. The range-gated imaging (RGI) technique, which provides 2D and 3D images, is one of the emerging active vision technologies. The RGI technique extracts vision information by summing time-sliced vision images. In the RGI system, objects are illuminated for an ultra-short time by a high-intensity illuminant, and the light reflected from the objects is then captured by a highly sensitive image sensor with an ultra-short exposure time. The RGI system provides 2D and 3D image data from several images; moreover, it provides clear images in invisible fog and smoke environments by summing time-sliced images. Nowadays, range-gated (RG) imaging is an emerging technology in the field of surveillance for security applications, especially for visualization at night and in fog. Although RGI viewing was discovered in the 1960s, this technology is nowadays more and more applicable by virtue of the rapid development of optical and sensor technologies, such as highly sensitive imaging sensors and ultra-short pulse lasers. In contrast to passive vision systems, this technology enables operation even in harsh environments like fog and smoke. During the past decades, this technology has been applied to target recognition and to harsh environments, such as fog and underwater vision. 3D imaging based on range gating has also been demonstrated. In this paper, a robot system to monitor structures in invisible fog environments is developed using an active range-gated imaging technique. The system consists of an ultra-short pulse laser device and a highly sensitive imaging sensor. The developed vision system was used to monitor objects in an invisible fog environment. The experimental results of this new vision system are described in this paper. To see invisible objects in fog

  14. Robot vision language RVL/V: An integration scheme of visual processing and manipulator control

    International Nuclear Information System (INIS)

    Matsushita, T.; Sato, T.; Hirai, S.

    1984-01-01

    RVL/V is a robot vision language designed to write a program for visual processing and manipulator control of a hand-eye system. This paper describes the design of RVL/V and the current implementation of the system. Visual processing is performed on one-dimensional range data of the object surface. Model-based instructions execute object detection, measurement and view control. The hierarchy of visual data and processing is introduced to give RVL/V generality. A new scheme to integrate visual information and manipulator control is proposed. The effectiveness of the model-based visual processing scheme based on profile data is demonstrated by a hand-eye experiment

  15. Adaptive Shape Kernel-Based Mean Shift Tracker in Robot Vision System

    Directory of Open Access Journals (Sweden)

    Chunmei Liu

    2016-01-01

    Full Text Available This paper proposes an adaptive shape kernel-based mean shift tracker using a single static camera for the robot vision system. The question we address in this paper is how to construct a kernel shape that is adaptive to the object shape. We apply a nonlinear manifold learning technique to obtain a low-dimensional shape space, trained on data with the same view as the tracking video. The proposed tracker searches for the shape in the low-dimensional shape space and constructs the adaptive kernel shape in the high-dimensional shape space. This improves the mean shift tracker's ability to track the object position and contour and to avoid background clutter. In the experiments, we take a walking human as an example to validate that our method is accurate and robust in tracking the human position and describing the human contour.
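
    For reference, a generic mean-shift update on a precomputed weight (back-projection) image with a fixed rectangular window is sketched below; the paper's learned, shape-adaptive kernel is not reproduced.

        import numpy as np

        def mean_shift_step(weights, center, half_size):
            """One mean-shift iteration: move the window center to the weighted
            centroid of the pixel weights inside the current window."""
            cy, cx = center
            hy, hx = half_size
            y0, y1 = max(0, cy - hy), min(weights.shape[0], cy + hy + 1)
            x0, x1 = max(0, cx - hx), min(weights.shape[1], cx + hx + 1)
            w = weights[y0:y1, x0:x1]
            if w.sum() == 0:
                return center
            ys, xs = np.mgrid[y0:y1, x0:x1]
            return int(round((w * ys).sum() / w.sum())), int(round((w * xs).sum() / w.sum()))

        def track(weights, center, half_size=(20, 15), iters=10):
            for _ in range(iters):
                new_center = mean_shift_step(weights, center, half_size)
                if new_center == center:
                    break
                center = new_center
            return center

        # Example: converge toward the bright blob in a synthetic weight image.
        w = np.zeros((200, 200)); w[120:150, 60:100] = 1.0
        print(track(w, center=(100, 100)))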

  16. On quaternion based parameterization of orientation in computer vision and robotics

    Directory of Open Access Journals (Sweden)

    G. Terzakis

    2014-04-01

    Full Text Available The problem of orientation parameterization for applications in computer vision and robotics is examined in detail herein. The necessary intuition and formulas are provided for direct practical use in any existing algorithm that seeks to minimize a cost function in an iterative fashion. Two distinct schemes of parameterization are analyzed: the first concerns the traditional axis-angle approach, while the second employs stereographic projection from the unit quaternion sphere to 3D real projective space. Performance measurements are taken and a comparison is made between the two approaches. The results suggest that there are several benefits to the use of stereographic projection, including rational expressions in the rotation matrix derivatives, improved accuracy, robustness to random starting points and accelerated convergence.
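
    A sketch of one common form of the stereographic parameterization, projecting from the pole (-1, 0, 0, 0) so that the zero vector maps to the identity rotation; the paper's exact sign and pole conventions may differ.

        import numpy as np

        def quat_from_stereographic(u):
            """Inverse stereographic projection R^3 -> unit quaternion sphere S^3,
            projecting from the pole (-1, 0, 0, 0); u = 0 maps to the identity."""
            u = np.asarray(u, dtype=np.float64)
            s = u.dot(u)
            w = (1.0 - s) / (1.0 + s)
            xyz = 2.0 * u / (1.0 + s)
            return np.concatenate(([w], xyz))          # (w, x, y, z), unit norm

        def rotation_matrix(q):
            """Standard rotation matrix of a unit quaternion (w, x, y, z)."""
            w, x, y, z = q
            return np.array([
                [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
                [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
                [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)],
            ])

        # Example: three unconstrained parameters -> valid rotation matrix.
        R = rotation_matrix(quat_from_stereographic([0.1, -0.2, 0.05]))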

  17. Adaptive Shape Kernel-Based Mean Shift Tracker in Robot Vision System

    Science.gov (United States)

    2016-01-01

    This paper proposes an adaptive shape kernel-based mean shift tracker using a single static camera for the robot vision system. The question we address in this paper is how to construct a kernel shape that is adaptive to the object shape. We apply a nonlinear manifold learning technique to obtain a low-dimensional shape space, trained on data with the same view as the tracking video. The proposed tracker searches for the shape in the low-dimensional shape space and constructs the adaptive kernel shape in the high-dimensional shape space. This improves the mean shift tracker's ability to track the object position and contour and to avoid background clutter. In the experiments, we take a walking human as an example to validate that our method is accurate and robust in tracking the human position and describing the human contour. PMID:27379165

  18. Development of a Stereo Vision Measurement System for a 3D Three-Axial Pneumatic Parallel Mechanism Robot Arm

    Directory of Open Access Journals (Sweden)

    Chien-Lun Hou

    2011-02-01

    Full Text Available In this paper, a stereo vision 3D position measurement system for a three-axial pneumatic parallel mechanism robot arm is presented. The stereo vision 3D position measurement system aims to measure the 3D trajectories of the end-effector of the robot arm. To track the end-effector of the robot arm, a circle detection algorithm is used to detect the desired target and the SAD algorithm is used to track the moving target and to search for the corresponding target location along the conjugate epipolar line in the stereo pair. After camera calibration, both the intrinsic and extrinsic parameters of the stereo rig can be obtained, so images can be rectified according to the camera parameters. Thus, through epipolar rectification, the stereo matching process is reduced to a horizontal search along the conjugate epipolar line. Finally, 3D trajectories of the end-effector are computed by stereo triangulation. The experimental results show that the stereo vision 3D position measurement system proposed in this paper can successfully track and measure the fifth-order polynomial trajectory and the sinusoidal trajectory of the end-effector of the three-axial pneumatic parallel mechanism robot arm.
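
    After rectification, triangulation reduces to the standard disparity relation Z = f*B/d; the sketch below uses assumed intrinsics purely for illustration.

        import numpy as np

        def triangulate(u_left, v_left, u_right, f, cx, cy, baseline):
            """Recover a 3D point from a rectified stereo pair: after rectification the
            match lies on the same row, so depth follows from the horizontal disparity."""
            disparity = float(u_left - u_right)
            if disparity <= 0:
                raise ValueError("non-positive disparity")
            Z = f * baseline / disparity
            X = (u_left - cx) * Z / f
            Y = (v_left - cy) * Z / f
            return np.array([X, Y, Z])

        # Example with assumed intrinsics: f = 800 px, principal point (320, 240), 60 mm baseline.
        p = triangulate(352.0, 260.0, 331.0, f=800.0, cx=320.0, cy=240.0, baseline=0.06)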

  19. A vision-based system for fast and accurate laser scanning in robot-assisted phonomicrosurgery.

    Science.gov (United States)

    Dagnino, Giulio; Mattos, Leonardo S; Caldwell, Darwin G

    2015-02-01

    Surgical quality in phonomicrosurgery can be improved by open-loop laser control (e.g., high-speed scanning capabilities) combined with a robust and accurate closed-loop visual servoing system. A new vision-based system for laser scanning control during robot-assisted phonomicrosurgery was developed and tested. Laser scanning was accomplished with a dual control strategy, which adds a vision-based trajectory correction phase to a fast open-loop laser controller. The system is designed to eliminate open-loop aiming errors caused by system calibration limitations and by the unpredictable topology of real targets. Evaluation of the new system was performed using CO2 laser cutting trials on artificial targets and ex-vivo tissue. The system produced accuracy values corresponding to pixel resolution even when smoke created by the laser-target interaction cluttered the camera view. In realistic test scenarios, trajectory-following RMS errors were reduced by almost 80% with respect to open-loop system performance, reaching mean error values around 30 μm and maximum observed errors on the order of 60 μm. The new vision-based laser microsurgical control system was shown to be effective and promising, with a significant positive potential impact on the safety and quality of laser microsurgeries.

  20. Robot bicolor system

    Science.gov (United States)

    Yamaba, Kazuo

    1999-03-01

    In robot vision, the most important problem is that the speed of acquiring and analyzing images is lower than the execution speed of the robot. In an actual robot color vision system, the processing should therefore run in real time. We expected that this problem might be solved by using the bicolor analysis technique. We have been testing a system which we hope will give us insight into the properties of bicolor vision. The experiment used the red channel of a color CCD camera and an image from a monochromatic camera to duplicate McCann's theory. To mix the two signals together, the mono image is copied into each of the red, green and blue memory banks of the image processing board and then the red image is added to the red bank. Conversely, pure red, green and blue color components are obtained from the original bicolor images in the novel color system after a scaling factor is added to each RGB image. Our search for a bicolor robot vision system was entirely successful.

  1. Micro Robotics Lab

    Data.gov (United States)

    Federal Laboratory Consortium — Our research is focused on the challenges of engineering robotic systems down to sub-millimeter size scales. We work both on small mobile robots (robotic insects for...

  2. Robots of the Future

    Indian Academy of Sciences (India)

    two main types of robots: industrial robots, and autonomous robots. .... position); it also has a virtual CPU with two stacks and three registers that hold 32-bit strings. Each item ..... just like we can aggregate images, text, and information from.

  3. Presentation robot Advee

    Czech Academy of Sciences Publication Activity Database

    Krejsa, Jiří; Věchet, Stanislav; Hrbáček, J.; Ripel, T.; Ondroušek, V.; Hrbáček, R.; Schreiber, P.

    2012-01-01

    Roč. 18, 5/6 (2012), s. 307-322 ISSN 1802-1484 Institutional research plan: CEZ:AV0Z20760514 Keywords: mobile robot * human-robot interface * localization Subject RIV: JD - Computer Applications, Robotics

  4. Towards Sociable Robots

    DEFF Research Database (Denmark)

    Ngo, Trung Dung

    This thesis studies aspects of self-sufficient energy (energy autonomy) for truly autonomous robots and towards sociable robots. Over sixty years of history of robotics through three developmental ages containing single robot, multi-robot systems, and social (sociable) robots, the main objective...... of roboticists mostly focuses on how to make a robotic system function autonomously and further, socially. However, such approaches mostly emphasize behavioural autonomy, rather than energy autonomy which is the key factor for not only any living machine, but for life on the earth. Consequently, self......-sufficient energy is one of the challenges for not only single robot or multi-robot systems, but also social and sociable robots. This thesis is to deal with energy autonomy for multi-robot systems through energy sharing (trophallaxis) in which each robot is equipped with two capabilities: self-refueling energy...

  5. Cloud Robotics Model

    OpenAIRE

    Mester, Gyula

    2015-01-01

    Cloud Robotics was born from the merger of service robotics and cloud technologies. It allows robots to benefit from the powerful computational, storage, and communications resources of modern data centres. Cloud robotics allows robots to take advantage of the rapid increase in data transfer rates to offload tasks without hard real time requirements. Cloud Robotics has rapidly gained momentum with initiatives by companies such as Google, Willow Garage and Gostai as well as more than a dozen a...

  6. Robot Programming.

    Science.gov (United States)

    1982-12-01

    Robot Programming (AD-A127 233). T. Lozano-Perez, Artificial Intelligence Laboratory, Massachusetts Institute of Technology, 545 Technology Square, Cambridge, December 1982 (unclassified). Cites Latombe, J. C., "Equipe Intelligence Artificielle et Robotique: Etat d'avancement des recherches," Paris, France, June 1982, 519-530.

  7. Friendly network robotics; Friendly network robotics

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1997-03-01

    This paper summarizes the research results on friendly network robotics in fiscal 1996. This research assumes an android robot as an ultimate robot and a future robot system utilizing computer network technology. A robot aimed at human daily work activities in factories or under extreme environments is required to work in usual human work environments. A humanoid robot with size, shape and functions similar to those of a human being is desirable. Such a robot, having a head with two eyes, two ears and a mouth, can hold a conversation with human beings, can walk on two legs by autonomous adaptive control, and has behavior intelligence. Remote operation of such a robot is also possible through a high-speed computer network. As a key technology for using this robot in coexistence with human beings, the establishment of human-coexistent robotics was studied. As network-based robotics, the use of robots connected to computer networks was also studied. In addition, the R-cube (R³) plan (real-time remote control robot technology) was proposed. 82 refs., 86 figs., 12 tabs.

  8. A Novel Generic Ball Recognition Algorithm Based on Omnidirectional Vision for Soccer Robots

    Directory of Open Access Journals (Sweden)

    Hui Zhang

    2013-11-01

    Full Text Available Realizing the recognition of generic balls by soccer robots is significant for the final goal of RoboCup. In this paper, a novel generic ball recognition algorithm based on omnidirectional vision is proposed by combining modified Haar-like features and the AdaBoost learning algorithm. The algorithm is divided into offline training and online recognition. During the offline training phase, numerous sub-images are acquired from various panoramic images, including generic balls, and then the modified Haar-like features are extracted from them and used as the input of the AdaBoost learning algorithm to obtain a classifier. During the online recognition phase, according to the imaging characteristics of our omnidirectional vision system, rectangular windows are defined to search for the generic ball along the rotary and radial directions in the panoramic image, and the learned classifier is used to judge whether a ball is included in the window. After the ball has been recognized globally, ball tracking is realized by integrating a ball velocity estimation algorithm to reduce the computational cost. The experimental results show that good performance can be achieved using our algorithm, and that the generic ball can be recognized and tracked effectively.
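
    The modified Haar-like features and the AdaBoost cascade themselves are not reproduced here; the sketch below shows only the integral-image machinery that makes rectangular Haar-like responses cheap to evaluate inside the search windows.

        import numpy as np

        def integral_image(img):
            """Summed-area table with a zero first row/column for easy box sums."""
            ii = np.zeros((img.shape[0] + 1, img.shape[1] + 1), dtype=np.float64)
            ii[1:, 1:] = np.cumsum(np.cumsum(img, axis=0), axis=1)
            return ii

        def box_sum(ii, y, x, h, w):
            return ii[y + h, x + w] - ii[y, x + w] - ii[y + h, x] + ii[y, x]

        def haar_two_rect_vertical(ii, y, x, h, w):
            """Two-rectangle Haar-like response: top half minus bottom half of a window."""
            half = h // 2
            return box_sum(ii, y, x, half, w) - box_sum(ii, y + half, x, half, w)

        # Example: evaluate the feature over a dummy 24x24 panoramic patch.
        patch = np.random.rand(24, 24)
        response = haar_two_rect_vertical(integral_image(patch), 0, 0, 24, 24)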

  9. Cultural Robotics: The Culture of Robotics and Robotics in Culture

    OpenAIRE

    Hooman Samani; Elham Saadatian; Natalie Pang; Doros Polydorou; Owen Noel Newton Fernando; Ryohei Nakatsu; Jeffrey Tzu Kwan Valino Koh

    2013-01-01

    In this paper, we have investigated the concept of “Cultural Robotics” with regard to the evolution of social into cultural robots in the 21st Century. By defining the concept of culture, the potential development of a culture between humans and robots is explored. Based on the cultural values of the robotics developers, and the learning ability of current robots, cultural attributes in this regard are in the process of being formed, which would define the new concept of cultural robotics. Ac...

  10. Robotic assisted minimally invasive surgery

    Directory of Open Access Journals (Sweden)

    Palep Jaydeep

    2009-01-01

    Full Text Available The term "robot" was coined by the Czech playwright Karel Capek in 1921 in his play Rossum's Universal Robots. The word "robot" comes from the Czech word robota, which means forced labor. The era of robots in surgery commenced in 1994, when the first AESOP (a voice-controlled camera holder) prototype robot, used clinically in 1993, was marketed in 1994 as the first surgical robot ever approved by the US FDA. Since then, many robot prototypes like the Endoassist (Armstrong Healthcare Ltd., High Wycombe, Bucks, UK) and the FIPS endoarm (Karlsruhe Research Center, Karlsruhe, Germany) have been developed to add to the functions of the robot and try to increase its utility. Integrated Surgical Systems (now Intuitive Surgical, Inc.) redesigned the SRI Green Telepresence Surgery system and created the da Vinci Surgical System®, classified as a master-slave surgical system. It uses true 3-D visualization and EndoWrist®. It was approved by the FDA in July 2000 for general laparoscopic surgery and in November 2002 for mitral valve repair surgery. The da Vinci robot is currently being used in various fields such as urology, general surgery, gynecology, cardio-thoracic, pediatric and ENT surgery. It provides several advantages over conventional laparoscopy, such as 3D vision, motion scaling, intuitive movements, visual immersion and tremor filtration. The advent of robotics has increased the use of minimally invasive surgery among laparoscopically naïve surgeons and expanded the repertoire of experienced surgeons to include more advanced and complex reconstructions.

  11. Calibration of Robot Reference Frames for Enhanced Robot Positioning Accuracy

    OpenAIRE

    Cheng, Frank Shaopeng

    2008-01-01

    This chapter discussed the importance and methods of conducting robot workcell calibration for enhancing the accuracy of the robot TCP positions in industrial robot applications. It shows that the robot frame transformations define the robot geometric parameters such as joint position variables, link dimensions, and joint offsets in an industrial robot system. The D-H representation allows the robot designer to model the robot motion geometry with the four standard D-H parameters. The robot k...

  12. 30 Years of Robotic Surgery.

    Science.gov (United States)

    Leal Ghezzi, Tiago; Campos Corleta, Oly

    2016-10-01

    The idea of reproducing himself with the use of a mechanical robot structure has been in man's imagination for the last 3000 years. However, the use of robots in medicine has only 30 years of history. The application of robots in surgery originates from the need of modern man to achieve two goals: telepresence and the performance of repetitive and accurate tasks. The first "robot surgeon" used on a human patient was the PUMA 200 in 1985. In the 1990s, scientists developed the concept of the "master-slave" robot, which consisted of a robot with remote manipulators controlled by a surgeon at a surgical workstation. Despite the lack of force and tactile feedback, technical advantages of robotic surgery, such as 3D vision, a stable and magnified image, EndoWrist instruments, physiologic tremor filtering, and motion scaling, have been considered fundamental to overcome many of the limitations of laparoscopic surgery. Since the approval of the da Vinci® robot by international agencies, American, European, and Asian surgeons have proved its feasibility and safety for the performance of many different robot-assisted surgeries. Comparative studies of robotic and laparoscopic surgical procedures in general surgery have shown similar results with regard to perioperative, oncological, and functional outcomes. However, higher costs and the lack of haptic feedback represent the major limitations of current robotic technology to become the standard technique of minimally invasive surgery worldwide. Therefore, the future of robotic surgery involves cost reduction, development of new platforms and technologies, creation and validation of curricula and virtual simulators, and the conduction of randomized clinical trials to determine the best applications of robotics.

  13. Surgery with cooperative robots.

    Science.gov (United States)

    Lehman, Amy C; Berg, Kyle A; Dumpert, Jason; Wood, Nathan A; Visty, Abigail Q; Rentschler, Mark E; Platt, Stephen R; Farritor, Shane M; Oleynikov, Dmitry

    2008-03-01

    Advances in endoscopic techniques for abdominal procedures continue to reduce the invasiveness of surgery. Gaining access to the peritoneal cavity through small incisions prompted the first significant shift in general surgery. The complete elimination of external incisions through natural orifice access is potentially the next step in reducing patient trauma. While minimally invasive techniques offer significant patient advantages, the procedures are surgically challenging. Robotic surgical systems are being developed that address the visualization and manipulation limitations, but many of these systems remain constrained by the entry incisions. Alternatively, miniature in vivo robots are being developed that are completely inserted into the peritoneal cavity for laparoscopic and natural orifice procedures. These robots can provide vision and task assistance without the constraints of the entry incision, and can reduce the number of incisions required for laparoscopic procedures. In this study, a series of minimally invasive animal-model surgeries were performed using multiple miniature in vivo robots in cooperation with existing laparoscopy and endoscopy tools as well as the da Vinci Surgical System. These procedures demonstrate that miniature in vivo robots can address the visualization constraints of minimally invasive surgery by providing video feedback and task assistance from arbitrary orientations within the peritoneal cavity.

  14. Educational Robotics as Mindtools

    Science.gov (United States)

    Mikropoulos, Tassos A.; Bellou, Ioanna

    2013-01-01

    Although there are many studies on the constructionist use of educational robotics, they have certain limitations. Some of them refer to robotics education, rather than educational robotics. Others follow a constructionist approach, but give emphasis only to design skills, creativity and collaboration. Some studies use robotics as an educational…

  15. ROILA : RObot Interaction LAnguage

    NARCIS (Netherlands)

    Mubin, O.

    2011-01-01

    The number of robots in our society is increasing rapidly. The number of service robots that interact with everyday people already outnumbers industrial robots. The easiest way to communicate with these service robots, such as Roomba or Nao, would be natural speech. However, the limitations

  16. Robotic Hand

    Science.gov (United States)

    1993-01-01

    The Omni-Hand was developed by Ross-Hime Designs, Inc. for Marshall Space Flight Center (MSFC) under a Small Business Innovation Research (SBIR) contract. The multiple digit hand has an opposable thumb and a flexible wrist. Electric muscles called Minnacs power wrist joints and the interchangeable digits. Two hands have been delivered to NASA for evaluation for potential use on space missions and the unit is commercially available for applications like hazardous materials handling and manufacturing automation. Previous SBIR contracts resulted in the Omni-Wrist and Omni-Wrist II robotic systems, which are commercially available for spray painting, sealing, ultrasonic testing, as well as other uses.

  17. An approach to robot SLAM based on incremental appearance learning with omnidirectional vision

    Science.gov (United States)

    Wu, Hua; Qin, Shi-Yin

    2011-03-01

    Localisation and mapping with an omnidirectional camera becomes more difficult as the landmark appearances change dramatically in the omnidirectional image. With conventional techniques, it is difficult to match the features of the landmark with the template. We present a novel robot simultaneous localisation and mapping (SLAM) algorithm with an omnidirectional camera, which uses incremental landmark appearance learning to provide posterior probability distribution for estimating the robot pose under a particle filtering framework. The major contribution of our work is to represent the posterior estimation of the robot pose by incremental probabilistic principal component analysis, which can be naturally incorporated into the particle filtering algorithm for robot SLAM. Moreover, the innovative method of this article allows the adoption of the severe distorted landmark appearances viewed with omnidirectional camera for robot SLAM. The experimental results demonstrate that the localisation error is less than 1 cm in an indoor environment using five landmarks, and the location of the landmark appearances can be estimated within 5 pixels deviation from the ground truth in the omnidirectional image at a fairly fast speed.
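
    For orientation, a generic particle-filter (SIR) step is sketched below, with the paper's incremental-PPCA appearance likelihood replaced by a caller-supplied placeholder; the motion noise and pose representation are assumptions.

        import numpy as np

        rng = np.random.default_rng(0)

        def particle_filter_step(particles, weights, control, appearance_likelihood):
            """One SIR update: diffuse particles with the motion model, reweight them by
            an appearance likelihood, and resample. The likelihood here is a stand-in
            for the incremental-PPCA model described in the abstract."""
            # predict: apply odometry `control` plus Gaussian diffusion
            particles = particles + control + rng.normal(scale=0.02, size=particles.shape)
            # update: weight by how well each hypothesized pose explains the appearance
            weights = weights * np.array([appearance_likelihood(p) for p in particles])
            weights /= weights.sum()
            # resample (systematic resampling would be the usual refinement)
            idx = rng.choice(len(particles), size=len(particles), p=weights)
            return particles[idx], np.full(len(particles), 1.0 / len(particles))

        # Example: 200 planar-pose particles (x, y, heading) and a dummy likelihood.
        particles = rng.normal(size=(200, 3))
        weights = np.full(200, 1.0 / 200)
        particles, weights = particle_filter_step(
            particles, weights, control=np.array([0.05, 0.0, 0.01]),
            appearance_likelihood=lambda p: np.exp(-p[:2].dot(p[:2])))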

  18. Color-based scale-invariant feature detection applied in robot vision

    Science.gov (United States)

    Gao, Jian; Huang, Xinhan; Peng, Gang; Wang, Min; Li, Xinde

    2007-11-01

    Scale-invariant feature detection methods always require a lot of computation yet sometimes still fail to meet real-time demands in robot vision. To solve this problem, a quick method for detecting interest points is presented. To decrease the computation time, the detector selects as interest points those whose scale-normalized Laplacian values are the local extrema in the nonholonomic pyramid scale space. The descriptor is built from several subregions, whose width is proportional to the scale factor, and the coordinates of the descriptor are rotated in relation to the interest point orientation, just like the SIFT descriptor. The feature vector is computed in the original color image, and the mean values of the normalized colors g and b in each subregion are chosen as its elements. Compared with the SIFT descriptor, this descriptor's dimension is considerably lower, which simplifies the point-matching process. The performance of the method is analyzed in theory, and the experimental results confirm its validity.
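
    A sketch of the descriptor construction outlined above (mean normalized g and b per subregion); detection, scale selection and orientation handling are omitted, and the grid size is an assumption.

        import numpy as np

        def color_descriptor(patch, grid=(4, 4)):
            """Build a descriptor from the mean normalized chromaticities g and b of each
            subregion of an HxWx3 RGB patch (assumed already rotated to the keypoint
            orientation and scaled with the detected scale)."""
            eps = 1e-8
            chrom = patch / (patch.sum(axis=2, keepdims=True) + eps)   # r, g, b sum to 1
            gy, gx = grid
            h, w = patch.shape[0] // gy, patch.shape[1] // gx
            feats = []
            for i in range(gy):
                for j in range(gx):
                    cell = chrom[i*h:(i+1)*h, j*w:(j+1)*w]
                    feats.extend([cell[..., 1].mean(), cell[..., 2].mean()])  # mean g, mean b
            return np.asarray(feats)                                   # gy*gx*2 dimensions

        # Example: a 4x4 grid gives a 32-dimensional descriptor.
        descriptor = color_descriptor(np.random.rand(32, 32, 3))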

  19. Vision-Based Pose Estimation for Robot-Mediated Hand Telerehabilitation

    Directory of Open Access Journals (Sweden)

    Giuseppe Airò Farulla

    2016-02-01

    Full Text Available Vision-based Pose Estimation (VPE) represents a non-invasive solution to allow a smooth and natural interaction between a human user and a robotic system, without requiring complex calibration procedures. Moreover, VPE interfaces are gaining momentum as they are highly intuitive, such that they can be used by untrained personnel (e.g., a generic caregiver), even in delicate tasks such as rehabilitation exercises. In this paper, we present a novel master-slave setup for hand telerehabilitation with an intuitive and simple interface for remote control of a wearable hand exoskeleton, named HX. While performing rehabilitative exercises, the master unit evaluates the 3D position of a human operator's hand joints in real time using only an RGB-D camera, and remotely commands the slave exoskeleton. Within the slave unit, the exoskeleton replicates hand movements and an external grip sensor records interaction forces, which are fed back to the operator-therapist, allowing a direct real-time assessment of the rehabilitative task. Experimental data collected with an operator and six volunteers are provided to show the feasibility of the proposed system and its performance. The results demonstrate that, leveraging on our system, the operator was able to directly control the volunteers' hand movements.

  20. An FPGA-Based Omnidirectional Vision Sensor for Motion Detection on Mobile Robots

    Directory of Open Access Journals (Sweden)

    Jones Y. Mori

    2012-01-01

    Full Text Available This work presents the development of an integrated hardware/software sensor system for moving object detection and distance calculation, based on a background subtraction algorithm. The sensor comprises a catadioptric system composed of a camera and a convex mirror that reflects the environment to the camera from all directions, obtaining a panoramic view. The sensor is used as an omnidirectional vision system, allowing for localization and navigation tasks of mobile robots. Several image processing operations such as filtering, segmentation and morphology have been included in the processing architecture. For distance measurement, an algorithm to determine the center of mass of a detected object was implemented. The overall architecture has been mapped onto a commercial low-cost FPGA device, using a hardware/software co-design approach, which comprises a Nios II embedded microprocessor and specific image processing blocks implemented in hardware. The background subtraction algorithm was also used to calibrate the system, allowing for accurate results. Synthesis results show that the system can achieve a throughput of 26.6 processed frames per second, and the performance analysis shows that the overall architecture achieves a speedup factor of 13.78 in comparison with a PC-based solution running on the real-time operating system xPC Target.
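
    The background-subtraction and centre-of-mass chain that the FPGA implements in hardware can be sketched in software as follows; the threshold value and array types are assumptions, not the authors' parameters.

```python
import numpy as np

def detect_moving_object(frame, background, threshold=30):
    """Background subtraction, thresholding, and centre-of-mass estimation.

    frame, background : greyscale images as 2-D uint8 arrays
    Returns (row, col) of the detected object's centre, or None if nothing moved.
    """
    diff = np.abs(frame.astype(np.int16) - background.astype(np.int16))
    mask = diff > threshold                       # segmentation of moving pixels
    if not mask.any():
        return None
    ys, xs = np.nonzero(mask)
    return ys.mean(), xs.mean()                   # centre of mass of the blob
```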

  1. Discrete-State-Based Vision Navigation Control Algorithm for One Bipedal Robot

    Directory of Open Access Journals (Sweden)

    Dunwen Wei

    2015-01-01

    Full Text Available Navigation towards a specific objective can be defined by specifying a desired timed trajectory. The concept of a desired direction field is proposed to deal with this navigation problem. To allow a principled discussion of the accuracy and efficiency of navigation algorithms, strictly quantitative definitions of tracking error, actuator effect, and time efficiency are established. In this paper, a vision navigation control method based on the desired direction field is proposed. The method uses discrete image sequences to form a discrete state space, which is especially suitable for bipedal walking robots with a single camera walking on a barrier-free planar surface to track the specific objective without overshoot. The shortest path method (SPM) is proposed to design a direction field with the highest time efficiency, and an improved control method based on a canonical piecewise-linear function (PLF) is also proposed. In order to restrain noise disturbance from the camera sensor, a bandwidth control method is presented that significantly reduces the influence of this error. The robustness and efficiency of the proposed algorithm are illustrated through a number of computer simulations that account for camera sensor error. Simulation results show that robustness and efficiency can be balanced by choosing a proper bandwidth value.

  2. Modular Robotic Wearable

    DEFF Research Database (Denmark)

    Lund, Henrik Hautop; Pagliarini, Luigi

    2009-01-01

    In this concept paper we trace the contours and define a new approach to robotic systems, composed of interactive robotic modules which are somehow worn on the body. We label such a field as Modular Robotic Wearable (MRW). We describe how, by using modular robotics for creating wearable....... Finally, by focusing on the intersection of the combination modular robotic systems, wearability, and bodymind we attempt to explore the theoretical characteristics of such approach and exploit the possible playware application fields....

  3. Vision-based mobile robot navigation through deep convolutional neural networks and end-to-end learning

    Science.gov (United States)

    Zhang, Yachu; Zhao, Yuejin; Liu, Ming; Dong, Liquan; Kong, Lingqin; Liu, Lingling

    2017-09-01

    In contrast to humans, who use only visual information for navigation, many mobile robots use laser scanners and ultrasonic sensors along with vision cameras to navigate. This work proposes a vision-based robot control algorithm based on deep convolutional neural networks. We create a large 15-layer convolutional neural network learning system and achieve advanced recognition performance. Our system is trained end to end to map raw input images to steering directions in a supervised manner. The images in the data sets are collected under a wide variety of weather and lighting conditions. In addition, the data sets are augmented with Gaussian noise and salt-and-pepper noise to avoid overfitting. The algorithm is verified by two experiments: line tracking and obstacle avoidance. The line tracking experiment tracks a desired path composed of straight and curved lines, and the obstacle avoidance experiment avoids obstacles indoors. We obtain a 3.29% error rate on the training set and a 5.1% error rate on the test set in the line tracking experiment, and a 1.8% error rate on the training set and less than a 5% error rate on the test set in the obstacle avoidance experiment. During the actual test, the robot follows the runway centerline outdoors and avoids obstacles in the room accurately. The results confirm the effectiveness of the algorithm and of our improvements to the network structure and training parameters.
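
    The paper's 15-layer network is not reproduced here, but a reduced stand-in shows the general shape of such an end-to-end pipeline, including the Gaussian and salt-and-pepper augmentation mentioned above; PyTorch is assumed and the layer sizes are illustrative.

```python
import torch
import torch.nn as nn

class DirectionNet(nn.Module):
    """Small end-to-end network mapping an RGB image to a steering class
    (e.g. left / straight / right). A reduced stand-in, not the paper's model."""
    def __init__(self, n_classes=3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(64, n_classes)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

def augment(batch, sigma=0.05, salt_pepper=0.01):
    """Gaussian plus salt-and-pepper noise, as used to reduce overfitting.
    Assumes float images scaled to [0, 1]."""
    noisy = batch + sigma * torch.randn_like(batch)
    mask = torch.rand_like(batch)
    noisy[mask < salt_pepper / 2] = 0.0
    noisy[mask > 1 - salt_pepper / 2] = 1.0
    return noisy.clamp(0.0, 1.0)
```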

  4. Visual guidance of a pig evisceration robot using neural networks

    DEFF Research Database (Denmark)

    Christensen, S.S.; Andersen, A.W.; Jørgensen, T.M.

    1996-01-01

    The application of a RAM-based neural network to robot vision is demonstrated for the guidance of a pig evisceration robot. Tests of the combined robot-vision system have been performed at an abattoir. The vision system locates a set of feature points on a pig carcass and transmits the 3D coordin...

  5. Next generation light robotic

    DEFF Research Database (Denmark)

    Villangca, Mark Jayson; Palima, Darwin; Banas, Andrew Rafael

    2017-01-01

    Conventional robotics provides machines and robots that can replace and surpass human performance in repetitive, difficult, and even dangerous tasks at industrial assembly lines, hazardous environments, or even at remote planets. A new class of robotic systems no longer aims to replace humans with so-called automatons but, rather, to create robots that can work alongside human operators. These new robots are intended to collaborate with humans—extending their abilities—from assisting workers on the factory floor to rehabilitating patients in their homes. In medical robotics, robot-assisted surgery imbibes surgeons with superhuman abilities and gives the expression “surgical precision” a whole new meaning. Still in its infancy, much remains to be done to improve human-robot collaboration both in realizing robots that can operate safely with humans and in training personnel that can work...

  6. Distributed Robotics Education

    DEFF Research Database (Denmark)

    Lund, Henrik Hautop; Pagliarini, Luigi

    2011-01-01

    Distributed robotics takes many forms, for instance, multirobots, modular robots, and self-reconfigurable robots. The understanding and development of such advanced robotic systems demand extensive knowledge in engineering and computer science. In this paper, we describe the concept of a distribu...... to be changed, related to multirobot control and human-robot interaction control from virtual to physical representation. The proposed system is valuable for bringing a vast number of issues into education – such as parallel programming, distribution, communication protocols, master dependency, connectivity...

  7. An Adaptive Robot Game

    DEFF Research Database (Denmark)

    Hansen, Søren Tranberg; Svenstrup, Mikael; Dalgaard, Lars

    2010-01-01

    The goal of this paper is to describe an adaptive robot game, which motivates elderly people to do a regular amount of physical exercise while playing. One of the advantages of robot based games is that the initiative to play can be taken autonomously by the robot. In this case, the goal is to improve the mental and physical state of the user by playing a physical game with the robot. Ideally, a robot game should be simple to learn but difficult to master, providing an appropriate degree of challenge for players with different skills. In order to achieve that, the robot should be able to adapt...

  8. Robotic intelligence kernel

    Science.gov (United States)

    Bruemmer, David J [Idaho Falls, ID

    2009-11-17

    A robot platform includes perceptors, locomotors, and a system controller. The system controller executes a robot intelligence kernel (RIK) that includes a multi-level architecture and a dynamic autonomy structure. The multi-level architecture includes a robot behavior level for defining robot behaviors that incorporate robot attributes, and a cognitive level for defining conduct modules that blend an adaptive interaction between predefined decision functions and the robot behaviors. The dynamic autonomy structure is configured for modifying a transaction capacity between operator intervention and robot initiative, and may include multiple levels, with at least a teleoperation mode configured to maximize operator intervention and minimize robot initiative, and an autonomous mode configured to minimize operator intervention and maximize robot initiative. Within the RIK, at least the cognitive level includes the dynamic autonomy structure.
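
    As a rough illustration only (the patent describes the dynamic autonomy structure architecturally, not as code), the following sketch shows one way a motion command could be blended between operator intervention and robot initiative across autonomy modes; the mode values and the blending rule are assumptions, not the patented design.

```python
from enum import Enum

class AutonomyMode(Enum):
    TELEOPERATION = 0.0   # operator intervention maximised
    SAFE_TELEOP = 0.33
    SHARED = 0.66
    AUTONOMOUS = 1.0      # robot initiative maximised

def blend_command(operator_cmd, robot_cmd, mode):
    """Blend operator and robot velocity commands according to the current
    autonomy mode (an illustrative stand-in for the RIK's dynamic autonomy)."""
    alpha = mode.value
    return tuple((1.0 - alpha) * o + alpha * r
                 for o, r in zip(operator_cmd, robot_cmd))

# Example: in SHARED mode the robot contributes roughly two thirds of the command.
cmd = blend_command((0.5, 0.0), (0.2, 0.3), AutonomyMode.SHARED)
```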

  9. Robotic membranes

    DEFF Research Database (Denmark)

    Ramsgaard Thomsen, Mette

    2008-01-01

    The relationship between digital and analogue is often constructed as one of opposition. The perception that the world is permeated with underlying patterns of data, describing events and matter alike, suggests that information can be understood apart from the substance to which it is associated......, and that its encoded logic can be constructed and reconfigured as an isolated entity. This disembodiment of information from materiality implies that an event like a thunderstorm, or a material like a body, can be described equally by data, in other words it can be read or written. The following prototypes......, Vivisection and Strange Metabolisms, were developed at the Centre for Information Technology and Architecture (CITA) at the Royal Danish Academy of Fine Arts in Copenhagen as a means of engaging intangible digital data with tactile physical material. As robotic membranes, they are a dual examination...

  10. Combining a Novel Computer Vision Sensor with a Cleaning Robot to Achieve Autonomous Pig House Cleaning

    DEFF Research Database (Denmark)

    Andersen, Nils Axel; Braithwaite, Ian David; Blanke, Mogens

    2005-01-01

    condition based cleaning. This paper describes how a novel sensor, developed for the purpose, and algorithms for classification and learning are combined with a commercial robot to obtain an autonomous system which meets the necessary quality attributes. These include features to make selective cleaning...

  11. Direct methods for vision-based robot control : application and implementation

    NARCIS (Netherlands)

    Pieters, R.S.

    2013-01-01

    With the growing interest of integrating robotics into everyday life and industry, the requirements towards the quality and quantity of applications grows equally hard. This trend is profoundly recognized in applications involving visual perception. Whereas visual sensing in home environments tend

  12. Working on the robot society: Visions and insights from science about the relation between technology and employment.

    NARCIS (Netherlands)

    van Est, R.; Kool, L.

    2015-01-01

    The report Working on the robot society sets out current scientific findings for the relationship between technology and employment. It looks at the future and describes the policy options. In so doing, the report provides a joint fund of knowledge for societal and political debate on how the

  13. Image registration algorithm for high-voltage electric power live line working robot based on binocular vision

    Science.gov (United States)

    Li, Chengqi; Ren, Zhigang; Yang, Bo; An, Qinghao; Yu, Xiangru; Li, Jinping

    2017-12-01

    In the process of dismounting and assembling the drop switch for the high-voltage electric power live line working (EPL2W) robot, one of the key problems is the positioning precision of the manipulators, the gripper, and the bolts used to fix the drop switch. To solve it, we study the binocular vision system theory of the robot and the characteristics of dismounting and assembling the drop switch. We propose a coarse-to-fine image registration algorithm based on image correlation, which can significantly improve the positioning precision of the manipulators and bolts. The algorithm performs the following three steps: first, the target points are marked in the right and left views, and the system judges whether the target point in the right view satisfies the minimum registration accuracy by using the similarity of the target points' backgrounds in the right and left views (a typical coarse-to-fine strategy); second, the system calculates the epipolar line, a sequence of candidate regions containing matching points is generated from the neighborhood of the epipolar line, and the optimal matching region is determined by correlation matching between the template image from the left view and each region in the sequence; finally, the precise coordinates of the target points in the right and left views are calculated from the optimal match. The experimental results indicate that the positioning accuracy in image coordinates is within 2 pixels and within 3 mm in the world coordinate system, which satisfies the requirements for dismounting and assembling the drop switch.
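
    A simplified stand-in for the correlation step is sketched below using OpenCV template matching restricted to a band around the epipolar line; it assumes rectified greyscale images (so the epipolar line is a row) and does not reproduce the authors' full coarse-to-fine procedure.

```python
import cv2
import numpy as np

def match_along_epipolar(left_img, right_img, pt_left, patch=21, band=5):
    """Locate the correspondence of a target point marked in the left image
    by correlating its template against a band around the epipolar line in
    the right image. Returns ((x, y), score) in right-image coordinates."""
    half = patch // 2
    x, y = pt_left
    template = left_img[y - half:y + half + 1, x - half:x + half + 1]
    # Restrict the search region to a narrow band around the epipolar line.
    y0 = max(0, y - band - half)
    y1 = min(right_img.shape[0], y + band + half + 1)
    strip = right_img[y0:y1, :]
    score = cv2.matchTemplate(strip, template, cv2.TM_CCOEFF_NORMED)
    _, max_val, _, max_loc = cv2.minMaxLoc(score)
    # Convert the best match back to full-image coordinates (patch centre).
    return (max_loc[0] + half, max_loc[1] + half + y0), max_val
```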

  14. Robotics research in Chile

    Directory of Open Access Journals (Sweden)

    Javier Ruiz-del-Solar

    2016-12-01

    Full Text Available The development of research in robotics in a developing country is a challenging task. Factors such as low research funds, low trust from local companies and the government, and a small number of qualified researchers hinder the development of strong, local research groups. In this article, as a case study, we present our research group in robotics at the Advanced Mining Technology Center of the Universidad de Chile, and the way in which we have addressed these challenges. In 2008, we decided to focus our research efforts on mining, which is the main industry in Chile. We observed that this industry has needs in terms of safety, productivity, operational continuity, and environmental care. All these needs could be addressed with robotics and automation technology. In a first stage, we concentrated on building capabilities in field robotics, starting with the automation of a commercial vehicle. An important outcome of this project was earning the confidence of the local mining industry. Then, in a second stage starting in 2012, we began working with the local mining industry on technological projects. In this article, we describe three of the technological projects that we have developed with industry support: (i) an autonomous vehicle for mining environments without global positioning system coverage; (ii) the inspection of the irrigation flow in heap leach piles using unmanned aerial vehicles and thermal cameras; and (iii) an enhanced vision system for vehicle teleoperation in adverse climatic conditions.

  15. Robotics Potential Fields

    Directory of Open Access Journals (Sweden)

    Jordi Lucero

    2009-01-01

    Full Text Available This problem was to calculate the path a robot would take to navigate an obstacle field and get to its goal. Three obstacles were given as negative potential fields which the robot avoided, and a goal was given a positive potential field that attracted the robot. The robot decided each step based on its distance, angle, and influence from every object. After each step, the robot recalculated and determined its next step until it reached its goal. The robot's calculations and steps were simulated with Microsoft Excel.
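
    The spreadsheet model described above translates naturally into a short gradient-descent sketch over an attractive goal potential and repulsive obstacle potentials; the gain constants and step size here are illustrative, not taken from the article.

```python
import numpy as np

def step(robot, goal, obstacles, k_att=1.0, k_rep=50.0, step_size=0.1):
    """One step of potential-field navigation in 2-D.

    robot, goal : 2-D positions; obstacles : list of 2-D positions.
    The goal contributes an attractive force, each obstacle a repulsive force
    that decays with distance; the robot moves a fixed step along the net force.
    """
    robot = np.asarray(robot, float)
    goal = np.asarray(goal, float)
    force = k_att * (goal - robot)                       # attraction to the goal
    for obs in obstacles:
        diff = robot - np.asarray(obs, float)
        dist = np.linalg.norm(diff) + 1e-9
        force += k_rep * diff / dist**3                  # repulsion from obstacle
    return robot + step_size * force / (np.linalg.norm(force) + 1e-9)

# Example: iterate until the robot is close to the goal.
pos, goal = (0.0, 0.0), (10.0, 8.0)
obstacles = [(4.0, 3.0), (6.0, 6.0), (8.0, 4.0)]
while np.linalg.norm(np.asarray(goal) - np.asarray(pos)) > 0.2:
    pos = step(pos, goal, obstacles)
```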

  16. Designing Emotionally Expressive Robots

    DEFF Research Database (Denmark)

    Tsiourti, Christiana; Weiss, Astrid; Wac, Katarzyna

    2017-01-01

    Socially assistive agents, be it virtual avatars or robots, need to engage in social interactions with humans and express their internal emotional states, goals, and desires. In this work, we conducted a comparative study to investigate how humans perceive emotional cues expressed by humanoid...... robots through five communication modalities (face, head, body, voice, locomotion) and examined whether the degree of a robot's human-like embodiment affects this perception. In an online survey, we asked people to identify emotions communicated by Pepper -a highly human-like robot and Hobbit – a robot...... for robots....

  17. A focused bibliography on robotics

    Science.gov (United States)

    Mergler, H. W.

    1983-08-01

    The present bibliography focuses on eight robotics-related topics believed by the author to be of special interest to researchers in the field of industrial electronics: robots, sensors, kinematics, dynamics, control systems, actuators, vision, economics, and robot applications. This literature search was conducted through the 1970-present COMPENDEX data base, which provides world-wide coverage of nearly 3500 journals, conference proceedings and reports, and the 1969-1981 INSPEC data base, which is the largest for the English language in the fields of physics, electrotechnology, computers, and control.

  18. Robotic Sensitive-Site Assessment

    Science.gov (United States)

    2015-09-04

    annotations. The SOA component is the backend infrastructure that receives and stores robot-generated and human-input data and serves these data to several... Architecture Server: The SOA server provides the backend infrastructure to receive data from robot situational awareness payloads, to archive... incapacitation or even death. The proper use of PPE is critical to avoiding exposure. However, wearing PPE limits mobility and field of vision, and

  19. Advanced mechanics in robotic systems

    CERN Document Server

    Nava Rodríguez, Nestor Eduardo

    2011-01-01

    Illustrates original and ambitious mechanical designs and techniques for the development of new robot prototypes. Includes numerous figures, tables and flow charts. Discusses relevant applications in robotics fields such as humanoid robots, robotic hands, mobile robots, parallel manipulators and human-centred robots.

  20. Technique of Substantiating Requirements for the Vision Systems of Industrial Robotic Complexes

    Directory of Open Access Journals (Sweden)

    V. Ya. Kolyuchkin

    2015-01-01

    Full Text Available The literature lacks approaches for substantiating the technical requirements for vision systems (VS) of industrial robotic complexes (IRC). The objective of this work is therefore to develop a technique for substantiating requirements for the main quality indicators of a VS functioning as part of an IRC. The proposed technique uses a model representation of the VS, which, as part of the IRC information system, sorts the objects in the work area and measures their linear and angular coordinates. To address this problem, the target function of a designed IRC is defined as the dependence of the IRC efficiency indicator on the VS quality indicators. The paper proposes to use, as the indicator of IRC efficiency, the probability of producing no defective products during manufacturing. Based on the functions the VS performs as part of the IRC information system, the adopted VS quality indicators are as follows: the probability of correct recognition of objects in the IRC working area, and the confidence probabilities of measuring the linear and angular orientation coordinates of objects within specified permissible errors. The specific values of these errors depend on the orientation errors of the working bodies of the manipulators that are part of the IRC. The paper presents mathematical expressions for the functional dependence of the probability of producing no defective products on the VS quality indicators and on the probability of failures of the IRC technological equipment. The proposed technique for substantiating engineering requirements for the VS of an IRC is novel. The results obtained in this work can be useful for professionals involved in IRC VS development, in particular in the development of VS algorithms and software.

  1. Robotics_MobileRobot Navigation, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — Robots and rovers exploring planets need to autonomously navigate to specified locations. Advanced Scientific Concepts, Inc. (ASC) and the University of Minnesota...

  2. Robots Social Embodiment in Autonomous Mobile Robotics

    Directory of Open Access Journals (Sweden)

    Brian Duffy

    2008-11-01

    Full Text Available This work aims at demonstrating the inherent advantages of embracing a strong notion of social embodiment in designing a real-world robot control architecture with explicit 'intelligent' social behaviour between a collective of robots. It develops the current thinking on embodiment beyond the physical by demonstrating the importance of social embodiment. A social framework develops the fundamental social attributes found when more than one robot co-inhabit a physical space. The social metaphors of identity, character, stereotypes and roles are presented and implemented within a real-world social robot paradigm in order to facilitate the realisation of explicit social goals.

  3. Pose Estimation of a Mobile Robot Based on Fusion of IMU Data and Vision Data Using an Extended Kalman Filter.

    Science.gov (United States)

    Alatise, Mary B; Hancke, Gerhard P

    2017-09-21

    Using a single sensor to determine the pose of a device cannot give accurate results. This paper presents a fusion of a six-degrees-of-freedom (6-DoF) inertial sensor, comprising a 3-axis accelerometer and a 3-axis gyroscope, with vision data to determine a low-cost and accurate position for an autonomous mobile robot. For vision, a monocular object detection algorithm integrating speeded-up robust features (SURF) and random sample consensus (RANSAC) was used to recognize a sample object in several images. Unlike conventional methods that depend on point tracking, RANSAC uses an iterative method to estimate the parameters of a mathematical model from a set of captured data containing outliers. With SURF and RANSAC, improved accuracy is certain because of their ability to find interest points (features) under different viewing conditions using a Hessian matrix. This approach is proposed because of its simple implementation, low cost, and improved accuracy. With an extended Kalman filter (EKF), data from the inertial sensors and the camera were fused to estimate the position and orientation of the mobile robot. All these sensors were mounted on the mobile robot to obtain accurate localization. An indoor experiment was carried out to validate and evaluate the performance. Experimental results show that the proposed method is fast in computation, reliable and robust, and can be considered for practical applications. The performance of the experiments was verified against ground truth data using root mean square errors (RMSEs).
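
    A compact sketch of the fusion idea, assuming a planar constant-acceleration model driven by IMU acceleration and a position fix from the vision pipeline; since the measurement model used here is linear, the EKF update reduces to a standard Kalman update. The state layout and noise levels are assumptions, not the paper's values.

```python
import numpy as np

def ekf_predict(x, P, accel, dt, q=0.05):
    """Predict the 2-D state [px, py, vx, vy] from IMU acceleration."""
    F = np.array([[1, 0, dt, 0],
                  [0, 1, 0, dt],
                  [0, 0, 1, 0],
                  [0, 0, 0, 1]], float)
    B = np.array([[0.5 * dt**2, 0],
                  [0, 0.5 * dt**2],
                  [dt, 0],
                  [0, dt]], float)
    x = F @ x + B @ np.asarray(accel, float)
    P = F @ P @ F.T + q * np.eye(4)
    return x, P

def ekf_update(x, P, z, r=0.01):
    """Correct the state with a position fix from the vision pipeline
    (e.g. the SURF + RANSAC object pose)."""
    H = np.array([[1, 0, 0, 0],
                  [0, 1, 0, 0]], float)
    S = H @ P @ H.T + r * np.eye(2)
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (np.asarray(z, float) - H @ x)
    P = (np.eye(4) - K @ H) @ P
    return x, P
```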

  4. A calibration system for measuring 3D ground truth for validation and error analysis of robot vision algorithms

    Science.gov (United States)

    Stolkin, R.; Greig, A.; Gilby, J.

    2006-10-01

    An important task in robot vision is that of determining the position, orientation and trajectory of a moving camera relative to an observed object or scene. Many such visual tracking algorithms have been proposed in the computer vision, artificial intelligence and robotics literature over the past 30 years. However, it is seldom possible to explicitly measure the accuracy of these algorithms, since the ground-truth camera positions and orientations at each frame in a video sequence are not available for comparison with the outputs of the proposed vision systems. A method is presented for generating real visual test data with complete underlying ground truth. The method enables the production of long video sequences, filmed along complicated six-degree-of-freedom trajectories, featuring a variety of objects and scenes, for which complete ground-truth data are known including the camera position and orientation at every image frame, intrinsic camera calibration data, a lens distortion model and models of the viewed objects. This work encounters a fundamental measurement problem—how to evaluate the accuracy of measured ground truth data, which is itself intended for validation of other estimated data. Several approaches for reasoning about these accuracies are described.

  5. Springer handbook of robotics

    CERN Document Server

    Khatib, Oussama

    2016-01-01

    The second edition of this handbook provides a state-of-the-art overview of the various aspects of the rapidly developing field of robotics. Reaching for the human frontier, robotics is vigorously engaged in the growing challenges of new emerging domains. Interacting, exploring, and working with humans, the new generation of robots will increasingly touch people and their lives. The credible prospect of practical robots among humans is the result of the scientific endeavour of half a century of robotic developments that established robotics as a modern scientific discipline. The ongoing vibrant expansion and strong growth of the field during the last decade have fueled this second edition of the Springer Handbook of Robotics. The first edition of the handbook soon became a landmark in robotics publishing and won the American Association of Publishers PROSE Award for Excellence in Physical Sciences & Mathematics as well as the organization's Award for Engineering & Technology. The second edition o...

  6. Project ROBOTICS 2008

    DEFF Research Database (Denmark)

    Conrad, Finn

    Mathematical modelling of Alto Robot, direct- and inverse kinematic transformation, simulation and path control applying MATLAB/SIMULINK.

  7. Project Tasks in Robotics

    DEFF Research Database (Denmark)

    Sørensen, Torben; Hansen, Poul Erik

    1998-01-01

    Description of the compulsory project tasks to be carried out as a part of DTU course 72238 Robotics.

  8. CMS cavern inspection robot

    CERN Document Server

    Ibrahim, Ibrahim

    2017-01-01

    Robots which are immune to the CMS cavern environment, wirelessly controlled: one actuated by smart materials (Ionic Polymer-Metal Composites and Macro Fiber Composites), one regular brushed DC rover, one servo-driven rover, and a stair-climbing robot.

  9. The Tox21 robotic platform for the assessment of environmental chemicals--from vision to reality.

    Science.gov (United States)

    Attene-Ramos, Matias S; Miller, Nicole; Huang, Ruili; Michael, Sam; Itkin, Misha; Kavlock, Robert J; Austin, Christopher P; Shinn, Paul; Simeonov, Anton; Tice, Raymond R; Xia, Menghang

    2013-08-01

    Since its establishment in 2008, the US Tox21 inter-agency collaboration has made great progress in developing and evaluating cellular models for the evaluation of environmental chemicals as a proof of principle. Currently, the program has entered its production phase (Tox21 Phase II) focusing initially on the areas of modulation of nuclear receptors and stress response pathways. During Tox21 Phase II, the set of chemicals to be tested has been expanded to nearly 10,000 (10K) compounds and a fully automated screening platform has been implemented. The Tox21 robotic system combined with informatics efforts is capable of screening and profiling the collection of 10K environmental chemicals in triplicate in a week. In this article, we describe the Tox21 screening process, compound library preparation, data processing, and robotic system validation. Published by Elsevier Ltd.

  10. Vision based persistent localization of a humanoid robot for locomotion tasks

    Directory of Open Access Journals (Sweden)

    Martínez Pablo A.

    2016-09-01

    Full Text Available Typical monocular localization schemes involve a search for matches between reprojected 3D world points and 2D image features in order to estimate the absolute scale transformation between the camera and the world. Successfully calculating such a transformation implies the existence of a good number of 3D points uniformly distributed as reprojected pixels around the image plane. This paper presents a method to control the walk of a humanoid robot towards directions that are favorable for vision-based localization. To this end, orthogonal diagonalization is performed on the covariance matrices of both sets of 3D world points and their 2D image reprojections. Experiments with the NAO humanoid platform show that our method provides persistence of localization, as the robot tends to walk towards directions that are desirable for successful localization. Additional tests demonstrate how the proposed approach can be incorporated into a control scheme that considers reaching a target position.
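
    The orthogonal-diagonalisation step can be sketched as an eigendecomposition of the covariance of the reprojected feature points; this is an illustrative reduction of the idea, not the authors' full controller, and the function name is hypothetical.

```python
import numpy as np

def favourable_direction(points_2d):
    """Orthogonally diagonalise the covariance of reprojected feature points
    and return the image-plane direction of largest spread together with the
    eigenvalues. Walking so that features stay spread along this axis keeps
    enough well-distributed points for localisation."""
    pts = np.asarray(points_2d, float)
    cov = np.cov(pts, rowvar=False)              # 2x2 covariance of (u, v)
    eigvals, eigvecs = np.linalg.eigh(cov)       # orthogonal diagonalisation
    return eigvecs[:, np.argmax(eigvals)], eigvals
```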

  11. RHOBOT: Radiation hardened robotics

    Energy Technology Data Exchange (ETDEWEB)

    Bennett, P.C.; Posey, L.D. [Sandia National Labs., Albuquerque, NM (United States)

    1997-10-01

    A survey of robotic applications in radioactive environments has been conducted, and analysis of robotic system components and their response to the varying types and strengths of radiation has been completed. Two specific robotic systems for accident recovery and nuclear fuel movement have been analyzed in detail for radiation hardness. Finally, a general design approach for radiation-hardened robotics systems has been developed and is presented. This report completes this project which was funded under the Laboratory Directed Research and Development program.

  12. Micro robot bible

    International Nuclear Information System (INIS)

    Yoon, Jin Yeong

    2000-08-01

    This book deals with micro robots. It covers an overview and definition of robots, including entertainment robots; an introduction to the micromouse, covering its history, composition and rules; an overview of microcontrollers, covering their history, appearance and composition; an introduction to stepping motors, covering their types, structure, basic characteristics and driving methods; an overview of the sensor section and power supply; an understanding of the 80C196KC microcontroller; and basic driving programs for maze-searching algorithms, smooth turning and line tracing.

  13. RHOBOT: Radiation hardened robotics

    International Nuclear Information System (INIS)

    Bennett, P.C.; Posey, L.D.

    1997-10-01

    A survey of robotic applications in radioactive environments has been conducted, and analysis of robotic system components and their response to the varying types and strengths of radiation has been completed. Two specific robotic systems for accident recovery and nuclear fuel movement have been analyzed in detail for radiation hardness. Finally, a general design approach for radiation-hardened robotics systems has been developed and is presented. This report completes this project which was funded under the Laboratory Directed Research and Development program

  14. Two Legged Walking Robot

    OpenAIRE

    Kraus, V.

    2015-01-01

    The aim of this work is to construct a two-legged wirelessly controlled walking robot. This paper describes the construction of the robot, its control electronics, and the solution of the wireless control. The article also includes a description of the application to control the robot. The control electronics of the walking robot are built using the development kit Arduino Mega, which is enhanced with WiFi module allowing the wireless control, a set of ultrasonic sensors for detecting obstacl...

  15. Micro robot bible

    Energy Technology Data Exchange (ETDEWEB)

    Yoon, Jin Yeong

    2000-08-15

    This book deals with micro robots. It covers an overview and definition of robots, including entertainment robots; an introduction to the micromouse, covering its history, composition and rules; an overview of microcontrollers, covering their history, appearance and composition; an introduction to stepping motors, covering their types, structure, basic characteristics and driving methods; an overview of the sensor section and power supply; an understanding of the 80C196KC microcontroller; and basic driving programs for maze-searching algorithms, smooth turning and line tracing.

  16. Robots at Work

    OpenAIRE

    Graetz, Georg; Michaels, Guy

    2015-01-01

    Despite ubiquitous discussions of robots' potential impact, there is almost no systematic empirical evidence on their economic effects. In this paper we analyze for the first time the economic impact of industrial robots, using new data on a panel of industries in 17 countries from 1993-2007. We find that industrial robots increased both labor productivity and value added. Our panel identification is robust to numerous controls, and we find similar results instrumenting increased robot use wi...

  17. Robots in the Roses

    OpenAIRE

    2014-01-01

    2014-04 Robots in the Roses A CRUSER Sponsored Event. The 4th Annual Robots in the Roses provides a venue for Faculty & NPS Students to showcase unmanned systems research (current or completed) and recruit NPS Students to join in researching on your project. Posters, robots, vehicles, videos, and even just plain humans welcome! Families are welcome to attend Robots in the Roses as we'll have a STEM activity for children to participate in.

  18. Modular robot

    International Nuclear Information System (INIS)

    Ferrante, T.A.

    1997-01-01

    A modular robot may comprise a main body having a structure defined by a plurality of stackable modules. The stackable modules may comprise a manifold, a valve module, and a control module. The manifold may comprise a top surface and a bottom surface having a plurality of fluid passages contained therein, at least one of the plurality of fluid passages terminating in a valve port located on the bottom surface of the manifold. The valve module is removably connected to the manifold and selectively fluidically connects the plurality of fluid passages contained in the manifold to a supply of pressurized fluid and to a vent. The control module is removably connected to the valve module and actuates the valve module to selectively control a flow of pressurized fluid through different ones of the plurality of fluid passages in the manifold. The manifold, valve module, and control module are mounted together in a sandwich-like manner and comprise a main body. A plurality of leg assemblies are removably connected to the main body and are removably fluidically connected to the fluid passages in the manifold so that each of the leg assemblies can be selectively actuated by the flow of pressurized fluid in different ones of the plurality of fluid passages in the manifold. 12 figs

  19. A neuromorphic controller for a robotic vehicle equipped with a dynamic vision sensor

    OpenAIRE

    Blum, Hermann; Dietmüller, Alexander; Milde, Moritz; Conradt, Jörg; Indiveri, Giacomo; Sandamirskaya, Yulia

    2017-01-01

    Neuromorphic electronic systems exhibit advantageous characteristics, in terms of low energy consumption and low response latency, which can be useful in robotic applications that require compact and low power embedded computing resources. However, these neuromorphic circuits still face significant limitations that make their usage challenging: these include low precision, variability of components, sensitivity to noise and temperature drifts, as well as the currently limited number of neuron...

  20. Robot 2015 : Second Iberian Robotics Conference : Advances in Robotics

    CERN Document Server

    Moreira, António; Lima, Pedro; Montano, Luis; Muñoz-Martinez, Victor

    2016-01-01

    This book contains a selection of papers accepted for presentation and discussion at ROBOT 2015: Second Iberian Robotics Conference, held in Lisbon, Portugal, November 19th-21th, 2015. ROBOT 2015 is part of a series of conferences that are a joint organization of SPR – “Sociedade Portuguesa de Robótica/ Portuguese Society for Robotics”, SEIDROB – Sociedad Española para la Investigación y Desarrollo de la Robótica/ Spanish Society for Research and Development in Robotics and CEA-GTRob – Grupo Temático de Robótica/ Robotics Thematic Group. The conference organization had also the collaboration of several universities and research institutes, including: University of Minho, University of Porto, University of Lisbon, Polytechnic Institute of Porto, University of Aveiro, University of Zaragoza, University of Malaga, LIACC, INESC-TEC and LARSyS. Robot 2015 was focussed on the Robotics scientific and technological activities in the Iberian Peninsula, although open to research and delegates from other...

  1. Vision and Task Assistance using Modular Wireless In Vivo Surgical Robots

    Science.gov (United States)

    Platt, Stephen R.; Hawks, Jeff A.; Rentschler, Mark E.

    2009-01-01

    Minimally invasive abdominal surgery (laparoscopy) results in superior patient outcomes compared to conventional open surgery. However, the difficulty of manipulating traditional laparoscopic tools from outside the body of the patient generally limits these benefits to patients undergoing relatively low complexity procedures. The use of tools that fit entirely inside the peritoneal cavity represents a novel approach to laparoscopic surgery. Our previous work demonstrated that miniature mobile and fixed-based in vivo robots using tethers for power and data transmission can successfully operate within the abdominal cavity. This paper describes the development of a modular wireless mobile platform for in vivo sensing and manipulation applications. Design details and results of ex vivo and in vivo tests of robots with biopsy grasper, staple/clamp, video, and physiological sensor payloads are presented. These types of self-contained surgical devices are significantly more transportable and lower in cost than current robotic surgical assistants. They could ultimately be carried and deployed by non-medical personnel at the site of an injury to allow a remotely located surgeon to provide critical first response medical intervention irrespective of the location of the patient. PMID:19237337

  2. Vision and task assistance using modular wireless in vivo surgical robots.

    Science.gov (United States)

    Platt, Stephen R; Hawks, Jeff A; Rentschler, Mark E

    2009-06-01

    Minimally invasive abdominal surgery (laparoscopy) results in superior patient outcomes compared to conventional open surgery. However, the difficulty of manipulating traditional laparoscopic tools from outside the body of the patient generally limits these benefits to patients undergoing relatively low complexity procedures. The use of tools that fit entirely inside the peritoneal cavity represents a novel approach to laparoscopic surgery. Our previous work demonstrated that miniature mobile and fixed-based in vivo robots using tethers for power and data transmission can successfully operate within the abdominal cavity. This paper describes the development of a modular wireless mobile platform for in vivo sensing and manipulation applications. Design details and results of ex vivo and in vivo tests of robots with biopsy grasper, staple/clamp, video, and physiological sensor payloads are presented. These types of self-contained surgical devices are significantly more transportable and lower in cost than current robotic surgical assistants. They could ultimately be carried and deployed by nonmedical personnel at the site of an injury to allow a remotely located surgeon to provide critical first response medical intervention irrespective of the location of the patient.

  3. Building a Better Robot

    Science.gov (United States)

    Navah, Jan

    2012-01-01

    Kids love to build robots, letting their imaginations run wild with thoughts of what they might look like and what they could be programmed to do. Yet when students use cereal boxes and found objects to make robots, often the projects look too similar and tend to fall apart. This alternative allows students to "build" robots in a different way,…

  4. Open middleware for robotics

    CSIR Research Space (South Africa)

    Namoshe, M

    2008-12-01

    Full Text Available and their technologies within the field of multi-robot systems to ease the difficulty of realizing robot applications. And lastly, an example of algorithm development for multi-robot co-operation using one of the discussed software architecture is presented...

  5. Learning robotics using Python

    CERN Document Server

    Joseph, Lentin

    2015-01-01

    If you are an engineer, a researcher, or a hobbyist, and you are interested in robotics and want to build your own robot, this book is for you. Readers are assumed to be new to robotics but should have experience with Python.

  6. Robots de servicio

    Directory of Open Access Journals (Sweden)

    Rafael Aracil

    2008-04-01

    Full Text Available Abstract: The term Service Robots appeared at the end of the 1980s from the need to develop machines and systems capable of working in environments other than factory settings. Service Robots had to be able to work in unstructured environments, under changing environmental conditions, and in close interaction with humans. In 1995 the Technical Committee on Service Robots was created by the IEEE Robotics and Automation Society, and in 2000 this committee defined the application areas of Service Robots, which can be divided into two large groups: 1) non-manufacturing productive sectors such as construction, agriculture, shipbuilding, mining, medicine, etc., and 2) service sectors proper: personal assistance, cleaning, surveillance, education, entertainment, etc. This paper gives a brief review of the main concepts and applications of service robots. Keywords: Service robots, autonomous robots, outdoor robots, education and entertainment robots, walking and climbing robots, humanoid robots

  7. Robotic hand and fingers

    Science.gov (United States)

    Salisbury, Curt Michael; Dullea, Kevin J.

    2017-06-06

    Technologies pertaining to a robotic hand are described herein. The robotic hand includes one or more fingers releasably attached to a robotic hand frame. The fingers can abduct and adduct as well as flex and tense. The fingers are releasably attached to the frame by magnets that allow for the fingers to detach from the frame when excess force is applied to the fingers.

  8. Biomimetic vibrissal sensing for robots.

    Science.gov (United States)

    Pearson, Martin J; Mitchinson, Ben; Sullivan, J Charles; Pipe, Anthony G; Prescott, Tony J

    2011-11-12

    Active vibrissal touch can be used to replace or to supplement sensory systems such as computer vision and, therefore, improve the sensory capacity of mobile robots. This paper describes how arrays of whisker-like touch sensors have been incorporated onto mobile robot platforms taking inspiration from biology for their morphology and control. There were two motivations for this work: first, to build a physical platform on which to model, and therefore test, recent neuroethological hypotheses about vibrissal touch; second, to exploit the control strategies and morphology observed in the biological analogue to maximize the quality and quantity of tactile sensory information derived from the artificial whisker array. We describe the design of a new whiskered robot, Shrewbot, endowed with a biomimetic array of individually controlled whiskers and a neuroethologically inspired whisking pattern generation mechanism. We then present results showing how the morphology of the whisker array shapes the sensory surface surrounding the robot's head, and demonstrate the impact of active touch control on the sensory information that can be acquired by the robot. We show that adopting bio-inspired, low latency motor control of the rhythmic motion of the whiskers in response to contact-induced stimuli usefully constrains the sensory range, while also maximizing the number of whisker contacts. The robot experiments also demonstrate that the sensory consequences of active touch control can be usefully investigated in biomimetic robots.

  9. Intelligent robot trends for 1998

    Science.gov (United States)

    Hall, Ernest L.

    1998-10-01

    An intelligent robot is a remarkably useful combination of a manipulator, sensors and controls. The use of these machines in factory automation can improve productivity, increase product quality and improve competitiveness. This paper presents a discussion of recent technical and economic trends. Technically, the machines are faster, cheaper, more repeatable, more reliable and safer. The knowledge base of inverse kinematic and dynamic solutions and intelligent controls is increasing. More attention is being given by industry to robots, vision and motion controls. New areas of usage are emerging for service robots, remote manipulators and automated guided vehicles. Economically, the robotics industry now has a 1.1 billion-dollar market in the U.S. and is growing. Feasibility studies results are presented which also show decreasing costs for robots and unaudited healthy rates of return for a variety of robotic applications. However, the road from inspiration to successful application can be long and difficult, often taking decades to achieve a new product. A greater emphasis on mechatronics is needed in our universities. Certainly, more cooperation between government, industry and universities is needed to speed the development of intelligent robots that will benefit industry and society.

  10. Parallel algorithm for dominant points correspondences in robot binocular stereo vision

    Science.gov (United States)

    Al-Tammami, A.; Singh, B.

    1993-01-01

    This paper presents an algorithm to find correspondences between points representing dominant features in robot stereo vision. The algorithm consists of two main steps: dominant point extraction and dominant point matching. In the feature extraction phase, the algorithm utilizes the widely used Moravec Interest Operator and two other operators: the Prewitt Operator and a new operator called the Gradient Angle Variance Operator. The Interest Operator in the Moravec algorithm was used to exclude featureless areas and simple edges oriented in the vertical, horizontal, and two diagonal directions; it incorrectly detected points on edges that are not along these four main directions. The new algorithm uses the Prewitt operator to exclude featureless areas, so that the Interest Operator is applied only on the edges to exclude simple edges and to leave interesting points. This modification speeds up the extraction process by approximately 5 times. The Gradient Angle Variance (GAV), an operator which calculates the variance of the gradient angle in a window around the point under consideration, is then applied to the interesting points to exclude the redundant ones and leave the actual dominant ones. The matching phase is performed after the extraction of the dominant points in both stereo images. The matching starts with dominant points in the left image and does a local search, looking for corresponding dominant points in the right image. The search is geometrically constrained by the epipolar line of the parallel-axes stereo geometry and by the maximum disparity of the application environment. If exactly one dominant point in the right image lies in the search area, then it is the corresponding point of the reference dominant point in the left image. A parameter provided by the GAV is thresholded and used as a rough similarity measure to select the corresponding dominant point if there is more than one point in the search area. The correlation is used as
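
    A small sketch of a Gradient Angle Variance-style measure is given below; it uses the circular variance of gradient directions (a substitution chosen here to handle angle wrap-around) rather than whatever exact variance definition the paper uses, and the window size is illustrative.

```python
import numpy as np

def gradient_angle_variance(gray, x, y, win=7):
    """GAV-style measure of a window centred on (x, y): the spread of
    gradient directions, which is low on straight edges and high at
    corner-like, dominant points. Returns a value in [0, 1]."""
    half = win // 2
    patch = gray[y - half:y + half + 1, x - half:x + half + 1].astype(float)
    gy, gx = np.gradient(patch)                 # image gradients in the window
    angles = np.arctan2(gy, gx).ravel()
    # Circular variance: 1 - |mean resultant vector| avoids the wrap at +/- pi.
    return 1.0 - np.abs(np.mean(np.exp(1j * angles)))
```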

  11. 25th Conference on Robotics in Alpe-Adria-Danube Region

    CERN Document Server

    Borangiu, Theodor

    2017-01-01

    This book presents the proceedings of the 25th International Conference on Robotics in Alpe-Adria-Danube Region, RAAD 2016 held in Belgrade, Serbia, on June 30th–July 2nd, 2016. In keeping with the tradition of the event, RAAD 2016 covered all the important areas of research and innovation in new robot designs and intelligent robot control, with papers including Intelligent robot motion control; Robot vision and sensory processing; Novel design of robot manipulators and grippers; Robot applications in manufacturing and services; Autonomous systems, humanoid and walking robots; Human–robot interaction and collaboration; Cognitive robots and emotional intelligence; Medical, human-assistive robots and prosthetic design; Robots in construction and arts, and Evolution, education, legal and social issues of robotics. For the first time in RAAD history, the themes cloud robots, legal and ethical issues in robotics as well as robots in arts were included in the technical program. The book is a valuable resource f...

  12. [Robotics in pediatric surgery].

    Science.gov (United States)

    Camps, J I

    2011-10-01

    Despite the extensive use of robotics in the adult population, the use of robotics in pediatrics has not been well accepted. There is still a lack of awareness from pediatric surgeons on how to use the robotic equipment, its advantages and indications. Benefit is still controversial. Dexterity and better visualization of the surgical field are one of the strong values. Conversely, cost and a lack of small instruments prevent the use of robotics in the smaller patients. The aim of this manuscript is to present the controversies about the use of robotics in pediatric surgery.

  13. Low cost submarine robot

    Directory of Open Access Journals (Sweden)

    Ponlachart Chotikarn

    2010-10-01

    Full Text Available A submarine robot is a semi-autonomous underwater vehicle used mainly for marine environmental research. We aim to develop a low-cost, semi-autonomous submarine robot which is able to travel underwater. The robot's structure was designed and patented using a novel diving system that employs a volume adjustment mechanism to vary the robot's density. The light weight, flexibility and small structure provided by PVC are used to construct the torpedo-shaped robot. Hydraulic seals and rubber O-rings are used to prevent water leakage. The robot is controlled by a wired communication system.

  14. Advances in robot kinematics

    CERN Document Server

    Khatib, Oussama

    2014-01-01

    The topics addressed in this book cover the whole range of kinematic analysis, synthesis and design and consider robotic systems possessing serial, parallel and cable driven mechanisms. The robotic systems range from being less than fully mobile to kinematically redundant to overconstrained.  The fifty-six contributions report the latest results in robot kinematics with emphasis on emerging areas such as design and control of humanoids or humanoid subsystems. The book is of interest to researchers wanting to bring their knowledge up to date regarding modern topics in one of the basic disciplines in robotics, which relates to the essential property of robots, the motion of mechanisms.

  15. Vision-based online vibration estimation of the in-vessel inspection flexible robot with short-time Fourier transformation

    Energy Technology Data Exchange (ETDEWEB)

    Wang, Hesheng [Key Laboratory of System Control and Information Processing, Ministry of Education of China (China); Department of Automation, Shanghai Jiao Tong University, Shanghai 200240 (China); Chen, Weidong, E-mail: wdchen@sjtu.edu.cn [Key Laboratory of System Control and Information Processing, Ministry of Education of China (China); Department of Automation, Shanghai Jiao Tong University, Shanghai 200240 (China); Xu, Lifei; He, Tao [Key Laboratory of System Control and Information Processing, Ministry of Education of China (China); Department of Automation, Shanghai Jiao Tong University, Shanghai 200240 (China)

    2015-10-15

    Highlights: • A vision-based online vibration estimation method for a flexible arm is proposed. • The vibration signal is obtained by image processing in unknown environments. • Vibration parameters are estimated by short-time Fourier transformation. - Abstract: Vibration occurring during the motion of a flexible robot, or under external disturbance, may affect positioning accuracy and image quality because of the robot's structural features and material properties, and should therefore be suppressed. In the Tokamak environment, real-time vibration information is needed for vibration suppression of the robotic arm; however, some sensors are not allowed in this extreme environment. This paper proposes a vision-based method for online vibration estimation of a flexible manipulator, which utilizes the environment image information from the end-effector camera to estimate its vibration. Short-time Fourier transformation with an adaptive window length is used to estimate the vibration parameters of non-stationary vibration signals. Experiments with a one-link flexible manipulator equipped with a camera are carried out to validate the feasibility of this method.
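
    The short-time Fourier analysis step can be sketched with SciPy as below; note that a fixed window length is used here, whereas the paper adapts the window length, and the input is assumed to be the tip-displacement signal already extracted from the camera images.

```python
import numpy as np
from scipy.signal import stft

def dominant_vibration(signal, fs, nperseg=256):
    """Track the dominant vibration frequency of a displacement signal
    over time with a short-time Fourier transform.

    signal : 1-D array of tip displacements extracted from the images
    fs     : sampling rate in Hz
    Returns (frame_times, dominant_frequency_per_frame).
    """
    f, t, Z = stft(signal, fs=fs, nperseg=nperseg)
    mag = np.abs(Z)
    dominant = f[np.argmax(mag, axis=0)]    # peak frequency in each frame
    return t, dominant
```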

  16. Vision-based online vibration estimation of the in-vessel inspection flexible robot with short-time Fourier transformation

    International Nuclear Information System (INIS)

    Wang, Hesheng; Chen, Weidong; Xu, Lifei; He, Tao

    2015-01-01

    Highlights: • A vision-based online vibration estimation method for a flexible arm is proposed. • The vibration signal is obtained by image processing in unknown environments. • Vibration parameters are estimated by short-time Fourier transformation. - Abstract: Vibration occurring during the motion of a flexible robot, or under external disturbance, may affect positioning accuracy and image quality because of the robot's structural features and material properties, and should therefore be suppressed. In the Tokamak environment, real-time vibration information is needed for vibration suppression of the robotic arm; however, some sensors are not allowed in this extreme environment. This paper proposes a vision-based method for online vibration estimation of a flexible manipulator, which utilizes the environment image information from the end-effector camera to estimate its vibration. Short-time Fourier transformation with an adaptive window length is used to estimate the vibration parameters of non-stationary vibration signals. Experiments with a one-link flexible manipulator equipped with a camera are carried out to validate the feasibility of this method.

  17. CANINE: a robotic mine dog

    Science.gov (United States)

    Stancil, Brian A.; Hyams, Jeffrey; Shelley, Jordan; Babu, Kartik; Badino, Hernán.; Bansal, Aayush; Huber, Daniel; Batavia, Parag

    2013-01-01

    Neya Systems, LLC competed in the CANINE program sponsored by the U.S. Army Tank Automotive Research Development and Engineering Center (TARDEC), which culminated in a competition held at Fort Benning as part of the 2012 Robotics Rodeo. As part of this program, we developed a robot with the capability to learn and recognize the appearance of target objects, conduct an area search amid distractor objects and obstacles, and relocate the target object, in the same way that mine dogs and sentry dogs are used within military contexts for exploration and threat detection. Neya teamed with the Robotics Institute at Carnegie Mellon University to develop vision-based solutions for probabilistic target learning and recognition. In addition, we used a Mission Planning and Management System (MPMS) to orchestrate complex search and retrieval tasks using a general set of modular autonomous services relating to robot mobility, perception and grasping.

  18. A Vision for the Exploration of Mars: Robotic Precursors Followed by Humans to Mars Orbit in 2033

    Science.gov (United States)

    Sellers, Piers J.; Garvin, James B.; Kinney, Anne L.; Amato, Michael J.; White, Nicholas E.

    2012-01-01

    The reformulation of the Mars program gives NASA a rare opportunity to deliver a credible vision in which humans, robots, and advancements in information technology combine to open the deep space frontier to Mars. There is a broad challenge in the reformulation of the Mars exploration program that truly sets the stage for: 'a strategic collaboration between the Science Mission Directorate (SMD), the Human Exploration and Operations Mission Directorate (HEOMD) and the Office of the Chief Technologist, for the next several decades of exploring Mars'. Any strategy that links all three challenge areas listed into a true long-term strategic program necessitates discussion. NASA's SMD and HEOMD should accept the President's challenge and vision by developing an integrated program that will enable a human expedition to Mars orbit in 2033 with the goal of returning samples suitable for addressing the question of whether life exists or ever existed on Mars.

  19. Marine Robot Autonomy

    CERN Document Server

    2013-01-01

    Autonomy for Marine Robots provides a timely and insightful overview of intelligent autonomy in marine robots. A brief history of this emerging field is provided, along with a discussion of the challenges unique to the underwater environment and their impact on the level of intelligent autonomy required. Topics covered at length examine advanced frameworks, path-planning, fault tolerance, machine learning, and cooperation as relevant to marine robots that need intelligent autonomy. The book also discusses and offers solutions for the unique challenges presented by more complex missions and the dynamic underwater environment when operating autonomous marine robots, and includes case studies that demonstrate intelligent autonomy in marine robots performing underwater simultaneous localization and mapping. Autonomy for Marine Robots is an ideal book for researchers and engineers interested in the field of marine robots.

  20. ROBOT TASK SCENE ANALYZER

    International Nuclear Information System (INIS)

    Hamel, William R.; Everett, Steven

    2000-01-01

    Environmental restoration and waste management (ER and WM) challenges in the United States Department of Energy (DOE), and around the world, involve radiation or other hazards which will necessitate the use of remote operations to protect human workers from dangerous exposures. Remote operations carry the implication of greater costs since remote work systems are inherently less productive than contact human work due to the inefficiencies/complexities of teleoperation. To reduce costs and improve quality, much attention has been focused on methods to improve the productivity of combined human operator/remote equipment systems; the achievements to date are modest at best. The most promising avenue in the near term is to supplement conventional remote work systems with robotic planning and control techniques borrowed from manufacturing and other domains where robotic automation has been used. Practical combinations of teleoperation and robotic control will yield telerobotic work systems that outperform currently available remote equipment. It is believed that practical telerobotic systems may increase remote work efficiencies significantly. Increases of 30% to 50% have been conservatively estimated for typical remote operations. It is important to recognize that the basic hardware and software features of most modern remote manipulation systems can readily accommodate the functionality required for telerobotics. Further, several of the additional system ingredients necessary to implement telerobotic control--machine vision, 3D object and workspace modeling, automatic tool path generation and collision-free trajectory planning--are existent.

  1. ROBOT TASK SCENE ANALYZER

    Energy Technology Data Exchange (ETDEWEB)

    William R. Hamel; Steven Everett

    2000-08-01

    Environmental restoration and waste management (ER and WM) challenges in the United States Department of Energy (DOE), and around the world, involve radiation or other hazards which will necessitate the use of remote operations to protect human workers from dangerous exposures. Remote operations carry the implication of greater costs since remote work systems are inherently less productive than contact human work due to the inefficiencies/complexities of teleoperation. To reduce costs and improve quality, much attention has been focused on methods to improve the productivity of combined human operator/remote equipment systems; the achievements to date are modest at best. The most promising avenue in the near term is to supplement conventional remote work systems with robotic planning and control techniques borrowed from manufacturing and other domains where robotic automation has been used. Practical combinations of teleoperation and robotic control will yield telerobotic work systems that outperform currently available remote equipment. It is believed that practical telerobotic systems may increase remote work efficiencies significantly. Increases of 30% to 50% have been conservatively estimated for typical remote operations. It is important to recognize that the basic hardware and software features of most modern remote manipulation systems can readily accommodate the functionality required for telerobotics. Further, several of the additional system ingredients necessary to implement telerobotic control--machine vision, 3D object and workspace modeling, automatic tool path generation and collision-free trajectory planning--are existent.

  2. Non-manufacturing applications of robotics

    International Nuclear Information System (INIS)

    Dauchez, P.

    2000-12-01

    This book presents the different non-manufacturing sectors of activity where robotics can have useful or necessary applications: underwater robotics, agriculture robotics, road work robotics, nuclear robotics, medical-surgery robotics, aids to disabled people, entertainment robotics. Service robotics has been voluntarily excluded because this developing sector is not mature yet. (J.S.)

  3. Embedded vision equipment of industrial robot for inline detection of product errors by clustering–classification algorithms

    Directory of Open Access Journals (Sweden)

    Kamil Zidek

    2016-10-01

    Full Text Available The article deals with the design of embedded vision equipment for industrial robots, intended for inline diagnosis of product errors during the manipulation process. The vision equipment can be attached to the end effector of robots or manipulators; it captures an image of the part surface before grasping, searches for errors during manipulation, and separates faulty products from the subsequent manufacturing operations. The new approach is a methodology based on machine learning for the automated identification, localization, and diagnosis of systematic errors in products of high-volume production. To achieve this, we used two main data-mining techniques: clustering for grouping similar errors and classification methods for assigning any new error to a proposed class. The presented methodology consists of three separate processing levels: image acquisition for error parameterization, data clustering for categorizing errors into separate classes, and prediction of new patterns with the proposed class model. We chose main representatives of clustering algorithms, for example, K-means from vector quantization, the fast library for approximate nearest neighbours (FLANN) from hierarchical clustering, and density-based spatial clustering of applications with noise (DBSCAN) from algorithms based on the density of the data. For classification, we selected six major algorithms: support vector machines, normal Bayesian classifier, K-nearest neighbour, gradient boosted trees, random trees, and neural networks. The selected algorithms were compared for speed and reliability and tested on two platforms: a desktop-based computer system and an embedded system based on a System on Chip (SoC) with vision equipment.
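
    As a rough, hedged illustration of the clustering-then-classification methodology described above (not the article's code), the sketch below groups assumed image-derived feature vectors into error classes with K-means and then trains a support vector machine to assign new samples to those classes; the feature extraction step is taken as given.

        # Illustrative sketch only: cluster assumed feature vectors into error
        # classes, then train a classifier to predict the class of new samples.
        import numpy as np
        from sklearn.cluster import KMeans
        from sklearn.svm import SVC

        rng = np.random.default_rng(0)
        features = rng.normal(size=(300, 16))     # assumed image-derived feature vectors

        labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(features)
        clf = SVC(kernel="rbf").fit(features, labels)   # classifier for new error patterns

        new_sample = rng.normal(size=(1, 16))
        print(clf.predict(new_sample))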

  4. Evolution of robotic arms.

    Science.gov (United States)

    Moran, Michael E

    2007-01-01

    The foundation of surgical robotics is in the development of the robotic arm. This is a thorough review of the literature on the nature and development of this device with emphasis on surgical applications. We have reviewed the published literature and classified robotic arms by their application: show, industrial application, medical application, etc. There is a definite trend in the manufacture of robotic arms toward more dextrous devices, more degrees-of-freedom, and capabilities beyond the human arm. da Vinci designed the first sophisticated robotic arm in 1495 with four degrees-of-freedom and an analog on-board controller supplying power and programmability. The left arm of von Kempelen's chess-playing automaton was quite sophisticated. Unimate introduced the first industrial robotic arm in 1961; it has subsequently evolved into the PUMA arm. In 1963 the Rancho arm was designed; Minsky's Tentacle arm appeared in 1968, Scheinman's Stanford arm in 1969, and MIT's Silver arm in 1974. Aird became the first cyborg human with a robotic arm in 1993. In 2000 Miguel Nicolelis redefined possible man-machine capacity in his work on cerebral implantation in owl monkeys directly interfacing with robotic arms both locally and at a distance. The robotic arm is the end-effector of robotic systems and is currently the hallmark feature of the da Vinci Surgical System making its entrance into surgical application. But, despite the potential advantages of this computer-controlled master-slave system, robotic arms have definite limitations. Ongoing work in robotics has many potential solutions to the drawbacks of current robotic surgical systems.

  5. Is Ethics of Robotics about Robots? Philosophy of Robotics Beyond Realism and Individualilsm.

    NARCIS (Netherlands)

    Coeckelbergh, Mark

    2011-01-01

    If we are doing ethics of robotics, what exactly is the object of our inquiry? This paper challenges 'individualist' robot ontology and 'individualist' social philosophy of robots. It is argued that ethics of robotics should not study and evaluate robotics exclusively in terms of individual

  6. Aerial service robotics: the AIRobots perspective

    NARCIS (Netherlands)

    Marconi, L.; Basile, F.; Caprari, G.; Carloni, Raffaella; Chiacchio, P.; Hurzeler, C.; Lippiello, V.; Naldi, R.; Siciliano, B.; Stramigioli, Stefano; Zwicker, E.

    This paper presents the main vision and research activities of the ongoing European project AIRobots (Innovative Aerial Service Robot for Remote Inspection by Contact, www.airobots.eu). The goal of AIRobots is to develop a new generation of aerial service robots capable of supporting human beings

  7. JPL Robotics Technology Applicable to Agriculture

    Science.gov (United States)

    Udomkesmalee, Suraphol Gabriel; Kyte, L.

    2008-01-01

    This slide presentation describes several technologies developed for robotics that are applicable to agriculture. The technologies discussed are the detection of humans to allow safe operation of autonomous vehicles, and vision-guided robotic techniques for shoot selection, separation and transfer to growth media.

  8. Robotic devices for nuclear plant

    Energy Technology Data Exchange (ETDEWEB)

    Abel, E

    1986-05-01

    The article surveys the background of nuclear remote handling and its associated technology, robotics. Manipulators, robots, robot applications, extending the range of applications, and future developments, are all discussed.

  9. Evolutionary robotics – A review

    Indian Academy of Sciences (India)

    … a need for a technique by which the robot is able to acquire new behaviours automatically … Evolutionary robotics is a comparatively new field of robotics research, which seems to … Technical Report PCIA-94-04, Institute of Psychology …

  10. Fast Segmentation of Colour Apple Image under All-Weather Natural Conditions for Vision Recognition of Picking Robots

    Directory of Open Access Journals (Sweden)

    Wei Ji

    2016-02-01

    Full Text Available In order to resolve the poor real-time performance of the normalized cut (Ncut) method in apple vision recognition for picking robots, a fast segmentation method for colour apple images based on the adaptive mean-shift and Ncut methods is proposed in this paper. Firstly, the traditional pixel-based Ncut method is changed into a region-based Ncut method by adaptive mean-shift initial segmentation. In this way, the number of vertices and edges in the graph is dramatically reduced and the computation speed is improved. Secondly, the image is divided into region maps by extracting the R-B colour feature, which not only reduces the quantity of regions, but also to some extent overcomes the effect of illumination. On this basis, every region map is represented by a region point, so an undirected graph of the R-B colour grey-level feature is obtained. Finally, regarding the undirected graph as the input of Ncut, we construct the weight matrix W from the region points and determine the number of clusters based on the decision-theoretic rough set. Adaptive clustering segmentation can then be implemented by the Ncut algorithm. Experimental results show that the maximum segmentation error is 3% and the average recognition time is less than 0.7 s, which meets the requirements of a real-time picking robot.
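
    For illustration only, the sketch below shows the flavour of the pre-segmentation stage under stated assumptions (OpenCV available, a hypothetical input image "apple.jpg"): mean-shift filtering produces near-homogeneous regions and the R-B colour difference highlights red fruit. The region-level Ncut clustering and the decision-theoretic rough set step are not reproduced here.

        # Sketch under assumptions; not the paper's implementation.
        import cv2
        import numpy as np

        img = cv2.imread("apple.jpg")                        # hypothetical input image
        smoothed = cv2.pyrMeanShiftFiltering(img, 15, 30)    # mean-shift pre-segmentation

        b, g, r = cv2.split(smoothed.astype(np.int16))
        rb = np.clip(r - b, 0, 255).astype(np.uint8)         # R-B colour feature

        _, mask = cv2.threshold(rb, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
        cv2.imwrite("apple_mask.png", mask)                  # bright pixels ~ apple regions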

  11. Robot Games for Elderly

    DEFF Research Database (Denmark)

    Hansen, Søren Tranberg

    2011-01-01

    improve a person’s overall health, and this thesis investigates how games based on an autonomous, mobile robot platform, can be used to motivate elderly to move physically while playing. The focus of the investigation is on the development of games for an autonomous, mobile robot based on algorithms using...... spatio-temporal information about player behaviour - more specifically, I investigate three types of games each using a different control strategy. The first game is based on basic robot control which allows the robot to detect and follow a person. A field study in a rehabilitation centre and a nursing....... The robot facilitates interaction, and the study suggests that robot based games potentially can be used for training balance and orientation. The second game consists in an adaptive game algorithm which gradually adjusts the game challenge to the mobility skills of the player based on spatio...

  12. Robot-laser system

    International Nuclear Information System (INIS)

    Akeel, H.A.

    1987-01-01

    A robot-laser system is described for providing a laser beam at a desired location, the system comprising: a laser beam source; a robot including a plurality of movable parts including a hollow robot arm having a central axis along which the laser source directs the laser beam; at least one mirror for reflecting the laser beam from the source to the desired location, the mirror being mounted within the robot arm to move therewith and relative thereto about a transverse axis that extends angularly to the central axis of the robot arm; and an automatic programmable control system for automatically moving the mirror about the transverse axis relative to and in synchronization with movement of the robot arm to thereby direct the laser beam to the desired location as the arm is moved.

  13. Survival of falling robots

    Science.gov (United States)

    Cameron, Jonathan M.; Arkin, Ronald C.

    1992-01-01

    As mobile robots are used in more uncertain and dangerous environments, it will become important to design them so that they can survive falls. In this paper, we examine a number of mechanisms and strategies that animals use to withstand these potentially catastrophic events and extend them to the design of robots. A brief survey of several aspects of how common cats survive falls provides an understanding of the issues involved in preventing traumatic injury during a falling event. After outlining situations in which robots might fall, a number of factors affecting their survival are described. From this background, several robot design guidelines are derived. These include recommendations for the physical structure of the robot as well as requirements for the robot control architecture. A control architecture is proposed based on reactive control techniques and action-oriented perception that is geared to support this form of survival behavior.

  14. Robotic surgery update.

    Science.gov (United States)

    Jacobsen, G; Elli, F; Horgan, S

    2004-08-01

    Minimally invasive surgical techniques have revolutionized the field of surgery. Telesurgical manipulators (robots) and new information technologies strive to improve upon currently available minimally invasive techniques and create new possibilities. A retrospective review of all robotic cases at a single academic medical center from August 2000 until November 2002 was conducted. A comprehensive literature evaluation on robotic surgical technology was also performed. Robotic technology is safely and effectively being applied at our institution. Robotic and information technologies have improved upon minimally invasive surgical techniques and created new opportunities not attainable in open surgery. Robotic technology offers many benefits over traditional minimal access techniques and has been proven safe and effective. Further research is needed to better define the optimal application of this technology. Credentialing and educational requirements also need to be delineated.

  15. Survival of falling robots

    Science.gov (United States)

    Cameron, Jonathan M.; Arkin, Ronald C.

    1992-02-01

    As mobile robots are used in more uncertain and dangerous environments, it will become important to design them so that they can survive falls. In this paper, we examine a number of mechanisms and strategies that animals use to withstand these potentially catastrophic events and extend them to the design of robots. A brief survey of several aspects of how common cats survive falls provides an understanding of the issues involved in preventing traumatic injury during a falling event. After outlining situations in which robots might fall, a number of factors affecting their survival are described. From this background, several robot design guidelines are derived. These include recommendations for the physical structure of the robot as well as requirements for the robot control architecture. A control architecture is proposed based on reactive control techniques and action-oriented perception that is geared to support this form of survival behavior.

  16. Fundamentals of soft robot locomotion

    OpenAIRE

    Calisti, M.; Picardi, G.; Laschi, C.

    2017-01-01

    Soft robotics and its related technologies enable robot abilities in several robotics domains including, but not exclusively related to, manipulation, manufacturing, human–robot interaction and locomotion. Although field applications have emerged for soft manipulation and human–robot interaction, mobile soft robots appear to remain in the research stage, involving the somehow conflictual goals of having a deformable body and exerting forces on the environment to achieve locomotion. This p...

  17. Robotic liver surgery

    Science.gov (United States)

    Leung, Universe

    2014-01-01

    Robotic surgery is an evolving technology that has been successfully applied to a number of surgical specialties, but its use in liver surgery has so far been limited. In this review article we discuss the challenges of minimally invasive liver surgery, the pros and cons of robotics, the evolution of medical robots, and the potentials in applying this technology to liver surgery. The current data in the literature are also presented. PMID:25392840

  18. Robotized transcranial magnetic stimulation

    CERN Document Server

    Richter, Lars

    2014-01-01

    Presents new, cutting-edge algorithms for robot/camera calibration, sensor fusion and sensor calibration Explores the main challenges for accurate coil positioning, such as head motion, and outlines how active robotic motion compensation can outperform hand-held solutions Analyzes how a robotized system in medicine can alleviate concerns with a patient's safety, and presents a novel fault-tolerant algorithm (FTA) sensor for system safety

  19. Raspberry Pi robotics projects

    CERN Document Server

    Grimmett, Richard

    2015-01-01

    This book is for enthusiasts who want to use the Raspberry Pi to build complex robotics projects. With the aid of the step-by-step instructions in this book, you can construct complex robotics projects that can move, talk, listen, see, swim, or fly. No previous Raspberry Pi robotics experience is assumed, but even experts will find unexpected and interesting information in this invaluable guide.

  20. Robots as Confederates

    DEFF Research Database (Denmark)

    Fischer, Kerstin

    2016-01-01

    This paper addresses the use of robots in experimental research for the study of human language, human interaction, and human nature. It is argued that robots make excellent confederates that can be completely controlled, yet which engage human participants in interactions that allow us to study...... numerous linguistic and psychological variables in isolation in an ecologically valid way. Robots thus combine the advantages of observational studies and of controlled experimentation....

  1. Robotics in General Surgery

    OpenAIRE

    Wall, James; Chandra, Venita; Krummel, Thomas

    2008-01-01

    In summary, robotics has made a significant contribution to General Surgery in the past 20 years. In its infancy, surgical robotics has seen a shift from early systems that assisted the surgeon to current teleoperator systems that can enhance surgical skills. Telepresence and augmented reality surgery are being realized, while research and development into miniaturization and automation is rapidly moving forward. The future of surgical robotics is bright. Researchers are working to address th...

  2. Robotic hand project

    OpenAIRE

    Karaçizmeli, Cengiz; Çakır, Gökçe; Tükel, Dilek

    2014-01-01

    In this work, a mechatronics-based robotic hand is controlled using position data taken from a glove fitted with flex sensors that capture the finger bending of the human hand. The angular movements of the human hand's fingers are sensed and processed by a microcontroller, and the robotic hand is controlled by actuating servo motors. Tests have shown that the robotic hand can reproduce the movements of the human hand wearing the glove. This robotic hand can be used not only...

  3. Perspectives of construction robots

    Science.gov (United States)

    Stepanov, M. A.; Gridchin, A. M.

    2018-03-01

    This article is an overview of construction robot features, based on formulating lists of requirements for different types of construction robots in relation to different types of construction work. It describes a variety of construction works and ways to design new robots, or adapt existing robot designs, for a construction process. It also shows the prospects of AI-controlled machines and the implementation of automated control systems and networks on construction sites. Finally, different ways to develop and improve the construction process, including its ecological aspects, through wide robotization, the creation of data communication networks and, in the longer term, the establishment of a fully AI-controlled construction complex, are formulated.

  4. Robots de servicio

    OpenAIRE

    Aracil, Rafael; Balaguer, Carlos; Armada, Manuel

    2008-01-01

    8 pages, 9 figures. The term Service Robots appeared at the end of the 1980s out of the need to develop machines and systems capable of working in environments other than manufacturing plants. Service robots had to be able to work in unstructured environments, under changing ambient conditions and in close interaction with humans. In 1995 the IEEE Robotics and Automation Society created the Technical Committee on Service Robots, and this committee defined in the year...

  5. Human-Robot Interaction

    Science.gov (United States)

    Sandor, Aniko; Cross, E. Vincent, II; Chang, Mai Lee

    2015-01-01

    Human-robot interaction (HRI) is a discipline investigating the factors affecting the interactions between humans and robots. It is important to evaluate how the design of interfaces affect the human's ability to perform tasks effectively and efficiently when working with a robot. By understanding the effects of interface design on human performance, workload, and situation awareness, interfaces can be developed to appropriately support the human in performing tasks with minimal errors and with appropriate interaction time and effort. Thus, the results of research on human-robot interfaces have direct implications for the design of robotic systems. For efficient and effective remote navigation of a rover, a human operator needs to be aware of the robot's environment. However, during teleoperation, operators may get information about the environment only through a robot's front-mounted camera causing a keyhole effect. The keyhole effect reduces situation awareness which may manifest in navigation issues such as higher number of collisions, missing critical aspects of the environment, or reduced speed. One way to compensate for the keyhole effect and the ambiguities operators experience when they teleoperate a robot is adding multiple cameras and including the robot chassis in the camera view. Augmented reality, such as overlays, can also enhance the way a person sees objects in the environment or in camera views by making them more visible. Scenes can be augmented with integrated telemetry, procedures, or map information. Furthermore, the addition of an exocentric (i.e., third-person) field of view from a camera placed in the robot's environment may provide operators with the additional information needed to gain spatial awareness of the robot. Two research studies investigated possible mitigation approaches to address the keyhole effect: 1) combining the inclusion of the robot chassis in the camera view with augmented reality overlays, and 2) modifying the camera

  6. Advanced robot locomotion.

    Energy Technology Data Exchange (ETDEWEB)

    Neely, Jason C.; Sturgis, Beverly Rainwater; Byrne, Raymond Harry; Feddema, John Todd; Spletzer, Barry Louis; Rose, Scott E.; Novick, David Keith; Wilson, David Gerald; Buerger, Stephen P.

    2007-01-01

    This report contains the results of a research effort on advanced robot locomotion. The majority of this work focuses on walking robots. Walking robot applications range from delivering special payloads to unique locations that require human-like locomotion, to exoskeleton human-assistance applications. A walking robot could step over obstacles and move through narrow openings that a wheeled or tracked vehicle could not overcome. It could pick up and manipulate objects in ways that a standard robot gripper could not. Most importantly, a walking robot would be able to rapidly perform these tasks through an intuitive user interface that mimics natural human motion. The largest obstacle arises in emulating the stability and balance control naturally present in humans but needed for bipedal locomotion in a robot. A tracked robot is bulky and limited, but a wide wheel base assures passive stability. Human bipedal motion is so common that it is taken for granted, but bipedal motion requires active balance and stability control, for which the analysis is non-trivial. This report contains an extensive literature study on the state of the art of legged robotics, and it additionally provides the analysis, simulation, and hardware verification of two variants of a prototype leg design.

  7. Robotic assisted laparoscopic colectomy.

    LENUS (Irish Health Repository)

    Pandalai, S

    2010-06-01

    Robotic surgery has evolved over the last decade to compensate for limitations in human dexterity. It avoids the need for a trained assistant while decreasing error rates such as perforations. The nature of the robotic assistance varies from voice activated camera control to more elaborate telerobotic systems such as the Zeus and the Da Vinci where the surgeon controls the robotic arms using a console. Herein, we report the first series of robotic assisted colectomies in Ireland using a voice activated camera control system.

  8. Soft-Material Robotics

    OpenAIRE

    Wang, L; Nurzaman, SG; Iida, Fumiya

    2017-01-01

    There has been a boost of research activities in robotics using soft materials in the past ten years. It is expected that the use and control of soft materials can help realize robotic systems that are safer, cheaper, and more adaptable than the level that the conventional rigid-material robots can achieve. Contrary to a number of existing review and position papers on soft-material robotics, which mostly present case studies and/or discuss trends and challenges, the review focuses on the fun...

  9. Robotics for nuclear facilities

    International Nuclear Information System (INIS)

    Abe, Akira; Nakayama, Ryoichi; Kubo, Katsumi

    1988-01-01

    It is highly desirable that automatic or remotely controlled machines perform inspection and maintenance tasks in nuclear facilities. Toshiba has been working to develop multi-functional robots, with one typical example being a master-slave manipulator for use in reprocessing facilities. At the same time, the company is also working on the development of multi-purpose intelligent robots. One such device, an automatic inspection robot, to be deployed along a monorail, performs inspection by means of image processing technology, while an advanced intelligent maintenance robot is equipped with a special wheel-locomotion mechanism and manipulator and is designed to perform maintenance tasks. (author)

  10. Increasing Robotic Science Applications

    Data.gov (United States)

    National Aeronautics and Space Administration — The principal objectives are to demonstrate robotic-based scientific investigations and resource prospecting, and develop and demonstrate modular science instrument...

  11. DSLs in robotics

    DEFF Research Database (Denmark)

    Schultz, Ulrik Pagh; Bordignon, Mirko; Stoy, Kasper

    2017-01-01

    Robotic systems blend hardware and software in a holistic way that intrinsically raises many crosscutting concerns such as concurrency, uncertainty, and time constraints. These concerns make programming robotic systems challenging as expertise from multiple domains needs to be integrated...... conceptually and technically. Programming languages play a central role in providing a higher level of abstraction. This briefing presents a case study on the evolution of domain-specific languages based on modular robotics. The case study on the evolution of domain-specific languages is based on a series...... of DSL prototypes developed over five years for the domain of modular, self-reconfigurable robots....

  12. Learning for intelligent mobile robots

    Science.gov (United States)

    Hall, Ernest L.; Liao, Xiaoqun; Alhaj Ali, Souma M.

    2003-10-01

    Unlike intelligent industrial robots, which often work in a structured factory setting, intelligent mobile robots must often operate in an unstructured environment cluttered with obstacles and with many possible action paths. However, such machines have many potential applications in medicine, defense, industry and even the home that make their study important. Sensors such as vision are needed. However, in many applications some form of learning is also required. The purpose of this paper is to present a discussion of recent technical advances in learning for intelligent mobile robots. During the past 20 years, the use of intelligent industrial robots that are equipped not only with motion control systems but also with sensors such as cameras, laser scanners, or tactile sensors that permit adaptation to a changing environment has increased dramatically. However, relatively little has been done concerning learning. Adaptive and robust control permits one to achieve point-to-point and controlled-path operation in a changing environment. This problem can be solved with a learning control. In the unstructured environment, the terrain and consequently the load on the robot's motors are constantly changing. Learning the parameters of a proportional, integral and derivative (PID) controller and an artificial neural network provides adaptive and robust control. Learning may also be used for path following. Simulations that include learning may be conducted to see if a robot can learn its way through a cluttered array of obstacles. If a situation is performed repetitively, then learning can also be used in the actual application. To reach an even higher degree of autonomous operation, a new level of learning is required. Recently, learning theories such as the adaptive critic have been proposed. In this type of learning a critic provides a grade to the controller of an action module such as a robot. A creative control process is used that is "beyond the adaptive critic." A
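
    As a loose, hypothetical sketch of the idea of learning controller parameters online (this is not the adaptive-critic or neural-network scheme discussed above, only a placeholder illustration), a PID controller's gains can be nudged by a simple error-driven update rule:

        # Hypothetical sketch only: a PID controller whose gains are adapted online
        # by a crude error-driven update (a placeholder for a learned update rule).
        class AdaptivePID:
            def __init__(self, kp=1.0, ki=0.1, kd=0.05, lr=1e-4):
                self.kp, self.ki, self.kd, self.lr = kp, ki, kd, lr
                self.integral = 0.0
                self.prev_error = 0.0

            def step(self, error, dt):
                self.integral += error * dt
                derivative = (error - self.prev_error) / dt
                u = self.kp * error + self.ki * self.integral + self.kd * derivative
                # placeholder adaptation: strengthen each term in proportion to how
                # much its input correlates with the current error
                self.kp += self.lr * error * error
                self.ki += self.lr * error * self.integral
                self.kd += self.lr * error * derivative
                self.prev_error = error
                return u

        controller = AdaptivePID()
        print(controller.step(error=0.5, dt=0.01))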

  13. Image processor of model-based vision system for assembly robots

    International Nuclear Information System (INIS)

    Moribe, H.; Nakano, M.; Kuno, T.; Hasegawa, J.

    1987-01-01

    A special-purpose image preprocessor for the visual system of assembly robots has been developed. The main functional unit is composed of lookup tables, exploiting the advantages of semiconductor memory: large-scale integration, high speed and low price. More than one unit may be operated in parallel, since the preprocessor is designed on the standard IEEE 796 bus. The operation time of the preprocessor for line segment extraction is usually 200 ms per 500 segments, though it varies with the complexity of the scene image. The gray-scale visual system, supported by a model-based analysis program using the extracted line segments, recognizes partially visible or overlapping industrial workpieces and determines their locations and orientations.
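
    As a small illustration of the lookup-table principle (an assumed example, not the paper's hardware pipeline), a 256-entry table applied per pixel realises an arbitrary grey-level mapping with a single indexed read:

        # Assumed illustration of LUT-based preprocessing; one table lookup per pixel.
        import numpy as np

        lut = np.where(np.arange(256) > 128, 255, 0).astype(np.uint8)   # hypothetical threshold LUT
        image = np.random.default_rng(1).integers(0, 256, size=(64, 64), dtype=np.uint8)
        processed = lut[image]      # fancy indexing plays the role of the LUT memory
        print(processed.dtype, processed.shape)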

  14. Learning-based Nonlinear Model Predictive Control to Improve Vision-based Mobile Robot Path Tracking

    Science.gov (United States)

    2015-07-01

    The corresponding cost function is taken to be J(u) = (x_d − x)^T Q_x (x_d − x) + u^T R u (Eq. 20), where Q_x ∈ R^(K n_x × K n_x) is positive semi-definite, R and u are as in Eq. (3), x_d = (x_(d,k+1), …, x_(d,k+K)) is the sequence of desired states, x = (x_(k+1), …, x_(k+K)) is the sequence of predicted states, and K is the given prediction horizon. [Figure 5 (caption only): Definition of the robot velocities, v_k and ω_k, and the three pose variables.]
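
    A minimal numerical sketch of evaluating this quadratic cost for one stacked candidate input sequence is given below; the horizon, dimensions and weights are illustrative assumptions, and the state prediction itself is not modelled.

        # Illustrative evaluation of J(u) = (xd - x)^T Qx (xd - x) + u^T R u.
        import numpy as np

        K, nx, nu = 10, 3, 2                    # assumed horizon and dimensions
        Qx = np.eye(K * nx)                     # positive semi-definite state weight
        R = 0.1 * np.eye(K * nu)                # input weight

        x_pred = np.zeros(K * nx)               # stacked predicted states (from a model)
        x_des = np.ones(K * nx)                 # stacked desired states
        u = np.zeros(K * nu)                    # stacked candidate inputs

        e = x_des - x_pred
        J = e @ Qx @ e + u @ R @ u
        print(J)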

  15. A ToF-Camera as a 3D Vision Sensor for Autonomous Mobile Robotics

    Directory of Open Access Journals (Sweden)

    Sobers Lourdu Xavier Francis

    2015-11-01

    Full Text Available The aim of this paper is to deploy a time-of-flight (ToF) based photonic mixer device (PMD) camera on an Autonomous Ground Vehicle (AGV) whose overall target is to traverse from one point to another in hazardous and hostile environments, employing obstacle avoidance without human intervention. Applying a ToF camera on an AGV is a suitable approach to autonomous robotics because the ToF camera can provide three-dimensional (3D) information at a low computational cost; it is utilized to extract information about obstacles after calibration and ground testing, and is mounted on and integrated with the Pioneer mobile robot. The workspace is a two-dimensional (2D) world map which has been divided into a grid of cells, where the collision-free path defined by the graph search algorithm is a sequence of cells the AGV can traverse to reach the target. PMD depth data are used to populate traversable areas and obstacles in a grid of cells of suitable size. These camera data are converted into Cartesian coordinates for entry into the workspace grid map. A more optimal camera mounting angle is needed and is adopted by analysing the camera's performance, such as pixel detection, the detection rate, the maximum perceived distances, and infrared (IR) scattering with respect to the ground surface. This mounting angle is recommended to be half the vertical field of view (FoV) of the PMD camera. A series of still and moving tests are conducted on the AGV to verify correct sensor operation, which show that the postulated application of the ToF camera in the AGV is not straightforward. Later, to stabilize the moving PMD camera and to detect obstacles, a tracking feature detection algorithm and the scene flow technique are implemented to perform a real-time experiment.
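
    A simplified sketch of the grid-map population step is shown below under stated assumptions (cell size, workspace extent and the already-converted Cartesian points are all hypothetical); depth measurements expressed in the robot frame are binned into cells and marked as occupied.

        # Assumed sketch: bin robot-frame (x, y) obstacle points into a 2D grid map.
        import numpy as np

        cell_size = 0.1                                   # assumed cell size [m]
        grid = np.zeros((100, 100), dtype=np.uint8)       # 10 m x 10 m workspace, 0 = free

        points = np.array([[1.20, 0.30], [1.25, 0.35], [2.00, -0.80]])  # hypothetical points

        ix = (points[:, 0] / cell_size).astype(int)
        iy = (points[:, 1] / cell_size).astype(int) + grid.shape[1] // 2  # centre y on the robot
        valid = (ix >= 0) & (ix < grid.shape[0]) & (iy >= 0) & (iy < grid.shape[1])
        grid[ix[valid], iy[valid]] = 1                    # 1 = occupied; the planner runs on this grid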

  16. Inventing Japan's 'robotics culture': the repeated assembly of science, technology, and culture in social robotics.

    Science.gov (United States)

    Sabanović, Selma

    2014-06-01

    Using interviews, participant observation, and published documents, this article analyzes the co-construction of robotics and culture in Japan through the technical discourse and practices of robotics researchers. Three cases from current robotics research--the seal-like robot PARO, the Humanoid Robotics Project HRP-2 humanoid, and 'kansei robotics' - show the different ways in which scientists invoke culture to provide epistemological grounding and possibilities for social acceptance of their work. These examples show how the production and consumption of social robotic technologies are associated with traditional crafts and values, how roboticists negotiate among social, technical, and cultural constraints while designing robots, and how humans and robots are constructed as cultural subjects in social robotics discourse. The conceptual focus is on the repeated assembly of cultural models of social behavior, organization, cognition, and technology through roboticists' narratives about the development of advanced robotic technologies. This article provides a picture of robotics as the dynamic construction of technology and culture and concludes with a discussion of the limits and possibilities of this vision in promoting a culturally situated understanding of technology and a multicultural view of science.

  17. Using High-Level RTOS Models for HW/SW Embedded Architecture Exploration: Case Study on Mobile Robotic Vision

    Directory of Open Access Journals (Sweden)

    Verdier François

    2008-01-01

    Full Text Available We are interested in the design of a system-on-chip implementing the vision system of a mobile robot. Following a biologically inspired approach, this vision architecture belongs to a larger sensorimotor loop. This regulation loop both creates and exploits dynamic properties to achieve a wide variety of target tracking and navigation objectives. Such a system is representative of numerous flexible and dynamic applications which are more and more encountered in embedded systems. In order to deal with all of the dynamic aspects of these applications, it appears necessary to embed a dedicated real-time operating system on the chip. The presence of this on-chip custom executive layer constitutes a major scientific obstacle in the traditional hardware and software design flows. Classical exploration and simulation tools are particularly inappropriate in this case. We detail in this paper the specific mechanisms necessary to build a high-level model of an embedded custom operating system able to manage such a real-time but flexible application. We also describe our executable RTOS model written in SystemC allowing an early simulation of our application on top of its specific scheduling layer. Based on this model, a methodology is discussed and results are given on the exploration and validation of a distributed platform adapted to this vision system.

  18. Towards Light‐guided Micro‐robotics

    DEFF Research Database (Denmark)

    Glückstad, Jesper

    ‐dimensional microstructures. Furthermore, we exploit the light shaping capabilities available in the workstation to demonstrate a new strategy for controlling microstructures that goes beyond the typical refractive light deflections that are exploited in conventional optical trapping and manipulation e.g. of micro......Robotics in the macro‐scale typically uses light for carrying information in machine vision for monitoring and feedback in intelligent robotic guidance systems. With light’s miniscule momentum, shrinking robots down to the micro‐scale regime creates opportunities for exploiting optical forces...... and torques in micro‐robotic actuation and control. Indeed, the literature on optical trapping and micro‐manipulation attests to the possibilities for optical micro‐robotics. Advancing light‐driven micro‐robotics requires the optimization of optical force and optical torque that, in turn, requires...

  19. Robotic fabrication in architecture, art, and design

    CERN Document Server

    Braumann, Johannes

    2013-01-01

    Architects, artists, and designers have been fascinated by robots for many decades, from Villemard’s utopian vision of an architect building a house with robotic labor in 1910, to the design of buildings that are robots themselves, such as Archigram’s Walking City. Today, they are again approaching the topic of robotic fabrication but this time employing a different strategy: instead of utopian proposals like Archigram’s or the highly specialized robots that were used by Japan’s construction industry in the 1990s, the current focus of architectural robotics is on industrial robots. These robotic arms have six degrees of freedom and are widely used in industry, especially for automotive production lines. What makes robotic arms so interesting for the creative industry is their multi-functionality: instead of having to develop specialized machines, a multifunctional robot arm can be equipped with a wide range of end-effectors, similar to a human hand using various tools. Therefore, architectural researc...

  20. Multi-robot control interface

    Science.gov (United States)

    Bruemmer, David J [Idaho Falls, ID; Walton, Miles C [Idaho Falls, ID

    2011-12-06

    Methods and systems for controlling a plurality of robots through a single user interface include at least one robot display window for each of the plurality of robots with the at least one robot display window illustrating one or more conditions of a respective one of the plurality of robots. The user interface further includes at least one robot control window for each of the plurality of robots with the at least one robot control window configured to receive one or more commands for sending to the respective one of the plurality of robots. The user interface further includes a multi-robot common window comprised of information received from each of the plurality of robots.

  1. Development of a 3D parallel mechanism robot arm with three vertical-axial pneumatic actuators combined with a stereo vision system.

    Science.gov (United States)

    Chiang, Mao-Hsiung; Lin, Hao-Ting

    2011-01-01

    This study aimed to develop a novel 3D parallel mechanism robot driven by three vertical-axial pneumatic actuators with a stereo vision system for path tracking control. The mechanical system and the control system are the primary novel parts of the developed 3D parallel mechanism robot. In the mechanical system, the 3D parallel mechanism robot contains three serial chains, a fixed base, a movable platform and a pneumatic servo system. The parallel mechanism is designed and analyzed first for realizing a 3D motion in the X-Y-Z coordinate system of the robot's end-effector. The inverse kinematics and the forward kinematics of the parallel mechanism robot are investigated by using the Denavit-Hartenberg notation (D-H notation) coordinate system. The pneumatic actuators in the three vertical motion axes are modeled. In the control system, a Fourier series-based adaptive sliding-mode controller with H(∞) tracking performance is used to design the path tracking controllers of the three vertical servo pneumatic actuators for realizing 3D path tracking control of the end-effector. Three optical linear scales are used to measure the positions of the three pneumatic actuators. The 3D position of the end-effector is then calculated from the measured positions of the three pneumatic actuators by means of the kinematics. However, the calculated 3D position of the end-effector cannot account for the manufacturing and assembly tolerances of the joints and the parallel mechanism, so errors exist between the actual and the calculated 3D positions of the end-effector. In order to improve this situation, sensor collaboration is developed in this paper. A stereo vision system is used to collaborate with the three position sensors of the pneumatic actuators. The stereo vision system, combining two CCD cameras, serves to measure the actual 3D position of the end-effector and calibrate the error between the actual and the calculated 3D position of the end-effector. Furthermore, to
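
    For orientation only, the following sketch computes forward kinematics from Denavit-Hartenberg parameters for a hypothetical serial chain; the link parameters are made up, and the paper's parallel-mechanism model (and its inverse kinematics) is not reproduced.

        # Generic D-H forward kinematics sketch with made-up link parameters.
        import numpy as np

        def dh_transform(theta, d, a, alpha):
            """Homogeneous transform of one link, standard D-H convention."""
            ct, st = np.cos(theta), np.sin(theta)
            ca, sa = np.cos(alpha), np.sin(alpha)
            return np.array([[ct, -st * ca,  st * sa, a * ct],
                             [st,  ct * ca, -ct * sa, a * st],
                             [0.0,      sa,       ca,      d],
                             [0.0,     0.0,      0.0,    1.0]])

        # hypothetical 3-joint chain: (theta, d, a, alpha) per link
        links = [(0.3, 0.1, 0.2, np.pi / 2), (0.5, 0.0, 0.3, 0.0), (-0.2, 0.0, 0.1, 0.0)]
        T = np.eye(4)
        for params in links:
            T = T @ dh_transform(*params)
        print(T[:3, 3])   # end-effector position in the base frame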

  2. Robotic refueling machine

    International Nuclear Information System (INIS)

    Challberg, R.C.; Jones, C.R.

    1996-01-01

    One of the longest critical path operations performed during the outage is removing and replacing the fuel. A design is currently under development for a refueling machine which would allow faster, fully automated operation and would also allow the handling of two fuel assemblies at the same time. This design is different from current designs, (a) because of its lighter weight, making increased acceleration and speed possible, (b) because of its control system which makes locating the fuel assembly more dependable and faster, and (c) because of its dual handling system allowing simultaneous fuel movements. The new design uses two robotic arms to span a designated area of the vessel and the fuel storage area. Attached to the end of each robotic arm is a lightweight telescoping mast with a pendant attached to the end of each mast. The pendant acts as the base unit, allowing attachment of any number of end effectors depending on the servicing or inspection operation. Housed within the pendant are two television cameras used for the positioning control system. The control system is adapted from the robotics field using the technology known as machine vision, which provides both object and character recognition techniques to enable relative position control rather than absolute position control as in past designs. The pendant also contains thrusters that are used for fast, short distance, precise positioning. The new refueling machine system design is capable of a complete off load and reload of an 872 element core in about 5.3 days compared to 13 days for a conventional system

  3. Mobile robot for hazardous environments

    International Nuclear Information System (INIS)

    Bains, N.

    1995-01-01

    This paper describes the architecture and potential applications of the autonomous robot for a known environment (ARK). The ARK project has developed an autonomous mobile robot that can move around by itself in a complicated nuclear environment, utilizing a number of sensors for navigation. The primary sensor system is computer vision. The ARK has the intelligence to determine its position utilizing "natural landmarks," such as ordinary building features, at any point along its path. It is this feature that makes ARK unique in its ability to operate in an industrial type of environment. The prime motivation to develop ARK was the potential application of mobile robots in radioactive areas within nuclear generating stations and for nuclear waste sites. The project budget is $9 million over 4 yr, and the project will be completed in October 1995.

  4. Robots: l'embarras de richesses [survey of robots available]

    International Nuclear Information System (INIS)

    Meieran, H.; Brittain, K.; Sturkey, R.

    1989-01-01

    A survey of robots available for use in the nuclear industry is presented. Two new categories of mobile robots have been introduced since the last survey (April 1987): pipe crawlers and underwater robots. The number of robots available has risen to double what it was two years ago and four times what it was in 1986. (U.K.)

  5. Biomass feeds vegetarian robot; Biomassa voedt vegetarische robot

    Energy Technology Data Exchange (ETDEWEB)

    Van den Brandt, M. [Office for Science and Technology, Embassy of the Kingdom of the Netherlands, Washington (United States)]

    2009-09-15

    This brief article addresses the EATR robot (Energetically Autonomous Tactical Robot) that was developed by Cyclone Power and uses biomass as its primary source of energy for propulsion. [Dutch] A short article about the EATR robot (Energetically Autonomous Tactical Robot) developed by Cyclone Power, which uses biomass as the primary energy source for propulsion.

  6. Neuro-Inspired Spike-Based Motion: From Dynamic Vision Sensor to Robot Motor Open-Loop Control through Spike-VITE

    Directory of Open Access Journals (Sweden)

    Fernando Perez-Peña

    2013-11-01

    Full Text Available In this paper we present a complete spike-based architecture: from a Dynamic Vision Sensor (retina) to a stereo head robotic platform. The aim of this research is to reproduce intended movements performed by humans, taking into account as many features as possible from the biological point of view. This paper fills the gap between current spike silicon sensors and robotic actuators by applying a spike processing strategy to the data flows in real time. The architecture is divided into layers: the retina; visual information processing; the trajectory generator layer, which uses a neuroinspired algorithm (SVITE) that can be replicated as many times as the robot has DoF; and finally the actuation layer to supply the spikes to the robot (using PFM). All the layers do their tasks in a spike-processing mode, and they communicate with each other through the neuro-inspired AER protocol. The open-loop controller is implemented on FPGA using AER interfaces developed by RTC Lab. Experimental results reveal the viability of this spike-based controller. Two main advantages are the low hardware resources (2% of a Xilinx Spartan 6) and power requirements (3.4 W) needed to control a robot with a high number of DoF (up to 100 for a Xilinx Spartan 6). It also evidences the suitable use of AER as a communication protocol between processing and actuation.

  7. Neuro-Inspired Spike-Based Motion: From Dynamic Vision Sensor to Robot Motor Open-Loop Control through Spike-VITE

    Science.gov (United States)

    Perez-Peña, Fernando; Morgado-Estevez, Arturo; Linares-Barranco, Alejandro; Jimenez-Fernandez, Angel; Gomez-Rodriguez, Francisco; Jimenez-Moreno, Gabriel; Lopez-Coronado, Juan

    2013-01-01

    In this paper we present a complete spike-based architecture: from a Dynamic Vision Sensor (retina) to a stereo head robotic platform. The aim of this research is to reproduce intended movements performed by humans, taking into account as many features as possible from the biological point of view. This paper fills the gap between current spike silicon sensors and robotic actuators by applying a spike processing strategy to the data flows in real time. The architecture is divided into layers: the retina; visual information processing; the trajectory generator layer, which uses a neuroinspired algorithm (SVITE) that can be replicated as many times as the robot has DoF; and finally the actuation layer to supply the spikes to the robot (using PFM). All the layers do their tasks in a spike-processing mode, and they communicate with each other through the neuro-inspired AER protocol. The open-loop controller is implemented on FPGA using AER interfaces developed by RTC Lab. Experimental results reveal the viability of this spike-based controller. Two main advantages are the low hardware resources (2% of a Xilinx Spartan 6) and power requirements (3.4 W) needed to control a robot with a high number of DoF (up to 100 for a Xilinx Spartan 6). It also evidences the suitable use of AER as a communication protocol between processing and actuation. PMID:24264330

  8. Can we trust robots?

    NARCIS (Netherlands)

    Coeckelbergh, Mark

    2011-01-01

    Can we trust robots? Responding to the literature on trust and e-trust, this paper asks if the question of trust is applicable to robots, discusses different approaches to trust, and analyses some preconditions for trust. In the course of the paper a phenomenological-social approach to trust is

  9. Robotics in endoscopy.

    Science.gov (United States)

    Klibansky, David; Rothstein, Richard I

    2012-09-01

    The increasing complexity of intralumenal and emerging translumenal endoscopic procedures has created an opportunity to apply robotics in endoscopy. Computer-assisted or direct-drive robotic technology allows the triangulation of flexible tools through telemanipulation. The creation of new flexible operative platforms, along with other emerging technology such as nanobots and steerable capsules, can be transformational for endoscopic procedures. In this review, we cover some background information on the use of robotics in surgery and endoscopy, and review the emerging literature on platforms, capsules, and mini-robotic units. The development of techniques in advanced intralumenal endoscopy (endoscopic mucosal resection and endoscopic submucosal dissection) and translumenal endoscopic procedures (NOTES) has generated a number of novel platforms, flexible tools, and devices that can apply robotic principles to endoscopy. The development of a fully flexible endoscopic surgical toolkit will enable increasingly advanced procedures to be performed through natural orifices. The application of platforms and new flexible tools to the areas of advanced endoscopy and NOTES heralds the opportunity to employ useful robotic technology. Following the examples of the utility of robotics from the field of laparoscopic surgery, we can anticipate the emerging role of robotic technology in endoscopy.

  10. Neuronal nets in robotics

    International Nuclear Information System (INIS)

    Jimenez Sanchez, Raul

    1999-01-01

    The paper gives a general overview of the solutions that neural networks contribute to robotics. Their advantages and drawbacks with respect to conventional techniques are discussed. It also describes the most notable applications, such as trajectory tracking, image-based positioning, force control and mobile robot management, among others.

  11. Modelling of Hydraulic Robot

    DEFF Research Database (Denmark)

    Madsen, Henrik; Zhou, Jianjun; Hansen, Lars Henrik

    1997-01-01

    This paper describes a case study of identifying the physical model (or the grey box model) of a hydraulic test robot. The obtained model is intended to provide a basis for model-based control of the robot. The physical model is formulated in continuous time and is derived by application...

  12. Robots that care

    NARCIS (Netherlands)

    Looije, R.; Arendsen, J.; Saldien, J.; Vanderborght, B.; Broekens, J.; Neerincx, M.

    2010-01-01

    Many countries face pressure on their health care systems. To alleviate this pressure, 'self care' and 'self monitoring' are often stimulated with the use of new assistive technologies. Social robotics is a research area where robotic technology is optimized for various social functions. One of

  13. Robotics and Industrial Arts.

    Science.gov (United States)

    Edmison, Glenn A.; And Others

    Robots are becoming increasingly common in American industry. By 1990, they will revolutionize the way industry functions, replacing hundreds of workers and doing hot, dirty jobs better and more quickly than the workers could have done them. Robotics should be taught in high school industrial arts programs as a major curriculum component. The…

  14. Robotics in medicine

    Science.gov (United States)

    Kuznetsov, D. N.; Syryamkin, V. I.

    2015-11-01

    Modern technologies play a very important role in our lives. It is hard to imagine how people could get along without personal computers, and companies without powerful computer centers. Nowadays, many devices make modern medicine more effective. Medicine is developing constantly, so the introduction of robots into this sector is a very promising activity. Advances in technology have influenced medicine greatly. Robotic surgery is now actively developing worldwide. Scientists have been carrying out research and practical attempts to create robotic surgeons for more than 20 years, since the mid-1980s. Robotic assistants play an important role in modern medicine. This industry is new enough to be at an early stage of development; despite this, some developments already have worldwide application, function successfully and bring invaluable help to the staff of medical institutions. Today, doctors can perform operations that seemed impossible a few years ago. Such progress in medicine is due to many factors. First, modern operating rooms are equipped with up-to-date equipment, allowing doctors to operate more accurately and with less risk to the patient. Second, technology has made it possible to improve the quality of doctors' training. Various types of robots exist now: assistants, military, space, household and, of course, medical robots. Further, a detailed analysis of the existing types of robots and their applications is needed. The purpose of the article is to illustrate the most popular types of robots used in medicine.

  15. Multi-robot caravanning

    KAUST Repository

    Denny, Jory; Giese, Andrew; Mahadevan, Aditya; Marfaing, Arnaud; Glockenmeier, Rachel; Revia, Colton; Rodriguez, Samuel; Amato, Nancy M.

    2013-01-01

    of waypoints. At the heart of our algorithm is the use of leader election to efficiently exploit the unique environmental knowledge available to each robot in order to plan paths for the group, which makes it general enough to work with robots that have

  16. Going Green Robots

    Science.gov (United States)

    Nelson, Jacqueline M.

    2011-01-01

    In looking at the interesting shapes and sizes of old computer parts, creating robots quickly came to the author's mind. In this article, she describes how computer parts can be used creatively. Students will surely enjoy creating their very own robots while learning about the importance of recycling in the society. (Contains 1 online resource.)

  17. Reflection on robotic intelligence

    NARCIS (Netherlands)

    Bartneck, C.

    2006-01-01

    This paper reflects on the development of robots, both their physical shape and their intelligence. The latter strongly depends on the progress made in the artificial intelligence (AI) community, which does not yet provide the models and tools necessary to create intelligent robots. It is time

  18. Robots Cannot Lie

    DEFF Research Database (Denmark)

    Borggreen, Gunhild

    2014-01-01

    An analysis of the Japanese robot-human theatre piece Hataraku Watashi, focusing on Austin's and Butler's concept of performativity.

  19. Intelligent robot action planning

    Energy Technology Data Exchange (ETDEWEB)

    Vamos, T; Siegler, A

    1982-01-01

    Action planning methods used in intelligent robot control are discussed. Planning is accomplished through environment understanding, environment representation, task understanding and planning, motion analysis and man-machine communication. These fields are analysed in detail. The frames of an intelligent motion planning system are presented. Graphic simulation of the robot's environment and motion is used to support the planning. 14 references.

  20. Innovations in robotic surgery.

    Science.gov (United States)

    Gettman, Matthew; Rivera, Marcelino

    2016-05-01

    Developments in robotic surgery have continued to advance care throughout the field of urology. The purpose of this review is to evaluate innovations in robotic surgery over the past 18 months. The release of the da Vinci Xi system heralded an improvement on the Si system with improved docking, the ability to further manipulate robotic arms without clashing, and an autofocus universal endoscope. Robotic simulation continues to evolve with improvements in simulation training design to include augmented reality in robotic surgical education. Robotic-assisted laparoendoscopic single-site surgery continues to evolve with improvements on technique that allow for tackling previously complex pathologic surgical anatomy including urologic oncology and reconstruction. Last, innovations of new surgical platforms with robotic systems to improve surgeon ergonomics and efficiency in ureteral and renal surgery are being applied in the clinical setting. Urologic surgery continues to be at the forefront of the revolution of robotic surgery with advancements in not only existing technology but also creation of entirely novel surgical systems.

  1. Robotic surgery in gynecology

    Directory of Open Access Journals (Sweden)

    Jean eBouquet De Jolinière

    2016-05-01

    Full Text Available Abstract Minimally invasive surgery (MIS) can be considered the greatest surgical innovation of the past thirty years. It revolutionized surgical practice with well-proven advantages over traditional open surgery: reduced surgical trauma and incision-related complications, such as surgical-site infections, postoperative pain and hernia, reduced hospital stay, and improved cosmetic outcome. Nonetheless, proficiency in MIS can be technically challenging, as conventional laparoscopy is associated with several limitations, such as the reduced depth perception of the two-dimensional (2D) monitor, camera instability, a limited range of motion and steep learning curves. The surgeon has low force feedback, which allows simple gestures, respect for tissues and more effective treatment of complications. Since the 1980s, several computer science and robotics projects have been set up to overcome the difficulties encountered with conventional laparoscopy, to augment the surgeon's skills, to achieve accuracy and high precision during complex surgery, and to facilitate the widespread adoption of MIS. Surgical instruments are guided by haptic interfaces that replicate and filter hand movements. Robotically assisted technology offers advantages that include improved three-dimensional stereoscopic vision, wristed instruments that improve dexterity, and tremor-cancelling software that improves surgical precision.

  2. Robot skills for manufacturing

    DEFF Research Database (Denmark)

    Pedersen, Mikkel Rath; Nalpantidis, Lazaros; Andersen, Rasmus Skovgaard

    2016-01-01

    ... products are introduced by manufacturers. In order to compete on global markets, the factories of tomorrow need complete production lines, including automation technologies that can effortlessly be reconfigured or repurposed when the need arises. In this paper we present the concept of general, self-asserting robot skills for manufacturing. We show how a relatively small set of skills are derived from current factory worker instructions, and how these can be transferred to industrial mobile manipulators. General robot skills can not only be implemented on these robots, but also be intuitively concatenated... in running production facilities at an industrial partner. It follows from these experiments that the use of robot skills, and the associated task-level programming framework, is a viable solution to introducing robots that can intuitively and on the fly be programmed to perform new tasks by factory workers.

  3. Robotics at Savannah River

    International Nuclear Information System (INIS)

    Byrd, J.S.

    1983-01-01

    A Robotics Technology Group was organized at the Savannah River Laboratory in August 1982. Many potential applications have been identified that will improve personnel safety, reduce operating costs, and increase productivity using modern robotics and automation. Several active projects are under way to procure robots, to develop unique techniques and systems for the site's processes, and to install the systems in the actual work environments. The projects and development programs are involved in the following general application areas: (1) glove boxes and shielded cell facilities, (2) laboratory chemical processes, (3) fabrication processes for reactor fuel assemblies, (4) sampling processes for separation areas, (5) emergency response in reactor areas, (6) fuel handling in reactor areas, and (7) remote radiation monitoring systems. A Robotics Development Laboratory has been set up for experimental and development work and for demonstration of robotic systems

  4. Evidence for robots.

    Science.gov (United States)

    Shenoy, Ravikiran; Nathwani, Dinesh

    2017-01-01

    Robots have been successfully used in commercial industry and have enabled humans to perform tasks which are repetitive, dangerous and require extreme force. Their role has evolved and now includes many aspects of surgery to improve safety and precision. Orthopaedic surgery is largely performed on bones, which are rigid, immobile structures, and can therefore be performed by robots with great precision. Robots have been designed for use in orthopaedic surgery, including joint arthroplasty and spine surgery. Experimental studies have been published evaluating the role of robots in arthroscopy and trauma surgery. In this article, we review the incorporation of robots in orthopaedic surgery, looking into the evidence for their use. © The Authors, published by EDP Sciences, 2017.

  5. Robotics: The next step?

    Science.gov (United States)

    Broeders, Ivo A M J

    2014-02-01

    Robotic systems were introduced 15 years ago to support complex endoscopic procedures. The technology is increasingly used in gastro-intestinal surgery. In this article, the literature on experimental and clinical research is reviewed and ergonomic issues are discussed. The literature review was based on a Medline search using a large variety of search terms, including e.g. robot(ic), randomized, rectal, oesophageal and ergonomics. Review articles on relevant topics are discussed with preference. There is abundant evidence of supremacy in performing complex endoscopic surgery tasks when using the robot in an experimental setting. There is little high-level evidence so far on the translation of these merits to clinical practice. Robotic systems may prove helpful in complex gastro-intestinal surgery. Moreover, dedicated computer-based technology integrated in telepresence systems opens the way to the integration of planning, diagnostics and therapy. The first high-tech add-ons, such as near-infrared technology, are under clinical evaluation. Copyright © 2014 Elsevier Ltd. All rights reserved.

  6. Towards Versatile Robots Through Open Heterogeneous Modular Robots

    DEFF Research Database (Denmark)

    Lyder, Andreas

    Robots are important tools in our everyday life. Both in industry and at the consumer level they serve the purpose of increasing our scope and extending our capabilities. Modular robots take the next step, allowing us to easily create and build various robots from a set of modules. If a problem arises, a new robot can be assembled rapidly from the existing modules, in contrast to conventional robots, which require a time consuming and expensive development process. In this thesis we define a modular robot to be a robot consisting of dynamically reconfigurable modules. The goal of this thesis is to increase the versatility and practical usability of modular robots by introducing new conceptual designs. Until now modular robots have been based on a pre-specified set of modules, and thus their functionality is limited. We propose an open heterogeneous design concept, which allows a modular robot...

  7. Thoughts turned into high-level commands: Proof-of-concept study of a vision-guided robot arm driven by functional MRI (fMRI) signals.

    Science.gov (United States)

    Minati, Ludovico; Nigri, Anna; Rosazza, Cristina; Bruzzone, Maria Grazia

    2012-06-01

    Previous studies have demonstrated the possibility of using functional MRI to control a robot arm through a brain-machine interface by directly coupling haemodynamic activity in the sensory-motor cortex to the position of two axes. Here, we extend this work by implementing interaction at a more abstract level, whereby imagined actions deliver structured commands to a robot arm guided by a machine vision system. Rather than extracting signals from a small number of pre-selected regions, the proposed system adaptively determines at the individual level how to map representative brain areas to the input nodes of a classifier network. In this initial study, a median action recognition accuracy of 90% was attained on five volunteers performing a game consisting of collecting randomly positioned coloured pawns and placing them into cups. The "pawn" and "cup" instructions were imparted through four mental imagery tasks, linked to robot arm actions by a state machine. With the current implementation in the MATLAB language, the median action recognition time was 24.3 s and the robot execution time was 17.7 s. We demonstrate the notion of combining haemodynamic brain-machine interfacing with computer vision to implement interaction at the level of high-level commands rather than individual movements, which may find application in future fMRI approaches relevant to brain-lesioned patients, and provide source code supporting further work on larger command sets and real-time processing. Copyright © 2012 IPEM. Published by Elsevier Ltd. All rights reserved.
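
    The paper links four mental-imagery tasks to robot-arm actions through a state machine. The sketch below shows, in Python, what such a command state machine might look like; the label names, states and returned actions are hypothetical stand-ins for illustration, not the authors' actual protocol.

        # Hypothetical sketch of a command state machine in the spirit of the paper:
        # classified mental-imagery labels ("pawn", "cup", ...) are turned into
        # structured robot-arm commands. The label set and actions are assumptions.

        class CommandStateMachine:
            def __init__(self):
                self.state = "await_pawn"
                self.selected_pawn = None

            def on_decoded_label(self, label):
                """Advance the state machine with one decoded classifier label."""
                if self.state == "await_pawn" and label.startswith("pawn"):
                    self.selected_pawn = label
                    self.state = "await_cup"
                    return ("locate_and_grasp", label)   # vision system finds the pawn
                if self.state == "await_cup" and label.startswith("cup"):
                    self.state = "await_pawn"
                    return ("place_into", label)         # arm drops the pawn into the cup
                return ("ignore", label)                 # label not valid in this state

        sm = CommandStateMachine()
        for decoded in ["pawn_red", "cup_left", "pawn_blue"]:
            print(sm.on_decoded_label(decoded))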

  8. Exploiting Child-Robot Aesthetic Interaction for a Social Robot

    OpenAIRE

    Lee, Jae-Joon; Kim, Dae-Won; Kang, Bo-Yeong

    2012-01-01

    A social robot interacts and communicates with humans by using the embodied knowledge gained from interactions with its social environment. In recent years, emotion has emerged as a popular concept for designing social robots. Several studies on social robots reported an increase in robot sociability through emotional imitative interactions between the robot and humans. In this paper conventional emotional interactions are extended by exploiting the aesthetic theories that the sociability of ...

  9. Towards Versatile Robots Through Open Heterogeneous Modular Robots

    OpenAIRE

    Lyder, Andreas

    2010-01-01

    Robots are important tools in our everyday life. Both in industry and at the consumer level they serve the purpose of increasing our scope and extending our capabilities. Modular robots take the next step, allowing us to easily create and build various robots from a set of modules. If a problem arises, a new robot can be assembled rapidly from the existing modules, in contrast to conventional robots, which require a time consuming and expensive development process. In this thesis we define a ...

  10. Interaction with Soft Robotic Tentacles

    DEFF Research Database (Denmark)

    Jørgensen, Jonas

    2018-01-01

    Soft robotics technology has been proposed for a number of applications that involve human-robot interaction. In this tabletop demonstration it is possible to interact with two soft robotic platforms that have been used in human-robot interaction experiments (also accepted to HRI'18 as a Late...

  11. Robots: An Impact on Education.

    Science.gov (United States)

    Blaesi, LaVon; Maness, Marion

    1984-01-01

    Provides background information on robotics and robots, considering impact of robots on the workplace and concerns of the work force. Discusses incorporating robotics into the educational system at all levels, exploring industry-education partnerships to fund introduction of new technology into the curriculum. New funding sources and funding…

  12. Remote controlled data collector robot

    Directory of Open Access Journals (Sweden)

    Jozsef Suto

    2012-06-01

    Full Text Available Today there is a general and rising need for robots that assist different human activities. The goal of the present project is to develop a prototyping robot that provides facilities for attaching and fitting different kinds of sensors and actuators. This robot provides an easy way to turn a general-purpose robot into a special-function one.

  13. Construction of the Control System of Cleaning Robots with Vision Guidance

    Directory of Open Access Journals (Sweden)

    Tian-Syung Lan

    2013-01-01

    Full Text Available The study uses Kinect, modern depth-detecting camera equipment, to detect objects on and above the ground. The collected data are used to construct a ground-level model that guides an automatic guided vehicle. The core of the vehicle is a PIC18F4520 microcontroller. Bluetooth wireless communication is adopted for remote connection to a computer, which is used to control the vehicle remotely. Operators send movement commands to the automatic guided vehicle through the computer. Once a destination point is identified, the vehicle is led forward. The guiding process maps out a path that directs the vehicle to the destination and avoids any obstacles. The study is based on existing cleaning robots that are available. Aside from fixed-point movement, through data analysis the system is also capable of identifying objects that are not supposed to appear on the ground, such as aluminum cans. By setting an aluminum can as the destination, the automatic guided vehicle will drive to the can and pick it up. Such action realizes the cleaning function.
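
    The abstract describes a PC sending movement commands to the PIC18F4520-based vehicle over Bluetooth. A minimal sketch of that kind of link, using the pyserial package over a Bluetooth serial (RFCOMM) port, is shown below; the port name, baud rate and single-character command set are assumptions for illustration only, not the system's actual protocol.

        # Minimal sketch of sending movement commands to the vehicle over a Bluetooth
        # serial link. Requires the external pyserial package; port, baud rate and the
        # single-character command protocol are illustrative assumptions.

        import time
        import serial  # pyserial

        COMMANDS = {"forward": b"F", "left": b"L", "right": b"R", "stop": b"S"}

        def drive(port="/dev/rfcomm0", baud=9600):
            with serial.Serial(port, baud, timeout=1) as link:
                link.write(COMMANDS["forward"])   # head toward the detected target
                time.sleep(2.0)
                link.write(COMMANDS["stop"])      # stop once the destination is reached

        if __name__ == "__main__":
            drive()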

  14. A Complementary Vision Strategy for Autonomous Robots in Underground Terrains using SRM and Entropy Models

    Directory of Open Access Journals (Sweden)

    Omowunmi Isafiade

    2013-09-01

    Full Text Available This work investigates robot perception in underground terrains (mines and tunnels) using statistical region merging (SRM) and entropy models. A probabilistic approach based on local entropy is employed. The entropy is measured within a fixed window on a stream of mine and tunnel frames to compute features used in the segmentation process, while SRM reconstructs the main structural components of an image by a simple but effective statistical analysis. An investigation is conducted on different regions of the mine, such as the shaft, stope and gallery, using publicly available mine frames together with a stream of locally captured mine images. Furthermore, an investigation is also conducted on a stream of dynamic underground tunnel image frames, using the XBOX Kinect 3D sensors. The Kinect sensors produce streams of red, green and blue (RGB) and depth images of 640 × 480 resolution at 30 frames per second. Integrating the depth information into the drivability analysis gives a strong cue, producing 3D results that augment the drivable and non-drivable regions detected in 2D. The results of the 2D and 3D experiments with different terrains, mines and tunnels, together with the qualitative and quantitative evaluations, reveal that a good drivable region can be detected in dynamic underground terrains.
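
    A hedged sketch of the local-entropy feature mentioned above: the Shannon entropy of the grey-level histogram inside a fixed window around each pixel. The window size and histogram quantisation below are assumptions; the paper's exact parameters may differ.

        # Per-pixel local entropy over a (win x win) neighbourhood of a greyscale image.
        # Parameters are illustrative; a real implementation would be vectorised.

        import numpy as np

        def local_entropy(gray, win=9, bins=32):
            """Return a per-pixel Shannon entropy map for an 8-bit greyscale image."""
            pad = win // 2
            padded = np.pad(gray, pad, mode="reflect")
            h, w = gray.shape
            out = np.zeros((h, w), dtype=float)
            for i in range(h):
                for j in range(w):
                    patch = padded[i:i + win, j:j + win]
                    hist, _ = np.histogram(patch, bins=bins, range=(0, 255))
                    p = hist / hist.sum()
                    p = p[p > 0]
                    out[i, j] = -np.sum(p * np.log2(p))
            return out

        # High-entropy regions (texture, rubble) can then be separated from smoother,
        # potentially drivable floor regions by thresholding the entropy map.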

  15. Visual servo simulation of EAST articulated maintenance arm robot

    Energy Technology Data Exchange (ETDEWEB)

    Yang, Yang, E-mail: yangyang@ipp.ac.cn [Institute of Plasma Physics, Chinese Academy of Sciences, 350 Shushanhu Rd, Hefei, Anhui (China); Song, Yuntao; Pan, Hongtao; Cheng, Yong; Feng, Hansheng [Institute of Plasma Physics, Chinese Academy of Sciences, 350 Shushanhu Rd, Hefei, Anhui (China); Wu, Huapeng [Lappeenranta University of Technology, Skinnarilankatu 34, Lappeenranta (Finland)

    2016-03-15

    For the inspection and light-duty maintenance of the vacuum vessel in the EAST tokamak, a serial robot arm, called the EAST articulated maintenance arm (EAMA), has been developed. Due to the 9-m-long cantilever arm, the large flexibility of the EAMA robot introduces a problem of accurate positioning. This article presents an autonomous robot controller to cope with the positioning problem: a visual servo approach in the context of tile grasping for the EAMA robot. In the experiments, the proposed method was implemented in a simulation environment to position and track a target graphite tile with the EAMA robot. As a result, the proposed visual control scheme can successfully drive the EAMA robot to approach and track the target tile until the robot reaches the desired position. Furthermore, the functionality of the simulation software presented in this paper proves to be suitable for the development of robotic and computer vision applications.

  16. Visual servo simulation of EAST articulated maintenance arm robot

    International Nuclear Information System (INIS)

    Yang, Yang; Song, Yuntao; Pan, Hongtao; Cheng, Yong; Feng, Hansheng; Wu, Huapeng

    2016-01-01

    For the inspection and light-duty maintenance of the vacuum vessel in the EAST tokamak, a serial robot arm, called the EAST articulated maintenance arm (EAMA), has been developed. Due to the 9-m-long cantilever arm, the large flexibility of the EAMA robot introduces a problem of accurate positioning. This article presents an autonomous robot controller to cope with the positioning problem: a visual servo approach in the context of tile grasping for the EAMA robot. In the experiments, the proposed method was implemented in a simulation environment to position and track a target graphite tile with the EAMA robot. As a result, the proposed visual control scheme can successfully drive the EAMA robot to approach and track the target tile until the robot reaches the desired position. Furthermore, the functionality of the simulation software presented in this paper proves to be suitable for the development of robotic and computer vision applications.

  17. Robotics for nuclear power plants

    International Nuclear Information System (INIS)

    Shiraiwa, Takanori; Watanabe, Atsuo; Miyasawa, Tatsuo

    1984-01-01

    Demand for robots in nuclear power plants has been increasing of late in order to reduce workers' exposure to radiation. In particular, owing to the progress of microelectronics and robotics, there is an earnest desire for the advent of intelligent robots that can perform indeterminate and complicated security work. This paper presents the robots recently developed for nuclear power plants and reviews the present status of robotics. (author)

  18. Robotics for nuclear power plants

    Energy Technology Data Exchange (ETDEWEB)

    Shiraiwa, Takanori; Watanabe, Atsuo; Miyasawa, Tatsuo

    1984-10-01

    Demand for robots in nuclear power plants has been increasing of late in order to reduce workers' exposure to radiation. In particular, owing to the progress of microelectronics and robotics, there is an earnest desire for the advent of intelligent robots that can perform indeterminate and complicated security work. This paper presents the robots recently developed for nuclear power plants and reviews the present status of robotics.

  19. Toward cognitive robotics

    Science.gov (United States)

    Laird, John E.

    2009-05-01

    Our long-term goal is to develop autonomous robotic systems that have the cognitive abilities of humans, including communication, coordination, adapting to novel situations, and learning through experience. Our approach rests on the recent integration of the Soar cognitive architecture with both virtual and physical robotic systems. Soar has been used to develop a wide variety of knowledge-rich agents for complex virtual environments, including distributed training environments and interactive computer games. For development and testing in robotic virtual environments, Soar interfaces to a variety of robotic simulators and a simple mobile robot. We have recently made significant extensions to Soar that add new memories and new non-symbolic reasoning to Soar's original symbolic processing, which should significantly improve Soar's abilities for the control of robots. These extensions include episodic memory, semantic memory, reinforcement learning, and mental imagery. Episodic memory and semantic memory support the learning and recalling of prior events and situations as well as facts about the world. Reinforcement learning provides the system with the ability to tune its procedural knowledge - knowledge about how to do things. Mental imagery supports the use of diagrammatic and visual representations that are critical to support spatial reasoning. We speculate on the future of unmanned systems and the need for cognitive robotics to support dynamic instruction and taskability.

  20. Future of robotic surgery.

    Science.gov (United States)

    Lendvay, Thomas Sean; Hannaford, Blake; Satava, Richard M

    2013-01-01

    In just over a decade, robotic surgery has penetrated almost every surgical subspecialty and has even replaced some of the most commonly performed open oncologic procedures. The initial reports on patient outcomes yielded mixed results, but as more medical centers develop high-volume robotics programs, outcomes appear comparable if not improved for some applications. There are limitations to the current commercially available system, and new robotic platforms, some designed to compete in the current market and some to address niche surgical considerations, are being developed that will change the robotic landscape in the next decade. Adoption of these new systems will be dependent on overcoming barriers to true telesurgery that range from legal to logistical. As additional surgical disciplines embrace robotics and open surgery continues to be replaced by robotic approaches, it will be imperative that adequate education and training keep pace with technology. Methods to enhance surgical performance in robotics through the use of simulation and telementoring promise to accelerate learning curves and perhaps even improve surgical readiness through brief virtual-reality warm-ups and presurgical rehearsal. All these advances will need to be carefully and rigorously validated through not only patient outcomes, but also cost efficiency.

  1. HYBRID COMMUNICATION NETWORK OF MOBILE ROBOT AND QUAD-COPTER

    Directory of Open Access Journals (Sweden)

    Moustafa M. Kurdi

    2017-01-01

    Full Text Available This paper introduces the design and development of QMRS (Quadcopter Mobile Robotic System). QMRS provides a real-time obstacle avoidance capability for the Belarus-132N mobile robot in cooperation with a Phantom-4 quadcopter. QMRS combines GPS used by the mobile robot, vision and image-processing systems on both the robot and the quadcopter, and an effective search algorithm embedded in the robot. The capacity to navigate accurately is one of the major abilities a mobile robot needs to effectively execute a variety of jobs, including manipulation, docking, and transportation. To achieve the desired navigation accuracy, mobile robots are typically equipped with on-board sensors to observe persistent features in the environment, to estimate their pose from these observations, and to adjust their motion accordingly. The quadcopter takes off from the mobile robot, surveys the terrain and transmits the processed images to the terrestrial robot. The main objective of the research is the full coordination between robot and quadcopter, achieved by designing an efficient wireless communication link using Wi-Fi. In addition, the paper describes the method involving the use of vision and image processing on both robot and quadcopter, analysing the path in real time and avoiding obstacles based on the computational algorithm embedded in the robot. QMRS increases the efficiency and reliability of the whole system, especially in robot navigation, image processing and obstacle avoidance, thanks to the cooperation and connection among the different parts of the system.
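
    The coordination described above rests on a Wi-Fi link between the quadcopter and the ground robot. The sketch below shows one plausible minimal realisation using a UDP socket, with the quadcopter side pushing a processed waypoint to the robot; the IP address, port and JSON message format are illustrative assumptions, not the system's actual protocol.

        # Minimal Wi-Fi exchange over UDP: the quadcopter side sends a processed
        # waypoint, the ground-robot side receives it. Addresses and message format
        # are invented for illustration.

        import json
        import socket

        ROBOT_ADDR = ("192.168.1.10", 5005)   # assumed address of the ground robot

        def send_waypoint(lat, lon, obstacle_free=True):
            """Quadcopter side: push one processed waypoint to the mobile robot."""
            msg = json.dumps({"lat": lat, "lon": lon, "clear": obstacle_free}).encode()
            with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
                sock.sendto(msg, ROBOT_ADDR)

        def receive_waypoint(port=5005):
            """Ground-robot side: block until one waypoint arrives, then decode it."""
            with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
                sock.bind(("", port))
                data, _ = sock.recvfrom(1024)
                return json.loads(data.decode())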

  2. Architecture for robot intelligence

    Science.gov (United States)

    Peters, II, Richard Alan (Inventor)

    2004-01-01

    An architecture for robot intelligence enables a robot to learn new behaviors and create new behavior sequences autonomously and interact with a dynamically changing environment. Sensory information is mapped onto a Sensory Ego-Sphere (SES) that rapidly identifies important changes in the environment and functions much like short term memory. Behaviors are stored in a DBAM that creates an active map from the robot's current state to a goal state and functions much like long term memory. A dream state converts recent activities stored in the SES and creates or modifies behaviors in the DBAM.

  3. Robot NAO cantante

    OpenAIRE

    Caballero Pamos, Adrián

    2016-01-01

    In recent years robotics has experienced exponential growth, incorporating all kinds of functionality. Bringing the musical world into robots is one of them. This Bachelor's Final Project presents the development of a system that allows the NAO robot to read a musical score, analyse it and reproduce it as singing. The aim of the work is for the robot to act as a performer of a musical score just as a human would. It must be able to interpret which...

  4. Robots and plant safety

    International Nuclear Information System (INIS)

    Christensen, P.

    1996-02-01

    The application of robots in the harsh environments in which TELEMAN equipment will have to operate has large benefits, but also some drawbacks. The main benefit is the ability gained to perform tasks where people cannot go, while there is a possibility of inflicting damage on the equipment handled by the robot, and on the plant when mobile robots are involved. The paper describes the types of possible damage and the precautions to be taken in order to reduce the frequency of damaging events. A literature study on the topic gave only some insight into examples, but no means for a systematic treatment of the topic. (au) 16 refs

  5. Giochiamo con i robot

    Directory of Open Access Journals (Sweden)

    Andrea Bonarini

    2009-01-01

    Full Text Available "Giochiamo con i robot" ("Let's play with robots") is an interactive laboratory for adults and children created for the 2007 edition of the Genoa Science Festival. Along a path that runs from telerobotics to evolutionary robotics, the laboratory develops the theme of giving intelligence to robots. This path, whose stages are the various installations, ends in the "workshop" where visitors can build and program their own robots or disassemble and modify those exhibited along the educational route. Visitors are involved in playful activities through which they can come into contact with some of the powerful ideas of robotics,

  6. Robots in mining

    CSIR Research Space (South Africa)

    Green, J

    2010-09-01

    Full Text Available ? • FOG – Fall of ground • Who is at risk? • What is the cost of incident? • What can we do about it? The Robot Potential • Technology • Conclusion © CSIR 2010 Slide 3 Yes Robots can improve mine safety Robot patrols unoccupied areas Generates a... risk map Additional tool Inform miners in making safe © CSIR 2010 Slide 4 Miner Safety Statistics • from DME (2010/03) • March 2010 • 490 000 employed • 400 000 suppliers1 • 9 died, 7 in rockfall incidents 2 • Prior year- March 2010 • 152...

  7. Odico Formwork Robotics

    DEFF Research Database (Denmark)

    Søndergaard, Asbjørn

    2014-01-01

    In the next decade or so, the widespread adoption of robotics is set to transform the construction industry: building techniques will become increasingly automated both on– and off–site, dispensing with manual labour and enabling greater cost and operational efficiencies. What unique opportunities......, however, does robotics afford beyond operational effectiveness explicitly for the practice of architecture? What is the potential for the serial production of non–standard elements as well as for varied construction processes? In order to scale up and advance the application of robotics, for both...

  8. Next Generation Light Robotics

    DEFF Research Database (Denmark)

    Glückstad, Jesper

    Light Robotics is a new field of research where ingredients from photonics, nanotechnology and biotechnology are put together in new ways to realize light-driven robotics at the smallest scales, to solve major challenges primarily within the nano-bio domain but not limited to it. Exploring the full... potential of this new 'drone-like' light-printed, light-driven, light-actuated micro- and nano-robotics in challenging geometries requires versatile and real-time reconfigurable light addressing that can dynamically track a plurality of tiny tools in 3D to ensure real-time continuous light...

  9. Optical Robotics in Mesoscopia

    DEFF Research Database (Denmark)

    Glückstad, Jesper

    2012-01-01

    With light's minuscule momentum, shrinking robotics down to the micro-scale regime creates opportunities for exploiting optical forces and torques in advanced actuation and control at nano- and micro-scale dimensions. Advancing light-driven nano- or micro-robotics requires the optimization... of optimized shapes in the micro-robotic structures [1]. We designed different three-dimensional microstructures and had them fabricated by two-photon polymerization at BRC Hungary. These microstructures were then handled by our proprietary BioPhotonics Workstation to show proof-of-principle 3 demonstrations...

  10. Autonomous mobile robot teams

    Science.gov (United States)

    Agah, Arvin; Bekey, George A.

    1994-01-01

    This paper describes autonomous mobile robot teams performing tasks in unstructured environments. The behavior and the intelligence of the group is distributed, and the system does not include a central command base or leader. The novel concept of the Tropism-Based Cognitive Architecture is introduced, which is used by the robots in order to produce behavior transforming their sensory information to proper action. The results of a number of simulation experiments are presented. These experiments include worlds where the robot teams must locate, decompose, and gather objects, and defend themselves against hostile predators, while navigating around stationary and mobile obstacles.
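
    As a rough illustration of a tropism-style mapping from sensed entities to behaviour (not the authors' actual formulation), the sketch below scores candidate actions with a tropism table and picks the strongest positive tendency for what the robot currently senses; the table entries and entity names are invented for the example.

        # Loose illustration of tropism-based action selection: each (entity, action)
        # pair carries a tendency value; the robot chooses the action with the
        # strongest positive tendency given what it currently senses.

        TROPISMS = {
            ("object", "gather"):    0.8,
            ("predator", "flee"):    0.9,
            ("predator", "gather"): -0.7,
            ("obstacle", "avoid"):   0.6,
        }

        def select_action(sensed_entities):
            best_action, best_value = "wander", 0.0   # default behaviour
            for entity in sensed_entities:
                for (ent, action), value in TROPISMS.items():
                    if ent == entity and value > best_value:
                        best_action, best_value = action, value
            return best_action

        print(select_action(["object", "obstacle"]))   # -> "gather"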

  11. ROBOT LITERACY AN APPROACH FOR SHARING SOCIETY WITH INTELLIGENT ROBOTS

    Directory of Open Access Journals (Sweden)

    Hidetsugu Suto

    2013-12-01

    Full Text Available A novel concept of media education called "robot literacy" is proposed. Here, robot literacy refers to the means of forming an appropriate relationship with intelligent robots; it can be considered a kind of media literacy. People born after the Internet age can be considered "digital natives" who have new morals and values and behave differently from previous generations in Internet societies. This can cause various problems between generations, and thus the need for media literacy education is increasing. Internet technologies, as well as robotics technologies, are growing rapidly, and people born after the "home robot age," whom the author calls "robot natives," will be expected to have a certain degree of "robot literacy." In this paper, the concept of robot literacy is defined and an approach to robot literacy education is discussed.

  12. A High Precision Approach to Calibrate a Structured Light Vision Sensor in a Robot-Based Three-Dimensional Measurement System

    Directory of Open Access Journals (Sweden)

    Defeng Wu

    2016-08-01

    Full Text Available A robot-based three-dimensional (3D) measurement system is presented. In the presented system, a structured light vision sensor is mounted on the arm of an industrial robot. Measurement accuracy is one of the most important aspects of any 3D measurement system. To improve the measurement accuracy of the structured light vision sensor, a novel sensor calibration approach is proposed. The approach is based on a number of fixed concentric circles manufactured on a calibration target; the concentric circles are employed to determine the real projected centres of the circles. Then, a calibration point generation procedure is carried out with the help of the calibrated robot. When enough calibration points are available, the radial alignment constraint (RAC) method is adopted to calibrate the camera model. A multilayer perceptron neural network (MLPNN) is then employed to identify the calibration residuals left after the application of the RAC method. The hybrid of the pinhole model and the MLPNN is therefore used to represent the real camera model. The effectiveness of the presented technique was validated using a standard ball; the experimental results demonstrate that the proposed calibration approach achieves a highly accurate model of the structured light vision sensor.
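
    The hybrid model described above combines a pinhole camera model with an MLP that absorbs the remaining calibration residuals. The sketch below illustrates that idea with NumPy and scikit-learn's MLPRegressor; the network size, training settings and the omission of the RAC calibration step itself are assumptions of the example, not the paper's implementation.

        # Sketch of the hybrid calibration idea: a pinhole model predicts image points
        # from 3D calibration points, and a small MLP is trained on the residuals.
        # Requires numpy and scikit-learn; parameters are illustrative.

        import numpy as np
        from sklearn.neural_network import MLPRegressor

        def pinhole_project(points_3d, K, R, t):
            """Project Nx3 world points with intrinsics K and pose (R, t) to Nx2 pixels."""
            cam = R @ points_3d.T + t.reshape(3, 1)      # 3xN camera-frame points
            uvw = K @ cam
            return (uvw[:2] / uvw[2]).T                  # Nx2 pixel coordinates

        def fit_residual_mlp(points_3d, observed_uv, K, R, t):
            """Learn the systematic error left over by the pinhole model."""
            predicted_uv = pinhole_project(points_3d, K, R, t)
            residuals = observed_uv - predicted_uv
            mlp = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=5000, random_state=0)
            mlp.fit(predicted_uv, residuals)             # residual as a function of image position
            return mlp

        def hybrid_project(points_3d, K, R, t, mlp):
            uv = pinhole_project(points_3d, K, R, t)
            return uv + mlp.predict(uv)                  # pinhole prediction + learned correction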

  13. Vision-based real-time position control of a semi-automated system for robot-assisted joint fracture surgery.

    Science.gov (United States)

    Dagnino, Giulio; Georgilas, Ioannis; Tarassoli, Payam; Atkins, Roger; Dogramadzi, Sanja

    2016-03-01

    Joint fracture surgery quality can be improved by a robotic system with high-accuracy and high-repeatability fracture fragment manipulation. A new real-time vision-based system for fragment manipulation during robot-assisted fracture surgery was developed and tested. The control strategy merges fast open-loop control with vision-based control. This two-phase process is designed to eliminate the open-loop positioning errors by closing the control loop using visual feedback provided by an optical tracking system. The accuracy of the control system was evaluated in robot positioning trials, and fracture reduction accuracy was tested in trials on an ex vivo porcine model. The system achieved high fracture reduction reliability, with a reduction accuracy of 0.09 mm (translations) and of [Formula: see text] (rotations), maximum observed errors in the order of 0.12 mm (translations) and of [Formula: see text] (rotations), and a reduction repeatability of 0.02 mm and [Formula: see text]. The proposed vision-based system was shown to be effective and suitable for real joint fracture surgical procedures, contributing a potential improvement in their quality.
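
    A caricature of the two-phase control strategy described above: a fast open-loop move toward the planned pose followed by closed-loop correction driven by optical-tracking feedback until the residual error falls below tolerance. The robot and tracker classes below are simulated stand-ins, and the gain, tolerance and error model are invented for the example; this is not the paper's controller.

        # Two-phase positioning: open-loop move, then vision-based refinement.
        # SimRobot/SimTracker are toy stand-ins for the real robot and optical tracker.

        import numpy as np

        class SimRobot:
            def __init__(self):
                self.pose = np.zeros(6)
            def move_to(self, pose):
                # open-loop moves land with a systematic offset (flexibility, backlash, ...)
                self.pose = np.asarray(pose, float) + 0.5
            def move_relative(self, delta):
                self.pose = self.pose + delta

        class SimTracker:
            def __init__(self, robot):
                self.robot = robot
            def measure_pose(self):
                return self.robot.pose            # ideal measurement of the current pose

        def reduce_fragment(robot, tracker, target_pose, tol=0.1, gain=0.5, max_iter=50):
            robot.move_to(target_pose)            # phase 1: fast open-loop positioning
            for _ in range(max_iter):             # phase 2: vision-based refinement
                error = np.asarray(target_pose, float) - np.asarray(tracker.measure_pose(), float)
                if np.linalg.norm(error) < tol:
                    return True
                robot.move_relative(gain * error) # small corrective step
            return False

        robot = SimRobot()
        print(reduce_fragment(robot, SimTracker(robot), target_pose=np.array([10, 5, 2, 0, 0, 0])))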

  14. Visual servoing in medical robotics: a survey. Part I: endoscopic and direct vision imaging - techniques and applications.

    Science.gov (United States)

    Azizian, Mahdi; Khoshnam, Mahta; Najmaei, Nima; Patel, Rajni V

    2014-09-01

    Intra-operative imaging is widely used to provide visual feedback to a clinician when he/she performs a procedure. In visual servoing, surgical instruments and parts of tissue/body are tracked by processing the acquired images. This information is then used within a control loop to manoeuvre a robotic manipulator during a procedure. A comprehensive search of electronic databases was completed for the period 2000-2013 to provide a survey of the visual servoing applications in medical robotics. The focus is on medical applications where image-based tracking is used for closed-loop control of a robotic system. Detailed classification and comparative study of various contributions in visual servoing using endoscopic or direct visual images are presented and summarized in tables and diagrams. The main challenges in using visual servoing for medical robotic applications are identified and potential future directions are suggested. 'Supervised automation of medical robotics' is found to be a major trend in this field. Copyright © 2013 John Wiley & Sons, Ltd.
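
    For orientation only, the sketch below shows the classical image-based visual servoing law that much of the surveyed work builds on: the commanded camera (or tool) velocity is v = -lambda * pinv(L) * (s - s_desired), driving the tracked image features s toward their desired values. The gain and the construction of the interaction matrix L are left to the caller; this is a generic textbook form, not any specific surveyed system.

        # Classical image-based visual servoing (IBVS) control law.

        import numpy as np

        def ibvs_velocity(s, s_desired, L, gain=0.5):
            """Return a 6-vector camera velocity command from the feature error.

            s, s_desired: current and desired image feature vectors (length 2k).
            L: interaction (image Jacobian) matrix of shape (2k, 6).
            """
            error = np.asarray(s, float) - np.asarray(s_desired, float)
            return -gain * np.linalg.pinv(L) @ error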

  15. Robotized production systems observed in modern plants

    Science.gov (United States)

    Saverina, A. N.

    1985-09-01

    Robots, robotized lines and sectors are no longer innovations in shops at automotive plants. The widespread robotization of automobile assembly operations is described in general terms. Robot use for machining operations is also discussed.

  16. Application of robotics in nuclear facilities

    International Nuclear Information System (INIS)

    Byrd, J.S.; Fisher, J.J.

    1986-01-01

    Industrial robots and other robotic systems have been successfully applied at the Savannah River nuclear site. These applications, new robotic systems presently under development, general techniques for the employment of robots in nuclear facilities, and future systems are discussed

  17. Human-machine Interface for Presentation Robot

    Czech Academy of Sciences Publication Activity Database

    Krejsa, Jiří; Ondroušek, V.

    2012-01-01

    Roč. 6, č. 2 (2012), s. 17-21 ISSN 1897-8649 Institutional research plan: CEZ:AV0Z20760514 Keywords : human-robot interface * mobile robot * presentation robot Subject RIV: JD - Computer Applications, Robotics

  18. Continuum limbed robots for locomotion

    Science.gov (United States)

    Mutlu, Alper

    This thesis focuses on continuum robots based on pneumatic muscle technology. We introduce a novel approach to use these muscles as limbs of lightweight legged robots. The flexibility of the continuum legs of these robots offers the potential to perform some duties that are not possible with classical rigid-link robots. Potential applications are as space robots in low gravity, and as cave explorer robots. The thesis covers the fabrication process of continuum pneumatic muscles and limbs. It also provides some new experimental data on this technology. Afterwards, the designs of two different novel continuum robots - one tripod, one quadruped - are introduced. Experimental data from tests using the robots is provided. The experimental results are the first published example of locomotion with tripod and quadruped continuum legged robots. Finally, discussion of the results and how far this technology can go forward is presented.

  19. Soft computing in advanced robotics

    CERN Document Server

    Kobayashi, Ichiro; Kim, Euntai

    2014-01-01

    Intelligent systems and robotics are inevitably bound up: intelligent robots embody system integration by using intelligent systems. One can think of intelligent systems as the cell units and intelligent robots as the body components; the two technologies have progressed in synchrony. Leveraging robotics and intelligent systems, applications cover a boundless range from our daily life to the space station: manufacturing, healthcare, environment, energy, education, personal assistance and logistics. This book aims at presenting research results relevant to intelligent robotics technology. We propose to researchers and practitioners some methods to advance intelligent systems and apply them to advanced robotics technology. This book consists of 10 contributions that feature mobile robots, robot emotion, electric power steering, multi-agent systems, fuzzy visual navigation, adaptive network-based fuzzy inference systems, swarm EKF localization and inspection robots. Th...

  20. Fundamentals of soft robot locomotion.

    Science.gov (United States)

    Calisti, M; Picardi, G; Laschi, C

    2017-05-01

    Soft robotics and its related technologies enable robot abilities in several robotics domains including, but not limited to, manipulation, manufacturing, human-robot interaction and locomotion. Although field applications have emerged for soft manipulation and human-robot interaction, mobile soft robots appear to remain in the research stage, involving the somewhat conflicting goals of having a deformable body and exerting forces on the environment to achieve locomotion. This paper aims to provide a reference guide for researchers approaching mobile soft robotics, to describe the underlying principles of soft robot locomotion with its pros and cons, and to envisage applications and further developments for mobile soft robotics. © 2017 The Author(s).

  1. Conceptions of health service robots

    DEFF Research Database (Denmark)

    Lystbæk, Christian Tang

    2015-01-01

    Technology developments create rich opportunities for health service providers to introduce service robots in health care. While the potential benefits of applying robots in health care are extensive, the research into the conceptions of health service robot and its importance for the uptake...... of robotics technology in health care is limited. This article develops a model of the basic conceptions of health service robots that can be used to understand different assumptions and values attached to health care technology in general and health service robots in particular. The article takes...... a discursive approach in order to develop a conceptual framework for understanding the social values of health service robots. First a discursive approach is proposed to develop a typology of conceptions of health service robots. Second, a model identifying four basic conceptions of health service robots...

  2. Situation Assessment for Mobile Robots

    DEFF Research Database (Denmark)

    Beck, Anders Billesø

    Mobile robots have become a mature technology. The first cable-guided logistics robots were introduced in industry almost 60 years ago. In that time the market for mobile robots in industry has experienced only very modest growth, and only 2,100 systems were sold worldwide in 2011. In recent... years, many other domains have adopted mobile robots, such as logistics robots at hospitals and the vacuum robots in our homes. However, considering the achievements in research over the last 15 years within perception and operation in natural environments, together with the reductions of costs in modern... sensor systems, the growth potential for mobile robot applications is enormous. Many new technological components are available to push the limits of commercial mobile robot applications, but a key hindrance is reliability. Natural environments are complex and dynamic, and thus the risk of robots...

  3. Robotics and remote systems applications

    International Nuclear Information System (INIS)

    Rabold, D.E.

    1996-01-01

    This article is a review of numerous remote inspection techniques in use at the Savannah River (and other) facilities. These include: (1) reactor tank inspection robot, (2) californium waste removal robot, (3) fuel rod lubrication robot, (4) cesium source manipulation robot, (5) tank 13 survey and decontamination robots, (6) hot gang valve corridor decontamination and junction box removal robots, (7) lead removal from deionizer vessels robot, (8) HB line cleanup robot, (9) remote operation of a front end loader at WIPP, (10) remote overhead video extendible robot, (11) semi-intelligent mobile observing navigator, (12) remote camera systems in the SRS canyons, (13) cameras and borescope for the DWPF, (14) Hanford waste tank camera system, (15) in-tank precipitation camera system, (16) F-area retention basin pipe crawler, (17) waste tank wall crawler and annulus camera, (18) duct inspection, and (19) deionizer resin sampling

  4. Sample Return Robot

    Data.gov (United States)

    National Aeronautics and Space Administration — This Challenge requires demonstration of an autonomous robotic system to locate and collect a set of specific sample types from a large planetary analog area and...

  5. Biological Soft Robotics.

    Science.gov (United States)

    Feinberg, Adam W

    2015-01-01

    In nature, nanometer-scale molecular motors are used to generate force within cells for diverse processes from transcription and transport to muscle contraction. This adaptability and scalability across wide temporal, spatial, and force regimes have spurred the development of biological soft robotic systems that seek to mimic and extend these capabilities. This review describes how molecular motors are hierarchically organized into larger-scale structures in order to provide a basic understanding of how these systems work in nature and the complexity and functionality we hope to replicate in biological soft robotics. These span the subcellular scale to macroscale, and this article focuses on the integration of biological components with synthetic materials, coupled with bioinspired robotic design. Key examples include nanoscale molecular motor-powered actuators, microscale bacteria-controlled devices, and macroscale muscle-powered robots that grasp, walk, and swim. Finally, the current challenges and future opportunities in the field are addressed.

  6. Robotic Comfort Zones

    National Research Council Canada - National Science Library

    Likhachev, Maxim; Arkin, Ronald C

    2006-01-01

    .... A review of the existing study of human comfort, especially regarding its presence in infants, is conducted with the goal being to determine the relevant characteristics for mapping it onto the robotics domain...

  7. Tank-automotive robotics

    Science.gov (United States)

    Lane, Gerald R.

    1999-07-01

    To provide an overview of tank-automotive robotics. The briefing will contain program overviews and inter-relationships and the technology challenges of TARDEC-managed unmanned and robotic ground vehicle programs. Specific emphasis will focus on technology developments and approaches to achieve semi-autonomous operation and inherent chassis mobility features. Programs to be discussed include: Demo III Experimental Unmanned Vehicle (XUV), Tactical Mobile Robotics (TMR), Intelligent Mobility, Commander's Driver Testbed, Collision Avoidance, and the International Ground Robotics Competition (IGRC). Specifically, the paper will discuss unique exterior/outdoor challenges facing the IGRC competing teams and the synergy created between the IGRC and ongoing DoD semi-autonomous Unmanned Ground Vehicle and DoT Intelligent Transportation System programs. Sensor and chassis approaches to meet the IGRC challenges and obstacles will be shown and discussed. Shortfalls in performance to meet the IGRC challenges will be identified.

  8. DOE Robotics Project

    Energy Technology Data Exchange (ETDEWEB)

    1991-01-01

    This document provides the bimonthly progress reports on the Department of Energy (DOE) Robotics Project by the University of Michigan. Reports are provided for the periods December 1990/January 1991 through June 1991/July 1991. (FI)

  9. MARYLAND ROBOTICS CENTER

    Data.gov (United States)

    Federal Laboratory Consortium — The Maryland Robotics Center is an interdisciplinary research center housed in the Institute for Systems Research within the A. James Clark School...

  10. Introduction to humanoid robotics

    CERN Document Server

    Kajita, Shuuji; Harada, Kensuke; Yokoi, Kazuhito

    2014-01-01

    This book is for researchers, engineers, and students who want to understand how humanoid robots move and are controlled. The book starts with an overview of humanoid robotics research history and the state of the art. Then it explains the required mathematics and physics, such as the kinematics of multi-body systems, the Zero-Moment Point (ZMP) and its relationship with body motion. Biped walking control is discussed in depth, since it is one of the main interests of humanoid robotics. Various topics of whole-body motion generation are also discussed. Finally, multi-body dynamics is presented to simulate the complete dynamic behavior of a humanoid robot. Throughout the book, MATLAB code is shown to test the algorithms and to help the reader's understanding.
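
    One relationship the book develops is that between the Zero-Moment Point and body motion. Under the common cart-table (linear inverted pendulum) approximation, the ZMP is p = c - (z_c / g) * c_ddot, with c the horizontal CoM position, z_c the (assumed constant) CoM height and g gravity. A minimal numerical version of that relationship, written here in Python rather than the book's MATLAB, is sketched below; sampling rate and CoM height are arbitrary example values.

        # ZMP from a sampled CoM trajectory under the cart-table approximation.

        import numpy as np

        def zmp_from_com(com_xy, dt, z_c, g=9.81):
            """com_xy: Nx2 array of horizontal CoM positions sampled every dt seconds."""
            com_xy = np.asarray(com_xy, dtype=float)
            acc = np.gradient(np.gradient(com_xy, dt, axis=0), dt, axis=0)  # CoM acceleration
            return com_xy - (z_c / g) * acc

        # A walking-pattern generator keeps this ZMP trajectory inside the support polygon.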

  11. Vascular Surgery and Robotics

    Directory of Open Access Journals (Sweden)

    Indrani Sen

    2016-01-01

    Full Text Available The application of robotics to vascular surgery has not progressed as rapidly as that of endovascular technology, but this is changing with the amalgamation of these two fields. The advent of endovascular robotics is an exciting development which overcomes many of the limitations of endovascular therapy, such as vessel tortuosity and operator fatigue. This has much clinical appeal for the surgeon and holds significant promise of better patient outcomes. As with most newer technological advances, it is still limited by cost and availability. However, this field has seen rapid progress in the last decade, with the technology moving into the clinical realm. This review details the development of robotics, its applications, outcomes, advantages, disadvantages and current advances, focusing on vascular and endovascular robotics.

  12. Robotics in Colorectal Surgery

    Science.gov (United States)

    Weaver, Allison; Steele, Scott

    2016-01-01

    Over the past few decades, robotic surgery has developed from a futuristic dream to a real, widely used technology. Today, robotic platforms are used for a range of procedures and have added a new facet to the development and implementation of minimally invasive surgeries. The potential advantages are enormous, but the current progress is impeded by high costs and limited technology. However, recent advances in haptic feedback systems and single-port surgical techniques demonstrate a clear role for robotics and are likely to improve surgical outcomes. Although robotic surgeries have become the gold standard for a number of procedures, the research in colorectal surgery is not definitive and more work needs to be done to prove its safety and efficacy to both surgeons and patients. PMID:27746895

  13. Robotic aortic surgery.

    Science.gov (United States)

    Duran, Cassidy; Kashef, Elika; El-Sayed, Hosam F; Bismuth, Jean

    2011-01-01

    Surgical robotics was first utilized to facilitate neurosurgical biopsies in 1985, and it has since found application in orthopedics, urology, gynecology, and cardiothoracic, general, and vascular surgery. Surgical assistance systems provide intelligent, versatile tools that augment the physician's ability to treat patients by eliminating hand tremor and enabling dexterous operation inside the patient's body. Surgical robotics systems have enabled surgeons to treat otherwise untreatable conditions while also reducing morbidity and error rates, shortening operative times, reducing radiation exposure, and improving overall workflow. These capabilities have begun to be realized in two important realms of aortic vascular surgery, namely, flexible robotics for exclusion of complex aortic aneurysms using branched endografts, and robot-assisted laparoscopic aortic surgery for occlusive and aneurysmal disease.

  14. Teaching Joint-Level Robot Programming with a New Robotics Software Tool

    Directory of Open Access Journals (Sweden)

    Fernando Gonzalez

    2017-12-01

    Full Text Available With the rising popularity of robotics in our modern world there is an increase in the number of engineering programs that offer the basic Introduction to Robotics course. This common introductory robotics course generally covers the fundamental theory of robotics including robot kinematics, dynamics, differential movements, trajectory planning and basic computer vision algorithms commonly used in the field of robotics. Joint programming, the task of writing a program that directly controls the robot’s joint motors, is an activity that involves robot kinematics, dynamics, and trajectory planning. In this paper, we introduce a new educational robotics tool developed for teaching joint programming. The tool allows the student to write a program in a modified C language that controls the movement of the arm by controlling the velocity of each joint motor. This is a very important activity in the robotics course and leads the student to gain knowledge of how to build a robotic arm controller. Sample assignments are presented for different levels of difficulty.
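
    As an illustration of the kind of joint-level program the article has students write (shown here in Python rather than the tool's modified C), the sketch below commands a single joint with a trapezoidal velocity profile and integrates it to an approximate final joint angle; the timing and speed parameters are arbitrary examples, not values from the article.

        # Trapezoidal joint velocity profile: accelerate, cruise, decelerate.

        def trapezoidal_velocity(t, t_total, v_max, t_ramp):
            """Joint velocity at time t for a profile with ramp time t_ramp and cruise speed v_max."""
            if t < 0 or t > t_total:
                return 0.0
            if t < t_ramp:
                return v_max * t / t_ramp                 # acceleration phase
            if t > t_total - t_ramp:
                return v_max * (t_total - t) / t_ramp     # deceleration phase
            return v_max                                  # cruise phase

        # Integrating the profile gives the joint angle; a real controller would send
        # these velocities to the joint motor at a fixed rate, e.g. every 10 ms.
        dt, t_total, v_max, t_ramp = 0.01, 2.0, 0.5, 0.4
        angle = sum(trapezoidal_velocity(k * dt, t_total, v_max, t_ramp) * dt
                    for k in range(int(t_total / dt) + 1))
        print(round(angle, 3))   # approximate final joint angle in radians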

  15. Robots in Elderly Care

    Directory of Open Access Journals (Sweden)

    Alessandro Vercelli

    2018-03-01

    Full Text Available Low birth rate and long life expectancy represent an explosive mixture, resulting in the rapid aging of the population. The costs of healthcare in the grey society are increasing dramatically, and soon there will not be enough resources and people for care. This context requires conceptually new elderly care solutions that progressively reduce the percentage of human-based care. Research on robot-based solutions for elderly care and active ageing aims to answer these needs. From a general perspective, robotics has the power to completely reshape the landscape of healthcare, both in its structure and its operation. In fact, the long-term sustainability of healthcare systems could be addressed by automation powered by digital health technologies, such as artificial intelligence, 3D printing or robotics. The latter could take over monotonous work from healthcare workers, which would allow them to focus more on patients and carry a lower workload. Robots might be used in elderly care with several different aims: (i) robots may act as caregivers, i.e. assist the elderly; (ii) they can provide reminders and instructions for activities of daily life and safety, and/or assist their carers in daily tasks; (iii) they can help monitor their behaviour and health; and (iv) they can provide companionship, including entertainment and hobbies, reminiscence and social contact. The use of robots with human subjects/patients raises several sensitive questions. First of all, robots may represent information hubs and can collect an incredible amount of data about the subjects and their environment. In fact, they record habits such as sleeping, exercising, third persons entering the house, and appointments. Communications may be continuously recorded. Moreover, by connecting with medical devices, they can store medical data. On one hand, this represents a very powerful tool to collect information about the single subject (precision medicine), about disease (thus eventually finding

  16. Wheeled hopping robot

    Science.gov (United States)

    Fischer, Gary J [Albuquerque, NM

    2010-08-17

    The present invention provides robotic vehicles having wheeled and hopping mobilities that are capable of traversing (e.g. by hopping over) obstacles that are large in size relative to the robot, and are capable of operation in unpredictable terrain over long ranges. The present invention further provides combustion-powered linear actuators, which can include latching mechanisms to facilitate pressurized fueling of the actuators, and which can be used to provide wheeled vehicles with a hopping mobility.

  17. SPECIAL ROBOTS FOR ENERGETICS

    Directory of Open Access Journals (Sweden)

    Sit M.L.

    2014-04-01

    Full Text Available An overview is given of robots used in the power industry for diagnostics of power lines and cable lines; for the control, monitoring and maintenance of wind turbines; in nuclear energy; for optimum orientation of solar photovoltaic plants; and for cleaning solar panels. Equations of the statics and dynamics of a robotic car which climbs along a vertical flexible rope are considered. A design built on the basis of "Lego Mindstorms" to address this problem is presented.

  18. 3D light robotics

    DEFF Research Database (Denmark)

    Glückstad, Jesper; Palima, Darwin; Villangca, Mark Jayson

    2016-01-01

    As celebrated by the Nobel Prize 2014 in Chemistry, light-based technologies can now overcome the diffraction barrier for imaging with nanoscopic resolution by so-called super-resolution microscopy. However, interactive investigations coupled with advanced imaging modalities at these small scales … research discipline that could potentially be able to offer the full package needed for true "active nanoscopy" by use of so-called light-driven micro-robotics, or Light Robotics in short.

  19. Robotics and general surgery.

    Science.gov (United States)

    Jacob, Brian P; Gagner, Michel

    2003-12-01

    Robotics is now being used in all surgical fields, including general surgery. By increasing intra-abdominal articulations while operating through small incisions, robotics is increasingly being used for a large number of visceral and solid organ operations, including those for the gallbladder, esophagus, stomach, intestines, colon, and rectum, as well as for the endocrine organs. Robotics and general surgery are blending for the first time in history and as a specialty field should continue to grow for many years to come. We continuously demand solutions to questions and limitations that are experienced in our daily work. Laparoscopy is laden with limitations such as fixed axis points at the trocar insertion sites, two-dimensional video monitors, limited dexterity at the instrument tips, lack of haptic sensation, and in some cases poor ergonomics. The creation of a surgical robot system with 3D visual capacity seems to deal with most of these limitations. Although some in the surgical community continue to test the feasibility of these surgical robots and to question the necessity of such an expensive venture, others are already postulating how to improve the next generation of telemanipulators, and in so doing are looking beyond today's horizon to find simpler solutions. As the robotic era enters the world of the general surgeon, more and more complex procedures will be able to be approached through small incisions. As technology catches up with our imaginations, robotic instruments (as opposed to robots) and 3D monitoring will become routine and continue to improve patient care by providing surgeons with the most precise, least traumatic ways of treating surgical disease.

  20. The Application of Virtex-II Pro FPGA in High-Speed Image Processing Technology of Robot Vision Sensor

    International Nuclear Information System (INIS)

    Ren, Y J; Zhu, J G; Yang, X Y; Ye, S H

    2006-01-01

    The Virtex-II Pro FPGA is applied to the vision sensor tracking system of the IRB2400 robot. The hardware platform, which undertakes the tasks of improving SNR and compressing data, is constructed using the high-speed image processing capability of the FPGA. The lower-level image-processing algorithm is realized by combining the FPGA fabric and the embedded CPU. Image processing is accelerated by the introduction of the FPGA and CPU, and the embedded CPU makes it easy to realize the logic design of the interface. Some key techniques are presented in the text, such as the read-write process, template matching and convolution, and some modules are simulated as well. Finally, a comparison is carried out among modules using this design, a PC and a DSP. Because the core of the high-speed image processing system is an FPGA chip whose function can be updated conveniently, the measurement system is, to a degree, intelligent.

  1. The Application of Virtex-II Pro FPGA in High-Speed Image Processing Technology of Robot Vision Sensor

    Science.gov (United States)

    Ren, Y. J.; Zhu, J. G.; Yang, X. Y.; Ye, S. H.

    2006-10-01

    The Virtex-II Pro FPGA is applied to the vision sensor tracking system of the IRB2400 robot. The hardware platform, which undertakes the tasks of improving SNR and compressing data, is constructed using the high-speed image processing capability of the FPGA. The lower-level image-processing algorithm is realized by combining the FPGA fabric and the embedded CPU. Image processing is accelerated by the introduction of the FPGA and CPU, and the embedded CPU makes it easy to realize the logic design of the interface. Some key techniques are presented in the text, such as the read-write process, template matching and convolution, and some modules are simulated as well. Finally, a comparison is carried out among modules using this design, a PC and a DSP. Because the core of the high-speed image processing system is an FPGA chip whose function can be updated conveniently, the measurement system is, to a degree, intelligent.
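    For readers unfamiliar with the two kernels named in this record, the following sketch shows what template matching and 2D convolution compute functionally; the record's actual implementation runs in FPGA logic with an embedded CPU, so this is only an assumed software analogue, not the described system.

    ```python
    # Software sketch of the two image-processing kernels named above (template
    # matching and 2D convolution); functional illustration only.
    import numpy as np

    def convolve2d(image, kernel):
        """Plain 'valid' 2D convolution (kernel flipped, as in the mathematical definition)."""
        k = np.flipud(np.fliplr(kernel))
        kh, kw = k.shape
        h, w = image.shape
        out = np.zeros((h - kh + 1, w - kw + 1))
        for i in range(out.shape[0]):
            for j in range(out.shape[1]):
                out[i, j] = np.sum(image[i:i + kh, j:j + kw] * k)
        return out

    def match_template_ssd(image, template):
        """Return the (row, col) of the best match by sum of squared differences."""
        th, tw = template.shape
        h, w = image.shape
        best, best_pos = np.inf, (0, 0)
        for i in range(h - th + 1):
            for j in range(w - tw + 1):
                ssd = np.sum((image[i:i + th, j:j + tw] - template) ** 2)
                if ssd < best:
                    best, best_pos = ssd, (i, j)
        return best_pos
    ```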

  2. Swarm robotics and minimalism

    Science.gov (United States)

    Sharkey, Amanda J. C.

    2007-09-01

    Swarm Robotics (SR) is closely related to Swarm Intelligence, and both were initially inspired by studies of social insects. Their guiding principles are based on their biological inspiration and take the form of an emphasis on decentralized local control and communication. Earlier studies went a step further in emphasizing the use of simple reactive robots that only communicate indirectly through the environment. More recently SR studies have moved beyond these constraints to explore the use of non-reactive robots that communicate directly, and that can learn and represent their environment. There is no clear agreement in the literature about how far such extensions of the original principles could go. Should there be any limitations on the individual abilities of the robots used in SR studies? Should knowledge of the capabilities of social insects lead to constraints on the capabilities of individual robots in SR studies? There is a lack of explicit discussion of such questions, and researchers have adopted a variety of constraints for a variety of reasons. A simple taxonomy of swarm robotics is presented here with the aim of addressing and clarifying these questions. The taxonomy distinguishes subareas of SR based on the emphases and justifications for minimalism and individual simplicity.

  3. Multi-robot caravanning

    KAUST Repository

    Denny, Jory

    2013-11-01

    We study multi-robot caravanning, which is loosely defined as the problem of a heterogeneous team of robots visiting specific areas of an environment (waypoints) as a group. After formally defining this problem, we propose a novel solution that requires minimal communication and scales with the number of waypoints and robots. Our approach restricts explicit communication and coordination to occur only when robots reach waypoints, and relies on implicit coordination when moving between a given pair of waypoints. At the heart of our algorithm is the use of leader election to efficiently exploit the unique environmental knowledge available to each robot in order to plan paths for the group, which makes it general enough to work with robots that have heterogeneous representations of the environment. We implement our approach both in simulation and on a physical platform, and characterize the performance of the approach under various scenarios. We demonstrate that our approach can successfully be used to combine the planning capabilities of different agents. © 2013 IEEE.
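    The record does not spell out its election rule, so the following is only a generic leader-election sketch (most environmental knowledge wins, ties broken by ID); the Robot fields and the "knowledge" score are invented for illustration and are not the paper's algorithm.

    ```python
    # Illustrative leader election at a waypoint; not the paper's specific scheme.
    from dataclasses import dataclass

    @dataclass
    class Robot:
        robot_id: int
        known_waypoints: int  # how much of the environment this robot knows

    def elect_leader(robots):
        """Pick the robot with the most environmental knowledge; break ties by higher ID."""
        return max(robots, key=lambda r: (r.known_waypoints, r.robot_id))

    if __name__ == "__main__":
        team = [Robot(1, 4), Robot(2, 7), Robot(3, 7)]
        print(elect_leader(team).robot_id)  # -> 3 (tie on knowledge, higher ID wins)
    ```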

  4. Self-Organizing Robots

    CERN Document Server

    Murata, Satoshi

    2012-01-01

    It is man’s ongoing hope that a machine could somehow adapt to its environment by reorganizing itself. This is what the notion of self-organizing robots is based on. The theme of this book is to examine the feasibility of creating such robots within the limitations of current mechanical engineering. The topics comprise the following aspects of such a pursuit: the philosophy of design of self-organizing mechanical systems; self-organization in biological systems; the history of self-organizing mechanical systems; a case study of a self-assembling/self-repairing system as an autonomous distributed system; a self-organizing robot that can create its own shape and robotic motion; implementation and instrumentation of self-organizing robots; and the future of self-organizing robots. All topics are illustrated with many up-to-date examples, including those from the authors’ own work. The book does not require advanced knowledge of mathematics to be understood, and will be of great benefit to students in the rob...

  5. Robotic assisted andrological surgery

    Science.gov (United States)

    Parekattil, Sijo J; Gudeloglu, Ahmet

    2013-01-01

    The introduction of the operative microscope for andrological surgery in the 1970s provided enhanced magnification and accuracy, unparalleled by any previous visual loupe or magnification technique. This technology revolutionized techniques for microsurgery in andrology. Today, we may be on the verge of a second such revolution with the incorporation of robotic assisted platforms for microsurgery in andrology. Robotic assisted microsurgery is being utilized to a greater degree in andrology and a number of other microsurgical fields, such as ophthalmology, hand surgery, and plastic and reconstructive surgery. The potential advantages of robotic assisted platforms include elimination of tremor, improved stability, surgeon ergonomics, scalability of motion, multi-input visual interfaces with up to three simultaneous visual views, enhanced magnification, and the ability to manipulate three surgical instruments and cameras simultaneously. This review paper begins with the historical development of robotic microsurgery. It then provides an in-depth presentation of the technique and outcomes of common robotic microsurgical andrological procedures, such as vasectomy reversal, subinguinal varicocelectomy, targeted spermatic cord denervation (for chronic orchialgia) and robotic assisted microsurgical testicular sperm extraction (microTESE). PMID:23241637

  6. Colias: An Autonomous Micro Robot for Swarm Robotic Applications

    Directory of Open Access Journals (Sweden)

    Farshad Arvin

    2014-07-01

    Full Text Available Robotic swarms that take inspiration from nature are becoming a fascinating topic for multi-robot researchers. The aim is to control a large number of simple robots in order to solve common complex tasks. Due to the hardware complexities and cost of robot platforms, current research in swarm robotics is mostly performed by simulation software. The simulation of large numbers of these robots in robotic swarm applications is extremely complex and often inaccurate due to the poor modelling of external conditions. In this paper, we present the design of a low-cost, open-platform, autonomous micro-robot (Colias for robotic swarm applications. Colias employs a circular platform with a diameter of 4 cm. It has a maximum speed of 35 cm/s which enables it to be used in swarm scenarios very quickly over large arenas. Long-range infrared modules with an adjustable output power allow the robot to communicate with its direct neighbours at a range of 0.5 cm to 2 m. Colias has been designed as a complete platform with supporting software development tools for robotics education and research. It has been tested in both individual and swarm scenarios, and the observed results demonstrate its feasibility for use as a micro-sized mobile robot and as a low-cost platform for robot swarm applications.

  7. Measuring Attitudes Towards Telepresence Robots

    OpenAIRE

    M Tsui, Katherine; Desai, Munjal; A. Yanco, Holly; Cramer, Henriette; Kemper, Nicander

    2011-01-01

    Studies using Nomura et al.’s “Negative Attitude toward Robots Scale” (NARS) [1] as an attitudinal measure have featured robots that were perceived to be autonomous, independent agents. State of the art telepresence robots require an explicit human-in-the-loop to drive the robot around. In this paper, we investigate if NARS can be used with telepresence robots. To this end, we conducted three studies in which people watched videos of telepresence robots (n=70), operated te...

  8. Robot Tracer with Visual Camera

    Science.gov (United States)

    Jabbar Lubis, Abdul; Dwi Lestari, Yuyun; Dafitri, Haida; Azanuddin

    2017-12-01

    A robot is a versatile tool that can take over human work functions. A robot is a device that can be reprogrammed according to user needs. Wireless networking for remote monitoring can be used to build a robot whose movement can be monitored against a blueprint and whose chosen path can be tracked, with this data sent over the wireless network. For vision, the robot uses a high-resolution camera to make it easier for the operator to control the robot and see the surrounding circumstances.

  9. Robotic hand with modular extensions

    Science.gov (United States)

    Salisbury, Curt Michael; Quigley, Morgan

    2015-01-20

    A robotic device is described herein. The robotic device includes a frame that comprises a plurality of receiving regions that are configured to receive a respective plurality of modular robotic extensions. The modular robotic extensions are removably attachable to the frame at the respective receiving regions by way of respective mechanical fuses. Each mechanical fuse is configured to trip when a respective modular robotic extension experiences a predefined load condition, such that the respective modular robotic extension detaches from the frame when the load condition is met.

  10. Sensor Fusion for Autonomous Mobile Robot Navigation

    DEFF Research Database (Denmark)

    Plascencia, Alfredo

    Multi-sensor data fusion is a broad area of constant research which is applied to a wide variety of fields, such as the field of mobile robots. Mobile robots are complex systems where the design and implementation of sensor fusion is a complex task, but research applications are explored constantly. … The scope of the thesis is limited to building a map for a laboratory robot by fusing range readings from a sonar array with landmarks extracted from stereo vision images using the Scale Invariant Feature Transform (SIFT) algorithm.
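    A minimal sketch of the SIFT landmark-extraction step only, assuming OpenCV's bundled SIFT implementation (opencv-python >= 4.4); the sonar fusion and map building described in the thesis are not shown, and the synthetic frame stands in for a real stereo image.

    ```python
    # SIFT keypoint/descriptor extraction with OpenCV; fusion step not included.
    import cv2
    import numpy as np

    def extract_sift_landmarks(gray_image):
        """Return SIFT keypoints and 128-D descriptors for an 8-bit grayscale image."""
        sift = cv2.SIFT_create()
        keypoints, descriptors = sift.detectAndCompute(gray_image, None)
        return keypoints, descriptors

    if __name__ == "__main__":
        # A synthetic 8-bit image stands in for a stereo camera frame.
        frame = (np.random.rand(240, 320) * 255).astype(np.uint8)
        kps, descs = extract_sift_landmarks(frame)
        print(f"{len(kps)} keypoints extracted")
    ```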

  11. Automated robotic workcell for waste characterization

    International Nuclear Information System (INIS)

    Dougan, A.D.; Gustaveson, D.K.; Alvarez, R.A.; Holliday, M.

    1993-01-01

    The authors have successfully demonstrated an automated multisensor-based robotic workcell for hazardous waste characterization. The robot within this workcell uses feedback from radiation sensors, a metal detector, object profile scanners, and a 2D vision system to automatically segregate objects based on their measured properties. The multisensor information is used to make segregation decisions of waste items and to facilitate the grasping of objects with a robotic arm. The authors used both sodium iodide and high purity germanium detectors as a two-step process to maximize throughput. For metal identification and discrimination, the authors are investigating the use of neutron interrogation techniques

  12. Model and Behavior-Based Robotic Goalkeeper

    DEFF Research Database (Denmark)

    Lausen, H.; Nielsen, J.; Nielsen, M.

    2003-01-01

    This paper describes the design, implementation and test of a goalkeeper robot for the Middle-Size League of RoboCup. The goalkeeper task is implemented by a set of primitive tasks and behaviours coordinated by a 2-level hierarchical state machine. The primitive tasks concerning complex motion … control are implemented by a non-linear control algorithm, adapted to the different task goals (e.g., follow the ball or the robot posture), from local features extracted from images acquired by a catadioptric omni-directional vision system. Most robot parameters were designed based on simulations carried …
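    As a toy illustration of a 2-level hierarchical state machine of the kind the record describes; the states, events and transitions below are invented for the example and are not taken from the paper.

    ```python
    # Two-level state machine sketch: a high-level machine selects a mode, and a
    # sub-machine runs only while the DEFEND mode is active.
    HIGH_LEVEL = {
        ("IDLE", "ball_seen"): "DEFEND",
        ("DEFEND", "ball_lost"): "IDLE",
    }
    DEFEND_SUBSTATES = {
        ("TRACK_BALL", "ball_close"): "BLOCK",
        ("BLOCK", "ball_cleared"): "TRACK_BALL",
    }

    def step(high, sub, event):
        """Advance the hierarchy: high-level transition first, then the active sub-machine."""
        high = HIGH_LEVEL.get((high, event), high)
        if high == "DEFEND":
            sub = DEFEND_SUBSTATES.get((sub, event), sub)
        else:
            sub = "TRACK_BALL"  # reset the sub-state whenever we leave DEFEND
        return high, sub

    if __name__ == "__main__":
        state = ("IDLE", "TRACK_BALL")
        for ev in ["ball_seen", "ball_close", "ball_cleared", "ball_lost"]:
            state = step(*state, ev)
            print(ev, "->", state)
    ```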

  13. Towards safe robots approaching Asimov’s 1st law

    CERN Document Server

    Haddadin, Sami

    2014-01-01

    The vision of seamless human-robot interaction in our everyday life that allows for tight cooperation between human and robot has not become reality yet. However, the recent increase in technology maturity finally made it possible to realize systems of high integration, advanced sensorial capabilities and enhanced power to cross this barrier and merge living spaces of humans and robot workspaces to at least a certain extent. Together with the increasing industrial effort to realize first commercial service robotics products this makes it necessary to properly address one of the most fundamental questions of Human-Robot Interaction: How to ensure safety in human-robot coexistence? In this authoritative monograph, the essential question about the necessary requirements for a safe robot is addressed in depth and from various perspectives. The approach taken in this book focuses on the biomechanical level of injury assessment, addresses the physical evaluation of robot-human impacts, and isolates the major factor...

  14. Conceptual spatial representations for indoor mobile robots

    OpenAIRE

    Zender, Henrik; Mozos, Oscar Martinez; Jensfelt, Patric; Kruijff, Geert-Jan M.; Wolfram, Burgard

    2008-01-01

    We present an approach for creating conceptual representations of human-made indoor environments using mobile robots. The concepts refer to spatial and functional properties of typical indoor environments. Following findings in cognitive psychology, our model is composed of layers representing maps at different levels of abstraction. The complete system is integrated in a mobile robot endowed with laser and vision sensors for place and object recognition. The system also incorporate...

  15. Human Robotic Systems (HRS): Controlling Robots over Time Delay Element

    Data.gov (United States)

    National Aeronautics and Space Administration — This element involves the development of software that enables easier commanding of a wide range of NASA relevant robots through the Robot Application Programming...

  16. Communication of Robot Status to Improve Human-Robot Collaboration

    Data.gov (United States)

    National Aeronautics and Space Administration — Future space exploration will require humans and robots to collaborate to perform all the necessary tasks. Current robots mostly operate separately from humans due...

  17. Friendly network robotics; Friendly network robotics

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1998-03-01

    A working group (WG) study was conducted aiming at realizing human-type robots. The following six working groups were organized to study mostly the items for technical development and the final technical targets: platform and remote attendance control in the fundamental field; and plant maintenance, home service, disaster/construction, and entertainment in the application field. In the platform WG, a robot of human-like form is planned which walks on two legs and works with two arms; discussed were a height of 160 cm, a weight of 110 kg, a built-in LAN, actuator specifications, a modular structure, intelligent drivers, etc. In the remote attendance control WG, remote control using working functions, stabilized movement, stabilized control and a network is made possible; studies were made on the design of a remote control cockpit with an open, extendable and reconfigurable architecture, on problems in developing a standard language, etc. 77 refs., 82 figs., 21 tabs.

  18. Timing of Multimodal Robot Behaviors during Human-Robot Collaboration

    DEFF Research Database (Denmark)

    Jensen, Lars Christian; Fischer, Kerstin; Suvei, Stefan-Daniel

    2017-01-01

    In this paper, we address issues of timing between robot behaviors in multimodal human-robot interaction. In particular, we study what effects sequential order and simultaneity of robot arm and body movement and verbal behavior have on the fluency of interactions. In a study with the Care-O-bot, … output plays a special role because participants carry their expectations from human verbal interaction into the interactions with robots …

  19. Robotic transthoracic esophagectomy.

    Science.gov (United States)

    Puntambekar, Shailesh; Kenawadekar, Rahul; Kumar, Sanjay; Joshi, Saurabh; Agarwal, Geetanjali; Reddy, Sunil; Mallik, Jainul

    2015-04-23

    We initially published our experience with robotic transthoracic esophagectomy in 32 patients from a single institute. The present paper is an extension of our experience with the robotic system and, to the best of our knowledge, represents the largest series of robotic transthoracic esophagectomies worldwide. The objective of this study was to investigate the feasibility of robotic transthoracic esophagectomy for esophageal cancer in a series of patients from a single institute. A retrospective review of medical records was conducted for 83 esophageal cancer patients who underwent robotic esophagectomy at our institute from December 2009 to December 2012. All patients underwent a thorough clinical examination and pre-operative investigations. All patients underwent robotic esophageal mobilization. En-bloc dissection with lymphadenectomy was performed in all cases with preservation of the azygos vein. Relevant data were gathered from medical records. The study population comprised 50 men and 33 women with a mean age of 59.18 years. The mean operative time was 204.94 min (range 180 to 300). The mean blood loss was 86.75 ml (range 50 to 200). The mean lymph node yield was 18.36 (range 13 to 24). No patient required conversion. The mean ICU stay and hospital stay were 1 day (range 1 to 3) and 10.37 days (range 10 to 13), respectively. A total of 16 complications (19.28%) were reported in these patients. Commonly reported complications included dysphagia, pleural effusion and anastomotic leak. No treatment-related mortality was observed. After a median follow-up period of 10 months, 66 patients (79.52%) survived disease-free. We found robot-assisted thoracoscopic esophagectomy feasible in cases of esophageal cancer. The procedure allowed precise en-bloc dissection with lymphadenectomy in the mediastinum with reduced operative time, blood loss and complications.

  20. Robots for Astrobiology!

    Science.gov (United States)

    Boston, Penelope J.

    2016-01-01

    The search for life and its study is known as astrobiology. Conducting that search on other planets in our Solar System is a major goal of NASA and other space agencies, and a driving passion of the community of scientists and engineers around the world. We practice for that search in many ways, from exploring and studying extreme environments on Earth, to developing robots to go to other planets and help us look for any possible life that may be there or may have been there in the past. The unique challenges of space exploration make collaborations between robots and humans essential. The products of those collaborations will be novel and driven by the features of wholly new environments. For space and planetary environments that are intolerable for humans or where humans present an unacceptable risk to possible biologically sensitive sites, autonomous robots or telepresence offer excellent choices. The search for life signs on Mars fits within this category, especially in advance of human landed missions there, but also as assistants and tools once humans reach the Red Planet. For planetary destinations where we do not envision humans ever going in person, like bitterly cold icy moons, or ocean worlds with thick ice roofs that essentially make them planetary-sized ice caves, we will rely on robots alone to visit those environments for us and enable us to explore and understand any life that we may find there. Current generation robots are not quite ready for some of the tasks that we need them to do, so there are many opportunities for roboticists of the future to advance novel types of mobility, autonomy, and bio-inspired robotic designs to help us accomplish our astrobiological goals. We see an exciting partnership between robotics and astrobiology continually strengthening as we jointly pursue the quest to find extraterrestrial life.

  1. Socially intelligent robots: dimensions of human-robot interaction.

    Science.gov (United States)

    Dautenhahn, Kerstin

    2007-04-29

    Social intelligence in robots has a quite recent history in artificial intelligence and robotics. However, it has become increasingly apparent that social and interactive skills are necessary requirements in many application areas and contexts where robots need to interact and collaborate with other robots or humans. Research on human-robot interaction (HRI) poses many challenges regarding the nature of interactivity and 'social behaviour' in robots and humans. The first part of this paper addresses dimensions of HRI, discussing requirements on social skills for robots and introducing the conceptual space of HRI studies. In order to illustrate these concepts, two examples of HRI research are presented. First, research is surveyed which investigates the development of a cognitive robot companion. The aim of this work is to develop social rules for robot behaviour (a 'robotiquette') that is comfortable and acceptable to humans. Second, robots are discussed as possible educational or therapeutic toys for children with autism. The concept of interactive emergence in human-child interactions is highlighted. Different types of play among children are discussed in the light of their potential investigation in human-robot experiments. The paper concludes by examining different paradigms regarding 'social relationships' of robots and people interacting with them.

  2. Robot Motion and Control 2011

    CERN Document Server

    2012-01-01

    Robot Motion Control 2011 presents very recent results in robot motion and control. Forty short papers have been chosen from those presented at the sixth International Workshop on Robot Motion and Control held in Poland in June 2011. The authors of these papers have been carefully selected and represent leading institutions in this field. The following recent developments are discussed: • Design of trajectory planning schemes for holonomic and nonholonomic systems with optimization of energy, torque limitations and other factors. • New control algorithms for industrial robots, nonholonomic systems and legged robots. • Different applications of robotic systems in industry and everyday life, like medicine, education, entertainment and others. • Multiagent systems consisting of mobile and flying robots with their applications The book is suitable for graduate students of automation and robotics, informatics and management, mechatronics, electronics and production engineering systems as well as scientists...

  3. Lessons of nuclear robot history

    International Nuclear Information System (INIS)

    Oomichi, Takeo

    2014-01-01

    The severe accidents that occurred at Fukushima Daiichi Nuclear Power Station stirred up great public expectation for the deployment of nuclear robots. However, the unexpected nuclear disaster, especially the rupture of the reactor building caused by core meltdown and hydrogen explosion, made it quite difficult to introduce nuclear robots into the high-radiation environment to bring the accident under control and dispose of the damaged reactors. The Robotics Society of Japan (RSJ) set up a committee to look back on the lessons learned from 50 years of experience in nuclear robot development and summarized 'Lessons of nuclear robot history', published on the RSJ website. This article outlines it with personal comments. The history of nuclear robots developed for inspection and maintenance during normal operation, and for the specific responses required in nuclear accidents, is reviewed with many examples from Japan and abroad for the TMI, Chernobyl and JCO accidents. The present state of the introduction and development of Fukushima accident-response robots is also described, with some comments on nuclear robot development from academia based on these lessons. (T. Tanaka)

  4. The Power of Educational Robotics

    Science.gov (United States)

    Cummings, Timothy

    The purpose of this action research project was to investigate the impact a student's participation in educational robotics has on his or her performance in STEM subjects. This study attempted to utilize educational robotics as a method for increasing student achievement and engagement in STEM subjects. Over the course of 12 weeks, an after-school robotics program was offered to students. Guided by the standards and principles of VEX IQ, a leading resource in educational robotics, students worked in collaboration on creating a design for their robot, building and testing their robot, and competing in the VEX IQ Crossover Challenge. Student data was gathered through a pre-participation survey, observations from the work they performed in robotics club, their performance in STEM subject classes, and the analysis of their end-of-the-year report card. Results suggest that the students who participated in the robotics club experienced a positive impact on their performance in STEM subject classes.

  5. Industrial Robots on the Line.

    Science.gov (United States)

    Ayres, Robert; Miller, Steve

    1982-01-01

    Explores the history of robotics and its effects upon the manufacturing industry. Topics include robots' capabilities and limitations, the factory of the future, displacement of the workforce, and implications for management and labor. (SK)

  6. Social and Affective Robotics Tutorial

    NARCIS (Netherlands)

    Pantic, Maja; Evers, Vanessa; Deisenroth, Marc; Merino, Luis; Schuller, Björn

    2016-01-01

    Social and Affective Robotics is a growing multidisciplinary field encompassing computer science, engineering, psychology, education, and many other disciplines. It explores how social and affective factors influence interactions between humans and robots, and how affect and social signals can be

  7. Full autonomous microline trace robot

    Science.gov (United States)

    Yi, Deer; Lu, Si; Yan, Yingbai; Jin, Guofan

    2000-10-01

    Optoelectronic inspection may find applications in robotic systems. In a micro robotic system, a smaller optoelectronic inspection system is preferred. However, as the size of the robot is miniaturized, the number of optoelectronic detectors becomes insufficient, and this lack of information makes it difficult for the micro robot to determine its status. In our lab, a micro line-trace robot has been designed which acts autonomously based on its optoelectronic detection. It has been programmed to follow a black line printed on white-colored ground. Besides the optoelectronic inspection, the logical algorithm in the microprocessor is also important. In this paper, we propose a simple logical algorithm to realize the robot's intelligence. The robot's intelligence is based on an AT89C2051 microcontroller which controls its movement. The technical details of the micro robot are as follows: dimensions: 30 mm x 25 mm x 35 mm; velocity: 60 mm/s.
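    A sketch of the kind of decision logic such a line tracer runs, written in Python for readability; the actual robot executes comparable logic on its AT89C2051 microcontroller, and the two-sensor layout assumed here is illustrative only.

    ```python
    # Line-following decision logic sketch (two reflective sensors assumed).
    def follow_line(left_on_line: bool, right_on_line: bool) -> str:
        """Map two reflective-sensor readings to a drive command."""
        if left_on_line and right_on_line:
            return "FORWARD"        # both sensors over the black line
        if left_on_line:
            return "TURN_LEFT"      # line drifting to the left
        if right_on_line:
            return "TURN_RIGHT"     # line drifting to the right
        return "SEARCH"             # line lost: rotate slowly to reacquire it

    if __name__ == "__main__":
        for reading in [(True, True), (True, False), (False, True), (False, False)]:
            print(reading, "->", follow_line(*reading))
    ```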

  8. Design Minimalism in Robotics Programming

    Directory of Open Access Journals (Sweden)

    Anthony Cowley

    2008-11-01

    Full Text Available With the increasing use of general robotic platforms in different application scenarios, modularity and reusability have become key issues in effective robotics programming. In this paper, we present a minimalist approach for designing robot software, in which very simple modules, with well designed interfaces and very little redundancy can be connected through a strongly typed framework to specify and execute different robotics tasks.

  9. Design Minimalism in Robotics Programming

    Directory of Open Access Journals (Sweden)

    Anthony Cowley

    2006-03-01

    Full Text Available With the increasing use of general robotic platforms in different application scenarios, modularity and reusability have become key issues in effective robotics programming. In this paper, we present a minimalist approach for designing robot software, in which very simple modules, with well designed interfaces and very little redundancy can be connected through a strongly typed framework to specify and execute different robotics tasks.

  10. Teen Sized Humanoid Robot: Archie

    Science.gov (United States)

    Baltes, Jacky; Byagowi, Ahmad; Anderson, John; Kopacek, Peter

    This paper describes our first teen-sized humanoid robot, Archie. This robot has been developed in conjunction with Prof. Kopacek’s lab from the Technical University of Vienna. Archie uses brushless motors and harmonic gears with a novel approach to position encoding. Based on our previous experience with small humanoid robots, we developed software to create, store, and play back motions, as well as control methods which automatically balance the robot using feedback from an inertial measurement unit (IMU).

  11. Robotics, Ethics, and Nanotechnology

    Science.gov (United States)

    Ganascia, Jean-Gabriel

    It may seem out of character to find a chapter on robotics in a book about nanotechnology, and even more so a chapter on the application of ethics to robots. Indeed, as we shall see, the questions look quite different in these two fields, i.e., in robotics and nanoscience. In short, in the case of robots, we are dealing with artificial beings endowed with higher cognitive faculties, such as language, reasoning, action, and perception, whereas in the case of nano-objects, we are talking about invisible macromolecules which act, move, and duplicate unseen to us. In one case, we find ourselves confronted by a possibly evil double of ourselves, and in the other, a creeping and intangible nebula assails us from all sides. In one case, we are faced with an alter ego which, although unknown, is clearly perceptible, while in the other, an unspeakable ooze, the notorious grey goo, whose properties are both mysterious and sinister, enters and immerses us. This leads to a shift in the ethical problem situation: the notion of responsibility can no longer be worded in the same terms because, despite its otherness, the robot can always be located somewhere, while in the case of nanotechnologies, myriad nanometric objects permeate everywhere, disseminating uncontrollably.

  12. Salvage robotic radical prostatectomy

    Directory of Open Access Journals (Sweden)

    Samuel D Kaffenberger

    2014-01-01

    Full Text Available Failure of non-surgical primary treatment for localized prostate cancer is a common occurrence, with rates of disease recurrence ranging from 20% to 60%. In a large proportion of patients, disease recurrence is clinically localized and therefore potentially curable. Unfortunately, due to the complex and potentially morbid nature of salvage treatment, radical salvage surgery is uncommonly performed. In an attempt to decrease the morbidity of salvage therapy without sacrificing oncologic efficacy, a number of experienced centers have utilized robotic assistance to perform minimally invasive salvage radical prostatectomy. Herein, we critically evaluate the existing literature on salvage robotic radical prostatectomy with a focus on patient selection, perioperative complications and functional and early oncologic outcomes. These results are compared with contemporary and historical open salvage radical prostatectomy series and supplemented with insights we have gained from our experience with salvage robotic radical prostatectomy. The body of evidence by which conclusions regarding the efficacy and safety of robotic salvage radical prostatectomy can be drawn comprises fewer than 200 patients with limited follow-up. Preliminary results are promising and some outcomes have been favorable when compared with contemporary open salvage prostatectomy series. Advantages of the robotic platform in the performance of salvage radical prostatectomy include decreased blood loss, short length of stay and improved visualization. Greater experience is required to confirm the long-term oncologic efficacy and functional outcomes as well as the generalizability of results achieved at experienced centers.

  13. Rehearsal for the Robot Revolution

    DEFF Research Database (Denmark)

    Jochum, Elizabeth; Goldberg, Ken

    … that are central to social robotics. However, automated performances that merely substitute robotic actors for human ones do not always capture our imagination or prove entertaining. While some plays explore ambivalence to robots or “misbehaving machines” thematically (such as R.U.R.), the exigencies of live …

  14. Robotics Literacy Captivates Elementary Students.

    Science.gov (United States)

    Friedman, Madeleine

    1986-01-01

    Describes a robotics literacy course offered for elementary age children at Broward Community College (Florida) and discusses the motivation for offering such a course, the course philosophy and objectives, and participant reactions. A sampling of robots and robotics devices and some of their teaching applications are included. (MBR)

  15. The future of Robotics Technology

    DEFF Research Database (Denmark)

    Pagliarini, Luigi; Lund, Henrik Hautop

    2017-01-01

    In the last decade the robotics industry has created millions of additional jobs, led by consumer electronics and the electric vehicle industry, and by 2020 robotics will be a $100 billion industry, as big as the tourism industry. For example, the rehabilitation robot market has grown 10…

  16. Motion planning for multiple robots

    NARCIS (Netherlands)

    Aronov, B.; Berg, de M.; van der Stappen, A.F.; Svestka, P.; Vleugels, J.M.

    1999-01-01

    We study the motion-planning problem for pairs and triples of robots operating in a shared workspace containing n obstacles. A standard way to solve such problems is to view the collection of robots as one composite robot, whose number of degrees of freedom is d, the sum of the numbers of degrees
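    Stated explicitly (a standard reading of the composite-robot construction above, with d_i assumed to denote the degrees of freedom of robot i among k robots):

    ```latex
    % Degrees of freedom of the composite robot formed from k individual robots.
    d \;=\; \sum_{i=1}^{k} d_i
    ```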

  17. Fable: Socially Interactive Modular Robot

    DEFF Research Database (Denmark)

    Magnússon, Arnþór; Pacheco, Moises; Moghadam, Mikael

    2013-01-01

    Modular robots have a significant potential as user-reconfigurable robotic playware, but often lack sufficient sensing for social interaction. We address this issue with the Fable modular robotic system by exploring the use of smart sensor modules that has a better ability to sense the behavior...

  18. The Mobile Robot "Little Helper"

    DEFF Research Database (Denmark)

    Hvilshøj, Mads; Bøgh, Simon; Madsen, Ole

    2009-01-01

    Increased customer needs and intensified global competition require intelligent and flexible automation. The interaction technology of mobile robotics addresses this, so it holds great potential within industry. This paper presents the concepts, ideas and working principles of the mobile robot … this shows promising results regarding industrial integration, exploitation and maturation of mobile robotics …

  19. EVOLUTION OF THE ROBOT DESIGN

    Directory of Open Access Journals (Sweden)

    POPA Marina Andreea

    2011-11-01

    Full Text Available This paper presents the construction of a robot used at a national robot competition in Romania. The robot consists of sensors (2 long-distance measuring sensors and 4 reflective object sensors), 4 motors, 4 gears, a battery and boards with microcontrollers.

  20. To kill a mockingbird robot

    NARCIS (Netherlands)

    Bartneck, C.; Verbunt, M.N.C.; Mubin, O.; Al Mahmud, A.

    2007-01-01

    Robots are being introduced in our society but their social status is still unclear. A critical issue is if the robot's exhibition of intelligent life-like behavior leads to the users' perception of animacy. The ultimate test for the life-likeness of a robot is to kill it. We therefore conducted an

  1. Robotics Activities in The Netherlands

    NARCIS (Netherlands)

    Kranenburg- de Lange, D.J.B.A.

    2010-01-01

    Since April 2010, robotics activities in The Netherlands have been coordinated by RoboNED. This Dutch Robotics Platform, chaired by Prof. Stefano Stramigioli, aims to stimulate the synergy between the robotics fields and to formulate a focus. The goal of RoboNED is threefold: 1) RoboNED aims to bring the

  2. Humans and Robots. Educational Brief.

    Science.gov (United States)

    National Aeronautics and Space Administration, Washington, DC.

    This brief discusses human movement and robotic human movement simulators. The activity for students in grades 5-12 provides a history of robotic movement and includes making an End Effector for the robotic arms used on the Space Shuttle and the International Space Station (ISS). (MVL)

  3. Japan's ARTRA robot moves forward

    International Nuclear Information System (INIS)

    Takehara, Ken

    1992-01-01

    Work on the Japanese ARTRA robot has progressed to the point where a demonstration robot has been built. However, much work remains before ARTRA can realize its goal of developing a highly sophisticated remotely-controlled robot to replace the human maintenance worker in a radioactive environment. (author)

  4. Robot-assisted general surgery.

    Science.gov (United States)

    Hazey, Jeffrey W; Melvin, W Scott

    2004-06-01

    With the initiation of laparoscopic techniques in general surgery, we have seen a significant expansion of minimally invasive techniques in the last 16 years. More recently, robotic-assisted laparoscopy has moved into the general surgeon's armamentarium to address some of the shortcomings of laparoscopic surgery. AESOP (Computer Motion, Goleta, CA) addressed the issue of visualization as a robotic camera holder. With the introduction of the ZEUS robotic surgical system (Computer Motion), the ability to remotely operate laparoscopic instruments became a reality. US Food and Drug Administration approval in July 2000 of the da Vinci robotic surgical system (Intuitive Surgical, Sunnyvale, CA) further defined the ability of a robotic-assist device to address limitations in laparoscopy. This includes a significant improvement in instrument dexterity, dampening of natural hand tremors, three-dimensional visualization, ergonomics, and camera stability. As experience with robotic technology increased and its applications to advanced laparoscopic procedures have become more understood, more procedures have been performed with robotic assistance. Numerous studies have shown equivalent or improved patient outcomes when robotic-assist devices are used. Initially, robotic-assisted laparoscopic cholecystectomy was deemed safe, and now robotics has been shown to be safe in foregut procedures, including Nissen fundoplication, Heller myotomy, gastric banding procedures, and Roux-en-Y gastric bypass. These techniques have been extrapolated to solid-organ procedures (splenectomy, adrenalectomy, and pancreatic surgery) as well as robotic-assisted laparoscopic colectomy. In this chapter, we review the evolution of robotic technology and its applications in general surgical procedures.

  5. Neuro-robotics from brain machine interfaces to rehabilitation robotics

    CERN Document Server

    Artemiadis

    2014-01-01

    Neuro-robotics is one of the most multidisciplinary fields of the last decades, fusing information and knowledge from neuroscience, engineering and computer science. This book focuses on the results from the strategic alliance between Neuroscience and Robotics that help the scientific community to better understand the brain as well as design robotic devices and algorithms for interfacing humans and robots. The first part of the book introduces the idea of neuro-robotics, by presenting state-of-the-art bio-inspired devices. The second part of the book focuses on human-machine interfaces for pe

  6. Affordance estimation for vision-based object replacement on a humanoid robot

    DEFF Research Database (Denmark)

    Mustafa, Wail; Wächter, Mirko; Szedmak, Sandor

    2016-01-01

    In this paper, we address the problem of finding replacements of missing objects, involved in the execution of manipulation tasks. Our approach is based on estimating functional affordances for the unknown objects in order to propose replacements. We use a vision-based affordance estimation syste...

  7. Intelligent robotic tracker

    Science.gov (United States)

    Otaguro, W. S.; Kesler, L. O.; Land, K. C.; Rhoades, D. E.

    1987-01-01

    An intelligent tracker capable of robotic applications requiring guidance and control of platforms, robotic arms, and end effectors has been developed. This packaged system capable of supervised autonomous robotic functions is partitioned into a multiple processor/parallel processing configuration. The system currently interfaces to cameras but has the capability to also use three-dimensional inputs from scanning laser rangers. The inputs are fed into an image processing and tracking section where the camera inputs are conditioned for the multiple tracker algorithms. An executive section monitors the image processing and tracker outputs and performs all the control and decision processes. The present architecture of the system is presented with discussion of its evolutionary growth for space applications. An autonomous rendezvous demonstration of this system was performed last year. More realistic demonstrations in planning are discussed.

  8. Service Robots for Hospitals

    DEFF Research Database (Denmark)

    Özkil, Ali Gürcan

    Hospitals are complex and dynamic organisms that are vital to the well-being of societies. Providing good quality healthcare is the ultimate goal of a hospital, and it is what most of us are only concerned with. A hospital, on the other hand, has to orchestrate a great deal of supplementary services to maintain the quality of healthcare provided. This thesis and the Industrial PhD project aim to address logistics, which is the most resource demanding service in a hospital. The scale of the transportation tasks is huge and the material flow in a hospital is comparable to that of a factory. We believe that these transportation tasks, to a great extent, can be and will be automated using mobile robots. This thesis consequently addresses the key technical issues of implementing service robots in hospitals. In simple terms, a robotic system for automating hospital logistics has to be reliable …

  9. ISS Robotic Student Programming

    Science.gov (United States)

    Barlow, J.; Benavides, J.; Hanson, R.; Cortez, J.; Le Vasseur, D.; Soloway, D.; Oyadomari, K.

    2016-01-01

    The SPHERES facility is a set of three free-flying satellites launched in 2006. In addition to scientists and engineers, middle- and high-school students program the SPHERES during the annual Zero Robotics programming competition. Zero Robotics conducts virtual competitions via simulator and on SPHERES aboard the ISS, with students doing the programming. A web interface allows teams to submit code, receive results, collaborate, and compete in simulator-based initial rounds and semi-final rounds. The final round of each competition is conducted with SPHERES aboard the ISS. At the end of 2017 a new robotic platform called Astrobee will launch, providing new game elements and new ground support for even more student interaction.

  10. Ultrasonic decontamination robot

    International Nuclear Information System (INIS)

    Patenaude, R.S.

    1984-01-01

    An ultrasonic decontamination robot removes radioactive contamination from the internal surface of the inlet and outlet headers, divider plate, tube sheet, and lower portions of tubes of a nuclear power plant steam generator. A programmable microprocessor controller guides the movement of a robotic arm mounted in the header manway. An ultrasonic transducer having a solvent delivery subsystem through which ultrasonic action is achieved is moved by the arm over the surfaces. A solvent recovery suction tube is positioned within the header to remove solvent therefrom while avoiding interference with the main robotic arm. The solvent composition, temperature, pressure, viscosity, and purity are controlled to optimize the ultrasonic scrubbing action. The ultrasonic transducer is controlled at a power density, frequency, and on-off mode cycle such as to optimize scrubbing action within the range of transducer-to-surface distance and solvent layer thickness selected for the particular conditions encountered. Both solvent and transducer control actions are optimized by the programmable microprocessor. (author)

  11. MATHEMATICAL MODEL MANIPULATOR ROBOTS

    Directory of Open Access Journals (Sweden)

    O. N. Krakhmalev

    2015-12-01

    Full Text Available A mathematical model is presented to describe the dynamics of manipulator robots. The mathematical model is an implementation of the method based on the Lagrange equation and uses transformation matrices of elastic coordinates. The model makes it possible to determine the elastic deviations of manipulator robots from programmed motion trajectories caused by elastic deformations in the joints, which are taken into account in the directions of change of the corresponding generalized coordinates. The model is approximate and makes it possible to determine small elastic quasi-static deviations and elastic vibrations. The results of modeling the dynamics with this model are compared using the example of a two-link manipulator system. The considered model can be used when investigating the mathematical accuracy of manipulator robots.
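    For orientation, the Lagrange-equation approach mentioned above starts from the standard Euler-Lagrange form for an n-joint manipulator; the record's specific elastic-coordinate transformation matrices are not reproduced here, so this is only the generic rigid-body starting point.

    ```latex
    % Euler-Lagrange equations for an n-joint manipulator with generalized
    % coordinates q_i, Lagrangian L = T - V and generalized joint forces \tau_i.
    \frac{d}{dt}\!\left(\frac{\partial L}{\partial \dot{q}_i}\right)
      - \frac{\partial L}{\partial q_i} = \tau_i , \qquad i = 1,\dots,n
    % Equivalently, in the usual compact matrix form:
    % M(q)\ddot{q} + C(q,\dot{q})\dot{q} + g(q) = \tau
    ```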

  12. The universal robot

    Science.gov (United States)

    Moravec, Hans

    1993-12-01

    Our artifacts are getting smarter, and a loose parallel with the evolution of animal intelligence suggests one future course for them. Computerless industrial machinery exhibits the behavioral flexibility of single-celled organisms. Today's best computer-controlled robots are like the simpler invertebrates. A thousand-fold increase in computer power in the next decade should make possible machines with reptile-like sensory and motor competence. Properly configured, such robots could do in the physical world what personal computers now do in the world of data - act on our behalf as literal-minded slaves. Growing computer power over the next half-century will allow this reptile stage to be surpassed, in stages producing robots that learn like mammals, model their world like primates, and eventually reason like humans. Depending on your point of view, humanity will then have produced a worthy successor, or transcended some of its inherited limitations and so transformed itself into something quite new.

  13. FIRST robots compete

    Science.gov (United States)

    2000-01-01

    FIRST teams and their robots work to go through the right motions at the FIRST competition. Students from all over the country are at the KSC Visitor Complex for the FIRST (For Inspiration and Recognition of Science and Technology) Southeast Regional competition March 9-11 in the Rocket Garden. Teams of high school students are testing the limits of their imagination using robots they have designed, with the support of business and engineering professionals and corporate sponsors, to compete in a technological battle against other schools' robots. Of the 30 high school teams competing, 16 are Florida teams co-sponsored by NASA and KSC contractors. Local high schools participating are Astronaut, Bayside, Cocoa Beach, Eau Gallie, Melbourne, Melbourne Central Catholic, Palm Bay, Rockledge, Satellite, and Titusville.

  14. Human-Robot Interaction

    Science.gov (United States)

    Rochlis-Zumbado, Jennifer; Sandor, Aniko; Ezer, Neta

    2012-01-01

    Risk of Inadequate Design of Human and Automation/Robotic Integration (HARI) is a new Human Research Program (HRP) risk. HRI is a research area that seeks to understand the complex relationship among variables that affect the way humans and robots work together to accomplish goals. The DRP addresses three major HRI study areas that will provide appropriate information for navigation guidance to a teleoperator of a robot system, and contribute to the closure of currently identified HRP gaps: (1) Overlays -- Use of overlays for teleoperation to augment the information available on the video feed (2) Camera views -- Type and arrangement of camera views for better task performance and awareness of surroundings (3) Command modalities -- Development of gesture and voice command vocabularies

  15. ROBOTIC SURGERY: BIOETHICAL ASPECTS.

    Science.gov (United States)

    Siqueira-Batista, Rodrigo; Souza, Camila Ribeiro; Maia, Polyana Mendes; Siqueira, Sávio Lana

    2016-01-01

    The use of robots in surgery has become increasingly common today, giving rise to numerous bioethical issues in this area. To present a review of the ethical aspects of robot use in surgery. A search was conducted in PubMed, SciELO and Lilacs, crossing the headings "bioethics", "surgery", "ethics", "laparoscopy" and "robotic". Of the citations obtained, 17 articles were selected and used for the preparation of this article. It contains a brief presentation on robotics, its introduction into healthcare, and the bioethical aspects of the use of robots in surgery. Robotic surgery is a reality today in many hospitals, which makes bioethical reflection on the relationship between health professionals, automata and patients essential.

  16. Laser assisted robotic surgery in cornea transplantation

    Science.gov (United States)

    Rossi, Francesca; Micheletti, Filippo; Magni, Giada; Pini, Roberto; Menabuoni, Luca; Leoni, Fabio; Magnani, Bernardo

    2017-03-01

    Robotic surgery is a reality in several surgical fields, such as gastrointestinal surgery. In ophthalmic surgery, the required high spatial precision has limited the application of robotic systems, and although several designs have been proposed over the last 10 years, only a few applications in retinal surgery have been tested in animal models. The combination of photonics and robotics can open new frontiers in minimally invasive surgery by improving precision, reducing tremor, scaling motion, and automating the procedure. In this manuscript we present preliminary results in developing a vision-guided robotic platform for laser-assisted anterior eye surgery. The robotic console is composed of a robotic arm equipped with an "end effector" designed to deliver laser light to the anterior corneal surface. The main intended application is laser welding of corneal tissue in laser-assisted penetrating keratoplasty and endothelial keratoplasty. The console is equipped with an integrated vision system. The work originates from a clear medical demand to improve the efficacy of different surgical procedures: once the prototype is optimized, other surgical areas such as neurosurgery, urology and spinal surgery will be included in its applications.

  17. Robot modelling; Control and applications with software

    Energy Technology Data Exchange (ETDEWEB)

    Ranky, P G; Ho, C Y

    1985-01-01

    This book provides a "picture" of robotics covering both the theoretical aspects of modeling and the practical and design aspects of robot programming, robot tooling and automated hand changing, implementation planning, testing, and software design for robot systems. The authors present an introduction to robotics with a systems approach, describing not only the tasks relating to a single robot (or arm) but also systems of robots working together on a product or several products.

  18. AN IMPLEMENTATION OF PACMAN GAME USING ROBOTS

    OpenAIRE

    Madhav. Rao

    2011-01-01

    As the field of robotics advances, robotics education needs to take account of technological progress and the level of societal interest. Realizing computer games on robotic platforms is one such advance for educating students in robotics science. Implementing computer games in a robotics environment is still a challenge because of the high investment required to develop robot models. However, the effort can lead to enhanced interest in robotics education and further involvement in science and...

  19. Assessment of Vision-Based Target Detection and Classification Solutions Using an Indoor Aerial Robot

    Science.gov (United States)

    2014-09-01

    revolutions per minute; SIFT, Scale-Invariant Feature Transform; SURF, Speeded Up Robust Features; SWAP, size, weight and power; TAMD, threat air and missile defense ...domain. The naming convention for all functions within this domain is the prefix "plan_." • Logic: Logic acts as a switch to enable and disable certain... publication, in 2006 [44]. Some other feature detectors/descriptors available in computer vision are Speeded Up Robust Features (SURF) [4] and Scale
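
    The excerpt above mentions SIFT and SURF among the feature detectors/descriptors used for vision-based target detection. As a purely illustrative sketch (not taken from the report), the following snippet detects and matches SIFT keypoints between two frames with OpenCV; the image paths and the 0.75 ratio threshold are placeholder assumptions.

```python
# Illustrative SIFT keypoint detection and ratio-test matching with OpenCV.
# Assumes opencv-python >= 4.4 (SIFT in the main module); image paths are placeholders.
import cv2

img1 = cv2.imread("frame1.png", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("frame2.png", cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)

# Lowe's ratio test on brute-force k-nearest-neighbour matches.
matcher = cv2.BFMatcher(cv2.NORM_L2)
good = []
for pair in matcher.knnMatch(des1, des2, k=2):
    if len(pair) == 2 and pair[0].distance < 0.75 * pair[1].distance:
        good.append(pair[0])

print(f"{len(kp1)}/{len(kp2)} keypoints, {len(good)} good matches")
```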

  20. Intelligence for Human-Assistant Planetary Surface Robots

    Science.gov (United States)

    Hirsh, Robert; Graham, Jeffrey; Tyree, Kimberly; Sierhuis, Maarten; Clancey, William J.

    2006-01-01

    The central premise in developing effective human-assistant planetary surface robots is that robotic intelligence is needed. The exact type, method, forms and/or quantity of intelligence is an open issue being explored on the ERA project, as well as others. In addition to field testing, theoretical research into this area can help provide answers on how to design future planetary robots. Many fundamental intelligence issues are discussed by Murphy [2], including (a) learning, (b) planning, (c) reasoning, (d) problem solving, (e) knowledge representation, and (f) computer vision (stereo tracking, gestures). The new "social interaction/emotional" form of intelligence that some consider critical to Human Robot Interaction (HRI) can also be addressed by human assistant planetary surface robots, as human operators feel more comfortable working with a robot when the robot is verbally (or even physically) interacting with them. Arkin [3] and Murphy are both proponents of the hybrid deliberative-reasoning/reactive-execution architecture as the best general architecture for fully realizing robot potential, and the robots discussed herein implement a design continuously progressing toward this hybrid philosophy. The remainder of this chapter will describe the challenges associated with robotic assistance to astronauts, our general research approach, the intelligence incorporated into our robots, and the results and lessons learned from over six years of testing human-assistant mobile robots in field settings relevant to planetary exploration. The chapter concludes with some key considerations for future work in this area.

  1. Accuracy in Robot Generated Image Data Sets

    DEFF Research Database (Denmark)

    Aanæs, Henrik; Dahl, Anders Bjorholm

    2015-01-01

    In this paper we present a practical innovation concerning how to achieve high accuracy of camera positioning when using a 6-axis industrial robot to generate high-quality data sets for computer vision. This innovation is based on the realization that, to a very large extent, the robot's positioning...... error is deterministic and can as such be calibrated away. We have successfully used this innovation in our efforts to create data sets for computer vision. Since the use of this innovation has a significant effect on data set quality, we here present it in some detail, to better aid others...
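
    The key claim above is that most of the robot's positioning error is deterministic and can therefore be calibrated away. A minimal sketch of that idea (not the authors' actual procedure), assuming commanded poses and externally measured camera positions are available: fit an affine correction by least squares, then invert it when commanding new targets.

```python
# Sketch: model the repeatable part of the positioning error as an affine map
# measured ~ commanded @ A + b, fit it by least squares, then invert it for new targets.
# The data here is synthetic; a real setup would use tracker or photogrammetry measurements.
import numpy as np

commanded = np.random.rand(50, 3)                               # poses sent to the robot (m)
measured = commanded @ np.diag([1.001, 0.999, 1.002]) + 0.0005  # poses an external tracker saw

X = np.hstack([commanded, np.ones((len(commanded), 1))])
params, *_ = np.linalg.lstsq(X, measured, rcond=None)
A, b = params[:3], params[3]

def corrected_command(target):
    """Command whose predicted outcome (cmd @ A + b) equals the desired target."""
    return np.linalg.solve(A.T, target - b)

print(corrected_command(np.array([0.5, 0.5, 0.5])))
```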

  2. Put Your Robot In, Put Your Robot Out: Sequencing through Programming Robots in Early Childhood

    Science.gov (United States)

    Kazakoff, Elizabeth R.; Bers, Marina Umaschi

    2014-01-01

    This article examines the impact of programming robots on sequencing ability in early childhood. Thirty-four children (ages 4.5-6.5 years) participated in computer programming activities with a developmentally appropriate tool, CHERP, specifically designed to program a robot's behaviors. The children learned to build and program robots over three…

  3. Mining robotics sensors

    CSIR Research Space (South Africa)

    Green, JJ

    2011-07-01

    Full Text Available International Conference of CAD/CAM, Robotics & Factories of the Future (CARs&FOF 2011), 26-28 July 2011, Kuala Lumpur, Malaysia. Mining Robotics Sensors: Perception Sensors on a Mine Safety Platform. Green JJ, Hlophe K, Dickens J, Teleka R, Mathew Price... visualization in confined, lightless environments, and thermography for assessing the safety and stability of hanging walls. Over the last decade approximately 200 miners have lost their lives per year in South...

  4. Robots and Moral Agency

    OpenAIRE

    Johansson, Linda

    2011-01-01

      Machine ethics is a field of applied ethics that has grown rapidly in the last decade. Increasingly advanced autonomous robots have expanded the focus of machine ethics from issues regarding the ethical development and use of technology by humans to a focus on ethical dimensions of the machines themselves. This thesis contains two essays, both about robots in some sense, representing these different perspectives of machine ethics. The first essay, “Is it Morally Right to use UAVs in War?” c...

  5. Robotics in Japan

    International Nuclear Information System (INIS)

    Martin, T.

    1987-02-01

    In September 1986, a group of German scientists visited Japanese institutions engaged in advanced robotics research to gain a deeper insight into the Japanese status of this technology. The research projects encountered and the discussions held at seven leading research institutes and seven firms are reported. The focus is mainly on advanced robot and handling systems intended to ease or avoid human exposure in harsh, demanding or dangerous conditions or environments. The Japanese show extensive research activity in this area at the pre-competitive stage, especially in nuclear and underwater applications. (orig.) [de

  6. Simulation of robot manipulators

    International Nuclear Information System (INIS)

    Kress, R.L.; Babcock, S.M.; Bills, K.C.; Kwon, D.S.; Schoenwald, D.A.

    1995-01-01

    This paper describes Oak Ridge National Laboratory's development of an environment for the simulation of robotic manipulators. Simulation includes the modeling of kinematics, dynamics, sensors, actuators, control systems, operators, and environments. Models will be used for manipulator design, proposal evaluation, control system design and analysis, graphical preview of proposed motions, safety system development, and training. Of particular interest is the development of models for robotic manipulators having at least one flexible link. As a first application, models have been developed for the Pacific Northwest Laboratories' Flexible Beam Testbed, which is a one-degree-of-freedom flexible arm with a hydraulic base actuator. Initial results show good agreement between model and experiment.
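
    To make the kind of dynamic model mentioned above concrete, here is a deliberately simplified sketch: a single rigid revolute link with viscous joint damping under gravity, integrated with SciPy. It is a stand-in only; the Flexible Beam Testbed model referenced in the record additionally captures link flexibility and the hydraulic actuator, and the parameter values below are invented.

```python
# Simplified one-degree-of-freedom manipulator model: a rigid link with joint damping.
# I * theta_ddot = tau - b * omega - m * g * (L/2) * cos(theta)
import numpy as np
from scipy.integrate import solve_ivp

m, L, b, g = 2.0, 0.8, 0.5, 9.81    # illustrative mass (kg), length (m), damping, gravity
I = m * L**2 / 3.0                  # inertia of a uniform rod about its joint

def dynamics(t, y, tau=0.0):
    theta, omega = y
    alpha = (tau - b * omega - m * g * (L / 2) * np.cos(theta)) / I
    return [omega, alpha]

sol = solve_ivp(dynamics, (0.0, 5.0), [0.0, 0.0], max_step=0.01)
print("final joint angle (rad):", sol.y[0, -1])
```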

  7. Robotic Planetary Drill Tests

    Science.gov (United States)

    Glass, Brian J.; Thompson, S.; Paulsen, G.

    2010-01-01

    Several proposed or planned planetary science missions to Mars and other Solar System bodies over the next decade require subsurface access by drilling. This paper discusses the problems of remote robotic drilling, an automation and control architecture based loosely on observed human behaviors in drilling on Earth, and an overview of robotic drilling field test results using this architecture since 2005. Both rotary-drag and rotary-percussive drills are targeted. A hybrid diagnostic approach incorporates heuristics, model-based reasoning and vibration monitoring with neural nets. Ongoing work leads to flight-ready drilling software.

  8. Human - Robot Proximity

    DEFF Research Database (Denmark)

    Nickelsen, Niels Christian Mossfeldt

    The media and political/managerial levels focus on the opportunities to re-perform Denmark through digitization. Feeding assistive robotics is a welfare technology, relevant to citizens with low or no function in their arms. Despite national dissemination strategies, it proves difficult to recruit...... the study that took place as multi-sited ethnography at different locations in Denmark and Sweden. Based on desk research, observation of meals and interviews I examine socio-technological imaginaries and their practical implications. Human - robotics interaction demands engagement and understanding...

  9. Robot welding process control

    Science.gov (United States)

    Romine, Peter L.

    1991-01-01

    This final report documents the development and installation of software and hardware for Robotic Welding Process Control. Primary emphasis is on serial communications between the CYRO 750 robotic welder, the Heurikon minicomputer running Hunter & Ready VRTX, and an IBM PC/AT, for offline programming and control and for closed-loop welding control. The requirements for completing the implementation of the Rocketdyne weld tracking control are discussed. The procedure for downloading programs from the Intergraph, over the network, is discussed. Conclusions are drawn on the results of this task, and recommendations are made for efficient implementation of communications, weld process control development, and advanced process control procedures using the Heurikon.
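
    Since the report centers on serial links between the welder controller and the offline-programming computers, a generic command/response exchange is sketched below using pyserial. The device path, baud rate and command string are hypothetical placeholders and do not reflect the CYRO 750 or Heurikon protocols.

```python
# Generic serial command/response exchange (pyserial). Port, baud rate and the
# "STATUS?" query are hypothetical placeholders, not the actual welder protocol.
import serial

with serial.Serial("/dev/ttyUSB0", baudrate=9600, timeout=1.0) as port:
    port.write(b"STATUS?\r\n")
    reply = port.readline().decode(errors="replace").strip()
    print("controller replied:", reply)
```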

  10. Embedding visual routines in AnaFocus' Eye-RIS Vision Systems for closing the perception to action loop in roving robots

    Science.gov (United States)

    Jiménez-Marrufo, A.; Caballero-García, D. J.

    2011-05-01

    The purpose of the current paper is to describe how different visual routines can be developed and embedded in the AnaFocus' Eye-RIS Vision System on Chip (VSoC) to close the perception-to-action loop within the roving robots developed under the framework of the SPARK II European project. The Eye-RIS Vision System on Chip employs a bio-inspired architecture where image acquisition and processing are truly intermingled and the processing itself is carried out in two steps. In the first step, processing is fully parallel owing to dedicated circuit structures integrated close to the sensors. In the second step, processing is performed on digitally coded data by means of digital processors. All these capabilities make the Eye-RIS VSoC very suitable for integration within small robots in general, and within the robots developed in the SPARK II project in particular. These systems provide image-processing capabilities and speed comparable to high-end conventional vision systems without the need for high-density image memory and intensive digital processing. As far as perception is concerned, current perceptual schemes are often based on information derived from visual routines. Since real-world images are too complex to be processed for perceptual needs with traditional approaches, more computationally feasible algorithms are required to extract the desired features from the scene in real time and to proceed efficiently with the consequent action. In this paper the development of such algorithms and their implementation, taking full advantage of the sensing-processing capabilities of the Eye-RIS VSoC, are described.
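
    As an illustration of what a minimal perception-to-action visual routine can look like (this is not the Eye-RIS toolchain or API), the sketch below binarizes a grayscale frame, computes the centroid of the bright region, and turns its horizontal offset into a steering command for a roving robot.

```python
# Illustrative visual routine: threshold, centroid, steering command.
# The two-step split loosely mirrors the idea of an early pixel-parallel stage
# followed by digital post-processing, but is plain NumPy, not Eye-RIS code.
import numpy as np

def steering_from_frame(frame: np.ndarray, threshold: int = 200) -> float:
    """Return a turn rate in [-1, 1] from a grayscale frame (H x W, uint8)."""
    mask = frame > threshold                 # stage 1: pixel-wise operation
    if not mask.any():
        return 0.0                           # nothing detected: go straight
    centroid_x = np.nonzero(mask)[1].mean()  # stage 2: digital post-processing
    half_width = frame.shape[1] / 2.0
    return float((centroid_x - half_width) / half_width)

frame = np.zeros((120, 160), dtype=np.uint8)
frame[40:60, 100:120] = 255                  # synthetic bright target, right of center
print("turn command:", steering_from_frame(frame))
```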

  11. Mobile robotics for CANDU maintenance

    International Nuclear Information System (INIS)

    Lipsett, M.G.; Rody, K.H.

    1996-01-01

    Although robotics researchers have long promised that robots would soon be performing tasks in hazardous environments, the reality has yet to live up to the hype. The presently available crop of robots suitable for deployment in industrial situations is remotely operated, requiring skilled users. This talk describes cases where mobile robots have been used successfully in CANDU stations, discusses the difficulties in using mobile robots for reactor maintenance, and provides near-term goals for achievable improvements in performance and usefulness. (author) 5 refs., 2 ills

  12. Robotic system for process sampling

    International Nuclear Information System (INIS)

    Dyches, G.M.

    1985-01-01

    A three-axis cartesian geometry robot for process sampling was developed at the Savannah River Laboratory (SRL) and implemented in one of the site radioisotope separations facilities. Use of the robot reduces personnel radiation exposure and contamination potential by routinely handling sample containers under operator control in a low-level radiation area. This robot represents the initial phase of a longer term development program to use robotics for further sample automation. Preliminary design of a second generation robot with additional capabilities is also described. 8 figs

  13. An Innovative 3D Ultrasonic Actuator with Multidegree of Freedom for Machine Vision and Robot Guidance Industrial Applications Using a Single Vibration Ring Transducer

    Directory of Open Access Journals (Sweden)

    M. Shafik

    2013-07-01

    Full Text Available This paper presents an innovative 3D piezoelectric ultrasonic actuator using a single flexural vibration ring transducer, for machine vision and robot guidance industrial applications. The proposed actuator principally aims to overcome the limited spotlight focus angle of digital visual data capture transducers (digital cameras) and to enhance the ability of machine vision systems to perceive and move in 3D. The actuator design, structure, working principles and finite element analysis are discussed in this paper. A prototype of the actuator was fabricated. Experimental tests and measurements showed the ability of the developed prototype to provide 3D motion with multiple degrees of freedom, a typical speed of movement of 35 revolutions per minute, a resolution of less than 5 μm and a maximum load of 3.5 Newton. These initial characteristics illustrate the potential of the developed 3D micro actuator to address the spotlight focus angle issue of digital visual data capture transducers and the possible improvement that such technology could bring to machine vision and robot guidance industrial applications.

  14. An automated miniature robotic vehicle inspection system

    Energy Technology Data Exchange (ETDEWEB)

    Dobie, Gordon; Summan, Rahul; MacLeod, Charles; Pierce, Gareth; Galbraith, Walter [Centre for Ultrasonic Engineering, University of Strathclyde, 204 George Street, Glasgow, G1 1XW (United Kingdom)

    2014-02-18

    A novel, autonomous reconfigurable robotic inspection system for quantitative NDE mapping is presented. The system consists of a fleet of wireless (802.11g) miniature robotic vehicles, each approximately 175 × 125 × 85 mm with magnetic wheels that enable them to inspect industrial structures such as storage tanks, chimneys and large diameter pipe work. The robots carry one of a number of payloads including a two channel MFL sensor, a 5 MHz dry coupled UT thickness wheel probe and a machine vision camera that images the surface. The system creates an NDE map of the structure overlaying results onto a 3D model in real time. The authors provide an overview of the robot design, data fusion algorithms (positioning and NDE) and visualization software.
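
    The record describes overlaying NDE measurements onto a model of the structure using the robots' estimated positions. A hedged sketch of that idea (not the authors' data-fusion code): accumulate (x, y, thickness) samples from a UT wheel probe into a regular grid and report the thinnest inspected cell. Sample values and grid size are invented.

```python
# Sketch: grid (x, y, thickness) samples into a 2D thickness map.
import numpy as np

samples = np.array([        # x (m), y (m), wall thickness (mm) - synthetic readings
    [0.10, 0.20, 9.8],
    [0.12, 0.21, 9.7],
    [0.50, 0.40, 6.1],      # a thin spot
    [0.52, 0.41, 6.3],
])

cell, nx, ny = 0.05, 20, 20  # 5 cm resolution over a 1 m x 1 m patch
grid_sum = np.zeros((nx, ny))
grid_cnt = np.zeros((nx, ny))

for x, y, t in samples:
    i, j = int(x / cell), int(y / cell)
    grid_sum[i, j] += t
    grid_cnt[i, j] += 1

thickness_map = np.divide(grid_sum, grid_cnt,
                          out=np.full((nx, ny), np.nan), where=grid_cnt > 0)
print("thinnest inspected cell (mm):", np.nanmin(thickness_map))
```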

  15. An automated miniature robotic vehicle inspection system

    International Nuclear Information System (INIS)

    Dobie, Gordon; Summan, Rahul; MacLeod, Charles; Pierce, Gareth; Galbraith, Walter

    2014-01-01

    A novel, autonomous reconfigurable robotic inspection system for quantitative NDE mapping is presented. The system consists of a fleet of wireless (802.11g) miniature robotic vehicles, each approximately 175 × 125 × 85 mm with magnetic wheels that enable them to inspect industrial structures such as storage tanks, chimneys and large diameter pipe work. The robots carry one of a number of payloads including a two channel MFL sensor, a 5 MHz dry coupled UT thickness wheel probe and a machine vision camera that images the surface. The system creates an NDE map of the structure overlaying results onto a 3D model in real time. The authors provide an overview of the robot design, data fusion algorithms (positioning and NDE) and visualization software

  16. 24th International Conference on Robotics in Alpe-Adria-Danube Region

    CERN Document Server

    2016-01-01

    This volume includes the Proceedings of the 24th International Conference on Robotics in Alpe-Adria-Danube Region, RAAD 2015, which was held in Bucharest, Romania, on May 27-29, 2015. The Conference brought together academic and industry researchers in robotics from the 11 countries affiliated to the Alpe-Adria-Danube space: Austria, Croatia, Czech Republic, Germany, Greece, Hungary, Italy, Romania, Serbia, Slovakia and Slovenia, and their worldwide partners. According to its tradition, RAAD 2015 covered all important areas of research, development and innovation in robotics, including new trends such as: bio-inspired and cognitive robots, visual servoing of robot motion, human-robot interaction, and personal robots for ambient assisted living. The accepted papers have been grouped in nine sessions: Robot integration in industrial applications; Grasping analysis, dexterous grippers and component design; Advanced robot motion control; Robot vision and sensory control; Human-robot interaction and collaboration;...

  17. Developing a successful robotics program.

    Science.gov (United States)

    Luthringer, Tyler; Aleksic, Ilija; Caire, Arthur; Albala, David M

    2012-01-01

    Advancements in the robotic surgical technology have revolutionized the standard of care for many surgical procedures. The purpose of this review is to evaluate the important considerations in developing a new robotics program at a given healthcare institution. Patients' interest in robotic-assisted surgery has and continues to grow because of improved outcomes and decreased periods of hospitalization. Resulting market forces have created a solid foundation for the implementation of robotic surgery into surgical practice. Given proper surgeon experience and an efficient system, robotic-assisted procedures have been cost comparable to open surgical alternatives. Surgeon training and experience is closely linked to the efficiency of a new robotics program. Formally trained robotic surgeons have better patient outcomes and shorter operative times. Training in robotics has shown no negative impact on patient outcomes or mentor learning curves. Individual economic factors of local healthcare settings must be evaluated when planning for a new robotics program. The high cost of the robotic surgical platform is best offset with a large surgical volume. A mature, experienced surgeon is integral to the success of a new robotics program.

  18. Open Issues in Evolutionary Robotics.

    Science.gov (United States)

    Silva, Fernando; Duarte, Miguel; Correia, Luís; Oliveira, Sancho Moura; Christensen, Anders Lyhne

    2016-01-01

    One of the long-term goals in evolutionary robotics is to be able to automatically synthesize controllers for real autonomous robots based only on a task specification. While a number of studies have shown the applicability of evolutionary robotics techniques for the synthesis of behavioral control, researchers have consistently been faced with a number of issues preventing the widespread adoption of evolutionary robotics for engineering purposes. In this article, we review and discuss the open issues in evolutionary robotics. First, we analyze the benefits and challenges of simulation-based evolution and subsequent deployment of controllers versus evolution on real robotic hardware. Second, we discuss specific evolutionary computation issues that have plagued evolutionary robotics: (1) the bootstrap problem, (2) deception, and (3) the role of genomic encoding and genotype-phenotype mapping in the evolution of controllers for complex tasks. Finally, we address the absence of standard research practices in the field. We also discuss promising avenues of research. Our underlying motivation is the reduction of the current gap between evolutionary robotics and mainstream robotics, and the establishment of evolutionary robotics as a canonical approach for the engineering of autonomous robots.
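
    To ground the simulation-based evolution discussed above, here is a toy sketch of the basic evolutionary loop: evaluate a population of controller genomes with a fitness function, keep an elite, and refill the population with mutated copies. The fitness function and genome encoding are stand-ins; real evolutionary robotics replaces them with a robot simulator (or hardware trials) and a genotype-phenotype mapping for the controller.

```python
# Toy evolutionary loop with truncation selection and Gaussian mutation.
import random

GENOME_LEN, POP, GENERATIONS, SIGMA = 8, 30, 50, 0.1

def evaluate(genome):
    """Stub fitness: peaks when every gene approaches 0.5 (stand-in for a simulator run)."""
    return -sum((g - 0.5) ** 2 for g in genome)

def mutate(genome):
    return [g + random.gauss(0.0, SIGMA) for g in genome]

population = [[random.random() for _ in range(GENOME_LEN)] for _ in range(POP)]
for _ in range(GENERATIONS):
    elite = sorted(population, key=evaluate, reverse=True)[: POP // 5]
    population = elite + [mutate(random.choice(elite)) for _ in range(POP - len(elite))]

print("best fitness:", evaluate(max(population, key=evaluate)))
```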

  19. 1st Iberian Robotics Conference

    CERN Document Server

    Sanfeliu, Alberto; Ferre, Manuel; ROBOT2013; Advances in robotics

    2014-01-01

    This book contains the proceedings of ROBOT 2013: FIRST IBERIAN ROBOTICS CONFERENCE, which included both state-of-the-art and more practical presentations dealing with implementation problems, support technologies and future applications. A growing interest in Assistive Robotics, Agricultural Robotics, Field Robotics, Grasping and Dexterous Manipulation, Humanoid Robots, Intelligent Systems and Robotics, and Marine Robotics has been demonstrated by the large number of contributions. Moreover, ROBOT2013 incorporated a special session on Legal and Ethical Aspects in Robotics, a topic of growing relevance. The Conference was held in Madrid (28-29 November 2013), organised by the Sociedad Española para la Investigación y Desarrollo en Robótica (SEIDROB) and by the Centre for Automation and Robotics - CAR (Universidad Politécnica de Madrid (UPM) and Consejo Superior de Investigaciones Científicas (CSIC)), with the co-operation of Grupo Temático de Robótica CEA-GT...

  20. Mergeable nervous systems for robots.

    Science.gov (United States)

    Mathews, Nithin; Christensen, Anders Lyhne; O'Grady, Rehan; Mondada, Francesco; Dorigo, Marco

    2017-09-12

    Robots have the potential to display a higher degree of lifetime morphological adaptation than natural organisms. By adopting a modular approach, robots with different capabilities, shapes, and sizes could, in theory, construct and reconfigure themselves as required. However, current modular robots have only been able to display a limited range of hardwired behaviors because they rely solely on distributed control. Here, we present robots whose bodies and control systems can merge to form entirely new robots that retain full sensorimotor control. Our control paradigm enables robots to exhibit properties that go beyond those of any existing machine or of any biological organism: the robots we present can merge to form larger bodies with a single centralized controller, split into separate bodies with independent controllers, and self-heal by removing or replacing malfunctioning body parts. This work takes us closer to robots that can autonomously change their size, form and function. Robots that can self-assemble into different morphologies are desired to perform tasks that require different physical capabilities. Mathews et al. design robots whose bodies and control systems can merge and split to form new robots that retain full sensorimotor control and act as a single entity.

  1. The New Robotics-towards human-centered machines.

    Science.gov (United States)

    Schaal, Stefan

    2007-07-01

    Research in robotics has moved away from its primary focus on industrial applications. The New Robotics is a vision that has been developed in past years by our own university and many other national and international research institutions and addresses how increasingly more human-like robots can live among us and take over tasks where our current society has shortcomings. Elder care, physical therapy, child education, search and rescue, and general assistance in daily life situations are some of the examples that will benefit from the New Robotics in the near future. With these goals in mind, research for the New Robotics has to embrace a broad interdisciplinary approach, ranging from traditional mathematical issues of robotics to novel issues in psychology, neuroscience, and ethics. This paper outlines some of the important research problems that will need to be resolved to make the New Robotics a reality.

  2. Recent Development of Rehabilitation Robots

    Directory of Open Access Journals (Sweden)

    Zhiqin Qian

    2015-02-01

    Full Text Available We have conducted a critical review of the development of rehabilitation robots to identify the limitations of existing studies and clarify some promising research directions in this field. This paper summarizes our findings and understanding. The demands for assistive technologies for the elderly and disabled population are discussed, the advantages and disadvantages of rehabilitation robots as assistive technologies are explored, the issues involved in the development of rehabilitation robots are investigated, some representative robots in this field from leading research institutes are introduced, and a few critical challenges in developing advanced rehabilitation robots are identified. Finally, to meet these challenges, reconfigurable and modular systems are proposed, and a few critical areas leading to the potential success of rehabilitation robots are discussed.

  3. Probabilistic approaches to robotic perception

    CERN Document Server

    Ferreira, João Filipe

    2014-01-01

    This book tries to address the following questions: How should the uncertainty and incompleteness inherent in sensing the environment be represented and modelled in a way that will increase the autonomy of a robot? How should a robotic system perceive, infer, decide and act efficiently? These are two of the challenging questions the robotics community and robotics researchers have been facing. The development of the robotics domain by the 1980s spurred the convergence of automation to autonomy, and the field of robotics has consequently converged towards the field of artificial intelligence (AI). Since the end of that decade, the general public's imagination has been stimulated by high expectations of autonomy, where AI and robotics try to solve difficult cognitive problems through algorithms developed from either philosophical and anthropological conjectures or incomplete notions of cognitive reasoning. Many of these developments do not unveil even a few of the processes through which biological organisms solve thes...
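
    The questions above about representing sensing uncertainty are commonly answered with Bayesian filtering. A minimal illustration (not from the book), assuming a robot in a five-cell corridor with a noisy binary door detector; the map and all probabilities are invented.

```python
# Minimal discrete Bayes filter: measurement update followed by a motion prediction.
import numpy as np

belief = np.full(5, 0.2)              # uniform prior over cells 0..4
doors = np.array([1, 0, 0, 1, 0])     # map: cells 0 and 3 have doors
P_HIT, P_MISS = 0.8, 0.2              # P(z=door | door), P(z=door | no door)

def update(belief, z_door):
    if z_door:
        likelihood = np.where(doors == 1, P_HIT, P_MISS)
    else:
        likelihood = np.where(doors == 1, 1 - P_HIT, 1 - P_MISS)
    posterior = likelihood * belief
    return posterior / posterior.sum()

def predict(belief):
    return np.roll(belief, 1)         # deterministic "move one cell right" model

belief = predict(update(belief, z_door=True))
print("belief:", np.round(belief, 3))
```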

  4. International Conference Educational Robotics 2016

    CERN Document Server

    Moro, Michele; Menegatti, Emanuele

    2017-01-01

    This book includes papers presented at the International Conference “Educational Robotics 2016 (EDUROBOTICS)”, Athens, November 25, 2016. The papers build on constructivist and constructionist pedagogy and cover a variety of topics, including teacher education, design of educational robotics activities, didactical models, assessment methods, theater robotics, programming & making electronics with Snap4Arduino, the Duckietown project, robotics driven by tangible programming, Lego Mindstorms combined with App Inventor, the Orbital Education Platform, Anthropomorphic Robots and Human Meaning Makers in Education, and more. It provides researchers interested in educational robotics with the latest advances in the field with a focus on science, technology, engineering, arts and mathematics (STEAM) education. At the same time it offers teachers and educators from primary to secondary and tertiary education insights into how educational robotics can trigger the development of technological interest and 21st c...

  5. Studying Robots Outside the Lab

    DEFF Research Database (Denmark)

    Blond, Lasse

    As more and more robots enter our social world there is a strong need for further field studies of human-robotic interaction. Based on a two-year ethnographic study of the implementation of the South Korean socially assistive robot in Danish elderly care this paper argues that empirical...... and ethnographic studies will enhance understandings of the dynamics of HRI. Furthermore, the paper emphasizes how users and the context of use matter to the integration of robots, as it is shown how roboticists are unable to control how their designs are implemented in practice and that the sociality of social...... robots is inscribed by its users in social practice. This paper can be seen as a contribution to studies of long-term HRI. It presents the challenges of robot adaptation in practice and discusses the limitations of the present conceptual understanding of human-robotic relations. The ethnographic data......

  6. Mobile Surveillance and Monitoring Robots

    International Nuclear Information System (INIS)

    Kimberly, Howard R.; Shipers, Larry R.

    1999-01-01

    Long-term nuclear material storage will require in-vault data verification, sensor testing, error and alarm response, inventory, and maintenance operations. System concept development efforts for a comprehensive nuclear material management system have identified the use of a small flexible mobile automation platform to perform these surveillance and maintenance operations. In order to have near-term wide-range application in the Complex, a mobile surveillance system must be small, flexible, and adaptable enough to allow retrofit into existing special nuclear material facilities. The objective of the Mobile Surveillance and Monitoring Robot project is to satisfy these needs by development of a human scale mobile robot to monitor the state of health, physical security and safety of items in storage and process; recognize and respond to alarms, threats, and off-normal operating conditions; and perform material handling and maintenance operations. The system will integrate a tool kit of onboard sensors and monitors, maintenance equipment and capability, and SNL developed non-lethal threat response technology with the intelligence to identify threats and develop and implement first response strategies for abnormal signals and alarm conditions. System versatility will be enhanced by incorporating a robot arm, vision and force sensing, robust obstacle avoidance, and appropriate monitoring and sensing equipment

  7. Feasibility of Robotics and Machine Vision in Military Combat Ration Inspection (Short Term Project STP No. 11)

    Science.gov (United States)

    1994-06-01

    January 1989. [11] Burdea G. and Zhuang J. Dextrous telerobotics with force feedback - an overview - part 2: Control and implementation. Robotica, UK, 9:291-298, 1991. [12] Burdea G. and Zhuang J. Dextrous telerobotics with force feedback - an overview, part 1: Human factors. Robotica, UK, 9:171-178... transplanting workcell. In American Society of Agricultural Engineering, St. Joseph, MI, 1991. [26] Frost A.R. Robotic milking: a review. Robotica

  8. Welding robot package; Arc yosetsu robot package

    Energy Technology Data Exchange (ETDEWEB)

    Nishikawa, S. [Yaskawa Electric Corp., Kitakyushu (Japan)

    1998-09-01

    In conventional high-speed welding robots, the welding current was controlled mainly to reduce spatter during short circuits and to stabilize the beads through periodic short-circuiting. High-speed welding, however, requires the deposition rate to increase with travel speed, so a large-current, low-spatter welding-current region control was added. The units were integrated into a package that keeps the arc length short and free of dispersion, enabling welds without defects such as undercut and uneven beads. In the automobile industry, the use of aluminum parts is expanding to reduce weight. Aluminum is very difficult to weld, and automation has made little progress despite the poor working environment. Buckling of the welding wire occurs easily, and wire feed is obstructed when chipped powder deposits on the torch cable and accumulates inside the contact tip, causing the wire to stick. At the corners of rectangular pipes, the welding path easily deviates during welding. By addressing these problems, an aluminum MIG welding robot package has been developed. 13 figs.

  9. An active robot vision system for real-time 3-D structure recovery

    Energy Technology Data Exchange (ETDEWEB)

    Juvin, D. [CEA Centre d`Etudes de Saclay, 91 - Gif-sur-Yvette (France). Dept. d`Electronique et d`Instrumentation Nucleaire; Boukir, S.; Chaumette, F.; Bouthemy, P. [Rennes-1 Univ., 35 (France)

    1993-10-01

    This paper presents an active approach to the task of computing the 3-D structure of a nuclear plant environment from an image sequence, more precisely the recovery of the 3-D structure of cylindrical objects. Active vision is considered by computing adequate camera motions using image-based control laws. This approach requires real-time tracking of the limbs of the cylinders. Therefore, an original matching approach, which relies on an algorithm for determining moving edges, is proposed. This method is distinguished by its robustness and its ease of implementation. It has been implemented on a parallel image processing board and real-time performance has been achieved. The whole scheme has been successfully validated in an experimental set-up.
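
    The "image-based control laws" mentioned above are typically of the classic visual-servoing form v = -λ L⁺ (s - s*), where s are the current image features, s* their desired values and L the interaction matrix. The sketch below shows that single computation with a placeholder interaction matrix; it is not the cylinder-limb model used in the paper.

```python
# One image-based visual servoing step: camera twist from feature error.
# L is a dummy 4x6 interaction matrix standing in for the model derived in the paper.
import numpy as np

lam = 0.5
s = np.array([0.12, -0.05, 0.30, 0.08])              # current image features
s_star = np.array([0.00, 0.00, 0.25, 0.00])          # desired image features
L = np.random.default_rng(0).normal(size=(4, 6))     # placeholder interaction matrix

v_camera = -lam * np.linalg.pinv(L) @ (s - s_star)   # (vx, vy, vz, wx, wy, wz)
print("commanded camera twist:", np.round(v_camera, 3))
```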

  10. An active robot vision system for real-time 3-D structure recovery

    International Nuclear Information System (INIS)

    Juvin, D.

    1993-01-01

    This paper presents an active approach to the task of computing the 3-D structure of a nuclear plant environment from an image sequence, more precisely the recovery of the 3-D structure of cylindrical objects. Active vision is considered by computing adequate camera motions using image-based control laws. This approach requires real-time tracking of the limbs of the cylinders. Therefore, an original matching approach, which relies on an algorithm for determining moving edges, is proposed. This method is distinguished by its robustness and its ease of implementation. It has been implemented on a parallel image processing board and real-time performance has been achieved. The whole scheme has been successfully validated in an experimental set-up

  11. Unsupervised semantic indoor scene classification for robot vision based on context of features using Gist and HSV-SIFT

    Science.gov (United States)

    Madokoro, H.; Yamanashi, A.; Sato, K.

    2013-08-01

    This paper presents an unsupervised scene classification method for actualizing semantic recognition of indoor scenes. Background and foreground features are respectively extracted using Gist and color scale-invariant feature transform (SIFT) as feature representations based on context. We used hue, saturation, and value SIFT (HSV-SIFT) because of its simple algorithm with low calculation costs. Our method creates bags of features by voting visual words, created from both feature descriptors, into a two-dimensional histogram. Moreover, our method generates labels as candidate categories for time-series images while maintaining stability and plasticity together. Automatic labeling of category maps can be realized using labels created with adaptive resonance theory (ART) as teaching signals for counter propagation networks (CPNs). We evaluated our method for semantic scene classification using KTH's image database for robot localization (KTH-IDOL), which is widely used for robot localization and navigation. The mean classification accuracies of Gist, gray SIFT, one-class support vector machines (OC-SVM), position-invariant robust features (PIRF), and our method are, respectively, 39.7, 58.0, 56.0, 63.6, and 79.4%. The result of our method is 15.8% higher than that of PIRF. Moreover, we applied our method for fine classification using our original mobile robot, obtaining a mean classification accuracy of 83.2% for six zones.
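
    A simplified sketch of the bag-of-features step described above (vocabulary construction and word-histogram voting only), using scikit-learn k-means; the random descriptors stand in for the Gist and HSV-SIFT features of the paper, and the vocabulary size of 32 is arbitrary.

```python
# Bag-of-features: cluster local descriptors into visual words, then describe an
# image by its normalized word histogram. Descriptors are random stand-ins.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
training_descriptors = rng.normal(size=(1000, 128))   # e.g. 128-D SIFT-like vectors
vocabulary = KMeans(n_clusters=32, n_init=10, random_state=0).fit(training_descriptors)

def bag_of_features(image_descriptors):
    words = vocabulary.predict(image_descriptors)
    hist = np.bincount(words, minlength=32).astype(float)
    return hist / hist.sum()

print(bag_of_features(rng.normal(size=(200, 128))).round(3))
```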

  12. Modelling of industrial robot in LabView Robotics

    Science.gov (United States)

    Banas, W.; Cwikła, G.; Foit, K.; Gwiazda, A.; Monica, Z.; Sekala, A.

    2017-08-01

    Currently one can find many models of industrial systems, including robots. These models differ from each other not only in the accuracy of the represented parameters but also in the scope of representation. For example, CAD models describe the geometry of the robot, and some also provide mass parameters such as mass, center of gravity and moment of inertia. These models are used in the design of robotic lines and cells. Systems for off-line programming also use such models, and many of them can be exchanged with CAD. It is important to note that models for off-line programming describe not only the geometry but also contain the information necessary to create a program for the robot, so exporting from CAD to an off-line programming system requires additional information. These models are used for static determination of the reachability of points and for collision testing. This is enough to generate a program for the robot, and even to check the interaction of elements of the production line or robotic cell. Mathematical models allow the study of the kinematic and dynamic properties of robot movement. In these models the geometry is not so important, so only selected parameters are used, such as the length of the robot arm, the center of gravity and the moment of inertia. These parameters are introduced into the equations of motion of the robot and the motion parameters are determined (a minimal forward-kinematics example is sketched below).
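
    As a minimal example of the kind of mathematical model described above, which uses only a few parameters such as link lengths, the sketch below computes the forward kinematics of a two-link planar arm; the link lengths are arbitrary illustrative values.

```python
# Forward kinematics of a 2-link planar arm: joint angles -> end-effector position.
import math

def forward_kinematics(theta1, theta2, l1=0.4, l2=0.3):
    x = l1 * math.cos(theta1) + l2 * math.cos(theta1 + theta2)
    y = l1 * math.sin(theta1) + l2 * math.sin(theta1 + theta2)
    return x, y

print(forward_kinematics(math.radians(30), math.radians(45)))
```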

  13. Robotics Technology Development Program

    International Nuclear Information System (INIS)

    1994-02-01

    The Robotics Technology Development Program (RTDP) is a "needs-driven" effort. A lengthy series of presentations and discussions at DOE sites considered critical to DOE's Environmental Restoration and Waste Management (EM) Programs resulted in a clear understanding of the robotics applications needed to resolve definitive problems at the sites. A detailed analysis was made of the Tank Waste Retrieval (TWR), Contaminant Analysis Automation (CAA), Mixed Waste Operations (MWO), and Decontamination & Dismantlement (D&D) application areas. The RTDP Group realized that much of the technology development was common (Cross Cutting - CC) to each of these robotics application areas, for example computer control and sensor interface protocols. Further, the OTD approach to the Research, Development, Demonstration, Testing, and Evaluation (RDDT&E) process urged an additional organizational break-out between short-term (1-3 years) and long-term (3-5 years) efforts (Advanced Technology - AT). The RTDP is thus organized around these application areas -- TWR, CAA, MWO, D&D and CC&AT -- with the first four developing short-term applied robotics. An RTDP Five-Year Plan was developed for organizing the Program to meet the needs in these application areas

  14. Energy in Robotics

    NARCIS (Netherlands)

    Folkertsma, Gerrit A.; Stramigioli, Stefano

    2017-01-01

    Energy and energy exchange govern interactions in the physical world. By explicitly considering the energy and power in a robotic system, many control and design problems become easier or more insightful than in a purely signal-based view. We show the application of these energy considerations to

  15. Soft Robotic Actuators

    Science.gov (United States)

    Godfrey, Juleon Taylor

    In this thesis a survey of soft robotic actuators is conducted. The actuators are classified into three main categories: Pneumatic Artificial Muscles (PAM), Electronic Electroactive Polymers (Electric EAP), and Ionic Electroactive Polymers (Ionic EAP). Soft robots can have many degrees of freedom and are more compliant than hard robots, which makes them suitable for applications that are difficult for hard robots. For each actuator, background history, build materials, operating principles, and modeling are presented. Multiple actuators in each class are reviewed, highlighting both their use and their mathematical formulation. In addition to the survey, the McKibben actuator was chosen for fabrication and in-depth experimental analysis. Four McKibben actuators were fabricated using mesh sleeve, barbed hose fittings, and different elastic bladders. All were actuated using compressed air. Tensile tests were performed for each actuator to measure the tension force as air pressure increased from 20 to 100 psi in 10 psi increments. To account for material relaxation properties, eleven trials for each actuator were run over 2-3 days. In conclusion, the smallest outer-diameter elastic bladder was capable of producing the highest force due to the larger gap between the bladder and the sleeve.

  16. Robotic and Survey Telescopes

    Science.gov (United States)

    Woźniak, Przemysław

    Robotic telescopes are revolutionizing the way astronomers collect their data and conduct sky surveys. This chapter begins with a discussion of principles that guide the process of designing, constructing, and operating telescopes and observatories that offer a varying degree of automation, from instruments remotely controlled by observers to fully autonomous systems requiring no human supervision during their normal operations. Emphasis is placed on design trade-offs involved in building end-to-end systems intended for a wide range of science applications. The second part of the chapter contains descriptions of several projects and instruments, both existing and currently under development. It is an attempt to provide a representative selection of actual systems that illustrates the state of the art in technology, as well as important ideas and milestones in the development of the field. The list of presented instruments spans the full range in size, starting from small all-sky monitors, through midrange robotic and survey telescopes, and finishing with large robotic instruments and surveys. Explosive growth of telescope networking is enabling entirely new modes of interaction between the survey and follow-up observing. Increasing importance of standardized communication protocols and software is stressed. These developments are driven by the fusion of robotic telescope hardware, massive storage and databases, real-time knowledge extraction, and data cross-correlation on a global scale. The chapter concludes with examples of major science results enabled by these new technologies and future prospects.

  17. "Integrative Social Robotics"

    DEFF Research Database (Denmark)

    Seibt, Johanna

    2016-01-01

    -theoretic research in the Humanities, the Social Sciences, and the Human Sciences. The resulting paradigm is user-driven design writ large: research, design, and development of social robotics applications are guided—with multiple feedback—by the reflected normative preferences of a cultural community....

  18. Costruire e programmare robot

    Directory of Open Access Journals (Sweden)

    Barbara Caci

    2002-01-01

    Full Text Available In scenarios involving new educational technologies, educational robotics is progressively gaining a prominent place. The term designates a variety of learning experiences, inspired by the theoretical and methodological principles of constructivism and embodied cognition, and based on the use of Robotic Construction Kits as learning tools.

  19. Robotic Art for Wearable

    DEFF Research Database (Denmark)

    Lund, Henrik Hautop; Pagliarini, Luigi

    2010-01-01

    on “simple” plug-and-play circuits, ranging from pure sensors-actuators schemes to artefacts with a smaller level of elaboration complexity. Indeed, modular robotic wearable focuses on enhancing the body perception and proprioperception by trying to substitute all of the traditional exoskeletons perceptive...

  20. Sensory Robot Gripper

    DEFF Research Database (Denmark)

    Drimus, Alin

    The project researches and proposes a tactile sensor system for equipping robotic grippers, thus giving them a sense of touch. We start by reviewing work that covers the building of tactile sensors and we focus on the flexible sensors with multiple sensing elements. As the piezoresistive, capacit......, such as establishing of contact, release of contact or slip. The proposed applications are just a few examples of the advantages of equipping robotic grippers with such a tactile sensor system, that is robust, fast, affordable, adaptable to any kind of gripper and has properties similar to the human sense of touch....... Based on experimental validation, we are confident that our proposed tactile sensor solution can be successfully employed in other application areas like reactive grasping, exploration of unknown objects, slip avoidance, dexterous manipulation or service robotics....