WorldWideScience

Sample records for robot vision robot

  1. Robot vision for nuclear advanced robot

    International Nuclear Information System (INIS)

    Nakayama, Ryoichi; Okano, Hideharu; Kuno, Yoshinori; Miyazawa, Tatsuo; Shimada, Hideo; Okada, Satoshi; Kawamura, Atsuo

    1991-01-01

    This paper describes a robot vision and operation system for a nuclear advanced robot. The robot vision system consists of robot position detection, obstacle detection, and object recognition. With these vision techniques, a mobile robot can plan a path and move autonomously along the planned path. The authors implemented the above robot vision system on the 'Advanced Robot for Nuclear Power Plant' and tested it in an environment mocked up as nuclear power plant facilities. Since the operation system for this robot consists of an operator's console and a large stereo monitor, the system can easily be operated by one person. Experimental tests were made using the Advanced Robot (nuclear robot). Results indicate that the proposed operation system is very useful and can be operated by a single person. (author)

  2. Robot Vision Library

    Science.gov (United States)

    Howard, Andrew B.; Ansar, Adnan I.; Litwin, Todd E.; Goldberg, Steven B.

    2009-01-01

    The JPL Robot Vision Library (JPLV) provides real-time robot vision algorithms for developers who are not vision specialists. The package includes algorithms for stereo ranging, visual odometry, and unsurveyed camera calibration, and has unique support for very wide-angle lenses.

  3. Robot vision

    International Nuclear Information System (INIS)

    Hall, E.L.

    1984-01-01

    Almost all industrial robots use internal sensors, such as shaft encoders which measure rotary position or tachometers which measure velocity, to control their motions. Most controllers also provide interface capabilities so that signals from conveyors, machine tools, and the robot itself may be used to accomplish a task. However, advanced external sensors, such as visual sensors, can provide a much greater degree of adaptability for robot control as well as add automatic inspection capabilities to the industrial robot. Visual and other sensors are now being used in fundamental operations such as material processing with immediate inspection, material handling with adaptation, arc welding, and complex assembly tasks. A new industry of robot vision has emerged. The application of these systems is an area of great potential.

  4. Robotic vision system for random bin picking with dual-arm robots

    Directory of Open Access Journals (Sweden)

    Kang Sangseung

    2016-01-01

    Full Text Available Random bin picking is one of the most challenging industrial robotics applications. It involves a complicated interaction between the vision system, the robot, and the control system. For a packaging operation requiring a pick-and-place task, the robot system must be able to recognize the applicable target object among randomized objects in a bin. In this paper, we introduce a robotic vision system for bin picking using industrial dual-arm robots. The proposed system recognizes the best object among randomized target candidates based on stereo vision and estimates the position and orientation of the object. It then sends the result to the robot control system. The system was developed for use in the packaging process of cell phone accessories using dual-arm robots.
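The stereo step the abstract describes, estimating a part's 3D position before grasping, rests on the standard pinhole depth-from-disparity relation. A minimal sketch follows; all camera parameters and pixel values are illustrative assumptions, not numbers from the paper.

```python
# Depth from a calibrated stereo pair: Z = f * B / d, where f is the focal
# length in pixels, B the baseline in metres, d the disparity in pixels.
# Values below are illustrative, not from the paper.

def stereo_depth(focal_px: float, baseline_m: float, disparity_px: float) -> float:
    """Depth (m) of a point observed at the given disparity."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

def backproject(u: float, v: float, cx: float, cy: float,
                focal_px: float, depth_m: float) -> tuple:
    """Recover the (X, Y, Z) camera-frame position from pixel (u, v) and depth."""
    x = (u - cx) * depth_m / focal_px
    y = (v - cy) * depth_m / focal_px
    return (x, y, depth_m)

# Example: 700 px focal length, 10 cm baseline, 35 px disparity -> 2 m depth.
z = stereo_depth(700.0, 0.10, 35.0)
print(z)  # 2.0
```

Orientation estimation would additionally require matching several points on the object, but the per-point geometry is the same.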

  5. Vision servo of industrial robot: A review

    Science.gov (United States)

    Zhang, Yujin

    2018-04-01

    Robot technology has been applied to many areas of production and life. With the continuous development of robot applications, the requirements placed on robots are also getting higher and higher. In order to give robots better perception, vision sensors have been widely used in industrial robots. In this paper, application directions of industrial robots are reviewed. The development, classification, and application of robot vision servo technology are discussed, and the development prospects of industrial robot vision servo technology are proposed.

  6. Beyond speculative robot ethics: a vision assessment study on the future of the robotic caretaker.

    Science.gov (United States)

    van der Plas, Arjanna; Smits, Martijntje; Wehrmann, Caroline

    2010-11-01

    In this article we develop a dialogue model for robot technology experts and designated users to discuss visions of the future of robotics in long-term care. Our vision assessment study aims for more distinguished and better informed visions of future robots. Surprisingly, our experiment also led to some promising co-designed robot concepts in which jointly articulated moral guidelines are embedded. With our model, we believe we have designed an interesting response to a recent call for a less speculative ethics of technology, by encouraging discussions about the quality of positive and negative visions of the future of robotics.

  7. Beyond speculative robot ethics: A vision assessment study on the future of the robotic caretaker

    NARCIS (Netherlands)

    Plas, A.P. van der; Smits, M.; Wehrmann, C.

    2010-01-01

    In this article we develop a dialogue model for robot technology experts and designated users to discuss visions on the future of robotics in long-term care. Our vision assessment study aims for more distinguished and more informed visions on future robots. Surprisingly, our experiment also led to

  8. Robotics, vision and control fundamental algorithms in Matlab

    CERN Document Server

    Corke, Peter

    2017-01-01

    Robotic vision, the combination of robotics and computer vision, involves the application of computer algorithms to data acquired from sensors. The research community has developed a large body of such algorithms but for a newcomer to the field this can be quite daunting. For over 20 years the author has maintained two open-source MATLAB® Toolboxes, one for robotics and one for vision. They provide implementations of many important algorithms and allow users to work with real problems, not just trivial examples. This book makes the fundamental algorithms of robotics, vision and control accessible to all. It weaves together theory, algorithms and examples in a narrative that covers robotics and computer vision separately and together. Using the latest versions of the Toolboxes the author shows how complex problems can be decomposed and solved using just a few simple lines of code. The topics covered are guided by real problems observed by the author over many years as a practitioner of both robotics and compu...

  9. VIP - A Framework-Based Approach to Robot Vision

    Directory of Open Access Journals (Sweden)

    Gerd Mayer

    2008-11-01

    Full Text Available For robot perception, video cameras are very valuable sensors, but the computer vision methods applied to extract information from camera images are usually computationally expensive. Integrating computer vision methods into a robot control architecture requires balancing the exploitation of camera images against the need to preserve reactivity and robustness. We claim that better software support is needed in order to facilitate and simplify the application of computer vision and image processing methods on autonomous mobile robots. In particular, such support must address a simplified specification of image processing architectures, control and synchronization issues of image processing steps, and the integration of the image processing machinery into the overall robot control architecture. This paper introduces the video image processing (VIP) framework, a software framework for multithreaded control flow modeling in robot vision.

  10. VIP - A Framework-Based Approach to Robot Vision

    Directory of Open Access Journals (Sweden)

    Hans Utz

    2006-03-01

    Full Text Available For robot perception, video cameras are very valuable sensors, but the computer vision methods applied to extract information from camera images are usually computationally expensive. Integrating computer vision methods into a robot control architecture requires balancing the exploitation of camera images against the need to preserve reactivity and robustness. We claim that better software support is needed in order to facilitate and simplify the application of computer vision and image processing methods on autonomous mobile robots. In particular, such support must address a simplified specification of image processing architectures, control and synchronization issues of image processing steps, and the integration of the image processing machinery into the overall robot control architecture. This paper introduces the video image processing (VIP) framework, a software framework for multithreaded control flow modeling in robot vision.

  11. Machine Learning for Robotic Vision

    OpenAIRE

    Drummond, Tom

    2018-01-01

    Machine learning is a crucial enabling technology for robotics, in particular for unlocking the capabilities afforded by visual sensing. This talk will present research within Prof Drummond’s lab that explores how machine learning can be developed and used within the context of Robotic Vision.

  12. A Practical Solution Using A New Approach To Robot Vision

    Science.gov (United States)

    Hudson, David L.

    1984-01-01

    Up to now, robot vision systems have been designed to serve both application development and operational needs in inspection, assembly, and material handling. This universal approach to robot vision is too costly for many practical applications. A new industrial vision system separates the function of application program development from on-line operation. A Vision Development System (VDS) is equipped with facilities designed to simplify and accelerate the application program development process. A complementary but lower-cost Target Application System (TASK) runs the application program developed with the VDS. This concept is presented in the context of an actual robot vision application that improves inspection and assembly for a manufacturer of electronic terminal keyboards. Applications developed with a VDS experience lower development cost when compared with conventional vision systems. Since the TASK processor is not burdened with development tools, it can be installed at a lower cost than comparable "universal" vision systems that are intended to be used for both development and on-line operation. The VDS/TASK approach opens more industrial applications to robot vision that previously were not practical because of the high cost of vision systems. Although robot vision is a new technology, it has been applied successfully to a variety of industrial needs in inspection, manufacturing, and material handling. New developments in robot vision technology are creating practical, cost-effective solutions for a variety of industrial needs. A year or two ago, researchers and robot manufacturers interested in implementing a robot vision application could take one of two approaches. The first approach was to purchase all the necessary vision components from various sources. That meant buying an image processor from one company, a camera from another, and lenses and light sources from yet others. The user then had to assemble the pieces, and in most instances he had to write

  13. Advanced robot vision system for nuclear power plants

    International Nuclear Information System (INIS)

    Onoguchi, Kazunori; Kawamura, Atsuro; Nakayama, Ryoichi.

    1991-01-01

    We have developed a robot vision system for advanced robots used in nuclear power plants, under a contract with the Agency of Industrial Science and Technology of the Ministry of International Trade and Industry. This work is part of the large-scale 'advanced robot technology' project. The robot vision system consists of self-location measurement, obstacle detection, and object recognition subsystems, which are activated by a total control subsystem. This paper presents details of these subsystems and the experimental results obtained. (author)

  14. Research on robot navigation vision sensor based on grating projection stereo vision

    Science.gov (United States)

    Zhang, Xiaoling; Luo, Yinsheng; Lin, Yuchi; Zhu, Lei

    2016-10-01

    A novel visual navigation method based on grating projection stereo vision for mobile robots in dark environments is proposed. The method combines the grating projection profilometry of plane structured light with stereo vision technology. It can be employed to realize obstacle detection, SLAM (Simultaneous Localization and Mapping), and visual odometry for mobile robot navigation in dark environments, without the image matching of stereo vision or the phase unwrapping of grating projection profilometry. First, we study the theory of the new vision sensor and build geometric and mathematical models of the grating projection stereo vision system. Second, the computation of the 3D coordinates of obstacles in the robot's visual field is studied, so that obstacles in the field can be located accurately. The results of simulation experiments and analysis show that this research helps address the problem of autonomous navigation for mobile robots in dark environments, and provides a theoretical basis and exploration direction for further study on the navigation of space-exploring robots in dark and GPS-denied environments.

  15. Vision-based mapping with cooperative robots

    Science.gov (United States)

    Little, James J.; Jennings, Cullen; Murray, Don

    1998-10-01

    Two stereo-vision-based mobile robots navigate and autonomously explore their environment safely while building occupancy grid maps of the environment. The robots maintain position estimates within a global coordinate frame using landmark recognition. This allows them to build a common map by sharing position information and stereo data. Stereo vision processing and map updates are done at 3 Hz and the robots move at speeds of 200 cm/s. Cooperative mapping is achieved through autonomous exploration of unstructured and dynamic environments. The map is constructed conservatively, so as to be useful for collision-free path planning. Each robot maintains a separate copy of a shared map, and then posts updates to the common map when it returns to observe a landmark at home base. Issues include synchronization, mutual localization, navigation, exploration, registration of maps, merging repeated views (fusion), centralized vs decentralized maps.
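The conservative occupancy grid maps this abstract describes are usually maintained with a log-odds update per cell. The following sketch shows that update in its standard form; the sensor-model constants are illustrative assumptions, not values from the paper.

```python
import math

# Log-odds occupancy update used in occupancy grid mapping. Each cell stores
# log(p / (1 - p)); a sensor hit adds L_OCC, a miss adds L_FREE. The 0.7/0.3
# inverse sensor model below is an illustrative assumption.

L_OCC = math.log(0.7 / 0.3)
L_FREE = math.log(0.3 / 0.7)

def update_cell(logodds: float, hit: bool) -> float:
    """Fuse one range observation into a cell's log-odds value."""
    return logodds + (L_OCC if hit else L_FREE)

def probability(logodds: float) -> float:
    """Convert log-odds back to an occupancy probability."""
    return 1.0 - 1.0 / (1.0 + math.exp(logodds))

# Three consistent 'hit' observations push a cell well above 0.5.
l = 0.0
for _ in range(3):
    l = update_cell(l, hit=True)
print(round(probability(l), 3))  # 0.927
```

Because the update is a simple addition per cell, two robots can share a map by exchanging and summing log-odds increments, which fits the decentralized-copy scheme described above.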

  16. Vision Based Autonomous Robot Navigation Algorithms and Implementations

    CERN Document Server

    Chatterjee, Amitava; Nirmal Singh, N

    2013-01-01

    This book is devoted to the theory and development of autonomous navigation of mobile robots using computer-vision-based sensing mechanisms. Conventional robot navigation systems, utilizing traditional sensors like ultrasonic, IR, GPS, and laser sensors, suffer from several drawbacks related either to the physical limitations of the sensors or to high cost. Vision sensing has emerged as a popular alternative where cameras can be used to reduce the overall cost while maintaining a high degree of intelligence, flexibility, and robustness. This book includes a detailed description of several new approaches for real-life vision-based autonomous navigation algorithms and SLAM. It presents the concept of how subgoal-based, goal-driven navigation can be carried out using vision sensing. The development concept of vision-based robots for path/line tracking using fuzzy logic is presented, as well as how a low-cost robot can be indigenously developed in the laboratory with microcontroller-based sensor systems. The book descri...

  17. Manifold learning in machine vision and robotics

    Science.gov (United States)

    Bernstein, Alexander

    2017-02-01

    Smart algorithms are used in machine vision and robotics to organize or extract high-level information from the available data. Nowadays, machine learning is an essential and ubiquitous tool for automatically extracting patterns or regularities from data (images in machine vision; camera, laser, and sonar sensor data in robotics) in order to solve various subject-oriented tasks such as understanding and classification of image content, navigation of mobile autonomous robots in uncertain environments, robot manipulation in medical robotics and computer-assisted surgery, and others. Usually such data have high dimensionality; however, due to various dependencies between their components and constraints caused by physical reasons, all "feasible and usable data" occupy only a very small part of the high-dimensional "observation space" and have a smaller intrinsic dimensionality. The generally accepted model for such data is the manifold model, according to which the data lie on or near an unknown manifold (surface) of lower dimensionality embedded in an ambient high-dimensional observation space; as a rule, real-world high-dimensional data obtained from "natural" sources fit this model. The use of manifold learning techniques in machine vision and robotics, which discover a low-dimensional structure in high-dimensional data and result in effective algorithms for solving a large number of subject-oriented tasks, is the subject of the conference plenary speech, some topics of which are presented in this paper.

  18. Remote-controlled vision-guided mobile robot system

    Science.gov (United States)

    Ande, Raymond; Samu, Tayib; Hall, Ernest L.

    1997-09-01

    Automated guided vehicles (AGVs) have many potential applications in manufacturing, medicine, space, and defense. The purpose of this paper is to describe exploratory research on the design of the remote-controlled emergency stop and vision systems for an autonomous mobile robot. The remote control provides human supervision and emergency-stop capability for the autonomous vehicle; the vision guidance provides automatic operation. A mobile robot test-bed has been constructed using a golf cart base. The mobile robot (Bearcat) was built for the Association for Unmanned Vehicle Systems (AUVS) 1997 competition. The mobile robot has full speed control, with guidance provided by a vision system and an obstacle avoidance system using ultrasonic sensors. Vision guidance is accomplished using two CCD cameras with zoom lenses. The vision data are processed by a high-speed tracking device, which communicates to the computer the X, Y coordinates of blobs along the lane markers. The system also has three emergency stop switches and a remote-controlled emergency stop switch that can disable the traction motor and set the brake. Testing of these systems has been done in the lab as well as on an outdoor test track, with positive results showing that at five mph the vehicle can follow a line while avoiding obstacles.

  19. A lightweight, inexpensive robotic system for insect vision.

    Science.gov (United States)

    Sabo, Chelsea; Chisholm, Robert; Petterson, Adam; Cope, Alex

    2017-09-01

    Designing hardware for miniaturized robotics which mimics the capabilities of flying insects is of interest, because they share similar constraints (i.e. small size, low weight, and low energy consumption). Research in this area aims to enable robots with similarly efficient flight and cognitive abilities. Visual processing is important to flying insects' impressive flight capabilities, but currently, embodiment of insect-like visual systems is limited by the hardware available: suitable hardware is either prohibitively expensive, difficult to reproduce, unable to accurately simulate insect vision characteristics, and/or too heavy for small robotic platforms. These limitations hamper the development of platforms for embodiment, which in turn hampers progress in understanding how biological systems fundamentally work. To address this gap, this paper proposes an inexpensive, lightweight robotic system for modelling insect vision. The system is mounted and tested on a robotic platform for mobile applications, and then the camera and insect vision models are evaluated. We analyse the potential of the system for use in embodiment of higher-level visual processes (i.e. motion detection) and also for development of vision-based navigation for robotics in general. Optic flow from sample camera data is calculated and compared to a perfect, simulated bee world, showing an excellent resemblance. Copyright © 2017 The Authors. Published by Elsevier Ltd. All rights reserved.
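The optic-flow comparison mentioned above builds on gradient-based flow estimation. As an illustration only (not the paper's pipeline), a single-patch Lucas-Kanade estimate can be sketched as a least-squares solve of the brightness-constancy constraint:

```python
import numpy as np

# Single-patch Lucas-Kanade optic flow: one global (u, v) for the patch,
# no pyramid, no windowing. Purely an illustrative sketch.

def patch_flow(frame0: np.ndarray, frame1: np.ndarray) -> np.ndarray:
    """Least-squares flow (u, v) from the constraint Ix*u + Iy*v = -It."""
    ix = np.gradient(frame0, axis=1).ravel()   # horizontal image gradient
    iy = np.gradient(frame0, axis=0).ravel()   # vertical image gradient
    it = (frame1 - frame0).ravel()             # temporal derivative
    a = np.stack([ix, iy], axis=1)
    flow, *_ = np.linalg.lstsq(a, -it, rcond=None)
    return flow

# A horizontal brightness ramp shifted one pixel to the right:
# the recovered flow should be close to u = 1, v = 0.
ramp = np.tile(np.linspace(0.0, 1.0, 32), (32, 1))
shifted = ramp - 1.0 / 31.0   # equivalent to a one-pixel rightward shift
u, v = patch_flow(ramp, shifted)
```

Insect-inspired models typically use cheaper correlation-type motion detectors, but the recovered flow field plays the same navigational role.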

  20. New development in robot vision

    CERN Document Server

    Behal, Aman; Chung, Chi-Kit

    2015-01-01

    The field of robotic vision has advanced dramatically recently with the development of new range sensors.  Tremendous progress has been made resulting in significant impact on areas such as robotic navigation, scene/environment understanding, and visual learning. This edited book provides a solid and diversified reference source for some of the most recent important advancements in the field of robotic vision. The book starts with articles that describe new techniques to understand scenes from 2D/3D data such as estimation of planar structures, recognition of multiple objects in the scene using different kinds of features as well as their spatial and semantic relationships, generation of 3D object models, approach to recognize partially occluded objects, etc. Novel techniques are introduced to improve 3D perception accuracy with other sensors such as a gyroscope, positioning accuracy with a visual servoing based alignment strategy for microassembly, and increasing object recognition reliability using related...

  1. A robotic vision system to measure tree traits

    Science.gov (United States)

    The autonomous measurement of tree traits, such as branching structure, branch diameters, branch lengths, and branch angles, is required for tasks such as robotic pruning of trees as well as structural phenotyping. We propose a robotic vision system called the Robotic System for Tree Shape Estimati...

  2. Intelligent Vision System for Door Sensing Mobile Robot

    Directory of Open Access Journals (Sweden)

    Jharna Majumdar

    2012-08-01

    Full Text Available Wheeled mobile robots find numerous applications in indoor, man-made, structured environments. In order to operate effectively, a robot must be capable of sensing its surroundings. Computer vision is one of the prime research areas directed towards achieving these sensing capabilities. In this paper, we present a door-sensing mobile robot capable of navigating in indoor environments. A robust and inexpensive approach to recognition and classification of doors, based on a monocular vision system, helps the mobile robot in decision making. To prove the efficacy of the algorithm, we have designed and developed a 'differentially' driven mobile robot. A wall-following behavior using ultrasonic range sensors is employed by the mobile robot for navigation in corridors. Field Programmable Gate Arrays (FPGAs) have been used for the implementation of the PD controller for wall following and the PID controller to control the speed of the geared DC motor.
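The PD wall-follower and PID speed loop named in the abstract share one discrete control law. A minimal software sketch follows (a PD controller is the same loop with ki = 0); the gains and the first-order motor model are illustrative assumptions, not values from the paper.

```python
# Discrete PID control loop; gains and the toy motor model are illustrative.

class PID:
    def __init__(self, kp: float, ki: float, kd: float, dt: float):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def step(self, setpoint: float, measured: float) -> float:
        """One control update; returns the actuator command."""
        error = setpoint - measured
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Drive a toy first-order motor model toward a 100 rpm setpoint.
pid = PID(kp=0.8, ki=0.5, kd=0.05, dt=0.02)
speed = 0.0
for _ in range(1000):
    command = pid.step(100.0, speed)
    speed += (command - 0.1 * speed) * 0.02   # toy motor dynamics
```

On an FPGA the same arithmetic is typically implemented in fixed point, with the integral and derivative terms pipelined, but the control law is unchanged.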

  3. International Conference on Computational Vision and Robotics

    CERN Document Server

    2015-01-01

    Computer vision and robotics are among the most challenging areas of the 21st century. Their applications range from agriculture to medicine, household applications to humanoids, deep-sea applications to space applications, and industry applications to man-less plants. Today's technologies demand the production of intelligent machines, which enable applications in various domains and services. Robotics is one such area: it encompasses a number of technologies, and its applications are widespread. Computational vision, or machine vision, is one of the most challenging tools for making a robot intelligent.   This volume covers chapters from various areas of computational vision such as Image and Video Coding and Analysis, Image Watermarking, Noise Reduction and Cancellation, Block Matching and Motion Estimation, Tracking of Deformable Objects using Steerable Pyramid Wavelet Transformation, Medical Image Fusion, and CT and MRI Image Fusion based on the Stationary Wavelet Transform. The book also covers articles from applicati...

  4. Ping-Pong Robotics with High-Speed Vision System

    DEFF Research Database (Denmark)

    Li, Hailing; Wu, Haiyan; Lou, Lei

    2012-01-01

    The performance of vision-based control is usually limited by the low sampling rate of the visual feedback. We address Ping-Pong robotics as a widely studied example which requires high-speed vision for highly dynamic motion control. In order to detect a flying ball accurately and robustly...... of the manipulator are updated iteratively with decreasing error. Experiments are conducted on a 7-degrees-of-freedom humanoid robot arm. Successful Ping-Pong play between the robot arm and a human is achieved with a high success rate of 88%....

  5. Active Vision for Sociable Robots

    National Research Council Canada - National Science Library

    Breazeal, Cynthia; Edsinger, Aaron; Fitzpatrick, Paul; Scassellati, Brian

    2001-01-01

    .... In humanoid robotic systems, or in any animate vision system that interacts with people, social dynamics provide additional levels of constraint and provide additional opportunities for processing economy...

  6. A Novel Bioinspired Vision System: A Step toward Real-Time Human-Robot Interactions

    Directory of Open Access Journals (Sweden)

    Abdul Rahman Hafiz

    2011-01-01

    Full Text Available Building a human-like robot that could be involved in our daily lives is a dream of many scientists. Achieving a sophisticated robot vision system, one that can enhance the robot's ability to interact with humans in real time, is one of the main keys toward realizing such an autonomous robot. In this work, we suggest a bioinspired vision system that helps to develop advanced human-robot interaction in an autonomous humanoid robot. First, we enhance the robot's vision accuracy online by applying a novel dynamic edge detection algorithm abstracted from the role that horizontal cells play in the mammalian retina. Second, in order to support the first algorithm, we improve the robot's tracking ability by designing a variant photoreceptor distribution corresponding to that found in the human vision system. The experimental results verified the validity of the model. The robot could achieve clear vision in real time and build a mental map that assisted it in being aware of users in front of it and developing positive interaction with them.

  7. Vision Based Tracker for Dart-Catching Robot

    OpenAIRE

    Linderoth, Magnus; Robertsson, Anders; Åström, Karl; Johansson, Rolf

    2009-01-01

    This paper describes how high-speed computer vision can be used in a motion control application. The specific application investigated is a dart catching robot. Computer vision is used to detect a flying dart and a filtering algorithm predicts its future trajectory. This will give data to a robot controller allowing it to catch the dart. The performance of the implemented components indicates that the dart catching application can be made to work well. Conclusions are also made about what fea...

  8. A remote assessment system with a vision robot and wearable sensors.

    Science.gov (United States)

    Zhang, Tong; Wang, Jue; Ren, Yumiao; Li, Jianjun

    2004-01-01

    This paper describes a remote rehabilitation assessment system under ongoing research, which has a six-degree-of-freedom, two-camera ('double-eyes') vision robot to capture visual information, and a group of wearable sensors to acquire biomechanical signals. A server computer fixed on the robot provides services to the robot's controller and all the sensors. The robot is connected to the Internet by a wireless channel, as are the sensors to the robot. Rehabilitation professionals can semi-automatically conduct an assessment program via the Internet. The preliminary results show that the smart devices, including the robot and the sensors, can improve the quality of remote assessment and reduce the complexity of operation at a distance.

  9. System and method for controlling a vision guided robot assembly

    Energy Technology Data Exchange (ETDEWEB)

    Lin, Yhu-Tin; Daro, Timothy; Abell, Jeffrey A.; Turner, III, Raymond D.; Casoli, Daniel J.

    2017-03-07

    A method includes the following steps: actuating a robotic arm to perform an action at a start position; moving the robotic arm from the start position toward a first position; determining from a vision process method if a first part from the first position will be ready to be subjected to a first action by the robotic arm once the robotic arm reaches the first position; commencing the execution of the vision process method for determining the position deviation of the second part from the second position and the readiness of the second part to be subjected to a second action by the robotic arm once the robotic arm reaches the second position; and performing the first action on the first part using the robotic arm, with the position deviation of the first part from the first position predetermined by the vision process method.

  10. Robotic Arm Control Algorithm Based on Stereo Vision Using RoboRealm Vision

    Directory of Open Access Journals (Sweden)

    SZABO, R.

    2015-05-01

    Full Text Available The goal of this paper is to present a stereo computer vision algorithm intended to control a robotic arm. Specific points on the robot's joints are marked and recognized in the software. Using a dedicated set of mathematical equations, the movement of the robot is continuously computed and monitored with webcams. The positioning error is finally analyzed.

  11. Laws on Robots, Laws by Robots, Laws in Robots : Regulating Robot Behaviour by Design

    NARCIS (Netherlands)

    Leenes, R.E.; Lucivero, F.

    2015-01-01

    Speculation about robot morality is almost as old as the concept of a robot itself. Asimov’s three laws of robotics provide an early and well-discussed example of moral rules robots should observe. Despite the widespread influence of the three laws of robotics and their role in shaping visions of

  12. Computer vision for an autonomous mobile robot

    CSIR Research Space (South Africa)

    Withey, Daniel J

    2015-10-01

    Full Text Available Computer vision systems are essential for practical, autonomous, mobile robots – machines that employ artificial intelligence and control their own motion within an environment. As with biological systems, computer vision systems include the vision...

  13. Control of multiple robots using vision sensors

    CERN Document Server

    Aranda, Miguel; Sagüés, Carlos

    2017-01-01

    This monograph introduces novel methods for the control and navigation of mobile robots using multiple-1-d-view models obtained from omni-directional cameras. This approach overcomes field-of-view and robustness limitations, simultaneously enhancing accuracy and simplifying application on real platforms. The authors also address coordinated motion tasks for multiple robots, exploring different system architectures, particularly the use of multiple aerial cameras in driving robot formations on the ground. Again, this has benefits of simplicity, scalability and flexibility. Coverage includes details of: a method for visual robot homing based on a memory of omni-directional images a novel vision-based pose stabilization methodology for non-holonomic ground robots based on sinusoidal-varying control inputs an algorithm to recover a generic motion between two 1-d views and which does not require a third view a novel multi-robot setup where multiple camera-carrying unmanned aerial vehicles are used to observe and c...

  14. A real time tracking vision system and its application to robotics

    International Nuclear Information System (INIS)

    Inoue, Hirochika

    1994-01-01

    Among the various sensing channels, vision is the most important for making a robot intelligent. If provided with a high-speed visual tracking capability, the robot-environment interaction becomes dynamic instead of static, and thus the potential repertoire of robot behavior becomes very rich. For this purpose we developed a real-time tracking vision system. The fundamental operation on which our system is based is the calculation of correlation between local images. The use of a special correlation chip and a multi-processor configuration enables the robot to track hundreds of cues at full video rate. In addition to the fundamental visual performance, applications for robot behavior control are also introduced. (author)

  15. Development of Vision Control Scheme of Extended Kalman filtering for Robot's Position Control

    International Nuclear Information System (INIS)

    Jang, W. S.; Kim, K. S.; Park, S. I.; Kim, K. Y.

    2003-01-01

    It is very important to reduce the computational time in estimating the parameters of a vision control algorithm for robot position control in real time. Unfortunately, the batch estimation commonly used requires too much computational time because it is an iterative method, so it is ill-suited to real-time robot position control. On the other hand, Extended Kalman Filtering (EKF) has many advantages for calculating the parameters of a vision system, being a simple and efficient recursive procedure. Thus, this study develops an EKF algorithm for real-time robot vision control. The vision system model used in this study involves six parameters accounting for the inner (orientation, focal length, etc.) and outer (the relative location between robot and camera) parameters of the camera. EKF is first applied to estimate these parameters, and then, with these estimated parameters, to estimate the robot's joint angles used for robot operation. Finally, the practicality of the vision control scheme based on the EKF has been experimentally verified by performing robot position control.
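
    The recursive predict/update structure that gives the EKF its speed advantage over batch estimation can be sketched generically. The toy constant-parameter model below is illustrative only; it is not the authors' six-parameter camera model.

```python
import numpy as np

# Minimal EKF sketch (illustrative only): each new measurement refines the
# estimate in constant time, unlike batch least squares, which re-iterates
# over all accumulated data.
def ekf_step(x, P, z, f, F, h, H, Q, R):
    """One predict/update cycle of an extended Kalman filter."""
    # Predict: propagate state and covariance through the motion model f
    x_pred = f(x)
    P_pred = F @ P @ F.T + Q
    # Update: correct with the measurement z via the observation model h
    y = z - h(x_pred)                          # innovation
    S = H @ P_pred @ H.T + R                   # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)        # Kalman gain
    x_new = x_pred + K @ y
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new

# Toy usage: estimate a constant scalar parameter from noisy measurements.
x = np.array([0.0]); P = np.eye(1)
F = H = np.eye(1); Q = np.zeros((1, 1)); R = np.eye(1) * 0.1
for z in [1.1, 0.9, 1.05, 0.95]:
    x, P = ekf_step(x, P, np.array([z]), lambda s: s, F, lambda s: s, H, Q, R)
```

    Each pass costs a fixed amount of work regardless of how many measurements have come before, which is the property the abstract exploits for real-time operation.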

  16. A Fast Vision System for Soccer Robot

    Directory of Open Access Journals (Sweden)

    Tianwu Yang

    2012-01-01

    Full Text Available This paper proposes fast colour-based object recognition and localization for soccer robots. The traditional HSL colour model is modified for better colour segmentation and edge detection in a colour-coded environment. The object recognition is based on only the edge pixels, to speed up the computation. The edge pixels are detected by intelligently scanning a small part of the whole image's pixels, distributed over the image. A fast method for line and circle-centre detection is also discussed. For object localization, 26 key points are defined on the soccer field. When two or more key points can be seen from the robot camera view, the three rotation angles are adjusted to achieve a precise localization of the robot and other objects. If no key point is detected, the robot position is estimated according to the history of robot movement and the feedback from the motors and sensors. Experiments on NAO and RoboErectus teen-size humanoid robots show that the proposed vision system is robust and accurate under different lighting conditions and can effectively and precisely locate the robot and other objects.
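
    The partial-image scanning idea can be sketched as follows; the stride, hue band, and toy image are invented for illustration and are not the paper's actual parameters.

```python
import numpy as np

# Sparse colour-scan sketch (all thresholds are placeholders): visit only
# every `stride`-th pixel and keep those whose hue falls inside a
# colour-coded band, approximating the paper's partial-image scanning.
def sparse_colour_mask(hue, lo, hi, stride=2):
    """Return (row, col) coordinates of sampled pixels with hue in [lo, hi]."""
    rows, cols = hue.shape
    hits = []
    for r in range(0, rows, stride):           # scan a sparse pixel lattice
        for c in range(0, cols, stride):
            if lo <= hue[r, c] <= hi:
                hits.append((r, c))
    return hits

# Toy hue image: a "ball" patch of hue 30 on a hue-0 background.
img = np.zeros((16, 16))
img[4:8, 4:8] = 30
hits = sparse_colour_mask(img, 25, 35, stride=2)
```

    Scanning a lattice of pixels rather than the full image trades a small loss of boundary precision for a large constant-factor speedup, which matches the abstract's motivation.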

  17. 3D vision system for intelligent milking robot automation

    Science.gov (United States)

    Akhloufi, M. A.

    2013-12-01

    In a milking robot, the correct localization and positioning of the milking teat cups is of very high importance. Milking robot technology has not changed in a decade and is based primarily on laser profiles for estimating approximate teat positions. This technology has reached its limit and does not allow optimal positioning of the milking cups. Also, in the presence of occlusions, the milking robot fails to milk the cow. These problems have economic consequences for producers and for animal health (e.g. development of mastitis). To overcome the limitations of current robots, we have developed a new system based on 3D vision, capable of efficiently positioning the milking cups. A prototype of an intelligent robot system based on 3D vision for real-time positioning of a milking robot has been built and tested under various conditions on a synthetic udder model (in static and moving scenarios). Experimental tests were performed using 3D Time-Of-Flight (TOF) and RGBD cameras. The proposed algorithms permit the online segmentation of teats by combining 2D and 3D visual information. The obtained results permit computation of the 3D teat position, which is then sent to the milking robot for teat-cup positioning. The vision system has real-time performance and monitors the optimal positioning of the cups even in the presence of motion. The obtained results, with both TOF and RGBD cameras, show the good performance of the proposed system. The best performance was obtained with RGBD cameras; this latter technology will be used in future real-life experimental tests.

  18. Vision Assisted Laser Scanner Navigation for Autonomous Robots

    DEFF Research Database (Denmark)

    Andersen, Jens Christian; Andersen, Nils Axel; Ravn, Ole

    2008-01-01

    This paper describes a navigation method based on road detection using both a laser scanner and a vision sensor. The method is to classify the surface in front of the robot into traversable segments (road) and obstacles using the laser scanner, this classifies the area just in front of the robot ...

  19. Vision Guided Intelligent Robot Design And Experiments

    Science.gov (United States)

    Slutzky, G. D.; Hall, E. L.

    1988-02-01

    The concept of an intelligent robot is an important topic combining sensors, manipulators, and artificial intelligence to design a useful machine. Vision systems, tactile sensors, proximity switches and other sensors provide the elements necessary for simple game playing as well as industrial applications. These sensors permit adaptation to a changing environment. The AI techniques permit advanced forms of decision making, adaptive responses, and learning, while the manipulator provides the ability to perform various tasks. Computer languages such as LISP and OPS5 have been utilized to achieve expert-systems approaches to solving real-world problems. The purpose of this paper is to describe several examples of visually guided intelligent robots, including both stationary and mobile robots. Demonstrations will be presented of a system for constructing and solving a popular peg game, a robot lawn mower, and a box-stacking robot. The experience gained from these and other systems provides insight into what may be realistically expected from the next generation of intelligent machines.

  20. A novel method of robot location using RFID and stereo vision

    Science.gov (United States)

    Chen, Diansheng; Zhang, Guanxin; Li, Zhen

    2012-04-01

    This paper proposes a new global localization method for a mobile robot based on RFID (Radio Frequency Identification Devices) and stereo vision, which lets the robot obtain global coordinates with good accuracy while quickly adapting to an unfamiliar, new environment. The method uses RFID tags as artificial landmarks: the 3D coordinates of each tag in the global coordinate system are written in its IC memory, and the robot reads them through an RFID reader. Meanwhile, using stereo vision, the 3D coordinates of the tag in the robot coordinate system are measured. Combined with the robot's attitude transformation matrix from the pose-measuring system, the translation from the robot coordinate system to the global coordinate system is obtained, which is also the coordinate of the robot's current location in the global coordinate system. The average error of our method is 0.11 m in experiments conducted in a 7 m × 7 m lobby, a result much more accurate than other localization methods.
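
    The localization step the abstract describes reduces to one rigid-transform equation. The numbers below are invented for illustration; only the algebra reflects the method.

```python
import numpy as np

# Sketch of the tag-based localization step: the tag's global position is
# read from its memory, its position in the robot frame is measured by
# stereo vision, and the attitude rotation R comes from the pose-measuring
# system. The robot's global position then follows directly.
def robot_global_position(tag_global, tag_in_robot, R):
    """Solve tag_global = R @ tag_in_robot + t for the robot origin t."""
    return tag_global - R @ tag_in_robot

# Toy case: robot rotated 90 degrees about z, tag 1 m ahead along its x axis.
R = np.array([[0.0, -1.0, 0.0],
              [1.0,  0.0, 0.0],
              [0.0,  0.0, 1.0]])
tag_in_robot = np.array([1.0, 0.0, 0.0])
tag_global = np.array([3.0, 4.0, 0.0])     # read from the tag's IC memory
t = robot_global_position(tag_global, tag_in_robot, R)   # robot at (3, 3, 0)
```

    A single tag suffices here only because the attitude comes from a separate pose sensor, as the abstract states; without it, multiple tags would be needed to recover the rotation as well.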

  1. A Vision-Based Wireless Charging System for Robot Trophallaxis

    Directory of Open Access Journals (Sweden)

    Jae-O Kim

    2015-12-01

    Full Text Available The need to recharge the batteries of a mobile robot has presented an important challenge for a long time. In this paper, a vision-based wireless charging method for robot energy trophallaxis between two robots is presented. Even though wireless power transmission allows more positional error between receiver-transmitter coils than with a contact-type charging system, both coils have to be aligned as accurately as possible for efficient power transfer. To align the coils, a transmitter robot recognizes the coarse pose of a receiver robot via a camera image and the ambiguity of the estimated pose is removed with a Bayesian estimator. The precise pose of the receiver coil is calculated using a marker image attached to a receiver robot. Experiments with several types of receiver robots have been conducted to verify the proposed method.

  2. Design And Implementation Of Integrated Vision-Based Robotic Workcells

    Science.gov (United States)

    Chen, Michael J.

    1985-01-01

    Reports have been sparse on large-scale, intelligent integration of complete robotic systems for automating the microelectronics industry. This paper describes the application of state-of-the-art computer-vision technology for manufacturing of miniaturized electronic components. The concepts of FMS - Flexible Manufacturing Systems, work cells, and work stations and their control hierarchy are illustrated in this paper. Several computer-controlled work cells used in the production of thin-film magnetic heads are described. These cells use vision for in-process control of head-fixture alignment and real-time inspection of production parameters. The vision sensor and other optoelectronic sensors, coupled with transport mechanisms such as steppers, x-y-z tables, and robots, have created complete sensorimotor systems. These systems greatly increase the manufacturing throughput as well as the quality of the final product. This paper uses these automated work cells as examples to exemplify the underlying design philosophy and principles in the fabrication of vision-based robotic systems.

  3. Robot path planning using expert systems and machine vision

    Science.gov (United States)

    Malone, Denis E.; Friedrich, Werner E.

    1992-02-01

    This paper describes a system developed for the robotic processing of naturally variable products. In order to plan the robot motion path it was necessary to use a sensor system, in this case a machine vision system, to observe the variations occurring in workpieces and interpret this with a knowledge based expert system. The knowledge base was acquired by carrying out an in-depth study of the product using examination procedures not available in the robotic workplace and relates the nature of the required path to the information obtainable from the machine vision system. The practical application of this system to the processing of fish fillets is described and used to illustrate the techniques.

  4. 3D vision upgrade kit for TALON robot

    Science.gov (United States)

    Edmondson, Richard; Vaden, Justin; Hyatt, Brian; Morris, James; Pezzaniti, J. Larry; Chenault, David B.; Tchon, Joe; Barnidge, Tracy; Kaufman, Seth; Pettijohn, Brad

    2010-04-01

    In this paper, we report on the development of a 3D vision field upgrade kit for the TALON robot, consisting of a replacement flat-panel stereoscopic display and multiple stereo camera systems. An assessment of the system's use for robotic driving, manipulation, and surveillance operations was conducted. The 3D vision system was integrated onto a TALON IV robot and Operator Control Unit (OCU) such that stock components could be electrically disconnected and removed, and upgrade components coupled directly to the mounting and electrical connections. A replacement display, a replacement mast camera with zoom, auto-focus, and variable convergence, and a replacement gripper camera with fixed focus and zoom comprise the upgrade kit. The stereo mast camera allows for improved driving and situational awareness as well as scene survey. The stereo gripper camera allows for improved manipulation in typical TALON missions.

  5. An assembly system based on industrial robot with binocular stereo vision

    Science.gov (United States)

    Tang, Hong; Xiao, Nanfeng

    2017-01-01

    This paper proposes an electronic part and component assembly system based on an industrial robot with binocular stereo vision. Firstly, binocular stereo vision with a visual attention mechanism model is used to quickly find the image regions which contain the electronic parts and components. Secondly, a deep neural network is adopted to recognize the features of the electronic parts and components. Thirdly, in order to control the end-effector of the industrial robot to grasp the electronic parts and components, a genetic algorithm (GA) is proposed to compute the transition matrix and the inverse kinematics of the industrial robot (end-effector), which plays a key role in bridging the binocular stereo vision and the industrial robot. Finally, the proposed assembly system is tested in LED component assembly experiments, and the results show that it has high efficiency and good applicability.

  6. Vision-Based Interfaces Applied to Assistive Robots

    Directory of Open Access Journals (Sweden)

    Elisa Perez

    2013-02-01

    Full Text Available This paper presents two vision-based interfaces for disabled people to command a mobile robot for personal assistance. The developed interfaces can be subdivided according to the algorithm of image processing implemented for the detection and tracking of two different body regions. The first interface detects and tracks movements of the user's head, and these movements are transformed into linear and angular velocities in order to command a mobile robot. The second interface detects and tracks movements of the user's hand, and these movements are similarly transformed. In addition, this paper also presents the control laws for the robot. The experimental results demonstrate good performance and balance between complexity and feasibility for real-time applications.

  7. Vision Based Autonomous Robotic Control for Advanced Inspection and Repair

    Science.gov (United States)

    Wehner, Walter S.

    2014-01-01

    The advanced inspection system is an autonomous control and analysis system that improves the inspection and remediation operations for ground and surface systems. It uses optical imaging technology with intelligent computer vision algorithms to analyze physical features of the real-world environment to make decisions and learn from experience. The advanced inspection system plans to control a robotic manipulator arm, an unmanned ground vehicle and cameras remotely, automatically and autonomously. There are many computer vision, image processing and machine learning techniques available as open source for using vision as a sensory feedback in decision-making and autonomous robotic movement. My responsibilities for the advanced inspection system are to create a software architecture that integrates and provides a framework for all the different subsystem components; identify open-source algorithms and techniques; and integrate robot hardware.

  8. A Framework for Obstacles Avoidance of Humanoid Robot Using Stereo Vision

    Directory of Open Access Journals (Sweden)

    Widodo Budiharto

    2013-04-01

    Full Text Available In this paper, we propose a framework for a multiple-moving-obstacle avoidance strategy using stereo vision for a humanoid robot in an indoor environment. We assume that this humanoid robot is used as a service robot to deliver a cup to a customer from a starting point to a destination point. We have successfully developed and introduced three main modules: to recognize faces, to identify multiple moving obstacles, and to initiate a maneuver. A group of people who are walking is tracked as multiple moving obstacles. A predefined maneuver to avoid obstacles is applied to the robot because of the limited view angle of the stereo camera for detecting multiple obstacles. The contribution of this research is a new method for a multiple-moving-obstacle avoidance strategy with a Bayesian approach using stereo vision, based on the direction and speed of the obstacles. Depth estimation is used to calculate the distance between the obstacles and the robot. We present the results of experiments on the humanoid robot called Gatotkoco II, which uses our proposed method, and evaluate its performance. The proposed moving-obstacle avoidance strategy was tested empirically and proved effective for the humanoid robot.

  9. Experiments on mobile robot stereo vision system calibration under hardware imperfection

    Directory of Open Access Journals (Sweden)

    Safin Ramil

    2018-01-01

    Full Text Available Calibration is essential for any robot vision system to achieve high accuracy in deriving metric information about objects. One typical requirement for a stereo vision system to obtain good calibration results is to guarantee that both cameras keep the same vertical level. However, the cameras may be displaced due to the harsh conditions in which a robot operates, or other circumstances. This paper presents our experimental approach to the problem of calibrating a mobile robot stereo vision system under hardware imperfection. In our experiments, we used the crawler-type mobile robot «Servosila Engineer». The robot's stereo cameras were displaced relative to each other, causing loss of surrounding-environment information. We implemented and verified checkerboard-based and circle-grid-based calibration methods. The comparison of the two methods demonstrated that circle-grid-based calibration should be preferred over the classical checkerboard calibration approach.

  10. Physics Based Vision Systems for Robotic Manipulation

    Data.gov (United States)

    National Aeronautics and Space Administration — With the increase of robotic manipulation tasks (TA4.3), specifically dexterous manipulation tasks (TA4.3.2), more advanced computer vision algorithms will be...

  11. Multiple Moving Obstacles Avoidance of Service Robot using Stereo Vision

    Directory of Open Access Journals (Sweden)

    Achmad Jazidie

    2011-12-01

    Full Text Available In this paper, we propose multiple-moving-obstacle avoidance using stereo vision for service robots in indoor environments. We assume that this service robot is used to deliver a cup to a recognized customer from a starting point to a destination. The contribution of this research is a new method for multiple-moving-obstacle avoidance with a Bayesian approach using a stereo camera. We have developed and introduced three main modules: to recognize faces, to identify multiple moving obstacles, and to maneuver the robot. A group of people who are walking is tracked as a multiple moving obstacle, and the speed, direction, and distance of the moving obstacles are estimated by a stereo camera so that the robot can maneuver to avoid collision. To overcome the inaccuracies of the vision sensor, a Bayesian approach is used to estimate the presence and direction of obstacles. We present the results of experiments on the service robot called Srikandi III, which uses our proposed method, and we also evaluate its performance. Experiments showed that our proposed method works well, and the Bayesian approach proved to improve the estimation performance for the presence and direction of moving obstacles.
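
    The Bayesian handling of an unreliable vision sensor can be sketched as a recursive update of the belief that an obstacle is present; the detection and false-alarm rates below are placeholder assumptions, not Srikandi III's measured values.

```python
# Illustrative Bayes update for obstacle presence (sensor rates are made-up
# placeholders): each stereo detection result updates the belief that a
# moving obstacle is actually there, smoothing over misses and false alarms.
def bayes_update(prior, detected, p_hit=0.8, p_false=0.2):
    """Posterior P(obstacle | observation) after one detection result."""
    if detected:
        num = p_hit * prior
        den = p_hit * prior + p_false * (1.0 - prior)
    else:
        num = (1.0 - p_hit) * prior
        den = (1.0 - p_hit) * prior + (1.0 - p_false) * (1.0 - prior)
    return num / den

belief = 0.5                                   # start undecided
for obs in [True, True, False, True]:          # noisy detection sequence
    belief = bayes_update(belief, obs)
```

    A single missed detection lowers the belief only partially, so the robot does not discard a tracked obstacle on one bad frame, which is the robustness the abstract attributes to the Bayesian approach.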

  12. Monocular Vision-Based Robot Localization and Target Tracking

    Directory of Open Access Journals (Sweden)

    Bing-Fei Wu

    2011-01-01

    Full Text Available This paper presents a vision-based technology for localizing targets in a 3D environment, achieved by combining different types of sensors: optical wheel encoders, an electronic compass, and visual observations with a single camera. Based on the robot motion model and image sequences, an extended Kalman filter is applied to estimate target locations and the robot pose simultaneously. The proposed localization system is applicable in practice because it does not require an initialization step such as starting the system from artificial landmarks of known size. The technique is especially suitable for navigation and target tracking for an indoor robot and has high potential for extension to surveillance and monitoring for Unmanned Aerial Vehicles with aerial odometry sensors. The experimental results show cm-level accuracy in localizing targets in an indoor environment under high-speed robot movement.

  13. Robotics Vision-based Heuristic Reasoning for Underwater Target Tracking and Navigation

    OpenAIRE

    Kia, Chua; Arshad, Mohd Rizal

    2006-01-01

    This paper presents a robotics vision-based heuristic reasoning system for underwater target tracking and navigation. This system is introduced to improve the level of automation of underwater Remote Operated Vehicles (ROVs) operations. A prototype which combines computer vision with an underwater robotics system is successfully designed and developed to perform target tracking and intelligent navigation. This study focuses on developing image processing algorithms and fuzzy inference system ...

  14. Compensation for positioning error of industrial robot for flexible vision measuring system

    Science.gov (United States)

    Guo, Lei; Liang, Yajun; Song, Jincheng; Sun, Zengyu; Zhu, Jigui

    2013-01-01

    Positioning error of the robot is a main factor in the accuracy of a flexible coordinate measuring system consisting of a universal industrial robot and a visual sensor. Present compensation methods for positioning error, based on the kinematic model of the robot, have a significant limitation: they are not effective over the whole measuring space. A new compensation method for robot positioning error based on vision measuring techniques is presented. One approach is to set global control points in the measured field and attach an orientation camera to the vision sensor; the global control points are measured by the orientation camera to calculate the transformation from the current position of the sensor system to the global coordinate system, and the positioning error of the robot is compensated. Another approach is to set control points on the vision sensor and place two large-field cameras behind the sensor; the three-dimensional coordinates of the control points are measured, and the pose and position of the sensor are calculated in real time. Experimental results show that the RMS of spatial positioning is 3.422 mm with a single camera and 0.031 mm with dual cameras. The conclusion is that the single-camera method's algorithm needs to be improved for higher accuracy, while the accuracy of the dual-camera method is applicable.
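
    The control-point idea amounts to fitting a rigid transform between point sets known in two frames. The sketch below uses a standard Kabsch/SVD fit on invented points; it is not the authors' implementation.

```python
import numpy as np

# Control-point sketch (point sets are invented for illustration): points
# with known global coordinates are also measured in the sensor frame, and
# the best-fit rigid transform between the two sets re-registers the sensor,
# compensating the robot's positioning error.
def fit_rigid_transform(P_sensor, P_global):
    """Kabsch-style least-squares rotation R and translation t mapping
    sensor-frame points onto their global coordinates."""
    cs, cg = P_sensor.mean(axis=0), P_global.mean(axis=0)
    H = (P_sensor - cs).T @ (P_global - cg)    # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # no reflection
    R = Vt.T @ D @ U.T
    return R, cg - R @ cs

# Global control points, and the same points seen by a sensor shifted by (1,2,0).
P_global = np.array([[0., 0., 0.], [1., 0., 0.], [0., 1., 0.], [0., 0., 1.]])
P_sensor = P_global - np.array([1.0, 2.0, 0.0])
R, t = fit_rigid_transform(P_sensor, P_global)   # recovers the sensor offset
```

    Because the transform is re-estimated from measurements at the current pose, the correction holds anywhere the control points are visible, avoiding the whole-workspace limitation of kinematic-model compensation.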

  15. Robot vision system R and D for ITER blanket remote-handling system

    International Nuclear Information System (INIS)

    Maruyama, Takahito; Aburadani, Atsushi; Takeda, Nobukazu; Kakudate, Satoshi; Nakahira, Masataka; Tesini, Alessandro

    2014-01-01

    For regular maintenance of the International Thermonuclear Experimental Reactor (ITER), a system called the ITER blanket remote-handling system is necessary to remotely handle the blanket modules because of the high levels of gamma radiation. Modules will be handled by robotic power manipulators and they must have a non-contact-sensing system for installing and grasping to avoid contact with other modules. A robot vision system that uses cameras was adopted for this non-contact-sensing system. Experiments for grasping modules were carried out in a dark room to simulate the environment inside the vacuum vessel and the robot vision system's measurement errors were studied. As a result, the accuracy of the manipulator's movements was within 2.01 mm and 0.31°, which satisfies the system requirements. Therefore, it was concluded that this robot vision system is suitable for the non-contact-sensing system of the ITER blanket remote-handling system

  16. Robot vision system R and D for ITER blanket remote-handling system

    Energy Technology Data Exchange (ETDEWEB)

    Maruyama, Takahito, E-mail: maruyama.takahito@jaea.go.jp [Japan Atomic Energy Agency, Fusion Research and Development Directorate, Naka, Ibaraki-ken 311-0193 (Japan); Aburadani, Atsushi; Takeda, Nobukazu; Kakudate, Satoshi; Nakahira, Masataka [Japan Atomic Energy Agency, Fusion Research and Development Directorate, Naka, Ibaraki-ken 311-0193 (Japan); Tesini, Alessandro [ITER Organization, Route de Vinon sur Verdon, 13115 St Paul Lez Durance (France)

    2014-10-15

    For regular maintenance of the International Thermonuclear Experimental Reactor (ITER), a system called the ITER blanket remote-handling system is necessary to remotely handle the blanket modules because of the high levels of gamma radiation. Modules will be handled by robotic power manipulators and they must have a non-contact-sensing system for installing and grasping to avoid contact with other modules. A robot vision system that uses cameras was adopted for this non-contact-sensing system. Experiments for grasping modules were carried out in a dark room to simulate the environment inside the vacuum vessel and the robot vision system's measurement errors were studied. As a result, the accuracy of the manipulator's movements was within 2.01 mm and 0.31°, which satisfies the system requirements. Therefore, it was concluded that this robot vision system is suitable for the non-contact-sensing system of the ITER blanket remote-handling system.

  17. ROBERT autonomous navigation robot with artificial vision

    International Nuclear Information System (INIS)

    Cipollini, A.; Meo, G.B.; Nanni, V.; Rossi, L.; Taraglio, S.; Ferjancic, C.

    1993-01-01

    This work, a joint research project between ENEA (the Italian National Agency for Energy, New Technologies and the Environment) and DIGITAL, presents the layout of the ROBERT project, ROBot with Environmental Recognizing Tools, under development in ENEA laboratories. The project aims at the development of an autonomous mobile vehicle able to navigate in a known indoor environment through the use of artificial vision. The general architecture of the robot is shown, together with the data and control flow among the various subsystems; the inner structure of the subsystems, complete with their functionalities, is also given in detail.

  18. Beyond Speculative Robot Ethics

    NARCIS (Netherlands)

    Smits, M.; Van der Plas, A.

    2010-01-01

    In this article we develop a dialogue model for robot technology experts and designated users to discuss visions of the future of robotics in long-term care. Our vision assessment study aims for more distinguished and more informed visions of future robots. Surprisingly, our experiment also led to

  19. Utilizing Robot Operating System (ROS) in Robot Vision and Control

    Science.gov (United States)

    2015-09-01

    Thesis by Joshua S. Lum, September 2015. Thesis Advisor: Xiaoping Yun; Co-Advisor: Zac Staples.

  20. Stereo-vision and 3D reconstruction for nuclear mobile robots

    International Nuclear Information System (INIS)

    Lecoeur-Taibi, I.; Vacherand, F.; Rivallin, P.

    1991-01-01

    In order to perceive the geometric structure of the environment surrounding a mobile robot, a 3D reconstruction system has been developed. Its main purpose is to provide geometric information to an operator who has to telepilot the vehicle in a nuclear power plant. The perception system is split into two parts: vision and map building. Vision is enhanced with a fusion process that rejects bad samples over space and time. The vision is based on trinocular stereo vision, which provides a range image of the image contours. It performs line-contour correlation on horizontal image pairs and vertical image pairs; the results are then spatially fused in order to obtain one distance image, with a quality independent of the orientation of the contour. The 3D reconstruction is based on grid-based sensor fusion: as the robot moves and perceives its environment, distance data is accumulated onto a regular square grid, taking into account the uncertainty of the sensor through a statistical sensor measurement model. This approach allows both spatial and temporal fusion. Uncertainty due to sensor position and robot position is also integrated into the absolute local map. The system is modular and generic and can integrate a 2D laser range finder and active vision. (author)
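
    Grid-based sensor fusion of the kind described above is commonly realized as a log-odds occupancy grid; the cell weights below are illustrative assumptions, not the system's calibrated sensor model.

```python
import numpy as np

# Occupancy-grid fusion sketch (weights are placeholders): range returns are
# accumulated on a regular grid in log-odds form, so repeated observations of
# the same cell fuse over space and time as the robot moves.
L_OCC, L_FREE = 0.9, -0.4                      # assumed sensor log-odds weights

def integrate_scan(grid, occupied_cells, free_cells):
    """Update a log-odds occupancy grid with one classified range scan."""
    for r, c in occupied_cells:
        grid[r, c] += L_OCC
    for r, c in free_cells:
        grid[r, c] += L_FREE
    return grid

def probability(grid):
    """Convert log-odds back to occupancy probability."""
    return 1.0 / (1.0 + np.exp(-grid))

grid = np.zeros((5, 5))                        # 5 x 5 map, unknown everywhere
for _ in range(3):                             # same wall cell seen three times
    integrate_scan(grid, occupied_cells=[(2, 3)], free_cells=[(2, 1)])
p = probability(grid)
```

    Additive log-odds updates make both the temporal fusion (repeated scans) and the spatial fusion (multiple sensors writing to one grid) a simple sum per cell.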

  1. Applications of AI, machine vision and robotics

    CERN Document Server

    Boyer, Kim; Bunke, H

    1995-01-01

    This text features a broad array of research efforts in computer vision including low level processing, perceptual organization, object recognition and active vision. The volume's nine papers specifically report on topics such as sensor confidence, low level feature extraction schemes, non-parametric multi-scale curve smoothing, integration of geometric and non-geometric attributes for object recognition, design criteria for a four degree-of-freedom robot head, a real-time vision system based on control of visual attention and a behavior-based active eye vision system. The scope of the book pr

  2. An FPGA Implementation of a Robot Control System with an Integrated 3D Vision System

    Directory of Open Access Journals (Sweden)

    Yi-Ting Chen

    2015-05-01

    Full Text Available Robot decision making and motion control are commonly based on visual information in various applications. Position-based visual servoing is a technique for vision-based robot control which operates in the 3D workspace, uses real-time image processing to perform feature extraction, and returns the pose of the object for positioning control. In order to handle the computational burden of the vision-sensor feedback, we design an FPGA-based motion-vision integrated system that employs dedicated hardware circuits for vision processing and motion-control functions. This research conducts a preliminary study to explore the integration of 3D vision and robot motion control system design on a single field-programmable gate array (FPGA) chip. The implemented motion-vision embedded system performs the following functions: filtering, image statistics, binary morphology, binary object analysis, object 3D-position calculation, robot inverse kinematics, velocity profile generation, feedback counting, and multiple-axis position feedback control.

  3. Gain-scheduling control of a monocular vision-based human-following robot

    CSIR Research Space (South Africa)

    Burke, Michael G

    2011-08-01

    Full Text Available ... environment, in a passive manner, at relatively high speeds and low cost. The control of mobile robots using vision in the feedback loop falls into the well-studied field of visual servo control. Two primary approaches are used: image-based visual...

  4. Modeling and Implementation of Omnidirectional Soccer Robot with Wide Vision Scope Applied in Robocup-MSL

    Directory of Open Access Journals (Sweden)

    Mohsen Taheri

    2010-04-01

    Full Text Available The purpose of this paper is to design and implement a middle-size soccer robot that conforms to the RoboCup MSL league rules. First, according to the rules of RoboCup, we design the middle-size soccer robot. The proposed autonomous soccer robot consists of the mechanical platform, motion control module, omnidirectional vision module, front vision module, image processing and recognition module, target object positioning and real coordinate reconstruction, robot path planning, competition strategies, and obstacle avoidance. The soccer robot is equipped with a laptop computer system and interface circuits to make decisions. The omnidirectional vision sensor of the vision system handles the image processing and positioning for obstacle avoidance and target tracking. The boundary-following algorithm (BFA) is applied to find the important features of the field. We utilize a sensor data fusion method for the control system parameters, self-localization and world modeling. A vision-based self-localization and the conventional odometry systems are fused for robust self-localization. The localization algorithm includes filtering, sharing and integration of the data for different types of objects recognized in the environment. In the control strategies, we present three state modes: Attack Strategy, Defense Strategy and Intercept Strategy. The methods have been tested on middle-size robots in many RoboCup competition fields.

  5. 8th International Conference on Robotic, Vision, Signal Processing & Power Applications

    CERN Document Server

    Mustaffa, Mohd

    2014-01-01

    The proceedings are a collection of research papers presented at the 8th International Conference on Robotics, Vision, Signal Processing and Power Applications (ROVISP 2013) by researchers, scientists, engineers, and academicians as well as industrial professionals from all around the globe. The topics of interest include but are not limited to: • Robotics, Control, Mechatronics and Automation • Vision, Image, and Signal Processing • Artificial Intelligence and Computer Applications • Electronic Design and Applications • Telecommunication Systems and Applications • Power System and Industrial Applications

  6. Autonomous military robotics

    CERN Document Server

    Nath, Vishnu

    2014-01-01

    This SpringerBrief reveals the latest techniques in computer vision and machine learning on robots that are designed as accurate and efficient military snipers. Militaries around the world are investigating this technology to simplify the time, cost and safety measures necessary for training human snipers. These robots are developed by combining crucial aspects of computer science research areas including image processing, robotic kinematics and learning algorithms. The authors explain how a new humanoid robot, the iCub, uses high-speed cameras and computer vision algorithms to track the object…

  7. Vision-Based Robot Following Using PID Control

    Directory of Open Access Journals (Sweden)

    Chandra Sekhar Pati

    2017-06-01

    Full Text Available Applications like robots which are employed for shopping, porter services, assistive robotics, etc., require a robot to continuously follow a human or another robot. This paper presents a mobile robot following another tele-operated mobile robot based on a PID (Proportional-Integral-Differential) controller. Here, we use two differential wheel drive robots; one is a master robot and the other is a follower robot. The master robot is manually controlled and the follower robot is programmed to follow the master robot. For the master robot, a Bluetooth module receives the user's commands from an Android application, which are processed by the master robot's controller to move the robot. The follower robot receives the image from the Kinect sensor mounted on it and recognizes the master robot. The follower robot identifies the x, y positions using the camera and the depth using the Kinect depth sensor. From the x, y, and z locations of the master robot, the follower robot computes the angle and distance between the master and follower robots, which are given as the error terms of a PID controller. A PID controller is based on feedback and tries to minimize the error. Using this, the follower robot follows the master robot. Experiments are conducted with two indigenously developed robots, one a humanoid and the other a small mobile robot. It was observed that the follower robot was easily able to follow the master robot using well-tuned PID parameters.
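    The control scheme described above (bearing and range errors fed to PID loops driving a differential base) can be sketched as follows. The `PID` class and `follow_command` helper are hypothetical illustrations of the idea, with placeholder gains rather than the authors' tuned parameters:

```python
import math

class PID:
    """Minimal PID controller; gains are placeholders, not the authors' values."""
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = None

    def step(self, error, dt):
        self.integral += error * dt
        deriv = 0.0 if self.prev_error is None else (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * deriv

def follow_command(master_xyz, desired_distance, heading_pid, distance_pid, dt):
    """Map the master robot's sensed position to differential wheel speeds
    (illustrative helper, not the paper's implementation)."""
    x, _, z = master_xyz            # x: lateral offset, z: depth from the sensor
    angle_error = math.atan2(x, z)  # bearing to the master robot
    dist_error = math.hypot(x, z) - desired_distance
    turn = heading_pid.step(angle_error, dt)
    forward = distance_pid.step(dist_error, dt)
    return forward - turn, forward + turn  # left and right wheel speeds
```

    A master robot dead ahead at more than the desired distance yields equal wheel speeds (pure forward motion); a laterally offset master makes the follower turn toward it.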

  8. Vision-Based Robot Following Using PID Control

    OpenAIRE

    Chandra Sekhar Pati; Rahul Kala

    2017-01-01

    Applications like robots which are employed for shopping, porter services, assistive robotics, etc., require a robot to continuously follow a human or another robot. This paper presents a mobile robot following another tele-operated mobile robot based on a PID (Proportional–Integral-Differential) controller. Here, we use two differential wheel drive robots; one is a master robot and the other is a follower robot. The master robot is manually controlled and the follower robot is programmed to ...

  9. Learning Spatial Object Localization from Vision on a Humanoid Robot

    Directory of Open Access Journals (Sweden)

    Jürgen Leitner

    2012-12-01

    Full Text Available We present a combined machine learning and computer vision approach for robots to localize objects. It allows our iCub humanoid to quickly learn to provide accurate 3D position estimates (in the centimetre range) of objects seen. Biologically inspired approaches, such as Artificial Neural Networks (ANN) and Genetic Programming (GP), are trained to provide these position estimates using the two cameras and the joint encoder readings. No camera calibration or explicit knowledge of the robot's kinematic model is needed. We find that ANN and GP are not just faster and of lower complexity than traditional techniques, but also learn without the need for extensive calibration procedures. In addition, the approach localizes objects robustly when they are placed in the robot's workspace at arbitrary positions, even while the robot is moving its torso, head and eyes.

  10. A Vision-Based Self-Calibration Method for Robotic Visual Inspection Systems

    Science.gov (United States)

    Yin, Shibin; Ren, Yongjie; Zhu, Jigui; Yang, Shourui; Ye, Shenghua

    2013-01-01

    A vision-based robot self-calibration method is proposed in this paper to evaluate the kinematic parameter errors of a robot using a visual sensor mounted on its end-effector. This approach can be performed in the industrial field without external, expensive apparatus or an elaborate setup. A robot Tool Center Point (TCP) is defined in the structural model of a line-structured laser sensor and aligned to a reference point fixed in the robot workspace. A mathematical model is established to formulate the misalignment errors in terms of the kinematic parameter errors and TCP position errors. Based on the fixed-point constraints, the kinematic parameter errors and TCP position errors are identified with an iterative algorithm. Compared to conventional methods, the proposed method eliminates the need for robot base-frame and hand-to-eye calibrations, shortens the error propagation chain, and makes the calibration process more accurate and convenient. A validation experiment is performed on an ABB IRB2400 robot. An optimal configuration for the number and distribution of fixed points in the robot workspace is obtained from the experimental results. Comparative experiments reveal a significant improvement in the measuring accuracy of the robotic visual inspection system. PMID:24300597

  11. Performance evaluation of 3D vision-based semi-autonomous control method for assistive robotic manipulator.

    Science.gov (United States)

    Ka, Hyun W; Chung, Cheng-Shiu; Ding, Dan; James, Khara; Cooper, Rory

    2018-02-01

    We developed a 3D vision-based semi-autonomous control interface for assistive robotic manipulators. It was implemented on one of the most popular commercially available assistive robotic manipulators, combined with a low-cost depth-sensing camera mounted on the robot base. To perform a manipulation task with the 3D vision-based semi-autonomous control interface, a user starts operating with a manual control method available to him/her. When detecting objects within a set range, the control interface automatically stops the robot and provides the user with possible manipulation options through audible text output, based on the detected object characteristics. The system then waits until the user states a voice command. Once the user command is given, the control interface drives the robot autonomously until the given command is completed. In empirical evaluations conducted with human subjects from two different groups, it was shown that semi-autonomous control can be used as an alternative control method to enable individuals with impaired motor control to operate the robot arms more efficiently by facilitating their fine motion control. The advantage of semi-autonomous control was not obvious for simple tasks, but for relatively complex real-life tasks, the 3D vision-based semi-autonomous control showed significantly faster performance. Implications for Rehabilitation: A 3D vision-based semi-autonomous control interface will improve clinical practice by providing an alternative control method that is less demanding physically as well as cognitively. A 3D vision-based semi-autonomous control provides the user with task-specific intelligent semi-autonomous manipulation assistance. A 3D vision-based semi-autonomous control gives the user the feeling that he or she is still in control at any moment. A 3D vision-based semi-autonomous control is compatible with different types of new and existing manual control methods for ARMs.

  12. 9th International Conference on Robotics, Vision, Signal Processing & Power Applications

    CERN Document Server

    Iqbal, Shahid; Teoh, Soo; Mustaffa, Mohd

    2017-01-01

    The proceedings are a collection of research papers presented at the 9th International Conference on Robotics, Vision, Signal Processing & Power Applications (ROVISP 2016) by researchers, scientists, engineers, and academicians as well as industrial professionals from all around the globe, presenting their research results and development activities as oral or poster presentations. The topics of interest include but are not limited to: • Robotics, Control, Mechatronics and Automation • Vision, Image, and Signal Processing • Artificial Intelligence and Computer Applications • Electronic Design and Applications • Telecommunication Systems and Applications • Power System and Industrial Applications • Engineering Education.

  13. Facilitating Programming of Vision-Equipped Robots through Robotic Skills and Projection Mapping

    DEFF Research Database (Denmark)

    Andersen, Rasmus Skovgaard

    The field of collaborative industrial robots is currently developing fast both in the industry and in the scientific community. Companies such as Rethink Robotics and Universal Robots are redefining the concept of an industrial robot and entire new markets and use cases are becoming relevant for ...

  14. Evaluation of Two Robot Vision Control Algorithms Developed Based on N-R and EKG Methods for Slender Bar Placement

    Energy Technology Data Exchange (ETDEWEB)

    Son, Jae Kyung; Jang, Wan Shik; Hong, Sung Mun [Gwangju (Korea, Republic of)

    2013-04-15

    Many problems need to be solved before vision systems can actually be applied in industry, such as the precision of the kinematic model in the robot control algorithm based on visual information, active compensation of the camera's focal length and orientation during the movement of the robot, and understanding of the mapping of physical 3-D space into 2-D camera coordinates. An algorithm is proposed to enable the robot to move actively even if the relative position between the camera and the robot is unknown. To solve the correction problem, this study proposes a vision system model with six camera parameters. To develop the robot vision control algorithms, the N-R and EKG methods are applied to the vision system model. Finally, the position accuracy and processing time of the two algorithms developed based on the EKG and N-R methods are compared experimentally by making the robot perform a slender-bar placement task.
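    The N-R (Newton-Raphson) identification step can be illustrated with a generic Gauss-Newton least-squares routine using a finite-difference Jacobian. This is a sketch of the numerical technique only, not the paper's six-parameter camera model; the residual function and tolerances are assumptions:

```python
import numpy as np

def gauss_newton(residual_fn, p0, iters=20):
    """Newton-Raphson / Gauss-Newton parameter estimation.
    residual_fn maps a parameter vector to a residual vector
    (e.g. reprojection errors of a camera model)."""
    p = np.asarray(p0, dtype=float)
    eps = 1e-6
    for _ in range(iters):
        r = residual_fn(p)
        J = np.empty((r.size, p.size))
        for j in range(p.size):           # finite-difference Jacobian
            dp = np.zeros_like(p)
            dp[j] = eps
            J[:, j] = (residual_fn(p + dp) - r) / eps
        step, *_ = np.linalg.lstsq(J, -r, rcond=None)  # solve J step = -r
        p = p + step
        if np.linalg.norm(step) < 1e-10:
            break
    return p
```

    For a linear model the routine converges in a single step; for a nonlinear camera model it iterates until the update is negligible.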

  15. Evaluation of Two Robot Vision Control Algorithms Developed Based on N-R and EKG Methods for Slender Bar Placement

    International Nuclear Information System (INIS)

    Son, Jae Kyung; Jang, Wan Shik; Hong, Sung Mun

    2013-01-01

    Many problems need to be solved before vision systems can actually be applied in industry, such as the precision of the kinematic model in the robot control algorithm based on visual information, active compensation of the camera's focal length and orientation during the movement of the robot, and understanding of the mapping of physical 3-D space into 2-D camera coordinates. An algorithm is proposed to enable the robot to move actively even if the relative position between the camera and the robot is unknown. To solve the correction problem, this study proposes a vision system model with six camera parameters. To develop the robot vision control algorithms, the N-R and EKG methods are applied to the vision system model. Finally, the position accuracy and processing time of the two algorithms developed based on the EKG and N-R methods are compared experimentally by making the robot perform a slender-bar placement task.

  16. 75 FR 36456 - Channel America Television Network, Inc., EquiMed, Inc., Kore Holdings, Inc., Robotic Vision...

    Science.gov (United States)

    2010-06-25

    ... SECURITIES AND EXCHANGE COMMISSION [File No. 500-1] Channel America Television Network, Inc., EquiMed, Inc., Kore Holdings, Inc., Robotic Vision Systems, Inc. (n/k/a Acuity Cimatrix, Inc.), Security... accurate information concerning the securities of Robotic Vision Systems, Inc. (n/k/a Acuity Cimatrix, Inc...

  17. 3D vision in a virtual reality robotics environment

    Science.gov (United States)

    Schutz, Christian L.; Natonek, Emerico; Baur, Charles; Hugli, Heinz

    1996-12-01

    Virtual reality robotics (VRR) needs sensing feedback from the real environment. To show how advanced 3D vision provides new perspectives to fulfill these needs, this paper presents an architecture and system that integrates hybrid 3D vision and VRR and reports on experiments and results. The first section discusses the advantages of virtual reality in robotics, the potential of a 3D vision system in VRR, and the contribution of a knowledge database, robust control and the combination of intensity and range imaging to build such a system. Section two presents the different modules of a hybrid 3D vision architecture based on hypothesis generation and verification. Section three addresses the problem of the recognition of complex, free-form 3D objects and shows how and why the newer approaches based on geometric matching solve the problem. This free-form matching can be efficiently integrated in a VRR system as a hypothesis-generation knowledge-based 3D vision system. In the fourth part, we introduce the hypothesis verification based on intensity images, which checks object pose and texture. Finally, we show how this system has been implemented and operates in a practical VRR environment used for an assembly task.

  18. A cognitive approach to vision for a mobile robot

    Science.gov (United States)

    Benjamin, D. Paul; Funk, Christopher; Lyons, Damian

    2013-05-01

    We describe a cognitive vision system for a mobile robot. This system works in a manner similar to the human vision system, using saccadic, vergence and pursuit movements to extract information from visual input. At each fixation, the system builds a 3D model of a small region, combining information about distance, shape, texture and motion. These 3D models are embedded within an overall 3D model of the robot's environment. This approach turns the computer vision problem into a search problem, with the goal of constructing a physically realistic model of the entire environment. At each step, the vision system selects a point in the visual input to focus on. The distance, shape, texture and motion information are computed in a small region and used to build a mesh in a 3D virtual world. Background knowledge is used to extend this structure as appropriate, e.g. if a patch of wall is seen, it is hypothesized to be part of a large wall and the entire wall is created in the virtual world, or if part of an object is recognized, the whole object's mesh is retrieved from the library of objects and placed into the virtual world. The difference between the input from the real camera and from the virtual camera is compared using local Gaussians, creating an error mask that indicates the main differences between them. This is then used to select the next points to focus on. This approach permits us to use very expensive algorithms on small localities, thus generating very accurate models. It is also task-oriented, permitting the robot to use its knowledge about its task and goals to decide which parts of the environment need to be examined. The software components of this architecture include PhysX for the 3D virtual world, OpenCV and the Point Cloud Library for visual processing, and the Soar cognitive architecture, which controls the perceptual processing and robot planning. The hardware is a custom-built pan-tilt stereo color camera. We describe experiments using both…
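    The comparison of real and virtual camera images using local Gaussians might be sketched as a Gaussian-smoothed difference mask. The kernel size, threshold, and function names below are assumptions for illustration, not taken from the system described:

```python
import numpy as np

def gaussian_kernel(size=5, sigma=1.0):
    """Separable Gaussian kernel, normalized to sum to 1."""
    ax = np.arange(size) - size // 2
    g = np.exp(-ax**2 / (2 * sigma**2))
    k = np.outer(g, g)
    return k / k.sum()

def error_mask(real, virtual, thresh=0.1):
    """Boolean mask of locations where the real and virtual camera views
    disagree after local Gaussian smoothing (simplified stand-in)."""
    k = gaussian_kernel()
    pad = k.shape[0] // 2
    def blur(img):
        out = np.zeros(img.shape, dtype=float)
        padded = np.pad(img.astype(float), pad, mode='edge')
        for i in range(img.shape[0]):          # naive convolution, fine for a sketch
            for j in range(img.shape[1]):
                out[i, j] = np.sum(padded[i:i+k.shape[0], j:j+k.shape[1]] * k)
        return out
    diff = np.abs(blur(real) - blur(virtual))
    return diff > thresh
```

    The resulting mask would drive fixation selection: the next point of interest is wherever the virtual world disagrees most with the camera.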

  19. Robotics Vision-based Heuristic Reasoning for Underwater Target Tracking and Navigation

    Directory of Open Access Journals (Sweden)

    Chua Kia

    2005-09-01

    Full Text Available This paper presents a robotics vision-based heuristic reasoning system for underwater target tracking and navigation. The system is introduced to improve the level of automation of underwater Remotely Operated Vehicle (ROV) operations. A prototype which combines computer vision with an underwater robotics system was successfully designed and developed to perform target tracking and intelligent navigation. This study focuses on developing image processing algorithms and a fuzzy inference system for the analysis of the terrain. The vision system developed is capable of interpreting an underwater scene by extracting subjective uncertainties of the object of interest. These subjective uncertainties are further processed as multiple inputs of a fuzzy inference system that is capable of making crisp decisions concerning where to navigate. An important part of the image analysis is morphological filtering. The applications focus on binary images with the extension of gray-level concepts. An open-loop fuzzy control system is developed for classifying the traverse of terrain. A notable achievement is the system's capability to recognize and track the object of interest (a pipeline) in perspective view based on perceived conditions. The effectiveness of this approach is demonstrated by computer and prototype simulations. This work originated from the desire to develop a robotics vision system with the ability to mimic a human expert's judgement and reasoning when maneuvering an ROV across underwater terrain.

  20. Robotics Vision-based Heuristic Reasoning for Underwater Target Tracking and Navigation

    Directory of Open Access Journals (Sweden)

    Chua Kia

    2008-11-01

    Full Text Available This paper presents a robotics vision-based heuristic reasoning system for underwater target tracking and navigation. The system is introduced to improve the level of automation of underwater Remotely Operated Vehicle (ROV) operations. A prototype which combines computer vision with an underwater robotics system was successfully designed and developed to perform target tracking and intelligent navigation. This study focuses on developing image processing algorithms and a fuzzy inference system for the analysis of the terrain. The vision system developed is capable of interpreting an underwater scene by extracting subjective uncertainties of the object of interest. These subjective uncertainties are further processed as multiple inputs of a fuzzy inference system that is capable of making crisp decisions concerning where to navigate. An important part of the image analysis is morphological filtering. The applications focus on binary images with the extension of gray-level concepts. An open-loop fuzzy control system is developed for classifying the traverse of terrain. A notable achievement is the system's capability to recognize and track the object of interest (a pipeline) in perspective view based on perceived conditions. The effectiveness of this approach is demonstrated by computer and prototype simulations. This work originated from the desire to develop a robotics vision system with the ability to mimic a human expert's judgement and reasoning when maneuvering an ROV across underwater terrain.

  1. Robot bicolor system

    Science.gov (United States)

    Yamaba, Kazuo

    1999-03-01

    In robot vision, the most important problem is that the speed of acquiring and analyzing images is lower than the execution speed of the robot. In a practical robot color vision system, processing should occur in real time. We surmised that this problem might be solved using the bicolor analysis technique. We have been testing a system that we hope will give us insight into the properties of bicolor vision. The experiment uses the red channel of a color CCD camera and an image from a monochromatic camera to duplicate McCann's theory. To mix the two signals together, the mono image is copied into each of the red, green and blue memory banks of the image processing board, and the red image is then added to the red bank. Conversely, pure red, green and blue color components are obtained from the original bicolor images in the novel color system after a scaling factor is applied to each RGB image. Our search for a bicolor robot vision system was entirely successful.
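    The bank-mixing step described above can be sketched with array operations. The `make_bicolor` helper and its clipping behaviour are illustrative assumptions, not the authors' frame-grabber implementation:

```python
import numpy as np

def make_bicolor(mono, red):
    """Build a bicolor image: the mono image fills all three memory banks,
    then the red-channel signal is added to the red bank (after McCann).
    Inputs are float images in [0, 1]; output is clipped to the same range."""
    img = np.stack([mono + red, mono, mono], axis=-1)  # banks: R, G, B
    return np.clip(img, 0.0, 1.0)
```
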

  2. State of the art of robotic surgery related to vision: brain and eye applications of newly available devices

    Directory of Open Access Journals (Sweden)

    Nuzzi R

    2018-02-01

    Full Text Available Raffaele Nuzzi, Luca Brusasco Department of Surgical Sciences, Eye Clinic, University of Torino, Turin, Italy Background: Robot-assisted surgery has revolutionized many surgical subspecialties, mainly where procedures have to be performed in confined, difficult-to-visualize spaces. Despite advances in general surgery and neurosurgery, in vivo application of robotics to ocular surgery is still in its infancy, owing to the particular complexities of microsurgery. The use of robotic assistance and feedback guidance on surgical maneuvers could improve the technical performance of expert surgeons during the initial phase of the learning curve. Evidence acquisition: We analyzed the advantages and disadvantages of surgical robots, as well as the present applications and future outlook of robotics in neurosurgery in brain areas related to vision and in ophthalmology. Discussion: Limitations to robotic assistance remain that need to be overcome before it can be more widely applied in ocular surgery. Conclusion: There is heightened interest in studies documenting computerized systems that filter out hand tremor and optimize speed of movement, control of force, and direction and range of movement. Further research is still needed to validate robot-assisted procedures. Keywords: robotic surgery related to vision, robots, ophthalmological applications of robotics, eye and brain robots, eye robots

  3. Vision-based Navigation and Reinforcement Learning Path Finding for Social Robots

    OpenAIRE

    Pérez Sala, Xavier

    2010-01-01

    We propose a robust system for automatic Robot Navigation in uncontrolled environments. The system is composed of three main modules: the Artificial Vision module, the Reinforcement Learning module, and the behavior control module. The aim of the system is to allow a robot to automatically find a path that arrives at a prefixed goal. Turn and straight movements in uncontrolled environments are automatically estimated and controlled using the proposed modules. The Artificial Vi...

  4. Computer vision system R&D for EAST Articulated Maintenance Arm robot

    Energy Technology Data Exchange (ETDEWEB)

    Lin, Linglong, E-mail: linglonglin@ipp.ac.cn; Song, Yuntao, E-mail: songyt@ipp.ac.cn; Yang, Yang, E-mail: yangy@ipp.ac.cn; Feng, Hansheng, E-mail: hsfeng@ipp.ac.cn; Cheng, Yong, E-mail: chengyong@ipp.ac.cn; Pan, Hongtao, E-mail: panht@ipp.ac.cn

    2015-11-15

    Highlights: • We discuss the image preprocessing, object detection and pose estimation algorithms under the poor lighting conditions of the inner vessel of the EAST tokamak. • The main pipeline, including contour detection, contour filtering, MER extraction, object location and pose estimation, is described in detail. • The technical issues encountered during the research are discussed. - Abstract: The Experimental Advanced Superconducting Tokamak (EAST) is the first fully superconducting tokamak device, constructed at the Institute of Plasma Physics, Chinese Academy of Sciences (ASIPP). The EAST Articulated Maintenance Arm (EAMA) robot provides the means of in-vessel maintenance, such as inspection and picking up fragments of the first wall. This paper presents a method to identify and locate the fragments semi-automatically using computer vision. The use of computer vision for identification and location faces difficult challenges such as shadows, poor contrast, low illumination levels and limited texture. The method developed in this paper enables credible identification of objects with shadows through invariant images and edge detection. The proposed algorithms are validated on our ASIPP robotics and computer vision platform (ARVP). The results show that the method can provide a 3D pose with reference to the robot base so that objects of different shapes and sizes can be picked up successfully.
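    The contour-detection and MER-extraction stages of the pipeline can be illustrated, in heavily simplified form, by thresholding an intensity image and bounding the foreground region. This sketch omits the paper's invariant-image and shadow-handling stages, and the function name and threshold are assumptions:

```python
import numpy as np

def locate_fragment(image, thresh):
    """Return the axis-aligned bounding box (r0, c0, r1, c1) of pixels
    brighter than `thresh`, or None if nothing is detected.
    A toy stand-in for the contour / minimum-enclosing-rectangle stages."""
    mask = image > thresh
    if not mask.any():
        return None
    rows = np.any(mask, axis=1)           # rows containing foreground
    cols = np.any(mask, axis=0)           # columns containing foreground
    r0, r1 = np.where(rows)[0][[0, -1]]
    c0, c1 = np.where(cols)[0][[0, -1]]
    return (r0, c0, r1, c1)
```

    In the real system the 2D box, together with depth information, would be lifted to a 3D pose in the robot base frame.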

  5. Design and Development of Vision Based Blockage Clearance Robot for Sewer Pipes

    Directory of Open Access Journals (Sweden)

    Krishna Prasad Nesaian

    2012-03-01

    Full Text Available Robotic technology is one of the advanced technologies capable of completing tasks in situations where humans are unable to reach, see or survive. Underground sewer pipelines are the major means of transporting effluent water. Blockages in sewer pipes cause many problems, leading to overflow of effluent water and sanitation issues. Therefore, a robotic vehicle is developed that is capable of traveling underneath effluent water, detecting blockages using ultrasonic sensors, and clearing them by means of a drilling mechanism. In addition, a wireless camera is fitted, which acts as the robot's vision and through which we can monitor video and capture images using the MATLAB tool. Thus, in this project, a prototype model of an underground sewer pipe blockage clearance robot of the drilling type is developed.

  6. A Vision-Based Approach for Estimating Contact Forces: Applications to Robot-Assisted Surgery

    Directory of Open Access Journals (Sweden)

    C. W. Kennedy

    2005-01-01

    Full Text Available The primary goal of this paper is to provide force feedback to the user using vision-based techniques. The approach presented in this paper can be used to provide force feedback to the surgeon for robot-assisted procedures. As proof of concept, we have developed a linear elastic finite element model (FEM) of a rubber membrane whereby the nodal displacements of the membrane points are measured using vision. These nodal displacements are the input into our finite element model. In the first experiment, we track the deformation of the membrane in real time through stereovision and compare it with the actual deformation computed through the forward kinematics of the robot arm. On the basis of accurate deformation estimation through vision, we test the physical model of the membrane developed through finite element techniques. The FEM model accurately reflects the interaction forces on the user console when the interaction forces of the robot arm with the membrane are compared with those experienced by the surgeon on the console through the force feedback device. In the second experiment, the PHANToM haptic interface device is used to control the Mitsubishi PA-10 robot arm and interact with the membrane in real time. Image data obtained through vision of the deformation of the membrane are used as the displacement input for the FEM model to compute the local interaction forces, which are then displayed on the user console to provide force feedback, hence closing the loop.
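    The core of the linear elastic FEM step, recovering nodal forces from measured nodal displacements via f = Ku, can be sketched as follows. The 1D spring-chain stiffness matrix is a toy stand-in for the paper's membrane model, and both function names are assumptions:

```python
import numpy as np

def spring_chain_stiffness(n_nodes, k=1.0):
    """Assemble the global stiffness matrix of a 1D chain of identical
    springs (toy substitute for a 2D membrane element assembly)."""
    K = np.zeros((n_nodes, n_nodes))
    for e in range(n_nodes - 1):
        # Each element contributes the standard 2x2 spring stiffness block.
        K[e:e+2, e:e+2] += k * np.array([[1.0, -1.0], [-1.0, 1.0]])
    return K

def reaction_forces(K, displacements):
    """Linear elasticity: nodal forces are f = K u, with u the
    vision-measured nodal displacements."""
    return K @ displacements
```

    In the paper's setting, the displacement vector u would come from the tracked membrane points and the resulting forces would be rendered on the haptic console.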

  7. Vision-based control of robotic arm with 6 degrees of freedom

    OpenAIRE

    Versleegers, Wim

    2014-01-01

    This paper studies the procedure to program a vertically articulated robot with six degrees of freedom, the Mitsubishi Melfa RV-2SD, with Matlab. A major drawback of the programming software provided by Mitsubishi is that it barely allows the use of vision-based programming. The number of usable cameras is limited and, moreover, the cameras are very expensive. Using Matlab, these limitations could be overcome. However, there is no direct way to control the robot with Matlab. The goal of this p...

  8. Dense range map reconstruction from a versatile robotic sensor system with an active trinocular vision and a passive binocular vision.

    Science.gov (United States)

    Kim, Min Young; Lee, Hyunkee; Cho, Hyungsuck

    2008-04-10

    One major research issue associated with 3D perception by robotic systems is the creation of efficient sensor systems that can generate dense range maps reliably. A visual sensor system for robotic applications is developed that is inherently equipped with two types of sensor: an active trinocular vision and a passive stereo vision. Unlike conventional active vision systems, which use a large number of images with variations of projected patterns for dense range map acquisition, or conventional passive vision systems, which work well only in specific environments with sufficient feature information, a cooperative bidirectional sensor fusion method for this visual sensor system enables us to acquire a reliable dense range map using active and passive information simultaneously. The fusion algorithms are composed of two parts: one in which the passive stereo vision helps the active vision, and the other in which the active trinocular vision helps the passive one. The first part matches the laser patterns in stereo laser images with the help of intensity images; the second part utilizes an information fusion technique based on dynamic programming, in which image regions between laser patterns are matched pixel by pixel with the help of the fusion results obtained in the first part. To determine how the proposed sensor system and fusion algorithms work in real applications, the sensor system is implemented on a robotic system and the proposed algorithms are applied. A series of experimental tests is performed for a variety of configurations of robot and environments. The performance of the sensor system is discussed in detail.

  9. Estimating Position of Mobile Robots From Omnidirectional Vision Using an Adaptive Algorithm.

    Science.gov (United States)

    Li, Luyang; Liu, Yun-Hui; Wang, Kai; Fang, Mu

    2015-08-01

    This paper presents a novel and simple adaptive algorithm for estimating the position of a mobile robot with high accuracy in an unknown and unstructured environment by fusing images of an omnidirectional vision system with measurements of odometry and inertial sensors. Based on a new derivation where the omnidirectional projection can be linearly parameterized by the positions of the robot and natural feature points, we propose a novel adaptive algorithm, which is similar to the Slotine-Li algorithm in model-based adaptive control, to estimate the robot's position by using the tracked feature points in the image sequence, the robot's velocity, and the orientation angles measured by odometry and inertial sensors. It is proved that the adaptive algorithm leads to global exponential convergence of the position estimation errors to zero. Simulations and real-world experiments are performed to demonstrate the performance of the proposed algorithm.
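
    The kind of adaptive law the abstract alludes to can be illustrated with a toy gradient-type estimator for a linearly parameterized measurement model y = H x. This sketch is not the paper's algorithm: the measurement matrix H, the gain and the positions below are invented for illustration.

```python
def adaptive_update(x_hat, H, y, gain=0.1):
    """One adaptive step: x_hat += gain * H^T (y - H x_hat)."""
    # prediction error e = y - H x_hat
    e = [y[i] - sum(H[i][j] * x_hat[j] for j in range(len(x_hat)))
         for i in range(len(y))]
    # gradient correction along H^T e
    return [x_hat[j] + gain * sum(H[i][j] * e[i] for i in range(len(y)))
            for j in range(len(x_hat))]

x_true = [2.0, -1.0]           # robot position, unknown to the estimator
x_hat = [0.0, 0.0]             # initial guess
H = [[1.0, 0.5], [0.2, 1.0]]   # hypothetical measurement matrix
for _ in range(200):
    y = [sum(H[i][j] * x_true[j] for j in range(2)) for i in range(2)]
    x_hat = adaptive_update(x_hat, H, y)
```

    Under persistent excitation of the measurements, the estimation error of such a law converges to zero, which is the property the paper proves for its omnidirectional-projection parameterization.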

  10. A Novel Identification Methodology for the Coordinate Relationship between a 3D Vision System and a Legged Robot.

    Science.gov (United States)

    Chai, Xun; Gao, Feng; Pan, Yang; Qi, Chenkun; Xu, Yilin

    2015-04-22

    Coordinate identification between vision systems and robots is quite a challenging issue in the field of intelligent robotic applications, involving steps such as perceiving the immediate environment, building the terrain map and planning the locomotion automatically. It is now well established that current identification methods have non-negligible limitations such as difficult feature matching, the requirement of external tools and the intervention of multiple people. In this paper, we propose a novel methodology to identify the geometric parameters of 3D vision systems mounted on robots without involving other people or additional equipment. In particular, our method focuses on legged robots, which have complex body structures and excellent locomotion ability compared to their wheeled/tracked counterparts. The parameters can be identified simply by moving the robot on relatively flat ground. Concretely, an estimation approach is provided to calculate the ground plane. In addition, the relationship between the robot and the ground is modeled. The parameters are obtained by formulating the identification problem as an optimization problem. The methodology is integrated on a legged robot called "Octopus", which can traverse through rough terrains with high stability after obtaining the identification parameters of its mounted vision system using the proposed method. Diverse experiments in different environments demonstrate our novel method is accurate and robust.
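
    The ground-plane estimation step mentioned above can be caricatured as an ordinary least-squares plane fit z = ax + by + c to 3D points (e.g., foot contact points). The sample points and plane coefficients below are hypothetical; the paper's actual estimation and optimization formulation is richer.

```python
def solve3(A, rhs):
    """Solve a 3x3 linear system by Gauss-Jordan elimination with pivoting."""
    M = [row[:] + [r] for row, r in zip(A, rhs)]
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(3):
            if r != col:
                f = M[r][col] / M[col][col]
                M[r] = [m - f * c for m, c in zip(M[r], M[col])]
    return [M[i][3] / M[i][i] for i in range(3)]

def fit_ground_plane(points):
    """Least-squares fit of z = a*x + b*y + c via the normal equations."""
    A = [[0.0] * 3 for _ in range(3)]
    rhs = [0.0] * 3
    for x, y, z in points:
        row = (x, y, 1.0)
        for i in range(3):
            for j in range(3):
                A[i][j] += row[i] * row[j]
            rhs[i] += row[i] * z
    return solve3(A, rhs)

# points sampled exactly from the plane z = 0.1x - 0.05y + 0.3 (made up)
pts = [(x, y, 0.1 * x - 0.05 * y + 0.3) for x in range(4) for y in range(4)]
a, b, c = fit_ground_plane(pts)
```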

  11. Estimation of visual maps with a robot network equipped with vision sensors.

    Science.gov (United States)

    Gil, Arturo; Reinoso, Óscar; Ballesta, Mónica; Juliá, Miguel; Payá, Luis

    2010-01-01

    In this paper we present an approach to the Simultaneous Localization and Mapping (SLAM) problem using a team of autonomous vehicles equipped with vision sensors. The SLAM problem considers the case in which a mobile robot is equipped with a particular sensor, moves along the environment, obtains measurements with its sensors and uses them to construct a model of the space where it evolves. In this paper we focus on the case where several robots, each equipped with its own sensor, are distributed in a network and view the space from different vantage points. In particular, each robot is equipped with a stereo camera that allows it to extract visual landmarks and obtain relative measurements to them. We propose an algorithm that uses the measurements obtained by the robots to build a single accurate map of the environment. The map is represented by the three-dimensional position of the visual landmarks. In addition, we consider that each landmark is accompanied by a visual descriptor that encodes its visual appearance. The solution is based on a Rao-Blackwellized particle filter that estimates the paths of the robots and the position of the visual landmarks. The validity of our proposal is demonstrated by means of experiments with a team of real robots in an office-like indoor environment.
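
    A Rao-Blackwellized particle filter of the kind described can be reduced to a 1-D toy: each particle samples a robot pose and carries a Gaussian (mean, variance) over a single landmark, updated by a scalar Kalman step, while weighting and resampling do the rest. Every number below (landmark position, noise levels, particle count) is invented for illustration; real implementations track many 3-D landmarks with visual descriptors.

```python
import math
import random

random.seed(0)

L_TRUE, R, Q = 30.0, 0.5, 0.1   # landmark position, measurement and motion noise variances
particles = [{"x": 0.0, "mu": 28.0, "var": 9.0, "w": 1.0} for _ in range(100)]

for step in range(1, 21):
    z = L_TRUE - float(step)                             # noise-free range measurement
    for p in particles:
        p["x"] += 1.0 + random.gauss(0.0, Q ** 0.5)      # sample the motion model
        inn = z - (p["mu"] - p["x"])                     # innovation
        S = p["var"] + R                                 # innovation variance
        p["w"] = math.exp(-inn * inn / (2.0 * S))        # measurement likelihood
        K = p["var"] / S                                 # scalar Kalman gain
        p["mu"] += K * inn                               # landmark mean update
        p["var"] *= 1.0 - K                              # landmark variance update
    # multinomial resampling proportional to the weights
    particles = [dict(p) for p in random.choices(
        particles, weights=[p["w"] for p in particles], k=len(particles))]

est_x = sum(p["x"] for p in particles) / len(particles)
est_mu = sum(p["mu"] for p in particles) / len(particles)
```

    Conditioning each landmark estimate on a sampled robot path is what makes the landmark update a cheap closed-form Kalman step; this is the essence of Rao-Blackwellization.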

  12. Estimation of Visual Maps with a Robot Network Equipped with Vision Sensors

    Directory of Open Access Journals (Sweden)

    Arturo Gil

    2010-05-01

    Full Text Available In this paper we present an approach to the Simultaneous Localization and Mapping (SLAM) problem using a team of autonomous vehicles equipped with vision sensors. The SLAM problem considers the case in which a mobile robot is equipped with a particular sensor, moves along the environment, obtains measurements with its sensors and uses them to construct a model of the space where it evolves. In this paper we focus on the case where several robots, each equipped with its own sensor, are distributed in a network and view the space from different vantage points. In particular, each robot is equipped with a stereo camera that allows it to extract visual landmarks and obtain relative measurements to them. We propose an algorithm that uses the measurements obtained by the robots to build a single accurate map of the environment. The map is represented by the three-dimensional position of the visual landmarks. In addition, we consider that each landmark is accompanied by a visual descriptor that encodes its visual appearance. The solution is based on a Rao-Blackwellized particle filter that estimates the paths of the robots and the position of the visual landmarks. The validity of our proposal is demonstrated by means of experiments with a team of real robots in an office-like indoor environment.

  13. Teaching Joint-Level Robot Programming with a New Robotics Software Tool

    Directory of Open Access Journals (Sweden)

    Fernando Gonzalez

    2017-12-01

    Full Text Available With the rising popularity of robotics in our modern world, there is an increase in the number of engineering programs that offer a basic Introduction to Robotics course. This common introductory robotics course generally covers the fundamental theory of robotics, including robot kinematics, dynamics, differential movements, trajectory planning and basic computer vision algorithms commonly used in the field of robotics. Joint programming, the task of writing a program that directly controls the robot's joint motors, is an activity that involves robot kinematics, dynamics, and trajectory planning. In this paper, we introduce a new educational robotics tool developed for teaching joint programming. The tool allows the student to write a program in a modified C language that controls the movement of the arm by controlling the velocity of each joint motor. This is a very important activity in the robotics course and helps the student learn how to build a robotic arm controller. Sample assignments are presented for different levels of difficulty.

  14. Vision-based obstacle recognition system for automated lawn mower robot development

    Science.gov (United States)

    Mohd Zin, Zalhan; Ibrahim, Ratnawati

    2011-06-01

    Digital image processing (DIP) techniques have been widely used in various types of applications recently. Classification and recognition of a specific object using a vision system involve some challenging tasks in the fields of image processing and artificial intelligence. The ability and efficiency of a vision system to capture and process images are very important for any intelligent system such as an autonomous robot. This paper gives attention to the development of a vision system that could contribute to the development of an automated vision-based lawn mower robot. The work involves the implementation of DIP techniques to detect and recognize three different types of obstacles that usually exist on a football field. The focus was given to the study of different types and sizes of obstacles, the development of a vision-based obstacle recognition system and the evaluation of the system's performance. Image processing techniques such as image filtering, segmentation, enhancement and edge detection have been applied in the system. The results have shown that the developed system is able to detect and recognize various types of obstacles on a football field with a recognition rate of more than 80%.
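
    One representative step of such a pipeline, edge detection, can be sketched with a plain Sobel operator in pure Python. The tiny synthetic "obstacle" image below stands in for a camera frame; the paper's actual operators, image sizes and thresholds are not specified here.

```python
def sobel_magnitude(img):
    """Gradient magnitude of a grayscale image (list of rows) via Sobel kernels."""
    h, w = len(img), len(img[0])
    KX = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]   # horizontal gradient kernel
    KY = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]   # vertical gradient kernel
    out = [[0.0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = sum(KX[j][i] * img[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            gy = sum(KY[j][i] * img[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            out[y][x] = (gx * gx + gy * gy) ** 0.5
    return out

# 8x8 frame: dark field (0) with one bright "obstacle" block (255)
img = [[255 if 3 <= x < 6 and 3 <= y < 6 else 0 for x in range(8)]
       for y in range(8)]
mag = sobel_magnitude(img)
```

    Thresholding `mag` yields an edge map from which shape and size features of candidate obstacles can be extracted for recognition.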

  15. A Novel Identification Methodology for the Coordinate Relationship between a 3D Vision System and a Legged Robot

    Directory of Open Access Journals (Sweden)

    Xun Chai

    2015-04-01

    Full Text Available Coordinate identification between vision systems and robots is quite a challenging issue in the field of intelligent robotic applications, involving steps such as perceiving the immediate environment, building the terrain map and planning the locomotion automatically. It is now well established that current identification methods have non-negligible limitations such as difficult feature matching, the requirement of external tools and the intervention of multiple people. In this paper, we propose a novel methodology to identify the geometric parameters of 3D vision systems mounted on robots without involving other people or additional equipment. In particular, our method focuses on legged robots, which have complex body structures and excellent locomotion ability compared to their wheeled/tracked counterparts. The parameters can be identified simply by moving the robot on relatively flat ground. Concretely, an estimation approach is provided to calculate the ground plane. In addition, the relationship between the robot and the ground is modeled. The parameters are obtained by formulating the identification problem as an optimization problem. The methodology is integrated on a legged robot called “Octopus”, which can traverse through rough terrains with high stability after obtaining the identification parameters of its mounted vision system using the proposed method. Diverse experiments in different environments demonstrate our novel method is accurate and robust.

  16. Robots and lattice automata

    CERN Document Server

    Adamatzky, Andrew

    2015-01-01

    The book gives a comprehensive overview of the state-of-the-art research and engineering in theory and application of Lattice Automata in design and control of autonomous Robots. Automata and robots share the same notional meaning. Automata (from the Latinization of the Greek word “αυτόματον”), self-operating autonomous machines invented in ancient times, can be considered the first steps of robotic-like efforts. Automata are mathematical models of Robots and they are also integral parts of robotic control systems. A Lattice Automaton is a regular array or a collective of finite state machines, or automata. The Automata update their states by the same rules depending on states of their immediate neighbours. In the context of this book, Lattice Automata are used in developing modular reconfigurable robotic systems, path planning and map exploration for robots, robot controllers, synchronisation of robot collectives, robot vision, and parallel robotic actuators. All chapters are...

  17. A Miniature Robot for Retraction Tasks under Vision Assistance in Minimally Invasive Surgery

    Directory of Open Access Journals (Sweden)

    Giuseppe Tortora

    2014-03-01

    Full Text Available Minimally Invasive Surgery (MIS) is one of the main aims of modern medicine. It enables surgery to be performed with a lower number and severity of incisions. Medical robots have been developed worldwide to offer a robotic alternative to traditional medical procedures. New approaches aimed at a substantial decrease of visible scars have been explored, such as Natural Orifice Transluminal Endoscopic Surgery (NOTES). Simple surgical tasks such as the retraction of an organ can be a challenge when performed from narrow access ports. For this reason, there is a continuous need to develop new robotic tools for performing dedicated tasks. This article illustrates the design and testing of a new robotic tool for retraction tasks under vision assistance for NOTES. The retraction robots integrate brushless motors to provide additional degrees of freedom beyond those provided by magnetic anchoring, thus improving the dexterity of the overall platform. The retraction robot can be easily controlled to reach the target organ and apply a retraction force of up to 1.53 N. Additional degrees of freedom can be used for smooth manipulation and grasping of the organ.

  18. Negative Affect in Human Robot Interaction

    DEFF Research Database (Denmark)

    Rehm, Matthias; Krogsager, Anders

    2013-01-01

    The vision of social robotics sees robots moving more and more into unrestricted social environments, where robots interact closely with users in their everyday activities, maybe even establishing relationships with the user over time. In this paper we present a field trial with a robot in a semi...

  19. Examples of design and achievement of vision systems for mobile robotics applications

    Science.gov (United States)

    Bonnin, Patrick J.; Cabaret, Laurent; Raulet, Ludovic; Hugel, Vincent; Blazevic, Pierre; M'Sirdi, Nacer K.; Coiffet, Philippe

    2000-10-01

    Our goal is to design and to achieve a multi-purpose vision system for various robotics applications: wheeled robots (like cars for autonomous driving), legged robots (six- and four-legged robots such as SONY's AIBO, and humanoids), and flying robots (to inspect bridges, for example) in various conditions: indoor or outdoor. Considering that the constraints depend on the application, we propose an edge segmentation implemented either in software or in hardware using CPLDs (ASICs or FPGAs could be used too). After discussing the criteria of our choice, we propose a chain of image processing operators constituting an edge segmentation. Although this chain is quite simple and very fast to perform, the results appear satisfactory. We propose a software implementation of it. Its temporal optimization is based on: its implementation under the pixel data flow programming model, the gathering of local processing when it is possible, the simplification of computations, and the use of fast access data structures. Then, we describe a first dedicated hardware implementation of the first part, which requires 9 CPLDs in this low-cost version. It is technically possible, but more expensive, to implement these algorithms using only a single FPGA.

  20. A Collaborative Approach for Surface Inspection Using Aerial Robots and Computer Vision

    Directory of Open Access Journals (Sweden)

    Martin Molina

    2018-03-01

    Full Text Available Aerial robots with cameras on board can be used in surface inspection to observe areas that are difficult to reach by other means. In this type of problem, it is desirable for aerial robots to have a high degree of autonomy. A way to provide more autonomy would be to use computer vision techniques to automatically detect anomalies on the surface. However, the performance of automated visual recognition methods is limited in uncontrolled environments, so that in practice it is not possible to perform a fully automatic inspection. This paper presents a solution for visual inspection that increases the degree of autonomy of aerial robots following a semi-automatic approach. The solution is based on human-robot collaboration in which the operator delegates tasks to the drone for exploration and visual recognition and the drone requests assistance in the presence of uncertainty. We validate this proposal with the development of an experimental robotic system using the software framework Aerostack. The paper describes the technical challenges that we had to solve to develop such a system and the impact of this solution on the degree of autonomy to detect anomalies on the surface.
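
    The delegation scheme described, automatic recognition with operator assistance under uncertainty, boils down to a confidence-gated loop. The function names and the 0.8 threshold below are illustrative and are not part of Aerostack's API.

```python
def inspect(patches, classify, ask_operator, threshold=0.8):
    """Label each surface patch automatically, deferring to the operator
    whenever the classifier's confidence falls below the threshold."""
    results = []
    for patch in patches:
        label, confidence = classify(patch)
        if confidence >= threshold:
            results.append((label, "auto"))
        else:
            results.append((ask_operator(patch), "operator"))
    return results

# hypothetical classifier: long patches are confidently labelled "crack"
demo = inspect(
    ["aa", "bbbb", "c"],
    classify=lambda p: ("crack", 0.9) if len(p) > 1 else ("unknown", 0.3),
    ask_operator=lambda p: "no-defect",
)
```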

  1. Incremental inverse kinematics based vision servo for autonomous robotic capture of non-cooperative space debris

    Science.gov (United States)

    Dong, Gangqi; Zhu, Z. H.

    2016-04-01

    This paper proposes a new incremental inverse kinematics based vision servo approach for robotic manipulators to capture a non-cooperative target autonomously. The target's pose and motion are estimated by a vision system using integrated photogrammetry and an EKF algorithm. Based on the estimated pose and motion of the target, the instantaneous desired position of the end-effector is predicted by inverse kinematics and the robotic manipulator is moved incrementally from its current configuration subject to the joint speed limits. This approach effectively eliminates the multiple-solution ambiguity of the inverse kinematics and increases the robustness of the control algorithm. The proposed approach is validated by a hardware-in-the-loop simulation, where the pose and motion of the non-cooperative target are estimated by a real vision system. The simulation results demonstrate the effectiveness and robustness of the proposed estimation approach for the target and the incremental control strategy for the robotic manipulator.
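
    The incremental idea, taking a bounded step from the current configuration toward the instantaneous desired pose instead of solving the full inverse kinematics, can be sketched on a planar 2-link arm with a Jacobian-transpose step and per-joint speed clipping. The link lengths, gain and limits are invented; the paper's manipulator and its photogrammetry/EKF estimator are not reproduced here.

```python
import math

L1 = L2 = 1.0   # link lengths of a hypothetical planar 2-link arm

def fk(q):
    """Forward kinematics: joint angles -> end-effector position."""
    x = L1 * math.cos(q[0]) + L2 * math.cos(q[0] + q[1])
    y = L1 * math.sin(q[0]) + L2 * math.sin(q[0] + q[1])
    return x, y

def step_towards(q, target, gain=0.2, dq_max=0.05):
    """One incremental update dq = gain * J^T e, clipped to joint speed limits."""
    x, y = fk(q)
    ex, ey = target[0] - x, target[1] - y
    s1, c1 = math.sin(q[0]), math.cos(q[0])
    s12, c12 = math.sin(q[0] + q[1]), math.cos(q[0] + q[1])
    J = [[-L1 * s1 - L2 * s12, -L2 * s12],      # Jacobian of fk
         [ L1 * c1 + L2 * c12,  L2 * c12]]
    dq = [gain * (J[0][j] * ex + J[1][j] * ey) for j in range(2)]
    dq = [max(-dq_max, min(dq_max, d)) for d in dq]  # joint speed limit
    return [q[i] + dq[i] for i in range(2)]

q = [0.3, 0.6]                     # current configuration
target = (1.2, 0.8)                # instantaneous desired position
for _ in range(500):               # incremental servo loop
    q = step_towards(q, target)
```

    Because each cycle takes a single bounded step from the current configuration, the multiple-solution ambiguity of a closed-form inverse kinematics never arises.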

  2. Robust tracking of dexterous continuum robots: Fusing FBG shape sensing and stereo vision.

    Science.gov (United States)

    Rumei Zhang; Hao Liu; Jianda Han

    2017-07-01

    Robust and efficient tracking of continuum robots is important for improving patient safety during space-confined minimally invasive surgery; however, it has proven a particularly challenging task for researchers. In this paper, we present a novel tracking scheme that fuses fiber Bragg grating (FBG) shape sensing and stereo vision to estimate the position of continuum robots. Purely visual tracking easily suffers from a lack of robustness and can fail, while the FBG shape sensor can only reconstruct the local shape, with cumulative integration error. The proposed fusion is anticipated to compensate for their shortcomings and improve the tracking accuracy. To verify its effectiveness, the robot's centerline is recognized by morphology operations and reconstructed by a stereo matching algorithm. The shape obtained by the FBG sensor is transformed into the distal tip position with respect to the camera coordinate system through previously calibrated registration matrices. An experimental platform was set up and repeated tracking experiments were carried out. The accuracy, estimated by averaging the absolute positioning errors between shape sensing and stereo vision, is 0.67±0.65 mm, 0.41±0.25 mm, 0.72±0.43 mm for x, y and z, respectively. Results indicate that the proposed fusion is feasible and can be used for closed-loop control of continuum robots.
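
    The fusion step, mapping the FBG-derived tip position into the camera frame through a calibrated registration matrix and combining it with the stereo estimate, can be sketched as follows. The matrix, positions and variances are illustrative numbers, and a simple per-axis variance weighting stands in for the paper's fusion scheme.

```python
def transform(T, p):
    """Apply a 4x4 homogeneous registration matrix to a 3-D point."""
    ph = p + [1.0]
    return [sum(T[i][j] * ph[j] for j in range(4)) for i in range(3)]

def fuse(p_a, var_a, p_b, var_b):
    """Variance-weighted average of two position estimates, per axis."""
    w_a, w_b = 1.0 / var_a, 1.0 / var_b
    return [(w_a * a + w_b * b) / (w_a + w_b) for a, b in zip(p_a, p_b)]

# registration from the FBG sensor frame to the camera frame (made up)
T_cam_fbg = [[1.0, 0.0, 0.0,  5.0],
             [0.0, 1.0, 0.0, -2.0],
             [0.0, 0.0, 1.0, 10.0],
             [0.0, 0.0, 0.0,  1.0]]
p_fbg_cam = transform(T_cam_fbg, [1.0, 2.0, 3.0])   # FBG tip, camera frame
p_stereo = [6.2, 0.1, 12.8]                          # stereo tip estimate
p_fused = fuse(p_fbg_cam, 0.25, p_stereo, 1.0)       # trust FBG 4x more
```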

  3. Grasping in Robotics

    CERN Document Server

    2013-01-01

    Grasping in Robotics contains original contributions in the field of grasping in robotics with a broad multidisciplinary approach. This gives the possibility of addressing all the major issues related to robotized grasping, including milestones in grasping through the centuries, mechanical design issues, control issues, modelling achievements and issues, formulations and software for simulation purposes, sensors and vision integration, applications in industrial field and non-conventional applications (including service robotics and agriculture).   The contributors to this book are experts in their own diverse and wide ranging fields. This multidisciplinary approach can help make Grasping in Robotics of interest to a very wide audience. In particular, it can be a useful reference book for researchers, students and users in the wide field of grasping in robotics from many different disciplines including mechanical design, hardware design, control design, user interfaces, modelling, simulation, sensors and hum...

  4. A State-of-the-Art Review on Mapping and Localization of Mobile Robots Using Omnidirectional Vision Sensors

    Directory of Open Access Journals (Sweden)

    L. Payá

    2017-01-01

    Full Text Available Nowadays, the field of mobile robotics is experiencing a quick evolution, and a variety of autonomous vehicles is available to solve different tasks. The advances in computer vision have led to a substantial increase in the use of cameras as the main sensors in mobile robots. They can be used as the only source of information or in combination with other sensors such as odometry or laser. Among vision systems, omnidirectional sensors stand out due to the richness of the information they provide the robot with, and an increasing number of works about them have been published over the last few years, leading to a wide variety of frameworks. In this review, some of the most important works are analysed. One of the key problems the scientific community is currently addressing is the improvement of the autonomy of mobile robots. To this end, building robust models of the environment, localization, and navigation are three important abilities that any mobile robot must have. Taking this into account, the review concentrates on these problems; how researchers have addressed them by means of omnidirectional vision; the main frameworks they have proposed; and how they have evolved in recent years.

  5. A cost-effective intelligent robotic system with dual-arm dexterous coordination and real-time vision

    Science.gov (United States)

    Marzwell, Neville I.; Chen, Alexander Y. K.

    1991-01-01

    Dexterous coordination of manipulators based on the use of redundant degrees of freedom, multiple sensors, and built-in robot intelligence represents a critical breakthrough in the development of advanced manufacturing technology. A cost-effective approach for achieving this new generation of robotics has been made possible by the unprecedented growth of the latest microcomputer and network systems. The resulting flexible automation offers the opportunity to improve product quality, increase the reliability of the manufacturing process, and augment the production procedures for optimizing the utilization of the robotic system. Moreover, the Advanced Robotic System (ARS) is modular in design and can be upgraded by closely following technological advancements as they occur in various fields. This approach to manufacturing automation enhances the financial justification and ensures the long-term profitability and most efficient implementation of robotic technology. The new system also addresses a broad spectrum of manufacturing demand and has the potential to address both complex jobs as well as highly labor-intensive tasks. The ARS prototype employs the decomposed optimization technique in spatial planning. This technique is implemented in the framework of the sensor-actuator network to establish the general-purpose geometric reasoning system. The development computer system is a multiple-microcomputer network system, which provides the architecture for executing the modular network computing algorithms. The knowledge-based approach used in both the robot vision subsystem and the manipulation control subsystem gives the system real-time, vision-based image processing capability. The vision-based task environment analysis capability and the responsive motion capability are under the command of the local intelligence centers. An array of ultrasonic, proximity, and optoelectronic sensors is used for path planning.
The ARS currently has 18 degrees of freedom made up by two

  6. Augmented models for improving vision control of a mobile robot

    DEFF Research Database (Denmark)

    Andersen, Gert Lysgaard; Christensen, Anders C.; Ravn, Ole

    1994-01-01

    This paper describes the modelling phases for the design of a path tracking vision controller for a three-wheeled mobile robot. It is shown that, by including the dynamic characteristics of vision and encoder sensors and implementing the total system in one multivariable control loop, one can obtain good performance even when using standard low-cost equipment and a comparatively low sampling rate. The plant model is a compound of kinematic, dynamic and sensor submodels, all integrated into a discrete state space representation. An intelligent strategy is applied for the vision sensor...

  7. Design of an Embedded Multi-Camera Vision System—A Case Study in Mobile Robotics

    Directory of Open Access Journals (Sweden)

    Valter Costa

    2018-02-01

    Full Text Available The purpose of this work is to explore the design principles for a Real-Time Robotic Multi-Camera Vision System, in a case study involving a real-world competition of autonomous driving. Design practices from the vision and real-time research areas are applied to a Real-Time Robotic Vision application, thus exemplifying good algorithm design practices, the advantages of employing the “zero copy one pass” methodology and the associated trade-offs leading to the selection of a controller platform. The vision tasks under study are: (i) recognition of a “flat” signal; and (ii) track following, requiring 3D reconstruction. This research firstly improves the algorithms used for the mentioned tasks and finally selects the controller hardware. Optimization of the shown algorithms yielded improvements from 1.5 times to 190 times, always with acceptable quality for the target application, with algorithm optimization being more important on lower computing power platforms. Results also include 3-cm and five-degree accuracy for lane tracking and 100% accuracy for signalling panel recognition, which are better than most results found in the literature for this application. Clear results comparing different PC platforms for the mentioned Robotic Vision tasks are also shown, demonstrating trade-offs between accuracy and computing power, leading to the proper choice of control platform. The presented design principles are portable to other applications, where Real-Time constraints exist.

  8. Endoscopic vision-based tracking of multiple surgical instruments during robot-assisted surgery.

    Science.gov (United States)

    Ryu, Jiwon; Choi, Jaesoon; Kim, Hee Chan

    2013-01-01

    Robot-assisted minimally invasive surgery is effective for operations in limited space. Enhancing safety based on automatic tracking of surgical instrument position to prevent inadvertent harmful events such as tissue perforation or instrument collisions could be a meaningful augmentation to current robotic surgical systems. A vision-based instrument tracking scheme as a core algorithm to implement such functions was developed in this study. An automatic tracking scheme is proposed as a chain of computer vision techniques, including classification of metallic properties using k-means clustering and instrument movement tracking using similarity measures, Euclidean distance calculations, and a Kalman filter algorithm. The implemented system showed satisfactory performance in tests using actual robot-assisted surgery videos. Trajectory comparisons of automatically detected data and ground truth data obtained by manually locating the center of mass of each instrument were used to quantitatively validate the system. Instruments and collisions could be well tracked through the proposed methods. The developed collision warning system could provide valuable information to clinicians for safer procedures. © 2012, Copyright the Authors. Artificial Organs © 2012, International Center for Artificial Organs and Transplantation and Wiley Periodicals, Inc.
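
    The movement-tracking stage named above relies on a Kalman filter; a minimal constant-velocity variant for one image coordinate of an instrument center looks as follows. The clinical system's state model and tuning are not reproduced; the process and measurement noise values q and r here are arbitrary.

```python
def kalman_track(measurements, q=0.01, r=1.0):
    """Constant-velocity Kalman filter over scalar position measurements."""
    x, v = measurements[0], 0.0        # state: position and velocity
    P = [[1.0, 0.0], [0.0, 1.0]]       # state covariance
    out = []
    for z in measurements[1:]:
        # predict with the constant-velocity model (dt = 1)
        x, v = x + v, v
        P = [[P[0][0] + 2 * P[0][1] + P[1][1] + q, P[0][1] + P[1][1]],
             [P[1][0] + P[1][1], P[1][1] + q]]
        # update with the measured position z (H = [1, 0])
        S = P[0][0] + r
        K = [P[0][0] / S, P[1][0] / S]
        inn = z - x
        x, v = x + K[0] * inn, v + K[1] * inn
        P = [[(1 - K[0]) * P[0][0], (1 - K[0]) * P[0][1]],
             [P[1][0] - K[1] * P[0][0], P[1][1] - K[1] * P[0][1]]]
        out.append(x)
    return out

# simulated instrument center moving at 2 px/frame
track = kalman_track([2.0 * t for t in range(30)])
```

    Running one such filter per instrument and per image axis smooths the detected centers and predicts positions between detections, which is what makes proximity-based collision warnings reliable.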

  9. Mutual Visibility by Robots with Persistent Memory

    OpenAIRE

    Bhagat, Subhash; Mukhopadhyaya, Krishnendu

    2017-01-01

    This paper addresses the mutual visibility problem for a set of semi-synchronous, opaque robots occupying distinct positions in the Euclidean plane. Since robots are opaque, if three robots lie on a line, the middle robot obstructs the vision of the other two robots. The mutual visibility problem asks the robots to coordinate their movements to form a configuration, within finite time and without collision, in which no three robots are collinear. Robots are endowed with constant bits of pe...

  10. Humanlike Robots - The Upcoming Revolution in Robotics

    Science.gov (United States)

    Bar-Cohen, Yoseph

    2009-01-01

    Humans have always sought to imitate the human appearance, functions and intelligence. Human-like robots, which for many years have been science fiction, are increasingly becoming an engineering reality resulting from the many advances in biologically inspired technologies. These biomimetic technologies include artificial intelligence, artificial vision and hearing, as well as artificial muscles, also known as electroactive polymers (EAP). Robots that do not have human shape, such as the Roomba vacuum cleaner and the robotic lawnmower, are already finding growing use in homes worldwide. As opposed to other human-made machines and devices, this technology also raises various questions and concerns that need to be addressed as it advances. These include the need to prevent accidents, deliberate harm, or use in crime. In this paper, the state of the art of the ultimate goal of biomimetics, the development of humanlike robots, is reviewed together with its potentials and challenges.

  11. Humanlike robots: the upcoming revolution in robotics

    Science.gov (United States)

    Bar-Cohen, Yoseph

    2009-08-01

    Humans have always sought to imitate the human appearance, functions and intelligence. Human-like robots, which for many years have been science fiction, are increasingly becoming an engineering reality resulting from the many advances in biologically inspired technologies. These biomimetic technologies include artificial intelligence, artificial vision and hearing, as well as artificial muscles, also known as electroactive polymers (EAP). Robots that do not have human shape, such as the Roomba vacuum cleaner and the robotic lawnmower, are already finding growing use in homes worldwide. As opposed to other human-made machines and devices, this technology also raises various questions and concerns that need to be addressed as it advances. These include the need to prevent accidents, deliberate harm, or use in crime. In this paper, the state of the art of the ultimate goal of biomimetics, the development of humanlike robots, is reviewed together with its potentials and challenges.

  12. Method of mobile robot indoor navigation by artificial landmarks with use of computer vision

    Science.gov (United States)

    Glibin, E. S.; Shevtsov, A. A.; Enik, O. A.

    2018-05-01

    The article describes an algorithm for mobile robot indoor navigation based on the use of visual odometry. The results of an experiment identifying errors in the calculated distance traveled caused by wheel slip are presented. It is shown that the use of computer vision allows one to correct erroneous coordinates of the robot with the help of artificial landmarks. The control system utilizing the proposed method has been realized on the basis of an Arduino Mega 2560 controller and a single-board computer Raspberry Pi 3. The results of the experiment on mobile robot navigation with the use of this control system are presented.
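
    The correction idea, overriding drifting odometry whenever an artificial landmark with a known position is recognized, can be shown with a 1-D toy. The slip factor, landmark positions and step size below are hypothetical, not the experiment's values.

```python
LANDMARKS = [5.0, 10.0, 15.0]   # known positions of artificial landmarks
SLIP = 0.9                      # robot covers 0.9 m per commanded 1.0 m

def navigate(n_steps):
    """Return (true position, landmark-corrected odometry estimate)."""
    true_x, odo_x = 0.0, 0.0
    for _ in range(n_steps):
        prev = true_x
        true_x += SLIP          # actual motion, reduced by wheel slip
        odo_x += 1.0            # what the encoders report
        for lm in LANDMARKS:
            if prev < lm <= true_x:   # camera recognizes a passed landmark
                odo_x = lm            # snap the estimate to its known position
    return true_x, odo_x

true_x, odo_x = navigate(20)    # raw odometry alone would report 20.0
```

    Each landmark sighting resets the accumulated slip error, so the estimate stays bounded no matter how long the robot drives.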

  13. Robotic anesthesia - A vision for the future of anesthesia

    OpenAIRE

    Hemmerling, Thomas M.; Taddei, Riccardo; Wehbe, Mohamad; Morse, Joshua; Cyr, Shantale; Zaouter, Cedrick

    2011-01-01

    This narrative review describes a rationale for robotic anesthesia. It offers a first classification of robotic anesthesia by separating it into pharmacological robots and robots for aiding or replacing manual gestures. Developments in closed-loop anesthesia are outlined. First attempts to perform manual tasks using robots are described. A critical analysis of the delayed development and introduction of robots in anesthesia is delivered.

  14. Special Issue on Intelligent Robots

    Directory of Open Access Journals (Sweden)

    Genci Capi

    2013-08-01

    Full Text Available The research on intelligent robots will produce robots that are able to operate in everyday life environments, to adapt their program according to environment changes, and to cooperate with other team members and humans. Operating in human environments, robots need to process, in real time, a large amount of sensory data—such as vision, laser, microphone—in order to determine the best action. Intelligent algorithms have been successfully applied to link complex sensory data to robot action. This editorial briefly summarizes recent findings in the field of intelligent robots as described in the articles published in this special issue.

  15. Robot soccer anywhere: achieving persistent autonomous navigation, mapping, and object vision tracking in dynamic environments

    Science.gov (United States)

    Dragone, Mauro; O'Donoghue, Ruadhan; Leonard, John J.; O'Hare, Gregory; Duffy, Brian; Patrikalakis, Andrew; Leederkerken, Jacques

    2005-06-01

    The paper describes an ongoing effort to enable autonomous mobile robots to play soccer in unstructured, everyday environments. Unlike conventional robot soccer competitions that are usually held on purpose-built robot soccer "fields", in our work we seek to develop the capability for robots to demonstrate aspects of soccer-playing in more diverse environments, such as schools, hospitals, or shopping malls, with static obstacles (furniture) and dynamic natural obstacles (people). This problem of "Soccer Anywhere" presents numerous research challenges including: (1) Simultaneous Localization and Mapping (SLAM) in dynamic, unstructured environments, (2) software control architectures for decentralized, distributed control of mobile agents, (3) integration of vision-based object tracking with dynamic control, and (4) social interaction with human participants. In addition to the intrinsic research merit of these topics, we believe that this capability would prove useful for outreach activities, in demonstrating robotics technology to primary and secondary school students, to motivate them to pursue careers in science and engineering.

  16. Visual guidance of a pig evisceration robot using neural networks

    DEFF Research Database (Denmark)

    Christensen, S.S.; Andersen, A.W.; Jørgensen, T.M.

    1996-01-01

    The application of a RAM-based neural network to robot vision is demonstrated for the guidance of a pig evisceration robot. Tests of the combined robot-vision system have been performed at an abattoir. The vision system locates a set of feature points on a pig carcass and transmits the 3D coordin...

  17. The development of advanced robotic technology -The development of advanced robotics for the nuclear industry-

    International Nuclear Information System (INIS)

    Lee, Jong Min; Lee, Yong Bum; Kim, Woong Ki; Park, Soon Yong; Kim, Seung Ho; Kim, Chang Hoi; Hwang, Suk Yeoung; Kim, Byung Soo; Lee, Young Kwang

    1994-07-01

    In this year (the second year of this project), research and development have been carried out to establish the essential key technologies for robot systems in the nuclear industry. In the area of robot vision, a stereo image acquisition camera module and a stereo image display have been developed in order to construct the stereo vision system necessary for tele-operation. Stereo matching and storing programs have been developed to analyse stereo images. According to the results of a tele-operation experiment, operation efficiency was enhanced by about 20% by using the stereo vision system. For object recognition, a tele-operated robot system has been constructed to evaluate the performance of the stereo vision system and to develop the vision algorithm to automate nozzle dam operation. A nuclear fuel rod character recognition system has been developed using a neural network; performance evaluation of the recognition system showed a 99% recognition rate. In the area of sensing and intelligent control, temperature distribution has been measured using analysis of the thermal image histogram, an inspection algorithm has been developed to determine whether the state is normal or abnormal, and a fuzzy controller has been developed to control a compact mobile robot designed to move along a block-type path. (Author)

  18. Inventing Japan's 'robotics culture': the repeated assembly of science, technology, and culture in social robotics.

    Science.gov (United States)

    Sabanović, Selma

    2014-06-01

    Using interviews, participant observation, and published documents, this article analyzes the co-construction of robotics and culture in Japan through the technical discourse and practices of robotics researchers. Three cases from current robotics research--the seal-like robot PARO, the Humanoid Robotics Project HRP-2 humanoid, and 'kansei robotics' - show the different ways in which scientists invoke culture to provide epistemological grounding and possibilities for social acceptance of their work. These examples show how the production and consumption of social robotic technologies are associated with traditional crafts and values, how roboticists negotiate among social, technical, and cultural constraints while designing robots, and how humans and robots are constructed as cultural subjects in social robotics discourse. The conceptual focus is on the repeated assembly of cultural models of social behavior, organization, cognition, and technology through roboticists' narratives about the development of advanced robotic technologies. This article provides a picture of robotics as the dynamic construction of technology and culture and concludes with a discussion of the limits and possibilities of this vision in promoting a culturally situated understanding of technology and a multicultural view of science.

  19. Development of a Stereo Vision Measurement System for a 3D Three-Axial Pneumatic Parallel Mechanism Robot Arm

    Directory of Open Access Journals (Sweden)

    Chien-Lun Hou

    2011-02-01

    Full Text Available In this paper, a stereo vision 3D position measurement system for a three-axial pneumatic parallel mechanism robot arm is presented. The stereo vision 3D position measurement system aims to measure the 3D trajectories of the end-effector of the robot arm. To track the end-effector of the robot arm, the circle detection algorithm is used to detect the desired target and the SAD algorithm is used to track the moving target and to search the corresponding target location along the conjugate epipolar line in the stereo pair. After camera calibration, both intrinsic and extrinsic parameters of the stereo rig can be obtained, so images can be rectified according to the camera parameters. Thus, through the epipolar rectification, the stereo matching process is reduced to a horizontal search along the conjugate epipolar line. Finally, 3D trajectories of the end-effector are computed by stereo triangulation. The experimental results show that the stereo vision 3D position measurement system proposed in this paper can successfully track and measure the fifth-order polynomial trajectory and sinusoidal trajectory of the end-effector of the three-axial pneumatic parallel mechanism robot arm.
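After epipolar rectification, the stereo triangulation step reduces to the standard disparity formulas. The sketch below illustrates this final step only; the function name and symbols are assumptions, not the paper's code:

```python
def triangulate_rectified(uL, uR, v, f, B, cx, cy):
    """Recover a 3D point from a rectified stereo pair.

    uL, uR: horizontal pixel coordinates of the matched target in the
            left/right images (same row v after epipolar rectification).
    f: focal length in pixels; B: baseline in metres;
    (cx, cy): principal point in pixels.
    Returns (X, Y, Z) in the left camera frame, in metres.
    """
    d = uL - uR                  # disparity in pixels
    if d <= 0:
        raise ValueError("non-positive disparity: point at or behind infinity")
    Z = f * B / d                # depth along the optical axis
    X = (uL - cx) * Z / f
    Y = (v - cy) * Z / f
    return (X, Y, Z)
```

With a 0.1 m baseline and f = 500 px, a 50 px disparity corresponds to a depth of 1 m.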

  20. Development of a teaching system for an industrial robot using stereo vision

    Science.gov (United States)

    Ikezawa, Kazuya; Konishi, Yasuo; Ishigaki, Hiroyuki

    1997-12-01

    The teaching-and-playback method is the main teaching technique for industrial robots; however, it takes considerable time and effort. In this study, a new teaching algorithm using stereo vision, based on human demonstrations in front of two cameras, is proposed. In the proposed algorithm, a robot is controlled repetitively according to angles determined by fuzzy set theory until it reaches an instructed teaching point, which is relayed through the cameras by an operator. The angles are recorded and used later in playback. The major advantage of this algorithm is that no calibration is needed, because fuzzy set theory, which can express control commands to the robot qualitatively, is used instead of conventional kinematic equations. Thus, a simple and easy teaching operation is realized. Simulations and experiments have been performed on the proposed teaching system, and test data have confirmed the usefulness of the design.
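The fuzzy mapping from image-space error to a joint-angle command can be sketched with triangular membership functions and centroid defuzzification. This is an illustrative sketch only; the three-rule base, the normalized error range, and the ±5° outputs are assumptions, not the authors' design:

```python
def tri(x, a, b, c):
    """Triangular membership function: 0 outside (a, c), peak 1 at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def fuzzy_step(err):
    """Map a normalized image-space error in [-1, 1] to a joint-angle
    increment (degrees) using three rules: Negative / Zero / Positive."""
    rules = [
        (tri(err, -2.0, -1.0, 0.0), -5.0),  # error negative -> step -5 deg
        (tri(err, -1.0,  0.0, 1.0),  0.0),  # error near zero -> hold
        (tri(err,  0.0,  1.0, 2.0), +5.0),  # error positive -> step +5 deg
    ]
    num = sum(w * out for w, out in rules)
    den = sum(w for w, _ in rules)
    return num / den if den else 0.0       # centroid defuzzification
```

A half-scale error blends the Zero and Positive rules, yielding a half-size step of 2.5 degrees.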

  1. Autonomous Kinematic Calibration of the Robot Manipulator with a Linear Laser-Vision Sensor

    Science.gov (United States)

    Kang, Hee-Jun; Jeong, Jeong-Woo; Shin, Sung-Weon; Suh, Young-Soo; Ro, Young-Schick

    This paper presents a new autonomous kinematic calibration technique using a laser-vision sensor called the "Perceptron TriCam Contour". Because the sensor measures by capturing the image of a projected laser line on the surface of an object, we set up a long, straight line of very fine string inside the robot workspace, and then let the sensor mounted on the robot measure the intersection point of the string and the projected laser line. The data collected by changing the robot configuration and measuring the intersection points are constrained to lie on a single straight line, so that the closed-loop calibration method can be applied. The resulting calibration method is simple and accurate, and is also suitable for on-site calibration in an industrial environment. The method was implemented on a Hyundai VORG-35 robot to demonstrate its effectiveness.

  2. Towards Light‐guided Micro‐robotics

    DEFF Research Database (Denmark)

    Glückstad, Jesper

    Robotics in the macro‐scale typically uses light for carrying information in machine vision for monitoring and feedback in intelligent robotic guidance systems. With light's miniscule momentum, shrinking robots down to the micro‐scale regime creates opportunities for exploiting optical forces and torques in micro‐robotic actuation and control. Indeed, the literature on optical trapping and micro‐manipulation attests to the possibilities for optical micro‐robotics. Advancing light‐driven micro‐robotics requires the optimization of optical force and optical torque that, in turn, requires ...‐dimensional microstructures. Furthermore, we exploit the light‐shaping capabilities available in the workstation to demonstrate a new strategy for controlling microstructures that goes beyond the typical refractive light deflections that are exploited in conventional optical trapping and manipulation, e.g. of micro...

  3. Robotics and nuclear power. Report by the Technology Transfer Robotics Task Team

    International Nuclear Information System (INIS)

    1985-06-01

    A task team was formed at the request of the Department of Energy to evaluate and assess technology development needed for advanced robotics in the nuclear industry. The mission of these technologies is to provide the nuclear industry with the support for the application of advanced robotics to reduce nuclear power generating costs and enhance the safety of the personnel in the industry. The investigation included robotic and teleoperated systems. A robotic system is defined as a reprogrammable, multifunctional manipulator designed to move materials, parts, tools, or specialized devices through variable programmed motions for the performance of a variety of tasks. A teleoperated system includes an operator who remotely controls the system by direct viewing or through a vision system

  4. Vision-based real-time position control of a semi-automated system for robot-assisted joint fracture surgery.

    Science.gov (United States)

    Dagnino, Giulio; Georgilas, Ioannis; Tarassoli, Payam; Atkins, Roger; Dogramadzi, Sanja

    2016-03-01

    Joint fracture surgery quality can be improved by a robotic system with high-accuracy and high-repeatability fracture fragment manipulation. A new real-time vision-based system for fragment manipulation during robot-assisted fracture surgery was developed and tested. The control strategy merges fast open-loop control with vision-based control. This two-phase process is designed to eliminate open-loop positioning errors by closing the control loop using visual feedback provided by an optical tracking system. Control system accuracy was evaluated in robot positioning trials, and fracture reduction accuracy was tested in trials on an ex vivo porcine model. The system achieved high fracture reduction reliability, with a reduction accuracy of 0.09 mm (translations) and of [Formula: see text] (rotations), maximum observed errors on the order of 0.12 mm (translations) and of [Formula: see text] (rotations), and a reduction repeatability of 0.02 mm and [Formula: see text]. The proposed vision-based system was shown to be effective and suitable for real joint fracture surgical procedures, potentially improving their quality.
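The two-phase strategy (one fast open-loop move, then small vision-guided corrections) can be sketched as a simple loop. Everything here, the function names, the proportional gain, and the tolerance, is an illustrative assumption rather than the authors' controller:

```python
def two_phase_position(target, move, sense, tol=0.05, gain=0.5, max_iter=100):
    """Two-phase positioning: a fast open-loop move toward the target,
    then small corrections using tracker feedback until within tolerance.

    move(delta): commands a relative displacement (open loop, imperfect).
    sense():     returns the current position measured by the tracker.
    """
    # Phase 1: open-loop jump; fast, but leaves a residual error.
    move([t - s for t, s in zip(target, sense())])
    # Phase 2: close the loop on visual feedback.
    for _ in range(max_iter):
        pos = sense()
        err = [t - p for t, p in zip(target, pos)]
        if max(abs(e) for e in err) < tol:
            return pos
        move([gain * e for e in err])      # small proportional correction
    return sense()
```

Against a simulated actuator with a 10% systematic under-travel, the residual error shrinks geometrically in phase 2 and the loop terminates within the tolerance.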

  5. State of the art of robotic surgery related to vision: brain and eye applications of newly available devices

    Science.gov (United States)

    Nuzzi, Raffaele

    2018-01-01

    Background Robot-assisted surgery has revolutionized many surgical subspecialties, mainly where procedures have to be performed in confined, difficult-to-visualize spaces. Despite advances in general surgery and neurosurgery, in vivo application of robotics to ocular surgery is still in its infancy, owing to the particular complexities of microsurgery. The use of robotic assistance and feedback guidance on surgical maneuvers could improve the technical performance of expert surgeons during the initial phase of the learning curve. Evidence acquisition We analyzed the advantages and disadvantages of surgical robots, as well as the present applications and future outlook of robotics in neurosurgery in brain areas related to vision and in ophthalmology. Discussion Limitations to robotic assistance remain that need to be overcome before it can be more widely applied in ocular surgery. Conclusion There is heightened interest in studies documenting computerized systems that filter out hand tremor and optimize speed of movement, control of force, and direction and range of movement. Further research is still needed to validate robot-assisted procedures. PMID:29440943

  6. Vision-Based Recognition of Activities by a Humanoid Robot

    Directory of Open Access Journals (Sweden)

    Mounîm A. El-Yacoubi

    2015-12-01

    Full Text Available We present an autonomous assistive robotic system for human activity recognition from video sequences. Due to the large variability inherent to video capture from a non-fixed robot (as opposed to a fixed camera), as well as the robot's limited computing resources, implementation has been guided by robustness to this variability and by memory and computing speed efficiency. To accommodate motion speed variability across users, we encode motion using dense interest point trajectories. Our recognition model harnesses the dense interest point bag-of-words representation through an intersection kernel-based SVM that better accommodates the large intra-class variability stemming from a robot operating in different locations and conditions. To contextually assess the engine as implemented in the robot, we compare it with the most recent approaches of human action recognition performed on public datasets (non-robot-based), including a novel approach of our own that is based on a two-layer SVM-hidden conditional random field sequential recognition model. The latter's performance is among the best within the recent state of the art. We show that our robot-based recognition engine, while less accurate than the sequential model, nonetheless shows good performances, especially given the adverse test conditions of the robot, relative to those of a fixed camera.

  7. Deviation from Trajectory Detection in Vision based Robotic Navigation using SURF and Subsequent Restoration by Dynamic Auto Correction Algorithm

    Directory of Open Access Journals (Sweden)

    Ray Debraj

    2015-01-01

    Full Text Available Speeded Up Robust Features (SURF) is used to position a robot with respect to its environment and aid vision-based robotic navigation. During navigation, irregularities in the terrain, especially in an outdoor environment, may cause the robot to deviate from its track. Another cause of deviation can be unequal speeds of the left and right robot wheels. Hence it is essential to detect such deviations and perform corrective operations to bring the robot back on track. In this paper we propose a novel algorithm that uses image matching with SURF to detect deviation of a robot from its trajectory, with subsequent restoration by corrective operations. This algorithm is executed in parallel with the positioning and navigation algorithms by distributing tasks among different CPU cores using the Open Multi-Processing (OpenMP) API.
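The deviation check can be sketched from the output of any feature matcher (SURF in the paper; the matcher itself is omitted here). The pixel-to-centimetre scale, the threshold, and the sign convention below are illustrative assumptions, not values from the paper:

```python
from statistics import median

def detect_deviation(matches, px_per_cm=12.0, threshold_cm=3.0):
    """Flag deviation from the taught trajectory using feature matches.

    matches: ((x_ref, y_ref), (x_cur, y_cur)) keypoint pairs between the
    stored reference frame and the current frame (e.g. from a SURF
    matcher). The median horizontal shift is robust to a minority of bad
    matches. px_per_cm and the sign convention (scene shifts right in the
    image when the robot drifts left) are assumptions for illustration.
    Returns (deviation_cm, correction), correction being None when the
    robot is still on track.
    """
    shift_px = median(cur[0] - ref[0] for ref, cur in matches)
    dev_cm = shift_px / px_per_cm
    if abs(dev_cm) <= threshold_cm:
        return dev_cm, None
    return dev_cm, ("steer right" if dev_cm > 0 else "steer left")
```

Using the median rather than the mean means a single mismatched keypoint cannot trigger a false corrective turn.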

  8. Self-localization for an autonomous mobile robot based on an omni-directional vision system

    Science.gov (United States)

    Chiang, Shu-Yin; Lin, Kuang-Yu; Chia, Tsorng-Lin

    2013-12-01

    In this study, we designed an autonomous mobile robot based on the rules of the Federation of International Robot-soccer Association (FIRA) RoboSot category, integrating the techniques of computer vision, real-time image processing, dynamic target tracking, wireless communication, self-localization, motion control, path planning, and control strategy to achieve the contest goal. The self-localization scheme of the mobile robot is based on algorithms applied to the images from its omni-directional vision system. In previous works, we used the image colors of the field goals as reference points, combining either dual-circle or trilateration positioning of the reference points to achieve self-localization of the autonomous mobile robot. However, because the image of the game field is easily affected by ambient light, positioning systems based exclusively on color-model algorithms cause errors. To reduce environmental effects and achieve self-localization of the robot, the proposed algorithm assesses the corners of field lines using the omni-directional vision system. Particularly in the mid-size league of the RoboCup soccer competition, self-localization algorithms based on extracting white lines from the soccer field have become increasingly popular. Moreover, white lines are less influenced by light than the color model of the goals. Therefore, we propose an algorithm that transforms the omni-directional image into an unwrapped image, enhancing feature extraction. The process is as follows: first, radial scan-lines were used to process the omni-directional images, reducing the computational load and improving system efficiency. The lines were arranged radially around the center of the omni-directional camera image, resulting in a shorter computational time compared with the traditional Cartesian coordinate system. 
However, the omni-directional image is a distorted image, which makes it difficult to recognize the
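The radial scan-line unwrapping described above amounts to a polar-to-Cartesian lookup: each column of the output strip corresponds to a bearing around the mirror center, each row to a radius. A minimal sketch, with all parameter names assumed (a real implementation would then sample the omni image at these coordinates with interpolation):

```python
import math

def unwrap_map(width, height, cx, cy, r_min, r_max):
    """Build the pixel mapping that unwraps an omni-directional image into
    a width x height panoramic strip: column -> bearing, row -> radius.
    Returns, for each output pixel, the source (x, y) in the omni image.
    """
    mapping = []
    for row in range(height):
        # Rows sweep from the inner to the outer usable mirror radius.
        r = r_min + (r_max - r_min) * row / max(height - 1, 1)
        line = []
        for col in range(width):
            a = 2.0 * math.pi * col / width   # radial scan-line angle
            line.append((cx + r * math.cos(a), cy + r * math.sin(a)))
        mapping.append(line)
    return mapping
```

Because the map depends only on the camera geometry, it can be precomputed once, which is where the computational saving over per-frame Cartesian processing comes from.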

  9. Creating and maintaining chemical artificial life by robotic symbiosis

    DEFF Research Database (Denmark)

    Hanczyc, Martin M.; Parrilla, Juan M.; Nicholson, Arwen

    2015-01-01

    We present a robotic platform based on the open source RepRap 3D printer that can print and maintain chemical artificial life in the form of a dynamic, chemical droplet. The robot uses computer vision, a self-organizing map, and a learning program to automatically categorize the behavior of the d... ...confluence of chemical, artificial intelligence, and robotic approaches to artificial life.

  10. Creating and Maintaining Chemical Artificial Life by Robotic Symbiosis

    DEFF Research Database (Denmark)

    Hanczyc, Martin; Parrilla, Juan M.; Nicholson, Arwen

    2015-01-01

    We present a robotic platform based on the open source RepRap 3D printer that can print and maintain chemical artificial life in the form of a dynamic, chemical droplet. The robot uses computer vision, a self-organizing map, and a learning program to automatically categorize the behavior of the d... ...confluence of chemical, artificial intelligence, and robotic approaches to artificial life.

  11. Robotic fabrication in architecture, art, and design

    CERN Document Server

    Braumann, Johannes

    2013-01-01

    Architects, artists, and designers have been fascinated by robots for many decades, from Villemard’s utopian vision of an architect building a house with robotic labor in 1910, to the design of buildings that are robots themselves, such as Archigram’s Walking City. Today, they are again approaching the topic of robotic fabrication but this time employing a different strategy: instead of utopian proposals like Archigram’s or the highly specialized robots that were used by Japan’s construction industry in the 1990s, the current focus of architectural robotics is on industrial robots. These robotic arms have six degrees of freedom and are widely used in industry, especially for automotive production lines. What makes robotic arms so interesting for the creative industry is their multi-functionality: instead of having to develop specialized machines, a multifunctional robot arm can be equipped with a wide range of end-effectors, similar to a human hand using various tools. Therefore, architectural researc...

  12. A focused bibliography on robotics

    Science.gov (United States)

    Mergler, H. W.

    1983-08-01

    The present bibliography focuses on eight robotics-related topics believed by the author to be of special interest to researchers in the field of industrial electronics: robots, sensors, kinematics, dynamics, control systems, actuators, vision, economics, and robot applications. This literature search was conducted through the 1970-present COMPENDEX data base, which provides world-wide coverage of nearly 3500 journals, conference proceedings and reports, and the 1969-1981 INSPEC data base, which is the largest for the English language in the fields of physics, electrotechnology, computers, and control.

  13. JPL Robotics Technology Applicable to Agriculture

    Science.gov (United States)

    Udomkesmalee, Suraphol Gabriel; Kyte, L.

    2008-01-01

    This slide presentation describes several robotics technologies that are applicable to agriculture. The technologies discussed are detection of humans to allow safe operation of autonomous vehicles, and vision-guided robotic techniques for shoot selection, separation, and transfer to growth media.

  14. Robot Control for Dynamic Environment Using Vision and Autocalibration

    DEFF Research Database (Denmark)

    Larsen, Thomas Dall; Lildballe, Jacob; Andersen, Nils Axel

    1997-01-01

    To enhance flexibility and extend the area of applications for robotic systems, it is important that the systems are capable of handling uncertainties and respond to (random) human behaviour. A vision system must very often be able to work in a dynamic "noisy" world where the placement of objects can vary within certain restrictions. Furthermore, it would be useful if the system is able to recover automatically after serious changes have been applied, for instance if the camera has been moved. In this paper an implementation of such a system is described. The system is a robot capable of playing...

  15. HYBRID COMMUNICATION NETWORK OF MOBILE ROBOT AND QUAD-COPTER

    Directory of Open Access Journals (Sweden)

    Moustafa M. Kurdi

    2017-01-01

    Full Text Available This paper introduces the design and development of QMRS (Quadcopter Mobile Robotic System). QMRS adds a real-time obstacle avoidance capability to a Belarus-132N mobile robot in cooperation with a Phantom-4 quadcopter. QMRS combines GPS on the mobile robot with vision and image-processing systems on both the robot and the quadcopter, together with an effective search algorithm embedded in the robot. The capacity to navigate accurately is one of the major abilities a mobile robot needs to effectively execute a variety of jobs, including manipulation, docking, and transportation. To achieve the desired navigation accuracy, mobile robots are typically equipped with on-board sensors to observe persistent features in the environment, to estimate their pose from these observations, and to adjust their motion accordingly. The quadcopter takes off from the mobile robot, surveys the terrain, and transmits the processed image to the terrestrial robot. The main objective of this paper is the full coordination between robot and quadcopter through an efficient wireless (WiFi) communication design. In addition, it describes the vision and image-processing method used by both robot and quadcopter, analyzing the path in real time and avoiding obstacles using the computational algorithm embedded in the robot. QMRS increases the efficiency and reliability of the whole system, especially in robot navigation, image processing, and obstacle avoidance, thanks to the coordination among the different parts of the system.

  16. Robotic Sensitive-Site Assessment

    Science.gov (United States)

    2015-09-04

    ...annotations. The SOA component is the backend infrastructure that receives and stores robot-generated and human-input data and serves these data to several... The SOA server provides the backend infrastructure to receive data from robot situational awareness payloads, to archive... ...incapacitation or even death. The proper use of PPE is critical to avoiding exposure. However, wearing PPE limits mobility and field of vision, and

  17. Aerial service robotics: the AIRobots perspective

    NARCIS (Netherlands)

    Marconi, L.; Basile, F.; Caprari, G.; Carloni, Raffaella; Chiacchio, P.; Hurzeler, C.; Lippiello, V.; Naldi, R.; Siciliano, B.; Stramigioli, Stefano; Zwicker, E.

    This paper presents the main vision and research activities of the ongoing European project AIRobots (Innovative Aerial Service Robot for Remote Inspection by Contact, www.airobots.eu). The goal of AIRobots is to develop a new generation of aerial service robots capable of supporting human beings

  18. Laser assisted robotic surgery in cornea transplantation

    Science.gov (United States)

    Rossi, Francesca; Micheletti, Filippo; Magni, Giada; Pini, Roberto; Menabuoni, Luca; Leoni, Fabio; Magnani, Bernardo

    2017-03-01

    Robotic surgery is a reality in several surgical fields, such as gastrointestinal surgery. In ophthalmic surgery, the required high spatial precision has limited the application of robotic systems; although several systems have been designed in the last 10 years, only a few applications in retinal surgery have been tested in animal models. The combination of photonics and robotics can open new frontiers in minimally invasive surgery by improving precision, reducing tremor, amplifying the scale of motion, and automating the procedure. In this manuscript we present preliminary results in developing a vision-guided robotic platform for laser-assisted anterior eye surgery. The robotic console is composed of a robotic arm equipped with an "end effector" designed to deliver laser light to the anterior corneal surface. The main intended application is laser welding of corneal tissue in laser-assisted penetrating keratoplasty and endothelial keratoplasty. The console is equipped with an integrated vision system. The experiment originates from a clear medical demand to improve the efficacy of different surgical procedures: once the prototype is optimized, other surgical areas will be included in its application, such as neurosurgery, urology, and spinal surgery.

  19. 30 Years of Robotic Surgery.

    Science.gov (United States)

    Leal Ghezzi, Tiago; Campos Corleta, Oly

    2016-10-01

    The idea of reproducing himself with the use of a mechanical robot structure has been in man's imagination for the last 3000 years. However, the use of robots in medicine has only 30 years of history. The application of robots in surgery originates from the need of modern man to achieve two goals: telepresence and the performance of repetitive and accurate tasks. The first "robot surgeon" used on a human patient was the PUMA 200 in 1985. In the 1990s, scientists developed the concept of the "master-slave" robot, which consisted of a robot with remote manipulators controlled by a surgeon at a surgical workstation. Despite the lack of force and tactile feedback, technical advantages of robotic surgery, such as 3D vision, a stable and magnified image, EndoWrist instruments, physiologic tremor filtering, and motion scaling, have been considered fundamental to overcoming many of the limitations of laparoscopic surgery. Since the approval of the da Vinci® robot by international agencies, American, European, and Asian surgeons have proved its feasibility and safety for the performance of many different robot-assisted surgeries. Comparative studies of robotic and laparoscopic surgical procedures in general surgery have shown similar results with regard to perioperative, oncological, and functional outcomes. However, higher costs and lack of haptic feedback represent the major limitations of current robotic technology to become the standard technique of minimally invasive surgery worldwide. Therefore, the future of robotic surgery involves cost reduction, development of new platforms and technologies, creation and validation of curricula and virtual simulators, and conduction of randomized clinical trials to determine the best applications of robotics.

  20. Active vision via extremum seeking for robots in unstructured environments : Applications in object recognition and manipulation

    NARCIS (Netherlands)

    Calli, B.; Caarls, W.; Wisse, M.; Jonker, P.P.

    2018-01-01

    In this paper, a novel active vision strategy is proposed for optimizing the viewpoint of a robot's vision sensor for a given success criterion. The strategy is based on extremum seeking control (ESC), which introduces two main advantages: 1) Our approach is model free: It does not require an
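The extremum seeking control (ESC) idea summarized above, optimizing a viewpoint by probing a success criterion without a model, can be illustrated with a simplified sketch. Classic ESC uses a sinusoidal dither with demodulation; the two-point probing variant below is an illustrative stand-in, not the authors' controller, and all names and parameters are assumptions:

```python
def extremum_seek(f, x0, delta=0.05, gain=0.5, steps=100):
    """Model-free extremum seeking on a scalar viewpoint parameter x.

    f: black-box success criterion (e.g. an image sharpness or
       recognition-confidence score); only evaluations of f are used.
    The parameter is probed on either side of x to estimate the local
    gradient, and x is stepped uphill toward the extremum.
    """
    x = x0
    for _ in range(steps):
        # Symmetric two-point probe of the criterion around x.
        grad = (f(x + delta) - f(x - delta)) / (2.0 * delta)
        x += gain * grad        # climb the estimated gradient
    return x
```

On a smooth criterion with a single maximum, the probe-and-climb loop settles at the optimal viewpoint parameter without ever using a model of f.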

  1. Cultural Robotics: The Culture of Robotics and Robotics in Culture

    Directory of Open Access Journals (Sweden)

    Hooman Samani

    2013-12-01

    Full Text Available In this paper, we have investigated the concept of "Cultural Robotics" with regard to the evolution of social into cultural robots in the 21st Century. By defining the concept of culture, the potential development of a culture between humans and robots is explored. Based on the cultural values of robotics developers and the learning ability of current robots, cultural attributes are in the process of being formed, which would define the new concept of cultural robotics. Given the importance of robot embodiment for a sense of presence, the influence of robots on communication culture is anticipated. The sustainability of a robotics culture based on diversity across cultural communities and acceptance modalities is explored, in order to anticipate the creation of different cultural attributes between robots and humans in the future.

  2. Friendly network robotics; Friendly network robotics

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1997-03-01

    This paper summarizes the research results on friendly network robotics in fiscal 1996. This research assumes an android robot as an ultimate robot and a future robot system utilizing computer network technology. A robot aimed at daily human work activities in factories or under extreme environments is required to work in ordinary human work environments. A humanoid robot with size, shape and functions similar to those of a human being is desirable. Such a robot, having a head with two eyes, two ears and a mouth, can hold a conversation with humans, can walk on two legs by autonomous adaptive control, and has behavioral intelligence. Remote operation of such a robot is also possible through a high-speed computer network. As a key technology for using this robot in coexistence with humans, the establishment of human-coexistent robotics was studied. As network-based robotics, the use of robots connected to computer networks was also studied. In addition, the R-cube (R³) plan (realtime remote control robot technology) was proposed. 82 refs., 86 figs., 12 tabs.

  3. Intelligent robot trends for 1998

    Science.gov (United States)

    Hall, Ernest L.

    1998-10-01

    An intelligent robot is a remarkably useful combination of a manipulator, sensors and controls. The use of these machines in factory automation can improve productivity, increase product quality and improve competitiveness. This paper presents a discussion of recent technical and economic trends. Technically, the machines are faster, cheaper, more repeatable, more reliable and safer. The knowledge base of inverse kinematic and dynamic solutions and intelligent controls is increasing. More attention is being given by industry to robots, vision and motion controls. New areas of usage are emerging for service robots, remote manipulators and automated guided vehicles. Economically, the robotics industry now has a 1.1-billion-dollar market in the U.S. and is growing. Feasibility study results are presented which also show decreasing costs for robots and unaudited healthy rates of return for a variety of robotic applications. However, the road from inspiration to successful application can be long and difficult, often taking decades to achieve a new product. A greater emphasis on mechatronics is needed in our universities. Certainly, more cooperation between government, industry and universities is needed to speed the development of intelligent robots that will benefit industry and society.

  4. New Trends in Robotics for Agriculture: Integration and Assessment of a Real Fleet of Robots

    Directory of Open Access Journals (Sweden)

    Luis Emmi

    2014-01-01

    Full Text Available Computer-based sensors and actuators such as global positioning systems, machine vision, and laser-based sensors have progressively been incorporated into mobile robots with the aim of configuring autonomous systems capable of shifting operator activities in agricultural tasks. However, the incorporation of many electronic systems into a robot impairs its reliability and increases its cost. Hardware minimization, as well as software minimization and ease of integration, is essential to obtain feasible robotic systems. A step forward in the application of automatic equipment in agriculture is the use of fleets of robots, in which a number of specialized robots collaborate to accomplish one or several agricultural tasks. This paper strives to develop a system architecture for both individual robots and robots working in fleets to improve reliability, decrease complexity and costs, and permit the integration of software from different developers. Several solutions are studied, from a fully distributed to a whole integrated architecture in which a central computer runs all processes. This work also studies diverse topologies for controlling fleets of robots and advances other prospective topologies. The architecture presented in this paper is being successfully applied in the RHEA fleet, which comprises three ground mobile units based on a commercial tractor chassis.

  5. New Trends in Robotics for Agriculture: Integration and Assessment of a Real Fleet of Robots

    Science.gov (United States)

    Gonzalez-de-Soto, Mariano; Pajares, Gonzalo

    2014-01-01

    Computer-based sensors and actuators such as global positioning systems, machine vision, and laser-based sensors have progressively been incorporated into mobile robots with the aim of configuring autonomous systems capable of shifting operator activities in agricultural tasks. However, the incorporation of many electronic systems into a robot impairs its reliability and increases its cost. Hardware minimization, as well as software minimization and ease of integration, is essential to obtain feasible robotic systems. A step forward in the application of automatic equipment in agriculture is the use of fleets of robots, in which a number of specialized robots collaborate to accomplish one or several agricultural tasks. This paper strives to develop a system architecture for both individual robots and robots working in fleets to improve reliability, decrease complexity and costs, and permit the integration of software from different developers. Several solutions are studied, from a fully distributed to a whole integrated architecture in which a central computer runs all processes. This work also studies diverse topologies for controlling fleets of robots and advances other prospective topologies. The architecture presented in this paper is being successfully applied in the RHEA fleet, which comprises three ground mobile units based on a commercial tractor chassis. PMID:25143976

  6. New trends in robotics for agriculture: integration and assessment of a real fleet of robots.

    Science.gov (United States)

    Emmi, Luis; Gonzalez-de-Soto, Mariano; Pajares, Gonzalo; Gonzalez-de-Santos, Pablo

    2014-01-01

    Computer-based sensors and actuators such as global positioning systems, machine vision, and laser-based sensors have progressively been incorporated into mobile robots with the aim of configuring autonomous systems capable of shifting operator activities in agricultural tasks. However, the incorporation of many electronic systems into a robot impairs its reliability and increases its cost. Hardware minimization, as well as software minimization and ease of integration, is essential to obtain feasible robotic systems. A step forward in the application of automatic equipment in agriculture is the use of fleets of robots, in which a number of specialized robots collaborate to accomplish one or several agricultural tasks. This paper strives to develop a system architecture for both individual robots and robots working in fleets to improve reliability, decrease complexity and costs, and permit the integration of software from different developers. Several solutions are studied, from a fully distributed to a whole integrated architecture in which a central computer runs all processes. This work also studies diverse topologies for controlling fleets of robots and advances other prospective topologies. The architecture presented in this paper is being successfully applied in the RHEA fleet, which comprises three ground mobile units based on a commercial tractor chassis.

  7. The New Robotics-towards human-centered machines.

    Science.gov (United States)

    Schaal, Stefan

    2007-07-01

    Research in robotics has moved away from its primary focus on industrial applications. The New Robotics is a vision that has been developed in past years by our own university and many other national and international research institutions and addresses how increasingly more human-like robots can live among us and take over tasks where our current society has shortcomings. Elder care, physical therapy, child education, search and rescue, and general assistance in daily life situations are some of the examples that will benefit from the New Robotics in the near future. With these goals in mind, research for the New Robotics has to embrace a broad interdisciplinary approach, ranging from traditional mathematical issues of robotics to novel issues in psychology, neuroscience, and ethics. This paper outlines some of the important research problems that will need to be resolved to make the New Robotics a reality.

  8. 25th Conference on Robotics in Alpe-Adria-Danube Region

    CERN Document Server

    Borangiu, Theodor

    2017-01-01

    This book presents the proceedings of the 25th International Conference on Robotics in Alpe-Adria-Danube Region, RAAD 2016 held in Belgrade, Serbia, on June 30th–July 2nd, 2016. In keeping with the tradition of the event, RAAD 2016 covered all the important areas of research and innovation in new robot designs and intelligent robot control, with papers including Intelligent robot motion control; Robot vision and sensory processing; Novel design of robot manipulators and grippers; Robot applications in manufacturing and services; Autonomous systems, humanoid and walking robots; Human–robot interaction and collaboration; Cognitive robots and emotional intelligence; Medical, human-assistive robots and prosthetic design; Robots in construction and arts, and Evolution, education, legal and social issues of robotics. For the first time in RAAD history, the themes cloud robots, legal and ethical issues in robotics as well as robots in arts were included in the technical program. The book is a valuable resource f...

  9. Robotic assisted minimally invasive surgery

    Directory of Open Access Journals (Sweden)

    Palep Jaydeep

    2009-01-01

    Full Text Available The term "robot" was coined by the Czech playwright Karel Capek in 1921 in his play Rossum's Universal Robots. The word "robot" comes from the Czech word robota, which means forced labor. The era of robots in surgery commenced when the first AESOP (a voice-controlled camera holder) prototype robot was used clinically in 1993 and was then marketed as the first surgical robot in 1994 after clearance by the US FDA. Since then, many robot prototypes, such as the EndoAssist (Armstrong Healthcare Ltd., High Wycombe, Bucks, UK) and the FIPS endoarm (Karlsruhe Research Center, Karlsruhe, Germany), have been developed to add to the functions of the robot and to try to increase its utility. Integrated Surgical Systems (now Intuitive Surgical, Inc.) redesigned the SRI Green Telepresence Surgery system and created the da Vinci® Surgical System, classified as a master-slave surgical system. It uses true 3D visualization and EndoWrist® instruments. It was approved by the FDA in July 2000 for general laparoscopic surgery and in November 2002 for mitral valve repair surgery. The da Vinci robot is currently being used in various fields such as urology, general surgery, gynecology, cardio-thoracic, pediatric and ENT surgery. It provides several advantages over conventional laparoscopy, such as 3D vision, motion scaling, intuitive movements, visual immersion and tremor filtration. The advent of robotics has increased the use of minimally invasive surgery among laparoscopically naïve surgeons and expanded the repertoire of experienced surgeons to include more advanced and complex reconstructions.

  10. CRV 2008: Fifth Canadian Conference on Computerand Robot Vision, Windsor, ON, Canada, May 2008

    DEFF Research Database (Denmark)

    Fihl, Preben

    This technical report will cover the participation in the fifth Canadian Conference on Computer and Robot Vision in May 2008. The report will give a concise description of the topics presented at the conference, focusing on the work related to the HERMES project and human motion and action...

  11. Robotics

    Science.gov (United States)

    Popov, E. P.; Iurevich, E. I.

    The history and the current status of robotics are reviewed, as are the design, operation, and principal applications of industrial robots. Attention is given to programmable robots, robots with adaptive control and elements of artificial intelligence, and remotely controlled robots. The applications of robots discussed include mechanical engineering, cargo handling during transportation and storage, mining, and metallurgy. The future prospects of robotics are briefly outlined.

  12. IMU-based online kinematic calibration of robot manipulator.

    Science.gov (United States)

    Du, Guanglong; Zhang, Ping

    2013-01-01

    Robot calibration is a useful diagnostic method for improving the positioning accuracy in robot production and maintenance. An online robot self-calibration method based on an inertial measurement unit (IMU) is presented in this paper. The method requires that the IMU be rigidly attached to the robot manipulator, which makes it possible to obtain the orientation of the manipulator from the orientation of the IMU in real time. This paper proposes an efficient approach that incorporates the Factored Quaternion Algorithm (FQA) and a Kalman Filter (KF) to estimate the orientation of the IMU. Then, an Extended Kalman Filter (EKF) is used to estimate the kinematic parameter errors. Using the proposed orientation estimation method results in improved reliability and accuracy in determining the orientation of the manipulator. Compared with existing vision-based self-calibration methods, the great advantage of this method is that it does not need complex steps, such as camera calibration, image capture, and corner detection, which makes the robot calibration procedure more autonomous in a dynamic manufacturing environment. Experimental studies on a GOOGOL GRB3016 robot show that this method has better accuracy, convenience, and effectiveness than vision-based methods.
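    As a toy illustration of the second stage described in this record (Kalman-filter estimation of kinematic parameter errors from IMU-derived orientations), the sketch below collapses the problem to one degree of freedom: the measured link orientation is the commanded joint angle plus an unknown constant zero-offset, and a scalar Kalman filter recovers that offset. The models, noise levels and names are assumptions, not the paper's EKF formulation over the full kinematic error model.

    ```python
    # Toy 1-DOF sketch of online kinematic calibration: estimate a constant
    # joint zero-offset from noisy orientation measurements with a scalar
    # Kalman filter. All numbers here are illustrative assumptions.
    import random

    def calibrate_offset(true_offset=0.03, meas_noise=0.01, n=200, seed=0):
        rng = random.Random(seed)
        x_hat, p = 0.0, 1.0          # offset estimate and its variance
        r = meas_noise ** 2          # measurement noise variance
        for _ in range(n):
            q_cmd = rng.uniform(-1.5, 1.5)                      # commanded joint angle
            z = q_cmd + true_offset + rng.gauss(0, meas_noise)  # IMU-derived orientation
            y = z - (q_cmd + x_hat)  # innovation: measured minus modeled orientation
            k = p / (p + r)          # Kalman gain (H = 1, constant state)
            x_hat += k * y
            p *= (1 - k)
        return x_hat
    ```

    The same innovation-and-gain structure generalizes to the vector EKF case, where the state collects all kinematic parameter errors and H becomes the Jacobian of the orientation model with respect to them.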

  13. IMU-Based Online Kinematic Calibration of Robot Manipulator

    Directory of Open Access Journals (Sweden)

    Guanglong Du

    2013-01-01

    Full Text Available Robot calibration is a useful diagnostic method for improving the positioning accuracy in robot production and maintenance. An online robot self-calibration method based on an inertial measurement unit (IMU) is presented in this paper. The method requires that the IMU be rigidly attached to the robot manipulator, which makes it possible to obtain the orientation of the manipulator from the orientation of the IMU in real time. This paper proposes an efficient approach that incorporates the Factored Quaternion Algorithm (FQA) and a Kalman Filter (KF) to estimate the orientation of the IMU. Then, an Extended Kalman Filter (EKF) is used to estimate the kinematic parameter errors. Using the proposed orientation estimation method results in improved reliability and accuracy in determining the orientation of the manipulator. Compared with existing vision-based self-calibration methods, the great advantage of this method is that it does not need complex steps, such as camera calibration, image capture, and corner detection, which makes the robot calibration procedure more autonomous in a dynamic manufacturing environment. Experimental studies on a GOOGOL GRB3016 robot show that this method has better accuracy, convenience, and effectiveness than vision-based methods.

  14. Robot 2015 : Second Iberian Robotics Conference : Advances in Robotics

    CERN Document Server

    Moreira, António; Lima, Pedro; Montano, Luis; Muñoz-Martinez, Victor

    2016-01-01

    This book contains a selection of papers accepted for presentation and discussion at ROBOT 2015: Second Iberian Robotics Conference, held in Lisbon, Portugal, November 19th–21st, 2015. ROBOT 2015 is part of a series of conferences that are a joint organization of SPR – “Sociedade Portuguesa de Robótica/ Portuguese Society for Robotics”, SEIDROB – Sociedad Española para la Investigación y Desarrollo de la Robótica/ Spanish Society for Research and Development in Robotics and CEA-GTRob – Grupo Temático de Robótica/ Robotics Thematic Group. The conference organization had also the collaboration of several universities and research institutes, including: University of Minho, University of Porto, University of Lisbon, Polytechnic Institute of Porto, University of Aveiro, University of Zaragoza, University of Malaga, LIACC, INESC-TEC and LARSyS. Robot 2015 was focussed on the Robotics scientific and technological activities in the Iberian Peninsula, although open to research and delegates from other...

  15. Model and Behavior-Based Robotic Goalkeeper

    DEFF Research Database (Denmark)

    Lausen, H.; Nielsen, J.; Nielsen, M.

    2003-01-01

    This paper describes the design, implementation and test of a goalkeeper robot for the Middle-Size League of RoboCup. The goalkeeper task is implemented by a set of primitive tasks and behaviours coordinated by a 2-level hierarchical state machine. The primitive tasks concerning complex motion...... control are implemented by a non-linear control algorithm, adapted to the different task goals (e.g., follow the ball or the robot posture from local features extracted from images acquired by a catadioptric omni-directional vision system. Most robot parameters were designed based on simulations carried...

  16. Hand/Eye Coordination For Fine Robotic Motion

    Science.gov (United States)

    Lokshin, Anatole M.

    1992-01-01

    Fine motions of robotic manipulator controlled with help of visual feedback by new method reducing position errors by order of magnitude. Robotic vision subsystem includes five cameras: three stationary ones providing wide-angle views of workspace and two mounted on wrist of auxiliary robot arm. Stereoscopic cameras on arm give close-up views of object and end effector. Cameras measure errors between commanded and actual positions and/or provide data for mapping between visual and manipulator-joint-angle coordinates.

  17. Cultural Robotics: The Culture of Robotics and Robotics in Culture

    OpenAIRE

    Hooman Samani; Elham Saadatian; Natalie Pang; Doros Polydorou; Owen Noel Newton Fernando; Ryohei Nakatsu; Jeffrey Tzu Kwan Valino Koh

    2013-01-01

    In this paper, we have investigated the concept of “Cultural Robotics” with regard to the evolution of social into cultural robots in the 21st Century. By defining the concept of culture, the potential development of a culture between humans and robots is explored. Based on the cultural values of the robotics developers, and the learning ability of current robots, cultural attributes in this regard are in the process of being formed, which would define the new concept of cultural robotics. Ac...

  18. Vision-based robotic system for object agnostic placing operations

    DEFF Research Database (Denmark)

    Rofalis, Nikolaos; Nalpantidis, Lazaros; Andersen, Nils Axel

    2016-01-01

    Industrial robots are part of almost all modern factories. Even though industrial robots nowadays manipulate objects of a huge variety in different environments, exact knowledge about both of them is generally assumed. The aim of this work is to investigate the ability of a robotic system to operate within an unknown environment manipulating unknown objects. The developed system detects objects, finds matching compartments in a placing box, and ultimately grasps and places the objects there. The developed system exploits 3D sensing and visual feature extraction. No prior knowledge is provided to the system, neither for the objects nor for the placing box. The experimental evaluation of the developed robotic system shows that a combination of seemingly simple modules and strategies can provide an effective solution to the targeted problem.

  19. Robot Actors, Robot Dramaturgies

    DEFF Research Database (Denmark)

    Jochum, Elizabeth

    This paper considers the use of tele-operated robots in live performance. Robots and performance have long been linked, from the working androids and automata staged in popular exhibitions during the nineteenth century and the robots featured at Cybernetic Serendipity (1968) and the World Expo...

  20. Experiences with a Barista Robot, FusionBot

    Science.gov (United States)

    Limbu, Dilip Kumar; Tan, Yeow Kee; Wong, Chern Yuen; Jiang, Ridong; Wu, Hengxin; Li, Liyuan; Kah, Eng Hoe; Yu, Xinguo; Li, Dong; Li, Haizhou

    In this paper, we describe the implemented service robot, called FusionBot. The goal of this research is to explore and demonstrate the utility of an interactive service robot in a smart home environment, thereby improving the quality of human life. The robot has four main features: 1) speech recognition, 2) object recognition, 3) object grabbing and fetching and 4) communication with a smart coffee machine. Its software architecture employs a multimodal dialogue system that integrates different components, including a spoken dialog system, vision understanding, navigation and a smart device gateway. In the experiments conducted during the TechFest 2008 event, the FusionBot successfully demonstrated that it could autonomously serve coffee to visitors on their request. Preliminary survey results indicate that the robot has the potential not only to aid general robotics research but also to contribute towards the long-term goal of intelligent service robotics in smart home environments.

  1. Towards Plug-n-Play robot guidance: Advanced 3D estimation and pose estimation in Robotic applications

    DEFF Research Database (Denmark)

    Sølund, Thomas

    and move objects, which are physically located at the same positions. In order to place objects in the same position each time, custom-made mechanical fixtures and aligners are constructed to ensure that objects are not moving. It is expensive to design and build these fixtures and it is difficult to quickly...... change to a novel task. In some cases where objects are placed in bins and boxes it is not possible to position the objects in the same location each time. To avoid designing expensive mechanical solutions and to be able to pick objects from boxes and bins, a sensor is necessary to guide the robot. Today...... while the robot motion programming is easily handled with the new collaborative robots. This thesis deals with robot vision technologies and how these are made easier for production workers to program in order to get robots to recognize and compute the position of objects in the industry. This thesis...

  2. 1st Latin American Congress on Automation and Robotics

    CERN Document Server

    Baca, José; Moreno, Héctor; Carrera, Isela; Cardona, Manuel

    2017-01-01

    This book contains the proceedings of the 1st Latin American Congress on Automation and Robotics held at Panama City, Panama in February 2017. It gathers research work from researchers, scientists, and engineers from academia and private industry, and presents current and exciting research applications and future challenges in Latin America. The scope of this book covers a wide range of themes associated with advances in automation and robotics research encountered in engineering and scientific research and practice. These topics are related to control algorithms, systems automation, perception, mobile robotics, computer vision, educational robotics, robotics modeling and simulation, and robotics and mechanism design. LACAR 2017 has been sponsored by SENACYT (Secretaria Nacional de Ciencia, Tecnologia e Inovacion of Panama).

  3. Biomimetic vibrissal sensing for robots.

    Science.gov (United States)

    Pearson, Martin J; Mitchinson, Ben; Sullivan, J Charles; Pipe, Anthony G; Prescott, Tony J

    2011-11-12

    Active vibrissal touch can be used to replace or to supplement sensory systems such as computer vision and, therefore, improve the sensory capacity of mobile robots. This paper describes how arrays of whisker-like touch sensors have been incorporated onto mobile robot platforms taking inspiration from biology for their morphology and control. There were two motivations for this work: first, to build a physical platform on which to model, and therefore test, recent neuroethological hypotheses about vibrissal touch; second, to exploit the control strategies and morphology observed in the biological analogue to maximize the quality and quantity of tactile sensory information derived from the artificial whisker array. We describe the design of a new whiskered robot, Shrewbot, endowed with a biomimetic array of individually controlled whiskers and a neuroethologically inspired whisking pattern generation mechanism. We then present results showing how the morphology of the whisker array shapes the sensory surface surrounding the robot's head, and demonstrate the impact of active touch control on the sensory information that can be acquired by the robot. We show that adopting bio-inspired, low latency motor control of the rhythmic motion of the whiskers in response to contact-induced stimuli usefully constrains the sensory range, while also maximizing the number of whisker contacts. The robot experiments also demonstrate that the sensory consequences of active touch control can be usefully investigated in biomimetic robots.

  4. A Robust Vision Module for Humanoid Robotic Ping-Pong Game

    Directory of Open Access Journals (Sweden)

    Xiaopeng Chen

    2015-04-01

    Full Text Available Developing a vision module for a humanoid ping-pong game is challenging due to the spin and the non-linear rebound of the ping-pong ball. In this paper, we present a robust predictive vision module to overcome these problems. The hardware of the vision module is composed of two stereo camera pairs, with each pair detecting the 3D positions of the ball on one half of the ping-pong table. The software of the vision module divides the trajectory of the ball into four parts and uses the perceived trajectory in the first part to predict the other parts. In particular, the software of the vision module uses an aerodynamic model to predict the trajectories of the ball in the air and uses a novel non-linear rebound model to predict the change of the ball's motion during rebound. The average prediction error of our vision module at the ball returning point is less than 50 mm, a value small enough for standard-sized ping-pong rackets. Its average processing speed is 120 fps. The precision and efficiency of our vision module enables two humanoid robots to play ping-pong continuously for more than 200 rounds.
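    The flight-phase prediction this record describes can be illustrated with a simple point-mass model: gravity plus quadratic air drag, integrated forward until the ball returns to table height. The drag constant and initial state below are assumptions, and spin (Magnus force) and the paper's non-linear rebound model are deliberately omitted.

    ```python
    # Illustrative flight-phase prediction: forward Euler integration of a
    # ball under gravity and quadratic air drag, stopping when the ball
    # reaches table height. Constants are assumptions, not the paper's model.
    import math

    G = 9.81      # gravitational acceleration, m/s^2
    KD = 0.12     # lumped drag coefficient, 1/m (assumed)

    def predict_landing(p, v, z_table=0.0, dt=1e-3, t_max=2.0):
        """Integrate position p = (x, y, z) and velocity v = (vx, vy, vz)
        until z crosses z_table; return the landing point and time of flight."""
        x, y, z = p
        vx, vy, vz = v
        t = 0.0
        while z > z_table and t < t_max:
            speed = math.sqrt(vx * vx + vy * vy + vz * vz)
            ax, ay = -KD * speed * vx, -KD * speed * vy   # drag opposes motion
            az = -G - KD * speed * vz
            vx += ax * dt; vy += ay * dt; vz += az * dt
            x += vx * dt; y += vy * dt; z += vz * dt
            t += dt
        return (x, y, z), t
    ```

    In a real module this forward prediction would be re-run as each new stereo observation refines the estimated initial state, and a separate rebound model would map the incoming velocity at the table to the outgoing one.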

  5. Intelligence for Human-Assistant Planetary Surface Robots

    Science.gov (United States)

    Hirsh, Robert; Graham, Jeffrey; Tyree, Kimberly; Sierhuis, Maarten; Clancey, William J.

    2006-01-01

    The central premise in developing effective human-assistant planetary surface robots is that robotic intelligence is needed. The exact type, method, forms and/or quantity of intelligence is an open issue being explored on the ERA project, as well as others. In addition to field testing, theoretical research into this area can help provide answers on how to design future planetary robots. Many fundamental intelligence issues are discussed by Murphy [2], including (a) learning, (b) planning, (c) reasoning, (d) problem solving, (e) knowledge representation, and (f) computer vision (stereo tracking, gestures). The new "social interaction/emotional" form of intelligence that some consider critical to Human Robot Interaction (HRI) can also be addressed by human assistant planetary surface robots, as human operators feel more comfortable working with a robot when the robot is verbally (or even physically) interacting with them. Arkin [3] and Murphy are both proponents of the hybrid deliberative-reasoning/reactive-execution architecture as the best general architecture for fully realizing robot potential, and the robots discussed herein implement a design continuously progressing toward this hybrid philosophy. The remainder of this chapter will describe the challenges associated with robotic assistance to astronauts, our general research approach, the intelligence incorporated into our robots, and the results and lessons learned from over six years of testing human-assistant mobile robots in field settings relevant to planetary exploration. The chapter concludes with some key considerations for future work in this area.

  6. Automated robotic workcell for waste characterization

    International Nuclear Information System (INIS)

    Dougan, A.D.; Gustaveson, D.K.; Alvarez, R.A.; Holliday, M.

    1993-01-01

    The authors have successfully demonstrated an automated multisensor-based robotic workcell for hazardous waste characterization. The robot within this workcell uses feedback from radiation sensors, a metal detector, object profile scanners, and a 2D vision system to automatically segregate objects based on their measured properties. The multisensor information is used to make segregation decisions for waste items and to facilitate the grasping of objects with a robotic arm. The authors used both sodium iodide and high-purity germanium detectors in a two-step process to maximize throughput. For metal identification and discrimination, the authors are investigating the use of neutron interrogation techniques.
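    A segregation decision of the kind this workcell makes can be sketched as a thresholded fusion rule over the sensor readings. The categories, thresholds and function name below are purely hypothetical; the record does not describe the actual decision logic.

    ```python
    # Hypothetical multisensor segregation rule: map thresholded readings from
    # the radiation sensor, metal detector and profile scanner to a bin label.
    # All thresholds and categories are illustrative assumptions.
    def segregate(radiation_cps, metal_signal, profile_height_cm):
        if radiation_cps > 100:          # gamma count well above background
            return "radioactive"
        if metal_signal > 0.5:           # strong metal detector response
            return "metal"
        if profile_height_cm > 30:       # too tall for the gripper to handle
            return "manual-handling"
        return "non-metal"
    ```

    In practice such rules would be ordered by hazard priority, as here, so that a radioactive metal object is routed by its most safety-critical property first.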

  7. Is Ethics of Robotics about Robots? Philosophy of Robotics Beyond Realism and Individualilsm.

    NARCIS (Netherlands)

    Coeckelbergh, Mark

    2011-01-01

    If we are doing ethics of robotics, what exactly is the object of our inquiry? This paper challenges 'individualist' robot ontology and 'individualist' social philosophy of robots. It is argued that ethics of robotics should not study and evaluate robotics exclusively in terms of individual

  8. CANINE: a robotic mine dog

    Science.gov (United States)

    Stancil, Brian A.; Hyams, Jeffrey; Shelley, Jordan; Babu, Kartik; Badino, Hernán.; Bansal, Aayush; Huber, Daniel; Batavia, Parag

    2013-01-01

    Neya Systems, LLC competed in the CANINE program sponsored by the U.S. Army Tank Automotive Research Development and Engineering Center (TARDEC) which culminated in a competition held at Fort Benning as part of the 2012 Robotics Rodeo. As part of this program, we developed a robot with the capability to learn and recognize the appearance of target objects, conduct an area search amid distractor objects and obstacles, and relocate the target object in the same way that Mine dogs and Sentry dogs are used within military contexts for exploration and threat detection. Neya teamed with the Robotics Institute at Carnegie Mellon University to develop vision-based solutions for probabilistic target learning and recognition. In addition, we used a Mission Planning and Management System (MPMS) to orchestrate complex search and retrieval tasks using a general set of modular autonomous services relating to robot mobility, perception and grasping.

  9. Autonomous stair-climbing with miniature jumping robots.

    Science.gov (United States)

    Stoeter, Sascha A; Papanikolopoulos, Nikolaos

    2005-04-01

    The problem of vision-guided control of miniature mobile robots is investigated. Untethered mobile robots with small physical dimensions of around 10 cm or less do not permit powerful onboard computers because of size and power constraints. These challenges have, in the past, reduced the functionality of such devices to that of a complex remote control vehicle with fancy sensors. With the help of a computationally more powerful entity such as a larger companion robot, the control loop can be closed. Using the miniature robot's video transmission or that of an observer to localize it in the world, control commands can be computed and relayed to the inept robot. The result is a system that exhibits autonomous capabilities. The framework presented here solves the problem of climbing stairs with the miniature Scout robot. The robot's unique locomotion mode, the jump, is employed to hop one step at a time. Methods for externally tracking the Scout are developed. A large number of real-world experiments are conducted and the results discussed.

  10. Visual servo simulation of EAST articulated maintenance arm robot

    Energy Technology Data Exchange (ETDEWEB)

    Yang, Yang, E-mail: yangyang@ipp.ac.cn [Institute of Plasma Physics, Chinese Academy of Sciences, 350 Shushanhu Rd, Hefei, Anhui (China); Song, Yuntao; Pan, Hongtao; Cheng, Yong; Feng, Hansheng [Institute of Plasma Physics, Chinese Academy of Sciences, 350 Shushanhu Rd, Hefei, Anhui (China); Wu, Huapeng [Lappeenranta University of Technology, Skinnarilankatu 34, Lappeenranta (Finland)

    2016-03-15

For the inspection and light-duty maintenance of the vacuum vessel in the EAST tokamak, a serial robot arm, called the EAST articulated maintenance arm (EAMA), has been developed. Because of its 9-m-long cantilever arm, the large flexibility of the EAMA robot makes accurate positioning difficult. This article presents an autonomous robot control scheme to cope with the positioning problem: a visual servoing approach in the context of tile grasping for the EAMA robot. In the experiments, the proposed method was implemented in a simulation environment to position and track a target graphite tile with the EAMA robot. As a result, the proposed visual control scheme can successfully drive the EAMA robot to approach and track the target tile until the robot reaches the desired position. Furthermore, the functionality of the simulation software presented in this paper proved suitable for the development of robotics and computer vision applications.
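As a sketch of the kind of image-based visual servoing law such a scheme builds on, the classic control v = -λ L⁺ e can be written as follows. This is a generic textbook construction with illustrative feature values and gain, not the EAMA controller from the paper:

```python
import numpy as np

def interaction_matrix(x, y, Z):
    """Image Jacobian of one normalized image point at depth Z."""
    return np.array([
        [-1.0 / Z, 0.0, x / Z, x * y, -(1.0 + x**2), y],
        [0.0, -1.0 / Z, y / Z, 1.0 + y**2, -x * y, -x],
    ])

def ibvs_velocity(features, desired, depths, lam=0.5):
    """Camera velocity command v = -lambda * pinv(L) * (s - s*)."""
    L = np.vstack([interaction_matrix(x, y, Z)
                   for (x, y), Z in zip(features, depths)])
    e = (np.asarray(features) - np.asarray(desired)).ravel()
    return -lam * np.linalg.pinv(L) @ e

# Four tile corner points, slightly offset from their desired image positions
s = [(0.11, 0.10), (-0.10, 0.11), (-0.10, -0.10), (0.10, -0.10)]
s_star = [(0.10, 0.10), (-0.10, 0.10), (-0.10, -0.10), (0.10, -0.10)]
v = ibvs_velocity(s, s_star, depths=[1.0] * 4)
print(v)  # 6-vector of camera velocity: (vx, vy, vz, wx, wy, wz)
```

Iterating this law drives the feature error, and hence the camera pose relative to the target, towards zero; λ trades convergence speed against stability.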

  11. Visual servo simulation of EAST articulated maintenance arm robot

    International Nuclear Information System (INIS)

    Yang, Yang; Song, Yuntao; Pan, Hongtao; Cheng, Yong; Feng, Hansheng; Wu, Huapeng

    2016-01-01

For the inspection and light-duty maintenance of the vacuum vessel in the EAST tokamak, a serial robot arm, called the EAST articulated maintenance arm (EAMA), has been developed. Because of its 9-m-long cantilever arm, the large flexibility of the EAMA robot makes accurate positioning difficult. This article presents an autonomous robot control scheme to cope with the positioning problem: a visual servoing approach in the context of tile grasping for the EAMA robot. In the experiments, the proposed method was implemented in a simulation environment to position and track a target graphite tile with the EAMA robot. As a result, the proposed visual control scheme can successfully drive the EAMA robot to approach and track the target tile until the robot reaches the desired position. Furthermore, the functionality of the simulation software presented in this paper proved suitable for the development of robotics and computer vision applications.

  12. A robotic platform for laser welding of corneal tissue

    Science.gov (United States)

    Rossi, Francesca; Micheletti, Filippo; Magni, Giada; Pini, Roberto; Menabuoni, Luca; Leoni, Fabio; Magnani, Bernardo

    2017-07-01

Robotic surgery is a reality in several surgical fields, such as gastrointestinal surgery. In ophthalmic surgery, the high spatial precision required has limited the application of robotic systems, and although several designs have been attempted in the last 10 years, only a few applications in retinal surgery have been tested in animal models. The combination of photonics and robotics can open new frontiers in minimally invasive surgery by improving precision, reducing tremor, amplifying the scale of motion, and automating the procedure. In this manuscript we present preliminary results in developing a vision-guided robotic platform for laser-assisted anterior eye surgery. The robotic console is composed of a robotic arm equipped with an "end effector" designed to deliver laser light to the anterior corneal surface. The main intended application is laser welding of corneal tissue in laser-assisted penetrating keratoplasty and endothelial keratoplasty. The console is equipped with an integrated vision system. The experiment originates from a clear medical demand to improve the efficacy of different surgical procedures: once the prototype is optimized, other surgical areas, such as neurosurgery, urology and spinal surgery, will be included in its application.

  13. Exploratorium: Robots.

    Science.gov (United States)

    Brand, Judith, Ed.

    2002-01-01

    This issue of Exploratorium Magazine focuses on the topic robotics. It explains how to make a vibrating robotic bug and features articles on robots. Contents include: (1) "Where Robot Mice and Robot Men Run Round in Robot Towns" (Ray Bradbury); (2) "Robots at Work" (Jake Widman); (3) "Make a Vibrating Robotic Bug" (Modesto Tamez); (4) "The Robot…

  14. Smart mobile robot system for rubbish collection

    Science.gov (United States)

    Ali, Mohammed A. H.; Sien Siang, Tan

    2018-03-01

This paper records the research and procedures of developing a smart mobile robot with a detection system to collect rubbish. The objective is to design a mobile robot that can detect and recognize medium-size rubbish, such as drink cans, estimate the position of the rubbish relative to the robot, and approach the rubbish based on that position estimate. The paper reviews the relevant types of image processing, detection and recognition methods, and image filters. The project implements an RGB subtraction method as the primary detection system, together with an algorithm for distance measurement based on the image plane. The project is limited to using a computer webcam as the sensor, and the robot can only approach the nearest rubbish within the camera's field of view, provided the rubbish carries distinguishable RGB colour components on its body.
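An RGB subtraction detector of the kind described can be sketched in a few lines; the threshold, colours and synthetic test frame below are illustrative, not the paper's values:

```python
import numpy as np

def detect_red_object(img, thresh=60):
    """Locate a predominantly red object via RGB channel subtraction.

    img: HxWx3 uint8 array. Returns the (row, col) centroid of the red
    region, or None if nothing exceeds the threshold.
    """
    rgb = img.astype(np.int16)
    # Red response: R minus the stronger of G and B suppresses white/grey,
    # which would score high on the R channel alone.
    redness = rgb[:, :, 0] - np.maximum(rgb[:, :, 1], rgb[:, :, 2])
    mask = redness > thresh
    if not mask.any():
        return None
    rows, cols = np.nonzero(mask)
    return rows.mean(), cols.mean()

# Synthetic frame: grey background with one red "can" patch
frame = np.full((120, 160, 3), 90, dtype=np.uint8)
frame[40:60, 100:120] = (200, 30, 30)
print(detect_red_object(frame))  # centroid near (49.5, 109.5)
```

The centroid's image coordinates can then feed a ground-plane distance estimate of the kind the paper mentions, since for a fixed camera tilt the row of the object's base maps monotonically to range.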

  15. Development of dog-like retrieving capability in a ground robot

    Science.gov (United States)

    MacKenzie, Douglas C.; Ashok, Rahul; Rehg, James M.; Witus, Gary

    2013-01-01

This paper presents the Mobile Intelligence Team's approach to the CANINE outdoor ground robot competition. The competition required developing a robot that provided retrieving capabilities similar to a dog's, while operating fully autonomously in unstructured environments. The vision team consisted of Mobile Intelligence, the Georgia Institute of Technology, and Wayne State University. Important computer vision aspects of the project were quickly learning the distinguishing characteristics of novel objects, searching images for the object as the robot drove a search pattern, identifying people near the robot for safe operation, correctly identifying the object among distractors, and localizing the object for retrieval. The classifier used to identify the objects is discussed, including an analysis of its performance, and an overview of the entire system architecture is presented. A discussion of the robot's performance in the competition demonstrates the system's successes in real-world testing.

  16. Surgery with cooperative robots.

    Science.gov (United States)

    Lehman, Amy C; Berg, Kyle A; Dumpert, Jason; Wood, Nathan A; Visty, Abigail Q; Rentschler, Mark E; Platt, Stephen R; Farritor, Shane M; Oleynikov, Dmitry

    2008-03-01

    Advances in endoscopic techniques for abdominal procedures continue to reduce the invasiveness of surgery. Gaining access to the peritoneal cavity through small incisions prompted the first significant shift in general surgery. The complete elimination of external incisions through natural orifice access is potentially the next step in reducing patient trauma. While minimally invasive techniques offer significant patient advantages, the procedures are surgically challenging. Robotic surgical systems are being developed that address the visualization and manipulation limitations, but many of these systems remain constrained by the entry incisions. Alternatively, miniature in vivo robots are being developed that are completely inserted into the peritoneal cavity for laparoscopic and natural orifice procedures. These robots can provide vision and task assistance without the constraints of the entry incision, and can reduce the number of incisions required for laparoscopic procedures. In this study, a series of minimally invasive animal-model surgeries were performed using multiple miniature in vivo robots in cooperation with existing laparoscopy and endoscopy tools as well as the da Vinci Surgical System. These procedures demonstrate that miniature in vivo robots can address the visualization constraints of minimally invasive surgery by providing video feedback and task assistance from arbitrary orientations within the peritoneal cavity.

  17. Vision-aided inertial navigation system for robotic mobile mapping

    Science.gov (United States)

    Bayoud, Fadi; Skaloud, Jan

    2008-04-01

A mapping system based on vision-aided inertial navigation was developed for areas where GNSS signals are unreachable. In this framework, a methodology for the integration of vision and inertial sensors is presented, analysed and tested. The system employs the method of "SLAM: Simultaneous Localisation And Mapping", where the only external input available to the system at the beginning of the mapping mission is a number of features with known coordinates. SLAM is a term used in the robotics community to describe the problem of mapping the environment and at the same time using this map to determine the location of the mapping device. Differing from the robotics approach, the presented development stems from the frameworks of photogrammetry and kinematic geodesy, merged in two filters that run in parallel: a Least-Squares Adjustment (LSA) for determining feature coordinates and a Kalman filter (KF) for navigation correction. To test this approach, a prototype mapping system comprising two CCD cameras and one Inertial Measurement Unit (IMU) is introduced. Conceptually, the outputs of the LSA photogrammetric resection are used as external measurements for the KF that corrects the inertial navigation. The filtered position and orientation are subsequently employed in the photogrammetric intersection to map the surrounding features, which serve as control points for the resection in the next epoch. We confirm empirically the dependency of navigation performance on the quality of the images and the number of tracked features, as well as on the geometry of the stereo-pair. Due to its autonomous nature, the SLAM's performance is further affected by the quality of the IMU initialisation and the a priori assumptions on error distribution. Using the example of the presented system we show that centimetre accuracy can be achieved in both navigation and mapping when the image geometry is optimal.
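The KF correction step, in which a photogrammetric position fix is fused into the inertial solution, can be sketched with the standard linear Kalman update. The state layout and noise values below are assumptions for illustration, not the paper's exact filter:

```python
import numpy as np

def kf_update(x, P, z, H, R):
    """Standard Kalman measurement update, fusing an external position fix."""
    S = H @ P @ H.T + R              # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)   # Kalman gain
    x_new = x + K @ (z - H @ x)      # corrected state
    P_new = (np.eye(len(x)) - K @ H) @ P
    return x_new, P_new

# State: [position (3), velocity (3)] from the inertial mechanization
x0 = np.array([10.0, 5.0, 2.0, 0.1, 0.0, 0.0])
P0 = np.diag([1.0, 1.0, 1.0, 0.1, 0.1, 0.1])
H = np.hstack([np.eye(3), np.zeros((3, 3))])   # resection observes position only
R = np.eye(3) * 0.01**2                        # centimetre-level fix (assumed)
z = np.array([10.05, 4.98, 2.01])              # photogrammetric resection output
x1, P1 = kf_update(x0, P0, z, H, R)
```

Because the fix is far more precise than the drifted inertial prior here, the corrected position snaps close to the resection output while the covariance shrinks; the corrected pose then drives the intersection step for the next epoch.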

  18. Development of an advanced intelligent robot navigation system

    International Nuclear Information System (INIS)

    Hai Quan Dai; Dalton, G.R.; Tulenko, J.; Crane, C.C. III

    1992-01-01

As part of the US Department of Energy's Robotics for Advanced Reactors Project, the authors are in the process of assembling an advanced intelligent robotic navigation and control system based on previous work performed on this project in the areas of computer control, database access, graphical interfaces, shared data and computations, computer vision for position determination, and sonar-based computer navigation systems. The system will feature three levels of goals: (1) a high-level system for management of lower level functions to achieve specific functional goals; (2) an intermediate level of goals such as position determination, obstacle avoidance, and discovering unexpected objects; and (3) other supplementary low-level functions such as reading and recording sonar or video camera data. In its current phase, the Cybermotion K2A mobile robot is not equipped with an onboard computer system, which will be included in the final phase. By that time, the onboard system will play important roles in vision processing and in robotic control communication.

  19. National project : advanced robot for nuclear power plant

    International Nuclear Information System (INIS)

    Tsunemi, T.; Takehara, K.; Hayashi, T.; Okano, H.; Sugiyama, S.

    1993-01-01

The national project 'Advanced Robot' was promoted by the Agency of Industrial Science and Technology, MITI, for eight years starting in 1983. The robot for a nuclear plant is one of the projects: a prototype intelligent robot with a three-dimensional vision system to generate an environmental model, a quadrupedal walking mechanism to work on stairs, and four-fingered manipulators to disassemble a valve with a hand tool. Many basic technologies, such as actuators, tactile sensors and autonomous control, have progressed to a high level. The prototype robot succeeded functionally in an official demonstration in 1990. Further refinement, such as downsizing and higher intelligence, is necessary to realize a commercial robot, while the basic technologies are useful for improving conventional robots and systems. This paper presents application studies on the advanced robot technologies. (author)

  20. Robotics

    International Nuclear Information System (INIS)

    Scheide, A.W.

    1983-01-01

    This article reviews some of the technical areas and history associated with robotics, provides information relative to the formation of a Robotics Industry Committee within the Industry Applications Society (IAS), and describes how all activities relating to robotics will be coordinated within the IEEE. Industrial robots are being used for material handling, processes such as coating and arc welding, and some mechanical and electronics assembly. An industrial robot is defined as a programmable, multifunctional manipulator designed to move material, parts, tools, or specialized devices through variable programmed motions for a variety of tasks. The initial focus of the Robotics Industry Committee will be on the application of robotics systems to the various industries that are represented within the IAS

  1. Sensor Fusion for Autonomous Mobile Robot Navigation

    DEFF Research Database (Denmark)

    Plascencia, Alfredo

Multi-sensor data fusion is a broad area of constant research which is applied to a wide variety of fields, such as the field of mobile robots. Mobile robots are complex systems where the design and implementation of sensor fusion is a complex task, but research applications are explored constantly. ... The scope of the thesis is limited to building a map for a laboratory robot by fusing range readings from a sonar array with landmarks extracted from stereo vision images using the Scale Invariant Feature Transform (SIFT) algorithm.
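The sonar side of such a map-building scheme can be sketched with a log-odds occupancy-grid update: cells along each ray before the echo collect free-space evidence, the cell at the echo collects occupancy evidence, and landmark cells from the vision front end are pinned into the same map. This is a generic sketch, not the thesis implementation; the poses, log-odds weights and landmark cells are all illustrative, with the landmarks standing in for positions a SIFT pipeline would supply:

```python
import numpy as np

def sonar_update(grid, pose, bearing, rng_m, max_range=4.0, res=0.1,
                 l_free=-0.4, l_occ=0.85):
    """Log-odds occupancy update for a single sonar ray."""
    x0, y0 = pose
    cell = lambda v: int(round(v / res))
    # Cells along the ray before the echo are evidence of free space.
    for d in np.arange(0.0, min(rng_m, max_range), res):
        cx = cell(x0 + d * np.cos(bearing))
        cy = cell(y0 + d * np.sin(bearing))
        if 0 <= cx < grid.shape[0] and 0 <= cy < grid.shape[1]:
            grid[cx, cy] += l_free
    # The cell at the echo distance is evidence of an obstacle.
    if rng_m < max_range:
        cx = cell(x0 + rng_m * np.cos(bearing))
        cy = cell(y0 + rng_m * np.sin(bearing))
        if 0 <= cx < grid.shape[0] and 0 <= cy < grid.shape[1]:
            grid[cx, cy] += l_occ

grid = np.zeros((100, 100))                  # 10 m x 10 m map, 0.1 m cells
sonar_update(grid, pose=(5.0, 5.0), bearing=0.0, rng_m=2.0)
# Landmarks from the vision front end (hypothetical grid cells) are pinned
# into the same map as strongly occupied cells.
for cx, cy in [(72, 50), (55, 83)]:
    grid[cx, cy] = 5.0
```

Repeating the update over many readings and poses accumulates evidence, so spurious sonar returns are averaged out while persistent structure and visual landmarks remain.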

  2. A High Precision Approach to Calibrate a Structured Light Vision Sensor in a Robot-Based Three-Dimensional Measurement System

    Directory of Open Access Journals (Sweden)

    Defeng Wu

    2016-08-01

Full Text Available A robot-based three-dimensional (3D) measurement system is presented, in which a structured light vision sensor is mounted on the arm of an industrial robot. Measurement accuracy is one of the most important aspects of any 3D measurement system, so a novel sensor calibration approach is proposed to improve the calibration accuracy of the structured light vision sensor. The approach is based on a number of fixed concentric circles manufactured into a calibration target; the concentric circles are used to determine the real projected centres of the circles. A calibration-point generation procedure is then carried out with the help of the calibrated robot. When enough calibration points are available, the radial alignment constraint (RAC) method is adopted to calibrate the camera model. A multilayer perceptron neural network (MLPNN) is then employed to identify the calibration residuals left after the application of the RAC method, so the real camera model is represented by the hybrid of the pinhole model and the MLPNN. Using a standard ball to validate the effectiveness of the presented technique, the experimental results demonstrate that the proposed calibration approach achieves a highly accurate model of the structured light vision sensor.
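The residual-compensation idea, fitting a parametric camera model first and then a small network for whatever it misses, can be sketched with a tiny NumPy MLP fitted to synthetic, radial-distortion-like residuals. The data, network size and training schedule are all illustrative, not the paper's:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic calibration data: residual error of the pinhole model as a smooth,
# radial-distortion-like function of normalized image coordinates (assumed).
uv = rng.uniform(-1.0, 1.0, size=(200, 2))
residual = 0.03 * (uv[:, :1]**2 + uv[:, 1:]**2)        # shape (200, 1)

# One-hidden-layer perceptron trained by full-batch gradient descent.
W1 = rng.normal(0.0, 0.5, (2, 16)); b1 = np.zeros(16)
W2 = rng.normal(0.0, 0.1, (16, 1)); b2 = np.zeros(1)
lr = 0.5
for _ in range(3000):
    h = np.tanh(uv @ W1 + b1)        # hidden activations
    pred = h @ W2 + b2               # predicted residual
    err = pred - residual
    gW2 = h.T @ err / len(uv); gb2 = err.mean(0)       # backprop: output layer
    gh = (err @ W2.T) * (1.0 - h**2)                   # backprop: hidden layer
    gW1 = uv.T @ gh / len(uv); gb1 = gh.mean(0)
    W1 -= lr * gW1; b1 -= lr * gb1; W2 -= lr * gW2; b2 -= lr * gb2

raw_rmse = np.sqrt(np.mean(residual**2))
corrected_rmse = np.sqrt(np.mean((np.tanh(uv @ W1 + b1) @ W2 + b2 - residual)**2))
print(raw_rmse, corrected_rmse)      # compensation should shrink the residual
```

At measurement time the hybrid model applies the pinhole projection and then subtracts the network's predicted residual, which is the structure of the RAC-plus-MLPNN scheme described above.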

  3. Development of a 3D parallel mechanism robot arm with three vertical-axial pneumatic actuators combined with a stereo vision system.

    Science.gov (United States)

    Chiang, Mao-Hsiung; Lin, Hao-Ting

    2011-01-01

This study aimed to develop a novel 3D parallel mechanism robot driven by three vertical-axial pneumatic actuators with a stereo vision system for path tracking control. The mechanical system and the control system are the primary novel parts of the development. In the mechanical system, the 3D parallel mechanism robot contains three serial chains, a fixed base, a movable platform and a pneumatic servo system. The parallel mechanism is first designed and analyzed to realize 3D motion of the robot's end-effector in the X-Y-Z coordinate system. The inverse and forward kinematics of the parallel mechanism robot are derived using the Denavit-Hartenberg (D-H) notation, and the pneumatic actuators in the three vertical motion axes are modeled. In the control system, a Fourier series-based adaptive sliding-mode controller with H(∞) tracking performance is used to design the path tracking controllers of the three vertical servo pneumatic actuators, realizing 3D path tracking control of the end-effector. Three optical linear scales measure the positions of the three pneumatic actuators, from which the 3D position of the end-effector is calculated by means of the kinematics. However, this calculated position cannot account for the manufacturing and assembly tolerances of the joints and the parallel mechanism, so errors exist between the actual and the calculated 3D position of the end-effector. To improve this situation, sensor collaboration is developed in this paper: a stereo vision system combining two CCDs collaborates with the three position sensors of the pneumatic actuators, measuring the actual 3D position of the end-effector and calibrating the error between the actual and the calculated position. Furthermore, to
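The D-H-based forward kinematics step mentioned above can be sketched with the standard per-joint homogeneous transform, chained along the kinematic rows. The 2-link planar example is hypothetical, not the parameters of this parallel robot:

```python
import numpy as np

def dh_transform(theta, d, a, alpha):
    """Homogeneous transform for one joint, standard D-H convention."""
    ct, st = np.cos(theta), np.sin(theta)
    ca, sa = np.cos(alpha), np.sin(alpha)
    return np.array([
        [ct, -st * ca,  st * sa, a * ct],
        [st,  ct * ca, -ct * sa, a * st],
        [0.0,      sa,       ca,      d],
        [0.0,     0.0,      0.0,    1.0],
    ])

def forward_kinematics(dh_rows):
    """Chain the per-joint transforms; returns base-to-end-effector transform."""
    T = np.eye(4)
    for row in dh_rows:
        T = T @ dh_transform(*row)
    return T

# Hypothetical 2-link planar arm, rows of (theta, d, a, alpha)
T = forward_kinematics([(np.pi / 2, 0.0, 0.3, 0.0),
                        (0.0,       0.0, 0.2, 0.0)])
print(T[:3, 3])  # end-effector position, here (0, 0.5, 0)
```

For the parallel mechanism, the same per-chain transforms are combined with the loop-closure constraints of the three serial chains to obtain the platform pose.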

  4. DARPA Robotics Challenge (DRC) Using Human-Machine Teamwork to Perform Disaster Response with a Humanoid Robot

    Science.gov (United States)

    2017-02-01

…leverage our tools and skills to develop a system in which we can get the simulated government furnished equipment (GFE) robot to walk over various types… our control software to the constellation and made a small helper program that gave us the possibility to restart our control software should… avoided this way. … The time and bandwidth limits caused us to integrate helper tools based on computer vision and a microphone sensor into the robot…

  5. Vision-based mobile robot navigation through deep convolutional neural networks and end-to-end learning

    Science.gov (United States)

    Zhang, Yachu; Zhao, Yuejin; Liu, Ming; Dong, Liquan; Kong, Lingqin; Liu, Lingling

    2017-09-01

In contrast to humans, who use only visual information for navigation, many mobile robots use laser scanners and ultrasonic sensors along with vision cameras to navigate. This work proposes a vision-based robot control algorithm based on deep convolutional neural networks. We create a 15-layer convolutional neural network learning system that achieves advanced recognition performance. Our system is trained end to end to map raw input images to a direction command in supervised mode. The images in the data sets are collected in a wide variety of weather and lighting conditions, and the data sets are augmented by adding Gaussian noise and salt-and-pepper noise to avoid overfitting. The algorithm is verified by two experiments: line tracking and obstacle avoidance. The line tracking experiment is conducted to track a desired path composed of straight and curved lines, and the goal of the obstacle avoidance experiment is to avoid obstacles indoors. We obtain a 3.29% error rate on the training set and a 5.1% error rate on the test set in the line tracking experiment, and a 1.8% error rate on the training set and less than a 5% error rate on the test set in the obstacle avoidance experiment. During actual tests, the robot can follow the runway centerline outdoors and avoid obstacles in the room accurately. The results confirm the effectiveness of the algorithm and of our improvements to the network structure and training parameters.
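The end-to-end mapping from raw pixels to a direction command can be illustrated with a minimal NumPy forward pass: convolution, ReLU, pooling, then a linear layer whose argmax is the steering class. The weights here are untrained and random, a stand-in for the paper's trained 15-layer network:

```python
import numpy as np

rng = np.random.default_rng(1)

def conv2d(img, kernels):
    """Valid-mode 2D convolution of one grayscale image with k kernels."""
    kh, kw = kernels.shape[1:]
    h, w = img.shape[0] - kh + 1, img.shape[1] - kw + 1
    out = np.empty((kernels.shape[0], h, w))
    for k, ker in enumerate(kernels):
        for i in range(h):
            for j in range(w):
                out[k, i, j] = np.sum(img[i:i + kh, j:j + kw] * ker)
    return out

def predict_direction(img, kernels, W, b):
    """Conv -> ReLU -> global average pool -> linear layer -> argmax."""
    feat = np.maximum(conv2d(img, kernels), 0.0)   # ReLU feature maps
    pooled = feat.mean(axis=(1, 2))                # global average pooling
    logits = pooled @ W + b
    return ["left", "straight", "right"][int(np.argmax(logits))]

# Untrained random weights: this illustrates the dataflow only; the real
# network is trained end to end on labelled driving images.
kernels = rng.normal(size=(4, 3, 3))
W = rng.normal(size=(4, 3)); b = np.zeros(3)
frame = rng.uniform(size=(16, 24))                 # stand-in grayscale frame
print(predict_direction(frame, kernels, W, b))
```

In the supervised setting described above, every recorded frame is paired with the driver's direction label and the convolutional and linear weights are fitted by gradient descent on the classification loss.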

  6. Towards safe robots approaching Asimov’s 1st law

    CERN Document Server

    Haddadin, Sami

    2014-01-01

    The vision of seamless human-robot interaction in our everyday life that allows for tight cooperation between human and robot has not become reality yet. However, the recent increase in technology maturity finally made it possible to realize systems of high integration, advanced sensorial capabilities and enhanced power to cross this barrier and merge living spaces of humans and robot workspaces to at least a certain extent. Together with the increasing industrial effort to realize first commercial service robotics products this makes it necessary to properly address one of the most fundamental questions of Human-Robot Interaction: How to ensure safety in human-robot coexistence? In this authoritative monograph, the essential question about the necessary requirements for a safe robot is addressed in depth and from various perspectives. The approach taken in this book focuses on the biomechanical level of injury assessment, addresses the physical evaluation of robot-human impacts, and isolates the major factor...

  7. Towards Versatile Robots Through Open Heterogeneous Modular Robots

    DEFF Research Database (Denmark)

    Lyder, Andreas

Robots are important tools in our everyday life. Both in industry and at the consumer level they serve the purpose of increasing our scope and extending our capabilities. Modular robots take the next step, allowing us to easily create and build various robots from a set of modules. If a problem arises, a new robot can be assembled rapidly from the existing modules, in contrast to conventional robots, which require a time-consuming and expensive development process. In this thesis we define a modular robot to be a robot consisting of dynamically reconfigurable modules. The goal of this thesis is to increase the versatility and practical usability of modular robots by introducing new conceptual designs. Until now modular robots have been based on a pre-specified set of modules, and thus their functionality is limited. We propose an open heterogeneous design concept, which allows a modular robot...

  8. Vision Sensor-Based Road Detection for Field Robot Navigation

    Directory of Open Access Journals (Sweden)

    Keyu Lu

    2015-11-01

    Full Text Available Road detection is an essential component of field robot navigation systems. Vision sensors play an important role in road detection for their great potential in environmental perception. In this paper, we propose a hierarchical vision sensor-based method for robust road detection in challenging road scenes. More specifically, for a given road image captured by an on-board vision sensor, we introduce a multiple population genetic algorithm (MPGA-based approach for efficient road vanishing point detection. Superpixel-level seeds are then selected in an unsupervised way using a clustering strategy. Then, according to the GrowCut framework, the seeds proliferate and iteratively try to occupy their neighbors. After convergence, the initial road segment is obtained. Finally, in order to achieve a globally-consistent road segment, the initial road segment is refined using the conditional random field (CRF framework, which integrates high-level information into road detection. We perform several experiments to evaluate the common performance, scale sensitivity and noise sensitivity of the proposed method. The experimental results demonstrate that the proposed method exhibits high robustness compared to the state of the art.
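The GrowCut stage, in which labelled seeds iteratively occupy similar neighbours until the segmentation stabilises, can be sketched as follows. This is a simplified single-channel version on synthetic data, not the paper's pipeline, whose seeds come from the MPGA vanishing-point and superpixel-clustering steps:

```python
import numpy as np

def growcut(image, seeds, iters=30):
    """Simplified single-channel GrowCut: labelled seed cells iteratively
    'attack' their 4-neighbours; an attack succeeds when the attacker's
    strength, damped by colour dissimilarity, exceeds the defender's.
    (np.roll wraps at the borders -- acceptable for a sketch.)"""
    label = seeds.copy()
    strength = (seeds > 0).astype(float)
    max_diff = image.max() - image.min() + 1e-9
    for _ in range(iters):
        for shift in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            src_l = np.roll(label, shift, (0, 1))
            src_s = np.roll(strength, shift, (0, 1))
            src_c = np.roll(image, shift, (0, 1))
            g = 1.0 - np.abs(image - src_c) / max_diff   # colour similarity
            attack = g * src_s
            win = (attack > strength) & (src_l > 0)
            label[win] = src_l[win]
            strength[win] = attack[win]
    return label

# Synthetic scene: dark 'road' on the left, bright background on the right,
# with one seed per class (stand-ins for the superpixel seeds in the paper).
img = np.full((20, 20), 0.9)
img[:, :10] = 0.1
seeds = np.zeros((20, 20), dtype=int)
seeds[10, 2] = 1      # road seed
seeds[10, 17] = 2     # background seed
labels = growcut(img, seeds)
```

After convergence each region carries the label of the seed that conquered it; the paper then refines this initial segment globally with a CRF.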

  9. 24th International Conference on Robotics in Alpe-Adria-Danube Region

    CERN Document Server

    2016-01-01

    This volume includes the Proceedings of the 24th International Conference on Robotics in Alpe-Adria-Danube Region, RAAD 2015, which was held in Bucharest, Romania, on May 27-29, 2015. The Conference brought together academic and industry researchers in robotics from the 11 countries affiliated to the Alpe-Adria-Danube space: Austria, Croatia, Czech Republic, Germany, Greece, Hungary, Italy, Romania, Serbia, Slovakia and Slovenia, and their worldwide partners. According to its tradition, RAAD 2015 covered all important areas of research, development and innovation in robotics, including new trends such as: bio-inspired and cognitive robots, visual servoing of robot motion, human-robot interaction, and personal robots for ambient assisted living. The accepted papers have been grouped in nine sessions: Robot integration in industrial applications; Grasping analysis, dexterous grippers and component design; Advanced robot motion control; Robot vision and sensory control; Human-robot interaction and collaboration;...

  10. Intelligent manipulation technique for multi-branch robotic systems

    Science.gov (United States)

    Chen, Alexander Y. K.; Chen, Eugene Y. S.

    1990-01-01

    New analytical development in kinematics planning is reported. The INtelligent KInematics Planner (INKIP) consists of the kinematics spline theory and the adaptive logic annealing process. Also, a novel framework of robot learning mechanism is introduced. The FUzzy LOgic Self Organized Neural Networks (FULOSONN) integrates fuzzy logic in commands, control, searching, and reasoning, the embedded expert system for nominal robotics knowledge implementation, and the self organized neural networks for the dynamic knowledge evolutionary process. Progress on the mechanical construction of SRA Advanced Robotic System (SRAARS) and the real time robot vision system is also reported. A decision was made to incorporate the Local Area Network (LAN) technology in the overall communication system.

  11. An automated miniature robotic vehicle inspection system

    Energy Technology Data Exchange (ETDEWEB)

    Dobie, Gordon; Summan, Rahul; MacLeod, Charles; Pierce, Gareth; Galbraith, Walter [Centre for Ultrasonic Engineering, University of Strathclyde, 204 George Street, Glasgow, G1 1XW (United Kingdom)

    2014-02-18

    A novel, autonomous reconfigurable robotic inspection system for quantitative NDE mapping is presented. The system consists of a fleet of wireless (802.11g) miniature robotic vehicles, each approximately 175 × 125 × 85 mm with magnetic wheels that enable them to inspect industrial structures such as storage tanks, chimneys and large diameter pipe work. The robots carry one of a number of payloads including a two channel MFL sensor, a 5 MHz dry coupled UT thickness wheel probe and a machine vision camera that images the surface. The system creates an NDE map of the structure overlaying results onto a 3D model in real time. The authors provide an overview of the robot design, data fusion algorithms (positioning and NDE) and visualization software.

  12. An automated miniature robotic vehicle inspection system

    International Nuclear Information System (INIS)

    Dobie, Gordon; Summan, Rahul; MacLeod, Charles; Pierce, Gareth; Galbraith, Walter

    2014-01-01

    A novel, autonomous reconfigurable robotic inspection system for quantitative NDE mapping is presented. The system consists of a fleet of wireless (802.11g) miniature robotic vehicles, each approximately 175 × 125 × 85 mm with magnetic wheels that enable them to inspect industrial structures such as storage tanks, chimneys and large diameter pipe work. The robots carry one of a number of payloads including a two channel MFL sensor, a 5 MHz dry coupled UT thickness wheel probe and a machine vision camera that images the surface. The system creates an NDE map of the structure overlaying results onto a 3D model in real time. The authors provide an overview of the robot design, data fusion algorithms (positioning and NDE) and visualization software

  13. 4th IFToMM International Symposium on Robotics and Mechatronics

    CERN Document Server

    Laribi, Med; Gazeau, Jean-Pierre

    2016-01-01

This volume contains papers selected after review for oral presentation at ISRM 2015, the Fourth IFToMM International Symposium on Robotics and Mechatronics, held in Poitiers, France, 23-24 June 2015. These papers provide a vision of the evolution of the disciplines of robotics and mechatronics, including but not limited to: mechanism design; modeling and simulation; kinematics and dynamics of multibody systems; control methods; navigation and motion planning; sensors and actuators; bio-robotics; micro/nano-robotics; complex robotic systems; walking machines and humanoids; parallel kinematic structures: analysis and synthesis; smart devices; new designs, applications and prototypes. The book can be used by researchers and engineers in the relevant areas of robotics and mechatronics.

  14. Multi-Robot FastSLAM for Large Domains

    Science.gov (United States)

    2007-03-01

…Derr, D. Fox, A.B. Cremers, Integrating global position estimation and position tracking for mobile robots: the dynamic Markov localization approach… Intelligence (AAAI), 2000. 53. Andrew J. Davison and David W. Murray. Simultaneous Localization and Map-Building Using Active Vision. IEEE… Wyeth, Michael Milford and David Prasser. A Modified Particle Filter for Simultaneous Robot Localization and Landmark Tracking in an Indoor…

  15. The research on visual industrial robot which adopts fuzzy PID control algorithm

    Science.gov (United States)

    Feng, Yifei; Lu, Guoping; Yue, Lulin; Jiang, Weifeng; Zhang, Ye

    2017-03-01

The control system of a six-degrees-of-freedom visual industrial robot, based on multi-axis motion control cards and a PC, was researched. To handle the variable, non-linear characteristics of the industrial robot's servo system, an adaptive fuzzy PID controller was adopted, which achieved a better control effect. In the vision system, a CCD camera acquires signals and sends them to a video processing card; after processing, the PC controls the motion of the six joints through the motion control cards. In experiments, the manipulator can operate together with the machine tool and the vision system to realize grasping, processing and verification. This work has a bearing on the manufacture of industrial robots.
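The fuzzy PID idea, scheduling the PID gains from fuzzy memberships over the tracking error, can be sketched as follows. The memberships, gains and toy first-order plant are all illustrative, not the controller tuned in the paper:

```python
import numpy as np

class FuzzyPID:
    """PID controller whose proportional gain is scheduled by a coarse fuzzy
    rule base: large error -> boost Kp for a fast approach, small error ->
    soften Kp to limit overshoot. Memberships and gains are illustrative."""

    def __init__(self, kp=2.0, ki=0.5, kd=0.1, dt=0.01):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_err = 0.0

    def _kp_scale(self, err):
        # Triangular memberships over |error|: small / medium / large
        e = min(abs(err), 1.0)
        small = max(0.0, 1.0 - e / 0.3)
        large = max(0.0, (e - 0.5) / 0.5)
        medium = max(0.0, 1.0 - small - large)
        # Weighted average of the per-rule scale factors (defuzzification)
        return (0.6 * small + 1.0 * medium + 1.5 * large) / (small + medium + large)

    def step(self, setpoint, measured):
        err = setpoint - measured
        self.integral += err * self.dt
        deriv = (err - self.prev_err) / self.dt
        self.prev_err = err
        return (self.kp * self._kp_scale(err) * err
                + self.ki * self.integral + self.kd * deriv)

# Drive a toy first-order plant (dy/dt = u - y) to a setpoint of 1.0
ctrl, y = FuzzyPID(), 0.0
for _ in range(5000):
    u = ctrl.step(1.0, y)
    y += (u - y) * ctrl.dt
print(round(y, 3))
```

A fuller fuzzy PID would schedule Ki and Kd as well, and take the error derivative as a second fuzzy input; the single-input version above keeps the rule base small enough to read at a glance.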

  16. Socially intelligent robots: dimensions of human-robot interaction.

    Science.gov (United States)

    Dautenhahn, Kerstin

    2007-04-29

    Social intelligence in robots has a quite recent history in artificial intelligence and robotics. However, it has become increasingly apparent that social and interactive skills are necessary requirements in many application areas and contexts where robots need to interact and collaborate with other robots or humans. Research on human-robot interaction (HRI) poses many challenges regarding the nature of interactivity and 'social behaviour' in robot and humans. The first part of this paper addresses dimensions of HRI, discussing requirements on social skills for robots and introducing the conceptual space of HRI studies. In order to illustrate these concepts, two examples of HRI research are presented. First, research is surveyed which investigates the development of a cognitive robot companion. The aim of this work is to develop social rules for robot behaviour (a 'robotiquette') that is comfortable and acceptable to humans. Second, robots are discussed as possible educational or therapeutic toys for children with autism. The concept of interactive emergence in human-child interactions is highlighted. Different types of play among children are discussed in the light of their potential investigation in human-robot experiments. The paper concludes by examining different paradigms regarding 'social relationships' of robots and people interacting with them.

  17. 10th FSR (Field and Service Robotics)

    CERN Document Server

    Barfoot, Timothy

    2016-01-01

    This book contains the proceedings of the 10th FSR (Field and Service Robotics), which is the leading single-track conference on applications of robotics in challenging environments. The 10th FSR was held in Toronto, Canada from 23-26 June 2015. The book contains 42 full-length, peer-reviewed papers organized into a variety of topics: Aquatic, Vision, Planetary, Aerial, Underground, and Systems. The goal of the book and the conference is to report and encourage the development and experimental evaluation of field and service robots, and to generate a vibrant exchange and discussion in the community. Field robots are non-factory robots, typically mobile, that operate in complex and dynamic environments: on the ground (Earth or other planets), under the ground, underwater, in the air or in space. Service robots are those that work closely with humans to help them with their lives. The first FSR was held in Canberra, Australia, in 1997. Since that first meeting, FSR has been held roughly every two years, cycling...

  18. Mobile Robot Navigation in a Corridor Using Visual Odometry

    DEFF Research Database (Denmark)

    Bayramoglu, Enis; Andersen, Nils Axel; Poulsen, Niels Kjølstad

    2009-01-01

    Incorporation of computer vision into mobile robot localization is studied in this work. It includes the generation of localization information from raw images and its fusion with the odometric pose estimation. The technique is then implemented on a small mobile robot operating in a corridor...
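The snippet above mentions fusing vision-derived localization with odometric pose estimation but does not specify the filter. One common scheme, shown here as a sketch, is inverse-variance weighting of the two estimates (a one-dimensional special case of a Kalman update); the numbers are invented for illustration.

```python
def fuse(odom_pose, odom_var, vis_pose, vis_var):
    """Combine two scalar pose estimates; the lower-variance source
    gets the larger weight, and the fused variance shrinks."""
    w = vis_var / (odom_var + vis_var)          # weight on the odometry term
    fused = w * odom_pose + (1.0 - w) * vis_pose
    fused_var = (odom_var * vis_var) / (odom_var + vis_var)
    return fused, fused_var

# wheel odometry drifts over distance (high variance); vision is noisier
# per frame but drift-free against fixed corridor features
pose, var = fuse(odom_pose=10.4, odom_var=0.5, vis_pose=10.0, vis_var=0.1)
print(pose, var)
```

The fused variance is always smaller than either input variance, which is why combining the two sensors beats using either alone.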

  19. Accuracy in Robot Generated Image Data Sets

    DEFF Research Database (Denmark)

    Aanæs, Henrik; Dahl, Anders Bjorholm

    2015-01-01

    In this paper we present a practical innovation concerning how to achieve high accuracy of camera positioning when using a 6-axis industrial robot to generate high quality data sets for computer vision. This innovation is based on the realization that, to a very large extent, the robot's positioning...... error is deterministic, and can as such be calibrated away. We have successfully used this innovation in our efforts for creating data sets for computer vision. Since the use of this innovation has a significant effect on the data set quality, we here present it in some detail, to better aid others...
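The core observation in the snippet above is that a repeatable (deterministic) positioning error can be measured once and subtracted thereafter. A minimal sketch of that idea, with invented offsets and pose names; the paper's actual calibration procedure is not described in the snippet:

```python
# offsets measured once per commanded pose with an external reference
# (e.g. a tracked calibration target); values here are illustrative
measured_offset = {  # commanded pose id -> (dx, dy, dz) in mm
    "pose_A": (0.32, -0.11, 0.05),
    "pose_B": (0.29, -0.14, 0.07),
}

def corrected_target(pose_id, commanded):
    """Shift the commanded position so the repeatable error cancels out
    when the robot is sent back to the same pose."""
    dx, dy, dz = measured_offset[pose_id]
    x, y, z = commanded
    return (x - dx, y - dy, z - dz)

print(corrected_target("pose_A", (100.0, 200.0, 300.0)))
```

Only the repeatable component of the error can be removed this way; the residual random repeatability of the robot sets the floor on the achievable accuracy.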

  20. Robot engineering

    Energy Technology Data Exchange (ETDEWEB)

    Jung, Seul

    2006-02-15

    This book deals with robot engineering, covering the history of robots, current trends in the robotics field, the work and characteristics of industrial robots, essential vector concepts and applications of matrices, basic vector analysis, the Denavit-Hartenberg representation, robot kinematics (forward and inverse kinematics, with MATLAB program examples), motion kinematics, robot dynamics including moments of inertia, centrifugal and Coriolis forces, and the Euler-Lagrange equation, together with a course plan and SIMULINK position control of robots.

  1. Robot engineering

    International Nuclear Information System (INIS)

    Jung, Seul

    2006-02-01

    This book deals with robot engineering, covering the history of robots, current trends in the robotics field, the work and characteristics of industrial robots, essential vector concepts and applications of matrices, basic vector analysis, the Denavit-Hartenberg representation, robot kinematics (forward and inverse kinematics, with MATLAB program examples), motion kinematics, robot dynamics including moments of inertia, centrifugal and Coriolis forces, and the Euler-Lagrange equation, together with a course plan and SIMULINK position control of robots.
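The Denavit-Hartenberg representation mentioned in these records maps four parameters per joint (theta, d, a, alpha) to a 4x4 homogeneous transform; chaining the transforms gives the forward kinematics. A minimal sketch for a 2-link planar arm (the link lengths and joint angles are illustrative):

```python
import math

def dh_matrix(theta, d, a, alpha):
    """Standard D-H transform: Rz(theta) Tz(d) Tx(a) Rx(alpha)."""
    ct, st = math.cos(theta), math.sin(theta)
    ca, sa = math.cos(alpha), math.sin(alpha)
    return [
        [ct, -st * ca,  st * sa, a * ct],
        [st,  ct * ca, -ct * sa, a * st],
        [0.0,      sa,       ca,      d],
        [0.0,     0.0,      0.0,    1.0],
    ]

def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def forward_kinematics(dh_rows):
    """Chain one D-H transform per joint, base to end-effector."""
    T = [[float(i == j) for j in range(4)] for i in range(4)]
    for row in dh_rows:
        T = mat_mul(T, dh_matrix(*row))
    return T

# 2-link planar arm: joint 1 at 90 degrees, joint 2 at 0; links 1 m and 0.5 m
T = forward_kinematics([(math.pi / 2, 0.0, 1.0, 0.0), (0.0, 0.0, 0.5, 0.0)])
print(round(T[0][3], 3), round(T[1][3], 3))   # end-effector x, y
```

For this planar case the result reduces to the familiar x = a1*cos(t1) + a2*cos(t1 + t2), y = a1*sin(t1) + a2*sin(t1 + t2).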

  2. Calibration of Robot Reference Frames for Enhanced Robot Positioning Accuracy

    OpenAIRE

    Cheng, Frank Shaopeng

    2008-01-01

    This chapter discussed the importance and methods of conducting robot workcell calibration for enhancing the accuracy of the robot TCP positions in industrial robot applications. It shows that the robot frame transformations define the robot geometric parameters such as joint position variables, link dimensions, and joint offsets in an industrial robot system. The D-H representation allows the robot designer to model the robot motion geometry with the four standard D-H parameters. The robot k...

  3. Image registration algorithm for high-voltage electric power live line working robot based on binocular vision

    Science.gov (United States)

    Li, Chengqi; Ren, Zhigang; Yang, Bo; An, Qinghao; Yu, Xiangru; Li, Jinping

    2017-12-01

    In the process of dismounting and assembling the drop switch with the high-voltage electric power live line working (EPL2W) robot, one of the key problems is the positioning precision of the manipulators, the gripper and the bolts used to fix the drop switch. To solve it, we study the theory of the robot's binocular vision system and the characteristics of dismounting and assembling the drop switch. We propose a coarse-to-fine image registration algorithm based on image correlation, which significantly improves the positioning precision of the manipulators and bolts. The algorithm performs the following three steps. First, the target points are marked in the right and left views, and the system judges whether the target point in the right view satisfies the lowest registration accuracy by using the similarity of the target points' backgrounds in the right and left views; this is a typical coarse-to-fine strategy. Second, the system calculates the epipolar line, and a sequence of regions containing candidate matching points is generated from the neighborhood of the epipolar line; the optimal matching image is confirmed by computing the correlation between the template image in the left view and each region in the sequence. Finally, the precise coordinates of the target points in the right and left views are calculated from the optimal matching image. The experimental results indicate that the positioning accuracy in image coordinates is within 2 pixels, the positioning accuracy in the world coordinate system is within 3 mm, and the positioning accuracy of the binocular vision thus satisfies the requirements of dismounting and assembling the drop switch.
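The correlation-matching step described above slides a template from the left view along the epipolar line in the right view (reducing the 2-D search to 1-D) and keeps the offset with the highest normalized cross-correlation. A sketch with tiny 1-D arrays standing in for image patches; the data are invented for illustration:

```python
import math

def ncc(patch, region):
    """Normalized cross-correlation of two equally sized 1-D patches."""
    n = len(patch)
    mp = sum(patch) / n
    mr = sum(region) / n
    num = sum((p - mp) * (r - mr) for p, r in zip(patch, region))
    den = math.sqrt(sum((p - mp) ** 2 for p in patch) *
                    sum((r - mr) ** 2 for r in region))
    return num / den if den else 0.0

def best_match(template, row):
    """Exhaustive 1-D search along the candidate (epipolar) row."""
    scores = [(ncc(template, row[i:i + len(template)]), i)
              for i in range(len(row) - len(template) + 1)]
    return max(scores)              # (score, column offset)

row = [10, 11, 12, 40, 40, 42, 5, 81, 10, 11]   # intensities on the epipolar row
template = [40, 40, 42]                         # patch marked in the left view
score, col = best_match(template, row)
print(score, col)
```

NCC is invariant to local brightness and contrast changes, which is why it is preferred over raw sum-of-differences when the two cameras are not photometrically identical.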

  4. Visual servo control for a human-following robot

    CSIR Research Space (South Africa)

    Burke, Michael G

    2011-03-01

    Full Text Available This thesis presents work completed on the design of control and vision components for use in a monocular vision-based human-following robot. The use of vision in a controller feedback loop is referred to as vision-based or visual servo control...

  5. Range-Image Acquisition for Discriminated Objects in a Range-gated Robot Vision System

    Energy Technology Data Exchange (ETDEWEB)

    Park, Seung-Kyu; Ahn, Yong-Jin; Park, Nak-Kyu; Baik, Sung-Hoon; Choi, Young-Soo; Jeong, Kyung-Min [KAERI, Daejeon (Korea, Republic of)

    2015-05-15

    The imaging capability of a surveillance vision system in harsh low-visibility environments, such as fire and detonation areas, is a key function for monitoring the safety of the facilities. 2D and range image data acquired in low-visibility environments are important for assessing the situation and preparing appropriate countermeasures. Passive vision systems, such as conventional cameras and binocular stereo vision systems, usually cannot acquire image information when the reflected light is highly scattered and absorbed by airborne particles such as fog. In addition, the resolution of images captured through low-density airborne particles is decreased because the image is blurred and dimmed by scattering, emission and absorption. Active vision systems, such as structured-light vision and projected stereo vision, are usually more robust in harsh environments than passive vision systems; however, their performance decreases considerably in proportion to the density of the particles. The RGI system provides 2D and range image data from several RGI images and, moreover, provides clear images in low-visibility fog and smoke environments by using the sum of time-sliced images. Nowadays, range-gated (RG) imaging is an emerging technology in the field of surveillance for security applications, especially in the visualization of invisible night and fog environments. Although RGI viewing was discovered in the 1960s, this technology is now becoming more applicable by virtue of the rapid development of optical and sensor technologies. In particular, this system can be adopted in a robot vision system by virtue of its compact, portable configuration. In contrast to passive vision systems, this technology enables operation even in harsh environments like fog and smoke. During the past decades, this technology has been applied in target recognition and in harsh environments, such as fog and underwater vision. Also, this technology has been

  6. Range-Image Acquisition for Discriminated Objects in a Range-gated Robot Vision System

    International Nuclear Information System (INIS)

    Park, Seung-Kyu; Ahn, Yong-Jin; Park, Nak-Kyu; Baik, Sung-Hoon; Choi, Young-Soo; Jeong, Kyung-Min

    2015-01-01

    The imaging capability of a surveillance vision system in harsh low-visibility environments, such as fire and detonation areas, is a key function for monitoring the safety of the facilities. 2D and range image data acquired in low-visibility environments are important for assessing the situation and preparing appropriate countermeasures. Passive vision systems, such as conventional cameras and binocular stereo vision systems, usually cannot acquire image information when the reflected light is highly scattered and absorbed by airborne particles such as fog. In addition, the resolution of images captured through low-density airborne particles is decreased because the image is blurred and dimmed by scattering, emission and absorption. Active vision systems, such as structured-light vision and projected stereo vision, are usually more robust in harsh environments than passive vision systems; however, their performance decreases considerably in proportion to the density of the particles. The RGI system provides 2D and range image data from several RGI images and, moreover, provides clear images in low-visibility fog and smoke environments by using the sum of time-sliced images. Nowadays, range-gated (RG) imaging is an emerging technology in the field of surveillance for security applications, especially in the visualization of invisible night and fog environments. Although RGI viewing was discovered in the 1960s, this technology is now becoming more applicable by virtue of the rapid development of optical and sensor technologies. In particular, this system can be adopted in a robot vision system by virtue of its compact, portable configuration. In contrast to passive vision systems, this technology enables operation even in harsh environments like fog and smoke. During the past decades, this technology has been applied in target recognition and in harsh environments, such as fog and underwater vision. Also, this technology has been
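The range-gated idea in these two records can be sketched in a few lines: the camera records one intensity slice per gate delay, summing the time-sliced images recovers a clear 2-D image, and the gate with the strongest return at each pixel gives a coarse per-pixel range. The slice values and gate ranges below are invented for illustration.

```python
# three gated slices (2x2 pixel frames), gate centers at 10 m, 20 m, 30 m
slices = [
    [[5, 0], [0, 1]],    # returns near 10 m
    [[1, 9], [0, 2]],    # returns near 20 m
    [[0, 1], [8, 7]],    # returns near 30 m
]
gate_range_m = [10, 20, 30]

h, w = 2, 2
# clear 2-D image: per-pixel sum over all time slices
sum_image = [[sum(s[y][x] for s in slices) for x in range(w)]
             for y in range(h)]
# coarse range image: range of the gate with the strongest return per pixel
range_image = [[gate_range_m[max(range(len(slices)),
                                 key=lambda i: slices[i][y][x])]
                for x in range(w)] for y in range(h)]
print(sum_image)
print(range_image)
```

Real systems refine this with sub-gate interpolation (e.g. a centroid over neighbouring gates), but the strongest-gate picture is the essential mechanism.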

  7. Learning for intelligent mobile robots

    Science.gov (United States)

    Hall, Ernest L.; Liao, Xiaoqun; Alhaj Ali, Souma M.

    2003-10-01

    Unlike intelligent industrial robots, which often work in a structured factory setting, intelligent mobile robots must often operate in an unstructured environment cluttered with obstacles and with many possible action paths. However, such machines have many potential applications in medicine, defense, industry and even the home that make their study important. Sensors such as vision are needed. However, in many applications some form of learning is also required. The purpose of this paper is to present a discussion of recent technical advances in learning for intelligent mobile robots. During the past 20 years, the use of intelligent industrial robots that are equipped not only with motion control systems but also with sensors such as cameras, laser scanners, or tactile sensors that permit adaptation to a changing environment has increased dramatically. However, relatively little has been done concerning learning. Adaptive and robust control permits one to achieve point-to-point and controlled-path operation in a changing environment. This problem can be solved with a learning control. In the unstructured environment, the terrain, and consequently the load on the robot's motors, are constantly changing. Learning the parameters of a proportional-integral-derivative (PID) controller and an artificial neural network provides an adaptive and robust control. Learning may also be used for path following. Simulations that include learning may be conducted to see if a robot can learn its way through a cluttered array of obstacles. If a situation is performed repetitively, then learning can also be used in the actual application. To reach an even higher degree of autonomous operation, a new level of learning is required. Recently, learning theories such as the adaptive critic have been proposed. In this type of learning a critic provides a grade to the controller of an action module such as a robot. A creative control process is used that goes "beyond the adaptive critic." A

  8. Declarative Rule-based Safety for Robotic Perception Systems

    DEFF Research Database (Denmark)

    Mogensen, Johann Thor Ingibergsson; Kraft, Dirk; Schultz, Ulrik Pagh

    2017-01-01

    Mobile robots are used across many domains from personal care to agriculture. Working in dynamic open-ended environments puts high constraints on the robot perception system, which is critical for the safety of the system as a whole. To achieve the required safety levels the perception system needs...... to be certified, but no specific standards exist for computer vision systems, and the concept of safe vision systems remains largely unexplored. In this paper we present a novel domain-specific language that allows the programmer to express image quality detection rules for enforcing safety constraints...
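The record above describes a domain-specific language for image-quality rules that gate the safety of a perception pipeline. A minimal stand-in for that idea: rules expressed declaratively as data, and an evaluator that rejects any frame violating a rule. The metric names and thresholds are invented; the paper's actual DSL syntax is not given in the snippet.

```python
# declarative image-quality rules: (metric name, lower bound, upper bound);
# None means "no bound on this side"
rules = [
    ("mean_brightness", 30.0, 220.0),   # reject over/under-exposed frames
    ("contrast",        10.0, None),    # reject flat (e.g. occluded) frames
]

def frame_is_safe(metrics, rules):
    """Return False as soon as any declared rule is violated."""
    for name, lo, hi in rules:
        v = metrics[name]
        if (lo is not None and v < lo) or (hi is not None and v > hi):
            return False
    return True

ok = frame_is_safe({"mean_brightness": 128.0, "contrast": 35.0}, rules)
bad = frame_is_safe({"mean_brightness": 245.0, "contrast": 35.0}, rules)
print(ok, bad)
```

Keeping the rules as data rather than code is what makes such checks amenable to certification review: the safety conditions can be inspected without reading the pipeline implementation.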

  9. CLARAty: Challenges and Steps Toward Reusable Robotic Software

    Directory of Open Access Journals (Sweden)

    Richard Madison

    2008-11-01

    Full Text Available We present in detail some of the challenges in developing reusable robotic software. We base that on our experience in developing the CLARAty robotics software, which is a generic object-oriented framework used for the integration of new algorithms in the areas of motion control, vision, manipulation, locomotion, navigation, localization, planning and execution. CLARAty was adapted to a number of heterogeneous robots with different mechanisms and hardware control architectures. In this paper, we also describe how we addressed some of these challenges in the development of the CLARAty software.

  10. CLARAty: Challenges and Steps toward Reusable Robotic Software

    Directory of Open Access Journals (Sweden)

    Issa A.D. Nesnas

    2006-03-01

    Full Text Available We present in detail some of the challenges in developing reusable robotic software. We base that on our experience in developing the CLARAty robotics software, which is a generic object-oriented framework used for the integration of new algorithms in the areas of motion control, vision, manipulation, locomotion, navigation, localization, planning and execution. CLARAty was adapted to a number of heterogeneous robots with different mechanisms and hardware control architectures. In this paper, we also describe how we addressed some of these challenges in the development of the CLARAty software.

  11. VisGraB: A Benchmark for Vision-Based Grasping. Paladyn Journal of Behavioral Robotics

    DEFF Research Database (Denmark)

    Kootstra, Gert; Popovic, Mila; Jørgensen, Jimmy Alison

    2012-01-01

    We present a database and a software tool, VisGraB, for benchmarking of methods for vision-based grasping of unknown objects with no prior object knowledge. The benchmark is a combined real-world and simulated experimental setup. Stereo images of real scenes containing several objects in different...... that a large number of grasps can be executed and evaluated while dealing with dynamics and the noise and uncertainty present in the real-world images. VisGraB enables a fair comparison among different grasping methods. The user furthermore does not need to deal with robot hardware, focusing on the vision......

  12. Soft Robotic Manipulation of Onions and Artichokes in the Food Industry

    Directory of Open Access Journals (Sweden)

    R. Morales

    2014-04-01

    Full Text Available This paper presents the development of a robotic solution for the fast manipulation and handling of onions or artichokes in the food industry. The complete solution consists of a parallel robotic manipulator, a specially designed end-effector based on a customized vacuum suction cup, and computer vision software developed for pick-and-place operations. First, the selection and design process of the proposed robotic solution to fit the initial requirements is presented, including the customized vacuum suction cup. Then, the kinematic analysis of the parallel manipulator needed to develop the robot control system is reviewed. Moreover, the computer vision application is presented in the paper. Hardware details of the implementation of the prototype are also shown. Finally, conclusions and future work show the current status of the project.

  13. Generic robot architecture

    Science.gov (United States)

    Bruemmer, David J [Idaho Falls, ID; Few, Douglas A [Idaho Falls, ID

    2010-09-21

    The present invention provides methods, computer readable media, and apparatuses for a generic robot architecture providing a framework that is easily portable to a variety of robot platforms and is configured to provide hardware abstractions, abstractions for generic robot attributes, environment abstractions, and robot behaviors. The generic robot architecture includes a hardware abstraction level and a robot abstraction level. The hardware abstraction level is configured for developing hardware abstractions that define, monitor, and control hardware modules available on a robot platform. The robot abstraction level is configured for defining robot attributes and provides a software framework for building robot behaviors from the robot attributes. Each of the robot attributes includes hardware information from at least one hardware abstraction. In addition, each robot attribute is configured to substantially isolate the robot behaviors from the at least one hardware abstraction.

  14. Molecular Robots Obeying Asimov's Three Laws of Robotics.

    Science.gov (United States)

    Kaminka, Gal A; Spokoini-Stern, Rachel; Amir, Yaniv; Agmon, Noa; Bachelet, Ido

    2017-01-01

    Asimov's three laws of robotics, which were shaped in the literary work of Isaac Asimov (1920-1992) and others, define a crucial code of behavior that fictional autonomous robots must obey as a condition for their integration into human society. While general implementation of these laws in robots is widely considered impractical, limited-scope versions have been demonstrated and have proven useful in spurring scientific debate on aspects of safety and autonomy in robots and intelligent systems. In this work, we use Asimov's laws to examine these notions in molecular robots fabricated from DNA origami. We successfully programmed these robots to obey, by means of interactions between individual robots in a large population, an appropriately scoped variant of Asimov's laws, and even emulate the key scenario from Asimov's story "Runaround," in which a fictional robot gets into trouble despite adhering to the laws. Our findings show that abstract, complex notions can be encoded and implemented at the molecular scale, when we understand robots on this scale on the basis of their interactions.

  15. Colias: An Autonomous Micro Robot for Swarm Robotic Applications

    Directory of Open Access Journals (Sweden)

    Farshad Arvin

    2014-07-01

    Full Text Available Robotic swarms that take inspiration from nature are becoming a fascinating topic for multi-robot researchers. The aim is to control a large number of simple robots in order to solve common complex tasks. Due to the hardware complexity and cost of robot platforms, current research in swarm robotics is mostly performed with simulation software. However, the simulation of large numbers of these robots in robotic swarm applications is extremely complex and often inaccurate due to the poor modelling of external conditions. In this paper, we present the design of a low-cost, open-platform, autonomous micro-robot (Colias) for robotic swarm applications. Colias employs a circular platform with a diameter of 4 cm. Its maximum speed of 35 cm/s enables it to cover large arenas quickly in swarm scenarios. Long-range infrared modules with an adjustable output power allow the robot to communicate with its direct neighbours at a range of 0.5 cm to 2 m. Colias has been designed as a complete platform, with supporting software development tools, for robotics education and research. It has been tested in both individual and swarm scenarios, and the observed results demonstrate its feasibility for use as a micro-sized mobile robot and as a low-cost platform for robot swarm applications.

  16. Neuro-Inspired Spike-Based Motion: From Dynamic Vision Sensor to Robot Motor Open-Loop Control through Spike-VITE

    Directory of Open Access Journals (Sweden)

    Fernando Perez-Peña

    2013-11-01

    Full Text Available In this paper we present a complete spike-based architecture: from a Dynamic Vision Sensor (retina) to a stereo-head robotic platform. The aim of this research is to reproduce intended movements performed by humans, taking into account as many features as possible from the biological point of view. This paper fills the gap between current spiking silicon sensors and robotic actuators by applying a spike-processing strategy to the data flows in real time. The architecture is divided into layers: the retina; visual information processing; the trajectory generator layer, which uses a neuro-inspired algorithm (SVITE) that can be replicated as many times as the robot has DoF; and finally the actuation layer, which supplies the spikes to the robot (using PFM). All the layers do their tasks in spike-processing mode, and they communicate with each other through the neuro-inspired AER protocol. The open-loop controller is implemented on an FPGA using AER interfaces developed by RTC Lab. Experimental results reveal the viability of this spike-based controller. Two main advantages are the low hardware resources (2% of a Xilinx Spartan 6) and power requirements (3.4 W) needed to control a robot with a high number of DoF (up to 100 for a Xilinx Spartan 6). The results also evidence the suitability of AER as a communication protocol between processing and actuation.

  17. Neuro-Inspired Spike-Based Motion: From Dynamic Vision Sensor to Robot Motor Open-Loop Control through Spike-VITE

    Science.gov (United States)

    Perez-Peña, Fernando; Morgado-Estevez, Arturo; Linares-Barranco, Alejandro; Jimenez-Fernandez, Angel; Gomez-Rodriguez, Francisco; Jimenez-Moreno, Gabriel; Lopez-Coronado, Juan

    2013-01-01

    In this paper we present a complete spike-based architecture: from a Dynamic Vision Sensor (retina) to a stereo-head robotic platform. The aim of this research is to reproduce intended movements performed by humans, taking into account as many features as possible from the biological point of view. This paper fills the gap between current spiking silicon sensors and robotic actuators by applying a spike-processing strategy to the data flows in real time. The architecture is divided into layers: the retina; visual information processing; the trajectory generator layer, which uses a neuro-inspired algorithm (SVITE) that can be replicated as many times as the robot has DoF; and finally the actuation layer, which supplies the spikes to the robot (using PFM). All the layers do their tasks in spike-processing mode, and they communicate with each other through the neuro-inspired AER protocol. The open-loop controller is implemented on an FPGA using AER interfaces developed by RTC Lab. Experimental results reveal the viability of this spike-based controller. Two main advantages are the low hardware resources (2% of a Xilinx Spartan 6) and power requirements (3.4 W) needed to control a robot with a high number of DoF (up to 100 for a Xilinx Spartan 6). The results also evidence the suitability of AER as a communication protocol between processing and actuation. PMID:24264330

  18. Robot and Human Surface Operations on Solar System Bodies

    Science.gov (United States)

    Weisbin, C. R.; Easter, R.; Rodriguez, G.

    2001-01-01

    This paper presents a comparison of robot and human surface operations on solar system bodies. The topics include: 1) Long Range Vision of Surface Scenarios; 2) Human and Robots Complement Each Other; 3) Respective Human and Robot Strengths; 4) Need More In-Depth Quantitative Analysis; 5) Projected Study Objectives; 6) Analysis Process Summary; 7) Mission Scenarios Decompose into Primitive Tasks; 8) Features of the Projected Analysis Approach; and 9) The "Getting There Effect" is a Major Consideration. This paper is in viewgraph form.

  19. Visual Detection and Tracking System for a Spherical Amphibious Robot.

    Science.gov (United States)

    Guo, Shuxiang; Pan, Shaowu; Shi, Liwei; Guo, Ping; He, Yanlin; Tang, Kun

    2017-04-15

    With the goal of supporting close-range observation tasks of a spherical amphibious robot, such as ecological observations and intelligent surveillance, a moving target detection and tracking system was designed and implemented in this study. Given the restrictions presented by the amphibious environment and the small-sized spherical amphibious robot, an industrial camera and vision algorithms using adaptive appearance models were adopted to construct the proposed system. To handle the problem of light scattering and absorption in the underwater environment, the multi-scale retinex with color restoration algorithm was used for image enhancement. Given the environmental disturbances in practical amphibious scenarios, the Gaussian mixture model was used to detect moving targets entering the field of view of the robot. A fast compressive tracker with a Kalman prediction mechanism was used to track the specified target. Considering the limited load space and the unique mechanical structure of the robot, the proposed vision system was fabricated with a low power system-on-chip using an asymmetric and heterogeneous computing architecture. Experimental results confirmed the validity and high efficiency of the proposed system. The design presented in this paper is able to meet future demands of spherical amphibious robots in biological monitoring and multi-robot cooperation.

  20. Visual Detection and Tracking System for a Spherical Amphibious Robot

    Science.gov (United States)

    Guo, Shuxiang; Pan, Shaowu; Shi, Liwei; Guo, Ping; He, Yanlin; Tang, Kun

    2017-01-01

    With the goal of supporting close-range observation tasks of a spherical amphibious robot, such as ecological observations and intelligent surveillance, a moving target detection and tracking system was designed and implemented in this study. Given the restrictions presented by the amphibious environment and the small-sized spherical amphibious robot, an industrial camera and vision algorithms using adaptive appearance models were adopted to construct the proposed system. To handle the problem of light scattering and absorption in the underwater environment, the multi-scale retinex with color restoration algorithm was used for image enhancement. Given the environmental disturbances in practical amphibious scenarios, the Gaussian mixture model was used to detect moving targets entering the field of view of the robot. A fast compressive tracker with a Kalman prediction mechanism was used to track the specified target. Considering the limited load space and the unique mechanical structure of the robot, the proposed vision system was fabricated with a low power system-on-chip using an asymmetric and heterogeneous computing architecture. Experimental results confirmed the validity and high efficiency of the proposed system. The design presented in this paper is able to meet future demands of spherical amphibious robots in biological monitoring and multi-robot cooperation. PMID:28420134
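The two records above use a Gaussian mixture model for moving-target detection. A minimal stand-in, shown here as a sketch: a per-pixel running Gaussian background model (a one-component simplification of the full mixture) on a 1-D "frame", flagging pixels that deviate from the learned background. The learning rate, threshold, and frame values are illustrative assumptions.

```python
class RunningGaussianBG:
    def __init__(self, alpha=0.05, k=2.5):
        self.alpha, self.k = alpha, k     # learning rate, deviation threshold
        self.mean, self.var = None, None

    def apply(self, frame):
        """Return a foreground mask (1 = moving) and update the model."""
        if self.mean is None:             # first frame initializes the model
            self.mean = [float(v) for v in frame]
            self.var = [25.0] * len(frame)
            return [0] * len(frame)
        mask = []
        for i, v in enumerate(frame):
            d = v - self.mean[i]
            fg = d * d > (self.k ** 2) * self.var[i]
            mask.append(1 if fg else 0)
            if not fg:  # only background pixels update the model, so a
                        # stationary target is not absorbed immediately
                self.mean[i] += self.alpha * d
                self.var[i] += self.alpha * (d * d - self.var[i])
        return mask

bg = RunningGaussianBG()
bg.apply([10, 10, 10, 10])          # learn the static scene
mask = bg.apply([10, 10, 90, 10])   # a bright target enters pixel 2
print(mask)
```

The full GMM keeps several Gaussians per pixel so it can also model repetitive background motion (e.g. water ripples in the amphibious setting), which this single-Gaussian sketch cannot.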

  1. Mobile robot for hazardous environments

    International Nuclear Information System (INIS)

    Bains, N.

    1995-01-01

    This paper describes the architecture and potential applications of the autonomous robot for a known environment (ARK). The ARK project has developed an autonomous mobile robot that can move around by itself in a complicated nuclear environment utilizing a number of sensors for navigation. The primary sensor system is computer vision. The ARK has the intelligence to determine its position utilizing "natural landmarks," such as ordinary building features, at any point along its path. It is this feature that gives ARK its uniqueness to operate in an industrial type of environment. The prime motivation to develop ARK was the potential application of mobile robots in radioactive areas within nuclear generating stations and for nuclear waste sites. The project budget is $9 million over 4 yr and will be completed in October 1995

  2. Kinesthetic deficits after perinatal stroke: robotic measurement in hemiparetic children.

    Science.gov (United States)

    Kuczynski, Andrea M; Semrau, Jennifer A; Kirton, Adam; Dukelow, Sean P

    2017-02-15

    While sensory dysfunction is common in children with hemiparetic cerebral palsy (CP) secondary to perinatal stroke, it is an understudied contributor to disability with limited objective measurement tools. Robotic technology offers the potential to objectively measure complex sensorimotor function but has been understudied in perinatal stroke. The present study aimed to quantify kinesthetic deficits in hemiparetic children with perinatal stroke and determine their association with clinical function. Case-control study. Participants were 6-19 years of age. Stroke participants had MRI confirmed unilateral perinatal arterial ischemic stroke or periventricular venous infarction, and symptomatic hemiparetic cerebral palsy. Participants completed a robotic assessment of upper extremity kinesthesia using a robotic exoskeleton (KINARM). Four kinesthetic parameters (response latency, initial direction error, peak speed ratio, and path length ratio) and their variabilities were measured with and without vision. Robotic outcomes were compared across stroke groups and controls and to clinical measures of sensorimotor function. Forty-three stroke participants (23 arterial, 20 venous, median age 12 years, 42% female) were compared to 106 healthy controls. Stroke cases displayed significantly impaired kinesthesia that remained when vision was restored. Kinesthesia was more impaired in arterial versus venous lesions and correlated with clinical measures. Robotic assessment of kinesthesia is feasible in children with perinatal stroke. Kinesthetic impairment is common and associated with stroke type. Failure to correct with vision suggests sensory network dysfunction.

  3. Real-time stereo generation for surgical vision during minimal invasive robotic surgery

    Science.gov (United States)

    Laddi, Amit; Bhardwaj, Vijay; Mahapatra, Prasant; Pankaj, Dinesh; Kumar, Amod

    2016-03-01

    This paper proposes a framework for 3D surgical vision for minimally invasive robotic surgery. It presents an approach for generating a three-dimensional view of live in-vivo surgical procedures from two images captured by a very small, full-resolution camera sensor rig. A pre-processing scheme is employed to enhance the image quality and equalize the color profiles of the two images. Polarized projection with interlacing of the two images gives a smooth and strain-free three-dimensional view. The algorithm runs in real time at good speed at full HD resolution.

  4. Motion based segmentation for robot vision using adapted EM algorithm

    NARCIS (Netherlands)

    Zhao, Wei; Roos, Nico

    2016-01-01

    Robots operate in a dynamic world in which objects are often moving. The movement of objects may help the robot to segment the objects from the background. The result of the segmentation can subsequently be used to identify the objects. This paper investigates the possibility of segmenting objects
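Although the abstract is truncated, the core idea it names, EM-based segmentation of moving objects, can be sketched in a toy form: fit a two-component Gaussian mixture to per-pixel optical-flow magnitudes so that "background" (small motion) and "moving object" (large motion) pixels separate. This is only a generic EM sketch, not the paper's adapted algorithm.

```python
import numpy as np

def em_two_gaussians(x, iters=50):
    """Tiny EM fit of a two-component 1D Gaussian mixture.

    `x` would be per-pixel optical-flow magnitudes; the two components
    stand for background and moving objects. Initialization and the
    variance floor are crude illustrative choices.
    """
    x = np.asarray(x, dtype=float)
    mu = np.array([x.min(), x.max()])          # crude initialization
    var = np.array([x.var() + 1e-6] * 2)
    pi = np.array([0.5, 0.5])
    for _ in range(iters):
        # E-step: responsibility of each component for each sample.
        d = (x[:, None] - mu) ** 2
        p = pi * np.exp(-0.5 * d / var) / np.sqrt(2 * np.pi * var)
        r = p / p.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights, means, and variances.
        n = r.sum(axis=0)
        pi = n / len(x)
        mu = (r * x[:, None]).sum(axis=0) / n
        var = (r * d).sum(axis=0) / n + 1e-6
    return mu, var, pi
```

Thresholding responsibilities (or simply assigning each pixel to its more likely component) then yields the motion segmentation mask.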

  5. Micro intelligence robot

    International Nuclear Information System (INIS)

    Jeon, Yon Ho

    1991-07-01

This book describes micro robots, covering the concept of robots and micro robots, the match rules of micro robot competitions, maze search methods, and the future and prospects of robots. It also explains the making and design of an 8-bit robot, including construction techniques, software, the sensor board circuit, and a stepping motor catalog, with examples such as Speedy 3 and Mr. Black and Mr. White, as well as the making and design of a 16-bit robot, such as a micro robot artist, Jerry 2, and a distance-shortening algorithm for robot simulation.

  6. Internet remote control interface for a multipurpose robotic arm

    Directory of Open Access Journals (Sweden)

    Matthew W. Dunnigan

    2008-11-01

Full Text Available This paper presents an Internet remote control interface for a MITSUBISHI PA10-6CE manipulator, established for the ROBOT museum exhibition during spring and summer 2004. The robotic manipulator is part of the Intelligent Robotic Systems Laboratory at Heriot-Watt University, which was established to work on dynamic and kinematic aspects of manipulator control in the presence of environmental disturbances. The laboratory has been enriched by a simple vision system consisting of three web cameras that broadcast live images of the robots over the Internet. The interface comprises a TCP/IP server, which provides command parsing and execution using the open controller architecture of the manipulator, and a client Java applet website providing a simple robot control interface.
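The command-parsing side of such a TCP/IP server can be sketched as below. The wire format (`MOVEJ`, `STOP`) is entirely hypothetical, invented for illustration, since the paper does not publish its protocol; only the six-joint count matches the PA10-6CE.

```python
def parse_command(line):
    """Parse one text command for a hypothetical remote-manipulator server.

    Commands look like "MOVEJ 10 20 30 40 50 60" (six joint angles in
    degrees) or "STOP". Returns a (verb, args) tuple or raises ValueError.
    """
    parts = line.strip().split()
    if not parts:
        raise ValueError("empty command")
    verb, args = parts[0].upper(), parts[1:]
    if verb == "STOP":
        if args:
            raise ValueError("STOP takes no arguments")
        return ("STOP", [])
    if verb == "MOVEJ":
        if len(args) != 6:                     # the PA10-6CE has six joints
            raise ValueError("MOVEJ needs 6 joint angles")
        return ("MOVEJ", [float(a) for a in args])
    raise ValueError("unknown command: " + verb)
```

In a real server this function would sit behind a socket accept loop, with each parsed command dispatched to the manipulator's controller.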

  7. Terpsichore. ENEA's autonomous robotics project; Progetto Tersycore, la robotica autonoma

    Energy Technology Data Exchange (ETDEWEB)

    Taraglio, S; Zanela, S; Santini, A; Nanni, V [ENEA, Centro Ricerche Casaccia, Rome (Italy). Div. Robotica e Informatica Avanzata

    1999-10-01

The article presents some of the Terpsichore project's results, aimed at developing and testing algorithms and applications for autonomous robotics. Four applications are described: dynamic mapping of a building's interior through the use of ultrasonic sensors; visual driving of an autonomous robot via a neural network controller; a neural network-based stereo vision system that steers a robot through unknown indoor environments; and the evolution of intelligent behaviours via a genetic algorithm approach.

  8. Modelling of industrial robot in LabView Robotics

    Science.gov (United States)

    Banas, W.; Cwikła, G.; Foit, K.; Gwiazda, A.; Monica, Z.; Sekala, A.

    2017-08-01

Many models of industrial systems, including robots, are currently available. These models differ not only in the accuracy of the represented parameters but also in their scope. For example, CAD models describe the geometry of the robot, and some also provide mass parameters such as mass, center of gravity, and moments of inertia. These models are used in the design of robotic lines and cells. Off-line programming systems also use such models, and many can be exchanged with CAD. It is important to note that models for off-line programming describe not only the geometry but also contain the information necessary to create a program for the robot, so exporting from CAD to an off-line programming system requires additional information. These models are used for static determination of the reachability of points and for collision testing; this is enough to generate a program for the robot and even to check the interaction of the elements of a production line or robotic cell. Mathematical models, by contrast, allow the kinematic and dynamic properties of robot motion to be studied. In these models the geometry is less important, so only selected parameters are used, such as the lengths of the robot's links, centers of gravity, and moments of inertia. These parameters are introduced into the robot's equations of motion, from which the motion parameters are determined.
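The point that a mathematical model may need only a few parameters, such as link lengths, can be made concrete with the forward kinematics of a planar two-link arm. This is generic robotics math with made-up link lengths, not the paper's LabVIEW model.

```python
import numpy as np

def fk_two_link(theta1, theta2, l1=0.4, l2=0.3):
    """Forward kinematics of a planar two-link arm.

    Only the link lengths l1, l2 (illustrative values, in metres) matter
    for the end-effector position; joint angles are in radians.
    """
    x = l1 * np.cos(theta1) + l2 * np.cos(theta1 + theta2)
    y = l1 * np.sin(theta1) + l2 * np.sin(theta1 + theta2)
    return x, y
```

With both joints at zero the arm lies along the x-axis, so the end effector sits at (l1 + l2, 0).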

  9. Night Vision Image De-Noising of Apple Harvesting Robots Based on the Wavelet Fuzzy Threshold

    Directory of Open Access Journals (Sweden)

    Chengzhi Ruan

    2015-12-01

Full Text Available In this paper, the de-noising problem of night vision images is studied for apple harvesting robots working at night. The wavelet threshold method is applied to the de-noising of night vision images. Because the choice of the wavelet threshold function restricts the effect of the wavelet threshold method, fuzzy theory is introduced to construct a fuzzy threshold function, and we propose a de-noising algorithm based on the wavelet fuzzy threshold. The new method reduces image noise interference, which is conducive to further image segmentation and recognition. To demonstrate its performance, we conducted simulation experiments and compared it with median filtering and wavelet soft-threshold de-noising. The new method achieves the highest relative PSNR: compared with the original images, the median filtering method, and the classical wavelet threshold method, the relative PSNR increases by 24.86%, 13.95%, and 11.38%, respectively. We carry out comparisons from various aspects, such as intuitive visual evaluation, objective data evaluation, edge evaluation, and artificial light evaluation. The experimental results show that the proposed method has unique advantages for the de-noising of night vision images, laying a foundation for apple harvesting robots working at night.
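The classical rule that the fuzzy threshold function replaces is standard soft shrinkage of wavelet coefficients. The sketch below shows only that baseline rule, not the paper's fuzzy variant: coefficients with magnitude below the threshold are zeroed, and the rest are shrunk toward zero.

```python
import numpy as np

def soft_threshold(coeffs, t):
    """Classical wavelet soft-thresholding rule.

    Applied to detail coefficients of a wavelet decomposition, this
    suppresses small (noise-dominated) coefficients while shrinking the
    remaining ones by t.
    """
    return np.sign(coeffs) * np.maximum(np.abs(coeffs) - t, 0.0)
```

In a full pipeline one would decompose the image (e.g. with PyWavelets), apply this rule to the detail coefficients at each level, and reconstruct.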

  10. Towards Sociable Robots

    DEFF Research Database (Denmark)

    Ngo, Trung Dung

This thesis studies aspects of self-sufficient energy (energy autonomy) for truly autonomous robots and towards sociable robots. Over sixty years of history of robotics through three developmental ages containing single robot, multi-robot systems, and social (sociable) robots, the main objective of roboticists mostly focuses on how to make a robotic system function autonomously and further, socially. However, such approaches mostly emphasize behavioural autonomy, rather than energy autonomy which is the key factor for not only any living machine, but for life on the earth. Consequently, self-sufficient energy is one of the challenges for not only single robot or multi-robot systems, but also social and sociable robots. This thesis is to deal with energy autonomy for multi-robot systems through energy sharing (trophallaxis) in which each robot is equipped with two capabilities: self-refueling energy...

  11. Biomass feeds vegetarian robot; Biomassa voedt vegetarische robot

    Energy Technology Data Exchange (ETDEWEB)

    Van den Brandt, M. [Office for Science and Technology, Embassy of the Kingdom of the Netherlands, Washington (United States)

    2009-09-15

This brief article addresses the EATR (Energetically Autonomous Tactical Robot), developed by Cyclone Power, which uses biomass as its primary energy source for propulsion.

  12. Robot Futures

    DEFF Research Database (Denmark)

    Christoffersen, Anja; Grindsted Nielsen, Sally; Jochum, Elizabeth Ann

Robots are increasingly used in health care settings, e.g., as homecare assistants and personal companions. One challenge for personal robots in the home is acceptance. We describe an innovative approach to influencing the acceptance of care robots using theatrical performance. Live performance is a useful testbed for developing and evaluating what makes robots expressive; it is also a useful platform for designing robot behaviors and dialogue that result in believable characters. Therefore theatre is a valuable testbed for studying human-robot interaction (HRI). We investigate how audiences perceive social robots interacting with humans in a future care scenario through a scripted performance. We discuss our methods and initial findings, and outline future work.

  13. Soft Robotics Week

    CERN Document Server

    Rossiter, Jonathan; Iida, Fumiya; Cianchetti, Matteo; Margheri, Laura

    2017-01-01

    This book offers a comprehensive, timely snapshot of current research, technologies and applications of soft robotics. The different chapters, written by international experts across multiple fields of soft robotics, cover innovative systems and technologies for soft robot legged locomotion, soft robot manipulation, underwater soft robotics, biomimetic soft robotic platforms, plant-inspired soft robots, flying soft robots, soft robotics in surgery, as well as methods for their modeling and control. Based on the results of the second edition of the Soft Robotics Week, held on April 25 – 30, 2016, in Livorno, Italy, the book reports on the major research lines and novel technologies presented and discussed during the event.

  14. Towards Versatile Robots Through Open Heterogeneous Modular Robots

    OpenAIRE

    Lyder, Andreas

    2010-01-01

    Robots are important tools in our everyday life. Both in industry and at the consumer level they serve the purpose of increasing our scope and extending our capabilities. Modular robots take the next step, allowing us to easily create and build various robots from a set of modules. If a problem arises, a new robot can be assembled rapidly from the existing modules, in contrast to conventional robots, which require a time consuming and expensive development process. In this thesis we define a ...

  15. Robotic architectures

    CSIR Research Space (South Africa)

    Mtshali, M

    2010-01-01

    Full Text Available In the development of mobile robotic systems, a robotic architecture plays a crucial role in interconnecting all the sub-systems and controlling the system. The design of robotic architectures for mobile autonomous robots is a challenging...

  16. Conceptual spatial representations for indoor mobile robots

    OpenAIRE

    Zender, Henrik; Mozos, Oscar Martinez; Jensfelt, Patric; Kruijff, Geert-Jan M.; Wolfram, Burgard

    2008-01-01

We present an approach for creating conceptual representations of human-made indoor environments using mobile robots. The concepts refer to spatial and functional properties of typical indoor environments. Following findings in cognitive psychology, our model is composed of layers representing maps at different levels of abstraction. The complete system is integrated in a mobile robot endowed with laser and vision sensors for place and object recognition. The system also incorporates...

  17. Soft Robotics.

    Science.gov (United States)

    Whitesides, George M

    2018-04-09

    This description of "soft robotics" is not intended to be a conventional review, in the sense of a comprehensive technical summary of a developing field. Rather, its objective is to describe soft robotics as a new field-one that offers opportunities to chemists and materials scientists who like to make "things" and to work with macroscopic objects that move and exert force. It will give one (personal) view of what soft actuators and robots are, and how this class of soft devices fits into the more highly developed field of conventional "hard" robotics. It will also suggest how and why soft robotics is more than simply a minor technical "tweak" on hard robotics and propose a unique role for chemistry, and materials science, in this field. Soft robotics is, at its core, intellectually and technologically different from hard robotics, both because it has different objectives and uses and because it relies on the properties of materials to assume many of the roles played by sensors, actuators, and controllers in hard robotics. © 2018 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.

  18. Motion and Emotional Behavior Design for Pet Robot Dog

    Science.gov (United States)

    Cheng, Chi-Tai; Yang, Yu-Ting; Miao, Shih-Heng; Wong, Ching-Chang

A pet robot dog with two ears, one mouth, one facial expression plane, and one vision system is designed and implemented so that it can perform emotional behaviors. Three processors (an Intel® Pentium® M 1.0 GHz, an 8-bit 8051 processor, and an embedded soft-core NIOS processor) are used to control the robot. One camera, one power detector, four touch sensors, and one temperature detector are used to obtain information about the environment. The designed robot, with 20 DOF (degrees of freedom), is able to accomplish walking motions. A behavior system is built on the implemented pet robot so that it can choose a suitable behavior for each environmental situation. Practical tests show that the implemented pet robot dog can engage in emotional interaction with humans.

  19. Social Robotics in Therapy of Apraxia of Speech

    Directory of Open Access Journals (Sweden)

    José Carlos Castillo

    2018-01-01

Full Text Available Apraxia of speech is a motor speech disorder in which messages from the brain to the mouth are disrupted, resulting in an inability to move the lips or tongue to the right place to pronounce sounds correctly. Current therapies for this condition involve a therapist who conducts the exercises in one-on-one sessions. Our aim is to work along the line of robotic therapies, in which a robot is able to perform a therapy session partially or fully autonomously, endowing a social robot with the ability to assist therapists in apraxia-of-speech rehabilitation exercises. We therefore integrate computer vision and machine learning techniques to detect the mouth pose of the user and, on top of that, our social robot autonomously performs the different steps of the therapy using multimodal interaction.

  20. Automating the Incremental Evolution of Controllers for Physical Robots

    DEFF Research Database (Denmark)

    Faina, Andres; Jacobsen, Lars Toft; Risi, Sebastian

    2017-01-01

...“the evolution of digital objects.” The work presented here investigates how fully autonomous evolution of robot controllers can be realized in hardware, using an industrial robot and a marker-based computer vision system. In particular, this article presents an approach to automate the reconfiguration of the test environment and shows that it is possible, for the first time, to incrementally evolve a neural robot controller for different obstacle avoidance tasks with no human intervention. Importantly, the system offers a high level of robustness and precision that could potentially open up the range...

  1. Exploiting Child-Robot Aesthetic Interaction for a Social Robot

    OpenAIRE

    Lee, Jae-Joon; Kim, Dae-Won; Kang, Bo-Yeong

    2012-01-01

    A social robot interacts and communicates with humans by using the embodied knowledge gained from interactions with its social environment. In recent years, emotion has emerged as a popular concept for designing social robots. Several studies on social robots reported an increase in robot sociability through emotional imitative interactions between the robot and humans. In this paper conventional emotional interactions are extended by exploiting the aesthetic theories that the sociability of ...

  2. Evolutionary robotics

    Indian Academy of Sciences (India)

    In evolutionary robotics, a suitable robot control system is developed automatically through evolution due to the interactions between the robot and its environment. It is a complicated task, as the robot and the environment constitute a highly dynamical system. Several methods have been tried by various investigators to ...
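The evolutionary-robotics recipe described above can be sketched as a minimal mutation-and-selection loop over controller parameters. Here a toy fitness function stands in for the robot-environment interaction; all constants and the (mu+lambda)-style scheme are illustrative assumptions, not a specific published method.

```python
import numpy as np

def evolve(fitness, dim=8, pop_size=20, gens=60, sigma=0.1, seed=0):
    """Minimal (mu+lambda)-style evolutionary loop for a controller.

    A genome (e.g. the weight vector of a small neural controller) is
    mutated with Gaussian noise; the better half of the population
    survives each generation. `fitness` is any callable mapping a genome
    to a score to maximize.
    """
    rng = np.random.default_rng(seed)
    pop = rng.normal(0.0, 1.0, size=(pop_size, dim))
    for _ in range(gens):
        scores = np.array([fitness(g) for g in pop])
        parents = pop[np.argsort(scores)[-pop_size // 2:]]   # keep the best half
        children = parents + rng.normal(0.0, sigma, parents.shape)
        pop = np.vstack([parents, children])
    return pop[np.argmax([fitness(g) for g in pop])]
```

In a real system the fitness evaluation would run the candidate controller on the robot (or in simulation) and score the resulting behaviour, which is what makes the robot-environment coupling so dynamic.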

  3. Interactive Exploration Robots: Human-Robotic Collaboration and Interactions

    Science.gov (United States)

    Fong, Terry

    2017-01-01

    For decades, NASA has employed different operational approaches for human and robotic missions. Human spaceflight missions to the Moon and in low Earth orbit have relied upon near-continuous communication with minimal time delays. During these missions, astronauts and mission control communicate interactively to perform tasks and resolve problems in real-time. In contrast, deep-space robotic missions are designed for operations in the presence of significant communication delay - from tens of minutes to hours. Consequently, robotic missions typically employ meticulously scripted and validated command sequences that are intermittently uplinked to the robot for independent execution over long periods. Over the next few years, however, we will see increasing use of robots that blend these two operational approaches. These interactive exploration robots will be remotely operated by humans on Earth or from a spacecraft. These robots will be used to support astronauts on the International Space Station (ISS), to conduct new missions to the Moon, and potentially to enable remote exploration of planetary surfaces in real-time. In this talk, I will discuss the technical challenges associated with building and operating robots in this manner, along with lessons learned from research conducted with the ISS and in the field.

  4. The Development of a Robot-Based Learning Companion: A User-Centered Design Approach

    Science.gov (United States)

    Hsieh, Yi-Zeng; Su, Mu-Chun; Chen, Sherry Y.; Chen, Gow-Dong

    2015-01-01

    A computer-vision-based method is widely employed to support the development of a variety of applications. In this vein, this study uses a computer-vision-based method to develop a playful learning system, which is a robot-based learning companion named RobotTell. Unlike existing playful learning systems, a user-centered design (UCD) approach is…

  5. ROBOT LITERACY AN APPROACH FOR SHARING SOCIETY WITH INTELLIGENT ROBOTS

    Directory of Open Access Journals (Sweden)

    Hidetsugu Suto

    2013-12-01

    Full Text Available A novel concept of media education called “robot literacy” is proposed. Here, robot literacy refers to the means of forming an appropriate relationship with intelligent robots. It can be considered a kind of media literacy. People who were born after the Internet age can be considered “digital natives” who have new morals and values and behave differently than previous generations in Internet societies. This can cause various problems among different generations. Thus, the necessity of media literacy education is increasing. Internet technologies, as well as robotics technologies are growing rapidly, and people who are born after the “home robot age,” whom the author calls “robot natives,” will be expected to have a certain degree of “robot literacy.” In this paper, the concept of robot literacy is defined and an approach to robot literacy education is discussed.

  6. Robotic buildings(s)

    NARCIS (Netherlands)

    Bier, H.H.

    2014-01-01

Technological and conceptual advances in fields such as artificial intelligence, robotics, and material science have enabled robotic building to be prototypically implemented in the last decade. In this context, robotic building implies both physically built robotic environments and robotically

  7. RGB–D terrain perception and dense mapping for legged robots

    Directory of Open Access Journals (Sweden)

    Belter Dominik

    2016-03-01

Full Text Available This paper addresses the issues of unstructured terrain modeling for the purpose of navigation with legged robots. We present an improved elevation grid concept adapted to the specific requirements of a small legged robot with limited perceptual capabilities. We propose an extension of the elevation grid update mechanism by incorporating a formal treatment of the spatial uncertainty. Moreover, this paper presents uncertainty models for a structured-light RGB-D sensor and a stereo vision camera used to produce a dense depth map. The model for the uncertainty of the stereo vision camera is based on uncertainty propagation from calibration, through undistortion and rectification algorithms, allowing calculation of the uncertainty of measured 3D point coordinates. The proposed uncertainty models were used for the construction of a terrain elevation map using the Videre Design STOC stereo vision camera and Kinect-like range sensors. We provide experimental verification of the proposed mapping method, and a comparison with another recently published terrain mapping method for walking robots.
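The idea of an uncertainty-aware elevation grid update can be illustrated with a one-dimensional Kalman-style fusion per cell: each new height measurement, with a variance supplied by the sensor's uncertainty model, is blended with the cell's current estimate. This is a generic sketch of the concept, not the paper's exact mechanism.

```python
def update_cell(mean, var, z, var_z):
    """Fuse a new height measurement into one elevation-grid cell.

    mean/var: current cell estimate; z/var_z: new measurement and its
    variance (e.g. from a stereo or RGB-D uncertainty model).
    Returns the updated (mean, var); variance always shrinks, so the
    cell becomes more certain with every consistent measurement.
    """
    k = var / (var + var_z)                  # Kalman gain
    new_mean = mean + k * (z - mean)
    new_var = (1.0 - k) * var
    return new_mean, new_var
```

Measurements with large variance (e.g. distant stereo points) thus pull the cell estimate only weakly, which is the practical payoff of propagating sensor uncertainty into the map.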

  8. Application of robotics to distribution systems

    Energy Technology Data Exchange (ETDEWEB)

    Ramsbottom, W

    1986-06-01

    Robotic technology has been recognized as having potential application in lifeline maintenance and repair. A study was conducted to investigate the feasibility of utilizing robotics for this purpose, and to prepare a general design of appropriate equipment. Four lifeline tasks were selected as representative of the majority of work. Based on a detailed task decomposition, subtasks were rated on amenability to robot completion. All tasks are feasible, but in some cases special tooling is required. Based on today's robotics, it is concluded that a force reflecting master/slave telemanipulator, augmented by automatic robot tasks under a supervisory control system, provides the optimal approach. No commercially available products are currently adequate for lifeline work. A general design of the telemanipulator, which has been named the SKYARM has been developed, addressing all subsystems such as the manipulator, video, control power and insulation. The baseline system is attainable using today's technology. Improved performance and lower cost will be achieved through developments in artificial intelligence, machine vision, supervisory control and dielectrics. Immediate benefits to utilities include increased safety, better service and savings on a subset of maintenance tasks. In 3-5 years, the SKYARM will prove cost effective as a general purpose lifeline tool. 7 refs., 26 figs., 3 tabs.

  9. Cloud Robotics Platforms

    Directory of Open Access Journals (Sweden)

    Busra Koken

    2015-01-01

Full Text Available Cloud robotics is a rapidly evolving field that allows robots to offload computation-intensive and storage-intensive jobs into the cloud. Robots are limited in terms of computational capacity, memory, and storage. The cloud provides unlimited computation power, memory, storage, and, especially, opportunities for collaboration. Cloud-enabled robots are divided into two categories: standalone and networked robots. This article surveys cloud robotic platforms and standalone and networked robotic work such as grasping, simultaneous localization and mapping (SLAM), and monitoring.

  10. Distributed Robotics Education

    DEFF Research Database (Denmark)

    Lund, Henrik Hautop; Pagliarini, Luigi

    2011-01-01

Distributed robotics takes many forms, for instance, multirobots, modular robots, and self-reconfigurable robots. The understanding and development of such advanced robotic systems demand extensive knowledge in engineering and computer science. In this paper, we describe the concept of a distribu... ...to be changed, related to multirobot control and human-robot interaction control from virtual to physical representation. The proposed system is valuable for bringing a vast number of issues into education – such as parallel programming, distribution, communication protocols, master dependency, connectivity...

  11. Robot Mechanisms

    CERN Document Server

    Lenarcic, Jadran; Stanišić, Michael M

    2013-01-01

    This book provides a comprehensive introduction to the area of robot mechanisms, primarily considering industrial manipulators and humanoid arms. The book is intended for both teaching and self-study. Emphasis is given to the fundamentals of kinematic analysis and the design of robot mechanisms. The coverage of topics is untypical. The focus is on robot kinematics. The book creates a balance between theoretical and practical aspects in the development and application of robot mechanisms, and includes the latest achievements and trends in robot science and technology.

  12. Robots de servicio

    Directory of Open Access Journals (Sweden)

    Rafael Aracil

    2008-04-01

Full Text Available Abstract: The term Service Robots appeared at the end of the 1980s out of the need to develop machines and systems capable of working in environments other than factories. Service robots had to be able to work in unstructured environments, under changing ambient conditions, and in close interaction with humans. In 1995 the IEEE Robotics and Automation Society created the Technical Committee on Service Robots, and in 2000 this committee defined the application areas of service robots, which can be divided into two large groups: 1) non-manufacturing productive sectors such as construction, agriculture, shipbuilding, mining, medicine, etc., and 2) service sectors proper: personal assistance, cleaning, surveillance, education, entertainment, etc. This paper briefly reviews the main concepts and applications of service robots. Keywords: Service robots, autonomous robots, outdoor robots, education and entertainment robots, walking and climbing robots, humanoid robots

  13. Robotic inspection technology-process an toolbox

    Energy Technology Data Exchange (ETDEWEB)

    Hermes, Markus [ROSEN Group (United States). R and D Dept.

    2005-07-01

Pipeline deterioration grows progressively as pipeline systems (on-plot and cross-country) age. This includes both very localized corrosion and an increasing failure probability due to fatigue cracking. Limiting regular inspection activities to the 'scrapable' part of the pipelines only will ultimately result in a pipeline system of questionable integrity, with confidence in the integrity of these systems dropping below acceptance levels. Inspection of presently un-inspectable sections of the pipeline system becomes a must. This paper provides information on ROSEN's progress on the 'robotic inspection technology' project. The robotic inspection concept developed by ROSEN is based on a modular toolbox principle. This is mandatory: a universal 'all purpose' robot would not be reliable and efficient in resolving the postulated inspection task. A preparatory Quality Function Deployment (QFD) analysis is performed before deciding on the adequate robotic solution, which enhances the serviceability and efficiency of the provided technology. The word 'robotic' can be understood in its full meaning of Recognition - Strategy - Motion - Control. Cooperation of different individual systems with established communication, e.g. utilizing Bluetooth technology, supports the robustness of the ROSEN robotic inspection approach. Besides the navigation strategy, the inspection strategy is also part of the QFD process. Multiple inspection technologies, combined on a single carrier or distributed across interacting containers, must be selected with a clear vision of the particular goal. (author)

  14. Filigree Robotics

    DEFF Research Database (Denmark)

    Tamke, Martin; Evers, Henrik Leander; Clausen Nørgaard, Esben

    2016-01-01

Filigree Robotics experiments with the combination of traditional ceramic craft with robotic fabrication in order to generate a new narrative of fine three-dimensional ceramic ornament for architecture.

  15. Vision-based online vibration estimation of the in-vessel inspection flexible robot with short-time Fourier transformation

    International Nuclear Information System (INIS)

    Wang, Hesheng; Chen, Weidong; Xu, Lifei; He, Tao

    2015-01-01

Highlights: • Vision-based online vibration estimation method for a flexible arm is proposed. • The vibration signal is obtained by image processing in unknown environments. • Vibration parameters are estimated by short-time Fourier transformation. - Abstract: Vibration caused by the structural features and material properties of a flexible robot may arise during motion or under external disturbance, and should be suppressed because it can degrade positioning accuracy and image quality. In the Tokamak environment, real-time vibration information is needed to suppress vibration of the robotic arm, yet some sensors are not allowed in this extreme environment. This paper proposes a vision-based method for online vibration estimation of a flexible manipulator, which uses the environment image information from the end-effector camera to estimate the arm's vibration. A short-time Fourier transformation with an adaptive window length is used to estimate the vibration parameters of non-stationary vibration signals. Experiments with a one-link flexible manipulator equipped with a camera validate the feasibility of the method.

  16. Vision-based online vibration estimation of the in-vessel inspection flexible robot with short-time Fourier transformation

    Energy Technology Data Exchange (ETDEWEB)

    Wang, Hesheng [Key Laboratory of System Control and Information Processing, Ministry of Education of China (China); Department of Automation, Shanghai Jiao Tong University, Shanghai 200240 (China); Chen, Weidong, E-mail: wdchen@sjtu.edu.cn [Key Laboratory of System Control and Information Processing, Ministry of Education of China (China); Department of Automation, Shanghai Jiao Tong University, Shanghai 200240 (China); Xu, Lifei; He, Tao [Key Laboratory of System Control and Information Processing, Ministry of Education of China (China); Department of Automation, Shanghai Jiao Tong University, Shanghai 200240 (China)

    2015-10-15

Highlights: • Vision-based online vibration estimation method for a flexible arm is proposed. • The vibration signal is obtained by image processing in unknown environments. • Vibration parameters are estimated by short-time Fourier transformation. - Abstract: Vibration caused by the structural features and material properties of a flexible robot may arise during motion or under external disturbance, and should be suppressed because it can degrade positioning accuracy and image quality. In the Tokamak environment, real-time vibration information is needed to suppress vibration of the robotic arm, yet some sensors are not allowed in this extreme environment. This paper proposes a vision-based method for online vibration estimation of a flexible manipulator, which uses the environment image information from the end-effector camera to estimate the arm's vibration. A short-time Fourier transformation with an adaptive window length is used to estimate the vibration parameters of non-stationary vibration signals. Experiments with a one-link flexible manipulator equipped with a camera validate the feasibility of the method.
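The short-time Fourier idea in the abstract can be sketched as follows: window the (possibly non-stationary) vibration signal, take an FFT per window, and report the strongest frequency in each. This sketch uses a fixed window, whereas the paper adapts the window length; all sizes are illustrative.

```python
import numpy as np

def dominant_freq_stft(signal, fs, win=256, hop=128):
    """Track the dominant vibration frequency with a short-time FFT.

    Returns one frequency estimate (in Hz) per analysis window. A Hann
    window reduces spectral leakage; the DC bin is zeroed so a constant
    offset in the signal does not mask the vibration peak.
    """
    freqs = np.fft.rfftfreq(win, d=1.0 / fs)
    w = np.hanning(win)
    out = []
    for start in range(0, len(signal) - win + 1, hop):
        spec = np.abs(np.fft.rfft(w * signal[start:start + win]))
        spec[0] = 0.0                        # ignore the DC component
        out.append(freqs[np.argmax(spec)])
    return np.array(out)
```

In the vision-based setting, `signal` would be the image-derived tip-deflection time series rather than an accelerometer trace.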

  17. A calibration system for measuring 3D ground truth for validation and error analysis of robot vision algorithms

    Science.gov (United States)

    Stolkin, R.; Greig, A.; Gilby, J.

    2006-10-01

    An important task in robot vision is that of determining the position, orientation and trajectory of a moving camera relative to an observed object or scene. Many such visual tracking algorithms have been proposed in the computer vision, artificial intelligence and robotics literature over the past 30 years. However, it is seldom possible to explicitly measure the accuracy of these algorithms, since the ground-truth camera positions and orientations at each frame in a video sequence are not available for comparison with the outputs of the proposed vision systems. A method is presented for generating real visual test data with complete underlying ground truth. The method enables the production of long video sequences, filmed along complicated six-degree-of-freedom trajectories, featuring a variety of objects and scenes, for which complete ground-truth data are known including the camera position and orientation at every image frame, intrinsic camera calibration data, a lens distortion model and models of the viewed objects. This work encounters a fundamental measurement problem—how to evaluate the accuracy of measured ground truth data, which is itself intended for validation of other estimated data. Several approaches for reasoning about these accuracies are described.
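As a sketch of how such ground-truth data can be used once obtained, the snippet below scores an estimated camera pose against a ground-truth pose (both as 4×4 homogeneous matrices), reporting a translation error and a geodesic rotation error. The function name and example numbers are illustrative, not from the paper.

```python
import numpy as np

def pose_error(T_est, T_gt):
    """Return (translation error, rotation error in radians) between poses."""
    dt = np.linalg.norm(T_est[:3, 3] - T_gt[:3, 3])
    R_rel = T_est[:3, :3].T @ T_gt[:3, :3]
    # angle of the relative rotation, clipped for numerical safety
    angle = np.arccos(np.clip((np.trace(R_rel) - 1.0) / 2.0, -1.0, 1.0))
    return dt, angle

# ground truth: camera 1 m along z; estimate: 1 cm offset and a 5-degree yaw error
T_gt = np.eye(4)
T_gt[2, 3] = 1.0
c, s = np.cos(np.radians(5)), np.sin(np.radians(5))
T_est = np.array([[c, -s, 0.0, 0.01],
                  [s,  c, 0.0, 0.00],
                  [0., 0., 1.0, 1.00],
                  [0., 0., 0.0, 1.00]])
dt, dr = pose_error(T_est, T_gt)
print(round(float(dt), 2), round(float(np.degrees(dr)), 1))   # → 0.01 5.0
```

Accumulating these two numbers per frame over a whole sequence gives exactly the kind of tracker validation the ground-truth data enables.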

  18. Hydraulic bilateral construction robot; Yuatsushiki bilateral kensetsu robot

    Energy Technology Data Exchange (ETDEWEB)

    Maehata, K.; Mori, N. [Kayaba Industry Co. Ltd., Tokyo (Japan)

    1999-05-15

Concerning a hydraulic bilateral construction robot, its system constitution, the structures and functions of important components, and the results of some tests are explained, and the research conducted at Gifu University is described. The construction robot described here is a servo-controlled system developed from a commercially available mini-shovel. In addition to an electrohydraulic servo control system, it is equipped with various sensors for detecting the robot attitude, vibration, and load state, and with a camera for visualizing the surrounding landscape. It is also provided with a bilateral joystick, a remote-control actuator capable of feeding back the sensation of the work, and with a rocking unit that reproduces robot movements of rolling, pitching, and heaving. Thanks to a hydraulic drive system that increases output and speeds response, adopted with the aim of building a robot system superior in performance to the conventional model designed primarily for heavy duty, the robot proved in tests to be a highly sophisticated remotely controlled robot system. (NEDO)

  19. Robotics education

    International Nuclear Information System (INIS)

    Benton, O.

    1984-01-01

Robotics education courses are rapidly spreading throughout the nation's colleges and universities. Engineering schools are offering robotics courses as part of their mechanical or manufacturing engineering degree programs. Two-year colleges are developing an Associate Degree in robotics. In addition to regular courses, colleges are offering seminars in robotics and related fields. These seminars draw excellent participation at costs running up to $200 per day for each participant. The last one drew 275 people from Texas to Virginia. Seminars are also offered by trade associations, private consulting firms, and robot vendors. IBM, for example, has the Robotic Assembly Institute in Boca Raton and charges about $1,000 per week for a course, basically for owners of IBM robots. Education (and training) can be as short as one day or as long as two years. Here is the educational pattern that is developing now.

  20. Robotics in Cardiac Surgery: Past, Present, and Future

    Directory of Open Access Journals (Sweden)

    Bryan Bush

    2013-07-01

Robotic cardiac operations evolved from minimally invasive operations and offer similar theoretical benefits, including less pain, shorter length of stay, improved cosmesis, and quicker return to preoperative level of functional activity. The additional benefits offered by robotic surgical systems include improved dexterity and degrees of freedom, tremor-free movements, ambidexterity, and the avoidance of the fulcrum effect that is intrinsic when using long-shaft endoscopic instruments. Also, optics and operative visualization are vastly improved compared with direct vision and traditional videoscopes. Robotic systems have been utilized successfully to perform complex mitral valve repairs, coronary revascularization, atrial fibrillation ablation, intracardiac tumor resections, atrial septal defect closures, and left ventricular lead implantation. The history and evolution of these procedures, as well as the present status and future directions of robotic cardiac surgery, are presented in this review.

  1. Light-driven nano-robotics for sub-diffraction probing and sensing

    DEFF Research Database (Denmark)

    Glückstad, Jesper; Bañas, Andrew Rafael; Palima, Darwin

On the macro-scale, robotics typically uses light to carry information for machine vision and for feedback in artificially intelligent guidance and monitoring systems. Using the minuscule momentum of light, shrinking robots down to the micro- and even nano-scale regime creates opportunities...... Therefore, a generic approach for optimizing light-matter interaction involves the combination of optimal light-shaping techniques with the use of optimized nano-featured shapes in light-driven micro-robotics structures. In this work, we designed different three-dimensional micro-structures and fabricated...

  2. Humanlike robot hands controlled by brain activity arouse illusion of ownership in operators

    Science.gov (United States)

    Alimardani, Maryam; Nishio, Shuichi; Ishiguro, Hiroshi

    2013-08-01

Operators of a pair of robotic hands report ownership of those hands when they hold an image of a grasp motion in mind and watch the robot perform it. We present a novel body-ownership illusion that is induced by merely watching and controlling a robot's motions through a brain-machine interface. In past studies, body-ownership illusions were induced by the correlation of such sensory inputs as vision, touch and proprioception. However, in the presented illusion none of the mentioned sensations are integrated except vision. Our results show that during BMI operation of robotic hands, the interaction between motor commands and visual feedback of the intended motions is adequate to incorporate the non-body limbs into one's own body. Our discussion focuses on the role of proprioceptive information in the mechanism of agency-driven illusions. We believe that our findings will contribute to the improvement of tele-presence systems in which operators incorporate BMI-operated robots into their body representations.

  3. Research and development of advanced robots for nuclear power plants

    International Nuclear Information System (INIS)

    Tsukune, Hideo; Hirukawa, Hirohisa; Kitagaki, Kosei; Liu, Yunhui; Onda, Hiromu; Nakamura, Akira

    1994-01-01

Social and economic demands have been pressing for automation of the inspection, maintenance and repair tasks of nuclear power plants, which are carried out by human workers under circumstances with high radiation levels. Since the plants are not always designed for the introduction of automatic machinery, sophisticated robots will play a crucial role in freeing workers from hostile environments. We have been studying intelligent robot systems and regard nuclear industries as one of the important application fields where we can validate the feasibility of the methods and systems we have developed. In this paper we first discuss the tasks required in nuclear power plants. Secondly, we introduce the current status of R and D on special-purpose robots, versatile robots and intelligent robots for automating these tasks. Then we focus our discussion on three major functions in realizing robotized assembly tasks under such unstructured environments as in nuclear power plants: planning, vision and manipulation. Finally we depict an image of a prototype robot system for nuclear power plants based on the advanced functions. (author) 64 refs

  4. Robot Aesthetics

    DEFF Research Database (Denmark)

    Jochum, Elizabeth Ann; Putnam, Lance Jonathan

    This paper considers art-based research practice in robotics through a discussion of our course and relevant research projects in autonomous art. The undergraduate course integrates basic concepts of computer science, robotic art, live performance and aesthetic theory. Through practice...... in robotics research (such as aesthetics, culture and perception), we believe robot aesthetics is an important area for research in contemporary aesthetics....

  5. How to prepare the patient for robotic surgery: before and during the operation.

    Science.gov (United States)

    Lim, Peter C; Kang, Elizabeth

    2017-11-01

Robotic surgery in the treatment of gynecologic diseases continues to evolve and has become accepted over the last decade. The advantages of robotic-assisted laparoscopic surgery over conventional laparoscopy are three-dimensional camera vision, superior precision and dexterity with EndoWristed instruments, elimination of operator tremor, and decreased surgeon fatigue. The drawbacks of the technology are bulkiness and lack of tactile feedback. As with other surgical platforms, the limitations of robotic surgery must be understood. Patient selection and the types of surgical procedures that can be performed through the robotic surgical platform are critical to the success of robotic surgery. First, patient selection and the indication for gynecologic disease should be considered. Discussion with the patient regarding the benefits and potential risks of robotic surgery, its complications, and alternative treatments is mandatory, followed by the patient's signature indicating informed consent. Appropriate preoperative evaluation, including laboratory and imaging tests, and bowel cleansing should be considered depending upon the type of robotic-assisted procedure. Unlike other surgical procedures, robotic surgery is equipment-intensive and requires an appropriate surgical suite to accommodate the patient side cart, the vision system, and the surgeon's console. Surgical personnel must be properly trained with the robotics technology. Several factors must be considered to perform a successful robotic-assisted surgery: the indication and type of surgical procedure, the surgical platform, patient position and the degree of Trendelenburg, proper port placement configuration, and appropriate instrumentation. These factors, which must be considered so that patients can be appropriately prepared before and during the operation, are described. Copyright © 2017. Published by Elsevier Ltd.

  6. Design, implementation and testing of master slave robotic surgical system

    International Nuclear Information System (INIS)

    Ali, S.A.

    2015-01-01

Autonomous manipulation in medical robotics requires a complete surgical plan to be drawn up in advance. The autonomy of the robot comes from the fact that once the plan is drawn up off-line, it is the servo loops, and only these, that control the actions of the robot online, based on instantaneous control signals and measurements provided by the vision or force sensors. The use of purely autonomous techniques in medical and surgical robotics remains relatively limited for two main reasons: the complexity of predicting the gestures, and human safety. Therefore, modern research in haptic force feedback in medical robotics aims to develop medical robots capable of performing remotely what a surgeon does by himself. These medical robots are supposed to work exactly in the manner that a surgeon does in daily routine. In this paper a master-slave tele-robotic system is designed and implemented with accuracy and stability by using 6DOF (six degree of freedom) haptic force-feedback devices. The master-slave control strategy, haptic device integration, application software design using Visual C++, and the experimental setup are described. Finally, results are presented demonstrating the stability, accuracy and repeatability of the system. (author)

  7. Design, Implementation and Testing of Master Slave Robotic Surgical System

    Directory of Open Access Journals (Sweden)

    Syed Amjad Ali

    2015-01-01

Autonomous manipulation in medical robotics requires a complete surgical plan to be drawn up in advance. The autonomy of the robot comes from the fact that once the plan is drawn up off-line, it is the servo loops, and only these, that control the actions of the robot online, based on instantaneous control signals and measurements provided by the vision or force sensors. The use of purely autonomous techniques in medical and surgical robotics remains relatively limited for two main reasons: the complexity of predicting the gestures, and human safety. Therefore, modern research in haptic force feedback in medical robotics aims to develop medical robots capable of performing remotely what a surgeon does by himself. These medical robots are supposed to work exactly in the manner that a surgeon does in daily routine. In this paper a master-slave tele-robotic system is designed and implemented with accuracy and stability by using 6DOF (six degree of freedom) haptic force-feedback devices. The master-slave control strategy, haptic device integration, application software design using Visual C++, and the experimental setup are described. Finally, results are presented demonstrating the stability, accuracy and repeatability of the system.

  8. Robots: l'embarras de richesses [:survey of robots available

    International Nuclear Information System (INIS)

    Meieran, H.; Brittain, K.; Sturkey, R.

    1989-01-01

    A survey of robots available for use in the nuclear industry is presented. Two new categories of mobile robots have been introduced since the last survey (April 1987): pipe crawlers and underwater robots. The number of robots available has risen to double what it was two years ago and four times what it was in 1986. (U.K.)

  9. Robots Social Embodiment in Autonomous Mobile Robotics

    Directory of Open Access Journals (Sweden)

    Brian Duffy

    2008-11-01

This work aims at demonstrating the inherent advantages of embracing a strong notion of social embodiment in designing a real-world robot control architecture with explicit 'intelligent' social behaviour between a collective of robots. It develops current thinking on embodiment beyond the physical by demonstrating the importance of social embodiment. A social framework develops the fundamental social attributes found when more than one robot co-inhabits a physical space. The social metaphors of identity, character, stereotypes and roles are presented and implemented within a real-world social robot paradigm in order to facilitate the realisation of explicit social goals.

  10. Robotic radiation survey and analysis system for radiation waste casks

    International Nuclear Information System (INIS)

    Thunborg, S.

    1987-01-01

Sandia National Laboratories (SNL) and the Hanford Engineering Development Laboratories have been involved in the development of remote systems technology concepts for handling defense high-level waste (DHLW) shipping casks at the waste repository. This effort demonstrated the feasibility of using this technology for handling DHLW casks. These investigations have also shown that cask design can have a major effect on the feasibility of remote cask handling. Consequently, SNL has initiated a program to determine the cask features necessary for robotic remote handling at the waste repository. The initial cask-handling task selected for detailed investigation was the robotic radiation survey and analysis (RRSAS) task. In addition to determining the design features required for robotic cask handling, the RRSAS project contributes to the definition of techniques for random selection of swipe locations, the definition of robotic swipe parameters, force-control techniques for robotic swipes, machine-vision techniques for locating objects in 3-D, repository robotic system requirements, and repository data management system needs.

  11. Robotic intelligence kernel

    Science.gov (United States)

    Bruemmer, David J [Idaho Falls, ID

    2009-11-17

A robot platform includes perceptors, locomotors, and a system controller. The system controller executes a robot intelligence kernel (RIK) that includes a multi-level architecture and a dynamic autonomy structure. The multi-level architecture includes a robot behavior level for defining robot behaviors that incorporate robot attributes, and a cognitive level for defining conduct modules that blend an adaptive interaction between predefined decision functions and the robot behaviors. The dynamic autonomy structure is configured for modifying a transaction capacity between operator intervention and robot initiative and may include multiple levels, with at least a teleoperation mode configured to maximize operator intervention and minimize robot initiative, and an autonomous mode configured to minimize operator intervention and maximize robot initiative. Within the RIK, at least the cognitive level includes the dynamic autonomy structure.
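The dynamic autonomy structure described above can be pictured as a slider between operator intervention and robot initiative. The toy sketch below takes its mode names from the abstract, but the blending arbitration rule is invented for illustration, not taken from the patent.

```python
from enum import Enum

class Mode(Enum):
    """Autonomy level, expressed as the weight given to robot initiative."""
    TELEOP = 0.0      # operator intervention maximized
    SHARED = 0.5
    AUTONOMOUS = 1.0  # robot initiative maximized

def arbitrate(mode, operator_cmd, robot_cmd):
    """Blend two velocity commands by the mode's robot-initiative weight."""
    w = mode.value
    return tuple((1 - w) * o + w * r for o, r in zip(operator_cmd, robot_cmd))

# commands are (forward, turn) pairs
print(arbitrate(Mode.TELEOP, (1.0, 0.0), (0.0, 0.5)))      # → (1.0, 0.0)
print(arbitrate(Mode.AUTONOMOUS, (1.0, 0.0), (0.0, 0.5)))  # → (0.0, 0.5)
```

A real kernel would switch modes at runtime (e.g. dropping to TELEOP when the operator grabs the joystick), which is the "dynamic" part of the structure.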

  12. An Intelligent Robot Programing

    Energy Technology Data Exchange (ETDEWEB)

    Hong, Seong Yong

    2012-01-15

This book introduces intelligent robot programming, covering background, an introduction to VPL and SPL, building the environment for the robot platform, getting started with robot programming, design of the simulation environment, robot autonomous drive control programming, and simulation graphics, including SPL graphic programming (graphical images, graphical shapes, and graphical method application), application of procedures for robot control, robot multiprogramming, robot bumper sensor programming, robot LRF sensor programming and robot color sensor programming.

  13. An Intelligent Robot Programing

    International Nuclear Information System (INIS)

    Hong, Seong Yong

    2012-01-01

This book introduces intelligent robot programming, covering background, an introduction to VPL and SPL, building the environment for the robot platform, getting started with robot programming, design of the simulation environment, robot autonomous drive control programming, and simulation graphics, including SPL graphic programming (graphical images, graphical shapes, and graphical method application), application of procedures for robot control, robot multiprogramming, robot bumper sensor programming, robot LRF sensor programming and robot color sensor programming.

  14. Design and Implementation of Autonomous Stair Climbing with Nao Humanoid Robot

    OpenAIRE

    Lu, Wei

    2015-01-01

With the development of humanoid robots, autonomous stair climbing has become an important capability. Humanoid robots will play an important role in helping people tackle basic problems in the future. The main contribution of this thesis is enabling the NAO humanoid robot to climb a spiral staircase autonomously. In the vision module, an algorithm for image filtering and detecting the contours of the stairs contributes to calculating the location of the stairs accurately. Additionally, the st...

  15. Neuro-robotics from brain machine interfaces to rehabilitation robotics

    CERN Document Server

    Artemiadis

    2014-01-01

    Neuro-robotics is one of the most multidisciplinary fields of the last decades, fusing information and knowledge from neuroscience, engineering and computer science. This book focuses on the results from the strategic alliance between Neuroscience and Robotics that help the scientific community to better understand the brain as well as design robotic devices and algorithms for interfacing humans and robots. The first part of the book introduces the idea of neuro-robotics, by presenting state-of-the-art bio-inspired devices. The second part of the book focuses on human-machine interfaces for pe

  16. The development of advanced robotics technology in high radiation environment

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Yong Bum; Cho, Jaiwan; Lee, Nam Ho; Choi, Young Soo; Park, Soon Yong; Lee, Jong Min; Park, Jin Suk; Kim, Seung Ho; Kim, Byung Soo; Moon, Byung Soo

    1997-07-01

In tele-operation technology using tele-presence in high-radiation environments, the following were developed: stereo-vision target tracking by the centroid method, vergence control of a stereo camera by the moving-vector method, a stereo observing system based on the correlation method, a horizontal-moving-axis stereo camera, and 3-dimensional information acquisition from stereo images, together with gesture image acquisition by computer vision and the construction of a virtual environment for remote work in a nuclear power plant. In the development of intelligent control and monitoring technology for tele-robots in hazardous environments, the characteristics and principles of robot operation were studied, robot end-effector tracking algorithms based on the centroid method and on neural networks were developed for observation and survey in hazardous environments, and a 3-dimensional information acquisition algorithm using structured light was developed. In the development of radiation-hardened sensor technology, a radiation-hardened camera module was designed and tested, the radiation characteristics of electronic components in the robot system were evaluated, and a 2-dimensional radiation monitoring system was developed. The advanced robot technology and telepresence techniques developed in this project can be applied to a nozzle-dam installation/removal robot system and used to realize unmanned remote nozzle-dam installation/removal in the steam generator of a nuclear power plant, helping people who would otherwise work in extremely hazardous, high-radioactivity areas to eliminate their exposure to radiation, enhance their safety, and raise their working efficiency. (author). 75 refs., 21 tabs., 15 figs.
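The centroid method named in this record, for both target tracking and end-effector tracking, can be sketched minimally: threshold an intensity image and track the centroid of the bright target blob. The image here is synthetic; a real system would run this per frame on each camera of the stereo pair.

```python
import numpy as np

def centroid(image, threshold):
    """Return the (x, y) centroid of pixels above threshold, or None if lost."""
    ys, xs = np.nonzero(image > threshold)
    if xs.size == 0:
        return None            # target lost in this frame
    return float(xs.mean()), float(ys.mean())

frame = np.zeros((100, 100))
frame[40:50, 60:70] = 1.0      # bright 10x10 target
print(centroid(frame, 0.5))    # → (64.5, 44.5)
```

With two calibrated cameras, the disparity between the two per-frame centroids is what yields the 3-dimensional target position mentioned in the abstract.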

  17. The development of advanced robotics technology in high radiation environment

    International Nuclear Information System (INIS)

    Lee, Yong Bum; Cho, Jaiwan; Lee, Nam Ho; Choi, Young Soo; Park, Soon Yong; Lee, Jong Min; Park, Jin Suk; Kim, Seung Ho; Kim, Byung Soo; Moon, Byung Soo.

    1997-07-01

In tele-operation technology using tele-presence in high-radiation environments, the following were developed: stereo-vision target tracking by the centroid method, vergence control of a stereo camera by the moving-vector method, a stereo observing system based on the correlation method, a horizontal-moving-axis stereo camera, and 3-dimensional information acquisition from stereo images, together with gesture image acquisition by computer vision and the construction of a virtual environment for remote work in a nuclear power plant. In the development of intelligent control and monitoring technology for tele-robots in hazardous environments, the characteristics and principles of robot operation were studied, robot end-effector tracking algorithms based on the centroid method and on neural networks were developed for observation and survey in hazardous environments, and a 3-dimensional information acquisition algorithm using structured light was developed. In the development of radiation-hardened sensor technology, a radiation-hardened camera module was designed and tested, the radiation characteristics of electronic components in the robot system were evaluated, and a 2-dimensional radiation monitoring system was developed. The advanced robot technology and telepresence techniques developed in this project can be applied to a nozzle-dam installation/removal robot system and used to realize unmanned remote nozzle-dam installation/removal in the steam generator of a nuclear power plant, helping people who would otherwise work in extremely hazardous, high-radioactivity areas to eliminate their exposure to radiation, enhance their safety, and raise their working efficiency. (author). 75 refs., 21 tabs., 15 figs

  18. Cloud Robotics Model

    OpenAIRE

    Mester, Gyula

    2015-01-01

    Cloud Robotics was born from the merger of service robotics and cloud technologies. It allows robots to benefit from the powerful computational, storage, and communications resources of modern data centres. Cloud robotics allows robots to take advantage of the rapid increase in data transfer rates to offload tasks without hard real time requirements. Cloud Robotics has rapidly gained momentum with initiatives by companies such as Google, Willow Garage and Gostai as well as more than a dozen a...

  19. Space Robotics Challenge

    Data.gov (United States)

    National Aeronautics and Space Administration — The Space Robotics Challenge seeks to infuse robot autonomy from the best and brightest research groups in the robotics community into NASA robots for future...

  20. Are Sex Robots as Bad as Killing Robots

    OpenAIRE

    Richardson, Kathleen

    2016-01-01

    In 2015 the Campaign Against Sex Robots was launched to draw attention to the technological production of new kinds of objects: sex robots of women and children. The campaign was launched shortly after the Future of Life Institute published an online petition: “Autonomous Weapons: An Open Letter From AI and Robotics Researchers” which was signed by leading luminaries in the field of AI and Robotics. In response to the Campaign, an academic at Oxford University opened an ethics thread “Are sex...

  1. Natural Tasking of Robots Based on Human Interaction Cues

    Science.gov (United States)

    2005-06-01

MIT. • Matthew Marjanovic, researcher, ITA Software. • Brian Scasselatti, Assistant Professor of Computer Science, Yale. • Matthew Williamson...2004. 25 [74] Charlie C. Kemp. Shoes as a platform for vision. 7th IEEE International Symposium on Wearable Computers, 2004. [75] Matthew Marjanovic. meso: Simulated muscles for a humanoid robot. Presentation for Humanoid Robotics Group, MIT AI Lab, August 2001. [76] Matthew J. Marjanovic. Teaching

  2. Robotic general surgery: current practice, evidence, and perspective.

    Science.gov (United States)

    Jung, M; Morel, P; Buehler, L; Buchs, N C; Hagen, M E

    2015-04-01

Robotic technology commenced to be adopted in the field of general surgery in the 1990s. Since then, the da Vinci surgical system (Intuitive Surgical Inc, Sunnyvale, CA, USA) has remained by far the most commonly used system in this domain. The da Vinci surgical system is a master-slave machine that offers three-dimensional vision, articulated instruments with seven degrees of freedom, and additional software features such as motion scaling and tremor filtration. The specific design allows hand-eye alignment with intuitive control of the minimally invasive instruments. As such, robotic surgery appears technologically superior to laparoscopy by overcoming some of the technical limitations that the conventional approach imposes on the surgeon. This article reviews the current literature and the perspective of robotic general surgery. While robotics has been applied to a wide range of general surgery procedures, its precise role in this field remains a subject of further research. Until now, only limited clinical evidence has been produced that could establish the use of robotics as the gold standard for procedures of general surgery. While surgical robotics is still in its infancy, with multiple novel systems currently under development and clinical trials in progress, the opportunities for this technology appear endless, and robotics should have a lasting impact on the field of general surgery.
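Two of the software features this record names, motion scaling and tremor filtration, can be illustrated with a toy sketch. A first-order exponential low-pass filter stands in here for the real, more elaborate filtering; the scale factor and sample signal are invented for illustration.

```python
import numpy as np

def scale_and_filter(hand_positions, scale=0.2, alpha=0.1):
    """Scale hand motion down and suppress high-frequency tremor (EMA filter)."""
    out = []
    state = hand_positions[0] * scale
    for p in hand_positions:
        target = p * scale                       # motion scaling
        state = state + alpha * (target - state) # low-pass (tremor filtration) step
        out.append(state)
    return np.array(out)

t = np.linspace(0, 1, 200)
hand = t + 0.05 * np.sin(2 * np.pi * 40 * t)     # slow motion plus 40 Hz tremor
tool = scale_and_filter(hand)
# high-frequency content of the tool path is much smaller than that of the
# scaled hand path
print(bool(np.diff(tool).std() < np.diff(hand * 0.2).std()))  # → True
```

The trade-off visible even in this sketch is lag: a stronger low-pass (smaller alpha) removes more tremor but makes the tool respond more slowly to intended motion.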

  3. Multi-robot control interface

    Science.gov (United States)

    Bruemmer, David J [Idaho Falls, ID; Walton, Miles C [Idaho Falls, ID

    2011-12-06

    Methods and systems for controlling a plurality of robots through a single user interface include at least one robot display window for each of the plurality of robots with the at least one robot display window illustrating one or more conditions of a respective one of the plurality of robots. The user interface further includes at least one robot control window for each of the plurality of robots with the at least one robot control window configured to receive one or more commands for sending to the respective one of the plurality of robots. The user interface further includes a multi-robot common window comprised of information received from each of the plurality of robots.

  4. An experimental program on advanced robotics

    International Nuclear Information System (INIS)

    Yuan, J.S.C.; Stovman, J.; MacDonald, R.; Norgate, G.

    1987-01-01

Remote handling in hostile environments, including space, nuclear facilities, and mines, requires hybrid systems that permit close cooperation between state-of-the-art teleoperation and advanced robotics. Teleoperation using hand-controller commands and television feedback can be enhanced by providing force-feel feedback and simulation-graphics enhancement of the display. By integrating robotics features such as computer vision and force/tactile feedback with advanced local control systems, the overall effectiveness of the system can be improved and the operator workload reduced. This has been demonstrated in the laboratory. Applications such as grappling a drifting satellite or transferring material at sea are envisaged.

  5. Sensory Integration with Articulated Motion on a Humanoid Robot

    Directory of Open Access Journals (Sweden)

    J. Rojas

    2005-01-01

This paper describes the integration of articulated motion with auditory and visual sensory information that enables a humanoid robot to achieve certain reflex actions that mimic those of people. Reflexes such as reach-and-grasp behavior enable the robot to learn, through experience, its own state and that of the world. A humanoid robot with binaural audio input, stereo vision, and pneumatic arms and hands exhibited tightly coupled sensory-motor behaviors in four different demonstrations. The complexity of successive demonstrations was increased to show that the reflexive sensory-motor behaviors combine to perform increasingly complex tasks. The humanoid robot executed these tasks effectively and established the groundwork for the further development of hardware and software systems, sensory-motor vector-space representations, and coupling with higher-level cognition.

  6. Robotics

    Energy Technology Data Exchange (ETDEWEB)

    Lorino, P; Altwegg, J M

    1985-05-01

    This article, which is aimed at the general reader, examines latest developments in, and the role of, modern robotics. The 7 main sections are sub-divided into 27 papers presented by 30 authors. The sections are as follows: 1) The role of robotics, 2) Robotics in the business world and what it can offer, 3) Study and development, 4) Utilisation, 5) Wages, 6) Conditions for success, and 7) Technological dynamics.

  7. [RESEARCH PROGRESS OF PERIPHERAL NERVE SURGERY ASSISTED BY Da Vinci ROBOTIC SYSTEM].

    Science.gov (United States)

    Shen, Jie; Song, Diyu; Wang, Xiaoyu; Wang, Changjiang; Zhang, Shuming

    2016-02-01

To summarize the research progress of peripheral nerve surgery assisted by the Da Vinci robotic system, recent domestic and international articles on the topic were reviewed and summarized. Compared with conventional microsurgery, peripheral nerve surgery assisted by the Da Vinci robotic system has distinctive advantages, such as elimination of physiological tremor and three-dimensional high-resolution vision. It is possible to perform robot-assisted limb nerve surgery using either the traditional brachial plexus approach or a minimally invasive approach. The development of the Da Vinci robotic system has revealed new perspectives in peripheral nerve surgery. However, it is still at an initial stage, and more basic and clinical research is needed.

  8. FPGA for Robotic Applications: from Android/Humanoid Robots to Artificial Men

    Directory of Open Access Journals (Sweden)

    Tole Sutikno

    2011-12-01

    Full Text Available Research on home robots has been increasing enormously. There has long been a continuous research effort on anthropomorphic robots, now called humanoid robots. Robotics has evolved to the point that its different branches have reached a remarkable level of maturity, with neural networks and fuzzy logic serving as the main artificial intelligence techniques for intelligent robot control. Despite all this progress, while aiming to accomplish work tasks originally assigned only to humans, robotic science has perhaps quite naturally turned into the attempt to create artificial men. Artificial men, or android humanoid robots, certainly open very broad prospects. Such a “robot” may be viewed as a personal helper, and it will be called a home robot, or personal robot. This is the main reason why the two special sections are issued sequentially in TELKOMNIKA.

  9. Investigation In Two Wheels Mobile Robot Movement: Stability and Motion Paths

    Directory of Open Access Journals (Sweden)

    Abdulrahman A.A. Emhemed

    2013-01-01

    Full Text Available This paper deals with the dynamic modelling of a two-wheeled inspection robot and a fuzzy controller, based on robotics techniques, for optimizing its stability. The goal is to improve the robot's heading control and obstacle avoidance. To find a collision-free area, distance sensors such as ultrasonic sensors, laser scanners, or vision systems are usually employed; distance sensors offer only the distance between the mobile robot and obstacles. The target is also shown to be reachable from different directions. The fuzzy logic controller is effective at avoiding the obstacles and finding an ideal heading toward “the target box”.
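
    The record above describes the standard fuzzy-control pattern: distance sensors feed a rule base that outputs a steering direction. A minimal sketch of that pattern follows; the membership functions, rule base, and numeric ranges are illustrative assumptions, not the paper's actual controller.

    ```python
    def tri(x, a, b, c):
        """Triangular membership function peaking at b."""
        if x <= a or x >= c:
            return 0.0
        return (x - a) / (b - a) if x < b else (c - x) / (c - b)

    def near(d):   # degree to which an obstacle is close, in [0, 1]
        return tri(d, -0.5, 0.0, 1.0)

    def far(d):    # degree to which the way is clear
        return tri(d, 0.0, 1.0, 1.5)

    def fuzzy_steer(d_left, d_right):
        """Steering command in [-1, 1]; negative turns left, positive right."""
        # Rule firing strengths (min as the fuzzy AND)
        turn_right = min(near(d_left), far(d_right))   # obstacle on the left
        turn_left  = min(far(d_left), near(d_right))   # obstacle on the right
        straight   = min(far(d_left), far(d_right))    # path is clear
        # Weighted-average (centroid-style) defuzzification
        w = turn_right + turn_left + straight
        return (turn_right - turn_left) / w if w else 0.0
    ```

    With a close obstacle on the left (`fuzzy_steer(0.1, 1.0)`) the command is positive, steering the robot right and away from it.
    
    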

  10. Communicating with Teams of Cooperative Robots

    National Research Council Canada - National Science Library

    Perzanowski, D; Schultz, A. C; Adams, W; Bugajska, M; Marsh, E; Trafton, G; Brock, D; Skubic, M; Abramson, M

    2002-01-01

    .... For this interface, they have elected to use natural language and gesture. Gestures can be either natural gestures perceived by a vision system installed on the robot, or they can be made by using a stylus on a Personal Digital Assistant...

  11. Vision-based Nano Robotic System for High-throughput Non-embedded Cell Cutting.

    Science.gov (United States)

    Shang, Wanfeng; Lu, Haojian; Wan, Wenfeng; Fukuda, Toshio; Shen, Yajing

    2016-03-04

    Cell cutting is a significant task in biology studies, but highly productive non-embedded cell cutting is still a big challenge for current techniques. This paper proposes a vision-based nano robotic system and realizes automatic non-embedded cell cutting with it. First, the nano robotic system is developed and integrated with a nanoknife inside an environmental scanning electron microscope (ESEM). Then, the positions of the nanoknife and the single cell are recognized, and the distance between them is calculated dynamically based on image processing. To guarantee positioning accuracy and working efficiency, we propose a distance-regulated speed-adapting strategy, in which the moving speed is adjusted intelligently based on the distance between the nanoknife and the target cell. The results indicate that automatic non-embedded cutting can be achieved within 1-2 min with low invasion, benefiting from the highly precise nanorobotic system and the sharp edge of the nanoknife. This research paves the way for high-throughput cell cutting at the cell's natural condition, which is expected to make a significant impact on biology studies, especially in-situ analysis at the cellular and subcellular scale, such as cell interaction investigation, neural signal transduction and low-invasive cell surgery.

  12. Vision-based Nano Robotic System for High-throughput Non-embedded Cell Cutting

    Science.gov (United States)

    Shang, Wanfeng; Lu, Haojian; Wan, Wenfeng; Fukuda, Toshio; Shen, Yajing

    2016-03-01

    Cell cutting is a significant task in biology studies, but highly productive non-embedded cell cutting is still a big challenge for current techniques. This paper proposes a vision-based nano robotic system and realizes automatic non-embedded cell cutting with it. First, the nano robotic system is developed and integrated with a nanoknife inside an environmental scanning electron microscope (ESEM). Then, the positions of the nanoknife and the single cell are recognized, and the distance between them is calculated dynamically based on image processing. To guarantee positioning accuracy and working efficiency, we propose a distance-regulated speed-adapting strategy, in which the moving speed is adjusted intelligently based on the distance between the nanoknife and the target cell. The results indicate that automatic non-embedded cutting can be achieved within 1-2 min with low invasion, benefiting from the highly precise nanorobotic system and the sharp edge of the nanoknife. This research paves the way for high-throughput cell cutting at the cell’s natural condition, which is expected to make a significant impact on biology studies, especially in-situ analysis at the cellular and subcellular scale, such as cell interaction investigation, neural signal transduction and low-invasive cell surgery.

  13. Pointing with a One-Eyed Cursor for Supervised Training in Minimally Invasive Robotic Surgery

    DEFF Research Database (Denmark)

    Kibsgaard, Martin; Kraus, Martin

    2016-01-01

    Pointing in the endoscopic view of a surgical robot is a natural and efficient way for instructors to communicate with trainees in robot-assisted minimally invasive surgery. However, pointing in a stereo-endoscopic view can be limited by problems such as video delay, double vision, arm fatigue......-day training units in robot-assisted minimally invasive surgery on anaesthetised pigs.

  14. Timing of Multimodal Robot Behaviors during Human-Robot Collaboration

    DEFF Research Database (Denmark)

    Jensen, Lars Christian; Fischer, Kerstin; Suvei, Stefan-Daniel

    2017-01-01

    In this paper, we address issues of timing between robot behaviors in multimodal human-robot interaction. In particular, we study what effects sequential order and simultaneity of robot arm and body movement and verbal behavior have on the fluency of interactions. In a study with the Care-O-bot, ...... output plays a special role because participants carry their expectations from human verbal interaction into the interactions with robots....

  15. Robotics research in Chile

    Directory of Open Access Journals (Sweden)

    Javier Ruiz-del-Solar

    2016-12-01

    Full Text Available The development of robotics research in a developing country is a challenging task. Factors such as low research funds, low trust from local companies and the government, and a small number of qualified researchers hinder the development of strong local research groups. In this article, as a case study, we present our robotics research group at the Advanced Mining Technology Center of the Universidad de Chile, and the way in which we have addressed these challenges. In 2008, we decided to focus our research efforts on mining, which is the main industry in Chile. We observed that this industry has needs in terms of safety, productivity, operational continuity, and environmental care, all of which can be addressed with robotics and automation technology. In a first stage, we concentrated on building capabilities in field robotics, starting with the automation of a commercial vehicle. An important outcome of this project was earning the confidence of the local mining industry. Then, in a second stage started in 2012, we began working with the local mining industry on technological projects. In this article, we describe three of the technological projects that we have developed with industry support: (i) an autonomous vehicle for mining environments without global positioning system coverage; (ii) the inspection of the irrigation flow in heap-leach piles using unmanned aerial vehicles and thermal cameras; and (iii) an enhanced vision system for vehicle teleoperation in adverse climatic conditions.

  16. A new technique for robot vision in autonomous underwater vehicles using the color shift in underwater imaging

    Science.gov (United States)

    2017-06-01

    Thesis by Jake A. Jones, Lieutenant Commander, United States Navy (June 2017). The thesis presents a new technique for robot vision in autonomous underwater vehicles that uses the color shift in underwater imaging to determine the distance from each pixel to the camera. Subject terms: unmanned undersea vehicles (UUVs), autonomous ...

  17. Robotics Potential Fields

    Directory of Open Access Journals (Sweden)

    Jordi Lucero

    2009-01-01

    Full Text Available This problem was to calculate the path a robot would take to navigate an obstacle field and reach its goal. Three obstacles were modeled as negative potential fields, which the robot avoided, and the goal was modeled as a positive potential field that attracted the robot. The robot decided each step based on its distance from, angle to, and influence from every object. After each step, the robot recalculated and determined its next step, until it reached its goal. The robot's calculations and steps were simulated with Microsoft Excel.
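
    The stepwise procedure in this record (sum the goal's attraction and each obstacle's repulsion, take a step, repeat) is the classic potential-field planner. The record used Excel; the same computation can be sketched in Python, with the field shapes and gains as illustrative assumptions since the record does not give them.

    ```python
    # Potential-field path planner sketch: attraction proportional to the
    # vector toward the goal, repulsion falling off with squared distance.
    import math

    GOAL = (9.0, 9.0)
    OBSTACLES = [(3.0, 3.0), (5.0, 6.0), (7.0, 4.0)]

    def step(pos, step_size=0.2):
        """Move one fixed-size step along the combined force direction."""
        fx = GOAL[0] - pos[0]          # attractive force toward the goal
        fy = GOAL[1] - pos[1]
        for ox, oy in OBSTACLES:       # repulsive force away from each obstacle
            dx, dy = pos[0] - ox, pos[1] - oy
            d2 = dx * dx + dy * dy + 1e-9
            fx += dx / d2
            fy += dy / d2
        norm = math.hypot(fx, fy)
        return (pos[0] + step_size * fx / norm, pos[1] + step_size * fy / norm)

    def plan(start, max_steps=500, tol=0.3):
        """Recalculate after each step until the goal is reached, as in the record."""
        pos, path = start, [start]
        for _ in range(max_steps):
            if math.hypot(GOAL[0] - pos[0], GOAL[1] - pos[1]) < tol:
                break
            pos = step(pos)
            path.append(pos)
        return path
    ```

    Note that a pure potential-field planner of this kind can stall in local minima when repulsion and attraction cancel; the fixed gains here simply keep attraction dominant except very close to an obstacle.
    
    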

  18. Healthcare Robotics

    OpenAIRE

    Riek, Laurel D.

    2017-01-01

    Robots have the potential to be a game changer in healthcare: improving health and well-being, filling care gaps, supporting care givers, and aiding health care workers. However, before robots are able to be widely deployed, it is crucial that both the research and industrial communities work together to establish a strong evidence-base for healthcare robotics, and surmount likely adoption barriers. This article presents a broad contextualization of robots in healthcare by identifying key sta...

  19. Robotic technology in surgery: current status in 2008.

    Science.gov (United States)

    Murphy, Declan G; Hall, Rohan; Tong, Raymond; Goel, Rajiv; Costello, Anthony J

    2008-12-01

    There is increasing patient and surgeon interest in robotic-assisted surgery, particularly with the proliferation of da Vinci surgical systems (Intuitive Surgical, Sunnyvale, CA, USA) throughout the world. There is much debate over the usefulness and cost-effectiveness of these systems. The currently available robotic surgical technology is described. Published data relating to the da Vinci system are reviewed and the current status of surgical robotics within Australia and New Zealand is assessed. The first da Vinci system in Australia and New Zealand was installed in 2003. Four systems had been installed by 2006 and seven systems are currently in use. Most of these are based in private hospitals. Technical advantages of this system include 3-D vision, enhanced dexterity and improved ergonomics when compared with standard laparoscopic surgery. Most procedures currently carried out are urological, with cardiac, gynaecological and general surgeons also using this system. The number of patients undergoing robotic-assisted surgery in Australia and New Zealand has increased fivefold in the past 4 years. The most common procedure carried out is robotic-assisted laparoscopic radical prostatectomy. Published data suggest that robotic-assisted surgery is feasible and safe although the installation and recurring costs remain high. There is increasing acceptance of robotic-assisted surgery, especially for urological procedures. The da Vinci surgical system is becoming more widely available in Australia and New Zealand. Other surgical specialties will probably use this technology. Significant costs are associated with robotic technology and it is not yet widely available to public patients.

  20. A Haptic Guided Robotic System for Endoscope Positioning and Holding.

    Science.gov (United States)

    Cabuk, Burak; Ceylan, Savas; Anik, Ihsan; Tugasaygi, Mehtap; Kizir, Selcuk

    2015-01-01

    To determine the feasibility, advantages, and disadvantages of using a robot for holding and maneuvering the endoscope in transnasal transsphenoidal surgery. The system used in this study was a Stewart-platform-based robotic system developed by the Kocaeli University Department of Mechatronics Engineering for positioning and holding the endoscope. After first use on an artificial head model, the system was used on six fresh postmortem bodies provided by the Morgue Specialization Department of the Forensic Medicine Institute (Istanbul, Turkey). The setup required for the robotic system was easy; the registration procedure and setup of the robot took 15 minutes. Resistance was felt on the haptic arm in case of contact or friction with adjacent tissues. The adaptation process was shorter when the mouse was used to manipulate the endoscope. The endoscopic transsphenoidal approach was achieved with the robotic system, and the endoscope was guided to the sphenoid ostium with the help of the robotic arm. This robotic system can be used in endoscopic transsphenoidal surgery as an endoscope positioner and holder. The robot can change position easily with the help of an assistant, prevents tremor, and provides a better field of vision for work.

  1. Pose Estimation of a Mobile Robot Based on Fusion of IMU Data and Vision Data Using an Extended Kalman Filter.

    Science.gov (United States)

    Alatise, Mary B; Hancke, Gerhard P

    2017-09-21

    Using a single sensor to determine the pose of a device cannot give accurate results. This paper presents a fusion of an inertial sensor with six degrees of freedom (6-DoF), comprising a 3-axis accelerometer and a 3-axis gyroscope, with vision data to determine a low-cost and accurate position for an autonomous mobile robot. For vision, a monocular object detection scheme integrating the speeded-up robust features (SURF) and random sample consensus (RANSAC) algorithms was used to recognize a sample object in several captured images. Unlike conventional methods that depend on point tracking, RANSAC uses an iterative method to estimate the parameters of a mathematical model from a set of captured data that contains outliers. SURF and RANSAC improve accuracy because of their ability to find interest points (features) under different viewing conditions using a Hessian matrix. This approach is proposed because of its simple implementation, low cost, and improved accuracy. With an extended Kalman filter (EKF), data from the inertial sensors and a camera were fused to estimate the position and orientation of the mobile robot. All these sensors were mounted on the mobile robot to obtain accurate localization. An indoor experiment was carried out to validate and evaluate the performance. Experimental results show that the proposed method is fast in computation, reliable and robust, and can be considered for practical applications. The performance of the experiments was verified against ground truth data using root mean square errors (RMSEs).
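
    The predict/update fusion idea in this record can be sketched in one dimension: a gyroscope integrates the robot's heading (prediction) and an occasional vision-based heading measurement corrects the accumulated drift (update). This is a deliberately minimal stand-in for the paper's full 6-DoF EKF, and the noise values are illustrative assumptions.

    ```python
    def ekf_heading(gyro_rates, vision_meas, dt=0.1, q=0.01, r=0.05):
        """1-D EKF: gyro_rates gives a rate (rad/s) per step; vision_meas gives a
        heading measurement (rad) or None per step. Returns the estimate per step."""
        theta, p = 0.0, 1.0            # state estimate and its variance
        out = []
        for rate, z in zip(gyro_rates, vision_meas):
            # Predict: integrate the gyro rate and inflate the variance
            theta += rate * dt
            p += q
            # Update: fuse the vision heading when one is available
            if z is not None:
                k = p / (p + r)        # Kalman gain
                theta += k * (z - theta)
                p *= (1.0 - k)
            out.append(theta)
        return out
    ```

    Feeding a biased gyro (0.6 rad/s instead of a true 0.5 rad/s) with a vision fix every fifth step keeps the estimate close to the true heading, while pure integration drifts away.
    
    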

  2. RIPE [robot independent programming environment]: A robot independent programming environment

    International Nuclear Information System (INIS)

    Miller, D.J.; Lennox, R.C.

    1990-01-01

    Remote manual operations in radiation environments are typically performed very slowly. Sensor-based computer-controlled robots hold great promise for increasing the speed and safety of remote operations; however, the programming of robotic systems has proven to be expensive and difficult. Generalized approaches to robot programming that reuse available software modules and employ programming languages which are independent of the specific robotic and sensory devices being used are needed to speed software development and increase overall system reliability. This paper discusses the robot independent programming environment (RIPE) developed at Sandia National Laboratories (SNL). The RIPE is an object-oriented approach to robot system architectures; it is a software environment that facilitates rapid design and implementation of complex robot systems for diverse applications. An architecture based on hierarchies of distributed multiprocessors provides the computing platform for a layered programming structure that models applications using software objects. These objects are designed to support model-based automated programming of robotic and machining devices, real-time sensor-based control, error handling, and robust communication

  3. Evolutionary Developmental Robotics: Improving Morphology and Control of Physical Robots.

    Science.gov (United States)

    Vujovic, Vuk; Rosendo, Andre; Brodbeck, Luzius; Iida, Fumiya

    2017-01-01

    Evolutionary algorithms have previously been applied to the design of morphology and control of robots. The design space for such tasks can be very complex, which can prevent evolution from efficiently discovering fit solutions. In this article we introduce an evolutionary-developmental (evo-devo) experiment with real-world robots. It allows robots to grow their leg size to simulate ontogenetic morphological changes, and this is the first time that such an experiment has been performed in the physical world. To test diverse robot morphologies, robot legs of variable shapes were generated during the evolutionary process and autonomously built using additive fabrication. We present two cases with evo-devo experiments and one with evolution, and we hypothesize that the addition of a developmental stage can be used within robotics to improve performance. Moreover, our results show that a nonlinear system-environment interaction exists, which explains the nontrivial locomotion patterns observed. In the future, robots will be present in our daily lives, and this work introduces for the first time physical robots that evolve and grow while interacting with the environment.

  4. Recent advances in robotics

    International Nuclear Information System (INIS)

    Beni, G.; Hackwood, S.

    1984-01-01

    Featuring 10 contributions, this volume offers a state-of-the-art report on robotic science and technology. It covers robots in modern industry, robotic control to help the disabled, kinematics and dynamics, six-legged walking robots, a vector analysis of robot manipulators, tactile sensing in robots, and more

  5. INDUSTRIAL ROBOT REPEATABILITY TESTING WITH HIGH SPEED CAMERA PHANTOM V2511

    Directory of Open Access Journals (Sweden)

    Jerzy Józwik

    2016-12-01

    Full Text Available Apart from accuracy, one of the parameters describing industrial robots is positioning repeatability. The parameter in question, which is the subject of this paper, is often the decisive factor in determining whether a given robot can be applied to certain tasks. Articulated robots are predominantly used in processes such as spot welding, transport of materials and other welding applications, where high positioning repeatability is required. It is therefore essential to measure the parameter in question and to control it throughout the operation of the robot. This paper presents a methodology for robot positioning-repeatability measurements based on a vision technique. The measurements were conducted with a Phantom v2511 high-speed camera and TEMA Motion software for motion analysis. The object of the measurements was a 6-axis Yaskawa Motoman HP20F industrial robot. The results obtained in the tests provided data for the calculation of the positioning repeatability of the robot, which was then compared against the robot's specifications. Also analysed was the impact of the direction of displacement on the value of the attained pose errors. Test results are given in graphic form.
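
    Once the camera has measured the positions attained for one commanded pose, positioning repeatability is commonly computed as in ISO 9283: the mean radial deviation from the cluster barycentre plus three standard deviations. The record does not name the standard, so treat the formula choice as an assumption; a minimal sketch:

    ```python
    import math

    def repeatability(poses):
        """poses: list of (x, y, z) positions attained for one commanded pose.
        Returns RP = mean radial deviation from the barycentre + 3 * std dev."""
        n = len(poses)
        cx = sum(p[0] for p in poses) / n       # barycentre of attained poses
        cy = sum(p[1] for p in poses) / n
        cz = sum(p[2] for p in poses) / n
        dists = [math.dist(p, (cx, cy, cz)) for p in poses]
        mean_d = sum(dists) / n
        std_d = math.sqrt(sum((d - mean_d) ** 2 for d in dists) / (n - 1))
        return mean_d + 3.0 * std_d             # RP, in the poses' units
    ```

    For five attained positions scattered 0.1 mm around the commanded pose this yields an RP of roughly 0.21 mm, which would then be compared against the manufacturer's specification as described above.
    
    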

  6. Put Your Robot In, Put Your Robot Out: Sequencing through Programming Robots in Early Childhood

    Science.gov (United States)

    Kazakoff, Elizabeth R.; Bers, Marina Umaschi

    2014-01-01

    This article examines the impact of programming robots on sequencing ability in early childhood. Thirty-four children (ages 4.5-6.5 years) participated in computer programming activities with a developmentally appropriate tool, CHERP, specifically designed to program a robot's behaviors. The children learned to build and program robots over three…

  7. On quaternion based parameterization of orientation in computer vision and robotics

    Directory of Open Access Journals (Sweden)

    G. Terzakis

    2014-04-01

    Full Text Available The problem of orientation parameterization for applications in computer vision and robotics is examined in detail herein. The necessary intuition and formulas are provided for direct practical use in any existing algorithm that seeks to minimize a cost function in an iterative fashion. Two distinct schemes of parameterization are analyzed: the first scheme concerns the traditional axis-angle approach, while the second employs stereographic projection from the unit quaternion sphere to the 3D real projective space. Performance measurements are taken and a comparison is made between the two approaches. Results suggest that there exist several benefits in the use of stereographic projection, including rational expressions in the rotation matrix derivatives, improved accuracy, robustness to random starting points and accelerated convergence.
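
    For reference, the axis-angle scheme discussed above maps to a unit quaternion q = (cos(θ/2), sin(θ/2)·axis), and a vector is rotated as q·(0, v)·q*. A minimal self-contained sketch (standard formulas, not code from the paper):

    ```python
    import math

    def quat_from_axis_angle(axis, angle):
        """Unit quaternion (w, x, y, z) for a rotation of `angle` about `axis`."""
        ax, ay, az = axis
        n = math.sqrt(ax * ax + ay * ay + az * az)
        s = math.sin(angle / 2.0) / n
        return (math.cos(angle / 2.0), ax * s, ay * s, az * s)

    def quat_mul(p, q):
        """Hamilton product of two quaternions."""
        pw, px, py, pz = p
        qw, qx, qy, qz = q
        return (pw*qw - px*qx - py*qy - pz*qz,
                pw*qx + px*qw + py*qz - pz*qy,
                pw*qy - px*qz + py*qw + pz*qx,
                pw*qz + px*qy - py*qx + pz*qw)

    def rotate(q, v):
        """Rotate vector v by unit quaternion q via q * (0, v) * conj(q)."""
        qc = (q[0], -q[1], -q[2], -q[3])
        w = quat_mul(quat_mul(q, (0.0,) + tuple(v)), qc)
        return w[1:]
    ```

    Rotating (1, 0, 0) by 90° about the z-axis gives (0, 1, 0), as expected; iterative optimizers like those discussed in the record differ mainly in how they parameterize and differentiate this map.
    
    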

  8. Terpsichore. ENEA's autonomous robotics project; Progetto Tersycore, la robotica autonoma

    Energy Technology Data Exchange (ETDEWEB)

    Taraglio, S.; Zanela, S.; Santini, A.; Nanni, V. [ENEA, Centro Ricerche Casaccia, Rome (Italy). Div. Robotica e Informatica Avanzata

    1999-10-01

    The article presents some results of the Terpsichore project, aimed at developing and testing algorithms and applications for autonomous robotics. Four applications are described: dynamic mapping of a building's interior through the use of ultrasonic sensors; visual driving of an autonomous robot via a neural network controller; a neural network-based stereo vision system that steers a robot through unknown indoor environments; and the evolution of intelligent behaviours via the genetic algorithm approach.

  9. Human-Robot Collaboration: A Literature Review and Augmented Reality Approach in Design

    Directory of Open Access Journals (Sweden)

    Scott A. Green

    2008-03-01

    Full Text Available NASA's vision for space exploration stresses the cultivation of human-robotic systems. Similar systems are also envisaged for a variety of hazardous earthbound applications such as urban search and rescue. Recent research has pointed out that to reduce human workload, costs, fatigue-driven error and risk, intelligent robotic systems will need to be a significant part of mission design. However, little attention has been paid to joint human-robot teams. Making human-robot collaboration natural and efficient is crucial. In particular, grounding, situational awareness, a common frame of reference and spatial referencing are vital to effective communication and collaboration. Augmented Reality (AR), the overlaying of computer graphics onto the real-world view, can provide the necessary means for a human-robotic system to fulfill these requirements for effective collaboration. This article reviews the fields of human-robot interaction and augmented reality, investigates the potential avenues for creating natural human-robot collaboration through spatial dialogue utilizing AR, and proposes a holistic architectural design for human-robot collaboration.

  10. Human-Robot Collaboration: A Literature Review and Augmented Reality Approach in Design

    Directory of Open Access Journals (Sweden)

    Scott A. Green

    2008-11-01

    Full Text Available NASA's vision for space exploration stresses the cultivation of human-robotic systems. Similar systems are also envisaged for a variety of hazardous earthbound applications such as urban search and rescue. Recent research has pointed out that to reduce human workload, costs, fatigue-driven error and risk, intelligent robotic systems will need to be a significant part of mission design. However, little attention has been paid to joint human-robot teams. Making human-robot collaboration natural and efficient is crucial. In particular, grounding, situational awareness, a common frame of reference and spatial referencing are vital to effective communication and collaboration. Augmented Reality (AR), the overlaying of computer graphics onto the real-world view, can provide the necessary means for a human-robotic system to fulfill these requirements for effective collaboration. This article reviews the fields of human-robot interaction and augmented reality, investigates the potential avenues for creating natural human-robot collaboration through spatial dialogue utilizing AR, and proposes a holistic architectural design for human-robot collaboration.

  11. MASSIVE OPEN ONLINE COURSES IN EDUCATION OF ROBOTICS

    Directory of Open Access Journals (Sweden)

    Gyula Mester

    2016-03-01

    Full Text Available Recently, the demand for learning has been constantly increasing. MOOCs (massive open online courses) represent the educational revolution of the century. A MOOC is an online course open to an unlimited number of participants and accessible via the web. Major providers of MOOCs are Coursera, Udacity (Stanford, since 2012) and edX (Harvard and MIT, since 2012). In this paper two MOOCs are considered: Introduction to Robotics and Robotic Vision, both from the Queensland University of Technology, Brisbane, Australia.

  12. HexaMob—A Hybrid Modular Robotic Design for Implementing Biomimetic Structures

    Directory of Open Access Journals (Sweden)

    Sasanka Sankhar Reddy CH.

    2017-10-01

    Full Text Available Modular robots are capable of forming primitive shapes such as lattice and chain structures, with the additional flexibility of distributed sensing. Biomimetic structures developed from such modular units provide ease of replacement and reconfiguration in coordinated structures, transportation, etc. in real-life scenarios. Though research on forming biological organisms from modular robotic units is at a nascent stage, modular robotic units are already capable of forming sophisticated structures. The modular robotic designs proposed so far vary significantly in external structure, sensor-actuator mechanisms, interfaces for docking and undocking, techniques for providing mobility, coordinated structures, locomotion, etc., and each design addresses the challenges of modular robotics with a different strategy. This paper presents a novel modular wheeled robotic design, HexaMob, providing four degrees of freedom (2 for mobility and 2 for structural reconfiguration) on a single module with minimal usage of sensor-actuator assemblies. The crucial features of modular robotics, such as back-driving restriction, docking, and navigation, are addressed in the HexaMob design. The proposed docking mechanism uses a vision sensor, enhancing docking as well as navigation capabilities in coordinated structures such as humanoid robots.

  13. Human Robot Interaction for Hybrid Collision Avoidance System for Indoor Mobile Robots

    Directory of Open Access Journals (Sweden)

    Mazen Ghandour

    2017-06-01

    Full Text Available In this paper, a novel approach to collision avoidance for indoor mobile robots, based on human-robot interaction, is realized. The main contribution of this work is a new technique for collision avoidance that engages the human and the robot in generating new collision-free paths. In mobile robotics, collision avoidance is critical for the success of robots in implementing their tasks, especially when they navigate in crowded and dynamic environments that include humans. Traditional collision avoidance methods treat the human as a dynamic obstacle, without taking into consideration that the human will also try to avoid the robot; this causes the person and the robot to get confused, especially in crowded social places such as restaurants, hospitals, and laboratories. To avoid such scenarios, a reactive-supervised collision avoidance system for mobile robots based on human-robot interaction is implemented. In this method, both the robot and the human collaborate in generating the collision avoidance via interaction: the person notifies the robot about the avoidance direction, and the robot then searches for the optimal collision-free path in the selected direction. If no person interacts with the robot, it selects the navigation path autonomously, choosing the path closest to the goal location. Humans interact with the robot using gesture recognition and a Kinect sensor. To build the gesture recognition system, two models were used to classify the gestures: the first is a Back-Propagation Neural Network (BPNN), and the second is a Support Vector Machine (SVM). Furthermore, a novel collision avoidance system for avoiding obstacles is implemented and integrated with the HRI system. The system is tested on an H20 robot from DrRobot Company (Canada), and a set of experiments were implemented to report the performance of the system in interacting with the human and avoiding

  14. Line-feature-based calibration method of structured light plane parameters for robot hand-eye system

    Science.gov (United States)

    Qi, Yuhan; Jing, Fengshui; Tan, Min

    2013-03-01

    For monocular structured-light vision measurement, it is essential to calibrate the structured-light plane parameters in addition to the camera intrinsic parameters. A line-feature-based calibration method of structured-light plane parameters for a robot hand-eye system is proposed. Structured-light stripes are selected as the calibration primitives, and the robot moves from one calibration position to another under a constraint ensuring that two misaligned stripe lines are generated. The images of the stripe lines can then be captured by the camera fixed at the robot's end link. During calibration, the equations of the two stripe lines in the camera coordinate system are calculated, and the structured-light plane can then be determined. As the robot's motion may affect the effectiveness of calibration, the robot's motion constraints are analyzed. A calibration experiment and two vision measurement experiments were implemented, and the results reveal that the calibration accuracy meets the precision requirements of robotic thick-plate welding. Finally, analysis and discussion are provided to illustrate that the method has high efficiency, fit for industrial in-situ calibration.
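
    The core geometric step described above — recovering the light plane from two stripe lines in the camera frame — reduces to a cross product: two non-parallel coplanar lines (each given as a point and a direction) fix the plane's normal and offset. A small sketch under that assumption (the example lines are illustrative, not from the paper):

    ```python
    def cross(a, b):
        """Cross product of two 3-vectors."""
        return (a[1]*b[2] - a[2]*b[1],
                a[2]*b[0] - a[0]*b[2],
                a[0]*b[1] - a[1]*b[0])

    def light_plane(p1, d1, p2, d2):
        """Plane (a, b, c, d) with a*x + b*y + c*z + d = 0 containing both lines.
        Assumes the lines are coplanar and non-parallel, as the calibration
        procedure above guarantees by construction."""
        n = cross(d1, d2)                          # normal to both directions
        d = -(n[0]*p1[0] + n[1]*p1[1] + n[2]*p1[2])
        return n + (d,)
    ```

    For two stripe lines lying in the plane z = 2 this returns (0, 0, 1, -2), i.e. exactly that plane.
    
    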

  15. Multi-Locomotion Robotic Systems New Concepts of Bio-inspired Robotics

    CERN Document Server

    Fukuda, Toshio; Sekiyama, Kosuke; Aoyama, Tadayoshi

    2012-01-01

    Nowadays, much attention is paid to robots working in human living environments, in fields such as medicine, welfare, and entertainment. Active research is being conducted in a variety of areas such as artificial intelligence, cognitive engineering, sensor technology, interfaces, and motion control. In the future, highly functional human-like robots are expected to be realized by integrating technologies from these and other fields. The book represents new developments and advances in the field of bio-inspired robotics research, introducing the state of the art and the idea of a multi-locomotion robotic system that implements the diversity of animal motion. It covers theoretical and computational aspects of Passive Dynamic Autonomous Control (PDAC), robot motion control, multi-legged walking and climbing, as well as brachiation, focusing on concrete robot systems, components and applications. In addition, gorilla-type robot systems are described as...

  16. iPathology: Robotic Applications and Management of Plants and Plant Diseases

    Directory of Open Access Journals (Sweden)

    Yiannis Ampatzidis

    2017-06-01

    Full Text Available The rapid development of new technologies and the changing landscape of the online world (e.g., the Internet of Things (IoT), the Internet of All, cloud-based solutions) provide a unique opportunity for developing automated and robotic systems for urban farming, agriculture, and forestry. Technological advances in machine vision, global positioning systems, laser technologies, actuators, and mechatronics have enabled the development and implementation of robotic systems and intelligent technologies for precision agriculture. Herein, we present and review robotic applications on plant pathology and management, and emerging agricultural technologies for intra-urban agriculture. Greenhouse advanced management systems and technologies have been greatly developed in recent years, integrating IoT and WSN (Wireless Sensor Network). Machine learning, machine vision, and AI (Artificial Intelligence) have been utilized and applied in agriculture for automated and robotic farming. Intelligent technologies, using machine vision/learning, have been developed not only for planting, irrigation, weeding (to some extent), pruning, and harvesting, but also for plant disease detection and identification. However, plant disease detection still represents an intriguing challenge, for both abiotic and biotic stress. Many recognition methods and technologies for identifying plant disease symptoms have been successfully developed; still, the majority of them require a controlled environment for data acquisition to avoid false positives. Machine learning methods (e.g., deep and transfer learning) present promising results for improving image processing and plant symptom identification. Nevertheless, diagnostic specificity is a challenge for microorganism control and should drive the development of mechatronics and robotic solutions for disease management.

  17. Robot Wars: US Empire and geopolitics in the robotic age

    Science.gov (United States)

    Shaw, Ian GR

    2017-01-01

    How will the robot age transform warfare? What geopolitical futures are being imagined by the US military? This article constructs a robotic futurology to examine these crucial questions. Its central concern is how robots – driven by leaps in artificial intelligence and swarming – are rewiring the spaces and logics of US empire, warfare, and geopolitics. The article begins by building a more-than-human geopolitics to de-center the role of humans in conflict and foreground a worldly understanding of robots. The article then analyzes the idea of US empire, before speculating upon how and why robots are materializing new forms of proxy war. A three-part examination of the shifting spaces of US empire then follows: (1) Swarm Wars explores the implications of miniaturized drone swarming; (2) Roboworld investigates how robots are changing US military basing strategy and producing new topological spaces of violence; and (3) The Autogenic Battle-Site reveals how autonomous robots will produce emergent, technologically event-ful sites of security and violence – revolutionizing the battlespace. The conclusion reflects on the rise of a robotic US empire and its consequences for democracy. PMID:29081605

  18. Modular Robotic Wearable

    DEFF Research Database (Denmark)

    Lund, Henrik Hautop; Pagliarini, Luigi

    2009-01-01

    In this concept paper we trace the contours and define a new approach to robotic systems, composed of interactive robotic modules which are somehow worn on the body. We label such a field as Modular Robotic Wearable (MRW). We describe how, by using modular robotics for creating wearable....... Finally, by focusing on the intersection of the combination modular robotic systems, wearability, and bodymind we attempt to explore the theoretical characteristics of such approach and exploit the possible playware application fields....

  19. Marine Robot Autonomy

    CERN Document Server

    2013-01-01

    Autonomy for Marine Robots provides a timely and insightful overview of intelligent autonomy in marine robots. A brief history of this emerging field is provided, along with a discussion of the challenges unique to the underwater environment and their impact on the level of intelligent autonomy required.  Topics covered at length examine advanced frameworks, path-planning, fault tolerance, machine learning, and cooperation as relevant to marine robots that need intelligent autonomy.  This book also: Discusses and offers solutions for the unique challenges presented by more complex missions and the dynamic underwater environment when operating autonomous marine robots Includes case studies that demonstrate intelligent autonomy in marine robots to perform underwater simultaneous localization and mapping  Autonomy for Marine Robots is an ideal book for researchers and engineers interested in the field of marine robots.      

  20. Biologically inspired robots as artificial inspectors

    Science.gov (United States)

    Bar-Cohen, Yoseph

    2002-06-01

    Imagine an inspector conducting an NDE on an aircraft when you notice something different about him: he is not real but rather a robot. Your first reaction would probably be to say 'it's unbelievable, but he looks real', just as you would react to an artificial flower that is a good imitation. This science fiction scenario could become a reality given the trend in the development of biologically inspired technologies, where terms like artificial intelligence, artificial muscles, artificial vision and numerous others are increasingly becoming common engineering tools. For many years, the trend has been to automate processes in order to increase the efficiency of performing redundant tasks, and various systems have been developed to deal with specific production line requirements. Realizing that some parts are too complex or delicate to handle in small quantities with a simple automatic system, engineers developed robotic mechanisms. Aircraft inspection has benefited from this evolving technology, with manipulators and crawlers developed for rapid and reliable inspection. Advances in robotics, towards making robots autonomous and possibly human-like, can potentially address the need to inspect structures that are beyond the capability of today's technology, in configurations that are not predetermined. These robots may operate in harsh or hazardous environments that are too dangerous for human presence. Making such robots is becoming increasingly feasible, and in this paper the state of the art is reviewed.

  1. Industrial Robots.

    Science.gov (United States)

    Reed, Dean; Harden, Thomas K.

    Robots are mechanical devices that can be programmed to perform some task of manipulation or locomotion under automatic control. This paper discusses: (1) early developments of the robotics industry in the United States; (2) the present structure of the industry; (3) noneconomic factors related to the use of robots; (4) labor considerations…

  2. User-centric design of a personal assistance robot (FRASIER) for active aging.

    Science.gov (United States)

    Padir, Taşkin; Skorinko, Jeanine; Dimitrov, Velin

    2015-01-01

    We present our preliminary results from the design process for developing the Worcester Polytechnic Institute's personal assistance robot, FRASIER, as an intelligent service robot for enabling active aging. The robot's capabilities include vision-based object detection, tracking the user, and helping to carry heavy items such as grocery bags or cafeteria trays. This work-in-progress report outlines our motivation and approach to developing the next generation of service robots for the elderly. Our main contribution in this paper is the development of a set of specifications based on the adopted user-centered design process, and the realization of a prototype system designed to meet these specifications.

  3. Social Robots

    DEFF Research Database (Denmark)

    Social robotics is a cutting edge research area gathering researchers and stakeholders from various disciplines and organizations. The transformational potential that these machines, in the form of, for example, caregiving, entertainment or partner robots, pose to our societies and to us as individuals seems to be limited by our technical limitations and phantasy alone. This collection contributes to the field of social robotics by exploring its boundaries from a philosophically informed standpoint. It constructively outlines central potentials and challenges and thereby also provides a stable...

  4. Non-manufacturing applications of robotics

    International Nuclear Information System (INIS)

    Dauchez, P.

    2000-12-01

    This book presents the different non-manufacturing sectors of activity where robotics can have useful or necessary applications: underwater robotics, agricultural robotics, road-work robotics, nuclear robotics, medical-surgical robotics, aids for disabled people, and entertainment robotics. Service robotics has been deliberately excluded because this developing sector is not yet mature. (J.S.)

  5. Laser range finder model for autonomous navigation of a robot in a maize field using a particle filter

    NARCIS (Netherlands)

    Hiremath, S.A.; Heijden, van der G.W.A.M.; Evert, van F.K.; Stein, A.; Braak, ter C.J.F.

    2014-01-01

    Autonomous navigation of robots in an agricultural environment is a difficult task due to the inherent uncertainty in the environment. Many existing agricultural robots use computer vision and other sensors to supplement Global Positioning System (GPS) data when navigating. Vision based methods are
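    The abstract above is truncated, but the navigation approach named in the title is the standard bootstrap particle filter. A minimal one-dimensional sketch, assuming a Gaussian position measurement; the paper's 2-D maize-field model with a laser range finder is simplified away, and all noise parameters are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

def particle_filter_step(particles, weights, control, measurement,
                         motion_noise=0.05, meas_noise=0.1):
    """One bootstrap-filter update for a 1-D robot position."""
    # Predict: propagate each particle through the motion model with noise.
    particles = particles + control + rng.normal(0.0, motion_noise, particles.shape)
    # Update: reweight by the Gaussian likelihood of the position measurement.
    weights = weights * np.exp(-0.5 * ((measurement - particles) / meas_noise) ** 2)
    weights = weights / weights.sum()
    # Resample when the effective sample size collapses below N/2.
    if 1.0 / np.sum(weights ** 2) < len(particles) / 2:
        idx = rng.choice(len(particles), size=len(particles), p=weights)
        particles = particles[idx]
        weights = np.full(len(particles), 1.0 / len(particles))
    return particles, weights

# Usage: track a robot advancing 0.1 m per step along a crop row.
particles = rng.uniform(0.0, 1.0, 500)
weights = np.full(500, 1.0 / 500)
truth = 0.0
for _ in range(30):
    truth += 0.1
    measurement = truth + rng.normal(0.0, 0.1)
    particles, weights = particle_filter_step(particles, weights, 0.1, measurement)
estimate = float(np.sum(particles * weights))
```

The weighted particle mean tracks the true position despite the noisy measurements; the same predict/update/resample loop generalizes to the 2-D pose used in field robots.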

  6. An approach to robot SLAM based on incremental appearance learning with omnidirectional vision

    Science.gov (United States)

    Wu, Hua; Qin, Shi-Yin

    2011-03-01

    Localisation and mapping with an omnidirectional camera becomes more difficult as the landmark appearances change dramatically in the omnidirectional image. With conventional techniques, it is difficult to match the features of the landmark with the template. We present a novel robot simultaneous localisation and mapping (SLAM) algorithm with an omnidirectional camera, which uses incremental landmark appearance learning to provide posterior probability distribution for estimating the robot pose under a particle filtering framework. The major contribution of our work is to represent the posterior estimation of the robot pose by incremental probabilistic principal component analysis, which can be naturally incorporated into the particle filtering algorithm for robot SLAM. Moreover, the innovative method of this article allows the adoption of the severe distorted landmark appearances viewed with omnidirectional camera for robot SLAM. The experimental results demonstrate that the localisation error is less than 1 cm in an indoor environment using five landmarks, and the location of the landmark appearances can be estimated within 5 pixels deviation from the ground truth in the omnidirectional image at a fairly fast speed.
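    Incremental appearance learning of the kind described above can be illustrated with a running mean and covariance whose top eigenvectors form the landmark appearance basis. This is a simplified stand-in for the incremental probabilistic PCA used in the paper; the class and variable names are hypothetical:

```python
import numpy as np

class IncrementalAppearanceModel:
    """Running mean/covariance of landmark appearance vectors; the top
    eigenvectors give an incrementally updated PCA basis."""
    def __init__(self, dim):
        self.n = 0
        self.mean = np.zeros(dim)
        self.cov = np.zeros((dim, dim))

    def update(self, x):
        # Online mean and covariance update: no stored image history needed.
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self.cov += (np.outer(delta, x - self.mean) - self.cov) / self.n

    def basis(self, k):
        # Eigenvectors of the running covariance, largest variance first.
        vals, vecs = np.linalg.eigh(self.cov)
        return vecs[:, np.argsort(vals)[::-1][:k]]

# Usage: feed synthetic 2-D "appearance" vectors along a known direction.
rng = np.random.default_rng(1)
direction = np.array([1.0, 1.0]) / np.sqrt(2)
model = IncrementalAppearanceModel(dim=2)
for _ in range(500):
    model.update(rng.normal() * direction + rng.normal(0.0, 0.05, 2))
basis_vec = model.basis(1)[:, 0]
```

After a few hundred updates the leading basis vector aligns with the dominant appearance direction, which is what lets distorted omnidirectional landmark views be matched without a fixed template.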

  7. Advanced robot locomotion.

    Energy Technology Data Exchange (ETDEWEB)

    Neely, Jason C.; Sturgis, Beverly Rainwater; Byrne, Raymond Harry; Feddema, John Todd; Spletzer, Barry Louis; Rose, Scott E.; Novick, David Keith; Wilson, David Gerald; Buerger, Stephen P.

    2007-01-01

    This report contains the results of a research effort on advanced robot locomotion. The majority of this work focuses on walking robots. Walking robot applications range from delivering special payloads to unique locations that require human locomotion, to exoskeleton human-assistance applications. A walking robot could step over obstacles and move through narrow openings that a wheeled or tracked vehicle could not overcome. It could pick up and manipulate objects in ways that a standard robot gripper could not. Most importantly, a walking robot would be able to rapidly perform these tasks through an intuitive user interface that mimics natural human motion. The largest obstacle arises in emulating the stability and balance control naturally present in humans but needed for bipedal locomotion in a robot. A tracked robot is bulky and limited, but a wide wheel base assures passive stability. Human bipedal motion is so common that it is taken for granted, but bipedal motion requires active balance and stability control, for which the analysis is non-trivial. This report contains an extensive literature study on the state of the art of legged robotics, and it additionally provides the analysis, simulation, and hardware verification of two variants of a prototype leg design.

  8. Thoughts turned into high-level commands: Proof-of-concept study of a vision-guided robot arm driven by functional MRI (fMRI) signals.

    Science.gov (United States)

    Minati, Ludovico; Nigri, Anna; Rosazza, Cristina; Bruzzone, Maria Grazia

    2012-06-01

    Previous studies have demonstrated the possibility of using functional MRI to control a robot arm through a brain-machine interface by directly coupling haemodynamic activity in the sensory-motor cortex to the position of two axes. Here, we extend this work by implementing interaction at a more abstract level, whereby imagined actions deliver structured commands to a robot arm guided by a machine vision system. Rather than extracting signals from a small number of pre-selected regions, the proposed system adaptively determines at the individual level how to map representative brain areas to the input nodes of a classifier network. In this initial study, a median action recognition accuracy of 90% was attained on five volunteers performing a game consisting of collecting randomly positioned coloured pawns and placing them into cups. The "pawn" and "cup" instructions were imparted through four mental imagery tasks, linked to robot arm actions by a state machine. With the current implementation in the MATLAB language, the median action recognition time was 24.3 s and the robot execution time was 17.7 s. We demonstrate the notion of combining haemodynamic brain-machine interfacing with computer vision to implement interaction at the level of high-level commands rather than individual movements, which may find application in future fMRI approaches relevant to brain-lesioned patients, and provide source code supporting further work on larger command sets and real-time processing. Copyright © 2012 IPEM. Published by Elsevier Ltd. All rights reserved.
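    The idea of linking a small set of imagery classifications to structured robot commands via a state machine can be sketched as follows. The four class labels and the two-stage pawn/cup grammar below are illustrative assumptions, not the study's actual task set:

```python
class ImageryStateMachine:
    """Hypothetical command grammar: two imagery classes select a pawn
    colour, two select a target cup; a command is emitted only when a
    pawn choice is followed by a cup choice."""
    def __init__(self):
        self.pawn = None  # pending pawn selection, if any

    def feed(self, imagery_class):
        """Consume one classified imagery event; return a robot action
        tuple once a pawn choice is followed by a cup choice."""
        if imagery_class in ('motor_left', 'motor_right'):
            # First stage: pick which pawn colour to collect.
            self.pawn = 'red' if imagery_class == 'motor_left' else 'blue'
            return None
        if self.pawn is None:
            return None  # a cup choice before any pawn choice is ignored
        # Second stage: pick the destination cup and emit a command.
        cup = 'cup_A' if imagery_class == 'navigation' else 'cup_B'
        action = ('pick_and_place', self.pawn, cup)
        self.pawn = None
        return action
```

The state machine is what lifts slow, noisy single-trial classifications into discrete high-level commands that the vision-guided arm can then execute autonomously.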

  9. Scaling Robotic Displays: Displays and Techniques for Dismounted Movement with Robots

    Science.gov (United States)

    2010-04-01

    [Table fragment from the report] Mean ratings for driving the robot while performing the low crawl (4.25, 5.00), negotiating the hill (6.00, 5.00), climbing the stairs (4.67, 5.00), and walking (5.70, 5.27). HMD comments: "It was fairly doable."; "When you're looking through the lens, it's not..." By Elizabeth S. Redden, Rodger A. Pettitt

  10. Human-Robot Interaction

    Science.gov (United States)

    Sandor, Aniko; Cross, E. Vincent, II; Chang, Mai Lee

    2015-01-01

    Human-robot interaction (HRI) is a discipline investigating the factors affecting the interactions between humans and robots. It is important to evaluate how the design of interfaces affect the human's ability to perform tasks effectively and efficiently when working with a robot. By understanding the effects of interface design on human performance, workload, and situation awareness, interfaces can be developed to appropriately support the human in performing tasks with minimal errors and with appropriate interaction time and effort. Thus, the results of research on human-robot interfaces have direct implications for the design of robotic systems. For efficient and effective remote navigation of a rover, a human operator needs to be aware of the robot's environment. However, during teleoperation, operators may get information about the environment only through a robot's front-mounted camera causing a keyhole effect. The keyhole effect reduces situation awareness which may manifest in navigation issues such as higher number of collisions, missing critical aspects of the environment, or reduced speed. One way to compensate for the keyhole effect and the ambiguities operators experience when they teleoperate a robot is adding multiple cameras and including the robot chassis in the camera view. Augmented reality, such as overlays, can also enhance the way a person sees objects in the environment or in camera views by making them more visible. Scenes can be augmented with integrated telemetry, procedures, or map information. Furthermore, the addition of an exocentric (i.e., third-person) field of view from a camera placed in the robot's environment may provide operators with the additional information needed to gain spatial awareness of the robot. 
Two research studies investigated possible mitigation approaches to address the keyhole effect: 1) combining the inclusion of the robot chassis in the camera view with augmented reality overlays, and 2) modifying the camera

  11. Next generation light robotic

    DEFF Research Database (Denmark)

    Villangca, Mark Jayson; Palima, Darwin; Banas, Andrew Rafael

    2017-01-01

    Conventional robotics provides machines and robots that can replace and surpass human performance in repetitive, difficult, and even dangerous tasks at industrial assembly lines, hazardous environments, or even at remote planets. A new class of robotic systems no longer aims to replace humans with so-called automatons but, rather, to create robots that can work alongside human operators. These new robots are intended to collaborate with humans—extending their abilities—from assisting workers on the factory floor to rehabilitating patients in their homes. In medical robotics, robot-assisted surgery imbibes surgeons with superhuman abilities and gives the expression "surgical precision" a whole new meaning. Still in its infancy, much remains to be done to improve human-robot collaboration both in realizing robots that can operate safely with humans and in training personnel that can work...

  12. Human-Robot Interaction: Does Robotic Guidance Force Affect Gait-Related Brain Dynamics during Robot-Assisted Treadmill Walking?

    Directory of Open Access Journals (Sweden)

    Kristel Knaepen

    Full Text Available In order to determine optimal training parameters for robot-assisted treadmill walking, it is essential to understand how a robotic device interacts with its wearer, and thus, how parameter settings of the device affect locomotor control. The aim of this study was to assess the effect of different levels of guidance force during robot-assisted treadmill walking on cortical activity. Eighteen healthy subjects walked at 2 km·h⁻¹ on a treadmill with and without assistance of the Lokomat robotic gait orthosis. Event-related spectral perturbations and changes in power spectral density were investigated during unassisted treadmill walking as well as during robot-assisted treadmill walking at 30%, 60% and 100% guidance force (with 0% body weight support). Clustering of independent components revealed three clusters of activity in the sensorimotor cortex during treadmill walking and robot-assisted treadmill walking in healthy subjects. These clusters demonstrated gait-related spectral modulations in the mu, beta and low gamma bands over the sensorimotor cortex related to specific phases of the gait cycle. Moreover, mu and beta rhythms were suppressed in the right primary sensory cortex during treadmill walking compared to robot-assisted treadmill walking with 100% guidance force, indicating significantly larger involvement of the sensorimotor area during treadmill walking compared to robot-assisted treadmill walking. Only marginal differences in the spectral power of the mu, beta and low gamma bands could be identified between robot-assisted treadmill walking with different levels of guidance force. From these results it can be concluded that a high level of guidance force (i.e., 100% guidance force) and thus a less active participation during locomotion should be avoided during robot-assisted treadmill walking. This will optimize the involvement of the sensorimotor cortex which is known to be crucial for motor learning.

  13. Human-Robot Interaction: Does Robotic Guidance Force Affect Gait-Related Brain Dynamics during Robot-Assisted Treadmill Walking?

    Science.gov (United States)

    Knaepen, Kristel; Mierau, Andreas; Swinnen, Eva; Fernandez Tellez, Helio; Michielsen, Marc; Kerckhofs, Eric; Lefeber, Dirk; Meeusen, Romain

    2015-01-01

    In order to determine optimal training parameters for robot-assisted treadmill walking, it is essential to understand how a robotic device interacts with its wearer, and thus, how parameter settings of the device affect locomotor control. The aim of this study was to assess the effect of different levels of guidance force during robot-assisted treadmill walking on cortical activity. Eighteen healthy subjects walked at 2 km·h⁻¹ on a treadmill with and without assistance of the Lokomat robotic gait orthosis. Event-related spectral perturbations and changes in power spectral density were investigated during unassisted treadmill walking as well as during robot-assisted treadmill walking at 30%, 60% and 100% guidance force (with 0% body weight support). Clustering of independent components revealed three clusters of activity in the sensorimotor cortex during treadmill walking and robot-assisted treadmill walking in healthy subjects. These clusters demonstrated gait-related spectral modulations in the mu, beta and low gamma bands over the sensorimotor cortex related to specific phases of the gait cycle. Moreover, mu and beta rhythms were suppressed in the right primary sensory cortex during treadmill walking compared to robot-assisted treadmill walking with 100% guidance force, indicating significantly larger involvement of the sensorimotor area during treadmill walking compared to robot-assisted treadmill walking. Only marginal differences in the spectral power of the mu, beta and low gamma bands could be identified between robot-assisted treadmill walking with different levels of guidance force. From these results it can be concluded that a high level of guidance force (i.e., 100% guidance force) and thus a less active participation during locomotion should be avoided during robot-assisted treadmill walking. This will optimize the involvement of the sensorimotor cortex which is known to be crucial for motor learning.
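    Analyses like the one above rest on band-limited spectral power estimates. A minimal periodogram sketch of mu-band (8-12 Hz) power, a simplified stand-in for the event-related spectral perturbation analysis used in the study; the sampling rate and test signals are hypothetical:

```python
import numpy as np

def band_power(signal, fs, f_lo, f_hi):
    """Mean squared spectral amplitude of `signal` within [f_lo, f_hi] Hz,
    computed with a plain rFFT periodogram."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    spec = np.abs(np.fft.rfft(signal)) ** 2
    band = (freqs >= f_lo) & (freqs <= f_hi)
    return float(spec[band].mean())

# Usage: a synthetic 10 Hz "mu rhythm" dominates the 8-12 Hz band,
# while a 35 Hz "low gamma" component does not.
fs = 250
t = np.arange(fs * 2) / fs
mu_like = np.sin(2 * np.pi * 10 * t)
gamma_like = np.sin(2 * np.pi * 35 * t)
```

Comparing such band powers across conditions (e.g., 30% vs 100% guidance force) is the basic operation behind the mu/beta suppression findings reported above.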

  14. Embedding visual routines in AnaFocus' Eye-RIS Vision Systems for closing the perception to action loop in roving robots

    Science.gov (United States)

    Jiménez-Marrufo, A.; Caballero-García, D. J.

    2011-05-01

    The purpose of the current paper is to describe how different visual routines can be developed and embedded in the AnaFocus' Eye-RIS Vision System on Chip (VSoC) to close the perception to action loop within the roving robots developed under the framework of the SPARK II European project. The Eye-RIS Vision System on Chip employs a bio-inspired architecture where image acquisition and processing are truly intermingled and the processing itself is carried out in two steps. At the first step, processing is fully parallel owing to the concourse of dedicated circuit structures which are integrated close to the sensors. At the second step, processing is realized on digitally-coded information data by means of digital processors. All these capabilities make the Eye-RIS VSoC very suitable for integration within small robots in general, and within the robots developed by the SPARK II project in particular. These systems provide image-processing capabilities and speed comparable to high-end conventional vision systems without the need for high-density image memory and intensive digital processing. As far as perception is concerned, current perceptual schemes are often based on information derived from visual routines. Since real-world images are too complex to process for perceptual needs with traditional approaches, more computationally feasible algorithms are required to extract the desired features from the scene in real time, to efficiently proceed with the consequent action. In this paper the development of such algorithms and their implementation taking full advantage of the sensing-processing capabilities of the Eye-RIS VSoC are described.

  15. Conference on Intelligent Robotics in Field, Factory, Service, and Space (CIRFFSS 1994), volume 1

    Science.gov (United States)

    Erickson, Jon D. (Editor)

    1994-01-01

    The AIAA/NASA Conference on Intelligent Robotics in Field, Factory, Service, and Space (CIRFFSS '94) was originally proposed because of the strong belief that America's problems of global economic competitiveness and job creation and preservation can partly be solved by the use of intelligent robotics, which are also required for human space exploration missions. Individual sessions addressed nuclear industry, agile manufacturing, security/building monitoring, on-orbit applications, vision and sensing technologies, situated control and low-level control, robotic systems architecture, environmental restoration and waste management, robotic remanufacturing, and healthcare applications.

  16. Analysis and optimization on in-vessel inspection robotic system for EAST

    International Nuclear Information System (INIS)

    Zhang, Weijun; Zhou, Zeyu; Yuan, Jianjun; Du, Liang; Mao, Ziming

    2015-01-01

    Since China successfully built her first Experimental Advanced Superconducting TOKAMAK (EAST) several years ago, great interest and demand have been growing for robotic in-vessel inspection/operation systems, which would make possible the observation of in-vessel physical phenomena, collection of visual information, 3D mapping and localization, and even maintenance. However, implementing a practical and robust robotic system raises many challenges, owing to complex constraints and expectations, e.g., the high remanent working temperature (100 °C) and vacuum (10⁻³ Pa) environment even in the rest intervals between plasma discharge experiments, close-up and precise inspection, and operation efficiency, besides the general kinematic requirements of the D-shaped irregular vessel. In this paper we propose an upgraded robotic system combining a redundant-degree-of-freedom (DOF) manipulator with a binocular vision system at the tip and a virtual reality system. A comprehensive comparison and discussion are given on the necessity and main function of the binocular vision system, path planning for inspection, fast localization, inspection efficiency and success rate in time, optimization of the kinematic configuration, and the possibility of an underactuated mechanism. A detailed design, implementation, and experiments of the binocular vision system, together with recent development progress of the whole robotic system, are reported in the later part of the paper, while future work and expectations are described at the end.
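    The role of a binocular tip camera in range estimation can be illustrated with the rectified-stereo depth relation Z = fB/d. A minimal sketch; the focal length, baseline, and disparity values below are hypothetical, not the EAST system's parameters:

```python
def depth_from_disparity(f_px, baseline_m, disparity_px):
    """Rectified pinhole stereo: depth Z = f * B / d, with focal length f
    in pixels, baseline B in metres, and disparity d in pixels."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return f_px * baseline_m / disparity_px
```

For example, with an 800 px focal length and a 10 cm baseline, a 40 px disparity corresponds to a range of 2 m; larger disparities mean closer in-vessel surfaces.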

  17. REPORT ON FIRST INTERNATIONAL WORKSHOP ON ROBOTIC SURGERY IN THORACIC ONCOLOGY

    Directory of Open Access Journals (Sweden)

    Giulia Veronesi

    2016-10-01

    Full Text Available A workshop of experts from France, Germany, Italy and the United States took place at Humanitas Research Hospital Milan, Italy, on 10-11 February 2016, to examine techniques for and applications of robotic surgery to thoracic oncology. The main topics of presentation and discussion were: robotic surgery for lung resection; robot-assisted thymectomy; minimally invasive surgery for esophageal cancer; new developments in computer-assisted surgery and medical applications of robots; the challenge of costs; and future clinical research in robotic thoracic surgery. The following article summarizes the main contributions to the workshop. The Workshop consensus was that, since video-assisted thoracoscopic surgery (VATS) is becoming the mainstream approach to resectable lung cancer in North America and Europe, robotic surgery for thoracic oncology is likely to be embraced by an increasing number of thoracic surgeons, since it has technical advantages over VATS, including intuitive movements, tremor filtration, more degrees of manipulative freedom, motion scaling, and high definition stereoscopic vision. These advantages may make robotic surgery more accessible than VATS to trainees and experienced surgeons, and also lead to expanded indications. However the high costs of robotic surgery and absence of tactile feedback remain obstacles to widespread dissemination. A prospective multicentric randomized trial (NCT02804893) to compare robotic and VATS approaches to stage I and II lung cancer will start shortly.

  18. Report on First International Workshop on Robotic Surgery in Thoracic Oncology.

    Science.gov (United States)

    Veronesi, Giulia; Cerfolio, Robert; Cingolani, Roberto; Rueckert, Jens C; Soler, Luc; Toker, Alper; Cariboni, Umberto; Bottoni, Edoardo; Fumagalli, Uberto; Melfi, Franca; Milli, Carlo; Novellis, Pierluigi; Voulaz, Emanuele; Alloisio, Marco

    2016-01-01

    A workshop of experts from France, Germany, Italy, and the United States took place at Humanitas Research Hospital Milan, Italy, on February 10 and 11, 2016, to examine techniques for and applications of robotic surgery to thoracic oncology. The main topics of presentation and discussion were robotic surgery for lung resection; robot-assisted thymectomy; minimally invasive surgery for esophageal cancer; new developments in computer-assisted surgery and medical applications of robots; the challenge of costs; and future clinical research in robotic thoracic surgery. The following article summarizes the main contributions to the workshop. The Workshop consensus was that since video-assisted thoracoscopic surgery (VATS) is becoming the mainstream approach to resectable lung cancer in North America and Europe, robotic surgery for thoracic oncology is likely to be embraced by an increasing number of thoracic surgeons, since it has technical advantages over VATS, including intuitive movements, tremor filtration, more degrees of manipulative freedom, motion scaling, and high-definition stereoscopic vision. These advantages may make robotic surgery more accessible than VATS to trainees and experienced surgeons and also lead to expanded indications. However, the high costs of robotic surgery and absence of tactile feedback remain obstacles to widespread dissemination. A prospective multicentric randomized trial (NCT02804893) to compare robotic and VATS approaches to stages I and II lung cancer will start shortly.

  19. Roles and Self-Reconfigurable Robots

    DEFF Research Database (Denmark)

    Dvinge, Nicolai; Schultz, Ulrik Pagh; Christensen, David Johan

    2007-01-01

    A self-reconfigurable robot is a robotic device that can change its own shape. Self-reconfigurable robots are commonly built from multiple identical modules that can manipulate each other to change the shape of the robot. The robot can also perform tasks such as locomotion without changing shape. Programming a modular, self-reconfigurable robot is however a complicated task: the robot is essentially a real-time, distributed embedded system, where control and communication paths often are tightly coupled to the current physical configuration of the robot. To facilitate this task, we present a language, significantly simplifying the task of programming self-reconfigurable robots. Our language fully supports programming the ATRON self-reconfigurable robot, and has been used to implement several controllers running both on the physical modules and in simulation.

  20. Robot Teachers

    DEFF Research Database (Denmark)

    Nørgård, Rikke Toft; Ess, Charles Melvin; Bhroin, Niamh Ni

    The world's first robot teacher, Saya, was introduced to a classroom in Japan in 2009. Saya had the appearance of a young female teacher. She could express six basic emotions, take the register and shout orders like 'be quiet' (The Guardian, 2009). Since 2009, humanoid robot technologies have developed. It is now suggested that robot teachers may become regular features in educational settings, and may even 'take over' from human teachers in ten to fifteen years (cf. Amundsen, 2017 online; Gohd, 2017 online). Designed to look and act like a particular kind of human, robot teachers mediate human existence and roles, while also aiming to support education through sophisticated, automated, human-like interaction. Our paper explores the design and existential implications of ARTIE, a robot teacher at Oxford Brookes University (2017, online). Drawing on an initial empirical exploration we propose …

  1. Accelerating Robot Development through Integral Analysis of Human-Robot Interaction

    NARCIS (Netherlands)

    Kooijmans, T.; Kanda, T.; Bartneck, C.; Ishiguro, H.; Hagita, N.

    2007-01-01

    The development of interactive robots is a complicated process, involving a plethora of psychological, technical, and contextual influences. To design a robot capable of operating "intelligently" in everyday situations, one needs a profound understanding of human-robot interaction (HRI). We propose

  2. Micro Robotics Lab

    Data.gov (United States)

    Federal Laboratory Consortium — Our research is focused on the challenges of engineering robotic systems down to sub-millimeter size scales. We work both on small mobile robots (robotic insects for...

  3. Intelligent Surveillance Robot with Obstacle Avoidance Capabilities Using Neural Network

    Directory of Open Access Journals (Sweden)

    Widodo Budiharto

    2015-01-01

    Full Text Available For specific purposes, a vision-based surveillance robot that can run autonomously and acquire images from its dynamic environment is very important, for example, in rescuing disaster victims in Indonesia. In this paper, we propose an architecture for an intelligent surveillance robot that is able to avoid obstacles using three ultrasonic distance sensors, based on a backpropagation neural network, and a camera for face recognition. A 2.4 GHz transmitter streams video so that the operator/user can direct the robot to the desired area. Results show the effectiveness of our method, and we evaluate the performance of the system.
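
    The obstacle-avoidance scheme described in this record, three ultrasonic distance readings fed to a backpropagation neural network, can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the layer sizes, the 0.3 distance threshold, and the synthetic training data are all assumptions.

```python
import numpy as np

# Sketch only: a small backpropagation network mapping three ultrasonic
# distance readings (normalised to [0, 1]) to an "avoid" decision.
# Layer sizes, the 0.3 threshold and the synthetic data are assumptions
# for illustration, not the authors' values.
rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Synthetic training set: label 1 (avoid) when any sensor reads closer
# than 0.3 (normalised), else 0 (path clear).
X = rng.uniform(0.0, 1.0, size=(200, 3))
y = (X.min(axis=1) < 0.3).astype(float).reshape(-1, 1)

W1 = rng.normal(0.0, 0.5, (3, 4)); b1 = np.zeros(4)   # hidden layer
W2 = rng.normal(0.0, 0.5, (4, 1)); b2 = np.zeros(1)   # output layer

lr = 1.0
for _ in range(2000):                     # plain batch backpropagation
    h = sigmoid(X @ W1 + b1)              # forward pass
    out = sigmoid(h @ W2 + b2)
    d_out = out - y                       # logistic-loss output delta
    d_h = (d_out @ W2.T) * h * (1.0 - h)  # hidden-layer delta
    W2 -= lr * h.T @ d_out / len(X); b2 -= lr * d_out.mean(axis=0)
    W1 -= lr * X.T @ d_h / len(X);   b1 -= lr * d_h.mean(axis=0)

def avoid(readings):
    """True if the network judges an obstacle to be too close."""
    h = sigmoid(np.asarray(readings) @ W1 + b1)
    return bool(sigmoid(h @ W2 + b2) > 0.5)

print(avoid([0.9, 0.8, 0.9]))   # all sensors report a clear path
print(avoid([0.1, 0.8, 0.9]))   # one sensor reports a close obstacle
```

    In a real robot, the trained network would be queried in the control loop at each sensor update, with the steering command derived from the decision.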

  4. Robots in pipe and vessel inspection: past, present, and future

    International Nuclear Information System (INIS)

    Mueller, T.A.; Tyndall, J.F.

    1984-01-01

    Over the past several decades, remotely operated scanners have been employed to inspect piping and pressure vessels. These devices in their early forms were manually controlled manipulators functioning as mere extensions of the operator. With the addition of limit sensing, speed control, and positional feedback and display, the early manipulators became primitive robots. By adding computer controls with their degree of intelligence to the devices, they achieved the status of robots. Future applications of vision, adaptive control, proximity sensing, and pattern recognition will bring these devices to a level of intelligence that will make automated robotic inspection of pipes and pressure vessels a true reality

  5. Robot fish bio-inspired fishlike underwater robots

    CERN Document Server

    Li, Zheng; Youcef-Toumi, Kamal; Alvarado, Pablo

    2015-01-01

    This book provides a comprehensive coverage on robot fish including design, modeling and optimization, control, autonomous control and applications. It gathers contributions by the leading researchers in the area. Readers will find the book very useful for designing and building robot fish, not only in theory but also in practice. Moreover, the book discusses various important issues for future research and development, including design methodology, control methodology, and autonomous control strategy. This book is intended for researchers and graduate students in the fields of robotics, ocean engineering and related areas.

  6. Robotics in endoscopy.

    Science.gov (United States)

    Klibansky, David; Rothstein, Richard I

    2012-09-01

    The increasing complexity of intralumenal and emerging translumenal endoscopic procedures has created an opportunity to apply robotics in endoscopy. Computer-assisted or direct-drive robotic technology allows the triangulation of flexible tools through telemanipulation. The creation of new flexible operative platforms, along with other emerging technology such as nanobots and steerable capsules, can be transformational for endoscopic procedures. In this review, we cover some background information on the use of robotics in surgery and endoscopy, and review the emerging literature on platforms, capsules, and mini-robotic units. The development of techniques in advanced intralumenal endoscopy (endoscopic mucosal resection and endoscopic submucosal dissection) and translumenal endoscopic procedures (NOTES) has generated a number of novel platforms, flexible tools, and devices that can apply robotic principles to endoscopy. The development of a fully flexible endoscopic surgical toolkit will enable increasingly advanced procedures to be performed through natural orifices. The application of platforms and new flexible tools to the areas of advanced endoscopy and NOTES heralds the opportunity to employ useful robotic technology. Following the examples of the utility of robotics from the field of laparoscopic surgery, we can anticipate the emerging role of robotic technology in endoscopy.

  7. Does transition from the da Vinci Si to Xi robotic platform impact single-docking technique for robot-assisted laparoscopic nephroureterectomy?

    Science.gov (United States)

    Patel, Manish N; Aboumohamed, Ahmed; Hemal, Ashok

    2015-12-01

    utilisation of the retargeting feature of the da Vinci Xi when working on the bladder cuff or in the pelvis. The vision of the camera used for da Vinci Xi was initially felt to be inferior to that of the da Vinci Si; however, with a subsequent software upgrade this was much improved. The base of the da Vinci Xi is bigger, which does not slide and occasionally requires a change in table placement/operating room setup, and requires side-docking especially when dealing with very tall and obese patients for pelvic surgery. RNU alone or with LND-BCE is a challenging surgical procedure that addresses the upper and lower urinary tract simultaneously. Single docking and single robotic port placement for RNU-LND-BCE has evolved with the development of different generations of the robotic system. These procedures can be performed safely and effectively using the da Vinci S, Si or Xi robotic platform. The new da Vinci Xi robotic platform is more user-friendly, has easy installation, and is intuitive for surgeons using its features. © 2015 The Authors BJU International © 2015 BJU International Published by John Wiley & Sons Ltd.

  8. Implementation and Reconfiguration of Robot Operating System on Human Follower Transporter Robot

    Directory of Open Access Journals (Sweden)

    Addythia Saphala

    2015-10-01

    Full Text Available The Robot Operating System (ROS) is an important platform for developing robot applications. One area of application is the development of a Human Follower Transporter Robot (HFTR), which can be considered a custom mobile robot utilizing the differential drive steering method and equipped with a Kinect sensor. This study discusses the development of the robot navigation system by implementing Simultaneous Localization and Mapping (SLAM).
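
    As background to the differential drive steering method such a robot uses, here is a minimal kinematics sketch. The wheel radius and track width are assumed example values, not taken from the paper.

```python
import math

# Illustrative sketch (not the paper's code): forward kinematics of a
# differential-drive robot. Wheel radius and track width are assumed
# example values.
WHEEL_RADIUS = 0.05   # m
TRACK_WIDTH = 0.30    # m, distance between the two drive wheels

def body_twist(w_left, w_right):
    """Wheel angular velocities (rad/s) -> (v, omega) of the robot body."""
    v_l = WHEEL_RADIUS * w_left
    v_r = WHEEL_RADIUS * w_right
    v = (v_r + v_l) / 2.0              # forward speed, m/s
    omega = (v_r - v_l) / TRACK_WIDTH  # yaw rate, rad/s
    return v, omega

def integrate_pose(x, y, theta, v, omega, dt):
    """Dead-reckon one time step, as an odometry/SLAM front end would."""
    x += v * math.cos(theta) * dt
    y += v * math.sin(theta) * dt
    theta += omega * dt
    return x, y, theta

v, w = body_twist(10.0, 10.0)          # equal wheel speeds: straight line
print(v, w)                            # 0.5 m/s, 0.0 rad/s
x, y, th = integrate_pose(0.0, 0.0, 0.0, v, w, 2.0)
print(x, y)                            # 1.0 m forward, no lateral drift
```

    A SLAM layer would fuse this dead-reckoned pose with Kinect depth scans to correct the accumulated odometry drift.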

  9. Interaction Challenges in Human-Robot Space Exploration

    Science.gov (United States)

    Fong, Terrence; Nourbakhsh, Illah

    2005-01-01

    In January 2004, NASA established a new, long-term exploration program to fulfill the President's Vision for U.S. Space Exploration. The primary goal of this program is to establish a sustained human presence in space, beginning with robotic missions to the Moon in 2008, followed by extended human expeditions to the Moon as early as 2015. In addition, the program places significant emphasis on the development of joint human-robot systems. A key difference from previous exploration efforts is that future space exploration activities must be sustainable over the long-term. Experience with the space station has shown that cost pressures will keep astronaut teams small. Consequently, care must be taken to extend the effectiveness of these astronauts well beyond their individual human capacity. Thus, in order to reduce human workload, costs, and fatigue-driven error and risk, intelligent robots will have to be an integral part of mission design.

  10. Embedded vision equipment of industrial robot for inline detection of product errors by clustering–classification algorithms

    Directory of Open Access Journals (Sweden)

    Kamil Zidek

    2016-10-01

    Full Text Available The article deals with the design of embedded vision equipment for industrial robots for inline diagnosis of product errors during the manipulation process. The vision equipment can be attached to the end effector of robots or manipulators; it provides an image snapshot of the part surface before grasping, searches for errors during manipulation, and separates products with errors from the next manufacturing operation. The new approach is a methodology based on machine learning for the automated identification, localization, and diagnosis of systematic errors in products of high-volume production. To achieve this, we used two main data-mining approaches: clustering for accumulating similar errors and classification methods for predicting the class of any new error. The presented methodology consists of three separate processing levels: image acquisition for fail parameterization, data clustering for categorizing errors into separate classes, and new-pattern prediction with a proposed class model. We chose main representatives of clustering algorithms, for example, K-means from vector quantization, the fast library for approximate nearest neighbors (FLANN) from hierarchical clustering, and density-based spatial clustering of applications with noise (DBSCAN) from algorithms based on the density of the data. For machine learning, we selected six major classification algorithms: support vector machines, normal Bayesian classifier, K-nearest neighbors, gradient boosted trees, random trees, and neural networks. The selected algorithms were compared for speed and reliability and tested on two platforms: a desktop-based computer system and an embedded system based on System on Chip (SoC) with vision equipment.
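
    A hedged sketch of the two-stage idea in this record: K-means groups known error feature vectors into classes, and any new error is assigned to the nearest cluster. The 2-D feature vectors and the error-category names are made-up stand-ins for the real image-derived error parameters.

```python
import numpy as np

# Sketch of clustering-then-classification for product errors.
# K-means accumulates similar errors; a nearest-centroid rule then
# predicts the class of a new error sample.
rng = np.random.default_rng(1)

def kmeans(X, k, iters=50):
    """Plain K-means: returns final centroids and per-sample labels."""
    centroids = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - centroids) ** 2).sum(-1), axis=1)
        for j in range(k):                 # recompute non-empty centroids
            if np.any(labels == j):
                centroids[j] = X[labels == j].mean(axis=0)
    return centroids, labels

def classify(x, centroids):
    """Nearest-centroid prediction for a new error sample."""
    return int(np.argmin(((centroids - np.asarray(x)) ** 2).sum(-1)))

# Two synthetic, well-separated error categories, e.g. "scratch"-like
# vs "dent"-like parameter vectors (hypothetical features).
scratches = rng.normal([0.0, 0.0], 0.1, (50, 2))
dents = rng.normal([5.0, 5.0], 0.1, (50, 2))
X = np.vstack([scratches, dents])

centroids, _ = kmeans(X, k=2)
same = classify([0.1, -0.1], centroids) == classify([-0.1, 0.0], centroids)
diff = classify([0.1, -0.1], centroids) != classify([5.1, 4.9], centroids)
print(same, diff)   # expect True True: nearby errors share a class
```

    The paper's FLANN, DBSCAN and the six supervised classifiers would slot into the same pipeline in place of these two minimal stages.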

  11. Presentation robot Advee

    Czech Academy of Sciences Publication Activity Database

    Krejsa, Jiří; Věchet, Stanislav; Hrbáček, J.; Ripel, T.; Ondroušek, V.; Hrbáček, R.; Schreiber, P.

    2012-01-01

    Roč. 18, 5/6 (2012), s. 307-322 ISSN 1802-1484 Institutional research plan: CEZ:AV0Z20760514 Keywords: mobile robot * human-robot interface * localization Subject RIV: JD - Computer Applications, Robotics

  12. Multi-sensor integration for autonomous robots in nuclear power plants

    International Nuclear Information System (INIS)

    Mann, R.C.; Jones, J.P.; Beckerman, M.; Glover, C.W.; Farkas, L.; Bilbro, G.L.; Snyder, W.

    1989-01-01

    As part of a concerted R&D program in advanced robotics for hazardous environments, scientists and engineers at the Oak Ridge National Laboratory (ORNL) are performing research in the areas of systems integration, range-sensor-based 3-D world modeling, and multi-sensor integration. This program features a unique teaming arrangement that involves the universities of Florida, Michigan, Tennessee, and Texas; Odetics Corporation; and ORNL. This paper summarizes work directed at integrating information extracted from data collected with range sensors and CCD cameras on-board a mobile robot, in order to produce reliable descriptions of the robot's environment. Specifically, the paper describes the integration of two-dimensional vision and sonar range information, and an approach to integrate registered luminance and laser range images. All operations are carried out on-board the mobile robot using a 16-processor hypercube computer. 14 refs., 4 figs.

  13. Robotic surgery update.

    Science.gov (United States)

    Jacobsen, G; Elli, F; Horgan, S

    2004-08-01

    Minimally invasive surgical techniques have revolutionized the field of surgery. Telesurgical manipulators (robots) and new information technologies strive to improve upon currently available minimally invasive techniques and create new possibilities. A retrospective review of all robotic cases at a single academic medical center from August 2000 until November 2002 was conducted. A comprehensive literature evaluation on robotic surgical technology was also performed. Robotic technology is safely and effectively being applied at our institution. Robotic and information technologies have improved upon minimally invasive surgical techniques and created new opportunities not attainable in open surgery. Robotic technology offers many benefits over traditional minimal access techniques and has been proven safe and effective. Further research is needed to better define the optimal application of this technology. Credentialing and educational requirements also need to be delineated.

  14. Robot-laser system

    International Nuclear Information System (INIS)

    Akeel, H.A.

    1987-01-01

    A robot-laser system is described for providing a laser beam at a desired location, the system comprising: a laser beam source; a robot including a plurality of movable parts including a hollow robot arm having a central axis along which the laser source directs the laser beam; at least one mirror for reflecting the laser beam from the source to the desired location, the mirror being mounted within the robot arm to move therewith and relative thereto about a transverse axis that extends angularly to the central axis of the robot arm; and an automatic programmable control system for automatically moving the mirror about the transverse axis relative to and in synchronization with movement of the robot arm, to thereby direct the laser beam to the desired location as the arm is moved.

  15. Effects of Robot Facial Characteristics and Gender in Persuasive Human-Robot Interaction

    Directory of Open Access Journals (Sweden)

    Aimi S. Ghazali

    2018-06-01

    Full Text Available The growing interest in social robotics makes it relevant to examine the potential of robots as persuasive agents and, more specifically, to examine how robot characteristics influence the way people experience such interactions and comply with the persuasive attempts by robots. The purpose of this research is to identify how the (ostensible) gender and the facial characteristics of a robot influence the extent to which people trust it and the psychological reactance they experience from its persuasive attempts. This paper reports a laboratory study where SociBot™, a robot capable of displaying different faces and dynamic social cues, delivered persuasive messages to participants while playing a game. In-game choice behavior was logged, and trust and reactance toward the advisor were measured using questionnaires. Results show that a robotic advisor with upturned eyebrows and lips (features that people tend to trust more in humans) is more persuasive, evokes more trust, and elicits less psychological reactance than one displaying eyebrows pointing down and lips curled downwards at the edges (facial characteristics typically not trusted in humans). Gender of the robot did not affect trust, but participants experienced higher psychological reactance when interacting with a robot of the opposite gender. Remarkably, mediation analysis showed that liking of the robot fully mediates the influence of facial characteristics on trusting beliefs and psychological reactance. Also, psychological reactance was a strong and reliable predictor of trusting beliefs but not of trusting behavior. These results suggest robots that are intended to influence human behavior should be designed to have facial characteristics we trust in humans and could be personalized to have the same gender as the user. Furthermore, personalization and adaptation techniques designed to make people like the robot more may help ensure they will also trust the robot.

  16. Robotic seeding

    DEFF Research Database (Denmark)

    Pedersen, Søren Marcus; Fountas, Spyros; Sørensen, Claus Aage Grøn

    2017-01-01

    Agricultural robotics has received attention for approximately 20 years, but today there are only a few examples of the application of robots in agricultural practice. The lack of uptake may be (at least partly) because in many cases there is either no compelling economic benefit, or there is a benefit but it is not recognized. The aim of this chapter is to quantify the economic benefits from the application of agricultural robots under a specific condition where such a benefit is assumed to exist, namely the case of early seeding and re-seeding in sugar beet. With some predefined assumptions with regard to speed, capacity and seed mapping, we found that among these two technical systems, both early seeding with a small robot and re-seeding using a robot for a smaller part of the field appear to be financially viable solutions in sugar beet production.

  17. An Adaptive Robot Game

    DEFF Research Database (Denmark)

    Hansen, Søren Tranberg; Svenstrup, Mikael; Dalgaard, Lars

    2010-01-01

    The goal of this paper is to describe an adaptive robot game, which motivates elderly people to do a regular amount of physical exercise while playing. One of the advantages of robot-based games is that the initiative to play can be taken autonomously by the robot. In this case, the goal is to improve the mental and physical state of the user by playing a physical game with the robot. Ideally, a robot game should be simple to learn but difficult to master, providing an appropriate degree of challenge for players with different skills. In order to achieve that, the robot should be able to adapt …

  18. Odométrie visuelle en milieu naturel pour les robots mobiles

    OpenAIRE

    Duperal, B.

    2013-01-01

    The objective of this internship is to study the performance of artificial vision in enabling a mobile robot to localize itself, using either a camera or a stereovision system, in a natural environment (trees, crops, agricultural fields, buildings, ...), following losses of the GPS signal. Robot localization by artificial vision requires matching operations on invariant points (landmark detection, particular zones on ...

  19. Evolution of robotic arms.

    Science.gov (United States)

    Moran, Michael E

    2007-01-01

    The foundation of surgical robotics is in the development of the robotic arm. This is a thorough review of the literature on the nature and development of this device with emphasis on surgical applications. We have reviewed the published literature and classified robotic arms by their application: show, industrial application, medical application, etc. There is a definite trend in the manufacture of robotic arms toward more dextrous devices, more degrees-of-freedom, and capabilities beyond the human arm. da Vinci designed the first sophisticated robotic arm in 1495 with four degrees-of-freedom and an analog on-board controller supplying power and programmability. von Kempelen's chess-playing automaton left arm was quite sophisticated. Unimate introduced the first industrial robotic arm in 1961; it has subsequently evolved into the PUMA arm. In 1963 the Rancho arm was designed; Minsky's Tentacle arm appeared in 1968, Scheinman's Stanford arm in 1969, and MIT's Silver arm in 1974. Aird became the first cyborg human with a robotic arm in 1993. In 2000 Miguel Nicolelis redefined possible man-machine capacity in his work on cerebral implantation in owl monkeys directly interfacing with robotic arms both locally and at a distance. The robotic arm is the end-effector of robotic systems and currently is the hallmark feature of the da Vinci Surgical System making its entrance into surgical application. But, despite the potential advantages of this computer-controlled master-slave system, robotic arms have definite limitations. Ongoing work in robotics has many potential solutions to the drawbacks of current robotic surgical systems.

  20. Advanced mechanics in robotic systems

    CERN Document Server

    Nava Rodríguez, Nestor Eduardo

    2011-01-01

    Illustrates original and ambitious mechanical designs and techniques for the development of new robot prototypes Includes numerous figures, tables and flow charts Discusses relevant applications in robotics fields such as humanoid robots, robotic hands, mobile robots, parallel manipulators and human-centred robots

  1. Design and Implementation of Fire Extinguisher Robot with Robotic Arm

    Directory of Open Access Journals (Sweden)

    Memon Abdul Waris

    2018-01-01

    Full Text Available A robot is a device that performs human tasks or behaves like a human being. It requires expert skills and complex programming to design. For designing a fire fighter robot, many sensors and motors were used. The user first sends the robot to an affected area to get a live image of the field, transmitted to a laptop over Wi-Fi from a mobile camera using an IP camera application. If any signs of fire are shown in the image, the user directs the robot in that particular direction for confirmation. A fire sensor and a temperature sensor detect and measure the readings; after confirmation, the robot sprinkles water on the affected field. During the extinguishing process, if any obstacle comes between the prototype and the affected area, the ultrasonic sensor detects it; in response, the robotic arm moves to pick and place that obstacle in another location to clear the path. Meanwhile, if any poisonous gas is present, the gas sensor detects it and raises an alarm.

  2. The Tactile Ethics of Soft Robotics: Designing Wisely for Human-Robot Interaction.

    Science.gov (United States)

    Arnold, Thomas; Scheutz, Matthias

    2017-06-01

    Soft robots promise an exciting design trajectory in the field of robotics and human-robot interaction (HRI), promising more adaptive, resilient movement within environments as well as a safer, more sensitive interface for the objects or agents the robot encounters. In particular, tactile HRI is a critical dimension for designers to consider, especially given the onrush of assistive and companion robots into our society. In this article, we propose to surface an important set of ethical challenges for the field of soft robotics to meet. Tactile HRI strongly suggests that soft-bodied robots balance tactile engagement against emotional manipulation, model intimacy on the bonding with a tool not with a person, and deflect users from personally and socially destructive behavior the soft bodies and surfaces could normally entice.

  3. Robot-assisted general surgery.

    Science.gov (United States)

    Hazey, Jeffrey W; Melvin, W Scott

    2004-06-01

    With the initiation of laparoscopic techniques in general surgery, we have seen a significant expansion of minimally invasive techniques in the last 16 years. More recently, robotic-assisted laparoscopy has moved into the general surgeon's armamentarium to address some of the shortcomings of laparoscopic surgery. AESOP (Computer Motion, Goleta, CA) addressed the issue of visualization as a robotic camera holder. With the introduction of the ZEUS robotic surgical system (Computer Motion), the ability to remotely operate laparoscopic instruments became a reality. US Food and Drug Administration approval in July 2000 of the da Vinci robotic surgical system (Intuitive Surgical, Sunnyvale, CA) further defined the ability of a robotic-assist device to address limitations in laparoscopy. This includes a significant improvement in instrument dexterity, dampening of natural hand tremors, three-dimensional visualization, ergonomics, and camera stability. As experience with robotic technology increased and its applications to advanced laparoscopic procedures have become more understood, more procedures have been performed with robotic assistance. Numerous studies have shown equivalent or improved patient outcomes when robotic-assist devices are used. Initially, robotic-assisted laparoscopic cholecystectomy was deemed safe, and now robotics has been shown to be safe in foregut procedures, including Nissen fundoplication, Heller myotomy, gastric banding procedures, and Roux-en-Y gastric bypass. These techniques have been extrapolated to solid-organ procedures (splenectomy, adrenalectomy, and pancreatic surgery) as well as robotic-assisted laparoscopic colectomy. In this chapter, we review the evolution of robotic technology and its applications in general surgical procedures.

  4. Designing Emotionally Expressive Robots

    DEFF Research Database (Denmark)

    Tsiourti, Christiana; Weiss, Astrid; Wac, Katarzyna

    2017-01-01

    Socially assistive agents, be it virtual avatars or robots, need to engage in social interactions with humans and express their internal emotional states, goals, and desires. In this work, we conducted a comparative study to investigate how humans perceive emotional cues expressed by humanoid robots through five communication modalities (face, head, body, voice, locomotion) and examined whether the degree of a robot's human-like embodiment affects this perception. In an online survey, we asked people to identify emotions communicated by Pepper, a highly human-like robot, and Hobbit, a robot … for robots.

  5. Automating the Incremental Evolution of Controllers for Physical Robots.

    Science.gov (United States)

    Faíña, Andrés; Jacobsen, Lars Toft; Risi, Sebastian

    2017-01-01

    Evolutionary robotics is challenged with some key problems that must be solved, or at least mitigated extensively, before it can fulfill some of its promises to deliver highly autonomous and adaptive robots. The reality gap and the ability to transfer phenotypes from simulation to reality constitute one such problem. Another lies in the embodiment of the evolutionary processes, which links to the first, but focuses on how evolution can act on real agents and occur independently from simulation, that is, going from being, as Eiben, Kernbach, & Haasdijk [2012, p. 261] put it, "the evolution of things, rather than just the evolution of digital objects.…" The work presented here investigates how fully autonomous evolution of robot controllers can be realized in hardware, using an industrial robot and a marker-based computer vision system. In particular, this article presents an approach to automate the reconfiguration of the test environment and shows that it is possible, for the first time, to incrementally evolve a neural robot controller for different obstacle avoidance tasks with no human intervention. Importantly, the system offers a high level of robustness and precision that could potentially open up the range of problems amenable to embodied evolution.
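
    The fully autonomous evolution of controllers described in this record can be sketched as a (1+1) evolution strategy. The fitness function below is a hypothetical stand-in for a hardware trial scored by a marker-based vision system, and all names and constants are illustrative assumptions, not the article's setup.

```python
import random

# Minimal (1+1) evolution strategy sketch: mutate a controller's weight
# vector and keep the child only if its (here simulated) fitness does
# not decrease. TARGET and the Gaussian mutation scale are illustrative.
random.seed(0)
TARGET = [0.2, -0.7, 0.5]   # pretend "ideal" controller weights

def fitness(w):
    """Higher is better; a real system would score a physical trial."""
    return -sum((a - b) ** 2 for a, b in zip(w, TARGET))

def evolve(generations=500, sigma=0.1):
    parent = [random.uniform(-1, 1) for _ in range(3)]
    best = fitness(parent)
    for _ in range(generations):
        child = [w + random.gauss(0, sigma) for w in parent]
        f = fitness(child)
        if f >= best:           # survivor selection: keep improvements
            parent, best = f and child or child, f
        if f >= best:
            parent, best = child, f
    return parent, best

weights, score = evolve()
print(round(score, 3))   # approaches 0, the optimum
```

    In the embodied setting of the article, each `fitness` call would run the candidate controller on the physical robot and score the resulting obstacle-avoidance trajectory, with the environment reconfigured automatically between trials.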

  6. Vision guided robot bin picking of cylindrical objects

    DEFF Research Database (Denmark)

    Christensen, Georg Kronborg; Dyhr-Nielsen, Carsten

    1997-01-01

    In order to achieve increased flexibility on robotic production lines, an investigation of the robot bin-picking problem is presented. In the paper, the limitations of previous attempts to solve the problem are pointed out and a set of innovative methods is presented. The main elements...

  7. Intelligent robot trends and predictions for the first year of the new millennium

    Science.gov (United States)

    Hall, Ernest L.

    2000-10-01

    An intelligent robot is a remarkably useful combination of a manipulator, sensors and controls. The current use of these machines in outer space, medicine, hazardous materials, defense applications and industry is being pursued with vigor. In factory automation, industrial robots can improve productivity, increase product quality and improve competitiveness. The computer and the robot have both been developed during recent times. The intelligent robot combines both technologies and requires a thorough understanding and knowledge of mechatronics. Today's robotic machines are faster, cheaper, more repeatable, more reliable and safer than ever. The knowledge base of inverse kinematic and dynamic solutions and intelligent controls is increasing. More attention is being given by industry to robots, vision and motion controls. New areas of usage are emerging for service robots, remote manipulators and automated guided vehicles. Economically, the robotics industry now has more than a billion-dollar market in the U.S. and is growing. Feasibility studies show decreasing costs for robots and unaudited healthy rates of return for a variety of robotic applications. However, the road from inspiration to successful application can be long and difficult, often taking decades to achieve a new product. A greater emphasis on mechatronics is needed in our universities. Certainly, more cooperation between government, industry and universities is needed to speed the development of intelligent robots that will benefit industry and society. The fearful robot stories may help us prevent future disaster. The inspirational robot ideas may inspire the scientists of tomorrow. However, the intelligent robot ideas, which can be reduced to practice, will change the world.

  8. Fundamentals of soft robot locomotion.

    Science.gov (United States)

    Calisti, M; Picardi, G; Laschi, C

    2017-05-01

    Soft robotics and its related technologies enable robot abilities in several robotics domains including, but not exclusively related to, manipulation, manufacturing, human-robot interaction and locomotion. Although field applications have emerged for soft manipulation and human-robot interaction, mobile soft robots appear to remain in the research stage, involving the somehow conflictual goals of having a deformable body and exerting forces on the environment to achieve locomotion. This paper aims to provide a reference guide for researchers approaching mobile soft robotics, to describe the underlying principles of soft robot locomotion with its pros and cons, and to envisage applications and further developments for mobile soft robotics. © 2017 The Author(s).

  9. Advancing the Strategic Messages Affecting Robot Trust Effect: The Dynamic of User- and Robot-Generated Content on Human-Robot Trust and Interaction Outcomes.

    Science.gov (United States)

    Liang, Yuhua Jake; Lee, Seungcheol Austin

    2016-09-01

    Human-robot interaction (HRI) will soon transform and shift the communication landscape such that people exchange messages with robots. However, successful HRI requires people to trust robots, and, in turn, the trust affects the interaction. Although prior research has examined the determinants of human-robot trust (HRT) during HRI, no research has examined the messages that people received before interacting with robots and their effect on HRT. We conceptualize these messages as SMART (Strategic Messages Affecting Robot Trust). Moreover, we posit that SMART can ultimately affect actual HRI outcomes (i.e., robot evaluations, robot credibility, participant mood) by affording the persuasive influences from user-generated content (UGC) on participatory Web sites. In Study 1, participants were assigned to one of two conditions (UGC/control) in an original experiment of HRT. Compared with the control (descriptive information only), results showed that UGC moderated the correlation between HRT and interaction outcomes in a positive direction (average Δr = +0.39) for robots as media and robots as tools. In Study 2, we explored the effect of robot-generated content but did not find similar moderation effects. These findings point to an important empirical potential to employ SMART in future robot deployment.

  10. Next-generation robotic surgery--from the aspect of surgical robots developed by industry.

    Science.gov (United States)

    Nakadate, Ryu; Arata, Jumpei; Hashizume, Makoto

    2015-02-01

    At present, much of the research conducted worldwide focuses on extending the abilities of surgical robots. One approach is to extend robotic dexterity. For instance, the accessibility and dexterity of surgical instruments remains the largest issue for reduced-port surgery such as single-port surgery or natural orifice surgery, and a great deal of robotics research is currently directed at solving this problem. Enhancing the surgeon's perception is another approach, one that uses advanced sensor technology. The real-time data acquired through the robotic system, combined with the data stored in the robot (such as the robot's location), provide a major advantage. This paper introduces state-of-the-art and pre-market products embodying these technological advancements, namely the robotic challenge of extending dexterity, in the hope of charting the path to robotic surgery in the near future.

  11. The Development of Radiation hardened tele-robot system - Development of artificial force reflection control for teleoperated mobile robots

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Ju Jang; Hong, Sun Gi; Kang, Young Hoon; Kim, Min Soeng [Korea Advanced Institute of Science and Technology, Taejon (Korea)

    1999-04-01

    One of the most important issues in teleoperation is to provide a sense of telepresence so that the task can be conducted more reliably. In particular, teleoperated mobile robots need some kind of backup system for when the operator is blind to the remote situation owing to a failure of the vision system. In the first year, the idea of artificial force reflection was researched to enhance the reliability of operation when the mobile robot travels on level ground. In the second year, we extended the previous results to help the teleoperator even when the robot climbs stairs. Finally, we applied the developed control algorithms in real experiments. The artificial force reflection method has two modes: traveling on level ground and climbing stairs. When traveling on level ground, the force information is artificially generated from range data on the environment, while an impulse force is generated when climbing stairs. To verify the validity of our algorithm, we developed a simulator consisting of a joystick and a visual display system. Through experiments using this system, we confirmed the validity and effectiveness of our new idea of artificial force reflection in the teleoperated mobile robot. 11 refs., 30 figs. (Author)

  12. Inverse kinematic solution for near-simple robots and its application to robot calibration

    Science.gov (United States)

    Hayati, Samad A.; Roston, Gerald P.

    1986-01-01

    This paper provides an inverse kinematic solution for a class of robot manipulators called near-simple manipulators. The kinematics of these manipulators differ from those of simple robots by small parameter variations. Although most robots are simple by design, in practice, due to manufacturing tolerances, every robot is near-simple. The method in this paper gives an approximate inverse kinematics solution for real-time applications, based on the nominal solution for these robots. The validity of the results is tested both by a simulation study and by applying the algorithm to a PUMA robot.
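
    The "nominal solution plus small-variation correction" idea described above can be sketched numerically: solve the closed-form IK of the simple (nominal) geometry, then apply a few Newton corrections against the perturbed (actual) kinematics. The planar 2R arm and the perturbation values below are illustrative assumptions, not the paper's actual manipulator or method details.

```python
import math

def fk(t1, t2, l1, l2):
    """Forward kinematics of a planar 2R arm."""
    return (l1*math.cos(t1) + l2*math.cos(t1 + t2),
            l1*math.sin(t1) + l2*math.sin(t1 + t2))

def ik_nominal(x, y, l1, l2):
    """Closed-form (elbow-down) IK for the nominal, 'simple' geometry."""
    c2 = (x*x + y*y - l1*l1 - l2*l2) / (2*l1*l2)
    t2 = math.acos(max(-1.0, min(1.0, c2)))
    t1 = math.atan2(y, x) - math.atan2(l2*math.sin(t2), l1 + l2*math.cos(t2))
    return t1, t2

def ik_near_simple(x, y, nominal, actual, iters=3, eps=1e-6):
    """Start from the nominal closed-form solution, then Newton-correct
    using the actual (slightly perturbed) kinematic parameters."""
    t1, t2 = ik_nominal(x, y, *nominal)
    for _ in range(iters):
        px, py = fk(t1, t2, *actual)
        ex, ey = x - px, y - py
        # numeric Jacobian of the actual forward kinematics
        j11 = (fk(t1 + eps, t2, *actual)[0] - px) / eps
        j21 = (fk(t1 + eps, t2, *actual)[1] - py) / eps
        j12 = (fk(t1, t2 + eps, *actual)[0] - px) / eps
        j22 = (fk(t1, t2 + eps, *actual)[1] - py) / eps
        det = j11*j22 - j12*j21
        t1 += ( j22*ex - j12*ey) / det   # solve J * delta = error
        t2 += (-j21*ex + j11*ey) / det
    return t1, t2
```

    Because the actual parameters differ from the nominal ones only slightly, the nominal solution is already close to the true answer and a couple of Newton steps suffice, which is what makes this style of approximation viable in real time.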

  13. Fish and robots swimming together: attraction towards the robot demands biomimetic locomotion.

    Science.gov (United States)

    Marras, Stefano; Porfiri, Maurizio

    2012-08-07

    The integration of biomimetic robots in a fish school may enable a better understanding of collective behaviour, offering a new experimental method to test group feedback in response to behavioural modulations of its 'engineered' member. Here, we analyse a robotic fish and individual golden shiners (Notemigonus crysoleucas) swimming together in a water tunnel at different flow velocities. We determine the positional preference of fish with respect to the robot, and we study the flow structure using a digital particle image velocimetry system. We find that biomimetic locomotion is a determinant of fish preference as fish are more attracted towards the robot when its tail is beating rather than when it is statically immersed in the water as a 'dummy'. At specific conditions, the fish hold station behind the robot, which may be due to the hydrodynamic advantage obtained by swimming in the robot's wake. This work makes a compelling case for the need of biomimetic locomotion in promoting robot-animal interactions and it strengthens the hypothesis that biomimetic robots can be used to study and modulate collective animal behaviour.

  14. Emergent risk to workplace safety as a result of the use of robots in the work place

    NARCIS (Netherlands)

    Steijn, W.; Luiijf, E.; Beek, D. van der

    2016-01-01

    For decades now, robots have been a key part of future visions in films and books. As long ago as 1920, Karel Čapek wrote a play called RUR (Rossum’s Universal Robots). The first real robot, ‘Gargantuan’, was constructed between 1935 and 1937. It was made completely out of Meccano. Today’s

  15. Robots in P.W.R. nuclear powerplants

    International Nuclear Information System (INIS)

    Dubourg, M.

    1987-01-01

    The satisfactory operation of 37 900-MWe PWR powerplants in France, Belgium and South Africa and the start-up of 1300-MWe powerplants allowed the development of a wide range of automatic units and robots for the periodic maintenance of nuclear plants, reducing the risk of ionizing radiation for the personnel. A large number of automated tools have been built, among them: inspection and maintenance systems for the tube bundle of steam generators; the robotized arms ROTETA and ROMEO for heavy maintenance and delicate operations such as tube extraction or shot peening of tubes to improve their resistance to corrosion; and the versatile manipulator T.A.M. with electrically controlled articulations. The development of functionally versatile tools and robots and the integration of new technologies such as 3-D vision allowed the construction of the self-guided vehicle FRASTAR, capable of moving within a nuclear building and in a cluttered environment. This vehicle includes means for avoiding isolated obstacles and can move on stairs.

  16. Micro robot bible

    International Nuclear Information System (INIS)

    Yoon, Jin Yeong

    2000-08-01

    This book deals with micro robots. It covers an overview and definition of robots, including entertainment robots; an introduction to the micro mouse, with its history, composition and rules; an overview of the microcontroller, with its history, appearance and composition; an introduction to stepping motors, covering their types, structure, basic characteristics and driving methods; a summary of the sensor section and power supply; an understanding of the 80C196KC microcontroller; and basic driving programs for maze-searching algorithms, smooth turns and line tracing.

  17. Micro robot bible

    Energy Technology Data Exchange (ETDEWEB)

    Yoon, Jin Yeong

    2000-08-15

    This book deals with micro robots. It covers an overview and definition of robots, including entertainment robots; an introduction to the micro mouse, with its history, composition and rules; an overview of the microcontroller, with its history, appearance and composition; an introduction to stepping motors, covering their types, structure, basic characteristics and driving methods; a summary of the sensor section and power supply; an understanding of the 80C196KC microcontroller; and basic driving programs for maze-searching algorithms, smooth turns and line tracing.

  18. Robots at Work

    OpenAIRE

    Graetz, Georg; Michaels, Guy

    2015-01-01

    Despite ubiquitous discussions of robots' potential impact, there is almost no systematic empirical evidence on their economic effects. In this paper we analyze for the first time the economic impact of industrial robots, using new data on a panel of industries in 17 countries from 1993-2007. We find that industrial robots increased both labor productivity and value added. Our panel identification is robust to numerous controls, and we find similar results instrumenting increased robot use wi...

  19. Cloud-Enhanced Robotic System for Smart City Crowd Control

    Directory of Open Access Journals (Sweden)

    Akhlaqur Rahman

    2016-12-01

    Full Text Available Cloud robotics in smart cities is an emerging paradigm that enables autonomous robotic agents to communicate and collaborate with a cloud computing infrastructure. It complements the Internet of Things (IoT) by creating an expanded network where robots offload data-intensive computation to the ubiquitous cloud to ensure quality of service (QoS). However, offloading for robots is significantly complex due to their unique characteristics of mobility, skill-learning, data collection, and decision-making capabilities. In this paper, a generic cloud robotics framework is proposed to realize the smart city vision while taking into consideration its various complexities. Specifically, we present an integrated framework for a crowd control system where cloud-enhanced robots are deployed to perform necessary tasks. The task offloading is formulated as a constrained optimization problem capable of handling any task flow that can be characterized by a Directed Acyclic Graph (DAG). We consider two scenarios of minimizing energy and time, respectively, and develop a genetic algorithm (GA)-based approach to identify the optimal task offloading decisions. The performance comparison with two benchmarks shows that our GA scheme achieves the desired energy and time performance. We also show the adaptability of our algorithm by varying the values for bandwidth and movement; the results show how these parameters affect offloading decisions. Finally, we present a multi-task flow optimal path sequence problem that highlights how the robot can plan its task completion via movements that expend the minimum energy. This integrates path planning with offloading for robotics. To the best of our knowledge, this is the first attempt to evaluate cloud-based task offloading for a smart city crowd control system.
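
    As a toy version of the GA-based offloading decision, the sketch below evolves a binary offload/keep-local vector for a small, invented task DAG, minimizing robot-side energy (execution energy plus a transmission cost for every DAG edge that crosses the robot/cloud boundary). All task-graph values and GA parameters here are made up for illustration; the paper's actual model also covers time minimization, bandwidth, and movement.

```python
import random

# Invented task DAG: tasks 0..5, edges (parent, child), energy values.
TASKS = 6
EDGES = [(0, 1), (0, 2), (1, 3), (2, 3), (3, 4), (4, 5)]
LOCAL_E = [5, 9, 7, 12, 6, 4]   # robot-side energy if executed locally
CLOUD_E = [1, 1, 1, 1, 1, 1]    # robot-side (radio) energy if offloaded
TX_E = 3                        # cost per edge crossing robot <-> cloud

def energy(assign):
    """Robot energy of one offloading decision (assign[i] == 1: offload)."""
    e = sum(CLOUD_E[i] if assign[i] else LOCAL_E[i] for i in range(TASKS))
    return e + sum(TX_E for a, b in EDGES if assign[a] != assign[b])

def ga(pop_size=30, gens=80, seed=1):
    """Elitist genetic algorithm over binary offloading vectors."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(TASKS)] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=energy)
        elite = pop[:pop_size // 2]          # keep the best half
        children = []
        while len(elite) + len(children) < pop_size:
            p1, p2 = rng.sample(elite, 2)
            cut = rng.randrange(1, TASKS)    # one-point crossover
            child = p1[:cut] + p2[cut:]
            child = [b ^ (rng.random() < 0.1) for b in child]  # bit-flip mutation
            children.append(child)
        pop = elite + children
    best = min(pop, key=energy)
    return best, energy(best)
```

    For this toy instance, offloading everything is optimal (total energy 6): cloud execution costs little per task, and when every task is offloaded no edge crosses the robot/cloud boundary.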

  20. To Err Is Robot: How Humans Assess and Act toward an Erroneous Social Robot

    Directory of Open Access Journals (Sweden)

    Nicole Mirnig

    2017-05-01

    Full Text Available We conducted a user study for which we purposefully programmed faulty behavior into a robot's routine. Our aim was to explore whether participants rate the faulty robot differently from an error-free robot and which reactions people show in interaction with a faulty robot. The study was based on our previous research on robot errors, in which we identified typical error situations and the resulting social signals of our participants during social human–robot interaction. In contrast to our previous work, where we studied video material in which robot errors occurred unintentionally, in the user study reported here we purposefully elicited robot errors to further explore the social signals of human interaction partners following a robot error. Our participants interacted with a human-like NAO, and the robot performed either faultily or free from error. First, the robot asked the participants a set of predefined questions and then asked them to complete a couple of LEGO building tasks. After the interaction, we asked the participants to rate the robot's anthropomorphism, likability, and perceived intelligence, and we interviewed them on their opinion about the interaction. Additionally, we video-coded the social signals the participants showed during their interaction with the robot as well as the answers they provided the robot with. Our results show that participants liked the faulty robot significantly better than the robot that interacted flawlessly. We did not find significant differences in people's ratings of the robot's anthropomorphism and perceived intelligence. The qualitative data confirmed the questionnaire results in showing that although the participants recognized the robot's mistakes, they did not necessarily reject the erroneous robot. The annotations of the video data further showed that gaze shifts (e.g., from an object to the robot or vice versa) and laughter are typical reactions to unexpected robot behavior.

  1. ROILA : RObot Interaction LAnguage

    NARCIS (Netherlands)

    Mubin, O.

    2011-01-01

    The number of robots in our society is increasing rapidly. The number of service robots that interact with everyday people already outnumbers industrial robots. The easiest way to communicate with these service robots, such as Roomba or Nao, would be natural speech. However, the limitations

  2. Analysis and optimization on in-vessel inspection robotic system for EAST

    Energy Technology Data Exchange (ETDEWEB)

    Zhang, Weijun, E-mail: zhangweijun@sjtu.edu.cn; Zhou, Zeyu; Yuan, Jianjun; Du, Liang; Mao, Ziming

    2015-12-15

    Since China successfully built her first Experimental Advanced Superconducting TOKAMAK (EAST) several years ago, interest and demand have been growing for robotic in-vessel inspection/operation systems that would make possible the observation of in-vessel physical phenomena, collection of visual information, 3D mapping and localization, and even maintenance. However, implementing a practical and robust robotic system raises many challenges, due to a number of complex constraints and expectations, e.g., the high remanent working temperature (100 °C) and vacuum (10⁻³ Pa) environment even in the rest intervals between plasma discharge experiments, close-up and precise inspection, and operation efficiency, in addition to the general kinematic requirements of the irregular D-shaped vessel. In this paper we propose an upgraded robotic system consisting of a redundant-degree-of-freedom (DOF) manipulator combined with a binocular vision system at the tip and a virtual reality system. A comprehensive comparison and discussion are given on the necessity and main functions of the binocular vision system, path planning for inspection, fast localization, inspection efficiency and success rate in time, optimization of the kinematic configuration, and the possibility of an underactuated mechanism. A detailed design, implementation, and experiments of the binocular vision system, together with the recent development progress of the whole robotic system, are reported in the later part of the paper, while future work and expectations are described at the end.

  3. Evidence for robots.

    Science.gov (United States)

    Shenoy, Ravikiran; Nathwani, Dinesh

    2017-01-01

    Robots have been successfully used in commercial industry and have enabled humans to perform tasks which are repetitive, dangerous and require extreme force. Their role has evolved and now includes many aspects of surgery to improve safety and precision. Orthopaedic surgery is largely performed on bones, which are rigid, immobile structures, so such procedures can be performed by robots with great precision. Robots have been designed for use in orthopaedic surgery, including joint arthroplasty and spine surgery, and experimental studies have been published evaluating the role of robots in arthroscopy and trauma surgery. In this article, we review the incorporation of robots in orthopaedic surgery, looking into the evidence for their use. © The Authors, published by EDP Sciences, 2017.

  4. Message Encryption in Robot Operating System: Collateral Effects of Hardening Mobile Robots

    Directory of Open Access Journals (Sweden)

    Francisco J. Rodríguez-Lera

    2018-03-01

    Full Text Available In human–robot interaction situations, robot sensors collect huge amounts of data from the environment in order to characterize the situation. Some of the gathered data ought to be treated as private, such as medical data (i.e., medication guidelines) and personal and safety information (i.e., images of children, home habits, alarm codes, etc.). However, most robotic software development frameworks are not designed for securely managing this information. This paper analyzes the scenario of hardening one of the most widely used robotic middlewares, Robot Operating System (ROS). The study investigates a robot's performance when ciphering the messages interchanged between ROS nodes under the publish/subscribe paradigm. In particular, this research focuses on the nodes that manage cameras and LIDAR sensors, which are two of the most widespread sensing solutions in mobile robotics, and analyzes the collateral effects of different computing capabilities and encryption algorithms (3DES, AES, and Blowfish) on robot performance. The findings present empirical evidence that simple encryption algorithms are lightweight enough to provide cyber-security even in low-powered robots when carefully designed and implemented. Nevertheless, these techniques come with a number of serious drawbacks regarding robot autonomy and performance if they are applied randomly. To avoid these issues, we define a taxonomy that links the type of ROS message, computational units, and the encryption methods. As a result, we present a model to select the optimal options for hardening a mobile robot using ROS.
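
    The publish-side/subscribe-side hardening pattern can be illustrated with a small authenticated-encryption wrapper. To stay dependency-free, this sketch uses a SHA-256 counter-mode keystream plus an HMAC tag as a stand-in cipher; a real deployment would use one of the algorithms the paper benchmarks (3DES, AES, Blowfish) via a proper crypto library. Function names and message framing are assumptions, not ROS APIs.

```python
import hashlib, hmac, os

def _keystream(key, nonce, n):
    """SHA-256 in counter mode -- an illustrative stand-in stream cipher."""
    out, ctr = b"", 0
    while len(out) < n:
        out += hashlib.sha256(key + nonce + ctr.to_bytes(8, "big")).digest()
        ctr += 1
    return out[:n]

def seal(key, payload):
    """What a hardened publisher node would do before putting a message
    (e.g., a camera frame or LIDAR scan) on a topic."""
    nonce = os.urandom(12)
    ks = _keystream(key, nonce, len(payload))
    ct = bytes(a ^ b for a, b in zip(payload, ks))
    tag = hmac.new(key, nonce + ct, hashlib.sha256).digest()
    return nonce + ct + tag                 # nonce | ciphertext | MAC

def unseal(key, blob):
    """Subscriber side: verify integrity, then decrypt. Raises on tampering."""
    nonce, ct, tag = blob[:12], blob[12:-32], blob[-32:]
    expect = hmac.new(key, nonce + ct, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expect):
        raise ValueError("message authentication failed")
    return bytes(a ^ b for a, b in zip(ct, _keystream(key, nonce, len(ct))))
```

    The per-message keystream generation and MAC computation are exactly the kind of per-topic overhead the paper measures against robot performance.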

  5. 1st Iberian Robotics Conference

    CERN Document Server

    Sanfeliu, Alberto; Ferre, Manuel; ROBOT2013; Advances in robotics

    2014-01-01

    This book contains the proceedings of ROBOT 2013: FIRST IBERIAN ROBOTICS CONFERENCE, and it can be said that it included both state-of-the-art and more practical presentations dealing with implementation problems, support technologies and future applications. A growing interest in Assistive Robotics, Agricultural Robotics, Field Robotics, Grasping and Dexterous Manipulation, Humanoid Robots, Intelligent Systems and Robotics, and Marine Robotics has been demonstrated by the very relevant number of contributions. Moreover, ROBOT2013 incorporates a special session on Legal and Ethical Aspects in Robotics, a topic of growing relevance. The Conference was held in Madrid (28-29 November 2013), organised by the Sociedad Española para la Investigación y Desarrollo en Robótica (SEIDROB) and by the Centre for Automation and Robotics - CAR (Universidad Politécnica de Madrid (UPM) and Consejo Superior de Investigaciones Científicas (CSIC)), along with the co-operation of Grupo Temático de Robótica CEA-GT...

  6. Springer handbook of robotics

    CERN Document Server

    Khatib, Oussama

    2016-01-01

    The second edition of this handbook provides state-of-the-art coverage of the various aspects of the rapidly developing field of robotics. Reaching for the human frontier, robotics is vigorously engaged in the growing challenges of new emerging domains. Interacting, exploring, and working with humans, the new generation of robots will increasingly touch people and their lives. The credible prospect of practical robots among humans is the result of the scientific endeavour of half a century of robotic developments that established robotics as a modern scientific discipline. The ongoing vibrant expansion and strong growth of the field during the last decade have fueled this second edition of the Springer Handbook of Robotics. The first edition of the handbook soon became a landmark in robotics publishing and won the American Association of Publishers PROSE Award for Excellence in Physical Sciences & Mathematics as well as the organization's Award for Engineering & Technology. The second edition o...

  7. Innovations in robotic surgery.

    Science.gov (United States)

    Gettman, Matthew; Rivera, Marcelino

    2016-05-01

    Developments in robotic surgery have continued to advance care throughout the field of urology. The purpose of this review is to evaluate innovations in robotic surgery over the past 18 months. The release of the da Vinci Xi system heralded an improvement on the Si system with improved docking, the ability to further manipulate robotic arms without clashing, and an autofocus universal endoscope. Robotic simulation continues to evolve with improvements in simulation training design to include augmented reality in robotic surgical education. Robotic-assisted laparoendoscopic single-site surgery continues to evolve with improvements on technique that allow for tackling previously complex pathologic surgical anatomy including urologic oncology and reconstruction. Last, innovations of new surgical platforms with robotic systems to improve surgeon ergonomics and efficiency in ureteral and renal surgery are being applied in the clinical setting. Urologic surgery continues to be at the forefront of the revolution of robotic surgery with advancements in not only existing technology but also creation of entirely novel surgical systems.

  8. Faster-than-real-time robot simulation for plan development and robot safety

    International Nuclear Information System (INIS)

    Crane, C.D. III; Dalton, R.; Ogles, J.; Tulenko, J.S.; Zhou, X.

    1990-01-01

    The University of Florida, in cooperation with the Universities of Texas, Tennessee, and Michigan and Oak Ridge National Laboratory (ORNL), is developing an advanced robotic system for the US Department of Energy under the University Program for Robotics for Advanced Reactors. As part of this program, the University of Florida has been pursuing the development of a faster-than-real-time robotic simulation program for planning and control of mobile robotic operations to ensure the efficient and safe operation of mobile robots in nuclear power plants and other hazardous environments

  9. Soft-Material Robotics

    OpenAIRE

    Wang, L; Nurzaman, SG; Iida, Fumiya

    2017-01-01

    There has been a surge of research activity in robotics using soft materials over the past ten years. It is expected that the use and control of soft materials can help realize robotic systems that are safer, cheaper, and more adaptable than conventional rigid-material robots can achieve. In contrast to a number of existing review and position papers on soft-material robotics, which mostly present case studies and/or discuss trends and challenges, the review focuses on the fun...

  10. AssistMe robot, an assistance robotic platform

    Directory of Open Access Journals (Sweden)

    A. I. Alexan

    2012-06-01

    Full Text Available This paper presents the design and implementation of a full-size assistance robot. Its main purpose is to assist a person and potentially help avert a life-threatening situation. Its implementation revolves around a chipKIT Arduino board that interconnects a robotic base controller with a 7-inch tablet PC and various sensors. Due to the Android and Arduino combination, the robot can interact with the person and provides an easy development platform for future improvements and feature additions. The tablet PC is webcam, WiFi and Bluetooth enabled, offering a versatile platform that is able to process data and at the same time provide the user with a friendly interface.

  11. Developing sensor-based robots with utility to waste management applications

    International Nuclear Information System (INIS)

    Trivedi, M.M.; Abidi, M.A.; Gonzalez, R.C.

    1990-01-01

    There are several Environmental Restoration and Waste Management (ER and WM) application areas where autonomous or teleoperated robotic systems can be utilized to improve personnel safety and reduce operating costs. In this paper the authors describe continuing research undertaken by their group in the intelligent robotics area that has direct relevance to a number of ER and WM applications. The authors' current research is sponsored by the advanced technology division of the U.S. Department of Energy. It is part of a program undertaken at four universities (Florida, Michigan, Tennessee, and Texas) and the Oak Ridge National Laboratory directed towards the development of advanced robotic systems for use in nuclear environments. The primary motivation for using robotic (autonomous and/or teleoperated) technology in such hazardous environments is to reduce the exposure and costs associated with performing tasks such as surveillance, maintenance and repair. The main focus of the authors' research at the University of Tennessee has been to contribute to the development of autonomous inspection and manipulation systems which utilize a wide array of sensory inputs in controlling the actions of a stationary robot. The authors' experimental research effort is directed towards the design and evaluation of new methodologies using a laboratory-based robotic testbed. A unique feature of this testbed is a multisensor module useful in the characterization of the robot workspace. In this paper, the authors describe the development of a robot vision system for automatic spill detection, localization and clean-up verification, and the development of efficient techniques for analyzing range images using a parallel computer. The 'simulated spill cleanup' scenario allows us to show the applicability of robotic systems to problems encountered in nuclear environments.
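
    The spill-detection component can be caricatured as intensity thresholding plus centroid localization over a camera image. The pure-Python sketch below is an assumption about the simplest possible pipeline, not the authors' actual vision system.

```python
def detect_spill(image, threshold=0.5):
    """image: 2D list of pixel intensities in [0, 1].
    Returns the (row, col) centroid of above-threshold pixels,
    or None when no spill-like region is present."""
    hits = [(r, c)
            for r, row in enumerate(image)
            for c, v in enumerate(row)
            if v > threshold]
    if not hits:
        return None                      # nothing detected
    n = len(hits)
    return (sum(r for r, _ in hits) / n, sum(c for _, c in hits) / n)
```

    Running the same detector after a clean-up pass and getting None back is the "clean-up verification" step in miniature.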

  12. An Address Event Representation-Based Processing System for a Biped Robot

    Directory of Open Access Journals (Sweden)

    Uziel Jaramillo-Avila

    2016-02-01

    Full Text Available In recent years, several important advances have been made in the fields of both biologically inspired sensorial processing and locomotion systems, such as Address Event Representation-based cameras (or Dynamic Vision Sensors) and human-like robot locomotion, e.g., the walking of a biped robot. However, making these fields merge properly is not an easy task. In this regard, Neuromorphic Engineering is a fast-growing research field, the main goal of which is the biologically inspired design of hybrid hardware systems in order to mimic neural architectures and to process information in the manner of the brain. However, few robotic applications exist to illustrate them. The main goal of this work is to demonstrate, by creating a closed-loop system using only bio-inspired techniques, how such applications can work properly. We present an algorithm using Spiking Neural Networks (SNN) for a biped robot equipped with a Dynamic Vision Sensor, which is designed to follow a line drawn on the floor. This is a commonly used method for demonstrating control techniques; most of them are fairly simple to implement without very sophisticated components, yet it can still serve as a good test in more elaborate circumstances. In addition, the locomotion system proposed is able to coordinately control the six DOFs of a biped robot in switching between basic forms of movement. The latter has been implemented as an FPGA-based neuromorphic system. Numerical tests and hardware validation are presented.
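
    The closed loop described above, DVS events in and gait command out, can be miniaturized with two leaky integrate-and-fire neurons, one per image half. The neuron parameters, gain, and command names below are invented for illustration; the paper's SNN and six-DOF gait controller are far richer.

```python
class LIFNeuron:
    """Minimal leaky integrate-and-fire unit."""
    def __init__(self, leak=0.9, threshold=1.0):
        self.v, self.leak, self.threshold = 0.0, leak, threshold

    def step(self, current):
        self.v = self.leak * self.v + current
        if self.v >= self.threshold:
            self.v = 0.0                 # reset membrane after firing
            return 1
        return 0

def steer(events_left, events_right, gain=0.05):
    """Feed per-timestep DVS event counts from the left/right image halves
    into two LIF neurons; the spike-count difference selects the gait."""
    left, right = LIFNeuron(), LIFNeuron()
    spikes_l = sum(left.step(gain * e) for e in events_left)
    spikes_r = sum(right.step(gain * e) for e in events_right)
    if spikes_l > spikes_r:
        return "turn_left"
    if spikes_r > spikes_l:
        return "turn_right"
    return "straight"
```

    A line drifting into the left half of the sensor produces more left-half events, hence more left spikes and a left-turn gait, closing the loop entirely with spiking dynamics.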

  13. Socially Impaired Robots: Human Social Disorders and Robots' Socio-Emotional Intelligence

    OpenAIRE

    Vitale, Jonathan; Williams, Mary-Anne; Johnston, Benjamin

    2016-01-01

    Social robots need intelligence in order to safely coexist and interact with humans. Robots without functional abilities in understanding others and unable to empathise might be a societal risk and they may lead to a society of socially impaired robots. In this work we provide a survey of three relevant human social disorders, namely autism, psychopathy and schizophrenia, as a means to gain a better understanding of social robots' future capability requirements. We provide evidence supporting...

  14. Educational Robotics as Mindtools

    Science.gov (United States)

    Mikropoulos, Tassos A.; Bellou, Ioanna

    2013-01-01

    Although there are many studies on the constructionist use of educational robotics, they have certain limitations. Some of them refer to robotics education, rather than educational robotics. Others follow a constructionist approach, but give emphasis only to design skills, creativity and collaboration. Some studies use robotics as an educational…

  15. Embedded Visual System and its Applications on Robots

    CERN Document Server

    Xu, De

    2010-01-01

    Embedded vision systems such as smart cameras have developed rapidly in recent years. Vision systems have become smaller and lighter, while their performance has improved. The algorithms in embedded vision systems are constrained by CPU frequency, memory size, and architecture. The goal of this e-book is to provide an advanced reference work for engineers, researchers and scholars in the fields of robotics, machine vision, and automation, and to facilitate the exchange of their ideas, experiences and views on embedded vision system models. The effectiveness for all methods is

  16. iPathology: Robotic Applications and Management of Plants and Plant Diseases

    OpenAIRE

    Yiannis Ampatzidis; Luigi De Bellis; Andrea Luvisi

    2017-01-01

    The rapid development of new technologies and the changing landscape of the online world (e.g., Internet of Things (IoT), Internet of All, cloud-based solutions) provide a unique opportunity for developing automated and robotic systems for urban farming, agriculture, and forestry. Technological advances in machine vision, global positioning systems, laser technologies, actuators, and mechatronics have enabled the development and implementation of robotic systems and intelligent technologies f...

  17. Robot friendship: Can a robot be a friend?

    DEFF Research Database (Denmark)

    Emmeche, Claus

    2014-01-01

    Friendship is used here as a conceptual vehicle for framing questions about the distinctiveness of human cognition in relation to natural systems such as other animal species and to artificial systems such as robots. By exploring this very common form of a human interpersonal relationship …, the author indicates that even though it is difficult to say something generally true about friendship among humans, distinct forms of friendship as practiced and distinct notions of friendship have been investigated in the social and human sciences and in biology. A more general conceptualization … of friendship as a triadic relation analogous to the sign relation is suggested. Based on this the author asks how one may conceive of robot-robot and robot-human friendships; and how an interdisciplinary perspective upon that relation can contribute to analyse levels of embodied cognition in natural...

  18. Evaluating the effect of three-dimensional visualization on force application and performance time during robotics-assisted mitral valve repair.

    Science.gov (United States)

    Currie, Maria E; Trejos, Ana Luisa; Rayman, Reiza; Chu, Michael W A; Patel, Rajni; Peters, Terry; Kiaii, Bob B

    2013-01-01

    The purpose of this study was to determine the effect of three-dimensional (3D) binocular, stereoscopic, and two-dimensional (2D) monocular visualization on robotics-assisted mitral valve annuloplasty versus conventional techniques in an ex vivo animal model. In addition, we sought to determine whether these effects were consistent between novices and experts in robotics-assisted cardiac surgery. A cardiac surgery test-bed was constructed to measure forces applied during mitral valve annuloplasty. Sutures were passed through the porcine mitral valve annulus by the participants with different levels of experience in robotics-assisted surgery and tied in place using both robotics-assisted and conventional surgery techniques. The mean time for both the experts and the novices using 3D visualization was significantly less than that required using 2D vision (P robotic system with either 2D or 3D vision (P robotics-assisted mitral valve annuloplasty than during conventional open mitral valve annuloplasty. This finding suggests that 3D visualization does not fully compensate for the absence of haptic feedback in robotics-assisted cardiac surgery.

  19. Multi-robot caravanning

    KAUST Repository

    Denny, Jory

    2013-11-01

    We study multi-robot caravanning, which is loosely defined as the problem of a heterogeneous team of robots visiting specific areas of an environment (waypoints) as a group. After formally defining this problem, we propose a novel solution that requires minimal communication and scales with the number of waypoints and robots. Our approach restricts explicit communication and coordination to occur only when robots reach waypoints, and relies on implicit coordination when moving between a given pair of waypoints. At the heart of our algorithm is the use of leader election to efficiently exploit the unique environmental knowledge available to each robot in order to plan paths for the group, which makes it general enough to work with robots that have heterogeneous representations of the environment. We implement our approach both in simulation and on a physical platform, and characterize the performance of the approach under various scenarios. We demonstrate that our approach can successfully be used to combine the planning capabilities of different agents. © 2013 IEEE.
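
    The leader-election step described above can be sketched in a few lines. The record layout and the `map_coverage` scores below are illustrative assumptions, not the paper's actual data structures: when the group reaches a waypoint, the robot whose environmental knowledge best covers the next leg is elected to plan the group's path.

```python
# Hypothetical sketch of leader election for multi-robot caravanning:
# the robot with the best map coverage of the next waypoint plans for all.

def elect_leader(robots, next_waypoint):
    """Return the robot whose map best covers the next waypoint."""
    return max(robots, key=lambda r: r["map_coverage"].get(next_waypoint, 0.0))

robots = [
    {"id": "r1", "map_coverage": {"B": 0.2, "C": 0.9}},
    {"id": "r2", "map_coverage": {"B": 0.8, "C": 0.1}},
]

leader = elect_leader(robots, "B")   # r2 knows waypoint B best
```

    Between waypoints the robots would then follow the elected leader's plan with only implicit coordination, matching the communication-minimal scheme the abstract describes.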

  20. Self-Organizing Robots

    CERN Document Server

    Murata, Satoshi

    2012-01-01

    It is man’s ongoing hope that a machine could somehow adapt to its environment by reorganizing itself. This is what the notion of self-organizing robots is based on. The theme of this book is to examine the feasibility of creating such robots within the limitations of current mechanical engineering. The topics comprise the following aspects of such a pursuit: the philosophy of design of self-organizing mechanical systems; self-organization in biological systems; the history of self-organizing mechanical systems; a case study of a self-assembling/self-repairing system as an autonomous distributed system; a self-organizing robot that can create its own shape and robotic motion; implementation and instrumentation of self-organizing robots; and the future of self-organizing robots. All topics are illustrated with many up-to-date examples, including those from the authors’ own work. The book does not require advanced knowledge of mathematics to be understood, and will be of great benefit to students in the rob...

  1. Human-robot interaction tests on a novel robot for gait assistance.

    Science.gov (United States)

    Tagliamonte, Nevio Luigi; Sergi, Fabrizio; Carpino, Giorgio; Accoto, Dino; Guglielmelli, Eugenio

    2013-06-01

    This paper presents tests on a treadmill-based non-anthropomorphic wearable robot assisting hip and knee flexion/extension movements using compliant actuation. Validation experiments were performed on the actuators and on the robot, with specific focus on the evaluation of intrinsic backdrivability and of assistance capability. Tests on a young healthy subject were conducted. With the robot completely unpowered, maximum backdriving torques were found to be on the order of 10 Nm, owing to the robot's design features (reduced swinging masses; low intrinsic mechanical impedance and high-efficiency reduction gears for the actuators). Assistance tests demonstrated that the robot can deliver torques attracting the subject towards a predicted kinematic status.

  2. Pose Estimation and Adaptive Robot Behaviour for Human-Robot Interaction

    DEFF Research Database (Denmark)

    Svenstrup, Mikael; Hansen, Søren Tranberg; Andersen, Hans Jørgen

    2009-01-01

    Abstract—This paper introduces a new method to determine a person’s pose based on laser range measurements. Such estimates are typically a prerequisite for any human-aware robot navigation, which is the basis for effective and time-extended interaction between a mobile robot and a human. The robot......’s pose. The resulting pose estimates are used to identify humans who wish to be approached and interacted with. The interaction motion of the robot is based on adaptive potential functions centered around the person that respect the person's social spaces. The method is tested in experiments...
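
    A person-centered potential function of the kind mentioned above can be made concrete with a small sketch. The exact shape and constants here are assumptions for illustration, not the paper's model: the cost is lowest at a preferred interaction distance and in front of the person, so descending the potential approaches the person without violating their social space.

```python
import math

# Illustrative person-centered potential: minimal at a preferred distance
# d_pref directly in front of the person, higher when too close or behind.

def approach_potential(robot_xy, person_xy, person_theta, d_pref=1.2):
    dx = robot_xy[0] - person_xy[0]
    dy = robot_xy[1] - person_xy[1]
    d = math.hypot(dx, dy)
    # cos of the angle between the person's heading and the direction to
    # the robot: +1 directly in front of the person, -1 directly behind
    frontality = math.cos(math.atan2(dy, dx) - person_theta)
    return (d - d_pref) ** 2 + 0.5 * (1.0 - frontality)

front = approach_potential((1.2, 0.0), (0.0, 0.0), 0.0)   # at d_pref, in front
behind = approach_potential((-1.2, 0.0), (0.0, 0.0), 0.0)
```

    A navigation controller would follow the negative gradient of this field, which naturally produces the frontal, socially respectful approach behavior the abstract describes.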

  3. From sex robots to love robots: is mutual love with a robot possible?

    NARCIS (Netherlands)

    Nyholm, S.R.; Frank, L.E.; Danaher, J.; McArthur, N.

    2017-01-01

    Some critics of sex-robots worry that their use might spread objectifying attitudes about sex, and common sense places a higher value on sex within love-relationships than on casual sex. If there could be mutual love between humans and sex-robots, this could help to ease the worries about

  4. The relation between people's attitudes and anxiety towards robots in human-robot interaction

    NARCIS (Netherlands)

    de Graaf, M.M.A.; Ben Allouch, Soumaya

    2013-01-01

    This paper examines the relation between an interaction with a robot and people's attitudes and emotions towards robots. In our study, participants had an acquaintance talk with a social robot, and both their general attitude and anxiety towards social robots were measured before and after the

  5. Combining psychological and engineering approaches to utilizing social robots with children with autism.

    Science.gov (United States)

    Dickstein-Fischer, Laurie; Fischer, Gregory S

    2014-01-01

    It is estimated that Autism Spectrum Disorder (ASD) affects 1 in 68 children. Early identification of an ASD is exceedingly important to the introduction of an intervention. We are developing a robot-assisted approach that will serve as an improved diagnostic and early intervention tool for children with autism. The robot, named PABI® (Penguin for Autism Behavioral Interventions), is a compact humanoid robot with an expressive cartoon-like embodiment. The robot is affordable, durable, and portable so that it can be used in various settings including schools, clinics, and the home, thus enabling significantly enhanced and more readily available diagnosis and continuation of care. Through facial expressions, body motion, verbal cues, stereo vision-based tracking, and a tablet computer, the robot is capable of interacting meaningfully with an autistic child. Initial implementations of the robot, as part of a comprehensive treatment model (CTM), include Applied Behavioral Analysis (ABA) therapy, where the child interacts with a tablet computer wirelessly interfaced with the robot. At the same time, the robot makes meaningful expressions and utterances and uses the stereo cameras in its eyes to track the child, maintain eye contact, and collect data such as affect and gaze direction for charting of progress. In this paper we present the clinical justification, anticipated usage with corresponding requirements, prototype development of the robotic system, and demonstration of a sample application for robot-assisted ABA therapy.

  6. [Usefullness of the Da Vinci robot in urologic surgery].

    Science.gov (United States)

    Iselin, C; Fateri, F; Caviezel, A; Schwartz, J; Hauser, J

    2007-12-05

    A telemanipulator for laparoscopic instruments is now available in the world of surgical robotics. This device has three distinct advantages over traditional laparoscopic surgery: it improves precision because of the many degrees of freedom of its instruments, and it offers 3-D vision as well as better ergonomics for the surgeon. These characteristics are most useful for procedures that require delicate suturing in a focused operative field which may be difficult to reach. The Da Vinci robot has found its place in two domains of laparoscopic urologic surgery: radical prostatectomy and ureteral surgery. The cost of the robot, as well as the price of its maintenance and instruments, is high. This increases healthcare costs in comparison with open surgery, though not dramatically, since patients spend less time in hospital and go back to work earlier.

  7. Application of ultrasonic sensor for measuring distances in robotics

    Science.gov (United States)

    Zhmud, V. A.; Kondratiev, N. O.; Kuznetsov, K. A.; Trubin, V. G.; Dimitrov, L. V.

    2018-05-01

    Ultrasonic sensors allow us to equip robots with a means of perceiving surrounding objects, an alternative to technical vision. Humanoid robots, like robots of other types, are first equipped with sensory systems similar to the senses of a human. However, this approach is not enough. All possible types and kinds of sensors should be used, including those that are similar to those of other animals and creatures (in particular, echolocation in dolphins and bats), as well as sensors that have no analogues in the wild. This paper discusses the main issues that arise when working with the HC-SR04 ultrasonic rangefinder based on the STM32VLDISCOVERY evaluation board. The characteristics of similar modules are given for comparison. A subroutine for working with the sensor is also given.
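
    The core arithmetic behind a sensor like the HC-SR04 is simple to sketch: the module reports an echo pulse whose width equals the sound's round-trip time, so distance is pulse width times the speed of sound, halved for the out-and-back path. The temperature correction below is the standard approximation for the speed of sound in air, not a formula from the paper.

```python
# Minimal HC-SR04-style timing arithmetic: echo pulse width -> distance.

def echo_to_distance_cm(echo_us, temp_c=20.0):
    """Convert an echo pulse width in microseconds to a distance in cm."""
    speed_m_per_s = 331.3 + 0.606 * temp_c   # speed of sound in air
    round_trip_m = echo_us * 1e-6 * speed_m_per_s
    return round_trip_m * 100.0 / 2.0        # halve: out-and-back path

d = echo_to_distance_cm(5800)   # a 5.8 ms echo at 20 °C is roughly 1 m
```

    On a microcontroller the pulse width would come from a timer capture on the echo pin; the conversion itself is the same.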

  8. Robotic hand project

    OpenAIRE

    Karaçizmeli, Cengiz; Çakır, Gökçe; Tükel, Dilek

    2014-01-01

    In this work, the mechatronic based robotic hand is controlled by the position data taken from the glove which has flex sensors mounted to capture finger bending of the human hand. The angular movement of human hand’s fingers are perceived and processed by a microcontroller, and the robotic hand is controlled by actuating servo motors. It has seen that robotic hand can simulate the movement of the human hand that put on the glove, during tests have done. This robotic hand can be used not only...
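
    The glove-to-hand mapping described above amounts to rescaling each flex sensor's ADC reading to a servo angle. A minimal sketch, with invented calibration endpoints (the real values depend on the sensor and voltage divider used):

```python
# Hypothetical flex-sensor-to-servo mapping: linear rescale plus clamping.
# adc_straight / adc_bent are made-up calibration readings for illustration.

def flex_to_servo_angle(adc, adc_straight=200, adc_bent=600,
                        angle_min=0.0, angle_max=180.0):
    t = (adc - adc_straight) / float(adc_bent - adc_straight)
    t = max(0.0, min(1.0, t))             # clamp outside the calibrated range
    return angle_min + t * (angle_max - angle_min)

angle = flex_to_servo_angle(400)   # halfway bent -> mid-range servo angle
```

    In the actual system a microcontroller would run this per finger and emit the result as a servo PWM command.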

  9. PEAR: Prototyping Expressive Animated Robots - A framework for social robot prototyping

    OpenAIRE

    Balit , Etienne; Vaufreydaz , Dominique; Reignier , Patrick

    2018-01-01

    Social robots are transitioning from lab experiments to commercial products, creating new needs for prototyping and design tools. In this paper, we present a framework to facilitate the prototyping of expressive animated robots. For this, we start by reviewing the design of existing social robots in order to define a set of their basic components. We then show how to extend existing 3D animation software to enable the animation of these components. By co...

  10. Developing a successful robotics program.

    Science.gov (United States)

    Luthringer, Tyler; Aleksic, Ilija; Caire, Arthur; Albala, David M

    2012-01-01

    Advancements in robotic surgical technology have revolutionized the standard of care for many surgical procedures. The purpose of this review is to evaluate the important considerations in developing a new robotics program at a given healthcare institution. Patients' interest in robotic-assisted surgery has grown and continues to grow because of improved outcomes and decreased periods of hospitalization. Resulting market forces have created a solid foundation for the implementation of robotic surgery into surgical practice. Given proper surgeon experience and an efficient system, robotic-assisted procedures have been cost comparable to open surgical alternatives. Surgeon training and experience are closely linked to the efficiency of a new robotics program. Formally trained robotic surgeons have better patient outcomes and shorter operative times. Training in robotics has shown no negative impact on patient outcomes or mentor learning curves. Individual economic factors of local healthcare settings must be evaluated when planning for a new robotics program. The high cost of the robotic surgical platform is best offset with a large surgical volume. A mature, experienced surgeon is integral to the success of a new robotics program.

  11. Design and Implementation an Autonomous Humanoid Robot Based on Fuzzy Rule-Based Motion Controller

    Directory of Open Access Journals (Sweden)

    Mohsen Taheri

    2010-04-01

    Full Text Available Research on humanoid robotics in the Mechatronics and Automation Laboratory, Electrical and Computer Engineering, Islamic Azad University, Khorasgan branch (Isfahan, Iran) was started at the beginning of this decade. Various research prototypes for humanoid robots have been designed and have gone through evolution over these years. This paper describes the hardware and software design of the kid-size humanoid robot systems of the PERSIA Team in 2009. The robot has 20 actuated degrees of freedom based on Hitec HSR898 servos. In this paper we have tried to focus on areas such as the mechanical structure, the image processing unit, the robot controller, robot AI, and behavior learning. In 2009, our developments for the kid-size humanoid robot included (1) the design and construction of our new humanoid robots and (2) the design and construction of a new hardware and software controller to be used in our robots. The project is described in two main parts: hardware and software. The software is a robot application which consists of a walking controller, autonomous robot motion, self-localization based on vision and a particle filter, local AI, trajectory planning, a motion controller, and networking. The hardware consists of the mechanical structure and the driver circuit board. Each robot is able to walk, fast-walk, pass, kick, and dribble when it catches the ball. These humanoids have successfully participated in various robotic soccer competitions. This project is still in progress, and some new interesting methods are described in the current report.

  12. Communication of Robot Status to Improve Human-Robot Collaboration

    Data.gov (United States)

    National Aeronautics and Space Administration — Future space exploration will require humans and robots to collaborate to perform all the necessary tasks. Current robots mostly operate separately from humans due...

  13. Making Humanoid Robots More Acceptable Based on the Study of Robot Characters in Animation

    Directory of Open Access Journals (Sweden)

    Fatemeh Maleki

    2015-03-01

    Full Text Available In this paper we take an approach in which humanoid robots are considered not as robots that resemble human beings realistically in appearance and action, but as robots that act and react like humans, which makes them more believable to people. Following this approach, we study robot characters in animated movies and discuss what makes some of them be accepted as just a moving body and what makes other robot characters believable as living humans. The goal of this paper is to create a rule set that describes friendly, socially acceptable, kind, cute... robots, and in this study we review example robots in popular animated movies. The extracted rules and features can be used to make real robots more acceptable.

  14. Robotics in medicine

    Science.gov (United States)

    Kuznetsov, D. N.; Syryamkin, V. I.

    2015-11-01

    Modern technologies play a very important role in our lives. It is hard to imagine how people could get along without personal computers, or companies without powerful computer centers. Nowadays, many devices make modern medicine more effective. Medicine is developing constantly, so the introduction of robots in this sector is a very promising activity. Advances in technology have influenced medicine greatly. Robotic surgery is now actively developing worldwide. Scientists have been carrying out research and practical attempts to create robotic surgeons for more than 20 years, since the mid-80s of the last century. Robotic assistants play an important role in modern medicine. This industry is new enough and is at an early stage of development; despite this, some developments already have worldwide application; they function successfully and bring invaluable help to employees of medical institutions. Today, doctors can perform operations that seemed impossible a few years ago. Such progress in medicine is due to many factors. First, modern operating rooms are equipped with up-to-date equipment, allowing doctors to perform operations more accurately and with less risk to the patient. Second, technology has made it possible to improve the quality of doctors' training. Various types of robots exist now: assistants, military robots, space, household and, of course, medical robots. A detailed analysis of the existing types of robots and their applications should therefore be made. The purpose of the article is to illustrate the most popular types of robots used in medicine.

  15. A Vision Controlled Robot to Detect and Collect Fallen Hot Cobalt60 Capsules inside Wet Storage Pool of Cobalt60 Irradiators

    International Nuclear Information System (INIS)

    Solyman, A.E.M.

    2015-01-01

    In a typical irradiator, the use of radioactive cobalt-60 capsule sources is one of the peaceful uses of atomic energy, of strategic importance for the sterilization of medical products and for ridding food of bacteria and fungi before export. However, there are several well-known problems related to the fall of cobalt-60 capsules inside the wet storage pool as a result of manufacturing defects, defective welds, or a problem in the vertical movement of the radioactive source rack. It is therefore necessary to study this problem and solve it in a scientific way, so as to keep humans as far as possible from radiation exposure, according to the principles of radiation protection and safety issued by the International Atomic Energy Agency. The present work considers the possibility of using a vision-controlled arm robot to collect fallen hot cobalt-60 capsules inside the wet storage pool. A 5-DOF arm robot is designed, and vision algorithms are established to pick up a fallen capsule from the bottom surface of the storage pool, read the information printed on its edge (cap), and move it to a safe storage place. Two object detection approaches are studied: an RGB-based filter and a background subtraction technique. The vision algorithms and camera calibration are implemented using MATLAB/SIMULINK. The robot arm's forward and inverse kinematics are developed and programmed on an embedded microcontroller system. Experiments show the validity of the proposed system and prove its success. The collecting process is done without the interference of operators, so radiation safety is increased. The results confirmed the accuracy of the camera calibration equations. Vibrations were observed during the robot's movements, so the motor rotation speed was limited to 10 degrees per second to avoid them. This application keeps the operators as far as possible from radiation exposure, and so it increases radiation safety.
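
    The background-subtraction approach mentioned above is easy to sketch: an image of the empty pool serves as a reference, pixels that differ from it by more than a threshold are treated as the fallen capsule, and their centroid gives the capsule's image position. The threshold and the toy image below are illustrative assumptions (the paper's implementation is in MATLAB/SIMULINK); this is a Python rendering of the same idea.

```python
import numpy as np

# Background subtraction: flag pixels that differ from the empty-pool
# reference by more than `thresh`, then return their centroid.

def detect_object(frame, background, thresh=30):
    """Return the (x, y) centroid of changed pixels, or None if none."""
    diff = np.abs(frame.astype(np.int32) - background.astype(np.int32))
    mask = diff > thresh
    if not mask.any():
        return None
    ys, xs = np.nonzero(mask)
    return float(xs.mean()), float(ys.mean())

background = np.zeros((10, 10), dtype=np.uint8)   # empty pool floor
frame = background.copy()
frame[2:4, 5:7] = 200                             # bright "capsule" blob
centroid = detect_object(frame, background)
```

    The image centroid would then be mapped through the camera calibration into pool coordinates for the arm's inverse kinematics.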

  16. German robots: The impact of industrial robots on workers

    OpenAIRE

    Dauth, Wolfgang; Findeisen, Sebastian; Südekum, Jens; Wößner, Nicole

    2017-01-01

    We study the impact of rising robot exposure on the careers of individual manufacturing workers, and the equilibrium impact across industries and local labor markets in Germany. We find no evidence that robots cause total job losses, but they do affect the composition of aggregate employment. Every robot destroys two manufacturing jobs. This accounts for almost 23 percent of the overall decline of manufacturing employment in Germany over the period 1994-2014, roughly 275,000 jobs. But this lo...

  17. Robotic refueling machine

    International Nuclear Information System (INIS)

    Challberg, R.C.; Jones, C.R.

    1996-01-01

    One of the longest critical-path operations performed during an outage is removing and replacing the fuel. A design is currently under development for a refueling machine which would allow faster, fully automated operation and would also allow the handling of two fuel assemblies at the same time. This design is different from current designs (a) because of its lighter weight, making increased acceleration and speed possible, (b) because of its control system, which makes locating the fuel assembly more dependable and faster, and (c) because of its dual handling system, allowing simultaneous fuel movements. The new design uses two robotic arms to span a designated area of the vessel and the fuel storage area. Attached to the end of each robotic arm is a lightweight telescoping mast with a pendant at its end. The pendant acts as the base unit, allowing attachment of any number of end effectors depending on the servicing or inspection operation. Housed within the pendant are two television cameras used for the positioning control system. The control system is adapted from the robotics field using the technology known as machine vision, which provides both object and character recognition techniques to enable relative position control rather than absolute position control as in past designs. The pendant also contains thrusters that are used for fast, short-distance, precise positioning. The new refueling machine design is capable of a complete offload and reload of an 872-element core in about 5.3 days, compared with 13 days for a conventional system

  18. Robotic arm

    Science.gov (United States)

    Kwech, Horst

    1989-04-18

    A robotic arm positionable within a nuclear vessel by access through a small diameter opening and having a mounting tube supported within the vessel and mounting a plurality of arm sections for movement lengthwise of the mounting tube as well as for movement out of a window provided in the wall of the mounting tube. An end effector, such as a grinding head or welding element, at an operating end of the robotic arm, can be located and operated within the nuclear vessel through movement derived from six different axes of motion provided by mounting and drive connections between arm sections of the robotic arm. The movements are achieved by operation of remotely-controllable servo motors, all of which are mounted at a control end of the robotic arm to be outside the nuclear vessel.

  19. Toward cognitive robotics

    Science.gov (United States)

    Laird, John E.

    2009-05-01

    Our long-term goal is to develop autonomous robotic systems that have the cognitive abilities of humans, including communication, coordination, adapting to novel situations, and learning through experience. Our approach rests on the recent integration of the Soar cognitive architecture with both virtual and physical robotic systems. Soar has been used to develop a wide variety of knowledge-rich agents for complex virtual environments, including distributed training environments and interactive computer games. For development and testing in robotic virtual environments, Soar interfaces to a variety of robotic simulators and a simple mobile robot. We have recently made significant extensions to Soar that add new memories and new non-symbolic reasoning to Soar's original symbolic processing, which should significantly improve Soar's abilities for the control of robots. These extensions include episodic memory, semantic memory, reinforcement learning, and mental imagery. Episodic memory and semantic memory support the learning and recalling of prior events and situations as well as facts about the world. Reinforcement learning provides the ability of the system to tune its procedural knowledge - knowledge about how to do things. Mental imagery supports the use of diagrammatic and visual representations that are critical to support spatial reasoning. We speculate on the future of unmanned systems and the need for cognitive robotics to support dynamic instruction and taskability.

  20. Coordinated robotic system for civil structural health monitoring

    Directory of Open Access Journals (Sweden)

    Qidwai Uvais

    2017-01-01

    Full Text Available With the recent advances in sensors, robotics, unmanned aerial vehicles, communication, and information technologies, it is now feasible to move towards the vision of ubiquitous cities, where virtually everything throughout the city is linked to an information system through technologies such as wireless networking and radio-frequency identification (RFID) tags, to provide systematic and more efficient management of urban systems, including civil and mechanical infrastructure monitoring, and to achieve the goal of resilient and sustainable societies. In the proposed system, an unmanned aerial vehicle (UAV) is used to ascertain the coarse defect signature using panoramic imaging. This involves image stitching and registration so that a complete view of the surface is obtained with respect to a common reference or origin point. Thereafter, crack verification and localization are performed using the magnetic flux leakage (MFL) approach, carried out by a coordinated robotic system in which the first robot is placed at the top of the structure while the second robot is equipped with the designed MFL sensory system. With these initial findings, the proposed system identifies and localizes cracks in the given structure.

  1. Robot vision language RVL/V: An integration scheme of visual processing and manipulator control

    International Nuclear Information System (INIS)

    Matsushita, T.; Sato, T.; Hirai, S.

    1984-01-01

    RVL/V is a robot vision language designed to write a program for visual processing and manipulator control of a hand-eye system. This paper describes the design of RVL/V and the current implementation of the system. Visual processing is performed on one-dimensional range data of the object surface. Model-based instructions execute object detection, measurement and view control. The hierarchy of visual data and processing is introduced to give RVL/V generality. A new scheme to integrate visual information and manipulator control is proposed. The effectiveness of the model-based visual processing scheme based on profile data is demonstrated by a hand-eye experiment

  2. Tactile Robotic Topographical Mapping Without Force or Contact Sensors

    Science.gov (United States)

    Burke, Kevin; Melko, Joseph; Krajewski, Joel; Cady, Ian

    2008-01-01

    A method of topographical mapping of a local solid surface within the range of motion of a robot arm is based on detection of contact between the surface and the end effector (the fixture or tool at the tip of the robot arm). The method was conceived to enable mapping of local terrain by an exploratory robot on a remote planet, without need to incorporate delicate contact switches, force sensors, a vision system, or other additional, costly hardware. The method could also be used on Earth for determining the size and shape of an unknown surface in the vicinity of a robot, perhaps in an unanticipated situation in which other means of mapping (e.g., stereoscopic imaging or laser scanning with triangulation) are not available. The method uses control software modified to utilize the inherent capability of the robotic control system to measure the joint positions, the rates of change of the joint positions, and the electrical current demanded by the robotic arm joint actuators. The system utilizes these coordinate data and the known robot-arm kinematics to compute the position and velocity of the end effector, move the end effector along a specified trajectory, place the end effector at a specified location, and measure the electrical currents in the joint actuators. Since the joint actuator current is approximately proportional to the actuator forces and torques, a sudden rise in joint current, combined with a slowing of the joint, is a possible indication of actuator stall and surface contact. Hence, even though the robotic arm is not equipped with contact sensors, it is possible to sense contact (albeit with reduced sensitivity) as the end effector becomes stalled against a surface that one seeks to measure.
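
    The contact heuristic at the heart of this method can be sketched directly: a joint is judged to be stalled against a surface when its actuator current spikes (torque demand rises) while the joint velocity collapses toward zero. The threshold values below are invented for illustration; real limits would come from the arm's actuator characterization.

```python
# Sensorless contact detection: high actuator current plus near-zero joint
# speed indicates the end effector has stalled against a surface.

def contact_detected(current_amps, joint_speed, current_limit=2.5,
                     speed_eps=0.01):
    return current_amps > current_limit and abs(joint_speed) < speed_eps

free_motion = contact_detected(1.1, 0.50)   # normal current, joint moving
stalled     = contact_detected(3.4, 0.002)  # current spike, joint stopped
```

    Sweeping the end effector downward until this predicate fires, recording the forward-kinematics position, and repeating over a grid yields the topographical map the abstract describes.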

  3. Conceptions of health service robots

    DEFF Research Database (Denmark)

    Lystbæk, Christian Tang

    2015-01-01

    Technology developments create rich opportunities for health service providers to introduce service robots in health care. While the potential benefits of applying robots in health care are extensive, the research into the conceptions of health service robot and its importance for the uptake...... of robotics technology in health care is limited. This article develops a model of the basic conceptions of health service robots that can be used to understand different assumptions and values attached to health care technology in general and health service robots in particular. The article takes...... a discursive approach in order to develop a conceptual framework for understanding the social values of health service robots. First a discursive approach is proposed to develop a typology of conceptions of health service robots. Second, a model identifying four basic conceptions of health service robots...

  4. Continuum limbed robots for locomotion

    Science.gov (United States)

    Mutlu, Alper

    This thesis focuses on continuum robots based on pneumatic muscle technology. We introduce a novel approach to use these muscles as limbs of lightweight legged robots. The flexibility of the continuum legs of these robots offers the potential to perform some duties that are not possible with classical rigid-link robots. Potential applications are as space robots in low gravity, and as cave explorer robots. The thesis covers the fabrication process of continuum pneumatic muscles and limbs. It also provides some new experimental data on this technology. Afterwards, the designs of two different novel continuum robots - one tripod, one quadruped - are introduced. Experimental data from tests using the robots is provided. The experimental results are the first published example of locomotion with tripod and quadruped continuum legged robots. Finally, discussion of the results and how far this technology can go forward is presented.

  5. Soft computing in advanced robotics

    CERN Document Server

    Kobayashi, Ichiro; Kim, Euntai

    2014-01-01

    Intelligent systems and robotics are inevitably bound together: intelligent robots embody system integration by using intelligent systems. Intelligent systems are to cell units what intelligent robots are to body components, and the two technologies have progressed in step. Leveraging robotics and intelligent systems, applications span boundlessly from our daily life to the space station: manufacturing, healthcare, environment, energy, education, personal assistance, and logistics. This book aims at presenting research results relevant to intelligent robotics technology. We propose to researchers and practitioners some methods to advance intelligent systems and apply them to advanced robotics technology. This book consists of 10 contributions that feature mobile robots, robot emotion, electric power steering, multi-agent systems, fuzzy visual navigation, adaptive network-based fuzzy inference systems, swarm EKF localization and inspection robots. Th...

  6. Robotics and remote systems applications

    International Nuclear Information System (INIS)

    Rabold, D.E.

    1996-01-01

    This article is a review of numerous remote inspection techniques in use at the Savannah River (and other) facilities. These include: (1) reactor tank inspection robot, (2) californium waste removal robot, (3) fuel rod lubrication robot, (4) cesium source manipulation robot, (5) tank 13 survey and decontamination robots, (6) hot gang valve corridor decontamination and junction box removal robots, (7) lead removal from deionizer vessels robot, (8) HB line cleanup robot, (9) remote operation of a front end loader at WIPP, (10) remote overhead video extendible robot, (11) semi-intelligent mobile observing navigator, (12) remote camera systems in the SRS canyons, (13) cameras and borescope for the DWPF, (14) Hanford waste tank camera system, (15) in-tank precipitation camera system, (16) F-area retention basin pipe crawler, (17) waste tank wall crawler and annulus camera, (18) duct inspection, and (19) deionizer resin sampling

  7. Situation Assessment for Mobile Robots

    DEFF Research Database (Denmark)

    Beck, Anders Billesø

    Mobile robots have become a mature technology. The first cable-guided logistics robots were introduced in industry almost 60 years ago. Since then, the market for mobile robots in industry has seen only modest growth, and only 2,100 systems were sold worldwide in 2011. In recent years, many other domains have adopted mobile robots, such as logistics robots at hospitals and vacuum robots in our homes. However, considering the research achievements of the last 15 years in perception and operation in natural environments, together with the falling cost of modern sensor systems, the growth potential for mobile robot applications is enormous. Many new technological components are available to push the limits of commercial mobile robot applications, but a key hindrance is reliability. Natural environments are complex and dynamic, and thus the risk of robots...

  8. Fundamentals of soft robot locomotion

    OpenAIRE

    Calisti, M.; Picardi, G.; Laschi, C.

    2017-01-01

    Soft robotics and its related technologies enable robot abilities in several robotics domains including, but not exclusively related to, manipulation, manufacturing, human-robot interaction and locomotion. Although field applications have emerged for soft manipulation and human-robot interaction, mobile soft robots appear to remain in the research stage, involving the somewhat conflicting goals of having a deformable body and exerting forces on the environment to achieve locomotion. This p...

  9. Open Issues in Evolutionary Robotics.

    Science.gov (United States)

    Silva, Fernando; Duarte, Miguel; Correia, Luís; Oliveira, Sancho Moura; Christensen, Anders Lyhne

    2016-01-01

    One of the long-term goals in evolutionary robotics is to be able to automatically synthesize controllers for real autonomous robots based only on a task specification. While a number of studies have shown the applicability of evolutionary robotics techniques for the synthesis of behavioral control, researchers have consistently been faced with a number of issues preventing the widespread adoption of evolutionary robotics for engineering purposes. In this article, we review and discuss the open issues in evolutionary robotics. First, we analyze the benefits and challenges of simulation-based evolution and subsequent deployment of controllers versus evolution on real robotic hardware. Second, we discuss specific evolutionary computation issues that have plagued evolutionary robotics: (1) the bootstrap problem, (2) deception, and (3) the role of genomic encoding and genotype-phenotype mapping in the evolution of controllers for complex tasks. Finally, we address the absence of standard research practices in the field. We also discuss promising avenues of research. Our underlying motivation is the reduction of the current gap between evolutionary robotics and mainstream robotics, and the establishment of evolutionary robotics as a canonical approach for the engineering of autonomous robots.

  10. Mergeable nervous systems for robots.

    Science.gov (United States)

    Mathews, Nithin; Christensen, Anders Lyhne; O'Grady, Rehan; Mondada, Francesco; Dorigo, Marco

    2017-09-12

    Robots have the potential to display a higher degree of lifetime morphological adaptation than natural organisms. By adopting a modular approach, robots with different capabilities, shapes, and sizes could, in theory, construct and reconfigure themselves as required. However, current modular robots have only been able to display a limited range of hardwired behaviors because they rely solely on distributed control. Here, we present robots whose bodies and control systems can merge to form entirely new robots that retain full sensorimotor control. Our control paradigm enables robots to exhibit properties that go beyond those of any existing machine or of any biological organism: the robots we present can merge to form larger bodies with a single centralized controller, split into separate bodies with independent controllers, and self-heal by removing or replacing malfunctioning body parts. This work takes us closer to robots that can autonomously change their size, form and function. Robots that can self-assemble into different morphologies are desired to perform tasks that require different physical capabilities. Mathews et al. design robots whose bodies and control systems can merge and split to form new robots that retain full sensorimotor control and act as a single entity.

  11. Mobile Surveillance and Monitoring Robots

    International Nuclear Information System (INIS)

    Kimberly, Howard R.; Shipers, Larry R.

    1999-01-01

    Long-term nuclear material storage will require in-vault data verification, sensor testing, error and alarm response, inventory, and maintenance operations. System concept development efforts for a comprehensive nuclear material management system have identified the use of a small flexible mobile automation platform to perform these surveillance and maintenance operations. In order to have near-term wide-range application in the Complex, a mobile surveillance system must be small, flexible, and adaptable enough to allow retrofit into existing special nuclear material facilities. The objective of the Mobile Surveillance and Monitoring Robot project is to satisfy these needs by development of a human scale mobile robot to monitor the state of health, physical security and safety of items in storage and process; recognize and respond to alarms, threats, and off-normal operating conditions; and perform material handling and maintenance operations. The system will integrate a tool kit of onboard sensors and monitors, maintenance equipment and capability, and SNL developed non-lethal threat response technology with the intelligence to identify threats and develop and implement first response strategies for abnormal signals and alarm conditions. System versatility will be enhanced by incorporating a robot arm, vision and force sensing, robust obstacle avoidance, and appropriate monitoring and sensing equipment

  12. Evolving self-assembly in autonomous homogeneous robots: experiments with two physical robots.

    Science.gov (United States)

    Ampatzis, Christos; Tuci, Elio; Trianni, Vito; Christensen, Anders Lyhne; Dorigo, Marco

    2009-01-01

    This research work illustrates an approach to the design of controllers for self-assembling robots in which the self-assembly is initiated and regulated by perceptual cues that are brought forth by the physical robots through their dynamical interactions. More specifically, we present a homogeneous control system that can achieve assembly between two modules (two fully autonomous robots) of a mobile self-reconfigurable system without a priori introduced behavioral or morphological heterogeneities. The controllers are dynamic neural networks evolved in simulation that directly control all the actuators of the two robots. The neurocontrollers cause the dynamic specialization of the robots by allocating roles between them based solely on their interaction. We show that the best evolved controller proves to be successful when tested on a real hardware platform, the swarm-bot. The performance achieved is similar to the one achieved by existing modular or behavior-based approaches, also due to the effect of an emergent recovery mechanism that was neither explicitly rewarded by the fitness function, nor observed during the evolutionary simulation. Our results suggest that direct access to the orientations or intentions of the other agents is not a necessary condition for robot coordination: Our robots coordinate without direct or explicit communication, contrary to what is assumed by most research works in collective robotics. This work also contributes to strengthening the evidence that evolutionary robotics is a design methodology that can tackle real-world tasks demanding fine sensory-motor coordination.

  13. [Robotics in pediatric surgery].

    Science.gov (United States)

    Camps, J I

    2011-10-01

    Despite the extensive use of robotics in the adult population, robotics has not been well accepted in pediatrics. There is still a lack of awareness among pediatric surgeons of how to use the robotic equipment, and of its advantages and indications. The benefit is still controversial. Dexterity and better visualization of the surgical field are among its strong points; conversely, cost and the lack of small instruments prevent the use of robotics in smaller patients. The aim of this manuscript is to present the controversies about the use of robotics in pediatric surgery.

  14. Visual Control of Robots Using Range Images

    Directory of Open Access Journals (Sweden)

    Fernando Torres

    2010-08-01

    Full Text Available In recent years, 3D vision systems based on the time-of-flight (ToF) principle have gained importance as a way to obtain 3D information from the workspace. In this paper, an analysis of the use of 3D ToF cameras to guide a robot arm is performed. To do so, an adaptive method for simultaneous visual servo control and camera calibration is presented. Using this method, a robot arm is guided by range information obtained from a ToF camera. Furthermore, the self-calibration method obtains the integration time the range camera should use in order to precisely determine depth information.
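
    The servoing idea in the abstract can be reduced to a proportional depth servo: drive the arm until the measured range matches a target range. A minimal sketch under invented assumptions (idealized one-axis motion, noiseless range readings, illustrative gain and time step); the paper's simultaneous calibration of the ToF integration time is not modeled here.

```python
# Proportional depth servo: command a velocity proportional to the range
# error, integrate it, and watch the measured depth converge to the target.
# Gain, time step, and plant dynamics are illustrative assumptions.

def depth_servo_step(depth, target, position, gain=0.8, dt=0.05):
    velocity = gain * (depth - target)   # move toward the surface if too far
    position += velocity * dt
    depth -= velocity * dt               # moving forward reduces the range
    return depth, position

depth, position = 1.5, 0.0               # start 1.5 m from the surface
for _ in range(200):
    depth, position = depth_servo_step(depth, target=0.5, position=position)
```

The range error decays geometrically by a factor of (1 - gain * dt) per step, so after 200 steps the arm has closed essentially the full 1.0 m gap.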

  15. Toward a framework for levels of robot autonomy in human-robot interaction.

    Science.gov (United States)

    Beer, Jenay M; Fisk, Arthur D; Rogers, Wendy A

    2014-07-01

    A critical construct related to human-robot interaction (HRI) is autonomy, which varies widely across robot platforms. Levels of robot autonomy (LORA), ranging from teleoperation to fully autonomous systems, influence the way in which humans and robots may interact with one another. Thus, there is a need to understand HRI by identifying variables that influence - and are influenced by - robot autonomy. Our overarching goal is to develop a framework for levels of robot autonomy in HRI. To reach this goal, the framework draws links between HRI and human-automation interaction, a field with a long history of studying and understanding human-related variables. The construct of autonomy is reviewed and redefined within the context of HRI. Additionally, the framework proposes a process for determining a robot's autonomy level, by categorizing autonomy along a 10-point taxonomy. The framework is intended to be treated as guidelines to determine autonomy, categorize the LORA along a qualitative taxonomy, and consider which HRI variables (e.g., acceptance, situation awareness, reliability) may be influenced by the LORA.

  16. Control of a Quadcopter Aerial Robot Using Optic Flow Sensing

    Science.gov (United States)

    Hurd, Michael Brandon

    This thesis focuses on the motion control of a custom-built quadcopter aerial robot using optic flow sensing. Optic flow sensing is a vision-based approach that can give a robot the ability to fly in global positioning system (GPS) denied environments, such as indoors. In this work, optic flow sensors are used to stabilize the motion of the quadcopter: an optic flow algorithm provides odometry measurements to the quadcopter's central processing unit to monitor the flight heading. The optic flow sensor and algorithm are capable of gathering and processing images at 250 frames/sec, and the sensor package weighs 2.5 g with a footprint of 6 cm2. The odometry value from the optic flow sensor is then used as feedback in a simple proportional-integral-derivative (PID) controller on the quadcopter. Experimental results are presented to demonstrate the effectiveness of using optic flow for controlling the motion of the quadcopter aerial robot. The technique presented herein can be applied to different types of aerial robotic systems or unmanned aerial vehicles (UAVs), as well as unmanned ground vehicles (UGVs).
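
    The control loop described above, a PID controller fed by optic-flow odometry, can be sketched in a few lines. The gains and the idealized velocity-response plant below are illustrative assumptions, not values from the thesis; only the 250 frames/sec update rate is taken from the abstract.

```python
# PID position hold using odometry as feedback. The "plant" is a toy
# integrator: commanded velocity is realized instantly and integrated into
# the position that the optic-flow odometry would report.

class PID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, measurement):
        error = setpoint - measurement
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative


pid = PID(kp=2.0, ki=0.5, kd=0.1, dt=0.004)  # 250 frames/sec -> dt = 4 ms
position = 0.0
for _ in range(5000):                        # 20 s of simulated flight
    command = pid.update(1.0, position)      # hold a 1.0 m position setpoint
    position += command * pid.dt             # idealized velocity response
```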

  17. [Robot-aided training in rehabilitation].

    Science.gov (United States)

    Hachisuka, Kenji

    2010-02-01

    Recently, new training techniques that involve the use of robots have been used in the rehabilitation of patients with hemiplegia and paraplegia. Robots used for training the arm include the MIT-MANUS, Arm Trainer, mirror-image motion enabler (MIME) robot, and the assisted rehabilitation and measurement (ARM) Guide. Robots that are used for lower-limb training are the Rehabot, Gait Trainer, Lokomat, LOPES Exoskeleton Robot, and Gait Assist Robot. Robot-aided therapy has enabled the functional training of the arm and the lower limbs in an effective, easy, and comfortable manner. Therefore, with this type of therapy, the patients can repeatedly undergo sufficient and accurate training for a prolonged period. However, evidence of the benefits of robot-aided training has not yet been established.

  18. Multi-arm multilateral haptics-based immersive tele-robotic system (HITS) for improvised explosive device disposal

    Science.gov (United States)

    Erickson, David; Lacheray, Hervé; Lai, Gilbert; Haddadi, Amir

    2014-06-01

    This paper presents the latest advancements of the Haptics-based Immersive Tele-robotic System (HITS) project, a next generation Improvised Explosive Device (IED) disposal (IEDD) robotic interface containing an immersive telepresence environment for a remotely-controlled three-articulated-robotic-arm system. While the haptic feedback enhances the operator's perception of the remote environment, a third teleoperated dexterous arm, equipped with multiple vision sensors and cameras, provides stereo vision with proper visual cues, and a 3D photo-realistic model of the potential IED. This decentralized system combines various capabilities including stable and scaled motion, singularity avoidance, cross-coupled hybrid control, active collision detection and avoidance, compliance control and constrained motion to provide a safe and intuitive control environment for the operators. Experimental results and validation of the current system are presented through various essential IEDD tasks. This project demonstrates that a two-armed anthropomorphic Explosive Ordnance Disposal (EOD) robot interface can achieve complex neutralization techniques against realistic IEDs without the operator approaching at any time.

  19. Current status of robotic simulators in acquisition of robotic surgical skills.

    Science.gov (United States)

    Kumar, Anup; Smith, Roger; Patel, Vipul R

    2015-03-01

    This article provides an overview of the current status of simulator systems in the robotic surgery training curriculum, covering available simulators for training, their comparison, and new technologies introduced in simulation, along with existing challenges and future perspectives of simulator training in robotic surgery. The different virtual reality simulators available on the market, like dVSS, dVT, RoSS, ProMIS and SEP, have shown face, content and construct validity in robotic skills training for novices outside the operating room. Recently, augmented reality simulators like HoST, Maestro AR and RobotiX Mentor have been introduced into robotic training, providing a more realistic operating environment and placing more emphasis on procedure-specific training. Further, the Xperience Team Trainer, which trains the console surgeon and bed-side assistant simultaneously, has recently been introduced to emphasize the importance of teamwork and proper coordination. Simulator training holds an important place in the current training curriculum of future robotic surgeons. There is a need for more procedure-specific augmented reality simulator training, utilizing advancements in computing and graphical capabilities for new innovations in simulator technology. Further studies are required to establish its cost-benefit ratio along with concurrent and predictive validity.

  20. Adaptive Shape Kernel-Based Mean Shift Tracker in Robot Vision System

    Directory of Open Access Journals (Sweden)

    Chunmei Liu

    2016-01-01

    Full Text Available This paper proposes an adaptive shape kernel-based mean shift tracker using a single static camera for a robot vision system. The question we address is how to construct a kernel shape that is adaptive to the object shape. We apply a nonlinear manifold learning technique to obtain a low-dimensional shape space, trained on data with the same view as the tracking video. The proposed kernel searches the shape in the low-dimensional shape space and constructs the adaptive kernel shape in the high-dimensional shape space. This improves the mean shift tracker's ability to track object position and contour and to avoid background clutter. In the experimental part, we take a walking human as an example to validate that our method is accurate and robust in tracking human position and describing the human contour.
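
    The adaptive-kernel tracker builds on the standard mean shift update: repeatedly move the search window to the centroid of the kernel-weighted pixel scores. The sketch below shows only that basic step with a flat circular kernel on a toy weight image; the paper's contribution (learning the kernel shape in a low-dimensional shape space) is not reproduced, and all sizes and thresholds are illustrative.

```python
import numpy as np

def mean_shift(weights, start, radius=5, iters=20, eps=0.5):
    """Climb to the mode of a 2-D weight image (e.g. back-projection scores)
    by moving the window to the centroid of weights inside a circular mask."""
    y, x = map(float, start)
    ys, xs = np.mgrid[0:weights.shape[0], 0:weights.shape[1]]
    for _ in range(iters):
        mask = (ys - y) ** 2 + (xs - x) ** 2 <= radius ** 2
        w = weights * mask
        total = w.sum()
        if total == 0:
            break
        ny, nx = (w * ys).sum() / total, (w * xs).sum() / total
        done = (ny - y) ** 2 + (nx - x) ** 2 < eps ** 2
        y, x = ny, nx
        if done:
            break
    return y, x

# Toy target: a Gaussian blob centred at (30, 40); start the window nearby.
gy, gx = np.mgrid[0:60, 0:60]
blob = np.exp(-((gy - 30) ** 2 + (gx - 40) ** 2) / 20.0)
y, x = mean_shift(blob, start=(25, 35))
```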

  1. Adaptive Shape Kernel-Based Mean Shift Tracker in Robot Vision System

    Science.gov (United States)

    2016-01-01

    This paper proposes an adaptive shape kernel-based mean shift tracker using a single static camera for a robot vision system. The question we address is how to construct a kernel shape that is adaptive to the object shape. We apply a nonlinear manifold learning technique to obtain a low-dimensional shape space, trained on data with the same view as the tracking video. The proposed kernel searches the shape in the low-dimensional shape space and constructs the adaptive kernel shape in the high-dimensional shape space. This improves the mean shift tracker's ability to track object position and contour and to avoid background clutter. In the experimental part, we take a walking human as an example to validate that our method is accurate and robust in tracking human position and describing the human contour. PMID:27379165

  2. Image Based Solution to Occlusion Problem for Multiple Robots Navigation

    Directory of Open Access Journals (Sweden)

    Taj Mohammad Khan

    2012-04-01

    Full Text Available In machine vision, occlusion is a persistent challenge in image-based mapping and navigation tasks. This paper presents a multiple-view vision-based algorithm for the development of an occlusion-free map of an indoor environment. The map is assumed to be utilized by mobile robots within the workspace. It has a wide range of applications, including mobile robot path planning and navigation, access control in restricted areas, and surveillance systems. We used a wall-mounted fixed camera system. After intensity adjustment and background subtraction of the synchronously captured images, image registration was performed. We applied our algorithm to the registered images to resolve the occlusion problem. This technique works well even in the presence of total occlusion for a longer period.
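
    The pipeline above, background subtraction per camera followed by fusion of registered views, can be sketched as follows. This is a simplified stand-in, assuming plain frame differencing and binary masks already registered to a common reference; the paper's registration step and thresholds are not reproduced.

```python
import numpy as np

def foreground_mask(frame, background, thresh=25):
    """Frame differencing against a static background model (grayscale 0-255)."""
    return np.abs(frame.astype(int) - background.astype(int)) > thresh

def fuse_views(masks):
    """Fuse registered per-camera masks: a cell counts as occupied only if
    every view marks it, so a shadow cast by occlusion in one view is
    cleared by any other view that sees that area as free."""
    fused = masks[0].copy()
    for m in masks[1:]:
        fused &= m
    return fused

# Toy scene: both cameras see the 2x2 obstacle, but view 2 also has an
# occlusion shadow behind it; fusion keeps only the true obstacle.
background = np.zeros((8, 8), np.uint8)
view1 = background.copy(); view1[2:4, 2:4] = 200
view2 = background.copy(); view2[2:4, 2:6] = 200
occupancy = fuse_views([foreground_mask(view1, background),
                        foreground_mask(view2, background)])
```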

  3. [Robot assisted Frykman-Goldberg procedure. Case report].

    Science.gov (United States)

    Zubieta-O'Farrill, Gregorio; Ramírez-Ramírez, Moisés; Villanueva-Sáenz, Eduardo

    2017-12-01

    Rectal prolapse is defined as the protrusion of the rectal wall through the anal canal, with a prevalence of less than 0.5%. The most frequent symptoms include pain, a sensation of incomplete defecation with blood and mucus, fecal incontinence and/or constipation. The surgical approach can be perineal or abdominal, with a tendency toward minimal invasion. Robot-assisted procedures are a novel option that offers technical advantages over open or laparoscopic approaches. A 67-year-old female presented with rectal prolapse after an episode of constipation that required manual reduction, associated with transanal hemorrhage during defecation and occasional fecal incontinence. MRI defecography reported complete rectal and uterine prolapse, and cystocele. A robot-assisted Frykman-Goldberg procedure was performed. There are more than 100 surgical procedures for the treatment of rectal prolapse. We report the first robot-assisted procedure in Mexico. Robot-assisted surgery has the same safety rate as laparoscopic surgery, with the advantages of better instrument mobility, no human hand tremor, better vision, and access to complicated and narrow areas. Robotic surgery is a feasible, safe and effective surgical treatment option; there is no difference in recurrence or function compared with laparoscopy. It facilitates the technique, improves nerve preservation, and reduces bleeding. Further clinical, prospective and randomized studies are needed to compare the different minimally invasive approaches and their functional and long-term results for this pathology. Copyright © 2016 Academia Mexicana de Cirugía A.C. Publicado por Masson Doyma México S.A. All rights reserved.

  4. A Human-Robot Interaction Perspective on Assistive and Rehabilitation Robotics.

    Science.gov (United States)

    Beckerle, Philipp; Salvietti, Gionata; Unal, Ramazan; Prattichizzo, Domenico; Rossi, Simone; Castellini, Claudio; Hirche, Sandra; Endo, Satoshi; Amor, Heni Ben; Ciocarlie, Matei; Mastrogiovanni, Fulvio; Argall, Brenna D; Bianchi, Matteo

    2017-01-01

    Assistive and rehabilitation devices are a promising and challenging field of recent robotics research. Motivated by societal needs such as aging populations, such devices can support motor functionality and subject training. The design, control, sensing, and assessment of the devices become more sophisticated due to a human in the loop. This paper gives a human-robot interaction perspective on current issues and opportunities in the field. On the topic of control and machine learning, approaches that support but do not distract subjects are reviewed. Options to provide sensory user feedback that are currently missing from robotic devices are outlined. Parallels between device acceptance and affective computing are made. Furthermore, requirements for functional assessment protocols that relate to real-world tasks are discussed. In all topic areas, the design of human-oriented frameworks and methods is dominated by challenges related to the close interaction between the human and robotic device. This paper discusses the aforementioned aspects in order to open up new perspectives for future robotic solutions.

  5. DSLs in robotics

    DEFF Research Database (Denmark)

    Schultz, Ulrik Pagh; Bordignon, Mirko; Stoy, Kasper

    2017-01-01

    Robotic systems blend hardware and software in a holistic way that intrinsically raises many crosscutting concerns such as concurrency, uncertainty, and time constraints. These concerns make programming robotic systems challenging, as expertise from multiple domains needs to be integrated conceptually and technically. Programming languages play a central role in providing a higher level of abstraction. This briefing presents a case study on the evolution of domain-specific languages based on modular robotics, drawing on a series of DSL prototypes developed over five years for the domain of modular, self-reconfigurable robots.

  6. Robots de servicio

    OpenAIRE

    Aracil, Rafael; Balaguer, Carlos; Armada, Manuel

    2008-01-01

    8 pages, 9 figures. The term Service Robots appeared at the end of the 1980s out of the need to develop machines and systems capable of working in environments other than factory floors. Service robots had to be able to work in unstructured environments, under changing ambient conditions, and in close interaction with humans. In 1995 the Technical Committee on Service Robots was created by the IEEE Robotics and Automation Society, and this committee defined in the year...

  7. Robotic arm

    International Nuclear Information System (INIS)

    Kwech, H.

    1989-01-01

    A robotic arm positionable within a nuclear vessel by access through a small diameter opening and having a mounting tube supported within the vessel and mounting a plurality of arm sections for movement lengthwise of the mounting tube as well as for movement out of a window provided in the wall of the mounting tube is disclosed. An end effector, such as a grinding head or welding element, at an operating end of the robotic arm, can be located and operated within the nuclear vessel through movement derived from six different axes of motion provided by mounting and drive connections between arm sections of the robotic arm. The movements are achieved by operation of remotely-controllable servo motors, all of which are mounted at a control end of the robotic arm to be outside the nuclear vessel. 23 figs

  8. Human-robot skills transfer interfaces for a flexible surgical robot.

    Science.gov (United States)

    Calinon, Sylvain; Bruno, Danilo; Malekzadeh, Milad S; Nanayakkara, Thrishantha; Caldwell, Darwin G

    2014-09-01

    In minimally invasive surgery, tools go through narrow openings and manipulate soft organs to perform surgical tasks. There are limitations in current robot-assisted surgical systems due to the rigidity of robot tools. The aim of the STIFF-FLOP European project is to develop a soft robotic arm to perform surgical tasks. The flexibility of the robot allows the surgeon to move within organs to reach remote areas inside the body and perform challenging procedures in laparoscopy. This article addresses the problem of designing learning interfaces enabling the transfer of skills from human demonstration. Robot programming by demonstration encompasses a wide range of learning strategies, from simple mimicking of the demonstrator's actions to the higher level imitation of the underlying intent extracted from the demonstrations. By focusing on this last form, we study the problem of extracting an objective function explaining the demonstrations from an over-specified set of candidate reward functions, and using this information for self-refinement of the skill. In contrast to inverse reinforcement learning strategies that attempt to explain the observations with reward functions defined for the entire task (or a set of pre-defined reward profiles active for different parts of the task), the proposed approach is based on context-dependent reward-weighted learning, where the robot can learn the relevance of candidate objective functions with respect to the current phase of the task or encountered situation. The robot then exploits this information for skills refinement in the policy parameters space. The proposed approach is tested in simulation with a cutting task performed by the STIFF-FLOP flexible robot, using kinesthetic demonstrations from a Barrett WAM manipulator. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
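
    The skill-refinement step described above weights sampled policy parameters by the reward they obtain. A generic sketch of that reward-weighted update follows, in the spirit of reward-weighted regression; the paper's context-dependent weighting over candidate objective functions is not reproduced, and the toy reaching task, gains, and noise levels are invented for illustration.

```python
import numpy as np

def reward_weighted_update(rollouts, rewards, beta=20.0):
    """New policy parameters = exponentiated-reward-weighted mean of the
    sampled parameter vectors (softmax-style weights, shifted for stability)."""
    w = np.exp(beta * (rewards - rewards.max()))
    return (w[:, None] * rollouts).sum(axis=0) / w.sum()

# Toy task: find the 2-D parameter vector closest to an unknown target.
rng = np.random.default_rng(1)
theta = np.zeros(2)
target = np.array([0.5, -0.3])
for _ in range(50):
    rollouts = theta + 0.1 * rng.standard_normal((20, 2))   # exploration
    rewards = -np.linalg.norm(rollouts - target, axis=1)    # task reward
    theta = reward_weighted_update(rollouts, rewards)
```

Each iteration pulls the parameters toward the better-rewarded samples, so theta drifts to the target without ever computing a gradient.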

  9. Fiscal 1998 achievement report on regional consortium research and development project. Venture business fostering regional consortium--Creation of key industries (Development of Task-Oriented Robot Control System TORCS based on versatile 3-dimensional vision system VVV--Vertical Volumetric Vision); 1998 nendo sanjigen shikaku system VVV wo mochiita task shikogata robot seigyo system TORCS no kenkyu kaihatsu seika hokokusho

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2000-03-01

    Research is conducted on the development of a highly autonomous robot control system, TORCS, for the purpose of realizing an automated, unattended manufacturing process. In the development of an interface, an indicating function is built which easily adds or removes job attributes relative to given shape data. In the development of the 3-dimensional vision system VVV, a camera set and a new range finder are manufactured for ranging and recognition, the latter being an improvement on the conventional laser-aided range finder TDS. A 3-dimensional image processor is developed which captures images at a speed approximately 8 times higher than that of the conventional type. In the development of trajectory calculation software, a job planner, an operation planner, and a vision planner are prepared, along with the robot program necessary for robot operation. In an evaluation test involving a simulated casting line, the pick-and-place concept is successfully implemented for several kinds of cast articles positioned at random on a moving conveyor. Differences in environmental conditions between manufacturing sites are not pursued in this paper, on the grounds that they should be discussed on a case-by-case basis. (NEDO)

  10. Measuring Attitudes Towards Telepresence Robots

    OpenAIRE

    M Tsui, Katherine; Desai, Munjal; A. Yanco, Holly; Cramer, Henriette; Kemper, Nicander

    2011-01-01

    Studies using Nomura et al.’s “Negative Attitude toward Robots Scale” (NARS) [1] as an attitudinal measure have featured robots that were perceived to be autonomous, independent agents. State-of-the-art telepresence robots require an explicit human-in-the-loop to drive the robot around. In this paper, we investigate whether NARS can be used with telepresence robots. To this end, we conducted three studies in which people watched videos of telepresence robots (n=70), operated te...

  11. Robotic environments

    NARCIS (Netherlands)

    Bier, H.H.

    2011-01-01

    Technological and conceptual advances in fields such as artificial intelligence, robotics, and material science have enabled robotic architectural environments to be implemented and tested in the last decade in virtual and physical prototypes. These prototypes are incorporating sensing-actuating

  12. 2016 International Symposium on Experimental Robotics

    CERN Document Server

    Nakamura, Yoshihiko; Khatib, Oussama; Venture, Gentiane

    2017-01-01

    Experimental Robotics XV is the collection of papers presented at the International Symposium on Experimental Robotics, Roppongi, Tokyo, Japan on October 3-6, 2016. 73 scientific papers were selected and presented after peer review. The papers span a broad range of sub-fields in robotics including aerial robots, mobile robots, actuation, grasping, manipulation, planning and control and human-robot interaction, but shared cutting-edge approaches and paradigms to experimental robotics. The readers will find a breadth of new directions of experimental robotics. The International Symposium on Experimental Robotics is a series of bi-annual symposia sponsored by the International Foundation of Robotics Research, whose goal is to provide a forum dedicated to experimental robotics research. Robotics has been widening its scientific scope, deepening its methodologies and expanding its applications. However, the significance of experiments remains and will remain at the center of the discipline. The ISER gatherings are...

  13. Localization from Visual Landmarks on a Free-Flying Robot

    Science.gov (United States)

    Coltin, Brian; Fusco, Jesse; Moratto, Zack; Alexandrov, Oleg; Nakamura, Robert

    2016-01-01

    We present the localization approach for Astrobee, a new free-flying robot designed to navigate autonomously on the International Space Station (ISS). Astrobee will accommodate a variety of payloads and enable guest scientists to run experiments in zero-g, as well as assist astronauts and ground controllers. Astrobee will replace the SPHERES robots which currently operate on the ISS, whose use of fixed ultrasonic beacons for localization limits them to work in a 2 meter cube. Astrobee localizes with monocular vision and an IMU, without any environmental modifications. Visual features detected on a pre-built map, optical flow information, and IMU readings are all integrated into an extended Kalman filter (EKF) to estimate the robot pose. We introduce several modifications to the filter to make it more robust to noise, and extensively evaluate the localization algorithm.
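
    The fusion structure described above, IMU propagation corrected by absolute fixes from map-matched visual features, can be illustrated with a deliberately minimal 1-D Kalman filter over [position, velocity]. Astrobee's actual EKF carries a far richer state (full pose, IMU biases, optical flow terms); the noise values and toy trajectory below are invented for the example.

```python
import numpy as np

def ekf_step(x, P, accel, z, dt, q=0.1, r=0.05):
    """One predict/update cycle: propagate with an IMU acceleration reading,
    then correct with an absolute position fix from visual landmarks."""
    F = np.array([[1.0, dt], [0.0, 1.0]])        # constant-velocity model
    B = np.array([0.5 * dt**2, dt])              # IMU control input
    H = np.array([[1.0, 0.0]])                   # we measure position only
    # Predict.
    x = F @ x + B * accel
    P = F @ P @ F.T + q * np.eye(2)
    # Update.
    y = z - H @ x                                # innovation
    S = H @ P @ H.T + r                          # innovation covariance
    K = P @ H.T / S                              # Kalman gain
    x = x + (K * y).ravel()
    P = (np.eye(2) - K @ H) @ P
    return x, P

# Toy run: true motion is constant velocity 1.0 m/s, the IMU reads zero
# acceleration, and the "visual" fix is the exact true position.
dt = 0.1
x, P = np.zeros(2), np.eye(2)
for k in range(1, 51):
    x, P = ekf_step(x, P, accel=0.0, z=k * dt, dt=dt)
```

After 50 steps the filter has recovered both the position (5.0 m) and the never-directly-measured velocity (1.0 m/s).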

  14. Pose estimation for mobile robots working on turbine blade

    Energy Technology Data Exchange (ETDEWEB)

    Ma, X.D.; Chen, Q.; Liu, J.J.; Sun, Z.G.; Zhang, W.Z. [Tsinghua Univ., Beijing (China). Key Laboratory for Advanced Materials Processing Technology, Ministry of Education, Dept. of Mechanical Engineering]

    2009-03-11

    This paper discussed a feature point detection and matching technique for mobile robots used in wind turbine blade applications. The vision-based scheme used visual information from the robot's surrounding environment to match successive image frames. An improved pose estimation algorithm based on a scale invariant feature transform (SIFT) was developed to consider the characteristics of local images of turbine blades, pose estimation problems, and conditions. The method included a pre-subsampling technique for reducing computation and bidirectional matching for improving precision. A random sample consensus (RANSAC) method was used to estimate the robot's pose. Pose estimation conditions included a wide pose range; the distance between neighbouring blades; and mechanical, electromagnetic, and optical disturbances. An experimental platform was used to demonstrate the validity of the proposed algorithm. 20 refs., 6 figs.
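    Two of the ideas named in the abstract, bidirectional matching and RANSAC estimation, can be illustrated in reduced form. This sketch is not the paper's code: descriptors are toy 1-D scalars and the "pose" is simplified to a 2-D translation between matched point pairs:

```python
import random

def bidirectional_match(desc_a, desc_b):
    """Keep only matches that agree in both directions (cross-check)."""
    def nearest(d, pool):
        return min(range(len(pool)), key=lambda i: abs(pool[i] - d))
    matches = []
    for i, d in enumerate(desc_a):
        j = nearest(d, desc_b)
        if nearest(desc_b[j], desc_a) == i:   # must match back to the same index
            matches.append((i, j))
    return matches

def ransac_translation(pts_a, pts_b, iters=100, tol=0.5):
    """Estimate (dx, dy) between matched point pairs, robust to outliers."""
    best, best_inliers = (0.0, 0.0), 0
    for _ in range(iters):
        k = random.randrange(len(pts_a))      # minimal sample: one pair
        dx = pts_b[k][0] - pts_a[k][0]
        dy = pts_b[k][1] - pts_a[k][1]
        inliers = sum(
            1 for (ax, ay), (bx, by) in zip(pts_a, pts_b)
            if abs(bx - ax - dx) < tol and abs(by - ay - dy) < tol)
        if inliers > best_inliers:            # keep hypothesis with most support
            best, best_inliers = (dx, dy), inliers
    return best
```

    The cross-check discards ambiguous correspondences before estimation, and RANSAC then tolerates the mismatches that survive, which is the division of labor the abstract describes.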

  15. Service Oriented Robotic Architecture for Space Robotics: Design, Testing, and Lessons Learned

    Science.gov (United States)

    Fluckiger, Lorenzo Jean Marc E; Utz, Hans Heinrich

    2013-01-01

    This paper presents the lessons learned from six years of experiments with planetary rover prototypes running the Service Oriented Robotic Architecture (SORA) developed by the Intelligent Robotics Group (IRG) at the NASA Ames Research Center. SORA relies on proven software engineering methods and technologies applied to space robotics. Based on a Service Oriented Architecture and robust middleware, SORA encompasses on-board robot control and a full suite of software tools necessary for remotely operated exploration missions. SORA has been field tested in numerous scenarios of robotic lunar and planetary exploration. The experiments conducted by IRG with SORA exercise a large set of the constraints encountered in space applications: remote robotic assets, flight-relevant science instruments, distributed operations, high network latencies and unreliable or intermittent communication links. In this paper, we present the results of these field tests in regard to the developed architecture, and discuss its benefits and limitations.

  16. Robot Control Overview: An Industrial Perspective

    Directory of Open Access Journals (Sweden)

    T. Brogårdh

    2009-07-01

    Full Text Available One key competence for robot manufacturers is robot control, defined as all the technologies needed to control the electromechanical system of an industrial robot. By means of modeling, identification, optimization, and model-based control it is possible to reduce robot cost, increase robot performance, and solve requirements from new automation concepts and new application processes. Model-based control, including kinematics error compensation, optimal servo reference- and feed-forward generation, and servo design, tuning, and scheduling, has meant a breakthrough for the use of robots in industry. Relying on this breakthrough, new automation concepts such as high performance multi robot collaboration and human robot collaboration can be introduced. Robot manufacturers can build robots with more compliant components and mechanical structures without losing performance, and robots can also be used in applications with very high performance requirements, e.g., in assembly, machining, and laser cutting. In the future it is expected that the importance of sensor control will increase, both with respect to sensors in the robot structure to increase the control performance of the robot itself and sensors outside the robot related to the applications and the automation systems. In this connection sensor fusion and learning functionalities will be needed together with the robot control for easy and intuitive installation, programming, and maintenance of industrial robots.

  17. Low cost submarine robot

    Directory of Open Access Journals (Sweden)

    Ponlachart Chotikarn

    2010-10-01

    Full Text Available A submarine robot is a semi-autonomous underwater vehicle used mainly for marine environmental research. We aim to develop a low cost, semi-autonomous submarine robot which is able to travel underwater. The robot's structure was designed and patented using a novel diving system that employs a volume adjustment mechanism to vary the robot's density. The light weight, flexibility, and small structure provided by PVC were used to construct the torpedo-like shaped robot. Hydraulic seals and O-ring rubbers are used to prevent water leaking. This robot is controlled by a wired communication system.

  18. ROBOTICS 2014 - The International Conference on ROBOTICS, Bucharest, Romania, October 23-24, 2014

    Directory of Open Access Journals (Sweden)

    Iulian TABĂRĂ

    2014-12-01

    Full Text Available ROBOTICS 2014 was organized by the Robotics Society of Romania (RSR) with the support of the University POLITEHNICA of Bucharest (UPB), the Institute of Solid Mechanics of the Romanian Academy (ISMRA), the Technical University of Civil Engineering Bucharest (TUCEB), and the Ministry of National Education (MNE), under the patronage of the International Federation for the Promotion of Mechanism and Machine Science (IFToMM). The first scientific event in the field of robotics in Romania was held at the University POLITEHNICA of Bucharest (UPB) in 1981 by Professor Christian PELECUDI, head of the mechanisms and robots research and design team MERO (MEchanisms and RObots), and was named the "National Symposium of Robotics". Since the first edition, various scientific events dedicated to robotics have been held in Romania under the name of National Seminars (the first fifteen events, since 1981) and National and International Conferences (the last five editions). This is the 22nd edition of these scientific events, the first three (1981, 1982, and 1983) being held at the Polytechnic University of Bucharest.

  19. Robotic hand with modular extensions

    Science.gov (United States)

    Salisbury, Curt Michael; Quigley, Morgan

    2015-01-20

    A robotic device is described herein. The robotic device includes a frame that comprises a plurality of receiving regions that are configured to receive a respective plurality of modular robotic extensions. The modular robotic extensions are removably attachable to the frame at the respective receiving regions by way of respective mechanical fuses. Each mechanical fuse is configured to trip when a respective modular robotic extension experiences a predefined load condition, such that the respective modular robotic extension detaches from the frame when the load condition is met.

  20. Hierarchical Spatial Concept Formation Based on Multimodal Information for Human Support Robots.

    Science.gov (United States)

    Hagiwara, Yoshinobu; Inoue, Masakazu; Kobayashi, Hiroyoshi; Taniguchi, Tadahiro

    2018-01-01

    In this paper, we propose a hierarchical spatial concept formation method based on the Bayesian generative model with multimodal information, e.g., vision, position, and word information. Since humans have the ability to select an appropriate level of abstraction according to the situation and describe their position linguistically, e.g., "I am in my home" and "I am in front of the table," a hierarchical structure of spatial concepts is necessary in order for human support robots to communicate smoothly with users. The proposed method enables a robot to form hierarchical spatial concepts by categorizing multimodal information using hierarchical multimodal latent Dirichlet allocation (hMLDA). Object recognition results using convolutional neural network (CNN), hierarchical k-means clustering result of self-position estimated by Monte Carlo localization (MCL), and a set of location names are used, respectively, as features in vision, position, and word information. Experiments in forming hierarchical spatial concepts and evaluating how the proposed method can predict unobserved location names and position categories are performed using a robot in the real world. Results verify that, relative to comparable baseline methods, the proposed method enables a robot to predict location names and position categories closer to predictions made by humans. As an application example of the proposed method in a home environment, a demonstration in which a human support robot moves to an instructed place based on human speech instructions is achieved based on the formed hierarchical spatial concept.

  1. Hierarchical Spatial Concept Formation Based on Multimodal Information for Human Support Robots

    Directory of Open Access Journals (Sweden)

    Yoshinobu Hagiwara

    2018-03-01

    Full Text Available In this paper, we propose a hierarchical spatial concept formation method based on the Bayesian generative model with multimodal information, e.g., vision, position, and word information. Since humans have the ability to select an appropriate level of abstraction according to the situation and describe their position linguistically, e.g., “I am in my home” and “I am in front of the table,” a hierarchical structure of spatial concepts is necessary in order for human support robots to communicate smoothly with users. The proposed method enables a robot to form hierarchical spatial concepts by categorizing multimodal information using hierarchical multimodal latent Dirichlet allocation (hMLDA). Object recognition results using convolutional neural network (CNN), hierarchical k-means clustering result of self-position estimated by Monte Carlo localization (MCL), and a set of location names are used, respectively, as features in vision, position, and word information. Experiments in forming hierarchical spatial concepts and evaluating how the proposed method can predict unobserved location names and position categories are performed using a robot in the real world. Results verify that, relative to comparable baseline methods, the proposed method enables a robot to predict location names and position categories closer to predictions made by humans. As an application example of the proposed method in a home environment, a demonstration in which a human support robot moves to an instructed place based on human speech instructions is achieved based on the formed hierarchical spatial concept.

  2. An Innovative 3D Ultrasonic Actuator with Multidegree of Freedom for Machine Vision and Robot Guidance Industrial Applications Using a Single Vibration Ring Transducer

    Directory of Open Access Journals (Sweden)

    M. Shafik

    2013-07-01

    Full Text Available This paper presents an innovative 3D piezoelectric ultrasonic actuator using a single flexural vibration ring transducer, for machine vision and robot guidance industrial applications. The proposed actuator principally aims to overcome the limited spotlight focus angle of digital visual data capture transducers, such as digital cameras, and to enhance a machine vision system's ability to perceive and move in 3D. The actuator's design, structure, working principles, and finite element analysis are discussed in this paper. A prototype of the actuator was fabricated. Experimental tests and measurements showed the ability of the developed prototype to provide multi-degree-of-freedom 3D motions, with a typical speed of movement of 35 revolutions per minute, a resolution of less than 5 μm, and a maximum load of 3.5 Newtons. These initial characteristics illustrate the potential of the developed 3D micro actuator to address the spotlight focus angle issue of digital visual data capture transducers and the possible improvement that such technology could bring to machine vision and robot guidance industrial applications.

  3. Physical Human Robot Interaction for a Wall Mounting Robot - External Force Estimation

    DEFF Research Database (Denmark)

    Alonso García, Alejandro; Villarmarzo Arruñada, Noelia; Pedersen, Rasmus

    2018-01-01

    The use of collaborative robots enhances human capabilities, leading to better working conditions and increased productivity. In building construction, such robots are needed, among other tasks, to install large glass panels, where the robot takes care of the heavy lifting part of the job while...

  4. SWARM-BOT: Pattern Formation in a Swarm of Self-Assembling Mobile Robots

    OpenAIRE

    El Kamel, A.; Mellouli, K.; Borne, P.; Sahin, E.; Labella, T.H.; Trianni, V.; Deneubourg, J.-L.; Rasse, P.; Floreano, D.; Gambardella, L.M.; Mondada, F.; Nolfi, S.; Dorigo, M.

    2002-01-01

    In this paper we introduce a new robotic system, called swarm-bot. The system consists of a swarm of mobile robots with the ability to connect to/disconnect from each other to self-assemble into different kinds of structures. First, we describe our vision and the goals of the project. Then we present preliminary results on the formation of patterns obtained from a grid-world simulation of the system.

  5. Robotics Inspection Vehicle for Advanced Storages

    Energy Technology Data Exchange (ETDEWEB)

    Ruiz, Emilio; Renaldi, Graziano; Puig, David; Franzetti, Michele; Correcher, Carlos [European Commission, Ispra (Italy). Inst. for the Protection and Security of the Citizen]

    2003-05-01

    After the dismantling of nuclear weapons and the probable release of large quantities of weapon-grade materials under international verification regimes, there will be a wide interest in unmanned, highly automated and secure storage areas. In such circumstances, robotics technologies can provide an effective answer to the problem of securing, manipulating and inventorying all stored materials. In view of this future application, JRC's NPNS started the development and construction of an advanced robotics prototype and demonstration system, named Robotics Inspection Vehicle (RIV), for remote inspection, surveillance and remote handling in those areas. The system was designed to meet requirements of reliability, security, high availability, robustness against radiation effects, self-maintainability (i.e., auto-repair capability), and easy installation. Due to its innovative holonomic design, RIV is a highly maneuverable and agile platform able to move in any direction, including sideways. The platform carries on-board a five degree of freedom manipulator arm. The high maneuverability and operation modes take into account the needs for accessing materials in the storage area in the easiest way. The platform is prepared to operate in one of three modes: i) manual tele-operation, ii) semiautonomous and iii) fully autonomous. The paper describes RIV's main design features, and details its GENERIS-based control software [JRC's software architecture for robotics] and embedded sensors (i.e., 3D laser range, transponder antenna, ultra-sound, vision-based robot guidance, force-torque sensors, etc.). RIV incorporates several JRC innovative surveillance and inspection technologies and shows that the current state of technology is mature enough to provide effective solutions for novel storage needs. The system is available for demonstration at JRC's Rialto Laboratory.

  6. Why Robots Should Be Social: Enhancing Machine Learning through Social Human-Robot Interaction.

    Science.gov (United States)

    de Greeff, Joachim; Belpaeme, Tony

    2015-01-01

    Social learning is a powerful method for cultural propagation of knowledge and skills relying on a complex interplay of learning strategies, social ecology and the human propensity for both learning and tutoring. Social learning has the potential to be an equally potent learning strategy for artificial systems and robots in particular. However, given the complexity and unstructured nature of social learning, implementing social machine learning proves to be a challenging problem. We study one particular aspect of social machine learning: that of offering social cues during the learning interaction. Specifically, we study whether people are sensitive to social cues offered by a learning robot, in a similar way to children's social bids for tutoring. We use a child-like social robot and a task in which the robot has to learn the meaning of words. For this a simple turn-based interaction is used, based on language games. Two conditions are tested: one in which the robot uses social means to invite a human teacher to provide information based on what the robot requires to fill gaps in its knowledge (i.e. expression of a learning preference); the other in which the robot does not provide social cues to communicate a learning preference. We observe that conveying a learning preference through the use of social cues results in better and faster learning by the robot. People also seem to form a "mental model" of the robot, tailoring the tutoring to the robot's performance as opposed to using simply random teaching. In addition, the social learning shows a clear gender effect with female participants being responsive to the robot's bids, while male teachers appear to be less receptive. This work shows how additional social cues in social machine learning can result in people offering better quality learning input to artificial systems, resulting in improved learning performance.

  7. Why Robots Should Be Social: Enhancing Machine Learning through Social Human-Robot Interaction.

    Directory of Open Access Journals (Sweden)

    Joachim de Greeff

    Full Text Available Social learning is a powerful method for cultural propagation of knowledge and skills relying on a complex interplay of learning strategies, social ecology and the human propensity for both learning and tutoring. Social learning has the potential to be an equally potent learning strategy for artificial systems and robots in particular. However, given the complexity and unstructured nature of social learning, implementing social machine learning proves to be a challenging problem. We study one particular aspect of social machine learning: that of offering social cues during the learning interaction. Specifically, we study whether people are sensitive to social cues offered by a learning robot, in a similar way to children's social bids for tutoring. We use a child-like social robot and a task in which the robot has to learn the meaning of words. For this a simple turn-based interaction is used, based on language games. Two conditions are tested: one in which the robot uses social means to invite a human teacher to provide information based on what the robot requires to fill gaps in its knowledge (i.e. expression of a learning preference); the other in which the robot does not provide social cues to communicate a learning preference. We observe that conveying a learning preference through the use of social cues results in better and faster learning by the robot. People also seem to form a "mental model" of the robot, tailoring the tutoring to the robot's performance as opposed to using simply random teaching. In addition, the social learning shows a clear gender effect with female participants being responsive to the robot's bids, while male teachers appear to be less receptive. This work shows how additional social cues in social machine learning can result in people offering better quality learning input to artificial systems, resulting in improved learning performance.

  8. Robot Tracer with Visual Camera

    Science.gov (United States)

    Jabbar Lubis, Abdul; Dwi Lestari, Yuyun; Dafitri, Haida; Azanuddin

    2017-12-01

    A robot is a versatile tool that can replace human work functions. The robot is a device that can be reprogrammed according to user needs. A wireless network for remote monitoring can be used to build a robot whose movement is monitored against a blueprint, so that the path the robot chooses can be tracked. This data is sent over the wireless network. For vision, the robot uses a high-resolution camera, making it easier for the operator to control the robot and see the surrounding circumstances.

  9. ROBOT TASK SCENE ANALYZER

    International Nuclear Information System (INIS)

    Hamel, William R.; Everett, Steven

    2000-01-01

    Environmental restoration and waste management (ER and WM) challenges in the United States Department of Energy (DOE), and around the world, involve radiation or other hazards which will necessitate the use of remote operations to protect human workers from dangerous exposures. Remote operations carry the implication of greater costs since remote work systems are inherently less productive than contact human work due to the inefficiencies/complexities of teleoperation. To reduce costs and improve quality, much attention has been focused on methods to improve the productivity of combined human operator/remote equipment systems; the achievements to date are modest at best. The most promising avenue in the near term is to supplement conventional remote work systems with robotic planning and control techniques borrowed from manufacturing and other domains where robotic automation has been used. Practical combinations of teleoperation and robotic control will yield telerobotic work systems that outperform currently available remote equipment. It is believed that practical telerobotic systems may increase remote work efficiencies significantly. Increases of 30% to 50% have been conservatively estimated for typical remote operations. It is important to recognize that the basic hardware and software features of most modern remote manipulation systems can readily accommodate the functionality required for telerobotics. Further, several of the additional system ingredients necessary to implement telerobotic control--machine vision, 3D object and workspace modeling, automatic tool path generation and collision-free trajectory planning--already exist.

  10. ROBOT TASK SCENE ANALYZER

    Energy Technology Data Exchange (ETDEWEB)

    William R. Hamel; Steven Everett

    2000-08-01

    Environmental restoration and waste management (ER and WM) challenges in the United States Department of Energy (DOE), and around the world, involve radiation or other hazards which will necessitate the use of remote operations to protect human workers from dangerous exposures. Remote operations carry the implication of greater costs since remote work systems are inherently less productive than contact human work due to the inefficiencies/complexities of teleoperation. To reduce costs and improve quality, much attention has been focused on methods to improve the productivity of combined human operator/remote equipment systems; the achievements to date are modest at best. The most promising avenue in the near term is to supplement conventional remote work systems with robotic planning and control techniques borrowed from manufacturing and other domains where robotic automation has been used. Practical combinations of teleoperation and robotic control will yield telerobotic work systems that outperform currently available remote equipment. It is believed that practical telerobotic systems may increase remote work efficiencies significantly. Increases of 30% to 50% have been conservatively estimated for typical remote operations. It is important to recognize that the basic hardware and software features of most modern remote manipulation systems can readily accommodate the functionality required for telerobotics. Further, several of the additional system ingredients necessary to implement telerobotic control--machine vision, 3D object and workspace modeling, automatic tool path generation and collision-free trajectory planning--already exist.

  11. Robotics for nuclear power plants

    International Nuclear Information System (INIS)

    Shiraiwa, Takanori; Watanabe, Atsuo; Miyasawa, Tatsuo

    1984-01-01

    Demand for robots in nuclear power plants has been increasing of late in order to reduce workers' exposure to radiation. In particular, owing to the progress of microelectronics and robotics, there is a growing desire for intelligent robots that can perform unstructured and complicated security work. Presented herein are the robots recently developed for nuclear power plants and a review of the present status of robotics. (author)

  12. Robotics for nuclear power plants

    Energy Technology Data Exchange (ETDEWEB)

    Shiraiwa, Takanori; Watanabe, Atsuo; Miyasawa, Tatsuo

    1984-10-01

    Demand for robots in nuclear power plants has been increasing of late in order to reduce workers' exposure to radiation. In particular, owing to the progress of microelectronics and robotics, there is a growing desire for intelligent robots that can perform unstructured and complicated security work. Presented herein are the robots recently developed for nuclear power plants and a review of the present status of robotics.

  13. Remote hardware-reconfigurable robotic camera

    Science.gov (United States)

    Arias-Estrada, Miguel; Torres-Huitzil, Cesar; Maya-Rueda, Selene E.

    2001-10-01

    In this work, a camera with integrated image processing capabilities is discussed. The camera is based on an imager coupled to an FPGA device (Field Programmable Gate Array) which contains an architecture for real-time computer vision low-level processing. The architecture can be reprogrammed remotely for application specific purposes. The system is intended for rapid modification and adaptation for inspection and recognition applications, with the flexibility of hardware and software reprogrammability. FPGA reconfiguration allows the same ease of upgrade in hardware as a software upgrade process. The camera is composed of a digital imager coupled to an FPGA device, two memory banks, and a microcontroller. The microcontroller is used for communication tasks and FPGA programming. The system implements a software architecture to handle multiple FPGA architectures in the device, and the possibility to download a software/hardware object from the host computer into its internal context memory. System advantages are: small size, low power consumption, and a library of hardware/software functionalities that can be exchanged during run time. The system has been validated with an edge detection and a motion processing architecture, which will be presented in the paper. Applications targeted are in robotics, mobile robotics, and vision based quality control.
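    The abstract names an edge detection architecture as one of the validated examples. A software rendering of the kind of low-level operation such an FPGA pipeline implements, a 3x3 Sobel edge detector with the cheap |gx| + |gy| magnitude favored in hardware, can be sketched as follows (illustrative only; the camera runs this class of filter in reconfigurable logic, not Python):

```python
def sobel_magnitude(img):
    """3x3 Sobel edge magnitude over a grayscale image (list of rows)."""
    h, w = len(img), len(img[0])
    gx_k = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]   # horizontal gradient kernel
    gy_k = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]   # vertical gradient kernel
    out = [[0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = sum(gx_k[j][i] * img[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            gy = sum(gy_k[j][i] * img[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            out[y][x] = abs(gx) + abs(gy)   # L1 magnitude: no multiplier/sqrt needed
    return out
```

    In hardware the two convolutions run in parallel on a streaming 3x3 window, which is what makes this kind of low-level vision practical at frame rate on a small FPGA.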

  14. How robotic-assisted surgery can decrease the risk of mucosal tear during Heller myotomy procedure?

    Science.gov (United States)

    Ballouhey, Quentin; Dib, Nabil; Binet, Aurélien; Carcauzon-Couvrat, Véronique; Clermidi, Pauline; Longis, Bernard; Lardy, Hubert; Languepin, Jane; Cros, Jérôme; Fourcade, Laurent

    2017-06-01

    We report the first description of robotic-assisted Heller myotomy in children. The purpose of this study was to improve the safety of Heller myotomy by demonstrating, in two adolescent patients, the contribution of the robot to the different steps of this procedure. Due to the robot's freedom of movement and three-dimensional vision, accuracy and safety improved at the different key points of the procedure, decreasing the risk of mucosal perforation associated with it.

  15. Human Robotic Systems (HRS): Controlling Robots over Time Delay Element

    Data.gov (United States)

    National Aeronautics and Space Administration — This element involves the development of software that enables easier commanding of a wide range of NASA relevant robots through the Robot Application Programming...

  16. Assessment of Laparoscopic Skills Performance: 2D Versus 3D Vision and Classic Instrument Versus New Hand-Held Robotic Device for Laparoscopy.

    Science.gov (United States)

    Leite, Mariana; Carvalho, Ana F; Costa, Patrício; Pereira, Ricardo; Moreira, Antonio; Rodrigues, Nuno; Laureano, Sara; Correia-Pinto, Jorge; Vilaça, João L; Leão, Pedro

    2016-02-01

    Laparoscopic surgery has undeniable advantages, such as reduced postoperative pain, smaller incisions, and faster recovery. However, to improve surgeons' performance, ergonomic adaptations of the laparoscopic instruments and introduction of robotic technology are needed. The aim of this study was to ascertain the influence of a new hand-held robotic device for laparoscopy (HHRDL) and 3D vision on the laparoscopic skills performance of 2 different groups, naïve and expert. Each participant performed 3 laparoscopic tasks (Peg transfer, Wire chaser, Knot) in 4 different ways. With random sequencing we assigned the execution order of the tasks based on the first type of visualization and laparoscopic instrument. Time to complete each laparoscopic task was recorded and analyzed with one-way analysis of variance. Eleven experts and 15 naïve participants were included. Three-dimensional video helped the naïve group perform better in Peg transfer, Wire chaser 2 hands, and Knot; the new device improved the execution of all laparoscopic tasks (P < .05). For the expert group, the 3D video system was beneficial in Peg transfer and Wire chaser 1 hand, and the robotic device in Peg transfer, Wire chaser 1 hand, and Wire chaser 2 hands (P < .05). The HHRDL helped the execution of difficult laparoscopic tasks, such as Knot, in the naïve group. Three-dimensional vision made laparoscopic performance easier for participants without laparoscopic experience, unlike for those with experience in laparoscopic procedures. © The Author(s) 2015.
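    The statistical test used here, one-way analysis of variance on task completion times, reduces to a single F statistic: the ratio of between-group to within-group mean squares. A minimal sketch (the time values below are made up for illustration, not the study's data):

```python
def one_way_anova_f(groups):
    """One-way ANOVA F statistic: between-group MS / within-group MS."""
    n = sum(len(g) for g in groups)                 # total observations
    k = len(groups)                                 # number of groups
    grand = sum(sum(g) for g in groups) / n         # grand mean
    ss_between = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups)
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
    return (ss_between / (k - 1)) / (ss_within / (n - k))
```

    A large F (relative to the F distribution with k-1 and n-k degrees of freedom) indicates that at least one condition's mean completion time differs from the others, which is what the P < .05 results above report.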

  17. Advances in robot kinematics

    CERN Document Server

    Khatib, Oussama

    2014-01-01

    The topics addressed in this book cover the whole range of kinematic analysis, synthesis and design and consider robotic systems possessing serial, parallel and cable driven mechanisms. The robotic systems range from being less than fully mobile to kinematically redundant to overconstrained.  The fifty-six contributions report the latest results in robot kinematics with emphasis on emerging areas such as design and control of humanoids or humanoid subsystems. The book is of interest to researchers wanting to bring their knowledge up to date regarding modern topics in one of the basic disciplines in robotics, which relates to the essential property of robots, the motion of mechanisms.

  18. Effect of cognitive biases on human-robot interaction: a case study of robot's misattribution

    OpenAIRE

    Biswas, Mriganka; Murray, John

    2014-01-01

    This paper presents a model for developing long-term human-robot interactions and social relationships based on the principle of 'human' cognitive biases applied to a robot. The aim of this work is to study how a robot influenced with human ‘misattribution’ helps to build better human-robot interactions than unbiased robots. The results presented in this paper suggest that it is important to know the effect of cognitive biases in human characteristics and interactions in order to better u...

  19. Robotic systems in spine surgery.

    Science.gov (United States)

    Onen, Mehmet Resid; Naderi, Sait

    2014-01-01

    Surgical robotic systems have been available for almost twenty years. The first surgical robotic systems were designed as supportive systems for laparoscopic approaches in general surgery (the first procedure was a cholecystectomy in 1987). The da Vinci Robotic System is the most common system used for robotic surgery today. This system is widely used in urology, gynecology and other surgical disciplines, and recently there have been initial reports of its use in spine surgery, for transoral access and anterior approaches for lumbar inter-body fusion interventions. SpineAssist, which is widely used in spine surgery, and Renaissance Robotic Systems, which are considered the next generation of robotic systems, are now FDA approved. These robotic systems are designed for use as guidance systems in spine instrumentation, cement augmentations and biopsies. The aim is to increase surgical accuracy while reducing the intra-operative exposure to harmful radiation to the patient and operating team personnel during the intervention. We offer a review of the published literature related to the use of robotic systems in spine surgery and provide information on using robotic systems.

  20. Robots in the Roses

    OpenAIRE

    2014-01-01

    2014-04 Robots in the Roses, a CRUSER-sponsored event. The 4th Annual Robots in the Roses provides a venue for faculty and NPS students to showcase unmanned systems research (current or completed) and to recruit NPS students to join in research on their projects. Posters, robots, vehicles, videos, and even just plain humans welcome! Families are welcome to attend Robots in the Roses, as we'll have a STEM activity for children to participate in.

  1. Robot Motion and Control 2011

    CERN Document Server

    2012-01-01

    Robot Motion Control 2011 presents very recent results in robot motion and control. Forty short papers have been chosen from those presented at the sixth International Workshop on Robot Motion and Control held in Poland in June 2011. The authors of these papers have been carefully selected and represent leading institutions in this field. The following recent developments are discussed: • Design of trajectory planning schemes for holonomic and nonholonomic systems with optimization of energy, torque limitations and other factors. • New control algorithms for industrial robots, nonholonomic systems and legged robots. • Different applications of robotic systems in industry and everyday life, like medicine, education, entertainment and others. • Multiagent systems consisting of mobile and flying robots with their applications The book is suitable for graduate students of automation and robotics, informatics and management, mechatronics, electronics and production engineering systems as well as scientists...

  2. Full autonomous microline trace robot

    Science.gov (United States)

    Yi, Deer; Lu, Si; Yan, Yingbai; Jin, Guofan

    2000-10-01

    Optoelectric inspection may find applications in robotic systems. In a micro robotic system, a smaller optoelectric inspection system is preferred. However, as the size of the robot is miniaturized, the number of optoelectric detectors becomes limited, and this lack of information makes it difficult for the micro robot to determine its status. In our lab, a micro line-trace robot has been designed which acts autonomously based on its optoelectric detection. It has been programmed to follow a black line printed on a white-colored ground. Besides the optoelectric inspection, the logical algorithm in the microprocessor is also important. In this paper, we propose a simple logical algorithm to realize the robot's intelligence, which is based on an AT89C2051 microcontroller that controls its movement. The technical details of the micro robot are as follows: dimensions: 30mm*25mm*35mm; velocity: 60mm/s.
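The abstract does not reproduce the firmware, but the control idea, steering from a handful of binary line detectors, can be sketched as follows. The function name, command strings, and two-detector layout are illustrative assumptions, not taken from the AT89C2051 code.

```python
def steer(left_on_line: bool, right_on_line: bool) -> str:
    """Map two optoelectric detector states (True = dark line seen)
    to a motion command for a line-trace robot."""
    if left_on_line and right_on_line:
        return "forward"      # line centered under both detectors
    if left_on_line:
        return "turn_left"    # line drifting left; correct toward it
    if right_on_line:
        return "turn_right"   # line drifting right
    return "search"           # line lost; rotate until reacquired
```

With more detectors the same table-lookup style scales naturally, which is why such logic fits comfortably in a small microcontroller.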

  3. Rehabilitation robotics.

    Science.gov (United States)

    Krebs, H I; Volpe, B T

    2013-01-01

    This chapter focuses on rehabilitation robotics which can be used to augment the clinician's toolbox in order to deliver meaningful restorative therapy for an aging population, as well as on advances in orthotics to augment an individual's functional abilities beyond neurorestoration potential. The interest in rehabilitation robotics and orthotics is increasing steadily with marked growth in the last 10 years. This growth is understandable in view of the increased demand for caregivers and rehabilitation services escalating apace with the graying of the population. We provide an overview on improving function in people with a weak limb due to a neurological disorder who cannot properly control it to interact with the environment (orthotics); we then focus on tools to assist the clinician in promoting rehabilitation of an individual so that s/he can interact with the environment unassisted (rehabilitation robotics). We present a few clinical results occurring immediately poststroke as well as during the chronic phase that demonstrate superior gains for the upper extremity when employing rehabilitation robotics instead of usual care. These include the landmark VA-ROBOTICS multisite, randomized clinical study which demonstrates clinical gains for chronic stroke that go beyond usual care at no additional cost.

  4. Robots of the Future

    Indian Academy of Sciences (India)

    two main types of robots: industrial robots, and autonomous robots. .... position); it also has a virtual CPU with two stacks and three registers that hold 32-bit strings. Each item ..... just like we can aggregate images, text, and information from.

  5. ENVIRONMENT INDEPENDENT DIRECTIONAL GESTURE RECOGNITION TECHNIQUE FOR ROBOTS USING MULTIPLE DATA FUSION

    Directory of Open Access Journals (Sweden)

    Kishore Abishek

    2013-10-01

    Full Text Available A technique is presented here for directional gesture recognition by robots. The usual technique employed now uses camera vision and image processing. One major disadvantage of that approach is the environmental constraint: machine vision systems have many lighting constraints, so the technique can only be used in a conditioned environment where the lighting is compatible with the camera system. The technique presented here is designed to work in any environment. It does not employ machine vision; instead it utilizes a set of sensors fixed on the hands of a human to identify the direction in which the hand is pointing. This technique uses a cylindrical coordinate system to find the direction precisely. A programmed computing block in the robot identifies the direction accurately within the given range.
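As a rough illustration of the cylindrical-coordinate idea, a pointing vector reconstructed from hand-mounted sensors could be converted like this. The function name and axis conventions are hypothetical, not taken from the paper.

```python
import math

def pointing_direction(x: float, y: float, z: float):
    """Convert a Cartesian pointing vector (say, wrist to fingertip,
    reconstructed from hand-mounted sensors) into cylindrical
    coordinates (r, theta, z); theta gives the horizontal bearing."""
    r = math.hypot(x, y)                     # radial distance in the plane
    theta = math.degrees(math.atan2(y, x))   # azimuth angle in degrees
    return r, theta, z
```

A robot receiving theta directly can resolve the pointed-at direction without any camera, which is the paper's central claim of environment independence.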

  6. A Human–Robot Interaction Perspective on Assistive and Rehabilitation Robotics

    Directory of Open Access Journals (Sweden)

    Philipp Beckerle

    2017-05-01

    Full Text Available Assistive and rehabilitation devices are a promising and challenging field of recent robotics research. Motivated by societal needs such as aging populations, such devices can support motor functionality and subject training. The design, control, sensing, and assessment of the devices become more sophisticated due to a human in the loop. This paper gives a human–robot interaction perspective on current issues and opportunities in the field. On the topic of control and machine learning, approaches that support but do not distract subjects are reviewed. Options to provide sensory user feedback that are currently missing from robotic devices are outlined. Parallels between device acceptance and affective computing are made. Furthermore, requirements for functional assessment protocols that relate to real-world tasks are discussed. In all topic areas, the design of human-oriented frameworks and methods is dominated by challenges related to the close interaction between the human and robotic device. This paper discusses the aforementioned aspects in order to open up new perspectives for future robotic solutions.

  7. A Human–Robot Interaction Perspective on Assistive and Rehabilitation Robotics

    Science.gov (United States)

    Beckerle, Philipp; Salvietti, Gionata; Unal, Ramazan; Prattichizzo, Domenico; Rossi, Simone; Castellini, Claudio; Hirche, Sandra; Endo, Satoshi; Amor, Heni Ben; Ciocarlie, Matei; Mastrogiovanni, Fulvio; Argall, Brenna D.; Bianchi, Matteo

    2017-01-01

    Assistive and rehabilitation devices are a promising and challenging field of recent robotics research. Motivated by societal needs such as aging populations, such devices can support motor functionality and subject training. The design, control, sensing, and assessment of the devices become more sophisticated due to a human in the loop. This paper gives a human–robot interaction perspective on current issues and opportunities in the field. On the topic of control and machine learning, approaches that support but do not distract subjects are reviewed. Options to provide sensory user feedback that are currently missing from robotic devices are outlined. Parallels between device acceptance and affective computing are made. Furthermore, requirements for functional assessment protocols that relate to real-world tasks are discussed. In all topic areas, the design of human-oriented frameworks and methods is dominated by challenges related to the close interaction between the human and robotic device. This paper discusses the aforementioned aspects in order to open up new perspectives for future robotic solutions. PMID:28588473

  8. Advances in Robotics and Virtual Reality

    CERN Document Server

    Hassanien, Aboul

    2012-01-01

    Going beyond human knowledge and reach, robotics is strongly involved in tackling the challenges of new emerging multidisciplinary fields. Together with humans, robots are busy exploring and working on a new generation of ideas and problems whose solution is otherwise impossible to find. The future is near when robots will sense, smell and touch people and their lives. Behind this practical aspect of human-robotics lies half a century of robotics research, which transformed robotics into a modern science. Advances in Robotics and Virtual Reality is a compilation of emerging application areas of robotics. The book covers the role of robotics in medicine and space exploration, and also explains the role of virtual reality as a non-destructive test bed which constitutes a premise of further advances towards new challenges in robotics. This book, edited by two famous scientists with the support of an outstanding team of fifteen authors, is a well-suited reference for robotics researchers and scholars from related ...

  9. Color-based scale-invariant feature detection applied in robot vision

    Science.gov (United States)

    Gao, Jian; Huang, Xinhan; Peng, Gang; Wang, Min; Li, Xinde

    2007-11-01

    Scale-invariant feature detection methods require substantial computation yet sometimes still fail to meet the real-time demands of robot vision. To solve this problem, a quick method for detecting interest points is presented. To decrease computation time, the detector selects as interest points those whose scale-normalized Laplacian values are local extrema in the nonholonomic pyramid scale space. The descriptor is built from several subregions, whose width is proportional to the scale factor, and the coordinates of the descriptor are rotated in relation to the interest point orientation, just like the SIFT descriptor. The eigenvector is computed in the original color image, and the mean values of the normalized colors g and b in each subregion are chosen as the factors of the eigenvector. Compared with the SIFT descriptor, this descriptor's dimension is evidently reduced, which simplifies the point matching process. The performance of the method is analyzed theoretically and the experimental results confirm its validity.
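A minimal sketch of the descriptor's color part, assuming an HxWx3 RGB patch around the interest point and a square subregion grid; the paper's exact subregion layout, rotation handling, and scale weighting are omitted here.

```python
import numpy as np

def color_descriptor(patch: np.ndarray, grid: int = 4) -> np.ndarray:
    """Build a descriptor from the mean normalized g and b values in
    each subregion of an HxWx3 RGB patch. The 4x4 grid is an
    illustrative assumption, giving 2 * grid**2 = 32 dimensions."""
    h, w, _ = patch.shape
    rgb = patch.astype(float)
    s = rgb.sum(axis=2, keepdims=True) + 1e-9    # avoid divide-by-zero
    norm = rgb / s                               # chromaticity (r, g, b)
    feats = []
    for i in range(grid):
        for j in range(grid):
            sub = norm[i*h//grid:(i+1)*h//grid, j*w//grid:(j+1)*w//grid]
            feats.append(sub[..., 1].mean())     # mean normalized g
            feats.append(sub[..., 2].mean())     # mean normalized b
    return np.array(feats)
```

Because normalized g and b sum with normalized r to 1, the two channels carry the full chromaticity information while discarding intensity, which is what makes the descriptor cheap and lighting-tolerant.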

  10. The development of advanced robotics for the nuclear industry -The development of advanced robotic technology-

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Jong Min; Lee, Yong Bum; Park, Soon Yong; Cho, Jae Wan; Lee, Nam Hoh; Kim, Woong Kee; Moon, Byung Soo; Kim, Seung Hoh; Kim, Chang Heui; Kim, Byung Soo; Hwang, Suk Yong; Lee, Yung Kwang; Moon, Je Sun [Korea Atomic Energy Research Institute, Taejon (Korea, Republic of)

    1995-07-01

    The main activity this year was to develop both a remote handling system and telepresence techniques, which can be used to alleviate the burden on people working in extremely hazardous areas. In the robot vision technology part, the KAERI-PSM system, a stereo imaging camera module, a stereo BOOM/MOLLY unit, and a stereo HMD unit were developed. An autostereo TV system, which falls under the category of next-generation stereo imaging technology, has also been studied. The performance of the KAERI-PSM system for remote handling tasks was evaluated and compared with other stereo imaging systems as well as a general TV imaging system. The results show that the KAERI-PSM system is superior to the other stereo imaging systems in terms of remote operation speed and accuracy. An automatic recognition algorithm for instrument panels was studied and a passive visual target tracking system was developed. A 5-DOF camera serving unit has been designed and fabricated; it is designed to function like the human eye. In the sensing and intelligent control research part, a thermal image database system for thermal image analysis was developed and a remote temperature monitoring technique using fiber optics was investigated. A two-dimensional radioactivity sensor head for a radiation profile monitoring system was also designed. In the intelligent robotics part, a mobile robot was fabricated and its autonomous navigation using fuzzy control logic was studied. The remote handling and telepresence techniques developed in this project can be applied to a nozzle-dam installation/removal robot system, a reactor inspection unit, underwater nuclear pellet inspection and pipe abnormality inspection. These techniques will also find applications in general industry, medical science, and the military as well as nuclear facilities. 203 figs, 12 tabs, 72 refs. (Author).

  11. Grounding Robot Autonomy in Emotion and Self-awareness

    Science.gov (United States)

    Sanz, Ricardo; Hernández, Carlos; Hernando, Adolfo; Gómez, Jaime; Bermejo, Julita

    Much is being done in an attempt to transfer emotional mechanisms from reverse-engineered biology into social robots. There are two basic approaches: the imitative display of emotion (e.g. to make robots appear more human-like) and the provision of architectures with intrinsic emotion (in the hope of enhancing behavioral aspects). This paper focuses on the second approach, describing a core vision regarding the integration of cognitive, emotional and autonomic aspects in social robot systems. This vision has evolved from the efforts to consolidate the models extracted from rat emotion research and their implementation in technical use cases, based on a general systemic analysis in the framework of the ICEA and C3 projects. The approach aims at generality, seeking universal theories of integrated autonomic, emotional and cognitive behavior. The proposed conceptualizations and architectural principles are then captured in a theoretical framework: ASys, the Autonomous Systems Framework.

  12. Can Social Robots Qualify for Moral Consideration? Reframing the Question about Robot Rights

    Directory of Open Access Journals (Sweden)

    Herman T. Tavani

    2018-03-01

    Full Text Available A controversial question that has been hotly debated in the emerging field of robot ethics is whether robots should be granted rights. Yet, a review of the recent literature in that field suggests that this seemingly straightforward question is far from clear and unambiguous. For example, those who favor granting rights to robots have not always been clear as to which kinds of robots should (or should not) be eligible; nor have they been consistent with regard to which kinds of rights—civil, legal, moral, etc.—should be granted to qualifying robots. Also, there has been considerable disagreement about which essential criterion, or cluster of criteria, a robot would need to satisfy to be eligible for rights, and there is ongoing disagreement as to whether a robot must satisfy the conditions for (moral) agency to qualify either for rights or for (at least some level of) moral consideration. One aim of this paper is to show how the current debate about whether to grant rights to robots would benefit from an analysis and clarification of some key concepts and assumptions underlying that question. My principal objective, however, is to show why we should reframe that question by asking instead whether some kinds of social robots qualify for moral consideration as moral patients. In arguing that the answer to this question is “yes,” I draw from some insights in the writings of Hans Jonas to defend my position.

  13. Multi-Robot Assembly Strategies and Metrics

    Science.gov (United States)

    MARVEL, JEREMY A.; BOSTELMAN, ROGER; FALCO, JOE

    2018-01-01

    We present a survey of multi-robot assembly applications and methods and describe trends and general insights into the multi-robot assembly problem for industrial applications. We focus on fixtureless assembly strategies featuring two or more robotic systems. Such robotic systems include industrial robot arms, dexterous robotic hands, and autonomous mobile platforms, such as automated guided vehicles. In this survey, we identify the types of assemblies that are enabled by utilizing multiple robots, the algorithms that synchronize the motions of the robots to complete the assembly operations, and the metrics used to assess the quality and performance of the assemblies. PMID:29497234

  14. Multi-Robot Assembly Strategies and Metrics.

    Science.gov (United States)

    Marvel, Jeremy A; Bostelman, Roger; Falco, Joe

    2018-02-01

    We present a survey of multi-robot assembly applications and methods and describe trends and general insights into the multi-robot assembly problem for industrial applications. We focus on fixtureless assembly strategies featuring two or more robotic systems. Such robotic systems include industrial robot arms, dexterous robotic hands, and autonomous mobile platforms, such as automated guided vehicles. In this survey, we identify the types of assemblies that are enabled by utilizing multiple robots, the algorithms that synchronize the motions of the robots to complete the assembly operations, and the metrics used to assess the quality and performance of the assemblies.

  15. Stereo vision with distance and gradient recognition

    Science.gov (United States)

    Kim, Soo-Hyun; Kang, Suk-Bum; Yang, Tae-Kyu

    2007-12-01

    Robot vision technology is needed for stable walking, object recognition and movement to a target spot. With sensors that use infrared rays and ultrasound, a robot can cope with urgent or dangerous situations, but stereo vision of three-dimensional space gives a robot much more powerful artificial intelligence. In this paper we consider stereo vision for the stable and correct movement of a biped robot. When a robot confronts an inclined plane or steps, particular algorithms are needed to proceed without failure. This study developed a recognition algorithm for the distance and gradient of the environment using a stereo matching process.
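The distance part of such an algorithm ultimately rests on the standard stereo triangulation relation Z = f·B/d. A minimal sketch follows, assuming a pinhole model and rectified cameras; this is an illustration of the textbook relation, not the authors' implementation.

```python
def depth_from_disparity(focal_px: float, baseline_m: float,
                         disparity_px: float) -> float:
    """Depth Z (meters) of a matched point from its stereo disparity,
    via Z = f * B / d for rectified cameras: f in pixels, baseline B
    in meters, disparity d in pixels."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px
```

A gradient estimate, as for the inclined planes and steps the paper mentions, then follows from the depth difference between vertically adjacent matched points.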

  16. Present and Future of Nuclear Robotics

    International Nuclear Information System (INIS)

    Bielza Ciaz-Caneja, M.; Carmena Servet, P.; Gomez Santamaria, J.; Gonzalez Fernandez, J.; Izquierdo Mendoza, J.A.; Linares Pintos, F.; Martinez Gonzalez; Muntion Ruesgas, A.; Serna Oliveira, M.A.

    1997-01-01

    New technologies have increased the use of robotic systems in fields other than industry. As a result, researchers and developers are focusing their interest on concepts like Intelligent Robotics and Robotics in Services. This paper describes the use of robotics in nuclear facilities, where robots can be used to protect workers in high-radiation areas, to reduce total worker exposure and to minimise downtime. First, the structure of robot systems is introduced and the benefits of nuclear robots are presented. Next, the paper describes some specific nuclear applications and the families of nuclear robots present in the market. After that, a section is devoted to Nuclear Robotics in Spain, with emphasis on some of the developments being carried out at present. Finally, some reflections about the future of robots in the Nuclear Industry are offered. (Author) 18 refs

  17. Robots as Confederates

    DEFF Research Database (Denmark)

    Fischer, Kerstin

    2016-01-01

    This paper addresses the use of robots in experimental research for the study of human language, human interaction, and human nature. It is argued that robots make excellent confederates that can be completely controlled, yet which engage human participants in interactions that allow us to study numerous linguistic and psychological variables in isolation in an ecologically valid way. Robots thus combine the advantages of observational studies and of controlled experimentation.

  18. Developing new behavior strategies of robot soccer team SjF TUKE Robotics

    Directory of Open Access Journals (Sweden)

    Mikuláš Hajduk

    2016-09-01

    Full Text Available There are many types of robotic soccer approaches at present. SjF TUKE Robotics, the team that won the 2010 robot soccer world tournament in the MiroSot category, uses a multiagent system approach: one main agent (the master) and five player agents, represented by robots. The article offers the code programmer a view of how to create new behavior strategies by writing new code for the master, together with a methodology for preparing and creating it following some rules.

  19. Robotic Label Applicator: Design, Development and Visual Servoing Based Control

    Directory of Open Access Journals (Sweden)

    Lin Chyi-Yeu

    2016-01-01

    Full Text Available The use of robotic arms and computer vision in manufacturing and assembly processes is attracting growing interest as flexible customization takes priority over mass production in frontier industry practice. In this paper an innovative label applicator, an end-of-arm tool (EOAT) capable of dispensing and applying label stickers of various dimensions to a product, is designed, fabricated and tested. The system incorporates a label dispenser-applicator and an eye-in-hand camera system attached to a 6-DOF robot arm, and can autonomously apply a label sticker to the target position on a randomly placed product. Combining advantages from different knowledge bases, mechanism design and vision-based automatic control, the system offers distinctive efficiency as well as flexibility to changes in the manufacturing and assembly process, with time and cost savings.
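Visual servoing of the kind described typically drives the image-plane error between an observed feature and its target toward zero. Below is a deliberately simplified proportional sketch; a real controller would map the error through the camera interaction matrix and robot Jacobian, and all names here are illustrative rather than taken from the paper.

```python
def ibvs_step(current_px, target_px, gain=0.5):
    """One image-based visual-servoing step: proportional control on
    the pixel error between an observed feature (e.g. a label corner
    seen by the eye-in-hand camera) and its desired image location.
    Returns a commanded image-plane velocity (vx, vy)."""
    ex = target_px[0] - current_px[0]   # horizontal pixel error
    ey = target_px[1] - current_px[1]   # vertical pixel error
    return gain * ex, gain * ey
```

Iterating this step each camera frame makes the arm converge on the target placement even when the product is randomly positioned, which is the flexibility the paper emphasizes.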

  20. Learning robotics using Python

    CERN Document Server

    Joseph, Lentin

    2015-01-01

    If you are an engineer, a researcher, or a hobbyist, and you are interested in robotics and want to build your own robot, this book is for you. Readers are assumed to be new to robotics but should have experience with Python.