WorldWideScience

Sample records for active robot vision

  1. Robot vision for nuclear advanced robot

    International Nuclear Information System (INIS)

    Nakayama, Ryoichi; Okano, Hideharu; Kuno, Yoshinori; Miyazawa, Tatsuo; Shimada, Hideo; Okada, Satoshi; Kawamura, Astuo

    1991-01-01

This paper describes a robot vision and operation system for a nuclear advanced robot. The robot vision system consists of robot position detection, obstacle detection and object recognition. With these vision techniques, a mobile robot can plan a path and move autonomously along it. The authors implemented the above robot vision system on the 'Advanced Robot for Nuclear Power Plant' and tested it in an environment mocked up as nuclear power plant facilities. Since the operation system for this robot consists of an operator's console and a large stereo monitor, it can be easily operated by one person. Experimental tests were made using the Advanced Robot (nuclear robot). Results indicate that the proposed operation system is very useful and can be operated by a single person. (author)

  2. Active Vision for Sociable Robots

    National Research Council Canada - National Science Library

    Breazeal, Cynthia; Edsinger, Aaron; Fitzpatrick, Paul; Scassellati, Brian

    2001-01-01

    .... In humanoid robotic systems, or in any animate vision system that interacts with people, social dynamics provide additional levels of constraint and provide additional opportunities for processing economy...

  3. Robot Vision Library

    Science.gov (United States)

    Howard, Andrew B.; Ansar, Adnan I.; Litwin, Todd E.; Goldberg, Steven B.

    2009-01-01

The JPL Robot Vision Library (JPLV) provides real-time robot vision algorithms for developers who are not vision specialists. The package includes algorithms for stereo ranging, visual odometry and unsurveyed camera calibration, and has unique support for very wide-angle lenses.

  4. Dense range map reconstruction from a versatile robotic sensor system with an active trinocular vision and a passive binocular vision.

    Science.gov (United States)

    Kim, Min Young; Lee, Hyunkee; Cho, Hyungsuck

    2008-04-10

One major research issue associated with 3D perception by robotic systems is the creation of efficient sensor systems that can generate dense range maps reliably. A visual sensor system for robotic applications is developed that is inherently equipped with two types of sensor, an active trinocular vision and a passive stereo vision. Unlike conventional active vision systems, which use a large number of images with variations of projected patterns for dense range map acquisition, or conventional passive vision systems, which work well only in specific environments with sufficient feature information, a cooperative bidirectional sensor fusion method for this visual sensor system enables us to acquire a reliable dense range map using active and passive information simultaneously. The fusion algorithms are composed of two parts, one in which the passive stereo vision helps the active vision and the other in which the active trinocular vision helps the passive one. The first part matches the laser patterns in stereo laser images with the help of intensity images; the second part utilizes an information fusion technique using the dynamic programming method, in which image regions between laser patterns are matched pixel-by-pixel with the help of the fusion results obtained in the first part. To determine how the proposed sensor system and fusion algorithms can work in real applications, the sensor system is implemented on a robotic system and the proposed algorithms are applied. A series of experimental tests is performed for a variety of configurations of robot and environments. The performance of the sensor system is discussed in detail.
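
To make the second fusion stage concrete, here is a minimal sketch of pixel-by-pixel matching along one scanline with dynamic programming; it is a generic 1D disparity optimization, not the authors' algorithm, and all arrays and parameters are illustrative.

```python
import numpy as np

def scanline_dp_stereo(left_row, right_row, max_disp=16, smooth=0.1):
    """Match one image row pixel-by-pixel with a 1D Viterbi over disparities.

    A toy stand-in for the fusion stage in which regions between laser
    patterns are matched by dynamic programming.
    """
    n = len(left_row)
    # Data cost: absolute intensity difference for each (pixel, disparity).
    cost = np.full((n, max_disp), np.inf)
    for d in range(max_disp):
        cost[d:, d] = np.abs(left_row[d:] - right_row[: n - d])

    # Forward pass: accumulate cost with a smoothness penalty between
    # neighbouring pixels' disparities.
    acc = cost.copy()
    back = np.zeros((n, max_disp), dtype=int)
    for x in range(1, n):
        for d in range(max_disp):
            trans = acc[x - 1] + smooth * np.abs(np.arange(max_disp) - d)
            back[x, d] = int(np.argmin(trans))
            acc[x, d] = cost[x, d] + trans[back[x, d]]

    # Backward pass: recover the minimum-cost disparity path.
    disp = np.zeros(n, dtype=int)
    disp[-1] = int(np.argmin(acc[-1]))
    for x in range(n - 2, -1, -1):
        disp[x] = back[x + 1, disp[x + 1]]
    return disp

left = np.array([0, 0, 10, 10, 10, 0, 0, 0], dtype=float)
right = np.array([0, 10, 10, 10, 0, 0, 0, 0], dtype=float)  # shifted by 1 px
print(scanline_dp_stereo(left, right, max_disp=4))  # -> [0 1 1 1 1 1 1 1]
```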

  5. Advanced robot vision system for nuclear power plants

    International Nuclear Information System (INIS)

    Onoguchi, Kazunori; Kawamura, Atsuro; Nakayama, Ryoichi.

    1991-01-01

    We have developed a robot vision system for advanced robots used in nuclear power plants, under a contract with the Agency of Industrial Science and Technology of the Ministry of International Trade and Industry. This work is part of the large-scale 'advanced robot technology' project. The robot vision system consists of self-location measurement, obstacle detection, and object recognition subsystems, which are activated by a total control subsystem. This paper presents details of these subsystems and the experimental results obtained. (author)

  6. Vision servo of industrial robot: A review

    Science.gov (United States)

    Zhang, Yujin

    2018-04-01

Robot technology has been applied to various areas of production and daily life. With the continuous development of robot applications, the requirements placed on robots are also increasing. To give robots better perception, vision sensors have been widely used in industrial robots. In this paper, application directions of industrial robots are reviewed. The development, classification and application of robot vision servo technology are discussed, and the development prospects of industrial robot vision servo technology are outlined.

  7. Robot vision

    International Nuclear Information System (INIS)

    Hall, E.L.

    1984-01-01

Almost all industrial robots use internal sensors such as shaft encoders which measure rotary position, or tachometers which measure velocity, to control their motions. Most controllers also provide interface capabilities so that signals from conveyors, machine tools, and the robot itself may be used to accomplish a task. However, advanced external sensors, such as visual sensors, can provide a much greater degree of adaptability for robot control as well as add automatic inspection capabilities to the industrial robot. Visual and other sensors are now being used in fundamental operations such as material processing with immediate inspection, material handling with adaptation, arc welding, and complex assembly tasks. A new industry of robot vision has emerged. The application of these systems is an area of great potential.

  8. Applications of AI, machine vision and robotics

    CERN Document Server

    Boyer, Kim; Bunke, H

    1995-01-01

This text features a broad array of research efforts in computer vision including low level processing, perceptual organization, object recognition and active vision. The volume's nine papers specifically report on topics such as sensor confidence, low level feature extraction schemes, non-parametric multi-scale curve smoothing, integration of geometric and non-geometric attributes for object recognition, design criteria for a four degree-of-freedom robot head, a real-time vision system based on control of visual attention and a behavior-based active eye vision system. The scope of the book pr...

  9. Robotic vision system for random bin picking with dual-arm robots

    Directory of Open Access Journals (Sweden)

    Kang Sangseung

    2016-01-01

    Full Text Available Random bin picking is one of the most challenging industrial robotics applications available. It constitutes a complicated interaction between the vision system, robot, and control system. For a packaging operation requiring a pick-and-place task, the robot system utilized should be able to perform certain functions for recognizing the applicable target object from randomized objects in a bin. In this paper, we introduce a robotic vision system for bin picking using industrial dual-arm robots. The proposed system recognizes the best object from randomized target candidates based on stereo vision, and estimates the position and orientation of the object. It then sends the result to the robot control system. The system was developed for use in the packaging process of cell phone accessories using dual-arm robots.

  10. Research on robot navigation vision sensor based on grating projection stereo vision

    Science.gov (United States)

    Zhang, Xiaoling; Luo, Yinsheng; Lin, Yuchi; Zhu, Lei

    2016-10-01

A novel visual navigation method based on grating projection stereo vision for mobile robots in dark environments is proposed. The method combines grating projection profilometry of plane structured light with stereo vision technology. It can be employed to realize obstacle detection, SLAM (Simultaneous Localization and Mapping) and visual odometry for mobile robot navigation in dark environments, without the image matching of stereo vision technology and without the phase unwrapping of grating projection profilometry. First, we study the new vision sensor theoretically and build geometric and mathematical models of the grating projection stereo vision system. Second, the computational method for the 3D coordinates of space obstacles in the robot's visual field is studied, and the obstacles in the field are then located accurately. The results of simulation experiments and analysis show that this research helps address the current autonomous navigation problem of mobile robots in dark environments, and provides a theoretical basis and an exploration direction for further study on the navigation of space exploration robots in dark, GPS-denied environments.
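
The core geometric step of structured-light ranging, locating a point by intersecting a camera ray with a calibrated light plane, can be sketched as follows; the intrinsic matrix, plane parameters and pixel below are illustrative, not values from the paper.

```python
import numpy as np

def intersect_ray_with_light_plane(pixel, K, plane_n, plane_d):
    """Back-project a camera pixel onto a known projector light plane.

    The plane (n . X = d, in camera coordinates) would come from
    calibrating the grating projector; K is the camera intrinsic matrix.
    """
    u, v = pixel
    # Ray direction through the pixel in camera coordinates.
    ray = np.linalg.inv(K) @ np.array([u, v, 1.0])
    # Solve n . (t * ray) = d for the ray parameter t.
    t = plane_d / (plane_n @ ray)
    return t * ray  # 3D point on the illuminated stripe

K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])
plane_n = np.array([0.0, np.sin(0.1), np.cos(0.1)])  # a tilted light plane
plane_d = 0.5
print(intersect_ray_with_light_plane((350, 260), K, plane_n, plane_d))
```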

  11. Active vision via extremum seeking for robots in unstructured environments : Applications in object recognition and manipulation

    NARCIS (Netherlands)

    Calli, B.; Caarls, W.; Wisse, M.; Jonker, P.P.

    2018-01-01

    In this paper, a novel active vision strategy is proposed for optimizing the viewpoint of a robot's vision sensor for a given success criterion. The strategy is based on extremum seeking control (ESC), which introduces two main advantages: 1) Our approach is model free: It does not require an
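
For readers unfamiliar with extremum seeking control, the sketch below shows the standard sinusoidal-perturbation loop on a toy objective; gains, the dither frequency and the objective are all illustrative, and the authors' actual controller is not reproduced here.

```python
import math

def extremum_seeking(f, theta0, a=0.1, w=5.0, k=0.8, dt=0.01, steps=4000):
    """Gradient-free extremum seeking: perturb the parameter sinusoidally,
    demodulate the measured objective, and integrate the estimate upward.
    A generic ESC loop with illustrative gains (no high-pass filter)."""
    theta = theta0
    for i in range(steps):
        t = i * dt
        probe = theta + a * math.sin(w * t)   # dithered viewpoint parameter
        y = f(probe)                          # success criterion measurement
        grad_est = y * math.sin(w * t)        # demodulation ~ local gradient
        theta += k * grad_est * dt            # climb the objective
    return theta

# Toy objective with a maximum at theta = 2 (e.g., best camera pan angle).
print(extremum_seeking(lambda th: -(th - 2.0) ** 2, theta0=0.0))
```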

  12. Robotics, vision and control fundamental algorithms in Matlab

    CERN Document Server

    Corke, Peter

    2017-01-01

    Robotic vision, the combination of robotics and computer vision, involves the application of computer algorithms to data acquired from sensors. The research community has developed a large body of such algorithms but for a newcomer to the field this can be quite daunting. For over 20 years the author has maintained two open-source MATLAB® Toolboxes, one for robotics and one for vision. They provide implementations of many important algorithms and allow users to work with real problems, not just trivial examples. This book makes the fundamental algorithms of robotics, vision and control accessible to all. It weaves together theory, algorithms and examples in a narrative that covers robotics and computer vision separately and together. Using the latest versions of the Toolboxes the author shows how complex problems can be decomposed and solved using just a few simple lines of code. The topics covered are guided by real problems observed by the author over many years as a practitioner of both robotics and compu...

  13. A Practical Solution Using A New Approach To Robot Vision

    Science.gov (United States)

    Hudson, David L.

    1984-01-01

Up to now, robot vision systems have been designed to serve both application development and operational needs in inspection, assembly and material handling. This universal approach to robot vision is too costly for many practical applications. A new industrial vision system separates the function of application program development from on-line operation. A Vision Development System (VDS) is equipped with facilities designed to simplify and accelerate the application program development process. A complementary but lower cost Target Application System (TASK) runs the application program developed with the VDS. This concept is presented in the context of an actual robot vision application that improves inspection and assembly for a manufacturer of electronic terminal keyboards. Applications developed with a VDS experience lower development cost when compared with conventional vision systems. Since the TASK processor is not burdened with development tools, it can be installed at a lower cost than comparable "universal" vision systems that are intended to be used for both development and on-line operation. The VDS/TASK approach opens more industrial applications to robot vision that previously were not practical because of the high cost of vision systems. Although robot vision is a new technology, it has been applied successfully to a variety of industrial needs in inspection, manufacturing, and material handling. New developments in robot vision technology are creating practical, cost effective solutions for a variety of industrial needs. A year or two ago, researchers and robot manufacturers interested in implementing a robot vision application could take one of two approaches. The first approach was to purchase all the necessary vision components from various sources. That meant buying an image processor from one company, a camera from another and lenses and light sources from yet others. The user then had to assemble the pieces, and in most instances he had to write...

  14. VIP - A Framework-Based Approach to Robot Vision

    Directory of Open Access Journals (Sweden)

    Gerd Mayer

    2008-11-01

Full Text Available For robot perception, video cameras are very valuable sensors, but the computer vision methods applied to extract information from camera images are usually computationally expensive. Integrating computer vision methods into a robot control architecture requires balancing the exploitation of camera images with the need to preserve reactivity and robustness. We claim that better software support is needed in order to facilitate and simplify the application of computer vision and image processing methods on autonomous mobile robots. In particular, such support must address a simplified specification of image processing architectures, control and synchronization issues of image processing steps, and the integration of the image processing machinery into the overall robot control architecture. This paper introduces the video image processing (VIP) framework, a software framework for multithreaded control flow modeling in robot vision.
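
The sketch below illustrates the basic pattern such a framework addresses, decoupled image-processing stages with explicit synchronization, using plain Python threads and bounded queues; the stage contents and buffer sizes are invented, and this is not the VIP framework itself.

```python
import queue
import threading

# Hypothetical two-stage pipeline: a grab stage feeds a processing stage
# through a bounded queue, so a slow vision step never blocks acquisition.
frames = queue.Queue(maxsize=2)   # small buffer keeps the data fresh
results = queue.Queue()

def grabber(n_frames):
    for i in range(n_frames):
        frame = {"id": i, "pixels": [0] * 16}   # stand-in for a camera image
        try:
            frames.put_nowait(frame)            # drop frames if the pipe is full
        except queue.Full:
            pass
    frames.put(None)                            # sentinel: end of stream

def processor():
    while True:
        frame = frames.get()
        if frame is None:
            break
        results.put(("processed", frame["id"]))  # stand-in for feature extraction

t1 = threading.Thread(target=grabber, args=(10,))
t2 = threading.Thread(target=processor)
t1.start(); t2.start(); t1.join(); t2.join()
while not results.empty():
    print(results.get())
```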

  15. VIP - A Framework-Based Approach to Robot Vision

    Directory of Open Access Journals (Sweden)

    Hans Utz

    2006-03-01

Full Text Available For robot perception, video cameras are very valuable sensors, but the computer vision methods applied to extract information from camera images are usually computationally expensive. Integrating computer vision methods into a robot control architecture requires balancing the exploitation of camera images with the need to preserve reactivity and robustness. We claim that better software support is needed in order to facilitate and simplify the application of computer vision and image processing methods on autonomous mobile robots. In particular, such support must address a simplified specification of image processing architectures, control and synchronization issues of image processing steps, and the integration of the image processing machinery into the overall robot control architecture. This paper introduces the video image processing (VIP) framework, a software framework for multithreaded control flow modeling in robot vision.

  16. Beyond speculative robot ethics: a vision assessment study on the future of the robotic caretaker.

    Science.gov (United States)

    van der Plas, Arjanna; Smits, Martijntje; Wehrmann, Caroline

    2010-11-01

In this article we develop a dialogue model for robot technology experts and designated users to discuss visions of the future of robotics in long-term care. Our vision assessment study aims for more distinguished and better informed visions of future robots. Surprisingly, our experiment also led to some promising co-designed robot concepts in which jointly articulated moral guidelines are embedded. With our model, we believe we have designed an interesting response to a recent call for a less speculative ethics of technology by encouraging discussions about the quality of positive and negative visions of the future of robotics.

  17. Beyond speculative robot ethics: A vision assessment study on the future of the robotic caretaker

    NARCIS (Netherlands)

    Plas, A.P. van der; Smits, M.; Wehrmann, C.

    2010-01-01

    In this article we develop a dialogue model for robot technology experts and designated users to discuss visions on the future of robotics in long-term care. Our vision assessment study aims for more distinguished and more informed visions on future robots. Surprisingly, our experiment also led to

  18. Vision Based Autonomous Robot Navigation Algorithms and Implementations

    CERN Document Server

    Chatterjee, Amitava; Nirmal Singh, N

    2013-01-01

This book is devoted to the theory and development of autonomous navigation of mobile robots using computer vision based sensing mechanisms. Conventional robot navigation systems, utilizing traditional sensors like ultrasonic, IR, GPS, and laser sensors, suffer several drawbacks related either to the physical limitations of the sensor or to high cost. Vision sensing has emerged as a popular alternative where cameras can be used to reduce the overall cost while maintaining a high degree of intelligence, flexibility and robustness. This book includes a detailed description of several new approaches for real life vision based autonomous navigation algorithms and SLAM. It presents the concept of how subgoal based goal-driven navigation can be carried out using vision sensing. The development concept of vision based robots for path/line tracking using fuzzy logic is presented, as well as how a low-cost robot can be indigenously developed in the laboratory with microcontroller based sensor systems. The book descri...

  19. Machine Learning for Robotic Vision

    OpenAIRE

    Drummond, Tom

    2018-01-01

    Machine learning is a crucial enabling technology for robotics, in particular for unlocking the capabilities afforded by visual sensing. This talk will present research within Prof Drummond’s lab that explores how machine learning can be developed and used within the context of Robotic Vision.

  20. Remote-controlled vision-guided mobile robot system

    Science.gov (United States)

    Ande, Raymond; Samu, Tayib; Hall, Ernest L.

    1997-09-01

Automated guided vehicles (AGVs) have many potential applications in manufacturing, medicine, space and defense. The purpose of this paper is to describe exploratory research on the design of the remote controlled emergency stop and vision systems for an autonomous mobile robot. The remote control provides human supervision and emergency stop capabilities for the autonomous vehicle. The vision guidance provides automatic operation. A mobile robot test-bed has been constructed using a golf cart base. The mobile robot (Bearcat) was built for the Association for Unmanned Vehicle Systems (AUVS) 1997 competition. The mobile robot has full speed control with guidance provided by a vision system and an obstacle avoidance system using ultrasonic sensor systems. Vision guidance is accomplished using two CCD cameras with zoom lenses. The vision data is processed by a high speed tracking device, communicating to the computer the X, Y coordinates of blobs along the lane markers. The system also has three emergency stop switches and a remote controlled emergency stop switch that can disable the traction motor and set the brake. Testing of these systems has been done in the lab as well as on an outside test track with positive results that show that at five mph the vehicle can follow a line and at the same time avoid obstacles.
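
A proportional steering rule of the kind such a blob-tracking lane follower might use can be sketched in a few lines; the gain, image width and saturation limit are illustrative, not the Bearcat's actual parameters.

```python
def steering_from_blob(blob_x, image_width=640, gain=0.005, max_turn=0.5):
    """Map the tracked lane-marker blob's horizontal position to a steering
    command: proportional control on the pixel offset from image centre.
    A toy illustration of the idea, not the vehicle's actual control law.
    """
    error = blob_x - image_width / 2           # pixels right of centre
    turn = max(-max_turn, min(max_turn, gain * error))
    return turn                                # rad/s, positive = turn right

for x in (320, 400, 120):                      # blob centred, right, far left
    print(x, "->", round(steering_from_blob(x), 3))
```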

  1. International Conference on Computational Vision and Robotics

    CERN Document Server

    2015-01-01

Computer Vision and Robotics is one of the most challenging areas of the 21st century. Its applications range from agriculture to medicine, household applications to humanoids, deep-sea applications to space applications, and industrial applications to unmanned plants. Today's technologies demand the production of intelligent machines, which enable applications in various domains and services. Robotics is one such area that encompasses a number of technologies, and its applications are widespread. Computational vision or machine vision is one of the most challenging tools for making a robot intelligent.   This volume covers chapters from various areas of Computational Vision such as Image and Video Coding and Analysis, Image Watermarking, Noise Reduction and Cancellation, Block Matching and Motion Estimation, Tracking of Deformable Object using Steerable Pyramid Wavelet Transformation, Medical Image Fusion, CT and MRI Image Fusion based on Stationary Wavelet Transform. The book also covers articles from applicati...

  2. Vision-based mapping with cooperative robots

    Science.gov (United States)

    Little, James J.; Jennings, Cullen; Murray, Don

    1998-10-01

    Two stereo-vision-based mobile robots navigate and autonomously explore their environment safely while building occupancy grid maps of the environment. The robots maintain position estimates within a global coordinate frame using landmark recognition. This allows them to build a common map by sharing position information and stereo data. Stereo vision processing and map updates are done at 3 Hz and the robots move at speeds of 200 cm/s. Cooperative mapping is achieved through autonomous exploration of unstructured and dynamic environments. The map is constructed conservatively, so as to be useful for collision-free path planning. Each robot maintains a separate copy of a shared map, and then posts updates to the common map when it returns to observe a landmark at home base. Issues include synchronization, mutual localization, navigation, exploration, registration of maps, merging repeated views (fusion), centralized vs decentralized maps.
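
A common way to build such conservative occupancy maps from stereo range readings is a log-odds update per grid cell, sketched below; the update constants and grid size are illustrative, not the values used by these robots.

```python
import numpy as np

# Occupancy-grid update in log-odds form, the usual way range readings
# are fused over time; constants and grid size are illustrative.
L_OCC, L_FREE, L_MIN, L_MAX = 0.9, -0.4, -4.0, 4.0
grid = np.zeros((50, 50))  # log-odds, 0 = unknown

def update_cell(grid, i, j, hit):
    grid[i, j] = np.clip(grid[i, j] + (L_OCC if hit else L_FREE), L_MIN, L_MAX)

# A stereo reading says cell (10, 12) is occupied and the cells on the ray
# before it are free.
for cell in [(10, 9), (10, 10), (10, 11)]:
    update_cell(grid, *cell, hit=False)
update_cell(grid, 10, 12, hit=True)

prob = 1.0 - 1.0 / (1.0 + np.exp(grid[10, 12]))  # log-odds -> probability
print(f"P(occupied) = {prob:.2f}")
```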

  3. Computer vision for an autonomous mobile robot

    CSIR Research Space (South Africa)

    Withey, Daniel J

    2015-10-01

    Full Text Available Computer vision systems are essential for practical, autonomous, mobile robots – machines that employ artificial intelligence and control their own motion within an environment. As with biological systems, computer vision systems include the vision...

  4. Stereo-vision and 3D reconstruction for nuclear mobile robots

    International Nuclear Information System (INIS)

    Lecoeur-Taibi, I.; Vacherand, F.; Rivallin, P.

    1991-01-01

In order to perceive the geometric structure of the surrounding environment of a mobile robot, a 3D reconstruction system has been developed. Its main purpose is to provide geometric information to an operator who has to telepilot the vehicle in a nuclear power plant. The perception system is split into two parts: the vision part and the map building part. Vision is enhanced with a fusion process that rejects bad samples over space and time. The vision is based on trinocular stereo-vision, which provides a range image of the image contours. It performs line contour correlation on horizontal image pairs and vertical image pairs. The results are then spatially fused in order to have one distance image, with a quality independent of the orientation of the contour. The 3D reconstruction is based on grid-based sensor fusion. As the robot moves and perceives its environment, distance data is accumulated onto a regular square grid, taking into account the uncertainty of the sensor through a sensor measurement statistical model. This approach allows both spatial and temporal fusion. Uncertainty due to sensor position and robot position is also integrated into the absolute local map. This system is modular and generic and can integrate a 2D laser range finder and active vision. (author)

  5. Robot Vision to Monitor Structures in Invisible Fog Environments Using Active Imaging Technology

    Energy Technology Data Exchange (ETDEWEB)

    Park, Seungkyu; Park, Nakkyu; Baik, Sunghoon; Choi, Youngsoo; Jeong, Kyungmin [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2014-05-15

Active vision is a direct visualization technique using a highly sensitive image sensor and a high intensity illuminant. The range-gated imaging (RGI) technique, which provides 2D and 3D images, is one of the emerging active vision technologies. The RGI technique extracts vision information by summing time-sliced vision images. In an RGI system, objects are illuminated for an ultra-short time by a high intensity illuminant, and the light reflected from the objects is then captured by a highly sensitive image sensor with an ultra-short exposure time. The RGI system provides 2D and 3D image data from several images, and moreover provides clear images in invisible fog and smoke environments by summing time-sliced images. Range-gated (RG) imaging is nowadays an emerging technology in the field of surveillance for security applications, especially in the visualization of invisible night and fog environments. Although RGI viewing was discovered in the 1960's, this technology is now more and more applicable by virtue of the rapid development of optical and sensor technologies, such as highly sensitive imaging sensors and ultra-short pulse laser light. In contrast to passive vision systems, this technology enables operation even in harsh environments like fog and smoke. During the past decades, several applications of this technology have appeared in target recognition and in harsh environments, such as fog and underwater vision. This technology has also been demonstrated for 3D imaging based on range-gated imaging. In this paper, a robot system to monitor structures in invisible fog environments is developed using an active range-gated imaging technique. The system consists of an ultra-short pulse laser device and a highly sensitive imaging sensor. The developed vision system is used to monitor objects in an invisible fog environment. The experimental results of this new vision system are described in this paper. To see invisible objects in fog...
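
The gate timing behind range-gated imaging follows directly from the round-trip time of light; the sketch below computes the delay and width of the gate for an assumed range slice (the numbers are illustrative, not the system's parameters).

```python
C = 3.0e8  # speed of light, m/s

def gate_timing(range_m, slice_m):
    """Delay and width of the sensor gate that images a slice of space
    starting at range_m; a back-of-the-envelope sketch of RGI timing."""
    delay_s = 2.0 * range_m / C          # round trip to the near edge
    width_s = 2.0 * slice_m / C          # gate stays open across the slice
    return delay_s, width_s

delay, width = gate_timing(range_m=30.0, slice_m=1.5)
print(f"open after {delay * 1e9:.0f} ns, for {width * 1e9:.0f} ns")
# slice starting at 30 m -> open the gate 200 ns after the pulse, for 10 ns
```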

  6. Robot Vision to Monitor Structures in Invisible Fog Environments Using Active Imaging Technology

    International Nuclear Information System (INIS)

    Park, Seungkyu; Park, Nakkyu; Baik, Sunghoon; Choi, Youngsoo; Jeong, Kyungmin

    2014-01-01

Active vision is a direct visualization technique using a highly sensitive image sensor and a high intensity illuminant. The range-gated imaging (RGI) technique, which provides 2D and 3D images, is one of the emerging active vision technologies. The RGI technique extracts vision information by summing time-sliced vision images. In an RGI system, objects are illuminated for an ultra-short time by a high intensity illuminant, and the light reflected from the objects is then captured by a highly sensitive image sensor with an ultra-short exposure time. The RGI system provides 2D and 3D image data from several images, and moreover provides clear images in invisible fog and smoke environments by summing time-sliced images. Range-gated (RG) imaging is nowadays an emerging technology in the field of surveillance for security applications, especially in the visualization of invisible night and fog environments. Although RGI viewing was discovered in the 1960's, this technology is now more and more applicable by virtue of the rapid development of optical and sensor technologies, such as highly sensitive imaging sensors and ultra-short pulse laser light. In contrast to passive vision systems, this technology enables operation even in harsh environments like fog and smoke. During the past decades, several applications of this technology have appeared in target recognition and in harsh environments, such as fog and underwater vision. This technology has also been demonstrated for 3D imaging based on range-gated imaging. In this paper, a robot system to monitor structures in invisible fog environments is developed using an active range-gated imaging technique. The system consists of an ultra-short pulse laser device and a highly sensitive imaging sensor. The developed vision system is used to monitor objects in an invisible fog environment. The experimental results of this new vision system are described in this paper. To see invisible objects in fog...

  7. A lightweight, inexpensive robotic system for insect vision.

    Science.gov (United States)

    Sabo, Chelsea; Chisholm, Robert; Petterson, Adam; Cope, Alex

    2017-09-01

Designing hardware for miniaturized robotics which mimics the capabilities of flying insects is of interest, because they share similar constraints (i.e. small size, low weight, and low energy consumption). Research in this area aims to enable robots with similarly efficient flight and cognitive abilities. Visual processing is important to flying insects' impressive flight capabilities, but currently, embodiment of insect-like visual systems is limited by the hardware systems available. Suitable hardware is either prohibitively expensive, difficult to reproduce, unable to accurately simulate insect vision characteristics, and/or too heavy for small robotic platforms. These limitations hamper the development of platforms for embodiment, which in turn hampers progress in understanding how biological systems fundamentally work. To address this gap, this paper proposes an inexpensive, lightweight robotic system for modelling insect vision. The system is mounted and tested on a robotic platform for mobile applications, and then the camera and insect vision models are evaluated. We analyse the potential of the system for use in embodiment of higher-level visual processes (i.e. motion detection) and also for development of vision-based navigation for robotics in general. Optic flow from sample camera data is calculated and compared to a perfect, simulated bee world, showing an excellent resemblance. Copyright © 2017 The Authors. Published by Elsevier Ltd. All rights reserved.
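
As an illustration of the kind of optic-flow computation involved, the sketch below runs OpenCV's Farneback dense flow on a synthetic image pair shifted by a known amount; it assumes OpenCV is installed, and the parameters are common defaults, not the paper's setup.

```python
import cv2
import numpy as np

# Dense optic flow on a synthetic pair: a textured image shifted 2 px right,
# a crude stand-in for the visual motion seen by a moving camera.
rng = np.random.default_rng(0)
frame1 = (rng.random((64, 64)) * 255).astype(np.uint8)
frame1 = cv2.GaussianBlur(frame1, (5, 5), 1.5)       # give the noise structure
frame2 = np.roll(frame1, 2, axis=1)                  # shift 2 px to the right
flow = cv2.calcOpticalFlowFarneback(frame1, frame2, None,
                                    0.5, 3, 15, 3, 5, 1.2, 0)
print("mean flow (x, y):",
      flow[..., 0].mean().round(2), flow[..., 1].mean().round(2))
# Expect mean x-flow near +2 and y-flow near 0.
```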

  8. New development in robot vision

    CERN Document Server

    Behal, Aman; Chung, Chi-Kit

    2015-01-01

    The field of robotic vision has advanced dramatically recently with the development of new range sensors.  Tremendous progress has been made resulting in significant impact on areas such as robotic navigation, scene/environment understanding, and visual learning. This edited book provides a solid and diversified reference source for some of the most recent important advancements in the field of robotic vision. The book starts with articles that describe new techniques to understand scenes from 2D/3D data such as estimation of planar structures, recognition of multiple objects in the scene using different kinds of features as well as their spatial and semantic relationships, generation of 3D object models, approach to recognize partially occluded objects, etc. Novel techniques are introduced to improve 3D perception accuracy with other sensors such as a gyroscope, positioning accuracy with a visual servoing based alignment strategy for microassembly, and increasing object recognition reliability using related...

  9. Manifold learning in machine vision and robotics

    Science.gov (United States)

    Bernstein, Alexander

    2017-02-01

Smart algorithms are used in machine vision and robotics to organize or extract high-level information from the available data. Nowadays, machine learning is an essential and ubiquitous tool for automating the extraction of patterns or regularities from data (images in machine vision; camera, laser, and sonar sensor data in robotics) in order to solve various subject-oriented tasks such as understanding and classification of image content, navigation of mobile autonomous robots in uncertain environments, robot manipulation in medical robotics and computer-assisted surgery, and others. Usually such data have high dimensionality; however, due to various dependencies between their components and constraints caused by physical reasons, all "feasible and usable data" occupy only a very small part of a high dimensional "observation space" and have smaller intrinsic dimensionality. The generally accepted model of such data is the manifold model, in accordance with which the data lie on or near an unknown manifold (surface) of lower dimensionality embedded in an ambient high dimensional observation space; real-world high-dimensional data obtained from "natural" sources meet, as a rule, this model. The use of manifold learning techniques in machine vision and robotics, which discover a low-dimensional structure in high dimensional data and result in effective algorithms for solving a large number of various subject-oriented tasks, is the content of the conference plenary speech, some topics of which are covered in this paper.
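
A minimal demonstration of the manifold model, assuming scikit-learn is available: 3D samples that actually lie on a 2D surface are embedded back into two dimensions with Isomap. The dataset and parameters are illustrative only; any manifold-learning method could stand in here.

```python
from sklearn.datasets import make_swiss_roll
from sklearn.manifold import Isomap

# 3-D "observations" that actually live on a 2-D surface (a swiss roll),
# recovered by Isomap.
X, color = make_swiss_roll(n_samples=1000, noise=0.05, random_state=0)
embedding = Isomap(n_neighbors=12, n_components=2).fit_transform(X)
print(X.shape, "->", embedding.shape)   # (1000, 3) -> (1000, 2)
```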

  10. Ping-Pong Robotics with High-Speed Vision System

    DEFF Research Database (Denmark)

    Li, Hailing; Wu, Haiyan; Lou, Lei

    2012-01-01

The performance of vision-based control is usually limited by the low sampling rate of the visual feedback. We address Ping-Pong robotics as a widely studied example which requires high-speed vision for highly dynamic motion control. In order to detect a flying ball accurately and robustly...... of the manipulator are updated iteratively with decreasing error. Experiments are conducted on a 7 degrees of freedom humanoid robot arm. A successful Ping-Pong game between the robot arm and a human is achieved with a high success rate of 88%....

  11. Vision Based Tracker for Dart-Catching Robot

    OpenAIRE

    Linderoth, Magnus; Robertsson, Anders; Åström, Karl; Johansson, Rolf

    2009-01-01

    This paper describes how high-speed computer vision can be used in a motion control application. The specific application investigated is a dart catching robot. Computer vision is used to detect a flying dart and a filtering algorithm predicts its future trajectory. This will give data to a robot controller allowing it to catch the dart. The performance of the implemented components indicates that the dart catching application can be made to work well. Conclusions are also made about what fea...
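
A simple stand-in for the prediction step is a least-squares ballistic fit to the tracked positions, solved for the time the dart crosses a catch plane; the axes, units and constant-lateral-velocity assumption below are illustrative, not the paper's filter.

```python
import numpy as np

def predict_intercept(ts, ys, zs, z_catch):
    """Fit a ballistic arc to tracked dart positions and predict where the
    height z crosses the catch plane."""
    # z(t) = z0 + vz*t - 0.5*g*t^2, fitted by linear least squares.
    A = np.column_stack([np.ones_like(ts), ts, -0.5 * ts**2])
    z0, vz, g = np.linalg.lstsq(A, zs, rcond=None)[0]
    # Solve z(t) = z_catch for the later root.
    roots = np.roots([-0.5 * g, vz, z0 - z_catch])
    t_hit = max(r.real for r in roots if abs(r.imag) < 1e-9)
    y_fit = np.polyfit(ts, ys, 1)               # constant lateral velocity
    return t_hit, np.polyval(y_fit, t_hit)

ts = np.array([0.00, 0.02, 0.04, 0.06])
ys = np.array([0.00, 0.02, 0.04, 0.06])         # 1 m/s sideways
zs = 1.5 + 2.0 * ts - 0.5 * 9.81 * ts**2        # simulated flight samples
print(predict_intercept(ts, ys, zs, z_catch=1.0))
```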

  12. Robotic Arm Control Algorithm Based on Stereo Vision Using RoboRealm Vision

    Directory of Open Access Journals (Sweden)

    SZABO, R.

    2015-05-01

Full Text Available The goal of this paper is to present a stereo computer vision algorithm intended to control a robotic arm. Specific points on the robot joints are marked and recognized by the software. Using a dedicated set of mathematical equations, the movement of the robot is continuously computed and monitored with webcams. Positioning error is finally analyzed.

  13. 9th International Conference on Robotics, Vision, Signal Processing & Power Applications

    CERN Document Server

    Iqbal, Shahid; Teoh, Soo; Mustaffa, Mohd

    2017-01-01

The proceedings are a collection of research papers presented at the 9th International Conference on Robotics, Vision, Signal Processing & Power Applications (ROVISP 2016) by researchers, scientists, engineers, academicians and industrial professionals from around the globe, presenting their research results and development activities as oral or poster presentations. The topics of interest include, but are not limited to:   • Robotics, Control, Mechatronics and Automation • Vision, Image, and Signal Processing • Artificial Intelligence and Computer Applications • Electronic Design and Applications • Telecommunication Systems and Applications • Power System and Industrial Applications • Engineering Education

  14. Evaluation of Two Robot Vision Control Algorithms Developed Based on N-R and EKG Methods for Slender Bar Placement

    Energy Technology Data Exchange (ETDEWEB)

    Son, Jae Kyung; Jang, Wan Shik; Hong, Sung Mun [Gwangju (Korea, Republic of)

    2013-04-15

Many problems need to be solved before vision systems can actually be applied in industry, such as the precision of the kinematics model of the robot control algorithm based on visual information, active compensation of the camera's focal length and orientation during the movement of the robot, and understanding the mapping of the physical 3-D space into 2-D camera coordinates. An algorithm is proposed to enable the robot to move actively even if the relative position between the camera and the robot is unknown. To solve the correction problem, this study proposes a vision system model with six camera parameters. To develop the robot vision control algorithm, the N-R and EKG methods are applied to the vision system model. Finally, the position accuracy and processing time of the two algorithms developed based on the EKG and N-R methods are compared experimentally by making the robot perform a slender bar placement task.
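
For orientation, the sketch below shows a generic Newton-Raphson iteration (in its Gauss-Newton least-squares form) for fitting model parameters to image measurements, applied to a toy one-dimensional "camera"; the paper's six-parameter model and the EKG variant are not reproduced.

```python
import numpy as np

def newton_raphson(residual, jacobian, p0, iters=20, tol=1e-10):
    """Generic N-R/Gauss-Newton iteration for parameter estimation."""
    p = np.asarray(p0, dtype=float)
    for _ in range(iters):
        r, J = residual(p), jacobian(p)
        step = np.linalg.lstsq(J, -r, rcond=None)[0]   # solve J*step = -r
        p += step
        if np.linalg.norm(step) < tol:
            break
    return p

# Toy problem: recover scale and offset of a 1-D "camera" u = a*x + b.
x = np.array([0.0, 1.0, 2.0, 3.0])
u_meas = 2.0 * x + 0.5
residual = lambda p: p[0] * x + p[1] - u_meas
jacobian = lambda p: np.column_stack([x, np.ones_like(x)])
print(newton_raphson(residual, jacobian, p0=[1.0, 0.0]))   # -> [2.0, 0.5]
```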

  15. Evaluation of Two Robot Vision Control Algorithms Developed Based on N-R and EKG Methods for Slender Bar Placement

    International Nuclear Information System (INIS)

    Son, Jae Kyung; Jang, Wan Shik; Hong, Sung Mun

    2013-01-01

Many problems need to be solved before vision systems can actually be applied in industry, such as the precision of the kinematics model of the robot control algorithm based on visual information, active compensation of the camera's focal length and orientation during the movement of the robot, and understanding the mapping of the physical 3-D space into 2-D camera coordinates. An algorithm is proposed to enable the robot to move actively even if the relative position between the camera and the robot is unknown. To solve the correction problem, this study proposes a vision system model with six camera parameters. To develop the robot vision control algorithm, the N-R and EKG methods are applied to the vision system model. Finally, the position accuracy and processing time of the two algorithms developed based on the EKG and N-R methods are compared experimentally by making the robot perform a slender bar placement task.

  16. A robotic vision system to measure tree traits

    Science.gov (United States)

    The autonomous measurement of tree traits, such as branching structure, branch diameters, branch lengths, and branch angles, is required for tasks such as robotic pruning of trees as well as structural phenotyping. We propose a robotic vision system called the Robotic System for Tree Shape Estimati...

  17. Intelligent Vision System for Door Sensing Mobile Robot

    Directory of Open Access Journals (Sweden)

    Jharna Majumdar

    2012-08-01

Full Text Available Wheeled mobile robots find numerous applications in indoor man-made structured environments. In order to operate effectively, the robots must be capable of sensing their surroundings. Computer vision is one of the prime research areas directed towards achieving these sensing capabilities. In this paper, we present a door sensing mobile robot capable of navigating in indoor environments. A robust and inexpensive approach for recognition and classification of doors, based on a monocular vision system, helps the mobile robot in decision making. To prove the efficacy of the algorithm we have designed and developed a 'differentially' driven mobile robot. A wall following behavior using ultrasonic range sensors is employed by the mobile robot for navigation in corridors. Field Programmable Gate Arrays (FPGAs) have been used for the implementation of the PD controller for wall following and the PID controller to control the speed of the geared DC motor.
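
The wall-following behavior can be illustrated with a scalar PD law on the range error, sketched below; the gains, set-point and timing are invented for illustration and are unrelated to the paper's FPGA implementation.

```python
def pd_wall_following(distance, prev_error, desired=0.5, kp=2.0, kd=0.5, dt=0.05):
    """One step of a PD steering law for corridor wall following from an
    ultrasonic range reading (illustrative gains and set-point)."""
    error = desired - distance                  # metres from the set-point
    derivative = (error - prev_error) / dt
    steer = kp * error + kd * derivative        # positive = turn toward wall
    return steer, error

# Robot drifting away from the wall: 0.50 m, 0.52 m, 0.55 m readings.
err = 0.0
for d in (0.50, 0.52, 0.55):
    steer, err = pd_wall_following(d, err)
    print(f"d={d:.2f} m -> steer={steer:+.2f}")
```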

  18. A Novel Bioinspired Vision System: A Step toward Real-Time Human-Robot Interactions

    Directory of Open Access Journals (Sweden)

    Abdul Rahman Hafiz

    2011-01-01

Full Text Available Building a human-like robot that could be involved in our daily lives is a dream of many scientists. Achieving a sophisticated robot vision system that can enhance the robot's real-time interaction ability with humans is one of the main keys toward realizing such an autonomous robot. In this work, we suggest a bioinspired vision system that helps to develop advanced human-robot interaction in an autonomous humanoid robot. First, we enhance the robot's vision accuracy online by applying a novel dynamic edge detection algorithm abstracted from the role that horizontal cells play in the mammalian retina. Second, in order to support the first algorithm, we improve the robot's tracking ability by designing a variant photoreceptor distribution corresponding to what exists in the human vision system. The experimental results verified the validity of the model. The robot could have clear vision in real time and build a mental map that assisted it to be aware of users in front of it and to develop a positive interaction with them.

  19. Design And Implementation Of Integrated Vision-Based Robotic Workcells

    Science.gov (United States)

    Chen, Michael J.

    1985-01-01

Reports have been sparse on large-scale, intelligent integration of complete robotic systems for automating the microelectronics industry. This paper describes the application of state-of-the-art computer-vision technology for manufacturing of miniaturized electronic components. The concepts of FMS - Flexible Manufacturing Systems, work cells, and work stations and their control hierarchy are illustrated in this paper. Several computer-controlled work cells used in the production of thin-film magnetic heads are described. These cells use vision for in-process control of head-fixture alignment and real-time inspection of production parameters. The vision sensor and other optoelectronic sensors, coupled with transport mechanisms such as steppers, x-y-z tables, and robots, have created complete sensorimotor systems. These systems greatly increase the manufacturing throughput as well as the quality of the final product. This paper uses these automated work cells as examples to illustrate the underlying design philosophy and principles in the fabrication of vision-based robotic systems.

  20. Development of Vision Control Scheme of Extended Kalman filtering for Robot's Position Control

    International Nuclear Information System (INIS)

    Jang, W. S.; Kim, K. S.; Park, S. I.; Kim, K. Y.

    2003-01-01

It is very important to reduce the computational time needed to estimate the parameters of a vision control algorithm for real-time robot position control. Unfortunately, the batch estimation commonly used requires too much computational time because it is an iterative method. Thus, batch estimation is difficult to use for real-time robot position control. On the other hand, Extended Kalman Filtering (EKF) has many advantages for calculating the parameters of a vision system in that it is a simple and efficient recursive procedure. Thus, this study develops an EKF algorithm for robot vision control in real time. The vision system model used in this study involves six parameters to account for the inner (orientation, focal length, etc.) and outer (the relative location between robot and camera) parameters of the camera. EKF is first applied to estimate these parameters, and then, with these estimated parameters, to estimate the robot's joint angles used for the robot's operation. Finally, the practicality of the vision control scheme based on the EKF has been experimentally verified by performing robot position control.
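
The recursive structure that makes the EKF attractive here can be seen in a scalar Kalman filter step, sketched below on a toy constant-state problem; the paper's six-parameter camera model and its Jacobians are not reproduced.

```python
import numpy as np

def kalman_step(x, P, z, F=1.0, Q=1e-4, H=1.0, R=0.01):
    """One predict/update cycle of a scalar Kalman filter, the recursive
    structure the EKF shares (illustrative noise values)."""
    # Predict
    x_pred = F * x
    P_pred = F * P * F + Q
    # Update with measurement z
    K = P_pred * H / (H * P_pred * H + R)   # Kalman gain
    x_new = x_pred + K * (z - H * x_pred)
    P_new = (1 - K * H) * P_pred
    return x_new, P_new

rng = np.random.default_rng(0)
x, P, truth = 0.0, 1.0, 0.7
for _ in range(50):                          # noisy measurements stream in
    z = truth + rng.normal(scale=0.1)
    x, P = kalman_step(x, P, z)
print(round(x, 3))                           # converges near 0.7
```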

  1. A remote assessment system with a vision robot and wearable sensors.

    Science.gov (United States)

    Zhang, Tong; Wang, Jue; Ren, Yumiao; Li, Jianjun

    2004-01-01

This paper describes an ongoing research effort on a remote rehabilitation assessment system that has a six-degree-of-freedom, dual-eye vision robot to capture vision information, and a group of wearable sensors to acquire biomechanical signals. A server computer is fixed on the robot to provide services to the robot's controller and all the sensors. The robot is connected to the Internet by a wireless channel, and so are the sensors to the robot. Rehabilitation professionals can semi-automatically conduct an assessment program via the Internet. The preliminary results show that the smart device, including the robot and the sensors, can improve the quality of remote assessment and reduce the complexity of operation at a distance.

  2. System and method for controlling a vision guided robot assembly

    Energy Technology Data Exchange (ETDEWEB)

    Lin, Yhu-Tin; Daro, Timothy; Abell, Jeffrey A.; Turner, III, Raymond D.; Casoli, Daniel J.

    2017-03-07

    A method includes the following steps: actuating a robotic arm to perform an action at a start position; moving the robotic arm from the start position toward a first position; determining from a vision process method if a first part from the first position will be ready to be subjected to a first action by the robotic arm once the robotic arm reaches the first position; commencing the execution of the visual processing method for determining the position deviation of the second part from the second position and the readiness of the second part to be subjected to a second action by the robotic arm once the robotic arm reaches the second position; and performing a first action on the first part using the robotic arm with the position deviation of the first part from the first position predetermined by the vision process method.

  3. A real time tracking vision system and its application to robotics

    International Nuclear Information System (INIS)

    Inoue, Hirochika

    1994-01-01

Among the various sensing channels, vision is the most important for making robots intelligent. If provided with a high speed visual tracking capability, the robot-environment interaction becomes dynamic instead of static, and thus the potential repertoire of robot behavior becomes very rich. For this purpose we developed a real-time tracking vision system. The fundamental operation on which our system is based is the calculation of correlation between local images. The use of a special correlation chip and a multi-processor configuration enables the robot to track hundreds of cues at full video rate. In addition to the fundamental visual performance, applications to robot behavior control are also introduced. (author)
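
The fundamental operation, correlation between local images, can be sketched in plain numpy as normalized cross-correlation searched over a small window; this illustrates the computation only, not the special chip or multi-processor design.

```python
import numpy as np

def ncc(patch, template):
    """Normalized cross-correlation between two equally sized patches."""
    a = patch - patch.mean()
    b = template - template.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom > 0 else 0.0

def track(image, template, top_left, search=4):
    """Search a small window around the previous location for the best match."""
    h, w = template.shape
    best, best_pos = -2.0, top_left
    y0, x0 = top_left
    for y in range(max(0, y0 - search), min(image.shape[0] - h, y0 + search) + 1):
        for x in range(max(0, x0 - search), min(image.shape[1] - w, x0 + search) + 1):
            score = ncc(image[y:y + h, x:x + w], template)
            if score > best:
                best, best_pos = score, (y, x)
    return best_pos, best

rng = np.random.default_rng(1)
img = rng.random((48, 48))
tmpl = img[20:28, 30:38].copy()              # the cue we are tracking
print(track(img, tmpl, top_left=(18, 28)))   # -> ((20, 30), ~1.0)
```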

  4. 3D vision system for intelligent milking robot automation

    Science.gov (United States)

    Akhloufi, M. A.

    2013-12-01

In a milking robot, the correct localization and positioning of milking teat cups is of very high importance. Milking robot technology has not changed in a decade and is based primarily on laser profiles for approximate estimation of teat positions. This technology has reached its limits and does not allow optimal positioning of the milking cups. Also, in the presence of occlusions, the milking robot fails to milk the cow. These problems have economic consequences for producers and for animal health (e.g. development of mastitis). To overcome the limitations of current robots, we have developed a new system based on 3D vision, capable of efficiently positioning the milking cups. A prototype of an intelligent robot system based on 3D vision for real-time positioning of a milking robot has been built and tested under various conditions on a synthetic udder model (in static and moving scenarios). Experimental tests were performed using 3D Time-Of-Flight (TOF) and RGBD cameras. The proposed algorithms permit the online segmentation of teats by combining 2D and 3D visual information. The obtained results permit computation of the 3D teat position. This information is then sent to the milking robot for teat cup positioning. The vision system has real-time performance and monitors the optimal positioning of the cups even in the presence of motion. The results obtained with both TOF and RGBD cameras show the good performance of the proposed system. The best performance was obtained with RGBD cameras. This latter technology will be used in future real-life experimental tests.

  5. Vision Based Autonomous Robotic Control for Advanced Inspection and Repair

    Science.gov (United States)

    Wehner, Walter S.

    2014-01-01

    The advanced inspection system is an autonomous control and analysis system that improves the inspection and remediation operations for ground and surface systems. It uses optical imaging technology with intelligent computer vision algorithms to analyze physical features of the real-world environment to make decisions and learn from experience. The advanced inspection system plans to control a robotic manipulator arm, an unmanned ground vehicle and cameras remotely, automatically and autonomously. There are many computer vision, image processing and machine learning techniques available as open source for using vision as a sensory feedback in decision-making and autonomous robotic movement. My responsibilities for the advanced inspection system are to create a software architecture that integrates and provides a framework for all the different subsystem components; identify open-source algorithms and techniques; and integrate robot hardware.

  6. Vision Assisted Laser Scanner Navigation for Autonomous Robots

    DEFF Research Database (Denmark)

    Andersen, Jens Christian; Andersen, Nils Axel; Ravn, Ole

    2008-01-01

This paper describes a navigation method based on road detection using both a laser scanner and a vision sensor. The method is to classify the surface in front of the robot into traversable segments (road) and obstacles using the laser scanner; this classifies the area just in front of the robot ...

  7. Robot path planning using expert systems and machine vision

    Science.gov (United States)

    Malone, Denis E.; Friedrich, Werner E.

    1992-02-01

    This paper describes a system developed for the robotic processing of naturally variable products. In order to plan the robot motion path it was necessary to use a sensor system, in this case a machine vision system, to observe the variations occurring in workpieces and interpret this with a knowledge based expert system. The knowledge base was acquired by carrying out an in-depth study of the product using examination procedures not available in the robotic workplace and relates the nature of the required path to the information obtainable from the machine vision system. The practical application of this system to the processing of fish fillets is described and used to illustrate the techniques.

  8. An assembly system based on industrial robot with binocular stereo vision

    Science.gov (United States)

    Tang, Hong; Xiao, Nanfeng

    2017-01-01

This paper proposes an electronic part and component assembly system based on an industrial robot with binocular stereo vision. First, binocular stereo vision with a visual attention mechanism model is used to quickly extract the image regions that contain the electronic parts and components. Second, a deep neural network is adopted to recognize the features of the electronic parts and components. Third, in order to control the end-effector of the industrial robot to grasp the electronic parts and components, a genetic algorithm (GA) is proposed to compute the transition matrix and the inverse kinematics of the industrial robot (end-effector), which plays a key role in bridging the binocular stereo vision and the industrial robot. Finally, the proposed assembly system is tested in LED component assembly experiments, and the results show that it has high efficiency and good applicability.
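
A toy version of GA-based inverse kinematics, here for a planar two-link arm rather than the industrial robot, is sketched below; the GA operators, population size and link lengths are all illustrative, not the paper's design.

```python
import numpy as np

def forward(thetas, l1=1.0, l2=0.8):
    """End-effector position of a planar 2-link arm (illustrative kinematics)."""
    t1, t2 = thetas
    return np.array([l1 * np.cos(t1) + l2 * np.cos(t1 + t2),
                     l1 * np.sin(t1) + l2 * np.sin(t1 + t2)])

def ga_ik(target, pop_size=60, gens=200, sigma=0.1):
    """Minimal genetic algorithm for inverse kinematics: truncation
    selection plus Gaussian mutation. A sketch of the idea only."""
    rng = np.random.default_rng(2)
    pop = rng.uniform(-np.pi, np.pi, size=(pop_size, 2))
    for _ in range(gens):
        errs = np.array([np.linalg.norm(forward(p) - target) for p in pop])
        elite = pop[np.argsort(errs)[: pop_size // 4]]        # keep the best 25%
        children = elite[rng.integers(0, len(elite), pop_size - len(elite))]
        children = children + rng.normal(0, sigma, children.shape)  # mutate
        pop = np.vstack([elite, children])
    errs = np.array([np.linalg.norm(forward(p) - target) for p in pop])
    return pop[int(np.argmin(errs))], float(errs.min())

thetas, err = ga_ik(np.array([1.2, 0.9]))
print(np.round(thetas, 3), f"error={err:.4f}")
```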

  9. 3D vision upgrade kit for TALON robot

    Science.gov (United States)

    Edmondson, Richard; Vaden, Justin; Hyatt, Brian; Morris, James; Pezzaniti, J. Larry; Chenault, David B.; Tchon, Joe; Barnidge, Tracy; Kaufman, Seth; Pettijohn, Brad

    2010-04-01

    In this paper, we report on the development of a 3D vision field upgrade kit for TALON robot consisting of a replacement flat panel stereoscopic display, and multiple stereo camera systems. An assessment of the system's use for robotic driving, manipulation, and surveillance operations was conducted. The 3D vision system was integrated onto a TALON IV Robot and Operator Control Unit (OCU) such that stock components could be electrically disconnected and removed, and upgrade components coupled directly to the mounting and electrical connections. A replacement display, replacement mast camera with zoom, auto-focus, and variable convergence, and a replacement gripper camera with fixed focus and zoom comprise the upgrade kit. The stereo mast camera allows for improved driving and situational awareness as well as scene survey. The stereo gripper camera allows for improved manipulation in typical TALON missions.

  10. Physics Based Vision Systems for Robotic Manipulation

    Data.gov (United States)

    National Aeronautics and Space Administration — With the increase of robotic manipulation tasks (TA4.3), specifically dexterous manipulation tasks (TA4.3.2), more advanced computer vision algorithms will be...

  11. A novel method of robot location using RFID and stereo vision

    Science.gov (United States)

    Chen, Diansheng; Zhang, Guanxin; Li, Zhen

    2012-04-01

This paper proposes a new global localization method for mobile robots based on RFID (Radio Frequency Identification Devices) and stereo vision, which enables the robot to obtain global coordinates with good accuracy while quickly adapting to unfamiliar and new environments. The method uses RFID tags as artificial landmarks; the 3D coordinates of the tags in the global coordinate system are written into their IC memory. The robot can read them through an RFID reader; meanwhile, using stereo vision, the 3D coordinates of the tags in the robot coordinate system are measured. Combined with the robot's attitude transformation matrix from the pose measuring system, the translation from the robot coordinate system to the global coordinate system is obtained, which is also the coordinate of the robot's current location in the global coordinate system. The average error of our method is 0.11 m in experiments conducted in a 7 m × 7 m lobby; this result is much more accurate than that of other localization methods.
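
The final transformation step reduces to one line of rigid-body algebra: given the tag's global coordinates, its stereo-measured robot-frame coordinates and the attitude matrix, the robot's global position falls out. All numbers below are made up for illustration.

```python
import numpy as np

def robot_position(tag_global, tag_in_robot, R_robot_to_global):
    """Recover the robot's global position from one RFID landmark:
    the tag's global coordinates (read from its memory), the tag's
    position measured in the robot frame by stereo vision, and the
    robot attitude from the pose-measuring system."""
    # tag_global = R @ tag_in_robot + t  =>  t = robot origin in global frame
    return tag_global - R_robot_to_global @ tag_in_robot

yaw = np.deg2rad(30)                           # attitude: 30 degree heading
R = np.array([[np.cos(yaw), -np.sin(yaw), 0],
              [np.sin(yaw),  np.cos(yaw), 0],
              [0,            0,           1]])
tag_global = np.array([4.0, 3.0, 1.2])         # written in the tag's IC memory
tag_in_robot = np.array([2.0, 0.5, 1.0])       # measured by stereo vision
print(robot_position(tag_global, tag_in_robot, R))
```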

  12. Control of multiple robots using vision sensors

    CERN Document Server

    Aranda, Miguel; Sagüés, Carlos

    2017-01-01

This monograph introduces novel methods for the control and navigation of mobile robots using multiple-1-d-view models obtained from omni-directional cameras. This approach overcomes field-of-view and robustness limitations, simultaneously enhancing accuracy and simplifying application on real platforms. The authors also address coordinated motion tasks for multiple robots, exploring different system architectures, particularly the use of multiple aerial cameras in driving robot formations on the ground. Again, this has benefits of simplicity, scalability and flexibility. Coverage includes details of: • a method for visual robot homing based on a memory of omni-directional images • a novel vision-based pose stabilization methodology for non-holonomic ground robots based on sinusoidal-varying control inputs • an algorithm to recover a generic motion between two 1-d views which does not require a third view • a novel multi-robot setup where multiple camera-carrying unmanned aerial vehicles are used to observe and c...

  13. Robotics Vision-based Heuristic Reasoning for Underwater Target Tracking and Navigation

    OpenAIRE

    Kia, Chua; Arshad, Mohd Rizal

    2006-01-01

This paper presents a robotics vision-based heuristic reasoning system for underwater target tracking and navigation. The system is introduced to improve the level of automation of underwater Remotely Operated Vehicle (ROV) operations. A prototype which combines computer vision with an underwater robotics system is successfully designed and developed to perform target tracking and intelligent navigation. This study focuses on developing image processing algorithms and fuzzy inference system ...

  14. Vision-Based Recognition of Activities by a Humanoid Robot

    Directory of Open Access Journals (Sweden)

    Mounîm A. El-Yacoubi

    2015-12-01

Full Text Available We present an autonomous assistive robotic system for human activity recognition from video sequences. Due to the large variability inherent to video capture from a non-fixed robot (as opposed to a fixed camera), as well as the robot's limited computing resources, the implementation has been guided by robustness to this variability and by memory and computing speed efficiency. To accommodate motion speed variability across users, we encode motion using dense interest point trajectories. Our recognition model harnesses the dense interest point bag-of-words representation through an intersection kernel-based SVM that better accommodates the large intra-class variability stemming from a robot operating in different locations and conditions. To contextually assess the engine as implemented in the robot, we compare it with the most recent approaches to human action recognition performed on public (non-robot-based) datasets, including a novel approach of our own that is based on a two-layer SVM-hidden conditional random field sequential recognition model. The latter's performance is among the best within the recent state of the art. We show that our robot-based recognition engine, while less accurate than the sequential model, nonetheless shows good performance, especially given the adverse test conditions of the robot, relative to those of a fixed camera.
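
The histogram intersection kernel at the heart of the recognition model is simple to state; the sketch below computes the Gram matrix between bag-of-words histograms (toy values), which could then be fed to an SVM with a precomputed kernel.

```python
import numpy as np

def intersection_kernel(X, Y):
    """Histogram intersection kernel K(x, y) = sum_i min(x_i, y_i)."""
    return np.minimum(X[:, None, :], Y[None, :, :]).sum(axis=2)

# Two tiny 4-bin bag-of-words histograms against one query histogram.
A = np.array([[0.5, 0.2, 0.2, 0.1], [0.1, 0.6, 0.2, 0.1]])
B = np.array([[0.4, 0.3, 0.2, 0.1]])
print(intersection_kernel(A, B))   # [[0.9], [0.7]]
# Such a Gram matrix can be passed to e.g. sklearn.svm.SVC(kernel="precomputed").
```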

  15. An FPGA Implementation of a Robot Control System with an Integrated 3D Vision System

    Directory of Open Access Journals (Sweden)

    Yi-Ting Chen

    2015-05-01

    Full Text Available Robot decision making and motion control are commonly based on visual information in various applications. Position-based visual servoing is a technique for vision-based robot control which operates in the 3D workspace, uses real-time image processing to perform feature extraction, and returns the pose of the object for positioning control. In order to handle the computational burden of the vision sensor feedback, we design an FPGA-based motion-vision integrated system that employs dedicated hardware circuits for vision processing and motion control functions. This research conducts a preliminary study to explore the integration of 3D vision and robot motion control system design based on a single field programmable gate array (FPGA) chip. The implemented motion-vision embedded system performs the following functions: filtering, image statistics, binary morphology, binary object analysis, object 3D position calculation, robot inverse kinematics, velocity profile generation, feedback counting, and multiple-axis position feedback control.
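
    Of the functions listed, velocity profile generation is easy to illustrate off-chip. A minimal trapezoidal-profile sketch in Python (the paper realizes this in FPGA hardware; the parameters and the triangular fallback below are generic motion-control conventions, not the paper's design):

    ```python
    import numpy as np

    def trapezoidal_profile(distance, v_max, a_max, dt=0.001):
        """Velocity samples for accelerate / cruise / decelerate motion."""
        t_acc = v_max / a_max
        d_acc = 0.5 * a_max * t_acc ** 2
        if 2 * d_acc > distance:          # triangular case: v_max never reached
            t_acc = (distance / a_max) ** 0.5
            v_max = a_max * t_acc
            d_acc = distance / 2
        t_flat = (distance - 2 * d_acc) / v_max
        t_total = 2 * t_acc + t_flat
        t = np.arange(0.0, t_total, dt)
        # v(t) = min(accel ramp, cruise speed, decel ramp)
        v = np.minimum.reduce([a_max * t, np.full_like(t, v_max),
                               a_max * (t_total - t)])
        return t, np.clip(v, 0.0, None)
    ```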

  16. Vision-Based Interfaces Applied to Assistive Robots

    Directory of Open Access Journals (Sweden)

    Elisa Perez

    2013-02-01

    Full Text Available This paper presents two vision-based interfaces for disabled people to command a mobile robot for personal assistance. The interfaces differ in the image-processing algorithm implemented for the detection and tracking of two different body regions. The first interface detects and tracks movements of the user's head, and these movements are transformed into linear and angular velocities in order to command the mobile robot. The second interface detects and tracks movements of the user's hand, and these movements are similarly transformed. In addition, this paper also presents the control laws for the robot. The experimental results demonstrate good performance and a balance between complexity and feasibility for real-time applications.
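
    A sketch of the kind of mapping such an interface performs: head displacement in the image (pixels from a neutral position) to linear/angular velocity commands. The gains, dead-zone and limits are illustrative assumptions, not the paper's control laws:

    ```python
    def head_to_velocity(dx_px, dy_px, k_lin=0.002, k_ang=0.004, dead=10):
        """Map pixel offsets of the tracked head to (v [m/s], w [rad/s])."""
        v = 0.0 if abs(dy_px) < dead else k_lin * -dy_px  # head up -> forward
        w = 0.0 if abs(dx_px) < dead else k_ang * -dx_px  # head left -> turn left
        # saturate to safe limits
        return max(-0.5, min(0.5, v)), max(-1.0, min(1.0, w))

    print(head_to_velocity(-40, -120))  # head up-left -> forward and turning
    ```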

  17. Experiments on mobile robot stereo vision system calibration under hardware imperfection

    Directory of Open Access Journals (Sweden)

    Safin Ramil

    2018-01-01

    Full Text Available Calibration is essential for any robot vision system to achieve high accuracy in deriving objects' metric information. One typical requirement for a stereo vision system to obtain good calibration results is to guarantee that both cameras are kept at the same vertical level. However, cameras may become displaced due to the severe conditions under which a robot operates, or other circumstances. This paper presents our experimental approach to the problem of calibrating a mobile robot stereo vision system under hardware imperfection. In our experiments, we used the crawler-type mobile robot «Servosila Engineer». The stereo cameras of the robot were displaced relative to each other, causing a loss of information about the surrounding environment. We implemented and verified checkerboard- and circle-grid-based calibration methods. A comparison of the two methods demonstrated that circle-grid-based calibration should be preferred over the classical checkerboard calibration approach.
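
    The two calibration variants compared here map onto standard OpenCV calls. A minimal sketch of the circle-grid path, assuming hypothetical image files and an OpenCV-sample-style board layout (sizes and file names are placeholders):

    ```python
    import cv2
    import numpy as np

    pattern = (4, 11)  # asymmetric circle grid, as in the OpenCV samples
    objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
    for i in range(pattern[1]):            # staggered grid, unit spacing
        for j in range(pattern[0]):
            objp[i * pattern[0] + j] = [2 * j + i % 2, i, 0]

    obj_pts, img_pts, size = [], [], None
    for fname in ["left_01.png", "left_02.png"]:      # hypothetical images
        gray = cv2.imread(fname, cv2.IMREAD_GRAYSCALE)
        found, centers = cv2.findCirclesGrid(
            gray, pattern, flags=cv2.CALIB_CB_ASYMMETRIC_GRID)
        # the checkerboard variant would call cv2.findChessboardCorners(gray, (9, 6))
        if found:
            obj_pts.append(objp)
            img_pts.append(centers)
            size = gray.shape[::-1]

    rms, K, dist, _, _ = cv2.calibrateCamera(obj_pts, img_pts, size, None, None)
    print("reprojection RMS:", rms)   # the metric by which the methods compare
    ```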

  18. Vision Guided Intelligent Robot Design And Experiments

    Science.gov (United States)

    Slutzky, G. D.; Hall, E. L.

    1988-02-01

    The concept of an intelligent robot is an important topic combining sensors, manipulators, and artificial intelligence to design a useful machine. Vision systems, tactile sensors, proximity switches and other sensors provide the elements necessary for simple game playing as well as industrial applications. These sensors permit adaptation to a changing environment. The AI techniques permit advanced forms of decision making, adaptive responses, and learning, while the manipulator provides the ability to perform various tasks. Computer languages such as LISP and OPS5 have been utilized to achieve expert-system approaches to solving real-world problems. The purpose of this paper is to describe several examples of visually guided intelligent robots, including both stationary and mobile robots. Demonstrations will be presented of a system for constructing and solving a popular peg game, a robot lawn mower, and a box-stacking robot. The experience gained from these and other systems provides insight into what may be realistically expected from the next generation of intelligent machines.

  19. Robot vision system R and D for ITER blanket remote-handling system

    International Nuclear Information System (INIS)

    Maruyama, Takahito; Aburadani, Atsushi; Takeda, Nobukazu; Kakudate, Satoshi; Nakahira, Masataka; Tesini, Alessandro

    2014-01-01

    For regular maintenance of the International Thermonuclear Experimental Reactor (ITER), a system called the ITER blanket remote-handling system is necessary to remotely handle the blanket modules because of the high levels of gamma radiation. Modules will be handled by robotic power manipulators and they must have a non-contact-sensing system for installing and grasping to avoid contact with other modules. A robot vision system that uses cameras was adopted for this non-contact-sensing system. Experiments for grasping modules were carried out in a dark room to simulate the environment inside the vacuum vessel and the robot vision system's measurement errors were studied. As a result, the accuracy of the manipulator's movements was within 2.01 mm and 0.31°, which satisfies the system requirements. Therefore, it was concluded that this robot vision system is suitable for the non-contact-sensing system of the ITER blanket remote-handling system

  20. Robot vision system R and D for ITER blanket remote-handling system

    Energy Technology Data Exchange (ETDEWEB)

    Maruyama, Takahito, E-mail: maruyama.takahito@jaea.go.jp [Japan Atomic Energy Agency, Fusion Research and Development Directorate, Naka, Ibaraki-ken 311-0193 (Japan); Aburadani, Atsushi; Takeda, Nobukazu; Kakudate, Satoshi; Nakahira, Masataka [Japan Atomic Energy Agency, Fusion Research and Development Directorate, Naka, Ibaraki-ken 311-0193 (Japan); Tesini, Alessandro [ITER Organization, Route de Vinon sur Verdon, 13115 St Paul Lez Durance (France)

    2014-10-15

    For regular maintenance of the International Thermonuclear Experimental Reactor (ITER), a system called the ITER blanket remote-handling system is necessary to remotely handle the blanket modules because of the high levels of gamma radiation. Modules will be handled by robotic power manipulators and they must have a non-contact-sensing system for installing and grasping to avoid contact with other modules. A robot vision system that uses cameras was adopted for this non-contact-sensing system. Experiments for grasping modules were carried out in a dark room to simulate the environment inside the vacuum vessel and the robot vision system's measurement errors were studied. As a result, the accuracy of the manipulator's movements was within 2.01 mm and 0.31°, which satisfies the system requirements. Therefore, it was concluded that this robot vision system is suitable for the non-contact-sensing system of the ITER blanket remote-handling system.

  1. A Fast Vision System for Soccer Robot

    Directory of Open Access Journals (Sweden)

    Tianwu Yang

    2012-01-01

    Full Text Available This paper proposes fast colour-based object recognition and localization for soccer robots. The traditional HSL colour model is modified for better colour segmentation and edge detection in a colour-coded environment. The object recognition is based only on the edge pixels to speed up the computation. The edge pixels are detected by intelligently scanning a small, distributed subset of the image pixels. A fast method for line and circle-centre detection is also discussed. For object localization, 26 key points are defined on the soccer field. When two or more key points are visible in the robot's camera view, the three rotation angles are adjusted to achieve a precise localization of the robot and other objects. If no key point is detected, the robot position is estimated according to the history of robot movement and the feedback from the motors and sensors. The experiments on NAO and RoboErectus teen-size humanoid robots show that the proposed vision system is robust and accurate under different lighting conditions and can effectively and precisely locate robots and other objects.
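
    Colour segmentation followed by edge extraction in an HSL-type space can be sketched with OpenCV (which exposes the closely related HLS space); the colour range below is an illustrative guess for a "ball orange", not the paper's modified model:

    ```python
    import cv2
    import numpy as np

    frame = cv2.imread("field.png")                 # hypothetical camera frame
    hls = cv2.cvtColor(frame, cv2.COLOR_BGR2HLS)
    mask = cv2.inRange(hls, (5, 60, 120), (20, 200, 255))  # assumed ball colour
    edges = cv2.Canny(mask, 50, 150)                # edge pixels drive recognition
    ys, xs = np.nonzero(edges)
    if xs.size:
        print("ball edge centroid:", xs.mean(), ys.mean())
    ```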

  2. Gain-scheduling control of a monocular vision-based human-following robot

    CSIR Research Space (South Africa)

    Burke, Michael G

    2011-08-01

    Full Text Available , R. and Zisserman, A. (2004). Multiple View Geometry in Computer Vision. Cambridge University Press, 2nd edition. Hutchinson, S., Hager, G., and Corke, P. (1996). A tutorial on visual servo control. IEEE Trans. on Robotics and Automation, 12... environment, in a passive manner, at relatively high speeds and low cost. The control of mobile robots using vision in the feed- back loop falls into the well-studied field of visual servo control. Two primary approaches are used: image-based visual...

  3. Compensation for positioning error of industrial robot for flexible vision measuring system

    Science.gov (United States)

    Guo, Lei; Liang, Yajun; Song, Jincheng; Sun, Zengyu; Zhu, Jigui

    2013-01-01

    The positioning error of the robot is a main factor limiting the accuracy of a flexible coordinate measuring system consisting of a universal industrial robot and a visual sensor. Existing compensation methods for positioning error based on a kinematic model of the robot have a significant limitation: they are not effective across the whole measuring space. A new compensation method for the positioning error of the robot based on vision measuring techniques is presented. One approach is to set global control points in the measured field and attach an orientation camera to the vision sensor. The global control points are then measured by the orientation camera to calculate the transformation from the current position of the sensor system to the global coordinate system, and the positioning error of the robot is compensated. Another approach is to set control points on the vision sensor and two large-field cameras behind the sensor. The three-dimensional coordinates of the control points are then measured, and the pose and position of the sensor are calculated in real time. Experimental results show that the RMS spatial positioning error is 3.422 mm with a single camera and 0.031 mm with dual cameras. The conclusion is that the algorithm of the single-camera method needs to be improved for higher accuracy, whereas the accuracy of the dual-camera method is suitable for application.
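
    Estimating a sensor-to-global rigid transform from matched control points is the core computation in both approaches; the standard SVD (Kabsch) solution gives it in a few lines. A sketch with synthetic points, not the paper's data:

    ```python
    import numpy as np

    def rigid_transform(P, Q):
        """Find R, t with Q ~= R @ P + t for 3xN matched control points."""
        cP, cQ = P.mean(1, keepdims=True), Q.mean(1, keepdims=True)
        U, _, Vt = np.linalg.svd((Q - cQ) @ (P - cP).T)   # cross-covariance SVD
        D = np.diag([1, 1, np.sign(np.linalg.det(U @ Vt))])  # avoid reflections
        R = U @ D @ Vt
        return R, (cQ - R @ cP).ravel()

    P = np.random.rand(3, 10)                       # control points, sensor frame
    t_true = np.array([1.0, 2.0, 3.0])
    Q = P + t_true[:, None]                         # same points, global frame
    R, t = rigid_transform(P, Q)                    # recovers identity R, t_true
    ```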

  4. A Framework for Obstacles Avoidance of Humanoid Robot Using Stereo Vision

    Directory of Open Access Journals (Sweden)

    Widodo Budiharto

    2013-04-01

    Full Text Available In this paper, we propose a framework for a multiple moving obstacles avoidance strategy using stereo vision for humanoid robots in indoor environments. We assume that this model of humanoid robot is used as a service robot to deliver a cup to a customer from a starting point to a destination point. We have successfully developed and introduced three main modules: to recognize faces, to identify multiple moving obstacles and to initiate a maneuver. A group of people who are walking is tracked as multiple moving obstacles. A predefined maneuver to avoid obstacles is applied to the robot because of the limited view angle of the stereo camera for detecting multiple obstacles. The contribution of this research is a new method for a multiple moving obstacles avoidance strategy with a Bayesian approach using stereo vision, based on the direction and speed of the obstacles. Depth estimation is used to obtain the distance between the obstacles and the robot. We present the results of experiments with the humanoid robot called Gatotkoco II, which uses our proposed method, and evaluate its performance. The proposed moving obstacles avoidance strategy was tested empirically and proved effective for the humanoid robot.
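
    The depth-estimation step can be sketched with OpenCV's semi-global block matcher; the matcher settings, focal length and baseline are illustrative values, not those of the Gatotkoco II system:

    ```python
    import cv2
    import numpy as np

    left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)    # hypothetical frames
    right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

    matcher = cv2.StereoSGBM_create(minDisparity=0, numDisparities=64, blockSize=7)
    disp = matcher.compute(left, right).astype(np.float32) / 16.0  # fixed-point

    f_px, baseline_m = 700.0, 0.12                  # assumed camera geometry
    depth_m = np.where(disp > 0, f_px * baseline_m / disp, np.inf)
    print("nearest obstacle at %.2f m" % depth_m.min())
    ```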

  5. 3D vision in a virtual reality robotics environment

    Science.gov (United States)

    Schutz, Christian L.; Natonek, Emerico; Baur, Charles; Hugli, Heinz

    1996-12-01

    Virtual reality robotics (VRR) needs sensing feedback from the real environment. To show how advanced 3D vision provides new perspectives to fulfill these needs, this paper presents an architecture and system that integrates hybrid 3D vision and VRR and reports on experiments and results. The first section discusses the advantages of virtual reality in robotics, the potential of a 3D vision system in VRR and the contribution of a knowledge database, robust control and the combination of intensity and range imaging to build such a system. Section two presents the different modules of a hybrid 3D vision architecture based on hypothesis generation and verification. Section three addresses the problem of the recognition of complex, free-form 3D objects and shows how and why the newer approaches based on geometric matching solve the problem. This free-form matching can be efficiently integrated in a VRR system as a hypothesis-generation knowledge-based 3D vision system. In the fourth part, we introduce the hypothesis verification based on intensity images, which checks object pose and texture. Finally, we show how this system has been implemented and operates in a practical VRR environment used for an assembly task.

  6. A Vision-Based Wireless Charging System for Robot Trophallaxis

    Directory of Open Access Journals (Sweden)

    Jae-O Kim

    2015-12-01

    Full Text Available The need to recharge the batteries of a mobile robot has presented an important challenge for a long time. In this paper, a vision-based wireless charging method for robot energy trophallaxis between two robots is presented. Even though wireless power transmission allows more positional error between receiver-transmitter coils than with a contact-type charging system, both coils have to be aligned as accurately as possible for efficient power transfer. To align the coils, a transmitter robot recognizes the coarse pose of a receiver robot via a camera image and the ambiguity of the estimated pose is removed with a Bayesian estimator. The precise pose of the receiver coil is calculated using a marker image attached to a receiver robot. Experiments with several types of receiver robots have been conducted to verify the proposed method.
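
    The fine alignment step, recovering the receiver-coil pose from an attached marker, can be sketched with OpenCV's ArUco module (older, pre-4.7 cv2.aruco API) as a stand-in for the paper's marker; the intrinsics, dictionary and marker size are placeholders:

    ```python
    import cv2
    import numpy as np

    K = np.array([[600., 0, 320], [0, 600., 240], [0, 0, 1]])  # assumed intrinsics
    dist = np.zeros(5)
    img = cv2.imread("receiver.png")                           # hypothetical image

    aruco = cv2.aruco
    dictionary = aruco.getPredefinedDictionary(aruco.DICT_4X4_50)
    corners, ids, _ = aruco.detectMarkers(img, dictionary)
    if ids is not None:
        # 0.05 m marker side length (assumed); returns rotation/translation vectors
        rvecs, tvecs, _ = aruco.estimatePoseSingleMarkers(corners, 0.05, K, dist)
        print("receiver coil at", tvecs[0].ravel(), "m in camera frame")
    ```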

  7. 8th International Conference on Robotic, Vision, Signal Processing & Power Applications

    CERN Document Server

    Mustaffa, Mohd

    2014-01-01

    The proceeding is a collection of research papers presented, at the 8th International Conference on Robotics, Vision, Signal Processing and Power Applications (ROVISP 2013), by researchers, scientists, engineers, academicians as well as industrial professionals from all around the globe. The topics of interest are as follows but are not limited to: • Robotics, Control, Mechatronics and Automation • Vision, Image, and Signal Processing • Artificial Intelligence and Computer Applications • Electronic Design and Applications • Telecommunication Systems and Applications • Power System and Industrial Applications  

  8. Multiple Moving Obstacles Avoidance of Service Robot using Stereo Vision

    Directory of Open Access Journals (Sweden)

    Achmad Jazidie

    2011-12-01

    Full Text Available In this paper, we propose multiple moving obstacles avoidance using stereo vision for service robots in indoor environments. We assume that this model of service robot is used to deliver a cup to a recognized customer from the starting point to the destination. The contribution of this research is a new method for multiple moving obstacle avoidance with a Bayesian approach using a stereo camera. We have developed and introduced three main modules: to recognize faces, to identify multiple moving obstacles and to maneuver the robot. A group of people who are walking is tracked as multiple moving obstacles, and the speed, direction, and distance of the moving obstacles are estimated by a stereo camera so that the robot can maneuver to avoid collisions. To overcome the inaccuracies of the vision sensor, a Bayesian approach is used to estimate the absence and direction of obstacles. We present the results of experiments with the service robot called Srikandi III, which uses our proposed method, and we also evaluate its performance. Experiments showed that our proposed method works well, and the Bayesian approach proved to increase the estimation performance for the absence and direction of moving obstacles.

  9. ROBERT autonomous navigation robot with artificial vision

    International Nuclear Information System (INIS)

    Cipollini, A.; Meo, G.B.; Nanni, V.; Rossi, L.; Taraglio, S.; Ferjancic, C.

    1993-01-01

    This work, a joint research effort between ENEA (the Italian National Agency for Energy, New Technologies and the Environment) and DIGITAL, presents the layout of the ROBERT project, ROBot with Environmental Recognizing Tools, under development in ENEA laboratories. This project aims at the development of an autonomous mobile vehicle able to navigate in a known indoor environment through the use of artificial vision. The general architecture of the robot is shown, together with the data and control flow among the various subsystems. The inner structure of these subsystems, complete with their functionalities, is also given in detail.

  10. Modeling and Implementation of Omnidirectional Soccer Robot with Wide Vision Scope Applied in Robocup-MSL

    Directory of Open Access Journals (Sweden)

    Mohsen Taheri

    2010-04-01

    Full Text Available The purpose of this paper is to design and implement a middle-size soccer robot that conforms to the RoboCup MSL league. First, according to the rules of RoboCup, we designed the middle-size soccer robot. The proposed autonomous soccer robot consists of the mechanical platform, motion control module, omni-directional vision module, front vision module, image processing and recognition module, target object positioning and real-coordinate reconstruction, robot path planning, competition strategies, and obstacle avoidance. The robot is equipped with a laptop computer system and interface circuits to make decisions. The omnidirectional vision sensor of the vision system handles the image processing and positioning for obstacle avoidance and target tracking. The boundary-following algorithm (BFA) is applied to find the important features of the field. We utilize a sensor data fusion method for the control system parameters, self-localization and world modeling. A vision-based self-localization and the conventional odometry systems are fused for robust self-localization. The localization algorithm includes filtering, sharing and integration of the data for the different types of objects recognized in the environment. For the control strategies, we present three state modes: Attack, Defense and Intercept. The methods have been tested on middle-size robots in many RoboCup competition fields.

  11. Performance evaluation of 3D vision-based semi-autonomous control method for assistive robotic manipulator.

    Science.gov (United States)

    Ka, Hyun W; Chung, Cheng-Shiu; Ding, Dan; James, Khara; Cooper, Rory

    2018-02-01

    We developed a 3D vision-based semi-autonomous control interface for assistive robotic manipulators. It was implemented on one of the most popular commercially available assistive robotic manipulators, combined with a low-cost depth-sensing camera mounted on the robot base. To perform a manipulation task with the 3D vision-based semi-autonomous control interface, a user starts operating with a manual control method available to him/her. When objects are detected within a set range, the control interface automatically stops the robot and provides the user with possible manipulation options through audible text output, based on the detected object characteristics. The system then waits until the user states a voice command. Once the user command is given, the control interface drives the robot autonomously until the given command is completed. Empirical evaluations conducted with human subjects from two different groups showed that semi-autonomous control can be used as an alternative control method to enable individuals with impaired motor control to operate the robot arm more efficiently by facilitating their fine motion control. The advantage of semi-autonomous control was not obvious for simple tasks, but for relatively complex real-life tasks the 3D vision-based semi-autonomous control showed significantly faster performance. Implications for Rehabilitation: a 3D vision-based semi-autonomous control interface will improve clinical practice by providing an alternative control method that is less demanding physically as well as cognitively; it provides the user with task-specific, intelligent semi-autonomous manipulation assistance; it gives the user the feeling that he or she is still in control at any moment; and it is compatible with different types of new and existing manual control methods for ARMs.
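
    The manual/autonomous hand-over the abstract describes is essentially a small state machine. A schematic sketch; every method name below is a hypothetical stand-in for the robot's actual perception, speech and motion interfaces:

    ```python
    def semi_autonomous_loop(robot):
        """One task cycle of the described interface (hypothetical API)."""
        while robot.task_active():
            robot.follow_manual_input()              # user drives with own method
            objects = robot.detect_objects_in_range()  # depth-camera detection
            if objects:
                robot.stop()                         # auto-stop near objects
                options = robot.manipulation_options(objects)
                robot.speak(options)                 # audible text output
                command = robot.wait_for_voice_command()
                robot.execute_autonomously(command)  # drive until completed
    ```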

  12. Utilizing Robot Operating System (ROS) in Robot Vision and Control

    Science.gov (United States)

    2015-09-01

    Utilizing Robot Operating System (ROS) in Robot Vision and Control, by Joshua S. Lum, September 2015. Thesis Advisor: Xiaoping Yun; Co-Advisor: Zac Staples.

  13. Monocular Vision-Based Robot Localization and Target Tracking

    Directory of Open Access Journals (Sweden)

    Bing-Fei Wu

    2011-01-01

    Full Text Available This paper presents a vision-based technology for localizing targets in a 3D environment. This is achieved by combining different types of sensors, including optical wheel encoders, an electrical compass, and visual observations from a single camera. Based on the robot motion model and image sequences, an extended Kalman filter is applied to estimate target locations and the robot pose simultaneously. The proposed localization system is applicable in practice because no initialization that starts the system from artificial landmarks of known size is required. The technique is especially suitable for navigation and target tracking for an indoor robot, and has high potential for extension to surveillance and monitoring for Unmanned Aerial Vehicles with aerial odometry sensors. The experimental results show centimetre-level accuracy in localizing targets in an indoor environment, even under high-speed robot movement.
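
    The extended Kalman filter cycle at the heart of such a method fits in a few lines once the motion/measurement models and their Jacobians (evaluated at the current estimate) are supplied. A generic sketch; all models and noise matrices are the caller's, not the paper's:

    ```python
    import numpy as np

    def ekf_step(x, P, u, z, f, h, F, H, Q, R):
        """One EKF cycle: odometry/compass prediction, camera update."""
        x_pred = f(x, u)                           # motion model
        P_pred = F @ P @ F.T + Q
        y = z - h(x_pred)                          # innovation
        S = H @ P_pred @ H.T + R
        K = P_pred @ H.T @ np.linalg.inv(S)        # Kalman gain
        x_new = x_pred + K @ y
        P_new = (np.eye(len(x)) - K @ H) @ P_pred
        return x_new, P_new
    ```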

  14. Robotics Vision-based Heuristic Reasoning for Underwater Target Tracking and Navigation

    Directory of Open Access Journals (Sweden)

    Chua Kia

    2005-09-01

    Full Text Available This paper presents a robotics vision-based heuristic reasoning system for underwater target tracking and navigation. This system is introduced to improve the level of automation of underwater Remotely Operated Vehicle (ROV) operations. A prototype which combines computer vision with an underwater robotics system is successfully designed and developed to perform target tracking and intelligent navigation. This study focuses on developing image processing algorithms and a fuzzy inference system for the analysis of the terrain. The vision system developed is capable of interpreting an underwater scene by extracting subjective uncertainties of the object of interest. The subjective uncertainties are further processed as multiple inputs of a fuzzy inference system that is capable of making crisp decisions concerning where to navigate. An important part of the image analysis is morphological filtering. The applications focus on binary images with the extension of gray-level concepts. An open-loop fuzzy control system is developed for classifying the traverse of the terrain. A notable achievement is the system's capability to recognize and perform target tracking of the object of interest (a pipeline) in perspective view based on the perceived conditions. The effectiveness of this approach is demonstrated by computer and prototype simulations. This work originated from the desire to develop a robotics vision system with the ability to mimic the human expert's judgement and reasoning when maneuvering an ROV in the traverse of underwater terrain.
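
    The fuzzy decision step can be pictured with a toy sketch: triangular memberships over a single "pipeline offset" input are defuzzified into a turn command. All membership parameters, rules and names are invented for illustration and are not the paper's inference system:

    ```python
    import numpy as np

    def tri(x, a, b, c):
        """Triangular membership function peaking at b."""
        return np.maximum(0.0, np.minimum((x - a) / (b - a), (c - x) / (c - b)))

    def steer_decision(offset):
        """offset in [-1, 1]: pipeline displacement from image centre."""
        mu = {"left": tri(offset, -1.5, -1.0, 0.0),
              "centre": tri(offset, -1.0, 0.0, 1.0),
              "right": tri(offset, 0.0, 1.0, 1.5)}
        turns = {"left": 20.0, "centre": 0.0, "right": -20.0}  # crisp outputs, deg
        w = sum(mu.values())
        return sum(mu[k] * turns[k] for k in mu) / w if w else 0.0

    print(steer_decision(0.4))   # small right offset -> turn command of -8 deg
    ```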

  15. Robotics Vision-based Heuristic Reasoning for Underwater Target Tracking and Navigation

    Directory of Open Access Journals (Sweden)

    Chua Kia

    2008-11-01

    Full Text Available This paper presents a robotics vision-based heuristic reasoning system for underwater target tracking and navigation. This system is introduced to improve the level of automation of underwater Remotely Operated Vehicle (ROV) operations. A prototype which combines computer vision with an underwater robotics system is successfully designed and developed to perform target tracking and intelligent navigation. This study focuses on developing image processing algorithms and a fuzzy inference system for the analysis of the terrain. The vision system developed is capable of interpreting an underwater scene by extracting subjective uncertainties of the object of interest. The subjective uncertainties are further processed as multiple inputs of a fuzzy inference system that is capable of making crisp decisions concerning where to navigate. An important part of the image analysis is morphological filtering. The applications focus on binary images with the extension of gray-level concepts. An open-loop fuzzy control system is developed for classifying the traverse of the terrain. A notable achievement is the system's capability to recognize and perform target tracking of the object of interest (a pipeline) in perspective view based on the perceived conditions. The effectiveness of this approach is demonstrated by computer and prototype simulations. This work originated from the desire to develop a robotics vision system with the ability to mimic the human expert's judgement and reasoning when maneuvering an ROV in the traverse of underwater terrain.

  16. Vision-based obstacle recognition system for automated lawn mower robot development

    Science.gov (United States)

    Mohd Zin, Zalhan; Ibrahim, Ratnawati

    2011-06-01

    Digital image processing (DIP) techniques have been widely used in various types of application recently. Classification and recognition of a specific object using a vision system requires some challenging tasks in the field of image processing and artificial intelligence. The ability and efficiency of a vision system to capture and process images is very important for any intelligent system such as an autonomous robot. This paper gives attention to the development of a vision system that could contribute to an automated, vision-based lawn mower robot. The work involves the implementation of DIP techniques to detect and recognize three different types of obstacles that usually exist on a football field. The focus was on the study of different types and sizes of obstacles, the development of a vision-based obstacle recognition system, and the evaluation of the system's performance. Image processing techniques such as image filtering, segmentation, enhancement and edge detection have been applied in the system. The results have shown that the developed system is able to detect and recognize various types of obstacles on a football field with a recognition rate of more than 80%.

  17. 75 FR 36456 - Channel America Television Network, Inc., EquiMed, Inc., Kore Holdings, Inc., Robotic Vision...

    Science.gov (United States)

    2010-06-25

    ... SECURITIES AND EXCHANGE COMMISSION [File No. 500-1] Channel America Television Network, Inc., EquiMed, Inc., Kore Holdings, Inc., Robotic Vision Systems, Inc. (n/k/a Acuity Cimatrix, Inc.), Security... accurate information concerning the securities of Robotic Vision Systems, Inc. (n/k/a Acuity Cimatrix, Inc...

  18. Augmented models for improving vision control of a mobile robot

    DEFF Research Database (Denmark)

    Andersen, Gert Lysgaard; Christensen, Anders C.; Ravn, Ole

    1994-01-01

    This paper describes the modelling phases for the design of a path tracking vision controller for a three wheeled mobile robot. It is shown that, by including the dynamic characteristics of vision and encoder sensors and implementing the total system in one multivariable control loop, one can obtain good performance even when using standard low cost equipment and a comparatively low sampling rate. The plant model is a compound of kinematic, dynamic and sensor submodels, all integrated into a discrete state space representation. An intelligent strategy is applied for the vision sensor…

  19. A cognitive approach to vision for a mobile robot

    Science.gov (United States)

    Benjamin, D. Paul; Funk, Christopher; Lyons, Damian

    2013-05-01

    We describe a cognitive vision system for a mobile robot. This system works in a manner similar to the human vision system, using saccadic, vergence and pursuit movements to extract information from visual input. At each fixation, the system builds a 3D model of a small region, combining information about distance, shape, texture and motion. These 3D models are embedded within an overall 3D model of the robot's environment. This approach turns the computer vision problem into a search problem, with the goal of constructing a physically realistic model of the entire environment. At each step, the vision system selects a point in the visual input to focus on. The distance, shape, texture and motion information are computed in a small region and used to build a mesh in a 3D virtual world. Background knowledge is used to extend this structure as appropriate, e.g. if a patch of wall is seen, it is hypothesized to be part of a large wall and the entire wall is created in the virtual world, or if part of an object is recognized, the whole object's mesh is retrieved from the library of objects and placed into the virtual world. The difference between the input from the real camera and from the virtual camera is compared using local Gaussians, creating an error mask that indicates the main differences between them. This is then used to select the next points to focus on. This approach permits us to use very expensive algorithms on small localities, thus generating very accurate models. It also is task-oriented, permitting the robot to use its knowledge about its task and goals to decide which parts of the environment need to be examined. The software components of this architecture include PhysX for the 3D virtual world, OpenCV and the Point Cloud Library for visual processing, and the Soar cognitive architecture, which controls the perceptual processing and robot planning. The hardware is a custom-built pan-tilt stereo color camera. We describe experiments using both
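
    The real-vs-virtual comparison step lends itself to a short OpenCV sketch: local Gaussians of both views are differenced and thresholded into the error mask that drives the next fixation. File names and parameters are illustrative assumptions:

    ```python
    import cv2

    real = cv2.imread("real_view.png", cv2.IMREAD_GRAYSCALE)     # placeholders
    virtual = cv2.imread("virtual_view.png", cv2.IMREAD_GRAYSCALE)

    blur_r = cv2.GaussianBlur(real, (21, 21), 5)    # compare local statistics,
    blur_v = cv2.GaussianBlur(virtual, (21, 21), 5)  # not raw pixels
    diff = cv2.absdiff(blur_r, blur_v)
    _, error_mask = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)
    # the largest blobs in error_mask are candidates for the next fixation point
    ```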

  20. Computer vision system R&D for EAST Articulated Maintenance Arm robot

    Energy Technology Data Exchange (ETDEWEB)

    Lin, Linglong, E-mail: linglonglin@ipp.ac.cn; Song, Yuntao, E-mail: songyt@ipp.ac.cn; Yang, Yang, E-mail: yangy@ipp.ac.cn; Feng, Hansheng, E-mail: hsfeng@ipp.ac.cn; Cheng, Yong, E-mail: chengyong@ipp.ac.cn; Pan, Hongtao, E-mail: panht@ipp.ac.cn

    2015-11-15

    Highlights: • We discuss the image preprocessing, object detection and pose estimation algorithms under the poor light conditions of the inner vessel of the EAST tokamak. • The main pipeline, including contour detection, contour filtering, minimum enclosing rectangle (MER) extraction, object location and pose estimation, is described in detail. • The technical issues encountered during the research are discussed. - Abstract: The Experimental Advanced Superconducting Tokamak (EAST) is the first fully superconducting tokamak device, constructed at the Institute of Plasma Physics, Chinese Academy of Sciences (ASIPP). The EAST Articulated Maintenance Arm (EAMA) robot provides the means of in-vessel maintenance such as inspection and picking up fragments of the first wall. This paper presents a method to identify and locate the fragments semi-automatically by using computer vision. The use of computer vision in identification and location faces some difficult challenges such as shadows, poor contrast, low illumination levels and little texture. The method developed in this paper enables credible identification of objects with shadows through invariant images and edge detection. The proposed algorithms are validated on our ASIPP robotics and computer vision platform (ARVP). The results show that the method can provide a 3D pose with reference to the robot base so that objects of different shapes and sizes can be picked up successfully.
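
    The named pipeline stages map onto standard OpenCV calls. A minimal sketch of contour detection, contour filtering and MER extraction; the image name and thresholds are placeholders:

    ```python
    import cv2

    img = cv2.imread("in_vessel.png", cv2.IMREAD_GRAYSCALE)  # hypothetical frame
    edges = cv2.Canny(img, 30, 90)
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)

    for c in contours:
        if cv2.contourArea(c) < 200:       # contour filter: drop small blobs
            continue
        (cx, cy), (w, h), angle = cv2.minAreaRect(c)  # MER: centre, size, angle
        print("fragment candidate at (%.0f, %.0f), angle %.1f deg"
              % (cx, cy, angle))
    ```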

  1. A Vision-Based Approach for Estimating Contact Forces: Applications to Robot-Assisted Surgery

    Directory of Open Access Journals (Sweden)

    C. W. Kennedy

    2005-01-01

    Full Text Available The primary goal of this paper is to provide force feedback to the user using vision-based techniques. The approach presented in this paper can be used to provide force feedback to the surgeon for robot-assisted procedures. As proof of concept, we have developed a linear elastic finite element model (FEM of a rubber membrane whereby the nodal displacements of the membrane points are measured using vision. These nodal displacements are the input into our finite element model. In the first experiment, we track the deformation of the membrane in real-time through stereovision and compare it with the actual deformation computed through forward kinematics of the robot arm. On the basis of accurate deformation estimation through vision, we test the physical model of a membrane developed through finite element techniques. The FEM model accurately reflects the interaction forces on the user console when the interaction forces of the robot arm with the membrane are compared with those experienced by the surgeon on the console through the force feedback device. In the second experiment, the PHANToM haptic interface device is used to control the Mitsubishi PA-10 robot arm and interact with the membrane in real-time. Image data obtained through vision of the deformation of the membrane is used as the displacement input for the FEM model to compute the local interaction forces which are then displayed on the user console for providing force feedback and hence closing the loop.
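
    The force-recovery idea reduces to one relation: with a linear elastic FEM, interaction forces follow from the vision-measured nodal displacements as f = K u. A toy numpy sketch; the stiffness matrix below is a diagonal placeholder, not a membrane model:

    ```python
    import numpy as np

    n_dof = 12                          # e.g. 6 tracked nodes x 2 displacement axes
    K = np.eye(n_dof) * 150.0           # placeholder stiffness matrix [N/m]
    u = np.zeros(n_dof)
    u[4] = 0.003                        # vision-measured displacement [m]
    f = K @ u                           # nodal interaction forces [N]
    print("force at DOF 4: %.2f N" % f[4])
    ```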

  2. Range-Image Acquisition for Discriminated Objects in a Range-gated Robot Vision System

    Energy Technology Data Exchange (ETDEWEB)

    Park, Seung-Kyu; Ahn, Yong-Jin; Park, Nak-Kyu; Baik, Sung-Hoon; Choi, Young-Soo; Jeong, Kyung-Min [KAERI, Daejeon (Korea, Republic of)

    2015-05-15

    The imaging capability of a surveillance vision system in harsh low-visibility environments, such as fire and detonation areas, is a key function for monitoring the safety of facilities. 2D and range image data acquired from a low-visibility environment are important for assessing safety and preparing appropriate countermeasures. Passive vision systems, such as conventional cameras and binocular stereo vision systems, usually cannot acquire image information when the reflected light is highly scattered and absorbed by airborne particles such as fog. In addition, the resolution of images captured through low-density airborne particles is decreased because the image is blurred and dimmed by scattering, emission and absorption. Active vision systems, such as structured light vision and projected stereo vision, are usually more robust in harsh environments than passive vision systems, but their performance decreases considerably in proportion to the density of the particles. The RGI system provides 2D and range image data from several RGI images, and moreover provides clear images of low-visibility fog and smoke environments by using the sum of time-sliced images. Nowadays, range-gated (RG) imaging is an emerging technology in the field of surveillance for security applications, especially in the visualization of invisible night and fog environments. Although RGI viewing was discovered in the 1960s, this technology is nowadays becoming more applicable by virtue of the rapid development of optical and sensor technologies. In particular, this system can be adopted in a robot-vision system by virtue of its compact, portable configuration. In contrast to passive vision systems, this technology enables operation even in harsh environments like fog and smoke. During the past decades, several applications of this technology have been demonstrated in target recognition and in harsh environments such as fog and underwater vision. Also, this technology has been

  3. Range-Image Acquisition for Discriminated Objects in a Range-gated Robot Vision System

    International Nuclear Information System (INIS)

    Park, Seung-Kyu; Ahn, Yong-Jin; Park, Nak-Kyu; Baik, Sung-Hoon; Choi, Young-Soo; Jeong, Kyung-Min

    2015-01-01

    The imaging capability of a surveillance vision system in harsh low-visibility environments, such as fire and detonation areas, is a key function for monitoring the safety of facilities. 2D and range image data acquired from a low-visibility environment are important for assessing safety and preparing appropriate countermeasures. Passive vision systems, such as conventional cameras and binocular stereo vision systems, usually cannot acquire image information when the reflected light is highly scattered and absorbed by airborne particles such as fog. In addition, the resolution of images captured through low-density airborne particles is decreased because the image is blurred and dimmed by scattering, emission and absorption. Active vision systems, such as structured light vision and projected stereo vision, are usually more robust in harsh environments than passive vision systems, but their performance decreases considerably in proportion to the density of the particles. The RGI system provides 2D and range image data from several RGI images, and moreover provides clear images of low-visibility fog and smoke environments by using the sum of time-sliced images. Nowadays, range-gated (RG) imaging is an emerging technology in the field of surveillance for security applications, especially in the visualization of invisible night and fog environments. Although RGI viewing was discovered in the 1960s, this technology is nowadays becoming more applicable by virtue of the rapid development of optical and sensor technologies. In particular, this system can be adopted in a robot-vision system by virtue of its compact, portable configuration. In contrast to passive vision systems, this technology enables operation even in harsh environments like fog and smoke. During the past decades, several applications of this technology have been demonstrated in target recognition and in harsh environments such as fog and underwater vision. Also, this technology has been

  4. Robot soccer anywhere: achieving persistent autonomous navigation, mapping, and object vision tracking in dynamic environments

    Science.gov (United States)

    Dragone, Mauro; O'Donoghue, Ruadhan; Leonard, John J.; O'Hare, Gregory; Duffy, Brian; Patrikalakis, Andrew; Leederkerken, Jacques

    2005-06-01

    The paper describes an ongoing effort to enable autonomous mobile robots to play soccer in unstructured, everyday environments. Unlike conventional robot soccer competitions that are usually held on purpose-built robot soccer "fields", in our work we seek to develop the capability for robots to demonstrate aspects of soccer-playing in more diverse environments, such as schools, hospitals, or shopping malls, with static obstacles (furniture) and dynamic natural obstacles (people). This problem of "Soccer Anywhere" presents numerous research challenges including: (1) Simultaneous Localization and Mapping (SLAM) in dynamic, unstructured environments, (2) software control architectures for decentralized, distributed control of mobile agents, (3) integration of vision-based object tracking with dynamic control, and (4) social interaction with human participants. In addition to the intrinsic research merit of these topics, we believe that this capability would prove useful for outreach activities, in demonstrating robotics technology to primary and secondary school students, to motivate them to pursue careers in science and engineering.

  5. Design of an Embedded Multi-Camera Vision System—A Case Study in Mobile Robotics

    Directory of Open Access Journals (Sweden)

    Valter Costa

    2018-02-01

    Full Text Available The purpose of this work is to explore the design principles for a real-time robotic multi-camera vision system, in a case study involving a real-world competition of autonomous driving. Design practices from the vision and real-time research areas are applied to a real-time robotic vision application, thus exemplifying good algorithm design practices, the advantages of employing the “zero copy one pass” methodology, and the associated trade-offs leading to the selection of a controller platform. The vision tasks under study are: (i) recognition of a “flat” signal; and (ii) track following, requiring 3D reconstruction. This research first improves the algorithms used for the mentioned tasks and finally selects the controller hardware. Optimization of the algorithms yielded improvements of 1.5 to 190 times, always with acceptable quality for the target application, with algorithm optimization being more important on lower computing power platforms. Results also include 3-cm and five-degree accuracy for lane tracking and 100% accuracy for signalling panel recognition, which are better than most results found in the literature for this application. Clear results comparing different PC platforms for the mentioned robotic vision tasks are also shown, demonstrating trade-offs between accuracy and computing power and leading to the proper choice of control platform. The presented design principles are portable to other applications where real-time constraints exist.

  6. Learning Spatial Object Localization from Vision on a Humanoid Robot

    Directory of Open Access Journals (Sweden)

    Jürgen Leitner

    2012-12-01

    Full Text Available We present a combined machine learning and computer vision approach for robots to localize objects. It allows our iCub humanoid to quickly learn to provide accurate 3D position estimates (in the centimetre range) of objects seen. Biologically inspired approaches, such as Artificial Neural Networks (ANN) and Genetic Programming (GP), are trained to provide these position estimates using the two cameras and the joint encoder readings. No camera calibration or explicit knowledge of the robot's kinematic model is needed. We find that ANN and GP are not just faster and of lower complexity than traditional techniques, but also learn without the need for extensive calibration procedures. In addition, the approach localizes objects robustly when they are placed in the robot's workspace at arbitrary positions, even while the robot is moving its torso, head and eyes.
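
    The learned mapping, image coordinates in both cameras plus joint encoder readings in, 3D position out, can be sketched with a scikit-learn MLP as a stand-in for the paper's ANN/GP learners; the data below are synthetic placeholders:

    ```python
    import numpy as np
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(0)
    X = rng.uniform(-1, 1, (2000, 10))     # (u,v) in two cameras + 6 encoders
    Y = rng.uniform(-0.5, 0.5, (2000, 3))  # placeholder 3D positions [m]

    model = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=500)
    model.fit(X, Y)                        # learn the mapping end to end
    print("estimated position:", model.predict(X[:1])[0])
    ```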

  7. Vision-based Navigation and Reinforcement Learning Path Finding for Social Robots

    OpenAIRE

    Pérez Sala, Xavier

    2010-01-01

    We propose a robust system for automatic robot navigation in uncontrolled environments. The system is composed of three main modules: the Artificial Vision module, the Reinforcement Learning module, and the behavior control module. The aim of the system is to allow a robot to automatically find a path that arrives at a prefixed goal. Turn and straight movements in uncontrolled environments are automatically estimated and controlled using the proposed modules. The Artificial Vi...

  8. Incremental inverse kinematics based vision servo for autonomous robotic capture of non-cooperative space debris

    Science.gov (United States)

    Dong, Gangqi; Zhu, Z. H.

    2016-04-01

    This paper proposes a new incremental inverse kinematics based vision servo approach for robotic manipulators to capture a non-cooperative target autonomously. The target's pose and motion are estimated by a vision system using an integrated photogrammetry and EKF algorithm. Based on the estimated pose and motion of the target, the instantaneous desired position of the end-effector is predicted by inverse kinematics, and the robotic manipulator is moved incrementally from its current configuration subject to the joint speed limits. This approach effectively eliminates the multiple solutions of the inverse kinematics and increases the robustness of the control algorithm. The proposed approach is validated by a hardware-in-the-loop simulation, where the pose and motion of the non-cooperative target are estimated by a real vision system. The simulation results demonstrate the effectiveness and robustness of the proposed estimation approach for the target and of the incremental control strategy for the robotic manipulator.
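
    One incremental update of this kind, a small end-effector increment mapped to joints through the Jacobian pseudo-inverse and saturated to the joint speed limits, can be sketched in numpy; the Jacobian function is a hypothetical placeholder for the manipulator model:

    ```python
    import numpy as np

    def ik_increment(q, dx, jacobian, dq_max):
        """One incremental IK step: dq = J^+ dx, clipped to joint limits.

        q        : current joint configuration
        dx       : small desired end-effector increment (6-vector)
        jacobian : function returning the 6xN Jacobian at q (assumed supplied)
        dq_max   : per-joint increment limits for this control cycle
        """
        J = jacobian(q)
        dq = np.linalg.pinv(J) @ dx          # least-squares joint increment
        scale = np.max(np.abs(dq) / dq_max)  # uniform scaling keeps direction
        if scale > 1.0:
            dq /= scale
        return q + dq
    ```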

  9. Vision-based control of robotic arm with 6 degrees of freedom

    OpenAIRE

    Versleegers, Wim

    2014-01-01

    This paper studies the procedure to program a vertically articulated robot with six degrees of freedom, the Mitsubishi Melfa RV-2SD, with Matlab. A major drawback of the programming software provided by Mitsubishi is that it barely allows the use of vision-based programming. The number of usable cameras is limited and, moreover, the cameras are very expensive. Using Matlab, these limitations can be overcome. However, there is no direct way to control the robot with Matlab. The goal of this p...

  10. A Novel Identification Methodology for the Coordinate Relationship between a 3D Vision System and a Legged Robot.

    Science.gov (United States)

    Chai, Xun; Gao, Feng; Pan, Yang; Qi, Chenkun; Xu, Yilin

    2015-04-22

    Coordinate identification between vision systems and robots is quite a challenging issue in the field of intelligent robotic applications, involving steps such as perceiving the immediate environment, building the terrain map and planning the locomotion automatically. It is now well established that current identification methods have non-negligible limitations such as difficult feature matching, the requirement of external tools and the intervention of multiple people. In this paper, we propose a novel methodology to identify the geometric parameters of 3D vision systems mounted on robots without involving other people or additional equipment. In particular, our method focuses on legged robots, which have complex body structures and excellent locomotion ability compared to their wheeled/tracked counterparts. The parameters can be identified simply by moving the robot on relatively flat ground. Concretely, an estimation approach is provided to calculate the ground plane. In addition, the relationship between the robot and the ground is modeled. The parameters are obtained by formulating the identification problem as an optimization problem. The methodology is integrated on a legged robot called "Octopus", which can traverse rough terrain with high stability after obtaining the identification parameters of its mounted vision system using the proposed method. Diverse experiments in different environments demonstrate that our method is accurate and robust.
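
    The ground-plane estimation step can be illustrated with a least-squares plane fit via SVD on the 3D points seen while the robot stands on flat ground; the point cloud below is a synthetic placeholder:

    ```python
    import numpy as np

    def fit_plane(points):
        """points: Nx3 array. Returns unit normal n and centroid c, n.(p-c) ~ 0."""
        c = points.mean(axis=0)
        _, _, Vt = np.linalg.svd(points - c)
        return Vt[-1], c          # smallest singular vector = plane normal

    pts = np.random.rand(500, 3)
    pts[:, 2] *= 0.01             # nearly flat synthetic ground cloud
    n, c = fit_plane(pts)
    print("ground normal:", n)    # close to [0, 0, 1] up to sign
    ```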

  11. A Vision-Based Self-Calibration Method for Robotic Visual Inspection Systems

    Science.gov (United States)

    Yin, Shibin; Ren, Yongjie; Zhu, Jigui; Yang, Shourui; Ye, Shenghua

    2013-01-01

    A vision-based robot self-calibration method is proposed in this paper to evaluate the kinematic parameter errors of a robot using a visual sensor mounted on its end-effector. This approach can be performed in the industrial field without external, expensive apparatus or an elaborate setup. A robot Tool Center Point (TCP) is defined in the structural model of a line-structured laser sensor and aligned to a reference point fixed in the robot workspace. A mathematical model is established to relate the misalignment errors to the kinematic parameter errors and TCP position errors. Based on the fixed-point constraints, the kinematic parameter errors and TCP position errors are identified with an iterative algorithm. Compared to conventional methods, the proposed method eliminates the need for robot base-frame and hand-to-eye calibrations, shortens the error propagation chain, and makes the calibration process more accurate and convenient. A validation experiment is performed on an ABB IRB2400 robot. An optimal configuration of the number and distribution of fixed points in the robot workspace is obtained based on the experimental results. Comparative experiments reveal a significant improvement in the measuring accuracy of the robotic visual inspection system. PMID:24300597

  12. A Robust Vision Module for Humanoid Robotic Ping-Pong Game

    Directory of Open Access Journals (Sweden)

    Xiaopeng Chen

    2015-04-01

    Full Text Available Developing a vision module for a humanoid ping-pong game is challenging due to the spin and the non-linear rebound of the ping-pong ball. In this paper, we present a robust predictive vision module to overcome these problems. The hardware of the vision module is composed of two stereo camera pairs, with each pair detecting the 3D positions of the ball on one half of the ping-pong table. The software of the vision module divides the trajectory of the ball into four parts and uses the perceived trajectory in the first part to predict the other parts. In particular, it uses an aerodynamic model to predict the trajectory of the ball in the air and a novel non-linear rebound model to predict the change in the ball's motion during rebound. The average prediction error of our vision module at the ball returning point is less than 50 mm, a value small enough for standard-sized ping-pong rackets. Its average processing speed is 120 fps. The precision and efficiency of our vision module enable two humanoid robots to play ping-pong continuously for more than 200 rounds.
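
    The aerodynamic prediction step can be pictured as forward integration of the flight under gravity and quadratic drag (the paper's model additionally handles spin and the non-linear rebound; the coefficients below are generic assumed values):

    ```python
    import numpy as np

    def predict_flight(p, v, dt=0.002, k_drag=0.1, g=9.81, z_table=0.0):
        """Euler-integrate the ball until it reaches table height."""
        traj = [p.copy()]
        while p[2] > z_table and len(traj) < 5000:
            a = np.array([0.0, 0.0, -g]) - k_drag * np.linalg.norm(v) * v
            v = v + a * dt
            p = p + v * dt
            traj.append(p.copy())
        return np.array(traj)

    path = predict_flight(np.array([0.0, 0.0, 0.4]), np.array([3.0, 0.0, 1.0]))
    print("predicted landing point:", path[-1][:2])
    ```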

  13. Multi-focal Vision and Gaze Control Improve Navigation Performance

    Directory of Open Access Journals (Sweden)

    Kolja Kuehnlenz

    2008-11-01

    Full Text Available Multi-focal vision systems comprise cameras with various fields of view and measurement accuracies. This article presents a multi-focal approach to localization and mapping of mobile robots with active vision. The novel concept is implemented in a humanoid robot navigation scenario, where the robot is visually guided through a structured environment with several landmarks. Various embodiments of multi-focal vision systems are investigated, and the impact on navigation performance is evaluated in comparison to a conventional mono-focal stereo set-up. The comparative studies clearly show the benefits of multi-focal vision for mobile robot navigation: flexibility to assign the different available sensors optimally in each situation, enhancement of the visible field, higher localization accuracy and, thus, better task performance, i.e. path-following behavior of the mobile robot. It is shown that multi-focal vision may strongly improve navigation performance.

  14. A Novel Identification Methodology for the Coordinate Relationship between a 3D Vision System and a Legged Robot

    Directory of Open Access Journals (Sweden)

    Xun Chai

    2015-04-01

    Full Text Available Coordinate identification between vision systems and robots is quite a challenging issue in the field of intelligent robotic applications, involving steps such as perceiving the immediate environment, building the terrain map and planning the locomotion automatically. It is now well established that current identification methods have non-negligible limitations such as difficult feature matching, the requirement of external tools and the intervention of multiple people. In this paper, we propose a novel methodology to identify the geometric parameters of 3D vision systems mounted on robots without involving other people or additional equipment. In particular, our method focuses on legged robots, which have complex body structures and excellent locomotion ability compared to their wheeled/tracked counterparts. The parameters can be identified simply by moving the robot on relatively flat ground. Concretely, an estimation approach is provided to calculate the ground plane. In addition, the relationship between the robot and the ground is modeled. The parameters are obtained by formulating the identification problem as an optimization problem. The methodology is integrated on a legged robot called “Octopus”, which can traverse rough terrain with high stability after obtaining the identification parameters of its mounted vision system using the proposed method. Diverse experiments in different environments demonstrate that our method is accurate and robust.

  15. A State-of-the-Art Review on Mapping and Localization of Mobile Robots Using Omnidirectional Vision Sensors

    Directory of Open Access Journals (Sweden)

    L. Payá

    2017-01-01

    Full Text Available Nowadays, the field of mobile robotics is experiencing a rapid evolution, and a variety of autonomous vehicles are available to solve different tasks. Advances in computer vision have led to a substantial increase in the use of cameras as the main sensors in mobile robots. They can be used as the only source of information or in combination with other sensors such as odometry or laser. Among vision systems, omnidirectional sensors stand out due to the richness of the information they provide the robot with, and an increasing number of works about them have been published over the last few years, leading to a wide variety of frameworks. In this review, some of the most important works are analysed. One of the key problems the scientific community is currently addressing is improving the autonomy of mobile robots. To this end, building robust models of the environment and solving the localization and navigation problems are three important abilities that any mobile robot must have. Taking this into account, the review concentrates on these problems: how researchers have addressed them by means of omnidirectional vision, the main frameworks they have proposed, and how these have evolved in recent years.

  16. Endoscopic vision-based tracking of multiple surgical instruments during robot-assisted surgery.

    Science.gov (United States)

    Ryu, Jiwon; Choi, Jaesoon; Kim, Hee Chan

    2013-01-01

    Robot-assisted minimally invasive surgery is effective for operations in limited space. Enhancing safety by automatically tracking the positions of surgical instruments, in order to prevent inadvertent harmful events such as tissue perforation or instrument collisions, could be a meaningful augmentation to current robotic surgical systems. A vision-based instrument tracking scheme, as a core algorithm to implement such functions, was developed in this study. The automatic tracking scheme is proposed as a chain of computer vision techniques, including classification of metallic properties using k-means clustering and instrument movement tracking using similarity measures, Euclidean distance calculations, and a Kalman filter algorithm. The implemented system showed satisfactory performance in tests using actual robot-assisted surgery videos. Trajectory comparisons of automatically detected data and ground-truth data, obtained by manually locating the center of mass of each instrument, were used to quantitatively validate the system. Instruments and collisions could be tracked well with the proposed methods. The developed collision warning system could provide valuable information to clinicians for safer procedures.

  17. Design and Development of Vision Based Blockage Clearance Robot for Sewer Pipes

    Directory of Open Access Journals (Sweden)

    Krishna Prasad Nesaian

    2012-03-01

    Full Text Available Robotic technology is one of the advanced technologies, capable of completing tasks in situations where humans are unable to reach, see or survive. Underground sewer pipelines are the major means of transporting effluent water. Blockages in sewer pipes cause overflow of effluent water and sanitation problems. We therefore built a robotic vehicle capable of travelling underneath the effluent water, detecting blockages by means of ultrasonic sensors and clearing them with a drilling mechanism. In addition, a wireless camera is fitted, acting as the robot's vision, through which video can be monitored and images captured using the MATLAB tool. Thus, in this project, a prototype model of an underground sewer pipe blockage clearance robot of the drilling type is developed.

  18. State of the art of robotic surgery related to vision: brain and eye applications of newly available devices

    Directory of Open Access Journals (Sweden)

    Nuzzi R

    2018-02-01

    Full Text Available Raffaele Nuzzi, Luca Brusasco; Department of Surgical Sciences, Eye Clinic, University of Torino, Turin, Italy. Background: Robot-assisted surgery has revolutionized many surgical subspecialties, mainly where procedures have to be performed in confined, difficult to visualize spaces. Despite advances in general surgery and neurosurgery, in vivo application of robotics to ocular surgery is still in its infancy, owing to the particular complexities of microsurgery. The use of robotic assistance and feedback guidance on surgical maneuvers could improve the technical performance of expert surgeons during the initial phase of the learning curve. Evidence acquisition: We analyzed the advantages and disadvantages of surgical robots, as well as the present applications and future outlook of robotics in neurosurgery in brain areas related to vision and ophthalmology. Discussion: Limitations to robotic assistance remain that need to be overcome before it can be more widely applied in ocular surgery. Conclusion: There is heightened interest in studies documenting computerized systems that filter out hand tremor and optimize speed of movement, control of force, and direction and range of movement. Further research is still needed to validate robot-assisted procedures. Keywords: robotic surgery related to vision, robots, ophthalmological applications of robotics, eye and brain robots, eye robots

  19. Robust tracking of dexterous continuum robots: Fusing FBG shape sensing and stereo vision.

    Science.gov (United States)

    Rumei Zhang; Hao Liu; Jianda Han

    2017-07-01

    Robust and efficient tracking of continuum robots is important for improving patient safety during space-confined minimally invasive surgery; however, it has been a particularly challenging task for researchers. In this paper, we present a novel tracking scheme that fuses fiber Bragg grating (FBG) shape sensing and stereo vision to estimate the position of continuum robots. Purely visual tracking easily suffers from a lack of robustness and leads to failure, while the FBG shape sensor can only reconstruct the local shape, with cumulative error from the integration. The proposed fusion is anticipated to compensate for their shortcomings and improve the tracking accuracy. To verify its effectiveness, the robot's centerline is recognized by morphological operations and reconstructed by a stereo matching algorithm. The shape obtained by the FBG sensor is transformed into a distal tip position with respect to the camera coordinate system through previously calibrated registration matrices. An experimental platform was set up and repeated tracking experiments were carried out. The accuracy estimated by averaging the absolute positioning errors between shape sensing and stereo vision is 0.67±0.65 mm, 0.41±0.25 mm, 0.72±0.43 mm for x, y and z, respectively. Results indicate that the proposed fusion is feasible and can be used for closed-loop control of continuum robots.
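
    Two operations named in the abstract, mapping the FBG-derived tip position into the camera frame through a calibrated registration matrix and combining it with the stereo estimate, can be sketched as below (a simplified illustration; the inverse-variance fusion rule and all names are assumptions, not the authors' published method):

        import numpy as np

        def to_camera_frame(p_fbg, T_cam_fbg):
            """Map a 3-vector from the FBG sensor frame into the camera frame
            using a 4x4 homogeneous registration matrix (pre-calibrated)."""
            return (T_cam_fbg @ np.append(p_fbg, 1.0))[:3]

        def fuse(p_vision, p_fbg_cam, var_vision, var_fbg):
            """Inverse-variance weighted fusion of the two tip estimates."""
            w_v, w_f = 1.0 / var_vision, 1.0 / var_fbg
            return (w_v * p_vision + w_f * p_fbg_cam) / (w_v + w_f)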

  20. User-centric design of a personal assistance robot (FRASIER) for active aging.

    Science.gov (United States)

    Padir, Taşkin; Skorinko, Jeanine; Dimitrov, Velin

    2015-01-01

    We present our preliminary results from the design process for developing the Worcester Polytechnic Institute's personal assistance robot, FRASIER, as an intelligent service robot for enabling active aging. The robot's capabilities include vision-based object detection, tracking the user, and helping carry heavy items such as grocery bags or cafeteria trays. This work-in-progress report outlines our motivation and approach to developing the next generation of service robots for the elderly. Our main contribution in this paper is the development of a set of specifications based on the adopted user-centered design process, and the realization of a prototype system designed to meet these specifications.

  1. Estimating Position of Mobile Robots From Omnidirectional Vision Using an Adaptive Algorithm.

    Science.gov (United States)

    Li, Luyang; Liu, Yun-Hui; Wang, Kai; Fang, Mu

    2015-08-01

    This paper presents a novel and simple adaptive algorithm for estimating the position of a mobile robot with high accuracy in an unknown and unstructured environment by fusing images of an omnidirectional vision system with measurements from odometry and inertial sensors. Based on a new derivation in which the omnidirectional projection can be linearly parameterized by the positions of the robot and natural feature points, we propose a novel adaptive algorithm, similar to the Slotine-Li algorithm in model-based adaptive control, to estimate the robot's position using the tracked feature points in the image sequence together with the robot's velocity and orientation angles measured by odometry and inertial sensors. It is proved that the adaptive algorithm leads to global exponential convergence of the position estimation errors to zero. Simulations and real-world experiments are performed to demonstrate the performance of the proposed algorithm.
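
    The Slotine-Li-style update law itself is not reproduced in the abstract; for a linear measurement model y = A·theta of the kind described, a generic normalized-gradient estimator (an illustrative stand-in only, not the paper's proven law) has this shape:

        import numpy as np

        def adaptive_step(theta, A, y, gamma=0.5):
            """One normalized-gradient update for a linear measurement model
            y = A @ theta, where theta stacks the unknown robot/feature
            positions and A comes from the linear parameterization."""
            e = A @ theta - y                      # prediction error
            g = A.T @ e                            # gradient of 0.5*||e||^2
            return theta - gamma * g / (1.0 + np.trace(A.T @ A))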

  2. Visions and visioning in foresight activities

    DEFF Research Database (Denmark)

    Jørgensen, Michael Søgaard; Grosu, Dan

    2007-01-01

    The paper discusses the roles of visioning processes and visions in foresight activities and in societal discourses and changes parallel to or following foresight activities. The overall topic can be characterised as the dynamics and mechanisms that make visions and visioning processes work...... or not work. The theoretical part of the paper presents an actor-network theory approach to the analyses of visions and visioning processes, where the shaping of the visions and the visioning and what has made them work or not work is analysed. The empirical part is based on analyses of the roles of visions...... and visioning processes in a number of foresight processes from different societal contexts. The analyses have been carried out as part of the work in the COST A22 network on foresight. A vision is here understood as a description of a desirable or preferable future, compared to a scenario which is understood...

  3. A cost-effective intelligent robotic system with dual-arm dexterous coordination and real-time vision

    Science.gov (United States)

    Marzwell, Neville I.; Chen, Alexander Y. K.

    1991-01-01

    Dexterous coordination of manipulators based on the use of redundant degrees of freedom, multiple sensors, and built-in robot intelligence represents a critical breakthrough in the development of advanced manufacturing technology. A cost-effective approach for achieving this new generation of robotics has been made possible by the unprecedented growth of the latest microcomputer and network systems. The resulting flexible automation offers the opportunity to improve product quality, increase the reliability of the manufacturing process, and augment the production procedures for optimizing the utilization of the robotic system. Moreover, the Advanced Robotic System (ARS) is modular in design and can be upgraded by closely following technological advancements as they occur in various fields. This approach to manufacturing automation enhances the financial justification and ensures the long-term profitability and most efficient implementation of robotic technology. The new system also addresses a broad spectrum of manufacturing demand and has the potential to address both complex jobs and highly labor-intensive tasks. The ARS prototype employs the decomposed optimization technique in spatial planning. This technique is implemented within the framework of the sensor-actuator network to establish the general-purpose geometric reasoning system. The development computer system is a multiple-microcomputer network system, which provides the architecture for executing the modular network computing algorithms. The knowledge-based approach used in both the robot vision subsystem and the manipulation control subsystem provides real-time, vision-based image processing capability. The vision-based task environment analysis capability and the responsive motion capability are under the command of the local intelligence centers. An array of ultrasonic, proximity, and optoelectronic sensors is used for path planning. The ARS currently has 18 degrees of freedom made up by two

  4. Negative Affect in Human Robot Interaction

    DEFF Research Database (Denmark)

    Rehm, Matthias; Krogsager, Anders

    2013-01-01

    The vision of social robotics sees robots moving more and more into unrestricted social environments, where robots interact closely with users in their everyday activities, maybe even establishing relationships with the user over time. In this paper we present a field trial with a robot in a semi...

  5. Development of a Stereo Vision Measurement System for a 3D Three-Axial Pneumatic Parallel Mechanism Robot Arm

    Directory of Open Access Journals (Sweden)

    Chien-Lun Hou

    2011-02-01

    Full Text Available In this paper, a stereo vision 3D position measurement system for a three-axial pneumatic parallel mechanism robot arm is presented. The stereo vision 3D position measurement system aims to measure the 3D trajectories of the end-effector of the robot arm. To track the end-effector of the robot arm, the circle detection algorithm is used to detect the desired target and the sum-of-absolute-differences (SAD) algorithm is used to track the moving target and to search for the corresponding target location along the conjugate epipolar line in the stereo pair. After camera calibration, both intrinsic and extrinsic parameters of the stereo rig can be obtained, so images can be rectified according to the camera parameters. Thus, through the epipolar rectification, the stereo matching process is reduced to a horizontal search along the conjugate epipolar line. Finally, 3D trajectories of the end-effector are computed by stereo triangulation. The experimental results show that the stereo vision 3D position measurement system proposed in this paper can successfully track and measure the fifth-order polynomial trajectory and sinusoidal trajectory of the end-effector of the three-axial pneumatic parallel mechanism robot arm.
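
    The final triangulation step described above is standard; a minimal OpenCV sketch, assuming the rectified projection matrices P1 and P2 come from the calibration, could read:

        import numpy as np
        import cv2

        def triangulate(P1, P2, pt_left, pt_right):
            """Recover a 3D point from a matched pixel pair in a rectified
            stereo rig. P1 and P2 are the 3x4 projection matrices obtained
            from the camera calibration."""
            pl = np.array(pt_left, float).reshape(2, 1)
            pr = np.array(pt_right, float).reshape(2, 1)
            X = cv2.triangulatePoints(P1, P2, pl, pr)   # homogeneous 4x1
            return (X[:3] / X[3]).ravel()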

  6. Vision-based real-time position control of a semi-automated system for robot-assisted joint fracture surgery.

    Science.gov (United States)

    Dagnino, Giulio; Georgilas, Ioannis; Tarassoli, Payam; Atkins, Roger; Dogramadzi, Sanja

    2016-03-01

    Joint fracture surgery quality can be improved by a robotic system with high-accuracy and high-repeatability fracture fragment manipulation. A new real-time vision-based system for fragment manipulation during robot-assisted fracture surgery was developed and tested. The control strategy was accomplished by merging fast open-loop control with vision-based control. This two-phase process is designed to eliminate the open-loop positioning errors by closing the control loop using visual feedback provided by an optical tracking system. Evaluation of the control system accuracy was performed using robot positioning trials, and fracture reduction accuracy was tested in trials on an ex vivo porcine model. The system resulted in high fracture reduction reliability with a reduction accuracy of 0.09 mm (translations) and of [Formula: see text] (rotations), maximum observed errors in the order of 0.12 mm (translations) and of [Formula: see text] (rotations), and a reduction repeatability of 0.02 mm and [Formula: see text]. The proposed vision-based system was shown to be effective and suitable for real joint fracture surgical procedures, contributing to a potential improvement in their quality.

  7. Self-organization via active exploration in robotic applications

    Science.gov (United States)

    Ogmen, H.; Prakash, R. V.

    1992-01-01

    We describe a neural network based robotic system. Unlike traditional robotic systems, our approach focuses on non-stationary problems. We indicate that self-organization capability is necessary for any system to operate successfully in a non-stationary environment. We suggest that self-organization should be based on an active exploration process. We investigated neural architectures having novelty sensitivity, selective attention, reinforcement learning, habit formation, and flexible-criteria categorization properties, and analyzed the resulting behavior (consisting of an intelligent initiation of exploration) by computer simulations. While various computer vision researchers recently acknowledged the importance of active processes (Swain and Stricker, 1991), the proposed approaches within the new framework still suffer from a lack of self-organization (Aloimonos and Bandyopadhyay, 1987; Bajcsy, 1988). A self-organizing, neural network based robot (MAVIN) has been recently proposed (Baloch and Waxman, 1991). This robot has the capability of position-, size- and rotation-invariant pattern categorization, recognition and Pavlovian conditioning. Our robot does not initially have invariant processing properties. The reason for this is the emphasis we put on active exploration. We maintain the point of view that such invariant properties emerge from an internalization of exploratory sensory-motor activity. Rather than coding the equilibria of such mental capabilities, we are seeking to capture their dynamics, to understand on the one hand how the emergence of such invariances is possible and on the other hand the dynamics that lead to these invariances. The second point is crucial for an adaptive robot to acquire new invariances in non-stationary environments, as demonstrated by the inverting glass experiments of Helmholtz. We will introduce Pavlovian conditioning circuits in our future work for the precise objective of achieving the generation, coordination, and internalization

  8. Laws on Robots, Laws by Robots, Laws in Robots : Regulating Robot Behaviour by Design

    NARCIS (Netherlands)

    Leenes, R.E.; Lucivero, F.

    2015-01-01

    Speculation about robot morality is almost as old as the concept of a robot itself. Asimov’s three laws of robotics provide an early and well-discussed example of moral rules robots should observe. Despite the widespread influence of the three laws of robotics and their role in shaping visions of

  9. Method of mobile robot indoor navigation by artificial landmarks with use of computer vision

    Science.gov (United States)

    Glibin, E. S.; Shevtsov, A. A.; Enik, O. A.

    2018-05-01

    The article describes an algorithm for mobile robot indoor navigation based on visual odometry. The results of an experiment identifying errors in the calculated distance traveled caused by wheel slip are presented. It is shown that the use of computer vision allows one to correct erroneous coordinates of the robot with the help of artificial landmarks. A control system utilizing the proposed method has been realized on the basis of an Arduino Mega 2560 controller and a single-board Raspberry Pi 3 computer. The results of an experiment on mobile robot navigation using this control system are presented.
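
    A minimal sketch of the landmark correction step, assuming each artificial landmark has a known pose in the map and that the camera's mounting transform on the robot is calibrated (all matrix names are illustrative, not from the paper):

        import numpy as np

        def correct_pose(T_map_marker, T_cam_marker, T_robot_cam):
            """Reset the drifting odometry pose when a landmark with a known
            map pose is observed. All arguments are 4x4 homogeneous
            transforms: the marker's pose in the map, the marker's pose as
            measured by the camera, and the camera's mounting pose on the
            robot (assumed calibrated)."""
            T_map_cam = T_map_marker @ np.linalg.inv(T_cam_marker)
            return T_map_cam @ np.linalg.inv(T_robot_cam)   # robot pose in map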

  10. Vision Sensor-Based Road Detection for Field Robot Navigation

    Directory of Open Access Journals (Sweden)

    Keyu Lu

    2015-11-01

    Full Text Available Road detection is an essential component of field robot navigation systems. Vision sensors play an important role in road detection for their great potential in environmental perception. In this paper, we propose a hierarchical vision sensor-based method for robust road detection in challenging road scenes. More specifically, for a given road image captured by an on-board vision sensor, we introduce a multiple population genetic algorithm (MPGA)-based approach for efficient road vanishing point detection. Superpixel-level seeds are then selected in an unsupervised way using a clustering strategy. Then, according to the GrowCut framework, the seeds proliferate and iteratively try to occupy their neighbors. After convergence, the initial road segment is obtained. Finally, in order to achieve a globally-consistent road segment, the initial road segment is refined using the conditional random field (CRF) framework, which integrates high-level information into road detection. We perform several experiments to evaluate the general performance, scale sensitivity and noise sensitivity of the proposed method. The experimental results demonstrate that the proposed method exhibits high robustness compared to the state of the art.

  11. Beyond Speculative Robot Ethics

    NARCIS (Netherlands)

    Smits, M.; Van der Plas, A.

    2010-01-01

    In this article we develop a dialogue model for robot technology experts and designated users to discuss visions of the future of robotics in long-term care. Our vision assessment study aims for more distinguished and more informed visions of future robots. Surprisingly, our experiment also led to

  12. Robot Control for Dynamic Environment Using Vision and Autocalibration

    DEFF Research Database (Denmark)

    Larsen, Thomas Dall; Lildballe, Jacob; Andersen, Nils Axel

    1997-01-01

    To enhance flexibility and extend the area of applications for robotic systems, it is important that the systems are capable of handling uncertainties and respond to (random) human behaviour. A vision system must very often be able to work in a dynamical "noisy" world where the placement of objects...... can vary within certain restrictions. Furthermore it would be useful if the system is able to recover automatically after serious changes have been applied, for instance if the camera has been moved. In this paper an implementation of such a system is described. The system is a robot capable of playing...

  13. Estimation of visual maps with a robot network equipped with vision sensors.

    Science.gov (United States)

    Gil, Arturo; Reinoso, Óscar; Ballesta, Mónica; Juliá, Miguel; Payá, Luis

    2010-01-01

    In this paper we present an approach to the Simultaneous Localization and Mapping (SLAM) problem using a team of autonomous vehicles equipped with vision sensors. The SLAM problem considers the case in which a mobile robot is equipped with a particular sensor, moves through the environment, obtains measurements with its sensors and uses them to construct a model of the space in which it evolves. In this paper we focus on the case where several robots, each equipped with its own sensor, are distributed in a network and view the space from different vantage points. In particular, each robot is equipped with a stereo camera that allows the robots to extract visual landmarks and obtain relative measurements to them. We propose an algorithm that uses the measurements obtained by the robots to build a single accurate map of the environment. The map is represented by the three-dimensional positions of the visual landmarks. In addition, we consider that each landmark is accompanied by a visual descriptor that encodes its visual appearance. The solution is based on a Rao-Blackwellized particle filter that estimates the paths of the robots and the positions of the visual landmarks. The validity of our proposal is demonstrated by means of experiments with a team of real robots in an office-like indoor environment.

  14. Estimation of Visual Maps with a Robot Network Equipped with Vision Sensors

    Directory of Open Access Journals (Sweden)

    Arturo Gil

    2010-05-01

    Full Text Available In this paper we present an approach to the Simultaneous Localization and Mapping (SLAM) problem using a team of autonomous vehicles equipped with vision sensors. The SLAM problem considers the case in which a mobile robot is equipped with a particular sensor, moves through the environment, obtains measurements with its sensors and uses them to construct a model of the space in which it evolves. In this paper we focus on the case where several robots, each equipped with its own sensor, are distributed in a network and view the space from different vantage points. In particular, each robot is equipped with a stereo camera that allows the robots to extract visual landmarks and obtain relative measurements to them. We propose an algorithm that uses the measurements obtained by the robots to build a single accurate map of the environment. The map is represented by the three-dimensional positions of the visual landmarks. In addition, we consider that each landmark is accompanied by a visual descriptor that encodes its visual appearance. The solution is based on a Rao-Blackwellized particle filter that estimates the paths of the robots and the positions of the visual landmarks. The validity of our proposal is demonstrated by means of experiments with a team of real robots in an office-like indoor environment.
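
    Both records rest on particle-filter machinery; the sketch below shows only its skeleton for localizing one robot against a single landmark at a known position (in the Rao-Blackwellized filter of the papers, each particle additionally carries its own landmark estimates). All noise values are assumptions:

        import numpy as np

        rng = np.random.default_rng(1)

        def pf_step(particles, weights, u, z, landmark,
                    motion_noise=0.05, meas_noise=0.1):
            """One predict-weight-resample cycle for 2D localization.
            particles: Nx2 positions, u: odometry displacement, z: measured
            range to a landmark at a known position."""
            # Predict: apply odometry with noise.
            particles = particles + u + rng.normal(0, motion_noise, particles.shape)
            # Weight: Gaussian likelihood of the range measurement.
            pred = np.linalg.norm(particles - landmark, axis=1)
            weights = weights * np.exp(-0.5 * ((z - pred) / meas_noise) ** 2)
            weights /= weights.sum()
            # Resample (systematic resampling could be substituted here).
            idx = rng.choice(len(particles), len(particles), p=weights)
            return particles[idx], np.full(len(particles), 1.0 / len(particles))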

  15. Examples of design and achievement of vision systems for mobile robotics applications

    Science.gov (United States)

    Bonnin, Patrick J.; Cabaret, Laurent; Raulet, Ludovic; Hugel, Vincent; Blazevic, Pierre; M'Sirdi, Nacer K.; Coiffet, Philippe

    2000-10-01

    Our goal is to design and achieve a multi-purpose vision system for various robotics applications: wheeled robots (like cars for autonomous driving), legged robots (six- and four-legged robots such as SONY's AIBO, and humanoids) and flying robots (to inspect bridges, for example) in various conditions, indoor or outdoor. Considering that the constraints depend on the application, we propose an edge segmentation implemented either in software or in hardware using CPLDs (ASICs or FPGAs could be used too). After discussing the criteria of our choice, we propose a chain of image processing operators constituting an edge segmentation. Although this chain is quite simple and very fast to perform, results appear satisfactory. We propose a software implementation of it. Its temporal optimization is based on: its implementation under the pixel data-flow programming model, the gathering of local processing where possible, the simplification of computations, and the use of fast-access data structures. Then, we describe a first dedicated hardware implementation of the first part, which requires 9 CPLDs in this low-cost version. It is technically possible, but more expensive, to implement these algorithms using only a single FPGA.
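
    The exact operator chain is not given in the abstract; a software stand-in for such an edge segmentation (smoothing, gradients, magnitude thresholding; all parameter values assumed) might read:

        import cv2
        import numpy as np

        def edge_segmentation(gray, thresh=60):
            """Simple edge chain: smooth, Sobel gradients, magnitude threshold."""
            blur = cv2.GaussianBlur(gray, (5, 5), 1.0)
            gx = cv2.Sobel(blur, cv2.CV_32F, 1, 0, ksize=3)
            gy = cv2.Sobel(blur, cv2.CV_32F, 0, 1, ksize=3)
            mag = cv2.magnitude(gx, gy)
            return (mag > thresh).astype(np.uint8) * 255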

  16. A Collaborative Approach for Surface Inspection Using Aerial Robots and Computer Vision

    Directory of Open Access Journals (Sweden)

    Martin Molina

    2018-03-01

    Full Text Available Aerial robots with cameras on board can be used in surface inspection to observe areas that are difficult to reach by other means. In this type of problem, it is desirable for aerial robots to have a high degree of autonomy. A way to provide more autonomy would be to use computer vision techniques to automatically detect anomalies on the surface. However, the performance of automated visual recognition methods is limited in uncontrolled environments, so that in practice it is not possible to perform a fully automatic inspection. This paper presents a solution for visual inspection that increases the degree of autonomy of aerial robots following a semi-automatic approach. The solution is based on human-robot collaboration, in which the operator delegates exploration and visual recognition tasks to the drone and the drone requests assistance in the presence of uncertainty. We validate this proposal with the development of an experimental robotic system using the software framework Aerostack. The paper describes the technical challenges that we had to solve to develop such a system and the impact of this solution on the degree of autonomy in detecting anomalies on the surface.

  17. A Miniature Robot for Retraction Tasks under Vision Assistance in Minimally Invasive Surgery

    Directory of Open Access Journals (Sweden)

    Giuseppe Tortora

    2014-03-01

    Full Text Available Minimally Invasive Surgery (MIS) is one of the main aims of modern medicine. It enables surgery to be performed with a lower number and severity of incisions. Medical robots have been developed worldwide to offer a robotic alternative to traditional medical procedures. New approaches aimed at a substantial decrease of visible scars have been explored, such as Natural Orifice Transluminal Endoscopic Surgery (NOTES). Simple surgical tasks, such as the retraction of an organ, can be a challenge when performed from narrow access ports. For this reason, there is a continuous need to develop new robotic tools for performing dedicated tasks. This article illustrates the design and testing of a new robotic tool for retraction tasks under vision assistance for NOTES. The retraction robot integrates brushless motors to provide additional degrees of freedom beyond those offered by magnetic anchoring, thus improving the dexterity of the overall platform. The retraction robot can be easily controlled to reach the target organ and apply a retraction force of up to 1.53 N. The additional degrees of freedom can be used for smooth manipulation and grasping of the organ.

  18. Image registration algorithm for high-voltage electric power live line working robot based on binocular vision

    Science.gov (United States)

    Li, Chengqi; Ren, Zhigang; Yang, Bo; An, Qinghao; Yu, Xiangru; Li, Jinping

    2017-12-01

    In the process of dismounting and assembling the drop switch with the high-voltage electric power live line working (EPL2W) robot, one of the key problems is the positioning precision of the manipulators, the gripper and the bolts used to fix the drop switch. To solve it, we study the theory of the robot's binocular vision system and the characteristics of dismounting and assembling the drop switch. We propose a coarse-to-fine image registration algorithm based on image correlation, which improves the positioning precision of the manipulators and bolts significantly. The algorithm performs the following three steps: first, the target points are marked in the right and left views, and the system judges whether the target point in the right view satisfies the lowest registration accuracy by using the similarity of the target points' backgrounds in the right and left views; this is a typical coarse-to-fine strategy. Second, the system calculates the epipolar line, and a sequence of regions containing candidate matching points is generated from the neighborhood of the epipolar line; the optimal match is determined by computing the correlation between the template image from the left view and each region in the sequence. Finally, the precise coordinates of the target points in the right and left views are calculated from the optimal match. The experimental results indicate that the positioning accuracy in image coordinates is within 2 pixels and the positioning accuracy in the world coordinate system is within 3 mm; the positioning accuracy of the binocular vision thus satisfies the requirements for dismounting and assembling the drop switch.
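
    The correlation search restricted to the neighborhood of the epipolar line can be sketched as follows; the sketch assumes a rectified pair so the epipolar line is a pixel row, and the band width is an illustrative choice rather than the paper's:

        import cv2
        import numpy as np

        def match_along_epipolar(template, right_img, row, band=10):
            """Search for a left-view template inside a horizontal band of the
            right view centered on the epipolar row, using normalized
            cross-correlation."""
            h = template.shape[0]
            top = max(0, row - h // 2 - band)
            bottom = min(right_img.shape[0], row + h // 2 + band)
            strip = right_img[top:bottom]
            res = cv2.matchTemplate(strip, template, cv2.TM_CCOEFF_NORMED)
            _, score, _, loc = cv2.minMaxLoc(res)
            return (loc[0], top + loc[1]), score   # top-left of best match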

  19. CRV 2008: Fifth Canadian Conference on Computer and Robot Vision, Windsor, ON, Canada, May 2008

    DEFF Research Database (Denmark)

    Fihl, Preben

    This technical report will cover the participation in the fifth Canadian Conference on Computer and Robot Vision in May 2008. The report will give a concise description of the topics presented at the conference, focusing on the work related to the HERMES project and human motion and action...

  20. Development of a teaching system for an industrial robot using stereo vision

    Science.gov (United States)

    Ikezawa, Kazuya; Konishi, Yasuo; Ishigaki, Hiroyuki

    1997-12-01

    The teaching and playback method is the main teaching technique for industrial robots. However, this technique takes considerable time and effort. In this study, a new teaching algorithm using stereo vision, based on human demonstrations in front of two cameras, is proposed. In the proposed teaching algorithm, a robot is controlled repetitively according to angles determined by fuzzy set theory until it reaches an instructed teaching point, which is relayed through the cameras by an operator. The angles are recorded and used later in playback. The major advantage of this algorithm is that no calibrations are needed. This is because fuzzy set theory, which is able to express the control commands to the robot qualitatively, is used instead of conventional kinematic equations. Thus, a simple and easy teaching operation is realized with this teaching algorithm. Simulations and experiments have been performed on the proposed teaching system, and data from testing confirmed the usefulness of our design.

  1. Autonomous Kinematic Calibration of the Robot Manipulator with a Linear Laser-Vision Sensor

    Science.gov (United States)

    Kang, Hee-Jun; Jeong, Jeong-Woo; Shin, Sung-Weon; Suh, Young-Soo; Ro, Young-Schick

    This paper presents a new autonomous kinematic calibration technique using a laser-vision sensor called "Perceptron TriCam Contour". Because the sensor measures by capturing the image of a projected laser line on the surface of the object, we set up a long, straight line of very fine string inside the robot workspace, and then let the sensor mounted on the robot measure the intersection point of the string and the projected laser line. The data, collected by changing the robot configuration and measuring the intersection points, are constrained to lie on a single straight line, so that the closed-loop calibration method can be applied. The resulting calibration method is simple and accurate and also suitable for on-site calibration in an industrial environment. The method is implemented on a Hyundai VORG-35 robot to demonstrate its effectiveness.
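
    The closed-loop constraint amounts to minimizing, over the kinematic parameters, how far the measured points stray from a common straight line; the residual such a calibration drives toward zero can be computed as below (a sketch of the constraint only, not the full parameter optimization):

        import numpy as np

        def collinearity_residuals(points):
            """Distances of Nx3 measured intersection points from their
            best-fit 3D line; a closed-loop calibration would minimize these
            residuals with respect to the robot's kinematic parameters."""
            c = points.mean(axis=0)
            _, _, vt = np.linalg.svd(points - c)
            direction = vt[0]                    # principal direction of the line
            rel = points - c
            proj = np.outer(rel @ direction, direction)
            return np.linalg.norm(rel - proj, axis=1)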

  2. Teaching Joint-Level Robot Programming with a New Robotics Software Tool

    Directory of Open Access Journals (Sweden)

    Fernando Gonzalez

    2017-12-01

    Full Text Available With the rising popularity of robotics in our modern world there is an increase in the number of engineering programs that offer the basic Introduction to Robotics course. This common introductory robotics course generally covers the fundamental theory of robotics including robot kinematics, dynamics, differential movements, trajectory planning and basic computer vision algorithms commonly used in the field of robotics. Joint programming, the task of writing a program that directly controls the robot’s joint motors, is an activity that involves robot kinematics, dynamics, and trajectory planning. In this paper, we introduce a new educational robotics tool developed for teaching joint programming. The tool allows the student to write a program in a modified C language that controls the movement of the arm by controlling the velocity of each joint motor. This is a very important activity in the robotics course and leads the student to gain knowledge of how to build a robotic arm controller. Sample assignments are presented for different levels of difficulty.

  3. Self-localization for an autonomous mobile robot based on an omni-directional vision system

    Science.gov (United States)

    Chiang, Shu-Yin; Lin, Kuang-Yu; Chia, Tsorng-Lin

    2013-12-01

    In this study, we designed an autonomous mobile robot based on the rules of the Federation of International Robot-soccer Association (FIRA) RoboSot category, integrating the techniques of computer vision, real-time image processing, dynamic target tracking, wireless communication, self-localization, motion control, path planning, and control strategy to achieve the contest goal. The self-localization scheme of the mobile robot is based on algorithms that work on the images from its omni-directional vision system. In previous works, we used the image colors of the field goals as reference points, combining either dual-circle or trilateration positioning of the reference points to achieve self-localization of the autonomous mobile robot. However, because the image of the game field is easily affected by ambient light, positioning systems exclusively based on color-model algorithms cause errors. To reduce environmental effects and achieve self-localization of the robot, the proposed algorithm assesses the corners of the field lines by using an omni-directional vision system. Particularly in the mid-size league of the RoboCup soccer competition, self-localization algorithms based on extracting white lines from the soccer field have become increasingly popular. Moreover, white lines are less influenced by light than the color model of the goals. Therefore, we propose an algorithm that transforms the omni-directional image into an unwrapped transformed image, enhancing the extraction of features. The process is described as follows: First, radial scan-lines were used to process omni-directional images, reducing the computational load and improving system efficiency. The lines were arranged radially around the center of the omni-directional camera image, resulting in a shorter computational time compared with the traditional Cartesian coordinate system. However, the omni-directional image is a distorted image, which makes it difficult to recognize the
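
    The unwrapping step that turns the distorted omni-directional image into a panoramic one is a standard polar-to-Cartesian remap; a minimal OpenCV sketch, with the mirror-ring parameters assumed known from calibration:

        import cv2
        import numpy as np

        def unwrap_omni(img, center, r_min, r_max, out_w=720):
            """Polar-to-panoramic unwrapping of an omni-directional image.
            center, r_min and r_max describe the usable mirror ring."""
            theta = np.linspace(0, 2 * np.pi, out_w, endpoint=False)
            r = np.arange(r_min, r_max)
            map_x = (center[0] + np.outer(r, np.cos(theta))).astype(np.float32)
            map_y = (center[1] + np.outer(r, np.sin(theta))).astype(np.float32)
            return cv2.remap(img, map_x, map_y, cv2.INTER_LINEAR)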

  4. Aerial service robotics: the AIRobots perspective

    NARCIS (Netherlands)

    Marconi, L.; Basile, F.; Caprari, G.; Carloni, Raffaella; Chiacchio, P.; Hurzeler, C.; Lippiello, V.; Naldi, R.; Siciliano, B.; Stramigioli, Stefano; Zwicker, E.

    This paper presents the main vision and research activities of the ongoing European project AIRobots (Innovative Aerial Service Robot for Remote Inspection by Contact, www.airobots.eu). The goal of AIRobots is to develop a new generation of aerial service robots capable of supporting human beings

  5. VisGraB: A Benchmark for Vision-Based Grasping. Paladyn Journal of Behavioral Robotics

    DEFF Research Database (Denmark)

    Kootstra, Gert; Popovic, Mila; Jørgensen, Jimmy Alison

    2012-01-01

    We present a database and a software tool, VisGraB, for benchmarking of methods for vision-based grasping of unknown objects with no prior object knowledge. The benchmark is a combined real-world and simulated experimental setup. Stereo images of real scenes containing several objects in different... ...that a large number of grasps can be executed and evaluated while dealing with dynamics and the noise and uncertainty present in the real world images. VisGraB enables a fair comparison among different grasping methods. The user furthermore does not need to deal with robot hardware, focusing on the vision...

  6. Stereo vision with distance and gradient recognition

    Science.gov (United States)

    Kim, Soo-Hyun; Kang, Suk-Bum; Yang, Tae-Kyu

    2007-12-01

    Robot vision technology is needed for stable walking, object recognition and movement to a target spot. With sensors based on infrared rays or ultrasound, a robot can cope with urgent or dangerous situations. But stereo vision of three-dimensional space would give a robot powerful artificial intelligence. In this paper we consider stereo vision for the stable and correct movement of a biped robot. When a robot confronts an inclined plane or steps, particular algorithms are needed to proceed without failure. This study developed an algorithm that recognizes the distance and gradient of the environment through a stereo matching process.
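
    Once a disparity map is available from stereo matching, depth and the slope of the ground ahead follow directly; a small sketch (focal length and baseline assumed to come from the stereo calibration):

        import numpy as np

        def depth_and_gradient(disparity, f, baseline):
            """Per-pixel depth z = f*B/d plus the row-wise depth gradient,
            which separates a flat floor from an inclined plane or a step."""
            d = np.where(disparity > 0, disparity, np.nan)   # mask invalid pixels
            z = f * baseline / d
            dz_dy = np.gradient(z, axis=0)                   # vertical depth change
            return z, dz_dy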

  7. Vision-aided inertial navigation system for robotic mobile mapping

    Science.gov (United States)

    Bayoud, Fadi; Skaloud, Jan

    2008-04-01

    A mapping system by vision-aided inertial navigation was developed for areas where GNSS signals are unreachable. In this framework, a methodology on the integration of vision and inertial sensors is presented, analysed and tested. The system employs the method of “SLAM: Simultaneous Localisation And Mapping”, where the only external input available to the system at the beginning of the mapping mission is a number of features with known coordinates. SLAM is a term used in the robotics community to describe the problem of mapping the environment and at the same time using this map to determine the location of the mapping device. Differing from the robotics approach, the presented development stems from the frameworks of photogrammetry and kinematic geodesy, which are merged in two filters that run in parallel: the Least-Squares Adjustment (LSA) for feature coordinate determination and the Kalman filter (KF) for navigation correction. To test this approach, a mapping system prototype comprising two CCD cameras and one Inertial Measurement Unit (IMU) is introduced. Conceptually, the outputs of the LSA photogrammetric resection are used as the external measurements for the KF that corrects the inertial navigation. The filtered position and orientation are subsequently employed in the photogrammetric intersection to map the surrounding features that are used as control points for the resection in the next epoch. We confirm empirically the dependency of navigation performance on the quality of the images and the number of tracked features, as well as on the geometry of the stereo-pair. Due to its autonomous nature, the SLAM's performance is further affected by the quality of IMU initialisation and the a priori assumptions on error distribution. Using the example of the presented system we show that centimetre accuracy can be achieved in both navigation and mapping when the image geometry is optimal.

  8. Get a head in telepresence: active vision for remote intervention

    International Nuclear Information System (INIS)

    Pretlove, J.

    1996-01-01

    Despite advances in robotic systems, many tasks needing to be undertaken in hazardous environments require human control. The risk to human life can be reduced or minimised using an integrated control system comprising an active controllable stereo vision system and a virtual reality head-mounted display. The human operator is then immersed in and can interact with the remote environment in complete safety. An overview is presented of the design and development of just such an advanced, dynamic telepresence system, developed at the Department of Mechanical Engineering at the University of Surrey. (UK)

  9. Vision-Based Robot Following Using PID Control

    Directory of Open Access Journals (Sweden)

    Chandra Sekhar Pati

    2017-06-01

    Full Text Available Applications like robots which are employed for shopping, porter services, assistive robotics, etc., require a robot to continuously follow a human or another robot. This paper presents a mobile robot following another tele-operated mobile robot based on a PID (Proportional-Integral-Differential) controller. Here, we use two differential wheel drive robots; one is a master robot and the other is a follower robot. The master robot is manually controlled and the follower robot is programmed to follow the master robot. For the master robot, a Bluetooth module receives the user's command from an Android application; the command is processed by the master robot's controller, which moves the robot. The follower robot receives the image from the Kinect sensor mounted on it and recognizes the master robot. The follower robot identifies the x, y positions by employing the camera and the depth by using the Kinect depth sensor. By identifying the x, y, and z locations of the master robot, the follower robot finds the angle and distance between the master and follower robots, which are given as the error terms of the PID controllers. Using this, the follower robot follows the master robot. A PID controller is based on feedback and tries to minimize the error. Experiments are conducted with two indigenously developed robots: one depicting a humanoid and the other a small mobile robot. It was observed that the follower robot was easily able to follow the master robot using well-tuned PID parameters.
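
    A minimal version of the distance-and-bearing PID loop described above (the gains and velocity mixing are illustrative assumptions, not the authors' tuned values):

        class PID:
            """Discrete PID controller with a fixed sample time."""
            def __init__(self, kp, ki, kd, dt):
                self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
                self.integral = 0.0
                self.prev_error = 0.0

            def step(self, error):
                self.integral += error * self.dt
                derivative = (error - self.prev_error) / self.dt
                self.prev_error = error
                return self.kp * error + self.ki * self.integral + self.kd * derivative

        # One controller per error term: bearing to the master and distance.
        angle_pid = PID(kp=1.2, ki=0.01, kd=0.05, dt=0.05)
        dist_pid = PID(kp=0.8, ki=0.02, kd=0.10, dt=0.05)

        def follow_step(bearing, distance, target_distance=1.0):
            """Map the master's bearing/distance (from the Kinect) to follower
            linear and angular velocity commands."""
            omega = angle_pid.step(bearing)                 # steer toward master
            v = dist_pid.step(distance - target_distance)   # hold set distance
            return v, omega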

  10. Vision-Based Robot Following Using PID Control

    OpenAIRE

    Chandra Sekhar Pati; Rahul Kala

    2017-01-01

    Applications like robots which are employed for shopping, porter services, assistive robotics, etc., require a robot to continuously follow a human or another robot. This paper presents a mobile robot following another tele-operated mobile robot based on a PID (Proportional–Integral-Differential) controller. Here, we use two differential wheel drive robots; one is a master robot and the other is a follower robot. The master robot is manually controlled and the follower robot is programmed to ...

  11. A High Precision Approach to Calibrate a Structured Light Vision Sensor in a Robot-Based Three-Dimensional Measurement System

    Directory of Open Access Journals (Sweden)

    Defeng Wu

    2016-08-01

    Full Text Available A robot-based three-dimensional (3D) measurement system is presented. In the presented system, a structured light vision sensor is mounted on the arm of an industrial robot. Measurement accuracy is one of the most important aspects of any 3D measurement system. To improve the measuring accuracy of the structured light vision sensor, a novel sensor calibration approach is proposed. The approach is based on a number of fixed concentric circles manufactured on a calibration target. The concentric circles are employed to determine the real projected centres of the circles. Then, a calibration point generation procedure is used with the help of the calibrated robot. When enough calibration points are ready, the radial alignment constraint (RAC) method is adopted to calibrate the camera model. A multilayer perceptron neural network (MLPNN) is then employed to identify the calibration residuals remaining after the application of the RAC method. Therefore, a hybrid of the pinhole model and the MLPNN is used to represent the real camera model. Using a standard ball to validate the effectiveness of the presented technique, the experimental results demonstrate that the proposed calibration approach can achieve a highly accurate model of the structured light vision sensor.

  12. Autonomous military robotics

    CERN Document Server

    Nath, Vishnu

    2014-01-01

    This SpringerBrief reveals the latest techniques in computer vision and machine learning on robots that are designed as accurate and efficient military snipers. Militaries around the world are investigating this technology to simplify the time, cost and safety measures necessary for training human snipers. These robots are developed by combining crucial aspects of computer science research areas including image processing, robotic kinematics and learning algorithms. The authors explain how a new humanoid robot, the iCub, uses high-speed cameras and computer vision algorithms to track the objec

  13. Humanlike robot hands controlled by brain activity arouse illusion of ownership in operators

    Science.gov (United States)

    Alimardani, Maryam; Nishio, Shuichi; Ishiguro, Hiroshi

    2013-08-01

    Operators of a pair of robotic hands report ownership of those hands when they hold an image of a grasp motion and watch the robot perform it. We present a novel body ownership illusion that is induced by merely watching and controlling a robot's motions through a brain-machine interface. In past studies, body ownership illusions were induced by the correlation of such sensory inputs as vision, touch and proprioception. However, in the presented illusion none of the mentioned sensations are integrated except vision. Our results show that during BMI operation of robotic hands, the interaction between motor commands and visual feedback of the intended motions is sufficient to incorporate the non-body limbs into one's own body. Our discussion focuses on the role of proprioceptive information in the mechanism of agency-driven illusions. We believe that our findings will contribute to the improvement of tele-presence systems in which operators incorporate BMI-operated robots into their body representations.

  14. Night Vision Image De-Noising of Apple Harvesting Robots Based on the Wavelet Fuzzy Threshold

    Directory of Open Access Journals (Sweden)

    Chengzhi Ruan

    2015-12-01

    Full Text Available In this paper, the de-noising problem of night vision images is studied for apple harvesting robots working at night. The wavelet threshold method is applied to the de-noising of night vision images. Because the choice of wavelet threshold function limits the effectiveness of the wavelet threshold method, fuzzy theory is introduced to construct a fuzzy threshold function. We then propose a de-noising algorithm based on the wavelet fuzzy threshold. This new method can reduce image noise interference, which is conducive to further image segmentation and recognition. To demonstrate the performance of the proposed method, we conducted simulation experiments and compared it with the median filtering and the wavelet soft-threshold de-noising methods. It is shown that the new method achieves the highest relative PSNR: compared with the original images, the median filtering de-noising method and the classical wavelet threshold de-noising method, the relative PSNR increases by 24.86%, 13.95%, and 11.38%, respectively. We carry out comparisons from various aspects, such as intuitive visual evaluation, objective data evaluation, edge evaluation and artificial light evaluation. The experimental results show that the proposed method has unique advantages for the de-noising of night vision images, laying the foundation for apple harvesting robots working at night.
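
    For reference, the classical wavelet soft-threshold baseline that the fuzzy threshold is compared against can be written compactly with PyWavelets (the fuzzy threshold function itself is not reproduced in the abstract and is not attempted here):

        import numpy as np
        import pywt

        def wavelet_denoise(img, wavelet="db4", level=2):
            """Baseline soft-threshold wavelet de-noising of a grayscale
            image array (the paper replaces this fixed threshold with a
            fuzzy-theory construction)."""
            coeffs = pywt.wavedec2(img.astype(float), wavelet, level=level)
            # Universal threshold from the finest diagonal detail band.
            sigma = np.median(np.abs(coeffs[-1][-1])) / 0.6745
            t = sigma * np.sqrt(2 * np.log(img.size))
            out = [coeffs[0]] + [
                tuple(pywt.threshold(c, t, mode="soft") for c in detail)
                for detail in coeffs[1:]
            ]
            return pywt.waverec2(out, wavelet)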

  15. Thoughts turned into high-level commands: Proof-of-concept study of a vision-guided robot arm driven by functional MRI (fMRI) signals.

    Science.gov (United States)

    Minati, Ludovico; Nigri, Anna; Rosazza, Cristina; Bruzzone, Maria Grazia

    2012-06-01

    Previous studies have demonstrated the possibility of using functional MRI to control a robot arm through a brain-machine interface by directly coupling haemodynamic activity in the sensory-motor cortex to the position of two axes. Here, we extend this work by implementing interaction at a more abstract level, whereby imagined actions deliver structured commands to a robot arm guided by a machine vision system. Rather than extracting signals from a small number of pre-selected regions, the proposed system adaptively determines at the individual level how to map representative brain areas to the input nodes of a classifier network. In this initial study, a median action recognition accuracy of 90% was attained on five volunteers performing a game consisting of collecting randomly positioned coloured pawns and placing them into cups. The "pawn" and "cup" instructions were imparted through four mental imagery tasks, linked to robot arm actions by a state machine. With the current implementation in the MATLAB language, the median action recognition time was 24.3 s and the robot execution time was 17.7 s. We demonstrate the notion of combining haemodynamic brain-machine interfacing with computer vision to implement interaction at the level of high-level commands rather than individual movements, which may find application in future fMRI approaches relevant to brain-lesioned patients, and provide source code supporting further work on larger command sets and real-time processing. Copyright © 2012 IPEM. Published by Elsevier Ltd. All rights reserved.

  16. Deviation from Trajectory Detection in Vision based Robotic Navigation using SURF and Subsequent Restoration by Dynamic Auto Correction Algorithm

    Directory of Open Access Journals (Sweden)

    Ray Debraj

    2015-01-01

    Full Text Available Speeded Up Robust Feature (SURF) matching is used to position a robot with respect to an environment and aid in vision-based robotic navigation. During navigation, irregularities in the terrain, especially in an outdoor environment, may deviate a robot from its track. Another reason for deviation can be unequal speeds of the left and right robot wheels. Hence it is essential to detect such deviations and perform corrective operations to bring the robot back on track. In this paper we propose a novel algorithm that uses image matching with SURF to detect deviation of a robot from its trajectory, followed by restoration through corrective operations. This algorithm is executed in parallel with the positioning and navigation algorithms by distributing tasks among different CPU cores using the Open Multi-Processing (OpenMP) API.
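
    The deviation test reduces to comparing feature positions between a stored on-track reference view and the current view. The sketch below uses ORB instead of SURF (SURF sits in OpenCV's non-free module), which is a deliberate substitution, and the offset heuristic is illustrative rather than the paper's:

        import cv2
        import numpy as np

        orb = cv2.ORB_create(500)
        bf = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

        def horizontal_offset(reference, current):
            """Median horizontal shift of matched keypoints between the
            reference view and the current view; a large offset signals
            deviation and its sign gives the corrective turn direction."""
            k1, d1 = orb.detectAndCompute(reference, None)
            k2, d2 = orb.detectAndCompute(current, None)
            if d1 is None or d2 is None:
                return None
            matches = bf.match(d1, d2)
            if not matches:
                return None
            dx = [k2[m.trainIdx].pt[0] - k1[m.queryIdx].pt[0] for m in matches]
            return float(np.median(dx))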

  17. Robot bicolor system

    Science.gov (United States)

    Yamaba, Kazuo

    1999-03-01

    In the case of robot vision, the most important problem is that the speed of acquiring and analyzing images is lower than the execution speed of the robot. In an actual robot color vision system, it is considered that the system should process in real time. We guessed this problem might be solved by using the bicolor analysis technique. We have been testing a system which we hope will give us insight into the properties of bicolor vision. The experiment used the red channel of a color CCD camera and an image from a monochromatic camera to duplicate McCann's theory. To mix the two signals together, the mono image is copied into each of the red, green and blue memory banks of the image processing board, and the red image is then added to the red bank. Conversely, pure color images, i.e. red, green and blue components, are obtained from the original bicolor images in the novel color system after a scaling factor is added to each RGB image. Our search for a bicolor robot vision system was entirely successful.

  18. Vector disparity sensor with vergence control for active vision systems.

    Science.gov (United States)

    Barranco, Francisco; Diaz, Javier; Gibaldi, Agostino; Sabatini, Silvio P; Ros, Eduardo

    2012-01-01

    This paper presents an architecture for computing vector disparity for active vision systems as used in robotics applications. The control of the vergence angle of a binocular system allows us to efficiently explore dynamic environments, but requires a generalization of the disparity computation with respect to a static camera setup, where the disparity is strictly 1-D after the image rectification. The interaction between vision and motor control allows us to develop an active sensor that achieves high accuracy of the disparity computation around the fixation point, and fast reaction time for the vergence control. In this contribution, we address the development of a real-time architecture for vector disparity computation using an FPGA device. We implement the disparity unit and the control module for vergence, version, and tilt to determine the fixation point. In addition, two different on-chip alternatives for the vector disparity engines are discussed, based on the luminance (gradient-based) and phase information of the binocular images. The multiscale versions of these engines are able to estimate the vector disparity at up to 32 fps on VGA resolution images with very good accuracy, as shown using benchmark sequences with known ground truth. The performances in terms of frame rate, resource utilization, and accuracy of the presented approaches are discussed. On the basis of these results, our study indicates that the gradient-based approach leads to the best trade-off choice for the integration with the active vision system.

  19. State of the art of robotic surgery related to vision: brain and eye applications of newly available devices

    Science.gov (United States)

    Nuzzi, Raffaele

    2018-01-01

    Background: Robot-assisted surgery has revolutionized many surgical subspecialties, mainly where procedures have to be performed in confined, difficult to visualize spaces. Despite advances in general surgery and neurosurgery, in vivo application of robotics to ocular surgery is still in its infancy, owing to the particular complexities of microsurgery. The use of robotic assistance and feedback guidance on surgical maneuvers could improve the technical performance of expert surgeons during the initial phase of the learning curve. Evidence acquisition: We analyzed the advantages and disadvantages of surgical robots, as well as the present applications and future outlook of robotics in neurosurgery in brain areas related to vision and ophthalmology. Discussion: Limitations to robotic assistance remain that need to be overcome before it can be more widely applied in ocular surgery. Conclusion: There is heightened interest in studies documenting computerized systems that filter out hand tremor and optimize speed of movement, control of force, and direction and range of movement. Further research is still needed to validate robot-assisted procedures. PMID:29440943

  20. Vision-based mobile robot navigation through deep convolutional neural networks and end-to-end learning

    Science.gov (United States)

    Zhang, Yachu; Zhao, Yuejin; Liu, Ming; Dong, Liquan; Kong, Lingqin; Liu, Lingling

    2017-09-01

    In contrast to humans, who use only visual information for navigation, many mobile robots use laser scanners and ultrasonic sensors along with vision cameras to navigate. This work proposes a vision-based robot control algorithm based on deep convolutional neural networks. We create a large 15-layer convolutional neural network learning system and achieve advanced recognition performance. Our system is trained end to end to map raw input images to directions in supervised mode. The images in the data sets are collected in a wide variety of weather and lighting conditions. In addition, the data sets are augmented by adding Gaussian noise and salt-and-pepper noise to avoid overfitting. The algorithm is verified by two experiments: line tracking and obstacle avoidance. The line tracking experiment is performed in order to track a desired path composed of straight and curved lines. The goal of the obstacle avoidance experiment is to avoid obstacles indoors. Finally, we obtain a 3.29% error rate on the training set and a 5.1% error rate on the test set in the line tracking experiment, and a 1.8% error rate on the training set and less than a 5% error rate on the test set in the obstacle avoidance experiment. During the actual test, the robot can follow the runway centerline outdoors and avoid obstacles in the room accurately. The results confirm the effectiveness of the algorithm and of our improvements to the network structure and training parameters.
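
    The paper's 15-layer architecture is not reproduced in the abstract; the compact stand-in below only illustrates the end-to-end supervised mapping from a camera frame to a discrete direction (layer sizes, labels and hyperparameters are all assumptions):

        import torch
        import torch.nn as nn

        class SteeringNet(nn.Module):
            """Small CNN mapping an RGB frame to one of three directions
            (left, straight, right); not the paper's exact network."""
            def __init__(self, n_directions=3):
                super().__init__()
                self.features = nn.Sequential(
                    nn.Conv2d(3, 16, 5, stride=2), nn.ReLU(),
                    nn.Conv2d(16, 32, 5, stride=2), nn.ReLU(),
                    nn.Conv2d(32, 64, 3, stride=2), nn.ReLU(),
                    nn.AdaptiveAvgPool2d(1),
                )
                self.classifier = nn.Linear(64, n_directions)

            def forward(self, x):
                return self.classifier(self.features(x).flatten(1))

        # End-to-end supervised training step: image -> direction label.
        model = SteeringNet()
        loss_fn = nn.CrossEntropyLoss()
        opt = torch.optim.Adam(model.parameters(), lr=1e-4)
        x = torch.randn(8, 3, 120, 160)        # batch of camera frames
        y = torch.randint(0, 3, (8,))          # direction labels
        opt.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()
        opt.step()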

  1. Real-time stereo generation for surgical vision during minimal invasive robotic surgery

    Science.gov (United States)

    Laddi, Amit; Bhardwaj, Vijay; Mahapatra, Prasant; Pankaj, Dinesh; Kumar, Amod

    2016-03-01

    This paper proposes a framework for 3D surgical vision for minimally invasive robotic surgery. It presents an approach for generating a three-dimensional view of live in-vivo surgical procedures from two images captured by a very small, full-resolution camera sensor rig. A pre-processing scheme is employed to enhance image quality and equalize the color profiles of the two images. Polarized projection with interlacing of the two images gives a smooth and strain-free three-dimensional view. The algorithm runs in real time at good speed and full HD resolution.

  2. A calibration system for measuring 3D ground truth for validation and error analysis of robot vision algorithms

    Science.gov (United States)

    Stolkin, R.; Greig, A.; Gilby, J.

    2006-10-01

    An important task in robot vision is that of determining the position, orientation and trajectory of a moving camera relative to an observed object or scene. Many such visual tracking algorithms have been proposed in the computer vision, artificial intelligence and robotics literature over the past 30 years. However, it is seldom possible to explicitly measure the accuracy of these algorithms, since the ground-truth camera positions and orientations at each frame in a video sequence are not available for comparison with the outputs of the proposed vision systems. A method is presented for generating real visual test data with complete underlying ground truth. The method enables the production of long video sequences, filmed along complicated six-degree-of-freedom trajectories, featuring a variety of objects and scenes, for which complete ground-truth data are known including the camera position and orientation at every image frame, intrinsic camera calibration data, a lens distortion model and models of the viewed objects. This work encounters a fundamental measurement problem—how to evaluate the accuracy of measured ground truth data, which is itself intended for validation of other estimated data. Several approaches for reasoning about these accuracies are described.

  3. Multi-Robot FastSLAM for Large Domains

    Science.gov (United States)

    2007-03-01

    Only fragments of this report's reference list were captured in this record, including: Derr, Fox, and Cremers, "Integrating global position estimation and position tracking for mobile robots: the dynamic Markov localization approach"; a citation to the AAAI 2000 proceedings; Andrew J. Davison and David W. Murray, "Simultaneous Localization and Map-Building Using Active Vision" (IEEE); and Wyeth, Milford, and Prasser, "A Modified Particle Filter for Simultaneous Robot Localization and Landmark Tracking in an Indoor…"

  4. Vision-based online vibration estimation of the in-vessel inspection flexible robot with short-time Fourier transformation

    International Nuclear Information System (INIS)

    Wang, Hesheng; Chen, Weidong; Xu, Lifei; He, Tao

    2015-01-01

    Highlights: • Vision-based online vibration estimation method for a flexible arm is proposed. • The vibration signal is obtained by image processing in unknown environments. • Vibration parameters are estimated by short-time Fourier transformation. - Abstract: Because of its structural features and material properties, a flexible robot may vibrate during motion or under external disturbance; this vibration should be suppressed because it can degrade positioning accuracy and image quality. In a Tokamak environment, real-time vibration information is needed for vibration suppression of the robotic arm; however, some sensors are not allowed in this extreme environment. This paper proposes a vision-based method for online vibration estimation of a flexible manipulator, which utilizes environment image information from the end-effector camera to estimate its vibration. A short-time Fourier transformation with an adaptive window length is used to estimate the vibration parameters of non-stationary vibration signals. Experiments with a one-link flexible manipulator equipped with a camera are carried out to validate the feasibility of the method.

  5. Vision-based online vibration estimation of the in-vessel inspection flexible robot with short-time Fourier transformation

    Energy Technology Data Exchange (ETDEWEB)

    Wang, Hesheng [Key Laboratory of System Control and Information Processing, Ministry of Education of China (China); Department of Automation, Shanghai Jiao Tong University, Shanghai 200240 (China); Chen, Weidong, E-mail: wdchen@sjtu.edu.cn [Key Laboratory of System Control and Information Processing, Ministry of Education of China (China); Department of Automation, Shanghai Jiao Tong University, Shanghai 200240 (China); Xu, Lifei; He, Tao [Key Laboratory of System Control and Information Processing, Ministry of Education of China (China); Department of Automation, Shanghai Jiao Tong University, Shanghai 200240 (China)

    2015-10-15

    Highlights: • Vision-based online vibration estimation method for a flexible arm is proposed. • The vibration signal is obtained by image processing in unknown environments. • Vibration parameters are estimated by short-time Fourier transformation. - Abstract: Because of its structural features and material properties, a flexible robot may vibrate during motion or under external disturbance; this vibration should be suppressed because it can degrade positioning accuracy and image quality. In a Tokamak environment, real-time vibration information is needed for vibration suppression of the robotic arm; however, some sensors are not allowed in this extreme environment. This paper proposes a vision-based method for online vibration estimation of a flexible manipulator, which utilizes environment image information from the end-effector camera to estimate its vibration. A short-time Fourier transformation with an adaptive window length is used to estimate the vibration parameters of non-stationary vibration signals. Experiments with a one-link flexible manipulator equipped with a camera are carried out to validate the feasibility of the method.
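
    The estimation step of both records above, a short-time Fourier transform over the image-derived vibration signal, can be sketched as follows. This simplified illustration uses a fixed window length (the paper uses an adaptive window length); the sampling rate and the synthetic decaying-chirp signal are assumptions.

      import numpy as np
      from scipy.signal import stft

      fs = 200.0                              # assumed sampling rate, Hz
      t = np.arange(0, 5, 1 / fs)
      # stand-in for the tip displacement extracted by image processing
      signal = np.exp(-0.4 * t) * np.sin(2 * np.pi * (6 + 0.5 * t) * t)

      f, times, Z = stft(signal, fs=fs, nperseg=128, noverlap=96)
      dominant = f[np.abs(Z).argmax(axis=0)]  # dominant frequency per time slice
      amplitude = np.abs(Z).max(axis=0)       # rough, unscaled amplitude envelope
      print(dominant[:5], amplitude[:5])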

  6. Development of a 3D parallel mechanism robot arm with three vertical-axial pneumatic actuators combined with a stereo vision system.

    Science.gov (United States)

    Chiang, Mao-Hsiung; Lin, Hao-Ting

    2011-01-01

    This study aimed to develop a novel 3D parallel mechanism robot driven by three vertical-axial pneumatic actuators with a stereo vision system for path tracking control. The mechanical system and the control system are the primary novel parts of the developed robot. In the mechanical system, the 3D parallel mechanism robot contains three serial chains, a fixed base, a movable platform and a pneumatic servo system. The parallel mechanism is first designed and analyzed to realize 3D motion of the robot's end-effector in the X-Y-Z coordinate system. The inverse and forward kinematics of the parallel mechanism robot are investigated using the Denavit-Hartenberg notation (D-H notation), and the pneumatic actuators in the three vertical motion axes are modeled. In the control system, a Fourier series-based adaptive sliding-mode controller with H(∞) tracking performance is used to design the path tracking controllers of the three vertical servo pneumatic actuators, realizing 3D path tracking control of the end-effector. Three optical linear scales measure the positions of the three pneumatic actuators, and the 3D position of the end-effector is then calculated from these measurements by means of the kinematics. However, the calculated 3D position cannot account for the manufacturing and assembly tolerances of the joints and the parallel mechanism, so errors exist between the actual and the calculated 3D position of the end-effector. To improve this situation, sensor collaboration is developed in this paper: a stereo vision system combining two CCD cameras collaborates with the three position sensors of the pneumatic actuators to measure the actual 3D position of the end-effector and calibrate the error between the actual and the calculated 3D position. Furthermore, to…
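
    The Denavit-Hartenberg kinematics mentioned above can be illustrated with a short forward-kinematics sketch. The three-row D-H table below is hypothetical, not the parameters of the actual mechanism.

      import numpy as np

      def dh_transform(theta, d, a, alpha):
          """Homogeneous transform for one joint from classic D-H parameters."""
          ct, st = np.cos(theta), np.sin(theta)
          ca, sa = np.cos(alpha), np.sin(alpha)
          return np.array([[ct, -st * ca,  st * sa, a * ct],
                           [st,  ct * ca, -ct * sa, a * st],
                           [0.0,      sa,       ca,      d],
                           [0.0,     0.0,      0.0,    1.0]])

      def forward_kinematics(dh_rows):
          """Chain the per-joint transforms; returns the end-effector pose."""
          T = np.eye(4)
          for row in dh_rows:
              T = T @ dh_transform(*row)
          return T

      # hypothetical 3-joint table of (theta, d, a, alpha)
      pose = forward_kinematics([(0.1, 0.3, 0.00, np.pi / 2),
                                 (0.2, 0.0, 0.25, 0.0),
                                 (0.0, 0.0, 0.20, 0.0)])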

  7. Visual guidance of a pig evisceration robot using neural networks

    DEFF Research Database (Denmark)

    Christensen, S.S.; Andersen, A.W.; Jørgensen, T.M.

    1996-01-01

    The application of a RAM-based neural network to robot vision is demonstrated for the guidance of a pig evisceration robot. Tests of the combined robot-vision system have been performed at an abattoir. The vision system locates a set of feature points on a pig carcass and transmits the 3D coordin...

  8. An active robot vision system for real-time 3-D structure recovery

    Energy Technology Data Exchange (ETDEWEB)

    Juvin, D. [CEA Centre d`Etudes de Saclay, 91 - Gif-sur-Yvette (France). Dept. d`Electronique et d`Instrumentation Nucleaire; Boukir, S.; Chaumette, F.; Bouthemy, P. [Rennes-1 Univ., 35 (France)

    1993-10-01

    This paper presents an active approach to computing the 3-D structure of a nuclear plant environment from an image sequence, more precisely the recovery of the 3-D structure of cylindrical objects. Active vision is realized by computing adequate camera motions using image-based control laws. This approach requires real-time tracking of the limbs of the cylinders; therefore, an original matching approach, which relies on an algorithm for determining moving edges, is proposed. This method is distinguished by its robustness and its ease of implementation. It has been implemented on a parallel image processing board, and real-time performance has been achieved. The whole scheme has been successfully validated in an experimental set-up.

  9. An active robot vision system for real-time 3-D structure recovery

    International Nuclear Information System (INIS)

    Juvin, D.

    1993-01-01

    This paper presents an active approach to computing the 3-D structure of a nuclear plant environment from an image sequence, more precisely the recovery of the 3-D structure of cylindrical objects. Active vision is realized by computing adequate camera motions using image-based control laws. This approach requires real-time tracking of the limbs of the cylinders; therefore, an original matching approach, which relies on an algorithm for determining moving edges, is proposed. This method is distinguished by its robustness and its ease of implementation. It has been implemented on a parallel image processing board, and real-time performance has been achieved. The whole scheme has been successfully validated in an experimental set-up.

  10. Facilitating Programming of Vision-Equipped Robots through Robotic Skills and Projection Mapping

    DEFF Research Database (Denmark)

    Andersen, Rasmus Skovgaard

    The field of collaborative industrial robots is currently developing fast both in the industry and in the scientific community. Companies such as Rethink Robotics and Universal Robots are redefining the concept of an industrial robot and entire new markets and use cases are becoming relevant for ...

  11. Vision-based Ground Test for Active Debris Removal

    Directory of Open Access Journals (Sweden)

    Seong-Min Lim

    2013-12-01

    Due to continuous space development by mankind, the number of space objects, including space debris, in orbits around the Earth has increased, and accordingly, difficulties for space development and activities are expected in the near future. In this study, among the stages of space debris removal, the implementation of a vision-based technique for approaching space debris from a far-range rendezvous state to a proximity state is described, together with ground test results. For vision-based object tracking, the CAM-shift algorithm, with its high speed and strong performance, was combined with a Kalman filter. A stereo camera was used to measure the distance to the tracked object. For the construction of a low-cost space environment simulation test bed, a sun simulator was used, and a two-dimensional mobile robot served as the approach platform. The tracking status was examined while changing the position of the sun simulator; the results indicated that CAM-shift achieved a tracking rate of about 87% and that the relative distance could be measured down to 0.9 m. In addition, considerations for future space environment simulation tests were proposed.
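
    A minimal sketch of the tracking loop described above, combining OpenCV's CAM-shift with a constant-velocity Kalman filter over the window centre, is given below. The color histogram, initial search window, and termination criteria are assumed to be initialized elsewhere; the noise covariances are illustrative.

      import cv2
      import numpy as np

      # constant-velocity Kalman filter over state (x, y, vx, vy)
      kf = cv2.KalmanFilter(4, 2)
      kf.transitionMatrix = np.array([[1, 0, 1, 0],
                                      [0, 1, 0, 1],
                                      [0, 0, 1, 0],
                                      [0, 0, 0, 1]], np.float32)
      kf.measurementMatrix = np.eye(2, 4, dtype=np.float32)
      kf.processNoiseCov = 1e-3 * np.eye(4, dtype=np.float32)
      kf.measurementNoiseCov = 1e-1 * np.eye(2, dtype=np.float32)

      def track(frame, hist, window, term):
          """One CAM-shift step smoothed by the Kalman filter."""
          hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
          backproj = cv2.calcBackProject([hsv], [0], hist, [0, 180], 1)
          _, window = cv2.CamShift(backproj, window, term)
          x, y, w, h = window
          kf.predict()
          est = kf.correct(np.array([[x + w / 2], [y + h / 2]], np.float32))
          return (float(est[0, 0]), float(est[1, 0])), window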

  12. A new technique for robot vision in autonomous underwater vehicles using the color shift in underwater imaging

    Science.gov (United States)

    2017-06-01

    Record fragments (thesis front matter): "A New Technique for Robot Vision in Autonomous Underwater Vehicles Using the Color Shift in Underwater Imaging," by Jake A. Jones, Lieutenant Commander, United States Navy, June 2017. The technique uses the color shift in underwater imaging to determine the distance from each pixel to the camera. Subject terms: unmanned undersea vehicles (UUVs), autonomous…

  13. Robotics Activities in The Netherlands

    NARCIS (Netherlands)

    Kranenburg- de Lange, D.J.B.A.

    2010-01-01

    Since April 2010, robotics activities in The Netherlands have been coordinated by RoboNED. This Dutch robotics platform, chaired by Prof. Stefano Stramigioli, aims to stimulate synergy between the robotics fields and to formulate a focus. The goal of RoboNED is threefold: 1) RoboNED aims to bring the

  14. Embedded vision equipment of industrial robot for inline detection of product errors by clustering–classification algorithms

    Directory of Open Access Journals (Sweden)

    Kamil Zidek

    2016-10-01

    The article deals with the design of embedded vision equipment for industrial robots for inline diagnosis of product errors during the manipulation process. The vision equipment can be attached to the end effector of a robot or manipulator; it captures an image of the part surface before grasping, searches for errors during manipulation, and separates defective products from the next manufacturing operation. The new approach is a methodology based on machine learning for the automated identification, localization, and diagnosis of systematic errors in products of high-volume production. To achieve this, we used two main data mining approaches: clustering for grouping similar errors and classification for assigning any new error to a proposed class. The presented methodology consists of three separate processing levels: image acquisition for fault parameterization, data clustering for categorizing errors into separate classes, and prediction of new patterns with the proposed class model. We chose representative clustering algorithms: K-means for vector quantization, the fast library for approximate nearest neighbors (FLANN) for hierarchical clustering, and density-based spatial clustering of applications with noise (DBSCAN) as a density-based algorithm. For classification, we selected six major algorithms: support vector machines, the normal Bayesian classifier, K-nearest neighbors, gradient boosted trees, random trees, and neural networks. The selected algorithms were compared for speed and reliability and tested on two platforms: a desktop computer system and an embedded system based on a System on Chip (SoC) with vision equipment.
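
    The clustering-then-classification pipeline described above can be sketched with off-the-shelf scikit-learn components. The feature vectors, cluster count, and subset of classifiers below are stand-ins for illustration; the article's own implementation also covers FLANN-based hierarchical clustering, DBSCAN, and neural networks.

      import time
      import numpy as np
      from sklearn.cluster import KMeans
      from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
      from sklearn.model_selection import cross_val_score
      from sklearn.naive_bayes import GaussianNB
      from sklearn.neighbors import KNeighborsClassifier
      from sklearn.svm import SVC

      X = np.random.rand(500, 16)   # stand-in error-feature vectors
      labels = KMeans(n_clusters=4, n_init=10).fit_predict(X)   # group similar errors

      classifiers = {"SVM": SVC(), "Bayes": GaussianNB(),
                     "kNN": KNeighborsClassifier(),
                     "GBT": GradientBoostingClassifier(),
                     "RF": RandomForestClassifier()}
      for name, clf in classifiers.items():
          t0 = time.perf_counter()
          score = cross_val_score(clf, X, labels, cv=5).mean()      # reliability
          print(f"{name}: {score:.3f} in {time.perf_counter() - t0:.2f} s")  # speed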

  15. Development of an instrumented and active servocontrolled robot gripper

    International Nuclear Information System (INIS)

    Debnath, S.K.; Dutta, A.K.; Deb, S.R.

    1990-01-01

    The design and construction of an instrumented and active robotic gripper are presented in this paper. The gripping device is a four-bar-linkage, parallel-jaw end effector whose fingers are actuated by a DC servo motor and gear drive. To make the gripper active, it is equipped with several sensors: a strain-gauge force sensor, a magnetic proximity sensor, an infrared sensor and a vision sensor. A potentiometric position sensor provides position feedback of the fingers to the gripper controller. All sensory data are received by a Z-80 microprocessor, and software was developed to process the data and transmit corresponding signals to the servocontroller designed for gripper actuation. The gripper can be used for automated grasping of randomly scattered objects that remain in the field of view of the camera mounted on the gripper. (author). 3 refs., 2 figs

  16. On quaternion based parameterization of orientation in computer vision and robotics

    Directory of Open Access Journals (Sweden)

    G. Terzakis

    2014-04-01

    The problem of orientation parameterization for applications in computer vision and robotics is examined in detail herein. The necessary intuition and formulas are provided for direct practical use in any existing algorithm that seeks to minimize a cost function in an iterative fashion. Two distinct parameterization schemes are analyzed: the first concerns the traditional axis-angle approach, while the second employs stereographic projection from the unit quaternion sphere to the 3D real projective space. Performance measurements are taken and a comparison is made between the two approaches. Results suggest several benefits in the use of stereographic projection, including rational expressions in the rotation matrix derivatives, improved accuracy, robustness to random starting points, and accelerated convergence.
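
    One common form of the stereographic parameterization discussed above maps a 3-vector to a unit quaternion by inverse stereographic projection and then to a rotation matrix. The sketch below shows this form; the paper's exact projection convention may differ.

      import numpy as np

      def quat_from_stereographic(psi):
          """Inverse stereographic projection R^3 -> unit quaternion (w, x, y, z);
          psi = 0 maps to the identity rotation."""
          psi = np.asarray(psi, dtype=float)
          s = psi @ psi
          return np.concatenate(([(1.0 - s) / (1.0 + s)], 2.0 * psi / (1.0 + s)))

      def rotation_from_quat(q):
          """Rotation matrix from a unit quaternion (w, x, y, z)."""
          w, x, y, z = q
          return np.array([
              [1 - 2 * (y * y + z * z), 2 * (x * y - w * z),     2 * (x * z + w * y)],
              [2 * (x * y + w * z),     1 - 2 * (x * x + z * z), 2 * (y * z - w * x)],
              [2 * (x * z - w * y),     2 * (y * z + w * x),     1 - 2 * (x * x + y * y)]])

      R = rotation_from_quat(quat_from_stereographic([0.1, -0.2, 0.05]))

    Substituting the quaternion into the rotation matrix yields entries that are rational in the three parameters, which is consistent with the rational-derivative benefit the abstract reports.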

  17. The development of advanced robotic technology -The development of advanced robotics for the nuclear industry-

    International Nuclear Information System (INIS)

    Lee, Jong Min; Lee, Yong Bum; Kim, Woong Ki; Park, Soon Yong; Kim, Seung Ho; Kim, Chang Hoi; Hwang, Suk Yeoung; Kim, Byung Soo; Lee, Young Kwang

    1994-07-01

    In this year (the second year of this project), research and development have been carried out to establish the essential key technologies applied to robot systems for the nuclear industry. In the area of robot vision, to construct the stereo vision system necessary for tele-operation, a stereo image acquisition camera module and a stereo image display have been developed, and stereo matching and storing programs have been developed to analyse stereo images. According to the results of the tele-operation experiments, operating efficiency was enhanced by about 20% using the stereo vision system. In the area of object recognition, a tele-operated robot system has been constructed to evaluate the performance of the stereo vision system and to develop the vision algorithm to automate the nozzle dam operation. A nuclear fuel rod character recognition system has been developed using a neural network; performance evaluation of the recognition system showed a 99% recognition rate. In the area of sensing and intelligent control, temperature distribution has been measured using analysis of thermal image histograms, an inspection algorithm has been developed to determine whether the state is normal or abnormal, and a fuzzy controller has been developed to control a compact mobile robot designed to move on a block-type path. (Author)

  18. Pose Estimation of a Mobile Robot Based on Fusion of IMU Data and Vision Data Using an Extended Kalman Filter.

    Science.gov (United States)

    Alatise, Mary B; Hancke, Gerhard P

    2017-09-21

    Using a single sensor to determine the pose of a device cannot give accurate results. This paper presents a fusion of a six-degree-of-freedom (6-DoF) inertial sensor, comprising a 3-axis accelerometer and a 3-axis gyroscope, with vision data to determine a low-cost and accurate position for an autonomous mobile robot. For vision, a monocular object detection pipeline integrating the speeded-up robust features (SURF) and random sample consensus (RANSAC) algorithms was used to recognize a sample object in several captured images. Unlike conventional methods that depend on point tracking, RANSAC uses an iterative method to estimate the parameters of a mathematical model from a set of captured data that contains outliers. With SURF and RANSAC, improved accuracy is achievable because of their ability to find interest points (features) under different viewing conditions using a Hessian matrix. This approach is proposed because of its simple implementation, low cost, and improved accuracy. With an extended Kalman filter (EKF), data from the inertial sensors and a camera were fused to estimate the position and orientation of the mobile robot. All these sensors were mounted on the mobile robot to obtain accurate localization. An indoor experiment was carried out to validate and evaluate the performance. Experimental results show that the proposed method is fast in computation, reliable and robust, and can be considered for practical applications. The performance of the experiments was verified against ground truth data using root mean square errors (RMSEs).
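
    The fusion step can be illustrated with the linear special case of the Kalman filter: predict from IMU acceleration under a constant-velocity model, then correct with a vision position fix. The time step and noise covariances below are assumptions; the paper's EKF additionally linearizes the nonlinear orientation dynamics.

      import numpy as np

      dt = 0.02
      F = np.block([[np.eye(2), dt * np.eye(2)],       # constant-velocity model
                    [np.zeros((2, 2)), np.eye(2)]])
      H = np.eye(2, 4)                                 # vision measures position only
      Q = 1e-3 * np.eye(4)                             # process noise (assumed)
      R = 1e-2 * np.eye(2)                             # vision noise (assumed)

      def kf_step(x, P, accel, z):
          """Predict with IMU acceleration, correct with a vision position fix."""
          u = np.concatenate([0.5 * dt ** 2 * accel, dt * accel])
          x = F @ x + u                                # predict
          P = F @ P @ F.T + Q
          S = H @ P @ H.T + R                          # innovation covariance
          K = P @ H.T @ np.linalg.inv(S)               # Kalman gain
          x = x + K @ (z - H @ x)                      # correct
          return x, (np.eye(4) - K @ H) @ P

      x, P = kf_step(np.zeros(4), np.eye(4),
                     np.array([0.1, 0.0]), np.array([0.01, 0.0]))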

  19. Coherent laser vision system

    International Nuclear Information System (INIS)

    Sebastion, R.L.

    1995-01-01

    The Coherent Laser Vision System (CLVS) is being developed to provide precision real-time 3D world views to support site characterization and robotic operations during facilities Decontamination and Decommissioning. Autonomous or semi-autonomous robotic operations require an accurate, up-to-date 3D world view. Existing technologies for real-time 3D imaging, such as AM laser radar, have limited accuracy at significant ranges and show variability in range estimates caused by lighting or surface shading. Recent advances in fiber optic component technology and digital processing components have enabled the development of a new 3D vision system based upon a fiber optic FMCW coherent laser radar. The approach includes a compact scanner with no moving parts capable of randomly addressing all pixels. The system maintains the immunity to lighting and surface shading conditions that is characteristic of coherent laser radar. The random pixel addressability allows scanning and processing to be concentrated on the active areas of a scene, as is done by the human eye-brain system.

  20. Coherent laser vision system

    Energy Technology Data Exchange (ETDEWEB)

    Sebastion, R.L. [Coleman Research Corp., Springfield, VA (United States)

    1995-10-01

    The Coherent Laser Vision System (CLVS) is being developed to provide precision real-time 3D world views to support site characterization and robotic operations during facilities Decontamination and Decommissioning. Autonomous or semi-autonomous robotic operations require an accurate, up-to-date 3D world view. Existing technologies for real-time 3D imaging, such as AM laser radar, have limited accuracy at significant ranges and show variability in range estimates caused by lighting or surface shading. Recent advances in fiber optic component technology and digital processing components have enabled the development of a new 3D vision system based upon a fiber optic FMCW coherent laser radar. The approach includes a compact scanner with no moving parts capable of randomly addressing all pixels. The system maintains the immunity to lighting and surface shading conditions that is characteristic of coherent laser radar. The random pixel addressability allows scanning and processing to be concentrated on the active areas of a scene, as is done by the human eye-brain system.
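
    For a linear FMCW chirp, range follows directly from the measured beat frequency: R = c * f_beat / (2 * slope), with slope = B / T. A minimal sketch with illustrative sweep parameters, not values taken from the CLVS documents:

      C = 299_792_458.0   # speed of light, m/s

      def fmcw_range(beat_hz, bandwidth_hz, sweep_s):
          """Range from beat frequency for a linear FMCW chirp."""
          slope = bandwidth_hz / sweep_s   # Hz per second
          return C * beat_hz / (2.0 * slope)

      # e.g. a 100 GHz optical frequency sweep over 1 ms: a 1 MHz beat -> ~1.5 m
      print(fmcw_range(1e6, 100e9, 1e-3))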

  1. Grasping Unknown Objects in an Early Cognitive Vision System

    DEFF Research Database (Denmark)

    Popovic, Mila

    2011-01-01

    Grasping of unknown objects presents an important and challenging part of robot manipulation. The growing area of service robotics depends upon the ability of robots to autonomously grasp and manipulate a wide range of objects in everyday environments. Simple, non-task-specific grasps of unknown … The thesis presents a system for robotic grasping of unknown objects using stereo vision. Grasps are defined based on contour and surface information provided by the Early Cognitive Vision System, which organizes visual information into a biologically motivated hierarchical representation. The contributions of the thesis are: the extension of the Early Cognitive Vision representation with a new type of feature hierarchy in the texture domain, the definition and evaluation of contour-based grasping methods, the definition and evaluation of surface-based grasping methods, the definition of a benchmark for testing and comparing vision-based grasping methods, and the creation of algorithms for bootstrapping a process of acquiring world understanding for artificial cognitive agents.

  2. Biomimetic vibrissal sensing for robots.

    Science.gov (United States)

    Pearson, Martin J; Mitchinson, Ben; Sullivan, J Charles; Pipe, Anthony G; Prescott, Tony J

    2011-11-12

    Active vibrissal touch can be used to replace or to supplement sensory systems such as computer vision and, therefore, improve the sensory capacity of mobile robots. This paper describes how arrays of whisker-like touch sensors have been incorporated onto mobile robot platforms taking inspiration from biology for their morphology and control. There were two motivations for this work: first, to build a physical platform on which to model, and therefore test, recent neuroethological hypotheses about vibrissal touch; second, to exploit the control strategies and morphology observed in the biological analogue to maximize the quality and quantity of tactile sensory information derived from the artificial whisker array. We describe the design of a new whiskered robot, Shrewbot, endowed with a biomimetic array of individually controlled whiskers and a neuroethologically inspired whisking pattern generation mechanism. We then present results showing how the morphology of the whisker array shapes the sensory surface surrounding the robot's head, and demonstrate the impact of active touch control on the sensory information that can be acquired by the robot. We show that adopting bio-inspired, low latency motor control of the rhythmic motion of the whiskers in response to contact-induced stimuli usefully constrains the sensory range, while also maximizing the number of whisker contacts. The robot experiments also demonstrate that the sensory consequences of active touch control can be usefully investigated in biomimetic robots.

  3. Machine Vision Tests for Spent Fuel Scrap Characteristics

    International Nuclear Information System (INIS)

    BERGER, W.W.

    2000-01-01

    The purpose of this work is to perform a feasibility test of a Machine Vision system for potential use at the Hanford K basins during spent nuclear fuel (SNF) operations. This report documents the testing performed to establish functionality of the system including quantitative assessment of results. Fauske and Associates, Inc., which has been intimately involved in development of the SNF safety basis, has teamed with Agris-Schoen Vision Systems, experts in robotics, tele-robotics, and Machine Vision, for this work

  4. Embedding visual routines in AnaFocus' Eye-RIS Vision Systems for closing the perception to action loop in roving robots

    Science.gov (United States)

    Jiménez-Marrufo, A.; Caballero-García, D. J.

    2011-05-01

    The purpose of the current paper is to describe how different visual routines can be developed and embedded in AnaFocus' Eye-RIS Vision System on Chip (VSoC) to close the perception-to-action loop within the roving robots developed under the framework of the SPARK II European project. The Eye-RIS Vision System on Chip employs a bio-inspired architecture where image acquisition and processing are truly intermingled and the processing itself is carried out in two steps. In the first step, processing is fully parallel owing to dedicated circuit structures that are integrated close to the sensors. In the second step, processing is performed on digitally coded information data by means of digital processors. All these capabilities make the Eye-RIS VSoC very suitable for integration within small robots in general, and within the robots developed by the SPARK II project in particular. These systems provide image-processing capabilities and speed comparable to high-end conventional vision systems without the need for high-density image memory and intensive digital processing. As far as perception is concerned, current perceptual schemes are often based on information derived from visual routines. Since real-world images are too complex to be processed for perceptual needs with traditional approaches, more computationally feasible algorithms are required to extract the desired features from the scene in real time and to efficiently proceed with the consequent action. In this paper, the development of such algorithms and their implementation taking full advantage of the sensing-processing capabilities of the Eye-RIS VSoC are described.

  5. Riemannian computing in computer vision

    CERN Document Server

    Srivastava, Anuj

    2016-01-01

    This book presents a comprehensive treatise on Riemannian geometric computations and related statistical inferences in several computer vision problems. This edited volume includes chapter contributions from leading figures in the field of computer vision who are applying Riemannian geometric approaches in problems such as face recognition, activity recognition, object detection, biomedical image analysis, and structure-from-motion. Some of the mathematical entities that necessitate a geometric analysis include rotation matrices (e.g. in modeling camera motion), stick figures (e.g. for activity recognition), subspace comparisons (e.g. in face recognition), symmetric positive-definite matrices (e.g. in diffusion tensor imaging), and function-spaces (e.g. in studying shapes of closed contours). The book illustrates Riemannian computing theory on applications in computer vision, machine learning, and robotics, with emphasis on algorithmic advances that will allow re-application in other…

  6. Vision restoration after brain and retina damage: the "residual vision activation theory".

    Science.gov (United States)

    Sabel, Bernhard A; Henrich-Noack, Petra; Fedorov, Anton; Gall, Carolin

    2011-01-01

    Vision loss after retinal or cerebral visual injury (CVI) was long considered to be irreversible. However, there is considerable potential for vision restoration and recovery even in adulthood. Here, we propose the "residual vision activation theory" of how visual functions can be reactivated and restored. CVI is usually not complete, but some structures are typically spared by the damage. They include (i) areas of partial damage at the visual field border, (ii) "islands" of surviving tissue inside the blind field, (iii) extrastriate pathways unaffected by the damage, and (iv) downstream, higher-level neuronal networks. However, residual structures have a triple handicap to be fully functional: (i) fewer neurons, (ii) lack of sufficient attentional resources because of the dominant intact hemisphere caused by excitation/inhibition dysbalance, and (iii) disturbance in their temporal processing. Because of this resulting activation loss, residual structures are unable to contribute much to everyday vision, and their "non-use" further impairs synaptic strength. However, residual structures can be reactivated by engaging them in repetitive stimulation by different means: (i) visual experience, (ii) visual training, or (iii) noninvasive electrical brain current stimulation. These methods lead to strengthening of synaptic transmission and synchronization of partially damaged structures (within-systems plasticity) and downstream neuronal networks (network plasticity). Just as in normal perceptual learning, synaptic plasticity can improve vision and lead to vision restoration. This can be induced at any time after the lesion, at all ages and in all types of visual field impairments after retinal or brain damage (stroke, neurotrauma, glaucoma, amblyopia, age-related macular degeneration). If and to what extent vision restoration can be achieved is a function of the amount of residual tissue and its activation state. However, sustained improvements require repetitive

  7. The Employment Effects of High-Technology: A Case Study of Machine Vision. Research Report No. 86-19.

    Science.gov (United States)

    Chen, Kan; Stafford, Frank P.

    A case study of machine vision was conducted to identify and analyze the employment effects of high technology in general. (Machine vision is the automatic acquisition and analysis of an image to obtain desired information for use in controlling an industrial activity, such as the visual sensor system that gives eyes to a robot.) Machine vision as…

  8. Robotic anesthesia - A vision for the future of anesthesia

    OpenAIRE

    Hemmerling, Thomas M.; Taddei, Riccardo; Wehbe, Mohamad; Morse, Joshua; Cyr, Shantale; Zaouter, Cedrick

    2011-01-01

    Summary This narrative review describes a rationale for robotic anesthesia. It offers a first classification of robotic anesthesia by separating it into pharmacological robots and robots for aiding or replacing manual gestures. Developments in closed loop anesthesia are outlined. First attempts to perform manual tasks using robots are described. A critical analysis of the delayed development and introduction of robots in anesthesia is delivered.

  9. A vision-based system for fast and accurate laser scanning in robot-assisted phonomicrosurgery.

    Science.gov (United States)

    Dagnino, Giulio; Mattos, Leonardo S; Caldwell, Darwin G

    2015-02-01

    Surgical quality in phonomicrosurgery can be improved by combining open-loop laser control (e.g., high-speed scanning capabilities) with a robust and accurate closed-loop visual servoing system. A new vision-based system for laser scanning control during robot-assisted phonomicrosurgery was developed and tested. Laser scanning was accomplished with a dual control strategy, which adds a vision-based trajectory correction phase to a fast open-loop laser controller. The system is designed to eliminate open-loop aiming errors caused by system calibration limitations and by the unpredictable topology of real targets. Evaluation of the new system was performed using CO(2) laser cutting trials on artificial targets and ex-vivo tissue. The system produced accuracy values corresponding to pixel resolution even when smoke created by the laser-target interaction cluttered the camera view. In realistic test scenarios, trajectory-following RMS errors were reduced by almost 80% with respect to open-loop system performance, reaching mean error values around 30 μm and maximum observed errors on the order of 60 μm. A new vision-based laser microsurgical control system was shown to be effective and promising, with significant positive potential impact on the safety and quality of laser microsurgeries.

  10. Enhanced operator perception through 3D vision and haptic feedback

    Science.gov (United States)

    Edmondson, Richard; Light, Kenneth; Bodenhamer, Andrew; Bosscher, Paul; Wilkinson, Loren

    2012-06-01

    Polaris Sensor Technologies (PST) has developed a stereo vision upgrade kit for TALON® robot systems comprised of a replacement gripper camera and a replacement mast zoom camera on the robot, and a replacement display in the Operator Control Unit (OCU). Harris Corporation has developed a haptic manipulation upgrade for TALON® robot systems comprised of a replacement arm and gripper and an OCU that provides haptic (force) feedback. PST and Harris have recently collaborated to integrate the 3D vision system with the haptic manipulation system. In multiple studies done at Fort Leonard Wood, Missouri it has been shown that 3D vision and haptics provide more intuitive perception of complicated scenery and improved robot arm control, allowing for improved mission performance and the potential for reduced time on target. This paper discusses the potential benefits of these enhancements to robotic systems used for the domestic homeland security mission.

  11. Three-dimensional vision enhances task performance independently of the surgical method.

    Science.gov (United States)

    Wagner, O J; Hagen, M; Kurmann, A; Horgan, S; Candinas, D; Vorburger, S A

    2012-10-01

    Within the next few years, the medical industry will launch increasingly affordable three-dimensional (3D) vision systems for the operating room (OR). This study aimed to evaluate the effect of two-dimensional (2D) and 3D visualization on surgical skills and task performance. In this study, 34 individuals with varying laparoscopic experience (18 inexperienced individuals) performed three tasks to test spatial relationships, grasping and positioning, dexterity, precision, and hand-eye and hand-hand coordination. Each task was performed in 3D using binocular vision for open performance, the Viking 3Di Vision System for laparoscopic performance, and the DaVinci robotic system. The same tasks were repeated in 2D using an eye patch for monocular vision, conventional laparoscopy, and the DaVinci robotic system. Loss of 3D vision significantly increased the perceived difficulty of a task and the time required to perform it, independently of the approach. Tasks were performed better with the robot than with laparoscopy (P = 0.005). In every case, 3D robotic performance was superior to conventional laparoscopy (2D) (P < 0.001-0.015). The more complex the task, the more 3D vision accelerates task completion compared with 2D vision. The gain in task performance is independent of the surgical method.

  12. Vision-based Nano Robotic System for High-throughput Non-embedded Cell Cutting.

    Science.gov (United States)

    Shang, Wanfeng; Lu, Haojian; Wan, Wenfeng; Fukuda, Toshio; Shen, Yajing

    2016-03-04

    Cell cutting is a significant task in biological studies, but highly productive non-embedded cell cutting remains a big challenge for current techniques. This paper proposes a vision-based nano robotic system and realizes automatic non-embedded cell cutting with it. First, the nano robotic system is developed and integrated with a nanoknife inside an environmental scanning electron microscope (ESEM). Then, the positions of the nanoknife and the single cell are recognized, and the distance between them is calculated dynamically based on image processing. To guarantee positioning accuracy and working efficiency, we propose a distance-regulated speed adapting strategy, in which the moving speed is adjusted intelligently based on the distance between the nanoknife and the target cell. The results indicate that automatic non-embedded cutting can be achieved within 1-2 min with low invasion, benefiting from the highly precise nanorobot system and the sharp edge of the nanoknife. This research paves the way for high-throughput cell cutting at the cell's natural condition, which is expected to make a significant impact on biological studies, especially in-situ analysis at the cellular and subcellular scale, such as cell interaction investigation, neural signal transduction and low-invasive cell surgery.

  13. Vision-based Nano Robotic System for High-throughput Non-embedded Cell Cutting

    Science.gov (United States)

    Shang, Wanfeng; Lu, Haojian; Wan, Wenfeng; Fukuda, Toshio; Shen, Yajing

    2016-03-01

    Cell cutting is a significant task in biological studies, but highly productive non-embedded cell cutting remains a big challenge for current techniques. This paper proposes a vision-based nano robotic system and realizes automatic non-embedded cell cutting with it. First, the nano robotic system is developed and integrated with a nanoknife inside an environmental scanning electron microscope (ESEM). Then, the positions of the nanoknife and the single cell are recognized, and the distance between them is calculated dynamically based on image processing. To guarantee positioning accuracy and working efficiency, we propose a distance-regulated speed adapting strategy, in which the moving speed is adjusted intelligently based on the distance between the nanoknife and the target cell. The results indicate that automatic non-embedded cutting can be achieved within 1-2 min with low invasion, benefiting from the highly precise nanorobot system and the sharp edge of the nanoknife. This research paves the way for high-throughput cell cutting at the cell's natural condition, which is expected to make a significant impact on biological studies, especially in-situ analysis at the cellular and subcellular scale, such as cell interaction investigation, neural signal transduction and low-invasive cell surgery.
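
    The distance-regulated speed adapting strategy is not specified in detail in the abstracts above; one plausible linear schedule, slowing the nanoknife as it approaches the cell, could look like the sketch below (all thresholds are hypothetical).

      def regulated_speed(distance_um, v_max=50.0, v_min=1.0, d_slow=100.0):
          """Approach speed (um/s): full speed far away, shrinking linearly
          once the nanoknife is within d_slow micrometres of the cell."""
          if distance_um >= d_slow:
              return v_max
          return max(v_min, v_max * distance_um / d_slow)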

  14. An Innovative 3D Ultrasonic Actuator with Multidegree of Freedom for Machine Vision and Robot Guidance Industrial Applications Using a Single Vibration Ring Transducer

    Directory of Open Access Journals (Sweden)

    M. Shafik

    2013-07-01

    This paper presents an innovative 3D piezoelectric ultrasonic actuator using a single flexural vibration ring transducer for machine vision and robot guidance industrial applications. The proposed actuator aims principally to overcome the limited spotlight focus angle of digital visual data capture transducers (digital cameras) and to enhance the ability of machine vision systems to perceive and move in 3D. The actuator design, structure, working principles and finite element analysis are discussed in this paper. A prototype of the actuator was fabricated. Experimental tests and measurements showed the ability of the developed prototype to provide multi-degree-of-freedom 3D motion, with a typical speed of movement of 35 revolutions per minute, a resolution of less than 5 μm and a maximum load of 3.5 N. These initial characteristics illustrate the potential of the developed 3D micro actuator to address the spotlight focus angle issue of digital visual data capture transducers and the possible improvements that such technology could bring to machine vision and robot guidance industrial applications.

  15. Neuro-Inspired Spike-Based Motion: From Dynamic Vision Sensor to Robot Motor Open-Loop Control through Spike-VITE

    Directory of Open Access Journals (Sweden)

    Fernando Perez-Peña

    2013-11-01

    In this paper we present a complete spike-based architecture: from a Dynamic Vision Sensor (retina) to a stereo head robotic platform. The aim of this research is to reproduce intended movements performed by humans, taking into account as many features as possible from the biological point of view. This paper fills the gap between current spiking silicon sensors and robotic actuators by applying a spike processing strategy to the data flows in real time. The architecture is divided into layers: the retina; visual information processing; the trajectory generator layer, which uses a neuro-inspired algorithm (SVITE) that can be replicated as many times as the robot has DoF; and finally the actuation layer that supplies the spikes to the robot (using PFM). All the layers perform their tasks in a spike-processing mode, and they communicate with each other through the neuro-inspired AER protocol. The open-loop controller is implemented on an FPGA using AER interfaces developed by RTC Lab. Experimental results demonstrate the viability of this spike-based controller. Two main advantages are the low hardware resources (2% of a Xilinx Spartan 6) and low power requirements (3.4 W) needed to control a robot with a high number of DoF (up to 100 for a Xilinx Spartan 6). The work also evidences the suitability of AER as a communication protocol between processing and actuation.

  16. Neuro-Inspired Spike-Based Motion: From Dynamic Vision Sensor to Robot Motor Open-Loop Control through Spike-VITE

    Science.gov (United States)

    Perez-Peña, Fernando; Morgado-Estevez, Arturo; Linares-Barranco, Alejandro; Jimenez-Fernandez, Angel; Gomez-Rodriguez, Francisco; Jimenez-Moreno, Gabriel; Lopez-Coronado, Juan

    2013-01-01

    In this paper we present a complete spike-based architecture: from a Dynamic Vision Sensor (retina) to a stereo head robotic platform. The aim of this research is to reproduce intended movements performed by humans, taking into account as many features as possible from the biological point of view. This paper fills the gap between current spiking silicon sensors and robotic actuators by applying a spike processing strategy to the data flows in real time. The architecture is divided into layers: the retina; visual information processing; the trajectory generator layer, which uses a neuro-inspired algorithm (SVITE) that can be replicated as many times as the robot has DoF; and finally the actuation layer that supplies the spikes to the robot (using PFM). All the layers perform their tasks in a spike-processing mode, and they communicate with each other through the neuro-inspired AER protocol. The open-loop controller is implemented on an FPGA using AER interfaces developed by RTC Lab. Experimental results demonstrate the viability of this spike-based controller. Two main advantages are the low hardware resources (2% of a Xilinx Spartan 6) and low power requirements (3.4 W) needed to control a robot with a high number of DoF (up to 100 for a Xilinx Spartan 6). The work also evidences the suitability of AER as a communication protocol between processing and actuation. PMID:24264330

  17. Robot vision language RVL/V: An integration scheme of visual processing and manipulator control

    International Nuclear Information System (INIS)

    Matsushita, T.; Sato, T.; Hirai, S.

    1984-01-01

    RVL/V is a robot vision language designed to write a program for visual processing and manipulator control of a hand-eye system. This paper describes the design of RVL/V and the current implementation of the system. Visual processing is performed on one-dimensional range data of the object surface. Model-based instructions execute object detection, measurement and view control. The hierarchy of visual data and processing is introduced to give RVL/V generality. A new scheme to integrate visual information and manipulator control is proposed. The effectiveness of the model-based visual processing scheme based on profile data is demonstrated by a hand-eye experiment

  18. Embedded active vision system based on an FPGA architecture

    OpenAIRE

    Chalimbaud , Pierre; Berry , François

    2006-01-01

    In computer vision and more particularly in vision processing, the impressive evolution of algorithms and the emergence of new techniques dramatically increase algorithm complexity. In this paper, a novel FPGA-based architecture dedicated to active vision (and more precisely early vision) is proposed. Active vision appears as an alternative approach to deal with artificial vision problems. The central idea is to take into account the perceptual aspects of visual tasks,...

  19. Adaptive Shape Kernel-Based Mean Shift Tracker in Robot Vision System

    Directory of Open Access Journals (Sweden)

    Chunmei Liu

    2016-01-01

    This paper proposes an adaptive shape kernel-based mean shift tracker using a single static camera for the robot vision system. The question that we address in this paper is how to construct a kernel shape that is adaptive to the object shape. We apply a nonlinear manifold learning technique to obtain a low-dimensional shape space, trained with data having the same view as the tracking video. The proposed kernel searches for the shape in the low-dimensional shape space and constructs the adaptive kernel shape in the high-dimensional shape space. It improves mean shift tracker performance in tracking object position and contour and avoids background clutter. In the experimental part, we take a walking human as an example to validate that our method is accurate and robust in tracking human position and describing the human contour.

  20. Adaptive Shape Kernel-Based Mean Shift Tracker in Robot Vision System

    Science.gov (United States)

    2016-01-01

    This paper proposes an adaptive shape kernel-based mean shift tracker using a single static camera for the robot vision system. The question that we address in this paper is how to construct a kernel shape that is adaptive to the object shape. We apply a nonlinear manifold learning technique to obtain a low-dimensional shape space, trained with data having the same view as the tracking video. The proposed kernel searches for the shape in the low-dimensional shape space and constructs the adaptive kernel shape in the high-dimensional shape space. It improves mean shift tracker performance in tracking object position and contour and avoids background clutter. In the experimental part, we take a walking human as an example to validate that our method is accurate and robust in tracking human position and describing the human contour. PMID:27379165

  1. New Trends in Robotics for Agriculture: Integration and Assessment of a Real Fleet of Robots

    Directory of Open Access Journals (Sweden)

    Luis Emmi

    2014-01-01

    Computer-based sensors and actuators such as global positioning systems, machine vision, and laser-based sensors have progressively been incorporated into mobile robots with the aim of configuring autonomous systems capable of shifting operator activities in agricultural tasks. However, the incorporation of many electronic systems into a robot impairs its reliability and increases its cost. Hardware minimization, as well as software minimization and ease of integration, is essential to obtain feasible robotic systems. A step forward in the application of automatic equipment in agriculture is the use of fleets of robots, in which a number of specialized robots collaborate to accomplish one or several agricultural tasks. This paper strives to develop a system architecture for both individual robots and robots working in fleets to improve reliability, decrease complexity and costs, and permit the integration of software from different developers. Several solutions are studied, from a fully distributed to a whole integrated architecture in which a central computer runs all processes. This work also studies diverse topologies for controlling fleets of robots and advances other prospective topologies. The architecture presented in this paper is being successfully applied in the RHEA fleet, which comprises three ground mobile units based on a commercial tractor chassis.

  2. New Trends in Robotics for Agriculture: Integration and Assessment of a Real Fleet of Robots

    Science.gov (United States)

    Gonzalez-de-Soto, Mariano; Pajares, Gonzalo

    2014-01-01

    Computer-based sensors and actuators such as global positioning systems, machine vision, and laser-based sensors have progressively been incorporated into mobile robots with the aim of configuring autonomous systems capable of shifting operator activities in agricultural tasks. However, the incorporation of many electronic systems into a robot impairs its reliability and increases its cost. Hardware minimization, as well as software minimization and ease of integration, is essential to obtain feasible robotic systems. A step forward in the application of automatic equipment in agriculture is the use of fleets of robots, in which a number of specialized robots collaborate to accomplish one or several agricultural tasks. This paper strives to develop a system architecture for both individual robots and robots working in fleets to improve reliability, decrease complexity and costs, and permit the integration of software from different developers. Several solutions are studied, from a fully distributed to a whole integrated architecture in which a central computer runs all processes. This work also studies diverse topologies for controlling fleets of robots and advances other prospective topologies. The architecture presented in this paper is being successfully applied in the RHEA fleet, which comprises three ground mobile units based on a commercial tractor chassis. PMID:25143976

  3. New trends in robotics for agriculture: integration and assessment of a real fleet of robots.

    Science.gov (United States)

    Emmi, Luis; Gonzalez-de-Soto, Mariano; Pajares, Gonzalo; Gonzalez-de-Santos, Pablo

    2014-01-01

    Computer-based sensors and actuators such as global positioning systems, machine vision, and laser-based sensors have progressively been incorporated into mobile robots with the aim of configuring autonomous systems capable of shifting operator activities in agricultural tasks. However, the incorporation of many electronic systems into a robot impairs its reliability and increases its cost. Hardware minimization, as well as software minimization and ease of integration, is essential to obtain feasible robotic systems. A step forward in the application of automatic equipment in agriculture is the use of fleets of robots, in which a number of specialized robots collaborate to accomplish one or several agricultural tasks. This paper strives to develop a system architecture for both individual robots and robots working in fleets to improve reliability, decrease complexity and costs, and permit the integration of software from different developers. Several solutions are studied, from a fully distributed to a whole integrated architecture in which a central computer runs all processes. This work also studies diverse topologies for controlling fleets of robots and advances other prospective topologies. The architecture presented in this paper is being successfully applied in the RHEA fleet, which comprises three ground mobile units based on a commercial tractor chassis.

  4. Mutual Visibility by Robots with Persistent Memory

    OpenAIRE

    Bhagat, Subhash; Mukhopadhyaya, Krishnendu

    2017-01-01

    This paper addresses the mutual visibility problem for a set of semi-synchronous, opaque robots occupying distinct positions in the Euclidean plane. Since robots are opaque, if three robots lie on a line, the middle robot obstructs the vision of the other two. The mutual visibility problem asks the robots to coordinate their movements to form, within finite time and without collision, a configuration in which no three robots are collinear. Robots are endowed with a constant number of bits of pe...

  5. Color-based scale-invariant feature detection applied in robot vision

    Science.gov (United States)

    Gao, Jian; Huang, Xinhan; Peng, Gang; Wang, Min; Li, Xinde

    2007-11-01

    Scale-invariant feature detection methods usually require heavy computation and sometimes still fail to meet real-time demands in robot vision applications. To solve this problem, a quick method for detecting interest points is presented. To decrease computation time, the detector selects as interest points those whose scale-normalized Laplacian values are local extrema in the nonholonomic pyramid scale space. The descriptor is built from several subregions, whose width is proportional to the scale factor, and the coordinates of the descriptor are rotated according to the interest point orientation, as in the SIFT descriptor. The eigenvector is computed in the original color image, and the mean values of the normalized colors g and b in each subregion are chosen as the elements of the eigenvector. Compared with the SIFT descriptor, this descriptor's dimension is considerably reduced, which simplifies the point matching process. The performance of the method is analyzed theoretically in this paper, and experimental results confirm its validity.
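
    Selecting interest points as local extrema of the scale-normalized Laplacian can be sketched with SciPy as follows. The scale levels and threshold are illustrative, and the paper's pyramid construction is replaced here by a plain scale stack for brevity.

      import numpy as np
      from scipy.ndimage import gaussian_laplace, maximum_filter

      def laplacian_keypoints(gray, sigmas=(1.6, 2.3, 3.2, 4.5), thresh=0.02):
          """Interest points as extrema of the scale-normalized Laplacian
          across space and scale; gray is a float image in [0, 1]."""
          stack = np.stack([s ** 2 * gaussian_laplace(gray, s) for s in sigmas])
          resp = np.abs(stack)                     # sigma^2 normalizes the scale
          peaks = (resp == maximum_filter(resp, size=3)) & (resp > thresh)
          scale_idx, ys, xs = np.nonzero(peaks)
          return [(x, y, sigmas[s]) for s, y, x in zip(scale_idx, ys, xs)]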

  6. Visual servo control for a human-following robot

    CSIR Research Space (South Africa)

    Burke, Michael G

    2011-03-01

    This thesis presents work completed on the design of control and vision components for use in a monocular vision-based human-following robot. The use of vision in a controller feedback loop is referred to as vision-based or visual servo control...

  7. A Vision Controlled Robot to Detect and Collect Fallen Hot Cobalt60 Capsules inside Wet Storage Pool of Cobalt60 Irradiators

    International Nuclear Information System (INIS)

    Solyman, A.E.M.

    2015-01-01

    Irradiators that use radioactive cobalt-60 capsule sources represent one of the peaceful uses of atomic energy and are strategically important for sterilizing medical products and treating food against bacteria and fungi before export. However, there are several well-known problems related to cobalt-60 capsules falling into the wet storage pool as a result of manufacturing defects, defective welds, or problems in the vertical movement of the radioactive source rack. It is therefore necessary to study this problem and solve it in a scientific way, according to the principles of radiation protection and safety issued by the International Atomic Energy Agency. The present work considers the use of a vision-based robot arm to collect fallen hot cobalt-60 capsules inside the wet storage pool. A 5-DOF robot arm is designed, and vision algorithms are established to pick up a fallen capsule from the bottom surface of the storage pool, read the information printed on its edge (cap), and move it to a safe storage place. Two object detection approaches are studied: an RGB-based filter and a background subtraction technique. Vision algorithms and camera calibration are implemented using MATLAB/SIMULINK. The robot arm's forward and inverse kinematics are developed and programmed on an embedded microcontroller system. Experiments demonstrate the validity of the proposed system; the collecting process is performed without operator intervention. The results confirmed the accuracy of the camera calibration equations. Vibrations were observed during robot motion, so the motor rotation speed was limited to 10 degrees per second to avoid them. This application keeps operators away from radiation exposure as far as possible and thus increases radiation safety.
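
    The background subtraction approach for spotting a fallen capsule can be sketched with OpenCV's MOG2 model (the paper implements its vision in MATLAB/SIMULINK; this is an equivalent illustration). The history length, variance threshold, and minimum blob area below are assumptions.

      import cv2

      # background model for detecting new foreground objects on the pool floor
      subtractor = cv2.createBackgroundSubtractorMOG2(history=200, varThreshold=25)

      def detect_capsule(frame):
          """Return bounding boxes of foreground blobs large enough to be a capsule."""
          mask = subtractor.apply(frame)
          kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
          mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)   # remove speckle
          contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                         cv2.CHAIN_APPROX_SIMPLE)
          return [cv2.boundingRect(c) for c in contours if cv2.contourArea(c) > 100]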

  8. Active Vision for Humanoid Robots

    NARCIS (Netherlands)

    Wang, X.

    2015-01-01

    Human perception is an active process. By altering its viewpoint rather than passively observing surroundings and by operating on sequences of images rather than on a single frame, the human visual system has the ability to explore the most relevant information based on knowledge, therefore when...

  9. Creating and maintaining chemical artificial life by robotic symbiosis

    DEFF Research Database (Denmark)

    Hanczyc, Martin M.; Parrilla, Juan M.; Nicholson, Arwen

    2015-01-01

    We present a robotic platform based on the open source RepRap 3D printer that can print and maintain chemical artificial life in the form of a dynamic, chemical droplet. The robot uses computer vision, a self-organizing map, and a learning program to automatically categorize the behavior of the droplet... confluence of chemical, artificial intelligence, and robotic approaches to artificial life.

  10. Creating and Maintaining Chemical Artificial Life by Robotic Symbiosis

    DEFF Research Database (Denmark)

    Hanczyc, Martin; Parrilla, Juan M.; Nicholson, Arwen

    2015-01-01

    We present a robotic platform based on the open source RepRap 3D printer that can print and maintain chemical artificial life in the form of a dynamic, chemical droplet. The robot uses computer vision, a self-organizing map, and a learning program to automatically categorize the behavior of the droplet... confluence of chemical, artificial intelligence, and robotic approaches to artificial life.

  11. A Novel Generic Ball Recognition Algorithm Based on Omnidirectional Vision for Soccer Robots

    Directory of Open Access Journals (Sweden)

    Hui Zhang

    2013-11-01

    Full Text Available It is significant for the final goal of RoboCup to realize the recognition of generic balls for soccer robots. In this paper, a novel generic ball recognition algorithm based on omnidirectional vision is proposed by combining modified Haar-like features with the AdaBoost learning algorithm. The algorithm is divided into offline training and online recognition. During the offline training phase, numerous sub-images, including generic balls, are acquired from various panoramic images; modified Haar-like features are then extracted from them and used as the input of the AdaBoost learning algorithm to obtain a classifier. During the online recognition phase, according to the imaging characteristics of our omnidirectional vision system, rectangular windows are defined to search for the generic ball along the rotary and radial directions in the panoramic image, and the learned classifier is used to judge whether a ball is included in the window. After the ball has been recognized globally, ball tracking is realized by integrating a ball velocity estimation algorithm to reduce the computational cost. The experimental results show that good performance can be achieved using our algorithm, and that the generic ball can be recognized and tracked effectively.
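
    A hedged sketch of the offline training phase using stock library implementations (scikit-image Haar-like features and scikit-learn AdaBoost) in place of the authors' modified features; the fixed-size training windows and the restricted feature types are assumptions.

        import numpy as np
        from skimage.transform import integral_image
        from skimage.feature import haar_like_feature
        from sklearn.ensemble import AdaBoostClassifier

        def haar_features(window):
            """Standard edge-type Haar features of one grayscale sub-image."""
            ii = integral_image(window)
            return haar_like_feature(ii, 0, 0, window.shape[1], window.shape[0],
                                     feature_type=['type-2-x', 'type-2-y'])

        def train_ball_classifier(windows, labels):
            """Fit AdaBoost (decision stumps by default) on ball/non-ball windows."""
            X = np.array([haar_features(w) for w in windows])
            clf = AdaBoostClassifier(n_estimators=200)
            clf.fit(X, labels)
            return clf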

  12. A Vision for the Exploration of Mars: Robotic Precursors Followed by Humans to Mars Orbit in 2033

    Science.gov (United States)

    Sellers, Piers J.; Garvin, James B.; Kinney, Anne L.; Amato, Michael J.; White, Nicholas E.

    2012-01-01

    The reformulation of the Mars program gives NASA a rare opportunity to deliver a credible vision in which humans, robots, and advancements in information technology combine to open the deep space frontier to Mars. There is a broad challenge in the reformulation of the Mars exploration program that truly sets the stage for 'a strategic collaboration between the Science Mission Directorate (SMD), the Human Exploration and Operations Mission Directorate (HEOMD) and the Office of the Chief Technologist, for the next several decades of exploring Mars'. Any strategy that links all three challenge areas into a true long-term strategic program necessitates discussion. NASA's SMD and HEOMD should accept the President's challenge and vision by developing an integrated program that will enable a human expedition to Mars orbit in 2033, with the goal of returning samples suitable for addressing the question of whether life exists or ever existed on Mars.

  13. Social Constraints on Animate Vision

    National Research Council Canada - National Science Library

    Breazeal, Cynthia; Edsinger, Aaron; Fitzpatrick, Paul; Scassellati, Brian

    2000-01-01

    .... In humanoid robotic systems, or in any animate vision system that interacts with people, social dynamics provide additional levels of constraint and provide additional opportunities for processing economy...

  14. Machine vision and mechatronics in practice

    CERN Document Server

    Brett, Peter

    2015-01-01

    The contributions for this book have been gathered over several years from conferences held in the series of Mechatronics and Machine Vision in Practice, the latest of which was held in Ankara, Turkey. The essential aspect is that they concern practical applications rather than the derivation of mere theory, though simulations and visualization are important components. The topics range from mining, with its heavy engineering, to the delicate machining of holes in the human skull or robots for surgery on human flesh. Mobile robots continue to be a hot topic, both from the need for navigation and for the task of stabilization of unmanned aerial vehicles. The swinging of a spray rig is damped, while machine vision is used for the control of heating in an asphalt-laying machine.  Manipulators are featured, both for general tasks and in the form of grasping fingers. A robot arm is proposed for adding to the mobility scooter of the elderly. Can EEG signals be a means to control a robot? Can face recognition be ac...

  15. KNOWLEDGE-BASED ROBOT VISION SYSTEM FOR AUTOMATED PART HANDLING

    Directory of Open Access Journals (Sweden)

    J. Wang

    2012-01-01

    Full Text Available

    ENGLISH ABSTRACT: This paper discusses an algorithm incorporating a knowledge-based vision system into an industrial robot system for handling parts intelligently. A continuous fuzzy controller was employed to extract boundary information in a computationally efficient way. The developed algorithm for on-line part recognition using fuzzy logic is shown to be an effective solution for extracting the geometric features of objects. The proposed edge vector representation method provides sufficient geometric information and facilitates geometric reconstruction of the object for grasp planning. Furthermore, a part-handling model was created by extracting the grasp features from the geometric features.


  16. Vision-Based Pose Estimation for Robot-Mediated Hand Telerehabilitation

    Directory of Open Access Journals (Sweden)

    Giuseppe Airò Farulla

    2016-02-01

    Full Text Available Vision-based Pose Estimation (VPE) represents a non-invasive solution for allowing a smooth and natural interaction between a human user and a robotic system, without requiring complex calibration procedures. Moreover, VPE interfaces are gaining momentum as they are highly intuitive, such that they can be used by untrained personnel (e.g., a generic caregiver), even in delicate tasks such as rehabilitation exercises. In this paper, we present a novel master-slave setup for hand telerehabilitation with an intuitive and simple interface for remote control of a wearable hand exoskeleton, named HX. While performing rehabilitative exercises, the master unit evaluates the 3D position of a human operator's hand joints in real time using only an RGB-D camera, and remotely commands the slave exoskeleton. Within the slave unit, the exoskeleton replicates hand movements and an external grip sensor records interaction forces, which are fed back to the operator-therapist, allowing a direct real-time assessment of the rehabilitative task. Experimental data collected with an operator and six volunteers are provided to show the feasibility of the proposed system and its performance. The results demonstrate that, leveraging on our system, the operator was able to directly control the volunteers' hand movements.

  17. Fiscal 1998 achievement report on regional consortium research and development project. Venture business fostering regional consortium--Creation of key industries (Development of Task-Oriented Robot Control System TORCS based on versatile 3-dimensional vision system VVV--Vertical Volumetric Vision); 1998 nendo sanjigen shikaku system VVV wo mochiita task shikogata robot seigyo system TORCS no kenkyu kaihatsu seika hokokusho

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2000-03-01

    Research is conducted for the development of a highly autonomous robot control system TORCS for the purpose of realizing an automated, unattended manufacturing process. In the development of an interface, an indicating function is built which easily adds or removes job attributes relative to given shape data. In the development of a 3-dimensional vision system VVV, a camera set and a new range finder are manufactured for ranging and recognition, the latter being an improvement on the conventional laser-aided range finder TDS. A 3-dimensional image processor is developed, which picks up pictures at a speed approximately 8 times higher than that of the conventional type. In the development of trajectory calculating software, a job planner, an operation planner, and a vision planner are prepared. A robot program necessary for robot operation is also prepared. In an evaluation test involving a simulated casting line, the pick-and-place concept is successfully implemented for several kinds of cast articles positioned at random on a moving conveyor. Differences in environmental conditions between manufacturing sites are not pursued in this paper, on the grounds that such differences should be discussed on a case-by-case basis. (NEDO)

  18. Robots and lattice automata

    CERN Document Server

    Adamatzky, Andrew

    2015-01-01

    The book gives a comprehensive overview of state-of-the-art research and engineering in the theory and application of lattice automata in the design and control of autonomous robots. Automata and robots share the same notional meaning. Automata (from the Latinization of the Greek word "αυτόματον"), self-operating autonomous machines invented in ancient times, can easily be considered the first steps of robotic-like efforts. Automata are mathematical models of robots and are also integral parts of robotic control systems. A lattice automaton is a regular array or a collective of finite state machines, or automata. The automata update their states by the same rules, depending on the states of their immediate neighbours. In the context of this book, lattice automata are used in developing modular reconfigurable robotic systems, path planning and map exploration for robots, robot controllers, synchronisation of robot collectives, robot vision, and parallel robotic actuators. All chapters are...
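
    As a minimal illustration of the lattice automaton definition above, the sketch below updates a binary grid in which every cell applies one shared rule to its immediate neighbours; the majority rule is an arbitrary example, not taken from the book.

        import numpy as np

        def step(grid):
            """One synchronous update of a binary lattice automaton.

            Each cell looks at its four immediate neighbours (von Neumann
            neighbourhood, periodic boundary) and adopts the majority
            state, keeping its own state on a tie.
            """
            votes = (np.roll(grid, 1, axis=0) + np.roll(grid, -1, axis=0) +
                     np.roll(grid, 1, axis=1) + np.roll(grid, -1, axis=1))
            out = grid.copy()
            out[votes >= 3] = 1
            out[votes <= 1] = 0
            return out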

  19. Clustered features for use in stereo vision SLAM

    CSIR Research Space (South Africa)

    Joubert, D

    2010-07-01

    Full Text Available SLAM, or simultaneous localization and mapping, is a key component in the development of truly independent robots. Vision-based SLAM utilising stereo vision is a promising approach to SLAM but it is computationally expensive and difficult...

  20. Active vision and image/video understanding with decision structures based on the network-symbolic models

    Science.gov (United States)

    Kuvich, Gary

    2003-08-01

    Vision is a part of a larger information system that converts visual information into knowledge structures. These structures drive the vision process, resolve ambiguity and uncertainty via feedback projections, and provide image understanding that is an interpretation of visual information in terms of such knowledge models. The human brain emulates knowledge structures in the form of network-symbolic models, which implies an important paradigm shift in our understanding of the brain, from neural networks to "cortical software". Symbols, predicates and grammars naturally emerge in such active multilevel hierarchical networks, and logic is simply a way of restructuring such models. The brain analyzes an image as a graph-type decision structure created via multilevel hierarchical compression of visual information. Mid-level vision processes like clustering, perceptual grouping, and separation of figure from ground are special kinds of graph/network transformations. They convert low-level image structure into a set of more abstract ones, which represent objects and the visual scene, making them easy to analyze with higher-level knowledge structures. Higher-level vision phenomena are the results of such analysis. Composition of network-symbolic models works similarly to frames and agents, combining learning, classification, and analogy with higher-level model-based reasoning in a single framework. Such models do not require supercomputers. Based on such principles, and using methods of computational intelligence, an image understanding system can convert images into network-symbolic knowledge models and effectively resolve uncertainty and ambiguity, providing a unifying representation for perception and cognition. This allows creating new intelligent computer vision systems for the robotics and defense industries.

  1. Grasping in Robotics

    CERN Document Server

    2013-01-01

    Grasping in Robotics contains original contributions in the field of grasping in robotics with a broad multidisciplinary approach. This gives the possibility of addressing all the major issues related to robotized grasping, including milestones in grasping through the centuries, mechanical design issues, control issues, modelling achievements and issues, formulations and software for simulation purposes, sensors and vision integration, applications in industrial field and non-conventional applications (including service robotics and agriculture).   The contributors to this book are experts in their own diverse and wide ranging fields. This multidisciplinary approach can help make Grasping in Robotics of interest to a very wide audience. In particular, it can be a useful reference book for researchers, students and users in the wide field of grasping in robotics from many different disciplines including mechanical design, hardware design, control design, user interfaces, modelling, simulation, sensors and hum...

  2. Special Issue on Intelligent Robots

    Directory of Open Access Journals (Sweden)

    Genci Capi

    2013-08-01

    Full Text Available The research on intelligent robots will produce robots that are able to operate in everyday life environments, to adapt their program according to environment changes, and to cooperate with other team members and humans. Operating in human environments, robots need to process, in real time, a large amount of sensory data—such as vision, laser, microphone—in order to determine the best action. Intelligent algorithms have been successfully applied to link complex sensory data to robot action. This editorial briefly summarizes recent findings in the field of intelligent robots as described in the articles published in this special issue.

  3. Embedded Active Vision System Based on an FPGA Architecture

    Directory of Open Access Journals (Sweden)

    Chalimbaud Pierre

    2007-01-01

    Full Text Available In computer vision and more particularly in vision processing, the impressive evolution of algorithms and the emergence of new techniques dramatically increase algorithm complexity. In this paper, a novel FPGA-based architecture dedicated to active vision (and more precisely early vision) is proposed. Active vision appears as an alternative approach to deal with artificial vision problems. The central idea is to take into account the perceptual aspects of visual tasks, inspired by biological vision systems. For this reason, we propose an original approach based on a system on programmable chip implemented in an FPGA connected to a CMOS imager and an inertial set. With such a structure based on reprogrammable devices, this system admits a high degree of versatility and allows the implementation of parallel image processing algorithms.

  4. Embedded Active Vision System Based on an FPGA Architecture

    Directory of Open Access Journals (Sweden)

    Pierre Chalimbaud

    2006-12-01

    Full Text Available In computer vision and more particularly in vision processing, the impressive evolution of algorithms and the emergence of new techniques dramatically increase algorithm complexity. In this paper, a novel FPGA-based architecture dedicated to active vision (and more precisely early vision) is proposed. Active vision appears as an alternative approach to deal with artificial vision problems. The central idea is to take into account the perceptual aspects of visual tasks, inspired by biological vision systems. For this reason, we propose an original approach based on a system on programmable chip implemented in an FPGA connected to a CMOS imager and an inertial set. With such a structure based on reprogrammable devices, this system admits a high degree of versatility and allows the implementation of parallel image processing algorithms.

  5. Autonomous Motion Learning for Intra-Vehicular Activity Space Robot

    Science.gov (United States)

    Watanabe, Yutaka; Yairi, Takehisa; Machida, Kazuo

    Space robots will be needed in future space missions. So far, many types of space robots have been developed; in particular, Intra-Vehicular Activity (IVA) space robots that support human activities should be developed to reduce human risk in space. In this paper, we study a motion learning method for an IVA space robot with a multi-link mechanism. The advantage is that this space robot moves using the reaction forces of the multi-link mechanism and contact forces from the wall, as in an astronaut's space walk, rather than using propulsion. The control approach is based on reinforcement learning with the actor-critic algorithm. We demonstrate the effectiveness of this approach on a 5-link space robot model by simulation. First, we simulate a space robot learning motion control, including a contact phase, in the two-dimensional case. Next, we simulate a space robot learning motion control while changing base attitude in the three-dimensional case.
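
    A compact tabular sketch of the kind of actor-critic loop the abstract describes; the env object with reset() and step() returning (state, reward, done) is an assumed interface, and updating only the taken action's preference is a simplification of the full softmax policy gradient.

        import numpy as np

        def actor_critic(env, n_states, n_actions, episodes=500,
                         alpha_v=0.1, alpha_p=0.01, gamma=0.99):
            V = np.zeros(n_states)                    # critic: state values
            prefs = np.zeros((n_states, n_actions))   # actor: action preferences
            for _ in range(episodes):
                s, done = env.reset(), False
                while not done:
                    p = np.exp(prefs[s] - prefs[s].max())
                    p /= p.sum()                      # softmax policy
                    a = np.random.choice(n_actions, p=p)
                    s2, r, done = env.step(a)
                    # One-step TD error drives both critic and actor updates.
                    delta = r + (0.0 if done else gamma * V[s2]) - V[s]
                    V[s] += alpha_v * delta
                    prefs[s, a] += alpha_p * delta * (1.0 - p[a])
                    s = s2
            return prefs, V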

  6. Robotics in Cardiac Surgery: Past, Present, and Future

    Directory of Open Access Journals (Sweden)

    Bryan Bush

    2013-07-01

    Full Text Available Robotic cardiac operations evolved from minimally invasive operations and offer similar theoretical benefits, including less pain, shorter length of stay, improved cosmesis, and quicker return to preoperative level of functional activity. The additional benefits offered by robotic surgical systems include improved dexterity and degrees of freedom, tremor-free movements, ambidexterity, and the avoidance of the fulcrum effect that is intrinsic when using long-shaft endoscopic instruments. Also, optics and operative visualization are vastly improved compared with direct vision and traditional videoscopes. Robotic systems have been utilized successfully to perform complex mitral valve repairs, coronary revascularization, atrial fibrillation ablation, intracardiac tumor resections, atrial septal defect closures, and left ventricular lead implantation. The history and evolution of these procedures, as well as the present status and future directions of robotic cardiac surgery, are presented in this review.

  7. Safety Computer Vision Rules for Improved Sensor Certification

    DEFF Research Database (Denmark)

    Mogensen, Johann Thor Ingibergsson; Kraft, Dirk; Schultz, Ulrik Pagh

    2017-01-01

    Mobile robots are used across many domains from personal care to agriculture. Working in dynamic open-ended environments puts high constraints on the robot perception system, which is critical for the safety of the system as a whole. To achieve the required safety levels the perception system needs to be certified, but no specific standards exist for computer vision systems, and the concept of safe vision systems remains largely unexplored. In this paper we present a novel domain-specific language that allows the programmer to express image quality detection rules for enforcing safety constraints...
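
    The paper's actual DSL is not shown in the record; the sketch below only illustrates the underlying idea of declarative image-quality rules that gate whether a frame is safe to act on. The rule names and limits are invented for illustration.

        import numpy as np

        # Each rule names a quality measure and the (low, high) range it
        # must stay inside for the frame to be trusted; None means unbounded.
        RULES = {
            "mean_brightness": (40, 215),   # reject under/over-exposed frames
            "contrast_std": (10, None),     # reject near-uniform (blocked lens)
        }

        def measures(frame):
            return {"mean_brightness": float(frame.mean()),
                    "contrast_std": float(frame.std())}

        def frame_is_safe(frame):
            m = measures(frame)
            for name, (lo, hi) in RULES.items():
                if lo is not None and m[name] < lo:
                    return False
                if hi is not None and m[name] > hi:
                    return False
            return True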

  8. A Behavior-Based Approach for Educational Robotics Activities

    Science.gov (United States)

    De Cristoforis, P.; Pedre, S.; Nitsche, M.; Fischer, T.; Pessacg, F.; Di Pietro, C.

    2013-01-01

    Educational robotics proposes the use of robots as a teaching resource that enables inexperienced students to approach topics in fields unrelated to robotics. In recent years, these activities have grown substantially in elementary and secondary school classrooms and also in outreach experiences to interest students in science, technology,…

  9. JPL Robotics Technology Applicable to Agriculture

    Science.gov (United States)

    Udomkesmalee, Suraphol Gabriel; Kyte, L.

    2008-01-01

    This slide presentation describes several technologies developed for robotics that are applicable to agriculture. The technologies discussed are the detection of humans to allow safe operation of autonomous vehicles, and vision-guided robotic techniques for shoot selection, separation, and transfer to growth media.

  10. Assessment of Laparoscopic Skills Performance: 2D Versus 3D Vision and Classic Instrument Versus New Hand-Held Robotic Device for Laparoscopy.

    Science.gov (United States)

    Leite, Mariana; Carvalho, Ana F; Costa, Patrício; Pereira, Ricardo; Moreira, Antonio; Rodrigues, Nuno; Laureano, Sara; Correia-Pinto, Jorge; Vilaça, João L; Leão, Pedro

    2016-02-01

    Laparoscopic surgery has undeniable advantages, such as reduced postoperative pain, smaller incisions, and faster recovery. However, to improve surgeons' performance, ergonomic adaptations of the laparoscopic instruments and the introduction of robotic technology are needed. The aim of this study was to ascertain the influence of a new hand-held robotic device for laparoscopy (HHRDL) and 3D vision on the laparoscopic skills performance of 2 different groups, naïve and expert. Each participant performed 3 laparoscopic tasks (Peg transfer, Wire chaser, Knot) in 4 different ways. With random sequencing we assigned the execution order of the tasks based on the first type of visualization and laparoscopic instrument. Time to complete each laparoscopic task was recorded and analyzed with one-way analysis of variance. Eleven experts and 15 naïve participants were included. Three-dimensional video helped the naïve group achieve better performance in Peg transfer, Wire chaser 2 hands, and Knot; the new device improved the execution of all laparoscopic tasks (P < .05). For the expert group, the 3D video system was beneficial in Peg transfer and Wire chaser 1 hand, and the robotic device in Peg transfer, Wire chaser 1 hand, and Wire chaser 2 hands (P < .05). The HHRDL helped the execution of difficult laparoscopic tasks, such as Knot, in the naïve group. Three-dimensional vision made laparoscopic performance easier for participants without laparoscopic experience, unlike those with experience in laparoscopic procedures. © The Author(s) 2015.

  11. Multi-arm multilateral haptics-based immersive tele-robotic system (HITS) for improvised explosive device disposal

    Science.gov (United States)

    Erickson, David; Lacheray, Hervé; Lai, Gilbert; Haddadi, Amir

    2014-06-01

    This paper presents the latest advancements of the Haptics-based Immersive Tele-robotic System (HITS) project, a next generation Improvised Explosive Device (IED) disposal (IEDD) robotic interface containing an immersive telepresence environment for a remotely-controlled three-articulated-robotic-arm system. While the haptic feedback enhances the operator's perception of the remote environment, a third teleoperated dexterous arm, equipped with multiple vision sensors and cameras, provides stereo vision with proper visual cues, and a 3D photo-realistic model of the potential IED. This decentralized system combines various capabilities including stable and scaled motion, singularity avoidance, cross-coupled hybrid control, active collision detection and avoidance, compliance control and constrained motion to provide a safe and intuitive control environment for the operators. Experimental results and validation of the current system are presented through various essential IEDD tasks. This project demonstrates that a two-armed anthropomorphic Explosive Ordnance Disposal (EOD) robot interface can achieve complex neutralization techniques against realistic IEDs without the operator approaching at any time.

  12. A bio-inspired apposition compound eye machine vision sensor system

    International Nuclear Information System (INIS)

    Davis, J D; Barrett, S F; Wright, C H G; Wilcox, M

    2009-01-01

    The Wyoming Information, Signal Processing, and Robotics Laboratory is developing a wide variety of bio-inspired vision sensors. We are interested in exploring the vision systems of various insects and adapting some of their features toward the development of specialized vision sensors. We do not attempt to supplant traditional digital imaging techniques but rather develop sensor systems tailor-made for the application at hand. We envision that many applications may require a hybrid approach using conventional digital imaging techniques enhanced with bio-inspired analogue sensors. In this specific project, we investigated the apposition compound eye and its characteristics commonly found in diurnal insects and certain species of arthropods. We developed and characterized an array of apposition compound eye-type sensors and tested them on an autonomous robotic vehicle. The robot exhibits the ability to follow a pre-defined target and avoid specified obstacles using a simple control algorithm.

  13. Humanlike Robots - The Upcoming Revolution in Robotics

    Science.gov (United States)

    Bar-Cohen, Yoseph

    2009-01-01

    Humans have always sought to imitate the human appearance, functions and intelligence. Human-like robots, which for many years have been science fiction, are increasingly becoming an engineering reality, resulting from the many advances in biologically inspired technologies. These biomimetic technologies include artificial intelligence, artificial vision and hearing, as well as artificial muscles, also known as electroactive polymers (EAP). Robots that don't have human shape, such as the Roomba vacuum cleaner and the robotic lawnmower, are already finding growing use in homes worldwide. As opposed to other human-made machines and devices, this technology also raises various questions and concerns, and they need to be addressed as the technology advances. These include the need to prevent accidents, deliberate harm, or their use in crime. In this paper the state of the art toward the ultimate goal of biomimetics, the development of humanlike robots, as well as the potentials and the challenges, are reviewed.

  14. Humanlike robots: the upcoming revolution in robotics

    Science.gov (United States)

    Bar-Cohen, Yoseph

    2009-08-01

    Humans have always sought to imitate the human appearance, functions and intelligence. Human-like robots, which for many years have been science fiction, are increasingly becoming an engineering reality, resulting from the many advances in biologically inspired technologies. These biomimetic technologies include artificial intelligence, artificial vision and hearing, as well as artificial muscles, also known as electroactive polymers (EAP). Robots that don't have human shape, such as the Roomba vacuum cleaner and the robotic lawnmower, are already finding growing use in homes worldwide. As opposed to other human-made machines and devices, this technology also raises various questions and concerns, and they need to be addressed as the technology advances. These include the need to prevent accidents, deliberate harm, or their use in crime. In this paper the state of the art toward the ultimate goal of biomimetics, the development of humanlike robots, as well as the potentials and the challenges, are reviewed.

  15. ERROR DETECTION BY ANTICIPATION FOR VISION-BASED CONTROL

    Directory of Open Access Journals (Sweden)

    A ZAATRI

    2001-06-01

    Full Text Available A vision-based control system has been developed. It enables a human operator to remotely direct a robot, equipped with a camera, towards targets in 3D space by simply pointing at their images with a pointing device. This paper presents an anticipatory system designed to improve the safety and effectiveness of the vision-based commands. It simulates these commands in a virtual environment and attempts to detect hard contacts that may occur between the robot and its environment, which can be caused by machine errors or operator errors.

  16. Vision-based robotic system for object agnostic placing operations

    DEFF Research Database (Denmark)

    Rofalis, Nikolaos; Nalpantidis, Lazaros; Andersen, Nils Axel

    2016-01-01

    Industrial robots are part of almost all modern factories. Even though industrial robots nowadays manipulate objects of a huge variety in different environments, exact knowledge about both of them is generally assumed. The aim of this work is to investigate the ability of a robotic system to operate within an unknown environment manipulating unknown objects. The developed system detects objects, finds matching compartments in a placing box, and ultimately grasps and places the objects there. The developed system exploits 3D sensing and visual feature extraction. No prior knowledge is provided to the system, neither for the objects nor for the placing box. The experimental evaluation of the developed robotic system shows that a combination of seemingly simple modules and strategies can provide an effective solution to the targeted problem.

  17. Design and Simulation of 5-DOF Vision-Based Manipulator to Increase Radiation Safety for Industrial Cobalt-60 Irradiators

    International Nuclear Information System (INIS)

    Solyman, A.E.; Keshk, A.B.; Sharshar, K.A.; Roman, M.R.

    2016-01-01

    Robotics has proved its efficiency in nuclear and radiation fields. Computer vision is one of the advanced approaches used to enhance robotic efficiency. The current work investigates the possibility of using a vision-based controlled arm robot to collect fallen hot cobalt-60 capsules inside the wet storage pool of an industrial irradiator. A 5-DOF arm robot is designed and vision algorithms are established to pick the fallen capsules off the bottom surface of the storage pool, read the information printed on the edge (cap), and move them to a safe storage place. Two object detection approaches are studied: an RGB-based filter and a background subtraction technique. Vision algorithms and camera calibration are done using the MATLAB/SIMULINK program. The robot arm's forward and inverse kinematics are developed and programmed using an embedded microcontroller system. Experiments show the validity of the proposed system and prove its success. The collecting process will be done without operator interference, hence radiation safety will be increased.
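
    The record's kinematics are not spelled out; as a minimal stand-in, the sketch below computes forward kinematics for a planar serial arm, which the real 5-DOF chain would generalize using full Denavit-Hartenberg transforms.

        import numpy as np

        def forward_kinematics(link_lengths, joint_angles):
            """End-effector (x, y) of a planar serial arm.

            Each joint angle is measured relative to the previous link;
            a 5-DOF spatial arm would replace this accumulation with 4x4
            homogeneous transforms per joint.
            """
            x = y = theta = 0.0
            for length, q in zip(link_lengths, joint_angles):
                theta += q
                x += length * np.cos(theta)
                y += length * np.sin(theta)
            return x, y

        # e.g. a two-link arm, 0.3 m and 0.25 m, at 30 and -45 degrees:
        # forward_kinematics([0.3, 0.25], np.radians([30, -45]))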

  18. Laser assisted robotic surgery in cornea transplantation

    Science.gov (United States)

    Rossi, Francesca; Micheletti, Filippo; Magni, Giada; Pini, Roberto; Menabuoni, Luca; Leoni, Fabio; Magnani, Bernardo

    2017-03-01

    Robotic surgery is a reality in several surgical fields, such as gastrointestinal surgery. In ophthalmic surgery the required high spatial precision has limited the application of robotic systems, and even though several designs have been attempted in the last 10 years, only a few applications in retinal surgery were tested in animal models. The combination of photonics and robotics can open new frontiers in minimally invasive surgery, improving precision, reducing tremor, amplifying scale of motion, and automating the procedure. In this manuscript we present the preliminary results in developing a vision-guided robotic platform for laser-assisted anterior eye surgery. The robotic console is composed of a robotic arm equipped with an "end effector" designed to deliver laser light to the anterior corneal surface. The main intended application is laser welding of corneal tissue in laser-assisted penetrating keratoplasty and endothelial keratoplasty. The console is equipped with an integrated vision system. The experiment originates from a clear medical demand to improve the efficacy of different surgical procedures: when the prototype is optimized, other surgical areas will be included in its application, such as neurosurgery, urology and spinal surgery.

  19. Vision-based control of the Manus using SIFT

    NARCIS (Netherlands)

    Liefhebber, F.; Sijs, J.

    2007-01-01

    The rehabilitation robot Manus is an assistive device for severely motor-handicapped users. Executing activities of daily living with the Manus can be very complex, and a vision-based controller can simplify this. The shortcoming of existing vision-based controlled systems is the poor reliability of the...

  20. Robotic Sensitive-Site Assessment

    Science.gov (United States)

    2015-09-04

    ...annotations. The SOA component is the backend infrastructure that receives and stores robot-generated and human-input data and serves these data to several... The SOA server provides the backend infrastructure to receive data from robot situational awareness payloads, to archive... incapacitation or even death. The proper use of PPE is critical to avoiding exposure. However, wearing PPE limits mobility and field of vision, and...

  1. When Should We Use Care Robots? The Nature-of-Activities Approach.

    Science.gov (United States)

    Santoni de Sio, Filippo; van Wynsberghe, Aimee

    2016-12-01

    When should we use care robots? In this paper we endorse the shift from a simple normative approach to care robots ethics to a complex one: we think that one main task of a care robot ethics is that of analysing the different ways in which different care robots may affect the different values at stake in different care practices. We start filling a gap in the literature by showing how the philosophical analysis of the nature of healthcare activities can contribute to (care) robot ethics. We rely on the nature-of-activities approach recently proposed in the debate on human enhancement, and we apply it to the ethics of care robots. The nature-of-activities approach will help us to understand why certain practice-oriented activities in healthcare should arguably be left to humans, but certain (predominantly) goal-directed activities in healthcare can be fulfilled (sometimes even more ethically) with the assistance of a robot. In relation to the latter, we aim to show that even though all healthcare activities can be considered as practice-oriented, when we understand the activity in terms of different legitimate 'fine-grained' descriptions, the same activities or at least certain components of them can be seen as clearly goal-directed. Insofar as it allows us to ethically assess specific functionalities of specific robots to be deployed in well-defined circumstances, we hold the nature-of-activities approach to be particularly helpful also from a design perspective, i.e. to realize the Value Sensitive Design approach.

  2. Vision Restoration in Glaucoma by Activating Residual Vision with a Holistic, Clinical Approach: A Review.

    Science.gov (United States)

    Sabel, Bernhard A; Cárdenas-Morales, Lizbeth; Gao, Ying

    2018-01-01

    How to cite this article: Sabel BA, Cárdenas-Morales L, Gao Y. Vision Restoration in Glaucoma by activating Residual Vision with a Holistic, Clinical Approach: A Review. J Curr Glaucoma Pract 2018;12(1):1-9.

  3. A focused bibliography on robotics

    Science.gov (United States)

    Mergler, H. W.

    1983-08-01

    The present bibliography focuses on robotics-related topics believed by the author to be of special interest to researchers in the field of industrial electronics: robots, sensors, kinematics, dynamics, control systems, actuators, vision, economics, and robot applications. This literature search was conducted through the 1970-present COMPENDEX data base, which provides world-wide coverage of nearly 3500 journals, conference proceedings and reports, and the 1969-1981 INSPEC data base, which is the largest for the English language in the fields of physics, electrotechnology, computers, and control.

  4. Using High-Level RTOS Models for HW/SW Embedded Architecture Exploration: Case Study on Mobile Robotic Vision

    Directory of Open Access Journals (Sweden)

    Verdier François

    2008-01-01

    Full Text Available We are interested in the design of a system-on-chip implementing the vision system of a mobile robot. Following a biologically inspired approach, this vision architecture belongs to a larger sensorimotor loop. This regulation loop both creates and exploits dynamic properties to achieve a wide variety of target tracking and navigation objectives. Such a system is representative of numerous flexible and dynamic applications, which are more and more encountered in embedded systems. In order to deal with all of the dynamic aspects of these applications, it appears necessary to embed a dedicated real-time operating system on the chip. The presence of this on-chip custom executive layer constitutes a major scientific obstacle in the traditional hardware and software design flows. Classical exploration and simulation tools are particularly inappropriate in this case. We detail in this paper the specific mechanisms necessary to build a high-level model of an embedded custom operating system able to manage such a real-time but flexible application. We also describe our executable RTOS model written in SystemC, allowing an early simulation of our application on top of its specific scheduling layer. Based on this model, a methodology is discussed and results are given on the exploration and validation of a distributed platform adapted to this vision system.

  5. Robotics and nuclear power. Report by the Technology Transfer Robotics Task Team

    International Nuclear Information System (INIS)

    1985-06-01

    A task team was formed at the request of the Department of Energy to evaluate and assess the technology development needed for advanced robotics in the nuclear industry. The mission of these technologies is to provide the nuclear industry with support for the application of advanced robotics to reduce nuclear power generating costs and enhance the safety of personnel in the industry. The investigation included robotic and teleoperated systems. A robotic system is defined as a reprogrammable, multifunctional manipulator designed to move materials, parts, tools, or specialized devices through variable programmed motions for the performance of a variety of tasks. A teleoperated system includes an operator who remotely controls the system by direct viewing or through a vision system.

  6. Towards Light‐guided Micro‐robotics

    DEFF Research Database (Denmark)

    Glückstad, Jesper

    Robotics in the macro-scale typically uses light for carrying information in machine vision for monitoring and feedback in intelligent robotic guidance systems. With light's minuscule momentum, shrinking robots down to the micro-scale regime creates opportunities for exploiting optical forces and torques in micro-robotic actuation and control. Indeed, the literature on optical trapping and micro-manipulation attests to the possibilities for optical micro-robotics. Advancing light-driven micro-robotics requires the optimization of optical force and optical torque that, in turn, requires... ...dimensional microstructures. Furthermore, we exploit the light shaping capabilities available in the workstation to demonstrate a new strategy for controlling microstructures that goes beyond the typical refractive light deflections that are exploited in conventional optical trapping and manipulation, e.g. of micro...

  7. HYBRID COMMUNICATION NETWORK OF MOBILE ROBOT AND QUAD-COPTER

    Directory of Open Access Journals (Sweden)

    Moustafa M. Kurdi

    2017-01-01

    Full Text Available This paper introduces the design and development of QMRS (Quadcopter Mobile Robotic System). QMRS adds a real-time obstacle avoidance capability to the Belarus-132N mobile robot in cooperation with a Phantom-4 quadcopter. QMRS combines GPS used by the mobile robot, vision and image processing systems on both the robot and the quadcopter, and an effective search algorithm embedded in the robot. The capacity to navigate accurately is one of the major abilities a mobile robot needs to effectively execute a variety of jobs, including manipulation, docking, and transportation. To achieve the desired navigation accuracy, mobile robots are typically equipped with on-board sensors to observe persistent features in the environment, to estimate their pose from these observations, and to adjust their motion accordingly. The quadcopter takes off from the mobile robot, surveys the terrain, and transmits the processed image to the terrestrial robot. The main objective of the research paper is the full coordination between robot and quadcopter, achieved by designing an efficient wireless communication link using WiFi. In addition, it describes the method involving the use of vision and image processing systems on both robot and quadcopter, analyzing the path in real time and avoiding obstacles based on the computational algorithm embedded in the robot. QMRS increases the efficiency and reliability of the whole system, especially in robot navigation, image processing, and obstacle avoidance, due to the coordination among the different parts of the system.

  8. The research on visual industrial robot which adopts fuzzy PID control algorithm

    Science.gov (United States)

    Feng, Yifei; Lu, Guoping; Yue, Lulin; Jiang, Weifeng; Zhang, Ye

    2017-03-01

    The control system of a six-degrees-of-freedom visual industrial robot based on multi-axis motion control cards and a PC was researched. For the time-varying, non-linear characteristics of the industrial robot's servo system, an adaptive fuzzy PID controller was adopted, achieving a better control effect. In the vision system, a CCD camera acquires signals and sends them to a video processing card. After processing, the PC controls the motion of the six joints through the motion control cards. In experiments, the manipulator operated with a machine tool and the vision system to realize grasping, processing, and verification functions. This is relevant to the manufacturing of industrial robots.
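
    A minimal sketch of the adaptive fuzzy PID idea: a conventional PID whose gains are rescaled by a simple membership function of the error magnitude. The membership shape and scaling factors are illustrative and not the paper's rule base.

        class FuzzyPID:
            """PID controller whose gains adapt to the error magnitude.

            A crude stand-in for a full fuzzy inference system: large
            errors boost the proportional gain for fast response, small
            errors boost the integral gain to remove steady-state offset.
            """
            def __init__(self, kp, ki, kd, dt, err_scale=1.0):
                self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
                self.err_scale = err_scale
                self.integral = 0.0
                self.prev_error = 0.0

            def update(self, setpoint, measured):
                e = setpoint - measured
                # Triangular membership of |e| in "large": 0 -> small, 1 -> large.
                big = min(abs(e) / self.err_scale, 1.0)
                kp = self.kp * (1.0 + 0.5 * big)
                ki = self.ki * (1.0 + 0.5 * (1.0 - big))
                self.integral += e * self.dt
                d = (e - self.prev_error) / self.dt
                self.prev_error = e
                return kp * e + ki * self.integral + self.kd * d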

  9. Accuracy in Robot Generated Image Data Sets

    DEFF Research Database (Denmark)

    Aanæs, Henrik; Dahl, Anders Bjorholm

    2015-01-01

    In this paper we present a practical innovation concerning how to achieve high accuracy of camera positioning when using a 6-axis industrial robot to generate high-quality data sets for computer vision. This innovation is based on the realization that, to a very large extent, the robot's positioning error is deterministic and can as such be calibrated away. We have successfully used this innovation in our efforts to create data sets for computer vision. Since the use of this innovation has a significant effect on data set quality, we present it here in some detail, to better aid others...

  10. Panoramic stereo sphere vision

    Science.gov (United States)

    Feng, Weijia; Zhang, Baofeng; Röning, Juha; Zong, Xiaoning; Yi, Tian

    2013-01-01

    Conventional stereo vision systems have a small field of view (FOV), which limits their usefulness for certain applications. While panoramic vision is able to "see" in all directions of the observation space, scene depth information is lost because of the mapping from 3D reference coordinates to the 2D panoramic image. In this paper, we present an innovative vision system built from a special combined fish-eye lens module, capable of producing 3D coordinate information from the whole global observation space while simultaneously acquiring a 360°×360° panoramic image with no blind area, using a single vision device with one static shot. It is called Panoramic Stereo Sphere Vision (PSSV). We propose the geometric model, mathematical model, and parameter calibration method in this paper. Specifically, video surveillance, robotic autonomous navigation, virtual reality, driving assistance, multiple maneuvering target tracking, automatic mapping of environments, and attitude estimation are some of the applications that will benefit from PSSV.

  11. Declarative Rule-based Safety for Robotic Perception Systems

    DEFF Research Database (Denmark)

    Mogensen, Johann Thor Ingibergsson; Kraft, Dirk; Schultz, Ulrik Pagh

    2017-01-01

    Mobile robots are used across many domains from personal care to agriculture. Working in dynamic open-ended environments puts high constraints on the robot perception system, which is critical for the safety of the system as a whole. To achieve the required safety levels the perception system needs to be certified, but no specific standards exist for computer vision systems, and the concept of safe vision systems remains largely unexplored. In this paper we present a novel domain-specific language that allows the programmer to express image quality detection rules for enforcing safety constraints...

  12. Interaction Challenges in Human-Robot Space Exploration

    Science.gov (United States)

    Fong, Terrence; Nourbakhsh, Illah

    2005-01-01

    In January 2004, NASA established a new, long-term exploration program to fulfill the President's Vision for U.S. Space Exploration. The primary goal of this program is to establish a sustained human presence in space, beginning with robotic missions to the Moon in 2008, followed by extended human expeditions to the Moon as early as 2015. In addition, the program places significant emphasis on the development of joint human-robot systems. A key difference from previous exploration efforts is that future space exploration activities must be sustainable over the long-term. Experience with the space station has shown that cost pressures will keep astronaut teams small. Consequently, care must be taken to extend the effectiveness of these astronauts well beyond their individual human capacity. Thus, in order to reduce human workload, costs, and fatigue-driven error and risk, intelligent robots will have to be an integral part of mission design.

  13. IMU-based online kinematic calibration of robot manipulator.

    Science.gov (United States)

    Du, Guanglong; Zhang, Ping

    2013-01-01

    Robot calibration is a useful diagnostic method for improving the positioning accuracy in robot production and maintenance. An online robot self-calibration method based on an inertial measurement unit (IMU) is presented in this paper. The method requires that the IMU is rigidly attached to the robot manipulator, which makes it possible to obtain the orientation of the manipulator from the orientation of the IMU in real time. This paper proposes an efficient approach which incorporates the Factored Quaternion Algorithm (FQA) and a Kalman Filter (KF) to estimate the orientation of the IMU. Then, an Extended Kalman Filter (EKF) is used to estimate kinematic parameter errors. Using this proposed orientation estimation method will result in improved reliability and accuracy in determining the orientation of the manipulator. Compared with existing vision-based self-calibration methods, the great advantage of this method is that it does not need complex steps, such as camera calibration, image capture, and corner detection, which makes the robot calibration procedure more autonomous in a dynamic manufacturing environment. Experimental studies on a GOOGOL GRB3016 robot show that this method has better accuracy, convenience, and effectiveness than vision-based methods.
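
    The paper's FQA-plus-Kalman orientation estimator is considerably more involved; as a far simpler stand-in that shows the same gyro/accelerometer fusion idea for a single tilt axis, a complementary filter can be sketched as follows (the blend gain k is an assumption).

        import numpy as np

        def complementary_tilt(gyro_rates, accels, dt, k=0.98):
            """Fuse gyro rate (rad/s) and accelerometer tilt for one axis.

            The gyro term integrates smoothly over short horizons, while
            the accelerometer term (tilt implied by gravity) corrects the
            long-term drift of that integration.
            """
            angle = 0.0
            history = []
            for w, (ax, az) in zip(gyro_rates, accels):
                acc_angle = np.arctan2(ax, az)
                angle = k * (angle + w * dt) + (1.0 - k) * acc_angle
                history.append(angle)
            return np.array(history)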

  14. IMU-Based Online Kinematic Calibration of Robot Manipulator

    Directory of Open Access Journals (Sweden)

    Guanglong Du

    2013-01-01

    Full Text Available Robot calibration is a useful diagnostic method for improving the positioning accuracy in robot production and maintenance. An online robot self-calibration method based on an inertial measurement unit (IMU) is presented in this paper. The method requires that the IMU is rigidly attached to the robot manipulator, which makes it possible to obtain the orientation of the manipulator with the orientation of the IMU in real time. This paper proposed an efficient approach which incorporates the Factored Quaternion Algorithm (FQA) and Kalman Filter (KF) to estimate the orientation of the IMU. Then, an Extended Kalman Filter (EKF) is used to estimate kinematic parameter errors. Using this proposed orientation estimation method will result in improved reliability and accuracy in determining the orientation of the manipulator. Compared with the existing vision-based self-calibration methods, the great advantage of this method is that it does not need complex steps, such as camera calibration, image capture, and corner detection, which make the robot calibration procedure more autonomous in a dynamic manufacturing environment. Experimental studies on a GOOGOL GRB3016 robot show that this method has better accuracy, convenience, and effectiveness than vision-based methods.

  15. Development of dog-like retrieving capability in a ground robot

    Science.gov (United States)

    MacKenzie, Douglas C.; Ashok, Rahul; Rehg, James M.; Witus, Gary

    2013-01-01

    This paper presents the Mobile Intelligence Team's approach to addressing the CANINE outdoor ground robot competition. The competition required developing a robot that provided retrieving capabilities similar to a dog, while operating fully autonomously in unstructured environments. The vision team consisted of Mobile Intelligence, the Georgia Institute of Technology, and Wayne State University. Important computer vision aspects of the project were the ability to quickly learn the distinguishing characteristics of novel objects, searching images for the object as the robot drove a search pattern, identifying people near the robot for safe operation, correctly identifying the object among distractors, and localizing the object for retrieval. The classifier used to identify the objects is discussed, including an analysis of its performance, and an overview of the entire system architecture is presented. A discussion of the robot's performance in the competition demonstrates the system's successes in real-world testing.

  16. The Development of a Robot-Based Learning Companion: A User-Centered Design Approach

    Science.gov (United States)

    Hsieh, Yi-Zeng; Su, Mu-Chun; Chen, Sherry Y.; Chen, Gow-Dong

    2015-01-01

    A computer-vision-based method is widely employed to support the development of a variety of applications. In this vein, this study uses a computer-vision-based method to develop a playful learning system, which is a robot-based learning companion named RobotTell. Unlike existing playful learning systems, a user-centered design (UCD) approach is…

  17. Vision drives correlated activity without patterned spontaneous activity in developing Xenopus retina.

    Science.gov (United States)

    Demas, James A; Payne, Hannah; Cline, Hollis T

    2012-04-01

    Developing amphibians need vision to avoid predators and locate food before visual system circuits fully mature. Xenopus tadpoles can respond to visual stimuli as soon as retinal ganglion cells (RGCs) innervate the brain; however, in mammals, chicks, and turtles, RGCs reach their central targets many days, or even weeks, before their retinas are capable of vision. In the absence of vision, activity-dependent refinement in these amniote species is mediated by waves of spontaneous activity that periodically spread across the retina, correlating the firing of action potentials in neighboring RGCs. Theory suggests that retinorecipient neurons in the brain use patterned RGC activity to sharpen the retinotopy first established by genetic cues. We find that in both wild-type and albino Xenopus tadpoles, RGCs are spontaneously active at all stages of tadpole development studied, but their population activity never coalesces into waves. Even at the earliest stages recorded, visual stimulation dominates over spontaneous activity and can generate patterns of RGC activity similar to the locally correlated spontaneous activity observed in amniotes. In addition, we show that blocking AMPA and NMDA type glutamate receptors significantly decreases spontaneous activity in the young Xenopus retina, but that blocking GABA(A) receptors does not. Our findings indicate that vision drives the correlated activity required for topographic map formation. They further suggest that developing retinal circuits in the two major subdivisions of tetrapods, amphibians and amniotes, evolved different strategies to supply appropriately patterned RGC activity to drive visual circuit refinement. Copyright © 2011 Wiley Periodicals, Inc.

  18. Vision Assessment and Prescription of Low Vision Devices

    OpenAIRE

    Keeffe, Jill

    2004-01-01

    Assessment of vision and prescription of low vision devices are part of a comprehensive low vision service. Other components of the service include training the person affected by low vision in use of vision and other senses, mobility, activities of daily living, and support for education, employment or leisure activities. Specialist vision rehabilitation agencies have services to provide access to information (libraries) and activity centres for groups of people with impaired vision.

  19. Development of an advanced intelligent robot navigation system

    International Nuclear Information System (INIS)

    Hai Quan Dai; Dalton, G.R.; Tulenko, J.; Crane, C.C. III

    1992-01-01

    As part of the US Department of Energy's Robotics for Advanced Reactors Project, the authors are in the process of assembling an advanced intelligent robotic navigation and control system based on previous work performed on this project in the areas of computer control, database access, graphical interfaces, shared data and computations, computer vision for position determination, and sonar-based computer navigation systems. The system will feature three levels of goals: (1) a high-level system for management of lower-level functions to achieve specific functional goals; (2) intermediate-level goals such as position determination, obstacle avoidance, and discovering unexpected objects; and (3) supplementary low-level functions such as reading and recording sonar or video camera data. In its current phase, the Cybermotion K2A mobile robot is not equipped with an onboard computer system, which will be included in the final phase. By that time, the onboard system will play important roles in vision processing and in robotic control communication.

  20. Inventing Japan's 'robotics culture': the repeated assembly of science, technology, and culture in social robotics.

    Science.gov (United States)

    Sabanović, Selma

    2014-06-01

    Using interviews, participant observation, and published documents, this article analyzes the co-construction of robotics and culture in Japan through the technical discourse and practices of robotics researchers. Three cases from current robotics research--the seal-like robot PARO, the Humanoid Robotics Project HRP-2 humanoid, and 'kansei robotics'--show the different ways in which scientists invoke culture to provide epistemological grounding and possibilities for social acceptance of their work. These examples show how the production and consumption of social robotic technologies are associated with traditional crafts and values, how roboticists negotiate among social, technical, and cultural constraints while designing robots, and how humans and robots are constructed as cultural subjects in social robotics discourse. The conceptual focus is on the repeated assembly of cultural models of social behavior, organization, cognition, and technology through roboticists' narratives about the development of advanced robotic technologies. This article provides a picture of robotics as the dynamic construction of technology and culture and concludes with a discussion of the limits and possibilities of this vision in promoting a culturally situated understanding of technology and a multicultural view of science.

  1. Social Intelligence for a Robot Engaging People in Cognitive Training Activities

    Directory of Open Access Journals (Sweden)

    Jeanie Chan

    2012-10-01

    Full Text Available Current research supports the use of cognitive training interventions to improve the brain functioning of both adults and children. Our work focuses on exploring the potential use of robot assistants to allow for these interventions to become more accessible. Namely, we aim to develop an intelligent, socially assistive robot that can engage individuals in person-centred cognitively stimulating activities. In this paper, we present the design of a novel control architecture for the robot Brian 2.0, which enables the robot to be a social motivator by providing assistance, encouragement and celebration during an activity. A hierarchical reinforcement learning approach is used in the architecture to allow the robot to: (1) learn appropriate assistive behaviours based on the structure of the activity, and (2) personalize an interaction based on user states. Experiments show that the control architecture is effective in determining the robot's optimal assistive behaviours during a memory game interaction.
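
    The hierarchical reinforcement learning idea above can be pictured as two coupled tabular Q-learners: a top level that selects among assistive behaviour classes (assist, encourage, celebrate) and a lower level that selects a concrete action within the chosen class. The Python sketch below is illustrative only; the state encoding, the interact() stand-in for the user, and all constants are invented here and do not reproduce Brian 2.0's actual architecture.

      import numpy as np

      rng = np.random.default_rng(1)
      n_states, n_behaviours, n_actions = 8, 3, 4    # user states; behaviour classes; utterances
      Q_top = np.zeros((n_states, n_behaviours))             # which behaviour class to use
      Q_low = np.zeros((n_behaviours, n_states, n_actions))  # how to express it
      alpha, gamma, eps = 0.1, 0.9, 0.2

      def choose(q_row):
          """Epsilon-greedy selection over one row of a Q-table."""
          return int(rng.integers(len(q_row))) if rng.random() < eps else int(np.argmax(q_row))

      def interact(state, behaviour, action):
          """Stand-in for the user/game: returns (next_state, reward)."""
          return int(rng.integers(n_states)), float(behaviour == state % n_behaviours)

      s = 0
      for _ in range(3000):
          b = choose(Q_top[s])        # top level: pick an assistive behaviour class
          a = choose(Q_low[b, s])     # low level: pick a concrete action within it
          s2, r = interact(s, b, a)
          # standard Q-learning updates at both levels of the hierarchy
          Q_low[b, s, a] += alpha * (r + gamma * Q_low[b, s2].max() - Q_low[b, s, a])
          Q_top[s, b] += alpha * (r + gamma * Q_top[s2].max() - Q_top[s, b])
          s = s2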

  2. Identifying the computational requirements of an integrated top-down-bottom-up model for overt visual attention within an active vision system.

    Science.gov (United States)

    McBride, Sebastian; Huelse, Martin; Lee, Mark

    2013-01-01

    Computational visual attention systems have been constructed in order for robots and other devices to detect and locate regions of interest in their visual world. Such systems often attempt to take account of what is known of the human visual system and employ concepts, such as 'active vision', to gain various perceived advantages. However, despite the potential for gaining insights from such experiments, the computational requirements for visual attention processing are often not clearly presented from a biological perspective. This was the primary objective of this study, attained through two specific phases of investigation: 1) conceptual modeling of a top-down-bottom-up framework through critical analysis of the psychophysical and neurophysiological literature, 2) implementation and validation of the model into robotic hardware (as a representative of an active vision system). Seven computational requirements were identified: 1) transformation of retinotopic to egocentric mappings, 2) spatial memory for the purposes of medium-term inhibition of return, 3) synchronization of 'where' and 'what' information from the two visual streams, 4) convergence of top-down and bottom-up information to a centralized point of information processing, 5) a threshold function to elicit saccade action, 6) a function to represent task relevance as a ratio of excitation and inhibition, and 7) derivation of excitation and inhibition values from object-associated feature classes. The model provides further insight into the nature of data representation and transfer between brain regions associated with the vertebrate 'active' visual attention system. In particular, the model lends strong support to the functional role of the lateral intraparietal region of the brain as a primary area of information consolidation that directs putative action through the use of a 'priority map'.
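
    Requirements (5) and (6) invite a compact illustration: a priority map formed as a ratio of excitation to inhibition, thresholded to decide whether a saccade is triggered and where it lands. The NumPy sketch below is a toy rendering of that idea with invented map sizes and threshold values; it is not the authors' implementation.

      import numpy as np

      def priority_map(excitation, inhibition, eps=1e-6):
          """Task relevance as a ratio of excitation to inhibition (requirement 6)."""
          return excitation / (inhibition + eps)

      def select_saccade(priority, threshold):
          """Threshold function eliciting a saccade (requirement 5). Returns the
          (row, col) target of the next saccade, or None if nothing is salient
          enough to trigger an eye movement."""
          idx = np.unravel_index(np.argmax(priority), priority.shape)
          return idx if priority[idx] >= threshold else None

      # Toy 'where' map: bottom-up salience (excitation) against inhibition of
      # return; a recently fixated region is suppressed (requirement 2).
      excitation = np.random.rand(64, 64)
      inhibition = np.ones((64, 64))
      inhibition[20:30, 40:50] = 5.0
      print(select_saccade(priority_map(excitation, inhibition), threshold=0.8))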

  3. Hand/Eye Coordination For Fine Robotic Motion

    Science.gov (United States)

    Lokshin, Anatole M.

    1992-01-01

    Fine motions of a robotic manipulator are controlled with the help of visual feedback by a new method that reduces position errors by an order of magnitude. The robotic vision subsystem includes five cameras: three stationary ones providing wide-angle views of the workspace and two mounted on the wrist of an auxiliary robot arm. The stereoscopic cameras on the arm give close-up views of the object and end effector. The cameras measure errors between commanded and actual positions and/or provide data for mapping between visual and manipulator-joint-angle coordinates.

  4. Embodied Computation: An Active-Learning Approach to Mobile Robotics Education

    Science.gov (United States)

    Riek, L. D.

    2013-01-01

    This paper describes a newly designed upper-level undergraduate and graduate course, Autonomous Mobile Robots. The course employs active, cooperative, problem-based learning and is grounded in the fundamental computational problems in mobile robotics defined by Dudek and Jenkin. Students receive a broad survey of robotics through lectures, weekly…

  5. Embedded Visual System and its Applications on Robots

    CERN Document Server

    Xu, De

    2010-01-01

    Embedded vision systems such as smart cameras have been rapidly developed recently. Vision systems have become smaller and lighter, but their performance has improved. The algorithms in embedded vision systems have their specifications limited by the frequency of the CPU, memory size, and architecture. The goal of this e-book is to provide an advanced reference work for engineers, researchers and scholars in the field of robotics, machine vision, and automation and to facilitate the exchange of their ideas, experiences and views on embedded vision system models. The effectiveness for all methods is

  6. Discrete-State-Based Vision Navigation Control Algorithm for One Bipedal Robot

    Directory of Open Access Journals (Sweden)

    Dunwen Wei

    2015-01-01

    Full Text Available Navigation with a specific objective can be defined by specifying a desired timed trajectory. The concept of a desired direction field is proposed to deal with such navigation problems. To lay down a principled discussion of the accuracy and efficiency of navigation algorithms, strictly quantitative definitions of tracking error, actuator effect, and time efficiency are established. In this paper, a vision navigation control method based on the desired direction field is proposed. The method uses discrete image sequences to form a discrete state space, which makes it especially suitable for bipedal walking robots with a single camera walking on a barrier-free planar surface to track a specific objective without overshoot. The shortest path method (SPM) is proposed to design such a direction field with the highest time efficiency, and an improved control method based on a canonical piecewise-linear function (PLF) is proposed. In order to restrain noise disturbance from the camera sensor, a band width control method is presented that significantly decreases the influence of the error. The robustness and efficiency of the proposed algorithm are illustrated through a number of computer simulations that take the camera sensor error into account. Simulation results show that robustness and efficiency can be balanced by choosing a proper controlling value of the band width.
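
    As a rough illustration of steering onto a desired direction field through a saturated (piecewise-linear) control law with a noise-suppressing band, consider the sketch below. The function shapes and constants are assumptions made for illustration; the paper's SPM and PLF formulations are not reproduced here.

      import numpy as np

      def plf(err, band=0.2, max_turn=0.5):
          """Canonical piecewise-linear function: proportional inside the band,
          saturated outside it; `band` acts as the noise-suppressing band width."""
          return float(np.clip(max_turn * err / band, -max_turn, max_turn))

      def heading_command(theta, desired_dir):
          """Turn-rate command driving the heading onto the desired direction."""
          err = np.arctan2(np.sin(desired_dir - theta), np.cos(desired_dir - theta))
          return plf(err)

      # Direction field for a point goal: every state points at the goal.
      goal, pos, theta = np.array([2.0, 1.0]), np.array([0.0, 0.0]), 0.0
      desired = np.arctan2(goal[1] - pos[1], goal[0] - pos[0])
      omega = heading_command(theta, desired)   # commanded turn rate (rad/s)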

  7. A robotic platform for laser welding of corneal tissue

    Science.gov (United States)

    Rossi, Francesca; Micheletti, Filippo; Magni, Giada; Pini, Roberto; Menabuoni, Luca; Leoni, Fabio; Magnani, Bernardo

    2017-07-01

    Robotic surgery is a reality in several surgical fields, such as gastrointestinal surgery. In ophthalmic surgery the required high spatial precision has limited the application of robotic systems, and even though several systems have been designed in the last 10 years, only a few applications in retinal surgery have been tested in animal models. The combination of photonics and robotics can open new frontiers in minimally invasive surgery, improving precision, reducing tremor, amplifying the scale of motion, and automating the procedure. In this manuscript we present preliminary results in developing a vision-guided robotic platform for laser-assisted anterior eye surgery. The robotic console is composed of a robotic arm equipped with an "end effector" designed to deliver laser light to the anterior corneal surface. The main intended application is laser welding of corneal tissue in laser-assisted penetrating keratoplasty and endothelial keratoplasty. The console is equipped with an integrated vision system. The work originates from a clear medical demand to improve the efficacy of different surgical procedures: once the prototype is optimized, other surgical areas, such as neurosurgery, urology and spinal surgery, will be included in its application.

  8. A Preliminary Study Exploring the Use of Fictional Narrative in Robotics Activities

    Science.gov (United States)

    Williams, Douglas; Ma, Yuxin; Prejean, Louise

    2010-01-01

    Educational robotics activities are gaining in popularity. Though some research data suggest that educational robotics can be an effective approach in teaching mathematics, science, and engineering, research is needed to generate the best practices and strategies for designing these learning environments. Existing robotics activities typically do…

  9. RGB–D terrain perception and dense mapping for legged robots

    Directory of Open Access Journals (Sweden)

    Belter Dominik

    2016-03-01

    Full Text Available This paper addresses the issues of unstructured terrain modeling for the purpose of navigation with legged robots. We present an improved elevation grid concept adapted to the specific requirements of a small legged robot with limited perceptual capabilities. We propose an extension of the elevation grid update mechanism by incorporating a formal treatment of the spatial uncertainty. Moreover, this paper presents uncertainty models for a structured-light RGB-D sensor and a stereo vision camera used to produce a dense depth map. The model for the uncertainty of the stereo vision camera is based on uncertainty propagation from calibration, through the undistortion and rectification algorithms, allowing calculation of the uncertainty of measured 3D point coordinates. The proposed uncertainty models were used for the construction of a terrain elevation map using the Videre Design STOC stereo vision camera and Kinect-like range sensors. We provide experimental verification of the proposed mapping method, and a comparison with another recently published terrain mapping method for walking robots.
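
    The elevation-grid update with a formal treatment of spatial uncertainty can be pictured as a per-cell product of Gaussians: each new height measurement, carrying a variance propagated from the sensor model, is fused with the cell's current estimate by a 1-D Kalman update. A minimal sketch with invented grid dimensions and variances, not the authors' exact mechanism:

      import numpy as np

      class ElevationGrid:
          def __init__(self, size, cell, init_var=1e6):
              self.cell = cell
              self.h = np.zeros((size, size))             # height estimate per cell
              self.var = np.full((size, size), init_var)  # its variance (unknown at start)

          def update(self, x, y, z, var_z):
              """Fuse one measurement (height z, variance var_z) into its cell."""
              i, j = int(x / self.cell), int(y / self.cell)
              k = self.var[i, j] / (self.var[i, j] + var_z)   # Kalman gain
              self.h[i, j] += k * (z - self.h[i, j])
              self.var[i, j] *= (1.0 - k)

      grid = ElevationGrid(size=100, cell=0.02)
      grid.update(0.51, 0.33, z=0.12, var_z=0.0004)  # confident reading moves the cell
      grid.update(0.51, 0.33, z=0.20, var_z=0.04)    # noisy reading barely moves it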

  10. Mobile Robot Navigation in a Corridor Using Visual Odometry

    DEFF Research Database (Denmark)

    Bayramoglu, Enis; Andersen, Nils Axel; Poulsen, Niels Kjølstad

    2009-01-01

    Incorporation of computer vision into mobile robot localization is studied in this work. It includes the generation of localization information from raw images and its fusion with the odometric pose estimation. The technique is then implemented on a small mobile robot operating in a corridor...

  11. Model and Behavior-Based Robotic Goalkeeper

    DEFF Research Database (Denmark)

    Lausen, H.; Nielsen, J.; Nielsen, M.

    2003-01-01

    This paper describes the design, implementation and test of a goalkeeper robot for the Middle-Size League of RoboCup. The goalkeeper task is implemented by a set of primitive tasks and behaviours coordinated by a 2-level hierarchical state machine. The primitive tasks concerning complex motion...... control are implemented by a non-linear control algorithm, adapted to the different task goals (e.g., follow the ball or the robot posture from local features extracted from images acquired by a catadioptric omni-directional vision system. Most robot parameters were designed based on simulations carried...

  12. Soft Robotic Manipulation of Onions and Artichokes in the Food Industry

    Directory of Open Access Journals (Sweden)

    R. Morales

    2014-04-01

    Full Text Available This paper presents the development of a robotic solution for a problem of fast manipulation and handling of onions or artichokes in the food industry. The complete solution consists of a parallel robotic manipulator, a specially designed end-effector based on a customized vacuum suction cup, and computer vision software developed for pick and place operations. First, the selection and design process of the proposed robotic solution to fit the initial requirements is presented, including the customized vacuum suction cup. Then, the kinematic analysis of the parallel manipulator needed to develop the robot control system is reviewed. Moreover, the computer vision application is presented in the paper. Hardware details of the implementation of the prototype are also shown. Finally, conclusions and future work show the current status of the project.

  13. Kinesthetic deficits after perinatal stroke: robotic measurement in hemiparetic children.

    Science.gov (United States)

    Kuczynski, Andrea M; Semrau, Jennifer A; Kirton, Adam; Dukelow, Sean P

    2017-02-15

    While sensory dysfunction is common in children with hemiparetic cerebral palsy (CP) secondary to perinatal stroke, it is an understudied contributor to disability with limited objective measurement tools. Robotic technology offers the potential to objectively measure complex sensorimotor function but has been understudied in perinatal stroke. The present study aimed to quantify kinesthetic deficits in hemiparetic children with perinatal stroke and determine their association with clinical function. Case-control study. Participants were 6-19 years of age. Stroke participants had MRI confirmed unilateral perinatal arterial ischemic stroke or periventricular venous infarction, and symptomatic hemiparetic cerebral palsy. Participants completed a robotic assessment of upper extremity kinesthesia using a robotic exoskeleton (KINARM). Four kinesthetic parameters (response latency, initial direction error, peak speed ratio, and path length ratio) and their variabilities were measured with and without vision. Robotic outcomes were compared across stroke groups and controls and to clinical measures of sensorimotor function. Forty-three stroke participants (23 arterial, 20 venous, median age 12 years, 42% female) were compared to 106 healthy controls. Stroke cases displayed significantly impaired kinesthesia that remained when vision was restored. Kinesthesia was more impaired in arterial versus venous lesions and correlated with clinical measures. Robotic assessment of kinesthesia is feasible in children with perinatal stroke. Kinesthetic impairment is common and associated with stroke type. Failure to correct with vision suggests sensory network dysfunction.
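
    The four kinesthetic parameters can be computed from matched passive (robot-moved) and active (mirror-matching) arm trajectories roughly as follows. The onset threshold and sample offsets here are illustrative stand-ins, not the published KINARM protocol values.

      import numpy as np

      def kinesthetic_parameters(active, passive, t, v_onset=0.05):
          """active, passive: (N, 2) hand paths; t: shared time stamps (s)."""
          speed_a = np.linalg.norm(np.gradient(active, t, axis=0), axis=1)
          speed_p = np.linalg.norm(np.gradient(passive, t, axis=0), axis=1)

          on_p = int(np.argmax(speed_p > v_onset))   # movement onset indices
          on_a = int(np.argmax(speed_a > v_onset))
          latency = t[on_a] - t[on_p]                # response latency (s)

          d_a = active[on_a + 10] - active[on_a]     # initial movement directions
          d_p = passive[on_p + 10] - passive[on_p]
          cosang = d_a @ d_p / (np.linalg.norm(d_a) * np.linalg.norm(d_p))
          ide = np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))  # initial direction error

          psr = speed_a.max() / speed_p.max()        # peak speed ratio
          plr = (np.linalg.norm(np.diff(active, axis=0), axis=1).sum()
                 / np.linalg.norm(np.diff(passive, axis=0), axis=1).sum())  # path length ratio
          return latency, ide, psr, plr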

  14. Laser range finder model for autonomous navigation of a robot in a maize field using a particle filter

    NARCIS (Netherlands)

    Hiremath, S.A.; Heijden, van der G.W.A.M.; Evert, van F.K.; Stein, A.; Braak, ter C.J.F.

    2014-01-01

    Autonomous navigation of robots in an agricultural environment is a difficult task due to the inherent uncertainty in the environment. Many existing agricultural robots use computer vision and other sensors to supplement Global Positioning System (GPS) data when navigating. Vision based methods are

  15. Artificial Vision, New Visual Modalities and Neuroadaptation

    Directory of Open Access Journals (Sweden)

    Hilmi Or

    2012-01-01

    Full Text Available To study the descriptions from which artificial vision derives, to explore the new visual modalities resulting from eye surgeries and diseases, and to gain awareness of the use of machine vision systems for both enhancement of visual perception and better understanding of neuroadaptation. Science has not yet defined what vision is. However, some optical-based systems and definitions have been established considering some of the factors involved in the formation of seeing. The best known system includes the Gabor filter and the Gabor patch, which work on edge perception and describe visual perception in the best known way. These systems are used today in industry and in the technology of machines, robots and computers to provide their 'seeing'. Beyond machinery, these definitions are used in humans for neuroadaptation to new visual modalities after some eye surgeries, or to improve the quality of some already known visual modalities. Besides this, 'blindsight' - which was not known to exist until 35 years ago - can be stimulated with visual exercises. The Gabor system is a description of visual perception definable in machine vision as well as in human visual perception, and it is used today in robotic vision. There are new visual modalities which arise after some eye surgeries or with the use of some visual optical devices. Also, blindsight is a different visual modality that is beginning to be defined even though its exact etiology is not known. In all the new visual modalities, new vision-stimulating therapies using Gabor systems can be applied. (Turk J Ophthalmol 2012; 42: 61-5)
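
    Since the abstract leans on the Gabor filter and Gabor patch as the working description of edge perception, the standard formulation is worth spelling out: a sinusoidal carrier modulated by a Gaussian envelope. The parameter values below are arbitrary illustrations; a bank of such kernels at several orientations gives a crude machine-vision edge-perception front end.

      import numpy as np

      def gabor_kernel(size=21, wavelength=6.0, theta=0.0, sigma=4.0, phase=0.0):
          """2-D Gabor filter: a sinusoidal grating under a Gaussian envelope."""
          half = size // 2
          y, x = np.mgrid[-half:half + 1, -half:half + 1]
          xr = x * np.cos(theta) + y * np.sin(theta)    # rotate into the grating frame
          yr = -x * np.sin(theta) + y * np.cos(theta)
          envelope = np.exp(-(xr**2 + yr**2) / (2 * sigma**2))
          carrier = np.cos(2 * np.pi * xr / wavelength + phase)
          return envelope * carrier

      # A Gabor patch (the stimulus used in perception experiments) is simply
      # the kernel itself rendered as an image:
      patch = gabor_kernel(size=64, wavelength=12.0, theta=np.pi / 4)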

  16. Robotic fabrication in architecture, art, and design

    CERN Document Server

    Braumann, Johannes

    2013-01-01

    Architects, artists, and designers have been fascinated by robots for many decades, from Villemard’s utopian vision of an architect building a house with robotic labor in 1910, to the design of buildings that are robots themselves, such as Archigram’s Walking City. Today, they are again approaching the topic of robotic fabrication but this time employing a different strategy: instead of utopian proposals like Archigram’s or the highly specialized robots that were used by Japan’s construction industry in the 1990s, the current focus of architectural robotics is on industrial robots. These robotic arms have six degrees of freedom and are widely used in industry, especially for automotive production lines. What makes robotic arms so interesting for the creative industry is their multi-functionality: instead of having to develop specialized machines, a multifunctional robot arm can be equipped with a wide range of end-effectors, similar to a human hand using various tools. Therefore, architectural researc...

  17. Robotic inspection technology - process and toolbox

    Energy Technology Data Exchange (ETDEWEB)

    Hermes, Markus [ROSEN Group (United States). R and D Dept.

    2005-07-01

    Pipeline deterioration grows progressively with the aging of pipeline systems (on-plot and cross country). This includes both very localized corrosion and increasing failure probability due to fatigue cracking. Limiting regular inspection activities to the 'scrapable' part of the pipelines only will ultimately result in a pipeline system with questionable integrity. The confidence level in the integrity of these systems will drop below acceptance levels. Inspection of presently un-inspectable sections of the pipeline system becomes a must. This paper provides information on ROSEN's progress on the 'robotic inspection technology' project. The robotic inspection concept developed by ROSEN is based on a modular toolbox principle. This is mandatory: a universal 'all purpose' robot would not be reliable and efficient in resolving the postulated inspection task. A preparatory Quality Function Deployment (QFD) analysis is performed prior to the decision about the adequate robotic solution. This enhances the serviceability and efficiency of the provided technology. The word 'robotic' can be understood in its full meaning of Recognition - Strategy - Motion - Control. Cooperation of different individual systems with an established communication, e.g. utilizing Bluetooth technology, supports the robustness of the ROSEN robotic inspection approach. Besides the navigation strategy, the inspection strategy is also part of the QFD process. Multiple inspection technologies combined on a single carrier or distributed across interacting containers must be selected with a clear vision of the particular goal. (author)

  18. Research into the Architecture of CAD Based Robot Vision Systems

    Science.gov (United States)

    1988-02-09

    Vision '86, and "Automatic Generation of Recognition Features for Computer Vision," Mudge, Turney and Volz, published in Robotica (1987). All of the... Occluded Parts," (T.N. Mudge, J.L. Turney, and R.A. Volz), Robotica, vol. 5, 1987, pp. 117-127. 5. "Vision Algorithms for Hypercube Machines," (T.N. Mudge

  19. Motion based segmentation for robot vision using adapted EM algorithm

    NARCIS (Netherlands)

    Zhao, Wei; Roos, Nico

    2016-01-01

    Robots operate in a dynamic world in which objects are often moving. The movement of objects may help the robot to segment the objects from the background. The result of the segmentation can subsequently be used to identify the objects. This paper investigates the possibility of segmenting objects

  20. DARPA Robotics Challenge (DRC) Using Human-Machine Teamwork to Perform Disaster Response with a Humanoid Robot

    Science.gov (United States)

    2017-02-01

    leverage our tools and skills to develop a system in which we can get the simulated government furnished equipment (GFE) robot to walk over various types...our control software to the constellation and made a small helper program that gave us the possibility to restart our control software should...avoided this way. - The time and bandwidth limits caused us to integrate helper tools based on computer vision and a microphone sensor into the robot

  1. Identifying the computational requirements of an integrated top-down-bottom-up model for overt visual attention within an active vision system.

    Directory of Open Access Journals (Sweden)

    Sebastian McBride

    Full Text Available Computational visual attention systems have been constructed in order for robots and other devices to detect and locate regions of interest in their visual world. Such systems often attempt to take account of what is known of the human visual system and employ concepts, such as 'active vision', to gain various perceived advantages. However, despite the potential for gaining insights from such experiments, the computational requirements for visual attention processing are often not clearly presented from a biological perspective. This was the primary objective of this study, attained through two specific phases of investigation: (1) conceptual modeling of a top-down-bottom-up framework through critical analysis of the psychophysical and neurophysiological literature, (2) implementation and validation of the model into robotic hardware (as a representative of an active vision system). Seven computational requirements were identified: (1) transformation of retinotopic to egocentric mappings, (2) spatial memory for the purposes of medium-term inhibition of return, (3) synchronization of 'where' and 'what' information from the two visual streams, (4) convergence of top-down and bottom-up information to a centralized point of information processing, (5) a threshold function to elicit saccade action, (6) a function to represent task relevance as a ratio of excitation and inhibition, and (7) derivation of excitation and inhibition values from object-associated feature classes. The model provides further insight into the nature of data representation and transfer between brain regions associated with the vertebrate 'active' visual attention system. In particular, the model lends strong support to the functional role of the lateral intraparietal region of the brain as a primary area of information consolidation that directs putative action through the use of a 'priority map'.

  2. Active robotic training improves locomotor function in a stroke survivor

    Directory of Open Access Journals (Sweden)

    Krishnan Chandramouli

    2012-08-01

    Full Text Available Background: Clinical outcomes after robotic training are often not superior to conventional therapy. One key factor responsible for this is the use of control strategies that provide substantial guidance. This strategy not only leads to a reduction in volitional physical effort, but also interferes with motor relearning. Methods: We tested the feasibility of a novel training approach (active robotic training) using a powered gait orthosis (Lokomat) in mitigating post-stroke gait impairments of a 52-year-old male stroke survivor. This gait training paradigm combined patient-cooperative robot-aided walking with a target-tracking task. The training lasted for 4 weeks (12 visits, 3× per week). The subject's neuromotor performance and recovery were evaluated using biomechanical, neuromuscular and clinical measures recorded at various time-points (pre-training, post-training, and 6 weeks after training). Results: Active robotic training resulted in a considerable increase in target-tracking accuracy and a reduction in the kinematic variability of the ankle trajectory during robot-aided treadmill walking. These improvements also transferred to overground walking, as characterized by larger propulsive forces and more symmetric ground reaction forces (GRFs). Training also resulted in improvements in muscle coordination, which resembled patterns observed in healthy controls. These changes were accompanied by a reduction in motor cortical excitability (MCE) of the vastus medialis, medial hamstrings, and gluteus medius muscles during treadmill walking. Importantly, active robotic training resulted in substantial improvements in several standard clinical and functional parameters. These improvements persisted during the follow-up evaluation at 6 weeks. Conclusions: The results indicate that active robotic training appears to be a promising way of facilitating gait and physical function in moderately impaired stroke survivors.

  3. Using perturbations to identify the brain circuits underlying active vision.

    Science.gov (United States)

    Wurtz, Robert H

    2015-09-19

    The visual and oculomotor systems in the brain have been studied extensively in the primate. Together, they can be regarded as a single brain system that underlies active vision--the normal vision that begins with visual processing in the retina and extends through the brain to the generation of eye movement by the brainstem. The system is probably one of the most thoroughly studied brain systems in the primate, and it offers an ideal opportunity to evaluate the advantages and disadvantages of the series of perturbation techniques that have been used to study it. The perturbations have been critical in moving from correlations between neuronal activity and behaviour closer to a causal relation between neuronal activity and behaviour. The same perturbation techniques have also been used to tease out neuronal circuits that are related to active vision that in turn are driving behaviour. The evolution of perturbation techniques includes ablation of both cortical and subcortical targets, punctate chemical lesions, reversible inactivations, electrical stimulation, and finally the expanding optogenetic techniques. The evolution of perturbation techniques has supported progressively stronger conclusions about what neuronal circuits in the brain underlie active vision and how the circuits themselves might be organized.

  4. Automated robotic workcell for waste characterization

    International Nuclear Information System (INIS)

    Dougan, A.D.; Gustaveson, D.K.; Alvarez, R.A.; Holliday, M.

    1993-01-01

    The authors have successfully demonstrated an automated multisensor-based robotic workcell for hazardous waste characterization. The robot within this workcell uses feedback from radiation sensors, a metal detector, object profile scanners, and a 2D vision system to automatically segregate objects based on their measured properties. The multisensor information is used to make segregation decisions about waste items and to facilitate the grasping of objects with a robotic arm. The authors used both sodium iodide and high-purity germanium detectors in a two-step process to maximize throughput. For metal identification and discrimination, the authors are investigating the use of neutron interrogation techniques.

  5. Machine vision system for remote inspection in hazardous environments

    International Nuclear Information System (INIS)

    Mukherjee, J.K.; Krishna, K.Y.V.; Wadnerkar, A.

    2011-01-01

    Visual inspection of radioactive components needs remote inspection systems for human safety and for the protection of equipment (CCD imagers) from radiation. Elaborate view transport optics is required to deliver images to safe areas while maintaining the fidelity of image data. Automation of the system requires robots to operate such equipment. A robotized periscope has been developed to meet the challenge of remote safe viewing and vision-based inspection. (author)

  6. Sensor Fusion for Autonomous Mobile Robot Navigation

    DEFF Research Database (Denmark)

    Plascencia, Alfredo

    Multi-sensor data fusion is a broad area of constant research which is applied to a wide variety of fields, such as the field of mobile robots. Mobile robots are complex systems where the design and implementation of sensor fusion is a complex task. But research applications are explored constantly... The scope of the thesis is limited to building a map for a laboratory robot by fusing range readings from a sonar array with landmarks extracted from stereo vision images using the Scale Invariant Feature Transform (SIFT) algorithm.

  7. Building Artificial Vision Systems with Machine Learning

    Energy Technology Data Exchange (ETDEWEB)

    LeCun, Yann [New York University

    2011-02-23

    Three questions pose the next challenge for Artificial Intelligence (AI), robotics, and neuroscience. How do we learn perception (e.g. vision)? How do we learn representations of the perceptual world? How do we learn visual categories from just a few examples?

  8. Active MRI tracking for robotic assisted FUS

    Science.gov (United States)

    Xiao, Xu; Huang, Zhihong; Melzer, Andreas

    2017-03-01

    MR-guided FUS is a noninvasive method producing thermal necrosis at the position of tumors with high accuracy and temperature control. Because the typical size of the ultrasound focus is smaller than the area of the tissue to be treated, focus repositioning becomes necessary to achieve multiple sonications covering the whole targeted area. Using MR-compatible mechanical actuators can help the ultrasound beam reach a wider treatment range than electrical beam steering and gives more flexibility in positioning the transducer. An active MR tracking technique was integrated into the MRgFUS system to help locate the position of the mechanical actuator and the FUS transducer. For this study, a precise agar reference model was designed and fabricated to test the performance of the active tracking technique when used on the MR-compatible robotic system InnoMotion™ (IBSMM, Engineering spol. s r.o. / Ltd, Czech Republic). The precision, tracking range and positioning speed of the combined robotic FUS system were evaluated in this study. Compared to existing MR-guided HIFU systems, the combined robotic system with active tracking offers the potential for FUS treatment over a larger spatial range and at a faster speed, addressing one of the main challenges of organ motion tracking.

  9. A Lower Limb Rehabilitation Robot in Sitting Position with a Review of Training Activities.

    Science.gov (United States)

    Eiammanussakul, Trinnachoke; Sangveraphunsiri, Viboon

    2018-01-01

    Robots for stroke rehabilitation of the lower limbs in sitting/lying position have been developed extensively. Some of them have been applied in clinics and have shown potential for the recovery of poststroke patients who suffer from hemiparesis. These robots were developed to provide training at different joints of the lower limbs with various activities and modalities. This article reviews the training activities that have been realized by rehabilitation robots in the literature, in order to offer insights for developing a novel robot suitable for stroke rehabilitation. The control system of the lower limb rehabilitation robot in sitting position that was introduced in previous work is discussed in detail to demonstrate the behavior of the robot while training a subject. The nonlinear impedance control law, based on an active assistive control strategy, is able to define the response of the robot with more specifications, while the passivity property and the robustness of the system are verified. A preliminary experiment conducted on a healthy subject shows that the robot is able to perform active assistive exercises with various training activities and assist the subject to complete the training with the desired level of assistance.

  10. A Lower Limb Rehabilitation Robot in Sitting Position with a Review of Training Activities

    Directory of Open Access Journals (Sweden)

    Trinnachoke Eiammanussakul

    2018-01-01

    Full Text Available Robots for stroke rehabilitation of the lower limbs in sitting/lying position have been developed extensively. Some of them have been applied in clinics and have shown potential for the recovery of poststroke patients who suffer from hemiparesis. These robots were developed to provide training at different joints of the lower limbs with various activities and modalities. This article reviews the training activities that have been realized by rehabilitation robots in the literature, in order to offer insights for developing a novel robot suitable for stroke rehabilitation. The control system of the lower limb rehabilitation robot in sitting position that was introduced in previous work is discussed in detail to demonstrate the behavior of the robot while training a subject. The nonlinear impedance control law, based on an active assistive control strategy, is able to define the response of the robot with more specifications, while the passivity property and the robustness of the system are verified. A preliminary experiment conducted on a healthy subject shows that the robot is able to perform active assistive exercises with various training activities and assist the subject to complete the training with the desired level of assistance.

  11. The New Robotics-towards human-centered machines.

    Science.gov (United States)

    Schaal, Stefan

    2007-07-01

    Research in robotics has moved away from its primary focus on industrial applications. The New Robotics is a vision that has been developed in past years by our own university and many other national and international research institutions and addresses how increasingly more human-like robots can live among us and take over tasks where our current society has shortcomings. Elder care, physical therapy, child education, search and rescue, and general assistance in daily life situations are some of the examples that will benefit from the New Robotics in the near future. With these goals in mind, research for the New Robotics has to embrace a broad interdisciplinary approach, ranging from traditional mathematical issues of robotics to novel issues in psychology, neuroscience, and ethics. This paper outlines some of the important research problems that will need to be resolved to make the New Robotics a reality.

  12. 1st Latin American Congress on Automation and Robotics

    CERN Document Server

    Baca, José; Moreno, Héctor; Carrera, Isela; Cardona, Manuel

    2017-01-01

    This book contains the proceedings of the 1st Latin American Congress on Automation and Robotics held at Panama City, Panama in February 2017. It gathers research work from researchers, scientists, and engineers from academia and private industry, and presents current and exciting research applications and future challenges in Latin America. The scope of this book covers a wide range of themes associated with advances in automation and robotics research encountered in engineering and scientific research and practice. These topics are related to control algorithms, systems automation, perception, mobile robotics, computer vision, educational robotics, robotics modeling and simulation, and robotics and mechanism design. LACAR 2017 has been sponsored by SENACYT (Secretaria Nacional de Ciencia, Tecnologia e Inovacion of Panama).

  13. Reinforcement learning in computer vision

    Science.gov (United States)

    Bernstein, A. V.; Burnaev, E. V.

    2018-04-01

    Nowadays, machine learning has become one of the basic technologies used in solving various computer vision tasks such as feature detection, image segmentation, object recognition and tracking. In many applications, complex systems such as robots are equipped with visual sensors from which they learn the state of the surrounding environment by solving the corresponding computer vision tasks. The solutions of these tasks are used for making decisions about possible future actions. It is not surprising that when solving computer vision tasks we should take into account the special aspects of their subsequent application in model-based predictive control. Reinforcement learning is one of the modern machine learning technologies in which learning is carried out through interaction with the environment. In recent years, reinforcement learning has been used both for solving applied tasks such as the processing and analysis of visual information, and for solving specific computer vision problems such as filtering, extracting image features, localizing objects in scenes, and many others. The paper briefly describes reinforcement learning technology and its use for solving computer vision problems.
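
    The core of reinforcement learning, learning through interaction with the environment, reduces to a one-line value update. Below is a minimal tabular Q-learning loop in which a dummy step() stands in for the environment whose state would, in the settings above, come from solving a computer vision task; all names and constants are invented for illustration.

      import numpy as np

      n_states, n_actions = 16, 4
      Q = np.zeros((n_states, n_actions))
      alpha, gamma, eps = 0.1, 0.95, 0.1
      rng = np.random.default_rng(0)

      def step(s, a):
          """Stand-in environment: returns (next_state, reward)."""
          return int(rng.integers(n_states)), float(a == s % n_actions)

      s = 0
      for _ in range(5000):
          # epsilon-greedy action selection
          a = int(rng.integers(n_actions)) if rng.random() < eps else int(np.argmax(Q[s]))
          s2, r = step(s, a)
          # Q-learning update: move Q(s, a) toward the bootstrapped return
          Q[s, a] += alpha * (r + gamma * Q[s2].max() - Q[s, a])
          s = s2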

  14. Admittance-Based Upper Limb Robotic Active and Active-Assistive Movements

    Directory of Open Access Journals (Sweden)

    Cristóbal Ochoa Luna

    2015-09-01

    Full Text Available This paper presents two rehabilitation schemes for patients with upper limb impairments. The first is an active-assistive scheme based on the trajectory tracking of predefined paths in Cartesian space. In it, the system allows for an adjustable degree of variation with respect to ideal tracking. The amount of variation is determined through an admittance function that depends on the opposition forces exerted on the system by the user, due to possible impairments. The coefficients of the function allow the adjustment of the degree of assistance the robot will provide in order to complete the target trajectory. The second scheme corresponds to active movements in a constrained space. Here, the same admittance function is applied; however, in this case, it is unattached to a predefined trajectory and instead connected to one generated in real time, according to the user's intended movements. This allows the user to move freely with the robot in order to track a given path. The free movement is bounded through the use of virtual walls that do not allow users to exceed certain limits. A human-machine interface was developed to guide the robot's user.
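
    An admittance function of the kind described maps the force the user exerts on the robot into an allowed deviation from the reference trajectory. The sketch below integrates a 1-D virtual mass-damper-spring as an illustration; the coefficients and the form of the function are assumptions, with larger stiffness meaning stricter tracking (more assistance) and smaller stiffness letting the patient deviate more freely.

      def admittance_step(force, offset, velocity, stiffness=200.0, damping=30.0, dt=0.01):
          """One step of a 1-D admittance law with unit virtual mass:
          x'' + d*x' + k*x = f, integrated with explicit Euler."""
          accel = force - damping * velocity - stiffness * offset
          velocity += accel * dt
          offset += velocity * dt
          return offset, velocity

      # Commanded point = reference trajectory + admitted deviation.
      x_ref, f_user = 0.30, 5.0      # reference position (m), measured user force (N)
      dx, dv = 0.0, 0.0
      for _ in range(100):           # 1 s of interaction at 100 Hz
          dx, dv = admittance_step(f_user, dx, dv)
      x_cmd = x_ref + dx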

  15. An FPGA-Based Omnidirectional Vision Sensor for Motion Detection on Mobile Robots

    Directory of Open Access Journals (Sweden)

    Jones Y. Mori

    2012-01-01

    Full Text Available This work presents the development of an integrated hardware/software sensor system for moving object detection and distance calculation, based on a background subtraction algorithm. The sensor comprises a catadioptric system composed of a camera and a convex mirror that reflects the environment to the camera from all directions, obtaining a panoramic view. The sensor is used as an omnidirectional vision system, allowing for localization and navigation tasks of mobile robots. Several image processing operations such as filtering, segmentation and morphology have been included in the processing architecture. For achieving distance measurement, an algorithm to determine the center of mass of a detected object was implemented. The overall architecture has been mapped onto a commercial low-cost FPGA device, using a hardware/software co-design approach, which comprises a Nios II embedded microprocessor and specific image processing blocks implemented in hardware. The background subtraction algorithm was also used to calibrate the system, allowing for accurate results. Synthesis results show that the system can achieve a throughput of 26.6 processed frames per second and the performance analysis pointed out that the overall architecture achieves a speedup factor of 13.78 in comparison with a PC-based solution running on the real-time operating system xPC Target.
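
    The processing chain, background subtraction followed by a center-of-mass computation, is easy to prototype in software before committing it to FPGA blocks. A NumPy sketch under assumed image sizes and thresholds (a running-average background model is one common hardware-friendly choice, not necessarily the one used here):

      import numpy as np

      def update_background(bg, frame, alpha=0.02):
          """Running-average background model."""
          return (1 - alpha) * bg + alpha * frame

      def detect_centroid(frame, bg, thresh=25):
          """Subtract the background, threshold, and return the foreground
          center of mass in pixel coordinates (None if nothing moved)."""
          mask = np.abs(frame - bg) > thresh
          if not mask.any():
              return None
          ys, xs = np.nonzero(mask)
          return xs.mean(), ys.mean()

      bg = np.zeros((240, 320))
      frame = np.zeros((240, 320))
      frame[100:120, 200:230] = 255.0            # synthetic moving object
      print(detect_centroid(frame, bg))          # approx. (214.5, 109.5)
      bg = update_background(bg, frame)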

  16. Robots to assist daily activities: views of older adults with Alzheimer's disease and their caregivers.

    Science.gov (United States)

    Wang, Rosalie H; Sudhama, Aishwarya; Begum, Momotaz; Huq, Rajibul; Mihailidis, Alex

    2017-01-01

    Robots have the potential to both enable older adults with dementia to perform daily activities with greater independence, and provide support to caregivers. This study explored perspectives of older adults with Alzheimer's disease (AD) and their caregivers on robots that provide stepwise prompting to complete activities in the home. Ten dyads participated: older adults with mild-to-moderate AD and difficulty completing activity steps, and their family caregivers. Older adults were prompted by a tele-operated robot to wash their hands in the bathroom and make a cup of tea in the kitchen. Caregivers observed interactions. Semi-structured interviews were conducted individually. Transcribed interviews were thematically analyzed. Three themes summarized responses to robot interactions: contemplating a future with assistive robots, considering opportunities with assistive robots, and reflecting on implications for social relationships. Older adults expressed opportunities for robots to help in daily activities, were open to the idea of robotic assistance, but did not want a robot themselves. Caregivers identified numerous opportunities and were more open to robots. Several wanted a robot, if available. Positive consequences of robots in caregiving scenarios could include decreased frustration, stress, and relationship strain, and increased social interaction via the robot. A negative consequence could be decreased interaction with caregivers. Few studies have investigated in-depth perspectives of older adults with dementia and their caregivers following direct interaction with an assistive prompting robot. To fulfill the potential of robots, continued dialogue between users and developers, and consideration of robot design and caregiving relationship factors are necessary.

  17. Vision-Based Interest Point Extraction Evaluation in Multiple Environments

    National Research Council Canada - National Science Library

    McKeehan, Zachary D

    2008-01-01

    Computer-based vision is becoming a primary sensor mechanism in many facets of real world 2-D and 3-D applications, including autonomous robotics, augmented reality, object recognition, motion tracking, and biometrics...

  18. 25th Conference on Robotics in Alpe-Adria-Danube Region

    CERN Document Server

    Borangiu, Theodor

    2017-01-01

    This book presents the proceedings of the 25th International Conference on Robotics in Alpe-Adria-Danube Region, RAAD 2016 held in Belgrade, Serbia, on June 30th–July 2nd, 2016. In keeping with the tradition of the event, RAAD 2016 covered all the important areas of research and innovation in new robot designs and intelligent robot control, with papers including Intelligent robot motion control; Robot vision and sensory processing; Novel design of robot manipulators and grippers; Robot applications in manufacturing and services; Autonomous systems, humanoid and walking robots; Human–robot interaction and collaboration; Cognitive robots and emotional intelligence; Medical, human-assistive robots and prosthetic design; Robots in construction and arts, and Evolution, education, legal and social issues of robotics. For the first time in RAAD history, the themes cloud robots, legal and ethical issues in robotics as well as robots in arts were included in the technical program. The book is a valuable resource f...

  19. Visual Detection and Tracking System for a Spherical Amphibious Robot.

    Science.gov (United States)

    Guo, Shuxiang; Pan, Shaowu; Shi, Liwei; Guo, Ping; He, Yanlin; Tang, Kun

    2017-04-15

    With the goal of supporting close-range observation tasks of a spherical amphibious robot, such as ecological observations and intelligent surveillance, a moving target detection and tracking system was designed and implemented in this study. Given the restrictions presented by the amphibious environment and the small-sized spherical amphibious robot, an industrial camera and vision algorithms using adaptive appearance models were adopted to construct the proposed system. To handle the problem of light scattering and absorption in the underwater environment, the multi-scale retinex with color restoration algorithm was used for image enhancement. Given the environmental disturbances in practical amphibious scenarios, the Gaussian mixture model was used to detect moving targets entering the field of view of the robot. A fast compressive tracker with a Kalman prediction mechanism was used to track the specified target. Considering the limited load space and the unique mechanical structure of the robot, the proposed vision system was fabricated with a low power system-on-chip using an asymmetric and heterogeneous computing architecture. Experimental results confirmed the validity and high efficiency of the proposed system. The design presented in this paper is able to meet future demands of spherical amphibious robots in biological monitoring and multi-robot cooperation.
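
    The enhancement step, multi-scale retinex, subtracts a log-domain estimate of local illumination at several Gaussian scales; the colour-restoration term is omitted here for brevity. A sketch with assumed scale values, using scipy's Gaussian filter for the smoothing:

      import numpy as np
      from scipy.ndimage import gaussian_filter

      def single_scale_retinex(channel, sigma):
          """log(image) - log(Gaussian-smoothed image) for one colour channel."""
          channel = channel.astype(np.float64) + 1.0   # avoid log(0)
          return np.log(channel) - np.log(gaussian_filter(channel, sigma))

      def multi_scale_retinex(channel, sigmas=(15, 80, 250)):
          """Average the single-scale outputs over several scales."""
          return np.mean([single_scale_retinex(channel, s) for s in sigmas], axis=0)

      # Typical use on an underwater frame: enhance each channel, then rescale
      # the result back into a displayable range.
      img = np.random.randint(0, 256, (240, 320, 3))
      out = np.stack([multi_scale_retinex(img[..., c]) for c in range(3)], axis=-1)
      out = (out - out.min()) / (out.max() - out.min())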

  20. Visual Detection and Tracking System for a Spherical Amphibious Robot

    Science.gov (United States)

    Guo, Shuxiang; Pan, Shaowu; Shi, Liwei; Guo, Ping; He, Yanlin; Tang, Kun

    2017-01-01

    With the goal of supporting close-range observation tasks of a spherical amphibious robot, such as ecological observations and intelligent surveillance, a moving target detection and tracking system was designed and implemented in this study. Given the restrictions presented by the amphibious environment and the small-sized spherical amphibious robot, an industrial camera and vision algorithms using adaptive appearance models were adopted to construct the proposed system. To handle the problem of light scattering and absorption in the underwater environment, the multi-scale retinex with color restoration algorithm was used for image enhancement. Given the environmental disturbances in practical amphibious scenarios, the Gaussian mixture model was used to detect moving targets entering the field of view of the robot. A fast compressive tracker with a Kalman prediction mechanism was used to track the specified target. Considering the limited load space and the unique mechanical structure of the robot, the proposed vision system was fabricated with a low power system-on-chip using an asymmetric and heterogeneous computing architecture. Experimental results confirmed the validity and high efficiency of the proposed system. The design presented in this paper is able to meet future demands of spherical amphibious robots in biological monitoring and multi-robot cooperation. PMID:28420134

  1. Intelligent robot trends for 1998

    Science.gov (United States)

    Hall, Ernest L.

    1998-10-01

    An intelligent robot is a remarkably useful combination of a manipulator, sensors and controls. The use of these machines in factory automation can improve productivity, increase product quality and improve competitiveness. This paper presents a discussion of recent technical and economic trends. Technically, the machines are faster, cheaper, more repeatable, more reliable and safer. The knowledge base of inverse kinematic and dynamic solutions and intelligent controls is increasing. More attention is being given by industry to robots, vision and motion controls. New areas of usage are emerging for service robots, remote manipulators and automated guided vehicles. Economically, the robotics industry now has a 1.1 billion-dollar market in the U.S. and is growing. Feasibility study results are presented which also show decreasing costs for robots and unaudited healthy rates of return for a variety of robotic applications. However, the road from inspiration to successful application can be long and difficult, often taking decades to achieve a new product. A greater emphasis on mechatronics is needed in our universities. Certainly, more cooperation between government, industry and universities is needed to speed the development of intelligent robots that will benefit industry and society.

  2. 3-D Vision Techniques for Autonomous Vehicles

    Science.gov (United States)

    1988-08-01

    3-D Vision Techniques for Autonomous Vehicles. Martial Hebert, Takeo Kanade, Inso Kweon. CMU-RI-TR-88-12, The Robotics Institute, Carnegie Mellon University, Pittsburgh.

  3. Fast Segmentation of Colour Apple Image under All-Weather Natural Conditions for Vision Recognition of Picking Robots

    Directory of Open Access Journals (Sweden)

    Wei Ji

    2016-02-01

    Full Text Available In order to resolve the poor real-time performance of the normalized cut (Ncut) method in apple vision recognition for picking robots, a fast segmentation method for colour apple images based on the adaptive mean-shift and Ncut methods is proposed in this paper. Firstly, the traditional Ncut method based on pixels is changed into an Ncut method based on regions through adaptive mean-shift initial segmentation. In this way, the number of peaks and edges in the image is dramatically reduced and the computation speed is improved. Secondly, the image is divided into region maps by extracting the R-B colour feature, which not only reduces the number of regions, but also to some extent overcomes the effect of illumination. On this basis, every region map is represented by a region point, so an undirected graph of the R-B colour grey-level feature is obtained. Finally, regarding the undirected graph as the input of Ncut, we construct the weight matrix W from the region points and determine the number of clusters based on the decision-theoretic rough set. The adaptive clustering segmentation can then be implemented by the Ncut algorithm. Experimental results show that the maximum segmentation error is 3% and the average recognition time is less than 0.7 s, which can meet the requirements of a real-time picking robot.
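
    The R-B colour feature driving the region maps can be illustrated as a per-pixel red-minus-blue difference with a statistical threshold; the paper's actual pipeline (adaptive mean-shift regions, then Ncut clustering) is much richer than this sketch, and the threshold rule here is an assumption.

      import numpy as np

      def rb_feature_mask(img_rgb, k=1.0):
          """Keep pixels whose R-B value exceeds mean + k*std of the image;
          red apples stand out strongly on this feature."""
          rb = img_rgb[..., 0].astype(np.int16) - img_rgb[..., 2].astype(np.int16)
          return rb > rb.mean() + k * rb.std()

      img = np.random.randint(0, 256, (240, 320, 3), dtype=np.uint8)
      mask = rb_feature_mask(img)   # boolean apple/background map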

  4. Intelligent manipulation technique for multi-branch robotic systems

    Science.gov (United States)

    Chen, Alexander Y. K.; Chen, Eugene Y. S.

    1990-01-01

    New analytical development in kinematics planning is reported. The INtelligent KInematics Planner (INKIP) consists of the kinematics spline theory and the adaptive logic annealing process. Also, a novel framework of robot learning mechanism is introduced. The FUzzy LOgic Self Organized Neural Networks (FULOSONN) integrates fuzzy logic in commands, control, searching, and reasoning, the embedded expert system for nominal robotics knowledge implementation, and the self organized neural networks for the dynamic knowledge evolutionary process. Progress on the mechanical construction of SRA Advanced Robotic System (SRAARS) and the real time robot vision system is also reported. A decision was made to incorporate the Local Area Network (LAN) technology in the overall communication system.

  5. Behaviour based Mobile Robot Navigation Technique using AI System: Experimental Investigation on Active Media Pioneer Robot

    Directory of Open Access Journals (Sweden)

    S. Parasuraman, V.Ganapathy

    2012-10-01

    Full Text Available A key issue in the research of an autonomous robot is the design and development of the navigation technique that enables the robot to navigate in a real world environment. In this research, the issues investigated and methodologies established include (a) designing of the individual behaviors and behavior rule selection using an Alpha-level fuzzy logic system, (b) designing of the controller, which maps the sensor inputs to the motor outputs through a model-based Fuzzy Logic Inference System, and (c) formulation of the decision-making process by using the Alpha-level fuzzy logic system. The proposed method is applied to the Active Media Pioneer Robot and the results are discussed and compared with the most accepted methods. This approach provides a formal methodology for representing and implementing the human expert heuristic knowledge and perception-based action in mobile robot navigation. In this approach, the operational strategies of the human expert driver are transferred via fuzzy logic to the robot navigation in the form of a set of simple conditional statements composed of linguistic variables. Keywords: mobile robot, behavior-based control, fuzzy logic, alpha-level fuzzy logic, obstacle avoidance behavior, goal seek behavior

  6. Terpsichore. ENEA's autonomous robotics project; Progetto Tersycore, la robotica autonoma

    Energy Technology Data Exchange (ETDEWEB)

    Taraglio, S; Zanela, S; Santini, A; Nanni, V [ENEA, Centro Ricerche Casaccia, Rome (Italy). Div. Robotica e Informatica Avanzata

    1999-10-01

    The article presents some results of the Terpsichore project, aimed at developing and testing algorithms and applications for autonomous robotics. Four applications are described: dynamic mapping of a building's interior through the use of ultrasonic sensors; visual driving of an autonomous robot via a neural network controller; a neural network-based stereo vision system that steers a robot through unknown indoor environments; and the evolution of intelligent behaviours via the genetic algorithm approach.

  7. Robot and Human Surface Operations on Solar System Bodies

    Science.gov (United States)

    Weisbin, C. R.; Easter, R.; Rodriguez, G.

    2001-01-01

    This paper presents a comparison of robot and human surface operations on solar system bodies. The topics include: 1) Long Range Vision of Surface Scenarios; 2) Human and Robots Complement Each Other; 3) Respective Human and Robot Strengths; 4) Need More In-Depth Quantitative Analysis; 5) Projected Study Objectives; 6) Analysis Process Summary; 7) Mission Scenarios Decompose into Primitive Tasks; 8) Features of the Projected Analysis Approach; and 9) The "Getting There Effect" is a Major Consideration. This paper is in viewgraph form.

  8. The Active Pupil: Pupil size in attention, working memory, and active vision

    OpenAIRE

    Mathôt, Sebastiaan

    2015-01-01

    Slides for the following talk: Mathôt, S. (2015, June). The Active Pupil: Pupil Size in Attention, Working Memory, and Active Vision. Talk presented at the Laboratoire de Psychologie de la Perception, Paris, France.

  9. Experiences with a Barista Robot, FusionBot

    Science.gov (United States)

    Limbu, Dilip Kumar; Tan, Yeow Kee; Wong, Chern Yuen; Jiang, Ridong; Wu, Hengxin; Li, Liyuan; Kah, Eng Hoe; Yu, Xinguo; Li, Dong; Li, Haizhou

    In this paper, we describe the implemented service robot, called FusionBot. The goal of this research is to explore and demonstrate the utility of an interactive service robot in a smart home environment, thereby improving the quality of human life. The robot has four main features: 1) speech recognition, 2) object recognition, 3) object grabbing and fetching and 4) communication with a smart coffee machine. Its software architecture employs a multimodal dialogue system that integrates different components, including a spoken dialog system, vision understanding, navigation and a smart device gateway. In the experiments conducted during the TechFest 2008 event, the FusionBot successfully demonstrated that it could autonomously serve coffee to visitors at their request. Preliminary survey results indicate that the robot has the potential not only to aid general robotics research but also to contribute towards the long-term goal of intelligent service robotics in smart home environments.

  10. CLARAty: Challenges and Steps Toward Reusable Robotic Software

    Directory of Open Access Journals (Sweden)

    Richard Madison

    2008-11-01

    Full Text Available We present in detail some of the challenges in developing reusable robotic software. We base that on our experience in developing the CLARAty robotics software, which is a generic object-oriented framework used for the integration of new algorithms in the areas of motion control, vision, manipulation, locomotion, navigation, localization, planning and execution. CLARAty was adapted to a number of heterogeneous robots with different mechanisms and hardware control architectures. In this paper, we also describe how we addressed some of these challenges in the development of the CLARAty software.

  11. CLARAty: Challenges and Steps toward Reusable Robotic Software

    Directory of Open Access Journals (Sweden)

    Issa A.D. Nesnas

    2006-03-01

    Full Text Available We present in detail some of the challenges in developing reusable robotic software. We base that on our experience in developing the CLARAty robotics software, which is a generic object-oriented framework used for the integration of new algorithms in the areas of motion control, vision, manipulation, locomotion, navigation, localization, planning and execution. CLARAty was adapted to a number of heterogeneous robots with different mechanisms and hardware control architectures. In this paper, we also describe how we addressed some of these challenges in the development of the CLARAty software.

  12. Communicating with Teams of Cooperative Robots

    National Research Council Canada - National Science Library

    Perzanowski, D; Schultz, A. C; Adams, W; Bugajska, M; Marsh, E; Trafton, G; Brock, D; Skubic, M; Abramson, M

    2002-01-01

    .... For this interface, they have elected to use natural language and gesture. Gestures can be either natural gestures perceived by a vision system installed on the robot, or they can be made by using a stylus on a Personal Digital Assistant...

  13. Conceptual spatial representations for indoor mobile robots

    OpenAIRE

    Zender, Henrik; Mozos, Oscar Martinez; Jensfelt, Patric; Kruijff, Geert-Jan M.; Burgard, Wolfram

    2008-01-01

    We present an approach for creating conceptual representations of human-made indoor environments using mobile robots. The concepts refer to spatial and functional properties of typical indoor environments. Following findings in cognitive psychology, our model is composed of layers representing maps at different levels of abstraction. The complete system is integrated in a mobile robot endowed with laser and vision sensors for place and object recognition. The system also incorporates...

  14. 4th IFToMM International Symposium on Robotics and Mechatronics

    CERN Document Server

    Laribi, Med; Gazeau, Jean-Pierre

    2016-01-01

    This volume contains papers that have been selected after review for oral presentation at ISRM 2015, the Fourth IFToMM International Symposium on Robotics and Mechatronics, held in Poitiers, France, 23-24 June 2015. These papers provide a vision of the evolution of the disciplines of robotics and mechatronics, including but not limited to: mechanism design; modeling and simulation; kinematics and dynamics of multibody systems; control methods; navigation and motion planning; sensors and actuators; bio-robotics; micro/nano-robotics; complex robotic systems; walking machines; humanoids; parallel kinematic structures: analysis and synthesis; smart devices; new designs, applications and prototypes. The book can be used by researchers and engineers in the relevant areas of robotics and mechatronics.

  15. Towards Plug-n-Play robot guidance: Advanced 3D estimation and pose estimation in Robotic applications

    DEFF Research Database (Denmark)

    Sølund, Thomas

    and move objects, which are physically located at the same positions. In order to place objects in the same position each time, custom-made mechanical fixtures and aligners are constructed to ensure that objects do not move. It is expensive to design and build these fixtures, and it is difficult to quickly...... change to a novel task. In some cases, where objects are placed in bins and boxes, it is not possible to position the objects in the same location each time. To avoid designing expensive mechanical solutions, and to be able to pick objects from boxes and bins, a sensor is necessary to guide the robot. Today...... while the robot motion programming is easily handled with the new collaborative robots. This thesis deals with robot vision technologies and how these can be made easier for production workers to program, in order to get robots to recognize and compute the position of objects in industry. This thesis...

  16. Visual odometry in natural environments for mobile robots; Odométrie visuelle en milieu naturel pour les robots mobiles

    OpenAIRE

    Duperal, B.

    2013-01-01

    As part of this internship, the objective is to study the performance of artificial vision in allowing a mobile robot to localize itself, using either a camera or a stereovision system, in a natural environment (trees, crops, agricultural fields, buildings, ...), following losses of the GPS signal. Robot localization by artificial vision requires matching operations on invariant points (landmark detection, particular zones on ...

  17. CANINE: a robotic mine dog

    Science.gov (United States)

    Stancil, Brian A.; Hyams, Jeffrey; Shelley, Jordan; Babu, Kartik; Badino, Hernán.; Bansal, Aayush; Huber, Daniel; Batavia, Parag

    2013-01-01

    Neya Systems, LLC competed in the CANINE program sponsored by the U.S. Army Tank Automotive Research Development and Engineering Center (TARDEC), which culminated in a competition held at Fort Benning as part of the 2012 Robotics Rodeo. As part of this program, we developed a robot with the capability to learn and recognize the appearance of target objects, conduct an area search amid distractor objects and obstacles, and relocate the target object, in the same way that mine dogs and sentry dogs are used within military contexts for exploration and threat detection. Neya teamed with the Robotics Institute at Carnegie Mellon University to develop vision-based solutions for probabilistic target learning and recognition. In addition, we used a Mission Planning and Management System (MPMS) to orchestrate complex search and retrieval tasks using a general set of modular autonomous services relating to robot mobility, perception and grasping.

  18. National project : advanced robot for nuclear power plant

    International Nuclear Information System (INIS)

    Tsunemi, T.; Takehara, K.; Hayashi, T.; Okano, H.; Sugiyama, S.

    1993-01-01

    The national project 'Advanced Robot' has been promoted by the Agency of Industrial Science and Technology, MITI, for eight years since 1983. The robot for a nuclear plant is one of the projects; it is a prototype intelligent robot that has a three-dimensional vision system to generate an environmental model, a quadrupedal walking mechanism to work on stairs, and four-fingered manipulators to disassemble a valve with a hand tool. Many basic technologies, such as actuators, tactile sensors, and autonomous control, have progressed to a high level. The prototype robot succeeded functionally in an official demonstration in 1990. Further refinement, such as downsizing and higher intelligence, is necessary to realize a commercial robot, while the basic technologies are useful for improving conventional robots and systems. This paper presents application studies on the advanced robot technologies. (author)

  19. An automated miniature robotic vehicle inspection system

    Energy Technology Data Exchange (ETDEWEB)

    Dobie, Gordon; Summan, Rahul; MacLeod, Charles; Pierce, Gareth; Galbraith, Walter [Centre for Ultrasonic Engineering, University of Strathclyde, 204 George Street, Glasgow, G1 1XW (United Kingdom)

    2014-02-18

    A novel, autonomous reconfigurable robotic inspection system for quantitative NDE mapping is presented. The system consists of a fleet of wireless (802.11g) miniature robotic vehicles, each approximately 175 × 125 × 85 mm with magnetic wheels that enable them to inspect industrial structures such as storage tanks, chimneys and large diameter pipe work. The robots carry one of a number of payloads including a two channel MFL sensor, a 5 MHz dry coupled UT thickness wheel probe and a machine vision camera that images the surface. The system creates an NDE map of the structure overlaying results onto a 3D model in real time. The authors provide an overview of the robot design, data fusion algorithms (positioning and NDE) and visualization software.

  20. An automated miniature robotic vehicle inspection system

    International Nuclear Information System (INIS)

    Dobie, Gordon; Summan, Rahul; MacLeod, Charles; Pierce, Gareth; Galbraith, Walter

    2014-01-01

    A novel, autonomous reconfigurable robotic inspection system for quantitative NDE mapping is presented. The system consists of a fleet of wireless (802.11g) miniature robotic vehicles, each approximately 175 × 125 × 85 mm with magnetic wheels that enable them to inspect industrial structures such as storage tanks, chimneys and large diameter pipe work. The robots carry one of a number of payloads including a two channel MFL sensor, a 5 MHz dry coupled UT thickness wheel probe and a machine vision camera that images the surface. The system creates an NDE map of the structure overlaying results onto a 3D model in real time. The authors provide an overview of the robot design, data fusion algorithms (positioning and NDE) and visualization software.

  1. Low computation vision-based navigation for a Martian rover

    Science.gov (United States)

    Gavin, Andrew S.; Brooks, Rodney A.

    1994-01-01

    Construction and design details of the Mobot Vision System, a small, self-contained, mobile vision system, are presented. This system uses the view from the top of a small, roving, robotic vehicle to supply data that is processed in real-time to safely navigate the surface of Mars. A simple, low-computation algorithm for constructing a 3-D navigational map of the Martian environment to be used by the rover is discussed.

  2. Design of An Electronic Narrator on Assistant Robot for Blind People

    Directory of Open Access Journals (Sweden)

    Ardiansyah Rizqi Andry

    2016-01-01

    Full Text Available Many personal service robots have been developed to help blind people in daily life with tasks such as room cleaning, navigation, object finding, reading, and other activities. In this context, the present work focuses on the development of an image-to-speech application for the blind. The project is called Design of An Electronic Narrator on Assistant Robot for Blind People, and its final purpose is the design of an electronic narrator application on a personal service robot that narrates text from a book, magazine, sheet of paper, etc., to a blind person. To achieve that, a Raspberry Pi board, a light sensor, the OpenCV computer vision library, the Tesseract OCR (Optical Character Recognition) library, and the eSpeak Text-to-Speech Synthesizer (TTS) library are integrated, which enables the blind person to hear a narration of the text from a book, magazine, sheet, etc.
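
    A minimal sketch of such an image-to-speech pipeline, assuming the pytesseract wrapper for Tesseract OCR, OpenCV, and the eSpeak command-line synthesizer are installed; the preprocessing steps and file name are illustrative, not the paper's exact chain.

        import subprocess
        import cv2
        import pytesseract

        def narrate_page(image_path):
            # Load the page image and convert to grayscale for more robust OCR.
            image = cv2.imread(image_path)
            gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
            # Otsu binarization suppresses uneven lighting before recognition.
            _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
            # Tesseract OCR extracts the text of the page.
            text = pytesseract.image_to_string(binary)
            if text.strip():
                # eSpeak reads the recognized text aloud.
                subprocess.run(["espeak", text])

        narrate_page("page.jpg")  # hypothetical input image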

  3. Automatic turbot fish cutting using machine vision

    OpenAIRE

    Martín Rodríguez, Fernando; Barral Martínez, Mónica

    2015-01-01

    This paper is about the design of an automated machine to cut turbot fish specimens. Machine vision is a key part of this project, as it is used to compute a cutting curve for the specimen's head. This task cannot be carried out by mechanical means. Machine vision is used to detect the head boundary, and a robot is used to cut the head. Afterwards, mechanical systems are used to slice the fish to get an easy presentation for the end consumer (as fish fillets that can be easily marketed ...

  4. Does vision work well enough for industry?

    DEFF Research Database (Denmark)

    Hagelskjær, Frederik; Krüger, Norbert; Buch, Anders Glent

    2018-01-01

    A multitude of pose estimation algorithms has been developed in the last decades, and many proprietary computer vision packages exist which can simplify the setup process. Despite this, pose estimation still lacks the ease of use that robots have attained in the industry. The statement "vision does not work" is still not uncommon in the industry, even from integrators. This points to difficulties in setting up solutions in industrial applications. In this paper, we analyze and investigate the current usage of pose estimation algorithms. A questionnaire was sent out to both university and industry...

  5. Vision guided robot bin picking of cylindrical objects

    DEFF Research Database (Denmark)

    Christensen, Georg Kronborg; Dyhr-Nielsen, Carsten

    1997-01-01

    In order to achieve increased flexibility on robotic production lines, an investigation of the robot bin-picking problem is presented. In the paper, the limitations related to previous attempts to solve the problem are pointed out, and a set of innovative methods is presented. The main elements...

  6. An Automatic Assembling System for Sealing Rings Based on Machine Vision

    Directory of Open Access Journals (Sweden)

    Mingyu Gao

    2017-01-01

    Full Text Available In order to grab and place the sealing rings of a battery lid quickly and accurately, an automatic assembling system for sealing rings based on machine vision is developed in this paper. The whole system is composed of the light sources, cameras, industrial control units, and a 4-degree-of-freedom industrial robot. Specifically, the sealing rings are recognized and located automatically by the machine vision module. The industrial robot is then controlled to grab the sealing rings dynamically, under the joint work of multiple control units and visual feedback. Furthermore, the coordinates of the fast-moving battery lid are tracked by the machine vision module. Finally, the sealing rings are placed on the sealing ports of the battery lid accurately and automatically. Experimental results demonstrate that the proposed system can grab the sealing rings and place them on the sealing port of the fast-moving battery lid successfully. More importantly, the proposed system can noticeably improve the efficiency of the battery production line.
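
    A minimal sketch of how ring-shaped parts can be located with OpenCV's Hough circle transform; the file name, radii, and thresholds are assumptions, not the system's actual parameters.

        import cv2

        image = cv2.imread("lid.png", cv2.IMREAD_GRAYSCALE)
        blurred = cv2.medianBlur(image, 5)  # suppress sensor noise before detection
        circles = cv2.HoughCircles(
            blurred, cv2.HOUGH_GRADIENT, dp=1, minDist=40,
            param1=100, param2=30, minRadius=15, maxRadius=40)
        if circles is not None:
            for x, y, r in circles[0]:
                # Each (x, y) is a candidate sealing-ring centre to send to the robot.
                print(f"ring at ({x:.1f}, {y:.1f}), radius {r:.1f}px")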

  7. Visual servo simulation of EAST articulated maintenance arm robot

    Energy Technology Data Exchange (ETDEWEB)

    Yang, Yang, E-mail: yangyang@ipp.ac.cn [Institute of Plasma Physics, Chinese Academy of Sciences, 350 Shushanhu Rd, Hefei, Anhui (China); Song, Yuntao; Pan, Hongtao; Cheng, Yong; Feng, Hansheng [Institute of Plasma Physics, Chinese Academy of Sciences, 350 Shushanhu Rd, Hefei, Anhui (China); Wu, Huapeng [Lappeenranta University of Technology, Skinnarilankatu 34, Lappeenranta (Finland)

    2016-03-15

    For the inspection and light-duty maintenance of the vacuum vessel in the EAST tokamak, a serial robot arm, called the EAST articulated maintenance arm (EAMA), is developed. Due to the 9-m-long cantilever arm, the large flexibility of the EAMA robot introduces a problem in accurate positioning. This article presents an autonomous robot control approach to cope with the positioning problem: a visual servo scheme in the context of tile grasping for the EAMA robot. In the experiments, the proposed method was implemented in a simulation environment to position and track a target graphite tile with the EAMA robot. As a result, the proposed visual control scheme can successfully drive the EAMA robot to approach and track the target tile until the robot reaches the desired position. Furthermore, the functionality of the simulation software presented in this paper is proved to be suitable for the development of robotic and computer vision applications.
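
    A minimal sketch of the classic image-based visual servo (IBVS) control law on which such schemes are commonly built; the interaction matrix L and the gain are assumed inputs, and this is not necessarily the paper's exact formulation.

        import numpy as np

        def visual_servo_step(features, desired, L, gain=0.5):
            """One step of an image-based visual servo law: return a velocity command."""
            error = features - desired  # feature error in image space
            # Classic proportional law v = -lambda * L^+ * e, where L^+ is the
            # pseudo-inverse of the interaction (image Jacobian) matrix.
            return -gain * np.linalg.pinv(L) @ error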

  8. Visual servo simulation of EAST articulated maintenance arm robot

    International Nuclear Information System (INIS)

    Yang, Yang; Song, Yuntao; Pan, Hongtao; Cheng, Yong; Feng, Hansheng; Wu, Huapeng

    2016-01-01

    For the inspection and light-duty maintenance of the vacuum vessel in the EAST tokamak, a serial robot arm, called the EAST articulated maintenance arm (EAMA), is developed. Due to the 9-m-long cantilever arm, the large flexibility of the EAMA robot introduces a problem in accurate positioning. This article presents an autonomous robot control approach to cope with the positioning problem: a visual servo scheme in the context of tile grasping for the EAMA robot. In the experiments, the proposed method was implemented in a simulation environment to position and track a target graphite tile with the EAMA robot. As a result, the proposed visual control scheme can successfully drive the EAMA robot to approach and track the target tile until the robot reaches the desired position. Furthermore, the functionality of the simulation software presented in this paper is proved to be suitable for the development of robotic and computer vision applications.

  9. Smart mobile robot system for rubbish collection

    Science.gov (United States)

    Ali, Mohammed A. H.; Sien Siang, Tan

    2018-03-01

    This paper records the research and procedures of developing a smart mobile robot with a detection system to collect rubbish. The objective of this paper is to design a mobile robot that can detect and recognize medium-size rubbish such as drink cans. Besides that, the objective is also to design a mobile robot with the ability to estimate the position of rubbish relative to the robot. In addition, the mobile robot is also able to approach the rubbish based on its estimated position. This paper explains the types of image processing, detection and recognition methods, and image filters. This project implements the RGB subtraction method as the primary detection system. Other than that, an algorithm for distance measurement based on the image plane is implemented in this project. This project is limited to using a computer webcam as the sensor. Secondly, the robot is only able to approach the nearest rubbish within the same camera view, and only rubbish whose body contains distinct RGB colour components.
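
    A minimal sketch of the RGB subtraction idea: emphasise one colour channel by subtracting the others, then threshold to obtain a rubbish-candidate mask. The channel choice, threshold, and file name are illustrative assumptions.

        import cv2
        import numpy as np

        frame = cv2.imread("scene.png")
        b, g, r = cv2.split(frame.astype(np.int16))
        # A predominantly red object stands out when green and blue are subtracted.
        red_score = np.clip(r - (g + b) // 2, 0, 255).astype(np.uint8)
        _, mask = cv2.threshold(red_score, 60, 255, cv2.THRESH_BINARY)
        # The lowest white row approximates the nearest object on the ground plane,
        # which is the cue a distance-from-image-plane estimate relies on.
        rows = np.where(mask.any(axis=1))[0]
        if rows.size:
            print("nearest candidate at image row", rows.max())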

  10. Human-Robot Control Strategies for the NASA/DARPA Robonaut

    Science.gov (United States)

    Diftler, M. A.; Culbert, Chris J.; Ambrose, Robert O.; Huber, E.; Bluethmann, W. J.

    2003-01-01

    The Robotic Systems Technology Branch at the NASA Johnson Space Center (JSC) is currently developing robot systems to reduce the Extra-Vehicular Activity (EVA) and planetary exploration burden on astronauts. One such system, Robonaut, is capable of interfacing with external Space Station systems that currently have only human interfaces. Robonaut is human scale, anthropomorphic, and designed to approach the dexterity of a space-suited astronaut. Robonaut can perform numerous human rated tasks, including actuating tether hooks, manipulating flexible materials, soldering wires, grasping handrails to move along space station mockups, and mating connectors. More recently, developments in autonomous control and perception for Robonaut have enabled dexterous, real-time man-machine interaction. Robonaut is now capable of acting as a practical autonomous assistant to the human, providing and accepting tools by reacting to body language. A versatile, vision-based algorithm for matching range silhouettes is used for monitoring human activity as well as estimating tool pose.

  11. Analysis and optimization on in-vessel inspection robotic system for EAST

    International Nuclear Information System (INIS)

    Zhang, Weijun; Zhou, Zeyu; Yuan, Jianjun; Du, Liang; Mao, Ziming

    2015-01-01

    Since China successfully built her first Experimental Advanced Superconducting TOKAMAK (EAST) several years ago, great interest and demand have been increasing in robotic in-vessel inspection/operation systems, by which observation of in-vessel physical phenomena, collection of visual information, 3D mapping and localization, and even maintenance become possible. However, implementing a practical and robust robotic system raises many challenges, due to complex constraints and expectations, e.g., the high remanent working temperature (100 °C) and vacuum (10⁻³ Pa) environment even in the rest interval between plasma discharge experiments, close-up and precise inspection, and operation efficiency, besides the general kinematic requirement of the D-shaped irregular vessel. In this paper we propose an upgraded robotic system with a redundant degrees-of-freedom (DOF) manipulator combined with a binocular vision system at the tip and a virtual reality system. A comprehensive comparison and discussion are given on the necessity and main function of the binocular vision system, path planning for inspection, fast localization, inspection efficiency and success rate in time, optimization of the kinematic configuration, and the possibility of an underactuated mechanism. A detailed design, implementation, and experiments of the binocular vision system, together with the recent development progress of the whole robotic system, are reported in the later part of the paper, while future work and expectations are described in the end.

  12. iPathology: Robotic Applications and Management of Plants and Plant Diseases

    Directory of Open Access Journals (Sweden)

    Yiannis Ampatzidis

    2017-06-01

    Full Text Available The rapid development of new technologies and the changing landscape of the online world (e.g., the Internet of Things (IoT), the Internet of All, cloud-based solutions) provide a unique opportunity for developing automated and robotic systems for urban farming, agriculture, and forestry. Technological advances in machine vision, global positioning systems, laser technologies, actuators, and mechatronics have enabled the development and implementation of robotic systems and intelligent technologies for precision agriculture. Herein, we present and review robotic applications in plant pathology and management, and emerging agricultural technologies for intra-urban agriculture. Greenhouse advanced management systems and technologies have been greatly developed in recent years, integrating IoT and WSN (Wireless Sensor Networks). Machine learning, machine vision, and AI (Artificial Intelligence) have been utilized and applied in agriculture for automated and robotic farming. Intelligent technologies, using machine vision/learning, have been developed not only for planting, irrigation, weeding (to some extent), pruning, and harvesting, but also for plant disease detection and identification. However, plant disease detection still represents an intriguing challenge, for both abiotic and biotic stress. Many recognition methods and technologies for identifying plant disease symptoms have been successfully developed; still, the majority of them require a controlled environment for data acquisition to avoid false positives. Machine learning methods (e.g., deep and transfer learning) present promising results for improving image processing and plant symptom identification. Nevertheless, diagnostic specificity is a challenge for microorganism control and should drive the development of mechatronics and robotic solutions for disease management.

  13. Pointing with a One-Eyed Cursor for Supervised Training in Minimally Invasive Robotic Surgery

    DEFF Research Database (Denmark)

    Kibsgaard, Martin; Kraus, Martin

    2016-01-01

    Pointing in the endoscopic view of a surgical robot is a natural and efficient way for instructors to communicate with trainees in robot-assisted minimally invasive surgery. However, pointing in a stereo-endoscopic view can be limited by problems such as video delay, double vision, arm fatigue......-day training units in robot-assisted minimally invasive surgery on anaesthetised pigs.

  14. Rough terrain motion planning for actively reconfigurable mobile robots

    International Nuclear Information System (INIS)

    Brunner, Michael

    2015-01-01

    In the aftermath of the Tohoku earthquake and the nuclear meltdown at the power plant of Fukushima Daiichi in 2011, reconfigurable robots like the iRobot Packbot were deployed. Instead of humans, the robots were used to investigate contaminated areas. Other incidents are the two major earthquakes in Northern Italy in May 2012. Besides many casualties, a large number of historical buildings was severely damaged. Due to the imminent danger of collapse, it was too dangerous for rescue personnel to enter many of the buildings. Therefore, the sites were inspected by reconfigurable robots, which are able to traverse the rubble and debris of the partially destroyed buildings. This thesis develops a navigation system enabling wheeled and tracked robots to safely traverse rough terrain and challenging structures. It consists of a planning mechanism and a controller. The focus of this thesis, however, is on the contribution to motion planning. The planning scheme employs a hierarchical approach to motion planning for actively reconfigurable robots in rough environments. Using a map of the environment, the algorithm estimates the traversability under the consideration of uncertainties. Based on this analysis, an initial path search determines an approximate solution with respect to the robot's operating limits. Subsequently, a detailed planning step refines the initial path where it is required. The refinement step considers the robot's actuators and stability in addition to the quantities of the first search. Determining the robot-terrain interaction is very important in rough terrain. This thesis presents two path refinement approaches: a deterministic and a randomized approach. The experimental evaluation investigates the separate components of the planning scheme, the robot-terrain interaction for instance. In simulation as well as in real world experiments the evaluation demonstrates the necessity of such a planning algorithm in rough terrain and it provides

  15. Rough terrain motion planning for actively reconfigurable mobile robots

    Energy Technology Data Exchange (ETDEWEB)

    Brunner, Michael

    2015-02-05

    In the aftermath of the Tohoku earthquake and the nuclear meltdown at the power plant of Fukushima Daiichi in 2011, reconfigurable robots like the iRobot Packbot were deployed. Instead of humans, the robots were used to investigate contaminated areas. Other incidents are the two major earthquakes in Northern Italy in May 2012. Besides many casualties, a large number of historical buildings was severely damaged. Due to the imminent danger of collapse, it was too dangerous for rescue personnel to enter many of the buildings. Therefore, the sites were inspected by reconfigurable robots, which are able to traverse the rubble and debris of the partially destroyed buildings. This thesis develops a navigation system enabling wheeled and tracked robots to safely traverse rough terrain and challenging structures. It consists of a planning mechanism and a controller. The focus of this thesis, however, is on the contribution to motion planning. The planning scheme employs a hierarchical approach to motion planning for actively reconfigurable robots in rough environments. Using a map of the environment, the algorithm estimates the traversability under the consideration of uncertainties. Based on this analysis, an initial path search determines an approximate solution with respect to the robot's operating limits. Subsequently, a detailed planning step refines the initial path where it is required. The refinement step considers the robot's actuators and stability in addition to the quantities of the first search. Determining the robot-terrain interaction is very important in rough terrain. This thesis presents two path refinement approaches: a deterministic and a randomized approach. The experimental evaluation investigates the separate components of the planning scheme, the robot-terrain interaction for instance. In simulation as well as in real world experiments the evaluation demonstrates the necessity of such a planning algorithm in rough terrain and it provides

  16. Natural Tasking of Robots Based on Human Interaction Cues

    Science.gov (United States)

    2005-06-01


  17. Autonomous stair-climbing with miniature jumping robots.

    Science.gov (United States)

    Stoeter, Sascha A; Papanikolopoulos, Nikolaos

    2005-04-01

    The problem of vision-guided control of miniature mobile robots is investigated. Untethered mobile robots with small physical dimensions of around 10 cm or less do not permit powerful onboard computers because of size and power constraints. These challenges have, in the past, reduced the functionality of such devices to that of a complex remote control vehicle with fancy sensors. With the help of a computationally more powerful entity such as a larger companion robot, the control loop can be closed. Using the miniature robot's video transmission or that of an observer to localize it in the world, control commands can be computed and relayed to the inept robot. The result is a system that exhibits autonomous capabilities. The framework presented here solves the problem of climbing stairs with the miniature Scout robot. The robot's unique locomotion mode, the jump, is employed to hop one step at a time. Methods for externally tracking the Scout are developed. A large number of real-world experiments are conducted and the results discussed.

  18. 30 Years of Robotic Surgery.

    Science.gov (United States)

    Leal Ghezzi, Tiago; Campos Corleta, Oly

    2016-10-01

    The idea of reproducing himself with the use of a mechanical robot structure has been in man's imagination for the last 3000 years. However, the use of robots in medicine has only 30 years of history. The application of robots in surgery originates from the need of modern man to achieve two goals: telepresence and the performance of repetitive and accurate tasks. The first "robot surgeon" used on a human patient was the PUMA 200 in 1985. In the 1990s, scientists developed the concept of the "master-slave" robot, which consisted of a robot with remote manipulators controlled by a surgeon at a surgical workstation. Despite the lack of force and tactile feedback, technical advantages of robotic surgery, such as 3D vision, stable and magnified image, EndoWrist instruments, physiologic tremor filtering, and motion scaling, have been considered fundamental to overcome many of the limitations of laparoscopic surgery. Since the approval of the da Vinci® robot by international agencies, American, European, and Asian surgeons have proved its feasibility and safety for the performance of many different robot-assisted surgeries. Comparative studies of robotic and laparoscopic surgical procedures in general surgery have shown similar results with regard to perioperative, oncological, and functional outcomes. However, higher costs and lack of haptic feedback represent the major limitations of current robotic technology to become the standard technique of minimally invasive surgery worldwide. Therefore, the future of robotic surgery involves cost reduction, development of new platforms and technologies, creation and validation of curricula and virtual simulators, and the conduction of randomized clinical trials to determine the best applications of robotics.

  19. Automating the Incremental Evolution of Controllers for Physical Robots

    DEFF Research Database (Denmark)

    Faina, Andres; Jacobsen, Lars Toft; Risi, Sebastian

    2017-01-01

    the evolution of digital objects.…” The work presented here investigates how fully autonomous evolution of robot controllers can be realized in hardware, using an industrial robot and a marker-based computer vision system. In particular, this article presents an approach to automate the reconfiguration...... of the test environment and shows that it is possible, for the first time, to incrementally evolve a neural robot controller for different obstacle avoidance tasks with no human intervention. Importantly, the system offers a high level of robustness and precision that could potentially open up the range...

  20. Robotic identification of kinesthetic deficits after stroke.

    Science.gov (United States)

    Semrau, Jennifer A; Herter, Troy M; Scott, Stephen H; Dukelow, Sean P

    2013-12-01

    Kinesthesia, the sense of body motion, is essential to proper control and execution of movement. Despite its importance for activities of daily living, no current clinical measures can objectively measure kinesthetic deficits. The goal of this study was to use robotic technology to quantify prevalence and severity of kinesthetic deficits of the upper limb poststroke. Seventy-four neurologically intact subjects and 113 subjects with stroke (62 left-affected, 51 right-affected) performed a robot-based kinesthetic matching task with vision occluded. The robot moved the most affected arm at a preset speed, direction, and magnitude. Subjects were instructed to mirror-match the movement with their opposite arm (active arm). A large number of subjects with stroke were significantly impaired on measures of kinesthesia. We observed impairments in ability to match movement direction (69% and 49% impaired for left- and right-affected subjects, respectively) and movement magnitude (42% and 31%). We observed impairments to match movement speed (32% and 27%) and increased response latencies (48% and 20%). Movement direction errors and response latencies were related to clinical measures of function, motor recovery, and dexterity. Using a robotic approach, we found that 61% of acute stroke survivors (n=69) had kinesthetic deficits. Additionally, these deficits were highly related to existing clinical measures, suggesting the importance of kinesthesia in day-to-day function. Our methods allow for more sensitive, accurate, and objective identification of kinesthetic deficits after stroke. With this information, we can better inform clinical treatment strategies to improve poststroke rehabilitative care and outcomes.

  1. 24th International Conference on Robotics in Alpe-Adria-Danube Region

    CERN Document Server

    2016-01-01

    This volume includes the Proceedings of the 24th International Conference on Robotics in Alpe-Adria-Danube Region, RAAD 2015, which was held in Bucharest, Romania, on May 27-29, 2015. The Conference brought together academic and industry researchers in robotics from the 11 countries affiliated to the Alpe-Adria-Danube space: Austria, Croatia, Czech Republic, Germany, Greece, Hungary, Italy, Romania, Serbia, Slovakia and Slovenia, and their worldwide partners. According to its tradition, RAAD 2015 covered all important areas of research, development and innovation in robotics, including new trends such as: bio-inspired and cognitive robots, visual servoing of robot motion, human-robot interaction, and personal robots for ambient assisted living. The accepted papers have been grouped in nine sessions: Robot integration in industrial applications; Grasping analysis, dexterous grippers and component design; Advanced robot motion control; Robot vision and sensory control; Human-robot interaction and collaboration;...

  2. ENVIRONMENT INDEPENDENT DIRECTIONAL GESTURE RECOGNITION TECHNIQUE FOR ROBOTS USING MULTIPLE DATA FUSION

    Directory of Open Access Journals (Sweden)

    Kishore Abishek

    2013-10-01

    Full Text Available A technique is presented here for directional gesture recognition by robots. The usual technique employed now is camera vision and image processing. One major disadvantage with that is the environmental constraint: the machine vision system has many lighting constraints, so it is only possible to use that technique in a conditioned environment where the lighting is compatible with the camera system used. The technique presented here is designed to work in any environment. It does not employ machine vision. It utilizes a set of sensors fixed on the hands of a human to identify the direction in which the hand is pointing. This technique uses a cylindrical coordinate system to precisely find the direction. A programmed computing block in the robot identifies the direction accurately within the given range.
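
    A minimal sketch of turning hand-sensor readings into a pointing direction in cylindrical coordinates, assuming hypothetical wrist-mounted yaw/pitch readings and a nominal arm length; the paper's actual sensor set and computation may differ.

        import math

        def pointing_direction(yaw, pitch, arm_length=0.6):
            # Cartesian direction of the outstretched hand relative to the shoulder.
            x = arm_length * math.cos(pitch) * math.cos(yaw)
            y = arm_length * math.cos(pitch) * math.sin(yaw)
            z = arm_length * math.sin(pitch)
            # Cylindrical coordinates: radial distance, azimuth angle, height.
            r = math.hypot(x, y)
            theta = math.atan2(y, x)
            return r, theta, z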

  3. Towards safe robots approaching Asimov’s 1st law

    CERN Document Server

    Haddadin, Sami

    2014-01-01

    The vision of seamless human-robot interaction in our everyday life, allowing tight cooperation between human and robot, has not become reality yet. However, the recent increase in technology maturity has finally made it possible to realize systems of high integration, advanced sensorial capabilities, and enhanced power, to cross this barrier and merge the living spaces of humans and robot workspaces at least to a certain extent. Together with the increasing industrial effort to realize the first commercial service robotics products, this makes it necessary to properly address one of the most fundamental questions of Human-Robot Interaction: How to ensure safety in human-robot coexistence? In this authoritative monograph, the essential question about the necessary requirements for a safe robot is addressed in depth and from various perspectives. The approach taken in this book focuses on the biomechanical level of injury assessment, addresses the physical evaluation of robot-human impacts, and isolates the major factor...

  4. Implementation of a robotic flexible assembly system

    Science.gov (United States)

    Benton, Ronald C.

    1987-01-01

    As part of the Intelligent Task Automation program, a team developed enabling technologies for programmable, sensory controlled manipulation in unstructured environments. These technologies include 2-D/3-D vision sensing and understanding, force sensing and high speed force control, 2.5-D vision alignment and control, and multiple processor architectures. The subsequent design of a flexible, programmable, sensor controlled robotic assembly system for small electromechanical devices is described using these technologies and ongoing implementation and integration efforts. Using vision, the system picks parts dumped randomly in a tray. Using vision and force control, it performs high speed part mating, in-process monitoring/verification of expected results and autonomous recovery from some errors. It is programmed off line with semiautomatic action planning.

  5. Design and Implementation of Autonomous Stair Climbing with Nao Humanoid Robot

    OpenAIRE

    Lu, Wei

    2015-01-01

    With the development of humanoid robots, autonomous stair climbing is an important capability. Humanoid robots will play an important role in helping people tackle some basic problems in the future. The main contribution of this thesis is that the NAO humanoid robot can climb a spiral staircase autonomously. In the vision module, algorithms for image filtering and for detecting the contours of the stairs contribute to calculating the location of the stairs accurately. Additionally, the st...

  6. Line-feature-based calibration method of structured light plane parameters for robot hand-eye system

    Science.gov (United States)

    Qi, Yuhan; Jing, Fengshui; Tan, Min

    2013-03-01

    For monocular structured-light vision measurement, it is essential to calibrate the structured light plane parameters in addition to the camera intrinsic parameters. A line-feature-based calibration method of structured light plane parameters for a robot hand-eye system is proposed. Structured light stripes are selected as the calibrating primitive elements, and the robot moves from one calibrating position to another under a constraint, so that two misaligned stripe lines are generated. The images of the stripe lines can then be captured by the camera fixed at the robot's end link. During calibration, the equations of the two stripe lines in the camera coordinate system are calculated, and the structured light plane can then be determined. As the robot's motion may affect the effectiveness of calibration, the robot's motion constraints are analyzed. A calibration experiment and two vision measurement experiments are implemented, and the results reveal that the calibration accuracy can meet the precision requirement of robot thick-plate welding. Finally, analysis and discussion are provided to illustrate that the method has a high efficiency, fit for industrial in-situ calibration.
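
    A minimal sketch of recovering the light plane from the two stripe lines once they are expressed in the camera frame as a point plus a unit direction vector each; this is the standard two-line plane fit, not necessarily the paper's exact procedure.

        import numpy as np

        def plane_from_stripe_lines(p1, d1, p2, d2):
            """Return (n, d) of the plane n.x + d = 0 containing both stripe lines."""
            # The plane normal is perpendicular to both (non-parallel) line directions,
            # which is why the two stripes must be misaligned.
            n = np.cross(d1, d2)
            n = n / np.linalg.norm(n)
            # Any point on either line lies on the plane.
            d = -np.dot(n, np.asarray(p1, dtype=float))
            return n, d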

  7. Neuromorphic vision sensors and preprocessors in system applications

    Science.gov (United States)

    Kramer, Joerg; Indiveri, Giacomo

    1998-09-01

    A partial review of neuromorphic vision sensors that are suitable for use in autonomous systems is presented. Interfaces are being developed to multiplex the high- dimensional output signals of arrays of such sensors and to communicate them in standard formats to off-chip devices for higher-level processing, actuation, storage and display. Alternatively, on-chip processing stages may be implemented to extract sparse image parameters, thereby obviating the need for multiplexing. Autonomous robots are used to test neuromorphic vision chips in real-world environments and to explore the possibilities of data fusion from different sensing modalities. Examples of autonomous mobile systems that use neuromorphic vision chips for line tracking and optical flow matching are described.

  8. Intelligent assistive robots recent advances in assistive robotics for everyday activities

    CERN Document Server

    Moreno, Juan; Kong, Kyoungchul; Amirat, Yacine

    2015-01-01

    This book deals with the growing challenges of using assistive robots in our everyday activities along with providing intelligent assistive services. The presented applications concern mainly healthcare and wellness such as helping elderly people, assisting dependent persons, habitat monitoring in smart environments, well-being, security, etc. These applications reveal also new challenges regarding control theory, mechanical design, mechatronics, portability, acceptability, scalability, security, etc.  

  9. Light-driven nano-robotics for sub-diffraction probing and sensing

    DEFF Research Database (Denmark)

    Glückstad, Jesper; Bañas, Andrew Rafael; Palima, Darwin

    On the macro-scale, robotics typically uses light to carry information for machine vision and for feedback in artificially intelligent guidance systems and monitoring. Using the minuscule momentum of light, shrinking robots down to the micro- and even nano-scale regime creates opportunities...... Therefore, a generic approach for optimizing light-matter interaction involves the combination of optimal light-shaping techniques with the use of optimized nano-featured shapes in light-driven micro-robotic structures. In this work, we designed different three-dimensional micro-structures and fabricated......

  10. Friendly network robotics

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1997-03-01

    This paper summarizes the research results on friendly network robotics in fiscal 1996. This research assumes an android robot as an ultimate robot and a future robot system utilizing computer network technology. A robot aimed at human daily work activities in factories or under extreme environments is required to work in ordinary human work environments. A humanoid robot with size, shape, and functions similar to those of a human being is desirable. Such a robot, having a head with two eyes, two ears, and a mouth, can hold a conversation with a human being, can walk on two legs by autonomous adaptive control, and has behavior intelligence. Remote operation of such a robot is also possible through a high-speed computer network. As a key technology for using this robot in coexistence with human beings, the establishment of human-coexistent robotics was studied. As network-based robotics, the use of robots connected to computer networks was also studied. In addition, the R-cube (R³) plan (realtime remote control robot technology) was proposed. 82 refs., 86 figs., 12 tabs.

  11. Toward The Robot Eye: Isomorphic Representation For Machine Vision

    Science.gov (United States)

    Schenker, Paul S.

    1981-10-01

    This paper surveys some issues confronting the conception of models for general purpose vision systems. We draw parallels to requirements of human performance under visual transformations naturally occurring in the ecological environment. We argue that successful real world vision systems require a strong component of analogical reasoning. We propose a course of investigation into appropriate models, and illustrate some of these proposals by a simple example. Our study emphasizes the potential importance of isomorphic representations - models of image and scene which embed a metric of their respective spaces, and whose topological structure facilitates identification of scene descriptors that are invariant under viewing transformations.

  12. Robotic assisted minimally invasive surgery

    Directory of Open Access Journals (Sweden)

    Palep Jaydeep

    2009-01-01

    Full Text Available The term "robot" was coined by the Czech playwright Karel Capek in 1921 in his play Rossum's Universal Robots. The word "robot" comes from the Czech word robota, which means forced labor. The era of robots in surgery commenced in 1994, when the first AESOP (voice-controlled camera holder) prototype robot was used clinically in 1993 and then marketed as the first surgical robot ever in 1994 by the US FDA. Since then many robot prototypes, like the Endoassist (Armstrong Healthcare Ltd., High Wycombe, Buck, UK) and the FIPS endoarm (Karlsruhe Research Center, Karlsruhe, Germany), have been developed to add to the functions of the robot and try to increase its utility. Integrated Surgical Systems (now Intuitive Surgical, Inc.) redesigned the SRI Green Telepresence Surgery system and created the da Vinci Surgical System®, classified as a master-slave surgical system. It uses true 3-D visualization and EndoWrist®. It was approved by the FDA in July 2000 for general laparoscopic surgery and in November 2002 for mitral valve repair surgery. The da Vinci robot is currently being used in various fields such as urology, general surgery, gynecology, cardio-thoracic, pediatric, and ENT surgery. It provides several advantages over conventional laparoscopy, such as 3D vision, motion scaling, intuitive movements, visual immersion, and tremor filtration. The advent of robotics has increased the use of minimally invasive surgery among laparoscopically naïve surgeons and expanded the repertoire of experienced surgeons to include more advanced and complex reconstructions.

  13. An Approach for Environment Mapping and Control of Wall Follower Cellbot Through Monocular Vision and Fuzzy System

    OpenAIRE

    Farias, Karoline de M.; Rodrigues Junior, WIlson Leal; Bezerra Neto, Ranulfo P.; Rabelo, Ricardo A. L.; Santana, Andre M.

    2017-01-01

    This paper presents an approach using range measurement through homography calculation to build a 2D visual occupancy grid and control the robot through monocular vision. This approach is designed for a Cellbot architecture. The robot is equipped with a wall-following behavior to explore the environment, which enables the robot to trail object contours, with the fuzzy controller responsible for providing commands for the correct execution of the robot's movements while facing the advers...
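
    A minimal sketch of range measurement through a homography, assuming a precomputed 3x3 matrix H that maps image pixels to ground-plane coordinates in metres; H itself would come from a calibration step not shown here.

        import numpy as np

        def pixel_to_ground(H, u, v):
            """Map an image pixel (u, v) to ground-plane coordinates (X, Y)."""
            p = H @ np.array([u, v, 1.0])
            return p[0] / p[2], p[1] / p[2]  # dehomogenise

        # Range to the point seen at the bottom edge of an obstacle, for example:
        # X, Y = pixel_to_ground(H, 320, 410); rng = (X**2 + Y**2) ** 0.5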

  14. Semiautonomous teleoperation system with vision guidance

    Science.gov (United States)

    Yu, Wai; Pretlove, John R. G.

    1998-12-01

    This paper describes the ongoing research work on developing a telerobotic system in the Mechatronic Systems and Robotics Research group at the University of Surrey. As human operators' manual control of remote robots always suffers from reduced performance and difficulties in perceiving information from the remote site, a system with a certain level of intelligence and autonomy will help to solve some of these problems. Thus, this system has been developed for this purpose. It also serves as an experimental platform to test the idea of combining human and computer intelligence in teleoperation and finding the optimum balance between them. The system consists of a Polhemus-based input device, a computer vision sub-system, and a graphical user interface which connects the operator with the remote robot. The system description is given in this paper as well as the preliminary experimental results of the system evaluation.

  15. Fiber optic coherent laser radar 3D vision system

    International Nuclear Information System (INIS)

    Clark, R.B.; Gallman, P.G.; Slotwinski, A.R.; Wagner, K.; Weaver, S.; Xu, Jieping

    1996-01-01

    This CLVS will provide a substantial advance in high-speed computer vision performance to support robotic Environmental Management (EM) operations. This 3D system employs a compact fiber-optic based scanner and operates at a 128 x 128 pixel frame at one frame per second with a range resolution of 1 mm over its 1.5 meter working range. Using acousto-optic deflectors, the scanner is completely randomly addressable. This can provide live 3D monitoring for situations where it is necessary to update once per second. This can be used for decontamination and decommissioning operations in which robotic systems are altering the scene, such as in waste removal, surface scarifying, or equipment disassembly and removal. The fiber-optic coherent laser radar based system is immune to variations in lighting, color, or surface shading, which have plagued the reliability of existing 3D vision systems, while providing substantially superior range resolution.

  16. The development of advanced robotics technology in high radiation environment

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Yong Bum; Cho, Jaiwan; Lee, Nam Ho; Choi, Young Soo; Park, Soon Yong; Lee, Jong Min; Park, Jin Suk; Kim, Seung Ho; Kim, Byung Soo; Moon, Byung Soo

    1997-07-01

    In the tele-operation technology using tele-presence in high radiation environments, stereo vision target tracking by the centroid method, vergence control of a stereo camera by the moving vector method, a stereo observing system by the correlation method, a horizontal-moving-axis stereo camera, and 3-dimensional information acquisition from stereo images were developed. Also developed were gesture image acquisition by computer vision and the construction of a virtual environment for remote work in nuclear power plants. In the development of intelligent control and monitoring technology for tele-robots in hazardous environments, the characteristics and principles of robot operation were studied, and robot end-effector tracking algorithms by the centroid method and the neural network method were developed for observation and survey in hazardous environments. A 3-dimensional information acquisition algorithm by structured light was developed. In the development of radiation-hardened sensor technology, a radiation-hardened camera module was designed and tested, the radiation characteristics of electric components in the robot system were evaluated, and a 2-dimensional radiation monitoring system was developed. These advanced critical robot technologies and telepresence techniques developed in this project can be applied to the nozzle-dam installation/removal robot system and used to realize unmanned remote operation of the nozzle-dam installation/removal task in the steam generator of a nuclear power plant, which can help people involved in extremely hazardous, highly radioactive areas eliminate their exposure to radiation, enhance their task safety, and raise their working efficiency. (author). 75 refs., 21 tabs., 15 figs.
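
    A minimal sketch of the centroid method for target tracking, assuming a binary target mask has already been segmented from one of the stereo images; image moments yield the centroid that the tracker follows.

        import cv2

        def target_centroid(mask):
            """Return the (x, y) centroid of a binary target mask, or None if empty."""
            m = cv2.moments(mask, binaryImage=True)
            if m["m00"] == 0:
                return None
            return m["m10"] / m["m00"], m["m01"] / m["m00"]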

  17. The development of advanced robotics technology in high radiation environment

    International Nuclear Information System (INIS)

    Lee, Yong Bum; Cho, Jaiwan; Lee, Nam Ho; Choi, Young Soo; Park, Soon Yong; Lee, Jong Min; Park, Jin Suk; Kim, Seung Ho; Kim, Byung Soo; Moon, Byung Soo.

    1997-07-01

    In the tele-operation technology using tele-presence in high radiation environments, stereo vision target tracking by the centroid method, vergence control of a stereo camera by the moving vector method, a stereo observing system by the correlation method, a horizontal-moving-axis stereo camera, and 3-dimensional information acquisition from stereo images were developed. Also developed were gesture image acquisition by computer vision and the construction of a virtual environment for remote work in nuclear power plants. In the development of intelligent control and monitoring technology for tele-robots in hazardous environments, the characteristics and principles of robot operation were studied, and robot end-effector tracking algorithms by the centroid method and the neural network method were developed for observation and survey in hazardous environments. A 3-dimensional information acquisition algorithm by structured light was developed. In the development of radiation-hardened sensor technology, a radiation-hardened camera module was designed and tested, the radiation characteristics of electric components in the robot system were evaluated, and a 2-dimensional radiation monitoring system was developed. These advanced critical robot technologies and telepresence techniques developed in this project can be applied to the nozzle-dam installation/removal robot system and used to realize unmanned remote operation of the nozzle-dam installation/removal task in the steam generator of a nuclear power plant, which can help people involved in extremely hazardous, highly radioactive areas eliminate their exposure to radiation, enhance their task safety, and raise their working efficiency. (author). 75 refs., 21 tabs., 15 figs.

  18. Internet remote control interface for a multipurpose robotic arm

    Directory of Open Access Journals (Sweden)

    Matthew W. Dunnigan

    2008-11-01

    Full Text Available This paper presents an Internet remote control interface for a MITSUBISHI PA10-6CE manipulator, established for the purpose of the ROBOT museum exhibition during spring and summer 2004. The robotic manipulator is part of the Intelligent Robotic Systems Laboratory at Heriot-Watt University, which has been established to work on dynamic and kinematic aspects of manipulator control in the presence of environmental disturbances. The laboratory has been enriched by a simple vision system consisting of three web-cameras to broadcast live images of the robots over the Internet. The interface comprises a TCP/IP server, providing command parsing and execution using the open controller architecture of the manipulator, and a client Java applet web-site providing a simple robot control interface.

  19. Efficient Active Sensing with Categorized Further Explorations for a Home Behavior-Monitoring Robot

    Directory of Open Access Journals (Sweden)

    Wenwei Yu

    2017-01-01

    Full Text Available Mobile robotics is a potential solution to home behavior monitoring for the elderly. For a mobile robot in the real world, there are several types of uncertainty in its perceptions, such as the ambiguity between a target object and the surrounding objects, and occlusions by furniture. The problem could be more serious for a home behavior-monitoring system, which aims to accurately recognize the activity of a target person in spite of these uncertainties. The proposed system detects irregularities and categorizes situations requiring further explorations, which strategically maximize the information needed for activity recognition while minimizing the costs. Two schemes of active sensing, based on two irregularity detections, namely, heuristic-based and template-matching-based irregularity detection, were implemented and examined for body-contour-based activity recognition. Their time cost and accuracy in activity recognition were evaluated through experiments in both a controlled scenario and a home living scenario. Experiment results showed that the categorized further explorations guided the robot system to sense the target person actively. As a result, with the proposed approach, the robot system achieved higher accuracy of activity recognition.
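
    A minimal sketch of template-matching-based irregularity detection, assuming a stored body-contour template and a grayscale camera frame; a low peak correlation score flags a situation requiring further exploration. The threshold is an illustrative assumption.

        import cv2

        def is_irregular(frame_gray, template_gray, threshold=0.6):
            """Flag an irregularity when the best template match falls below threshold."""
            scores = cv2.matchTemplate(frame_gray, template_gray, cv2.TM_CCOEFF_NORMED)
            _, max_score, _, _ = cv2.minMaxLoc(scores)
            return max_score < threshold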

  20. Analysis and optimization on in-vessel inspection robotic system for EAST

    Energy Technology Data Exchange (ETDEWEB)

    Zhang, Weijun, E-mail: zhangweijun@sjtu.edu.cn; Zhou, Zeyu; Yuan, Jianjun; Du, Liang; Mao, Ziming

    2015-12-15

    Since China successfully built her first Experimental Advanced Superconducting TOKAMAK (EAST) several years ago, interest and demand have been growing for robotic in-vessel inspection/operation systems, by which observation of in-vessel physical phenomena, collection of visual information, 3D mapping and localization, and even maintenance become possible. However, implementing a practical and robust robotic system raises many challenges, due to a number of complex constraints and expectations, e.g., the high remanent working temperature (100 °C) and vacuum (10⁻³ Pa) environment even in the rest interval between plasma discharge experiments, close-up and precise inspection, operation efficiency, besides the general kinematic requirements of the D-shaped irregular vessel. In this paper we propose an upgraded robotic system with a redundant-degrees-of-freedom (DOF) manipulator combined with a binocular vision system at the tip and a virtual reality system. A comprehensive comparison and discussion are given on the necessity and main function of the binocular vision system, path planning for inspection, fast localization, inspection efficiency and success rate in time, optimization of the kinematic configuration, and the possibility of an underactuated mechanism. A detailed design, implementation, and experiments of the binocular vision system, together with the recent development progress of the whole robotic system, are reported in the later part of the paper, while future work and expectations are described at the end.
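
    For reference, the depth-recovery relation at the heart of any such binocular system is Z = f·B/d; the sketch below applies it with placeholder calibration values and is an illustration, not the EAST system's code.

        def depth_from_disparity(disparity_px, focal_px=800.0, baseline_m=0.12):
            """Depth Z = f * B / d for focal length f (pixels), baseline B
            (meters) and disparity d (pixels); placeholder calibration."""
            if disparity_px <= 0:
                return float("inf")  # no measurable parallax
            return focal_px * baseline_m / disparity_px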

  1. Surgery with cooperative robots.

    Science.gov (United States)

    Lehman, Amy C; Berg, Kyle A; Dumpert, Jason; Wood, Nathan A; Visty, Abigail Q; Rentschler, Mark E; Platt, Stephen R; Farritor, Shane M; Oleynikov, Dmitry

    2008-03-01

    Advances in endoscopic techniques for abdominal procedures continue to reduce the invasiveness of surgery. Gaining access to the peritoneal cavity through small incisions prompted the first significant shift in general surgery. The complete elimination of external incisions through natural orifice access is potentially the next step in reducing patient trauma. While minimally invasive techniques offer significant patient advantages, the procedures are surgically challenging. Robotic surgical systems are being developed that address the visualization and manipulation limitations, but many of these systems remain constrained by the entry incisions. Alternatively, miniature in vivo robots are being developed that are completely inserted into the peritoneal cavity for laparoscopic and natural orifice procedures. These robots can provide vision and task assistance without the constraints of the entry incision, and can reduce the number of incisions required for laparoscopic procedures. In this study, a series of minimally invasive animal-model surgeries were performed using multiple miniature in vivo robots in cooperation with existing laparoscopy and endoscopy tools as well as the da Vinci Surgical System. These procedures demonstrate that miniature in vivo robots can address the visualization constraints of minimally invasive surgery by providing video feedback and task assistance from arbitrary orientations within the peritoneal cavity.

  2. The Application of Virtex-II Pro FPGA in High-Speed Image Processing Technology of Robot Vision Sensor

    International Nuclear Information System (INIS)

    Ren, Y J; Zhu, J G; Yang, X Y; Ye, S H

    2006-01-01

    The Virtex-II Pro FPGA is applied to the vision sensor tracking system of an IRB2400 robot. The hardware platform, which undertakes the tasks of improving SNR and compressing data, is constructed using the high-speed image processing of the FPGA. The lower-level image-processing algorithm is realized by combining the FPGA fabric and the embedded CPU. Image processing is accelerated by the introduction of the FPGA and CPU, and the use of the embedded CPU makes it easy to realize the logic design of the interface. Some key techniques are presented in the text, such as the read-write process, template matching, and convolution, and some modules are simulated as well. Finally, a comparison among modules using this design, a PC, and a DSP is carried out. Because the core of the high-speed image processing system is an FPGA chip, whose function can be conveniently updated, the measurement system is, to a degree, intelligent.
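
    A plain-software analogue of the template-matching kernel that the paper maps onto the FPGA fabric is sketched below, as a reference model for clarity rather than the HDL design.

        import numpy as np

        def sad_match(image, template):
            """Exhaustive sum-of-absolute-differences template search; the
            FPGA evaluates these window sums in parallel rather than in a loop."""
            ih, iw = image.shape
            th, tw = template.shape
            best, best_pos = np.inf, (0, 0)
            for y in range(ih - th + 1):
                for x in range(iw - tw + 1):
                    window = image[y:y+th, x:x+tw].astype(int)
                    sad = np.abs(window - template.astype(int)).sum()
                    if sad < best:
                        best, best_pos = sad, (x, y)
            return best_pos, best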

  3. The Application of Virtex-II Pro FPGA in High-Speed Image Processing Technology of Robot Vision Sensor

    Science.gov (United States)

    Ren, Y. J.; Zhu, J. G.; Yang, X. Y.; Ye, S. H.

    2006-10-01

    The Virtex-II Pro FPGA is applied to the vision sensor tracking system of an IRB2400 robot. The hardware platform, which undertakes the tasks of improving SNR and compressing data, is constructed using the high-speed image processing of the FPGA. The lower-level image-processing algorithm is realized by combining the FPGA fabric and the embedded CPU. Image processing is accelerated by the introduction of the FPGA and CPU, and the use of the embedded CPU makes it easy to realize the logic design of the interface. Some key techniques are presented in the text, such as the read-write process, template matching, and convolution, and some modules are simulated as well. Finally, a comparison among modules using this design, a PC, and a DSP is carried out. Because the core of the high-speed image processing system is an FPGA chip, whose function can be conveniently updated, the measurement system is, to a degree, intelligent.

  4. How to prepare the patient for robotic surgery: before and during the operation.

    Science.gov (United States)

    Lim, Peter C; Kang, Elizabeth

    2017-11-01

    Robotic surgery in the treatment of gynecologic diseases continues to evolve and has become accepted over the last decade. The advantages of robotic-assisted laparoscopic surgery over conventional laparoscopy are three-dimensional camera vision, superior precision and dexterity with EndoWristed instruments, elimination of operator tremor, and decreased surgeon fatigue. The drawbacks of the technology are bulkiness and lack of tactile feedback. As with other surgical platforms, the limitations of robotic surgery must be understood. Patient selection and the types of surgical procedures that can be performed through the robotic surgical platform are critical to the success of robotic surgery. First, patient selection and the indication for gynecologic disease should be considered. Discussion with the patient regarding the benefits and potential risks of robotic surgery, its complications, and alternative treatments is mandatory, followed by the patient's signature indicating informed consent. Appropriate preoperative evaluation, including laboratory and imaging tests, and bowel cleansing should be considered depending upon the type of robotic-assisted procedure. Unlike other surgical procedures, robotic surgery is equipment-intensive and requires an appropriate surgical suite to accommodate the patient side cart, the vision system, and the surgeon's console. Surgical personnel must be properly trained in the robotics technology. Several factors must be considered to perform a successful robotic-assisted surgery: the indication and type of surgical procedure, the surgical platform, patient position and the degree of Trendelenburg, proper port placement configuration, and appropriate instrumentation. These factors, which must be considered so that patients are appropriately prepared before and during the operation, are described. Copyright © 2017. Published by Elsevier Ltd.

  5. Periodic activations of behaviours and emotional adaptation in behaviour-based robotics

    Science.gov (United States)

    Burattini, Ernesto; Rossi, Silvia

    2010-09-01

    The possible modulatory influence of motivations and emotions is of great interest in designing robotic adaptive systems. In this paper, an attempt is made to connect the concept of periodic behaviour activations to emotional modulation, in order to link the variability of behaviours to the circumstances in which they are activated. The impact of emotions, described as timed controlled structures, on simple but conflicting reactive behaviours is studied. Through this approach it is shown that the introduction of such asynchronies in the robot control system may lead to an adaptation in the emergent behaviour without having an explicit action selection mechanism. The emergent behaviours of a simple robot designed with both a parallel and a hierarchical architecture are evaluated and compared.

  6. 10th FSR (Field and Service Robotics)

    CERN Document Server

    Barfoot, Timothy

    2016-01-01

    This book contains the proceedings of the 10th FSR, (Field and Service Robotics) which is the leading single-track conference on applications of robotics in challenging environments. The 10th FSR was held in Toronto, Canada from 23-26 June 2015. The book contains 42 full-length, peer-reviewed papers organized into a variety of topics: Aquatic, Vision, Planetary, Aerial, Underground, and Systems. The goal of the book and the conference is to report and encourage the development and experimental evaluation of field and service robots, and to generate a vibrant exchange and discussion in the community. Field robots are non-factory robots, typically mobile, that operate in complex and dynamic environments: on the ground (Earth or other planets), under the ground, underwater, in the air or in space. Service robots are those that work closely with humans to help them with their lives. The first FSR was held in Canberra, Australia, in 1997. Since that first meeting, FSR has been held roughly every two years, cycling...

  7. Mobile robot for hazardous environments

    International Nuclear Information System (INIS)

    Bains, N.

    1995-01-01

    This paper describes the architecture and potential applications of the autonomous robot for a known environment (ARK). The ARK project has developed an autonomous mobile robot that can move around by itself in a complicated nuclear environment utilizing a number of sensors for navigation. The primary sensor system is computer vision. The ARK has the intelligence to determine its position utilizing "natural landmarks," such as ordinary building features, at any point along its path. It is this feature that gives ARK its uniqueness to operate in an industrial type of environment. The prime motivation to develop ARK was the potential application of mobile robots in radioactive areas within nuclear generating stations and for nuclear waste sites. The project budget is $9 million over 4 yr and will be completed in October 1995

  8. Study of high-definition and stereoscopic head-aimed vision for improved teleoperation of an unmanned ground vehicle

    Science.gov (United States)

    Tyczka, Dale R.; Wright, Robert; Janiszewski, Brian; Chatten, Martha Jane; Bowen, Thomas A.; Skibba, Brian

    2012-06-01

    Nearly all explosive ordnance disposal robots in use today employ monoscopic standard-definition video cameras to relay live imagery from the robot to the operator. With this approach, operators must rely on shadows and other monoscopic depth cues in order to judge distances and object depths. Alternatively, they can contact an object with the robot's manipulator to determine its position, but that approach carries with it the risk of detonation from unintentionally disturbing the target or nearby objects. We recently completed a study in which high-definition (HD) and stereoscopic video cameras were used in addition to conventional standard-definition (SD) cameras in order to determine if higher resolutions and/or stereoscopic depth cues improve operators' overall performance of various unmanned ground vehicle (UGV) tasks. We also studied the effect that the different vision modes had on operator comfort. A total of six different head-aimed vision modes were used including normal-separation HD stereo, SD stereo, "micro" (reduced separation) SD stereo, HD mono, and SD mono (two types). In general, the study results support the expectation that higher resolution and stereoscopic vision aid UGV teleoperation, but the degree of improvement was found to depend on the specific task being performed; certain tasks derived notably more benefit from improved depth perception than others. This effort was sponsored by the Joint Ground Robotics Enterprise under Robotics Technology Consortium Agreement #69-200902 T01. Technical management was provided by the U.S. Air Force Research Laboratory's Robotics Research and Development Group at Tyndall AFB, Florida.

  9. Robot 2015 : Second Iberian Robotics Conference : Advances in Robotics

    CERN Document Server

    Moreira, António; Lima, Pedro; Montano, Luis; Muñoz-Martinez, Victor

    2016-01-01

    This book contains a selection of papers accepted for presentation and discussion at ROBOT 2015: Second Iberian Robotics Conference, held in Lisbon, Portugal, November 19th-21st, 2015. ROBOT 2015 is part of a series of conferences that are a joint organization of SPR – “Sociedade Portuguesa de Robótica/ Portuguese Society for Robotics”, SEIDROB – Sociedad Española para la Investigación y Desarrollo de la Robótica/ Spanish Society for Research and Development in Robotics, and CEA-GTRob – Grupo Temático de Robótica/ Robotics Thematic Group. The conference organization also had the collaboration of several universities and research institutes, including: University of Minho, University of Porto, University of Lisbon, Polytechnic Institute of Porto, University of Aveiro, University of Zaragoza, University of Malaga, LIACC, INESC-TEC and LARSyS. Robot 2015 was focused on the robotics scientific and technological activities in the Iberian Peninsula, although open to research and delegates from other...

  10. Vision-based topological map building and localisation using persistent features

    CSIR Research Space (South Africa)

    Sabatta, DG

    2008-11-01

    Full Text Available Vision-based Topological Map... of topological mapping was introduced into the field of robotics following studies of human cognitive mapping undertaken by Kuipers [8]. Since then, much progress has been made in the field of vision-based topological mapping. Topological mapping lends...

  11. Multi-modal low cost mobile indoor surveillance system on the Robust Artificial Intelligence-based Defense Electro Robot (RAIDER)

    Science.gov (United States)

    Nair, Binu M.; Diskin, Yakov; Asari, Vijayan K.

    2012-10-01

    We present an autonomous system capable of performing security check routines. The surveillance machine, the Clearpath Husky robotic platform, is equipped with three IP cameras with different orientations for the surveillance tasks of face recognition, human activity recognition, autonomous navigation, and 3D reconstruction of its environment. Combining the computer vision algorithms onto a robotic machine has given birth to the Robust Artificial Intelligence-based Defense Electro-Robot (RAIDER). The end purpose of the RAIDER is to conduct a patrolling routine on a single floor of a building several times a day. As the RAIDER travels down the corridors, off-line algorithms use two of the RAIDER's side-mounted cameras to perform a monocular-vision 3D reconstruction technique that updates a 3D model to the most current state of the indoor environment. Using frames from the front-mounted camera, positioned at human eye level, the system performs face recognition with real-time training of unknown subjects. A human activity recognition algorithm will also be implemented, in which each detected person is assigned to a set of action classes chosen to classify ordinary and harmful student activities in a hallway setting. The system is designed to detect changes and irregularities within an environment as well as to familiarize itself with regular faces and actions in order to distinguish potentially dangerous behavior. In this paper, we present the various algorithms and their modifications which, when implemented on the RAIDER, serve the purpose of indoor surveillance.

  12. Robotics at Savannah River site: activity report

    International Nuclear Information System (INIS)

    Byrd, J.S.

    1984-09-01

    The objectives of the Robotics Technology Group at the Savannah River Laboratory are to employ modern industrial robots and to develop unique automation and robotic systems to enhance process operations at the Savannah River site (SRP and SRL). The incentives are to improve safety, reduce personnel radiation exposure, improve product quality and productivity, and to reduce operating costs. During the past year robotic systems have been installed to fill chemical dilution vials in an SRP laboratory at 772-F and remove radioactive waste materials in the SRL Californium Production Facility at 773-A. A robotic system to lubricate an extrusion press has been developed and demonstrated in the SRL robotics laboratory and is scheduled for installation at the 321-M fuel fabrication area. A mobile robot was employed by SRP for a radiation monitoring task at a waste tank top in H-Area. Several other robots are installed in the SRL robotics laboratories and application development programs are underway. The status of these applications is presented in this report

  13. Terpsichore. ENEA's autonomous robotics project; Progetto Tersycore, la robotica autonoma

    Energy Technology Data Exchange (ETDEWEB)

    Taraglio, S.; Zanela, S.; Santini, A.; Nanni, V. [ENEA, Centro Ricerche Casaccia, Rome (Italy). Div. Robotica e Informatica Avanzata

    1999-10-01

    The article presents some results of the Terpsichore project, aimed at developing and testing algorithms and applications for autonomous robotics. Four applications are described: dynamic mapping of a building's interior through the use of ultrasonic sensors; visual drive of an autonomous robot via a neural network controller; a neural network-based stereo vision system that steers a robot through unknown indoor environments; and the evolution of intelligent behaviours via the genetic algorithm approach.

  14. Active Exploration for Robust Object Detection

    OpenAIRE

    Velez, Javier J.; Hemann, Garrett A.; Huang, Albert S.; Posner, Ingmar; Roy, Nicholas

    2011-01-01

    Today, mobile robots are increasingly expected to operate in ever more complex and dynamic environments. In order to carry out many of the higher-level tasks envisioned, a semantic understanding of a workspace is pivotal. Here our field has benefited significantly from successes in machine learning and vision: applications in robotics of off-the-shelf object detectors are plentiful. This paper outlines an online, any-time planning framework enabling the active exploration of such detections. O...

  15. Image Based Solution to Occlusion Problem for Multiple Robots Navigation

    Directory of Open Access Journals (Sweden)

    Taj Mohammad Khan

    2012-04-01

    Full Text Available In machine vision, the occlusion problem is always a challenging issue in image-based mapping and navigation tasks. This paper presents a multiple-view vision-based algorithm for the development of an occlusion-free map of the indoor environment. The map is assumed to be utilized by mobile robots within the workspace. It has a wide range of applications, including mobile robot path planning and navigation, access control in restricted areas, and surveillance systems. We used a wall-mounted fixed camera system. After intensity adjustment and background subtraction of the synchronously captured images, image registration was performed. We applied our algorithm to the registered images to resolve the occlusion problem. This technique works well even in the presence of total occlusion for a longer period.
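
    The processing chain described (background subtraction on each synchronized view, registration into a common frame via homographies, then fusion across views so that a region occluded in one camera is recovered from another) might be sketched as follows; OpenCV, pre-calibrated homographies and the fusion rule are assumptions. Each camera would own one subtractor, e.g. cv2.createBackgroundSubtractorMOG2().

        import cv2
        import numpy as np

        def foreground_in_map(frame, subtractor, H, map_size):
            """Background-subtract one view and warp it into the map frame."""
            fg = subtractor.apply(frame)
            fg = cv2.threshold(fg, 127, 255, cv2.THRESH_BINARY)[1]
            return cv2.warpPerspective(fg, H, map_size)  # (width, height)

        def fuse_views(warped_masks):
            """A map cell is occupied if any registered view sees it occupied,
            so occlusion in a single view does not hide an obstacle."""
            return np.maximum.reduce(warped_masks)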

  16. An approach to robot SLAM based on incremental appearance learning with omnidirectional vision

    Science.gov (United States)

    Wu, Hua; Qin, Shi-Yin

    2011-03-01

    Localisation and mapping with an omnidirectional camera becomes more difficult as the landmark appearances change dramatically in the omnidirectional image. With conventional techniques, it is difficult to match the features of the landmark with the template. We present a novel robot simultaneous localisation and mapping (SLAM) algorithm with an omnidirectional camera, which uses incremental landmark appearance learning to provide the posterior probability distribution for estimating the robot pose under a particle filtering framework. The major contribution of our work is to represent the posterior estimate of the robot pose by incremental probabilistic principal component analysis, which can be naturally incorporated into the particle filtering algorithm for robot SLAM. Moreover, the innovative method of this article allows the adoption of the severely distorted landmark appearances viewed with an omnidirectional camera for robot SLAM. The experimental results demonstrate that the localisation error is less than 1 cm in an indoor environment using five landmarks, and the location of the landmark appearances can be estimated within 5 pixels of deviation from the ground truth in the omnidirectional image at a fairly fast speed.
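
    The core idea, weighting pose particles by an appearance likelihood from a learned subspace, can be condensed as below; the Gaussian-in-reconstruction-error likelihood is a simplified stand-in for the paper's incremental probabilistic PCA, not the authors' method.

        import numpy as np

        def appearance_likelihood(patch, mean, components, sigma=10.0):
            """Likelihood of an observed landmark patch under a PCA subspace
            (components: rows are orthonormal principal directions)."""
            v = patch.ravel().astype(float) - mean
            recon = components.T @ (components @ v)   # project and reconstruct
            err = np.linalg.norm(v - recon)
            return np.exp(-0.5 * (err / sigma) ** 2)

        def reweight(weights, patches, mean, components):
            """Fold the appearance likelihood into the particle weights."""
            w = np.array([wi * appearance_likelihood(p, mean, components)
                          for wi, p in zip(weights, patches)])
            return w / w.sum()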

  17. Intelligence for Human-Assistant Planetary Surface Robots

    Science.gov (United States)

    Hirsh, Robert; Graham, Jeffrey; Tyree, Kimberly; Sierhuis, Maarten; Clancey, William J.

    2006-01-01

    The central premise in developing effective human-assistant planetary surface robots is that robotic intelligence is needed. The exact type, method, forms and/or quantity of intelligence is an open issue being explored on the ERA project, as well as others. In addition to field testing, theoretical research into this area can help provide answers on how to design future planetary robots. Many fundamental intelligence issues are discussed by Murphy [2], including (a) learning, (b) planning, (c) reasoning, (d) problem solving, (e) knowledge representation, and (f) computer vision (stereo tracking, gestures). The new "social interaction/emotional" form of intelligence that some consider critical to Human Robot Interaction (HRI) can also be addressed by human assistant planetary surface robots, as human operators feel more comfortable working with a robot when the robot is verbally (or even physically) interacting with them. Arkin [3] and Murphy are both proponents of the hybrid deliberative-reasoning/reactive-execution architecture as the best general architecture for fully realizing robot potential, and the robots discussed herein implement a design continuously progressing toward this hybrid philosophy. The remainder of this chapter will describe the challenges associated with robotic assistance to astronauts, our general research approach, the intelligence incorporated into our robots, and the results and lessons learned from over six years of testing human-assistant mobile robots in field settings relevant to planetary exploration. The chapter concludes with some key considerations for future work in this area.

  18. The influence of active vision on the exoskeleton of intelligent agents

    Science.gov (United States)

    Smith, Patrice; Terry, Theodore B.

    2016-04-01

    Chameleonization occurs when a self-learning autonomous mobile system's (SLAMR) active vision scans the surface on which it is perched, causing the exoskeleton to change colors and exhibit a chameleon effect. Intelligent agents having the ability to adapt to their environment and exhibit key survivability characteristics of their environments would owe this in large part to the use of active vision. Active vision would allow the intelligent agent to scan its environment and adapt as needed in order to avoid detection. The SLAMR system would have an exoskeleton that changes based on the surface it is perched on; this is known as the "chameleon effect," not in the common sense of the term, but in the techno-bio-inspired meaning addressed in our previous paper. Active vision, utilizing stereoscopic color-sensing functionality, would enable the intelligent agent to scan an object within its close proximity, determine the color scheme, and match it, allowing the agent to blend with its environment. Through the use of its optical capabilities, the SLAMR system would be able to further determine its position, taking into account spatial and temporal correlation and the spatial frequency content of neighboring structures, further ensuring successful background blending. The complex visual tasks of identifying objects, using edge detection, image filtering, and feature extraction are essential for an intelligent agent to gain additional knowledge about its environmental surroundings.

  19. An experimental program on advanced robotics

    International Nuclear Information System (INIS)

    Yuan, J.S.C.; Stovman, J.; MacDonald, R.; Norgate, G.

    1987-01-01

    Remote handling in hostile environments, including space, nuclear facilities, and mines, requires hybrid systems which permit close cooperation between state-of-the-art teleoperation and advanced robotics. Teleoperation using hand controller commands and television feedback can be enhanced by providing force-feel feedback and simulation graphics enhancement of the display. By integrating robotics features such as computer vision and force/tactile feedback with advanced local control systems, the overall effectiveness of the system can be improved and the operator workload reduced. This has been demonstrated in the laboratory. Applications such as grappling a drifting satellite or transferring material at sea are envisaged

  20. Robotics

    International Nuclear Information System (INIS)

    Scheide, A.W.

    1983-01-01

    This article reviews some of the technical areas and history associated with robotics, provides information relative to the formation of a Robotics Industry Committee within the Industry Applications Society (IAS), and describes how all activities relating to robotics will be coordinated within the IEEE. Industrial robots are being used for material handling, processes such as coating and arc welding, and some mechanical and electronics assembly. An industrial robot is defined as a programmable, multifunctional manipulator designed to move material, parts, tools, or specialized devices through variable programmed motions for a variety of tasks. The initial focus of the Robotics Industry Committee will be on the application of robotics systems to the various industries that are represented within the IAS

  1. Social Robotics in Therapy of Apraxia of Speech

    Directory of Open Access Journals (Sweden)

    José Carlos Castillo

    2018-01-01

    Full Text Available Apraxia of speech is a motor speech disorder in which messages from the brain to the mouth are disrupted, resulting in an inability to move the lips or tongue to the right place to pronounce sounds correctly. Current therapies for this condition involve a therapist who conducts the exercises in one-on-one sessions. Our aim is to work in the line of robotic therapies, in which a robot is able to perform a therapy session partially or fully autonomously, endowing a social robot with the ability to assist therapists in apraxia-of-speech rehabilitation exercises. Therefore, we integrate computer vision and machine learning techniques to detect the mouth pose of the user and, on top of that, our social robot autonomously performs the different steps of the therapy using multimodal interaction.

  2. MASSIVE OPEN ONLINE COURSES IN EDUCATION OF ROBOTICS

    Directory of Open Access Journals (Sweden)

    Gyula Mester

    2016-03-01

    Full Text Available Recently, the requirement for learning has been constantly increasing. MOOCs (massive open online courses) represent the educational revolution of the century. A MOOC is an online course accessible to an unlimited number of participants, with open access via the web. Major participants in the MOOC space are: Coursera, Udacity (Stanford, since 2012) and edX (Harvard, MIT, since 2012). In this paper two MOOCs are considered: Introduction to Robotics and Robotic Vision, both from the Queensland University of Technology, Brisbane, Australia.

  3. Development of a Vision-Based Robotic Follower Vehicle

    Science.gov (United States)

    2009-02-01

    Figure 12: An example of a spherical target and the resultant blob (taken from [66]). Figure 13: A sample image and the recognized keypoints found using the SIFT algorithm.

  4. Low Vision FAQs

    Science.gov (United States)

    Low Vision FAQs What is low vision? Low vision is a visual impairment, not correctable ... person’s ability to perform everyday activities. What causes low vision? Low vision can result from a variety of ...

  5. Emergent risk to workplace safety as a result of the use of robots in the work place

    NARCIS (Netherlands)

    Steijn, W.; Luiijf, E.; Beek, D. van der

    2016-01-01

    For decades now, robots have been a key part of future visions in films and books. As long ago as 1920, Karel Čapek wrote a play called RUR (Rossum’s Universal Robots). The first real robot, ‘Gargantuan’, was constructed between 1935 and 1937. It was made completely out of Meccano. Today’s

  6. Motion and Emotional Behavior Design for Pet Robot Dog

    Science.gov (United States)

    Cheng, Chi-Tai; Yang, Yu-Ting; Miao, Shih-Heng; Wong, Ching-Chang

    A pet robot dog with two ears, one mouth, one facial expression plane, and one vision system is designed and implemented so that it can perform some emotional behaviors. Three processors (an Intel® Pentium® M 1.0 GHz, an 8-bit processor 8051, and an embedded soft-core processor NIOS) are used to control the robot. One camera, one power detector, four touch sensors, and one temperature detector are used to obtain information about the environment. The designed robot, with 20 DOF (degrees of freedom), is able to accomplish walking motion. A behavior system is built on the implemented pet robot so that it is able to choose a suitable behavior for different environmental situations. From the practical test, we can see that the implemented pet robot dog can engage in some emotional interaction with humans.

  7. Machine vision for a selective broccoli harvesting robot

    NARCIS (Netherlands)

    Blok, Pieter M.; Barth, Ruud; Berg, Van Den Wim

    2016-01-01

    The selective hand-harvest of fresh market broccoli is labor-intensive and comprises about 35% of the total production costs. This research was conducted to determine whether machine vision can be used to detect broccoli heads, as a first step in the development of a fully autonomous selective

  8. An Inquiry-Based Vision Science Activity for Graduate Students and Postdoctoral Research Scientists

    Science.gov (United States)

    Putnam, N. M.; Maness, H. L.; Rossi, E. A.; Hunter, J. J.

    2010-12-01

    The vision science activity was originally designed for the 2007 Center for Adaptive Optics (CfAO) Summer School. Participants were graduate students, postdoctoral researchers, and professionals studying the basics of adaptive optics. The majority were working in fields outside vision science, mainly astronomy and engineering. The primary goal of the activity was to give participants first-hand experience with the use of a wavefront sensor designed for clinical measurement of the aberrations of the human eye and to demonstrate how the resulting wavefront data generated from these measurements can be used to assess optical quality. A secondary goal was to examine the role wavefront measurements play in the investigation of vision-related scientific questions. In 2008, the activity was expanded to include a new section emphasizing defocus and astigmatism and vision testing/correction in a broad sense. As many of the participants were future post-secondary educators, a final goal of the activity was to highlight the inquiry-based approach as a distinct and effective alternative to traditional laboratory exercises. Participants worked in groups throughout the activity and formative assessment by a facilitator (instructor) was used to ensure that participants made progress toward the content goals. At the close of the activity, participants gave short presentations about their work to the whole group, the major points of which were referenced in a facilitator-led synthesis lecture. We discuss highlights and limitations of the vision science activity in its current format (2008 and 2009 summer schools) and make recommendations for its improvement and adaptation to different audiences.

  9. Computer vision for shoe upper profile measurement via upper and sole conformal matching

    Science.gov (United States)

    Hu, Zhongxu; Bicker, Robert; Taylor, Paul; Marshall, Chris

    2007-01-01

    This paper describes a structured light computer vision system applied to the measurement of the 3D profile of shoe uppers. The trajectory obtained is used to guide an industrial robot for automatic edge roughing around the contour of the shoe upper so that the bonding strength can be improved. Due to the specific contour and unevenness of the shoe upper, even if the 3D profile is obtained using computer vision, it is still difficult to reliably define the roughing path around the shape. However, the shape of the corresponding shoe sole is better defined, and it is much easier to measure the edge using computer vision. Therefore, a feasible strategy is to measure both the upper and sole profiles, and then align and fit the sole contour to the upper, in order to obtain the best fit. The trajectory of the edge of the desired roughing path is calculated and is then smoothed and interpolated using NURBS curves to guide an industrial robot for shoe upper surface removal; experiments show robust and consistent results. An outline description of the structured light vision system is given here, along with the calibration techniques used.
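
    The trajectory-smoothing step can be illustrated with SciPy's parametric spline fitting; note that splprep fits a non-rational B-spline, a simplified stand-in for the NURBS curves used in the paper.

        import numpy as np
        from scipy.interpolate import splprep, splev

        def smooth_path(points_xyz, n_samples=200, smoothing=1.0):
            """Fit a parametric spline to measured edge points and resample
            it evenly for the robot's roughing trajectory."""
            pts = np.asarray(points_xyz, dtype=float)
            tck, _ = splprep([pts[:, 0], pts[:, 1], pts[:, 2]], s=smoothing)
            u = np.linspace(0.0, 1.0, n_samples)
            return np.column_stack(splev(u, tck))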

  10. Investigation In Two Wheels Mobile Robot Movement: Stability and Motion Paths

    Directory of Open Access Journals (Sweden)

    Abdulrahman A.A. Emhemed

    2013-01-01

    Full Text Available This paper deals with the problem of dynamic modelling of a two-wheeled inspection robot. A fuzzy controller based on robotics techniques is used to optimize inspection stability. The goal is to enhance the robot's heading control and avoid obstacles. To find collision-free areas, distance sensors such as ultrasonic sensors, laser scanners, or vision systems are usually employed; distance sensors offer only distance information between mobile robots and obstacles. It is also shown that the target can be reached from different directions. The fuzzy logic controller is effective in avoiding obstacles and obtaining the ideal direction to "the target box".
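
    A toy version of such a fuzzy obstacle-avoidance rule base is sketched below; the membership shape, the 0.5 m range and the two rules are invented for illustration.

        def near(dist, d_max=0.5):
            """Membership degree of 'obstacle is near' (1 at contact, 0 beyond d_max)."""
            return max(0.0, 1.0 - dist / d_max)

        def fuzzy_steer(dist_left, dist_right):
            """IF near-left THEN steer right (+1); IF near-right THEN steer left (-1);
            defuzzify by the weighted average of the rule outputs."""
            near_l, near_r = near(dist_left), near(dist_right)
            den = near_l + near_r
            return 0.0 if den == 0.0 else (near_l - near_r) / den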

  11. Access to hands-on mathematics measurement activities using robots controlled via speech generating devices: three case studies.

    Science.gov (United States)

    Adams, Kim; Cook, Al

    2014-07-01

    To examine how using a robot controlled via a speech generating device (SGD) influences the ways students with physical and communication limitations can demonstrate their knowledge in math measurement activities. Three children with severe physical disabilities and complex communication needs used the robot and SGD system to perform four math measurement lessons in comparing, sorting and ordering objects. The performance of the participants was measured and the process of using the system was described in terms of manipulation and communication events. Stakeholder opinions were solicited regarding robot use. Robot use revealed some gaps in the procedural knowledge of the participants. Access to both the robot and SGD was shown to provide several benefits. Stakeholders thought the intervention was important and feasible for a classroom environment. The participants were able to participate actively in the hands-on and communicative measurement activities and thus meet the demands of current math instruction methods. Current mathematics pedagogy encourages doing hands-on activities while communicating about concepts. Adapted Lego robots enabled children with severe physical disabilities to perform hands-on length measurement activities. Controlling the robots from speech generating devices (SGD) enabled the children, who also had complex communication needs, to reflect and report on results during the activities. By using the robots combined with SGDs, children both exhibited their knowledge of and experienced the concepts of mathematical measurements.

  12. Computer Vision for Artificially Intelligent Robotic Systems

    Science.gov (United States)

    Ma, Chialo; Ma, Yung-Lung

    1987-04-01

    In this paper an Acoustic Imaging Recognition System (AIRS) will be introduced, which is installed on an intelligent robotic system and can recognize different types of hand tools by dynamic pattern recognition. The dynamic pattern recognition is approached by a look-up table method in this case; the method saves a great deal of calculation time and is practicable. The Acoustic Imaging Recognition System (AIRS) consists of four parts: a position control unit, a pulse-echo signal processing unit, a pattern recognition unit, and a main control unit. The position control of AIRS can rotate through an angle of ±5 degrees horizontally and vertically separately; the purpose of the rotation is to find the area of maximum reflection intensity. From the distance, angles, and intensity of the target we can decide the characteristics of this target; all decisions are processed by the main control unit. In the pulse-echo signal processing unit, we utilize the correlation method to overcome the limitation of the short ultrasonic burst, because the correlation system can transmit large time-bandwidth signals and obtain their resolution and increased intensity through pulse compression in the correlation receiver. The output of the correlator is sampled and transferred into digital data by the μ-law coding method, and this data, together with the delay time T and the angle information θH, θV, is sent to the main control unit for further analysis. For the recognition process in this paper, we use a dynamic look-up table method: at first we set up several recognition pattern tables, and then the new pattern scanned by the transducer array is divided into several stages and compared with the sampled tables. The comparison is implemented by dynamic programming and a Markovian process. All the hardware control signals, such as the optimum delay time for the correlator receiver and the horizontal and vertical rotation angles for the transducer plate, are controlled by the Main Control Unit, the Main
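
    Two of the signal-processing steps named in this record, pulse compression by correlation and μ-law companding of the correlator output, reduce to a few lines in software; this is a conceptual sketch, not the original hardware design.

        import numpy as np

        def pulse_compress(echo, tx_burst):
            """Matched filtering: correlate the received echo against the
            transmitted burst to sharpen range resolution."""
            return np.correlate(echo, tx_burst, mode="valid")

        def mu_law_encode(x, mu=255.0):
            """mu-law companding of the normalized correlator output."""
            x = np.clip(x / (np.abs(x).max() + 1e-12), -1.0, 1.0)
            return np.sign(x) * np.log1p(mu * np.abs(x)) / np.log1p(mu)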

  13. Self-supervised learning as an enabling technology for future space exploration robots: ISS experiments on monocular distance learning

    Science.gov (United States)

    van Hecke, Kevin; de Croon, Guido C. H. E.; Hennes, Daniel; Setterfield, Timothy P.; Saenz-Otero, Alvar; Izzo, Dario

    2017-11-01

    Although machine learning holds an enormous promise for autonomous space robots, it is currently not employed because of the inherent uncertain outcome of learning processes. In this article we investigate a learning mechanism, Self-Supervised Learning (SSL), which is very reliable and hence an important candidate for real-world deployment even on safety-critical systems such as space robots. To demonstrate this reliability, we introduce a novel SSL setup that allows a stereo vision equipped robot to cope with the failure of one of its cameras. The setup learns to estimate average depth using a monocular image, by using the stereo vision depths from the past as trusted ground truth. We present preliminary results from an experiment on the International Space Station (ISS) performed with the MIT/NASA SPHERES VERTIGO satellite. The presented experiments were performed on October 8th, 2015 on board the ISS. The main goals were (1) data gathering, and (2) navigation based on stereo vision. First the astronaut Kimiya Yui moved the satellite around the Japanese Experiment Module to gather stereo vision data for learning. Subsequently, the satellite freely explored the space in the module based on its (trusted) stereo vision system and a pre-programmed exploration behavior, while simultaneously performing the self-supervised learning of monocular depth estimation on board. The two main goals were successfully achieved, representing the first online learning robotic experiments in space. These results lay the groundwork for a follow-up experiment in which the satellite will use the learned single-camera depth estimation for autonomous exploration in the ISS, and are an advancement towards future space robots that continuously improve their navigation capabilities over time, even in harsh and completely unknown space environments.
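
    Conceptually, the SSL setup amounts to fitting a monocular regressor whose training labels come from the trusted stereo pipeline, so a usable depth estimate survives the loss of one camera. The sketch below uses a ridge regressor over generic image features purely as a stand-in for the on-board learner, which differs in the flight code.

        import numpy as np
        from sklearn.linear_model import Ridge

        def train_monocular_depth(mono_features, stereo_depths):
            """mono_features: (N, D) features from single images;
            stereo_depths: (N,) average depths from the stereo system,
            used as self-supervised ground truth."""
            model = Ridge(alpha=1.0)
            model.fit(mono_features, stereo_depths)
            return model  # model.predict(...) replaces stereo depth on failure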

  14. Evaluating the effect of three-dimensional visualization on force application and performance time during robotics-assisted mitral valve repair.

    Science.gov (United States)

    Currie, Maria E; Trejos, Ana Luisa; Rayman, Reiza; Chu, Michael W A; Patel, Rajni; Peters, Terry; Kiaii, Bob B

    2013-01-01

    The purpose of this study was to determine the effect of three-dimensional (3D) binocular, stereoscopic, and two-dimensional (2D) monocular visualization on robotics-assisted mitral valve annuloplasty versus conventional techniques in an ex vivo animal model. In addition, we sought to determine whether these effects were consistent between novices and experts in robotics-assisted cardiac surgery. A cardiac surgery test-bed was constructed to measure forces applied during mitral valve annuloplasty. Sutures were passed through the porcine mitral valve annulus by the participants with different levels of experience in robotics-assisted surgery and tied in place using both robotics-assisted and conventional surgery techniques. The mean time for both the experts and the novices using 3D visualization was significantly less than that required using 2D vision (P ...). There was no significant difference in the force applied with the robotic system with either 2D or 3D vision (P ...), and more force was applied during robotics-assisted mitral valve annuloplasty than during conventional open mitral valve annuloplasty. This finding suggests that 3D visualization does not fully compensate for the absence of haptic feedback in robotics-assisted cardiac surgery.

  15. [RESEARCH PROGRESS OF PERIPHERAL NERVE SURGERY ASSISTED BY Da Vinci ROBOTIC SYSTEM].

    Science.gov (United States)

    Shen, Jie; Song, Diyu; Wang, Xiaoyu; Wang, Changjiang; Zhang, Shuming

    2016-02-01

    To summarize the research progress of peripheral nerve surgery assisted by the Da Vinci robotic system. The recent domestic and international articles about peripheral nerve surgery assisted by the Da Vinci robotic system were reviewed and summarized. Compared with conventional microsurgery, peripheral nerve surgery assisted by the Da Vinci robotic system has distinctive advantages, such as elimination of physiological tremors and three-dimensional high-resolution vision. It is possible to perform robot-assisted limb nerve surgery using either the traditional brachial plexus approach or the mini-invasive approach. The development of the Da Vinci robotic system has revealed new perspectives in peripheral nerve surgery. However, it is still at the initial stage, and more basic and clinical research is needed.

  16. Automated rose cutting in greenhouses with 3D vision and robotics : analysis of 3D vision techniques for stem detection

    NARCIS (Netherlands)

    Noordam, J.C.; Hemming, J.; Heerde, van C.J.E.; Golbach, F.B.T.F.; Soest, van R.; Wekking, E.

    2005-01-01

    The reduction of labour cost is the major motivation to develop a system for robot harvesting of roses in greenhouses that at least can compete with manual harvesting. Due to overlapping leaves, one of the most complicated tasks in robotic rose cutting is to locate the stem and trace the stem down

  17. Control of articulated snake robot under dynamic active constraints.

    Science.gov (United States)

    Kwok, Ka-Wai; Vitiello, Valentina; Yang, Guang-Zhong

    2010-01-01

    Flexible, ergonomically enhanced surgical robots have important applications to transluminal endoscopic surgery, for which path-following and dynamic shape conformance are essential. In this paper, kinematic control of a snake robot for motion stabilisation under dynamic active constraints is addressed. The main objective is to enable the robot to track the visual target accurately and steadily on deforming tissue whilst conforming to pre-defined anatomical constraints. The motion tracking can also be augmented with manual control. By taking into account the physical limits in terms of maximum frequency response of the system (manifested as a delay between the input of the manipulator and the movement of the end-effector), we show the importance of visual-motor synchronisation for performing accurate smooth pursuit movements. Detailed user experiments are performed to demonstrate the practical value of the proposed control mechanism.

  18. The development of advanced robotics for the nuclear industry -The development of advanced robotic technology-

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Jong Min; Lee, Yong Bum; Park, Soon Yong; Cho, Jae Wan; Lee, Nam Hoh; Kim, Woong Kee; Moon, Byung Soo; Kim, Seung Hoh; Kim, Chang Heui; Kim, Byung Soo; Hwang, Suk Yong; Lee, Yung Kwang; Moon, Je Sun [Korea Atomic Energy Research Institute, Taejon (Korea, Republic of)

    1995-07-01

    The main activity this year is to develop both remote handling systems and telepresence techniques, which can be used to alleviate the burden of people working in extremely hazardous areas. In the robot vision technology part, the KAERI-PSM system, a stereo imaging camera module, a stereo BOOM/MOLLY unit, and a stereo HMD unit are developed. Also, an autostereo TV system, which falls under the category of next-generation stereo imaging technology, has been studied. The performance of the KAERI-PSM system for remote handling tasks is evaluated and compared with other stereo imaging systems as well as a general TV imaging system. The result shows that the KAERI-PSM system is superior to the other stereo imaging systems in remote operation speed-up and accuracy. An automatic recognition algorithm for instrument panels is studied and a passive visual target tracking system is developed. The 5-DOF camera serving unit has been designed and fabricated; it is designed to function like the human eye. In the sensing and intelligent control research part, a thermal image database system for thermal image analysis is developed and a remote temperature monitoring technique using fiber optics is investigated. Also, a two-dimensional radioactivity sensor head for a radiation profile monitoring system is designed. In the intelligent robotics part, a mobile robot is fabricated and its autonomous navigation using fuzzy control logic is studied. The remote handling and telepresence techniques developed in this project can be applied to nozzle-dam installation/removal robot systems, reactor inspection units, underwater nuclear pellet inspection, and pipe abnormality inspection. These techniques will also be applied in general industry, medical science, and the military, as well as in nuclear facilities. 203 figs, 12 tabs, 72 refs. (Author).

  19. A design of toxic gas detecting security robot car based on wireless path-patrol

    Directory of Open Access Journals (Sweden)

    Cheng Ho-Chih

    2017-01-01

    Full Text Available Because a toxic gas detecting/monitoring system in a chemical plant is not movable, such a system is passive and its detection range is constrained. This invention is an active multi-functional wireless patrol car that can substitute for humans in inspecting a plant's security. In addition, to widen the monitoring vision within the environment, two motors are used to rotate a wireless IPCAM about two axes. Also, to control the robot car's movement, two axis motors are installed to drive the wheels of the robot car. Additionally, a toxic gas detector is linked to the microcontroller of the patrol car, and the detected gas concentration is fed back to the server PC. To extend the robot car's patrol duration, a movable electrical power unit in conjunction with a wireless module is also used. Consequently, this paper introduces a wireless path-patrol and toxic gas detecting security robot car that can assure a plant's security and protect workers when toxic gases are emitted.

  20. Robotic Label Applicator: Design, Development and Visual Servoing Based Control

    Directory of Open Access Journals (Sweden)

    Lin Chyi-Yeu

    2016-01-01

    Full Text Available The use of robotic arms and computer vision in manufacturing and assembly processes is attracting growing interest as flexible customization becomes a priority over mass production in frontier industry practice. In this paper an innovative label applicator, serving as end-of-arm tooling (EOAT) capable of dispensing and applying label stickers of various dimensions to a product, is designed, fabricated and tested. The system incorporates a label dispenser-applicator and an eye-in-hand camera system attached to a 6-DOF robot arm, and can autonomously apply a label sticker to the target position on a randomly placed product. By combining advantages from different knowledge bases, mechanism design and vision-based automatic control, the system offers distinctive efficiency as well as flexibility to changes in the manufacturing and assembly process, with time and cost savings.
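
    An image-based visual-servoing loop of the kind such a system runs can be reduced to a proportional law on the pixel error, as in the sketch below; the gain, the tolerance and the camera interface are illustrative assumptions.

        import numpy as np

        def ibvs_step(target_px, desired_px, gain=0.002, tol_px=2.0):
            """One proportional visual-servoing step: returns a camera-frame
            velocity command and whether the label pose has been reached."""
            err = np.asarray(target_px, float) - np.asarray(desired_px, float)
            v_xy = -gain * err
            return v_xy, bool(np.linalg.norm(err) < tol_px)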

  1. Color vision deficiencies and the child's willingness for visual activity: preliminary research

    Science.gov (United States)

    Geniusz, Malwina; Szmigiel, Marta; Geniusz, Maciej

    2017-09-01

    After a few weeks a newborn baby can recognize high contrasts in colors, like black and white. Full color vision is reached at the age of circa six months. Matching colors is the next milestone; most children can do it at the age of two. Good color vision is one of the factors which indicate proper development of a child. The presented research shows the correlation between color vision and visual activity. The color vision of a group of children aged 3-8 was examined with the saturated Farnsworth D-15 test. The Farnsworth test was performed twice: in a standard version and in a magnetic version. The time to complete the standard and magnetic tests was measured. Furthermore, parents of the subjects answered questions assessing the children's visual activity on a 1-10 scale. Parents stated whether the child willingly watched books, colored coloring books, did puzzles, or liked to play with blocks, etc. The Farnsworth D-15 test, designed for color vision testing, can be used to test younger children from the age of 3 years. These are preliminary studies which may provide a useful tool for further, more accurate examination of a larger group of subjects.

  2. Conference on Intelligent Robotics in Field, Factory, Service, and Space (CIRFFSS 1994), volume 1

    Science.gov (United States)

    Erickson, Jon D. (Editor)

    1994-01-01

    The AIAA/NASA Conference on Intelligent Robotics in Field, Factory, Service, and Space (CIRFFSS '94) was originally proposed because of the strong belief that America's problems of global economic competitiveness and job creation and preservation can partly be solved by the use of intelligent robotics, which are also required for human space exploration missions. Individual sessions addressed nuclear industry, agile manufacturing, security/building monitoring, on-orbit applications, vision and sensing technologies, situated control and low-level control, robotic systems architecture, environmental restoration and waste management, robotic remanufacturing, and healthcare applications.

  3. The cortical activation pattern by a rehabilitation robotic hand: a functional NIRS study.

    Science.gov (United States)

    Chang, Pyung-Hun; Lee, Seung-Hee; Gu, Gwang Min; Lee, Seung-Hyun; Jin, Sang-Hyun; Yeo, Sang Seok; Seo, Jeong Pyo; Jang, Sung Ho

    2014-01-01

    Clarification of the relationship between external stimuli and brain response has been an important topic in neuroscience and brain rehabilitation. In the current study, using functional near infrared spectroscopy (fNIRS), we attempted to investigate cortical activation patterns generated during execution of a rehabilitation robotic hand. Ten normal subjects were recruited for this study. Passive movements of the right fingers were performed using a rehabilitation robotic hand at a frequency of 0.5 Hz. We measured values of oxy-hemoglobin (HbO), deoxy-hemoglobin (HbR) and total-hemoglobin (HbT) in five regions of interest: the primary sensory-motor cortex (SM1), hand somatotopy of the contralateral SM1, supplementary motor area (SMA), premotor cortex (PMC), and prefrontal cortex (PFC). HbO and HbT values indicated significant activation in the left SM1, left SMA, left PMC, and left PFC during execution of the rehabilitation robotic hand (uncorrected, p < 0.01). By contrast, HbR value indicated significant activation only in the hand somatotopic area of the left SM1 (uncorrected, p < 0.01). Our results appear to indicate that execution of the rehabilitation robotic hand could induce cortical activation.

  4. Sensory Integration with Articulated Motion on a Humanoid Robot

    Directory of Open Access Journals (Sweden)

    J. Rojas

    2005-01-01

    Full Text Available This paper describes the integration of articulated motion with auditory and visual sensory information that enables a humanoid robot to achieve certain reflex actions that mimic those of people. Reflexes such as reach-and-grasp behavior enables the robot to learn, through experience, its own state and that of the world. A humanoid robot with binaural audio input, stereo vision, and pneumatic arms and hands exhibited tightly coupled sensory-motor behaviors in four different demonstrations. The complexity of successive demonstrations was increased to show that the reflexive sensory-motor behaviors combine to perform increasingly complex tasks. The humanoid robot executed these tasks effectively and established the groundwork for the further development of hardware and software systems, sensory-motor vector-space representations, and coupling with higher-level cognition.

  5. Thermal Tracking in Mobile Robots for Leak Inspection Activities

    Directory of Open Access Journals (Sweden)

    Iñaki Maurtua

    2013-10-01

    Full Text Available Maintenance tasks are crucial for all kinds of industries, especially in extensive industrial plants, like solar thermal power plants. The incorporation of robots is a key issue for automating inspection activities, as it will allow a constant and regular control over the whole plant. This paper presents an autonomous robotic system to perform pipeline inspection for early detection and prevention of leakages in thermal power plants, based on the work developed within the MAINBOT (http://www.mainbot.eu) European project. Based on the information provided by a thermographic camera, the system is able to detect leakages in the collectors and pipelines. Besides the leakage detection algorithms, the system includes a particle filter-based tracking algorithm to keep the target in the field of view of the camera and to avoid the irregularities of the terrain while the robot patrols the plant. The information provided by the particle filter is further used to command a robot arm, which handles the camera and ensures that the target is always within the image. The obtained results show the suitability of the proposed approach, adding a tracking algorithm to improve the performance of the leakage detection system.
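
    A minimal particle-filter tracker in the spirit described, with particles as candidate image positions weighted by local thermal intensity, could look as follows; the motion model and the weighting are simplified assumptions, not the MAINBOT implementation.

        import numpy as np

        def pf_step(particles, thermal_img, motion_std=5.0, rng=np.random):
            """One predict-weight-resample cycle over (N, 2) pixel particles."""
            h, w = thermal_img.shape
            particles = particles + rng.normal(0.0, motion_std, particles.shape)
            particles[:, 0] = particles[:, 0].clip(0, w - 1)   # x
            particles[:, 1] = particles[:, 1].clip(0, h - 1)   # y
            weights = thermal_img[particles[:, 1].astype(int),
                                  particles[:, 0].astype(int)].astype(float) + 1e-9
            weights /= weights.sum()
            estimate = (particles * weights[:, None]).sum(axis=0)
            idx = rng.choice(len(particles), len(particles), p=weights)
            return particles[idx], estimate  # multinomial resampling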

  6. Thermal tracking in mobile robots for leak inspection activities.

    Science.gov (United States)

    Ibarguren, Aitor; Molina, Jorge; Susperregi, Loreto; Maurtua, Iñaki

    2013-10-09

    Maintenance tasks are crucial for all kinds of industries, especially in extensive industrial plants, like solar thermal power plants. The incorporation of robots is a key issue for automating inspection activities, as it will allow a constant and regular control over the whole plant. This paper presents an autonomous robotic system to perform pipeline inspection for early detection and prevention of leakages in thermal power plants, based on the work developed within the MAINBOT (http://www.mainbot.eu) European project. Based on the information provided by a thermographic camera, the system is able to detect leakages in the collectors and pipelines. Besides the leakage detection algorithms, the system includes a particle filter-based tracking algorithm to keep the target in the field of view of the camera and to avoid the irregularities of the terrain while the robot patrols the plant. The information provided by the particle filter is further used to command a robot arm, which handles the camera and ensures that the target is always within the image. The obtained results show the suitability of the proposed approach, adding a tracking algorithm to improve the performance of the leakage detection system.
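
    The two records above describe a particle filter that keeps a hot spot within the camera's field of view. As a rough illustration of the idea only (the MAINBOT measurement model, arm command interface, and tuning are not given in the records), a minimal intensity-weighted particle filter over thermal frames might look like this:

```python
# Illustrative sketch, not the MAINBOT code: a minimal particle filter that
# keeps a hot spot centred while the camera platform moves over rough terrain.
import numpy as np

rng = np.random.default_rng(0)

def track(frames, n_particles=500, motion_std=5.0):
    """frames: list of 2-D numpy thermal images (higher value = hotter)."""
    h, w = frames[0].shape
    # Initialise particles uniformly over the image.
    particles = rng.uniform([0, 0], [h - 1, w - 1], size=(n_particles, 2))
    estimates = []
    for frame in frames:
        # Predict: a random-walk motion model absorbs terrain-induced jitter.
        particles += rng.normal(0.0, motion_std, particles.shape)
        particles[:, 0] = np.clip(particles[:, 0], 0, h - 1)
        particles[:, 1] = np.clip(particles[:, 1], 0, w - 1)
        # Update: weight each particle by the thermal intensity under it.
        rows, cols = particles[:, 0].astype(int), particles[:, 1].astype(int)
        weights = frame[rows, cols].astype(float) + 1e-9
        weights /= weights.sum()
        # Estimate target position (weighted mean), then resample.
        estimates.append(weights @ particles)
        idx = rng.choice(n_particles, n_particles, p=weights)
        particles = particles[idx]
    return estimates
```

    The estimated positions would then drive the arm controller that re-centres the camera; that closed loop is outside the scope of this sketch.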

  7. iPathology: Robotic Applications and Management of Plants and Plant Diseases

    OpenAIRE

    Yiannis Ampatzidis; Luigi De Bellis; Andrea Luvisi

    2017-01-01

    The rapid development of new technologies and the changing landscape of the online world (e.g., Internet of Things (IoT), Internet of All, cloud-based solutions) provide a unique opportunity for developing automated and robotic systems for urban farming, agriculture, and forestry. Technological advances in machine vision, global positioning systems, laser technologies, actuators, and mechatronics have enabled the development and implementation of robotic systems and intelligent technologies f...

  8. The cortical activation pattern by a rehabilitation robotic hand: A functional NIRS study

    Directory of Open Access Journals (Sweden)

    Pyung Hun eChang

    2014-02-01

    Full Text Available Introduction: Clarification of the relationship between external stimuli and brain response has been an important topic in neuroscience and brain rehabilitation. In the current study, using functional near infrared spectroscopy (fNIRS), we attempted to investigate cortical activation patterns generated during execution of a rehabilitation robotic hand. Methods: Ten normal subjects were recruited for this study. Passive movements of the right fingers were performed using a rehabilitation robotic hand at a frequency of 0.5 Hz. We measured values of oxy-hemoglobin (HbO), deoxy-hemoglobin (HbR) and total-hemoglobin (HbT) in five regions of interest: the primary sensory-motor cortex (SM1), hand somatotopy of the contralateral SM1, supplementary motor area (SMA), premotor cortex (PMC), and prefrontal cortex (PFC). Results: HbO and HbT values indicated significant activation in the left SM1, left SMA, left PMC, and left PFC during execution of the rehabilitation robotic hand (uncorrected, p < 0.01). By contrast, HbR value indicated significant activation only in the hand somatotopic area of the left SM1 (uncorrected, p < 0.01). Conclusions: Our results appear to indicate that execution of the rehabilitation robotic hand could induce cortical activation.

  9. Vision Algorithms Catch Defects in Screen Displays

    Science.gov (United States)

    2014-01-01

    Andrew Watson, a senior scientist at Ames Research Center, developed a tool called the Spatial Standard Observer (SSO), which models human vision for use in robotic applications. Redmond, Washington-based Radiant Zemax LLC licensed the technology from NASA and combined it with its imaging colorimeter system, creating a powerful tool that high-volume manufacturers of flat-panel displays use to catch defects in screens.

  10. Self discovery enables robot social cognition: are you my teacher?

    Science.gov (United States)

    Kaipa, Krishnanand N; Bongard, Josh C; Meltzoff, Andrew N

    2010-01-01

    Infants exploit the perception that others are 'like me' to bootstrap social cognition (Meltzoff, 2007a). This paper demonstrates how the above theory can be instantiated in a social robot that uses itself as a model to recognize structural similarities with other robots; this thereby enables the student to distinguish between appropriate and inappropriate teachers. This is accomplished by the student robot first performing self-discovery, a phase in which it uses actuation-perception relationships to infer its own structure. Second, the student models a candidate teacher using a vision-based active learning approach to create an approximate physical simulation of the teacher. Third, the student determines that the teacher is structurally similar (but not necessarily visually similar) to itself if it can find a neural controller that allows its self model (created in the first phase) to reproduce the perceived motion of the teacher model (created in the second phase). Fourth, the student uses the neural controller (created in the third phase) to move, resulting in imitation of the teacher. Results with a physical student robot and two physical robot teachers demonstrate the effectiveness of this approach. The generalizability of the proposed model allows it to be used over variations in the demonstrator: the student robot would still be able to imitate teachers of different sizes, at different distances from itself, and at different positions in its field of view, because changes in the interrelations of the teacher's body parts are used for imitation, rather than absolute geometric properties. Copyright © 2010 Elsevier Ltd. All rights reserved.

  11. Grounding Robot Autonomy in Emotion and Self-awareness

    Science.gov (United States)

    Sanz, Ricardo; Hernández, Carlos; Hernando, Adolfo; Gómez, Jaime; Bermejo, Julita

    Much is being done in an attempt to transfer emotional mechanisms from reverse-engineered biology into social robots. There are two basic approaches: the imitative display of emotion (e.g., to make robots appear more human-like) and the provision of architectures with intrinsic emotion (in the hope of enhancing behavioral aspects). This paper focuses on the second approach, describing a core vision regarding the integration of cognitive, emotional and autonomic aspects in social robot systems. This vision has evolved as a result of the efforts in consolidating the models extracted from rat emotion research and their implementation in technical use cases, based on a general systemic analysis in the framework of the ICEA and C3 projects. The generality of the approach is intended to yield universal theories of integrated (autonomic, emotional, cognitive) behavior. The proposed conceptualizations and architectural principles are then captured in a theoretical framework: ASys, the Autonomous Systems Framework.

  12. University of Florida, University research program in robotics. Annual technical progress report

    International Nuclear Information System (INIS)

    Crane, C.D. III; Tulenko, J.S.

    1994-05-01

    Progress is reported in the areas of environmental hardening, database, world modeling, vision, man-machine interface, advanced liquid metal reactor inspection robot, and articulated transporter/manipulator system (ATMS) development.

  13. University of Florida, University research program in robotics. Annual technical progress report

    Energy Technology Data Exchange (ETDEWEB)

    Crane, C.D. III; Tulenko, J.S.

    1994-05-01

    Progress is reported in the areas of environmental hardening, database, world modeling, vision, man-machine interface, advanced liquid metal reactor inspection robot, and articulated transporter/manipulator system (ATMS) development.

  14. Teacher Activism: Enacting a Vision for Social Justice

    Science.gov (United States)

    Picower, Bree

    2012-01-01

    This qualitative study focused on educators who participated in grassroots social justice groups to explore the role teacher activism can play in the struggle for educational justice. Findings show teacher activists made three overarching commitments: to reconcile their vision for justice with the realities of injustice around them; to work within…

  15. Machine vision theory, algorithms, practicalities

    CERN Document Server

    Davies, E R

    2005-01-01

    In the last 40 years, machine vision has evolved into a mature field embracing a wide range of applications including surveillance, automated inspection, robot assembly, vehicle guidance, traffic monitoring and control, signature verification, biometric measurement, and analysis of remotely sensed images. While researchers and industry specialists continue to document their work in this area, it has become increasingly difficult for professionals and graduate students to understand the essential theory and practicalities well enough to design their own algorithms and systems. This book directl

  16. Robotic radiation survey and analysis system for radiation waste casks

    International Nuclear Information System (INIS)

    Thunborg, S.

    1987-01-01

    Sandia National Laboratories (SNL) and the Hanford Engineering Development Laboratories have been involved in the development of remote systems technology concepts for handling defense high-level waste (DHLW) shipping casks at the waste repository. This effort demonstrated the feasibility of using this technology for handling DHLW casks. These investigations have also shown that cask design can have a major effect on the feasibility of remote cask handling. Consequently, SNL has initiated a program to determine cask features necessary for robotic remote handling at the waste repository. The initial cask handling task selected for detailed investigation was the robotic radiation survey and analysis (RRSAS) task. In addition to determining the design features required for robotic cask handling, the RRSAS project contributes to the definition of techniques for random selection of swipe locations, the definition of robotic swipe parameters, force control techniques for robotic swipes, machine vision techniques for the location of objects in 3-D, repository robotic systems requirements, and repository data management system needs.

  17. Vision and Task Assistance using Modular Wireless In Vivo Surgical Robots

    Science.gov (United States)

    Platt, Stephen R.; Hawks, Jeff A.; Rentschler, Mark E.

    2009-01-01

    Minimally invasive abdominal surgery (laparoscopy) results in superior patient outcomes compared to conventional open surgery. However, the difficulty of manipulating traditional laparoscopic tools from outside the body of the patient generally limits these benefits to patients undergoing relatively low complexity procedures. The use of tools that fit entirely inside the peritoneal cavity represents a novel approach to laparoscopic surgery. Our previous work demonstrated that miniature mobile and fixed-based in vivo robots using tethers for power and data transmission can successfully operate within the abdominal cavity. This paper describes the development of a modular wireless mobile platform for in vivo sensing and manipulation applications. Design details and results of ex vivo and in vivo tests of robots with biopsy grasper, staple/clamp, video, and physiological sensor payloads are presented. These types of self-contained surgical devices are significantly more transportable and lower in cost than current robotic surgical assistants. They could ultimately be carried and deployed by non-medical personnel at the site of an injury to allow a remotely located surgeon to provide critical first response medical intervention irrespective of the location of the patient. PMID:19237337

  18. Vision and task assistance using modular wireless in vivo surgical robots.

    Science.gov (United States)

    Platt, Stephen R; Hawks, Jeff A; Rentschler, Mark E

    2009-06-01

    Minimally invasive abdominal surgery (laparoscopy) results in superior patient outcomes compared to conventional open surgery. However, the difficulty of manipulating traditional laparoscopic tools from outside the body of the patient generally limits these benefits to patients undergoing relatively low complexity procedures. The use of tools that fit entirely inside the peritoneal cavity represents a novel approach to laparoscopic surgery. Our previous work demonstrated that miniature mobile and fixed-based in vivo robots using tethers for power and data transmission can successfully operate within the abdominal cavity. This paper describes the development of a modular wireless mobile platform for in vivo sensing and manipulation applications. Design details and results of ex vivo and in vivo tests of robots with biopsy grasper, staple/clamp, video, and physiological sensor payloads are presented. These types of self-contained surgical devices are significantly more transportable and lower in cost than current robotic surgical assistants. They could ultimately be carried and deployed by nonmedical personnel at the site of an injury to allow a remotely located surgeon to provide critical first response medical intervention irrespective of the location of the patient.

  19. Inverse Modeling of Human Knee Joint Based on Geometry and Vision Systems for Exoskeleton Applications

    Directory of Open Access Journals (Sweden)

    Eduardo Piña-Martínez

    2015-01-01

    Full Text Available Current trends in Robotics aim to close the gap that separates technology and humans, bringing novel robotic devices in order to improve human performance. Although robotic exoskeletons represent a breakthrough in mobility enhancement, there are design challenges related to the forces exerted on the users' joints that result in severe injuries. This occurs due to the fact that most current developments consider the joints as invariant rotational axes. This paper proposes the use of commercial vision systems in order to perform biomimetic joint design for robotic exoskeletons. This work proposes a kinematic model based on irregular shaped cams as the joint mechanism that emulates the bone-to-bone joints in the human body. The paper follows a geometric approach for determining the location of the instantaneous center of rotation in order to design the cam contours. Furthermore, the use of a commercial vision system is proposed as the main measurement tool due to its noninvasive feature and for allowing subjects under measurement to move freely. The application of this method resulted in relevant information about the displacements of the instantaneous center of rotation at the human knee joint.
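
    The record does not detail the paper's exact geometric procedure or the cam synthesis, but the classical Reuleaux construction gives a flavour of how an instantaneous center of rotation (ICR) can be located from two tracked markers: the ICR lies at the intersection of the perpendicular bisectors of each marker's displacement. A sketch under that assumption:

```python
# Hedged sketch of the classical Reuleaux construction for the ICR of a rigid
# body (e.g., the shank relative to the thigh) from two vision-tracked markers.
import numpy as np

def icr_from_markers(a1, a2, b1, b2):
    """a1, a2: positions of marker A at t and t+dt; b1, b2: same for marker B.
    Returns the intersection of the perpendicular bisectors of A1A2 and B1B2."""
    a1, a2, b1, b2 = map(np.asarray, (a1, a2, b1, b2))
    ma, mb = (a1 + a2) / 2, (b1 + b2) / 2      # bisector base points
    da, db = a2 - a1, b2 - b1                  # marker displacement vectors
    na = np.array([-da[1], da[0]])             # perpendicular directions
    nb = np.array([-db[1], db[0]])
    # Solve ma + s*na = mb + t*nb for (s, t).
    s, _ = np.linalg.solve(np.column_stack([na, -nb]), mb - ma)
    return ma + s * na

# Example: a pure rotation about the origin by 10 degrees recovers ICR ~ (0, 0).
th = np.deg2rad(10.0)
R = np.array([[np.cos(th), -np.sin(th)], [np.sin(th), np.cos(th)]])
print(icr_from_markers([1, 0], R @ [1, 0], [0, 2], R @ [0, 2]))
```

    Repeating this over a flexion cycle traces the ICR path that such a study would feed into the cam-contour design.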

  20. Intelligent Surveillance Robot with Obstacle Avoidance Capabilities Using Neural Network

    Directory of Open Access Journals (Sweden)

    Widodo Budiharto

    2015-01-01

    Full Text Available For specific purposes, a vision-based surveillance robot that can run autonomously and acquire images from its dynamic environment is very important, for example, in rescuing disaster victims in Indonesia. In this paper, we propose an architecture for an intelligent surveillance robot that is able to avoid obstacles using 3 ultrasonic distance sensors based on a backpropagation neural network, and a camera for face recognition. A 2.4 GHz transmitter for transmitting video is used by the operator/user to direct the robot to the desired area. Results show the effectiveness of our method and we evaluate the performance of the system.
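
    The record names a backpropagation network over three ultrasonic distances but gives no topology or training data, so the following is only an illustrative sketch with an assumed 3-8-1 network and invented toy examples:

```python
# Illustrative sketch only: a small backpropagation network mapping the three
# ultrasonic distances [left, front, right] to a steering command in [-1, 1]
# (negative = turn left, positive = turn right). Topology and data are assumed.
import numpy as np

rng = np.random.default_rng(1)
W1, b1 = rng.normal(0, 0.5, (3, 8)), np.zeros(8)
W2, b2 = rng.normal(0, 0.5, (8, 1)), np.zeros(1)

# Toy training set: distances in metres -> desired steering.
X = np.array([[2.0, 2.0, 2.0], [0.3, 2.0, 2.0], [2.0, 2.0, 0.3], [2.0, 0.3, 2.0]])
Y = np.array([[0.0], [0.8], [-0.8], [0.9]])

for _ in range(5000):                      # plain gradient-descent backprop
    H = np.tanh(X @ W1 + b1)               # hidden layer
    out = np.tanh(H @ W2 + b2)             # steering output
    err = out - Y
    g_out = err * (1 - out**2)             # output delta (tanh derivative)
    g_h = (g_out @ W2.T) * (1 - H**2)      # hidden-layer delta
    W2 -= 0.05 * H.T @ g_out; b2 -= 0.05 * g_out.sum(0)
    W1 -= 0.05 * X.T @ g_h;   b1 -= 0.05 * g_h.sum(0)

# Obstacle close on the left -> network should steer right (positive output).
print(np.tanh(np.tanh(np.array([[0.4, 1.8, 2.0]]) @ W1 + b1) @ W2 + b2))
```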

  1. Robots in pipe and vessel inspection: past, present, and future

    International Nuclear Information System (INIS)

    Mueller, T.A.; Tyndall, J.F.

    1984-01-01

    Over the past several decades, remotely operated scanners have been employed to inspect piping and pressure vessels. These devices in their early forms were manually controlled manipulators functioning as mere extensions of the operator. With the addition of limit sensing, speed control, and positional feedback and display, the early manipulators became primitive robots. The subsequent addition of computer controls, with their degree of intelligence, raised these devices to the status of robots. Future applications of vision, adaptive control, proximity sensing, and pattern recognition will bring these devices to a level of intelligence that will make automated robotic inspection of pipes and pressure vessels a true reality.

  2. Development of embedded real-time and high-speed vision platform

    Science.gov (United States)

    Ouyang, Zhenxing; Dong, Yimin; Yang, Hua

    2015-12-01

    Currently, high-speed vision platforms are widely used in many applications, such as robotics and the automation industry. However, a personal computer (PC), whose large size makes it unsuitable for compact systems, is an indispensable component for human-computer interaction in traditional high-speed vision platforms. Therefore, this paper develops an embedded real-time and high-speed vision platform, ER-HVP Vision, which is able to work entirely without a PC. In this new platform, an embedded CPU-based board is designed as a substitute for the PC, and a DSP and FPGA board is developed for implementing image parallel algorithms in FPGA and image sequential algorithms in DSP. Hence, ER-HVP Vision, measuring 320 mm x 250 mm x 87 mm, delivers this capability in a much more compact package. Experimental results are also given to indicate that real-time detection and counting of a moving target at a frame rate of 200 fps at 512 x 512 pixels are feasible on this newly developed vision platform.
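
    The record does not disclose which detection algorithm the platform runs, so the following is only a generic sketch of the measured task, detecting and counting a moving target in 512 x 512 frames, using frame differencing and blob labelling:

```python
# Generic moving-target detection sketch (not the ER-HVP Vision firmware):
# difference two grey frames, threshold, and count connected blobs.
import numpy as np
from scipy import ndimage

def detect_moving(prev, curr, thresh=25):
    """Returns (count, centroids) of moving blobs between two grey frames."""
    diff = np.abs(curr.astype(int) - prev.astype(int)) > thresh
    labels, count = ndimage.label(diff)
    centroids = ndimage.center_of_mass(diff, labels, range(1, count + 1))
    return count, centroids

rng = np.random.default_rng(5)
f0 = rng.integers(0, 20, (512, 512)).astype(np.uint8)   # static background
f1 = f0.copy(); f1[100:120, 200:220] = 255              # a target appears
print(detect_moving(f0, f1))   # -> (1, [(~109.5, ~209.5)])
```

    On the platform described above, the per-pixel differencing would naturally map onto the FPGA, with the sequential labelling and counting on the DSP.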

  3. Direct methods for vision-based robot control : application and implementation

    NARCIS (Netherlands)

    Pieters, R.S.

    2013-01-01

    With the growing interest of integrating robotics into everyday life and industry, the requirements towards the quality and quantity of applications grows equally hard. This trend is profoundly recognized in applications involving visual perception. Whereas visual sensing in home environments tend

  4. Put Your Robot In, Put Your Robot Out: Sequencing through Programming Robots in Early Childhood

    Science.gov (United States)

    Kazakoff, Elizabeth R.; Bers, Marina Umaschi

    2014-01-01

    This article examines the impact of programming robots on sequencing ability in early childhood. Thirty-four children (ages 4.5-6.5 years) participated in computer programming activities with a developmentally appropriate tool, CHERP, specifically designed to program a robot's behaviors. The children learned to build and program robots over three…

  5. Research and development of advanced robots for nuclear power plants

    International Nuclear Information System (INIS)

    Tsukune, Hideo; Hirukawa, Hirohisa; Kitagaki, Kosei; Liu, Yunhui; Onda, Hiromu; Nakamura, Akira

    1994-01-01

    Social and economic demands have been pressing for automation of the inspection, maintenance and repair jobs of nuclear power plants, which are carried out by human workers under circumstances with high radiation levels. Since the plants are not always designed for the introduction of automatic machinery, sophisticated robots will play a crucial role in freeing workers from hostile environments. We have been studying intelligent robot systems and regard the nuclear industry as one of the important application fields where we can validate the feasibility of the methods and systems we have developed. In this paper we first discuss the tasks required in nuclear power plants. Secondly we introduce the current status of R and D on special purpose robots, versatile robots and intelligent robots for automating the tasks. Then we focus our discussion on three major functions in realizing robotized assembly tasks under such unstructured environments as in nuclear power plants: planning, vision and manipulation. Finally we depict an image of a prototype robot system for nuclear power plants based on the advanced functions. (author) 64 refs

  6. Bio-inspired vision

    International Nuclear Information System (INIS)

    Posch, C

    2012-01-01

    Nature still outperforms the most powerful computers in routine functions involving perception, sensing and actuation like vision, audition, and motion control, and is, most strikingly, orders of magnitude more energy-efficient than its artificial competitors. The reasons for the superior performance of biological systems are subject to diverse investigations, but it is clear that the form of hardware and the style of computation in nervous systems are fundamentally different from what is used in artificial synchronous information processing systems. Very generally speaking, biological neural systems rely on a large number of relatively simple, slow and unreliable processing elements and obtain performance and robustness from a massively parallel principle of operation and a high level of redundancy where the failure of single elements usually does not induce any observable system performance degradation. In the late 1980s, Carver Mead demonstrated that silicon VLSI technology can be employed in implementing "neuromorphic" circuits that mimic neural functions and fabricating building blocks that work like their biological role models. Neuromorphic systems, as the biological systems they model, are adaptive, fault-tolerant and scalable, and process information using energy-efficient, asynchronous, event-driven methods. In this paper, some basics of neuromorphic electronic engineering and its impact on recent developments in optical sensing and artificial vision are presented. It is demonstrated that bio-inspired vision systems have the potential to outperform conventional, frame-based vision acquisition and processing systems in many application fields and to establish new benchmarks in terms of redundancy suppression/data compression, dynamic range, temporal resolution and power efficiency to realize advanced functionality like 3D vision, object tracking, motor control, visual feedback loops, etc. in real-time. It is argued that future artificial vision systems

  7. An Address Event Representation-Based Processing System for a Biped Robot

    Directory of Open Access Journals (Sweden)

    Uziel Jaramillo-Avila

    2016-02-01

    Full Text Available In recent years, several important advances have been made in the fields of both biologically inspired sensorial processing and locomotion systems, such as Address Event Representation-based cameras (or Dynamic Vision Sensors) and in human-like robot locomotion, e.g., the walking of a biped robot. However, making these fields merge properly is not an easy task. In this regard, Neuromorphic Engineering is a fast-growing research field, the main goal of which is the biologically inspired design of hybrid hardware systems in order to mimic neural architectures and to process information in the manner of the brain. However, few robotic applications exist to illustrate them. The main goal of this work is to demonstrate, by creating a closed-loop system using only bio-inspired techniques, how such applications can work properly. We present an algorithm using Spiking Neural Networks (SNN) for a biped robot equipped with a Dynamic Vision Sensor, which is designed to follow a line drawn on the floor. This is a commonly used method for demonstrating control techniques. Most of them are fairly simple to implement without very sophisticated components; however, it can still serve as a good test in more elaborate circumstances. In addition, the locomotion system proposed is able to coordinately control the six DOFs of a biped robot in switching between basic forms of movement. The latter has been implemented as an FPGA-based neuromorphic system. Numerical tests and hardware validation are presented.

  8. ACT-Vision: active collaborative tracking for multiple PTZ cameras

    Science.gov (United States)

    Broaddus, Christopher; Germano, Thomas; Vandervalk, Nicholas; Divakaran, Ajay; Wu, Shunguang; Sawhney, Harpreet

    2009-04-01

    We describe a novel scalable approach for the management of a large number of Pan-Tilt-Zoom (PTZ) cameras deployed outdoors for persistent tracking of humans and vehicles, without resorting to the large fields of view of associated static cameras. Our system, Active Collaborative Tracking - Vision (ACT-Vision), is essentially a real-time operating system that can control hundreds of PTZ cameras to ensure uninterrupted tracking of target objects while maintaining image quality and coverage of all targets using a minimal number of sensors. The system ensures the visibility of targets between PTZ cameras by using criteria such as distance from sensor and occlusion.

  9. Reconfigurable On-Board Vision Processing for Small Autonomous Vehicles

    Directory of Open Access Journals (Sweden)

    James K. Archibald

    2006-12-01

    Full Text Available This paper addresses the challenge of supporting real-time vision processing on-board small autonomous vehicles. Local vision gives increased autonomous capability, but it requires substantial computing power that is difficult to provide given the severe constraints of small size and battery-powered operation. We describe a custom FPGA-based circuit board designed to support research in the development of algorithms for image-directed navigation and control. We show that the FPGA approach supports real-time vision algorithms by describing the implementation of an algorithm to construct a three-dimensional (3D) map of the environment surrounding a small mobile robot. We show that FPGAs are well suited for systems that must be flexible and deliver high levels of performance, especially in embedded settings where space and power are significant concerns.

  10. Reconfigurable On-Board Vision Processing for Small Autonomous Vehicles

    Directory of Open Access Journals (Sweden)

    Wade S. Fife

    2007-01-01

    Full Text Available This paper addresses the challenge of supporting real-time vision processing on-board small autonomous vehicles. Local vision gives increased autonomous capability, but it requires substantial computing power that is difficult to provide given the severe constraints of small size and battery-powered operation. We describe a custom FPGA-based circuit board designed to support research in the development of algorithms for image-directed navigation and control. We show that the FPGA approach supports real-time vision algorithms by describing the implementation of an algorithm to construct a three-dimensional (3D) map of the environment surrounding a small mobile robot. We show that FPGAs are well suited for systems that must be flexible and deliver high levels of performance, especially in embedded settings where space and power are significant concerns.

  11. The development of advanced robotics for the nuclear industry -The development of advanced robotic technology-

    International Nuclear Information System (INIS)

    Lee, Jong Min; Lee, Yong Bum; Park, Soon Yong; Cho, Jae Wan; Lee, Nam Hoh; Kim, Woong Kee; Moon, Byung Soo; Kim, Seung Hoh; Kim, Chang Heui; Kim, Byung Soo; Hwang, Suk Yong; Lee, Yung Kwang; Moon, Je Sun

    1995-07-01

    The main activity in this year is to develop both remote handling systems and telepresence techniques, which can be used to alleviate the burden of people working in extremely hazardous areas. In the robot vision technology part, the KAERI-PSM system, a stereo imaging camera module, a stereo BOOM/MOLLY unit, and a stereo HMD unit were developed. Also, an autostereo TV system, which falls under the category of next generation stereo imaging technology, has been studied. The performance of the KAERI-PSM system for remote handling tasks was evaluated and compared with other stereo imaging systems as well as a general TV imaging system. The result shows that the KAERI-PSM system is superior to the other stereo imaging systems in remote operation speedup and accuracy. An automatic recognition algorithm for instrument panels was studied and a passive visual target tracking system was developed. The 5 DOF camera serving unit has been designed and fabricated; it is designed to function like the human eye. In the sensing and intelligent control research part, a thermal image database system for thermal image analysis was developed and a remote temperature monitoring technique using fiber optics was investigated. Also, a two-dimensional radioactivity sensor head for a radiation profile monitoring system was designed. In the part of intelligent robotics, a mobile robot was fabricated and its autonomous navigation using fuzzy control logic was studied. The remote handling and telepresence techniques developed in this project can be applied to the nozzle-dam installation/removal robot system, reactor inspection units, underwater nuclear pellet inspection and pipe abnormality inspection, and they will also be applied in general industry, medical science, and the military as well as nuclear facilities. These techniques are expected to expand the working area of humans, maximize the efficiency of remote tasks, and enhance the industrial

  12. Design of a vision-based sensor for autonomous pighouse cleaning

    DEFF Research Database (Denmark)

    Braithwaite, Ian David; Blanke, Mogens; Zhang, Guo-Quiang

    2005-01-01

    of designing a vision-based system to locate dirty areas and subsequently direct a cleaning robot to remove dirt. Novel results include the characterisation of the spectral properties of real surfaces and dirt in a pig house and the design of illumination to obtain discrimination of clean from dirty areas...

  13. Exploring Techniques for Vision Based Human Activity Recognition: Methods, Systems, and Evaluation

    Directory of Open Access Journals (Sweden)

    Hong Zhang

    2013-01-01

    Full Text Available With the wide applications of vision based intelligent systems, image and video analysis technologies have attracted the attention of researchers in the computer vision field. In image and video analysis, human activity recognition is an important research direction. By interpreting and understanding human activity, we can recognize and predict the occurrence of crimes and help the police or other agencies react immediately. In the past, a large number of papers have been published on human activity recognition in video and image sequences. In this paper, we provide a comprehensive survey of the recent development of the techniques, including methods, systems, and quantitative evaluation towards the performance of human activity recognition.

  14. SWARM-BOT: Pattern Formation in a Swarm of Self-Assembling Mobile Robots

    OpenAIRE

    El Kamel, A.; Mellouli, K.; Borne, P.; Sahin, E.; Labella, T.H.; Trianni, V.; Deneubourg, J.-L.; Rasse, P.; Floreano, D.; Gambardella, L.M.; Mondada, F.; Nolfi, S.; Dorigo, M.

    2002-01-01

    In this paper we introduce a new robotic system, called swarm-bot. The system consists of a swarm of mobile robots with the ability to connect to/disconnect from each other to self-assemble into different kinds of structures. First, we describe our vision and the goals of the project. Then we present preliminary results on the formation of patterns obtained from a grid-world simulation of the system.

  15. Study on cooperative active sensing system

    International Nuclear Information System (INIS)

    Tsukune, Hideo; Kita, Nobuyuki; Kuniyoshi, Yasuo; Hara, Isao; Matsui, Toshihiro; Matsushita, Toshio; Nagata, Kazuyuki; Nagakubo, Akihiko

    1998-01-01

    This study aims to develop dispersed cooperative intelligent system techniques and the sensing systems required to build a group of robots capable of patrol inspection and self-maintenance in a plant of large scale and complex variety. In particular, in order to establish a system with flexible response to the environment and soundness under abnormal accidents, work on a cooperative active sensing technique and a real-time active vision sensing technique was started. Building on the results of the previous two years, in the 1996 fiscal year, improvement and expansion of each element technique were carried out, starting with a study on the movement of the focusing point, which is an important function of active vision sensing. (G.K.)

  16. Physical human-robot interaction of an active pelvis orthosis: toward ergonomic assessment of wearable robots.

    Science.gov (United States)

    d'Elia, Nicolò; Vanetti, Federica; Cempini, Marco; Pasquini, Guido; Parri, Andrea; Rabuffetti, Marco; Ferrarin, Maurizio; Molino Lova, Raffaele; Vitiello, Nicola

    2017-04-14

    In human-centered robotics, exoskeletons are becoming relevant for addressing needs in the healthcare and industrial domains. Owing to their close interaction with the user, the safety and ergonomics of these systems are critical design features that require systematic evaluation methodologies. Proper transfer of mechanical power requires optimal tuning of the kinematic coupling between the robotic and anatomical joint rotation axes. We present the methods and results of an experimental evaluation of the physical interaction with an active pelvis orthosis (APO). This device was designed to effectively assist in hip flexion-extension during locomotion with a minimum impact on the physiological human kinematics, owing to a set of passive degrees of freedom for self-alignment of the human and robotic hip flexion-extension axes. Five healthy volunteers walked on a treadmill at different speeds without and with the APO under different levels of assistance. The user-APO physical interaction was evaluated in terms of: (i) the deviation of human lower-limb joint kinematics when wearing the APO with respect to the physiological behavior (i.e., without the APO); (ii) relative displacements between the APO orthotic shells and the corresponding body segments; and (iii) the discrepancy between the kinematics of the APO and the wearer's hip joints. The results show: (i) negligible interference of the APO in human kinematics under all the experimented conditions; and (ii) small relative displacements between the APO orthotic shells and the corresponding body segments, supporting the ergonomics assessment of wearable robots.

  17. vSLAM: vision-based SLAM for autonomous vehicle navigation

    Science.gov (United States)

    Goncalves, Luis; Karlsson, Niklas; Ostrowski, Jim; Di Bernardo, Enrico; Pirjanian, Paolo

    2004-09-01

    Among the numerous challenges of building autonomous/unmanned vehicles is that of reliable and autonomous localization in an unknown environment. In this paper we present a system that can efficiently and autonomously solve the robotics 'SLAM' problem, where a robot placed in an unknown environment must simultaneously localize itself and make a map of the environment. The system is vision-based, and makes use of Evolution Robotics' powerful object recognition technology. As the robot explores the environment, it is continuously performing four tasks, using information from acquired images and the drive system odometry. The robot: (1) recognizes previously created 3-D visual landmarks; (2) builds new 3-D visual landmarks; (3) updates the current estimate of its location, using the map; (4) updates the landmark map. In indoor environments, the system can build a map of a 5m by 5m area in approximately 20 minutes, and can localize itself with an accuracy of approximately 15 cm in position and 3 degrees in orientation relative to the global reference frame of the landmark map. The same system can be adapted for outdoor, vehicular use.
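
    The record does not expose Evolution Robotics' actual estimator, but the third task, updating the pose estimate from a re-recognised landmark, can be sketched generically in SE(2): the stored landmark pose composed with the inverse of the measured robot-to-landmark transform implies a robot pose, which is then blended with odometry. All names and the blending rule below are assumptions for illustration:

```python
# Hedged sketch: correcting an odometry pose when a stored visual landmark is
# re-recognised. Poses are (x, y, theta) in SE(2).
import numpy as np

def compose(p, q):
    """Pose q, expressed in frame p, returned in the global frame."""
    x, y, th = p
    c, s = np.cos(th), np.sin(th)
    return np.array([x + c * q[0] - s * q[1], y + s * q[0] + c * q[1], th + q[2]])

def invert(p):
    x, y, th = p
    c, s = np.cos(th), np.sin(th)
    return np.array([-c * x - s * y, s * x - c * y, -th])

def correct(odom_pose, landmark_global, landmark_measured, alpha=0.7):
    """landmark_measured: landmark pose in the robot frame (from recognition).
    Pose implied by the sighting: landmark_global composed with its inverse.
    The naive linear blend (including theta) is fine for small corrections."""
    vision_pose = compose(landmark_global, invert(landmark_measured))
    return (1 - alpha) * odom_pose + alpha * vision_pose

# Drifting odometry snapped back by a landmark seen 2 m straight ahead:
print(correct(np.array([1.2, 0.1, 0.05]),
              np.array([3.0, 0.0, 0.0]),     # stored landmark pose (map)
              np.array([2.0, 0.0, 0.0])))    # measured relative pose
```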

  18. Design, implementation and testing of master slave robotic surgical system

    International Nuclear Information System (INIS)

    Ali, S.A.

    2015-01-01

    Autonomous manipulation in medical robotics requires a complete surgical plan to be drawn up in advance. The autonomy of the robot comes from the fact that once the plan is drawn up off-line, it is the servo loops, and only these, that control the actions of the robot online, based on instantaneous control signals and measurements provided by the vision or force sensors. The use of purely autonomous techniques in medical and surgical robotics remains relatively limited for two main reasons: the complexity of predicting the gestures, and human safety. Therefore, modern research in haptic force feedback in medical robotics aims to develop medical robots capable of performing remotely what a surgeon does himself. These medical robots are supposed to work exactly in the manner that a surgeon does in daily routine. In this paper a master slave tele-robotic system is designed and implemented with accuracy and stability by using 6DOF (Six Degrees of Freedom) haptic force feedback devices. The master slave control strategy, haptic devices integration, application software design using Visual C++ and the experimental setup are considered. Finally, results are presented on the stability, accuracy and repeatability of the system. (author)

  19. Design, Implementation and Testing of Master Slave Robotic Surgical System

    Directory of Open Access Journals (Sweden)

    Syed Amjad Ali

    2015-01-01

    Full Text Available Autonomous manipulation in medical robotics requires a complete surgical plan to be drawn up in advance. The autonomy of the robot comes from the fact that once the plan is drawn up off-line, it is the servo loops, and only these, that control the actions of the robot online, based on instantaneous control signals and measurements provided by the vision or force sensors. The use of purely autonomous techniques in medical and surgical robotics remains relatively limited for two main reasons: the complexity of predicting the gestures, and human safety. Therefore, modern research in haptic force feedback in medical robotics aims to develop medical robots capable of performing remotely what a surgeon does himself. These medical robots are supposed to work exactly in the manner that a surgeon does in daily routine. In this paper a master slave tele-robotic system is designed and implemented with accuracy and stability by using 6DOF (Six Degrees of Freedom) haptic force feedback devices. The master slave control strategy, haptic devices integration, application software design using Visual C++ and the experimental setup are considered. Finally, results are presented on the stability, accuracy and repeatability of the system.

  20. A memory-array architecture for computer vision

    Energy Technology Data Exchange (ETDEWEB)

    Balsara, P.T.

    1989-01-01

    With the fast advances in the area of computer vision and robotics there is a growing need for machines that can understand images at a very high speed. A conventional von Neumann computer is not suited for this purpose because it takes a tremendous amount of time to solve most typical image processing problems. Exploiting the inherent parallelism present in various vision tasks can significantly reduce the processing time. Fortunately, parallelism is increasingly affordable as hardware gets cheaper. Thus it is now imperative to study computer vision in a parallel processing framework. One should first design a computational structure which is well suited for a wide range of vision tasks and then develop parallel algorithms which can run efficiently on this structure. Recent advances in VLSI technology have led to several proposals for parallel architectures for computer vision. In this thesis the author demonstrates that a memory array architecture with efficient local and global communication capabilities can be used for high speed execution of a wide range of computer vision tasks. This architecture, called the Access Constrained Memory Array Architecture (ACMAA), is efficient for VLSI implementation because of its modular structure, simple interconnect and limited global control. Several parallel vision algorithms have been designed for this architecture. The choice of vision problems demonstrates the versatility of ACMAA for a wide range of vision tasks. These algorithms were simulated on a high level ACMAA simulator running on the Intel iPSC/2 hypercube, a parallel architecture. The results of this simulation are compared with those of sequential algorithms running on a single hypercube node. Details of the ACMAA processor architecture are also presented.

  1. Application of robotics to distribution systems

    Energy Technology Data Exchange (ETDEWEB)

    Ramsbottom, W

    1986-06-01

    Robotic technology has been recognized as having potential application in lifeline maintenance and repair. A study was conducted to investigate the feasibility of utilizing robotics for this purpose, and to prepare a general design of appropriate equipment. Four lifeline tasks were selected as representative of the majority of work. Based on a detailed task decomposition, subtasks were rated on amenability to robot completion. All tasks are feasible, but in some cases special tooling is required. Based on today's robotics, it is concluded that a force reflecting master/slave telemanipulator, augmented by automatic robot tasks under a supervisory control system, provides the optimal approach. No commercially available products are currently adequate for lifeline work. A general design of the telemanipulator, which has been named the SKYARM has been developed, addressing all subsystems such as the manipulator, video, control power and insulation. The baseline system is attainable using today's technology. Improved performance and lower cost will be achieved through developments in artificial intelligence, machine vision, supervisory control and dielectrics. Immediate benefits to utilities include increased safety, better service and savings on a subset of maintenance tasks. In 3-5 years, the SKYARM will prove cost effective as a general purpose lifeline tool. 7 refs., 26 figs., 3 tabs.

  2. Using range vision for telerobotic control in hazardous environments

    International Nuclear Information System (INIS)

    Lipsett, M.G.; Ballantyne, W.J.

    1996-01-01

    This paper describes how range vision augments a telerobotic system. The robot has a manipulator arm mounted onto a mobile platform. The robot is driven by a human operator under remote control to a work site, and then the operator uses video cameras and laser range images to perform manipulation tasks. A graphical workstation displays a three-dimensional image of the workspace to the operator, and a CAD model of the manipulator moves in this 'virtual environment' while the actual manipulator moves in the real workspace. This paper gives results of field trials of a remote excavation system, and describes a remote inspection system being developed for reactor maintenance. (author)

  3. Performing mathematics activities with non-standard units of measurement using robots controlled via speech-generating devices: three case studies.

    Science.gov (United States)

    Adams, Kim D; Cook, Albert M

    2017-07-01

    Purpose To examine how using a Lego robot controlled via a speech-generating device (SGD) can contribute to how students with physical and communication impairments perform hands-on and communicative mathematics measurement activities. This study was a follow-up to a previous study. Method Three students with cerebral palsy used the robot to measure objects using non-standard units, such as straws, and then compared and ordered the objects using the resulting measurement. Their performance was assessed, and the manipulation and communication events were observed. Teachers and education assistants were interviewed regarding robot use. Results Similar benefits to the previous study were found in this study. Gaps in student procedural knowledge were identified such as knowing to place measurement units tip-to-tip, and students' reporting revealed gaps in conceptual understanding. However, performance improved with repeated practice. Stakeholders identified that some robot tasks took too long or were too difficult to perform. Conclusions Having access to both their SGD and a robot gave the students multiple ways to show their understanding of the measurement concepts. Though they could participate actively in the new mathematics activities, robot use is most appropriate in short tasks requiring reasonable operational skill. Implications for Rehabilitation Lego robots controlled via speech-generating devices (SGDs) can help students to engage in the mathematics pedagogy of performing hands-on activities while communicating about concepts. Students can "show what they know" using the Lego robots, and report and reflect on concepts using the SGD. Level 1 and Level 2 mathematics measurement activities have been adapted to be accomplished by the Lego robot. Other activities can likely be accomplished with similar robot adaptations (e.g., gripper, pen). It is not recommended to use the robot to measure items that are long, or perform measurements that require high

  4. Remote hardware-reconfigurable robotic camera

    Science.gov (United States)

    Arias-Estrada, Miguel; Torres-Huitzil, Cesar; Maya-Rueda, Selene E.

    2001-10-01

    In this work, a camera with integrated image processing capabilities is discussed. The camera is based on an imager coupled to an FPGA device (Field Programmable Gate Array) which contains an architecture for real-time computer vision low-level processing. The architecture can be reprogrammed remotely for application specific purposes. The system is intended for rapid modification and adaptation for inspection and recognition applications, with the flexibility of hardware and software reprogrammability. FPGA reconfiguration allows the same ease of upgrade in hardware as a software upgrade process. The camera is composed of a digital imager coupled to an FPGA device, two memory banks, and a microcontroller. The microcontroller is used for communication tasks and FPGA programming. The system implements a software architecture to handle multiple FPGA architectures in the device, and the possibility to download a software/hardware object from the host computer into its internal context memory. System advantages are: small size, low power consumption, and a library of hardware/software functionalities that can be exchanged during run time. The system has been validated with an edge detection and a motion processing architecture, which will be presented in the paper. Applications targeted are in robotics, mobile robotics, and vision based quality control.

  5. Correction of Visual Perception Based on Neuro-Fuzzy Learning for the Humanoid Robot TEO

    Directory of Open Access Journals (Sweden)

    Juan Hernandez-Vicen

    2018-03-01

    Full Text Available New applications related to robotic manipulation or transportation tasks, with or without physical grasping, are continuously being developed. To perform these activities, the robot takes advantage of different kinds of perceptions. One of the key perceptions in robotics is vision. However, some problems related to image processing make the application of visual information within robot control algorithms difficult. Camera-based systems have inherent errors that affect the quality and reliability of the information obtained. The need to correct image distortion slows down image parameter computing, which decreases the performance of control algorithms. In this paper, a new approach to correcting several sources of visual distortion on images in only one computing step is proposed. The goal of this system/algorithm is the computation of the tilt angle of an object transported by a robot, minimizing image inherent errors and increasing computing speed. After capturing the image, the computer system extracts the angle using a fuzzy filter that corrects all possible distortions at the same time, obtaining the real angle in only one processing step. This filter has been developed by means of neuro-fuzzy learning techniques, using datasets with information obtained from real experiments. In this way, the computing time has been decreased and the performance of the application has been improved. The resulting algorithm has been tried out experimentally in robot transportation tasks on the humanoid robot TEO (Task Environment Operator) from the University Carlos III of Madrid.

  6. Active citizenship’ and feeding assistive robotics

    DEFF Research Database (Denmark)

    Nickelsen, Niels Christian Mossfeldt

    2018-01-01

    Would you want to be fed by a robot? This question may or may not sound attractive to a severely disabled citizen striving for self-reliance. Recently assistive robotics have become a new factor to rely on in relation to a number of aspects of social work and health care. These initiatives have...... study, I discuss the ways humans engage with them, how they co-produce everyday life in housing institutions, and the sensitivity of assistive robotics. During the latest years, Feeding Assistive Robotics (FAR) have enjoyed strong political endorsement in Denmark. Nevertheless, I argue, it is difficult...

  7. Non-manufacturing applications of robotics

    International Nuclear Information System (INIS)

    Dauchez, P.

    2000-12-01

    This book presents the different non-manufacturing sectors of activity where robotics can have useful or necessary applications: underwater robotics, agriculture robotics, road work robotics, nuclear robotics, medical-surgery robotics, aids to disabled people, entertainment robotics. Service robotics has been voluntarily excluded because this developing sector is not mature yet. (J.S.)

  8. Robot-assisted therapy for improving social interactions and activity participation among institutionalized older adults: a pilot study.

    Science.gov (United States)

    Sung, Huei-Chuan; Chang, Shu-Min; Chin, Mau-Yu; Lee, Wen-Li

    2015-03-01

    Animal-assisted therapy is gaining popularity as part of therapeutic activities for older adults in many long-term care facilities. However, concerns about dog bites, allergic responses to pets, disease, and insufficient available resources to care for a real pet have led many residential care facilities to ban this therapy. There are situations where a substitute artificial companion, such as a robotic pet, may serve as a better alternative. This pilot study used a one-group pre- and posttest design to evaluate the effect of a robot-assisted therapy for older adults. Sixteen eligible participants participated in the study and received a group robot-assisted therapy using a seal-like robot pet for 30 minutes twice a week for 4 weeks. All participants received assessments of their communication and interaction skills using the Assessment of Communication and Interaction Skills (ACIS-C) and activity participation using the Activity Participation Scale at baseline and at week 4. A total of 12 participants completed the study. Wilcoxon signed rank test showed that participants' communication and interaction skills (z = -2.94, P = 0.003) and activity participation (z = -2.66, P = 0.008) were significantly improved after receiving 4-week robot-assisted therapy. By interacting with a robot pet, such as Paro, the communication, interaction skills, and activity participation of the older adults can be improved. The robot-assisted therapy can be provided as a routine activity program and has the potential to improve social health of older adults in residential care facilities. Copyright © 2014 Wiley Publishing Asia Pty Ltd.
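
    The statistics quoted above come from Wilcoxon signed rank tests on paired pre/post scores. For readers who want to reproduce that style of analysis, a minimal sketch with invented scores (the study's raw data are not in the record) using scipy:

```python
# Sketch of the reported analysis with made-up numbers: a Wilcoxon signed-rank
# test on paired pre/post scores for the 12 completers.
import numpy as np
from scipy.stats import wilcoxon

rng = np.random.default_rng(4)
pre = rng.integers(20, 40, size=12)        # hypothetical ACIS-C scores
post = pre + rng.integers(1, 6, size=12)   # hypothetical improvement
stat, p = wilcoxon(pre, post)
print(f"W = {stat}, p = {p:.4f}")          # small p -> significant change
```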

  9. Posture manipulation for rescue activity via small traction robots

    International Nuclear Information System (INIS)

    Iwano, Yuki; Osuka, Koichi; Amano, Hisanori

    2006-01-01

    We discuss a conceptual design of rescue robots for nuclear-power plant accidents. We claim that rescue robots in nuclear-power plants should have the following properties: (1) small size; (2) simple structure; (3) large numbers. This paper studies rescue robots intended to rescue people in areas polluted by radioactive leakage in nuclear power institutions. In particular, we propose a rescue system which consists of a group of small mobile robots. First, small traction robots arrange the posture of unconscious victims so that they can be carried easily, and then the victims are carried to a safe area by mobile robots configured as a stretcher. In this paper, we describe the small traction robots produced, and we confirm that the robots can manipulate the posture of a 40 kg dummy doll. We also examine the optimal number of robots from the perspective of working efficiency at an assumed accident site. (author)

  10. Simultaneous Intrinsic and Extrinsic Parameter Identification of a Hand-Mounted Laser-Vision Sensor

    Directory of Open Access Journals (Sweden)

    Taikyeong Jeong

    2011-09-01

    Full Text Available In this paper, we propose a simultaneous intrinsic and extrinsic parameter identification of a hand-mounted laser-vision sensor (HMLVS). A laser-vision sensor (LVS), consisting of a camera and a laser stripe projector, is used as a sensor component of the robotic measurement system, and it measures the range data with respect to the robot base frame using the robot forward kinematics and the optical triangulation principle. For the optimal estimation of the model parameters, we applied two optimization techniques: a nonlinear least square optimizer and a particle swarm optimizer. Best-fit parameters, including both the intrinsic and extrinsic parameters of the HMLVS, are simultaneously obtained based on the least-squares criterion. From the simulation and experimental results, it is shown that the parameter identification problem considered was characterized by a highly multimodal landscape; thus, a global optimization technique such as particle swarm optimization can be a promising tool to identify the model parameters for a HMLVS, while the nonlinear least square optimizer often failed to find an optimal solution even when the initial candidate solutions were selected close to the true optimum. The proposed optimization method does not require good initial guesses of the system parameters to converge at a very stable solution and it could be applied to a kinematically dissimilar robot system without loss of generality.
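
    As a hedged sketch of the nonlinear least-squares branch described above: the real HMLVS model couples camera intrinsics, the laser-plane pose and the robot kinematics, none of which the record specifies, so the residual below uses an invented three-parameter toy model purely to show the fitting pattern:

```python
# Least-squares parameter identification sketch. The residual model here
# (one scale + planar offset) is a stand-in for the full intrinsic/extrinsic
# HMLVS model, which is not given in the record.
import numpy as np
from scipy.optimize import least_squares

def residuals(params, measured_points, reference_points):
    """Residuals between model-predicted and reference points (flattened)."""
    s, tx, ty = params
    predicted = s * measured_points + np.array([tx, ty])
    return (predicted - reference_points).ravel()

rng = np.random.default_rng(2)
ref = rng.uniform(0, 1, (20, 2))                     # known calibration points
meas = (ref - np.array([0.1, -0.2])) / 1.05          # truth: s=1.05, t=(0.1,-0.2)
fit = least_squares(residuals, x0=[1.0, 0.0, 0.0], args=(meas, ref))
print(fit.x)   # ~ [1.05, 0.1, -0.2]
```

    The record's point is that on the real, highly multimodal cost landscape such a local optimizer needs good initial guesses, which is what motivates the particle swarm alternative.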

  11. Development of an algorithm for detection and command of humanoid robots in gathering tasks

    Directory of Open Access Journals (Sweden)

    Germán Andrés Vargas Torres

    2015-07-01

    Full Text Available This article presents an algorithm which commands a group of Bioloid humanoid robots in order to organize them around an object of interest, previously detected by an external vision system. The robots form a Multi-Agent System (MAS) oriented towards cooperative gathering tasks. The development of the MAS, each of the organization algorithm's components, and the simulation inside a virtual environment are all detailed. The algorithm is subdivided into two dedicated threads: one which handles machine vision (filtering, contour detection and classification, achieved through EmguCV libraries) and operational space calculations, and another which operates the ZigBee wireless communication with the robots. Furthermore, the robots possess their own embedded code which enables them to translate a sequence of received instructions into gait patterns that allow them to move towards the object of interest. The total execution time for the gathering task is chosen as the global performance measure.

  12. Human-Robot Interaction: Does Robotic Guidance Force Affect Gait-Related Brain Dynamics during Robot-Assisted Treadmill Walking?

    Directory of Open Access Journals (Sweden)

    Kristel Knaepen

    Full Text Available In order to determine optimal training parameters for robot-assisted treadmill walking, it is essential to understand how a robotic device interacts with its wearer, and thus, how parameter settings of the device affect locomotor control. The aim of this study was to assess the effect of different levels of guidance force during robot-assisted treadmill walking on cortical activity. Eighteen healthy subjects walked at 2 km.h-1 on a treadmill with and without assistance of the Lokomat robotic gait orthosis. Event-related spectral perturbations and changes in power spectral density were investigated during unassisted treadmill walking as well as during robot-assisted treadmill walking at 30%, 60% and 100% guidance force (with 0% body weight support). Clustering of independent components revealed three clusters of activity in the sensorimotor cortex during treadmill walking and robot-assisted treadmill walking in healthy subjects. These clusters demonstrated gait-related spectral modulations in the mu, beta and low gamma bands over the sensorimotor cortex related to specific phases of the gait cycle. Moreover, mu and beta rhythms were suppressed in the right primary sensory cortex during treadmill walking compared to robot-assisted treadmill walking with 100% guidance force, indicating significantly larger involvement of the sensorimotor area during treadmill walking compared to robot-assisted treadmill walking. Only marginal differences in the spectral power of the mu, beta and low gamma bands could be identified between robot-assisted treadmill walking with different levels of guidance force. From these results it can be concluded that a high level of guidance force (i.e., 100% guidance force) and thus a less active participation during locomotion should be avoided during robot-assisted treadmill walking. This will optimize the involvement of the sensorimotor cortex which is known to be crucial for motor learning.

  13. Human-Robot Interaction: Does Robotic Guidance Force Affect Gait-Related Brain Dynamics during Robot-Assisted Treadmill Walking?

    Science.gov (United States)

    Knaepen, Kristel; Mierau, Andreas; Swinnen, Eva; Fernandez Tellez, Helio; Michielsen, Marc; Kerckhofs, Eric; Lefeber, Dirk; Meeusen, Romain

    2015-01-01

    In order to determine optimal training parameters for robot-assisted treadmill walking, it is essential to understand how a robotic device interacts with its wearer, and thus, how parameter settings of the device affect locomotor control. The aim of this study was to assess the effect of different levels of guidance force during robot-assisted treadmill walking on cortical activity. Eighteen healthy subjects walked at 2 km.h-1 on a treadmill with and without assistance of the Lokomat robotic gait orthosis. Event-related spectral perturbations and changes in power spectral density were investigated during unassisted treadmill walking as well as during robot-assisted treadmill walking at 30%, 60% and 100% guidance force (with 0% body weight support). Clustering of independent components revealed three clusters of activity in the sensorimotor cortex during treadmill walking and robot-assisted treadmill walking in healthy subjects. These clusters demonstrated gait-related spectral modulations in the mu, beta and low gamma bands over the sensorimotor cortex related to specific phases of the gait cycle. Moreover, mu and beta rhythms were suppressed in the right primary sensory cortex during treadmill walking compared to robot-assisted treadmill walking with 100% guidance force, indicating significantly larger involvement of the sensorimotor area during treadmill walking compared to robot-assisted treadmill walking. Only marginal differences in the spectral power of the mu, beta and low gamma bands could be identified between robot-assisted treadmill walking with different levels of guidance force. From these results it can be concluded that a high level of guidance force (i.e., 100% guidance force) and thus a less active participation during locomotion should be avoided during robot-assisted treadmill walking. This will optimize the involvement of the sensorimotor cortex which is known to be crucial for motor learning.
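
    For orientation, the band-limited quantities these two records refer to can be illustrated with a single-channel Welch power-spectral-density estimate; the study itself computes event-related spectral perturbations on clustered independent components, a more involved pipeline. Sampling rate and data below are invented.

    ```python
    import numpy as np
    from scipy.signal import welch

    fs = 512.0                                   # assumed sampling rate, Hz
    eeg = np.random.randn(int(60 * fs))          # placeholder for 60 s of one channel

    f, psd = welch(eeg, fs=fs, nperseg=int(2 * fs))

    def band_power(f, psd, lo, hi):
        sel = (f >= lo) & (f < hi)
        return np.trapz(psd[sel], f[sel])        # integrate the PSD over the band

    mu = band_power(f, psd, 8, 12)               # mu rhythm
    beta = band_power(f, psd, 13, 30)            # beta band
    gamma_low = band_power(f, psd, 30, 50)       # low gamma band
    print(f"mu={mu:.3g}, beta={beta:.3g}, low gamma={gamma_low:.3g}")
    ```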

  14. How robotic-assisted surgery can decrease the risk of mucosal tear during Heller myotomy procedure?

    Science.gov (United States)

    Ballouhey, Quentin; Dib, Nabil; Binet, Aurélien; Carcauzon-Couvrat, Véronique; Clermidi, Pauline; Longis, Bernard; Lardy, Hubert; Languepin, Jane; Cros, Jérôme; Fourcade, Laurent

    2017-06-01

    We report the first description of robotic-assisted Heller myotomy in children. The purpose of this study was to improve the safety of Heller myotomy by demonstrating, in two adolescent patients, the contribution of the robot to the different steps of this procedure. Thanks to the robot's freedom of movement and three-dimensional vision, there was an improvement in accuracy and a gain in safety at the different key points, decreasing the risk of mucosal perforation associated with this procedure.

  15. A Behaviour-Based Architecture for Mapless Navigation Using Vision

    Directory of Open Access Journals (Sweden)

    Mehmet Serdar Guzel

    2012-04-01

    Full Text Available Autonomous robots operating in an unknown and uncertain environment must be able to cope with dynamic changes to that environment. For a mobile robot in a cluttered environment, navigating successfully to a goal while avoiding obstacles is a challenging problem. This paper presents a new behaviour-based architecture design for mapless navigation. The architecture is composed of several modules, and each module generates behaviours. A novel method, inspired by a visual homing strategy, is adapted to a monocular vision-based system to overcome goal-based navigation problems. A neural network-based obstacle avoidance strategy is designed using a 2-D scanning laser. To evaluate the performance of the proposed architecture, the system has been tested using Microsoft Robotics Studio (MRS), a very powerful 3D simulation environment. In addition, real experiments to guide a Pioneer 3-DX mobile robot, equipped with a pan-tilt-zoom camera, in a cluttered environment are presented. The analysis of the results allows us to validate the proposed behaviour-based navigation strategy.

  16. Performance of Global-Appearance Descriptors in Map Building and Localization Using Omnidirectional Vision

    Directory of Open Access Journals (Sweden)

    Luis Payá

    2014-02-01

    Full Text Available Map building and localization are two crucial abilities that autonomous robots must develop. Vision sensors have become a widespread option to solve these problems. When using this kind of sensor, the robot must extract the necessary information from the scenes to build a representation of the environment in which it has to move and to estimate its position and orientation robustly. Techniques based on the global appearance of the scenes constitute one of the possible approaches to extract this information. They consist of representing each scene using a single descriptor which gathers global information from the scene. These techniques present some advantages compared to classical descriptors based on the extraction of local features; however, a good configuration of the parameters is important to reach a compromise between computational cost and accuracy. In this paper we make an exhaustive comparison among several global-appearance descriptors to solve the mapping and localization problem. With this aim, we make use of several image sets captured in indoor environments under realistic working conditions. The datasets have been collected using an omnidirectional vision sensor mounted on the robot.
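
    As a minimal concrete example of a global-appearance descriptor and its use in localization, the sketch below reduces each scene to a normalized, downsampled grayscale vector and localizes by nearest-neighbour search; this is only the simplest member of the descriptor family compared above, and the sizes and normalization are illustrative choices.

    ```python
    import cv2
    import numpy as np

    # One descriptor per scene: downsample the whole panoramic image to a small
    # grayscale grid and flatten it into a single vector.
    def global_descriptor(img, size=(32, 8)):
        gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
        d = cv2.resize(gray, size).astype(np.float32).ravel()
        return d / (np.linalg.norm(d) + 1e-9)    # normalize against illumination

    # Localization: return the index of the most similar map node.
    def localize(query_img, map_descriptors):
        q = global_descriptor(query_img)
        dists = [np.linalg.norm(q - m) for m in map_descriptors]
        return int(np.argmin(dists))
    ```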

  17. Hierarchical Spatial Concept Formation Based on Multimodal Information for Human Support Robots.

    Science.gov (United States)

    Hagiwara, Yoshinobu; Inoue, Masakazu; Kobayashi, Hiroyoshi; Taniguchi, Tadahiro

    2018-01-01

    In this paper, we propose a hierarchical spatial concept formation method based on the Bayesian generative model with multimodal information, e.g., vision, position, and word information. Since humans have the ability to select an appropriate level of abstraction according to the situation and describe their position linguistically, e.g., "I am in my home" and "I am in front of the table," a hierarchical structure of spatial concepts is necessary in order for human support robots to communicate smoothly with users. The proposed method enables a robot to form hierarchical spatial concepts by categorizing multimodal information using hierarchical multimodal latent Dirichlet allocation (hMLDA). Object recognition results using convolutional neural network (CNN), hierarchical k-means clustering result of self-position estimated by Monte Carlo localization (MCL), and a set of location names are used, respectively, as features in vision, position, and word information. Experiments in forming hierarchical spatial concepts and evaluating how the proposed method can predict unobserved location names and position categories are performed using a robot in the real world. Results verify that, relative to comparable baseline methods, the proposed method enables a robot to predict location names and position categories closer to predictions made by humans. As an application example of the proposed method in a home environment, a demonstration in which a human support robot moves to an instructed place based on human speech instructions is achieved based on the formed hierarchical spatial concept.

  18. Hierarchical Spatial Concept Formation Based on Multimodal Information for Human Support Robots

    Directory of Open Access Journals (Sweden)

    Yoshinobu Hagiwara

    2018-03-01

    Full Text Available In this paper, we propose a hierarchical spatial concept formation method based on the Bayesian generative model with multimodal information, e.g., vision, position, and word information. Since humans have the ability to select an appropriate level of abstraction according to the situation and describe their position linguistically, e.g., “I am in my home” and “I am in front of the table,” a hierarchical structure of spatial concepts is necessary in order for human support robots to communicate smoothly with users. The proposed method enables a robot to form hierarchical spatial concepts by categorizing multimodal information using hierarchical multimodal latent Dirichlet allocation (hMLDA). Object recognition results using convolutional neural network (CNN), hierarchical k-means clustering result of self-position estimated by Monte Carlo localization (MCL), and a set of location names are used, respectively, as features in vision, position, and word information. Experiments in forming hierarchical spatial concepts and evaluating how the proposed method can predict unobserved location names and position categories are performed using a robot in the real world. Results verify that, relative to comparable baseline methods, the proposed method enables a robot to predict location names and position categories closer to predictions made by humans. As an application example of the proposed method in a home environment, a demonstration in which a human support robot moves to an instructed place based on human speech instructions is achieved based on the formed hierarchical spatial concept.
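
    To make the position channel concrete, the hierarchical k-means step mentioned in these records can be sketched in two levels; the positions and cluster counts below are invented, and the full method embeds such clusters in hMLDA together with vision and word features.

    ```python
    import numpy as np
    from sklearn.cluster import KMeans

    # Coarse clusters play the role of rooms; fine clusters the role of spots
    # inside each room.
    positions = np.random.rand(500, 2) * 10.0    # placeholder MCL position estimates

    coarse = KMeans(n_clusters=4, n_init=10, random_state=0).fit(positions)
    hierarchy = {}
    for k in range(4):
        pts = positions[coarse.labels_ == k]
        fine = KMeans(n_clusters=3, n_init=10, random_state=0).fit(pts)
        hierarchy[k] = fine.cluster_centers_     # fine place centres inside coarse place k
    ```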

  19. Human-Robot Collaboration: A Literature Review and Augmented Reality Approach in Design

    Directory of Open Access Journals (Sweden)

    Scott A. Green

    2008-03-01

    Full Text Available NASA's vision for space exploration stresses the cultivation of human-robotic systems. Similar systems are also envisaged for a variety of hazardous earthbound applications such as urban search and rescue. Recent research has pointed out that to reduce human workload, costs, fatigue driven error and risk, intelligent robotic systems will need to be a significant part of mission design. However, little attention has been paid to joint human-robot teams. Making human-robot collaboration natural and efficient is crucial. In particular, grounding, situational awareness, a common frame of reference and spatial referencing are vital in effective communication and collaboration. Augmented Reality (AR), the overlaying of computer graphics onto the real worldview, can provide the necessary means for a human-robotic system to fulfill these requirements for effective collaboration. This article reviews the field of human-robot interaction and augmented reality, investigates the potential avenues for creating natural human-robot collaboration through spatial dialogue utilizing AR and proposes a holistic architectural design for human-robot collaboration.

  20. Human-Robot Collaboration: A Literature Review and Augmented Reality Approach in Design

    Directory of Open Access Journals (Sweden)

    Scott A. Green

    2008-11-01

    Full Text Available NASA's vision for space exploration stresses the cultivation of human-robotic systems. Similar systems are also envisaged for a variety of hazardous earthbound applications such as urban search and rescue. Recent research has pointed out that to reduce human workload, costs, fatigue driven error and risk, intelligent robotic systems will need to be a significant part of mission design. However, little attention has been paid to joint human-robot teams. Making human-robot collaboration natural and efficient is crucial. In particular, grounding, situational awareness, a common frame of reference and spatial referencing are vital in effective communication and collaboration. Augmented Reality (AR), the overlaying of computer graphics onto the real worldview, can provide the necessary means for a human-robotic system to fulfill these requirements for effective collaboration. This article reviews the field of human-robot interaction and augmented reality, investigates the potential avenues for creating natural human-robot collaboration through spatial dialogue utilizing AR and proposes a holistic architectural design for human-robot collaboration.

  1. Insect-Based Vision for Autonomous Vehicles: A Feasibility Study

    Science.gov (United States)

    Srinivasan, Mandyam V.

    1999-01-01

    The aims of the project were to use a high-speed digital video camera to pursue two questions: (1) To explore the influence of temporal imaging constraints on the performance of vision systems for autonomous mobile robots; (2) To study the fine structure of insect flight trajectories in order to better understand the characteristics of flight control, orientation and navigation.

  2. Multi-sensor integration for autonomous robots in nuclear power plants

    International Nuclear Information System (INIS)

    Mann, R.C.; Jones, J.P.; Beckerman, M.; Glover, C.W.; Farkas, L.; Bilbro, G.L.; Snyder, W.

    1989-01-01

    As part of a concerted R&D program in advanced robotics for hazardous environments, scientists and engineers at the Oak Ridge National Laboratory (ORNL) are performing research in the areas of systems integration, range-sensor-based 3-D world modeling, and multi-sensor integration. This program features a unique teaming arrangement that involves the universities of Florida, Michigan, Tennessee, and Texas; Odetics Corporation; and ORNL. This paper summarizes work directed at integrating information extracted from data collected with range sensors and CCD cameras on-board a mobile robot, in order to produce reliable descriptions of the robot's environment. Specifically, the paper describes the integration of two-dimensional vision and sonar range information, and an approach to integrate registered luminance and laser range images. All operations are carried out on-board the mobile robot using a 16-processor hypercube computer. 14 refs., 4 figs

  3. EnViSoRS: Enhanced Vision System for Robotic Surgery. A User-Defined Safety Volume Tracking to Minimize the Risk of Intraoperative Bleeding

    Directory of Open Access Journals (Sweden)

    Veronica Penza

    2017-05-01

    Full Text Available In abdominal surgery, intraoperative bleeding is one of the major complications that affect the outcome of minimally invasive surgical procedures. One of the causes is attributed to accidental damages to arteries or veins, and one of the possible risk factors falls on the surgeon’s skills. This paper presents the development and application of an Enhanced Vision System for Robotic Surgery (EnViSoRS), based on a user-defined Safety Volume (SV) tracking to minimize the risk of intraoperative bleeding. It aims at enhancing the surgeon’s capabilities by providing Augmented Reality (AR) assistance toward the protection of vessels from injury during the execution of surgical procedures with a robot. The core of the framework consists in (i) a hybrid tracking algorithm (the LT-SAT tracker) that robustly follows a user-defined Safety Area (SA) in the long term; (ii) a dense soft tissue 3D reconstruction algorithm, necessary for the computation of the SV; (iii) AR features for visualization of the SV to be protected and of a graphical gage indicating the current distance between the instruments and the reconstructed surface. EnViSoRS was integrated with a commercial robotic surgical system (the dVRK system) for testing and validation. The experiments aimed at demonstrating the accuracy, robustness, performance, and usability of EnViSoRS during the execution of a simulated surgical task on a liver phantom. Results show an overall accuracy in accordance with surgical requirements (<5 mm), and high robustness in the computation of the SV in terms of precision and recall of its identification. The optimization strategy implemented to speed up the computational time is also described and evaluated, providing AR features update rate up to 4 fps, without impacting the real-time visualization of the stereo endoscopic video. Finally, qualitative results regarding the system usability indicate that the proposed system integrates well with the commercial surgical robot and

  4. Technique of Substantiating Requirements for the Vision Systems of Industrial Robotic Complexes

    Directory of Open Access Journals (Sweden)

    V. Ya. Kolyuchkin

    2015-01-01

    Full Text Available The literature lacks approaches for substantiating the technical requirements for the vision systems (VS) of industrial robotic complexes (IRC). The objective of this work is therefore to develop a technique that allows substantiating requirements for the main quality indicators of a VS functioning as part of an IRC. The proposed technique uses a model representation of the VS which, as part of the IRC information system, sorts the objects in the work area and measures their linear and angular coordinates. To solve the stated problem, the target function of a designed IRC is defined as the dependence of the IRC efficiency indicator on the VS quality indicators. The probability of producing no faulty products during manufacturing is proposed as the indicator of IRC efficiency. Based on the functions the VS performs as part of the IRC information system, the accepted VS quality indicators are: the probability of proper recognition of objects in the IRC working area, and the confidence probabilities of measuring the linear and angular orientation coordinates of objects within specified permissible errors. The specific values of these errors depend on the orientation errors of the working bodies of the manipulators that are part of the IRC. The paper presents mathematical expressions that determine the functional dependence of the probability of fault-free manufacturing on the VS quality indicators and the failure probability of the IRC technological equipment. The proposed technique for substantiating engineering requirements for the VS of an IRC is novel. The results obtained in this work can be useful for professionals involved in the development of IRC VS, in particular of VS algorithms and software.

  5. A Modular, Reconfigurable Mold for a Soft Robotic Gripper Design Activity

    Directory of Open Access Journals (Sweden)

    Jiawei Zhang

    2017-09-01

    Full Text Available Soft robotics is an emerging field with strong potential to serve as an educational tool due to its advantages such as low costs and shallow learning curves. In this paper, we introduce a modular and reconfigurable mold for flexible design of pneumatic soft robotic grippers. By using simple assembly kits, students at all levels are able to design and construct soft robotic grippers that vary in function and performance. The process of constructing the modular mold enables students to understand how design choices impact system performance. Our unique modular mold allows students to select the number and length of fingers in a gripper, as well as to adjust the internal geometry of the pneumatic actuator cavity, which dictates how and where bending of a finger occurs. In addition, the mold may be deconstructed and reconfigured, which allows for fast iterative design and lowers material costs (since a new mold does not need to be made to implement a design change. We further demonstrate the feasibility of the modular mold by implementing it in a soft robot design activity in classrooms and showing a sufficiently high rate of student success in designing and constructing a functional soft robotic gripper.

  6. Object as a model of intelligent robot in the virtual workspace

    Science.gov (United States)

    Foit, K.; Gwiazda, A.; Banas, W.; Sekala, A.; Hryniewicz, P.

    2015-11-01

    The contemporary industry requires that every element of a production line fit into the global schema, which is connected with the global structure of business. There is a need to find practical and effective ways of designing and managing the production process. The term “effective” should be understood to mean that there exists a method which allows building a system of nodes and relations in order to describe the role of a particular machine in the production process. Among all the machines involved in the manufacturing process, industrial robots are the most complex ones. This complexity is reflected in the realization of elaborate tasks, involving handling, transporting or orienting objects in a work space, and even performing simple machining processes, such as deburring, grinding, painting, applying adhesives and sealants, etc. The robot also performs some activities connected with automatic tool changing and operating the equipment mounted on the wrist of the robot. Because it has a programmable control system, the robot also performs additional activities connected with sensors, vision systems, operating the storages of manipulated objects, tools or grippers, measuring stands, etc. For this reason the description of the robot as a part of the production system should take into account the specific nature of this machine: the robot is a substitute for a worker who performs his tasks in a particular environment. In this case, the model should be able to characterize the essence of "employment" in a sufficient way. One of the possible approaches to this problem is to treat the robot as an object, in the sense often used in computer science. This allows both describing certain operations performed on the object and describing the operations performed by the object. This paper focuses mainly on the definition of the object as the model of the robot. This model is confronted with the other possible descriptions.

  7. Object as a model of intelligent robot in the virtual workspace

    International Nuclear Information System (INIS)

    Foit, K; Gwiazda, A; Banas, W; Sekala, A; Hryniewicz, P

    2015-01-01

    The contemporary industry requires that every element of a production line fit into the global schema, which is connected with the global structure of business. There is a need to find practical and effective ways of designing and managing the production process. The term “effective” should be understood to mean that there exists a method which allows building a system of nodes and relations in order to describe the role of a particular machine in the production process. Among all the machines involved in the manufacturing process, industrial robots are the most complex ones. This complexity is reflected in the realization of elaborate tasks, involving handling, transporting or orienting objects in a work space, and even performing simple machining processes, such as deburring, grinding, painting, applying adhesives and sealants, etc. The robot also performs some activities connected with automatic tool changing and operating the equipment mounted on the wrist of the robot. Because it has a programmable control system, the robot also performs additional activities connected with sensors, vision systems, operating the storages of manipulated objects, tools or grippers, measuring stands, etc. For this reason the description of the robot as a part of the production system should take into account the specific nature of this machine: the robot is a substitute for a worker who performs his tasks in a particular environment. In this case, the model should be able to characterize the essence of 'employment' in a sufficient way. One of the possible approaches to this problem is to treat the robot as an object, in the sense often used in computer science. This allows both describing certain operations performed on the object and describing the operations performed by the object. This paper focuses mainly on the definition of the object as the model of the robot. This model is confronted with the other possible

  8. INDUSTRIAL ROBOT REPEATABILITY TESTING WITH HIGH SPEED CAMERA PHANTOM V2511

    Directory of Open Access Journals (Sweden)

    Jerzy Józwik

    2016-12-01

    Full Text Available Apart from accuracy, one of the parameters describing industrial robots is positioning repeatability. This parameter, which is the subject of this paper, is often the decisive factor determining whether to apply a given robot to perform certain tasks or not. Articulated robots are predominantly used in processes such as spot welding, transport of materials and other welding applications, where high positioning repeatability is required. It is therefore essential to characterize this parameter and to control it throughout the operation of the robot. This paper presents a methodology for robot positioning-repeatability measurements based on a vision technique. The measurements were conducted with a Phantom v2511 high-speed camera and TEMA Motion software for motion analysis. The object of the measurements was a 6-axis Yaskawa Motoman HP20F industrial robot. The results obtained in the tests provided data for the calculation of the positioning repeatability of the robot, which was then compared against the robot's specifications. Also analysed was the impact of the direction of displacement on the value of the attained pose errors. Test results are given in graphic form.
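
    The repeatability statistic itself is a simple computation over repeated attained poses; a sketch in the spirit of ISO 9283 follows, with synthetic positions standing in for the camera measurements.

    ```python
    import numpy as np

    # Command the same pose n times, record the attained TCP positions, and
    # report RP = mean radial deviation + 3 * its standard deviation.
    attained = np.random.normal([400.0, 0.0, 300.0], 0.05, (30, 3))  # placeholder, mm

    barycenter = attained.mean(axis=0)
    l = np.linalg.norm(attained - barycenter, axis=1)   # radial deviations
    rp = l.mean() + 3.0 * l.std(ddof=1)                 # positioning repeatability, mm
    print(f"RP = {rp:.3f} mm")
    ```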

  9. Identification and location of catenary insulator in complex background based on machine vision

    Science.gov (United States)

    Yao, Xiaotong; Pan, Yingli; Liu, Li; Cheng, Xiao

    2018-04-01

    Locating the insulator precisely is an important prerequisite for fault detection. Since current algorithms for locating insulators in catenary inspection images are not accurate, a target recognition and localization method based on binocular vision combined with SURF features is proposed. First, because the insulator lies in a complex environment, SURF features are used to achieve coarse positioning of the target; then, the binocular vision principle is used to calculate the 3D coordinates of the coarsely located object, achieving target recognition and fine localization; finally, the 3D coordinate of the object's center of mass is preserved and transferred to the inspection robot to control its detection position. Experimental results demonstrate that the proposed method has better recognition efficiency and accuracy, can successfully identify the target, and has definite application value.
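
    A sketch of this coarse-to-fine pipeline, under stated substitutions: ORB stands in for SURF (SURF is patent-encumbered and lives in opencv-contrib), file names are hypothetical, and the projection matrices are placeholders for the real calibrated stereo pair.

    ```python
    import cv2
    import numpy as np

    imL = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
    imR = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

    # Coarse step: detect and match features across the stereo pair.
    orb = cv2.ORB_create(1000)
    kL, dL = orb.detectAndCompute(imL, None)
    kR, dR = orb.detectAndCompute(imR, None)
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(dL, dR)

    ptsL = np.float32([kL[m.queryIdx].pt for m in matches]).T   # 2 x N
    ptsR = np.float32([kR[m.trainIdx].pt for m in matches]).T

    # Fine step: triangulate matches. Placeholder matrices; use the calibrated
    # K[R|t] pair in practice (12 cm baseline assumed here).
    P1 = np.hstack([np.eye(3), np.zeros((3, 1))]).astype(np.float32)
    P2 = np.hstack([np.eye(3), [[-0.12], [0.0], [0.0]]]).astype(np.float32)
    Xh = cv2.triangulatePoints(P1, P2, ptsL, ptsR)              # homogeneous 4 x N
    X = (Xh[:3] / Xh[3]).T                                      # N x 3 points
    centroid = X.mean(axis=0)                                   # sent to the robot
    ```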

  10. Robotic general surgery: current practice, evidence, and perspective.

    Science.gov (United States)

    Jung, M; Morel, P; Buehler, L; Buchs, N C; Hagen, M E

    2015-04-01

    Robotic technology commenced to be adopted for the field of general surgery in the 1990s. Since then, the da Vinci surgical system (Intuitive Surgical Inc, Sunnyvale, CA, USA) has remained by far the most commonly used system in this domain. The da Vinci surgical system is a master-slave machine that offers three-dimensional vision, articulated instruments with seven degrees of freedom, and additional software features such as motion scaling and tremor filtration. The specific design allows hand-eye alignment with intuitive control of the minimally invasive instruments. As such, robotic surgery appears technologically superior when compared with laparoscopy by overcoming some of the technical limitations that are imposed on the surgeon by the conventional approach. This article reviews the current literature and the perspective of robotic general surgery. While robotics has been applied to a wide range of general surgery procedures, its precise role in this field remains a subject of further research. Until now, only limited clinical evidence has been created that could establish the use of robotics as the gold standard for procedures of general surgery. While surgical robotics is still in its infancy, with multiple novel systems currently under development and clinical trials in progress, the opportunities for this technology appear endless, and robotics should have a lasting impact on the field of general surgery.

  11. Learning for intelligent mobile robots

    Science.gov (United States)

    Hall, Ernest L.; Liao, Xiaoqun; Alhaj Ali, Souma M.

    2003-10-01

    Unlike intelligent industrial robots which often work in a structured factory setting, intelligent mobile robots must often operate in an unstructured environment cluttered with obstacles and with many possible action paths. However, such machines have many potential applications in medicine, defense, industry and even the home that make their study important. Sensors such as vision are needed. However, in many applications some form of learning is also required. The purpose of this paper is to present a discussion of recent technical advances in learning for intelligent mobile robots. During the past 20 years, the use of intelligent industrial robots that are equipped not only with motion control systems but also with sensors such as cameras, laser scanners, or tactile sensors that permit adaptation to a changing environment has increased dramatically. However, relatively little has been done concerning learning. Adaptive and robust control permits one to achieve point to point and controlled path operation in a changing environment. This problem can be solved with a learning control. In the unstructured environment, the terrain and consequently the load on the robot's motors are constantly changing. Learning the parameters of a proportional, integral and derivative (PID) controller and an artificial neural network provides adaptive and robust control. Learning may also be used for path following. Simulations that include learning may be conducted to see if a robot can learn its way through a cluttered array of obstacles. If a situation is performed repetitively, then learning can also be used in the actual application. To reach an even higher degree of autonomous operation, a new level of learning is required. Recently, learning theories such as the adaptive critic have been proposed. In this type of learning, a critic provides a grade to the controller of an action module such as a robot. A creative control process that goes "beyond the adaptive critic" is used.
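
    As a toy illustration of the learning-control idea mentioned above (learning PID gains), the sketch below tunes the three gains by random hill climbing on a simulated first-order plant; the plant, cost, and step sizes are all invented for illustration and are not from the paper.

    ```python
    import numpy as np

    def simulate(gains, setpoint=1.0, dt=0.01, steps=500):
        kp, ki, kd = gains
        y, integ, prev_err, cost = 0.0, 0.0, setpoint, 0.0
        for _ in range(steps):
            err = setpoint - y
            integ += err * dt
            deriv = (err - prev_err) / dt
            u = kp * err + ki * integ + kd * deriv     # PID control law
            y += dt * (-y + u)                         # first-order plant dy/dt = -y + u
            prev_err = err
            cost += err * err * dt                     # integral squared error
        return cost

    rng = np.random.default_rng(0)
    best = np.array([1.0, 0.0, 0.0])
    best_cost = simulate(best)
    for _ in range(300):                               # hill climb with Gaussian steps
        cand = np.clip(best + rng.normal(0, 0.2, 3), 0, None)
        c = simulate(cand)
        if c < best_cost:
            best, best_cost = cand, c
    print(best, best_cost)
    ```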

  12. INSA: Vision and Activities

    International Nuclear Information System (INIS)

    Choe, Kwan-Kyoo

    2013-01-01

    INSA vision: contribution to world peace via advanced and excellent nuclear nonproliferation and security education and training; Objectives: provide practical education and training programs, raise internationally recognized experts, and improve awareness of nuclear nonproliferation and security

  13. Toward a visual cognitive system using active top-down saccadic control

    NARCIS (Netherlands)

    LaCroix, J.; Postma, E.; van den Herik, J.; Murre, J.

    2008-01-01

    The saccadic selection of relevant visual input for preferential processing allows the efficient use of computational resources. Based on saccadic active human vision, we aim to develop a plausible saccade-based visual cognitive system for a humanoid robot. This paper presents two initial steps

  14. A Haptic Guided Robotic System for Endoscope Positioning and Holding.

    Science.gov (United States)

    Cabuk, Burak; Ceylan, Savas; Anik, Ihsan; Tugasaygi, Mehtap; Kizir, Selcuk

    2015-01-01

    To determine the feasibility, advantages, and disadvantages of using a robot for holding and maneuvering the endoscope in transnasal transsphenoidal surgery. The system used in this study was a Stewart Platform based robotic system that was developed by the Kocaeli University Department of Mechatronics Engineering for positioning and holding an endoscope. After the first use on an artificial head model, the system was used on six fresh postmortem bodies that were provided by the Morgue Specialization Department of the Forensic Medicine Institute (Istanbul, Turkey). The setup required for the robotic system was easy; the registration procedure and setup of the robot took 15 minutes. Resistance was felt on the haptic arm in case of contact or friction with adjacent tissues. The adaptation process was shorter when the mouse was used to manipulate the endoscope. The endoscopic transsphenoidal approach was achieved with the robotic system. The endoscope was guided to the sphenoid ostium with the help of the robotic arm. This robotic system can be used in endoscopic transsphenoidal surgery as an endoscope positioner and holder. The robot is able to change position easily with the help of an assistant, prevents tremor, and provides a better field of vision for work.

  15. Visual Control of Robots Using Range Images

    Directory of Open Access Journals (Sweden)

    Fernando Torres

    2010-08-01

    Full Text Available In recent years, 3D-vision systems based on the time-of-flight (ToF) principle have gained importance as a means of obtaining 3D information from the workspace. In this paper, an analysis of the use of 3D ToF cameras to guide a robot arm is performed. To do so, an adaptive method for simultaneous visual servo control and camera calibration is presented. Using this method, a robot arm is guided by range information obtained from a ToF camera. Furthermore, the self-calibration method obtains the adequate integration time to be used by the range camera in order to precisely determine the depth information.
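
    For context, a generic image-based visual-servoing step computes the camera velocity twist from feature errors and point depths, which a ToF camera provides directly. This is a textbook sketch under the assumption of normalized image coordinates, not the paper's specific adaptive calibration scheme.

    ```python
    import numpy as np

    # Interaction matrix for one normalized image point (x, y) at depth Z.
    def interaction_matrix(x, y, Z):
        return np.array([
            [-1 / Z, 0, x / Z, x * y, -(1 + x * x), y],
            [0, -1 / Z, y / Z, 1 + y * y, -x * y, -x],
        ])

    # One servo step: v = -lambda * L^+ * e, where e is the feature error.
    def ibvs_step(features, targets, depths, lam=0.5):
        L = np.vstack([interaction_matrix(x, y, Z)
                       for (x, y), Z in zip(features, depths)])
        e = (np.asarray(features) - np.asarray(targets)).ravel()
        return -lam * np.linalg.pinv(L) @ e     # 6D camera velocity twist
    ```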

  16. Two-Armed, Mobile, Sensate Research Robot

    Science.gov (United States)

    Engelberger, J. F.; Roberts, W. Nelson; Ryan, David J.; Silverthorne, Andrew

    2004-01-01

    The Anthropomorphic Robotic Testbed (ART) is an experimental prototype of a partly anthropomorphic, humanoid-size, mobile robot. The basic ART design concept provides for a combination of two-armed coordination, tactility, stereoscopic vision, mobility with navigation and avoidance of obstacles, and natural-language communication, so that the ART could emulate humans in many activities. The ART could be developed into a variety of highly capable robotic assistants for general or specific applications. There is especially great potential for the development of ART-based robots as substitutes for live-in health-care aides for home-bound persons who are aged, infirm, or physically handicapped; these robots could greatly reduce the cost of home health care and extend the term of independent living. The ART is a fully autonomous and untethered system. It includes a mobile base on which is mounted an extensible torso topped by a head, shoulders, and two arms. All subsystems of the ART are powered by a rechargeable, removable battery pack. The mobile base is a differentially driven, nonholonomic vehicle capable of a speed >1 m/s and can handle a payload >100 kg. The base can be controlled manually, in forward/backward and/or simultaneous rotational motion, by use of a joystick. Alternatively, the motion of the base can be controlled autonomously by an onboard navigational computer. By retraction or extension of the torso, the head height of the ART can be adjusted from 5 ft (1.5 m) to 6 1/2 ft (2 m), so that the arms can reach either the floor or high shelves, or some ceilings. The arms are symmetrical. Each arm (including the wrist) has a total of six rotary axes like those of the human shoulder, elbow, and wrist joints. The arms are actuated by electric motors in combination with brakes and gas-spring assists on the shoulder and elbow joints. The arms are operated under closed-loop digital control. A receptacle for an end effector is mounted on the tip of the wrist and

  17. Markovian robots: Minimal navigation strategies for active particles

    Science.gov (United States)

    Nava, Luis Gómez; Großmann, Robert; Peruani, Fernando

    2018-04-01

    We explore minimal navigation strategies for active particles in complex, dynamical, external fields, introducing a class of autonomous, self-propelled particles which we call Markovian robots (MR). These machines are equipped with a navigation control system (NCS) that triggers random changes in the direction of self-propulsion of the robots. The internal state of the NCS is described by a Boolean variable that adopts two values. The temporal dynamics of this Boolean variable is dictated by a closed Markov chain—ensuring the absence of fixed points in the dynamics—with transition rates that may depend exclusively on the instantaneous, local value of the external field. Importantly, the NCS does not store past measurements of this value in continuous, internal variables. We show that despite the strong constraints, it is possible to conceive closed Markov chain motifs that lead to nontrivial motility behaviors of the MR in one, two, and three dimensions. By analytically reducing the complexity of the NCS dynamics, we obtain an effective description of the long-time motility behavior of the MR that allows us to identify the minimum requirements in the design of NCS motifs and transition rates to perform complex navigation tasks such as adaptive gradient following, detection of minima or maxima, or selection of a desired value in a dynamical, external field. We put these ideas in practice by assembling a robot that operates by the proposed minimalistic NCS to evaluate the robustness of MR, providing a proof of concept that it is possible to navigate through complex information landscapes with such a simple NCS whose internal state can be stored in one bit. These ideas may prove useful for the engineering of miniaturized robots.
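
    A minimal 1D sketch of the mechanism, with an invented motif and invented rates (the paper derives which motifs yield which behaviors): one Boolean NCS state whose transition rate depends only on the instantaneous local field value, with a heading reversal triggered by one of the transitions. A histogram of the trajectory reveals the emergent bias.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    field = lambda x: (x - 5.0) ** 2             # external field, minimum at x = 5
    dt, v = 0.01, 1.0
    x, heading, s = 0.0, 1.0, 0                  # position, direction, NCS state

    xs = []
    for _ in range(200_000):
        c = field(x)
        k01 = 0.1 + 0.05 * c                     # field-dependent transition rate
        k10 = 1.0                                # constant return rate
        if s == 0 and rng.random() < k01 * dt:
            s, heading = 1, -heading             # this transition triggers a reversal
        elif s == 1 and rng.random() < k10 * dt:
            s = 0
        x += heading * v * dt
        xs.append(x)

    hist, edges = np.histogram(xs[50_000:], bins=30)
    print(edges[hist.argmax()])                  # where the robot spends most time
    ```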

  18. Inter-rater reliability of kinesthetic measurements with the KINARM robotic exoskeleton.

    Science.gov (United States)

    Semrau, Jennifer A; Herter, Troy M; Scott, Stephen H; Dukelow, Sean P

    2017-05-22

    Kinesthesia (sense of limb movement) has been extremely difficult to measure objectively, especially in individuals who have survived a stroke. The development of valid and reliable measurements for proprioception is important to developing a better understanding of proprioceptive impairments after stroke and their impact on the ability to perform daily activities. We recently developed a robotic task to evaluate kinesthetic deficits after stroke and found that the majority (~60%) of stroke survivors exhibit significant deficits in kinesthesia within the first 10 days post-stroke. Here we aim to determine the inter-rater reliability of this robotic kinesthetic matching task. Twenty-five neurologically intact control subjects and 15 individuals with first-time stroke were evaluated on a robotic kinesthetic matching task (KIN). Subjects sat in a robotic exoskeleton with their arms supported against gravity. In the KIN task, the robot moved the subjects' stroke-affected arm at a preset speed, direction and distance. As soon as subjects felt the robot begin to move their affected arm, they matched the robot movement with the unaffected arm. Subjects were tested in two sessions on the KIN task: initial session and then a second session (within an average of 18.2 ± 13.8 h of the initial session for stroke subjects), which were supervised by different technicians. The task was performed both with and without the use of vision in both sessions. We evaluated intra-class correlations of spatial and temporal parameters derived from the KIN task to determine the reliability of the robotic task. We evaluated 8 spatial and temporal parameters that quantify kinesthetic behavior. We found that the parameters exhibited moderate to high intra-class correlations between the initial and retest conditions (Range, r-value = [0.53-0.97]). The robotic KIN task exhibited good inter-rater reliability. This validates the KIN task as a reliable, objective method for quantifying

  19. Developing operation algorithms for vision subsystems in autonomous mobile robots

    Science.gov (United States)

    Shikhman, M. V.; Shidlovskiy, S. V.

    2018-05-01

    The paper analyzes algorithms for selecting keypoints in the image for the subsequent automatic detection of people and obstacles. The algorithm is based on the histogram of oriented gradients and the support vector machine method. The combination of these methods allows successful detection of dynamic and static objects. The algorithm can be applied in various autonomous mobile robots.
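
    A minimal OpenCV rendering of this HOG-plus-SVM combination, using the library's stock pedestrian model as a stand-in for the authors' own detector; the file name and confidence threshold are assumptions.

    ```python
    import cv2
    import numpy as np

    # HOG descriptor with the bundled linear-SVM people model.
    hog = cv2.HOGDescriptor()
    hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

    frame = cv2.imread("corridor.png")           # hypothetical camera frame
    rects, weights = hog.detectMultiScale(frame, winStride=(8, 8), scale=1.05)
    for (x, y, w, h), score in zip(rects, np.ravel(weights)):
        if score > 0.5:                          # SVM confidence gate
            cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 0, 255), 2)
    ```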

  20. [Usefulness of the Da Vinci robot in urologic surgery].

    Science.gov (United States)

    Iselin, C; Fateri, F; Caviezel, A; Schwartz, J; Hauser, J

    2007-12-05

    A telemanipulator for laparoscopic instruments is now available in the world of surgical robotics. This device has three distinct advantages over traditional laparoscopic surgery: it improves precision because of the many degrees of freedom of its instruments, and it offers 3-D vision as well as better ergonomics for the surgeon. These characteristics are most useful for procedures that require delicate suturing in a focused operative field which may be difficult to reach. The Da Vinci robot has found its place in two domains of laparoscopic urologic surgery: radical prostatectomy and ureteral surgery. The cost of the robot, as well as the price of its maintenance and instruments, is high. This increases healthcare costs in comparison to open surgery, though not dramatically, since patients spend less time in hospital and return to work earlier.

  1. Application of ultrasonic sensor for measuring distances in robotics

    Science.gov (United States)

    Zhmud, V. A.; Kondratiev, N. O.; Kuznetsov, K. A.; Trubin, V. G.; Dimitrov, L. V.

    2018-05-01

    Ultrasonic sensors allow us to equip robots with a means of perceiving surrounding objects, an alternative to technical vision. Humanoid robots, like robots of other types, are first of all equipped with sensory systems similar to human senses. However, this approach alone is not enough. All possible types and kinds of sensors should be used, including those similar to the senses of other animals (in particular, echolocation in dolphins and bats), as well as sensors that have no analogues in the wild. This paper discusses the main issues that arise when working with the HC-SR04 ultrasonic rangefinder based on the STM32VLDISCOVERY evaluation board. The characteristics of similar modules are given for comparison. A subroutine for working with the sensor is given.
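
    For reference, the sensor's trigger/echo protocol is straightforward to sketch; the following assumes a Raspberry Pi with the RPi.GPIO library rather than the STM32 board used in the paper, and the pin numbers are illustrative.

    ```python
    import time
    import RPi.GPIO as GPIO

    TRIG, ECHO = 23, 24                          # illustrative GPIO pins
    GPIO.setmode(GPIO.BCM)
    GPIO.setup(TRIG, GPIO.OUT)
    GPIO.setup(ECHO, GPIO.IN)

    def read_distance_cm():
        GPIO.output(TRIG, True)                  # 10 us pulse starts a measurement
        time.sleep(10e-6)
        GPIO.output(TRIG, False)
        start = stop = time.time()
        while GPIO.input(ECHO) == 0:             # wait for the echo pulse to begin
            start = time.time()
        while GPIO.input(ECHO) == 1:             # echo width equals round-trip time
            stop = time.time()
        return (stop - start) * 34300.0 / 2.0    # speed of sound ~343 m/s

    print(f"{read_distance_cm():.1f} cm")
    ```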

  2. Development of radiation hardened robot for nuclear facility - Development of real-time stereo object tracking system using the optical correlator

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Eun Soo; Lee, S. H.; Lee, J. S. [Kwangwoon University, Seoul (Korea)

    2000-03-01

    Object tracking with the centroid method used in the KAERI-M1 stereo robot vision system, developed at the Atomic Research Center, is too sensitive to variations in the target's illumination and, because of its inability to take the surrounding background into account, its application under actual conditions is very limited. The correlation method can form a relatively stable object tracker in the presence of noise, but the amount of digital computation required for image correlation is too large for real-time implementation. The development of a stereo object tracking system based on optical correlation, using high-speed optical information processing techniques, will therefore put a stable real-time stereo object tracking system and a substantial stereo robot vision system for the atomic industry to practical use. This research concerns the development of a real-time stereo object tracking algorithm using an optical correlation system, a technique applicable to the Atomic Research Center's KAERI-M1 stereo vision robot, which will be used in remote operations in atomic facilities; it revises the stereo disparity using a real-time optical correlation technique and applies the stereo object tracking algorithm to the KAERI-M1 stereo robot. 19 refs., 45 figs., 2 tabs. (Author)
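
    A digital stand-in for the optical correlator, for orientation only: normalized cross-correlation locates a stored target template in each frame, which is more robust to illumination change and background clutter than centroid tracking; the optical system computes the same correlation in parallel at far higher speed. File names and the confidence threshold are assumptions.

    ```python
    import cv2

    template = cv2.imread("target.png", cv2.IMREAD_GRAYSCALE)   # hypothetical
    frame = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)

    res = cv2.matchTemplate(frame, template, cv2.TM_CCOEFF_NORMED)
    _, score, _, top_left = cv2.minMaxLoc(res)   # correlation peak and location
    if score > 0.6:                              # confidence gate on the peak
        h, w = template.shape
        center = (top_left[0] + w // 2, top_left[1] + h // 2)
    ```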

  3. Robotic assisted laparoscopic colectomy.

    LENUS (Irish Health Repository)

    Pandalai, S

    2010-06-01

    Robotic surgery has evolved over the last decade to compensate for limitations in human dexterity. It avoids the need for a trained assistant while decreasing error rates such as perforations. The nature of the robotic assistance varies from voice activated camera control to more elaborate telerobotic systems such as the Zeus and the Da Vinci where the surgeon controls the robotic arms using a console. Herein, we report the first series of robotic assisted colectomies in Ireland using a voice activated camera control system.

  4. Towards Autonomous Operations of the Robonaut 2 Humanoid Robotic Testbed

    Science.gov (United States)

    Badger, Julia; Nguyen, Vienny; Mehling, Joshua; Hambuchen, Kimberly; Diftler, Myron; Luna, Ryan; Baker, William; Joyce, Charles

    2016-01-01

    The Robonaut project has been conducting research in robotics technology on board the International Space Station (ISS) since 2012. Recently, the original upper body humanoid robot was upgraded by the addition of two climbing manipulators ("legs"), more capable processors, and new sensors, as shown in Figure 1. While Robonaut 2 (R2) has been working through checkout exercises on orbit following the upgrade, technology development on the ground has continued to advance. Through the Active Reduced Gravity Offload System (ARGOS), the Robonaut team has been able to develop technologies that will enable full operation of the robotic testbed on orbit using similar robots located at the Johnson Space Center. Once these technologies have been vetted in this way, they will be implemented and tested on the R2 unit on board the ISS. The goal of this work is to create a fully-featured robotics research platform on board the ISS to increase the technology readiness level of technologies that will aid in future exploration missions. Technology development has thus far followed two main paths, autonomous climbing and efficient tool manipulation. Central to both technologies has been the incorporation of a human robotic interaction paradigm that involves the visualization of sensory and pre-planned command data with models of the robot and its environment. Figure 2 shows screenshots of these interactive tools, built in rviz, that are used to develop and implement these technologies on R2. Robonaut 2 is designed to move along the handrails and seat track around the US lab inside the ISS. This is difficult for many reasons, namely the environment is cluttered and constrained, the robot has many degrees of freedom (DOF) it can utilize for climbing, and remote commanding for precision tasks such as grasping handrails is time-consuming and difficult. Because of this, it is important to develop the technologies needed to allow the robot to reach operator-specified positions as

  5. Overview and Categorization of Robots Supporting Independent Living of Elderly People: What Activities Do They Support and How Far Have They Developed.

    Science.gov (United States)

    Bedaf, Sandra; Gelderblom, Gert Jan; De Witte, Luc

    2015-01-01

    Over the past decades, many robots for the elderly have been developed, supporting different activities of elderly people. A systematic review in four scientific literature databases and a search in article references and European projects were performed in order to create an overview of robots supporting independent living of elderly people. The robots found were categorized based on their development stage, the activity domains they claim to support, and the type of support provided (i.e., physical, non-physical, and/or non-specified). In total, 107 robots for the elderly were identified. Six robots were still in a concept phase, 95 in a development phase, and six of these robots were commercially available. These robots claimed to provide support related to four activity domains: mobility, self-care, interpersonal interaction & relationships, and other activities. Of the many robots developed, only a small percentage is commercially available. Technical ambitions seem to be guiding robot development. To prolong independent living, the step towards physical support is inevitable and needs to be taken. However, it will be a long time before a robot will be capable of supporting multiple activities in a physical manner in the home of an elderly person in order to enhance their independent living.

  6. EAST-AIA deployment under vacuum: Calibration of laser diagnostic system using computer vision

    Energy Technology Data Exchange (ETDEWEB)

    Yang, Yang, E-mail: yangyang@ipp.ac.cn [Institute of Plasma Physics, Chinese Academy of Sciences, 350 Shushanhu Rd, Hefei, Anhui (China); Song, Yuntao; Cheng, Yong; Feng, Hansheng; Wu, Zhenwei; Li, Yingying; Sun, Yongjun; Zheng, Lei [Institute of Plasma Physics, Chinese Academy of Sciences, 350 Shushanhu Rd, Hefei, Anhui (China); Bruno, Vincent; Eric, Villedieu [CEA-IRFM, F-13108 Saint-Paul-Lez-Durance (France)

    2016-11-15

    Highlights: • The first deployment of the EAST articulated inspection arm robot under vacuum is presented. • A computer vision based approach to measure the laser spot displacement is proposed. • An experiment on the real EAST tokamak is performed to validate the proposed measurement approach, and the results show that the measurement accuracy satisfies the requirement. - Abstract: For the operation of the EAST tokamak, it is crucial to ensure that all the diagnostic systems are in good condition in order to reflect the plasma status properly. However, most of the diagnostic systems are mounted inside the tokamak vacuum vessel, which makes them extremely difficult to maintain under high-vacuum conditions during tokamak operation. Thanks to a system called the EAST articulated inspection arm robot (EAST-AIA), the examination of these in-vessel diagnostic systems can be performed by an embedded camera carried by the robot. In this paper, a computer vision algorithm has been developed to calibrate a laser diagnostic system with the help of a monocular camera at the robot end. In order to estimate the displacement of the laser diagnostic system with respect to the vacuum vessel, several visual markers were attached to the inner wall. This experiment was conducted both on the EAST vacuum vessel mock-up and on the real EAST tokamak under vacuum conditions. As a result, the accuracy of the displacement measurement was within 3 mm under the current camera resolution, which satisfied the laser diagnostic system calibration.
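
    A sketch of the image-side measurement, with invented threshold, marker spacing, and reference position: find the laser-spot centroid in the camera frame, then convert its pixel displacement to millimetres using the known spacing of the wall markers.

    ```python
    import cv2
    import numpy as np

    img = cv2.imread("wall.png", cv2.IMREAD_GRAYSCALE)          # hypothetical frame
    _, mask = cv2.threshold(img, 240, 255, cv2.THRESH_BINARY)   # bright spot only
    m = cv2.moments(mask)
    if m["m00"] > 0:                                            # spot was found
        spot = np.array([m["m10"] / m["m00"], m["m01"] / m["m00"]])  # centroid, px
        MM_PER_PX = 50.0 / 120.0       # markers 50 mm apart appear 120 px apart
        reference = np.array([320.0, 240.0])   # spot position at calibration time
        displacement_mm = np.linalg.norm(spot - reference) * MM_PER_PX
    ```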

  7. Attention-based navigation in mobile robots using a reconfigurable sensor

    NARCIS (Netherlands)

    Maris, M.

    2001-01-01

    In this paper, a method for visual attentional selection in mobile robots is proposed, based on amplification of the selected stimulus. Attention processing is performed on the vision sensor, which is integrated on a silicon chip and consists of a contrast sensitive retina with the ability to change

  8. The First Korean Experience of Telemanipulative Robot-Assisted Laparoscopic Cholecystectomy Using the da Vinci System

    Science.gov (United States)

    Kang, Chang Moo; Chi, Hoon Sang; Hyeung, Woo Jin; Kim, Kyung Sik; Choi, Jin Sub; Kim, Byong Ro

    2007-01-01

    With the advancement of laparoscopic instruments and computer sciences, complex surgical procedures are expected to be safely performed by robot-assisted telemanipulative laparoscopic surgery. The da Vinci system (Intuitive Surgical, Mountain View, CA, USA) became available in many surgical fields. The wrist-like movements of the instrument's tip, as well as 3-dimensional vision, can be expected to facilitate more complex laparoscopic procedures. Here, we present the first Korean experience of da Vinci robot-assisted laparoscopic cholecystectomy and discuss the introduction and perspectives of this robotic system. PMID:17594166

  9. Obstacle detection by stereo vision of fast correlation matching

    International Nuclear Information System (INIS)

    Jeon, Seung Hoon; Kim, Byung Kook

    1997-01-01

    Mobile robot navigation requires acquiring the positions of obstacles in real time. A common method for performing this sensing is stereo vision. In this paper, indoor images containing obstacles of various shapes are acquired by binocular vision. To obtain distances to obstacles from these stereo image data, we must deal with the correspondence problem, i.e., finding the region in the other image corresponding to the projection of the same surface region. We present an improved correlation matching method that enhances the speed of arbitrary obstacle detection. The result is faster and simpler matching, robustness to noise, and improved precision. Experimental results in real surroundings are presented to demonstrate the performance. (author)
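
    In modern OpenCV terms, this correlation-based correspondence search corresponds to block matching over a rectified stereo pair; a minimal sketch follows, with assumed calibration values and file names.

    ```python
    import cv2
    import numpy as np

    # Inputs must be rectified grayscale images.
    left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
    right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

    bm = cv2.StereoBM_create(numDisparities=64, blockSize=15)
    disp = bm.compute(left, right).astype(np.float32) / 16.0   # fixed-point to px

    f_px, baseline_m = 700.0, 0.12          # assumed focal length and baseline
    valid = disp > 0
    depth = np.zeros_like(disp)
    depth[valid] = f_px * baseline_m / disp[valid]   # Z = f * B / d, in metres
    ```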

  10. Robots in P.W.R. nuclear powerplants

    International Nuclear Information System (INIS)

    Dubourg, M.

    1987-01-01

    The satisfactory operation of 37 900-MWe PWR powerplants in France, Belgium and South-Africa and the start-up of 1300 MWe powerplants allowed the development of a wide range of automatic units and robots for the periodic maintenance of nuclear plants, reducing the risk of ionizing radiation for the personnel. A large number of automated tools have been built. Among them: - inspection and maintenance systems for the tube bundle of steam generators, - robotized arms ROTETA and ROMEO for the heavy maintenance and delicate operations such as tube extraction or shot peening of tubes to improve their resistance to corrosion; - the versatile manipulator T.A.M. with electrically controlled articulations. The development of functionally versatile tools and robots and the integration of new technologies such as 3-D vision allowed the construction of the self-guided vehicle FRASTAR capable of moving within a nuclear building and in a cluttered environment. This vehicle includes means for avoiding isolated obstacles and can move on stairs [fr

  11. Robotics research in Chile

    Directory of Open Access Journals (Sweden)

    Javier Ruiz-del-Solar

    2016-12-01

    Full Text Available The development of research in robotics in a developing country is a challenging task. Factors such as low research funds, low trust from local companies and the government, and a small number of qualified researchers hinder the development of strong local research groups. In this article, as a case study, we present our robotics research group at the Advanced Mining Technology Center of the Universidad de Chile and the way in which we have addressed these challenges. In 2008, we decided to focus our research efforts on mining, which is the main industry in Chile. We observed that this industry has needs in terms of safety, productivity, operational continuity, and environmental care. All of these needs could be addressed with robotics and automation technology. In a first stage, we concentrated on building capabilities in field robotics, starting with the automation of a commercial vehicle. An important outcome of this project was earning the confidence of the local mining industry. Then, in a second stage starting in 2012, we began working with the local mining industry on technological projects. In this article, we describe three of the technological projects that we have developed with industry support: (i) an autonomous vehicle for mining environments without global positioning system coverage; (ii) the inspection of the irrigation flow in heap leach piles using unmanned aerial vehicles and thermal cameras; and (iii) an enhanced vision system for vehicle teleoperation in adverse climatic conditions.

  12. Influence of robotic shoal size, configuration, and activity on zebrafish behavior in a free-swimming environment.

    Science.gov (United States)

    Butail, Sachit; Polverino, Giovanni; Phamduy, Paul; Del Sette, Fausto; Porfiri, Maurizio

    2014-12-15

    In animal studies, robots have been recently used as a valid tool for testing a wide spectrum of hypotheses. These robots often exploit visual or auditory cues to modulate animal behavior. The propensity of zebrafish, a model organism in biological studies, toward fish with similar color patterns and shape has been leveraged to design biologically inspired robots that successfully attract zebrafish in preference tests. With an aim of extending the application of such robots to field studies, here, we investigate the response of zebrafish to multiple robotic fish swimming at different speeds and in varying arrangements. A soft real-time multi-target tracking and control system remotely steers the robots in circular trajectories during the experimental trials. Our findings indicate a complex behavioral response of zebrafish to biologically inspired robots. More robots produce a significant change in salient measures of stress, with a fast robot swimming alone causing more freezing and erratic activity than two robots swimming slowly together. In addition, fish spend more time in the proximity of a robot when they swim far apart than when the robots swim close to each other. Increase in the number of robots also significantly alters the degree of alignment of fish motion with a robot. Results from this study are expected to advance our understanding of robot perception by live animals and aid in hypothesis-driven studies in unconstrained free-swimming environments. Copyright © 2014 Elsevier B.V. All rights reserved.

  13. ePAL Vision 2020 for active ageing of senior professionals

    NARCIS (Netherlands)

    Afsarmanesh, H.; Msanjila, S.S.

    2010-01-01

    In order to enhance the active life of senior professionals, one fundamental challenge is to identify ways to assist in promoting the role of elder people within the continuously ageing European society. This paper proposes a vision for establishing the required support environment for communities of senior professionals.

  14. HexaMob—A Hybrid Modular Robotic Design for Implementing Biomimetic Structures

    Directory of Open Access Journals (Sweden)

    Sasanka Sankhar Reddy CH.

    2017-10-01

    Full Text Available Modular robots are capable of forming primitive shapes such as lattice and chain structures, with the additional flexibility of distributed sensing. The biomimetic structures developed using such modular units provide ease of replacement and reconfiguration in coordinated structures, transportation, etc. in real-life scenarios. Though research on employing modular robotic units in the formation of structures that mimic biological organisms is at a nascent stage, modular robotic units are already capable of forming such sophisticated structures. The modular robotic designs proposed so far vary significantly in external structure, sensor-actuator mechanisms, interfaces for docking and undocking, techniques for providing mobility, coordinated structures, locomotion, etc., and each design has attempted to address the various challenges faced in the domain of modular robotics with different strategies. This paper presents a novel modular wheeled robotic design, HexaMob, facilitating four degrees of freedom (2 for mobility and 2 for structural reconfiguration) on a single module with minimal usage of sensor-actuator assemblies. The crucial features of modular robotics, such as back-driving restriction, docking, and navigation, are addressed in the process of HexaMob design. The proposed docking mechanism is enabled using a vision sensor, enhancing the capabilities in docking as well as navigation in coordinated structures such as humanoid robots.

  15. A reliability study on brain activation during active and passive arm movements supported by an MRI-compatible robot.

    Science.gov (United States)

    Estévez, Natalia; Yu, Ningbo; Brügger, Mike; Villiger, Michael; Hepp-Reymond, Marie-Claude; Riener, Robert; Kollias, Spyros

    2014-11-01

    In neurorehabilitation, longitudinal assessment of arm-movement-related brain function in patients with motor disability is challenging due to variability in task performance. MRI-compatible robots monitor and control task performance, yielding more reliable evaluation of brain function over time. The main goals of the present study were first to define the brain network activated while performing active and passive elbow movements with an MRI-compatible arm robot (MaRIA) in healthy subjects, and second to test the reproducibility of this activation over time. For the fMRI analysis two models were compared. In model 1 movement onset and duration were included, whereas in model 2 force and range of motion were added to the analysis. Reliability of brain activation was tested with several statistical approaches applied to individual and group activation maps and to summary statistics. The activated network included mainly the primary motor cortex, primary and secondary somatosensory cortex, superior and inferior parietal cortex, medial and lateral premotor regions, and subcortical structures. Reliability analyses revealed robust activation for active movements with both fMRI models and all the statistical methods used. Imposed passive movements also elicited mainly robust brain activation for individual and group activation maps, and reliability was improved by including additional force and range of motion using model 2. These findings demonstrate that the use of robotic devices, such as MaRIA, can be useful to reliably assess arm-movement-related brain activation in longitudinal studies and may contribute to studies evaluating therapies and brain plasticity following injury in the nervous system.

  16. Computer vision for automatic inspection of agricultural produce

    Science.gov (United States)

    Molto, Enrique; Blasco, Jose; Benlloch, Jose V.

    1999-01-01

    Fruit and vegetables suffer different manipulations from the field to the final consumer, basically oriented towards cleaning and selecting the product into homogeneous categories. For this reason, several research projects aimed at fast, adequate produce sorting and quality control are currently under development around the world. Moreover, it is possible to find manual and semi-automatic commercial systems capable of reasonably performing these tasks. However, in many cases their accuracy is incompatible with current European market demands, which are constantly increasing. IVIA, the Valencian Research Institute of Agriculture, located in Spain, has been involved in several European projects related to machine vision for real-time inspection of various agricultural produce. This paper will focus on work related to two products that have different requirements: fruit and olives. In the case of fruit, the Institute has developed a vision system capable of providing an assessment of the external quality of single fruit to a robot that also receives information from other sensors. The system uses four different views of each fruit and has been tested on peaches, apples and citrus. Processing time of each image is under 500 ms using a conventional PC. The system provides information about primary and secondary color, blemishes and their extension, and stem presence and position, which allows further automatic orientation of the fruit in the final box using a robotic manipulator. Work carried out on olives was devoted to fast sorting of olives for table consumption. A prototype has been developed to demonstrate the feasibility of a machine vision system capable of automatically sorting 2500 kg/h of olives using low-cost conventional hardware.

  17. Robotic technology in surgery: current status in 2008.

    Science.gov (United States)

    Murphy, Declan G; Hall, Rohan; Tong, Raymond; Goel, Rajiv; Costello, Anthony J

    2008-12-01

    There is increasing patient and surgeon interest in robotic-assisted surgery, particularly with the proliferation of da Vinci surgical systems (Intuitive Surgical, Sunnyvale, CA, USA) throughout the world. There is much debate over the usefulness and cost-effectiveness of these systems. The currently available robotic surgical technology is described. Published data relating to the da Vinci system are reviewed and the current status of surgical robotics within Australia and New Zealand is assessed. The first da Vinci system in Australia and New Zealand was installed in 2003. Four systems had been installed by 2006 and seven systems are currently in use. Most of these are based in private hospitals. Technical advantages of this system include 3-D vision, enhanced dexterity and improved ergonomics when compared with standard laparoscopic surgery. Most procedures currently carried out are urological, with cardiac, gynaecological and general surgeons also using this system. The number of patients undergoing robotic-assisted surgery in Australia and New Zealand has increased fivefold in the past 4 years. The most common procedure carried out is robotic-assisted laparoscopic radical prostatectomy. Published data suggest that robotic-assisted surgery is feasible and safe although the installation and recurring costs remain high. There is increasing acceptance of robotic-assisted surgery, especially for urological procedures. The da Vinci surgical system is becoming more widely available in Australia and New Zealand. Other surgical specialties will probably use this technology. Significant costs are associated with robotic technology and it is not yet widely available to public patients.

  18. Localization from Visual Landmarks on a Free-Flying Robot

    Science.gov (United States)

    Coltin, Brian; Fusco, Jesse; Moratto, Zack; Alexandrov, Oleg; Nakamura, Robert

    2016-01-01

    We present the localization approach for Astrobee, a new free-flying robot designed to navigate autonomously on the International Space Station (ISS). Astrobee will accommodate a variety of payloads and enable guest scientists to run experiments in zero-g, as well as assist astronauts and ground controllers. Astrobee will replace the SPHERES robots which currently operate on the ISS, whose use of fixed ultrasonic beacons for localization limits them to work in a 2 meter cube. Astrobee localizes with monocular vision and an IMU, without any environmental modifications. Visual features detected on a pre-built map, optical flow information, and IMU readings are all integrated into an extended Kalman filter (EKF) to estimate the robot pose. We introduce several modifications to the filter to make it more robust to noise, and extensively evaluate the localization algorithm.
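
    To make the fusion step concrete, here is a minimal sketch of such an EKF, not the Astrobee flight code: an IMU-driven prediction followed by a correction from a position measurement obtained by matching visual features to the map. The one-dimensional constant-velocity state and all noise values are assumptions for illustration:

      import numpy as np

      dt = 0.1                                 # IMU sample period (s), assumed
      F = np.array([[1.0, dt], [0.0, 1.0]])    # constant-velocity transition
      Q = 1e-3 * np.eye(2)                     # process noise (tuning assumption)
      H = np.array([[1.0, 0.0]])               # vision observes position only
      R = np.array([[1e-2]])                   # measurement noise (tuning assumption)

      def predict(x, P, accel):
          """Propagate state [position, velocity] with an IMU acceleration."""
          x = F @ x + np.array([0.5 * dt**2, dt]) * accel
          P = F @ P @ F.T + Q
          return x, P

      def update(x, P, z):
          """Correct with a position fix from features matched to the map."""
          y = z - H @ x                        # innovation
          S = H @ P @ H.T + R                  # innovation covariance
          K = P @ H.T @ np.linalg.inv(S)       # Kalman gain
          return x + K @ y, (np.eye(2) - K @ H) @ P

      x, P = np.zeros(2), np.eye(2)
      x, P = predict(x, P, accel=0.05)
      x, P = update(x, P, z=np.array([0.02]))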

  19. Pose estimation for mobile robots working on turbine blade

    Energy Technology Data Exchange (ETDEWEB)

    Ma, X.D.; Chen, Q.; Liu, J.J.; Sun, Z.G.; Zhang, W.Z. [Tsinghua Univ., Beijing (China). Key Laboratory for Advanced Materials Processing Technology, Ministry of Education, Dept. of Mechanical Engineering

    2009-03-11

    This paper discussed a feature point detection and matching technique for mobile robots used in wind turbine blade applications. The vision-based scheme used visual information from the robot's surrounding environment to match successive image frames. An improved pose estimation algorithm based on the scale invariant feature transform (SIFT) was developed to account for the characteristics of local images of turbine blades and the conditions of the pose estimation problem. The method included a pre-subsampling technique for reducing computation and bidirectional matching for improving precision. A random sample consensus (RANSAC) method was used to estimate the robot's pose. Pose estimation conditions included a wide pose range; the distance between neighbouring blades; and mechanical, electromagnetic, and optical disturbances. An experimental platform was used to demonstrate the validity of the proposed algorithm. 20 refs., 6 figs.
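
    The pipeline can be sketched with standard OpenCV primitives, as below; this is an interpretation, not the authors' code. Pre-subsampling is approximated by downscaling, bidirectional matching keeps only mutually consistent SIFT correspondences, and the RANSAC pose step is shown via the essential matrix, with the camera intrinsics K_cam assumed known:

      import cv2
      import numpy as np

      def relative_pose(img1, img2, K_cam):
          # Pre-subsampling: work on half-resolution images to cut computation
          s1 = cv2.resize(img1, None, fx=0.5, fy=0.5)
          s2 = cv2.resize(img2, None, fx=0.5, fy=0.5)
          sift = cv2.SIFT_create()
          k1, d1 = sift.detectAndCompute(s1, None)
          k2, d2 = sift.detectAndCompute(s2, None)
          bf = cv2.BFMatcher(cv2.NORM_L2)
          fwd = bf.match(d1, d2)                         # frame 1 -> frame 2
          bwd = {m.trainIdx: m.queryIdx for m in bf.match(d2, d1)}
          # Bidirectional matching: keep only mutually consistent pairs
          mutual = [m for m in fwd if bwd.get(m.queryIdx) == m.trainIdx]
          p1 = np.float32([k1[m.queryIdx].pt for m in mutual]) * 2.0
          p2 = np.float32([k2[m.trainIdx].pt for m in mutual]) * 2.0
          # RANSAC rejects outliers while estimating the essential matrix
          E, mask = cv2.findEssentialMat(p1, p2, K_cam, cv2.RANSAC, 0.999, 1.0)
          _, R, t, _ = cv2.recoverPose(E, p1, p2, K_cam, mask=mask)
          return R, t                                    # rotation, unit translation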

  20. Vision based persistent localization of a humanoid robot for locomotion tasks

    Directory of Open Access Journals (Sweden)

    Martínez Pablo A.

    2016-09-01

    Full Text Available Typical monocular localization schemes involve a search for matches between reprojected 3D world points and 2D image features in order to estimate the absolute scale transformation between the camera and the world. Successfully calculating such a transformation implies the existence of a good number of 3D points uniformly distributed as reprojected pixels around the image plane. This paper presents a method to control the march of a humanoid robot towards directions that are favorable for vision-based localization. To this end, orthogonal diagonalization is performed on the covariance matrices of both the set of 3D world points and the set of their 2D image reprojections. Experiments with the NAO humanoid platform show that our method provides persistence of localization, as the robot tends to walk towards directions that are desirable for successful localization. Additional tests demonstrate how the proposed approach can be incorporated into a control scheme that also considers reaching a target position.
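
    A minimal numerical sketch of that idea follows, under the assumption that a favorable walking direction is the horizontal direction of greatest spread of the visible 3D map points; the heading-angle interface is invented here for illustration and is not the paper's controller:

      import numpy as np

      def favorable_heading(world_pts):
          """world_pts: (N, 3) array of currently visible 3D map points.
          Returns a heading angle toward the horizontal direction in which
          the points spread the most."""
          centered = world_pts - world_pts.mean(axis=0)
          cov = centered.T @ centered / len(world_pts)
          eigvals, eigvecs = np.linalg.eigh(cov)         # orthogonal diagonalization
          principal = eigvecs[:, np.argmax(eigvals)]     # axis of maximum variance
          return np.arctan2(principal[1], principal[0])  # project onto the x-y plane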

  1. Automating the Incremental Evolution of Controllers for Physical Robots.

    Science.gov (United States)

    Faíña, Andrés; Jacobsen, Lars Toft; Risi, Sebastian

    2017-01-01

    Evolutionary robotics is challenged with some key problems that must be solved, or at least mitigated extensively, before it can fulfill some of its promises to deliver highly autonomous and adaptive robots. The reality gap and the ability to transfer phenotypes from simulation to reality constitute one such problem. Another lies in the embodiment of the evolutionary processes, which links to the first, but focuses on how evolution can act on real agents and occur independently from simulation, that is, going from being, as Eiben, Kernbach, & Haasdijk [2012, p. 261] put it, "the evolution of things, rather than just the evolution of digital objects…" The work presented here investigates how fully autonomous evolution of robot controllers can be realized in hardware, using an industrial robot and a marker-based computer vision system. In particular, this article presents an approach to automate the reconfiguration of the test environment and shows that it is possible, for the first time, to incrementally evolve a neural robot controller for different obstacle avoidance tasks with no human intervention. Importantly, the system offers a high level of robustness and precision that could potentially open up the range of problems amenable to embodied evolution.
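
    In the spirit of that setup, the following schematic sketch shows an embodied-evolution loop in which genomes are evaluated directly on the physical robot and scored through an external vision system; the evaluation stub, population size, mutation strength and truncation selection are illustrative assumptions, not the authors' algorithm:

      import random

      POP, SIGMA, GENERATIONS = 10, 0.1, 50   # illustrative settings

      def evaluate_on_robot(genome):
          """Placeholder: upload the genome as controller weights, run a
          trial on the physical robot, and score it via vision markers."""
          return -sum(g * g for g in genome)  # stand-in fitness

      population = [[random.gauss(0, 1) for _ in range(8)] for _ in range(POP)]
      for _ in range(GENERATIONS):
          ranked = sorted(population, key=evaluate_on_robot, reverse=True)
          parents = ranked[: POP // 2]                      # truncation selection
          children = [[g + random.gauss(0, SIGMA) for g in p] for p in parents]
          population = parents + children                   # next generation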

  2. Soft-Material Robotics

    OpenAIRE

    Wang, L; Nurzaman, SG; Iida, Fumiya

    2017-01-01

    There has been a boost of research activity in robotics using soft materials in the past ten years. It is expected that the use and control of soft materials can help realize robotic systems that are safer, cheaper, and more adaptable than conventional rigid-material robots can achieve. Contrary to a number of existing review and position papers on soft-material robotics, which mostly present case studies and/or discuss trends and challenges, the review focuses on the fun...

  3. Biologically inspired robots as artificial inspectors

    Science.gov (United States)

    Bar-Cohen, Yoseph

    2002-06-01

    Imagine an inspector conducting an NDE on an aircraft where you notice something is different about him - he is not real but rather he is a robot. Your first reaction would probably be to say 'it's unbelievable but he looks real', just as you would react to an artificial flower that is a good imitation. This science fiction scenario could become a reality given the trend in the development of biologically inspired technologies, where terms like artificial intelligence, artificial muscles, artificial vision and numerous others are increasingly becoming common engineering tools. For many years, the trend has been to automate processes in order to increase the efficiency of performing redundant tasks, and various systems have been developed to deal with specific production line requirements. Realizing that some parts are too complex or delicate to handle in small quantities with a simple automatic system, robotic mechanisms were developed. Aircraft inspection has benefitted from this evolving technology, with manipulators and crawlers developed for rapid and reliable inspection. Advances in robotics towards making robots autonomous, and possibly human-like in appearance, can potentially address the need to inspect structures that are beyond the capability of today's technology, in configurations that are not predetermined. The operation of these robots may take place in harsh or hazardous environments that are too dangerous for human presence. Making such robots is becoming increasingly feasible, and in this paper the state of the art is reviewed.

  4. Robot-assisted motor activation monitored by time-domain optical brain imaging

    Science.gov (United States)

    Steinkellner, O.; Wabnitz, H.; Schmid, S.; Steingräber, R.; Schmidt, H.; Krüger, J.; Macdonald, R.

    2011-07-01

    Robot-assisted motor rehabilitation has proved to be an effective supplement to conventional hand-to-hand therapy in stroke patients. In order to analyze and understand motor learning and performance during rehabilitation, it is desirable to develop a monitor that provides objective measures of the corresponding brain activity as rehabilitation progresses. We used a portable time-domain near-infrared reflectometer to monitor the hemodynamic brain response to distal upper extremity activities. Four healthy volunteers performed two different robot-assisted wrist/forearm movements, flexion-extension and pronation-supination, in comparison with an unassisted squeeze-ball exercise. A special headgear with four optical measurement positions covering parts of the pre- and postcentral gyrus provided a good overlap with the expected activation areas. Data analysis based on the variance of time-of-flight distributions of photons through tissue was chosen to provide a suitable representation of intracerebral signals. In all subjects several of the four detection channels showed a response. In some cases indications were found of differences in the localization of the activated areas for the various tasks.
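
    As a small worked sketch of that analysis step (an illustration, not the authors' software), the function below computes the mean and variance of a measured photon time-of-flight distribution from its histogram; the bin width is an assumed instrument parameter:

      import numpy as np

      def dtof_moments(counts, bin_width_ps=25.0):
          """Mean and variance of a photon time-of-flight histogram."""
          t = np.arange(len(counts)) * bin_width_ps          # bin times in ps
          total = counts.sum()
          mean = (t * counts).sum() / total                  # first moment
          var = (((t - mean) ** 2) * counts).sum() / total   # second central moment
          return mean, var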

  5. University of Michigan workscope for 1991 DOE University program in robotics for advanced reactors

    International Nuclear Information System (INIS)

    Wehe, D.K.

    1990-01-01

    The University of Michigan (UM) is a member of a team of researchers, including the universities of Florida, Texas, and Tennessee, along with Oak Ridge National Laboratory, developing robotics for hazardous environments. The goal of this research is to develop intelligent and capable robots that can perform useful functions in the new generation of nuclear reactors currently under development. By augmenting human capabilities through remote robotics, increased safety, functionality, and reliability can be achieved. In accordance with the established lines of research responsibilities, our primary efforts during 1991 will continue to focus on the following areas: radiation imaging; mobile robot navigation; three-dimensional vision capabilities for navigation; and machine intelligence. This report discusses work that has been and will be done in these areas.

  6. REPORT ON FIRST INTERNATIONAL WORKSHOP ON ROBOTIC SURGERY IN THORACIC ONCOLOGY

    Directory of Open Access Journals (Sweden)

    Giulia Veronesi

    2016-10-01

    Full Text Available A workshop of experts from France, Germany, Italy and the United States took place at Humanitas Research Hospital, Milan, Italy, on 10-11 February 2016, to examine techniques for and applications of robotic surgery to thoracic oncology. The main topics of presentation and discussion were: robotic surgery for lung resection; robot-assisted thymectomy; minimally invasive surgery for esophageal cancer; new developments in computer-assisted surgery and medical applications of robots; the challenge of costs; and future clinical research in robotic thoracic surgery. The following article summarizes the main contributions to the workshop. The workshop consensus was that, since video-assisted thoracoscopic surgery (VATS) is becoming the mainstream approach to resectable lung cancer in North America and Europe, robotic surgery for thoracic oncology is likely to be embraced by an increasing number of thoracic surgeons, since it has technical advantages over VATS, including intuitive movements, tremor filtration, more degrees of manipulative freedom, motion scaling, and high-definition stereoscopic vision. These advantages may make robotic surgery more accessible than VATS to trainees and experienced surgeons, and also lead to expanded indications. However, the high costs of robotic surgery and the absence of tactile feedback remain obstacles to widespread dissemination. A prospective multicentric randomized trial (NCT02804893) to compare robotic and VATS approaches to stage I and II lung cancer will start shortly.

  7. Report on First International Workshop on Robotic Surgery in Thoracic Oncology.

    Science.gov (United States)

    Veronesi, Giulia; Cerfolio, Robert; Cingolani, Roberto; Rueckert, Jens C; Soler, Luc; Toker, Alper; Cariboni, Umberto; Bottoni, Edoardo; Fumagalli, Uberto; Melfi, Franca; Milli, Carlo; Novellis, Pierluigi; Voulaz, Emanuele; Alloisio, Marco

    2016-01-01

    A workshop of experts from France, Germany, Italy, and the United States took place at Humanitas Research Hospital Milan, Italy, on February 10 and 11, 2016, to examine techniques for and applications of robotic surgery to thoracic oncology. The main topics of presentation and discussion were robotic surgery for lung resection; robot-assisted thymectomy; minimally invasive surgery for esophageal cancer; new developments in computer-assisted surgery and medical applications of robots; the challenge of costs; and future clinical research in robotic thoracic surgery. The following article summarizes the main contributions to the workshop. The workshop consensus was that, since video-assisted thoracoscopic surgery (VATS) is becoming the mainstream approach to resectable lung cancer in North America and Europe, robotic surgery for thoracic oncology is likely to be embraced by an increasing number of thoracic surgeons, since it has technical advantages over VATS, including intuitive movements, tremor filtration, more degrees of manipulative freedom, motion scaling, and high-definition stereoscopic vision. These advantages may make robotic surgery more accessible than VATS to trainees and experienced surgeons and also lead to expanded indications. However, the high costs of robotic surgery and absence of tactile feedback remain obstacles to widespread dissemination. A prospective multicentric randomized trial (NCT02804893) to compare robotic and VATS approaches to stages I and II lung cancer will start shortly.

  8. Effects of V4c-ICL Implantation on Myopic Patients’ Vision-Related Daily Activities

    Directory of Open Access Journals (Sweden)

    Taixiang Liu

    2016-01-01

    Full Text Available The new type of implantable Collamer lens with a central hole (V4c-ICL) is widely used to treat myopia. However, halos occur in some patients after surgery. The aim is to evaluate the effect of V4c-ICL implantation on vision-related daily activities. This retrospective study included 42 patients. Uncorrected visual acuity (UCVA), best corrected visual acuity (BCVA), intraocular pressure (IOP), endothelial cell density (ECD), and vault were recorded, and vision-related daily activities were evaluated at 3 months after the operation. The average spherical equivalent was -0.12±0.33 D at 3 months after the operation. UCVA equal to or better than preoperative BCVA occurred in 98% of eyes. The average BCVA at 3 months after the operation was -0.03±0.07 LogMAR, which was significantly better than the preoperative BCVA (0.08±0.10 LogMAR) (P=0.029). Apart from one patient (2.4%) who had difficulty reading computer screens, all patients had satisfactory or very satisfactory results. During the early postoperative period, halos occurred in 23 patients (54.8%). However, there were no significant differences in the scores of visual functions between patients with and without halos (P>0.05). Patients were very satisfied with their vision-related daily activities at 3 months after the operation. The central hole of the V4c-ICL does not affect patients' vision-related daily activities.

  9. Using a Curricular Vision to Define Entrustable Professional Activities for Medical Student Assessment.

    Science.gov (United States)

    Hauer, Karen E; Boscardin, Christy; Fulton, Tracy B; Lucey, Catherine; Oza, Sandra; Teherani, Arianne

    2015-09-01

    The new UCSF Bridges Curriculum aims to prepare students to succeed in today's health care system while simultaneously improving it. Curriculum redesign requires assessment strategies that ensure that graduates achieve competence in enduring and emerging skills for clinical practice. The goals were to design entrustable professional activities (EPAs) for assessment in a new curriculum and to gather evidence of content validity. At the University of California, San Francisco, School of Medicine, nineteen medical educators participated; 14 completed both rounds of a Delphi survey. The authors describe 5 steps for defining EPAs that encompass a curricular vision: refining the vision, defining draft EPAs, developing EPAs and assessment strategies, defining competencies and milestones, and mapping milestones to EPAs. A Q-sort activity and Delphi survey involving local medical educators created consensus and prioritization of milestones for each EPA. For 4 EPAs, most milestones had content validity indices (CVIs) of at least 78%. For 2 EPAs, 2 to 4 milestones did not achieve CVIs of 78%. We demonstrate a stepwise procedure for developing EPAs that capture essential physician work activities defined by a curricular vision. Structured procedures for soliciting faculty feedback and mapping milestones to EPAs provide content validity.

  10. Automatic micropart assembly of 3-Dimensional structure by vision based control

    International Nuclear Information System (INIS)

    Wang, Lidai; Kim, Seung Min

    2008-01-01

    We propose a vision control strategy to perform automatic microassembly tasks in three dimensions (3-D) and develop the relevant control software: specifically, using a 6 degree-of-freedom (DOF) robotic workstation to control a passive microgripper to automatically grasp a designated micropart from the chip, pivot the micropart, and then move the micropart to be vertically inserted into a designated slot on the chip. In the proposed control strategy, the whole microassembly task is divided into two subtasks, micro-grasping and micro-joining, performed in sequence. To guarantee the success of microassembly and manipulation accuracy, two different two-stage feedback motion strategies, pattern matching and an auto-focus method, are employed, with the use of the vision-based control system and the vision control software developed. Experiments conducted demonstrate the efficiency and validity of the proposed control strategy

  11. Automatic micropart assembly of 3-Dimensional structure by vision based control

    Energy Technology Data Exchange (ETDEWEB)

    Wang, Lidai [University of Toronto, Toronto (Canada); Kim, Seung Min [Korean Intellectual Property Office, Daejeon (Korea, Republic of)

    2008-12-15

    We propose a vision control strategy to perform automatic microassembly tasks in three dimensions (3-D) and develop the relevant control software: specifically, using a 6 degree-of-freedom (DOF) robotic workstation to control a passive microgripper to automatically grasp a designated micropart from the chip, pivot the micropart, and then move the micropart to be vertically inserted into a designated slot on the chip. In the proposed control strategy, the whole microassembly task is divided into two subtasks, micro-grasping and micro-joining, performed in sequence. To guarantee the success of microassembly and manipulation accuracy, two different two-stage feedback motion strategies, pattern matching and an auto-focus method, are employed, with the use of the vision-based control system and the vision control software developed. Experiments conducted demonstrate the efficiency and validity of the proposed control strategy
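
    The two visual cues named above can be sketched with standard OpenCV primitives, as below; this is an illustration under assumed interfaces, not the authors' software. Normalized cross-correlation template matching localizes the micropart in the image plane, while a Laplacian-variance focus measure can servo the vertical (focus) axis:

      import cv2

      def locate_part(frame, template):
          """Find the best match of the micropart template in the frame."""
          res = cv2.matchTemplate(frame, template, cv2.TM_CCOEFF_NORMED)
          _, score, _, top_left = cv2.minMaxLoc(res)
          return top_left, score             # pixel position, match confidence

      def focus_measure(frame):
          """Laplacian-variance sharpness score; higher means better focus."""
          return cv2.Laplacian(frame, cv2.CV_64F).var()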

  12. Actively Perceiving and Responsive Soft Robots Enabled by Self-Powered, Highly Extensible, and Highly Sensitive Triboelectric Proximity- and Pressure-Sensing Skins.

    Science.gov (United States)

    Lai, Ying-Chih; Deng, Jianan; Liu, Ruiyuan; Hsiao, Yung-Chi; Zhang, Steven L; Peng, Wenbo; Wu, Hsing-Mei; Wang, Xingfu; Wang, Zhong Lin

    2018-06-04

    Robots that can move, feel, and respond like organisms will bring revolutionary impact to today's technologies. Soft robots with organism-like adaptive bodies have shown great potential in a vast range of robot-human and robot-environment applications. Developing skin-like sensory devices allows them to naturally sense and interact with their environment, and ideally these sensing capabilities should be active, like those of real skin. However, complicated structures, incompatible moduli, poor stretchability and sensitivity, large driving voltages, and power dissipation hinder the applicability of conventional technologies. Here, various actively perceiving and responsive soft robots are enabled by self-powered active triboelectric robotic skins (tribo-skins) that simultaneously possess excellent stretchability and excellent sensitivity in the low-pressure regime. The tribo-skins can actively sense proximity, contact, and pressure from external stimuli by self-generating electricity. The driving energy comes from the natural triboelectrification effect, involving the cooperation of contact electrification and electrostatic induction. The integration of the tribo-skins with soft actuators enables soft robots to perform various active sensing and interactive tasks, including perceiving their own muscle motions, working states, the dampness of textiles, and even subtle human physiological signals. Moreover, the self-generated signals can drive optoelectronic devices for visual communication and be processed for diverse sophisticated uses. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  13. Development of a model of machine hand eye coordination and program specifications for a topological machine vision system

    Science.gov (United States)

    1972-01-01

    A unified approach to computer vision and manipulation is developed which is called choreographic vision. In the model, objects to be viewed by a projected robot in the Viking missions to Mars are seen as objects to be manipulated within choreographic contexts controlled by a multimoded remote, supervisory control system on Earth. A new theory of context relations is introduced as a basis for choreographic programming languages. A topological vision model is developed for recognizing objects by shape and contour. This model is integrated with a projected vision system consisting of a multiaperture image dissector TV camera and a ranging laser system. System program specifications integrate eye-hand coordination and topological vision functions and an aerospace multiprocessor implementation is described.

  14. Biomechanical effects of robot assisted walking on knee joint kinematics and muscle activation pattern.

    Science.gov (United States)

    Thangavel, Pavithra; Vidhya, S; Li, Junhua; Chew, Effie; Bezerianos, Anastasios; Yu, Haoyong

    2017-07-01

    Since manual rehabilitation therapy can be taxing for both the patient and the physiotherapist, a gait rehabilitation robot has been built to reduce the physical strain and increase the efficacy of rehabilitation therapy. The prototype of the gait rehabilitation robot is designed to provide assistance while walking for patients with an abnormal gait pattern, and it can also be used in rehabilitation therapy to restore an individual's normal gait pattern by aiding motor recovery. The robot uses gait-event-based synchronization, which enables the exoskeleton to provide synchronous assistance during walking that aims to reduce lower-limb muscle activation. This study emphasizes the biomechanical effects of assisted walking on the lower limb by analyzing the EMG signal and knee joint kinematics collected from the right leg during the various experimental conditions. The analysis of the measured data shows an improved knee joint trajectory and a reduction in muscle activity with assistance. The results of this study not only assess the functionality of the exoskeleton but also provide a profound understanding of the human-robot interaction by studying the effects of assistance on the lower limb.

  15. The Development of Radiation hardened tele-robot system - Development of artificial force reflection control for teleoperated mobile robots

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Ju Jang; Hong, Sun Gi; Kang, Young Hoon; Kim, Min Soeng [Korea Advanced Institute of Science and Technology, Taejon (Korea)

    1999-04-01

    One of the most important issues in teleoperation is to provide a sense of telepresence so that the task can be conducted more reliably. In particular, teleoperated mobile robots need some kind of backup system for when the operator is blind to the remote situation owing to a failure of the vision system. In the first year, the idea of artificial force reflection was researched to enhance the reliability of operation when the mobile robot travels on flat ground. In the second year, we extended the previous results to help the teleoperator even when the robot climbs stairs. Finally, we applied the developed control algorithms in real experiments. The artificial force reflection method has two modes: traveling on flat ground and climbing stairs. When traveling on flat ground, the force information is artificially generated from range data on the environment, while an impulse force is generated when climbing stairs. To verify the validity of our algorithm, we developed a simulator consisting of a joystick and a visual display system. Through experiments using this system, we confirmed the validity and effectiveness of the new idea of artificial force reflection in teleoperated mobile robots. 11 refs., 30 figs. (Author)
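
    A hedged sketch of the two force-reflection modes follows: on flat ground a repulsive joystick force is synthesized from range readings to nearby obstacles, and a short impulse is emitted when a stair edge is detected. The gains, threshold and 2D force interface are illustrative assumptions, not the reported system's parameters:

      import numpy as np

      K_REP = 0.5      # repulsive gain (assumed)
      D_MAX = 2.0      # obstacles beyond this range (m) exert no force
      IMPULSE = 5.0    # impulse magnitude when a stair edge is contacted

      def reflected_force(ranges, angles, on_stair_edge=False):
          """Synthesize a 2D force to feed back to the operator's joystick
          from a range scan of the environment."""
          f = np.zeros(2)
          for r, a in zip(ranges, angles):
              if r < D_MAX:
                  # push away from the obstacle, more strongly when closer
                  f -= K_REP * (1.0 / r - 1.0 / D_MAX) * np.array(
                      [np.cos(a), np.sin(a)])
          if on_stair_edge:
              f[0] -= IMPULSE                # brief jolt along the heading axis
          return f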

  16. Multi-Locomotion Robotic Systems New Concepts of Bio-inspired Robotics

    CERN Document Server

    Fukuda, Toshio; Sekiyama, Kosuke; Aoyama, Tadayoshi

    2012-01-01

    Nowadays, much attention is being paid to robots working in human living environments, such as in the fields of medicine, welfare, and entertainment. Various types of research are being actively conducted in fields such as artificial intelligence, cognitive engineering, sensor technology, interfaces and motion control. In the future, highly functional human-like robots are expected to be realized by integrating technologies from these various fields. The book presents new developments and advances in the field of bio-inspired robotics research, introducing the state of the art and the idea of a multi-locomotion robotic system that implements the diversity of animal motion. It covers theoretical and computational aspects of Passive Dynamic Autonomous Control (PDAC), robot motion control, multi-legged walking and climbing as well as brachiation, focusing on concrete robot systems, components and applications. In addition, gorilla-type robot systems are described as...

  17. Mobile HTS-SQUID NDE system with robot arm and active shielding using fluxgate

    Energy Technology Data Exchange (ETDEWEB)

    Hatsukade, Y. [Department of Ecological Engineering, Toyohashi University of Technology, 1-1 Hibarigaoka, Tempaku-cho, Toyohashi, Aichi 441-8580 (Japan)], E-mail: hatukade@eco.tut.ac.jp; Yotsugi, K.; Tanaka, S. [Department of Ecological Engineering, Toyohashi University of Technology, 1-1 Hibarigaoka, Tempaku-cho, Toyohashi, Aichi 441-8580 (Japan)

    2008-09-15

    A robot-arm-based mobile HTS-SQUID NDE system was developed for the inspection of advanced structures such as hydrogen fuel cell tanks. In order to realize stable operation of an HTS-SQUID exposed to the Earth's field and the robot arm's noise, without flux trapping, flux jumping or unlocking during motion, a new active magnetic shielding (AMS) technique using a fluxgate was introduced. The highly sensitive fluxgate, which could measure magnetic fields of up to several tens of µT, was mounted near an HTS-SQUID gradiometer on the robot arm to measure the ambient noise and feed its output back to a compensation coil, which surrounded both the SQUID and the fluxgate to cancel the ambient noise around them. The AMS technique successfully enabled the HTS-SQUID gradiometer to be moved at 10 mm/s by the robot arm in an unshielded environment without flux trapping, jumping or unlocking. Detection of hidden slots in multi-layer composite-metal structures imitating the fuel cell tank was demonstrated.
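
    The shielding loop can be sketched as a simple feedback controller, as below; the proportional-integral form, gains and coil constant are illustrative assumptions rather than the parameters of the system described:

      class ActiveShield:
          """Feed the fluxgate reading back to a compensation coil so the
          net field around the SQUID stays near zero (PI control)."""

          def __init__(self, coil_gain_uT_per_A=50.0, kp=0.8, ki=0.1, dt=0.001):
              self.coil_gain = coil_gain_uT_per_A   # field per ampere, assumed
              self.kp, self.ki, self.dt = kp, ki, dt
              self.integral = 0.0

          def step(self, fluxgate_uT):
              """Return the coil current (A) that opposes the measured field."""
              self.integral += fluxgate_uT * self.dt
              demand_uT = -(self.kp * fluxgate_uT + self.ki * self.integral)
              return demand_uT / self.coil_gain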

  18. Environmental restoration and waste management: Robotics technology development program: Robotics 5-year program plan

    International Nuclear Information System (INIS)

    1991-01-01

    This plan covers robotics Research, Development, Demonstration, Testing and Evaluation activities in the Program for the next five years. These activities range from bench-scale R&D to full-scale hot demonstrations at DOE sites. This plan outlines applications of existing technology to near-term needs, the development and application of enhanced technology for longer-term needs, and initiation of advanced technology development to meet those needs beyond the five-year plan. The objective of the Robotics Technology Development Program (RTDP) is to develop and apply robotics technologies that will enable Environmental Restoration and Waste Management (ER&WM) operations at DOE sites to be safer, faster and cheaper. Five priority DOE sites were visited in March 1990 to identify needs for robotics technology in ER&WM operations. This 5-Year Program Plan for the RTDP details annual plans for robotics technology development based on the identified needs. In July 1990 a forum was held announcing the robotics program. Over 60 organizations (industrial, university, and federal laboratory) made presentations on their robotics capabilities. To stimulate early interactions with the ER&WM activities at DOE sites, as well as with the robotics community, the RTDP sponsored four technology demonstrations related to ER&WM needs. These demonstrations integrated commercial technology with robotics technology developed by DOE in support of areas such as nuclear reactor maintenance and the civilian reactor waste program. 2 figs

  19. Robotics in medicine

    Science.gov (United States)

    Kuznetsov, D. N.; Syryamkin, V. I.

    2015-11-01

    Modern technologies play a very important role in our lives. It is hard to imagine how people could get along without personal computers, and companies - without powerful computer centers. Nowadays, many devices make modern medicine more effective. Medicine is developing constantly, so the introduction of robots in this sector is a very promising activity. Advances in technology have influenced medicine greatly, and robotic surgery is now actively developing worldwide. Scientists have been carrying out research and practical attempts to create robotic surgeons for more than 20 years, since the mid-1980s. Robotic assistants play an important role in modern medicine. This industry is still at an early stage of development; despite this, some developments already have worldwide application: they function successfully and bring invaluable help to employees of medical institutions. Today, doctors can perform operations that seemed impossible a few years ago. Such progress in medicine is due to many factors. First, modern operating rooms are equipped with up-to-date equipment, allowing doctors to perform operations more accurately and with less risk to the patient. Second, technology has made it possible to improve the quality of doctors' training. Various types of robots now exist: assistants, military, space, household and, of course, medical robots. A detailed analysis of the existing types of robots and their applications follows. The purpose of the article is to illustrate the most popular types of robots used in medicine.

  20. Stereoscopic Machine-Vision System Using Projected Circles

    Science.gov (United States)

    Mackey, Jeffrey R.

    2010-01-01

    A machine-vision system capable of detecting obstacles large enough to damage or trap a robotic vehicle is undergoing development. The system includes (1) a pattern generator that projects concentric circles of laser light forward onto the terrain, (2) a stereoscopic pair of cameras that are aimed forward to acquire images of the circles, (3) a frame grabber and digitizer for acquiring image data from the cameras, and (4) a single-board computer that processes the data. The system is being developed as a prototype of machine-vision systems to enable robotic vehicles ("rovers") on remote planets to avoid craters, large rocks, and other terrain features that could capture or damage the vehicles. Potential terrestrial applications of systems like this one could include terrain mapping, collision avoidance, navigation of robotic vehicles, mining, and robotic rescue. This system is based partly on the same principles as those of a prior stereoscopic machine-vision system in which the cameras acquire images of a single stripe of laser light that is swept forward across the terrain. However, this system is designed to afford improvements over some of the undesirable features of the prior system, including the need for a pan-and-tilt mechanism to aim the laser to generate the swept stripe, ambiguities in interpretation of the single-stripe image, the time needed to sweep the stripe across the terrain and process the data from many images acquired during that time, and difficulty of calibration because of the narrowness of the stripe. In this system, the pattern generator does not contain any moving parts and need not be mounted on a pan-and-tilt mechanism: the pattern of concentric circles is projected steadily in the forward direction. The system calibrates itself by use of data acquired during projection of the concentric-circle pattern onto a known target representing flat ground. The calibration-target image data are stored in the computer memory for use as a
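
    The stereo geometry underlying such a system can be sketched as follows, assuming a rectified camera pair: a laser-circle point seen at horizontal pixel positions xl and xr in the left and right images yields depth through the usual disparity relation Z = f*B/(xl - xr). The focal length and baseline below are sample values, not those of the prototype:

      def depth_from_disparity(xl, xr, focal_px=800.0, baseline_m=0.12):
          """Depth of a laser-circle point seen at horizontal pixel
          coordinates xl (left image) and xr (right image)."""
          disparity = xl - xr
          if disparity <= 0:
              raise ValueError("point must have positive disparity")
          return focal_px * baseline_m / disparity

      z = depth_from_disparity(412.0, 396.0)   # 6.0 m for these sample pixels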